Welcome to Planet OSGeo

July 13, 2018

It’s been three months since CARTO.js v4 reached the stable release state. Since then, we’ve been receiving very good feedback and researching what our customers are using the library for. Today, it’s my honor to introduce CARTO.js v4.1!

Focus on easier development

Our goal with the next milestone of CARTO.js is to help developers with the most common struggles we’ve seen so far. We’re introducing:

  • Buffer map changes: we take care of map changes so you don’t run into limit bottlenecks.
  • Source filters: an easier way to filter sources of data so you don’t have to maintain a heavy structure for filtering.
  • Histogram range: now you can request a specific range from a histogram to focus on the data you find most interesting.

Buffering map changes

With CARTO.js v4, we introduced a shift in the way applications are created with our library. Instead of an out-of-the-box solution, we now provide simpler building blocks that you can structure on top of your applications. That change is super powerful. You are no longer restricted by what our solution provides. You can leverage your knowledge of Leaflet or Google Maps and use CARTO.js as another piece to answer your needs.

But with great power comes great responsibility. Because CARTO.js map changes are programmatic, we can now trigger changes in the map at the speed of light. We've seen applications making changes to lots of dataviews and layers at the same time, hitting our Engine platform limits. The solution was to structure the application better with those limits in mind. Until now.

In CARTO.js v4.1 we're freeing developers from changing their code to avoid running into performance problems. Now we buffer your changes and apply them once they're all finished. You don't have to worry about that anymore. And the best part is that it's transparent. We do it under the hood, so no changes in your code are needed to take advantage of it.
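Conceptually, the buffering works like a per-tick batch: changes made in the same run of synchronous code are collected and flushed as a single update. Here is an illustrative sketch of the idea (our own names and structure, not CARTO's actual internals):

```javascript
// Illustrative sketch only — NOT CARTO's real implementation.
// Changes made synchronously are merged and applied once, in a microtask.
class ChangeBuffer {
  constructor(apply) {
    this.apply = apply;    // callback receiving the merged changes
    this.pending = null;
  }
  set(key, value) {
    if (!this.pending) {
      this.pending = {};
      // Flush once the current run of synchronous changes is done.
      queueMicrotask(() => {
        const changes = this.pending;
        this.pending = null;
        this.apply(changes);
      });
    }
    this.pending[key] = value;
  }
}

// Three rapid-fire changes result in a single applied update.
const applied = [];
const buffer = new ChangeBuffer(changes => applied.push(changes));
buffer.set('bins', 5);
buffer.set('column', 'price');
buffer.set('bins', 10);          // later value wins
queueMicrotask(() => console.log(applied));
// → [ { bins: 10, column: 'price' } ]
```

The point of the pattern is that no matter how many dataview or layer changes your code makes in one go, only one request needs to hit the backend.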

Source filtering

Another pain point we've seen during the last months is related to source filtering. The way you can filter your data (through a widget, through code…) forced you to maintain a structure to build SQL statements taking that filter into account. Not a big deal with one filter. A cumbersome task when combining lots of filters.

In CARTO.js v4.1 we’re introducing two source filters: the category filter and the range filter.

Category filter

One of the most common filtering actions in a map is to show information that belongs to certain categories. With the category filter, you can now tell a source to show only the data that has certain values in a column. Indeed, you can apply broad criteria: in, not in, equal, not equal, like, similar to… See the reference for the full list.

As an example, imagine you want to show apartment rental prices only in certain districts. With the new category filter, it's as easy as:

const source = new carto.source.Dataset('apartments');
const districtFilter = new carto.filter.Category('district_group', { in: ['Diamond District', 'Tudor City', 'Little Brazil'] });

If you change your mind later, just set other values and the source will react to the new filter criteria.

districtFilter.set('in', ['Kips Bay', 'Gramercy Park', 'Flatiron District']);

More info in Category filter Developers reference.
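To see what this saves you from, here is a rough sketch of the kind of SQL you would otherwise have to assemble by hand for each filter change (illustrative only — the function name and query shape are ours, not CARTO's actual query builder):

```javascript
// Hypothetical helper showing the SQL a category filter replaces.
// Single quotes are doubled, the standard SQL escaping for string literals.
function categoryToSQL(table, column, values) {
  const list = values.map(v => `'${v.replace(/'/g, "''")}'`).join(', ');
  return `SELECT * FROM ${table} WHERE ${column} IN (${list})`;
}

console.log(categoryToSQL('apartments', 'district_group', ['Kips Bay', 'Gramercy Park']));
// → SELECT * FROM apartments WHERE district_group IN ('Kips Bay', 'Gramercy Park')
```

With the filter object, changing the criteria is one `set` call instead of rebuilding and re-sending a query string like this.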

Category Filter

Range filter

The other common filtering action is to restrict the numeric information to certain values. It’s the operation made when selecting a range in a histogram, for instance.

As an example, imagine you want to show only locations with a rental price between $3K and $4K. With the new range filter, it's as easy as:

const source = new carto.source.Dataset('apartments');
const priceFilter = new carto.filter.Range('price', { between: { min: 3000, max: 4000 } });

Of course, you can alter the filter afterwards.

priceFilter.setFilters({ between: { min: 4000, max: 5000 } });

More info in Range filter Developers reference.

Range Filter

Combining filters

The real power comes when using several filters at the same time. As you can see in the examples documentation you can apply a filter as complex as you want and not worry about a single line of SQL code.

For instance, you could combine the two filters above (district and price) with another one that selects only those apartments that rent out the entire home or that have recent reviews.

const filtersCombination = new carto.filter.AND([
  districtFilter, priceFilter,
  new carto.filter.OR([ entireHomeFilter, reviewsInLastYearFilter ])
]);

We encourage you to take a look at the developers portal to learn about this great enhancement, and to check out the examples.

Combined Filters

Custom histogram range

In CARTO.js v4, we added dataviews, objects able to extract data from a dataset in predefined ways. One of them is the histogram dataview, used to represent the distribution of numerical data.

Until CARTO.js v4.1, the histogram operated on the whole range of the selected column. That is, if you asked for the histogram of the price column, the histogram showed the data from the minimum value in the whole dataset to the maximum one. Although this is very convenient and the most likely way to use it, there are cases where you want to focus on a particular range of the histogram. The typical example is a column where most of the data falls into one bin and the rest of the bins are empty because of outliers. In this case, the histogram doesn't give you the right insights, and you probably want to ask for the data in the most populated bin.

In CARTO.js v4.1 the histogram dataview now provides two extra parameters: start and end.

const histogramDataview = new carto.dataview.Histogram(source, 'price', {
  bins: 5,
  start: 40,
  end: 60
});
This dataview will return a histogram whose range goes from $40 to $60, instead of the whole price range.
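Conceptually, `start` and `end` restrict which values are binned at all; outliers outside the requested range simply drop out. A minimal self-contained sketch of that behavior (our own illustrative function, not CARTO's implementation):

```javascript
// Illustrative sketch: bin only the values inside [start, end),
// so outliers no longer squash everything into one bin.
function histogram(values, { bins, start, end }) {
  const width = (end - start) / bins;
  const counts = new Array(bins).fill(0);
  for (const v of values) {
    if (v < start || v >= end) continue;   // outliers outside the range are dropped
    counts[Math.min(bins - 1, Math.floor((v - start) / width))]++;
  }
  return counts;
}

const prices = [41, 44, 45, 52, 58, 75, 120];  // 75 and 120 are outliers
console.log(histogram(prices, { bins: 5, start: 40, end: 60 }));
// → [ 1, 2, 0, 1, 1 ]
```

Without the range restriction, the two outliers would force the bins to span 41–120 and most of the interesting detail between $40 and $60 would collapse into a single bucket.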

We can’t wait to see your incredible location intelligence apps using these new features!!

Happy mapping!

July 13, 2018 10:00 AM

Dear readers,

I would like to invite you to take part in the GeoServer Online Course that I will be teaching through GEOCURSOS. The goal of the course is for you to learn how to publish, share and edit geographic data on the internet with GeoServer.

The course will cover topics such as: data configuration, style creation with SLD, OGC standards, the administrative (web) interface, cartographic visualization with OpenLayers, the REST API, security, among others.

The course will take place between August 21 and 30 (Tuesdays, Wednesdays and Thursdays) from 8:00 PM to 10:00 PM (Brasília time).

I would be grateful to anyone who can share this with their contacts. If you would like more information about the course, you can find it on the course website (http://www.geocursos.com.br/geoserver), Twitter (http://twitter.com/geo_cursos) and Facebook (http://www.facebook.com/geocursosbr).

by Fernando Quadro at July 13, 2018 06:57 AM

July 11, 2018

July 10, 2018

I haven't been quiet on the MapGuide front. I've just been deeply entrenched in my lab conducting an important experiment whose results are finally coming to fruition, and I'm back here to make this exciting announcement.

That experiment was: Can we generate bindings for the MapGuide API using a vanilla version of SWIG? Yes we can!


This is an important question that we needed an answer for and we wanted that answer to be "Yes". 

We currently provide bindings for the MapGuide API in:
  • Java
  • PHP 5.x
  • .net (full framework)
However, we currently use an ancient and heavily modified version of SWIG, whose unclear audit trail of modifications and changes means that supporting newer versions of PHP (i.e. PHP 7.x) or variants of .net like .net Core is nigh-impossible, which puts us in a bit of a pickle because:
  • PHP 5.6 (our current bundled version of PHP) reaches end-of-life in December 2018. Bundling and targeting an EOL version of PHP is not a good look. To bundle the current version of PHP (7.x) we need to be able to generate a compatible PHP extension for it. We can't do that with our current internal copy of SWIG, as the Zend extension APIs have massive breaking changes from PHP 5 to PHP 7.
  • .net Core is where the action is in the .net space, and not having a presence there diminishes our story of being able to build MapGuide applications in .net, because as time goes on the definition of ".net" will be assumed to mean both the (Windows-only) full framework and the (cross-platform) .net Core.
  • We may want to add support for additional programming languages in the future. Again, we can't do that with our super-modified copy of SWIG because of the unclear history of changes made to the tool.
Given these factors, and the untenable situation we currently find ourselves in technology-wise, we needed to explore the possibility of generating the bindings using a vanilla (un-modified) version of SWIG. If we're going to go vanilla, we want the latest and greatest, which supports generating bindings for PHP7, and can support .net core with the right amount of SWIG typemap elbow-grease, so all the more motivation to get this working!

What we now have

2 months since the decision to embark on this journey, the mission to get functional MapGuide bindings using vanilla SWIG has been a success! We now have the following bindings for the MapGuide API:
  • A Java binding modeled on the non-crufty official Java binding. Requires Java 7 or higher.
  • A (currently windows-only) PHP extension for PHP 7.1
  • A netstandard2.0-compatible binding for .net that works on both .net Core and the full framework, and is also cross-platform on platforms where both .net Core and official MapGuide binary packages are available. For Linux, that means this .net binding works on Ubuntu 14.04 64-bit (where .net Core also has packages available). The nuget package for this binding is fully self-contained and includes the native dependencies (both 32- and 64-bit) needed for the .net library to work. For the .net full framework, it includes an MSBuild .targets file to ensure all the required native dependencies are copied to your application's output directory.
Where to get it

You can grab the bits from the releases page of the mapguide-api-bindings GitHub repo.

For .net, you will have to set up a local package source and drop the nuget package there in order to consume it in your library/application.

You will need MapGuide Open Source 3.1.1 installed as this is the only version of MapGuide I am generating these bindings for and testing against. Please observe the current supported platform matrix to see what combinations of technology stacks work with this new set of bindings. Please also observe the respective notes on the .net, PHP and Java bindings to observe what changes and adjustments you need to make in your MapGuide application should you want to try out these bindings.

Sample applications (to make sure this stuff works)

As proof that these bindings work, here's a sample asp.net core application using the new MapGuide .net binding. As a testament to what targeting .net Core gives us, you could bypass building the sample application from source and perhaps give the self-contained package a go. Thanks to the powerful publishing capabilities provided by the dotnet CLI, we can publish a self-contained .net core application with zero external dependencies. In the case of this sample application, you can download the zip, extract it to a windows or Ubuntu 14.04 64-bit machine with a standard MapGuide Open Source 3.1.1 install, run the MvcCoreSample executable within, go to http://localhost:5000 and your MapGuide application is up and running!

For Java and PHP, I'm still cooking up some sample applications in the same vein as the asp.net core one (i.e. porting across the MapGuide Developer's Guide samples), but for now the only verification that these bindings work is that our current binding test suite runs to completion (with some failures, but these are known failures that are also present in our current bindings).

Where to from here?

I intend for the mapguide-api-bindings project to serve as an incubation area where we can iron out any show-stopping problems before planning for the eventual inclusion into MapGuide proper and supplementing (and in the case of PHP, replacing) our current bindings because eventually, we have to. We cannot keep bundling and targeting PHP 5.x forever. We need to be able to target newer versions of these programming languages, and maybe in some cases new programming languages.

mapguide-api-bindings project repo
asp.net core sample application repo

by Jackie Ng (noreply@blogger.com) at July 10, 2018 04:34 PM


Dear Reader, we are glad to inform you that our R&D project SaveMyBike has been selected by the European Commission as one of the 21 finalists for the "REGIOSTARS AWARDS 2018" for Category 2 “Achieving sustainability through low carbon emissions”.

The main target of the project is to simplify soft mobility: to make it convenient, immediate, a certainty in everyday life. We want to make soft mobility a pleasant habit and not an additional difficulty. Our aim is to relieve cities from the oppression of cars, to rediscover the serenity of alternative transport modes, and to take on city traffic nodes and release them once and for all. Sustainable mobility challenges are epochal, but we can begin with small gestures, in everyday life, to reach the target.

The voting process is open until October 7th, therefore we kindly ask you to support us by accessing this page and by voting for the project. We hope you will help us win this competition!

In a future post we are going to discuss the technical infrastructure powering the project.

The GeoSolutions Team,


by simone giannecchini at July 10, 2018 02:06 PM

July 09, 2018

Last week, I traveled to Salzburg to attend the 30th AGIT conference and the co-located English-speaking GI_Forum. Like in previous years, there were a lot of presentations related to mobility and transportation research. Here are my personal highlights:

This year’s keynotes touched on a wide range of issues, from Sandeep Singhal (Google Cloud Storage) who – when I asked about the big table queries he showed – stated that they are not using a spatial index but are rather brute-forcing their way through massive data sets, to Laxmi Ramasubramanian @nycplanner (Hunter College City University of New York) who cautioned against tech arrogance and tendency to ignore expertise from other fields such as urban planning:

One issue that Laxmi particularly highlighted was the fact that many local communities are fighting excessive traffic caused by apps like Waze that suggest shortcuts through residential neighborhoods. Just because we can do something with (mobility) data, doesn’t necessarily mean that we should!

Not limited to mobility but very focused on open source, Jochen Albrecht (Hunter College City University of New York) invited the audience to join his quest for a spatial decision support system based on FOSS only at bit.ly/FiltersAndWeights and https://github.com/geojochen/fosssdss

The session Spatial Perspectives on Healthy Mobility featured multiple interesting contributions, particularly by Michelle P. Fillekes who presented a framework of mobility indicators to assess daily mobility of study participants. It considers both spatial and temporal aspects of movement, as well as the movement context:

Figure from Michelle Pasquale Fillekes, Eleftheria Giannouli, Wiebren Zijlstra, Robert Weibel. Towards a Framework for Assessing Daily Mobility using GPS Data. DOI: 10.1553/giscience2018_01_s177 (under cc-by-nd)

It was also good to see that topics we’ve been working on in the past (popularity routing in this case) continue to be relevant and have been picked up in the German-speaking part of the conference:

Of course, I also presented some new work of my own, specifically my research into PostGIS trajectory datatypes which I’ve partially covered in a previous post on this blog and which is now published in Graser, A. (2018) Evaluating Spatio-temporal Data Models for Trajectories in PostGIS Databases. GI_Forum ‒ Journal of Geographic Information Science, 1-2018, 16-33. DOI: 10.1553/giscience2018_01_s16.

My introduction to GeoMesa talk failed to turn up any fellow Austrian GeoMesa users. So I’ll keep on looking and spreading the word. The most common question – and certainly no easy one at that – is how to determine the point where it becomes worth it to advance from regular databases to big data systems. It’s not just about the size of the data but also about how it is intended to be used. And of course, if you are one of those db admin whizzes who manages a distributed PostGIS setup in their sleep, you might be able to push the boundaries pretty far. On the other hand, if you already have some experience with the Hadoop ecosystem, getting started with tools like GeoMesa shouldn’t be too huge a step either. But that’s a topic for another day!

Since AGIT&GI_Forum are quite a big event with over 1,000 participants, it was not limited to movement data topics. You can find the first installment of English papers in GI_Forum 2018, Volume 1. As I understand it, there will be a second volume with more papers later this year.

This post is part of a series. Read more about movement data in GIS.

by underdark at July 09, 2018 06:54 PM

July 07, 2018

If you are a QGIS user or developer in Colombia, you are cordially invited to join the QGIS Colombia User Group!

To make the creation of the group official, we will hold a General Assembly on Saturday, July 28, from 9am to 11am at the Universidad Nacional, Bogotá. More details below.



Who can take part in the group?

You can attend the assembly and join the QGIS Colombia User Group if you are interested in QGIS in any way. You don't need to be an expert :D


Where and when?

The assembly will take place on Saturday, July 28, 2018, from 9am to 11am at the Universidad Nacional, Bogotá campus (room to be confirmed). Please pre-register here (this will give us an idea of how many people will attend) and we will contact you directly once the room is confirmed.


What if I'm not in Bogotá?

If you are not in Bogotá but want to take part in the group, contact us! We can organize regional groups so you can meet other people from your region, and eventually we may attend an event in your region.


About the group

QGIS user groups are created to spread the word about the project and to learn about it together with other users. In Latin America there are already similar groups in Brazil, Peru and Mexico; worldwide there are around 25 groups. These are legally constituted associations (according to local requirements for this); in Colombia we are setting up a non-profit association. The QGIS Colombia User Group will be registered with the QGIS project, thereby obtaining the right to one vote in project decisions.

You can read the statutes of the QGIS Colombia User Group at http://downloads.tuxfamily.org/tuxgis/geodescargas/Estatutos_Grupo_Usuarios_QGIS_Colombia_v20180706.pdf There you will find more information about the goals and composition of the group, including the types of annual memberships that will be offered.


General Assembly agenda



July 07, 2018 02:50 AM

July 05, 2018

Usually when I have time to make bugfixes, I also make a release afterwards. Last time I did fixes, I got caught in the work crossfire and forgot to. I didn't notice, since on my devices I run the most advanced development version.

Yesterday Silvia asked me why the fix she asked for wasn't there yet! :-)
So I noticed. Well, I added a couple of more fixes to the closed list and here we go.

This is mostly a bugfix release, with a few minor usability enhancements that a very advanced geopaparazzi user recently started to request. I am not able to catch up with all his reports, but some of them are in.

Here we go:


  • better feedback about form name and positioning mode (GPS or Map Center) in actionbar
  • all exports now follow the same pattern: they are exported to the geopaparazzi/export folder and their name is made of the project name + type + export timestamp
  • project PDF export now has the possibility to export only a selected portion of notes
  • activate button for profiles is now on the main cardview
  • better proportion of forms in portrait mode
  • tile sources icon is now always visible in actionbar
  • dashboard enhancements: visualize number of notes and logs, open notes list on long tap
  • the save button in forms is now a floating action button to remind the user to save


  • the forced locale setting didn't include English, preventing users from going back to the default
  • fixes for profiles not being visible in landscape mode
  • fix for the last background map used not being reloaded on app restart
  • fix for crash on pushing the back button in form notes
  • fix for crash when issuing an empty SQL query in the advanced view
  • fix for issue with saving forms with empty fields
  • avoid data loss in form notes when the user exits with the back button; the user is now warned

 And of course, language updates.


by andrea antonello (noreply@blogger.com) at July 05, 2018 09:52 AM

July 03, 2018

Would you believe it? Only a month ago I was showing the Geopaparazzi Profiles Server developed by the guys at GeoAnalytic and now I am here again to write about the Geopaparazzi Survey Server (GSS).

What does Survey Server even mean? :-)

Well, while the Profile concept is a sophisticated way to handle survey data, background data, forms, spatialite databases and their synchronization, the GSS is something much smaller. But in our opinion it reflects the workflow of many, many surveyors and groups of surveyors.

The Geopaparazzi Survey Server (GSS) is a web application that allows geopaparazzi users to synchronize their project data with a central server.

Its companion is an Android app named Geopaparazzi Survey Server Sync (GSSS), available on Google Play. The app can connect to geopaparazzi projects and synchronize the contained data, using the unique device ID to upload the data to the server.

Any device that connects to the server with its ID will be accepted, and if the ID is not yet known, it is automatically inserted into the central db.

So this is not about spatialite datasets, but only about geopaparazzi project files, that can contain notes, complex form notes, GPS logs and images.

The server application

The server application is packaged as a docker image and can be installed in the blink of an eye (well, if your connection is fast). Once you install it, you get a nice and simple web application with a login,

a dashboard, and a mapview with the possibility to load/unload the data of your surveyors, zoom to it, and check information about your notes, GPS logs and images:

The surveyor's device that connects to the server is inserted in the surveyors list, if it is not registered already. There, a human-readable name and some basic contact information can be assigned:

The mobile app

The mobile app has been built for Android. FYI, we are also working on a desktop version. There are several reasons why we decided to go with an external app instead of adding this to geopaparazzi itself. The most important one is that several cloud synchronization applications are springing up around geopaparazzi these days. We will probably have to let these ideas mature, and at some point it will be possible to converge on the best methodology.

But right now I find it more respectful to have an external app that uses its own way to collect the data from the geopaparazzi projects and send them to the server instance.

So let's have a short look at the Geopaparazzi Survey Server Sync, GSSS :-)

Well, it is a simple app with the possibility to load geopaparazzi projects. In it you can see what data is available for upload, i.e. dirty data:

Since the app uses the device id as surveyor id, some simple configurations need to be done:

Once the project file is loaded, the device ID is verified and the server URL is inserted… well, just push the upload button! On successful upload the list of notes will be empty and your survey can go on using geopaparazzi.

Installation and training

A complete installation and quickstart guide is available here. Check it out to see all features available.

If you want to get a small training first hand from the developers, we will be giving a workshop about geopaparazzi and GSS at the following locations in the near future:

We would also be very happy to be involved with this stack in projects in developing countries, where data collection and centralization is necessary.
If you are interested, please reach out to us.

by andrea antonello (noreply@blogger.com) at July 03, 2018 03:19 PM

Two new plugins have been published for the management of duplicate values in a field of the attribute table of a vector layer.

The first plugin is an improvement of the already existing duplicate-selection tool, in which all the elements with duplicate values in a field of the table were selected. With the new functionality, the first one is now left unselected. With this, the user has both options available.

Keep in mind that in version 2.4 both tools live in two different menus (Selection and Table) and in two different toolbar buttons, but from 2.4.1 on they will be unified in the Selection menu and in a drop-down toolbar button.

The second plugin allows us to count distinct values in a field of the attribute table. This new tool is added as a geoprocess in the Toolbox; after selecting the table and the field, we obtain a new table (available in Project Manager -> Table) with the distinct values of the selected field and the number of times each one is repeated.

To install both plugins we must open the Add-ons Manager (Tools menu) and search for the term "duplic", where they will appear. We have to check them and, after they are installed, restart gvSIG.

In this video you can watch how these tools work:

by Mario at July 03, 2018 10:31 AM


The PostGIS development team is pleased to release PostGIS 2.5.0beta1.

This release is a work in progress. No more API changes will be made from this point on before the 2.5.0 release; the remaining time will be focused on bug fixes and documentation for the new functionality and the performance enhancements under the covers. Although this release will work with PostgreSQL 9.4 and above, to take full advantage of what PostGIS 2.5 will offer, you should be running PostgreSQL 11beta2+ and GEOS 3.7.0beta1, which were released recently.

Best served with PostgreSQL 11beta2 which was recently released.

View all closed tickets for 2.5.0.

After installing the binaries or after running pg_upgrade, make sure to do:

ALTER EXTENSION postgis UPDATE;

— if you use the other extensions packaged with postgis — make sure to upgrade those as well

ALTER EXTENSION postgis_topology UPDATE;
ALTER EXTENSION postgis_tiger_geocoder UPDATE;

If you use legacy.sql or legacy_minimal.sql, make sure to rerun the version packaged with these releases.


by Regina Obe at July 03, 2018 12:00 AM

July 02, 2018


Dear Reader,

in this post we want to talk about how we used GeoNode, GeoServer, and MapStore to build a platform for visualization and cost-benefit analysis to support decision makers with resilient development planning, public policy and investments in Afghanistan, in cooperation with the GFDRR group at the World Bank. While in this post we will concentrate a little more on the technicalities behind the platform, if you are more interested in the higher-level motivations and objectives you can read this blog post from the GFDRR group.

The goal of the project is to improve the decision-making and data-extraction capabilities for Afghanistan by expanding GeoNode with two additional modules:

  • The first, called Risk Data Extraction and Visualization, allows users to easily generate maps and extract tabular results for their area and indicator of interest at different return periods, with the ability to drill down to different administrative levels.
  • The second, called Cost/Benefit Analysis and Decision Tool, allows users to perform cost-benefit analyses for various hazards (primarily earthquakes and floods) using pre-calculated risk management options, thus allowing the user to discover the benefits of investing in risk reduction.
Both modules are integrated within the GeoNode platform and are based on pre-calculated tabular statistical and geospatial layers (both vector and raster data). The responsive graphical user interface is based on the MapStore framework.

Risk Data Extraction & Visualization

This module enables users to easily visualize and extract risk data for the area of interest at different administrative levels (district, province, national). Based on the selection of administrative area, indicator and return period, the user is presented with a map and a series of charts about the risk assessments.

Input tables are delivered for each indicator (population, GDP, roads, etc.), outlining damage/value for all return periods; an example is provided in the table below.

Example of losses per district for all return periods

The tool is able to ingest tabular data (i.e. Excel and CSV files), automatically process and convert the inputs into an internal model linked to administrative areas, and finally present the users with a friendly and modern interface to navigate the collected information. A summary of all the risk analyses available per hazard type at the country level is presented to users as a first landing page (see below).

Risk Management Tool options for Afghanistan. Various hazards are available.

By clicking on a hazard type (e.g. earthquake), it is possible to view the list of available risk analyses ordered by category of target (Agriculture, Airports, Buildings, Healthcare, …), as shown below.

Earthquake risk analyses available

Users can click on the analysis panel to open the brief abstract and then click on the 'Analysis Data' button to go to the detailed data page, as shown below.

[caption id="attachment_4262" align="aligncenter" width="800"]Earthquake Risk Analyses available -2-[/caption]

The detailed view of the Risk Analysis presents the title of the analysis along with its brief description, plus an interactive chart that lets the user change the values shown in the map interactively. On top of the map, a toolbar provides tools to navigate through the data, get information, and switch layers or dimensions. The “Dimensions Switcher” tool allows switching between the “Scenarios” and “Return Periods” views. Charts and dashboards are updated dynamically. Additional GeoNode resources and documents (PDF reports, images, …) can also be linked to the analysis data and presented to the user for further reading. Last but not least, data is available at multiple administrative levels (country, region, county) and the interactive map can be used to drill down geographically, as shown below.

[caption id="attachment_4263" align="aligncenter" width="800"]Hazard-Exposures for affected irrigated agriculture at Afghanistan level[/caption]

[caption id="attachment_4264" align="aligncenter" width="800"]Hazard-Exposures for affected irrigated agriculture in Kunduz region[/caption]

[caption id="attachment_4265" align="aligncenter" width="800"]Hazard-Exposures for affected irrigated agriculture in Emamsaheb county in Kunduz region - Return period of 10 years[/caption]

Cost-benefit Analysis & Decision Tool

This module allows users to access cost-benefit analyses of risk management options, precomputed by domain experts, in a user-friendly way by making use of maps and charts. These cost-benefit analyses have been conducted for a number of risk management options for floods, earthquakes, and landslides. For each of the options, precomputed reductions in disaster losses have been produced and ingested into the system (a specific ingestion system was developed for this purpose). Input tables are delivered for each indicator with precomputed data, as in the table below.

[caption id="attachment_4254" align="aligncenter" width="800"]Damages to schools under baseline conditions and for different risk reduction scenarios[/caption]

The cost-benefit analysis overview is similar to the previous one (see below).

[caption id="attachment_4242" align="aligncenter" width="800"]Cost-benefit analysis overview[/caption]

The cost-benefit detail dashboards change according to the type of analysis. Typically, cost-benefit analysis is conducted on site-specific use cases (roads, rivers, schools, sub-areas, etc.). Panels, charts and dashboards are automatically rendered by the tool, and comparison charts and tables are presented to the users for each “Risk Reduction Scenario”. Below is an example for flood hazard and its impact on the buildings of Kabul.

[caption id="attachment_4256" align="aligncenter" width="800"]Cost-benefit Analysis Tool Options for Floods - before loading analysis data[/caption]

[caption id="attachment_4257" align="aligncenter" width="800"]Impact and flood risk assessment for Kabul -1-[/caption]

[caption id="attachment_4258" align="aligncenter" width="800"]Impact and flood risk assessment for Kabul -2-[/caption]

The entire framework leverages GeoNode and GeoServer parametric SQL views, while the front-end is based on modern technologies and concepts like material views in a single-page application based on MapStore, with D3.js for rendering charts and dashboards. If you are interested in knowing more about the platform, please feel free to send us an email. The code is Open Source as usual, although these very specific modules will not be contributed back to the GeoNode community version.

Last but not least, if you are interested in learning how we can help you achieve your goals with open source products like GeoServer, MapStore, GeoNode and GeoNetwork through our Enterprise Support Services and GeoServer Deployment Warranty offerings, feel free to contact us!

The GeoSolutions Team,


by simone giannecchini at July 02, 2018 02:14 PM

June 28, 2018

The plugin for loading XML files from SIGPAC, Spain's Geographic Information System for Agricultural Parcels, is now available to install in gvSIG. To install this extension, open the gvSIG add-ons manager ("Tools" menu), select the "by URL" option, and connect to the default server. In the next window, type "SIGPAC" in the upper part; the extension will appear and must be checked. Once installed, restart gvSIG and the tool will be available under "Tools->SIGPAC".

If the XML layer contains lines or points, it will be loaded as a CSV file; if it contains polygons, it will be loaded directly as a SHP.

Once the file appears in the "Add layer" window, if it is in a reference system different from that of the View, you must open its properties and set the correct one. The layer will then be reprojected on the fly, so you should export it to a new SHP so that it is already in the View's reference system.

You can see how it works in the following video:

by Mario at June 28, 2018 03:00 PM

TL;DR: This post is about Joppy, a new service I’m working on that tries to eliminate the pain that currently exists in the communication between recruiters and tech professionals. Let me describe the current scenario in the recruitment world and, please, any feedback will be welcome.

Tech professionals: wherever I write "tech professionals" I mean any kind of role related to tech companies: software engineers, developers or programmers, managers, product owners, QA, designers, …


It all starts…

…having a beer and asking your friends: how many connection requests do you receive per week on LinkedIn? All three of them (two developers and a UI/UX designer) gave the same answer: enough to be annoying. Every tech professional wants to keep their CV updated on LinkedIn, which is a great service, but no one likes the myriad of emails asking for connections from recruiters who have awesome job offers from awesome companies.

Do you think the job of a recruiter is easy? Well, let me say you are completely wrong. It is not an easy job, and often a thankless one. If you think of a more or less important city with many tech companies, you can imagine the competition among companies to get tech professionals.

Currently there are two main things recruiters can do to reach candidates:

  1. Publish offers on some kind of board and wait for candidates to apply (we all have in mind the websites that crawl and show tons of job offers).
  2. Actively search for potential candidates. Recruiters need to use services like LinkedIn, where they can search for techies in a given geographical area who know about X, Y, Z skills, among many other options. Once filtered, they need to contact each of them, even though most are probably not interested in a change or in the position the recruiter is offering.

June 28, 2018 02:10 PM

June 27, 2018


Dear Reader,

in this post we want to talk about how we twisted and extended GeoNode, GeoServer, and MapStore to build a platform for emergency response and early warning for the DeCATastrophize European Project (acronym DECAT).

DECAT Project

Effective systems for early warning, mitigation of impacts, and emergency management can save lives and protect people, property and the environment in the event of natural and man-made disasters. The goal of the DECAT project is to design, implement and operate a geospatial decision support system to assess, prepare for and respond to multiple and/or simultaneous natural and man-made hazards and disasters in a synergistic way. The DECAT platform provides:

  • Workflows and functionalities for early warning and rapid notification for risk resilience at all levels
  • Methodologies for rapid assessment and mitigation of impacts and decision-making through rapid mapping
  • Ability to disseminate geospatial data and information about various types of multi-hazards
  • Dedicated capabilities aimed to support impact assessment as well as emergency management based on activities suitable for overall operational scenarios

The list of hazard types taken into account is extensive, and so are the scenarios in which the platform was tested; the hazards comprise the following:

  • Wildfire
  • Tsunami
  • Flood
  • Earthquake
  • Oil Spill

Tests were conducted in Italy, Spain, France, Greece and Cyprus to evaluate the capabilities of the developed platform.

The DECAT Platform

The DECAT platform is composed of a few modules designed to support the needs of the three main phases of emergency management, in order:

  • Early Warning, during which events are collected and analyzed in order to understand whether we are facing a real hazard or a false alarm
  • Impact Assessment, which is triggered once an event has been confirmed to be a hazard, to model, under the guidance of a domain expert, the impact of the hazard on various targets
  • Mitigation of Impact, which deals with the management of the emergency derived from the impact of the hazard, in order to mitigate it.
The platform also comprises a few additional modules providing horizontal functionalities that are needed by all three modules above (e.g. document management, geospatial data management, user management and the like).

[caption id="attachment_4178" align="aligncenter" width="800"]DECAT Platform Modules[/caption]

The above modules provide the operators with specific tools and customized layouts for each phase of emergency management.

The Early Warning module provides the operator with wizards to create, edit and update so-called events, which represent potential hazards occurring inside the operator’s area of competence; this is supported via geospatial tools to edit point features and record ancillary information useful to characterize events and assess the level of hazard, generally used to revise and debrief the emergency response. Each event can be searched, modified and updated, to evolve either into an occurrence threatening the community or back to ordinary conditions. In the first case, it will be promoted and notified as an early warning; otherwise it will be archived (see below). Once an event is promoted, the other phases are enabled and the entire workflow to assess the impact and manage the emergency comes to life.

[caption id="attachment_4203" align="aligncenter" width="800"]Early Warning User Interface[/caption]

As mentioned above, once an event is confirmed (i.e. an earthquake has struck somewhere), the impact assessment phase is triggered in order to evaluate and model the hazard as well as to assess its evolution and impact over time. The Impact Assessment module allows the impact assessor (i.e. a domain expert with a scientific background and experience regarding the effects and losses caused by a specific type of disaster) to evaluate the context and the environment where the event is taking place, providing, through modelling or pre-formulated scenario analyses, additional geospatial information, reports and documents useful to properly identify and locate specific needs for rescue and recovery interventions. This module has been designed to permit the creation and update of the so-called Common Operational Picture (COP) for the emergency managers, an evolving geospatial representation of the hazards integrated with a preliminary localization of rescue and recovery targets, by integrating relevant hazard model outputs in real time. The impact assessor can create a reference map (the COP) for emergency management, coupling hazard modelling with geospatial information relevant to emergency plan implementation (e.g. gathering areas, field hospital locations, command and control field units) as well as targets needing urgent intervention. The symbols used to visualize each feature can be adapted to the subject and changed according to its specific evolution. The COP can then be frozen at a specific instant and shared with the emergency managers, who are responsible for assigning rescue or recovery targets to work-force teams (see below). The impact assessor can, however, at any time perform a new assessment by updating information in the active COP to create an updated one that more closely represents the current situation.

[caption id="attachment_4202" align="aligncenter" width="800"]Impact Assessment User Interface at work for the Paphos flood[/caption]

The last set of functionalities is related to the coordination of the field workforce and Emergency Management; thanks to this module, the platform provides the emergency managers with capabilities to collaboratively (and concurrently) manage online, directly on the COP, geospatial features representing allocated teams, customized according to the type of workforce they belong to. Such geospatial features can be updated over time to capture the status of resources engaged in the rescue operations on the field (see figure below) as well as the changing conditions on the field. Moreover, updates of the COP generated by newer impact assessments to capture the evolution of the disaster can be published at any time by the impact assessor, and they will refresh the background information used by the emergency managers.

[caption id="attachment_4200" align="aligncenter" width="800"]Emergency Management User Interface at work for the Paphos flood[/caption]

Technologies and building blocks

The DECAT platform is implemented by leveraging a few well-known Open Source building blocks like GeoServer, GeoNode and MapStore, as shown below. GeoServer provides advanced geospatial data management and mapping capabilities according to the OGC Web Map Service (WMS), Web Coverage Service (WCS) and Web Feature Service (WFS) protocols, while GeoNode acts as a broker for the data, providing OGC Catalogue Service (CSW) capabilities and acting as the catalog for data and metadata discovery. MapStore is used as the mapping and visualization engine and provides geospatial visualization functionality over the data ingested into the DSS by interacting with the OGC protocols.

[caption id="attachment_4186" align="aligncenter" width="800"]DECAT Platform Components[/caption]

Authentication is provided through support for the OAuth 2.0 protocol, with GeoNode playing the provider role (i.e. being responsible for the management of users’ credentials and live sessions); hence it takes care of creating and expiring users’ sessions as well as of managing access permissions over the ingested geospatial data in coordination with GeoServer. In the default configuration, data is private and accessible only to the publisher and the users within their organization. Several additional modules have been implemented during the project to create the DECAT DSS by extending the GeoNode and MapStore frameworks. The user interface has been completely redesigned to follow a three-phase approach to the management of the emergency, and specific back-end modules have been developed in GeoNode to manage alerts and hazards, to perform and disseminate impact assessments associated with hazards, up to the annotations used by emergency managers to support resource allocation to targets in the field.

Conclusions and way forward

A final Table Top / Command Post exercise was conducted in the area of Paphos, Cyprus to assess the utility and usability of the DECAT Platform in disaster preparedness and response. The main objective of the exercise was to test the platform in realistic scenarios to evaluate its impact on existing decision-making processes during emergency situations. The tested scenarios included a wildfire threatening the forest of Paphos, nearby villages and other infrastructure, as well as heavy rainfall in the town of Paphos with a great danger of flooding. The DECAT platform was used to extract and provide valuable information to the decision makers during the exercise; all three phases of the platform were used and successfully presented (Early Warning, Impact Assessment and Emergency Management).

[caption id="attachment_4189" align="aligncenter" width="848"]Flood and fire hazards in the DECAT Platform for Cyprus[/caption]

The DECAT Platform has been an important step for GeoSolutions in strengthening its knowledge of the GeoNode framework, given the size and depth of the customizations; moreover, it allowed us to perform a first integration between MapStore and GeoNode, which is part of our overall strategy for the future of GeoNode. Some of the implemented functionalities and fixes have already been contributed to the respective projects (e.g. the map annotations developed to manage field resources during the emergency management phase are now part of MapStore, and so are a large number of GeoNode fixes and improvements).

For those who are interested, the DECAT Platform is still up and running here (mind you, this is a snapshot of the cloud instance used during the tests, so most data is not open to the public; if you are interested in having a look, we can schedule a demo). Last but not least, the source code is, obviously, Open Source, like everything we do here at GeoSolutions. This work was co-financed by DG-ECHO under the DECAT European project.


If you are interested in learning how we can help you achieve your goals with open source products like GeoServer, MapStore, GeoNode and GeoNetwork through our Enterprise Support Services and GeoServer Deployment Warranty offerings, feel free to contact us!

The GeoSolutions Team,


by simone giannecchini at June 27, 2018 05:37 PM

We are very excited to announce that QGIS Server has been successfully certified as a compliant WMS 1.3 server on the OGC certification platform, and moreover, it is now even considered a reference implementation!

This is the first step on our roadmap towards a fast, compliant and bulletproof web map server that is straightforward to publish from a classical QGIS project.

What does it mean?

Having a certified server means that QGIS Server successfully passes the automated and semi-automated tests that ensure we are 100% compliant with the standards. That means you can trust QGIS to work seamlessly with any WMS client.
Moreover, that certification is now backed by a continuous integration system that checks every night whether development versions still pass the tests.

Daily compliance reports are available on the new tests.qgis.org website.

What’s next?

Building the automated testing platform and getting officially certified was only the first step. We now are starting to certify the WFS services, thanks to the latest grant application program support.

We also want QGIS server development to be performance-driven. The following projects are particularly relevant:

  • MS-Perf produces benchmark reports with MapServer and GeoServer.
  • graffiti and PerfSuite have been designed as really light tools, easy to enrich with new datasets and performance tests, and easy to integrate into continuous integration systems. They compare QGIS-ltr, QGIS-rel and QGIS-dev nightlies for the same scenarios in detail and produce HTML reports. They can also graph performance history for the development version to track regressions or improvements.

Many thanks to the supporters and voting members that helped bootstrap all those testing platforms and offer them to the community.

If you want to support or lend a hand on the QGIS desktop client side, we think that area deserves some love too!

by underdark at June 27, 2018 05:34 PM

June 26, 2018

We are pleased to announce that Orfeo ToolBox 6.6.0 is out! As usual, ready-to-use binary packages are available for Windows, Linux and Mac OS X: OTB 6.6. You can also check out the source directly with git: git clone https://gitlab.orfeo-toolbox.org/orfeotoolbox/otb.git OTB -b release-6.6. We welcome your feedback and requests, and encourage you to join […]

by Yannick Tanguy at June 26, 2018 09:21 AM

June 25, 2018

Certification for the free course on gvSIG applied to the Environment is now available.

[Image: poster for the gvSIG applied to the Environment course]

This certification opens following the publication of the course's final modules, but it will remain open continuously, so any user can obtain it upon finishing the different modules.

To obtain the certification, all exercises in each module must be completed. The exercises validate the knowledge acquired during the course and will be evaluated by a tutor.

The platform and the open-enrolment option can be found at www.geoalternativa.com/gvsig-training. You must choose the course "gvSIG applied to the Environment", then select the "Register as a user" option and, finally, enrol in the course.

Apart from submitting and passing the exercises, the certification carries a minimal cost, needed to cover the expenses of evaluation and certification: €30. Payment can be made through PayPal at the following link: http://www.gvsig.com/es/curso-gvsig-aplicado-medio-ambiente

The certification will be issued by the gvSIG Association and consists of two certificates:

A course-completion certificate, which includes all the information on the training content acquired.

An official gvSIG User certificate, for having completed the 90 credits required for it, which grants the right to obtain the gvSIG User certificate by completing and passing the credits needed for its validation through the courses offered by the gvSIG Association.

The estimated course workload is 90 hours.

by Alonso Morilla at June 25, 2018 09:55 AM

We've got customers discovering PostGIS and GIS in general, or migrating away from the ArcGIS family of tools. When they ask, "How do I see my data?", we often point them at QGIS, an open source GIS desktop with rich PostGIS/PostgreSQL integration.

QGIS is great for people who need to live in their GIS environment, since it allows for easily layering on other data sources, web services and maps. The DBManager tool allows for more advanced querying (like writing spatial SQL queries that take advantage of the hundreds of functions PostGIS has to offer), the ability to import/export data, and creating PostgreSQL views.

QGIS has this thing called Projects, which allow for defining map layers and the symbology associated with them: for example, what colors you use for your roads, any extra symbols, and which field attributes you overlay (street name, etc.). Projects are usually saved in files with a .qgs or .qgz extension. If you have spent a lot of time styling these layers, chances are you want to share them with other people in your group. This can become challenging if your group is not connected via a network share.

Continue reading "New in QGIS 3.2 Save Project to PostgreSQL"

by Regina Obe (nospam@example.com) at June 25, 2018 05:47 AM

June 24, 2018

We are pleased to announce the release of QGIS 3.2 ‘Bonn’. The city of Bonn was the location of our 16th developer meeting.


This is the second release in the 3.x series. It comes with tons of new features (see our visual changelog).

Packages and installers for all major platforms are available from downloads.qgis.org.

We would like to thank the developers, documenters, testers and all the many folks out there who volunteer their time and effort (or fund people to do so). From the QGIS community we hope you enjoy this release! If you wish to donate time, money or otherwise get involved in making QGIS more awesome, please wander along to qgis.org and lend a hand!

QGIS is supported by donors and sponsors. A current list of donors who have made financial contributions large and small to the project can be seen on our donors list. If you would like to become an official project sponsor, please visit our sponsorship page for details. Sponsoring QGIS helps us to fund our six-monthly developer meetings, maintain project infrastructure and fund bug fixing efforts. A complete list of current sponsors is provided below – our very great thank you to all of our sponsors!

QGIS is Free software and you are under no obligation to pay anything to use it – in fact we want to encourage people far and wide to use it regardless of your financial or social status – we believe empowering people with spatial decision-making tools will result in a better society for all of humanity.





by underdark at June 24, 2018 07:57 PM

June 22, 2018

We are extremely pleased to announce the winning proposals for our 2018 QGIS.ORG grant programme. Funding for the programme was sourced from you, our project donors and sponsors. Note: for more context surrounding our grant programme, please see:

The QGIS.ORG Grant Programme aims to support work from our community that would typically not be funded by client/contractor agreements, and that contributes to the broadest possible swathe of our community by providing cross-cutting, foundational improvements to the QGIS Project.

Voting to select the successful projects was carried out by our QGIS Voting Members. Each voting member was allowed to select up to 6 of the 14 submitted proposals by means of a ranked selection form. The full list of votes is available here (on the first sheet). The second sheet contains the calculations used to determine the winners (for full transparency). The table below summarizes the voting tallies for the proposals:


A couple of extra notes about the voting process:

  • The PSC has an ongoing programme to fund documentation, so it elected to fund the QGIS Training Manual update even though this increases the total funded amount beyond the initial budget.
  • Although the budget for the grant programme was €25,000, the total amount for the winning proposals is €35,500. This increase is possible thanks to the generous support by our donors and sponsors this year.
  • Voting was carried out based on the technical merits of the proposals and the competency of the applicants to execute on these proposals.
  • No restrictions were in place in terms of how many proposals could be submitted per person / organization, or how many proposals could be awarded to each proposing person / organization.
  • Voting was ‘blind’ (voters could not see the existing votes that had been placed).

Of the 45 voting members, 29 registered their votes: 17 community representatives and 12 user group representatives.

On behalf of the QGIS.ORG project, I would like to thank everyone who submitted proposals for this call!

A number of interesting and useful proposals didn’t make it because of our limited budget; we encourage organizations to pick one of their choice and sponsor it.

by underdark at June 22, 2018 07:03 PM

The idea of organizing this meeting had been brewing for years, but there was never enough time to make it happen. The main problem was defining the target group of participants and the formula. We know, of course, how broad the community of QGIS users is, but that knowledge did not make the task any easier. In the end, after running several surveys on the Polish QGIS forum, we decided to serve a bit of everything. All that was needed were speakers, a venue and a date. We settled on 19 June 2018…

by robert at June 22, 2018 06:07 PM

June 21, 2018

One of the joys of geospatial processing is the variety of tools in the tool box, and the ways that putting them together can yield surprising results. I have been in the guts of PostGIS for so long that I tend to think in terms of primitives: either there’s a function that does what you want or there isn’t. I’m too quick to ignore the power of combining the parts that we already have.

A community member on the users list asked (paraphrased): “is there a way to split a polygon into sub-polygons of more-or-less equal areas?”

I didn’t see the question, which is lucky, because I would have said: “No, you’re SOL, we don’t have a good way to solve that problem.” (An exact algorithm showed up in the Twitter thread about this solution, and maybe I should implement that.)

PostGIS developer Darafei Praliaskouski did answer, and provided a working solution that is absolutely brilliant in combining the parts of the PostGIS toolkit to solve a pretty tricky problem. He said:

The way I see it, for any kind of polygon:

  • Convert a polygon to a set of points proportional to the area by ST_GeneratePoints (the more points, the more beautiful it will be, guess 1000 is ok);
  • Decide how many parts you’d like to split into (ST_Area(geom)/max_area); let it be K;
  • Take KMeans of the point cloud with K clusters;
  • For each cluster, take a ST_Centroid(ST_Collect(point));
  • Feed these centroids into ST_VoronoiPolygons, that will get you a mask for each part of polygon;
  • ST_Intersection of original polygon and each cell of Voronoi polygons will get you a good split of your polygon into K parts.
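
Steps 3 and 4 of the recipe (cluster the point cloud, then collapse each cluster to its centroid) are what ST_ClusterKMeans and ST_Centroid(ST_Collect(point)) do inside the database. As a rough illustration of the same idea, here is a toy, pure-Python k-means over a uniform point cloud; everything in it is illustrative, not PostGIS code:

```python
import random

def kmeans(points, k, iters=50, seed=42):
    """Toy k-means: return k cluster centroids for a list of (x, y) points."""
    rnd = random.Random(seed)
    centroids = rnd.sample(points, k)
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid's cluster
        clusters = [[] for _ in range(k)]
        for x, y in points:
            nearest = min(range(k),
                          key=lambda i: (x - centroids[i][0]) ** 2
                                        + (y - centroids[i][1]) ** 2)
            clusters[nearest].append((x, y))
        # update step: move each centroid to its cluster's mean,
        # the equivalent of ST_Centroid(ST_Collect(point))
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = (sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl))
    return centroids

# stand-in for ST_GeneratePoints: fill the unit square with random points
rnd = random.Random(1)
points = [(rnd.random(), rnd.random()) for _ in range(2000)]
centers = kmeans(points, 4)
```

Because uniformly random points each “occupy” roughly equal area, clusters of equal point counts correspond to pieces of roughly equal area.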

Let’s take it one step at a time to see how it works.

We’ll use Peru as the example polygon; it’s got a nice concavity to it, which makes it a little trickier than an average shape.

CREATE TABLE peru AS
  SELECT name, geom
  FROM countries
  WHERE name = 'Peru';

Original Polygon (Peru)

Now create a point field that fills the polygon. On average, each randomly placed point ends up “occupying” an equal area within the polygon.

CREATE TABLE peru_pts AS
  SELECT (ST_Dump(ST_GeneratePoints(geom, 2000))).geom AS geom
  FROM peru
  WHERE name = 'Peru';

2000 Random Points

Now, cluster the point field, setting the number of clusters to the number of pieces you want the polygon divided into. Visually, you can now see the divisions in the polygon! But, we still need to get actual lines to represent those divisions.

CREATE TABLE peru_pts_clustered AS
  SELECT geom, ST_ClusterKMeans(geom, 10) over () AS cluster
  FROM peru_pts;

10 Clusters

Using a point field and K-means clustering to get the split areas was inspired enough. The steps to get actual polygons are equally inspired.

First, calculate the centroid of each point cluster, which will be the center of mass for each cluster.

CREATE TABLE peru_centers AS
  SELECT cluster, ST_Centroid(ST_Collect(geom)) AS geom
  FROM peru_pts_clustered
  GROUP BY cluster;

Centroids of Clusters

Now, use a voronoi diagram to get actual dividing edges between the cluster centroids, which end up closely matching the places where the clusters divide!

CREATE TABLE peru_voronoi AS
  SELECT (ST_Dump(ST_VoronoiPolygons(ST_Collect(geom)))).geom AS geom
  FROM peru_centers;

Voronoi of Centroids

Finally, intersect the Voronoi areas with the original polygon to get final output polygons that incorporate both the outer edges of the polygon and the Voronoi dividing lines.

CREATE TABLE peru_divided AS
  SELECT ST_Intersection(a.geom, b.geom) AS geom
  FROM peru a
  CROSS JOIN peru_voronoi b;

Intersection with Original Polygon


Clustering a point field to get mostly equal areas, and then using the voronoi to extract actual dividing lines are wonderful insights into spatial processing. The final picture of all the components of the calculation is also beautiful.

All the Components Together

I’m not 100% sure, but it might be possible to use Darafei’s technique for even more interesting subdivisions, like “map of the USA subdivided into areas of equal GDP”, or “map of New York subdivided into areas of equal population” by generating the initial point field using an economic or demographic weighting.

June 21, 2018 08:00 PM

Los próximos días 18 y 19 de octubre se celebrarán las 5as Jornadas gvSIG Uruguay y 3as Jornadas de Tecnologías Libres de Información Geográfica y Datos Abiertos en Montevideo (Uruguay), bajo el lema ‘Información Geográfica en un ámbito abierto’.

Desde ahora está abierto el periodo de recepción de resúmenes, los cuales pueden enviarse a la dirección de correo jornadas.uruguay@gvsig.org siguiendo la plantilla facilitada en el apartado Comunicaciones de la web del evento, donde pueden consultarse también las normas para el envío. Los tipos de comunicación admitidos son ponencia y póster.

Durante las jornadas se entregarán los premios del Concurso de Uso de Tecnologías Libres de Información Geográfica 2018, organizado por GeoForAll Iberoamérica y OSGeo. para el cual los usuarios de cualquier Tecnología Libre de Información Geográfica pueden enviar ya sus trabajos.

Las jornadas serán gratuitas, y el periodo de inscripción se abrirá el 24 de septiembre.

Así mismo, cualquier entidad interesada en colaborar con las jornadas, puede hacerlo de varias formas, que incluyen desde una aportación económica hasta el aporte de recursos que de forma equivalente cubran las necesidades de apoyo detectado por el comité organizador. Toda la información relacionada con ello está disponible en la web de las jornadas.

by Mario at June 21, 2018 05:21 PM

June 20, 2018

Save the following as a text file ending in .xml, e.g. qgis_scales.xml.

These are the scales OpenStreetMap tiles are rendered in for 96 dpi, so the map will look sharp on most monitors.

The xml file can then be loaded into the project from:

Project > Project Properties… > General > Project scales

<qgsScales version="1.0">
    <scale value="1:554678932"/>
    <scale value="1:277339466"/>
    <scale value="1:138669733"/>
    <scale value="1:69334866"/>
    <scale value="1:34667433"/>
    <scale value="1:17333716"/>
    <scale value="1:8666858"/>
    <scale value="1:4333429"/>
    <scale value="1:2166714"/>
    <scale value="1:1083357"/>
    <scale value="1:541678"/>
    <scale value="1:270839"/>
    <scale value="1:135419"/>
    <scale value="1:67709"/>
    <scale value="1:33854"/>
    <scale value="1:16927"/>
    <scale value="1:8463"/>
    <scale value="1:4231"/>
    <scale value="1:2115"/>
</qgsScales>

1,000,000 (QGIS default):

1,083,357 (OSM wiki):

1,155,584 (From: 3liz):

Scales from:
OSM wiki
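If you would rather regenerate (or extend) the list than type it, the values follow a simple doubling rule: each zoom level halves the scale denominator, starting from 1:554,678,932 at zoom 0. A small sketch using integer floor halving, which reproduces the table above exactly:

```python
# Build the OSM 96-dpi project-scale list by successive halving of the
# zoom-0 scale denominator, then emit it in the qgsScales XML shape above.
top = 554678932  # zoom 0 denominator from the OSM wiki
scales = [top]
for _ in range(18):  # zooms 1..18
    scales.append(scales[-1] // 2)

lines = ['<qgsScales version="1.0">']
lines += ['    <scale value="1:%d"/>' % s for s in scales]
lines.append('</qgsScales>')
print("\n".join(lines))
```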

by Heikki Vesanto at June 20, 2018 09:00 AM

We are happy to announce the release of GeoServer 2.12.4. Downloads are available (zip, war, and exe) along with docs and extensions.

This is a maintenance release and a recommended update for production systems. This release is made in conjunction with GeoTools 18.4.

Highlights of this release are featured below; for more information please see the release notes (2.12.2 | 2.12-RC1 | 2.12-beta).


  • Add forceLabels=on in the style editor map legend to help users
  • Remove language warnings during Windows setup compilation and remove ‘work’ folder when uninstalling on Windows
  • Move MongoDB community module to supported status

Bug Fixes

  • Response time of WMS 1.3.0 significantly higher than WMS 1.x.x on systems whose axis order is north/east
  • Exception with NULL values with AggregateProcess
  • Style with Interpolate function causes NullPointerException on GetLegendGraphic
  • WFS with startIndex doesn’t return some results
  • Vector identifying feature info uses an undocumented system variable to set the default search area
  • Removing extensions with own configuration bits may cause GeoServer not to start up anymore
  • Windows Installation issue – upgrading GeoServer results in corrupt data_dir
  • Class java.util.Map$Entry is not whitelisted for XML parsing.
  • Add WMS GetMap and GetFeatureInfo tests for App-Schema MongoDB integration
  • CatalogRepository cannot find a store by name, if the store has just been added
  • WCS 1.0.0 generates wrong links in GetCapabilities
  • CatalogRepository should return a null on store not found, instead it throws a RuntimeException
  • Layer page will only show up to 25 bands, regardless of the actual set of bands available
  • Undocumented GDAL 2.3.0 CSV output geometry column name change breaks WPSOgrTest

Security Updates

Please update your production instances of GeoServer to receive the latest security updates and fixes.

If you encounter a security vulnerability in GeoServer, or any other open source software, please take care to report the issue in a responsible fashion.

About GeoServer 2.12 Series

Additional information on the 2.12 series:

by iant at June 20, 2018 08:44 AM

June 19, 2018

A new mailing list for gvSIG developers has been created, replacing the previous one. This list will continue to be the main contact point for English-speaking developers to ask questions about any issue on gvSIG development (Java, scripting…).

The previous mailing list was hosted on Joinup, but its support has ended. Therefore, we have decided to migrate the mailing list to OSGeo.

The new mailing list is available here:

https://lists.osgeo.org/mailman/listinfo/gvsig-desktop-devel

You can configure the mailing list to receive list traffic bunched in digests, or, if you don’t want to receive e-mails from the list, you can choose that option in the settings. You can then send your questions to the list and consult the replies in the list archives.


We also want to thank OSGeo for their offer to host the mailing list.

by Mario at June 19, 2018 11:05 AM


Dear Reader,

We apologize in advance, but this post was originally written for our Italian readers, to announce that we have finalized a new version of the DCAT-AP_IT Metadata Profile extension leveraging the CKAN Open Data product.

We are pleased to share the latest news about the upcoming version of the CKAN extension supporting the DCAT-AP_IT application profile. As many of you may already know, the profile for documenting public administration data (DCAT-AP_IT), made available by the Agency for Digital Italy (AgID), was created to harmonize the metadata used to describe public datasets, in order to improve their quality and encourage reuse of the information.

The first version, officially released in February 2017 and freely available under the AGPL v3.0 license, was jointly funded by the Province of Bolzano/South Tyrol and the Province of Trento. It still provides a solid, varied set of features, not only for the guided creation of datasets but also for integrating metadata from external sources (CSW, RDF, JSON-LD) in conformance with the application profile. Developed with careful attention to the stability of its functional characteristics, the ckanext-dcatapit extension, available in a dedicated repository under our GitHub account, was built to guarantee the highest possible compatibility with the other extensions commonly found on CKAN platforms. Multilingualism and interface localization have also been addressed, to ensure maximum usability for the organizations that need them, such as the Provinces of Bolzano/South Tyrol and Trento: the extension ships its own localization files, which help streamline customizations, while the ckanext-multilang extension provides multilingual support for the catalog contents (datasets, organizations, groups, resources and more).

Quite a few Italian open data portals already use this extension, certainly including:

  • The OpenData portal of the Province of Bolzano/South Tyrol
  • The OpenData portal of Trentino
  • The OpenDataNetwork federated infrastructure led by the Metropolitan City of Florence, which collects and distributes data from several Tuscan bodies including: the Metropolitan City of Florence, the Province of Prato, the Province of Pistoia and the Arno River Basin Authority.

And many others, including:

Thanks to the interest shown by the Agency for Digital Italy (AgID) in the potential and features of this extension, development of a new, enriched and improved version started at the end of 2017.

AgID's effort in funding this project aims at creating a single national collection hub for public datasets based on CKAN, providing in a single access point the main information on the open data exposed by local and central public administrations (see in particular the DAF project and its Dataportal component).

A varied set of features and characteristics has been introduced in the new version, not only to ensure more complete adherence to the application profile, but also to help users search for datasets, with new indexing features and grouping of datasets by region of origin. Support for multiple cardinality has been added for the properties that require it (such as dataset identifiers, themes and sub-themes, authors and others), and the catalog features have been enriched to identify the catalog and organization of origin of harvested datasets. In addition, new facets are available to filter datasets by source catalog, region and sub-theme.

[caption id="attachment_4152" align="aligncenter" width="800"]New search facets available in the search screen[/caption]

The dataset creation and editing form has been improved, offering more detailed inline guidance for the user, while the multilingual support provided by the ckanext-multilang extension has been extended to further properties, such as rightsHolder, publisher, creator and conformsTo, both during harvesting and in the dataset serialization.

[caption id="attachment_4153" align="aligncenter" width="800"]Dataset detail page[/caption]

The dataset creation web form has also been restructured into an editing flow based on macro input sections, to better guide the user in filling in the properties required by the profile.

[caption id="attachment_4160" align="aligncenter" width="800"]New dataset editing form[/caption]

The controlled vocabularies, first of all the license vocabulary, have also been updated, and the new ckanext-dcatapit extension provides an additional license field at the dataset resource level.

[caption id="attachment_4155" align="aligncenter" width="800"]Setting the license for a resource[/caption]

Support for the controlled vocabulary of sub-themes, previously missing, has also been introduced, together with the controlled vocabulary for territorial classification, which allows one or more Italian regions to be associated with each organization, thereby making dataset search easier.

[caption id="attachment_4161" align="aligncenter" width="800"]New dataset editing form, theme and sub-theme selection[/caption]

Work on the new version has also focused on consolidating and extending the dataset harvesting features, both to better organize and catalog the harvested datasets and to correct, as far as possible, any non-conformities in the source datasets (for example themes that do not conform to the application profile). Important harvesting-related improvements include the following:

  • Improved tag validation to handle non-conformant tags
  • Introduced mapping of non-conformant licenses onto the updated controlled vocabulary
  • Consolidation of the existing harvesting features

Last but not least, a Docker-based infrastructure has been made available (currently still being tested ahead of the upcoming release). This project, developed in parallel by the GeoSolutions team, is available on the Developers Italia GitHub and provides everything you need to quickly get, in a few steps, a complete CKAN installation equipped with the ckanext-dcatapit extension.

We invite everyone interested in contributing to the development of this extension, or in using it, to follow our blog or subscribe to our newsletter; we also recommend having a look at our professional support packages, GeoSolutions Enterprise Support Services, if you would like attentive, qualified support for putting this extension into production. Likewise, we invite you to check out the information on our other open source products, such as GeoServer, MapStore, GeoNode and GeoNetwork.

The GeoSolutions team,

by simone giannecchini at June 19, 2018 08:43 AM

June 18, 2018


The gvSIG Association, faithful to its commitment to giving visibility to free GIS projects, is collaborating with the 18th Spanish National TIG Congress, which will take place in Valencia from June 20 to 22.

We will have a booth where we will be happy to talk to any institution or company interested in implementing any of the gvSIG products; we will present the latest news about the gvSIG Suite, and we will have a few giveaways for visitors.

In addition, we will give a workshop on learning scripting in gvSIG Desktop, on Thursday the 21st from 16:00 to 17:30.

Here is the link to the full program:



by Alonso Morilla at June 18, 2018 05:09 PM

Thanks to the support given by the sponsors of the GDAL SRS barn effort, I have been able to kick in the first works in the past weeks. The work up to now has been concentrated on the PROJ front.

The first step was to set a foundation of C++ classes that implement the ISO 19111 / OGC Topic 2 "Referencing by coordinates" standard. I have actually anticipated the future adoption of the 18-005r1 2018 revision of the standard, which takes into account the latest advances in the modelling of coordinate reference systems (in particular dynamic reference frames, geoid-based vertical coordinate reference systems, etc.) and which will be reflected in the corresponding update of the WKT2:2018 standard and future updates of the EPSG dataset. If you are curious, you can skim through the resulting PROJ class hierarchy, which is really close to the abstract specification (a number of those classes currently lack a real implementation). With the agreement of the newly born PROJ project steering committee, I have opted for C++11, which offers a number of useful modern features to reduce boilerplate and concentrate on the interesting aspects of the work.

On the functional front, there is already support to read WKT1 (its GDAL variant for now) and WKT2 strings and build a subset of the aforementioned C++ objects, and conversely to dump those C++ objects as WKT1 and WKT2 strings. In particular you can import from WKT1 and export to WKT2, or the reverse (within the limitations of each format). So this abstract modelling (quite close to WKT2, of course) effectively serves its purpose of keeping us independent from the actual representation of the CRS. Just as I mentioned an early adoption of the OGC Topic 2 standard, I have similarly taken into account the future WKT2:2018 (OGC 18-010) standard, which aligns well with the abstract specification. In the API, the user can select whether to export according to the currently adopted version, WKT2:2015 (OGC 12-063r5), or to the future WKT2:2018 revision.

The result of those first steps can be followed in this pull request.

Another task that has been accomplished is the addition of the Google Test C++ testing framework to PROJ (thanks to Mateusz Loskot for his help with the CMake integration), so all those new features can be correctly tested locally and on all platforms supported by PROJ continuous integration setup.

There are many future steps to do just on the PROJ front:
  • implement remaining classes
  • code documentation
  • comprehensive support of projection methods (at least the set currently supported by GDAL)
  • import from and export to PROJ strings for CRS definitions and coordinate operations
  • use of the EPSG dataset

by Even Rouault (noreply@blogger.com) at June 18, 2018 11:40 AM

June 16, 2018

These past couple of days, I was working on implementing multi-layer transaction support for GeoPackage datasources (for QGIS 3.4). Multi-layer transactions are an advanced functionality of QGIS (you have to enable it in the project settings), initially implemented for PostgreSQL connections, where several layers can be edited together so as to have atomic modifications when editing them. Modifications are automatically sent to the database, using SQL savepoints to implement undo/redo operations, instead of being queued in memory and committed at once when the user stops editing the layer.

While debugging my work during development, I stumbled upon a heisenbug. From time to time, the two auxiliary files attached to a SQLite database opened in Write Ahead Logging (WAL) mode, suffixed -wal and -shm, would suddenly disappear while the file was still opened by QGIS. As those files are absolutely required, the consequence was that subsequent operations on the database failed: new readers (in the QGIS process) would be denied opening the file, and QGIS could not commit any new change to it. When the file was closed, it returned to a proper state (which shows the robustness of SQLite). After some time, I realized that my issue arose exactly when I observed the database being edited by QGIS with an external ogrinfo on it (another way to reproduce the issue was to open a second QGIS instance on the same file and close it). I had indeed used ogrinfo to check that the state of the database was consistent during the editing operations. Okay, so instead of a random bug, I now had a perfectly reproducible bug. Half of the way to solving it, right?

How come ogrinfo, which involves only read-only operations, could cause those -wal and -shm files to disappear? I had some recollection of code I had written in the OGR SQLite driver regarding this. When a dataset opened in read-only mode is closed by OGR, it checks if there is a -wal file still present (which can happen if a database was not cleanly closed, e.g. by a killed process), and if so, it re-opens it temporarily in update mode, does a dummy operation on it, and closes it. If the ogrinfo process is the only one holding a connection to the database, libsqlite will remove the -wal and -shm files automatically (OGR does not directly remove the files; it relies on libsqlite's wisdom to determine whether they can be removed or not). But wait: in my above situation, ogrinfo was not the exclusive process operating on the database; QGIS was still editing it... Could that be a bug in the venerable libsqlite??? (spoiler: no)

I tried to reproduce the situation by replacing QGIS with a plain sqlite console opening the file, and running ogrinfo on it. No accidental removal of the -wal and -shm files. Okay, so what is the difference between QGIS and the sqlite console (besides QGIS having about one million extra lines of code ;-))? Well, QGIS doesn't directly use libsqlite3 to open GeoPackage databases; it uses the OGR GPKG driver. So instead of opening with QGIS or a sqlite3 console, what if I opened the file with the OGR GPKG driver? Bingo: in that situation, I could also reproduce the issue. So something in OGR was the culprit. I will spare you the other details, but in the end it turned out that if OGR itself opened a .gpkg file using the standard file API while libsqlite3 had it open, chaos would result. This situation can easily happen: for example, when opening a dataset, OGR has to open the underlying file at least to read its header and figure out which driver should handle it. So the sequence of operations is normally:
1) the GDALOpenInfo class opens the file
2) the OGR GeoPackage driver realizes this file is for it, and uses the sqlite3_open() API to open it
3) the GDALOpenInfo class closes the file it opened in step 1 (libsqlite3 still manages its own file handle)

When modifying the above sequence so that 3) is executed before 2), the bug does not appear. At that point, I recollected that sqlite3 uses POSIX advisory locks to handle concurrent accesses, and that there were some issues with that POSIX API. Digging into the sqlite3.c source code revealed a very interesting 86-line-long comment about how POSIX advisory locks are broken by design. Their main brokenness is that they are advisory and not compulsory, of course, but as this is indicated in the name, one cannot really complain about that being a hidden feature. The most interesting finding was: """If you close a file descriptor that points to a file that has locks, all locks on that file that are owned by the current process are released.""" Bingo: that was just what OGR was doing.
My above workaround (making sure the file is closed before sqlite opens it and sets its locks) was OK for a single opening of a file in a process. But what if the user wants to open a second connection to the same file (which arises easily in the QGIS context)? The rather ugly solution I came up with is that the OGR GPKG driver warns the GDALOpenInfo class not to try to open a given file while it is still opened by the driver, and passes it the file header it would be supposed to find if it could open the file, so that the driver identification logic can still work. Those fixes are queued for GDAL 2.3.1, whose release candidate is planned for next Friday.

The lessons learned:

  • never ever open (actually, close) a SQLite database with the regular file API while libsqlite3 is operating on it (in the same process)
  • POSIX advisory locks are awful.
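The sqlite3.c quote above can be demonstrated in a few lines of POSIX-only Python. This is a hypothetical stand-alone sketch, not GDAL code: the parent takes an exclusive lock through one descriptor, then closes a second, unrelated descriptor for the same file; a child process then succeeds in grabbing the lock, proving the parent's lock silently vanished.

```python
import fcntl
import os
import subprocess
import sys
import tempfile

# Demonstration of the POSIX advisory-lock pitfall quoted from sqlite3.c:
# closing ANY file descriptor of a file drops ALL of this process's locks
# on that file, even locks taken through a different descriptor.
tmp = tempfile.NamedTemporaryFile(delete=False)
path = tmp.name
tmp.close()  # close before locking, for the very reason we are demonstrating

fd1 = os.open(path, os.O_RDWR)
fcntl.lockf(fd1, fcntl.LOCK_EX)   # exclusive lock held through fd1

fd2 = os.open(path, os.O_RDONLY)  # a second, unrelated open of the same file
os.close(fd2)                     # ...and this close releases fd1's lock too

# Probe from another process: a non-blocking exclusive lock should fail
# if the parent's lock were still in place.
probe = (
    "import fcntl, os\n"
    "fd = os.open(%r, os.O_RDWR)\n"
    "try:\n"
    "    fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)\n"
    "    print('acquired')\n"
    "except OSError:\n"
    "    print('blocked')\n"
) % path
result = subprocess.run([sys.executable, "-c", probe],
                        capture_output=True, text=True)
print(result.stdout.strip())  # → acquired (the lock was silently dropped)
os.close(fd1)
os.unlink(path)
```

Commenting out the `os.close(fd2)` line makes the probe print "blocked" instead, which is what one would naively expect.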

by Even Rouault (noreply@blogger.com) at June 16, 2018 01:47 PM

June 13, 2018

June 12, 2018

When working with Geographic Information Systems (GIS), some tasks are quite repetitive, bordering on tedious.

Extract, clip, run the analysis; extract, clip, run the analysis. How many times will you repeat this process? Why not script it to do everything in one go?

We have already shown how to do this kind of procedure with ArcGIS, and if you want more details on what Python is, have a look at our post on using Python and the Buffer tool in ArcGIS.

In this post we will use QGIS 2.18 and Python to extract the boundary of the municipality of Cocal do Sul (and of other municipalities), and then use that boundary to clip the soil map of the State of Santa Catarina.

In the links below you can download the shapefiles we will use in this post.

How to access Python in QGIS (PyQGIS)?

Before adding any shapefile, let's open the QGIS Python console and editor to write and run our commands.

Go to Plugins and click Python Console (1). Probably only the terminal will open, but you can enable the editor by clicking “Show Editor” (2), as in the figure below.

Starting the Python console in QGIS 2.18.

With all these windows open, let's start coding in the panel that appeared when we clicked “Show Editor”.

Loading a shapefile with PyQGIS

Now let's load some libraries and use the addVectorLayer() function to add our two shapefiles.

#!/usr/bin/env python
#coding: utf-8

# Load the Python libraries
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from qgis.core import *
from qgis.gui import *
import processing
import sys

# Add our shapefiles
lim_mun = "C:/Users/ferna/Desktop/PyQGIS/42MUE250GC_SIR.shp"
solos_sc = "C:/Users/ferna/Desktop/PyQGIS/Solos_Santa_Catarina_250000_2004.shp"
iface.addVectorLayer(lim_mun, "Limites Municipais", "ogr")
iface.addVectorLayer(solos_sc, "Solos de SC", "ogr")

The addVectorLayer function takes three parameters: (1) the path where the shapefile is saved; (2) the layer name; and (3) the data provider key, normally ogr.

Extracting the municipal boundary

Now, with the layers loaded, we will access the data of the “Limites Municipais” layer and, using the Extract by Attribute tool, split off the boundary of the municipality of Cocal do Sul into a new shapefile.

We will use the processing module and specify the algorithm that extracts data based on an attribute.

# Run the algorithm to extract the Cocal do Sul boundary
limCocal = "C:/Users/ferna/Desktop/PyQGIS/limCocal.shp"
processing.runalg('qgis:extractbyattribute', lim_mun, "NM_MUNICIP", 0, "COCAL DO SUL", limCocal)

With the Cocal do Sul municipal boundary in hand, we can run the next snippet to clip the soil types found within the municipality.

Clipping by a polygon

In this step we will use the Clip function to clip the soil classes of the municipality of Cocal do Sul.

But before running that code: our shapefiles are in different projection systems, which can cause errors when overlaying the maps. So, before running the clip, let's run qgis:reprojectlayer to convert the soil shapefile to SIRGAS 2000.

# Convert the coordinate systems
Solos_SIRGAS = "C:/Users/ferna/Desktop/PyQGIS/Solos_SIRGAS.shp"
processing.runalg("qgis:reprojectlayer", solos_sc, "epsg:4674", Solos_SIRGAS)

# Clip our study area
solosCocal = "C:/Users/ferna/Desktop/PyQGIS/solosCocal.shp"
processing.runalg('qgis:clip', Solos_SIRGAS, limCocal, solosCocal)
iface.addVectorLayer(solosCocal, "Solos de Cocal do Sul", "ogr")

And that is how we extract the soil types of the municipality of Cocal do Sul.

Note that the Reproject Layer and Clip functions need the following arguments:

  • Reproject Layer: (1) input file, (2) target coordinate system, (3) output file.
  • Clip: (1) file to be clipped, (2) clip boundary, (3) output file.

How to run this process for several cities?

To do that, let's build a variable with our target municipalities (i.e. Ermo, Forquilhinha, Nova Veneza and Garopaba) and then create a for loop to run all of them, as below.

# Extract multiple soil maps with a loop
cidades = ["ERMO", "FORQUILHINHA", "NOVA VENEZA", "GAROPABA"]
for n in cidades:
  limite = "C:/Users/ferna/Desktop/PyQGIS/lim" + n +".shp"
  processing.runalg('qgis:extractbyattribute', lim_mun, "NM_MUNICIP", 0, n, limite)
  solos = "C:/Users/ferna/Desktop/PyQGIS/solos" + n + ".shp"
  processing.runalg('qgis:clip', Solos_SIRGAS, limite, solos)
  iface.addVectorLayer(solos, "Solos de " + n , "ogr")

Note that after doing this you can replace the cidades variable with the municipalities you want, as long as each name exactly matches the municipality name in the municipal boundary shapefile.

Note: I am still researching how to extract municipalities whose names contain special characters (i.e. accents, cedillas); so far I have not managed to extract their soil shapefiles. From what I have found, it has to do with character encoding (UTF-8). If you know the answer, leave it in the comments; we will thank you and update the post.
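The diagnosis above (a character-encoding mismatch) is easy to see in plain Python, with no QGIS involved: the same accented municipality name produces different bytes under different encodings, so a byte-level attribute comparison between a UTF-8 string and, say, a Latin-1 encoded DBF will never match. (Python 3 shown here; in the post's Python 2, the u"" prefix marks the same distinction.)

```python
# The same accented name encoded two ways: the byte sequences differ,
# which is why matching "CRICIÚMA" against a shapefile attribute table
# stored in another encoding fails even though the text looks identical.
name = "CRICIÚMA"
utf8_bytes = name.encode("utf-8")
latin1_bytes = name.encode("latin-1")
print(utf8_bytes)    # b'CRICI\xc3\x9aMA'
print(latin1_bytes)  # b'CRICI\xdaMA'
print(utf8_bytes == latin1_bytes)  # False
```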

[Updated on 18/06/2018]

Fixing the encoding problem

After researching a bit on how to convert a shapefile to a different encoding, we managed to run the previous procedure for municipalities with accented names.

The new code has a section for this fix, where we load the shapefile again (the QgsVectorLayer() function) and then save it in a new format (the QgsVectorFileWriter.writeAsVectorFormat() function).

Also remember to put a “u” prefix in front of strings containing accents, as we did in the cidades variable.

See the full code below.

# -*- coding: utf-8 -*-

# Load the Python packages
import processing
import sys
import osgeo.ogr as ogr
import osgeo.osr as osr

# Load our shapefiles
lim_mun = "C:/Users/ferna/Desktop/PyQGIS/42MUE250GC_SIR.shp"
solos_sc = "C:/Users/ferna/Desktop/PyQGIS/Solos_Santa_Catarina_250000_2004.shp"

# Convert the coordinate systems
Solos_SIRGAS = "C:/Users/ferna/Desktop/PyQGIS/Solos_SIRGAS.shp"
processing.runalg("qgis:reprojectlayer", solos_sc, "epsg:4674", Solos_SIRGAS)

# Fix the encoding
Camada = QgsVectorLayer(lim_mun, None, 'ogr')

lim_UTF8 = "C:/Users/ferna/Desktop/PyQGIS/lim_UTF8.shp"
QgsVectorFileWriter.writeAsVectorFormat(Camada, lim_UTF8, "utf_8_encode", Camada.crs(), "ESRI Shapefile")

# Extract several items with accented names
cidades = [u"CRICIÚMA", u"MORRO DA FUMAÇA"]
for n in cidades:
  limite = "C:/Users/ferna/Desktop/PyQGIS/lim" + n +".shp"
  processing.runalg('qgis:extractbyattribute', lim_UTF8, "NM_MUNICIP", 0, n, limite)
  solos = "C:/Users/ferna/Desktop/PyQGIS/solos" + n + ".shp"
  processing.runalg('qgis:clip', Solos_SIRGAS, limite, solos)
  iface.addVectorLayer(solos, "Solos de " + n , "ogr")

If you have any questions, leave them in the comments and we will answer as soon as possible.

by Fernando BS at June 12, 2018 07:01 AM

June 11, 2018


Sharing data and information related to risk management is a pressing necessity. Natural events have an ever-increasing impact on global population and assets, mostly in areas with reduced capabilities to deal with emergencies. Disaster management, prevention, and planning activities require access to the most up-to-date and detailed information available for a geographic area. Often data resides in some remote corner of the web, hardly discoverable, or, in the worst case, is kept segregated in local storage, reducing or completely cancelling its value. Moreover, data often lacks fundamental information (metadata) regarding its contents and formats.

The GFDRR group of the World Bank tackled this within the second round of its "Challenge Fund" initiative. One of its goals was to design a common data model to store and share data about exposures, hazards and vulnerabilities, and a web platform to ingest, explore and download these data.

GeoSolutions was engaged for its design and development. The project was based on a lightweight user-centered design approach, interviewing stakeholders and collecting suggestions from domain experts. The result of this phase was a mockup that within a few weeks became the basis of the HEV-E platform.

[caption id="attachment_4108" align="aligncenter" width="800"]HEV-E mockup[/caption]

Although the platform is still in its initial phase, a first release went live a few weeks ago. New and improved functionalities will be added in the future. Thanks to a great set of tools, we were able to go online in a couple of months:

  • GeoNode: provides the metadata services and the layer/view management
  • GeoServer: OGC map services and layer styling
  • Django: a dedicated project was implemented to expose custom APIs and extend GeoNode's functionality
  • MapStore and React: the frontend is completely based on MapStore (our flagship frontend framework) plus new, specific React components

The final result is a Catalog that let users explore Exposures, Hazards and Vulnerabilities functions currently available (for the moment just a small sample dataset).

[caption id="attachment_4144" align="aligncenter" width="800"]HEV-E Homepage HEV-E Homepage[/caption]  

The original dataset are split into single layers, that can be searched and filtered through the HEV-E catalog frontend. From the catalog a preview of the geographical area covered by the layer, and a preview of the layer content themselves, is shown on the contextual map. A user can add a layer to the map, to keep it around while continuing navigating through the catalog. Being in sync with the area currently shown on the map, the catalog helps the user obtaining only the relevant layers for the context currently explored.

For each layer a Detail view provides insights on the specific layer's contents, according to the layer type and contents. As an example, for exposures a chart with the number of occurrences by construction material or type of occupancy is shown.

[caption id="attachment_4116" align="aligncenter" width="800"]HEVE catalog search HEV-E catalog search[/caption]

During the exploration the user is able to add each layer to a Download area, a functionality is similar to common e-store "shopping carts". From this area layers can be selected to submit a download orders, that will be managed by the HEV-E platform asynchronously. A notification or an email will be sent to the user to communicate when the requested files are ready for download. A URL to the Shapefiles, CSV or GeoPackage (the format depends on layer type and user preferences) will be provided for direct download.

[caption id="attachment_4120" align="aligncenter" width="800"]HEVE Download Area HEV-E Download Area[/caption]

The project has proved again the benefits of adopting mainstream software integration to compose advanced custom applications. GeoNode's standard functionalities cannot cover all the specific needs and often a tailored user interface with custom tools and functionalities are required, but it can be adopted as a "backend service" with a bespoke frontend and, in this case, a dedicated backend API.

We wish long life to HEV-E and we hope to continue our contribute to it and its social goals!

Last but not least, we would want to thank the GFDRR group at the World Bank which provided the funding for this work.

If you are interested in learning about how we can help you achieving your goals with open source products like GeoServerMapStoreGeoNode and GeoNetwork through our Enterprise Support Services and GeoServer Deployment Warranty offerings, feel free to contact us!

The GeoSolutions Team,


by Giovanni Allegri at June 11, 2018 10:12 AM

June 06, 2018

If you’re are following me on Twitter, you’ve certainly already read that I’m working on PyQGIS 101 a tutorial to help GIS users to get started with Python programming for QGIS.

I’ve often been asked to recommend Python tutorials for beginners and I’ve been surprised how difficult it can be to find an engaging tutorial for Python 3 that does not assume that the reader already knows all kinds of programming concepts.

It’s been a while since I started programming, but I do teach QGIS and Python programming for QGIS to university students and therefore have some ideas of which concepts are challenging. Nonetheless, it’s well possible that I overlook something that is not self explanatory. If you’re using PyQGIS 101 and find that some points could use further explanations, please leave a comment on the corresponding page.

PyQGIS 101 is a work in progress. I’d appreciate any feedback, particularly from beginners!

by underdark at June 06, 2018 07:50 PM