Welcome to Planet OSGeo

August 27, 2016

gvSIG Team

FOSS4G 2016: gvSIG Video recordings (2)

A very good talk: Spatial tools for LiDAR based watershed management and forestry analysis integrated in gvSIG


Filed under: english, events, gvSIG Desktop Tagged: forestry, foss4g, LiDAR, watershed

by Alvaro at August 27, 2016 06:32 PM

OSGeo News

Mr. Jeff McKenna Receives Sol Katz Award

by astrid_emde at August 27, 2016 06:01 PM

Free and Open Source GIS Ramblings

OSGeo Code Sprint in Bonn

It’s been a great week in Bonn! I joined the other members of the QGIS project at the pre-FOSS4G code sprint at the Basecamp, the weirdest location we’ve had for a developer meeting so far. We used this opportunity to have a face-to-face meeting of the QGIS PSC with special guests Matthias Kuhn (on QGIS 3.0 and bug tracker discussions) and Lene Fischer (on community team issues) – notes here.


QGIS PSC meeting in action (from left to right: Otto Dassau, Paolo Cavallini, Anita Graser, Andreas Neumann, Jürgen E. Fischer), picture by Tim Sutton

I also finally took the time to compile a blog post on the results of the QGIS user survey 2015.

The code sprint was also a great opportunity to present the results of Akbar Gumbira’s Google Summer of Code project: the QGIS Resource Sharing plugin. This plugin makes it possible to easily share resources (such as SVG icons, symbol definitions, QGIS styles, and Processing scripts) with other QGIS users through an interface that closely resembles the well-known QGIS Plugin Manager. Akbar has also prepared a great presentation with background info and screencasts showcasing his project.

QGIS Resource Sharing presentation, picture by @foss4g


The plugin is now available in the Plugin Repository and we have created the official QGIS Resources repository on Github. If you have symbols or styles that you want to share with the community, please create a resource collection and send us a pull request to add it to the official list.

Thanks to all the organizers who worked hard to make this one of the most well-organized and enjoyable code sprints I’ve ever been to. You are awesome!

by underdark at August 27, 2016 09:24 AM

August 26, 2016

gvSIG Team

NASA WORLD WIND Europa Challenge 2016. List of nominees and gvSIG projects


The Europa Challenge has always had Europe’s INSPIRE Directive to guide project development. This year we continue to have INSPIRE guide us and more specifically, we are looking for urban management solutions. This year the Europa Challenge is asking the world’s *best and brightest* to deliver solutions serving city needs.

Almost every city needs the same data management tools as every other city. How can we help cities work together to be more sustainable, more livable and more resilient? If cities were able to share their solutions with each other, this would multiply their investment by the number of cities participating. Each city could develop different functionalities and then ‘share’ these with each other, massively increasing our planet’s collective productivity.

This year gvSIG Association has two projects in the final list of nominees:

  • Improving the integration of NWW in gvSIG Desktop: Extrusion, vector data and EPSG support. More info: here.
  • gvSIG Online and Web World Wind, the solution for Spatial Data Infrastructures on Open Source software with 3D View. More info: here.

The 2016 projects are:


Filed under: events, gvSIG Desktop, gvSIG Online, press office, spanish Tagged: awards, Europa Challenge, nasa world wind

by Alvaro at August 26, 2016 03:10 PM


What are trusted plugins?

The core team of QGIS strives hard to provide the most advanced and user friendly GIS for free use by everyone. In the core QGIS project, every line of code that gets committed is subject to peer review when contributed by a non core developer. This gives us an opportunity to identify and correct inadvertent (or intentional) security issues that a developer may introduce into the code base. By contrast, all of the plugins that are published via the QGIS plugin repository are reviewed by the plugin developers themselves and we don’t have good insight into how much due diligence is applied to plugin code management.

The vast majority of our plugins (listed at http://plugins.qgis.org/ and inside your copy of QGIS) are developed by third parties: individuals, companies, and institutions. As such, they are outside our direct control, and the developers are often relatively unknown to the QGIS community. We view this as a potential security risk. We are convinced the risk is small, thanks to many factors including the “many eyes” principle (the code is visible to everybody and in use by thousands of people), but we cannot exclude the possibility that someone tries to inject malicious code into a plugin.

In order to address this situation, we looked into the opportunity of implementing automatic tools to scan plugins, before their publication, and spot potential problems. Our research indicated that this approach would be difficult and costly, and easy to circumvent.

We (the PSC) therefore decided to implement a simple yet robust approach to security, based on the ‘web of trust’ principle: we trust people we know well in the community. You will see on the http://plugins.qgis.org web site that a ‘Trusted Author’ tag has been applied to plugins created by those members of the community that we know and trust.

The criteria for ‘Trusted Authors’ include community members who regularly meet at our QGIS developer meetings, and those who are in almost daily contact with the core team via our developer mailing lists or background project discussions. The remaining plugins (and there are wonderful, reliable, robust, and useful plugins in the list) have not been given the ‘trusted’ label.

We would be delighted if a side effect of this choice would be to stimulate more active and direct involvement of plugin developers in the QGIS community. All plugin developers are therefore invited to join us at one of the next developer meetings (AKA HackFest), or otherwise become a recognized, active member of the community, so they can be integrated as ‘trusted’ plugin developers.

by faunaliagis at August 26, 2016 10:16 AM

gvSIG Team

FOSS4G 2016: gvSIG Video recordings (1)

3D tools in gvSIG using Nasa World Wind

Digital field mapping with Geopaparazzi and gvSIG

Filed under: english, events, gvSIG Desktop, gvSIG Mobile, gvSIG Online Tagged: foss4g, Geopaparazzi, osgeo

by Alvaro at August 26, 2016 10:00 AM

August 25, 2016

From GIS to Remote Sensing

Tutorial: Land Cover Classification and Mosaic of Several Landsat images

This tutorial is about the land cover classification of several Landsat images in order to create a classification of a large study area using the Semi-Automatic Classification Plugin (SCP). For very basic tutorials see Tutorial 1: Your First Land Cover Classification and Tutorial 2: Land Cover Classification of Landsat Images.
The study area of this tutorial is Costa Rica, a country in Central America with an area of about 51,000 square kilometres. In particular, we are going to classify Landsat 8 and Landsat 7 images, masking clouds and creating a mosaic of the classifications. We are going to identify the following land cover classes:
  1. Built-up;
  2. Vegetation;
  3. Soil;
  4. Water.
The video of this tutorial follows.

by Luca Congedo (noreply@blogger.com) at August 25, 2016 03:07 PM

From GIS to Remote Sensing

Major Update: Semi-Automatic Classification Plugin v. 4.9.0 - Sentinel-2 Download and Conversion to Reflectance

This post is about a major update for the Semi-Automatic Classification Plugin for QGIS, version 4.9.0.

Here is the changelog:
  • updated the Sentinel-2 download tool to allow downloading single granules and selected bands
  • updated the Sentinel-2 Preprocessing tab to convert bands to TOA reflectance and surface reflectance (using DOS1)
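The DOS1 conversion mentioned in the changelog is based on well-known formulas: top-of-atmosphere reflectance plus a dark-object estimate of the path radiance. As a rough illustration of the arithmetic involved (a generic sketch of the standard DOS1 equations, not SCP's actual code; the function and variable names are made up):

```python
import math

def toa_reflectance(radiance, esun, sun_elevation_deg, earth_sun_dist):
    """Top-of-atmosphere reflectance from at-sensor spectral radiance."""
    cos_theta = math.cos(math.radians(90.0 - sun_elevation_deg))  # solar zenith
    return (math.pi * radiance * earth_sun_dist ** 2) / (esun * cos_theta)

def dos1_surface_reflectance(radiance, radiance_min, esun, sun_elevation_deg,
                             earth_sun_dist):
    """DOS1: estimate path radiance from the darkest pixel in the band,
    assuming the darkest object has 1% surface reflectance, then correct."""
    cos_theta = math.cos(math.radians(90.0 - sun_elevation_deg))
    # radiance that a 1%-reflectance target would produce at the sensor
    l_one_percent = 0.01 * esun * cos_theta / (math.pi * earth_sun_dist ** 2)
    path_radiance = radiance_min - l_one_percent
    return (math.pi * (radiance - path_radiance) * earth_sun_dist ** 2) / (esun * cos_theta)
```

By construction, the darkest pixel in the band comes out at exactly 1% surface reflectance after the correction.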

by Luca Congedo (noreply@blogger.com) at August 25, 2016 02:55 PM

Paul Ramsey

PgSQL Indexes and "LIKE"

Do you write queries like this:

SELECT * FROM users 
WHERE name LIKE 'G%'

Are your queries unexpectedly slow in PostgreSQL? Is the index not doing what you expect? Surprise! You’ve just discovered a PostgreSQL quirk.

TL;DR: If you are running a locale other than “C” (show LC_COLLATE to check) you need to create a special index to support pattern searching with the LIKE operator: CREATE INDEX myindex ON mytable (mytextcolumn text_pattern_ops). Note the specification of the text_pattern_ops operator class after the column name.

As a beginner SQL student, you might have asked “will the index make my ‘like’ query fast” and been answered “as long as the wildcard character is at the end of the string, it will.”


That statement is only true in general if your database is initialized using the “C” locale (the North America/English-friendly UNIX default). Running with “C” used to be extremely common, but is less and less so, as modern operating systems automagically choose appropriate regional locales to provide appropriate time and formatting for end users.

For example, I run Mac OSX and I live in British Columbia, an English-speaking chunk of North America. I could use “C” just fine, but when I check my database locale (via my collation), I see this:

pramsey=# show LC_COLLATE;
 lc_collate  
-------------
 en_CA.UTF-8
(1 row)

It’s a good choice: it’s where I live, and it supports lots of characters via UTF-8. However, it’s not “C”, so there are some quirks.

I have a big table of data linked to postal codes; this is what the table looks like:

          Table "gis.postal_segments"
    Column    |     Type     | Modifiers 
--------------+--------------+-----------
 postal_code  | text         | not null
 segment      | character(4) | 
Indexes:
    "postal_segments_pkey" PRIMARY KEY, btree (postal_code)

Note the index on the postal code, a standard btree.

I want to search rows based on a postal code prefix string, so I run:

SELECT * FROM postal_segments 
WHERE postal_code LIKE 'V8V1X%';
                                              QUERY PLAN                                              
 Seq Scan on postal_segments  (cost=0.00..2496.85 rows=10 width=68) (actual time=30.320..34.219 rows=4 loops=1)
   Filter: (postal_code ~~ 'V8V1X%'::text)
   Rows Removed by Filter: 100144
 Planning time: 0.250 ms
 Execution time: 34.263 ms
(5 rows)

Ruh roh!

I have an index on the postal code, so why am I getting a sequence scan?!?! Because my index is no good for doing pattern matching in any collation other than “C”. I need a special index for that, which I create like this:

CREATE INDEX postal_segments_text_x 
  ON postal_segments (postal_code text_pattern_ops);

The magic part is at the end, invoking text_pattern_ops as the opclass for this index. Now my query works as expected:

SELECT * FROM postal_segments 
WHERE postal_code LIKE 'V8V1X%';
                                                           QUERY PLAN                                                           
 Index Scan using postal_segments_text_x on postal_segments  (cost=0.29..8.31 rows=10 width=68) (actual time=0.067..0.073 rows=4 loops=1)
   Index Cond: ((postal_code ~>=~ 'V8V1X'::text) AND (postal_code ~<~ 'V8V1Y'::text))
   Filter: (postal_code ~~ 'V8V1X%'::text)
 Planning time: 0.532 ms
 Execution time: 0.117 ms
(5 rows)

I have gotten so used to PostgreSQL doing exactly the right thing automatically that it took quite a long time to track down this quirk when I ran into it. I hope this page helps others save some time!
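Incidentally, the Index Cond in the faster plan shows exactly what the planner does with an anchored pattern once a text_pattern_ops index exists: it rewrites LIKE 'V8V1X%' into a half-open range scan between the prefix and its successor string. A toy Python sketch of that rewrite (PostgreSQL's real implementation is in C and handles edge cases this one ignores):

```python
def like_prefix_to_range(prefix):
    """Rewrite an anchored LIKE prefix into a half-open range
    [prefix, successor), mirroring what the planner does when a
    bytewise-ordered (text_pattern_ops) index is available."""
    # Successor: bump the last character, so every string that starts
    # with the prefix sorts strictly below the successor.
    successor = prefix[:-1] + chr(ord(prefix[-1]) + 1)
    return prefix, successor

lo, hi = like_prefix_to_range("V8V1X")
# matching rows satisfy: lo <= postal_code < hi
```

This is why the plan above contains the two conditions `~>=~ 'V8V1X'` and `~<~ 'V8V1Y'`.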

August 25, 2016 09:05 AM

gvSIG Team

12th International gvSIG Conference: “Know the territory. Manage the reality”


From November 30th to December 2nd 2016, the 12th International gvSIG Conference, organized by the gvSIG Association under the slogan “Know the territory. Manage the reality”, will be held in Valencia, Spain.

This year the conference will take place at a new venue, namely the School of Engineering in Geodesy, Cartography and Surveying (Universitat Politècnica de València). More information about the conference and workshop rooms will be provided on the event website.

The call for papers is now open. As of today, proposals can be sent to the email address conference-contact@gvsig.com, and they will be evaluated by the scientific committee for inclusion in the conference program. There are two types of communication: presentation and poster. All the information about the submission guidelines can be found in the ‘Communications’ section of the website. The deadline for receiving abstracts is September 22nd.

Organizations interested in collaborating in the event can find information in the ‘How to collaborate?’ section of the conference website.

We look forward to your participation!

Filed under: events, press office, spanish Tagged: 12as Jornadas Internacionales gvSIG

by Alvaro at August 25, 2016 06:50 AM

Eduardo Kanegae

OpenTripPlanner - an open-source solution for multimodal trip planning

OpenTripPlanner is an open-source (LGPL-licensed) solution for multimodal trip-planning applications. Its multimodal character is based on the ability to plan itineraries using several means of transportation, such as walking routes, car trips, bus trips, bicycle routes, or combinations such as bike-plus-bus itineraries. Its development was funded and supported by the transit agency of Portland, Oregon, USA (TriMet) from 2009 to 2011.

Technical features

  • plans routes multimodally, taking into account walking segments, accessibility requirements, cycling, roads and public transport
  • for bicycle trips, uses travel time, road type/safety and terrain elevation as adjustable routing parameters
  • displays profile charts for walking or cycling trips
  • imports data in OpenStreetMap, Shapefile and GTFS formats, as well as digital elevation models
  • plans trips in about 100 ms for moderately sized cities
  • system architecture compatible with RESTful APIs
  • supports the GTFS-Realtime format for service alerts
  • supports bike-rental workflows
  • experimental support for the Raptor algorithm

Source: OpenTripPlanner wiki
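As a taste of the RESTful architecture mentioned in the feature list, trip requests in the OTP 1.x REST API go to the /otp/routers/{routerId}/plan endpoint with fromPlace/toPlace/mode query parameters. A small sketch that only builds such a request URL (the host and coordinates are made up; no request is actually sent):

```python
from urllib.parse import urlencode

def plan_trip_url(base_url, from_place, to_place, mode="TRANSIT,WALK"):
    """Build a request URL for OpenTripPlanner's trip-planning endpoint.
    from_place/to_place are (lat, lon) tuples; mode can also be e.g.
    "BICYCLE" or "TRANSIT,BICYCLE" for the multimodal combinations."""
    params = urlencode({
        "fromPlace": f"{from_place[0]},{from_place[1]}",
        "toPlace": f"{to_place[0]},{to_place[1]}",
        "mode": mode,
    })
    return f"{base_url}/otp/routers/default/plan?{params}"

# hypothetical local OTP instance and coordinates in Portland, Oregon
url = plan_trip_url("http://localhost:8080", (45.52, -122.68), (45.53, -122.66))
```

Fetching that URL returns a JSON response with one or more itineraries.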

More »

by Eduardo Kanegae (noreply@blogger.com) at August 25, 2016 02:31 AM

Eduardo Kanegae

QGIS Server quick test

...but if you need to quickly design (colors, symbology, labels, zoom rules) and publish web maps, without manually editing or copy-pasting MapServer MapFiles, then QGIS Server is here to help. Using an instance of QGIS (the great open-source desktop GIS) you will be able to design nice maps, including zoom-based rules and advanced symbology, and then view the web map (powered by QGIS Server!) with exactly THE SAME appearance as defined in QGIS Desktop. A WYSIWYG design-and-publish method.

More »

by Eduardo Kanegae (noreply@blogger.com) at August 25, 2016 02:18 AM

Eduardo Kanegae

Testing QGIS Server

...and for cases where you want to quickly build maps and publish them on the web, without having to edit MapFiles as with MapServer, QGIS Server may be the solution. Starting from a QGIS installation, it is possible to design maps with well-defined zoom rules and advanced symbology features, and then view that map with an identical appearance in a web application (powered by QGIS Server!) without much effort. In other words, WYSIWYG at its best.

More »

by Eduardo Kanegae (noreply@blogger.com) at August 25, 2016 02:17 AM

Eduardo Kanegae

MS-SQL 2008 Spatial experiments - last part

...and as started in another post, more (old) notes about MS-SQL 2008 Spatial.

Exporting from SQL2008 Spatial using OGR

$ ogr2ogr -f KML -s_srs "EPSG:4326" -t_srs "EPSG:4326" -where "mytable_id = 6850" myexporteditem.kml "MSSQL:dsn=MY_DSN;server=(local);database=DBNAME_HERE;tables=myscheme.mytable;uid=USER_HERE;pwd=PWD_HERE" mytable

Setup PHP driver for SQL 2008

IMPORTANT: this was drafted in 2011/03, so it could be easier nowadays ;-)

  • get and extract related package ( from Microsoft website)
  • copy file php_sqlsrv_53_ts_vc9.dll to PHP extensions path
  • enable it at php.ini and run some phpinfo() to check it out

More »

by Eduardo Kanegae (noreply@blogger.com) at August 25, 2016 02:13 AM

Eduardo Kanegae

MS-SQL 2008 Spatial experiments

...notes from some experiments in early 2011, mixing SQL 2008 with MS4W 3.0.1 (MapServer 5.6.6).
SQL Server 2008 Setup
  • packages to install:
  • tried and tried, but it didn't work on my old Windows 7 Pro
  • worked fine on Windows 2003 Server - even under Virtual Box
  • at installer 7th step ("Server configuration"), define "NT AUTHORITY\LOCAL SERVICE" as credentials for 'SQL Server Database Engine' service
  • and please, define a good password for sa user and do not forget to enable 'mixed logins'
Some notes on SQL 2008 geometry handling
myscheme.mytable ( mygeomfield )
BOUNDING_BOX = ( -180, -90, 180, 90 ) )
  • when a table has a 'geometry' field, results will also be shown as a map in the MS SQL Studio GUI

  • OGC SFS methods are invoked on geometry fields, e.g.:
SELECT v.*, v.mygeomfield.STAsText() FROM mytable AS v

Testing connection using OGR
version installed: MS4W 3.0.1 / MapServer 5.6.6
  • create a system ODBC DSN
  • run an ogrinfo test:
$ ogrinfo -so "MSSQL:dsn=MY_DSN;server=(local);database=DBNAME_HERE;tables=myscheme.mytable;uid=USER_HERE;pwd=PWD_HERE" mytable
  • it will output something like:
INFO: Open of `MSSQL:.....'
      using driver `MSSQLSpatial' successful.


by Eduardo Kanegae (noreply@blogger.com) at August 25, 2016 02:10 AM

Eduardo Kanegae

TOPODATA Index Map now available in English

TOPODATA Index Map - the SRTM downloader tool for Brazil - is now also available in English.
It's a simple webmapping tool designed to make TOPODATA project data (SRTM products for Brazil) easier to access for GIS users, and to contribute to the application of this dataset in several knowledge areas such as forestry, agriculture and environmental studies.

by Eduardo Kanegae (noreply@blogger.com) at August 25, 2016 02:06 AM

Eduardo Kanegae

Webmapping docs 2002-2003

Below are some documents I published in the past which, as I can now see from visit logs, many people are still looking for.

  • Portuguese translation of the "MapServer Demo README" and "MapServer Get Started HOW-TO" documents (from the official MapServer documentation, 2002)
  • "Democratizando a geoinformação através do Webmapping", MundoGEO portal, 2003
  • "O Brasil visto por clicks: a aplicação do webmapping como ferramenta de consulta de informações", GeoBrasil congress, 2003
More »

by Eduardo Kanegae (noreply@blogger.com) at August 25, 2016 02:04 AM

August 24, 2016

gvSIG Team

12th International gvSIG Conference: “Know the territory. Manage the reality”. Call for papers is open.


The 12th International gvSIG Conference, organized by the gvSIG Association, will be held from November 30th to December 2nd 2016 in Valencia, Spain, under the slogan “Know the territory. Manage the reality“.

This year it will be held at a new venue, the School of Engineering in Geodesy, Cartography and Surveying (Polytechnic University of Valencia). More information about the conference rooms will be available at the event website soon.

The call for papers is now open. As of today, proposals can be sent to the email address conference-contact@gvsig.com; they will be evaluated by the scientific committee for inclusion in the conference program.

There are two types of communication: paper or poster. Information about the submission guidelines can be found in the Communications section of the website. Abstracts will be accepted until September 22nd.

Organizations interested in collaborating in the event can find information in the section: How to collaborate.

We look forward to your participation!

Filed under: english, events, press office Tagged: 12th International gvSIG Conference

by Alvaro at August 24, 2016 09:35 PM

gvSIG Team

GeoTIC: A solution for managing the ICT resource inventory


GeoTIC is one of the projects carried out by the gvSIG Association that exemplifies the use of our open-source technologies for inventory management. Just as gvSIG Roads provides a product for road inventory management, this project puts in place a solution to optimize the management of an ICT resource inventory.

The User and Workstation Support Service (SAUPT) of the Directorate General for Information and Communication Technologies (DGTIC) of the Generalitat Valenciana (GVA) had a series of needs in its management process.

The main problem to solve was that a large part of the management was done in spreadsheets, and the information needed to carry out their duties was decentralized, requiring access to different systems to obtain it; there was therefore no overall view. The GeoTIC project was born as a solution to this problem.

GeoTIC is both a tool for technicians, which makes their daily work easier, and a dashboard that supports decision-making to optimize the resources available to the service.


The application was developed using gvNIX (a product promoted by the gvSIG Association), which stands out for its productivity gains in terms of shorter development times. Moreover, for this kind of inventory solution, which combines alphanumeric management (really the most important part) with a cartographic component, gvNIX makes it easy to incorporate the geographic component, integrating it into the system like any other. PostgreSQL + PostGIS was used as the database.

Regarding the geographic information, the GeoTIC geoportal is dedicated to managing the DGTIC's ICT resources. This geoportal shows the sites served by the User and Workstation Support Service (SAUPT), through which the information can be accessed and managed.

Another characteristic of GeoTIC is that it is a responsive web application, which can be used on devices with different resolutions: desktops, laptops, tablets or mobile phones.

As for the information, GeoTIC draws on different databases (GUC, CESTA, GeoTIC) that use different technologies (PostgreSQL and Oracle), and on different web services (JIRA, SAFE, OSM, official cartography):

  • CESTA is an Oracle database containing the inventory of the GVA's ICT assets.
  • GUC is also an Oracle database, holding the inventory of GVA sites.
  • The GeoTIC database is implemented with PostgreSQL + PostGIS and stores the information that used to be kept in spreadsheets. This database also provides the spatial support for the application.
  • JIRA is an application for project management and planning through issue tracking.
  • SAFE is the authentication and authorization service offered by the GVA.

GeoTIC performs automatic synchronizations at night, when users are not working, so that the data obtained from GUC and JIRA is always up to date. It also offers the possibility of running manual synchronizations.

The application also needs access to map servers to obtain cartography. It gets its maps from OpenStreetMap and from various official cartography sources.

For more information about the project (and to see a demo of how it works), you can watch the presentation given at the last International gvSIG Conference:

If your organization has similar inventory-management needs (whether for ICT or any other kind of information), we encourage you to contact us at info@gvsig.com. Besides counting on the best experts in open-source geomatics, you will be contributing to the maintenance and development of the gvSIG project.

Filed under: geoportal, gvNIX, Inventory, spanish Tagged: gestión de recursos, inventario, TIC

by Alvaro at August 24, 2016 12:51 PM

Ivan Minčík

Automatic PostgreSQL tuning in Ansible

Once upon a time, there was a tool called pgtune, which I used heavily to set reasonable defaults for automatically deployed PostgreSQL instances.

Unfortunately, this tool has not been maintained for a long time, doesn't support new PostgreSQL versions, and was finally removed from Debian and all its derivatives.

I understand that proper database configuration is the result of deep knowledge of your data and queries, but in the current world of automated deployment and containers, it is much better to configure your PostgreSQL instance with values computed from the current environment than to leave defaults designed for a very low-memory system.

Therefore I started searching the Internet for a pgtune alternative and found a great new web-based pgtune replacement developed by Alexey Vasiliev (le0pard).

Since I am using Ansible for my systems orchestration, I decided to rewrite its computation logic as a new Ansible postgresql_tune module.

Module requires following information to compute results:
  • PostgreSQL version
  • deployment type
  • total memory
  • maximum number of connections
  • path to resulting PostgreSQL file
  • path to resulting sysctl file

Example usage:

- name: Install PostgreSQL configuration
  postgresql_tune:
    db_version: "{{ POSTGRESQL-VERSION }}"
    db_type: dw
    total_memory: "{{ ansible_memtotal_mb }}MB"
    max_connections: "{{ POSTGRESQL-CONNECTIONS }}"
    postgresql_file: "{{ pgsql_config_dir }}/99-postgresql-tune.conf"
    sysctl_file: "/etc/sysctl.d/99-postgresql-tune.conf"
  notify:
    - service postgresql restart
    - sysctl postgresql reload
  become: yes

Source code is here and any feedback is very much appreciated.

by Ivan Minčík (noreply@blogger.com) at August 24, 2016 10:27 AM

August 23, 2016

gvSIG Team

Trouble in the Rebel Alliance: Star Wars and gvSIG

«They were in the wrong place at the wrong time. Naturally, they became heroes.» – Leia



It is not very common to dedicate a post to a bug, but this one is curious enough that I feel encouraged to do it. Who knows whether it is the Empire's revenge for playing in gvSIG with the raster image of the Death Star... although it is also a good example of how well hidden some bugs can be.

The thing is, one of the gvSIG developers was trying to install gvSIG 2.3 RC2 at home... without success. Something strange was going on, since at the office he had had no problem, with the same operating system and the same characteristics: Ubuntu 16.04, 64-bit, to be precise.

First he tried the .run installer, but when the installation started, it closed for no apparent reason. Then he tried the .jar installer, with the same result.

For some reason the installer decided that gvSIG could not be installed there. He kept turning the matter over, trying to find out why... and nothing.

The next test was with the portable version. gvSIG failed on startup, without even showing the splash screen and without generating the "gvSIG.log" file, which is what lets you find out what might be going on.

Then he saw in the "gvSIG-installer.log" file that it was trying to execute a file called "cygcheck.exe" and could not find it.

cygcheck.exe? A Windows executable in a Linux installation? Clearly that was where the problem was... but what was going on? The conclusion made no sense: gvSIG thought it was running on a Windows system.

And, of course, this only happened to him, and only on his home machine.

And now comes the explanation: our developer had named his home machine Xwing, a type of ship from the Star Wars saga used by the Rebel Alliance. And what nobody could have imagined was happening: if your Linux system is called Xwing, you cannot install or run gvSIG.

When gvSIG asks the system for its characteristics in order to determine the architecture and operating system, it runs the command:

# uname -a

Linux XWing 3.13.0-43-generic #72-Ubuntu SMP Mon Dec 8 19:35:06 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

...and as a result it sees that the output contains the letters "win" and decides it is a Windows operating system... and from there on, naturally, it is no surprise that nothing worked as it should.

Although we will run the necessary tests, the fix looks simple a priori: make gvSIG use "uname -p -o" instead of "uname -a", which reports only the processor architecture and the operating system (and not whatever name the user decided to give the system). With this, it seems the Rebel Alliance will be able to keep using gvSIG in its fight against the Empire.
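The failure mode is easy to reproduce in a few lines. A naive substring check over the full uname output (roughly what the installer was doing; the actual gvSIG code is not shown here) misclassifies any machine whose hostname contains the letters "win", while checking only the OS field does not:

```python
def naive_os_detect(uname_output):
    """Buggy detection: searches the whole `uname -a` string, hostname included."""
    return "windows" if "win" in uname_output.lower() else "linux"

def fixed_os_detect(os_name):
    """Fixed detection: only look at the OS field (e.g. from `uname -o`)."""
    return "windows" if "win" in os_name.lower() else "linux"

uname_a = "Linux XWing 3.13.0-43-generic #72-Ubuntu SMP x86_64 GNU/Linux"
naive_os_detect(uname_a)      # the hostname XWing triggers the "win" match
fixed_os_detect("GNU/Linux")  # unaffected by the hostname
```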

Does this deserve to be named bug of the year or not?

And may the force be with you!


Filed under: gvSIG Desktop, spanish, testing Tagged: bug, star wars

by Alvaro at August 23, 2016 05:31 PM


Results of the QGIS user survey 2015

In autumn last year, we ran a rather large-scale user survey, which was translated into many languages and advertised here on this blog. The final reports can be found here:

(Let me know if you have links to other language versions which were not sent to the mailing list.)

Looking at the English report, most responses were filed by regular (49.7%) and advanced users (35.9%) who use QGIS at least several times per week. One interesting result is that respondents feel the project should still prioritize new features when spending funds:

Top 3 “highest priority for spending QGIS funds”

  1. Important features that are missing (50%)
  2. More bugfixing (24.1%)
  3. Improved user documentation (12.4%)

This is also confirmed by the free comments section, where roughly 23% of respondents asked for new features, 19% called for more stability (fewer releases and fewer new features), and 9% for better documentation.

Documentation improvements were followed closely by calls for a more structured approach to plugins (making it easier to find the right tool for the job), stricter plugin documentation requirements, consolidation of plugins with similar functionality, and integration of key plugins into core.

When interpreting these results, it’s important to keep in mind that responses are skewed towards experienced users, who are more likely to require specialist functionality. Beginners on the other hand might rank stability, ease of use of core functionality, and good documentation higher.

by underdark at August 23, 2016 03:36 PM

Petr Pridal

PDFium GeoPDF driver in GDAL 2.1

GDAL is a very popular open-source library for decoding/encoding geospatial data formats and for raster processing. It is used by QGIS, GRASS and MapServer, but also by Google Earth, ESRI ArcGIS and many other GIS software tools. It powers our MapTiler software too.

A new implementation of efficient reading of PDF and GeoPDF file formats is available in GDAL library since version 2.1. Klokan Technologies GmbH team is behind the implementation of this format driver, which has been accepted and merged by Even Rouault and released publicly in May 2016.

The driver is powered by the open-source PDFium library, which was released by Google Inc. for previewing .pdf files in the open-source Chrome/Chromium browser. The free PDFium library is a BSD-licensed light version of the Foxit PDF framework. We have prepared compiling scripts for Linux, Mac OS X and Windows, which were adopted, included and documented as part of GDAL.

The advantages of the new PDF driver:
  • Significantly higher performance (compared to the previous PoDoFo and Poppler backends)
  • Support for larger PDF files with a smaller memory footprint: even large AutoCAD plans or huge GeoPDFs can now be processed efficiently.
  • A non-restrictive BSD license! The copyleft GPL previously prevented applications from supporting, for example, both PDF and MrSID/ECW formats.

The driver has been in production use in our MapTiler software for over a year now.

We are pleased to share the source code with the community and offer it for integration in all open-source projects and third-party products. Next time, when you launch your favorite open-source GIS tool after the GDAL library has been updated, you may benefit from the faster PDF reading thanks to our work!

by Petr Pridal (noreply@blogger.com) at August 23, 2016 08:14 AM

August 22, 2016

Akshat Tandon

The How part (6) - Concatenating OSM way chunks

( This post is related to my GSoC project with Marble. For more information you can read the introductory post )


As of now, if you load an OpenStreetMap file containing linestrings such as highways, lanes or streets into Marble and zoom to level 11, you will find that the highways and streets are not contiguous and appear broken.

You will find the roads/streets as broken


Instead of getting contiguous roads



One of the primary reasons for this discontiguity is that a particular highway/lane/street is often described by multiple contiguous OSM way elements rather than a single OSM way element. Marble treats each of these way elements as a linestring object. However, to improve rendering performance, Marble omits rendering any object smaller than 2 px. As a result, many OSM way elements smaller than 2 px are not rendered, and the street appears broken because only those of its way elements larger than 2 px are drawn.

One plausible justification for describing a highway with multiple elements is that a street may bear one name for a certain stretch and a different name for the remaining stretch. However, at level 11 we are mostly concerned with the type of highway (primary, secondary, motorway, residential) rather than specifics such as names.

Usually, consecutive OSM way elements of a single highway share their first and last node ID references. For example, consider <1n…2n> as an OSM way element where 1n and 2n correspond to the first and last node IDs of the way. A highway described by an ordered list of node IDs 1 to 56 can then usually be represented by the OSM way elements <1…5>, <5…13>, <13…28>, <28…36>, <36…56>.

I exploited this representation to create a way-concatenating module in the existing osm-simplify tool. For the above example, the module concatenates all five OSM way elements into a single OSM way element <1…56>.
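The endpoint-sharing trick can be sketched in a few lines of Python. This is a hypothetical helper, not Marble's actual code (which is C++), and it assumes the chunks form one open, contiguous chain, possibly listed out of order:

```python
def concatenate_ways(ways):
    """Merge OSM way chunks that share endpoint node IDs.

    `ways` is a list of node-ID lists, e.g. [[1, 3, 5], [5, 9, 13]].
    Assumes the chunks form a single open, contiguous chain.
    """
    # Index chunks by their first node ID.
    by_first = {w[0]: w for w in ways}
    # Start from the chunk whose first node is nobody else's last node.
    last_ids = {w[-1] for w in ways}
    start = next(w for w in ways if w[0] not in last_ids)
    merged = list(start)
    while merged[-1] in by_first:      # follow shared endpoints
        nxt = by_first.pop(merged[-1])
        merged.extend(nxt[1:])         # drop the duplicated joint node
    return merged
```

For the example above, `concatenate_ways([[5, 9, 13], [1, 3, 5], [13, 20, 28]])` yields the single chain `[1, 3, 5, 9, 13, 20, 28]`.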

osm-simplify -t highway=* -w input.osm

The above command concatenates all the highways present in the input file and produces a new OSM file as output.

Apart from solving the discontinuity problem, way concatenation also reduces the amount of data, since it eliminates redundant node references and way elements. This in turn speeds up file parsing and improves rendering performance: to render a single stretch of highway, Marble now creates and renders one linestring object instead of many.


The tricky part of coding this OSM way concatenator was coming up with an algorithm that concatenates all the linestrings present in an input file in reasonable time. Together with my mentors Dennis Nienhüser and Torsten Rahn, I eventually arrived at a working algorithm that runs in approximately O(n) time.

The algorithm involves a purpose-built data structure called WayChunk, which is essentially a list of contiguous ways. It also stores a GeoDataVisualCategory, which represents the type of linestring; for highways, this records whether it is a motorway, primary, secondary or residential road.

The algorithm uses a multi hash map to bundle together OSM way elements that share a starting or terminating node. The map's keys are node IDs and its values are WayChunks. At any given instant, the map contains the starting and ending nodes of the ways evaluated so far as keys, each pointing to its corresponding way chunk. Whenever a new way is encountered whose first or last node matches an existing way chunk, and whose type (GeoDataVisualCategory) matches the chunk's type, the way is appended to that chunk and the map is updated so that its keys remain the start and end nodes of whole chunks rather than intermediate nodes. Eventually, the many small OSM chunks of highways and railroads are concatenated into single way elements.

Multi hash maps are used instead of regular hash maps because two or more highways (linestrings) of different types may start or terminate at the same node. A given node may therefore be associated with two or more way chunks of different types (GeoDataVisualCategory).

The algorithm in more detail is described below:

 Iterate over all the way elements having the key=value tags specified as input
   Check whether the first or the last node ID of the way is present in the multi hash map
   If neither ID is present
     Create a new WayChunk
     Append the way to this new way chunk
     Set the GeoDataVisualCategory of the chunk to that of the way
     Insert two new entries into the multi map, with the first and last node IDs of the way as keys and the created WayChunk as the value
   If only the first ID is present in the map
     Check whether a chunk exists in the map keyed by that first ID whose GeoDataVisualCategory matches that of the way
     If such a chunk exists
       Append the way to this chunk (reversing the way if required)
       Delete the first ID from the map
       Insert a new entry into the multi map with the last node ID of the way as key and the found chunk as value
     If not
       Create a new WayChunk as described above
   If only the last ID is present in the map
     Check whether a chunk exists in the map keyed by that last ID whose GeoDataVisualCategory matches that of the way
     If such a chunk exists
       Append the way to this chunk (reversing the way if required)
       Delete the last ID from the map
       Insert a new entry into the multi map with the first node ID of the way as key and the found chunk as value
     If not
       Create a new WayChunk as described above
   If both IDs are present in the map
     Check whether a matching chunk exists keyed by the first ID, and whether one exists keyed by the last ID
     If both chunks exist
       Append the way and the second chunk to the first chunk and update the map accordingly
     If only the first chunk exists
       Append the way to the first chunk and update the map accordingly
     If only the last chunk exists
       Append the way to the last chunk and update the map accordingly
     If neither chunk exists
       Create a new WayChunk as described above
 Finally, iterate over all the WayChunks and merge the ways in each chunk's list into one single OSM way per chunk
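The algorithm above can be turned into a compact Python sketch. This is an illustrative re-implementation, not the actual osm-simplify code (which is C++); the visual category is reduced to a plain string, ways to node-ID lists, and the multimap to a defaultdict of lists:

```python
from collections import defaultdict

class WayChunk:
    """A list of contiguous node IDs plus a feature type (stand-in for GeoDataVisualCategory)."""
    def __init__(self, cat, nodes):
        self.cat, self.nodes = cat, list(nodes)

def concat_ways(ways):
    """ways: iterable of (category, node-ID list). Returns the merged WayChunks."""
    endpoints = defaultdict(list)   # multimap: endpoint node ID -> chunks anchored there
    chunks = []

    def take(node, cat):
        """Pop a chunk of matching category anchored at `node`, if any."""
        for chunk in endpoints[node]:
            if chunk.cat == cat:
                for nid in (chunk.nodes[0], chunk.nodes[-1]):
                    if chunk in endpoints[nid]:
                        endpoints[nid].remove(chunk)
                chunks.remove(chunk)
                return chunk
        return None

    def put(chunk):
        chunks.append(chunk)
        endpoints[chunk.nodes[0]].append(chunk)
        endpoints[chunk.nodes[-1]].append(chunk)

    for cat, nodes in ways:
        nodes = list(nodes)
        left = take(nodes[0], cat)          # chunk anchored at the way's first node
        if left is not None:
            if left.nodes[0] == nodes[0]:   # reverse so the chunk ends at the joint
                left.nodes.reverse()
            nodes = left.nodes + nodes[1:]
        right = take(nodes[-1], cat)        # chunk anchored at the way's last node
        if right is not None:
            if right.nodes[-1] == nodes[-1]:
                right.nodes.reverse()
            nodes = nodes + right.nodes[1:]
        put(WayChunk(cat, nodes))
    return chunks
```

Each way is looked up at most twice in the map and merged at most twice, which is what gives the roughly linear running time mentioned above.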


The first image, with discontiguous highways, shows the raw input OSM file. The file has 2704 ways in total and a size of 4.7 MB.

The way concatenator reduced the number of ways to 812 and the size to 2.9 MB. The second image, with contiguous roads, shows its output.

Removing the redundant nodes from that output with the node-reducing module described in the previous post yields a file of 2.5 MB; the node reducer removes 15146 redundant nodes (keeping the resolution at level 11).

If you suspect that node reduction degrades rendering quality, compare the image below with the ones above.


Together, the node reducer and the way concatenator reduced the size of this sample by approximately 46% without any significant loss in rendering quality.

August 22, 2016 11:30 PM

GeoSpatial Camptocamp

QGIS, contribuez sans coder !

Contributing to the QGIS project does not necessarily require a financial investment: investing your time can bring as much to QGIS as it brings to you!

The article QGIS, contribuez sans coder ! first appeared on Camptocamp.

by Yves Jacolin at August 22, 2016 11:36 AM

gvSIG Team

Free book: “Epidemiología panorámica”, an introduction to the use of geospatial tools applied to public health

The NOSOLOSIG portal, which deserves a place among the bookmarks of anyone interested in geomatics, has highlighted a publication that we warmly recommend.

It is the book “Epidemiología panorámica. Introducción al uso de herramientas geoespaciales aplicadas a la salud pública” (“Panoramic epidemiology: an introduction to the use of geospatial tools applied to public health”), a work that grew out of more than 10 years of cooperation between Argentina's National Ministry of Health and the National Space Activities Commission (CONAE) to produce operational tools for public health based on geospatial technology.

The book includes practical exercises with gvSIG, and both the book itself and the data used in the exercises can be downloaded free of charge.

All the information is available at:


Filed under: gvSIG Desktop, spanish, training Tagged: epidemeología, salud

by Alvaro at August 22, 2016 08:29 AM

Bjorn Sandvik

The history of the Telemark Canal - projected on a physical landscape model

Together with Jon Olav Eikenes and Christan Løverås, I'm part of a new startup called Norviz. Our main focus so far has been projecting animated graphics onto physical landscape models. Our first job was to tell the history of the Telemark Canal, a beautiful waterway connecting the sea and the interior through eight locks over a distance of 105 km from Skien to Dalen in Norway.

The installation was made for West Telemark museum and is now on show at Vrangfoss, the largest lock complex on the canal, with five locks and a lift of 23 metres. The 3D model displays a 10-minute map animation showing the history of the canal, together with historical images, narration and sound effects.

Here are some of the technical details which might interest my readers :-)

The digital elevation model was prepared in Blender and cut with a CNC router. It took the machine about 30 hours to finish the whole model.

Cutting a large 240x110 cm model of the Telemark Canal in Valchromat. 
Time-lapse of the cutting process:

After two coats of white paint, our model became a nice canvas for our map animation. Fun to see the geological structures from above.

Setup and calibration in an old barn at Vrangfoss. The video projector was mounted 4 metres above the model.
The various maps telling the story of the canal were rendered with Mapnik before transitions and special effects were added in Adobe After Effects. With the help of three.js and some homemade scripts, we were able to align our map animation with the uneven landscape surface. Lastly, we used JavaScript for Automation (JXA) to link and synchronise the different parts of the installation.

The final installation showing graphics projected on the physical landscape model. 
From Varden newspaper 28 June 2016.

See it live at Vrangfoss during summer season while the canal boats are operating!

Are you interested in collaborating with us or helping us fill the world with engaging visualizations? Please don't hesitate to contact us!

Map data from Kartverket.
Concept and story by Indici and West Telemark museum.
Photos above taken by Jon Olav Eikenes.

by Bjørn Sandvik (noreply@blogger.com) at August 22, 2016 07:17 AM

gvSIG Team

3as Jornadas gvSIG México: workshop guide


The guide to the workshops and geo-topics (“geotemas”) of the 3as Jornadas gvSIG México is now available; they are an important part of the activities taking place during the conference from 7 to 9 September. The conference will be held at the Institute of Geography of the Universidad Nacional Autónoma de México.

All the workshops are free of charge. To attend them it is necessary to register for the conference. Note that places are limited, so on the first day of the conference attendees can sign up at the registration desk for the workshops that interest them. You can register for the conference through the following link:


These workshops are intended as mini-courses that train users in different aspects of free geomatics. Here is a brief summary of what the conference offers in terms of workshops:

  • General gvSIG Desktop workshops for users:
    • Introduction to free GIS with gvSIG Desktop. The aim is for attendees to learn how to use gvSIG from scratch. The main features of gvSIG are covered in a simple way through practical exercises. By the end of the workshop, attendees will be able to continue using and learning the program without difficulty.
    • What's new in gvSIG. Attendees will get to know and learn to use the new features of gvSIG Desktop 2.3 (LiDAR, dynamic segmentation, 3D, …) through practical exercises.
  • Thematic gvSIG Desktop workshops for users:
    • Managing electoral strategies with gvSIG. Aims to provide the basic elements of using gvSIG for geo-electoral analysis, and to show the importance of applying it to decision-making in the electoral field, with active, hands-on participation.
    • Applying GIS to urban planning with gvSIG. Seeks to awaken urban planners' interest in GIS through an introduction to gvSIG via a series of practical exercises related to urban planning.
    • Archaeology in gvSIG. Attendees will learn to integrate gvSIG into everyday archaeological work, so the workshop takes a practical approach based on real examples.
  • gvSIG Desktop workshops for developers and those interested in getting started with development:
    • Introduction to Python scripting in gvSIG. The aim is to explore the possibilities of gvSIG's scripting module and the manipulation of spatial data to create layers from existing ones. No programming knowledge is required for this course.
    • Advanced gvSIG development with scripting. A deeper look at gvSIG's Python programming environment. Prior knowledge of Python is recommended for this course.
  • gvSIG Online workshop:
    • gvSIG Online, Spatial Data Infrastructures with free software. Attendees will learn to use gvSIG Online, an integral platform for deploying and managing SDIs, creating geoportals and administering the geographic information of a Spatial Data Infrastructure.
  • Free geomatics workshops:
    • Free-software geocoding alternatives. Attendees will learn about the different geocoding options that exist in free software, both online and desktop.
    • Software for fractal analysis: FROG. The aim is to introduce the FROG software for computing the fractal dimension of objects using various treatments, starting from images in raw or bmp format.
    • Space Syntax with Depth Map. Examines, first from a theoretical and then from a practical perspective, the meaning of spatial-grammar variables and how to use them in planning urban spaces; introduces the concept of Space Syntax and its usefulness for measuring the degree of integration and connectivity of each segment of the city, which is useful for urban and transport studies in general.
    • Free mapping: mapping territories at risk. Mexican territory has characteristics that make it vulnerable to several types of natural disasters, such as earthquakes, hurricanes, torrential rains and landslides. The aim of the workshop is to organise, through OpenStreetMap, the mapping of areas that may suffer the consequences of one of these natural phenomena.
The complete guide to workshops and geo-topics is available at:


You can check the full programme of the 3as Jornadas gvSIG México at:


Filed under: events, gvSIG Desktop, gvSIG Online, spanish, training Tagged: México

by Alvaro at August 22, 2016 06:57 AM

August 21, 2016

Akshat Tandon

The How part (5) - Removing redundant nodes from OSM files

( This post is related to my GSoC project with Marble. For more information you can read the introductory post )

The requirements for medium-level tiles differ somewhat from those for lower-level tiles. The biggest difference is that the medium levels use OpenStreetMap data rather than Natural Earth data. Another is that for lower-level tiles the main data pre-processing steps were concatenation and conversion, since the data came as shapefiles and had to become OSM, whereas for medium levels the main pre-processing steps are reduction, simplification, filtering and merging.

As already mentioned in a previous post, OpenStreetMap describes data at a very high resolution. That resolution is fine for higher (street) zoom levels but is not suitable for medium (or lower) levels. Hence pre-processing steps are needed, such as reducing the data to a lower resolution, filtering out features such as trees and lamp posts that are not visible at medium levels, and merging buildings and lanes that are very close to each other.

To pre-process the data in this way — reducing, filtering and combining it — various tools need to be built that modify the raw OpenStreetMap data to make it suitable for a particular Marble zoom level.

The first tool I built removes redundant nodes from the geographic features in the input OSM file. Redundant nodes are those that are simply not needed at a particular zoom level, because the resolution with which they describe their parent feature exceeds the maximum resolution visible at that level.

Consider this particular patch of lanes at zoom level 11.


Now observe the number of nodes with which this patch has been described.


A small square marks a single node. The squares are coloured translucently to reveal overlaps: the more intense the colour, the more nodes overlap.

As you can clearly see, this many nodes are not required to render OSM data properly at medium zoom levels. To address this, I wrote a node-reducing module. It is part of osm-simplify, an OSM data pre-processor jointly developed by Dávid Kolozsvári and me, which supports several kinds of OSM pre-processing such as clipping, tiling and way merging (other simplification tools are still being developed).

osm-simplify -z 11 -n input.osm

The above command removes nodes that are redundant at the resolution of Marble's level 11 and produces a new, reduced OSM file as output.

The underlying algorithm of this node reduction module is pretty simple

  • Create a new empty list of nodes called reducedLine
  • Let nodeA be the first node of the linestring, ring or polygon under consideration
  • Add nodeA to reducedLine
  • Iterate from the second to the second-to-last node (so that the first and last nodes are retained) of the linestring, ring or polygon:
    • If the great circle distance between nodeA and the currentNode is greater than the minimum resolution defined for that level, add currentNode to reducedLine and set nodeA to currentNode
  • Add the last node to reducedLine
  • Return reducedLine
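A minimal Python version of this reduction, using the haversine formula for the great-circle distance. This is an illustrative sketch, not the osm-simplify C++ code, and the threshold used in the usage example is an arbitrary value rather than Marble's actual level-11 resolution:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6371000.0

def great_circle(p, q):
    """Haversine distance in metres between two (lon, lat) pairs in degrees."""
    lon1, lat1, lon2, lat2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def reduce_nodes(line, min_resolution_m):
    """Drop nodes closer than min_resolution_m to the last kept node.

    `line` is a list of (lon, lat) nodes; the first and last nodes are always kept.
    """
    if len(line) < 3:
        return list(line)
    reduced = [line[0]]
    anchor = line[0]
    for node in line[1:-1]:
        if great_circle(anchor, node) > min_resolution_m:
            reduced.append(node)
            anchor = node
    reduced.append(line[-1])
    return reduced
```

For example, with a 1 km threshold, a node 0.001° of longitude (roughly 111 m at the equator) from its predecessor is dropped, while one 0.01° away (roughly 1.1 km) is kept.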

This simple tool removes a significant number of redundant nodes. The goal is for this node reduction module, together with the other osm-simplify modules, to significantly reduce the size of the OSM data for a particular zoom level, improving Marble's rendering performance without compromising much of the visual quality of the rendered data.

In the next post, I will describe the way-merging module and objectively compare the size reductions in OSM data achieved by these modules.

August 21, 2016 09:00 PM

Akshat Tandon

The How part (4) - Automating tile generation for lower levels

( This post is related to my GSoC project with Marble. For more information you can read the introductory post )

After revamping the shp2osm tool and adding the missing styles, the only thing left for supporting lower-level tiles was automating tile generation. Previously, to create the lower-level tiles one had to manually download the Natural Earth shapefiles, combine and convert them into a single OSM file using the shp2osm tool, and then create the tiles with Marble's existing vector tile creator (which splits a large OSM file into small chunks of OSM vector tiles).

I wrote a script that simplifies and automates the process of generating lower-level tiles for Marble. The script takes a text file as input, listing the zoom levels and the Natural Earth geographic features to be included at each of those levels.


As you can see, the above input file describes which Natural Earth features are to be included at levels 0, 1 and 3 of Marble's vector OSM theme.

Using this input file, the script checks the input directory for the specified Natural Earth features. If any feature is missing, the script downloads it as a zip archive and unpacks it. Once all the required features are present in the input directory, it uses the shp2osm tool to combine all the data for a particular level into a single OSM file containing all the Natural Earth features specified in the input file. Then, using the existing vector tile creator, it creates all the required OSM tiles, arranged in the proper folders and sub-folders for the levels specified in the input file.

With this tool in place, we can now easily generate the lower level vector tiles for Marble.

level 3

level 5

level 7

level 9


As you can see in the video, because all the data is rendered dynamically we get a performance lag and navigation is not very smooth. This is caused by improper tile cutting as well as the sheer amount of data rendered at any given instant. To create the tiles, the script uses Marble's pre-existing vector tile cutter, which in turn relies on osmconvert to actually clip the data into smaller OSM tiles. However, osmconvert's clipping is not very exact, which leads to data redundancy: Marble may render the same geographic feature three or four times, causing severe performance penalties. The clipping is currently being improved by Dávid Kolozsvári, another GSoC student working on Marble. Apart from clipping, there are resolution issues, i.e. a particular feature is often described by far more nodes than necessary.

With the lower-level tile work done, the second part of my project began: creating tools to simplify and reduce OSM data so that it renders smoothly and matches the visual requirements of medium zoom levels. I will discuss these tools in future posts.

August 21, 2016 06:00 PM

Akshat Tandon

The How part (3) - Concatenating the Natural Earth geographical features into a single OSM file

( This post is related to my GSoC project with Marble. For more information you can read the introductory post )

Until now, the polyshp2osm tool, which I had modified to support various kinds of geometries as well as OSM tag mappings for Natural Earth metadata, could only convert a single shapefile to OSM.

python3 ~/a/marble/tools/shp2osm/polyshp2osm.py ne_110m_rivers_lake_centerlines.shp

The above command converts the given shapefile, containing rivers and lakes, into its OSM equivalent.

One of the major aims of my project was a tool that automatically generates the OSM tiles for lower zoom levels from Natural Earth data. In my previous post, I described how I added styling for many geographic features, enabling the rendering of Natural Earth features such as reefs and bathymetries. Before creating a tool that can output the tiles, I needed one that can concatenate all the different Natural Earth feature sets — roads, railways, rivers, reefs — into a single OSM file, which can then be broken down into tiles for the lower zoom levels. Basically, I needed to modify the tool so that it can take multiple shapefiles as input and produce one combined OSM file.

The straightforward way of doing this is simply to concatenate the XML elements produced for the different shapefiles. That obviously works, but it produces far too much redundant data, especially in the form of redundant nodes. I removed this redundancy with a dictionary that maps a node's (longitude, latitude) to a unique node ID. Whenever the tool iterates over the nodes of a polygon or linestring, it checks whether the node is already in the dictionary. If it is not, a new ID is assigned and the node is added to the dictionary; if it is, the existing node's ID is reused instead of creating a redundant node with the same longitude and latitude but a different ID.
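The deduplication idea can be sketched as follows (the function and variable names are hypothetical; the real tool additionally emits OSM XML, which is omitted here). Natural Earth features repeat coordinates exactly where they share borders, so exact (longitude, latitude) equality is enough to detect duplicates:

```python
def assign_node_ids(features):
    """Assign one ID per unique coordinate across all features.

    `features` is a list of coordinate lists, one per polygon/linestring.
    Returns (per-feature node-ID references, coordinate -> ID map).
    """
    node_ids = {}     # (lon, lat) -> unique node ID
    next_id = 1
    referenced = []
    for coords in features:
        refs = []
        for lonlat in coords:
            if lonlat not in node_ids:       # first time we see this coordinate
                node_ids[lonlat] = next_id
                next_id += 1
            refs.append(node_ids[lonlat])    # reuse the existing ID otherwise
        referenced.append(refs)
    return referenced, node_ids
```

Two features sharing a border vertex thus reference the same node ID instead of each carrying its own copy of the node.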

The tool can now take multiple shapefiles as input and generate a single OSM file containing all the features of the input files without any redundant node data.

python3 ~/a/marble/tools/shp2osm/polyshp2osm.py ne_50m_land ne_50m_geographic_lines ne_50m_lakes ne_50m_glaciated_areas

The above command concatenates the Natural Earth shapefiles containing land, geographic lines, lakes and glaciated areas into a single OSM file.

August 21, 2016 04:00 PM

Akshat Tandon

The How part (2) - Adding styling for various geographical features

( This post is related to my GSoC project with Marble. For more information you can read the introductory post )

As I mentioned in my earlier post, I modified the polyshp2osm tool to convert Natural Earth shapefiles to OSM format so that Marble can render the geographic features (roads, rivers, reefs, airports) provided by the Natural Earth data set. The primary problem was that Marble had never supported rendering many of these features. To enable the rendering of a new geographic feature (a reef, a boundary), one must describe how it is styled: the colors to be used, the width of the outlines, any geometric pattern, and any icon associated with the geometry.

Before describing my work, let me outline how Marble handles styling for the vector OSM theme. Marble reads the OSM file and assigns every geographic feature (road, river, reef, airport, shop, building, etc.) a GeoDataVisualCategory value. The geometry associated with the feature (point, polygon, linestring) is styled primarily according to this GeoDataVisualCategory. Depending on the presence of other OSM key=value tags and the current zoom level, Marble may change the styling dynamically.

In fact, this ability to change styling dynamically was added only recently, by my mentor Dennis Nienhüser during the Randa meetings.

In most cases, adding styling for a new geographic feature in Marble requires the following steps:

  • Create a new GeoDataVisualCategory
  • Assign a default style to this GeoDataVisualCategory by describing the colors, width, pattern and icon associated with the feature
  • Decide the priority of this new visual category relative to the others; visual categories with higher priority are drawn on top of those with lower priority
  • Decide the minimum zoom level at which this new visual category starts appearing
  • Map this visual category to a corresponding OSM key=value pair
  • Decide whether the default styling of this GeoDataVisualCategory should change at particular zoom levels or when particular OSM key=value tags are encountered
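Conceptually, the tag mapping in these steps boils down to a lookup from a feature's OSM tags to a visual category. A toy Python sketch follows; Marble's real implementation is C++, and the category names and function below are illustrative stand-ins, not Marble's internal identifiers:

```python
def visual_category(tags):
    """Pick a visual category for a feature from its OSM tags (toy version).

    `tags` is a dict of OSM key -> value strings. Category names here are
    illustrative stand-ins for Marble's GeoDataVisualCategory values.
    """
    if tags.get("natural") == "reef":
        return "NaturalReef"
    if tags.get("marble_land") == "landmass":
        return "Landmass"
    if tags.get("boundary") == "administrative":
        # One category per admin_level, mirroring the 1..11 mapping described below.
        return "AdminLevel{}".format(int(tags.get("admin_level", 2)))
    if tags.get("natural") == "water":
        return "NaturalWater"
    return None  # unmapped features get no category
```

Additional tags such as salt=yes then act as styling modifiers on top of the chosen category rather than changing the category itself.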

Natural Reefs

Reefs have an OSM equivalent, natural=reef, which I used to tag-map them to the styling of reefs.



Natural Earth provides the data for countries, continents, urban areas and islands as polygons. To give these land polygons a common tag mapping, I constructed a custom tag, marble_land=landmass, that maps them to the styling of rendered land areas in Marble.


Administrative Boundaries

OSM administrative boundaries have an admin_level=x tag that encodes the size/importance of a particular administrative boundary. I tag-mapped the admin_level=(1..11) tags to 11 different GeoDataVisualCategories.


Salt Lakes

Instead of creating a new visual category, I added the OSM tag salt=yes to the key=value tag list of salt lakes, keeping the visual category NaturalWater. When a salt lake is encountered during parsing, Marble first assigns it the default NaturalWater styling, a shade of blue with a blue outline; the salt=yes tag then changes the styling to a shade of yellow with a dashed blue outline.


Antarctic Ice Shelves

I created a new visual category, NaturalIceShelf, for styling the Antarctic ice shelves and tag-mapped it to natural=glacier plus glacier:type=shelf.



Marine Indicators

I tag-mapped the OSM tag maritime=yes to a new visual category, BoundaryMaritime, and used the tag combination (boundary=administrative, admin_level=2, maritime=yes, border_type=territorial) to dynamically change the styling of disputed marine boundaries.


International Date Line

I created a custom OSM tag, marble_line=date, and tag-mapped it to a new visual category, InternationalDateLine.



Styling bathymetries was quite tricky, since I was allowed to use only a single visual category for the different kinds of bathymetries. Initially I thought this would be straightforward: pass along the bathymetry's elevation information and change the styling dynamically based on it. Eventually, however, I found that I had to go down a layer of abstraction and adjust things at the graphics level. The problem was that the bathymetries at levels 200 and 4000 were getting mixed up. Ideally, the 4000 m bathymetry should be drawn above the 200 m one, but since the z-value (which decides which features are rendered on top of others) is fixed for a particular GeoDataVisualCategory, rendering the bathymetries got muddled. I had to make special cases in the graphics-related methods of Marble to work around this.



August 21, 2016 12:00 PM

August 19, 2016

Hands-on to GIS in FOSS

GRASS GIS workshop and talks at FOSS4G 2016 in Bonn – You cannot miss them!

Next Sunday, August 21, and for one week, a historic event will take place in the city of Bonn, Germany: the international FOSS4G conference!! This conference brings together people from all over the world who love to have FUN and FOSS 🙂

GRASS GIS, being one of the first free and open source GIS, won't miss the event, nor the chance to meet friends and share. You shouldn't miss the new goodies that will be presented in the dedicated GRASS GIS workshop and talks either. Here are the links:


Unleash the power of GRASS GIS 7

When? Tuesday, August 23, from 9:00 to 13:00

Trainers: Markus Neteler (mundialis GmbH & Co KG), Luca Delucchi (Fondazione Edmund Mach), Martin Landa (Czech Technical University in Prague)

More info: http://2016.foss4g.org/ws27.html


#289: A complete toolchain for object-based image analysis with GRASS GIS

Moritz Lennert (http://2016.foss4g.org/talks.html#289)

#290: OpenDEM Generator: combining open access Digital Elevation Models into a homogenized DEM

Luca Delucchi and Markus Neteler (http://2016.foss4g.org/talks.html#290)

#318: Processing Copernicus Sentinel data with GRASS GIS

Markus Neteler and Carmen Tawalika (http://2016.foss4g.org/talks.html#318)

#533: Building applications with FOSS4G bricks: two examples of the use of GRASS GIS modules as a high-level "language" for the analyses of continuous space data in economic geography

Moritz Lennert (http://2016.foss4g.org/talks.html#533)

and so much more…!!!!

Just come… and build bridges!


by veroandreo at August 19, 2016 08:44 PM

OSGeo News

ACM SIGSPATIAL International Workshop on Computational Transportation Science (IWCTS) - Call for papers

by jsanz at August 19, 2016 05:57 PM

OSGeo News

First ACM SIGSPATIAL Student Research Competition

by jsanz at August 19, 2016 05:52 PM

gvSIG Team

Registration open for the 8th gvSIG Conference of Latin America and the Caribbean

The registration period is now open for the 8th gvSIG Conference of Latin America and the Caribbean (8as Jornadas de Latinoamérica y Caribe de gvSIG), which will take place on October 20 and 21 in Montevideo (Uruguay) and will feature a large number of presentations and workshops.

Registration is completely free and must be done through the form on the conference website. Don't wait until the last minute to register, as capacity is limited.

Once the program is published, a separate registration will open for the workshops, which have a limited number of places.

In addition, the call for papers for the conference is still open. You can submit your proposal for a presentation or poster, in Spanish or Portuguese, and it will be evaluated by the Scientific Committee. The guidelines are available in the "Comunicaciones" section of the website.

We look forward to your participation!

Filed under: community, events, spanish Tagged: Jornadas LAC

by Mario at August 19, 2016 07:43 AM

August 18, 2016

Micha Silver

Precipitation animations with GRASS-GIS and python

The National Center for Atmospheric Research (NCAR) supplies climate forecast data online, with four daily updates. Why not use the power of GRASS-GIS and Python to produce animations of predicted rainfall? We'll see how to subset and download GFS (Global Forecast System) data from the NCAR Research Data Archive. These data are stored in grib2 format. No worries – GDAL reads climate data in the various weather data formats, including grib2. So we can import the data directly as rasters into GRASS, create a set of maps, and then, using the Python imageio library, output the precipitation forecasts as an animated GIF.
For this tutorial I use the GFS 0.25 degree forecasts available at the link above. (Registration on the NCAR website is required). Then through the web interface I subset the data for a certain date range, a spatial extent, and particular variables. Here’s the procedure: I click on “Get a Subset” and choose both the Temporal Selection (date range) of interest, and the Precipitation Rate variable. GFS forecasts are published four times daily, and for this example I will go back to a storm event from October 2015.
After clicking “Continue” the next screen allows me to choose which products I want and to determine the spatial extent. I choose all the 6-hour Averages from initial+0 up to initial+144 hours. This choice will prepare 24 forecast files (one every 6 hours for 6 days) for each initialization time in the date range that I selected. So if my date range covers 2 days (with 4 GFS forecasts initialized each day) then I will be downloading 24×8=192 data files. Now scroll down to the map under Spatial Selection, zoom to the area of interest, and draw a box to set the longitude and latitude extents. Once all the options are set, I click on “Submit Request”.
In a short while an email arrives (at my NCAR registration email address) with the download link for the chosen files. Browse to the link in the email message, select all files in the list, and click on one of the download options: either directly download a tar file, or get a shell or perl script. I should mention that the above procedure can be scripted and the whole selection and download can be handled with a set of python scripts offered by NCAR. Check the RDA apps webpage for details.
Now, with the GFS data at hand, I'm ready to start GRASS and open a Python session. The GFS data are in Longitude/Latitude, so I start GRASS in a longlat WGS84 location. Be sure you have imageio installed. The script below covers the basics, but it will need to be adapted to fit each particular situation.
import grass.script as gscript
import imageio
import os, glob

# Some initial variables
# cc is the GFS cycle, either '00', '06', '12' or '18'
cc = '06'
# storm_date is the date of the storm event in YYYYMMDD format
storm_date = '20151021'

# The directory that holds the GFS files, and an output directory
input_dir = os.path.join(os.path.expanduser('~'), 'Downloads/GFS')
output_dir = os.path.join(os.path.expanduser('~'), 'Documents')

# A list for animation frames
frames = []
# and the final animated gif file
# (the file name was lost in the original post; this one is an assumption)
anim = os.path.join(output_dir, 'gfs_' + storm_date + cc + '.gif')

# Now create a list of GFS files to process
# Since there might be several dates and
# four daily cycles for each date in the Download directory,
# we use the storm_date and cycle to build a filter expression
# All files begin with "gfs.0p25." then the date, and cycle,
# and last the forecast hour
glob_expr = 'gfs.0p25.' + storm_date + cc + '*'
gfs_paths = glob.glob(os.path.join(input_dir, glob_expr))
# Make sure the list of files is sorted
gfs_paths.sort()

# Start processing loop to import grib2 files
for gfs in gfs_paths:
    # Create a suitable name for the GRASS raster
    # (dots in names are not SQL compliant, so
    # not accepted by GRASS; replace them with '_')
    gfs_parts = os.path.basename(gfs).split('.')
    gfs_rast = 'gfs_' + gfs_parts[2] + '_' + gfs_parts[3]
    # Now import the GFS grib2 file, first to a tmp raster
    gscript.run_command('r.in.gdal', input=gfs,
        output='tmp_gfs_rast', band=1, quiet=True,
        flags='o', overwrite=True)
    # As always, be sure region settings match the imported raster
    gscript.run_command('g.region', raster='tmp_gfs_rast', quiet=True)
    # The units in GFS "Precipitation Rate" are mm/sec.
    # Multiply by 3600 to convert to mm/hr
    # This creates each final gfs raster layer
    expr = gfs_rast + " = tmp_gfs_rast*3600"
    gscript.mapcalc(expr, overwrite=True)
    # And set a uniform color ramp for the precip rasters
    gscript.run_command('r.colors', map=gfs_rast,
        rules='precip_colors.txt', quiet=True)
    # The file "precip_colors.txt" contains:
    # 0     white
    # 0.1   240:243:255
    # 6     8:69:149
    # 20    black
    # Now create a png file from each raster
    # using the d.mon utility, writing to a PNG file
    out_png = 'precip_' + gfs_parts[2] + '_' + gfs_parts[3] + '.png'
    out_path = os.path.join(output_dir, out_png)
    ttl = " ".join([storm_date, cc + 'Z', gfs_parts[3]])
    gscript.run_command('d.mon', start='png', width=350,
        height=400, output=out_path, quiet=True, overwrite=True)
    gscript.run_command('d.rast', map=gfs_rast, quiet=True)
    # Add a vector map of coastlines
    gscript.run_command('d.vect', map='coastline', color='black',
        fill_color='none', quiet=True)
    gscript.run_command('d.text', text=ttl, at='3,12', align='ul',
        size=3, color='100:1:1', font='DejaVu Sans:Bold', flags='pb')
    gscript.run_command('d.legend', rast=gfs_rast, at='5,25,85,93',
        range='0,8', font='Liberation Mono:Regular', quiet=True)
    gscript.run_command('d.mon', stop='png', flags='r', quiet=True)
    # add each png to the "frames" list
    frames.append(imageio.imread(out_path))

# When the above loop completes, create the animation
# from the "frames" list
try:
    imageio.mimwrite(anim, frames, format='GIF', duration=0.7)
    print('Output GFS animation: %s saved' % anim)
except Exception:
    print('Error making animation')
And here’s what I get (click to animate):
Precipitation animation, 21-25/10/2015

by Micha Silver at August 18, 2016 09:58 PM

Free and Open Source GIS Ramblings

City flows unfolding with the other Processing

A previous version of this post has been published in German on Die bemerkenswerte Karte.

Visualizations of mobility data such as taxi or bike sharing trips have become very popular. One of the best recent examples is cf. city flows developed by Till Nagel and Christopher Pietsch at the FH Potsdam. cf. city flows visualizes the rides in bike sharing systems in New York, Berlin and London at different levels of detail, from overviews of the whole city to detailed comparisons of individual stations:

The visualizations were developed using Unfolding, a library to create interactive maps and geovisualizations in Processing (the other Processing … not the QGIS Processing toolbox) and Java. (I tinkered with the Python port of Processing in 2012, but this is certainly on a completely different level.)

The insights into the design process, which are provided in the methodology section of the project website, are particularly interesting. Various approaches for presenting traffic flows between the stations were tested. Building on initial simple maps, where stations were connected by straight lines, consecutive design decisions are described in detail:

The results are impressive. Particularly the animated trips convey the dynamics of urban mobility very well:

However, a weak point of this (and many similar projects) is the underlying data. This is also addressed directly by the project website:

Lacking actual GPS tracks, the trip trajectories are rendered as smooth paths of the calculated optimal bike routes

This means that the actual route between start and drop-off location is not known. The authors therefore estimated the routes using HERE’s routing service, so the visualization only shows one of many possible routes. However, cyclists don’t necessarily choose the “best” route as determined by an algorithm – be it the most direct or otherwise preferred. The visualization does not account for this uncertainty in the route selection. Rather, it gives the impression that the cyclist actually traveled on a certain route. It would therefore be ill-advised to use this visualization to derive information about the popularity of certain routes (for example, for urban planning). Moreover, the data only contain information about the fulfilled demand, since only trips that were actually performed are recorded. Demand for trips which could not take place due to a lack of bicycles or stations is therefore missing.

As always: exercise some caution when interpreting statistics or visualizations and then sit back and enjoy the animations.

If you want to read more about GIS and transportation modelling, check out
Loidl, M.; Wallentin, G.; Cyganski, R.; Graser, A.; Scholz, J.; Haslauer, E. GIS and Transport Modeling—Strengthening the Spatial Perspective. ISPRS Int. J. Geo-Inf. 2016, 5, 84. (It’s open access.)

by underdark at August 18, 2016 06:58 PM

From GIS to Remote Sensing

From GIS to Remote Sensing Wishes You Happy 2016

In this first post of 2016 I would like to wish you a Very Happy New Year!
2015 has been an intense year for the remote sensing field, especially considering the launch of the Sentinel-2 satellite, which is now acquiring wonderful images such as the following one (Rome, acquired on 18/12/2015).

Image of Rome, acquired by ‪‎Copernicus‬ ‪‎Sentinel-2‬ on December 18th 2015

by Luca Congedo (noreply@blogger.com) at August 18, 2016 11:43 AM

From GIS to Remote Sensing

Video Tutorial: Using the tool Band calc of the Semi-Automatic Classification Plugin

This is a tutorial about the use of the tool Band calc, which allows for raster calculations on bands. In particular, we are going to calculate the NDVI (Normalized Difference Vegetation Index) of a Landsat image, and then apply a condition based on NDVI values in order to refine a land cover classification (see Tutorial 2: Land Cover Classification of Landsat Images), a sort of Decision Tree Classifier.
Band calc can perform multiple calculations in sequence. We are going to apply a mask to every Landsat band in order to exclude cirrus and cloud pixels from the NDVI calculation and avoid anomalous values. In particular, Landsat 8 includes a Quality Assessment Band that can be used for masking cirrus and cloud pixels.
The values that indicate cirrus or cloud pixels with high confidence are (for the description of these codes see the table at http://landsat.usgs.gov/L8QualityAssessmentBand.php):
  • 61440;
  • 59424;
  • 57344;
  • 56320;
  • 53248;
  • 31744;
  • 28672.
In particular, the Quality Assessment Band of the sample dataset includes mainly the value 53248 indicating clouds. Therefore, in this tutorial we are going to exclude the pixels with the value 53248 from all the Landsat bands.
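What Band calc computes here can be illustrated with a small numpy sketch (toy arrays; the actual tool operates on the Landsat bands loaded in QGIS):

```python
import numpy as np

# Toy 2x2 reflectance bands; real values come from the Landsat 8 dataset.
red = np.array([[0.10, 0.12], [0.30, 0.25]])
nir = np.array([[0.40, 0.45], [0.35, 0.28]])
qa = np.array([[53248, 0], [0, 0]])  # 53248 flags cloud pixels in this scene

# Mask cloud pixels, then compute NDVI = (NIR - Red) / (NIR + Red)
cloud = qa == 53248
ndvi = np.where(cloud, np.nan, (nir - red) / (nir + red))
print(ndvi)  # the masked pixel is NaN; the rest hold NDVI values
```

The same condition-then-calculate pattern is what the tutorial applies in Band calc: first exclude pixels whose QA value is 53248, then run the NDVI expression on the remaining bands.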

The video of this tutorial follows.

by Luca Congedo (noreply@blogger.com) at August 18, 2016 11:39 AM