Welcome to Planet OSGeo

January 21, 2018

From GIS to Remote Sensing

How to install QGIS 3 using OSGeo4W in Windows OS

The Semi-Automatic Classification Plugin (SCP) version 6 (codename Greenbelt) will be released very soon.
This new version is compatible with QGIS 3 only; therefore you need to install the QGIS development version until QGIS 3 is officially released.

This post is a brief guide about how to install QGIS 3 in Windows OS using the OSGeo4W installer.

by Luca Congedo (noreply@blogger.com) at January 21, 2018 03:07 PM

Free and Open Source GIS Ramblings

Creating reports in QGIS3

QGIS 3 has a new feature: reports! In short, reports are the good old Atlas feature on steroids.

Let’s have a look at an example project:

To start a report, go to Project | New report. The report window is quite similar to what we’ve come to expect from Print Composer (now called Layouts). The most striking difference is the report panel at the left side of the screen.

When a new report is created, the center of the report window is empty. To get started, we need to select the report entry in the panel on the left. By selecting the report entry, we get access to the Include report header and Include report footer checkboxes. For example, pressing the Edit button next to the Include report header option makes it possible to design the front page (or pages) of the report:

Similarly, pressing Edit next to the Include report footer option enables us to design the final pages of our report.

Now for the content! We can populate our report with content by clicking on the plus button to add a report section or a “field group”. A field group is basically an Atlas. For example, here I’ve added a field group that creates one page for each country in the Natural Earth countries layer that I have loaded in my project:

Note that in the right panel you can see that the Controlled by report option is activated for the map item. (This is equivalent to a basic Atlas setup in QGIS 2.x.)

With this setup, we are ready to export our report. Report | Export Report as PDF creates a 257-page document:

As configured, the pages are ordered by country name. This way, for example, Australia ends up on page 17.

Of course, it’s possible to add more details to the individual pages. In this example, I’ve added an overview map in Robinson projection (to illustrate again that it is now possible to mix different CRS on a map).

Happy QGIS mapping!

by underdark at January 21, 2018 09:00 AM

January 20, 2018

Just van den Broecke

Emit #1 – Into Spatiotemporal

Smart Emission Googled for Photos

One of my new year’s resolutions for 2018 was to “blog more”. I am not very active on the well-known social media: a bit tired of Twitter, never really into Facebook, a bit of LinkedIn. OSGeo mailing lists, GitHub and Gitter are where you can find me most (thanks Jody, for reminding!). And I read many blogs, especially on my Nexus 10 tablet and Fairphone 2 via the awesome Feedly App. If you have not heard of Feedly (or any other blog-feed collectors), stop here and check out Feedly! Most blogs (like this one) provide an RSS/Atom feed. Via Feedly you can search for and add RSS feeds and thus create your own “reading table”. My favorite feeds are related to Open Source Geospatial, Python and IoT.

Feedly shown in web browser

Enough sidestepping: my goal is to share tech around the Open Source Smart Emission Platform (SE Platform) in a series of posts, dubbed ‘Emits’. This is Emit #1. Since 2014 I have been working on several projects, often through Geonovum and recently via the European Union Joint Research Centre (JRC), that deal with the acquisition, management, web-API-unlocking and visualization of environmental sensor-data, mainly for Air Quality (AQ).

Smart Emission Googled

What made these projects exciting for me is that they brought together many aspects and technologies (read: Open Source projects and OSGeo software) I had been working on through the years. Also, it was the first time I got back into Environmental Chemistry, for which I hold a master’s degree from the University of Amsterdam, co-authoring some publications, yes, many many years ago.

So what is the Smart Emission Platform and what makes it exciting and relevant? In a nutshell (read the tech doc here): the goal of the SE Platform is to facilitate the acquisition (harvesting) of sensor-data from a multitude of sensor devices and make this data available via standardized formats and web-APIs (mainly: OGC Standards) and viewers. The SE Platform originates from what is now called the award-winning Smart Emission Nijmegen project (2015-2017). Quoting from the paper “Filling the feedback gap of place-related externalities in smart cities”:

“…we present the set-up of the pilot experiment in project “Smart Emission”, constructing an experimental citizen-sensor-network in the city of Nijmegen. This project, as part of research program ‘Maps 4 Society,’ is one of the currently running Smart City projects in the Netherlands. A number of social, technical and governmental innovations are put together in this project: (1) innovative sensing method: new, low-cost sensors are being designed and built in the project and tested in practice, using small sensing-modules that measure air quality indicators, amongst others NO2, CO2, ozone, temperature and noise load. (2) big data: the measured data forms a refined data-flow from sensing points at places where people live and work: thus forming a ‘big picture’ to build a real-time, in-depth understanding of the local distribution of urban air quality (3) empowering citizens by making visible the ‘externality’ of urban air quality and feeding this into a bottom-up planning process: the community in the target area get the co-decision-making control over where the sensors are placed, co-interpret the mapped feedback data, discuss and collectively explore possible options for improvement (supported by a Maptable instrument) to get a fair and ‘better’ distribution of air pollution in the city, balanced against other spatial qualities. ….”

So from the outset the SE Platform is geared towards connecting citizen-owned sensor devices. Many similar programs and initiatives are currently evolving, often under the flag of Citizen Science and Smart Cities. Within the Netherlands, where the SE Nijmegen project originated, the Dutch National Institute for Public Health and the Environment (RIVM) was an active project partner, and still stimulates citizens measuring Air Quality via a project and portal: “Together Measuring Air Quality”. In the context of discussions on Air Quality, climate change and lowering budgets for governmental environmental institutions, citizen-participation becomes more and more relevant. A whole series of blogs could be devoted to social and political aspects of Citizen Science, but I will stick to tech-stuff here.

What made working on the SE Nijmegen project exciting and challenging, is that I was given time and opportunity by the project partners (see pic) to not just build a one-time project-specific piece of software, but a reusable set of Open Source components: the Smart Emission Platform (sources on GitHub).

Having had some earlier experience in the Geonovum SOSPilot project (2014-2015), investigating, among others, the OGC Sensor Observation Service to unlock RIVM AQ data (LML), I was aware of the challenges of dealing with what can be called Spatiotemporal (Big) Data.


The figure below shows The Big Picture of the SE Platform. Red arrows denote the flow of data: originating from sensor devices, going through Data Management (ETL), unlocked via various web-APIs, and finally “consumed” in client-apps and viewers.


There are many aspects of the SE Platform that can be expanded; these are for upcoming Emits. For now, a summary of some of the challenges and applied technologies, to be detailed later:

  • raw data from sensors: requires refinement: validation/calibration/aggregation
  • dealing with Big Data that is both spatial (location-based) and temporal (time-based)
  • applying an Artificial Neural Network (ANN) for sensor-data calibration
  • databases for Spatiotemporal data: PostGIS and InfluxDB and TICK Stack
  • applying the Stetl framework for all data management (ETL)
  • metadata for sensors and sensor networks, always a tough and often avoided subject
  • connecting the Open Hardware EU JRC AirSensEUR AQ sensor-box to the SE Platform
  • using OGC WMS (with Dimensions for Time) and WFS for viewing and downloading sensor data
  • is OGC Sensor Observation Service (SOS) and SWE still viable?
  • how powerful is the OGC SensorThings API (STA) standard?
  • deployment with Docker and Docker Compose
  • Docker and host systems monitoring: Prometheus + Grafana
  • OGC Services Monitoring with GeoHealthCheck
  • Visualizations: custom viewers with Heron/Leaflet/OpenLayers, Grafana dashboards
  • from development to test and production: Vagrant+VirtualBox, Ubuntu, Docker
  • using component-subsets of the platform for small deployments
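
The first bullet above (raw sensor values need validation, calibration and aggregation before publication) can be sketched as a toy refinement step. This is not the actual Stetl/SE Platform code; the plausibility threshold, hourly granularity and linear calibration are made up for the example:

```python
from statistics import mean

def refine(raw_readings, calibrate=lambda v: v):
    """Validate, calibrate and aggregate raw sensor readings.

    raw_readings: (hour, value) pairs. Values outside a plausible
    range are dropped (validation), survivors go through a calibration
    function, and the result is averaged per hour (aggregation)."""
    by_hour = {}
    for hour, value in raw_readings:
        if not 0 <= value <= 1000:   # validation: drop implausible values
            continue
        by_hour.setdefault(hour, []).append(calibrate(value))
    return {hour: mean(vals) for hour, vals in by_hour.items()}

raw = [(10, 40.0), (10, 44.0), (10, -5.0), (11, 39.0)]
print(refine(raw, calibrate=lambda v: v * 1.02))
```

The real platform replaces the toy `calibrate` with an Artificial Neural Network, as listed above.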

Monitoring SE Docker Containers: Prometheus+cAdvisor+Grafana

A lot of stuff to uncover; hopefully I have piqued your interest if you have read all the way to here. I will try to treat one aspect or technology in each subsequent Emit post. And of course the entire SE Platform is Open Source (GNU GPL), so you are free to download and experiment, and maybe even contribute.


by Just van den Broecke at January 20, 2018 05:02 PM

Free and Open Source GIS Ramblings

Freedom of projection in QGIS3

If you have already designed a few maps in QGIS, you are probably aware of a long-standing limitation: Print Composer maps were limited to the project’s coordinate reference system (CRS). It was not possible to have maps with different CRS in a composition.

Note how I’ve been using the past tense? 

Rejoice! QGIS 3 gets rid of this limitation. Print Composer has been replaced by the new Layout dialog which – while very similar at first sight – offers numerous improvements. But today, we’ll focus on projection handling.

For example, this is a simple project using WGS84 as its project CRS:

In the Layouts dialog, each map item now has a CRS property. For example, the overview map is set to World_Robinson while the main map is set to ETRS-LAEA:

As you can see, the red overview frame in the upper left corner is curved to correctly represent the extent of the main map.

Of course, CRS control is not limited to maps. We also have full freedom to add map grids in yet another CRS:

This opens up a whole new level of map design possibilities.

Bonus fact: Another great improvement related to projections in QGIS3 is that Processing tools are now aware of layers with different CRS and will actively reproject layers. This makes it possible, for example, to intersect two layers with different CRS without any intermediate manual reprojection steps.
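
To picture what such a reprojection step does under the hood, here is the well-known forward formula for spherical Web Mercator. QGIS delegates this work to the PROJ library; this toy version only illustrates what transforming a coordinate between CRS means:

```python
import math

R = 6378137.0  # spherical Earth radius used by Web Mercator, in metres

def to_web_mercator(lon_deg, lat_deg):
    """Forward spherical Web Mercator (EPSG:3857-style) transform:
    degrees of longitude/latitude in, projected metres out."""
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

x, y = to_web_mercator(16.37, 48.21)  # roughly Vienna
print(round(x), round(y))
```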

Happy QGIS mapping!

by underdark at January 20, 2018 02:06 PM

January 19, 2018

Paul Ramsey

Changing the Direction of Government IT

I love that the UK government review of IT outsourcing deals has been nicknamed the “Ocean Liner report”, in reference to the amount of time it takes to change the direction of a large vessel.

The UK is hardly alone in having dug a deep hole of contractual obligation and learned helplessness, but they were among the first to publicly acknowledge their problem and start work on fixing it. So the rest of us watch with great interest, and try to formulate our own plans.

This afternoon I was invited to talk about technology directions to an audience of technical architects in the BC government. It was a great group, and they seemed raring to go, by and large.


I titled my talk “Let’s Get Small” in homage to my favourite comedian, Steve Martin.

Our collective IT ocean liners will be slow to change course, but step one is turning the wheel.

January 19, 2018 04:00 PM

Fernando Quadro

Testing software for Spatial Big Data – Part 5

This is the last post in the series titled “Testing software for Spatial Big Data”. To wrap up, let’s talk a little about publishing the information through a web map application, which was developed using Leaflet, although any other web mapping library, such as OpenLayers, could have been used.

1. Web Mapping with Leaflet

This viewer can draw the map in two styles, either as a heatmap or as a thematic layer. It draws all the observations or measurements for a single date, or even across all available dates (click on the image to see the video).

It also lets us check performance, since it combines the spatial and the temporal filter in a single query.

The easiest, and perhaps ideal, option would have been a client application issuing WMS GetMap requests, but instead I run requests against GeoServer to fetch the geometries and draw them on the client. We could use WFS GetFeature requests with the current map bounds (generating a spatial BBOX filter) plus a PropertyIsEqualTo filter for a specific date. But we must not forget that we are managing big data stores that can produce GML or JSON responses of huge size, with many thousands of records.

To avoid this problem, a pair of WPS processes was developed, named “geowave:PackageFeatureLayer” and “geowave:PackageFeatureCollection”, which return the response as a compressed binary stream. You could use another packaging scheme, for example returning a special image whose pixels encode feature geometries and attributes. It is all about minimizing the size of the information and speeding up its digestion in the client application.

The WPS parameters are: first, the name of the layer in the current GeoServer catalog (a SimpleFeatureCollection for the geowave:PackageFeatureCollection process), a BBOX, and an optional CQL filter (in this case I am sending something like “datetime_begin = 2017-06-01 12:00:00”).
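
As a rough illustration (not the actual module code, which is linked at the end of the article), a WPS Execute request carrying those three inputs could be composed like this; the input names "layer", "bbox" and "cqlFilter" are assumptions, the real names come from the process’s DescribeProcess response:

```python
from xml.sax.saxutils import escape

def wps_execute_xml(process, inputs):
    """Compose a simplified WPS 1.0.0 Execute request body.
    Input identifiers are taken verbatim from the inputs dict;
    they are illustrative, not GeoWave's actual parameter names."""
    parts = "".join(
        "<wps:Input><ows:Identifier>{0}</ows:Identifier>"
        "<wps:Data><wps:LiteralData>{1}</wps:LiteralData></wps:Data>"
        "</wps:Input>".format(escape(k), escape(str(v)))
        for k, v in inputs.items()
    )
    return (
        '<wps:Execute service="WPS" version="1.0.0" '
        'xmlns:wps="http://www.opengis.net/wps/1.0.0" '
        'xmlns:ows="http://www.opengis.net/ows/1.1">'
        + "<ows:Identifier>" + escape(process) + "</ows:Identifier>"
        + "<wps:DataInputs>" + parts + "</wps:DataInputs></wps:Execute>"
    )

body = wps_execute_xml("geowave:PackageFeatureLayer", {
    "layer": "NO2-measures",
    "bbox": "-10,35,30,60",
    "cqlFilter": "datetime_begin = '2017-06-01 12:00:00'",
})
print(body[:120])
```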

I will not explain the code in detail, as that is beyond the scope of this guide. If you want, you can study it at the GitHub link at the end of the article.

The client application runs a WebWorker that issues a WPS request to our GeoServer instance. The request executes the “geowave:PackageFeatureLayer” process to minimize the response size. The WebWorker then decompresses the binary stream, parses it into JavaScript objects with points and attributes, and finally returns them to the browser’s main thread for drawing. The client application renders these objects either with the Heatmap.js library or by drawing on an HTML5 Canvas to create a thematic layer. For this second style, the application creates some textures on the fly from colored icons to use when drawing the points. This trick lets the map show many thousands of points quite quickly.
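
A toy model of that pack/unpack round trip, written in Python for brevity (the browser side is JavaScript, and the real geowave:PackageFeatureLayer wire format is not documented here; this only shows the compress-binary-then-parse idea):

```python
import struct
import zlib

def pack_points(points):
    """Serialize (lon, lat, value) triples as little-endian doubles
    and deflate them -- the server side of the round trip."""
    raw = struct.pack(f"<{3 * len(points)}d",
                      *(c for p in points for c in p))
    return zlib.compress(raw)

def unpack_points(blob):
    """Inverse of pack_points: inflate and regroup into triples --
    the client (WebWorker) side of the round trip."""
    raw = zlib.decompress(blob)
    flat = struct.unpack(f"<{len(raw) // 8}d", raw)
    return [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]

pts = [(2.35, 48.86, 41.5), (-3.7, 40.42, 55.0)]
blob = pack_points(pts)
print(len(blob), unpack_points(blob) == pts)
```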

If our client application needed to draw millions of points, we could dive into WebGL and the great WebGL Heatmap library, or into some fantastic demos of how to build a map with WebGL.

The source code of the WPS module and the client application is available here. I hope you enjoyed the article and managed to understand a bit about how to use large data volumes (Big Data) with geographic intelligence (GIS).

This article and its other parts (1, 2, 3, 4) are a free translation of the article originally written by Alvaro Huarte on his LinkedIn profile.

by Fernando Quadro at January 19, 2018 10:30 AM

January 18, 2018

gvSIG Team

gvSIG as an educational innovation tool for improving teaching processes

Using gvSIG as an educational tool is nothing new; there are already successful experiences such as the well-known ‘gvSIG Batoví’ project in Uruguay. However, there is still a lot of work to do to normalize the use of Geographic Information Systems in classrooms. And at this point I want to highlight the importance of using free software, and of educational institutions not creating technological dependence in students from an early age. The proprietary software multinationals know this well, and in the world of geomatics they do not hesitate to promote initiatives that set that dependence in motion.

That is why it is important to publicize work such as that by Gisela Boixadera-Duran, entitled ‘Propuesta de innovación para la Unidad Didáctica de Ecología y Ecosistemas de 4º de ESO: Introducción de los Sistemas de Información Geográfica en Secundaria como herramienta didáctica’ (An innovation proposal for the Ecology and Ecosystems teaching unit of the 4th year of ESO: introducing Geographic Information Systems in secondary education as a didactic tool).

The abstract reads:

“One of the new challenges of today’s society that schools face is the use of Information and Communication Technologies (ICT) in a large part of the actions and areas of our daily lives. The use of ICT as an innovative pedagogical practice has given rise to a large number of projects that foster meaningful learning among students. In this sense, this master’s thesis addresses the use of Geographic Information Systems (GIS) as an innovative didactic tool to deliver the Ecology and Ecosystems teaching unit in Block 4 of the Biology and Ecology subject of the 4th year of ESO, in accordance with Royal Decree 1105/2014.”

You can access the full PDF at this link:


by Alvaro at January 18, 2018 05:53 PM

gvSIG Team

GIS applied to Municipality Management: Module 6 ‘Add-ons manager’

The video of the sixth module is now available, in which we are going to talk about the Add-ons manager.

Each version of gvSIG carries a large number of extensions and symbol libraries that are not installed by default in order to avoid overloading the application. The user can choose when to install them, and the Add-ons Manager can be used for that.

Besides, if a new gvSIG extension or symbol library is published after the final version release, it is not necessary to wait for the next version to be able to use it. With the Add-ons Manager we can connect to the server and install it.

Also, if an error has been fixed after releasing a final version, the plugin can be updated without having to release a full version.

The Add-ons Manager offers us three options:

  • Standard installation: plugins included in the downloaded installation package but not installed by default.
  • Installation from file: used when we have downloaded the extension package, script or symbol library to our disk. It is very useful when we create a symbol library and want to share it with the rest of the users in our organization.
  • Installation from URL: plugins available on the server. It is normally used for packages published after a final version, either new functionality or a fix for an error detected after publishing the final version.

For this module it is not necessary to download any cartography beforehand.

Here you have the third videotutorial of this sixth module:

Related posts:

by Mario at January 18, 2018 10:44 AM

Fernando Quadro

Testing software for Spatial Big Data – Part 4

In this post we will talk a little about GeoServer and its integration with GeoWave. Although regular readers of this blog already know GeoServer quite well, here is a brief description.

1. GeoServer

GeoServer is an open source server for sharing geospatial data. Designed for interoperability, it publishes data from any spatial data source using open standards. GeoServer is an Open Geospatial Consortium (OGC) compliant implementation of a number of open standards, such as WFS (Web Feature Service), WMS (Web Map Service) and WCS (Web Coverage Service).

Additional formats and publication options are available, including Web Map Tile Service (WMTS) and extensions for Catalog Service for the Web (CSW) and Web Processing Service (WPS).

We use GeoServer to read the layers loaded with GeoWave; the plugin we just added to our GeoServer will let us connect to that data. We can use it like any other layer type! 🙂

To configure access to the distributed data store, we have two options:

– Using the GeoServer administration panel as usual:

– Using GeoWave’s “gs” command to register the stores and layers in a GeoServer instance.

Since we are testing things, let’s use the second option. The first step is to tell GeoWave which GeoServer instance we want to configure.

> %geowave% config geoserver 
  -ws geowave -u admin -p geoserver http://localhost:8080/geoserver

Similar to what we would do in the GeoServer admin interface, we run two commands to add, respectively, the desired store and layer.

> %geowave% gs addds -ds geowave_eea -ws geowave eea-store
> %geowave% gs addfl -ds geowave_eea -ws geowave NO2-measures

As you may notice, the spatial reference system of the layer is EPSG:4326. If we view the map with GeoServer’s OpenLayers client…

Performance is quite decent (click on the image above to see the video), considering that I am running this on a “not very powerful” PC, with Hadoop working in single-node mode and drawing all the NO2 measurements for all available dates (~5 million records). The spatial index works properly: smaller map extents return faster responses. Also, if we run a WFS request with a temporal filter, we can verify that the temporal index performs correctly, so GeoServer does not scan every record of the layer.
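
That temporal-filter test can be reproduced with a WFS GetFeature request combining a spatial and a temporal condition. A sketch of how such a request URL could be built; the geometry attribute name “geom” and the layer/attribute names are assumptions for illustration:

```python
from urllib.parse import urlencode

def wfs_getfeature_url(base, layer, bbox, cql=None):
    """Build a WFS GetFeature request URL with a spatial and an
    optional attribute filter merged into one CQL_FILTER (GeoServer
    does not accept BBOX= and CQL_FILTER= in the same request).
    "geom" is an assumed geometry attribute name."""
    flt = "BBOX(geom, {})".format(", ".join(str(c) for c in bbox))
    if cql:
        flt += " AND " + cql
    params = {
        "service": "WFS",
        "version": "1.1.0",
        "request": "GetFeature",
        "typeName": layer,
        "CQL_FILTER": flt,
    }
    return base + "?" + urlencode(params)

url = wfs_getfeature_url(
    "http://localhost:8080/geoserver/wfs",
    "geowave:NO2-measures",
    (-10, 35, 30, 60),
    "datetime_begin = '2017-06-01 12:00:00'",
)
print(url)
```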

The GeoWave user guide mentions a special style called “subsamplepoints” (it uses a WPS process named “geowave:Subsample”, implemented by the GeoWave plugin). When drawing a map, this style performs spatial subsampling to speed up rendering. It yields a great performance gain; I recommend you use it for point-type layers.

I also tested loading a polygon-type layer from a shapefile, with no problems: the WMS GetMap and WFS GetFeature requests executed fine. Just one note: the GeoWave loading tool automatically transforms geometries from the original spatial reference system (EPSG:25830 in my case) to geographic coordinates in EPSG:4326.

At this point we have verified that everything fits together, and we could stop here, since this data can already be explored with web mapping libraries (Leaflet, OpenLayers, i3Geo…) or any desktop GIS application (QGIS, gvSIG, etc.).

Would you like to continue? Then don’t miss the last part of this article, where we will talk about web maps.

by Fernando Quadro at January 18, 2018 10:30 AM

January 17, 2018


GRASS GIS as described by a Google Code-In student

The Google Code-In contest is almost over. Today, January 17th, was the last day on which students could submit their work for review. Last night we got one of the last task submissions for GRASS GIS within the contest. We...

January 17, 2018 05:59 PM

Fernando Quadro

Testing software for Spatial Big Data – Part 3

Dear readers, today we continue from the previous post, talking a little about GeoWave.

1. LocationTech GeoWave

GeoWave is a library that connects the scalability of distributed computing frameworks and key-value stores (Hadoop + HBase in this case) with geospatial software to store, retrieve and analyze massive geospatial datasets. It is a great tool 🙂

From a developer’s point of view, this library implements a vector data provider for the GeoTools toolkit to read features (geometry and attributes) from a distributed environment. When we add the corresponding plugin to GeoServer, the user will see new stores for configuring the supported types of distributed datasets.

Nowadays, GeoWave supports three types of distributed data stores: Apache Accumulo, Google Bigtable and HBase; we will use the last of them.

Let’s leave GeoServer for later. According to the GeoWave user and developer guides, we have to define the primary and secondary indices that the layers will use; then we can load information into our data store.

As described in the developer guide, we build the GeoWave toolkit with Maven to save geographic data into HBase:

> mvn package -P geowave-tools-singlejar

And the plugin for GeoServer:

> mvn package -P geotools-container-singlejar

I defined my own environment variable with a command to run the GeoWave processes as conveniently as possible:

> set GEOWAVE=
  java -cp "%GEOWAVE_HOME%/geowave-deploy-0.9.6-SNAPSHOT-tools.jar" 

Now we can easily run commands by typing %geowave% […]. Let’s check the GeoWave version:

> %geowave% --version

OK, let’s register the spatial and temporal indices that our layer needs. The client application will filter data using a spatial filter (BBOX) and a temporal filter to fetch only NO2 measurements for a specific date.

Now, register both indices:

> %geowave% config addindex 
  -t spatial eea-spindex --partitionStrategy ROUND_ROBIN

> %geowave% config addindex 
  -t spatial_temporal eea-hrindex --partitionStrategy ROUND_ROBIN 
  --period HOUR
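
A toy illustration of the idea behind a combined spatial_temporal index with an HOUR period: bin space and time together into the row key, so that records close in both dimensions sort next to each other. Real GeoWave keys are built from space-filling curves; the 1-degree cell and key layout below are made up:

```python
import math
from datetime import datetime

def st_key(lon, lat, ts, period_hours=1):
    """Toy spatiotemporal row key: a coarse 1-degree spatial cell
    concatenated with an hour bin. Nearby records in space AND time
    get identical (or adjacent) keys, which keeps range scans local."""
    cell = f"{math.floor(lon) + 180:03d}{math.floor(lat) + 90:03d}"
    time_bin = int(ts.timestamp() // (3600 * period_hours))
    return f"{cell}-{time_bin}"

a = st_key(2.35, 48.86, datetime(2017, 6, 1, 12, 5))
b = st_key(2.30, 48.80, datetime(2017, 6, 1, 12, 40))
print(a, b, a == b)  # same cell and hour bin -> identical keys
```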

And add a “store”, in GeoWave terminology, for our new layer:

> %geowave% config addstore eea-store 
  --gwNamespace geowave.eea -t hbase --zookeeper localhost:2222

Note that in the last command, 2222 is the port number on which Zookeeper is published.

Now we can load the data. Our input is a set of CSV files, so I will use the “-f geotools-vector” option to let GeoTools find the vector provider it should use to read the data. Other formats are supported and, of course, we can develop a new provider to read our own specific data types.

To load a CSV file:

> %geowave% ingest localtogw 
  -f geotools-vector 
  ./mydatapath/eea/NO2-measures.csv eea-store eea-spindex,eea-hrindex

OK, data loaded; no problems so far, right? However, the GeoTools CSVDataStore has some limitations when reading files. The current code does not support date/time attributes (nor boolean attributes); it generates all of them as strings (text). This is unacceptable for our requirements, since the measurement date must be a properly typed attribute for the index, so the original Java code was changed. Also, to compute the appropriate value type of each attribute, the reader scans every row in the file; this is the safest way, but it can be very slow when reading big files with many thousands of rows. If the file has a consistent schema, we can read a small set of rows to compute the types, so that was changed as well. We then have to rebuild GeoTools and GeoWave. You can download the changes from this fork of GeoTools.
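
The second change (inferring attribute types from a small sample of rows instead of scanning the whole file) can be sketched like this. This is not the actual GeoTools patch, and the column names and date format are assumptions:

```python
import csv
import io
from datetime import datetime

def guess(value):
    """Most specific type label a single cell value parses as."""
    for parse, label in ((int, "int"), (float, "float")):
        try:
            parse(value)
            return label
        except ValueError:
            pass
    try:
        datetime.strptime(value, "%Y-%m-%d %H:%M:%S")
        return "datetime"
    except ValueError:
        return "str"

def infer_schema(csv_text, sample_rows=100):
    """Infer one type per column from the first sample_rows rows only,
    assuming the schema is consistent throughout the file. Columns
    whose sampled values disagree degrade to "str"."""
    rows = csv.reader(io.StringIO(csv_text))
    header = next(rows)
    schema = {}
    for i, row in enumerate(rows):
        if i >= sample_rows:
            break
        for name, value in zip(header, row):
            label = guess(value)
            schema[name] = label if schema.get(name, label) == label else "str"
    return schema

sample = ("station,lon,lat,datetime_begin,no2\n"
          "ES1234,-3.70,40.42,2017-06-01 12:00:00,55.0\n"
          "FR5678,2.35,48.86,2017-06-01 12:00:00,41.5\n")
print(infer_schema(sample))
```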

After this detour, let’s get back to the main path of the guide, where we load features into our layer with the “ingest” command. We also included the plugin in a deployed GeoServer instance (it is easy: just copy the “geowave-deploy-xxx-geoserver.jar” library to the “..\WEB-INF\lib” folder and restart).

In the next post we will cover GeoServer, don’t miss it!

by Fernando Quadro at January 17, 2018 12:34 PM

PostGIS Development

PostGIS Patch Releases 2.3.6 and 2.4.3

The PostGIS development team is pleased to provide bug fix releases 2.3.6 and 2.4.3 for the 2.3 and 2.4 stable branches.

Key fixes in these releases are a BRIN upgrade fix, ST_Transform schema qualification (to fix issues with restore, foreign table, and materialized view use), and ClusterKMeans and encoded polyline fixes.

View all closed tickets for 2.4.3 and 2.3.6.

After installing the binaries or after running pg_upgrade, make sure to do:


ALTER EXTENSION postgis UPDATE;
-- if you use the other extensions packaged with postgis, make sure to upgrade those as well
ALTER EXTENSION postgis_topology UPDATE;
ALTER EXTENSION postgis_tiger_geocoder UPDATE;

If you use legacy.sql or legacy_minimal.sql, make sure to rerun the version packaged with these releases.



by Regina Obe at January 17, 2018 12:00 AM

January 16, 2018

Fernando Quadro

Testing software for Spatial Big Data – Part 2

In this post we will talk a little about the software we are going to use in our test, starting with Hadoop and moving on to HBase.

1. Apache Hadoop

Apache Hadoop is, when we Google it a bit… a framework that allows for the distributed processing of large datasets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than relying on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, delivering a highly available service on top of a cluster of computers, each of which may be prone to failure.

HDFS is a distributed file system that provides high-performance access to data across Hadoop clusters. Because HDFS is typically deployed on low-cost hardware, server failures are common. The file system is designed to be highly fault-tolerant, however, facilitating fast data transfer between nodes and allowing Hadoop systems to keep running if a node fails. This lowers the risk of catastrophic failure, even in the event of failures on numerous nodes.

Our test will use Hadoop and its HDFS as the data repository where we save and, finally, publish data for the end-user application. You can read about the project’s features here, or dive into the Internet to learn about it in depth.

I used Windows for my tests. The official Apache Hadoop releases do not include Windows binaries, but you can easily build them with this great guide (it uses Maven) and configure the files needed to run at least a single-node cluster. Of course, a production environment would require us to configure a distributed multi-node cluster, use a ready-made distribution (Hortonworks), or jump to the cloud (Amazon S3, Azure, etc.).

Continuing with this guide: once Hadoop has been built with Maven, the configuration files edited and the environment variables set, we can test that everything is fine by running in the console…

> hadoop version

Next, we start the namenode and datanode daemons, and the “yarn” resource manager.

> call ".\hadoop-2.8.1\etc\hadoop\hadoop-env.cmd"
> call ".\hadoop-2.8.1\sbin\start-dfs.cmd"
> call ".\hadoop-2.8.1\sbin\start-yarn.cmd" 

We can see the Hadoop administration application running on the configured HTTP port, 50070 in my case:

2. Apache HBase

Apache HBase is, Googling again… a NoSQL database that runs on top of Hadoop as a distributed and scalable big data store. This means that HBase can leverage the distributed processing paradigm of the Hadoop Distributed File System (HDFS) and benefit from Hadoop’s MapReduce programming model. It is meant to host large tables with billions of rows and potentially millions of columns, running on a cluster of commodity hardware.
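
HBase’s table model can be pictured as a sorted map of row key → column family → qualifier → value. A toy in-memory caricature of that logical model (nothing to do with the real HBase client API, which is Java, and the row keys below are invented):

```python
from bisect import insort

class ToyBigTable:
    """In-memory caricature of HBase's data model: a map of
    row key -> column family -> qualifier -> value, with row keys
    kept sorted so that range scans are cheap. (No versions,
    regions, WAL or distribution -- just the logical model.)"""

    def __init__(self):
        self._rows = {}
        self._keys = []  # sorted row keys, for range scans

    def put(self, row, family, qualifier, value):
        if row not in self._rows:
            insort(self._keys, row)
            self._rows[row] = {}
        self._rows[row].setdefault(family, {})[qualifier] = value

    def scan(self, start, stop):
        """Yield (row, cells) for start <= row < stop, in key order."""
        for key in self._keys:
            if start <= key < stop:
                yield key, self._rows[key]

table = ToyBigTable()
table.put("station2|2017060112", "m", "no2", 41.5)
table.put("station1|2017060112", "m", "no2", 55.0)
print(list(table.scan("station1", "station3")))
```

Because rows are physically sorted by key, a well-designed row key (as GeoWave builds for spatiotemporal data) turns spatial/temporal queries into cheap contiguous scans.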

You can read here how to get started and install HBase. Once again, we check the product version by running:

> hbase version

Start HBase:

> call ".\hbase-1.3.1\conf\hbase-env.cmd"
> call ".\hbase-1.3.1\bin\start-hbase.cmd"

And see the HBase administration application, on port 16010 in my case:

OK, at this point we have the big data environment working; it is time to set up some tools that add geospatial capabilities, GeoWave and GeoServer. Let’s continue in the next post.

by Fernando Quadro at January 16, 2018 10:59 AM

January 15, 2018

Fernando Quadro

Testing software for Big Data Spatial – Part 1

The goal of this article is to show the results of testing the integration of a Big Data platform with other geospatial tools. It is worth pointing out that the integration of the components used, all of them open source, allows us to publish web services compliant with OGC standards (WMS, WFS, WPS).

This article describes the installation steps, the configuration and the development done to build a mapping application that shows NO2 measurements from approximately 4k European stations over four months (observations were recorded hourly), resulting in around 5 million records. Yes, I know, this dataset does not really look like "Big Data", but it seems large enough to check performance when applications read it using spatial and/or temporal filters (click on the image above to see the video).

The article does not aim to teach deep knowledge of the software used; all of these projects already publish good documentation from both the user's and the developer's point of view. I simply want to share experiences and a simple guide for assembling these software components. For example, the comments about GeoWave and its integration with GeoServer are copied from the product guide on its website.

1. Data schema

The test data was downloaded from the European Environment Agency (EEA). You can search there for information or map viewers from this or other sources, or better still, you can use your own data. GDELT is another interesting project that offers massive datasets.

The test data schema is simple: the input is a set of CSV files (text files with comma-separated attributes) containing point-type geographic coordinates (latitude/longitude) that georeference the sensor, the date of the measurement and the NO2 concentration in the air. There are other secondary attributes, but they are not important for our test.
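As a sketch of this schema, reading one such record might look like the following. The column names and the sample row are made up for illustration; the real EEA files use their own headers:

```python
import csv
import io

# Hypothetical sample matching the schema described above: a point
# coordinate (lat/lon), a timestamp and the measured NO2 value.
sample = io.StringIO(
    "station_id,latitude,longitude,datetime,no2\n"
    "ES0118A,41.3851,2.1734,2017-10-01T08:00:00,42.5\n"
)
reader = csv.DictReader(sample)
records = [
    {
        "station": row["station_id"],
        # GIS convention: a point is (x, y) = (longitude, latitude).
        "point": (float(row["longitude"]), float(row["latitude"])),
        "when": row["datetime"],
        "no2": float(row["no2"]),
    }
    for row in reader
]
```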

2. Software architecture

The test consists of chaining a set of tools, each one offering data and functionality to the next software component in the application architecture. The application workflow starts with Hadoop and its HDFS; HBase maps it as a database; the great GeoWave works as a connector between HBase and the popular GeoServer, which implements several OGC standards; and finally a web client application fetches data to display maps as usual (for example, using Leaflet and the Heatmap.js library).

In the next post we will talk in detail about each of the software components shown in the image above. Don't miss it!

by Fernando Quadro at January 15, 2018 05:37 PM

gvSIG Team

GIS applied to Municipality Management: Module 5.3 ‘Web services (Non-standard services)’

The third video of the fifth module is now available, in which we will talk about how to work with web services that don’t follow the OGC standards in gvSIG Desktop. These web services can be used to complement our maps with different layers.

Among these services we have OpenStreetMap, which gives us access to several layers: for example streets, nautical cartography, railroads, or cartography with different tonalities that can be used as reference cartography on our map.

Other available services are Google Maps and Bing Maps, where we can load different layers.

The requirement to load these layers in gvSIG (until version 2.4) is that the View must be in the EPSG:3857 reference system, the system used by these services.

Besides, in order to load the Bing Maps layers, we will need to obtain a key previously, from the Bing Maps Dev Center.

Once we have these services, we can add our own layers, reprojecting them to the view reference system. In addition, many OGC web services, such as WMS, WFS…, offer their layers in this reference system, so we can overlay them on our Bing Maps, Google Maps or OpenStreetMap layers.
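The EPSG:3857 (Web Mercator) system mentioned above maps longitude/latitude onto a plane with well-known formulas. This is a minimal sketch of the forward projection; the function name is illustrative, and real work would use a projection library:

```python
import math

# Forward EPSG:3857 (Web Mercator) projection: the system used by
# OpenStreetMap, Google Maps and Bing Maps tile services.
R = 6378137.0  # WGS84 semi-major axis, in metres

def to_web_mercator(lon_deg, lat_deg):
    """Project lon/lat degrees to planar metres (x, y)."""
    x = R * math.radians(lon_deg)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y
```

Layers served in this system line up with the base maps without any reprojection at display time, which is why the View must use it.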

Here you have the third videotutorial of this fifth module:

Related posts:

by Mario at January 15, 2018 01:51 PM

January 11, 2018


FOSS4G-fr will take place in May 2018: will you be a sponsor, a speaker, or both?

OSGeo-fr is organizing the next FOSS4G-fr from 15 to 17 May 2018 at the École Nationale des Sciences Géographiques, Marne-la-Vallée - Paris.

A unique networking opportunity, this event is aimed at all the actors of the French-speaking open source geospatial ecosystem: decision-makers, users, developers.

This 3rd edition will run over 3 days: 2 days of conferences preceded by a day of workshops, on deliberately broad and inspiring themes!

At this stage, you can already:
- Save the date!
- Submit a presentation or a workshop;
- Become a sponsor of the event.

L’ensemble des informations sur le site : http://foss4g.osgeo.fr

We are counting on you!

by simo at January 11, 2018 09:07 PM

gvSIG Team

GIS applied to Municipality Management: Module 5.2 ‘Web services (Loading web services from gvSIG)’

The second video of the fifth module is now available, in which we will see how to load web services from gvSIG Desktop. In the first video of this module we saw an introduction on the Spatial Data Infrastructures (SDI), which helped us to understand this new video in a better way.

Many administrations have a large amount of cartography available for users, in many cases as web services accessible from desktop applications or web browsers, which allow us to access this cartography without having to download anything to our disk.

The cartography to follow this video is available at this link.

Here you have the second videotutorial of this fifth module:

Related posts:

by Mario at January 11, 2018 12:46 PM

Jackie Ng

RIP: Autodesk Infrastructure Map Server (2006-2018)

Prepare the burial plot in the Autodesk Graveyard: the news I had long suspected is now official. Autodesk has ceased development of Infrastructure Map Server, the commercial counterpart of MapGuide Open Source.

However unlike Autodesk's other ill-fated products, Infrastructure Map Server has the unique lifeline of being able to live on through the MapGuide Open Source project because AIMS is built on top of MapGuide Open Source. Just because AIMS has died does not mean the same fate has to apply to MapGuide Open Source. This project has been in existence for 12 years and counting. The future of the MapGuide Open Source project can be as bright as the community allows for it.

If you are an AIMS customer wondering what your options are in light of this announcement, you should subscribe to the mapguide-users mailing list (if you haven't already) and share your thoughts, questions and concerns.

If you provide support/consulting/development for MapGuide/AIMS you should also subscribe and advertise your services.

I'll make some announcements on the mailing lists about future plans for MapGuide Open Source.

Rest in peace Autodesk Infrastructure Map Server, formerly known as MapGuide Enterprise (2006 - 2018)

by Jackie Ng (noreply@blogger.com) at January 11, 2018 06:11 AM

January 10, 2018

Fernando Quadro

The transformation of France through Open Data

If an Open Data enthusiast tries to inspire others, he will soon be confronted with a difficult question: what is the impact? Some may convince with long monologues about transparency and innovation potential, but often all that is needed is a few inspiring examples of real-world open data.

Before jumping to the impact of Open Data, let's take a look at some of the most interesting datasets. One of the most visually attractive open datasets in the world is the Archives of the Planet from the Albert Kahn Museum. The French department decided to publish the archive of more than 60,000 photos of places all over the world, taken more than a century ago. On the department's Open Data portal, users can browse a gallery and click on a map to discover the images. Thanks to the API, the museum was easily able to build a new website to showcase this treasure and increased the number of visitors to its site tenfold.

While many portals show the position of parking spaces on streets or in lots, only a few indicate their availability in real time. The French city of Issy-les-Moulineaux, however, manages to do what others fall short of: it produced a real-time sensor dataset on the availability of parking spaces on some of its streets, and went even further by creating a map displaying parking-space availability. Every minute, the platform pulls the data coming from sensors installed in the street surface.

One thing we know is that the easier it is for developers to reuse data, the more likely they are to do so. One example is Rennes, a French city of about 200,000 inhabitants, whose public transport operator (STAR), run by Keolis, published the real-time location of its buses on its Open Data portal. You can learn much more about this case study, but to give a hint about the results, the company currently lists a total of seven transport apps built by developers from the community.

Although impact is often the desired goal, it does not necessarily motivate all the employees who are asked to publish datasets. After all, Open Data is regarded as additional work whose added value is hard to project. Surprisingly, however, once done, Open Data can also bring important benefits to an organization.

As one of the earliest adopters of open data in France, the aforementioned city of Issy-les-Moulineaux decided to publish its financial budget in 2011 to increase transparency. They pushed the data to the portal and asked a web agency to create a dedicated website presenting the data in a user-friendly way, simply by embedding the charts coming from the portal. This way, they were free to provide excellent descriptive context for their budget data. Their trick: the charts are synchronized with each dataset, so when the data is updated every year, the charts change too. Thus, the city invested only once in a development that they are able to replicate every year with the most up-to-date data.

Similarly, the French electricity provider ENEDIS is making use of its Open Data portal for open external communication. The interactive visualizations presented on its main corporate website were developed through the set of APIs generated by the portal, saving the company major development costs.

When the French Ministry of Agriculture looked for a simple search tool to display companies selling chemical agricultural products to farmers and consumers, they had the option of working with a consulting firm or relying on their portal. Thanks to easy-to-use widgets, the Ministry created a dashboard listing all the companies, retail points and related information on a map. The project took three days to set up, and it is also being used as an internal point of reference.

Publishing data requires organizations to rethink their internal data management strategy. Today, Open Data is still often regarded as "extra work" that has to be done to tick a box. Many imagine bulky portals with downloadable files instead of dynamic data that can be explored in interactive visualizations and accessed in different formats and through dataset APIs. Indexing the data records themselves (as opposed to files) and turning them into APIs allows organizations to work with their own data in a completely new way. Instead of sending files from one employee to another (or uploading them to a virtual drive), the data itself can be shared.

From a technical point of view, it is possible to create a central access point for an organization while giving different users different levels of access, breaking down data silos. This ensures not only access to the latest version of the data within an organization, but also its easy reuse through APIs in dashboards or other web services. Therefore, it is the organizations themselves that benefit most from an optimized data management strategy. And finally, opening this data to the rest of the world often requires little more than a simple mouse click.

It is great to see how much Open Data is achieving in France, especially considering that these are just a few examples; we know the possibilities are endless. I hope that one day we reach this maturity here in Brazil, with real data made available so that we can explore it and build products that help the population.

This is a free translation of the original article written by Christina Schönfeld on the OpenDataSoft website.

Fonte: OpenDataSoft

by Fernando Quadro at January 10, 2018 07:43 PM

January 08, 2018

From GIS to Remote Sensing

Announcing the release of the new Semi-Automatic Classification Plugin

I am very pleased to announce the release date of the new Semi-Automatic Classification Plugin (SCP) version 6 (codename Greenbelt).
This new SCP version, which is compatible with QGIS 3 only, will be released on the:
 22 January 2018

The Semi-Automatic Classification Plugin (SCP) version 6 has the codename Greenbelt (Maryland, USA), the location of NASA's Goddard Space Flight Center, which had a key role in Landsat satellite development and will lead the development of the Landsat 9 space and flight segments (Landsat 9 is to be launched in 2020).

Main interface

by Luca Congedo (noreply@blogger.com) at January 08, 2018 11:24 AM

gvSIG Team

GIS applied to Municipality Management: Module 5.1 ‘Web services (Introduction to SDI)’

The fifth module of this course deals with access to web services from gvSIG. In this first part we will introduce a fundamental concept for the efficient management of geographic information: Spatial Data Infrastructures (SDI). SDIs are very important, and countries and regions around the world are increasingly legislating to make their implementation effective in all the administrations that generate digital geographic information.

An SDI is considered the ideal system to manage the geographic information of an organization and, of course, of an entire municipality. In future modules of this course we will talk about gvSIG Online, the free solution to set them up. In the current module we will see how to work with the web map services that an SDI can provide, from the desktop GIS.

Currently, a large number of administrations offer their cartography publicly so that it can be loaded through web services. Thanks to this, it is possible to access these services from gvSIG Desktop, which allows us to load the cartography into our project without having to download anything to disk.

To understand this part of gvSIG better, we will start with an introductory video on Spatial Data Infrastructures, where we explain what a web service is and give some links where these available services are collected.

In this module it is not necessary to download any cartography, since it is a totally theoretical video.

Here you have the first videotutorial of this fifth module:

Related posts:

by Mario at January 08, 2018 10:50 AM

gvSIG Team

Recording of the Geostatistics with gvSIG workshop held at the UMH in Elche

The recording of the Geostatistics with gvSIG workshop is now available. It was given during the event held at the Miguel Hernández University of Elche, Spain, on December 13, 2017, as part of the gvSIG Chair.

At this event, apart from the workshops on the application and the talk about the gvSIG Suite, the awards were given to the winning projects of the 2017 gvSIG Chair.

The video gives a brief introduction on how to run R code from the gvSIG Scripting Module. The R programming language, oriented towards statistics and data analysis, opens up a wide range of possibilities for processing spatial data, complementing those already existing in gvSIG or those developed from Scripting with Python.

The example shown performs a bulk read of CSV files of crimes in the city of London, taken from the UK Data Police open data portal, which we transform into a shapefile layer so that it can be explored from gvSIG.

If you have any questions, you can ask here or on the Mailing Lists.

by Óscar Martínez at January 08, 2018 07:21 AM

Marco Bernasocchi

PostgreSQL back end solution for quality assurance and data archive

Did you know that the possibilities for building a full QGIS back end solution for quality assurance and archiving in PostgreSQL are immense? SQL has its well-known limitations, but with a little bit of creativity you can make quite a nice
See more ›

by David Signer at January 08, 2018 06:06 AM

January 04, 2018

gvSIG Team

GIS applied to Municipality Management: Module 4.2 ‘Attribute tables (joining tables)’

In this second video of the fourth module we will continue talking about attribute tables, showing how to join the alphanumeric information of a vector layer with an external table.

In our city council we may have external information in a table, and it would be interesting to georeference it, that is, to join the information in that table with the alphanumeric information of a vector layer.

For example, if we have a table with the population of each neighbourhood in our municipality, and we also have a vector layer with the neighbourhoods in our GIS, we can add the population from the first table to the graphic layer. For that we need a field in both tables with common values. If we use the names of the neighbourhoods, they may differ between the tables (with or without the article…), so we could get errors. It is therefore recommended to use a numeric field whose values are the same in both tables (we can use the neighbourhood code).
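Conceptually, the join gvSIG performs here is a key lookup between the two tables. This sketch uses made-up neighbourhood data purely to illustrate joining by a shared numeric code:

```python
# Illustrative sketch (made-up data): joining an external population
# table onto a vector layer's attribute table by a shared numeric code.
layer = [
    {"code": 1, "name": "Centro"},
    {"code": 2, "name": "Norte"},
]
population = [
    {"code": 1, "pop": 12500},
    {"code": 2, "pop": 8300},
]

# Index the external table by the join key, then attach the matching
# value to each feature; unmatched codes simply get None.
pop_by_code = {row["code"]: row["pop"] for row in population}
joined = [{**feat, "pop": pop_by_code.get(feat["code"])} for feat in layer]
```

Joining on a numeric code avoids the spelling mismatches that name fields can introduce.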

Here you have the second videotutorial of this fourth module:

Related posts:

by Mario at January 04, 2018 01:15 PM

January 03, 2018

Stefano Costa

The books I read in 2016

Let me say right away that in 2016 I read little and badly, and let's put the blame on the fact that in the first part of the year I wrote quite a bit instead (enough to finish my PhD thesis, to be clear), while in the second part of the year I devoted time to studying for a public competitive exam (which then went well).

Let me add that shortly after the end of 2016, as some of these readings suggest, I became a dad, and instead of reading I helped the mum-to-be with her baby bump as best I could (except for one case in which I read the same book aloud to them many times).

  • James Ellroy, Perfidia is my favourite, and it led me to old maps of Los Angeles (the best)
  • Wu Ming, L'invisibile ovunque
  • Joe R. Lansdale, Rumble Tumble, which Andrea Bellotti lent me and which I still haven't given back
  • Roberto Negro, Bocca di rosa
  • Loredana Lipperini, Ancora dalla parte delle bambine
  • Julien Blanc-Gras, Padri in attesa. Il giornale di bordo di un padre nella Terra della gravidanza
  • Chiara Cecilia Santamaria, Quello che le mamme non dicono
  • Emma Mora, L'orsacchiotto Gedeone (a few dozen times)

by Stefano Costa at January 03, 2018 10:26 PM

January 02, 2018


Ongoing Google Code-in contest

High-school students contributing to GRASS GIS through the Google Code-in contest

January 02, 2018 09:40 PM

Paul Ramsey

Open Source for/by Government

Update: Barcelona is going all-open. Sounds extreme, but sometimes you’ve got to…

“You’ve got to spend money to make money”, I once confidently told a business associate, on the occasion of paying him a thousand dollars to manually clean some terrible data for me. In the event, I was right: that cleaned data paid for itself 10 times over in the following years.

I’m still the only person with a GIS file for 1996 BC election results by voting area, and the jealousy is killing you.

Governments can play the game too, but it seems like they all end up tilling the same landscape. There’s no shortage of governments trying to create their own Silicon Valley clusters, usually through the mechanisms of subsidizing venture capital funding (via tax breaks or directly) and increased spending on R&D grants to academia. Spending money to “hopefully” make money.

There’s an under-recognized niche available, for a government willing to go after it.

Venture capitalists are (understandably) interested in having their investments create “intellectual property”, that can be patented and monopolized for outsized profits. By following the VC model of focussing on IP formation, governments are missing out on another investment avenue: the formation of “intellectual capital” in their jurisdictions.

VCs don’t like intellectual capital because it’s too mobile. It lives between the ears of employees, who can change employers too easily, and require expensive golden handcuffs to lock into place. They can monetize intellectual property in an acquisition or public offering, but they cannot monetize intellectual capital.

Governments, on the other hand, understand that by investing in universities and colleges, they are creating intellectual capital that will tend to stick around in their jurisdictions (for all the public wailing about “brain drain”, the fact is that people don’t move around all that much).

Open Source for/by Government

Investment in open source technology is a potential gold mine for creating intellectual capital, but governments have been steadfastly ignoring it for years. There is also a big first mover advantage waiting for the first governments to get into the game:

  • Instead of “buying off-the-shelf” for government information systems, build on existing OSS, or start OSS from scratch, using local talent (in-house or private sector).
  • Deliberately build with enough generality to allow use in other jurisdictions.
  • Become the first reference customer for the project. Send your local talent out to evangelize it. Encourage them to commercialize their support and services.
  • Wash, rinse, repeat.

Is this risky? Yes. Will it result in some failed projects? Yes. Will it be more expensive than the “safe” alternative? Sometimes yes, sometimes no. Will it result in increased revenues flowing into your jurisdiction? Almost certainly, if committed to and carried out across a number of projects.

When the first library in BC adopted the Evergreen open source library software, they probably weren’t envisioning a Canada-wide open source cooperative, bringing support and consulting dollars into the province, but that’s what they did, by accident. When the Atlanta Public Library started the project, they probably weren’t thinking a local company would end up selling support and expertise on the software around the country.

There is no IP moat around open source projects, but there is a first mover advantage to having a critical mass of developers and professionals who have amassed intellectual and social capital around the project.

Intellectual capital isn’t just built in universities, and the private sector shouldn’t only be looked to for intellectual property. Let’s mix it up a little.

The BC government spends $9M/year on Oracle “maintenance”, basically the right to access bug fixes and updates from Oracle for the software we’re running. It’s not a lot of money, but it’s money being shipped straight over the border. Afilias, the “.org” top level DNS provider, built their infrastructure on PostgreSQL – they spend a couple hundred thousand a year having some PostgreSQL core developers on staff. Same effect, different path.

January 02, 2018 04:00 PM

Tyler Mitchell

Deep learning + cartography

A couple years ago you may have read this great post from boredpanda talking about a research paper that took…

The post Deep learning + cartography appeared first on spatialguru.com.

by Tyler Mitchell at January 02, 2018 06:33 AM

January 01, 2018

gvSIG Team

GIS applied to Municipality Management: Module 4.1 ‘Attribute tables (alphanumeric information)’

In this first video of the fourth module we will talk about the attribute tables of a GIS, and show how to manage the alphanumeric information of a vector layer.

As we explained in the first module, about the differences between GIS and CAD, in a Geographic Information System we can manage different types of alphanumeric information. For example, for a parcel we can add information about the owner, area, coordinates, date of the buildings… And we can run a query to get the elements with concrete values (for example, the parcels with an area higher than X square meters).
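That kind of attribute query is, in essence, a filter over the table rows. This sketch uses a made-up parcel table purely to illustrate the "area higher than X" query described above:

```python
# Made-up parcel attribute table: filter the features whose area
# exceeds a threshold, as in the query described above.
parcels = [
    {"id": 101, "owner": "A", "area_m2": 350.0},
    {"id": 102, "owner": "B", "area_m2": 1200.0},
    {"id": 103, "owner": "C", "area_m2": 800.0},
]

threshold_m2 = 500.0
large_parcels = [p for p in parcels if p["area_m2"] > threshold_m2]
```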

That information will be very useful for our city council, to manage the information in an easy way.

In this module we will see how to manage that information.

In the first module of the course you can find a frequently asked questions section about the course, and if you get any doubt or error using gvSIG you can consult this post: https://blog.gvsig.org/2015/06/17/what-to-do-when-we-get-an-error-in-gvsig/

In the third module you can see how to install gvSIG to follow this new module, and you can find the cartography to use for this video at this link.

Here you have the first videotutorial of this fourth module:

Related posts:

by Mario at January 01, 2018 06:41 PM

December 31, 2017

Tom Kralidis

Cheers to 2017

Here we go again! Following on from last year, a summary of my 2017: – pycsw: the lightweight CSW server continues to provide stable, composable, and compliant CSW services. Highlights include: an official code of conduct, Docker image, testing framework enhancements, code coverage support, custom repository plugin, filter parsing – MapServer metadata: at long last RFC82 […]

by tomkralidis at December 31, 2017 05:52 PM

December 21, 2017

Markus Neteler

European Geosciences Union General Assembly 2018

The EGU General Assembly 2018 will bring together geoscientists from

The post European Geosciences Union General Assembly 2018 appeared first on GFOSS Blog | GRASS GIS Courses.

by Prod-copernicus-admin at December 21, 2017 03:59 PM

December 20, 2017


QGIS 3 compiling on Windows

As the Oslandia team works exclusively on GNU/Linux, the exercise of compiling QGIS 3 on Windows 8 is not an everyday task :). So we decided to share our experience; we bet it will help some of you.


The first step is to download Cygwin and to install it in the directory C:\cygwin (instead of the default C:\cygwin64). During the installation, select the lynx package:


Once installed, you have to click on the Cygwin64 Terminal icon newly created on your desktop:

Then, we’re able to install dependencies and download some other installers:

$ cd /cygdrive/c/Users/henri/Downloads
$ lynx -source rawgit.com/transcode-open/apt-cyg/master/apt-cyg > apt-cyg
$ install apt-cyg /bin
$ apt-cyg install wget git flex bison
$ wget http://download.microsoft.com/download/D/2/3/D23F4D0F-BA2D-4600-8725-6CCECEA05196/vs_community_ENU.exe
$ chmod u+x vs_community_ENU.exe
$ wget https://cmake.org/files/v3.7/cmake-3.7.2-win64-x64.msi
$ wget http://download.osgeo.org/osgeo4w/osgeo4w-setup-x86_64.exe
$ chmod u+x osgeo4w-setup-x86_64.exe


The next step is to install CMake. To do that, double-click on the file cmake-3.7.2-win64-x64.msi previously downloaded with wget. You should choose the following options during the installation:


Visual Studio

Then, we have to install Visual Studio and the C++ tools. Double-click on the vs_community_ENU.exe file and select the Custom installation. On the next page, you have to select the Visual C++ checkbox:




In order to compile QGIS, some dependencies provided by the OSGeo4W installer are required. Double-click on osgeo4w-setup-x86_64.exe and select the Advanced Install mode. Then, select the following packages:

  •  expat
  • fcgi
  • gdal
  • grass
  • gsl-devel
  • iconv
  • libzip-devel
  • libspatialindex-devel
  • pyqt5
  • python3-devel
  • python3-qscintilla
  • python3-nose2
  • python3-future
  • python3-pyyaml
  • python3-mock
  • python3-six
  • qca-qt5-devel
  • qca-qt5-libs
  • qscintilla-qt5
  • qt5-devel
  • qt5-libs-debug
  • qtwebkit-qt5-devel
  • qtwebkit-qt5-libs-debug
  • qwt-devel-qt5
  • sip-qt5
  • spatialite
  • oci
  • qtkeychain


To start this last step, we have to create a file C:\OSGeo4W\OSGeo4W-dev.bat containing something like:

@echo off 
call "%OSGEO4W_ROOT%\bin\o4w_env.bat" 
call "%OSGEO4W_ROOT%\bin\qt5_env.bat" 
call "%OSGEO4W_ROOT%\bin\py3_env.bat" 
set VS140COMNTOOLS=%PROGRAMFILES(x86)%\Microsoft Visual Studio 14.0\Common7\Tools\ 
call "%PROGRAMFILES(x86)%\Microsoft Visual Studio 14.0\VC\vcvarsall.bat" amd64 
set INCLUDE=%INCLUDE%;%PROGRAMFILES(x86)%\Microsoft SDKs\Windows\v7.1A\include 
set LIB=%LIB%;%PROGRAMFILES(x86)%\Microsoft SDKs\Windows\v7.1A\lib 
path %PATH%;%PROGRAMFILES%\CMake\bin;c:\cygwin\bin 
@set GRASS_PREFIX=%OSGEO4W_ROOT%\apps\grass\grass-7.2.1 
@set LIB=%LIB%;%OSGEO4W_ROOT%\lib;%OSGEO4W_ROOT%\lib 


According to your environment, some variables should probably be adapted. Then in the Cygwin terminal:

$ cd C:\
$ git clone git://github.com/qgis/QGIS.git
$ ./OSGeo4W-dev.bat
> cd QGIS/ms-windows/osgeo4w

In this directory, you have to edit the file package-nightly.cmd to replace:

cmake -G Ninja ^

with:

cmake -G "Visual Studio 14 2015 Win64" ^

Moreover, we had to update the environment variable SETUPAPI_LIBRARY according to the actual location of the Windows Kits file SetupAPI.Lib:

set SETUPAPI_LIBRARY=C:\Program Files (x86)\Windows Kits\8.1\Lib\winv6.3\um\x64\SetupAPI.Lib

And finally, we just have to compile with the following command:

> package-nightly.cmd 2.99.0 1 qgis-dev x86_64


And see you soon for the generation of OSGEO4W packages 😉





by Paul Blottière at December 20, 2017 04:09 PM

Petr Pridal

OpenMapTiles Map Server: The easiest way to deploy vector OpenStreetMap

Less than one year ago, the OpenMapTiles open-source project was announced. It simplifies the process of deploying OpenStreetMap maps significantly. However, setting up the whole software toolchain for vector tiles was an obstacle not everybody was able to pass. Today, we are tearing this barrier down by announcing OpenMapTiles Map Server, software which enables everyone to run a map server on their own infrastructure within a few minutes.

Vector tiles for the whole world in 10 minutes

Setting up your own map server used to be seen as an advanced task for a skilled admin. With OpenMapTiles Map Server, you can do it with basic computer skills in less than 10 minutes.

To launch the software you can use a graphical user interface on Windows and Mac or one simple command on any Linux computer.

The main task of the OpenMapTiles Map Server is to provide you with vector map tiles, however, it covers more than that.

Backward compatibility with third-party software is important to us. Therefore, if your library, end-user device or third-party software doesn’t support vector tiles, there is a fallback mode with raster tiles. This can be used in libraries such as Leaflet or other tools. Compatibility is also ensured with WMS and WMTS protocols used in ArcGIS, QGIS, and other desktop GIS software.

Styles, schema, and languages

If you think about a map, one of the first thoughts is its appearance. By default, there is a set of four free and open-source styles. However, if you want to make any change to a style, there is a visual editor where changes are immediately visible. You can also upload your own style in JSON format. Thanks to the vector technology, the tiles' look changes on the fly without the need for new rendering.

The cartographic decisions are encoded in the Vector Tile Schema, which is fully free and open-source. It covers a selection of tags used in OpenStreetMap, some features from Natural Earth Data, and other open-data sources.

OpenMapTiles Map Server comes with built-in support for more than 50 languages, and you can easily switch between them. A dual-language option is also supported, which can be especially useful in countries where Latin is not the main script.

Fast and simple

OpenMapTiles was already a giant leap forward in making maps based on OpenStreetMap accessible to a broader audience. OpenMapTiles Map Server goes even further: with the help of Docker, you can serve your own map within a few minutes using minimal knowledge. It helps experienced admins minimize their workload, and it lets novices with little or no administration knowledge start serving map tiles.

For the Docker installation there is a basic setup available on Windows and Mac; for Linux users, there is a single command you just have to copy and paste into a terminal.

Once you install Docker, run the openmaptiles-server container, either from a terminal or in the Kitematic GUI. It will start a web server available on localhost. In the web wizard, select the area you want to display, a style, and a language. The map can then be displayed directly on websites with JavaScript viewers, used in native mobile applications on Android and iOS (even offline), or turned into traditional raster tiles or high-resolution images for printing.
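
On Linux, that single command is roughly the following (a sketch only: the image name, tag, and port mapping are assumptions based on the project's Docker Hub naming and may differ from the current published instructions):

```shell
# Pull and run the map server container; the web wizard then appears on http://localhost:8080/
docker run --rm -it -p 8080:80 klokantech/openmaptiles-server
```

Stopping the container with Ctrl+C removes it again thanks to the `--rm` flag.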

There are ready-to-use vector tiles from OpenMapTiles.com, but if you need more than just a street map, additional data are available: you can add contour lines, hillshading, or an additional satellite layer to your map.

Map setup in a few steps

What is running behind the scenes?

OpenMapTiles Map Server combines the OpenMapTiles open-source technology with one of the most prominent virtualization technologies: Docker.

OpenMapTiles is an open-source set of tools for deploying maps. Originally developed by Klokan Technologies and launched early this year, it is under heavy development by both KlokanTech employees and a growing community. This has made OpenMapTiles the leading technology for creating and deploying vector tiles.

OpenMapTiles Map Server is a production-ready software package with built-in open-source components (such as memcache or TileServer GL), tailored specifically for OpenMapTiles data and simple step-by-step configuration.

Docker is free and open-source software for operating-system-level virtualization. Its system of containers simplifies the installation of any software, and its recently added intuitive GUI makes it the number one choice for running isolated software.

This combination creates a robust, yet extremely easy to deploy, solution for your own map service.

Get started using OpenMapTiles Map Server

As you can see, OpenMapTiles Map Server is a powerful but simple tool for deploying vector map tiles. It enables everyone to run their own map server on their own hardware without any deep IT knowledge.

Are you ready to create your own map server? Get set by installing Docker and running openmaptiles-server, then go: choose the colors, languages, and area of your map!

by Klokan Technologies GmbH (info@klokantech.com) at December 20, 2017 09:00 AM

Tamas Szekeres

GDAL is about to drop support for VS2013 and earlier, please vote

As you may have noticed from recent emails on the gdal-dev list, support for C++11 in VS 2013 is only partial and a bit of a pain to deal with.

The GDAL team has created a poll to gauge how much support for VS 2013 is still needed for GDAL 2.3.

Please cast your vote (it doesn't require any account and is open to any user, developer, or other party with at least some interest in GDAL) at


Voting is open until the end of this week.


by Tamas Szekeres (noreply@blogger.com) at December 20, 2017 07:03 AM

Ian Turton's Blog

Finding anagrams of place names (in the World)

As a quick follow up to Anagrams in the UK I thought I’d try to do the world.

I downloaded the cities1000 file from GeoNames, which contains all the cities with a population greater than 1000, plus seats of administrative divisions (ca. 150,000 entries). I then loaded it into PostGIS using this handy guide (the cities* files are just a subset of the geoname table).

Then I just needed to change the table and column names in the original code to use the helpful asciiname column.


The global winner is the 13-letter Port-Saint-Pere as perpetrations. Honourable mentions go to the following 12-letter finds:

  • Cerreto d’Asti - directorates
  • Chernomorets - chronometers
  • Dragodanesti - degradations
  • Idaho Springs - rhapsodising
  • Manderscheid - merchandised
  • Puerto Cisnes - persecutions
  • Saint-Emilion - eliminations
  • Saint-Georges - segregations
  • Seven Sisters - restivenesss
  • Solbiate Arno - elaborations
  • Villeurbanne - invulnerable

Full results are online.

December 20, 2017 12:00 AM

Ian Turton's Blog

Finding anagrams of place names (in GB)

A little while ago Alasdair Rae asked if anyone had combined an anagram engine with a list of place names.

Well, no one stepped forward, so I thought it could be a fun project. And it turns out it is quite fun, though I ended up thinking about data structures rather more than geography; that is probably good for me.

I made the assumption that Alasdair was probably not interested in just permutations of letters but wanted actual words (such as would be used in a crossword clue). I also limited my search to single word anagrams as I can’t see a simple solution to finding multi word solutions.

First I stuffed the Ordnance Survey's OpenNames data set into PostGIS (who wants to be scanning hundreds of little CSV files?).

I then set up a GeoTool’s PostGIS datastore and grabbed the populated places.

    Map<String, Object> params = new HashMap<String, Object>();
    // the DBTYPE sample value is the required "postgis" identifier
    params.put(PostgisNGDataStoreFactory.DBTYPE.key, PostgisNGDataStoreFactory.DBTYPE.sample);
    params.put(PostgisNGDataStoreFactory.USER.key, "username");
    params.put(PostgisNGDataStoreFactory.PASSWD.key, "password");
    params.put(PostgisNGDataStoreFactory.SCHEMA.key, "opennames");
    params.put(PostgisNGDataStoreFactory.DATABASE.key, "osdata");
    params.put(PostgisNGDataStoreFactory.HOST.key, "");
    params.put(PostgisNGDataStoreFactory.PORT.key, "5432");

    DataStore ds = DataStoreFinder.getDataStore(params);
    if (ds == null) {
      throw new RuntimeException("No datastore");
    }
    SimpleFeatureSource fs = ds.getFeatureSource("opennames");
    SimpleFeatureCollection features = fs.getFeatures(CQL.toFilter("type = 'populatedPlace'"));

I tried a naive approach of recursively generating every permutation of the name and looking each one up in a HashMap of English words. Unsurprisingly, this took a long time, so I thought (and Googled) some more and came up with a much more efficient approach: sort the letters of a word and use that as a key to all words that contain those letters. Then I could sort each place name's letters and do a single lookup to find all the possible words that could be made with them. That sped things up nicely.
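
The sorted-letters trick can be sketched in isolation (a standalone illustration; `anagramKey` is a hypothetical helper name, not from the original code):

```java
import java.util.Arrays;

public class AnagramKeyDemo {
    // Sorting a word's letters gives a canonical key shared by all of its
    // anagrams, so a single map lookup finds every candidate word.
    static String anagramKey(String word) {
        char[] letters = word.toLowerCase().replaceAll("\\W", "").toCharArray();
        Arrays.sort(letters);
        return new String(letters);
    }

    public static void main(String[] args) {
        System.out.println(anagramKey("listen"));  // prints "eilnst", same key as "silent"
        System.out.println(anagramKey("Stone Corner").equals(anagramKey("cornerstone")));  // true
    }
}
```

"Stone Corner" and "cornerstone" reduce to the same key, which is exactly why that pair turns up in the results.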

To build the lookup table I made use of Google's HashMultimap (from Guava), which allows you to create a Map of Collections keyed on a String.

  private Map<String, Collection<String>> dict;

  public AnagramLookup() throws FileNotFoundException, IOException {
    // change this to point to your dictionary (one word per line)
    File f = new File("/usr/share/dict/british-english");
    HashMultimap<String, String> indexedDictionary = HashMultimap.create();
    try (BufferedReader buf = new BufferedReader(new FileReader(f))) {
      String line;
      // read each word in the dictionary
      while ((line = buf.readLine()) != null) {
        // strip out non letters
        String word = line.toLowerCase().replaceAll("\\W", "");
        // store the word against its sorted-letter key
        indexedDictionary.put(sort(word), word);
      }
    }
    dict = indexedDictionary.asMap();
  }

  // sort the letters of a word to form the lookup key
  private String sort(String word) {
    char[] letters = word.toCharArray();
    Arrays.sort(letters);
    return new String(letters);
  }

Then all that is left to do is to iterate over each populated place, grab its name, remove all the non-letters, sort its letters, and look up the anagrams in the HashMap. The final trick is to remove the name itself if it appears in the list of anagrams (i.e. the name itself is an English word).

try (SimpleFeatureIterator itr = features.features()) {
      while (itr.hasNext()) {
        SimpleFeature f = itr.next();
        String name = (String) f.getAttribute("name1");

        String current = name.toLowerCase().replaceAll("\\W", "");
        Collection<String> anagrams = getAnagrams(current);
        if (anagrams != null && !anagrams.isEmpty()) {
          // remove the name itself if it happens to be a word
          TreeSet<String> result = new TreeSet<String>(anagrams);
          result.remove(current);
          if (!result.isEmpty()) {
            results.put(name, result);
          }
        }
      }
    }


It turns out that there are seven 11-letter anagrams in the list of GB place names.

  • Balnadelson - belladonnas
  • Fortis Green - reforesting
  • Gilling East - legislating
  • Green Plains - spenglerian
  • Morningside - modernising
  • Sharrington - harringtons
  • Stone Corner - cornerstone

A Spenglerian is “of or relating to the theory of world history developed by Oswald Spengler which holds that all major cultures undergo similar cyclical developments from birth to maturity to decay”. While a Harrington is “a man’s short lightweight jacket with a collar and a zipped front.”

Other highlights for crossword setters include Aimes Green as menageries and Westlinton as tinseltown.

I have posted the full list of anagrams and the code to generate the list.

See this follow up for world names.

December 20, 2017 12:00 AM

December 19, 2017

GeoServer Team

GeoServer 2.11.4 Released

The GeoServer team are pleased to announce the release of GeoServer 2.11.4. Downloads are available (zip, war, dmg, and exe) along with documentation and extensions.

GeoServer 2.11.4 is a maintenance release of the GeoServer 2.11.x series, recommended for production systems. This release is made in conjunction with GeoTools 17.4 and GeoWebCache 1.11.3.

This release contains bug fixes as well as new features. For more information, please see the release notes (2.11.4 | 2.11.3 | 2.11-RC1 | 2.11-beta).

New Features

  • Support for MongoDB as a source data store for app-schema.
  • Support for GeoJSON output for complex features (app-schema).

About GeoServer 2.11

  • OAuth2 for GeoServer (GeoSolutions).
  • YSLD has graduated and is now available for download as a supported extension.
  • Vector tiles has graduated and is now available for download as an extension.
  • The rendering engine continues to improve with underlying labels now available as a vendor option.
  • A new “opaque container” layer group mode can be used to publish a basemap while completely restricting access to the individual layers.
  • Layer group security restrictions are now available.
  • Latest in performance optimizations in GeoServer (GeoSolutions).
  • Improved lookup of EPSG codes allows GeoServer to automatically match EPSG codes making shapefiles easier to import into a database (or publish individually).

by Ben Caradoc-Davies at December 19, 2017 09:01 PM

GeoTools Team

GeoTools 17.4 Released

The GeoTools team is pleased to announce the release of GeoTools 17.4:
This release, which is also available from the GeoTools Maven repository, is made in conjunction with GeoServer 2.11.4.

GeoTools 17.4 is a maintenance release that mainly fixes bugs but also includes some enhancements: 
  • Support for MongoDB as an app-schema source data store.
  • Support for enhancements in recent MySQL releases, including precise spatial computations on object shapes.
For more information please see the release notes (17.4 | 17.3 | 17.2 | 17.1 | 17.0 | 17-RC1 | 17-beta).

About GeoTools 17

  • The wfs-ng module has now replaced gt-wfs.
  • The NetCDF module now uses NetCDF-Java 4.6.6.
  • Image processing provided by JAI-EXT 1.0.15.
  • YSLD module providing a plain-text representation of styling.


  • The AbstractDataStore has finally been removed. Please transition any custom DataStore implementations to ContentDataStore (tutorial available).

by Ben Caradoc-Davies (noreply@blogger.com) at December 19, 2017 08:59 PM

gvSIG Team

gvSIG 2017, a year of success, 12 months of progress

This year is coming to an end, and it's time to take stock. Reviewing everything that has happened during 2017, the balance is clearly positive. It has been the year in which gvSIG received outstanding international recognition (the awards attest to it) and in which its brand was consolidated around a catalog of open-source software products for geographic information management: the gvSIG Suite.

A brief review of 2017:

  • 1st prize in the “Cross-border category” at the European Commission's “Sharing & Reuse Awards”.
  • “Europa Challenge” award in Helsinki, given by NASA to the gvSIG Suite in the “Professional” category.
  • “Internationalization” Excellence Award given by the Professional Union of Valencia.
  • “ITC Promoter organization” prize awarded by the Valencian Telecommunications.
  • Release of gvSIG Online 2.0, with important improvements. A successful SDI solution that is becoming a reference, with implementations in local, regional, national and supra-national administrations and in private companies.
  • Release of the new gvSIG Mobile, available on Google Play.
  • Towards gvSIG Desktop 2.4. The (imminent) release of the next version of the desktop GIS is being prepared, with dozens and dozens of improvements. It will be the version with the most external contributions, another important figure.
  • gvSIG Crime arrives. This product joins the sector solutions of the gvSIG Suite, oriented to crime management and the improvement of citizen safety and coexistence.
  • Consolidation of the gvSIG Suite. The ‘gvSIG brand’ is recognized in the professional field, beyond the desktop GIS, as a complete catalog of geomatics solutions.
  • The gvSIG Association has multiplied the number of projects carried out, becoming a reference as a geomatics services provider, with clients in more than 30 countries.
  • Participation in multiple events around the world, many of them organized by gvSIG Communities.
  • Publication of dozens of video tutorials, courses, etc., with an excellent international reception.
  • Growing downloads of gvSIG software, from more than 160 countries.
  • Exponential growth in the number of academic works carried out with gvSIG: final degree projects, master's theses, research articles…
  • Growth of visits to the gvSIG Blog (more than 250,000 visits per year).

All the indicators are very positive. Everything indicates that 2018 is going to be an even better year.

And all of this would not make any sense without you, the gvSIG Community. Thanks for being there.

by Mario at December 19, 2017 06:56 AM

December 18, 2017

gvSIG Team

Recording of the introductory gvSIG workshop held at the UMH in Elche

The recording of the introductory gvSIG workshop is now available. It was given during the event held at the Miguel Hernández University of Elche, Spain, on December 13, 2017, as part of the gvSIG Chair.

At this event, apart from the workshops on the application and the talk about the gvSIG Suite, the awards were presented to the winning projects of the 2017 gvSIG Chair.

If you haven't used gvSIG before, this workshop teaches you to work with this completely free tool, which you can download from the project website as indicated in the video.

The workshop starts by showing how to create views and insert both local and remote layers into them, both vector and raster. More and more public administrations make available to citizens both cartography for download and web services for consultation, with no need to download anything to disk, so we can work in a great number of sectors (agriculture, health, infrastructure, environment…) without having to pay for either applications or cartography.

Once the vector layers are in a view, you will see how to apply symbology and labeling to them, and how to manage the different reference systems you can work in. Among the somewhat more advanced functionalities, the main editing tools (graphical and alphanumeric) and geoprocessing tools are shown.

Finally, a map is created, which is the graphical output of the geographic information inserted in the views, with its north arrow, legend, scale…, and which can be exported to PDF or PS, or printed directly on paper.

The data needed to follow this workshop can be downloaded from the following link.

Here is the video:


by Mario at December 18, 2017 08:26 PM