Welcome to Planet OSGeo

May 21, 2019

FOSS4G 2019 LOC is honoured to announce a partnership with the Sharing & Reuse Conference 2019. Please read all the details below.

The European Commission is organising the second Sharing & Reuse Conference under the theme Open.Share.Link. to showcase the benefits of sharing and reusing IT solutions in the public sector. Register here and join us in exploring the role of open source in the Commission’s goal of enhancing the development of more efficient and cost-effective public services. The conference will take place on 11 June 2019 in Bucharest, Romania.

The Sharing & Reuse Conference 2019 will bring together policy makers, IT managers, open source software developers, specialists, consultants and advocates. This is a chance to interact with key specialists from public services, the private sector and open source associations from across Europe and beyond. Come and help build, reinforce and campaign for European IT innovation, government modernisation, openness, and sharing and reuse.

Conference highlights:

  • European open source policies and strategies.
  • How do software developers from inside and outside the public sector contribute to government-led open source projects?
  • Practical ways for public services to contribute to the security of open source software (EU FOSSA initiative).
  • Governments coding with citizens (Blue Hats, Code for America Brigades).
  • Proven solutions and inspiring examples (Sharing & Reuse Awards).

The conference will start with contributions from Mariya Gabriel, EU Commissioner for Digital Economy and Society; Cristian Cucu, Romanian Presidency of the Council of the EU; and Mário Campolargo, Deputy Director-General of DG Informatics, European Commission. It will conclude with the exciting Sharing & Reuse Awards Ceremony. Check out the final programme and do not hesitate to spread the word about the conference among your peers!

Share this event on Twitter – #SRCONF19

by Vasile Crăciunescu at May 21, 2019 09:29 AM

May 20, 2019

In some of our projects we have had to represent population by block. The population data was recorded by street and number in a point layer (very common in any municipality's census of inhabitants). In this post we explain how to use a pair of geoprocesses to obtain the population per block in an easy way.

Source data:

  • A point layer that contains the population data by street and number.
  • A polygon layer that represents blocks.

Once the layers are loaded in a View in gvSIG Desktop, we simply have to run two geoprocesses:

  • Snap points to layer. This geoprocess moves the points to the nearest block edge. We can indicate a tolerance, the maximum distance from point to polygon beyond which a point is not taken into account.
  • In-polygon spatial join. For each polygon, it aggregates the values of the point layer field that contains the population data; that is, it gives us the total number of inhabitants per polygon.

Now we can work with population information by block.
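The aggregation performed by the second geoprocess can also be sketched outside gvSIG. Here is a minimal JavaScript sketch with made-up data (the names `population`, `coords` and `ring` are illustrative, not gvSIG's), counting inhabitants per block with a ray-casting point-in-polygon test:

```javascript
// Minimal sketch of the "in-polygon spatial join" step: sum the
// population attribute of every point that falls inside each block.
// (Hypothetical data; gvSIG does this through its geoprocess GUI.)

// Ray-casting point-in-polygon test; polygon is an array of [x, y] vertices.
function pointInPolygon([x, y], polygon) {
  let inside = false;
  for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    const [xi, yi] = polygon[i], [xj, yj] = polygon[j];
    if ((yi > y) !== (yj > y) && x < ((xj - xi) * (y - yi)) / (yj - yi) + xi) {
      inside = !inside;
    }
  }
  return inside;
}

// Sum point populations per block polygon.
function populationPerBlock(points, blocks) {
  return blocks.map(block =>
    points
      .filter(p => pointInPolygon(p.coords, block.ring))
      .reduce((sum, p) => sum + p.population, 0)
  );
}

const blocks = [{ ring: [[0, 0], [10, 0], [10, 10], [0, 10]] }];
const points = [
  { coords: [2, 2], population: 3 },
  { coords: [5, 5], population: 4 },
  { coords: [20, 20], population: 7 }, // outside the block
];
console.log(populationPerBlock(points, blocks)); // [7]
```

Points snapped exactly onto a block edge would need an on-boundary rule as well; gvSIG's geoprocess handles that internally.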

In the following video you can see how to do it step by step:

by Mario at May 20, 2019 12:46 PM

May 19, 2019


Dear Reader,

GeoSolutions is proud to announce that we will take part in this year's GEOINT in San Antonio, USA, from the 2nd to the 5th of June.

If you are attending and you want to learn how we help our clients all over the world to successfully use open source geospatial software such as MapStore, GeoServer and GeoNode, meet us at booth 1723 (floorplan here); our director Simone Giannecchini and our Sales Manager Eleonora Fontana will be there to greet you!

If you want further information or if you have questions, do not hesitate to contact us.

Looking forward to seeing you in San Antonio!

The GeoSolutions Team,


by simone giannecchini at May 19, 2019 01:58 PM

May 18, 2019

In some of our projects we have had to represent population by block. We had this data recorded at the level of street number, in a point layer (something very common in any municipality's census of inhabitants). In this post we explain how to use a pair of geoprocesses to obtain the population per block in a very simple way.

Source data:

  • A point layer that contains the population data by street number.
  • A polygon layer that represents the blocks.

Once the layers are loaded in a gvSIG Desktop View, we simply have to run two geoprocesses:

  • Snap points to layer. This geoprocess moves the points to the nearest block edge. We can indicate a tolerance, the distance from point to polygon beyond which a point is not taken into account.
  • In-polygon spatial join. For each polygon it aggregates the values of the point layer field containing the population data; that is, it gives us the total number of inhabitants per polygon.

Now we can work with the population information by block.

In the following video you can see how to do it step by step:

by Alvaro at May 18, 2019 02:05 PM

May 17, 2019

The pycsw team announces the release of pycsw 2.4.0.

Note that though pycsw works with Python 2 and 3, we have turned off Python 2 testing given the Python 2 end of life scheduled for 01 January 2020. Users are strongly encouraged to update their deployments to Python 3 as soon as possible.

Source and binary downloads:

The source code is available at:


PyPI packages are available at:


Version 2.4.0 (2019-05-17):


  • fix CAT 3.0 schema locations
  • fix to handle plugin loading across various operating systems
  • bump of requirements
  • new project logos
  • safeguard WKT exceptions against newer versions of Shapely
  • updated Chinese translations
  • enhancements and fixes to large metadata harvesting workflows
  • safeguard async naming for Python 3.7
  • OpenSearch description document updates
  • startposition fixes for GetRecords workflows

Testers and developers are welcome.

We would like to thank OSGeo and the 2019 Minneapolis Code Sprint organizers and sponsors for their support.

The pycsw developer team. https://pycsw.org/

May 17, 2019 06:34 PM

Last month I was invited to give a keynote talk at FOSS4G North America in San Diego. I have been speaking about open source economics at FOSS4G conferences more-or-less every two years since 2009, and I look forward to revisiting the topic regularly: the topic is ever-changing, just like the technology.

In 2009, the central pivot of thought about open source in the economy was professional open source firms in the Red Hat model. Since then we've taken a ride through a VC-backed "open core" bubble and are now grappling with an environment where the major cloud platforms are absorbing most of the value of open source while contributing back proportionally quite little.

What will the next two years hold? I dunno! But I have increasingly little faith that a good answer will emerge organically via market forces.

If you liked the video and want to use the materials, the slides are available here under CC BY.

May 17, 2019 04:00 PM

Registration is now open for GEOCAMP 2019, which will be held on Saturday 15 June at the Museu Comarcal de L'Horta Sud in Torrent (Valencia). Capacity is limited to 60 people, so we recommend registering for the activity as soon as possible.

For those who don't know this event, it is an unconference (Barcamp style) where everyone is invited to participate. There is no agenda, there are no confirmed speakers, there are no sponsored talks; any topic related to the Earth sciences is welcome. Come and tell us what you are passionate about, and learn from other experiences and projects.

At the start of the day we will gather and collect proposals for talks or any other activity of no more than 20 or 30 minutes. Once collected, we order them for the greatest consistency and proceed with the day. As simple as that.


In addition, this year the event has been scheduled on the same weekend as another event known as Geopaella, which Geoinquietos Valencia has been organising for some years now. It is a get-together on Sunday 16 June for relaxed conversation about geospatial topics in good company, with "authentic" Valencian paella, in a private space set in the Valencian huerta in the hamlet of Borbotó in l'Horta Nord (Descubre L´Horta). New attendees and returning geoinquietos tend to appreciate the master class Fernando gives on how to prepare a good paella with produce picked directly from his fields, in a natural setting like the huerta, as well as a long after-lunch conversation that usually stretches into the evening. More information and registration details are available on the Geopaella 2019 page.

We look forward to your attendance and your contributions to the event!

Mail:  info@geocamp.es
Twitter: @geocampes ( #geocampes )
Web: http://geocamp.es/

by Jorge at May 17, 2019 11:03 AM

FOSS4G 2019

Dear Reader,

GeoSolutions is proud to announce that we will take part in this year's FOSS4G in Bucharest, Romania, from the 26th to the 30th of August as a Bronze Sponsor.

We have also submitted workshops and presentations covering MapStore, GeoServer and GeoNode; we will provide more details once the full program has been announced. If you want further information, do not hesitate to contact us.

Looking forward to seeing you in Bucharest!

The GeoSolutions Team,


by simone giannecchini at May 17, 2019 07:21 AM

May 16, 2019

The Leaflet API is very simple and tries to offer the best performance and styling for commonly used features such as tile layers, points, lines and markers in general. When we need to visualize points, it is possible to have a very large number of them. So let's take a look at a basic example of what this scenario looks like. A file with approximately 1,000 points was created with QGIS.

<!DOCTYPE html>
<html>
<head>
  <title>Marker Cluster Webmap</title>
  <meta charset="utf-8" />
  <link rel="stylesheet" href="http://cdn.leafletjs.com/leaflet-0.6.2/leaflet.css" />
</head>
<body>
  <div id="map" style="width: 800px; height: 600px"></div>
  <script src="http://cdn.leafletjs.com/leaflet-0.6.2/leaflet.js"></script>
  <script src="code/points_rand.js"></script>
  <script>
    var map = L.map('map').setView([52.52, 13.384], 8);
    L.tileLayer('http://{s}.www.toolserver.org/tiles/bw-mapnik/{z}/{x}/{y}.png').addTo(map); // will be our basemap
    var streets = new L.geoJson(points).addTo(map);
  </script>
</body>
</html>
First webmap with many points in Berlin.

Here is what a normal map looks like with this density of information:

As you can see, we include a second JavaScript file, points_rand.js, with the point information. You will probably agree that this is a lot of information. One strategy to reduce the density without losing much information is clustering, which combines markers that lie within a given radius. It was developed by David Leaver and is maintained on GitHub. For this functionality we will need two style files (*.css) and one JavaScript file for the clustering process itself:

<link rel="stylesheet" href="MarkerCluster.css" />
<link rel="stylesheet" href="MarkerCluster.Default.css" />
<script src="leaflet.markercluster-src.js"></script>

With the lines above added to our code, let's create a cluster object and add the geojson object to this cluster object:

var markers = L.markerClusterGroup();
var points_rand = L.geoJson(points, {
    onEachFeature: function (feature, layer) { // functionality on click on feature
        layer.bindPopup("hi! I am one of thousands"); // just to show something in the popup; could be part of the geojson as well!
    }
});

As with every map element, we need to add it to the map; lastly, we add the cluster object to the map to display it:

markers.addLayer(points_rand);
map.addLayer(markers);
After completing all the steps above, you should see your map as follows:

You can get the complete source code by clicking here.

Fonte: Digital Geography

by Fernando Quadro at May 16, 2019 10:30 AM

May 15, 2019

Free registration is now open for the 11th gvSIG Conference of Latin America and the Caribbean and 5th gvSIG Mexico Conference, which will take place on August 15th and 16th, 2019 at the University of Guanajuato (Mexico).

Registration can be completed through the form available on the event website.

There will also be a series of free workshops, whose registration will open when the conference program is published. We also remind you that the period for submitting papers to present at the conference is still open. You can find all the information in the Communications section of the website.

Finally, any organisation or person interested in sponsoring the conference, or in any other kind of collaboration, can send an email to jornadas.latinoamericanas@gvsig.org.

Don't miss this conference!

by Mario at May 15, 2019 12:47 PM

May 14, 2019

For the new gvSIG 2.5 version, several parts of the application have been redone to take full advantage of gvSIG's current capabilities. One of these parts is the Column Manager.

The Column Manager is used for managing the layer schema. We can add new fields, modify or delete them, and consult the characteristics associated with these fields.

New column manager. If the layer is not in editing mode, the "Modify" icon is disabled.

As always, the Column Manager can be accessed, once an attribute table is open, from its button on the toolbar or from the Table -> Column Manager menu. It is no longer necessary to have the layer in editing mode to open the manager; it can always be accessed in query mode.

In the previous version, modifying the characteristics of a field required deleting it and creating it again. This is no longer necessary. Having everything in a single window, where we can add, modify and delete whatever we need, greatly simplifies the creation of complicated schemas.

We can consult, for example, the parameters of a numeric field with decimals (double). Parameters such as size, precision and default value are shown.

The advantage of this new manager is that many advanced options are integrated into it. For certain fields we can set options such as disallowing nulls, a typical option in databases; thanks to this new manager and other improvements made in gvSIG, we can now apply it to ordinary shapefiles. We will include more information about the advanced parameters in future manuals.

Another example is the "Data profile" option. It is an advanced setting that allows certain fields to behave like another data type. For example, some SQLite layers embed images in ByteArray fields. If we tell gvSIG to interpret a ByteArray field as an image, it will be shown as an image in different parts of the application, such as in the forms.

We also see the option to mark a field as virtual, with its value computed from an expression. We will dedicate an article and specific documentation to this type of field. In this example we have created a virtual field; at the bottom we can see the expression that will be used, and at the top it is marked as a calculated field.

For date fields, the Time tab is enabled. In this tab it is possible to set time ranges, which can be used to create time-based animations of our data. These features were previously provided by a separate extension; now they are all integrated into gvSIG.

There are also other visualization tabs, where a large number of parameters can be set to improve how this layer's forms are displayed, as well as other advanced options that we will show later.

The manager is very simple to operate. To modify a field, we select it, click "Modify", change the desired values and accept on the right-hand side. To add a field, we click "New", fill in the appropriate values at the bottom of the window, accept, and check that the field appears in the list. To delete a field, we select it and click "Delete". To exit, we click "Accept" at the bottom of the window.

The extra information we add is, in some cases, saved in a new file in the same folder as our layers. For shapefiles, a file with a .dal extension will appear. That .dal file can only be read when the layer is loaded in gvSIG. If it is deleted, nothing important is lost, only the special parameters set in the manager and the virtual fields we created. Other file types that are not shapefiles, such as SQLite, are also supported.

In summary, it is a tool whose possibilities have been greatly expanded and which allows many options not available in the previous tool.

by Mario at May 14, 2019 03:34 PM

For the new gvSIG 2.5 version, several parts have had to be redone in order to take advantage of the full potential that gvSIG now has. One of these parts that we have redone is the Column Manager.

The Column Manager is in charge of managing the layer schema. We can add new fields, modify or delete them, and consult the characteristics associated with those fields.

Appearance of the new column manager. If the layer is not in editing mode, the Modify icon appears disabled.

As always, the Column Manager can be accessed, once we have an attribute table open, through its button on the toolbar or from the Table -> Column Manager menu. It is no longer necessary for the layer to be in editing mode to access this manager. We can always enter in query mode.

In the previous version, modifying the characteristics of a field required deleting it and creating it again. This is no longer necessary. It greatly simplifies the creation of complicated schemas, since everything is in a single window with the ability to add, modify and delete whatever we need.

We can consult, for example, the parameters of a numeric field with decimals (double). Parameters such as size, precision and default value are shown.

The advantage of this new manager is that it comes with a large number of advanced options integrated into it. For certain fields we can set options such as disallowing nulls, a typical option in databases, which, thanks to this new manager and other improvements made in gvSIG, we can now use in ordinary shape layers. We will include more information on the use of the advanced parameters in future manuals.

Another example is the "Data profile" field. It is an advanced option that allows certain fields to behave like another data type. For example, some SQLite layers embed fields with ByteArray images. If we set in gvSIG that we want that ByteArray field to be read as an image, gvSIG will do so and will show it as an image in different parts of the program, for example in the forms.

We also see that it includes the option to mark a field as virtual, with its value computed from an expression. We will dedicate an article and specific documentation to this type of field. In this example image we see that we have created a virtual field; at the bottom we can see the expression that will be used, and at the top it is marked as a calculated field.

For date fields, we will see that the Time tab is enabled. In this tab it is possible to set time ranges, which can be used to create time-based animations of our data. These features were previously included in a separate extension; now everything comes integrated in gvSIG.

Other visualization tabs are also included, where a large number of parameters can be set, both to improve the display of this layer's forms to our liking and for other advanced options that we will also see later on.

The manager is very simple to operate. If we want to modify a field, we select it, press Modify, change the desired values and accept again on the right. If we want to add a field, we press New, go to the bottom of the window, fill in the appropriate values, accept, and check that the field now appears above. To exit, we press the Accept button at the very bottom. To delete a field, we simply select it and press Delete.

This extra information that we add is, in some cases, saved in a new file next to our layers. In the case of shapefiles, a file with a .dal extension will appear. That dal file can only be read when it is loaded in gvSIG. If it is deleted we will not lose anything important, only the special parameters we have set in the manager and the virtual fields we had created. There is also support for other file types that are not shape layers, such as SQLite.

In summary, it is a tool whose potential has been enormously increased and which allows many options that the previous tool did not have, including many that did not exist before.

by Óscar Martínez at May 14, 2019 03:34 PM

May 09, 2019

On April 10th we launched our invitation to all startups, developers, students and researchers to envision and build valuable open source applications using large volumes of EO data and state-of-the-art technologies. We are honoured that our call received great attention from the community, which materialised in strong proposals and solid partnerships. Yet the interest in the EO Data Challenge, from both submitters and potential partners, has not faded at all; it has increased significantly as we approached the deadline. Thus, by popular demand, we have decided to extend the deadline to the 5th of June.

There are three ways you can get involved in FOSS4G EO Data Challenge:

  • Submit a solution for an existing challenge. Our early partners proposed a number of challenges along with mentors and suitable infrastructure. Check them and join the challenge you think you have a solution for.
  • Submit your own challenge idea and solution. Tell us if you have your own challenge idea and potential solution. We will find you a good mentor and adequate infrastructure to turn that idea into reality.
  • Propose a new challenge. Join us as a partner by proposing a new challenge.


All participants will have our gratitude and applause, yet only the best solutions will be rewarded. All prizes will be announced by the FOSS4G EO Data Challenge Committee in a ceremony on the 30th of August and will consist of:

  • At least four prizes of 2000 – 2500 Euro each for the best solutions to new or predefined challenges;
  • 2500 Euro for the best solution for “EO based retrospective time series analysis”;
  • 2500 Euro for the best solution developed on top of ADAM platform;
  • 2000 Euro for the best solution for “Explore your country using KOMPSAT”;
  • 2000 Euro for the best solution for a challenge that is using Copernicus Atmosphere Monitoring Service (CAMS) data;
  • 2000 Euro for the best solution for a challenge that is using Copernicus Climate Change Service (C3S) data and tools;
  • One winning team, selected by the European Space Agency, will be invited to join the ESA-ESRIN Copernicus app camp from 16-20 September 2019 all expenses paid;
  • Consulting to turn your idea into a sustainable business case;
  • Credits to be used on FOSS4G 2019 EO Data Challenge Infrastructure Partners.


Find out the new EO Data Challenge partners and extended possibilities here.

by Vasile Crăciunescu at May 09, 2019 09:42 PM

May 08, 2019

The new gvSIG 2.5 version brings a new functionality that greatly increases the power of gvSIG and can be used in different parts of the application. This new functionality is called Expressions.

What is an Expression?

An expression is a formula used for calculations. For example, if we want to make a selection or calculate new values, we can do it with an expression.

We have created a window for building these expressions, and you will find it in different places in gvSIG Desktop. This window may vary slightly depending on the purpose of the tool it is used in.

In what kind of operations can it be useful?

We have introduced support for simple as well as very advanced calculations using these expressions. It is now possible to operate on different types of fields (text, numbers, dates...), on new classes more common in programming (colors, images...) and, very importantly, on geometries, either to extract information (vertices, perimeter, area...) or to create or modify the geometry (buffers...).

And the power of expressions does not end there; they can be used for more powerful calculations. Different layers can take part in these operations, we can create joins between tables or introduce programming code directly in the expression, and much more.

Access the store of a layer

We can also easily access other parts of gvSIG. For example, we can access points saved with the "Coordinate Capture" plugin and use them in expressions.

Where can they be used?

We are adding this functionality to more parts of gvSIG day by day.

They can be used to create advanced selections.

In the Field Calculator we can use expressions to fill our tables with new values derived from advanced calculations.

For example, we can create a unique identifier in this table. The expression concatenates several fields as text and then appends BIG if the area of the geometry is greater than 100, or SMALL if it is smaller.
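As a sketch of the logic such an expression encodes (the field names `province`, `code` and `area` are made up for illustration; in gvSIG this would be written in the expression language of the Field Calculator, not in JavaScript):

```javascript
// Build an identifier by concatenating text fields and appending a
// size tag derived from the geometry's area. (Hypothetical fields;
// gvSIG expresses this with its own expression functions.)
function makeIdentifier(row) {
  const sizeTag = row.area > 100 ? 'BIG' : 'SMALL';
  return `${row.province}-${row.code}-${sizeTag}`;
}

console.log(makeIdentifier({ province: 'VAL', code: '0042', area: 250 })); // VAL-0042-BIG
console.log(makeIdentifier({ province: 'VAL', code: '0043', area: 12 }));  // VAL-0043-SMALL
```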


These expressions can also be used in virtual fields. A virtual field is a field defined by an expression: it stores the expression, not the value. If any parameter of the formula changes, the value of the virtual field changes with it.

For example, in this case we have a virtual field AREA with the formula ST_AREA(GEOMETRY), which calculates the area of the geometry. If we change the geometry, the value of the expression changes.

We can then use AREA (the virtual field) as a label for the polygon. When we change the geometry, the label changes too.
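To make the example concrete, this is the kind of computation a function like ST_AREA performs for a simple polygon ring, sketched here with the shoelace formula (an illustration only, not gvSIG's implementation):

```javascript
// Area of a simple polygon ring of [x, y] vertices via the shoelace
// formula. This is what an ST_AREA-style expression evaluates for a
// polygon; change the ring and the "virtual field" value changes too.
function ringArea(ring) {
  let twiceArea = 0;
  for (let i = 0, j = ring.length - 1; i < ring.length; j = i++) {
    twiceArea += (ring[j][0] + ring[i][0]) * (ring[j][1] - ring[i][1]);
  }
  return Math.abs(twiceArea) / 2;
}

// A 10 x 10 square.
console.log(ringArea([[0, 0], [10, 0], [10, 10], [0, 10]])); // 100
```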

In the geoprocessing framework we can find the "Filter expression" parameter, which filters the input features of the layer.

Every day we are introducing these expressions into more parts of gvSIG for everything: calculations, filters, exports, etc.

We will create dedicated documentation on building expressions. It will be ready soon.

This is one of the biggest improvements in gvSIG 2.5, and it can already be tested in the latest gvSIG 2.5 builds.

If you have any doubts or find any errors, you can send us the information through the mailing lists.

by Óscar Martínez at May 08, 2019 02:52 PM

Today we are going to talk about the Polylines Offset plugin, which adds the ability to draw a line with a relative pixel offset, without modifying its actual LatLngs. The offset value can be negative or positive, for a left-side or right-side offset, and remains constant across zoom levels.

The idea of the plugin is to draw a line parallel to an existing one, at a fixed distance. It is not a simple (x, y) translation of the whole shape, since the lines must not overlap. It can be used to visually emphasise different properties of the same linear feature, or to build a complex composite style.
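The geometric idea can be sketched in a few lines: each segment is shifted along its normal by a fixed distance. The real plugin works in screen pixels and joins consecutive segments; this sketch only shows the core offset math for one segment:

```javascript
// Offset a single segment by a fixed distance along its left-hand
// normal (in standard math coordinates). The plugin additionally
// handles pixel space and the joins between consecutive segments.
function offsetSegment([x1, y1], [x2, y2], distance) {
  const dx = x2 - x1, dy = y2 - y1;
  const len = Math.hypot(dx, dy);
  // Unit normal pointing to the left of the direction of travel.
  const nx = -dy / len, ny = dx / len;
  return [
    [x1 + nx * distance, y1 + ny * distance],
    [x2 + nx * distance, y2 + ny * distance],
  ];
}

// A horizontal segment offset by 5 moves straight up.
console.log(offsetSegment([0, 0], [10, 0], 5)); // [[0, 5], [10, 5]]
```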

1. Installing the plugin

If you are using Node.js, you can install the plugin as follows:

npm install leaflet-polylineoffset

If you are not using Node.js, you can reference the URL directly in your HTML, pointing to GitHub:

<script src="http://bbecquet.github.io/Leaflet.PolylineOffset/leaflet.polylineoffset.js"></script>

2. Data

To represent the bus lines, we will use a file in GeoJSON format, as shown below:

{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "properties": {
        "lines": [0, 1]
      },
      "geometry": {
        "type": "LineString",
        "coordinates": [
          ...
        ]
      }
    }
  ]
}
It is important to highlight the lines property: it indicates which bus lines the segment (Feature) represents, and it can represent one or more bus lines. In other words, this GeoJSON contains the geographic information of all our bus lines, which in our case are 4.

3. Adding the stops

As in any bus network, we need some stops; for our map we will define them dynamically, as the code below shows:

        // Adding the stops
        var ends = [];
        function addStop(ll) {
          // record each stop coordinate only once
          for (var i = 0, found = false; i < ends.length && !found; i++) {
            found = (ends[i].lat == ll.lat && ends[i].lng == ll.lng);
          }
          if (!found) {
            ends.push(ll);
          }
        }
4. Generating the bus line segments

Now that our stops are created, we will take the data from the GeoJSON and organise the lines by segment, defining their properties, and also create the offset, which visually amounts to a buffer around the lines. We do it as follows:

        // Generate the line segments
        var lineSegment, linesOnSegment, segmentCoords, segmentWidth;
        geoJson.features.forEach(function(lineSegment) {
          segmentCoords = L.GeoJSON.coordsToLatLngs(lineSegment.geometry.coordinates, 0);

          linesOnSegment = lineSegment.properties.lines;
          segmentWidth = linesOnSegment.length * (lineWeight + 1);

          // Draw the line around the buffer
          L.polyline(segmentCoords, {
            color: '#000',
            weight: segmentWidth + 5,
            opacity: 1
          }).addTo(map);

          // Draw the buffer around the lines
          L.polyline(segmentCoords, {
            color: '#fff',
            weight: segmentWidth + 3,
            opacity: 1
          }).addTo(map);

          // Organise the lines per segment, setting color, width, opacity and offset
          for (var j = 0; j < linesOnSegment.length; j++) {
            L.polyline(segmentCoords, {
              color: lineColors[linesOnSegment[j]],
              weight: lineWeight,
              opacity: 1,
              offset: j * (lineWeight + 1) - (segmentWidth / 2) + ((lineWeight + 1) / 2)
            }).addTo(map);
          }

          addStop(segmentCoords[segmentCoords.length - 1]);
        });
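The offset expression in the loop above spreads the parallel lines symmetrically around the segment axis. A quick stand-alone check (lineWeight = 5 is just an example value):

```javascript
// The offset formula from the loop, evaluated on its own: with n
// parallel lines of weight w, the offsets come out symmetric around
// zero, so the bundle stays centred on the original polyline.
function lineOffsets(n, lineWeight) {
  const segmentWidth = n * (lineWeight + 1);
  const offsets = [];
  for (let j = 0; j < n; j++) {
    offsets.push(j * (lineWeight + 1) - segmentWidth / 2 + (lineWeight + 1) / 2);
  }
  return offsets;
}

console.log(lineOffsets(2, 5)); // [-3, 3]
console.log(lineOffsets(3, 5)); // [-6, 0, 6]
```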

Finally, let's add the bus stops and the layers to the map:

        // Adding the bus stops; each polyline above was already added
        // to the map with addTo(map)
        ends.forEach(function(endCoords) {
          L.circleMarker(endCoords, {
            color: '#000',
            fillColor: '#ccc',
            fillOpacity: 1,
            radius: 10,
            weight: 4,
            opacity: 1
          }).addTo(map);
        });

5. Result

After completing the steps above, you should get the following result:

6. The code

To download the complete code, click here.

by Fernando Quadro at May 08, 2019 12:52 PM

May 07, 2019

La nueva versión de gvSIG 2.5 trae una nueva funcionalidad que aumenta en una gran cantidad la potencia de gvSIG Desktop en diferentes puntos de la aplicación. Esta nueva funcionalidad son las Expresiones.

What is an expression?

An expression is a formula that can be used to perform a calculation. For example, if we want to select or filter the features of a layer, we can do it with an expression.

For this we have created a window that bundles all the functionality needed to build expressions, and it is the one you will find in different places throughout gvSIG. These windows are adapted according to how expressions are used in each place.

Below the expression text box, a preview of the expression result is shown.

What kind of calculations can it do?

gvSIG now supports calculations ranging from simple ones to very complicated ones involving very diverse functions: working with fields of every type (text, numeric, date, ...), working with classes that could not be used before (color or image objects, ...) and, very importantly, performing calculations on geometries, both extracting information from them (centroid, area, first vertex, ...) and modifying them (applying a buffer, ...).

And the power of expressions does not end there: they can be used for much more advanced calculations. These formulas can involve the different layers loaded in gvSIG; we could perform joins between tables, introduce programming code to create our own functions, and much more.

Accessing the store of a layer

It also includes functions into which we can feed points captured, for example, with the "Coordinate Capture" tool, or captured at the very moment the expression is created.

Where can they be used?

Little by little we are adding this functionality to different parts of gvSIG.

Expressions can be used to perform advanced selections.

Another place to use expressions is the Field Calculator. It allows advanced calculations that take the values of other fields of the features as parameters.

For example, creating a unique identifier for a parcel based on several fields and on the area of its geometry: if the area is greater than 100 it will write BIG, otherwise SMALL.
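As a rough sketch, the area part of such an expression could look like the following (hypothetical: only ST_AREA(GEOMETRY) appears elsewhere in this post, so the IF function is an assumption — check the function list in the expression window for the exact names available in your build):

```
IF(ST_AREA(GEOMETRY) > 100, 'BIG', 'SMALL')
```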


Expressions can also be used in virtual fields: fields that are computed from an expression. For example, we can have an AREA field that shows the value of the formula 'ST_AREA(GEOMETRY)'. This formula calculates the area of the geometry at that moment, so if we change the geometry, the field updates automatically.

When we change the area of the polygon, we can see that the labelled virtual Area field updates automatically.

In the geoprocessing framework, these expressions can be used to filter features, as we have already seen in other posts with the "Filter expression" parameter.

We keep introducing the use of expressions in more and more parts of gvSIG, for all kinds of calculations, filters, exports, etc.

In addition, we will prepare dedicated documentation on building expressions, which we will publish later on.

This is one of the great improvements included in the upcoming gvSIG Desktop 2.5 release, and it can already be tried in the latest testing builds of the program. We keep improving this area day by day ahead of the publication of the final version.

If you find any bug or have any recommendation, we encourage you to write to the gvSIG mailing lists.


by Óscar Martínez at May 07, 2019 02:13 PM

Two students were selected to work with gvSIG Desktop on GSoC! Two of the seven proposals of the OSGeo Foundation.

The accepted proposals are related to the new topology framework:

  • Creation of new topological rules in gvSIG Desktop: The project consists of the creation of three new topological rules in the gvSIG Desktop toolbox. These rules are "Must be disjoint", "Must not have dangles" and "Must be larger than cluster tolerance", which will help to verify the integrity of the spatial information, validate the representations and correct possible errors of point, linear and polygonal geometries. In short: ensuring the quality of geographic data with open source software.
  • New rules for the Topology Framework in gvSIG Desktop: A new topology toolbox for gvSIG Desktop. This tool will provide a group of integrity rules that will check the validity of the geometry relationships in the data. A new topology data model can be created for each project. This toolbox provides a new set of tools to navigate, find and fix the validation errors of each topology rule. Right now, there are just a few topology rules implemented, with limited actions. This project will analyze, implement and optimize a new set of rules that will be incorporated into this framework. These tools can be created in Java or in Jython through the Scripting composer tool.

Congratulations to all selected students!

by Alvaro at May 07, 2019 01:07 PM

One of the most frequent requirements for geographic web services that expose sensitive data is how to restrict access to them depending on the user's role. This post shows one way to restrict access to your geographic web services in QGIS Server.



Installing QGIS Server on Windows

To run QGIS Server on Windows you must install a web server such as Apache or NGINX. This post will not cover the installation in detail; if you know about web servers and can install one, great! If not, but you still want to follow this post on your own machine, you basically have two options:


1. Download OSGeo4W for 64 bits and follow the instructions in the QGIS docs, or,

2. Download OSGeo4W for 32 bits (yes, you read that right). The 32-bit version of the OSGeo4W installer includes Apache, which is fine for this demo. Of course, in a production environment you should use 64-bit packages.


Keep in mind that this post is based on paths for option 2. That is, if you choose the 64-bit installer…


May 07, 2019 02:43 AM

May 06, 2019

Dear QGIS Community

Our first three rounds of Grant Proposals were a great success. We are very pleased to announce the fourth round of grants is now available to QGIS contributors.

Based on community feedback, this year, we will not accept proposals for the development of new features. Instead, proposals should focus on infrastructure improvements and polishing of existing features.

The deadline for this round is Sunday, 2 June 2019. All the details for the grants are described in the application form, and for more context we encourage you to also read last year’s articles:

We look forward to seeing all your great ideas about how to improve QGIS!

Anita Graser


by underdark at May 06, 2019 07:20 PM

May 05, 2019

Yesterday, without warning, I was made to get up and say a few words at the annual dinner of the Escuela Técnica Superior de Ingeniería Cartográfica, Geodésica y Topográfica of the Universidad Politécnica de Valencia. It is a dinner the students organize with great enthusiasm, which some lecturers and non-teaching staff also attend, and to which guests from private companies are invited. This was my third dinner, and you can see photos and shenanigans at #TopoGala.

The thing is, I was tired and had drunk a bit, so the truth is I did not say anything particularly interesting, even though last year's dinner had already left me with a strange aftertaste. Let me explain.

The geomatics and civil engineering sector is not at its best; although the worst seems to be over, it is still hard for new professionals to carve out a place for themselves. In wartime every hole is a trench, so releasing into the world bright people, with languages and a certain capacity for responsibility, was not going to be ignored by consulting firms that, apparently, do not find it easy to recruit junior software engineers. It's the market, my friend.

So, with that context, these are the words I would have said to those 100 or 120 bachelor's and master's students if I were given the chance again.

Hello, thank you very much for inviting me to the Patron's dinner. I had a great time with you, and I am sure you did too. Right?


Thank you, thank you. Then I am also sure you are studying this profession because you are passionate about geomatics, cartography, civil engineering, remote sensing, territorial analysis or geodesy. For me this is a great profession, the best one actually. You agree with me, right?


OK, I am almost done. Well, if you are having a good time and this profession is your passion, then the next time you have in front of you a job offer that is nowhere near our sector, think about whether that position and that company deserve your talent and potential; maybe it is better to continue your training or keep looking a little longer.

Think it over. Thank you very much, see you around.

[awkward silence]

I suppose I can be criticized for writing from "survival bias", and told that it is not so easy to turn down an employer that is giving jobs to your graduates even if their sector has nothing to do with yours, that everything is in bad shape and all that, but well, I could not let the occasion pass without telling the story here.

by Jorge at May 05, 2019 10:12 PM

May 03, 2019

If you’ve been following my posts, you’ll no doubt have seen quite a few flow maps on this blog. This tutorial brings together many different elements to show you exactly how to create a flow map from scratch. It’s the result of a collaboration with Hans-Jörg Stark from Switzerland who collected the data.

The flow data

The data presented in this post stems from a survey conducted among public transport users, especially commuters (available online at: https://de.surveymonkey.com/r/57D33V6). Among other questions, the questionnaire asks where the commuters start their journey and where they are heading.

The answers had to be cleaned up to correct for different spellings, spelling errors, and multiple locations in one field. This cleaning and the following geocoding step were implemented in Python. Afterwards, the flow information was aggregated to count the number of nominations of each connection between different places. Finally, these connections (edges that contain start id, destination id and number of nominations) were stored in a text file. In addition, the locations were stored in a second text file containing id, location name, and co-ordinates.
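The counting step described above can be sketched in a few lines of Python (the record layout here is invented for illustration; the actual processing code is not part of this post):

```python
from collections import Counter

# Hypothetical cleaned & geocoded answers: one (start_id, destination_id)
# pair per survey response.
responses = [(1, 2), (1, 2), (3, 2), (1, 2), (3, 4)]

# Count the number of nominations of each connection.
edges = Counter(responses)

# Emit the edges file: start id, destination id, number of nominations.
lines = ["{};{};{}".format(s, d, w) for (s, d), w in sorted(edges.items())]
print("\n".join(lines))
```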

Why was this data collected?

Besides travel demand, Hans-Jörg’s survey also asks participants about their coffee consumption during train rides. Here’s how he tells the story behind the data:

As a nearly daily commuter I like to enjoy a hot coffee on my train rides. But what has bugged me for a long time is the fact that coffee and hot beverages in general are almost always served in a non-reusable, "one-use-only-and-then-throw-away" cup. So I ended up buying one of these mostly ugly and space-consuming reusable cups. Neither system seems to satisfy me as a customer: the paper cup produces a lot of waste, though it is convenient because I carry it only when I need it. With the re-usable cup, I carry it all day even though most of the time it is empty, and it is clumsy and consumes the limited space in my bag.

So I have been looking for a system that gets rid of the disadvantages or rather provides the advantages of both approaches and I came up with the following idea: Installing a system that provides a re-usable cup that I only have with me when I need it.

In order to evaluate the potential for such a system – which would not only imply a material change of the cups in terms of hardware but also introduce a software solution with the convenience of getting back the deposit that I pay as a customer, and a back-end software solution that handles all the cleaning, distribution to the different coffee shops, and managing balanced stocking in the stations – I conducted a survey.

The next step was the geographic visualization of the flow data and this is where QGIS comes into play.

The flow map

Survey data like the one described above is a common input for flow maps. There’s usually a point layer (here: “nodes”) that provides geographic information and a non-spatial layer (here: “edges”) that contains the information about the strength or weight of a flow between two specific nodes:

The first step therefore is to create the flow line features from the nodes and edges layers. To achieve our goal, we need to join both layers. Sounds like a job for SQL!

More specifically, this is a job for Virtual Layers: Layer | Add Layer | Add/Edit Virtual Layer

SELECT StartID, DestID, Weight, 
       make_line(a.geometry, b.geometry)
FROM edges
JOIN nodes a ON edges.StartID = a.ID
JOIN nodes b ON edges.DestID = b.ID
WHERE a.ID != b.ID 

This SQL query joins the geographic information from the nodes table to the flow weights in the edges table based on the node IDs. In the last line, there is a check that start and end node ID should be different in order to avoid zero-length lines.
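Outside QGIS, the same join can be prototyped with plain SQLite; make_line() is specific to QGIS virtual layers, so this sketch (with made-up sample data) checks only the attribute logic:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE nodes (ID INTEGER, Name TEXT, X REAL, Y REAL);
CREATE TABLE edges (StartID INTEGER, DestID INTEGER, Weight INTEGER);
INSERT INTO nodes VALUES (1, 'A', 8.5, 47.4), (2, 'B', 7.4, 46.9);
INSERT INTO edges VALUES (1, 2, 42), (1, 1, 99);
""")

# Same join as in the virtual layer, minus make_line(); the WHERE clause
# discards the zero-length (1, 1) flow.
rows = con.execute("""
SELECT StartID, DestID, Weight, a.X, a.Y, b.X, b.Y
FROM edges
JOIN nodes a ON edges.StartID = a.ID
JOIN nodes b ON edges.DestID = b.ID
WHERE a.ID != b.ID
""").fetchall()
print(rows)  # only the (1, 2) flow survives
```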

By styling the resulting flow lines using data-driven line width and adding in some feature blending, it’s possible to create some half decent maps:

However, we can definitely do better. Let’s throw in some curved arrows!

The arrow symbol layer type automatically creates curved arrows if the underlying line feature has three nodes that are not aligned on a straight line.

Therefore, to turn our straight lines into curved arrows, we need to add a third point to the line feature and it has to have an offset. This can be achieved using a geometry generator and the offset_curve() function:


Additionally, to achieve the effect described in New style: flow map arrows, we extend the geometry generator to crop the lines at the beginning and end, subtracting small buffers around the start and end points from the offset line:

      difference(
         difference(
            offset_curve($geometry, ...),
            buffer(start_point($geometry), 0.01)
         ),
         buffer(end_point($geometry), 0.01)
      )

By applying data-driven arrow and arrow head sizes, we can transform the plain flow map above into a much more appealing map:

The two different arrow colors are another way to emphasize flow direction. In this case, orange arrows mark flows to the west, while blue flows point east.

 x(start_point($geometry)) - x(end_point($geometry)) < 0


As you can see, virtual layers and geometry generators are a powerful combination. If you encounter performance problems with the virtual layer, it’s always possible to make it permanent by exporting it to a file. This will speed up any further visualization or analysis steps.

by underdark at May 03, 2019 11:00 PM

May 02, 2019

These past days, I've been experimenting with Docker to produce "official" nightly builds of GDAL.
Docker Hub is supposed to have an automated build mechanism, but the cloud resources behind that feature seem to be insufficient to sustain the demand, and builds tend to drag on forever.
Hence I decided to set up a local cron job to refresh my images and push them. Of course, as there are currently 5 different Dockerfile configurations and building both PROJ and GDAL from scratch can be time consuming, I wanted this to be as efficient as possible. One observation is that, between two nightly builds, very few files change on average, so ideally I would want to recompile only the ones that have changed, and have the minimum number of updated Docker layers refreshed and pushed.

There are several approaches I combined together to optimize the builds. For those already familiar with Docker, you can probably skip to the "Use of ccache" section of this post.

Multi-stage builds

This is a Docker 17.05 feature in which you can define several stages (each forming a separate image), where later stages can copy files from the file system of previous stages. Typically you use a two-stage approach.
The first stage installs the development packages, builds the application and installs it into some /build directory.
The second stage starts from a minimal image, installs the runtime dependencies, and copies the binaries generated in the previous stage from /build into the root of the final image.
This approach avoids shipping any development packages in the final image, which keeps it lean.

Such a Dockerfile looks like:

FROM ubuntu:18.04 AS builder
RUN apt-get update && apt-get install -y g++ make
RUN ./configure --prefix=/usr && make && make install DESTDIR=/build

FROM ubuntu:18.04 AS finalimage
RUN apt-get update && apt-get install -y libstdc++6
COPY --from=builder /build/usr/ /usr/

Fine-grained layering of the final image

Each step in a Dockerfile generates a layer, and the layers chained together form an image.
When pulling/pushing an image, layers are processed individually, and only the ones that are not present on the target system are pulled/pushed.
One important note is that the refresh/invalidation of a step/layer causes the
refresh/invalidation of all later steps/layers (even if the content of a later layer does
not change in a user-observable way, its internal ID will change).
So one approach is to put first in the Dockerfile the steps that change least frequently, such as dependencies coming from the package manager, third-party dependencies whose versions rarely change, etc., and the applicative part at the end. Even the applications refreshed as part of the nightly builds can be decomposed into fine-grained layers.
In the case of GDAL and PROJ, the installed directories are:

The lib directory is the most varying one (each time a .cpp file changes, the .so changes),
whereas installed include files and resources tend to be updated less frequently.

So a better ordering of our Dockerfile is:
COPY --from=builder /build/usr/share/gdal/ /usr/share/gdal/
COPY --from=builder /build/usr/include/ /usr/include/
COPY --from=builder /build/usr/bin/ /usr/bin/
COPY --from=builder /build/usr/lib/ /usr/lib/

With one subtlety: as part of our nightly builds, the sha1sum of the HEAD of the git repository is embedded in a string in $prefix/usr/include/gdal_version.h. So in the builder stage, I separate that precise file from the other include files and put it in a dedicated /build_most_varying target together with the .so files.

RUN [..] \
    && make install DESTDIR=/build \
    && mkdir -p /build_most_varying/usr/include \
    && mv /build/usr/include/gdal_version.h /build_most_varying/usr/include \
    && mv /build/usr/lib /build_most_varying/usr

And thus, the finalimage stage is slightly changed to:

COPY --from=builder /build/usr/share/gdal/ /usr/share/gdal/
COPY --from=builder /build/usr/include/ /usr/include/
COPY --from=builder /build/usr/bin/ /usr/bin/
COPY --from=builder /build_most_varying/usr/ /usr/

Layer depending on a git commit

In the builder stage, the step that refreshes the GDAL build depends on an
argument, GDAL_VERSION, which defaults to "master":

RUN wget -q https://github.com/OSGeo/gdal/archive/${GDAL_VERSION}.tar.gz \
    && build instructions here...

Due to how Docker layer caching works, building this Dockerfile several times in a row would not refresh the GDAL build (unless you invoke docker build with the --no-cache switch, which disables all layer caching). So the script that triggers the docker build gets the sha1sum of the latest git commit and passes it with:

GDAL_VERSION=$(curl -Ls https://api.github.com/repos/OSGeo/gdal/commits/HEAD -H "Accept: application/vnd.github.VERSION.sha")
docker build --build-arg GDAL_VERSION=${GDAL_VERSION} -t myimage .

In the (unlikely) event that the GDAL repository has not changed, no
new build would even be attempted.

Note: this part is not necessarily a best practice. Other Docker mechanisms,
such as using a Git URL as the build context, could potentially be used. But as
we want to be able to refresh both GDAL and PROJ, that would not really be suitable.
Another advantage of the above approach is that the Dockerfile is self-sufficient:
an image can be created with just "docker build -t myimage ."

Use of ccache

This is the part for which I could not find a ready-made, easy-to-deploy solution.

With the previous techniques, we have a black-and-white situation. A GDAL build is either entirely cached by the Docker layer caching, when the repository did not change at all, or completely redone from scratch when the commit id has changed (possibly by a change that does not affect the installed files at all). It would be better if we could use ccache to minimize the number of files to be rebuilt.
Unfortunately, it is not possible with docker build to mount a volume where the ccache directory would be stored (apparently because of security concerns). There is an experimental RUN --mount=type=cache feature in Docker 18.06 that could perhaps be used equivalently, but it requires both the client and the daemon to be started in experimental mode.

The trick I use, which has the benefit of working with a default Docker installation, is to download, from within the Docker build container, the content of a ccache directory stored on the host, do the build, and then upload the modified ccache back to the host.

I use rsync for that, as it is simple to set up. Initially, I ran an rsync daemon directly on the host, but inspired by https://github.com/WebHare/ccache-memcached-server, which proposes an alternative, I modified the setup to run the daemon in a Docker container, gdal_rsync_daemon, which mounts the host ccache directory. The benefit of my approach over the ccache-memcached-server one is that it does not require a patched version of ccache to run in the build instance.

So the synopsis is:

host cache directory <--> gdal_rsync_daemon (docker instance)  <------> Docker build instance
                  (docker volume mounting)                           (rsync network protocol)

You can consult here the relevant portion of the launching script which builds and launches the gdal_rsync_daemon. And the corresponding Dockerfile step in the builder stage is rather straightforward:

# for alpine. or equivalent with other package managers
RUN apk add --no-cache rsync ccache

RUN if test "${RSYNC_REMOTE}" != ""; then \
        echo "Downloading cache..."; \
        rsync -ra ${RSYNC_REMOTE}/gdal/ $HOME/; \
        export CC="ccache gcc"; \
        export CXX="ccache g++"; \
        ccache -M 1G; \
    fi \
    # omitted: download source tree depending on GDAL_VERSION
    # omitted: build
    && if test "${RSYNC_REMOTE}" != ""; then \
        ccache -s; \
        echo "Uploading cache..."; \
        rsync -ra --delete $HOME/.ccache ${RSYNC_REMOTE}/gdal/; \
        rm -rf $HOME/.ccache; \
    fi

I also considered a simplified variation of the above that would not use rsync: after the build, we would "docker cp" the cache from the build image to the host and, at the next build, copy the cache into the build context. But that would have two drawbacks:
  • our build layers would contain the cache;
  • any change in the cache would cause the build context to be different and subsequent builds to have their cached layers invalidated.


We have managed to create a Dockerfile that can be used standalone
to create a GDAL build from scratch, or integrated with a wrapper build.sh
script that offers incremental rebuild capabilities to minimize the use of CPU resources. The image has fine-grained layering, which also minimizes upload and download times for frequent push/pull operations.

by Even Rouault (noreply@blogger.com) at May 02, 2019 12:52 PM

May 01, 2019

This post looks into the current AI hype and how it relates to geoinformatics in general and movement data analysis in GIS in particular. This is not an exhaustive review but aims to highlight some of the development within these fields. There are a lot of references in this post, including some to previous work of mine, so you can dive deeper into this topic on your own.

I’m looking forward to reading your take on this topic in the comments!

Introduction to AI

The dream of artificial intelligence (AI) that can think like a human (or even outsmart one) reaches back to the 1950s (Fig. 1, Tandon 2016). Machine learning aims to enable AI. However, classic machine learning approaches that have been developed over the last decades (such as: decision trees, inductive logic programming, clustering, reinforcement learning, neural networks, and Bayesian networks) have failed to achieve the goal of a general AI that would rival humans. Indeed, even narrow AI (technology that can only perform specific tasks) was mostly out of reach (Copeland 2018).

However, recent increases in computing power (be it GPUs, TPUs or CPUs) and algorithmic advances, particularly those based on neural networks, have brought this dream (or nightmare) closer (Rao 2017) and are fueling the current AI hype. It should be noted that artificial neural networks (ANN) are not a new technology. In fact, they used to be rather unpopular because they require large amounts of input data and computational power. However, in 2012, Andrew Ng at Google managed to create large enough neural networks and train them with massive amounts of data, an approach now known as deep learning (Copeland 2018).

Fig. 1: The evolution of artificial intelligence, machine learning, and deep learning. (Image source: Tandon 2016)

Machine learning & GIS

GIScience or geoinformatics is not new to machine learning. The most well-known application is probably supervised image classification, as implemented in countless commercial and open tools. This approach requires labeled training and test data (Fig. 2) to learn a prediction model that can, for example, classify land cover in remote sensing imagery. Many classification algorithms have been introduced, ranging from maximum likelihood classification to clustering (Congedo 2016) and neural networks.

Fig. 2: With supervised machine learning, the algorithm learns from labeled data. (Image source: Salian 2018)

Like in other fields, neural networks have intrigued geographers and GIScientists for a long time. For example, Hewitson & Crane (1994) state that “Neural nets offer a fascinating new strategy for spatial analysis, and their application holds enormous potential for the geographic sciences.” Early uses of neural networks in GIScience include, for example: spatial interaction modeling (Openshaw 1998) and hydrological modeling of rainfall runoff (Dawson & Wilby 2001). More recently, neural networks and deep learning have enabled object recognition in georeferenced images. Most prominently, the research team at Mapillary (2016-2019) works on object recognition in street-level imagery (including fusion with other spatial data sources). Even Generative adversarial networks (GANs) (Fig. 3) have found their application in GIScience: for example, Zhu et al. (2017) (at the Berkeley AI Research (BAIR) laboratory) demonstrate how GANs can generate road maps from aerial images and vice versa, and Zhu et al. (2019) generate artificial digital elevation models.

Fig. 3: In a GAN, the discriminator is shown images from both the generator and from the training dataset. The discriminator is tasked with determining which images are real, and which are fakes from the generator. (Image source: Salian 2018)

However, besides general excitement about new machine learning approaches, researchers working on spatial analysis (Openshaw & Turton 1996) caution that “conventional classifiers, as provided in statistical packages, completely ignore most of the challenges of spatial data classification and handle a few inappropriately from a geographical perspective”. For example, data transformation using principal component or factor scores is sensitive to the non-normal data distributions common in geographic data, and many methods ignore spatial autocorrelation completely (Openshaw & Turton 1996). And neural networks are no exception: Convolutional neural networks (CNNs) are generally regarded as appropriate for any problem involving pixels or spatial representations. However, Liu et al. (2018) demonstrate that they fail even for the seemingly trivial coordinate transform problem, which requires learning a mapping between coordinates in (x, y) Cartesian space and coordinates in one-hot pixel space.
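To make the coordinate transform problem concrete: a hand-written program solves it exactly in a couple of lines, which is what makes the reported failure of CNNs to learn it from examples so striking. An illustrative sketch (not code from the paper):

```python
def coords_to_onehot(x, y, size=8):
    """Direct solution of the coordinate transform task: place a single 1
    at pixel (x, y) of an otherwise empty size x size grid."""
    grid = [[0.0] * size for _ in range(size)]
    grid[y][x] = 1.0  # row index = y, column index = x
    return grid

g = coords_to_onehot(3, 5)
```

A CNN trained on (coordinate, grid) example pairs has to recover exactly this mapping; Liu et al. (2018) show that standard convolutions generalize poorly on it, motivating their CoordConv layer.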

The integration of spatial data challenges into machine learning is an ongoing area of research, for example in geostatistics (Hengl & Heuvelink 2019).

Machine learning and movement data

More and more movement data of people, vehicles, goods, and animals is becoming available. Developments in intelligent transportation systems specifically have been sparked by the availability of cheap GPS receivers and many models have been built that leverage floating car data (FCD) to classify traffic situations (for example, using visual analysis (Graser et al. 2012)), predict traffic speeds (for example, using linear regression models (Graser et al. 2016)), or detect movement anomalies (for example, using Gaussian mixture models (Graser & Widhalm 2018)). Beyond transportation, Valletta et al. (2017) describe applications of machine learning in animal movement and behavior.

Of course, deep learning is making its way into movement data analysis as well. For example, Wang et al. (2018) and Kudinov (2018) trained neural networks to predict travel times in transport networks. In contrast to conventional travel time prediction models (based on street graphs with associated speeds or travel times), these are considerably more computationally intensive. Kudinov (2018), for example, used 300 million simulated trips (start and end location, start time, and trip duration) as input and “spent about eight months of running one of the GP100 cards 24-7 in a search for an efficient architecture, spatial and statistical distributions of the training set, good values for multiple hyperparameters”. More recently, Zhang et al. (2019) (at Microsoft Research Asia) used deep learning to predict flows in spatio-temporal networks. It remains to be seen whether deep learning will manage to outperform classical machine learning approaches for predictions in the transportation sector.

What would a transportation AI look like? Would it be able to drive a car and follow data-driven route recommendations (e.g. from waze.com) or would it purposefully ignore them because other – more basic systems – blindly follow it? Logistics AI might build on these kind of systems while simultaneously optimizing large fleets of vehicles. Transport planning AI might replace transport planners by providing reliable mobility demand predictions as well as resulting traffic models for varying infrastructure and policy scenarios.


The opportunities for using ML in geoinformatics are extensive and are continuously being explored for a multitude of research problems and applications (from land use classification to travel time prediction). Geoinformatics is largely playing catch-up with the rapid development of machine learning (including deep learning), which promises new and previously unseen possibilities. At the same time, geoinformatics researchers need to be aware of the particularities of spatial data, for example, by developing models that take spatial autocorrelation into account. Future research in geoinformatics should incorporate learnings from geostatistics to ensure that the resulting machine learning models incorporate the geographical perspective.


  • Congedo, L. (2016). Semi-Automatic Classification Plugin Documentation. DOI: http://dx.doi.org/10.13140/RG.2.2.29474.02242/1
  • Copeland, M. (2016) What’s the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning? https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/
  • Dawson, C. W., & Wilby, R. L. (2001). Hydrological modelling using artificial neural networks. Progress in physical Geography, 25(1), 80-108.
  • Graser, A., Ponweiser, W., Dragaschnig, M., Brandle, N., & Widhalm, P. (2012). Assessing traffic performance using position density of sparse FCD. In Intelligent Transportation Systems (ITSC), 2012 15th International IEEE Conference on (pp. 1001-1005). IEEE.
  • Graser, A., Leodolter, M., Koller, H., & Brändle, N. (2016) Improving vehicle speed estimates using street network centrality. International Journal of Cartography. doi:10.1080/23729333.2016.1189298.
  • Graser, A., & Widhalm, P. (2018). Modelling Massive AIS Streams with Quad Trees and Gaussian Mixtures. In: Mansourian, A., Pilesjö, P., Harrie, L., & von Lammeren, R. (Eds.), 2018. Geospatial Technologies for All: short papers, posters and poster abstracts of the 21st AGILE Conference on Geographic Information Science. Lund University 12-15 June 2018, Lund, Sweden. ISBN 978-3-319-78208-9. Accessible through https://agile-online.org/index.php/conference/proceedings/proceedings-2018
  • Hengl, T., & Heuvelink, G.B.M. (2019) Workshop on Machine learning as a framework for predictive soil mapping https://www.cvent.com/events/pedometrics-2019/custom-116-81b34052775a43fcb6616a3f6740accd.aspx?dvce=1
  • Hewitson, B., Crane, R. G. (Eds.) (1994) Neural Nets: Applications in Geography. Springer.
  • Kudinov, D. (2018) Predicting travel times with artificial neural network and historical routes. https://community.esri.com/community/gis/applications/arcgis-pro/blog/2018/03/27/predicting-travel-times-with-artificial-neural-network-and-historical-routes
  • Liu, R., Lehman, J., Molino, P., Such, F. P., Frank, E., Sergeev, A., & Yosinski, J. (2018). An intriguing failing of convolutional neural networks and the coordconv solution. In Advances in Neural Information Processing Systems (pp. 9605-9616).
  • Mapillary Research (2016-2019) publications listed on https://research.mapillary.com/
  • Openshaw, S., & Turton, I. (1996). A parallel Kohonen algorithm for the classification of large spatial datasets. Computers & Geosciences, 22(9), 1019-1026.
  • Openshaw, S. (1998). Neural network, genetic, and fuzzy logic models of spatial interaction. Environment and Planning A, 30(10), 1857-1872.
  • Rao, R. C.S. (2017) New Product breakthroughs with recent advances in deep learning and future business opportunities. https://mse238blog.stanford.edu/2017/07/ramdev10/new-product-breakthroughs-with-recent-advances-in-deep-learning-and-future-business-opportunities/
  • Salian, I. (2018) SuperVize Me: What’s the Difference Between Supervised, Unsupervised, Semi-Supervised and Reinforcement Learning? https://blogs.nvidia.com/blog/2018/08/02/supervised-unsupervised-learning/
  • Tandon, K. (2016) AI & Machine Learning: The evolution, differences and connections https://www.linkedin.com/pulse/ai-machine-learning-evolution-differences-connections-kapil-tandon/
  • Valletta, J. J., Torney, C., Kings, M., Thornton, A., & Madden, J. (2017). Applications of machine learning in animal behaviour studies. Animal Behaviour, 124, 203-220.
  • Wang, D., Zhang, J., Cao, W., Li, J., & Zheng, Y. (2018). When will you arrive? estimating travel time based on deep neural networks. In Thirty-Second AAAI Conference on Artificial Intelligence.
  • Zhang, J., Zheng, Y., Sun, J., & Qi, D. (2019). Flow Prediction in Spatio-Temporal Networks Based on Multitask Deep Learning. IEEE Transactions on Knowledge and Data Engineering.
  • Zhu, J. Y., Park, T., Isola, P., & Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision (pp. 2223-2232).
  • Zhu, D., Cheng, X., Zhang, F., Yao, X., Gao, Y., & Liu, Y. (2019). Spatial interpolation using conditional generative adversarial neural networks. International Journal of Geographical Information Science, 1-24.

by underdark at May 01, 2019 05:55 PM

The OSGeo Foundation has been selected to participate in Google's Season of Docs initiative. We've identified tasks based on OSGeoLive, QGIS and GeoNetwork.
Our approach is a little different. It is not just about searching for paid superstar technical writers (although superstars are welcome). We also want to support and expand our existing volunteer community. These are ordinary people, gifting bursts of effort toward small, discrete and achievable tasks, to collectively achieve an extraordinary impact.

Ideas we'd like to explore:

  • Open source projects face sustainability challenges. How will docs developed during Season-of-Docs be maintained long term?
  • Can a writer’s expertise be amplified to help community users and developers write good docs more effectively and efficiently?
  • Could best practices developed in one project be applied to the greater open source eco-system?
Are you interested? If so, please introduce yourself on our email list, or contact me directly.

Cameron Shorter

by Cameron Shorter (noreply@blogger.com) at May 01, 2019 09:33 AM

April 30, 2019

We are pleased to announce the release of GeoServer 2.15.1 with downloads (zip|war|exe), documentation (html|pdf) and extensions.

This is a stable release recommended for production. This release is made in conjunction with GeoTools 21.1 and GeoWebCache 1.15.1. Thanks to everyone who contributed to this release.

For more information see the GeoServer 2.15.1 release notes.

Improvements and Fixes

This release includes a number of fixes and improvements, including:

  • Addressed “potentially malicious String” error encountered when first establishing a session with the web administration application.
  • Importer fixed to connect to and create PostGIS datastore
  • Fix REST API user creation problem
  • Map preview fix to display projected maps with WMS 1.3.0
  • WCS 2.0.1 metadata fix for GetCapabilities and DescribeCoverage
  • WCS 1.0.0 and WCS 2.0 fixes for elevation and custom dimension use
  • GetFeatureInfo template can now access metadata for raster layers
  • Styling improvements respecting followLine and maxAngleDelta

About GeoServer 2.15 Series

Additional information on the 2.15 series:

Java 11 compatibility is the result of a successful code sprint. Thanks to participating organizations (Boundless, GeoSolutions, GeoCat, Astun Technology, CCRi) and sprint sponsors (Gaia3D, atol, osgeo:uk, Astun Technology).

by jgarnett at April 30, 2019 01:59 PM

A heatmap is a graphical representation of data in which the individual values contained in a matrix are represented as colors. “Heat map” is a fairly recent term, but shading matrices have existed for more than a century.

You can create a heatmap to present your information on the web with Leaflet. The Leaflet.heat plugin is a simple and fast way to aggregate your points into a grid.

1. Installation

To include the plugin, simply reference leaflet-heat.js from the dist folder:

<script src="leaflet-heat.js"></script>

2. Using the plugin

To use the plugin you need to instantiate an L.heatLayer:

var heat = L.heatLayer([
	[50.5, 30.5, 0.2], // lat, lng, intensity
	[50.6, 30.4, 0.5],
], {radius: 25}).addTo(map);

3. Parameters

When creating your heatmap, as shown above, you use the L.heatLayer class. It accepts several configuration options you should be aware of, which we describe below.

minOpacity – minimum opacity the heat will start at
maxZoom – zoom level where the points reach maximum intensity (as the intensity scales with zoom)
max – maximum point intensity (1.0 by default)
radius – radius of each “point” of the heatmap (25 by default)
blur – amount of blur (15 by default)
gradient – color gradient configuration, e.g. {0.4: ‘blue’, 0.65: ‘lime’, 1: ‘red’}

Each point in the input array can be either an array [50.5, 30.5, 0.5] or a Leaflet LatLng object.

The optional third value of each LatLng point (the altitude slot) represents the point's intensity. Unless the max option is specified, intensities should range between 0.0 and 1.0.
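As a hedged illustration (this helper is our own, not part of Leaflet.heat), raw magnitudes can be rescaled into the expected 0.0–1.0 range before building the layer:

```javascript
// Illustrative helper: rescale raw magnitudes into the 0.0–1.0 intensity
// range Leaflet.heat expects when no max option is given.
function toHeatPoints(points) {
  // points: [[lat, lng, rawValue], ...]
  const maxVal = Math.max(...points.map(p => p[2]));
  return points.map(([lat, lng, v]) => [lat, lng, v / maxVal]);
}

const heatPoints = toHeatPoints([[50.5, 30.5, 2], [50.6, 30.4, 4]]);
console.log(heatPoints); // [[50.5, 30.5, 0.5], [50.6, 30.4, 1]]
// The result can then be passed to L.heatLayer(heatPoints, {radius: 25}).
```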

4. Methods

setOptions(options): sets new heatmap options and redraws the heatmap.
addLatLng(latlng): adds a new point to the heatmap and redraws it.
setLatLngs(latlngs): resets the heatmap data and redraws it.
redraw(): redraws the heatmap.

5. The Code

<!DOCTYPE html>
<html>
<head>
    <title>Leaflet.heat demo</title>
    <link rel="stylesheet" href="http://cdn.leafletjs.com/leaflet/v0.7.7/leaflet.css" />
    <script src="http://cdn.leafletjs.com/leaflet/v0.7.7/leaflet.js"></script>
    <style>
        #map { width: 800px; height: 600px; }
        body { font: 16px/1.4 "Helvetica Neue", Arial, sans-serif; }
        .ghbtns { position: relative; top: 4px; margin-left: 5px; }
        a { color: #0077ff; }
    </style>
</head>
<body>

<div id="map"></div>

<script src="js/leaflet-heat.js"></script>
<script src="http://leaflet.github.io/Leaflet.markercluster/example/realworld.10000.js"></script>

<script>
var map = L.map('map').setView([-37.87, 175.475], 12);

var tiles = L.tileLayer('http://{s}.tile.osm.org/{z}/{x}/{y}.png', {
    attribution: '&copy; <a href="http://osm.org/copyright">OpenStreetMap</a> contributors'
}).addTo(map);

// realworld.10000.js defines addressPoints; keep only the [lat, lng] pairs
addressPoints = addressPoints.map(function (p) { return [p[0], p[1]]; });

var heat = L.heatLayer(addressPoints).addTo(map);
</script>
</body>
</html>

Source: GitHub

by Fernando Quadro at April 30, 2019 10:30 AM

April 29, 2019

If you are looking for a way to add a timeline to your map, you can use the Leaflet Time-Slider, which lets you dynamically add and remove markers on a map using a jQuery UI slider.

To implement this feature on your map, you first need to include the following libraries:

To enable the slider feature, you need a layer for the SliderControl; you then add the slider to the map and start it using the startSlider() method.

// Create a marker layer (in the example done via a GeoJSON FeatureCollection)
var testlayer = L.geoJson(json);
var sliderControl = L.control.sliderControl({position: "topright", layer: testlayer});

// Make sure to add the slider control to the map ;-)
map.addControl(sliderControl);

// And initialize the slider
sliderControl.startSlider();

Adjust the time property that is used so that it fits your project:

$('#slider-timestamp').html(options.markers[ui.value].feature.properties.time.substr(0, 19));

You can also use a range slider by enabling the range property:

sliderControl = L.control.sliderControl({position: "topright", layer: testlayer, range: true});

If you prefer to display only the markers at the timestamp specified by the slider, use the follow property:

sliderControl = L.control.sliderControl({position: "topright", layer: testlayer, follow: 3});

This example will display the current marker and the two previous markers on the screen. Specify a value of 1 (or true) to display only a single data point at a time, and a value of null (or false) to display the current marker and all previous markers. The range property overrides the follow property.
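The follow semantics can be modeled roughly as follows (our own sketch of the behaviour described above, not the plugin's actual source):

```javascript
// Rough model of the follow option: given the slider's current marker index,
// which is the first marker index still shown on screen?
function firstVisibleIndex(current, follow) {
  // null/false: show the current marker plus all previous ones
  if (!follow) return 0;
  // true behaves like 1 (only the current marker); an integer n shows
  // the current marker and the n - 1 previous ones
  return Math.max(0, current - (follow - 1));
}

console.log(firstVisibleIndex(10, 3));    // 8  (markers 8, 9 and 10 visible)
console.log(firstVisibleIndex(10, true)); // 10 (only the current marker)
console.log(firstVisibleIndex(10, null)); // 0  (all markers up to 10)
```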

You can use the rezoom property to ensure that the displayed markers stay visible. Nothing happens with a null (or false) value, but an integer value sets the maximum zoom level that Leaflet uses when updating the map bounds to fit the displayed markers.

sliderControl = L.control.sliderControl({position: "topright", layer: testlayer, rezoom: 10});

The Leaflet slider can also be used for ordinary LayerGroups with mixed features (markers and lines, etc.):

var marker1 = L.marker([51.5, -0.09], {time: "2013-01-22 08:42:26+01"});
var marker2 = L.marker([51.6, -0.09], {time: "2013-01-22 10:00:26+01"});
var marker3 = L.marker([51.7, -0.09], {time: "2013-01-22 10:03:29+01"});

var pointA = new L.LatLng(51.8, -0.09);
var pointB = new L.LatLng(51.9, -0.2);
var pointList = [pointA, pointB];

var polyline = new L.Polyline(pointList, {
    time: "2013-01-22 10:24:59+01",
	color: 'red',
	weight: 3,
	opacity: 1,
	smoothFactor: 1
});

layerGroup = L.layerGroup([marker1, marker2, marker3, polyline]);
var sliderControl = L.control.sliderControl({layer:layerGroup});

For touch support (on touch screens), add:

<script src="//cdnjs.cloudflare.com/ajax/libs/jqueryui-touch-punch/0.2.2/jquery.ui.touch-punch.min.js"></script>

The Leaflet slider is also a package registered in Bower (based on Node.js). Integrate the source into your project with the following commands:

npm install -g bower
bower install leaflet-slider

Source: GitHub

by Fernando Quadro at April 29, 2019 10:30 AM

At OPENGIS.ch we live and love open source.
That is why we are extremely excited to announce that we are supporting FOSS4G 2019, 26 to 30 August in Bucharest.

By supporting FOSS4G 2019 we hope to help the conference be an even bigger success and help more people discover all the open source geo-awesomeness out there!

Come see us at our booth for plenty of news regarding QField and our brand new QGIS sustainability initiative that comes with each of our QGIS support contracts.


OPENGIS.ch helps you set up your spatial data infrastructure based on seamlessly integrated desktop, web, and mobile components.
We support your team in planning, developing, deploying and running your infrastructure. Thanks to several senior geodata infrastructure experts, QGIS core developers and the makers of the mobile data acquisition solution QField, OPENGIS.ch has all it takes to make your project a success. OPENGIS.ch is known for its commitment to high-quality products and its continuous efforts to improve the open source ecosystem.

As the masterminds behind QField and a core contributor to QGIS, we are the perfect partner for your project. If you want to help us build a better QField or QGIS, or if you need any services related to the whole QGIS stack, don’t hesitate to contact us.

by Marco Bernasocchi at April 29, 2019 05:21 AM

April 25, 2019

The SLDService is a GeoServer REST service that can be used to create SLD styles for published GeoServer layers by classifying the layer’s data according to user-provided parameters. The goal of the service is to allow clients to publish data dynamically and create simple styles for it.

As of version 2.15, the SLDService has become an official extension, with several improvements, and can now be used for:

  • Raster data classification
  • Area classification
  • Standard deviation filtering

Below is an example of how to call the SLDService:

curl -v -u admin:geoserver -XGET

To learn in more detail how you can use the SLDService, we suggest reading the documentation on the GeoServer website.
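As a sketch of what such a call might look like (hypothetical host, layer and attribute; check the GeoServer documentation for the exact endpoint and the parameters supported by your version), a classification request URL could be assembled like this:

```javascript
// Sketch: building a classification request URL for the SLDService.
// The host, layer name and attribute below are hypothetical examples.
const base = 'http://localhost:8080/geoserver/rest/sldservice/topp:states/classify.xml';
const params = new URLSearchParams({
  attribute: 'PERSONS',  // attribute to classify on
  method: 'quantile',    // classification method
  intervals: '5',        // number of classes
  ramp: 'blue'           // color ramp
});
const url = `${base}?${params.toString()}`;
console.log(url);
```

The resulting URL would then be requested with credentials, as in the curl example above.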

by Fernando Quadro at April 25, 2019 03:10 PM

MovingPandas is my attempt to provide a pure Python solution for trajectory data handling in GIS. MovingPandas provides trajectory classes and functions built on top of GeoPandas. 

To lower the entry barrier to getting started with MovingPandas, there’s now an interactive iPython notebook hosted on MyBinder. This notebook provides all the necessary imports and demonstrates how to create a Trajectory object.

Launch MyBinder for MovingPandas to get started!

by underdark at April 25, 2019 08:27 AM

April 24, 2019

The new gvSIG Mobile version, the Geographic Information System for Android systems for field data gathering, is now available to install from Google Play Store.

The novelties of this new version are the following:

  • Cloud Profiles available: When a web server is configured to serve Cloud Profiles, gvSIG Mobile can automatically download Projects, Basemaps, Spatialite Overlays, forms for Notes, and other files. When a user activates a downloaded Profile, Basemaps are made available, Overlays are attached to the Map View and layers are set to display.
  • GPS Location Limitations on Android Oreo have been fixed: Now it is possible to log GPS tracks with the screen off.
  • Values in settings: The settings screen now shows the current values without the need to enter each setting.
  • Buttons size: Button and text size of the add note view can be changed for better interaction in the field.
  • Notes settings: The notes settings view is now accessible from the notes list.
  • PDF export: The PDF export now allows exporting a subset of notes.
  • Linked resources: It is now possible to view not only images stored in a Spatialite database when they are related to (geospatial) features, but also other resources such as PDFs.
  • Full basemaps erasing: In the basemaps view it is now possible to remove all maps in one tap.
  • Mapurl service: The Tanto Mapurl service, which was used to download automatically configured mapurls based on WMS services, is no longer maintained and has therefore been removed from gvSIG Mobile.
  • Mapping points: Better feedback about form name and positioning mode (GPS or Map Center) in actionbar.
  • Export: All exports now follow the same pattern: they are placed in the gvsigmobile/export folder and their name is made of the project name + type + timestamp of export.
  • Profiles: Activate button for profiles is now on the main cardview.
  • Forms size: Better proportion of forms in portrait mode.
  • Forms save button: The save button in forms is now a floating action button to remind the user to save.
  • Tile sources: Tile sources icon is now always visible in actionbar.
  • Dashboard enhancements: Visualize number of notes and logs, open notes list on long tap.
  • Internationalization: Translations have been updated.

… and several bugfixes.

gvSIG Mobile is the application that allows you to map points using customized forms that include pictures, drop-downs, sketches, etc., which can be easily exported for more advanced analysis in gvSIG Desktop, the open source desktop Geographic Information System that is also part of the gvSIG Suite together with gvSIG Online. These forms can be created easily from gvSIG Desktop.

Among the most important gvSIG Mobile tools are tile importing, bookmark importing, vector data export and editing through Spatialite, WMS server loading…

If you want to learn how it works, just install it on your smartphone and follow the next video (recorded on the previous version, so you may find some small differences):

And if you have any doubts or find any errors, you can use the project mailing lists.

by Mario at April 24, 2019 02:49 PM


It is our honour to introduce the first FOSS4G 2019 Bucharest Platinum sponsor, GeoCat BV!

The call for contributions is now closed, the review process has started with the community vote, and the initial review by the program committee is under way. We have just opened the EO Data Challenge, and so, so many other things are happening under the hood to prepare an event that participants won’t soon forget! And we can do all that because companies like GeoCat understood that for the open source community to coagulate, to create and to live in this highly business-oriented world, it must be supported.

Open source has long moved from hobby status into the operational rooms of high-end companies and institutions. Yet this doesn’t mean we can stop supporting it just because, well, it’s mature. Now, maybe even more than before, it is important to strengthen our support. Why, you ask? Because open source is only as powerful as its community, and events like FOSS4G 2019 make the geospatial open source community powerful. The Travel Grant Program, the struggle to keep prices low (#historicalearlybird), the volunteer program, the studentship program and others are all in pursuit of one thing: making this community stronger, making it better.

All of these run on volunteer efforts and cash. So, if you are a user of open source or considering becoming one, get involved. Contact us or contact GeoCat. Together we will find the best way in which you too can contribute to and benefit from the open source for geospatial community.

Vasile Craciunescu
Conference Chair FOSS4G 2019

GeoCat’s commitment to Free and Open Source Software for Geoinformatics

It is an honour for GeoCat to support the upcoming FOSS4G 2019 in Bucharest. As a team, GeoCat has contributed to OSGeo software development since its establishment over ten years ago. With our consistent sponsoring of FOSS4G events we underline our commitment and belief in free knowledge sharing. Free as in free speech! (Although each of us will also be happy to buy you a beer 😉)

GeoCat focuses on the development of Spatial Data Infrastructures that support you, as a geospatial expert, in your day-to-day job: making sure your data is fit to be shared or combined with data from other sources, and assisting governments and companies in complying with national or international agreements and legislation, like INSPIRE, that facilitate data exchange. We provide our services to a wide range of national and international government agencies, like national mapping agencies and ministries, as well as to municipalities and provinces.

GeoCat provides expertise and training, commercial open source software solutions with Long Term Support agreements, and a SaaS solution that provides an advanced geospatial backbone for your organisation.

With our commitment to FOSS4G 2019 we hope to make the conference an even bigger success and lower the barrier for participation. As director of GeoCat and founder of GeoNetwork opensource I am so excited and happy that my small team of experts is fully committed to support FOSS4G 2019 with a Platinum sponsorship. I wish you all a great conference! Please stop by our booth to talk to us. And don’t forget:

Put your heart in anything you do!

Jeroen Ticheler
Director of GeoCat BV

by Vasile Crăciunescu at April 24, 2019 01:50 PM

April 23, 2019

The OTB team is currently working on a new Continuous Integration system. It will provide a better feedback to OTB contributors and developers, similar to what is proposed by the Github / Travis-CI combo. The current testing platform used by OTB is made of CTest scripts that run on half a dozen servers. It is […]

by Guillaume Pasero at April 23, 2019 02:18 PM

Today we end our series with a somewhat trivial, though interesting, addition to our map.

Leaflet allows you to add an image that covers a specific region of the map.

Here we add a photo of a small lost moose calf to the map. In this case it serves no purpose other than showing that we can do it.

The required JavaScript code is:

var imageUrl = "/images/calf_moose.png"; 
bounds = thetrail.getBounds();
imageBounds = [[62.5, bounds._southWest.lng], [bounds._northEast.lat, bounds._northEast.lng]];
L.imageOverlay(imageUrl, imageBounds).addTo(map);

We take the bounds of the trail layer and use them to set the image bounds, tweaking the lower-left latitude a bit so that the image is not distorted.
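The latitude tweak can be sketched as follows (our own illustration, using a naive equirectangular approximation that ignores projection distortion; the southForAspect helper is hypothetical, not part of Leaflet):

```javascript
// Sketch: given the image's width/height aspect ratio, compute the southern
// latitude so the overlay keeps the image's proportions (naive approximation
// treating degrees of latitude and longitude as equal in length).
function southForAspect(west, east, north, aspect) {
  const lngSpan = east - west;
  return north - lngSpan / aspect; // latSpan = lngSpan / (width/height)
}

// A 2:1 image spanning 2 degrees of longitude gets 1 degree of latitude:
console.log(southForAspect(0, 2, 10, 2)); // 9
```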

We also changed the map’s center and zoom level so that the moose is visible when the map first loads.

This gives us:

This completes our 14-day tour of Leaflet. These posts were translated and adapted from the originals written on the Spatial Galaxy blog.

by Fernando Quadro at April 23, 2019 10:30 AM