Welcome to Planet OSGeo

October 20, 2017

Ian Turton's Blog

Adding a .prj file to existing data files

While teaching a GeoServer course recently, we were trying to add a collection of tif and world files to GeoServer as an image mosaic. But the operation kept failing as GeoServer was unable to work out the projection of the files.

This problem can be avoided by adding a .prj file alongside each tif file to help GeoServer out. However, we had hundreds of files, and a certain national mapping agency had just assumed that everyone knew its files were in EPSG:27700.

Later, I worked up a quick solution to this problem. GeoTools is capable of writing out a WKT representation of a projection, and Java has no problem walking a directory tree matching a glob pattern.

Getting the WKT of a projection is trivial:

CoordinateReferenceSystem crs = CRS.decode("epsg:27700");
String wkt = crs.toWKT();

Walking the directory tree was a little trickier, but can be done by passing an anonymous SimpleFileVisitor to the Files class's walkFileTree method:

public static ArrayList<File> match(String glob, String location) throws IOException {
    final ArrayList<File> ret = new ArrayList<>();
    final PathMatcher pathMatcher = FileSystems.getDefault().getPathMatcher("glob:**/" + glob);

    Files.walkFileTree(Paths.get(location), new SimpleFileVisitor<Path>() {

      @Override
      public FileVisitResult visitFile(Path path, BasicFileAttributes attrs) throws IOException {
        if (pathMatcher.matches(path)) {
          ret.add(path.toFile());
        }
        return FileVisitResult.CONTINUE;
      }

      @Override
      public FileVisitResult visitFileFailed(Path file, IOException exc) throws IOException {
        return FileVisitResult.CONTINUE;
      }
    });
    return ret;
}

The full code can be found in this snippet. Usage is pretty simple: to add a .prj file to a single file (say a shapefile):

java AddProj epsg:27700 file.shp

Or, to deal with a whole directory:

java AddProj epsg:27700 /data/os-data/rasters/streetview/*.tif

This adds a .prj file to each of the .tif files in that directory and all subdirectories.
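The core step of the snippet, deriving a .prj path from each matched file and writing the WKT beside it, can be sketched roughly as follows. The PrjWriter class and its method names are illustrative, not taken from the original snippet; in the real code the WKT string comes from GeoTools as shown earlier.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class PrjWriter {

    /** Swap a data file's extension for ".prj" (e.g. sheet1.tif becomes sheet1.prj). */
    static Path prjPathFor(Path dataFile) {
        String name = dataFile.getFileName().toString();
        int dot = name.lastIndexOf('.');
        String base = (dot >= 0) ? name.substring(0, dot) : name;
        Path parent = dataFile.getParent();
        return (parent != null) ? parent.resolve(base + ".prj") : Paths.get(base + ".prj");
    }

    /** Write the WKT next to the data file, unless a .prj is already present. */
    static void writePrj(Path dataFile, String wkt) throws IOException {
        Path prj = prjPathFor(dataFile);
        if (!Files.exists(prj)) {
            Files.write(prj, wkt.getBytes(StandardCharsets.UTF_8));
        }
    }
}
```

Looping writePrj over the files returned by match gives each raster a projection side-car without touching files that already have one.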

Obviously you can use other EPSG codes if your data supplier assumes that everyone knows their projection is the only one in the world.

October 20, 2017 12:00 AM

October 19, 2017


Auxiliary Storage support in QGIS 3

For those who know how powerful QGIS can be with data-defined widgets and expressions, usable almost anywhere in styling and labeling settings, it remains quite complex today to store custom data.

For instance, moving a simple label using the label toolbar is not straightforward: that wonderful toolbar remains desperately greyed out for manual labeling tweaks…

…unless you do the following:

  • Make your vector layer editable (yes, it's not possible with read-only data)
  • Add two columns to your data
  • Link the X position property to one column and the Y position to the other


Then the Move Label map tool becomes available and ready to be used (while your layer is editable). If you move a label, the underlying data is modified to store its position. But what happens if you want to fully use the Change Label map tool (color, size, style, and so on)?


Well… You just have to add a new column for each property you want to manage. Needless to say, that's not very convenient, and even impossible when your data administrator has set your data to read-only mode…

A plugin named EasyCustomLabeling was made some years ago to address that issue, but it was full of caveats: a dependency on another plugin (Memory layer saver) for persistence, and a full copy of the labeled layer into a memory layer, which led to losing synchronisation with the source layer.

Two years ago, the French Agence de l’eau Adour Garonne (a water basin agency) and the Ministry in charge of Ecology asked Oslandia to draft QGIS Enhancement Proposals to port that plugin into QGIS core, among a few other things such as labeling connectors and curved label enhancements.

Those QEPs were accepted and we could work on the actual implementation, so here we are: Auxiliary Storage has now landed in master!


The aim of auxiliary storage is to offer a more integrated solution for managing these data-defined properties:

  • Easy to use (one click)
  • Transparent for the user (map tools always available by default when labeling is activated)
  • Does not update the underlying data (it works even when the layer is not editable)
  • Keeps in sync with the data source (as much as possible)
  • Stores the data alongside or inside the project file

As said above, thanks to the Auxiliary Storage mechanism, map tools like Move Label, Rotate Label or Change Label are available by default. Then, when the user selects the map tool to move a label and clicks on the map for the first time, a simple dialog asks them to select a primary key:

Primary key choice dialog – (YES, you NEED a primary key for any data management)

From that moment on, a hidden table is transparently created to store all data-defined values (positions, rotations, …) and is joined to the original layer through the previously selected primary key. When you move a label, the corresponding property is automatically created in the auxiliary layer. This way, the original data is not modified; only the joined auxiliary layer is!

A new tab has been added to the vector layer properties to manage the Auxiliary Storage mechanism. You can retrieve, clean up, export or create new properties from there:

Where is the auxiliary data really saved between projects?

We ended up using a light SQLite database which, by default, is just 8 KB! When you save your project with the usual .qgs extension, the SQLite database is saved at the same location but with a different extension: .qgd.

Two thoughts on that choice:

  • “Hey, I would like to store geometries, why not SpatiaLite instead?”

Good point. In fact we tried that at first. But initializing a SpatiaLite database through the QGIS SpatiaLite provider was found to be slow, really slow. And a raw SpatiaLite table weighs about 4 MB, because of the huge spatial reference system table and the numerous spatial functions and metadata tables. We chose to fall back on plain SQLite through the OGR provider, and it proved fast and stable enough. If some day we manage to merge the SpatiaLite provider and the GDAL/OGR SpatiaLite provider, with options to create only the necessary SRS entries and functions, that would open new possibilities, like storing spatial auxiliary data.

  • “Does that mean that when you want to move/share a QGIS project, you have to manually manage these 2 files to keep them in the same location?!”

True, and dangerous, isn’t it? Users often forgot the auxiliary files with the EasyCustomLabeling plugin. Hence, we created a new format that zips several files together: .qgz. Using that format, the SQLite database project.qgd and the regular project.qgs file are embedded in a single project.qgz file. WIN!!
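Since the .qgz container is a plain zip archive, any standard zip tool or library can open it. A minimal Java sketch of inspecting one might look like this; the QgzInspector class name is illustrative, and only the fact that a .qgz is a zip holding project.qgs and project.qgd comes from the post.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class QgzInspector {
    /** A .qgz project is a plain zip archive; list the files it contains. */
    static List<String> listEntries(String qgzPath) throws IOException {
        List<String> names = new ArrayList<>();
        try (ZipFile zip = new ZipFile(qgzPath)) {
            Enumeration<? extends ZipEntry> entries = zip.entries();
            while (entries.hasMoreElements()) {
                names.add(entries.nextElement().getName());
            }
        }
        return names;
    }
}
```

For a saved project, the listing should show the project.qgs document alongside its project.qgd auxiliary database.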

Changing the project file format so that it can embed data, fonts and SVGs was a long-standing feature request. So now we have a format available for self-contained QGIS projects. Plugins like offline editing, QConsolidate and similar ones that aim to make it easy to export a portable GIS database could take advantage of that new storage container.

Now, some work remains: adding labeling connector capabilities and letting users draw labeling paths by hand. If you’re interested in making this happen, please contact us!



More information

A full video showing auxiliary storage capabilities:


QEP: https://github.com/qgis/QGIS-Enhancement-Proposals/issues/27

PR New Zip format: https://github.com/qgis/QGIS/pull/4845

PR Editable Joined layers: https://github.com/qgis/QGIS/pull/4913

PR Auxiliary Storage: https://github.com/qgis/QGIS/pull/5086

by Paul Blottière at October 19, 2017 12:50 PM

gvSIG Team

GIS applied to Municipal Management: Module 5.3 ‘Web services (Non-standard services)’

The third video of the fifth module is now available. In it we talk about how to work in gvSIG Desktop with web services that do not follow the OGC standards, but which can still help us complement our maps with different layers.

Among the available services we have OpenStreetMap, which gives us access to several layers: street maps, nautical and railway cartography, and cartography in different color schemes that can serve as reference base maps.

Other available services are Google Maps and Bing Maps, where we can load different layers.

The requirement for loading these layers, up to and including version 2.4, is that the view must use the EPSG:3857 reference system, the system used internally by these services.

In addition, to load Bing Maps layers we first need a key, which we can obtain as explained in the video.

Once these layers are loaded, we can reproject our own layers to that reference system. Many OGC web services (WMS, WFS, etc.) also offer their layers in that system, so we can overlay them as well.

The third video tutorial of this fifth module is the following:

Related posts:

Filed under: gvSIG Desktop, IDE, spanish, training Tagged: ayuntamientos, Bing Maps, gestión municipal, Google Maps, OpenStreetMap, OSM, Servicios web

by Mario at October 19, 2017 10:49 AM

OSGeo News

OSGeo Board Election 2017

by jsanz at October 19, 2017 09:11 AM

October 18, 2017

gvSIG Team

gvSIG Batoví contest: awards ceremony

gvSIG Batovi

The contest Work projects with students and gvSIG Batoví has come to a close. This very gratifying and enriching first experience for Uruguay was quite a challenge from an organizational, planning and coordination point of view. But we can state, with modesty and simplicity but also with conviction, that it has been a complete success.

This contest sought to encourage the use of gvSIG Batoví in concrete projects. It was an initiative of the Ministerio de Transporte y Obras Públicas (in particular the Dirección Nacional de Topografía), in coordination with the Consejo de Educación Secundaria of the Administración Nacional de Educación Pública (ANEP-CES, in particular the Inspección Nacional de Geografía) and the Centro Ceibal (in particular its Contents Area and LabTeD digital technology laboratories).

The participating groups (made up of students and teachers of Geography and other secondary-school subjects from the public education system across the country) received ongoing support…


Filed under: gvSIG Desktop

by Alvaro at October 18, 2017 03:20 PM

Jackie Ng

The journey of porting the MapGuide Maestro API to .net standard

What prompted the push to port the MapGuide Maestro API to .net standard was Microsoft recently releasing a whole slate of developer goodies.
Of particular relevance to the subject of this post is .net standard 2.0.

For those who don't know, .net standard is (you guessed it) a versioned standard against which one can write portable, cross-platform class libraries that will work in any .net runtime environment supporting the version of .net standard you target. If you do Android development, this is similar to API levels.

.net standard is of interest to me because the MapGuide Maestro API is currently a set of class libraries that target the full .net Framework. Having it target .net standard instead would give us guaranteed cross-platform portability across .net runtime environments that support .net standard (Mono), and support for platforms that would never have been possible before (.net Core/Xamarin/UWP).

I tried an attempt at porting the Maestro API to earlier versions of .net standard, with mixed success:
  • The ObjectModels library could be ported to .net standard 1.6, but required installing many piecemeal System.* nuget packages to fill in the missing APIs.
  • Maestro API itself could not be ported due to its reliance on XML schema functionality and HttpWebRequest, which no version of .net standard before 2.0 supported.
  • Maestro API had upstream dependencies (eg. NetTopologySuite) that had not been ported to .net standard.
  • More importantly, for the bits I was able to port across (ObjectModels), I couldn't run their respective (full-framework) unit test libraries from the VS test explorer, due to cryptic assembly loading errors caused by the assembly manifests of the various piecemeal System.* assemblies not matching their assembly references. With no way to run these tests, the porting effort wasn't worth continuing.
Around this time, I heard of what the upcoming (at the time) .net standard 2.0 would bring to the table:
  • Over 2x the API surface of netstandard1.6, including key missing APIs needed by the Maestro API like the XML schema APIs and HttpWebRequest
  • A compatibility mode for full .net Framework. If this works as hoped, it means we can skip waiting on upstream dependencies like NetTopologySuite and friends needing to have netstandard-compatible ports and use the existing versions as-is.
Given the compelling points of .net standard 2.0 and mixed results with porting to the (then) current iteration on .net standard, I decided to put these porting efforts on ice and wait until the time when .net standard 2.0 and its supporting tooling comes out.

Now that .net standard 2.0 and supporting tooling came out, it was time to give this porting effort another try ... and I could not believe how much less painful the whole process was! This was basically all I had to do to port the following libraries to .net standard 2.0:

Preparation Work

To be able to use our (ported to .net standard 2.0) MaestroAPI in the (full framework) Maestro windows application, we first needed to re-target all affected project files to .net Framework 4.6.1, as this is the minimum version of the full .net framework that supports .net standard 2.0.


This is a class library that uses the Irony grammar parser to parse FDO expression strings to an object oriented form. Maestro uses this library to be able to analyze FDO expressions for validation purposes (eg. You don't have a FDO expression that references a property that doesn't exist).

My process of converting the existing full framework csproj file to .net standard was to basically just replace the entire contents of the original csproj file with the minimum required content for a .net standard 2.0 class library.

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>

That's right, the content of a minimal .net standard 2.0 class library project file is just 5 lines of XML! All .cs files are now implicitly included when building the project, which greatly contributes to the simplicity of the new csproj format.

Now obviously this project file as-is won't compile, as we need to reference Irony and use VS2017 to regenerate the resx string bundles and source link shared assembly info files. After those changes were made, the project built, with the only notable warning being NU1701, the warning emitted by the new tooling when we reference full framework libraries/packages from a netstandard2.0 class library (which the new tooling allows for compatibility purposes).

It was around this time that I discovered that someone had made a netstandard-compatible port of Irony, so we replaced the existing Irony reference with that port instead. This library was now fully ported across.


This is the class library that describes all of our XML resources in MapGuide as strongly-typed classes with full XML (de)serialization support to and from both forms at various schema versions.

The original porting attempt targeted netstandard 1.6. While this was mostly painless, I had to reference tons of piecemeal System.* nuget packages, which then flowed down to anything that was referencing it.

For this attempt, we target .net standard 2.0 using the same technique of pasting a minimal netstandard2.0 class library template into the existing csproj file. Like the previous attempt, building this project failed due to dependencies on System.Drawing, a result of our usages of System.Drawing.Font. Further analysis showed that we were using Font as a glorified DTO, so it was just a case of adding a new type that carries the same properties we were capturing with the System.Drawing.Font objects being passed around.

Because it references the NETStandard.Library metapackage by default, this attempt did not require the piecemeal System.* nuget packages of the previous attempts. So that's another library ported across.


Now for the main event. Maestro API needed to be netstandard-compatible, otherwise this whole porting effort would be a waste. The previous attempt (targeting netstandard1.6) was cut short because APIs such as XML schema support were not there. In .net standard 2.0 these missing APIs are back, so porting MaestroAPI across should be a much simpler affair.

And indeed it was.

Just like the ObjectModels porting effort, we hit some snags around references to System.Drawing. Unlike ObjectModels, we were using full blown Images and Bitmaps from System.Drawing and not things like Fonts which we were just using to sling font information around.

To address this problem, a new full framework library (OSGeo.MapGuide.MaestroAPI.FxBridge) was introduced, to which the classes using these incompatible types were relocated. There were also service interfaces that returned System.Drawing.Image objects (IMappingService). These APIs have been modified to return raw System.IO.Stream objects instead, with the FxBridge library providing extension methods to "polyfill" the old image-returning APIs. Thus, code that used these affected APIs can just reference the FxBridge library in addition to MaestroAPI and work as before.

After sectioning off these incompatible types to the FxBridge library, the next potential roadblock in our porting efforts was our upstream dependencies. In particular, we were using NetTopologySuite, GeoAPI and Proj.NET to give Maestro API a strongly-typed geometry model and some basic coordinate system transformation capabilities. These were all full framework packages, meaning our previous porting attempt (to target netstandard1.6) was stopped in its tracks.

Because netstandard2.0 has a full-framework compatibility shim, we were able to reference these existing packages with the standard NU1701 compatibility warnings spat out by NuGet. However, since the previous porting attempt, the authors of NetTopologySuite, GeoAPI and Proj.NET have released netstandard-compatible (albeit prerelease) versions of their respective libraries, so as a result we were able to fully netstandard-ify all our dependencies as well.

However, we had to turn off strong naming of our assembly in the process because our upstream dependencies did not offer strong-named netstandard assemblies.

And with that, the Maestro API was ported to .net standard 2.0

MaestroAPI HTTP Provider

However, the Maestro API would not be useful without a functional HTTP provider to communicate with the mapagent. So this library also needed to be netstandard-compatible.

The previous porting attempt (to netstandard1.6) was roadblocked because the HTTP provider uses HttpWebRequest to communicate with the mapagent. While we could have just replaced HttpWebRequest with the newer HttpClient, that would have required full async/await-ification of the whole code base, and then dealing properly with the leaky abstractions known as SynchronizationContext and ConfigureAwait to ensure our async/await-ified HTTP provider is usable in both ASP.net and desktop windows application contexts without deadlocking in one or the other.

While having a fully async HTTP provider is good, I wanted to have a functional one first before undertaking the task of async/await-ifying it. The development effort involved was such that it was better to just wait for .net standard 2.0 to arrive (where HttpWebRequest was supported) than to try to modify the HTTP provider to use HttpClient.

And just like the porting of the ObjectModels/MaestroAPI projects, this was a case of taking the existing csproj file, replacing the contents with the minimal netstandard class library template and manually adding in the required references and various settings until the project built again.

Caught in a snag

So all the key parts of the Maestro API have been ported across to .net standard 2.0 and the code all builds, so now it was time to run our unit tests to make sure everything was still green.

All green they were indeed. All good! Now to run the thing.

Most things seemed to work until I validated a Map Definition and got this message.

Assembly manifest what? I have no idea! This error is also thrown when I use any part of the MaestroAPI that uses NetTopologySuite -> GeoAPI.

My first port of call was to look at this known issue and try all the workarounds listed:
  • Force all our projects to use PackageReference mode for installing/restoring nuget packages
  • Enable automatic binding redirect generation on all executable projects
After trying these workarounds, the assembly manifest errors still persisted. At this point I was stuck, and was on the verge of giving up on this porting effort, until some part of my brain told me to take a look at the assemblies in the output directory.

Since the error in question referred to GeoAPI.dll, I thought I'd crack that assembly open in ILSpy and see what interesting information I could find about it.

Well this was certainly most interesting! Why is a full-framework GeoAPI.dll being copied out? The only direct consumer of GeoAPI (OSGeo.MapGuide.MaestroAPI.dll) is netstandard2.0, and it is referencing the netstandard target of GeoAPI.

Here's a diagram of what I was expecting to see:

After digging around some more, it appears that there is a bug (or is it a feature?) in MSBuild where, given a nuget package that offers both netstandard and full-framework targets, it will prefer the full-framework target over the netstandard one. This means that in the case of GeoAPI, because our root application is a full-framework one, MSBuild chose the full-framework target offered by GeoAPI instead of the netstandard one.

So what's the assembly manifest error all about? The FusionLog property of the exception reveals the answer.

GeoAPI is strong-named for full-framework. GeoAPI is not strong-named for netstandard. The assembly manifest error arises because our netstandard-targeting MaestroAPI references the netstandard target of GeoAPI (not strong-named), but because our root application is a full-framework one, MSBuild gave us a full-framework GeoAPI assembly instead. At runtime, .net could not reconcile the fact that a strong-named GeoAPI was being loaded when our netstandard-targeting MaestroAPI references the netstandard GeoAPI that is not strong-named. Hence the assembly manifest error.

Multi-targeting for the ... win?

Okay, so now we know why it's happening, what can we do about it? Well, the other major thing that the new MSBuild and csproj file format gives us is the ability to easily multi-target the project for different frameworks and runtimes.

By changing the TargetFramework element in our project to TargetFrameworks (plural) and specifying a semi-colon-delimited list of TFMs, we now have a class library that can build for each one of the TFMs specified.

For example, a netstandard 2.0 class library like this:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>

Can be made to multi-target like this:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFrameworks>net461;netstandard2.0</TargetFrameworks>
  </PropertyGroup>
</Project>

If MSBuild insists on giving us full-framework dependencies if given the choice between full-framework and netstandard (when both are compatible), then the solution is to basically multi-target the MaestroAPI class library so that we offer 2 flavors of the assembly:
  • A full-framework one (net461) that will be selected by MSBuild if the consuming application is a full-framework one.
  • The netstandard one (netstandard2.0) that will be selected by MSBuild if the consuming application is .net Core, Xamarin, etc.
Under this setup, MSBuild will choose the full-framework Maestro API over the netstandard one when building the Maestro windows application. Since we're now building for multiple frameworks/runtimes and explicitly targeting full-framework again, we can re-activate strong naming on the full-framework (net461) target, ensuring the full-framework dependency chain of MaestroAPI is fully strong-named (as it was before we started this porting effort), and our assembly manifest error goes away when running unit tests and the Maestro application itself, whenever we hit functionality that uses GeoAPI/NetTopologySuite.

So the problem is effectively solved, but the whole process feels somewhat anti-climactic.

I mean ... the whole premise of .net standard and why I wanted to port MaestroAPI to target it was the promise of one unified target (an interface if you will) with many supporting runtimes (ie. various implementations of this interface). Target the standard and your class library will work across the supporting runtimes, in theory.

Unfortunately, in practice, strong-naming (and MSBuild choosing full-framework targets over netstandard ones, even when both are compatible) was the leaky abstraction that threw a monkey wrench into this whole concept, especially when some targets are strong-named and some are not. Having to multi-target the Maestro API as a workaround feels unnecessary.

But at the end of the day, we still achieved our goal of a netstandard-compatible Maestro API that can be used in .net Core, Xamarin, etc. We just had to take a very long detour to get from A to B, and all I could think was: was this (multi-targeting) absolutely necessary?

Some Changes and Compromises

Although we now have .net standard and full framework compatible versions of the Maestro API, we had to make some changes and compromises around the developer and acquisition experience for this to work in a cross-platform .net world.

1. For reasons previously stated, we have to disable strong-naming of the Maestro API for the .net standard target. This is brought upon us by our upstream dependencies (the netstandard flavors of GeoAPI and NetTopologySuite), which we can't do anything about. The full framework target however is still strong-named as before.

2. The SDK package in its current form will most likely go away. This is because turning Maestro API into a .net standard library forces us to use nuget packages as the main delivery mechanism, which is a good thing because nobody should be manually referencing assemblies for consuming libraries in this day and age. The tooling now is just so brain-dead simple that we have no excuse not to make nuget packages. No SDK package also means we can look at alternative means of generating API documentation (docfx looks like a winner) instead of Sandcastle, as making CHM files is kind of pointless and the only reason I made CHM files was to bundle them with the SDK package.

The sample code and supporting tools that were previously part of the SDK package will be offloaded to a separate GitHub repository that I'll announce in due course. I'll need to re-think the main ASP.net code sample as well, because the old example required:

  • Manually setting up a web application in local IIS (not IIS Express)
  • Manually referencing a whole bunch of assemblies
  • Needing to run Visual Studio as administrator to debug the code sample due to the local IIS constraint.

These are things that should not be done in 2017!

3. Because nuget packages are the strongly preferred way of consuming libraries, it meant that having the HTTP provider as a separate library just complicates things (having to register this provider in ConnectionProviders.xml and automating it when installing its theoretical nuget package). The Maestro API on its own is pretty useless without the HTTP provider anyways, so in the interest of killing two birds with one stone, the HTTP provider has been integrated into the Maestro API assembly itself. This means that you don't even need ConnectionProviders.xml unless you need to use the (mg-desktop wrapper) local connection provider, or you need to use a (roll your own wrapper around the official MapGuide API) local-native connection provider.

4. The CI machinery needed some adjustments. I couldn't get OpenCover to work against our newly ported netstandard libraries using (dotnet test) as the runner, so I had to temporarily disable the OpenCover instrumentation while the unit tests ran in AppVeyor. But as a result of needing to multi-target MaestroAPI (for reasons already stated), I decided on this CI matrix:

  • Use AppVeyor to run the Maestro API unit tests for the full-framework target on Windows. Because we're running the tests under a full-framework runner, the OpenCover instrumentation can be restored, allowing us to upload code coverage reports again to coveralls.io
  • Use TravisCI to run the Maestro API unit tests for the netstandard target under .net Core 2.0 on Linux. The whole motivation for netstandard-izing MaestroAPI was to get it to run on these non-windows .net platforms, so let TravisCI handle and verify/validate that aspect for us. We have no code coverage stats here, but surely they can't be radically different from the coverage stats had we run the same test suite on Windows with OpenCover instrumentation.
Where to from here?

Now that the porting efforts have been completed, the next milestone release should follow shortly. 

This milestone will probably only concern the application itself as the SDK story needs revising and I don't want that to hold up on a new release of Maestro (the application).

by Jackie Ng (noreply@blogger.com) at October 18, 2017 02:37 PM

gvSIG Team

Opening of the 13th International gvSIG Conference

Good morning to everyone present.

I would like to begin by thanking all the people whose efforts have made it possible for us to open the 13th International gvSIG Conference today. This same month the 4th Mexican gvSIG Conference and the 9th gvSIG LAC Conference in Brazil have also been held.

These are indicators, together with others such as the awards added this year, that we are looking at a consolidated project, in constant growth, with users in more than 160 countries. No small feat.

So far in 2017, gvSIG has been recognized by the European Commission in the ‘Share & Reuse Awards’ as the most important free software project in Europe. It is an award for a long trajectory, which makes it even more relevant. It is a distinction that recognizes, and I take advantage of Vicente’s presence here to say so, the more than deserved commitment of the Generalitat Valenciana to promoting geomatics with free software and Valencian talent.

To this we can add the Excellence award in the international category from the Unión Profesional de Valencia, the Valencian Telecommunications award for the ‘Organization promoting ICT’, and finally, for the third consecutive year, the ‘Europa Challenge’ granted by NASA to the best professional solution, the gvSIG suite.

I believe that all of us who are driving the project forward, in one way or another, to a greater or lesser extent, from the communities or from the organizations, should feel proud. We have always said it: gvSIG is not a path to follow, it is a path we build together.

On another note, it hardly needs saying any more; it is an accepted fact: geomatics has become a science, a fundamental tool. The modernization of information systems unquestionably depends on it, on the geolocation of ICT.

And given its importance, it is common sense to opt for solutions that guarantee our independence, our rights as users, and our freedom to adapt the technology to our needs rather than the other way around.

It is also common sense to use technology to boost our industry and to create highly specialized companies within a framework of collaboration and shared knowledge, making the collaborative economy a reality in a field as specialized as technology.

This is perhaps the most ambitious objective of the project, and it materializes in the gvSIG Association and in the launch of the gvSIG suite, a broad catalogue of professional solutions that we will talk about at length during this conference.

Little more to add: enjoy the conference and make the most of it, and for those of you visiting from elsewhere, enjoy this wonderful city too.

Thank you very much, and welcome to the 13th International gvSIG Conference.

Filed under: events Tagged: 13as Jornadas gvSIG

by Alvaro at October 18, 2017 02:29 PM

Jackie Ng

A simpler MgCooker tile seeding process

I don't know if you've ever seen the guts of the tile seeding code used by MgCooker. It isn't the prettiest of things, but for the most part it works.

Besides some cosmetic restructuring of the code, I haven't really touched this part of Maestro ever.

Consider the history of this tiling code. It originated around 2009. Things we now take for granted like async/await and the Task Parallel Library probably didn't exist around that time, so you had no choice but to dive deep into wait handles, auto-reset events and manual thread management.

If I had to write MgCooker from scratch today, I'd probably cook up (pun intended) something like this:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

public class TileSeeder
{
    public void SeedTiles()
    {
        List<(int row, int col, int scale)> tiles = ComputeTileRequestList();
        int total = tiles.Count;
        int rendered = 0;
        var sw = Stopwatch.StartNew();

        //The magic sauce that multi-threads our tile seeding and takes care of all our multi-threading concerns!
        Parallel.ForEach(tiles, (tile) =>
        {
            //Send a HTTP request to the GETTILE mapagent API with tile.row, tile.col and tile.scale
            Interlocked.Increment(ref rendered);
            Console.WriteLine($"Rendered {rendered}/{total} tiles");
        });

        //Parallel.ForEach blocks, so if we get to this point, the tiling process has finished.
        Console.WriteLine($"Rendered {rendered} tiles in {sw.Elapsed}");
    }
}

Isn't this much easier to read and comprehend?

The implementation of the ComputeTileRequestList method referenced here is omitted for brevity, but we can just reuse what is in the current iteration of MgCooker. Most of the settings in MgCooker mainly affect the generation of the list of row/col/scale tuples anyway.

The core multi-threaded “render/cache all these tiles” work is just one simple Parallel.ForEach call, baked right into the .NET Framework itself!

MgCooker is overdue for a rewrite anyway. I just didn't really think it would be so conceptually simple with today's .NET libraries and C# language constructs!

by Jackie Ng (noreply@blogger.com) at October 18, 2017 01:03 PM

PostGIS Development

PostGIS Patch Releases

The PostGIS development team has uploaded bug fix releases for the 2.2, 2.3 and 2.4 stable branches.




by Paul Ramsey at October 18, 2017 12:00 AM

October 17, 2017

GeoTools Team

GeoTools 18.0 Released

The GeoTools team is pleased to announce the release of GeoTools 18.0:
This release is also available from our Maven repository.

Thanks to everyone who took part in the code freeze, monthly bug stomp, or directly making the release. This release is made in conjunction with GeoServer 2.12.0.

This release is the new stable release and as such users and downstream projects should consider moving from older releases to this one.

Highlights from our issue tracker release-notes:
  • GeoPackage store now supports spatial indexes.
  • A WMTS store has been added; this allows programs to process tiles in a similar way to the existing WMS store.
For more information see past release notes (18-RC1 | 18-beta).

Thanks to Astun Technology for allowing Ian Turton to make this release.

by Ian Turton (noreply@blogger.com) at October 17, 2017 08:58 AM

GeoServer Team

GeoServer 2.12.0 Released

We are happy to announce the release of GeoServer 2.12.0. Downloads are available (zip, war, dmg and exe) along with docs and extensions.

This is a stable release recommended for production use. This release is made in conjunction with GeoTools 18.0.

Rest API now using Spring MVC

In March, we upgraded the framework used by the GeoServer REST API from Restlet to Spring MVC. All the endpoints remain unchanged, and we would like to thank everyone who took part.

We should also thank David Vick who migrated the embedded GeoWebCache REST API, and the entire team who helped him reintegrate the results for this 2.12.0 release.

Thanks again to the code sprint sponsors and in-kind contributors:

Gaia3D, Atol, Boundless, How2map, FOSSGIS, IAG

As part of this upgrade, we also have new REST API documentation, providing detailed information about each endpoint. The documentation is written in Swagger, allowing different presentations to be generated, as shown below.


WMTS Cascading

Adds the ability to create WMS layers backed by remote WMTS layers, similar to the pre-existing WMS cascading functionality.

See GSIP-162 for more details.

Style Based Layer Groups

Adds the ability to define a listing of layers and styles using a single SLD file, in accordance with the original vision of the SLD specification. This includes a new entry type in the Layer Group layers list and a new preview mode for the style editor.

GeoServer has long supported this functionality for clients, via an external SLD file. This change allows more people to use the idea of a single file defining their map layers and styling as a configuration option.

See GSIP-161 for more details.

Options for KML Placemark placement

New options for KML encoding have been added to control the placement of placemark icons, mostly for polygons. The syntax of the new options introduces three new top-level format option keys:


See GSIP-160 for more details.

GeoWebCache data security API

Adds an extension point to GeoWebCache allowing for a security check based on the layer and extent of the tile, along with an implementation of this extension point for GeoServer’s GWC integration.

This change mostly affects developers, but will lead to improved security for users in the future.

See GSIP 159 for more details.

NetCDF output support for variable attributes and extra variables

Adds the following to the NetCDF output extension:

  1. An option to allow all attributes to be copied from the source NetCDF/GRIB variable to the target variable.
  2. Support for manual configuration of variable attributes, much like the current support for setting global attributes.
  3. Support for configuration of extra variables which are copied from the NetCDF/GRIB source to the output; initially only scalar variables will be supported. Extra variables can be expanded to “higher” dimensions, that is, values copied from one scalar per ImageMosaic granule are assembled into a multidimensional variable over, for example, time and elevation.

See GSIP 158 for more details.

New labelling features and QGIS compatibility

A number of small new features have been added to labelling to match some of QGIS features, in particular:

  • Kerning is on by default
  • New vendor option to strikethrough text
  • New vendor options to control char and word spacing


  • Perpendicular offset now works also for curved labels (previously only supported for straight labels):
  • Labeling the border of polygons as opposed to their centroid when using a LinePlacement (here with repetition and offset):

Along with this work, some SLD 1.1 text symbolizer fixes were added in order to better support the new QGIS 3.0 label export. Here is an example of a map labelled with a background image, as shown in QGIS, and then again in GeoServer using the same data and the exported SLD 1.1 style (click to enlarge):


CSS improvements

The CSS styling language and editing UI have seen various improvements. The editor now supports some primitive code completion:

At the language level:

  • Scale dependencies can now also be expressed using the “@sd” variable (scale denominator) and the values can use common suffixes such as k and M to get more readable values, compare for example “[@scale < 1000000]” with “[@sd < 1M]”
  • Color functions have been introduced to match LessCSS functionality, like “Darken”, “Lighten”, “Saturate” and so on. The same functions have been made available in all other styling languages.
  • Calling a “env” variable has been made easier, from “env(‘varName’)” to “@varName” (or “@varName(defaultValue)” if you want to provide a default value).
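As a hypothetical example combining these features (the exact parameter syntax here is illustrative; check the GeoServer CSS documentation for specifics), a rule might look like:

```css
/* Only apply below a 1M scale denominator; darken a base color and
   read a line width from an env-style variable with a default of 2 */
[@sd < 1M] {
  stroke: darken(#3366cc, 20%);
  stroke-width: @lineWidth(2);
}
```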

As you probably already know, internally CSS is translated to an equivalent SLD for map rendering purposes. This translation process is now 50 times faster on large stylesheets (such as OSM roads, a particularly long and complicated style).

Image mosaic improvements and protocol control

Image mosaic saw several improvements in 2.12.

First, the support for mosaicking images in different coordinate reference systems improved greatly, with several tweaks and correctness fixes. As a noteworthy change, the code can now handle source data crossing the dateline. The following images show the footprints of images before and after the dateline (expressed in two different UTM zones, 60 and 1 respectively) and the result of mosaicking them as rasters (click to get a larger picture of each):

There is more good news for those who handle mosaics with many overlapping images taken at different times. If you have added interesting information to the mosaic index, such as cloud cover, off-nadir angle, snow cover and the like, you can now filter and sort on it, in both WMS (viewing) and WCS (downloading), by adding the cql_filter and sortBy KVP parameters.
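As an illustration (the layer and attribute names here are hypothetical), a GetMap request combining the two parameters might look like:

```
http://localhost:8080/geoserver/wms?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap
    &LAYERS=myMosaic&STYLES=&SRS=EPSG:4326&BBOX=-180,-90,180,90
    &WIDTH=1024&HEIGHT=512&FORMAT=image/png
    &CQL_FILTER=cloudCover<10&sortBy=cloudCover
```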

Here is an example of the same mosaic, the first composite favouring smallest cloud cover, the second one favouring recency instead (click to enlarge):


GeoPackage graduation

The GeoPackage store jumped straight from community to core package, in light of its increasing importance.

The WMS/WFS/WPS output formats are still part of community. Currently, GeoPackage vector does not support spatial indexes but stay tuned, it’s cooking!

New community modules

The 2.12 series comes with a few new community modules, in particular:

  • Looking into styling vector tiles, client and server side, using a single language? Look no further than the MBStyle module
  • For those into Earth Observation, there is a new OpenSearch for EO module in the community section
  • Need to store full GeoTiff in Amazon S3? The “S3 support for GeoTiff” module might just be what you’re looking for
  • A new “status-monitoring” community module has been added, providing basic statistics on system resource usage. Check out this pull request to follow its progress and eventual merge.

Mind, community modules are not part of the release, but you can find them in the nightly builds instead.

Other assorted improvements

Highlights of this release are featured below; for more information please see the release notes (2.12.0 | 2.12-RC1 | 2.12-beta):

  • Users REST uses default role service name as a user/group service name
  • imageio-ext-gdal-bindings-xxx.jar not available in geoserver-2.x.x-gdal-plugin.zip anymore since 2.10
  • REST GET resource metadata – file extension can override format parameter
  • GeoServer macOS picks up system extensions
  • SLD files not deleted when SLD is deleted in web admin
  • Reproject geometries in WMS GetFeatureInfo responses when info_format is GML
  • Include Marlin by default in bin/win/osx downloads, add to war instructions
  • Handle placemark placement when centroid of geometry not contained within
  • Enable usage of viewParams in WPS embedded WFS requests
  • Add GeoJson encoder for complex features
  • Allow image mosaic to refer a GeoServer configured store
  • Duplicate GeoPackage formats in layer preview page
  • ExternalGraphicFactory does not have a general way to reset caches
  • Generating a raster SLD style from template produced a functionally invalid style, now fixed
  • Style Editor Can Create Incorrect External Legend URLs
  • Namespace filtering on capabilities returns all layer groups (including the ones in other workspaces)


About GeoServer 2.12 Series

Additional information on the 2.12.0 series:

by jgarnett at October 17, 2017 08:55 AM

October 16, 2017

gvSIG Team

GIS applied to Municipal Management: Module 5.2 'Web services (Loading web services from gvSIG Desktop)'

The second video of the fifth module is now available, in which we see how to load web services from gvSIG Desktop. The first video of this module gave an introduction to Spatial Data Infrastructures (SDI), which helps in understanding this new video.

Many administrations make a large amount of cartography available to users, in many cases as web services accessible from desktop applications or web viewers, which let us access that cartography without having to download anything to our disk.

The cartography used in this module can be downloaded from the following link.

The second video tutorial of this fifth module is the following:

Related posts:

Filed under: gvSIG Desktop, IDE, spanish, training Tagged: IDE, Infraestructuras de Datos Espaciales, Servicios web, WFS, WMS

by Mario at October 16, 2017 08:30 AM

October 15, 2017

Cameron Shorter

The Yin & Yang of OSGeo Leadership

The 2017 OSGeo Board elections are about to start. Some of us who have been involved with OSGeo over the years have collated thoughts about the effectiveness of different strategies. Hopefully these thoughts will be useful for future boards, and charter members who are about to select board members.
The Yin and Yang of OSGeo
As with life, there are a number of Yin vs Yang questions we are continually trying to balance. Discussions around acting as a high or low capital organisation; organising top down vs bottom up; populating a board with old wisdom or fresh blood; personal vs altruistic motivation; protecting privacy vs public transparency. Let’s discuss some of them here.
Time vs Money
OSGeo is an Open Source organisation using a primary currency of volunteer time. We mostly self-manage our time via principles of Do-ocracy and Merit-ocracy. This is bottom up.
However, OSGeo also manages some money. Our board divvies up a budget which is allocated down to committees and projects. This is top-down command-and-control management. This cross-over between volunteer and market economics is a constant point of tension. (For more on the cross-over of economies, see Paul Ramsey’s FOSS4G 2017 Keynote, http://blog.cleverelephant.ca/2017/08/foss4g-keynote.html)
High or low capital organisation?
Our 2013 OSGeo Board tackled this question:
Should OSGeo act as a high capital or low capital organisation? I.e., should OSGeo dedicate energy to collecting sponsorship and then passing out these funds to worthy OSGeo causes?
While initially it seems attractive to have OSGeo woo sponsors, because we would all love to have more money to throw at worthy OSGeo goals, the reality is that chasing money is hard work. And someone who can chase OSGeo sponsorship is likely conflicted with chasing sponsorship for their particular workplace. So in practice, to be effective in chasing sponsorship, OSGeo will probably need to hire someone specifically for the role. OSGeo would then need to raise at least enough to cover wages, and then quite a bit more if the sponsorship path is to create extra value.
This high capital path is how the Apache foundation is set up, and how LocationTech propose to organise themselves. It is the path that OSGeo started following when founded under the umbrella of Autodesk.
However, as OSGeo has grown, OSGeo has slowly evolved toward a low capital volunteer focused organisation. Our overheads are very low, which means we waste very little of our volunteer labour and capital on the time consuming task of chasing and managing money. Consequently, any money we do receive (from conference windfalls or sponsorship) goes a long way - as it doesn't get eaten up by high overheads.
Size and Titles
Within small communities, influence is based around meritocracy and do-ocracy. Good ideas bubble to the top, and those who do the work decide what work gets done. Leaders who try to pull rank in order to gain influence quickly lose volunteers. Within these small communities, a person’s title holds little tradable value.
However, our OSGeo community has grown very large, upward of tens of thousands of people. At this size, we often can’t use our personal relationships to assess reputation and trust. Instead we need to rely on other cues, such as titles and allocated positions of power.
Consider also that OSGeo projects have become widely adopted. As such, knowledge and influence within an OSGeo community has become a valuable commodity. It helps land a job; secure a speaking slot at a conference; or get an academic paper published.
This introduces a commercial dynamic into our volunteer power structures:
  • A title is sometimes awarded to a dedicated volunteer, hoping that it can be traded for value within the commercial economy. (In practice, deriving value from a title is much harder than it sounds).
  • There are both altruistic and personal reasons for someone to obtain a title. A title can be used to improve the effectiveness of the volunteer, or to improve the volunteer’s financial opportunities.
  • This can prompt questions of a volunteer’s motivations.
In response to this, over the years we have seen a gradual change in the positioning of roles within the OSGeo community.
Top-down vs bottom-up
OSGeo board candidates have been asked for their “vision”, and “what they would like to change or introduce” (https://wiki.osgeo.org/wiki/Election_2017_Candidate_Manifestos). These would be valid questions if OSGeo were run as a command-and-control, top-down hierarchy, where decisions made by the board were delegated to OSGeo committees to implement. But OSGeo is bottom-up.
Boards which attempt to centralise control and delegate tasks cause resentment and disengagement amongst volunteers. Likewise, communities who try to delegate tasks to their leaders merely burn out their leaders. Both are ignoring the principles of Do-ocracy and Merit-ocracy. So ironically, boards which do less are often helping more.
Darwinian evolution means that only awesome ideas and inspiring leaders attract volunteer attention - and that is a good thing.
Recognising ineffective control attempts
How do you recognise ineffective command-and-control techniques within a volunteer community? Look for statements such as:
  • “The XXX committee needs to do YYY…”
  • “Why isn’t anyone helping us do …?”
  • “The XXX community hasn’t completed YYY requirements - we need to tell them to implement ZZZ”
If all the ideas from an organisation come from management, then management isn’t listening to their team.
Power to the people
In most cases the board should keep out of the way of OSGeo communities. Only in exceptional circumstances should a board override volunteer initiatives.
Decisions and power within OSGeo should be moved back into OSGeo committees, chapters and projects. This empowers our community, and motivates volunteers wishing to scratch an itch.
We do want our board members to be enlightened, motivated and engaged within OSGeo. This active engagement should be done within OSGeo communities: partaking, facilitating or mentoring as required. A recent example of this was Jody Garnett’s active involvement with OSGeo rebranding - where he worked with others within the OSGeo marketing committee.
Democratising key decisions
We have a charter membership of nearly 400 who are tasked with ‘protecting’ the principles of the foundation and voting for new charter members and the board. Beyond this, charter members have had little way of engaging with the board to influence the direction of OSGeo.
How can we balance the signal-to-noise ratio such that we can achieve effective membership engagement with the board without overwhelming ourselves with chatter? Currently we have no formal or prescribed processes for such consultation.
OSGeo Board members are not paid for their services. However, they are regularly invited to partake in activities such as presenting at conferences or participating in meetings with other organisations. These are typically beneficial to both OSGeo and the leader’s reputation or personal interest. To avoid OSGeo Board membership being seen as a “Honey Pot”, and for the Board to maintain trust and integrity, OSGeo board members should refuse payment from OSGeo for partaking in such activities. (There is nothing wrong with accepting payment from another organisation, such as the conference organisers.)
In response to the question of conferences, OSGeo has previously created OSGeo Advocates - an extensive list of local volunteers from around the world willing to talk about OSGeo. https://wiki.osgeo.org/wiki/OSGeo_Advocate
Old vs new
Should we populate our board with old wisdom or encourage fresh blood and new ideas? We ideally want a bit of both: bringing wisdom from the past, but also spreading the opportunity of leadership across our membership. We should avoid leadership becoming an exclusive “boys club” without active community involvement, and possibly should consider maximum terms for board members.
If our leadership follow a “hands off oversight role”, then past leaders can still play influential roles within OSGeo’s subcommittees.
Vision for OSGeo 2.0
Prior OSGeo thought leaders have suggested it’s time to grow from OSGeo 1.0 to OSGeo 2.0; time to update our vision and mission.  A few of those ideas have fed into OSGeo’s website revamp currently underway. This has been a good start, but there is still room to acknowledge that much has changed since OSGeo was born a decade ago, and there are plenty of opportunities to positively redefine ourselves.
A test of OSGeo’s effectiveness is to see how well community ideas are embraced and taken through to implementation. This is a challenge that I hope will attract new energy and new ideas from a new OSGeo generation.
Here are a few well considered ideas that have been presented to date that we can start from:
So where does this leave us?
  • Let’s recognise that OSGeo is an Open Source community, and we organise ourselves best with bottom-up Meritocracy and Do-ocracy.
  • Wherever possible, decisions should be made at the committee, chapter or project level, with the board merely providing hands-off oversight. This empowers and enables our sub-communities.
  • Let’s identify strategic topics where the OSGeo board would benefit from consultation with charter membership and work out how this could be accomplished efficiently and effectively.
  • Let’s embrace and encourage new blood into our leadership ranks, while retaining access to our wise old white beards.  
  • The one top-down task for the board is based around allocation of OSGeo’s (minimal) budget.

by Cameron Shorter (noreply@blogger.com) at October 15, 2017 10:42 PM

Free and Open Source GIS Ramblings

Movement data in GIS #9: trajectory data models

There are multiple ways to model trajectory data. This post takes a closer look at the OGC® Moving Features Encoding Extension: Simple Comma Separated Values (CSV). This standard was published in 2015, but I haven’t been able to find any reviews of it (in a GIS context or anywhere else).

The following analysis is based on the official OGC trajectory example at http://docs.opengeospatial.org/is/14-084r2/14-084r2.html#42. The header consists of two lines: the first line provides some meta information while the second defines the CSV columns. The data model is segment-based. That is, each line describes a trajectory segment with at least two coordinate pairs (or triplets for 3D trajectories). For each segment, there is a start and an end time which can be specified as absolute or relative (offset) values:

@stboundedby,urn:x-ogc:def:crs:EPSG:6.6:4326,2D,50.23 9.23,50.31 9.27,2012-01-17T12:33:41Z,2012-01-17T12:37:00Z,sec
@columns,mfidref,trajectory,state,xsd:token,"type code",xsd:integer
a, 10,150,11.0 2.0 12.0 3.0,walking,1
b, 10,190,10.0 2.0 11.0 3.0,walking,2
a,150,190,12.0 3.0 10.0 3.0,walking,2
c, 10,190,12.0 1.0 10.0 2.0 11.0 3.0,vehicle,1

Let’s look at the first data row in detail:

  • a … trajectory id
  • 10 … start time offset from 2012-01-17T12:33:41Z in seconds
  • 150 … end time offset from 2012-01-17T12:33:41Z in seconds
  • 11.0 2.0 12.0 3.0 … trajectory coordinates: x1, y1, x2, y2
  • walking …  state
  • 1… type code
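A minimal Python sketch (my own illustrative parser, not part of the standard) makes this segment layout explicit:

```python
from datetime import datetime, timedelta

def parse_segment(row, t0):
    """Parse one Moving Features CSV data row into absolute times and (x, y) points."""
    parts = [p.strip() for p in row.split(",")]
    coords = [float(v) for v in parts[3].split()]
    return {
        "id": parts[0],
        "start": t0 + timedelta(seconds=float(parts[1])),
        "end": t0 + timedelta(seconds=float(parts[2])),
        "points": list(zip(coords[0::2], coords[1::2])),  # pair up x1 y1 x2 y2 ...
        "state": parts[4],
        "type_code": int(parts[5]),
    }

# start time taken from the @stboundedby header line
t0 = datetime(2012, 1, 17, 12, 33, 41)
seg = parse_segment("a, 10,150,11.0 2.0 12.0 3.0,walking,1", t0)
print(seg["start"], seg["points"])
```

Note that this sketch assumes 2D data; 3D trajectories would pair the coordinates up as triplets instead.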

My main issues with this approach are

  1. They missed the chance to use WKT notation to make the CSV easily readable by existing GIS tools.
  2. As far as I can see, the data model requires a regular sampling interval because there is no way to store time stamps for intermediate positions along trajectory segments. (Irregular intervals can be stored using segments for each pair of consecutive locations.)

In the common GIS simple feature data model (which is point-based), the same data would look something like this:


The main issue here is that there has to be some application logic that knows how to translate from points to trajectory. For example, trajectory a changes from state “walking”, type 1 to “walking”, type 2 at 2012-01-17T12:36:11Z, but we have to decide whether to store the previous or the following state code for this individual point.

An alternative to the common simple feature model is the PostGIS trajectory data model (which is LineStringM-based). For this data model, we need to convert time stamps to numeric values, e.g. 2012-01-17T12:33:41Z is 1326803621 in Unix time. In this data model, the data looks like this:

a,LINESTRINGM(11.0 2.0 1326803631, 12.0 3.0 1326803771),walking,1
a,LINESTRINGM(12.0 3.0 1326803771, 10.0 3.0 1326803811),walking,2
b,LINESTRINGM(10.0 2.0 1326803631, 11.0 3.0 1326803811),walking,2
c,LINESTRINGM(12.0 1.0 1326803631, 10.0 2.0 1326803771, 11.0 3.0 1326803811),vehicle,1

This is very similar to the OGC data model, with the notable difference that every position is time-stamped (instead of just having segment start and end times). If one has movement data which is recorded at regular intervals, the OGC data model can be a bit more compact, but if the trajectories are sampled at irregular intervals, each point pair will have to be modeled as a separate segment.
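The timestamp conversion described here is easy to sketch in Python (the helper name is mine); it reproduces the geometry of the first LINESTRINGM row above:

```python
from datetime import datetime, timedelta, timezone

def to_linestring_m(points):
    """Build a LINESTRINGM WKT string from (x, y, datetime) tuples, with M as Unix time."""
    coords = ", ".join(f"{x} {y} {int(t.timestamp())}" for x, y, t in points)
    return f"LINESTRINGM({coords})"

# 2012-01-17T12:33:41Z is 1326803621 in Unix time; segment offsets are 10 s and 150 s
t0 = datetime(2012, 1, 17, 12, 33, 41, tzinfo=timezone.utc)
wkt = to_linestring_m([
    (11.0, 2.0, t0 + timedelta(seconds=10)),
    (12.0, 3.0, t0 + timedelta(seconds=150)),
])
print(wkt)  # LINESTRINGM(11.0 2.0 1326803631, 12.0 3.0 1326803771)
```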

Since the PostGIS data model is flexible, explicit, and comes with existing GIS tool support, it’s my clear favorite.

Read more:

by underdark at October 15, 2017 04:23 PM


Using pg_upgrade to upgrade PostGIS without installing an older version of PostGIS

The PostGIS project releases a new minor version every one or two years. Each minor version of PostGIS has a different libname suffix. In PostGIS 2.1 you'll find files in your PostgreSQL lib folder called postgis-2.1.*, rtpostgis-2.1.*, postgis-topology-2.1.*, address-standardizer-2.1.* etc., and in PostGIS 2.2 you'll find similar files but with 2.2 in the name. I believe PostGIS and pgRouting are the only extensions that stamp the lib with a version number. Most other extensions are just called extension.so, e.g. hstore is always called hstore.dll / hstore.so even if the PostgreSQL version changed from 9.6 to 10. On the bright side, this allows people to have two versions of PostGIS installed in a PostgreSQL cluster, though a database can use at most one version. So you can have an experimental database running a very new or unreleased version of PostGIS and a production database running a more battle-tested version.

On the sad side, this causes a lot of PostGIS users frustration when trying to use pg_upgrade to upgrade from an older version of PostGIS/PostgreSQL to a newer version of PostGIS/PostgreSQL, as their pg_upgrade often bails with a message in the loaded_libraries.txt log file something to the effect of:

could not load library "$libdir/postgis-2.2": ERROR:  could not access file "$libdir/postgis-2.2": No such file or directory
could not load library "$libdir/postgis-2.3": ERROR:  could not access file "$libdir/postgis-2.3": No such file or directory

This is also a hassle because we generally don't support a newer version of PostgreSQL on older PostGIS installs because the PostgreSQL major version changes tend to break our code often and backporting those changes is both time-consuming and dangerous. For example the DatumGetJsonb change and this PostgreSQL 11 crasher we haven't isolated the cause of yet. There are several changes like this that have already made the PostGIS 2.4.0 we released recently incompatible with the PostgreSQL 11 head development.

Continue reading "Using pg_upgrade to upgrade PostGIS without installing an older version of PostGIS"

by Regina Obe (nospam@example.com) at October 15, 2017 05:11 AM

October 13, 2017

Free and Open Source GIS Ramblings

Movement data in GIS extra: trajectory generalization code and sample data

Today’s post is a follow-up of Movement data in GIS #3: visualizing massive trajectory datasets. In that post, I summarized a concept for trajectory generalization. Now, I have published the scripts and sample data in my QGIS-Processing-tools repository on Github.

To add the trajectory generalization scripts to your Processing toolbox, you can use the Add scripts from files tool:

It is worth noting that Add scripts from files fails to correctly import potential help files for the scripts, but that's not an issue this time around, since I haven't gotten around to actually writing help files yet.

The scripts are used in the following order:

  1. Extract characteristic trajectory points
  2. Group points in space
  3. Compute flows between cells from trajectories

The sample project contains input data, as well as output layers of the individual tools. The only required input is a layer of trajectories, where trajectories have to be LINESTRINGM (note the M!) features:

Trajectory sample based on data provided by the GeoLife project

In Extract characteristic trajectory points, distance parameters are specified in meters, stop duration in seconds, and angles in degrees. The characteristic points contain start and end locations, as well as turns and stop locations:
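As a toy illustration of the turn-detection idea mentioned above (my own sketch of the concept, not the actual Processing script; the angle threshold is arbitrary):

```python
import math

def heading(p, q):
    """Heading in degrees of the segment from point p to point q."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

def turn_points(points, min_angle=45.0):
    """Return interior points where the heading changes by at least min_angle degrees."""
    turns = []
    for a, b, c in zip(points, points[1:], points[2:]):
        change = abs(heading(a, b) - heading(b, c))
        change = min(change, 360.0 - change)  # handle wrap-around past 180 degrees
        if change >= min_angle:
            turns.append(b)
    return turns

pts = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
print(turn_points(pts))  # the 90-degree corner at (2, 0) is flagged
```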

The characteristic points are then clustered. In this tool, the distance has to be specified in layer units, which are degrees in case of the sample data.

Finally, we can compute flows between cells defined by these clusters:

Flow lines scaled by flow strength and cell centers scaled by counts

If you use these tools on your own data, I’d be happy to see what you come up with!

Read more:

by underdark at October 13, 2017 06:41 PM

Equipo Geotux

Publishing a map tile service with QMetaTiles and GitHub

To publish a WMTS service using just static storage on a web server, you need a set of tools that allow you, first, to generate the image tiles and, second, to generate the XML file. For the first task, the following QGIS plugins will be used.



Note: This document is part of the workshop guide material for the Geographic Web Services course of the Master's in Geomatics at the Universidad Nacional de Colombia, Bogotá campus; more information at http://www.aulageo.cloud/course/unal-ogc-2017/


Once the QGIS plugins are installed, load the Laguna de Tota project; remember that the default CRS of this project must be EPSG:3857 or Web Mercator, since the tools used are only compatible with this coordinate reference system. Once the project is displayed, load the tile matrix scheme from the menu Web → TileLayer Plugin → Add TileLayer …; in this case, for the WMTS service, that is the XYZFrame scheme.

TileLayer Plugin

The scheme is shown as a new layer named XYZFrame, and lets you identify the minimum and maximum zoom range, as well as the number and index of the tiles. In this case, the minimum zoom at which our study area fits in a single tile is 13, and the origin index in the tile matrix is 2434,3967.



Once the zoom levels (scales) of the matrices have been identified, generate the tiles with the QMetaTiles plugin, available under Plugins → QMetaTiles → QMetaTiles.

The parameters requested by this tool are shown in the following image.


  • Output: the directory in which the tiles will be created. If you are using GeoTux Server, the path corresponding to the /gisdata/tiles mount point is recommended.
  • Tileset name: the name of the project or tile matrix set, in this case “z11to17”.
  • Extent: the geographic extent for tile generation; in this case a layer was used to restrict it.
  • Zoom: the zoom levels for which to generate the tile matrix set; in this case levels 11 to 17 were identified earlier.


October 13, 2017 03:07 PM

October 12, 2017

Even Rouault

Optimizing JPEG2000 decoding

Over this summer I have spent 40 days (*) in the guts of the OpenJPEG open-source library (BSD 2-clause licensed) optimizing the decoding speed and memory consumption. The result of this work is now available in the OpenJPEG 2.3.0 release.

For those who are not familiar with JPEG-2000 (and given its complexity there is plenty of excuse for that), it is a standard for image compression that supports both lossless and lossy methods. It uses the discrete wavelet transform for multi-resolution analysis, and a context-driven binary arithmetic coder to encode bit-plane coefficients. When you go into the depths of the format, what is striking is the number of independent variables that can be tuned:

- use of tiling or not, and tile dimensions
- number of resolutions
- number of quality layers
- code-block dimensions
- 6 independent options regarding how code-blocks are encoded (code-block styles): use of Selective arithmetic coding bypass, Reset of context probabilities on coding pass boundaries, Termination on each coding pass, Vertically stripe-causal context, Predictable termination, and Segmentation symbols. Some can bring decoding speed advantages (notably selective arithmetic coding bypass) at the price of lower compression efficiency. Others may help hardware-based implementations. Others can help detect corruption in the codestream (predictable termination)
- spatial partition of code-blocks into so-called precincts, whose dimension may vary per resolution
- progression order, i.e. the criterion that decides how packets are ordered, which is a permutation of the 4 variables Precinct, Component, Resolution and Layer. The standard allows 5 different permutations. To add extra fun, the progression order may be configured to change several times among the 5 possibilities (something I haven't yet had the opportunity to really understand)
- division of packets into tile-parts
- use of multi-component transform or not
- choice of lossless or lossy wavelet transforms
- use of start of packet / end of packet markers
- use of Region Of Interest, to have higher quality in some areas
- choice of image origin and tiling origins with respect to a reference grid (the image and tile origin are not necessarily pixel (0,0))

And if that was not enough, some or most of those parameters may vary per tile! If you already found that TIFF/GeoTIFF had too many parameters to tune (tiling or not, pixel or band interleaving, compression method), JPEG-2000 is probably one or two orders of magnitude more complex. JPEG-2000 is truly a technological and mathematical jewel. But needless to say, having a compliant JPEG-2000 encoder/decoder, which OpenJPEG is (it is an official reference implementation of the standard), is already something complex. Having it perform optimally is yet another challenge.

Prior to this latest optimization round, I had already worked on enabling multi-threaded decoding at the code-block level, since code-blocks can be decoded independently (once you've re-assembled from the codestream the bytes that encode a code-block), and in the inverse wavelet transform as well (during the horizontal pass, rows can be transformed independently; during the vertical pass, columns). But single-threaded use had yet to be improved. Roughly 80 to 90% of the time during JPEG-2000 decoding is spent in the context-driven binary arithmetic decoder, around 10% in the inverse wavelet transform, and the rest in other operations such as the multi-component transform. I managed to get around 10% improvement in the global decompression time by porting to the decoder an optimization that had been proposed by Carl Hetherington for the encoding side, in the code that determines which bit of a wavelet-transformed coefficient must be encoded during which coding pass. The trick here was to reduce the memory needed for the context flags, so as to decrease the pressure on the CPU cache. Other optimizations in that area consisted in making sure that some critical variables are kept preferably in CPU registers rather than in memory. I've spent a good deal of time looking at the disassembly of the compiled code.
I've also optimized the reversible (lossless) inverse wavelet transform to use the Intel SSE2 (or AVX2) instruction sets to process several rows at a time, which can give up to a 3x speed-up for that stage (so a global improvement of about 3%).

I've also worked on reducing the memory consumption needed to decode images, by removing the use of intermediate buffers when possible. As a result, the amount of memory needed for full-image decoding was reduced by a factor of 2.4.

Another major work direction was to optimize speed and memory consumption for sub-window decoding. Up to now, the minimal unit of decompression was a tile. That is fine for tiles of reasonable dimensions (let's say 1024x1024 pixels), but definitely not for images that don't use tiling and hardly fit into memory. In particular, OpenJPEG couldn't open images of more than 4 billion pixels. The work consisted of 3 steps:
- identifying which precincts and code-blocks are needed for the reconstruction of a spatial region
- optimize the inverse wavelet transform to operate only on rows and columns needed
- reducing the allocation of buffers to the amount strictly needed for the subwindow of interest
The overall result is that decoding time and memory consumption are now roughly proportional to the size of the sub-window to decode, whereas they were previously constant. For example, decoding 256x256 pixels in a 13498x9944x3-band image now takes only 190 ms, versus about 40 seconds before.
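The first of those steps, mapping a requested window to the code-blocks that must be decoded, boils down to interval arithmetic on the code-block grid. An illustrative sketch (not OpenJPEG's actual code; it assumes a grid anchored at pixel (0,0), whereas JPEG-2000 allows arbitrary image and tile origins):

```python
def codeblocks_for_window(x0, y0, x1, y1, cb_w=64, cb_h=64):
    """Index ranges of the code-blocks intersecting the pixel
    window [x0, x1) x [y0, y1); only these blocks (and the
    precincts containing them) need to be entropy-decoded."""
    cols = range(x0 // cb_w, (x1 - 1) // cb_w + 1)
    rows = range(y0 // cb_h, (y1 - 1) // cb_h + 1)
    return cols, rows
```

For a 256x256 window only a handful of blocks are touched, which is why the decoding cost now scales with the window rather than with the whole image.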

As a side activity, I've also fixed 2 different annoying bugs that could cause lossless encoding to not be lossless for some combinations of tile sizes and number of resolutions, or when some code-block style options were used.

I've just updated the GDAL OpenJPEG driver (in GDAL trunk) to be more efficient when dealing with untiled JPEG-2000 images.

There are many more things that could be done in the OpenJPEG library:
- port a number of optimizations to the encoding side: multi-threading, discrete wavelet transform optimizations, etc.
- on the decoding side, further reduce the memory consumption, particularly in the untiled case. Currently we need to ingest into memory the whole codestream for a tile (so the whole compressed file, for an untiled image)
- linked to the above, use of TLM and PLT marker segments (kind of indexes to tiles and packets)
- on the decoding side, investigate further improvements for the code specific of irreversible / lossy compression
- make the opj_decompress utility make better use of the API and consume less memory. Currently it decodes the full image into memory instead of proceeding by chunks (you won't have this issue if using gdal_translate)
- investigate how using GPGPU capabilities (CUDA or OpenCL) could help reduce the time spent in context-driven binary arithmetic decoder.

Contact me if you are interested in some of those items (or others!)

(*) funding provided by academic institutions and archival organizations, namely
… And logistic support from the International Image Interoperability Framework (IIIF), the Council on Library and Information Resources (CLIR), intoPIX, and of course the Image and Signal Processing Group (ISPGroup) from University of Louvain (UCL, Belgium) hosting the OpenJPEG project.

by Even Rouault (noreply@blogger.com) at October 12, 2017 04:41 PM

Paul Ramsey

Adding PgSQL to PHP on OSX

I’m yak shaving this morning, and one of the yaks I need to ensmooth is running a PHP script that connects to a PgSQL database.

No problem, OSX ships with PHP! Oh wait, that PHP does not include PgSQL database support.

Adding PgSQL to PHP on OSX

At this point, you can either completely replace the built-in PHP with another PHP (probably a good idea if you’re doing modern PHP development and want something newer than 5.5) or you can add PgSQL to your existing PHP installation. I chose the latter.

The key is to build the extension you want without building the whole thing. This is a nice trick available in PHP, similar to the Apache module system for independent module development.

First, figure out what version of PHP you will be extending:

> php --info | grep "PHP Version"

PHP Version => 5.5.38

For my version of OSX, Apple shipped 5.5.38, so I’ll pull down the code package for that version.

Then, unbundle it and go to the php extension directory:

tar xvfj php-5.5.38.tar.bz2
cd php-5.5.38/ext/pgsql

Now the magic part. In order to build the extension, without building the whole of PHP, we need to tell the extension how the PHP that Apple ships was built and configured. How do we do that? We run phpize in the extension directory.

> /usr/bin/phpize

Configuring for:
PHP Api Version:         20121113
Zend Module Api No:      20121212
Zend Extension Api No:   220121212

The phpize process reads the configuration of the installed PHP and sets up a local build environment just for the extension. All of a sudden we have a ./configure script, and we’re ready to build (assuming you have installed the OSX command-line developer tools with Xcode).

> ./configure \
    --with-php-config=/usr/bin/php-config \
    --with-pgsql=/opt/pgsql

> make

Note that I have my own build of PostgreSQL in /opt/pgsql. You’ll need to supply the path to your own install of PgSQL so that the PHP extension can find the PgSQL libraries and headers to build against.

When the build is complete, you’ll have a new modules/ directory in the extension directory. Now figure out where your system wants extensions copied, and copy the module there.

> php --info | grep extension_dir

extension_dir => /usr/lib/php/extensions/no-debug-non-zts-20121212 => /usr/lib/php/extensions/no-debug-non-zts-20121212

> sudo cp modules/pgsql.so /usr/lib/php/extensions/no-debug-non-zts-20121212

Finally, you need to edit the /etc/php.ini file to enable the new module. If the file doesn’t already exist, you’ll have to copy in the template version and then edit that.

sudo cp /etc/php.ini.default /etc/php.ini
sudo vi /etc/php.ini

Find the line for the PgSQL module and uncomment and edit it appropriately.
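With a stock configuration the entry to uncomment typically looks like the following (the exact name can vary between PHP builds, so check your php.ini):

```ini
extension=pgsql.so
```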


Now you can check and see if it has picked up the PgSQL module.

> php --info | grep PostgreSQL

PostgreSQL Support => enabled
PostgreSQL(libpq) Version => 10.0
PostgreSQL(libpq)  => PostgreSQL 10.0 on x86_64-apple-darwin15.6.0, compiled by Apple LLVM version 8.0.0 (clang-800.0.42.1)

That’s it!

October 12, 2017 02:00 PM


Processing aerial survey data with the open-source OpenDroneMap package

With the rapid development of both photogrammetric technology and the industry of easy-to-fly UAVs fitted with photo/video equipment, specialists in the most diverse fields have become increasingly interested in organizing aerial surveys and processing the resulting data into geographic products such as orthophotomaps, digital terrain models and 3D models. The market offers a large number of solutions, both hardware (mainly UAVs) and software. Practically all the major vendors (Autodesk, Trimble, …) now develop photogrammetric processing software, and many new companies promote packages of their own (Agisoft, Pix4D, DroneDeploy, …). Open-source projects have been developing in parallel. This article gives a detailed description of installing one of the most successful open packages, OpenDroneMap, on a virtual machine, and covers the basics of using it.

Read | Discuss


by Эдуард Казаков at October 12, 2017 07:46 AM

October 11, 2017

gvSIG Team

GIS applied to Municipal Management: Module 5.1 ‘Web services (Introduction to SDIs)’

With the first video of the fifth module, which covers access to web services from gvSIG, we introduce a fundamental concept for the efficient management of geographic information: Spatial Data Infrastructures (SDI). Such is their importance that more and more countries and regions around the world are legislating to make their implementation effective in every administration that produces digital geographic information.

SDIs are considered the ideal system for managing, in its entirety, the geographic information of any organization, including of course a town council. In future modules of this course we will look at gvSIG Online, the free software solution for putting them into operation. In today's module we will see how to work from the desktop GIS with the web map services that SDIs can provide.

At present, a great number of administrations publish their cartography openly so that it can be loaded through web services. Thanks to the use of certain standards, these services can be accessed from gvSIG Desktop, which lets us load cartography into our project without having to download anything to disk.

To better understand this part of gvSIG, we start with a theoretical video introducing Spatial Data Infrastructures, where we explain what web services are, what types exist, and give some links that compile available services.

There is no cartography to download for this module, as the video is entirely theoretical.

The first video tutorial of this fifth module is the following:

Related posts:

Filed under: gvSIG Desktop, IDE, spanish, training Tagged: IDE, OGC, Servicios web

by Mario at October 11, 2017 10:29 PM

Even Rouault

GDAL and cloud storage

In the past weeks, a number of improvements related to access to cloud storage have been committed to GDAL trunk (future GDAL 2.3.0)

Cloud-based virtual file systems

There was already support for accessing private data in Amazon S3 buckets through the /vsis3/ virtual file system (VFS). Besides a few robustness fixes, a few new capabilities have been added, like creation and deletion of directories inside a bucket with VSIMkdir() / VSIRmdir(). The authentication methods have also been extended to support, beyond the AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID environment variables, the other ways accepted by the "aws" command line utilities, that is to say storing credentials in the ~/.aws/credentials or ~/.aws/config files. If GDAL is executed from an Amazon EC2 instance that has been assigned rights to buckets, GDAL will automatically fetch the instance profile credentials.
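As a sketch, the environment-variable route looks like this (bucket and object names are hypothetical placeholders):

```shell
# Credentials via environment variables (one of several options;
# ~/.aws/credentials or EC2 instance profiles also work):
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...
# Any GDAL utility can then address objects through the VFS:
gdalinfo /vsis3/mybucket/raster/my.tif
```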

The existing read-only /vsigs/ VFS for Google Cloud Storage has been extended with write capabilities (creation of new files), to reach feature parity with /vsis3/. The authentication methods have also been extended to support OAuth2 authentication with a refresh token, or with service account authentication. The credentials can be stored in a ~/.boto configuration file. And when run from a Google Compute Engine virtual machine, GDAL will automatically fetch the instance profile credentials.

Two new VFS have also been added, /vsiaz/ for Microsoft Azure Blobs and /vsioss/ for Alibaba Cloud Object Storage Service. They support read and write operations similarly to the two previously mentioned VFS.

To make file and directory management easy, a number of Python sample scripts have been created or improved:
gdal_cp.py my.tif /vsis3/mybucket/raster/
gdal_cp.py -r /vsis3/mybucket/raster /vsigs/somebucket
gdal_ls.py -lr /vsis3/mybucket
gdal_rm.py /vsis3/mybucket/raster/my.tif
gdal_mkdir.py /vsis3/mybucket/newdir
gdal_rmdir.py -r /vsis3/mybucket/newdir

Cloud Optimized GeoTIFF

Over the past few months, the cloud-optimized formulation of GeoTIFF files has been adopted by various actors; it enables clients to efficiently open and access portions of a GeoTIFF file through HTTP GET range requests.

Source code for an online service that validates Cloud Optimized GeoTIFFs (using GDAL and the validate_cloud_optimized_geotiff.py script underneath) and can run as an AWS Lambda function is available. Note: as the current definition of what is or is not a cloud-optimized formulation has been decided unilaterally up to now, it cannot be excluded that it might change on some points (for example, relaxing constraints on the ordering of the data of each overview level, or requiring that tiles be ordered top-to-bottom, left-to-right)

GDAL trunk has received improvements to speed up access to sub-windows of a GeoTIFF file by making sure that the tiles intersecting the sub-window of interest are requested in parallel (this is true for public files accessed through /vsicurl/ or with the four specialized cloud VFS mentioned above), by reducing the amount of data fetched to the strict minimum, and by merging requests for consecutive ranges. In some environments, particularly when accessing the storage service of a virtual machine from the same provider, HTTP/2 can be used by setting the GDAL_HTTP_VERSION=2 configuration option (provided you have a recent enough libcurl built against nghttp2). In that case, HTTP/2 multiplexing will be used to request and retrieve data on the same HTTP connection (saving, for example, the time needed to establish TLS). Otherwise, GDAL will default to several parallel HTTP/1.1 connections. For long-lived processes, efforts have been made to re-use existing HTTP connections as much as possible.

by Even Rouault (noreply@blogger.com) at October 11, 2017 05:48 PM

October 10, 2017

gvSIG Team

Impressions from the 4th gvSIG Mexico Conference

Last week the 4th gvSIG Mexico Conference took place. For the fourth year in a row the Mexican gvSIG Community held an event full of activities, with interesting talks and workshops filled to overflowing.

I think the best way to convey my impressions of the conference is to publish a few pictures of it. I will only repeat some of the words I used at the closing of the event: the gvSIG conferences in Mexico have been seeds contributing to a process that goes beyond the migration of this or that agency to free software; from the field of geomatics they have set in motion something much more important, the road towards a country's technological independence.

The conference was held at the same time as the 9th Latin America and Caribbean Conference; in two weeks we have the 13th International Conference… indicators, among many others, that show that gvSIG is a consolidated and constantly growing project, whose use extends to more than 160 countries. And just wait for 2018, which holds many surprises…

Filed under: events, spanish Tagged: Conferencia, Culiacán, jornadas, México, Sinaloa

by Alvaro at October 10, 2017 01:24 AM

October 09, 2017

gvSIG Team

GIS applied to Municipal Management: Module 4.2 ‘Attribute tables (table joins)’

The second video of the fourth module is now available, in which we continue working with attribute tables. This time we will see how to perform table joins, very useful when we have a layer with our municipality's cartography (for example a layer of parcels or neighbourhoods) and we want to join it with information from an external table, such as the population of each parcel or neighbourhood. For this, both tables need a field with common values, which is what the join will use.

The third module contains all the information on how to install gvSIG, and the FAQ section of the second module explains how to ask any questions you may have during the course.

This module uses a wizard to open a CSV file, which is usually not installed by default in gvSIG 2.3.1. To install it, see the information in the following post.

The cartography used in this module was already available in the previous post, but if you have not downloaded it yet you can do so from the following link.

The second video tutorial of this fourth module is the following:

Related posts:

Module 1: Differences between GIS and CAD

Module 2: Introduction to Reference Systems

Module 3: Views, layers, symbology, labelling

Module 4.1: Attribute tables (alphanumeric information)

Filed under: gvSIG Desktop, spanish, training Tagged: ayuntamientos, gestión municipal, información alfanumérica, Tablas, tablas de atributos, unión de tablas

by Mario at October 09, 2017 05:46 PM

From GIS to Remote Sensing

Developing the SCP 6: Main interface and Multiple band sets

I am updating the Semi-Automatic Classification Plugin (SCP) to version 6 (codename Greenbelt) which will be compatible with the upcoming QGIS 3.

In this previous post, I described the main changes to the SCP dock, and in particular the new tabs designed to optimize the space on the display.
In this post I present the redesigned Main interface that contains the SCP tools.
In addition, I have implemented a profound change to the SCP core: the ability to define multiple band sets. This opens many possibilities for new tools that can exploit combinations of bands (I am thinking about raster mosaics and more) which I'll try to include in SCP 6.

Main interface

by Luca Congedo (noreply@blogger.com) at October 09, 2017 08:00 AM

October 08, 2017


GRASS GIS crash course

On 3 November 2017 there will be a GRASS GIS crash course, organized by OSGeo.nl and ITC, the Faculty of Geo-Information Science and Earth Observation of the University of Twente. The course will provide participants with an overview of the software's capabilities and hands-on experience in raster, vector and time series processing with the open-source software GRASS GIS 7. For more information, go to https://osgeo.nl/grassgis-course/. You can register on our Meetup page [update: the maximum number of participants has been reached; registration is closed].



Course structure/contents

The course consists of mainly practical sessions with a short intro to basic concepts at the beginning. All the code and material will be publicly available.

We will cover the following introductory topics, among others:
  • GRASS database, locations and mapsets
  • Working with different data types (vector, raster, 3D raster formats, time series)
  • Different interfaces (GUI, CLI, Python)
  • Region and mask
  • Scripting examples
  • Visualization of spatial data; scale bar, symbols, grids, color tables, histograms
  • Where to find help

After the introduction with simple examples, we will go together through a guided exercise to demonstrate a full workflow in GRASS GIS, involving raster, vector and temporal data. In the end, participants will have the chance to follow three different tutorials: Remote sensing analysis using satellite data, Time series processing and spatial point interpolation. Teachers will be available for questions and explanations.

When and where?

  • Friday, November 3rd, 2017 from 10.30 to 16.30 (with 1-hour for lunch break)
  • ITC – Faculty of Geo-Information Science and Earth Observation. University of Twente. Hengelosestraat 99, 7514 AE, Enschede. The Netherlands.

by Paulo van Breugel at October 08, 2017 06:44 PM

Free and Open Source GIS Ramblings

Movement data in GIS #8: edge bundling for flow maps

If you follow this blog, you’ll probably remember that I published a QGIS style for flow maps a while ago. The example showed domestic migration between the nine Austrian states, a rather small dataset. Even so, it required some manual tweaking to make the flow map readable. Even with only 72 edges, the map quickly gets messy:

Raw migration flows between Austrian states, line width scaled by flow strength

One popular approach in the data viz community to deal with this problem is edge bundling. The idea is to reduce visual clutter by generating bundles of similar edges.

Surprisingly, edge bundling is not available in desktop GIS. Existing implementations in the visual analytics field often run on GPUs because edge bundling is computationally expensive. Nonetheless, we have set out to implement force-directed edge bundling for the QGIS Processing toolbox [0]. The resulting scripts are available at https://github.com/dts-ait/qgis-edge-bundling.

The main procedure consists of two tools: bundle edges and summarize. Bundle edges takes the raw straight lines and incrementally adds intermediate nodes (called control points), shifting them according to computed spring and electrostatic forces. If the input is 72 lines, the output is again 72 lines, but each line geometry has been bent so that similar lines overlap and form a bundle.

After this edge bundling step, most common implementations compute a line heatmap, that is, for each map pixel, determine the number of lines passing through the pixel. But QGIS does not support line heatmaps and this approach also has issues distinguishing lines that run in opposite directions. We have therefore implemented a summarize tool that computes the local strength of the generated bundles.

Continuing our previous example: given the 72 input lines, summarize breaks each line into its individual segments and determines the number of segments from other lines that are part of the same bundle. If a weight field is specified, each line is not just counted once but according to its weight value. The resulting bundle strength can be used to create a line layer style with data-defined line width:

Bundled migration flows

To avoid overlaps of flows in opposing directions, we define a line offset. Finally, summarize also adds a sequence number to the line segments. This sequence number is used to assign a line color on the gradient that indicates flow direction.

I already mentioned that edge bundling is computationally expensive. One reason is that we need to perform pairwise comparison of edges to determine if they are similar and should be bundled. This comparison results in a compatibility matrix and depending on the defined compatibility threshold, different bundles can be generated.
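To illustrate what goes into that matrix, here is the angle term used in one common formulation of force-directed edge bundling; a sketch only, since the full compatibility score also combines length, position and visibility terms:

```python
import math

def angle_compatibility(p1, p2, q1, q2):
    """|cos| of the angle between edges P=(p1,p2) and Q=(q1,q2):
    1.0 for parallel edges, 0.0 for perpendicular ones. Edge pairs
    whose combined score falls below the chosen compatibility
    threshold are never bundled together."""
    px, py = p2[0] - p1[0], p2[1] - p1[1]
    qx, qy = q2[0] - q1[0], q2[1] - q1[1]
    dot = px * qx + py * qy
    norm = math.hypot(px, py) * math.hypot(qx, qy)
    return abs(dot) / norm if norm else 0.0
```

Because this score must be evaluated for every pair of edges, the cost grows quadratically with the number of edges, which is what the clustering pre-step below works around.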

The following U.S. dataset contains around 4000 lines and bundling it takes a considerable amount of time.

One approach to speed up computations is to first use a quick clustering algorithm and then perform edge bundling on each cluster individually. If done correctly, clustering significantly reduces the size of each compatibility matrix.

In this example, we divided the edges into six clusters before bundling them. If you compare this result to the visualization at the top of this post (which did not use clustering), you’ll see some differences here and there but, overall, the results are quite similar:

Looking at these examples, you’ll probably spot a couple of issues. There are many additional ideas for potential improvements from existing literature which we have not implemented yet. If you are interested in improving these tools, please go ahead! The code and more examples are available on Github.

For more details, leave your email in a comment below and I’ll gladly send you the pre-print of our paper.

[0] Graser, A., Schmidt, J., Roth, F., & Brändle, N. (accepted) Untangling Origin-Destination Flows in Geographic Information Systems. Information Visualization – Special Issue on Visual Movement Analytics.


by underdark at October 08, 2017 05:50 PM

October 06, 2017


5th QGIS user meeting

OSGeo-fr and Montpellier SupAgro are organizing the fifth meeting of the French-speaking QGIS community.

This meeting will take place on 14 and 15 December at Montpellier SupAgro.

The first day is organized as a barcamp: informal workshops where you, the users, can propose topics, present work, exchange and discuss. The aim is to foster exchange between users, contributors and funders. Don't hesitate to come and discover an original form of contribution around QGIS!

Here are some of the topics discussed in 2015-2016:

  • how to translate QGIS?
  • how to use QGIS on a tablet (QField)?
  • how to create a map with QGIS and publish it online?

During the second day, several talks will be given. This year the theme of the day is "QGIS 3.0: what will this new version change for users?".

The call for sponsorship went out this week, and the call for presentations will follow very soon. Stay tuned!

For any information or request, use the contact form provided for that purpose.

About Montpellier SupAgro and the AgroTIC programme

Montpellier SupAgro is a teaching and research institution that trains agronomy engineers. AgroTIC is one of the degree programmes run by Montpellier SupAgro. Every year we train around fifteen agronomy engineers with a double competence in geomatics. QGIS is a central tool in our programme, on the one hand for the spatial information management capabilities it offers, and on the other because it is free and open source and can be reused by our future graduates whatever their professional context.
For 3 years we have organized an exchange event around QGIS where our students can meet professionals who use the software. The event also lets them put their training to use, since they present the new QGIS features to the professionals.
Montpellier SupAgro: www.supagro.fr
AgroTIC: www.agrotic.org

About OSGeo-fr

The OSGeo.fr association is the French-speaking chapter of the Open Source Geospatial Foundation, whose mission is to support and promote the collaborative development of open geospatial data and technologies. The association serves as a legal entity through which community members can contribute code, funding and other resources, with the assurance that their contributions will be maintained for the benefit of the public.

OSGeo.fr also serves as a reference and support organization for the free geospatial community, providing a common forum and shared infrastructure to improve collaboration between projects.

Participation is open to the whole open-source community. All of the association's work is published on public forums in which a free community of participants can get involved. The OSGeo Foundation's projects are all freely available and usable under an OSI-certified open-source license.


by yjacolin at October 06, 2017 11:16 AM

October 05, 2017

Andrea Antonello

JGrasstools back in black: The Horton Machine - Part 1

I have to admit that when, back in 2002, working at the University of Trento, Faculty of Environmental Engineering, we published the first release of The Horton Machine, I never thought we would ever change that name. The Horton Machine was a collection of around 40 GRASS modules written in C and dedicated to advanced hydrology and geomorphology. They represented the effort of the past 10 years (now around 20) of professor Riccardo Rigon and his team.

At that time Riccardo and I were dreaming about an easy-to-use GUI for GRASS that would allow GRASS to be used more widely outside the academic domain. In 2003 we started the JGrass project with that objective: to create a user-friendly GUI for GRASS. The reaction of the GRASS community was bad, mostly (so they said) because Java was not open source. QGIS was also coming along and becoming the natural choice as an interface to GRASS.

Not at all in the mood for religious wars, in 2006 we decided to join the Java tribe and moved our resources to support the uDig project, where we happily lived and developed for many, many years. We kind of stayed between two worlds, still using GRASS and its mapsets, but living in the user-friendly Java world. :-)

At that time the processing libraries for hydrology and geomorphology (as well as LiDAR and forestry later on) were extracted into a library that could be used standalone or inside uDig. That library, as a logical follow-up, got the name JGrasstools.

Had I only known better! On some (rare, but still!) occasions I and other JGrasstools developers have been asked why we still use the name JGrasstools, since we are not directly "bound" to GRASS. Well, I have fought over this a few times and had no hard feelings about it, apart from the huge amount of work it would have been to change everything.

The last time it happened was at the FOSS4G conference in Paris. At the end of a great presentation given by Silvia about the tools we developed for forestry management using LiDAR data, a member of the GRASS community (again) used the Q&A time dedicated to the presentation to ask the same old question: why do you still call it JGrasstools?

This was the final straw for me. I still have to understand why people do certain things, but one thing was sure: we had to change that name, to let some GRASS community members sleep sweet dreams, and to be finally free!

So this is it. After 15 years of continuous development on the JGrasstools core, we go back to our origins: The Horton Machine.

The Horton Machine is now more than just hydrology and geomorphology: there are projects that support interaction with the mobile digital field mapping app Geopaparazzi, a module for interacting with spatial databases (also on Android), the LESTO modules developed with the team of professor Giustino Tonon at the Free University of Bolzano and, among other things, the plugins for the desktop GIS gvSIG.

It has been quite an exercise to make this namespace migration, and it has taken me days of code refactoring, domain registration, maven publishing updates, documentation updating (still much work to be done there) and so on... but it is done now. Many will sleep sweet dreams, and I will be the first. And maybe at the next conference someone will ask a question related to the content of the presentation. Don't know what? A hint: "How did you get such a high single-tree extraction rate from LiDAR data with your tools?" ;-)

What will happen now?

Before the 13th International gvSIG Conference we will publish the first HortonMachine-branded release and, together with it, the matching release of the gvSIG plugins.

Maven releases of the modules will also be published. JGrasstools was at version 0.8.1; The HortonMachine will most probably start at 0.9.0. It sure should have been a major number, but well, we still need to reach the first major. :-)

In the next post we will show you what you will find in the release. Stay tuned!

by andrea antonello (noreply@blogger.com) at October 05, 2017 03:21 PM

gvSIG Team

GIS applied to Municipal Management: Module 4.1, 'Attribute tables (alphanumeric information)'

The first video of the fourth module is now available, in which we will see how to work with attribute tables. In this first video of the module we will see how to select features, both graphically and from the table, as well as by using different tools such as filters.

The previous module has all the information about how to install gvSIG, and the FAQ section of the second module explains how to ask any questions you may have during the course.

You can download the cartography used in this module from the following link.

The first video tutorial of this fourth module is the following:

Related posts:

Module 1: Differences between GIS and CAD

Module 2: Introduction to reference systems

Module 3: Views, layers, symbology, labelling

Filed under: gvSIG Desktop, spanish, training Tagged: ayuntamientos, filtros, gestión municipal, información alfanumérica, tablas de atributos

by Mario at October 05, 2017 07:08 AM

October 04, 2017


OSGeo.nl Day 2017 – Geo.Samen.Doen.

It has become a tradition that the annual OSGeo.nl Day takes place at GeoBuzz. In recent years our programme has consistently been the busiest and best-rated part of the event. This year will of course be great again! So put 22 November 2017 in your agenda! Under the motto "Samenwerking Versnelt" ("Collaboration Accelerates") we tie in with the GeoBuzz theme: "Geo Versnelt" ("Geo Accelerates").

What will we do? The programme is still in preparation, but the common thread will be collaboration. In broad terms you can expect the following:

The Morning – QGIS Morning!

The theme of the morning programme is collaboration and QGIS. In recent years QGIS has been the most visible and popular component the open-source geo world has produced. But what explains its growth and popularity? What makes QGIS "different"? How can QGIS be deployed effectively? We will show how the QGIS project itself fosters collaboration between users and developers, how "the market" responds with an offering of QGIS courses and support, and how QGIS users independently develop complex solutions for their organisations. The slogan: "Don't do it alone, seek collaboration". The speakers include the leading QGIS experts in the Netherlands. They will present examples and cases for inspiration and invite you to become part of the Dutch QGIS community in the run-up to the QGIS User Day on 31 January 2018.

The Afternoon – Users and Providers!

From a distance, open-source geo can look like an impenetrable tangle of projects and products. Yet a growing number of companies and organisations use it effectively. How do they do it? What is their secret? We could of course reveal that the answer lies in "collaboration", but that would be too noncommittal. The afternoon programme therefore focuses on users who have deployed open source effectively. They will explain which steps they took and which obstacles they encountered along the way. Providers of open-source geo solutions will also explain how they helped their customers.

So get your annual update at the OSGeo.nl Day. Come listen to and talk with users sharing their experiences and developers showing their projects. For more information, or if you would like to present something, see the website https://osgeo.nl or send an email to events@osgeo.nl.

by Just van den Broecke at October 04, 2017 09:53 PM

Fernando Quadro

Basic QGIS Course

In this basic QGIS course taught by Geocursos you will get an introduction to QGIS Desktop. It covers everything from starting a project, through the basic procedures for editing geographic data and creating thematic maps, to generating maps for printing.

The course is an excellent opportunity for those who want to get to know QGIS, its tools and its applicability in GIS projects. Although there are no strict prerequisites, familiarity with basic geotechnology concepts is recommended.

01. Introduction to QGIS
02. The software interface
03. Starting projects in QGIS
04. Selection tools
05. Attribute queries
06. Symbology and labelling
07. Creating thematic maps
08. Working with the attribute table
09. Editing attributes
10. Creating and configuring hyperlinks
11. Measuring areas and distances
12. Table joins
13. Creating a layer from coordinates
14. Extracting coordinates
15. Spatial operations
16. Integration with a spatial database
17. Generating maps for printing (layout)
18. Other relevant topics

For more information and to register, just visit the website:


Geocursos announces that registration is open, with limited places, for the online basic QGIS course, which will take place between 4 and 13 December on Mondays, Wednesdays and Fridays from 20:00 to 23:00 (Brasília time).

by Fernando Quadro at October 04, 2017 05:44 PM


Oslandia is baking some awesome QGIS 3 new features

QGIS 3.0 is now getting closer and closer, so it's the right moment to write about some major refactoring and new features we have been baking at Oslandia.

A quick word about the release calendar: you probably thought the QGIS 3 freeze was expected for the end of August, didn't you?

In fact, we have so many major changes in the queue that the steering committee (PSC), advised by the core developers, decided to push the release date back twice, up to the 27th of October. The release date has not been pushed back again (yet).

At Oslandia we got involved in a dark list of hidden QGIS 3 features.

They mostly aren't easy to advertise visually, but you'll appreciate them for sure!

  • Added capabilities to store data in the project:
    • a new .qgz zipped file format container
    • editable joins, with upsert capabilities (Insert Or Update)
    • transparently storing data in an SQLite database and keeping it in sync. Custom labeling is now pretty easy!
  • Coordinating work and tests on the new node tool for data editing
  • Improving Z/M handling in edit tools and layer creation dialogs
  • Ticket reviewing and cleaning
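The upsert (Insert Or Update) behaviour mentioned for editable joins can be sketched with Python's standard-library sqlite3 as a stand-in (in PostgreSQL the same idea is spelled `INSERT ... ON CONFLICT DO UPDATE`); the table and function names here are illustrative only:

```python
import sqlite3

# A sketch of the upsert behaviour that editable joins rely on: an edit to
# a joined attribute must UPDATE the join-table row when it exists and
# INSERT it when it does not.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE labels (feature_id INTEGER PRIMARY KEY, label TEXT)")

def upsert_label(feature_id, label):
    # INSERT OR REPLACE inserts a new row, or replaces the row that has
    # the same primary key -- an upsert in SQLite's dialect.
    cur.execute("INSERT OR REPLACE INTO labels VALUES (?, ?)",
                (feature_id, label))

upsert_label(1, "old name")
upsert_label(1, "new name")   # updates the existing row instead of failing
upsert_label(2, "other")
```

After the three calls the table holds two rows, and feature 1 carries the updated label.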

Next articles will describe some of those tasks soon.

This work was a great opportunity to ramp up a talented new developer with commit rights on the repository! Welcome and congratulations to Paul, our new core committer!

All this was possible with the support of many actors, but also thanks to funding from QGIS.org via grant applications and direct funding of QGIS Server!

A last word: please help us test QGIS 3. It's the perfect moment to stress it; the bugfix period is about to start!




by Régis Haubourg at October 04, 2017 03:35 PM

gvSIG Team

Free software, the circular economy and efficient resource management

"A resource-efficient Europe" is one of the seven flagship initiatives of the Europe 2020 strategy, which aims to generate smart, sustainable and inclusive growth. It is currently Europe's main strategy for generating growth and employment, with the backing of the European Parliament and the European Council.

The initiative aims to create a policy framework to support the shift towards a resource-efficient economy, with objectives such as:

  • Improving economic performance while reducing resource use;
  • Identifying and creating new opportunities for economic growth and boosting innovation and the EU's competitiveness;
  • Ensuring security of supply of essential resources;

For those of us who work on free projects such as gvSIG, the direct relationship between these objectives and the use (and re-use) of free software in public administrations is indisputable.

Free software means the re-use of resources; free software drives innovation and competitiveness; free software means secure access to supplies (technology, information systems); free software is, in short, a commitment to smart, sustainable and inclusive growth.

The initiative also introduces into EU policy the concept of the "circular economy", an economic concept tied to sustainability whose goal is to keep the value of products, materials and resources in the economy for as long as possible while minimising the generation of waste. The aim is to implement a new economy, circular rather than linear, based on the principle of "closing the life cycle" of products, services, waste, materials, water and energy.

I have rarely seen the concept of the circular economy associated directly with free software, as it tends to focus on "material resources" rather than "knowledge". I believe, however, that it fits perfectly, and that promoting the circular economy should not be decoupled from adopting the only software licenses that guarantee the objectives it pursues. May this post be our small contribution to that.

Filed under: opinion, software libre, spanish Tagged: economía circular

by Alvaro at October 04, 2017 02:06 PM

gvSIG Team

Towards gvSIG 2.4: Projects preview

Every new gvSIG Desktop version includes small improvements, some of which deserve a special mention because of their usefulness. That is the case for the project preview included in gvSIG 2.4.

It enables something as simple as seeing an image of a gvSIG Desktop project, which can be very helpful to identify the project we want to open. This image is updated each time we save changes to the project.

Here you can watch a video about how it works:

Filed under: english, gvSIG Desktop, testing Tagged: gvSIG 2.4

by Mario at October 04, 2017 01:14 PM


Refresh your maps FROM PostgreSQL!

Continuing our love story with PostgreSQL and QGIS, we submitted a grant application to QGIS.org in early spring 2017.

The idea was to take advantage of some very advanced PostgreSQL features that had probably never been used in a desktop GIS client before.

Today, let’s see what we can do with the PostgreSQL NOTIFY feature!

Ever dreamt of being able to trigger things from outside QGIS? Ever wanted a magic wand to trigger actions in some clients from a database event?



NOTIFY is a PostgreSQL-specific feature for generating notifications on a channel, optionally with a message (a payload, in PG's dialect).

In short, from within a transaction we can raise a signal in a PostgreSQL queue and listen to it from a client.

In action

We hardcoded a channel named "qgis" and made QGIS able to LISTEN for NOTIFY events and transform them into Qt signals. The signals are connected to a layer refresh when you switch on this rendering option.

Optionally, adding a message filter will redraw the layer only for specific events.
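The listen-and-filter loop described above can be sketched with psycopg2, assuming a connection in autocommit mode (`listen_for_refresh` and `matches_filter` are hypothetical helper names for this sketch, not QGIS API):

```python
import select

def matches_filter(payload, message_filter=None):
    """The optional message filter: with no filter every notification
    triggers a redraw; with one, only matching payloads do. (The exact
    matching rule in QGIS may differ; this is a sketch.)"""
    return message_filter is None or payload == message_filter

def listen_for_refresh(conn, message_filter=None, timeout=5.0):
    """Wait for NOTIFY events on the hardcoded 'qgis' channel.

    `conn` is an autocommit psycopg2 connection. Returns the payloads
    that passed the filter; QGIS itself turns these into layer redraws.
    """
    cur = conn.cursor()
    cur.execute("LISTEN qgis;")
    # A psycopg2 connection exposes fileno(), so select() can wait on it.
    if not select.select([conn], [], [], timeout)[0]:
        return []                 # timed out, nothing to redraw
    conn.poll()                   # fetch pending notifications
    hits = []
    while conn.notifies:
        note = conn.notifies.pop(0)
        if matches_filter(note.payload, message_filter):
            hits.append(note.payload)
    return hits
```

On the database side, any trigger or client can fire the event with `NOTIFY qgis, 'my payload';`.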

This mechanism is really versatile, and we can now imagine many possibilities, such as triggering a notification message to your users from the database, interacting with plugins, or even coding a chat between users of the same database (OK, this is silly)!


More than just refreshing layers?

The first implementation we chose was to trigger a layer refresh because we believe this is a good way for users to discover this new feature.

But QGIS rocks, hey: doing crazy things for limited uses is not the way.

Thanks to feedback on the Pull Request, we added the possibility to trigger layer actions on notification.

That should be pretty versatile since you can do almost anything with those actions now.


QGIS will open a permanent connection to PostgreSQL to watch for notify signals. Please keep that in mind if you have several clients and a limited number of connections.

Notify signals are only transmitted with the transaction, i.e. when the COMMIT is issued. So be aware that this might not help you while users are inside an edit session.

QGIS has a lot of different caches, for the attribute table for instance. We currently have no specific way to invalidate one of those caches and then order QGIS to refresh its attribute table.

There is no way in PG to list all channels of a database session, which is why we couldn't offer a combo box of available signals in the renderer option dialog. Anyway, to avoid too many issues, we decided to hardcode the channel name in QGIS as "qgis". If this is somehow not enough for your needs, please contact us!


The GitHub pull request is here: https://github.com/qgis/QGIS/pull/5179

We are convinced this will be really useful for real-time applications; let us know if it rings some bells on your side!

More to come soon, stay tuned!



by Régis Haubourg at October 04, 2017 01:09 PM


The Undo/Redo stack is back: QGIS transaction groups

Let’s keep looking at what we did in the QGIS.org grant application of early spring 2017.

At Oslandia, we use the transaction groups option of QGIS a lot. It was an experimental feature in QGIS 2.x that opens a single common Postgres transaction for all layers sharing the same connection string.

Transaction group option

When activated, that option will bring many killer features:

  • Users can switch all the layers into edit mode at once. A real time saver.
  • Every INSERT, UPDATE or DELETE is forwarded immediately to the database, which is nice for:
    • Evaluating on the fly whether database constraints are satisfied. Without transaction groups this is only done when saving the edits, and it can be frustrating to create dozens of features only to have one of them rejected because of a foreign key constraint…
    • Having triggers evaluated on the fly. QGIS is so powerful when dealing with “thick database” concepts that I would never go back to a pure GIS that ignores how powerful databases can be!
    • Playing with QgsTransaction.ExecuteSQL, which allows triggering stored procedures in PostgreSQL in a beautiful API-style interface. Something like
SELECT invert_pipe_direction('pipe1');
  • However, the implementation was flagged “experimental” because some caveats were still causing issues:
    • Committing on the fly broke the logic of the undo/redo stack, so there was no way to undo a local edit. No Ctrl+Z! The only way to roll back was to stop the edit session and lose all the work. Ouch… bad!
    • Playing with ExecuteSQL did not dirty the QGIS edit buffer. So if, during an edit session, no edit action was made using QGIS native tools, there was no clean way to activate the “save edits” icon.
    • On some failures in the triggers, QGIS could lose the DB connection and thus cause a silent ROLLBACK.

We decided to try to restore the undo/redo stack by recording the edit history as PostgreSQL SAVEPOINTs and seeing whether we could restore the original feature in QGIS.

And.. it worked!

Let’s see that in action:
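The savepoint trick can be sketched with Python's standard-library sqlite3, which, like PostgreSQL, supports SAVEPOINT and ROLLBACK TO (a stand-in sketch, not the actual QGIS implementation; table and helper names are illustrative):

```python
import sqlite3

# Each edit opens a savepoint inside the edit-session transaction;
# "undo" rolls back to the savepoint taken just before the last edit.
conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
cur = conn.cursor()
cur.execute("CREATE TABLE pipes (id INTEGER PRIMARY KEY, name TEXT)")

cur.execute("BEGIN")                  # the edit-session transaction
undo_stack = []

def do_edit(sql, params=()):
    sp = "sp_%d" % len(undo_stack)
    cur.execute("SAVEPOINT " + sp)    # remember the state before this edit
    cur.execute(sql, params)
    undo_stack.append(sp)

def undo():
    sp = undo_stack.pop()
    cur.execute("ROLLBACK TO " + sp)  # Ctrl+Z: revert the last edit only

do_edit("INSERT INTO pipes (name) VALUES (?)", ("pipe1",))
do_edit("INSERT INTO pipes (name) VALUES (?)", ("pipe2",))
undo()                                # pipe2 is gone, pipe1 survives
cur.execute("COMMIT")                 # only pipe1 is persisted
```

The key point is that ROLLBACK TO undoes work inside the open transaction without aborting it, which is what lets the edit session continue after an undo.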


Potential caveats?

At first we worried about how heavy all those savepoints would be for the database. It turns out that for really massive geometries and heavy editing sessions this could start to weigh a bit, but honestly it stays far within PostgreSQL's capabilities.


So far we haven't really found any issue with it.

And we didn't address the silent ROLLBACK that occurs sometimes, because it is generated by buggy stored procedures, which are easy to fix.

Some new ideas came to us while working in this area. For instance, if a transaction locks a feature, QGIS just… waits for the lock to be released. I think we should find a way to advertise those locks to the users; that would be great! If you're interested in making that happen, please contact us.


More to come soon, stay tuned!



by Régis Haubourg at October 04, 2017 01:08 PM

October 03, 2017

gvSIG Team

Towards gvSIG 2.4: Rossmo Algorithm for serial killer detection

We are showing you a very special improvement, because of its theme (criminology), which will be available in the next gvSIG Desktop version but which you can already test in gvSIG 2.3.1 (from the Add-ons Manager, plugins by URL). It is the implementation in gvSIG of the Rossmo mathematical model, made by Jazmín Palomares from the GITS team of the National Autonomous University of Mexico (UNAM).

Within environmental criminology, Kim Rossmo's mathematical model seeks to estimate, from the locations of a criminal's known crimes, the probability that each point on a map is the serial offender's usual base. The model has been successful in most of the cases in which it has been tested. However, it has not been evaluated on enough cases, partly because of the high cost of the software applications that implement it… something that is no longer a problem.
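For readers curious about the mathematics, the commonly published form of Rossmo's formula can be sketched in Python (the Manhattan metric, the parameter names `B`, `f`, `g`, `k` and their default values are assumptions taken from the published formula; the gvSIG plugin's exact implementation may differ):

```python
def rossmo_score(x, y, crimes, B=1, f=1.0, g=1.0, k=1.0):
    """Score one grid cell (x, y) against a list of crime sites.

    B is the buffer radius around the offender's base (crimes are assumed
    unlikely very close to home); f and g are decay exponents, k a constant.
    """
    score = 0.0
    for cx, cy in crimes:
        d = abs(x - cx) + abs(y - cy)        # Manhattan distance to the crime
        if d > B:
            score += k / d ** f              # distance decay outside the buffer
        else:
            # inside the buffer the score *rises* with distance from the base
            score += k * B ** (g - f) / (2 * B - d) ** g
    return score

crimes = [(0, 0), (4, 0), (0, 4), (4, 4)]
# Scoring every cell of a grid yields the probability surface the plugin
# renders; the highest-scoring cells suggest the offender's likely base.
surface = {(x, y): rossmo_score(x, y, crimes)
           for x in range(5) for y in range(5)}
```

Note the two branches meet continuously at d = B, so the surface has no artificial jump at the buffer edge.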

In addition to the plugin and its documentation, if you want to know more about this algorithm and its implementation in gvSIG Desktop, as well as its application to the case of one of the deadliest killers in history, you can read a full article in Mapping magazine (in Spanish).

We would like to thank Jazmín and the rest of the GITS team for sharing the results of their work.

Note: This algorithm is not installed by default, it must be installed through the Add-ons Manager.

Video (in Spanish) with a demo of the algorithm:

Filed under: english, gvSIG Desktop, technical collaborations, testing Tagged: criminology, gvSIG-2.4, Rossmo Algorithm, Serial killers

by Mario at October 03, 2017 01:50 PM

Marco Bernasocchi

PyQGIS course, 13.11./14.11.2017 in Neuchâtel

The course is aimed at advanced QGIS users who want to extend what they can do with QGIS through the use of Python. During this training we will cover different ways of interacting with the QGIS API as well as building simple graphical user interfaces.
See more ›

by Matthias Kuhn at October 03, 2017 11:53 AM

OSGeo News

OSGeo and the International Geographical Union (IGU) sign MoU

by jsanz at October 03, 2017 09:57 AM