Welcome to Planet OSGeo

December 13, 2019

The materials are now available for the course “Representación Cartográfica, como herramienta para la mejora del Sistema Canario de Seguridad y Emergencias” (Cartographic Representation as a Tool for Improving the Canary Islands Safety and Emergency System), promoted by the Dirección General de Seguridad y Emergencias of the Consejería de Administraciones Públicas, Justicia y Seguridad of the Government of the Canary Islands.

We recommend this course to anyone interested in applying Geographic Information Systems to areas such as safety, emergencies and civil protection. Prepared by security professionals, it includes practical exercises that help to show the importance of using tools like gvSIG in the field of civil protection.

Course presentation video:

The people responsible for the course are Gustavo Armas Gómez, Director General for Safety and Emergencies, and Juan José Pacheco Lara, Head of Studies and Research at the DGSE Training Unit. The course was developed by Gilberto Díaz Gil.

The course is eminently practical, and all the exercises were carried out with the free and open-source software gvSIG Desktop, specifically version 2.4, which you can download here.

The gvSIG Association would like to thank the Government of the Canary Islands for its willingness to publish and share these very interesting training materials.

Below are links to all the course units and video tutorials.

by Alvaro at December 13, 2019 08:54 AM

We link to the post reporting that the winning project of the 2019 gvSIG Batoví contest has been recognized at the INSPIRA awards. Our most sincere congratulations to the whole team!

The award-winning project used gvSIG to identify the most suitable location for building a secondary school (liceo de bachillerato) in the La Teja neighbourhood, and to survey the neighbourhood's socio-cultural services in order to connect the future school with local institutions.

Secondary school students using gvSIG to analyze and address needs in their own environment. For those of us driving the project, seeing achievements like these fills us with satisfaction. Because that is what it is all about: making technology available to everyone, owned by everyone.

via Estudiantes ganadores del concurso gvSIG Batoví 2019 reconocidos en los premios INSPIRA (Winning students of the 2019 gvSIG Batoví contest recognized at the INSPIRA awards)

by Alvaro at December 13, 2019 08:24 AM

December 12, 2019

December 11, 2019

The recording of the talk “Gestión de accidentes e integración con ARENA2 de la Dirección General de Tráfico en gvSIG Desktop” (Accident management and integration with the Directorate-General for Traffic's ARENA2 in gvSIG Desktop) is now available. It shows the development work done to load the files provided by Spain's DGT (ARENA2) into gvSIG Desktop, together with the tools for managing road accident information.

The developments range from generic improvements to gvSIG Desktop to new functionality directly related to the management of accident data.

Already in use at CEGESEV (the Road Management and Safety Centre of the Generalitat Valenciana), its development as free and open-source software should make it easy for other organizations that need to work with this kind of data to adopt it.

You can watch the talk here:

by Alvaro at December 11, 2019 01:57 PM

We bring you the recording of the talk presenting the road-management geoportal for the Dominican Republic, developed with gvSIG Online within the project “Apoyo en el Sistema de Gestión de Inventario de la Red Vial y Puentes de la República Dominicana” (Support for the Road Network and Bridge Inventory Management System of the Dominican Republic).

A very interesting talk that shows the impact small projects can have on a country's infrastructure management. Thanks to this project, for the first time there is a digital map of all the roads in the country.

by Alvaro at December 11, 2019 10:47 AM

December 10, 2019

After publishing a version with big changes we release, as usual, a minor update in which we put the finishing touches to the previous version, fixing the most relevant errors that have appeared and adding new functionality that brings considerable improvements without entailing major changes.

The new gvSIG Desktop version will be available soon (in fact, you can already download development builds) and, if everything goes according to plan, the release candidates for the final version will be published in January.

That is why we want to tell you about the improvements that gvSIG Desktop 2.5.1 will bring; we will give you more details about each of them soon.

Exporting virtual fields as values

One of the most important improvements in gvSIG 2.5 was the ability to work with virtual fields. We are now adding the option, offered during table/layer export, of writing virtual fields out as real fields in the resulting table. Since virtual fields are specific to gvSIG, this makes their values accessible from other applications.

PDF and ODS viewer in forms

Another important improvement in gvSIG 2.5 was support for forms. Thanks to this functionality we can do anything from simple queries on a table through a form to building what are practically data-maintenance applications, without writing a single line of code.

Related to forms, in gvSIG 2.5 a field of a table can hold a reference to an image, or the image itself stored as an attribute. When showing the form associated with that table, we can configure it to embed an image viewer and display the image there.

In the new gvSIG version we will add the option of doing something similar with PDF and ODS files.

Heat map comparison

As you may know, since gvSIG 2.4 you can create heat map legends. gvSIG 2.5.1 will bring a new legend type for comparing heat maps. It allows you, for example, to analyze how a phenomenon varies between different dates, or how two different variables behave on the same layer.

Extreme heat map

We will also add another new legend type related to the concept of heat maps. One problem heat maps present when analyzing certain variables or phenomena is that they represent the density of all the information in a layer, which, depending on the type of information, can make it hard to spot extreme values.

The new legend type available in gvSIG 2.5.1 will let you filter or single out the values to be represented above a given threshold.

Semi-automatic simple reports generation

Another of the most important new features of gvSIG Desktop 2.5 was report generation. Currently you can design reports with Jaspersoft Studio and associate them with tables in gvSIG. In many cases, however, what is needed are simple, quick reports that don't require designing templates first.

For this reason, new functionality allowing the user to generate simple, quick reports will be integrated into gvSIG.

by Mario at December 10, 2019 12:58 PM

gvSIG Online is today one of the reference solutions for managing the geographic information of any kind of organization. Unlike other products, gvSIG Online is a 100% free and open-source solution, with the professional support and backing of the gvSIG Association. The purpose of this post is not to review the many gvSIG Online deployments across all kinds of sectors and regions. What we bring you is a presentation of a new deployment model for gvSIG Online, which we have called gvSIG Appliance.

The already familiar deployment models, SaaS (as a service) and on-premise (on the customer's servers), did not cover the needs of certain users who, for security reasons, wanted to run gvSIG Online completely isolated from the network and integrated with surveillance applications and command-and-control centres.

gvSIG Appliance thus makes it possible to run gvSIG Online as an appliance inside a server, integrated with command-and-control centres such as GENETEC, a leading product in the security world.

You can watch a presentation in which Ramón Sánchez, of San2 Innovacion Sostenible, clearly explains the features of gvSIG Appliance along with some deployment examples, from the Ceuta Smart City to Iberdrola.

by Alvaro at December 10, 2019 10:30 AM


December 09, 2019

Total Open Station 0.5 is here!

This release is the result of a short and intense development cycle.

The application is now based on Python 3, which means an improved handling of data transfers and a general improvement of the underlying source code.

An extensive test suite based on pytest was added to help developers work with more confidence and the documentation was reorganized to be more readable.

There are only minor changes for users but this release includes a large number of bugfixes and improvements in the processing of data formats like Leica GSI, Carlson RW5 and Nikon RAW.

The command line program totalopenstation-cli-parser has four new options:

  • --2d will drop Z coordinates so the resulting output only contains X and Y coordinates
  • --raw will include all available data in the CSV output for further processing
  • --log and --logtofile allow the logging of application output for debugging

If you were using a previous version of the program you can:

  • wait for your Linux distribution to upgrade
  • install with pip install --upgrade totalopenstation if you know your way around the command line on Linux or MacOS
  • download the Windows portable app from the release page: this release is the first to support the Windows portable app from the start. For the moment it supports 64-bit operating systems, but we are working to add a version for older 32-bit systems.

But there’s more. This release marks a renewed development process and the full onboarding of @psolyca in the team. With the 0.6 release we are planning to move the repository from the personal “steko” account to an organization account and improve the contribution guidelines, so that the future of Total Open Station is not dependent on a single person. Of course we already have great plans for new features, as always listed on our issue tracker.

If you use Total Open Station please let us know and maybe give us a star ★ on GitHub.

by Stefano Costa at December 09, 2019 03:01 PM

The recordings are now available of the presentations from one of the most interesting sessions of the recent International gvSIG Conference: the one on municipal management. This session featured talks with very different perspectives on the impact, methodologies and benefits of deploying a technological solution to manage geographic information in a town council. These are 100% free and open-source solutions that the gvSIG Association is rolling out in more and more local administrations.

A first, general talk explains the needs, benefits and problems a town council must address to successfully launch a Spatial Data Infrastructure (SDI). It walks through the various municipal services/departments and their relationship with this kind of solution.

Needs and benefits of deploying an SDI at the municipal level

A second talk shows the process followed to deploy the SDI of a municipality of about 25,000 inhabitants. A very interesting case, because it had “endured” two previous attempts that ended with little success. This time, with the technology of the gvSIG Suite, a solution has been put in place that is having a positive impact across the whole town council.

SDI at the Onda Town Council

And what about small town councils? Can they deploy a solution similar to that of the big ones? How do they benefit? An excellent presentation showing how, with free and open-source software, an SDI platform that improves the municipalities' daily management has been deployed in La Manchuela Conquense.

AytoSIG. Spatial Data Infrastructures in small town councils

Finally, the Bétera Town Council shows us the advantages the gvSIG Suite has brought to the management of municipal infrastructure.

Infrastructure management at the Bétera Town Council

by Alvaro at December 09, 2019 11:52 AM

December 07, 2019

Over the last years, many data analysis platforms have added spatial support to their portfolio. Just two days ago, Databricks have published an extensive post on spatial analysis. I took their post as a sign that it is time to look into how PySpark and GeoPandas can work together to achieve scalable spatial analysis workflows.

If you sign up for Databricks Community Edition, you get access to a toy cluster for experimenting with (Py)Spark. This considerably lowers the entry barrier to Spark since you don’t need to bother with installing anything yourself. They also provide a notebook environment:

I’ve followed the official Databricks GeoPandas example notebook but expanded it to read from a real geodata format (GeoPackage) rather than from CSV.

I’m using test data from the MovingPandas repository: demodata_geolife.gpkg contains a handful of trajectories from the Geolife dataset. demodata_grid.gpkg contains a simple 3×4 grid that covers the same geographic extent as the Geolife sample:

Once the files are downloaded, we can use GeoPandas to read the GeoPackages:

Note that the display() function is used to show the plot.

The same applies to the grid data:

When the GeoDataFrames are ready, we can start using them in PySpark. To do so, we first need to convert each GeoDataFrame into a PySpark DataFrame. I’ve therefore implemented a simple function that performs the conversion, turning the Point geometries into lon and lat columns:

To compute new values for our DataFrame, we can use existing or user-defined functions (UDFs). Here’s a simple hello world function and its associated UDF:
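As an illustration, such a hello-world UDF can be as simple as the following (my own sketch, not the notebook’s exact code; the pyspark import is guarded so the core function runs even without Spark installed):

```python
def hello(name):
    # Plain Python function: this is all the logic the UDF wraps.
    return "Hello, " + name + "!"

try:
    from pyspark.sql.functions import udf
    from pyspark.sql.types import StringType

    # Register it as a UDF usable in DataFrame expressions, e.g.:
    # df.withColumn("greeting", hello_udf(df.name))
    hello_udf = udf(hello, StringType())
except ImportError:
    hello_udf = None  # Spark not installed; the plain function still works
```

Note that the return type (`StringType` here) must be declared explicitly when registering the UDF.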

A spatial UDF is a little more involved. For example, here’s a UDF that finds the first polygon that intersects the specified lat/lon and returns that polygon’s ID. Note how we first broadcast the grid DataFrame to ensure that it is available on all computation nodes:
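Stripped of the Spark plumbing, the geometric core of such a UDF is a first-match point-in-polygon lookup. Here is a stdlib-only sketch for axis-aligned grid cells (my own simplification of the notebook’s shapely-based code); inside Spark it would be wrapped with `udf(..., IntegerType())` and read the cells from the broadcast variable:

```python
def find_intersection(lon, lat, cells):
    """Return the ID of the first grid cell whose bounding box
    contains the point, or None if no cell matches.

    `cells` is a list of (cell_id, (minx, miny, maxx, maxy)) tuples,
    standing in for the broadcast grid DataFrame."""
    for cell_id, (minx, miny, maxx, maxy) in cells:
        if minx <= lon <= maxx and miny <= lat <= maxy:
            return cell_id
    return None
```

For a real grid of arbitrary polygons, the containment test would use shapely’s `polygon.intersects(point)` instead of the bounding-box comparison.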

It’s worth noting that PySpark has its peculiarities. Since it’s a Python wrapper around a strongly typed language, we need to pay close attention to types in our Python code. For example, when defining UDFs, if the specified return type (IntegerType in the above example) does not match the actual value returned by find_intersection(), this causes rather cryptic errors.

To plot the results, I’m converting the joined PySpark DataFrame back to GeoDataFrame:

I’ve published this notebook so you can give it a try. (Any notebook published on Databricks is supposed to stay online for six months, so if you’re trying to access it after June 2020, this link may be broken.)

by underdark at December 07, 2019 02:10 PM

This post is a high-level look at the stack I recently built for a raster tiling setup. I am working out some kinks in my online and network delivery of cartographic products, so I thought it was time to set up a raster tiling service to access XYZ and WMTS services from my raster tile caches. I’ll be adding maps and zoom levels in the future, so check back now and again. Antarctica is on its way soon!

nz_from_basemap_service

Basic Demo Service using NZTM projection is here: https://xycarto.github.io/

See below for WMTS links

Raster tiling is not the only method, but it is still a viable choice for delivering nice-looking maps online, serving across networks, and designing with raster data. I am particularly enamoured with the quality of the visual outputs; for me, it is akin to the difference between music on vinyl and in digital formats. In addition, the process is well documented and fairly straightforward. Having been around for a while, raster tiling offers a wealth of information and standards to work with, delivery from S3 is a robust process, and there is nice integration with QGIS, Leaflet and OpenLayers.

I break the stack into three areas: analysis, rendering, and delivery.

Analysis
QGIS: Sketching, QC, and general geospatial work.

GDAL: Processing raster data. Configuring your rasters in an optimal format from the beginning will greatly improve your rendering speeds. I recommend creating a good set of overviews and gathering everything into a virtual raster (VRT).

Postgres/PostGIS: Handling your vector data. Pulling all your data from a database significantly improves rendering speeds. Don’t forget to index!

Rendering
Tilemill/Mapnik XML: Yes, I still design using CartoCSS when working with raster data. I love the simplicity of the language. Tilemill is easy enough to containerize these days, too. Tilemill exports to the Mapnik XML format, which is essential for my process further down the line. Here is how to hack Tilemill to work in a custom projection.

Mapnik: Support for using Mapnik XML

Mapnik with Python Bindings: Necessary for using Mapnik XML documents in MapProxy

MapProxy: MapProxy is a map server and tile renderer. It is easy to build on your machine, though I recommend using a container like Docker. Specifically, I use a hack provided by PalmerJ on GitHub to increase rendering speeds through multi-threading.
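To give an idea of what this looks like in practice, here is a skeletal MapProxy configuration wiring a Mapnik XML source into a file-backed tile cache on a custom grid. All names, paths and the grid extent are placeholders of my own; check the MapProxy documentation for the exact keys your setup needs:

```yaml
services:
  wmts: {}

sources:
  topo_mapnik:
    type: mapnik
    mapfile: /maps/nz_topo.xml        # Mapnik XML exported from Tilemill

grids:
  nztm_grid:
    srs: 'EPSG:2193'                  # NZTM2000
    bbox: [274000, 3087000, 3327000, 7173000]   # placeholder extent
    origin: nw

caches:
  topo_cache:
    grids: [nztm_grid]
    sources: [topo_mapnik]
    cache:
      type: file

layers:
  - name: nz_topo
    title: NZ Topo Basemap
    sources: [topo_cache]
```

The seeded file cache is then simply synced up to S3 for delivery.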

Delivery
Amazon S3: Simple Storage Service. Amazon is pretty cheap, free in many cases, and a good place for storing your tile cache. You get an easily accessed URL for your tiles and a home for your WMTS GetCapabilities document.

WMTS: For me, the real power of a basemap service is WMTS, so below are two links to the WMTS services for you to set up in QGIS if you’d like to have a play. Here is a quick tutorial on how to set up WMTS if you are unfamiliar.

https://s3-ap-southeast-2.amazonaws.com/basemaps.temp/nz_colour_basemap/WMTSCapabilities.nz_colour_basemap.xml
https://s3-ap-southeast-2.amazonaws.com/basemaps.temp/nz_topo_basemap/WMTSCapabilities.nz_topo_basemap.xml

XYZ: Building a web map? If your tile cache is in S3, in a TMS structure, and public, you should be able to access it via a simple XYZ request like so:

https://{s3-your-region-here}/{your_bucket}/{project_name}/{projection}/{z}/{x}/{y}.png
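For reference, in the standard Web Mercator XYZ scheme the {z}/{x}/{y} indices for a given lon/lat come from a small bit of math. (The basemaps above use a custom NZTM grid, so treat this only as an illustration of the general idea.)

```python
import math

def deg2tile(lon, lat, zoom):
    """Convert WGS84 lon/lat to XYZ tile indices in the standard
    Web Mercator (OSM/Google) tiling scheme."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return x, y

# A tile URL is then built as .../{zoom}/{x}/{y}.png
```

TMS differs only in that its y axis runs south-to-north, i.e. y_tms = 2**zoom - 1 - y.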

Leaflet: Leaflet will handle all the XYZ requests to the server and allows for custom projections. Have a look here for a basic HTML, CSS and JS setup.

by xycarto at December 07, 2019 12:46 AM

December 06, 2019


The QGIS documentation team is struggling and needs help. This has been known for a while. The much harder question is “How do we help a mostly volunteer community?”.

Reading time: 25 minutes

Summary

Many have tried to help QGIS docs, with limited success. I’ve collated insightful quotes from a bunch of their stories and then postulate solutions. Surprisingly, the biggest problem isn’t a lack of tech writers or complicated tools (although they are factors).

Problems centre around:
  • Poorly capturing community good-will and offers of assistance;
  • A lack of direction;
  • Struggling to keep up with a rapidly evolving software baseline;
  • Insufficient writing expertise;
  • A high technical barrier to entry;
  • Documentation and training being generated outside of the core project;
  • Awkward documentation tools and processes.
This leads to an immediate case to:
  • Define and evangelise a vision and roadmap.
  • Prioritise funding and lobby sponsors to resource the vision.
  • Implement an information architecture review.
  • Sustain a community evangelist/coordinator to attract and nurture a broader doc community.
  • Sustain a trained technical writer to amplify the quality and effectiveness of the community.
  • Attract external docs back into the core.
Medium-term:
  • Ask the greater open-source community to address the usability of documentation tools and reduce the technical barrier to entry. Adopt improvements as they are developed.
  • Align with the best practices evolving within TheGoodDocsProject.
While acknowledging the great work done to date, I feel the QGIS docs team has insufficient capacity and access to skills to drive this agenda. Targeted and sustained investment should be applied to bring the quality of QGIS docs up to the quality of the software.

Observations

The challenge

As one of OSGeo’s Season of Docs administrators, I’ve been observing the QGIS documentation community for months. The Open Source Geospatial Foundation (OSGeo) was allocated two tech writers and we probably should have allocated one to QGIS. However, I recommended they work on GeoNetwork and OSGeoLive instead. I was concerned by:
  • How daunting QGIS doc challenges were,
  • A lack of clear direction within the QGIS docs project,
  • The brief three-month window for Season-of-Docs, and
  • The high risk that the writers’ efforts might not achieve tangible outcomes.
I noted:
The big challenge for QGIS is aggregating external content into the core docs from lots of satellite communities. It would be a huge win to get it done, but also very risky as it requires coordination and collaboration from so many external volunteers.
Harrissou added:
It's unfortunate to not assign a senior writer to QGIS. I was personally envisioning [Season of Docs] as a catalyzer, an opportunity to trigger mobilisation of the writing community, and to teach us actual and best practices. And maybe that experience would confirm to us that we need the profile [of person] you propose later.
So what is lacking, and what can be improved?

Kudos to the volunteers

Firstly, I’d like to acknowledge the value provided by QGIS documentation volunteers and help they provide to newbies who reach out. QGIS has a solid baseline of docs and dedicated but under-resourced volunteers. They face a difficult job keeping up with the more active, much larger, and better-resourced developer community. I don’t think external people appreciate the difficulty of the documentation challenge.

Season of Docs

Before Season-of-Docs’ writing period officially started (September 2019) we’d already attracted plenty of latent interest:
  • A spin-off GeoNetwork documentation group of 4+ volunteers was meeting fortnightly. (Swapnil, a senior tech writer supports this team, as part of Season-of-Docs.)
  • A spin-off GoodDocsProject, with 5+ senior tech writers, are creating best-practice templates and writing instructions for documenting open-source software.
  • QGIS and GeoNetwork quickstarts were updated to the latest 13.0 OSGeoLive release. (Felicity is updating 50+ quickstarts for Season-of-Docs.)
Over 20 people volunteered to help out with OSGeo’s Season-of-Docs. 10+ of these people were interested in QGIS - more than for any of the other OSGeo projects. However, we’ve had lack-lustre success at capturing this initial enthusiasm. Why? I collate quotes and observations below.

Piers, small company, creating training material

Piers Higgs is CEO of Gaia Resources, a small environmental consulting company. He and his team have developed QGIS training material which they publish for free as videos, a manual and a data package. Piers notes:
  • The thing I find strange is how many people are using our course now - there are people from all around the world now. Most of them aren't actually enviro's either - they are just people wanting any sort of resource to help them get into QGIS.
Piers articulately outlined how he’d love to share his material and continue to maintain it, noting also that he is time-poor. This is a hugely valuable offer, but there wasn’t someone from the community ready to catch this offer and work with him to the extent required.
Alexandre Neto noted:
  • Because we don't have many writers (we have two very active people), it's quite hard to allocate [time for] that “kind of merge” into what we already have. It looks like no one has interest in it, but it's not really the case. What we would prefer is to see companies create new sections, improve and reuse what we have in the training manual.
Like Piers, many of the people who volunteered to help with Season-of-Docs are similarly from small consulting companies in similar situations. I see this as untapped potential. Piers commented further:
  • Yep, but how to tap this potential is pretty hard. Unless you have a TARDIS, or a cloning machine?
  • This pretty much says why I can't get into this. I don't have the bandwidth and much of my drive is taken up running the business. The personality - well, Cameron, you have spades of that ;)
  • I did find the whole thing really hard to actually understand what was needed and who was doing what. I guess being "outside the camp" for most of the QGIS stuff these days has made me realise how hard it is to find my way back in again.
  • So it's one thing to have a bunch of time-poor people who are interested, let's assume we don't have a TARDIS or cloning machine to fix that. I did find that trying to work out what was going on and what Season-of-Docs is, who's doing what, etc was just too big a beast to deal with. It was an effective barrier to entry for someone new, and it's one of the reasons Cameron found it so hard to engage me - I had to keep asking him for clarifications on what all the lists are, where the documents are, who's who in the zoo, etc. It's just a little bit... chaotic. I will readily admit I lean towards OCD tendencies, but being time poor, time spent trying to understand what is going on is an effective barrier to entry. It became "too hard" very quickly.
  • [Capturing offers of assistance and supporting and encouraging new volunteers] are things as a community we do pretty badly.
  • My interactions with the main QGIS developers etc hasn't been very frequent, but it's been reasonable.
  • ... So I think remembering everyone is a volunteer and will have different motivations is really important. I used to run a volunteer GIS group and keeping up ways in which time poor people can be involved is key - e.g. writing a chapter is a big ask, but editing or testing it might be smaller and easier for time poor people. Food for thought.

Andrew, power user, starting to help with docs

Andrew Jeffrey is a power QGIS user, and a bit more. He is not a programmer and gives back to QGIS through docs, coordinating a regional user group and QGIS events. (Other potential volunteers have a similar profile.) Andrew painted a practical vision of how QGIS docs can be improved, proceeded to write a getting-started guide for the new users he’d been helping, and followed up with a QGIS quickstart for the OSGeoLive project. Andrew is the sort of person you’d want to encourage and support.
Andrew’s comments are revealing:
  • I feel this review [of QGIS docs] was started with the meetings you coordinated at the start of the Season-of-Docs process Cameron and then lost momentum because no one took the lead when you started to focus on other things. I did try to rally people for the OSGeolive quickstart amendments but quickly lost interest in continually asking for input with no response.
  • I haven't given a whole lot - but would like to do more. Things that stop me: What’s a priority? Docs, training material, screenshots? It would be helpful if a more senior doc mentor was able to say “this is the low hanging fruit” “that is a great way to get started”.
  • The help I have received from QGIS doc folks has been good and available when I ask for it. The support in terms of sharing contributions via participants in the Season-of-Docs has been sporadic, but I understand everyone has time constraints and other commitments. Also even before tech writers were assigned to projects I was asking the list for feedback on documentation and received nothing. So my enthusiasm for the Season-of-Docs has dropped off because I didn't feel like I was getting as much out as I was putting in.
So while Andrew had some support, more support would likely help him feel more welcomed and would empower him to increase his productivity. Again, I think Andrew’s anecdotes hint at lost opportunities we don’t hear about.

Jared, a tech writer

Jared Morgan is a senior tech writer, curious about open source, who volunteered to help out. He started reviewing QGIS docs and received feedback from core contributors (Harrissou and Matteo). Alexandre noted that he had missed seeing Jared’s feedback. Unfortunately, this initiative doesn’t appear to have been sustained; there hasn’t been sufficient bandwidth to nurture and sustain the goodwill.

Charlie, university courses

There were a number of offers from GeoForAll university members suggesting that their tertiary training material be used. For instance, we could update the comprehensive GeoAcademy courses, which are still based on the old QGIS 2.8 version. Unfortunately, the initial enthusiasm didn't translate into tangible action. From my perspective, there appeared to be a very high barrier to entry. How can we help all these disparate organisations and fragmented initiatives collaborate on a common base of material which is brought into the core QGIS docs? How can they become less brittle, so that material continues to be updated when program funding finishes?
Professor Charlie Schweik suggested developing training material and textbooks in conjunction with universities, possibly making use of OpenStax. I'd suggest aligning this with maintaining core QGIS material, rather than creating a parallel initiative, so that the common material can be retasked for various educational courses.

Andreas, cataloging doc team challenges

The QGIS docs team discussed many of the challenges they are facing. Andreas Neumann summarises many of these:
  • I agree that the documentation task seems to be overwhelming and might also be daunting for newcomers, volunteers and even paid people. I also agree that the team is under-resourced. … We already knew this. … it would be encouraging to hear more suggestions for how to improve the situation.
  • Should the team focus on smaller chunks/goals in order to have better progress and a better success feeling?
  • Are the tools too complicated?
  • Is there not enough information provided by developers or organizations who fund new features?
  • Another observation I have is that there is an awful lot of documentation about QGIS out there on the web, spread into many personal blog websites, company blog posts and news sites, youtube movies, social media posts, etc. etc. However, all of this vast and de-centralized information doesn't end up in our central documentation.

Anita, Nathan, tools and process limitations

Working out how to bring the world’s QGIS documentation back into the core looks to be a core challenge for QGIS. Anita Graser’s response provides insights:
  • I tried [putting my doc updates into the core] but something is keeping me from doing it regularly. Thinking about it, reasons for me include:
  • It's not always possible to simply copy a blog post to the documentation. The expected style (as in wording) of the text is different. The text should fit into the bigger picture. This often means a significant rewrite.
  • Maybe just me but: I'm always uncertain of how to add figures and links correctly so that they are not broken in the built documentation.
  • Lack of immediate feedback: When I post on a blog, the content is immediately online and - as feedback comes in - it's possible to make adjustments quickly. The above Pull Request was open for a month. (There were a lot of good discussions going on but it might feel more motivating to publish more quickly and improve incrementally).

    So the last two points come down to the process we currently have in place. Coming from a platform (Wordpress) where I can immediately see and verify the final results, the qgis.org documentation system makes me feel less certain about the quality of my edits and it takes much longer until corrections are visible online. (I know that I could build the documentation locally on my machine. I've tried with Richard's help in the past and failed to set it up on my machine.)
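Anita's uncertainty about figures and links breaking in the built documentation suggests a cheap pre-build check. Below is a minimal, hypothetical sketch (not part of any existing QGIS tooling; the directive handling is deliberately simplistic) of verifying that image paths referenced by reStructuredText directives actually exist:

```python
# Hypothetical sketch: flag figure/image paths referenced in an .rst file
# that do not exist on disk, so broken figures are caught without running
# a full Sphinx build. Only plain ".. figure::" / ".. image::" directives
# are handled; Sphinx search-path rules and substitutions are ignored.
import re
from pathlib import Path

FIGURE_RE = re.compile(r"^\.\.\s+(?:figure|image)::\s+(\S+)", re.MULTILINE)

def missing_figures(rst_text, base_dir):
    """Return referenced image paths that don't exist relative to base_dir."""
    return [path for path in FIGURE_RE.findall(rst_text)
            if not (Path(base_dir) / path).exists()]
```

Sphinx itself warns about missing images during a build; the point of a standalone check like this would be faster feedback for contributors who, like Anita, cannot easily run the full local build.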
Nathan Woodrow, one of the core QGIS techies noted:
I personally find some of the technical issues quite a blocker for people to help. It's what stops me most of the time, and I'm comfortable with the tools; last time I tried on Windows I just gave up because it was too much work and I only have limited time these days. I'm not sure what the solution here is, but I don't know if moving to something like GitHub Markdown or Google Docs is the option, mainly because it throws away a lot of the work we have already done. Having said that, this is a pain point that might help address some of the community involvement if we can solve it. It's not the only problem though; like Alexandre said, it's just not fun work at times, and it can be hard to write good docs even when that is your job and you have a good platform to do it in.
Tim Sutton, one of the QGIS founders, reported:
Our main discussion points in the [QGIS Project Steering Committee] meeting were:

  1. Current documentation approach is unsustainable (a few hardcore enthusiasts but not enough to cope with the rapid pace of development)
  2. Inviting contributors needs to be substantially easier - I’m talking at the level of editing a google doc or word doc here. At the very least a GitHub markdown page that is instantly published as soon as you edit it.
  3. Cameron has had a chat with me about employing technical writers to make the documents more cohesive - I think this is a good initiative but Cameron I think we need to get the fundamental issue of the editing platform sorted out first
  4. Translations severely hamper our ability to switch to a more agile system (e.g. GitHub markdown based wiki or Google doc) - in the PSC call we want to surface the idea of doing away with translations and leave translation initiatives to outside communities (e.g. local user groups). PostgreSQL etc don't have the overhead of this.
  5. Our documentation could be easier if the format was more structured - think something like editing a changelog entry here. Again we looked at the PostGIS / PostgreSQL examples here which have a very standardised format.
There were some sentiments in the PSC call to drop the documentation effort completely and leave it to all the various community members to deal with, but I think maybe my 1-5 points above make a better compromise of reduced overhead, more accessible platform for writing while still having docs in English at least. 

Stepping back from specific comments about tools and processes, I’m seeing a high effort-to-reward ratio for the external documentation community. Options to address this include:
  1. QGIS docs core team to absorb the effort, either through funding or inspiring volunteers.
  2. Help contributors get their content back into the core, likely with hand-holding, or possibly out-sourcing paid work to them.
  3. Improve the efficiency of tools or improve our explanation of tools. While improved tools will be helpful, I think it is a generic problem faced by the whole open-source community. As such, I feel QGIS should reach out to the greater community to help solve it.
While there is acknowledgement that docs need improving, I feel there is a general under-appreciation of the effort required to:
  1. Merge disparate documentation.
  2. Increase doc quality from “verbose and okay” to “intuitive, obvious and concise”.
We need to start by articulating the problem and how we propose to solve it, which should help find both sponsors and volunteers.

QGIS Community questionnaire about docs

824 people answered a questionnaire about how they learn QGIS. Anita Graser compiled the results into the following tables.

Suggestions for changing doc writing, based on survey insights, varied from "do more" to "do less".
Tom Chadwin summarised the results as:
Questions 1-4 (quantitative)
  • 70% of respondents choose Googling/StackExchange first, which dwarfs the < 20% choosing official docs
  • The fact that Googling came top of search methods emphasizes that we need to pay attention to SEO in the official docs
  • Fewer than 50% find their answers in the official docs “often” or “always”, 45% answering “sometimes”
  • Official docs are underused – over 50% only consult monthly or less frequently (including “never”)
Question 5 (qualitative)
  • Many find the official docs too abstract, and would prefer examples worked in
  • Perhaps unsurprisingly, there is a lot of enthusiasm for video tutorials
  • There is some criticism of the confusion of QGIS versions in the documentation, especially when deep-linked from a Google result
Paolo Cavallini argued:
To me this confirms my opinion: our manuals are of limited relevance to the community of users. IMHO we have two options here:
  1. Re-haul the whole documentation so as to make it the real reference.
  2. Shrink it down to the bare minimum (mostly a list of the commands and functions available), leaving the fancy documentation out in the Wild World of the Internet.
Quite frankly (sorry, no offense meant for the huge and excellent work done until now), I do not see a realistic way of implementing (and, more importantly, keeping always up to date) the first option, so I tend to prefer the second one.
Summarising Alexandre Neto's longer analysis:
  • It's clear that people search for help a lot.
  • In terms of QGIS Docs quality, ... [it] seems that definitely needs improving.
  • As a documentation person myself, I naturally have to disagree with the idea that the project should ... simply resign to a shrunk version of the documentation and let the outside world provide the fancy answers to the users. ... IMHO, Good, precise, and updated documentation leads to more adoption and better user experience.
  • An interesting fact: in two weeks open to answers, ... this questionnaire gathered more than 800 responses! To me, this alone says a bit about the importance that documentation has for our users.

Documentation best practices

I'm concerned people are searching for one approach to documentation when there should be multiple. In a highly regarded article in Tech Writing circles, Daniele Procida argues:
There is a secret that needs to be understood in order to write good software documentation: there isn’t one thing called documentation, there are four.
They are:

  1. Tutorials, 
  2. How-to guides, 
  3. Explanations, and 
  4. Technical references. 
They represent four different purposes or functions, and require four different approaches to their creation. Understanding the implications of this will help improve most software documentation - often immensely. ...
I expand on this in Inspiring techies to become great writers.
The audiences for different doc types have different needs:
  • API references need to be accurate, unambiguous and up to date. Polish is a nice-to-have.
  • Community forums are great for niche topics. Incorrect or dated information is tolerated.
  • Quickstarts need to be accurate and polished, but need not reference the latest bleeding-edge release. Aligning with the Long Term Release is acceptable.
This approach should be defined in an information architecture and an implementation strategy (which is yet to be created for QGIS). These should take inspiration from TheGoodDocsProject, an emerging community of technical writers building “best-practice templates and writing instructions for documenting open-source software.”
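To make those expectations concrete, here is an illustrative sketch (the doc-type names and the exact quality bars are assumptions for the example, not QGIS policy) of encoding them as data that could drive a docs-review checklist:

```python
# Illustrative only: the per-audience quality expectations discussed above,
# encoded as data. Names and bars are assumptions, not an agreed standard.
QUALITY_BARS = {
    "api_reference":   {"accurate": True,  "up_to_date": True,  "polished": False},
    "community_forum": {"accurate": False, "up_to_date": False, "polished": False},
    "quickstart":      {"accurate": True,  "up_to_date": False, "polished": True},
}

def review_checklist(doc_type):
    """List the quality bars a given doc type must meet before publishing."""
    return [bar for bar, required in QUALITY_BARS[doc_type].items() if required]
```

For example, `review_checklist("quickstart")` returns `["accurate", "polished"]`: a quickstart reviewer checks accuracy and polish, but is free to target the Long Term Release rather than the latest version.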

Matteo, what to focus on

Matteo, from the core QGIS documentation team, who volunteered to mentor QGIS Season of Docs writers, suggested:
  • I really think that currently, we need to define precise roles (issue manager, reviewer, English reviewer, etc). IMHO the growing complexity of the last years made it difficult for us to convince other people to contribute (at least, I'm not able to convince people during training and other activities)
  • +1 for the "community" evangelist (could be another role of above)
  • -1 to change the framework (even if complex, it is too important)
  • -1 to create other dedicated repositories with additional training material: just add another chapter to the existing manual

    Summary: without boring everyone that already knows the current situation, I really think we have to set up a clear workflow (for us [the QGIS documentation team] and newcomers) or else we will lose volunteers and other people that want to contribute to the project.

Andreas, Paolo and Tim, considering funding

Andreas Neumann, Paolo Cavallini and Tim Sutton weighed in on funding tech writers:
  • Andreas: It is not primarily a problem of finding financial resources. Every year we assigned funds for documentation, and in most years those funds haven't been used. Even if we made more funds available to the team, I feel this wouldn't solve the problems the team is facing.
  • Paolo: While I agree that we should keep on using our funds, and additional resources, as an incentive for documenters, I think this should be done with a clear plan in mind. If not done carefully, this move could discourage volunteers, not only in this area ("why should I volunteer, when someone else is doing the same thing and is paid for it?"). Volunteer communities are hard to build, and easy to destroy. Replacing volunteers with employees can quickly become very expensive, and we should be sure we'll be able to raise enough money, both in the short and the long term, to fully support the effort. I suggest working out a budget for this, to check how feasible this solution can be, before taking further steps.
  • Tim: I have a different opinion on this. Based on our experience of paying developers, I don't think it has in any way reduced the volunteer contributions to the code base; on the contrary, it has probably incentivised those we paid to donate lots more of their time. I am pretty sure that we will have a similar experience in other areas of the project. I am more bullish on documentation and think that we should work enthusiastically to get one or more dedicated, full-time document writers in the QGIS project. Over and over we hear it is the most wanting part of the project.

2019 QGIS Budgeted expenses

Highlights for me after discussing the 2019 QGIS budget with Tim Sutton were:
  • The QGIS team runs on an incredibly small and efficient budget. Strategic investment from external stakeholders should yield a significant return on investment.
  • Programmers' daily rates were higher than tech writers'. This concerns me, as I question whether the tech writers employed are suitably experienced. Typically, a good tech writer is a programmer who has learned to write, or a writer who has learned to program.
  • The €12,000 allocated to documentation won't go far if paid at standard tech-writer rates.
  • Bug fixing (for programmers) was allocated five times more than docs (for writers).

Clarence, finding writers

To find writers, Clarence Cromwell, a tech writer, suggested:
Why don't you reach out to the WriteTheDocs community? It has a Slack group which includes a #job-posts-only channel. I've seen many budding tech writers asking how to break into tech writing. You could offer to mentor writers in git and software processes, in return for a review of documentation.
This is worth pursuing in order to bolster our existing tech writing team. However, for holistic documentation leadership, I feel we need more than individuals can provide on volunteer time alone. We could consider Google's Season of Docs model of paying a stipend (for a lead tech writer). Note: I feel this role needs sustained sponsorship; Google's sponsorship is limited to three months.

Anne and Charlie's research into open source success factors

There are insights from open-source research we can draw upon. Pertinent to this conversation are Charlie Schweik's research into open source success factors and Anne Barcomb's research into episodic volunteers. Their research highlights:
Factors which lead to a project's success:
  • Leadership by doing.
  • Clear vision.
  • Well-articulated goals.
  • Task granularity: projects have small tasks ready for people who can only contribute small bits of time.
  • Financial backing.
Successful strategies for working with episodic volunteers:
  • Although open source episodic volunteers were unlikely to see their participation as influenced by social norms, personal invitation was a common form of recruitment, especially among non-code contributors.
  • Episodic volunteers with intrinsic motives are more likely to intend to remain, compared to episodic volunteers with extrinsic motives.
  • Episodic volunteers derive satisfaction from knowing that their work is used, enjoying the work itself, and feeling appreciated.
  • Lower barriers to entry.
  • Provide opportunities for social interactions.
I suspect the QGIS project has become so successful, and the community so large, that it appears daunting for someone on the fringe who might want to join. They don't feel worthy, are not sure how to break into the inner circle, or feel someone else will do the work if they don't. It will likely be worth rekindling a supportive and personal culture within our community.

Learning from the OSGeoLive experience

I think it is worth considering the formula used in the OSGeoLive project to attract hundreds of episodic contributors, many of whom have been working on docs. It is summarised here:
  • Start with a clear and compelling vision; inspiring enough that others want to adopt it and work to make it happen.
  • This should be followed by a practical and believable commitment to deliver on the vision. Typically this is demonstrated by delivering a "Minimum Viable Product".
  • Be in need of help, preferably accepting small modular tasks with a low barrier to entry, and ideally something which each person is uniquely qualified to provide. If anyone could fix a widget, then maybe someone else will do it. But if you are one of a few people with the skills to do the fixing, then your gift of fixing is so much more valuable, and there is a stronger moral obligation for you to step up.
  • Ensure that every participant gets more out of the project than they put in.
  • Avoid giving away free rides. If you are giving away something uniquely valuable, and it costs you time to provide that value for your volunteers, then it is OK to expect something of your volunteers if they wish to get something in return.
  • Use templates and processes to facilitate domain experts working together.
  • Reduce all barriers that may prevent people from contributing, in particular by providing step-by-step instructions.
  • Set a schedule and work to it.
  • Talk with your community regularly, and promptly answer queries.
  • And most of all, have fun while you are doing it. Because, believe you me, it is hugely rewarding to share the team camaraderie involved in building something that is much bigger and better than you could possibly create by yourself.

My assessment

The QGIS documentation community appears overwhelmed and seems to need help with:
  • Articulating doc challenges to the community, potential contributors, and potential sponsors;
  • Defining a clear vision and roadmap;
  • Coordination and project maintenance;
  • Breaking large, daunting challenges into small tasks that can be tackled easily by volunteers;
  • Capturing community goodwill and offers of assistance;
  • Inviting people to get involved one-on-one and then mentoring them;
  • Periodic catch-ups;
  • Outreach and evangelising;
  • Attracting satellite initiatives into the core;
  • Keeping up with a rapidly innovating software baseline;
  • Documentation tooling and processes;
  • Sustaining initiatives, and orphaning unmaintained documentation.
Most importantly, I think the QGIS docs team is missing enough people with the bandwidth, drive and personality to drive this agenda. Quite a bit of effort will likely be required to ramp up such a team, but I think it is worth investing in, as QGIS will benefit greatly once it is set up.

Suggestions

These suggestions are presented in my proposed order of priority.

1. Community evangelist / coordinator

I believe QGIS should engage a "community evangelist and coordinator", tasked with:
  • Inspiring others.
  • Capturing untapped goodwill from within the QGIS community and potential business sponsors.
  • Embracing and extending QGIS's supportive culture.
  • Reaching out one-on-one and personally inviting people to join, then pro-actively supporting them during their onboarding experience.
  • Helping to reduce barriers to entry.
  • Defining and managing a roadmap, with milestones and schedules.
  • Coordinating community collaboration.
  • Supporting documentation development, deployment and reviews, as required.
We should look for someone who:
  • Is friendly, approachable, and community-minded.
  • Is likely a notable and experienced member of an open-source community, whose opinions are respected and carry weight within the community.
  • Is technical enough to help a newbie with git and doc tools.
  • Is business-savvy and able to persuade business people of the value of collaboration.
  • Presents competently at conferences.
Getting the right person for this role will be very important, as they will influence the culture of the rest of the team.
Sustained funding should be sourced for this role, as it will be difficult to resource on volunteer labour alone.

2. Vision and roadmap

Common feedback from volunteers was that they didn't know how to give back. We appear to be lacking the vision and roadmap which open-source research suggests is important. Alexandre Neto noted:
  • Unfortunately, this issue list is the only thing we have [re roadmap].
I suggest defining a vision and roadmap, which can then be referenced to help prioritise direction.

3. Information architecture review

I get the impression that QGIS documentation is quite good, but it hasn't been audited by a senior technical writer/information architect. I suggest a senior information architect be engaged for a one-off engagement to set up QGIS's approach to documentation. We should consider:
  • Best-practice document types, templates and writing styles, tailored for different target audiences.
  • Documentation architecture.
  • Quality expectations.
  • Maintenance strategy.
Ideally, this information architect would be the same person who fills the sustained technical writer role (in order to retain project knowledge).

4. Engage a technical writer

People from the core doc team have noted that much of the QGIS documentation is written by software developers or power users (without formal writing training). For many, English is a second language.
I suggest a sustained technical writer role be set up to:
  • Review all new documentation generated by the community.
  • Work with the authors to ensure it fits with QGIS's writing guidelines and quality standards.
This will require sustained resourcing, which I suggest be supported by a stipend, an in-kind contribution from a company, or similar.

5. Attract external QGIS docs into the core

There has been discussion about the significant amount of external docs and training material which is not coming back into the core QGIS. I'd suggest:
  • Publicly stating the value we place on internal rather than external documentation.
  • Encouraging sponsors, and those paying for training, to make use of internal rather than external material.
  • Reaching out to external material providers and working with them to bring their material back into the core, acknowledging the extra effort required to do this. (It will be short-term pain for long-term gain.)
  • Monitoring community activity and opportunistically supporting people to bring their external docs back into the core.

6. Ignore the tools for the moment

It has been noted that the git/sphinx documentation toolchain is a barrier to entry for people coming into docs. While acknowledging the problem, I suggest leaving it for the moment, as we have higher-priority problems which we can resolve and should focus on. Leave this for the wider open-source documentation community to solve.

by Cameron Shorter (noreply@blogger.com) at December 06, 2019 06:24 PM

December 05, 2019

Dear readers,

Those of you following the releases of new GeoServer versions will have noticed that, as of version 2.15.2, the installation package for the Windows operating system (Windows Installer) is no longer being provided.

To clarify: the license OSGeo held on the server used to automate the Windows installer build has expired, so the installer cannot be created, since without it the package would be unsigned and Windows would complain about that.

A new certificate must be acquired, but the timeframe for that acquisition is uncertain. Until there is a new certificate on the OSGeo server, the Windows installation package will not be generated. If you need to create a GeoServer installer, you can do it manually by following the tutorial described at the link below:

https://docs.geoserver.org/latest/en/developer/win-installer.html

by Fernando Quadro at December 05, 2019 07:02 PM

On November 19, the new Ley de Austeridad Republicana (Republican Austerity Law) was published in Mexico's Diario Oficial de la Federación, marking a clear commitment to free software. Mexico thus joins initiatives being replicated in more and more parts of the world.

It is worth highlighting how the commitment to investing in free software, as opposed to spending on licenses, ties into the first article of the Law:

Article 1. This Law is a matter of public order and social interest. Its purpose is to regulate and standardize the austerity measures that must govern the exercise of federal public spending, and to help ensure that available economic resources are administered with effectiveness, efficiency, economy, transparency and honesty, as established by Article 134 of the Political Constitution of the United Mexican States. Its provisions apply to all departments, entities, bodies and other organs that make up the Federal Public Administration. The Legislative and Judicial branches, as well as the autonomous constitutional bodies, shall take the necessary actions to comply with this Law, in accordance with the regulations applicable to each of them, whenever they are allocated resources from the Federal Expenditure Budget.

Knowing that between the Law and its implementation there are many barriers to tear down, including vested interests, resistance to change, and lack of awareness of the importance of a strategic commitment to free software, this Law is nonetheless a fundamental tool for driving a change that is already underway.

For our part, at the gvSIG Association we will get to work to help promote the use of free geomatics and compliance with this new law. The gvSIG Community in Mexico, truly active and enthusiastic, will, I am sure, contribute everything in its power.

by Alvaro at December 05, 2019 04:18 PM

At the recent gvSIG Conference, Antonio Sánchez of the European Topic Centre at the University of Málaga presented a fascinating project that brings together some 200 entities of all kinds from 18 countries, covering 70 protected areas of the Mediterranean in total.

PANACEA has launched a data/knowledge platform that aims to energize protected-area management efforts for better protection of biodiversity in the Mediterranean.

Our work has consisted of setting up the Spatial Data Infrastructure needed to provide both a data catalog and various geoportals related to biodiversity in the Mediterranean. All of it, of course, has been developed with gvSIG Online, which in a short time has established itself as an efficient, powerful and versatile alternative for deploying SDI solutions with free software and professional support.

The project's main portal is available at: http://panaceaweb.adabyron.uma.es/

And if you want to go directly to the available geoportals, you can follow this link:

https://panaceacatalogue.adabyron.uma.es/gvsigonline/core/select_public_project/

We leave you with the project presentation, which we recommend watching to understand the project's scope:

by Alvaro at December 05, 2019 01:17 PM

December 04, 2019

This bugfix release fixes additional localization holes in:

  • The feature tooltip prompt text when it contains a hyperlink
  • The share link to view component.
Also, any command that opens a modal dialog now uses the label of the command as the dialog title, instead of the command name.

This release will be the last one in the 0.12.x series as I move full steam ahead with the next 0.13 release. Yeah ... slight change of plans about putting this project on a short hiatus.

It turns out mapguide-react-layout needs some major updates for several key libraries it's using, so it's not worth holding off on this any longer.

The journey to this next release is worth a long-overdue blog post as well.

by Jackie Ng (noreply@blogger.com) at December 04, 2019 01:12 PM

The excellent presentation on open resources, given by Antonio F. Rodríguez of the Centro Nacional de Información Geográfica, is now available, and it invites debate.

Did you know that, according to a study by Spain's SDI, 72% of the official data published carries no license? What are the trends in data publication? What licenses should open data have? Are Spatial Data Infrastructures in crisis? If someone does business with our data, can we ask for our share? All this and more in the following presentation:

by Alvaro at December 04, 2019 10:44 AM

The presentations on Uruguay's Spatial Data Infrastructure and Central Address System, given at the 15th International gvSIG Conference, are now available.

Both projects represent a qualitative leap in the management and availability of geographic information in Uruguay. Both were carried out by the gvSIG Association with free software and, in particular, with the gvSIG Suite.

The IDEUY and the project for the acquisition of digital imagery and elevation models covering all of Uruguay:

Central Address System of Uruguay:

by Alvaro at December 04, 2019 09:31 AM

December 03, 2019

The recording of the workshop "ConvertGISEpanet – RunEpanetGIS – gvSIG: tools for processing information on water supply networks", given at the recent International gvSIG Conference by Óscar Vegas Niño of the Instituto de Ingeniería del Agua y Medio Ambiente (IIAMA) of the Universidad Politécnica de Valencia, is now available.

The video for following the workshop can be found here:

The Epanet software can be downloaded from the following link: https://www.iiama.upv.es/iiama/es/transferencia/software/epanet-esp

Its installation is straightforward.

The tools used can be downloaded from the following links:

Inside each compressed folder there is a text file named "instrucciones" with the steps to follow for the applications to work correctly.

The Epanet and shapefile data for the networks used can be downloaded from the following link: http://bit.ly/2qd4Wsu

by Alvaro at December 03, 2019 01:05 PM

      December 01, 2019

      GRASS GIS is an open source geoinformation system which is developed by a globally distributed team of developers. Besides the source code developers also message translators, people who write documentation, those who report bugs and wishes and more are involved.

      1. Early days… from pre-Internet to CVS and SVN

While GRASS GIS has been under development since 1982 (no typo!), it was only put into a centralized source code management system in December 1999. Why so late? Because the World Wide Web (WWW) only became available in the 1990s, along with tools like browsers, followed by the development of distributed source code management tools. On 29 Dec 1999 (think Y2K bug) we moved the entire code into our own instance of CVS (Concurrent Versions System). With OSGeo being founded in 2006, we migrated the CVS repository to SVN (Subversion, for source code management) and trac (bug and wish tracker) on 8 Dec 2007.

      2. Time to move on: git

Now, after more than 10 years of using SVN/trac, the time had come to move on and join the large group of projects managing their source code in git (see also our related Wiki page on the migration). Git comes with numerous advantages, yet we needed to decide which hosting platform to use. Options were github.com, gitlab.com, GitLab or Gitea on OSGeo infrastructure, or other platforms. Through a survey we found out that the preference among contributors is GitHub. While not being open source itself, it offers several advantages: it is widely known (good for getting new developers interested and involved), and numerous OSGeo projects are hosted there under the GitHub “OSGeo” organization.

If all fails (say, GitHub one day no longer being a reasonable choice), importing our project from GitHub into GitLab is always possible. Indeed, we meanwhile mirror our code on OSGeo’s gitea server.

      Relevant script code and migration ticket:

      Relevant steps:

• migrated SVN trunk -> git master
• migrated and tagged release branches (milestones)
• deleted “develbranch6” (we compared it to “releasebranch_6_4” and didn’t discover relevant differences)
• fixed commit messages (yes, we really wanted to be brave, updating decades of commit messages!):
  • references to the old RT tracker tickets (used Dec 2000 – Dec 2006)
  • references to the old GForge tracker tickets (used Jan 2007 – Dec 2008)
  • references to other trac tickets (#x -> https://trac.osgeo.org/…)
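The last of these rewrites can be sketched as a regular-expression pass over each commit message. This is a hypothetical illustration, not the actual migration script; the `TRAC_URL` base and the matching rule are assumptions:

```python
import re

# Hypothetical sketch of rewriting bare trac ticket references
# ("#1234") in old commit messages to full trac URLs. The real
# migration scripts handled many more referencing styles.
TRAC_URL = "https://trac.osgeo.org/grass/ticket/"  # assumed base URL

def rewrite_message(msg):
    # Replace "#1234" with the full ticket URL; the lookbehind avoids
    # touching fragments that are already part of a word or URL.
    return re.sub(r"(?<![/\w])#(\d+)\b",
                  lambda m: TRAC_URL + m.group(1), msg)

print(rewrite_message("fix segfault in r.slope, see #1234"))
```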

      3. Source code migration: the new git repositories

      • github repository “grass” (repo)

        • Source code from 1999 to present day (SVN-trunk -> git-master)
        • all 7.x release branches
      • github repository “grass-legacy” (repo)

  • separate repository for older GRASS GIS releases (3.2, 4.x, 5.x, 6.x); hence the source code is available in git all the way back to 1987!
      • github repository “grass-addons” (repo)

        • repository for addons
      • github repository “grass-promo” (repo)
        • repository for promotional material
      • github repository “grass-website” (repo)
        • repository for upcoming new Website

      4. Remarks on the “grass-legacy” repository

What’s special about it:

      • the source code goes back to 1987!
      • file timestamps (which I tried to preserve for decades :-) have been used to reconstruct the source code history (e.g., releasebranch_3_2)
• junk files removed (plenty of leftover old binary files, files consisting of a single special character only, etc.)
• with this grass-legacy repo available in parallel to the main grass repo, which contains the recent source code, we have continuous source code coverage in git from 1987 to today
• the size is about 250 MB

      What’s missing

• the 4.3 source code doesn’t have distinct timestamps. Someone must once have packaged it without mtime preservation… a pity. Perhaps a volunteer may fix that by carrying over the timestamps from GRASS GIS 4.2 where the md5sum of a file is identical (or similar).

      5. Trac issue migration

A series of links had to be updated. Martin Landa invested days and days in that (thanks!!). He used the related GDAL efforts as a basis (Even Rouault: thanks!). As the cut-off date for the trac migration we selected 2007-12-09 (r25479), as it was the first SVN commit (after the years in CVS). The migration of trac bugs to GitHub (i.e. the transfer of trac ticket content) required several steps:

      Link updates in the ticket texts:

• links to other tickets (now pointed to the full trac URL). Note that there were many styles of referring to tickets in the commit log messages, which had to be parsed accordingly
• links to the trac wiki (now pointed to the full trac URL)
• links to source code in SVN (now pointed to the full trac URL)
• images and attachments (now pointed to the full trac URL)

      Transferring:

• the “operating system” trac label into the GitHub issue text itself (following the new issue reporting template)
• converting milestones/tickets/comments/labels
• converting trac usernames to GitHub usernames
• setting assignees where possible, and the new “grass-svn2git” user as assignee otherwise
• slowing down the transfer to match the 60 requests per second API rate limit at GitHub
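The throttling in the last step amounts to spacing out API calls so the transfer never exceeds a given request rate. A minimal sketch, with the rate parameter and the generator shape as assumptions rather than the actual migration code:

```python
import time

# Hypothetical sketch: yield work items no faster than `per_minute`
# requests per minute, sleeping between items as needed.
def throttled(items, per_minute=60):
    interval = 60.0 / per_minute
    for item in items:
        start = time.monotonic()
        yield item            # caller performs one API request here
        elapsed = time.monotonic() - start
        if elapsed < interval:
            time.sleep(interval - elapsed)

# usage: for ticket in throttled(tickets): upload_to_github(ticket)
```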

      6. Fun with user name mapping

      Given GRASS GIS’ history of 35+ years we had to invest major effort in identifying and mapping user names throughout the decades (see also bug tracker history). The following circumstances could be identified:

      • user present in CVS but not in SVN
      • user present in SVN but not in CVS
      • user present in both with identical name
• user present in both with different names (well, in our early CVS days in 1999 we often naively picked first names like “martin”, “helena”, “markus”, “michael” … cute, yet not scaling very well over the years!), as some were changed in the CVS-to-SVN migration in 2007, leading to
        • colliding user names
      • some users already having a github account (with mostly different name again)

We came up with several lookup tables, aiming at catching all variants. Just a “few” hours of digging through old source code files and emails to find all the missing email addresses…
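The merging of those lookup tables can be sketched as follows (the table contents here are illustrative examples, not the real mapping, which covered all contributors and aliases):

```python
# Hypothetical sketch: merge per-system alias tables (CVS, SVN) into one
# author map and attach GitHub accounts, flagging colliding user names.
cvs_to_real = {"martin": "Martin Landa", "markus": "Markus Neteler"}
svn_to_real = {"landa": "Martin Landa", "neteler": "Markus Neteler"}
github_by_real = {"Martin Landa": "landam", "Markus Neteler": "neteler"}

def build_author_map():
    authors = {}
    for table in (cvs_to_real, svn_to_real):
        for alias, real in table.items():
            if alias in authors and authors[alias] != real:
                raise ValueError("colliding user name: " + alias)
            authors[alias] = real
    # pair each alias with (real name, GitHub account or None)
    return {alias: (real, github_by_real.get(real))
            for alias, real in authors.items()}
```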

      7. Labels for issues

      We cleaned up the trac component of the bug reports, coming up with the following categories which have to be visually grouped by color since the label list is just sorted alphabetically in github/gitlab:

      • Issue category:
        • bug
        • enhancement
      • Issue solution (other than fixing and closing it normally):
        • duplicate
        • invalid
        • wontfix
        • worksforme
      • Priority:
        • blocker
        • critical
        • feedback needed
      • Components:
        • docs
        • GUI
        • libs
        • modules
        • packaging
        • python
        • translations
        • unittests
        • Windows specific

Note that the complete issue migration is still to be done (as of Nov. 2019). Hopefully it will be addressed at the GRASS GIS Community Sprint Prague 2019.

      8. Setting up the github repository

In order to avoid users being flooded by emails (the parsing of user contributions normally triggers an email from GitHub), we reached out to GitHub support to temporarily disable these notifications until all source code and selected issues were migrated.

The issue conversion rate was 4 min per trac bug converted and uploaded to GitHub. Fairly slow, but likely due to the imposed API rate limit and the fact that the migration script above generates many individual API requests rather than combined ones.
Note to future projects to be migrated: use the new GitHub import API (unfortunately we learned about its existence too late in our migration process).

Here are the timings from the GRASS GIS project migration from SVN to GitHub:

      • grass repo: XX hours (all GRASS GIS 7.x code)
      • grass-legacy repo: XX hours (all GRASS GIS 3.x-6.x code)
      • NNN issues: XX hours – forthcoming.

      9. New issue reporting template

      In order to guide the user when reporting new issues, we will develop a small template – forthcoming.

      10. Email notifications: issues to grass-dev and commits to grass-commit

We changed the settings from an SVN post-hook to GitHub commit notifications, and they flow smoothly into the grass-commit mailing list. Join it to follow the development.

Overall, after several months of using the new workflow, we can state that things work fine.

      The post Remarks on SVN-trac to GitHub migration appeared first on GFOSS Blog | GRASS GIS and OSGeo News.

      by neteler at December 01, 2019 11:03 AM

It’s a long time since I used any kind of powerpoint-ish presentation building software (2016, I think). My software of choice these days is RevealJS (https://github.com/hakimel/reveal.js/), with the occasional foray into using Big (https://github.com/tmcw/big). The premise is simple in both cases: write your presentation in markdown or html and let javascript and css do the work of creating the slide deck. This approach appeals greatly because there’s no need to plan the slides in advance; you can write them as you go, in a text editor. There’s no need to break your flow to create a new slide, or to position a text box exactly where you want it on the page. You can just write.

Of the two, Big is simpler to use, but I have never got on with how it positions background images. It’s fabulous for simple text though. Since I like to use background images quite a lot, I’ve finally settled on RevealJS. As a bonus, the more I use it, the more I learn how to do fun things in css. I’m not a fan of css in general, and have a complete mental block about using it in my “real work”, but RevealJS is forcing me, slowly, to get over this. This blog post is as much to remind me how to do these things when I inevitably forget as it is to provide a reference for others!

      Quick Intro

      In Revealjs, your presentation is written as one single html file. You grab a copy of the repository from https://github.com/hakimel/reveal.js, and then hack away at index.html until it meets your needs. The essentials look like this:

      <html>
      	<head>
      		<link rel="stylesheet" href="css/reveal.css">
      		<link rel="stylesheet" href="css/theme/white.css">
      	</head>
      	<body>
      		<div class="reveal">
      			<div class="slides">
      				<section>Slide 1</section>
      				<section>Slide 2</section>
      			</div>
      		</div>
      		<script src="js/reveal.js"></script>
      		<script>
      			Reveal.initialize();
      		</script>
      	</body>
      </html>
      

Each “slide” in your “deck” is denoted by a <section> tag in the <body> section, as shown above. You can then use normal html syntax and css styling to fancy up your elements, but basically you just write out your sections and voilà, you have a presentation. To use your own css you should add a custom.css file and reference it in the <head> section after the generic reveal css and the css for the theme you’re using, so that you override any default behaviour.

The instructions are all here: https://github.com/hakimel/reveal.js/blob/master/README.md, and the functionality I want to talk about is fragments.

      Introducing fun text effects with “fragments”

Fragments (https://github.com/hakimel/reveal.js/#fragments) are used to highlight individual elements of a slide; by default they make elements appear one at a time as you step through. This is great for delivering what would otherwise be a boring old list of bullet points. However, by changing fragment behaviour you can do all sorts of things, like highlight elements in bold or add a ‘>’ before them as you step through. This requires an understanding of fragment events and the css “before” and “after” pseudo-elements.

      For example, to add a pointing-finger emoji to an element in a list as you step into it, you’d use the following css:

.reveal .slides section .fragment.pointy.current-fragment::before {
  content: "👉 ";
}
      

      and in your html you’d mark it up as follows:

      <section>
      	<h3>Unordered list:</h3>
      	<ul>
      		<li class="fragment pointy">Fragment 1</li>
      		<li class="fragment pointy">Fragment 2</li>
      		<li class="fragment pointy">Fragment 3</li>
      		<li class="fragment pointy">Fragment 4</li>
      	</ul>
      </section>
      

      To highlight fragments in a sentence in bold as you step into them, firstly you need to override the default behaviour where they are hidden until stepped into:

.reveal .slides section .fragment.bold {
  opacity: 1;
  visibility: inherit;
}
      
.reveal .slides section .fragment.bold.current-fragment {
  font-weight: bold;
}
      

      you’d use this in your html like this:

      <section>
      	<p>Here's a sentence where <span class="fragment bold">this</span>, <span class="fragment bold">this</span> and <span class="fragment bold">this</span> need to be highlighted.</p>
      </section>
      

Best of all, with some additional javascript libraries you can have even more fun! Here’s how to add rainbow sparkles to elements on your slide, inspired by https://codepen.io/simeydotme/pen/jgcvi/. It’s still a bit of a work in progress, but anyhow, here we go:

Firstly, I think you need to add jquery and the javascript code for the sparkles as dependencies in your html, so in the <script> section, underneath the line that initialises the main RevealJS code, add the following:

      <script src="js/reveal.js"></script>
      <script src="js/jquery-3.4.1.slim.min.js"></script>
      <script src="js/jquery-canvas-sparkles.js"></script>
      

      Then you need some css to define your fragment behaviour, like above:

.reveal .slides section .fragment.sparkley {
  opacity: 1;
  visibility: inherit;
}
      
.reveal .slides section .fragment.sparkley.current-fragment {
  border: none;
  font-weight: normal;
  color: pink;
}
      

      Then you need to set the desired behaviour of your sparkles in the html by adding a RevealJS EventListener that acts when the fragment is shown (https://github.com/hakimel/reveal.js/#fragment-events). This gets added into the <script> tag where RevealJS is initialised:

Reveal.addEventListener('sparkley', function() {

    Reveal.addEventListener('fragmentshown', function(event) {
        // event.fragment = the fragment DOM element
        var timer;
        clearTimeout(timer);
        $curfrag = $(event.fragment);
        $curfrag.sparkle({
            "color": "rainbow",
            "minSize": 5,
            "maxSize": 10,
            "overlap": 0,
            "direction": "both",
            "speed": 1,
            "fadeSpeed": 3000
        });
        $curfrag.off("mouseover.sparkle");
        $curfrag.trigger("start.sparkle")
            .on("click");

        timer = setTimeout(function() {
            $curfrag.trigger("stop.sparkle");
        }, 100);

    });
    Reveal.addEventListener('fragmenthidden', function(event) {
        $curfrag.trigger("stop.sparkle");
    });

}, false);
      

      You then reference the fragment name (in this case, sparkley) in your html as in the bold example above. The end result should look something like the following:

      sparkley gif

      In conclusion, with a small amount of css fun, you can make your presentations much more interesting!

      by Jo at December 01, 2019 12:00 AM

      November 30, 2019

When editing vector features in a web GIS map, one often needs support for precise drawing. Snapping to existing features of external data (using WFS or other vector features) has long been available in OpenLayers Editor, but it still lacked guide line support. We are pleased to announce that guide lines can now be generated automatically by OpenLayers Editor whilst a feature is drawn. In this first iteration, guide lines support drawing right-angled features or features parallel to other features.
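Under the hood this kind of guide line is plain vector geometry: project the cursor position onto lines through the last drawn vertex that run parallel or perpendicular to the previous segment, and snap if the projection is close enough. A minimal sketch of the idea (not the OpenLayers Editor API; the function name and pixel tolerance are assumptions, and the previous segment is assumed non-degenerate):

```python
import math

# Hypothetical sketch: snap a point being drawn onto a guide line that
# is parallel or perpendicular to the previous segment a->b.
def snap_to_guides(a, b, p, tolerance=10.0):
    # direction of the previous segment a->b, normalised (assumes a != b)
    dx, dy = b[0] - a[0], b[1] - a[1]
    length = math.hypot(dx, dy)
    ux, uy = dx / length, dy / length
    best, best_dist = p, tolerance
    # guide lines through b: parallel to a->b, and perpendicular to it
    for gx, gy in ((ux, uy), (-uy, ux)):
        # project p onto the guide line through b with direction (gx, gy)
        t = (p[0] - b[0]) * gx + (p[1] - b[1]) * gy
        q = (b[0] + t * gx, b[1] + t * gy)
        dist = math.hypot(p[0] - q[0], p[1] - q[1])
        if dist < best_dist:
            best, best_dist = q, dist
    return best
```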

      by cartaro-admin at November 30, 2019 09:08 PM

      November 29, 2019

The tutorial for the “Thermal remote sensing” workshop, given at the recent 15th International gvSIG Conference by Rubén Martínez of the University of Costa Rica, is now available for download.

Satellite remote sensing has seen renewed momentum in recent decades, with new sensors and platforms that make it possible to derive geophysical magnitudes and variables of enormous geographic value. Within this wide range of variables, land temperature is the main object of study of thermal remote sensing.

Surface temperature is a fundamental source of information, both qualitative and quantitative, about the processes taking place on the Earth's surface, and therefore allows their characterization, analysis and modelling.

The goal of the exercise developed in this tutorial is to obtain land surface temperature from the brightness values of the thermal band in Landsat imagery.
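The standard workflow behind this kind of exercise converts thermal-band digital numbers to top-of-atmosphere radiance and then inverts Planck's law to obtain at-sensor brightness temperature. A sketch using the published Landsat 8 band 10 calibration constants (for a given scene, take ML, AL, K1 and K2 from its metadata; this is a simplification of the full tutorial, which also has to account for emissivity to reach true surface temperature):

```python
import math

# Sketch of the standard conversion from Landsat thermal-band digital
# numbers (DN) to at-sensor brightness temperature. Constants below are
# the published Landsat 8 band 10 values.
ML, AL = 3.342e-4, 0.1          # radiance rescaling factors
K1, K2 = 774.8853, 1321.0789    # thermal conversion constants

def brightness_temperature(dn):
    radiance = ML * dn + AL                   # TOA spectral radiance
    kelvin = K2 / math.log(K1 / radiance + 1)
    return kelvin - 273.15                    # degrees Celsius
```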

Download:

       

      by Alvaro at November 29, 2019 11:07 AM

      November 28, 2019

      We are pleased to announce the release of GeoServer 2.16.1 with downloads (war|zip), documentation (html|pdf) and extensions.

      This is a stable release recommended for production systems.

      Note: Our builds detected a change in Oracle JDK 8u221 URL handling; this release was made with 8u202 as a result. Future releases will be made using OpenJDK.

      Improvements and Fixes

      This release includes a number of improvements, including:

      • The REST API now supports configuring the WMTS
      • New hideEmptyRules LEGEND_OPTION to hide rules not matching any features

      Fixes included in this release:

      • SLDService generated raster classifiers are not overwriting the default style any longer
      • Monitoring extension fixed to respect GEOSERVER_AUDIT_PATH setting
      • Cascaded WMTS now makes use of provided credentials

      For more information check the 2.16.1 release notes.

      Security Updates

      Please update your production instance of GeoServer to receive the latest security updates and fixes.

      If you encounter a security vulnerability in GeoServer please respect our responsible disclosure policy.

      Community Updates

      For developers building from source, our community modules are a great place to collaborate on functionality and improvements.

      • ncWMS GetTimeSeries now supports time ranges with period and excludes nodata from results
      • hz-cluster improved synchronization events

      About GeoServer 2.16

      Features, presentations and reference material on the 2.16 series:

      by jgarnett at November 28, 2019 06:12 PM


The 3rd edition of the course-contest “Projects with students and gvSIG Batoví” has come to a close. After three years of adjustments and maturing, it is a good moment to review the process.

As we have said before, this is not just a contest that people are simply invited to join by publishing its rules. It is a whole set of coordinated actions whose ultimate aim is to promote the use of free geospatial technologies in pre-university education.

Read more: course-contest “Projects with students and gvSIG Batoví”: the keys to an open geospatial education

      by Alvaro at November 28, 2019 04:33 PM

For the longest time, the MySQL FDO provider was of limited utility, though through no fault of the provider itself. The last time this provider saw serious development, the latest version of MySQL was around 5.0/5.1, and the spatial capabilities of MySQL at that point left a lot to be desired.

While you could store spatial data, querying it spatially was another matter. MySQL at that time only offered bounding-box-based spatial predicates, which manifested in user-facing viewer behavior like this:



      Why would such a box selection query select that line? Because their minimal bounding boxes spatially intersect according to MySQL's limited spatial predicates. While MySQL 5.6 finally introduced proper spatial capabilities, the provider itself was still working against the pre-5.6 feature set until recently.

After receiving signs that people still use MySQL for spatial data, I've finally decided to tackle this long-standing annoyance. If you have MapGuide Open Source 3.1.2 64-bit installed, you can download the patched MySQL provider to unlock the full set of spatial capabilities when connected to MySQL 5.6 or higher. The end result is that map selections against MySQL data sources now actually make sense!


      That line is no longer selected out of nowhere! You have to actually box select on the line, like an actual ST_Intersects spatial predicate should!


Since MySQL has long since been forked into another popular and highly compatible project called MariaDB, I did some testing to make sure the provider works against MariaDB as well. It turns out that the [MySQL version is >= 5.6] check the provider performs to decide whether to unlock its full spatial capabilities is not quite correct when working with MariaDB. The problem is that the version-checking APIs provided by the MySQL/MariaDB client return "5.5.5" when connected to MariaDB, which breaks the version check, as 5.5 < 5.6.

This left me scratching my head for a bit: why would MariaDB 10.4 (the version of MariaDB I was testing) return a version of "5.5.5"? It turns out this version number has special meaning as a versioning hack to support replication compatibility with MySQL. The real version can be obtained by getting the version string and, if it contains "mariadb", checking for the "5.5.5-" prefix; if present, the version number following that prefix is the real version number. Because there was never a release of MySQL 5.5.5, the presence of this "5.5.5-" prefix in the version string gives us 99.9% certainty that we're actually dealing with MariaDB. With this change, the provider now has full spatial capabilities when connected to MariaDB as well.
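In Python terms, the version check described above might look like this (a hypothetical illustration for clarity; the actual fix lives in the provider's C++ code, and the function names here are assumptions):

```python
# Hypothetical sketch of the version check: recover the real MariaDB
# version hidden behind the "5.5.5-" replication-compatibility prefix,
# then decide whether full spatial support (MySQL >= 5.6) can be enabled.
def real_version(version_string):
    v = version_string
    if "mariadb" in v.lower() and v.startswith("5.5.5-"):
        v = v[len("5.5.5-"):]          # e.g. "10.4.10-MariaDB"
    major, minor = v.split(".")[:2]
    return int(major), int(minor.split("-")[0])

def has_full_spatial(version_string):
    return real_version(version_string) >= (5, 6)
```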

These changes will be rolled into FDO trunk and will be part of the MySQL FDO provider that ships with the next preview release of MapGuide Open Source 4.0.

      by Jackie Ng (noreply@blogger.com) at November 28, 2019 09:09 AM

      I never tire of telling developers that they should learn SQL.

      And I never run out of developers for whom that is good advice.

      I think the reason is that so many developers learn basic SQL CRUD operations, and then stop. They can filter with a WHERE clause, they can use Sum() and GROUP BY, they can UPDATE and DELETE.

      If they are particularly advanced, they can do a JOIN. But that’s usually where it ends.

      And the tragedy is that, because they stop there, they end up re-writing big pieces of data manipulation logic in their applications – logic that they could skip if only they knew what their SQL database engine was capable of.
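As a small illustration of the point: ranking rows within groups, logic that often ends up re-implemented in application code, is a single window function in SQL (sketched here with Python's built-in SQLite; window functions require SQLite 3.25 or later, and the table is invented for the example):

```python
import sqlite3

# Rank sales reps within each region directly in SQL instead of
# sorting and grouping in application code.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, rep TEXT, amount INT);
    INSERT INTO sales VALUES
        ('north', 'ann', 120), ('north', 'bob', 90),
        ('south', 'cat', 200), ('south', 'dan', 310);
""")
rows = conn.execute("""
    SELECT region, rep, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
""").fetchall()
```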

      Since so many developers are using PostgreSQL now, I have taken to recommending a couple of books, written by community members.

For people getting started with PostgreSQL and SQL, I recommend The Art of PostgreSQL, by Dimitri Fontaine.

      Art of PostgreSQL

      For people who are wanting to learn PostGIS, and spatial SQL, I recommend PostGIS in Action, by Regina Obe and Leo Hsu.

      PostGIS in Action

Both Dimitri and Regina are community members, and both have been big contributors to PostgreSQL and PostGIS. One of the key PostgreSQL features that PostGIS uses is the “extension” system, which Dimitri implemented many years ago now. And of course Regina has been active in the PostGIS development community almost since the first release in the early 2000s.

      I often toy with the idea of writing a PostGIS or a PostgreSQL book, but then I stop and think, “wait, there’s already lots of good ones out there!” So I wrote this short post instead.

      November 28, 2019 08:00 AM

      November 27, 2019

Dear all, we are pleased to announce that OTB version 7.0.0 is out! Ready-to-use binary packages are available on the package page of the website: OTB-7.0.0-Win32.zip for 32-bit Windows, OTB-7.0.0-Win64.zip for 64-bit Windows, OTB-7.0.0-Linux64.run for Linux, and OTB-7.0.0-Darwin64.run for Mac OS X. You can also check out the release branch with git. This version […]

      by Guillaume Pasero at November 27, 2019 09:14 AM