Welcome to Planet OSGeo

June 23, 2021

The GeoTools team is pleased to share the availability of GeoTools 24.4:

  • geotools-24.4-bin.zip
  • geotools-24.4-doc.zip
  • geotools-24.4-userguide.zip
  • geotools-24.4-project.zip

This release is published to the OSGeo Maven repository, and is made in conjunction with GeoServer 2.18.4. This is a maintenance release and is a recommended upgrade for all users of the GeoTools library.

by Andrea Aime (noreply@blogger.com) at June 23, 2021 09:50 AM

We are happy to announce that the GeoServer 2.18.4 release is available for download (zip and war) along with docs and extensions.

This GeoServer 2.18.4 release was produced in conjunction with GeoTools 24.4 and GeoWebCache 1.18.4. This is a maintenance release recommended for production systems.

Thanks to everyone who contributed, and to Alessandro Parma (GeoSolutions) and Andrea Aime (GeoSolutions) for making this release.

Improvements and Fixes

This release includes support for the Krovak North Orientated projection. In addition to this new feature, a few notable improvements are included:

Fixes included in this release:

  • GEOS-10057 Escape style editor page user input
  • GEOS-10056 Rename schemaless-mongo plug-in to mongodb-schemaless
  • GEOS-10055 Escape SRS demo page user input
  • GEOS-10051 Fix edge cases in the elevation parser
  • GEOS-10049 Schemaless-features layer with name different from the mongo collection throws exception
  • GEOS-10048 Schemaless-features mongoDB layer not present in WMS capabilities
  • GEOS-10031 GWC loses GridSets bounds when importing data through the Importer plugin.
  • GEOS-9748 Rendering process fails if vendor option sortByGroup is used

For details check the 2.18.4 release notes.

About GeoServer 2.18

Additional information on GeoServer 2.18 series:

by Andrea Aime at June 23, 2021 12:00 AM

June 21, 2021

June 20, 2021

Who hasn't had to dig out an old copy of their data, lost in some folder on their computer, to rescue information as it was a few months ago? It's not something that happens every day, but it does happen from time to time, and when it does it usually gives us a headache. Even more so if we only want to recover a handful of records from a layer that we accidentally deleted or modified weeks ago.

In recent years, in contexts unrelated to cartography management, it has become normal to find version control systems (VCS) being used to manage information. In GIS environments, however, it is not so common, mainly because of the scarce or non-existent offering among non-proprietary tools. Well, starting with gvSIG Desktop 2.6 we have a version control system, VCSGis, in free software, which we can use directly without installing anything beyond gvSIG Desktop itself.

So, if I have gvSIG Desktop 2.6 installed (or a portable version)...

How can I start using version control on my tables and layers?

  • First, create a repository for version control: a space where all my data will be stored... together with the changes I make to it. In gvSIG jargon we will call it a VCSGis repository.
  • Second, create a working copy linked to that repository. That working copy is what we will normally work on.
  • Third... load our tables and layers into the working copy and synchronise it with the VCSGis repository.

From then on, we simply keep working with gvSIG Desktop as usual and synchronise with the repository from time to time, normally at the start or end of a working session.

Put like that, it may sound complicated to some, but nothing could be further from the truth. I'll describe it in a little more detail in this post; just a little, as I don't want to go on too long.

Creating a personal repository

A VCSGis repository is nothing more than a database with a set of special tables in which VCSGis stores our information. For personal use, the H2 database engine bundled with gvSIG Desktop will normally be enough. Simply go to the menu:

Tools -> VCSGis -> Administration -> Initialize repository.

It will ask us to select a database connection and will initialise the VCSGis repository in it.

Creating a working copy

The second thing we had to do... create a working copy linked to that repository. The working copy is just another database, normally in H2Spatial format, containing the tables we will work with plus a set of tables used to keep the information synchronised with the repository.

Creating a working copy is also a very simple task. Go to the menu option:

Tools -> VCSGis -> Initialize working copy.

It will ask us to select the connection to the repository database, and a file name in which to create our local copy.

These two steps, creating the repository and the local copy, are normally done only once. Once they are created and configured, we can work with them without needing to create new ones.

Adding layers and saving them to the repository

We now have our repository and our working copy, and it only took a few minutes. Now we will load our layers into the local copy and synchronise them with the repository.

Load your layer into a View... for example, I have one here with the provinces of Spain (esp_provincias). I won't explain how to load a layer into a gvSIG Desktop View; I'm sure you already know how. With the layer loaded in the View, select the menu option:

Tools -> VCSGis -> Add to working copy.

In the dialog that appears, indicate our working copy and, in the Layers tab, select the layer we want to load from those present in the gvSIG Desktop View. Finally, press the "Add to working copy" button. Once the process finishes, close the dialog.

As a result, our layer will be loaded in the View... twice!!

That is normal. We are seeing our original layer and a copy that has been created in... our working copy. We remove the original layer from the View and, from that moment on, work with the one in our working copy.

But we are not quite done with this process yet. We still have to synchronise our working copy with the repository. That is still very easy. Select the menu option:

Tools -> VCSGis -> Show changes

Select our working copy and, in the Working copy tab, the layer we just added will appear. We only have to check it (by clicking its checkbox) and press the button with the little blue arrow that says "commit". When it finishes, we can close the window.

Saving my provinces layer to the repository took less than a minute, and all the steps since the beginning of the article took me less than five. If the layer you add to your personal repository is much bigger it can take somewhat longer, but it is only during that initial load that you will notice the weight of the layer.

Making changes to our layers

And now, what do I have to do to work with my layer?

If we have closed gvSIG Desktop, when we open it again we have to load the layer once more. Also easy. Open the Add layer dialog and select the VCSGis tab. Select our local copy and it will show the available layers; select ours and load it into the View.

Once loaded... we work with it as we usually do: if we need to modify it, we start editing, make our changes and finish editing, as many times as we need. For a while we forget that it is a layer in a version control system and work with it like any other gvSIG database layer. When we have finished working with it... we open the Show changes tool, indicate our working copy, check our layer in that working copy's layer list and press the Commit button, just as we did when we added the layer to the repository. And our changes will be integrated into it.

We repeat this process of modifying our layer and synchronising with the repository as many times as we need, and in this way we get a history of what the layer looked like at every point in time.

Recovering an earlier version of my data

And one day, out of the blue, it happens to us. Oh! We've just realised that we deleted a record a couple of weeks ago! I must have deleted the province of Almería by mistake, and I hadn't noticed until today. Worse still, I've been working on other areas of the layer and I don't want to lose those changes. That's when you remember you've been working with a version control system for a while, and that you can use it to recover the data from two weeks ago and merge it with your changes.

In general, you can always recover a version of your layers from the "Get local copy (checkout)" window, found in the menu option:

Tools -> VCSGis -> Get local copy (checkout)

But before going on... make sure you have no changes pending synchronisation with the repository, that all your changes to the layer you want to work on are saved, uploaded, in the repository. Once there are no pending changes in your local copy, we return to the "Get local copy (checkout)" window.

Select our working copy and our layer, tick the "Overwrite table" checkbox, and press the revision button to select which revision to recover; in my case revision 1, the last one in which the province of Almería was still present. Finally, press the "Get local copy (checkout)" button and, when it finishes, close this dialog.

And now... our layer in the local copy is in the state of the revision I selected... and...

Where are the changes I've made since then?

Have I lost them?!!

No. There is no need to panic. They are in the repository. We only have to go to the changes window, select our working copy and go to the Repository tab. Select our layer and press the "Download remote changes of the selected table" button, and the table on the right will show all the changes made since the revision we now have in the local copy. We can select which ones to download and which not, and then upload the resulting state of our layer to the repository.

A small note. If we select a record in the list of changes, and the layer in question is in the active View, we can use the Center and Zoom buttons to see how it looked; it will show the record's graphical information.

I've gone through all of this very quickly, a quick sketch of what it is like to work with VCSGis, the version control system of gvSIG Desktop. Much more can be done than what I've described, but this article has already become very long. We'll leave that for future posts.

Important when approaching VCSGis... two concepts to remember:

  • VCSGis repository. Where all our data is stored.
  • Working copy (linked to a repository). Where we keep a copy of a version of our data to work with.

And one more important thing... if we need to back up our data, we back up the repository database, normally two files (.mv.db and .trace.db), and we do it while gvSIG Desktop is not running 😉

I hope this has been useful, or at least interesting.

by Joaquin del Cerro at June 20, 2021 06:33 PM

June 19, 2021

Here is the long awaited third preview release of MapGuide Open Source 4.0.

Refer to the release notes for download links and an overview of what's new and changed since the last preview release.

For MapGuide users on Linux who use Java, you will be glad to hear that Java support is back and fully operational with this release (no more 403 errors from Tomcat). Restoring Java support on Linux came at a small price: we no longer ship MapGuideApi.jar as the Java wrapper to the MapGuide API. We now only ship MapGuideApiEx.jar, which has shipped with all MapGuide Open Source releases since 2.5 and fixes most of the "cruftiness" of the original Java wrapper, as described in the RFC where it was first introduced.

For those who like to swap the bundled GDAL dll on Windows for a version with an expanded suite of raster/vector driver support, this release makes the process more seamless:

  • Our internal GDAL version is now 2.4.4, the latest in the 2.x series.
  • Our internal xalan/xerces dlls now have an "fdo" suffix in their dll names, so you are no longer forced to overwrite them when overlaying an external GDAL dll from gisinternals, with the potential instability that introduces.
  • All of our FDO providers that link to GDAL/OGR now only use their C API to avoid any potential ABI incompatibilities from replacing our internal GDAL dll with an external one that may not be built with the same MSVC compiler version as ours.
Other than that, Preview 3 is mostly a roll-up of fixes to MapGuide/FDO since the last preview release, to tide things over while work continues on supporting PHP7 in our MapGuide API, the remaining obstacle to a final 4.0 release.

But before then, as of this release I will be taking a short break from all things open source. I am mildly burned out all things considered and a good solid month or two away from all things open source should hopefully be enough time to recharge the batteries.

by Jackie Ng (noreply@blogger.com) at June 19, 2021 05:26 PM

June 18, 2021

Sometimes we can have a layer with overlapping polygons, and we need to clip one of them so that there is no overlap (for example, a large parcel with an internal parcel, where what we really have is the polygon of the large parcel without the hole).

For this, the Difference geoprocess in gvSIG can be applied, but with a specific configuration and specific steps. This video shows how it would be applied:

by Mario at June 18, 2021 10:11 AM


June 17, 2021

June 16, 2021

We’d like to share some exciting news with you about our cloud-based geo-data synchronisation service, Mergin.

In this post we’ll talk both about Mergin the online managed service (Mergin Cloud) and also about the software stack that powers it (the Mergin Software Stack).

Mergin CE

The Mergin Software Stack has been developed and maintained at Lutra Consulting over the past 3 years to power our Mergin Cloud service and has been maturing nicely in production.

We believe a sync service for supporting field-based GIS activities has been a missing piece of the open geospatial puzzle, and as strong advocates of open source software, we’re now sharing the Mergin Software Stack with the community. Therefore, on the 14th of June we released Mergin CE on GitHub. The CE stands for Community Edition.

The release means a fully open solution for field data collection and synchronisation is now possible using QGIS, Input app and Mergin CE.

Mergin CE is released under the AGPL licence and we are open to contributions from others. Looking forward to seeing what the open source community has to offer!

Mergin CE gives you the freedom to deploy, host and manage your own Mergin server on your own infrastructure, giving you complete control over your data. Mergin CE comes without commercial support.

Our main efforts are still very much focussed on the continuous improvement of Mergin Cloud, making it an awesome fully managed service for our customers.

We also now provide Mergin EE (Enterprise Edition) for those who want an on-premises deployment but with extra features like Active Directory integration, commercial support, and/or prefer a licence other than AGPL.

Changes to Mergin Cloud

Releasing Mergin CE got us taking another look at Mergin Cloud’s Community (free of charge) tier. That’s why from today we’re updating Mergin Cloud’s Terms of Service so its free tier can no longer be used to store projects for commercial use. Mergin accounts storing projects for commercial use should now purchase a paid subscription after their initial 14 day evaluation period.

A common surveying setup is a single paid account (providing extra storage for projects) and a handful of free tier accounts used by surveyors for collecting field data. This is still possible because the new Terms only require that the Mergin account hosting/storing/owning the commercial project has a paid subscription.

For example, consider two users: Fred (who uses a free account) and Penny (who uses a paid account). Penny is permitted to use the project Penny/Survey for commercial purposes as it resides on her paid account. Fred (who uses a free account) is permitted to collaborate on the commercial project Penny/Survey as it resides on Penny’s paid account. However, Fred may not use the project Fred/Survey2 for commercial use as it resides on his free account.

In light of the release of Mergin CE, we feel it is fair to encourage those using Mergin Cloud's free tier for commercial gain to support us with one of our affordable subscriptions. The subscriptions page also answers a number of frequently asked questions relating to this change.

You may also like...

Input, a field data collection app based on QGIS. Input makes field work easy with its simple interface and cloud-based sync. Available on Android and iOS (on Google Play and the Apple App Store).

June 16, 2021 05:00 AM

June 15, 2021

June 12, 2021

This blog post should give some insights on what happens behind the scenes in preparation of an online conference, and I also hope that some of the scripts I created might be useful for others as well. We were using pretalx for the submissions and Seafile for video uploads. Both systems are accessed over their HTTP API.

This year's FOSSGIS 2021 conference was a pure online conference, though it had the same format as every year: three days of conference, with four tracks in parallel, leading to about 100 talks. I joined the organizing team about 10 weeks before the conference took place. The task sounded easy: the speakers should be able to upload their talks prior to the conference, so that less could go wrong during the conference.

All scripts are available at https://github.com/vmx/conference-tools licensed under the MIT License.

The software

The speakers submitted their talks through pretalx, a conference management system I highly recommend. It is open source and has an active community. I've worked on and with it over the past few years to make it suitable for OSGeo conferences. The latest addition is the public community voting plugin, which has been used for FOSS4G 2021 as well as this conference. pretalx has a great HTTP API to get data out of the system. It doesn't yet have much support for manipulating the data, but pull requests are welcome.

For storing the video files, Seafile was used. I hadn't had any prior experience with it. It took me a bit to figure out that the Python API is for local access only and that the public API is a pure HTTP API. You can clearly see that their API is tailored to their own web interface and not really designed for third-party usage. Nonetheless, everything that can be done through the web UI can also be done via the HTTP API.

My scripts are heavily based on command line tools like b2sum, curl, cut, jq and jo, hence a lot of shell is used. For more complex data manipulation, like merging data, I use Python.

The task

The basic task is providing pre-recorded videos, uploaded by the speakers themselves, for a conference. The actual finer-grained steps are:

  • Sending the speakers upload links
  • Looking through the videos to make sure they are good
  • Re-organizing the files so they can be played back according to the schedule
  • Making the final files easily downloadable
  • Creating a schedule which lists the live/pre-recorded talks

In Seafile you can create directories and make them publicly available so that people can upload files. Once a file is uploaded, you won't see what else is in that directory. To be able to easily match the uploaded videos back to the corresponding talk, it was important to create one dedicated directory per talk, as you won't know which filenames people will use for their videos.

The speakers will receive an email containing dedicated upload links for each of their talks. See the email_upload_links directory for all the scripts that are needed for this step.

pretalx

First you need to get all the talks. In pretalx that's easy: go to your conference's submissions endpoint, e.g. https://pretalx.com/api/events/democon/submissions/. We only care about the accepted talks, which can be selected with a filter. If you access it through curl, you'll get a JSON response like this one: https://pretalx.com/api/events/democon/submissions/?format=json. pretalx returns 25 results per request, so I've created a script called pretalx-get-all.py that automatically pages through all the results and concatenates them.
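
The core of that script is just following the pagination links. Here is a minimal sketch of the idea in Python (the token handling is an assumption for illustration; the real pretalx-get-all.py in the repository linked above is the authoritative version):

import json
import urllib.request

def get_all(url, token=None):
    """Follow the 'next' links of a paginated pretalx endpoint and
    concatenate all 'results' into a single list."""
    results = []
    while url:
        request = urllib.request.Request(url)
        if token:
            # most talk data is public; a token is needed for private fields
            request.add_header('Authorization', f'Token {token}')
        with urllib.request.urlopen(request) as response:
            page = json.load(response)
        results.extend(page['results'])
        url = page.get('next')  # None once the last page is reached
    return results

talks = get_all('https://pretalx.com/api/events/democon/submissions/?format=json')
print(json.dumps(talks))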

A talk might be associated with multiple speakers, and each speaker should get an email with an upload link. There were also submissions that are not really talks in the traditional sense, whose submitters shouldn't get an email. The jq query looks like this:

[.results[] | select((.submission_type[] | contains("Workshop")) or (.submission_type[] == "Anwendertreffen / BoF") | not) | { code: .code, speaker: .speakers[].code, title: .title, submission_type: .submission_type[]}]

The submissions contain only the speaker IDs and names, but no other details such as their email addresses. So we query the speakers API (e.g. https://pretalx.com/api/events/democon/speakers/) and post-process the data again with jq, as we care about their email addresses.

You can find all the requests and filters in the email_upload_links/upload_talks_to_seafile.sh script.

Seafile

Creating an upload link is a two-step process in Seafile: first create the directory, then create a publicly accessible upload link for that directory. The directories are named after the pretalx ID of the talk (full script for creating directories).
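
For illustration, the two calls could look roughly like this in Python (the endpoints follow the Seafile web API as I understand it, and the server, token and repository ID are placeholders; the linked shell script is the authoritative version):

import urllib.parse
import urllib.request

SERVER = 'https://seafile.example.org'
TOKEN = '<seafile-token>'
REPO = '<repo-id>'

def api(path, data=None):
    # POST if form data is given, GET otherwise
    body = urllib.parse.urlencode(data).encode() if data else None
    request = urllib.request.Request(SERVER + path, data=body)
    request.add_header('Authorization', f'Token {TOKEN}')
    with urllib.request.urlopen(request) as response:
        return response.read().decode()

# step 1: create one directory per talk, named after its pretalx ID
api(f'/api2/repos/{REPO}/dir/?p=/talks_conference/ABC123', {'operation': 'mkdir'})

# step 2: create a publicly accessible upload link for that directory
print(api('/api/v2.1/upload-links/', {'repo_id': REPO, 'path': '/talks_conference/ABC123'}))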

Creating emails

After acquiring the data, the next step is to process it and create the individual emails. Combining the data is done with the combine_talks_speakers_upload_links.py script, whose output is again post-processed with jq. The data_to_email.py script takes that data and a template file to create the actual emails as files. The template file is used as a Python format string, where the variables are filled with the data provided.
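
The mechanism is essentially this (a sketch; the file names and data fields are illustrative, not the actual ones data_to_email.py uses):

import json

with open('email-template.txt') as f:
    template = f.read()

with open('talks-with-upload-links.json') as f:
    talks = json.load(f)

for talk in talks:
    # placeholders like {title} and {upload_link} in the template are
    # replaced by the corresponding fields of the talk record
    email = template.format(**talk)
    with open(f"email-{talk['code']}.txt", 'w') as out:
        out.write(email)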

Those email files are then posted to pretalx, so that we can send them through its email system. That step is more complicated, as there is currently no API in pretalx to do that. I logged in through the web interface and manually added a new email while having the developer tools open. I then copied the POST request "as cURL" to have a look at the data it sent, and manually extracted the session and cookie information in order to add emails from the command line. The script that takes the pre-generated emails and puts them into pretalx is called email_to_pretalx.sh.

Reviewing the uploaded videos

Once a video is uploaded, it gets reviewed. The idea was that the speakers shouldn't need to care too much about the start and the end of the video, e.g. when they start the recording and there are a few seconds of silence while switching to the presentation. The reviewer cuts the beginning and end of the video and also converts it to a common format.

We wanted to preserve the original video quality, hence we used LosslessCut and then converted the videos to the Matroska format. The reviewers would also check that a video isn't longer than the planned slot.

See the copy_uploads directory for all the scripts that are needed for this step.

pretalx

The reviewers get a file with things to check for each video file. We again get the needed metadata from pretalx and post-process it with jq. As above for the emails, there is again a template file, which this time generates Markdown files with the information for the reviewers. The full script is called create_info_files.sh.

Seafile

Once videos are uploaded they should be made available to the reviewers. The uploaded files are the primary source, hence it makes sense to always work on copies of the talks, so that the original uploads are not lost. The sync_files_and_upload_info.sh script copies the talks into a new directory (together with the information files), which is writeable by the reviewers. They download a file, review it, cut it if needed, convert it to Matroska and upload it again. Once it is uploaded, they move the directory into one called fertig ("done" in German) as an indicator that no one else needs to review it.

I ran the script daily as a cron job; it only copies the new uploads. Please note that it only checks existence at the directory level. This means that if a talk was already reviewed and a speaker uploads a new version, the new version won't be copied. That case didn't happen often, and speakers actually let us know about it, so it's mostly a non-issue (also see the miscellaneous scripts section for more).

The last step is that someone looks through the filled-out Markdown files to check that everything was alright, making sure that e.g. the audio volume is fixed, or asking the speaker for a new upload. The checked videos are then moved to yet another directory, which finally contains all the talks that are ready to be streamed.

Re-org files for schedule

So far, the video files were organized in directories named after the pretalx ID of the talk. For running the conference we used OBS for streaming. The operator needs to play the right video at the right time, so it makes sense to sort the files by the schedule. The cut_to_schedule.sh script does that re-organization; it can be found in the cut_to_schedule directory.

pretalx

To prevent accidental inconsistencies, the root directory is named after the current version of the pretalx schedule. So if you publish a new version of the schedule and run the script again, you'll get a new directory structure. The video files still have arbitrary names, chosen by the uploader/reviewer; we want a common naming scheme instead. The get_filepath.py script creates such a name, one that sorts chronologically and contains all the information the OBS operators need. The current scheme is <room>/day<day-of-the-conference>/day<day-of-the-conference>_<day-of-the-week>_<date>_<time>_<pretalx-id>_<title>.mkv.
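
To make the scheme concrete, here is a small illustrative sketch of how such a name can be built (the talk fields are made-up assumptions, not the actual ones get_filepath.py reads):

from datetime import datetime

def file_path(talk):
    start = datetime.fromisoformat(talk['start'])    # e.g. '2021-06-07T10:30:00+02:00'
    day = talk['day']                                # day of the conference, starting at 1
    title = talk['title'].lower().replace(' ', '-')  # crude slug of the talk title
    return (f"{talk['room']}/day{day}/"
            f"day{day}_{start:%a_%Y-%m-%d_%H%M}_{talk['code']}_{title}.mkv")

# prints: buehne1/day1/day1_Mon_2021-06-07_1030_ABC123_my-talk.mkv
print(file_path({'room': 'buehne1', 'day': 1, 'code': 'ABC123',
                 'start': '2021-06-07T10:30:00+02:00', 'title': 'My Talk'}))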

Seafile

The directories do not only contain the single final video, but also the metadata and perhaps the original video or a presentation. The file we actually copy is the most recently modified *.mkv file, which will be the cut video. The get_files_to_copy.sh script creates a list of the files that should be copied; it only lists the files that weren't copied yet (based on the filename). The copy_files.sh script does the actual copying and is rather generic: it only depends on a file list and Seafile.

Easily downloadable files

Seafile has a feature to download a full directory as a zip file, and I originally planned to use it. It turns out that the size of the files can be too large; I got the error message Unable to download directory "day1": size is too large. So I needed to provide another tool, as I didn't want people to have to click through and download every individual talk.

Access to the files should be as easy as possible, i.e. the operators that need the files shouldn't need a Seafile account. As the videos also shouldn't be public, the compromise was a download link secured with a password. This means that an authentication step is needed, which isn't trivial. The download_files.sh script does the login and then downloads all the files in that directory. For simplicity it doesn't work recursively, which means the script needs to be run once for each day.

I also added a checksum check for more robustness. I created those checksums manually by running b2sum * > B2SUMS in each of the directories and then uploaded them to Seafile (after downloading, they can be verified with b2sum -c B2SUMS).

List of live/pre-recorded talks

Some talks are pre-recorded and some are live. The list_recorded_talks.py script creates a Markdown file that contains a schedule with that information, including the lengths of the talks if they are pre-recorded. This is useful for the moderators to know how much time there will be for questions. At FOSSGIS we have 5 minutes for questions, but if the talk runs longer, there will be less time.

You need the schedule and the lengths of the recorded talks. This time I haven't fully automated the process; it's a bit more manual than the other steps. All scripts can be found in the list_recorded_talks directory.

Get the schedule:

curl https://pretalx.com/<your-conference>/schedule.json > schedule.json

To get the lengths of the videos, download them all with the download script from the Easily downloadable files section above. Then run the get_lengths.sh script in each of the directories and pipe the output into a file. For example:

cd your-talks-day1
/path/to/get_lengths.sh > ../lengths/day1.txt

Then combine the lengths of all days into a single file:

cat ../lengths/*.txt > ../talk_lengths.txt

Now you can create the final schedule:

cd ..
python3 /path/to/list_recorded_talks.py schedule.json talk_lengths.txt

Here’s a sample schedule from the FOSSGIS 2021.

Miscellaneous Scripts

Speaker notification

The speakers didn't get feedback on whether their video was correctly uploaded/processed (other than seeing a successful upload in Seafile). A short time before the conference, we sent out the latest information that speakers needed to know. We decided to take the chance to also include whether their video upload was successful or not, so that they could contact us in case something with the upload didn't go as they expected (there weren't any issues :).

It is very similar to sending out the emails with the upload links. You get the information about the speakers and talks in the same way. The only difference is that we now also need the information on whether a talk was pre-recorded or not. We get that from Seafile:

curl --silent -X GET --header 'Authorization: Token <seafile-token>' 'https://seafile.example.org/api2/repos/<repo-id>/?p=/<dir-with-talks>&t=d'|jq --raw-output '.[].name' > prerecorded_talks.txt

The full script to create the emails can be found at email_speaker_final.sh. To post them to pretalx, you can use the email_to_pretalx.sh script and follow the description in the creating emails section.

Number of uploads

It could happen that people upload a new version of a talk, and the current scripts won't recognize that if a previous version was already reviewed. Hence, I manually checked for directories with more than one file in them. This can easily be done with a single curl command against the Seafile HTTP API:

curl --silent -X GET --header 'Authorization: Token <seafile-token>' 'https://seafile.example.org/api2/repos/<repo-id>/dir/?p=/<dir-with-talks>&t=f&recursive=1'|jq --raw-output '.[].parent_dir'|sort|uniq -c|sort

The output is sorted by the number of files in that directory:

  1 /talks_conference/ZVAZQQ
  1 /talks_conference/DXCNKG
  2 /talks_conference/H7TWNG
  2 /talks_conference/M1PR79
  2 /talks_conference/QW9KTH
  3 /talks_conference/VMM8MX

Normalize volume level

If the volume of the talk was too low, it was normalized. I used ffmpeg-normalize for it:

ffmpeg_normalize --audio-codec aac --progress talk.mkv

Conclusion

Doing all of this with scripts was a good idea; the less manual work, the better. It also enabled me to process talks even during the conference in a semi-automated way. I created lots of small scripts and sometimes used just a subset of them, e.g. the copy_files.sh script, or quickly modified them to deal with a special case. For example, all lightning talks of a single slot (2-4 of them) were merged into one video file; that file of course isn't associated with a single pretalx ID any more.

During the conference, the volume levels of the pre-recorded talks differed a lot. I think next time I'd like to do some automated audio level normalization after people have uploaded their files. It should be done before the reviewers have a look, so that they can report in case the normalization broke the audio.

The speakers were confused about whether the upload had really worked. Seafile doesn't have an "upload now" button or similar; it does its JavaScript magic once you've selected a file. That's convenient, but it also confused me when I used it for the first time. And if you reload the page, you won't see that something was already uploaded. So perhaps it could also be automated that speakers get a "we received your upload" email or similar.

Overall I'm really happy with how the whole process went; there were no major failures like lost videos. I also haven't heard any complaints from the people who needed to use the videos at any stage of the pipeline. I'd also like to thank all the speakers who uploaded a pre-recorded video; it really helped a lot in running the FOSSGIS conference as smoothly as it ran.

by Volker Mische at June 12, 2021 02:35 PM

June 10, 2021

June 09, 2021

We are excited to announce that the geodiff library has finally reached version 1.0. We started to develop geodiff back in 2019 as part of our efforts to allow synchronisation of changes between the Input mobile app and the Mergin platform.


At the core, geodiff library provides functionality to:

  • compare a pair of GeoPackage databases and create “diff” files containing changes between them
  • apply a “diff” file to a GeoPackage database
  • rebase changes in a “diff” file
  • invert and concatenate diffs, and other utility functions

Thanks to the above low-level operations, any changes to data stored in spatial or non-spatial tables in GeoPackages can be easily transferred to others and applied. And thanks to the "rebase" functionality, inspired by source code management systems like git, we can automatically merge changes from multiple users capturing data offline in Input/Mergin (see our recent blog post that covers rebasing for more).

The library is written in C++, providing a stable C API and offering Python bindings as well (look for the pygeodiff package on pip). It also comes with a command line interface tool, geodiff, covering all major features. The whole package has a very permissive MIT licence.
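
As a quick taste of the Python bindings, the basic compare-and-apply cycle looks roughly like this (a minimal sketch based on our understanding of the pygeodiff API; check the repository documentation for the authoritative usage):

import pygeodiff

geodiff = pygeodiff.GeoDiff()

# compare two GeoPackages and record the changes between them in a diff file
geodiff.create_changeset('base.gpkg', 'modified.gpkg', 'changes.diff')

# apply the recorded changes to another copy of the base database
geodiff.apply_changeset('copy-of-base.gpkg', 'changes.diff')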

Support for drivers

Initially, the geodiff library only worked with SQLite / GeoPackage files. This has changed with version 1.0: geodiff now supports drivers, allowing the use of different database backends to compare and apply diffs. In the 1.0 release we have added a PostGIS driver in addition to the SQLite/GeoPackage driver.

This means that users can compare tables or apply diffs in PostGIS databases using the same APIs as with GeoPackages. And not only that: diff files are compatible across different drivers. That means it is possible to take a diff file from a GeoPackage and apply it to a PostGIS database!

Using the PostGIS driver we were able to create the mergin-db-sync tool as a companion to the Mergin platform. With DB sync, one can keep a local PostGIS database always in sync with a project in Mergin, with automatic transfer of changes from Mergin to PostGIS and the other way round as well, from PostGIS back to Mergin.

Try it

The library is hosted on GitHub in lutraconsulting/geodiff repository. We would love to hear your feedback!

Stay tuned for more!

As announced earlier, next week we will be open sourcing Mergin, our platform for easy sharing of spatial data in teams (whether they are in the office or in the field). If you have not heard about the Mergin platform yet, please have a look at the Mergin website, try the Mergin plugin for QGIS and the Input app, a mobile app based on QGIS for iPhone/iPad and Android devices. Since the initial release in early 2019, Mergin and Input have been used by thousands of users around the world.

At Lutra Consulting, we are dedicated to improving free and open source software for geospatial. We will be releasing Mergin as open source to solve another missing piece of the puzzle, providing an open source end-to-end solution for mobile data capture for QGIS users. Watch our blog and Twitter for further updates!


June 09, 2021 05:00 AM

June 08, 2021

We invite you to the fourth online seminar of the CYTED IDEais network, which will take place on Thursday, June 24th. In it we will talk about the gvSIG Online solution, a free software platform for implementing Spatial Data Infrastructures, and about the reasons for its success, breaking down the main barriers that prevent organisations from having software solutions to efficiently manage their geographic information.

You can register at https://us02web.zoom.us/webinar/register/WN_zfFZTO03TGC8D4Z-OuXzOQ

by Alvaro at June 08, 2021 08:35 AM

June 07, 2021

June 04, 2021

June 03, 2021

The GRASS GIS community wants to honor Fred Limp on his retirement

The GRASS GIS Project wants to honor Dr. Fred Limp, Jr., on the eve of his retirement, in June 2021, from the Department of Geosciences at the University of Arkansas (UA). Fred was an early adopter and active promoter of GRASS GIS and open source geospatial technologies. He wrote a regular column called “Growing GRASS with Fred Limp” for the GRASS newsletter, GRASSCLIPPINGS (see pages 15-16).

June 03, 2021 12:00 AM

May 31, 2021

Twenty years ago today, the first email was sent to the PostGIS users mailing list (at the time hosted on yahoogroups.com), announcing the first numbered release of PostGIS.

The early history of PostGIS was closely tied to a consulting company Paul Ramsey had founded a few years earlier, Refractions Research (2001). Its first contracts ended up being with the government, which, for its own reasons, did not want to work with ESRI software.

Paul Ramsey wrote a post detailing how that process led to the historic version 0.1. If you are interested in reading the full story, just click here.

by Fernando Quadro at May 31, 2021 06:30 PM

Twenty years ago today, the first email on the postgis users mailing list (at that time hosted on yahoogroups.com) was sent, announcing the first numbered release of PostGIS.

Refractions

The early history of PostGIS was tightly bound to a consulting company I had started a few years prior, Refractions Research. My first contracts ended up being with British Columbia (BC) provincial government managers who, for their own idiosyncratic reasons, did not want to work with ESRI software, and as a result our company accrued skills and experience beyond what most “GIS companies” in the business had.

We got good at databases, and the FME. We got good at Perl, and eventually Java. We were the local experts in a locally developed (and now defunct) data analysis tool called Facet, which was the meat of our business for the first four years or so.

Facet

That Facet tool was a key part of a “watershed analysis atlas” the BC government commissioned from Facet in the late 1990’s. We worked as sub-contractors, building the analytical routines that would suck in dozens of environmental layers, chop them up by watershed, and spit out neat tables and maps, one for each watershed. Given the computational power of the era, we had to use multiple Sun workstations to run the final analysis province-wide, and to manage the job queue, and keep track of intermediate results, we placed them all into tables in PostgreSQL.

Putting the chopped up pieces of spatial data as blobs into PostgreSQL was what inspired PostGIS. It seemed really obvious that we had the makings of an interactive real-time analysis engine, with all this processed data in the database, if we could just do more with the blobs than only stuff them in and pull them out.

Maybe We Should do Spatial Databases?

Reading about spatial databases circa 2000 you would find that:

This led to two initiatives on our part, one of which succeeded and the other of which did not.

First, I started exploring whether there was an opportunity in the BC government for a consulting company that had skill with Oracle’s spatial features. BC was actually standardized on Oracle as the official database for all things governmental. But despite working with the local sales rep and looking for places where spatial might be of interest, we came up dry.

Oracle

The existing big Oracle ministries (Finance, Justice) didn’t do spatial, and the heavily spatial natural resource ministries (Forests, Environment) were still deeply embedded in a “GIS is special” head space, and didn’t see any use for a “spatial database”. This was all probably a good thing, as it turned out.

Our second spatial database initiative was to explore whether any of the spatial models described in the OpenGIS Simple Features for SQL specification were actually practical. In addition to describing the spatial types and functions, the specification described three ways to store the spatial part of a table.

OpenGIS

  • In a set of side tables (scheme 1a), where each feature was broken down into x’s and y’s stored in rows and columns in a table of numbers.
  • In a “binary large object” (BLOB) (scheme 1b).
  • In a “geometry type” (scheme 2).

Since the watershed work had given us experience with PostgreSQL, we carried out the testing with that database, examining whether we could store spatial data in the database and pull it out efficiently enough to build a database-backed spatial viewer.

JShape

For the viewer part of the equation, we ran all the experiments using a Java applet called JShape. I was quite fond of JShape and had built a few little map viewer web pages for clients using it, so hooking it up to a dynamic data source rather than files was a rather exciting prospect.

All the development was done on the trusty Sun Ultra 10 I had taken out a $10,000 loan to purchase when starting up the company. (At the time, we were still making a big chunk of our revenue from programming against the Facet software, which only ran on Sun hardware.)

Ultra10

  • The first experiment, shredding the data into side tables, and then re-constituting it for display was very disappointing. It was just too slow to be usable.
  • The second experiment, using the PostgreSQL BLOB interface to store the objects, was much faster, but still a little disappointing. And there was no obvious way to add an index to the data.

Breakthrough

At this point we almost stopped: we’d tried all the stuff explained in the user-level documentation for PostgreSQL. But our most sophisticated developer, Dave Blasby, who had actually studied computer science (most of us had mathematics and physics degrees), and was unafraid of low-level languages, looked through the PostgreSQL code and contrib section and said he could probably do a custom type, given some time.

So he took several days and gave it a try. He succeeded!

When Dave had a working prototype, we hooked it up to our little applet and the thing sang. It was wonderfully quick, even when we loaded up quite large tables, zooming around the spatial data and drawing our maps. This is something we'd only seen on fancy XWindows displays on UNIX workstations, and now we were doing it in an applet on an ordinary PC. It was quite amazing.

We had gotten a lot of very good use out of the PostgreSQL database, but there was no commercial ecosystem for PostgreSQL extensions, so it seemed like the best business use of PostGIS was to put it “out there” as open source and see if it generated some in-bound customer traffic.

At the time, Refractions had perhaps 6 staff (it’s hard to remember precisely) and many of them contributed, both to the initial release and over time.

  • Dave Blasby continued polishing the code, adding some extra functions that seemed to make sense.
  • Jeff Lounsbury, the only other staffer who could write C, took up the task of a utility to convert Shape files into SQL, to make loading spatial data easier.
  • I took on the work of setting up a Makefile for the code, moving it into a CVS repository, writing the documentation, and getting things ready for open sourcing.
  • Graeme Leeming and Phil Kayal, my business partners, put up with this apparently non-commercial distraction. Chris Hodgson, an extremely clever developer, must have been busy elsewhere or perhaps had not joined us just yet, but he shows up in later commit logs.

Release

Finally, on May 31, Dave sent out the initial release announcement. It was PostGIS 0.1, and you can still download it if you like. This first release had a "geometry" type, a spatial index using the PostgreSQL GiST API, and these functions:

  • npoints(GEOMETRY)
  • nrings(GEOMETRY)
  • mem_size(GEOMETRY)
  • numb_sub_objs(GEOMETRY)
  • summary(GEOMETRY)
  • length3d(GEOMETRY)
  • length2d(GEOMETRY)
  • area2d(GEOMETRY)
  • perimeter3d(GEOMETRY)
  • perimeter2d(GEOMETRY)
  • truly_inside(GEOMETRY, GEOMETRY)

The only analytical function, “truly_inside()” just tested if a point was inside a polygon. (For a history of how PostGIS got many of the other analytical functions it now has, see History of JTS and GEOS on Martin Davis’ blog.)

Reading through those early mailing list posts from 2001, it’s amazing how fast PostGIS integrated into the wider open source geospatial ecosystem. There are posts from Frank Warmerdam of GDAL and Daniel Morissette of MapServer within the first month of release. Developers from the Java GeoTools/GeoServer ecosystem show up early on as well.

There was a huge demand for an open source spatial database, and we just happened to show up at the right time.

Where are they Now?

  • Graeme, Phil, Jeff and Chris are still doing geospatial consulting at Refractions Research.
  • Dave maintained and improved PostGIS for the first couple years. He left Refractions for other work, but still works in open source geospatial from time to time, mostly in the world of GeoServer and other Java projects.
  • I found participating in the growth of PostGIS very exciting, and much of my consulting work… less exciting. In 2008, I left Refractions and learned enough C to join the PostGIS development community as a contributor, which I've been doing ever since, currently as an Executive Geospatial Engineer at Crunchy Data.

May 31, 2021 08:00 AM

One of the areas of QGIS that has evolved most in recent years is symbology, and one way to develop new symbols is through legend patches. In this post we will show how to create them to customise our legends. Check it out!

We already know that QGIS is always being updated and improved, and one of the newer features is the customisation of legend patches. This functionality is available from QGIS 3.14 onwards.

The purpose of any legend is to explain to the reader what the different symbols on the map represent. Legend patches allow more intuitive legend shapes and custom formats, as well as including more points, lines and polygons to represent more precisely what is seen on the map.

For this activity you will need QGIS 3.14 or later; in other words, if the QGIS version installed on your computer is older than 3.14, this feature will not be available.

How to create patches

The legend patches can be managed from the QGIS Style Manager. After opening QGIS, you will find the "Style Manager" option in the main menu, as shown in the figure below.

Style Manager.

When you open the manager, note that initially you will not have any legend patches, since QGIS does not install a default set.

Opening the Style Manager.

This is why you may want to import legend patches.

Importing existing patches

One way to import legend patches is from patches available on GitHub (or other sites). Remember that these are patches created by other members of the QGIS community and made freely available to everyone.

Note: we will use an online GitHub repository created by Kartoza, available through this URL.

Opening the URL, you will find the information shown in the figure below.

GitHub URL.

For this post we will use the patch file "klas-karlsson-patcher.xml", as shown in the figure below.

Choosing the patch.

Clicking on that option opens the following window, containing the code shown below.

Copying the code.

To copy the code into QGIS, just click the "RAW" button and copy the URL (web address).

Raw button.

After copying the URL, back in QGIS in the Style Manager, click the "Import/Export" button in the lower corner.

Choose the Import/Export option.

When it opens (as in the figure below), choose the import option.

The import window will then open (figure below).

Import window.

Now, in the "Import from" option, choose import by URL and then paste the copied URL.

Entering the URL.

Then click "Fetch Items" and the patches from the given URL will be loaded into your QGIS. Select the options "Add to favorites" and "Do not import embedded tags", and where it says "Imported", enter the name of the file, "klas karlsson".

URL entered.

Next, click "Select All" and then "Import".

When you select "Import", a box will appear asking about importing the legend patch shapes; click "Yes to all".

Click Yes to All.

You will now see more than 30 new legend patches to choose from. This includes legend patch shapes for polygon, line and point layers.

Applying legend patches to items in a legend

Here, you will learn how to apply a legend patch to a legend item in the print layout.

To do this, close the Style Manager and go back to the print layout.

Example map for inserting the patch.

Now, select the map legend; this enables the "Legend Items" option, located in the Item Properties tab.

Just double-click one of the added layers/legends and click the button next to the format, then click configure patch. This option shows the added patches; simply select the one you want to put in the legend.

Applying the patch.

An example of the modified legend is shown below.

Patch inserted.

Now, click the blue button at the top of the tab to go back, and you will have a custom legend patch applied.

Creating a custom legend patch from a feature

Now we will go a little further.

In this section, I will describe how to create a legend patch from an existing shapefile.

It is worth stressing that you can also use these steps to create a legend patch from a digitised feature, created solely for the purpose of making a custom legend patch shape.

If you want to know more about the format used to create patches, we suggest reading Iporã Possanti's post on WKT (Well Known Text).

Now, hands on!

Go back to the main QGIS map window and add a shapefile whose shapes you would like represented in the legend. Click the "Toggle Editing" button to put this layer into editing mode, then select the feature you want to use (in case the shapefile has more than one) and use the "Copy Features" tool to copy it.

Now you can leave editing mode and open the Style Manager.

Next, switch to the "Legend Patch Shapes" tab, click the green plus button to add an item, and choose the appropriate shape type, that is, a Marker (point), Line or Fill (polygon) legend patch.

Note that there will be a basic geometry expression in the text box; delete this code and press Ctrl + V to paste your feature's geometry as WKT. The geometry of your feature is pasted into the Shape box. Be careful when using shapefiles with many vertices; this can slow your computer down a bit.

Now you need to edit this. All the attributes of your layer are included, but you only need the geometry.

Next, scroll to the top and delete everything before the MultiPolygon line, or whatever other geometry function applies (MultiLine, MultiPoint, CompoundCurve, etc.), depending on the geometry of your legend patch.

Then go to the end and delete anything after the closing parenthesis.

Editing the geometry expression from a feature. Source: Septima, 2021.
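
To illustrate (with made-up values), the pasted text might look like the first line below, and after cleaning it up only the WKT geometry should remain:

Pasted:  123  Almería  MultiPolygon (((0 0, 10 0, 10 8, 0 8, 0 0)))
Cleaned: MultiPolygon (((0 0, 10 0, 10 8, 0 8, 0 0)))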

Using these steps, you can build up your library of legend patch shapes to meet your cartographic needs!

You can also use this workflow with an empty map canvas and a temporary scratch layer to digitise a custom shape for a new legend patch.

So, did you like this post?

If so, be sure to leave a comment below this post.

References:

GITHUB. QGIS Legend Patches. 2021. Available at: <https://github.com/kartoza/QGIS-Legend-Patches>.

MAPAS ABERTOS. Codificado camadas vetoriais com WKT (Well Known Text). 2021. Available at: <https://mapasabertos.com/2021/02/17/codificado-camadas-vetoriais-com-wkt-well-known-text/>.

QGIS. QGIS Brasil. 2021. Available at: <https://www.qgis.org/en/site/>.

SEPTIMA. QGIS Legend Patches. 2021. Available at: <https://septima.dk/nyheder/QGIS-legend-patches>.

by Émilin CS at May 31, 2021 06:00 AM

May 27, 2021

Hard to believe that the JTS Topology Suite is almost 20 years old.  That's 140 in dog years!  Despite what they say about old dogs, one of the benefits of longevity is that you have the opportunity to learn a trick or two along the way.  One of the key lessons learned after the initial release of JTS is that intersection (node) detection is a fundamental part of many spatial algorithms, and critical in terms of performance.  This resulted in the development of the noding package to provide an API supporting many different kinds of intersection detection and insertion.

Prior to this, intersection detection was performed as part of the GeometryGraph framework, which combined it with topology graph formation and analysis. At the time this seemed like an elegant way to maximize code reuse across many JTS operations, including overlay, buffering, spatial predicates and validation. But as is often the case, there are significant costs to such general-purpose code:

  • The overall codebase is substantially more complex
  • A performance penalty is imposed on algorithms which don't require topology construction
  • Algorithms are harder to read and understand.  
  • The code is brittle, and so hard to modify
  • Porting the code is more difficult  
Because of this, a focus of JTS development is to free operations from their dependency on GeometryGraph - with the ultimate goal of expunging it from the JTS codebase.  A major step along this road was the rewrite of the overlay operations.

Another operation that relies on GeometryGraph is the IsSimpleOp class, which implements the OGC Simple Features isSimple predicate. The algorithm for isSimple essentially involves determining whether the geometry linework contains a self-intersection. GeometryGraph is unnecessarily complex for this particular task, since there is no need to compute the entire topology graph in order to find a single self-intersection. Reworking the code to use the MCIndexNoder class in the noding API produces a much simpler and more performant implementation. I also took the opportunity to move the code to the operation.valid package, since the operations of isSimple and isValid are somewhat complementary.

Now, isSimple is probably the least-used OGC operation.  Its only real use is to test for self-intersections in lines or collections of lines, and that is not a critical issue for many workflows.  However, there is one situation where it is quite useful: testing that linear network datasets are "vector-clean" - i.e. contain LineStrings which touch only at their endpoints.

A linear network containing non-simple intersections (isSimple == false)

To demonstrate the performance improvement, I'll use a dataset for Rivers of the US maintained by the US National Weather Service.  It supplies two datasets: a full dataset of all rivers, and a subset of major rivers only.  You might expect a hydrographic network to be "vector-clean", but in fact both of these datasets contain numerous instances of self-intersections and coincident linework.

Here are the results of running the isSimple predicate on the datasets.  On the larger dataset the new implementation provides a 20x performance boost!

 Dataset                  New time   Old time
 Subset (909,865 pts)     0.25 s     1 s
 Full (5,212,102 pts)     2 s        30 s

Finding Non-Simple Locations

The new codebase made it easy to add a functionality enhancement that computes the locations of all places where lines self-intersect.  This can be used for visual confirmation that the operation is working as expected, and to indicate places where data quality needs to be improved.  Here are the non-simple intersection points found in the river network subset:

Closeups of some non-simple intersection locations:


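In code, the location-finding enhancement can be invoked roughly like this (a sketch; setFindAllLocations and getNonSimpleLocations are the method names introduced with the rewrite, so verify them against your JTS version):

    IsSimpleOp op = new IsSimpleOp(network);
    op.setFindAllLocations(true);   // report every self-intersection, not just the first
    if (! op.isSimple()) {
      for (Coordinate pt : op.getNonSimpleLocations()) {
        System.out.println("non-simple intersection at " + pt);
      }
    }
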
IsSimpleOp is the easiest algorithm to convert over from using GeometryGraph.  As such it serves as a good proof-of-viability, and establishes useful code patterns for further conversions. 

Next up is to give IsValidOp the same treatment.  This should provide similar benefits of simplicity and performance.  And as always, porting the improved code to GEOS.



by Dr JTS (noreply@blogger.com) at May 27, 2021 12:48 AM

May 26, 2021

May 25, 2021

The videos of the two presentations given by the gvSIG Association at this year's Jornadas de SIG Libre are now available. They show the potential of gvSIG Online as a platform for managing a town council's geographic information and, on the other hand, preview the new developments that will provide advanced editing tools, including mechanisms to optimize multi-user editing.

gvSIG Online, Spatial Data Infrastructures for municipal management

Abstract: gvSIG Online is the gvSIG Suite solution for deploying Spatial Data Infrastructures, based on open-source components such as GeoServer, PostGIS and OpenLayers, among others. With a set of tools that greatly simplify the administration of geographic information and the generation of geoportals, its adoption grows day by day across all kinds of institutions, among them local administrations.

The talk presents the main gvSIG Online tools applied to municipal management, showing several successful deployments in city councils and municipal-management bodies across different regions. It also shows the integration with record-management systems and with apps developed with gvSIG MApps, the mobile app development framework that integrates with the rest of the gvSIG Suite.

Video: http://diobma.udg.edu/handle/10256.1/6225

Maintenance and advanced management of cartography with gvSIG Desktop

Abstract: The latest gvSIG Desktop developments have strengthened several aspects of the application, among them advanced editing, by incorporating tools that bring it ever closer to what a CAD offers for maintaining advanced cartography. However, organizations with cartographic editing responsibilities faced an unsolved problem: having version control applied to the maintenance of geographic information, allowing simultaneous editing, locking, checks and validations when consolidating information, and control of the change history. This complex development, which takes gvSIG Desktop to a new level of professional editing, is what this talk presents.

Video: http://diobma.udg.edu/handle/10256.1/6209

by Alvaro at May 25, 2021 12:12 PM

Yes, you read it right (and it is not the 1st of April). We are taking Geopaparazzi into unsupported land and declaring SMASH its successor. We (HydroloGIS) started SMASH with Flutter as a test project a couple of years ago to support the iOS world, but it turned out to be the best way to go for a small company like ours. And, after a while spent fooling ourselves, we had to admit that it doesn't really make sense to support two mobile projects. Much better to channel all energies into a single one.

That said, what does this mean for the Geopaparazzi project and users relying on it:

  • HydroloGIS will from now on perform fixes and developments only if funded. This means that we will not continue to add new features and will not make bugfixes on a voluntary basis anymore.
  • Geopaparazzi users should move to SMASH. More about this later in the post. 
  • Geopaparazzi is an open-source project: you can contribute to it, or pay someone (not necessarily us) to do bugfixes. We will consider pull requests with well-documented fixes, so if you are using Geopaparazzi for a project that is critical to you, get in touch with us or any other provider to have your issues solved.

What's next. Well, that one is simple. SMASH is next. We started using it for testing purposes and never looked back. SMASH is simpler to use, simpler to develop, simpler to everything. So... overcome your mental friction and try it out.

Most important: SMASH is compatible with geopaparazzi projects and vice versa, so nothing will change in your data evaluation process.

Ok, but what will I miss?

Well, mostly 3 things:

  1. the smooth 3D view. That was something later Geopaparazzi versions offered. If your main need is 3D (or rather 2.5D), then SMASH is not an option for you. But to be honest, from the first moment we added 3D to Geopaparazzi, we noticed it is something one doesn't really need during surveys. So if I could go back, I would probably refrain from adding it anyway.
  2. Spatialite. That one has been a struggle to keep updated and working on older versions. It is a powerful engine of which we exploited just 0.01% on mobile. That, plus the fact that GeoPackage is now used more or less everywhere, supported by the major desktop GIS applications and simple to implement on any platform, made us completely abandon Spatialite for our mobile applications.
  3. translations. Those take time. And your involvement.

 

Hmmm, and what will I gain?

First and foremost: a slick and responsive user interface. And that is one of the most important things out in the field: buttons in the right place and configurable in size, with indicators of state and survey status.

But then there are several features that SMASH supports and geopaparazzi would never have (for a full comparison have a look here):

  • iOS support. But soon we will also have the first experimental desktop versions. macOS and Linux are already working and Windows is on the way.
  • way better support for GPS logs. Kalman filter, log profile view with stats markers, diagnostic tool and log statistics in the logs list.
  • onscreen logging information.
  • GPX files are imported as layers and can be styled.
  • complete GeoPackage support. Visualization of tiles and vector data, editing of vector data both alphanumeric and geometric.
  • experimental PostGIS support. Online editing of vector data both alphanumeric and geometric.
  • support for GeoTIFFs and images with world files
  • shapefile visualization support (but please do yourself a favor and use GeoPackage)
  • SLD styling. SMASH supports styling of vector data (shapefile, GPX, GeoPackage, PostGIS) using simple SLD. 
  • export project to geopackage. This can help when exporting the survey to GIS.
  • centralization with the GSS survey server.
  • last but not least: icons on notes and form sections. Who doesn't love icons?!??!?!

Also, SMASH has already attracted external contributions. One example is the redmine support, which is very interesting to enable SMASH for example as a geo-ticketing tool.

Well, that seems quite a lot to me. So if you are still a Geopaparazzi user, think of SMASH as the next-generation Geopaparazzi, because that is where the future is headed. Actually, it is all already here.

A few last comments:

A big thank you to all the people who contributed to Geopaparazzi, be it donations, bugfixes, features, custom projects or translations. I really hope you can trust us and move forward with us in the fast-growing SMASH project.

by moovida (noreply@blogger.com) at May 25, 2021 07:03 AM

May 24, 2021

In March 2021, I taught a GRASS GIS online workshop as part of the distance learning offer of Gulich Institute (CONAE - UNC) in Argentina. We had a total of 65 students from different countries in South America.

During the workshop, we studied different topics within the GRASS ecosystem, but we mostly covered remote sensing, Object-Based Image Analysis (OBIA) and time-series analysis, making use of GRASS GIS extensions to obtain and process Landsat, Sentinel and MODIS data. All the workshop materials, including presentations, code, and data, are available here (in Spanish).

As a final assignment to pass the course and get their certificate, students were given two options:

  • write a report in Spanish for which they should pick a topic of interest, find relevant data and use GRASS modules to obtain results or,
  • write a tutorial in English on a topic relevant to them, or even something new they wanted to learn, always with GRASS as the main tool/focus.

As an incentive, the best reports and tutorials would be given the chance to be presented live through the Gulich Institute YouTube channel. Tutorials, because of the extra difficulty of the language, would also be highlighted on the GRASS GIS website and social media (see the news here).

The topics chosen by the students were diverse and really interesting: from changes in snow cover in southern Argentina, to productivity of high altitude grasslands, wildfire simulations, segmentation to aid digitizing of implanted forests, network analysis, comparison of classification approaches to map urban areas, landscape characterization, urban heat islands, spatial and temporal gap-filling and species identification through OBIA and machine learning.

On May 14, the selected reports and tutorials were presented live to an audience of almost 90 people. It was really satisfying to witness the students' learning process and outcomes. Many of them overcame installation difficulties, learnt and studied new modules, searched for data, and explored different solutions. Some even moved to Linux and learnt to use Git/GitHub. Have a look 🤓

With this post I would like to encourage others to follow such an approach, which proved rewarding in many respects: for students to get their work showcased, for their families to see aunt/uncle, mum or dad on the screen (we had some very sweet messages in the online chat 😃), and also for us as teachers/trainers. Furthermore, I believe these events bring science, higher education and technology closer to the general public and… we never know who might be inspired by our work! 😍

May 24, 2021 12:00 AM

We are happy to announce GeoServer 2.19.1 release is available for download (zip and war) along with docs and extensions.

This GeoServer 2.19.1 release was produced in conjunction with GeoTools 25.1 and GeoWebCache 1.19.1. This is a stable release recommended for production systems.

Thanks to everyone who contributed, and to Jody Garnett (GeoCat) for making this release.

Improvements and Fixes

Several new features are included in this release:

  • GetFeatureInfo can now include ColorMap labels for the location clicked; check out the tutorial.
  • A new styling vendor option, inclusion, controls legend generation for WMS GetLegendGraphic, using values of legendOnly, mapOnly, or normal to define how a style element is used (see the sketch below).
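
As a rough sketch of how the option might be attached to a rule meant to appear only in the legend (the placement of the VendorOption element is an assumption here; check the GeoServer styling documentation for the authoritative form):

    <Rule>
      <Name>legend-only entry</Name>
      <PolygonSymbolizer>
        <!-- fill/stroke details omitted -->
      </PolygonSymbolizer>
      <VendorOption name="inclusion">legendOnly</VendorOption>
    </Rule>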

Notable improvements:

  • Improve parameter extractor logging
  • Customization of complex GeoJSON WFS output is now available for other data stores; previously this was restricted to attributes marked @dataType by the AppSchema plugin.
  • SLD Service now places a limit on the number of unique intervals, configured by the system variable -Dorg.geoserver.sldService.maxUniqueRange=1024.

Fixes included in this release:

  • Fix elevation key-value-pair parser (used in WMS GetMap) handling of edge cases such as zero intervals
  • Fix importer application of gridset bounds (it was not working correctly when GWC DiskQuota was in use)
  • Inspire schemas URL updated to their new HTTPS location
  • WCS 2.0 slicing fix on lat/long; it was sometimes returning an adjacent pixel
  • WMS Layers with dimensions were missing from GetCapabilities when using catalog security challenge mode.
  • App Schema download was missing a required jar.
  • Improvements helping coverage format compatibility with file references
  • Address GeoFence interaction with non global named tree container
  • Address WPS Download animation out of memory issues
  • Address rendering process regression with use of vendor option sortByGroup resulting in “internal error rendering process failed”

Internal:

  • Upgrade to commons-io 2.8.0
  • Autoformat maven pom.xml files

For details check the 2.19.1 release notes.

About GeoServer 2.19

Additional information on GeoServer 2.19 series:

Release notes ( 2.19.1 | 2.19.0 | 2.19-RC )

by Jody Garnett at May 24, 2021 12:00 AM

May 21, 2021

The PostGIS Team is pleased to announce the release of PostGIS 3.1.2!

This release is a bug fix release, addressing issues found in the previous 3.1 release.

  • #4871, TopoGeometry::geometry cast returns NULL for empty TopoGeometry objects (Sandro Santilli)
  • #4826, postgis_tiger_geocoder: better answers when no zip is provided (Regina Obe)
  • #4817, handle more complex compound coordinate systems (Paul Ramsey)
  • #4842, Only do axis flips on CRS that have a “Lat” as the first column (Paul Ramsey)
  • Support recent Proj versions that have removed pj_get_release (Paul Ramsey)
  • #4835, Adjust tolerance for geodetic calculations (Paul Ramsey)
  • #4840, Improper conversion of negative geographic azimuth to positive (Paul Ramsey)
  • #4853, DBSCAN cluster not formed when recordset length equal to minPoints (Dan Baston)
  • #4863, Update bboxes after scale/affine coordinate changes (Paul Ramsey)
  • #4876, Fix raster issues related to PostgreSQL 14 tablefunc changes (Paul Ramsey, Regina Obe)
  • #4877, mingw64 PostGIS / PostgreSQL 14 compile (Regina Obe, Tom Lane)
  • #4838, Update to support Tiger 2020 (Regina Obe)
  • #4890, Change Proj cache lifetime to last as long as connection (Paul Ramsey)
  • #4845, Add Pg14 build support (Paul Ramsey)

by Paul Ramsey at May 21, 2021 12:00 AM

May 18, 2021

The recordings of the presentations given at the "Uso de las Tecnologías Libres de Información Geográficas en Educación Básica – experiencias iberoamericanas" conference (Use of Free Geographic Information Technologies in Basic Education – Ibero-American Experiences), held online on May 12 and 13, 2021, are now available.

The conference, organized by the CYTED GeoLIBERO network, served as an excellent showcase of experiences using free geomatics in pre-university education, and also hosted a debate through the round table «Oportunidades y desafíos de las TIGs libres como herramientas de enseñanza pre-universitaria» (Opportunities and challenges of free GIT as pre-university teaching tools).

Without further ado, here is the YouTube playlist of the conference: https://www.youtube.com/playlist?list=PLN3WPYNh02IvNO7zao0yndgIWr1QhUhby

by Alvaro at May 18, 2021 01:17 PM

May 17, 2021

The Nàquera municipality has launched its Spatial Data Infrastructure, based on gvSIG Online technology, the gvSIG Suite solution for SDIs.

The first public geoportal, which can already be consulted, is an urban-planning map viewer giving access to the municipality's town-planning information: partial plans, sector regulations (livestock roads, Sierra Calderona Natural Park, …) and the cadastre.

In this way, the Nàquera City Council joins the growing number of local administrations committed to the gvSIG Online solution for managing their geographic information, a commitment based on the use of open-source and standards-based technologies.

by Mario at May 17, 2021 12:04 PM
