Welcome to Planet OSGeo

September 15, 2021

This release addresses 2 issues:

  • The --connect-params switch of the list-connection-params command in FdoCmd is now optional. This means you can now list required connection parameters for a given FDO provider without having to establish a data store connection.
  • This release contains updated FDO binaries which include the following changes:

by Jackie Ng (noreply@blogger.com) at September 15, 2021 04:55 PM

Can we reliably measure truck traffic from space? Compared to private transport, spatiotemporal data on freight transport is even harder to come by. Detecting trucks using remote sensing has been a promising lead for many years but often required access to pretty specialized sensors, such as TerraSAR-X. That is why I was really excited to read about a new approach that detects trucks in commonly available Sentinel-2 imagery developed by Henrik Fisser (Julius-Maximilians-University Würzburg, Germany). So I reached out to him to learn more about the possibilities this new technology opens up. 

Vehicles are visible and detectable in Sentinel-2 data if they are large and moving fast enough (image source: ESA)

To verify his truck detection results, Henrik had already used data from truck counting stations along the German autobahn network. However, these counters are quite rare and thus cannot provide full spatial coverage. Therefore we started looking for more complete reference data. Fortunately, Nikolaus Kasper at the Austrian highway corporation ASFINAG offered his help. The Austrian autobahn toll system is gantry-based: it records when a truck passes a gantry. Using the timestamp of these truck passages and the current traffic speed, it is possible to estimate truck locations at arbitrary points in time, such as the time a Sentinel-2 image was taken. This makes it possible to assess the Sentinel-2-based truck detection along the autobahn network for complete Sentinel-2 images.

Overall, Sentinel-2-based detections tend to underestimate the number of trucks. Henrik found a strong correlation (with an average r value > 0.8) between German traffic counting stations and trucks detected by the Sentinel-2 method. These counting stations were selected for their ideal characteristics, including distance from volatile traffic situations such as a high number of highway intersections. This is very different from our comparison, which covers autobahn sections in and near Vienna. We therefore expected larger detection errors. However, our new Austrian analysis reaches similar results (with r values of 0.79, 0.70, and 0.86 for three different days: 2020-08-28, 2020-09-22, and 2020-11-06).

Thanks to the truck reference locations provided by ASFINAG, we were also able to analyze the spatial distribution of truck detections. We decided to compare ASFINAG data (truth) and Sentinel-2-based detections using a grid based approach with a cell size of 5×5 km. Confirming Henrik’s original results, grid cells with higher detection than ground truth values are clearly in the minority. Interestingly, many cells in Vienna (at the eastern border of the image extent) exhibit rather low relative errors compared to, for example, the cells along Westautobahn (the east-west running autobahn in the center of the image extent).

Some important remarks: The Sentinel-2-based detection method only works for large vehicles moving at around 50 km/h or faster. It is hence less suited to detecting trucks in city traffic. Additionally, trucks in tunnel sections cannot be detected. To enable a fair comparison, we therefore flagged trucks in the ground truth dataset that were located in tunnels and excluded them from the analysis. Sentinel-2 captures the region around Vienna at around 10:00 in the morning. As a result, it is not possible to assess other times of day. Finally, cloud cover will reduce the accuracy. Therefore we picked images with a low reported cloud cover percentage (< 5%).

It is really exciting to finally see a truck detection method that works with readily available remote sensing data because this means that it is potentially transferable to other areas of the world where no official traffic counts are available. Furthermore, this method should be in line with data protection regulations (avoiding identification of individuals and potential reconstruction of movement trajectories) thus making it possible to use and publish the resulting data without further anonymization steps.


This post was written in collaboration with Henrik Fisser (Uni Würzburg / DLR) and Nikolaus Kasper (Asfinag MSG). Keep your eyes open for upcoming detailed publications on the Sentinel-2-based method by Henrik.


This post is part of a series. Read more about movement data in GIS.

by underdark at September 15, 2021 07:51 AM

September 14, 2021

(This is a post I started in December 2019 and didn't publish because I felt it was too negative. But recent events show that it is still a current topic, and at least this will document my own perception of things)
 
We have lately stumbled upon a few clumsy attempts by corporations at contributing to open source software. Needless to say, this is a bumpy ride not for the faint of heart, from both sides of the equation.

If you just want to stop your reading here, just remember that good old motto: When in Rome, do as the Romans do. And be prepared to become a Roman citizen.

We open source developers have our own hard-to-decipher customs. We pay tribute to the Almighty Git (*), preferably through his most revered (often with awe) incarnation called GitHub. You may also find followers of GitLab, gitea, gitorious.
In that case, never pronounce the word 'GitHub' before them. It is also said that some might still use email patches: be sure to set an 80-character limit on lines, use LF line endings and refrain from using PNG signatures. Be sure to stay away from SourceForge followers. They will continuously impose on you advertising messages that will destroy your sanity.

You should subscribe to mailing lists. Be wary of trolls. We may occasionally engage in flame wars, generally sanctioned by motions. We are structured into groups that defer to a Project Steering Committee to decide (or not) which way to follow. Our groups regularly gather at solstice time in dark rooms for a week-long trance (note: was written pre-pandemic), known as a hackfest, hackathon or code sprint. For the tribes involved in geospatial, we have an annual Pow-Wow, called FOSS4G, during which we temporarily bury the hatchet and occasionally thank our sponsors. You may stumble upon a C89 developer sharing a beer with a Java 11 one, or an adorer of Leaflet trying to convert an OpenLayers follower. If you want to join the Pow-Wow, remove your tie and suit, wear a t-shirt and sandals, and make sure to display a prominent "I Love Open Source" sticker on your laptop. You may also add an "I Love Shapefile" or "I Love GeoPackage" sticker, but only if you are sure which one the group pays tribute to. Failure to display the appropriate sticker will expose you to terrific insults about 10-character limits or obscure write-ahead log locking issues. If unsure, use the "I ? Shapefile" sticker.

We have taboo words. Thou Shalt Not Talk about business plans, intellectual property, patent portfolios, customer demand, strategy, costless SDKs, CVE numbers, internal policy, consolidated codebases or education license fees. Planning and roadmap should also be pronounced with care, as some tribes will probably not have even the slightest idea of what you are talking about.

If despite all those strange habits you want to join us, be prepared to follow a long and demanding initiation path. You will have to demonstrate your (possibly affected) adoration of our principles. As strange as it may seem, we abhor being offered large presents that we cannot prevent ourselves from seeing as Trojan horses. You will rather have to locate a long list of problems pinned on a wall called the bug tracker, where each member of the community has written an issue or a wish. Start with a modest one that makes sense to you, solve it and offer its resolution to the Reviewer in a sufficiently argued Pull Request. The Reviewer, who often equates to the Maintainer, will scrutinize your gift, often at night time after Day Job. He may decide to accept it right away, or return it to you with a comment requesting a change, or just ignore it with the highest contempt. Be humble and obey his command. You may beg for enlightenment, but never object, even if you cannot make sense of his rebuttal. He is the almighty after the Almighty. Never Rebase if asked to Merge; never Merge if asked to Rebase. Refactor when asked, but don't when not! You should rewrite history before submitting, or maybe not. If unsure, consult Contributing.md, or be prepared for RTFM to be yelled at you (only by the tribes that have not yet written CodeOfConduct.md). Do not even consider objecting that you have not been tasked to address his demands. You must also make sure to listen to the complaints of the Continuous Integration (CI) half-gods: the Reviewer will probably not even look at your gift until CI has expressed its satisfaction. Retry as many times as needed until they are pleased. You may attempt to submit an RFC, but be prepared for lengthy and lively discussions. Listen, correct, or you may not survive your first "-1" spell!

We especially praise gifts that have no direct value for you: improved test suites (e.g. https://github.com/OSGeo/gdal/issues/4407), documentation additions and fixes, answering users on the mailing list. Only when you feel that you have built enough trust might you try to offer your initial gift. But, even if it is accepted, the most surprising habit is that your gift will remain yours. You will have to make sure you regularly remove the dust that accumulates on it over time, so it remains constantly shiny. While removing dust from your gift, never neglect to remove it also from nearby gifts offered by others. Otherwise the Maintainer might threaten to invoke the terrible Revert incantation on you. Also consider that existing contributors to a project might see your new code, which they have no direct use for, as an extra burden on their daily tasks (studying the commit history of https://github.com/qgis/QGIS/commits/master/src/providers/hana, or even more crudely of https://github.com/qgis/QGIS/commits/master/src/providers/db2, demonstrates that).

Ready for a ride, and possibly enjoying the feeling that "our" code can also become "yours"?


(*) some tribes, cut off from the rest of the world, are said to still pursue their adoration of more ancient divinities sometimes known as SVN or CVS. We cannot confirm this, having eradicated the last remains of those old cults.

by Even Rouault (noreply@blogger.com) at September 14, 2021 03:19 PM

We are happy to announce GeoServer 2.20-RC release candidate is available for testing. Downloads are available (zip and war) along with docs and extensions.

This is a GeoServer release candidate made in conjunction with GeoTools 26-RC and GeoWebCache 1.20-RC.

  • Release candidates are a community building exercise and are not intended for production use.
  • We ask the community (everyone: individuals, organizations, service providers) to download and thoroughly test this release candidate and report back.
  • Testing priority is the new internationalization support
  • Participating in testing release candidates is a key expectation of our open source social contract. We make an effort to thank each person who tests in our release announcement and project presentations!
  • GeoServer commercial service providers are fully expected to test on behalf of their customers.

Release Candidate Testing Priorities

This is an exciting release and a lot of great new functionality has been added. We would like to ask for your assistance testing the following:

  • The number one testing priority is to try out GeoServer with your data! Mass market open source thrives on having many people review it. Scientific open source like GeoServer thrives on exposure to many datasets.
  • The rest of this blog post highlights new features for GeoServer 2.20; please try out these features, read the documentation links, and ask questions.

Known Issues:

  • No issues reported at this time, you could be the first!

Internationalization

The leading feature for this release is the internationalization of Title, Abstract and Contact details for:

  • WMS 1.1 and 1.3
  • WFS 2.0
  • WCS 2.0

See documentation for internationalization support and GSIP-203 proposal for details.

New feature:

  • GEOS-10123 Internationalization for title and abstract
  • GEOS-10207 Allow creation of internationalized raster legends
  • GEOS-10190 i18n support for Contact Information
  • GEOS-10185 LayerGroup legend internationalization styles returns multiple values
  • GEOS-10177 Allow Default Translation
  • GEOS-10129 Add language function for multilingual support in sld

Improvements and fixes:

  • GEOS-10205 Layer with i18n title might appear twice in the capabilities, while being contained in a named tree
  • GEOS-10204 Default locale is not being used while producing internationalized outputs in Capabilities document
  • GEOS-10160 Requested Language in GetCapabilities

Modules Status Information for Extensions

Thanks to Ian for completing a long outstanding request (https://osgeo-org.atlassian.net/browse/GEOS-10067) to provide a listing of everything you have installed:

  • The Server Status page now provides a complete list of the loaded modules and extensions
  • This extension list can also be checked via REST API (allowing scripts to check if the functionality they require has been installed)
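
For example, assuming a default local installation, the module list can be retrieved from the about/status REST resource with a call like the following (adjust host and credentials to your setup):

$ curl -u admin:geoserver http://localhost:8080/geoserver/rest/about/status.json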

Improvements and fixes:

  • GEOS-9967 Add Module Status implementation for CSW Extension

Updates and quality assurance

GeoServer continues to be built with the latest open source technologies:

  • GeoTools 26-RC
  • GeoWebCache 1.20-RC
  • JAI-EXT 1.1.20
  • ImageIO-EXT 1.3.10
  • JTS 1.18.2
  • GeoFence 3.5.0
  • FlatGeobuf 3.10.1

The team continues to work with automated code checks, gradually improving the codebase and introducing checks to ensure issues are not re-introduced over time:

  • Check that System.out.println and printStackTrace statements are not accidentally committed, which can pollute logs
  • Cognitive complexity checks, starting to clean up methods that are too complex
  • Use StandardCharsets constants when possible, rather than charset name strings
  • Avoid unnecessary object wrapper creation
  • Use short array initializers
  • Work towards a consistent style, with checks to avoid C-style array declarations, add missing @Override annotations, and check that Java generics are used

This dedication helps provide confidence in the technology we publish.

WMS

Fixes and improvements:

  • GEOS-4939 Coordinate system ISSUE - S-JTSK Krovak East North (EPSG: 5514) - cannot be set up
  • GEOS-10032 Group Layer in Catalog Mode Hide not in capabilities when unauthenticated
  • GEOS-10013 Mark invalid error while validating or saving a Style
  • GEOS-9907 Enable usage of labelPoint function in GetFeatureInfo requests
  • GEOS-9759 Set Response Cache Headers in LayerGroups

The following functionality has been removed:

  • GEOS-10001 Remove animator and animated GIF support from WMS

    Use of the WPS Animation process is provided as an alternative

WPS

Fixes and improvements:

  • GEOS-9990 Add GUI and REST API to configure the wps-download module
  • GEOS-10073 WPS animation download process should report about eventual time mis-matches

WMTS

Improvements and fixes:

  • GEOS-10008 Have GeoServerTileLayer implementing TileJSONProvider
  • GEOS-9971 GeoWebCache S3 plugin require AWS creds

INSPIRE Extension

New feature:

  • GEOS-10124 Add Language support to INSPIRE extension

Improvements and fixes:

  • GEOS-10211 Unable to pass INSPIRE validation: Version is mandatory (WMS)
  • GEOS-10192 Inspire extension consistent outputResponse element
  • GEOS-10141 Inspire extension better error message on language not found
  • GEOS-10163 Incorrect INSPIRE namespace URI

And more!

Fixes and Improvements:

  • GEOS-10092 Fix the page description of remote WMS/WMTS connection
  • GEOS-10189 I18n improvement using the UTF-8 charset for Chinese translations
  • GEOS-10033 Geoserver startup and shutdown shell scripts don’t handle path with spaces
  • GEOS-9381 Conversion from boolean true/false in geoserver to SQL Server bit 0/1, is broken
  • GEOS-9970 MapML GetFeature bug fix for CRS authority
  • GEOS-10201 Geoserver fails to start on Windows 11 beta

Find out more in the release notes.

About GeoServer 2.20

Additional information on GeoServer 2.20 series:

by Jody Garnett at September 14, 2021 12:00 AM

With the release of version 1.0 of EOxServer achieved, it is a good time to shed some light on this workhorse software, more than ten years in active development. Even before this milestone, EOxServer was and is used in quite a number of operational deployments, most notably in the VirES line of p ...

September 14, 2021 12:00 AM

September 13, 2021

I have been using Ansible for years to provision server instances and for subsequent CI/CD. A recent example is the Geonovum OGC API Testbed. There, (selected) Docker containers are automatically deployed on GitHub pushes using Ansible, called from within a GitHub workflow.

I am now investigating how Terraform could play a key role in (cloud) infrastructure management. There is a small overlap between Ansible and Terraform, but that is a matter of how they are applied in concert.

Ansible is more geared towards maintaining the OS and its running components, e.g. Docker containers on VM instances. Terraform is more geared to maintaining a cloud infrastructure "in the large": acquiring VM instances, networks, DNS. If you are familiar with AWS, Google Cloud Platform, or in our case Hetzner Cloud, it is what you can do by clicking in their respective UIs or via their APIs like Hetzner's hcloud. And btw: both Ansible and Terraform are Open Source.

A quote from a random web search: "Terraform is designed to provision different infrastructure components. Ansible is a configuration-management and application-deployment tool. It means that you'll use Terraform first to create, for example, a virtual machine and then use Ansible to install necessary applications on that machine."

Both Ansible and Terraform are "declarative", i.e. configuration-based, where the configuration describes a desired state. Actions are "idempotent", i.e. the same action can be applied multiple times, but once the desired state is reached it has no further effect.

As I plan to apply Terraform in other projects as well, I took a deep dive, following hands-on tutorials from the Terraform website. In a very short time I was amazed by Terraform's power and elegance! My ultimate goal was to manage (acquire, configure, access, destroy) the lifecycle of Hetzner Cloud Virtual Machines (VMs, VPSs). This all took less than two hours, documenting my steps along the way.

I started at the Getting Started page. While there is a lot of material on AWS and Terraform Cloud, I basically stuck to these four steps. You may even skip the third (GCP) step. The important thing is to learn the terminology and configuration conventions.

  1. Installation
  2. Using Docker Provider
  3. Using GCP Provider
  4. Using Hetzner Cloud Provider
    Tip: in IntelliJ IDEA install the Terraform plugin. It will recognise/help with Terraform files!

Step 1 - Installation

learn.hashicorp.com/tutorials/terraform/install-cli

On a Mac with Homebrew install the Terraform CLI:

$ brew tap hashicorp/tap

$ brew install hashicorp/tap/terraform

$ terraform -version
Terraform v1.0.2
on darwin_amd64

That's it!

Step 2 - Using Docker Provider

learn.hashicorp.com/collections/terraform/docker-get-started

I started project dirs under ~/project/terraform/learn/.

$ mkdir -p ~/project/terraform/learn/terraform-docker-container

Create a file called main.tf:

terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 2.13.0"
    }
  }
}

provider "docker" {}

resource "docker_image" "nginx" {
  name         = "nginx:latest"
  keep_locally = false
}

resource "docker_container" "nginx" {
  image = docker_image.nginx.latest
  name  = "tutorial"
  ports {
    internal = 80
    external = 8000
  }
}

This defines that we will use the Terraform provider plugin named "docker" with source kreuzwerker/docker. Terraform has a registry of official (provider) plugins.

Now initialize and install the plugin:

$ terraform init

Initializing the backend...

Initializing provider plugins...

- Finding kreuzwerker/docker versions matching "~> 2.13.0"...
- Installing kreuzwerker/docker v2.13.0...
- Installed kreuzwerker/docker v2.13.0 (self-signed, key ID 24E54F214569A8A5)

etc.

You may validate your config:

$ terraform validate

Moment of truth: create the resources:

$ terraform apply

Verify the existence of the NGINX container by visiting localhost:8000 in your web browser or running docker ps to see the container.

That's it for Docker. Next is to use a real Cloud Provider.

Step 3 - Using GCP Provider

learn.hashicorp.com/collections/terraform/gcp-get-started

This was actually more elaborate than the Hetzner Cloud exercise. I had a GCP account, so this went smoothly: first creating a network and later a VM instance. In this step I also learned about using Terraform providers, resources and variables (files). I leave this as an option and skip to Hetzner Cloud, which is the goal of this post.

Step 4 - Using Hetzner Cloud Provider

The Hetzner Cloud provider was not in the tutorials, but it was still not too hard to extrapolate from Step 3, from various Hetzner tutorials and from the hetznercloud/hcloud Terraform provider.

Prerequisite is to have a Hetzner Cloud account and thus login access to console.hetzner.cloud.

Steps:

  • create a new Project in https://console.hetzner.cloud/projects, e.g. TerraformLearn
  • add your SSH public key to this project via the "Security" menu on the left
  • generate and copy an API Token for the project

My goal was to create a Debian VM, log in there with root and SSH key, and destroy it afterwards.

$ mkdir -p ~/project/terraform/learn/terraform-hetzner

Create main.tf as follows:

terraform {
  required_providers {
    hcloud = {
      source  = "hetznercloud/hcloud"
      version = "1.27.2"
    }
  }
}

provider "hcloud" {
  token = var.hcloud_token
}

resource "hcloud_server" "node1" {
  name        = "node1"
  image       = "debian-9"
  server_type = "cx11"
  ssh_keys    = ["just@sunda.lan"]
}

Create a file variables.tf:

# Set the variable value in *.tfvars file
# or using the -var="hcloud_token=..." CLI option
variable "hcloud_token" {
  sensitive = true # Requires Terraform >= 0.14
}

Then a file called terraform.tfvars. This is a file with "secrets", normally not checked into a repo; there are many other possibilities for dealing with secrets/credentials:

hcloud_token = "the token string from Hetzner Cloud API Token"

Moment of truth: apply!

$ terraform init

$ terraform apply -auto-approve

Using -auto-approve you skip the interactive approval step.

Next check the Hetzner Cloud Console project page and see the new VM running!

Try to log in on your new VM (the IP can also be obtained from an output.tf; see the sketch below):

$ ssh root@<Your VM IP>
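
As a minimal sketch of that output.tf exercise (assuming the hcloud_server resource is still named node1; ipv4_address is the attribute exposed by the hcloud provider, and the output name is my own choice):

output "node1_ip" {
  value = hcloud_server.node1.ipv4_address
}

After the next terraform apply, running terraform output node1_ip prints the address.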

Then destroy your VM:

$ terraform destroy -auto-approve

There is much more one can do with the Hetzner provider: basically everything that is available in the console UI and the hcloud API: creating Volumes, managing networks, adding SSH keys, snapshots, using cloud-init etc. See the provider documentation, in particular the Resources drop-down menu: registry.terraform.io/providers/hetznercloud/hcloud/latest/docs
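
For instance, a hedged sketch of one of those resources: attaching a Volume to the node1 server created earlier might look roughly like this (hcloud_volume attribute names per the provider docs; size is in GB):

resource "hcloud_volume" "data" {
  name      = "learn-volume"
  size      = 10
  server_id = hcloud_server.node1.id
}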

Beware that some Terraform actions are destructive: e.g. upgrading the OS will destroy the existing VM and create a new one. For those cases Floating IPs and auto-provisioning with Ansible will help, though in that case Ansible would be better suited to upgrading the OS. One can always execute terraform plan first to see the execution plan. My recommendation is to let Terraform handle the basics, and have Ansible manage the details on VMs.

Alternatives: Terraform with DigitalOcean using the DO provider: registry.terraform.io/providers/digitalocean/digitalocean/latest/docs.

All in all: Terraform can form a nice partnership with Ansible.

September 13, 2021 01:24 PM

Read the guest post and congratulate Francesco Bursi, who successfully completed a GSoC 2021 project to add a virtual raster provider to QGIS with the help of mentors Martin Dobias and Peter Petrik.


In this year’s Google Summer of Code (GSoC), I decided to work on the native QGIS raster calculator. Martin Dobias and Peter Petrik volunteered to mentor my work. I’ve been studying Civil Engineering and GeoInformatics at the University of Padua; there I had the opportunity to work with a lot of GIS software, including QGIS. I enjoyed working with QGIS almost immediately because of the possibility to perform complex analyses with a few clicks or with a few Python commands. Being passionate about programming and enthusiastic about Open Source, I realized that having the possibility to work together with some experienced developers and with an active community was really a great and unique opportunity, so I decided to apply to GSoC.

GSOC & OSGeo

Virtual Raster Provider

The existing raster calculator is a powerful tool to perform map algebra that outputs a raster layer. Before this work, it was possible to take advantage of this tool only by saving the output of the operation as a file. The aim of this year's GSoC was to allow users to perform their analysis without creating a new derived raster taking up disk space, and therefore to have the result as an on-the-fly computed layer.

Let’s jump to an example and say I want to compute the Canopy Height Model (CHM) by subtracting the Digital Terrain Model (DTM) from the Digital Surface Model (DSM).

I also want to perform some other analysis on the DTM, since I want to compute the ideal elevation value for a particular tree planting (disclaimer: the elevation value used is for example purposes only; moreover, when planting trees you should take into account a lot of factors like slope, aspect and latitude. QGIS, by the way, can really be helpful in this kind of analysis). To do so I will start from the same data and create a different on-the-fly layer for each calculation; to avoid the creation of different files I can take advantage of the new checkbox added to the raster calculator dialog. The computation of the CHM is performed in the next screencast and the output layer name is, of course, CHM.

computation of CHM

I’ll end up with a new raster layer (CHM) that can be styled as a normal raster and that is not written as an output file to disk. For some further analysis, from the DTM I want to obtain the portion of the area with an elevation between 150 and 350 metres above the datum. By applying the following expression to the DTM I’ll end up with a raster that has a value of 1 where the condition specified by the expression is TRUE and a value of 0 otherwise.

("dtm@1" > 150) AND ("dtm@1" < 350)

I intentionally did not set the output layer name. The resulting layer will be named after the expression used to generate it.

generation of CHM layer

Conditional Statement

I also had the opportunity to improve the raster calculator capabilities by adding the possibility to write expressions involving conditional statements. Taking the example already used, let’s imagine I want to compute the CHM only for the areas of the DTM that are between 150 and 350 metres above the datum. It’s now possible to write an expression like the following one:

if ( ("dtm@1" > 150) AND ("dtm@1" < 350), CHM, -10)

This expression will output a raster with the values of the CHM where the conditions are met and a value of -10 where they are not. Since this is a final result of our analysis, I’ll store this output on disk as a GeoTIFF file. I’d like to point out that the CHM used in the expression above and in the next screencast is an on-the-fly computed raster, so it is possible to:

  • Take advantage of the virtual raster provider (on-the-fly computed raster) in other analysis with the raster calculator (and with other analysis tools);
  • Store the on-the-fly computed raster as a file.

Conclusion

I had fun and I struggled working with QGIS, but I learned a lot of new and interesting things. My pull requests were met with several constructive comments, suggestions and feedback. Some suggestions can be a starting point for future improvements.

  • An enhancement for the feature I’ve developed could be the possibility to take advantage of OpenCL acceleration, as has also been suggested on the dev mailing list;
  • Another enhancement, concerning the raster calculator and only partially the virtual raster provider, would be support for creating output rasters with multiple bands via the declaration of multiple formulas. I hope to continue contributing to the QGIS project in the future.

September 13, 2021 12:00 AM

September 11, 2021

The PostGIS Team is pleased to release the first alpha of the upcoming PostGIS 3.2.0 release.

Best served with PostgreSQL 14 beta3. This version of PostGIS utilizes the faster GiST building support API introduced in PostgreSQL 14. If compiled with the in-development GEOS 3.10dev you can take advantage of improvements in ST_MakeValid. This release also includes many additional functions and improvements for postgis_raster and postgis_topology extensions.
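
As a minimal illustration of the GEOS 3.10 tie-in (assuming PostGIS 3.2 compiled against GEOS 3.10, which enables the new 'method=structure' option of ST_MakeValid; the self-intersecting bow-tie polygon is just sample data):

SELECT ST_AsText(ST_MakeValid(
    'POLYGON((0 0, 1 1, 1 0, 0 1, 0 0))'::geometry,
    'method=structure'));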

Continue Reading by clicking title hyperlink ..

by Regina Obe at September 11, 2021 12:00 AM

September 10, 2021

September 09, 2021

QuickOSM 2.0.0

Introduction

I am Maxime Charzat, a student at ENSG, the École Nationale des Sciences Géographiques (the French national school of geographic sciences).

3Liz hired me to give the plugin a facelift. Beyond a few bug fixes and updates, the plugin had not gained any new features in years. That is the whole purpose of my internship at 3Liz.

What's new

Quick query

This panel has evolved considerably in this new version. The idea was to support both simple and more advanced uses, and of course to improve both.

On the basic side, we wanted to lower the knowledge barrier around OSM keys and values. For newcomers to the plugin, we added a field with presets, similar to other OSM tools such as Vespucci and JOSM… This field is translated into the language set in QGIS, thereby lowering the barrier that keys/values can be. Typing Boulangerie in French automatically produces the OSM query shop=bakery.

JOSM preset

For advanced use, one feature had been requested for a long time: the ability to build queries with multiple keys/values. This is now possible. The interface provides a table for choosing keys/values, adding or removing rows, and choosing how they are combined. You can now build a query for bakeries that also sell pastries (shop=bakery AND pastry=yes), or one for both bars and animal shelters (amenity=bar OR amenity=animal_shelter).

Multi keys in QuickOSM

For OSM data enthusiasts, metadata can be requested by ticking the checkbox in the Advanced group. Among other things, this gives access to an object's version and to the last person who updated that object.

The last addition to this panel is a query history. The plugin now temporarily stores the ten most recent queries, so a recent query can be rerun without having to fill everything in again.

Advanced Quick Query

OSM file

This panel loads an OSM or PBF file, stored locally on your computer, into QGIS. The problem was that you had to load the entire file (which can be quite large). It is now possible to load only the data matching a key/value query.

Starting from a file downloaded from, for example, https://download.geofabrik.de, you can query a large volume of data without using the internet; a volume not necessarily supported by the Overpass API that the plugin uses to download OSM data.

Processing toolbox

Thinking about all the ways the plugin is used, we decided to round out the range of algorithms in the Processing toolbox. Until now, only the query-building algorithms were implemented for the QGIS graphical modeler.

If you don't know the QGIS modeler yet, now is the time to take a look.

QGIS Processing

We have therefore added the equivalent of the Quick query panel to the Processing toolbox.

Map presets

Here is a major new feature of version 2.0.0, and a potentially quite powerful one. Building on the My queries panel that existed in the QGIS 2 version, we implemented the option to save queries.

Map Preset

But this goes beyond merely saving: queries can be turned into a map preset. In two clicks, specifying just the desired extent or place, you can download all the data and run all the queries needed to display a ready-to-use map. You can even associate a style with the layers directly.

Concretely, launching the Urban preset shown above automatically downloads buildings and roads, along with a style that formats the data.

Configuration

Saved queries therefore have an editing mode designed to be exhaustive. We want to give users the power to build their own presets, with control over most of the available options. Within a preset you can manage several queries (run one after another during the process), adjust most parameters for each query, and define the output fields.

There is also a choice between two preset types: basic or advanced. The difference lies in the query. With the basic type, the process builds the queries from the given keys/values. With the advanced type, the process uses the query written by the user.

The plugin ships with only one default preset for the moment. We will need you to grow this list. If you feel like a contributor, if you think a preset is missing, or if you want to take part in this plugin, don't hesitate to propose your presets at https://github.com/3liz/QuickOSM, providing the JSON file and the QML files.

One can easily imagine presets for a hiking map, a cadastre map, a land-use map…

Conclusion

I am happy to present this new version of QuickOSM to you. It opens up and deepens a range of possibilities that I can't wait for you to discover. Don't hesitate to give us feedback (Twitter, LinkedIn, GitHub…) and to propose your presets.

Have fun!!!

Maxime Charzat

by Maxime Charzat at September 09, 2021 03:00 PM

QuickOSM 2.0.0

Introduction

Hi everyone, I'm Maxime Charzat, a student at ENSG, a French engineering school for GIS. I've been interning at 3Liz with the goal of cleaning up, updating and adding new features to the QGIS QuickOSM plugin.

What's new

Quick query

We had two goals with this panel: make it simpler for newcomers to download OSM data, and make it better for advanced users so they can get more out of OSM data.

The first goal was to simplify the way to find OSM keys and values, so we added preset data as in other OSM tools such as Vespucci and JOSM… These presets are available in the language defined in QGIS, so people can easily find keys like bench or highway. Typing Boulangerie in French is automatically transformed into the query shop=bakery.

JOSM preset

For advanced users, we added support for multiple keys/values. There is now a table to add one or more keys, and the rows are combined with AND or OR.

For instance, it's now possible to query bakeries that have pastries (shop=bakery AND pastry=yes).

Multi keys in QuickOSM

For people familiar with OSM data, metadata can be requested by ticking the checkbox in the advanced settings. This will give you the object version and who contributed to the last version of said object.

Last but not least, we added a query history. The plugin now temporarily remembers the last ten queries, so they can easily be relaunched.

Advanced Quick Query

OSM File

This panel lets you load a PBF or OSM file stored on your hard drive into QGIS. Before 2.0, you had to load the full dataset from the file. In 2.0 you can filter using keys/values.

This means that you can now process huge quantities of OSM data offline, without using the Overpass API. Files can be downloaded from services like https://download.geofabrik.de, so people working on large datasets won't put load on the Overpass API.

Processing toolbox

Until now, only the query-building algorithms were supported in the QGIS Processing Modeler.

If you don't know yet about QGIS Modeler, it's time to have a look.

We decided to add more QGIS Processing algorithms.

QGIS Processing

You can now find the equivalent of the Quick query in the Processing toolbox.

Map presets

This is the biggest new feature in this release.

Map Preset

Map presets are a nice way to save queries, as was possible before in QGIS 2. But we added more: it's possible to save a set of queries within a single map preset, to get a map out of the box.

You can define one or more queries and associate a QGIS Style file for each layer.

For instance, by clicking on the Urban map preset showed above, you will download automatically buildings and roads with a style.

The plugin comes with a single map preset for now, but we need you to contribute to this list. If you feel like a map maker and contributor, and if you think a map preset is missing, feel free to come and share it on https://github.com/3liz/quickosm. We would like to see a hiking map preset, a bicycle map preset…

Conclusion

I'm happy that I could make this new version of QuickOSM available to you. I can't wait to see some feedback (Twitter, LinkedIn, GitHub…) and more map presets coming into the plugin.

Also we'd love to have more people translating QuickOSM, see https://docs.3liz.org/QuickOSM/translation-stats/

Have fun!

Maxime Charzat

by Maxime Charzat at September 09, 2021 03:00 PM

Significant time saved when route maps are distributed with Input and Mergin.

This case study was originally written in Czech. The Czech version can be found here.

Every year, teams of volunteers walk door-to-door through the Czech town of Litomyšl collecting charitable donations. Event organisers define routes for the various volunteer teams by marking up paper maps with pens. This process had a number of issues, both in the making and in the usage of the maps, which organisers worked to overcome by making the maps digital using open source GIS software.

Maps were developed using QGIS and made available on volunteers’ phones using the Input app. Volunteers are now able to easily orientate themselves on maps which clearly show their routes. Organisers have reduced the time it takes to update routes and distribute these to volunteers.

Veronika Peterková works for the Litomyšl Parish Charity, a non-profit organisation providing health and social services to people in need since 1993.

Veronika describes the charity’s activities: “We provide home medical services and nursing care to the residents of Litomyšl and its surrounding villages. This includes helping families where the healthy development of a child is at risk and providing respite stays for clients who are otherwise cared for by their families at home. We provide care for about 1000 clients a year.”

She added: “We also coordinate the activities of volunteers who visit the elderly, help with tutoring children and with various leisure and cultural activities.”

One of the parish charity’s biggest fundraising events is the “Tříkrálová sbírka” (Three Kings Collection), a door-to-door carol-singing collection taking part around the 6th of January each year.

Tříkrálová sbírka Litomyšl

Volunteers participating in the Three Kings Collection.

“The Three Kings Collection is the largest national volunteer event in the Czech Republic. In the Litomyšl region alone, nearly 300 volunteers are involved each year with the carol-singers collecting over 500,000 Czech crowns (~20,000 EUR) in sealed boxes. The proceeds are intended to help the sick, the disabled, the elderly, mothers with children in need and other in-need groups in the local area.” Veronika explains.

The Three Kings Collection is organised by Caritas Czech Republic and at least 10% of its proceeds are allocated for humanitarian aid abroad.

charita logo

The Challenge

Veronika is responsible for planning routes for the carol-singers so they efficiently visit households in the Litomyšl area. Singers are split into groups, and paper maps are provided showing each group which households to visit.

Old map © mapy.cz

An example of previous paper maps, image courtesy of Farní charita Litomyšl.

The above maps were produced by printing screenshots from a national web mapping provider and marking up printouts for each of the 50 teams using marker pens.

This method proved to have a number of issues as Veronika describes: “On maps of larger areas, house numbers were not always visible due to the scale. This made it even harder for coordinators not familiar with the area to orient themselves, leading to confusion. Coordinators also found it hard to keep the maps dry and undamaged during unfavourable weather. If new groups signed-up afterwards or others opted-out, we’d have to redo/redivide the areas which would be very time-consuming as the maps would need to be marked-up manually once again.”

The Solution and Implementation

Veronika wanted to try a new solution for organising the 2021 Three Kings Collection with the goal of making volunteer tasks clearer and less reliant on paper maps. She wanted the new solution to allow her to:

  • reduce work through the reuse of maps in future Three Kings Collection events
  • easily update maps if new groups sign in/out and areas need editing
  • allow carol singers to see exactly where they are on the map
  • gradually replace paper maps while still allowing the use of paper maps where preferred
  • group and colour buildings to be visited on the computer
  • record a building’s use (e.g. commercial) to direct volunteers more effectively
  • clearly show how areas are assigned so anyone can see who is responsible for a given area

In addition, Veronika wanted the solution to be affordable and work offline without volunteers needing internet connectivity in the field.

Peter Petrík, a regular participant of the Litomyšl Three Kings Collection suggested Veronika try using the Input app for coordinating the collection in 2021. Peter works for Lutra Consulting, the company behind Input and Mergin.

He showed Veronika how to create the maps in QGIS, a free and open source mapping software. Using map data from OpenStreetMap, they created a project showing the buildings to be visited, coloured by their associated volunteer group number.

qgis map © OpenStreetMap contributors

Houses grouped by team in QGIS, image courtesy of Farní charita Litomyšl.

The styled map was uploaded to Mergin, a collaborative mapping platform, making it readily available for viewing interactively on volunteers' phones using the Input mobile app. Both QGIS and Input integrate closely with Mergin, which meant that maps could be adjusted in QGIS with the resulting changes being visible to volunteers shortly thereafter.

Outcomes

Veronika reflects on the solution: “The solution met all our requirements and the maps we’ve prepared can easily be reused in upcoming events, saving us time. The fact that the new maps were made publicly accessible means volunteers can just download them using Input which makes distributing and updating them very easy.”

qgis map © OpenStreetMap contributors

Volunteer routes and position information shown in Input, screenshot courtesy of Farní charita Litomyšl.

She adds: “All the districts we wanted to visit were distinguished from each other by colour and we were also pleased to be able to clearly mark the areas not to be visited like industrial areas by colouring them in grey.”

Unfortunately COVID meant that Veronika’s plans changed as she explains: “Using these new methods we were able to prepare for the 2021 Three Kings Collection in a short time. Unfortunately however, the COVID situation meant we could not go out on the streets to use the new maps as intended. We hope that in 2022 we’ll be able to more closely evaluate the positives and negatives of the field aspect of the project.”

She adds: “We already see it’s now much easier to allocate areas of the town to our volunteers in a clear and fair manner using QGIS. Producing printed maps for those who prefer them is also now easy and the maps look much more professional. Those who only wanted to use the Input app could see the same information as on the paper maps, but had the advantage of being able to pinpoint their exact location and clearly see the house numbers of each building.”

new map © OpenStreetMap contributors

Example printed map created for volunteers who also wanted paper maps, image courtesy of Farní charita Litomyšl.

She concludes: “Overall we found the solution user-friendly, and appreciated being able to discuss the process with Lutra Consulting who helped us solve issues as required. About a third of our volunteers are interested in using Input, which I consider positive.”

The Litomyšl Parish Charity are on Facebook and Instagram.

Download Input Today

Screenshots of the Input App for Field Data Collection

Get it on Google Play · Get it on the Apple App Store

September 09, 2021 05:05 AM

We are pleased to announce that today we have published a new release of QGIS Cloud. Besides a whole bunch of bug fixes, we have also introduced new features for QGIS Cloud Pro customers. Starting with this release, we will be releasing more features for our QGIS Cloud Pro customers in the coming weeks. The following new features are available for QGIS Cloud Pro users starting today. Import Layer: In the Layers & Legend tool you will now find the possibility to import layers.

September 09, 2021 12:00 AM

September 06, 2021

September 04, 2021

The PostGIS development team is pleased to provide bug fix and performance enhancement releases 3.1.4 and 3.0.4 for the 3.1 and 3.0 stable branches.

3.1.4 This release supports PostgreSQL 9.6-14.

3.0.4 This release works with PostgreSQL 9.5-13 and GEOS >= 3.6, and is designed to take advantage of features in PostgreSQL 12+ and PROJ 6+.

View all closed tickets for 3.1.4, 3.0.4.

After installing the binaries or after running pg_upgrade:

For PostGIS 3.1, 3.0 and 2.5, run the following, which will upgrade all your PostGIS extensions:

SELECT postgis_extensions_upgrade();

For PostGIS 2.4 and below do:

ALTER EXTENSION postgis UPDATE;

-- if you use the other extensions packaged with postgis
-- make sure to upgrade those as well

ALTER EXTENSION postgis_sfcgal UPDATE;
ALTER EXTENSION postgis_topology UPDATE;
ALTER EXTENSION postgis_tiger_geocoder UPDATE;

If you use legacy.sql or legacy_minimal.sql, make sure to rerun the version packaged with these releases.

by Regina Obe at September 04, 2021 12:00 AM

September 03, 2021

We are happy to announce that OTB 7.4.0 has been released! Ready-to-use binary packages are available on the package page of the website: OTB-7.4.0-Darwin64.run (Mac OS), OTB-7.4.0-Linux64.run (Linux), OTB-7.4.0-rc1-Win64.zip (Windows 64 bits). It is also possible to check out the branch with git: git clone https://gitlab.orfeo-toolbox.org/orfeotoolbox/otb.git OTB -b release-7.4 The documentation for OTB 7.4.0 […]

by Cédric Traizet at September 03, 2021 03:19 PM

September 01, 2021

I neglected to post about this at the time, which I guess is a testament to the power of Twitter to suck up energy that might otherwise be used for blogging, but for posterity I am going to call it out here:

Have a listen.

September 01, 2021 08:00 AM

August 31, 2021

GDAL has now been under the continuous scrutiny of OSS-Fuzz for more than 4 years. To keep it simple, OSS-Fuzz is a continuously running infrastructure that stresses software with (not-so-)random data to discover various flaws, and automatically files issues in a dedicated issue tracker, with reproducer test cases and stack traces when available. It is time to make some use of the accumulated data and point out a few trends.

First, we can see a total of 1787 issues found, which represents on average a bit more than one per day. Of those, only 38 are still open (so 97.8% have been fixed or are no longer reproducible). Those 1787 issues are out of a total of 37 769 issues filed against all 530 enrolled projects, hence representing 4.6% (significantly higher than the naive 1 / 530 = 0.2% proportion we could expect, at least if all projects were of the same size, but GDAL is likely larger than the average). Does that mean the quality of GDAL is lower than the average of enrolled projects, or that it is stressed in a more efficient way...? Since most of GDAL's code is about dealing with a lot of file formats, it is an ideal fit for fuzz testing.

We should mention that a number of issues attributed to GDAL actually belong to some of its dependencies: PROJ (coordinate transformation), Poppler (PDF rendering), cURL (network access), SQLite3 (embedded database), Xerces-C (XML parsing), etc. And we regularly report or fix those issues in those upstream components.

Addressing those issues is now facilitated by the sponsorship program, which allows us to spend funded time on such unsexy and usually hard-to-fund topics that are nevertheless important for enhancing the robustness of the software.

We have run a script that parses git commit logs to identify commits that explicitly reference an issue filed by OSS-Fuzz, and tried to analyze the commit messages to find the category of the issue each one addresses.
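
A minimal sketch of that kind of scan (this is not the actual script, which also categorizes each commit; the grep pattern is an assumption about how the commit messages reference OSS-Fuzz):

$ git log --oneline --grep='ossfuzz' | wc -l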

Here's the output of the full script:

Category                                                Count  Pct
----------------------------------------------------------------------
integer_overflow_signed                                  178  14.67 %
excessive_memory_use                                     174  14.34 %
other_issue                                              114   9.40 %
buffer_overflow_unspecified                              105   8.66 %
excessive_processing_time                                102   8.41 %
null_pointer_dereference                                  93   7.67 %
integer_overflow_unsigned                                 68   5.61 %
division_by_zero_integer                                  54   4.45 %
heap_buffer_overflow_read                                 37   3.05 %
unspecified_crash                                         32   2.64 %
stack_call_overflow                                       31   2.56 %
memory_leak_error_code_path                               23   1.90 %
heap_buffer_overflow_write                                22   1.81 %
division_by_zero_floating_point_unknown_consequence       19   1.57 %
stack_buffer_overflow_unspecified                         19   1.57 %
invalid_cast                                              14   1.15 %
invalid_shift_unspecified_dir                             14   1.15 %
memory_leak_unspecified                                   13   1.07 %
assertion                                                 13   1.07 %
invalid_enum                                              11   0.91 %
division_by_zero_floating_point_harmless                  10   0.82 %
infinite_loop                                             10   0.82 %
integer_overflow_unsigned_harmless                         8   0.66 %
invalid_memory_dereference                                 7   0.58 %
undefined_shift_left                                       7   0.58 %
double_free                                                6   0.49 %
stack_buffer_overflow_read                                 6   0.49 %
use_after_free                                             6   0.49 %
integer_overflow_harmless                                  5   0.41 %
undefined_behavior_unspecified                             3   0.25 %
negative_size_allocation                                   3   0.25 %
unhandled_exception                                        2   0.16 %
invalid_shift_right                                        2   0.16 %
unsigned_integer_underflow                                 1   0.08 %
uninitialized_variable                                     1   0.08 %
----------------------------------------------------------------------
Total                                                   1213


So 1213 commits for 1787 - 38 = 1749 fixed issues: the difference is accounted for by duplicated issues, issues that do not reproduce reliably and end up being closed, and issues fixed in code bases other than GDAL.

Let's dig into those categories, from most frequently hit to lesser ones:

  • integer_overflow_signed: an arithmetic operation on a signed integer whose result overflows its size. This is undefined behavior in C/C++, meaning that anything can happen in theory. In practice, most reasonable compilers and common CPU architectures implementing two's-complement signed integers will have a wrap-around behavior, and not crash when the overflow occurs. However this often has later consequences, like allocating an array of the wrong size, leading to out-of-bounds accesses (a minimal sketch follows this list).

  • excessive_memory_use: this is not a vulnerability by itself. This issue is raised because processes that run under OSS-Fuzz are limited to 2 GB of RAM usage, which is reasonable given that OSS-Fuzz manipulates input buffers that are generally only a few tens of kilobytes. It is thus expected that for those small inputs, RAM consumption should remain small. When that's violated, it is generally because the code puts too much trust in fields of the input data that drive a memory allocation, without checking them against reasonable bounds or the file size (see the sketch after this list). However for some file formats, it is difficult to implement definitive bounds because they allow valid small files that need a lot of RAM to be processed. Part of the remaining open issues belong to that category.

  • other_issue: issues that could not be classified under a more precise category. Bonus points for anyone improving the script to analyze the diff and figure out the category from the code when the commit message lacks details! Or perhaps just parse the OSS-Fuzz issue itself, which gives a categorization.

  • buffer_overflow_unspecified: an access outside the validity area of some buffer (string, array, vector, etc.), where we couldn't determine whether it was heap-allocated or stack-allocated, or whether it was a read or a write attempt. Potentially can result in arbitrary code execution.

  • excessive_processing_time: this is when a program exceeds the timeout of 60 seconds granted by OSS-Fuzz to complete processing of an input buffer. This is often due to a sub-optimal algorithm (e.g. quadratic performance where linear can be achieved), or an unexpected network access. Most of the remaining open issues are in that category. A significant number are also in an out-of-memory situation hit by the fuzzer while iterating over many inputs, which often cannot be reproduced on an individual test case. We suspect heap fragmentation to happen in some of those situations.

  • null_pointer_dereference: a classic programming issue: accessing a null pointer, which results in an immediate crash.

  • integer_overflow_unsigned: this one is interesting. Technically in C/C++, overflow of unsigned integers is well-defined behavior: wrap-around is guaranteed by the standards. However we assumed that in most cases the overflow was unintended and could lead to similar bugs as signed integer overflow, hence we opted in for OSS-Fuzz to consider those overflows as issues. For the very uncommon cases where the overflow is valid (e.g. when applying a difference filter on a sequence of bytes), we can tag the function where it occurs with a __attribute__((no_sanitize("unsigned-integer-overflow"))) annotation.

  • division_by_zero_integer: an integer divided by zero, with zero being evaluated as an integer. Results in an immediate crash on at least the x86 architecture.

  • heap_buffer_overflow_read: read access outside the validity area of a heap-allocated data structure. Generally results in a crash.

  • unspecified_crash: crash for an unidentified reason. Same bonus points as above for helping to categorize them better.

  • stack_call_overflow: recursive calls to methods that end up blowing the size limit of the stack, which results in a crash.

  • memory_leak_error_code_path: memory leak that is observed in a non-nominal code path, that is, on corrupted/hostile datasets.

  • heap_buffer_overflow_write: write access outside the validity area of a heap-allocated data structure. Generally results in a crash, and can sometimes be exploited for arbitrary code execution.

  • division_by_zero_floating_point_unknown_consequence: a variable is divided by zero, and this operation is done with floating-point evaluation. In C/C++ this is undefined behavior, but on CPU architectures implementing IEEE-754 (that is, pretty much all of them nowadays) with default settings, the result is either infinity or not-a-number. If that result is then cast to an integer, this is undefined behavior again (generally not crashing), with various potential bugs as a consequence (which can be crashing); see the third sketch after this list.

  • stack_buffer_overflow: a read or write access outside of a stack-allocated buffer. Often results in crashes and, if a write access, can sometimes be exploited for arbitrary code execution.

  • invalid_cast: this may be an integer cast to an invalid value for an enumeration (unspecified behavior, which can result in bugs due to not catching that situation later), or an instance of a C++ class cast to an invalid type (unspecified behavior, crash likely).

  • invalid_shift_unspecified_direction: a left- or right-shift binary operation, generally on a signed integer. For left-shifts, this is when the most significant bit ends up being set (either by shifting a positive value too far, which yields a negative value, or by shifting a negative value), or when shifting by a number of bits greater than or equal to the width of the integer. For right-shifts, this is when shifting by a number of bits greater than or equal to the width of the integer. These undefined behaviors do not result in immediate crashes on common compilers/platforms, but can lead to subsequent bugs.

  • memory_leak_unspecified: self-explanatory.

  • assertion: an assert() in the code is hit. A lot of what we initially think of as programming invariants can actually be violated by specially crafted input, and should be replaced by classic checks that error out in a clean way.

  • invalid_enum: a particular case of invalid_cast where an invalid value is stored in a variable of an enumeration type.

  • division_by_zero_floating_point_harmless: a floating-point division by zero whose consequences are estimated to be harmless. For example, the NaN or infinity value is directly returned to the user and does not influence further execution of the code.

  • infinite_loop: a portion of the code executes endlessly. This is a denial of service.

  • integer_overflow_unsigned_harmless: an example of that could be some_string.substr(some_string.find(' ') + 1) to extract the part of a string after the first space character, or the whole string if there is none. find() will return std::string::npos, which is the largest positive size_t, and adding 1 to it wraps around to 0 (see the fourth sketch after this list).

  • invalid_memory_dereference: generally the same as a heap_buffer_overflow_read.

  • invalid_shift_left: mentioned above.

  • double_free: a heap-allocated buffer is destroyed twice. A later crash is likely to occur, and this could potentially be exploited for arbitrary code execution.

  • stack_buffer_overflow_read: mentioned above.

  • use_after_free: a subcase of heap_buffer_overflow_read where a buffer is accessed after it has been freed.

  • integer_overflow_harmless: mentioned above, but here, if we exclude the undefined behavior aspect of it, the consequences are estimated to be harmless.

  • undefined_behavior_unspecified: some undefined behavior of a non-identified category, restricted to those caught by clang's UBSAN analyzer.

  • negative_size_allocation: a negative size (actually an unsigned integer with its most significant bit set) is passed to a heap memory allocation routine. Not a bug by itself, but often the reflection of a previous issue.

  • unhandled_exception: a C++ exception that propagates up to main() without being caught. A crash is ensured for C code, or for C++ callers not expecting it. As the fuzzer programs used by GDAL use the C API, such an exception popping up is definitely a bug.

  • invalid_shift_right: mentioned above.

  • unsigned_integer_underflow: similar to the overflow, but going through the negative values. Well-defined behavior according to the standards, but often undesirable.

  • uninitialized_variable: access to a variable whose content is uninitialized. There are very few instances of that. The reason is probably that we extensively use strict compiler warnings and static code analyzers (cppcheck, clang static analyzer and Coverity Scan), which are very good at catching such issues.
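
To make a few of these categories more concrete, here are some minimal C++ sketches. These are hypothetical illustrations written for this post, not code extracted from GDAL:

// Sketch 1 - integer_overflow_signed: a multiplication overflows and
// later drives an allocation of the wrong size.
#include <climits>
#include <cstdlib>

void* allocate_image(int width, int height)
{
    // e.g. 65536 * 65536 overflows a 32-bit int: undefined behavior.
    // With the usual wrap-around, num_bytes is far smaller than intended
    // and subsequent writes into the buffer go out of bounds.
    int num_bytes = width * height;
    return malloc(num_bytes);
}

void* allocate_image_checked(int width, int height)
{
    // Checking the bounds up front avoids the overflow entirely.
    if (width <= 0 || height <= 0 || width > INT_MAX / height)
        return nullptr;
    return malloc(static_cast<size_t>(width) * height);
}

// Sketch 2 - integer_overflow_unsigned: intended wrap-around can be
// annotated so that the sanitizer does not flag it.
#include <cstddef>
#include <cstdint>

__attribute__((no_sanitize("unsigned-integer-overflow")))
uint32_t rolling_hash(const uint8_t* data, size_t len)
{
    uint32_t h = 0;
    for (size_t i = 0; i < len; ++i)
        h = h * 31u + data[i]; // modulo-2^32 arithmetic is intended here
    return h;
}

// Sketch 3 - division_by_zero_floating_point_unknown_consequence: the
// division itself yields infinity or NaN on IEEE-754 hardware, but
// casting that result to an integer is undefined behavior again.
int percentage(double part, double total)
{
    double ratio = part / total;            // total == 0.0 gives inf or NaN
    return static_cast<int>(ratio * 100.0); // undefined if ratio is inf/NaN
}

// Sketch 4 - integer_overflow_unsigned_harmless: the wrap-around is
// intended and harmless. With no space in the string, find() returns
// std::string::npos (the largest size_t), npos + 1 wraps to 0, and the
// whole string is returned.
#include <string>

std::string after_first_space(const std::string& s)
{
    return s.substr(s.find(' ') + 1);
}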

Despite the large number of categories caught by the sanitizers run by OSS-Fuzz, there are some gaps not covered. For example, casting an integer (or floating-point value) to a narrower integer type, with a value that does not fit into the target type. Those casts are considered "implementation defined" (the compiler must do something, potentially different from another compiler, and document what it does), and thus are not caught by undefined behavior sanitizers.
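
As a hypothetical illustration (again, not GDAL code), here is such a narrowing conversion, which the undefined behavior sanitizers do not flag:

#include <cstdint>

int16_t to_int16(int32_t v)
{
    // Implementation-defined for values outside [-32768, 32767]: on
    // common platforms the value is truncated modulo 2^16, so e.g.
    // 70000 silently becomes 4464, with no sanitizer report.
    return static_cast<int16_t>(v);
}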

PS: for those interested in academic research analyzing the outputs of OSSFuzz, you can find this paper (where GDAL actually appears in one figure).

by Even Rouault (noreply@blogger.com) at August 31, 2021 02:39 PM

August 30, 2021

I am currently looking for data I can use in my classes about spatial data analysis. A great source of data I found is the Movebank, a free, online database of animal tracking data hosted by the Max Planck Institute of Animal Behavior. Its aim is to help animal tracking researchers manage, share, protect, analyze and archive their data. This short article provides a nice short introduction to the type of data available on this site.

The data is great for something else I have wanted to do for some time now: trying out the temporal controller in QGIS. This tool brings native temporal support to QGIS. It is the successor of the celebrated TimeManager plugin and has been available since version 3.14. The main developer, Nyall Dawson, made a video demonstrating some of the capabilities of this new tool. Definitely something to check out. Below you can find the steps I followed to create the animated map (you can find the map at the end of this post). Note: click on an image to enlarge it.

Cattle movements

To use the temporal controller, you obviously need temporal data. This can be raster or vector data. The animal tracking data available in the Movebank database is a .csv data layer with a number of columns, including the coordinates and the date-time column. This can be imported as a vector point layer in QGIS.

Study area with the 33 animal tracks. Background image: Bing satellite via the QuickMapServices plugin for QGIS. In this study, we’ll focus on the eastern tracks along the main river.

For this post, we use a dataset collected by Moritz et al.1 for a study about grazing pressure in pastoral systems in the Logone Floodplain in Cameroon2. I particularly liked this study because of how it combines different approaches and methods (GPS/GIS, video recordings of animal behavior, and ethnographic methods). But that is something for another time.

The GPS data set consists of 33 tracks, representing the daily movement of individual animals during the day (21 tracks) or night (12 tracks) in three different locations. As mentioned above, the data comes as a .csv file, which we import and subsequently save as a vector layer in a new Geopackage.

Import csv and save in geopackage

Import the csv file in QGIS.

Save the data as the layer cattlemovement in the (new) Geopackage cattlemovementdata.gpkg.

Define the symbology, assigning random colors to each of the tracks. Tracks are identified by the categories in the ‘individual-local-identifier’ column.

Data preparation

The locations of the animals were recorded at 3-second intervals (only if the animal was moving)2. However, the date-time stamps provided in the Movebank data file are rounded to whole minutes. Information about the seconds between consecutive measurements is provided separately in the column ‘study-specific measurement.’

Because there are some missing observations, creating complete time-stamps based on this information is a bit tricky, so I will leave that as a challenge for you. For visualizing the animal movements, a one-minute time interval is good enough. It does mean we need to compute the average locations per minute first.

Calculate average position per minute

The first step is to create a new column tracktime that combines the track ID and the date-time stamp. This makes for a unique ID that can be used to group the location points that need to be averaged. Combining the two columns can be done using the expression "individual-local-identifier" || '_' || to_string( "timestamp" ) in the Field calculator. Note that we add the underscore to make it easier later on to split the created strings back into two columns with the original track identifiers and the date-time stamps.

Concatenate the columns with the track ID and the timestamp.

To speed up subsequent calculations, we create an index on the newly created field, using the function Create attribute index in the processing toolbox.

Create an index on the column tracktime.

To calculate the arithmetic mean of the coordinates of each tracktime category, we use the Mean coordinates function from the processing toolbox.

Create a new layer with the average position per tracktime category.

The resulting layer does not contain the track ID and date-times. To get this ‘back,’ we need to split the tracktime ID’s we created earlier back into two columns with the track ID’s and the date-time records. To do so, we use the string_to_array() function in the Field calculator; string_to_array("tracktime", "_")[0] to get the track ID’s and string_to_array("tracktime", "_")[1] to get the date-time values.

Extract the track ID’s from the column tracktime.

Extract the date-time records from the column tracktime.

We now have a vector layer cattlemovement_perminute with the location data per minute. The tracks were recorded over a 15-day period, with some longer periods between tracks. To speed things up, and to reduce the length of the animated map, we use a subset of the recorded tracks, e.g., those for the tracks in the Cubuna.

Select and save tracking data for Cubuna

Select features using the ‘select features’ option.

Save the selected features as a new layer using the context menu: right click on the layer cattle movement, and in the context menu, select ‘Export > Save selected Features as’.

Save the new vector layer ‘Cubuna’ in the (existing) Geopackage ‘cattlemovementdata.gpkg’ (of course, you can also save it in a new Geopackage if you prefer that).

Symbologies

In the Layers window, we duplicate this layer (right-click the layer and, in the context menu, select Duplicate layer) and rename it Cubuna_background. The idea is to use the Cubuna layer to show the moving points, and the Cubuna_background layer to keep the points visible, but in another color. To this end, we assign random fill colors to each of the tracks in the Cubuna layer, with white borders for the day tracks and black borders for the night tracks. For the Cubuna_background layer, we assign a semi-transparent white color to the day tracks and a semi-transparent black color to the night tracks.

Define the symbology

Symbology for the point layer symbolizing the point locations.

Symbology for the point layer symbolizing the tracks.

Temporal controller

The next step is to enable the Dynamic temporal control under the Temporal tab of the Layer Properties window. With the dynamic temporal control enabled, QGIS will display features/points at certain times or time intervals. Depending on the data, there are different configuration options. We use the Single field with Date/Time option with the date_time column as input. We furthermore set the event duration to 1 minute. This defines how long the points will be visible after the start.

Enable the temporal control

The Temporal tab can be found in the Layer properties 1. The date and time for each location is determined by the values in the field date_time. We select Single Field with Date/Time from the configuration drop-down menu 2, and the field date_time to define the date and time 4. Finally, we set the event duration to 1 minute 3.

Enable the dynamic temporal control for the ‘Cubuna’ layer in the Layer properties window.

We do the same for the ‘Cubuna_background’ layer, only this time we select the ‘Accumulate features over time’ option 5. With this option enabled, the points will remain visible after activation.

Enable the dynamic temporal control for the ‘Cubuna_background’ layer in the Layer properties window.

We can now open the Temporal Control Panel to preview the animated map, change the animation speed, or change the temporal range to be animated.

Preview animated map

The layers under temporal control have a clock symbol next to the layer name in the Layers window 1. The Temporal Controller panel can be opened by clicking on the clock icon on the Map Navigation Toolbar 2 or through View → Panels → Temporal Controller. In the Temporal Controller window, we can now click on Animated Temporal Navigation (play icon 3) to activate the animation controls. Click Set to Full Range (refresh icon 4) to automatically set the time range to match the dataset. Set the step to 1 minute 5. Now we can preview the animation by clicking the Play button 6.

Preview of the animated map with the Temporal controller.

Note that if the animation is too fast or too slow, you can adjust the frame rate by clicking Temporal Settings (yellow gear icon 7). Decreasing the frame rate (frames per second) will slow down the animation.

It would be nice if we could add a label that displays the time frame on the map. We can do that using the built-in Title Decoration.

Add label with time stamp

The following is based on the tutorial by Ujaval Gandhi3. Go to View → Decorations → Title Label.

Adding a title label to the map.

Click the checkbox to enable it, click the Insert an Expression button, and enter the following expression to display the date and time.

[%format_date(@map_start_time, 'dd MMMM yyyy hh:mm:ss')%]

Here, the variable @map_start_time contains the timestamp of the current time slice being displayed. So we can use that timestamp and format it to display the date and time of occurrence. See the QGIS documentation for more information about the syntax and options.

Define the title label text.

The font type and size, background color and placement can all be adapted to your liking. In the example, I used a white font (Noto Sans Georgian, 22 pt) on a 43% transparent black background.

Export the animated map

So now we have an animated map that we can view and explore in QGIS. But what if we want to share the map? The Temporal Controller has the option to export the frames of the animation as individual png files.

Export the animated map

The first step is to export the map animation. To do so, we select Export Animation (save icon) in the Temporal controller window. In the Export Map Animation dialog, we click on the ... button next to Output directory 1 to choose the directory in which the images will be saved. We use Calculate from layer 2 to set the extent to match that of the Cubuna point layer. Activate the little lock 4, and set the output height (the width will be adjusted automatically) 3. Optionally, the time range can be set, e.g., to limit the video to the first day.

Export map animation.

Once the export finishes, we have a large number of png images in the output directory, each representing a 1-minute step. We can now convert these into an animated image (gif) or a video. Given the large number of png files we just created, a gif file would be very large and would take a long time to generate. Creating a video is therefore the better option.

There are various tools, but below we are going to use FFmpeg, a cross-platform solution to record, convert and stream audio and video.

From pngs to video

FFmpeg is a command-line tool. If you are on Windows, you can use the command line (cmd) or Windows PowerShell. Either way, we first need to go to the working directory with the png images. In the console, type:

cd C:\users\brp\Desktop\movementdata

Now we use FFmpeg to convert the png files in that folder to a video. With the commands below, we create two videos, one in the popular mp4 format, the other in the mkv format. The most important parameter is -i, which stands for input, followed by cattlemovement%04d.png. This part of the command identifies the input files as all the png files in the folder whose names start with ‘cattlemovement’, followed by a 4-digit number.

ffmpeg -framerate 60 -i cattlemovement%04d.png -crf 20 cattlemovement2.mp4
ffmpeg -i cattlemovement%04d.png -c:v libx264 -preset slow -crf 22 -c:a copy cattlemovement.mkv

I leave it up to you to find out more about the other parameters. This page provides an overview of some of the main options, and this page focuses more specifically on the various encoding options. And there is of course the official documentation.

The resulting video is around 8 MB, which is pretty good given that all the input png files together were 6.45 GB. And in case you are curious, a gif file would have been around 218 MB and would have taken a lot more time to create. So yes, creating a video definitely is a good choice here. But the video, that was what it was all about. Check it out below (if it doesn’t automatically play, hit the play button).

So how useful is a video like the one above? Not sure, to be honest. It does raise some interesting questions. Like, how do the pastoralists decide where to go? They must know the area pretty well, given that they need to water their animals (i.e., reach the water holes, here marked by blue dots) on time. And what about those night tracks? I guess that is exactly what I like about this kind of map: it makes you curious about what is going on 😃. And check out the article for some answers.

Afterword

Hope you enjoyed reading the post. Want to see more examples? Check out this post by Topi Tjukanov and this tutorial by Ujaval Gandhi, both of which served as inspiration for this post. And there are a lot of inspiring video tutorials on YouTube. Last but not least, thanks to Moritz et al. for providing the data and feedback on questions, and to the Movebank for the answers to my inquiries.




References

1. Moritz M. Data from: An integrated approach to modeling grazing pressure in pastoral systems: the case of the Logone Floodplain (Cameroon). Published online 2018. doi:10.5441/001/1.J682DS56
2. Moritz M, Soma E, Scholte P, et al. An Integrated Approach to Modeling Grazing Pressure in Pastoral Systems: The Case of the Logone Floodplain (Cameroon). Human Ecology. 2010;38(6):775-789. doi:10.1007/s10745-010-9361-z
3. Gandhi U. Animating time series data (QGIS3). Published online 2019. https://www.qgistutorials.com/en/docs/3/animating_time_series.html

August 30, 2021 12:00 AM

August 27, 2021

I am very glad to announce that the paper "Semi-Automatic Classification Plugin: A Python tool for the download and processing of remote sensing images in QGIS" has been published in the prestigious Journal of Open Source Software.
The paper is freely available at this link.


I am very grateful to the reviewers and editors of the Journal for their valuable work, making an exceptional contribution to the open source initiative.
I also invite you to contribute to the Journal, for instance by volunteering to review.

This paper is very important as it describes the purpose and characteristics of the Semi-Automatic Classification Plugin.

If you are using the Semi-Automatic Classification Plugin in your research please cite as:
Congedo, Luca, (2021). Semi-Automatic Classification Plugin: A Python tool for the download and processing of remote sensing images in QGIS. Journal of Open Source Software, 6(64), 3172, https://doi.org/10.21105/joss.03172

For any comment or question, join the Facebook group or GitHub discussions about the Semi-Automatic Classification Plugin.

by Luca Congedo (noreply@blogger.com) at August 27, 2021 05:59 PM

August 26, 2021

Dear reader,

Do you work with geographic data, have mastered QGIS, and would like to automate your processes, but don't know how to program?

Then this is your chance: Geocursos has just launched a new course that will teach you to program in Python from scratch and take you to automating your GIS processes, creating plugins, interpolation and much more.

Interested?

👉 Visit the website and enroll!

https://geocursos.com.br/combo-python-do-zero

by Fernando Quadro at August 26, 2021 02:58 PM

August 24, 2021

We just released version 1.7.0 of SMASH to the stores. This was initially planned as a bugfix release, but then we got caught up in the vortex of some advanced users and their well-formulated comments, feature requests and bugfix ideas. This led to a set of new features:

Form enhancements:

Probably the most important feature: forms can now also be used for postgis and geopackage data sources. Until now this was possible only for project notes.


Autocomplete combos for very long lists of choices:


 

Sketches are back (well, for those that came from geopaparazzi):

 

String combos in forms can now be encoded. This means that each item of a combo can have a label and a value. That is another nice feature brought in by the Georepublic people.

 

Export project images to folder

Another feature asked for by geopaparazzi lovers. And here it is:


 

Contour lines

Mapsforge (or better, andromap) maps can now be displayed with contour lines.



Other fixes and little enhancements:

  • it is now possible to add points by GPS and map center when editing geopackage/postgis layers
  • geopackage layer selection has been improved
  • zoom-in when notes are very near has been enhanced
  • when selecting multiple notes, previous popups are properly disposed
  • tile based layer images now properly update when switching layers
  • camera settings (e.g. resolution) are properly applied
  • form notes now always show the label. They didn't when exporting to gpx/kml and in the notes list
  • log merging now uses the right master log (the first)


This version is aligned with version 3.2 of the Geopaparazzi Survey Server.


Enjoy!!!

by moovida (noreply@blogger.com) at August 24, 2021 08:27 AM

August 20, 2021

The GeoTools team is pleased to share the availability of GeoTools 24.5:

  • geotools-24.5-bin.zip
  • geotools-24.5-doc.zip
  • geotools-24.5-userguide.zip
  • geotools-24.5-project.zip

This release is published to the OSGeo maven repository, and is made in conjunction with GeoServer 2.18.5. This is a maintenance release and is a recommended upgrade for all users of the GeoTools library. This is the last ...

by Andrea Aime (noreply@blogger.com) at August 20, 2021 10:45 AM

We are happy to announce GeoServer 2.18.5 release is available for download (zip and war) along with docs and extensions.

This GeoServer 2.18.5 release was produced in conjunction with GeoTools 24.5 and GeoWebCache 1.18.4. This is a maintenance release recommended for production systems. It is also the last release of the 2.18.x series; users are warmly encouraged to upgrade to 2.19.x, or to 2.20.0 when it is released next month, September 2021.

Thanks to everyone who contributed, and to Alessandro Parma (GeoSolutions) and Andrea Aime (GeoSolutions) for making this release.

Improvements and Fixes

This release improves the importer module logging, as well as the documentation on how to enable and use catalog parametrization.

Fixes included in the release

  • GEOS-10173 CoverageViewReader’s format not being secured with Geofence-Geoserver
  • GEOS-10162 GeoServerOAuthAuthenticationFilter creates Anonymous authentication when preAuthenticated principal is not present
  • GEOS-10193 Indirect imports will drop the target table if there is any failure during the import process

For details check the 2.18.5 release notes.

About GeoServer 2.18

Additional information on GeoServer 2.18 series:

by Andrea Aime at August 20, 2021 12:00 AM

August 19, 2021

The 17th International gvSIG Conference will be held from December 1st to 3rd, returning to an on-site format if the health situation permits, at the School of Engineering in Geodesy, Cartography and Surveying (Universitat Politècnica de València, Spain), under the slogan “gvSIG solutions: Recovering the future”.

Communication proposals, for both papers and posters, can now be submitted to the email address conference-contact@gvsig.com. Information regarding the rules for presenting communications, as well as the deadlines, can be found in the Communications section of the event website.

In addition, the registration period for the Conference will open at the end of September. Registration will be free of charge (limited capacity) and available through an application form on the Conference web page.

Organizations interested in collaborating in the event can find information in the ‘How to collaborate’ section, with different levels of sponsoring.

All the information related to the conference, including workshops information, will be published at gvSIG Blog.

We look forward to your participation!

by Mario at August 19, 2021 03:48 PM


August 18, 2021

The case study presents the C++ development of QGIS Desktop to support rendering of 3D results produced by TUFLOW’s 3D-capable solver, TUFLOW FV (10 minute read).

Introduction

TUFLOW is a suite of advanced 1D/2D/3D computer simulation software for flooding, urban drainage, coastal hydraulics, sediment transport, particle tracking and water quality. With over 30 years of continuous development, TUFLOW is internationally recognised as one of the industry leaders for hydraulic modelling accuracy, speed and workflow efficiency.

Lutra Consulting Ltd is a leader in software development for pre- and post-processing of hydraulic and meteorological results in open-source QGIS. We also work on the mobile data collection Input App and the GIS data synchronization service Mergin.

TUFLOW

In 2019 the TUFLOW team commissioned us to develop post-processing support for their TUFLOW Flexible Mesh format for QGIS 3.12. The format is a 3D stacked mesh, which consists of multiple stacked 2D unstructured meshes, each extruded in the vertical direction (levels) by means of a vertical coordinate.

TUFLOW

At that time QGIS only supported 2D meshes that defined results on vertices and faces. We had long been keen to extend the capabilities of the software stack to support 3D mesh data, so this was an exciting opportunity. Part of the task was also to include rendering support for TUFLOW model results in the QGIS 3D view. The project was delivered within one QGIS release cycle (less than 6 months until users could use it on their projects!).

Flooding simulation rendered as a mesh layer in QGIS 3D

Contact us at info@lutraconsulting.co.uk if you’d like to discuss the benefits of integrating your flood modelling software more tightly with QGIS or you have some custom QGIS development in mind.

C++ Development Process: From requirement to delivery

Communicate project with the community first

When making a substantial change to the QGIS codebase, the developer needs to write a technical specification of the QGIS changes for community discussion. QGIS Core Developers (of which Lutra is a part) can give valuable feedback on the overall technical approach, and the wider community can raise usability issues or enhancement proposals. Most importantly, each part of the QGIS code has its lead maintainers; for example Martin Dobias, our CTO, is the maintainer of the QGIS 3D code, and Peter Petrik is the maintainer of the mesh layer code. It is good practice to address the maintainers’, users’ and other developers’ concerns and feedback to ensure the feature can be implemented in QGIS.

So after a thorough discussion about the requirements with the TUFLOW team and an analysis of the existing tools for post-processing and display of the TUFLOW FV format, we came up with the QGIS Enhancement: Support of 3D layered meshes.

The community reaction was very positive and supportive. Time to start coding!

MDAL to support TUFLOW FV NetCDF format

Mesh Data Abstraction Library MDAL is a C++ library for handling unstructured mesh data. It provides a single data model for multiple supported data formats. MDAL is used by QGIS for data access for mesh layers. If you want QGIS to support your data format, you need to have a driver in MDAL that implements it.

MDAL

We added support for 3D stacked meshes and the TUFLOW FV format in MDAL. When we develop features in MDAL, we focus on quality code, so:

  • all changes get a proper code review,
  • all code has fully automated tests, with a coverage target of more than 90%,
  • documentation and manual testing are done after coding.

To implement the TUFLOW FV driver for 3D stacked meshes, we added a new API/interface in MDAL, so we needed to follow up with the QGIS changes in QgsMeshLayer and the MDAL data provider.

QGIS C++ Development to support stacked meshes and visualization in 3D

The implementation of large feature changes is best split into smaller but self-consistent parts. For example, the first pull request added the basic support for the new 3D stacked meshes. Each pull request we make has a screenshot or gif/video previewing the new functionality, follows the QGIS Coding Standards, has unit tests where necessary and includes documentation for the functions/classes added to the public interface. Once the request is merged, the features are available the next day in nightly builds on all platforms for testing!

3D Terrain in QGIS3

Final Steps: feedback, testing, documentation and presentation

When all the features were in QGIS master, the TUFLOW team used the Windows nightly builds to test the new features and provide feedback. After a small number of iterations, all issues were resolved and the implementation was signed off.

Shortly afterwards, the new official QGIS release was published and we started promoting the new features on our social media channels. The features developed under this contract were also promoted in the visual QGIS changelog.

Streamlines in QGIS3

Benefits for TUFLOW of supporting QGIS Core C++ Development:

  • Reduced development and maintenance costs for tools such as the TUFLOW Viewer QGIS Plugin since the new features are part of the QGIS core
  • By being part of the QGIS ecosystem it provides opportunities to approach QGIS users in the flooding and coastal modeling industry to use TUFLOW software
  • As a project sponsor, TUFLOW could ensure that the requirements of the new features meet the present and future needs of the TUFLOW user base.
  • At the beginning of the project, Lutra demonstrated all the current relevant capabilities of the QGIS ecosystem, making TUFLOW aware of the latest and greatest features
  • Allowed TUFLOW to fix upstream bugs in QGIS or MDAL, thanks to the open-source nature of the projects

Benefits for TUFLOW users:

Key benefits made available to TUFLOW users include:

  • Being able to work with TUFLOW models using open source GIS on all major operating systems
  • A full GIS application to support their data pre-processing
  • Logical and intuitive workflows
  • Visualisation and post-processing of TUFLOW results natively in QGIS via mesh layer
  • The development allows interactive plotting features for 3D results, such as 3D profiles and curtains that can be easily extracted, providing an improved user experience
  • Ability to use all native QGIS support and development channels in addition to TUFLOW support
  • Integration of internal workflows with powerful native QGIS features including projection support, GDAL/OGR integrations, background maps support (e.g. vector tiles) and printed flood maps.

Further Reading

Do you have any questions or would like to see a demo of the QGIS Mesh Layer? Contact us at info@lutraconsulting.co.uk or schedule a demo call calendly.com/saber-razmjooei/15min

Key words

QGIS, migration, optimised, speed up, fast, hydraulic modelling, water, 2D, 3D, open-source, cost reduction, software development, TUFLOW, TUFLOW FV

You may also like...

Input, a field data collection app based on QGIS, makes field work easy with its simple interface and cloud-based sync. Available on Android and iOS.

August 18, 2021 06:00 AM

August 17, 2021

A couple of weeks ago we noticed the article getting attention around Twitter. As it is using our WMS EOxMaps endpoint, we asked Erin Davis if we could host her idea here. At EOX we endorse brilliant and unique ideas that utilize our tools and data. Thank you for this most interesting contri ...

August 17, 2021 12:00 AM

August 03, 2021

The question

Suppose I have a categorical raster layer. Now I want to create a second raster layer with values that represent the surface area of the categories in the first layer. Below, I test four different methods to see which is the fastest.

The first three methods result in a raster layer with values that represent the surface area of the categories (A in figure 1). The fourth option calculates the area for each group of cells that forms a physically discrete area (B in figure 1). You can achieve the same with the first three options by first recategorizing the contiguous areas into unique categories, using the function r.clump.

Figure 1: The raster layer in the left upper corner shows the distribution of two categories, 1 and 2. In the first three options discussed in this post, the surface area is computed per category (A). The fourth option calculates the surface area per physically discrete areas with the same category (B).

I provide the Python code using the grass.script library, but of course you can run the same from the command line or using the menu. For the examples, I use the landclass96 raster layer from the North Carolina dataset. You can download the dataset here.

r.area & r.mapcalc

We can use the r.area addon to create a raster layer with values representing the size of the categories in the input map in terms of number of cells. Next, we multiply this by the surface area per raster cell using r.mapcalc.

# Import the libraries
import grass.script as gs
import datetime

# Set the region
gs.run_command("g.region", raster="landclass96")

# Compute the area per category (repeat 24 times)
begin_time = datetime.datetime.now()

for i in range(1,25):
    # Area per categories (number of cells)
    output1 = "test0_{}a".format(i)
    output2 = "test0_{}b".format(i)
    gs.run_command("r.area", input="landclass96", output=output1)

    # Convert number of cells to m2
    expr = "{} = {} * area()".format(output2, output1)
    gs.run_command("r.mapcalc", expression=expr)

# Print the runtime in seconds
runtime01 = (datetime.datetime.now() - begin_time).total_seconds()
print("The runtime is {} seconds".format(runtime01))

# Clean up
gs.run_command("g.remove", flags="f", type="raster", pattern="test0_*",
               quiet=True)
The runtime is 5.03994 seconds

r.stats and r.recode

We can use the r.stats function to compute the area for each raster category. Based on these values, we can then create a recode string and use it as input for the r.recode function to create the map with the surface area per category.

import grass.script as gs
import datetime

def surfaceArea(input, output):
  """ Compute a raster layer with for each raster cell the surface area of the
  category it belong to
  """
  # Compute the area per category
  p = gs.read_command("r.stats", flags="an", input=input, 
                      separator=";").split("\n")
  p = [i.replace('\r', '') for i in p]
  p[:] = [x for x in p if x]
  p = [i.split(";") for i in p]
  
  # Create the recode rules
  a = []
  for i in range(0, len(p)):
      sarea = float(p[i][1])
      a.append("{0}:{0}:{1}".format(p[i][0], sarea))
  rules = "\n".join(a)
  
  # Recode the input raster map based on the recode rules.
  gs.write_command("r.recode", input=input, output=output, 
                   rules="-", stdin=rules)

# Compute the area per category (repeat 24 times)
begin_time = datetime.datetime.now()

for i in range(1,25):
    # Area per categories (number of cells)
    output1 = "test0_{}".format(i)
    surfaceArea("landclass96", output1)

# Print the runtime in seconds
runtime01 = (datetime.datetime.now() - begin_time).total_seconds()
print("The runtime is {} seconds".format(runtime01))

# Clean up
gs.run_command("g.remove", flags="f", type="raster", pattern="test0_*",
               quiet=True)
The runtime is 4.110168 seconds 

It runs faster than the previous option. I am not sure whether the number of categories would make a difference though.

r.mapcalc & r.stats.zonal

The third option uses the r.mapcalc and r.stats.zonal functions. First we compute the area of each raster cell. Next, we sum these values per category.

# Import the libraries
import grass.script as gs
import datetime

# Set the region
gs.run_command("g.region", raster="landclass96")

# Compute the area per category (repeat 24 times)
begin_time = datetime.datetime.now()

for i in range(1,25):
    # Area per categories (number of cells)
    output1 = "test0_{}a".format(i)
    output2 = "test0_{}b".format(i)
    gs.run_command("r.mapcalc", expression = "{} = area()".format(output1))
    gs.run_command("r.stats.zonal", base="landclass96", cover=output1,
                   method="sum", output=output2)

# Print the runtime in seconds
runtime01 = (datetime.datetime.now() - begin_time).total_seconds()
print("The runtime is {} seconds".format(runtime01))

# Clean up
gs.run_command("g.remove", flags="f", type="raster", pattern="test0_*",
               quiet=True)
The runtime is 5.289945 seconds

Slightly slower than the first option. The difference is small, so I ran both options a number of times. Results showed that this option is consistently slower than the first option.

r.to.vect & v.to.db

The fourth option is different in that the raster layer is first converted to a vector layer, using r.to.vect. Then the v.to.db function is used to calculate the area per polygon. These values are used as the source for the raster values when converting the vector layer back to a raster layer using the v.to.rast function. Note that this means that areas are calculated for contiguous clusters of cells with the same category (B in figure 1).

# Import the libraries
import grass.script as gs
import datetime

# Set the region
gs.run_command("g.region", raster="landclass96")

# Compute the area per category (repeat 24 times)
begin_time = datetime.datetime.now()

for i in range(1,25):
    # Area per categories (number of cells)
    output1 = "test0_{}a".format(i)
    output2 = "test0_{}b".format(i)
    gs.run_command("r.to.vect", input="landclass96", output=output1,
                   type="area")
    gs.run_command("v.to.db", map=output1, option="area", columns="area",
                   units="meters")
    gs.run_command("v.to.rast", input=output1, output=output2, 
                   use="attr", attribute_column="area", memory=2000)

# Print the runtime in seconds
runtime01 = (datetime.datetime.now() - begin_time).total_seconds()
print("The runtime is {} seconds".format(runtime01))

# Clean up
gs.run_command("g.remove", flags="f", type="all", pattern="test0_*",
               quiet=True)
The runtime is 81.037101 seconds

This option is clearly much slower. However, when you need to compute multiple statistics per area, need to do some follow-up calculations, or need a vector layer as output, it might still be a good option.

Summary

The combination of r.stats and r.recode is the fastest option. It requires a bit more coding, but that is just part of the fun, isn’t it :-). The fourth option is clearly slower, which is largely due to the raster-to-vector conversion. Currently, a Google Summer of Code project is working on the parallelization of existing modules for GRASS GIS, including the r.to.* modules, so this may improve in the near future.

August 03, 2021 12:00 AM

August 01, 2021

This tutorial is about the conversion of images to Radiance at the Sensor’s Aperture or to Top Of Atmosphere (TOA) Reflectance. It is assumed that one has basic knowledge of SCP and the Basic Tutorials.

SCP includes several tools for preprocessing images such as Landsat, Sentinel-2 and Sentinel-3. Using satellite images from various sources can require preprocessing and radiometric correction.

Usually, remote sensing images are delivered as calibrated Digital Numbers (DN), and the conversion to radiance or reflectance can be performed through parameters that are provided with the image.

This tutorial aims to describe how to perform the conversion of remote sensing images to TOA reflectance. The calculation can be performed for all the bands at once in Band calc, knowing the required parameters.


Following is the video of this tutorial.





by Luca Congedo (noreply@blogger.com) at August 01, 2021 08:58 PM

Due to a long-term planned re-structuring of the addon repository, the installation of addons through g.extension is currently not working on UNIX-like systems. Installation of addons with the current stable version GRASS GIS 7.8.5 on OSGeo4W (Version 1) is not affected. All the necessary changes in g.extension have been implemented, merged and back-ported. The functionality will be back to normal very soon with the upcoming GRASS 7.8.6 and 8.0.0 releases.

August 01, 2021 12:00 AM

July 29, 2021

July 28, 2021

After three years of slow-paced development, IOSACal 0.5 is here.

As before, the preferred installation method is with pip in a virtual environment. The documentation is at https://iosacal.readthedocs.io/

This release brings the new IntCal20 calibration data and several improvements for different use cases, plus one important bug fix. Apart from myself, there were two contributors to this release; I’m grateful to Karl Håkansson and Wesley Weatherbee for their work.

These are the highlights from the release notes:

  • the project has moved to Codeberg for source code hosting and issue tracking. The new Git repository is at https://codeberg.org/steko/iosacal with a default branch name of main
  • there is an official Code of Conduct that all contributors (including the maintainer) will need to follow, based on the Contributor Covenant
  • the documentation has seen some improvements, in particular in the Contributing section. Overall, making contributions easier from both expert and novice users is a major theme in this release.
  • interactive use in Jupyter notebooks is made easier with CalibrationCurve that can be created in many ways (such as loading from an arbitrary file, or from a standard calibration curve called by shorthand)
  • fixed a bug that made plots with AD/CE setting incorrect (contributed by Karl Håkansson)
  • fixed a bug that caused a wrong plot density function for dates 80 BP to 0 BP (contributed by Karl Håkansson)
  • add IntCal20 calibration data (contributed by Wesley Weatherbee)

On the technical side:

  • the command line interface is now based on the Click library
  • most code is now covered by tests, based on pytest
  • Python 3.6 or above required
  • requires Numpy 1.18 and Matplotlib 3.0

I don’t have big plans for the next release. I would like to add more tests, modernize the code and make it easier to adapt / tinker with. The only major achievement I’m looking forward to is to submit an article about IOSACal to the Journal of Open Source Software.

by Stefano Costa at July 28, 2021 08:53 PM

One of the new features in QGIS 3.20 is the option to trim the start and end of simple line symbols. This allows the line rendering to trim off the first and last sections of a line at a user-configured distance, as shown in the visual changelog entry.

This new feature makes it much easier to create decorative label callout (or leader) lines. If you know QGIS Map Design 2, the following map may look familiar – however, the following leader lines are even more intricate, making use of the new trimming capabilities:

To demonstrate some of the possibilities, I’ve created a set of four black and four white leader line styles:

You can download these symbols from the QGIS style sharing platform: https://plugins.qgis.org/styles/101/ to use them in your projects. Have fun mapping!

by underdark at July 28, 2021 03:59 PM

July 27, 2021