Welcome to Planet OSGeo

March 15, 2026

With the QGIS Grant Programme 2025, we were able to support 6 enhancement proposals that improve the QGIS project. The following reports summarize the work performed:  

  1. QEP 332: Port SQL Query History to Browser — Report
    This enhancement has ported the “history” component of the DB Manager SQL dialog to the main QGIS browser “Execute SQL” dialog. The new command history is fully searchable and displays past commands in chronological groups, along with details of the associated connection, row count, and execution time. This change was introduced in QGIS 3.44. Screencasts of the work are available in the original pull request.
  2. QEP 333: Add screenshots to PyQGIS reference documentation — Report
    This enhancement has added screenshots to the PyQGIS reference documentation, e.g. for the QgsMapLayerComboBox.html, QgsTableEditorDialog.html, QgsColorButton.html and many more. While the original proposal only promised screenshots for 100 classes, this change ended up adding over 150. The final process for adding screenshots is straightforward and easy to repeat. We’ve seen screenshots being contributed by other developers too, and hopefully this trend continues.
  3. QEP 335: Adopt wasm32-emscripten as a build target for QGIS — Report
    With this enhancement, QGIS now officially supports the wasm32-emscripten build target. All qgis-js patches have been upstreamed and qgis-js no longer requires any QGIS patches. This allows for easier creation of new qgis-js versions in coordination with new QGIS versions. GitHub Actions CI ensures ongoing Emscripten compatibility. This lays the foundation for future WebAssembly-based QGIS applications and exploration of additional WASM possibilities (QGIS Processing, PyQGIS in browser, …).
  4. QEP 336: Trusted Projects and Folders — Report
    This enhancement has added trust levels for project files and folders (undetermined, untrusted, and trusted). The user’s trust determination can be temporary (for a single QGIS session) or saved in the user profile’s settings and remembered across sessions. The status can be modified in the corresponding options dialog (or preconfigured in the global INI file). Project trust is used to determine whether embedded scripts are permitted to run (including macros, custom expression functions, map layer actions, and attribute form custom init code).
  5. QEP 337: Coverity Scan cleanup — Report
    This enhancement has seen a massive cleanup to the QGIS code base via hundreds of fixes to issues reported by the Coverity Scan tool. From the original 1075 issues identified by Coverity Scan at the start of the project, we are now down to 145 remaining outstanding issues. All false positive issues have been marked accordingly, and many fixes submitted to QGIS to remedy valid issues in the QGIS code. The remaining issues are either non-trivial or ambiguous. Several related projects also saw fixes submitted (including MDAL, laz-perf, untwine, PDAL wrench and tinygltf libraries).
  6. QEP 338: SIP Incremental builds — Report
    This enhancement has improved the performance of clean builds through improvements to SIP itself, as well as changes on the QGIS side that avoid rebuilding unchanged code generated by SIP. With code compilation now taking longer than SIP code generation, this effectively gives us incremental builds, just at a larger granularity.

Thank you to everyone who participated and made this round of grants a great success and thank you to all our sustaining members and donors who make this initiative possible!

by underdark at March 15, 2026 05:24 PM

In the last few years we’ve seen a huge increase in the prevalence of a class of computational tools labelled as “AI” – or Artificial Intelligence. Increasingly in 2025/26 there has been an uptick in the concept of “GeoAI” – applying these tools to geospatial and geographic problem spaces. They’ve come with an incredible hype cycle… Read More »“No” is a good way to limit “AI” risks.

by Adam at March 15, 2026 08:58 AM

March 14, 2026

TorchGeo 0.8.0 Release Notes

TorchGeo 0.8 includes 28 new pre-trained model weights and a number of improvements required for better time series support, including a complete rewrite of all GeoDataset and GeoSampler internals, encompassing 8 months of hard work by 23 contributors from around the world.

Highlights of this release

Open and independent governance

TorchGeo logo

You may have noticed that https://github.com/microsoft/torchgeo is now https://github.com/torchgeo/torchgeo. This is not an accident!

Note

TorchGeo now belongs to YOU. Please join our monthly Technical Steering Committee meetings!

TorchGeo was initially created as an intern project at Microsoft's AI for Good Lab back in 2021. Once we made it open source, we were blown away by how quickly it was adopted by the AI4EO community! Since then, over 100 people from around the world have contributed to making TorchGeo what it is today.

Despite being open source, we have received feedback from many current and potential contributors that they have found it difficult to contribute to TorchGeo due to its ownership by Microsoft. While Microsoft has been an excellent incubator for TorchGeo over the past four years, we believe TorchGeo has outgrown its incubation phase.

Over the past year, we have been working diligently with Microsoft to come up with a solution. As of this release, we are excited to announce the formation of the TorchGeo Organization, a governing body designed to ensure the independence and longevity of the TorchGeo Project. The TorchGeo Organization is led by a Technical Steering Committee (TSC), initially composed of the current maintainers of the TorchGeo Project:

TorchGeo now lives at https://github.com/torchgeo, and Microsoft has graciously volunteered to give away the copyright to YOU, the TorchGeo Contributors. We would like all TorchGeo users and developers to take ownership of the project, and thus invite each and every one of you to join our TSC meetings. Please join the #technical-steering-committee channel in the TorchGeo Slack for more information on our monthly meeting schedule.

Other than this new open and independent governance, not much will change with the TorchGeo Project. TorchGeo will always remain open source under an MIT license and be free for all users and developers. We hope this change will open up opportunities for more collaboration, more awesome libraries built on top of TorchGeo, and more confidence in the long-term future of the project!

Change detection support

BRIGHT dataset

TorchGeo 0.8 introduces support for change detection datasets (B x 2 x C x H x W)! This includes a new ChangeDetectionTask LightningModule with support for binary, multiclass, and multilabel change detection. This LightningModule supports both early-fusion (all encoders from timm and decoders from SMP) and the following change detection-specific late-fusion architectures:

TorchGeo also includes the following binary and multiclass change detection datasets:

All datasets have a corresponding LightningDataModule that is compatible with and tested against ChangeDetectionTask.

P.S. For other change detection models, check out @Z-Zheng's excellent torchange library!

Backwards-incompatible changes

Warning

TorchGeo 0.8 is unusual in the number of backwards-incompatible changes it includes. In preparation for a more stable 1.0 release in the future, we have made several changes to better support time series data. Below we motivate each change and describe how to migrate any existing code.

GeoDataset/GeoSampler: BoundingBox → GeoSlice

Prior releases of TorchGeo used a custom BoundingBox object for GeoDataset indexing:

from torchgeo.datasets import BoundingBox

bbox = BoundingBox(xmin, xmax, ymin, ymax, tmin, tmax)
ds[bbox]

This custom BoundingBox object was quite different from other libraries and lacked a lot of the flexibility needed for time series support. All inputs were required, even if space or time were unimportant, and only integer tmin/tmax were supported.

TorchGeo 0.8 adopts a powerful slicing syntax similar to numpy, xarray, and torch:

ds[xmin:xmax:xres, ymin:ymax:yres]
ds[:, :, tmin:tmax:tres]
ds[xmin:xmax, ymin:ymax, tmin:tmax]

Spatial-only, temporal-only, and spatiotemporal slices are all supported. Each slice can optionally specify the resolution of the returned data. If any min, max, or res values are missing, the defaults for the full dataset are used. While x and y are floats, t is a datetime.datetime, allowing more natural temporal slicing.
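As a self-contained sketch of what this syntax means at the Python level, note that each axis arrives as an ordinary slice object whose start/stop/step carry the min, max, and resolution. DemoDataset below is a hypothetical stand-in, not a TorchGeo class; a real GeoDataset resolves these slices against its spatiotemporal index:

```python
from datetime import datetime

# Hypothetical stand-in used only to show how the slice syntax
# decomposes; not part of TorchGeo.
class DemoDataset:
    def __getitem__(self, key):
        if not isinstance(key, tuple):
            key = (key,)
        # Each axis is a slice(start, stop, step) object.
        return [(s.start, s.stop, s.step) for s in key]

ds = DemoDataset()

# Spatial slice with an explicit resolution on the x axis:
print(ds[0:100:10, 0:200])
# Temporal-only slice using datetime objects:
print(ds[:, :, datetime(2024, 1, 1):datetime(2025, 1, 1)])
```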

If you are using GeoDataset in combination with GeoSampler, no changes are required for forwards-compatibility, as GeoSampler and GeoDataset.bounds now return these slices. BoundingBox is now deprecated and will be removed in a future release.

Tip

If you need a single BoundingBox-like object, you can use a tuple of slices like so:

bbox = (slice(xmin, xmax), slice(ymin, ymax), slice(tmin, tmax))

If you need to be able to calculate the area, intersection, or union of a bounding box, we suggest using shapely.box.
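For axis-aligned boxes, the kind of area and intersection arithmetic that shapely.box provides is simple enough to sketch with plain tuples of slices. The helper names below are illustrative, not part of TorchGeo or shapely:

```python
def box_area(b):
    # b is a (slice(xmin, xmax), slice(ymin, ymax)) pair
    return (b[0].stop - b[0].start) * (b[1].stop - b[1].start)

def box_intersection(a, b):
    # Returns the overlapping box, or None if the boxes are disjoint.
    xmin = max(a[0].start, b[0].start)
    xmax = min(a[0].stop, b[0].stop)
    ymin = max(a[1].start, b[1].start)
    ymax = min(a[1].stop, b[1].stop)
    if xmin >= xmax or ymin >= ymax:
        return None
    return (slice(xmin, xmax), slice(ymin, ymax))

a = (slice(0, 10), slice(0, 10))
b = (slice(5, 15), slice(5, 15))
overlap = box_intersection(a, b)  # (slice(5, 10), slice(5, 10))
```

In real code, shapely.box(xmin, ymin, xmax, ymax) gives you .area, .intersection, and .union directly.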

GeoSampler/splitters: BoundingBox → Polygon, Interval

In previous releases of TorchGeo, if you wanted to select a smaller region of interest (ROI) for training or validation, you could either use the roi parameter of GeoSampler or use roi_split and time_series_split to directly split your GeoDataset. However, only BoundingBox objects were supported.

In TorchGeo 0.8, you can now use arbitrary shapely.Polygon and pd.Interval objects for ROI and TOI bounds. These polygons do not have to be boxes; they can be any shape, including a GeoJSON outline of a complex island archipelago.

Tip

To migrate simple boxes to this new syntax, replace:

roi = BoundingBox(xmin, xmax, ymin, ymax, tmin, tmax)

with:

roi = shapely.box(xmin, ymin, xmax, ymax)  # note change in order
toi = pd.Interval(tmin, tmax)

GeoDataset/GeoSampler: rtree → geopandas

TorchGeo previously used the R-tree library for spatiotemporal indexing of geospatial data. This allowed for fast computation of intersection, union, and indexing. However, R-tree lacks support for a lot of desirable geospatial and geotemporal features, including non-rectangular polygons, automatic reprojection, datetime objects, and spatial/temporal aggregation.

TorchGeo 0.8 switches TorchGeo's spatiotemporal indexing backend from R-tree to geopandas, which supports all of these features and more with similar performance. Geopandas can efficiently scale to large datasets using dask-geopandas and multithreading. Entire shapefiles can be directly stored in geopandas instead of only storing a single bounding box.

This change will be most notable for users who write custom GeoDataset subclasses. If you are using a built-in dataset, you may not even notice this change. In the future, we plan to continue improving support for non-rectangular polygons in both the dataset and sampler so that nodata pixels can be easily avoided.

VectorDataset: fiona → geopandas

Similarly, all VectorDataset classes now use geopandas instead of fiona for data loading. This results in one less dependency and allows for complicated expressions without for-loops, often resulting in faster data loading. The only backwards-incompatible change here is that the get_label method now takes a pd.Series row as input instead of a fiona.Feature.

Change detection datasets: C x H x W → 2 x C x H x W

Previously, our time series datasets were inconsistent, with some datasets returning a single T x C x H x W object, others combining the T and C dimensions into a (T C) x H x W object, and others returning multiple C x H x W objects labeled image1, image2, image_pre, image_post, etc.

All change detection datasets now return a single 2 x C x H x W object. The remaining time series datasets will be changed in the next release to T x C x H x W.

Custom transform deprecation removals

This release removes several custom or private transforms that have been deprecated for several releases or are not compatible with time series data:

Removed                  Suggested Replacement
AugmentationSequential   kornia.augmentation.AugmentationSequential
_ExtractPatches          kornia.augmentation.CenterCrop
_Clamp                   torchvision.transforms.v2.Lambda(lambda x: torch.clamp(x, 0, 1))

CenterCrop is not identical to _ExtractPatches, and you may need to change the patch_size to get full coverage during evaluation. We are planning to upstream a method directly to Kornia or torchvision to better support this.

Dependencies

New dependencies

Removed dependencies

Changes to existing dependencies

  • kornia: 0.8.2+ now required (#3094)
  • lightning: 2.5.6 not supported (#3080)
  • numpy: 1.24+ now required (#1490)
  • rasterio: 1.4.3+ now required (#1490)
  • shapely: 2+ now required (#2747)
  • timm: 1.0.3+ now required (#2828)
  • xarray: 0.17+ now required (#1490)

Datasets

image

New datasets

Changes to existing datasets

  • Dataset plots: portrait → landscape (#3093)
  • BRIGHT: image_pre, image_post → image (#2862)
  • BRIGHT: add vmin/vmax to plots (#3097)
  • CaBuAr: (T C) → T x C (#2863)
  • CDL: add 2024 data (#2939)
  • ChaBuD: (T C) → T x C (#2878)
  • ChaBuD: fix link to homepage (#3128)
  • EDDMapS: add plot method (#2709)
  • GBIF: add plot method (#2741)
  • LEVIR-CD: image1, image2 → image (#2879)
  • iNaturalist: add plot method (#2743)
  • OSCD: image1, image2 → image (#2422)
  • OSCD: fix documentation of image sizes (#3128)
  • MMEarth: add plot method (#2759)
  • Sentinel-2: support for .SAFE multiresolution products (#3043)
  • Western USA Live Fuel Moisture: add plot method (#2769)
  • xView2: multi-temporal semantic segmentation → change detection (#2906)

Changes to dataset base classes

  • GeoDataset: rtree → geopandas (#2747)
  • GeoDataset: add support for slicing (#2804, #2847)
  • XarrayDataset: new base class, still experimental (#1490)
  • VectorDataset: fiona → geopandas (#2962, #3114, #3118, #3119)
  • VectorDataset: add support for object and instance detection (#2819)
  • IntersectionDataset: support spatial-only intersection (#2837)

Utilities

  • BoundingBox: deprecated in favor of GeoSlice (#2847)
  • pad_across_batches: time series collation function (#2921)
  • splitters: rtree → geopandas (#2747)
  • roi_split: support arbitrary Polygon roi (#2835)

Data Modules

New data modules

Changes to existing data modules

  • CaBuAr: use video transforms (#2863)
  • ChaBuD: use video transforms (#2878)
  • LEVIR-CD: use video transforms (#2879)
  • OSCD: use video transforms (#2422)
  • OSCD: support for non-square images (#3103)
  • xView2: use video transforms (#2906)

Losses

  • QR: improve numerical stability (#2796)

Models

Aurora Foundation Model

New model architectures

New model weights

Changes to existing models

  • All weight transforms are now exportable (#2893)
  • MOSAIKS: new alias for RCF (#2915)
  • Panopticon: speed up position embedding (#2888)

Samplers

  • GeoSampler: r-tree → geopandas (#2747)
  • GeoSampler: add support for slicing (#2804, #2847)
  • GeoSampler: support arbitrary Polygon roi, add toi (#2812)
  • GridGeoSampler: default stride = patch size (#2722)

Scripts

  • CLI: add --version argument (#2912)

Trainers

New trainers

Changes to existing trainers

  • Classification: class_weights list support (#2707)
  • Semantic segmentation: class_weights list support (#2707)
  • Semantic segmentation: add support for DPT, Segformer, and UPerNet decoders (#2828)
  • Semantic segmentation: add support for satellite image time series (#2921)

Transforms

New transforms

Removed transforms

Documentation

  • API: restructure model docs (#2892)
  • README: add YouTube channel (#3107)
  • README: add new podcast episode (#2947)
  • README: add more Slack links (#2953)
  • Related Libraries: auto-update metrics (#3001, #3011, #3013, #3120, #3125)
  • Related Libraries: TorchGeo rtree → geopandas (#2747)
  • Related Libraries: fix TerraTorch weights, URL, conda-forge (#3012, #3083, #3104)
  • Related Libraries: add GDL (#3008)
  • Related Libraries: add torchange (#3072)

Tutorials

  • TorchGeo: introduce new slicing syntax (#2747, #2804)
  • Embeddings: rename section header (#3087)

Governance

  • Add formal governance (#2514)
  • Copyright: Microsoft Corporation → TorchGeo Contributors (#2935, #2968, #3010)
  • Rename references to previous GitHub organization (#2941)
  • pyproject.toml: add explicit license-files (#3126)

Tests

  • Bundle all trainer tests together (#3066)
  • Silence shapely warnings in test data (#3077)
  • Reduce batch size to silence warnings (#3063)
  • No unconditional skips (#3096)

Contributors

This release is made possible thanks to the following contributors:

by adamjstewart at March 14, 2026 07:31 PM

March 13, 2026

Raster data is among the heaviest datasets in the GIS world. Orthomosaics, satellite imagery, and digital elevation models often reach tens or even hundreds of gigabytes.

Historically, working with these files has always been a challenge for geotechnology professionals. The main problems include:

  • Slow reading of large files
  • High disk consumption
  • The need for local storage
  • Difficulty of use in cloud environments
  • Poor scalability on GIS servers

It was in this scenario that the Cloud Optimized GeoTIFF (COG) emerged.

Today, COG is considered one of the most important formats for modern geospatial data infrastructures, making it possible to work with huge rasters far more efficiently.

In this article we will cover:

  • What a COG is
  • How it works
  • Why it is much faster
  • Advantages and disadvantages
  • How to create a COG
  • How to use COG with GIS servers
  • The real performance impact for users

1. The problem with traditional rasters

A traditional GeoTIFF was not designed for efficient remote access.

Imagine a 20 GB orthomosaic. A user accesses only a small area of the map. Even though the user needs just 1% of the image, the server may end up reading a large part of the file. This causes:

  • High disk I/O
  • Heavy CPU usage
  • Slow access
  • Poor scalability

2. What is a Cloud Optimized GeoTIFF (COG)?

The Cloud Optimized GeoTIFF (COG) is a variant of the GeoTIFF format optimized for efficient access over a network. It was designed so that applications can read only the pieces of the file they need. Instead of loading the entire raster, the client accesses only the relevant blocks.

This makes it possible to access very large rasters directly from:

  • HTTP servers
  • Object storage
  • Cloud infrastructures

without downloading the complete file.

3. How a COG works internally

COG performance relies on three main characteristics.

3.1 Internal tiling

In a COG, the raster is divided into smaller blocks called tiles, typically sized:

512 x 512 pixels

When a client requests a specific area of the map, only the necessary tiles are read.
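The tile arithmetic behind this is easy to sketch (a minimal illustration, not GDAL code): given a requested pixel window, only the 512 x 512 blocks it touches need to be fetched.

```python
def tiles_for_window(x0, y0, x1, y1, tile=512):
    """Return the (col, row) indices of the tiles intersecting the
    half-open pixel window [x0, x1) x [y0, y1)."""
    cols = range(x0 // tile, (x1 - 1) // tile + 1)
    rows = range(y0 // tile, (y1 - 1) // tile + 1)
    return [(c, r) for r in rows for c in cols]

# A 600 x 600 window starting at the origin touches only 4 tiles,
# no matter how large the full raster is.
print(tiles_for_window(0, 0, 600, 600))  # [(0, 0), (1, 0), (0, 1), (1, 1)]
```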

3.2 Overviews (resolution pyramid)

COGs normally contain internal overviews, meaning that reduced-resolution versions of the image are stored inside the file itself.

When the user views the map at smaller scales, the server reads only these reduced versions. This drastically reduces:

  • Data read volume
  • Rendering time
  • CPU consumption
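The selection logic can be sketched like this (illustrative only; GDAL and GIS servers handle this automatically): each overview halves the previous level, and the reader picks the coarsest version that still covers the requested display width.

```python
def pick_overview(full_width, display_width):
    """Return the downsampling factor (1, 2, 4, ...) of the coarsest
    overview whose width is still >= display_width."""
    factor = 1
    while full_width // (factor * 2) >= display_width:
        factor *= 2
    return factor

# For a 28676 px wide raster shown at ~1024 px, the 16x overview
# (1792 px wide) is enough, so only a small fraction of the pixels
# ever needs to be read.
print(pick_overview(28676, 1024))  # 16
```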

3.3 HTTP Range Requests

One of the key characteristics of COG is that it allows partial reads of the file over HTTP. Example request:

GET /imagem.tif
Range: bytes=10000-20000

The server returns only that part of the file.

This makes it possible to access COGs without downloading the entire raster, directly from:

  • Web servers
  • S3
  • MinIO
  • Cloud storage
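In Python, such a partial read can be sketched with the standard library. The URL below is a placeholder; against a real server that supports range requests, urlopen would return status 206 (Partial Content) with just that slice of the file:

```python
import urllib.request

# Placeholder URL — point this at a real COG on an HTTP server.
url = "http://servidor:9000/rasters/imagem.tif"

# Ask for bytes 10000-20000 only.
req = urllib.request.Request(url, headers={"Range": "bytes=10000-20000"})

# with urllib.request.urlopen(req) as resp:
#     chunk = resp.read()  # at most 10001 bytes
```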

4. Practical comparison: GeoTIFF vs COG

Visual comparison – GeoTIFF vs COG

4.1 Traditional GeoTIFF

Problems:

  • High disk usage
  • Heavy reads
  • Poor scalability

4.2 COG

Benefits:

  • Minimal data read
  • Much faster
  • Ideal for the cloud

5. Advantages of COG

Main advantages:

  • Partial raster reads
  • Efficient access over HTTP
  • Ideal for cloud computing
  • Integration with object storage
  • Reduced disk I/O
  • Excellent for large datasets

6. Disadvantages

Despite the advantages, there are a few points to consider:

  • Creation can be slow for very large rasters
  • Files can grow larger because of the overviews
  • Not ideal for frequently edited data
  • Requires an initial processing step

For these reasons, COG is best suited for final, publication-ready data.

7. How to create a COG

The first step is to convert a traditional raster to Cloud Optimized GeoTIFF (COG). The most common approach is to use GDAL:

gdal_translate input.tif output_cog.tif \
-of COG \
-co COMPRESS=LZW \
-co BLOCKSIZE=512 \
-co BIGTIFF=YES

7.1 Important parameters

Parameter       Function
-of COG         generates a Cloud Optimized GeoTIFF
COMPRESS=LZW    lossless compression
BLOCKSIZE=512   optimizes block-based reads
BIGTIFF=YES     required for large files
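If you script conversions, the same invocation can be assembled and run from Python via subprocess. This is a thin wrapper sketch (the helper name is illustrative) that assumes gdal_translate is on your PATH:

```python
import subprocess

def cog_command(src, dst, blocksize=512, compress="LZW"):
    # Mirrors the gdal_translate flags shown above.
    return [
        "gdal_translate", src, dst,
        "-of", "COG",
        "-co", f"COMPRESS={compress}",
        "-co", f"BLOCKSIZE={blocksize}",
        "-co", "BIGTIFF=YES",
    ]

cmd = cog_command("input.tif", "output_cog.tif")
# subprocess.run(cmd, check=True)  # uncomment to actually convert
```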

7.2 Validating the COG

After the conversion, it is important to check that the file was generated correctly:

gdalinfo arquivo_cog.tif

If the output contains:

LAYOUT=COG

the file was created correctly.
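This check can also be automated. A minimal sketch (helper names are illustrative; is_cog assumes gdalinfo is on your PATH) that scans the gdalinfo output for the LAYOUT=COG marker:

```python
import subprocess

def layout_is_cog(gdalinfo_output):
    # gdalinfo reports the layout among the image structure metadata.
    return any("LAYOUT=COG" in line for line in gdalinfo_output.splitlines())

def is_cog(path):
    out = subprocess.run(
        ["gdalinfo", path], capture_output=True, text=True, check=True
    ).stdout
    return layout_is_cog(out)
```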

8. COG and object storage

One of the biggest advantages of COG is that it works seamlessly with object storage:

  • S3
  • MinIO
  • Google Cloud Storage
  • Azure Blob Storage

In a typical architecture, the COGs live in a bucket and are served over HTTP, which makes it possible to build highly scalable infrastructures.

9. COG with GIS servers

Servers such as GeoServer can access COGs directly over HTTP: in a typical flow, the server fetches only the byte ranges it needs from the storage backend.

A major advantage is that the raster does not need to live on the GIS server.

10. Does COG replace the cache (GWC)?

No. COG and caching solve different problems: while COG optimizes raster reads, the cache optimizes delivery of rendered maps. The recommended architecture combines the two.

11. Performance benchmark

Let's compare three scenarios. Suppose you have a 20 GB orthomosaic; its performance would look roughly like this:

Traditional GeoTIFF

First access: 3 to 8 seconds, with heavy disk reads.

COG

First access: 0.5 to 2 seconds, with only partial reads.

COG + GeoWebCache

After caching: 20 to 80 milliseconds. Practically instantaneous.

12. Conclusion

The Cloud Optimized GeoTIFF has become one of the most important formats for modern raster data infrastructures.

It enables efficient access to large rasters, cloud integration, and scalable publishing through GIS servers.

Combined with object storage and servers such as GeoServer, COG makes highly performant architectures for distributing geospatial data possible.

by Fernando Quadro at March 13, 2026 08:02 PM

On March 11 I had the pleasure of presenting the webinar “Geomática hoy: del levantamiento al gemelo digital con datos abiertos, IA y estándares” (Geomatics today: from surveying to the digital twin with open data, AI, and standards), organized by the Comunidad Valenciana and Región de Murcia territorial delegation of the Ilustre Colegio Oficial de Ingeniería Geomática y Topográfica (COIGT) in collaboration with the gvSIG Association.

In the session I shared some reflections on how geomatics is evolving: from traditional surveying to more advanced approaches based on open data, interoperability, geospatial analysis, and artificial intelligence, which make it possible to build everything from territorial inventories to digital twins of the territory to support decision-making.

For those who could not attend live, the video of the webinar is now available and can be watched below. I hope you find it interesting and that it helps us keep reflecting on where our sector is heading.

by Alvaro at March 13, 2026 11:23 AM

March 11, 2026

When working with orthomosaics or very large rasters, one of the main challenges is how to store and publish the data with good performance, without overloading the GIS server.

A modern architecture that has been gaining adoption is based on COG, object storage, and GeoServer. This combination lets GeoServer read rasters stored in object storage directly, without copying them to the server.

In this post I will walk through a simple, practical step-by-step guide to implementing this architecture.

The key point is that GeoServer does not need to store the raster locally. It simply accesses the COG file directly in the storage.

Important: the COG plugin (HTTP or S3) must already be installed in GeoServer.

1. Convert the raster to COG

The first step is to convert the traditional raster to Cloud Optimized GeoTIFF (COG). This can be done with GDAL.

gdal_translate ortomosaico.tif ortomosaico_cog.tif \
-of COG \
-co COMPRESS=LZW \
-co BLOCKSIZE=512 \
-co BIGTIFF=YES \
-co OVERVIEWS=IGNORE_EXISTING

Important parameters

Parameter       Function
-of COG         generates a Cloud Optimized GeoTIFF
COMPRESS=LZW    lossless compression
BLOCKSIZE=512   optimizes block-based reads
BIGTIFF=YES     required for large files

2. Verify that the COG was created correctly

After the conversion, it is important to check that the file was generated correctly.

gdalinfo ortomosaico_cog.tif

The output should include something like:

LAYOUT=COG

along with the presence of overviews:

Overviews: 28676x21832, 14338x10916, ...

This confirms that the file is optimized for reading in the cloud.

3. Upload the file to storage (S3 or MinIO)

Now we need to send the file to an object storage service.

You can use:

  • MinIO (self-hosted)
  • Amazon S3
  • Google Cloud Storage

With MinIO, it is common to use the mc client.

Install the MinIO client (if you do not already have it):

wget https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc
sudo mv mc /usr/local/bin/

Configure access to the server

mc alias set minio http://SEU_SERVIDOR:9000 ACCESS_KEY SECRET_KEY

Send the raster to a bucket

mc cp ortomosaico_cog.tif minio/rasters/

After the upload, the file will be accessible at something like:

http://servidor:9000/rasters/ortomosaico_cog.tif

4. Configure the raster in GeoServer

Now let's configure the raster in GeoServer.

Open the administrative interface:

http://seu-servidor:8080/geoserver

Create a new store

Go to:

Stores → Add new Store

Choose:

GeoTIFF / Cloud Optimized GeoTIFF

Enter the COG URL

In the URL field, enter the path to the file:

http://servidor:9000/rasters/ortomosaico_cog.tif

Save the store.

5. Publish the layer

After saving the store, GeoServer will automatically detect the raster.

Just click:

Publish

Configure:

  • Bounding box
  • CRS
  • Layer name

Then save.

6. Test the service

The raster can now be accessed via:

  • WMS
  • WCS
  • WMTS

Example WMS endpoint:

http://servidor:8080/geoserver/wms

Or directly through GeoServer's Layer Preview.
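To exercise the endpoint from a script, a GetMap request can be sketched with the standard library. The host, layer name, and bbox below are placeholders; note that WMS 1.3.0 uses latitude-first axis order for EPSG:4326:

```python
from urllib.parse import urlencode

base = "http://servidor:8080/geoserver/wms"  # placeholder host
params = {
    "service": "WMS",
    "version": "1.3.0",
    "request": "GetMap",
    "layers": "rasters:ortomosaico_cog",  # placeholder layer name
    "crs": "EPSG:4326",
    # WMS 1.3.0 + EPSG:4326: bbox is miny,minx,maxy,maxx (lat first)
    "bbox": "-23.6,-46.7,-23.5,-46.6",
    "width": 512,
    "height": 512,
    "format": "image/png",
}
url = f"{base}?{urlencode(params)}"
print(url)
```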

7. Why is this architecture interesting?

Scalability

The storage can grow independently of GeoServer.

Performance

COG allows partial raster reads via HTTP Range Requests: the client requests only the part of the image it needs.

Cloud integration

The same architecture works with a variety of object storage services:

  • Amazon S3
  • MinIO
  • Google Cloud Storage

8. Conclusion

The combination of COG + object storage + GeoServer is currently one of the most efficient ways to publish large rasters in WebGIS environments. This approach lets you:

  • separate storage from serving
  • scale easily
  • improve data access performance

Best of all, everything can be implemented with open source software.

by Fernando Quadro at March 11, 2026 10:36 PM

Today I have the pleasure of presenting the webinar “Geomática hoy: del levantamiento al gemelo digital con datos abiertos, IA y estándares”, organized by the Comunidad Valenciana and Región de Murcia territorial delegation of the Ilustre Colegio Oficial de Ingeniería Geomática y Topográfica (COIGT).

The idea of the session is to share some reflections on the very interesting moment geomatics is living through. Never before have we had at our disposal so much data, so much processing capacity, and so many tools to turn geospatial information into knowledge that is useful for decision-making.

During the webinar I will take a tour connecting traditional surveying and cartography with technologies that are now part of the daily routine of many projects: remote sensing, GIS, Spatial Data Infrastructures, open standards such as OGC and INSPIRE, and the growing role of artificial intelligence in territorial analysis.

I will also talk about how, starting from open data and interoperability, it is possible to build solutions ranging from inventories and cadastral management to digital twins and dashboards for territorial management.

This webinar is part of a series organized by the COIGT to share experiences, reflections, and technologies related to geographic information.

In future sessions we will also take a deeper look at some of the tools in the gvSIG ecosystem, such as gvSIG Desktop, gvSIG Online, and gvSIG Mapps, and at how they can be applied in different professional fields.

The session takes place online from 17:00 to 18:00, with a final Q&A round.

I hope you find it interesting and that it helps us reflect on where geomatics is heading.

Register here

by Alvaro at March 11, 2026 12:30 PM

March 10, 2026

March 09, 2026

The wait is over! We are pleased to announce the new major release of QGIS 4.0.

Installers for Windows, Linux, and Mac are already out.

What’s new?

On the surface, existing users should expect a QGIS experience similar to what they have come to know and love from previous releases. Under the hood, however, 4.0 introduces significant changes to maintainability and usability. These changes ensure that QGIS 4.0 can unlock additional access to modern libraries while bringing much-needed performance and security improvements to the code base.

For developers

To ensure a smooth transition, we have retained deprecated APIs where possible, minimising the effort required for plugin developers to update their tools. While some legacy APIs (such as the Processing API from QGIS 2.x) are not guaranteed continued support and backwards compatibility throughout the lifespan of the QGIS 4.x series, developers supporting existing plugins can easily ensure their plugins are compatible with the new release using the Qt6 compatibility guide.

New features

While preparation for the QGIS 4.0 migration has been underway, the developer community has added over 100 new features across the application, making QGIS more powerful, more flexible, more secure, and generally just more awesome. Adjacent to developments associated with the code base, the budding community of QGIS users has continued to share resources, including projects, styles, scripts, and more, leading to an exciting period of growth for the revamped QGIS Hub and associated community sites.

For a whirlwind tour of all the new functionalities introduced in this release, you can view the highlight reel video on YouTube, and for a detailed rundown of the new features and improvements, please check out the visual changelog for this release.

QGIS is a community effort, and we would like to extend a big thank-you to the developers, documenters, testers, and the many folks out there who volunteer their time and effort (or fund others to do so) to make these releases possible. From the QGIS community, we hope you enjoy this release!

If you wish to donate time, money, or otherwise contribute towards making QGIS more awesome, please wander along to QGIS.ORG and lend a hand!

QGIS is supported by donors and sustaining members. A current list of donors who have made financial contributions, large or small, to the project can be seen on our list of donors. If you would like to become an official project sustaining member, please visit our sustaining member page for more details. Sponsoring QGIS helps us to fund our regular developer meetings, maintain project infrastructure, and fund bug-fixing efforts. A complete list of current sponsors is provided below – our very big thank you to all of our sponsors!

QGIS is free software, and you are under no obligation to pay anything to use it – in fact, we want to encourage people far and wide to use it regardless of their financial or social status – we believe that empowering people with spatial decision-making tools will result in a better society for all of humanity.

by underdark at March 09, 2026 08:00 PM

In recent days, the Roadmap to Accelerate Digital Sovereignty in Spain has been presented, a document that reflects something that has been becoming increasingly clear for some time now: digital sovereignty has ceased to be an abstract concept and has become a strategic issue. We are no longer talking only about innovation, but about decision-making capacity, resilience, technological autonomy, and control over critical infrastructures. Technological dependence is a structural vulnerability.

The document points to a key idea: Europe and Spain cannot limit themselves to consuming technologies developed by others. When a public administration, a company, or a public institution depends on technologies it does not control, on platforms it cannot audit, on licences that can change unilaterally, and on infrastructures that respond to external interests, it is not innovating: it is taking on a strategic risk.

And that risk does not affect technology alone. It affects the ability to decide, to plan, to ensure continuity, to protect data, to guarantee public services, and to sustain long-term policies. The question is why we continue to build critical parts of our administrations and organisations on technologies we do not control.

That is why the answer cannot remain at the level of regulation alone. It is necessary to invest, deploy our own infrastructures, strengthen the use of open standards, and promote technological solutions that can be audited, evolved, and governed within our own institutional framework.

In that context, it is especially relevant that the roadmap explicitly mentions the promotion of free software and open source within public administration, as well as the need to connect Spain’s public digital infrastructure with the European one and to strengthen domestic technological capabilities.

That is where initiatives such as gvSIG fully make sense. gvSIG is not just software. It is, and has always been, a practical commitment to digital sovereignty.

Because talking about digital sovereignty also means talking about the concrete tools with which public administrations manage their territory, their information, and their services. It means asking whether an administration can retain control over its geospatial infrastructure, its data, its processes, and the future evolution of its systems.

Today, gvSIG is a suite, a digital infrastructure, a catalogue of solutions based on free software, open standards, and interoperability. It is an ecosystem that enables administrations and organisations to build and maintain their own Spatial Data Infrastructures, geoportals, and territorial management systems without depending on unilateral decisions by third parties, without becoming trapped by restrictive licences, and with a real capacity to adapt over the long term.

A large share of the critical information managed by public administrations has a territorial dimension: urban planning, emergencies, the environment, mobility, cadastre, infrastructure, public services, security, defence… When that technological layer depends entirely on closed vendors, dependency is not only technical: it is also organisational and strategic.

There is a clear risk in continuing to accept as normal a dependency that compromises the autonomy of our institutions.

When Europe and Spain speak about digital sovereignty, public digital infrastructure, free software, and technological autonomy, this is not a debate that is alien to gvSIG. All of this has been part of gvSIG’s DNA for years. We are looking at a framework that reinforces the relevance of projects that have already been demonstrating, in practice, that another way of building technology is possible.

by Alvaro at March 09, 2026 04:56 PM


The new gvSIG Desktop 2.7 release includes the Swipe tool. It makes it easy to compare different cartography, for example two orthophotos from different years.

The tool works from two separate views, each containing one of the layers to compare. The user can choose to apply the swipe vertically or horizontally, and it acts on the layers that are visible at that moment in the two views. Once they are selected, just drag the central bar horizontally or vertically to see the differences.

This video shows how it works:

by Mario at March 09, 2026 10:18 AM

March 06, 2026

Discrete Global Grid Reference System (Part 1)

In 2024, Geomatys started working on DGGRS with our existing libraries. In 2025 we joined the OGC AI-DGGS for Disaster Management Pilot.

As a result of this pilot, our libraries took a step forward, or more exactly a huge jump, in DGGRS support, with lots of new things to play with.

Let’s see one of the results of this work.

DGGRS Java API

In the current Java ecosystem you can find several DGGRS libraries, such as S2 (https://s2geometry.io), H3 (https://h3geo.org), and CDS-Healpix (https://github.com/cds-astro/cds-healpix-java). But they all have different APIs and very different capabilities.

Since its very beginning, Geomatys has been working on OGC’s GeoAPI, a set of programming interfaces for numerous ISO and OGC standards related to GIS, and we naturally continue this effort by creating new interfaces that map DGGRSes and also fit them into the existing GIS world.

Making a DGGRS API required a bit of effort, but it was nonetheless a reasonable task. Making it fluently fit with Coordinate, Temporal, and Elevation Reference Systems as well as Metadata, Geometry, and Identifier Based Reference systems was a more complex task which required a deep understanding of each of these models.

The API presented here is our preliminary work toward the future integration of DGGRSes into Apache SIS and, later on, into OGC GeoAPI. The following UML diagrams are simplified and slightly outdated versions of the existing code, but they define the core model of this new API.

1. Overview

After reading and processing the different DGGRS documents and existing ISO Reference System specifications, we noted one major change with the Topic 21 — Discrete Global Grid Systems document.

The decision was made to have DGGRS extend ReferenceByIdentifiers (ISO 19112) instead of CoordinateReferenceSystem (ISO 19111). This choice has no impact on the OGC DGGRS API, but it makes a fundamental difference when using it in code.

The reasons behind this choice are:

  • Zone IDs are 64-bit integers, not floating-point values. All CRS APIs (SIS, PROJ, …) expect floating-point values, at best 64-bit, so there is no direct mapping between int64 and float64.
  • Zone IDs may exceed 64 bits (for 3D, 4D or more) or may be text.
  • Zones carry ‘depth’ or ‘level’ information; this property is properly defined in ReferenceByIdentifiers but does not exist in CRS.
  • Zones are areas/volumes, not exact positions, which matches ReferenceByIdentifiers locations.

2. Packages

The created API is composed of two new core packages.

  • ReferenceSystem (RS): contains a new API for compound reference systems, allowing a DGGRS to be aggregated with additional dimensions, such as time (the temporal dimension) and height above the surface (the vertical dimension).
  • DiscreteGlobalGridReferenceSystem (DGGRS): contains the new DGGRS API.

3. Discrete Global Grid Reference System package

The UML below results from aggregating and factorizing the different APIs found in the OGC DGGRS API (Topic 21), H3geo, S2geometry, DGGAL, and Healpix CDS.

I won’t go into the details here, since there are a lot of classes and methods, but feel free to check the code for more information.

4. Reference System package

Since all the DGGRSes used in the pilot, and most that exist, are limited to 2D, we needed a solution for combining them with additional dimensions to support time and elevation.

This resulted in us creating an API composed of three classes: Code, CompoundRS, and CodeOperation, which share the same organization as DirectPosition, CoordinateReferenceSystem, and Operation.

They extend the scope of transformation to all kinds of reference systems, whether they use numerical or textual indexing, allowing transformations for DGGRS, MGRS, GeoHash, OLS or any kind of code- or coordinate-based system.

The API is simple, yet very powerful. It goes way beyond DGGRS.

For example, we often say ‘meet me at {address} at {time}’. This can be translated to a Code object with two components [{address}, {time}] and a compound reference system composed of your local country’s postal code system and a classic temporal reference system. Then you ask the API to give you the CodeOperation that transforms this to a common coordinate reference system, like Mercator EPSG:3395, and there you go.

And this works with any combination of reference systems: MGRS, EPSG, IAU, DGGRS, postal codes…
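As a toy illustration of the ‘address + time’ idea, here is a drastically simplified stand-in. The class names echo the Code and CodeOperation concepts described above, but none of these names or signatures are the actual Geomatys API, and the postal-code lookup is a hard-coded assumption:

```java
import java.util.List;
import java.util.Map;

// Toy sketch of the Code / CompoundRS / CodeOperation idea: a Code holds one
// component per member of the compound reference system, and an operation
// resolves it to coordinates. Purely illustrative, not the real API.
public class CompoundCodeDemo {

    // one textual component per member reference system, e.g. [{address}, {time}]
    public record Code(List<String> components) {}

    // stand-in for a CodeOperation targeting a coordinate reference system:
    // resolves the {address} component via a (hard-coded) postal lookup and
    // passes the {time} component through as a numeric coordinate
    public static double[] toCoordinates(Code code, Map<String, double[]> postalLookup) {
        double[] xy = postalLookup.get(code.components().get(0));
        double t = Double.parseDouble(code.components().get(1));
        return new double[]{xy[0], xy[1], t};
    }
}
```

The point of the design is that the transformation machinery never cares whether a component is a number or a text code; resolution is delegated to the operation associated with each member reference system.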

5. Java Examples

In the end, what does this look like for a developer?

// pick a DGGRS implementation: A5, H3, Healpix, ...
final DiscreteGlobalGridReferenceSystem dggrs = DiscreteGlobalGridReferenceSystems.forCode("H3");

//create a coder instance to perform queries
final Coder coder = dggrs.createCoder();

//get a zone for a location
final String hash = coder.encode(new DirectPosition2D(12.345, 67.89));

//get a zone for a known identifier
final Zone zone  = coder.decode("811fbffffffffff");

//extract various information from the zone
final DirectPosition position = zone.getPosition(); //centroid
final Collection<? extends Zone> children = zone.getChildren();
final Collection<? extends Zone> neighbors = zone.getNeighbors();
final Collection<? extends Zone> parents = zone.getParents();
final Envelope envelope = zone.getEnvelope();
final GeographicExtent geometry = zone.getGeographicExtent();
final Double areaMetersSquare = zone.getAreaMetersSquare();

6. Next part

In the next DGGRS blog post we will look at the OGC API DGGRS implementation in the Examind-Community server.

The post Discrete Global Grid Reference System (Part.1) first appeared on Geomatys.

by Johann S. at March 06, 2026 08:51 AM

March 05, 2026

📅 Call for Papers

The call for papers is now open! We welcome proposals for talks and workshops from all levels of expertise, end users, technical developers, academics, and community contributors alike.

Deadline: 12 April 2026 at 23:59 (Europe/Zurich)

👉 Submit your proposal: https://conference.qgis.org/presenting/

Important Dates

  • Call for Papers opens: 5 March 2026
  • Call for Papers deadline: 12 April 2026 at 23:59 (Europe/Zurich)
  • Speaker notifications: 29 May 2026
  • Conference: 5–6 October 2026

Topics

Submissions can cover any topic relevant to the QGIS community, for example:

  • Interesting use cases of QGIS
  • Advanced workflows with QGIS
  • Deep dives into new QGIS features
  • QGIS ecosystem (third-party plugins, server solutions, mobile apps)
  • Using QGIS in large organisations
  • Integration of QGIS with other geospatial products
  • Future plans for the QGIS project
  • Open source as a strategic choice
  • GIS sovereignty with QGIS

Session Types

All sessions will be held in English.

  • Talks (20 min + 5 min Q&A) – Accepted talks during the main conference days (5–6 October)
  • Short Workshops (90 min) – Hands-on sessions during the main conference days (5–6 October)
  • Workshops (4 hours) – Extended 4 hour slots (including a 30-min break) on 7 October (after the main conference days) for those who want to dig deeper into QGIS tools and workflows

🤝 Call for Sponsors

We have also launched the call for sponsors, with opportunities available at various levels to help make this event accessible to our global community.

More details here: 👉 https://uc2026.qgis.org/sponsors/

💡 About the User Conference

The QGIS User Conference is our annual gathering bringing together users, developers, and enthusiasts from around the world. It’s a unique opportunity to learn about the latest developments in QGIS, share experiences and workflows, and connect with the open source geospatial community.

👥 About the Contributor Meeting

QGIS Contributor Meetings are volunteer-driven events where project contributors from across the globe come together. Contributors plan their work, hold face-to-face discussions, and present new improvements they’ve been working on. Everyone attending donates their time to the project.

As a project built primarily through online collaboration, these in-person meetings provide a crucial ingredient to the future of QGIS. The event is run largely as an unconference with minimal structured programme planning.

Details and sign-up on the QGIS wiki.

by mbernasocchi at March 05, 2026 09:23 PM

March 03, 2026

My previous blog post reviewed the concept of the Hausdorff distance (which might more descriptively be called the farthest distance). Despite its usefulness in matching geometric data, there are surprisingly few open-source implementations, and seemingly no efficient ones for linear and polygonal data.  This even includes CGAL and GRASS, which are usually reliable for providing a wide spectrum of geospatial operations.

This lack of high-quality Hausdorff implementations extends to the JTS Topology Suite.  JTS has provided the DiscreteHausdorffDistance class for many years.  That code is used in many geospatial systems (via JTS, and also its GEOS port), including widely used ones such as PostGIS, Shapely, and QGIS.  However, that implementation has significant performance problems, as well as other usability flaws (detailed here).

So I'm excited to announce the release of a new fast, general-purpose algorithm for Hausdorff distance in JTS.  It's a class called DirectedHausdorffDistance.  It has the following capabilities:

  • handles all geometry types: points, lines and polygons
  • supports a distance tolerance parameter, to allow computing the Hausdorff distance to any desired accuracy
  • can compute distance tolerance automatically, providing a "magic-number-free" API
  • has very fast performance due to lazy densification and indexed distance computation
  • can compute the pair of points on the input geometries at which the distance is attained
  • provides prepared mode execution (caching computed indexes)
  • allows determining farthest points which lie in the interior of polygons
  • handles equal or nearly identical geometries efficiently
  • provides the isFullyWithinDistance predicate, with short-circuiting for maximum performance

Hausdorff distance vs. shortest distance between linestrings

The choice of name for the new class is deliberate.  The core of the algorithm evaluates the directed Hausdorff distance from one geometry to another.  Computing the symmetric Hausdorff distance simply involves choosing the larger of the two directed distances DHD(A, B) and DHD(B, A).  This is provided as the function DirectedHausdorffDistance.hausdorffDistance(a, b).

Indexed Shortest Distance

The Hausdorff distance depends on the standard (Euclidean) shortest distance function, as is evident from the mathematical definition:

    DHD(A, B) = max{a ∈ A} dist(a, B)
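Read over finite vertex sets, the definition translates directly into code. A minimal brute-force sketch (a hypothetical class, not the JTS API) shows both the directed and symmetric forms; the inner dist(a, B) loop is exactly the part that indexed distance computation replaces:

```java
// Brute-force discrete directed Hausdorff distance between point sets,
// straight from the definition: DHD(A, B) = max over a in A of dist(a, B).
// Cost is O(|A| * |B|) - the inner loop is what indexing avoids.
public class BruteForceDHD {

    // directed Hausdorff distance from A to B (points given as {x, y})
    public static double directed(double[][] a, double[][] b) {
        double max = 0.0;
        for (double[] p : a) {
            double min = Double.POSITIVE_INFINITY; // dist(p, B)
            for (double[] q : b) {
                min = Math.min(min, Math.hypot(p[0] - q[0], p[1] - q[1]));
            }
            max = Math.max(max, min);
        }
        return max;
    }

    // symmetric Hausdorff distance: the larger of the two directed distances
    public static double symmetric(double[][] a, double[][] b) {
        return Math.max(directed(a, b), directed(b, a));
    }
}
```

Note the asymmetry: for A = {(0,0), (1,0)} and B = {(0,0)}, the directed distance from A to B is 1, while from B to A it is 0.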

A key performance improvement is to evaluate shortest distance using the IndexedFacetDistance class.  For point sets this optimization alone produces a significant boost.

For example, take a case of two random sets of 10,000 points.  DiscreteHausdorffDistance takes 495 ms; DirectedHausdorffDistance takes only 22 ms - 22x faster.  (It's also worth noting that this is similar performance to finding the shortest-distance points using IndexedFacetDistance.nearestPoints.)

Lazy Densification

The biggest challenge in computing the Hausdorff distance is that it can be attained at geometry locations which are not vertices. This means that linework edges must be densified to add points at which the distance can be evaluated.  The key to making this efficient is to make the computation "adaptive" by performing "lazy densification".  This avoids densifying edges where there is no chance of the farthest distance occurring.  

Densification is done recursively by bisecting segments.  To optimize finding the location with maximum distance, the algorithm uses the branch-and-bound pattern.  The edge segments are stored in a priority queue, sorted by a bounding function giving the maximum possible distance for each segment.  The segment maximum distance is the larger of the distances at the endpoints.  The segment maximum distance bound is the segment maximum distance plus one-half the segment length.  (This is a tight bound.  To see why, consider a segment S of length L, at a distance of D from the target at one end and a distance of D + e at the other.  The farthest point on S is at a distance of at most (L + 2D + e) / 2 = L/2 + D + e/2.  This is always less than the bound L/2 + D + e, but approaches it in the limit.)  
Proof of Maximum Distance Bound

The algorithm loops over the segments in the priority queue. The first segment in the queue always has the maximum distance bound. If this is less than the current maximum distance, the loop terminates since no greater distance will be found. If the segment distance to the target geometry is greater than the current maximum distance, it is saved as the new farthest segment.  Otherwise, the segment is bisected, subsegment endpoint distances are computed, and both are inserted back into the queue. 

Search for DHD using line segment bisection
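The bisection search described above can be sketched as follows. This is a simplified illustration of the branch-and-bound idea with hypothetical names: the target is reduced to a point set, while the real JTS class additionally indexes the target and handles full geometries:

```java
import java.util.PriorityQueue;

// Sketch of the lazy-densification search: find the farthest point on a
// segment from a target point set, to within a tolerance, using
// branch-and-bound with bound = max(endpoint distances) + half segment length.
public class BisectionFarthest {

    // shortest distance from (x, y) to any point of the target
    static double distToTarget(double x, double y, double[][] target) {
        double min = Double.POSITIVE_INFINITY;
        for (double[] q : target) {
            min = Math.min(min, Math.hypot(x - q[0], y - q[1]));
        }
        return min;
    }

    // bound on the distance attainable anywhere on the segment;
    // queue entries are {x0, y0, x1, y1, distAtP0, distAtP1}
    static double bound(double[] s) {
        double len = Math.hypot(s[2] - s[0], s[3] - s[1]);
        return Math.max(s[4], s[5]) + len / 2;
    }

    // farthest distance from segment (x0,y0)-(x1,y1) to the target, within tol
    public static double farthest(double x0, double y0, double x1, double y1,
                                  double[][] target, double tol) {
        PriorityQueue<double[]> queue =
            new PriorityQueue<double[]>((s, t) -> Double.compare(bound(t), bound(s)));
        queue.add(new double[]{x0, y0, x1, y1,
            distToTarget(x0, y0, target), distToTarget(x1, y1, target)});
        double best = 0.0;
        while (!queue.isEmpty()) {
            double[] s = queue.poll();
            best = Math.max(best, Math.max(s[4], s[5]));
            // the head of the queue has the largest bound; if it cannot beat
            // the current maximum by more than tol, the search terminates
            if (bound(s) <= best + tol) break;
            // bisect: evaluate the midpoint distance and push both halves
            double mx = (s[0] + s[2]) / 2, my = (s[1] + s[3]) / 2;
            double md = distToTarget(mx, my, target);
            queue.add(new double[]{s[0], s[1], mx, my, s[4], md});
            queue.add(new double[]{mx, my, s[2], s[3], md, s[5]});
        }
        return best;
    }
}
```

For a segment from (0, 0) to (2, 0) against the target points {(0, 0), (2, 0)}, the farthest location is the midpoint, at distance 1; the vertices alone would report 0, which is exactly why densification is needed.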

By densifying until bisected segments drop below a given length, the directed Hausdorff distance can be determined to any desired accuracy.  The accuracy distance tolerance can be user-specified, or it can be determined automatically.  This provides a "magic-number-free" API, which significantly improves ease of use.

Performance comparison

Comparing the performance of DirectedHausdorffDistance to DiscreteHausdorffDistance is unfair, since the latter implementation is so inefficient.  However, it's the one currently in use, so the comparison is relevant.   

There are two possible situations.  The first is when the directed Hausdorff distance is attained at vertices (which is often the case when the geometry vertices are already dense; i.e. segment lengths are short relative to the distance).  As an example we will use two polygons of 6,426 and 19,645 vertices.  

DiscreteHausdorffDistance with no densification (a factor of 1) takes 1233 ms. DirectedHausdorffDistance takes 25 ms - 49x faster.  (In practice the performance difference is likely to be more extreme. There is no way to decide a priori how much densification is required for DiscreteHausdorffDistance to produce an accurate answer.  So usually a higher amount of densification will be specified.  This can severely decrease performance.)

The second situation has the Hausdorff distance attained in the middle of a segment, so densification is required.  The query polygon has 468 vertices, and the target has 65.
DirectedHausdorffDistance is run with a tolerance of 0.001, and takes 19 ms.  If DiscreteHausdorffDistance is run with a densification factor of 0.0001 to produce equivalent accuracy, it takes 1292 ms.  If the densification factor is 0.001, the time improves to 155 ms - still 8x slower, with a less accurate answer. 

Handling (mostly) equal geometries

The old Hausdorff distance algorithm had an issue reported in this post on GIS Stack Exchange.  It asks about the slow performance of a case of two nearly-identical geometries which have a very small discrepancy.  In the end the actual problem seemed to be due to the overhead of handling large geometries in PostGIS.  However, testing it with the new algorithm revealed a significant issue.  

Two nearly-identical geometries, showing discrepancy location

It turned out that the new bisection algorithm exhibited very poor performance for this case, and in general for geometries which have many coincident segments. In particular, this applies to computing the Hausdorff distance between two identical geometries.  This situation can easily happen when querying a dataset against itself. So it was essential to solve this problem.  Even worse, detecting the very small discrepancy required an accuracy tolerance of small size, which also leads to bad performance.

The problem is that the maximum distance bounding function depends on both the segment distance and the segment length. When the segment distance is very small (or zero), the distance bound is dominated by the segment length, so subdivision continues until all segments are shorter than the accuracy tolerance.  This led to a large number of subsegments being generated during the search, particularly when the tolerance is small (as required in the above case).

The solution is to check subsegments with zero distance to see if they are coincident with a segment of the target geometry. If so, there is no need to bisect the segment further, since subsegments must also have distance zero. With this check in place, identical (and nearly identical) cases execute as fast as more general cases of the same size.  Equally importantly, this detects very small discrepancies regardless of the accuracy tolerance.

For the record, the GIS-SE case now executes in about 45 ms, and detects the tiny discrepancy, which is 9 orders of magnitude smaller than the input geometry.

The Hausdorff distance of ~0.00099

Handling Polygonal Input

If the Hausdorff distance is attained at a point lying on an edge then densifying the linework is sufficient.  But for polygonal query geometries the farthest point can occur in the interior of the area: 
The Directed Hausdorff distance is attained at an interior point of the query polygon

To find the farthest interior point the adaptive branch-and-bound approach can be used in the area domain.  Conveniently, JTS already implements this in the MaximumInscribedCircle and LargestEmptyCircle classes (see this blog post.) In particular, LargestEmptyCircle  supports constraining the result to lie inside an area, which is exactly what is needed for the Hausdorff distance.  The target geometry is treated as obstacles, and the polygonal element(s) of the query geometry are the constraints on the location of the empty circle centre.
Directed Hausdorff Distance with multiple area constraints and heterogeneous obstacles

The LargestEmptyCircle algorithm is complex, so it might seem that it could significantly decrease performance.  In fact, it only adds an overhead of about 30%, and for many inputs it's not even noticeable.  Also, if there is no need to determine farthest points in the interior of polygons, this overhead can be avoided by using only polygon linework (i.e. the boundary) as input.  

Currently most Hausdorff distance algorithms operate on point sets, with a very few supporting linear geometry.  There seem to be none which compute the Hausdorff distance for polygonal geometries.  While this might seem an uncommon use case, in fact it's essential to support another new capability of the algorithm: computing the isFullyWithinDistance predicate for polygonal geometries.  

isFullyWithinDistance

Distance-based queries often require determining only whether the distance is less than a given value, not the actual distance value itself.  This boolean predicate can be evaluated much faster than the full distance determination, since the computation can short-circuit as soon as any point is found which confirms being over the distance limit.  It also allows using other geometric properties (such as envelopes) for a quick initial check. For shortest distance, this approach is provided by Geometry.isWithinDistance (and supporting methods in DistanceOp and other classes).

The equivalent predicate for Hausdorff distance is called isFullyWithinDistance.  It tests whether all points of a geometry are within a specified distance of another geometry.  This is defined in terms of the directed Hausdorff distance (and is thus an asymmetric relationship):

  isFullyWithinDistance(A,B,d) = DHD(A,B) <= d 

The DirectedHausdorffDistance class provides this predicate via the isFullyWithinDistance(A,B,dist) function.  Because the new class supports all types of input geometry (including polygons), the predicate is fully general.  For even faster performance in batch queries it can be executed in prepared mode via the isFullyWithinDistance(A,dist) method.  This mode caches the spatial indexes built on the target geometry so they can be reused.
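A minimal sketch of the short-circuiting idea over vertex sets (a hypothetical helper, not the JTS signature; the real class also indexes the target and densifies edges as described earlier):

```java
// Short-circuiting evaluation of the isFullyWithinDistance predicate over
// vertex sets: as soon as one point of A is found farther than d from B,
// the answer is false and no further work is needed.
public class FullyWithin {

    // true if every point of A lies within distance d of some point of B,
    // i.e. DHD(A, B) <= d
    public static boolean isFullyWithinDistance(double[][] a, double[][] b, double d) {
        for (double[] p : a) {
            double min = Double.POSITIVE_INFINITY;
            for (double[] q : b) {
                min = Math.min(min, Math.hypot(p[0] - q[0], p[1] - q[1]));
                if (min <= d) break; // this point is confirmed close enough
            }
            if (min > d) return false; // short-circuit: the limit is exceeded
        }
        return true;
    }
}
```

Both loops can exit early, which is why the predicate is so much cheaper than computing the exact Hausdorff distance.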

For a performance example, consider a dataset of European boundaries (countries and islands) containing about 28K vertices.  The boundary of Germany is used as the target geometry.


If isFullyWithinDistance is run with a distance limit of 20, it takes about 60 ms.  

There's no direct comparison for DiscreteHausdorffDistance, but if that class is used to compute the directed Hausdorff distance with a conservative densification factor of 0.1, the time is about 1100 ms.  Another point of comparison is to run a shortest distance query.  This takes only 21 ms - but it's doing much less work.

A better implementation for ST_DFullyWithin

Another way to implement isFullyWithinDistance is to compute the buffer(d) of geometry B and test whether it covers A:

  isFullyWithinDistance(A,B,d) = B.buffer(d).covers(A) 

This is how the ST_DFullyWithin function in PostGIS works now.  It's a reasonable design choice given the current lack of a performant Hausdorff distance implementation.  However, there are a few problems with using buffer:
  • Buffers of complex geometry can be slow to compute, especially for large distances
  • There's a chance of robustness bugs affecting the computed buffer
  • Buffers are linearized approximations, so there is a likelihood of false negatives for query geometries which lie close to the buffer boundary
Buffer quantization causing failures

Now, the DirectedHausdorffDistance implementation of isFullyWithinDistance can make this function faster, more accurate, more robust and cacheable.  (And of course, the ST_HausdorffDistance function can benefit as well.)

Summary

The JTS DirectedHausdorffDistance class provides fast, cacheable, easy-to-use computation of Hausdorff distances and the isFullyWithinDistance predicate for all JTS geometry types.  This is a major improvement over the old JTS DiscreteHausdorffDistance class, and essentially fully replaces it.  More generally, it fills a notable gap in open-source geospatial functionality.  It will allow many systems to provide a high-quality implementation for Hausdorff distance.


by Dr JTS (noreply@blogger.com) at March 03, 2026 09:11 PM

March 02, 2026

TorchGeo 0.7.0 Release Notes

TorchGeo 0.7 adds 26 new pre-trained model weights, 33 new datasets, and more powerful trainers, encompassing 7 months of hard work by 20 contributors from around the world.

Highlights of this release

Note

The following model and dataset descriptions were generated by an imperfect human, not by an LLM. If there are any inaccuracies or anything else you would like to highlight, feel free to reach out to @adamjstewart.

Growing collection of foundation models

Panopticon Architecture

TorchGeo has a growing collection of Earth observation foundation models, including 94 weights from 13 papers:

  • GASSL (@kayush95 et al., 2020): Uses spatially aligned images over time to construct temporal positive pairs and a novel geo-location pretext task. Great if you are working with high-resolution RGB data such as Planet or Maxar.
  • SeCo (@oscmansan et al., 2021): Introduces the idea of seasonal contrast, using spatially aligned images over time to force the model to learn features invariant to seasonal augmentations, invariant to synthetic augmentations, and invariant to both.
  • SSL4EO-S12 (@wangyi111 et al., 2022): A spiritual successor to SeCo, with models for Sentinel-1/2 data pretrained using MoCo, DINO, and MAE (new).
  • Satlas (@favyen2 et al., 2022): A collection of Swin V2 models pretrained on a staggering amount of Sentinel-2 and NAIP data, with support for single-image and multiple-image time series. Sentinel-1 and Landsat models were later released as well.
  • Scale-MAE (@cjrd et al., 2022): The first foundation model to explicitly support RGB images with a wide range of spatial resolutions.
  • SSL4EO-L (@adamjstewart et al., 2023): The first foundation models pretrained on Landsat imagery, including Landsat 4–5 (TM), Landsat 7 (ETM+), and Landsat 8–9 (OLI/TIRS).
  • DeCUR (@wangyi111 et al., 2023): Uses a novel multi-modal SSL strategy to promote learning a common representation while also preserving unique sensor-specific information.
  • FG-MAE (@wangyi111 et al., 2023): (new) A feature-guided MAE model, pretrained to reconstruct features from histograms of gradients (HOG) and normalized difference indices (NDVI, NDWI, NDBI).
  • CROMA (@antofuller et al., 2023): (new) Combines contrastive learning and reconstruction loss to learn rich representations of MSI and SAR data.
  • DOFA (@xiong-zhitong et al., 2024): Introduced the idea of dynamically generating the patch embedding layer of a shared multimodal encoder, allowing a single model weight to support SAR, RGB, MSI, and HSI data. Great for working with multimodal data fusion, flexible channel combinations, or new satellites which don't yet have pretrained models.
  • SoftCon (@wangyi111 et al., 2024): (new) Combines a novel multi-label soft contrastive learning with land cover semantics and cross-domain continual pretraining, allowing the model to integrate knowledge from existing computer vision foundation models like DINO (ResNet) and DINOv2 (ViTs). Great if you need efficient small models for SAR/MSI.
  • Panopticon (@LeWaldm et al., 2025): (new, model architecture pictured above) Extends DINOv2 with cross attention over channels, additional metadata in the patch embeddings, and spectrally-continual pretraining. Great if you want the same features as DOFA but with even better performance, especially on SAR and HSI data, and on “non-standard” sensors.
  • Copernicus-FM (@wangyi111 et al., 2025): (new) Combines the spectral hypernetwork introduced in DOFA with a new language hypernetwork and additional metadata. Great if you want to combine image data with non-spectral data, such as DEMs, LU/LC, and AQ data, and supports variable image dimensions thanks to FlexiViT.

100+ built-in data loaders!

Dataset Contributors

TorchGeo now boasts a whopping 126 built-in data loaders. Shoutout to the following folks who have worked tirelessly to make these datasets more accessible for the ML/EO community: @adamjstewart @nilsleh @isaaccorley @calebrob6 @ashnair1 @wangyi111 @GeorgeHuber @yichiac @iejMac, and others. See the above figure for a breakdown of how many datasets each of these people has packaged.

In order to build the above foundation models, TorchGeo includes an increasing number of large pretraining datasets:

  • BigEarthNet (@gencersumbul et al., 2019): Including BEN v1 and v2 (new), consisting of 590K Sentinel-2 patches with a multi-label classification task.
  • Million-AID (@IenLong et al., 2020): 1M RGB aerial images from Google Earth Engine, including both multi-label and multi-class classification tasks.
  • SeCo (@oscmansan et al., 2021): 1M images and 70B pixels from Sentinel-2 imagery, with a novel Gaussian sampling technique around urban centers for greater data diversity.
  • SSL4EO-S12 (@wangyi111 et al., 2022): 3M images and 140B pixels from Sentinel-1 GRD, Sentinel-2 TOA, and Sentinel-2 SR. Extends the SeCo sampling strategy to avoid overlapping images. (new) Now with automatic download support and additional metadata.
  • SatlasPretrain (@favyen2 et al., 2022): (new) Over 10M images and 17T pixels from Landsat, NAIP, and Sentinel-1/2 imagery. Also includes 302M supervised labels for 127 categories and 7 label types.
  • HySpecNet-11k (@m.fuchs et al., 2023): (new) 11k hyperspectral images from the EnMAP satellite.
  • SSL4EO-L (@adamjstewart et al., 2023): 5M images and 348B pixels from Landsat 4–5 (TM), Landsat 7 (ETM+), and Landsat 8–9 (OLI/TIRS). Extends the SSL4EO-S12 sampling strategy to avoid nodata pixels, and includes both TOA and SR imagery, comprising the largest ever Landsat dataset. (new) Now with additional metadata.
  • SkyScript (@wangzhecheng et al., 2023): (new) 5.2M images from NAIP, orthophotos, Planet SkySat, Sentinel-2, and Landsat 8–9, with corresponding text descriptions for VLM training.
  • MMEarth (@vishalned et al., 2024): (new) 6M image patches and 120B pixels from over 1.2M locations, including Sentinel-1/2, Aster DEM, and ERA5 data. Includes both image-level and pixel-level classification labels.
  • Copernicus-Pretrain (@wangyi111 et al., 2025): (new, pictured below) 19M image patches and 920B pixels from Sentinel-1/2/3/5P and Copernicus GLO-30 DEM data. Extends SSL4EO-S12 for the entire Copernicus family of satellites.

Copernicus-Pretrain

We are also expanding our collection of benchmark suites to evaluate these new foundation models on a variety of downstream tasks:

  • SpaceNet (@avanetten et al., 2018): A challenge with 8 (and growing) datasets for instance segmentation tasks in building segmentation and road network mapping, with > 11M building footprints and ~20K km of road labels.
  • Copernicus-Bench (@wangyi111 et al., 2025): (new) A collection of 15 downstream tasks for classification, pixel-wise regression, semantic segmentation, and change detection. Includes Level-1 preprocessing (e.g., cloud detection), Level-2 base applications (e.g., land cover classification), and Level-3 specialized applications (e.g., air quality estimation). Covers Sentinel-1/2/3/5P sensors, and includes the first curated benchmark datasets for Sentinel-3/5P.

More powerful trainers

VHR-10 Instance Segmentation

TorchGeo now includes 10 trainers that make it easy to train models for a wide variety of tasks:

  • Classification: including binary (new), multi-class, and multi-label classification
  • Regression: including image-level and pixel-level regression
  • Semantic segmentation: including binary (new), multi-class, and multi-label (new) semantic segmentation
  • Instance segmentation: (new, example predictions pictured above) for RGB, SAR, MSI, and HSI data
  • Object detection: now with (new) support for SAR, MSI, and HSI data
  • BYOL: Bootstrap Your Own Latent SSL method
  • MoCo: Momentum Contrast, including v1, v2, and v3
  • SimCLR: Simple framework for Contrastive Learning of visual Representations, including v1 and v2
  • I/O Bench: For benchmarking TorchGeo I/O performance

In particular, instance segmentation was @ariannasole23's course project, so you have her to thank for that. Additionally, trainers now properly denormalize images before plotting, resulting in correct "true color" plots in tensorboard.

Backwards-incompatible changes

TorchGeo has graduated from alpha to beta development status (#2578). As a result, from now on major backwards-incompatible changes will, whenever possible, go through a one-minor-release deprecation period before complete removal.

  • MultiLabelClassificationTask is deprecated, use ClassificationTask(task='multilabel', num_labels=...) instead (#2219)
  • torchgeo.transforms.AugmentationSequential is deprecated, use kornia.augmentation.AugmentationSequential instead (#1978, #2147, #2396)
  • torchgeo.datamodules.utils.AugPipe was removed (#1978)
  • Many object detection datasets and tasks changed sample keys to match Kornia (#1978, #2513)
  • Channel dimension was squeezed out of many masks for compatibility with torchmetrics (#2147)
  • dofa_huge_patch16_224 was renamed to dofa_huge_patch14_224 (#2627)
  • SENTINEL1_ALL_* weights are deprecated, use SENTINEL1_GRD_* instead (#2677)
  • ignore parameter was moved to a class attribute in BaseTask (#2317)
  • Removed IDTReeS.plot_las, use matplotlib instead (#2428)
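The migration path for the first deprecation above can be sketched as follows. This is a hypothetical, pure-Python stand-in illustrating the one-minor-release deprecation pattern, not TorchGeo's actual source; only the class names and the task='multilabel' / num_labels arguments come from the release notes:

```python
import warnings

# Hypothetical stand-in for the new unified task class; the real
# torchgeo.trainers.ClassificationTask takes many more arguments.
class ClassificationTask:
    def __init__(self, task="multiclass", num_classes=None, num_labels=None):
        self.task = task
        self.num_classes = num_classes
        self.num_labels = num_labels

# Sketch of the soft-deprecation pattern: the old class keeps working for
# one minor release, but forwards to the new API and warns the caller.
class MultiLabelClassificationTask(ClassificationTask):
    def __init__(self, num_labels):
        warnings.warn(
            "MultiLabelClassificationTask is deprecated; use "
            "ClassificationTask(task='multilabel', num_labels=...) instead",
            DeprecationWarning,
            stacklevel=2,
        )
        super().__init__(task="multilabel", num_labels=num_labels)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    task = MultiLabelClassificationTask(num_labels=19)

print(task.task)                      # multilabel
print(caught[0].category.__name__)    # DeprecationWarning
```

Calling the deprecated name still produces a working task object, so downstream code has a full release cycle to switch over.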

Dependencies

New dependencies

Removed dependencies

Changes to existing dependencies

  • Python: drop support for Python 3.10 (#2559)
  • Python: add Python 3.13 tests (#2547)
  • Fiona: v1.8.22+ is now required (#2559)
  • H5py: v3.8+ is now required (#2559)
  • Kornia: v0.7.4+ is now required (#2147)
  • Lightning: v2.5.0 is not compatible (#2489)
  • Matplotlib: v3.6+ is now required (#2559)
  • Numpy: v1.23.2+ is now required (#2559)
  • OpenCV: v4.5.5+ is now required (#2559)
  • Pandas: v1.5+ is now required (#2559)
  • Pillow: v9.2+ is now required (#2559)
  • Pyproj: v3.4+ is now required (#2559)
  • Rasterio: v1.3.3+ is now required, v1.4.0–1.4.2 is not compatible (#2442, #2559)
  • Ruff: v0.9+ is now required (#2423, #2512)
  • Scikit-image: v0.20+ is now required (#2559)
  • Scipy: v1.9.2+ is now required (#2559)
  • SMP: v0.3.3+ is now required (#2513)
  • Shapely: v1.8.5+ is now required (#2559)
  • Timm: v0.9.2+ is now required (#2513)
  • Torch: v2+ is now required (#2559)
  • Torchmetrics: v1.2+ is now required (#2513)
  • Torchvision: v0.15.1+ is now required (#2559)

Datamodules

New datamodules

Changes to existing datamodules

  • Fix support for large mini-batches in datamodules previously using RandomNCrop (#2682)
  • I/O Bench: fix automatic downloads (#2577)

Datasets

New datasets

Changes to existing datasets

  • Many object detection datasets changed sample keys to match Kornia (#1978, #2513)
  • BioMassters: rehost on HF (#2676)
  • Digital Typhoon: fix MD5 checksum (#2587)
  • ETCI 2021: fix file list when 'vv' in directory name (#2532)
  • EuroCrops: fix handling of Nones in labels (#2499)
  • IDTReeS: removed support for plotting lidar point cloud (#2428)
  • Landsat 7: fix default bands (#2542)
  • ReforesTree: skip images with missing mappings (#2668)
  • ReforesTree: fix image and mask dtype (#2642)
  • SSL4EO-L: add additional metadata (#2535)
  • SSL4EO-S12: add additional metadata (#2533)
  • SSL4EO-S12: add automatic download support (#2616)
  • VHR-10: fix plotting (#2603)
  • ZueriCrop: rehost on HF (#2522)

Changes to existing base classes

  • GeoDataset: all datasets now support non-square pixel resolutions (#2601, #2701)
  • RasterDataset: assert valid bands (#2555)

Models

New model architectures

New model weights

Changes to existing models

  • Timm models now support features_only=True (#2659, #2687)
  • DOFA: save hyperparameters as class attributes (#2346)
  • DOFA: fix inconsistent patch size in huge model (#2627)

Samplers

  • Add ability to set random sampler generator seed (#2309, #2316)

Trainers

New trainers

  • Instance segmentation (#2513)

Changes to existing trainers

  • All trainers now denormalize images before plotting, resulting in correct "true color" plots in tensorboard (#2560)
  • Classification: add support for binary, multiclass, and multilabel classification (#2219)
  • Classification: MultiLabelClassificationTask is now deprecated (#2219)
  • Object Detection: add support for non-RGB imagery (SAR, MSI, HSI) (#2602)
  • Semantic Segmentation: add support for binary, multiclass, and multilabel semantic segmentation (#2219, #2690)

Changes to trainer base classes

  • Fix load_from_checkpoint to load a pretrained model (#2317)
  • Ignore ignore when saving hyperparameters (#2317)

Transforms

  • AugmentationSequential is now deprecated (#2396)

Documentation

Changes to API docs

  • SpaceNet is now properly documented as a benchmark suite
  • Fix license for RESISC45 and VHR-10
  • SatlasPretrain: fix table hyperlink

Changes to user docs

  • Update list of related libraries (#2691)
  • Add GeoAI to related libraries list (#2675)
  • Add geobench to related libraries list (#2665)
  • Add OTBTF to related libraries list (#2666)
  • Fix file-specific test coverage (#2540)

New tutorials

  • Earthquake detection (#2647)
  • Custom semantic segmentation trainer (#2588)

Changes to existing tutorials

  • Customization: fix broken hyperlink (#2549)
  • Trainers: document where checkpoints are saved (#2658)
  • Trainers: document how to get the best model (#2658)
  • Various typo fixes (#2566)

CI

  • Faster model testing (#2687)
  • Codecov: move configuration file to subdirectory (#2361)
  • Do not cancel in-progress jobs on main branch (#2638)
  • Ignore prettier reformat in git blame (#2299)

Contributors

This release is thanks to the following contributors:

@adamjstewart
@ando-shah
@ariannasole23
@ashnair1
@burakekim
@calebrob6
@DarthReca
@dcodrut
@giswqs
@isaaccorley
@japanj
@lccol
@LeWaldm
@lns-lns
@mdchuc
@nilsleh
@remicres
@rijuld
@sfalkena
@wangyi111

by adamjstewart at March 02, 2026 04:56 AM

March 01, 2026

Yes indeed, dear reader, in 2025 too I read, watched, listened, and did things, both staying put where I was and moving around. From one point of view, everything that follows in this article is void, because 2025 was the year of a mild but important awakening from torpor, and I went a few (or many, depending on your point of view) times to the square to put my body on the line for the people of Gaza. But from another point of view, everything that follows is exactly what we need to do in order not to sell our lives to the same system that devours us and makes us complicit. None of this is entertainment; all of it is effort, all of it is love, all of it is rage.

The books I enjoyed

I continued reading Solenoide by Mircea Cărtărescu (who once again failed to win a Nobel this year, but what difference does it make). It is a monumental work, not only for its physical size but also for the web it weaves, made of parts that are apparently simple and even markedly repetitive. The overall effect, though, is hypnotic and overwhelming. Too complex to review in a few words: across its several narrative planes it interweaves literature, school, childhood, and a simultaneously dreamlike and disenchanted vision of the city of Bucharest in a crumbling and distant political dimension… a tremendously powerful labyrinth somewhere between Borges, Kafka, Ethel Lilian Voynich, the manuscript of the same name… I warmly recommend it, even though some parts are a bit repetitive, or difficult, or even annoying to read.

At the end of 2024 I made a prodigious purchase of comics from add. And I read many of them. They all said something to me, however different from one another, so praise to the publisher for its range. I am putting these comics all together not because they have anything particular in common but because I am not very used to reading comics, and in fact I did my best to learn to read them without flipping through quickly in search of a plot that very often isn't there, or isn't at all in the foreground. So, for instance, Disfacimento is a dreamlike, almost lysergic journey inside a world that moves very slowly yet also with very raw bursts of a hybrid humanity thoroughly steeped in the animal and the vegetable. Or the two books by M. S. Harkness, truly very painful to read, with a dark mood but also an extremely strong will to live. Nuvole sul soffitto is very bitter, especially in the parts where the protagonist relates to his daughter, and it hit very hard. The End was a source of great reflection, very deep, and the way the progress in the creation of the comic itself is recounted is particularly poignant. Grande Oceano is marvelous; I even tried to convince my son to read it, and it has the fabulous, stunning dimension of a great adventure. But the comic I read and reread most often, each time finding truly powerful implications, is Baby Blue, which tells a story that is no longer dystopian and tackles it in an absolutely over-the-top, epic way. Prima dell'oblio is a small narrative labyrinth in graphic form, disenchanted but also full of hope about everything we take for granted in our lives.

The graphic novel Stretta al cuore by Stepánka Jislová comes from a different publisher, so I'll discuss it separately. It is truly intense and leaves you speechless at several points. It seems to start as an individual story but builds into a much broader account: about gender stereotypes first of all, but also about family difficulties, sexual abuse, and the traumas that always leave a mark.

Il calcio del figlio by Wu Ming 4 was a gift, and it is a necessary read for those who, like me, find themselves involuntarily parenting very young footballers. It gives hope, in a space that badly needs it, because everything often seems squeezed between the desire to excel individually in a stubbornly team-based sport, a sense of belonging, and the physical movement of bodies in space.

Tiaré by Célestine Hitiura Vaite might look like a light read, but it isn't. The fact that it is set in a familiar, domestic world, however geographically distant, makes it universal. Here is a fine quote that stuck with me:

Materena puts away the frying oil, recalling the conversation she had with her mother a few days earlier: that in her next life she might come back as a lesbian.
To which her mother replied: «Why wait?».
Ah, oui, alors. Why wait?

Tiaré, page 63 of the Italian edition

From Eleuthera I bought two books by James C. Scott translated into Italian. Il dominio e l'arte della resistenza kept me company for a good part of the year. It is a pleasant, very instructive read, with a very broad view of its subject, which in itself is not a common frame for understanding social phenomena, whether ancient or contemporary. It is not a manual on the art of resistance, but it still makes a rather rich treatise of it. Lo sguardo dello stato accompanied me all summer. Very calm and lucid, able to embrace apparently very distant themes with a highly coherent vision. The closing note commenting on the Italian edition is a worrying twenty-first-century update of the trajectory Scott describes.

An unidentified narrative object is Prompt di fine mondo by Agnese Trocchi of the CIRCE collective. Liberating and free; we need more works with this kind of room for maneuver.

I was drawn in by the title of La vegetariana (The Vegetarian) by Nobel laureate Han Kang. The book is divided into three parts. Each part is narrated from the point of view of a different character and strongly centered on the relationship between that character (husband, brother-in-law, sister) and "the vegetarian", the true protagonist of the story. The development is partly circular: the closing pages seem to return to the beginning and give one possible meaning, among several, to the unsettling and dramatic story. And it is precisely in the conclusion that I seem to find a way out, where the real tragedy is revealed, that of all the violence suffered, so that the progressive becoming-plant is liberation. Very intense. The second part seems to offer a positive, creative, however mad, turn, but it ends both badly with respect to those ambitions and by reaching a point of no return.

I wanted to explore Han Kang's work further with L'ora di greco (Greek Lessons). Unfortunately I read it too quickly the first time, too dragged along by a plot that isn't there, and found myself gripped by a feeling of pain and upheaval. I reread it more slowly. The book gradually becomes more lyrical, more cryptic, yet it conveys a sense of tragic detachment that feels universal: detachment from family, from sight, from speech, from the human. Herein lies its link with The Vegetarian, in my view, together with the fact that the core of all this pain and detachment sits within the family unit. It is a difficult text, at least it was for me. There is a slender way out, if not quite of hope.

Reading La straniera by Claudia Durastanti, I realized that the whole first part of the book seems to echo Middlesex by Jeffrey Eugenides (a book I adore), not explicitly, but in the whole epic of the ancestors, the migration, being who you are because that is your story. Still, I didn't much like this book overall; unlike the other books I didn't like, I mention it because I greatly appreciate Claudia Durastanti as a translator…

…and Brevemente risplendiamo sulla terra by Ocean Vuong is precisely a book Durastanti translated. Unusually (for me) direct and cutting, but with tremendous depth. It is hard to say I understood all of it. What is certain is that it made me feel things I had never felt before, immensely powerful ones. A writing without fences, burning.

The exhibitions

In spring we went to Ferrara for the Alphonse Mucha exhibition; it was paired with one on Giovanni Boldini, both at Palazzo dei Diamanti. Not comparable, except in the minds of the ticket sellers. Mucha circulates widely in commercial exhibitions like this one; his art frees an imagery at once timeless and very situated, almost imprisoned in the canvas it was painted on.

In Genoa, at Palazzo Ducale, I saw Jacopo Benassi's Libero!, which struck me greatly: great freedom and an affront to artistic morality. Also at the Ducale I saw other exhibitions, including one on Lisetta Carmi, which I appreciated a lot, partly because it was not cramped for space, and Meriggiare pallido e assorto, a contemporary photography show I found rather soulless and badly in need of an interpretation that was entirely absent. THE OTHER DIRECTION, on the other hand, seems worth noting because it treats an intersectional theme from an original point of view: women's voices along an urban bus line that crosses half the city, whole neighborhoods and peripheries. It is line 1, which I often take myself.

Also, at Castello D'Albertis I saw World Things Genova, which pairs a photographic exhibition with contemporary ethnography, a post-colonial updating of the museum's collections with the present of migrations.

Podcasts

I listened to much less than last year. In September I also began to notice the first symptoms of fairly intense tinnitus.

I continued, in fits and starts, with Il mondo, Stories, and Love Bombing. I listened to a few episodes of Le comari dell'arte, very liberating, of Nuovo baretto utopia with kenobit's recordings, and of Mordicchio non l'ha mai detto, which unfortunately seems to have stopped. Making a podcast is hard.

I discovered the fabulous L'orda d'oro, which grew out of a radio programme on Radio Onda Rossa. It covers Central Asia, in a highly satisfying number of different manifestations and points of view, always backed by music of different genres.

The series

I started watching Anatane e i ragazzi di Okura on Rai Play, a French-Canadian animated series set in a dystopian future (?). Simple, short episodes I found pleasant.

Cyberpunk: Edgerunners is rather simple and violent, but the visuals and the soundtrack are very good. One day I watched an episode and then discovered it was the final one; it felt a bit rushed to me, even though the last scene is very moving.

I watched 3 minutes of the first episode of Stranger Things. I don't know if that counts.

The theatre

In the first part of the year I went to the theatre a few times, always less often than I would like.

The stupendous D'oro. Il sesto senso partigiano was powerful right from the first lines delivered off stage, with the first twelve articles of the Constitution recited at full voice by a group of young people. True stories of men and women who handed down to us apparently simple gestures of freedom, at a time when freedom was impossible.

Stabat mater by Liv Ferracchiati is a look at masculinity and at the expectations of gender and of the couple, told in a light and amusing way, yet at the same time deadly serious. The final discussion with the author, the other actresses, and Vera Gheno was a highlight.

Music

I went to several concerts! On 24 April, at the ARCI Perugina club in Certosa, I listened to the anarchist and partisan songs of Mars on Pluto and, above all (for me), of the Cocks, a punk rock band from Sampierdarena that embodies much of what I would have liked to do many years ago with other misfits from the periphery.

I attended the first evening of Electropark, an electronic music festival that has been held in Genoa for 15 years. The evening's artists were Tadleeh, Genoa's own Ginevra Nervi, and Luxe from London. Electronic music is outside my comfort zone, but I appreciated the very relaxed, contemplative atmosphere.

I went to a rap concert at the Libera collina di Castello; I loved the great energy of La cercleuse, a French feminist rap collective.

With Elisa I went to a Vinicio Capossela concert. It was not the first time, and the way he and the people on stage with him use music to tell stories grows ever stronger.

Travel

In June we went back to Crete, after a full 10 years! We did it with the most improbable of means of transport, our own car, ferried across the Adriatic and the Aegean by the faithful ships we have known for 20 years. It was an intense but very beautiful trip; we made a fixed base in Kalamaki and then toured around central Crete.

In spring we had gone to Ferrara; besides the exhibitions we strolled around the city and found parks to rest in the shade, excellent gelaterias, Korean restaurants, and a great many bicycles.

I went to Venice for work for two days, managing a quick visit to the Gallerie dell'Accademia, complete with an exhibition that included the Vitruvian Man kept there. But it is simply an enormous pleasure just to be in Venice.

In summer we took a holiday in the province of Cuneo. We started with a stop in Molare to see Franco B., a famous Genoese singer-songwriter and former colleague, with a swim in the stream. We based ourselves in Villar San Costanzo, home of the ciciu and of the famous biscuit factory that mills its flour in the nearby Dronero mill. We went to Entracque to visit the wolf centre; the children loved it.

In autumn I started a caving course, but that's another story.

by Stefano Costa at March 01, 2026 05:20 PM

February 27, 2026

February 26, 2026

Dear reader,

If you have installed GeoNode 5 via Docker (GeoNode Project) and need to add a plugin that is not part of the default GeoServer installation, this guide shows you how to do it in a correct, reproducible way.

In my case, I am using:

  • GeoNode 5.0.0
  • GeoServer 2.27.3

The goal is to install the Resource Browser Tool plugin, which lets you browse and manage GeoServer files directly from the web interface.

1. Download the plugin:

The plugin must match the GeoServer version exactly. Since I am running version 2.27.3, the plugin must also be 2.27.3.

> cd /home/fernandoquadro/
> wget https://sourceforge.net/projects/geoserver/files/GeoServer/2.27.3/extensions/geoserver-2.27.3-web-resource-plugin.zip
> unzip geoserver-2.27.3-web-resource-plugin.zip

After unzipping, you will have one or more .jar files.

2. Copy the plugin to the project folder:

> mkdir -p /opt/geonode_custom/my_geonode/docker/geoserver/plugins/resourcebrowser
> cp *.jar /opt/geonode_custom/my_geonode/docker/geoserver/plugins/resourcebrowser

3. Edit the GeoServer Dockerfile

The plugin should not be installed manually inside the running container.
The proper approach is to include the plugin in the image build process.

> cd /opt/geonode_custom/my_geonode/docker/geoserver
> sudo nano Dockerfile

Add the following lines to the end of the file:

# GeoServer Resource Browser Tool (2.27.3)
COPY plugins/resourcebrowser/*.jar \
  /usr/local/tomcat/webapps/geoserver/WEB-INF/lib/

4. Rebuild the GeoServer image

> docker compose build geoserver
> docker compose up -d geoserver

If you want to force a complete rebuild, run:

> docker compose down
> docker compose build
> docker compose up -d

5. Verify that the plugin was installed

After completing the steps above, log in to GeoServer and check whether the plugin was actually installed. Open the admin panel under About & Status → Modules; if everything went correctly, the Resource Browser Tool will appear in the list of installed modules.

This same procedure can be used to install any GeoServer plugin in a GeoNode deployment running via Docker.

If you have not installed GeoNode 5 yet, you can find the complete step-by-step guide by clicking here.

by Fernando Quadro at February 26, 2026 09:15 PM

February 23, 2026

The Hausdorff Distance is a useful spatial function which can appear slightly mysterious. Partly this is due to the name.  It honours Felix Hausdorff, one of the founding fathers of topology, and a polymath who was creative in music and literature as well as mathematics.    

Felix Hausdorff  (1868-1942)

But the name conveys nothing about why this function is useful, or how it is different to the more familiar shortest distance.  The key difference is: the shortest distance tells you how close things are, but the Hausdorff distance tells you how far apart they are. So a more descriptive name might be "farthest distance" or "maximum distance".  With due respect to Dr. Hausdorff, this is one of those historical artifacts of nomenclature that deserves a refresh.  (Especially since it's becoming recognized that the core concept was actually first published by the Romanian mathematician Dimitrie Pompeiu.  Users of the future will be grateful to be spared invoking the ST_PompeiuHausdorffDistance function.)

Definition

The formal definition of the Hausdorff distance (HD) is 

     HD(A, B) = max( DHD(A, B), DHD(B, A) )

where DHD is the directed Hausdorff Distance (DHD):

     DHD(A, B) = max_{a ∈ A} dist(a, B)

with dist(a, B) being the usual shortest distance between point a and geometry B:

     dist(a, B) = min_{b ∈ B} dist(a, b)
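For finite point sets these definitions translate directly into code. A minimal plain-Python sketch (an illustration of the formulas, not the JTS implementation):

```python
from math import hypot

def dist(a, b):
    """Euclidean distance between two points."""
    return hypot(a[0] - b[0], a[1] - b[1])

def dhd(A, B):
    """Directed Hausdorff distance: max over a in A of dist(a, B)."""
    return max(min(dist(a, b) for b in B) for a in A)

def hd(A, B):
    """Symmetric Hausdorff distance: max of the two directed distances."""
    return max(dhd(A, B), dhd(B, A))

# Every point of A is within 1 unit of B, but B has a point
# 10 units away from everything in A.
A = [(0, 0), (1, 0), (2, 0)]
B = [(0, 1), (1, 1), (2, 1), (2, 10)]

print(dhd(A, B))  # 1.0   (A is close to B)
print(dhd(B, A))  # 10.0  (B is far from A -- the asymmetry)
print(hd(A, B))   # 10.0
```

The two directed distances disagree, which is exactly the asymmetry discussed below; the symmetric distance simply takes the larger of the two.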

The Hausdorff distance is symmetric and is a true distance metric.  The directed Hausdorff distance is asymmetric.  Both can be useful in different contexts.  The directed version is arguably more fundamental.  (It's certainly where the bulk of the implementation effort lies.)

Directed Hausdorff Distance is asymmetric

The main application of the Hausdorff distance is in determining how well two datasets match, by providing a measure of their similarity.  In spatial applications these are typically geometries such as lines or polygons, but they can also be point clouds or raster images.  The Hausdorff distance is much more useful than shortest distance as a similarity measure because it gives information about all the points in a shape, not just a single closest point.  While shortest distance puts a bound on how far a single point is from the target, the Hausdorff distance is a bound on every point in the query shape. So in the figure below the two lines have a small shortest distance, but the Hausdorff distance reveals that they are actually far apart at some points.

Hausdorff Distance vs Shortest Distance

The Implementation Challenge

A key difference between shortest distance and Hausdorff distance is that the pair of points defining the shortest distance always includes at least one vertex, whereas the Hausdorff distance can occur at non-vertex points. For lines, the Hausdorff distance can occur anywhere on the edges: 

For polygons it can occur on edges or in the interior of the query area:

This makes the Hausdorff distance substantially harder to implement for general 2D geometries.  While the shortest distance can be determined simply by evaluating the distance at the finite set of vertices on each geometry, the Hausdorff distance requires a way to evaluate a finite set of points out of the infinite number of non-vertex points. 

Perhaps this is why it's so hard to find an implementation of Hausdorff distance for general 2D geometry.  (Or is there just no need for a fast, accurate, general-purpose Hausdorff distance?  Surely not...)  There are some implementations for point sets, and at least one for the specific case of convex polygons.  There are a couple which may support lines (here and here), but in a seemingly crude way.  And I haven't found a single one for general polygons.  Excellent - it's good to have a challenge!

Discrete Hausdorff Distance

A simple approach is to discretize the inputs by densifying their linework.  The Hausdorff distance is then evaluated over the original and added vertices. The JTS Topology Suite class DiscreteHausdorffDistance implements this approach.  This algorithm was developed many years ago (2008) for use in the RoadMatcher linear network conflation tool.  It worked well enough for that use case, since inputs were typically small and the accuracy was "good enough".  But it has some serious problems:
  • achieving accuracy requires a high degree of densification of every edge, which means slow performance
  • if the Hausdorff distance occurs at a vertex, then densification is not needed, but this is impossible to determine a priori
  • the user generally has no idea what level of densification (if any) is required to determine a result of required accuracy (this is particularly problematic in automated batch processing, where geometries may require varying amounts of densification)
  • the use of a densification factor rather than a maximum segment length was a mistake.  It is hard to determine the factor needed for a desired distance accuracy, and it causes over-densification of short edges
  • it is very slow when the inputs are equal or very similar (as shown in this issue)
  • polygonal inputs are not supported
  • the internal shortest distance computation is inefficient, since it does not use an indexed algorithm
Some of these flaws could be fixed.  For instance, shortest distance computation can be improved by using IndexedFacetDistance (which was not available at the time of development). And densification could be controlled by a maximum segment length instead of a factor.  But addressing all these issues requires a fundamental rethinking of the algorithm. 
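One of those fixes, densifying by a maximum segment length rather than a per-edge fraction, is easy to sketch. With that scheme, the discrete estimate is a lower bound on the true value and converges to it as the spacing shrinks, while short edges are not needlessly subdivided. A hedged pure-Python illustration (hypothetical helper names, not the JTS code):

```python
import math

def densify(line, max_seg_len):
    """Insert vertices so that no segment is longer than max_seg_len."""
    out = [line[0]]
    for (ax, ay), (bx, by) in zip(line, line[1:]):
        n = max(1, math.ceil(math.hypot(bx - ax, by - ay) / max_seg_len))
        out.extend((ax + (bx - ax) * i / n, ay + (by - ay) * i / n)
                   for i in range(1, n + 1))
    return out

def point_segment_dist(p, a, b):
    """Distance from point p to the segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    d2 = dx * dx + dy * dy
    t = 0.0 if d2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / d2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def directed_hausdorff(a, b, max_seg_len):
    """Discrete directed Hausdorff distance: sample line A at spacing
    max_seg_len, measuring each sample's exact distance to polyline B."""
    def dist_to_b(p):
        return min(point_segment_dist(p, b[i], b[i + 1])
                   for i in range(len(b) - 1))
    return max(dist_to_b(p) for p in densify(a, max_seg_len))

A = [(0, 0), (10, 0)]
B = [(0, 0), (5, 5), (10, 0)]
for spacing in (4.0, 1.0, 0.1):
    # estimate approaches the true value 5/sqrt(2) ~ 3.536 as spacing shrinks
    print(spacing, directed_hausdorff(A, B, spacing))
```

Since the distance-to-B function is 1-Lipschitz along A, sampling at spacing s bounds the error by about s/2, so the user can choose the spacing directly from the accuracy they need, which a densification factor does not allow.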

Given the wide deployment of JTS and its C++ port GEOS, any improvement stands to benefit a huge number of users.  And after 18 years it's high time this clunky old code was replaced.  So I'm happy to announce that I'm working on an entirely new implementation for Hausdorff distance which solves all the issues above.  Expect a blog post soon!

by Dr JTS (noreply@blogger.com) at February 23, 2026 09:13 PM