Welcome to Planet OSGeo

March 03, 2026

My previous blog post reviewed the concept of the Hausdorff distance (which could more descriptively be called the farthest distance). Despite its usefulness for matching geometric data, there are surprisingly few open-source implementations, and seemingly no efficient ones for linear and polygonal data.  This even includes CGAL and GRASS, which are usually reliable for providing a wide spectrum of geospatial operations.

This lack of high-quality Hausdorff implementations extends to the JTS Topology Suite.  JTS has provided the DiscreteHausdorffDistance class for many years.  That code is used in many geospatial systems (via JTS, and also its GEOS port), including widely used ones such as PostGIS, Shapely, and QGIS.  However, that implementation has significant performance problems, as well as other usability flaws (detailed here).

So I'm excited to announce the release of a new fast, general-purpose algorithm for Hausdorff distance in JTS: a class called DirectedHausdorffDistance.  It has the following capabilities:

  • handles all geometry types: points, lines and polygons
  • supports a distance tolerance parameter, to allow computing the Hausdorff distance to any desired accuracy
  • can compute distance tolerance automatically, providing a "magic-number-free" API
  • has very fast performance due to lazy densification and indexed distance computation
  • can compute the pair of points on the input geometries at which the distance is attained
  • provides prepared mode execution (caching computed indexes)
  • allows determining farthest points which lie in the interior of polygons
  • handles equal or nearly identical geometries efficiently
  • provides the isFullyWithinDistance predicate, with short-circuiting for maximum performance
Hausdorff distance vs. shortest distance between linestrings
The choice of name for the new class is deliberate.  The core of the algorithm evaluates the directed Hausdorff distance from one geometry to another.  Computing the symmetric Hausdorff distance simply involves choosing the larger of the two directed distances DHD(A, B) and DHD(B, A).  This is provided as the function DirectedHausdorffDistance.hausdorffDistance(a,b).
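The relationship between the directed and symmetric distances is easy to see in code.  Here is a minimal brute-force Python sketch for point sets, illustrating only the definition (not the indexed, adaptive algorithm described below):

```python
from math import hypot

def directed_hausdorff(pts_a, pts_b):
    """Brute-force directed Hausdorff distance DHD(A, B) for point sets:
    the largest, over all a in A, of the distance from a to its nearest
    point in B."""
    return max(min(hypot(ax - bx, ay - by) for bx, by in pts_b)
               for ax, ay in pts_a)

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance: the larger of the two directed distances."""
    return max(directed_hausdorff(pts_a, pts_b),
               directed_hausdorff(pts_b, pts_a))

a = [(0, 0), (1, 0)]
b = [(0, 0), (3, 0)]
# DHD(a, b) = 1.0: the point (1, 0) is 1 away from its nearest neighbour in b.
# DHD(b, a) = 2.0: the point (3, 0) is 2 away, so the symmetric distance is 2.0.
```

Note the asymmetry: DHD(a, b) and DHD(b, a) generally differ, which is why the symmetric distance must take the larger of the two.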

Indexed Shortest Distance

The Hausdorff distance depends on the standard (Euclidean) shortest distance function, as is evident from the mathematical definition:
    DHD(A, B) = max_{a ∈ A} dist(a, B)

A key performance improvement is to evaluate shortest distance using the IndexedFacetDistance class.  For point sets this optimization alone produces a significant boost.

For example, take the case of two random sets of 10,000 points.  DiscreteHausdorffDistance takes 495 ms; DirectedHausdorffDistance takes only 22 ms - 22x faster.  (It's also worth noting that this is similar to the performance of finding the shortest-distance points using IndexedFacetDistance.nearestPoints.)

Lazy Densification

The biggest challenge in computing the Hausdorff distance is that it can be attained at geometry locations which are not vertices. This means that linework edges must be densified to add points at which the distance can be evaluated.  The key to making this efficient is to make the computation "adaptive" by performing "lazy densification".  This avoids densifying edges where there is no chance of the farthest distance occurring.  

Densification is done recursively by bisecting segments.  To optimize finding the location of maximum distance, the algorithm uses the branch-and-bound pattern.  The edge segments are stored in a priority queue, sorted by a bounding function giving the maximum possible distance for each segment.  The segment maximum distance is the larger of the distances at its endpoints.  The segment maximum distance bound is the segment maximum distance plus one-half the segment length.  (This is a tight bound.  To see why, consider a segment S of length L, at a distance of D from the target at one end and D + e at the other.  The farthest point on S is at a distance of at most (L + 2D + e) / 2 = L/2 + D + e/2.  This is always less than L/2 + D + e, but approaches it in the limit.)
Proof of Maximum Distance Bound

The algorithm loops over the segments in the priority queue. The first segment in the queue always has the maximum distance bound. If this is less than the current maximum distance, the loop terminates, since no greater distance will be found. If the segment's distance to the target geometry is greater than the current maximum distance, it is saved as the new farthest segment.  Otherwise, the segment is bisected, the subsegment endpoint distances are computed, and both subsegments are inserted back into the queue.

Search for DHD using line segment bisection

By densifying until bisected segments drop below a given length, the directed Hausdorff distance can be determined to any desired accuracy.  The distance tolerance can be user-specified, or it can be determined automatically.  This provides a "magic-number-free" API, which significantly improves ease of use.
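The bisection search can be sketched in Python.  This is a simplified illustration of the branch-and-bound scheme, assuming a single query segment and a point-set target, with a brute-force shortest distance standing in for the indexed computation; all names are illustrative, not the JTS API:

```python
import heapq
from math import hypot

def dist_to_target(p, target_pts):
    # Stand-in for JTS's indexed shortest distance: brute force over a point set.
    return min(hypot(p[0] - q[0], p[1] - q[1]) for q in target_pts)

def dhd_segment(p0, p1, target_pts, tol):
    """Branch-and-bound search for the farthest point on segment p0-p1 from
    the target, bisecting segments until the distance bound is within tol."""
    def entry(a, b):
        d = max(dist_to_target(a, target_pts), dist_to_target(b, target_pts))
        length = hypot(b[0] - a[0], b[1] - a[1])
        bound = d + length / 2             # tight upper bound for the segment
        return (-bound, d, a, b, length)   # negated: heapq is a min-heap

    heap = [entry(p0, p1)]
    best = 0.0
    while heap:
        neg_bound, d, a, b, length = heapq.heappop(heap)
        if -neg_bound <= best + tol:   # no remaining segment can beat the max
            break
        best = max(best, d)
        if length > tol:               # bisect and re-queue both halves
            mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
            heapq.heappush(heap, entry(a, mid))
            heapq.heappush(heap, entry(mid, b))
    return best

# The farthest point of segment (0,0)-(10,0) from the targets {(0,0), (10,0)}
# is the midpoint (5,0), at distance 5 -- found without uniform densification.
```

Segments whose bound cannot exceed the current maximum are never bisected, which is what makes the densification "lazy".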

Performance comparison

Comparing the performance of DirectedHausdorffDistance to DiscreteHausdorffDistance is unfair, since the latter implementation is so inefficient.  However, it's the one currently in use, so the comparison is relevant.   

There are two possible situations.  The first is when the directed Hausdorff distance is attained at vertices (which is often the case when the geometry vertices are already dense; i.e. segment lengths are short relative to the distance).  As an example we will use two polygons of 6,426 and 19,645 vertices.  

DiscreteHausdorffDistance with no densification (a factor of 1) takes 1233 ms. DirectedHausdorffDistance takes 25 ms - 49x faster.  (In practice the performance difference is likely to be more extreme. There is no way to decide a priori how much densification is required for DiscreteHausdorffDistance to produce an accurate answer.  So usually a higher amount of densification will be specified.  This can severely decrease performance.)

The second situation has the Hausdorff distance attained in the middle of a segment, so densification is required.  The query polygon has 468 vertices, and the target has 65.
DirectedHausdorffDistance is run with a tolerance of 0.001, and takes 19 ms.  If DiscreteHausdorffDistance is run with a densification factor of 0.0001 to produce equivalent accuracy, it takes 1292 ms.  If the densification factor is 0.001, the time improves to 155 ms - still 8x slower, with a less accurate answer. 

Handling (mostly) equal geometries

The old Hausdorff distance algorithm had an issue reported in this post on GIS Stack Exchange.  It asks about the slow performance of a case of two nearly-identical geometries which have a very small discrepancy.  In the end the actual problem seemed to be due to the overhead of handling large geometries in PostGIS.  However, testing it with the new algorithm revealed a significant issue.  

Two nearly-identical geometries, showing discrepancy location

It turned out that the new bisection algorithm exhibited very poor performance for this case, and in general for geometries which have many coincident segments. In particular, this applies to computing the Hausdorff distance between two identical geometries.  This situation can easily happen when querying a dataset against itself, so it was essential to solve this problem.  Even worse, detecting the very small discrepancy required a very small accuracy tolerance, which also leads to bad performance.

The problem is that the maximum distance bounding function depends on both the segment distance and the segment length. When the segment distance is very small (or zero), the distance bound is dominated by the segment length, so subdivision continues until all segments are shorter than the accuracy tolerance.  This leads to a large number of subsegments being generated during the search, particularly when the tolerance is small (as required in the case above).

The solution is to check subsegments with zero distance to see if they are coincident with a segment of the target geometry. If so, there is no need to bisect the segment further, since subsegments must also have distance zero. With this check in place, identical (and nearly identical) cases execute as fast as more general cases of the same size.  Equally importantly, this detects very small discrepancies regardless of the accuracy tolerance.
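The coincidence test can be sketched as follows — a hypothetical brute-force check that a zero-distance subsegment lies entirely on one target segment (the actual JTS check is indexed; this is illustrative only):

```python
def on_segment(p, a, b, eps=1e-12):
    """True if point p lies on segment a-b (within a small tolerance)."""
    cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    if abs(cross) > eps:
        return False                      # not collinear with a-b
    dot = (p[0] - a[0]) * (b[0] - a[0]) + (p[1] - a[1]) * (b[1] - a[1])
    # Projection onto a-b must fall within the segment extent.
    return -eps <= dot <= (b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2 + eps

def is_coincident(seg, target_segs):
    """True if both endpoints of seg lie on a single target segment: the whole
    subsegment then has distance zero and need not be bisected further."""
    p, q = seg
    return any(on_segment(p, a, b) and on_segment(q, a, b)
               for a, b in target_segs)
```

When this returns True the search can prune the subsegment immediately, instead of bisecting it down to the tolerance length.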

For the record, the GIS-SE case now executes in about 45 ms, and detects the tiny discrepancy, which is 9 orders of magnitude smaller than the input geometry.

The Hausdorff distance of ~0.00099

Handling Polygonal Input

If the Hausdorff distance is attained at a point lying on an edge then densifying the linework is sufficient.  But for polygonal query geometries the farthest point can occur in the interior of the area: 
The Directed Hausdorff distance is attained at an interior point of the query polygon

To find the farthest interior point, the adaptive branch-and-bound approach can be used in the area domain.  Conveniently, JTS already implements this in the MaximumInscribedCircle and LargestEmptyCircle classes (see this blog post). In particular, LargestEmptyCircle supports constraining the result to lie inside an area, which is exactly what is needed for the Hausdorff distance.  The target geometry is treated as the obstacles, and the polygonal element(s) of the query geometry are the constraints on the location of the empty circle centre.
Directed Hausdorff Distance with multiple area constraints and heterogeneous obstacles

The LargestEmptyCircle algorithm is complex, so it might seem that it could significantly decrease performance.  In fact, it only adds an overhead of about 30%, and for many inputs it's not even noticeable.  Also, if there is no need to determine farthest points in the interior of polygons, this overhead can be avoided by using only polygon linework (i.e. the boundary) as input.  

Currently most Hausdorff distance algorithms operate on point sets, with very few supporting linear geometry.  There seem to be none which compute the Hausdorff distance for polygonal geometries.  While this might seem an uncommon use case, in fact it's essential to support another new capability of the algorithm: computing the isFullyWithinDistance predicate for polygonal geometries.

isFullyWithinDistance

Distance-based queries often require determining only whether the distance is less than a given value, not the actual distance itself.  This boolean predicate can be evaluated much faster than the full distance computation, since it can short-circuit as soon as any point is found to be over the distance limit.  It also allows using other geometric properties (such as envelopes) for a quick initial check. For shortest distance, this approach is provided by Geometry.isWithinDistance (and supporting methods in DistanceOp and other classes).

The equivalent predicate for Hausdorff distance is called isFullyWithinDistance.  It tests whether all points of a geometry are within a specified distance of another geometry.  This is defined in terms of the directed Hausdorff distance (and is thus an asymmetric relationship):

  isFullyWithinDistance(A,B,d) = DHD(A,B) <= d 

The DirectedHausdorffDistance class provides this predicate via the isFullyWithinDistance(A,B,dist) function.  Because the new class supports all types of input geometry (including polygons), the predicate is fully general.  For even faster performance in batch queries it can be executed in prepared mode via the isFullyWithinDistance(A,dist) method.  This mode caches the spatial indexes built on the target geometry so they can be reused.
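For point sets, the predicate and its short-circuiting behaviour can be sketched in a few lines of Python (an illustration of the definition, not the JTS implementation):

```python
from math import hypot

def is_fully_within_distance(pts_a, pts_b, d):
    """True if every point of A lies within distance d of B, i.e. DHD(A, B) <= d.
    Short-circuits: returns False at the first point of A found beyond d."""
    for ax, ay in pts_a:
        if min(hypot(ax - bx, ay - by) for bx, by in pts_b) > d:
            return False
    return True

# Asymmetric, like the directed Hausdorff distance it is defined from:
# every point of [(0, 0)] is within 1 of [(0, 0), (5, 0)], but not vice versa.
```

The early return is the short-circuit: a single far-away point settles the predicate without evaluating the remaining points.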

For a performance example, consider a dataset of European boundaries (countries and islands) containing about 28K vertices.  The boundary of Germany is used as the target geometry.


If isFullyWithinDistance is run with a distance limit of 20, it takes about 60 ms.  

There's no direct comparison for DiscreteHausdorffDistance, but if that class is used to compute the directed Hausdorff distance with a conservative densification factor of 0.1, the time is about 1100 ms.  Another point of comparison is to run a shortest distance query.  This takes only 21 ms - but it's doing much less work.

A better implementation for ST_DFullyWithin

Another way to implement isFullyWithinDistance is to compute the buffer(d) of geometry B and test whether it covers A:

  isFullyWithinDistance(A,B,d) = B.buffer(d).covers(A) 

This is how the ST_DFullyWithin function in PostGIS works now.  It's a reasonable design choice given the current lack of a performant Hausdorff distance implementation.  However, there are a few problems with using buffer:
  • Buffers of complex geometry can be slow to compute, especially for large distances
  • There's a chance of robustness bugs affecting the computed buffer
  • Buffers are linearized approximations, so there is a likelihood of false negatives for query geometries which lie close to the buffer boundary
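The third point can be quantified.  If each circular arc of the buffer is linearized with n chords (vertices on the true offset circle), the chord midpoints fall short of the true distance d by d·(1 − cos(π/n)).  A quick illustrative calculation:

```python
from math import cos, pi

def buffer_shortfall(d, n):
    """Maximum amount by which a buffer linearized with n chords per full
    circle undercuts the true offset distance d (chord midpoints sit at the
    apothem d * cos(pi / n), inside the true circle of radius d)."""
    return d - d * cos(pi / n)

# Points lying between d * cos(pi/n) and d from B are truly within distance d,
# yet fall outside the polygonal buffer: a potential false negative.
d = 100.0
for n in (8, 32, 128):
    print(f"{n:4d} chords: boundary falls up to {buffer_shortfall(d, n):.4f} units short")
```

The shortfall shrinks quadratically as n grows, but never reaches zero, so the covers() test can always misclassify points sufficiently close to the true distance limit.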
Buffer quantization causing false negatives

Now, the DirectedHausdorffDistance implementation of isFullyWithinDistance can make this function faster, more accurate, more robust and cacheable.  (And of course, the ST_HausdorffDistance function can benefit as well.)

Summary

The JTS DirectedHausdorffDistance class provides fast, cacheable, easy-to-use computation of Hausdorff distances and the isFullyWithinDistance predicate for all JTS geometry types.  This is a major improvement over the old JTS DiscreteHausdorffDistance class, and essentially fully replaces it.  More generally, it fills a notable gap in open-source geospatial functionality.  It will allow many systems to provide a high-quality implementation for Hausdorff distance.


by Dr JTS (noreply@blogger.com) at March 03, 2026 09:11 PM

March 02, 2026

TorchGeo 0.7.0 Release Notes

TorchGeo 0.7 adds 26 new pre-trained model weights, 33 new datasets, and more powerful trainers, encompassing 7 months of hard work by 20 contributors from around the world.

Highlights of this release

Note

The following model and dataset descriptions were generated by an imperfect human, not by an LLM. If there are any inaccuracies or anything else you would like to highlight, feel free to reach out to @adamjstewart.

Growing collection of foundation models

Panopticon Architecture

TorchGeo has a growing collection of Earth observation foundation models, including 94 weights from 13 papers:

  • GASSL (@kayush95 et al., 2020): Uses spatially aligned images over time to construct temporal positive pairs and a novel geo-location pretext task. Great if you are working with high-resolution RGB data such as Planet or Maxar.
  • SeCo (@oscmansan et al., 2021): Introduces the idea of seasonal contrast, using spatially aligned images over time to force the model to learn features invariant to seasonal augmentations, invariant to synthetic augmentations, and invariant to both.
  • SSL4EO-S12 (@wangyi111 et al., 2022): A spiritual successor to SeCo, with models for Sentinel-1/2 data pretrained using MoCo, DINO, and MAE (new).
  • Satlas (@favyen2 et al., 2022): A collection of Swin V2 models pretrained on a staggering amount of Sentinel-2 and NAIP data, with support for single-image and multiple-image time series. Sentinel-1 and Landsat models were later released as well.
  • Scale-MAE (@cjrd et al., 2022): The first foundation model to explicitly support RGB images with a wide range of spatial resolutions.
  • SSL4EO-L (@adamjstewart et al., 2023): The first foundation models pretrained on Landsat imagery, including Landsat 4–5 (TM), Landsat 7 (ETM+), and Landsat 8–9 (OLI/TIRS).
  • DeCUR (@wangyi111 et al., 2023): Uses a novel multi-modal SSL strategy to promote learning a common representation while also preserving unique sensor-specific information.
  • FG-MAE (@wangyi111 et al., 2023): (new) A feature-guided MAE model, pretrained to reconstruct features from histograms of gradients (HOG) and normalized difference indices (NDVI, NDWI, NDBI).
  • CROMA (@antofuller et al., 2023): (new) Combines contrastive learning and reconstruction loss to learn rich representations of MSI and SAR data.
  • DOFA (@xiong-zhitong et al., 2024): Introduced the idea of dynamically generating the patch embedding layer of a shared multimodal encoder, allowing a single model weight to support SAR, RGB, MSI, and HSI data. Great for working with multimodal data fusion, flexible channel combinations, or new satellites which don't yet have pretrained models.
  • SoftCon (@wangyi111 et al., 2024): (new) Combines a novel multi-label soft contrastive learning with land cover semantics and cross-domain continual pretraining, allowing the model to integrate knowledge from existing computer vision foundation models like DINO (ResNet) and DINOv2 (ViTs). Great if you need efficient small models for SAR/MSI.
  • Panopticon (@LeWaldm et al., 2025): (new, model architecture pictured above) Extends DINOv2 with cross attention over channels, additional metadata in the patch embeddings, and spectrally-continual pretraining. Great if you want the same features as DOFA but with even better performance, especially on SAR and HSI data, and on “non-standard” sensors.
  • Copernicus-FM (@wangyi111 et al., 2025): (new) Combines the spectral hypernetwork introduced in DOFA with a new language hypernetwork and additional metadata. Great if you want to combine image data with non-spectral data, such as DEMs, LU/LC, and AQ data, and supports variable image dimensions thanks to FlexiViT.

100+ built-in data loaders!

Dataset Contributors

TorchGeo now boasts a whopping 126 built-in data loaders. Shoutout to the following folks who have worked tirelessly to make these datasets more accessible for the ML/EO community: @adamjstewart @nilsleh @isaaccorley @calebrob6 @ashnair1 @wangyi111 @GeorgeHuber @yichiac @iejMac etc. See the above figure for a breakdown of how many datasets each of these people have packaged.

In order to build the above foundation models, TorchGeo includes an increasing number of large pretraining datasets:

  • BigEarthNet (@gencersumbul et al., 2019): Including BEN v1 and v2 (new), consisting of 590K Sentinel-2 patches with a multi-label classification task.
  • Million-AID (@IenLong et al., 2020): 1M RGB aerial images from Google Earth Engine, including both multi-label and multi-class classification tasks.
  • SeCo (@oscmansan et al., 2021): 1M images and 70B pixels from Sentinel-2 imagery, with a novel Gaussian sampling technique around urban centers for greater data diversity.
  • SSL4EO-S12 (@wangyi111 et al., 2022): 3M images and 140B pixels from Sentinel-1 GRD, Sentinel-2 TOA, and Sentinel-2 SR. Extends the SeCo sampling strategy to avoid overlapping images. (new) Now with automatic download support and additional metadata.
  • SatlasPretrain (@favyen2 et al., 2022): (new) Over 10M images and 17T pixels from Landsat, NAIP, and Sentinel-1/2 imagery. Also includes 302M supervised labels for 127 categories and 7 label types.
  • HySpecNet-11k (@m.fuchs et al., 2023): (new) 11k hyperspectral images from the EnMAP satellite.
  • SSL4EO-L (@adamjstewart et al., 2023): 5M images and 348B pixels from Landsat 4–5 (TM), Landsat 7 (ETM+), and Landsat 8–9 (OLI/TIRS). Extends the SSL4EO-S12 sampling strategy to avoid nodata pixels, and includes both TOA and SR imagery, comprising the largest-ever Landsat dataset. (new) Now with additional metadata.
  • SkyScript (@wangzhecheng et al., 2023): (new) 5.2M images from NAIP, orthophotos, Planet SkySat, Sentinel-2, and Landsat 8–9, with corresponding text descriptions for VLM training.
  • MMEarth (@vishalned et al., 2024): (new) 6M image patches and 120B pixels from over 1.2M locations, including Sentinel-1/2, Aster DEM, and ERA5 data. Includes both image-level and pixel-level classification labels.
  • Copernicus-Pretrain (@wangyi111 et al., 2025): (new, pictured below) 19M image patches and 920B pixels from Sentinel-1/2/3/5P and Copernicus GLO-30 DEM data. Extends SSL4EO-S12 for the entire Copernicus family of satellites.

Copernicus-Pretrain

We are also expanding our collection of benchmark suites to evaluate these new foundation models on a variety of downstream tasks:

  • SpaceNet (@avanetten et al., 2018): A challenge with 8 (and growing) datasets for instance segmentation tasks in building segmentation and road network mapping, with > 11M building footprints and ~20K km of road labels.
  • Copernicus-Bench (@wangyi111 et al., 2025): (new) A collection of 15 downstream tasks for classification, pixel-wise regression, semantic segmentation, and change detection. Includes Level-1 preprocessing (e.g., cloud detection), Level-2 base applications (e.g., land cover classification), and Level-3 specialized applications (e.g., air quality estimation). Covers Sentinel-1/2/3/5P sensors, and includes the first curated benchmark datasets for Sentinel-3/5P.

More powerful trainers

VHR-10 Instance Segmentation

TorchGeo now includes 10 trainers that make it easy to train models for a wide variety of tasks:

  • Classification: including binary (new), multi-class, and multi-label classification
  • Regression: including image-level and pixel-level regression
  • Semantic segmentation: including binary (new), multi-class, and multi-label (new) semantic segmentation
  • Instance segmentation: (new, example predictions pictured above) for RGB, SAR, MSI, and HSI data
  • Object detection: now with (new) support for SAR, MSI, and HSI data
  • BYOL: Bootstrap Your Own Latent SSL method
  • MoCo: Momentum Contrast, including v1, v2, and v3
  • SimCLR: Simple framework for Contrastive Learning of visual Representations, including v1 and v2
  • I/O Bench: For benchmarking TorchGeo I/O performance

In particular, instance segmentation was @ariannasole23's course project, so you have her to thank for that. Additionally, trainers now properly denormalize images before plotting, resulting in correct "true color" plots in tensorboard.

Backwards-incompatible changes

TorchGeo has graduated from alpha to beta development status (#2578). As a result, from now on major backwards-incompatible changes will, whenever possible, be preceded by a deprecation period of one minor release before complete removal.

  • MultiLabelClassificationTask is deprecated, use ClassificationTask(task='multilabel', num_labels=...) instead (#2219)
  • torchgeo.transforms.AugmentationSequential is deprecated, use kornia.augmentation.AugmentationSequential instead (#1978, #2147, #2396)
  • torchgeo.datamodules.utils.AugPipe was removed (#1978)
  • Many object detection datasets and tasks changed sample keys to match Kornia (#1978, #2513)
  • Channel dimension was squeezed out of many masks for compatibility with torchmetrics (#2147)
  • dofa_huge_patch16_224 was renamed to dofa_huge_patch14_224 (#2627)
  • SENTINEL1_ALL_* weights are deprecated, use SENTINEL1_GRD_* instead (#2677)
  • ignore parameter was moved to a class attribute in BaseTask (#2317)
  • Removed IDTReeS.plot_las, use matplotlib instead (#2428)

Dependencies

New dependencies

Removed dependencies

Changes to existing dependencies

  • Python: drop support for Python 3.10 (#2559)
  • Python: add Python 3.13 tests (#2547)
  • Fiona: v1.8.22+ is now required (#2559)
  • H5py: v3.8+ is now required (#2559)
  • Kornia: v0.7.4+ is now required (#2147)
  • Lightning: v2.5.0 is not compatible (#2489)
  • Matplotlib: v3.6+ is now required (#2559)
  • Numpy: v1.23.2+ is now required (#2559)
  • OpenCV: v4.5.5+ is now required (#2559)
  • Pandas: v1.5+ is now required (#2559)
  • Pillow: v9.2+ is now required (#2559)
  • Pyproj: v3.4+ is now required (#2559)
  • Rasterio: v1.3.3+ is now required, v1.4.0–1.4.2 is not compatible (#2442, #2559)
  • Ruff: v0.9+ is now required (#2423, #2512)
  • Scikit-image: v0.20+ is now required (#2559)
  • Scipy: v1.9.2+ is now required (#2559)
  • SMP: v0.3.3+ is now required (#2513)
  • Shapely: v1.8.5+ is now required (#2559)
  • Timm: v0.9.2+ is now required (#2513)
  • Torch: v2+ is now required (#2559)
  • Torchmetrics: v1.2+ is now required (#2513)
  • Torchvision: v0.15.1+ is now required (#2559)

Datamodules

New datamodules

Changes to existing datamodules

  • Fix support for large mini-batches in datamodules previously using RandomNCrop (#2682)
  • I/O Bench: fix automatic downloads (#2577)

Datasets

New datasets

Changes to existing datasets

  • Many object detection datasets changed sample keys to match Kornia (#1978, #2513)
  • BioMassters: rehost on HF (#2676)
  • Digital Typhoon: fix MD5 checksum (#2587)
  • ETCI 2021: fix file list when 'vv' in directory name (#2532)
  • EuroCrops: fix handling of Nones in labels (#2499)
  • IDTReeS: removed support for plotting lidar point cloud (#2428)
  • Landsat 7: fix default bands (#2542)
  • ReforesTree: skip images with missing mappings (#2668)
  • ReforesTree: fix image and mask dtype (#2642)
  • SSL4EO-L: add additional metadata (#2535)
  • SSL4EO-S12: add additional metadata (#2533)
  • SSL4EO-S12: add automatic download support (#2616)
  • VHR-10: fix plotting (#2603)
  • ZueriCrop: rehost on HF (#2522)

Changes to existing base classes

  • GeoDataset: all datasets now support non-square pixel resolutions (#2601, #2701)
  • RasterDataset: assert valid bands (#2555)

Models

New model architectures

New model weights

Changes to existing models

  • Timm models now support features_only=True (#2659, #2687)
  • DOFA: save hyperparameters as class attributes (#2346)
  • DOFA: fix inconsistent patch size in huge model (#2627)

Samplers

  • Add ability to set random sampler generator seed (#2309, #2316)

Trainers

New trainers

  • Instance segmentation (#2513)

Changes to existing trainers

  • All trainers now denormalize images before plotting, resulting in correct "true color" plots in tensorboard (#2560)
  • Classification: add support for binary, multiclass, and multilabel classification (#2219)
  • Classification: MultiLabelClassificationTask is now deprecated (#2219)
  • Object Detection: add support for non-RGB imagery (SAR, MSI, HSI) (#2602)
  • Semantic Segmentation: add support for binary, multiclass, and multilabel semantic segmentation (#2219, #2690)

Changes to trainer base classes

  • Fix load_from_checkpoint to load a pretrained model (#2317)
  • Ignore ignore when saving hyperparameters (#2317)

Transforms

  • AugmentationSequential is now deprecated (#2396)

Documentation

Changes to API docs

  • SpaceNet is now properly documented as a benchmark suite
  • Fix license for RESISC45 and VHR-10
  • SatlasPretrain: fix table hyperlink

Changes to user docs

  • Update list of related libraries (#2691)
  • Add GeoAI to related libraries list (#2675)
  • Add geobench to related libraries list (#2665)
  • Add OTBTF to related libraries list (#2666)
  • Fix file-specific test coverage (#2540)

New tutorials

  • Earthquake detection (#2647)
  • Custom semantic segmentation trainer (#2588)

Changes to existing tutorials

  • Customization: fix broken hyperlink (#2549)
  • Trainers: document where checkpoints are saved (#2658)
  • Trainers: document how to get the best model (#2658)
  • Various typo fixes (#2566)

CI

  • Faster model testing (#2687)
  • Codecov: move configuration file to subdirectory (#2361)
  • Do not cancel in-progress jobs on main branch (#2638)
  • Ignore prettier reformat in git blame (#2299)

Contributors

This release is thanks to the following contributors:

@adamjstewart
@ando-shah
@ariannasole23
@ashnair1
@burakekim
@calebrob6
@DarthReca
@dcodrut
@giswqs
@isaaccorley
@japanj
@lccol
@LeWaldm
@lns-lns
@mdchuc
@nilsleh
@remicres
@rijuld
@sfalkena
@wangyi111

by adamjstewart at March 02, 2026 04:56 AM

March 01, 2026

Yes indeed, dear reader, in 2025 too I read, watched, listened, and did things, both staying where I was and moving around. From one point of view, everything that follows in this article is null and void, because 2025 was the year of a mild but important awakening from torpor, and I went some (or many, depending on your point of view) times into the piazza to put my body on the line for the people of Gaza. But from another point of view, everything that follows is exactly what must be done in order not to sell our lives to the very system that devours us and makes us complicit. None of this is entertainment; all of it is effort, all of it is love, all of it is rage.

The books I liked

I continued reading Solenoide by Mircea Cărtărescu (who once again failed to win a Nobel this year, but what difference does it make). It is a monumental work, not only in its physical extent but also in the web it weaves, made of parts that are apparently simple and even markedly repetitive. The overall effect, however, is hypnotic and overwhelming. It is too complex to review in a few words: across its various narrative planes it interweaves literature, school, childhood, and a vision of the city of Bucharest that is at once dreamlike and disenchanted, in a political dimension as decrepit as it is distant… an immensely powerful labyrinth between Borges, Kafka, Ethel Lilian Voynich, and the manuscript of the same name… I warmly recommend it, even though certain parts are a bit repetitive, or difficult, or even annoying to read.

At the end of 2024 I had made a prodigious purchase of comics from add. And I read many of them. All of them said something to me, however different from one another, so praise to a publisher with such range. I group these comics together not because they have anything particular in common but because I am not very used to reading comics, and in fact I tried to do my best to learn to read them without flipping through quickly in search of a plot that very often isn't there, or isn't at all in the foreground. So, for instance, Disfacimento is a dreamlike, almost lysergic journey inside a world that moves very slowly but at the same time with very raw bursts of a hybrid humanity completely steeped in the animal and the vegetable. Or the two books by M. S. Harkness, truly very painful to read, with a dark mood but also a very strong will to live. Nuvole sul soffitto is very bitter, especially in the parts where the protagonist relates to his daughter, and it hit very hard. The End was a source of great reflection, very deep, and the way the progress of the comic's own creation is recounted is particularly poignant. Grande Oceano is marvelous; I even tried to convince my son to read it, and it has the fabulous, stunning dimension of a great adventure. But the comic I read and reread most often, each time finding truly powerful implications, is Baby Blue, which tells a story that is no longer dystopian and tackles it in an absolutely over-the-top and epic way. Prima dell'oblio is a small narrative labyrinth in graphic form, disenchanted but also full of hope about everything we take for granted in our lives.

The graphic novel Stretta al cuore by Stepánka Jislová comes from a different publisher, so I mention it separately. It is truly intense and leaves you speechless at several points. It seems to start as an individual story, but it builds into a much broader affair: above all about gender stereotypes, but also about family difficulties, sexual abuse, and traumas that always leave their mark.

Il calcio del figlio by Wu Ming 4 was given to me as a gift, and it is a necessary read for those who, like me, find themselves involuntarily parenting very young footballers. It gives hope, in a space where hope is badly needed, because everything often seems compressed between the desire to excel individually in a stubbornly team-based sport, a sense of belonging, and the physical movement of bodies in space.

Tiarè di Célestine Hitiura Vaite potrebbe sembrare una lettura leggera ma non lo è. Il fatto che sia ambientato in un mondo familiare e domestico, anche se geograficamente lontanissimo, lo rende universale. Riporto una bella citazione che mi è rimasta impressa

Materena ripone l’olio per friggere, ricorda il discorso fatto alla madre pochi giorni addietro: che nella prossima vita forse tornerà come lesbica.
Al che, sua madre ha commentato: «Perché aspettare?».
Ah, oui, alors. Perché aspettare?

Tiaré, pagina 63 dell’edizione italiana

Da Eleuthera ho comprato due libri di James C. Scott tradotti in italiano. Il dominio e l’arte della resistenza mi ha tenuto compagnia per buona parte dell’anno. È una lettura piacevole, molto istruttiva, e ha una visuale molto ampia sul tema, il quale di per sé non è frequente come frame di comprensione dei fenomeni sociali, né antichi né contemporanei. Non è un manuale sull’arte della resistenza, ma comunque ne fa un trattato piuttosto ricco. Lo sguardo dello stato mi ha accompagnato tutta l’estate. Molto pacato e lucido, capace di abbracciare tematiche apparentemente lontanissime tra loro con una visione molto coerente. La postilla finale di commento all’edizione italiana è un preoccupante aggiornamento al ventunesimo secolo della traiettoria descritta da Scott.

Un oggetto narrativo non identificato è Prompt di fine mondo di Agnese Trocchi del collettivo CIRCE. Liberatorio e libero, c’è bisogno di più opere con questo tipo di spazio di manovra.

Mi aveva attirato il titolo de La vegetariana di Han Kang, premio Nobel. Il libro è diviso in tre parti. Ogni parte è narrata dal punto di vista di un personaggio diverso e fortemente centrata sul rapporto tra personaggio (marito, cognato, sorella) e “la vegetariana” vera protagonista della storia. Uno sviluppo in parte circolare che nelle pagine conclusive sembra tornare all’inizio e dare un senso possibile, uno dei diversi possibili, alla vicenda inquietante e drammatica. E proprio nella conclusione mi sembra di trovare una via d’uscita dove viene mostrata la vera tragedia, quella di tutta la violenza subita, perciò la progressiva vegetalizzazione è liberazione. Molto intenso. La seconda parte sembra dare un risvolto positivo, creativo, per quanto folle, ma si conclude sia malamente rispetto a queste velleità sia raggiungendo un punto di non ritorno.

Ho voluto approfondire l’opera di Han Kang con L’ora di greco. Purtroppo l’ho letto una prima volta troppo in fretta, troppo trascinato da una trama che non c’è, e mi trovo in preda a una sensazione di dolore e sconvolgimento. L’ho riletto più lentamente. Il libro diventa via via più lirico, più criptico, ma trasmette comunque un senso di distacco tragico che sembra universale: distacco dalla famiglia, distacco dalla vista, distacco dalla parola, distacco dall’umano. In questo risiede il legame con “La vegetariana” a mio avviso, insieme al fatto che il fulcro di tutto questo dolore e distacco si trova nel nucleo familiare. È un testo difficile, almeno lo è stato per me. C’è una sottile via di uscita, se non di speranza.

Leggendo La straniera di Claudia Durastanti, ho capito che tutta la prima parte di libro mi sembra ricalcare “Middlesex” di Jeffrey Eugenides (un libro che adoro), non in modo esplicito ma tutta l’epopea degli avi, la migrazione, essere chi sei perché quella è la tua storia. Tuttavia questo libro non mi è piaciuto molto nel complesso, diversamente dagli altri che non mi sono piaciuti ne parlo perché apprezzo molto Claudia Durastanti come traduttrice…

e Brevemente risplendiamo sulla terra di Ocean Vuong è esattamente un libro che Durastanti ha tradotto. Insolitamente (per me) diretto e tagliente, ma con una profondità fortissima. Difficile dire che l’ho compreso tutto. Sicuro che mi ha fatto sentire cose mai viste prima, potentissime. Una scrittura senza steccati, ardente.

Exhibitions

In spring we went to Ferrara for the Alphonse Mucha exhibition; it was paired with one on Giovanni Boldini, both at Palazzo dei Diamanti. Not comparable, except in the minds of the ticket sellers. Mucha circulates widely in commercial shows like this one; his art releases an imagery at once timeless and very situated, almost imprisoned in the canvas it was painted on.

In Genoa, at Palazzo Ducale, I saw Jacopo Benassi's Libero!, which struck me deeply: great freedom and an affront to artistic morality. Also at the Ducale I saw other exhibitions, including one on Lisetta Carmi, which I appreciated a lot, partly because it wasn't cramped for space, and Meriggiare pallido e assorto, contemporary photography that I found rather soulless and badly in need of an interpretation that was entirely absent. THE OTHER DIRECTION, on the other hand, seems noteworthy to me because it treats an intersectional theme from an original angle: women's voices along an urban bus line that crosses half the city, whole neighbourhoods and peripheries. It is line 1, which I often take myself.

At Castello D'Albertis I also saw World Things Genova, which pairs a photography exhibition with contemporary ethnography, a post-colonial updating of the museum's collections in light of present-day migrations.

Podcasts

I listened to far less than last year. In September I also began to notice the first symptoms of fairly intense tinnitus.

I kept up, in fits and starts, with Il mondo, Stories and Love Bombing. I listened to a few episodes of Le comari dell'arte, very liberating; of Nuovo baretto utopia, with kenobit's recordings; and of Mordicchio non l'ha mai detto, which unfortunately seems to have stopped. Making a podcast is hard.

I discovered the fabulous L'orda d'oro, which grew out of a radio programme on Radio Onda Rossa. It covers Central Asia, in a highly satisfying range of expressions and viewpoints, always backed by music of different genres.

Series

I started watching Anatane e i ragazzi di Okura on Rai Play, a French-Canadian animated series set in a dystopian future (?). Simple, short episodes that I found enjoyable.

Cyberpunk: Edgerunners is rather simple and violent, but the graphics and the soundtrack are very good. One day I watched an episode and then discovered it was the finale; it felt a bit rushed, although the last scene is very moving.

I watched 3 minutes of the first episode of Stranger Things. I don't know if that counts.

Theatre

In the first part of the year I went to the theatre a few times, always less often than I would like.

The tremendous D'oro. Il sesto senso partigiano was powerful right from the first lines delivered off stage, with the first twelve articles of the Italian Constitution recited at full voice by a group of young people. True stories of men and women who handed down to us apparently simple gestures of freedom, at a time when freedom was impossible.

Stabat mater by Liv Ferracchiati is a look at masculinity and at the expectations placed on gender and on the couple, told in a light, funny way that is at the same time deadly serious. The final discussion with the author, the other actresses and Vera Gheno was excellent.

Music

I went to several concerts! On 24 April, at the ARCI Perugina club in Certosa, I listened to the anarchist and partisan songs of Mars on Pluto and, above all (for me), of the Cocks, a punk rock band from Sampierdarena that embodies much of what I would have liked to do many years ago with other misfits from the outskirts.

I attended the first evening of Electropark, an electronic music festival that has been held in Genoa for 15 years. The evening's artists were Tadleeh, the Genoese Ginevra Nervi and Luxe from London. Electronic music lies outside my usual territory, but I appreciated the very relaxed, contemplative atmosphere.

I went to a rap concert at the Libera collina di Castello; I loved the great energy of La cercleuse, a French feminist rap collective.

With Elisa I went to a Vinicio Capossela concert. It wasn't the first time, and the way he and the people on stage with him use music to tell stories keeps growing stronger.

Travels

In June we went back to Crete, after a good 10 years! We did it with the most improbable of means of transport, our own car, ferried across the Adriatic and the Aegean by the faithful ships we have known for 20 years. It was an intense but very beautiful trip; we made a fixed base at Kalamaki and then travelled around central Crete.

In spring we had gone to Ferrara; besides the exhibitions we strolled around the city and found parks to rest in the shade, excellent ice-cream shops, Korean restaurants, and a great many bicycles.

I went to Venice for two days for work, managing a quick visit to the Gallerie dell'Accademia, complete with an exhibition that included the Vitruvian Man kept there. But simply being in Venice is an enormous pleasure in itself.

In summer we took a holiday in the province of Cuneo. We began with a stop at Molare to see Franco B., a famous Genoese singer-songwriter and former colleague, with a swim in the stream. We based ourselves in Villar San Costanzo, home of the ciciu and of the famous biscuit factory that mills its flour in the nearby Dronero mill. We went to Entracque to visit the wolf centre, which the children loved.

In autumn I started a caving course, but that's another story.

by Stefano Costa at March 01, 2026 05:20 PM

February 27, 2026

February 26, 2026

Dear reader,

If you installed GeoNode 5 via Docker (GeoNode Project) and need to add a plugin that does not ship with the default GeoServer installation, this guide will show you how to do it in a correct and reproducible way.

In my case, I am using:

  • GeoNode 5.0.0
  • GeoServer 2.27.3

The goal is to install the Resource Browser Tool plugin, which lets you browse and manage GeoServer files directly through the web interface.

1. Download the plugin:

The plugin must be exactly the same version as GeoServer. Since I am using version 2.27.3, the plugin must also be 2.27.3.

> cd /home/fernandoquadro/
> wget https://sourceforge.net/projects/geoserver/files/GeoServer/2.27.3/extensions/geoserver-2.27.3-web-resource-plugin.zip
> unzip geoserver-2.27.3-web-resource-plugin.zip

After unzipping, you will have one or more .jar files.

2. Copy the plugin into the project folder:

> mkdir -p /opt/geonode_custom/my_geonode/docker/geoserver/plugins/resourcebrowser
> cp *.jar /opt/geonode_custom/my_geonode/docker/geoserver/plugins/resourcebrowser

3. Edit the GeoServer Dockerfile

The plugin should not be installed manually inside the container.
The proper procedure is to include it in the image build process.

> cd /opt/geonode_custom/my_geonode/docker/geoserver
> sudo nano Dockerfile

Add the following lines at the end of the file:

# GeoServer Resource Browser Tool (2.27.3)
COPY plugins/resourcebrowser/*.jar \
  /usr/local/tomcat/webapps/geoserver/WEB-INF/lib/

4. Rebuild the GeoServer image

> docker compose build geoserver
> docker compose up -d geoserver

If you want to force a complete rebuild, run:

> docker compose down
> docker compose build
> docker compose up -d

5. Verify that the plugin was installed

After completing the steps above, log in to GeoServer and check whether the plugin was actually installed. Open the admin panel under About & Status → Modules; if everything is correct, the Resource Browser Tool will appear in the list of installed modules.

The same procedure can be used to install any GeoServer plugin in a GeoNode deployment running via Docker.

If you have not yet installed GeoNode 5, you can find the complete step-by-step guide by clicking here.

by Fernando Quadro at February 26, 2026 09:15 PM

February 23, 2026

The Hausdorff Distance is a useful spatial function which can appear slightly mysterious. Partly this is due to the name.  It honours Felix Hausdorff, one of the founding fathers of topology, and a polymath who was creative in music and literature as well as mathematics.    

Felix Hausdorff  (1868-1942)

But the name conveys nothing about why this function is useful, or how it is different to the more familiar shortest distance.  The key difference is: the shortest distance tells you how close things are, but the Hausdorff distance tells you how far apart they are. So a more descriptive name might be "farthest distance" or "maximum distance".  With due respect to Dr. Hausdorff, this is one of those historical artifacts of nomenclature that deserves a refresh.  (Especially since it's becoming recognized that the core concept was actually first published by the Romanian mathematician Dimitrie Pompeiu.  Users of the future will be grateful to be spared invoking the ST_PompeiuHausdorffDistance function.)

Definition

The formal definition of the Hausdorff distance (HD) is 

     HD(A,B) = max( DHD(A,B), DHD(B,A) )

where DHD is the directed Hausdorff distance:

     DHD(A,B) = max{ dist(a,B) : a ∈ A }

with dist(a,B) being the usual shortest distance between point a and geometry B:

     dist(a,B) = min{ dist(a,b) : b ∈ B }

The Hausdorff distance is symmetric and is a true distance metric.  The directed Hausdorff distance is asymmetric.  Both can be useful in different contexts.  The directed version is arguably more fundamental.  (It's certainly where the bulk of the implementation effort lies.)
Directed Hausdorff Distance is asymmetric

The main application of the Hausdorff distance is in determining how well two datasets match, by providing a measure of their similarity.  In spatial applications these are typically geometries such as lines or polygons, but they can also be point clouds or raster images.  The Hausdorff distance is much more useful than shortest distance as a similarity measure because it gives information about all the points in a shape, not just a single closest point.  While shortest distance puts a bound on how far a single point is from the target, the Hausdorff distance is a bound on every point in the query shape. So in the figure below the two lines have a small shortest distance, but the Hausdorff distance reveals that they are actually far apart at some points.
Hausdorff Distance VS Shortest Distance
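
To make the definitions concrete, here is a brute-force sketch in Python (my own illustration over discrete point sets, not the JTS code; the names dhd and hd are mine) showing both directed distances and why only their maximum is symmetric:

```python
from math import hypot

def dhd(A, B):
    # Directed Hausdorff distance DHD(A, B) over discrete point sets:
    # for each a in A take the shortest distance to B, then the maximum.
    return max(min(hypot(ax - bx, ay - by) for (bx, by) in B) for (ax, ay) in A)

def hd(A, B):
    # Symmetric Hausdorff distance: the larger of the two directed distances.
    return max(dhd(A, B), dhd(B, A))

A = [(0, 0), (1, 0), (2, 0)]
B = [(0, 1), (1, 1), (2, 5)]
print(dhd(A, B))  # ~1.414: every point of A lies close to B
print(dhd(B, A))  # 5.0: but (2, 5) in B is far from A
print(hd(A, B))   # 5.0
```

The asymmetry is visible directly: A is everywhere near B, but B is not everywhere near A, so the two directed distances differ and the symmetric distance takes the larger one.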

The Implementation Challenge

A key difference between shortest distance and Hausdorff distance is that the pair of points defining the shortest distance always includes at least one vertex, whereas the Hausdorff distance can occur at non-vertex points. For lines, the Hausdorff distance can occur anywhere on the edges: 

For polygons it can occur on edges or in the interior of the query area:

This makes the Hausdorff distance substantially harder to implement for general 2D geometries.  While the shortest distance can be determined simply by evaluating the distance at the finite set of vertices of each geometry, the Hausdorff distance requires choosing a finite set of evaluation points from the infinitely many non-vertex points. 

Perhaps this is why it's so hard to find an implementation of Hausdorff distance for general 2D geometry.  (Or is there just no need for a fast, accurate, general-purpose Hausdorff distance?  Surely not...)  There are some implementations for point sets, and at least one for the specific case of convex polygons.  There are a couple which may support lines (here and here), but in a seemingly crude way.  And I haven't found a single one for general polygons.  Excellent - it's good to have a challenge!

Discrete Hausdorff Distance

A simple approach is to discretize the input linework by densifying it.  The Hausdorff distance is then evaluated over the original and added vertices. The JTS Topology Suite class DiscreteHausdorffDistance implements this approach.  The algorithm was developed many years ago (2008) for use in the RoadMatcher linear network conflation tool.  It worked well enough for that use case, since inputs were typically small and the accuracy was "good enough".  But it has some serious problems:
  • achieving accuracy requires a high degree of densification of every edge, which means slow performance
  • if the Hausdorff distance occurs at a vertex, then densification is not needed, but this is impossible to determine a priori
  • the user generally has no idea what level of densification (if any) is required to determine a result of required accuracy (this is particularly problematic in automated batch processing, where geometries may require varying amounts of densification)
  • the use of a densification factor rather than a maximum segment length was a mistake.  It is hard to determine the factor needed for a desired distance accuracy, and it causes over-densification of short edges
  • it is very slow when the inputs are equal or very similar (as shown in this issue)
  • polygonal inputs are not supported
  • the internal shortest distance computation is inefficient, since it does not use an indexed algorithm
Some of these flaws could be fixed.  For instance, the shortest distance computation could be improved by using IndexedFacetDistance (which was not available at the time of development), and densification could be controlled by a maximum segment length instead of a factor.  But addressing all these issues requires a fundamental rethinking of the algorithm. 
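As a rough sketch of that fix, here is what length-controlled densification looks like in Python (an illustration only, not the JTS code; the function names are mine). Densifying to segments no longer than max_seg directly bounds the error of the discrete result, and the example shows how badly vertex-only sampling can underestimate:

```python
from math import ceil, hypot

def densify(coords, max_seg):
    # Insert vertices so that no segment is longer than max_seg.
    # Unlike a densification *factor*, this directly controls accuracy
    # and avoids over-densifying short edges.
    out = [coords[0]]
    for (x0, y0), (x1, y1) in zip(coords, coords[1:]):
        n = max(1, ceil(hypot(x1 - x0, y1 - y0) / max_seg))
        out.extend((x0 + (x1 - x0) * i / n, y0 + (y1 - y0) * i / n)
                   for i in range(1, n + 1))
    return out

def discrete_dhd(A, B):
    # Brute-force directed Hausdorff distance over vertex lists
    # (no spatial index, so O(|A| * |B|)).
    return max(min(hypot(ax - bx, ay - by) for (bx, by) in B) for (ax, ay) in A)

# Both endpoints of line A lie exactly on line B, so vertex-only
# sampling reports 0 -- yet A's midpoint is about 2.57 units from B.
A = [(0.0, 0.0), (10.0, 0.0)]
B = [(0.0, 0.0), (5.0, 3.0), (10.0, 0.0)]
print(discrete_dhd(A, B))                              # 0.0 (misleading)
print(discrete_dhd(densify(A, 0.1), densify(B, 0.1)))  # ~2.57, close to the true value
```

This still has the brute-force cost the old class suffers from; the indexed, lazy approach the new algorithm takes avoids densifying everything up front.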

Given the wide deployment of JTS and its C++ port GEOS, any improvement stands to benefit a huge number of users.  And after 18 years it's high time this clunky old code was replaced.  So I'm happy to announce that I'm working on an entirely new implementation for Hausdorff distance which solves all the issues above.  Expect a blog post soon!

by Dr JTS (noreply@blogger.com) at February 23, 2026 09:13 PM

I ran four times, did one elliptical recovery spin on a cold day, a recovery bike ride, one weight-lifting session, one yoga class, and the daily mobility and core workout six times.

  • 12 hours, 56 minutes all training

  • 27.2 miles running

  • 6,100 ft D+ running *

I put an asterisk by the elevation gain, because two-thirds of this was on a treadmill during interval workouts. There were no matching descents, and that's the toughest part of running in the mountains.

Running uphill is hard, but also relatively low impact. That's a win-win for my training. I'm doing almost all of my intense running right now at 10-12%, on a treadmill or steep road. In week six, I did 27 minutes of hard 30/30 running. Running on an 8% incline at an 11 minutes per mile pace felt more fun than going slower on a steeper setting. I'll do more of this.

I'm adjusting my Quad Rock training plan a bit to match my current fitness and race goals. Since I want to run the race faster and am responding well to the speed work that I am doing, I'm going to do more. Instead of four evenly sized four-week blocks of training, I'm switching to three blocks. A final block dedicated to improving my all-day pace isn't the best way to help me finish a 25 mile race in under five hours. Instead, the last block before the race will be devoted to zone 2-3 efforts and downhill running. I will also expand the length of my tempo run block from four to five weeks. In a nutshell: more speed, less slogging.

by Sean Gillies at February 23, 2026 03:11 AM

February 22, 2026

“You can’t always get what you want
But if you try sometimes, well, you might find
You get what you need”
(Rolling Stones, You Can’t Always Get What You Want, Album: Let it Bleed, 1969)

I had an idea for a map – where did all of the non native plants in Britain come from? I thought there might be something interesting there if I could find the data and map it.

I called up my new friend Claude and asked for some help with the data: “is there a dataset that shows common flower and shrub species found in great britain and their country of origin (possibly with some historical narrative of how they arrived in the uk)?” That took me to the Royal Botanic Gardens, Kew’s “BIFloraExplorer” with over 3000 species in Britain and the Botanical Society of Britain and Ireland’s “Plant Atlas 2020” – loads of time saved, I had some authoritative and relevant data in minutes.

A few Q&A’s from Claude and it was ready to build a map – unfortunately the data set only had continents of origin so the map had arrows from 6 continents pointing to GB, not very interesting. It turns out that there isn’t an easy way to find and incorporate the country of origin for each non native species and anyway, even if I solved that it would just be a messy map with loads of arrows on it 🙁 Time for a rethink – “You can’t always get what you want”

A dig into the data and some to and fro with my mate identified that the BSBI data included volunteer survey data from six periods, pre-1930 to the 2010s, showing coverage by OS grid square, so maybe there was a way to make coverage maps for each species 🙂 “But if you try sometimes ..”

You don’t need all the details of how I got from an idea to my finished map, I had to source some common or vernacular names and merge into the Kew dataset to make searching easier, I had to find a nice low res geojson British Isles boundary (thank you Natural Earth) and I needed some help to resolve the survey grid squares into coordinates. When I say I, I should be honest and say Claude did most of the work but I spotted some of the problems and suggested solutions. There are a LOT of files behind this (1 per species) and processing them required some brute force which Claude applied.

After a few tries at layout for desktop and mobile, this is what I ended up with – you can search and filter species by Latin names and common names, filter by type and whether native or period of arrival, once you select a species you get a pic (via Wikimedia) and a coverage map for one of the 6 survey periods which lets you see the spread or decline of a species. Lots of interesting stuff to discover like in this screen shot the Evergreen Oak is mainly found in the South and Midlands (maybe it doesn’t like the weather up north?)

This is nothing like what I imagined when I started on this project but I am pretty pleased with it and I think it is probably more useful if you are interested in plants and trees in Britain than the original map idea I had 🙂 – “You might find you get what you need”

Sources and more info on the methodology can be found via the info button in the app.

by Steven at February 22, 2026 02:15 PM

Unusually mild weather helped make week five productive. I can feel the benefits as I write this, a week later.

  • 11 hours, 37 minutes all training

  • 19.3 miles running

  • 1,890 ft D+ running

Tuesday I did hill sprints on the "Wallenberg Wall" in my neighborhood. My cadence has increased, and I was a second faster on average. 23 seconds instead of 24. Getting faster is one of my goals, and I'm making measurable progress.

Thursday I tried my first running intervals: 9 minutes at 9.5/10 rate of perceived exertion (RPE). This was on a road that starts at 5% incline and increases to 12%. To help keep the quality of the running intervals high, I'm practicing 30/30 running like I did last year, interleaving 30 seconds of maximum effort with 30 seconds of micro-recovery high effort. I ran by a utility crew on Centennial Drive and got some cheers and good-natured heckling for my effort.

Tuesday and Friday, I lifted weights at the gym. I'm doing 5 x 5 sets of back squats at the rack to build more muscle and increase the power of my legs. It's going well.

By Saturday, my legs need a bit of a break. I went for an easy ride with a few hard pushes, and gave the single track stretch of Timber trail a go. I can ride it cleanly on my mountain bike. It's more challenging on a gravel bike with no suspension and narrower tires.

https://live.staticflickr.com/65535/55108810041_585f27a1bf_b.jpg

A blue gravel bike laid across a sandy stretch of trail that becomes more rocky as it descends towards a reservoir under a blue Colorado sky.

Sunday, I did a long-ish run on the ridge east of Horsetooth Reservoir. "The reservoir", as we say here, though there are many reservoirs, because it's the biggest one. I went at the pace I'd like to run at Quad Rock, and felt good during the run and afterwards.

by Sean Gillies at February 22, 2026 01:55 AM

February 20, 2026

I'm glad to announce the release of the Semi-Automatic Classification Plugin version 9 (codename "Foundation").

 

This new version is compatible with QGIS 4 (based on Qt 6 framework). 

Until QGIS 4 is officially released, you can try the new Semi-Automatic Classification Plugin by installing the prerelease (QGIS 3.99 master).

 

The following is the changelog:
  • new version for QGIS 4
  • built on the new Remotior Sensus version 0.6
  • new simplified interface designed for new users
  • added automatic download of Remotior Sensus if library is not available or outdated
  • in the Working toolbar added button to open a Copernicus Browser link at QGIS map coordinates
  • in the Working toolbar added buttons to show or hide custom layers or groups by name also using keyboard shortcuts Z, X, and C
  • in Download products added option to create band set
  • various bug fixing
Read more »

by Luca Congedo (noreply@blogger.com) at February 20, 2026 02:37 PM

February 18, 2026

GeoServer 2.27.5 release is now available with downloads (bin, war, windows), along with docs and extensions.

This is the last scheduled maintenance release of the GeoServer 2.27 series, providing existing installations with minor updates and bug fixes. GeoServer 2.27.5 is made in conjunction with GeoTools 33.5 and GeoWebCache 1.27.5.

Are you aware that the all new GeoServer 3 is just around the corner?


And, separately as a special sneak peek, if you’re interested in ARM64 docker images (for example, on AWS, Graviton3 offers a 40% better price performance) then check out this 2.27.5 release as a multi-platform (amd64 & arm64) build, which will very soon be merged into the official docker.osgeo.org repo as the new multi-architecture builder going forward.

Thanks to Peter Smythe (AfriGIS) for making this release and driving the ARM64 docker images.

Release notes

Improvement:

  • GEOS-12023 Improve developer logging during catalog resources loading and WMS capabilities requests
  • GEOS-12033 Allow to configure custom CRS authorities and transformations
  • GEOS-12037 Support Metatiling on MapBox Vectortiles

For the complete list see 2.27.5 release notes.

About GeoServer 2.27 Series

Additional information on GeoServer 2.27 series:

Release notes: ( 2.27.5 | 2.27.4 | 2.27.3 | 2.27.2 | 2.27.1 | 2.27.0 )

by Peter Smythe at February 18, 2026 12:00 AM

February 17, 2026

Introduction

The NDFF (Nationale Databank Flora & Fauna) holds over 200 million verified observations of plant and animal species from the Netherlands. These data are contributed by volunteers and professionals from hundreds of organizations. Together they contain a wealth of information about the occurrence of plant and animal species in the Netherlands. Since 2025, the database has been accessible to everyone free of charge. That means anyone can download all data for 5 × 5 km grid cells (certain restrictions apply).

Observations are provided as polygons. This post gives an example of how to quickly create summary raster maps showing the total number of observed species. Non-native species are excluded to focus on native biodiversity patterns. Moreover, each record is treated as a single observation, regardless of the reported number of individuals.

Study area

The example uses NDFF data from 2022 to 2025 for an area covering parts of the Loonse en Drunense Duinen and adjacent municipalities of Vught and Heusden. The spatial extent corresponds to four 5×5 km grid cells (Fig. 1). 

Figure 1. Study area, covering four NDFF 5×5 km grid cells (135-410, 140-410, 135-405 and 140-405). Map data from OpenStreetMap.


Prepare the vector data

NDFF downloads are provided per 5 × 5 km grid cell as GeoPackages containing polygon features. After downloading the four required tiles, we first merge them using QGIS → Merge Vector Layers. This produces a single layer NDFF_original containing 218,161 records.

Identifying duplicates

Because each download includes all polygons intersecting a grid cell, duplicate features occur along tile boundaries. Additional duplicates arise where a single observation is assigned to multiple species groups. Duplicates can be detected with an SQLite query executed in QGIS → Database → DB Manager:

SELECT Identiteit, n
FROM (
    SELECT *,
           COUNT(*) OVER (
               PARTITION BY "Identiteit", "Wetenschappelijke naam"
           ) AS n
    FROM NDFF_original
)
WHERE n > 1;

This reveals 6,711 duplicated records.

Filter unique records

One way to remove duplicates is with a SQL query. Run it in the DB Manager: QGIS → Database → DB Manager. Select the GeoPackage (NDFF_data) and copy the following query into the SQL window:

SELECT *
FROM NDFF_original
WHERE fid IN (
    SELECT MIN(fid)
    FROM NDFF_original
    GROUP BY
        "Identiteit",
        "Wetenschappelijke naam"
); 

Enable Load as new layer, provide a value for Layer name (prefix), and click Load. This creates a virtual layer with 214,753 records.

This is still NDFF_original, merely with a filter applied that removes the duplicate records. To make the result permanent, export the filtered layer: right-click the layer in the Layers panel and choose Export → Save Features As… This writes a new layer containing only the deduplicated features.
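The same keep-the-lowest-fid logic can be sketched in plain Python for clarity (field names as in the NDFF attribute table; the records here are made up for illustration):

```python
def keep_first(records):
    # Mirror of the SQL: for each (Identiteit, "Wetenschappelijke naam")
    # pair, keep only the record with the lowest fid.
    best = {}
    for rec in records:
        key = (rec["Identiteit"], rec["Wetenschappelijke naam"])
        if key not in best or rec["fid"] < best[key]["fid"]:
            best[key] = rec
    return sorted(best.values(), key=lambda r: r["fid"])

records = [
    {"fid": 1, "Identiteit": "obs-1", "Wetenschappelijke naam": "Quercus robur"},
    {"fid": 2, "Identiteit": "obs-1", "Wetenschappelijke naam": "Quercus robur"},  # duplicate
    {"fid": 3, "Identiteit": "obs-2", "Wetenschappelijke naam": "Pica pica"},
]
print([r["fid"] for r in keep_first(records)])  # [1, 3]
```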

Delete duplicates 

An alternative approach is to use the Delete duplicates by attribute algorithm from the Processing Toolbox. Open the function and:

  1. Input layer: NDFF_original
  2. Fields to match duplicates: click the three-dot button and select Identiteit and Wetenschappelijke naam
  3. Filtered (duplicates): Save to GeoPackage
  4. Select the target GeoPackage NDFF_data
  5. Set Layer name to NDFF_cleaned

Delete blurred records

In the NDFF, the locations of some species records are blurred and represented by a 1×1 km or 10×10 km grid cell. The actual observation can be anywhere in those cells. We will remove those records using the Extract by expression function. 

  1. Input layer: NDFF_cleaned
  2. Expression: create the following expression. "Vervaging" IS NOT NULL
  3. Matching features: NDFF_data
  4. Layer name: NDFF_cleaned2

This produces the NDFF_cleaned2 layer in the same NDFF_data GeoPackage with 207,950 records. Very large polygons may still remain; filtering those could be an additional refinement step.

Dissolve records per species

To count the number of species observed between 2022 and 2025, the individual records must be merged per species. This can be done in QGIS with the Dissolve function. 

In the QGIS menu, select Vector  → Geoprocessing Tools  →  Dissolve 

  1. Input layer: NDFF_cleaned2
  2. Dissolve fields: select the  fields Soortgroep, Wetenschappelijke naam
  3. Dissolved (name output layer): species_dissolved

The resulting layer has 4,994 species polygons.

Create a polygon heat map

The gdal_rasterize command line function converts vector geometries into a raster grid by assigning values to pixels intersecting features.

Normally, later features overwrite earlier pixel values. With the option -add pixel values are instead accumulated, enabling density- or heatmap-style rasters.
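The accumulation behaviour of -add can be mimicked on a toy grid in a few lines of Python (a sketch of the semantics only, not of GDAL internals):

```python
def rasterize_add(shapes, width, height):
    # Each "shape" is a set of (col, row) cells it covers.
    # With additive burning, overlapping shapes increment the cell count
    # instead of overwriting it (the effect of gdal_rasterize -add).
    grid = [[0] * width for _ in range(height)]
    for cells in shapes:
        for col, row in cells:
            grid[row][col] += 1
    return grid

species_a = {(0, 0), (1, 0)}
species_b = {(1, 0), (1, 1)}
grid = rasterize_add([species_a, species_b], 2, 2)
print(grid)  # [[1, 2], [0, 1]] -- cell (1, 0) is covered by two species
```

Without -add, the second shape would simply overwrite the first in the overlapping cell, which is why the flag is essential for species-count rasters.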

You can run the function from the command line, or in QGIS, where it is called the Vector conversion - Rasterize function. Look for it in the Processing toolbox and open it. Adapt the settings to your needs.

  1. Input layer: Species_dissolved
  2. A fixed value to burn: 1
  3. Output raster size units: Georeferenced units
  4. Width/Horizontal resolution: 5
  5. Height/Vertical resolution: 5
  6. Output extent: Set the extent using one of the provided methods.
  7. Advanced parameters → Pre-initialize the output image with value: 0. This ensures that all raster cells get a value, even raster cells not overlapping with polygons.
  8. Advanced parameters → Additional command-line parameters: -add.
  9. Rasterized: density.tif.

If you scroll down, you will see the corresponding command line code, which you can run directly from the command line.

gdal_rasterize -l species_dissolved -burn 1.0 -tr 5.0 5.0 -init 0.0 -a_nodata 0.0 -te 135000.0 405000.0 145000.0 415000.0 -ot Float32 -of GTiff -add species_dissolved.gpkg density.tif

The resulting map shows that the largest number of species observed within a 5×5 meter raster cell is 580.  

Figure 2. Raster heatmap of overlapping NDFF species records, with each 5 × 5 m cell representing the count of intersecting species polygons.


In the unweighted heatmap, all polygons contribute equally regardless of their size, meaning that large, spatially imprecise observations influence the result as much as small, precise ones. To account for spatial precision, each polygon can instead be assigned a weight of 1 / log10(area + 25), where area is measured in square meters. This gives greater influence to precise observations and reduces the impact of uncertain, large polygons.
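The behaviour of this weighting is easy to verify numerically. A small Python sketch, using the same 1 / log10(area + 25) formula as the field-calculator expression:

```python
import math

def weight(area_m2):
    # Weight from the field-calculator expression: 1 / log10(area + 25).
    return 1.0 / math.log10(area_m2 + 25)

# A precise 5x5 m observation vs. a record blurred to a 1x1 km cell.
print(round(weight(25), 3))         # 0.589: small polygon, larger weight
print(round(weight(1_000_000), 3))  # large polygon, much smaller weight
```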

The first step is to create a column in the attribute table that holds the weight for each polygon feature. We can do this with the Field Calculator in QGIS:

  1. Output field name:  weight
  2. Output field type: decimal number
  3. Expression: 1.0 / log10( $area + 25 )

Now we can repeat the previous steps to create a heatmap, but this time we leave A fixed value to burn empty and instead enter the column name (weight) under Field to use for a burn-in value. Or, we run:

gdal_rasterize -l species_dissolved -a weight -tr 5.0 5.0 -init 0.0 -a_nodata 0.0 -te 135000.0 405000.0 145000.0 415000.0 -ot Float32 -of GTiff -add species_dissolved2.gpkg density_weighted.tif

Comparing the result with the previous one shows that in some areas there is a relatively large ‘contribution’ from larger polygons. And there is that one curious hotspot right of the centre 🤔.

Figure 3. Raster heatmap of NDFF species-record density, with each 5 × 5 m cell showing the area-weighted count of intersecting species polygons.


Closing remark

This workflow demonstrates how open-source GIS tools such as QGIS and GDAL can transform biodiversity observations into meaningful spatial summaries.

Although NDFF records are validated at entry, the methods used to collect the data, spatial precision, survey effort, temporal coverage, and taxonomic certainty vary considerably. The steps presented here serve as a starting point and can be readily extended, for example by filtering specific species or species groups, checking for inconsistencies, or applying additional validation to support ecological interpretation.

by Paulo van Breugel at February 17, 2026 11:00 PM

What does it mean to speak about European digital infrastructure? It means:

  • Use of open standards (such as those promoted by the Open Geospatial Consortium).
  • Compliance with regulatory frameworks such as INSPIRE.
  • A real possibility of self-hosting.
  • Independence from unilateral changes in licenses or terms of use.
  • The ability to audit, evolve, and adapt the system over the long term.

Europe has clearly defined its commitment to the digital commons and technological sovereignty.
But that strategy is not materialized through declarations. It is materialized through real infrastructures.

Within this framework, gvSIG is more than a technology: it constitutes a digital infrastructure.

Today, the gvSIG brand represents an ecosystem of solutions — the gvSIG Suite — characterized by being:

  • Free and open-source software
  • Standards-based
  • Modular
  • Interoperable
  • Evolvable

Both on the web and complemented by desktop and mobile products, the gvSIG Suite enables the deployment of complete Spatial Data Infrastructures, aligned with European standards and fully controlled by the organization implementing them.

This means:

  • Own servers or trusted hosting environments
  • Integration with administrative systems
  • Adaptation to local regulations
  • Development of specific extensions
  • Progressive scalability

No dependencies.

With the gvSIG Suite, we can build a territorial digital infrastructure that:

  • Enables governance of sensitive data
  • Ensures long-term continuity
  • Reduces future risks
  • Strengthens internal technical capacities
  • Generates reusable knowledge

In the geospatial domain, the gvSIG Suite has become a structural component of digital public administration.

by Alvaro at February 17, 2026 08:22 AM


We are happy to announce that GeoServer 3 is approaching general availability, with a target release date of 15 April 2026.

GeoServer 3 Milestone Progress

This major upgrade modernises the platform’s foundation with the migration to Spring 7 and JDK 17, brings a refreshed user experience and replaces legacy image-processing components with ImageN to deliver significantly improved raster performance and maintainability. The release aligns GeoServer with current Java ecosystems, strengthens security and vulnerability management, and simplifies cloud-native deployments. You can read more about the GeoServer 3 initiative on this page.

GeoServer 3 progress has been made possible by a successful community crowdfunding campaign. This work is funded by the sponsors listed below, with a consortium (Camptocamp, GeoCat and GeoSolutions) providing coordination and additional co-funding to move from planning into delivery.

We will publish additional announcements, along with upgrade and testing instructions, in the coming weeks. The core team will ask for focused community testing on upgrade paths, high-volume raster workflows, and tiling scenarios. Final QA, packaging and documentation work is ongoing to ensure a smooth upgrade experience and clear operational guidance for administrators.

Watch the usual GeoServer channels for the release announcement and release notes. Contact the project team if your organisation can help with final testing or needs tailored migration assistance.

GeoServer 3 is supported by the following organisations:


Individual donations: Abhijit Gujar, Hennessy Becerra, Ivana Ivanova, John Bryant, Jason Horning, Jose Macchi, Peter Smythe, Sajjadul Islam, Sebastiano Meier, Stefan Overkamp.

by Jody Garnett at February 17, 2026 12:00 AM

February 16, 2026

No maps in this post!

I mentor a couple of people and I was about to start a new project this week. I was going to knock up a Google Sheet or Doc to track goals, talks and meetings and share with the mentee and my co-mentor. It occurred to me that I could vibe code an app to track the mentee’s progress and to provide structure to our regular meetings.

With a lot of help from Claude I’ve got something running that does that for a few mentors and mentees. It runs on desktop and mobile, and quite a lot of the input is voice-to-text, which is particularly useful on mobile. It took about three hours to build, test and deploy.

I am sure that I’ll find some problems or feature gaps in the future, but that doesn’t matter in my use case. I don’t plan to build this into a business or to scale it up to thousands of users; it’s just a tool that makes record keeping easier for me and my mentees.

It seems to me that vibe coding a tool just for your use case and not having to twist the way you work to fit something off the shelf is democratising software a little bit. Without Claude I couldn’t possibly have done that.

by Steven at February 16, 2026 10:11 PM

IOSACal 0.7 was released yesterday. Here is a quick summary of what’s new.

One of the standard plots rendered in the latest IOSACal version. It looks exactly as before.

This long cycle was mostly about documentation improvements and some maintenance tasks, the boring but essential work that keeps the project going.

Version 0.7 is already available on PyPI and conda-forge. There is an updated version record at Zenodo.

All changes were contributed by Stefano Costa.

  • Documentation improvements, including a new short how-to about running IOSACal in Google Colab and a tutorial for loading radiocarbon dates from a spreadsheet or CSV file
  • Added a dedicated page listing publications citing IOSACal (there are quite a few)
  • Updated dependencies to current versions, in particular NumPy
  • Aligned the Python versions used as CI targets with the versions currently supported by NumPy (3.11-3.14)
  • Added a Forgejo action for running tests
  • Updated the Code of Conduct to Contributor Covenant 2.1
  • Replaced the deprecated pkg_resources with importlib.resources
  • Switched entirely to pyproject.toml for package metadata
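For readers curious about the CSV loading mentioned above, the gist of such a loader can be sketched with the Python standard library. The column names here are made up for illustration; the IOSACal tutorial documents the actual expected format:

```python
import csv
import io

# Hypothetical CSV layout: one row per radiocarbon date
# (lab id, uncalibrated date BP, 1-sigma error).
data = """id,date_bp,sigma
P-769,7505,93
P-770,7217,93
"""

dates = [
    (row["id"], int(row["date_bp"]), int(row["sigma"]))
    for row in csv.DictReader(io.StringIO(data))
]
print(dates[0])  # ('P-769', 7505, 93)
```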

If you’re using IOSACal in your work, it’s recommended to update to the latest version.

Development is active at the Codeberg repository. If you feel like contributing some improvements, please check out the open issues or open a new one.

by Stefano Costa at February 16, 2026 05:16 PM

The FOSSGIS Conference 2026 takes place from 25-28 March 2026 in Göttingen and online. Only a few weeks remain until the conference; anticipation is growing steadily and preparations are running at full speed!

The conference is organised by the non-profit FOSSGIS e.V. and the OpenStreetMap community in cooperation with the Institute of Geography of the Georg-August-Universität Göttingen, and takes place on the campus of the University of Göttingen.

This year, too, there is great interest in the conference: registrations are rising week by week. Fortunately, the central lecture hall building of the University of Göttingen offers plenty of space, so this could become the largest FOSSGIS Conference to date.

FOSSGIS Conference 2026, Göttingen

FOSSGIS 2026 programme and schedule

This year the FOSSGIS team is again looking forward to an exciting programme with numerous talks, expert Q&A sessions, demo sessions, BoFs and user meetings, as well as 28 workshops. The conference programme runs from Wednesday to Friday in the central lecture hall building (ZHG) of the University of Göttingen. On Saturday, the OSM Saturday and the Community Sprint take place at the Faculty of Geoscience and Geography on the north campus.

https://www.fossgis-konferenz.de/2026/programm/

This year the conference already starts on Tuesday, 24 March 2026, at 10:00 with longer workshops (180 minutes). Choose from 7 workshops (see the programme) and travel in on Tuesday. The workshops address beginners as well as advanced users, and places are still available. Book a workshop and take the chance to build up knowledge on a topic in a short time.

FOSSGIS connects: user meetings and Community Sprint

Around and during the conference there are numerous opportunities to network. The catering during breaks, combined with the company exhibition and poster exhibition, takes place in the foyer of the ZHG, as does the evening event on the first conference day. Opportunities for professional networking arise at the user meetings, expert Q&A sessions and other community sessions; online participation is possible. https://www.fossgis-konferenz.de/2026/socialevents/

A rich supporting programme

This year we look forward to a varied supporting programme with exciting excursions and meetings at interesting locations around Göttingen. FOSSGIS also stands for networking, and that starts as early as Tuesday evening: the Geochicas invite you to a meet-up, and the unofficial start welcomes all participants who have already arrived with a joint dinner (self-paying).

All information can be found at https://www.fossgis-konferenz.de/2026/socialevents/

FOSSGIS Conference 2026 sponsors

Many thanks to the sponsors of the conference, whose support contributes significantly to financing the event. Become a FOSSGIS sponsor yourself; we welcome further support. Information is available at https://fossgis-konferenz.de/2026/#Sponsoring

FOSSGIS: a team event

FOSSGIS thrives on volunteer engagement: numerous helpers get involved and take on a wide variety of tasks before and during the conference. Many thanks for that!

Helpers are still being sought, in particular for chairing sessions, supporting speakers in the lecture halls, and helping with catering; see https://www.fossgis-konferenz.de/2026/helfen/.

OSM Saturday and Community Sprint

On Saturday, 28 March 2026, the OSM Saturday and the Community Sprint will take place in the rooms of the Institute of Geography at Goldschmidtstr. 3-5, 37073 Göttingen. It is an opportunity to get talking, to contribute to the Community Sprint, or to build up know-how. Everyone is welcome to take part: https://pretalx.com/fossgis2026/talk/VVYN7A/.

Stay informed about the conference

Information about FOSSGIS can be found under the hashtag #FOSSGIS2026. We use the hashtag #FOSSGIS2026 for announcements on social media; please use it as well, to connect the social media activities.

Archive of FOSSGIS conferences

In the FOSSGIS archive you will find the websites of past conferences, including programmes and videos: https://fossgis-konferenz.de/liste.html.

The FOSSGIS Team 2026 wishes everyone a good journey and looks forward to an exciting conference in Göttingen.

February 16, 2026 12:00 AM

February 15, 2026

I'm glad to announce the update of Remotior Sensus to version 0.6.
This new version adds several new features, such as clustering, raster editing and raster zonal stats. The complete changelog follows:
  • Added optional dependency Pandas for performance improvement in tabular data.
  • In the tool “Band classification”, added an option for using a PyTorch pretrained model. If a pretrained model is selected together with an additional algorithm for classification (using the parameters of the named algorithm, e.g. random forest), then after executing the pretrained model, the additional algorithm is run on the embeddings for classification. Currently, it works with models pretrained by the Allen Institute for Artificial Intelligence (SatlasPretrain: https://satlas-pretrain.allen.ai), in particular the Sentinel-2 swin-v2-base single-image multispectral and swin-v2-tiny single-image multispectral models, and the Landsat 8 / Landsat 9 swin-v2-base single-image multispectral model. SatlasPretrain model weights are released under the Open Data Commons Attribution License (ODC-BY). The repository code is licensed under the Apache License 2.0 (https://huggingface.co/allenai/satlas-pretrain). This tool downloads the official SatlasPretrain weights (Bastani et al., “SatlasPretrain: A Large-Scale Dataset for Remote Sensing Image Understanding”, ICCV 2023, arXiv:2211.15660, https://doi.org/10.48550/arXiv.2211.15660). All model weights remain the property of their respective authors.
  • In the tool “Band classification”, added PyTorch pretrained segmentation models for Sentinel-2 (swin-v2-base single-image multispectral model using 3 bands or 4 bands), pretrained by the DPR Team as part of the DPR Zoo Segmentation Hub framework (https://github.com/DPR25/dpr-zoo-segmentation-hub), based on SatlasPretrain models. The model outputs the classes: background, water, developed, tree, shrub, grass, crop, bare, snow, wetland, mangroves, moss. The repository code of the DPR Zoo models is licensed under the MIT License (https://huggingface.co/martinkorelic/dpr-zoo-models). This tool downloads the model weights (DPR Team, 2025; made as part of Arnes Hackathon 2025). All model weights remain the property of their respective authors.
  • In tool “Download products” included the download of Sentinel-2 L2A SCL band.
  • In tool “Preprocess products” included the Sentinel-2 L2A SCL band.
  • Improved progress monitoring for multiprocessing.
  • Code optimization and bug fixing.

Many of these enhancements will also be implemented in the Semi-Automatic Classification Plugin for QGIS which will be released on the 20th of February 2026.
 

For any comment or question, join the Facebook group or GitHub discussions about the Semi-Automatic Classification Plugin.

by Luca Congedo (noreply@blogger.com) at February 15, 2026 10:38 AM

February 14, 2026

TorchGeo 0.9.0 Release Notes

TorchGeo 0.9 includes 13 new datasets and a number of improvements required for better time series support, encompassing 3 months of hard work by 15 contributors from around the world. We are now trying to make more frequent releases to get exciting new features out to users as quickly as possible!

Highlights of this release

Embeddings datasets

Copernicus-Embed

TorchGeo was the first library to provide pre-trained geospatial foundation models, and offers more GeoFMs than all other GeoML libraries combined [1]. Users have always had the ability to generate their own embeddings using TorchGeo. However, using FMs requires considerable expertise and compute, preventing widespread adoption.

Several prominent papers have introduced the idea of Earth Embeddings, pre-computed embeddings made from satellite imagery mosaics or annual time series data at regional to global scale. As part of a larger review of Earth Embeddings [2], we have added all known patch-based and pixel-based embedding products to TorchGeo!

| Dataset | Kind | Spatial Extent | Spatial Resolution | Temporal Extent | Temporal Resolution | Dimensions | Dtype | License |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Clay Embeddings | Patch | Global* | 5.12 km | 2018–2023* | Snapshot | 768 | float32 | ODC-By-1.0 |
| Major TOM Embeddings | Patch | Global | 2.14–3.56 km | 2015–2024* | Snapshot | 2048 | float32 | CC-BY-SA-4.0 |
| Earth Index Embeddings | Patch | Global | 320 m | 2024 | Snapshot | 384 | float32 | CC-BY-4.0 |
| Copernicus-Embed | Patch | Global | 0.25° | 2021 | Annual | 768 | float32 | CC-BY-4.0 |
| LGND Clay Embeddings | Patch | Global | 256 m | 2024–2025 | Snapshot | 1024 | float32 | CC-BY-4.0 |
| EarthEmbeddings | Patch | Global* | 2.24–3.84 km | 2015–2024* | Snapshot | 256–1152 | float16, float32 | CC-BY-SA-4.0 |
| Presto Embeddings | Pixel | Togo | 10 m | 2019–2020 | Annual | 128 | uint16 | CC-BY-4.0 |
| Tessera Embeddings | Pixel | Global | 10 m | 2017–2025* | Annual | 128 | int8 → float32 | CC0-1.0 |
| Google Satellite Embedding | Pixel | Global | 10 m | 2017–2025 | Annual | 64 | int8 → float64 | CC-BY-4.0 |
| Embedded Seamless Data | Pixel | Global | 30 m | 2000–2024 | Annual | 12 | uint16 → float32 | CC-BY-4.0 |

Most of the FMs and pre-training datasets used to generate these embeddings can also be found in TorchGeo, offering complete reproducibility. Expect more experiments comparing the performance of different embedding products from us in the coming months, and check out our review!

Time series datasets and models

As part of our ongoing time series rewrite, this release adds time series support for RasterDataset and several new time series models!

All raster datasets can now be configured to either merge all images into a single mosaic or stack all images into a time series:

Landsat9(..., time_series=False)  # merge: [C, H, W]
Landsat9(..., time_series=True)   # stack: [T, C, H, W]
CDL(..., time_series=False)       # merge: [H, W]
CDL(..., time_series=True)        # stack: [T, H, W]

TorchGeo now offers several time series models:

1D time series ($$B \times T \times C$$)

3D change detection ($$B \times 2 \times C \times H \times W$$)

3D image time series ($$B \times T \times C \times H \times W$$)

4D ocean and atmosphere ($$B \times T \times C \times Z \times Y \times X$$)

Most time series datasets now consistently return data in $$T \times C \times H \times W$$ format. Expect more changes to our samplers and trainers in future releases as we strive for 100% time series support!

Backwards-incompatible changes

Warning

TorchGeo 0.9, like 0.8, has a number of backwards-incompatible changes required for a more stable 1.0 release in the future. Below we motivate each change and describe how to migrate any existing code.

GeoDataset: return Tensor outputs when possible

Prior versions of GeoDataset directly returned CRS and query bounding boxes in each sample dictionary. These were designed to support stitching together individual model predictions over space. However, these non-Tensor values could not be transferred to the GPU, requiring custom collation functions and deletion during training.

The 'crs' key has now been removed, and can be retrieved from the dataset. The 'bounds' key has been converted to a Tensor. A new 'transform' key can more directly be used for stitching predictions.

Tip

Instead of:

sample = dataset[...]
crs = sample['crs']

use:

crs = dataset.crs

Point datasets (EDDMapS, GBIF, iNaturalist) now use the 'keypoints' key instead of returning the entire index. This enables support for Kornia transforms on these objects.

Tip

Instead of:

keypoints = sample['bounds'].get_coordinates()

use:

keypoints = sample['keypoints']

There are still several places where sample dictionaries can contain lists or strings. Expect these to be removed or replaced with Tensors in future releases.

Models: avoid downloading by default

Several model architectures and trainers were downloading ImageNet weights by default. This surprised users who didn't expect any downloads and resulted in frequent CI failures. In TorchGeo 0.9, no datasets or models will download anything by default. Model weights will only be downloaded by explicit request.

Tip

To restore the previous behavior, replace:

# Downloads weights unexpectedly
model = ChangeStar()
model = EarthLoc()
model = FarSeg()
model = unet(weights=None)
# Downloads weights with no control over which weights
task = InstanceSegmentationTask(weights=True)
task = ObjectDetectionTask(weights=True)

with:

model = ChangeStar(backbone_weights=WeightsEnum)
model = EarthLoc(pretrained=True)
model = FarSeg(backbone_weights=WeightsEnum)
model = unet(weights=WeightsEnum)
task = InstanceSegmentationTask(weights=WeightsEnum)
task = ObjectDetectionTask(weights=WeightsEnum)

This is now enforced in CI by preventing all downloads during testing.

Other

  • xView2 was renamed to xBD (#3132)
  • SemanticSegmentationTask.predict_step now returns a dictionary (#3357)
  • SeasoNet and Substation now return $$T \times C \times H \times W$$ time series by default (#3369, #3371)
  • The dataset download backend was changed, and Google Drive datasets may no longer download correctly. Most datasets have been moved to Hugging Face; some remain and require manual download (#3338)

Dependencies

New dependencies

Changes to existing dependencies

  • Python: 3.12+ is now required (#3201)
  • geopandas: 0.13+ is now required (#3139)
  • h5py: 3.10+ is now required (#3201)
  • jsonargparse: 4.35+ is now required (#3201)
  • matplotlib: 3.7.3+ is now required (#3201)
  • netcdf4: 1.6.5+ is now required (#3201)
  • numpy: 1.26+ is now required (#3201)
  • packaging: 21+ is now required (#3201)
  • pandas: 2.1.1+ is now required (#3201)
  • pandas-stubs: 2.1.1+ is now required (#3201)
  • pillow: 10+ is now required (#3201)
  • pycocotools: 2.0.8+ is now required (#3201)
  • pyproj: 3.6.1+ is now required (#3201)
  • pytest: 7.3.2+ is now required (#3201)
  • requests: 2.25+ is now required (#3201)
  • scikit-image: 0.22+ is now required (#3201)
  • scipy: 1.11.2+ is now required (#3201)
  • shapely: 2.0.2+ is now required (#3201)
  • torch: 2.2+ is now required (#3201)
  • torchvision: 0.17+ is now required (#3201)
  • types-requests: 2.25+ is now required (#3201)
  • types-shapely: 2.0.2+ is now required (#3201)
  • typing-extensions: 4.8+ is now required (#3201)

Datasets

New datasets

  • Clay Embeddings (#3293, #3358)
  • Copernicus-Embed: pictured above (#3252)
  • Earth Embeddings (#3391)
  • Earth Index Embeddings (#3282)
  • Embedded Seamless Data (ESD) (#3403)
  • Google Satellite Embedding (AlphaEarth Foundations) (#3244)
  • Major TOM Embeddings (#3295)
  • OSCD100 (#3221, #3411)
  • PASTIS100 (#3265)
  • Presto Embeddings (#3288)
  • Tessera Embeddings (#3245, #3310)

Changes to existing datasets

  • BigEarthNetV2: fix downloaded filename (#3363)
  • Cloud Cover Detection: don't rename downloaded directories (#3158)
  • LEVIR-CD: download from Hugging Face (#3351)
  • NLCD: add 2024 data (#3189)
  • Point datasets: return keypoints (#3139)
  • SeasoNet: $$SC \times H \times W \rightarrow T \times C \times H \times W$$ (#3371)
  • SSL4EO-S12: correct docs on # channels for TOA vs. SR (#3379)
  • Substation: return time series by default, plotting fix (#3369)
  • SustainBench Crop Yield: download from Hugging Face (#3337)
  • xBD: rename xView2 dataset (#3132)
  • Fix plot docstring reference to getitem (#3353)

Changes to dataset base classes

  • Dataset: use index consistently (#3264)
  • Dataset: return Sample = dict[str, Any] (#3200)
  • GeoDataset: remove 'crs', convert 'bounds' (#3138, #3350)
  • GeoDataset: return spatial 'transform' (#3140)
  • RasterDataset: add time series support (#3183)
  • RasterDataset: refactor open/reproject to single method (#3014)
  • XarrayDataset: document that this is an experimental feature (#3362)

Utilities

  • download_and_extract_archive: replace torchvision utility (#3339)
  • download_url: replace torchvision utility, remove support for Google Drive downloads (#3338)
  • check_integrity: replace torchvision utility, add support for cryptographically secure checksum algorithms (#3302)
  • extract_archive: replace torchvision utility, enforce stricter tarball checks (#3307)
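A cryptographically secure integrity check of the kind referred to above amounts to comparing a file's SHA-256 digest against a known-good value. A minimal standard-library sketch (not TorchGeo's actual implementation):

```python
import hashlib

def check_integrity(data: bytes, expected_sha256: str) -> bool:
    # Compare the SHA-256 digest of `data` against a known-good hex digest.
    return hashlib.sha256(data).hexdigest() == expected_sha256

payload = b"hello world"
good = hashlib.sha256(payload).hexdigest()
print(check_integrity(payload, good))      # True
print(check_integrity(b"tampered", good))  # False
```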

Data Modules

New data modules

Changes to existing data modules

  • xBD: rename xView2 data module (#3132)

Changes to data module base classes

  • GeoDataModule: don't delete 'crs' and 'bounds' from sample (#3138)

Models

New model architectures

New model weights

  • Tile2Vec (#3230)
  • U-Net: add ChesapeakeRSC road segmentation weights (#3407)
  • U-Net: add PRUE FTW weights (#3406)

Changes to existing models

  • ChangeStar: replace backbone_pretrained bool with backbone_weights enum (#3348)
  • EarthLoc: pretrained model now defaults to False (#3341)
  • FarSeg: replace backbone_pretrained bool with backbone_weights enum (#3348)
  • U-Net: don't download weights unless requested (#3344)

Trainers

  • ClassificationMixin: unify features of classification trainers, add class-wise metrics (#3328)
  • ChangeDetectionTask: add labels parameter (#3328)
  • ChangeDetectionTask: add precision and recall metrics (#3328)
  • ClassificationTask: add labels, pos_weight, ignore_index parameters (#3328)
  • ClassificationTask: add dice loss support (#3328)
  • ClassificationTask: add precision and recall metrics (#3328)
  • InstanceSegmentationTask: weights bool to enum (#3349)
  • InstanceSegmentationTask: add weights_backbone parameter (#3349)
  • ObjectDetectionTask: weights bool to enum (#3352)
  • SemanticSegmentationTask: add labels and pos_weight parameters (#3328)
  • SemanticSegmentationTask: add dice loss support (#3328)
  • SemanticSegmentationTask: add precision, recall, and F1-score metrics (#3328)
  • SemanticSegmentationTask: predict_step now returns dict (#3357)

Documentation

  • Fix broken or redirected links (#3345, #3381, #3413)
  • Move images/logo to subdirectory (#3365)
  • API: redesign and reorganize dataset docs (#3385, #3395, #3409)
  • API: reorganize model architectures (#3324)
  • Tutorials: document more TorchGeo slicing options (#3374)
  • User: update related libraries (#3412)
  • Version bump (#3129, #3329, #3420)

Tests

Contributors

This release is made possible thanks to the following contributors:

  1. https://arxiv.org/abs/2510.02572

  2. https://arxiv.org/abs/2601.13134

by adamjstewart at February 14, 2026 03:24 PM

February 12, 2026

Throughout the first month of 2026, our ninjas added a nifty set of improvements to three useful plugins that we love: the GeoMapFish Search, OpenStreetMap Nominatim Search, and OSRM Routing plugins. These plugins all provide genuinely useful functionalities as well as being great showcases of how easy it is to integrate QField with online REST endpoints.

All three plugins have been updated to ship with useful endpoint presets out of the box, and users also have the option to configure their own custom endpoints for more flexibility. To configure endpoints, open QField’s settings panel, navigate to the plugin manager, and click the settings button next to the desired plugin.

Running a public endpoint using one of these services? If you think users would benefit from having your endpoint included in the preset lists, please open a request on the relevant GitHub repositories linked above.

What are these for exactly?

For those not familiar with these plugins, let’s take a minute to review their functionalities.

GeoMapFish Search and OpenStreetMap Nominatim Search plugins offer the ability to integrate online geocoding and spatial feature searches into QField’s top search bar. When activated, these plugins appear in the search bar alongside core search components, with helpful tips on how to trigger searches via these open source services.

The OpenStreetMap Nominatim Search plugin has worldwide coverage and exposes the wealth of OpenStreetMap data through simple geocoded searches such as “pubs in London”.

The GeoMapFish Search plugin has several presets covering localities in Switzerland.

The OSRM Routing plugin initially allowed users to retrieve car routes by long-tapping on the map to set a start point, end point, and optional waypoints. The updated version now supports additional routing profiles for bicycles and pedestrians.

Interested in knowing more about plugins?

QField’s plugin framework continues to grow, with new capabilities added in each new point release. To find out more about it and begin writing plugins of your own, be sure to visit this dedicated site which we launched last year to provide an overview, code snippets, and API documentation.

While we work on building a dedicated plugin repository, you can explore the growing collection of plugins on GitHub’s “qfield-plugin” topic page.

by Mathieu at February 12, 2026 01:16 AM

February 11, 2026

If you work with GIS/QGIS and still depend on clicking, manual processes, and rework, it's time to change that.

Registration is now open for the Curso Python com GIS do Zero (Python with GIS from Zero course), a technical, hands-on training for anyone who wants to leave purely operational work behind and truly start programming their geoprocessing.

Here you learn Python applied to GIS, not generic Python.

🧠 What you will master:

✅ Python from scratch with a technical focus
✅ Pandas and GeoPandas applied to GIS
✅ Vector and raster processing (GDAL/OGR)
✅ Spatial analyses and zonal statistics
✅ Modern spatial SQL with DuckDB Spatial
✅ Process automation in QGIS
✅ Introduction to plugin development

Everything applied to real-world problems, just as in the market.

🎯 Who is this course for?

🔹 GIS and geoprocessing professionals
🔹 QGIS users who want to level up
🔹 Anyone who wants to gain productivity and scale
🔹 Anyone who wants to stop clicking and start programming

👉 This is not a shallow course.
👉 It is not just theory.
👉 It is applied technical training.

📅 Places are limited

If you want to transform the way you work with GIS, now is the time.

🔗 https://geocursos.com.br/python

by Fernando Quadro at February 11, 2026 01:37 PM

February 09, 2026

Click on the image above to view Dead Reckoning v2

A couple of weeks ago I finished my Dead Reckoning V1 app and I was pretty pleased with it, all the gigs that the Dead had played summarised and displayed along with set lists, sound tracks and more. I released it just before Geomob and got some good feedback so a few days later I decided to publicise it to the Grateful Dead community on Reddit, then the storm broke!

How do you check the summaries of 3,000 rows of data? You probably do some sample checks; in my case I checked gigs that I had been to and a few others. When you unleash your creation on a forum with 160k members and 8k active in the week, there will be a lot of people viewing your app, all looking for their favourite shows and knowing way more than me about them. So I got a flood of “what about this show?” or “where is this venue?” or “your most played songs list is bullshit” (it was).

The problem was that in my battle with Gemini to get it to process and geocode the data, I had not checked closely enough what the pest was doing with it: the results were incomplete, the geocoding was poor to terrible, and some of the assumptions in the stats summaries were incorrect. All in all pretty embarrassing, but on the plus side the app and its interactions were still quite neat. To fix this mess I was going to have to regenerate all of the data, and that felt like a daunting task.

I had been at a lunch with a few serious developers and we had all been talking about AI tools for coding and everyone around the table was singing the praises of Claude, not saying it was perfect but highlighting the things it was good at. I had been thinking of upgrading to Claude for another project and decided now was the time to bite the bullet and wow!

I went back to JerryBase, the source of the data, and took a bit of time to explore it before downloading (something I should have done initially). I discovered that I could apply a filter before downloading which gave me only the gigs that were public, excluding recording studios and restricted sessions, which immediately reduced the complexity of processing. Then came the fun bit: I explained to Claude what I wanted to do and asked it to run a trial on one year's data. After 3 or 4 iterations I had a script that did what I intended, and I then got Claude to run it on all 31 files in one batch. Rather than geocoding every row in every file, I took the venues file and geocoded it with OpenCage, then asked Claude to match the gigs files to the venues file and write in the coordinates in one big batch, along with assigning a unique Venue ID to each venue and adding it to each gig for further geocoding corrections. The final step was to get Claude to convert the geocoded csv files into geojson. Whoosh, job done!
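That venue-join step is simple to sketch in Python. The following is a hypothetical reconstruction, not the actual script Claude produced; the column names (`venue`, `lat`, `lon`, `date`) and the `V001`-style Venue IDs are assumptions for illustration, not JerryBase's real export schema:

```python
import csv
import io
import json

# Assumed column names; the real JerryBase exports differ.
venues_csv = """venue,lat,lon
Fillmore West,37.7723,-122.4195
Winterland,37.7786,-122.4439
"""
gigs_csv = """date,venue
1969-02-27,Fillmore West
1969-10-04,Winterland
"""

# Index venues by name and assign a stable Venue ID,
# so later geocoding corrections only touch the venues file.
venues = {}
for i, row in enumerate(csv.DictReader(io.StringIO(venues_csv)), start=1):
    row["venue_id"] = f"V{i:03d}"
    venues[row["venue"]] = row

# Join each gig to its venue's coordinates and emit GeoJSON features.
features = []
for gig in csv.DictReader(io.StringIO(gigs_csv)):
    v = venues[gig["venue"]]
    features.append({
        "type": "Feature",
        "geometry": {
            "type": "Point",
            # GeoJSON coordinate order is [longitude, latitude].
            "coordinates": [float(v["lon"]), float(v["lat"])],
        },
        "properties": {
            "date": gig["date"],
            "venue": gig["venue"],
            "venue_id": v["venue_id"],
        },
    })

collection = {"type": "FeatureCollection", "features": features}
print(len(collection["features"]), "features written")
```

Doing the join once against a small, deduplicated venues file, instead of geocoding every gig row, is what makes the batch both cheap and repeatable.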

I now had a new set of geojson files to replace the old ones. I decided to improve the way I calculated statistics and most played songs and build a statistics file that the app could refer to rather than calculating some of the stats on the fly.

Just one problem, although I had greatly improved the quality of the data, I now had a different data model and some new fields of data to contend with. My original code was not going to work. I uploaded my html, css and js files to Claude, explained the old data model and asked it to refactor the code to work with the new data structures – took 3 or 4 tries and then bingo, the app was back working with the new complete unfiltered data.

I then ran through a short list of minor enhancements and the new version was ready to release. You can compare V1 and V2, I doubt you will notice much difference unless you are a hardcore fan but I know that this one is correct and I won’t be getting more whatabouts from the community.

It’s difficult to describe the difference between Claude and Gemini until you try to undertake a complex task with both and compare the results. Having been forced to do that because of the random errors in data processing that Gemini introduced, I am now convinced that in the short to medium term Claude will be my choice for vibe coding.

Dead Reckoning turned into a much bigger undertaking than I had anticipated, it has been a lot of fun and I have learnt a lot.

My final learning – When you find a data source, take time to understand it well before you start to process it. When you start to process the data, take a lot of time to ensure that the scripts really are doing what you want them to do.

by Steven at February 09, 2026 04:12 PM

The new version 2.7 of gvSIG Desktop includes a new field calculator. Its main advantages are, on the one hand, that several fields can be filled in at once, and on the other, that performance has improved significantly when filling in many records at a time, since they are no longer all loaded into memory but processed in blocks.

The new calculator is based on the table-update tool introduced in the previous version, where the record-filling part had been improved. It now has an “Options” tab where you can select the number of records after which the edit is committed and restarted. This way it does not load all the data into memory and does not lock up the machine, which also avoids possible data loss.

It also allows several fields to be filled in at once, each with its own formula, with the user indicating which fields to fill. If a table field is selected when the field calculator is opened, that field is the one marked to receive the formula to be executed. If none is selected, all fields are activated, so you would have to deactivate the ones you do not want to fill.

Another new feature is that it is no longer necessary to start an editing session in order to use the field calculator.

by Mario at February 09, 2026 08:41 AM

February 08, 2026

The numbers went up in week four.

  • 13 hours, 23 minutes all training

  • 30.2 miles running

  • 4,108 ft D+ running

More important is that I got in my power-building workouts. A big session of back squats and single-leg step-downs, among other exercises, at the gym on Tuesday after seeing my physical therapist. A session of hill sprints on Thursday after an easy run with a friend at Pineridge Open Space.

Saturday I went on another Quad Rock training run with a big crowd on an extraordinarily warm day. I'm glad I wore shorts and brought a third bottle of water. There were some icy spots early, but they'd melted on the return leg. I paid the price for going out too fast with some cramping at the finish, and struggled with knee stiffness, but mostly had a great morning.

Today, Sunday, I did an extended foam rolling, mobility, and core strength session while watching Liverpool lose to Manchester City in a chaotic finish. That initial goal by Szoboszlai was so good, I thought we were going to ride that to the finish. Nope. In the afternoon, after some gardening, I was loose enough to go for an easy run at Pineridge.

My long range plan has me switching from hill sprints to longer intervals next week. I think I'll smear this a little, with one session of hill sprints, and just one session of running intervals.

by Sean Gillies at February 08, 2026 11:37 PM

February 04, 2026

Talk of free or open-source software in public projects is often associated with innovation. And although it is true that access to knowledge enables more efficient innovation in many areas, it is also often read as a bet on alternative or even risky solutions.

If that reading was ever accurate, today it is even less so: it no longer matches reality.

Increasingly, the real risk is not in free software. It is in dependency.

In public projects, especially those that manage critical information — and much of what happens in the territory and is managed through geomatics is critical — stability does not mean that something works today. It means that it keeps working tomorrow, in the face of possible changes in vendors, contracts, licensing conditions, political priorities, or technical teams.

Access to the code, the use of open standards, and the real possibility of maintenance by third parties are not trivial arguments. They are basic risk-control mechanisms. They make it possible to audit, evolve, integrate and, if necessary, change vendors without rebuilding the system from scratch.

In public-sector environments, opting for closed solutions is often the riskier decision.

Free software is a conservative decision in the best sense of the term: it protects public investment, reduces uncertainty, and guarantees long-term adaptability.

And what about innovation? Without control, there is no sustainable innovation.

by Alvaro at February 04, 2026 11:02 AM

https://www.osgeo.org/foundation-news/happy-birthday-osgeo-celebrating-20-years-of-free-and-open-source-software-for-geospatial/

2026-02-04 | Celebrate 20 years of OSGeo with us

In February 2026, the Open Source Geospatial Foundation (OSGeo) will celebrate its 20th anniversary. What began as a small group of individuals and projects with a shared vision for free and open-source software for geospatial applications (FOSS4G) has evolved into a global organisation with projects, local chapters, conferences and communities spanning all continents.

While looking back over the last 20 years is important, it is even more important to consider what OSGeo represents today and how the foundation continues to evolve.

OSGeo: Then and Now

Founded in 2006, OSGeo provides a legal, organisational and community home for open-source geospatial software projects. From the outset, its purpose has been clear: to enable long-term sustainability for geospatial free and open-source software (FOSS) projects, and to support open collaboration across institutions, countries, and disciplines.

Twenty years later, OSGeo has grown into a foundation that includes the following:

  • 50 officially recognised OSGeo projects (including desktop GIS, server software, spatial libraries, and educational initiatives);

  • 30 local chapters worldwide, representing active communities in Africa, Asia, Europe, North and South America, and Oceania;

  • a global conference series (FOSS4G), with additional international and regional events held every year;

  • strong partnerships across academia, public administration, NGOs, and industry.

Projects: From the First Incubations to a Diverse Ecosystem

OSGeo has always been strengthened by its projects and the people behind them.

Some projects have been part of OSGeo since its inception:

  • MapServer, one of the foundational projects that helped to define web mapping as we know it today;

  • GDAL, which is critical for accessing raster and vector data across the entire geospatial ecosystem;

  • GRASS GIS, one of the oldest open-source GIS projects with a long tradition of scientific rigour, active development and global code sprints;

  • as well as GeoTools, Mapbender, MapBuilder, MapGuide and OSSIM.

Other projects joined later and grew into some of the largest and most active communities in the geospatial world:

  • QGIS, one of the most widely used desktop GIS applications worldwide;

  • PostGIS, which brings spatial capabilities to enterprise-grade databases;

  • and many, many more.

OSGeo’s incubation process has proven to be a reliable framework for project governance, openness and long-term sustainability – values that remain as relevant today as they were 20 years ago.

Local chapters: OSGeo on every continent

Local chapters are where OSGeo becomes a tangible presence at a regional level. They organise meetups, conferences, workshops and outreach activities, often in local languages and tailored to regional needs.

Over the years, local chapters have emerged across the globe, including long-established communities and newly formed chapters such as:

  • OSGeo Local Chapter Nepal (newly formed);

  • OSGeo Local Chapter Romania;

  • OSGeo Local Chapter FOSSGIS e.V. (D-A-CH region);

  • OSGeo Local Chapter Argentina.

Each local chapter reflects OSGeo’s diversity and shared commitment to open geospatial knowledge.

FOSS4G: Meeting in Person, Building Community

Although much of OSGeo’s collaboration takes place online, through tickets, mailing lists, chats and video calls, the FOSS4G conferences remain at the heart of the community.

By bringing together developers, users, researchers, students and decision-makers, FOSS4G events create spaces where ideas turn into collaborations and collaborations turn into long-lasting projects.

Looking ahead

Celebrating 20 years of OSGeo is not just about history. It is also about the future.

As geospatial technologies become ever more central to addressing global challenges, from climate change to land management and urban planning, OSGeo’s role as a neutral, open and community-driven foundation remains essential.

The coming years will continue to focus on:

  • strengthening the sustainability of the projects

  • supporting new communities and local chapters

  • expanding education and outreach

  • ensuring that geospatial software remains free, open and accessible to all.

Happy birthday, OSGeo! Thank you to everyone who has been part of this endeavour over the last 20 years.

by jsanz at February 04, 2026 09:22 AM

February 03, 2026

It’s our pleasure to share our first Documentation and Infrastructure Report with highlights ranging across documentation, web infrastructure, and community-facing work. While this blog post provides a brief summary and team introduction, you can find the full report here.

Throughout the year, the documentation team, composed of Selma Vidimlic Husic and Hefni Azzahra (see below for our introductions), focused on improving the quality, accuracy, and completeness of QGIS documentation. Key efforts included reducing the number of open documentation issues, maintaining existing content, and adding new material where gaps were identified. Further details on these activities and outcomes are described in the full report.

At the same time, important improvements were made to the project’s web infrastructure. Lova Andriarimalala led significant maintenance and modernization work, including splitting services across individual VPSs (virtual servers), branding alignment with the new QGIS.org website, and improving overall server performance. These changes increased reliability, improved UI/UX, and provided a more stable foundation for the continued growth of QGIS’s online services.

Overall, 2025 was a year focused on strengthening the core systems that support QGIS users and contributors.

Team Profile

Lova Andriarimalala

I am a Full-Stack Developer funded by QGIS, specializing in building and maintaining the ecosystem of QGIS websites. My work spans frontend and backend development, infrastructure improvements, performance optimization, UX refinements, and the creation of tools that make QGIS resources easier to navigate.
Beyond maintaining the current infrastructure, I contribute to strategic improvements across the QGIS web landscape – improving architectures, implementing automation, and enhancing contributor workflows. Through this role, I support the broader mission of QGIS by ensuring its online presence reflects the high quality of the software and the vibrant community behind it.

Individual Message: “This year has been a really enjoyable journey working on the QGIS websites. I’ve had the chance to dive into many different parts of the web ecosystem such as fixing issues, improving workflows, cleaning up old parts of the infrastructure, and helping make the sites clearer and more useful for the community. What I appreciated most was how collaborative everything felt: discussing ideas with contributors, learning from the community, and seeing small improvements add up to a better experience for everyone. It’s been a fulfilling year, and I’m excited to keep building on this work.”

Contact: lova@kartoza.com

Selma Vidimlic Husic

I am a QGIS Documentation Writer focused on keeping QGIS user documentation accurate, clear, and aligned with how the software actually works. I test features in QGIS, check behaviour in the source code when needed, and update sections inside the QGIS User Manual and other parts of official documentation. My contributions include writing and editing documentation, reviewing pull requests, improving examples and workflows, onboarding and supporting junior writers and other contributors, and helping ensure that new features are documented consistently. I also support the community by presenting at QGIS events and creating videos that explain documentation updates. My work helps maintain reliable, up-to-date documentation for the global QGIS community.

Individual Message: “This year has been really interesting. I’ve learned a lot about how the open-source community works, improved my technical skills, and expanded on the knowledge I already had. I’ve also gained a clearer understanding of what the documentation writer role involves and how our work contributes to the project. I’m truly grateful for this experience, and I’m especially happy to have had the chance to meet and collaborate with such smart and dedicated people who have made QGIS the great project it is today.
I also believe it would be very beneficial if the wider community could be a bit more involved in documentation organisation, particularly when it comes to setting priorities that align with the development plan. This kind of support would help us focus our efforts where they are most needed and ensure that the documentation continues to grow alongside the project. If you want to help, please reach out to me!”

Contact: selma@kartoza.com

Hefni Azzahra

I am a Junior QGIS Documentation Writer; I started working for the QGIS project in July 2025. I check how features work in QGIS, update descriptions and examples, and help ensure that new tools are properly documented. I also collaborate with other writers and contributors. My other contributions include creating videos for QGIS sprints to announce documentation updates. I am committed to helping users understand QGIS through clear and up-to-date documentation.

Individual Message: “I’m so grateful for this role. I’ve learned many new things, and until now, every day still feels like learning. From testing features to documenting the process and sharing it with the community, it has all been very rewarding. It feels nice to contribute to the community, even through small things like fixing typos or adding visual examples. Back on my first day, I started by addressing “good first issues” (the GitHub issue label used to indicate issues suitable for those getting started with working on the documentation). Now, here I am, busy with more complex ones. Investigating things has become part of my life. This has been a fulfilling year with QGIS. Collaborating with amazing people and contributing to the community are truly my things. I hope I can continue contributing more and more.”

Contact: hefni@kartoza.com

by Selma Vidimlic at February 03, 2026 04:49 PM

February 02, 2026

Click on the image above to view Dead Reckoning

This could be a long post because this was an ambitious project that was full of challenges and there are a few learnings, but first a bit of context.

A friend turned me on to the Grateful Dead in 1969, one listen to Live Dead and I was hooked. I still think the transition from Dark Star to St. Stephen is one of the most sublime bits of music ever. When the Dead came to England in 1972 I managed to get to most of their London gigs and made the journey to Bickershaw for the longest performance I have ever seen, I think it ran out at nearly 5 hours!

I have been a Dead Head for over 50 years and their music has been with me throughout my adult life. Last year, I was so, so fortunate to be able to see Bobby Weir performing with the London Symphony Orchestra at the Royal Albert Hall – literally One More Saturday Night.

When Bobby passed in January 2026 I decided to combine my love of the Dead with my hobby of vibe coding to build “Dead Reckoning” as a tribute to the greatest rock ‘n roll band ever who have given me and many others so much over decades.

Most of the challenges with this project related to the data. Between 1965 and 1995, the Dead played about 3,000 gigs at 650+ venues. Amazingly, the folk at JerryBase have compiled extensive data on every gig, recording session or unplanned appearance, with set lists, images of tickets and posters, band members and more. And it is easy to download as a series of csv files, one for each year, but it was a lot of data, and some of it was not of interest to the story I wanted to tell with Dead Reckoning.

I knew how I wanted to simplify and summarise the data: filter out recordings, impromptu sessions and minor side projects to focus on the Dead and the principal side projects. Then I wanted to summarise a run of nights at a venue as one record with the count of gigs and one of the set lists; they varied from night to night, but I decided one record with a sample set list was better. Oh, and I wanted to geocode each venue. Sounds easy? I drafted a detailed prompt and fired it at Gemini and got back nothing like what I wanted! I tried again and again, refining the prompt and being very explicit about what was important. Several hours went by and eventually I got something that looked ok. I thought “now we are cooking, just keep firing the 31 year files at Gemini with the same prompt and the job will be done” – no, no, no! Gemini forgets what you asked of it even if you repeat the prompt; it goes off on a wobbler and does something completely off brief. When the files are hundreds of rows long it's not easy to spot errors, but I did, way later on, when it was even more difficult to fix.

Learning – an AI is not a precise repeatable script.

Eventually I got 31 year files with coordinates (more on that later) in the format I needed, and I was able to load them into QGIS, do a bit of cleaning up, create an all-years aggregate/summary, export them as geojson, and I was ready to go. In the background I had also been gathering album covers and photos, creating the svg logos, building the graphics in a Dead style that I would use as headers and banners in the app, and tracking down live sound recordings on Archive.org. Gemini had also offered to provide some narrative text for each year covering the musical direction of the band, changes amongst personnel and cultural stuff that was going on. It did this really well and returned it as a table that I could build on; when I have time I need to review what Gemini wrote and decide whether I can improve on it, but summarising text is something that AI seems to be pretty good at.

The last thing I wanted to do was to pre-calculate some stats for each year (total number of gigs, unique venues, songs most played in a year) so that the app would not need to generate them each time it loaded. My first try at getting Gemini to do this was predictably flaky, and the thought of trying to do this 32 times was daunting. I had shied away from running Python scripts, let alone writing them, but I thought this might be a more reliable way of generating the stats from my geojson files – bingo! 32 files summarised perfectly after 3 or 4 iterations in about 15 minutes.

Learning – for data processing/cleaning tasks get AI to write a script, test it and refine it, once it is working you will get repeatable results.
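A per-year stats pass like the one described above can be sketched in a few lines. This is an illustrative reconstruction only; the GeoJSON property names (`venue`, `setlist`) are assumptions and will not match the real Dead Reckoning files exactly:

```python
import json
from collections import Counter

# A tiny stand-in for one year's GeoJSON file (assumed property names).
year_1969 = {"type": "FeatureCollection", "features": [
    {"type": "Feature", "geometry": None,
     "properties": {"venue": "Fillmore West",
                    "setlist": ["Dark Star", "St. Stephen", "The Eleven"]}},
    {"type": "Feature", "geometry": None,
     "properties": {"venue": "Winterland",
                    "setlist": ["Dark Star", "Turn On Your Love Light"]}},
]}

def year_stats(collection):
    """Summarise one year: gig count, unique venues, most played songs."""
    props = [f["properties"] for f in collection["features"]]
    songs = Counter(s for p in props for s in p.get("setlist", []))
    return {
        "gigs": len(props),
        "unique_venues": len({p["venue"] for p in props}),
        "top_songs": [s for s, _ in songs.most_common(3)],
    }

stats = year_stats(year_1969)
print(json.dumps(stats))
```

Because the script is deterministic, running it over all 32 files gives the same answer every time, which is exactly the repeatability the prompt-only approach lacked.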

Gemini is not a geocoder: it does not have an elegant fallback strategy when it can't get a match, and you can't rely on it to deliver the same results for the same address if it comes across it twice.

Learning – use a proper geocoder. OpenCage has a good free tier, and even if you need to pay a little for your volume of data, you'll be way better off than wrestling with the random junk and null islands that Gemini will give you.
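As a rough illustration of that approach, here is a minimal forward-geocoding helper using only the Python standard library. The endpoint and response shape follow OpenCage's public JSON API, but treat the details (parameter names, result fields) as things to verify against their documentation; returning `None` on no match lets the caller apply an explicit fallback instead of silently writing a null island:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

OPENCAGE_URL = "https://api.opencagedata.com/geocode/v1/json"

def parse_best(data):
    """Pick (lat, lng) from the top OpenCage result, or None if no match."""
    results = data.get("results") or []
    if not results:
        return None
    g = results[0]["geometry"]
    return (g["lat"], g["lng"])

def geocode(query, api_key):
    """Forward-geocode one venue string via the OpenCage API."""
    url = OPENCAGE_URL + "?" + urlencode(
        {"q": query, "key": api_key, "limit": 1})
    with urlopen(url) as resp:
        data = json.load(resp)
    return parse_best(data)

# Parsing can be exercised without a network call or API key:
sample = {"results": [{"geometry": {"lat": 51.5074, "lng": -0.1278}}]}
print(parse_best(sample))
```

Geocoding the deduplicated venues file once, caching the results, and joining them back to the gigs also keeps you comfortably inside a free-tier request quota.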

There were some other images to produce: the “What a long strange trip” banner and the year headers in a matching Grateful Dead style. Producing these was more hassle than I expected – image creation is not an exact science, and once I had a “1965” graphic that I was happy with I couldn't easily get 1966, 1967 etc. The AI gods may know why; surely that should be easy?

Eventually I had all the content for the app and I was ready to start coding. I drew a very simple wireframe diagram with some annotations and uploaded that to Gemini along with a fairly detailed requirement spec. It took a while to get a map rendering, and ages to get the clustering and Grateful Dead “Steal Your Face” svg icons to render the way I wanted, but finally I had a decent map working as intended with some neat tabbed info popups.

Getting the Year Panel to work the way I wanted, with a summary of the gigs played, the most played songs and the notes for each year, was relatively easy, working through each element one at a time. The music player was fiddly, but once I had worked out the naming format on archive.org and had manually checked that every link was freely available and embeddable (not all the links can be embedded), it worked beautifully.

I had decided that on mobile I only wanted the Year Panel to appear; there is just too much going on to have the map on a phone screen. Sounded simple: check the screen width and, if phone-sized, don't render the map – no, no, no! iPhones (and probably other high-end phones with hi-res screens) kid the app into believing they have loads of pixels, which messed up my logic. Gemini solved this for me after a bit of virtual head scratching.

A few more tweaks and I was ready to release just before Geomob London. Lots of nice feedback from my friends as the link got shared, so I decided to post about the app on Reddit r/gratefuldead and then the fun started – but that is a story for another day, I’ll write another post in a few days.

by Steven at February 02, 2026 06:37 PM

We are living in a time when almost every territorial project is presented alongside concepts such as artificial intelligence, digital twins or advanced analytics. The message is appealing: prediction, automation, intelligent decision-making.

But there is a less visible and far more decisive reality:
these systems only work properly when data is well structured.

AI applied to territorial management does not just require large volumes of information. It requires data that is coherent, comparable and maintainable over time. This is where standards come into play.

INSPIRE is not a “modern” technology, nor does it aim to be one. It is a framework that defines how geospatial data is described, shared and understood across organizations. Precisely for this reason, it becomes essential when the goal is to go beyond isolated, one-off projects.

Without a common framework:

  • AI models learn from inconsistent data,
  • digital twins become local representations that are difficult to scale,
  • each new use case requires integrations and adaptations to be rebuilt from scratch.

With INSPIRE:

  • data maintains a shared semantics,
  • analyses are reproducible,
  • solutions can grow and connect with other systems.

In our case, this foundation has enabled us to integrate artificial intelligence capabilities directly into SDI platforms, to the point of interacting with gvSIG Online using natural language, generating dynamic dashboards, or launching advanced analyses without breaking the coherence of the system.

The more advanced the solutions we want to build on top of the territory, the more we depend on standards that are rarely mentioned.

In upcoming posts, we will go into more detail on how these AI capabilities are being practically integrated into gvSIG Online and how we are applying them across a wide range of projects.

by Alvaro at February 02, 2026 11:46 AM

Since version 2.7, gvSIG Desktop is offered directly for download as a portable distribution, so nothing is installed on the user's machine: it is simply a ZIP file that you extract and then run. This version could even be kept on an SD card in a laptop, or carried on a pen drive and run on other machines (as long as they have the same operating system), which will preserve the configuration you had in it.

One detail to keep in mind when extracting the .ZIP files is that it should not be done into paths containing spaces, accents or eñes, nor into long paths.

Regarding long paths, a common problem with the Windows distribution is that if it is extracted with the system's built-in unzip tool, two folders with the same name are created, producing a longer path. In this video we show the possible solutions:

On Linux, on the other hand, the system's unzip tool does not have the problem that existed on Windows. In this video we show how to extract and run a portable version of gvSIG Desktop on Linux:

In this case a shortcut to open gvSIG will not be created by default, as the installable version did. For the portable version, you can see how to create a shortcut in both Windows and Linux in the following post.

by Mario at February 02, 2026 08:00 AM