Welcome to Planet OSGeo

September 20, 2018

The GeoServer style editor now includes a full-screen side-by-side editing mode, making it easier to preview your styles as you edit them.

The toolbar also has two new additions: a color picker that helps you find the right color and convert it to its hexadecimal encoding, and a file chooser that lets you select an external graphic and build the ExternalGraphic element. See:

Source: GeoServer Blog

by Fernando Quadro at September 20, 2018 10:30 AM

September 19, 2018

The program of the 14th International gvSIG Conference is now available. It includes a great number of presentations in different thematic sessions and 7 free workshops about the gvSIG Suite.

The conference will take place from October 24th to 26th in Valencia, Spain, and registration must be done through the form available on the event website.

Registration for the workshops will be handled separately. We will publish all workshop information and registration details on the gvSIG Blog soon.

by Mario at September 19, 2018 12:58 PM

The program of the 14th International gvSIG Conference is now available, with a large number of presentations divided into thematic sessions and seven free workshops about the gvSIG Suite.

The conference will take place from October 24th to 26th in Valencia, Spain. To attend, you must register in advance using the form available on the conference website. We recommend not waiting until the last minute, since the rooms have limited capacity.

Registration for the workshops will be handled separately. We will soon announce the opening of workshop registration; all related information will be available on the gvSIG blog.

by Mario at September 19, 2018 12:43 PM

GeoServer 2.14 adds support for an efficient map algebra package known as Jiffle. Jiffle is the work of a former GeoTools contributor, Michael Bedward; it has been resurrected, updated to support Java 8, and integrated into jai-ext.

From there, support was added to the GeoTools gt-process-raster module and, as a result, to the GeoServer WPS service, where it can be used directly or as a rendering transformation.

The following SLD style invokes Jiffle to compute an NDVI over Sentinel-2 data:

<?xml version="1.0" encoding="UTF-8"?>
<StyledLayerDescriptor xmlns="http://www.opengis.net/sld" 
   xmlns:ogc="http://www.opengis.net/ogc" xmlns:xlink="http://www.w3.org/1999/xlink" 
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.opengis.net/sld
http://schemas.opengis.net/sld/1.0.0/StyledLayerDescriptor.xsd" version="1.0.0">
  <NamedLayer>
    <Name>Sentinel2 NDVI</Name>
    <UserStyle>
      <Title>NDVI</Title>
      <FeatureTypeStyle>
        <Transformation>
          <ogc:Function name="ras:Jiffle">
            <ogc:Function name="parameter">
              <ogc:Literal>coverage</ogc:Literal>
            </ogc:Function>
            <ogc:Function name="parameter">
              <ogc:Literal>script</ogc:Literal>
              <ogc:Literal>
                nir = src[7];
                vir = src[3];
                dest = (nir - vir) / (nir + vir);
              </ogc:Literal>
            </ogc:Function>
          </ogc:Function>
        </Transformation>
        <Rule>
          <RasterSymbolizer>
            <Opacity>1.0</Opacity>
            <ColorMap>
              <ColorMapEntry color="#000000" quantity="-1"/>
              <ColorMapEntry color="#0000ff" quantity="-0.75"/>
              <ColorMapEntry color="#ff00ff" quantity="-0.25"/>
              <ColorMapEntry color="#ff0000" quantity="0"/>
              <ColorMapEntry color="#ffff00" quantity="0.5"/>
              <ColorMapEntry color="#00ff00" quantity="1"/>
            </ColorMap>
          </RasterSymbolizer>
        </Rule>
      </FeatureTypeStyle>
    </UserStyle>
  </NamedLayer>
</StyledLayerDescriptor>
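The Jiffle script embedded in the style computes, pixel by pixel, dest = (nir - vir) / (nir + vir). Here is a minimal plain-Python sketch of that band arithmetic; the reflectance values below are made up for illustration, not real Sentinel-2 samples:

```python
# Sketch of the band arithmetic in the Jiffle script:
# dest = (nir - vir) / (nir + vir), applied pixel by pixel.

def ndvi(nir, vir):
    """Normalized Difference Vegetation Index for a single pixel."""
    return (nir - vir) / (nir + vir)

# Made-up reflectance pairs (NIR, visible red); in the Jiffle script these
# come from bands src[7] and src[3] of the Sentinel-2 coverage.
pixels = [(0.45, 0.05), (0.30, 0.25), (0.10, 0.40)]
values = [round(ndvi(nir, vir), 3) for nir, vir in pixels]
print(values)  # NDVI near +1 suggests vegetation; negative values, water or bare soil
```

In the real style, GeoServer runs this computation on the server for every pixel of the requested map, and the ColorMap below then maps the resulting -1..1 range to colors.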

Performance is good enough for interactive display, and the result looks like this:

Source: GeoServer Blog

by Fernando Quadro at September 19, 2018 10:31 AM

The GeoTools team is pleased to announce the release of GeoTools 20-RC: geotools-20-RC-bin.zip geotools-20-RC-doc.zip geotools-20-RC-userguide.zip geotools-20-RC-project.zip This release candidate is also available from our Maven repository, and is made in conjunction with GeoServer 2.14-RC. This release includes various major changes. Please test this release candidate.

by Andrea Aime (noreply@blogger.com) at September 19, 2018 03:40 AM

September 18, 2018

This post was originally published for OL 3.5 by Dennis Bauszus. There is no good reason not to use the latest version of OpenLayers, but if you do use a more recent version of OL, check whether some adjustments are needed.

This test used GeoServer 2.8, with data stored in a PostGIS 2.1 database.

For the test service, a very simple table is used, with just an ID and a geometry. The geometry column is defined as geometry, with no type or projection. It is important that the geometry column is named geometry; otherwise, inserts may create records with empty geometry fields. A constraint must be defined on the ID, or GeoServer will not be able to insert records into the table.

CREATE TABLE wfs_geom
(
  id bigint NOT NULL,
  geometry geometry,
  CONSTRAINT wfs_geom_pkey PRIMARY KEY (id)
)
WITH (
  OIDS=FALSE
);
ALTER TABLE wfs_geom
  OWNER TO geoserver;
 
CREATE INDEX sidx_wfs_geom
  ON wfs_geom
  USING gist
  (geometry);

At the core of the OL JavaScript snippets is the ol.format.WFS.writeTransaction function, which takes 4 input parameters. The first 3 parameters define whether the data should be inserted, updated, or deleted from the data source. The fourth parameter takes the form of ol.format.GML and carries information about the feature type, namespace, and projection of the data.

The writeTransaction node must be serialized with an XMLSerializer before it can be used in a WFS-T POST.

The three use cases (insert/update/delete) and the AJAX call are shown in the following code snippet.

var formatWFS = new ol.format.WFS();

var formatGML = new ol.format.GML({
    featureNS: 'https://gsx.geolytix.net/geoserver/geolytix_wfs',
    featureType: 'wfs_geom',
    srsName: 'EPSG:3857'
});

var xs = new XMLSerializer();

var transactWFS = function (mode, f) {
    var node;
    switch (mode) {
        case 'insert':
            node = formatWFS.writeTransaction([f], null, null, formatGML);
            break;
        case 'update':
            node = formatWFS.writeTransaction(null, [f], null, formatGML);
            break;
        case 'delete':
            node = formatWFS.writeTransaction(null, null, [f], formatGML);
            break;
    }
    var payload = xs.serializeToString(node);
    $.ajax('https://gsx.geolytix.net/geoserver/geolytix_wfs/ows', {
        service: 'WFS',
        type: 'POST',
        dataType: 'xml',
        processData: false,
        contentType: 'text/xml',
        data: payload
    }).done(function() {
        sourceWFS.clear();
    });
};

Inserts are triggered from the "drawend" event of the OL draw interaction. The .clear() call on sourceWFS reloads the source after each transaction. This ensures that feature IDs are correct for new features and that deleted features are removed from the display.

The ID of a modified feature is stored, and the update transaction is issued when the feature is deselected. For an update transaction to post successfully, the boundedBy property must be removed from the feature's properties. A clone of the feature is used to achieve this.

map.addInteraction(interactionSelect);
interaction = new ol.interaction.Modify({
    features: interactionSelect.getFeatures()
});
map.addInteraction(interaction);
map.addInteraction(interactionSnap);
dirty = {};
interactionSelect.getFeatures().on('add', function (e) {
    e.element.on('change', function (e) {
        dirty[e.target.getId()] = true;
    });
});
interactionSelect.getFeatures().on('remove', function (e) {
    var f = e.element;
    if (dirty[f.getId()]) {
        delete dirty[f.getId()];
        var featureProperties = f.getProperties();
        delete featureProperties.boundedBy;
        var clone = new ol.Feature(featureProperties);
        clone.setId(f.getId());
        transactWFS('update', clone);
    }
});

You can see the complete example (for OL version 3.16) on jsFiddle.

Source: Medium – Dennis Bauszus

by Fernando Quadro at September 18, 2018 06:03 PM

After a few days of fieldwork, you return to your office and have to organize everything you collected. A large part of your data is associated with some location in your study area.

Since you are not working alone, you will have to present the results of those field days to your team or to the project coordinator.

What would be the easiest way to present all of this to them?

One way to do it is to use Google Earth together with a set of points.

Result of the CSV-to-KMZ export process in ArcGIS. When you click the icons in Google Earth, you get access to the exported attribute table.

Before we begin, you will need GIS software such as QGIS or ArcGIS. In this tutorial, we will show the procedure using both.

Besides that, you will need a spreadsheet editor (such as Microsoft Excel or LibreOffice Calc).

Tabulating the Field Data

First, let's organize our field data in a spreadsheet containing the location of our points (coordinates), the numbers of the photos taken, and any complementary information.

We created 5 fictitious points below so you can follow our tutorial.

X (UTM) Y (UTM) Photo Notes
605490 6807439 3848-3852 Soil Sampling Point
607801 6807918 3853-3855 Water Sampling Point
609479 6807026 3856-3857 Flow Measurement
609380 6807283 3858-3860 Air Quality Assessment
609703 6806882 3861-3862 Access Blocked (Broken Bridge)

Now, let's save this spreadsheet in CSV format.

In Excel, click File > Save As; in the window that opens, just below the file name, change the format field to CSV (Comma delimited) and click Save.

Excel will warn that some features may be lost when saving as CSV; in this case, you can safely click Yes.

If your data is saved on your GPS in GPX format, you can skip this step and work directly in the GIS.

With the CSV file saved, we will use QGIS or ArcGIS to save it in KMZ format.
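For readers who prefer scripting, the same CSV-to-KML conversion can be sketched with the Python standard library. This is a hand-rolled minimal sketch, not the exact output QGIS or ArcGIS produce: it uses the generic Data/value form of ExtendedData (QGIS writes SchemaData/SimpleData instead), and the lon/lat values are hypothetical, since KML requires WGS84 geographic coordinates, so UTM values would first need reprojection (e.g. with pyproj):

```python
import csv, io, textwrap

# Hypothetical field points already reprojected to WGS84 lon/lat.
raw = textwrap.dedent("""\
    lon;lat;Foto;Observacao
    -51.86;-28.85;3848-3852;Soil sampling point
    -51.84;-28.84;3853-3855;Water sampling point
    """)

placemarks = []
for row in csv.DictReader(io.StringIO(raw), delimiter=";"):
    # Each attribute becomes a Data/value pair inside ExtendedData,
    # so Google Earth shows it when the icon is clicked.
    data = "".join(
        '<Data name="{0}"><value>{1}</value></Data>'.format(k, row[k])
        for k in ("Foto", "Observacao")
    )
    placemarks.append(
        "<Placemark><ExtendedData>{0}</ExtendedData>"
        "<Point><coordinates>{1},{2}</coordinates></Point></Placemark>".format(
            data, row["lon"], row["lat"]
        )
    )

kml = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
    + "".join(placemarks)
    + "</Document></kml>"
)
# A .kmz is simply this .kml file compressed inside a zip archive.
print(kml.count("<Placemark>"))
```

The desktop workflows below achieve the same result without any code, and also handle the coordinate reprojection for you.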

Opening the CSV File in QGIS

To open our CSV file in QGIS, go to Layer > Add Layer > Add Delimited Text Layer.

In the first field, browse to the spreadsheet with your data. You can then rename the layer and specify the delimiter type (for this CSV it is the semicolon ;). Several options are presented next, letting you skip header lines and set which columns hold the X and Y coordinates.

The result is shown at the bottom of the window. See the figure below.

Window for inserting points in QGIS from a table.

Now just click OK; QGIS will ask for the coordinate system of the points. Select SIRGAS 2000 / UTM zone 22S (EPSG:31982).

Exporting to KMZ in QGIS

Finally, let's export our data to KMZ so we can present it to our team.

The newly opened CSV file will be in the Layers panel. Right-click it and select Save As.

In the window that opens, select the KML format, define where the file will be saved, the file name, and the projection system (1); then expand the option "Select fields to export and their export options" (2) and select the fields to be exported.

Fields that must be enabled to export the attribute table in QGIS.

Below these options, you can also choose which field holds the name that will appear in Google Earth (under Datasource Options, change "NameField" and "DescriptionField"). We will not use this in our tutorial.

If you enable these options (NameField and DescriptionField), only two columns can be shown in the KMZ; for the whole attribute table to appear, they must be left unset.

The result of exporting the CSV file as KMZ, viewed in Google Earth, is shown in the figure below.

When you click the icons in Google Earth, you get access to the exported attribute table.

Opening the CSV File in ArcGIS

We have already written about importing Excel files into ArcGIS, and the procedure for opening CSV files in ArcGIS is not much different.

Go to File > Add Data > Add XY Data.

In the window that opens, indicate where the CSV file is saved, the columns that hold the X and Y coordinates, and the projection system, as shown in the figure below.

Opening CSV files in ArcGIS.

After this procedure, you will have the CSV file open in ArcGIS.

Exporting to KMZ in ArcGIS

In ArcGIS, to export to KMZ we will use an ArcToolbox tool. Look under Conversion Tools > To KML > Layer To KML.

In this tool, you only need to indicate which layer should be converted and the location where the KML file will be generated, as shown in the figure below.

Converting a layer to KML in ArcGIS.

After the conversion, when you open the file in Google Earth, you can click the icons and see the attribute table.

Result of the CSV-to-KMZ export process in ArcGIS.

Note that ArcGIS used the third column (photo number) as the feature names. It also carried over the symbol style we adopted in ArcGIS.

Comparing the Results

You may have noticed that the generated tables are different, but in neither case will this harm the presentation of the data.

If you open the KML file in a text editor, you will see that the file generated by QGIS is simpler: the attribute table is added through the ExtendedData tag together with SimpleData tags.

ArcGIS, on the other hand, exports the attribute table as HTML inside the Description tag, which accepts that kind of markup, making it possible to adjust the style (such as the table's formatting and colors).

You can read more about this on the Google Developers site, in the KML documentation.

Now you can easily present and share the data collected in the field with your whole team.

Any questions? Leave them in the comments and we will answer as soon as possible.

Consulted Sources

Adding Custom Data - Keyhole Markup Language. Google Developers. Available at: <https://developers.google.com/kml/documentation/extendeddata?hl=pt-br>. Accessed 16 Sep. 2018.

GIS Stack Exchange - QGIS exporting attributes in a KML file. Available at: <https://gis.stackexchange.com/questions/136604/qgis-exporting-attributes-in-a-kml-file>. Accessed 16 Sep. 2018.

by Fernando BS at September 18, 2018 06:38 AM

September 17, 2018

2018.02.00 cover

Dear Reader,

We are pleased to announce the new release 2018.02.00 of MapStore, our flagship Open Source WebGIS product. The full list of changes for this release can be found here, but the most interesting additions are the following:

  • Dashboards: you can now create your own dashboard to mix maps with charts and other widgets, building advanced visualizations that you can save and share as you already do with maps
  • Improved Homepage: we added support for featured maps and dashboards so that you can make the most important maps and dashboards more prominent
  • Widgets: new Rich Text widget, available both for maps and dashboards
  • Upgraded OpenLayers and Leaflet to latest versions
  • Various bug fixes and performance improvements

About Dashboards

A dashboard is a new kind of resource available in MapStore: a single page that can contain maps, charts, tables, counters, legends, and other widgets to create an immersive visualization of your data that goes beyond simple mapping.

The various widgets can be connected to a map in order to filter data on the current map viewport, so that as you interact with the map the widgets adapt their content dynamically.

Connect map and widgets

Like maps, dashboards can be created, edited and shared with the other users via MapStore.

Save your own dashboards as you already do with maps

New Home Page with Featured Resources

The new home page is much better (at least that is our hope) than the previous one; first of all, we gave its look and feel a deep refresh. Administrators can now mark maps and dashboards as "featured" to make them more prominent on the home page and therefore easier to discover for end users, who can quickly find the most important content without using the search functionality.

New home page with featured maps

Below the featured content section you can find the usual list of maps in a tab, together with an additional tab that shows dashboards.

New Rich Text Widget

With the new Rich Text widget, users can add rich content to maps and dashboards, comprising images, hyperlinks, and formatted text, as shown below. Copy and paste from HTML pages is supported as well.

New rich text widget available for dashboards and maps

Future work

For the next releases we have plans to work on the following (in sparse order):

  • Integration with GeoNode
  • Integrated styler for GeoServer
  • Time slider support for WMS layers with TIME
  • Support for more general map annotations, beyond simple markers
  • Further improvements to charts and dashboards

Stay tuned for additional news on the next features!

If you are interested in learning how we can help you achieve your goals with open source products like GeoServer, MapStore, GeoNode, and GeoNetwork through our Enterprise Support Services and GeoServer Deployment Warranty offerings, feel free to contact us!

The GeoSolutions team,

by Lorenzo Natali at September 17, 2018 04:22 PM

September 16, 2018

The PostGIS development team is pleased to release PostGIS 2.5.0rc2.

Although this release will work for PostgreSQL 9.4 and above, to take full advantage of what PostGIS 2.5 offers, you should be running PostgreSQL 11beta3+ and GEOS 3.7.0 which were released recently.

Best served with PostgreSQL 11beta3.

WARNING: If compiling with PostgreSQL 11+JIT, LLVM >= 6 is required

2.5.0rc2

Changes since PostGIS 2.5.0rc1 release are as follows:

  • 4162, ST_DWithin documentation examples for storing geometry and radius in table (Darafei Praliaskouski, github user Boscop).
  • 4163, MVT: Fix resource leak when the first geometry is NULL (Raúl Marín)
  • 4172, Fix memory leak in lwgeom_offsetcurve (Raúl Marín)
  • 4164, Parse error on incorrectly nested GeoJSON input (Paul Ramsey)
  • 4176, ST_Intersects supports GEOMETRYCOLLECTION (Darafei Praliaskouski)
  • 4177, Postgres 12 disallows variable length arrays in C (Laurenz Albe)
  • 4160, Use qualified names in topology extension install (Raúl Marín)
  • 4180, installed liblwgeom includes sometimes getting used instead of source ones (Regina Obe)

View all closed tickets for 2.5.0.

After installing the binaries or after running pg_upgrade, make sure to do:

ALTER EXTENSION postgis UPDATE;

— if you use the other extensions packaged with postgis — make sure to upgrade those as well

ALTER EXTENSION postgis_sfcgal UPDATE;
ALTER EXTENSION postgis_topology UPDATE;
ALTER EXTENSION postgis_tiger_geocoder UPDATE;

If you use legacy.sql or legacy_minimal.sql, make sure to rerun the version packaged with these releases.

by Regina Obe at September 16, 2018 12:00 AM

September 12, 2018

If you are a macOS QGIS user, you are probably bothered by some macOS-specific bugs. These are due to the fact that we have fewer QGIS developers working on the macOS platform, and there are additional macOS-specific issues in the underlying Qt 5 library.

Nevertheless, we found a developer, Denis Rouzaud, who wants to specifically look into investigating and hopefully solving several of these issues. If you are a MacOS user and care about a better QGIS experience on this platform, we invite you to financially support this effort. As a private person, and for smaller amounts, please use the usual donation channel – if you are a company or organization and want to contribute to this specific effort, please consider becoming a sponsor. In any case – please add “MacOS bug fixing campaign” as a remark when donating/sponsoring or inform finance@qgis.org about your earmarked donation.

This effort runs from the 14th September 2018 until the 3.4 release date, due on October 26, 2018. See the QGIS road map for more details about our release schedule.

The specific issues that will be looked into are:

issue priority subject
11103 1 Support for retina displays (HiDPI)
17773 1 No Retina / HiDPI support in 2.99 on osx
19546 1 QGIS 3 slow on macOS at high resolutions
19524 1 [macOS] Map canvas with wrong size on QGIS 3.2.1 start up
19321 2 Map Tips on Mac doesn’t display the content correctly
19314 1 3.2 crashes on startup on a Mac
19092 2 Measure tool on a Mac uses the top right corner of the cross hair cursor instead of the centre
18943 3 QGIS Server on MacOS X High Sierra
18914 3 [macOS] Plugin list corrupted by wrongly placed checkboxes on Mac
18720 2 QGIS 3.0.1 crashes on Mac
18452 3 Snapping options missing on Mac
18418 2 Scroll zoom erratic on Mac trackpad
16575 3 QGIS 2.18.7 crashes on macOS 10.12.4 when undocking the label panel
16025 2 [macOS] Control feature rendering order will crash QGIS
3975 2 PDF exports on OSX always convert text to outlines

Thank you for considering supporting this effort! Please note that some issues may also be caused by upstream issues in the Qt library. In such cases, it cannot be guaranteed whether, and how quickly, an issue can be fixed.

Andreas Neumann, QGIS.ORG treasurer

by Andreas Neumann at September 12, 2018 08:51 AM

The PostGIS development team is pleased to provide bug fix 2.4.5 for the 2.4 stable branch.

View all closed tickets for 2.4.5.

After installing the binaries or after running pg_upgrade, make sure to do:

ALTER EXTENSION postgis UPDATE;

— if you use the other extensions packaged with postgis — make sure to upgrade those as well

ALTER EXTENSION postgis_sfcgal UPDATE;
ALTER EXTENSION postgis_topology UPDATE;
ALTER EXTENSION postgis_tiger_geocoder UPDATE;

If you use legacy.sql or legacy_minimal.sql, make sure to rerun the version packaged with these releases.

2.4.5

by Paul Ramsey at September 12, 2018 12:00 AM

September 11, 2018

The last week of August, I took three days and rode my bike from Victoria to Courtenay. It was a marvelous trip, and I got to see and stay in some wonderful towns along the way: Cowichan Bay, Duncan, Chemainus, Ladysmith, Nanaimo, Parksville, Qualicum Beach, Fanny Bay, Union Bay and Courtenay.

Active rail line has not seen a train since 2011

I also got to see a good portion of the old E&N railway line, as that line also passes through all the little towns I visited (with the exception of Cowichan Bay). It doesn’t take a trained surveyor to see that most of the railbed is in really poor condition. In many places the ties are rotting out, and you can pull spikes out of them with your bare hands. Running a train on the thing is going to take huge investments to basically rebuild the rail bed (and many of the trestles) from scratch, and the economics don’t work: revenues from freight and passenger service couldn’t even cover the operating costs of the line before it was shut down, let alone support a huge capital re-investment.

Cast bronze totem in Duncan

What to do with this invaluable right-of-way, an unobstructed ribbon of land running from Victoria to Courtenay (and beyond to Port Alberni)?

May I (and others) suggest a rail trail?

My breakfast destination in Ladysmith

Right now this chunk of land is returning nothing to the province economically. It’s actually a net drain, as municipalities spend money maintaining unused level crossings and the Island Corridor Foundation (ICF) spends federal and provincial grants to cut brush and replace the occasional tie on the never-again-to-be-used line.

Nanaimo waterfront promenade

Unlike the current ghost railway, a recreational trail would pay for itself almost immediately.

  • My first point of anecdata is my own 3-day bike excursion. Between accommodations, snacks along the way, and very tasty dinners (Maya Norte in Ladysmith and CView in Qualicum) I injected about $400 into the local economies over just two nights.
  • My second point of anecdata is an economic analysis of the Rum Runner’s Trail in Nova Scotia. The study shows annual expenditures by visitors alone of $3M per year. That doesn’t even count the economic benefit of local commuting and connection between communities.
  • My third point of anecdata is to just multiply $200 per night by three nights (decent speed) to cover the whole trail and 2000 marginal new tourists on the trail to get $1.2M direct new dollars. I find my made-up numbers are very compelling.
  • My fourth point of anecdata is the Mackenzie Interchange, currently under construction for over $70M. There is no direct economic benefit to this infrastructure, it will induce no tourist dollars and generate no long term employment.

If a Vancouver Island Rail Trail can generate even $3M in net new economic benefit for the province, it warrants at least a $50M investment to generate an ongoing 6% return. We spend more money for less return routinely (see the Mackenzie Interchange above).

No traffic on the line in Qualicum

And that’s just the tourism benefit.

Electric bikes are coming, and coming fast. A paved, continuous trail will provide another transportation alternative that is currently unavailable. Take it from me, I rode from Nanaimo to Parksville on the roaring busy Highway 19 through Nanoose: it's a terrible experience, nobody wants to do that. Cruising a paved rail trail on a quietly whirring electric bike though, that would be something else again.

Right now the E&N line is not a transportation alternative. Nor is it a tourist destination. Nor is it a railway. It’s time to put that land back to work.

September 11, 2018 04:00 PM

Need to geocode some addresses? Here’s a five-lines-of-code solution based on “An A-Z of useful Python tricks” by Peter Gleeson:

from geopy import GoogleV3
place = "Krems an der Donau"
location = GoogleV3().geocode(place)
print(location.address)
print("POINT({},{})".format(location.latitude,location.longitude))

For more info, check out geopy:

geopy is a Python 2 and 3 client for several popular geocoding web services.
geopy includes geocoder classes for the OpenStreetMap Nominatim, ESRI ArcGIS, Google Geocoding API (V3), Baidu Maps, Bing Maps API, Yandex, IGN France, GeoNames, Pelias, geocode.earth, OpenMapQuest, PickPoint, What3Words, OpenCage, SmartyStreets, GeocodeFarm, and Here geocoder services.

by underdark at September 11, 2018 09:21 AM

Geoprocessing is understood as the whole set of techniques and technologies capable of collecting and processing georeferenced information, aimed at developing new applications such as Geographic Information Systems (GIS).

GIS makes complex analyses possible, since it integrates data from multiple sources and builds georeferenced databases.

Geoprocessing has become an important ally in data collection, problem diagnosis, decision making, and urban planning, among other tasks.

Aerial drone photo of the town of Les Vans (France), by Nicolas Van Leekwijck on Unsplash.

Today it is used in practically every field, whether to include a location map of a given study area or to draw up work plans in municipal administration.

With that in mind, we have selected 10 (ten) geoprocessing blogs that will help you produce your maps.

Anderson Medeiros

Link: http://www.andersonmedeiros.com/

The blog is maintained by Anderson Medeiros, a Geoprocessing graduate who has been sharing his knowledge on social media since 2005 through courses, talks, video lessons, and tutorials on the free software QGIS, among other things.

Fernando Quadro

Link: http://www.fernandoquadro.com.br/html/

Created in 2007, the blog is maintained by Fernando Quadro. Fernando holds a degree in Computer Science and a postgraduate degree (MBA) in Project Management from the Universidade do Vale do Itajaí (UNIVALI).

He currently works as a systems analyst and consultant in open source geotechnologies, and uses the blog to share articles, tutorials, and other content related to geoprocessing.

  • GeoServer: In this article, Fernando explains what GeoServer is and what it can do;
  • PostGIS: In this article he explains a bit about PostGIS, its origin, and its features.

Geografia e Cartografia Digital

Link: https://geocartografiadigital.blogspot.com/

The blog has been online since 2013 and belongs to Luiz Henrique Almeida Gusmão, a Geography graduate from the Universidade Federal do Pará (UFPA).

Besides providing cartography and geoprocessing consulting to academics and researchers through map production, he shares a variety of geoprocessing-related materials (such as books).

GEOSABER

Link: https://www.geosaber.com.br/

The blog has been maintained by the Geosaber group since 2007, and since then has provided courses, tutorials, articles, and services in the geoprocessing field.

  • Python Programming Free Books: This article covers the book Python in Hydrology, written by Sat Kumar Tomer;
  • Mapas com R: This article shows how to produce statistical and dynamic maps in R using the inlmisc package.

Geotecnologias – Luís Lopes

Link: http://www.geoluislopes.com/

The blog has been active since 2008 and is maintained by Luís Lopes, who shares and publishes articles and tutorials on QGIS, TerraView, ArcGIS, and other programs.

Murilo Cardoso

Link: http://murilocardoso.com/

The blog is maintained by geographer Murilo Cardoso, focusing on teaching Geography through articles, tutorials, and videos about programs such as QGIS and ArcGIS, in various formats.

Processamento Digital

Link: http://www.processamentodigital.com.br/

It is a geospatial content channel maintained by Hex Tecnologias Geoespaciais. On the channel you will find tutorials, news, and other content in the geoprocessing field.

GeoBrainStorms

Link: https://geobrainstorms.wordpress.com/

The blog is written by several professionals and offers tutorials and articles on database handling, remote sensing, GIS, and geoprocessing development based on different software packages.

iDea Plus Geo

Link: http://geo.ideaplus.com.br/

This blog features articles and tutorials about the gvSIG software.

Blog Geoprocessamento

Link: http://bloggeoprocessamento.blogspot.com/

The blog presents information and overviews of geoprocessing (CAD, GIS and remote sensing), among other topics.

[+ BONUS]

arcOrama: French-language blog dedicated to ESRI technologies (i.e. ArcGIS).

Topi Tjukanov: Finnish geographer who shares much of his QGIS work on Twitter.

Blog SIG & Territoires: Model Builder: converting a batch model into an interactive model.

Anita Graser: Author of books on QGIS, Anita publishes numerous tutorials on her site.

Jesse Sadler: Has several posts on using R to build maps.

Did we miss your blog? Mention it in the comments so we can visit it too.

[Also check out the 10 Engineering Blogs Every Engineer Should Follow]

by Émilin CS at September 11, 2018 06:05 AM

September 10, 2018

Dear reader,

You have probably run into the following problem: filtering features from different layers using WFS.

As we know, CQL does not let us JOIN layers. Whenever a JOIN of some information is needed, we have to create a view and, from that view, a layer.

Today, however, I will show a way to filter data from more than one layer in a single WFS request. For this we will use the data that ships with GeoServer by default.

Applying a single CQL filter to a WFS layer is simple. We have the topp:tasmania_water_bodies layer published and we want the features whose area is greater than 1,066,494,066 square meters, so we can make the following request to the server:

https://demo.geo-solutions.it/geoserver/wfs?SERVICE=WFS&VERSION=2.0.0&REQUEST=GetFeature&typeNames=topp:tasmania_water_bodies&propertyName=&cql_filter=AREA>1066494066&outputFormat=application/json

Easy so far! We get 4 features that satisfy the filter. The syntax is clear:

typeNames=topp:tasmania_water_bodies&cql_filter=AREA>1066494066

But how should we proceed if we want to fetch features from two or more layers at the same time?

To fetch the features of the topp:tasmania_roads layer together with the features filtered earlier from the topp:tasmania_water_bodies layer, we use a similar request but separate the typeNames like this:

typeNames=(topp:tasmania_water_bodies)(topp:tasmania_roads)

And set the associated CQL filters, ordered to match the typeNames:

cql_filter=AREA<1066494066;TYPE='alley'
https://demo.geo-solutions.it/geoserver/wfs?SERVICE=WFS&VERSION=2.0.0&REQUEST=GetFeature&typeNames=(topp:tasmania_water_bodies)(topp:tasmania_roads)&propertyName=&cql_filter=AREA<1066494066;TYPE='alley'&outputFormat=application/json

What about propertyName? Using the propertyName parameter in the WFS request, we can select which feature properties are returned. This parameter is important for reducing the size of the response, fetching only the properties we are interested in.

In our example, we can fetch only CNTRY_NAME from topp:tasmania_water_bodies and TYPE from topp:tasmania_roads.

In this case, the request will be:

https://demo.geo-solutions.it/geoserver/wfs?SERVICE=WFS&VERSION=2.0.0&REQUEST=GetFeature&typeNames=(topp:tasmania_water_bodies)(topp:tasmania_roads)&propertyName=(CNTRY_NAME)(TYPE)&cql_filter=AREA%3C1066494066;TYPE=%27alley%27&outputFormat=application/json

This way you can filter the topp:tasmania_water_bodies and topp:tasmania_roads layers in a single request.

Source: Blog Geomatico

by Fernando Quadro at September 10, 2018 07:41 PM

austrocontrol

Dear Reader,

In this post we would like to cover the work we have performed recently for Austro Control GmbH.

Austro Control is one of Europe’s leading air traffic control organisations. There are up to 3,500 controlled flights in Austrian airspace on some days. With a workforce of about 1,000, Austro Control’s key task is maintaining safe, punctual, efficient and environmentally friendly air traffic round the clock, 365 days a year, with over 1 million flight movements in Austrian airspace.

Austro Control is a limited company (plc) owned by the Austrian government that grew from the Federal Office of Civil Aviation (corporatised on 1 January 1994).

Austro Control has two main functions, performed by separate divisions. The Air Navigation Services Division largely comprises operational functions, while the Aviation Agency is responsible for regulatory matters.

An enterprise SDI based on Open Source

As part of their effort to create their own spatial data infrastructure based on Open Source to ingest, manage and disseminate aeronautical information to their stakeholders, they have selected MapStore as their web mapping client of choice. It is also worth pointing out that MapStore is in good company, since Austro Control GmbH also uses GeoServer and GeoNetwork for data and metadata management.

[caption id="attachment_4339" align="aligncenter" width="800"]Austro Control SDI Austro Control SDI[/caption]

Whilst Austro Control GmbH was satisfied with many of the functionalities offered by MapStore, they decided to fund the implementation of some specific new functionalities as well as enhancements to existing ones (these contributions will be part of the next, upcoming MapStore release). In the following sections of this post we briefly cover the most important ones.

Import/Export of Maps and Data

This is about the ability for all users to export map definitions and then reload them or send them to someone else for sharing. 

Anonymous users are therefore now able to export map settings locally to a file, which can be saved for later reuse; this includes the status of collapsed/open parts of the hierarchical legend, as shown below. It is also possible to load Shapefiles, GeoJSON and KML/KMZ files and style them directly on the map, as shown below.

[caption id="attachment_4344" align="aligncenter" width="800"]Map Export at work Map Export at work[/caption] [caption id="attachment_4342" align="aligncenter" width="800"]Map & Data Import GUI Map & Data Import GUI[/caption] [caption id="attachment_4340" align="aligncenter" width="801"]Importing and styling as marker a Shapefile for populated places from Natural Earth Importing and styling as marker a Shapefile for populated places from Natural Earth[/caption]

Improved Map Annotations

Users are now able to create, modify, remove, store locally and upload sophisticated map annotations also known as redlining data. The annotations can be of type point, line, polygon, circle and text and are not stored on the server (this will be implemented in a future version) but are transient on the map with the possibility to export to a local storage and re-upload later on.

[caption id="attachment_4346" align="aligncenter" width="800"]Annotations manager on a sample set of annotations showing the various types of annotations elements allowed Annotations manager on a sample set of annotations showing the various types of annotations elements allowed[/caption]

The ability to define a symbology for the annotations (colour, line thickness, point symbols, line/area style, text font, etc.) is also supported as shown below.

[caption id="attachment_4347" align="aligncenter" width="801"]Styling a circle annotation Styling a circle annotation[/caption]

It is also possible to place annotation elements (point, line, polygon, circle, text) by entering coordinates via the keyboard: locations/points of elements can be typed into a text field, with coordinates accepted either as decimal degrees or in the aeronautical degree format ("NDD MM SS.xxxx, EDDD MM SS.xxxx").
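As an illustration of that coordinate format, here is a small hypothetical Python parser (not MapStore's actual code) converting the aeronautical degree notation to decimal degrees:

```python
import re

def parse_aeronautical(text):
    """Convert 'NDD MM SS.xxxx, EDDD MM SS.xxxx' to (lat, lon) in decimal degrees."""
    pattern = (
        r"([NS])(\d{2}) (\d{2}) (\d{2}(?:\.\d+)?),\s*"
        r"([EW])(\d{3}) (\d{2}) (\d{2}(?:\.\d+)?)"
    )
    m = re.fullmatch(pattern, text.strip())
    if not m:
        raise ValueError("not in 'NDD MM SS.xxxx, EDDD MM SS.xxxx' form")
    # Degrees + minutes/60 + seconds/3600, negated for S and W hemispheres.
    lat = int(m.group(2)) + int(m.group(3)) / 60 + float(m.group(4)) / 3600
    if m.group(1) == "S":
        lat = -lat
    lon = int(m.group(6)) + int(m.group(7)) / 60 + float(m.group(8)) / 3600
    if m.group(5) == "W":
        lon = -lon
    return lat, lon

print(parse_aeronautical("N48 07 06.0000, E016 34 12.0000"))
```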

[caption id="attachment_4348" align="aligncenter" width="801"]Precise annotation geolocation Precise annotation geolocation[/caption]

The ability to print annotations and imported vectorial data is also supported, further improvements to the print tool in MapStore have been provided to properly preserve vector feature styles too.  

[caption id="attachment_4349" align="aligncenter" width="801"]Printing of imported KML files and annotations Printing of imported KML files and annotations[/caption]

Support for AIRAC Cycles for publication dates in irregular time lags

MapStore still lacks a proper widget to manage time-enabled layers from WMS and WMTS (stay tuned, we are working on that as well), but this was an important missing feature, since Austro Control GmbH needs to update data according to the Aeronautical Information Regulation And Control (AIRAC) cycle. The requirement here was therefore to create a widget displaying a calendar where the user can select a specific date to control WMS TIME-enabled layers, or be presented with a predefined list of dates.

[caption id="attachment_4350" align="aligncenter" width="801"]Selecting dates from a predefined list Selecting dates from a predefined list[/caption] [caption id="attachment_4351" align="aligncenter" width="800"]Free selection from a calendar where existing time slots are highlighted Free selection from a calendar where existing time slots are highlighted[/caption]

The date selection applies to all layers that support the time dimension, and we also give users the option to show or hide layers without a time dimension. Notice that only the administrator is able to configure a list of special, named dates to be highlighted in the calendar component, as shown before.

Conclusions

Other enhancements were performed as part of this work and will land in MapStore soon (e.g. the ability to measure on the map in nautical miles). Finally, we would like to thank Austro Control GmbH for their patience when dealing with us and for their foresight in embracing the Open Source philosophy.

Last but not least, if you are interested in learning how we can help you achieve your goals with open source products like GeoServer, MapStore, GeoNode and GeoNetwork through our Enterprise Support Services and GeoServer Deployment Warranty offerings, feel free to contact us!

The GeoSolutions Team,


by simone giannecchini at September 10, 2018 04:06 PM

A little under a year ago, with the release of PostgreSQL 10, I evaluated the parallel query infrastructure and how well PostGIS worked with it.

The results were less than stellar for my example data, which was small-but-not-too-small: under default settings of PostgreSQL and PostGIS, parallel behaviour did not occur.

However, unlike in previous years, as of PostgreSQL 10 it was possible to get parallel plans by making changes to PostGIS settings only. This was a big improvement over PostgreSQL 9.6, in which substantial changes to the PostgreSQL default settings were needed to force parallel plans.

PostgreSQL 11 promises more improvements to parallel query:

  • Parallelized hash joins
  • Parallelized CREATE INDEX for B-tree indexes
  • Parallelized CREATE TABLE .. AS, CREATE MATERIALIZED VIEW, and certain queries using UNION

With the exception of CREATE TABLE ... AS, none of these will affect spatial parallel query. However, there have also been some non-headline changes that have improved parallel planning and thus spatial queries.

Parallel PostGIS and PgSQL 11

TL;DR:

PostgreSQL 11 has slightly improved parallel spatial query:

  • Costly spatial functions on the query target list (aka, the SELECT ... line) will now trigger a parallel plan.
  • Under default PostGIS costings, parallel plans do not kick in as soon as they should.
  • Parallel aggregates parallelize readily under default settings.
  • Parallel spatial joins require higher costings on functions than they probably should, but will kick in if the costings are high enough.

Setup

In order to run these tests yourself, you will need:

  • PostgreSQL 11
  • PostGIS 2.5

You’ll also need a multi-core computer to see actual performance changes. I used a 4-core desktop for my tests, so I could expect 4x improvements at best.

The setup instructions show where to download the Canadian polling division data used for the testing:

  • pd a table of ~70K polygons
  • pts a table of ~70K points
  • pts_10 a table of ~700K points
  • pts_100 a table of ~7M points

PDs

We will work with the default configuration parameters and just mess with the max_parallel_workers_per_gather at run-time to turn parallelism on and off for comparison purposes.

When max_parallel_workers_per_gather is set to 0, parallel plans are not an option.

  • max_parallel_workers_per_gather sets the maximum number of workers that can be started by a single Gather or Gather Merge node. Setting this value to 0 disables parallel query execution. Default 2.

Before running tests, make sure you have a handle on what your parameters are set to: I frequently found I accidentally tested with max_parallel_workers set to 1, which will result in two processes working: the leader process (which does real work when it is not coordinating) and one worker.

show max_worker_processes;
show max_parallel_workers;
show max_parallel_workers_per_gather;

Aggregates

Behaviour for aggregate queries is still good, as seen in PostgreSQL 10 last year.

SET max_parallel_workers = 8;
SET max_parallel_workers_per_gather = 4;

EXPLAIN ANALYZE 
  SELECT Sum(ST_Area(geom)) 
    FROM pd;

Boom! We get a 3-worker parallel plan and execution about 3x faster than the sequential plan.

Scans

The simplest spatial parallel scan adds a spatial function to the target list or filter clause.

SET max_parallel_workers = 8;
SET max_parallel_workers_per_gather = 4;

EXPLAIN ANALYZE 
  SELECT ST_Area(geom)
    FROM pd; 

Unfortunately, that does not give us a parallel plan.

The ST_Area() function is defined with a COST of 10. If we move it up, to 100, we can get a parallel plan.

SET max_parallel_workers_per_gather = 4;

ALTER FUNCTION ST_Area(geometry) COST 100;

EXPLAIN ANALYZE 
  SELECT ST_Area(geom)
    FROM pd 

Boom! Parallel scan with three workers. This is an improvement from PostgreSQL 10, where a spatial function on the target list would not trigger a parallel plan at any cost.

Joins

Starting with a simple join of all the polygons to the 100 points-per-polygon table, we get:

SET max_parallel_workers_per_gather = 4;

EXPLAIN  
 SELECT *
  FROM pd 
  JOIN pts_100 pts
  ON ST_Intersects(pd.geom, pts.geom);

PDs & Points

In order to give the PostgreSQL planner a fair chance, I started with the largest table, thinking that the planner would recognize that a “70K rows against 7M rows” join could use some parallel love, but no dice:

Nested Loop  
(cost=0.41..13555950.61 rows=1718613817 width=2594)
 ->  Seq Scan on pd  
     (cost=0.00..14271.34 rows=69534 width=2554)
 ->  Index Scan using pts_gix on pts  
     (cost=0.41..192.43 rows=232 width=40)
       Index Cond: (pd.geom && geom)
       Filter: _st_intersects(pd.geom, geom)

As with all parallel plans, it is a nested loop, but that’s fine since all PostGIS joins are nested loops.

First, note that our query can be re-written like this, to expose the components of the spatial join:

EXPLAIN  
 SELECT *
  FROM pd 
  JOIN pts_100 pts
   ON pd.geom && pts.geom 
   AND _ST_Intersects(pd.geom, pts.geom);

The default cost of _ST_Intersects() is 100. If we adjust it up by a factor of 100, we can get a parallel plan.

ALTER FUNCTION _ST_Intersects(geometry, geometry) COST 10000;

Can we achieve the same effect by adjusting the cost of the && operator? The && operator could activate one of two functions:

  • geometry_overlaps(geom, geom) is bound to the && operator
  • geometry_gist_consistent_2d(internal, geometry, int4) is bound to the 2d spatial index

However, no amount of increasing their COST causes the operator-only query plan to flip into a parallel mode:

ALTER FUNCTION  geometry_overlaps(geometry, geometry) COST 1000000000000;
ALTER FUNCTION  geometry_gist_consistent_2d(internal, geometry, int4) COST 10000000000000;

So for operator-only queries, it seems the only way to force a parallel spatial join is to muck with the parallel_tuple_cost parameter.

Costing PostGIS?

A relatively simple way to push more parallel behaviour out to the PostGIS user community would be applying a global increase of PostGIS function costs. Unfortunately, doing so has knock-on effects that will break other use cases badly.

In brief, PostGIS uses wrapper functions, like ST_Intersects() to hide the index operators that speed up queries. So a query that looks like this:

SELECT ...
FROM ...
WHERE ST_Intersects(A, B)

Will be expanded by PostgreSQL “inlining” to look like this:

SELECT ...
FROM ...
WHERE A && B AND _ST_Intersects(A, B)

The expanded version includes both an index operator (for a fast, loose evaluation of the filter) and an exact operator (for an expensive and correct evaluation of the filter).

If the arguments “A” and “B” are both geometry, this will always work fine. But if one of the arguments is a highly costed function, then PostgreSQL will no longer inline the function. The index operator will then be hidden from the planner, and index scans will not come into play. PostGIS performance falls apart.

This isn’t unique to PostGIS, it’s just a side effect of some old code in PostgreSQL, and it can be replicated using PostgreSQL built-in types too.

It is possible to change the current inlining behaviour with a very small patch, but that behaviour is useful for people who want to use SQL wrapper functions as a means of caching expensive calculations. So "fixing" the behaviour for PostGIS would break it for some non-empty set of existing PostgreSQL users.

Tom Lane and Andreas Freund briefly discussed a solution involving a smarter approach to inlining that would preserve the ability to inline while avoiding double work when inlining expensive functions, but the discussion petered out after that.

As it stands, PostGIS functions cannot be properly costed to take maximum advantage of parallelism until PostgreSQL inlining behaviour is made more tolerant of costly parameters.

Conclusions

  • PostgreSQL seems to weight declared cost of functions relatively low in the priority of factors that might trigger parallel behaviour.

    • In sequential scans, costs of 100+ are required.
    • In joins, costs of 10000+ are required. This is suspicious (100x more than scan costs?) and, even with fixes in function costing, probably not desirable.
  • Required changes in PostGIS costs for improved parallelism will break other aspects of PostGIS behaviour until changes are made to PostgreSQL inlining behaviour…

September 10, 2018 04:00 PM

The Colegio de Ingenieros Técnicos de Obras Públicas e Ingenieros Civiles de Extremadura, the Escuela Politécnica de Cáceres of the University of Extremadura, the CIVILEX academy and the gvSIG Association are organising a free event on open-source geomatics in Cáceres (Spain) on September 20, featuring the talk "Geolocating information technologies: the opportunities geomatics offers for modernising management" and another on the gvSIG Suite, entitled "gvSIG Suite: open-source geomatics solutions for mobility, spatial data infrastructures and corporate GIS. Success stories".

There will also be two free workshops, one introducing the gvSIG Desktop application and another on scripting in gvSIG.

The event will be held in the assembly hall of the Escuela Politécnica de Cáceres at the University of Extremadura.

The full schedule, the registration form and all information about the event can be found at the following link.

by Mario at September 10, 2018 11:30 AM

We continue our review of the workshops to be given during the 14th International gvSIG Conference, which will take place in Valencia (Spain) from October 24 to 26. Another of the workshops will be gvSIG Desktop applied to the Environment.


People working on environment-related topics face problems every day that can only be solved from a spatial point of view. In this workshop we will learn to use gvSIG and to apply its editing tools and geoprocesses to real cases such as crop management, land use, protected areas, etc.

In the coming weeks this same post will provide all the information on how to register for this free workshop, as well as the cartography to download in order to follow it.

In a few days more information will also be given about the day and time it will be held.

If you wish to attend the conference, remember that the registration period is already open; registration can be completed through the form provided for it.

Don't miss this workshop!

by Alonso Morilla at September 10, 2018 09:23 AM

September 09, 2018

Sales & Marketing 101 workshop Lydia had fun developing her Value Proposition

It’s a year since Marc and I ran the first S&M101 workshop at FOSS4G Boston; since then I have run two more workshops, at FOSS4G Europe in Guimarães and FOSS4G 2018 in Dar es Salaam. In total 36 people have now attended the workshop, which is amazing to me.

The two most recent workshops were fitted into a 4.5-hour slot rather than a whole day, and that forced me to focus on the elements of the workshop that I felt had the most impact and to tighten up the delivery. In Dar there were 15 delegates and we managed to finish only 5 minutes over time!

I wanted to share a brilliant summary of the key points of the workshop from Emmanuel Ng’wandu, a delegate in Dar:

“If you can count to five you will master the laws of sales and marketing.”

  1. Wear the customer’s shoes
  2. Find the pains (and the gains)
  3. 26 words
  4. The funnel of questions from open to closed
  5. Treat rejection like the rhinoceros (let it bounce off you and just carry on)
Sales & Marketing 101 workshop Hard at work on their Value Propositions

The feedback from the delegates has been very positive and I am looking forward to hearing more of their successes using the tools that I taught. If you want to find out more about the Sales & Marketing 101 workshop and how it can help you tighten up your value proposition, get in touch.

A wall of value propositions A wall of Value Proposition Canvases

by Steven at September 09, 2018 02:15 PM

September 08, 2018

pgAdmin4 version 3.3, released this week, comes with a PostGIS geometry viewer. You will be able to see the graphical output of your query directly in pgAdmin, provided you output a geometry or geography column. If your column is of SRID 4326 (WGS 84 lon/lat), pgAdmin will automatically display it against an OpenStreetMap background.

We have Xuri Gong to thank for working on this as a PostGIS/pgAdmin Google Summer of Code (GSOC) project. We'd like to thank Victoria Rautenbach and Frikan Erwee for mentoring.

Continue reading "pgAdmin4 now offers PostGIS geometry viewer"

by Regina Obe (nospam@example.com) at September 08, 2018 09:13 PM

September 06, 2018

We have introduced a Child Friendly policy at the FOSS4G-Oceania conference as part of our "diversity" focus. We are quite proud of it, and think we have reached an effective balance between the competing priorities of:

  • High cost of dedicated childcare.
  • Variability and unpredictability of different aged children.
  • Retaining a professional conference environment.
  • Respecting needs of children, parents, and conference participants.
  • Making our conference accessible to a diverse audience, which includes those responsible for children.
Hopefully other conferences will draw inspiration from our work and embrace something similar. This is what we have come up with:

Child Friendly Conference

A child-friendly FOSS4G SotM event

FOSS4G SotM Oceania is aiming to be a child-friendly conference. This means we encourage parents to bring their children (still young enough to require minding) to the event, and for those children to be able to integrate with the community.

How does this work?

The conference Code of Conduct describes expected behaviour for everyone at the conference: https://foss4g-oceania.org/conference/code-conduct

Here are some additional principles for considering children.

As parents, we all know that it is impossible to expect our children to sit quietly for any length of time unless they are engaged with whatever is going on. As such, we request that parents:

  • Be prepared to move in and out of sessions as required by their children.
  • Occupy seats near exits in sessions.
  • Self-organise with other parents to manage children if there is a must see session or talk.
  • Be aware of the needs of other conference attendees, who have also paid to turn up and hear people speak.
  • Extend the principles of the conference Code of Conduct to children, both yours and others. Treat them respectfully, and avoid corporal punishment.
In kind, we request that everyone:
  • Be patient with children. They are our future leaders and learn by modelling what we do.
  • Be patient with parents. Children are not robots and don’t have ‘quiet now’ buttons.
  • Leave seats near exits free, for use by parents and children (see above - they may need to make quick exits)
  • Assist. If you see a child looking lost or distressed, get down to their level and ask them how you can help. Find a conference volunteer to hand over to.
Remember parents are doing the best they can. At the conference, if you have issues with children or parents, please contact the diversity team (as per the Code of Conduct).

If you feel that your child will not disrupt your presentation, you are welcome to bring them on stage (if they want to). Please organise this with your child ahead of time, so that you can arrange a carer or some other way of occupying your child while you speak.

For parents wanting tools to assist with helping their children get through a conference, there is a wealth of material that may be useful here: https://www.handinhandparenting.org/blog/.

What does it cost to bring children?

Children can attend for free. The catering cost is covered by the Good Mojo program. Please let us know as soon as you can if you are bringing your child(ren) so we can adjust numbers appropriately.

We all know that children can eat in amazing disproportion to their physical size, and we will cater for them as adults.

What facilities are available for parents?

The conference will aim to provide a ‘chill space’ - a refuge for parents and children at the venue. The University of Melbourne has fantastic outdoor spaces (please use them!); but Melbourne weather is not always friendly.

We are also looking into a specific space for breast pumping/feeding - however, breastfeeding mums are welcome to feed wherever they feel comfortable doing so (with respect also to the needs of your child).

While we will do our best to provide spaces away from the buzz, this is at present a work in progress - look out for updates!

Please use the foss4g-oceania-discuss e-mail list or join us on the Maptime Australia Slack to discuss with the FOSS4G organisers and coordinate with other conference parents. (Many of the organisers are also parents)


--//--

Special kudos goes to Adam Steer who has been the primary driver behind getting this initiative off the ground.


by Cameron Shorter (noreply@blogger.com) at September 06, 2018 08:20 PM

At the beginning there was a chat with @ortelius outside the Namenlos bar at FOSS4G in Bonn. Jef insisted on doing a benchmark of MVT server implementations at the next FOSS4G. With my experience from the FOSS4G WMS benchmarks, where I supported Marco Hugentobler on the QGIS team, I knew how much time such an exercise takes, and I was rather sceptical.

Two years later I was at a point in the development of t-rex, my own vector tile server, where I needed just this benchmark. I was replacing the built-in web server with a new asynchronous high-performance web server, but wanted to measure the performance win of this and other planned improvements. So I decided to implement an MVT benchmark before starting any performance tuning sessions. The dataset I wanted to use was Natural Earth, which has a decent size for a worldwide dataset. A few weeks earlier, Gretchen Peterson had blogged about a Natural Earth MVT style she had done for Tegola, Jef's tile server contestant.

natural-earth-quickstart

I got Gretchen's permission to use it, and so I started implementing the same map with t-rex. I gave myself 5 days for benchmarking and refactoring t-rex, but at the end of day one I had only about 5 of 37 layers finished. So I postponed my plan for a complete benchmark and used the partially finished tileset for my further work. My idea of performance tuning was to run a profiling tool on my code and look into the hot spots. But the map was so slow in its current state that it was clear some database operations had to be the reason. Instead of starting to profile code, I added a drilldown functionality to t-rex first. This command returns statistics for all tile levels at a set of given positions.

The returned numbers revealed a very slow layer called ne_10m_ocean, which turned out to be a single multipolygon covering the whole earth. So step two wasn't about code tuning either, but about dividing polygons into pieces that can make better use of a spatial index. Remembering Paul Ramsey mentioning ST_Subdivide for exactly this case, I threw some SQL at the PostGIS table:

CREATE TABLE ne_10m_ocean_subdivided AS SELECT ST_Subdivide(wkb_geometry)::geometry(Polygon,3857) AS wkb_geometry, featurecla, min_zoom FROM ne_10m_ocean

The query for using the table with t-rex is a bit more complicated:

SELECT ST_Multi(ST_Union(wkb_geometry)) AS wkb_geometry, featurecla, min_zoom FROM ne_10m_ocean_subdivided WHERE min_zoom::integer <= !zoom! AND wkb_geometry && !bbox! GROUP BY featurecla, min_zoom

And the result? The creation of a vector tile went from several seconds to far below one second. Lesson learned: before tuning anything else, tune your data!

The rest of the week I integrated the shiny new web server into t-rex, which ended up (together with other new features like Fake-Mercator support) in the major new release 0.9.

In the meantime, my colleague Hussein finished the Natural Earth map, but I decided to relabel it from a benchmark to a t-rex example project. While completing the map I remembered a simpler map based on Natural Earth data, which we used for some exercises in our vector tile courses. The style needs only 5 vector tile layers and is therefore much easier to implement. I simplified this style from Petr Pridal's MapBox GL JS offline example a bit, resulting in a map which still looks quite pretty:

mvt-benchmark-style

So I had a new base for an MVT benchmark, which was enough motivation to finish this project before FOSS4G Dar es Salaam. An important goal was to take as much work as possible away from other projects implementing this benchmark, so even starting the benchmark database is a single Docker command. During the conference I talked to several developers and power users of other vector tile servers to give them a first-hand introduction.

So finally, here is the MVT benchmark, ready for contributions. Have a look at it and let’s revive the benchmark tradition at FOSS4G 2019 in Bucharest!

Pirmin Kalberer (@implgeo)

September 06, 2018 07:22 PM

Uma das coisas interessantes do processamento geoespacial é a variedade de ferramentas, e as maneiras de colocá-las juntas podem produzir resultados surpreendentes.

Um membro da comunidade na lista de usuários do PostGIS perguntou: “Existe uma maneira de dividir um polígono em subpolígonos de áreas mais ou menos iguais?”

O desenvolvedor do PostGIS, Darafei Praliaskouski, respondeu e forneceu uma solução funcional que é absolutamente brilhante ao combinar as partes do kit de ferramentas do PostGIS para resolver um problema bastante complicado. Ele disse:

The way I see it, for any kind of polygon:

  • Convert the polygon into a set of points proportional to the area using ST_GeneratePoints (the more points, the prettier it will be; I guess 1000 is OK);
  • Decide how many parts you would like to split it into (ST_Area(geom) / max_area); let it be K;
  • Cluster the points into K clusters with ST_ClusterKMeans;
  • For each cluster, take ST_Centroid(ST_Collect(point));
  • Feed those centroids into ST_VoronoiPolygons, which will give you a mask for each part of the polygon;
  • With ST_Intersection of the original polygon and each cell of the Voronoi polygons you will get a nice split of your polygon into K parts.

Let’s go one step at a time to see how it works.

We will use Peru as the example polygon; it has a nice concavity, which makes it a bit trickier than an “ordinarily shaped” polygon.

CREATE TABLE peru AS 
  SELECT *
  FROM countries
  WHERE name = 'Peru';

Now create a point field that fills the polygon. On average, each randomly placed point ends up “occupying” an equal area within the polygon.

CREATE TABLE peru_pts AS
  SELECT (ST_Dump(ST_GeneratePoints(geom, 2000))).geom AS geom
  FROM peru
  WHERE name = 'Peru';

Now cluster the point field, setting the number of clusters to the number of pieces you want to split the polygon into. Visually, you can now see the divisions in the polygon! But we still need to get actual lines to represent those divisions.

CREATE TABLE peru_pts_clustered AS
  SELECT geom, ST_ClusterKMeans(geom, 10) over () AS cluster
  FROM peru_pts;

First, calculate the centroid of each cluster of points, which will be the center of mass of each cluster.

CREATE TABLE peru_centers AS
  SELECT cluster, ST_Centroid(ST_collect(geom)) AS geom
  FROM peru_pts_clustered
  GROUP BY cluster;

Now, use a Voronoi diagram to get actual dividing edges between the cluster centroids, which end up closely matching the places where the clusters divide!

CREATE TABLE peru_voronoi AS
  SELECT (ST_Dump(ST_VoronoiPolygons(ST_collect(geom)))).geom AS geom
  FROM peru_centers;

Finally, intersect the Voronoi areas with the original polygon to get the final output polygons, which incorporate both the outer edges and the dividing lines.

CREATE TABLE peru_divided AS
  SELECT ST_Intersection(a.geom, b.geom) AS geom
  FROM peru a
  CROSS JOIN peru_voronoi b;
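For reference, the individual steps can also be chained into a single statement using CTEs; this is a sketch (the output table name is my own choice) assuming the same peru table and the same point and cluster counts used above:

```sql
CREATE TABLE peru_divided_cte AS
WITH pts AS (
  -- 2000 random points, evenly distributed by area
  SELECT (ST_Dump(ST_GeneratePoints(geom, 2000))).geom AS geom
  FROM peru
),
clustered AS (
  -- assign each point to one of 10 clusters
  SELECT geom, ST_ClusterKMeans(geom, 10) OVER () AS cluster
  FROM pts
),
centers AS (
  -- one centroid per cluster
  SELECT cluster, ST_Centroid(ST_Collect(geom)) AS geom
  FROM clustered
  GROUP BY cluster
),
voronoi AS (
  -- Voronoi cells around the centroids form the split mask
  SELECT (ST_Dump(ST_VoronoiPolygons(ST_Collect(geom)))).geom AS geom
  FROM centers
)
SELECT ST_Intersection(p.geom, v.geom) AS geom
FROM peru p
CROSS JOIN voronoi v;
```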

Done!

Clustering a point field to get approximately equal areas, and then using Voronoi to extract actual dividing lines, are marvelous insights into spatial processing. The final image of all the components of the calculation together is also beautiful.

I’m not 100% sure, but it may be possible to use Darafei’s technique for even more interesting subdivisions, such as a “map of Brazil subdivided into areas of equal GDP”, or a “map of São Paulo subdivided into areas of equal population”, by generating the initial point field using an economic or demographic weighting.

This text was translated and adapted from Paul Ramsey’s original post on the Clever Elephant blog.

Source: Clever Elephant blog

by Fernando Quadro at September 06, 2018 02:15 PM

Our problem: get multiple features, filtered, from multiple different layers using WFS.

Let’s work with the topp data that comes by default with GeoServer. A matter of taste.

Applying a single CQL filter to a WFS layer is simple. We have the “unknown” ;-) topp:tasmania_water_bodies published on a demo GeoServer (thanks GeoSolutions ;-)) and we want to get the features whose area is less than 1066494066 square meters, so we can make the following request to the server

https://demo.geo-solutions.it/geoserver/wfs?SERVICE=WFS&VERSION=2.0.0&REQUEST=GetFeature&typeNames=topp:tasmania_water_bodies&propertyName=&cql_filter=AREA<1066494066&outputFormat=application/json

Easy! We’ll get the 4 features that satisfy the filter. The syntax is clear: we must use typeNames=topp:tasmania_water_bodies and cql_filter=AREA<1066494066 in the request.

But what must we do if we want to get features from two or more layers at the same time?

If we want to get, at the same time, the features of type Alley from the topp:tasmania_roads layer together with the topp:tasmania_water_bodies features filtered before, we must use a similar request, but separating the typeNames in this way:

typeNames=(topp:tasmania_water_bodies)(topp:tasmania_roads)

and setting the associated CQL filters in the same order as the typeNames:

cql_filter=AREA<1066494066;TYPE='alley'

https://demo.geo-solutions.it/geoserver/wfs?SERVICE=WFS&VERSION=2.0.0&REQUEST=GetFeature&typeNames=(topp:tasmania_water_bodies)(topp:tasmania_roads)&propertyName=&cql_filter=AREA<1066494066;TYPE='alley'&outputFormat=application/json

And what about propertyName? Using the propertyName parameter in the WFS request we can restrict which feature properties are returned. This parameter is important to reduce the response size, getting only the properties we are interested in.

In our example, we can get only the CNTRY_NAME of topp:tasmania_water_bodies and the TYPE of topp:tasmania_roads.

In this case the request will be:

https://demo.geo-solutions.it/geoserver/wfs?SERVICE=WFS&VERSION=2.0.0&REQUEST=GetFeature&typeNames=(topp:tasmania_water_bodies)(topp:tasmania_roads)&propertyName=(CNTRY_NAME)(TYPE)&cql_filter=AREA%3C1066494066;TYPE=%27alley%27&outputFormat=application/json
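Such a request can also be built programmatically; here is a minimal sketch using Python’s standard library against the demo endpoint from this post (the parenthesised groups are matched positionally across the three parameters):

```python
from urllib.parse import urlencode

# Demo endpoint used throughout this post.
base = "https://demo.geo-solutions.it/geoserver/wfs"

params = {
    "SERVICE": "WFS",
    "VERSION": "2.0.0",
    "REQUEST": "GetFeature",
    # One parenthesised group per queried layer...
    "typeNames": "(topp:tasmania_water_bodies)(topp:tasmania_roads)",
    # ...matched positionally by propertyName and by the
    # semicolon-separated cql_filter list.
    "propertyName": "(CNTRY_NAME)(TYPE)",
    "cql_filter": "AREA<1066494066;TYPE='alley'",
    "outputFormat": "application/json",
}

# urlencode takes care of percent-encoding <, ;, ' and parentheses.
url = base + "?" + urlencode(params)
print(url)
```

The resulting URL is equivalent to the hand-written one above, with the special characters percent-encoded.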

Michogar’s GIST

Enjoy!!

September 06, 2018 07:11 AM

September 05, 2018

During the 14th International gvSIG Conference, which will take place in Valencia (Spain) from October 24th to 26th, a workshop will be held about gvSIG Mobile, the free and open-source Geographic Information System for mobile devices that allows data collection in the field, very useful for censuses, inventories, inspections, surveys…

In it, participants will learn how to create a tile file from a georeferenced image in gvSIG Desktop, which can be inserted as a base layer in gvSIG Mobile, thus avoiding the need for an internet connection to have a base layer.

It will also be shown how to create custom forms in gvSIG Desktop that are later exported to gvSIG Mobile in order to collect data in the field, with the possibility of creating fields of different types (text, date, drop-down lists…) or taking photographs and sketches.

Regarding editing tools, during the workshop a vector layer will be created in gvSIG Desktop and then edited in gvSIG Mobile, both graphically and alphanumerically.

Other tools will be shown as well, such as importing bookmarks, loading WMS layers…

In the coming weeks this same post will provide all the information about how to register for this workshop, which will be free of charge, as well as the cartography to download in order to follow it.

In a few days more information will also be given about the day and time at which it will be held.

If you wish to attend the conference, remember that the registration period is already open; you can register through the form provided for it.

Don’t miss this workshop!

by Mario at September 05, 2018 02:45 PM

September 04, 2018

This is a guest post by Time Manager collaborator and Python expert, Ariadni-Karolina Alexiou.

Today we’re going to look at how to visualize the error bounds of a GPS trace in time. The goal is to do an in-depth visual exploration using QGIS and Time Manager in order to learn more about the data we have.

The Data

We have a file that contains GPS locations of an object in time, which has been created by a GPS tracker. The tracker also keeps track of the error covariance matrix for each point in time, that is, what confidence it has in the measurements it gives. Here is what the file looks like:

data.png

Error Covariance Matrix

What are those sd* fields? According to the manual: The estimated standard deviations of the solution assuming a priori error model and error parameters by the positioning options. What it basically means is that the real GPS location will be located no further than three standard deviations across north and east from the measured location, most (99.7%) of the time. A way to represent this visually is to create an ellipse that maps the area where the real location can be.

ellipse_ab

An ellipse can be uniquely defined from the lengths of the segments a and b and its rotation angle. For more details on how to get those ellipse parameters from the covariance matrix, please see the footnote.

Ground truth data

We also happen to have a file with the actual locations (also in longitudes and latitudes) of the object for the same time frame as the GPS (also in seconds), provided through another tracking method which is more accurate in this case.

actual_data

This is because the object was me, running on a rooftop in Zürich wearing several tracking devices (not just GPS), and I knew exactly which floor tiles I was hitting.

The goal is to explore, visually, the relationship between the GPS data and the actual locations in time. I hope to get an idea of the accuracy, and what can influence it.

First look

Loading the GPS data into QGIS and Time Manager, we can indeed see the GPS locations vis-a-vis the actual locations in time.

actual_vs_gps

Let’s see if the actual locations that were measured independently fall inside the ellipse coverage area. To do this, we need to use the covariance data to render ellipses.

Creating the ellipses

I considered using the ellipses marker from QGIS.

ellipse_marker.png

It is possible to switch from Millimeter to Map Unit and edit a data-defined override for symbol width, height and rotation. Symbol width would be the a parameter of the ellipse, symbol height the b parameter, and rotation simply the angle. The thing is, we haven’t computed any of these values yet; we just have the error covariance values in our dataset.

Because of the re-projections and matrix calculations inherent into extracting the a, b and angle of the error ellipse at each point in time, I decided to do this calculation offline using Python and relevant libraries, and then simply add a WKT text field with a polygon representation of the ellipse to the file I had. That way, the augmented data could be re-used outside QGIS, for example, to visualize using Leaflet or similar. I could have done a hybrid solution, where I calculated a, b and the angle offline, and then used the dynamic rendering capabilities of QGIS, as well.
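That offline step might look roughly like the following; this is a sketch with my own function name and vertex count (not the code from the post’s github), turning each (a, b, angle) triple into a WKT polygon approximating the ellipse:

```python
import math

def ellipse_wkt(cx, cy, a, b, angle_rad, n=32):
    """Approximate an ellipse as a WKT POLYGON with n vertices.

    cx, cy: centre coordinates (e.g. UTM eastings/northings);
    a, b: semi-axis lengths; angle_rad: rotation of the a-axis
    from the x-axis, counter-clockwise.
    """
    pts = []
    for i in range(n + 1):  # repeat the first point to close the ring
        t = 2 * math.pi * (i % n) / n
        # point on an axis-aligned ellipse...
        x = a * math.cos(t)
        y = b * math.sin(t)
        # ...rotated and translated into place
        xr = cx + x * math.cos(angle_rad) - y * math.sin(angle_rad)
        yr = cy + x * math.sin(angle_rad) + y * math.cos(angle_rad)
        pts.append("%.2f %.2f" % (xr, yr))
    return "POLYGON ((" + ", ".join(pts) + "))"
```

The returned WKT string can then be stored as an extra column and loaded by QGIS, Leaflet, or anything else that reads WKT.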

I also decided to dump the csv into an sqlite database with an index on the time column, to make time range queries (which Time Manager does) run faster.
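A minimal sketch of that load-and-index step (the column names and the plain-WKT storage are my simplifications; the real file uses SpatiaLite geometries):

```python
import csv
import io
import sqlite3

# Hypothetical CSV with a time column and a WKT ellipse polygon.
sample = io.StringIO(
    "time,ellipse_wkt\n"
    '100,"POLYGON ((0 0, 1 0, 1 1, 0 0))"\n'
    '200,"POLYGON ((2 2, 3 2, 3 3, 2 2))"\n'
)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gps (time INTEGER, ellipse_wkt TEXT)")
conn.executemany(
    "INSERT INTO gps VALUES (?, ?)",
    [(int(r["time"]), r["ellipse_wkt"]) for r in csv.DictReader(sample)],
)
# The index on the time column is what makes Time Manager's
# time-range queries fast.
conn.execute("CREATE INDEX idx_gps_time ON gps (time)")

rows = conn.execute(
    "SELECT time FROM gps WHERE time BETWEEN 50 AND 150"
).fetchall()
```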

Putting it all together

The code for transforming the initial GPS data csv file into an sqlite database can be found in my github along with a small sample of the file containing the GPS data.

I created three ellipses per timestamp, to represent the three standard deviations. Opening QGIS (I used version 2.12, Las Palmas) and going to Layer > Add Layer > Add SpatiaLite Layer, we see the following dialog:

add_spatialite2.png

After adding the layer (say, for the second standard deviation ellipse), we can add it to Time Manager like so:

add_to_tm

We do the process three times to add the three types of ellipses, taking care to style each ellipse differently. I used transparent fill for the second and third standard deviation ellipses.

I also added the data of my actual positions.

Here is an exported video of the trace (at a place in time where I go forward, backwards and forward again and then stay still).

gps

Conclusions

Looking at the relationship between the actual data and the GPS data, we can see the following:

  • Although the actual position differs from the measured one, the actual position always lies within one or two standard deviations of the measured position (so, inside the purple and golden ellipses).
  • The direction of movement has greater uncertainty (the ellipse is elongated across the line I am running on).
  • When I am standing still, the GPS position is still moving, and unfortunately does not converge to my actual stationary position, but drifts. More research is needed regarding what happens with the GPS data when the tracker is actually still.
  • The GPS position doesn’t jump erratically, which can be good, however, it seems to have trouble ‘catching up’ with the actual position. This means if we’re looking to measure velocity in particular, the GPS tracker might underestimate that.

These findings are empirical, since they are extracted from a single visualization, but we have already learned some new things. We have some new ideas for what questions to ask on a large scale in the data, what additional experiments to run in the future and what limitations we may need to be aware of.

Thanks for reading!

Footnote: Error Covariance Matrix calculations

The error covariance matrix is (according to the definitions of the sd* columns in the manual):

| sde * sde                  sign(sdne) * sdne * sdne |
| sign(sdne) * sdne * sdne   sdn * sdn                |

It is not a diagonal matrix, which means that the errors across the ‘north’ dimension and the ‘east’ dimension, are not exactly independent.

An important detail is that, while the position is given in longitudes and latitudes, the sdn, sde and sdne fields are in meters. To address this in the code, we convert the longitudes and latitudes using a UTM projection, so that they are also in meters (northings and eastings).
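Putting the footnote together, here is a minimal sketch (my own function, not the post’s implementation) that derives the k-sigma ellipse axes and rotation from the sd* fields, using the closed-form eigendecomposition of the symmetric 2x2 covariance matrix:

```python
import math

def error_ellipse(sde, sdn, sdne, k=3.0):
    """Return (a, b, angle_rad) of the k-sigma error ellipse.

    sde, sdn, sdne are the standard-deviation fields described above,
    all in metres. The covariance matrix is
        [[sde^2, s], [s, sdn^2]]  with  s = sign(sdne) * sdne^2.
    """
    sxx = sde * sde
    syy = sdn * sdn
    sxy = math.copysign(sdne * sdne, sdne)
    # Closed-form eigenvalues of a symmetric 2x2 matrix.
    mean = (sxx + syy) / 2.0
    diff = math.hypot((sxx - syy) / 2.0, sxy)
    l1, l2 = mean + diff, mean - diff  # l1 >= l2 >= 0
    # Rotation of the major axis from the east axis.
    angle = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return k * math.sqrt(l1), k * math.sqrt(max(l2, 0.0)), angle
```

With a diagonal covariance (sdne = 0) the axes are simply k times the two standard deviations and the rotation is zero.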

For more details on the mathematics used to plot the ellipses check out this article by Robert Eisele and the implementation of the ellipse calculations on my github.

by carolinux at September 04, 2018 05:54 PM

Today is my first day with my new employer Crunchy Data. Haven’t heard of them? I’m not surprised: outside of the world of PostgreSQL, they are not particularly well known, yet.

Moving on to Crunchy Data

I’m leaving behind a pretty awesome gig at CARTO, and some fabulous co-workers. Why do such a thing?

While CARTO is turning in constant growth and finding good traction with some core spatial intelligence use cases, the path to success is leading them into solving problems of increasing specificity. Logistics optimization, siting, market evaluation.

Moving to Crunchy Data means transitioning from being the database guy (boring!) in a geospatial intelligence company, to being the geospatial guy (super cool!) in a database company. Without changing anything about myself, I get to be the most interesting guy in the room! What could be better than that?

Crunchy Data has quietly assembled an exceptionally deep team of PostgreSQL community members: Tom Lane, Stephen Frost, Joe Conway, Peter Geoghegan, Dave Cramer, David Steele, and Jonathan Katz are all names that will be familiar to followers of the PostgreSQL mailing lists.

They’ve also quietly assembled expertise in key areas of interest to large enterprises: security deployment details (STIGs, RLS, Common Criteria); Kubernetes and PaaS deployments; and now (ta da!) geospatial.

Why does this matter? Because the database world is at a technological inflection point.

Core enterprise systems change very infrequently, and only under pressure from multiple sources. The last major inflection point was around the early 2000s, when the fleet of enterprise proprietary UNIX systems came under pressure from multiple sources:

  • The RISC architecture began to fall noticeably behind x86 and particular x86-64.
  • Pricing on RISC systems began to diverge sharply from x86 systems.
  • A compatible UNIX operating system (Linux) was available on the alternative architecture.
  • A credible support company (Red Hat) was available and speaking the language of the enterprise.

The timeline of the Linux tidal wave was (extremely roughly):

  • 90s - Linux becomes the choice of the tech cognoscenti.
  • 00s - Linux becomes the choice of everyone for greenfield applications.
  • 10s - Linux becomes the choice of everyone for all things.

By my reckoning, PostgreSQL is on the verge of a Linux-like tidal wave that washes away much of the enterprise proprietary database market (aka Oracle DBMS). Bear in mind, these things pan out over 30 year timelines, but:

  • Oracle DBMS offers no important feature differentiation for most workloads.
  • Oracle DBMS price hikes are driving customers to distraction.
  • Server-in-a-cold-room architectures are being replaced with the cloud.
  • PostgreSQL in the cloud, deployed as PaaS or otherwise, is mature.
  • A credible support industry (including Crunchy Data) is at hand to support migrators.

I’d say we’re about half way through the evolution of PostgreSQL from “that cool database” to “the database”, but the next decade of change is going to be the one people notice. People didn’t notice Linux until it was already running practically everything, from web servers to airplane seatback entertainment systems. The same thing will obtain in database land; people won’t recognize the inevitability of PostgreSQL as the “default database” until the game is long over.

Having a chance to be a part of that change, and to promote geospatial as a key technology while it happens, is exciting to me, so I’m looking forward to my new role at Crunchy Data a great deal!

Meanwhile, I’m going to be staying on as a strategic advisor to CARTO on geospatial and database affairs, so I get to have a front seat on their continued growth too. Thanks to CARTO for three great years, I enjoyed them immensely!

September 04, 2018 04:00 PM

Thumb

Creating a beautiful map that fits your brand is no longer a difficult job. With MapTiler Cloud, you pick five colors and the entire map adjusts accordingly.

Your own map design, made straightforward

If you have ever browsed our maps, maybe you got this feeling: the style is almost perfect, but I need just a small change to fit the map to my app or web design. Now you can fix it with only a few mouse clicks.

The Make your own map button will lead you directly to the Customize tool, where you can drag just one color and all map elements in the group adjust accordingly. The changes can be reset to the defaults if needed, so there is no need to hold back while experimenting. You can also copy a color scheme from a map you made before, change the language of labels to one of 55+ supported languages, or change the font.

The map design you create in the Customize tool will automatically stay up to date when we update the software running behind it or add a new type of data.

Once you are happy with your design, you can log in or create a free account to save your changes and use the map directly on your website or in your app.

Intelligent Maps: the magic behind

MapTiler Cloud offers more than just creating your own design. Intelligent Maps can dynamically adapt colors, languages and data for individual visitors, perfectly fitting the needs of companies who use the maps in their web or mobile applications.

Your visitors can get a personalized map in your applications, with place names automatically displayed in their language and with highlights of the most important places for each individual end-user, such as the most often used bus stops, frequently visited restaurants or recent trips. This can be done based on the user’s preferences, their history, or data loaded from existing information systems. The maps can also change based on the day of the week, the time of day and opening hours, so the user receives only the relevant information.

Tokyo in Customize tool

Free to try, no registration required

To create your own map design, you need no registration. Simply visit our maps and start customizing your favorite one. After creating a free account, you can save your map and start using it directly in your web or app.

by Dalibor Janák (info@klokantech.com) at September 04, 2018 11:00 AM

This year’s FOSS4G edition took place in Dar es Salaam, Tanzania. As every year, Sourcepole supported this major event as a sponsor. We would like to thank everyone for all the interesting discussions and for the feedback on our presentations!

image

QGIS Web Client 2 Update

Styling and publication of vector tiles

Using GeoPackage as work and exchange format

Thanks to the LOC for organizing another great FOSS4G!

Pirmin Kalberer (@implgeo)

September 04, 2018 12:00 AM

September 02, 2018

FOSS4G 2018

Dear Reader,

we are putting together in this post all the presentations that were given by our staff during this year’s FOSS4G in Dar es Salaam, Tanzania.

Here is the complete list. Enjoy!

  • State of GeoServer 2018
  • Serving earth observation data with GeoServer: addressing real world requirements
  • Creating Stunning Maps in GeoServer, with SLD, CSS, YSLD and MBStyles
  • Crunching Data In GeoServer: Mastering Rendering Transformations, WPS And SQL Views
  • GeoServer in production. We do it, here is how!
  • One GeoNode, many GeoNodes
  • Mapping beyond web mercator
  • MapStore 2, the story

If you want further information, do not hesitate to contact us.

The GeoSolutions Team,

320x100_eng

by simone giannecchini at September 02, 2018 06:06 PM

August 30, 2018

A year has gone by and the public accounts are out again, so I have updated my running tally of IT outsourcing for the BC government.

While the overall growth is modest, I’m less sure of my numbers this year because of last year’s odd and sudden drop-off of the IBM amount.

I am guessing that the IBM drop should be associated with their loss of desktop support services for the health authorities, which went to Fujitsu, but for some reason the transfer is not reflected in the General Revenue accounts. The likely reason is that it’s now being paid out of some other bucket, like the Provincial Health Services Authority (PHSA), so fixing the trend line will involve finding that spend in other accounts. Unfortunately, Health Authorities won’t release their detailed accounts for another few months.

All this shows the difficulty in tracking IT spend over the whole of government, which is something the Auditor General remarked on when she reviewed IT spending a few years ago. Capital dollars are relatively easy to track, but operating dollars are not, and of course the spend is spread out over multiple authorities as well as in general revenue.

In terms of vendors, the drop off in HP/ESIT is interesting. For those following at home ESIT (formerly known as HP Advanced Solutions) holds the “back-office” contract, which means running the two government data centres (in Calgary and Kamloops (yes, the primary BC data centre is in Alberta)) as well as all the servers therein and the networks connecting them up. It’s a pretty critical contract, and the largest in government. Since being let (originally to Ross Perot’s EDS), one of the constants of this tracking has been that the amount always goes up. So this reduction is an interesting data point: will it hold?

And there’s still a large whale unaccounted for: the Coastal Health Authority electronic health record project, which has more-or-less dropped off the radar since Adrian Dix brought in Wynne Powell as a fixer. The vendors (probably Cerner and others) for that project appear on neither the PHSA nor Coastal Health public accounts, I think because it is all being run underneath a separate entity. I haven’t had a chance to figure out the name of it (if you know the way this is financed, drop me a line).

August 30, 2018 07:00 AM

August 29, 2018

This is the third progress report of the GDAL SRS barn effort.

In the last month, the main task was to continue, and finish, the mapping of all GDAL currently supported projection methods between their different representations as WKT 2, WKT 1 and PROJ strings: LCC_2SP, LCC_2SP_Belgium, Modified Azimuthal Equidistant, Guam Projection, Bonne, (Lambert) Cylindrical Equal Area, GaussSchreiberTransverseMercator, CassiniSoldner, Eckert I to VI, EquidistantCylindricalSpherical, Gall, GoodeHomolosine, InterruptedGoodeHomolosine, GeostationarySatelliteSweepX/Y, International Map of the World Polyconic, Krovak North Oriented and Krovak, LAEA, Mercator_1SP and Mercator_2SP, WebMercator (including GDAL WKT 1 import/export tricks), Mollweide, ObliqueStereographic and Orthographic, American Polyconic, PolarStereographicVariantA and PolarStereographicVariantB, Robinson and Sinusoidal, Stereographic, VanDerGrinten, Wagner I to VII, QuadrilateralizedSphericalCube, SphericalCrossTrackHeight, Aitoff, Winkel I and II, Winkel Tripel, Craster_Parabolic, Loximuthal and Quartic_Authalic.

The task was tedious, but necessary. In some cases, this involved cross-checking formulas in the EPSG "Guidance Note 7, part 2: Coordinate Conversions & Transformations including Formulas", the PROJ implementation and Snyder's "Map Projections - A Working Manual" because of ambiguities in some projection names. For example, the ObliqueStereographic in EPSG is not the Oblique Stereographic of Snyder: the former is implemented as the Oblique Stereographic Alternative (sterea) in PROJ, and the latter as the Oblique Stereographic (stere). The parameter names in WKT 2 / EPSG also tend to be much more specific than in GDAL WKT 1. Where GDAL WKT 1 mostly has a generic "latitude_of_origin" parameter mapping to the lat_0 PROJ parameter, in WKT 2 parameter names tend to better reflect the mathematical characteristics of the projection, distinguishing between "Latitude of natural origin", "Latitude of projection centre" and "Latitude of false origin".
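To make the difference concrete, here is an illustrative (not authoritative) fragment showing the same parameter value in both encodings, for a hypothetical two-standard-parallel Lambert Conformal Conic definition:

```
GDAL WKT 1 (generic parameter name):
    PARAMETER["latitude_of_origin",46.5]

WKT 2 (name reflecting the parameter's mathematical role):
    PARAMETER["Latitude of false origin",46.5,
        ANGLEUNIT["degree",0.0174532925199433]]
```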

The currently ongoing task is to implement a method that takes a PROJ string and builds the corresponding ISO 19111 objects. This is done for GeographicCRS (+proj=longlat) and in progress for ProjectedCRS. When this is completed, we will have the infrastructure to convert in all directions between PROJ strings, WKT 1 and WKT 2.

While digging into the PROJ code, I also uncovered a number of issues in the Helmert implementation (confusion for rotational parameters regarding the "Position Vector" vs "Coordinate Frame" convention), in the handling of the not-so-well-known +geoc flag for geocentric latitudes, and in the handling of vertical units for geographic CRS with the new PROJ API. All of those fixes have been independently merged into PROJ master, so as to be available for the upcoming PROJ 5.2.0, which should be released around mid-September (to remove any confusion, this release will not yet include all the WKT 2 related work).

by Even Rouault (noreply@blogger.com) at August 29, 2018 02:23 PM

August 28, 2018

El texto que acompaña la presentación de las JIIDE de este año dice así:

En estos momentos en los que los mayores desafíos a los que se enfrenta la humanidad, como son el cambio climático, una economía sostenible, el reparto justo de la riqueza, la degradación del medio ambiente, las migraciones y en general, los efectos negativos de la globalización, tienen un marcado carácter geoespacial, las IDE, como sistemas de sistemas colaborativos, abiertos e interoperables son más necesarios que nunca.

A lo que debemos añadir: libres. Sistemas libres, de código abierto, que permitan reutilizar, sumar y apostar por un modelo económico diferente, sostenible, alejado de monopolios y que evite caer en los riesgos de la dependencia tecnológica o de proveedores únicos.

En la suma de los dos textos encontramos la visión y misión de la Asociación gvSIG, que en estas JIIDE 2018 presentará un taller y una serie de ponencias relacionadas con la Suite gvSIG:

  • 17 de octubre.

    • Sesión IDE locales. Sala 1 [11:30-13:30]. Infraestructuras de Datos Espaciales municipales con gvSIG Suite

  • 18 de octubre.

    • Sesión Interoperabilidad de datos y servicios geográficos. Sala 2 [09:00-11:00]. IDE para bomberos, protección civil y gestión del delito

    • Sesión IDE nacionales, regionales y locales II. Sala 1 [12:30-14:30]. IDE en software libre para Agricultura: el caso de gvSIG Online en la GVA

    • Taller 4 [16:00-17:00]. gvSIG Online, plataforma en software libre para IDE

Las JIIDE 2018 tendrán lugar los días 17, 18 y 19 de octubre en la Isla del Lazareto, en el puerto de Maó. Estaremos encantados de interoperar e intercambiar conocimiento.

by Alvaro at August 28, 2018 11:30 AM

We are happy to announce the release of GeoServer 2.14-RC. Downloads are available (zip, war, and exe) along with docs and extensions.

This is a beta release of GeoServer made in conjunction with GeoTools 20-RC.

We want to encourage people to test the release thoroughly and report back any issue found. With no further delay, let’s see what’s new, that is, what is there to test!

WMS “nearest match” support for time dimension

WMS time dimension can now be configured for “nearest match”, that is, to return the nearest time to the one requested (either explicitly, or implicitly via the default time value).

In case of a mismatch, the actual time used will be returned along with the response as an HTTP header. It is also possible to configure a maximum tolerance, beyond which the server will throw a service exception.
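As an illustration (the layer name, requested time and header text below are assumptions for this sketch, not taken from the release notes), a time-enabled request and the kind of header the server can attach when it snaps to the nearest available time:

```
GET /geoserver/wms?SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap
    &LAYERS=topp:temperature&TIME=2018-01-15T00:00:00Z&...

Hypothetical response header when the nearest available time differs:
Warning: 99 Nearest value used: time=2018-01-14T22:00:00.000Z
```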

Channel selection names allow expressions

GeoServer 2.14-RC allows expressions to be used in SourceChannelName SLD elements, and in their code counterparts, thus allowing dynamic channel selection. This is welcome news for anyone building applications that display multispectral or hyperspectral data, as it avoids the need to build many different styles for the various interesting false color combinations.

Here is an SLD example:

<RasterSymbolizer>
  <ChannelSelection>
    <RedChannel>
      <SourceChannelName>
          <ogc:Function name="env">
             <ogc:Literal>B1</ogc:Literal>
             <ogc:Literal>2</ogc:Literal>
          </ogc:Function>
      </SourceChannelName>
    </RedChannel>
    <GreenChannel>
      <SourceChannelName>
          <ogc:Function name="env">
             <ogc:Literal>B2</ogc:Literal>
             <ogc:Literal>5</ogc:Literal>
          </ogc:Function>
      </SourceChannelName>
    </GreenChannel>
    <BlueChannel>
      <SourceChannelName>
          <ogc:Function name="env">
             <ogc:Literal>B3</ogc:Literal>
             <ogc:Literal>7</ogc:Literal>
          </ogc:Function>
      </SourceChannelName>
    </BlueChannel>
  </ChannelSelection>
</RasterSymbolizer>

Map algebra

This release adds support for an efficient map algebra package known as Jiffle. Jiffle is the work of a former GeoTools contributor, Michael Bedward; it has been salvaged, upgraded to support Java 8, and integrated into jai-ext. From there, support has been added to the GeoTools gt-process-raster module and, as a result, to GeoServer WPS, where it can be used either directly or as a rendering transformation.

The following SLD style calls into Jiffle to perform an NDVI calculation on top of Sentinel 2 data:

<?xml version="1.0" encoding="UTF-8"?>
<StyledLayerDescriptor xmlns="http://www.opengis.net/sld" 
   xmlns:ogc="http://www.opengis.net/ogc" xmlns:xlink="http://www.w3.org/1999/xlink" 
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.opengis.net/sld
http://schemas.opengis.net/sld/1.0.0/StyledLayerDescriptor.xsd" version="1.0.0">
  <NamedLayer>
    <Name>Sentinel2 NDVI</Name>
    <UserStyle>
      <Title>NDVI</Title>
      <FeatureTypeStyle>
        <Transformation>
          <ogc:Function name="ras:Jiffle">
            <ogc:Function name="parameter">
              <ogc:Literal>coverage</ogc:Literal>
            </ogc:Function>
            <ogc:Function name="parameter">
              <ogc:Literal>script</ogc:Literal>
              <ogc:Literal>
                nir = src[7];
                vir = src[3];
                dest = (nir - vir) / (nir + vir);
              </ogc:Literal>
            </ogc:Function>
          </ogc:Function>
        </Transformation>
        <Rule>
          <RasterSymbolizer>
            <Opacity>1.0</Opacity>
            <ColorMap>
              <ColorMapEntry color="#000000" quantity="-1"/>
              <ColorMapEntry color="#0000ff" quantity="-0.75"/>
              <ColorMapEntry color="#ff00ff" quantity="-0.25"/>
              <ColorMapEntry color="#ff0000" quantity="0"/>
              <ColorMapEntry color="#ffff00" quantity="0.5"/>
              <ColorMapEntry color="#00ff00" quantity="1"/>
            </ColorMap>
          </RasterSymbolizer>
        </Rule>
      </FeatureTypeStyle>
    </UserStyle>
  </NamedLayer>
</StyledLayerDescriptor>

The performance is good enough for interactive display.

PostGIS store improvements and measured geometries support

For years the PostGIS datastore has been the only one able to encode certain filter functions down into native SQL, but doing so required enabling a flag at datastore creation time.
Starting with this release it will do so by default.

The functions supported for SQL encoding by the store are:

  • String functions: strConcat, strEndsWith, strStartsWith, strEqualsIgnoreCase, strIndexOf, strLength, strToLowerCase, strToUpperCase, strReplace, strSubstring, strSubstringStart, strTrim, strTrim2
  • Math functions: abs, abs_2, abs_3, abs_4, ceil, floor
  • Date functions: dateDifference
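As an illustration, the OGC filter below uses one of these functions; with the new default, the PostGIS store can translate the function call into native SQL (e.g. a lower(...) call) instead of evaluating it in memory after fetching the rows. The attribute name here is a hypothetical placeholder:

```xml
<!-- Hypothetical WFS filter; STATE_NAME is a placeholder attribute.
     The strToLowerCase call can now be pushed down to PostGIS by default. -->
<ogc:Filter xmlns:ogc="http://www.opengis.net/ogc">
  <ogc:PropertyIsEqualTo>
    <ogc:Function name="strToLowerCase">
      <ogc:PropertyName>STATE_NAME</ogc:PropertyName>
    </ogc:Function>
    <ogc:Literal>utah</ogc:Literal>
  </ogc:PropertyIsEqualTo>
</ogc:Filter>
```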

This release adds support for “array” data type in the store, with full reading and writing support, as well as native filtering (with index support, where feasible).

Finally, it’s now possible to read geometries with measures from PostGIS and encode the results in GML. GML does not natively support measures, so the encoding is off by default; you’ll have to enable it explicitly, and make sure the clients involved in WFS usage recognize this extra ordinate. The work will continue over the next few months in order to cover more formats.

 

Image mosaic improvements

The image mosaic module never sleeps. In this release we see the following improvements:

  • Support for remote images (e.g. S3 or Minio). In order to leverage this, the mosaic index will have to be created up-front (manually, or with some home-grown tools)
  • A new “virtual native resolution” read parameter allows the mosaic to compose outputs respecting a resolution other than its native one (useful in case you want to give selective resolution access to different users)
  • Support for multiple WKB footprints, for pixel-precise masking of overviews
  • A new read mask parameter allows cutting the image to a given geometry (again, this helps in providing different selective views to different users)
  • Speed-up of NetCDF mosaics by allowing usage of stores coming from a repository, instead of re-creating them every time a NetCDF file is needed (to be set up in the auxiliary store config file, while the repository instance needs to be passed as a creation hint)
  • The mosaic now works against images without a proper CRS, assigning them the “CRS not found” wildcard CRS, aka “EPSG:404000”

App-schema improvements

The app-schema module has received significant performance and functionality improvements since the 19.x series, in particular:

  • Better delegation of spatial filters on nested properties to native database
  • Improved support for multiple nested attribute matches in filters
  • It is now possible to use Apache SOLR as a data source for app-schema
  • The configuration files can be hosted on a remote HTTP server

The MongoDB store upgrades to official extension

Wanted to use the MongoDB store but worried about its unsupported status? Worry no more: in GeoServer 2.14 the MongoDB datastore has been upgraded to official extension status.

Style editor improvements

The GeoServer style editor now includes a fullscreen side-by-side editing mode, to make it easier to preview your styles while editing them. Click the fullscreen button at the top-right of the screen to toggle fullscreen mode.

The toolbar also has two new additions: a color picker that helps find the right color and turn it into a hex specification, and a file chooser that allows picking an external graphic and building the relevant ExternalGraphic element:

 

Windows build restored

GeoServer failed to build properly on Windows for a long time. GeoServer 2.14.x is the first branch in years to successfully build on Windows, and we have added an AppVeyor build to help keep the build going in the future.

The work to restore the Windows build has been done entirely in spare time, and we are still experiencing random build failures. If you are a Java developer on Windows, we could really use your help to keep the GeoServer build Windows-friendly.

New community modules

The 2.14 series comes with a few new community modules, in particular:

  • OAuth2 OpenID connect module (still very much a work in progress)
  • A WFS3 implementation has been added that matches the current WFS3 specification draft, and it is being updated to track the specification on its way to becoming official.
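As a rough sketch of the API surface the draft defines, the WFS3 module exposes resource paths along these lines (the host is a placeholder, and the paths may change as the draft specification evolves):

```
http://localhost:8080/geoserver/wfs3/                        landing page
http://localhost:8080/geoserver/wfs3/api                     API description
http://localhost:8080/geoserver/wfs3/collections             list of feature collections
http://localhost:8080/geoserver/wfs3/collections/{id}/items  features of one collection
```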

Mind, community modules are not part of the release, but you can find them in the nightly builds instead.

Other assorted improvements

There are many improvements to look at in the release notes, cherry picking a few here:

  • The ncWMS community module has seen significant performance improvements
  • WPS gains a CSV input/output supporting geometryless data as well as geometries with WKT encoding, plus pure PNG/JPEG images for raster data
  • Time/Elevation parsers are no longer silently cutting the read data to 100 entries
  • The WMTS multidimensional GetHistogram operation is now significantly faster, and a new GetDomainValues operation allows paging through the values of a domain (time, elevation) that has too many values to be efficiently explored with DescribeDomain or GetFeature. DescribeDomain was also improved to allow selecting the domains that should be described.
  • The SLDService community module now has the ability to return a full SLD style (as opposed to snippets), and allows for custom classification and custom color selection. Also, keep an eye on this module, as it’s about to graduate to supported status
  • The monitoring module can now log the hit/miss status of tiled requests, which is quite helpful for verifying the benefit of caching, especially while using WMS direct integration
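For instance, a GetDomainValues request paging through a time domain might look like the following KVP request (host, layer name and parameter values are placeholders; check the WMTS multidimensional module documentation for the exact parameter set):

```
http://localhost:8080/geoserver/gwc/service/wmts?service=WMTS&version=1.0.0
    &request=GetDomainValues&layer=topp:temperature&domain=time
    &limit=100&fromValue=2018-01-01T00:00:00.000Z
```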

Security consideration

Please update your production instances of GeoServer to receive the latest security updates and fixes.

This release addresses several security vulnerabilities:

  • Prevent arbitrary code execution via Freemarker Template injection
  • XXE vulnerability in GeoTools XML Parser
  • XXE vulnerability in WPS Request builder
  • Various library upgrades (see above) from versions with known CVEs
  • Potential access to admin pages without being logged in

Thanks to Steve Ikeoka, Kevin Smith, Brad Hards, Nuno Oliveira and Andrea Aime for providing fixes to these issues.

If you encounter a security vulnerability in GeoServer, or any other open source software, please take care to report the issue in a responsible fashion.

Test, test, test!

Now that you know about all the goodies, please go, download and test your favourite ones. Let us know how it went!

About GeoServer 2.14

GeoServer 2.14 is scheduled for September 2018 release.

by Andrea Aime at August 28, 2018 11:05 AM

August 27, 2018

INSPIRE Conference 2018

Dear All, we are proud to announce that GeoSolutions is exhibiting at the INSPIRE Conference 2018 which will be held in Antwerp from 18th to 21st of September 2018.

GeoSolutions will be present at the exhibition at the OSGEO booth, therefore we will be happy to talk to you about our open source products, like GeoServer, MapStore, GeoNode and GeoNetwork, as well as about our Enterprise Support Services and GeoServer Deployment Warranty offerings.

Our INSPIRE & GeoServer expert Nuno Oliveira, together with our director Simone Giannecchini, is going to hold a workshop on GeoServer for INSPIRE, here are the details:

and will participate in a more general workshop as follows:

as well as give a presentation on the same topic as follows:

If you are interested in learning how we can help you achieve your goals with our Open Source products and professional services, make sure to visit us at our booths S4-B3-B11 (we are a large family, so we took many booths :) ).

See you in Antwerpen!

The GeoSolutions team,

by simone giannecchini at August 27, 2018 02:11 PM

August 24, 2018

We are happy to announce the release of GeoServer 2.12.5. Downloads are available (zip, war, and exe) along with docs and extensions.

This is the last maintenance release for the 2.12.x series, so we recommend users to plan an upgrade to 2.13.x or to the upcoming 2.14.x series. This release is made in conjunction with GeoTools 18.5.

Highlights of this release are featured below; for more information please see the release notes (2.12.5 | 2.12.4 | 2.12.3 | 2.12.2 | 2.12.1 | 2.12.0 | 2.12-RC1 | 2.12-beta).

Improvements

  • ImageMosaic should work when the images have no CRS information
  • Upgrade Apache POI dependencies
  • Upgrade jasypt dependency
  • Upgrade json-lib dependency to 2.4
  • Upgrade bouncycastle provider to 1.60

Bug Fixes

  • NullPointerException during WMS request of layer group when caching is enabled
  • GeorectifyCoverage fails to handle paths with spaces
  • CSS translator does not support mark offset/anchors based on expressions (but SLD does)
  • GeoServerSecuredPage might not redirect to login page in some obscure cases after Wicket upgrade

Security updates

Please update your production instances of GeoServer to receive the latest security updates and fixes.

This release addresses several security vulnerabilities:

  • Prevent arbitrary code execution via Freemarker Template injection
  • XXE vulnerability in GeoTools XML Parser
  • XXE vulnerability in WPS Request builder
  • Various library upgrades (see above) from versions with known CVEs

Thanks to Steve Ikeoka, Kevin Smith, Brad Hards and Nuno Oliveira for providing fixes to these issues.

These fixes are also included in the 2.13.2 release.

If you encounter a security vulnerability in GeoServer, or any other open source software, please take care to report the issue in a responsible fashion.

About GeoServer 2.12 Series

Additional information on the 2.12 series:

by tbarsballe at August 24, 2018 07:52 PM

Deep learning has proven to be an extremely powerful tool in many fields, and particularly in image processing: these approaches are currently subject of great interest in the Computer Vision community. However, while a number of typical CV problems can be directly transposed in Remote Sensing (Semantic labeling, Classification problems, …), some intrinsic properties of […]

by Rémi Cresson at August 24, 2018 02:36 PM