Welcome to Planet OSGeo

July 29, 2016

gvSIG Team

5as Jornadas Brasileiras de gvSIG: Registration period now open

Registration is now open for the 5as Jornadas Brasileiras gvSIG (September 14th to 16th, 2016, at the Auditorium of the Centro de Ciências Rurais at UFSM – Santa Maria/RS – Brazil).

Registration is free of charge (places are limited). To register, simply fill in the corresponding form.

Immediately after registering you will receive an email with a confirmation code. This code can be used to check and update your details.

When the certificates of attendance become available after the conference, an email will be sent to each participant with instructions for downloading them.

The call for papers also remains open. The types of participation, as well as the guidelines and deadlines for submitting papers, can be found in the Communications section of the website.

We look forward to your participation!


Filed under: community, events, portuguese Tagged: Jornadas Brasileiras

by Mario at July 29, 2016 09:21 AM

July 28, 2016

Fernando Quadro

Introduction to GeoGig – Part 2

Once the software is installed, some initial configuration is needed before you can start working with it. In particular, both PostGIS and GeoGig need to be configured, and the procedures for this setup are detailed below.

A user must be defined in GeoGig before you can commit data to a GeoGig repository and create new snapshots.

To define a new user, open a console and type the following commands:

geogig config --global user.name "Author"
geogig config --global user.email "author@example.com"

Replace the user name and email with your own information.

All commits you make from now on will contain this author information. A user name and email can also be set per repository; if they are not set, GeoGig will use the global values created with the commands above.

You need a database to work with PostGIS. We will create a database named geogig-ethiopia and use it to store all the data for this tutorial. The database can be created from pgAdmin as follows:

1. Create a user named geogig that you will use to connect to the database
2. Set a password for that user (use geogig as the password)
3. Under privileges, make sure the “superuser” option is checked, to grant the user full rights.

[Figure: create_role3]

Now that the user has been defined, let's create the database.

4. Fill in the name of the new database as shown below and click OK.

Name: geogig-ethiopia
Owner: geogig

[Figure: create_pg_database]

5. Select the new database and double-click it to open it and display the object tree. You will then see the public schema.

[Figure: workshop_db]

6. Click the SQL query button in the pgAdmin toolbar (or go to Tools > Query Tool).

7. Type the following query into the text field to load the PostGIS spatial extension:

CREATE EXTENSION postgis;

8. Click the “Execute” button on the toolbar (or press F5) to run the query.

The database is now created and we can start adding data to it. If you prefer scripting this setup rather than clicking through pgAdmin, see the sketch below.
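
A minimal sketch of the same setup using Python and psycopg2 (the connection parameters – host and the postgres superuser password – are assumptions; adapt them to your environment):

import psycopg2

# Connect as the postgres superuser to create the role and the database
conn = psycopg2.connect(dbname="postgres", user="postgres",
                        password="postgres", host="localhost")
conn.autocommit = True  # CREATE DATABASE cannot run inside a transaction
cur = conn.cursor()
cur.execute("CREATE ROLE geogig WITH LOGIN SUPERUSER PASSWORD 'geogig'")
cur.execute('CREATE DATABASE "geogig-ethiopia" OWNER geogig')
conn.close()

# Reconnect to the new database and load the PostGIS extension
conn = psycopg2.connect(dbname="geogig-ethiopia", user="geogig",
                        password="geogig", host="localhost")
conn.autocommit = True
conn.cursor().execute("CREATE EXTENSION postgis")
conn.close()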

In the next post we will prepare and import the data into the database. Stay tuned!

by Fernando Quadro at July 28, 2016 06:00 PM

PyWPS team

PyWPS 4.0.0-beta1 released

The PyWPS development team has just released the first beta version of the upcoming PyWPS 4.0. This new release is a complete re-write of PyWPS, taking full advantage of modern Python technologies.

For a bit of history and a quick overview of PyWPS-4 there is this presentation.

For more in-depth information please consult the article recently published by ISPRS.

The source code for this beta release is available at GitHub. Please follow the documentation to install PyWPS 4.0 and the accompanying demo application.

This beta release aims to fully comply with the WPS 1.0 standard and support seamless integration with GRASS GIS. Please report your feedback at the PyWPS mailing list, or open an issue at GitHub.

July 28, 2016 12:00 AM

July 27, 2016

GeoTools Team

Copyright Headers in Source Code Files


OSGeo has now received legal advice on copyright headers in source code files. This advice is summarised here for the benefit of all projects:
  • Copyright headers in source code files are not required to enforce copyright.
  • Works still under copyright when the Berne Convention was adopted (1992 in the United States) are protected under this convention.
  • Works created after the Berne Convention was adopted are also protected.
  • There is no need to include a copyright header for a source code file created or modified after the Berne Convention was adopted. A modified source code file is a new work.
  • Source code files created before the adoption of the Berne Convention and not modified since its adoption should include a copyright header with the dates of creation and modification. These dates support the assertion that these files are covered by the Berne Convention.
  • A header that names the project, links to the project page, and references the licence is informative, and while not legally binding, might deter some infringers (at least they were warned file-by-file that the code is open source).
  • Both individual source code files and the project source as a whole are copyrighted works.
GeoTools was first published in 1996. Project policy requires a copyright header in source code files; these should be seen as informative, and are not required to enforce copyright. See GeoTools Header Policy Updated for details. 

Note that the authors of this blog post are not lawyers and this post has not been reviewed by a lawyer. This post should not be construed as legal advice.

by Ben Caradoc-Davies (noreply@blogger.com) at July 27, 2016 11:02 PM

Fernando Quadro

Introduction to GeoGig – Part 1

This series of posts, starting today, will describe how to use GeoGig together with other software packages to manage spatial data, showing a typical workflow.

This introduction focuses mainly on GeoGig and assumes user-level knowledge of all the other applications. For more information about them, consult each program's documentation.

Let's start with a definition of what GeoGig is:

GeoGig is an open source tool that draws inspiration from Git, but adapts its core concepts to handle distributed versioning of geospatial data.

In short: with GeoGig, users can import geospatial data (currently Shapefiles, PostGIS or SpatiaLite) into a repository where every change made to the data is tracked. These changes can be viewed in a history, reverted to older versions, branched, merged, and pushed to remote repositories.

Now that we know what GeoGig is, it is important to know that the sample datasets used in the upcoming posts are based on OpenStreetMap data. You can download them as a zip file and unzip it to a folder of your choice.

For this tutorial, the following systems must be installed on your computer:

  • GeoGig
  • GeoServer with the GeoGig extension
  • PostGIS (including shp2pgsql)
  • QGIS 2.0

GeoServer and PostGIS are installed as part of OpenGeo Suite. This tutorial assumes you are using OpenGeo Suite.

To install GeoGig, you need the Java JDK 8 (preferably) installed on your machine. Once that is in place, download GeoGig and unzip it to a folder of your choice (C:\Program Files\GeoGig or /opt/geogig). Note that the same downloaded file can be used on Windows, Linux or Mac.

To finish the GeoGig installation, add its bin directory to the PATH environment variable. When done, you should be able to run geogig --help at the command prompt.

Besides GeoGig, you need GeoServer installed on your system, version 2.3.x or higher.

If you are using OpenGeo Suite, GeoServer should already be installed on your system, but it does not include the GeoGig extension, so you can skip the GeoServer installation but must still install the GeoGig extension.

As with GeoServer, if you have installed OpenGeo Suite you should already have PostGIS, including the shp2pgsql utility, installed. Otherwise you will need to install it.

Finally, you should install QGIS 2.0, which you can download from the QGIS website. You should also download the OpenGeo Explorer plugin, which will be used in this tutorial. Installation instructions can be found on the plugin's site.

In the next post we will configure GeoGig with PostGIS. Don't miss it!

by Fernando Quadro at July 27, 2016 03:20 PM

Jo Cook

Don't be Afraid to Commit

Back in June, which seems a long time ago now, we (OSGeo:UK) ran a mini FOSS4G event at the Ordnance Survey offices in Southampton, UK. It was a big success, well attended, and everyone seemed to enjoy themselves, which is always nice. This is not a post about the event, per se (perhaps in retrospect going straight on to another event the week after was a bad idea). Others have posted about this, and there’s always the #foss4guk hashtag on twitter if you’re interested. However, along with chairing the event, I foolishly decided to run a workshop around a topic that had been taking shape in my mind for a while- namely how to get new users of open source software to a point where they are comfortable contributing to the packages they use, as well as simply using them.

I began this workshop looking at my own perspective. I’m no more than an arm-chair coder at best (Python and SQL are fine but the rest requires some serious googling and sucking up to colleagues). I can, however, find and report bugs, and contribute to documentation, and that’s my way of giving back. This may not require an in-depth knowledge of Java or C++ and all the additional skills they require, but there’s still a barrier to entry – namely navigating online repositories such as GitHub, and producing useful bug reports. Helping with documentation involves learning about ReadTheDocs and ReStructured Text, and proper bug reporting, where you try to isolate the cause of the problem, requires virtualisation of some kind. Suddenly you’re looking at a fairly large toolkit, before you even start on bug fixes and other code contributions!

Then, a few months ago some colleagues went to PyCon UK and came back with stories of the wondrous Don’t be afraid to commit workshops, so I decided to come up with something vaguely similar, but focused more on GitHub and bug reporting, and less on Python. I also included an adapted version of the wonderful How to Report a Bug site, but again focusing on open source geospatial projects. You can find my workshop attempt on GitHub at http://archaeogeek.github.io/foss4gukdontbeafraid/.

Some thoughts for those considering running something similar…

In retrospect it was incredibly ambitious to try and get through all the above in 2 hours- it was only possible by setting some fairly strict workshop prerequisites, which most people actually took note of. Given that this was a “Bring your own device” conference, I ran the Git sections using the command line tools rather than any of the graphical user interfaces. Although there was some resistance to this, it was the only sane approach because I had people using Windows, Mac, Linux and an Android tablet in the class! The bit that actually caused the most trouble was setting the default text editor for Git to use. If I was running the class again I would probably omit this section and use the short-hand method of adding commit messages at the command line instead.

It’s also really hard to manage a shared repository for people to use for trying out pull requests – I had a simple text file (generated using the superb hipster ipsum site), but that caused all sorts of conflicts when multiple people issued pull requests all at the same time. There are alternative approaches to this (countless online, interactive git tutorials) but they weren’t the approach I wanted to use in this workshop. If I’d had more time to prepare I would have come up with a more complex file, perhaps with some specific errors in it, that people could fix without conflicting with each other.

Finally, we barely had time to look at ReStructured Text, or virtualisation, and it probably wasn’t worth even trying to cover them in the same workshop!

Having said all that, I know of at least one workshop attendee who has used his new GitHub skills to good effect, and was recently seen contributing to some documentation, so that’s a start. It would be good if there were more workshops of this kind at open source conferences, not necessarily based on mine, but feel free to borrow/improve it if you like!

July 27, 2016 02:41 PM

GeoTools Team

GeoTools Header Policy Updated

The GeoTools Developers Guide section on source code conventions has been updated with a new policy on file headers (exciting, I know).

GeoTools will now focus on filling in headers with the current year on initial file creation ... and that is it.
/*
* GeoTools - The Open Source Java GIS Toolkit
* http://geotools.org
*
* (C) 2016, Open Source Geospatial Foundation (OSGeo)
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation;
* version 2.1 of the License.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*/
Previously, we asked contributors to update the headers each time they modified a file, resulting in a date range at the top of each file. While not technically difficult, this was a constant chore for committers and caused friction and delays reviewing pull requests. We trust that the new policy will ease the pull request submission and review process.

We have approached the OSGeo Board and gotten approval to bounce this change off legal counsel. It is hoped that other OSGeo projects can benefit from being more relaxed.

Thanks to Justin for writing up the change proposal and steering this conversation on the mailing list.

by Jody Garnett (noreply@blogger.com) at July 27, 2016 02:53 AM

July 26, 2016

GIScussions

Child Poverty in London

The advantages of wealth and the burden of poverty.

This is another post about making maps from OpenData. I am trying to learn more geeky stuff rather than my usual blathering about the map and data business, things open and stuff, so I am playing with PostGIS, QGIS, taking some tentative steps into CSS and JavaScript and generally having lots of fun. But hey, you can burn a lot of time wrangling data and making maps.

A week ago I saw this tweet

I couldn’t get any sense of what change had taken place or how the 2 maps compared, so I replied

Then I thought to myself, why don’t I try to make a more useful map? What follows is a summary of what I did, what worked, what didn’t. Hopefully it will be useful to someone and possibly a better map maker may even come up with a better way of doing this.

Data

I couldn’t find the CASE UMBR data set that Barry had used. That was probably a good thing as Lower Super Output Areas (even just for London) is quite a large data set.

But thanks to The London DataStore it wasn’t hard to find some data on poverty in London. I downloaded:

  1. Children in Poverty, Borough and Ward (xls)
  2. Statistical GIS Boundary Files for London (shp)

There are 2 sets of boundary files because at some stage the boundaries changed (this was a source of confusion, data wrangling and I’m not sure if I have got it completely right!)

Data munging

The starting point was to have a look at the data

  • There is a tab for each year (plus one for metadata)
  • The ward classifications changed in 2008 and then there seem to have been some further changes in boundaries after 2011. I decided that I would focus on the changes in Child Poverty for under 16 year olds between 2008 and 2013.
  • There is some regional and borough data at the top of each tab which I deleted (and saved as a separate spreadsheet, just in case)
  • I deleted the columns that I didn’t need
  • I cleaned up the header row and created shorter field names

Sounds easy? Yes, but QGIS doesn’t handle spreadsheet imports as elegantly as one might hope 🙁 The problem was that the %age of children in poverty in each ward came through as a string, not a number, which meant that I couldn’t do any calculations on it. I tried everything I could think of – xls, xlsx, changing formats – nothing worked. I installed the Spreadsheet Layers plugin for QGIS, which made things better; the results were still rather flaky but they sort of worked.

Finally I created an extra tab to calculate the difference between child poverty rates per ward from 2008 to 2013. Not quite as simple as I had expected because the wards changed a bit over the period, but with a bit of wrangling I managed to come up with something that sort of worked. I wanted to understand where poverty rates had reduced (or increased), so I decided to show the change in rate as a percentage (percentage changes in percentages may not be the best statistical tool, but it was the best I could come up with and it magnifies the changes – for example, a ward moving from 40% to 30% is a drop of 10 percentage points but a change of -25%. Anyone got a better suggestion?). This time round I saved the data as a csv (only one tab from the spreadsheet) which made it easier to import into QGIS.

Mapping the data

Once the tabular data had been manipulated and formatted it was reasonably easy to open in QGIS and join to the spatial boundaries. I created three layers and saved them as shapefiles:

  1. Child Poverty 2008
  2. Child Poverty 2013
  3. Child Poverty Change

Thematic maps are really simple in QGIS

  • Select properties for a layer and go to the Style tab
  • Choose ‘Categorised’
  • Select the column to thematically map (% of children under 16 in poverty for 1 & 2 above and change in %age for 3 above)
  • Choosing a percentage to map means that the choropleth is normalised, which will keep Ken Field happy (I am sure he will find plenty else that needs improving)
  • I chose ‘pretty breaks’ for the classes and then played around with the number of classes and colour ramps for each data set until it felt about right
  • I then manually over-rode the class breaks to get nice simple ranges for the classification for each layer (e.g. 0% to 10%, 10% to 20% etc)
  • Finally I made some manual tweaks to the colour ramp so that the 2008 and 2013 datasets had identical ranges and colours

Results

To share the results with you I had to have a first try at using the QGIS Print Composer. I definitely need to spend more time learning how to use this and somehow I managed to screw things up and save my project after removing all of the thematic classes (fortunately it only took a few minutes to repeat the steps above).

Here are the maps

Child Pov 2008

Child Pov 2013

You can see there are some white gaps in the 2013 image which are due to the boundary changes (I think I know how to fix that but that is for another day).

So these two maps are not massively dissimilar to the LSOA maps that Barry Quirk produced. The pattern of poverty is somewhat concentrated in the inner London wards with a sort of ‘cross’ stretching north, south, east and west. Broadly it appears that child poverty rates have reduced over the 6 year period (the pinks and reds are a bit lighter, signifying lower poverty rates) but there is a bit more that we can find out. This final map shows the changes in child poverty rates over the period.

Child Pov Changes 2008-13

The darker blue wards show the largest reductions in poverty rates and the dark red show wards where child poverty rates have increased the most. The good news is that across most of London child poverty rates have reduced, which is a bit surprising to me in the light of the austerity measures of the last 3 years of the period. It looks as if the wards that had the highest poverty rates in 2008 have experienced the greatest reductions. If I was in government I would want to understand what factors might have caused the increases (some of which are large) in a few wards.

Putting the results on the web

In case you are interested in the data and want to explore it yourself, I used the wonderful QGIS2Web plugin to publish the data as a web project.

I had to use the OpenLayers output because Leaflet doesn’t support the thematic shading of the layers. OL isn’t quite as elegant as Leaflet in my opinion, but the map works reasonably well considering the volume of data. You can click on the wards and see the attribute data in the popup (I dropped the hover option that I started with because it doesn’t seem to work on mobile). The layer control allows you to switch on the 2008 or 2013 layers (best to switch off the change layer and only use one layer at a time).

A little discovery: my ward in Haringey has one of the highest increases in child poverty rates within London! Not what I would have expected – a flaw in the data, my methodology, or …? Maybe my local councillors will be interested.

Update 27th July, 2016 – Leaflet

Thanks to the brilliant Tom Chadwin I now know that the Leaflet option was throwing a wobbly because one of my field names started with a ‘%’ followed by a space (dumb on my part). I changed the field name and now you can see why I prefer the Leaflet option, it DOES display the thematics and builds a thematic legend as well.

Update 29th July 2016 – Attribution

A humble apology, I neglected to correctly attribute the sources of the data in these maps (my thanks to Nick Duggan for reminding me)

  • Contains National Statistics data © Crown copyright and database right 2012
  • Contains Ordnance Survey data © Crown copyright and database right 2012

by steven at July 26, 2016 09:26 PM

GeoServer Team

Online GeoServer Bug Stomp – July 2016 Results

Dear Readers,

A few words to report on the results of the Online GeoServer Bug Stomp that took place on the 22nd of July 2016.

The goal, as indicated, was to look at GeoServer and GeoServer JIRA and clean old, useless reports as well as to fix as many bugs as possible within the day of the sprint. Well, the results are not bad, as the image below shows.

[Figure: bug stomp results chart]

Numbers are as follows:

  • Improvements closed: 103 (9 fixed – the remainder failed to attract budget/interest after quite some time)
  • Bugs closed: 35 (25 fixed – followed by 6 won’t fix, 3 cannot reproduce, 1 not a bug)
  • New features closed: 14 (2 fixed – with 12 not a bug)
  • Tasks closed: 9 (2 fixed – with 7 not a bug)
  • Wishes closed: 2 (not a bug)
  • Subtasks closed: 1
  • TOTAL: 164

You can check the live report here:

  • Thanks to everybody who participated (a list of the participating people can be found in this spreadsheet).
  • As noted above many new features/improvements/wishes were quite old and had failed to attract budget/volunteers
  • The not-a-bug category is used for ideas or conversations which are best taken to the developers or users list for discussion
  • Not shown is the review of incoming issues to see which issues are ready to be worked on, or held back for further clarification before they can be reproduced.

If you did not participate this month, don’t worry: we are going to have this event again on August 27th-28th as part of the FOSS4G post-sprint. Remember, we want to make this event a periodic gathering, so keep following this blog for news.

Happy GeoServer to everybody!

by Simone Giannecchini at July 26, 2016 07:42 PM

gvSIG Team

gvSIG presence at events during the next quarter (August-October 2016)

The next quarter will bring several events either focused on gvSIG or where gvSIG will be present. For all of you who follow us and are interested, here is a list of these activities:

August

September

  • 3as Jornadas gvSIG México
    • Dates: 7th to 9th.
    • Place: Mexico City (Mexico).
    • Activities: The program will be published shortly, but we can already tell you that there will be a good number of workshops of all kinds – for users and developers – and dozens of talks.
    • More information: http://www.gvsig.com/es/eventos/jornadas-mexico/2016
  • 5as Jornadas gvSIG Brasil
  • Geostat 2016
    • Dates: 18th to 24th
    • Place: Albacete (Spain).
    • Activities: Workshop/course on gvSIG applied to statistics.
    • More information: http://geostat-course.org/2016
  • Inspire Conference 2016
  • JIIDE 2016
    • Dates: 27th to 30th.
    • Place: Barcelona (Spain).
    • Activities: 6 talks, mainly related to Spatial Data Infrastructure projects where gvSIG Online has been applied. The new features of gvSIG Desktop 2.3 will also be presented.
    • More information: http://www.jiide.org/

October

  • 8as Jornadas de Latinoamérica y el Caribe de gvSIG
  • III Congreso Internacional de Ingeniería Topográfica, Agrimensura, Ambiental, Catastro, Geodesia y Geomática.
    • Dates: 26th to 29th.
    • Place: Puno (Peru).
    • Activities: to be defined (probably talks and a gvSIG Desktop workshop)
    • More information: website not yet available.
  • TOPCART 2016
    • Dates: 26th to 30th.
    • Place: Toledo (Spain)
    • Activities: 6 talks covering projects and technologies such as gvSIG Online, gvSIG Roads and gvSIG Desktop.
    • More information: http://topcart2016.com/

Filed under: events, spanish

by Alvaro at July 26, 2016 12:44 PM

Fernando Quadro

LiDAR data analysis and visualization – Part 6

Now that we have elevation and height data on our buildings, we can build an even more visually appealing indicator.

1. On the “Data” tab, click the “Reload feature type” button so that GeoServer becomes aware of the new z and z_diff attributes.

[Figure: gs_reloadft]

2. Inside the data directory, locate the workspaces/opengeo/lidar/buildings folder

3. Create a text file named height.ftl in the directory mentioned above, with this content: ${height.value}

4. That's it! Now open the KML layer in Google Earth and zoom in towards the buildings to see the result.

5. http://localhost:8080/geoserver/wms/kml?layers=opengeo:buildings&mode=refresh&kmscore=50&format_options=lookatbbox:bbox=-122.8808,42.3311,-122.8806,42.3313

[Figure: ge_buildings]

If you explore the 3D visualization of the buildings for a while, you may come across some odd neighborhoods, like this one.

[Figure: ge_buildings_trees1]

Notice in the image above that three buildings look a bit out of place in this residential area – and what about that tower in the backyard? Something is wrong.

What we are seeing is the effect of trees on the LIDAR data. The laser pulses reflect off the trees as well as the houses, so the average elevation of the points inside the building polygons is artificially inflated.

How can we fix this?

LIDAR data reflected from trees generates more than one return reflection per outgoing pulse, and the farther away the reflecting surface, the longer the light takes to return. Instead of computing our building heights as the average of all LIDAR points, we want to average only the last returns of each pulse – the deepest returns.

A simple solution is to weight the returns so that, in areas with mixed return types, the deeper returns carry more weight. We can do this by replacing the AVG function in our LIDAR calculation with a weighted average (a toy numeric check follows the SQL below):

-- numerator
Sum(CASE WHEN PC_Get(pts,'ReturnNumber') = 1
THEN PC_Get(pts, 'z')
WHEN PC_Get(pts,'ReturnNumber') = 2
THEN 10*PC_Get(pts, 'z')
ELSE 100*PC_Get(pts, 'z') END) /
-- denominator
Sum(CASE WHEN PC_Get(pts,'ReturnNumber') = 1
THEN 1
WHEN PC_Get(pts,'ReturnNumber') = 2
THEN 10
ELSE 100 END) AS z
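
To see what this 1/10/100 weighting does, here is a toy check in Python (the z values are made up for one pulse with three returns; it just mirrors the CASE logic above):

# Toy illustration of the weighted average; deeper returns dominate
returns = [(1, 450.0), (2, 440.0), (3, 431.0)]  # (ReturnNumber, z)

def weight(return_number):
    if return_number == 1:
        return 1
    if return_number == 2:
        return 10
    return 100

num = sum(weight(r) * z for r, z in returns)
den = sum(weight(r) for r, z in returns)
print(num / den)                                # ~431.98, close to the deepest return
print(sum(z for r, z in returns) / len(returns))  # plain AVG: ~440.33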

We are now ready to re-run our elevation calculation and our height calculation to see the effect on the final KML output.

-- Update into the buildings Z column
UPDATE buildings SET z = elevs.z
FROM (
-- For every building, all intersecting patches
WITH patches AS (
SELECT
buildings.gid AS buildings_gid,
medford.id AS medford_id,
medford.pa AS pa
FROM medford
JOIN buildings
ON PC_Intersects(pa, geom)
),
-- Explode those patches into points, remembering
-- which building they were associated with
pa_pts AS (
SELECT buildings_gid, PC_Explode(pa) AS pts FROM patches
)
-- Use the building associations to efficiently
-- spatially test the points against the building footprints
-- Summarize per building
SELECT
buildings_gid,
-- Use a weighted average that heavily favors
-- multi-return pulses
Sum(CASE WHEN PC_Get(pts,'ReturnNumber') = 1
THEN PC_Get(pts, 'z')
WHEN PC_Get(pts,'ReturnNumber') = 2
THEN 10*PC_Get(pts, 'z')
ELSE 100*PC_Get(pts, 'z') END) /
Sum(CASE WHEN PC_Get(pts,'ReturnNumber') = 1
THEN 1
WHEN PC_Get(pts,'ReturnNumber') = 2
THEN 10
ELSE 100 END) AS z
FROM pa_pts
JOIN buildings
ON buildings.gid = buildings_gid
WHERE ST_Intersects(buildings.geom, pts::geometry)
-- Only use the last returns in this calculation
AND PC_Get(pts,'ReturnNumber') = PC_Get(pts,'NumberOfReturns')
GROUP BY buildings_gid
) AS elevs
-- Join calculated elevations to original buildings table
WHERE elevs.buildings_gid = gid;

-- Update the building heights by subtracting elevation of
-- the nearest street from the elevation of the building
UPDATE buildings SET height = heights.height
FROM (
WITH candidates AS (
SELECT
b.gid AS building_gid,
s.gid AS street_gid,
s.z AS streets_z,
b.z as buildings_z
FROM buildings b, streets s
WHERE ST_DWithin(b.geom, s.geom, 0.001)
ORDER BY
building_gid,
ST_Distance(b.geom, s.geom)
)
SELECT DISTINCT ON (building_gid)
building_gid, street_gid,
buildings_z - streets_z AS height
FROM candidates
) AS heights
WHERE heights.building_gid = buildings.gid;

We now have a new 3D visualization of the buildings. You can verify that the tower is now half its previous size, and the buildings have been lowered by many floors.

[Figure: ge_buildings_trees2]

This tutorial is a free translation and adaptation of the article “Analyzing and Visualizing LIDAR” published on the Boundless website.

Source: Boundless

by Fernando Quadro at July 26, 2016 12:37 PM

July 25, 2016

GeoSolutions

Developer’s Corner: dynamic raster palettes and other spatiotemporal extensions for GeoServer

Dear Reader,

today we would like to introduce a new community module that we have been working on: ncWMS-like extensions to perform easy dynamic mapping over raster data.

The first change introduced by the extension is the ability to easily add dynamic color palettes, applied to single-band raster data based on its statistics, with the ability to also control the range and palette application from request parameters. For example, let's say we want to create a simple red-to-blue progression: with the ncWMS extension installed we can simply create a new style, choose the "dynamic palette" format, and enumerate the two colors:

[Figure: redblue-editor]

Then, let's make sure the layer band configuration has a range consistent with the data, in this case, a very narrow one:

[Figure: Selezione_006]

Finally, we can make a simple request that will use the dynamic palette against the configured range:

[Figure: countries_flexpart1]

You can also try it dynamically at this link: http://cloudsdi.geo-solutions.it/geoserver/wms?STYLES=,redblue&LAYERS=countries,flexpart&FORMAT=image%2Fpng8&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&SRS=EPSG%3A4326&BBOX=-137,15,-110,42&WIDTH=600&HEIGHT=600

The dynamics of the data appear to favor the low values; it's possible to enhance the high portion by using a logarithmic palette instead, by adding "&logscale=true" to the request, this way: http://cloudsdi.geo-solutions.it/geoserver/wms?STYLES=,redblue&LAYERS=countries,flexpart&FORMAT=image%2Fpng8&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&SRS=EPSG%3A4326&BBOX=-137,15,-110,42&WIDTH=600&HEIGHT=600&logscale=true

[Figure: countries_flexpart2]

It's also possible to concentrate on a specific range using "&colorscalerange=0.00001,0.0002", as follows: http://cloudsdi.geo-solutions.it/geoserver/wms?STYLES=,redblue&LAYERS=countries,flexpart&FORMAT=image%2Fpng8&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&SRS=EPSG%3A4326&BBOX=-137,20,-110,42&WIDTH=600&HEIGHT=500&colorscalerange=0.00001,0.0002

[Figure: countries_flexpart3]

Finally, one can generate an animation by providing a time or elevation range, asking for the GIF format, and adding "&animation=true" (here, with a different, more complex palette): http://cloudsdi.geo-solutions.it/geoserver/wms?STYLES=,x-Sst&LAYERS=countries,flexpart&FORMAT=image%2Fgif&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&SRS=EPSG%3A4326&BBOX=-137,20,-110,42&WIDTH=600&HEIGHT=500&time=2016-02-01/2016-03-01&animation=true&format_options=gif_loop_continuosly:true;gif_frames_delay:200

[Figure: countries_flexpart]

The official documentation provides more insight into these new features, while the module can already be downloaded as part of the GeoServer 2.10.x nightly builds (a development series):

If you want to know more about how we can help your organization, don't hesitate and get in touch with us! Make sure to check our Enterprise Services Offer which has been used to complete the work described in this post.

The GeoSolutions Team,

by Andrea Aime at July 25, 2016 02:23 PM

Fernando Quadro

LiDAR data analysis and visualization – Part 5

For today's example, let's zoom in and find a building to analyze (if you zoom in far enough, the buildings become clickable).

[Figure: building_81719]

We can see that building #81719 (the database primary key identifier) has a limited number of attributes, including an elevation of 1,446.43! Our LIDAR data varies around 450 meters, so the elevation in the buildings layer is probably measured in feet.

How can we calculate the elevation of building #81719 using the LIDAR table? Here is how the logic works, visually:

1. Start with the building.

[Figure: pc_analysis_1]

2. Locate all the patches that intersect that building.

[Figure: pc_analysis_2]

3. Explode those patches into individual points.

[Figure: pc_analysis_3]

4. Filter those points using the building outline.

[Figure: pc_analysis_4]

5. Finally, averaging the elevations of those points gives a value for the building.

Here is the procedure above in SQL:

-- We run a set of subqueries sequentially using
-- the "with" keyword
WITH
-- Get the one building we are interested in
building AS (
  SELECT geom FROM buildings
  WHERE buildings.gid = 81719
),
-- All the patches that intersect that building
patches AS (
  SELECT pa FROM medford
  JOIN building ON PC_Intersects(pa, geom)
),
-- All the points in that patch
pa_pts AS (
  SELECT PC_Explode(pa) AS pts FROM patches
),
-- All the points in our one building
building_pts AS (
  SELECT pts FROM pa_pts JOIN building
  ON ST_Intersects(geom, pts::geometry)
)
-- Summarize those points by elevation
SELECT
  Avg(PC_Get(pts, 'z')) AS lidar_meters
FROM building_pts;

The SQL result is 441.075 meters, which is 1,447.1 feet – almost exactly the same value as in the buildings file, 1,446.43!

OK! But can we add the elevation from our LIDAR data to all of our buildings? Yes, but it will take some processing time. First we add a column to hold the value, then run an update on our database table.

-- Add column for our calculate Z value
ALTER TABLE buildings ADD COLUMN z real;
-- Update into the column
UPDATE buildings SET z = elevs.z
FROM (
  -- For every building, all intersecting patches
  WITH patches AS (
    SELECT
      buildings.gid AS buildings_gid,
      medford.id AS medford_id,
      medford.pa AS pa
    FROM medford
    JOIN buildings
    ON PC_Intersects(pa, geom)
  ),
  -- Explode those patches into points, remembering
  -- which building they were associated with
  pa_pts AS (
    SELECT buildings_gid, PC_Explode(pa) AS pts FROM patches
  )
  -- Use the building associations to efficiently
  -- spatially test the points against the building footprints
  -- Summarize per building
  SELECT
    buildings_gid,
    Avg(PC_Get(pts, 'z')) AS z
  FROM pa_pts
  JOIN buildings
  ON buildings.gid = buildings_gid
  WHERE ST_Intersects(buildings.geom, pts::geometry)
  GROUP BY buildings_gid
) AS elevs
-- Join calculated elevations to original buildings table
WHERE elevs.buildings_gid = gid;

Our table is updated! Compare the original and calculated elevations (we have to convert the z column from meters to feet to compare it with the elevation column):

SELECT z AS z_m, z*3.28084 AS z_ft, elevation AS elevation_ft
FROM buildings;

The values are quite close:

   z_m   |       z_ft        | elevation_ft
---------+-------------------+---------------
 438.128 | 1437.42663560303  | 1434.43000000
 433.556 | 1422.4291678418   | 1413.63200000
 435.489 | 1428.76987573853  | 1406.25400000
 439.244 | 1441.09014682129  | 1422.58200000
 433.738 | 1423.02460105347  | 1416.93600000
 429.648 | 1409.60687857788  | 1403.92400000
 437.264 | 1434.59264585083  | 1425.84000000
 430.607 | 1412.75115040894  | 1404.03675300

Let's also store the difference in elevation (in meters) in the table as the building height. Here is how:

-- Add column for our calculated height
ALTER TABLE buildings ADD COLUMN height real;
-- Update the building heights by subtracting elevation of
-- the nearest street from the elevation of the building
UPDATE buildings SET height = heights.height
FROM (
  WITH candidates AS (
    SELECT
      b.gid AS building_gid,
      s.gid AS street_gid,
      s.z AS streets_z,
      b.z as buildings_z
    FROM buildings b, streets s
    WHERE ST_DWithin(b.geom, s.geom, 0.001)
    ORDER BY
      building_gid,
      ST_Distance(b.geom, s.geom)
  )
  SELECT DISTINCT ON (building_gid)
    building_gid, street_gid,
    buildings_z - streets_z AS height
  FROM candidates
) AS heights
WHERE heights.building_gid = buildings.gid;

In the next post we will wrap up this LiDAR series – don't miss it!

by Fernando Quadro at July 25, 2016 10:30 AM

July 23, 2016

Free and Open Source GIS Ramblings

OSM turn restriction QA with QGIS

Wrong navigation instructions can be annoying and sometimes even dangerous, but they happen. No dataset is free of errors. That’s why it’s important to assess the quality of datasets. One specific use case I previously presented at FOSS4G 2013 is the quality assessment of turn restrictions in OSM, which influence vehicle routing results.

The main idea is to compare OSM to another data source. For this example, I used turn restriction data from the City of Toronto. Of the more than 70,000 features in this dataset, I extracted a sample of about 500 turn restrictions around Ryerson University, which I had the pleasure of visiting in 2014.

As you can see from the following screenshot, OSM and the city’s dataset agree on 420 of 504 restrictions (83%), while 36 cases (7%) are in clear disagreement. The remaining cases require further visual inspection.

[Figure: toronto_turns_overview]

The following two examples show one case where the turn restriction is modelled in both datasets (on the left) and one case where OSM does not agree with the city data (on the right).
In the first case, the turn restriction (short green arrow) tells us that cars are not allowed to turn right at this location. An OSM-based router (here I used OpenRouteService.org) therefore finds a route (blue dashed arrow) which avoids the forbidden turn. In the second case, the router does not avoid the forbidden turn. We have to conclude that one of the two datasets is wrong.

[Figures: turn restriction in both datasets (left) – missing restriction in OSM? (right)]

If you want to learn more about the methodology, please check Graser, A., Straub, M., & Dragaschnig, M. (2014). Towards an open source analysis toolbox for street network comparison: indicators, tools and results of a comparison of OSM and the official Austrian reference graph. Transactions in GIS, 18(4), 510-526. doi:10.1111/tgis.12061.

Interestingly, the disagreement in the second example has been fixed by a recent edit (only 14 hours ago). We can see this in the OSM way history, which reveals that the line direction has been switched, but this change hasn’t made it into the routing databases yet:

[Figures: now (left) – before (right)]

This leads to the funny situation that the oneway is correctly displayed on the map but seemingly ignored by the routers:

[Figure: toronto_okeefe_osrm]

To evaluate the results of the automatic analysis, I wrote a QGIS script, which allows me to step through the results and visually compare turn restrictions and routing results. It provides a function called next() which updates a project variable called myvar. This project variable controls which features (i.e. turn restriction and associated route) are rendered. Finally, the script zooms to the route feature:

def next():
    # fetch the next result and read its turn restriction ID
    f = features.next()
    id = f['TURN_ID']
    print "Going to %s" % (id)
    # update the project variable that controls which features are rendered
    QgsExpressionContextUtils.setProjectVariable('myvar',id)
    # zoom to the associated route, but no closer than 1:500
    iface.mapCanvas().zoomToFeatureExtent(f.geometry().boundingBox())
    if iface.mapCanvas().scale() < 500:
        iface.mapCanvas().zoomScale(500)

layer = iface.activeLayer()
features = layer.getFeatures()
next()

You can see it in action here:

I’d love to see this as an interactive web map where users can have a look at all results, compare with other routing services – or ideally the real world – and finally fix OSM where necessary.

This work has been in the making for a while. I’d like to thank the team of OpenRouteService.org, whose routing service I used (and who recently added support for North America), as well as my colleagues at Ryerson University in Toronto, who pointed me towards Toronto’s open data.


by underdark at July 23, 2016 04:10 PM

July 22, 2016

MapProxy

New MapProxy 1.9.0 release

We are pleased to announce the release of MapProxy 1.9.0. It contains a lot of major and minor improvements.

The latest release is available at: https://pypi.python.org/pypi/MapProxy

To upgrade within your virtualenv:

$ pip install --upgrade --no-deps MapProxy

Updated documentation is available at: https://mapproxy.org/docs/1.9.0/

Support for ArcGIS REST services

You can now directly integrate ArcGIS services without the need to enable WMS support.

See: https://mapproxy.org/docs/1.9.0/sources.html#arcgis-label

Band merging

The new band merge feature allows you to create false-color or grayscale images on the fly, by selecting different sources for each (color) band.

See https://mapproxy.org/docs/1.9.0/configuration.html#band-merging and https://mapproxy.org/docs/1.9.0/configuration_examples.html#create-grayscale-images

Other fixes and changes

There are many more changes and improvements. For a complete list, see: http://github.com/mapproxy/mapproxy/blob/1.9.0/CHANGES.txt

July 22, 2016 07:00 PM

OSGeo News

Introduction to gvSIG free online workshops

by jsanz at July 22, 2016 04:15 PM

OSGeo News

ACM SIGSPATIAL 2016 Workshops. San Francisco, CA, Monday, October 31st, 2016

by jsanz at July 22, 2016 04:11 PM

QGIS Blog

QGIS 2.16 ‘Nødebo’ is released!

We’re happy to announce the release of QGIS 2.16.0 ‘Nødebo’. The University of Copenhagen’s Department of Geoscience and Natural Resource Management Forest and Landscape College in Nødebo were hosts to the First International QGIS conference and developer meeting in May 2015. For all of us who are not fluent in Danish, Lene Fischer has prepared the following video teaching us how to pronounce the release name:

QGIS 2.16 is not designated as a Long Term Release (LTR). Users wishing to have a version of QGIS which does not change and receives bug fixes for at least 1 year are invited to use the current LTR release 2.14.
If you are upgrading from QGIS 2.14 you will find a great many new features in this release.
Whenever new features are added to software they introduce the possibility of new bugs – if you encounter any problems with this release, please file a ticket on the QGIS Bug Tracker.

We would like to thank the developers, documenters, testers and all the many folks out there who volunteer their time and effort (or fund people to do so). From the QGIS community we hope you enjoy this release! If you wish to donate time, money or otherwise get involved in making QGIS more awesome, please wander along to qgis.org and lend a hand!

QGIS is supported by donors and sponsors. A current list of donors who have made financial contributions large and small to the project can be seen on our donors list. If you would like to become an official project sponsor, please visit our sponsorship page for details. Sponsoring QGIS helps us to fund our six-monthly developer meetings, maintain project infrastructure and fund bug fixing efforts.

QGIS is Free software and you are under no obligation to pay anything to use it – in fact we want to encourage people far and wide to use it regardless of what your financial or social status is – we believe empowering people with spatial decision making tools will result in a better society for all of humanity. If you are able to support QGIS, you can donate here.


by underdark at July 22, 2016 02:41 PM

Fernando Quadro

Sources of free geographic data

Anyone who works with geotechnology knows how hard it can be to find geographic data. Although the INDE is in “full operation”, we are still crawling in this regard. We have few portals providing geographic information compared to the size of our country.

LabGIS, after extensive research, has compiled a broad base of 587 links to free geographic data for browsing and download. This base is now open to the public and can be accessed by anyone.

If you know of a data source that is not on this list, you can report it through this link, helping to grow this public base available to the whole community.

Source: Sistema LabGIS

by Fernando Quadro at July 22, 2016 10:30 AM

July 21, 2016

Jackie Ng

React-ing to the need for a modern MapGuide viewer (Part 2): The baseline

I've decided that cataloging my adventure in developing this viewer is enough to warrant its own blog mini-series. So here's part 2 of my adventure. I have no idea how many parts this will take :)

So to continue where we left off, we had a basic map viewer component built on the main pillars of React and OpenLayers 3, all written in glorious TypeScript. I've mentioned on a previous post that writing web frontend code in a language with static typing and a compilation phase (gasp!), with the clean and simple-to-understand component model offered by React, is just heavenly! If I could go back in time, I'd have written the Fusion framework in TypeScript, but sadly it didn't exist around 2008 (when Fusion made its debut in MapGuide) and my involvement in the MapGuide project was nowhere near the level it is now.

So before I continue, I should also probably mention what I aim to achieve with this viewer:
  • It will be built on modern, best-of-breed web front-end technologies (React, OpenLayers 3, etc)
  • I will not be shackled with the burden of supporting legacy browsers. This viewer will demand a modern web browser. If one has to use Internet Explorer, it must be IE11
  • It will require MapGuide Open Source 3.0 as the minimal version as this viewer will leverage capabilities present in this particular version onwards.
  • It will at least have some level of feature parity with the AJAX/Fusion viewer offerings, that is to say:
    • It will have a Legend component for toggling layer visibility and comprehending layer symbology/thematics.
    • It will have a Task Pane component that functions as a generic container for content and/or contextual UI
    • It will have a Map Selection component for easy viewing of selected features
    • It will include a stock set of commands, with extension points for developers to add their own.
    • It will have opt-in integration with external base layers (OpenStreetMap, etc), provided your Map Definition meets the requirements (ie. It is in EPSG:3857/WGS84.PseudoMercator).
    • I will not at this time consider any kind of integration with Google Maps as their APIs are not only a constantly moving target, but a TOS minefield I just do not want to wander in at this point in time.
    • I do hope to allow for some level of free-form component layout (ala. templates) so one is not tied to a layout I choose by default.
So with that being said, for this post I want to focus on the last major bullet point: Getting to parity with our current viewer offerings.

So let's introduce our new Legend component.


Implementing this component wasn't difficult, I mainly copypasta'd most of the presentation logic from the existing feature-complete legend used in my various mapguide-rest sample applications.

The difference here is that because this is now a React component, there is zero jQuery or DOM manipulation. Instead, we just render Group/Layer child components for each layer and group we encounter in our runtime map. It is all pure UI as a function of props and state, letting React's virtual DOM figure out how to update the actual DOM in the most efficient manner. What this means is that our component model cleanly matches how our layers and groups are conceptually structured.


As for the other components, I'm still working on them (currently focused on the Task Pane) so we'll leave that for another post. But visually speaking, when you put these 3 items together:


It's already starting to look like our existing AJAX viewer!

And before I close out this post, here are the results of the current JS bundle weigh-in (using our current Fusion production bundle for comparison).



We've got plenty of breathing room.

by Jackie Ng (noreply@blogger.com) at July 21, 2016 02:17 PM

Fernando Quadro

GeoServer internationalization problem

Starting with version 2.8.3, the Portuguese (pt-BR) language was added to GeoServer. However, it appears to have been created from the Spanish version and shipped before the translation was complete – that is, there are terms in Portuguese, Spanish and even some in English.

If you are running into this problem, know that you can remove these files from your GeoServer installation so that it goes back to displaying everything in English.

To do so, just follow the step-by-step guide below:

1. Rename the geoserver.war file to geoserver.zip
2. Enter the WEB-INF\lib folder
3. Find the gs-web-core-X.X.X.jar file and rename it to .zip
4. Look for the properties file (GeoServerApplication_pt_BR.properties).
5. Delete that file
6. Rename gs-web-core-X.X.X.zip back to gs-web-core-X.X.X.jar
6.1. Repeat steps 3, 4, 5 and 6 for the following files:

  • gs-web-demo-X.X.X.jar
  • gs-web-sec-core-X.X.X.jar
  • gs-web-gwc-X.X.X.jar
  • gs-web-wcs-X.X.X.jar
  • gs-web-wfs-X.X.X.jar
  • gs-web-wms-X.X.X.jar

7. Rename geoserver.zip back to geoserver.war.
8. That's it – deploy your application again and check that everything is back in English.

If you are using the binary, Windows executable or DMG (Mac) version, skip steps 1 and 7, and remember that wherever X.X.X appears you should substitute the GeoServer version number you are currently using. A scripted alternative to the manual jar editing is sketched below.
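
A minimal sketch that automates steps 3 to 6 with Python's zipfile module (assumptions: it runs inside the exploded WEB-INF/lib folder, and 2.8.3 is only an example version – substitute your own jar names):

import os
import zipfile

def strip_entry(jar_path, entry_suffix):
    tmp_path = jar_path + ".tmp"
    # Zip entries cannot be deleted in place, so rewrite the jar
    # without the pt_BR resource bundle.
    with zipfile.ZipFile(jar_path, "r") as src, \
         zipfile.ZipFile(tmp_path, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            if not item.filename.endswith(entry_suffix):
                dst.writestr(item, src.read(item.filename))
    os.replace(tmp_path, jar_path)

jars = ["gs-web-core-2.8.3.jar", "gs-web-demo-2.8.3.jar",
        "gs-web-sec-core-2.8.3.jar", "gs-web-gwc-2.8.3.jar",
        "gs-web-wcs-2.8.3.jar", "gs-web-wfs-2.8.3.jar",
        "gs-web-wms-2.8.3.jar"]
for jar in jars:
    strip_entry(jar, "GeoServerApplication_pt_BR.properties")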

If you run into any problems, be sure to leave a comment on this post.

by Fernando Quadro at July 21, 2016 10:30 AM

July 20, 2016

Free and Open Source GIS Ramblings

One “add” button to rule them all

Reducing the number of “Add layer” buttons in the QGIS GUI is a commonly voiced wish. Multiple approaches have been discussed but no decision has been made so far. One idea is to use the existing browser functionality to replace the “Add layer” dialogs. Others are envisioning completely novel approaches.

Since the topic came up again today on Twitter, I decided to implement a quick & dirty version of a unified Add layer button. This way, I can comfortably reduce my Layer toolbar to three buttons using Settings | Customization …

[Figure: layerToolBar]

[Figure: customization]

I pretty much just kept the “Create new layer” button and the “Add delimited text layer” button because, as far as I know, there is no way to call the dialog from the browser. (Instead, CSVs are opened with OGR, which doesn’t have nearly as many nice features.)

And here it is in action:

(I recommend to undock the Browser panel to get the dialog-like behavior that you see in the video.)

To install the plugin: download it and unzip it into your QGIS plugin folder, then activate it in the plugin manager.

I would love to hear what you think about this UX experiment.


by underdark at July 20, 2016 07:46 PM

gvSIG Team

“Introduction to gvSIG” free online workshops

The gvSIG Association is pleased to announce the “Introduction to gvSIG” summer webinars, to be held at the beginning of August.

Summer has arrived in a lot of places around the world. This is a good time to learn to use gvSIG, the open source GIS.

Do you want to create a thematic map with gvSIG or a new vector layer with your parcel, a road…? You can do it now at these open and free webinars.

There will be two webinars, each lasting about 30 minutes:

In the first webinar we’ll see how to manage a gvSIG project. We’ll create new views with cartography, and we’ll apply symbology and labelling. We’ll introduce reference systems, and finally we’ll create a thematic map to be printed or exported to a PDF file.

In the second webinar, we’ll show how to create a new vector layer and digitize several elements on it. We will also see the gvSIG toolbox, where all the geoprocesses can be found, and we will apply one of them. Finally, we will see how the 3D extension works.

Registration for the “Getting started with gvSIG” webinar

Registration for the “Vector editing, 3D view and other gvSIG tools” webinar

We look forward to your participation!


Filed under: community, english, events, gvSIG Desktop, training Tagged: introduction to gvSIG, webinar

by Mario at July 20, 2016 11:20 AM

July 19, 2016

gvSIG Team

3as Jornadas gvSIG México: geomatics and free software. Call for papers

The deadline for submitting paper proposals for the 3as Jornadas gvSIG México, to be held in Mexico City from September 7th to 9th this year, has been extended. The new deadline is August 5th.

To submit a proposal, simply send an abstract, following the template provided in the Communications section of the website, to 3asjornadasgvsig@igg.unam.mx. The abstract will be evaluated by the Scientific Committee for inclusion in the conference program. There are two types of contributions: talk and poster.

Registration, which is free of charge, also remains open; you can register through the form available on the website.

A good number of workshops for users and developers will be held during the conference. Keep in mind that places are limited, so don't wait until the last minute to register.

We look forward to your participation!


Filed under: community, events, spanish, training Tagged: jornadas, México

by Mario at July 19, 2016 12:36 PM

Even Rouault

Speeding up computation of raster statistics using SSE-2/AVX-2

GDAL offers a method ComputeStatistics() that, given a raster band, returns the minimum and maximum values of pixels, the mean value and the standard deviation.
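
For reference, a minimal sketch of calling it from the GDAL Python bindings ("example.tif" is a placeholder path):

from osgeo import gdal

ds = gdal.Open("example.tif")
band = ds.GetRasterBand(1)
# approx_ok=False asks for exact statistics over all pixels
minimum, maximum, mean, std_dev = band.ComputeStatistics(False)
print(minimum, maximum, mean, std_dev)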

For those not remembering how to compute mean and standard deviations, the basic formulas for values indexed from 0 to N-1 are:
mean = sum(value(i) for i = 0 to N-1) / N
std_dev = square root of the mean of the square of the differences of values to the mean
std_dev = sqrt(sum(i = 0 to N-1, (value(i) - mean)^2) / N)
A very naive version would first compute the mean, and in a second pass compute the standard deviation.

But it can be easily proven (by expanding the (value(i) - mean)^2 term) that it is also equivalent to:
std_dev = sqrt(sum(i = 0 to N-1, value(i)^2)/N - mean^2)
std_dev = sqrt(mean_of_square_values - square_of_mean)

std_dev = sqrt(sum(i = 0 to N-1, value(i)^2)/N - (sum_of_values/N)^2)
std_dev = sqrt(N^2 *(sum(i = 0 to N-1, value(i)^2)/N - (sum_of_values/N)^2)) / N
std_dev = sqrt(N * sum_of_square_values - sum_of_values^2) / N
A less naive implementation would compute the sum of values and the sum of square values in a single pass. However the standard deviation computed like that might be subject to numeric instability: even if the result is small, sum_of_square_values and sum_of_values can be very big for a big number of pixels, and thus, if represented with floating point numbers, the difference between both terms can be wrong.
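To see the instability concretely, here is a small hypothetical C++ program (not GDAL code) where 32 bit float accumulators make the single-pass formula collapse: both terms end up around 10^8 while their true difference is 1, so all significant digits are lost.

#include <cstdio>

int main()
{
    const int N = 1000000;
    float sum = 0.0f, sum_square = 0.0f;
    for (int i = 0; i < N; ++i)
    {
        // Values alternating between 10000 and 10002:
        // true mean = 10001, true variance = 1.
        const float value = 10000.0f + 2.0f * (i & 1);
        sum += value;
        sum_square += value * value;
    }
    const float mean = sum / N;
    // Typically prints a wildly wrong (possibly even negative) variance.
    printf("variance = %f (expected 1.0)\n", sum_square / N - mean * mean);
    return 0;
}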

Welford algorithm

So in recent GDAL versions, the computation of the mean and standard deviation is done in a progressive and numerically stable way, thanks to the Welford algorithm.

The generic code is:
pixel_counter = 0
mean = 0
M2 = 0
foreach(value in all pixels):
    if value < minimum or pixel_counter == 0: minimum = value
    if value > maximum or pixel_counter == 0: maximum = value
    pixel_counter = pixel_counter + 1
    delta = value - mean
    mean = mean + delta / pixel_counter
    M2 = M2 + delta * (value - mean);

std_dev = sqrt( M2 / pixel_counter )
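As an illustration, here is a direct C++ transcription of this pseudo-code (a minimal sketch, not the actual GDAL implementation; names are illustrative):

#include <cmath>
#include <cstdint>
#include <vector>

struct Stats { double minimum, maximum, mean, std_dev; };

Stats computeStatsWelford(const std::vector<uint8_t>& pixels)
{
    double mean = 0.0, M2 = 0.0, minimum = 0.0, maximum = 0.0;
    uint64_t pixel_counter = 0;
    for (const uint8_t value : pixels)
    {
        if (pixel_counter == 0 || value < minimum) minimum = value;
        if (pixel_counter == 0 || value > maximum) maximum = value;
        ++pixel_counter;
        const double delta = value - mean;
        mean += delta / pixel_counter;
        M2 += delta * (value - mean); // uses the freshly updated mean
    }
    // Caller should check for an empty input before dividing.
    return { minimum, maximum, mean, std::sqrt(M2 / pixel_counter) };
}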

Proof of Welford algorithm

(You can skip this paragraph and still follow the rest of this article)

The magic of the Welford algorithm lies in the following recurrence relations.

For the mean, it is rather obvious :

N*mean(N) = sum(i = 0 to N-1, value(i))
N*mean(N) = sum(i = 0 to N-2, value(i)) + value(N-1)
N*mean(N) = (N-1) * mean(N-1) + value(N-1)
mean(N) = (N-1)/N * mean(N-1) + value(N-1)/N
mean(N) = mean(N-1) + (value(N-1) - mean(N-1)) / N

Hence mean = mean + delta / pixel_counter

For the standard deviation, the proof is a little bit more lengthy :

N*stddev(N)^2 = sum(i=0 to N-1, (value(i) - mean(N))^2 )

N*stddev(N)^2 = sum(i=0 to N-1, (value(i) - (mean(N-1) + (value(N-1) - mean(N-1)) / N))^2 )

N*stddev(N)^2 = sum(i=0 to N-1, ((value(i) - mean(N-1)) - ((value(N-1) - mean(N-1)) / N))^2 )

N*stddev(N)^2 = sum(i=0 to N-1, (value(i) - mean(N-1))^2 + ((value(N-1) - mean(N-1)) / N)^2
             - 2 * (value(i) - mean(N-1)) * ((value(N-1) - mean(N-1)) / N) )

N*stddev(N)^2 = sum(i=0 to N-1, (value(i) - mean(N-1))^2) + N * ((value(N-1) - mean(N-1)) / N)^2
              - 2 * sum(i=0 to N-1, value(i) - mean(N-1)) * ((value(N-1) - mean(N-1)) / N)

N*stddev(N)^2 = (N-1)*stddev(N-1)^2 + (value(N-1) - mean(N-1))^2
                    + N * ((value(N-1) - mean(N-1)) / N)^2
              - 2 * sum(i=0 to N-1, value(i) - mean(N-1)) * ((value(N-1) - mean(N-1)) / N)

Noting that sum(i=0 to N-1, value(i) - mean(N-1)) = N * (mean(N) - mean(N-1)):

N*stddev(N)^2 = (N-1)*stddev(N-1)^2 + (value(N-1) - mean(N-1))^2 * (1 + 1 / N)
              - 2 * N * (mean(N) - mean(N-1)) * ((value(N-1) - mean(N-1)) / N)

N*stddev(N)^2 = (N-1)*stddev(N-1)^2 + (value(N-1) - mean(N-1)) *
            ((1 + 1 / N) * (value(N-1) - mean(N-1)) - 2 * N * (mean(N) - mean(N-1)) / N)

N*stddev(N)^2 = (N-1)*stddev(N-1)^2 + (value(N-1) - mean(N-1)) *
            ((value(N-1) - mean(N-1)) + (value(N-1) - mean(N-1)) / N - 2 * (mean(N) - mean(N-1)))

Since mean(N) - mean(N-1) = (value(N-1) - mean(N-1)) / N, the middle term cancels one of the two subtracted terms:

N*stddev(N)^2 = (N-1)*stddev(N-1)^2 + (value(N-1) - mean(N-1)) *
            ((value(N-1) - mean(N-1)) - (mean(N) - mean(N-1)))

N*stddev(N)^2 = (N-1)*stddev(N-1)^2 + (value(N-1) - mean(N-1)) * (value(N-1) - mean(N))


Hence M2 = M2 + delta * (value - mean)

Integer based computation of standard deviation

The Welford algorithm is good, but it involves floating point operations for each pixel to compute the progressive mean and variance, whereas fundamentally we would need floating point operations only at the end if using the original formulas, and could use integer arithmetic for the rest. Another drawback of the Welford approach is that it prevents any direct parallelization (there might still be ways to reconcile partial computations, but I have not explored them), whereas if you have a set of pixels, you can conceptually divide it into as many subsets as you want, and for each subset compute its local minimum, maximum, sum of values and sum of square values. Merging subsets is then trivial: take the minimum of minimums, the maximum of maximums, the sum of sums of values and the sum of sums of square values.
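In code, per-subset partial results and their merge operation could be sketched as follows (illustrative only; the structure and names are mine, not GDAL's):

#include <algorithm>
#include <cstdint>

struct PartialStats
{
    uint8_t  minimum = 255;  // identity for min on byte data
    uint8_t  maximum = 0;    // identity for max on byte data
    uint64_t sum = 0;
    uint64_t sum_square = 0;
    uint64_t count = 0;
};

// Merging two subsets is trivial and can run after parallel computation.
PartialStats merge(const PartialStats& a, const PartialStats& b)
{
    PartialStats r;
    r.minimum    = std::min(a.minimum, b.minimum);
    r.maximum    = std::max(a.maximum, b.maximum);
    r.sum        = a.sum + b.sum;
    r.sum_square = a.sum_square + b.sum_square;
    r.count      = a.count + b.count;
    return r;
}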

Let us consider the case of pixels whose type is unsigned byte, i.e. with values in the range [0,255]. We want to compute
std_dev = sqrt(N * sum_of_square_values - sum_of_values^2) / N
For practical reasons, we want N, sum_of_square_values and sum_of_values to fit in a 64 bit unsigned integer (uint64), which is the largest natural integral type that can be easily and efficiently used on today's CPUs. The most limiting factor will be sum_of_square_values. Given that in the worst case a square value is equal to 255*255, the maximum number of pixels N we can address is (2^64-1) / (255*255) = 283 686 952 306 183, which is large enough to represent a raster of 16 million x 16 million pixels. Good enough.

We now need to be able to multiply two uint64 values and get the result as a uint128, and to compute the difference of two uint128 values. On Intel/AMD CPUs in 64 bit mode, the multiplication natively yields a 128 bit wide result; there is just no standardized way in C/C++ to get at it. For the GCC compiler in 64 bit mode, the __uint128_t type can be used in a transparent way to do that:
__uint128_t result = (__uint128_t)operand_64bit * other_operand_64bit
For Visual Studio compilers in 64 bit mode, a special intrinsic _umul128() is available.

What about non-Intel or non-64bit CPUs? In that case, we have to do the multiplication by hand, by decomposing each uint64 value into its lower and upper uint32 parts, doing 4 uint32*uint32->uint64 multiplications, summing the intermediate results, handling the carries and building the resulting number. Not very efficient, but we do not really care, since it is just a final operation.
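As a sketch, such a portable fallback could be written with four 32x32->64 partial products and manual carry propagation (illustrative only; GDAL hides this kind of logic behind its GDALUInt128 class):

#include <cstdint>

struct UInt128 { uint64_t high, low; };

UInt128 mul_64_64(uint64_t a, uint64_t b)
{
    const uint64_t aLo = a & 0xFFFFFFFFU, aHi = a >> 32;
    const uint64_t bLo = b & 0xFFFFFFFFU, bHi = b >> 32;
    const uint64_t ll = aLo * bLo;
    const uint64_t lh = aLo * bHi;
    const uint64_t hl = aHi * bLo;
    // Middle column: at most 3 * (2^32 - 1), so it cannot overflow a uint64.
    const uint64_t mid = (ll >> 32) + (lh & 0xFFFFFFFFU) + (hl & 0xFFFFFFFFU);
    UInt128 r;
    r.low  = (mid << 32) | (ll & 0xFFFFFFFFU);
    r.high = aHi * bHi + (lh >> 32) + (hl >> 32) + (mid >> 32);
    return r;
}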

To make things easier, that partial 128 bit arithmetic is abstracted in a GDALUInt128 C++ class that has different implementations depending on CPU and compiler support.

Now that we have solved the final part of the computation, we can write the computation loop as follows:

    minimum = maximum = value[0]
    foreach value:
        if value < minimum: minimum = value
        else if value > maximum: maximum = value
        sum = sum + value
        sum_square = sum_square + value * value


Can we do better ? A bit of loop unrolling can help :

    minimum = maximum = value[0]
    foreach value pair (value1, value2):
        if value1 < minimum: minimum = value1
        else if value1 > maximum: maximum = value1
        sum = sum + value1
        sum_square = sum_square + value1 * value1
        if value2 < minimum: minimum = value2
        else if value2 > maximum: maximum = value2
        sum = sum + value2
        sum_square = sum_square + value2 * value2
    (deal with potential remaining pixel if odd number of pixels)


If we start with comparing value1 and value2, we can actually save a comparison (resulting in 3 comparisons for each pair of pixels, instead of 4):

    minimum = maximum = value[0]
    foreach value pair (value1, value2):
        if value1 < value2:
            if value1 < minimum: minimum = value1
            if value2 > maximum: maximum = value2
        else:
            if value2 < minimum: minimum = value2
            if value1 > maximum: maximum = value1
        sum = sum + value1
        sum_square = sum_square + value1 * value1
        sum = sum + value2
        sum_square = sum_square + value2 * value2
    (deal with potential remaining pixel if odd number of pixels)


This improvement can already dramatically reduce the computation time from 1m10s to 7s, when computing the statistics 50 times on a 10000 x 10000 pixel raster.

Parallelization with SSE2

We have not yet explored the parallelization of the algorithm. One way to do it would be to use multi-threading, but for Intel-compatible CPUs we can also explore the capabilities of the SIMD (Single Instruction/Multiple Data) instruction sets. On 64 bit Intel, the SSE2 instruction set, which offers vectorized operations on integers, is guaranteed to always be present. 16 registers (XMM0 to XMM15) are available, each 128 bit wide.

So each register is wide enough to hold 16 packed int8/uint8, 8 packed int16/uint16, 4 packed int32/uint32 or 2 packed int64/uint64, depending on the desired representation. A diverse set of operations is offered, generally operating on the sub-parts of each register independently. For example c=_mm_add_epi8(a,b) will independently add c[i]=a[i]+b[i] for i=0 to 15, and that in just one CPU cycle. _mm_add_epi16() will work on packed uint16, etc. To add some salt, however, not all operators are available for all elementary subtypes.

Compilers are supposed to be able to automatically vectorize some C code, but in practice they rarely manage to do so for real world code, hence requiring the programmer to use the SIMD instruction set at hand. All major compilers (gcc, clang, Visual Studio C/C++) offer access to the SSE2 instruction set through "intrinsics", which are C inline functions that wrap the corresponding assembly instructions, but while still being C/C++. This allows the compiler to do the register allocation and various other optimizations (such as re-ordering), which is a huge win over coding directly in assembly. The Intel intrinsics guide is a useful resource to find the appropriate intrinsics.

So a tentative vectorized version of our algorithm would be:

    v_minimum = vector_of_16_bytes[0]
    v_maximum = vector_of_16_bytes[0]
    v_sum = vector_of_16_zeros
    v_sum_square = vector_of_16_zeros

    foreach vector_of_16_bytes v:
        v_minimum = vector_minimum(v_minimum, v)
        v_maximum = vector_maximum(v_maximum, v)
        v_sum = vector_add(v_sum, v)
        v_sum_square = vector_sum(v_sum_square, vector_mul(v, v))

    minimum = minimum_of_16_values(v_minimum)
    maximum = maximum_of_16_values(v_maximum)
    sum = sum_of_X??_values(v_sum)
    sum_square = sum_of_X??_values(v_sum_square)
    (deal with potential remaining pixels if number of pixels is not multiple of 16)


vector_minimum and vector_maximum do exist, as _mm_min_epu8 and _mm_max_epu8. But for vector_add, which variant to use: _mm_add_epi8, _mm_add_epi16, _mm_add_epi32 or _mm_add_epi64? Well, none directly. We want to add uint8 values, but the result cannot fit in a uint8 (255+255=510). The same holds for sum_square. The result of each square multiplication requires at least a uint16, and since we want to loop several times, we need at least a uint32 width to hold the accumulation. We designed the overall algorithm to be able to handle an accumulator of uint64, but using that in the tighter loop would decrease the performance of the vectorization. So we will decompose our loop into one upper loop and one inner loop. The inner loop will do as many iterations as possible while still not overflowing a uint32 accumulator, so (2^32-1)/(255*255) = 66051.xxxx iterations, which we round down to the closest multiple of 16.

So what about v_sum = vector_add(v_sum, v) ?
The first idea would be to extract the 4 lowest order bytes of v, unpack them so that each fits in a uint32, and then use _mm_add_epi32 to add them into the v_sum accumulator.

    v_sum = _mm_add_epi32(v_sum, _mm_unpacklo_epi16(_mm_unpacklo_epi8(v, zero), zero))
_mm_unpacklo_epi8(v, zero) expands the 8 lowest order bytes of v as 8 uint16. And similarly _mm_unpacklo_epi16(v, zero) expands the 4 lowest order uint16 of v as 4 uint32.

And then repeat that with the 3 other groups of 4 bytes :

    v_sum = _mm_add_epi32(v_sum, _mm_unpacklo_epi16(_mm_unpacklo_epi8(_mm_shuffle_epi32(v, 1), zero), zero))
    v_sum = _mm_add_epi32(v_sum, _mm_unpacklo_epi16(_mm_unpacklo_epi8(_mm_shuffle_epi32(v, 2), zero), zero))
    v_sum = _mm_add_epi32(v_sum, _mm_unpacklo_epi16(_mm_unpacklo_epi8(_mm_shuffle_epi32(v, 3), zero), zero))


But we can do better thanks to the _mm_sad_epu8 intrinsic. It is designed to "compute the absolute differences of packed unsigned 8-bit integers in a and b, then horizontally sum each consecutive 8 differences to produce two unsigned 16-bit integers, and pack these unsigned 16-bit integers in the low 16 bits of 64-bit elements in dst." If we notice that ABS(x-0) = x when x >= 0, then it does just what we want.

    v_sum = _mm_add_epi64(v_sum, _mm_sad_epu8(v, zero))

Pedantic note: we can actually use _mm_add_epi32, since there is no risk of overflow: 8 * 66051 * 255 fits in a uint32. The advantage of using _mm_add_epi32 is that, as we will use it elsewhere, the compiler can re-order additions to group them in pairs and benefit from their 0.5 cycle throughput.

_mm_sad_epu8() has a relatively high latency (5 cycles), but it is still a big win since it replaces 14 intrinsics of our initial version.

What about the computation of the square value? There is no intrinsic to directly multiply packed bytes and get the resulting packed uint16 (or even better uint32, since that is the type we eventually want to operate on, to be able to do several iterations of our loop!). One approach would be to take the 8 lowest order bytes, unpack them to uint16, and use the _mm_mullo_epi16() intrinsic that does uint16 x uint16 -> uint16. Then take the 4 lowest order uint16 of this intermediate result, unpack them to uint32 and finally use _mm_add_epi32 to accumulate them into v_sum_square.

    v_low = _mm_unpacklo_epi8(v, zero)
    v_low_square = _mm_mullo_epi16(v_low, v_low)
    v_sum_square = _mm_add_epi32(v_sum_square, _mm_unpacklo_epi16(v_low_square, zero))


Then repeat the operation with the 4 upper order uint16 of the intermediate result.

    v_sum_square = _mm_add_epi32(v_sum_square,
        _mm_unpacklo_epi16(_mm_shuffle_epi32(v_low_square, 2 | (3 <<2)), zero) )


_mm_shuffle_epi32(v, 2 | (3 << 2)) is a trick to replicate the high 64 bits of an XMM register into its low 64 bits. We don't care about the values of the resulting high 64 bits, since they will be lost in the later unpack operations.

And then repeat the whole process with the 8 highest order bytes.

    v_high = _mm_unpacklo_epi8(_mm_shuffle_epi32(v, 2 | (3 <<2)), zero)
    v_high_square = _mm_mullo_epi16(v_high, v_high)
    v_sum_square = _mm_add_epi32(v_sum_square, _mm_unpacklo_epi16(v_high_square, zero))
    v_sum_square = _mm_add_epi32(v_sum_square,
        _mm_unpacklo_epi16(_mm_shuffle_epi32(v_high_square, 2 | (3 <<2)), zero) )


We can actually do much better with the _mm_madd_epi16() intrinsic, which will "Multiply packed signed 16-bit integers in a and b, producing intermediate signed 32-bit integers. Horizontally add adjacent pairs of intermediate 32-bit integers, and pack the results". This is really close to what we need. We just need to prepare uint16/int16 integers (the sign convention here does not matter, since a uint8 zero-extended to 16 bit is both a valid uint16 and int16)

    v_low_16bit = _mm_unpacklo_epi8(v, zero)
    v_sum_square = _mm_add_epi32(v_sum_square, _mm_madd_epi16(v_low_16bit, v_low_16bit))
    v_high_16bit = _mm_unpacklo_epi8(_mm_shuffle_epi32(v, 2 | (3 <<2)), zero)
    v_sum_square = _mm_add_epi32(v_sum_square, _mm_madd_epi16(v_high_16bit, v_high_16bit))


The latencies and throughput of _mm_mullo_epi16 and _mm_madd_epi16 are the same, so the second version is clearly a big win.
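Putting the pieces together, one iteration of the SSE2 inner loop (without nodata handling) could look like the sketch below. It uses _mm_unpackhi_epi8 for the upper 8 bytes, which is equivalent to the shuffle + unpacklo combination shown above; this is an illustration, not the exact GDAL code:

#include <emmintrin.h> // SSE2 intrinsics
#include <cstdint>

void sse2_iteration(const uint8_t* data,
                    __m128i& v_min, __m128i& v_max,
                    __m128i& v_sum, __m128i& v_sum_square)
{
    const __m128i zero = _mm_setzero_si128();
    const __m128i v =
        _mm_loadu_si128(reinterpret_cast<const __m128i*>(data));
    v_min = _mm_min_epu8(v_min, v);
    v_max = _mm_max_epu8(v_max, v);
    // _mm_sad_epu8 horizontally sums the 16 bytes into two 64 bit lanes;
    // per the pedantic note above, _mm_add_epi32 is safe for accumulation.
    v_sum = _mm_add_epi32(v_sum, _mm_sad_epu8(v, zero));
    // Sum of squares: expand to uint16 halves and multiply-add.
    const __m128i v_low  = _mm_unpacklo_epi8(v, zero);
    const __m128i v_high = _mm_unpackhi_epi8(v, zero);
    v_sum_square =
        _mm_add_epi32(v_sum_square, _mm_madd_epi16(v_low, v_low));
    v_sum_square =
        _mm_add_epi32(v_sum_square, _mm_madd_epi16(v_high, v_high));
    // The caller must dump the uint32 accumulators into uint64 totals
    // before the iteration bound computed above is exceeded.
}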

Use of AVX2

We can tweak performance a bit with a 2x loop unrolling, which enables the compiler to re-order some operations so that those with a throughput of 0.5 cycle (such as _mm_add_epi32, _mm_unpacklo_epi8) are consecutive, and thus 2 of them can be executed in a single cycle. When doing so, we notice that we are operating on a virtual 256 bit register. But 256 bit registers do exist in the AVX2 instruction set, introduced in relatively recent hardware (2013 for Intel Haswell). AVX/AVX2 offers the YMM registers, the equivalent of the XMM registers but with doubled bit width (the 128 bit low part of a YMM register is its corresponding XMM register). One particularity of the YMM registers is that they operate on two quite distinct "128 bit lanes", but you can still extract each lane.

The port to AVX2 is quite straightforward :

    v = _mm256_load_si256(data + i)
    v_sum = _mm256_add_epi32(v_sum, _mm256_sad_epu8(v, zero))
    v_low_16bit = _mm256_cvtepu8_epi16(_mm256_extracti128_si256(v, 0));
    v_sum_square = _mm256_add_epi32(v_sum_square, _mm256_madd_epi16(v_low_16bit, v_low_16bit))
    v_high_16bit = _mm256_cvtepu8_epi16(_mm256_extracti128_si256(v, 1));
    v_sum_square = _mm256_add_epi32(v_sum_square, _mm256_madd_epi16(v_high_16bit, v_high_16bit))


_mm256_extracti128_si256(v,0) extracts the 128 bit lower part of the register,
and _mm256_extracti128_si256(v,1) the 128 bit upper part.

The good news is that we can have a single code base for the SSE2 and AVX2 variants, by using the AVX2 code. In the case of SSE2, we in fact define the _mm256 functions with their corresponding _mm 128bit functions operating on the low and high 128 bit parts. For example:

static inline GDALm256i GDALmm256_add_epi32(GDALm256i r1, GDALm256i r2)
{
    GDALm256i reg;
    reg.low = _mm_add_epi32(r1.low, r2.low);
    reg.high = _mm_add_epi32(r1.high, r2.high);
    return reg;
}

The AVX2-with-SSE2 emulation can be found in :
https://github.com/OSGeo/gdal/blob/trunk/gdal/gcore/gdal_avx2_emulation.hpp

Thanks to inlining and the usual compiler optimizations, this will be equivalent to our hand 2x unrolled version! The final code is here.

Regarding timings, our SSE2-emulated AVX2 version runs in 1.6s, roughly a 4x improvement over the portable optimized C version. On hardware capable of AVX2, the pure AVX2 version is 15% faster than the SSE2-emulated one: definitely not enough to justify a dedicated code path, but since we have a unified one, it comes almost for free, provided that the code is explicitly compiled to enable AVX2.

Nodata values

Up to now, we have ignored the potential existence of nodata values. When computing statistics, we do not want pixels that match the nodata value to be taken into account in the minimum, maximum, mean or standard deviation.

In the pure C approach, this is easy. Just ignore pixels that match the nodata value:

    minimum = maximum = value[0]
    foreach value:
        if value != nodata:
            valid_pixels = valid_pixels + 1
            minimum = min(minimum, value)
            maximum = max(maximum, value)
            sum = sum + value
            sum_square = sum_square + value * value


We cannot directly translate that with SSE2/AVX2 intrinsics, since the result of the value != nodata test can be different for each of the 32 packed bytes of the (virtual) AVX2 register, and doing tests for each component of the vector register would kill performance to the point of being worse than the pure C approach!

We can however rewrite the above in a vector friendly way with :

    minimum = maximum = first value that is not nodata
    neutral_value = minimum (or any value in final [min,max] range that is not nodata)
    foreach value:
        validity_flag = if (value != nodata) 0xFF else 0
        value_potentially_set_to_zero = value & validity_flag
        value_potentially_set_to_neutral = (value & validity_flag) | (neutral_value & ~validity_flag)
        valid_pixels = valid_pixels + validity_flag / 255
        minimum = min(minimum, value_potentially_set_to_neutral)
        maximum = max(maximum, value_potentially_set_to_neutral)
        sum = sum + value_potentially_set_to_zero
        sum_square = sum_square + value_potentially_set_to_zero * value_potentially_set_to_zero


(value & validity_flag) | (neutral_value & ~validity_flag) is a quite common pattern in SIMD code to implement an if/then/else pattern without branches (for classic scalar code, if/then/else branches are more efficient, the CPU being able to do branch prediction)

The only complication is that there is no SSE2 intrinsic for non-equality testing, so we have to transform the logic a bit to use equality testing only. And we will also remove the need for division in the loop:

    foreach value:
        invalidity_flag = if (value == nodata) 0xFF else 0
        value_potentially_set_to_zero = value & ~invalidity_flag
        value_potentially_set_to_neutral = (value & ~invalidity_flag) | (neutral_value & invalidity_flag)
        invalid_pixels_mul_255 = invalid_pixels_mul_255 + invalidity_flag
        minimum = min(minimum, value_potentially_set_to_neutral)
        maximum = max(maximum, value_potentially_set_to_neutral)
        sum = sum + value_potentially_set_to_zero
        sum_square = sum_square + value_potentially_set_to_zero * value_potentially_set_to_zero

    valid_pixels = total_pixels - invalid_pixels_mul_255 / 255


The computation of invalid_pixels_mul_255 in a vectorized way is the same as
v_sum, using the _mm_sad_epu8() trick. The resulting SSE2 code is :

    foreach vector_of_16_bytes v:
        v_invalidity_flag = _mm_cmpeq_epi8(v, v_nodata)
        v_value_potentially_set_to_zero = _mm_andnot_si128(v_invalidity_flag, v)
        v_value_potentially_set_to_neutral = _mm_or_si128(
            v_value_potentially_set_to_zero, _mm_and_si128(v_invalidity_flag, v_neutral))
        v_invalid_pixels_mul_255 = _mm_add_epi32(v_invalid_pixels_mul_255,
                                        _mm_sad_epu8(v_invalidity_flag, zero))
        [ code for min, max operating on v_value_potentially_set_to_neutral ]
        [ code for sum and sum_square operating on v_value_potentially_set_to_zero ]
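Filling in the bracketed placeholders, one nodata-aware SSE2 iteration for the byte case could be sketched as follows (again an illustration, not the exact GDAL code; _mm_unpackhi_epi8 replaces the shuffle + unpacklo combination used earlier):

#include <emmintrin.h>
#include <cstdint>

void sse2_nodata_iteration(__m128i v, __m128i v_nodata, __m128i v_neutral,
                           __m128i& v_min, __m128i& v_max,
                           __m128i& v_sum, __m128i& v_sum_square,
                           __m128i& v_invalid_pixels_mul_255)
{
    const __m128i zero = _mm_setzero_si128();
    const __m128i v_invalidity_flag = _mm_cmpeq_epi8(v, v_nodata);
    // Invalid bytes forced to 0 for the sums...
    const __m128i v_zeroed = _mm_andnot_si128(v_invalidity_flag, v);
    // ...and to the neutral value for min/max.
    const __m128i v_neutralized = _mm_or_si128(
        v_zeroed, _mm_and_si128(v_invalidity_flag, v_neutral));
    // Each invalid byte is 0xFF, so SAD adds 255 per invalid pixel.
    v_invalid_pixels_mul_255 = _mm_add_epi32(
        v_invalid_pixels_mul_255, _mm_sad_epu8(v_invalidity_flag, zero));
    v_min = _mm_min_epu8(v_min, v_neutralized);
    v_max = _mm_max_epu8(v_max, v_neutralized);
    v_sum = _mm_add_epi32(v_sum, _mm_sad_epu8(v_zeroed, zero));
    const __m128i v_low  = _mm_unpacklo_epi8(v_zeroed, zero);
    const __m128i v_high = _mm_unpackhi_epi8(v_zeroed, zero);
    v_sum_square = _mm_add_epi32(v_sum_square, _mm_madd_epi16(v_low, v_low));
    v_sum_square = _mm_add_epi32(v_sum_square, _mm_madd_epi16(v_high, v_high));
}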


The transposition to AVX2 is straightforward.

We can notice that this version that takes the nodata value into account can only be used once we have hit a pixel that is not the nodata value, so as to be able to initialize the neutral value.

What about uint16 rasters ?


The same general principles apply. If we still want to limit ourselves to operating with at most uint64 accumulators, given that the maximum square value of a uint16 is 65535*65535, this limits us to rasters of 2^64/(65535*65535) ~= 4 billion pixels, which remains acceptable for common use cases.

One oddity of the SSE2 instruction set is that it includes only _mm_min_epi16() / _mm_max_epi16() intrinsics, that is to say ones that operate on signed int16. The _mm_min_epu16() that operates on uint16 was only introduced in the later SSE4.1 instruction set (which is nonetheless quite commonly found in not-so-recent CPUs).

There are tricks to emulate _mm_min_epu16() in pure SSE2 using saturated subtraction and masking :

    // if x <= y, then mask bits will be set to 1.
    mask = _mm_cmpeq_epi16( _mm_subs_epu16(x, y), zero )

    // select bits from x when mask is 1, y otherwise
    min(x,y) = _mm_or_si128(_mm_and_si128(mask, x), _mm_andnot_si128(mask, y));


Another way is to shift the unsigned values by -32768, so as to operate on signed 16 bit values.
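In intrinsics form, that sign-flipping trick could be wrapped like this (a sketch; XORing with 0x8000 toggles the sign bit, which is equivalent to the -32768 shift):

#include <emmintrin.h>

static inline __m128i sse2_min_epu16(__m128i x, __m128i y)
{
    const __m128i sign = _mm_set1_epi16(static_cast<short>(0x8000));
    // Compare in the shifted (signed) domain, then shift back.
    const __m128i m = _mm_min_epi16(_mm_xor_si128(x, sign),
                                    _mm_xor_si128(y, sign));
    return _mm_xor_si128(m, sign);
}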

This -32768 shift trick is also necessary since, as in the byte case, we want to still be able to use the _mm_madd_epi16 intrinsic, which operates on signed int16, to compute the sum of square values. One subtlety to observe is that when you operate on 2 consecutive pixels at 0, _mm_madd_epi16 does:

 (0 - 32768) * (0 - 32768) + (0 - 32768) * (0 - 32768)
= 1073741824 + 1073741824
= 2147483648 = 2^31


Which actually overflows the range of signed int32 ( [-2^31, 2^31-1] )! The good news is that _mm_madd_epi16 does not saturate the result, so it will actually return 0x80000000. This would normally be interpreted as -2^31 in signed int32 convention, but as we know that the result of _mm_madd_epi16(x,x) is necessarily positive, we can still correctly interpret the result as a uint32 value. This is where you feel lucky that Intel chose the two's complement convention for signed integers.

To compute the sum of values, there is no nice trick equivalent to _mm_sad_epu8, so we just do it the boring way: unpack separately the 64 bit low and high parts of the value register from uint16 to uint32, and accumulate them with _mm_add_epi32.
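For instance, something like the following sketch (as before, the uint32 lanes of v_sum must be dumped into uint64 totals before they can overflow):

#include <emmintrin.h>

static inline __m128i accumulate_uint16(__m128i v_sum, __m128i v)
{
    const __m128i zero = _mm_setzero_si128();
    // Widen the 4 low and 4 high uint16 of v to uint32 and accumulate.
    v_sum = _mm_add_epi32(v_sum, _mm_unpacklo_epi16(v, zero));
    v_sum = _mm_add_epi32(v_sum, _mm_unpackhi_epi16(v, zero));
    return v_sum;
}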

Exactly as for the byte case, the uint16 case can be easily transposed to AVX2 or
emulated-AVX2.

Conclusion


Conversions between integer and floating point values can be costly, so avoiding them as much as possible is a big win (provided that you make sure not to overflow your integer accumulators)

In theory, the gains offered by a SSE2/AVX2 optimized version are limited to a factor of width_of_vector_register / width_of_elementary_type, so, for bytes and SSE2, to 16. In practice the gain is often smaller, so do this only once you have arrived at an already optimized portable C version (or if the SIMD instruction set includes a dedicated intrinsic that does just what you want)

by Even Rouault (noreply@blogger.com) at July 19, 2016 11:25 AM

Fernando Quadro

LiDAR data analysis and visualization – Part 4

Now let's add a buildings layer to the map. To do so, unzip the BuildingFootprints.zip file you downloaded earlier.

We are going to load the buildings into the database as a spatial table, but before that we have to find out which “SRID” to use.

To find the SRID, just open the BuildingFootprints.prj file:

PROJCS [ "NAD_1983_StatePlane_Oregon_South_FIPS_3602_Feet_Intl" , 
  GEOGCS [ "GCS_North_American_1983" , 
    DATUM [ "D_North_American_1983" , 
      SPHEROID [ "GRS_1980" , 6378137.0 , 298.257222101 ]], 
    PRIMEM [ "Greenwich" , 0.0 ], 
    UNIT [ "Degree" , 0.0174532925199433 ]], 
  PROJECTION [ "Lambert_Conformal_Conic" ], 
  PARAMETER [ "False_Easting" , 4921259.842519685 ], 
  PARAMETER [ "False_Northing" , 0.0 ], 
  PARAMETER [ "Central_Meridian" , - 120.5 ], 
  PARAMETER [ "Standard_Parallel_1" , 42.33333333333334 ], 
  PARAMETER [ "Standard_Parallel_2" , 44.0 ], 
  PARAMETER [ "Latitude_Of_Origin" , 41.66666666666666 ], 
  UNIT [ "Foot" , 0.3048 ]]

It is quite clear that the information we are looking for is “NAD_1983_StatePlane_Oregon_South_FIPS_3602_Feet_Intl”, but which “SRID” number should we use?

To find out, just go to the Prj2EPSG website (http://prj2epsg.org) and search for the text above; you will get the answer, which is 2270.

Now, using shp2pgsql, we can load the data into a table called buildings:

shp2pgsql -s 2270 -D BuildingFootprints.shp buildings | psql -d lidar

Our LiDAR data is all in geographic coordinates (EPSG:4326), and we will have to integrate it with the buildings layer (EPSG:2270); to avoid problems, we will use the PostGIS ST_Transform function:

-- Update SRID and transform all geoms
ALTER TABLE buildings
ALTER COLUMN geom
TYPE geometry(MultiPolygon,4326)
USING ST_Transform(geom, 4326);
-- Rename the shape_st_1 column to shape_area
ALTER TABLE buildings
RENAME COLUMN shape_st_1
TO shape_area;
-- Index the table
CREATE INDEX buildings_gix ON buildings USING GIST (geom);

To speed up our analysis, let's delete all the buildings that are not contained within the extent of our LiDAR data.

-- Find the LIDAR extent
SELECT st_extent(pa::geometry) FROM medford;
-- BOX(-122.8874999 42.3125,-122.8749998 42.325)
-- Delete unneeded building polygons
DELETE FROM buildings
WHERE NOT ST_Contains(
  ST_MakeEnvelope(-122.8874999, 42.3125, -122.8749998, 42.325, 4326),
  geom);

Now let's publish this layer in GeoServer. Publish it in the usual way, except that on the Cache tab you should uncheck the “Create cached layer for this layer” option, and on the Publishing tab change the KML properties as shown in the figure below:

gs_kmlconfig

Save, and we now have a visible layer! You can view it in Google Earth using GeoServer's KML reflector.

http://localhost:8080/geoserver/wms/kml?layers=opengeo:buildings&format_options=lookatbbox:bbox=-122.8808,42.3311,-122.8806,42.3313&mode=refresh&kmscore=50

When you zoom in in Google Earth, you will notice something strange about our buildings: they are “flat”! We want 3D buildings; how can we get them? That is what we will see in the next posts.

by Fernando Quadro at July 19, 2016 10:30 AM

July 18, 2016

GeoServer Team

Online GeoServer Bug Stomp

geoserver_icon

Dear Readers,

A quick post to spread the word about the Online GeoServer Bug Stomp, which will take place this Friday, the 22nd of July 2016.

Developers as well as users from the GeoServer community will gather online to spend up to a full day (in their timezone) on tasks like:

  • Review JIRA reports to make sure they are valid
  • Fix bugs as we come across them
  • Improve the docs
  • Test and report new bugs, or close existing reports

The rules of engagement, as well as a first rough list of participants, can be found in this document; the event is an online gathering: people will work from their own locations, coordinating through the GeoServer Gitter channel with whoever is online in their timezone.

If you feel like helping, don’t be scared: jump onboard, read the rules of engagement, say hi on the Gitter channel (GitHub login required) and help us make GeoServer even better.

If you cannot, don’t worry: we are going to try to make this a monthly event.

Happy GeoServer to everybody!

gitter

by Simone Giannecchini at July 18, 2016 06:56 PM

Fernando Quadro

LiDAR data analysis and visualization – Part 3

The problem with LiDAR is its size! This sample dataset is very, very small and still includes 11 million points. Drawing them all on a map would be very slow, and visually not very instructive.

To display a summary of our data, let's put the database's point patches on a map.

GeoServer can only map PostGIS geometries, so we will use the pcpatch geometry to create a (polygon) view of the data.

CREATE VIEW medford_patches AS
SELECT
  pa::geometry(Polygon, 4326) AS geom,
  PC_PatchAvg(pa, 'Z') AS elevation
FROM medford;

Now let's configure a layer in GeoServer from the view above (medford_patches).

gs_newstore

When configuring the layer, on the Tile/Cache tab uncheck the “Create cached layer for this layer” option.

After saving the layer, go to Layer Preview and see the result:

patches_far

You should see a big black square, as shown in the figure above, but if you zoom in a few steps things become clearer.

patches_near

There are 28547 patches and, when everything is drawn in a small preview, they look like a dark mass. Zooming in, we can see the detail of the small blocks of points, each holding about 400 points.

However, as dark patches they leave a lot to be desired! It would be better if they were colored according to their elevation, so we need to add a style in GeoServer with the colors we want.

We will use the elevation_ramp.xml file to create a new style in GeoServer named elevation_ramp and attach it to our layer.

This is not a standard SLD template: it has no rules defining the color breaks. Instead, it uses an interpolated styling feature to create a continuous color effect, using the color breaks suggested in this post.

After applying the style, the layer will look like this:

patches_colored

Now the (small) variation in elevation can be seen. In fact, the variation is so small that we even have trouble picking out large commercial buildings.

In the next post we will add the buildings to the map. Don't miss it.

by Fernando Quadro at July 18, 2016 10:30 AM

July 17, 2016

Free and Open Source GIS Ramblings

Traveltime routing & catchment extension for the QGIS IDF router

As announced in Salzburg a few days ago, I’m happy to present the latest enhancement to my IDF router for QGIS: travel time routing and catchment computation.

Travel times for pedestrians and cyclists are computed using constant average speeds, while car travel times depend on the speed values provided by the road network data.

Catchment computations return the links that can be traversed completely within the given time (or distance) limit. The current implementation does not deal with links at the edge of the catchment area, which can only be traversed partially.

Loading the whole network (2.7GB unzipped IDF) currently requires around 10GB of memory. One of the next plans therefore is to add a way to only load features within a specified bounding box.

Plans to turn this into a full-blown plugin will most likely have to wait for QGIS 3, which will ship with Python 3 and other updated libraries.

Screenshot 2016-07-17 22.04.54


by underdark at July 17, 2016 08:26 PM

July 15, 2016

GIScussions

Local Government Shared Services – wrestling with data, PostGIS and stuff

This making maps business is certainly not as easy as it seems! I have just burnt an inordinate amount of time trying to make a half decent map and publish it to the web, but I am learning stuff on the way even if I did disappear up some blind alleys.

Frustration

2 years ago, I wrote a piece “Could you make a better map than this?” which questioned the way in which the Local Government Association were presenting shared services partnerships on a map. I challenged people to make a better map with the offer of a small prize, but no one took up my offer. A few weeks ago, the LGA published an updated version of the map and also provided a link to download the data. Well, the map is pretty similar to the original one that I criticised – have a look:

For me this map is cluttered, with too many markers and colours, and numbers on the markers that are confusing. So I thought this time I would try to make something, hopefully better. I am not sure that I have succeeded, as representing this data was more difficult than I imagined, and I think I have stretched the limits of my technical skills and of the QGIS2Web plugin. So here is a summary of what I tried; maybe it will help someone else.

Wrestling with the data

The data is in a csv file with almost 1700 rows, 1 row per organisation per service/partnership (so if 20 organisations are in a partnership there are 20 rows with pretty much identical data). Some simple metadata about the columns and their significance would have been helpful (e.g. there was a count field that didn’t seem to make any sense to me).

Lesson 1: when you are cleaning up data, don’t delete stuff that you think you don’t need too early; you might discover why it was useful later on and have to restart!

I spent a while looking at the data and decided that it might be more effective to represent each service as a multi-polygon covering the Local Authorities. So the first task was to give each partnership a unique ID; how do you do that in Excel? A bit of googling and I came up with this simple answer: insert a column (A in this case) for the Service ID, then sort the data by service name (column B), set A1=1, then use the formula =if(B2=B1, A1, A1+1) in A2 and copy it down the column. Now you have a series of Service IDs; copy and paste as values so that they are fixed and don’t recalculate as you resort or delete some of the data.

SQL is hard

Teaching yourself SQL involves a heck of a lot of trial and error; if I hadn’t had a lot of help from my friend Aileen Heal (a database guru IMHO) I would never have got a result. Here is the sequence of stuff that I did:

  1. I imported the data into PostGIS and filtered out the services that had shut down or where there was no information to show that it was live to create a table of live services (ca 1100 rows)
  2. The data had an ONS Code for almost every row, so I grabbed the latest version of BoundaryLine from the OS OpenData site and loaded the Counties and Districts/Boroughs/Unitaries into PostGIS
  3. Then I appended the Counties data to the Districts so that I had one table with all of the boundaries and ONS codes
  4. Then I added a geometry column to the table of live services and updated the column with the geometry from the combined boundaries table using the ONS Code (setting the coordinate system and adding a spatial index)
  5. Then I created a table that built a multi polygon for each group of rows sharing a service ID. I wouldn’t have had a clue how to do this and couldn’t find an example online, so it was good that I could “phone a friend”.
  6. Then I could join the attribute data from the live table to the union table that had the multi polygons to create a view with 256 live services displayed as multi polygons.
  7. The final step was to work out how to split these 256 rows into a manageable number of categories to portray on the map. The original data had 27 different categories, which was way too many (see the massive rainbow legend at the bottom of the LGA map above). I worked out how to run a Select, Count, Group and Order query to list all of the categories and the number of rows in each – pretty pleased with that (google is fantastic for working this stuff out). I ended up creating 9 higher level groupings that made some sense to me (e.g. combining “Shared Management” with “Shared Leadership”) using select queries.

Job done! Well not first time, there were several iterations before I got it all right 🙁

Lesson 2: save your SQL queries so that you can reuse them if you have to start again.

The QGIS bit

This bit should have been really easy but of course it took at least 6 or 8 tries before I got it all to work.

For some reason my views weren’t opening in the load PostGIS table dialogue (now fixed – I needed to select a feature ID), so I used the DB Manager tool and “Add to Canvas”, which worked fine.

The next step was to do a bit of styling of the layers in QGIS: get them in a sensible order and set some transparency so that you could partially see the layers where they overlap (it would be nice if you could set transparency on a group of layers in one go in QGIS, but if there is a way I couldn’t find it). I added a project title (which would appear in my web map) and saved the whole thing as a QGIS project so that I could get back to it.

QGIS2Web

Then it was over to QGIS2Web to build my web map. I have done this before so I sort of knew what I was doing, but the volume of data made the whole process rather slow and got me wondering whether this was such a good idea. QGIS2Web has a feature to thin down the geometry and I set that to the highest level, played around with the zoom ranges (there’s no point in being able to zoom down to street level when the data is at local authority level), added a legend and a search box, and I was almost there. The last task was to edit the JavaScript to limit the size of the popup and add a scroll bar if the attribute data is too long for the window. After a quick test and a tweak or two I was ready to publish the project to my web server. I zipped it up and uploaded it – aargh, it’s nearly 50MB! The moment of truth came loading it from the interwebby place – it loaded about as fast as a tortoise doing a 100m sprint.

Cape Disappointment is Disappointing

Realisation: I need to simplify the geometry in PostGIS and then go through the whole process again 🙁

Back to PostGIS

Another call out to my friend Aileen who suggested a neat way to solve this little problem.

  1. Back up the table with the union geometries just in case
  2. Add a column for simplified geometry and create an index
  3. Run the simplify geometry function to update this column with a simplified geometry. I used a setting of 10 (no idea what level of simplification it implies, but I had used it before on another data set with good results)
  4. Recreate the master view to use the simplified geometry but aliasing it as “geom” so that all of the dependent views would still work (clever stuff from Aileen)
  5. Rerun a load of queries that fortunately I had saved and “Bob’s your uncle”

Lesson 3: if you are using the QGIS2Web plugin, thin your data down unless you really need high levels of precision. PostGIS does that better than QGIS or QGIS2Web.

The finish line

Reload the project and zippo – you can see how much faster the layers load. Into QGIS2Web and a bit more tweakery (switch off the simplification here as it has already been done in PostGIS). Test and re-upload to my web server – now it is only 9MB, which should speed my tortoise up to hare-like standards.

And here is the result

It’s not perfect by any means

  • The QGIS2Web plugin doesn’t provide the ability to drill down through layers or on overlapping polygons, I need to do some fiddling to see if I can come up with a fix for that.
  • It looks quite pretty and is less visually cluttered than the original
  • The polygons provide a clearer view of the extent of a service/partnership particularly if you switch off some of the layers. Maybe it would be better to start with the layers switched off?

Verdict – could be better, but I have probably gone as far as I can with the tools I have available. The alternatives would be either to use one of the services like ArcGIS Online (I know, but it is pretty damn slick and easy) or CartoDB, or to think about installing GeoServer on my web server and learning how to use that, which is just a leap too far for me at the moment.

Think you can do better? I’d love to see what an “expert” could do.

Massive thanks to Aileen for all her help.

by steven at July 15, 2016 06:20 PM

Jackie Ng

React-ing to the need for a modern MapGuide viewer

A few months ago, I mused about a hypothetical future map viewer for MapGuide.

Having a good solid 6 months of working with React and TypeScript (and dealing with Webpack in anger) under my belt, I felt confident enough to give this idea a crack.

This is what the consumption story looks like so far. The HTML is just:
  • One script tag (to the webpack bundle, containing the map viewer component and all of its dependencies)
  • Making an instance of the entry point class (currently called MapGuide.Application)
  • Mounting the class (using React terminology here) at the desired DOM element, with properties to pass into the root React component
And voila! Instant OpenLayers3-powered map viewer. Pretty simple stuff.

When we look at the current React component composition, it's one root component (I'm intending for this to be something conceptually similar to the ApplicationDefinition/WebLayout for our existing viewers), and a child component that is wrapped in a react-dimensions higher-order component to automatically resize the map viewer according to parent element/window size changes (so we can hook in and call updateSize on the internal ol.Map to prevent image distortion).

The major deviation from my original thoughts is that in the above HTML code there are no references to React, and no need to mount the viewer component using the React APIs. I've come to realize that this would be cumbersome, as your ultimate entry point will always be HTML/JavaScript, way outside the zone of React/TypeScript/(T|J)SX, and although this is a "component", our final bundle is ultimately more an application than a library. So having a wrapper (which is what MapGuide.Application does) to do all of this leg work simplifies the whole setup process.

That's not to say we can't componentize this map viewer (as a library/package you can npm install), but my primary goal is to have an experience similar to our existing viewer offerings, which are applications whose features and behaviour are driven by configuration files. In that respect, including a script tag and writing some JavaScript to point to your configuration and kickstart the application is much easier to comprehend than installing node, npm and Webpack and messing with frontend development stacks.

So far so good. We're just beginning to scratch the surface.

by Jackie Ng (noreply@blogger.com) at July 15, 2016 05:41 PM

gvSIG Team

gvSIG on the radio: talking about GIS, SDIs and Smart Cities on COPE Valencia's 92.6FM

radio_gvsig-ide-xxx

Last Wednesday, July 13th, I was invited to take part in a radio program on the COPE network, to give the gvSIG project's view on Spatial Data Infrastructures and their relationship with Smart Cities.

You can listen to the program's podcast at:

http://www.ivoox.com/hablando-sig-ides-smartcities-el-audios-mp3_rf_12230776_1.html

The gvSIG Association is known internationally for its experience in implementing Spatial Data Infrastructures (SDI) with free and open source geomatics. With the arrival of gvSIG Online, we have turned that know-how into a product which, in just a few months, has become the comprehensive open source platform for SDIs.

As for Smart Cities, we devoted a series of articles to explaining our vision.

We hope this kind of program helps to spread and explain the importance of open source geomatics in modernizing the management of our public administrations.


Filed under: geoportal, gvSIG Online, SDI, spanish Tagged: ciudades inteligentes, IDE, Infraestructuras de Datos Espaciales, smart cities

by Alvaro at July 15, 2016 10:40 AM

Fernando Quadro

Get to know MapStore 2

MapStore 2 is a framework for building web mapping applications using standard mapping libraries such as OpenLayers 3 and Leaflet.

map2

It includes several applications, for example:

  • MapViewer: a simple viewer of pre-configured maps (optionally stored in a database using GeoStore)
  • MapPublisher: designed for creating, saving and sharing maps in a simple and intuitive way, using content from services such as OpenStreetMap, Google Maps and MapQuest, or from specific servers provided by your organization or by third parties.

It was designed from the ground up to give users a coherent and comprehensive experience across different devices, so it automatically adapts to different screen sizes, as shown below:

mapstore2

MapStore 2 is free and open source software, developed by the Italian company GeoSolutions, and licensed under the Simplified BSD License.

Source: GeoSolutions

by Fernando Quadro at July 15, 2016 10:30 AM

From GIS to Remote Sensing

Sentinel-2 Download Issues Using the Semi-Automatic Classification Plugin: Solved


Recently several users of the Semi-Automatic Classification Plugin (SCP) for QGIS reported errors that prevented the search and download of Sentinel-2 images.


The issues were caused mainly by the cryptographic protocols used for accessing data and the way the SCP tries to connect to the Copernicus Data Hub (https://scihub.copernicus.eu).
In particular, the issues started after some recent changes to the Copernicus Data Hub, when the protocols used for accessing data were upgraded (i.e. to TLS v1.1+).

The SCP relies on Python for downloading Sentinel-2 images.
As described at https://docs.python.org/2/library/ssl.html, the protocols TLS v1.1 and TLS v1.2 are available only in Python 2.7.9+ built against openssl 1.0.1+.
Unfortunately QGIS ships with an earlier version of Python, in which TLS v1.1 and TLS v1.2 are not available; therefore the Sentinel-2 download process fails.

by Luca Congedo (noreply@blogger.com) at July 15, 2016 07:51 AM

Stefano Costa

Visitors to the Genoa Aquarium, 2015 edition

Last year we saw, in an overview of Genoa's museums, how the Aquarium, the city's best-known, most advertised and most visited attraction, has steadily lost visitors over recent years, dropping below the psychological threshold of one million visitors three times in the last four years (2012, 2014, 2015).

The data recently published in the Cruscotto dell'economia genovese (the dashboard of the Genoese economy), however approximate, confirm the ongoing stagnation: in 2015 the Aquarium totalled 927,000 visits, a 6.1% decrease. And to think that last year I had ventured an optimistic forecast:

It will surely be the facility that benefits most from the influx of Expo visitors, and its promotional videos are already showing in railway stations and other crowded spaces.

Instead, nothing of the sort; on the contrary, during the Expo months visitor numbers were consistently lower than in the summer periods of previous years. How is that possible? The answer is fairly simple: in 2015 museum visitors also declined, while tourist arrivals increased slightly. It is not the Aquarium, and not the museums, that keep Genoa's young tourist vocation afloat.

The raw figures on attendance at the Genoa Aquarium are blunt. All the charts presented below are based on the data published by the Municipality of Genoa.

Chart: visitors to the Genoa Aquarium from 2010 (1,166,000) to 2015 (927,000), as presented in the Cruscotto sull'economia genovese

If we look at the figures over a longer time span than the one presented in the dashboard, the steady decline is even more striking.

Chart: visitors to the Genoa Aquarium from 1994 to 2015, year by year

The monthly breakdown of the data is even more interesting, because it allows the seasonally adjusted figures to be examined.

Chart: seasonally adjusted trend of the number of visitors to the Genoa Aquarium

Although it fluctuates, the trend curve points unequivocally downwards. I therefore do not understand on what basis Giuseppe Costa can speak of

a decidedly rosier period for the Aquarium, with «numbers very close to the pre-crisis period».

(let us leave aside the purely fanciful claims, made in interviews given on the same occasion, about the “millions of visitors” attracted to Genoa every year by the Aquarium)

Besides, in January Costa himself had admitted, commenting on the same data, that

The year just ended, which we count from November 1st 2014 to October 31st 2015, was the worst in our history

In the pre-crisis period (2008, presumably) the Aquarium had one million three hundred thousand visitors a year, whereas in 2015 it stopped at 927,000.

Given the delay with which the data are published, we cannot know whether the spring of 2016 really was rosier, and it would be nice to be able to make these assessments more quickly, because they are important for understanding what is happening before next year has already arrived. For example, if halfway through Expo we had already known that the Aquarium was not drawing a particularly large audience, the communication strategy could have been changed. For now, we have to settle for the optimism with which one million one hundred thousand visitors are being forecast.

by Stefano Costa at July 15, 2016 05:56 AM

July 14, 2016

GeoSolutions

MapStore 2: The new look&feel at work

MapStore 2 Mobile

Dear All, we just wanted to share with you the work that we are doing on MapStore 2.

In previous posts we have described the reasoning behind the technological choices (see this post if you missed it) as well as the work being performed on the new look&feel (see this post if you missed it). In this post we just want to point your attention to the current development release of MapStore 2, which has been performed in conjunction with the first release of the new theme that depicts what the look&feel will look like in the long term. Below, the main map page with the new theme applied is shown.

[caption id="attachment_2907" align="alignnone" width="800"]MapStore 2 Sample Map MapStore 2 Sample Map[/caption]

MapStore 2 has been designed from the beginning to provide users with a coherent and comprehensive experience across different devices, hence it should automatically adapt to different screen sizes, as shown below (warning, that's my cell phone screen :) ).

[caption id="attachment_2877" align="alignnone" width="350"]MapStore 2 Mobile MapStore 2 Mobile[/caption] [caption id="attachment_2883" align="alignnone" width="350"]MapStore 2 Mobile, the featureinfo at work MapStore 2 Mobile, the featureinfo at work[/caption]

You can play with the current version by pointing your browser at http://mapstore2.geo-solutions.it.

The current development cycle, which should end towards the 10th of August (summer holidays permitting :) ), is focused on the implementation of the map management functionality, which includes both Map Editing and Map Search, to refactor the old Map Manager interface according to the mock-up below.

[caption id="attachment_2889" align="alignnone" width="800"]MapStore 2 HomePage MapStore 2 HomePage[/caption] So yeah, stay tuned for more development on the MapStore 2 project, meanwhile play with it and if you have any feedback for us you can use the project's google group to communicate.

If you want to know more about how we can help your organization, don't hesitate to get in touch with us! Make sure to check our Enterprise Services Offer, which is the contracting vehicle we use to help our clients reach their goals with our products.

The GeoSolutions Team,

Geosolutions

by simone giannecchini at July 14, 2016 04:55 PM

Fernando Quadro

Smart image transparency in GeoServer

GeoSolutions is introducing a new feature in GeoServer: smart transparency with the image/vnd.jpeg-png format.

For aerial and satellite imagery, the JPEG output format is often preferred for its fast generation and small size. However, there are cases where transparency is needed, perhaps because the image does not cover the whole map area, or perhaps because the user is free to select only certain bands of the image and a base map is a necessary requirement.

Until now these scenarios were handled with the PNG format, paying the price of longer generation times and larger sizes. To address this shortcoming, a new output format is being added to GeoServer, named “image/vnd.jpeg-png”. As the name suggests, it is a format that generates JPEG or PNG images depending on the image contents: if the pixels are completely opaque, JPEG is used; if they are transparent or translucent (not fully opaque), PNG is used instead.

Naturally, the image type will be set accordingly, transparently, without any change being necessary. In the image below you can see a layer preview where the JPEG-PNG format was chosen (the preview request needs to be modified by adding “&transparent=true” to its URL, otherwise only opaque images are generated):

Selezione_001

As you can see, each tile's type is JPEG or PNG, depending on whether the tile contains raster data or has transparent pixels.

This feature is available and you can try it using a nightly build at http://ares.boundlessgeo.com/geoserver/master/

Source: GeoSolutions Blog

by Fernando Quadro at July 14, 2016 10:30 AM

gvSIG Team

gvSIG, an important tool applied to Open Education

poster_gvSIG_open_education

In the light of the International Map Year (IMY), the ICA and its commissions are highlighting the value of cartography by “mapping” the UN sustainable development goals.

The sustainable development goals and their targets can be mapped based on the indicators. ICA’s Commissions have mapped each of the goals from their particular perspective. The resulting poster collection gives an overview of the strength of cartography. It is telling the story of cartographic diversity, of mapping options and of multiple map perspectives.

The poster made by the Commission on Open Source Geospatial Technologies is about Goal 10 (Reduced Inequalities), and gvSIG appears in it as an important tool applied to educational environments.

Poster:

http://icaci.org/files/developmentgoals/goal_10.pdf


Filed under: english, events, gvSIG Desktop, gvSIG Educa Tagged: Education, education environment, geospatial, goal 10, open education, open source, reduced inequalities, sustainable development goals, UN

by Alvaro at July 14, 2016 09:48 AM

Petr Pridal

MapTiler 7: Auto-save and new georeferencing

New georeferencing interface

The visual georeferencing process has been completely redesigned. Scanned maps and images missing a geographical location can now be positioned directly. There is a new Overlay option, where it is possible to toggle between the image and the background map with the Shift key. Adding more control points this way is faster and more precise – and the overlay is immediately recalculated.

It is possible to preview the exact location of the whole map before rendering the map tiles!

Five transformations are available to best fit the users' needs: Scaled, Similarity, Affine, Polynomial and TPS. You can type coordinates numerically, and all EPSG spatial reference systems are supported out of the box!

Auto-save of the georeference

When a file is reopened on the same computer, a previously made manual georeference is loaded again. This saves time on scanned maps that have already been georeferenced and only need re-rendering or further adjustments.

Other new features

Next to the improvements in the georeferencing process, there are several new features and improvements coming with MapTiler 7:
  • Map portal with your logo and maps is online with one click, hosted on Amazon or Google cloud! More details in a future blog post.
  • Retina/HighDPI tiles (@2x and @3x) with ready to use OpenLayers viewer on all map profiles, including raster!
  • Background map added to MBTiles preview on Desktop (just double-click on a .mbtiles file!)
  • Shortcut icons for easy upload of the maps to mobile app and to cloud hosting
  • Legend, including image, stored in metadata of generated MBTiles maps 
  • HTTPS hosting on Amazon S3 and Google Cloud Storage offered directly
  • The now-unavailable MapQuest maps were replaced with our own OpenStreetMap base maps
And from now on, versions no longer start with a zero – so after 0.6.4 now comes 7.0!

Do you want to try the latest MapTiler? Get it for free at http://www.maptiler.com/

by Hynek Přidal (noreply@blogger.com) at July 14, 2016 09:35 AM

gvSIG Team

Mapping magazine once again devotes a special issue to the gvSIG project

mapping_gvsig

Mapping magazine, a reference among publications related to the earth sciences, devotes its issue 177, of May-June 2016, to the gvSIG project.

“Today's knowledge is the basis of tomorrow”: with this sentence, directly related to the values of free software, the special issue opens, featuring articles on work presented at the last International gvSIG Conference. A few examples suffice to glimpse the thematic variety in which gvSIG technology is applied, reflected in articles such as:

  • Reconstruction of the systems of the English Water Company for supplying Cartagena, from the 19th century to the mid-20th century.
  • Archaeometry and georeferencing with gvSIG at an open-air site. Mas d'Is 2015 campaign.
  • The cave sanctuaries of the eastern coast of Yucatán: an interpretive paradigm.
  • Morphopedological base map, built with a GIS, of the General San Martín department, Córdoba, Argentina.

and much more, which you can read for free in the online version of the magazine.

Link to the online magazine:

http://www.mappinginteractivo.es/images/revistas/REVISTA%20MAPPING%20177/REVISTA%20MAPPING%20177.html


Filed under: events, gvSIG Desktop, press office, spanish Tagged: 11as Jornadas Internacionales, 11gvsig

by Alvaro at July 14, 2016 07:15 AM