Welcome to Planet OSGeo

October 30, 2014

gvSIG Team

Free Spanish-language course "Introduction to Scripting in gvSIG 2.1"

The gvSIG-Training distance-learning platform has opened registration for the free Spanish-language MOOC "Introducción a Scripting en gvSIG 2.1" ("Introduction to Scripting in gvSIG 2.1"), organized by the gvSIG Association [1].

The aim of this MOOC is to showcase the potential of geospatial programming: the ability to create new tools, new geoprocesses, and data analyses that extend the power of gvSIG and adapt it to our needs, as well as task automation, which can save considerable time and work.

The course lasts four weeks and will start on November 24th.

No prior programming knowledge is required; the course is at a basic level, and every line of code is explained. The course uses the Python programming language, a favorite first language that is intuitive and quick to learn.

Taking the course is completely free. Those who complete it and wish to receive a Certificate of Achievement, worth 30 credits in the gvSIG Certification program, need only make a contribution of 40 Euros and complete a personal project consisting of a gvSIG script, which will be uploaded to the repository and made available to the whole community.

More information about the modules, registration, etc.: http://web.gvsig-training.com/index.php/es/quienes-somos-2/noticias-2/140-massive-online-open-course-de-introduccion-a-scripting-en-gvsig-2-1

To enroll, select the "Enrolment" option at the bottom of the page and accept the conditions on the next page. Finally, you will need to register.


Filed under: development, gvSIG Desktop, scripting, spanish, training

by Mario at October 30, 2014 09:49 AM

October 29, 2014

Boundless Blog

Creating a custom build of OpenLayers 3 (Revisited)

Since OpenLayers 3 likely includes more than any single application needs, we previously described how to generate custom builds containing just the relevant code.

As promised back in February, things have changed for the better and it’s now easier than ever to create a custom build of OpenLayers 3 thanks to a task called build.js that uses a JSON configuration file. Documentation for the tool resides here.

Configuration file

We will now use build.js to create a custom build for the GeoServer OpenLayers 3 preview application, the same application that we used in our previous blog post. The full configuration file for our application can be found here.

The exports section of the configuration file defines which parts of the API will be exported in our custom build. By using the name of a class such as ol.Map we export the constructor. By using a syntax such as ol.Map#updateSize we are exporting the updateSize method of ol.Map. The exports section is basically the replacement for the exports file that we used in the older blog post.

We did not make any changes to the compile section. In the define section we use the same defines as we did before with Plovr, but the syntax is slightly different: "ol.ENABLE_DOM=false" versus "ol.ENABLE_DOM": false. Incidentally, OpenLayers 3 no longer uses Plovr.
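Putting these pieces together, an abridged configuration file might look like the sketch below. Only ol.Map, ol.Map#updateSize, and the ol.ENABLE_DOM define come from this post; the remaining exports and the compiler options are illustrative placeholders, not the contents of the actual geoserver.json:

```json
{
  "exports": [
    "ol.Map",
    "ol.Map#updateSize",
    "ol.View",
    "ol.layer.Tile"
  ],
  "compile": {
    "compilation_level": "ADVANCED"
  },
  "define": [
    "ol.ENABLE_DOM=false"
  ]
}
```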

Compiling

To compile our custom build, first make sure you have run npm install in your OpenLayers 3 git clone directory, then use the following command:

node tasks/build.js geoserver.json ol.min.js

The first argument (geoserver.json) is our build configuration file, the second argument is the name of the output file.

That’s it!

The end result should be a much smaller file tailored to the needs of this specific application.

Interested in using OpenLayers in your enterprise? Boundless provides support, training, and maintenance. Contact us to learn more.

The post Creating a custom build of OpenLayers 3 (Revisited) appeared first on Boundless.

by Bart van den Eijnden at October 29, 2014 01:53 PM

gvSIG Team

10th International gvSIG Conference: Provisional program available

The program of the 10th International gvSIG Conference, that will be held from December 3rd to 5th at La Petxina Sports-Cultural Complex (Valencia – Spain), is now available.

The provisional program is available with the paper sessions and workshops on the Conference web page [1].

All the presentations given at the Conference on Wednesday and Thursday, and the workshops given on Friday, will have simultaneous interpretation (Spanish-English and English-Spanish).

Registration is free of charge (capacity is limited, so we recommend not waiting until the last few days) and must be done through the application form on the Conference web page.

In the coming days we will publish more information about the conference on the gvSIG blog [2], including the software that should be installed beforehand in order to follow the workshops.

[1] http://jornadas.gvsig.org/
[2] http://blog.gvsig.org/




Filed under: community, english, events, gvSIG Desktop, scripting, spanish, training

by Mario at October 29, 2014 01:18 PM

Petr Pridal

Virtual 3d globe for your website

A custom online globe can be an attractive feature for your website's visitors. No plugin is needed, just a modern web browser or a mobile device. In the following video tutorial we show how to create your own globe from a texture using MapTiler and WebGL Earth 2 API, an open-source project developed by Klokan Technologies.

To create your custom globe, you will need a texture of the globe with a 2:1 aspect ratio in a spherical (equirectangular) projection. In the tutorial, we used a Cassini globe downloaded from David Rumsey's website. Once you have the texture, process it with MapTiler and then choose suitable hosting, either on Amazon S3 or on any web server running Tileserver.php.

And how do you go from a real globe to a texture? Typically by photographing the globe in a light box to prevent reflections and then processing the photos with software such as Autopano (http://www.autopano.net/wiki-en/action/view/Understanding_Projecting_Modes) or AgiSoft PhotoScan. More information about processing globes can also be found in this article.


Video available at: https://www.youtube.com/watch?v=pf8itiTwo6w


by Hynek Přidal (noreply@blogger.com) at October 29, 2014 10:13 AM

October 28, 2014

gvSIG Team

“Introduction to GIS” Course in English free of charge

The gvSIG-Training e-Learning platform opens its registration period for the “Basic GIS with gvSIG” MOOC in English, given by the gvSIG Association and GISMAP.

This MOOC aims to show the use and potential of the open source software gvSIG in performing the most common operations in a GIS workflow. The course is addressed both to beginners and to skilled GIS users who want to learn how to use this software.

It will start on November 24th and last four weeks, with an expected participant commitment of about thirty hours over the whole course period.

Course attendance is completely free of charge. Students who successfully complete the course and wish to receive the Certificate of Achievement, corresponding to 30 credits in the gvSIG Certification program, will be asked for a contribution of 40 Euros.

For further information about topics, goals…: http://web.gvsig-training.com/index.php/es/quienes-somos-2/noticias-2/139-massive-online-open-course-introduction-to-gis

To register, press "Enroll" at the bottom of the page, then accept the "Site policy agreement". Finally, you will need to register on the web page.


Filed under: community, english, gvSIG Desktop, training

by Mario at October 28, 2014 02:45 PM

Boundless Blog

QGIS Compared: Cartography

As mentioned in my previous post about visualization, QGIS is easy to install, integrates with OpenGeo Suite, and has reliable support offerings, making it a viable alternative to proprietary desktop GIS software such as Esri ArcGIS for Desktop. I've written a couple of books on designing cartographic products, so cartography is something I'm passionate about, and it is definitely an important component of desktop GIS. So how does QGIS perform when it comes to cartographic design?

In making the examples for this blog post series, I was impressed by the capabilities of QGIS and found it was easy and straightforward to create maps like the Halloween map below.

Halloween in Fort Collins, 2014

Strength: Text and Image Elements

Placing text and images is as easy as finding the Add new label and Add image buttons on the left-hand side of the print composer (a). Once you add a text box or any other element, the Item properties tab on the right-hand side of the print composer gives you most of the complex options that you’d find in any layout or commercial GIS software such as alignment, display, and rotation (b). You can also align these elements by using the Align selected items button in the main button bar (c).

composeritems.png

Strength: Advanced Techniques

Advanced labeling functionality is included in the main QGIS interface, including SQL-based labeling, font choice, and placement protocols. Exporting a layout to SVG for editing in Inkscape or other design software is easy. Another advanced technique is the creation of atlases, or map books, that replicate a layout for each part of an indexed main map. QGIS provides an atlas composer as part of the core functionality within the Print Composer, a very powerful feature.

Strength: Color Blending

An exciting feature is the addition of color blending modes, typically found only in design software, that can add special effects to the look and feel of the map by adding texture or special brightening effects, for example. The following modes are available: lighten, screen, dodge, addition, darken, multiply, burn, overlay, soft light, hard light, difference, and subtract. Color blending can be applied to a single layer in the layer properties dialog (a) or it can be applied to an entire map in the Print Composer (b).

texturemaps.png

Mixed Results: Map Elements

Adding the map to the layout is a little more difficult if you are used to commercial GIS software. You have to use the Add new map button (the wording of which I found to be confusing since it somehow implies a new map rather than the existing map in your project), which adds the map from the main QGIS project to the layout. Another potential area of confusion is the fact that once the map element is added to the project, it doesn’t dynamically update if the main map is changed. In fact, to update it there are actually two buttons in the map element’s Item properties: one to update the preview and the other to set the map extent. The former updates the map if a new map layer has been added or the symbology has changed but only the latter updates the map if it has been panned or zoomed. These, however, are minor quibbles.

Mixed Results: Sizing and Graticules

The Print Composer does have a few shortcomings that I suspect will be cleaned up in later releases. It isn’t possible to change the size of multiple images all at once. For example, enlarging the pumpkin images in the right-hand information panel of the example map has to be done for each pumpkin separately since selecting them all and changing the properties isn’t possible. Also, there is no graticule functionality in the Print Composer. Instead, the user would need to find or create a graticule line dataset to add as a layer in the map if a graphic-like grid was desired. Another minor quibble is that the size dialog for images has the wrong tab order (if you try to tab between the input boxes the tabbing skips boxes instead of sequentially moving the cursor to the next input box).

Hidden Gem: Gradient Fills

Gradient fills, which make vignette effects possible, are available through the new shapeburst fill feature. You can use it to achieve subtle shading along land-water boundaries, but also to do some unexpected things like banding the edges of administrative boundaries in different colors or reverse-fading the edges of a map. This latter effect takes advantage of QGIS's built-in inverted polygons tool, which simplifies what used to take several steps to achieve.

Conclusion

The cartographic capabilities of QGIS are sufficient to produce almost all common map layout components, with an adequate set of advanced capabilities and even some options, like the color blending modes, that aren't typically found elsewhere. Cartography is where many people think QGIS falls short; however, in making the examples for this blog post series, including the Halloween map, I was blown away by its capabilities. See also the QGIS Map Gallery for more map examples. Overall, my experience has been that the visualization and cartography functions of QGIS have matured to the point where GIS professionals of all types can't afford not to strongly consider adopting it.

The post QGIS Compared: Cartography appeared first on Boundless.

by Gretchen Peterson at October 28, 2014 01:56 PM

gvSIG Team

gvSIG in the technical management of street trees

Last Friday, October 24th, I had the good fortune to take part in the 16th National Arboriculture Congress, with an afternoon workshop on the technical management of street trees with gvSIG 2.1, preceded by a talk in the morning session. It was also gratifying to learn that more and more practitioners in this sector are using gvSIG.

It was a really interesting event that led me to reflect on the importance of urban trees, which undoubtedly constitute a public service we are often unaware of.

In preparing all the gvSIG material presented at the congress, I was fortunate to have the help of Jacobo Llorens, President of the Spanish Arboriculture Association.

We have published all these materials, which you can find at:

We hope they are useful to all of you who work in urban tree management.


Filed under: events, gvSIG Desktop, spanish Tagged: arbolado, gestión, municipal

by Alvaro at October 28, 2014 12:44 PM

Tyler Mitchell

Create a Union VRT from a folder of Vector files

The following is an excerpt from the book: Geospatial Power Tools – Open Source GDAL/OGR Command Line Tools by me, Tyler Mitchell.  The book is a comprehensive manual as well as a guide to typical data processing workflows, such as the following short sample…

The real power of VRT files comes into play when you want to create virtual representations of features as well. In this case, you can virtually tile together many individual layers as one. At present you cannot do this with a single command, but adding two simple lines to the VRT XML file makes it work.

Here we want to create a virtual vector layer from all the files containing lines in the ne/10m_cultural folder.

First, to keep it simple, create a folder and copy in only the files we are interested in:

mkdir ne/all_lines 
cp ne/10m_cultural/*lines* ne/all_lines

Then we can create our VRT file using ogr2vrt as shown in the previous example:

python ogr2vrt.py -relative ne/all_lines all_lines.vrt

If added to QGIS at this point, the file will merely present a list of four layers to choose from for loading. This is not what we want.

So next we edit the resulting all_lines.vrt file and add a line that tells GDAL/OGR that the contents are to be presented as a unioned layer with a given name (i.e. “UnionedLines”).

The added line is the second one below, along with the closing line second from the end:

<OGRVRTDataSource>
 <OGRVRTUnionLayer name="UnionedLines">
  <OGRVRTLayer name="ne_10m_admin_0_boundary_lines_disputed_areas">
   <SrcDataSource relativeToVRT="1" shared="1">
   ...
   <Field name="note" type="String" src="note" width="200"/>
  </OGRVRTLayer>
 </OGRVRTUnionLayer>
</OGRVRTDataSource>

Now loading it into QGIS automatically loads it as a single layer but, behind the scenes, it is a virtual representation of all four source layers.
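This hand edit can also be scripted. Here is a minimal sketch using only the Python standard library; the unionize_vrt function and the sample layer entries are illustrative, not from the book:

```python
# Sketch: wrap the layers of an ogr2vrt-generated VRT in an
# <OGRVRTUnionLayer>, instead of adding the two lines by hand.
import xml.etree.ElementTree as ET

def unionize_vrt(vrt_xml, union_name):
    root = ET.fromstring(vrt_xml)          # <OGRVRTDataSource>
    union = ET.Element("OGRVRTUnionLayer", {"name": union_name})
    for layer in list(root):               # move each child layer into the union
        root.remove(layer)
        union.append(layer)
    root.append(union)
    return ET.tostring(root, encoding="unicode")

# A cut-down stand-in for the all_lines.vrt produced by ogr2vrt.py
sample = (
    '<OGRVRTDataSource>'
    '<OGRVRTLayer name="ne_10m_admin_0_boundary_lines_land">'
    '<SrcDataSource relativeToVRT="1">ne/all_lines</SrcDataSource>'
    '</OGRVRTLayer>'
    '<OGRVRTLayer name="ne_10m_admin_0_boundary_lines_disputed_areas">'
    '<SrcDataSource relativeToVRT="1">ne/all_lines</SrcDataSource>'
    '</OGRVRTLayer>'
    '</OGRVRTDataSource>'
)
print(unionize_vrt(sample, "UnionedLines"))
```

Loading the rewritten file should then present the single unioned layer, exactly as with the manual edit.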

On the map in Figure 5.8 the UnionedLines layer is drawn on top using red lines, whereas all the source files (which I loaded manually) are shown with light shading. This shows that the new virtual layer covers all the source layer features.

Unioned OGR VRT layers – source layers beneath the final resulting merged layer

 


Geospatial Power Tools is 350 pages long – 100 of those pages cover these kinds of workflow topic examples.  Each copy includes a complete (edited!) set of the GDAL/OGR command line documentation as well as the following topics/examples:

Workflow Table of Contents

  1. Report Raster Information – gdalinfo 23
  2. Web Services – Retrieving Rasters (WMS) 29
  3. Report Vector Information – ogrinfo 35
  4. Web Services – Retrieving Vectors (WFS) 45
  5. Translate Rasters – gdal_translate 49
  6. Translate Vectors – ogr2ogr 63
  7. Transform Rasters – gdalwarp 71
  8. Create Raster Overviews – gdaladdo 75
  9. Create Tile Map Structure – gdal2tiles 79
  10. MapServer Raster Tileindex – gdaltindex 85
  11. MapServer Vector Tileindex – ogrtindex 89
  12. Virtual Raster Format – gdalbuildvrt 93
  13. Virtual Vector Format – ogr2vrt 97
  14. Raster Mosaics – gdal_merge 107

by Tyler Mitchell at October 28, 2014 05:42 AM

October 27, 2014

gisky

Please help test SAGA 2.1.3 RC1

SAGA GIS is planning to have another release this week (SAGA 2.1.3).

Please help with testing! Windows users can find a snapshot build here (note: 64-bit only).

Ubuntu users can also test using the builds provided on my Launchpad PPA:
https://launchpad.net/~johanvdw/+archive/ubuntu/sagacvs

Debian users can download the source packages from those daily builds (e.g. dget
https://launchpad.net/~johanvdw/+archive/ubuntu/sagacvs/+files/saga_2.1.2%2Bdfsg%2Bsvn2248%2B21~ubuntu14.10.1.dsc
) and build from source. Everyone else can, of course, also build from source.

by Johan Van de Wauw (noreply@blogger.com) at October 27, 2014 06:34 PM

October 26, 2014

Tyler Mitchell

Spatialguru change on Twitter/Google Plus accounts

As a result of moving slightly away from “spatial” as a core focus in my day-to-day work at Actian.com (I do far more with Hadoop than spatial these days), I started a new Twitter account with a less domain-specific name.

My original Twitter account was spatialguru – I still use it, but less often than before. Now I'm using 1tylermitchell instead.

When I started calling myself spatialguru it was a bit of an inside joke around our home; I didn't think it would stick around this long. :) Anyway, follow my new account if you want to see more about what I'm reading, etc.

Similarly, I have tried to migrate my previous Google plus account – tmitchell.osgeo – to a new one here.  Add me to your circles and I’ll probably add you to mine if you aren’t already.

Now, what to do about this blog name.. hmm.. more to come.

– Tyler

by Tyler Mitchell at October 26, 2014 06:45 AM

October 25, 2014

Nathan Woodrow

Thank you from Stacey and I.

Through the kindness of everyone, we made it to over $2000, enough to donate two camera packs (not one, as we had planned at the start) to hospitals that need them. One will be going to the new Gold Coast hospital where Ellie was born, and the other to another hospital that needs it. Both cameras will have Ellie's name on them in her memory. They will go a long way toward preserving the memories of those last minutes, just as they did for us with Ellie.

The money is now with Heartfelt and hopefully the cameras will be done soon. I will update this post with photos once they are done.

It was really amazing to see the number of people, many of whom I have never met in person, from all over the internet throwing in what they could to help us reach the goal. We are extremely grateful for everything everyone put in.

A massive thanks to everyone who donated:

  • Lyn Noble
  • Kylie and Nathan
  • Luke Bassett
  • Joanne Smith
  • Darryl & Angela Browning
  • Digital Mapping Solutions
  • David Baxter
  • Carl Wezel
  • Grandad and Grandma Woodrow
  • Bill Williamson
  • Helen Gillman
  • Karlie Jones
  • Lisa Gill
  • Andrew & Peta
  • Sally Drews
  • Matt Travis
  • Mummy, Daddy, Harry & Little Sis..
  • Terry Stigers
  • James McKeown
  • Jill Pask
  • Kym Zevenbergen
  • Jessica Nayler
  • Amelia Woodrow
  • Judy Burt
  • Russell and Suzann Woodrow
  • Jenny & Mark Gill
  • Emeley Sands
  • Rebecca Penny
  • Larissa Collins
  • Ross McDonald
  • Shantelle Sweedman
  • Rebbecca Ben izzy Erica
  • Aidan Woodrow & Andrew Smith
  • simbamangu
  • Sarah Rayner
  • Sassá
  • Matt Travis
  • Marco Giana
  • Heikki Vesanto
  • Jorge Sanz
  • Pure K.
  • Toby Bellwood
  • Andy Tice
  • Ujaval Gandhi
  • Matt Robinson
  • Geraldine Hollyman
  • Anonymous
  • Teresa Baldwin
  • Alexandre Neto
  • Chelsea Fell
  • Stephane Bullier
  • Nathan Saylor
  • Adrien ANDRÉ
  • Steven Feldman
  • Anita Graser
  • Chris Scott
  • Vicky Gallardo
  • Anonymous
  • Anonymous
  • Stevie Little

I will also add a massive thank you to DMS, who have been super supportive through this whole year since Ellie died, and they know very well the effect it has had on Stace and I over the past year.

In a perfect world we would never have had to run a fundraiser for this, but I'm glad Heartfelt exists to help those of us in need at the time.

Thank you from Stacey and I.

by Nathan Woodrow at October 25, 2014 02:00 PM

Tyler Mitchell

Query Vector Data Using a WHERE Clause – ogrinfo

The following is an excerpt from the book: Geospatial Power Tools – Open Source GDAL/OGR Command Line Tools by Tyler Mitchell.  The book is a comprehensive manual as well as a guide to typical data processing workflows, such as the following short sample…

Use SQL Query Syntax with ogrinfo

Use a SQL-style -where clause option to return only the features that meet the expression. In this case, only return the populated places features that meet the criteria of having NAME = ’Shanghai’:

$ ogrinfo 10m_cultural ne_10m_populated_places -where "NAME = 'Shanghai'"

... 
Feature Count: 1
Extent: (-179.589979, -89.982894) - (179.383304, 82.483323)
... 
OGRFeature(ne_10m_populated_places):6282
 SCALERANK (Integer) = 1 
 NATSCALE (Integer) = 300 
 LABELRANK (Integer) = 1 
 FEATURECLA (String) = Admin-1 capital 
 NAME (String) = Shanghai
... 
 CITYALT (String) = (null) 
 popDiff (Integer) = 1 
 popPerc (Real) = 1.00000000000 
 ls_gross (Integer) = 0 
 POINT (121.434558819820154 31.218398311228327)

Building on the above, you can also query across all available layers, using the -al option and removing the specific layer name. Keep the same -where syntax and it will try to use it on each layer. In cases where a layer does not have the specific attribute, it will tell you, but will continue to process the other layers:

   ERROR 1: 'NAME' not recognised as an available field.

NOTE: More recent versions of ogrinfo appear to not support this and will likely give FAILURE messages instead.
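The per-layer behavior described above, the same filter applied to every layer, with missing-field layers reported and skipped, can be sketched with a toy Python stand-in (not GDAL itself; the layer contents below are invented for illustration):

```python
# Toy stand-in for ogrinfo -al with -where across layers: apply the same
# attribute filter to every layer, report layers lacking the field,
# and keep going. Layers are plain dicts here, not real OGR layers.
layers = {
    "ne_10m_populated_places": [
        {"NAME": "Shanghai", "SCALERANK": 1},
        {"NAME": "Valencia", "SCALERANK": 2},
    ],
    "ne_10m_admin_0_boundary_lines_land": [
        {"ADM0_LEFT": "China"},  # this layer has no NAME field
    ],
}

def where_all_layers(layers, field, value):
    results = {}
    for name, features in layers.items():
        if not any(field in f for f in features):
            # mirror ogrinfo's message, then continue with the next layer
            print(f"ERROR 1: '{field}' not recognised as an available field.")
            continue
        results[name] = [f for f in features if f.get(field) == value]
    return results

matches = where_all_layers(layers, "NAME", "Shanghai")
print(matches)
```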



by Tyler Mitchell at October 25, 2014 08:24 AM

October 24, 2014

GeoServer Team

GeoServer 2.5.3 released

The GeoServer team is happy to announce the release of GeoServer 2.5.3. Download bundles are provided (zip, war, dmg and exe) along with documentation and extensions.

GeoServer 2.5.3 is the next stable release of GeoServer and is recommended for production deployment. Thanks to everyone who took part, submitting fixes and new functionality:

  • A new process, PagedUnique, to efficiently retrieve large numbers of unique values from a layer column
  • Legend preview functionality in the style editor
  • A long-awaited fix for poor font rendering when creating transparent maps
  • Some fixes in WFS 2.0 joins
  • The GeoJSON CRS syntax has been updated to the currently valid one (we were using an old legacy one)
  • Further GetFeatureInfo fixes for complex styles
  • Fixed scale computation when the CRS unit of measure is not meters
  • Some WMS 1.3 rendering fixes with image mosaics
  • Avoid invalid reports of leaked connections when using the SHAPE-ZIP output format against SQL views whose SQL is no longer valid
  • Check the release notes for more details
  • This release is made in conjunction with GeoTools 11.3

About GeoServer 2.5

Articles and resources for GeoServer 2.5 series:

 

by Andrea Aime at October 24, 2014 01:16 PM

GeoTools Team

GeoTools 11.3 released

The GeoTools community is happy to announce the latest GeoTools 11.3 download. This release is also available from our Maven repository and is made in conjunction with GeoServer 2.5.3.

This is a release of the GeoTools 11 stable series, recommended for production systems. The release schedule now offers six months of stable releases followed by six months of maintenance releases.

A few highlights from the GeoTools 11.3 release notes:
  • Rendering fixes related to cut geometries/labels at map tile borders
  • Several improvements/fixes to the NetCDF readers
  • Table hints for SQL Server can be specified at the store level, and it's now possible to force SQL Server to use spatial indexes
  • A good set of JDBC-related fixes for joins, multi-geometry tables, and spurious error reports against invalid SQL views
  • SortedSimpleFeatureCollection now makes full use of the merge-sort sorter and respects the system-wide in-memory limits (it previously went straight, and fully, to disk)
Thanks to Andrea (GeoSolutions) for this release.

About GeoTools 11

Summary of the new features for the GeoTools 11 series:
  • The DataStore API has a new removeSchema method to drop feature types. This new optional feature is currently implemented by the JDBCDataStore family (all spatial-database-backed stores); other stores will likely throw an UnsupportedOperationException
  • JDBCDataStore now exposes facilities to list, create and destroy indexes on database columns
  • Ability to create and drop databases from the PostgisNGFactory
  • The PostGIS data store will now call ST_Simplify when the GEOMETRY_SIMPLIFICATION hint is provided, significantly speeding up the loading of complex geometries (the renderer can already perform scale-based simplification, but simplifying before sending the data speeds up retrieval significantly)
  • ImageMosaic can now manage vector footprints for its granules, making it possible to filter out no-data or corrupted sections of the imagery
  • All properties in an SLD style can now have a local unit of measure, as opposed to specifying the unit of measure per symbolizer. For example, if you need a line width of 10 meters, its value can now be "10m"
  • Improved handling of data with 3D coordinates in JDBC data stores
  • A number of small improvements to the rendering engine, such as improved raster icon placement resulting in cleaner, less blurry output, improved label grouping, better handling of icons at the border of the map and, in general, much improved estimation of the buffer area needed to include all symbols in a map (for features that sit outside the map but whose symbols are big enough to enter it)

by Andrea Aime (noreply@blogger.com) at October 24, 2014 12:48 PM

OSGeo News

OGC honored Jacobs University Professor Peter Baumann with the Kenneth D. Gardels Award

by aghisla at October 24, 2014 12:01 PM

GeoSpatial Camptocamp

FOSS4G 2014: our presentations and workshops

This year, nearly 900 people gathered at FOSS4G to discuss new open source geospatial technologies.

Presentations and workshops

Camptocamp gave several presentations and workshops about our projects. Éric Lemoine presented "OpenLayers 3: a unique mapping library" and co-presented two workshops, one on pgRouting (with Daniel Kastl) and the other on OpenLayers 3 (with Tim Schaub and Andreas Hocevar). Jesse Eichar also presented his work on version 3 of MapFish Print.

You will find the presentations and online videos below:

As well as the workshops:

All of these presentations and workshops played to full rooms, and we noticed a great deal of interest in these projects.

Communities

Camptocamp took part in several sessions, including the one on WebGL. The interactions and exchanges we had make us enthusiastic about the evolution of this technology and its integration into projects such as OpenLayers 3, on which Camptocamp plans to work in the coming months. This integration will bring better performance and improved functionality. The prototype we developed several months ago helped move things in this direction. We will keep you informed of these developments very soon.

We were also able to talk with GeoNetwork users about the new version of the cataloguing tool's interface. It is a complete redesign of the old version using Angular, Bootstrap and OpenLayers 3. This new version is greatly appreciated by users: more intuitive and cleaner, it will appear by default in the next version of GeoNetwork (version 3).

Conclusion

Every year we are energized by the dynamism around the FOSS4G events, which Camptocamp is always delighted to take part in through its projects and contributions.

See you in Seoul for FOSS4G 2015, September 14-19, 2015!

The post FOSS4G 2014: our presentations and workshops appeared first on Camptocamp.

by Yves Jacolin at October 24, 2014 08:51 AM

gvSIG Team

1as Jornadas gvSIG Perú

00_Peru

Mañana, día 25 de octubre de 2014, tendrán lugar las 1as Jornadas gvSIG Perú. Jornadas impulsadas por la Comunidad gvSIG Perú y que en el décimo aniversario de gvSIG se suman a las ya realizadas por otros países de Latinoamérica y Caribe.

En estos diez años se han realizado Jornadas gvSIG, muchas de ellas con periodicidad anual o bianual, en Argentina, Brasil, Chile, México, Paraguay, Uruguay y Venezuela. Ahí ya otras comunidades trabajando en futuras jornadas en países como Ecuador o Bolivia. A todo esto debemos sumar las Jornadas de Latinoamérica y Caribe, que con carácter itinerante representan a la gran comunidad latinoamericana de gvSIG.

Returning to the first gvSIG Peru Conference: the Community has prepared an interesting program that includes a set of talks showing the applicability of gvSIG to different subject areas, along with training workshops that will let attendees start learning gvSIG.

And, like all gvSIG conferences, it is free and open to everyone.

In short, if you live in Peru you have no excuse: tomorrow there is a gvSIG conference!


Filed under: community, events, spanish Tagged: Perú

by Alvaro at October 24, 2014 07:54 AM

October 23, 2014

Boundless Blog

Partner Profiles: Geospatial Enabling Technologies (GET)

Boundless partners are an important part of spreading the depth and breadth of our software around the world. In this ongoing series, we will be featuring some of our partners and the ways they are expanding the reach of our Spatial IT solutions.

Geospatial Enabling Technologies (GET) was established in 2006 with the vision of becoming one of the leaders in Spatial IT solutions and services in Greece as well as more broadly in Europe and Africa. Specializing in the field of geoinformatics, GET provides robust solutions for both the public and private sector.

Since 2010, GET has deployed and supported OpenGeo Suite as part of their projects. From the very beginning, GET held a strong belief that Boundless was the premier provider for commercial open source spatial software. Through its partnership with Boundless and the use of OpenGeo Suite, GET has been able to implement projects for private companies as well as public authorities and government agencies in Greece including the Hellenic Regulatory Authority of Energy and the Greek Ministry of Agriculture. GET has also offered technical support, via the GET SDI Portal, to many public agencies like the Environmental Protection Agency of Athens and the Military Geographic Institute of Ecuador.

Hellenic Regulatory Authority for Energy (RAE)

“With the goal to provide advanced geospatial solutions based on open source, we consider Boundless an essential, valuable partner with whom we could design and implement projects effectively,” said Gabriel Mavrellis of GET.

This successful partnership derives from a relationship where each organization greatly benefits by sharing knowledge, expertise, opportunities, and vision. The developers and project managers at GET deploy projects based on OpenGeo Suite and share their knowledge and expertise with Boundless through the implementation and maintenance of solutions in Greece and abroad.

GET has successfully organized training seminars on OpenGeo Suite to give Greek engineers and developers greater familiarity with the platform and its functions. GET is also a proud contributor to QGIS, providing training and translating a large part of the QGIS user interface into Greek.

If you’d like your company to be considered for our international network of partners, please contact us!

The post Partner Profiles: Geospatial Enabling Technologies (GET) appeared first on Boundless.

by Camille Acey at October 23, 2014 03:11 PM

Gis-Lab

Revolution R Open released

On October 15, Revolution R Open was released: a build of the R language from Revolution Analytics, who for many years have produced a commercial version of R with built-in multithreading. Revolution R Open offers improved performance over the standard R distribution by using the Intel Math Kernel Library (MKL) instead of the standard R BLAS/LAPACK (no modifications to your code are required); it is fully compatible with applications, packages and scripts that work with R 3.1.1; and it is distributed under the GPLv2 license.

Revolution R Open is available for download for the following platforms:

  • Ubuntu 12.04, 14.04
  • CentOS / Red Hat Enterprise Linux 5.8, 6.5, 7.0
  • OS X Mavericks (10.9)
  • Windows® 7.0 (SP 1), 8.0, 8.1, Windows Server® 2008 R2 (SP1) and 2012

  Experimental support is also available for:

  • OpenSUSE 13.1 (the Revolution R Open build for OpenSUSE updated on October 18 works fine for me)
  • OS X Yosemite (10.10)

  Comparative benchmarks (with reproducible code) of standard R versus the Revolution Analytics R can be found here.

    In particular, here are my results for the matrix multiplication test with Revolution R Open:

    > set.seed(1)
    > m <- 10000
    > n <- 5000
    > A <- matrix(runif(m*n), m, n)
    > system.time(B <- crossprod(A))
       user  system elapsed 
     14.690   0.141   3.856

    Not bad at all! However, you should not expect a significant performance gain in third-party packages. I tested spatstat, for example: it used a single core before, and it still does. Maybe other packages will have better luck )))

    by SS_Rebelious at October 23, 2014 11:28 AM

    gvSIG Team

    gvSIG anniversary webinar videos now available: wildlife and natural areas, criminology, the geoprocessing modeler, and a watershed management course

    During October, four online seminars were held, organized by MundoGEO and given by various collaborators of the gvSIG Association: three of them in Spanish and one in Portuguese.

    The recordings of these webinars are now available. For all of you who could not watch the seminars live, here are the videos:

    We hope this kind of initiative helps the community continue its gvSIG training. And, of course, we hope to offer more seminars in the coming months.


    Filed under: events, gvSIG Desktop, portuguese, spanish, training Tagged: bacias, crime, fauna, geoprocesos, modelador, webinar

    by Alvaro at October 23, 2014 10:02 AM

    gvSIG Team

    The second Release Candidate of the gvSIG 2.1 version is now available

    The second gvSIG 2.1 Release Candidate (gvSIG 2.1 RC2) has been released [1].

    Since the first release candidate, a lot of errors have been fixed during the stabilization process, and some new functionality has been added to gvSIG 2.1 as well, such as a new layout with a TOC (table of contents), new grid functionality, memory management in the Preferences menu, and the ability to add layers to the view by dragging files directly from the file browser.

    We encourage you to test this version and send us any errors and suggestions through the users mailing lists in English [2] or Spanish [3], or directly in the bug tracker (see the interesting links for testers [4]).

    The complete list of the main new features of gvSIG 2.1 can be consulted at [5].

    Thanks for your collaboration.

    [1] http://www.gvsig.org/web/projects/gvsig-desktop/official/gvsig-2.1/downloads
    [2] http://listserv.gva.es/cgi-bin/mailman/listinfo/gvsig_internacional
    [3] http://listserv.gva.es/cgi-bin/mailman/listinfo/gvsig_usuarios
    [4] http://www.gvsig.org/web/docusr/doctesting/interesting-links-for-testers/view?set_language=en
    [5] http://www.gvsig.org/web/projects/gvsig-desktop/official/gvsig-2.1/notas-de-version/new-features


    Filed under: development, english, gvSIG Desktop, testing

    by Mario at October 23, 2014 07:47 AM

    October 22, 2014

    Even Rouault

    Blending metadata into vector formats

    This post explores a few ideas, and the resulting experiments, that I've had recently for putting metadata (or arbitrary information) into vector GIS formats that have no provision for it. One typical such format is the good old Shapefile format. A shapefile generally consists of 3 files: a .shp file that contains the geometries; a .shx file that is an index from the shape number to the offset in the .shp file where the geometry is located (to allow fast retrieval by shape ID); and a .dbf file that contains the attributes of each shape.
    Of course, the simplest way of adding metadata would be to put an additional file beside the 3 mentioned ones, but that would not be very challenging (plus there would be the risk of losing it during a copy).
    Most implementations require at least those 3 files to be present. Some allow the .dbf to be missing (e.g. GDAL/OGR). Some allow the .shx to be missing, like OpenJUMP, which doesn't read it even when it is available; this is both a feature and a drawback in situations where there are "holes" in the .shp due to editing.

    A basic solution is to add our metadata at the end of one of those 3 files. I've done tests with GDAL/OGR (based on Shapelib), GeoTools 12.0, OpenJUMP 1.7.1 (whose shapefile reader is a forked version of the GeoTools one with changes), proprietary software code-named "GM", and proprietary software "AG":
    • .dbf: all 5 implementations are happy with extra content at the end of the file
    • .shp: all implementations are happy, except OpenJUMP, which opens the file but throws a warning because it tries to interpret the additional bytes as a shape
    • .shx: all 5 implementations are happy
    So we have at least 2 possibilities that are rather portable.
    It should be checked how they react in editing use cases, like adding new features to the shapefile. Regarding GDAL/OGR, I can say that it would overwrite the extra content at the end of the .dbf and the .shx, but it would leave the extra content at the end of the .shp and write the new geometry after it.

    What if we want to "link" the metadata per feature in a way that is preserved when shapes are added? And, for the sake of exploring more possibilities, we will exclude the data-at-end-of-file track. Interleaving data and metadata is not possible in the .dbf, since the records are placed consecutively. Same for the .shx. In the .shp, we can try reserving some space between geometry records and make sure that the .shx index takes the holes into account. Because sizes and offsets in a shapefile are expressed in 16-bit words, that extra space must be a multiple of 16 bits too. This works fine for all implementations except OpenJUMP, for the same reason as above. Hmm, and what if we incorporated the metadata not between the encoded geometries, but inside them? Each geometry record is structured like this:

    Shape Id: 4 bytes
    Record length (number of 16 bit words after that field): 4 bytes
    Record content: (2 * record length) bytes
        Shape Type: 4 bytes
        Variable payload according to shape type

    We can try adding extra payload at the end of the record content while updating the record length to take it into account. We might have expected implementations to strictly check that the declared record length is consistent with the shape type, but experimentation (and code inspection of the 3 open source implementations) shows that, when they check, they only verify that the record length is at least the minimum expected for the shape type. So this works for all 5 implementations! At least on a layer with 2D polygons; it should also work for the other 2D geometry types. 3D shapes consist of the 2D information, followed by the Z information, and optionally by the M (measure) information. The M information is sometimes omitted when it is not present (this is the case for the OGR writer), so if we wanted to add metadata to 3D shapes, we would have to write dummy M information (writing not-a-number double values is commonly done to indicate that the M information is invalid).
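As a rough illustration, here is a minimal Python sketch (with a hypothetical `pad_record` helper) that builds a record following the layout described above, appends an extra payload, and bumps the declared record length accordingly:

```python
import struct

def pad_record(shape_id, record_content, extra):
    # Sizes in the record header are counted in 16-bit words,
    # so pad the extra payload to an even number of bytes.
    if len(extra) % 2:
        extra += b"\x00"
    payload = record_content + extra
    # Record header: shape number and content length in 16-bit words (big-endian).
    return struct.pack(">ii", shape_id, len(payload) // 2) + payload

# A minimal 2D point record: shape type 1, then x and y as little-endian doubles.
content = struct.pack("<idd", 1, 2.0, 3.0)         # 4 + 8 + 8 = 20 bytes
rec = pad_record(1, content, b"meta")
declared_words = struct.unpack(">i", rec[4:8])[0]  # now covers the extra bytes too
```

A reader that only checks the minimum expected length for the shape type will simply skip over the trailing bytes.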

    To go back to the .dbf file a bit: sometimes the width of string fields is larger than strictly needed. The values are left-aligned in the field and the remaining space is padded with space characters. I've tried inserting a NUL character just at the end of the string and putting the extra information after it. This works fine for the 3 C/C++-based shapefile readers (GDAL/OGR, G.M., A.G.), since the NUL character conventionally terminates a string in C/C++. Unfortunately it does not work with the 2 Java-based implementations, which do not use that convention: the extra content is displayed after the field content.
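A sketch of that padding trick (hypothetical `pack_dbf_field` helper, assuming an ASCII string field of fixed width as in the .dbf layout): the visible value is terminated by a NUL, and the hidden bytes live inside the padding.

```python
def pack_dbf_field(value, hidden, width=20):
    # DBF string fields are left-aligned and space-padded to the declared width.
    # Hide extra bytes behind a NUL terminator inside that padding.
    data = value.encode("ascii") + b"\x00" + hidden
    if len(data) > width:
        raise ValueError("value plus hidden payload exceeds the field width")
    return data.ljust(width, b" ")

field = pack_dbf_field("Paris", b"\x01\x02", width=12)
# A C-style reader stops at the NUL and only sees "Paris".
visible = field.split(b"\x00", 1)[0].decode("ascii")
```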

    As we have started exploring modifying the data itself, let's return to the .shp file. One thing to consider is that coordinates in shapefiles are stored as double-precision floating point numbers, on 64 bits using the IEEE 754 binary representation. Such a number is decomposed as follows: 1 bit for the sign of the value, 11 bits for the exponent and its sign, and 52 bits for the mantissa. The mantissa is where the significant precision of the number is stored. How big is that? Let's go back to geography a bit. The Earth has a circumference of roughly 40,000 km. If we want to map features with a precision of 1 cm, we need 40,000,000 / 0.01 = 4 billion distinct numbers. 4 billion fits conveniently in a 32-bit integer (and the optimized OpenStreetMap .pbf format stores coordinates in 32-bit integers based on that observation). So 52 bits allow 2^(52-32) = 2^20, roughly 1 million times more numbers, i.e. a precision of 10^-8 meters = 10 nanometers! We could almost map every molecule located on the Earth's surface!
    It is consequently reasonable to borrow the 16 least significant bits of the mantissa for other uses. Said differently, for every 2D point/vertex we can get back 4 bytes without any noticeable loss of precision. Depending on the shape complexity, this might not be enough to store per-feature metadata, but on a typical shapefile, if we spread the metadata over the features, we can certainly store useful content. And the really great news is that this metadata would be preserved naturally through most format conversions (at least with GDAL/OGR, whose internal geometry representation also uses 64-bit floating point numbers, and probably most other geometry engines), and into formats like Spatialite or GeoPackage that also use 64-bit floating point numbers. However, one must be aware that any other operation, such as rescaling or reprojection, would completely change the least significant bits and erase our metadata.
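A minimal sketch of the least-significant-bits idea, with hypothetical `embed16`/`extract16` helpers: reinterpret the double as a 64-bit integer, overwrite its 16 low mantissa bits, and reinterpret back.

```python
import struct

def embed16(value, bits):
    # Replace the 16 least significant mantissa bits of a double with 'bits'.
    (u,) = struct.unpack("<Q", struct.pack("<d", value))
    u = (u & ~0xFFFF) | (bits & 0xFFFF)
    (out,) = struct.unpack("<d", struct.pack("<Q", u))
    return out

def extract16(value):
    # Read the 16 least significant mantissa bits back.
    (u,) = struct.unpack("<Q", struct.pack("<d", value))
    return u & 0xFFFF

x = 2.345678            # say, a longitude in degrees
y = embed16(x, 0xBEEF)  # x and y differ only in the 16 low mantissa bits
```

The perturbation is at most 2^16 units in the last place of the coordinate, which is far below the 10-nanometer scale discussed above.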

    Admittedly this is not a new idea. People have explored similar ideas for digital watermarking and, more generally, steganography, typically used to embed copyright information or for source tracking (i.e. you generate a slightly different dataset for each customer, so that if a copy later becomes available for download, you can identify the origin of the leak), generally in an unnoticeable way. Using least significant bits is the most basic technique and can be circumvented easily by just zeroing them or adding noise. More advanced techniques operate in the spectral domain, like the DCT (Discrete Cosine Transform), DFT (Discrete Fourier Transform) or DWT (Discrete Wavelet Transform). Some techniques have been specifically designed for GIS data, using topological properties for example. The common goal of those techniques is robustness against attempts to remove the watermark from the signal, at the expense of a reduced bandwidth for the inserted information. But for regular metadata we do not need such a guarantee, and the use of least significant bits might be good enough and easy to implement.

    Any other ideas ? Sure...

    For polygons, the shapefile specification states that the vertices of the outer ring must be listed in clockwise order, but it does not specify which vertex of the outline must come first. Let's say that the top-most vertex of the polygon is numbered 0 (if several vertices share the same y coordinate, take the one with the minimum x), the following vertex in clockwise order is 1, and so on. If our polygon has 16 vertices and we serialize it starting at vertex 11, we have encoded the number 11. Combined with the information from the following polygons, we can build a longer message. In practice this idea can only work for shapefiles of sufficiently complex/dense polygons. If every polygon has 256 vertices, we can encode log2(256) = 8 bits per polygon. More generally, for a polygon with N vertices, we can encode log2(N) bits (rounded down). So we also need at least hundreds or thousands of polygons of that complexity to encode something useful. The advantage of this technique is that it is robust to rescaling, and probably to most reprojections (at least those that globally preserve the appearance of shapes), provided that the shapes are rewritten in the same order as in the original data.
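A sketch of that starting-vertex encoding (hypothetical helper names; vertex 0 is the top-most vertex, ties broken by minimum x, as above):

```python
def canonical_start(ring):
    # Index of the top-most vertex (maximum y; ties broken by minimum x).
    return max(range(len(ring)), key=lambda i: (ring[i][1], -ring[i][0]))

def encode_start(ring, value):
    # Re-serialize the ring starting 'value' vertices after the canonical one.
    start = (canonical_start(ring) + value) % len(ring)
    return ring[start:] + ring[:start]

def decode_start(ring):
    # The offset of the serialized first vertex from the canonical one.
    return (-canonical_start(ring)) % len(ring)

ring = [(0.0, 0.0), (4.0, 0.0), (5.0, 2.0), (2.0, 5.0), (0.0, 3.0)]
```

For a ring of N vertices this encodes one value in [0, N), i.e. about log2(N) bits, and survives any transformation that keeps the top-most vertex top-most.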
    That technique could also be adapted for lines. Consider a line made of (V1, V2, ..., Vn). We can, for example, simply build a multi-polyline of 2 parts, (V1, ..., Vi) and (Vi, ..., Vn), that will visually look like the original line and encodes the value i. The increase in binary encoding would be modest (4+8+8 = 20 extra bytes).
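Sketched the same way (hypothetical helpers), splitting a polyline into two parts at vertex i:

```python
def split_to_encode(line, i):
    # Encode i (0 < i < len(line) - 1) as a 2-part multi-polyline
    # that shares vertex i and looks identical to the original line.
    return [line[: i + 1], line[i:]]

def decode_split(parts):
    # The position of the shared vertex in the first part recovers i.
    return len(parts[0]) - 1

line = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0), (3.0, 1.0), (4.0, 0.0), (5.0, 1.0)]
parts = split_to_encode(line, 3)
```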

    Another technique might be to use repeated vertices. Consider a line or a polygon: while listing consecutive vertices, a repeated vertex would encode a 1 and a non-repeated one a 0. For example, if a line is made of the sequence of vertices (V1,V1,V2,V3,V4,V4,V5,V5,V6), it would be equivalent to the binary number 100110. So we could encode as many bits as there are vertices in the geometry. If needed, we could also use more repetitions to encode more bits. At one bit per vertex, this technique would on average increase the shapefile size by 50% (because, on average, half of the bits in a message are 1). It would preserve metadata perfectly across all coordinate transformations (geometry engines generally operate on vertices separately), but not across operations that remove duplicated vertices.
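A sketch of that bit-per-vertex scheme (hypothetical helpers, assuming the original geometry has no consecutive duplicate vertices of its own):

```python
def encode_bits(vertices, bits):
    # Duplicate vertex i when bits[i] is 1; one bit per original vertex.
    out = []
    for v, b in zip(vertices, bits):
        out.append(v)
        if b:
            out.append(v)
    return out

def decode_bits(vertices):
    # A repeated vertex reads as 1, a single occurrence as 0.
    bits, i = [], 0
    while i < len(vertices):
        if i + 1 < len(vertices) and vertices[i + 1] == vertices[i]:
            bits.append(1)
            i += 2
        else:
            bits.append(0)
            i += 1
    return bits

# The example from the text: (V1,V1,V2,V3,V4,V4,V5,V5,V6) encodes 100110.
vs = [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6)]
encoded = encode_bits(vs, [1, 0, 0, 1, 1, 0])
```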

    Finally, here's another idea, conceptually close to the one based on the starting vertex. Excluding implementations that don't rely on the .shx (I have no prejudice against them! Keep up the good work, folks!), we could use the order of the shapes in the .shp to encode information. Traditionally, feature 1 appears first in the .shp, followed by feature 2, and so on, but we could reorder the shapes as we wish, provided we make the .shx point to the right offsets in the .shp. If we have N shapes, there are N! (factorial(N) = N*(N-1)*(N-2)*...*2*1) ways of ordering them, so for N shapes we can encode log2(N!) bits. In practice, for 10 shapes that is 21 bits; for 100 shapes, 524 bits; for 1000 shapes, 8529 bits; and for 10000, 118458 bits. Advantages: it works for all geometry types and does not increase the file size. Drawbacks: possibly slower sequential reading because of apparently random seeking within the .shp, and it does not survive file conversion.
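The capacity figures above, and one classic way to map an integer to a shape ordering, can be sketched with a Lehmer-code approach (hypothetical helper names):

```python
import math

def capacity_bits(n):
    # Bits encodable by reordering n shapes: floor(log2(n!)).
    return int(math.log2(math.factorial(n)))

def encode_perm(n, value):
    # Map an integer in [0, n!) to a permutation of range(n) (Lehmer code).
    items, perm = list(range(n)), []
    for i in range(n, 0, -1):
        value, idx = divmod(value, i)
        perm.append(items.pop(idx))
    return perm

def decode_perm(perm):
    # Inverse mapping: recover the integer from the permutation.
    items = list(range(len(perm)))
    value, mult = 0, 1
    for i, p in zip(range(len(perm), 0, -1), perm):
        idx = items.index(p)
        value += idx * mult
        items.pop(idx)
        mult *= i
    return value
```

The permutation returned by `encode_perm` would drive the order in which shapes are written to the .shp, while the .shx keeps the logical feature order.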

    I haven't mentioned it, but for nearly all of the techniques above, especially the last ones, we would need to reserve a few bits for a CRC or some other integrity mechanism, so as to make sure that what we think is metadata really is metadata. And all of them could potentially be combined!

    by Even Rouault (noreply@blogger.com) at October 22, 2014 09:50 PM

    gvSIG Team

    Open Planet Special Issue no. 1: 10 years of gvSIG. "gvSIG: not just science. A compilation of writings"

    We are celebrating an anniversary: a decade in which the gvSIG project has been building its path. The Community is marking this milestone in many ways. To all those events and activities we want to add a compilation of writings that have appeared here and there over these 10 years. Writings that reflect how gvSIG's thinking has advanced, interpreting reality from the standpoint of a free geomatics project.

    Texts that at times, as Bertolt Brecht said, must in these times defend the obvious: that knowledge should be the heritage of humanity and not of corporations, and that collaboration and solidarity are fundamental values on which to build business models.

    To a large extent we are made of memories, and remembering what we have defended and argued at each moment also lets us define what we are today. This compilation allows us to affirm that behind us there is not just a path already travelled: the foundations of what we will become are there as well.

    I leave you with the introduction to the compilation, which I hope will encourage you all to download it and dive into it:

    gvSIG is something more than science. Economics, Science and Politics are disciplines that we consider interrelated, and we cannot fully understand them without attending to the relationships between them. This idea is recurrent in gvSIG; we proclaim it whenever we can.
    Perhaps, of all the facets of gvSIG, the least known is the one that explains how the gvSIG organization and its thinking were built. This compilation aims to help explain that process.
    This document is a compilation of some of the writings that have helped create gvSIG: blog posts, conference texts and internal documents. Many of them are accompanied by a paragraph that helps explain the rationale and objectives of each one. It is not an exhaustive work, but we do think it may interest anyone who wants to know the non-technical side of gvSIG.
    The writings are presented in order by year, with an annex of other documents at the end.
    We hope you find it interesting and that it can serve as a tool for the transformation of a future that is yet to be written.

    Dare to dream, and carpe diem.

    Descarga: http://downloads.gvsig.org/download/documents/books/Recopilatorio_10.pdf


    Filed under: gvSIG Association, opinion, spanish

    by Alvaro at October 22, 2014 11:46 AM

    GeoSpatial Camptocamp

    ASIT VD day: meet the actors of geoinformation!

    On October 28, Camptocamp invites you to the ASIT VD day, organized by the Association pour le Système d'Information du Territoire Vaudois, which will take place at the SwissTech Convention Center in Lausanne.

    For 20 years, ASIT VD has been facilitating access to geodata covering the canton of Vaud, bringing together nearly 300 members around an original public-private partnership. On this occasion, service companies, public administrations, schools and associations will present their products, activities and news. The program is available here.

    Camptocamp's Geospatial Solutions team will welcome you at its booth and present demos, training offerings and news. Whether you are an architect, a municipal official, an engineer or a technician, come and discuss your GIS projects with us!

    The post Journée ASIT VD : rencontrez les acteurs de la géoinformation ! appeared first on Camptocamp.

    by camptocamp at October 22, 2014 09:26 AM

    October 21, 2014

    Boundless Blog

    QGIS Compared: Visualization

    Any GIS professional who's been paying attention to the professional chatter in recent years will be wondering about QGIS and whether or not it might meet some or all of their needs. QGIS is open source, similar to proprietary GIS software, runs on a variety of operating systems, and has been steadily improving since its debut in 2002. With easy-to-install packages, OpenGeo Suite integration, and reliable support offerings, we obviously see QGIS as a viable alternative to proprietary desktop GIS software such as Esri's ArcGIS for Desktop.

    But will it work for you? The short answer is: most likely yes for visualization of most formats of spatial data, probably for analysis of raster and vector data, probably for geographic data editing, and probably for cartographic publishing. Those are all subjective assertions based on my personal experience using QGIS for the past seven months, but I have been using proprietary GIS for over fourteen years as an analyst and cartographer and have written a couple of books on the subject.

    By all means give QGIS a try: download and install it, drag-and-drop some data into it, and give it a spin. This is definitely a good time to evaluate it and consider adopting it across your organization.

    Visualizing spatial data in QGIS

    In this first post, I’m going to focus on visualizing spatial data in QGIS. These basic functions are straightforward and easy to do in QGIS:

    1. adding datasets

    2. moving datasets up and down in the layer hierarchy

    3. zooming around the map

    4. selecting features based on simple point-and-click

    5. selecting features based on complex selection criteria

    6. viewing attributes

    7. creating graduated color schemes


    Strength: Versatile and efficient format support

    In fact, QGIS is an effective means of viewing and exploring spatial data of almost any type. If you have complex data, you might be interested to hear that the newest release of QGIS boasts very fast, multi-threaded, rendering of spatial data that may even make it faster than leading competitors. When I began creating the map shown above, I accidentally added all of the Natural Earth 1:10m Cultural Vectors in triplicate to the project, causing some minor heart-palpitations as I realized it was going to try to render close to 100 vector layers all at once. However, my fears were unfounded as it took only a few seconds for them to render once they were all added. In the realm of visualization, it does most of the other tasks that a GIS professional would expect as well, including support for custom symbol sets (in SVG format). Adding GeoJSON data is simple, just drag a geojson file onto the Layers list. Here, we show a portion of James Fee’s GeoJSON repository of baseball stadiums:

    [Screenshot: baseball stadiums from James Fee's GeoJSON repository rendered in QGIS]

    Mixed results: Raster visualization

    That said, raster visualization can yield unexpected results depending on what is desired. Some raster datasets have tables that associate bands with RGB values such that specific cell-types are rendered certain colors. Often, landcover datasets will have this kind of structure so that, for example, the raster is rendered with blue for water, green for grass, white for ice, and so on. Unfortunately, QGIS doesn’t yet support rendering based on associated table files for rasters. Another slight irritation is the continuing use of binary ARC/INFO GRID formats by some agencies who distribute raster data to the public. If you have one of these datasets, QGIS can open it but you must point to the w001001.adf file using the raster data import button.

    Mixed results: On-the-fly reprojection

    One of the most important ways to make GIS user-friendly is to support on-the-fly reprojection. I still remember when projecting on-the-fly became part of the software that I used to use. It was the end of 1999, and life became so much easier when multiple datasets from multiple agencies in multiple projections could all be jammed together into a single project, producing a map where all the data layers lined up in the correct projected space. Before that, even if all you wanted to do was visualize the data, you had to take the extra steps of reprojecting everything into a common coordinate system, and you had to maintain multiple copies of the same dataset, which contributed to folder clutter and used up valuable disk space. QGIS supports reprojection on-the-fly, but it is an option that must be set in the project properties dialog. Some glitches with projections still seem to occur from time to time. Zooming in, for example, sometimes causes the map to zoom to a different place than expected. However, this unexpected behavior is inconsistent, not a showstopper, and may be fixed soon.


    Hidden gem: Context

    The other important aspect of visualizing data is having enough underlying context for the data. Country boundaries, city labels, roads, oceans, and other standard map data are crucial. Proprietary GIS software generally contains basemap layers that can easily be turned on and off to support visualization in this manner. QGIS also has this capability, in the form of the OpenLayers plugin, which serves up Google, OpenStreetMap, Bing, and Yahoo basemaps at the click of a button. The OpenLayers plugin is free and installs just like any other QGIS plugin—you search for it in the Plugins menu, press “install,” and make your basemap choice in the Web menu.


    Conclusion

    While QGIS may need a small amount of improvement when it comes to raster visualization and on-the-fly projection, these aren’t hindrances to a typical visualization workflow and are only mentioned here out of respect for a fair and balanced assessment. By and large, my testing has convinced me that the robust visualization capabilities that QGIS offers provide more than enough impetus for many organizations to make the switch to QGIS. In later posts, I’ll discuss how QGIS performs with respect to analysis, editing, and cartography.

    The post QGIS Compared: Visualization appeared first on Boundless.

    by Gretchen Peterson at October 21, 2014 03:22 PM

    October 20, 2014

    Peter Batty

    Reaction to Apple Maps announcement

    What they announced: As predicted by the entire world, Apple announced their new maps application today as part of iOS 6. You can see the video of the keynote presentation here, and Apple's summary information about the Maps app here. Overall my predictions from last week were pretty spot on :) ... they announced that it would have turn by turn directions with voice guidance, real time

    by Peter Batty (noreply@blogger.com) at October 20, 2014 05:16 PM

    GeoTools Team

    GeoTools 12.0 Released

    The GeoTools team is happy to announce the release of version 12.0.
    GeoTools now requires Java 7 and this is the first release tested with OpenJDK! Please ensure you are using JDK 1.7 or newer for GeoTools 12. Both Oracle Java 7 and OpenJDK 7 are supported, tested, release targets.

    There are a number of new features in this release:
    • circular strings are now supported in Oracle data stores, thanks to GeoSolutions.it for the work.
    • The content datastore tutorial was updated by Jody and tested out by the FOSS4G workshop participants.
    • GeoTools Filter interfaces have been simplified (cleaning up technical debt from GeoTools 2.3)
    • The new wfs-ng datastore is now available as a drop in replacement for the old WFS datastore, The new store provides much better support for axis orders with servers that don't know what they are doing. In order to make wfs-ng a drop-in replacement (and respond to the same connection parameters) you are limited to only using one implementation of gt-wfs-ng or gt-wfs plugins in your application at a time.
    • New advanced raster reprojection: a lot of work has been put into improving the raster reprojection story, fixing glitches around the date line and polar regions. To enable these options use the following rendering hints:
      rendererParams.put(StreamingRenderer.ADVANCED_PROJECTION_HANDLING_KEY, true);
      rendererParams.put(StreamingRenderer.CONTINUOUS_MAP_WRAPPING, true);
    This release is made in conjunction with GeoServer 2.6.0 and is available from the OSGeo maven repository.

    About GeoTools 12

    by Ian Turton (noreply@blogger.com) at October 20, 2014 01:46 PM

    Margherita Di Leo

    Call For papers Geospatial devroom @FOSDEM



    Please forward!

    FOSDEM is a free open source event bringing together about 5000 developers in Brussels, Belgium. The goal is to give open source software developers and communities a place to meet. The next edition will take place on the weekend of 31 January to 1 February 2015. This year, for the first time, there will be a geospatial devroom on Sunday, 1 February 2015!

    Geospatial technology is becoming more and more a part of mainstream IT. The idea is to bring together people with different backgrounds to better explain and understand the opportunities geospatial can offer. This devroom will host talks explaining the state of the art of geospatial technology and how it can be used in other projects.

    The geospatial devroom is the place to talk about open, geo-related data and software and their ecosystem. This includes standards and tools (e.g. spatial databases, online mapping, geospatial services) used for collecting, storing, delivering, analysing and visualizing geospatial data. Typical topics that will be covered are:

    • Web and desktop GIS applications
    • Interoperable geospatial web services and specifications
    • Collection of data using sensors/drones/satellites
    • Open hardware for geospatial applications
    • Geo-analytic algorithms/libraries
    • Geospatial extensions for classical databases (indexes, operations)
    • Dedicated databases

    HOW TO SUBMIT YOUR TALK PROPOSAL

    Are you thrilled to present your work to other open source developers? Would you like to run a discussion? Any other ideas? Please submit your proposal at the Pentabarf event planning tool at:

    https://penta.fosdem.org/submission/FOSDEM15
    When submitting your talk in Pentabarf, make sure to select the 'Geospatial devroom' as  'Track'. Please specify in the notes if you prefer for your presentation a short timeslot (lightning talks ~10 minutes) or a long timeslot (20 minutes presentation + discussion).

    The DEADLINE for submissions is **1st December 2014**

    Should you have any questions, please do not hesitate to get in touch with the organisers of the devroom at fosdem-geospatial@gisky.be!

    Johan Van de Wauw
    Margherita Di Leo
    Astrid Emde
    Anne Ghisla
    Julien Fastré
    Martin Hammitzsch
    Andy Petrella 
    Dirk Frigne
    Gael Musquet

    by Margherita Di Leo (noreply@blogger.com) at October 20, 2014 12:21 PM

    Jackie Ng

    GovHack 2014 post-mortem

    UPDATE 20 October 2014: After a bungle on my Amazon EC2 instance, the demo URL on our hackerspace project page is no longer active. I've resurrected this site on my demo server on Rackspace here. Ignore the link on the hackerspace page until that page gets updated (if it will get updated, because I can't do it)

    Earlier this month, I attended the GovHack 2014 hackathon, along with thousands of other fellow hackers all across the country. This was my first GovHack, but not my first hackathon. My previous hackathon was RHoK, and having no idea how GovHack would turn out, I entered the event with a RHoK-based mindset.

    Bad idea.

    I learned very quickly that there is a major difference between RHoK and GovHack. Going into RHoK, you have an idea of what solutions you will get to hack on over the weekend, as problem owners are present to pitch their ideas to the audience of prospective hackers. With GovHack, you need to bring your own idea of what solution you want to hack on over the weekend; all the organisers provide is the various open data and APIs. What on earth were we going to build?



    So after losing nearly half the weekend to analysis paralysis, our team (named CreativeDrought, wonder why?) agreed with my suggestion of just building a MapGuide-based mashup of various open datasets, most notably the VicRoads Crash Stats dataset and related transportation data. I knew MapGuide and its capabilities inside-out, which gave me confidence that in the remaining weekend we should still be able to crank out some sort of workable solution. At the very least, we'd have a functional interactive map with some open data on it.

    And that's the story of our CrashTest solution in a nutshell. It's a Fusion application, packed to the gills with out-of-the-box functionality from its rich array of widgets (including Google StreetView integration). The main objective of this solution was to allow users to view and analyse crash data, sliced and diced along various age, gender, vehicle type and various socio-economic parameters.



    MapGuide's rich out-of-the-box capabilities, Maestro's rapid authoring functionality and GDAL/OGR's ubiquitous data support greatly helped us. I knew that with this trio of tools we could assemble an application in the day-and-a-bit we had left to actually "hack" on something.

    Sadly, we only got as far as putting the data on the map for the most part. Our team spent more time frantically trying to massage various datasets via ogr2ogr/Excel/GoogleDocs into something more usable than actually writing lines of code! Seriously VicRoads? Pseudo-AMG? Thank goodness I found the necessary proj4 string for this cryptic coordinate system so that we could re-project a fair chunk of the VicRoads spatial data into a coordinate system that better reflects the world we want to mash this data up with!

    Still, our "solution" should hopefully open up a lot of "what if" scenarios. Imagine looking at a cluster of accident events and not being able to ascertain any real patterns or correlations, so you fire up the StreetView widget and, lo and behold, Google StreetView provides additional insights that a bird's-eye view could not. Also imagine the various reporting and number-crunching possibilities available by tapping into the MapGuide API. Imagine what other useful information you could derive if we had more time to put up additional useful datasets. We didn't get very far on any of the above ideas, so just imagine such possibilities if you will :)

    So here's our entry page if you want to have a look. It includes a working demo URL to an Amazon EC2-hosted instance of MapGuide. Getting acquainted with Amazon Web Services and putting MapGuide up there was an interesting exercise, and much easier than I thought it would be, though I didn't have enough time to use the AWS credits I redeemed over the weekend to momentarily lift this demo site out of the free usage tier performance-wise. Still, the site seems to perform respectably well on the free usage tier.

    Also on that page is a link to a short video where we talk about the hack. Please excuse the sloppy editing, it was obviously recorded in haste in a race against time. Like the solution and/or the possibilities it can offer? Be sure to vote on our entry page.

    Despite the initial setbacks, I was happy with what we produced given the severely depleted time constraints imposed on us. I think we got some nice feedback demo-ing CrashTest in person at the post-mortem event several days later, which is always good to hear. Good job team!


    So what do I think could be improved with GovHack?
    • Have a list of hack ideas (by participants who actually have some ideas) up some time before the hackathon starts. This would facilitate team building, letting participants with the skills, but without ideas easily gravitate towards people/teams with the ideas.
    • The mandatory video requirement for each hack entry just doesn't work in its current form. Asking teams to produce their own videos puts lots of unnecessary stress on teams, who not only have to come up with the content for their video, but also have to deal with the logistics of producing said video. I would strongly prefer that teams who can/want to make their own video do so, while other teams can just do a <= 3 minute presentation and have that be recorded by the GovHack organisers. Presentations would also let teams find out how other teams fared over the weekend. While everyone else in the ThoughtWorks Melbourne office was counting down to the end of the hackathon, I was still frantically trying to record my lines and trying not to flub them! I raided the office fridge for whatever free booze remained just to calm myself down afterwards. I don't want to be in that situation ever again!
    • Finally, the data itself. So many "spatial" datasets as CSV files! So many datasets with no coordinates but with addresses, horribly formatted addresses, adding even more hoops to jump through to geocode them. KML/KMZ may be a decent consumer format, but it is a terrible data source format. If ogr2ogr can't convert your dataset and manual intervention in QGIS is required to fix it, then perhaps it's better to use a different spatial data format. Despite my loathing of its limitations, SHP files would've been heavily preferred for all of the above cases. I've made my thoughts known on the GovHack DataRater about the quality of some of these datasets we had to deal with, and got plenty of imaginary ponies in the process.
    Despite the above points, the event as a whole was a lot of fun. Thanks to the team (Jackie and Felicity) for your data wrangling and video production efforts.


    Also thanks to Jordan Wilson-Otto and his flickr photostream where I was able to get some of these photos for this particular post.

    Would I be interested in attending the 2015 edition of GovHack? Given I am now armed with 20/20 hindsight, yes I would!

    by Jackie Ng (noreply@blogger.com) at October 20, 2014 10:51 AM

    Petr Pridal

    IIIF for images in cultural heritage


    Online scans of cultural heritage documents, such as old maps, books, and photographs, are being published by galleries, libraries, archives and museums. Until now there was no official standardisation activity in this area. This is now changing with the International Image Interoperability Framework (IIIF, http://iiif.io/), which enables easy access to large raster images across institutions.

    We are happy to announce a new Open Source IIIF viewer, with several useful features: 

    - Client-side rotation: pinch with fingers, Alt+Shift drag with the mouse
    - Drawing tools: polygons, lines, markers, used to annotate parts of the pictures
    - Color adjustments: saturation, lightness, etc.



    The viewer is pure JavaScript, mobile-optimised with an almost-native feel for zooming, and powered by the OpenLayers 3 open-source project, of which we are co-developers (see blog post).

    Feel free to try at: http://klokantech.github.io/iiifviewer/

    Source codes are available on GitHub: https://github.com/klokantech/iiifviewer/

    This viewer is another important piece of the mosaic of open source tools for publishing large images and maps. Together with a high-performance open-source JPEG2000 image server, it can be used to serve thousands of users in a very fast and efficient way.

    The mentioned server providing IIIF endpoint for the JPEG2000 images was developed and released by Klokan Technologies in cooperation with the National Library of Austria and their Google Books scanning project, the Austrian Books in 2013. The documentation is available at: https://github.com/klokantech/iiifserver/

    The server software runs under Linux, Mac OS X as well as Windows. There is even an easy-to-use installer. It is powered by the IIPImage server, and our code has recently been refactored and merged back into the main IIPImage repository.

    Support and maintenance for installation of this open-source software can be provided by Klokan as well as the access to JPEG2000 Kakadu license.

    by Petr Pridal (noreply@blogger.com) at October 20, 2014 08:59 AM

    October 19, 2014

    Sean Gillies

    Unix style spatial ETL with fio cat, collect, and load


    In Fiona 1.4.0 I added a fio-cat command to the CLI which works much like UNIX cat. It opens one or more vector datasets, concatenating their features and printing them to stdout as a sequence of GeoJSON features.

    $ fio cat docs/data/test_uk.shp | head -n 2
    {"geometry": {"coordinates": [...], "type": "Polygon"}, "id": "0", "properties": {"AREA": 244820.0, "CAT": 232.0, "CNTRY_NAME": "United Kingdom", "FIPS_CNTRY": "UK", "POP_CNTRY": 60270708.0}, "type": "Feature"}
    {"geometry": {"coordinates": [...], "type": "Polygon"}, "id": "1", "properties": {"AREA": 244820.0, "CAT": 232.0, "CNTRY_NAME": "United Kingdom", "FIPS_CNTRY": "UK", "POP_CNTRY": 60270708.0}, "type": "Feature"}
    

    I’ve replaced most of the coordinates with ellipses to save space in the code block above, something I’ll continue to do in examples below.

    I said that fio-cat concatenates features of multiple files and you can see this by using wc -l.

    $ fio cat docs/data/test_uk.shp | wc -l
          48
    $ fio cat docs/data/test_uk.shp docs/data/test_uk.shp | wc -l
          96
    

    If you look closely at the output, you’ll see that every GeoJSON feature is a standalone text and each is preceded by an ASCII RS (0x1E) control character. These allow you to cat pretty-printed GeoJSON (using the --indent option) containing newlines that can still be understood as a sequence of texts by other programs. Software like Python’s json module and Node’s underscore-cli will trip over unstripped RS, so you can disable the RS control characters and emit LF delimited sequences of GeoJSON (with no option to pretty print, of course) using --x-json-seq-no-rs.
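    Since the RS-prefixed output is just a JSON text sequence, stripping the separators before parsing is straightforward. A minimal sketch (a hypothetical helper, not part of Fiona or its CLI):

```python
import json

def parse_json_seq(data):
    """Parse an RS-delimited (0x1E) JSON text sequence into Python objects.

    Each text may itself contain newlines (e.g. pretty-printed GeoJSON),
    which is exactly why the RS separator exists.
    """
    parsed = []
    for chunk in data.split("\x1e"):
        chunk = chunk.strip()
        if chunk:  # ignore empty fragments before the first separator
            parsed.append(json.loads(chunk))
    return parsed

seq = '\x1e{"type": "Feature", "id": "0"}\n\x1e{"type": "Feature", "id": "1"}\n'
print(len(parse_json_seq(seq)))  # 2
```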

    To complement fio-cat I’ve written fio-load and fio-collect. They read features from a sequence (RS or LF delimited) and respectively write them to a formatted vector file (such as a Shapefile) or print them as a GeoJSON feature collection.

    Here’s an example of using fio-cat and load together. You should tell fio-load what coordinate reference system to use when writing the output file because that information isn’t carried in the GeoJSON features written by fio-cat.

    $ fio cat docs/data/test_uk.shp \
    | fio load --driver Shapefile --dst_crs EPSG:4326 /tmp/test_uk.shp
    $ ls -l /tmp/test_uk.*
    -rw-r--r--  1 seang  wheel     10 Oct  5 10:09 /tmp/test_uk.cpg
    -rw-r--r--  1 seang  wheel  11377 Oct  5 10:09 /tmp/test_uk.dbf
    -rw-r--r--  1 seang  wheel    143 Oct  5 10:09 /tmp/test_uk.prj
    -rw-r--r--  1 seang  wheel  65156 Oct  5 10:09 /tmp/test_uk.shp
    -rw-r--r--  1 seang  wheel    484 Oct  5 10:09 /tmp/test_uk.shx
    

    And here’s one of fio-cat and collect.

    $ fio cat docs/data/test_uk.shp | fio collect --indent 4 | head
    {
        "features": [
            {
                "geometry": {
                    "coordinates": [
                        [
                            [
                                0.899167,
                                51.357216
                            ],
    $ fio cat docs/data/test_uk.shp | fio collect --indent 4 | tail
                    "CAT": 232.0,
                    "CNTRY_NAME": "United Kingdom",
                    "FIPS_CNTRY": "UK",
                    "POP_CNTRY": 60270708.0
                },
                "type": "Feature"
            }
        ],
        "type": "FeatureCollection"
    }
    

    Does it look like I’ve simply reinvented ogr2ogr? The difference is that with fio-cat and fio-load there’s space in between for programs that process features. The programs could be written in any language. They might use Shapely, they might use Turf. The only requirement is that they read and write sequences of GeoJSON features using stdin and stdout. A nice property of programs like these is that you can sometimes parallelize them cheaply using GNU parallel.
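    For example, a tiny in-between filter might read features from stdin, tweak each one, and write them back to stdout. This is a sketch only: the `source` property and the LF-delimited output are my own choices, not anything Fiona prescribes.

```python
#!/usr/bin/env python
"""Sketch of a filter that could sit between `fio cat` and `fio load`:
reads an RS- or LF-delimited sequence of GeoJSON features on stdin,
tags each with a made-up `source` property, and emits an LF-delimited
sequence on stdout."""
import json
import sys

def transform(feature):
    # Any per-feature processing goes here; this one just adds a property.
    feature.setdefault("properties", {})["source"] = "fio-pipeline"
    return feature

def main():
    for text in sys.stdin.read().split("\x1e"):
        text = text.strip()
        if text:
            sys.stdout.write(json.dumps(transform(json.loads(text))) + "\n")

if __name__ == "__main__":
    main()
```

    Such a script would slot into a pipeline as, say, `fio cat in.shp | python filter.py | fio load --driver Shapefile --dst_crs EPSG:4326 out.shp`.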

    The fio-buffer program (unreleased) in the example below uses Shapely to calculate a 100 km buffer around features (in Web Mercator, I know!). Parallel doesn’t help in this example because the sequence of features from fio-cat is fairly small, but I want to show you how to tell parallel to watch for RS as a record separator.

    $ fio cat docs/data/test_uk.shp --dst_crs EPSG:3857 \
    > | parallel --pipe --recstart '\x1E' fio buffer 1E+5 \
    > | fio collect --src_crs EPSG:3857 \
    > | geojsonio
    

    Here’s the result. Unix pipelines, still awesome at the age of 41!

    The other point of this post is that, with the JSON Text Sequence draft apparently going to publication, sequences of GeoJSON features not collected into a GeoJSON feature collection are very close to being a real thing that developers should be supporting.

    by Sean Gillies at October 19, 2014 02:46 PM

    Jackie Ng

    MapGuide tidbits: MapGuide Server daemon doesn't start after reboot

    This one will be short and sweet.

    If you have rebooted your Linux server and for some reason can no longer start the MapGuide Server as a daemon, check that the /var/lock/mgserver directory exists and create it if it doesn't.

    The mgserver process will try to create and lock a file in this directory and will bail out if it can't. This directory is cleared when the Linux server is restarted (at least in my observations). None of the wrapper scripts (mgserver.sh or mgserverd.sh) actually check if the directory exists, so they blindly proceed as though this directory existed.

    We'll patch the mgserverd.sh script to create this directory if it doesn't exist before running the mgserver daemon. In the meantime, you can edit the mgserverd.sh file in the MapGuide Linux installation yourself to create the /var/lock/mgserver directory before running the mgserver process.

    by Jackie Ng (noreply@blogger.com) at October 19, 2014 01:33 PM

    Bjorn Sandvik

    Creating 3D terrains with Cesium

    Previously, I’ve used three.js to create 3D terrain maps in the browser (1, 2, 3, 4, 5, 6). It worked great for smaller areas, but three.js doesn’t have built-in support for tiling and advanced LOD algorithms needed to render large terrains. So I decided to take Cesium for a spin.


    Cesium is a JavaScript library for creating 3D globes and 2D maps in the browser without a plugin. Like three.js, it uses WebGL for hardware-accelerated graphics. Cesium allows you to add your own terrain data, and this blog post will show you how.


    Compared to the dying Google Earth plugin, it's quite complicated to get started with Cesium. The source code is well documented and the live coding Sandcastle is great, but there is a lack of tutorials and my development slows down when I have to deal with a lot of math.

    That said, I was able to create an app streaming my own terrain and imagery with a few lines of code. There is also WebGL Earth, a wrapper around Cesium giving you an API similar to the well-known Leaflet. I expect to see more functions or wrappers in the future to make things like camera positioning easier.

    How can you add your own terrain data to Cesium? 

    First, you need to check if you really need it. You have the option to stream high-resolution terrain data directly from the servers at AGI. It's free to use on public sites under the terms of use. If you want to host the terrain data on your own servers, AGI provides a commercial product - the STK Terrain Server. Give it a try, if you have a budget!

    I was looking for an open source solution, and found out that Cesium supports two terrain formats:
    1. heightmap
    2. quantized-mesh
    The tiled heightmap format is similar to the one I used for three.js. Each tile contains 65 x 65 height values, which overlap their neighbors at the edges to create a seamless terrain. Cesium translates the heightmap tiles into a uniform triangle mesh, as I did in three.js. The downside of this format is the uniform grid: you use the same amount of data to represent both flat and hilly terrain.

    The regular terrain mesh made from heightmap tiles. 
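    To make the uniform-grid point concrete, here is a rough sketch (illustrative Python, not Cesium code) of how a 65 x 65 heightmap tile expands into a fixed number of triangles, regardless of how flat the terrain is:

```python
def grid_triangles(size=65):
    """Index buffer for the uniform mesh a size x size heightmap expands to.

    Every grid cell becomes two triangles, so the triangle count depends
    only on the grid size, never on the terrain's actual shape.
    """
    indices = []
    for row in range(size - 1):
        for col in range(size - 1):
            i = row * size + col  # top-left vertex of this cell
            indices.append((i, i + size, i + 1))              # lower-left triangle
            indices.append((i + 1, i + size, i + size + 1))   # upper-right triangle
    return indices

print(len(grid_triangles()))  # 8192 triangles per 65 x 65 tile
```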

    The quantized-mesh format follows the same tile structure as heightmap tiles, but each tile is better optimised for large-scale terrain rendering. Instead of creating a dense, uniform triangle mesh in the browser, an irregular triangle mesh is pre-rendered for each tile. It's a better representation of the landscape, having less detail in flat areas while increasing the density in steep terrain. The mesh terrain is also more memory efficient and renders faster.

    The irregular terrain mesh from quantized-mesh tiles. Larger triangles have less height variation. 

    Unfortunately, I haven't found any open source tools to create tiles in the quantized-mesh format - please notify me if you know how to do it!

    You can generate heightmap tiles with Cesium Terrain Builder, a great command-line utility by Homme Zwaagstra at the GeoData Institute, University of Southampton.

    I'm using the same elevation data as I did for my three.js maps, but this time in full 10 meter resolution. I'm just clipping the data to my focus area (Jotunheimen) using EPSG:4326, the World Geodetic System (WGS 84).

    gdalwarp -t_srs EPSG:4326 -te 7.2 60.9 9.0 61.7 -co compress=lzw -r bilinear jotunheimen.vrt jotunheimen.tif

    I went for the easy option, and installed Cesium Terrain Builder using the Docker image. First I installed Docker via Homebrew.  I was not able to mount my hard drive with this method, so I downloaded the elevation data from my public Dropbox folder:

    wget https://dl.dropboxusercontent.com/u/1234567/jotunheimen.tif

    I used the ctb-tile command to generate the tileset:

    ctb-tile --output-dir ./tiles jotunheimen.tif

    The command returned 65 000 tiles down to zoom level 15. I compressed the tiles into one file:

    tar cvzf tiles.tar.gz tiles

    and used the Dropbox uploader to get the tiles back to my hard drive:

    ./dropbox_uploader.sh upload tiles.tar.gz tiles.tar.gz

    So I got 65 000 terrain tiles on my server, how can I see the beauty in Cesium? It required some extra work:
    1. First I had to add a missing top level tile that Cesium was expecting. 
    2. Cesium was also looking for a layer.json file which I had to create:

      {
        "tilejson": "2.1.0",
        "format": "heightmap-1.0",
        "version": "1.0.0",
        "scheme": "tms",
        "tiles": ["{z}/{x}/{y}.terrain?v={version}"]
      }

    3. Lastly, I added a .htaccess file to support CORS and gzipped terrain tiles: 
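    The embedded gist did not survive syndication, but for illustration, a minimal .htaccess along these lines would do the job (an assumption on my part: Apache with mod_headers enabled, and tiles stored gzip-compressed with a .terrain extension):

```
# Allow any site to request the terrain tiles (CORS)
Header set Access-Control-Allow-Origin "*"

# Tiles are stored gzipped on disk; tell browsers to decompress them
AddType application/octet-stream .terrain
AddEncoding gzip .terrain
```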

    Then I was ready to go!

    Beautiful terrain rendered with 10 m elevation data from the Norwegian Mapping Authority. Those who know Jotunheimen, will notice Skogadalsbøen by the river and Stølsnostind and Falketind surrounded by glaciers in the background.

    The terrain is a bit blocky (see mount Falketind to the left), but I'm not sure if this happens in Cesium Terrain Builder or in Cesium itself. The quantized-mesh tiles from AGI give a better result.

    I'm not able to show an interactive version, as I'm using detailed aerial imagery from "Norge i bilder", which is not publicly available.

    by Bjørn Sandvik (noreply@blogger.com) at October 19, 2014 10:01 AM

    October 18, 2014

    Gary Sherman

    PyQGIS Resources

    Here is a short list of resources available when writing Python code in QGIS. If you know of others, please leave a comment.

    Blogs/Websites

    In alphabetical order:

    Documentation

    Example Code

    • Existing plugins can be a great learning tool
    • Code Snippets in the PyQGIS Cookbook

    Plugins/Tools

    • Script Runner: Run scripts to automate QGIS tasks
    • Plugin Builder: Create a starter plugin that you can customize to complete your own plugin
    • pb_tool: Tool to compile and deploy your plugins

    Books

    October 18, 2014 06:18 PM

    Antonio Santiago

    7 reasons to use Yeoman’s angular-fullstack generator

    For my next project, after looking at candidates and reading some hundreds of lines of documentation, I finally chose to work with the so-called MEAN stack: MongoDB, Express, Angular and Node.

    As with any other technology ecosystem, the great number of frameworks, libraries and tools can make our choice a challenge, and JavaScript is no exception. But for JavaScript projects we have a lot of help, and I decided to use the awesome Yeoman tool. Yeoman combines the power of grunt, bower and npm, and adds its own salt: the generators.

    Yeoman generators are tasks responsible for building the initial project scaffolding.

    Yeoman offers an extensive set of official generators for creating webapps, Backbone apps, Chrome extensions, etc., but we can also find a myriad of non-official generators (yes, because anyone can create a new generator to satisfy his/her needs).

    Among all the generators, I chose angular-fullstack to create my MEAN project structure, and here are my reasons:

    1. Easy to install

    You need to have node and npm installed on your system. Once you have them, installing Yeoman and the angular-fullstack generator is as easy as:

    $ npm install -g yo
    $ npm install -g generator-angular-fullstack

    Once the generator is installed, you simply need to create a new folder and initialise your project:

    $ mkdir my-new-project && cd $_
    $ yo angular-fullstack [app-name]

    2. Creates both client and server scaffoldings

    The generator creates the full stack of your project, both the client and the server code. Your project will start well organised and prepared for creating an awesome RIA application.

    3. Introduces good practices in the generated code

    Because the generated code is written by experienced developers, it applies good practices in code organisation and programming style (like the environment configuration on the server side using node).

    For me, this is one of the most important reasons to use this generator. Everybody knows starting with a new technology is always hard, and that's nothing compared with starting with four new technologies :)

    4. Server side API prepared to use authentication

    Following best practices, the code is prepared so you can easily add security to your API via a node middleware, so that each request requires authentication from the client side.

    5. Supports HTML or Jade templating on the client side

    You can use any template engine on the client side, but by default the generator works with HTML and Jade. I don't really like Jade much, so I always try to use EJS or similar (warning: this last sentence is the author's opinion).

    6. Support for different CSS preprocessors

    For different opinions there are different alternatives, so angular-fullstack has support for plain CSS as well as the Stylus, Sass and LESS pre-processors. Choose your preferred one.

    7. Commands to scaffold anything

    With angular-fullstack you can create new endpoints on the server side or new client-side components (like routes, controllers, services, filters, directives, …) with a single sentence. So, the following command:

    yo angular-fullstack:endpoint message
    [?] What will the url of your endpoint to be? /api/messages

    will produce:

    server/api/message/index.js
    server/api/message/message.spec.js
    server/api/message/message.controller.js
    server/api/message/message.model.js  (optional)
    server/api/message/message.socket.js (optional)

     Conclusion

    In my opinion, angular-fullstack is a really powerful tool that simplifies our day-to-day work.

    As always, it is not a panacea; it is simply a generic tool that automates many common tasks. Because of this, we may find situations where it lacks some feature.

    by asantiago at October 18, 2014 10:17 AM

    October 17, 2014

    gisky

    Call For papers Geospatial devroom @FOSDEM

    Please forward!

    FOSDEM is a free open source event bringing together about 5000 developers in Brussels, Belgium. The goal is to provide open source software developers and communities a place to meet. The next edition will take place the weekend of 31/1 to 1/2/2015. This year, for the first time, there will be a geospatial devroom on Sunday 1/2/2015!

    Geospatial technology is becoming more and more a part of mainstream IT. The idea is to bring together people with different backgrounds to better explain and understand the opportunities geospatial can offer. This devroom will host topics explaining the state of the art of geospatial technology and how it can be used in other projects.

    The geospatial devroom is the place to talk about open geo-related data and software and their ecosystem. This includes standards and tools, e.g. for spatial databases and online mapping, and geospatial services used for collecting, storing, delivering, analysing and visualising data. Typical topics that will be covered are:

    • Web and desktop GIS applications
    • Interoperable geospatial web services and specifications
    • Collection of data using sensors/drones/satellites
    • Open hardware for geospatial applications
    • Geo-analytic algorithms/libraries
    • Geospatial extensions for classical databases (indexes, operations)
    • Dedicated databases

    HOW TO SUBMIT YOUR TALK PROPOSAL

    Are you thrilled to present your work to other open source developers? Would you like to run a discussion? Any other ideas? Please submit your proposal at the Pentabarf event planning tool at:

    https://penta.fosdem.org/submission/FOSDEM15

    When submitting your talk in Pentabarf, make sure to select the 'Geospatial devroom' as  'Track'. Please specify in the notes if you prefer for your presentation a short timeslot (lightning talks ~10 minutes) or a long timeslot (20 minutes presentation + discussion).

    The DEADLINE for submissions is **1st December 2014**

    Should you have any questions, please do not hesitate to get in touch with the organisers of the devroom at fosdem-geospatial@gisky.be!

    Johan Van de Wauw
    Margherita Di Leo
    Astrid Emde
    Anne Ghisla
    Julien Fastré
    Martin Hammitzsch
    Andy Petrella 
    Dirk Frigne
    Gael Musquet


    A final note for everyone still hesitating to come: have a look at the accepted devrooms and other tracks for this year; I'm sure you will find other interesting topics that will make your trip to FOSDEM worthwhile!


    by Johan Van de Wauw (noreply@blogger.com) at October 17, 2014 07:36 AM

    Just van den Broecke

    Into the Weather – Part 1 – Exploring weewx

    wms-time-heron-knmi

    WMS Time Example with GeoServer in Heron

    Tagging this post as “Part 1″ is ambitious. Beware: there is hardly any “geo” for now. In the coming time I hope to share some technical experiences with weather stations, weather software and ultimately exposing weather data via open geospatial standards like OGC WMS(-Time) (as in the example image on the right), WFS and in particular SOS (Sensor Observation Service). The context is an exciting project with Geonovum in the Netherlands: to transform and expose (via web services and reporting) open/raw air quality data from RIVM, the Dutch National Institute for Public Health and the Environment. The main link to this project is sensors.geonovum.nl. All software is developed as FOSS via a GitHub project. There are already some results there. I may post on these later.

    sospilot-screenshot

    Within a sub-project the aim is to expose measurements from a physical weather station via standardized OGC web services like WMS, WFS and SOS. As a first step I dived into the world of weather hardware and software, in particular their vivid open source/open data communities. A whole new world opened up to me. No surprise: location and the weather have been part of everyday life since the beginnings of humanity. OpenWeatherMap and Weather Underground are just two of the many communities around open weather data. In addition there's an abundance of FOSS weather software. Personal weather stations measure not just temperature but also pressure, humidity, rainfall and wind, up to UV radiation, and are built homebrew or bought for as cheap as $50,-.

    Weather Hacking

    Being a noob in weather soft/hardware technology I had to start somewhere and then go step-by-step. The overall “architecture” can be even depicted in text:

    weather station --> soft/middleware --> web services + reporting
    Davis Vantage Pro2 Weather Station


    Being more of a software person, I decided to start with the weather soft/middleware. Also, since Geonovum already owns a Davis Vantage Pro2 Weather Station and the Raspberry Pi B+ I plan to use is still underway…

    From what I gathered, weewx is the most widely used engine/framework within the weather FOSS community. The fact that it is written in Python with a very extensible architecture immediately settled my choice. Explaining weewx is a subject by itself, but it is very well documented. I'll try to summarise in a few sentences what weewx does:

    • collecting current and archived measurement data from the weather station (drivers)
    • storing weather data (archive and statistics) in a database (SQLite or MySQL)
    • submitting data to weather community services like Weather Underground
    • generating formatted/templated reports for your local or remote website

Each of these functionalities is highly extensible through a configurable plugin architecture. The drivers support most common weather stations, and installing is a breeze, either into a local directory or via Linux package managers. Also note that weather data come in quite a few different local units (Fahrenheit/Celsius, knots/meters etc.); weewx takes care of all of this.

So, not yet having access to a weather station, what could I do? One of the weather station drivers is the Simulator, which intelligently generates weather data for testing.

Wanting some real-world data, I set out on what appeared to be a two-hour hack: create a weather station driver that obtains its data from an open weather API. There are many, of course. I chose the OpenWeatherMap API to get data for the area around our cabin in the woods near Otterlo in the Netherlands. Writing this hard-coded driver took just a few lines of Python. The source code can be found here. To avoid overloading the API, I’ve set the time interval to 2 minutes in the weewx configuration file. It would also not be fair to report these values to any of the weather communities. If the weewx community is interested I can donate this software, with some generalization (e.g. URL via config).
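The general shape of such a driver can be sketched as follows. This is a simplified, self-contained mock of weewx’s loop-packet idea: the class name, the `fetch_fn` callback and the observation field names are my own assumptions for illustration, not the actual driver code (which lives in the linked repository).

```python
import time

# weewx's US unit-system constant (weewx.US); hard-coded here to keep the
# sketch self-contained -- a real driver imports it from the weewx package.
US_UNITS = 1

class OpenWeatherMapDriver:
    """Sketch of a weewx-style driver: genLoopPackets() yields measurement
    dicts ("loop packets") at a fixed interval. fetch_fn stands in for the
    HTTP call to the OpenWeatherMap API (field names are assumptions)."""

    def __init__(self, fetch_fn, interval_secs=120):
        self.fetch = fetch_fn
        self.interval = interval_secs  # 2 minutes, to be polite to the API

    def genLoopPackets(self):
        while True:
            obs = self.fetch()  # e.g. {'temp_c': 12.5, 'pressure_hpa': 1013.2}
            yield {
                'dateTime': int(time.time()),
                'usUnits': US_UNITS,
                # weewx stores US units: convert Celsius -> Fahrenheit,
                # hPa -> inches of mercury.
                'outTemp': obs['temp_c'] * 9.0 / 5.0 + 32.0,
                'barometer': obs['pressure_hpa'] * 0.0295299830714,
            }
            time.sleep(self.interval)
```

weewx would iterate over `genLoopPackets()` and archive each packet; the 2-minute interval mirrors the configuration mentioned above.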

All in all my first driver is still running fine in weewx. The main challenge was converting all the values between different unit systems: weewx allows, and even encourages, you to store all data in US units, while the reporting and conversion utilities always let you display your local metric units.
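As a sketch of what that boils down to (the conversion factors are standard; the dict-of-lambdas layout is my own illustration, not weewx’s actual unit machinery):

```python
# Store in US units, display in local (metric) units -- a toy version of
# what weewx's unit conversion machinery does for reports.
TO_METRIC = {
    'outTemp':   lambda f: (f - 32.0) * 5.0 / 9.0,   # degF  -> degC
    'windSpeed': lambda mph: mph * 0.44704,          # mph   -> m/s
    'rain':      lambda inch: inch * 25.4,           # inch  -> mm
    'barometer': lambda inhg: inhg * 33.8638866667,  # inHg  -> hPa
}

def to_metric(packet_us):
    """Convert a US-unit packet to metric for display; pass through
    fields (timestamps, flags) that have no unit conversion."""
    return {k: TO_METRIC[k](v) if k in TO_METRIC else v
            for k, v in packet_us.items()}
```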

weewx now runs fine as a Linux daemon on our test system, so it is time to show some results. weewx reporting is basically a website generated via Cheetah templates. The default template is basic white on black, but I found a nice template called Byteweather. You can find my continuous weather report at sensors.geonovum.nl/weather. Measurements are now building up thanks to the weewx archive database. Values mostly match Dutch weather station data, except for the rainfall… Surely we have lots of rain here, but not that much…

In the next post I hope to tell more about deploying the Raspberry Pi and connecting it to the Geonovum Davis weather station. Then there will also be more “geo” in the post, I promise!

     

    by Just van den Broecke at October 17, 2014 01:21 AM

    October 16, 2014

    Boundless Blog

    Partner Profiles: Agrisoft

    Boundless partners are an important part of spreading the depth and breadth of our software around the world. In this ongoing series, we will be featuring some of our partners and the ways they are expanding the reach of our Spatial IT solutions.

Established in 2002, Agrisoft is an Indonesian consulting firm specializing in integrated spatial solutions using open source software. Agrisoft offers consultancy, integration and training services, product development, and knowledge of clients’ business processes.

With a population of 250 million people and a booming business community, Indonesia has proved to be a growing market for Agrisoft and Boundless. While the market for spatial solutions is still young, Agrisoft encourages businesses to adopt spatial software by promoting its value and establishing it as a viable solution. OpenGeo Suite provides a complete set of tools for Agrisoft’s clients to build spatially-enabled applications, and GeoServer, OpenLayers, and PostGIS have become the preferred solutions among them.

    SIH3: Sistem Informasi Hidrologi Hidrometeorologi & Hidrogeologi

Tools and expertise from Boundless have enabled Agrisoft to expand and improve on some of their largest projects, and they count among their customers the Indonesian Geospatial Information Agency and the Republic of Indonesia Ministry of Agriculture. In a current project for the Republic of Indonesia Agency for Meteorology, Climatology and Geophysics, Agrisoft is working on the SIH3 Portal, an information system for hydrology, hydrometeorology and hydrogeology. This project makes use of applications built on OpenGeo Suite to browse and explore maps showing different kinds of information, and Agrisoft is redesigning the graphical user interface using OpenLayers 3.

    Agrisoft continually encourages the use of open source spatial software and looks to Boundless for industry best practices and guidance for their current and prospective customers.

    If you’d like your company to be considered for our international network of partners, please contact us!

    The post Partner Profiles: Agrisoft appeared first on Boundless.

    by Camille Acey at October 16, 2014 02:29 PM

    gvSIG Team

“Introduction to gvSIG 2.1” workshop in English

A workshop about the new gvSIG 2.1 version was given in April at the 1st Mexican gvSIG Conference, and it has now been translated into English thanks to Elena Sánchez and Francisco Solís.

This workshop shows the main gvSIG functionalities, including the new features introduced in gvSIG 2.1.

You can download the workshop in PDF format, together with the cartography, from [1].

    We hope it’s useful for you!

    [1] http://www.gvsig.org/plone/docusr/learning/gvsig-courses/gvsig_des_2.1_u_en/pub/documentation/


    Filed under: community, english, events, gvSIG Desktop, training

    by Mario at October 16, 2014 01:49 PM

    Even Rouault

    Warping, overviews and... warped overviews

    The development version of GDAL has lately received a few long awaited improvements in the area of warping and overview computation.

For those not familiar with GDAL, warping is mainly used to reproject datasets from a source coordinate system to a target one, or to create a "north-up" image from a rotated image or an image that has ground control points. Overviews in GDAL, also called pyramids in other GIS software, are sub-sampled (i.e. coarser-resolution) versions of full-resolution datasets, mainly used for fast display when zooming out. Depending on the utility (warper or overview computation), different resampling methods are available: bilinear, cubic, cubicspline, lanczos, average, etc.

    Cubic resampling


Up to now, the cubic resampling algorithm used when computing warped images and overviews was a fixed 4x4 convolution kernel. This is appropriate for warping, when the dimensions of the target dataset are of the same order as those of the source dataset. However, if the target dataset was downsized (the nominal case for overview computation), the result was sub-optimal, not to say plainly bad, because too few source pixels were captured, leading to a result close to what nearest neighbour would give. Now the convolution kernel dynamically uses the subsampling ratio to take into account all source pixels that have an influence on each target pixel, e.g. 8x8 pixels when subsampling by a factor of 2.
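The kernel-stretching idea can be sketched in Python. This is my own illustration using the common Keys cubic kernel (a = -0.5), not GDAL’s actual implementation:

```python
def cubic(x, a=-0.5):
    """Keys cubic convolution kernel (the common a = -0.5 variant)."""
    x = abs(x)
    if x < 1.0:
        return (a + 2.0) * x**3 - (a + 3.0) * x**2 + 1.0
    if x < 2.0:
        return a * x**3 - 5.0 * a * x**2 + 8.0 * a * x - 4.0 * a
    return 0.0

def scaled_taps(ratio, center=0.0):
    """Weights over the source pixels that influence one target pixel
    when downsampling by `ratio`: the kernel support is stretched from
    4 to 4*ratio source pixels, and the weights are normalized."""
    radius = 2.0 * ratio
    # integer source offsets falling inside the stretched support
    offsets = [i for i in range(int(center - radius) - 1,
                                int(center + radius) + 2)
               if abs(i - center) < radius]
    w = [cubic((i - center) / ratio) for i in offsets]
    s = sum(w)
    return offsets, [x / s for x in w]
```

For a subsampling ratio of 2 and a target pixel centred between source pixels, this yields 8 taps per axis, matching the 8x8 figure above; with ratio 1 it degenerates to the classic 4-tap cubic kernel.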
Of course, this involves more computation and could be slower. Fortunately, for 64-bit builds, Intel SSE2 intrinsics come to the rescue to compute the convolutions very efficiently.

For example, in GDAL 2.0dev, computing 5 overview levels on a 10474x4951 RGB raster with cubic resampling takes 2.4 seconds on a Core i5-750, compared with 3.8 seconds with GDAL 1.11:

    $ gdaladdo -ro -r cubic world_4326.tif 2 4 8 16 32

To compare both results, we can select the 5th overview level with the fresh new open option OVERVIEW_LEVEL=4 (indices are 0-based):

    $ gdal_translate world_4326.tif out.tif -oo OVERVIEW_LEVEL=4

    5th overview generated by GDAL 2.0dev

    5th overview generated by GDAL 1.11.1


So yes, faster (a bit) and better (a lot)!

A similar result can also be obtained with:

    $ gdalwarp -r cubic world_4326.tif out.tif -ts 328 155

    The "-oo OVERVIEW_LEVEL=xxx" option can be used with gdalinfo, gdal_translate and gdalwarp, or with the new GDALOpenEx() API.

Related work could involve adding resampling-method selection to the RasterIO() API, which currently only does nearest-neighbour sampling. If that interests you, please contact me.

    Overviews in warping


Related to the OVERVIEW_LEVEL open option, another long-due improvement is the selection of an appropriate overview level when warping. A typical use case is to start from a WMS or tiled dataset, e.g. the OpenStreetMap tiles, and reproject the full or a partial extent to an image with reasonably small dimensions. Up to now, GDAL would always use the full-resolution dataset (typically zoom level 18 for OpenStreetMap), which made the operation terribly slow and impractical.

Now, the following runs in just a few seconds:

    $ gdalwarp frmt_wms_openstreetmap_tms.xml out.tif -t_srs EPSG:4326 \
      -r cubic -te -10 35 10 55 -overwrite -ts 1000 1000

With the -ovr flag, you can modify the overview selection strategy, for example to use the overview level immediately before the one that would have been automatically selected (i.e. with bigger dimensions, hence more precise):

    $ gdalwarp frmt_wms_openstreetmap_tms.xml out.tif -t_srs EPSG:4326 \
      -r cubic -te -10 35 10 55 -overwrite -ts 1000 1000 -ovr AUTO-1

You can also specify a precise overview level to control the level of detail, which is particularly relevant in the case of OSM since the rendering depends on the scale:

    $ gdalwarp frmt_wms_openstreetmap_tms.xml out.tif -t_srs EPSG:4326 \
      -r cubic -te -10 35 10 55 -overwrite -ts 1000 1000 -ovr 9

(Note: -ovr 9 is equivalent to OSM zoom level 8, since GDAL_overview_level = OSM_max_zoom_level - 1 - OSM_zoom_level: 9 = 18 - 1 - 8.)
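That mapping is trivial but easy to get wrong by one; wrapped as tiny helpers (the function names are my own, for illustration):

```python
OSM_MAX_ZOOM = 18  # max zoom level of the OSM tile dataset used here

def ovr_to_zoom(ovr_level, max_zoom=OSM_MAX_ZOOM):
    """GDAL overview index -> OSM zoom level. Overview 0 is already one
    half-resolution step below the full-resolution max zoom level."""
    return max_zoom - 1 - ovr_level

def zoom_to_ovr(zoom, max_zoom=OSM_MAX_ZOOM):
    """OSM zoom level -> GDAL overview index (the inverse mapping)."""
    return max_zoom - 1 - zoom
```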

    With -ovr 9 (zoom level 8)

    With -ovr 10 (zoom level 7)

    With -ovr 11 (zoom level 6) or without any -ovr parameter

    With -ovr 12 (zoom level 5)
    (All above images are © OpenStreetMap contributors)

    Overviews in warped VRT


GDAL advanced users will perhaps know the Virtual Raster (.vrt) format. There are several flavours of VRT files, one of them being the so-called "warped VRT", which can be produced by "gdalwarp -of VRT". This is an XML file that captures the name of the source dataset being warped and the parameters of the warping: output resolution, extent, dimensions, transformer used, etc. It can be convenient for on-the-fly reprojection without needing to store the result. Similarly to regular warping, a warped VRT can now make use of the overviews of the source dataset to expose "implicit" overviews in the warped VRT dataset, which makes it possible to use a warped VRT in a GIS viewer with decent performance when zooming out. Among others, this will benefit QGIS, which uses the "auto-warped-VRT" mechanism when opening a raster that is not a "north-up" dataset.

Still playing with our OpenStreetMap dataset, let's create a warped VRT around western Europe:

    $ gdalwarp frmt_wms_openstreetmap_tms.xml out.vrt -t_srs EPSG:4326 \
      -r cubic -te -10 35 10 55 -overwrite -of VRT

We can see that the VRT now advertises overviews:

    $ gdalinfo out.vrt
    [...]
    Size is 4767192, 4767192
    [...]
    Band 1 Block=512x128 Type=Byte, ColorInterp=Red
      Overviews: 2383596x2383596, 1191798x1191798, 595899x595899,
                 297950x297950, 148975x148975, 74487x74487,
                 37244x37244, 18622x18622, 9311x9311, 4655x4655,
                 2328x2328, 1164x1164, 582x582, 291x291, 145x145,
                 73x73, 36x36, 18x18


    I'd like to thank Koordinates and Land Information New Zealand for funding those improvements.

    by Even Rouault (noreply@blogger.com) at October 16, 2014 01:19 PM