Welcome to Planet OSGeo

April 24, 2015

OSGeo News

Open Letter for the need for Open Standards in LiDAR

by jsanz at April 24, 2015 10:49 PM

BostonGIS

PostGIS In Action 2nd Edition fresh off the presses

Just got our shipment of PostGIS In Action 2nd Edition. Here is one of them:

It's a wee bit fatter than the first edition (by about 100 pages). I'm really surprised the page count didn't go over 600 pages given the amount of additional ground this edition covers. This edition covers a lot more raster functionality than the first edition and has chapters dedicated to PostGIS topology and the PostGIS TIGER geocoder.

by Regina Obe (nospam@example.com) at April 24, 2015 09:08 PM

Jackie Ng

Custom GDAL binaries for MapGuide Open Source 2.6 and 3.0

A question that normally gets asked on our mailing list is how to get the GDAL FDO provider to work with formats like ECW or MrSID. Our normal response (provided you are licensed to use ECW, MrSID or any other non-standard GDAL-supported format) is to point you over to GIS Internals to grab one of their custom Windows GDAL binaries, to replace the GDAL DLLs in your current MapGuide installation.

The reason we ask you to do this is that when we build GDAL for use with the FDO provider, we build GDAL using only the standard profile of supported formats; that is to say, any format listed here where the "Compiled by default" option is unconditionally "Yes". It is not possible for us logistically to build GDAL/OGR with the proverbial kitchen sink of raster/vector format support, so that's where GIS Internals comes in, as their builds of GDAL/OGR have greater raster/vector format support. As long as you grab the same release of GDAL and make sure to pick the build compiled with the same MSVC compiler used to build the release of MapGuide/FDO you're using, you should then have GDAL and OGR FDO providers with expanded vector and raster format support.
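If you want to double-check which formats a particular set of GDAL binaries actually supports after swapping them in, listing the registered drivers is a quick sanity check. Here is a sketch using the GDAL Python bindings (running gdalinfo --formats from the command line reports the same information):

from osgeo import gdal

gdal.AllRegister()  # make sure every compiled-in driver is registered

# Print every raster driver this GDAL build supports
for i in range(gdal.GetDriverCount()):
    drv = gdal.GetDriver(i)
    print("%s - %s" % (drv.ShortName, drv.LongName))

# Or probe for one specific driver, e.g. ECW
print("ECW supported: %s" % (gdal.GetDriverByName("ECW") is not None))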

This suggestion worked up until the 2.5.2 release, when the right version of GDAL, built with the right version of MSVC (2010 at the time), was available for download. But for the 2.6 and the (pending) 3.0 releases this suggestion does not apply, because that site does not offer an MSVC 2012 build of GDAL 1.10, which is what MapGuide 2.6 and 3.0 both use for their GDAL FDO provider.

So this leaves some of you in a pickle: stuck on 2.5.2 and unable to move to 2.6 or 3.0 because you need to support one of these esoteric data formats. Well, I have partially alleviated this issue for you.

Tamas has made not only these custom GDAL binaries available for download, but also the development kits used to build them. So over the past few days I grabbed the MSVC 2012 dev kit, paired it with our internal GDAL 1.10 source tree in FDO, made a few tweaks to some makefiles here and there, and here's the end result.

A custom build of GDAL 1.10 with support for the following additional raster data formats:
  • ECW (rw): ERDAS Compressed Wavelets (SDK 3.x)
  • JP2ECW (rw+v): ERDAS JPEG2000 (SDK 3.x)
  • FITS (rw+): Flexible Image Transport System
  • GMT (rw): GMT NetCDF Grid Format
  • netCDF (rw+s): Network Common Data Format
  • WCS (rovs): OGC Web Coverage Service
  • WMS (rwvs): OGC Web Map Service
  • HTTP (ro): HTTP Fetching Wrapper
  • Rasterlite (rws): Rasterlite
  • PostGISRaster (rws): PostGIS Raster driver
  • MBTiles (rov): MBTiles
And support for the following additional vector formats:
  • "PostgreSQL" (read/write)
  • "NAS" (readonly)
  • "LIBKML" (read/write)
  • "Interlis 1" (read/write)
  • "Interlis 2" (read/write)
  • "SQLite" (read/write)
  • "VFK" (readonly)
  • "OSM" (readonly)
  • "WFS" (readonly)
  • "GFT" (read/write)
  • "CouchDB" (read/write)
  • "ODS" (read/write)
  • "XLSX" (read/write)
  • "ElasticSearch" (read/write)
  • "PDF" (read/write)
You might notice some omissions from this list. Where's MrSID? Where's Oracle? Where's $NOT_COMPILED_BY_DEFAULT_DATA_FORMAT?

Well, I did say I have partially alleviated the issue, not fully alleviated it. The problem is that, due to what I gather are licensing restrictions, the development kit can't bundle the headers and libraries needed to build GDAL with driver support for MrSID, OCI, etc. As such, the custom build of GDAL I have made available does not include support for those formats.

What can be done about this? For something like Oracle, we already have a dedicated FDO provider. For something like MrSID? I'm afraid you're out of luck. You'll either have to stick with the 2.5.2 release that much longer, or just bite the bullet and gdal_translate those MrSID files to something more accessible. I've heard some good things about Rasterlite. I've also heard that you can get great performance out of carefully prepared GeoTiffs.

Anything to liberate yourself from MrSID, because you won't see GDAL binaries with this support built in for the foreseeable future.
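As a rough sketch of that escape route (the file names and creation options below are purely illustrative, and the reading side requires a GDAL build with MrSID support, such as one from GISInternals):

import subprocess

# Convert a MrSID file to a tiled, compressed GeoTiff
subprocess.check_call([
    "gdal_translate",
    "-of", "GTiff",
    "-co", "TILED=YES",      # internal tiling speeds up windowed reads
    "-co", "COMPRESS=JPEG",  # compact but lossy; use LZW for lossless
    "input.sid", "output.tif",
])

# Pre-built overviews are a big part of a "carefully prepared" GeoTiff
subprocess.check_call([
    "gdaladdo", "-r", "average", "output.tif", "2", "4", "8", "16",
])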

You can find the download links for the custom GDAL builds in our updated GDAL provider guide for MapGuide 2.6 and 3.0.

One more thing worth reiterating before I finish. Some formats, like ECW, require you to have a license to use the technology in a server environment. Other formats carry their own licensing shenanigans, so make sure you are properly licensed to use any of the additional formats made available with this custom build of GDAL. The GISInternals build system repo on GitHub has all the applicable licenses in RTF format for your perusal.

Also worth pointing out: this custom build of GDAL is not supported by me or anyone on the MapGuide development team. I only make this build available so that you can access these additional data formats should you so choose, and nothing more. There is no obligation (inferred or otherwise) on us to provide support for any issues that may arise as a result of using this custom GDAL release. Use this custom build of GDAL at your own discretion.

/end lawyer-y talk. Enjoy!

by Jackie Ng (noreply@blogger.com) at April 24, 2015 01:44 PM

Micha Silver

Get Landsat 8 Reflectance with GRASS-GIS

Landsat 8 tiles have been available for more than two years now. In addition to the obvious advantages of these new satellite images, such as higher (16-bit) radiometric resolution and extra bands, there are some subtle additions to the metadata file that make image processing easier.

Firstly, the new metadata files are formatted for easy parsing. But more importantly, we now have a pair of parameters titled REFLECTANCE_MULT_BAND_* and REFLECTANCE_ADD_BAND_* (one pair for each band). With these parameters we can calculate the Top Of Atmosphere (TOA) reflectance directly, without the need for the intermediate step of radiometric calibration. Read the USGS Using Landsat 8 page for full details. These two parameters parallel the well-known gain and bias parameters from earlier Landsat missions, but they already take Esun and the earth-sun distance into account. Thus reflectance can be obtained straight away: we only need to divide by the sine of the sun elevation (equivalently, the cosine of the sun zenith angle) to get corrected TOA reflectance.
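Spelled out, following the USGS conversion documentation, with DN the quantized band value, M and A the per-band REFLECTANCE_MULT/REFLECTANCE_ADD values, and θSE the SUN_ELEVATION angle from the metadata (θSZ, the sun zenith angle, is simply 90° minus θSE):

reflectance' = M * DN + A
reflectance  = reflectance' / sin(θSE) = reflectance' / cos(θSZ)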

We GRASS users can easily put the above numbers into an r.mapcalc expression to get reflectance for each band. But we also want to take advantage of the scripting capabilities of GRASS to batch process all bands for several tiles. Choosing Python as our scripting language gives us access to the OS libraries we need as well as to the GRASS Python library. We first loop through all the Landsat 8 directories under the top-level folder where we downloaded the original tiles, reading the metadata file for each tile and creating a Python dictionary of the entries we need. An inner loop then imports each of the individual bands in the tile as a GRASS raster and runs the mapcalc module on it, creating TOA reflectance for that band. When the inner loop finishes importing and processing all the bands, the outer loop moves on to the next Landsat directory and cycles through the bands in that tile, and so on.
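For illustration, a stripped-down version of that double loop could look like the sketch below; the directory layout, file naming and band range are assumptions, and the script linked in the next paragraph is the authoritative version. Note that r.mapcalc's sin() works in degrees, which matches the SUN_ELEVATION value in the metadata.

import glob
import os

import grass.script as gscript

def read_mtl(mtl_path):
    # Parse the 'KEY = value' lines of the MTL file into a dictionary
    meta = {}
    with open(mtl_path) as mtl:
        for line in mtl:
            if "=" in line:
                key, _, value = line.partition("=")
                meta[key.strip()] = value.strip().strip('"')
    return meta

# Outer loop: one subdirectory per downloaded Landsat 8 tile
for tile_dir in sorted(glob.glob("/data/landsat8/LC8*")):
    meta = read_mtl(glob.glob(os.path.join(tile_dir, "*_MTL.txt"))[0])
    sun_elevation = float(meta["SUN_ELEVATION"])
    # Inner loop: import each band and convert it to TOA reflectance
    for band in range(1, 8):
        tif = glob.glob(os.path.join(tile_dir, "*_B%d.TIF" % band))[0]
        raster = "%s_B%d" % (os.path.basename(tile_dir), band)
        gscript.run_command("r.in.gdal", input=tif, output=raster)
        gscript.run_command("g.region", raster=raster)
        mult = float(meta["REFLECTANCE_MULT_BAND_%d" % band])
        add = float(meta["REFLECTANCE_ADD_BAND_%d" % band])
        gscript.mapcalc("%s_toa = (%s * %s + %s) / sin(%s)"
                        % (raster, mult, raster, add, sun_elevation))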

For those interested in the nitty-gritty, you're welcome to clone a small Python script I've put on GitHub that does the above.

by Micha Silver at April 24, 2015 12:40 PM

GeoSpatial Camptocamp

OpenLayers 3: Code Sprint in Austria

At the beginning of April, three Camptocamp developers attended the OpenLayers 3 Code Sprint which took place in Schladming (Austria). With this blog post, we would like to provide details on some of the work the Camptocamp team did at this code sprint.

Rendering tests

Since our work on drawing points with WebGL, we have wanted to add "rendering tests" to the library. Drawing features with WebGL is quite complex, so we've always felt that having a way to test the library's rendering output was mandatory for the future.

So we worked on a rendering test framework and on actual rendering tests during the Code Sprint. The rendering test framework is based on the Resemble.js library for image comparison, and on the SlimerJS scriptable browser for running rendering tests in a headless way on Travis CI.

SlimerJS is similar to PhantomJS, except that it is based on Gecko, the browser engine of Mozilla Firefox. Contrary to PhantomJS, SlimerJS supports Canvas as well as WebGL, which was one of the main requirements for us.

Compile your OpenLayers 3 apps with Closure Compiler

At Camptocamp, we compile our JavaScript applications with the Closure Compiler. The Closure Compiler allows for high compression rates and, based on annotations in the code, "type-checks" the JavaScript code. The Closure Compiler is a very good tool for maintaining large JavaScript code bases with high constraints in terms of performance.

At the Code Sprint in Schladming, we improved closure-util, our node-based tool for Closure, to make using the Closure Compiler in OpenLayers 3 applications much easier. We also wrote a tutorial showing how to compile applications together with OpenLayers 3. The tutorial will become an official OpenLayers 3 tutorial when OpenLayers v3.5.0 is released (beginning of May 2015).

Drawing Lines and Polygons with WebGL

Some time ago, we added support for drawing points with WebGL to the library. This work was sponsored by Météorage, which uses OpenLayers 3 and WebGL to draw large numbers of lightning impacts on a map.

But this was just a first step towards WebGL vector support in OpenLayers 3. Obviously, we also wanted to support drawing lines, polygons and labels.

We took the opportunity of the Code Sprint to take a stab at it! We worked on a first implementation to demonstrate feasibility and to verify that the current rendering architecture will work for WebGL lines and polygons.

The results so far are encouraging, and we're looking forward to continuing this work. Check out the dedicated blog post we wrote for more detail.

Vector extrusion with ol3-cesium

We added support to the KML parser for reading the extrude and altitudeMode values that may be associated with geometries in KML documents. With some additions to ol3-cesium, the extrude and altitudeMode values may be used, for example, to display extruded buildings on the Cesium globe. See the ol3-cesium extrude example for a demo.

Vector Tiling

OpenLayers 3 already includes basic support for decoding and rendering Vector Tiles; see the tile-vector example.

But OpenLayers 3 doesn’t yet support the MapBox Vector Tile Spec, and rendering artefacts may be present at tile boundaries for polygon features with outlines/strokes. We think that full support for Vector Tiles is important for the library, so our goal is to fill the gaps.

At the Code Sprint, we started working on an OpenLayers 3 format for decoding MapBox Vector Tiles and creating vector objects that can be used by the library. We also discussed and designed rendering strategies that we could use for properly displaying Vector Tiles. We want to experiment with buffered Vector Tiles and clipping at rendering time to prevent rendering problems at tile boundaries.

We think that Vector Tiles present a number of advantages over standard/current vector strategies. To name a few: data caching, data simplification performed once on the server, the natural index formed by the tiles, and a compact and efficient format.

We’re then looking forward to improving the support for Vector Tiles in OpenLayers 3 and making Vector Tiles as mainstream as possible for application developers.

Feel free to contact us if you want to discuss these topics with us!

This article, OpenLayers 3: Code Sprint in Austria, appeared first on Camptocamp.

by Eric Lemoine at April 24, 2015 11:44 AM

GeoSpatial Camptocamp

OpenLayers 3: towards drawing lines and polygons with WebGL

Last year, we investigated and then implemented massive and very fast rendering of hundreds of thousands of points using WebGL. We took the opportunity of the recent OpenLayers 3 code sprint in Austria to implement a Proof of Concept (PoC) demonstrating line and polygon rendering using WebGL.

This first example shows a map with the base layer, points, lines and polygons, all drawn using WebGL.

WebGL lines and polygons

The second example shows countries (Polygon and MultiPolygon features), also drawn using WebGL. If the examples do not display correctly, please check here whether your browser supports WebGL.

WebGL vector layer

Even though the developments are at an early stage, it is easy to notice that panning, rotating and zooming the map are already very smooth.

Rendering lines

WebGL supports rendering lines out of the box, and this is what we are using in this prototype. We create an array of pairs of vertices (a batch) and pass it to WebGL for rendering.

There are a few WebGL limitations though:

  • joins are not supported: there is some space between two consecutive segments;
  • thick lines are only supported on Mac and Linux: all lines have a width of 1 pixel on Microsoft Windows;
  • lines are aliased: they appear rough on the screen.

WebGL lines

Despite these limitations, we intentionally implemented line rendering this way as it is the simplest technique and it works well for the scope of this PoC.

In order to overcome these limitations, we would have to triangulate the lines; basically, line ends would be duplicated on the CPU and then efficiently moved in the vertex shader. A line segment would then be represented by two triangles.

Rendering polygons

WebGL has no built-in support for rendering polygons, so we implemented it ourselves, using only two draw calls.

  • First, the polygon interiors are rendered using a batch of triangles. We triangulate the polygons and create a batch with an array of vertices and an array of triangle indices. The color is stored on each of the vertices, which allows us to draw all the polygons in a single draw call.
  • Then the polygon outlines are rendered using a batch of lines. The line renderer described above is reused.

A limitation of our technique is that we duplicate the color on each vertex, since using a uniform would prevent batching. An idea to save resources while still allowing batching would be to use a color atlas and store only a texture coordinate on each of the vertices; only one draw call would be required.

As a side note, we use the promising earcut library for triangulating the polygons in this PoC.

This prototype provides a concrete first step toward implementing fast and reliable rendering of lines and polygons using WebGL. We are pleased by the smoothness we already get without any optimization. At Camptocamp, we are very excited by the performance and the full control WebGL offers. If you are also looking to push the limits of current rendering, please get in touch with us!

This article, OpenLayers 3: towards drawing lines and polygons with WebGL, appeared first on Camptocamp.

by Guillaume Beraudo at April 24, 2015 09:50 AM

Nathan Woodrow

PSA: Please use new style Qt signals and slots not the old style

Don’t do this:

self.connect(self.widget, 
             SIGNAL("valueChanged(int)"), 
             self.valuechanged)

It’s the old way, the crappy way. It’s prone to error and typing mistakes. And who really wants to be typing strings as functions and arg names in it. Gross.

Do this:

self.widget.valueChanged.connect(self.valuechanged)
self.widget.valueChanged[str].connect(self.valuechanged)

Much nicer. Cleaner. Looks and feels like Python, not some mash-up between C++ and Python. The int overload is the default, so that is what a plain connect will use; if you need to pick a particular signal overload, you can use [type], as in the second line.

Don’t do this:

self.emit(SIGNAL("changed()"), value1, value2)

Do this:

from PyQt4.QtCore import QObject, pyqtSignal

class MyType(QObject):
    changed = pyqtSignal(str, int)

    def stuff(self):
        self.changed.emit(value1, value2)

pyqtSignal is a type you can use to define your signal. It comes with type checking; if you don't want type checking, just use pyqtSignal(object).
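As a minimal, self-contained sketch of the new style end to end (the class and values are made up; assumes PyQt4, which QGIS currently uses):

from PyQt4.QtCore import QObject, pyqtSignal

class MyType(QObject):
    changed = pyqtSignal(str, int)  # typed, new-style signal

    def stuff(self):
        self.changed.emit("scale", 100)

def on_changed(name, value):
    print("%s %s" % (name, value))

obj = MyType()
obj.changed.connect(on_changed)  # no strings, so typos fail at connect time
obj.stuff()                      # prints: scale 100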

Please think of the poor kittens before using the old style in your code.


Filed under: pyqt, python, qgis Tagged: pyqt, qgis, qt

by Nathan at April 24, 2015 05:20 AM

April 23, 2015

Slashgeo (FOSS Articles)

Batch geonews: LiDAR Standards Woes, Maps on Apple Watch, Esri Maps for Office 3, and much more

Here’s the recent geonews in batch mode.

On the open source / open data front:

On the Esri front:

On the Google front:

Discussed over Slashdot:

In the everything else category:

In the maps category:

The post Batch geonews: LiDAR Standards Woes, Maps on Apple Watch, Esri Maps for Office 3, and much more appeared first on Slashgeo.org.

by Alex at April 23, 2015 06:21 PM

Bjorn Sandvik

Real time satellite tracking of your journeys - how does it work?

I'm back in Oslo after my 25-day ski trip across Nordryggen in Norway. It was a great journey, and I would highly recommend doing all or parts of it if you enjoy cross-country skiing. Just be prepared for shifting weather conditions.

A goal of the trip was also to test my solution for real-time satellite tracking, explained in several of my previous blog posts. It worked out really well, and people were able to follow along from the comfort of their sofa.


I fastened a SPOT Satellite Messenger to the top of my backpack and left the device in tracking mode while skiing. The device sent my current position every 5 minutes, allowing the map to be updated even without any mobile coverage. When we arrived at a mountain hut, I pressed the OK button to mark that we had found a bed for the night. I also programmed a button to signal a snow cave, in case we wouldn't reach a hut. Luckily we didn't have to use it :-)

My map and elevation plot of the 25-day ski trip across Nordryggen. Most of the trip is above the tree line, and there are only 5 road crossings in total.

The SPOT messenger only sends my time and position, so I had to create a web service to retrieve extra information about each location. I'm using a service from the Norwegian Mapping Authority to retrieve the altitude, the nearest place name and the terrain type. Earlier this winter, I found that the service didn't return any altitude when I was skiing on lakes, so I'm also using the Google Elevation API to avoid gaps in the elevation profile.
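For illustration, a bare-bones version of such a fallback lookup could look like this (error handling and API key management omitted; endpoint and response layout as documented for the Google Elevation API):

import requests

def google_elevation(lat, lng, api_key):
    # Return the elevation in metres for a WGS84 coordinate
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/elevation/json",
        params={"locations": "%f,%f" % (lat, lng), "key": api_key},
    )
    resp.raise_for_status()
    return resp.json()["results"][0]["elevation"]

# e.g. google_elevation(60.85, 8.55, "YOUR_API_KEY")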

By knowing the time and location, I could create an automatic service that obtains more information to enrich the map. In addition to elevation and place name, I've added a weather report. The image shows Bjordalsbu, the highest-lying hut on the route (1,586 m), which we visited in a strong breeze.

While skiing, I used Instagram to post photos that would instantly show on the map as well. This required mobile coverage, which is sparse in the mountains. After the trip, I synced my camera photos with my GPS track to be able to show them along the route.

Click "Bilder" in the top menu to see the photos along the route. 

A few of my photos:

Eidsbugarden in Jotunheimen. 

Iungsdalshytta in Skarveimen. 

Taumevatn in Ryfylkeheiane.

Gaukhei in Setesdalsheiane. 

End of trip - and the snow - in Ljosland. 
More photos in my Google+ album.

by Bjørn Sandvik (noreply@blogger.com) at April 23, 2015 04:17 PM

Slashgeo (FOSS Articles)

Evolve! New techs for developer GIS, meet the latest SuperGIS Engine 3.3

Supergeo Technologies, the leading global provider of complete GIS software and solutions, has officially released SuperGIS Engine 3.3 for GIS developers worldwide to customize GIS applications, meeting diverse demands in various fields. Many of the components have been updated in the latest version, providing renewed and advanced data access and analysis functions such as mainstream database compatibility, table mashup and data processing. Moreover, enhanced mapping elements allow developers to build impressive data displays and detailed labeling designs for clients.

Developed by Supergeo by integrating mapping and GIS technologies, SuperGIS Engine 3.3 is a COM-structured development component that provides developers with complete GIS core components. The developed applications can be seamlessly embedded into programming languages in the Windows development environment, helping integration with other systems for strong system development.

SuperGIS Engine 3.3 offers complete development resources, so GIS programmers and developers can efficiently develop applications with GIS functionality such as layer display, editing, querying and spatial database access. In addition, hundreds of GIS-related objects, diverse controls, comprehensive development samples and object diagrams are provided for technical users to effectively build programs and deploy them to multiple end users. SuperGIS Engine developers also get access to online content such as sample code and GIS application designs, enabling them to bring flexible and productive solutions to end users.

To learn more about SuperGIS Engine, please visit the product page: www.supergeotek.com/ProductPage_SE.aspx.

Feel free to download the trial from:

http://www.supergeotek.com/download_6_developer.aspx

# # #

About Supergeo

Supergeo Technologies Inc. is a leading global provider of GIS software and solutions. Since its establishment, Supergeo has been dedicated to providing state-of-the-art geospatial technologies and comprehensive services for customers around the world. It is our vision to help users utilize geospatial technologies to create a better world.

Supergeo software and applications have spread across the world to become the backbone of the world's mapping and spatial analysis. Supergeo is a professional GIS vendor, providing GIS users with complete GIS solutions for desktop, mobile, server, and Internet platforms.

Marketing Contact:

Patty Chen

Supergeo Technologies Inc.

5F, No. 71, Sec. 1, Zhouzi St., Taipei, 114, TAIWAN

TEL:+886-2-2659 1899

Website: http://www.supergeotek.com

Email: patty@supergeotek.com

The post Evolve! New techs for developer GIS, meet the latest SuperGIS Engine 3.3 appeared first on Slashgeo.org.

by Supergeo at April 23, 2015 12:13 PM

April 22, 2015

Stefano Costa

Electoral typography: the regional elections in Liguria

Regional elections are coming up in Liguria too. If typographers were the ones voting, Raffaella Paita would win. The real elections are another story.

The launch of Hillary Clinton's and Marco Rubio's campaigns for the US presidency has stirred up some interest in typography in the mainstream media as well. Let's take a look at the main candidates for Liguria, in strict polling order.

Nexa

Raffaella Paita, the PD candidate, uses Nexa, designed by Font Fabric in 2012. It is a modern geometric font with a wide range of weights. Paita's campaign uses Nexa pervasively, adopting it both in bold uppercase for the slogan and the surname, and in lowercase for body text, including on the website, with the whole range of available weights. Although it is a geometric font, Nexa is fairly legible even for medium-length texts, especially in the thinner weights, even if it is not necessarily pleasant.

Block Condensed

Giovanni Toti, the Forza Italia candidate, uses Block Condensed, designed by Hermann Hoffmann in 1908 and distributed by many type foundries, including Linotype and Adobe. The poster is written entirely in uppercase letters, with the exception of the social media references, and there appears to be no official campaign website.

Block is a sans-serif font with slightly ragged edges, which give it a vaguely rustic look, warmer than most sans-serif typefaces. Its legibility is ensured above all by the exclusive use of uppercase letters.

Kabel

Alice Salvatore is the Movimento 5 Stelle candidate and uses Kabel, designed by Rudolf Koch in 1927. Kabel is a humanist geometric font, today distributed by several type foundries. To an untrained eye, Kabel is not particularly different from Nexa, and this convergence is interesting (I leave judgments on the timing of the typographic choices to others), if we consider that this kind of font can convey a sense of modernity, efficiency and precision.

Salvatore's campaign uses Kabel in uppercase and lowercase letters, in the regular weight. The lowercase letters are not particularly legible, especially with the reduced letter spacing: the result is not the best. The various weights available in many digital versions of the font are not exploited.

~

If I had to vote right now, the typographic election campaign would be won by Raffaella Paita. Paita uses a modern font, created by a young type foundry that turns out highly appreciated typefaces, and it is no coincidence that Hillary Clinton's official font is very similar. Her campaign as a whole is the most carefully designed from a typographic point of view, integrated without too many rough edges across the various media (even though both Paita and Salvatore use WordPress for their websites, the M5S campaign is more "minimal"). The flip side is that Paita has probably invested more resources than the other candidates in building a high-level professional communication campaign and in spreading her slogan, while the other two candidates considered here put the slogan on their posters in the form of a #hashtag, a sort of call to action for their potential voters, perhaps already a bit stale for those who actually use hashtags.

In the coming weeks, time permitting, we will also look at the other candidates for the presidency, and at a few candidates for the regional council (even though what I have seen so far is very boring).

The real elections are a whole other story.

by Stefano Costa at April 22, 2015 05:50 AM

April 21, 2015

gvSIG Team

Tips and tricks for developing with gvSIG 2.1 (1): Iterating over data

Hello again everyone,

From time to time I take a look at the code of gvSIG projects written by other developers, and I see fragments with small slips that are repeated throughout the code. Some of them cannot even be considered errors; it would simply be advisable to do things differently. I have collected a few of them here, and I will try to gather some more to cover in future articles.

In this one we will look at some tricks and good practices related to:

  • Releasing resources
  • Using FeatureReference
  • Iterating over a set of features

Releasing resources

Let's start with releasing resources.

When writing code that accesses data sources through DAL, keep in mind that, depending on the data provider being used, it is important to release resources once we finish using them. Some providers, such as the database ones, keep open connections to the database that should be released when we are done with them. In general, for any “artifact” that implements the “Disposable” interface, we should make sure we release it when we finish using it.

Let's look at a code fragment:

...
final List<FeatureReference> tmpFeatures = new ArrayList<FeatureReference>();
boolean showWarningDialog = false;
DisposableIterator it = null;
FeatureSet featureSet = null;

try {
  featureSet = featureStore.getFeatureSet();
  it = featureSet.fastIterator();
} catch (DataException ex) {
  String message = String.format(
    "Error getting feature set or fast iterator of %1",
    featureStore);
  LOG.info(message, ex);
  return;
}
while (it.hasNext()) {
  Feature feature = (Feature) it.next();
  if (hasMoreThanOneGeometry(feature)
    || feature.getDefaultGeometry() == null) {
    showWarningDialog = true;
  } else {
    tmpFeatures.add(feature.getCopy().getReference());
  }

}

it.dispose();
featureSet.dispose();

if (showWarningDialog) {
  showWarningDialog();
}
...

In this code fragment we can see that a “FeatureSet” is created and asked for an iterator to traverse the features of a “FeatureStore”, and at the end of the process the “dispose” method of both is invoked to release whatever resources they have associated.

If the process goes well, that is, no errors occur, nothing bad happens and the resources are released correctly when the last two lines are executed... but what happens if an error occurs while executing the lines prior to invoking the dispose method?

Typically, if the FeatureStore being worked on is a table in a database, at least one connection to the database will be left “stuck”, and it will not be released until gvSIG is closed. If the user insists, repeats the process and it keeps failing, at some point we will leave the database server without available connections, and they will not be freed until the user closes gvSIG.

This scenario can end up blocking access to the database server, and not just for the gvSIG user.

The recommendation is that, when working with “disposable” resources, you always use a “try…finally” construct, as shown below:

...
Disposable recurso = null;
try {
  ...
  recurso = ...
  ...
} finally {
  DisposeUtils.disposeQuietly(recurso);
}
...

The “DisposeUtils.disposeQuietly” method is a utility method that checks whether the resource passed in is null, and only tries to release it if it is not. It also catches errors and ignores them; well, it just sends them to the error log, should they occur when invoking the resource's “dispose” method.

If we do not want errors to be ignored, we will use “DisposeUtils.dispose”. In the example code this would look like:

...
final List<FeatureReference> tmpFeatures = new ArrayList<FeatureReference>();
boolean showWarningDialog = false;
DisposableIterator it = null;
FeatureSet featureSet = null;
try {

  try {
    featureSet = featureStore.getFeatureSet();
    it = featureSet.fastIterator();
  } catch (DataException ex) {
    String message = String.format(
      "Error getting feature set or fast iterator of %1",
      featureStore);
    LOG.info(message, ex);
    return;
  }
  while (it.hasNext()) {
    Feature feature = (Feature) it.next();
    if (hasMoreThanOneGeometry(feature)
      || feature.getDefaultGeometry() == null) {
        showWarningDialog = true;
    } else {
      tmpFeatures.add(feature.getCopy().getReference());
    }
  }

  if (showWarningDialog) {
    showWarningDialog();
  }
} finally {
  DisposeUtils.disposeQuietly(it);
  DisposeUtils.disposeQuietly(featureSet);
}
...

Using FeatureReference

When we have a Feature and want to keep it around to access its data later, we would normally store a copy of it obtained with the “getCopy” method. However, sometimes we are not interested in keeping the whole feature, but just a reference to it. To do that we use the feature's “getReference” method.

How does the “feature” differ from its “reference”?
What is a “reference” to a “feature”?

The “feature” is a data structure that contains the values of all of its attributes, while the “reference” is a data structure that contains the minimum information needed to retrieve the complete feature from its data store. Sometimes it will be an OID, other times a primary key; the data provider is in charge of deciding which one to work with and what type that OID will be.

For example, a reference to a feature of a shapefile uses an OID to reference the feature within the shapefile, which will normally be the position of the feature inside the file, while a feature of a database table will hold the values of the fields that make up the primary key of that feature.

We have to keep in mind that, depending on the underlying data source, retrieving the information behind a FeatureReference can be expensive. We will use them carefully, being aware of what may be going on underneath.

For example, if we have a reference to a feature of a shapefile and we invoke its “getFeature” method, it will seek to the position associated with the feature in the shapefile and load it into memory. That cost is perfectly acceptable. However, if it is a reference to a feature of a database table, it will launch a query against the database to fetch the record associated with the feature and load it into memory. We have to be careful with this: if we have a List of references and we dereference them, we will trigger one query against the database for each reference we have, which may not be acceptable.

When using references, we will have to keep in mind that retrieving their features can be expensive, and we will have to assess whether the approach we are taking is the right one.

With this in mind... let's take another look at the earlier code fragment:

...
while (it.hasNext()) {
  Feature feature = (Feature) it.next();
  if (hasMoreThanOneGeometry(feature)
      || feature.getDefaultGeometry() == null) {
    showWarningDialog = true;
  } else {
    tmpFeatures.add(feature.getCopy().getReference());
  }
}
...

We can see that it traverses the features to store references to some of them in a List. However, to do so it calls “feature.getCopy().getReference()”, which creates a copy of the original feature only to discard it afterwards and keep its reference, which is the same one we would have obtained by asking the original feature. We could remove the copying of the feature and ask the original feature for its reference, and we would get the same result.

...
while (it.hasNext()) {
  Feature feature = (Feature) it.next();
  if (hasMoreThanOneGeometry(feature)
      || feature.getDefaultGeometry() == null) {
    showWarningDialog = true;
  } else {
    tmpFeatures.add(feature.getReference());
  }
}
...

Now let's look at another code fragment:

...
private List<FeatureReference> features;
...

int[] currIndexs = getSelectedIndexs();
// If selected is the first row, do nothing
if (currIndexs.length <= 0 || currIndexs[0] == 0) {
  return;
}
List<FeatureReference> selectedFeatures = new ArrayList<FeatureReference>();
for (int i = 0; i < currIndexs.length; i++) {
  FeatureReference selected = null;
  try {
    selected = features.get(currIndexs[i]).getFeature().getReference();
  } catch (DataException ex) {
    LOG.info("Error getting feature", ex);
    return;
  }
  selectedFeatures.add(selected);
}
if (!selectedFeatures.isEmpty()) {
...

The code fills a List of FeatureReference with some of the references obtained from another list. Let's focus for now on this line:

...
selected = features.get(currIndexs[i]).getFeature().getReference();
...

If we think about what this line does:

  • First, we retrieve a FeatureReference from the list of “references to features”.
  • Then we build the “feature” associated with that reference, which causes an access to the underlying feature store to retrieve it.
  • And finally we ask the new feature for its reference and discard the feature.

All of this, moreover, inside a loop.

If we are working with a database-backed source, a query will be launched to retrieve each one of the features, which by itself can take considerably longer than desirable. Even so, up to here we are still lucky. We have a reference, which we ask for its feature, just to get its reference. There is no need to retrieve the feature to obtain the reference; we already have it! It would be enough to store the original reference in “selected”:

...
selected = features.get(currIndexs[i]);
...

The code we are looking at stores the references in a List that it will later use to display them in a JTable. Every time the JTable has to access the values of the features to display, it will have to dereference them to obtain the feature. If the table has many rows and the features live in a database, this will make intensive use of the database: one database access per row of the table. It may be that the tool will never work with a data source that is “heavy” when it comes to retrieving features from their references, but if that is not the case we will have to consider an alternative implementation for this kind of problem.

Finally, one more point to keep in mind regarding FeatureReference. It cannot be guaranteed that, after finishing an editing session on a FeatureStore, the FeatureReferences you have saved are still valid. There are data sources, such as shapefiles or DXF files, that use the position of the feature within the file as the OID, and after finishing the edit that order can change. However, in other data sources, such as databases, the reference may still be valid depending on its primary key.

Iterating over a set of features

When we want to traverse a set of features, we normally try to obtain an iterator and loop over it. However, the recommendation is that, whenever we can, we use a visitor instead of an iterator to traverse the data.

Let's see it with an example. We will go back to the code fragment we saw at the beginning:

...
final List<FeatureReference> tmpFeatures = new ArrayList<FeatureReference>();
boolean showWarningDialog = false;
DisposableIterator it = null;
FeatureSet featureSet = null;
try {

  try {
    featureSet = featureStore.getFeatureSet();
    it = featureSet.fastIterator();
  } catch (DataException ex) {
    String message = String.format(
      "Error getting feature set or fast iterator of %1",
      featureStore);
    LOG.info(message, ex);
    return;
  }
  while (it.hasNext()) {
    Feature feature = (Feature) it.next();
    if (hasMoreThanOneGeometry(feature)
      || feature.getDefaultGeometry() == null) {
        showWarningDialog = true;
    } else {
      tmpFeatures.add(feature.getCopy().getReference());
    }
  }

  if (showWarningDialog) {
    showWarningDialog();
  }
} finally {
  DisposeUtils.disposeQuietly(it);
  DisposeUtils.disposeQuietly(featureSet);
}
...

Let's see what it does:

  • We obtain a FeatureSet from the FeatureStore.
  • We ask it for an iterator.
  • We traverse all the features and keep a reference to them.
  • At the end we show a message depending on whether multi-geometries were found or not.

Using a “visitor” it could look something like this:

...
final MutableBoolean showWarningDialog = new MutableBoolean(false);
try {
  featureStore.accept(new Visitor() {
    public void visit(Object obj) throws VisitCanceledException, BaseException {
      Feature feature = (Feature) obj;
      if (hasMoreThanOneGeometry(feature)
        || feature.getDefaultGeometry() == null) {
        showWarningDialog.setValue(true);
      } else {
        tmpFeatures.add(feature.getReference());
      }
    }
  });
} catch (BaseException ex) {
  ... exception handling ...
}

if (showWarningDialog.isTrue() ) {
  showWarningDialog();
}
...

Since we are traversing all the features of the store, we can visit the store directly. We do not ask for a FeatureSet or an iterator, so we will not have to worry about releasing them. We will have to take care of catching the exceptions of the “visit” method, but we also had to do that when we created the FeatureSet.

In this code fragment I have used something that may look strange. There was a flag that was modified inside the loop, “showWarningDialog”. Since we have replaced the body of the loop with an anonymous inner class, we cannot use a plain “boolean” variable, as it would have to be final and could then no longer be modified. So instead of a “boolean” I have used a “MutableBoolean” to store the flag. This class is part of the “apache commons lang” library that ships with gvSIG.

Summing up

General recommendations:

  • When working with “disposable” resources, always use a “try…finally” construct, using “DisposeUtils.disposeQuietly” to release the resources.
  • A FeatureReference already references the feature; you do not need to obtain a copy of the feature just to ask it for the reference.
  • Dereferencing a FeatureReference by calling “getFeature” can be a “heavy” operation.
  • Do not assume that the FeatureReferences you have saved are still valid after finishing an editing session on a FeatureStore.
  • Whenever possible, use a visitor instead of an iterator to traverse the data.

Well, that's as far as I go for today. Another day I will tell you about some other things...

Greetings to everyone!


Filed under: development, gvSIG Desktop, spanish Tagged: java

by Joaquin del Cerro at April 21, 2015 01:57 PM

GeoSolutions

Upcoming GeoServer training in Finland with our partner Gispo Ltd

GeoServer

Dear All,

The GeoSolutions team is proud to announce that our GeoServer lead, Andrea Aime, will hold a three-day training on GeoServer in Espoo, Finland, from 9/6/2015 to 11/6/2015.

The training will be conducted in English, following the material provided by GeoSolutions and available at this link. The course consists of lessons and exercises, and the exercise materials can be used after the course as well.

The following topics, among others, will be covered:

  • Installing and running GeoServer
  • Advanced Raster Data Management
  • Web Processing Service (WPS) and Rendering Transformations
  • Advanced GeoServer Configuration
  • GeoServer Security
  • Styling with SLD
  • Styling with CSS
  • INSPIRE Support
Basic skills in GeoServer and geographic data management are required to follow the training proficiently. Additional information can be found at this link (in Finnish) as well as at this one (in English).

Happy GeoServing!

The GeoSolutions team,

http://www.geo-solutions.it

by simone giannecchini at April 21, 2015 12:29 PM

April 20, 2015

Cameron Shorter

Esri's claim to being good "Standards" citizens is questionable

I'm calling Esri out on their claim to be good "Open Standards" citizens. Esri are again abusing their market position to compromise established Open Spatial Standards, as described in an Open Letter from the OSGeo community. It starts:
We, the undersigned, are concerned that the current interoperability between LiDAR applications, through use of the open "LAS" format, is being threatened by Esri's introduction and promotion of an alternative "Optimized LAS" proprietary format. This is of grave concern given that fragmentation of the LAS format will reduce interoperability between applications and organisations, and introduce vendor lock-in. …
To be clear, Esri has extended LAS to create "Optimized LAS", which provides near-identical features and performance to the existing and open LASzip format; both provide faster access and smaller file sizes than the LAS format. However, rather than collaborate with the open community, as has been repeatedly offered, "Optimized LAS" has been developed internally at Esri. It is neither published nor open, which creates both technical and legal barriers for other applications reading and/or writing this proprietary format. This creates a vendor lock-in scenario which is contrary to the principles of the Open Geospatial Consortium, the OSGeo Foundation, and many government IT procurement policies.

Esri responded to the open request to avoid fragmenting LiDAR standards with the following motherhood statement, which doesn't actually answer the key questions:
Regarding Dr. Anand’s concerns and the referenced letter below:
Esri has long understood the importance of interoperability between systems and users of geographic information and services. Esri has participated in the development of national, information community, OGC, and ISO TC 211 standards from the development of the US Spatial Data Transfer Standard in the 1980s through the development of OGC Geopackage today. As a sustaining member of ASPRS and a Principle member of OGC, Esri would gladly participate in efforts to further the development of open LIDAR and point cloud standards. Keep in mind that ASPRS owns and maintains LAS, along with other spatial information standards, and would have the lead in moving it into  OGC or ISO TC211 for further work if they so desired. Esri will continue to support and use the ASPRS LAS standard; the Optimized LAS (see FAQ at https://github.com/Esri/esri-zlas-io-library) is not intended to replace LAS but to enhance access to remotely stored LIDAR information for our users.
Let's refute Esri's statement line by line:

Esri has long understood the importance of interoperability between systems and users of geographic information and services.

  • Nice motherhood statement. Notice that Esri carefully selects the words "understood the importance" rather than "we commit to implementing".

Esri has participated in the development of national, information community, OGC, and ISO TC 211 standards from the development of the US Spatial Data Transfer Standard in the 1980s through the development of OGC Geopackage today.


As a sustaining member of ASPRS and a Principle member of OGC, Esri would gladly participate in efforts to further the development of open LIDAR and point cloud standards.

  • Nice statement, without any quantifiable commitment. Will Esri put it into practice? Its track record suggests otherwise. As explained by Martin Isenburg, Esri has talked a lot about collaboration and being open, while in parallel creating a competing proprietary format. If Esri were seriously committed to open LiDAR standards, it would publish "Optimized LAS" under an open license, and/or take "Optimized LAS" through a standards development process such as the one provided by the OGC. Esri would also have built upon the prior LASzip format rather than redeveloping equivalent functionality.

Keep in mind that ASPRS owns and maintains LAS, along with other spatial information standards, and would have the lead in moving it into  OGC or ISO TC211 for further work if they so desired. 
  • Again, if Esri had the best interests of ASPRS and open standards in mind (as you would expect from a sustaining member), then we'd expect Esri to donate their LAS improvements back to the ASPRS for safekeeping. Why is Esri keeping these improvements in a proprietary format instead?
  • Esri would also be lobbying ASPRS to accept improvements to the LAS format. Has this happened? The lack of public discussion on this topic suggests otherwise.

Esri will continue to support and use the ASPRS LAS standard; the Optimized LAS (see FAQ at https://github.com/Esri/esri-zlas-io-library) is not intended to replace LAS but to enhance access to remotely stored LIDAR information for our users.

  • Esri is sidestepping the issue. The LAS standard needs improvements. These improvements have been implemented by the open LASzip format and also by Esri's proprietary Optimized LAS; one of them should be incorporated into a future LAS standard.
  • The question Esri fails to answer is why it refuses to work in collaboration with the open community. Why has Esri developed its own Optimized LAS format instead of improving an existing standard format?
  • Esri's FAQ explains that esri-zlas-io-library is hosted on GitHub under the Apache license, which would make you think the code is open source and hence that the Optimized LAS format could be reverse engineered. This is not the case: Esri has only licensed the binaries under the Apache license, so the format can't be reverse engineered or improved by the community. By the OSI definition, this is not Open Source Software.
So I'm calling Esri out on their claim to be supporters of Open Standards. Please, Esri, either clean up the way you behave, or come clean and admit that Esri abuses its market position to undermine Open Standards.


by Cameron Shorter (noreply@blogger.com) at April 20, 2015 10:40 PM

GeoServer Team

GeoServer 2.6.3 released

The GeoServer team is happy to announce the release of GeoServer 2.6.3. Download bundles are provided (zip, war, dmg and exe), along with documentation and extensions.

GeoServer 2.6.3 is a maintenance release of GeoServer recommended for production deployment. Thanks to everyone taking part, submitting fixes and new functionality:

  • The WPS download community module is now available on the 2.6.x branch too
  • Some WPS fixes related to requests not including the response form
  • Fixed a layer naming regression that prevented names that are not valid XML from being used for coverages (care in naming is still advised: different protocols have different requirements, so check the ones you are using)
  • Some WFS 2.0 join related fixes
  • Sped up generation of JSON files when the native CRS is EPSG:900913
  • Avoid leaks of commons-httpclient pools (which in turn can lead to a native thread leak)
  • Check the release notes for more details
  • This release is made in conjunction with GeoTools 12.3

Thanks to Andrea (GeoSolutions) and Jody (Boundless) for this release.

About GeoServer 2.6

Articles and resources for the GeoServer 2.6 series:

by Andrea Aime at April 20, 2015 04:37 PM

GeoTools Team

GeoTools 12.3 released

The GeoTools community is happy to announce the latest GeoTools 12.3 download:

This release is also available from our Maven repository, and is made in conjunction with GeoServer 2.6.3.
This is a maintenance release of the GeoTools 12 series, recommended for production systems.
A few highlights from the GeoTools 12.3 release notes:
  • A number of small bug fixes and improvements in raster data sources (image mosaic, NetCDF, GeoTiff)
  • Robustness improvements in advanced projection handling (an optional rendering feature that cuts geometries to the projection's valid area, handles difficult spots like the poles and the dateline, and wraps data Google Maps style)
  • Improvement in raster reprojection quality and robustness
  • Improved ability to read topologically invalid geometries out of Oracle stores (non-closed rings in polygons)
  • Added a way to dispose of Commons HTTPClient connection pools in WMS client, quite important to avoid leaking native threads (the pool uses a thread for cleanup purposes)
  • A number of other fixes, check the release notes for full details
Thanks to Andrea (GeoSolutions) for this release.

About GeoTools 12

by Andrea Aime (noreply@blogger.com) at April 20, 2015 04:35 PM

Nyall Dawson

Review: Building Mapping Applications with QGIS

It seems like over the last year the amount of literature published about QGIS has really exploded. In the past few months alone there have been at least three titles I can think of (Building Mapping Applications with QGIS, Mastering QGIS, and the QGIS Python Programming Cookbook). I think this is a great sign of a healthy project. Judging by this, there's certainly a lot of demand for quality guides and documentation for QGIS.

I recently finished reading one of these titles, Building Mapping Applications with QGIS (Erik Westra, Packt Publishing 2015). In short, I'm a huge fan of this work and think it may be my favourite QGIS book to date! I've read Erik's previous work, Python Geospatial Development, and thought it was an entertaining and really well written book. He clearly has in-depth knowledge of what he's writing about, and this confidence comes through in his writing. So when I first saw this title announced I knew it would be a must-read for me.

In Building Mapping Applications with QGIS, Erik has created a comprehensive guide through all the steps required to create QGIS plugins and standalone Python applications which utilise the QGIS libraries. It’s not a beginner’s guide to Python or to PyQGIS, but that’s what helps it stand out. There’s no introductory chapters on programming with Python or how to use QGIS and instead Erik dives straight into the meat of this topic. I found this approach really refreshing, as I’m often frustrated when the first few chapters of an advanced work just cover the basics. Instead, Building Mapping Applications with QGIS is packed with lessons about, well, actually building mapping applications!

So, why do I like this book so much? Personally, I think it fills a really crucial void in the existing QGIS literature. There are a lot of works covering how to use QGIS, and a few covering PyQGIS development (e.g. the PyQGIS Programmer's Guide, which I reviewed here). But to date, there hasn't been any literature covering the development of QGIS-based applications in such great depth. It's just icing on the cake that Erik's writing is also so interesting and easy to read.

Are there any criticisms I have of this book? Well, there's one small omission I would have liked to see addressed. While the chapter Learning the QGIS Python API goes into some detail about how QGIS is built using the Qt libraries, and into a great deal of depth about interpreting the QGIS C++ APIs, I think it could really benefit from some discussion of the PyQt and Qt APIs themselves. Since a lot of the QGIS classes are either directly derived from Qt classes or heavily utilise them, it's really important that PyQGIS developers are also directed to the PyQt and Qt APIs. For instance, the Qt QColor class is used heavily throughout PyQGIS, but you won't find any API documentation on QColor in the QGIS API; instead, you need to consult the PyQt API docs and the detailed Qt C++ docs. Often you may think the PyQGIS API is missing a crucial method, when consulting the Qt docs reveals that the method is actually implemented in the base classes. It's an important point to note for mastering PyQGIS development. To be fair, I'm yet to read a PyQGIS book which has nailed the interaction between the QGIS, PyQt and Qt APIs.

Honestly, that’s a really minor quibble with an otherwise outstanding work. I’m so glad Erik’s written this work and strongly recommend it to anyone wanting to take their PyQGIS development skills to the next level.

by Nyall Dawson at April 20, 2015 10:57 AM

gvSIG Team

gvSIG 2.1: Italian and Bulgarian translations updates

The Italian and Bulgarian translations have been updated, and they are now available for the gvSIG 2.1 final version, thanks to Antonio Falciano (Italian) and to Zahari Savov and Jordan Tsvetkov (Bulgarian).

If you installed the gvSIG 2.1 final version, you can update these languages now from the add-ons manager. From gvSIG, go to Tools->Addons manager, select the Installation from URL option, and finally select the "Official gvSIG repository for this version" option. When the packages appear in the window, select the "Internationalization" option on the left side, and then the "Translations" package (version 1.0.0-27).

Translations

After installing it and restarting gvSIG, the translations will be updated.

We encourage you to translate the gvSIG interface into new languages or to update the existing ones. If you are interested, you can contact us (mcarrera at gvsig.com).


Filed under: community, english, gvSIG Desktop

by Mario at April 20, 2015 09:58 AM

April 17, 2015

Free and Open Source GIS Ramblings

Routing in polygon layers? Yes we can!

A few weeks ago, the city of Vienna released a great dataset: the so-called "Flächen-Mehrzweckkarte" (FMZK) is a polygon vector layer with an amazing level of detail, containing roads, buildings, sidewalks, parking lots and much more:

preview of the Flächen-Mehrzweckkarte

Now, of course, we can use this dataset to create gorgeous maps, but wouldn't it be great to use it for analysis? One thing that has been bugging me for a while is routing for pedestrians and how bad it still is in many situations. For example, if I were looking for a route from the northern to the southern side of the square in the previous screenshot, the suggestions would look something like this:

Pedestrian routing in Google Maps

… Great! Google wants me to walk around it …

Pedestrian routing on openstreetmap.org

… OpenStreetMap too – but on the other side :P

Wouldn’t it be nice if we could just cross the square? There’s no reason not to. The routing graphs of OSM and Google just don’t contain a connection. Polygon datasets like the FMZK could be a solution to the issue of routing pedestrians over squares. Here’s my first attempt using GRASS r.walk:

Routing with GRASS r.walk (Green areas are walk-friendly, yellow/orange areas are harder to cross, and red buildings are basically impassable.)

… The route crosses the square – like any sane pedestrian would.

The key steps are as follows (a rough command sketch appears after the list):

  1. Assigning pedestrian costs to different polygon classes
  2. Rasterizing the polygons
  3. Computing a cost raster for moving using r.walk
  4. Computing the route using r.drain
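To make the workflow concrete, here is a minimal sketch of the four steps via the GRASS Python scripting API. All map, column, and coordinate names (fmzk, walkcost, dem, and the coordinate pairs) are assumptions for illustration; the post does not list the exact commands used.

# Minimal sketch of the four steps using the GRASS 7 Python scripting API.
# Assumptions (not from the post): a polygon layer "fmzk" with a pedestrian
# cost column "walkcost", an elevation raster "dem", an already-set region,
# and made-up start/end coordinates.
import grass.script as gs

# Steps 1 + 2: rasterize the polygons using the per-class pedestrian costs
gs.run_command('v.to.rast', input='fmzk', output='friction',
               use='attr', attribute_column='walkcost')

# Step 3: cumulative cost surface (plus movement directions) from the start point
gs.run_command('r.walk', elevation='dem', friction='friction',
               output='cost', outdir='cost_dir',
               start_coordinates=(1823500, 342800))

# Step 4: trace the least-cost route back from the destination
gs.run_command('r.drain', flags='d', input='cost', direction='cost_dir',
               output='route', start_coordinates=(1823650, 342950))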

I’ve been using GRASS 7 for this example. GRASS 7 is not yet compatible with QGIS but it would certainly be great to have access to this functionality from within QGIS. You can help make this happen by supporting the crowdfunding initiative for the GRASS plugin update.


by underdark at April 17, 2015 07:50 PM

April 16, 2015

Paulo van Breugel

GRASS 7.0 is out, but the development continues unabated

Just a thumbs-up for the developers of GRASS GIS, who evidently do not rest on their laurels since the release of GRASS GIS 7.0. Below is one of the more visible new features in the GRASS GIS development version which makes life just that much easier. Filed under: GRASS GIS Tagged: GRASS GIS, new feature

by pvanb at April 16, 2015 04:59 PM

OSGeo News

OSGeo and the International Society for Photogrammetry and Remote Sensing (ISPRS) sign MoU

by aghisla at April 16, 2015 01:31 PM

Boundless Blog

Advanced Styling with OpenLayers 3

As we see growth in adoption of OpenLayers 3, we get a lot of feedback from the community about how we can enhance its usefulness and functionality. We’ve been pleased in 2015 to release iterations of OL3 on a monthly basis – but I’m going to highlight some great new functionality added by my colleague Andreas Hocevar late last year.

While styling in OpenLayers normally uses the geometry of a feature, when you’re doing visualizations it can be beneficial for a style to provide its own geometry. What this means is you can use OL3 to easily provide additional context within visualizations based on the style itself. As a very simple example, you can take a polygon and use this new feature to show all the vertices of the polygon in addition to the polygon geometry itself, or even show the interior point of a polygon.

As a visual:

Bart_1

In order to achieve the above effect, you can add a constructor option called geometry to ol.style.Style. This can take a function that gets the feature as an argument, a geometry instance or the name of a feature attribute. If it’s a function, we can – for example – get all the vertices of the polygon and transform them into a multipoint geometry that is then used for rendering and applied to the corresponding style.

You can see sample code for this OpenLayers polygon-styles example at http://openlayers.org/en/master/examples/polygon-styles.html.

[Side note: Since the OpenLayers development team got together for a codesprint in Schladming, Austria, the polygon-styles example page now has a “Create JSFiddle” button (above the example code) which will allow you to experiment quickly with the code from the OpenLayers examples. Thanks to the sprint team for adding this convenient functionality!]

Another example to connect this with more practical use cases: you can use this functionality to show arrows at the segments of a line string.
Bart_2

As before with the polygon-styles example, you can see what’s behind this line-arrows example at http://openlayers.org/en/master/examples/line-arrows.html.

Lastly, we’ve provided an earthquake-clusters example (reviewable at http://openlayers.org/en/master/examples/earthquake-clusters.html) showing off this new functionality with a slightly different twist. When you hover over an earthquake cluster, you’ll see the individual earthquake locations styled by their magnitude as a regular shape (star):
Bart_3

Please don’t hesitate to let Boundless know if you have any questions about how we did this in OL3, or any other questions you may have about OpenLayers or OpenGeo Suite!

 

The post Advanced Styling with OpenLayers 3 appeared first on Boundless.

by Bart van den Eijnden at April 16, 2015 12:30 PM

OSGeo News

FOSS4G Seoul 2015 Conference Registration Opens

by aghisla at April 16, 2015 10:26 AM

gvSIG Team

How to work with a form, browsing selected records using scripting in gvSIG 2.1

English translation of the article by Joaquin del Cerro.

Hi everyone…

Yesterday, someone on the gvSIG users mailing list asked how to do something with scripting.

Basically, the goal was to use a script to present a custom form showing the data of the selected record in a layer; if several records were selected, first/previous/next/last buttons would step through them.

I’ve added one extra thing to the possible implementation you can find here: even when no records are selected, you will be able to move through all the records of the layer.

The idea is that when we load the form, we store the selected records in a list so we have them easily at hand. Doing this with the selected records can be more or less acceptable, but doing it while scrolling through all the records of a table could be impossible, because a table can hold a vast number of records and we are not going to load them all in memory. The same could happen if we selected thousands of records, but that is less usual.

I will comment only on the piece responsible for working with the entire table, because it is the “darkest” part. As I said, loading all the records of a table into a list may not be feasible given the table size, so in gvSIG we can use a paging mechanism on the table and treat it as an ordinary list, while this “pager” takes care of loading and unloading the records it needs depending on the page size we indicate. The code corresponds to the loadFullTable function.

def loadFullTable(table):
  # Obtain a pager based on data table, paging in groups of 200 records.
  pager = DALLocator.getDataManager().createFeaturePagingHelper(table,200)
  # Obtain one paged list from the pager.
  return pager.asList()

The code, in the end, is very simple. We call a function, passing it the table and the maximum number of records we want loaded in memory at the same time:

pager = DALLocator.getDataManager().createFeaturePagingHelper(table,200)

and then we ask the pager we obtained to return a list based on it.

return pager.asList()

This list can already be handled as if it were a normal list, with the pager responsible for retrieving records from disk as it needs them.
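For instance, here is a hypothetical usage sketch (not part of the original script) showing how the paged list behaves once obtained:

# Hypothetical usage sketch: the paged list behaves like a normal list,
# while the pager loads and unloads records from disk behind the scenes.
data = loadFullTable(table)
print len(data)    # total record count, known without loading every record
record = data[0]   # records are fetched page by page, on demand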

I will not say much more; I think the rest of the code is clear enough if you have already played with Python and scripting in the gvSIG environment.

For the example, I’ve used the zonas_basicas_salud layer (it means basic health zones), available for download at:

http://downloads.gvsig.org/download/geodata/vector/SHP2D/zonas_basicas_salud.navarra.zip

I’ve assumed it is the selected layer when the script is launched.
The script result would be something similar to:

resultado

And the full script would be:

from gvsig import *

from org.gvsig.fmap.dal import DALLocator

table = None
data = None
currentRecord = 0

def loadRecord():
    codzone = dialog.find("codzone")
    area = dialog.find("area")
    zone = dialog.find("zone")
    sector = dialog.find("sector")
    actual = dialog.find("current")

    record = data[currentRecord]

    dialog.setString(codzone, "text", "%s" % record.get("codzona"))
    dialog.setString(area, "text", record.get("Area"))
    dialog.setString(zone, "text", record.get("Zona"))
    dialog.setString(sector, "text", record.get("Sector"))

    dialog.setString(actual, "text", "%d / %d" % (currentRecord+1, len(data)))

    #selectRecord(record)

def selectRecord(record):
    selection = table.getFeatureSelection()
    selection.deselectAll()
    selection.select(record)

def loadFullTable(table):
    # Obtain a pager based on the data table, paging in groups of 200 records.
    pager = DALLocator.getDataManager().createFeaturePagingHelper(table, 200)
    # Obtain a paged list from the pager.
    return pager.asList()

def loadSelection(table):
    data = list()
    selection = table.getFeatureSelection()
    for record in selection.iterator():
        data.append(record)
    return data

def onload(*args):
    global data
    global table
    global currentRecord
    # To keep it simple, assume the current layer is zonas_basicas_salud
    layer = currentLayer()
    # Obtain the table related to the layer
    table = layer.getFeatureStore()

    selection = table.getFeatureSelection()
    if selection.isEmpty():
        data = loadFullTable(table)
    else:
        data = loadSelection(table)
    currentRecord = 0
    loadRecord()

def next():
    global currentRecord

    if currentRecord < len(data)-1:
        currentRecord += 1
        loadRecord()

def previous():
    global currentRecord

    if currentRecord > 0:
        currentRecord -= 1
        loadRecord()

def first():
    global currentRecord

    currentRecord = 0
    loadRecord()

def last():
    global currentRecord

    currentRecord = len(data)-1
    loadRecord()

Here is the XML of the form, showing the reference names I’ve used in the code:

<?xml version="1.0" encoding="ISO-8859-1"?>
<!-- generated by ThinG, the Thinlet GUI editor -->
<panel columns="1" gap="4">
    <panel columns="2" gap="2">
        <label text="Zone code"/>
        <textfield name="codzone" weightx="1"/>
        <label text="Area"/>
        <textfield name="area"/>
        <label text="Zone"/>
        <textfield name="zone"/>
        <label text="Sector"/>
        <textfield name="sector"/>
    </panel>
    <panel gap="2">
        <label text="  "/>
        <button action="first" name="first" text="First"/>
        <button action="previous" name="previous" text="Previous"/>
        <textfield editable="false" end="3" name="current" start="3" text="0/0"/>
        <button action="next" name="next" text="Next"/>
        <button action="last" name="last" text="Last"/>
        <label text="  "/>
    </panel>
</panel>

Greetings to all, and I hope you find it useful.


Filed under: development, english, gvSIG Desktop, gvSIG development, scripting

by mjlobato76 at April 16, 2015 10:03 AM

gvSIG Team

Collaborative Mapping of the History of São Paulo (1870-1940)

himaco

The Hímaco group brings together professors, researchers, and students from the Universidade Federal de São Paulo and the Arquivo Público do Estado de São Paulo, with the goal of exploring the possibilities of geotechnologies in historical research. From the beginning, the group chose to work with gvSIG, both for its technical capabilities and for the values it helps put into practice, oriented toward the free circulation of knowledge and collaborative work.

The work the group has carried out so far can be seen on its website: www.unifesp.br/himaco. There is a download page there with rasters and vectors related to the past of the city of São Paulo, as well as an introductory tutorial on gvSIG applied to historical studies. As a pilot study, the group investigated the main floods in the city between 1870 and 1940, and a thematic map of the largest one, the 1929 flood, is also available.

The group is now preparing a new research project, which poses quite a large challenge both technically and methodologically. This post aims to present the project’s conception and to announce, right away, that we will need a lot of help to pull it off. The project will soon be ready to be sent to funding agencies. Until then, we are open to criticism and suggestions. After that, if we obtain the funding, even more so ;-)

Its title and abstract follow.

Mapeamento Colaborativo da História de São Paulo (1870-1940)

The project involves developing and publishing on the web a historical digital cartographic base of the city of São Paulo covering the period of its urban-industrial modernization (1870-1940), together with an interface that lets interested researchers interact with it, so that they can feed the published base with spatializable events from their own investigations. Researchers will thus be able to produce maps and visualizations for their own studies from the base provided, while in turn enriching it with the information they feed into the system. The intention is to create the conditions for richer approaches to the history of São Paulo in that period, in line with the most recent and interesting developments in the so-called digital humanities, oriented toward collaborative work and the free circulation of knowledge.

Luis Ferla. Hímaco.


Filed under: community, portuguese Tagged: estudos históricos, história, São Paulo

by Alvaro at April 16, 2015 07:53 AM

April 15, 2015

Stefano Costa

Debian Wheezy on a Fujitsu Primergy TX200 S3

Debian Wheezy runs just fine on a Fujitsu Primergy TX200 S3 server

A few days ago I rebooted an unused machine at work, which had been operating as the main server for the local network (~40 desktops) until 3 years ago. It is a Fujitsu Primergy TX200 S3 that was in production during 2006-2007. I found mostly old (OK, I can see why) and contradictory reports on the Web about running GNU/Linux on it.

This is mostly a note to myself, but could serve others as well.

I chose to install Debian on it, did a netinstall of Wheezy 7.8.0 from the netinst CD image (using an actual CD, not a USB key), and all went well with the default settings ‒ which may not be optimal, but that’s another story. While older and less beefy than its current HP companion, this machine is still good enough for many tasks. I am slightly worried by its energy consumption, to be honest.

It will be used for running web services on the local network, such as Radicale for shared calendars and address books, Mediagoblin for media archiving, etc.

by Stefano Costa at April 15, 2015 09:27 PM

April 14, 2015

Stefano Costa

How many visitors in Genoa’s museums?

While we wait for the data updated to 2014 to be published, let’s take a look at the visitor figures for Genoa’s civic museums over recent years. The situation is stable, but there seems to be stagnation, and the Aquarium is not picking up. Unfortunately, the data on Palazzo Ducale are missing.

Visitors to Genoa’s civic museums and the Aquarium, 1996-2013

In 2004 Genoa was European Capital of Culture. We remember it well: the construction sites that seemed endless, the restored facades, the inaugurations, the exhibitions. Something has remained; Genoa is now a tourist destination, for Italians and foreigners alike. Accommodation capacity is starting to keep pace with demand. We have a city brand. But museums are not just about tourism: they belong first of all to the citizens, to the school groups moving noisily by bus, to middle-aged groups, to families. How many people visit Genoa’s museums?

Where the data are

Prompted by the open data released by the Fondazione Torino Musei (which, incidentally, have not been updated in over a year), I looked for what was available online about Genoa’s museums, the civic ones in particular (there are also Palazzo Spinola and Palazzo Reale, which are state-run and whose attendance data are available). It may seem obvious, but I found very little: truly scattered data.

The data from 2004 to 2013 are included in the Statistical Yearbook of the Comune di Genova (an XLS file inside a ZIP). The museum data are in the file 06 ISTRUZIONE E CULTURA/6.2 Cultura/TAV 07.XLS. The Aquarium data are in the file 12 TURISMO/TAV13.XLS.

For some reason, the museum data from 1996 to 2008 are in the historical series (direct link to the XLS file). The same historical series cover the years 1993 to 2008 for the Aquarium (direct link to the XLS file).

In the Notiziario statistico no. 3 of 2014 (PDF file) we find the tourism data, which include the Aquarium’s monthly visitors for 2013 and up to September 2014, but not those of the museums.

The greatest detail available is the monthly figure for the Aquarium over the last 21 months; in all the other cases we are stuck with the total number of annual visitors per museum. It is difficult to make any assessment in relation to tourist flows, except at a very general level, so in this installment I will not discuss that at all.

The scattered data must be cleaned up and recomposed before they can be processed. It is slow, tedious work in which mistakes can certainly be made. What I have cleaned so far is in this repository on GitHub, in CSV format of course.

What the data say

A necessary premise: the numbers are, indeed, just numbers. A little-visited museum is not uglier than the others, nor is it run by less competent, committed, or capable people.

Genoa’s museums host unique collections and, above all, organize an incredible number of events: every week there are dozens, ranging from evening meetings to guided tours, presentations, concerts, and workshops for young and old. In the latest newsletter I received I can count 16 ongoing exhibitions.

Keep that in mind while reading what follows.

The museums

2004 marked a positive point of no return for most of Genoa’s civic museums. The jump is evident in the chart.

Visitors to Genoa’s civic museums, 1996-2013

The detail of the individual museums is a little less brilliant, because institutions such as the Museo di storia e cultura contadina and the Museo di arte contemporanea are clearly declining. The Galata Museo del mare is struggling (not surprising, considering the Aquarium’s numbers?), although it remains the most visited museum.

Many museums show fluctuating trends, perhaps tied to temporary exhibitions (↑) or closures (↓), on which it would of course be important to have data.

Visitors to Genoa’s museums: the detail of the individual museums (1996-2013)
The solid numbers

The most solid numbers, in my opinion, are those of the Museo di Storia naturale and the Museo di Sant’Agostino, the only ones combining a considerable number of visitors with steady growth in recent years. The Museo di Arte Orientale has also performed well lately. These are the institutions that deserve closer analysis, to identify positive factors on which to build, if desired, a broader strategy.

The Aquarium

The Acquario di Genova is not a museum, at least not in the traditional Italian sense. Its visitor numbers are in steady decline. After 2004, only 2007 and 2013 recorded a slight increase, and there are no signs of a trend reversal ‒ toward stagnation rather than collapse, because it remains a very strong institution that cannot disappear from one moment to the next. Doubts remain about the sustainability of an enterprise that is very expensive for visitors (24 €) and evidently cannot single-handedly drive the rest of the city, while remaining by far the most visited attraction, with about one million visitors every year. It will surely be the attraction that benefits most from the influx of Expo visitors, and its promotional videos are already showing in railway stations and other crowded spaces. Like the European Capital of Culture, Expo can provide a medium-term boost, but only if we manage to work with multifocal lenses.

Visitors to the Acquario di Genova, 1994-2013

What’s missing

The big absentees from this overview are, of course, the data on Palazzo Ducale, which is not a museum proper but is the city’s main public cultural space. We have to make do with news reports (500,000 visitors in 2012, a record closing with 125,000 visitors for the Frida Kahlo exhibition), but from a public foundation we honestly want much more. We need open data.

~

This post is the first with the hashtag #d969humanities. Stay tuned.

The cover photo is “Genova, more than this?”. I took it a few months ago.

by Stefano Costa at April 14, 2015 08:13 PM

OSGeo News

Announcing Open Source GIS Seminar (ALPO 2015), Finland

by jsanz at April 14, 2015 12:53 PM

April 13, 2015

Stefano Costa

Postcards from 1910

Digital humanities start at home. I present a small collection of postcards dating from 1908 to 1913, which I scanned yesterday as part of a larger collection of 50 letter envelopes (and a few actual letters) that ‒ I think ‒ were sent or brought back to Italy when a relative of ours, Enrichetta Costa, passed away in the U.S. in 1923.

To my sweet fluffy ruffles (postcard from 1908) Auburn, N.Y., Owasco River #1 (postcard from 1910) Many happy returns (postcard from 1910) Do we take thee by surprise, Coquette, trifler, roguish eyes? (postcard from 1910) Fond wishes for a happy birthday (postcard from 1910) A couple holding hands (postcard from 1913)

These postcards mostly convey short messages: greetings, accounts of happy moments, and frequent mentions of being in good health. Some are written in English, some in Italian, showing both a desire to become one with American culture and the need to stay in contact with families at home. These postcards are glimpses into the lives of young women in their 20s who had recently moved overseas. Looking at the post office stamps, one gets the idea of such postcards as “short messages” that were sent from a nearby town, or from another neighbourhood of N.Y., and apparently could take as little as 12 hours to reach their destination.

The 1913 postcard is actually Italian and was sent from Torriglia to Genoa, where Enrichetta was staying. One has to think she brought the postcard back to the U.S. upon leaving Italy one more time.

It’s tempting to try building social networks from these letters and postcards, and I already started playing with TimeMapper to follow her across the years, even though the bulk of material (that is, the envelopes… but see how I already detached from the emotional value of the object?) is from 1910. Between March and April 1910 she moved from Baxter Street in New York City to the borough of Woodhaven, but only a few months before she was still in Kingston, N.Y.

I’m sure there is abundant literature on the subject of life of immigrants in the U.S. West Coast, but the afternoon I spent with my old Epson scanner gave me a lot of time to think about the social struggles of the time ‒ after all she was coming from a family of peasants and had moved overseas at 16 years ‒ and how her personal history became increasingly detached from that of her family of origin, at the same time having strong ties to the immigrant community from Torriglia that had formed around N.Y., until the final letter confirming her death, signed by her friend Cornelia Sciutto (a surname that is highly characteristic of a village nearby Torriglia), otherwise unknown.

I also came across more mundane problems, like the best way to present these envelopes digitally. I think we should try using animated GIFs like this one. The original images of the front and back side can be easily retrieved with any image editing software (GIMP in my case), but it’s easier to keep the two sides together, without resorting to ugly non-image formats like PDF. A delay of 3 seconds should work fine for most cases, but it can be adjusted accordingly. It would be rather pointless, but fascinating, to go further and create 3D scans of the envelopes ‒ an unwieldy task for something that is normally flat, on a flat surface, with no tangible “volume”.
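For the record, here is one way to build such an animated GIF with the Pillow library (a sketch; the file names are made up):

# Combine the two sides of an envelope into a looping animated GIF.
# duration is per frame, in milliseconds: 3000 ms gives the 3 second delay.
from PIL import Image

front = Image.open("envelope_front.png")
back = Image.open("envelope_back.png")

front.save("envelope.gif", save_all=True, append_images=[back],
           duration=3000, loop=0)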

There are other issues with publishing these scans, namely exposing the intimate life of people who have been dead for less than 100 years. Surely no one on Facebook cares that their great-grandchildren will be able to sift through their silly day-to-day chat messages, but today’s assumptions do not hold for last year, let alone last century. I’m relieved by the fact that I have almost only addresses, and names, and post stamp dates ‒ and part of me wants someone in Kingston, N.Y. or Woodhaven to recognise one of those names as a distant relative, a long-forgotten ancestor who was friends with Henrietta Costa. If you’re that someone, it would be nice to get in touch, and the sunny Sunday afternoon I spent scanning was not entirely lost.

In any case, enjoy the postcards!

by Stefano Costa at April 13, 2015 07:42 PM

GeoSolutions

Developer’s Corner: JAI-Ext, the Open Source replacement for Oracle JAI

cite-true_marble_cite-test

Dear Readers, it is our pleasure to introduce a new project developed at GeoSolutions: JAI-EXT. JAI-EXT is a project based on the Java Advanced Imaging (JAI) API, a Java framework for high-performance image processing. The main feature of JAI is its ability to process images using deferred execution and tiling, which provides high performance when working on the input data. However, JAI is no longer supported by its original developers and has a very restrictive license. Since many of our projects rely heavily on JAI (GeoServer, for one), at GeoSolutions we decided to embark on the JAI-EXT project to completely replace the JAI framework with a new, complete, open-source Java API for image processing. At the time of writing, JAI-EXT is composed of a large number of operations which are (almost :)) drop-in replacements for the old JAI ones. In addition, with JAI-EXT we look into fixing some of the outstanding problems we found in JAI, and we also aim to add a few interesting and much-needed features. Below you can find some of the most important features we focused on:
  1. Complete Open Source development
  2. Support for Multithreading and parallel processing
  3. Support for ROI
  4. Support for No Data
Let's now go into more details.

Open Source development

The JAI-EXT project has been developed as an Open Source project so that it is accessible to its users and can grow with their help. The project is hosted on GitHub: https://github.com/geosolutions-it/jai-ext. A new minor release, 1.01, has been published and is available at the following link.

Support for Multithreading and parallel processing

Since JAI-EXT started from the JAI project, it fully supports multithreaded, parallel processing of image tiles. The main difference in the JAI-EXT project is better use of the Java concurrency API introduced in Java 5 in order to increase concurrency throughput.

Support for ROI

Sometimes it is helpful to define an active area on the source raster data where the computations must be executed, leaving the pixels outside untouched. This feature is available in JAI-EXT, which supports the use of ROIs (both vector and raster) in all of its operations.

Support for NoData

Geospatial data may contain pixel values which do not represent valid data; such values are called NoData. JAI is unable to deal with NoData, since it was developed for working with photographic images. The biggest problem of not being able to take NoData into account during raster processing shows up in image interpolation: interpolating raster data that contains NoData may produce artifacts when using higher-order interpolations like bilinear or bicubic if the NoData pixels are not properly managed. Since one of our main areas of expertise is remote sensing and MetOc raster data management, where NoData values are used extensively, we decided to focus on adding support for NoData values in all the JAI-EXT operations, such as Affine, Warp and so on.
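To see why this matters, here is a small NumPy illustration of the idea (a conceptual sketch, not the JAI-EXT API): a NoData value bleeds into an interpolated result unless the invalid samples are masked out and the weights renormalized.

# Conceptual sketch in NumPy (not JAI-EXT code): interpolating across a
# NoData sample without masking it first contaminates the result.
import numpy as np

nodata = -9999.0
row = np.array([10.0, 12.0, nodata, 14.0])

# Naive bilinear-style average of the two middle samples
naive = 0.5 * row[1] + 0.5 * row[2]              # -4993.5: NoData bleeds in

# NoData-aware version: drop invalid samples and renormalize the weights
valid = row != nodata
weights = np.array([0.0, 0.5, 0.5, 0.0]) * valid
aware = (weights * np.where(valid, row, 0.0)).sum() / weights.sum()  # 12.0
print(naive, aware)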

What Now?

As we speak, the GeoTools and GeoServer master branches have been updated to use JAI-EXT. We do expect a short period of instability, since the changes we have performed are extensive and deep, so help us test by grabbing a recent nightly of GeoServer from the master branch. That said, this means that GeoServer 2.8 will enjoy full support for NoData and ROI! There is still work to be done though, for instance:
  • Rewrite the missing JAI Operators
  • Rewrite the JAI Tile Scheduler
  • Introduce Support for Progress Report and Cancellation of running operations
If you are interested in supporting this journey, we invite you to join the mailing list and start coding (although funding is highly appreciated as well :P). The GeoSolutions team

by simone giannecchini at April 13, 2015 03:00 PM

gvSIG Team

Access control in gvSIG 2.1.0

Hello everyone!

Here I am again, this time to tell you about the gvSIG permissions system.
In earlier versions of gvSIG, when a developer had to build a customization in which, depending on the user, access had to be granted to some tools and not others, or read-only access given to certain data sources, layers, or tables, there was nothing specific in gvSIG to help with that development; they had to do it all themselves, often rewriting gvSIG code to introduce their permission checks.
With gvSIG 2.1.0, mechanisms have been introduced to manage access to the various tools as well as to data sources. Little by little we will extend the permissions system to other gvSIG functionality, such as:

  • Geoprocesses
  • Preference pages
  • Project documents
  • Access to snappers

In this article we will see how to use these mechanisms.
The first thing to be clear about is that gvSIG does not come with built-in user management.
But… what does this mean?
Simply that gvSIG has no tools for creating users or assigning them permissions. Normally, when we need to equip gvSIG with a permissions system it is because we want to integrate it into an organization that already has its own user management system, so we will not have to duplicate it in order to use it in gvSIG.
gvSIG defines an API that allows us to authenticate a user and check whether they are authorized to perform certain actions. This API is very simple. It consists of two entities:

  • SimpleIdentityManager, which allows us to authenticate a user.
  • SimpleIdentity, which represents an authenticated user and allows us to check whether they are authorized to perform a given action.

The standard gvSIG distribution comes with a base implementation of these services in which there is a default user, “guest“, who, when asked, always answers that they are authorized to access a resource; but we can replace this implementation with our own, and that is the interesting part.

The org.gvsig.tools project defines the SimpleIdentityManager and SimpleIdentity interfaces and their default implementation, along with some abstract classes that make implementing these interfaces easier; the tools “locator” also provides methods to retrieve and register implementations of them.
To illustrate how to supply our own SimpleIdentityManager implementation, we will prepare a gvSIG plugin that:

  • Registers its own SimpleIdentityManager implementation, which validates users against a simple database of “property” files holding the permission information associated with each user.
  • Presents a user login dialog at gvSIG startup and validates the user against the SimpleIdentityManager registered in gvSIG.

The backing store used as the database matters least; each organization will have its own. The important thing is to see how the pieces fit together to provide customized access control over actions and resources in gvSIG.

In the SVN of the “templates” project on the gvSIG redmine you will find the sources of the org.gvsig.trivialidentitymanagement project, with the implementation I am going to walk through. You can download it from:

http://devel.gvsig.org/svn/gvsig-plugintemplates/org.gvsig.trivialidentitymanagement/trunk/org.gvsig.trivialidentitymanagement

In it we will find, basically, three projects:

  • org.gvsig.trivialidentitymanagement.app.mainplugin
  • org.gvsig.trivialidentitymanagement.lib.impl
  • org.gvsig.trivialidentitymanagement.lib.api

org.gvsig.trivialidentitymanagement.lib.api
Here we have the definition of our API, which must extend the gvSIG one.
It contains the interfaces:

  • TrivialIdentityManager
  • TrivialIdentity

org.gvsig.trivialidentitymanagement.lib.impl
Here we have the classes implementing our API:

  • DefaultTrivialIdentityManager
  • DefaultTrivialIdentity

Their implementation is very simple. Just a few details are worth mentioning:

  • The SimpleIdentityManager must always return a SimpleIdentity from its getCurrentIdentity method, never null, even when no user is logged in. We may return a user who always answers that they are authorized, or one who always answers that they are not; it depends on what suits our customization.
  • The “isAuthorized(actionName)” method receives an action name and must answer whether the user is authorized to execute the action associated with that name.
  • The “isAuthorized(actionName, resource, resourceid)“ method must return whether or not the user is authorized to perform the given action on the given resource. Currently, in gvSIG 2.1.0, only access to data sources (the DAL DataStore) uses this mechanism, passing as the resource the DataParameter associated with the attempted action. The actions used by DAL are:
    • create-store
    • open-store
    • read-store

Before looking at the example implementation, let’s see what the properties-file database consists of. Inside the plugin installation folder there is a “dbusers” folder containing one properties file per user, named after the user followed by “.properties”. So, for a user “user01″ there is a file “user01.properties”.

In the example, a user “user01″ is set up with the following configuration:

attr.password=user01
attr.fullname=Test user 01
# The user cannot load resources whose file name ends in dxf
action.dal-read-store.parameter.name=file
action.dal-read-store.parameter.pattern=.*[.]dxf
action.dal-read-store.ifmatches=false
# Nor can they access the point information tool.
action.layer-info-by-point=false

Our example implementation basically keeps a Map for each user; indexing it by action name yields a boolean telling us whether the user is authorized to execute that action, answering yes for any action it does not know:

@Override
public boolean isAuthorized(String actionName) {
    try {
	String value = (String) this.properties.get(ACTION_PREFIX+actionName);
	if( StringUtils.isBlank(value) ) {
	    return true;
	}
	boolean  b = BooleanUtils.toBoolean(value);
	return b;

    } catch(Throwable th) {
	return true;
    }
}

The method that checks whether we have access to a resource is somewhat more involved, but still very basic. It only handles DAL resources, so the resource is always a DataParameter. For each action it holds the following information:

  • name, the name of a parameter of the DataParameter it receives.
  • pattern, a regular expression to apply to the given parameter.
  • ifmatches, a boolean indicating whether to return true or false when the parameter value matches the regular expression.

With something this simple we can already restrict access to resources per user. Let’s go through the code of this method step by step:

@Override
public boolean isAuthorized(String actionName, Object resource, String resourceName) {
    try {

	if( resource == null ) {
	    return this.isAuthorized(actionName);
	}
	if( !DataManager.CREATE_STORE_AUTHORIZATION.equalsIgnoreCase(actionName) &&
	    !DataManager.READ_STORE_AUTHORIZATION.equalsIgnoreCase(actionName) &&
	    !DataManager.WRITE_STORE_AUTHORIZATION.equalsIgnoreCase(actionName) ) {
	    // If it is not a known action, do not treat it specially
	    return this.isAuthorized(actionName);
	}
	if( !(resource instanceof DataParameters) ) {
	    return true;
	}
	...

The first thing we do is check whether we received a resource. If not, we delegate to the isAuthorized method that only takes the action name.
Then we make sure the requested action is one of those supported by our authorization system. In our case we only support DAL action requests, so if it is not one of them, we also delegate to the isAuthorized method that only takes the action name. Finally, we check that the resource is of type DataParameters, which is what we work with; if it is not, since we do not know how to handle it, we simply answer that the user is authorized.

Whatever kind of validation we use, these first lines will always be very similar, while the rest of the method depends heavily on what we check against, and how, to decide whether a user is authorized to perform an action on a resource.

Let’s take a quick look at the example code:

        ...
	String ifmatchesValue = (String) this.properties.get(ACTION_PREFIX+actionName+".ifmatches");
	if( StringUtils.isBlank(ifmatchesValue) ) {
	    return true;
	}     

	DataParameters params = (DataParameters) resource;
	String parameterValue = null;
	String parameterPattern = (String) this.properties.get(ACTION_PREFIX+actionName+".parameter.pattern");
	if( StringUtils.isBlank(parameterPattern) ) {
	    return true;
	}
	String parameterName = (String) this.properties.get(ACTION_PREFIX+actionName+".parameter.name");
	if( StringUtils.isBlank(parameterName) ) {
	    return true;
	}
	if( parameterName.equalsIgnoreCase("all") ) {
	    parameterValue = params.toString();
	} else {
	    if( resource instanceof FilesystemStoreParameters && parameterName.equalsIgnoreCase("file") ) {
		parameterValue = ((FilesystemStoreParameters) resource).getFile().getAbsolutePath();
	    } else {
		if( params.hasDynValue(parameterName) ) {
		    Object x = params.getDynValue(parameterName);
		    if( x == null ) {
			return true;
		    }
		    parameterValue = x.toString();
		}
	    }
	}
	...

What these lines do is retrieve the values of the three settings, “name“, “pattern” and “ifmatches“, associated with the requested action, and then try to retrieve from the DataParameter the value indicated by “name“, treating the parameter names “all” and “file“ as special cases.

Once this information has been gathered, things get simpler:

        ...
	if( StringUtils.isBlank(parameterValue) ) {
	    return true;
	}
	if( !parameterValue.matches(parameterPattern) ) {
	    return true;
	}
	return BooleanUtils.toBoolean(ifmatchesValue);

    } catch(Throwable th) {
	return true;
    }
}

We check whether the parameter value matches the regular expression and, if it does, return the value of “ifmatches”.

There are a few more details, such as the methods:

  • getAttributeNames, which returns the list of names of extra attributes associated with the user.
  • getAttribute, which lets us retrieve the value of an extra user attribute.

But they are not very important and are self-explanatory.

org.gvsig.trivialidentitymanagement.app.mainplugin
Well, now that we have our classes… how do we tell gvSIG to use them?

For that we have to create a plugin and put some additional configuration in it.
On the one hand, we create in the plugin a class implementing Runnable, containing the initialization code of our permissions system. On the other, we must create, in the root of our plugin inside the installation folder, the file “identity-management.ini”. This file only needs two lines, although it can hold more information if our implementation requires it. Here are those two lines:

IdentityManager=org.gvsig.trivialidentitymanagement.impl.DefaultTrivialIdentityManager
IdentityManagementInitializer=org.gvsig.trivialidentitymanagement.Initializer

The IdentityManager entry contains the name of the class to use as the implementation of gvSIG’s SimpleIdentityManager. This class, and everything it needs, must be on our plugin’s class-path. At startup, gvSIG checks whether any plugin carries this “identity-management.ini” file, then loads the class named in the IdentityManager entry and registers it in the tools locator. That way, any part of gvSIG that goes to the tools locator to retrieve the SimpleIdentityManager will get ours.

The other entry, IdentityManagementInitializer, contains the name of a class in our plugin implementing the Runnable interface. Once our SimpleIdentityManager is registered, that class is loaded and its run method is invoked to perform whatever initialization our system needs to work properly. Let’s see what our example code does:

public void run() {
    TrivialIdentityManager identityManager = (TrivialIdentityManager) ToolsLocator.getIdentityManager();
    PluginsManager pluginsManager = PluginsLocator.getManager();
    PluginServices plugin = pluginsManager.getPlugin(this);

    // Initialize the folder database  for the TrivialIdentityManager
    File pluginFolder = plugin.getPluginDirectory();
    File dbfolder = new File(pluginFolder, "dbusers");
    identityManager.setdbFolder(dbfolder);

    // Show login dialog
    // Do not return if user cancel login.
    LoginDialog loginDialog = new LoginDialog(pluginFolder);
    loginDialog.showDialog();
    logger.info("User logged as '"+identityManager.getCurrentIdentity().getID()+"'.");
}

As we can see, it basically does two things:

  • It initializes in our TrivialIdentityManager the database folder where the permission information is stored.
  • It presents a dialog box to authenticate the user.

From here, depending on our application and organization, we can adapt this example project to validate against a relational database, a web service, or an LDAP server. It should not be hard to do. The only precaution to take is that the information about a user’s permissions should be loaded when the SimpleIdentity class is instantiated, so that the isAuthorized methods work against that in-memory information. If we had to hit an external service every time we check whether the user may perform an action, gvSIG could slow down considerably.

Let’s see the example project in action.

With the configuration shown above, user user01 cannot load DXF files or access the information tool.

With the org.gvsig.trivialidentitymanagement.app.mainplugin plugin installed, gvSIG shows the following screen at startup:

login

We log in with user “user01″ and password “user01″, and gvSIG continues loading.
If, once gvSIG is running, we try to load a DXF file, a message tells us we are not authorized to perform that action:

no-autorizado-dxf

And if we load a vector layer, we will see that the information tool is not available.

sin-herramienta-de-informacion
Well, I hope you find this useful; let’s see when I next get a free moment to share another little trick I still have up my sleeve about access control in gvSIG 2.1.0.

I would like to thank Juan Carlos Gutiérrez, whose comments helped me with the design back then by sharing his needs regarding data access control in gvSIG.

Best regards to all


Filed under: development, gvSIG Association, gvSIG Desktop, spanish Tagged: control de acceso

by Joaquin del Cerro at April 13, 2015 02:07 PM

April 11, 2015

Markus Neteler

Fun with docker and GRASS GIS software

GRASS GIS and docker

Sometimes, we developers get reports via the mailing list that this & that would not work on whatever operating system. Now what? Should we be so kind as to install the operating system in question in order to reproduce the problem? Too much work… but nowadays it has become much easier to perform such tests without the need to install a full virtual machine – thanks to docker.

Disclaimer: I don’t know much about docker yet, so take the code below with a grain of salt!

In my case I usually work on Fedora or Scientific Linux based systems. In order to quickly (i.e. roughly 10 min of automated installation on my slow laptop) try out issues of GRASS GIS 7 on e.g., Ubuntu, I can run all my tests in docker installed on my Fedora box:

# we need to run stuff as root user
su
# install docker on Fedora
yum -y install docker-io
systemctl start docker
systemctl enable docker

Now we have a running docker environment. Since we want to exchange data (e.g. GIS data) with the docker container later, we prepare a shared directory beforehand:

# we'll later map /home/neteler/data/docker_tmp to /tmp within the docker container
mkdir /home/neteler/data/docker_tmp

Now we can start to install a Ubuntu docker image (may be “any” image, here we use “Ubuntu trusty” in our example). We will share the X11 display in order to be able to use the GUI as well:

# enable X11 forwarding
xhost +local:docker

# search for available docker images
docker search trusty

# fetch docker image from internet, establish shared directory and display redirect
# and launch the container along with a shell
docker run -v /home/neteler/data/docker_tmp:/tmp:rw -v /tmp/.X11-unix:/tmp/.X11-unix \
       -e uid=$(id -u) -e gid=$(id -g) -e DISPLAY=unix$DISPLAY \
       --name grass70trusty -i -t corbinu/docker-trusty /bin/bash

In almost no time we reach the command line of this minimalistic Ubuntu container which will carry the name “grass70trusty” in our case (btw: read more about Working with Docker Images):

root@8e0f233c3d68:/# 
# now we register the Ubuntu-GIS repos and get GRASS GIS 7.0
add-apt-repository ppa:ubuntugis/ubuntugis-unstable
add-apt-repository ppa:grass/grass-stable
apt-get update
apt-get install grass7

This will take a while (the remaining 9 minutes or so of the overall 10 minutes).

Since I like cursor support on the command line, I launch (again?) the bash in the container session:

root@8e0f233c3d68:/# bash
# yes, we are in Ubuntu here
root@8e0f233c3d68:/# cat /etc/issue

Now we can start to use GRASS GIS 7, even with its graphical user interface from inside the docker container:

# create a directory for our data, it is mapped to /home/neteler/data/docker_tmp/
# on the host machine 
root@8e0f233c3d68:/# mkdir /tmp/grassdata
# create a new LatLong location from EPSG code
# (or copy a location into /home/neteler/data/docker_tmp/)
root@8e0f233c3d68:/# grass70 -c epsg:4326 /tmp/grassdata/latlong_wgs84
# generate some data to play with
root@8e0f233c3d68:/# v.random n=30 output=random30
# start the GUI manually (since we didn't start GRASS GIS right away with it before)
root@8e0f233c3d68:/# g.gui

Indeed, the GUI comes up as expected!

GRASS GIS 7 GUI in docker container

GRASS GIS 7 GUI in docker container

You may now perform all tests, bugfixes, whatever you like and leave the GRASS GIS session as usual.
To get out of the docker session:

root@8e0f233c3d68:/# exit    # leave the extra bash shell
root@8e0f233c3d68:/# exit    # leave docker session

# disable docker connections to the X server
[root@oboe neteler]# xhost -local:docker

To restart this session later, call it with the name we assigned earlier:

[root@oboe neteler]# docker ps -a
# ... you should see "grass70trusty" in the output in the right column

# we are lazy and automate the start a bit
[root@oboe neteler]# GRASSDOCKER_ID=`docker ps -a | grep grass70trusty | cut -d' ' -f1`
[root@oboe neteler]# echo $GRASSDOCKER_ID 
[root@oboe neteler]# xhost +local:docker
[root@oboe neteler]# docker start -a -i $GRASSDOCKER_ID

### ... and so on as described above.

Enjoy.

The post Fun with docker and GRASS GIS software appeared first on GFOSS Blog | GRASS GIS Courses.

by neteler at April 11, 2015 10:30 PM

Nathan Woodrow

An interactive command bar for QGIS

Something that has been on my mind for a long time is an interactive command interface for QGIS: something you can easily open, that runs simple commands, and that interactively asks for arguments when they are needed.

After using the command interface in Emacs for a little bit over the weekend – you can almost hear the Boos! from heavy Vim users :) – I thought this is something I must have in QGIS as well.  I’m sure it can’t be that hard to add.

So here it is.  An interactive command interface for QGIS.

commandbar

commandbar2

The command bar plugin (find it in the plugin installer) adds a simple interactive command bar to QGIS. Commands are defined as Python code and may take arguments.

Here is an example function:

@command.command("Name")
def load_project(name):
    """
    Load a project from the set project paths
    """
    _name = name
    name += ".qgs"
    for path in project_paths:
        for root, dirs, files in os.walk(path):
            if name in files:
                path = os.path.join(root, name)
                iface.addProject(path)
                return
    iface.addProject(_name)

All functions are interactive, and if not all arguments are given when called, the command bar will prompt for each one.

Here is an example of calling the point-at function with no args. It will ask for the x and then the y

pointat

Here is calling point-at with all the args

pointatfunc

Functions can be called in the command bar like so:

my-function arg1 arg2 arg3

The command bar splits the line on spaces; the first token is always the function name, and the rest are arguments passed to the function. You will also note that it converts _ to -, which is easier to type and looks nicer.
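As a rough sketch of the idea (a toy registry and parser of my own, not the plugin’s actual source), the splitting and dispatch could look like this:

# Toy sketch of a command registry and line dispatcher (not the plugin's code).
commands = {}

def command(func):
    # register under the dash-separated variant of the function name
    commands[func.__name__.replace("_", "-")] = func
    return func

def run_line(line):
    parts = line.split(" ")        # first token is the function name
    func = commands[parts[0]]
    return func(*parts[1:])        # remaining tokens become string arguments

@command
def point_at(x, y):
    print("point at %s,%s" % (x, y))

run_line("point-at 100 200")       # -> point at 100,200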

The command bar also has auto complete for defined functions – and tooltips once I get that to work correctly.

You can use CTRL + ; (CTRL + semicolon) or CTRL + , (CTRL + comma) to open and close the command bar.

What is a command interface without auto complete

autocomplete

Use Enter to select the item in the list.

How about a function to hide all the dock panels? Sure, why not.

@command.command()
def hide_docks():
    docks = iface.mainWindow().findChildren(QDockWidget)
    for dock in docks:
        dock.setVisible(False)

alias command

You can also alias a function by calling the alias function in the command bar.

The alias command format is alias {name} {function} {args}

Here is an example of predefining the x for point-at as mypoint

-> alias mypoint point-at 100

point-at is a built-in function that creates a point at x y; however, we can alias it so that it is pre-called with the x argument set. Now when we call mypoint we only have to pass the y each time.

-> mypoint
(point-at) What is the Y?: 200

You can even alias the alias command – because why the heck not :)

-> alias a alias
a mypoint 100

a is now shorthand for alias

WHY U NO USE PYTHON CONSOLE

The Python console is fine and dandy, but we are not going for a full programming language here; that isn’t the point. The point is easy-to-use commands.

You could have a function called point_at in Python that would be

point_at(123,1331)

Handling incomplete functions is a lot harder because of the Python parser. In the end it’s easier and better, IMO, to just make a simple DSL for this and get all the power of a DSL, rather than trying to fit it into Python.

It should also be noted that the commands defined in the plugin can still be called like normal Python functions because there is no magic there. The command bar is just a DSL wrapper around them.

Notes

This is still a bit of an experiment for me, so things might change or might not work as fully expected just yet.

Check out the project’s readme for more info on things that need to be done; I’m open to suggestions and pull requests.

Also see the docs page for more in-depth information.


Filed under: Open Source, python, qgis Tagged: plugin, pyqgis, qgis

by Nathan at April 11, 2015 02:26 PM

April 09, 2015

GeoTools Team

CodeHaus Migration Schedule

As per an earlier blog post, CodeHaus is shutting down and the GeoTools project is taking steps to migrate our issue tracker and wiki to a new home.

First up I need to thank the Open Source Geospatial Foundation for responding quickly in a productive fashion. The board and Alex Mandel were in a position to respond quickly and hire a contractor to work with the system admin committee to capture this content while it is still available.

I should also thank Boundless for providing me time to coordinate the CodeHaus migration, and Andrea for arranging cloud hosting.

Updates:

  • Update April 7th: GeoAPI project attachments migrated (using this to estimate time remaining)
  • Update April 3rd: Issue tracker signup open to all (no need to email project leads).
  • Update April 2nd: Mauro Bartolomeo created GEOT-5074 in the new issue tracker
  • Update March 28th: Placeholder tickets created, contents and attachments to follow

Confluence Migration

Is scheduled for ... March 26th! I have taken a copy of the CodeHaus wiki and will be migrating proposals and project history. An HTML dump of the wiki is published at old.geotools.org so we have a record.

The new wiki is available here: https://github.com/geotools/geotools/wiki
GitHub Wiki

Jira Migration

Jira migration will start on 00:00 UTC Saturday March 28th.

On Saturday all issues will start migrating to their new home (and CodeHaus issue creation will be disabled). If you wish to lend a hand testing please drop by the #osgeo IRC channel on Saturday. Harrison Grundy will be coordinating the proceedings.

We have set up a new JIRA available at osgeo-org.atlassian.net for the migration. You can sign up directly (although we ask you to consider keeping the same user name).

OSGeo Jira
As shown above, a few friendly CodeHaus refugees have also been sheltered from the storm (uDig and GeoAPI).

by Jody Garnett (noreply@blogger.com) at April 09, 2015 05:18 PM

gvSIG Team

MOOC Cycle “GIS for Users” free of charge

índice

The gvSIG-Training e-Learning platform opens its registration period for the MOOC cycle “gvSIG for Users” in English, offered by the gvSIG Association in collaboration with GISMAP.

This cycle is made up of three different Modules:
Module 1: “Introduction to GIS” (starting on the 4th of May 2015)
Module 2: “Layer Editing” (starting on the 25th of May 2015)
Module 3: “Raster Analysis” (starting on Autumn 2015)

It will teach participants how to use and exploit the potential of the open source software gvSIG while performing the most common GIS activities. This course is addressed both to beginners and to skilled GIS users who want to learn how to use this software.

The course is open on a continuous basis, and each module requires a participant engagement of around thirty hours.

Participants can thus plan individually the time to allocate to the course and complete all the scheduled activities without interfering with their daily work.

This course is completely free of charge, except for those participants asking for the corresponding credits of the gvSIG user certification program from the gvSIG Association, which is subject to a payment of 40 Euros per module or 100 Euros for the whole cycle.

For further information about topics, goals…:
http://web.gvsig-training.com/index.php/es/quienes-somos-2/noticias-2/145-the-free-mooc-cycle-gvsig-for-users

To register, press “Enroll” at the corresponding module and then accept the “Site policy agreement”. Finally, you will have to register at the web page, or log in if you are already registered.


Filed under: english, gvSIG Desktop, training Tagged: mooc

by Alvaro at April 09, 2015 04:38 PM

gvSIG Team

Nueva edición del curso gratuito “Introducción a Scripting en gvSIG 2.1″

The goal of this MOOC is to showcase the potential of geospatial programming: the possibility of creating new tools, new geoprocesses or data analyses that increase the power of gvSIG and adapt it to our needs, as well as the automation of tasks, which can yield considerable savings in time and work.

Registration for the course will be free and open from the 4th of May 2015, and students may enroll in or complete it whenever they wish; the course will thus be permanently available.

No prior programming knowledge is needed to take this course: it is pitched at a basic level, and every line of code is explained. The course uses the Python programming language, a favorite for learning to program, as it is very intuitive and quick to pick up.

Taking the course is completely free. Those who complete it and want to receive a Certificate of Achievement, corresponding to 30 credits of the gvSIG Certification program, only have to make a contribution of 40 Euros and carry out a personal project consisting of a gvSIG script, which will be uploaded to the repository and made available to the whole community.

More information about the syllabus, registration, etc.: http://web.gvsig-training.com/index.php/es/quienes-somos-2/noticias-2/140-massive-online-open-course-de-introduccion-a-scripting-en-gvsig-2-1

To enroll in the course, go to “Matriculación” (Enrollment) at the bottom of the page, then accept the “Acuerdo con las Condiciones del Sitio” (Site policy agreement). Finally, complete the registration.


Filed under: development, gvSIG Desktop, scripting, spanish Tagged: mooc, python

by Alvaro at April 09, 2015 03:40 PM

Gis-Lab

Creating and visualizing custom charts and graphs in QGIS using R

QGIS ships with tools for building basic charts from data stored in a layer's attributes. Unfortunately, the built-in tools have a number of shortcomings: frequent rendering artifacts, clumsiness when displaying large volumes of data, and an unattractive appearance. Some of these problems can be solved by creating your own graphs and charts. The article Creating and visualizing custom charts and graphs in QGIS using R shows how the R environment can be used to create custom charts and graphs for QGIS.

A custom radial chart used as a map symbol when rendering a map

Read | Discuss

by SS_Rebelious at April 09, 2015 12:19 AM

April 08, 2015

Paulo van Breugel

QGIS live layer effects is propelling QGIS to the next level in the cartographic realm

This new feature, created by Nyall Dawson and funded through crowdfunding, really sets new limits on what is possible cartographically. Check out Nyall’s post Introducing QGIS live layer effects! for a walkthrough of the new possibilities this feature brings to QGIS. It will be available in version …

by pvanb at April 08, 2015 02:24 PM

Boundless Blog

MGRS Coordinates in QGIS

One of the main characteristics of QGIS, and one of the reasons developers like me appreciate it so much, is its extensibility. Using its Python API, new functionality can be added by writing plugins, and those plugins can be shared with the community. The ability to share scripts and plugins in an open-source medium has made QGIS functionality grow rapidly. The Python API lowers the barrier to entry for programmers, who can now contribute to the project without having to work with the much more intimidating core C++ QGIS codebase.

At Boundless, we have created plugins for QGIS such as the OpenGeo Explorer plugin, which allows QGIS users to interact with OpenGeo Suite components such as PostGIS and GeoServer and provides an easy and intuitive interface for managing them.

Boundless is also involved in the development and improvement of core plugins (plugins that, due to their importance, are distributed by default with QGIS instead of installed optionally by the user). For instance, Boundless is the main contributor to the Processing framework, where most of the analysis capabilities of QGIS reside.

Although both Processing and the OpenGeo Explorer are rather large plugins, most of the plugins available for QGIS (of which there are currently more than a hundred) are smaller, adding just some simple functionality. That is the case with one of our latest developments, the mgrs-tools plugin, which adds support for using MGRS coordinates when working with a QGIS map.

The military grid reference system (MGRS) is a geocoordinate standard which permits points on the earth to be expressed as alphanumeric strings. QGIS has no native support for MGRS coordinates, so the mgrs-tools plugin fills that gap for users of the standard.

Unlike other coordinate systems supported by QGIS, MGRS coordinates are not composed of a pair of values (i.e. lat/lon or x/y) but of a single value. For this reason, implementing support required a different approach.

We created a small plugin that has two features: centering the view on a given MGRS coordinate, and showing the MGRS coordinate at the current mouse position.

The coordinates to zoom to are entered in a panel at the top of the map view, which accepts MGRS coordinates of any degree of precision. The view is moved to that point and a marker is added to the map canvas.

When the MGRS coordinates map tool is selected, the MGRS coordinates corresponding to the current mouse position in the map will be displayed in the QGIS status bar.
Both of these features make use of the Python mgrs library, using it to convert coordinates from the QGIS map canvas into MGRS coordinates or the other way around.
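
To give a concrete flavor of that conversion, here is a minimal sketch using the mgrs package from PyPI (the library mentioned above). The sample point and the assumption that canvas coordinates are already lat/lon (EPSG:4326) are illustrative, not taken from the plugin's source:

    # Minimal sketch of the two conversions described above, using the
    # Python mgrs library. Sample values are illustrative only.
    import mgrs

    converter = mgrs.MGRS()

    # Lat/lon -> MGRS, e.g. for the coordinate shown in the status bar.
    # MGRSPrecision controls how many digits the resulting string carries.
    coord = converter.toMGRS(40.0, -3.7, MGRSPrecision=5)
    print(coord)  # an alphanumeric string such as '30TVK...'

    # MGRS -> lat/lon, e.g. for the "zoom to coordinate" panel; strings
    # of lower precision (shorter strings) are accepted as well.
    lat, lon = converter.toLatLon(coord)
    print(lat, lon)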

Despite its simplicity, this plugin is of great use to anyone working with MGRS coordinates, who until now had no way of using them in QGIS. New routines can be added to extend its functionality, and we plan to do so in the near future.

As you can see, creating Python plugins is the easiest and most practical way of adding new functionality to QGIS or customizing it. By embracing extensibility, the QGIS community has lowered the barrier to solving new challenges. At Boundless, we use our extensive experience creating and maintaining QGIS plugins to provide effective solutions to our QGIS customers, and we provide workshops and training programs for those who want to learn how to do it themselves. Let us know your needs and we will help you get the most out of QGIS.

(Note: The mgrs-tools plugin is currently available at https://github.com/boundlessgeo/mgrs-tools)


The post MGRS Coordinates in QGIS appeared first on Boundless.

by Victor Olaya at April 08, 2015 02:06 PM

Nyall Dawson

Introducing QGIS live layer effects!

I’m pleased to announce that the crowdfunded work on layer effects for QGIS is now complete and available in the current development snapshots! Let’s dive in and explore how these effects work, and check out some of the results possible using them.

I’ll start with a simple polygon layer, with some nice plain styling:

A nice and boring polygon layer

If I open the properties for this layer and switch to the Style tab, there’s a new checkbox for “Draw effects”. Let’s enable that, and then click the little customise effects button to its right:

Enabling effects for the layer

A new “Effects Properties” dialog opens:

Effects Properties dialog

You can see that currently the only effect listed is a “Source” effect. Source effects aren’t particularly exciting – all they do is draw the original layer unchanged. I’m going to change this to a “Blur” effect by clicking the “Effect type” combo box and selecting “Blur”:

Changing to a blur effect

If I apply the settings now, you’ll see that the polygon layer is now blurry. Now we’re getting somewhere!

Blurry polygons!
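
If you would rather script this step than click through the dialog, here is a rough Python console sketch. It assumes the 2.9-era PyQGIS names (rendererV2(), setPaintEffect()); treat it as illustrative rather than as the canonical API:

    # Rough PyQGIS console sketch: attach a blur effect to the active
    # layer. Assumes QGIS 2.9-era API names; not taken from the post.
    from qgis.core import QgsBlurEffect
    from qgis.utils import iface

    layer = iface.activeLayer()
    layer.rendererV2().setPaintEffect(QgsBlurEffect())  # replaces the source effect
    layer.triggerRepaint()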

Ok, so back to the Effects Properties dialog. Let’s try something a bit more advanced. Instead of just a single effect, it’s possible to chain multiple effects together to create different results. Let’s make a traditional drop shadow by adding a “Drop shadow” effect under the “Source” effect:

Setting up a drop shadow

Effects are drawn top-down, so the drop shadow will appear below the source polygons:

Live drop shadows!

Of course, if you really wanted, you could rearrange the effects so that the drop shadow effect is drawn above the source!..

Hmmmm…

You can stack as many effects as you like. Here’s a purple inner glow over a source effect, with a drop shadow below everything:

Inner glow, source, drop shadow…
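
Scripted, a stack like that might look as follows. I'm assuming QgsEffectStack stores effects in the same top-down order as the dialog, and that PyQt4 is the Qt binding in play; both are worth verifying on your build:

    # Hedged sketch of the inner glow / source / drop shadow stack above.
    # Append order is assumed to match the dialog's top-down list.
    from PyQt4.QtGui import QColor
    from qgis.core import (QgsDrawSourceEffect, QgsDropShadowEffect,
                           QgsEffectStack, QgsInnerGlowEffect)
    from qgis.utils import iface

    glow = QgsInnerGlowEffect()
    glow.setColor(QColor("purple"))            # the purple inner glow

    stack = QgsEffectStack()
    stack.appendEffect(glow)                   # top of the visual stack
    stack.appendEffect(QgsDrawSourceEffect())  # the unchanged source
    stack.appendEffect(QgsDropShadowEffect())  # shadow below everything

    layer = iface.activeLayer()
    layer.rendererV2().setPaintEffect(stack)
    layer.triggerRepaint()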

Now it’s time to get a bit more creative… Let’s explore the “transform” effect. This effect allows you to apply all kinds of transformations to your layer, including scaling, shearing, rotation and translation:

The transform effect

Here’s what the layer looks like if I add a horizontally shearing transform effect above an outer glow effect:

Getting tricky…
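
For scripting, the transform effect exposes each of those transformations as a setter. A small hedged sketch (the setter names follow the C++ QgsTransformEffect API and should be checked from Python):

    # The horizontal shear used above, expressed in code. Setter names
    # follow the C++ QgsTransformEffect API; verify them in PyQGIS.
    from qgis.core import QgsTransformEffect

    shear = QgsTransformEffect()
    shear.setShearX(0.5)   # horizontal shear; see also setShearY,
                           # setScaleX/Y, setRotation, setTranslateX/Y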

Transforms can get really freaky. Here’s what happens if we apply a 180° rotation to a continents layer (with a subtle nod to xkcd):

Change your perspective on the world!

Remember that all these effects are applied when the layers are rendered, so no modifications are made to the underlying data.

Now, there’s one last concept regarding effects which really blasts open what’s possible with them, and that’s “Draw modes”. You’ll notice that this combo box contains a number of choices, including “Render only”, “Modifier only” and “Render and modify”:

“Draw mode” options

These draw modes control how effects are chained together. It’s easiest to demonstrate how draw modes work with an example, so this time I’ll start with a Transform effect over a Colorise effect. The transform effect is set to a 45° rotation, and the colorise effect set to convert to grayscale. To begin, I’ll set the transform effect to a draw mode of Render only:

The “Render only” draw mode

In this mode, the results of the effect will be drawn but won’t be used to modify the underlying effects:

Rotation effect over the grayscale effect

So what we have here is the transform effect drawing the polygon rotated by 45°, and underneath that a grayscale copy of the original polygon drawn by the colorise effect. The results of the transform effect have been rendered, but they haven’t affected the underlying colorise effect.

If I instead set the Transform effect’s draw mode to “Modifier only” the results are quite different:

Rotation modifier for grayscale effect

Now, the transform effect is rotating the polygon by 45° but the result is not rendered. Instead, it is passed on to the subsequent colorise effect, so that now the colorise effect draws a grayscale copy of the rotated polygon. Make sense? We could potentially chain a whole stack of modifier effects together to get some great results. Here’s a transform, blur, colorise, and drop shadow effect all chained together using modifier only draw modes:

A stack of modifier effects
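
As a sketch, the same kind of modifier-only chain can be assembled in code. The DrawMode values below (Modifier, Render) are my reading of the C++ QgsPaintEffect enum, so double-check them in your PyQGIS build:

    # Hedged sketch of a modifier-only chain: rotate, then blur, then
    # render a drop shadow of the accumulated result.
    from qgis.core import (QgsBlurEffect, QgsDropShadowEffect,
                           QgsEffectStack, QgsPaintEffect,
                           QgsTransformEffect)

    transform = QgsTransformEffect()
    transform.setRotation(45)                       # degrees
    transform.setDrawMode(QgsPaintEffect.Modifier)  # pass on, draw nothing

    blur = QgsBlurEffect()
    blur.setDrawMode(QgsPaintEffect.Modifier)       # blur the rotated result

    shadow = QgsDropShadowEffect()
    shadow.setDrawMode(QgsPaintEffect.Render)       # finally paint the outcome

    stack = QgsEffectStack()
    for effect in (transform, blur, shadow):
        stack.appendEffect(effect)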

The final draw mode, “Render and modify”, both renders the effect and applies its result to underlying effects. It’s a combination of the two other modes. Using draw modes to customise the way effects chain is really powerful. Here’s a combination of effects which turns an otherwise flat star marker into something quite different:

Lots of effects!

The last thing I’d like to point out is that effects can be applied either to an entire layer or to the individual symbol layers for features within a layer. Basically, the possibilities are almost endless! Python plugins can also extend this further by implementing additional effects.
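
To give a feel for that extension point, here is a deliberately hedged sketch of a Python-defined effect. It assumes QgsPaintEffect's virtual methods (type(), clone(), properties(), readProperties(), draw()) can be overridden from PyQGIS; confirm against the API documentation before relying on it:

    # Hypothetical custom effect: repaint the source picture at half
    # opacity. Assumes QgsPaintEffect is subclassable from Python.
    from qgis.core import QgsPaintEffect

    class FadeEffect(QgsPaintEffect):
        def type(self):
            return "fade"              # unique name for this effect

        def clone(self):
            return FadeEffect()

        def properties(self):
            return {}                  # nothing configurable in this sketch

        def readProperties(self, props):
            pass

        def draw(self, context):
            painter = context.painter()
            painter.save()
            painter.setOpacity(0.5)    # the actual "effect"
            self.drawSource(painter)   # paint the cached source picture
            painter.restore()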

All this work was funded through the 71 generous contributors who donated to the crowdfunding campaign. A big thank you goes out to all of you who made this work possible! I honestly believe that this feature takes QGIS’ cartographic possibilities to whole new levels, and I’m really excited to see the maps which come from it.

Lastly, there are two other crowdfunding campaigns currently in progress: Lutra Consulting is raising funds for a built-in autotrace feature, and Radim has a campaign to extend the functionality of the QGIS GRASS plugin. Please check these out and contribute if you’re interested in their work and would like to see these changes land in QGIS.

by Nyall Dawson at April 08, 2015 12:24 PM