field as class code)
- Landsat pansharpening fix for 32-bit systems
- Improved code for several post-processing functions
This release addresses some minor issues found in the first GRASS GIS 7.0.0 release published earlier this year. The new release provides a series of stability fixes in the core system and the graphical user interface, PyGRASS improvements, some manual enhancements, and a few language translations.
This release is the 32nd birthday release of GRASS GIS.
New in GRASS GIS 7: Its new graphical user interface supports the user in making complex GIS operations as simple as possible. A new Python interface to the C library lets users create new GRASS GIS Python modules in a simple way while still obtaining powerful and fast modules. Furthermore, the libraries were significantly improved for speed and efficiency, along with support for huge files. A lot of effort has been invested in standardizing parameter and flag names. Finally, GRASS GIS 7 comes with a series of new modules to analyse raster and vector data, along with a full temporal framework. For a detailed overview, see the list of new features. As a stable release, 7.0 enjoys long-term support.
Source code download:
See also our detailed announcement:
About GRASS GIS
The Geographic Resources Analysis Support System (http://grass.osgeo.org/), commonly referred to as GRASS GIS, is an Open Source Geographic Information System providing powerful raster, vector and geospatial processing capabilities in a single integrated software suite. GRASS GIS includes tools for spatial modeling, visualization of raster and vector data, management and analysis of geospatial data, and the processing of satellite and aerial imagery. It also provides the capability to produce sophisticated presentation graphics and hardcopy maps. GRASS GIS has been translated into about twenty languages and supports a huge array of data formats. It can be used either as a stand-alone application or as backend for other software packages such as QGIS and R geostatistics. It is distributed freely under the terms of the GNU General Public License (GPL). GRASS GIS is a founding member of the Open Source Geospatial Foundation (OSGeo).
The GRASS Development Team, July 2015
A while back, I worked on a project that required the conversion of a number of KML/KMZ (Google Earth) raster files into vector format (don’t ask!). Because there were a lot of files, it was painstaking to manually georeference the files after unzipping the KMZ to extract the rasters. I dug around on the web and found two tools that did the job. The first, WorldFileTool, works great, but must be run individually for each file (i.e. you can’t run it in a batch over multiple files in a directory). I use this tool if I’m only converting a single file, or fewer than a handful at a time.
The other option I found was a shell script created by Nicolas Moyroud, who had made it available at this link. However, the link now appears to be broken, and I can’t find another reference to the file. As it’s tagged as a “GNU/GPL v3 – Free use, distribution and modification” license, I’m posting a copy here for others who may find it of use. Note that all credit for this file goes to Nicolas Moyroud, and I have no claim to this work!
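For the curious, the core of such a conversion is tiny: a KML GroundOverlay carries a LatLonBox with the geographic bounds of the image, and an ESRI world file is just six numbers derived from those bounds and the image's pixel dimensions. The following is a minimal stand-alone sketch of that arithmetic (this is not Nicolas Moyroud's script; the image size is assumed known, and reading it from the actual raster is left out):

```python
import xml.etree.ElementTree as ET

KML_NS = "{http://www.opengis.net/kml/2.2}"

def latlonbox(kml_text):
    """Extract the GroundOverlay LatLonBox bounds from a KML document."""
    root = ET.fromstring(kml_text)
    box = root.find(f".//{KML_NS}LatLonBox")
    return {side: float(box.find(KML_NS + side).text)
            for side in ("north", "south", "east", "west")}

def world_file(bounds, width_px, height_px):
    """Return the six world-file values for an image covering the bounds:
    x pixel size, two rotation terms (0), negative y pixel size, and the
    coordinates of the centre of the top-left pixel."""
    xres = (bounds["east"] - bounds["west"]) / width_px
    yres = (bounds["south"] - bounds["north"]) / height_px  # negative
    return [xres, 0.0, 0.0, yres,
            bounds["west"] + xres / 2,   # x of top-left pixel centre
            bounds["north"] + yres / 2]  # y of top-left pixel centre

kml = """<kml xmlns="http://www.opengis.net/kml/2.2"><GroundOverlay>
<LatLonBox><north>41.0</north><south>40.0</south>
<east>-74.0</east><west>-75.0</west></LatLonBox>
</GroundOverlay></kml>"""
print(world_file(latlonbox(kml), 1000, 1000))
```

Writing those six values, one per line, next to the extracted image (e.g. as `image.pgw` for a PNG) is all the georeferencing a GIS needs to place the overlay.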
One of the most discussed topics of recent years is the rise of UAVs, MAVs, RPVs, or simply drones. In their latest evolutions, these flying devices are equipped not only with full HD cameras, but also GPS receivers and even small computers. This equipment is the basis for the next generation of aerial geo-data and services. In the 3rd edition of GeoMonday we will cover the whole lifecycle, from creation and processing up to integration for location intelligence and services. It’s a special pleasure for us to hold our session for the first time in the beautiful city center of Potsdam, thanks to our partner Zukunftsagentur Brandenburg.
When: Monday September 14th, 2015, starting 6pm sharp
Where: arcona Hotel am Havelufer, Zeppelinstraße 136, 14471 Potsdam
You are welcome to join our event to become part of the GeoMonday community. Get your free tickets here:
We will announce our speakers in the coming days and weeks, so stay tuned…
Thanks to the work of Volker Fröhlich and other Fedora/EPEL packagers I was able to create RPM packages of QGIS 2.10 Pisa for Fedora 21, Centos 7, and Scientific Linux 7 using the great COPR platform.
The following packages can now be installed and tested on epel-7-x86_64 (Centos 7, Scientific Linux 7, etc.), and Fedora-21-x86_64:
Installation instructions (run as “root” user or use “sudo”):
su
# EPEL7:
yum install epel-release
yum update
wget -O /etc/yum.repos.d/qgis-2-10-epel-7.repo https://copr.fedoraproject.org/coprs/neteler/QGIS-2.10-Pisa/repo/epel-7/neteler-QGIS-2.10-Pisa-epel-7.repo
yum update
yum install qgis qgis-grass qgis-python
# Fedora 21:
dnf copr enable neteler/QGIS-2.10-Pisa
dnf update
dnf install qgis qgis-grass qgis-python
The post QGIS 2.10 RPMs for Fedora 21, Centos 7, Scientific Linux 7 appeared first on GFOSS Blog | GRASS GIS Courses.
Once again this year, the reference conference of the gvSIG community and one of the most relevant events on free geomatics at the international level will take place. The 11th International gvSIG Conference will be held from December 2nd to 4th, 2015, under the slogan “It’s possible. It’s real”.
Call for papers
The conference program has been excellent in recent years, and we’re sure it will be very good this year too. We look forward to your proposals about big and small projects, case studies, research and university work developed on gvSIG, gvNIX, the use of standards, and Spatial Data Infrastructures built on free geomatics…
If you are interested, the call for papers is now open. As of today, communication proposals can be sent to the email address: email@example.com; they will be evaluated by the scientific committee for inclusion in the conference program.
There are two types of communication: paper or poster. Information regarding the rules for presenting communications can be found in the Communications section of the website. Abstracts will be accepted until September 25th.
Organizations interested in collaborating in the event can find information in the How to collaborate section of the conference website. We especially call on institutions and companies that use gvSIG technology to collaborate in this event.
Hi everyone,
The “7th gvSIG Conference of Latin America and the Caribbean” is almost here; this year it is hosted by the Faculty of Geography of the Universidad Autónoma del Estado de México. Among the various workshops and presentations there will be a “Scripting workshop with gvSIG 2.2”.
Who is the “Scripting workshop with gvSIG 2.2” aimed at?
What should attendees know and bring?
If you have no programming or Python background, you can still attend the workshop and follow it as a talk, even if you cannot complete all the exercises we will go through.
What you will need if you want to follow the workshop on your own laptop:
A gvSIG 2.2 installation that is working correctly and has the scripting plugin installed.
As for the operating system, it should be possible to follow along on both Linux and Windows. In my case, I will teach the workshop on Linux, 64-bit Kubuntu.
What will we cover in the workshop?
It is a three-to-four-hour workshop, so there is time to cover some interesting material.
The idea is to split the workshop into three blocks:
We will also look at some tricks for discovering which operations are available on the various components and objects we can access, as well as where to find documentation about some of them.
We will then build on what we created in the previous section, adding a graphical interface so that we end up with a tool that lets us customize our maps.
Depending on what attendees are interested in, we will give more weight to some parts than others, adapting the workshop to them.
Before the conference I hope to publish another short article pointing to documentation you can download about what we will cover during the workshop.
Remember that the workshops, like all conference activities, are free of charge, and that to attend you need to register for the conference via the following link:
Previous posts about workshops at the 7th gvSIG LAC Conference:
Best regards to all!
As you know, NetCDF and GRIB are commonly used formats in the Meteorological and Oceanographic (MetOc) context for observational data and numerical modeling, being platform-independent formats for representing multidimensional, array-oriented scientific data. For instance, data for air temperature, water current, or wind speed computed by mathematical models across multiple dimensions, such as time, depth/elevation, or physical quantities measured by sensors, may be served as NetCDF datasets.
In recent years, we have improved the GeoTools library in this area by providing a NetCDF plugin based on the Unidata NetCDF Java library. It is worth pointing out that the same library also provides access to GRIB datasets. As a result, you can configure a NetCDF/GRIB coverage store in GeoServer for a NetCDF/GRIB file and set up different coverages/layers, one for each NetCDF variable/GRIB parameter available in the input file, together with its underlying dimensions (time, elevation, ...)
Whilst the standalone NetCDF/GRIB store provides access to a single NetCDF/GRIB file, exposing it as a self-contained coverage store, multiple datasets can be served as a single ImageMosaic coverage store. This is especially useful when you have to deal with a collection of files representing different runs and forecasts of a meteorological model. Think of a meteorological agency running a model each day, producing N forecasts per day with a 1-hour step. In that case, an ImageMosaic can be configured on top of the folder containing the related NetCDF/GRIB datasets. Moreover, scripts running periodically can automatically add new files to that folder, keeping the available data up to date with the latest forecasts. With this approach you configure a coverage store based on an ImageMosaic, so that you can send WMS GetMap requests to depict, for instance, wind currents at specific heights above ground, or send WCS GetCoverage requests to get raw data about precipitation at different times.
Creating meaningful maps for the user requires proper styling to be applied to the raw data. SLD allows you to customize the rendering of your NetCDF/GRIB datasets. For instance:
More details on NetCDF/GRIB styles and other rendering transformations can be found in the Rendering Transformations section of the SpatioTemporal training. You can also take a look at this blog post about the wind barbs depicted in the previous example. The full SLD for the wind barbs example is available here.
Whilst WMS allows you to create maps with custom styling for a specific slice of a NetCDF/GRIB variable, WCS allows you to get raw data for a “hypercube” spanning multiple values across different dimensions.
In this context, WCS 2.0 defines:
Standard output formats such as GeoTIFF and ArcGrid don’t allow encoding multiple “2D slices” of the same coverage for the different time and elevation ranges involved in the request.
Therefore, a NetCDF output format has been developed to store all the requested portions of a coverage in a single multidimensional file. For instance, a request like this: http://localhost:8080/geoserver/wcs?request=GetCoverage&service=WCS&version=2.0.1&coverageId=geosolutions__NO2&Format=application/x-netcdf&subset=http://www.opengis.net/def/axis/OGC/0/Long(5,20)&subset=http://www.opengis.net/def/axis/OGC/0/Lat(40,50)&subset=http://www.opengis.net/def/axis/OGC/0/elevation(300,1250)&subset=http://www.opengis.net/def/axis/OGC/0/time("2013-03-01T10:00:00.000Z","2013-03-01T22:00:00.000Z") will create a NetCDF file containing all available data for the NO2 (Nitrogen Dioxide) coverage.
That request gets all NO2 data within the elevation range [300, 1250] for the time period from 2013/03/01 at 10 AM to 2013/03/01 at 10 PM, in the bounding box with corners 5°E 40°N and 20°E 50°N.
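Requests like the one above are easy to assemble programmatically. A minimal sketch, using only the Python standard library (the helper function is ours, not a GeoServer API, and the axis labels are shortened from the full OGC axis URIs used in the example above):

```python
from urllib.parse import urlencode

def wcs_getcoverage_url(base, coverage_id, subsets, fmt="application/x-netcdf"):
    """Build a WCS 2.0.1 GetCoverage request with dimension subsets."""
    params = [
        ("request", "GetCoverage"),
        ("service", "WCS"),
        ("version", "2.0.1"),
        ("coverageId", coverage_id),
        ("format", fmt),
    ]
    # Each subset trims one axis of the hypercube: (axis, low, high).
    for axis, low, high in subsets:
        params.append(("subset", f"{axis}({low},{high})"))
    return base + "?" + urlencode(params)

url = wcs_getcoverage_url(
    "http://localhost:8080/geoserver/wcs",
    "geosolutions__NO2",
    [("Long", 5, 20), ("Lat", 40, 50), ("elevation", 300, 1250),
     ("time", '"2013-03-01T10:00:00.000Z"', '"2013-03-01T22:00:00.000Z"')],
)
print(url)
```

Fetching that URL (e.g. with `urllib.request` or curl) then yields the single multidimensional NetCDF described above.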
In Panoply, the output will look like this. Notice the multiple values available across dimensions: 13 time values and 6 elevation values, which can be combined to get 6*13 = 78 different 2D slices of the requested coverage.
An important improvement we recently made in handling NetCDF and GRIB files is support for different projections. In the beginning, the GeoTools/GeoServer NetCDF plugins only supported WGS84-based datasets, due to missing logic to parse projection-related information. Lately (GeoServer 2.8.x), the NetCDF input format has been improved to support different coordinate reference systems expressed through GridMapping, as per the NetCDF-CF conventions, so we can support Lambert Conformal, Stereographic, Transverse Mercator, Albers Equal Area, Azimuthal Equal Area, and Orthographic projections. The NetCDF-CF GridMapping approach requires associating a NetCDF variable containing the projection information with a multidimensional variable containing data defined in that projection. For instance, your dataset may contain an icing_probability variable declaring a grid_mapping = "LambertConformal_Projection" attribute, as well as a "LambertConformal_Projection" variable containing this definition:
int LambertConformal_Projection;
  :grid_mapping_name = "lambert_conformal_conic";
  :latitude_of_projection_origin = 25.0; // double
  :longitude_of_central_meridian = -90.0; // double
  :standard_parallel = 25.0; // double
  :earth_radius = 6371229.0; // double
This information is parsed internally to set up a Coordinate Reference System. Moreover, a custom EPSG code matching that CRS should be added to GeoServer’s user_projections definitions in order to have a valid code identifying that custom projection. More information on this topic is available as part of the GeoServer NetCDF community module documentation.
Finally, the NetCDF output format has been improved too, in order to:
As you may already know, this year the LAC Conference will be held in Toluca, Mexico, from August 26th to 28th.
On the conference website you can check the program of activities with the talks that will be presented. It is a very complete program that shows the variety of uses of gvSIG technology, and an example of the growing adoption of gvSIG across the most diverse fields and geographies. The conference will also be an opportunity to learn about the new products of the gvSIG Association, such as gvNIX and gvCity.
As a complement to the program, for everyone who wants more information about the LAC Conference, we have been publishing posts with information about some of the workshops that will take place at the conference (and we will keep publishing them over the coming weeks).
We remind you that registration is still open and, as with every gvSIG event, it is free, but seating is limited (there are already more than two hundred registrants), so we recommend you do not wait until the last days to register.
We would also like to mention that several members of the gvSIG team will be at the LAC Conference, actively participating with workshops and talks. And, of course, we hope it will also be a good occasion to talk, establish collaborations, and add new contributors to the community…
If you have ever seen maps hosted on the Mapbox platform, you will probably agree on the quality of their design. Mapbox's business is to host and serve geospatial data. For this reason, all the great tools Mapbox provides are oriented toward helping its users prepare and work with their data.
One of those tools is Mapbox Studio. Mapbox Studio (MbS) is a desktop application for creating CartoCSS themes that are later used to generate raster tiles. Briefly, what MbS does is download OpenStreetMap data in vector format and render it on the fly, applying the specified CartoCSS style.
The result of working with MbS is not a set of tiles but a style, that is, a set of rules expressing which colour must be used to render roads, at which levels labels must appear and at which size, which colour to use for the ground, and so on. This style can later be uploaded to the Mapbox platform so that raster tiles are generated in the cloud and we can consume them, paying for the service. (I hope one day I can contract their services; they deserve it for their great work.)
The question we can ask ourselves is: how can we generate the raster tiles locally from a given MbS style?
Well, this article is about that. Continue reading.
Let’s start from the beginning: download the Mapbox Studio application and install it on your system. Once installed, run it and you will be asked to connect to the Mapbox platform.
There are two main reasons why Mapbox requires you to register as a user. First, the power of the platform is in the cloud, and the goal is for you to upload all your data to their servers. That includes the styles you create.
Second, MbS retrieves vector data from Mapbox servers. When you register as a user you get an API token that identifies your requests. Each time MbS makes a request to fetch data, it carries your token, identifying you as the user. This way Mapbox can detect whether any user is abusing the platform.
Once logged in, you will be able to create new map styles. The easiest way is to start from one of the starter styles created by the great Mapbox designers:
Here we have chosen the Mapbox Outdoors style. In the image you can see the style code (CartoCSS, which is inspired by CSS) and the resulting tiles obtained by painting the vector information with the given style rules:
Store the style with a new name somewhere on your computer, for example, customstyle. If you look at your disk you will see that a customstyle.tm2 folder has been created, containing a bunch of files that define the style rules (take a look, they are not dangerous).
Finally, modify some properties, for example the @crop colors, and save to see the result:
Great!!! You have just created your first custom style.
Looking for a solution, I discovered the tessera and tl tools. Tessera is a Node-based command line application. It is built on some modules from Mapbox (specifically tilelive) plus others implemented by its author (Seth Fitzsimmons). The result is that we can run tessera passing an MbS-defined style, open a browser pointing to a local address, and see a map with the raster tiles generated from our MbS style.
Similarly, tl is a Node-based command line tool we can run, passing a set of options, to generate an MBTiles file or a pyramid of tiles following the well known
NOTE: You need to have NodeJS installed on your system, along with the npm package manager command line tools.
I don’t like installing global node packages (or at least no more than necessary), so I’m going to install these tools in a custom folder:
> mkdir tiletools
> cd tiletools
Inside the directory, execute the next command, which installs the tessera and tl packages among others:
> npm install tessera tl mbtiles mapnik tilelive tilelive-file tilelive-http tilelive-mapbox tilelive-mapnik tilelive-s3 tilelive-tmsource tilelive-tmstyle tilelive-utfgrid tilelive-vector tilejson
You will see that a node_modules directory has been created, containing subdirectories with the same names as the packages above.
Let’s try to run tessera for the first time. Because it is installed as a local node module execute:
> ./node_modules/tessera/bin/tessera.js
Usage: node tessera.js [uri] [options]

uri  tilelive URI to serve

Options:
  -C SIZE, --cache-size SIZE          Set the cache size (in MB)
  -c CONFIG, --config CONFIG          Provide a configuration file
  -p PORT, --port PORT                Set the HTTP Port
  -r MODULE, --require MODULE         Require a specific tilelive module
  -S SIZE, --source-cache-size SIZE   Set the source cache size (in # of sources)
  -v, --version                       Show version info

A tilelive URI or configuration file is required.
Tessera requires you to pass a URI so it can serve its content. It accepts URIs for Mapbox hosted files, Mapnik, TileMill, Mapbox Studio, …
Run it again, indicating the path to our previously created style and the corresponding protocol:
> ./node_modules/tessera/bin/tessera.js tmstyle://./customstyle.tm2
Listening at http://0.0.0.0:8080/
/Users/antonio/Downloads/tiletools/node_modules/tessera/server.js:43
throw err;
^
Error: A Mapbox access accessToken is required. `export MAPBOX_ACCESS_TOKEN=...` to set.
...
At first it seems tessera is working on port 8080, but then we get an error about MAPBOX_ACCESS_TOKEN. As you may remember from the first section, Mapbox requires all requests to be signed with the user token. So you need to get the access token from your account and set it as an environment variable before running tessera:
> export MAPBOX_ACCESS_TOKEN=your_token_here
> ./node_modules/tessera/bin/tessera.js tmstyle://./customstyle.tm2
Listening at http://0.0.0.0:8080/
/Users/antonio/Downloads/tiletools/node_modules/tessera/server.js:43
throw err;
^
Error: Failed to find font face 'Open Sans Bold' in FontSet 'fontset-0' in FontSet
We are close to making it work. The problem now is that our MbS style uses a font we don't have installed on our system. One easy, but brute-force, solution is to install all Google Web Fonts on your system. For this purpose you can use the Web Font Load installation script. In my case I installed them in the user’s fonts folder.
Once the fonts are installed, try running tessera again:
> ./node_modules/tessera/bin/tessera.js tmstyle://./customstyle.tm2
Listening at http://0.0.0.0:8080/
/Users/antonio/Downloads/tiletools/node_modules/tessera/server.js:43
throw err;
^
Error: Failed to find font face 'Open Sans Bold' in FontSet 'fontset-0' in FontSet
That’s a bit strange: we have just installed the fonts, but they are not found. What is happening? Well, tessera uses Mapnik to create the raster tiles, and Mapnik looks for fonts in the folders specified by the MAPNIK_FONT_PATH environment variable, so let's define the variable:
> export MAPNIK_FONT_PATH=~/Library/Fonts/
and execute the script again:
> ./node_modules/tessera/bin/tessera.js tmstyle://./customstyle.tm2
Listening at http://0.0.0.0:8080/
/Users/antonio/Downloads/tiletools/node_modules/tessera/server.js:43
throw err;
^
Error: Failed to find font face 'Arial Unicode MS Regular' in FontSet 'fontset-0' in FontSet
OMG!!! This seems like a never-ending story. Now we need to install the Arial Unicode font. Find it, install it on your system, and run tessera again:
> ./node_modules/tessera/bin/tessera.js tmstyle://./customstyle.tm2
Listening at http://0.0.0.0:8080/
Great!!! It seems tessera is working fine. Let’s open our browser at http://localhost:8080 and see the result:
A map implemented with the Leaflet web mapping library is shown, rendering raster tiles that are created on the fly. Look at the console to see tessera's output:
We can see how the tiles at the current zoom level, zoom 8, have been generated.
At this point we have tessera working, but what about generating a local pyramid of tiles for given zoom levels and a given bounding box?
Before continuing, we need to decide which bounding box we want to generate: the whole world, or only a piece? In my case I want three zoom levels (7, 8 and 9) wrapping Catalonia.
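To get a feel for how many tiles such a pyramid contains, the standard Web Mercator XYZ tiling math can be sketched in a few lines of Python (this is not part of tessera or tl, just the usual slippy-map tile formulas, applied here to a Catalonia-like bounding box):

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Convert WGS84 lon/lat to XYZ tile indices at a zoom level."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

def tiles_in_bbox(min_lon, min_lat, max_lon, max_lat, zoom):
    """Count the tiles needed to cover a bounding box at one zoom level."""
    x0, y1 = lonlat_to_tile(min_lon, min_lat, zoom)  # y grows southward
    x1, y0 = lonlat_to_tile(max_lon, max_lat, zoom)
    return (x1 - x0 + 1) * (y1 - y0 + 1)

# Bounding box wrapping Catalonia, as used in the tl command below
bbox = (0.023293972, 40.4104003077, 3.6146087646, 42.9542303723)
for z in (7, 8, 9):
    print(z, tiles_in_bbox(*bbox, z))
```

The count roughly quadruples per zoom level, which is why restricting the bounding box matters so much when generating tiles locally.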
The tl tool can run three main commands, but we are only interested in copy, which copies data between two providers. In our case, the MbS style is one provider and the file system is the other. Run the tl command to see the available options:
> ./node_modules/tl/bin/tl.js copy -help
'-p' expects a value

Usage: node tl.js copy <source> <sink> [options]

source  source URI
sink    sink URI

Options:
  -v, --version                 Show version info
  -b BBOX, --bounds BBOX        WGS84 bounding box [-180,-85.0511,180,85.0511]
  -z ZOOM, --min-zoom ZOOM      Min zoom (inclusive)
  -Z ZOOM, --max-zoom ZOOM      Max zoom (inclusive)
  -r MODULE, --require MODULE   Require a specific tilelive module
  -s SCHEME, --scheme SCHEME    Copy scheme [scanline]
  -i FILE, --info FILE          TileJSON

copy data between tilelive providers
So let's run the command to copy data from our MbS style to the local tiles folder. We want to generate tiles from zoom level 7 to 9, indicating a bounding box wrapping Catalonia. The -b option must be specified as [minLon minLat maxLon maxLat].
> ./node_modules/tl/bin/tl.js copy -z 7 -Z 9 -b "0.023293972 40.4104003077 3.6146087646 42.9542303723" tmstyle://./customstyle.tm2/ file://./tiles
Segmentation fault: 11
Ouch!!! That hurts: a segmentation fault. After looking around for a while, I realised it seems to be a bug. To solve it, go to tl/node_modules/abaculus/node_modules and remove the mapnik folder dependency. It is redundant because there is one installed in the parent folder.
Execute the command again and see the output:
The tl tool has created a local tiles directory and generated all the raster tiles for the given zoom levels and bounding box. The output also shows the time required to generate each tile.
That’s all. Now we only need to host the tiles on our own servers!!!
|The flexible mapping stack of NRK.no, allowing journalists and digital storytellers to create advanced maps in minutes.|
|"Kartoteket" - our in-house mapping tool built on top of our mapping stack.|
|Digital storytelling using NRK's mapping stack and Mapbox.|
|Flood maps using NRK's mapping stack and CartoDB.|
|Radon-affected areas in Norway using NRK's mapping stack.|
|Our popular photo maps.|
|Video map of the long running TV show Norge Rundt.|
|Tracking of "Sommerbåten" along the coast of Norway.|
GeoServer 2.7.2 is a stable release of GeoServer recommended for production deployment. Thanks to everyone taking part, submitting fixes and new functionality including:
Also, as a heads-up for Oracle users, the Oracle store no longer ships with the JDBC driver (due to redistribution limitations imposed by Oracle). For details, see the updated Oracle installation instructions here.
Thanks to Andrea (GeoSolutions) and Kevin (Boundless) for this release.
The Open Source Geospatial Foundation would like to open nominations for the 2015 Sol Katz Award for Geospatial Free and Open Source Software.
The Sol Katz Award for Geospatial Free and Open Source Software (GFOSS) will be given to individuals who have demonstrated leadership in the GFOSS community. Recipients of the award will have contributed significantly through their activities to advance open source ideals in the geospatial realm.
Sol Katz was an early pioneer of GFOSS and left behind a large body of work in the form of applications, format specifications, and utilities while at the U.S. Bureau of Land Management. This early GFOSS archive provided both source code and applications freely available to the community. Sol was also a frequent contributor to many geospatial list servers, providing much guidance to the geospatial community at large.
Sol unfortunately passed away in 1999 from Non-Hodgkin’s Lymphoma, but his legacy lives on in the open source world. Those interested in making a donation to the American Cancer Society, as per Sol’s family’s request, can do so at https://donate.cancer.org/index.
Nominations for the Sol Katz Award should be sent to SolKatzAward@osgeo.org with a description of the reasons for this nomination. Nominations will be accepted until 23:59 UTC on August 21st (http://www.timeanddate.com/worldclock/fixedtime.html?month=8&day=21&year=2015&hour=23&min=59&sec=59).
A recipient will be decided from the nomination list by the OSGeo selection committee.
The winner of the Sol Katz Award for Geospatial Free and Open Source Software will be announced at the FOSS4G-Seoul event in September. The hope is that the award will both acknowledge the work of community members, and pay tribute to one of its founders, for years to come.
It should be noted that past awardees and selection committee members are not eligible.
More info at the Sol Katz Award wiki page
2014: Gary Sherman
2013: Arnulf Christl
2012: Venkatesh Raghavan
2011: Martin Davis
2010: Helena Mitasova
2009: Daniel Morissette
2008: Paul Ramsey
2007: Steve Lime
2006: Markus Neteler
2005: Frank Warmerdam
Selection Committee 2015:
Jeff McKenna (chair)
IDF is the data format used by Austrian authorities to publish the official open government street graph. It’s basically a text file describing network nodes, links, and permissions for different modes of transport.
Since, to my knowledge, there hasn’t been any open source IDF parser available so far, I’ve started to write my own using PyQGIS. You can find the script which is meant to be run in the QGIS Python console in my Github QGIS-resources repo.
I haven’t implemented all details yet but it successfully parses nodes and links from the two example IDF files that have been published so far as can be seen in the following screenshot which shows the Klagenfurt example data:
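The IDF layout is simple enough that the parsing idea fits in a few lines. The sketch below is independent of my PyQGIS script and uses only the standard library; it assumes the semicolon-separated tbl/atr/rec structure of the published example files, with hypothetical column names for illustration:

```python
def parse_idf(text):
    """Parse IDF-style tables: a 'tbl;' line starts a table, 'atr;'
    lists its column names, and each 'rec;' row carries the values.
    Assumed layout, modeled on the published example files."""
    tables = {}
    current, columns = None, []
    for line in text.splitlines():
        parts = line.rstrip().split(";")
        tag = parts[0]
        if tag == "tbl":
            current = parts[1]
            tables[current] = []
        elif tag == "atr":
            columns = parts[1:]
        elif tag == "rec" and current:
            tables[current].append(dict(zip(columns, parts[1:])))
    return tables

# Tiny made-up sample in the assumed layout
sample = """tbl;Node
atr;NODE_ID;LON;LAT
rec;1;14.31;46.63
rec;2;14.33;46.62
tbl;Link
atr;LINK_ID;FROM_NODE;TO_NODE
rec;10;1;2"""

tables = parse_idf(sample)
print(tables["Node"][0]["LON"])
```

From dictionaries like these, building QGIS point and line features (as the PyQGIS script does) is a straightforward second step.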
If you are interested in advancing this project, just get in touch here or on Github.
Earlier this year, in cold January morning commutes, I finally read William Gibson’s masterpiece trilogy. If you know me personally, this may sound ironic, because I dig geek culture quite a bit. Still, I’m a slow reader and I never had a chance to read the three books before. Which was good, actually, because I could enjoy them deeply, without the kind of teenage infatuation that is quickly gone ‒ and most importantly because I could read the original books, instead of a translation: I don’t think 15-year old myself could read English prose, not Gibson’s prose at least, that easily.
I couldn’t help several moments of excitement for the frequent glimpses of archaeology along the chapters. This could be a very naive observation, and maybe there are countless critical studies that I don’t know of, dealing with the role of archaeology in the Sprawl trilogy and Gibson’s work in general. Perhaps it’s touching for me because I deal with Late Antiquity, that is the closest thing to a dystopian future that ever happened in the ancient world, at least as we see it with abundance of useless objects and places from the past centuries of grandeur. Living among ruins of once beautiful buildings, living at the edge of society in abandoned places, reusing what was discarded in piles, black markets, spirituality: it’s all so late antique. Of course the plot of the Sprawl trilogy is a contemporary canon, and the characters are post-contemporary projections of a (very correctly) imagined future, but the setting is, to me, evoking of a world narrative that I could embrace easily if I had to write fiction about the periods I study.
Count Zero is filled with archaeology, especially the Marly chapters. Towards the end it gets more explicit, but it's there in almost all the chapters, and it has something to do with the abundance of adjectives, the care for details in little objects. Mona Lisa Overdrive is totally transparent about it, from the first pages of Angie Mitchell on the beach:
The house crouched, like its neighbors, on fragments of ruined foundations, and her walks along the beach sometimes involved attempts at archaeological fantasy. She tried to imagine a past for the place, other houses, other voices.
– William Gibson. Mona Lisa Overdrive, p. 35.
But really, you just have to follow Molly through the maze of the Straylight Villa in Neuromancer to realize it's a powerful theme throughout the Sprawl trilogy.
The Japanese concept of gomi, which pervades Kumiko's view of Britain and Rubin's art in the Winter Market, is another powerful tool for material culture studies, at least if we have to find a pop dimension where our studies survive beyond the inevitable end of academia.
The 2nd gvSIG Perú Conference will take place on September 25th and 26th, 2015, at the Auditorium of the Municipality of Huancayo, under the motto "Science, technology and development".
For the second consecutive year, the conference will bring together the gvSIG Perú community and everyone interested in open source geomatics. This year there will also be representation from the gvSIG Association, with talks and workshops by both Joaquín del Cerro, Head of Architecture and Development at gvSIG, and Alvaro Anguix, General Director.
The call for papers is now open: proposals can be sent to email@example.com, and will be evaluated by the Scientific Committee for inclusion in the conference program. All the information about the guidelines for submitting papers is available in the Communications section of the website. The deadline for abstract submission is August 14th.
Registration for the conference is also open. Registration is free (capacity is limited) and must be done through the form on the website.
Does it matter, and who cares?
Multi-ring buffers can be useful for simple distance calculations as seen in:
X Percent of the Population of Scotland Lives Within Y Miles of Glasgow
X Percent of the Population of Scotland Lives Within Y Miles of Edinburgh
For these I simply created multiple buffers using the QGIS buffer tool. This works for small samples, but was quite frustrating. I had initially hoped to do the whole analysis in SQLite, which worked pretty well at first, but struggled with the larger buffers. The queries took too long to run, and it did not allow for visualisation. I think using PostGIS would be pretty feasible, however.
But creating a multi-ring buffer plugin for QGIS also seemed like a good learning experience. Which got me thinking: does it matter whether you create increasingly large buffers around the original feature, or buffer the resulting buffer sequentially? My hypothesis was that there would be pretty significant differences due to the rounding of corners.
I asked on StackExchange but the conversation did not really take off:
My question is not about the overlapping-ness of the buffers, since I think multi-ring buffers should be "doughnuts" anyway, but rather whether smoothing will occur. The only answer was to try it myself.
Buffer the resulting buffer sequentially: Sequential
Buffer the original feature with increasing buffer distance: Central
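The two strategies can be compared with a minimal sketch. This uses Shapely rather than the plugin's PyQGIS code, the ring count, step distance and segment resolution are arbitrary choices for illustration, and it produces full disks rather than doughnuts (the plugin would subtract each inner ring with a difference operation):

```python
# Sketch comparing "central" vs "sequential" multi-ring buffering.
# Requires Shapely; parameters below are illustrative, not the plugin's.
from shapely.geometry import Point

STEP = 50.0   # buffer distance per ring
RINGS = 10    # number of rings
SEGS = 8      # segments per quarter circle

feature = Point(0, 0)

# Central: buffer the original feature with an increasing distance.
central = [feature.buffer(STEP * i, SEGS) for i in range(1, RINGS + 1)]

# Sequential: buffer the result of the previous ring.
sequential, geom = [], feature
for _ in range(RINGS):
    geom = geom.buffer(STEP, SEGS)
    sequential.append(geom)

# Compare vertex counts and areas of the outermost ring.
print(len(central[-1].exterior.coords), central[-1].area)
print(len(sequential[-1].exterior.coords), sequential[-1].area)
```

Printing the vertex counts and areas of the outermost ring makes the difference between the two approaches directly measurable rather than a visual judgement.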
No matter how you do it the sequential style is quicker, but that may be down to my code.
Interestingly, though understandably, the sequential style results in a lot more vertices in the outer rings. For comparison, for a 500-ring buffer the outermost ring had the following vertex counts:
We can also see a smoother profile in the sequential buffer. However, the difference is not major, and is hard to discern with the naked eye.
So we have at most around a 10 m discrepancy with 500 rings of 50 m each, i.e. around 25,000 m of distance from the original feature.
This impacts rendering time dramatically; an example with our 500 rings:
So the sequential approach is quicker to create, but slower to draw. Which one is better: quicker calculation or quicker rendering? Or should we not do 200+ ring buffers at all?
Hard to say. In version 0.2 of the Multi Ring Buffer Plugin there is an option for either in the advanced tab.
Please report any issues through GitHub: https://github.com/HeikkiVesanto/QGIS_Multi_Ring_Buffer/issues
Originally posted on gvSIG Batovi:
On Tuesday, July 14th, a workshop on gvSIG Batoví was held at secondary school No. 54 (Agraciada 3636, Montevideo). It was aimed at secondary school teachers (mostly of Geography, but also of other disciplines) and was attended by 17 teachers from 7 schools in the area (Nos. 6, 16, 18, 56, 71 and 75, in addition to school No. 54 itself). The outcome was very positive.
The workshop was designed for participants encountering a desktop GIS for the first time, but could also be useful to those already familiar with the technology. The activity consisted of a short introduction to the gvSIG Batoví project, a brief description of the workshop's objectives, showing how to download and install the program, how to download and…
I've been serving as co-editor of the Journal of Open Archaeology Data (JOAD) for more than a year now, since I joined Victoria Yorke-Edwards in the role. It has been my first time in an editorial role for a journal. I am learning a lot, and the first thing I learned is that being a journal editor is hard and takes time, effort, self-esteem. I've been thinking about writing down a few thoughts for months now, and today's post by Melissa Terras about "un-scholarly peer review practices […] and predatory open access publishing mechanisms" was an unavoidable inspiration (go and read her post).
Some things are peculiar to JOAD, such as the need to ensure data quality at a technical level: often, though, improvements on the technical side will reflect substantially on the general quality of the data paper. These are things that may seem easily understood, like using CSV for tabular data instead of PDF, or describing the physical units of each column/variable. Often, archaeology datasets related to PhD research are not forged in highly standardised database systems, so there may be small inconsistencies in how the same record is referenced in various tables. In my experience so far, reviewers will look at data quality even more than at the paper itself, which is a good sign of assessing the "fitness for reuse" of a dataset.
The data paper: you have to try authoring one before you get a good understanding of how a good data paper is written and structured. Authors seem to prefer terse and minimal descriptions of the methods used to create their dataset, taking many passages for granted. The JOAD data paper template is a good guide to structuring a data paper and to the minimum metadata that is required, but we have seen authors relying almost exclusively on the default sub-headings. I often point reviewers and authors to some published JOAD papers that I find particularly good, but the advice isn't always heeded. It's true, the data paper is a rather new and still unstable concept of the digital publishing era: Internet Archaeology has been publishing some beautiful data papers, and I like to think there is mutual inspiration in this regard. Data papers should be a temporary step towards open archaeology data as the default, and continuous open peer review as the norm for improving the global quality of our knowledge, wiki-like. However, data papers without open data are pointless: choose a good license for your data and stick with that.
Peer review is the most crucial and exhausting activity: as editors, we have to give a first evaluation of the paper based on the journal scope and then proceed to find at least two reviewers. This requires having a broad knowledge of ongoing research in archaeology and related disciplines, including very specific sub-fields of study ‒ our list of available reviewers is quite long now, but there's always some unknown territory to explore, so asking other colleagues for help and suggestions is vital. Still, there is a sense of inadequacy, a variation on the theme of impostor syndrome, when you have a hard time finding a good reviewer: someone who will provide the authors with positive and constructive criticism, becoming truly part of the editorial process. I am sorry that our current publication system doesn't allow for the inclusion of both the reviewers' names and their commentary ‒ that's the best way to provide readers with an immediate overview of the potential of what they are about to read, and a very effective rewarding system for reviewers themselves (I keep a list of all peer reviews I'm doing, but that doesn't seem as satisfying). Peer review at JOAD is not double blind, and I think it would often be ineffective and useless to anonymise a dataset and a paper, in a discipline so territorial that everyone knows who is working where. It is incredibly difficult to get reviews in a timely manner, and while some of our reviewers are perfect machines, others keep us (editors and authors) waiting for weeks after the agreed deadline is over. I understand this, of course, being too often on the other side of the fence. I'm always a little hesitant to send e-mail reminders in such cases, partly because I don't like receiving them, but being an annoyance is kind of necessary in this case.
The reviews are generally remarkable in their quality (at least compared to previous editorial experience I had), quite long and honest: if something isn't quite right, it has to be pointed out very clearly. As an editor, I have to read the paper, look at the dataset, find reviewers, wait for reviews, solicit reviews, read reviews and sometimes have a conversation with reviewers, to check that their comments are clear and their phrasing/language is acceptable (an adversarial, harsh review must never be accepted, even when formally correct). All this is very time consuming, and since the journal (co)editor is an unpaid role at JOAD and other overlay journals at Ubiquity Press (perhaps obvious, perhaps not!), usually this means procrastinating: summing the impostor syndrome dose from criticising the review provided by a more experienced colleague with the impostor syndrome dose from being always late on editorial deadlines yields frustration. Lots. Of. Frustration. When you see me tweet about a new data paper published at JOAD, it's not an act of deluded self-promotion, but rather a liberatory moment of achievement. All this may sound naive to experienced practitioners of peer review, especially to those in academic careers. I know, and I still would like to see a more transparent discussion of how peer review should work (not on StackExchange, preferably).
JOAD is Open Access. It's true Open Access: the divide that matters is not between gold and green (a dead debate, it seems) but between two radically different outputs. JOAD is openly licensed under the Creative Commons Attribution license, and we require that all datasets are released under open licenses, so readers know that they can download, reuse and incorporate published data in their new research. There is no "freely available only in PDF": each article is primarily presented as native HTML and can be obtained in other formats (including PDF and EPUB). We could do better, sure ‒ for example, provide the ability to interact directly with the dataset instead of just providing a link to the repository ‒ but I think we will be giving more freedom to authors in the future. Publication costs are covered by Article Processing Charges of £100, to be paid by the authors' institutions: in case this is not possible, the fee will be waived. Ubiquity Press is involved in some of the most important current Open Access initiatives, such as the Open Library of Humanities, and most importantly does a wide range of good things to ensure research integrity from article submission to … many years in the future.
You may have received an e-mail from me with an invite to contribute to JOAD, either by submitting an article or giving your availability as a reviewer ‒ or you may receive it in the next few weeks. Here, you had a chance to learn what goes on behind the scenes at JOAD.
"TEMA" "Terremotos kml"
END # METADATA
COLOR 0 0 0
END # STYLE
END # CLASS
END # LAYER
END # MAP
END # METADATA
COLOR 200 50 0
END # STYLE
END # CLASS
END # LAYER
END # MAP
In my previous blog post, published on June 23rd, I walked through the steps necessary to go from project and data to a completed web app using Boundless' new Web App Builder. I encourage you to take a minute to review that initial post, as it provides some important context. The sample flood data application used a sampling of the most commonly anticipated controls and options, but there's a lot of functionality I didn't explore. In this post we will explore more of it, including the following features that someone desiring greater control might want to leverage:
-> using different themes,
-> augmenting the HTML of an information popup,
-> using the bookmarks map control.
I think it's worth noting that the extensibility of QGIS ‒ for more on this, my colleague Anthony's recent post is a great place to start ‒ means these are only the start; there is lots more room for additional functionality to be added.
To refresh your memory… When we start the Web App Builder, the first tab presented to us is the Description tab. Here we enter an application name, add a logo image, and choose a theme. The Web App Builder currently includes three themes: Basic, Fullscreen, and Tabbed.
Below is the same application created using the three different themes.
As you can see from the images, the primary difference between the themes is the location of the controls. In the Basic theme the tools are placed in the upper right, and clicking a tool brings up a new panel. In the Tabbed version each tool is the title of a tab, and the information (about, chart, etc.) is contained inside its respective tab. To maximize real estate for the map, the Fullscreen version places the tools in a pull-down at the top. The three themes provide flexibility in the look and feel of your application; deciding on the best fit is up to you.
Just as important as finding the right theme for your application is formatting the attribute data for features. The configuration dialogue for info popups is accessible from the QGIS Layers tab and is specific to each layer. The Info popup for the layers uses HTML, which we can use to format the attributes and other popup content.
Let's start by looking at how we add feature attributes to the popup. While the editor dialogue is initially empty, clicking 'Add all attributes' in the Info Popup Editor dialogue will add every attribute to the popup, along with the field name, a colon, and a line break.
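The generated template looks roughly like the sketch below. The `PARCELID` field comes from the sample flood data; the other field names are placeholders invented for illustration:

```html
<!-- Sketch of what 'Add all attributes' generates: one line per field,
     with bracketed placeholders substituted per feature at runtime.
     OWNER and ZONE are hypothetical field names. -->
<b>PARCELID</b>: [PARCELID]<br>
<b>OWNER</b>: [OWNER]<br>
<b>ZONE</b>: [ZONE]<br>
```

Each bracketed token is replaced with the clicked feature's attribute value, so any surrounding HTML you add is applied per feature.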
Resulting Info Popup
However, the HTML tags can be used for more than just formatting. They can also be used to link to other documents or reports based on information in the attributes. In our sample Flood Data Viewer application I've added a reports sub-folder containing a PDF for each of the parcels, using the Parcel ID as the file name. Using the HTML link tag we can create a link to the PDF report from the parcel ID:
<b>Parcel ID</b>: [PARCELID]<br>
<a href="./reports/[PARCELID].pdf"> Parcel Report </a>
Located under the Controls tab, the Bookmarks control allows us to import bookmarks or a bookmarks layer from QGIS, and with a little configuration it can turn our map into a story panel.
Right-clicking on the control opens the configuration dialogue.
There are two tabs for configuring the bookmarks: the first is used to specify which bookmarks to use, and the second to define how the bookmarks are presented.
The first step is adding bookmarks; they can be added from QGIS or from a separate layer. In our case the bookmarks are saved with the QGIS project. Once added, we can change the order of the bookmarks by dragging and dropping them in the list, and we can add a description to each bookmark.
Selecting "Show as story panel" on the second tab of the bookmarks configuration dialogue changes the bookmark display from a drop-down to an inset, where the intro title and description are used in the overview inset. Clicking the arrow in the inset advances the map to the next bookmark, while checking "Move automatically with each X seconds" will cycle through the bookmarks automatically.
The screenshots above illustrate the story panel concept, which is useful for making presentations or guiding users through key points of interest. However, for our limited text the story panel is much too large. Let's take a quick look at how we can adjust it: on the Description tab, click Configure theme. Here we see the settings that can easily be changed for the components of the web app:
We want to change the pixel values for the height and width inside the “.story-panel”:
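A minimal sketch of the kind of override this involves; the pixel values here are placeholders chosen for illustration, not the theme's shipped defaults:

```css
/* Hypothetical story panel sizing: shrink the inset to fit short
   bookmark descriptions. Adjust to taste. */
.story-panel {
  width: 300px;
  height: 120px;
}
```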
Here is the story panel after the change.
As we have seen in this post, the Web App Builder's configuration options provide a lot of flexibility for your application. If you need to make changes beyond those configurations, many of the theme components and controls can be customized using CSS or HTML, which again means web developers can support the application without being GIS experts.
Currently the Web App builder is available by contacting Boundless at firstname.lastname@example.org. In the near future the Web App Builder will be available to install from the Boundless plugin server, simplifying the install and update process.
The post Building an OpenLayers 3 Web App Without Writing Code – Part II appeared first on Boundless.
The problem with satire and much political commentary is that politicians, and leaders particularly, are treated like human beings, with their own spoken language and experience and ideas, whereas a less naive view would acknowledge that they are more a condensation of economic and lobbying agendas, brought forward on a mid-term scale with fixed objectives. At least that seems to explain the trajectory of successful leaders and successful parties, like Berlusconi and lately Renzi. One may simplistically call them puppets of large groups of less visible people who are not directly involved in politics, from rich entrepreneurs to CFOs of the financial sector and high-ranking civil servants. It's certainly more nuanced and way deeper than that, though.
The "Europa Challenge" award has an international character: besides the Spanish representatives from the gvSIG Association, there were projects from the United States, Italy, the United Kingdom, Hungary and India.
These awards are given to projects developing software applications that use NASA World Wind, an open source virtual globe similar to Google Earth, and that follow the standards for sharing and accessing geographic information defined by the INSPIRE European Directive.
The award was finally presented by Patrick Hogan, the NASA World Wind project manager. The meeting also served to establish the basis of a collaboration agreement between NASA and the gvSIG Association.
The project presented by the gvSIG Association allows the integration of NASA's virtual globe into gvSIG, an open source geographic information system that, after its 10th anniversary, has become an international reference technology for analysing information from a territorial point of view. Through gvSIG, thousands of users around the world manage their spatial information without any license restrictions.
This award is a recognition of what is probably the most successful open source project born in the European Union. It is a recognition of the gvSIG Association and the work of its community, which has successfully started up a development model based on collaboration, solidarity and shared knowledge.
Thank you very much to everyone who makes gvSIG bigger day by day!
Here you have the presentation and videos: