Welcome to Planet OSGeo

November 27, 2014

Markus Neteler

Landsat 8 captures Trentino in November 2014

The beautiful days in early November 2014 allowed us to get some nice views of Trentino (Northern Italy), thanks to Landsat 8 and NASA's open data policy:

Landsat 8: Northern Italy 1 Nov 2014

Trento captured by Landsat 8

Landsat 8: San Michele – 1 Nov 2014

Both the beauty of the landscape and the human impact (on the land, and in the condensation trails of airplanes) are clearly visible.

All data were processed in GRASS GIS 7 and pansharpened with i.fusion.hpf written by Nikos Alexandris.

The post Landsat 8 captures Trentino in November 2014 appeared first on GFOSS Blog | GRASS GIS Courses.

by neteler at November 27, 2014 05:21 PM

GIS for Thought

Clipping Datasets to the Dateline in ogr2ogr

If you want to visualise a global flight great circle dataset in QGIS, it needs to be clipped to the dateline.

This is extremely easy to do using ogr2ogr.

The command is simply:

ogr2ogr -wrapdateline output_file.shp input_file.shp

With the results.

Dateline wrapping
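Under the hood, -wrapdateline splits any segment that crosses the antimeridian so that no geometry spans more than half the globe. A minimal sketch of the idea in Python (a simplified stand-in for GDAL's internal logic, with linear latitude interpolation for brevity):

```python
def split_at_dateline(lon1, lat1, lon2, lat2):
    """Split a lon/lat segment that crosses the antimeridian.

    Returns a list of one or two segments whose longitudes stay
    within [-180, 180]. Latitude at the crossing is interpolated
    linearly for simplicity (a real implementation would
    interpolate along the great circle).
    """
    if abs(lon2 - lon1) <= 180:
        return [((lon1, lat1), (lon2, lat2))]  # no crossing
    # Shift the far endpoint into the same "world copy" as the first
    if lon1 > 0:
        lon2_shifted = lon2 + 360
        edge = 180.0
    else:
        lon2_shifted = lon2 - 360
        edge = -180.0
    # Fraction along the segment where it hits the dateline
    t = (edge - lon1) / (lon2_shifted - lon1)
    lat_cross = lat1 + t * (lat2 - lat1)
    return [((lon1, lat1), (edge, lat_cross)),
            ((-edge, lat_cross), (lon2, lat2))]
```

A segment from 170°E to 170°W, for example, comes back as two pieces meeting at ±180°, which is exactly what lets QGIS draw it without a line streaking across the whole map.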

by Heikki Vesanto at November 27, 2014 12:00 PM

gvSIG Team

gvSIG 2.1: Geology symbol library

In gvSIG 2.1 users will find a set of new symbol libraries that further extend the already broad catalogue of existing symbols.

One of these new symbol libraries is the one for Geology.

This library mainly comprises two sets of symbols. On the one hand, vector symbols adapted to gvSIG have been generated based on the FGDC (Federal Geographic Data Committee) manual "Digital Cartographic Standard for Geologic Map Symbolization": hundreds of symbols organised in a set of folders and subfolders (bedding, cleavage, eolian, fluvial/alluvial, foliation, geohydrology…).

On the other hand, a set of fill symbols has been designed with the RGB colour codes defined by the CGMW (Commission for the Geological Map of the World), representing the different stratigraphic units.

It is installed in the usual way, through the add-ons manager, as shown in the following video.


Filed under: gvSIG Desktop, spanish Tagged: geology, gvSIG 2.1, symbols

by Alvaro at November 27, 2014 06:53 AM

November 26, 2014

GIS for Thought

Great Circle Flight Lines in PostGIS

There is an excellent post by Anita Graser about creating Great Circles in PostGIS.

However, as of PostGIS version 2.1 this can be done in a different (better) way, using the geography functions.

PostGIS Great Circles

For more information about geography, see:
Introduction to PostGIS – Geography

This allows us to create the great circles without having to add in a new projection.

So we first need to create our three tables in PostGIS:

CREATE TABLE airlines
(Airline_ID integer,Name varchar,Alias varchar,IATA varchar,ICAO varchar,Callsign varchar,Country varchar,Active varchar, uid Serial);

CREATE TABLE routes
(Airline varchar,Airline_ID integer,Source_airport varchar,Source_airport_ID integer,Destination_airport varchar,Destination_airport_ID integer,Codeshare varchar,Stops varchar,Equipment varchar, uid Serial);

CREATE TABLE airports
(Airport_ID integer,Name varchar,City varchar,Country varchar,IATA varchar,ICAO varchar,Latitude double precision,Longitude double precision,Altitude double precision,Timezone double precision, dst varchar, tz varchar, uid Serial);

The data itself can be found at: openflights.org/data.html

We can then load our data through PgAdmin III: just right click on a table and select import. Remember not to load the "uid" column: it is our primary key, which will be populated automatically, and it is not in the original data. You will also want to define it as the primary key.

Now we need a geometry column in the airports dataset.

ALTER TABLE airports ADD COLUMN geom geometry(POINT,4326);

We can define our geometry in the airports dataset from the Latitude and Longitude columns.

UPDATE airports SET geom = ST_SetSRID(ST_MakePoint(longitude,latitude),4326);

And create a spatial index.

CREATE INDEX idx_airports_geom ON airports USING GIST(geom);

Then we can create a flights table.

CREATE TABLE flights AS
SELECT
  air1.geom AS source_geom, 
  air2.geom AS destination_geom, 
  airlines.name, 
  routes.equipment, 
  routes.destination_airport_id, 
  routes.source_airport_id, 
  routes.destination_airport, 
  routes.source_airport
FROM 
  public.routes, 
  public.airlines, 
  public.airports air1, 
  public.airports air2 
WHERE 
  routes.airline_id = airlines.airline_id AND
  routes.source_airport_id = air1.airport_id AND
  routes.destination_airport_id = air2.airport_id;

This table will have a source geometry and a destination geometry along with a few other attributes. I added a primary key to this table as well.

To filter out a specific airport, for example Honolulu, we use the "Airport ID".

CREATE TABLE honolulu_flights AS
SELECT * FROM flights
WHERE destination_airport_id = 3728 OR source_airport_id = 3728;

Then we add in the actual line geometry column.

ALTER TABLE honolulu_flights ADD COLUMN line_geom geometry(LineString,4326);

And populate the great circle geometry:

UPDATE honolulu_flights
SET line_geom =  
  (ST_Segmentize(
  (ST_MakeLine(source_geom, destination_geom)::geography)
  ,100000)::geometry)
;
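What ST_Segmentize does on a geography is densify the line with vertices that follow the great circle. The same idea can be sketched in plain Python using spherical linear interpolation, assuming a spherical Earth (a simplified stand-in for the PostGIS internals, not the actual implementation):

```python
import math

def great_circle_points(lon1, lat1, lon2, lat2, n=10):
    """Return n+1 lon/lat points along the great circle between
    two positions, via spherical linear interpolation (slerp)."""
    def to_xyz(lon, lat):
        # Convert lon/lat degrees to a 3D unit vector
        lon, lat = math.radians(lon), math.radians(lat)
        return (math.cos(lat) * math.cos(lon),
                math.cos(lat) * math.sin(lon),
                math.sin(lat))

    ax, ay, az = to_xyz(lon1, lat1)
    bx, by, bz = to_xyz(lon2, lat2)
    # Central angle between the endpoints
    omega = math.acos(max(-1.0, min(1.0, ax * bx + ay * by + az * bz)))
    points = []
    for i in range(n + 1):
        t = i / n
        if omega == 0:
            x, y, z = ax, ay, az  # coincident endpoints
        else:
            f = math.sin((1 - t) * omega) / math.sin(omega)
            g = math.sin(t * omega) / math.sin(omega)
            x = f * ax + g * bx
            y = f * ay + g * by
            z = f * az + g * bz
        points.append((math.degrees(math.atan2(y, x)),
                       math.degrees(math.asin(max(-1.0, min(1.0, z))))))
    return points
```

The 100000 in the SQL above is the maximum segment length in metres; here n plays the equivalent role of controlling how densely the arc is sampled.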

This works well to an extent, but QGIS still has some trouble with lines that cross the dateline.

Screenshot[32]

We can fix this using a Pacific centered projection like EPSG:3832.

Screenshot[33]

We can either set QGIS to this projection, or we could have set our geometry to it initially when creating the flight lines:

ALTER TABLE honolulu_flights ADD COLUMN line_geom geometry(LineString,3832);

UPDATE honolulu_flights
SET line_geom =  
  ST_Transform((ST_Segmentize(
  (ST_MakeLine(source_geom, destination_geom)::geography)
  ,100000)::geometry), 3832)
;

Thanks to:
The World Is A Village – PostGIS: using latitude and longitude to create geometry
http://gis.stackexchange.com/questions/84443/what-is-this-postgis-query-doing-to-show-great-circle-connections

by Heikki Vesanto at November 26, 2014 12:00 PM

gvSIG Team

gvSIG 2.1: Attribute editor in the View

A new feature introduced in the latest gvSIG 2.1 builds, thanks to a contribution from the Brazilian company GAUSS geotecnologia e engenharia, is a simple but useful tool: an attribute editor that allows editing the attributes of any feature of a layer without having to go to its table.
It works much like the "Information" button, but in this case it allows us to edit any of the attributes of the selected feature, considerably speeding up editing tasks for gvSIG users.

Let's see how this tool works in a video:

 


Filed under: gvSIG Desktop, opinion, spanish Tagged: editor, gvSIG 2.1

by Alvaro at November 26, 2014 09:53 AM

November 25, 2014

GIS for Thought

Helsinki Airport the Gateway to the East

The shortest route between two points on the earth is a great circle. This is a straight line on a globe, but it ultimately looks like a half-circle when projected.

Due to Helsinki Airport's location, it is an ideal hub for East Asian travel: starting at practically any European city and travelling to Far East Asia via Helsinki, the final route will be almost a straight line.

Helsinki Flights

by Heikki Vesanto at November 25, 2014 12:00 PM

November 24, 2014

Slashgeo (FOSS Articles)

Batch Geonews: OL3-Cesium Library, Embed Street Views, OGC Web Coverage Tile Service, and much more

Here’s the recent geonews in batch mode.

On the open source / open data front:

On the Google front:

In the everything else category:

In the maps category:

The post Batch Geonews: OL3-Cesium Library, Embed Street Views, OGC Web Coverage Tile Service, and much more appeared first on Slashgeo.org.

by Alex at November 24, 2014 03:05 PM

GIS for Thought

Population of Scotland Mapped

One random point on the map for each person within a postcode in Scotland.

Workflow:
1. OS Code-Point Open points.
2. Voronoi polygons from the postcodes.
3. Join 2011 Scottish Census postcode population counts to the Voronoi polygons.
4. Clip the resulting polygons to the Scottish coastline (using PostGIS to save time).
5. Intersect the lakes out of the resulting polygons.
6. Random point in polygon into the postcode Voronoi polygons (minus lakes), using the census counts.

Output:

Population of Scotland Mapped

An easier approach would have been to use the NRS supplied postcode areas for Scotland mentioned in previous posts. A better display of this data would be through a web mapping environment, which works in my home environment but lacks hosting.
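The "random point in polygon" step of the workflow can be sketched with rejection sampling over the polygon's bounding box, using a ray-casting containment test (hypothetical helpers, not the exact tools used for the map):

```python
import random

def point_in_polygon(x, y, poly):
    """Ray-casting test; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Edge straddles the horizontal ray through (x, y)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def random_points_in_polygon(poly, count, seed=None):
    """Generate `count` random points inside the polygon by
    sampling its bounding box and rejecting misses."""
    rng = random.Random(seed)
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    points = []
    while len(points) < count:
        x = rng.uniform(min(xs), max(xs))
        y = rng.uniform(min(ys), max(ys))
        if point_in_polygon(x, y, poly):
            points.append((x, y))
    return points
```

One point per person then just means calling this with the census population count of each postcode Voronoi polygon.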

by Heikki Vesanto at November 24, 2014 12:00 PM

Faunalia

A new QGIS tool (based on ogr2ogr) to import vectors in PostGIS, the fast way

In QGIS there are many tools that can be used to import vectors into a PostGIS database, each one with pros and cons: SPIT core plugin: available since long ago but now seemingly unmaintained, and therefore likely to be removed in a future QGIS release. It has the advantage of allowing […]

by faunaliagis at November 24, 2014 09:56 AM

November 23, 2014

Stefano Costa

Yet another failure for cultural heritage data in Italy

This short informative piece is written in English because I think it will be useful for anyone working on cultural heritage data, not just in Italy.

A few days ago the Istituto Centrale per il Catalogo e la Documentazione published an internal document for all offices in the Ministry of Culture (actual name is longer, but you got it), announcing imminent changes and the beginning of a process for publishing all records about cultural heritage items (I have no idea on the exact size but we’re in the millions of records). In short, all records will be publicly available, and there will be at least one image for each record ‒ you’ll get anything from small pieces of prehistoric flint to renaissance masterpieces, and more. That’s a huge step and we can only be happy to see this, the result of decades of cataloguing, years of digital archiving and … some lobbying and campaigning too. Do you remember Beni Culturali Aperti? The response from the ICCD had been lukewarm at best, basically arguing that the new strong requirements for open government data from article 68 of the Codice dell’Amministrazione Digitale did not apply at all to cultural heritage data. So nobody was optimistic about the developments to follow.

And unfortunately pessimism was justified. Here’s an excerpt from the document published last week:

Nota prot. n. 2975 del 17/11/2014 dell'Istituto Centrale per il Catalogo e la Documentazione

The relevant sentence:

Le schede di catalogo verranno rese disponibili con la licenza Creative Commons CC BY-NC-SA

that would be

Catalog records will be made available under the Creative Commons CC BY-NC-SA license

And that was the (small) failure. CC BY-NC-SA is not an open license. It makes commercial (= paid!) work with such data impossible or very difficult, at a time when the cultural heritage private sector could readily benefit from full access to this massive dataset, with zero losses for the gatekeepers. At a time when open licenses are becoming more and more widespread, and non-open licenses like BY-NC-SA are used less and less because they're incompatible with anything else and inhibit reuse, someone decided this was the right choice, against all international, European and national recommendations and regulations. We can only hope that a better choice will be made in the near future, but the track record isn't very encouraging, to be honest.

by Stefano Costa at November 23, 2014 06:45 PM

GIS for Thought

Scotland Azimuth Orthographic Projection

Thanks to the excellent tutorial by Hamish Campbell at: http://polemic.nz/2014/11/21/nz-azimuth-orthographic/

Quick Scotland centric view of the world.

QGIS Azimuth Orthographic Projections

by Heikki Vesanto at November 23, 2014 12:00 PM

November 22, 2014

Tyler Mitchell

Supertunnels with SSH – multi-hop proxies

I never know what to call this process, so I'm inventing the term supertunnels via SSH for now. A lot of my work these days involves using clusters built on the Amazon EC2 (http://aws.amazon.com/ec2/) cloud environment. There, I have some servers that are externally accessible, i.e. web servers. Then there are support servers that are only accessible "internally" to those web servers and not accessible from the outward-facing public side of the network, i.e. Hadoop clusters (http://hadoop.apache.org/), databases, etc.

To help log into the “internal” machines, I have pretty much one choice – using SSH <em>through the public machine first</em>. No problem here, any server admin knows how to use SSH – I’ve been using it forever. However, I didn’t really use some of the more advanced features that are very helpful. Here are two…

Remote command chaining

Most of my SSH usage is for running long sessions on a remote machine. But you can also pass a command as an argument and the results come directly back to your current terminal:

$ ssh user@host "ls /usr/lib"

Take this example one step further and you can actually inject another SSH command that gets into the "internal" side of the network.

This is starting to really sound like tunneling, though it’s somewhat manual and doesn’t redirect traffic from your client side, we’ll get to that later.

As an aside, in EC2-land you often use certificate files during SSH login, so you don’t need to have an interactive password exchange. You specify the certificate with another argument. If that’s how you run your servers (or with authorized_keys files) then you can push in multiple levels of additional SSH commands easily.

For example, here I log into ext-host1, then from there log into int-host2 and run a command:

$ ssh -i ~/mycert.pem user@ext-host1 "ssh -i ~/mycert.pem user@int-host2 'ls /usr/lib'"

That is a bit of a long line for just getting a file listing, but it’s easy to understand and gets the job done quickly. It also works great in shell scripts, in fact you could wrap it up with a simple script to make it shorter.

Proxy config

Another way to make your command shorter and simpler is to add some proxy rules to the ~/.ssh/config file. I didn’t even know this file existed, so was thrilled to find out how it can be used.

To talk about this, let’s use the external and internal hosts as examples. And let’s assume that the internal host is 10.0.1.1. Obviously these don’t need to be specifically public or private SSH endpoints, but it serves its purpose for this discussion.

If we are typically accessing int-host2 via ext-host1, then we can set up a Proxy rule in the config file:

Host 10.0.*.*
ProxyCommand ssh -i ~/mycert.pem user@ext-host1 -W %h:%p

This rule watches for any requests on the 10.0.*.* network and automatically pushes the requests through ext-host1 as specified above. Furthermore, the -W option tells it to stream all output back to the same terminal you are using. (Minor point, but if you miss it you may go crazy trying to find out where your responses go.)

Now I can do a simple login request on the internal host and not even have to think about how to get there.

ssh -i ~/mycert.pem user@int-host2

I think that’s a really beautiful thing – hope it helps!

Another time I’ll have to write more about port forwarding…

by Tyler Mitchell at November 22, 2014 12:27 PM

GIS for Thought

Scotland Gender Split

Based on 2011 Census data, we can see that a clear majority of the population is female.

The raw numbers are:

Population total: 5295403
Male total: 2567444 (48.48%)
Female total: 2727959 (51.52%)

Top 5 male by %:
Shetland Islands – 50.77
Aberdeenshire – 49.52
Orkney Islands – 49.49
Aberdeen City – 49.42
Na h-Eileanan an Iar (Western Isles) – 49.37

Top 5 female by %:
West Dunbartonshire – 52.40
North Ayrshire – 52.37
South Ayrshire – 52.36
East Renfrewshire – 52.34
Inverclyde – 52.14

And the split by local authority:

Scotland Gender Split

by Heikki Vesanto at November 22, 2014 12:00 PM

Tyler Mitchell

Converting Decimal Degree Coordinates

Converting Decimal Degree Coordinates to/from DMS Degrees Minutes Seconds

cs2cs command from GDAL/OGR toolset (gdal.org) – allows robust coordinate transformations.

If you have files or apps that have to filter or convert coordinates, then the cs2cs command is for you. It comes with most distributions of the GDAL/OGR (gdal.org) toolset. Here is one popular example for converting between degrees, minutes and seconds (DMS) and decimal degrees (DD).


The following is an excerpt from the book Geospatial Power Tools – Open Source GDAL/OGR Command Line Tools by me, Tyler Mitchell. The book is a comprehensive manual as well as a guide to typical data processing workflows, such as the following short sample…


Input coordinates can come from the command line or an external file. A file containing DMS (degree, minute, second) style coordinates looks like:

124d10'20"W 52d14'22"N
122d20'05"W 54d12'00"N

Use the cs2cs command, specifying the print format to return with the -f option. In this case -f "%.6f"
explicitly requests a decimal degree number with 6 decimals:

cs2cs -f "%.6f" +proj=latlong +datum=WGS84 input.txt

Example Converting DMS to/from DD

This will return the results; notice that no 3D/Z value was provided, so none is returned:

-124.172222 52.239444 0.000000
-122.334722 54.200000 0.000000

To do the inverse, remove the formatting option and provide a list of values in decimal degree (DD):

cs2cs +proj=latlong +datum=WGS84 inputdms.txt
124d10'19.999"W 52d14'21.998"N 0.000
122d20'4.999"W 54d12'N 0.000
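If you only need the DMS-to-DD arithmetic and not the full cs2cs machinery, the conversion is a few lines of Python (a hypothetical helper, not part of GDAL, assuming fully specified degree/minute/second strings like those above):

```python
import re

def dms_to_dd(dms):
    """Convert a string like 124d10'20\"W to decimal degrees."""
    m = re.match(r"(\d+)d(\d+)'([\d.]+)\"([NSEW])", dms)
    if not m:
        raise ValueError("unrecognised DMS string: %r" % dms)
    deg, minutes, seconds, hemi = m.groups()
    dd = int(deg) + int(minutes) / 60 + float(seconds) / 3600
    # West and South hemispheres are negative
    return -dd if hemi in "SW" else dd
```

For the first input line above, dms_to_dd("124d10'20\"W") gives -124.172222 (to six decimals), matching the cs2cs output.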


Geospatial Power Tools is 350+ pages long – 100 of those pages cover these kinds of workflow examples. Each copy includes a complete (edited!) set of the GDAL/OGR command line documentation as well as the following topics/examples:

Workflow Table of Contents

  1. Report Raster Information – gdalinfo
  2. Web Services – Retrieving Rasters (WMS)
  3. Report Vector Information – ogrinfo
  4. Web Services – Retrieving Vectors (WFS)
  5. Translate Rasters – gdal_translate
  6. Translate Vectors – ogr2ogr
  7. Transform Rasters – gdalwarp
  8. Create Raster Overviews – gdaladdo
  9. Create Tile Map Structure – gdal2tiles
  10. MapServer Raster Tileindex – gdaltindex
  11. MapServer Vector Tileindex – ogrtindex
  12. Virtual Raster Format – gdalbuildvrt
  13. Virtual Vector Format – ogr2vrt
  14. Raster Mosaics – gdal_merge

by Tyler Mitchell at November 22, 2014 07:30 AM

November 21, 2014

Paul Ramsey

What to do about Uber (BC)

Nick Denton has a nice little article on Kinja about Uber and how they are slowly taking over the local transportation market in the cities where they have been allowed to operate.

it's increasingly clear that the fast-growing ride-hailing service is what economists would call a natural monopoly, with commensurate profitability... It's inevitable that one ride-sharing service will dominate in each major metropolitan area. Neither passengers nor drivers want to maintain accounts with multiple services. The latest numbers on [Uber], show a business likely to bring in nearly $1bn a month by this time next year, far ahead of any competitor

BC has thus far resisted the encroachment of Uber, but that cannot last forever, and it shouldn't: users of taxis in Vancouver aren't getting great service, and that's why there's room in the market for Uber to muscle in.

Like Denton, I see Uber as a mixed bag: on the one hand, they've offered a streamlined experience which is qualitatively better than the old taxi service; on the other, in setting up an unregulated and exploitative market for drivers, they've sowed the seeds of chaos. The thing is, many of the positive aspects of Uber are easily duplicable by existing transportation providers: app-based dispatching and payment aren't rocket science by any stretch.

As an American, Denton naturally reaches for the American solution to the natural monopoly: regulated private enterprise. In the USA, monopolists (electric utilities, for example) are allowed to extract profits, but only at a regulated rate. As Canadians, we have an additional option: the Crown corporation. Many of our natural monopolies, like electricity, are run by government-owned corporations.

Since most taxis are independently owned and operated anyways, all that a Crown taxi corporation would need to do is provide a central dispatching service, with enough ease-of-use to compete with Uber and its like. The experience of users would improve: one number to call, one app to use, no payment hassles, optimized routing, maybe even ride sharing. And the Crown corporation could use supply management to prevent a race to the bottom that would impoverish drivers and reduce safety on the roads.

There's nothing magical about what Uber is doing, they are arbitraging a currently inefficient system, but the system can save itself, and all its positive aspects, by recognizing and reforming now. Bring on our next Crown corporation, "BC Dispatching".

by Paul Ramsey (noreply@blogger.com) at November 21, 2014 07:05 PM

Paulo van Breugel

Access GRASS 7 data in QGIS

QGIS supports GRASS in two different ways. 1) For those working with GRASS databases, there is the GRASS toolbox, which basically offers an alternative GUI to GRASS. 2) For those working with other data types, most GRASS functions are now available through the Processing toolbox. I do most of my spatial analysis in GRASS, while I […]

by pvanb at November 21, 2014 06:09 PM

GIS for Thought

X Percent of the Population of Scotland Lives Within Y Miles of Glasgow

I have often heard that X percent of the population of Scotland lives within Y miles of Glasgow, with X and Y varying between claimants.

This is a pretty easy question to answer, using the 2011 Scottish Census population results and the Census Output Area Population Weighted Centroids. Then we get the extents of Glasgow City Council from OS Boundary Line.

The results are:

Pop. count (% of Scotland):
Scotland: 5295403 (100%)
Glasgow: 593245 (11.2%)
25 km: 2002431 (37.8%)
50 km: 2839583 (53.6%)
50 miles: 3776701 (71.3%)
100 km: 4201860 (79.3%)
100 miles: 4483330 (84.7%)

Pretty interesting results, especially the within 50 miles query.
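The percentage column is just each count divided by the Scottish total; a quick check in Python using the figures above:

```python
scotland = 5295403  # 2011 Census total population of Scotland

counts = {
    "Glasgow": 593245,
    "25 km": 2002431,
    "50 km": 2839583,
    "50 miles": 3776701,
    "100 km": 4201860,
    "100 miles": 4483330,
}

for label, count in counts.items():
    pct = 100 * count / scotland
    print(f"{label}: {pct:.1f}%")
```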

To see how these boundaries look on a map:

Population buffers around Glasgow

A few caveats:
We are using the population weighted centroids, which will produce some minor inaccuracies, but it is a very good generalisation.
Also, we are using Euclidean buffers on the British National Grid plane, so these are not geodesic buffers; the difference will likely be small at these distances.

by Heikki Vesanto at November 21, 2014 12:00 PM

November 20, 2014

SourcePole

QGIS Cloud - Speed up the loading time of the web client

QGIS Cloud is your personal geodata infrastructure on the internet. Publish maps and data. Share geo-information with others. And all of this very easily, without servers, infrastructure or expert knowledge. If you know QGIS Desktop, then you already know QGIS Cloud. Just install the QGIS Cloud plugin from the official QGIS plugin repository and you're good to go. You can publish as many maps as you want.

But the default settings of the QGIS projects you'd like to publish via QGIS Cloud are not optimal for the performance of the QGIS Web Client / WMS. This becomes noticeable when the published project contains many layers: the default settings then lead to poor performance, and the size of the WMS GetCapabilities response is not negligible. Have a look at the first request:

QGIS Cloud slow response

The second request has a much faster response time than the first one:

QGIS Cloud fast response

What's the difference between these two requests? First of all, the slow request has to download and parse 3.1MB of XML data; the fast request only 22KB. No wonder it is much faster. What makes the difference? If you look at the first request's result, you can see that tons of coordinate reference systems (CRS) are defined for every layer. These are all the CRS supported by QGIS, and in fact most of them will never be used. The solution, then, is to reduce the number of CRS in the QGIS Cloud WMS and WFS services. To achieve that, restrict the CRS in the QGIS project settings: open the OWS Server tab, activate the CRS restrictions option and add all CRS of interest.

QGIS Cloud Webclient: slow initialisation with unrestricted CRS

QGIS Cloud Webclient: fast initialisation with restricted CRS

by hdus at November 20, 2014 04:15 PM

Boundless Blog

Happy PostGIS Day!

Yesterday was GIS Day, which means that today is PostGIS Day — get it? Post-GIS Day! To celebrate, we're giving a 50% discount on online PostGIS training through the end of the week! Visit our training catalog and use promo code "postgis2014" to take advantage of this offer.

A lot has happened since last year, when I joined Stephen Mather, Josh Berkus, and Bborie Park in a PostGIS all-star hangout with James Fee.

In case you missed them, here are some features from our blog and elsewhere that highlight what’s possible with PostGIS:

And, as always, be sure to check out our workshops for a slew of PostGIS-related courses, including Introduction to PostGIS and our Spatial Database Tips & Tricks.

Interested in learning about or deploying PostGIS? Boundless provides support, training, and maintenance for installations of PostGIS. Contact us to learn more.

The post Happy PostGIS Day! appeared first on Boundless.

by Paul Ramsey at November 20, 2014 01:54 PM

GIS for Thought

UK Postcode Polygon Accuracy Comparison Part 2

In the previous post we compared the raw polygon accuracy between Voronoi-generated polygons and NRS-generated postcode polygons: Results.

The raw results are interesting, and a visual examination provides a useful overall comparison, but how does this actually impact me?

I have a CAG from GCC and I just want to attach a postcode to it. How different will my results be between a true postcode boundary dataset from the NRS, and a generated Voronoi dataset from the OS?

I'm glad you are still with me. It might be useful to explain how postcodes actually work in this context.

Let's take the postcode G31 2XT – how does it break down?
Area: G
District: G31
Sector: G31 2
Unit: G31 2XT
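That breakdown is easy to compute from the unit postcode string; a small Python sketch (a hypothetical helper, assuming a simple letters-plus-digits outward code like G31):

```python
def postcode_levels(unit):
    """Split a unit postcode like 'G31 2XT' into its levels."""
    outward, inward = unit.split()          # 'G31', '2XT'
    area = outward.rstrip("0123456789")     # strip district digits
    return {
        "area": area,                       # 'G'
        "district": outward,                # 'G31'
        "sector": f"{outward} {inward[0]}", # 'G31 2'
        "unit": unit,                       # 'G31 2XT'
    }
```

Joining an address dataset to postcode polygons at each of these levels is what the comparison below measures.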

So then we can compare how an actual address dataset, like the Glasgow CAG, spatially joined to two postcode datasets compare:

Assuming the NRS dataset is correct (a good assumption), how accurate is a postcode based on an OS Code-Point Open generated Voronoi polygon? Based on Glasgow City Council residentially classified properties as of 16/11/2014:

Total number of properties: 245096 (100%)
Correct Area: 245096 (100%)
Correct District: 243650 (99.4%)
Correct Sector: 240956 (98.3%)
Correct Unit: 174344 (71.1%)

We can see that up to the sector level a Voronoi polygon can produce extremely accurate results. A visual comparison of how this plays out in Glasgow can be seen here, with the legend best read from the bottom:

UK Postcode Comparison

by Heikki Vesanto at November 20, 2014 12:00 PM

Andrea Antonello

Geopaparazzi 4.0.1 is out: Welcome to Slovakia!

Geopaparazzi 4.0.1 has been released to google play.

This is mostly a language update release and includes Slovak as a new supported language.

This is the current status of the localization effort:



Some languages need more love than others. Do you have that love to share?
Just jump on the geopaparazzi translation site.





We also hit the 10000 installs badge on Google Play. That is not so bad for an engineering app, right? :-)






The release notes have changed place, as has the download area for those without access to Google Play, thanks to the fact that the amazing GitHub gives a clean and simple space to add releases with binary downloads.



And that's about all. Enjoy!

by andrea antonello (noreply@blogger.com) at November 20, 2014 10:29 AM

Cameron Shorter

Request NSW Gov stop discriminating against Open Source

To the NSW Procurement Team,

During a recent NSW tendering process, we discovered the NSW Government purchasing guidelines actively discourage use of Open Source Software. These guidelines about Open Source Software are dated and need to be changed.

The guidelines:
  • Inaccurately imply Proprietary Software is less risky than Open Source [1],
  • Unfairly discriminate against Open Source Software solutions and Australian Open Source businesses [1],
  • Conflict with Australian government policy, which directly mandates that Open Source and Proprietary Software be considered equally [2], and
  • Increase the cost and reduce the value of NSW Government IT purchases by actively discouraging use of Open Source.
Could the NSW Procurement Team please review the current Open Source statement, assess the appropriateness of updating to Australian Government Policy statements related to Open Source, and reply describing how you plan to address this issue.

Reference 1:

The NSW IT procurement framework (version 3.1) specifically discourages the use of Open Source software for Major Project System Integration Services.
23 Open Source Software
23.1 The Contractor must ensure that:
(a) none of the Deliverables comprise Open Source Software; and
(b) it does not insert any Open Source Software into the Customer Environment, except to the extent otherwise approved by the Customer in writing.
23.2 Where the Customer gives its approval in relation to the use of any Open Source Software
under clause 23.1:
(a) the Contractor must ensure that the use of that Open Source Software will not result in an obligation to disclose, license or otherwise make available any part of the Customer Environment or any of the Customer's Confidential Information to any third party; and
(b) the use of that Open Source Software will not in any way diminish the Contractor’s obligations under the Contract, including without limitation in relation to any warranties, indemnities or any provisions dealing with the licensing or assignment of Intellectual Property.
https://www.procurepoint.nsw.gov.au/before-you-supply/standard-procurement-contract-templates/procure-it-framework-version-31
See: Module 13A Major project systems integration services

Reference 2:
Australian Government Policy on Open Source Software:
Principle 1: Australian Government ICT procurement processes must actively and fairly consider all types of available software.
Australian Government agencies must actively and fairly consider all types of available software (including but not limited to open source software and proprietary software) through their ICT procurement processes. It is recognised there may be areas where open source software is not yet available for consideration. Procurement decisions must be made based on value for money. Procurement decisions should take into account
whole-of-life costs, capability, security, scalability, transferability, support and manageability requirements.
For a covered procurement (over $80K), agencies are required to include in their procurement plan that open source software will be considered equally alongside proprietary software. Agencies will be required to insert a statement into any Request for Tender that they will consider open source software equally alongside proprietary software. Tender responses will be evaluated under the normal requirements of the Commonwealth Procurement Guidelines. For a non-covered procurement (below $80K), agencies are required to document all key decisions, as required by the Commonwealth Procurement Guidelines. This includes how they considered open source software suppliers when selecting suppliers to respond to the Select Tender or Request for Quotation.
Australian Government Policy on Open Source Software, http://www.finance.gov.au/policy-guides-procurement/open-source-software/

by Cameron Shorter (noreply@blogger.com) at November 20, 2014 09:40 AM

BostonGIS

PostGIS Day Game of Life celebration

For this year's PostGIS Day, I decided to celebrate with a little Conway's Game of Life fun inspired by Anita Graser's recent blog series Experiments with Game of Life. The path I chose to simulate the Game of Life is a little different from Anita's. This variant exercises PostGIS 2.1+ raster map algebra and PostgreSQL 9.1+ recursive queries. Although you can do this with PostGIS 2.0, the map algebra syntax I am using is only supported in PostGIS 2.1+. My main disappointment is that because PostGIS does not yet support direct generation of animated gifs, I had to settle for a comic strip I built by unioning frames of rasters instead of a motion picture. Hopefully some day my PostGIS animated gif dream will come true.
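The rule itself is simple enough to state in a few lines. As a plain illustration (my own sketch, not the raster map algebra from the post), here is one Game of Life generation in pure Python:

```python
# Conway's Game of Life: one generation over a set of live cells.
# A live cell survives with 2 or 3 live neighbours; a dead cell
# with exactly 3 live neighbours becomes alive.
from collections import Counter

def step(live):
    """live: set of (x, y) tuples; returns the next generation."""
    # Count how many live neighbours each candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates between a horizontal and a vertical bar.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(step(blinker)))  # vertical bar: [(1, 0), (1, 1), (1, 2)]
```

A raster map algebra variant effectively computes the same neighbour count over a 3×3 window of a binary grid.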


Continue reading "PostGIS Day Game of Life celebration"

by Regina Obe (nospam@example.com) at November 20, 2014 07:04 AM

Just van den Broecke

Open Source and The Theory on Brontosauruses by Anne Elk (Miss)

Preparing for a talk on our OSGeo.nl Day at the Dutch GeoBuzz Conference, I am trying to put in some slides on Free and Open Source Software (FOSS) for geospatial: why "FOSS is good" and why I live by it. The usual arguments on licensing, (not) price, feature comparison, and collaboration contrasted with proprietary source are, to be honest, a past and boring station to me.

Interlude: some younger readers (and non Python-programmers), may be puzzled: who the !&$# is Anne Elk? Ok watch this Monty Python video first:

anne-elk

From my humble developer's point of view, software is always built on assembled and shared human knowledge codified in programming language lines, ultimately compiled into zeroes and ones running on a machine. Sharing knowledge has always been key to human evolution. Someone invented building a fire long ago. I am always wondering how that knowledge was shared. Was it "licensed" to other tribes? How? Per fire? For the duration of the fire, or N fires per month? Was the original inventor rewarded?

No well-established software developer team will start from a real "scratch", i.e. first designing and "baking" all hardware, developing CPU instructions, boot loaders, operating systems, programming languages, libraries, frameworks etc., in order to develop their application. In general they use available knowledge, i.e. software, from the "giants" that developed those components before. The obvious metaphor is the pyramid: would placing the last stone at the top make the entire structure "yours", as would be the proprietary software case? Yet from the 1980s on, as software became a valuable asset, many a marketing manager has sold the entire pyramid as a single piece.

So in real life, any smart developer team will Google for, and use, any free and open source (FOSS) software available "out there", usually being aware of any licensing constraints. Developing "from scratch" is something we did in the dark ages, or even further back, at the time of the brontosauruses.

Well, not that long ago. My career started in 1985, working for AT&T, later called Lucent, for 11 years on software for the 5ESS public telephone exchange. I am still grateful for that opportunity. From what I gathered at the time, both software and hardware were all developed "in-house": the chips, the Unix operating system, the C and later C++ languages, their compilers, the whole lot. Well, that is really "from scratch". Luckily all these goods were later shared with the world. That is why we have Linux and Mac OS X (via BSD and NeXT, but that is another story) today.

But still, who is Anne Elk and what does The Theory on Brontosauruses have to do with all of this? My point is that, although in practice software is developed on the shoulders of “the pyramid builders”, i.e. “the giants”, proprietary software is often still traded in the high spirits of Anne Elk. Although some may be uttering: “we love Open Source, we use it all the time”, as to sell a fire… Only if you are like AT&T and many others at the time, “from scratch” comes close to the truth and may not itch…But for the true humans among us, sharing is us and where we came from.

 

 

by Just van den Broecke at November 20, 2014 01:55 AM

longwayaround.org.uk

Loading PostGIS

Notes from a talk at PostGIS Day, London 20th Nov 2014.

The options

All support basic options such as specifying table names, schema, SRS etc.

Graphical

shp2pgsql-gui
Graphical interface to shp2pgsql tool shipped with PostGIS to load ESRI Shapefiles via pgAdmin III.
ogr2gui
Graphical interface to ogr2ogr command line utility. Windows only.
QGIS DB Manager
Simple database management from QGIS, including creating schemas & tables, moving tables and importing. Supports multiple import formats. Shipped with QGIS.
Proprietary
Support for PostGIS is now fairly broad within proprietary products including Safe Software's FME, Cadcorp, MapInfo Easyloader, ESRI SDE etc. MapInfo and ESRI impose some constraints & conventions including their own metadata tables.

Command Line

shp2pgsql
Fully featured ESRI Shapefile loader, supports specifying tablespace, null geometry handling, encoding etc. Shipped with PostGIS.
ogr2ogr
Swiss Army Knife of vector translation, supports many input formats and has good support for PostGIS. PostGIS docs here and here.
Loader
Loads GML such as Ordnance Survey data, uses ogr2ogr under the hood.
psql
Standard command line interface to PostgreSQL used to import from textual formats such as CSV via COPY.

Examples

All examples use the Natural Earth Vector data, a full download is available if you'd like to follow along. Assumes loading into a database called postgis with a loader login role with the password set via the PGPASSWORD environment variable: PGPASSWORD=password.

shp2pgsql

Load a Shapefile using shp2pgsql and psql. shp2pgsql doesn't load into the database directly but instead outputs SQL which can then be loaded via psql. This example pipes the SQL output by shp2pgsql to stdout to psql which executes it against the specified database.

shp2pgsql -W LATIN1 \
            -s 4326 \
            -I natural_earth_vector/10m_cultural/ne_10m_admin_0_countries.shp \
            ne.ne_10m_admin_0_countries \
        | psql -U loader -d postgis

ogr2ogr -f PostgreSQL

Load an equivalent Shapefile into PostGIS using ogr2ogr.

ogr2ogr -f PostgreSQL \
        PG:'dbname=postgis user=loader active_schema=ne' \
        natural_earth_vector/10m_cultural/ne_10m_time_zones.shp \
        -nlt PROMOTE_TO_MULTI

Tips

Use COPY for Speed

When loading large quantities of data, both shp2pgsql and ogr2ogr support loading via the PostgreSQL Dump format. This involves bulk loading of rows in a textual CSV-like format using the COPY command, which can be much quicker than individual INSERT statements.

Enabling Dump format with shp2pgsql is as simple as specifying the -D flag (shp2pgsql -D ...).

For ogr2ogr use the PGDump output format with the PG_USE_COPY config option set to YES. There are a number of layer creation (-lco) and config options detailed on the PGDump driver page.

ogr2ogr -f PGDump \
        --config PG_USE_COPY YES \
        -lco schema=ne \
        -lco create_schema=off \
        /vsistdout/ \
        natural_earth_vector/10m_cultural/ne_10m_admin_1_states_provinces_shp.shp \
        -nlt PROMOTE_TO_MULTI \
        | psql -U loader -d postgis
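The COPY text format these tools emit is simple: one row per line, columns separated by tabs, and NULLs written as \N. If you ever need to script your own bulk load, a hypothetical Python sketch (my own, not part of the tools above) that formats records this way:

```python
# Format records as PostgreSQL COPY text: tab-separated columns,
# newline-separated rows, NULL rendered as \N.
def copy_rows(records):
    def fmt(value):
        if value is None:
            return r"\N"
        # Escape characters that are special in the COPY text format.
        return (str(value).replace("\\", "\\\\")
                          .replace("\t", "\\t")
                          .replace("\n", "\\n"))
    return "\n".join("\t".join(fmt(v) for v in row) for row in records)

rows = [("G1 1AA", 55.86, -4.25), ("G1 1AB", None, -4.26)]
print(copy_rows(rows))
```

COPY treats backslash sequences specially, hence the escaping; psql's \copy accepts the same format.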

When loading multi file datasets such as Ordnance Survey OSMM Topography Layer or VectorMap Local the general approach is:

  • Create schema and empty tables
  • Load each source file via COPY
  • Create indexes, vacuum etc.

An example of this workflow for VectorMap Local can be found in the Loader repository. Deferring the creation of indexes can also improve performance significantly as it avoids the database continually rebuilding the indexes during load. In this instance the ogr2ogr command might look like:

ogr2ogr --config PG_USE_COPY YES \
        -lco schema=ne \
        -lco create_schema=off \
        -lco create_table=off \
        -lco spatial_index=off \
        -f PGDump \
        /vsistdout/ \
        /path/to/source.gml

A benefit of this approach is that you can also fine tune the column types and provide support for date fields which ogr2ogr doesn't natively understand.

Parallel Processing

Databases are designed to handle lots of concurrent activity and can easily handle more than one process loading data at the same time. Often load performance can be improved by running several shp2pgsql or ogr2ogr processes at a time. This can be done manually, but for large datasets this becomes a pain; luckily on *nix systems we have GNU Parallel, which can automate it for us. A previous post has covered loading with GNU Parallel in more detail but it fits well with this discussion.

The parallel command is very flexible and can take some time to understand, but a simple invocation will get you a long way.

In a previous post I provided an example of loading all Natural Earth vectors using shp2pgsql, so this time let's do the same with ogr2ogr:

time find natural_earth_vector/10m_physical/ -name '*.shp' \
    | parallel "ogr2ogr -f PGDump \
                    --config PG_USE_COPY YES \
                    -lco schema=ne \
                    -lco create_schema=off \
                    /vsistdout/ {} \
                    -nlt PROMOTE_TO_MULTI | psql -U loader -d postgis"

Load Geometry with COPY

If you have data in a delimited format such as CSV or TSV you can load it via COPY and have PostgreSQL create geometries on the fly by expressing the geometry as WKT or EWKT in your text file. The steps are very similar to those outlined above when using COPY:

  • Create table with a geometry column
  • Load source file via COPY
  • Create indexes, vacuum etc.

As an example, let's load a CSV with details of WMS requests, with the bounding box expressed as EWKT:

cat requests1.csv
2014-10-20 06:33:24,elmbridge,wms,"SRID=27700;POLYGON((516601 163293, 516729 163293, 516729 163421, 516601 163421, 516601 163293))"
2014-10-20 06:33:32,surrey,wms,"SRID=27700;POLYGON((492801 166401, 499201 166401, 499201 172801, 492801 172801, 492801 166401))"
2014-10-20 06:38:09,exactrak,wms,"SRID=27700;POLYGON((206848 67200, 206976 67200, 206976 67328, 206848 67328, 206848 67200))"
...

psql -U loader -d postgis
drop table if exists requests;
create table requests(reqtime timestamp, org text, service text, bbox geometry);
\copy requests FROM 'requests1.csv' DELIMITER ',' CSV;
select populate_geometry_columns();
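The EWKT bounding boxes in the CSV above all follow the same five-point closed-ring pattern. A hypothetical Python helper (my own sketch, not from the talk) to format one:

```python
def bbox_ewkt(xmin, ymin, xmax, ymax, srid=27700):
    """Format a bounding box as an EWKT POLYGON: SRID prefix, closed ring."""
    ring = [(xmin, ymin), (xmax, ymin), (xmax, ymax), (xmin, ymax), (xmin, ymin)]
    coords = ", ".join(f"{x} {y}" for x, y in ring)
    return f"SRID={srid};POLYGON(({coords}))"

print(bbox_ewkt(516601, 163293, 516729, 163421))
# SRID=27700;POLYGON((516601 163293, 516729 163293, 516729 163421, 516601 163421, 516601 163293))
```

Writing rows in this shape lets PostgreSQL build the geometry directly during COPY into a geometry column.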

by walkermatt at November 20, 2014 12:00 AM

November 19, 2014

Jackie Ng

MapGuide tidbits: Log monitoring with log.io

If the previous MapGuide log monitoring solution doesn't cut it for you (especially on Linux), here's another one you can try.

log.io is a real-time log monitoring solution that runs in your web browser. It is powered by node.js and socket.io.

Just a note before we dive in, this post is geared towards Linux-specific installations of MapGuide. This may or may not work for Windows. The log.io install instructions assume Linux, so we're rolling with that.

To install log.io you will first have to install node.js; there's a bajillion different links out there on how to install node.js.


Once node is installed, you can install the log.io package via npm (on Ubuntu you'll have to elevate that command with sudo):

npm install -g log.io --user "[username that will run log.io]"

Then start the log.io server.

log.io-server

Then create a harvester.conf in your ~/.log.io/ directory that defines what log files to monitor. Here's a basic example configured for MapGuide and Apache log monitoring.

exports.config = {
    nodeName: "mapguide_server",
    logStreams: {
      apache: [
        "/usr/local/mapguideopensource-2.6.0/webserverextensions/apache2/logs/access_log",
        "/usr/local/mapguideopensource-2.6.0/webserverextensions/apache2/logs/error_log"
      ],
      mapguide_access: [
        "/usr/local/mapguideopensource-2.6.0/server/Logs/Access.log"
      ],
      mapguide_error: [
        "/usr/local/mapguideopensource-2.6.0/server/Logs/Error.log"
      ]
    },
    server: {
      host: '0.0.0.0',
      port: 28777
    }
  } 

Now start the log harvester.

log.io-harvester

Now browse to http://localhost:28778 and watch your MapGuide log files in real time.


NOTE: The above install instructions were pilfered from the log.io website itself; however, I found the global install option to be problematic on my Ubuntu test VM. For some reason it always insists on building the native modules as the "root" user and not the user I designated from the npm install command. So I went for a local install instead, which means the commands change to the following:

  • Installing log.io: npm install log.io
  • Running log.io server: ~/node_modules/log.io/bin/log.io-server
  • Running log.io harvester: ~/node_modules/log.io/bin/log.io-harvester

by Jackie Ng (noreply@blogger.com) at November 19, 2014 04:11 PM

Stefano Costa

Archaeology in the Mediterranean: I don’t wanna drown in cold water

This post is the second half of the one I had prepared for this year’s Day of Archaeology (Archaeology in the Mediterranean: do not drown if you can). For an appropriately timed mistake, I only managed to post the first, more relaxed half of the text. Enjoy this rant.

Written and unwritten rules dictate what is correct, acceptable and ultimately recognised by your peers: it is never entirely clear who sets research agendas for entire disciplines, but ‒ just to be more specific ‒ I feel increasingly stifled by the “trade networks” research framework that has dominated Late Roman pottery studies for the past 40 years now. Invariably, at any dig site, there will be from 1 to 100,000 potsherds from which we should infer that said site was part of the Mediterranean trade network. We are all experts about our “own” material, that is, the finds that we study, and apart from a few genuine gurus most of us have a hard time recognising where one pot was made, what is the exact chronology of one amphora, and so on. But those gurus, as leaders, contribute to setting in stone what should be a temporary convention as to what terminology, chronology and to a larger extent what approach is appropriate. I can hear the drums of impostor syndrome rolling in the back.

I don’t want to drown in this sea of small ceramic sherds and imaginary trade networks, rather I really need to spend time understanding why those broken cooking pots ended up exactly where we found them, in a certain room used by a limited number of people, in that stratigraphical position.

At the same time, I’m depressingly frustrated by how mechanical and repetitive the identification of ceramic finds can be: look at shape, compare with known corpora, look at fabric, compare with more or less known corpora. If any, look at decoration, lather, rinse, repeat. My other self, the one writing open source computer programs, wonders if all of this could not be done by a mildly intelligent algorithm, liberating thousands of human neurons for more creative research. But this is heresy. We collectively do our research and dissemination as we are told, with sometimes ridiculously detailed guidelines for the preparation of digital illustrations that end up printed either on paper or on PDF (which is the same thing). Our collective knowledge is the result of a lot of work that we need to respect, acknowledge, study and pass on to the next generation.

At the end of the obligations telling you how to study your material, how to publish it, and ultimately how to think about it, you could just be happy and let yourself comfortably drown into the next grant application. Don’t do that. Do more. Follow your crazy idea and sail the winds of Mediterranean archaeology.

by Stefano Costa at November 19, 2014 02:58 PM

Petr Pridal

Create Google Earth KML Overlay with MapTiler

A new video tutorial is available - How to create a Google Earth KML SuperOverlay with MapTiler.

It is as easy as a few clicks! Check the video:


https://www.youtube.com/watch?v=-rSYFwcWVnc&feature=youtu.be

by Klokan Technologies GmbH (noreply@blogger.com) at November 19, 2014 02:12 PM

GIS for Thought

GIS DAY – Post your Workstation

In honor of GIS day I have posted my current home GIS workstation:

CPU: Intel Xeon E3-1230 V3 3.3GHz Quad-Core Processor
CPU Cooler: Cooler Master Hyper 212 EVO 82.9 CFM Sleeve Bearing CPU Cooler
Motherboard: Gigabyte GA-H97M-D3H Micro ATX LGA1150 Motherboard
Memory: Crucial Ballistix Sport 8GB (1 x 8GB) DDR3-1600 Memory
Memory: Crucial Ballistix Sport 8GB (1 x 8GB) DDR3-1600 Memory
Storage: Crucial MX100 512GB 2.5″ Solid State Drive
Case: Silverstone TJ08B-E MicroATX Mini Tower Case
Power Supply: Corsair CSM 550W 80+ Gold Certified Semi-Modular ATX Power Supply
Monitor: Asus VX279Q 60Hz 27.0″ Monitor

My GIS Battlestation

I love the SSD for speed and the CPU is more than enough for my use.

And for a GIS server I use:
HP 704941-421 ProLiant Micro Server
Running Xubuntu, with 3×3 TB drives in a RAID 5 array and 8 GB of extra memory.

by Heikki Vesanto at November 19, 2014 12:00 PM

gvSIG Team

gvSIG 2.1: Memory management

Users starting out with gvSIG will find a lot of small improvements with respect to previous versions. Here we describe one of them: the ability to manage the RAM memory used by gvSIG.

Until this version, the user had to do it manually, editing a text file and modifying the memory parameters. In gvSIG 2.1 an option has been included in order to manage the memory from the application preferences.

00_memory

 


Filed under: development, english, gvSIG Desktop

by Mario at November 19, 2014 09:25 AM

November 18, 2014

GIS for Thought

UK Postcode Polygon Accuracy Comparison

One of the main ways of generating postcode polygons is to use OS Code-Point Open and from them generate Voronoi polygons.

This visualization compares the Code-Point Voronoi polygons to postcodes from the NRS postcode extract, which is widely considered the best postcode dataset for Scotland. Scotland is used because we have a CAG (NLPG in the south) extract for Glasgow available for a property-level comparison of accuracy.

The black areas are where the two datasets agree and the coloured areas are where they do not. For this comparison we can consider NRS to be correct.

Postcode Comparison

by Heikki Vesanto at November 18, 2014 12:00 PM

Nyall Dawson

Exploring QGIS 2.6 – Item panel for map composer

In recent releases QGIS’ map composer has undergone some large usability improvements, such as the ability to select and interact with multiple items, and much improved navigation of compositions. Another massive usability improvement which is included in QGIS 2.6 is the new “Items” panel in the map composer. The panel shows a list of all items currently in the composition, and allows you to individually select, show or hide items, toggle their lock status, and rearrange them via drag and drop. You can also double click the item’s description to modify its ID, which makes managing items in the composition much easier.

QGIS composer’s new items panel

This change has been on my wish list for a long time. The best bit is that implementing the panel has allowed me to fix some of the composer’s other biggest usability issues. For instance, now locked items are no longer selectable in the main composer view. If you’ve ever tried to create fancy compositions with items which are stacked on top of other items, you’ll know that trying to interact with the lower items has been almost impossible in previous QGIS versions. Now, if you lock the higher stacked items you’ll be able to fully interact with all underlying items without the higher items getting in the way. Alternatively you could just temporarily hide them while you work with the lower items.

This feature brings us one more step closer to making QGIS’ map composer a powerful DTP tool in itself. If you’d like to help support further improvements like this in QGIS, please consider sponsoring my development work, or you can contact me directly for a quote on specific development.

by Nyall Dawson at November 18, 2014 09:42 AM

GeoServer Team

GeoServer 2.6.1 released

The GeoServer team is happy to announce the release of GeoServer 2.6.1. Download bundles are provided (zip, war, dmg and exe) along with documentation and extensions.

GeoServer 2.6.1 is the latest stable release of GeoServer and is recommended for production deployment. Thanks to everyone taking part, submitting fixes and new functionality:

  • Fix for slow rendering of maps with lots of layers coming from a spatial database, reported by numerous users
  • Improvements in rendering labels over transparent maps
  • Fix for rendering tiled raster data, it could occasionally miss portions of the data
  • Fixes for WFS 2.0 joins
  • Better memory management when HTTP gzip-ping large amounts of GML/CSV/JSON data
  • Multidimensional WCS 2.0 outputs (NetCDF downloads) now support subsetting in CRS other than WGS84
  • An option to disable JAI native warp, which can cause some instability when reprojecting certain raster data sets
  • Some improvements in the GeoPackage outputs (still a community module, available via nightly builds)
  • Check the release notes for more details
  • This release is made in conjunction with GeoTools 12.1

Thanks to Andrea (GeoSolutions) for this release

About GeoServer 2.6

Articles and resources for GeoServer 2.6 series:

 

 

by Andrea Aime at November 18, 2014 09:06 AM

GeoTools Team

GeoTools 12.1 Released

GeoTools 12.1 released

The GeoTools community is happy to announce the latest  GeoTools 12.1 download:
This release is also available from our maven repository. This release is made in conjunction with GeoServer 2.6.1.

This is a release of the GeoTools 12 stable series recommended for production systems. The release schedule now offers six months of stable releases followed by six months of maintenance releases.

A few highlights from the GeoTools 12.1-Release Notes:
  • Some fixes in JDBC land, one important for performance, making sure feature types are cached, plus a few others related to feature type joining
  • Some rendering fixes, including an important one related to raster data rendering not always displaying the full raster in tiled outputs, as well as better calculation of the extra area to be queried in order to render all labels in maps
  • Some improvements to the image mosaic module, including the ability to extract times from the full path, instead of just the file name, when harvesting multidimensional data sets
  • Some love in SLD 1.0 parsing and encoding
  • SQL Server store can now also work off instance name, in addition to the already supported TCP port
  • A number of other fixes, check the release notes for full details
Thanks to Jody for this release (Boundless).

About GeoTools 12

by Andrea Aime (noreply@blogger.com) at November 18, 2014 08:12 AM

November 17, 2014

GIS for Thought

Centroid Within Selection in QGIS

While we have some options for spatial selection in QGIS through the Spatial Query plugin, one option that is glaringly missing is centroid-within. This is extremely useful for easily selecting polygons that mainly fall within other polygons. This tutorial will run through how to select polygons whose centroid falls within another polygon in QGIS.

Our initial setup is a postcode dataset, from which we want to extract all of the postcodes that are mainly within Glasgow City Council. The boundaries are not identical, but they are roughly similar. However, an intersects query would bring in polygons that simply touch the area, and a within query would exclude the ones that fall just outside. A centroid-within query should work great.

Selection

In this image the red lines are our postcodes, and the yellow area is the highlighted Glasgow City polygon.

We are going to cheat slightly by using SpatiaLite, which is a stand-alone, single-file spatial database. It is, however, very tightly integrated into QGIS and we do not have to leave the program, so I feel this counts as a QGIS solution.

First using one of the browser panels create a new database:

Screenshot[34]

Transfer your data into the database. This can be done by dragging and dropping a .shp file into the newly created database using two browser panels.

I created a subset of the postcode dataset, with a simple polygon selection of roughly the Glasgow area (postcode_glasgow_nrs_rough). My other dataset is the UK unitary authorities dataset (district_borough_unitary_region).

Then open up the DB Manager. Database>DB Manager>DB Manager.

Once in the database we can do the query using simple SQL:

SELECT  postcode_glasgow_nrs_rough.*
FROM postcode_glasgow_nrs_rough
JOIN district_borough_unitary_region
ON ST_intersects(ST_centroid(postcode_glasgow_nrs_rough.geom),district_borough_unitary_region.geom)
WHERE district_borough_unitary_region.name LIKE "%lasgow%"

We also have a WHERE clause so that only the postcodes within Glasgow are selected; "%lasgow%" is used to avoid capitalisation mismatches.

Screenshot[35]

We can also directly add this query in as a layer in QGIS using the “Load as new layer” feature. An excellent feature, and only requires you to select the primary key and geometry column. This allows us to visually check our results.

The query has worked as intended, but we have some strangely shaped polygons so the results are not what I had hoped.

Screenshot[37]

We can see that one of the postcode polygons is missing from the selection because its centroid actually falls outside of itself.
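A polygon's centroid really can fall outside the polygon itself. A small pure-Python demonstration (my own sketch, not from the tutorial) using a U-shaped polygon, with a shoelace centroid and a ray-casting containment test:

```python
def centroid(poly):
    """Area centroid of a simple polygon (shoelace formula)."""
    a = cx = cy = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        cross = x1 * y2 - x2 * y1
        a += cross
        cx += (x1 + x2) * cross
        cy += (y1 + y2) * cross
    a *= 0.5
    return cx / (6 * a), cy / (6 * a)

def contains(poly, pt):
    """Ray-casting point-in-polygon test."""
    x, y = pt
    inside = False
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

# A U-shaped (concave) polygon: its centroid lies in the notch, outside it.
u_shape = [(0, 0), (3, 0), (3, 3), (2, 3), (2, 1), (1, 1), (1, 3), (0, 3)]
c = centroid(u_shape)
print(c, contains(u_shape, c))  # the centroid falls outside the polygon
```

This is exactly the failure mode visible in the selection above, and why a guaranteed-interior point is preferable.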

Not to worry, we have a better option than centroid for this query: ST_PointOnSurface. Details can be found on the Boundless PostGIS pages.

So let's try this.

SELECT  postcode_glasgow_nrs_rough.*
FROM postcode_glasgow_nrs_rough
JOIN district_borough_unitary_region
ON ST_intersects(ST_PointOnSurface(postcode_glasgow_nrs_rough.geom),district_borough_unitary_region.geom)
WHERE district_borough_unitary_region.name LIKE "%lasgow%"

Screenshot[36]

Adding it in we see the results as expected.

Screenshot[38]

Great, we now have our data selected, but how do we get it out of SpatiaLite? We could wait for "Load as new layer" to load in all the features and then save them as a shapefile, but for my query "Load as new layer", while great for a quick look, was running quite slowly and thus was not an option.

So instead, we can simply create a new table in the database from our selection.

CREATE TABLE glasgow_postcode_nrs AS
SELECT  postcode_glasgow_nrs_rough.*
FROM postcode_glasgow_nrs_rough
JOIN district_borough_unitary_region
ON ST_intersects(ST_PointOnSurface(postcode_glasgow_nrs_rough.geom),district_borough_unitary_region.geom)
WHERE district_borough_unitary_region.name LIKE "%lasgow%";

Note the ; at the end. This creates a new table pretty quickly. And to get it to appear as a spatial table we simply register its geometry in the geometry_columns table:

INSERT INTO geometry_columns VALUES ('glasgow_postcode_nrs', 'geom',6, 2, 27700, 0);

With the options being: Table name, Geometry column, Geometry (type 6 for polygon), dimensions, SRID, Spatial index boolean.

The table then appears in our browser.

Screenshot[41]

And our final result.

part2

I am loving the database integration in QGIS. It makes some workflows much easier and adds a wealth of new opportunities. Also the “Load as new layer” views are amazing, lots of possibilities.

by Heikki Vesanto at November 17, 2014 12:00 PM

gvSIG Team

gvSIG 2.1: Configuration of the grid in the new Layout

Continuing with the changes in cartographic production that the new Layout brings to gvSIG 2.1, the grid generation function has been greatly improved. Now we are able to configure almost all of its visual parameters, covering many of the needs that gvSIG community users had demanded.

We will see, with a very easy example, the new functionality of grid configuration.

We start from a simple Layout where we will add a grid:

00_grid0

If you were surprised to see the Table of Contents embedded in the Layout document, it is because you have not read last week's post.

The first thing we do is select the View inserted in our Layout and open its context menu by right-clicking on it. Among the options shown, we select "Properties". gvSIG will show a window similar to the following image, where we can configure all the parameters related to a View:

00_grid1

In this case we are interested in the grid configuration, located at the bottom of the window. To enter its settings we press the corresponding button, which opens a new window where every parameter of the grid can be configured:

00_grid2

In our case we will choose to symbolize the grid with lines, at intervals of 100 degrees both horizontally and vertically. We will also indicate that the horizontal labels are rotated 90° (label rotation was one of the features most requested by users). We configure the remaining parameters (font, size, colour…) and our grid is ready. Simple, right?

00_grid3


Filed under: english, gvSIG Desktop Tagged: grid, gvSIG 2.1, layout, map

by fjsolis at November 17, 2014 10:44 AM

November 16, 2014

Free and Open Source GIS Ramblings

More experiments with Game of Life

As promised in my recent post "Experiments with Conway's Game of Life", I have been looking into how to improve my first implementation. The new version, which you can now find on Github, is fully contained in one Python script which runs in the QGIS console. Additionally, the repository contains a CSV with the grid definition for a Gosper glider gun and the layer style QML.

Rather than creating a new Shapefile for each iteration like in the first implementation, this script uses memory layers to save the game status.

You can see it all in action in the following video:

(video available in HD)

Thanks a lot to Nathan Woodrow for the support in getting the animation running!

Sometimes there are still hiccups causing steps to be skipped, but overall it is running nicely now. Another approach would be to change the layer attributes rather than creating more and more layers, but I like to be able to go through all the resulting layers after they have been computed.


by underdark at November 16, 2014 06:52 PM

GIS for Thought

Open UK Postcode Polygons

The Ordnance Survey releases Code-Point Open, which contains the centroid coordinates for each postcode in the UK. One way to generate open postcode polygons is to generate a Voronoi diagram from those points.
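A Voronoi cell is simply the set of locations nearer to one centroid than to any other. A minimal pure-Python illustration of that assignment rule (my own sketch, with made-up coordinates):

```python
import math

def nearest_postcode(point, centroids):
    """Return the postcode whose centroid is closest to point --
    i.e. the Voronoi cell the point would fall in."""
    return min(centroids, key=lambda pc: math.dist(point, centroids[pc]))

# Made-up planar (easting, northing) centroids for illustration only.
centroids = {
    "G1 1AA": (258000, 665000),
    "G1 1AB": (258300, 665100),
    "G2 1AL": (258900, 665800),
}
print(nearest_postcode((258100, 665050), centroids))  # G1 1AA
```

Tools such as QGIS's Voronoi polygons algorithm compute the actual cell boundaries; this sketch only shows the assignment rule those boundaries encode.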

The results initially look good, but how accurate are these generated polygons compared with actual postcode polygons?

Luckily the National Records of Scotland (NRS) also maintain a postcode dataset, which is released on their website for free. So we can do an easy comparison of the two postcode datasets, which should be an indication of how accurate Voronoi postcode areas would be across the UK.

I have decided to use Glasgow for the comparison because we also have the Corporate Address Gazetteer, which will allow us to compare not just the actual polygons, but actual properties. It does not really matter if the postcode polygon is incorrect, if all of the properties within that postcode would still be correct.

First, we have a simple side-by-side look at the two datasets we will compare:

Glasgow Postcodes

And a closeup overlay:

Zoomed in

The Voronoi one has been created from Code-Point Open points that fell within the Glasgow City Council Unitary Authority and the output Voronoi was clipped to the extent of the Unitary Authority.

The NRS-created postcodes were simply selected as the ones whose "Point on Surface" fell within the Unitary Authority. The process will be detailed in a later post.

by Heikki Vesanto at November 16, 2014 12:00 PM

November 15, 2014

gisky

Why you should present/be present at FOSDEM

Last month we published a call for participation for the Geospatial devroom at FOSDEM.

But with so many other conferences around, one may wonder why to be present at geospatial@FOSDEM. Well, one of the main goals of the organisers is to bring geospatial and non-geospatial developers together. Are you interested in building your own GPS device or drone? Check out some of the talks in the embedded devroom. Are you using Java, Python, PHP, or Perl? All of those languages have dedicated rooms where you can get an update. Interested in routing or using PostgreSQL? Yep, there is a graph devroom and a PostgreSQL devroom. Want to get your software into Linux distributions? The distributions devroom. The same is true the other way around: people from those communities can come to see your talk as well.

Anyway, a number of interesting talks have already been submitted, and I just want to remind everyone of the deadline of **1 December**. So if you are interested in presenting something, go ahead and submit your proposal! Don't postpone: if you only have a title, submit that already :-)


Some practical announcements: if you are interested in the devroom, consider joining the devroom mailing list. We will use it to send updates on the devroom and on a social event. FOSDEM itself is free of charge and requires no registration, so there are no obligations: you can just show up.

A last note: we want to stream the tracks and provide videos. If you intend to come and have experience with video (or would like to get some), please get in touch; we could still use volunteers.


by Johan Van de Wauw (noreply@blogger.com) at November 15, 2014 10:00 PM

GIS for Thought

Glasgow CAG High Density Areas

An example of the new multiple-overview feature in QGIS 2.6. Point displacement is used, though it does not work brilliantly at these zoom levels and with this number of points.

Primarily looking at spacing and composition in QGIS.

Glasgow CAG

by Heikki Vesanto at November 15, 2014 12:00 PM