Welcome to Planet OSGeo

February 19, 2018

GeoTools Team

GeoTools 17.5 released

The GeoTools team is pleased to announce the release of GeoTools 17.5:
This release, which is also available from the GeoTools Maven repository, is made in conjunction with GeoServer 2.11.5.

GeoTools 17.5 is the last maintenance release of the 17.x series, so it is time to consider upgrading!
The release mainly fixes bugs but also includes some enhancements: 
  • Allow group extraction in image mosaic regular expression based property collector
  • Allow overriding LabelCache in renderer
  • Allow MetaBufferEstimator to work against a specific feature (to calculate sizes based on attribute values)
For more information please see the release notes (17.5 | 17.4 | 17.3 | 17.2 | 17.1 | 17.0 | 17-RC1 | 17-beta).

About GeoTools 17

  • The wfs-ng module has now replaced gt-wfs.
  • The NetCDF module now uses NetCDF-Java 4.6.6.
  • Image processing provided by JAI-EXT 1.0.15.
  • YSLD module providing a plain-text representation of styling.

Upgrading

  • The AbstractDataStore has finally been removed. Please transition any custom DataStore implementations to ContentDataStore (tutorial available).

by Andrea Aime (noreply@blogger.com) at February 19, 2018 03:11 PM

gvSIG Team

GIS applied to Municipality Management: Module 13 ‘Layouts’

The video of the thirteenth module is now available, in which we will show how to create maps with the geographic information that we have in our views.

The layout will be the document that we can print, or export to PDF or PostScript, and in which we will insert the views that we have created in our project.

On the layout we can insert all types of elements, such as texts, north arrow, scale bar, legend, images or logos, charts, rectangles, lines…

The cartography to follow this video can be downloaded from this link.

Here you have the videotutorial of this new module:


by Mario at February 19, 2018 10:49 AM

February 16, 2018

Just van den Broecke

Emit #3 – Things are Moving

This is Emit #3, in a series of blog-posts around the Smart Emission Platform, an Open Source software component framework that facilitates the acquisition, processing and (OGC web-API) unlocking of spatiotemporal sensor-data, mainly for Air Quality. In Emit #1, the big picture of the platform was sketched. Subsequent Emits will detail technical aspects of the SE Platform. “Follow the data” will be the main trail loosely adhered to.

It has been three weeks since Emit #2, and a lot of Things have been happening since then:

There is a lot to expand on. I will try to briefly summarize The Things Conference, LoRa and LoRaWAN, and save the other tech for later Emits.

LoRa and LoRaWAN may sound like a sidestep, but they are very much related to, for example, building a network of Air Quality and other environmental sensors. When deploying such sensors, two issues always arise:

  • power: sensors and their computing boards need a continuous supply of electricity
  • transmission: sensor-data needs cabled Internet, WiFi or cellular data to be transmitted

In short, LoRa/LoRaWAN (LoRa = Long Range) is basically a wireless RF technology for long-range, low-power and low-throughput communications. You can find many references on the web, for example from the LoRa Alliance and SemTech. There is lots of buzz around LoRa. But just like the Wireless Leiden project, which built a public WiFi network around that city, The Things Network has embraced LoRa technology to build a world-wide, open, community-driven “overlay network”:

“The Things Network is building a network for the Internet of Things by creating abundant data connectivity, so applications and businesses can flourish. The technology we use is called LoRaWAN and it allows for things to talk to the internet without 3G or WiFi. So no WiFi codes and no mobile subscriptions. It features low battery usage, long range and low bandwidth. Perfect for the internet of things.”

You may want to explore the worldwide map of TTN gateways below.

The Things Network (TTN) was established in Amsterdam, The Netherlands. As an individual you can extend The Things Network by deploying a Gateway. I was one of the first backers of the TTN KickStarter project, back in 2015. The interest was overwhelming, even leading to (Gateway) delivery problems. But a LoRa Gateway to extend TTN is almost a commodity now. You can even build one yourself. TTN is very much tied to the whole “DIY makers movement”. All TTN designs and code (on GitHub) are open. Below is a global architecture picture from their site.

So TTN organized their first conference, of course in Amsterdam, over three days. It was an amazing success, with more than 500 enthusiasts.

The conference was very hands-on, with lots of free workshops (and free takeaway hardware). I followed several workshops, which were intense (hardware+software hacking) but always rewarding (blinking green lights!). One to mention in particular (as a Python programmer) was on the LoPy: a sort of Arduino-like board, very low cost (around $30), programmable with MicroPython, that connects directly to TTN. Within an hour the board was happily sending meteo-data to TTN.
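To give an idea of how little code is involved, here is a minimal MicroPython sketch of the kind used in such a workshop, based on Pycom's documented LoPy API; the OTAA keys are placeholders you would obtain from the TTN console.

from network import LoRa
import socket
import binascii
import time

# Join TTN via over-the-air activation; keys below are placeholders.
lora = LoRa(mode=LoRa.LORAWAN)
app_eui = binascii.unhexlify('70B3D57ED0000000')
app_key = binascii.unhexlify('00000000000000000000000000000000')
lora.join(activation=LoRa.OTAA, auth=(app_eui, app_key), timeout=0)

while not lora.has_joined():
    time.sleep(2.5)

# Open a raw LoRa socket and send a tiny (made-up) meteo payload.
s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
s.setsockopt(socket.SOL_LORA, socket.SO_DR, 5)
s.send(bytes([0x01, 0x17]))  # e.g. sensor id + temperature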

All in all this conference made me eager to explore more of LoRa with TTN, in particular the possibilities for citizen-based sensor-networks for environmental data, air quality in particular. I am aware that “IoT” has some bad connotations when it comes to security, especially with closed technologies. But IoT is a movement we cannot stop. With an end-to-end open technology like TTN there is at least the possibility to avoid the “black box” part and take Things into our own hands.


by Just van den Broecke at February 16, 2018 11:03 PM

gvSIG Team

Recording of webinar on “gvSIG Suite: open source software for geographic information management in agriculture” is now available

If you weren’t able to follow the webinar on “gvSIG Suite: open source software for geographic information management in agriculture”, organized by GODAN and the gvSIG Association, you can watch the recording now at the gvSIG Youtube channel:

The webinar presented the gvSIG Suite, a complete catalog of open source software solutions, as applied to agriculture.

The gvSIG Suite is composed of ‘horizontal’ products:

  • gvSIG Desktop: Geographic Information System for editing, 3D analysis, geoprocessing, maps, etc.
  • gvSIG Online: Integrated platform for Spatial Data Infrastructure (SDI) implementation.
  • gvSIG Mobile: Mobile application for Android to take field data.

and sector products:

  • gvSIG Roads: Platform to manage roads inventory and conservation.
  • gvSIG Educa: gvSIG adapted to geography learning in pre-university education.
  • gvSIG Crime: Geographic Information System for Criminology management.

During the webinar we also showed several successful case studies in agriculture and forestry. You can also consult other case studies in this and related sectors at the gvSIG Outreach website (they are in their original language, but there's a translator on the left side):

The presentation is available at this link.

If you want to download gvSIG Desktop you can do it from the gvSIG website, gvSIG Mobile is available from the Play Store, and if you are interested in implementing gvSIG Online in your organization you can contact us by e-mail: info@gvsig.com.

If you have any doubts or problems with the application you can use the mailing lists.

And here you have several links about training on gvSIG, with free courses:

by Mario at February 16, 2018 09:33 AM

February 15, 2018

gvSIG Team

GIS applied to Municipality Management: Module 12 ‘Geoprocessing’

The video of the twelfth module is now available, in which we will see the geoprocessing tools in gvSIG.

gvSIG has more than 350 geoprocesses, both for raster and vector layers, which allow us to perform different types of analysis, for example to obtain the optimal areas to locate a specific type of infrastructure.

Using the geoprocesses available in gvSIG we can, for example, create buffers to calculate, among other things, road or railway rights of way. An intersection with a parcel layer can then be applied to obtain the part of each parcel that should be expropriated. We can also run hydrological analyses, merge layers…
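As an illustration of the idea (using Python's shapely library, not gvSIG's own scripting API), here is a minimal sketch that buffers a road centreline and intersects the buffer with a parcel to get the area to expropriate; the geometries are made-up examples in metres.

from shapely.geometry import LineString, Polygon

# Hypothetical road centreline and parcel, coordinates in metres.
road = LineString([(0, 0), (200, 0)])
parcel = Polygon([(50, 5), (120, 5), (120, 60), (50, 60)])

right_of_way = road.buffer(15)                   # 15 m right of way
to_expropriate = parcel.intersection(right_of_way)
print(round(to_expropriate.area, 1), "square metres to expropriate")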

The cartography to follow this video can be downloaded from this link.

Here you have the videotutorial of this new module:


by Mario at February 15, 2018 08:52 AM

February 14, 2018

GeoSolutions

Latest on GeoNode: Local Monitoring Dashboard


Dear All,

in this post we would like to introduce a plugin which we have developed for GeoNode and released as Open Source (full documentation available here), in order to give users the ability to keep under control the hardware and software load on a GeoNode installation. This plugin is called Monitoring; it is available as an Open Source contrib module for GeoNode, and documentation on how to enable and configure it can be found here.

Overview of the monitoring capabilities

We are now going to provide an overview of the functionality provided by this plugin; it is worth pointing out that, given the sensitive nature of the information shown, it is accessible only to GeoNode users that have the administrator role.

The plugin allows administrators to keep under control the hardware and software load on a GeoNode instance by collecting, aggregating, storing and indexing a large amount of information that is normally hidden away, spread across various logs that are difficult to find when troubleshooting (GeoNode's own log, the GeoServer log, the audit log and so on); in addition, it collects information about hardware load on memory and CPU (disk could be added easily), which is important on live instances.

It is also possible to create alerts that check certain conditions and then send a notification email to preconfigured email addresses (more on this here).

It is also possible to look at OGC Service statistics on a per-service and per-layer basis. Finally, a simple country-based map shows where requests are coming from.

Overview of the available analytics

Let us now dive a little into the functionalities provided by this plugin. Here below you can see the initial homepage of the plugin.

Monitoring Plugin Homepage

We tried to put on the homepage a summary of the available information so that a user can quickly understand what is going on. The first row provides a series of visual controls that give an overview of the instance's health at different aggregation time ranges (from 10 minutes to 1 week):
  • Health Check - tells us if there are any alerts or errors that would require the attention of the administrator. Colors range from Red (at least an Error has happened in the selected time range) to Yellow (no Errors, but at least an Alert has triggered within the selected time range) and finally to Green (no Alerts or Errors).
  • Uptime - shows GeoNode system uptime.
  • Alerts - shows the number of notifications from defined checks. When clicked, the Alerts box will show detailed information. See the Notifications description for details.
  • Errors - shows how many errors were captured during request processing. When clicked, the Errors box will show a detailed list of the captured errors.
The Software Performance section shows analytics about the overall performance of both GeoNode itself and the OGC back-end.

Software Performance Summary Dashboard

If we click on the icon in the upper right corner, the detailed view is shown, as illustrated below. Additional detailed information over the selected time period is shown for GeoNode, for the OGC Services, and then for individual layers and resources.

Software Performance Detailed Dashboard - 1

Software Performance Detailed Dashboard - 2

The Hardware Performance section is responsible for showing analytics about CPU and Memory usage of the machine where GeoNode runs as well as of the machine where GeoServer runs (in case it runs on a separate machine); see figure below.

Hardware Performance Detail Section

Interesting points and next steps

The plugin provides additional functionality which is described in detail in the GeoNode documentation (see this page); as mentioned above, we can inspect errors in the logs directly from the plugin, and we can set alerts that send notification emails when they trigger. Moreover, the plugin provides a few additional endpoints that make it easier to monitor a GeoNode instance from the GeoHealthCheck Open Source project (as explained here).

Inspecting the error log

Last but not least, we would like to thank the GFDRR group at the World Bank, which provided most of the funding for this work.

If you are interested in learning how we can help you achieve your goals with open source products like GeoServer, MapStore, GeoNode and GeoNetwork through our Enterprise Support Services and GeoServer Deployment Warranty offerings, feel free to contact us!

The GeoSolutions team,

by simone giannecchini at February 14, 2018 04:51 PM

CARTO Inside Blog

CARTO Core Team and 5x

CARTO is open source, and is built on core open source components, but historically most of the code we have written has been in the “CARTO” parts, and we have used the core components “as is” from the open source community.

While this has worked well in the past, we want to increase the velocity at which we improve our core infrastructure, and that means getting intimate with the core projects: PostGIS, Mapnik, PostgreSQL, Leaflet, MapBox GL and others.

Our new core technology team is charged with being the in-house experts on the key components, and the first problem they have tackled is squeezing more performance out of the core technology for our key use cases. We called the project “5x” as an aspirational goal – can we get multiples of performance improvement from our stack? We knew “5x” was going to be a challenge, but by trying to get some percentage improvement from each step along the way, we hoped to at least get a respectable improvement in global performance.

Our Time Budget

A typical CARTO visualization might consist of a map and a couple widget elements.

The map will be composed of perhaps 12 (visible) tiles, which the browser will download in parallel, 3 or 4 at a time. In order to get a completed visualization delivered in under 2 seconds, that implies the tiles need to be delivered in under 0.5s and the widgets in no more than 1s.

Ideally, everything should be faster, so that more load can be stacked onto the servers without affecting overall performance.

The time budget for a tile can be broken down even further:

  • database retrieval time,
  • data transit to map renderer,
  • map render time,
  • map image compression, and
  • image transit to browser.

The time budget for a widget is basically all on the database:

  • database query execution, and
  • data transit to JavaScript widget.

The project goal was to add incremental improvements to as many slices as possible, which would hopefully together add up to a meaningful difference.

Measure Twice, Cut Once

In order to continuously improve the core components, we needed to monitor how changes affected the overall system, against both a long-term baseline (for project-level measurements) and short-term baselines (for patch-level measurements).

To get those measurements, we:

  • Enhanced the metrics support in Mapnik, so we could measure the amount of time spent in retrieving data, rendering data, and compressing output.
  • Built an internal performance harness, so we can measure the cost of various workloads end-to-end.
  • Carried out micro-benchmarks of particular workloads at the component level. For PostGIS, that meant running particular SQL against sample data. For Mapnik that meant running particular kinds of data (large collections of points, or lines, or polygons) through the renderer with various stylings.

Using the measurements as a guide, we then attacked the performance problem.

Low Hanging Fruit

Profiling, running performance tests, and doing a little research turned up three major opportunities for performance improvement:

  • PostgreSQL parallelism was the biggest potential win. With version 10 coming out shortly, we had an opportunity to get “free” improvements “just” by ensuring all the code in CARTO was parallel safe and marked as such. Reviewing all the code for parallel safety also surfaced a number of other potential efficiency improvements.
  • Mapnik turned out to have a couple areas where performance could be improved, through caching features rather than re-querying, and in improving the algorithms used for rendering large collections of points.
  • PostGIS had some small bottlenecks in the critical path for CARTO rendering, including some inefficient memory handling in TWKB that impacted point encoding performance.

Most importantly, during our work on core code improvements, we brought all the core software into the CARTO build and deployment chain, so these and future improvements can be quickly deployed to production without manual intervention.

We want to bring our improvements back to the community versions, and at the same time have early access to them in the CARTO infrastructure, so we follow a policy of contributing improvements to the community development versions while back-patching them into our own local branches (PostGIS, Mapnik, PostgreSQL).

And In the End

Did we get to “5x”? No. In our end-to-end benchmarks we notched a range of improvements, from a few percent to a few times, depending on the use case. We also found our integration benchmarks were sensitive to pressure from other load on our testing servers, so we relied mostly on micro-benchmarks of the different components to confirm local performance improvements.

While the performance improvements have been gratifying, some of the biggest wins have been the little improvements we made along the way:

We made a lot of performance improvements across all the major projects on which CARTO is based: you may have already noticed those improvements. We’ve also shown that optimization can only get you so far – sometimes taking a whole new approach is a better plan. A good example of this is the vector and raster data aggregations work we have been doing, reducing the amount of data transferred with clever summarization and advanced styling.

More changes from our team are still rolling out, and you can expect further platform improvements as time goes on. Keep on mapping!

February 14, 2018 10:00 AM

GeoNode

GeoNode Summit 2018


Join the awesome GeoNode community for the Summit 2018 from 26 to 28 March 2018 in the elegant city of Turin, Italy!


Summit Website


February 14, 2018 12:00 AM

February 13, 2018

OSGeo.nl

Report: OSGeo.nl and OpenStreetMap NL New Year's Meetup 2018

On Sunday 14 January 2018 the traditional OSGeo.nl and OpenStreetMap NL New Year's meetup, in the now completely renovated upstairs room of Cafe Dudok in Hilversum, was once again well attended. It is one of the few events where these two communities come together (we should do that more often!).

Every year it becomes clear how many interests and common threads these communities share: not only around the open data of the Dutch base registries (BAG, BGT, BRK, Top10NL etc.) and projects built on them such as NLExtract, but also around, for example, Missing Maps and QGIS. It whets the appetite for more events in 2018 to jointly strengthen Open Source and Open Data for geo-information in the Netherlands.

Among those present there was clearly a lot of knowledge and, above all, enthusiasm to share it. Besides the bitterballen and craft beers tasting great as always, there were again a number of presentations, announcements and plans. More below, with links to the slides:

1. OSGeo.nl: Looking back at 2017, plans for 2018 – Just van den Broecke – Slides PDF
Gert-Jan van der Weijden stepped down after 5 years of outstanding OSGeo.nl chairmanship. The current OSGeo.nl board consists of Just van den Broecke (chair), Paulo van Breugel (secretary) and Barend Köbben (treasurer). But above all, 2017 was the year of the first FOSS4G NL, held in Groningen. Thanks to the efforts of a great team, including Erik Meerburg, Leon van der Meulen, Willy Bakker and many volunteers from the Rijks Universiteit Groningen, the event was a huge success. More on its follow-up later. Looking ahead, our ambitions for 2018 are too many to list: above all a FOSS4G NL 2018, but also, given for example the overwhelming number of registrations for our GRASS Crash Course, we want to focus in 2018 on small-scale, targeted, hands-on events. Let us know if you have ideas for these.

2. Raymond Nijssen – 510.global data team, Red Cross – Slides PDF

Raymond took us to Sint Maarten in his often gripping personal story as a volunteer for the 510.global data team of the Red Cross, for which he signed up shortly after hurricane Irma. With the limited means and connectivity on the island, Raymond and the team, using the OpenStreetMap ecosystem and tools like QGIS, managed to produce effective overview maps for relief workers. Kudos Raymond, you are an example to us all!

3. Rob van Loon, Ordina – Managing GeoServer configuration with Python scripts – Slides PDF


GeoServer is deployed in a great many places in the Netherlands. The accompanying GeoServer Web UI for configuring layers and styling is convenient, but in many situations automated configuration of GeoServer is far more effective: think of DTAP (development/test/acceptance/production) environments, or of sometimes hundreds of nearly identical layers, all requiring repeated manual operations. For years GeoServer has had a not very well known REST API for remote configuration, which has become ever more stable and powerful in recent versions. Rob has developed a toolkit for this, soon to be published under https://github.com/borrob.
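To give a flavour of what such automation looks like, here is a minimal sketch using Python's requests library against GeoServer's REST API; the URL, credentials and workspace name are placeholders, and Rob's toolkit is the place to look for a complete solution.

import requests

GEOSERVER = "http://localhost:8080/geoserver/rest"  # placeholder URL
AUTH = ("admin", "geoserver")                       # placeholder credentials

# Create a workspace; the same pattern works for stores, layers and styles.
r = requests.post(GEOSERVER + "/workspaces",
                  auth=AUTH,
                  headers={"Content-Type": "application/xml"},
                  data="<workspace><name>demo</name></workspace>")
print(r.status_code)  # 201 means the workspace was created

# List existing workspaces as JSON.
print(requests.get(GEOSERVER + "/workspaces.json", auth=AUTH).json())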

4. Willem Hofmans (JW van Aalst) – White Spots and Black Holes in the BGT


Willem (and, in his absence, Jan-Willem van Aalst) presented a journey of discovery through the nooks and crannies of the BGT. There is still much to discover there: White Spots, where the BGT is not yet complete, are regularly published by Jan-Willem on his website. The Black Holes, which Willem focused on, are just as intriguing: are there errors in the BGT itself, or in the tools, such as NLExtract, that read BGT's often over-complicated GML? Boundary areas between rectangles and curves: it became a journey full of surprises. A follow-up within NLExtract is already underway.

Further announcements:

Erik Meerburg (with Hans van der Kwast): the first QGIS User Day, on 31 January 2018 at IHE in Delft. It has since taken place and was an overwhelming success, with over 100 participants. A QGIS NL user group is coming; more about that soon, keep following us here!

Erik Meerburg: FOSS4G NL 2018: at the time there were talks with several universities and universities of applied sciences, since everyone would love to host this event. There has been considerable progress since: save the date, Wednesday 11 July 2018 at Aeres Hogeschool in Almere. More news to follow!

Edward Mac Gillavry: following the day's earlier discussions around the base registries (notably the BGT and NLExtract) and OSGeo.nl's aim to organize smaller, targeted events/workshops/hackathons/code sprints, WebMapper, also a sponsor of the OSGeo.nl Meetup, offered to organize an NLExtract Day in 2018. Format, place and time are still to be determined; more about this here and via the OSGeo.nl channels.

All in all it was another fine afternoon, and the vast majority of those present also joined the communal dinner at Cafe Dudok afterwards.


by Just van den Broecke at February 13, 2018 10:33 PM

Gary Sherman

Quick Guide to Getting Started with PyQGIS 3 on Windows

Getting started with Python and QGIS 3 can be a bit overwhelming. In this post we give you a quick start to get you up and running and maybe make your PyQGIS life a little easier.

There are likely many ways to set up a working PyQGIS development environment---this one works pretty well.


Requirements

  • OSGeo4W Advanced Install of QGIS
  • pip (for installing/managing Python packages)
  • pb_tool (cross-platform tool for compiling/deploying/distributing QGIS plugins)
  • A customized startup script to set the environment (pyqgis.cmd)
  • IDE (optional)
  • Emacs (just kidding)
  • Vim (just kidding)

We'll start with the installs.

Installing

Almost everything we need can be installed using the OSGeo4W installer available on the QGIS website.

OSGeo4W

From the QGIS website, download the appropriate network installer (32 or 64 bit) for QGIS 3.

  • Run the installer and choose the Advanced Install option
  • Install from Internet
  • Choose a directory for the install---I prefer a path without spaces such as C:\OSGeo4W
  • Accept default for local package directory and Start menu name
  • Tweak network connection option if needed on the Select Your Internet Connection screen
  • Accept default download site location
  • From the Select packages screen, select: Desktop -> qgis: QGIS Desktop

When you click Next a bunch of additional packages will be suggested---just accept them and continue the install.

Once complete you will have a functioning QGIS install along with the other parts we need. If you want to work with the nightly build of QGIS, choose Desktop -> qgis-dev instead.

If you installed QGIS using the standalone installer, the easiest option is to remove it and install from OSGeo4W. You can run both the standalone and OSGeo4W versions on the same machine, but you need to be extra careful not to mix up the environment.

Setting the Environment

To continue with the setup, we need to set the environment by creating a .cmd script. The following is adapted from several sources, and trimmed down to the minimum. Copy and paste it into a file named pyqgis.cmd and save it to a convenient location (like your HOME directory).

@echo off
SET OSGEO4W_ROOT=C:\OSGeo4W3
call "%OSGEO4W_ROOT%"\bin\o4w_env.bat
call "%OSGEO4W_ROOT%"\apps\grass\grass-7.4.0\etc\env.bat
@echo off
path %PATH%;%OSGEO4W_ROOT%\apps\qgis-dev\bin
path %PATH%;%OSGEO4W_ROOT%\apps\grass\grass-7.4.0\lib
path %PATH%;%OSGEO4W_ROOT%\apps\Qt5\bin
path %PATH%;%OSGEO4W_ROOT%\apps\Python36\Scripts

set PYTHONPATH=%PYTHONPATH%;%OSGEO4W_ROOT%\apps\qgis-dev\python
set PYTHONHOME=%OSGEO4W_ROOT%\apps\Python36

set PATH=C:\Program Files\Git\bin;%PATH%

cmd.exe

You should customize the set PATH statement to add any paths you want available when working from the command line. I added paths to my git install.

The last line starts a cmd shell with the settings specified above it. We'll see an example of starting an IDE in a bit.

You can test to make sure all is well by double-clicking on our pyqgis.cmd script, then starting Python and attempting to import one of the QGIS modules:

C:\Users\gsherman>python3
Python 3.6.0 (v3.6.0:41df79263a11, Dec 23 2016, 07:18:10) [MSC v.1900 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import qgis.core
>>> import PyQt5.QtCore

If you don't get any complaints on import, things are looking good.
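Beyond bare imports, you can verify the environment with a minimal standalone script that boots the QGIS libraries outside the QGIS application; the prefix path below assumes the OSGeo4W layout used in this post, so adjust it to your install.

from qgis.core import QgsApplication

# Prefix path assumes the qgis-dev package under our OSGeo4W root.
QgsApplication.setPrefixPath(r"C:\OSGeo4W3\apps\qgis-dev", True)

qgs = QgsApplication([], False)  # False = headless, no GUI
qgs.initQgis()

print(QgsApplication.showSettings())  # dump the paths QGIS is actually using

qgs.exitQgis()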

Installing pb_tool

Open your customized shell (double-click on pyqgis.cmd to start it) to install pb_tool:

python3 -m pip install pb_tool

Check to see if pb_tool is installed correctly:

C:\Users\gsherman>pb_tool
Usage: pb_tool [OPTIONS] COMMAND [ARGS]...

  Simple Python tool to compile and deploy a QGIS plugin. For help on a
  command use --help after the command: pb_tool deploy --help.

  pb_tool requires a configuration file (default: pb_tool.cfg) that declares
  the files and resources used in your plugin. Plugin Builder 2.6.0 creates
  a config file when you generate a new plugin template.

  See http://g-sherman.github.io/plugin_build_tool for for an example config
  file. You can also use the create command to generate a best-guess config
  file for an existing project, then tweak as needed.

  Bugs and enhancement requests, see:
  https://github.com/g-sherman/plugin_build_tool

Options:
  --help  Show this message and exit.

Commands:
  clean       Remove compiled resource and ui files
  clean_docs  Remove the built HTML help files from the...
  compile     Compile the resource and ui files
  config      Create a config file based on source files in...
  create      Create a new plugin in the current directory...
  dclean      Remove the deployed plugin from the...
  deploy      Deploy the plugin to QGIS plugin directory...
  doc         Build HTML version of the help files using...
  help        Open the pb_tools web page in your default...
  list        List the contents of the configuration file
  translate   Build translations using lrelease.
  update      Check for update to pb_tool
  validate    Check the pb_tool.cfg file for mandatory...
  version     Return the version of pb_tool and exit
  zip         Package the plugin into a zip file suitable...

If you get an error, make sure C:\OSGeo4W3\apps\Python36\Scripts is in your PATH.

More information on using pb_tool is available on the project website.

Working on the Command Line

Just double-click on your pyqgis.cmd script from the Explorer or a desktop shortcut to start a cmd shell. From here you can use Python interactively and also use pb_tool to compile and deploy your plugin for testing.

IDE Example

By adding one line to our pyqgis.cmd script, we can start our IDE with the proper settings to recognize the QGIS libraries:

start "PyCharm aware of Quantum GIS" /B "C:\Program Files (x86)\JetBrains\PyCharm 3.4.1\bin\pycharm.exe" %*

We added the start statement with the path to the IDE (in this case PyCharm). If you save this to something like pycharm.cmd, you can double-click on it to start PyCharm. The same method works for other IDEs, such as PyDev.

Within your IDE settings, point it to use the Python interpreter included with OSGeo4W---typically at: %OSGEO4W_ROOT%\bin\python3.exe. This will make it pick up all the QGIS goodies needed for development, completion, and debugging. In my case OSGEO4W_ROOT is C:\OSGeo4W3, so in the IDE, the path to the correct Python interpreter would be: C:\OSGeo4W3\bin\python3.exe.

Make sure you adjust the paths in your .cmd scripts to match your system and software locations.

Workflow

Here is an example of a workflow you can use once you're setup for development.

Creating a New Plugin

  1. Use the Plugin Builder plugin to create a starting point [1]
  2. Start your pyqgis.cmd shell
  3. Use pb_tool to compile and deploy the plugin (pb_tool deploy will do it all in one pass)
  4. Activate it in QGIS and test it out
  5. Add code, deploy, test, repeat

Working with Existing Plugin Code

The steps are basically the same as creating a new plugin, except we start by using pb_tool to create a new config file:

  1. Start your pyqgis.cmd shell
  2. Change to the directory containing your plugin code
  3. Use pb_tool create to create a config file
  4. Edit pb_tool.cfg to adjust/add things create may have missed
  5. Start at step 3 in Creating a New Plugin and press on

Troubleshooting

Assuming you have things properly installed, trouble usually stems from an incorrect environment.

  • Make sure QGIS runs and the Python console is available and working
  • Check all the paths in your pyqgis.cmd or your custom IDE cmd script
  • Make sure your IDE is using the Python interpreter that comes with OSGeo4W


[1] Plugin Builder 3.x generates a pb_tool config file

February 13, 2018 09:00 AM

Blog 2 Engenheiros

How to organize your maps in ArcGIS and QGIS?

Not everyone is an organized person, and that is no different for those who have spent 5+ years on a university degree.

If I asked you for the material from the Organic Chemistry classes of your undergraduate course, would you be able to hand it over? Or would it be lost in some folder so remote that not even Windows could find it?

If you have organization problems and work with geoprocessing, here are some tips to help you get organized and stop losing your GIS files.

What does your desktop look like? Organized, or a mess? Source: ATRL.

Advantages of Being Organized

By becoming an organized person, you will notice benefits such as spending less time searching for files and correcting mistakes. If you stop wasting time on that, you will have more time for productive activities.

And don't think that good document-filing practices have disappeared just because we no longer use paper.

Aaron Lynn, on the Asian Efficiency site, presents some rules to keep your computer organized. They are:

  • Don't save documents on the Desktop;
  • Limit the creation of new folders;
  • Get used to thinking in hierarchies;
  • Create a folder for completed projects (an archive).

Among these rules, the idea of hierarchy is the most important, because you will have to classify your documents, for example, as Personal or Work; inside the Work folder you may then have further classes, such as Ongoing Projects, Completed Projects and Administrative Documents.

Example of hierarchical organization for a work folder.
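If you want to bootstrap a hierarchy like this, a few lines of Python are enough; the folder names below simply mirror the example above and can be adapted to your own scheme.

import os

base = os.path.expanduser("~/Work")  # adapt to your own root folder
folders = [
    "Ongoing_Projects",
    "Completed_Projects",       # the "archive" for finished work
    "Administrative_Documents",
]

for name in folders:
    os.makedirs(os.path.join(base, name), exist_ok=True)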

Does it sound complicated? Once you get used to being organized, things start to flow easily. Still seems hard? Here are some sites that can help you become more organized:

Organization in GIS

Now let's apply some of these ideas to managing the files of our Geographic Information System. Watch our video and find out how.

Follow our tips and you will be better organized. Organization is a skill we develop over time, so don't give up.

And if you have any organization tips of your own, leave them in the comments.

by Fernando BS at February 13, 2018 07:07 AM

February 12, 2018

Free and Open Source GIS Ramblings

TimeManager 2.5 published

TimeManager 2.5 is quite likely going to be the final TimeManager release for the QGIS 2 series. It comes with a couple of bug fixes and enhancements:

  • Fixed #245: updated help.htm
  • Fixed #240: now hiding unmanageable WFS layers
  • Fixed #220: fixed issues with label size
  • Fixed #194: now exposing additional functions: animation_time_frame_size, animation_time_frame_type, animation_start_datetime, animation_end_datetime

Besides updating the help, I also decided to display it more prominently in the settings dialog (similarly to how the help is displayed in the field calculator or in Processing):

So far, I haven’t started porting to QGIS 3 yet. If you are interested in TimeManager and want to help, please get in touch.

On this note, let me leave you with a couple of animation inspirations from the Twitterverse:

by underdark at February 12, 2018 09:39 PM

gvSIG Team

gvSIG Association will participate in ILoveFS 2018


For the third year in a row, Datalab is organizing IloveFS18 at MediaLab Prado, a space for showing affection and love for Free Software, on (of course) 13 and 14 February.

The gvSIG Association will be there, showing that free GIS can be used in almost any area of our lives. We will run a basic gvSIG workshop applied to journalism, so that everyone can learn Free Geomatics in a practical and fun way.

The sessions will run from 18:45 to 20:30 on Tuesday the 13th and from 18:45 to 20:00 on Wednesday the 14th at the MediaLab Prado facilities.

Of course, the event and the workshops are completely free of charge; you only need to register on the website set up for that purpose.

You can find the full information at http://medialab-prado.es/article/ilovefs18

And you can download the portable version for your operating system at http://www.gvsig.com/es/productos/gvsig-desktop/descargas


by Alonso Morilla at February 12, 2018 12:09 PM

CARTO Inside Blog

ETL into CARTO with ogr2ogr

The default CARTO data importer is a pretty convenient way to quickly get data into the platform, but for enterprises setting up automated updates it has some limitations:

  • there’s no way to define type coercions for CSV data;
  • some common GIS formats like File Geodatabase aren’t supported;
  • the sync facility is “pull” only, so data behind a firewall is inaccessible; and,
  • the sync cannot automatically filter the data before loading it into the system.

Fortunately, there’s a handy command-line tool that can automate many common enterprise data loads: ogr2ogr.

Basic Operation

ogr2ogr has a well-earned reputation for being hard to use. The command-line options are plentiful and terse, the standard documentation page lacks examples, and format-specific documentation is hidden away with the driver documentation.

Shapefile

The basic structure of an ogr2ogr call is “ogr2ogr -f format destination source”. Here’s a simple shapefile load.

ogr2ogr \
  --debug ON \
  --config CARTO_API_KEY abcdefghijabcdefghijabcdefghijabcdefghij \
  -t_srs "EPSG:4326" \
  -nln interesting \
  -f Carto \
  "Carto:pramsey" \
  interesting_things.shp

The parameters are:

  • --debug turns on verbose debugging, which is useful during development to see what’s happening behind the scenes.
  • --config is used to pass generic “configuration parameters”. In this case we pass our CARTO API key so we are allowed to write to the database.
  • -t_srs is the “target spatial reference system”, telling ogr2ogr to convert the spatial coordinates to “EPSG:4326” (WGS84) before writing them to CARTO. The CARTO driver expects inputs in WGS84, so this step is mandatory.
  • -nln is the “new layer name”, so the name of the uploaded table can differ from that of the input file.
  • -f is the format of the destination layer, so for uploads to CARTO, it is always “Carto”.
  • Carto:pramsey is the “destination datasource”, so it’s a CARTO source, in the “pramsey” account. Change this to your user name. (Note for multi-user accounts: you must supply your user name here, not your organization name.)
  • interesting_things.shp is the “source datasource”, which for a shapefile is just the path to the file.

File Geodatabase

Loading a File Geodatabase is almost the same as loading a shapefile, except that a file geodatabase can contain multiple layers, so the conversion must also specify which layer to convert, by adding the source layer name after the data source. You can load multiple layers in one run by providing multiple layer names.

ogr2ogr \
  --debug ON \
  --config CARTO_API_KEY abcdefghijabcdefghijabcdefghijabcdefghij \
  -t_srs "EPSG:4326" \
  -nln cities \
  -f Carto \
  "Carto:pramsey" \
  CountyData.gdb Cities

In this example, we take the “Cities” layer from the county database, and write it into the “cities” table of CARTO. Note that if you do not re-map the layer name to all lower case, you’ll get a mixed case layer in CARTO, which you may not want.

Filtering

You can use OGR on any input data source to filter the data prior to loading. This can be useful for loads of large inputs that are “only the data since time X” or “only the data in this region”, like this:

ogr2ogr \
  --debug ON \
  --config CARTO_API_KEY abcdefghijabcdefghijabcdefghijabcdefghij \
  -t_srs "EPSG:4326" \
  -nln cities \
  -f Carto \
  -sql "SELECT * FROM Cities WHERE state_fips = 53" \
  "Carto:pramsey" \
  CountyData.gdb Cities

Since the filter is just a SQL statement, the filter can both reduce the number of records and also apply transforms to the output on the way: reduce the number of columns, apply some data transformations, anything that is possible using the SQLite dialect of SQL.

Overwrite or Append

By default, ogr2ogr runs in “append” mode (you can force it with the -append flag), so if you run the same translation multiple times, you’ll get rows added into your table. This can be useful for processes that regularly take the most recent entries and copy them into CARTO.

For translations where you want to replace the existing table, use the -overwrite mode, which will drop the existing table, and create a new one in its place.

Because of some limitations in how the OGR CARTO driver handles primary keys, the OGR -update mode does not work correctly.

OGR Virtual Format

As you can see, the command-line complexity of an OGR conversions starts high. The complexity only goes up as advanced features like filtering and arbitrary SQL are added.

To contain the complexity in one location, you can use the OGR “virtual format”, VRT files, to define your data sources. This is handy for managing a library of conversions in source control. Each data source becomes its own VRT file, and the actual OGR commands become smaller.

CSV Type Enforcement

CSV files are convenient ways of passing data, but they are under-defined: they supply column names, but not column types. This forces CSV consumers to do type guessing based on the input data, or to coerce every input to a lowest common denominator string type.

Particularly for repeated and automated uploads, it would be nice to define the column types once beforehand and have them respected in the final CARTO table.

For example, take this tiny CSV file:

longitude,latitude,name,the_date,the_double,the_int,the_int16,the_int_as_str,the_datetime
-120,51,"First Place",2018-01-01,2.3,123456789,1234,00001234,2014-03-04 08:12:23
-121,52,"Second Place",2017-02-02,4.3,423456789,4234,00004234,"2015-05-05 09:15:25"

Using a VRT, we can define a CSV file as a source, and also add the rich metadata needed to support proper type definitions:

<OGRVRTDataSource>
    <OGRVRTLayer name="test_csv">
        <SrcDataSource>/data/exports/test_csv.csv</SrcDataSource>
        <GeometryField encoding="PointFromColumns" x="longitude" y="latitude"/>
        <GeometryType>wkbPoint</GeometryType>
        <LayerSRS>WGS84</LayerSRS>
        <OpenOptions>
            <OOI key="EMPTY_STRING_AS_NULL">YES</OOI>
        </OpenOptions>
        <Field name="name" type="String" nullable="false" />
        <Field name="a_date" type="Date" src="the_date" nullable="true" />
        <Field name="the_double" type="Real" nullable="true" />
        <Field name="the_int" type="Integer" nullable="true" />
        <Field name="the_int16" type="Integer" subtype="Int16" nullable="true" />
        <Field name="the_int_as_str" type="String" nullable="true" />
        <Field name="the_datetime" type="DateTime" nullable="true" />
    </OGRVRTLayer>
</OGRVRTDataSource>

This example has a number of things going on:

  • The <SrcDataSource> is an OGR connection string, as defined in the driver documentation for the format. For a CSV, it’s just the path to a file with a “csv” extension.
  • The <GeometryField> line maps coordinate columns into a point geometry.
  • The <LayerSRS> confirms the coordinates are WGS84. They could also be some planar format, and OGR can reproject them if requested.
  • The <OpenOptions> let us pass one of the many CSV open options.
  • The <Field> type definitions, using the “type” attribute to explicitly define types, including obscure ones like 16-bit integers.
  • Column renaming, in the “a_date” <Field>, maps the source column name “the_date” to “a_date” in the target.
  • Null enforcement, in the “name” <Field>, creates a target column with a NOT NULL constraint.

To execute the translation, we use the VRT as the source argument in the ogr2ogr call.

ogr2ogr \
  --debug ON \
  --config CARTO_API_KEY abcdefghijabcdefghijabcdefghijabcdefghij \
  -t_srs "EPSG:4326" \
  -f Carto \
  "Carto:pramsey" \
  test_csv.vrt

Database-side SQL

Imagine you have an Oracle database with sales information in it, and you want to upload a weekly snapshot of transactions. The database is behind the firewall, and the transactions need to be joined to location data in order to be mapped. How to do it?

With VRT tables and the OGR Oracle driver, it’s just some more configuration during the load step:

<OGRVRTDataSource>
    <OGRVRTLayer name="detroit_locations">
        <SrcDataSource>OCI:scott/password@ora.company.com</SrcDataSource>
        <LayerSRS>EPSG:26917</LayerSRS>
        <GeometryField encoding="PointFromColumns" x="easting" y="northing"/>
        <SrcSQL>
            SELECT sales.sku, sales.amount, sales.tos,
                   locs.latitude, locs.longitude
            FROM sales JOIN locs ON sales.loc_id = locs.loc_id
            WHERE locs.city = 'Detroit'
            AND sales.transaction_date > '2018-01-01'
        </SrcSQL>
    </OGRVRTLayer>
</OGRVRTDataSource>

Some things to note in this example:

  • The <SrcDataSource> holds the Oracle connection string
  • The coordinates are stored in UTM17, in northing/easting columns, but we can still easily map them into a point type for reprojection later.
  • The output data source is actually the result of a join executed on the Oracle database, attributing each sale with the location it was made. We don’t have to ship the tables to CARTO separately.

The ability to run any SQL on the source database is a very powerful tool to ensure that the uploaded data is “just right” before it arrives on the CARTO side for analysis and display.

As before, the VRT is run with a simple execution of the ogr2ogr command line:

ogr2ogr \
  --debug ON \
  --config CARTO_API_KEY abcdefghijabcdefghijabcdefghijabcdefghij \
  -t_srs "EPSG:4326" \
  -f Carto \
  "Carto:pramsey" \
  test_oracle.vrt

Multi-format Sources

Suppose you have a source of attribute information and a source of location information, but they are in different formats in different databases: how to bring them together in CARTO? One way, as usual, would be to upload them separately and join them on the CARTO side with SQL. Another way is to use the power of ogr2ogr and VRT to do the join during the data upload.

For example, imagine having transaction data in a PostgreSQL database, and store locations in a Geodatabase. How to bring them together? Here’s a joined_stores.vrt file that does the join in ogr2ogr:

<OGRVRTDataSource>
    <OGRVRTLayer name="sales_data">
        <SrcDataSource>Pg:dbname=pramsey</SrcDataSource>
        <SrcLayer>sales.sales_data_2017</SrcLayer>
    </OGRVRTLayer>
    <OGRVRTLayer name="stores">
        <SrcDataSource>store_gis.gdb</SrcDataSource>
        <SrcLayer>Stores</SrcLayer>
    </OGRVRTLayer>
    <OGRVRTLayer name="joined">
        <SrcDataSource>joined_stores.vrt</SrcDataSource>
        <SrcSQL dialect="SQLITE">
            SELECT stores.*, sales_data.*
            FROM sales_data
            JOIN stores
            ON sales_data.store_id = stores.store_id
        </SrcSQL>
    </OGRVRTLayer>
</OGRVRTDataSource>

Some things to note:

  • The “joined” layer uses the VRT file itself in the <SrcDataSource> definition!
  • Each <OGRVRTLayer> is a full-fledged VRT layer, so you can do extra processing in them. Apply type definitions to CSV, run complex SQL on a remote database, whatever you want.
  • The join layer uses the “SQLite” dialect, so anything available in SQLite is available to you in the join step.

Almost an ETL

Combining the ability to read and write from multiple formats with the basic functionality of the SQLite engine, and chaining operations through multi-layer VRT layers, ogr2ogr provides the core functions of an “extract-transform-load” engine, in a package that is easy to automate and maintain.

For users with data behind a firewall, who need more complex processing during their loads, or who have data locked in formats the CARTO importer cannot read, ogr2ogr should be an essential tool.
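And because ogr2ogr is a plain command-line tool, scheduled loads are easy to script. Here is a minimal Python wrapper, suitable for cron or Task Scheduler, that re-runs a VRT-based upload; the account name, API key and VRT path are placeholders.

import subprocess

CARTO_USER = "pramsey"                                      # placeholder
CARTO_API_KEY = "abcdefghijabcdefghijabcdefghijabcdefghij"  # placeholder

def load_vrt(vrt_path):
    """Run one ogr2ogr upload, raising if the load fails."""
    subprocess.run([
        "ogr2ogr",
        "--config", "CARTO_API_KEY", CARTO_API_KEY,
        "-t_srs", "EPSG:4326",
        "-overwrite",                 # replace the table on each run
        "-f", "Carto",
        "Carto:" + CARTO_USER,
        vrt_path,
    ], check=True)

load_vrt("test_csv.vrt")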

Getting ogr2ogr

OGR is a subset of the GDAL suite of libraries and tools, so you need to install GDAL to get ogr2ogr.

  • For Linux, look for “gdal” packages in your Linux distribution of choice.
  • For Mac OS X, use the GDAL Framework builds.
  • For Windows, use the MS4W package system or pull the MapServer builds from GISInternals and use the included GDAL binaries.

February 12, 2018 10:39 AM

gvSIG Team

GIS applied to Municipality Management: Module 11 ‘Reprojecting vector layers’

The video of the eleventh module is now available, in which we will show how to reproject vector layers.

Sometimes municipalities need external geographic information for their work, for example cartography published by another administration, such as a regional or national one. That cartography can be in a reference system different from the one the municipality's technicians usually work with. If we don't take the reference systems into account, the two cartographies will not overlap correctly.

The municipality technicians may also use old cartography that is in an obsolete reference system and need it in an updated reference system. For this, it will be necessary to reproject that cartography.

In module 2 you can consult all the information related to the reference systems.

Apart from reprojecting from one reference system to another, it is sometimes necessary to apply a transformation to improve the reprojection. For example, in the case of Spain, to reproject a layer available in ED50 (the official reference system until a few years ago) to ETRS89 (the current official system), it is necessary to apply a grid-based transformation; otherwise we would have a difference of about 7 meters between these layers.
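Outside gvSIG, the same idea can be illustrated with the pyproj Python library; EPSG:23030 (ED50 / UTM zone 30N) and EPSG:25830 (ETRS89 / UTM zone 30N) are the usual codes for peninsular Spain, and accurate results require the official NTv2 grid to be available to the underlying PROJ library.

from pyproj import Transformer

# ED50 / UTM 30N -> ETRS89 / UTM 30N. Without the Spanish NTv2 grid
# installed, PROJ falls back to a simpler transformation, which is
# where the roughly 7 m offset mentioned above comes from.
transformer = Transformer.from_crs("EPSG:23030", "EPSG:25830", always_xy=True)

x, y = transformer.transform(440000.0, 4470000.0)
print(x, y)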

The cartography to follow this video can be downloaded from this link.

Here you have the videotutorial of this new module:


by Mario at February 12, 2018 08:29 AM

February 10, 2018

From GIS to Remote Sensing

Available the User Manual of the Semi-Automatic Classification Plugin v. 6

I've updated the user manual of the Semi-Automatic Classification Plugin (SCP) for the new version 6.
This updated version contains the description of all the tools included in SCP, as well as a brief introduction to remote sensing definitions and the first basic tutorial.
The user manual in English is available at this link. Other languages are also available, although some translations are incomplete.

I'd like to deeply thank all the volunteers that have translated the previous versions of this user manual, and I invite you to help translate this new version into your language.

It is easy to translate the user manual into any language, because it is written in reStructuredText (using Sphinx). Your contribution is therefore fundamental for getting the manual translated into your language.


by Luca Congedo (noreply@blogger.com) at February 10, 2018 03:34 PM

February 09, 2018

Jackie Ng

One MapGuide/FDO build system to rule them all?

Before the bombshell of the death of Autodesk Infrastructure Map Server was dropped, I was putting the final finishing touches of making (pun intended) the MapGuide/FDO build experience on Linux a more pleasant experience.

Namely, I had it with autotools (some may describe it as autohell) as the way to build MapGuide/FDO on Linux. For FDO, this was the original motivation for introducing CMake as an alternative. CMake is a more pleasant way to build software on Linux over autotools because:
  1. CMake builds your sources outside of the source tree. If you're doing development on a SVN working copy this is an absolute boon as it means when it comes time to commit any changes, you don't have to deal with sifting through tons of autotools-generated junk that is left in your actual SVN wc.
  2. CMake builds are faster than their autotools counterpart.
  3. It is much easier to find and consume external libraries with CMake than it is through autotools, which makes build times faster because we can just source system-installed copies of the thirdparty libraries we use, instead of wasting time building these copies (in our internal thirdparty source tree) ourselves. If we are able to use system-installed copies of libraries when building FDO, then we can take advantage of SVN sparse checkouts and skip downloading whole chunks of thirdparty library sources that we never have to build!
Sadly, while this sounds nice in theory, the CMake way to build FDO had fallen into a state of disrepair. My distaste for autotools was sufficient motivation to get the CMake build back into working condition. Several weeks of bashing at various CMakeLists.txt files later, the FDO CMake build was operational again and had several major advantages over the autotools build (in addition to what was already mentioned):
  • We can setup the CMake to generate build configurations for Ninja instead of standard make. A ninja-powered CMake build is faster than standard make ^.
  • On Ubuntu 14.04 LTS (the current Ubuntu version we're targeting), all the thirdparty libraries we use were available for us to apt-get install in the right version ranges, and the CMake build can take advantage of all of them. Not a single internal thirdparty library copy needs to be built!
  • We can easily light up compiler features like AddressSanitizer and linking with the faster gold instead of ld. AddressSanitizer in particular easily helped us catch some issues that have flew under the radar.
  • All of the unit tests are build-able and more importantly ... executable outside the source tree, making it easier to fix up whatever was failing.
Although we now had a functional FDO CMake build, MapGuide was still built on Linux using autotools. So for the same reasons and motivations, I started the process of introducing CMake to the MapGuide build system for Linux.

Unlike FDO, MapGuide still needed some of the internal thirdparty libraries built.
  • DBXML - No ubuntu package available, though we can get it to build against a system-provided version of xerces, so we can at least skip building that part of DBXML.
  • Apache HTTPD - Ubuntu package available, but having MapGuide be able to integrate with an Ubuntu-provided httpd installation was not in the scope of this work, even though this is a nice thing to have.
  • PHP - Same reasons as Apache HTTPD
  • Antigrain Geometry - No ubuntu package available. Also the AGG sources are basically wedded to our Renderers project anyways.
  • DWF Toolkit - No ubuntu package available
  • CS-Map - No ubuntu package available
For everything else, Ubuntu provided the package in the right version ranges for CMake to take advantage of. Another few weeks of bashing various CMakeLists.txt files into shape and we had FDO and MapGuide both build-able on Linux via CMake. To solve the problem of still needing to build some internal thirdparty libs, but still be able to retain the CMake quality of everything is built outside the source tree, some wrapper shell scripts are provided that will copy applicable thirdparty library sources out of the current directory, build them in their copied directories and then invoke CMake and pass in all the required parameters so that it will know where to look for the internal libraries to link against when it comes to build MapGuide proper.

This was also backported to FDO, so that on distros where we do not have all our required thirdparty libraries available, we can selectively build internal copies and be able to find/link the rest, and have CMake take care of all of that for us.

So what's with the title of this post?

Remember when I wrote about how interesting vcpkg was?

What is best used with vcpkg to easily consume thirdparty libraries on Windows? Why, CMake of course! Now, building MapGuide on Windows via CMake is not on the immediate horizon. We'll still be maintaining Visual Studio project files by hand (instead of auto-generating them with CMake) for the immediate future, but can you imagine being able to build FDO and MapGuide on both Windows and Linux with CMake and not having to waste time on huge SVN checkouts and building thirdparty libraries? That future is starting to look really possible now!

For the next major release of MapGuide Open Source, it is my plan to use CMake over autotools as the way to build both MapGuide and FDO on Linux.

^ Well, the ninja-powered CMake build used to be blazing fast until Meltdown and Spectre happened. My dev environment got the OS security patches, and whatever build performance gains were made through ninja and CMake were instantly wiped out; we were back to square one in terms of build time. Still, the autotools build performed even worse after the Meltdown patches, so while CMake still beats the autotools build in terms of build time, in absolute terms we ultimately gained nothing on this front.

Thanks Intel!!!

by Jackie Ng (noreply@blogger.com) at February 09, 2018 04:38 PM

February 08, 2018

gvSIG Team

GIS applied to Municipality Management: Module 10 ‘How to convert cartography from CAD to GIS’

The video of the tenth module is now available, in which we will show how to load and manage cartography in CAD format in gvSIG.

Many municipalities have their geographic information in CAD format, and in many cases there's a single file for the whole municipality that contains all types of information, such as power lines, parcels, drinking water system, sewage system…, each one in a different layer.

This sometimes makes the information difficult to manage; we may even have to divide the municipality into sheets, losing the view of our municipality as a whole. In that case, to run queries, calculations and so on, we would have to open each of the different files.

The advantage of working with a Geographic Information System is that each type of information is kept in a different file (that is the optimal way to work), and we can overlay the different files (the 'layers' of our GIS) in the same View in order to perform analysis, queries and so on.

Another important advantage is that vector layers in a GIS have an associated attribute table, and in the .SHP format, the most common in GIS, we can add all the fields that we want to that attribute table (length, area, owner, release date…). This gives us a great amount of alphanumeric information about the different elements.

Having alphanumeric information makes it easy, for example, to know the areas of all the parcels of our municipality at once; we don't have to select them individually as in a CAD. We can also run queries on them. For example, with a simple expression such as area > 1000 we can query the parcels whose area is larger than 1000 square metres, and they will be selected directly.
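To make that concrete, here is a small sketch of such a query outside any GIS, using the pyshp library against a hypothetical parcels.shp with an AREA field; inside gvSIG you would simply type the expression in the filter dialog:

import shapefile  # pyshp

sf = shapefile.Reader("parcels")         # hypothetical parcels.shp
fields = [f[0] for f in sf.fields[1:]]   # first entry is the deletion flag
area_idx = fields.index("AREA")          # hypothetical area field

large = [rec for rec in sf.records() if rec[area_idx] > 1000]
print(f"{len(large)} parcels larger than 1000 square metres")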

The cartography to follow this video can be downloaded from this link.

Here you have the videotutorial of this new module:

Related posts:

by Mario at February 08, 2018 09:48 AM

February 07, 2018

GIS for Thought

QGIS Multi Ring Buffer Plugin Version 1

After about three years of existence, I am releasing version 1 of the Multi Ring Buffer Plugin.

QGIS plugin repository.

With version 1 comes a few new features:

  • Ability to choose the layer to buffer after launching the plugin
  • Ability to supply buffer distances as a comma-separated text string
  • Ability to make non-doughnut buffers
Doughnut buffer:

Non-doughnut buffer (regular):
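For readers curious about what a doughnut buffer actually is geometrically: it is the difference between two concentric regular buffers. A quick illustration with the Shapely library (not the plugin's own code; the distances are made up):

from shapely.geometry import Point

centre = Point(0, 0)
distances = [5, 10, 15]  # e.g. parsed from a comma-separated string

previous = None
for d in distances:
    disc = centre.buffer(d)  # regular buffer: a filled disc
    # doughnut ring: the disc minus the previous, smaller disc
    ring = disc.difference(previous) if previous else disc
    print(d, round(ring.area, 1))
    previous = disc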

    by Heikki Vesanto at February 07, 2018 11:23 PM

    gvSIG Team

    Adding new colour tables in gvSIG Desktop, more options to represent our data

The colour tables are used in gvSIG Desktop to represent both raster data (for example, a Digital Elevation Model) and vector data (they can be applied in legends such as unique values or heat maps). By default gvSIG Desktop has a small catalog of colour tables, but most users don't know that it's very easy to add new ones. Do you want to see how easy it is?

First of all you have to know that the colour tables used by gvSIG Desktop are stored as xml files in the 'colortable' folder, inside the 'gvSIG' folder. So, if you delete some of these xml files, those tables will no longer be available in gvSIG Desktop.

    Let’s see now how we can add new colour tables in gvSIG Desktop.

To download new colour tables we will go to this website:

    http://soliton.vm.bytemark.co.uk/pub/cpt-city/

As you will see, that website contains hundreds of colour tables, many of them applicable to the world of cartography, which can be downloaded in a wide variety of formats, including the 'ggr' (GIMP gradient) format supported by gvSIG Desktop. We will download some of the colour tables offered in that 'ggr' format.

We launch the 'Colour table' tool in gvSIG Desktop and in the new window we press the 'Import library' button… then we select the ggr files that we have downloaded, and they become available right away.
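If you want a quick sanity check of a downloaded file before importing it, a few lines of Python are enough. This sketch assumes the common .ggr layout (a "GIMP Gradient" header line, a "Name:" line and a segment count); the file name is hypothetical:

def inspect_ggr(path):
    with open(path) as f:
        if f.readline().strip() != "GIMP Gradient":
            raise ValueError(path + " does not look like a ggr file")
        name_line = f.readline().strip()  # usually "Name: <gradient name>"
        name = name_line.split(":", 1)[1].strip() if ":" in name_line else "(unnamed)"
        segments = int(f.readline().strip())  # number of gradient segments
    return name, segments

name, segments = inspect_ggr("example.ggr")  # hypothetical file name
print("Gradient '%s' with %d segments" % (name, segments))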

    Finally, in the video we will see how they can be applied to raster and vector data once imported.

    And if you want to download ALL the colour tables in ‘ggr’ format and in a zipped file… they are available here.

    by Mario at February 07, 2018 04:30 PM

    gvSIG Team

    Webinar on “gvSIG Suite: open source software for geographic information management in agriculture – Technology and case studies” (February 15)

GODAN and the gvSIG Association invite you to join the webinar about “gvSIG Suite: open source software for geographic information management in agriculture – Technology and case studies“, on February 15th at 2PM GMT.

This webinar will deal with the gvSIG Suite, the whole catalog of open source software solutions offered by the gvSIG Association, and case studies of the different products in forestry and agriculture.

    With free registration, this event is appropriate for all users interested in knowing how to work with an open source Geographic Information System in agriculture and forestry sectors.

    We will speak about gvSIG Desktop, the Desktop GIS to manage geographic information and make vector and raster analysis, gvSIG Mobile, for field data gathering with mobile devices, and gvSIG Online, an integrated platform for Spatial Data Infrastructure, to create geoportals in an easy way and manage cartography between different departments in an organization.

    Attendees will be able to interact with the speakers by sending their comments and questions through chat.

    Registrations are available from: https://app.webinarjam.net/register/24718/18244a0afc

    The webinar details are:

    by Mario at February 07, 2018 12:16 PM

    gvSIG Team

Adding new colour tables to gvSIG Desktop, expanding the options to represent our data

Colour tables are used in gvSIG Desktop both to represent raster data (for example, a Digital Terrain Model) and to represent vector data (they can be applied in legends such as unique values or heat maps). By default gvSIG Desktop has a small catalog of colour tables. What most users don't know is that it's very easy to add new colour tables. Do you want to see how easy it is?

First of all you should know that the colour tables used by gvSIG Desktop are stored as xml files in the 'colortable' folder, inside the 'gvSIG' folder. So, for example, if you delete some of these xml files, those tables will no longer be available in gvSIG Desktop.

In the demo video that accompanies this post we have deleted all of them except the one called 'Default', which you should always be careful not to delete. As shown in the video, after deleting the xml files only one colour table remains that we can apply to our raster and vector layers.

Now let's look at the really interesting part, which is not how to delete existing colour tables but how to add new ones.

To download new colour tables we will use this website:

http://soliton.vm.bytemark.co.uk/pub/cpt-city/

As you will see, it contains hundreds of colour tables, many of them applicable to the world of cartography and downloadable in a wide variety of formats, including some such as 'ggr' (GIMP gradient) that are supported by gvSIG Desktop. From the hundreds of colour tables the page offers, we will download some of them in this 'ggr' format.

We launch the 'Colour table' tool in gvSIG Desktop and in the window that appears we press the 'Import library' button… then we select the ggr files we have downloaded and voilà! They are now available.

Finally, in the demo video we will see how, once imported, they can be applied to both raster and vector data.

And if you want to download ALL the colour tables in ggr format in a single compressed file… they are available here.

    by Alvaro at February 07, 2018 11:16 AM

    February 06, 2018

    gvSIG Team

    Sentilo and gvSIG: Agreement to collaborate

    We are pleased to announce that Sentilo and gvSIG communities have reached an agreement to collaborate closely in order to make it easier for users, partners and developers of both communities to deploy an integrated sensor platform and a Geographic Information System, both based on open source.

Sentilo is an open source sensor and actuator platform designed to fit in the Smart City architecture of any city that looks for openness and easy interoperability. It is the piece of the architecture that isolates the applications developed to exploit the information “generated by the city” from the layer of sensors deployed across the city to collect and broadcast this information.

    It’s built, used, and supported by an active and diverse community of cities and companies that believe that using open standards and free software is the first smart decision a Smart City should take. In order to avoid vertical solutions, Sentilo is designed as a cross platform with the objective of sharing information between heterogeneous systems and to easily integrate legacy applications.

    The collaboration agreement will provide mutual priority support among and for members of the two communities who wish to integrate Sentilo and gvSIG in their projects.

    Both gvSIG and Sentilo were awarded in the Sharing & Reuse Conference 2017, organized by the European Commission, in the “Cross Border” category (gvSIG won the first prize and Sentilo won the third prize).

    by Alvaro at February 06, 2018 12:37 PM

    February 05, 2018

    gvSIG Team

Changing the 'look and feel' of gvSIG Desktop in a couple of steps

I have occasionally been asked how to change the out-of-the-box appearance of gvSIG Desktop. The truth is that there have always been some options, although unknown to most users. gvSIG Desktop 2.4 extends them, as new icon sets can now be created and used.

Here is a small example of how to change the default 'style' of gvSIG Desktop 2.4 in a couple of steps.

First we will change the Java theme that gvSIG uses. To do this, go to the 'Preferences' button and select 'General' and then 'Appearance' from the options tree. Then select the theme called 'Texture'.

After restarting, you will see something similar to the image.

Second step: through the 'Add-ons manager', option 'Installation from URL', install the icon set called 'TreCC 22×22'. Although it tells us that a restart is needed, in this case it is not. Go again to 'Preferences', select 'General' and then 'Icon set' from the options tree, and choose the 'TreCC 22×22' set that we have just installed.

Restart gvSIG Desktop and you will see something similar to the following video:

    by Alvaro at February 05, 2018 07:02 PM

    gvSIG Team

    GIS applied to Municipality Management: Module 9 ‘Hyperlink’

    The video of the ninth module is now available, in which we will show how to work with hyperlinks in gvSIG.

This tool allows us to associate images, text files, pdf files, folders, web pages… with the geometries of our vector layer. For that, the attribute table of the shapefile must contain one or more fields holding the file or folder paths or the web page URLs.

When we create the field for the paths it's important to give it a large size (for example 200 characters), because if we link to a very long path and the field is shorter, the path will be truncated and the linked element will not be found.
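A small pure-Python sketch of the kind of check that saves trouble here, with hypothetical table and column names: before loading the paths, verify that none exceeds the field length and that the targets exist.

import csv
import os

FIELD_LENGTH = 200  # size given to the path field in the attribute table

with open("hyperlinks.csv", newline="") as f:  # hypothetical source table
    for row in csv.DictReader(f):
        path = row["LINK"]                     # hypothetical path column
        if len(path) > FIELD_LENGTH:
            print("Will be cut (%d chars): %s" % (len(path), path))
        elif not path.startswith("http") and not os.path.exists(path):
            print("Target not found: %s" % path)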

    This functionality will be very useful to show reviews, pictures, reports … of our geographical entities.

    The cartography to follow this video can be downloaded from this link.

    Here you have the videotutorial of this new module:

    Related posts:

    by Mario at February 05, 2018 11:35 AM

    February 04, 2018

    From GIS to Remote Sensing

    Basic tutorial 1: Land Cover Classification of Landsat Images

    This is a basic tutorial about the use of the new Semi-Automatic Classification Plugin version 6 for QGIS for the classification of a multispectral image. It is recommended to read the Brief Introduction to Remote Sensing before this tutorial.
    The purpose of the classification is to identify the following land cover classes:
    1. Water;
    2. Built-up;
    3. Vegetation;
    4. Bare soil.
    The study area of this tutorial is Greenbelt (Maryland, USA) which is the site of NASA’s Goddard Space Flight Center (the institution that will lead the development of the future Landsat 9 flight segment).


    by Luca Congedo (noreply@blogger.com) at February 04, 2018 11:41 PM

    February 02, 2018


    gvSIG Team

    gvSIG 2.4 RC4 is available to download now

    gvSIG 2.4 RC4, the fourth gvSIG 2.4 Release Candidate is now available to download from the gvSIG website.

    With the release of this new build we encourage you to test it and send us any errors and suggestions in the users mailing list.

The main new features of this version have been published on the gvSIG blog during the last weeks. Among them are the possibility to download data from OpenStreetMap and access to the H2 database administration tools from gvSIG Desktop.

    Thanks for your collaboration.

    by Mario at February 02, 2018 09:23 AM

    February 01, 2018

    GeoSolutions

    New release of MapStore with Charts and Revised Filtering

    Release with widgets

    Dear Reader,

we are pleased to announce the release 2018.01.00 of MapStore, our flagship Open Source WebGIS product. The full list of changes for this release can be found here, but the most interesting additions are the following:

    • Charts: you can now add charts to your maps for data analysis.
    • New Simplified Query Builder with Cross Layer Filtering: support for cross layer filtering from the query builder.
    • Various bug fixes and performance improvements.
    More on charts

With this release we added an important data analysis tool that can enhance your maps with useful data. MapStore now allows you to quickly generate charts (pie, line, bar, gauge) from a layer's data. Using GeoServer's powerful services, you can aggregate data and add it to the map. You can play with this map to get a feeling for these new functionalities.

You can create a chart, and add it to the map, directly from the Table of Contents, as shown below.

Every chart can be configured to be in sync with the map viewport, which means the data will be filtered using the current map viewport. The chart will then update every time you pan and zoom the map to reflect the data that falls within it.

    [caption id="attachment_3770" align="aligncenter" width="1024"]Preview of Charts functionality Chart sync with Maps[/caption] You can even provide additional filters using the query builder to refine the data that powers your charts (see below). More work is planned on the Charts functionality to provide additional chart's types and enhance the current ones. We also aim to add more elements that go beyond pure charts hence we decided to call these elements widgets, to account for future additions. Revised Query Builder and Cross Layer Filtering You will notice a new look and feel for the query builder. [caption id="attachment_3848" align="aligncenter" width="425"]querybuilder1 New look and feel for query builder[/caption] [caption id="attachment_3849" align="alignright" width="300"]Filter all roads that intersect New York's Central Park Filter all roads that intersect New York's Central Park[/caption]

In addition, you can now filter data using the geometries from another layer of the map with the brand new "Layer Filter". Select the layer you want to use as a filter and the geometric operation to match data; you can also add an attribute filter to the filter layer.

This greatly increases the analysis possibilities. You can simply find the roads that intersect New York's Central Park (like below) or build more complex filters combining cross-layer, spatial and attribute filters.

    This feature can also be used to filter the data for the charts, so you can generate charts directly from the data filtered using the cross layer.

Advanced filtering, data aggregation and charts make MapStore a useful tool for data analysis and visualization that goes beyond pure maps. For future releases we plan to enhance these functionalities with new widgets and new analysis features.

    News for developers/custom projects

Developers will notice we changed the build files and documented the application further to support the following functionalities:

    • JS/CSS versioning: JavaScript and CSS are now loaded by version, so if you're doing hard client-side caching you no longer need to empty the browser cache to see changes. Learn how to migrate your project here.
    • Configurable and Documented I18N: now you can configure the languages you want in configuration file. Learn How

    You can also refer to the MapStore developer documentation to learn more about this feature.

    Future work

    For the next releases we plan to (in sparse order):

    • Improve existing charts and add new widgets (text, counters and statistics, dashboard...)
    • Integration with GeoNode
    • Integrated styler for GeoServer
    • Support for layers with TIME
    • Support for more general map annotations, beyond simple markers
    Stay tuned for additional news on the next features!

If you are interested in learning about how we can help you achieve your goals with open source products like GeoServer, MapStore, GeoNode and GeoNetwork through our Enterprise Support Services and GeoServer Deployment Warranty offerings, feel free to contact us!

    The GeoSolutions team,

    by Lorenzo Natali at February 01, 2018 02:58 PM


    Markus Neteler

    GRASS GIS 7.4.0 released

We are pleased to announce the GRASS GIS 7.4.0 release.

    GRASS GIS 7.4.0: Wildfire in Australia, seen by Sentinel-2B

    What’s new in a nutshell

    After a bit more than one year of development the new update release GRASS GIS 7.4.0 is available. It provides more than 480 stability fixes and improvements compared to the previous stable version 7.2. An overview of the new features in the 7.4 release series is available at New Features in GRASS GIS 7.4.

    Efforts have concentrated on making the user experience even better, providing many small, but useful additional functionalities to modules and further improving the graphical user interface. Users can now directly download pre-packaged demo data locations in the GUI startup window. Several modules were migrated from addons to the core GRASS GIS package and the suite of tools for ortho-rectification was re-implemented in the new GRASS 7 GUI style. In order to support the treatment of massive datasets, new compression algorithms were introduced and NULL (no-data) raster files are now also compressed by default. For a detailed overview, see the list of new features. As a stable release series, 7.4.x enjoys long-term support.
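As a small aside, the new compression support can also be driven from Python inside a GRASS 7.4 session. A sketch (the map name 'elevation' comes from the sample dataset, and, if memory serves, the -p flag of r.compress prints compression information):

import os
import grass.script as gs

# Select the new ZSTD compressor for newly written rasters.
os.environ["GRASS_COMPRESSOR"] = "ZSTD"

# Recompress an existing raster, then print its compression info.
gs.run_command("r.compress", map="elevation")
gs.run_command("r.compress", map="elevation", flags="p")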

    Binaries/Installer download:

    Source code download:

    More details:

    See also our detailed announcement:

    About GRASS GIS

    The Geographic Resources Analysis Support System (https://grass.osgeo.org/), commonly referred to as GRASS GIS, is an Open Source Geographic Information System providing powerful raster, vector and geospatial processing capabilities in a single integrated software suite. GRASS GIS includes tools for spatial modeling, visualization of raster and vector data, management and analysis of geospatial data, and the processing of satellite and aerial imagery. It also provides the capability to produce sophisticated presentation graphics and hardcopy maps. GRASS GIS has been translated into about twenty languages and supports a huge array of data formats. It can be used either as a stand-alone application or as backend for other software packages such as QGIS and R geostatistics. It is distributed freely under the terms of the GNU General Public License (GPL). GRASS GIS is a founding member of the Open Source Geospatial Foundation (OSGeo).

    The GRASS Development Team, Feb 2018


    by neteler at February 01, 2018 02:09 PM

    gvSIG Team

New gvSIG seminar at the Higher Technical School of Agricultural and Forestry Engineering (ETSIAM) of the University of Castilla-La Mancha (Albacete)

Next Thursday, February 8th, a seminar about gvSIG will be held at the Higher Technical School of Agricultural and Forestry Engineering (ETSIAM) of the University of Castilla-La Mancha, in Albacete.

The seminar is free of charge, with limited places, and you can register by sending an e-mail to jornadagvsigab2018@gmail.com, indicating your name, surname, e-mail, organization, and which talk and/or workshop(s) you wish to attend. The program is as follows:

    • 9:00-10:00: Talk "Introduction to the gvSIG Suite"
    • 10:00-11:30: Workshop "Introduction to gvSIG Desktop"
    • 11:30-13:00: Workshop "Introduction to scripting with gvSIG Desktop"
    • 13:00-14:30: Workshop "Geostatistics with gvSIG and R"

We look forward to seeing you!

    by Mario at February 01, 2018 01:02 PM

    gvSIG Team

    GIS applied to Municipality Management: Module 8.2 ‘Creation of point layers from tables (Event layers)’

The second video of the eighth module is now available, in which we continue showing how to create point layers from a table. In this case we will create an event layer, that is, a point shapefile from a table with coordinates.

For example, the table can contain geographic coordinates obtained from a topographic survey with GPS.

This functionality is another way to generate cartography in a town hall, in this case when we only have the coordinates of the points.
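The idea behind an event layer can be sketched in a few lines of Python, here using the pyshp library and a hypothetical points.csv with X, Y and NAME columns:

import csv
import shapefile  # pyshp

w = shapefile.Writer("events", shapeType=shapefile.POINT)
w.field("NAME", "C", size=50)

with open("points.csv", newline="") as f:  # hypothetical coordinate table
    for row in csv.DictReader(f):
        w.point(float(row["X"]), float(row["Y"]))  # one point per table row
        w.record(row["NAME"])

w.close()  # writes events.shp / events.shx / events.dbf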

    The cartography to follow this video can be downloaded from this link.

    Here you have the second videotutorial of this eighth module:

    Related posts:

    by Mario at February 01, 2018 12:33 PM

    January 31, 2018

    Fernando Quadro

GeoUsage – Analyzing OGC service usage

GeoUsage is a free and open source tool developed by Tom Kralidis to support the use case of metrics and analysis of OWS (OGC Web Services) usage.

Would you like to know how many users are accessing your services? Which layers and projections are the most popular? How much bandwidth is being used? What is the volume of data downloads?

Developed in Python, GeoUsage has no strong opinions beyond the specific analysis of the logs of the web server hosting the OWS services. GeoUsage is "composable": scheduling, log management and storage of results are entirely up to the user. That said, a simple and pleasant command line interface is available to view the results.

Source: Tommy's Scratchpad

    by Fernando Quadro at January 31, 2018 10:30 AM

    OTB Team

    Orfeo ToolBox 6.4 is out!

We are happy to announce that Orfeo ToolBox 6.4 is out! As usual, ready-to-use binary packages are available for Windows, Linux and Mac OS X: OTB 6.4. You can also check out the source directly with git:

git clone https://git@git.orfeo-toolbox.org/git/otb.git OTB -b release-6.4

We welcome your feedback and requests, and encourage you to join the OTB […]

    by Manuel Grizonnet at January 31, 2018 09:46 AM

    January 30, 2018

    Tom Kralidis

    GeoUsage: Log Analyzer for OGC Web Services

    Continuing on the UNIX philosophy, another little tool to help with your OWS workflows. GeoUsage attempts to support the use case of metrics and analysis of OWS service usage.  How many users are hitting your OWS?  Which layers/projections are the most popular?  How much bandwidth?  How many maps vs. data downloads? A pure Python package, […]

    by tomkralidis at January 30, 2018 02:17 PM

    January 29, 2018

    gvSIG Team

    GIS applied to Municipality Management: Module 8.1 ‘Creation of point layers from tables (Geocoding: Points from a table with addresses)’

The first video of the eighth module is now available, in which we will show how to create point layers from a table. This first case deals with geocoding: creating a point shapefile from a table with addresses.

Apart from addresses, that table could also contain characteristic elements such as museums, monuments, sports facilities…, that is, any place that we could find in search engines such as Google Maps or OpenStreetMap…, since gvSIG uses these search engines to create the layer.

This functionality is very useful in a municipality: if we have several tables with this type of element or their addresses, we don't have to digitize them one by one. Thanks to this geoprocess we can do it automatically, and we only need a quality control phase at the end to check that the results are correct (since, as we know, these search engines also contain errors). Besides, we can ask the geoprocess to report which elements or addresses haven't been found, so that we can digitize them in another way.
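As a rough illustration of what happens under the hood (this is not gvSIG's actual implementation), here is a sketch that geocodes a hypothetical addresses.csv against OpenStreetMap's Nominatim service:

import csv
import json
import time
import urllib.parse
import urllib.request

def geocode(address):
    url = ("https://nominatim.openstreetmap.org/search?format=json&q="
           + urllib.parse.quote(address))
    # Nominatim requires an identifying User-Agent header
    req = urllib.request.Request(url, headers={"User-Agent": "geocoding-demo"})
    with urllib.request.urlopen(req) as resp:
        results = json.load(resp)
    return (float(results[0]["lon"]), float(results[0]["lat"])) if results else None

with open("addresses.csv", newline="") as f:   # hypothetical input table
    for row in csv.DictReader(f):
        print(row["ADDRESS"], geocode(row["ADDRESS"]) or "NOT FOUND")
        time.sleep(1)  # usage policy asks for at most 1 request per second

The "NOT FOUND" rows correspond exactly to the elements the geoprocess reports as unfound, which we would then digitize in another way.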

We will also show how to access Google Street View from gvSIG, which is very useful for office work, since certain queries can save us from having to visit the place in person.

    The cartography to follow this video can be downloaded from this link.

    Here you have the first videotutorial of this eighth module:

    Related posts:

    by Mario at January 29, 2018 04:06 PM

    January 28, 2018

    Free and Open Source GIS Ramblings

    Porting Processing scripts to QGIS3

    I’ll start with some tech talk first. Feel free to jump to the usage example further down if you are here for the edge bundling plugin.

    As you certainly know, QGIS 3 brings a lot of improvements and under-the-hood changes. One of those changes affects all Python scripts. They need to be updated to Python 3 and the new PyQGIS API. (See the official migration guide for details.)

    To get ready for the big 3.0 release, I’ve started porting my Processing tools. The edge bundling script is my first candidate for porting to QGIS 3. I also wanted to use this opportunity to “upgrade” from a simple script to a plugin that integrates into Processing.

    I used Alexander Bruy’s “prepair for Processing” plugin as a template but you can also find an example template in your Processing folder. (On my system, it is located in C:\OSGeo4W64\apps\qgis-dev\python\plugins\processing\algs\exampleprovider.)

    Since I didn’t want to miss the advantages of a good IDE, I set up PyCharm as described by Heikki Vesanto. This will give you code completion for Python 3 and PyQGIS which is very helpful for refactoring and porting. (I also tried Eclipse with PyDev but if you don’t have a favorite IDE yet, I find PyCharm easier to install and configure.)

    My PyCharm startup script qgis3_pycharm.bat is a copy of C:\OSGeo4W64\bin\python-qgis-dev.bat with the last line altered to start PyCharm:

    @echo off
    call "%~dp0\o4w_env.bat"
    call qt5_env.bat
    call py3_env.bat
@echo off
    path %OSGEO4W_ROOT%\apps\qgis-dev\bin;%PATH%
    set QGIS_PREFIX_PATH=%OSGEO4W_ROOT:\=/%/apps/qgis-dev
    set GDAL_FILENAME_IS_UTF8=YES
    rem Set VSI cache to be used as buffer, see #6448
    set VSI_CACHE=TRUE
    set VSI_CACHE_SIZE=1000000
    set QT_PLUGIN_PATH=%OSGEO4W_ROOT%\apps\qgis-dev\qtplugins;%OSGEO4W_ROOT%\apps\qt5\plugins
    set PYTHONPATH=%OSGEO4W_ROOT%\apps\qgis-dev\python;%PYTHONPATH%
    start /d "C:\Program Files\JetBrains\PyCharm\bin\" pycharm64.exe
    

    In PyCharm File | Settings, I configured the OSGeo4W Python 3.6 interpreter and added qgis-dev and the plugin folder to its path:

    With this setup done, we can go back to the code.

    I first resolved all occurrences of import * in my script to follow good coding practices. For example:

    from qgis.core import *
    

    became

from qgis.core import QgsFeature, QgsPoint, QgsVector, QgsGeometry, QgsField, QGis
    

    in this PR.

I didn't even run the 2to3 script that is provided to make porting from Python 2 to Python 3 easier. Since the edge bundling code is mostly Numpy, there were almost no changes necessary. The only head-scratching moment was when Numpy refused to add a map() return value to an array: in Python 3, map() returns an iterator rather than a list, which Numpy cannot add directly. So (with the help of Stackoverflow, of course) I added a workaround to convert the map() return value to an array as well:

    flocal_x = map(forcecalcx, subtr_x, subtr_y, distance)
    electrostaticforces_x[e_idx, :] += np.array(list(flocal_x))
    

    The biggest change related to Processing is that the VectorWriter has been replaced by a QgsFeatureSink. It’s defined as a parameter of the edgebundling QgsProcessingAlgorithm:

self.addParameter(QgsProcessingParameterFeatureSink(
    self.OUTPUT,
    self.tr("Bundled edges"),
    QgsProcessing.TypeVectorLine))
    

    And when the algorithm is run, the sink is filled with the output features:

(sink, dest_id) = self.parameterAsSink(
    parameters, self.OUTPUT, context,
    source.fields(), source.wkbType(), source.sourceCrs())
    
    # code that creates features
    
    sink.addFeature(feat, QgsFeatureSink.FastInsert)
    

    The ported plugin is available on Github.

    The edge bundling plugin in action

I haven't uploaded the plugin to the official plugin repository yet, but you can already download it from Github and give it a try:

    For this example, I’m using taxi pick-up and drop-off data provided by the NYC Taxi & Limousine Commission. I downloaded the January 2017 green taxi data and extracted all trips for the 1st of January. Then I created origin-destination (OD) lines using the QGIS virtual layer feature:

    To get an interesting subset of the data, I extracted only those OD flows that cross the East River and have a count of at least 5 taxis:

    Now the data is ready for bundling.

    If you have installed the edge bundling plugin, the force-directed edge bundling algorithm should be available in the Processing toolbox. The UI of the edge bundling algorithm looks pretty much the same as it did for the QGIS 2 Processing script:

Since this is a small dataset with only 148 OD flows, the edge bundling process is pretty quick and we can explore the results:

    Beyond this core edge bundling algorithm, the repository also contains two more scripts that still need to be ported. They include dependencies on sklearn, so it will be interesting to see how straightforward it is to convert them.

    by underdark at January 28, 2018 05:10 PM

    January 26, 2018

    Just van den Broecke

    Emit #2 – On Air Quality Sensors

    This is Emit #2, in a series of blog-posts around the Smart Emission Platform, an Open Source software component framework that facilitates the acquisition, processing and (OGC web-API) unlocking of spatiotemporal sensor-data, mainly for Air Quality. In Emit #1, the big picture of the platform was sketched. Subsequent Emits will detail technical aspects of the SE Platform. “Follow the data” will be the main trail loosely adhered to.

In this Emit I will talk a bit about sensors, as the data flow originates there. Mind, this is not my area of expertise, but much of the SE platform, in particular data processing (ETL), is built around the challenges of dealing with (cheap) sensors for Air Quality.

    Previously I posted about meteo sensors and weather stations (part 1, part 2, part 3): how to connect a weather station to a Raspberry Pi and publish weather data “to the cloud”.  Now this was relatively easy, due to the availability of:

So without any programming, you can be "in business" quite quickly with your personal weather station. In addition, meteo sensors (temperature, humidity, pressure, wind, rain) generally produce relatively clean, interpretable data. From a cheap sensor like the $9.95 DHT22 Temperature Humidity Sensor, it is relatively straightforward to read out temperature and humidity via an Arduino board or Raspberry Pi. Personal Weather Stations provide even more internal software, so most meteo data comes out in well-known units (Fahrenheit/Celsius, HectoPascal, etc).
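As a minimal sketch of just how simple this is (assuming a Raspberry Pi with the widely used Adafruit_DHT Python library and a DHT22 wired to GPIO pin 4):

import Adafruit_DHT

SENSOR = Adafruit_DHT.DHT22
PIN = 4  # GPIO pin the sensor's data line is wired to

# read_retry keeps polling until the sensor answers (or gives up)
humidity, temperature = Adafruit_DHT.read_retry(SENSOR, PIN)
if humidity is not None and temperature is not None:
    print("Temperature: %.1f C  Humidity: %.1f %%" % (temperature, humidity))
else:
    print("Sensor read failed, try again")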

Now this is a whole different story for (cheap) Air Quality sensors. It begins with the fact that Air Quality indicators like Carbon Monoxide/Dioxide (CO, CO2), Nitrogen Monoxide/Dioxide (NO, NO2), Particulate Matter (PM, i.e. fine dust) and Ozone (O3) can be measured in many ways, "with both simple chemical and physical methods and with more sophisticated electronic techniques" (from www.enviropedia.org.uk). While techniques for measuring weather data have evolved over maybe hundreds of years, measuring Air Quality is relatively new, mostly within the domain of chemistry, and, when it comes to sensors, very expensive.

    Recently, this has changed. Not only are governmental environmental agencies facing lowering budgets, but more importantly, a growing number of civilian initiatives want to “take things in their own hand” with respect to measuring Air Quality. As a result more and more affordable/cheap sensors and creative solutions like the iSpex (measure PM on your iPhone) are entering the market. But given the (chemical) complexity, how reliable are these sensors? Is the data that they produce readily usable? Like with Celsius to Fahrenheit, is it a matter of providing some simple formula?

IMHO unfortunately not, but things are getting better as time passes. It also depends on the chemical component you want to measure. For example, Nitrogen Dioxide (NO2) and Ozone (O3) appear to be much harder to measure than CO/CO2. Particulate Matter is a whole story by itself, as one deals with, well, "dust" in many shapes and especially sizes (PM10, PM2.5, PM1).

There is ample research in the quest for cheap AQ sensors: their limitations, reliabilities, particularities. Within the Smart Emission project I am working with RIVM, the Dutch National Institute for Public Health and the Environment, and the European Union Joint Research Centre (JRC), who both did extensive research on cheap AQ sensors. There is much more out there, but the main message of this Emit is that "measuring AQ has far more challenges than measuring weather data". One of the main conclusions, again IMHO, is that, yes, it is possible to use cheap AQ sensors, but one has to do Calibration. Below are some links if you are interested in the state of RIVM and EU JRC research:

Though the latter EU JRC report may be a tough read, it is one of the most detailed and concise reports on the application of low-cost AQ sensors and, I mention it again, different techniques for Calibration.
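To give a feel for what calibration means in its simplest form, here is a toy sketch with made-up numbers: fit a linear correction of raw sensor readings against a co-located reference instrument, then apply it to new readings. The techniques discussed in the JRC report go much further (multivariate models that account for temperature and humidity, for instance):

import numpy as np

raw = np.array([0.21, 0.35, 0.50, 0.66, 0.80])        # made-up sensor output (V)
reference = np.array([12.0, 19.5, 28.0, 37.0, 44.5])  # made-up reference (ug/m3)

slope, intercept = np.polyfit(raw, reference, 1)      # least-squares line fit

def calibrate(value):
    return slope * value + intercept

print("calibrated(0.42) = %.1f ug/m3" % calibrate(0.42))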

So back to the Smart Emission Platform: what sensors are we using currently? The SE Platform started with the Smart Emission Nijmegen project, where participating citizens of the City of Nijmegen would each have their own sensor station publishing data to the SE Platform.

Our partner in the project, Intemo, develops a sensor station, Josene, nicknamed 'Jose', that measures not only AQ but also sound levels (Noise Pollution) and many other indicators, like light.

In the course of the project I was fortunate to follow a workshop at EU JRC for their amazing Open Hardware/Software product AirSensEUR. On the spot, each of us assembled a complete ASE and connected it to standard web services like SOS. The ASE Open Hardware approach also allows it to embed a multitude of sensor types and brands. The workshop had participants from the major national environmental agencies in Europe. In fact, RIVM is now deploying and testing about 18 AirSensEURs. In the coming year I have the task of deploying five ASEs within The Netherlands. Two of them are already humming at my desk for testing.

    AirSensEUR #2 at my desk

    Describing AirSensEUR would require a full post by itself. Quoting: “AirSensEUR is an open framework focused on air quality monitoring using low cost sensors. The project started on 2014 from a group of passionate researchers and engineers. The framework is composed by a set of electronic boards, firmware, and several software applications.”

    EU JRC AirSensEURs

So currently (January 2018) the SE Platform supports both the Josene/Jose and AirSensEUR sensor devices.

The Air Quality sensor data coming out of these devices still requires cleanup and Calibration. This is part of the data handling within the SE platform, the subject of one of the upcoming Emits.

    This post was meant to give you a taste of the challenges around using (cheap) sensors for Air Quality and introduce the two sensor devices (Josene, AirSensEUR) currently used/supported by the Smart Emission Platform. Many details are still to be uncovered. These will be subjects of upcoming Emits.

     

    by Just van den Broecke at January 26, 2018 11:36 PM

    gvSIG Team

    3rd gvSIG Festival: A new edition of the virtual conference about gvSIG

The third edition of the gvSIG Festival, the virtual conference about the gvSIG project, will be held on March 21st and 22nd, 2018.

Just like in the last edition, a period for sending proposals of projects carried out with the application has been opened, so that users who can't present their projects at any of the existing face-to-face conferences can do so from their own city.

This event is free of charge and completely online, through the webinar service of the gvSIG Association. This has the advantage of allowing speakers from different countries and presentations in different languages, given by users and developers from any part of the world.

If you have carried out any project with gvSIG and you want to present it at the gvSIG Festival, you can send a summary explaining the project to the following e-mail address: conference-contact@gvsig.com. The summary must be no longer than 300 words, written in Spanish or English, and must indicate the title and the language of the presentation.

    Once the program is configured it will be published at the event website, and registrations will be opened.

We look forward to your proposals!

    by Mario at January 26, 2018 11:30 AM