Welcome to Planet OSGeo

January 17, 2020

by Fernando Quadro at January 17, 2020 07:07 PM

Overview of my professional life in 2019. Inspired by the concise bullet-point style of Tom Kralidis’ Cheers to 2018, on which I based my own Cheers to 2018. Highlights of living and working in the Open Source Geospatial and OSGeo(.nl|.org) world in 2019. Organized by theme instead of by month.

TL;DR. My 2019 highlight was providing the GeoPython Workshop (“Doing Geospatial with Python”) at FOSS4G in Bucharest. A really great team effort: first, remote collaboration to get the content done; then, on the spot, despite network failures, providing a hopefully inspiring workshop on modern GeoPython. Kudos to Tom, Angelos, Francesco, Jachym, Luis and Jorge!

The GeoPython Workshop Team at FOSS4G 2019 – Bucharest.

My second highlight was joining the pygeoapi project. I had a love/hate relationship with WFS, but the new OpenAPI direction in OGC and the great team behind the pygeoapi project made me want to be part of this. By the way, pygeoapi has just (Jan. 2020) received OGC Compliance Certification and Reference Implementation status for the OGC API – Features specification.

Geospatial Cloud Services

Main focus. Seriously moving into hosting Geospatial Cloud Services, both as a source of income and to support/strengthen underlying open source projects. Warning: shameless ads below.

  • Throughout 2019 – expanded map5.nl, a subscription service for Dutch topographic, historical, and embellished hill-shade and aerial maps that I started hosting in 2015.

  • January – launched GeoQoS.com, a Cloud-hosted GeoHealthCheck (GHC) service on a subscription basis. GHC is an uptime and QoS monitor for (OGC) web services. Customers get their own GHC instance. GeoQoS.com saves the burden of self-hosting GHC. Truly, I can’t do without GHC for any of my geospatial web services (like map5.nl). Developed with Python Django and Stripe, deployed with Ansible (can’t do without it) and of course Docker.

  • Dec+ into 2020 – something big – more to be announced.

Contract Work

As my focus is more and more on providing “Geospatial Cloud Services” (see above), I did not do much contract work in 2019, though I am always open to offers!

Open Source Contributions

Continuous work as a contributor on several Open Source projects. Apart from some GitLab projects, you can best find/follow me on GitHub.

GitHub Contributions 2019

More Contributions – Handy Docker Images

To support many of the Cloud services and Open Source projects, I developed several handy Docker Images, also available from my DockerHub.

  • docker-awstats – AWStats in Docker; an oldie, but a very effective web-stats tool. Deploy multiple instances in a single Docker container. Highly configurable, e.g. also for Traefik access logs.
  • docker-jmeter – Apache JMeter wrapped in Docker.

Not too many Docker Image downloads overall, but look at docker-jmeter: over 1 million! Glad to give to The Commons.

OSGeo.nl

I have been involved in the OSGeo Dutch Local Chapter, OSGeo.nl, since its establishment in 2011, now as chair of the board. Thanks to our wonderful volunteers, we were able to organize several events.

Conferences – Attended

  • August 25-31 – FOSS4G – Bucharest.
  • Sept 20 – Sensemakers Amsterdam
  • Oct 2 – Geo Gebruikers festival by Geonovum – Amersfoort

Hackathons & Code Sprints

I always love going to hackathons, from software to hardware hacking with the Sensemakers Amsterdam.

Talks & Workshops – Provided

At several of the above events I contributed presentations and workshops. Most of my slides can be found at slideshare.net/justb4. Below are some links.

Logo Design

I don’t consider myself a designer, but with a little help from online logo-creation platforms and feedback from my co-workers, I created the following in 2019:

Resolutions 2020

  • More effort into Wegue project
  • Further expanding hosted Geospatial Cloud Services
  • Improve on and provide the GeoPython Workshop
  • Whatever comes around.

by Just van den Broecke at January 17, 2020 01:38 AM

January 14, 2020

Tips about the Semi-Automatic Classification Plugin for QGIS

SCP can display pixel values of vegetation indices (NDVI, EVI) as you move the cursor, after activating the ROI pointer.
Also, it is possible to define a custom expression to be calculated. You can enter any mathematical expression referring to the bands defined in the Band set as bandset#bBANDNUMBER (e.g. bandset#b1 * bandset#b2).
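As an illustration of what such a band expression computes, here is the standard NDVI formula applied per pixel to two small band arrays with numpy. The band numbers and values are hypothetical; SCP evaluates the `bandset#bN` expression on the real raster bands.

```python
import numpy as np

# Hypothetical red and near-infrared bands (e.g. bandset#b3 and bandset#b4)
red = np.array([[0.10, 0.20], [0.30, 0.40]])
nir = np.array([[0.50, 0.60], [0.30, 0.80]])

# NDVI = (NIR - Red) / (NIR + Red), computed per pixel
ndvi = (nir - red) / (nir + red)

print(ndvi)
```

The same arithmetic is what SCP performs when you enter `(bandset#b4 - bandset#b3) / (bandset#b4 + bandset#b3)` as a custom expression.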



For any comment or question, join the Facebook group about the Semi-Automatic Classification Plugin.

by Luca Congedo (noreply@blogger.com) at January 14, 2020 07:00 AM

January 12, 2020

This post is a follow-up to the draft template for exploring movement data I wrote about in my previous post. Specifically, I want to address step 4: Exploring patterns in trajectory and event data.

The patterns I want to explore in this post are clusters of trip origins. The case study presented here is an extension of the MovingPandas ship data analysis notebook.

The analysis consists of 4 steps:

  1. Splitting continuous GPS tracks into individual trips
  2. Extracting trip origins (start locations)
  3. Clustering trip origins
  4. Exploring clusters

Since I have already removed AIS records with a speed over ground (SOG) value of zero from the dataset, we can use the split_by_observation_gap() function to split the continuous observations into individual trips. Trips that are shorter than 100 meters are automatically discarded as irrelevant clutter:

from datetime import timedelta

traj_collection.min_length = 100
trips = traj_collection.split_by_observation_gap(timedelta(minutes=5))
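The idea behind the gap-based split can be sketched in plain Python without MovingPandas: start a new trip whenever consecutive records are further apart than the gap threshold. The `split_by_gap` helper and the timestamps below are illustrative, not the MovingPandas API.

```python
from datetime import datetime, timedelta

def split_by_gap(timestamps, gap):
    """Split a sorted list of timestamps into trips wherever
    consecutive records are further apart than `gap`."""
    trips, current = [], [timestamps[0]]
    for prev, curr in zip(timestamps, timestamps[1:]):
        if curr - prev > gap:
            trips.append(current)
            current = []
        current.append(curr)
    trips.append(current)
    return trips

records = [datetime(2017, 7, 1, 10, 0), datetime(2017, 7, 1, 10, 2),
           datetime(2017, 7, 1, 10, 20),  # 18 min gap -> starts a new trip
           datetime(2017, 7, 1, 10, 21)]
trips = split_by_gap(records, timedelta(minutes=5))
print(len(trips))  # 2 trips
```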

The split operation results in 302 individual trips:

Passenger vessel trajectories are blue, high-speed craft green, tankers red, and cargo vessels orange. Other vessel trajectories are gray.

To extract trip origins, we can use the get_start_locations() function. The list of column names defines which columns are carried over from the trajectory’s GeoDataFrame to the origins GeoDataFrame:

 
origins = trips.get_start_locations(['SOG', 'ShipType']) 

The following density-based clustering step is based on a blog post by Geoff Boeing and uses scikit-learn’s DBSCAN implementation:

import numpy as np
import pandas as pd
from sklearn.cluster import DBSCAN
from geopy.distance import great_circle
from shapely.geometry import MultiPoint, Point

origins['lat'] = origins.geometry.y
origins['lon'] = origins.geometry.x
matrix = origins[['lat', 'lon']].to_numpy()  # as_matrix() was removed in pandas 1.0

kms_per_radian = 6371.0088
epsilon = 0.1 / kms_per_radian  # 100 m search radius, expressed in radians

db = DBSCAN(eps=epsilon, min_samples=1, algorithm='ball_tree', metric='haversine').fit(np.radians(matrix))
cluster_labels = db.labels_
num_clusters = len(set(cluster_labels))
clusters = pd.Series([matrix[cluster_labels == n] for n in range(num_clusters)])
print('Number of clusters: {}'.format(num_clusters))

Resulting in 69 clusters.
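The eps value works because scikit-learn's haversine metric expects coordinates in radians and returns distances in radians, so 100 m corresponds to 0.1 km divided by the Earth's radius. A quick pure-Python check of that conversion (the two points are illustrative, roughly one degree of latitude apart):

```python
import math

kms_per_radian = 6371.0088

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = p2 - p1
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * math.asin(math.sqrt(a)) * kms_per_radian

# One degree of latitude is about 111 km, far above the 0.1 km eps,
# so these two points would never land in the same cluster.
d = haversine_km(57.0, 10.0, 58.0, 10.0)
print(round(d))
```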

Finally, we can add the cluster labels to the origins GeoDataFrame and plot the result:

origins['cluster'] = cluster_labels

To analyze the clusters, we can compute summary statistics of the trip origins assigned to each cluster. For example, we compute a representative (center-most) point, count the number of trips, and compute the mean speed (SOG) value:

 
from shapely.geometry import Point  # in addition to MultiPoint imported above

def get_centermost_point(cluster):
    """Return the cluster point closest to the cluster centroid."""
    centroid = (MultiPoint(cluster).centroid.x, MultiPoint(cluster).centroid.y)
    centermost_point = min(cluster, key=lambda point: great_circle(point, centroid).m)
    return Point(centermost_point[1], centermost_point[0])  # (lon, lat)

centermost_points = clusters.map(get_centermost_point)

The largest cluster with a low mean speed (indicating a docking or anchoring location) is cluster 29, which contains 43 trips from passenger vessels, high-speed craft, and an undefined vessel:

To explore the overall cluster pattern, we can plot the clusters colored by speed and scaled by the number of trips:

Besides cluster 29, this visualization reveals multiple smaller origin clusters with low speeds that indicate different docking locations in the analysis area.

Cluster locations with high speeds on the other hand indicate locations where vessels enter the analysis area. In a next step, it might be interesting to compute flows between clusters to gain insights about connections and travel times.

It’s worth noting that AIS data contains additional information, such as vessel status, that could be used to extract docking or anchoring locations. However, the workflow presented here is more generally applicable to any movement data tracks that can be split into meaningful trips.

For the full interactive ship data analysis tutorial visit https://mybinder.org/v2/gh/anitagraser/movingpandas/master


This post is part of a series. Read more about movement data in GIS.

by underdark at January 12, 2020 06:57 PM

January 10, 2020

A few months ago, we proposed to the QGIS grant program to make improvements to the snap cache in QGIS. The community vote selected our project which was funded by QGIS.org. Developments are now mostly finished.

In short, snapping is crucial for editing geospatial features. It is the only way to ensure they are topologically related, i.e., that connected vertices have exactly the same coordinates even though manual digitizing on screen is imprecise by nature. Snapping correctly requires that QGIS keep in memory an indexed cache of the geometries to snap to. And maintaining this cache when data is modified, sometimes by another user or by database logic, can be a real challenge. That is exactly what this work addresses.

The proposal was divided into two different tasks:

  • Manage circular dependencies
  • Relax the snap cache index build

Manage circular data dependencies

Data dependencies

Data dependency is an existing feature that allows you to configure QGIS to reload layers (and their snapping cache) when a layer is modified.

It is useful when you store your data in a database and you set up triggers to maintain consistency between the different tables of your data model.

For instance, say you have topological information consisting of lines and nodes. Nodes are part of lines and lines go through nodes. You move a node in QGIS and save your modifications to the database. In order to keep the data consistent, a trigger updates the geometry of the line going through the modified node.

Node 2 is modified, Line 1 is updated accordingly

QGIS, as a database client, has no way of knowing that the line layer currently displayed in the canvas needs to be refreshed after the trigger. The map canvas will be up to date, because QGIS fetches data for display without any caching system, but the snapping cache is not, and you’ll end up with ghost snapping highlights.

Snapping highlights (light red) differ from real line (orange)

Defining a dependency between nodes and lines layers tells QGIS that it has to refresh the line layer when a node is modified.

Dependencies configuration: Lines layer will be refreshed whenever Nodes layer is modified

It also has to work the other way: modifying a line should update the nodes to ensure they are still on the line.

Circular data dependencies

So here we are: lines depend on nodes, which depend on lines, which depend on nodes, which…

That’s what circular dependencies are about. This specific behavior was previously forbidden and needed a special way of dealing with it. Thanks to this recent development, it is now possible.

It’s also possible to add a layer as one of its own dependencies. This helps with cases where modifying one feature can lead to the modification of another feature in the same layer (to keep consistency in road networks, for instance).

Road 2 is modified, Road 1 is updated accordingly

This feature is available in the next QGIS LTR version 3.10.

Relax the snapping cache index build

If you work in QGIS with huge projects displaying a lot of vector data, and you enable snapping while editing these data, you have probably already met this dialog:

Snap indexing dialog

This dialog informs you that data are currently being indexed so you can snap on them while editing feature geometry. For big projects, this dialog can stay up for a really long time. Let’s work on speeding it up!

What’s a snap index?

Let’s say you want to move a line and snap it onto another one. While you drag your line with the mouse, QGIS looks for an existing geometry beneath the mouse cursor (with a certain pixel tolerance) every time you move the mouse. Without a spatial index, QGIS would have to go through every geometry in your layer to check whether it lies beneath the cursor position. This would be very inefficient.

In order to prevent this, QGIS keeps an index in which vector data are stored in a way that lets it quickly find out what geometry is beneath the mouse cursor. Building this data structure takes time, and that is what the progress dialog is about.
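The effect of such an index can be illustrated with a toy grid index in Python. This is a deliberate simplification (QGIS uses a proper spatial index, not this structure): instead of scanning every point, a lookup only inspects the grid cell under the cursor.

```python
from collections import defaultdict

CELL = 10  # cell size in map units

def build_index(points):
    """Bucket points by grid cell so lookups only scan one cell."""
    index = defaultdict(list)
    for p in points:
        index[(p[0] // CELL, p[1] // CELL)].append(p)
    return index

def snap_candidates(index, cursor):
    """Return only the points in the cell under the cursor."""
    return index[(cursor[0] // CELL, cursor[1] // CELL)]

points = [(1, 1), (2, 3), (55, 7), (58, 9)]
index = build_index(points)
print(snap_candidates(index, (56, 8)))  # [(55, 7), (58, 9)]
```

A real implementation would also check neighboring cells to honor the snapping tolerance; the point here is simply that the candidate set shrinks from all features to a handful.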

Firstly: Parallelize snap index build

If you want to be able to snap on all layers in your project, QGIS has to build one snap index per layer. This operation used to be done sequentially, meaning that if you have, for instance, 20 layers and each index build lasts approximately 3 seconds, the whole build lasts 1 minute. We modified QGIS so that index building is done in parallel. As a result, the total index building time could theoretically drop to 3 seconds!

4 layers snap index being built in parallel

However, parallel operations are limited by the number of CPU cores of your machine: if you have 4 cores (a Core i7, for instance), the total time will be at most 4 times faster than the sequential build (15 seconds in our example).
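The scheduling idea can be sketched with Python's concurrent.futures (a stand-in build function and hypothetical layer names, not QGIS code): sequential builds add up, while parallel builds are bounded by the slowest one, up to the core count.

```python
from concurrent.futures import ThreadPoolExecutor

def build_snap_index(layer):
    """Stand-in for the expensive per-layer snap index build."""
    return f"index({layer})"

layers = [f"layer_{i}" for i in range(20)]

# Build all layer indexes in parallel instead of one after another
with ThreadPoolExecutor() as pool:
    indexes = list(pool.map(build_snap_index, layers))

print(len(indexes))  # one index per layer
```

In QGIS itself the builds run in C++ worker threads; this only illustrates why the total time approaches that of the slowest single layer.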

Secondly: relax the snap build

For big projects, parallelizing index building is not enough and still takes too much time. Furthermore, to reduce snap index building time, an existing optimisation builds the spatial index only for a specific area of interest (determined according to the displayed area and layer size). As a consequence, when you’re done waiting for an index build and you move the map or zoom in/out, you could trigger another snap index build and have to wait again.

So, the idea was to avoid waiting at all. The snap index is now built whenever needed (when you first enable snapping, when you move or zoom), but users don’t have to wait for the build to finish and can continue what they were doing (creating features, moving…). Snapping highlights are missing while the index is being built and appear gradually as soon as it is ready. That’s what we call the relaxing mode.

No waiting dialog, snapping highlights appears as soon as snap index is ready

This feature has been merged into current QGIS master and will be present in the future QGIS 3.12 release. We keep working on it in order to make it more stable and efficient.

What’s next

We’ll continue to improve this feature in the coming days. If you have the chance to test it and encounter issues, please let us know on the QGIS tracker. If you think a feature is missing or just want to know more about QGIS, feel free to contact us at infos+data@oslandia.com. And please have a look at our support offering for QGIS.

Many thanks to QGIS grant program for funding these new features. Thanks also to all the people involved in reviewing the code and helping to better understand the existing mechanism.

 

by Julien Cabieces at January 10, 2020 09:39 AM

January 09, 2020


Dear Reader,

After almost 10 years at the Open Geospatial Consortium (OGC) leading the Innovation and Compliance Programs, I am very excited to join GeoSolutions and support the development of open source solutions in the US and the Americas (see the blog post about the opening of the US office). In my years at OGC, I was able to see not only the rise of new open standards within OGC’s agile prototyping approach, but also the rise and uptake of open source implementations of OGC standards. I remember that when I joined OGC, there were only two reference implementations: GeoServer for WMS and WFS, and GeoNetwork for CSW.

GeoSolutions is a company I have interacted with for many years. They have been selected as participants in several OGC initiatives. For example, in the 2019 Vector Tiles Pilot, the new OGC API (then called WFS 3.0) was advanced to serve tiles. GeoSolutions demonstrated WFS 3.0 and WMTS extensions with GeoServer and MapStore. The demonstration included an advanced analysis of different styling approaches that is shaping the current OGC standards. The Pilot page provides the videos; I’m highlighting only two:

  • GeoSolutions VTP Extension: WMTS GeoServer Service with MapStore Client – https://youtu.be/LL-spqq-LNU
  • GeoSolutions Vector Tiles Part One – WMTS and WFS 3 Vector Tiles at Work – https://youtu.be/icyRjVUWKao

In 2017, in OGC Testbed 13, test suites and implementations were developed in support of the efforts of the US National Geospatial-Intelligence Agency (NGA) to advance profiles of the National System for Geospatial Intelligence (NSG). GeoSolutions was selected to implement the WFS 2.0 and WMTS 1.0 profiles with GeoServer, which speaks to GeoSolutions’ ability to adapt to particular needs and challenges in the US.

Among my core beliefs are openness, open standards, and open source software. Geospatial software will continue to impact our daily lives and solve complex problems for industry and governments. I’m excited to be part of the GeoSolutions team and I’m looking forward to supporting the US and the Americas with robust open source solutions.

If you are interested in learning how we can help you achieve your goals with our open source products GeoServer, MapStore, GeoNode, and GeoNetwork through our Enterprise Support Services and GeoServer Subscription offerings, feel free to contact us!

Luis,

by Luis Bermudez at January 09, 2020 05:15 PM

In the tail end of part 21, I did say that the next release (0.13) will have some actual new features besides some important under-the-hood updates, and in this post we'll be talking about one such feature.

Way back in the 0.11 release, a new component was introduced to allow adding external WMS layers to your current map.


I knew at that point in time that this component had much more room for improvement as the wide array of formats that OpenLayers can support means we can add more than just WMS layers.

For the upcoming 0.13 release we've re-designed and re-built this component to allow adding a wider array of external layers to your map. When you bring up the external layer manager, you are now greeted with a choice of what kind of layer to add:
  1. A local file-based layer
  2. A remote URL-based layer
With a local file-based layer, simply drag a supported file type into the specified drop zone, or click the drop zone itself to be prompted for a file to load. Once loaded, specify the projection of the data source (the default is EPSG:4326) and the layer will be added to the map.


Several things to note with local file-based layers.

Firstly, the files you pick are not uploaded to any server. The "upload" UI is merely a means to obtain an HTML5 FileReader so we can load the vector features client-side into the map directly.

Secondly, the following file formats are supported in this mode:
  • GeoJSON/TopoJSON
  • KML
  • GPX
  • IGC
Finally, the available projection list in the following dropdown:


is made up of the built-in projections provided by the proj4js library we're using, along with any additional projections registered while discovering map definitions or registered up-front.

The other mode is remote URL-based layers, which is how you add WMS and WFS layers through this re-designed UI.


UI for adding WFS layers is near identical, so it is not shown here.

Loaded layers can be viewed and managed through the Manage Layers tab, which itself has been re-designed as well.


Every layer in this list:
  • Can have their visibility toggled via the switch
  • Can have their opacity changed via the slider
  • Can have their draw-order (relative to the MapGuide map) changed through the up/down arrow buttons
  • Can be removed later on via the button with the trash icon
For WMS layers that support the LegendURL sub-capability, you can click the info button to show the WMS legend inline.


For added vector layers, you can zoom to the extents of that layer.


And for vector layers that aren't KML files, you can edit the style for these features. For KML files, the style is intrinsically part of the file itself, so the style is effectively "locked in" for such layers and is not editable as a result.


So in closing, the revised external layer manager now supports adding the following external data sources to your current map:
  • Local files: GeoJSON/TopoJSON, KML, GPX, IGC
  • Remote sources: WMS, WFS
We could actually add many more formats, but this revised external layer manager already took a significant toll on our production bundle size, and adding more formats would've blown it up to unacceptable levels. I have determined this list of formats to be "good enough" given our bundle size constraints.

Thanks to the storybook support, you can also see a live demo of this external layer manager here.

by Jackie Ng (noreply@blogger.com) at January 09, 2020 03:13 PM

From January 27 to 30, 2020, the second edition of the in-person course on gvSIG Desktop and gvSIG Mobile applied to Archaeology will be held in Valencia (Spain). The course is organized by the Colegio de Doctores y Licenciados en Filosofía y Letras y en Ciencias and taught by the gvSIG Association.

Geographic Information Systems have become very useful tools for archaeologists, and thanks to the gvSIG Suite, a complete suite of open source GIS solutions, more and more professionals in the field are adopting gvSIG as a working tool.

The course will present the main features of gvSIG Desktop through practical exercises related to the management of archaeological sites and areas, Assets of Cultural Interest, and so on. The final part of the course will cover the gvSIG Mobile application, including a field data collection exercise.

At the end, students will obtain the official gvSIG User certificate from the gvSIG Association.

More information on prices, syllabus, and registration at the following link.

by Mario at January 09, 2020 08:53 AM

January 07, 2020

Tips about the Semi-Automatic Classification Plugin for QGIS

The refinement of classification usually involves the creation of vector files and then rasterization processing. In SCP it is possible to quickly edit a raster by manually drawing ROIs and defining pixel values.



For any comment or question, join the Facebook group about the Semi-Automatic Classification Plugin.

by Luca Congedo (noreply@blogger.com) at January 07, 2020 07:00 AM

January 06, 2020

With the QGIS Grant Programme 2018, we were able to support seven proposals aimed at improving the QGIS project, including software, infrastructure, and documentation. These are the reports on the work that has been done within the individual projects:

  1. Increased stability for Processing GUI and External Providers (Nyall Dawson)
    Many bugs in 3rd party providers have been fixed and lots of new unit tests added. The GUI includes new C++ classes and a  new framework that landed in QGIS 3.4. For more details see Nyall’s report on the mailing list.
  2. OSGeo4W updates (Jürgen Fischer)
    The updates performed in this project were essential to bring QGIS 3.x to Windows.
  3. Resurrect Processing “R” Provider (Nyall Dawson)
    The R provider has been implemented as a provider plugin. The plugin’s beta phase was first announced in Nov 2018 and the plugin is now available for general use.
  4. OpenCL support for processing core algs (Alessandro Pasotti)
    The following processing algorithms have been ported: slope, aspect, hillshade, and ruggedness. Even though it was not in scope for this QEP, the hillshade renderer has also been optimized. For more details see qgis/QGIS#7451.
  5. QGIS server OGC compliant and certified for WFS (Régis Haubourg)
    This project fixed numerous issues to get closer to the goal of getting QGIS Server WFS certified. However, the project ran out of resources before the goal could be achieved. For details see the current WFS tests status page.
  6. Charts and drawings on attribute forms (Matthias Kuhn)
    For details read “The new QML widgets in QGIS” and see qgis/QGIS#7801.
  7. Update of QGIS Training Manual (Matteo Ghetta)
    This project hasn’t been completed yet.

Thank you to everyone who participated and made this round of grants a great success, and thank you to all our sponsors and donors who make this initiative possible!

by underdark at January 06, 2020 01:44 PM

January 05, 2020

First, I wish you a very happy new year!
This post is about a minor update to the Semi-Automatic Classification Plugin (SCP) for QGIS, version 6.4.1, in which I have fixed several issues.
The changelog follows:
-manual ROI creation allowed without band set definition
-fixed edit raster tool to update raster view
-fixed the tools Accuracy and Cross Classification
-added GDAL test
-fixed report tool to hide nodata


Read more »

by Luca Congedo (noreply@blogger.com) at January 05, 2020 12:22 PM

January 04, 2020

So, you have a table and you need to modify a column's type. The problem arises when the column is populated and the type change is incompatible, for example from string to integer. So how can we update the type and convert the existing values to the new type?

Don't worry, SQL is powerful enough to let you make the change in one sentence with ALTER TABLE.

In the next example, we are supposing:

  • we have a table the_table
  • it has a string column status with values (happy | sad)
  • we want to change the column's type from string to integer and set values as happy=1 | sad=2

Yes, I know using strings for status instead of creating a table with the allowed statuses is a bad decision, but it's needed for this example 😄

Easy, just run:

ALTER TABLE the_table
  ALTER COLUMN status TYPE INT
    USING (CASE WHEN status = 'happy' THEN 1 ELSE 2 END);

A note: if the column status has a default value, for example happy, you first need to drop that default constraint, update the type, and then set the new default value:

BEGIN;

ALTER TABLE the_table ALTER COLUMN status DROP DEFAULT;
ALTER TABLE the_table
  ALTER COLUMN status TYPE INT
    USING (CASE WHEN status = 'happy' THEN 1 ELSE 2 END);
ALTER TABLE the_table ALTER COLUMN status SET DEFAULT 1;

COMMIT;

In this case, because we are executing three statements, we run them within a transaction to avoid problems if one of them fails.

January 04, 2020 03:01 PM

January 03, 2020

Exploring new datasets can be challenging. Addressing this challenge, there is a whole field called exploratory data analysis that focuses on exploring datasets, often with visual methods.

On exploring movement data, there’s a comprehensive book on the visual analysis of movement by Andrienko et al. (2013) and a host of papers, such as the recent state of the art summary by Andrienko et al. (2017).

However, while the literature does provide concepts, methods, and example applications, these have not yet translated into readily available tools for analysts to use in their daily work. To fill this gap, I’m working on a template for movement data exploration implemented in Python using MovingPandas. The proposed workflow consists of five main steps:

  1. Establishing an overview by visualizing raw input data records
  2. Putting records in context by exploring information from consecutive movement data records (such as: time between records, speed, and direction)
  3. Extracting trajectories & events by dividing the raw continuous tracks into individual trajectories and/or events
  4. Exploring patterns in trajectory and event data by looking at groups of the trajectories or events
  5. Analyzing outliers by looking at potential outliers and how they may challenge preconceived assumptions about the dataset characteristics
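Step 2, for example, derives context from consecutive records. A minimal pure-Python sketch of that idea (made-up coordinates in projected meters, Euclidean distance; the real template uses MovingPandas on geographic data):

```python
from datetime import datetime
from math import hypot

# Made-up (x, y, t) records in projected meters
records = [(0, 0, datetime(2020, 1, 1, 12, 0, 0)),
           (30, 40, datetime(2020, 1, 1, 12, 0, 10)),
           (30, 100, datetime(2020, 1, 1, 12, 0, 40))]

# Time between records and speed for each consecutive pair
segments = []
for (x1, y1, t1), (x2, y2, t2) in zip(records, records[1:]):
    dt = (t2 - t1).total_seconds()
    dist = hypot(x2 - x1, y2 - y1)
    segments.append({"seconds": dt, "speed_m_s": dist / dt})

print(segments)
```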

To ensure a reproducible workflow, I’m designing the template as a Jupyter notebook. It combines spatial and non-spatial plots using the awesome hvPlot library:

This notebook is a work-in-progress and you can follow its development at http://exploration.movingpandas.org. Your feedback is most welcome!

 

References

  • Andrienko G, Andrienko N, Bak P, Keim D, Wrobel S (2013) Visual analytics of movement. Springer Science & Business Media.
  • Andrienko G, Andrienko N, Chen W, Maciejewski R, Zhao Y (2017) Visual Analytics of Mobility and Transportation: State of the Art and Further Research Directions. IEEE Transactions on Intelligent Transportation Systems 18(8):2232–2249, DOI 10.1109/TITS.2017.2683539

by underdark at January 03, 2020 03:58 PM

January 02, 2020

Following on from 2018, a bit of a changeup this year. Inspired by numerous ‘decade in review’ posts/tweets, here’s my attempt below, in no particular order, while trying to keep my offline life brief: I got married. I read long ago that being with the right person makes a huge difference and I couldn’t agree more […]

by tomkralidis at January 02, 2020 05:23 PM

We start the year by sharing a freely downloadable book I came across which, although a few years old, is still fully relevant. It shows how Geographic Information Systems, and gvSIG in particular, can be applied to teaching urban planning and land-use planning in civil engineering.

Download the book by clicking here.

Here is the book's abstract:

The teaching of urban planning has, from its origins, been closely tied to the use of different cartographies (topography, land use, communication infrastructure, drainage and water bodies, urban settlements, etc.) needed to carry out territorial analyses and diagnoses as well as the proposals that make up Territorial Planning Plans. The widespread use of these thematic maps ‘on paper’ has in recent years given way to their continuous digitization and vectorization, generating a vast amount of cartographic information available on various web servers and databases. However, accessing this information is not without difficulties, as much of it sits within the websites of official bodies or requires specific requests that students are generally unfamiliar with. Moreover, using this information requires specific software that allows not only its visualization but also its processing, with countless applications in engineering in general and urban planning in particular: digital terrain models, land-occupation and urban-growth models, accessibility analyses, hydrological models, etc. For all these reasons, it is now essential that Civil Engineering students know, on the one hand, the most important sources of digital cartographic information and how to access them, and on the other, some basic notions of the programs that allow their processing, in order to improve their learning and make their work more efficient, since the use of ICT saves considerable time that students can use to produce better work.
Thus, the main objective of the project is to provide students with the ICT needed to access and process digital cartographic information, improving their performance in the courses of the Department of Urban and Regional Planning, as well as to give them the tools used today in territorial planning, which they will use throughout their professional lives. Given the applicability of Geographic Information Systems (GIS) to territorial problems, in recent years the Department has started using the free software gvSIG in the courses of the Civil Engineering degree. The results have been very favorable, but it is necessary to normalize and regularize these applications, so this Teaching Innovation project makes it possible to definitively establish this new teaching methodology, undoubtedly necessary to adapt to current needs in territorial planning.

by Alvaro at January 02, 2020 10:18 AM

December 31, 2019

December 20, 2019

We are pleased to announce the release of GeoServer 2.15.4 with downloads (zip|war), documentation (html) and extensions.

This is a maintenance release and is a recommended update for existing installations. This is the last scheduled 2.15 maintenance release and we recommend planning your upgrade to 2.16.

This release is made in conjunction with GeoTools 21.3 and GeoWebCache 1.15.3. Thanks to everyone who contributed to this release.

For more information see the GeoServer 2.15.4 release notes.

Improvements and Fixes

This release includes a number of improvements, including:

  • Improve ncWMS extension to support time list and time range.
  • Tile truncation fixed to use parameters correctly.
  • Upgrade of XStream and Jackson libraries.

A number of fixes are also present:

  • Fix cascading WMTS use of credentials
  • ncWMS extension now respects no-data values.
  • Importer fixed to respect shapefile charset encoding.

About GeoServer 2.15 Series

Additional information on the 2.15 series:

by jgarnett at December 20, 2019 02:47 PM

GeoTools 21.4 released: The GeoTools team is happy to announce the release of GeoTools 21.4: geotools-21.4-bin.zip, geotools-21.4-doc.zip, geotools-21.4-project.zip, geotools-21.4-userguide.zip, maven repository. This release is a maintenance release provided for production systems. This is the last 21-series maintenance release scheduled, and we encourage migrating to the 22 series at this time.

by Jody Garnett (noreply@blogger.com) at December 20, 2019 02:42 PM

December 18, 2019

December 17, 2019

The previous blog series title is too long to type out, so I've shortened this blog series to be just called "mapguide-react-layout dev diary". It's much easier to type :)

So for this post, I'll be outlining some of the long overdue updates that have been done for the next (0.13) release (and why these updates have been held off for so long).

Due to the long gap between the 0.11 and 0.12 releases, I didn't want to rock the boat with some of the disruptive changes I had in the pipeline, choosing to postpone this work until 0.12 had settled down. Now that 0.12 is mostly stable (surely 8 bug fix releases should attest to that!), we can focus on the disruptive work originally planned, putting off the hiatus I had planned for this project.

Updating OpenLayers (finally!)

For the longest time, mapguide-react-layout was using OpenLayers 4.6.5. The reason we were stuck on this version was because this was the last version of OpenLayers where I was able to automatically generate a full-API-surface TypeScript d.ts definition file from the OpenLayers sources, through a JSDoc plugin that I built for this very purpose. This d.ts file provides the "intellisense" when using OpenLayers and type-checking so that we were actually using the OpenLayers API the way it was documented.

Up until the 4.6.5 release, this JSDoc plugin did its job very well. After this release, OpenLayers completely changed its module format for version 5.x onwards, breaking my ability to have updated d.ts files. Without up-to-date d.ts files I was not ready to update, and given the expansiveness of the OpenLayers API surface, it was going to be a lot of work to generate this file properly for newer versions of OpenLayers.

What originally brought me to write this JSDoc plugin myself was that the TypeScript compiler supported vanilla JavaScript (through the --allowJs flag), but for the longest time this did not work in combination with the --declaration flag that allows the TypeScript compiler to generate d.ts files from vanilla JS sources that were properly annotated with JSDoc.

When I heard that this long-standing limitation was finally going to be addressed in TypeScript 3.7, I took that as the cue to see if we could upgrade OpenLayers (now at version 6.x) and use the --allowJs + --declaration combination provided by TypeScript 3.7 to generate our own d.ts files for OpenLayers.
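For reference, a minimal sketch of the compiler-options combination involved (the include path and output directory are illustrative, not this project's actual configuration; tsconfig files accept comments):

```json
{
  "compilerOptions": {
    "allowJs": true,             // accept vanilla JS sources
    "declaration": true,         // emit d.ts files (works with allowJs from TS 3.7)
    "emitDeclarationOnly": true, // we only want the typings, not compiled JS
    "outDir": "types"
  },
  "include": ["node_modules/ol/**/*.js"]
}
```

Running `tsc` with a config like this would, in principle, walk the JSDoc-annotated OpenLayers sources and emit d.ts files into `types/`.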

Sadly, it seems that the d.ts files generated through this combination still aren't quite usable, which was deflating news, and I was about to put the OL update plans on ice again until I learned of another typings effort for OpenLayers 5.x and above. As there were no other viable solutions, I decided to give these d.ts files a try. Despite lacking inline API documentation (which my JSDoc plugin was able to preserve when generating the d.ts files), these typings accurately covered most of the OpenLayers API surface, which gave me the impetus to make the full upgrade to OpenLayers 6.1.1, the latest release of OpenLayers as of this writing.

Updating Blueprint

Also for the longest time, mapguide-react-layout was using Blueprint 1.x. What previously held us back from upgrading, besides dealing with the expected breaking changes and fixing our viewer as a result, was that Blueprint introduced SVG icons as a replacement for their font icons. While having SVG icons is great, having the full kitchen sink of Blueprint's SVG icons in our viewer bundle was not, as that blew up our viewer bundle sizes to unacceptable levels.

For the longest time this had been a blocker to fully upgrading Blueprint, until I found someone suggesting a creative use of webpack's module replacement plugin to intercept the original full icon package and replace it with our own stripped-down subset. This workaround brought our viewer bundle size back to acceptable levels (i.e. only slightly larger than the 0.12.8 release). With this workaround in place, it was finally safe to upgrade to the latest version of Blueprint, which is 3.22 as of this writing.
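For the curious, the interception looks roughly like this. This is a sketch using webpack's NormalModuleReplacementPlugin; the regex and the path of the stripped-down icon module are assumptions for illustration, not this project's actual config:

```javascript
// webpack.config.js (sketch): replace Blueprint's full generated SVG
// icon-paths module with a local stripped-down subset, so the full
// kitchen sink of icons never enters the bundle.
const path = require("path");
const webpack = require("webpack");

module.exports = {
  // ...the rest of the viewer's webpack configuration...
  plugins: [
    new webpack.NormalModuleReplacementPlugin(
      /.*\/generated\/iconSvgPaths.*/,          // Blueprint's full icon module
      path.resolve(__dirname, "stub-icons.js")  // our subset (hypothetical path)
    )
  ]
};
```

The stub module just needs to re-export the handful of icon paths the viewer actually uses, under the same names the original module exported.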

Resizable Modal Dialogs!

So we finally upgraded Blueprint, but our Blueprint-styled modal dialogs were still fixed-size things whose inability to be resized really hampered the user experience of features that spawned modal dialogs or made heavy use of them (e.g. the Aqua viewer template). Since we're on the theme of doing things that are long overdue, I decided to tackle the problem of making these things resizable.

My original mental notes were to check out the react-rnd library and see how hard it was to integrate into our modal dialogs. It turns out this was actually not that hard at all! The react-rnd library was completely unintrusive and, as a bonus, lightweight as well, meaning our bundle size wasn't going to blow out significantly either.

So say hello to the updated Aqua template, with resizable modal dialogs!


Now unfortunately, we didn't win everything here. The work to update Blueprint and make these modals finally resizable broke our ability to have modal dialogs with a darkened backdrop like this:


This was due to overlay changes introduced with Blueprint. My current line of thinking around this is to ... just remove support for darkened backdrops. I don't think losing this support is such a big loss in the grand scheme of things.

Hook all of the react components

The other long overdue item was upgrading our react-redux package. We had held on to our specific version (5.1.1) for the longest time because we had usages of its legacy context API to be able to dispatch any redux action from toolbar commands. The latest version removed this legacy context API which meant upgrading would require us to re-architect how our toolbar component constructed its toolbar items.

We were also using the connect() API, which, combined with our class-based container components, produced something that required a lot of pointless type-checking and in some cases forced me to fall back to using the any type to describe things.


It turns out that the latest version of react-redux offers a hooks-based alternative to its APIs. Having been sold on the power of hooks in React in my day job, I took this upgrade as an opportunity to convert all our class-based components over to functional ones using hooks, and the results were most impressive.

Moving away from class-based container components and using the react-redux hooks API meant that we no longer needed to type state/dispatch prop interfaces for all our container components. These interfaces had to have all-optional props, as they are not required when rendering out a container component but are set when the component is connect()-ed to the redux store. This optionality infected the type system, meaning we had to do lots of pointless null checks in our container components for props that could never actually be null or undefined, but had to be checked anyway because our state/dispatch interfaces said so.

Using the hooks API means that state/dispatch interfaces are no longer required, as they are now implementation details of the container component through the new useDispatch and useSelector hook APIs. It means that we no longer need to do a whole lot of pointless checks for null or undefined. Moving to functional components with hooks also means we no longer need the connect() API (we just default-export the functional component itself), nor the "any" type band-aid.

To see some visual evidence of how much cleaner and more compact our container components are, consider one of our simplest container components, the "selected features" counter:

A useful property of hooks is that they are composable, so from the low-level useSelector hook that react-redux gives us, we can build a series of specialized and reusable hooks on top to access common viewer and application state across all our container components. The useViewerLocale hook in the linked master example above is just one of many reusable hooks that have been built and that all our container components now use. The useSelector API encourages us to return scalar values instead of objects (as objects would require custom equality comparisons to test for re-rendering), and you see that reflected in most of the hooks that have been written, which mostly return single values.
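The scalar-vs-object point can be sketched without React at all. react-redux's useSelector compares successive selector results with Object.is by default, so a selector that builds a fresh object every call always looks "changed". The state shape below is hypothetical, not the viewer's actual state:

```typescript
// Minimal illustration of selector reference stability.
interface AppState {
  locale: string;
  scale: number;
}

// Scalar selector: the same state slice yields the same reference,
// so Object.is sees no change and no re-render is triggered.
const selectLocale = (s: AppState) => s.locale;

// Object selector: builds a fresh object on every call, so even an
// unchanged state looks "different" and would force a re-render.
const selectView = (s: AppState) => ({ locale: s.locale, scale: s.scale });

const state: AppState = { locale: "en", scale: 5000 };

const scalarStable = Object.is(selectLocale(state), selectLocale(state)); // true
const objectStable = Object.is(selectView(state), selectView(state));     // false
```

This is why a hook like useViewerLocale would return the locale string itself rather than wrapping it in an object.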

Usage of the new hooks API provided some valuable insights when doing the needed re-architecting of how our toolbar component constructs its toolbar items. In our original implementation, we pulled in the full application state for determining whether toolbar items are selected/disabled/etc. This meant that even the most innocuous change of state, like a change of mouse coordinates, would cause our toolbars to re-render themselves (because we were listening to the full application state), causing the react devtools to light up like a Christmas tree when highlighting re-renders was enabled.

Re-architecting our toolbar forced us to look at what part of the application state we actually cared about when determining whether a given toolbar item should be selected/disabled/etc. The end result is that we really only cared about 6 bits of state, so we now have a custom hook that returns only this subset and only triggers re-renders when some part of this subset has actually changed. The upshot: the UI is more responsive because the toolbars aren't constantly re-rendering due to updates to state that is not relevant to toolbar items.
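The idea behind that custom hook can be sketched without React (the state fields here are hypothetical stand-ins, not the viewer's actual six bits of state): select the subset you care about and only "re-render" when a shallow comparison of that subset changes.

```typescript
// Sketch: notify a subscriber only when the slice of state it cares
// about actually changes, so unrelated updates (e.g. mouse moves)
// do not trigger re-renders.
interface AppState {
  mouseCoords: [number, number];
  busyCount: number;
  activeTool: string;
}

interface ToolbarState {
  busyCount: number;
  activeTool: string;
}

const selectToolbarState = (s: AppState): ToolbarState => ({
  busyCount: s.busyCount,
  activeTool: s.activeTool
});

const shallowEqual = (a: ToolbarState, b: ToolbarState) =>
  a.busyCount === b.busyCount && a.activeTool === b.activeTool;

// Simulate two state updates: only the second touches toolbar state.
const updates: AppState[] = [
  { mouseCoords: [10, 20], busyCount: 0, activeTool: "pan" },  // mouse move only
  { mouseCoords: [10, 20], busyCount: 0, activeTool: "zoom" }  // tool change
];

let renders = 0;
let prev = selectToolbarState({ mouseCoords: [0, 0], busyCount: 0, activeTool: "pan" });

for (const next of updates) {
  const sel = selectToolbarState(next);
  if (!shallowEqual(prev, sel)) {
    renders++; // this is where a toolbar re-render would be triggered
    prev = sel;
  }
}
// renders is 1: the mouse move alone did not re-render the toolbar
```

With react-redux, the same effect is achieved by passing a shallow-equality function as useSelector's second argument, or by selecting scalars.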

In closing ...

The main objective of the next 0.13 release was to carry out some long overdue updates to key libraries we were using, which we've achieved with rousing success.

But despite it being our main objective, that doesn't mean 0.13 is going to be released immediately; there are still actual features I want to get into this release, which will (of course) be the topic of various dev diary entries in the future.

by Jackie Ng (noreply@blogger.com) at December 17, 2019 02:29 PM

Dear readers,

OSGeo has just announced on its mailing list (OSGeo-Conf) that it is now official: FOSS4G 2021 will be held in the city of Buenos Aires (Argentina).

I would like to congratulate the whole team involved in the process of bringing the 2021 edition of FOSS4G to South America. I know it was not easy, nor did it happen overnight; it is work that has been under way since 2013, with the first edition of FOSS4G-AR.

I am sure the event will be a success, and I hope to be there in 2021 to see it for myself.

Source: OSGeo Conf List

by Fernando Quadro at December 17, 2019 11:58 AM

With the major under-the-hood updates out of the way, it was time to tackle another long-standing item on my todo list. The ability to showcase the key components of mapguide-react-layout in storybook.

From storybook's home page introduction:
"Storybook is a user interface development environment and playground for UI components. The tool enables developers to create components independently and showcase components interactively in an isolated development environment."

In the case of mapguide-react-layout, the motivation was to be able to leverage our existing gh-pages branch that currently hosts the project landing page and API docs to also host storybook to showcase the various react components that make up mapguide-react-layout as an [interactive playground / component documentation / pseudo-demo site] on GitHub Pages.

The major challenge to storybook adoption

Storybook has existed for quite some time now, so what was the major blocker to adopting storybook in mapguide-react-layout?

Namely, mapguide-react-layout would require a running MapGuide Server in order to properly showcase the viewer components. While I have no problem pointing storybook at a demo MapGuide Server if there were no other options, it shouldn't need to be a hard requirement. If we can intercept and, where possible, mock the expected responses from the MapGuide Server, then hosting storybook on GitHub Pages becomes much simpler.

It turns out that it is indeed possible to "mock out" the MapGuide Server dependency:
  • The requests to the mapagent are done through a dedicated class. We just needed a mechanism to register an alternate implementation that can just return canned response data for certain requests.
  • The API for OpenLayers image sources allows us to register a custom "image load function". We can register our own function that intercepts the mapagent rendering request URL, uses the HTML5 Canvas API to render an alternate image that simply dumps out the key request parameters of note, and exports the rendered image as a data URI to be assigned to the image element that OpenLayers provides.
The end result is that we can showcase our map viewer components with the MapGuide Server communication bits mocked out and canned response data returned where needed.
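The first bullet can be sketched like so. The interface and class names are illustrative (not the project's actual API), and the sketch uses a synchronous signature for brevity where the real client would be asynchronous:

```typescript
// Sketch: because all mapagent requests go through one client
// abstraction, a canned implementation can stand in for a live
// MapGuide Server when running under storybook.
interface IMapAgentClient {
  get(operation: string): string;
}

class MockMapAgentClient implements IMapAgentClient {
  constructor(private canned: Record<string, string>) {}
  get(operation: string): string {
    const body = this.canned[operation];
    if (body === undefined) {
      throw new Error(`No canned response for ${operation}`);
    }
    return body;
  }
}

// A storybook story would register the mock instead of the real client,
// e.g. with a canned CREATERUNTIMEMAP response for the Sheboygan map:
const client: IMapAgentClient = new MockMapAgentClient({
  CREATERUNTIMEMAP: "<RuntimeMap>...Sheboygan sample map...</RuntimeMap>"
});
```

The viewer code never needs to know whether it is talking to a real server or the mock; only the registration point differs.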

We can showcase the map viewer component without a dependency on a running MapGuide Server, with the canned replacement image still providing useful information about what would happen on a real MapGuide Server.

Our test app def is set up with Stamen and OSM maps so that even though we're rendering a textual placeholder in place of the actual MapGuide-rendered map image, you still have some "real world context" that the Stamen/OSM base layer provides.

We can showcase the legend component by providing a canned CREATERUNTIMEMAP response of the Sheboygan map.

We can showcase our selection panel by providing a canned QUERYMAPFEATURES response.

And so on, and so on.

In closing ...

Storybook for mapguide-react-layout is now live on GitHub Pages. If you ever wanted to see or explore the components that make up mapguide-react-layout and how they work, without the need to spin up your own dev environment and/or a running MapGuide Server, we now have storybook for that.

I'm guessing it will be kept up to date periodically, with each new release in the future.

by Jackie Ng (noreply@blogger.com) at December 17, 2019 08:05 AM

December 15, 2019

Last week I spoke at devrelcon London 2019, which was an interesting and fun experience. Firstly, I'd never heard of "devrel" until a few months ago, and secondly, it's been a while since I've spoken at or even attended a conference outside of the cosy little Open Source GIS community. For those short of time, my talk was on "Inspiring and empowering users and techies to become great writers - and why that's important", and you can find it on GitHub for the live version, and the pdf with speaker notes.

Given my complete lack of knowledge about "devrel", what on earth was I doing at one of their conferences? It comes from the work I've been doing administering and mentoring a technical writer as part of OSGeo's participation in Google Season of Docs (GSoD). The Good Docs Project was spun out of this, like a "meta-project", abstracting the fabulous work our paid writers were doing for GSoD to work with any open source project. As the main UK-based person involved in these projects, I was best placed to speak at devrelcon in London, so there I was.

What follows are my thoughts about the event, take-aways from my talk, and some general musings about “devrel” as it applies, or could apply to open source GIS…

So what’s devrel?

Firstly, I've since done my homework and learnt that "devrel", or Developer Relations, is a relatively new umbrella term, broadly meaning community management for a technical audience. Its purpose is to build relationships and enable developer communities, be that liaising between companies and communities, or advocating for community needs. Another side of it is PR or marketing for developers: ensuring that products such as APIs or SDKs reach their target audience. Communities such as devrelnet (the organisers of devrelcon) have sprung up to provide a space where the devrel community can come together and share ideas.

My talk

Clearly areas such as documentation mesh quite well within this fairly nebulous overall idea, and in fact there was an entire documentation track at the event, of which my talk was part. Coming to this event as a complete outsider, or n00b, I thought long and hard about what I wanted to say. Initially I’d wanted to focus on the types of things you should or shouldn’t say in documentation; the types of terms that alienate users and make for a poor reading experience. However, there are so many articles about this, and for all I knew I could be preaching to the converted, or teaching Grandmothers to suck eggs. Let’s face it, no one likes an outsider coming to their event, pretending to be an expert, and telling you about something you already know as if it’s a cool new discovery!

Serendipitously, at about this time I also read some fabulous articles in Increment Magazine’s issue on open source, which gave me a different way in. Along with the GitHub open source software survey (of which 2017 is the most recent published version), I started looking at some of the problems open source projects face, around developer burn-out, coupled with a lack of new developers coming on board, and a lack of diversity. When you also see surveys that show documentation as a “way in” to a project for under-represented groups (be that by gender, language etc) then you start to see a story developing:

Better documentation -> Bigger and more diverse communities -> New developers -> Less burn-out at the top -> Win for everyone

So that’s the angle that I took, then I looked at ways in which users could be enticed to help with documentation, and how in many cases they are the best people to write the docs, particularly when it comes to installation and quick-start guides. I looked at ways in which barriers to entry could be reduced, having seen for myself how easy it is to be put off by a massively difficult documentation work-flow, or missing steps, and how nice it is when as a new contributor you see your first spelling mistake fix reflected in the live documentation.

Finally, I rounded the talk off with a call to arms to get people to join The Good Docs Project, and we also handed out some of the super cute Doctopus stickers!

Questions after my talk suggested I’d made some people think, which is always good. To paraphrase, a couple of interactions went along the line of “I am responsible for the documentation for a large open source project, and it’s a mess, and I don’t know how to even start”. The idea of approaching users to help with this seemed new, as if documentation needs to be the domain of the software project itself, and not the people using it.

Take-aways

Take-aways from other talks in my stream:

  • Build better GitHub experiences as that’s often where developers will randomly land when googling your code (Lorna Mitchell). Think about your readmes and the GitHub tags for a project, which are structured data and will appear in search results.
  • Step up from Docs as Code to Docs as Engineering by treating the documentation deployment process as you would a software deployment (Cristiano Betta). Make it very easy to link between (for example) APIs and their documentation, including adding working API examples within the docs.

Since I could only be there for one day, my experience outside of that track was a little limited- but the devrel crowd were friendly, diverse and all seemed to know each other. I guess (or hope) that’s what people think when they visit one of our OSGeo events too.

Devrel in open source GIS

So devrel in Open Source GIS- do we do it? I think yes, a bit, but without giving it a name. More engagement with other communities, like devrel, would be helpful to remind us that we are part of a bigger “thing” and not just stuck in our own niche. To highlight that- when I talked about what I do, in terms of “open source mapping”, people immediately assumed I meant OpenStreetMap.

Certainly we could learn from the practices being discussed at the event, such as building diverse communities. We could definitely learn better ways of doing documentation (which is where The Good Docs project comes in, of course).

Any more than that, I can’t say for now as it was a bit of a whirlwind (I could only attend one day and then went straight to the Astun Christmas Company meeting) and I feel I need to inwardly digest my thoughts from the event, and learn more about it all. I’ll be looking out for opportunities to apply things I learned though, in both Astun and OSGeo.

by Jo at December 15, 2019 04:34 PM

Well, the new geopaparazzi release had been requested by users who were not able to load Spatialite tables with more than 3 dimensions. Indeed, there was a bug causing that problem.

So we fixed the bug, and since we had some well-tested GeoPackage code at hand, we tried to bring it in and use it the same way we use Spatialite. It proved to be fairly easy, so we decided to bring out this version with the following support:

  • vector (feature) layers: read, write (i.e. editing) and style mode, but only for srid 4326 (so that no reprojection is necessary)
  • tiles: only for srid 3857 (so they fit in the current tiling system)

I personally think it is quite awesome. GeoPackage is more limited than Spatialite, but its data preparation tools are way more stable and widespread across GIS environments. QGIS even uses it as its default vector format.

Also, in the HortonMachine project we have tools to quickly create GeoPackage databases, add shapefiles to them, build tilesets from raster files, and style the layers to be geopaparazzi-ready.

One other thing we fixed is vector layer labelling. Many users have been crying about missing vector labels. Well, it has been really fun to see how many people and working groups use geopaparazzi with Spatialite for their surveys. I really wish they would also contribute from time to time.

Anyways, we decided to add back vector labels. Given the limitations imposed by the rendering framework, which would make things very difficult in 3D space and would add an extra layer to be handled for each data layer, this is how we solved it, blocking the visualization of labels in the 2D space:

It might seem strange at first, but I can assure you it works quite nicely.

There is also collision handling on a per-layer basis. This might show it better:

Enjoy!!

by moovida (noreply@blogger.com) at December 15, 2019 09:05 AM

December 14, 2019

GRASS GIS 7.8.2 released with updated PROJ 6 and GDAL 3 support

What’s new in a nutshell

As a follow-up to the recent GRASS GIS 7.8.1 we have published the new stable release GRASS GIS 7.8.2.
Besides other improvements, the release contains important PROJ 4/5/6 related datum handling fixes, wxGUI fixes and a fix for the vector import from PostGIS databases.

An overview of the new features in the 7.8 release series is available at new features in GRASS GIS 7.8.

Binaries/Installer download:

Source code download:

See also our detailed announcement:

First time users may explore the first steps tutorial after installation.

About GRASS GIS

The Geographic Resources Analysis Support System (https://grass.osgeo.org/), commonly referred to as GRASS GIS, is an Open Source Geographic Information System providing powerful raster, vector and geospatial processing capabilities in a single integrated software suite. GRASS GIS includes tools for spatial modeling, visualization of raster and vector data, management and analysis of geospatial data, and the processing of satellite and aerial imagery. It also provides the capability to produce sophisticated presentation graphics and hardcopy maps. GRASS GIS has been translated into about twenty languages and supports a huge array of data formats. It can be used either as a stand-alone application or as backend for other software packages such as QGIS and R geostatistics. It is distributed freely under the terms of the GNU General Public License (GPL). GRASS GIS is a founding member of the Open Source Geospatial Foundation (OSGeo).

The GRASS Development Team, December 2019

The post GRASS GIS 7.8.2 released appeared first on GFOSS Blog | GRASS GIS and OSGeo News.

by neteler at December 14, 2019 09:22 PM

Another of the most interesting new features of gvSIG Desktop is the new expression builder, integrated into different gvSIG tools such as the field calculator or selection by attributes.

Thanks to it we can apply anything from searches, filters and simple calculations to operations as complex as we need. Even we are not fully aware of its potential... although this video presented by Óscar Martínez of the gvSIG Association helps us start to discover it.

by Alvaro at December 14, 2019 04:33 PM

One of the highlights of the latest version is the new form builder, which can be used in anything from a very basic way to building custom forms as complex as we need. Pushed to its full potential, we could say it lets us build our own custom data-management applications... without writing a single line of code.

In this video, recorded during the recent 15th International gvSIG Conference, Joaquín del Cerro, the head of gvSIG Desktop development at the gvSIG Association, explains how it works.

If you want to take your use of GIS one step further, don't miss the video...

by Alvaro at December 14, 2019 02:01 PM

December 13, 2019

The materials are now available for the course "Cartographic Representation as a tool for improving the Canary Islands Safety and Emergency System", promoted by the Directorate General for Safety and Emergencies of the Regional Ministry of Public Administration, Justice and Security of the Government of the Canary Islands.

A course we recommend to everyone interested in applying Geographic Information Systems in areas such as safety, emergencies and civil protection. Produced by security professionals, with practical exercises that help to understand the importance of using tools such as gvSIG in the field of civil protection.

Course presentation video:

The people responsible for the course are Gustavo Armas Gómez, Director General for Safety and Emergencies, and Juan José Pacheco Lara, Head of Studies and Research of the DGSE Training Unit. The course was developed by Gilberto Díaz Gil.

The course is eminently practical, and all the exercises were carried out using the free software gvSIG Desktop, specifically version 2.4, which you can download here.

From the gvSIG Association we want to thank the Government of the Canary Islands for its willingness to publish and share these highly interesting training materials.

Below we link to all the course topics and video tutorials.

by Alvaro at December 13, 2019 08:54 AM

We link to the post reporting that the winning project of the 2019 gvSIG Batoví contest has been recognized at the INSPIRA awards. Our most sincere congratulations to the whole team!

The award-winning project aimed, using gvSIG, to identify the most suitable site for building a secondary school in the La Teja neighbourhood, and to map the neighbourhood's socio-cultural services in order to connect the future school with the neighbourhood's institutions.

Secondary school students using gvSIG to analyze and address the needs of their environment. For those of us driving the project, seeing these achievements fills us with satisfaction. Because that is what it is all about: technology that is available to everyone, and belongs to everyone.

via Estudiantes ganadores del concurso gvSIG Batoví 2019 reconocidos en los premios INSPIRA

by Alvaro at December 13, 2019 08:24 AM

December 12, 2019

December 11, 2019

The recording is now available of the talk "Accident management and integration with ARENA2 of the Directorate-General for Traffic in gvSIG Desktop". It shows the developments carried out to integrate into gvSIG Desktop the files provided by Spain's DGT (ARENA2), and the tools for managing accident information.

The developments range from generic improvements to gvSIG Desktop to new functionality directly related to the management of accident data.

Used at CEGESEV (the Traffic Management and Road Safety Centre of the Generalitat Valenciana), its development as free software should allow easy adoption by other organizations that need to work with this kind of data.

You can watch the talk here:


by Alvaro at December 11, 2019 01:57 PM

We bring you the recording of the talk presenting the geoportal for road management in the Dominican Republic, developed with gvSIG Online within the project "Apoyo en el Sistema de Gestión de Inventario de la Red Vial y Puentes de la República Dominicana" (Support for the Road and Bridge Network Inventory Management System of the Dominican Republic).

A very interesting talk that shows the impact small projects can have on a country's infrastructure management. Thanks to this project, for the first time there is a digital map of all the country's roads.

by Alvaro at December 11, 2019 10:47 AM

December 10, 2019

After publishing a version with big changes, as usual we release an update in which we put the final touches to the previous version, fixing the most relevant errors that may have appeared and adding new functionalities that bring considerable improvements without involving major changes.

The new gvSIG Desktop version will be available soon (in fact you can download development builds already) and, if everything goes according to plan, the release candidate for the final version will be available in January.

That is why we wanted to tell you about the improvements that gvSIG Desktop 2.5.1 will bring, and we will give you more details about them soon.

Exporting virtual fields as values

One of the most important improvements in gvSIG 2.5 has been the ability to work with virtual fields. Now we are adding the option of exporting virtual fields as real fields in the resulting table; this will be indicated during the table/layer export. As virtual fields are specific to gvSIG, this makes their values accessible from other applications.

PDF and ODS viewer in forms

Another important improvement of gvSIG 2.5 has been the option to work with forms. Thanks to this new functionality we can do anything from simple queries on a table through a form to practically building data maintenance applications without writing a single line of code.

Related to the forms, in gvSIG 2.5 a field of a table can hold a reference to an image, or the image itself as an attribute. When we show the form associated with that table, we can configure it to embed an image viewer and display the image in it.

In the new gvSIG version we are going to add the possibility of doing something similar with PDF and ODS files.

Heat map comparison

As all of you know, from gvSIG 2.4 you can generate heat map legends. gvSIG 2.5.1 will now bring a new type of legend that allows heat maps to be compared. This makes it possible, for example, to analyze the variations of a phenomenon between different dates, or the behaviour of two different variables on the same layer.
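The underlying idea can be sketched as a cell-by-cell difference between two density grids; this is an illustrative stdlib-only sketch, not gvSIG's implementation, and all names in it are assumptions:

```python
from collections import Counter

def density_grid(points, cell=1.0):
    """Bin (x, y) points into square grid cells and count points per cell."""
    return Counter((int(x // cell), int(y // cell)) for x, y in points)

def compare(points_a, points_b, cell=1.0):
    """Per-cell count difference: positive = denser in A, negative = denser in B."""
    a, b = density_grid(points_a, cell), density_grid(points_b, cell)
    return {c: a.get(c, 0) - b.get(c, 0) for c in set(a) | set(b)}

# Same phenomenon observed on two dates.
january = [(0.2, 0.3), (0.7, 0.1), (2.5, 2.5)]
june = [(0.4, 0.6), (2.1, 2.2), (2.8, 2.9)]
diff = compare(january, june)  # e.g. {(0, 0): 1, (2, 2): -1}
```

A real heat map would smooth these counts with a kernel before rendering; the comparison step is the same.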

Extreme heat map

We are going to add another new type of legend related to the concept of heat map. One of the problems with heat maps when analyzing certain variables or entities is that they represent the density of all the information in a layer, which, depending on the type of information, can make the detection of extreme values difficult.

The new type of legend that will be available in gvSIG 2.5.1 will make it possible to filter or single out the values to be represented above a given threshold or value.
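Conceptually, this amounts to filtering observations by a value threshold before computing density. A minimal sketch of that filtering step, with illustrative names and data (not gvSIG code):

```python
# Observations as (x, y, value), e.g. temperature readings at locations.
readings = [
    (0.1, 0.1, 18.0), (0.2, 0.3, 19.5), (0.9, 0.8, 21.0),
    (2.0, 2.1, 39.5), (2.2, 2.0, 41.0),  # extreme values
]

def extreme_points(readings, threshold):
    """Keep only the locations whose value meets or exceeds the threshold."""
    return [(x, y) for x, y, v in readings if v >= threshold]

# Only the extreme observations would feed the heat map rendering.
hot = extreme_points(readings, threshold=35.0)
```

Everything below the threshold is excluded, so the resulting heat map highlights only where the extremes concentrate instead of being dominated by the bulk of the data.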

Semi-automatic simple reports generation

Another of the most important gvSIG Desktop 2.5 novelties has been the possibility to create Reports. Currently you can generate reports using Jaspersoft Studio and associate them with tables in gvSIG. However, in many cases it is necessary to produce simple, quick reports without having to design templates beforehand.

For this reason, a new functionality that allows the user to generate simple, quick reports will be integrated into gvSIG.

by Mario at December 10, 2019 12:58 PM

gvSIG Online is, today, one of the reference solutions for managing the geographic information of any organization. Unlike other products, gvSIG Online is a 100% free software solution with the professional support and backing of the gvSIG Association. The purpose of this post is not to review the many gvSIG Online deployments across all kinds of sectors and geographies. What we bring you is a presentation of the new gvSIG Online deployment mode, which we have called gvSIG Appliance.

The already known deployment modes, SaaS (as a service) and On-Premise (on the customer's servers), did not cover the needs of certain users who wanted to use gvSIG Online but, for security reasons, completely isolated from the network and integrated with surveillance applications and command and control centers.

In this way, gvSIG Appliance makes it possible to run gvSIG Online as an appliance inside a server, integrated with command and control centers such as GENETEC, a leading software product in the security sector.

You can watch a presentation in which Ramón Sánchez of San2 Innovacion Sostenible explains the features of gvSIG Appliance and some deployment examples, from the Ceuta Smart City to Iberdrola.

by Alvaro at December 10, 2019 10:30 AM
