Categories
Technical

Consolidation

In an effort to focus my (now rather limited) spare time on a more nuanced set of web projects, I have consolidated some websites.

Three Blogs into One

I have combined two other blogs into this one:

  • blog.oomap.co.uk – My personal blog, with notes about London, orienteering and cycling. This was my original blog “YepSport” (also known as “Yablog”) and goes right back to 2003.
  • bikesharp.com – Writing about shared micromobility in the UK. This blog saw less writing than I had planned: shortly after I started it, I was contracted to write similar articles for Zag Daily, which I have done for the last couple of years (I have now stepped down).

I still write another blog – Mapping London – which will stay separate for now.

OpenOrienteeringMap Leaves Home

I have also switched OpenOrienteeringMap from my own server/website to David Dixon’s excellent “version 4” rewrite.

OOM has been going since 2009, with saveable maps and a UI redesign in 2013 (version 2), and the ability to generate MapRun-compatible maps (JPGs, JGWs and KMZs) in 2020, capitalising on a boom in popularity for self-run orienteering races during the early months of the COVID crisis and resulting lockdowns (version 3).

However, my version had a number of limitations. It used its own copy of OpenStreetMap rather than APIs to the master copy, resulting in delays to people's edits appearing on the map. It also had very limited contour coverage (10m contours, and GB only). Finally, the actual mapping was inflexible – you got fences whether you wanted them or not, with no per-feature toggle.

All of these problems were fixed by David's rewrite. The only step back is that you don't see the actual orienteering style on screen until you generate the PDF. But David has added a Preview button, so the software can generate it on screen after all – you just need to wait a few seconds.

BikeShareMap and the Meddin Bike-Sharing World Map

Finally, I currently run/contribute towards two websites mapping bikeshare systems around the world. BikeShareMap shows individual docking station statuses, for the ~600 systems for which there is such data. The Meddin Bike-Sharing World Map maps all the bikeshare systems in the world – but just as a single pin, per city.

I have been gradually consolidating the two projects. The first stage, now well underway, has been to use common IDs and split-outs for each city. The second step is to harmonise data input across the two websites, rather than double-keying as at present. The final stage (which is some way off) will be to integrate the BikeShareMap function into the Meddin project.

Categories
Bike Share London OpenLayers OpenStreetMap Technical

All the Docks: Technical Notes on the Routes and Map

Routes

I created GPX route files for the challenge. These were drawn manually as lines in QGIS, using the standard OpenStreetMap "Mapnik" render as a background, with Google Street View imagery used to check restrictions.

I split each team's route into 12 stages (so 36 altogether), which were initially each just over 10km and ended at a docking station. Each stage contained between 10 and 40 sequential legs to docking stations. I'm not sure I would trust proper routing engines (normally based on Google Maps or OpenStreetMap data) to have found better routes than me and Google Street View for each leg between docking stations, largely because many London boroughs have been experimenting a lot recently with Low Traffic Neighbourhoods (LTNs) and modal filters (e.g. two-way for bikes/one-way for cars). But I did run a TSP solver (RouteXL) on 3 of the stages, and in 2 cases it found a slightly shorter ordering of the legs within the stage. So I would probably use a TSP solver more for a future iteration of the challenge.
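I used RouteXL rather than writing any solver code myself, but as a rough illustration of the idea, a greedy nearest-neighbour ordering of a stage's docking stations might look like the sketch below – it uses straight-line (haversine) distances, a big simplification compared with real London cycling distances, and the station format shown is just an example:

// Illustrative only: a greedy nearest-neighbour ordering of a stage's docking
// stations, using straight-line (haversine) distance as a crude stand-in for
// real cycling distances. RouteXL and proper TSP solvers do much better.
function haversineMetres(a, b) {
	var R = 6371000; // Earth radius in metres
	var toRad = function (d) { return d * Math.PI / 180; };
	var dLat = toRad(b.lat - a.lat);
	var dLon = toRad(b.lon - a.lon);
	var h = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
		Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) *
		Math.sin(dLon / 2) * Math.sin(dLon / 2);
	return 2 * R * Math.asin(Math.sqrt(h));
}

// stations: e.g. [{ name: "Gower Place, Bloomsbury", lat: 51.525, lon: -0.135 }, ...]
function orderStage(start, stations) {
	var remaining = stations.slice();
	var ordered = [];
	var current = start;
	while (remaining.length > 0) {
		remaining.sort(function (p, q) {
			return haversineMetres(current, p) - haversineMetres(current, q);
		});
		current = remaining.shift(); // visit the nearest unvisited station next
		ordered.push(current);
	}
	return ordered;
}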

The three route/team files were saved as British National Grid (EPSG:27700) GeoJSONs (technically not allowed by the spec) so I could get proper metre distances ($length) auto-updated into a column for each stage during planning. The stages had a number column and were numbered sequentially. Having a number column results in LineStrings in the GeoJSONs and GPX routes/routepoints, rather than single MultiLineStrings and GPX tracks/trackpoints. They were then saved as WGS84 GPX files. I (mis-)used a very limited set of column names (name, number, src, desc, cmt), due to the restrictions of the GPX specification – I didn't want to use GPX extensions.

It was important to have three separate GPX files so that each team would need to load just a single file onto their navigation device and not see docking stations/routes from other teams. But it did make preparations a bit harder for the online map.

The docking stations were imported via a TSV file, then saved as GPX waypoints (column names again restricted to src, desc, name, and cmt), and the relevant ones were manually appended to the GPX team files. The GeoJSONs were retained as my master editing files, as QGIS cannot easily edit GPX files because they contain multiple geometry types.

I would certainly like to try a more automated approach to the routing. It did take a substantial amount of time – probably two evenings for each of the three routes, and a further evening for each route to enumerate the docking stations, fine-tune the routes and reorder any sliced up GeoJSON LineString segments (part-stages) back into the correct sequence. The reordering was needed as QGIS would incorrectly reorder parts of the route that crossed over itself, when it was sliced up.

But an automated approach would require a method that deals with docking stations that are just 10m down a no-entry street (so you'd just walk it), which is hard. Currently each is represented as a point defined by TfL through their API (and separately in OpenStreetMap), which may be the location of the "totem pole" kiosk rather than the docking points themselves. In routing or GIS systems, the docking station really needs to be represented as an area (within which you would walk the bikes) plus a (multi-)line representing the line of dock points – some of these are quite long, some have significant gaps, and sometimes they are split on either side of a road. That area could also extend up to the nearby road junction, to deal with the one-way issue.
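As a purely hypothetical sketch (the coordinates are invented), a richer GeoJSON-style representation of a single docking station might bundle those three geometries together:

// Hypothetical richer representation of one docking station (coordinates invented).
// The TfL API only supplies the single point; the walkable area and the line(s)
// of dock points would have to be surveyed or traced manually.
var dockingStation = {
	type: "Feature",
	properties: { tflId: 761, name: "Gower Place, Bloomsbury" },
	geometry: {
		type: "GeometryCollection",
		geometries: [
			// the "totem pole" kiosk point, as per the TfL API
			{ type: "Point", coordinates: [-0.1352, 51.5246] },
			// area within which you would walk the bike (could extend to the junction)
			{ type: "Polygon", coordinates: [[
				[-0.1355, 51.5245], [-0.1349, 51.5245], [-0.1349, 51.5248],
				[-0.1355, 51.5248], [-0.1355, 51.5245]
			]] },
			// the line(s) of dock points, possibly split either side of a road
			{ type: "MultiLineString", coordinates: [
				[[-0.1354, 51.5246], [-0.1350, 51.5246]]
			] }
		]
	}
};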

Future Improvements

In terms of the general design, a few things could be changed for a future challenge (some of these I mentioned in my previous blog post):

  • Ensuring that participants are well away from the finish at around the 60-80% stage, so that the remainder of the challenge is a kind of "run in" to the finish rather than routing them away at a late stage – they are then less likely to bail at that difficult time of the day.
  • When participants pass a docking station twice, they should visit it on the first occasion, not the second (an exception being when it is on the wrong side of a dual carriageway, particularly one with a median barrier). Otherwise there is a danger of it being missed on the return.
  • Build specific meal stops in.
  • Maximum of 200 docking stations/10 hours per team.

The Web Map

By comparison, building the web map was straightforward – probably just one evening's work to build the map page itself as a basic OpenLayers map reading in the GPX files, with simple browser-based geolocation, and one further evening to build a "team" version of the map that allowed ticking off the stations, with the action stored in a database and a time string echoed back to the web map (and to other viewers, on a JavaScript timer) as confirmation. The database had two tables: a summary table with a row per docking station, and an action log which recorded the dock's TfL ID, timestamp, event type and the submitter's browser user agent string ($_SERVER['HTTP_USER_AGENT']) in lieu of logins/IDs. It was fairly easy to manually assign each user agent to a team, post-event.
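A minimal sketch of that map page, using the full OpenLayers build – the GPX file name, element id and view settings are placeholders, and the real page also handled the ticking-off and styling:

// Minimal sketch of the map page: OSM background, one team's GPX file overlaid,
// and simple browser geolocation drawn as a point. File name and element id are
// placeholders.
var map = new ol.Map({
	target: 'map',
	layers: [
		new ol.layer.Tile({ source: new ol.source.OSM() }),
		new ol.layer.Vector({
			source: new ol.source.Vector({
				url: 'gpx/team_east.gpx', // one file per team
				format: new ol.format.GPX()
			})
		})
	],
	view: new ol.View({
		center: ol.proj.fromLonLat([-0.12, 51.51]), // central London
		zoom: 13
	})
});

var geolocation = new ol.Geolocation({
	tracking: true,
	projection: map.getView().getProjection()
});
var positionFeature = new ol.Feature();
geolocation.on('change:position', function () {
	positionFeature.setGeometry(new ol.geom.Point(geolocation.getPosition()));
});
new ol.layer.Vector({
	map: map, // overlay layer holding just the current position
	source: new ol.source.Vector({ features: [positionFeature] })
});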

Each docking station ended up with four identifiers, which feels like a few too many, but it kind of made sense:

  • an integer TfL ID (e.g. 761)
  • the TfL Name that appears on the totem pole (e.g. Gower Place, Bloomsbury)
  • a shortcode, which was the sequence number plus the initials of the first part of the TfL Name (e.g. 37.GP) – see the sketch after this list. There were some duplicates across the teams. FIN.HS was a special shortcode for the finish for the two teams that didn't have it as a docking station in their "zone". One newly added docking station had "A" appended to the sequence number of the previous one, rather than having to renumber everything.
  • a unique sequence code, which was the team, stage and docking station order within that stage (e.g. W02.15). This was used as a logical ordering of the file and to help with assigning each docking station to its stage on the online map.
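As an illustration of how a shortcode is derived from the sequence number and the TfL Name (the function is hypothetical – the real codes were assigned in the source files, with duplicates and late additions fixed by hand):

// Illustrative derivation of the shortcode from the sequence number and the
// TfL Name, e.g. 37 + "Gower Place, Bloomsbury" -> "37.GP".
function shortcode(sequenceNumber, tflName) {
	var firstPart = tflName.split(',')[0]; // "Gower Place"
	var initials = firstPart.split(/\s+/).map(function (word) {
		return word.charAt(0).toUpperCase();
	}).join(''); // "GP"
	return sequenceNumber + '.' + initials;
}

console.log(shortcode(37, 'Gower Place, Bloomsbury')); // "37.GP"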

I also listed an “actual sequence” post-event ordering, e.g. W038, in the final results file.

I could have used the sequence code on the map, but felt the shortcode was the most useful concise way of identifying each station to the team as they approached it, and hopefully the simple number would mean any missed stations were spotted quickly.

I built a special "diff" webpage that compared our docks file with the live data (via BikeShareMap) every 2 minutes; this alerted us to any new, closed or zero-capacity docking stations, plus a list of full ones. There was one that opened a few days before, but none on the day, thankfully!
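A rough sketch of that kind of comparison (the URLs, CSV layout and column positions are hypothetical, and the real page also flagged closed, zero-capacity and full stations):

// Rough sketch of the comparison: poll a live docks CSV every two minutes and
// compare station IDs against the static challenge file. URLs and CSV layout
// are hypothetical.
function checkDocks() {
	Promise.all([
		fetch('https://example.org/live_docks.csv').then(function (r) { return r.text(); }),
		fetch('docks_challenge.csv').then(function (r) { return r.text(); })
	]).then(function (results) {
		// assume the first column of each CSV is the TfL dock ID
		function parseIds(text) {
			return new Set(text.trim().split('\n').slice(1).map(function (line) {
				return line.split(',')[0];
			}));
		}
		var liveIds = parseIds(results[0]);
		var ourIds = parseIds(results[1]);

		var added = Array.from(liveIds).filter(function (id) { return !ourIds.has(id); });
		var removed = Array.from(ourIds).filter(function (id) { return !liveIds.has(id); });

		console.log('New since planning:', added);
		console.log('In our file but not live:', removed);
	});
}

checkDocks();
setInterval(checkDocks, 2 * 60 * 1000); // every two minutes, matching the live feed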

Future Improvements

I do think that using fewer intermediate routing points on each leg would be better and would allow for turn-by-turn satnav directions. Having said that, having street names called out is of limited use as they are often hard to spot on the ground, so the breadcrumb trail approach we used worked well.

We had paper maps (just screenshots of the website) as a backup. I never used them, and I think Team South used the website. Team West used them exclusively, with a separate person using the website to tick off.

I would have liked to have had a single source of docking station locations. In the end, they were:

  1. on TfL’s API, which is fed through to a CSV on BikeShareMap every two minutes,
  2. on a CSV file on Github,
  3. as GPX waypoints appended to each team’s GPX routes file, and
  4. in my database for recording times on the ATDMap website.

1 and 2 were automatically compared (see above); 2 could be added to QGIS to compare with and generate the GPX for 3, and also imported into the database table (4) – but these were all manual steps.

Links

Website map with the timings: https://misc.oomap.co.uk/atdmap/

Route GPX files and timings CSV: https://github.com/oobrien/allthedocks

Strava link (Team East): https://www.strava.com/activities/7908548122

Categories
Conferences Technical

A Day with HERE at CES

The HERE Technologies booth at CES 2019.

I was a guest earlier this week at HERE Technologies at the Consumer Electronics Show (CES) 2019 in Las Vegas, the world's biggest consumer electronics trade show. Their booth was directly outside the main entrance to the Convention Centre, the hub of CES, right beside Google's own huge one. The juxtaposition was interesting: the two companies compete intensely in some areas of location services (e.g. mapping APIs, journey routing and rich global POI databases) while being distinctly different in their approach. Google is very consumer focused, with its ubiquitous brand and location tools that are largely smartphone based and advertising/user-profile driven, while HERE's European origins are reflected in its strict user anonymisation defaults, its main datasource being car sensor information (some of the major car companies are the key investors in HERE), and its mainly B2B focus, which means that the UI you typically see in front of HERE's location intelligence is branded by the car company itself.

HERE’s location marketplace

The car sensor information drives many of the 5 million updates made every day (generally automatically) to HERE's global master map, and also means that HERE has a pretty good live traffic data stream of its own. The global master map also contains 160 million+ POIs (points of interest) – it's a seriously large database – which HERE has collected, collated and bought from a wide variety of sources. The map is a core part of HERE's overall location platform offering.

HERE's booth was a hive of activity, with product demos downstairs (themed around "the new reality") and a small stage, while upstairs, numerous meeting rooms were full all day – presumably with meetings between HERE executives and, at a guess, car companies looking for platforms to power their car/user information systems, city transportation agencies looking for new datasets to understand their city roads more effectively, and other key potential stakeholders in HERE's location platforms. The breakout areas were also well used, as was a little outdoor cafe/terrace overlooking the main entrance to the convention centre.

The HERE XYZ developer API.

Our group was introduced to a number of people at HERE, including the CEO and various product managers. Of particular interest to me were the Fleet and Developer API talks – the former because of the “enterprise level” travelling-salesman-problem type (actually the vehicle-routing-with-prizes problem) functionality that is a core part of the platform, and the latter because I’ve already used a little bit of the HERE mapping APIs.

Fleet Management (the "travelling salesman problem" solver)

SoMo

I also chatted to the HERE Mobility team, who had a presence in the HERE booth as well as their own display in the main exhibition halls. HERE Mobility, who operate almost as a "start-up" within HERE, have the most obvious "consumer" presence of HERE, and launched their new "SoMo" app, which aims to be an "honest broker" multi-mobility navigation tool. SoMo, which is short for Social Mobility, aims to offer various rideshare options from third parties, as well as transit and driving information. Its key distinction, apart from being a platform for smaller rideshares, is to allow easy pooling of ride opportunities with friends/contacts who also need to journey to the same place.

They have identified a number of scenarios where this is useful – for example, people from a particular neighbourhood who are all planning to go to a music concert at a specific venue in another part of the city. The theory is that fans of the same artist might want to travel together, pool the costs, and find a good-value or available service, where the "big two" rideshares Uber and Lyft – who are not on the platform (and indeed are building their own multimodal platforms) – may not be present in a particular city, or may not have the necessary availability or price point on the ground.

SoMo will likely work best when you have a number of friends/contacts using it, and sufficient coverage of timely services in the cities where the users are. As such, it will live or die by the volumes of people using it, hence their big push to have the new app downloaded as widely as possible.

One HERE announcement at CES is of immediate interest to me: my Alexa Echo Dot is finally location aware, worldwide. It was frustrating that it was unable to give me directions or time estimates while my Google Home Mini could. Amazon and HERE announced a partnership where the HERE location platform (with its routing capability, traffic awareness and huge map and POI database underlying it) provides location information in response to relevant queries to Alexa. This is not through an add-on "skill" (Alexa's terminology for apps) but is built into the core of the device's response framework.

More map layers and location data available through HERE APIs.

Thank you to HERE Technologies for inviting me to CES and organising the trip and insight day.

Categories
Technical

Panama’s Population Geographies

Panama is a Central American country with a population of around 4 million. The country is split into 10 provinces (including one that was split from another in 2014). The population is obliged to register for and obtain an ID card, or "cedula", which contains an interesting attribute: the prefix of the ID number indicates the holder's province of birth. This not only allows the mapping and analysis of surname (and other) demographic information across the country, but, if combined with information on current location, also allows for a rudimentary analysis of internal migration within the country.

This official document contains lots of useful information. Subsequent to it, the "Panamá" province has split into two, with the westernmost section becoming Panamá Oeste (Panama West) and gaining the new province number 13. In practice, the great majority of people living there retain the prefix 8, as the population with "13-" prefixes will be too young to have appeared on school attendance lists, jury service lists, exam candidate lists or government worker salary transparency lists. Here is the very first No. 13, Ashly Ríos, getting the number 13-1-001. (People are required to obtain their number by the age of 18, but you can be registered at birth.)

For most people born in Panama, the cedula number prefix indicates the following province of birth:

Province                                  Cedula prefix   Former prefix
Bocas del Toro                            1               –
Coclé                                     2               –
Colón                                     3               –
Chiriquí                                  4               –
Darién                                    5               –
Herrera                                   6               –
Los Santos                                7               –
Panamá                                    8               –
Panamá Oeste                              13              8 (pre-2014)
Veraguas                                  9               –
Guna Yala (indigenous province)           10              3SB (pre-1953)
Madungandí (indigenous sub-province)      10*             8PI (pre-2014)
Wargandi (indigenous sub-province)        10*             5PI (pre-2014)
Emberá-Wounaan (indigenous province)      11              5PI (pre-1983)
Ngäbe-Buglé (indigenous province)         12              1PI, 4PI or 9PI (pre-1997)

* These were briefly assigned No. 13, before being changed to 10.

The format of the cedula number is generally X-YYY-ZZZZ where X is the province number, YYY is the registry book number and ZZZZ is the number within the book. However, for certain groups, the prefix is different. If SB appears after the province prefix, this is an indication that the person was born in Guna Yala (formerly called San Blas), but before it became a standalone indigenous province. Other indigenous areas, some of which have not formally become provinces, were indicated by PI appearing after the prefix of the former or enclosing province, or AV if very old (born pre-1914). However, the numerical codes are now used.

Panamanians born outside the country get "PE" as their prefix instead. Foreigners are assigned "EE" while they retain their immigrant status. If they gain permanent residence rights, they are assigned "E", and if they become full Panamanian citizens, they are assigned "N". PE, N, E and EE do not officially have an associated province prefix, although one (or "00") is occasionally added in third-party lists. These people can also be assigned a separate ID starting with "NT" and with an associated province prefix – this is a temporary ID issued for tax purposes, rather than a full cedula number.
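As a rough sketch of how the prefix could be decoded in code (the handling of the old letter suffixes and NT tax IDs is approximate – real numbers have more variations than this):

// Sketch of decoding a cedula prefix into a province of birth, based on the
// prefixes above. Old letter suffixes (SB/PI/AV) and NT tax IDs are handled
// only roughly.
var provinces = {
	'1': 'Bocas del Toro', '2': 'Coclé', '3': 'Colón', '4': 'Chiriquí',
	'5': 'Darién', '6': 'Herrera', '7': 'Los Santos', '8': 'Panamá',
	'9': 'Veraguas', '10': 'Guna Yala', // 10 also covers the indigenous sub-provinces
	'11': 'Emberá-Wounaan', '12': 'Ngäbe-Buglé', '13': 'Panamá Oeste'
};

function describeCedula(cedula) {
	// NT tax IDs (which also carry a province number) are not full cedulas
	if (cedula.indexOf('NT') !== -1) return 'Temporary tax ID, not a full cedula';

	var prefix = cedula.split('-')[0]; // e.g. "8-123-4567" -> "8"
	if (prefix === 'PE') return 'Panamanian born abroad';
	if (prefix === 'N')  return 'Naturalised citizen';
	if (prefix === 'E')  return 'Foreigner with permanent residence';
	if (prefix === 'EE') return 'Foreigner (immigrant status)';

	// strip old-style letter suffixes such as "8PI" or "3SB"
	var match = prefix.match(/^(\d+)([A-Z]*)$/);
	if (!match) return 'Unrecognised prefix';
	var province = provinces[match[1]] || 'Unknown province';
	return 'Born in ' + province + (match[2] ? ' (old-style ' + match[2] + ' registration)' : '');
}

console.log(describeCedula('13-1-001')); // "Born in Panamá Oeste"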

Categories
Technical

Taxonomy of Web Mapping Frameworks and Formats

Here’s an attempt to create a simple taxonomy of the currently active and popular web mapping frameworks available. This covers web mapping that delivers a consumer-navigable geographic “slippy” map of raster and/or vector tiles containing bespoke geographic data.

FRAMEWORKS

(Ordered from easy, low-cost, limited and quick to set up at the top, through to flexible but needing more resources, time and skill – i.e. hard – at the bottom.)

  • Ecosystems: Mapbox Studio, CARTO Builder, ESRI ArcGIS Online, Tableau
  • Hosted Wrappers: Google Fusion Tables, Google MyMaps
  • Managed Wrappers: Google Maps Embed API, Google Static Maps API, OSM StaticMapLite
  • Managed APIs: HERE Maps API for JavaScript, Google Maps JavaScript API, Microsoft Bing Maps V8 SDK
  • Open Frameworks: OpenLayers, Leaflet, D3 DataMaps, Leaflet for R/RStudio, RMaps
  • Spatial Servers: MapServer, GeoServer
  • Server Programming: R (ggplot), Unfolding (Processing/Java), Mapnik (C++/Python)

Capabilities and requirements of the above frameworks vary across this spectrum: data analysis support, remote server dependency, web space required, server with shell access required, and scripting/programming knowledge required.

I will aim to update this based on feedback and new discoveries. This initial version is based on my own usage and experience in the field, so it is quite possible there are some very obvious candidates I have missed.

Additionally (and with the same proviso as above), here's a 2×2 table of file formats used in slippy and static web mapping, for vectors and rasters – the latter including attribute fields like UTF Grids. I am only including formats widely used in web mapping, rather than GIS in general.

DATA SPECIFICATIONS & FILE FORMATS

  • Raster, static "WebGIS": OGC WMS – GIF, JPG, PNG, (Geo)TIFF
  • Raster, tiled "slippy": TMS, WMTS, XYZ, UTFGrid – GIF, PNG, JSON
  • Vector, static "WebGIS": OGC WFS, GeoJSON, TopoJSON, KML, SVG – XML, SHP, JSON
  • Vector, tiled "slippy": Mapbox Vector Tile Specification – JSON, PBF
Categories
CDRC London Technical

Big Data Here: The Code

So Big Data Here, a little pop-up exhibition of hyperlocal data, has just closed, having run continuously from Tuesday evening to this morning, as part of Big Data Week. We had many people peering through the windows of the characterful North Lodge building beside UCL’s main entrance on Gower Street, particularly during the evening rush hour, when the main projection was obvious through the windows in the dark, and some interested visitors were also able to come inside the room itself and take a closer look during our open sessions on Wednesday, Thursday and Friday afternoons.

Thanks to the Centre for Advanced Spatial Analysis (CASA) for loaning the special floor-mounted projector and the iPad Wall, the Consumer Data Research Centre (CDRC) for arranging for the exhibition with UCL Events, Steven Gray for helping with the configuration and setup of the iPad Wall, Bala Soundararaj for creating visuals of footfall data for 4 of the 12 iPad Wall panels, Jeff for logistics help, Navta for publicity and Wen, Tian, Roberto, Bala and Sarah for helping with the open sessions and logistics.

The exhibition website is here.

I created three custom local data visualisations for the big screen that was the main exhibit in the pop-up. Each of these was shown for around 24 hours, but you can relive the experience from the comfort of your own computer:

[Image: bdh_buses]

1. Arrival Board

View / Code

This was shown from Tuesday until Wednesday evening, and consisted of a live, souped-up "countdown" board for the bus stop outside, alongside one for Euston Square tube station just up the road. Both bus stops and tube stations in London have predicted arrival information supplied by TfL through a "push" API. My code was based on a nice bit of sample code from GitHub, created by one of TfL's developers. You can see the Arrival Board here or download the code on GitHub. This is a slightly enhanced version that includes additional information (e.g. bus registration numbers) that I had to hide during the exhibition due to space constraints.

Customisation: Note that you need to specify a Naptan ID on the URL to show your bus stop or tube station of choice. To find it out, go here, click "Buses" or "Tube…", then select your route/line, then the stop/station. Once you are viewing the individual stop page, note that the Naptan ID forms part of the URL – copy it and paste it into the Arrival Board URL. For example, the Naptan ID for this page is 940GZZLUBSC, so your Arrival Board URL needs to be this.
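As an aside, a much simpler polling alternative to the push API is TfL's unified API – the sketch below is not the exhibit's code, just an illustration of fetching the Arrivals endpoint for the Naptan ID mentioned above:

// A simpler polling alternative to the push API used for the exhibit: fetch the
// predicted arrivals for one stop from TfL's unified API and list the next few.
var naptanId = '940GZZLUBSC'; // the example Naptan ID mentioned above

fetch('https://api.tfl.gov.uk/StopPoint/' + naptanId + '/Arrivals')
	.then(function (response) { return response.json(); })
	.then(function (arrivals) {
		arrivals
			.sort(function (a, b) { return a.timeToStation - b.timeToStation; })
			.slice(0, 5)
			.forEach(function (a) {
				console.log(a.lineName + ' to ' + a.destinationName + ' in ' +
					Math.round(a.timeToStation / 60) + ' min');
			});
	});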

[Image: bdh_traffic2]

2. Traffic Cameras

View / Code

This was shown from Wednesday evening until Friday morning, and consisted of a looping video feed from the TfL traffic camera positioned right outside the North Lodge. The feed is a 10 second loop and is updated every five minutes. The exhibition version then had 12 other feeds, surrounding the main one and representing the nearest camera in each direction. The code is a slightly modified version of the London Panopticon which you can also get the code for on Github.

Customisation: You can specify a custom location by adding ?lat=X&lon=Y to the URL, using decimal coordinates – find these out from OpenStreetMap. (N.B. TfL has recently changed the way it makes available the list of traffic cameras, so the list used by London Panopticon may not be completely up-to-date.)
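Reading that override might look something like this (the default coordinates are placeholders, not the actual camera location):

// How the ?lat=X&lon=Y override can be read, falling back to a default location
// (the coordinates below are placeholders, not the actual North Lodge camera).
var params = new URLSearchParams(window.location.search);
var lat = parseFloat(params.get('lat')) || 51.5246; // default latitude
var lon = parseFloat(params.get('lon')) || -0.1340; // default longitude
console.log('Centring the camera search on', lat, lon);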

[Image: bdh_census]

3. Census Numbers

View / Code

Finally, the screen showed randomly chosen statistical numbers, for the local Bloomsbury ward that UCL is in, from the 2011 Census. Again, you can see it in action here (wait 10 seconds for each change, or refresh), and download the code from GitHub.

Customisation: This one needs a file for each area it is used in and unfortunately I have, for now, only produced one for Bloomsbury. The data originally came, via the NOMIS download service, from the Office for National Statistics and is Crown Copyright.
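The rotation logic is essentially the following sketch – the data file name, its structure and the element ids are assumptions rather than the format of the real Bloomsbury file:

// Sketch of the rotation: pick a random statistic from a prepared list for the
// ward and swap it in every 10 seconds. File name, structure and element ids
// are assumptions.
fetch('data/bloomsbury.json')
	.then(function (response) { return response.json(); })
	.then(function (stats) {
		// e.g. stats = [{ label: "Residents aged 20 to 24", value: 3210 }, ...]
		function showRandomStat() {
			var stat = stats[Math.floor(Math.random() * stats.length)];
			document.getElementById('number').textContent = stat.value;
			document.getElementById('caption').textContent = stat.label;
		}
		showRandomStat();
		setInterval(showRandomStat, 10000); // change every 10 seconds
	});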

[Image: bdh_traffic3]

Categories
Technical

Behind the Code in Tube Heartbeat

Cross-posted to the 360 Here blog.

As a follow-up to my intro post about Tube Heartbeat, here are some notes on the API usage that allowed me to get the digital cartography right and build out the interactive visualisation I wanted.

The key technology behind the visualisation is the HERE JavaScript API. This not only displays the background HERE map tiles and provides the "slippy map" panning/zoom and scale controls, but also allows the transportation data to be easily overlaid on top. It's the first project I've created on the HERE platform and the API was easy to get to grips with. The documentation includes plenty of examples, as well as the API reference.

The top feature of the API for me is that it is very fast, both on desktop browsers and on smartphones. I have struggled in the past with needing to optimise code or reduce functionality to show interactive mapped content on smartphones – not just needing to design a small-screen UI, but dealing with the browser struggling to show sometimes complex and large-volume spatial data. The API has some nice specific features too; here are some that I used:
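For context, getting a basic map on screen takes only a handful of lines – roughly the following (the credentials and element id are placeholders; at the time the API took an app_id and app_code):

// Roughly how a basic map gets set up (credentials and element id are placeholders).
var platform = new H.service.Platform({
	app_id: 'YOUR_APP_ID',
	app_code: 'YOUR_APP_CODE'
});
var defaultLayers = platform.createDefaultLayers();

var map = new H.Map(
	document.getElementById('map'),
	defaultLayers.normal.base, // the simplified "base" tiles discussed below
	{ center: { lat: 51.507, lng: -0.128 }, zoom: 11 }
);

// standard pan/zoom behaviour and default UI controls
var behavior = new H.mapevents.Behavior(new H.mapevents.MapEvents(map));
var ui = H.ui.UI.createDefault(map, defaultLayers);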

Arrows

One of the smallest features, but a very nice one I haven’t come across elsewhere, is the addition of arrows along vector lines, showing their direction. Useful for routing, but also useful for showing which flow is currently being shown on a bi-directional dataset – all the lines on Tube Heartbeat use it:

var strip = new H.geo.Strip();
strip.pushPoint({ lat: startLat, lng: startLon });
strip.pushPoint({ lat: endLat, lng: endLon });

var polyline;
var arrowWidth = 0.5; /* example value */

polyline = new H.map.Polyline(
	strip, { 
		style: { ... }, 
		arrows: { 
			fillColor: 'rgba(255, 255, 255, 0.5)', 
			frequency: 2, 
			width: arrowWidth, 
			length: arrowWidth*1.5 
		}
	}
);

polyline.zorder = lines[lineID].zorder;

The frequency at which the arrows occur can be specified, as well as their width and length. I'm using quite elongated ones, which are 3 times as long as they are wide and occupy the middle half of the line (above/below certain flow thresholds, I used different numbers). A frequency of 2 means there is an arrow-sized gap between each one; using 1 results in a continuous stream of arrows. (N.B. Rendering quirks in some browsers mean that other gaps may appear too.) Here, the blue and red segments have a frequency of 1 and a width of 0.2, while the smaller flows in the brown segments are shown with a frequency of 2 and a width of 0.5, as in the example code above:

[Image: tubeflows]

Z-Order

Z-order is important so that the map has a natural hierarchy of data. I decided to use an order where the busiest tube lines were generally at the bottom, with the quieter lines being layered on top of them (i.e. having a higher Z-order). Because the busier tube lines are shown with correspondingly fatter vector lines on the map, the ordering means that generally all the data can be seen at once, rather than some lines being hidden. You can see the order in the penultimate column of my lines data file (CSV). I'm specifying z-order simply as a custom "zorder" property on the H.map.Polyline, as shown in the code sample above. This then gets used later when assembling the lines to draw, in a group (see below).

Translucency

I'm using translucency both as a cartographical tool and to ensure that data does not otherwise become invisible. The latter is simply achieved by using RGBA colours rather than the more normal hexadecimals; that is, colours with an opacity specified as well as the colour components. In the code block above, "rgba(255, 255, 255, 0.5)" gives white arrows which are only 50% opaque. The tube lines themselves are shown as 70% opaque – specified in the lines data file along with the z-order – which allows their colour to appear strongly while still letting other lines and background map features/captions, such as road or neighbourhood names, be observable.

While objects such as the tube lines can be made translucent by manipulating their colour values, layers themselves always display at 100% opacity. This is probably a good thing, because translucent map image layers could look a mess if you layered multiple ones on top of each other, but it means you need to use a different technique if you want to tint or fade a layer. Because even the simplified "base" background map tiles from HERE for London have a lot of detail on them, and the "xbase" extra-simplified ones don't have enough for my purposes, I needed a half-way house approach. I achieved this by creating a geographical object in code and placing it on top of the layers:

var tintStyle = {
	fillColor: 'rgba(240, 240, 240, 0.35)'
};
var rect = new H.map.Rect(
	new H.geo.Rect(	42, -7, 58, 7 ), 
	{ style: tintStyle }
);
map.addObject(rect);

The object here is a very light grey box, at 35% opacity, with an extent that covers all of the London area and well beyond. In the HERE JavaScript API, such objects automatically go on top of the layers. My tint doesn't affect the lines or stations, because I add the two groups containing them after my rectangle:

var stationGroup = new H.map.Group();
var segGroup = new H.map.Group();
map.addObject(segGroup);
map.addObject(stationGroup);  

Object Groups

I can add and remove objects from the above groups rather than directly on the map object, and the groups themselves remain in place, ordered above my tint and the background map layers. Objects are drawn in the order they appear in the group – the so-called "Painter's Algorithm" – hence why I sort using the "zorder" value I specified earlier:

function segSort(a, b)
{
	var lineA = parseInt(a.zorder);
	var lineB = parseInt(b.zorder);
	if (lineA > lineB) return 1;	
	if (lineA < lineB) return -1;
	return 0;
}

var segsToDraw = [];
segGroup.removeAll();

...

segsToDraw.sort(segSort);	
for (var i in segsToDraw)
{
	segGroup.addObject(segsToDraw[i]);									
}		 

Circles

These are super easy to create, and illustrate the second reason that I very much like the HERE JavaScript API. The code is obvious:

var circle = new H.map.Circle(
	{	
		lat: Number(stations[i].lat), 
		lng: Number(stations[i].lon)
	}, 
	radius, 
	{ 
		style: { 
			strokeColor: stationColour, 
			fillColor: 'rgba(255, 255, 255, 0.8)', 
			lineWidth: 3 
		} 
	}
);

These are my station circles. They are thickly bordered white circles, as is the tradition for stations on maps of the London Underground as well as many other metros worldwide, but with a little bit of translucency to allow background map details to still be glimpsed. Here you can see the circle translucencies, as well as those on the lines, and the arrows themselves, with the lines also ordered as per the z-order specification, so that the popular Victoria line (light blue) doesn't obscure the Northern line (black):

[Image: translucency]

Other Technologies

As well as the HERE JavaScript API, I used jQuery to short-cut some of the non-map JavaScript coding, jQuery UI for some of the user controls, and the Google Visualization API (aka Google Charts) for the graphs. Google's Visualization API is full-featured, although a note of caution: I am using their new "Material" look, which works better on mobile and looks nicer than their regular "Classic" look, but it is still very much in development – it is missing quite a few features of the older version, and sometimes requires the use of configuration converters – so check Google's documentation carefully. However, it produces nicer looking charts of the data, a trade-off that I decided was worth making:

[Image: google_material_chart2]
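As an aside, a minimal Material bar chart, including the convertOptions conversion step mentioned above, looks roughly like this (the data values are invented for illustration):

// Minimal Material bar chart, including the convertOptions step mentioned above.
// The data values are invented for illustration.
google.charts.load('current', { packages: ['bar'] });
google.charts.setOnLoadCallback(drawChart);

function drawChart() {
	var data = google.visualization.arrayToDataTable([
		['Quarter hour', 'Entries', 'Exits'],
		['08:00', 120, 45],
		['08:15', 180, 60],
		['08:30', 240, 75]
	]);

	var options = {
		chart: { title: 'Station flows' },
		legend: { position: 'top' }
	};

	var chart = new google.charts.Bar(document.getElementById('chart'));
	// Classic-style options need converting for the Material charts
	chart.draw(data, google.charts.Bar.convertOptions(options));
}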

These are just some of the techniques I used for Tube Heartbeat, and I have only scratched the surface of the HERE APIs – there are all sorts of interesting ones I could additionally incorporate, including some you might not expect, such as a Weather API.

Try out Tube Heartbeat for yourself.

Background map tiles shown here are Copyright HERE 2016.

Categories
Technical

OS Open

Ordnance Survey have this week released four new additions to their Open Data product suite. The four, which were announced earlier this month, are collectively branded as OS Open and include OS Open Map Local, which, like Vector Map District (VMD), is a vector dataset containing files for various feature types, such as building polygons and railway stations. The resolution of the buildings in particular is much greater than VMD – surprisingly good, in fact. I had expected the data to be similar in resolution to the (rasterised) OS StreetView but it turns out it’s even more detailed than that. The specimen resolution for OS Open Map Local is 1:10000, with suggested uses down to a scale of 1:3000, which is really quite zoomed in. Two new files in OS Open Map Local are “Important Buildings” (universities, hospitals etc) and “Functional Areas” which outline the land containing such important buildings.

[Images: osopendata_oldnew, osvmd_osm]

Above: Comparing the building polygon detail in the older Vector Map District (top left), previously the largest scale vector building open data from Ordnance Survey, and the brand new OS Open Map Local (top right). The new data is clearly much higher resolution, however one anomaly is that roads going under buildings no longer break the buildings – note the wiggly road in the centre of the top left sample, Malet Place, which runs through the university and under a building, doesn’t appear in full on the right. Two other sources of large-scale building polygons are OS StreetView (bottom left), which is only available as a raster, and OpenStreetMap (bottom right). The OS data is Crown Copyright and Database right OS, 2015. The OSM data is Copyright OSM contributors, 2015.

The other three new products, under the OS Open banner, are OS Open Names, OS Open Rivers and OS Open Roads. The latter two are topological datasets – that is, they are connected node networks, which allow routing to be calculated. OS Open Names is a detailed gazetteer. These latter three products are great as "official", "complete" specialised datasets, but they have good equivalents on the OpenStreetMap project. OS Open Map Local is different – it offers spatial data that is generally much higher in accuracy than most building shapes already on OpenStreetMap, including inward-facing walls of buildings which are not visible from the street and so are difficult for the amateur mapper to spot. As such, it is a compelling addition to the open data landscape of Great Britain.

The OS also confirmed last week the location for its new Innovation Hub. It is indeed a mile from King's Cross – specifically, it's in Clerkenwell, and the hub will be sharing space with the Future Cities Catapult. Conveniently, the new venue has a presentation space, and the May Geomob will be taking place there.

Categories
Technical

Ordnance Survey Open Data – The Next Level of Detail

An encouraging announcement from BIS (the Department for Business, Innovation and Skills) a few days ago regarding future Open Data products from the Ordnance Survey (press release here) – two pieces of good news:

  • The OS will be launching a new, detailed set of vector data as Open Data at the end of this month. They are branding it as OS OpenMap, but it looks a lot like a vector version of OS StreetView, which is already available as a raster. The key additions will be "functional polygons", which show the boundaries of school and hospital sites, and more detailed building outlines. OS Vector Map District, which is part of the existing Open Data release, is already pretty good for building outlines – it forms the core part of DataShine and this print, to name just two pieces of my work that have used the footprints extensively. With OpenMap, potentially both of these could benefit, and we might even get attribute information about building types, which would mean I could filter out non-residential buildings in DataShine. What we definitely do get is the inclusion of unique building identifiers – potentially this could allow a crowd-sourced building classification exercise if the attribution information isn't there. OpenMap also includes a detailed and topological (i.e. joined up under the bridges) water network, and an enhanced gazetteer, i.e. placename database.
  • The other announcement relates to the establishment of an innovation hub in London – an incubator for geo-related startups. The OS are being cagey about exactly where it will be, saying just that it will be on the outskirts of the Knowledge Quarter, which is defined as being within a mile of King’s Cross. UCL’s about a mile away. So maybe it will be very close to here? In any case, it will be somewhere near the edge of the green circle on the (Google) map below…

p.s. The Ordnance Survey have also recently rebranded themselves as just "OS". Like University College London rebranding itself as "UCL" a few years ago, and ESRI calling itself Esri (and pronouncing it like a word), it will be interesting to see if it sticks. OS for me stands for "open source" and is also very close to OSM (OpenStreetMap), so some confusion may follow. It does however mean a shorter attribution line for when I use OS data in my web maps.


London’s Knowledge Quarter
Categories
Technical

Downtime

Various websites I’ve built, and mentioned here on oobrien.com from time to time, are down from Friday at 5pm until Monday noon (all times GMT), due to a major power upgrade for the building that the server is in.

This affects the following websites:

  • DataShine
  • CDRC
  • Bike Share Map
  • Tube Tongues
  • OpenOrienteeringMap (extremely degraded)
  • Some other smaller visualisations

However the following are hosted on different servers and so will remain up: