Seeing Red: 15 Ways the Boris Bikes of London Could be Better


A big announcement for the “Boris Bikes” today, aka Barclays Cycle Hire. London’s bikeshare system, the second largest in the western world after Paris’s Velib and nearly five years old, will be rebranded as Santander Cycles, and the bikes will have a new, bright red branding – Santander’s corporate colour, and conveniently also London’s most famous colour. As well as the Santander logo, it looks like the “Santa Bikes” will have outlines of London’s icons – one publicity photo shows the Tower of London and the Orbit, while another includes the Shard and Tower Bridge. A nice touch to remind people these are London’s bikes.

It’s great that London’s system can attract “big” sponsors – £7m a year with the new deal – but another document that I spotted today reveals (on the last page) that, despite the sponsorship, London’s system runs at a large operating loss. This is all the more puzzling because other big bikeshare systems can (almost) cover their operating costs – including Washington DC’s, which is both similar to London’s in some ways (a good core density, same bike/dock equipment) and different (coverage into the suburbs, rider incentives); and Paris’s, which has a very different funding model and its own set of advantages (coverage throughout the city) and disadvantages (little incentive to expand/intensify). What are they doing right that London is not?

In financial year 2013/4, London’s bikeshare had operating costs of £24.3m. Over this time period, the maximum number of bikes that were available to hire, according to TfL’s Open Data Portal was 9471, on 26 March 2014. This represents a cost of just over £2500 per bike, for that year alone. If you look at it another way, each bike is typically used three times a day or ~1000 times a year, so that’s about £2.50 a journey, of which, very roughly, the sponsor pays about £0.50, the taxpayer £1 and the user about £1. In those terms it does sound better value but it’s still a surprisingly expensive system.
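As a rough back-of-the-envelope check on those figures (a sketch only – the three-journeys-per-bike-per-day number is the approximation quoted above):

// Rough 2013/4 figures quoted above.
var operatingCost = 24300000;      // £24.3m annual operating cost
var peakBikes = 9471;              // maximum bikes available to hire (26 March 2014)
var journeysPerBikePerDay = 3;     // approximate typical usage

var costPerBike = operatingCost / peakBikes;                   // ≈ £2,566 per bike per year
var journeysPerYear = peakBikes * journeysPerBikePerDay * 365; // ≈ 10.4m journeys
var costPerJourney = operatingCost / journeysPerYear;          // ≈ £2.34 per journey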

As operating costs, these don’t include the costs of buying the bikes or building the docking stations. Much of the cost is therefore likely occurring in two places:

  1. Repairing the bikes – London’s system is wildly* successful, so each bike sees a lot of use every day, and the wear and tear is likely to be considerable. This is not helped by the manufacturers of the bikes going bust a couple of years ago – so there are no “new” ones out there to replace the older ones – New York City, which uses the same bikes, is suffering similar problems. (* Update: To clarify, based on a comment from BorisWatch, this assertion is a qualitative one, based on seeing huge numbers of the bikes in use, in certain places at certain times of the day. Doubtless, some do remain dormant for days.)
  2. Rebalancing/redistribution activity, operating a fleet of vehicles that move bikes around.

I have no great issues with the costs of the bikes – they are a public service and the costs are likely a fraction of the costs of maintaining the other public assets of roads, buses and railway lines – but it is frustrating to see, in the document I referred to earlier, that the main beneficiaries are in fact tourists (the Hyde Park docking stations are consistently the most popular), commuters (the docking stations around Waterloo are always popular on weekdays), and those Londoners lucky enough to live in Zone 1 and certain targeted parts of Zone 2 (south-west and east). Wouldn’t it be great if all Londoners benefited from the system?

Here are 15 ways that London’s bikeshare could be made better for Londoners (and indeed for all) – and maybe cheaper to operate too:

  1. Scrap almost all rebalancing activity. It’s very expensive (trucks, drivers, petrol), and I’m not convinced it is actually helping the system – in fact it might be making it worse. Most cycling flows in London are uni-directional – in to the centre in the morning, back out in the evening – or random (tourist activity). Both of these kinds of flows will, across a day, balance out on their own. Rebalancing disrupts these flows, removing the bikes from where they are needed later in the day (or the following morning) to address a short-term perceived imbalance that might not be real on the ground. An empty docking station is not a problem if no one wants to start a journey there. Plus, when the bikes are sitting in vans, inevitably clogged in traffic, they are of no use to anyone. Revealingly, the distribution drivers went on strike in London a few months ago and basically everything carried on as normal. Some “lightweight” rebalancing, using cycle couriers and trailers, could help with specific small-scale “pinch points”, or respond to exceptional circumstances such as heavy rainfall or a sporting/music event. New York uses cyclists/trailers to help with its rebalancing.
  2. Have a “guaranteed valet” service instead, like in New York. This operates for a certain number of key docking stations at certain times of the day, and guarantees that someone can start or finish their journey there. London already has this, to a certain extent, at some stations near Waterloo, but it would be good to highlight this more and have it at other key destinations. This “static” supply/demand management would be a much better use of the time of redistribution drivers.
  3. Have “rider rewards”, like in Washington DC. Incentivise users to redistribute the bikes themselves, by allowing a free subsequent day’s credit (or a free 60-minute journey extension) for journeys that start at a full docking station and end at an empty one. This would need to be designed with care, to ensure that “over-rebalancing”, or malicious marking of bikes as broken, was minimised. Everyone values the system in different ways, so some people would benefit from a more naturally balanced system and others from lower costs of using it.
  4. Have more flexible user rules. Paris’s Velib has an enhanced membership, “Passion”, that allows free single journeys of up to 45 minutes rather than 30 minutes. London, like Paris, is a large city, and the current 30-minute cutoff seems short and arbitrary, considering most bikes are used only around three times a day. Increasing the window would therefore have little impact on the overall distribution of the system and might in fact benefit it – because the journeys from the terminal stations to the City or the West End, which are the most distinctive flows seen, are achieved comfortably in under half an hour. In London, you have to wait 5 minutes between hires, but most systems (Paris, Boston, New York) don’t have this “timeout” period. To stop people “guarding” recently returned bikes for additional use, an alternative could be to make it a 10-minute timeout but tie it to the specific docking station (or indeed a specific bike) rather than system-wide. Then, if people are prepared to switch bikes or docking stations, they can continue on longer journeys for free.
  5. Adjust performance metrics. TfL (and the sponsors) measure performance of the system in certain ways, such as the time a docking station remains empty at certain times of the day. I’m not sure that these are helpful – surely the principal metric of value (along with customer service resolution) is the number of journeys per time period and/or the number of distinct users per time period. If these numbers go down over a long period, something’s wrong. The performance metrics, as they stand, are perhaps encouraging the unnecessary and possibly harmful rebalancing activity, increasing costs with no actual benefit to the system.
  6. Remove the density rule (one docking station every ~300 metres) except in Zone 1. Having high density in the centre and low density in the suburbs works well for many systems – e.g. Bordeaux, Lyon and Washington DC – because it allows the system to be accessible to a much larger population, without flooding huge areas with expensive stations/bikes. As an extreme example, this docking station in a US city is several miles from its nearest neighbour.
  7. Build a docking station outside EVERY tube station, train station and bus station inside the North/South Circular (roughly, Zones 1-3). Yes, no matter how hilly* the area is, or how little existing cycling culture it has – stop assuming how people use bikes or who uses them! Bikeshare is a “last mile” transport option and it should be thought of as part of someone’s journey across London, and as a life benefit, not as a tourist attraction. The system should also look to expand into these areas iteratively, rather than in “big bang” expansion phases. It’s crazy that most of Hackney and Islington doesn’t have the bikeshare, despite having a very high cycling population. Wouldn’t it be great if people without their own bikes could be part of the “cycling cafe culture” that is strong in these places? For other places that have never had a cycling culture, the addition of a docking station in a prominent space might encourage some there to try cycling for the first time. (*This version of the bikes could be useful.)
  8. Annual membership (currently £90) should be split into peak and off-peak (no journey starts from 6am-10am) memberships, the former increased to £120 and the latter decreased back to £45. Unlike the buses and trains, which are always full at peak times and pretty busy off-peak too, there is a big peak/off-peak split in demand for the bikes. Commuters get a really good deal as it stands. Sure, it costs more than buying a very cheap bike, but actually you aren’t buying the use of a bike – you are buying the free servicing of the bike for a year, and the free redistribution of “your” bike to another part of central London if you are going out in the evening. Commuters who use the bikes day-in-day-out should pay more. Utility users, who use the bike to get to the shops, are the sorts that should be targeted more, with off-peak membership.
  9. A better online map *cough* of availability. The official map still doesn’t have at-a-glance availability. “Rainbow-board” type indications of availability in certain key areas of London would also be very useful. Weekday use, in particular, follows distinct and regular patterns in places.
  10. Better indication of where the nearest bikes/docks are, if you are at a full/empty docking station, i.e. a map with route indication to several docking stations nearby with availability.
  11. Better static signage of your nearest docking station. I see very few street signs pointing to the local docking station, even though they are hard-built into the ground and so generally are pretty permanent features.
  12. Move more services online, have a smaller help centre. A better view of journeys done (a personal map of journeys would be nice) and the ability to question overpayments/charges online.
  13. Encourage innovative use of the bikeshare data, via online competitions – e.g. Boston’s Hubway data visualisation competitions have had lots of great entries. These get further groups interested in the system and ways to improve it, and can produce great visuals to allow the operator/owner to demonstrate the reach and power of the system.
  14. Allow use of the system with contactless payment cards, and so integration with travelcards, daily TfL transport price caps etc. The system can’t use Oyster cards because of the need to have an ability to take a “block payment” charge for non-return of the bikes. But with contactless payment, this could be achieved. The cost of upgrading the docking points to take cards would be high, but such docking points are available and in use in many of the newer US systems that use the same technology.
  15. Require that all new housing developments above a certain size, in say Zones 1-3, include a docking station with at least one docking point per 20 residents and one new bike per 40 residents, either on their site or within 300m of their development boundary. (Update: Euan Mills mentions this is already the case, within the current area. To clarify, I would like to see this beyond the current area, allowing an organic growth outwards and linking with the sparser tube station sites of point 7.)

London has got much right – it “went big” which is expensive but the only way to have a genuinely successful system that sees tens of thousands of journeys on most days. It also used a high-quality, rugged system that can (now) cope with the usage – again, an expensive option but absolutely necessary for it to work in the long term. It has also made much data available on the system, allowing for interesting research and increasing transparency. But it could be so much better still.

[Image: Washington DC’s system – same technology as London’s, not that much smaller, but profitable.]


Downtime

Various websites I’ve built, and mentioned here on oobrien.com from time to time, are down from Friday at 5pm until Monday noon (all times GMT), due to a major power upgrade for the building that the server is in.

This affects the following websites:

  • DataShine
  • CDRC
  • Bike Share Map
  • Tube Tongues
  • OpenOrienteeringMap (extremely degraded)
  • Some other smaller visualisations

However the following are hosted on different servers and so will remain up:


GeoComputation: A Practical Primer

GeoComputation: A Practical Primer, edited by Profs Chris Brunsdon and Alex Singleton, has just been published by SAGE.

The book acts both as a reference for the field and as a practical guide to help you get to know aspects of it. Each chapter includes a worked example with step-by-step instructions.

Each chapter has a different author, and includes topics such as spatial data visualisation with R, agent-based modelling, kernel density estimation, spatial interaction models and the Python Spatial Analysis library, PySAL. With 18 chapters, the book runs to over 300 pages and so has the appropriate depth to cover a diverse, active and fast-evolving field.

I wrote a chapter in the book, on open source GIS. I focused particularly on QGIS, as well as mentioning PostGIS, Leaflet, OpenLayers (2) and other parts of the modern open source “geostack”. My worked example describes how to build a map, in QGIS, of London’s railway “not-spots” – places which are further than a mile from a railway station, using open data map files, mainly from the Ordnance Survey. With the guide, you can create a map like the one below:

[Map: London’s railway “not-spots”]

That little spot on its own in central-ish London, by the way, is part of Burgess Park, near Peckham.

The book has only just been published and I was able to slip in brand new screenshots (and slightly updated instructions) just before publication, as QGIS 2.6 came out late last year. So, the book is right up to date, and as such now is a great time to get your copy!

It’s available now in paperback on Amazon: Geocomputation: A Practical Primer.

The first part of my chapter:

[Image: the first page of the chapter]

London Boroughs and Tube Lines


How many of London’s 32 boroughs (& the City of London) would you pass through on a single end-to-end journey on the tube?

It turns out that if you travel the length of the Piccadilly Line (Uxbridge branch), then, in a single journey, you’ll pass through 14 boroughs (and stop at least once in all of them but Barnet). That’s more of London than if you travel on any single Crossrail journey, once it opens in 2018.

| Line | Branch | # Boroughs with Stops | # Boroughs (Total) |
| --- | --- | --- | --- |
| Piccadilly | to Uxbridge | 13 | 14 |
| Crossrail | to Shenfield | 10 | 13 |
| Central | to West Ruislip | 11 | 12 |
| Piccadilly | to Heathrow | 11 | 12 |
| Central | to Ealing Broadway | 10 | 11 |
| Northern | High Barnet to Morden | 10 | 10 |
| District | Upminster to Richmond/Ealing Broadway | 10 | 10 |
| Overground | Richmond to Stratford | 8 | 10 |
| District | Wimbledon to Barking | 9 | 9 |
| Hammersmith & City | | 9 | 9 |
| Jubilee | | 9 | 9 |
| Northern | Edgware to Morden via Bank | 9 | 9 |
| Overground | Clapham Junction to Stratford | 8 | 9 |
| Northern | Edgware to Morden via Charing Cross | 8 | 8 |
| Bakerloo | | 5 | 8 |
| Overground | West Croydon to Highbury & Islington | 7 | 7 |
| Metropolitan | | 7 | 7 |
| Circle | | 7 | 7 |
| Victoria | | 6 | 7 |
| Overground | Clapham Junction to Highbury & Islington | 6 | 7 |
| Overground | Gospel Oak to Barking | 6 | 6 |
| Overground | Watford Junction to Euston | 3 | 6 |
| District | Wimbledon to Edgware Road | 5 | 5 |
| DLR | Bank to Lewisham/Woolwich Arsenal | 4 | 4 |
| Tramlink | Wimbledon to New Addington | 3 | 3 |
| Waterloo & City | | 2 | 3 |
| Cable Car | | 2 | 2 |

See for yourself at http://vis.oobrien.com/tube/#map.

Of course, if you are aiming to see a cross-section of London’s boroughs, in a rush, then the tube probably isn’t the best way, as you’ll be underground for quite a lot of the journey…

Quick-and-Dirty WordPress Site Cloning


Here is a guide to cloning a WordPress(.org) blog on the same server, in 10 steps, on Linux. You’ll definitely need admin access to the blog itself, and probably to the database and server too, depending on your setup. I did this recently as I needed a copy of an existing production site to hack on. If you don’t fancy doing it the quick-and-dirty way, there are, I’m sure, even quicker (and cleaner) ways, by installing plugins.

In the following instructions, substitute X and Y for your existing and new blog, respectively.

0. Do a backup of your current website, like you do normally for an upgrade or archiving, in case anything goes wrong. e.g. under Tools > Export in the WordPress admin interface.

1. Copy all the files:
cp -r /home/~username/www/blog_X /home/~username/www/blog_Y

2. Edit wp-config.php in your new blog directory:

Change:
$table_prefix = 'wp_X_';
to:
$table_prefix = 'wp_Y_';

3. Copy all the database tables (prefixed with wp_X_). The new ones should have a prefix wp_Y_ instead. I used the Copy functionality under the Operations tab in phpMyAdmin (see screenshot below).

4. Edit wp_Y_options:
update wp_Y_options set option_name = 'wp_Y_user_role' where option_name = 'wp_X_user_role';

5. Edit wp_Y_options:
Edit the option_value for rows with option_name values of siteurl and home, pointing them to the new location – mine are the same but one might be different, e.g. if you have your WordPress core files in a subdirectory relative to the directory for the site entry-point on the web.

update wp_Y_options set option_value = 'http://your_server.com/~username/wp_Y' where option_name = 'siteurl';
update wp_Y_options set option_value = 'http://your_server.com/~username/wp_Y' where option_name = 'home';

There may be other rows referencing your old blog name, but these are probably from plugins and therefore probably don’t need to be changed.

6. Edit wp_Y_usermeta:
update wp_Y_usermeta set meta_key = replace(meta_key, 'wp_X', 'wp_Y');

(You can edit the affected rows manually, but I had a lot to do – there are around 5 for each user.)

7. Drop force-upgrade.php in the same directory as wp-config.php and run it from your browser. This rebuilds caches/hashes stored in some of the tables. You can run it repeatedly if necessary (e.g. if you missed a step above); it shouldn’t do any harm.

You can find force-upgrade.php here.

8. Delete force-upgrade.php. Leaving it is a security risk.

9. Log in to your blog in the new location, as normal. Usernames and passwords should be preserved.

[Screenshot: the Copy functionality under the Operations tab in phpMyAdmin]

Bad Maps

<rant> Three maps with glaring errors which I came across yesterday. I’m hesitant to criticise – many of my own maps have, I am sure, issues too (e.g. my Electric Tube map is deliberately way off). But I couldn’t resist calling out this trio, which I spotted within a few hours of each other.

1. Global Metropolitan Urban Area Footprints


This is, in itself, a great concept. I particularly like that the creator has used the urban extent rather than administrative boundaries, which rarely follow the true urban extent of a city. The glaring error is scale. It looks like the creator traced the boundaries of each city’s urban extent in Google Maps (aerial view) or similar. All well and good, but a quirk of representing a 3D globe on a 2D “slippy” map means that the scale in Google Maps (and OpenStreetMap and other maps projected to “WebMercator”) varies with latitude, at a fixed zoom level. This hasn’t been accounted for in the graphic, with the result that cities near the equator (i.e. most of the Asian and African ones) are shown on the map smaller relative to the others, while cities near the poles (e.g. London, Paris, Edmonton, Toronto) are shown misleadingly big. This is a problem because the whole point of the graphic is to compare footprints (and populations) of the major cities. In fact, many of those Chinese and African cities are quite a bit bigger, relative to, for example, London, than the graphic suggests.
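To put a number on the distortion: at a fixed Web Mercator zoom level, map scale is stretched by roughly a factor of 1/cos(latitude), so traced areas are exaggerated by the square of that. A quick sketch (city latitudes approximate):

// Web Mercator stretches lengths by ~1/cos(latitude), so areas by ~1/cos^2(latitude).
function areaExaggeration(latitudeDegrees) {
  var phi = latitudeDegrees * Math.PI / 180;
  return 1 / Math.pow(Math.cos(phi), 2);
}

console.log(areaExaggeration(51.5));  // London    ≈ 2.6
console.log(areaExaggeration(53.5));  // Edmonton  ≈ 2.8
console.log(areaExaggeration(1.35));  // Singapore ≈ 1.0
// So a city at London's latitude appears roughly two-and-a-half times too large,
// by area, relative to an equatorial city traced at the same zoom level.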

2. Where Do All The Jedi Live?


The map is in the Daily Mirror (and their online edition) so it doesn’t need to be a pinnacle of cartographic excellence – just a device to get a story across. However, Oxford and Mid Sussex – 40% of the datapoints – are shown in the wrong place – both are much closer to London than the map suggests. The author suggests they did this to make the text fit – but there are better ways to accommodate text while keeping the centroid dots in the correct locations. It might take a little longer, but then it wouldn’t be – quite simply – wrong. I’m somewhat disappointed that the Mirror not only stoops to the level of Fox News in the accuracy of its mapping, but appears to have no problem with maintaining such an error even when readers point it out. It’s sloppy journalism, and a snub to the cartographic trade, to suggest that relocating whole cities for artistic purposes is not an issue – particularly as so many people in the UK have relatively poor spatial literacy and so can easily be misled.

3. A London map…


I’m not really sure where to begin here. I’m not sure if any of the features are in fact in the right place!

OpenLayers 3 and Vector Data

As part of a project to move most of my OpenLayers 2-powered websites to OpenLayers 3, I have recently converted two more – DataShine: Travel to Work Flows and the North/South Interactive Map. Unlike the main DataShine: Census website, both of these newer conversions include vector geospatial data, so there was additional learning involved during the migration process, mainly relating to vector styling.

North/South Interactive Map

For the North/South Interactive Map, I made use of loading in remote GeoJSON files.

Vector Layers

Here’s a vector layer:

layerPoints = new ol.layer.Vector({
    source: pointSource,
    style: function(feature, res) { return pointStyle(feature, res); }
});

The pointSource is an ol.source.GeoJSON, which requires the projection of the files to be defined, as well as the projection to be displayed, when defining the source for the Vector layer:
pointSource = new ol.source.GeoJSON({
    url: '...',
    defaultProjection: 'EPSG:4326',
    projection: 'EPSG:3857',

    attributions: [ new ol.Attribution({ 'html': "..." }) ]
});

If you wish to do further operations on your data once it is loaded in, you need to add a listener to a remotely loaded (e.g. GeoJSON file) source included within a Vector layer:

pointSource.once('change', function()
{
    if (pointSource.getState() == 'ready')
    { var features = pointSource.getFeatures(); ... }
});

Here’s a typical style function. I’m using a property “highlight” on my feature to style such features differently:

function pointStyle(feature, resolution)
{
    return [
        new ol.style.Style({
            image: new ol.style.Circle({
                radius: (feature.highlight ? 7 : feature.radius ),
                fill: new ol.style.Fill({ color: feature.fillColor }),
                stroke: new ol.style.Stroke({ width: feature.strokeWidth, color: '#fff' })
            }),
            text: new ol.style.Text({
                text: (feature.highlight ? feature.label : ""),
                font: '9px Ubuntu, Gill Sans, Helvetica, Arial, sans-serif',
                fill: new ol.style.Fill({ color: '#fff' })
            })
        })
    ]
};

Interactions

To detect clicks, I used an ol.interaction.Select – N.B. if you don’t specify which layers it applies to, it tries to apply itself to all Vector layers!

var selectClick = new ol.interaction.Select({
    condition: ol.events.condition.click,
    style: function(feature, res) { return pointStyle(feature, res); },
    layers: [layerPoints]
});

selectClick.getFeatures().on('change:length', function(e)
{ ... });

olMap.addInteraction(selectClick);

In my function here I remove the flag from any already-highlighted features and call features[i].changed(); to get the non-highlighted style. You don’t need to call this on what you’ve actually clicked on, as this is done implicitly. There are likely better ways of showing selected/highlighted features, using ol.FeatureOverlay, but I couldn’t get this to work.


MousePosition

There’s quite a nice new utility function which means it was little effort to get an “old style” location indicator in, at the bottom of the North/South interactive:
new ol.control.MousePosition({ projection: "EPSG:4326", coordinateFormat: ol.coordinate.toStringHDMS, className: 'olControlMousePosition' })


DataShine: Travel to Work Flows

This loads vector data in as generic JSON through a regular (non-OL) AJAX call rather than GeoJSON so the processing is a bit more manual. This time, my source for the Vector layer is a simple ol.source.Vector which can be emptied with source.clear(); and reused.

I’m creating lines directly from the JSON, converting from OSGB grid and specifying colour (for the style) as I go – note my use of rgba format, allowing me to specify a partial transparency (of 60%) for the lines:

var startLL = ol.proj.transform([data[start][2], data[start][3]], "EPSG:27700", "EPSG:3857");
var endLL = ol.proj.transform([data[end][2], data[end][3]], "EPSG:27700", "EPSG:3857");
var journeyLine = new ol.geom.LineString([startLL, endLL]);
var lineItem = new ol.Feature({ geometry: journeyLine });
lineItem.strokeColor = 'rgba(255, 0, 0, 0.4)';
lineSource.addFeature(lineItem);

As previously blogged, I’m also using hand-crafted permalinks in both websites, and drag-and-drop KML display and UTF grid mouseovers in the latter, and both have also had their stylesheets tweaked to allow for easy printing – again made possible with OL3.

I’m about ready now to tackle my most complicated OpenLayers project by far, the Bike Share Map.

OpenLayers 3 and DataShine


OpenLayers is a powerful web mapping API that many of my websites use to display full-page “slippy” maps. DataShine: Census has been upgraded to use OpenLayers 3. Previously it was powered by OpenLayers 2, so it doesn’t sound like a major change, but OL3 is a major rewrite and as such it was quite an effort to migrate to it. I’ve run into issues with OL3 before, many of which have since been resolved by the library authors or myself. I was a bit grumbly in that earlier blogpost for which I apologise! Now that I have fought through, the clouds have lifted.

Here are some notes on the upgrade including details on a couple of major new features afforded by the update.

New Features

Drag-and-drop shapes

One of the nicest new features of OL3 is drag-and-dropping of KMLs, GeoJSONs and other geo-data files onto the map (simple example). This adds the features, and pans and zooms the map to the appropriate area. This is likely most useful for showing political/administrative boundaries, allowing for easier visual comparisons. For example, download and drag this file onto DataShine to see the GLA boundary appear. New buttons at the bottom allow for removal or opacity variation of the overlay files. If the added features include a “name” tag, this appears on the key on the left as you “mouse over” them. I modified the simple example to keep track of files added in this way, in an ol.layer.Group, initially empty when added to the map during initialisation.
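Roughly, the pattern looks like this – a minimal sketch rather than the exact DataShine code (the overlayGroup name and the lack of custom styling are mine):

// An initially empty layer group, added to the map at initialisation time.
var overlayGroup = new ol.layer.Group({ layers: [] });
olMap.addLayer(overlayGroup);

// The drag-and-drop interaction reads dropped KML/GeoJSON/GPX/TopoJSON files.
var dragAndDrop = new ol.interaction.DragAndDrop({
    formatConstructors: [ol.format.KML, ol.format.GeoJSON, ol.format.GPX, ol.format.TopoJSON]
});
olMap.addInteraction(dragAndDrop);

dragAndDrop.on('addfeatures', function(event) {
    // Each dropped file becomes its own vector layer, so it can later be removed
    // or have its opacity changed independently.
    var layer = new ol.layer.Vector({
        source: new ol.source.Vector({ features: event.features })
    });
    overlayGroup.getLayers().push(layer);
});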

Nice printing

Another key feature of OL3 that I was keen to make use of is much better looking printing of the map. With the updated library, this required only a few tweaks to CSS. Choosing the “background colours” option when printing is recommended. Printing also hides a couple of the panels you see on the website.

Nice zooming

OL3 also has much smoother zooming, and nicer looking controls. Try moving the slider on the bottom right up and down, to see the smooth zooming effect. The scale control also changes smoothly. Finally, data attributes and credits are now contained in an expandable control on the bottom left.

A bonus update, unrelated to OL3, is that I’ve recreated the placename labels with the same font as the DataShine UI, Cabin Condensed. The previous font I was using was a bit ugly.

Major reworkings to move from OL2 to OL3

UTF Grids

With OpenLayers 3.1, which was released in December 2014, a major missing feature was added back in – support for UTF Grid tiles of metadata. I use this to display the census information about the current area as you “mouse over” it. The new implementation isn’t quite the same as the old one though, and I’ve had to do a few tricks to get it working. First of all, the ol.source.TileUTFGrid that your UTF ol.layer.Tile uses expects a TileJSON file. This was a new format that I hadn’t come across before. It also, as far as I can tell, insists on requesting the file with a JSONP callback. The TileJSON file then contains another URL to the UTF Grid file, which OL3 also calls requiring a JSONP callback. I implemented both of these with PHP files that return the appropriate data (with appropriate filetype and compression headers), programmatically building “files” based on various parameters I’m sending through. The display procedure is also a little different, with a new ol.source.TileUTFGrid.forDataAtCoordinateAndResolution function needing to be utilised.

In my map initialisation function:

layerUTFData = new ol.layer.Tile({});

var handleUTFData = function(coordinate)
{
  var viewResolution = olMap.getView().getResolution();
  layerUTFData.getSource().forDataAtCoordinateAndResolution(coordinate, viewResolution, showUTFData);
}

$(olMap.getViewport()).on('mousemove', function(evt) {
  var coordinate = olMap.getEventCoordinate(evt.originalEvent);
  handleUTFData(coordinate);
});

In my layer change function:

layerUTFData.setSource(new ol.source.TileUTFGrid({
  url: "http://datashine.org.uk/utf_tilejsonwrapper.php?json_name=" + jsonName
}));

(where jsonName is how I’ve encoded the current census data being shown.)

Elsewhere:

var showUTFData = function(data) { [show the data in the UI] }

In utf_tilejsonwrapper.php:

<?php
header('Content-Type: application/json');
$callback = $_GET['callback'];
$json_name = $_GET['json_name'];
echo $callback . "(";
echo "
{ 'grids' : ['http://datashine.org.uk/utf_tilefilewrapper.php?x={x}&y={y}&z={z}&json_name=$json_name'],
'tilejson' : '2.1.0', 'scheme' : 'xyz', 'tiles' : [''], 'version' : '1.0.0' }";
echo ')';
?>

(tilejson and tiles are the two mandatory parts of a TileJSON file.)

In utf_tilefilewrapper.php:

<?php
header('Content-Type: application/json');
$callback = $_GET['callback'];
$z = $_GET['z'];
$y = $_GET['y'];
$x = $_GET['x'];
$json_name = $_GET['json_name'];
echo $callback . "(";
echo file_get_contents("http://[URL to my UTF files or creator service]/$json_name/$z/$x/$y.json");
echo ')';
?>

Permalinks

The other change that required careful coding, to recreate the functionality of OL2, was permalinks. The OL3 developers have stated that they consider permalinks to be the responsibility of the application (e.g. DataShine) rather than the mapping API, and, to a large extent, I agree. However, OL2 created permalinks in a particular way and it would be useful to include OL3 ones in the same format, so that external custom links to DataShine continue to work correctly. To do this, I had to mimic the old “layers”, “zoom”, “lat” and “lon” parameters that OL2’s permalink updated, and again work in my custom “table”, “col” and “ramp” ones.

Various listeners for events need to be added, and functions appended, for when the URL needs to be updated. Note that the “zoom ended” event has changed its name/location – unlike moveend (end of a pan), which sits on your ol.Map, the old “zoomend” is now called change:resolution and sits on olMap.getView(). Incidentally, the appropriate mouseover event is now in an OL3-created HTML element – olMap.getViewport() – and is mousemove.
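In sketch form, assuming an updatePermalink function that rebuilds the hash (along the lines of the snippet further below), the two listeners look something like:

// Recreate the OL2-style permalink behaviour: refresh the URL hash after pans and zooms.
olMap.on('moveend', updatePermalink);                      // end of a pan
olMap.getView().on('change:resolution', updatePermalink);  // the old "zoomend"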

Using the permalink parameters (args):

if (args['layers']) {
  var layers = args['layers'];
  if (layers.substring(1, 2) == "F") {
    layerBuildMask.setVisible(false);
  }
  [etc...]
}
[& similarly for the other args]

On map initialisation:

args = []; //Created this global variable elsewhere.
var hash = window.location.hash;
if (hash.length > 0) {
  var elements = hash.split('&');
  elements[0] = elements[0].substring(1); /* Remove the # */
  for(var i = 0; i < elements.length; i++) {
    var pair = elements[i].split('=');
    args[pair[0]] = pair[1];
  }
}

Whenever something happens that means the URL needs an update, call a function that includes this:

var layerString = "B"; //My old "base layer"
layerBuildMask.getVisible() ? layerString += "T" : layerString += "F";
[etc...]
layerString += "T"; //The UTF data layer.
[...]
var centre = ol.proj.transform(olMap.getView().getCenter(), "EPSG:3857", "EPSG:4326");
window.location.hash = "table=" + tableval + "&col=" + colval + "&ramp=" + colourRamp + "&layers=" + layerString + "&zoom=" + olMap.getView().getZoom() + "&lon=" + centre[0].toFixed(4) + "&lat=" + centre[1].toFixed(4);

Issues Remaining

There remains a big performance drop-off in panning when using DataShine on mobile phones and other small-screen devices. I have put in a workaround "viewport" meta-tag in the HTML which halves the UI size, and this makes panning work on an iPhone 4/4S, viewed horizontally, but as soon as the display is a bit bigger (e.g. iPhone 5 viewed horizontally) performance drops off a cliff. It's not a gradual thing, but a sudden decrease in update-speed as you pan around, from a few per second, to one every few seconds.

Additional Notes

OpenLayers 3 is compatible with Proj4js version 2 only. Using this newer version requires a slightly different syntax when adding special projections. I use Proj4js to handle the Ordnance Survey GB projection (aka EPSG:27700), which is used for the postcode search, as I use a file derived from the Ordnance Survey's Code-Point Open product.
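For reference, the registration looks roughly like this – a minimal sketch, assuming the proj4 global is loaded before the map is created (the definition string is the standard published one for EPSG:27700):

// Proj4js v2 syntax: define the OSGB 1936 / British National Grid projection.
// This sketch assumes OL3 picks the definition up from the proj4 global automatically.
proj4.defs('EPSG:27700',
    '+proj=tmerc +lat_0=49 +lon_0=-2 +k=0.9996012717 +x_0=400000 +y_0=-100000 ' +
    '+ellps=airy +towgs84=446.448,-125.157,542.060,0.1502,0.2470,0.8421,-20.4894 ' +
    '+units=m +no_defs');

// e.g. transforming an OSGB easting/northing (roughly central London) to Web Mercator:
var webMercator = ol.proj.transform([530000, 180000], 'EPSG:27700', 'EPSG:3857');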

I had no problems with my existing JQuery/JQueryUI-based code, which powers much of the non-map part of the website, when doing the upgrade.

Remember to link in the new ol.css stylesheet, or controls will not display correctly. This was not needed for OL2.

OL3 is getting there. The biggest issue remains the sparsity of documentation available online - so I hope the above notes are helpful in the interim.

[Image: GeoJSON-format datafiles for tube lines and stations (both in blue), added onto a DataShine map of commuters (% by tube) in south London.]

North/South – The Interactive Version.


As a weekend project, I’ve made an interactive version of my London North/South artwork.

As well as the blue and red house silhouettes, assembled in QGIS, I’ve added in GeoJSON files of the River Thames (from Ordnance Survey Vector Map District, like the buildings) and of tube/DLR/Overground stations – the location/name/network data is from this GitHub file and I’ve applied a custom styling in OpenLayers 2, with station name styling inspired by the NYC Subway signs. The positional information comes from an OpenLayers control – I’m using a utility function to modify the output to use degrees, minutes and seconds. Finally, the naming popup is a set of UTFGrid JSON files (with 2-pixel resolution) based on OpenStreetMap data for polygons. Where the polygon has a building, leisure or waterway tag, I’m extracting a name, if available, and showing it. The coverage here is therefore only as good as building naming is in OpenStreetMap. I could potentially add in street names in the future.

Try it out here.

Primary Roads


Britain’s “top” primary roads – the A1, A2, A3… to A9 – are arranged in a particular pattern, with the A1-A6 radiating out clockwise from London and the A7 to A9 similarly radiating around Edinburgh.

I used Gemma, an old UCL CASA project that Steve and I worked on back in 2011, to draw, from OpenStreetMap, the routes of the A1-A6 as they leave London. The A5 has a gap between Edgware and Harpenden, and the A6 only starts at Luton – both of these changes likely due to the building of the M1 motorway which effectively replaced those sections. Co-numbered roads are not included in the map due to a conflict with the way OpenStreetMap and Gemma separate information. Key for the maps: Red = A1, Orange = A2, Green = A3, Blue = A4, Purple = A5, Black = A6.

Also of interest is that the only two roads that “touch” in London are the A2 and A3, at Borough. The other roads may at one time have converged at junctions, but their starts have been shortened slightly over the years. The big junction at Bank certainly looks like a place where the A1, A3 and A4 could have started from. (Outside of London, the A7 touches the A1 at its northern end and the A6 at its southern end.) Diamond Geezer walked the first mile of the A1-A5 a few years ago.

Gemma still partially works, despite not having seen much love for the last few years and having never made it out of beta (it was a short project). It is recommended you use the OpenStreetMap (or Marker) layers only, to avoid bugs, and watch out if removing layers. You can see the live A1-A6 map here or have a go at building your own.


I’ve blogged about Gemma before (more).

The coloured road lines are Copyright OpenStreetMap contributors and the greyscale background map is Copyright Google.