
Panama’s Population Geographies

Panama is a Central American country with a population of around 4 million. The country is divided into ten provinces (including one that was split off from another in 2014). The population is obliged to register for and obtain an ID card, or “cedula”, which contains an interesting attribute: the prefix of the ID number indicates the holder’s province of birth. This not only allows the mapping and analysis of surname (and other) demographic information across the country, but, if combined with information on current location, also allows for a rudimentary analysis of internal migration in the country.

This official document contains lots of useful information. Subsequent to its publication, the “Panamá” province within the country has split into two, with the westernmost section becoming Panamá Oeste (Panama West) and gaining the new province number 13. In practice, the great majority of people living there retain the prefix 8, as the population with “13-” prefixes is still too young to have appeared on school attendance lists, jury service lists, exam candidate lists or government worker salary transparency lists. Here is the very first No. 13: Ashly Ríos, who received the number 13-1-001. (People are required to obtain their number by the age of 18, but can be registered at birth.)

For most people born in Panama, the cedula number prefix indicates the following province of birth:

Province – Cedula prefix
Bocas del Toro – 1
Coclé – 2
Colón – 3
Chiriquí – 4
Darién – 5
Herrera – 6
Los Santos – 7
Panamá – 8
Panamá Oeste – 13 (8 pre-2014)
Veraguas – 9
Guna Yala (indigenous province) – 10 (3SB pre-1953)
Madungandí (indigenous sub-province) – 10* (8PI pre-2014)
Wargandi (indigenous sub-province) – 10* (5PI pre-2014)
Emberá Wounnan (indigenous province) – 11 (5PI pre-1983)
Ngäbe-Buglé (indigenous province) – 12 (1PI, 4PI or 9PI pre-1997)

* These were briefly assigned No. 13, before being changed to 10.

The format of the cedula number is generally X-YYY-ZZZZ where X is the province number, YYY is the registry book number and ZZZZ is the number within the book. However, for certain groups, the prefix is different. If SB appears after the province prefix, this is an indication that the person was born in Guna Yala (formerly called San Blas), but before it became a standalone indigenous province. Other indigenous areas, some of which have not formally become provinces, were indicated by PI appearing after the prefix of the former or enclosing province, or AV if very old (born pre-1914). However, the numerical codes are now used.

Panamanians born outside the country get “PE” as their prefix instead. Foreigners are assigned “EE” while they retain their immigrant status. If they gain permanent residence rights, they are assigned “E”, and if they become full Panamanian citizens, they are assigned “N”. PE, N, E and EE do not officially have an associated province prefix, although one (or “00”) is occasionally added in third-party lists. Such people can also be assigned a separate ID, starting with “NT” and with an associated province prefix; this is a temporary ID issued for tax purposes, rather than a full cedula number.
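As a worked illustration of the numbering above, here is a minimal sketch – a hypothetical helper of my own, not any official logic – that classifies a cedula by its prefix, using the province table and special codes described in this post:

```javascript
// Numeric cedula prefixes and their provinces of birth, per the table above.
var cedulaProvinces = {
  '1': 'Bocas del Toro', '2': 'Coclé', '3': 'Colón', '4': 'Chiriquí',
  '5': 'Darién', '6': 'Herrera', '7': 'Los Santos', '8': 'Panamá',
  '9': 'Veraguas', '10': 'Guna Yala', '11': 'Emberá Wounnan',
  '12': 'Ngäbe-Buglé', '13': 'Panamá Oeste'
};

// Classify a cedula number (X-YYY-ZZZZ) by its prefix, handling the
// special non-province prefixes too.
function classifyCedula(cedula) {
  var prefix = cedula.split('-')[0];
  switch (prefix) {
    case 'PE': return 'Panamanian born abroad';
    case 'N':  return 'Naturalised citizen';
    case 'E':  return 'Permanent resident';
    case 'EE': return 'Foreigner with immigrant status';
    default:   return cedulaProvinces[prefix] || 'Unknown prefix';
  }
}

classifyCedula('13-1-001'); // 'Panamá Oeste'
```

Ashly Ríos’s 13-1-001, for example, resolves to Panamá Oeste.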

Visit the new Shop
High quality lithographic prints of London data, designed by Oliver O'Brien

Taxonomy of Web Mapping Frameworks and Formats

Here’s an attempt to create a simple taxonomy of the currently active and popular web mapping frameworks available. This covers web mapping that delivers a consumer-navigable geographic “slippy” map of raster and/or vector tiles containing bespoke geographic data.

The taxonomy runs along a spectrum, from EASY on the left (limited, quick, often with costs) to HARD on the right (flexible, but needing resources and time): Ecosystems, Hosted Wrappers, Managed Wrappers, Managed APIs, Open Frameworks, Spatial Servers and Server Programming. Examples in each category include:

  • Ecosystems: Mapbox Studio, CARTO Builder, ESRI ArcGIS Online
  • Hosted Wrappers: Google Fusion Tables, Google MyMaps
  • Managed Wrappers: Google Maps Embed API, Google Static Maps API, OSM StaticMapLite
  • Managed APIs: HERE Maps API for JavaScript, Google Maps JavaScript API, Microsoft Bing Maps V8 SDK
  • Open Frameworks: D3 DataMaps, Leaflet for R/RStudio
  • Server Programming: R (ggplot), Unfolding (Processing/Java), Mapnik (C++/Python)

The capabilities and requirements of the above frameworks scale with the spectrum too: from data analysis on a remotely hosted service at the easy end, through needing your own web space and scripting knowledge, to requiring a server with shell access and full programming at the hard end.

I will aim to update based on feedback and new discovery. This initial version is based on my own usages/experiences in the field, so it is quite possible there are some very obvious candidates I have missed.

Additionally (and with the same proviso as above), here’s a 2×2 table of file formats used in slippy and static web mapping, for vectors and rasters – the latter including attribute fields like UTF Grids. I am only including formats widely used in web mapping, rather than GIS in general.

  • Raster, static “WebGIS”: OGC WMS
  • Vector, tiled “slippy”: Mapbox Vector Tile Specification

Big Data Here: The Code

So Big Data Here, a little pop-up exhibition of hyperlocal data, has just closed, having run continuously from Tuesday evening to this morning, as part of Big Data Week. We had many people peering through the windows of the characterful North Lodge building beside UCL’s main entrance on Gower Street, particularly during the evening rush hour, when the main projection was obvious through the windows in the dark, and some interested visitors were also able to come inside the room itself and take a closer look during our open sessions on Wednesday, Thursday and Friday afternoons.

Thanks to the Centre for Advanced Spatial Analysis (CASA) for loaning the special floor-mounted projector and the iPad Wall, the Consumer Data Research Centre (CDRC) for arranging for the exhibition with UCL Events, Steven Gray for helping with the configuration and setup of the iPad Wall, Bala Soundararaj for creating visuals of footfall data for 4 of the 12 iPad Wall panels, Jeff for logistics help, Navta for publicity and Wen, Tian, Roberto, Bala and Sarah for helping with the open sessions and logistics.

The exhibition website is here.

I created three custom local data visualisations for the big screen that was the main exhibit in the pop-up. Each of these was shown for around 24 hours, but you can relive the experience from the comfort of your own computer:


1. Arrival Board

View / Code

This was shown from Tuesday until Wednesday evening, and consisted of a live souped-up “countdown” board for the bus stop outside, alongside one for Euston Square tube station just up the road. Both bus stops and tube stations in London have predicted arrival information supplied by TfL through a “push” API. My code was based on a nice bit of sample code from GitHub, created by one of TfL’s developers. You can see the Arrival Board here or download the code on GitHub. This is a slightly enhanced version that includes additional information (e.g. bus registration numbers) that I had to hide during the exhibition due to space constraints.

Customisation: Note that you need to specify a Naptan ID on the URL to show your bus stop or tube station of choice. To find it out, go here, click “Buses” or “Tube…”, then select your route/line, then the stop/station. Once you are viewing the individual stop page, note the Naptan ID forms part of the URL – copy it and paste it into the Arrival Board URL. For example, the Naptan ID for this page is 940GZZLUBSC, so your Arrival Board URL needs to be this.


2. Traffic Cameras

View / Code

This was shown from Wednesday evening until Friday morning, and consisted of a looping video feed from the TfL traffic camera positioned right outside the North Lodge. The feed is a 10 second loop and is updated every five minutes. The exhibition version then had 12 other feeds, surrounding the main one and representing the nearest camera in each direction. The code is a slightly modified version of the London Panopticon which you can also get the code for on Github.

Customisation: You can specify a custom location by adding ?lat=X&lon=Y to the URL, using decimal coordinates – find these out from OpenStreetMap. (N.B. TfL has recently changed the way it makes available the list of traffic cameras, so the list used by London Panopticon may not be completely up-to-date.)
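Parsing such a lat/lon override takes only a few lines – the following is a hypothetical sketch rather than the actual Panopticon source, and the fallback coordinates (roughly the Gower Street area) are my own example values:

```javascript
// Parse ?lat=X&lon=Y from a query string, falling back to a default location.
function getCameraLocation(queryString) {
  var params = new URLSearchParams(queryString);
  var lat = parseFloat(params.get('lat'));
  var lon = parseFloat(params.get('lon'));
  if (isNaN(lat) || isNaN(lon)) {
    return { lat: 51.5246, lon: -0.134 }; // example fallback near UCL
  }
  return { lat: lat, lon: lon };
}

getCameraLocation('?lat=51.5&lon=-0.1'); // { lat: 51.5, lon: -0.1 }
```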


3. Census Numbers

View / Code

Finally, the screen showed randomly chosen statistical numbers, for the local Bloomsbury ward that UCL is in, from the 2011 Census. Again, you can see it in action here (wait 10 seconds for each change, or refresh), and download the code from GitHub.

Customisation: This one needs a file for each area it is used in and unfortunately I have, for now, only produced one for Bloomsbury. The data originally came, via the NOMIS download service, from the Office for National Statistics and is Crown Copyright.


Behind the Code in Tube Heartbeat

Cross-posted to the 360 Here blog.

As a follow-up to my intro post about Tube Heartbeat, here’s some notes on the API usage that allowed me to get the digital cartography right, and build out the interactive visualisation I wanted to.

The key technology behind the visualisation is the HERE JavaScript API. This not only displays the background HERE map tiles and provides the “slippy map” panning/zoom and scale controls, but also allows the transportation data to be easily overlaid on top. It’s the first project I’ve created on the HERE platform and the API was easy to get to grips with. The documentation includes plenty of examples, as well as the API reference.

The top feature of the API for me is that it is very fast, both on desktop browsers and on smartphones. I have struggled in the past with needing to optimise code or reduce functionality to show interactive mapped content on smartphones – not just needing to design a small-screen UI, but dealing with the browser struggling to show sometimes complex and large-volume spatial data. The API has some nice specific features too; here are some that I used:


One of the smallest features, but a very nice one I haven’t come across elsewhere, is the addition of arrows along vector lines, showing their direction. Useful for routing, but also useful for showing which flow is currently being shown on a bi-directional dataset – all the lines on Tube Heartbeat use it:

var strip = new H.geo.Strip();
strip.pushPoint({ lat: startLat, lng: startLon });
strip.pushPoint({ lat: endLat, lng: endLon });

var arrowWidth = 0.5; /* example value */

var polyline = new H.map.Polyline(
	strip, {
		style: { ... },
		arrows: {
			fillColor: 'rgba(255, 255, 255, 0.5)',
			frequency: 2,
			width: arrowWidth,
			length: arrowWidth * 1.5
		}
	}
);

polyline.zorder = lines[lineID].zorder;

The frequency at which the arrows occur can be specified, as well as their width and length. I’m using quite elongated ones, which are 3 times as long as they are wide, and occupy the middle half of the line (above/below certain flow thresholds, I used different numbers). A frequency of 2 means there is an arrow-sized gap between each one; using 1 results in a continuous stream of arrows. (N.B. Rendering quirks in some browsers mean that other gaps may appear too.) Here, the blue and red segments have a frequency of 1 and a width of 0.2, while the smaller flows in the brown segments are shown with a frequency of 2 and a width of 0.5, as in the example code above:



Z-order is important so that the map has a natural hierarchy of data. I decided to use an order where the busiest tube lines were generally at the bottom, with the quieter lines being layered on top of them (i.e. having a higher Z-order). Because the busier tube lines are shown with correspondingly fatter vector lines on the map, the ordering means that generally all the data can be seen at once, rather than some lines being hidden. You can see the order in the penultimate column of my lines data file (CSV). I’m specifying z-order simply as a custom “zorder” property on the polyline object, as shown in the code sample above. This then gets used later when assembling the lines to draw, in a group (see below).


I’m using translucency both as a cartographical tool and to ensure that data does not otherwise become invisible. The latter is simply achieved by using RGBA colours rather than the more normal hexadecimals; that is, colours with an opacity specified as well as the colour components. In the code block above, “rgba(255, 255, 255, 0.5)” gives white arrows which are only 50% opaque. The tube lines themselves are shown as 70% opaque – specified in the lines data file along with the z-order – this allows their colour to appear strongly while allowing other lines or background map features/captions, such as road or neighbourhood names, to still be observable.
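Since every colour in this approach needs an alpha channel, a small helper – a hypothetical sketch of my own, not code from Tube Heartbeat itself – can build the rgba() strings from more familiar hex colours:

```javascript
// Convert a '#rrggbb' hex colour plus an opacity (0-1) into an 'rgba()' string.
function hexToRGBA(hex, opacity) {
  var r = parseInt(hex.substring(1, 3), 16);
  var g = parseInt(hex.substring(3, 5), 16);
  var b = parseInt(hex.substring(5, 7), 16);
  return 'rgba(' + r + ', ' + g + ', ' + b + ', ' + opacity + ')';
}

hexToRGBA('#ffffff', 0.5); // 'rgba(255, 255, 255, 0.5)' - the arrow colour above
```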

While objects such as the tube lines can be made translucent by manipulating their colour values, layers themselves always display at 100% opacity. This is probably a good thing, because translucent map image layers could look a mess if you layered multiple ones on top of each other, but it means you need to use a different technique if you want to tint or fade a layer. Because even the simplified “base” background map tiles from HERE for London have a lot of detail on them, and the “xbase” extra-simplified ones don’t have enough for my purposes, I needed a half-way-house approach. I achieved this by creating a geographical object in code and placing it on top of the layers:

var tintStyle = {
	fillColor: 'rgba(240, 240, 240, 0.35)'
};

var rect = new H.map.Rect(
	new H.geo.Rect(42, -7, 58, 7),
	{ style: tintStyle }
);

The object here is a very light grey box, at 35% opacity, with an extent that covers all of the London area and well beyond. In the HERE JavaScript API, such objects automatically go on top of the layers. My tint doesn’t affect the lines or stations, because I add the two groups containing them after my rectangle:

var stationGroup = new H.map.Group();
var segGroup = new H.map.Group();

Object Groups

I can add and remove objects from the above groups rather than directly to the map object, and the groups themselves remain in place, ordered above my tint and the background map layers. Objects are drawn in the order they appear in the group – the so-called “Painter’s Algorithm” – hence why I sort using my previously specified “zorder” value:

function segSort(a, b) {
	var lineA = parseInt(a.zorder);
	var lineB = parseInt(b.zorder);
	if (lineA > lineB) return 1;
	if (lineA < lineB) return -1;
	return 0;
}

var segsToDraw = [];
/* ...populate segsToDraw with the line segments to show... */
segsToDraw.sort(segSort);

for (var i in segsToDraw) {
	segGroup.addObject(segsToDraw[i]);
}
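To make the painter’s-algorithm ordering concrete, here’s a self-contained sketch, with made-up line names and zorder values, showing that lower zorder values are drawn first, i.e. end up underneath:

```javascript
// Same comparator as used for the map segments: numeric ascending on zorder.
function segSortDemo(a, b) {
  var lineA = parseInt(a.zorder);
  var lineB = parseInt(b.zorder);
  if (lineA > lineB) return 1;
  if (lineA < lineB) return -1;
  return 0;
}

// Hypothetical segments - the busy Northern line gets the lowest zorder,
// so it is drawn first and sits at the bottom of the stack.
var demoSegs = [
  { name: 'Victoria', zorder: '5' },
  { name: 'Northern', zorder: '1' },
  { name: 'Bakerloo', zorder: '3' }
];
demoSegs.sort(segSortDemo);
// Draw order is now: Northern, Bakerloo, Victoria.
```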


Station circles are super easy to create, and illustrate the second reason that I very much like the HERE JavaScript API. The code is obvious:

var circle = new H.map.Circle(
	{
		lat: Number(stations[i].lat),
		lng: Number(stations[i].lon)
	},
	radius, /* radius in metres */
	{
		style: {
			strokeColor: stationColour,
			fillColor: 'rgba(255, 255, 255, 0.8)',
			lineWidth: 3
		}
	}
);

These are my station circles. They are thickly bordered white circles, as is the tradition for stations on maps of the London Underground as well as many other metros worldwide, but with a little translucency to allow background map details to still be glimpsed. Here you can see the circle translucencies, as well as those on the lines and the arrows themselves; the lines are also ordered as per the z-order specification, so that the popular Victoria line (light blue) doesn't obscure the Northern line (black):


Other Technologies

As well as the HERE JavaScript API, I used JQuery to short-cut some of the non-map JavaScript coding, JQueryUI for some of the user controls, and the Google Visualization API (aka Google Charts) for the graphs. Google's Visualization API is full-featured, although a note of caution: I am using their new "Material" look, which works better on mobile and looks nicer than their regular "Classic" look, but it is still very much in development – it is missing quite a few features of the older version, and sometimes requires the use of configuration converters – so check Google's documentation carefully. However, it produces nicer-looking charts of the data, a trade-off that I decided was worth making:


These are just some of the techniques I used for Tube Heartbeat, and I have only scratched the surface of the HERE APIs – there are all sorts of interesting ones I could additionally incorporate, including some you might not expect, such as a Weather API.

Try out Tube Heartbeat for yourself.

Background map tiles shown here are Copyright HERE 2016.

OS Open

Ordnance Survey have this week released four new additions to their Open Data product suite. The four, which were announced earlier this month, are collectively branded as OS Open and include OS Open Map Local, which, like Vector Map District (VMD), is a vector dataset containing files for various feature types, such as building polygons and railway stations. The resolution of the buildings in particular is much greater than VMD – surprisingly good, in fact. I had expected the data to be similar in resolution to the (rasterised) OS StreetView but it turns out it’s even more detailed than that. The specimen resolution for OS Open Map Local is 1:10000, with suggested uses down to a scale of 1:3000, which is really quite zoomed in. Two new files in OS Open Map Local are “Important Buildings” (universities, hospitals etc) and “Functional Areas” which outline the land containing such important buildings.


Above: Comparing the building polygon detail in the older Vector Map District (top left), previously the largest-scale vector building open data from Ordnance Survey, and the brand new OS Open Map Local (top right). The new data is clearly much higher resolution; however, one anomaly is that roads going under buildings no longer break the buildings – note the wiggly road in the centre of the top left sample, Malet Place, which runs through the university and under a building, doesn’t appear in full on the right. Two other sources of large-scale building polygons are OS StreetView (bottom left), which is only available as a raster, and OpenStreetMap (bottom right). The OS data is Crown Copyright and Database right OS, 2015. The OSM data is Copyright OSM contributors, 2015.

The other three new products, under the OS Open banner, are OS Open Names, OS Open Rivers and OS Open Roads. The latter two are topological datasets – that is, they are connected node networks, which allow routing to be calculated. OS Open Names is a detailed gazetteer. These latter three products are great as an “official”, “complete” specialised dataset, but they have good equivalents on the OpenStreetMap project. OS Open Map Local is different – it offers spatial data that is generally much higher in accuracy than most building shapes already on OpenStreetMap, including inward facing walls of buildings which are not visible from the street – and so difficult for the amateur mapper to spot. As such, it is a compelling addition to the open data landscape of Great Britain.

The OS also confirmed last week the location for its new Innovation Hub. It is indeed a mile from King’s Cross – specifically, it’s in Clerkenwell, and the hub will be sharing space with the Future Cities Catapult. Conveniently the new space has a presentation space and the May Geomob will be taking place there.

Ordnance Survey Open Data – The Next Level of Detail

An encouraging announcement from BIS (the Department for Business, Innovation and Skills) a few days ago regarding future Open Data products from the Ordnance Survey (press release here) – two pieces of good news:

  • The OS will be launching a new, detailed set of vector data as Open Data at the end of this month. They are branding it as OS OpenMap, but it looks a lot like a vector version of OS StreetView, which is already available as a raster. The key additions will be “functional polygons” which show the boundaries of school and hospital sites, and more detailed building outlines. OS Vector Map District, which is part of the existing Open Data release, is already pretty good for building outlines – it forms the core part of DataShine and this print, to name just two pieces of my work that have used the footprints extensively. With OpenMap, potentially both of these could benefit, and we might even get attribute information about building types, which means I could filter out non-residential buildings in DataShine. What we do definitely get is the inclusion of unique building identifiers – potentially this could allow a crowd-sourced building classification exercise if the attribution information isn’t there. OpenMap also includes a detailed and topological (i.e. joined up under the bridges) water network, and an enhanced gazetteer, i.e. placename database.
  • The other announcement relates to the establishment of an innovation hub in London – an incubator for geo-related startups. The OS are being cagey about exactly where it will be, saying just that it will be on the outskirts of the Knowledge Quarter, which is defined as being within a mile of King’s Cross. UCL’s about a mile away. So maybe it will be very close to here? In any case, it will be somewhere near the edge of the green circle on the (Google) map below…

p.s. The Ordnance Survey have also recently rebranded themselves as just “OS”. Like University College London rebranding itself as “UCL” a few years ago, and ESRI calling itself Esri (and pronouncing it like a word), it will be interesting to see if it sticks. OS for me stands for “open source” and is also very close to OSM (OpenStreetMap), so possible confusion may follow. It does however mean a shorter attribution line for when I use OS data in my web maps.


London’s Knowledge Quarter


Various websites I’ve built, and mentioned here from time to time, will be down from Friday at 5pm until Monday noon (all times GMT), due to a major power upgrade for the building that the server is in.

This affects the following websites:

  • DataShine
  • CDRC
  • Bike Share Map
  • Tube Tongues
  • OpenOrienteeringMap (extremely degraded)
  • Some other smaller visualisations

However the following are hosted on different servers and so will remain up:

GeoComputation: A Practical Primer

GeoComputation: A Practical Primer, edited by Profs Chris Brunsdon and Alex Singleton, has just been published by SAGE.

The book acts both as a reference guide to the field and as a guide to help you get to know aspects of it. Each chapter includes a worked example with step-by-step instructions.

Each chapter has a different author, and includes topics such as spatial data visualisation with R, agent-based modelling, kernel density estimation, spatial interaction models and the Python Spatial Analysis library, PySAL. With 18 chapters, the book runs to over 300 pages and so has the appropriate depth to cover a diverse, active and fast-evolving field.

I wrote a chapter in the book, on open source GIS. I focused particularly on QGIS, as well as mentioning PostGIS, Leaflet, OpenLayers (2) and other parts of the modern open source “geostack”. My worked example describes how to build a map, in QGIS, of London’s railway “not-spots” – places which are further than a mile from a railway station, using open data map files, mainly from the Ordnance Survey. With the guide, you can create a map like the one below:


That little spot on its own in central-ish London, by the way, is part of Burgess Park, near Peckham.

The book has only just been published and I was able to slip in brand new screenshots (and slightly updated instructions) just before publication, as QGIS 2.6 came out late last year. So, the book is right up to date, and as such now is a great time to get your copy!

It’s available now in paperback on Amazon: Geocomputation: A Practical Primer.

The first part of my chapter:


Quick-and-Dirty WordPress Site Cloning


Here is a guide to cloning a WordPress(.org) blog on the same server, in 10 steps, on Linux. You’ll definitely need admin access to the blog itself, and probably to the database and server too, depending on your setup. I did this recently as I needed a copy of an existing production site, to hack on. If you don’t fancy doing it the quick-and-dirty way, there are, I’m sure, even quicker (and cleaner) ways, by installing plugins.

In the following instructions, substitute X and Y for your existing and new blog, respectively.

0. Do a backup of your current website, like you do normally for an upgrade or archiving, in case anything goes wrong. e.g. under Tools > Export in the WordPress admin interface.

1. Copy all the files:
cp -r /home/~username/www/blog_X /home/~username/www/blog_Y

2. Edit wp-config.php in your new blog directory, changing the table prefix from:

$table_prefix = 'wp_X_';

to:

$table_prefix = 'wp_Y_';

3. Copy all the database tables (prefixed with wp_X_). The new ones should have a prefix wp_Y_ instead. I used the Copy functionality under the Operations tab in phpMyAdmin (see screenshot below).

4. Edit wp_Y_options:
update wp_Y_options set option_name = 'wp_Y_user_role' where option_name = 'wp_X_user_role';

5. Edit wp_Y_options:
Edit the option_value for rows with option_name values of siteurl and home, pointing them to the new location – mine are the same but one might be different, e.g. if you have your WordPress core files in a subdirectory relative to the directory for the site entry-point on the web.

update wp_Y_options set option_value = '' where option_name = 'siteurl';
update wp_Y_options set option_value = '' where option_name = 'home';

There may be other rows referencing your old blog name, but these are probably from plugins and therefore probably don’t need to be changed.

6. Edit wp_Y_usermeta:
update wp_Y_usermeta set meta_key = replace(meta_key, 'wp_X', 'wp_Y');

(You can edit the affected rows manually, but I had a lot to do – there’s around 5 for each user.)

7. Drop force-upgrade.php in the same directory as wp-config.php and run it from your browser. This rebuilds caches/hashes stored in some of the tables. You can run it repeatedly if necessary (e.g. if you missed a step above); it shouldn’t do any harm.

You can find force-upgrade.php here.

8. Delete force-upgrade.php. Leaving it is a security risk.

9. Log in to your blog in the new location, as normal. Usernames and passwords should be preserved.


OpenLayers 3 and Vector Data

As part of a project to move most of my OpenLayers 2-powered websites to OpenLayers 3, I have recently converted two more – DataShine: Travel to Work Flows and the North/South Interactive Map. Unlike the main DataShine: Census website, both of these newer conversions include vector geospatial data, so there was additional learning involved during the migration process, mainly relating to vector styling.

North/South Interactive Map

For the North/South Interactive Map, I made use of the loading in of remote GeoJSON files.

Vector Layers

Here’s a vector layer:

layerPoints = new ol.layer.Vector({
    source: pointSource,
    style: function(feature, res) { return pointStyle(feature, res); }
});

The pointSource is an ol.source.GeoJSON, which requires the projection of the files to be defined, as well as that to be displayed, when defining the source for the Vector layer:

pointSource = new ol.source.GeoJSON({
    url: '...',
    defaultProjection: 'EPSG:4326',
    projection: 'EPSG:3857',
    attributions: [ new ol.Attribution({ 'html': "..." }) ]
});

If you wish to do further operations on your data once it is loaded in, you need to add a listener to a remotely loaded (e.g. GeoJSON file) source included within a Vector layer:

pointSource.once('change', function() {
    if (pointSource.getState() == 'ready') {
        var features = pointSource.getFeatures(); ...
    }
});

Here’s a typical style function. I’m using a property “highlight” on my feature to style such features differently:

function pointStyle(feature, resolution) {
    return [
        new ol.style.Style({
            image: new ol.style.Circle({
                radius: (feature.highlight ? 7 : feature.radius),
                fill: new ol.style.Fill({ color: feature.fillColor }),
                stroke: new ol.style.Stroke({ width: feature.strokeWidth, color: '#fff' })
            }),
            text: new ol.style.Text({
                text: (feature.highlight ? feature.label : ""),
                font: '9px Ubuntu, Gill Sans, Helvetica, Arial, sans-serif',
                fill: new ol.style.Fill({ color: '#fff' })
            })
        })
    ];
}

To detect clicks, I used an ol.interaction.Select – N.B. if you don’t specify which layers it applies to, it tries to apply them to all Vector layers!

var selectClick = new ol.interaction.Select({
    style: function(feature, res) { return pointStyle(feature, res); },
    layers: [layerPoints]
});

selectClick.getFeatures().on('change:length', function(e) {
    ...
});


In my function here I remove the flag from any already highlighted features and call features[i].changed(); to get the non-highlighted style. You don’t need to call this on what you’ve actually clicked on, as this is done implicitly. There are likely better ways of showing selected/highlighted features, using ol.FeatureOverlay, but I couldn’t get these to work.



There’s quite a nice new utility function which means it was little effort to get an “old style” location indicator in, at the bottom of the North/South interactive:
new ol.control.MousePosition({ projection: "EPSG:4326", coordinateFormat: ol.coordinate.toStringHDMS, className: 'olControlMousePosition' })


DataShine: Travel to Work Flows

This loads vector data in as generic JSON through a regular (non-OL) AJAX call rather than GeoJSON so the processing is a bit more manual. This time, my source for the Vector layer is a simple ol.source.Vector which can be emptied with source.clear(); and reused.

I’m creating lines directly from the JSON, converting from OSGB grid and specifying colour (for the style) as I go – note my use of rgba format, allowing me to specify a partial transparency (of 60%) for the lines:

var startLL = ol.proj.transform([data[start][2], data[start][3]], "EPSG:27700", "EPSG:3857");
var endLL = ol.proj.transform([data[end][2], data[end][3]], "EPSG:27700", "EPSG:3857");
var journeyLine = new ol.geom.LineString([startLL, endLL]);
var lineItem = new ol.Feature({ geometry: journeyLine });
lineItem.strokeColor = 'rgba(255, 0, 0, 0.4)';
lineSource.addFeature(lineItem);

As previously blogged, I’m also using hand-crafted permalinks in both websites, and drag-and-drop KML display and UTF grid mouseovers in the latter, and both have also had their stylesheets tweaked to allow for easy printing – again made possible with OL3.

I’m about ready now to tackle my most complicated OpenLayers project by far, the Bike Share Map.