I’ve been working on a new website that will hold Soph’s and my more fine-art-based output and the outcomes of our GPS mapping work, which is currently rather buried on the planbperformance site.
I’d love to hear what you think about it. I’m using WordPress as a CMS rather than a blog (which basically entails putting content on ‘pages’ rather than ‘posts’), with a modified Origami child theme (just a few font tweaks, really).
Something to note when creating a child theme: if you copy something like the Origami header to modify for your child theme’s style.css file, watch out – the header has to contain the line
Template: twentytwelve
(replace ‘twentytwelve’ with the name of your parent theme, i.e. ‘origami’). If you copy the header from the parent theme’s style.css, it might not have this line, resulting in WordPress throwing an error when you try to activate the theme in Appearance > Themes, like:
The following themes are installed but incomplete. Themes must have a stylesheet and a template… Description: Template is missing.
I first thought this meant I had to copy some sort of PHP file from the parent theme to my child theme folder, but it turned out I simply didn’t have the ‘Template: ’ line in my style.css header.
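For reference, a complete child-theme style.css header might look like the sketch below. The theme name, description and version are just examples; the essential fields for a child theme are ‘Theme Name:’ and the ‘Template:’ line pointing at the parent theme’s directory.

```css
/*
Theme Name: Origami Child
Template: origami
Description: Child theme of Origami with a few font tweaks
Author: Your Name
Version: 0.1
*/
```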
Chalk Travel Graph 2007-2012 plan b
In making the graph above, which shows all our journeys for the years 2007–2012 (D is Dan, S is Soph; numbers on the y-axis are thousands of km), I used a custom-written piece of Python software that detects ‘gaps’ in our data and logs them if they are more than a certain threshold of km apart.
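A minimal sketch of the gap-detection idea, assuming the trackpoints come as (lat, lon) pairs in time order: walk the list and log any consecutive pair further apart than a threshold. The haversine helper, function names and the 1 km default are my own illustration, not the actual plan b code.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(p1, p2):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1 = map(radians, p1)
    lat2, lon2 = map(radians, p2)
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

def find_gaps(points, threshold_km=1.0):
    """Return (index, distance_km) for each consecutive pair of
    trackpoints further apart than threshold_km."""
    gaps = []
    for i in range(1, len(points)):
        d = haversine_km(points[i - 1], points[i])
        if d > threshold_km:
            gaps.append((i, d))
    return gaps
```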
All well and good, I thought, until I talked to Soph, who has done quite a lot of commuting in Berlin since 2012 as a result of her teaching job at the HZT in Wedding. She was interested in these ‘missing’ journeys, so I began to wonder whether I should abolish the threshold altogether and join every gap up. I was nervous about this and wanted to somehow ‘see’ it first to check it, which gave me the idea of generating lines, held in a custom table in our spatialite database, that would join everything up. I could then do the analysis from there.
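One way to build those joining lines is to turn each gap’s endpoints into a two-point WKT LINESTRING and insert it into a custom table. The table and column names below are invented for illustration; in spatialite, GeomFromText() would turn the WKT string into a real geometry (note that WKT wants lon lat order, the reverse of the usual lat, lon pairs).

```python
def gap_to_wkt(p1, p2):
    """Build a WKT LINESTRING (lon lat order) joining two (lat, lon) points."""
    return "LINESTRING(%f %f, %f %f)" % (p1[1], p1[0], p2[1], p2[0])

# Hypothetical insert into a custom 'missing_journeys' table:
sql = ("INSERT INTO missing_journeys (geom) "
       "VALUES (GeomFromText('%s', 4326));"
       % gap_to_wkt((52.5, 13.4), (52.6, 13.45)))
```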
2012 Soph Berlin With Missing Journeys in Blue plan b
I found this so seductive that I wanted to share it. I’ve yet to analyse the data to find out whether results such as the number of missing kilometres make sense (although I’m not sure what I’d test this against), but I find it an interesting exercise already.
At last, after a record dark start to 2013, Berlin is waking up to bright, clear mornings and the smell and sounds of Spring. Martin and I decided that we would have an excursion together today, after a long gap.
This is the GPS path we made walking around the rather over-designed path layout of the new park.
GPS track from the first exploratory walk of 2013 around Gleisdreieck in Berlin
I decided also to dust off and reassemble my drawing machine. The first two drawings were made with it strapped to the rack of my bike while we walked along pushing it.
Drawing Machine Drawing 1 (On bike, pushing)
Drawing Machine Drawing 2 (On bike, pushing)
This last drawing was made by me carrying it in my hands as I walked. Martin also made a drawing in this way which he kept.
Drawing Machine 3 (Walking)
On the way back to Bülowstraße, we thought it would be interesting to see what the electromagnetic soundscape was like where the park path takes you near the high-speed track emerging out of the tunnel on the southbound stretch after Hauptbahnhof. The profusion of overhead power lines tempted us. Luckily I’d brought the coil I’d made from Martin’s instructions at the Psychogeophysics Summit, which I call my Rendlesham Coil.
And this recording was made further on under the bridge where the U2 is turning the corner to head North into Gleisdreieck.
While Soph and I were working last week at the HZT on plan b stuff again at last, we were thinking about the data we collect, i.e. GPS tracks, text messages, mood reports (for 2011 only) and photographs.
We were preparing something for the try-out we did last Thursday in which we performed Narrating Our Lines live in front of an invited audience to see if this also worked as a performance, not just a video installation.
One of the things we wanted to try was a fast slideshow (actually a movie) of all the photos we took in the year we decided to play (2007). As I am unsatisfied with every photo-management programme I have tried (I prefer to order by location rather than date), the photos are scattered among multiple directories.
I knew I could use ffmpeg to stitch individual photos together into a movie, once I’d resized them with the excellent mogrify command from ImageMagick, but I needed something that would copy all the photos taken in 2007 to one location so that I could work on them. So I wrote a quick Python script, which you can examine/download below if you’re interested.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
2013/01/07 19:55:57 Daniel Belasco Rogers firstname.lastname@example.org

User points script at a root directory and script finds all images for
a certain year derived from the Exif data and copies these images into
a destination folder supplied by the user

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or (at
your option) any later version.

This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>
"""

import os
from optparse import OptionParser
from shutil import copy2

import pyexiv2


def parseargs():
    """
    Parse the command line: a year, a directory to search under and a
    destination directory.
    """
    usage = """
%prog <year> <searchpath> <destination>

Please enter a year in the form YYYY, a directory to search for images
under and a directory to save a copy of the images to
e.g. copyimages.py 2007 "/nfs/photos/" "/media/ext3/"
"""
    parser = OptionParser(usage, version="%prog 0.1")
    (options, args) = parser.parse_args()
    if len(args) != 3:
        parser.error("Wrong number of arguments. See usage above.")
    year = args[0]
    searchpath = args[1]
    destination = args[2]
    return year, searchpath, destination


def getexifdate(pathname):
    """
    get creation date from exif
    """
    metadata = pyexiv2.ImageMetadata(pathname)
    try:
        metadata.read()
    except IOError:
        print "%s Unknown image type" % pathname
        return None
    try:
        tag = metadata['Exif.Photo.DateTimeOriginal']
    except KeyError:
        print '%s tag not set' % pathname
        return None
    return tag.value


def findimages(year, searchpath):
    """
    use os.walk to find images with .jpg extension taken in the
    given year
    """
    year = int(year)
    imagelist = []
    for (path, dirs, files) in os.walk(searchpath):
        for f in files:
            pathname = os.path.join(path, f)
            if os.path.splitext(pathname)[1].lower() != '.jpg':
                continue
            imagedate = getexifdate(pathname)
            if imagedate is None:
                continue
            try:
                imageyear = imagedate.year
            except AttributeError:
                print '%s invalid date in exif: %s' % (pathname, imagedate)
                continue
            if imageyear == year:
                imagelist.append(pathname)
    return imagelist


def copyimages(imagelist, destination):
    """
    iterate through imagelist, copying images to destination directory

    TODO: make the dir in a different way by checking if it is present
    first and making it if not, rather than catching errors like this.
    """
    for image in imagelist:
        destinationpath = os.path.join(destination, os.path.split(image)[1])
        print "copying %s to %s" % (image, destinationpath)
        try:
            copy2(image, destinationpath)
        except (IOError, OSError) as e:
            print e


def main():
    """
    call all functions within script and print stuff to stdout for
    the user
    """
    year, searchpath, destination = parseargs()
    print "Looking in %s for images from %s" % (searchpath, year)
    imagelist = findimages(year, searchpath)
    print "Found %d images" % len(imagelist)
    print "Copying images to %s" % destination
    copyimages(imagelist, destination)
    print "Copied %d images. Script ends here." % len(imagelist)


if __name__ == '__main__':
    main()
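Once the photos are gathered in one directory, the resize-and-stitch step mentioned above might look something like the commands below. The sizes, frame rate and output filename are just examples, and older ffmpeg builds may lack glob support, wanting numbered files (img%04d.jpg) instead – so treat this as a sketch rather than a recipe.

```
# Resize all JPGs in place to fit within 1280x720
# (mogrify overwrites its input, so run it on the copies!)
mogrify -resize 1280x720 *.jpg

# Stitch them into a movie at 10 photos per second
ffmpeg -framerate 10 -pattern_type glob -i '*.jpg' \
       -c:v libx264 -pix_fmt yuv420p slideshow.mp4
```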
All this made me think, however, about how much we are all becoming used to this idea of having too much data to sort through. I think it’s something that lots of us can now relate to when it comes to digital photographs. Running the script above, I found about 2,500 photos, representing gigabytes of data. Some of the photos I hadn’t seen since I took them; they were gathering digital dust somewhere in a remote corner of my filing un-system. To make this stuff (our stuff) understandable, or even viewable, graspable, we need tools to manage it. It is no longer possible, or even appropriate, to browse through our photos and pull out the ones we’re interested in; we need tools to do this for us.
I have to admit to a feeling of great pride and joy that I could write my own, thanks to acquiring some basic Python skills over the past couple of years.
This is something I’ve been wrestling with for a while, both with Spatialite and latterly with PostGIS. The problems stem from the fact that a GPX track segment contains information that can be represented in two entirely different ways in these systems. A track segment as represented in a GPX file, as we know, contains track points that each have attributes like latitude, longitude, elevation and time, and can have more, such as speed and course (both of which are calculated between the current point and the previous one). You can import these points into a spatial database (most straightforwardly through an intermediary like shapefiles), but the devil is in the details, and it might surprise you if, like me, you are used to programmes written to handle and visualise GPX files rather than to larger GIS applications.
The crux is this: if you want to see your tracks as lines, you lose the information that each track point contained and are left with the segment represented as a single row in the database; if you want to retain all the information, you’d better import those trackpoints as points, but then you don’t have a graphical representation of the line each segment represents. Perhaps a couple of illustrations will elucidate the problem.
gpx file represented as points in Qgis
Here you see what happens when you import a GPX file into Quantum GIS as track points; this is also how it would then be imported into a spatial database. The advantage here is that if you open up the attributes of the file, all the information from the original file is there. I don’t know about you, however, but I find it quite difficult to trace the individual track segments from these points – you have an idea of where the road is, but no idea how many times the street has been retraced.
gpx file represented as lines in Qgis
This is more like it – but is it? If you look at the information each line contains, you’ll see that all the information about the individual points is lost – each line is represented by a single row in the attribute data, so the elevation, speed and time of each track point have gone. Better not use this as a way of archiving your GPS data.
At the moment, other than designing your own database schema and writing your own importers (which is what I’m contemplating), there’s no way I know of in the spatial-database world of representing a GPX file in a single instance that retains both its lines and the information about each point. Please prove me wrong.
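One shape such a custom schema could take, sketched here with plain sqlite3 and WKT text standing in for a real spatial database (all table and column names are my own invention): keep every trackpoint, with its elevation and time, in one table, and a derived line per segment in another, linked by a segment id. You get the lines for display and the points for archiving, at the cost of maintaining the link yourself.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Derived geometry, one row per track segment (WKT stands in for a
# real geometry column here)
cur.execute("""CREATE TABLE segments (
                   seg_id INTEGER PRIMARY KEY,
                   geom_wkt TEXT)""")

# Full trackpoint detail, one row per point, linked to its segment
cur.execute("""CREATE TABLE trackpoints (
                   pt_id INTEGER PRIMARY KEY,
                   seg_id INTEGER REFERENCES segments(seg_id),
                   lat REAL, lon REAL, ele REAL, time TEXT)""")

# Two invented sample points forming one short segment
points = [(52.5, 13.40, 34.0, "2007-06-01T10:00:00Z"),
          (52.5, 13.41, 35.0, "2007-06-01T10:01:00Z")]

wkt = "LINESTRING(%s)" % ", ".join("%f %f" % (lon, lat)
                                   for lat, lon, _, _ in points)
cur.execute("INSERT INTO segments (seg_id, geom_wkt) VALUES (1, ?)", (wkt,))
cur.executemany("INSERT INTO trackpoints (seg_id, lat, lon, ele, time) "
                "VALUES (1, ?, ?, ?, ?)", points)
conn.commit()
```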