This post is the background to an image-colour-photograph-gallery-thing I made. I encourage you to turn the lights out and view it in fullscreen. Read on for details.
“Tūhonohono means to bring together – to weave or join. Here, it is the joining of a daily moment in time as captured on my phone; and a (tangentially related) image from the past.”
Since the beginning of 2013, I’ve curiously followed Virginia Gow’s Tūhonohono project. I think it’s marvellous. Each day she photographs something she encounters in the world and presents it alongside a corresponding image from a heritage collection—usually a photograph from the Alexander Turnbull Library—and a few words. The pairing is often literal: almost identical subjects, separated by time and space.
But my favourite matches are more oblique. Sometimes you need to make a metaphorical mental jump to understand why Virginia brought together two particular photographs. This is one of my favourite pairings.
I have also been following the redevelopment of the Smithsonian Cooper-Hewitt collections website. The Cooper-Hewitt’s digital & emerging media team’s playfulness and design process have fascinated and inspired me. I am particularly taken with their colour search feature.
“Objects with images now have up to five representative colors attached to them. The colors have been selected by our robotic eye machines who scour each image in small chunks to create color averages. These have then been harvested and snapped to the grid of 118 different colors.”
(I love the strange and wonderful website those folks are building.)
A few mornings ago I was sitting at my laptop drinking coffee. The previous evening’s Tūhonohono pairing was open in one tab and a Cooper-Hewitt object detail page in the next. My tired mind folded one into the other. I started wondering what Virginia’s year of photography would look like if the subjects faded away and the dominant colours expanded to fill the image.
So, I started playing…
First, I needed some data. I wrote a mini-harvester to scrape post details and download the images from Tūhonohono’s monthly archive pages. The code isn’t exactly crash hot, but it does the job. Then it was on to the fun playing-with-colour stuff.
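If you’re curious, the harvesting boiled down to something like this sketch, using Python’s standard-library HTML parser. The `ImageHarvester` class, the sample markup, and the `example.com` URL are all illustrative stand-ins, not the real archive structure:

```python
from html.parser import HTMLParser


class ImageHarvester(HTMLParser):
    """Collect the src of every <img> tag on a page.

    A real harvester would also pull post titles and dates; the markup
    here is a made-up example, not the actual archive-page structure.
    """

    def __init__(self):
        super().__init__()
        self.image_urls = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.image_urls.append(src)


# In practice the HTML would be fetched with urllib; a hard-coded
# sample keeps the sketch self-contained.
sample_html = '<div class="post"><img src="http://example.com/day1.jpg"></div>'
parser = ImageHarvester()
parser.feed(sample_html)
print(parser.image_urls)
```

From there it’s just a loop over the collected URLs to download each image.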
As noted above, the Cooper-Hewitt colour classification code partitions RGB colour space into a finite set of buckets. An algorithm inspects each image, assigning every pixel to a colour bucket according to Euclidean distance. This technique works really well for the Cooper-Hewitt and it makes a heap of sense if you want to search across colours.
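To make the bucketing idea concrete, here’s a minimal sketch of snapping a pixel to a fixed palette by Euclidean distance. The three-colour `PALETTE` is a toy stand-in for the Cooper-Hewitt’s 118-colour grid, which I haven’t reproduced:

```python
import math

# An illustrative three-colour palette; the real grid has 118 colours.
PALETTE = {
    "red": (204, 0, 0),
    "green": (0, 153, 0),
    "blue": (0, 0, 204),
}


def snap_to_palette(pixel):
    """Return the name of the palette colour nearest to `pixel` in RGB space."""
    def distance(colour):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(pixel, colour)))
    return min(PALETTE, key=lambda name: distance(PALETTE[name]))


print(snap_to_palette((250, 30, 20)))  # a reddish pixel snaps to "red"
```

Run this over every pixel (or small chunk) of an image, tally the buckets, and you have a searchable colour signature.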
I travelled a slightly different route. Instead of matching pixels to a small set of predetermined colours, I used a machine learning technique called k-means clustering to group similar pixels and find the dominant colours. The clever way k-means clustering mashes together nearest-neighbour analysis with Voronoi polygons makes it a great method for finding approximate cluster centres when you’re unconcerned about the partition borders. I also like that the results emphasise distinctness over raw frequency counts. I began writing my own clustering algorithm before stumbling across a Python implementation by Charles Leifer, which I used without modification.
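The core loop is simple enough to sketch. This is not Leifer’s code, just a toy illustration of the idea on RGB pixels, seeded deterministically for brevity:

```python
def kmeans(points, k, iterations=10):
    """A minimal k-means sketch: return k cluster centres.

    Repeatedly assign each point to its nearest centre, then move each
    centre to the mean of its assigned points. For simplicity the first
    k points seed the centres; real implementations choose random seeds
    and often restart several times.
    """
    centres = list(points[:k])
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centres[i])),
            )
            clusters[nearest].append(p)
        # Move each centre to the mean of its cluster (keep it put if empty).
        centres = [
            tuple(sum(channel) / len(cluster) for channel in zip(*cluster))
            if cluster else centres[i]
            for i, cluster in enumerate(clusters)
        ]
    return centres


# Six pixels in two obvious groups: reds and blues.
pixels = [(200, 10, 10), (210, 20, 5), (205, 15, 12),
          (20, 30, 220), (10, 40, 230), (25, 35, 225)]
centres = kmeans(pixels, k=2)
```

The two centres settle near the average red and the average blue, which is exactly the “dominant colours” I wanted from each photograph.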
With metadata and colours in hand, I started playing with the photographs. I was unsure what I was making except that I wanted to see all of the colours in Virginia’s year at once and convey a sense of dynamism. I wanted the screen to pulse and change and fade and glow. The colours of Tūhonohono is a small attempt to share what it has been like to follow this photography project over the course of the year. I wanted to provide a space for people to make their own connections from Virginia’s photographs. It is a thing built out of impressions, glimpses and intuition. And I think that’s all I have to say about it.