New(ish) Work: Wikimedia Foundation

Last year, I was engaged by the Wikimedia Foundation to do a usability and community deep dive into their Wikimedia Commons platform, thinking about how it might be improved or adapted with a critical eye on:

  • what the system is like for new folks,
  • how new institutional partners could be attracted,
  • a sideways glance at the Structured Data Commons initiative, and
  • some dreaming about what a future of the Commons platform could be like.

As well as giving an internal presentation to the WMF crew, I ended up writing a public-facing version of the work, How could Wikimedia Commons be improved? A conversation with designer George Oates, which was published in October 2018.

I ended up becoming really interested in three main things:

  1. Differences in the administrative feel of different language communities,
  2. The diversity challenges faced by the platform, especially comparing the way the “elders” of the community governed with how the community at Flickr managed to get along without too many car crashes. It was fascinating to compare those two huge communities and see how differently they operate, and
  3. Organic vs top-down information systems and how that plays out in the intense Commons “category” system, a topic that’s dear to my heart.

It was super interesting and complex.

Defining beauty

I’m very excited to dive into twoway.st and (in honour of the British Museum’s current exhibition Defining Beauty) see what beauty I discover.

There’s a lot of Greek sculpture in the exhibition so I’m starting with Made of marble.

I’ve found Made of ?: there are 5,811 things made of ?.

1,998 things are Made of human bone???

1,004 things are Made of coconut!

677 of papier mâché.

493 of donkey skin… it gets weirder and weirder…

Made of fish tooth (26 things), bear tooth (25 things), sheep tooth (24 things). Where are the hen’s teeth? Brilliant, just found chicken eggshell (24 things).

Whoops I skipped right past marble, back to page 1, a monumental 10,178 marble objects.

This marble Venus is rather beautiful, resting her left foot on the head of a swan.

Following the Depicted Aphrodite/Venus thread there are, unsurprisingly, lots more depictions of Venus, 3,446 in total. But surprisingly (for me) the depictions are mostly on paper (1,367 of them). And mostly from Italy.

Here are a couple of the Venus/Paper/Italy beauties:
Venus drying her left foot, with Cupid
Venus pulling a thorn from her left foot, with a rabbit

What’s with her left foot?

Anyway, I seem to have gone down a rabbit hole of Italian C16th engravings.

Back to Greek stuff… via Subject: Classical Deity

Nice. Pots and sculptures, that’s what I’m after…

I recognise these guys, they are pretty beautiful, and they’ve been on tour with the Body Beautiful exhibition to the US and Australia.

Another Venus, from the photo it looks like she’s tucked away in some storeroom. The Place information is blank, so it doesn’t look like she’s on display. Maybe she got some limelight in the exhibition…

I haven’t had a huge amount of luck finding objects that are actually in the show. One facet called Appeared in exhibition: Human Body in Greek Art has a lot of the biggies that I know appear in Defining Beauty (Discobolus, Ajax, Westmacott Athlete). But others like Lely’s Venus, Ilissos, and Dionysos are missing from that facet. Perhaps if the database were used consistently, you would be able to find the complete set through the Appeared in exhibition facet.

What’s so brilliant about twoway.st is that there’s cool, surprising, and beautiful stuff whichever way you turn.

It is a very freeing experience.

You don’t need to have any knowledge of the collection before diving in. And even if you have a bit of prior knowledge (I worked with the collection for nearly 10 years as part of the digital team), it makes you realise how many more objects there are to discover.

A beautiful new way of exploring the awesome collection of the British Museum.

Internal R&D Project #2: V&A Spelunker

The second of our internal R&D projects, the V&A Spelunker is a new fun way to explore the corners of the vast Victoria and Albert Museum.

This is a copy of a blog post I wrote for the Victoria & Albert Museum Digital Media blog. I thought it would be nice to pop a copy here for posterity.

As part of our ongoing research practice, we’ve made a new toy to help you explore the wondrous Victoria and Albert Museum’s catalogue, the V&A Spelunker.

Spelunking is an American word for exploring natural, wild caves. You might also say caving, or potholing here in the UK. I hope using this thing we’ve made feels a bit like exploring a dark cave with a strong torch and proper boots. It’s an interface to let you wander around a big dataset, and it’s designed to show everything all at once, and importantly, to show relationships between things. Your journey is undetermined by design, defined by use.

The V&A Spelunker’s Skeleton

In some ways, the spelunker isn’t particularly about the objects in the collection — although they’re lovely and interesting — it now seems much more about the shape of the catalogue itself. You eventually end up looking at individual things, but, the experience is mainly about tumbling across connections and fossicking about in dark corners.

The bones of the spelunker are pretty straightforward. It’s trying to help you see what’s connected to what, who is connected to where, and what is most connected to where, at a very simple level. You have the home page, which shows you a small random selection of things of the same type, like these wallpapers:

You can also look around a list of a few selected facets:

And at some point, you’ll find yourself at a giant list of any objects that match whatever filter you’ve chosen, like hand-sewn things, or all the things from Istanbul:

Just yesterday, we added another view for these lists to show you any/all images a little larger, and with no metadata. It’s a lovely way to look at all the robes or Liberty & Co. Ltd. fabrics or things in storage.

If you see something of interest, you can pull up a dumb-as-a-bag-of-hammers catalogue record view which is just that. Except that it also links through to the V&A API’s .JSON response for that object, which shows you some of the juicy interconnection metadata. Here’s a favourite I stumbled on:

(Incidentally, I was thrilled but slightly frightened to see this “Water cistern with detatchable drinking cup, modelled as a chained bear clasping a cub to its breast” from Nottingham in person, in the absolutely stunning ceramics gallery.)

The Beauty of an Ordered List

If you choose one of the main facets like Artist, or Place, you’ll get to a simple ordered list of results for that facet. It’s nice because you can see a lot of information about the catalogue at a glance.

You can see that the top four artists in the catalogue are Unknown (roughly 10%), Worth (as in the House of Worth, famous French couturiers), Hammond, Harry (‘founding father of music photography‘) and the lesser-known unknown.

I was curious to learn, at a glance, that most of the collection appears to come from the United Kingdom. (I might be showing my ignorance here, but this was a surprise to me.)

Here are the most common 20 places, with UK in bold:

London 59,661
England 42,178
Paris 36,890
Britain 32,388
Great Britain 27,486
France 23,540
Italy 11,562
Staffordshire (hello, Wedgwood?) 11,007
Germany 6,666
China 5,275
Europe 5,260
Japan 4,005
Royal Leamington Spa (3,857 hand-coloured fashion plates, from the ‘Pictorial History of Female Costume from the year 1798 to 1900’) 3,859
Iran 3,411
India 3,302
Jingdezhen (known for porcelain) 3,261
United Kingdom 3,098
United States 3,045
Rome 2,961
Netherlands 2,943
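Several of those labels all mean the UK in some sense. As a quick sanity check on that hunch, here’s a small Ruby sketch (Ruby to match the spelunker’s own regex snippet) summing the labels I’d call UK-ish from the table above; the grouping is my own assumption, not the V&A’s:

```ruby
# Summing the UK-ish labels from the top-20 places list above.
# Which labels count as "UK-ish" is my own grouping, not the V&A's.
places = {
  "London" => 59_661, "England" => 42_178, "Britain" => 32_388,
  "Great Britain" => 27_486, "Staffordshire" => 11_007,
  "Royal Leamington Spa" => 3_859, "United Kingdom" => 3_098,
}
uk_total = places.values.sum  # => 179677
```

Just those seven labels together account for nearly 180,000 objects, comfortably ahead of Paris, France and Italy combined.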

Catalogue Topology

Those simple sorts of views and lists start to help you make suppositions about the collection as a whole. Perhaps you can start to poke at the stories hidden in the catalogue about the institution itself. I found myself wanting to try to illustrate some other aspects of the catalogue than just its contents, and that’s when this happened…

The Date Graph

The Date Graph has three simple inputs, all date related. The V&A sets two creation dates for each object: year_start and year_end. Each record also gets a museum_number, which, as in the case of our weird bear, looks like this:

1180-1864

Those last four digits there normally represent the year the object was acquired. So, we snipped out that date and drew all three dates together in a big graph.
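A minimal sketch of that snipping, in Ruby (the method name is mine; the real spelunker code may well differ):

```ruby
# Pull the trailing four digits out of a museum number like "1180-1864";
# those usually represent the year the object was acquired.
def acquisition_year(museum_number)
  match = museum_number.match(/(\d{4})\z/)
  match && match[1].to_i
end

acquisition_year("1180-1864")  # => 1864
```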

The more you look around the date graph, the more you start to see what might be patterns, like this big set of stuff all collected in the same year. Often these blocks of objects are related, like prints by the same artist, or fragments of the same sculptural feature:

Some objects in the collection have very accurate creation date ranges, while some are very broad, even hundreds of years wide. The very accurate ones are often objects that have a date on them, like coins:

It’s also interesting to see how drawing a picture like this date graph can show you small glitches in the catalogue metadata itself. Now, I don’t know enough about the collections, but perhaps this sort of tool could help staff find small errors to be corrected, errors that are practically impossible to spot when you’re looking at big spreadsheets, or records represented in text. Here’s an example, from the graph that shows objects in the 2000-2014 range… see those outliers that look as if they were acquired before they were created?

Asking Different Questions

I kept finding myself wondering if the Date Graph style of view could show us answers to some questions that are specific to the internal workings of the V&A. Could we answer different sorts of questions about the institution itself?

  • When do cataloguers come and go as staff? Do they have individual styles?
  • Can we see the collecting habits of individual curators?
  • Does this idea of “completeness” of records reveal how software could change the data entry habits used to make the catalogue?
  • Do required fields in a software application affect the accuracy of “tombstone” records?

A new feature I’d like to build would be a way to add extra filters on the date graph, like show me all the Jumpers acquired by the museum that were made between 1900-1980.
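Using the year_start and year_end fields described above, that sort of filter might look something like this in Ruby (the hash keys and sample records are mine, purely for illustration):

```ruby
# Hypothetical records using the V&A's year_start / year_end creation dates.
objects = [
  { object_type: "Jumper", year_start: 1955, year_end: 1960 },
  { object_type: "Jumper", year_start: 1890, year_end: 1895 },
  { object_type: "Teapot", year_start: 1920, year_end: 1925 },
]

# "All the Jumpers made between 1900 and 1980."
jumpers = objects.select do |o|
  o[:object_type] == "Jumper" &&
    o[:year_start] >= 1900 && o[:year_end] <= 1980
end
```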

It’s a Sketch

No Title by Jamini, Roy

Even though we put in a good effort to make this, it’s still a rough sketch. Now that it’s built and hanging together, there are all sorts of things we’d like to do to improve it. If anything, it’s teased out more questions than answers for us, and that’s exactly what this sort of thing should do. My collaborator, Tom Armitage, is also going to write a post over on the Good, Form & Spectacle Work Diary about “Sketching and Engineering” in a little while too, so stay tuned for that.

We hope you enjoy poking around, and we’d love to hear of any interesting discoveries you make. Please tweet your finds to us @goodformand.

Go spelunking!

theresa-going-in by Theresa – CC BY-NC-ND 2.0

Internal R&D Project #1: Netflix-O-Matic

screenshot of the home page

Netflix-O-Matic is our very first project to go public. Woo! Strictly speaking, the project began very informally in December last year, but really came to a head just yesterday, when Frankie Roberto, Dan Williams and I built the front end for it. In a day. That was satisfying, and it’s the form part of our name, which is so important; otherwise you’re just waving your hands about.

Why Netflix-O-Matic?

I’ve planted a flag in the sand with Good, Form & Spectacle. It’s declaring my territory of interest in the world of cultural heritage. To me, that’s a helpful constraint and context for work-making (libraries, museums, archives), but also a juicy remit (save world, support open data, promote open access, help the humans, etc). It might be slightly odd to call Netflix-O-Matic a cultural heritage project, but let me explain why I think it is.

  • Netflix is a repository of objects that happen to be movies
  • Each movie has a bunch of metadata about it (directors, actors, genres etc)
  • Movies are collected together in genres and their variants (Slapstick Comedies; Feel-Good Slapstick Comedies). (Alexis Madrigal wrote an interesting article in The Atlantic about How Netflix Reverse-Engineered Hollywood, well worth a read. His work in December actually spurred my interest in making something with Netflix genres.)

This sounds a bit like a library. It’s a library.

It’s a bit hard to find new movies to watch using the default Netflix UI, or at least it is for me. None of these entertaining ‘micro genres’ are exposed, and it’s tricksy to engage when you just sort of feel like watching something slapstick-y but you don’t know what yet. When you feel like that, you need to ‘wander the stacks,’ and wait until something pops out at you. That’s the feeling that Netflix-O-Matic is trying to respond to. A search box isn’t great when you feel like that. This toy is about trying to remove that particular ‘I don’t know what I want’ cognitive load entirely, and get people moving in the content as quickly as possible.

User Interface

I enjoy designing bone simple, clicky clicky, lie back-type interfaces, particularly to look around giant datasets. I mean, we can all read a spreadsheet, I guess, but I like to throw in some randomness, and intentionally make it so people might not remember how they got to where they are.

In the case of the fantastic Netflix genres, I knew pretty early on that I wanted to play with the idea of compounding and splitting them in the UI. Often with faceted search, you’re just drilling, drilling, drilling, and it’s difficult to move sideways, across a search. In this UI, you can either split out into a component of a genre, or drill into the full genre listing. The UI isn’t perfect, but hopefully functional.

Click on a single element, in this case, Australian:

Or the whole genre, Australian Dramas Based on Contemporary Literature:

To help people use the split / sideways movement effectively, we spent quite a bit of time yesterday (mostly Frankie) configuring some regular expressions, or regex. That makes it so you can click on some phrases that make more sense to us than single words. Some examples are kung fu or hopeless romantics or hong kong or based on books or the old classic featuring a strong female lead. Both Frankie and I felt that we could have made the regex file about ten times as detailed, but, we wanted to release the thing in a single day and be done with it.

You can sort of get the gist of some of the others here, in the regex code:

 def words
  name.scan(/(?:
    (?:from\sthe\s[\d]+s) |
    (?:\sby\s[^\s]+\s[^\s]+) |
    (?:for\sages\s[\d]+\sto\s[\d]+) |
    (?:about\s(?:Trucks,\sTrains\s&\sPlanes|[^\s]+\s\&\s[^\s]+|[^\s]+)) |
    (?:(?:(?:directed|created)\sby|starring)\s[^\s]+(?:\s[^\s]+)*) |
    (?:[^\s]+\s\&\s[^\s]+) |
    (?:on\sBlu\-ray) |
    (?:set\sin\s(?:the\s)?[^\s]+(?:\s(?:Times|Era|Ages|America))?) |
    (?:Kids'\sTV) |
    (?:(?:North|East|South|West|Southeast)\s[^\s]+) |
    (?:Kung\sFu) |
    (?:Hidden\s[^\s]+) |
    (?:Best\s[^\s]+) |
    (?:for\sHopeless\sRomantics) |
    (?:Road\sTrip) |
    (?:Golden\sGlobe) |
    (?:[Ff]eaturing\s(?:a\s)?(?:[Ss]trong\s)?(?:[Ff]emale\s)?[^\s]+) |
    (?:Film\sFestival) |
    (?:[\d]+th\s[^\s]+) |
    (?:Hong\sKong) |
    (?:Fairy\sTales?) |
    (?:based\son\s
        (?:a\s)?
        (?:[Rr]eal\s)?
        [^\s]+
        (?:\s(?:[Bb]ooks|[Ll]iterature|[Gg]ame))? |
        (?:\sby\s[^\s]+(?:\s[A-Z][^\s]+)*)?
    ) |
    (?:[^\s]+)
  )/x)
end
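To see what that splitting does in practice, here’s a tiny sketch that wraps an abbreviated version of the pattern in a hypothetical Genre struct; the real file has many more phrase alternatives:

```ruby
# Abbreviated phrase-splitter: multi-word phrases like "Kung Fu" or
# "based on ..." stay together, anything else falls through to single words.
# The Genre struct is hypothetical, just to make the sketch runnable.
Genre = Struct.new(:name) do
  def words
    name.scan(/(?:
      (?:based\son\s(?:a\s)?(?:[Rr]eal\s)?[^\s]+) |
      (?:Kung\sFu) |
      (?:[^\s]+)
    )/x)
  end
end

Genre.new("Kung Fu Movies based on real life").words
# => ["Kung Fu", "Movies", "based on real life"]
```

Because every group in the pattern is non-capturing, `scan` returns whole matches as plain strings, which is exactly what the clickable UI wants.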

Making exploratory interfaces without a search box is useful for me because it helps me think through how to explore. If we'd wanted to take more time to really embellish this toy, we'd probably add one... I can definitely see some kind of auto-complete being useful. It's a useful exercise to think through what Dan called the mental model of the UI without a search, too. It makes exploring the data feel a bit more organic, or kaleidoscopic, or something. It definitely also helps you come across stuff you'd never search for, and I really enjoy that sensation. You're guided instead by words that appeal to you (in this case), and they expose the filigree in the system without you having to do much work.

Skills / Tools

I'd guess that, over the course of about ten months, the total time my collaborators and I have worked on this would probably only be about two weeks of effort. It's a bit of a guess, but I don't think it's much more than that. Frankie, Dan and I built the front end in a day, yesterday, which was great fun. That moment when you start seeing actual data in your designs is always a thrill.

If you had to describe the team - me, Frankie, Dan, Nick and Paul - it's a mötley crüe of UX design, ops, web crawling, DBA-ish, programmer-y something something. We mostly did research, scraping, emailing across the USA, planning, designs, data munging, Heroku/Ruby/Postgres setup, front-end development, tweaking, and a bit of publicity at the end. I also made use of a handy responsive CSS library called Skeleton and implemented it yesterday in about 15 minutes.

Working in Public / R&D

Even before this firm existed, I knew I wanted to be public about the behind-the-scenes noodling we're doing, and why we're doing it. Netflix-O-Matic is an example of the sort of simple toy I love to build. It's also useful because it helps me think about things like kaleidoscopic interfaces. I am not quite sure what that means just yet. Hence the toy.