Last year, when I didn’t know what to do because Museum in a Box had (temporarily) fallen apart after every museum on the planet closed, I thought to myself: AI is interesting, maybe you should study it for a while. I had heard of the 3AI Institute at the School of Cybernetics in Canberra, and saw that they were accepting their next round of candidates for the Masters program. Once I’d figured out how to process the state of Museum in a Box – see Thriving in uncertainty – I sent in my “portfolio” to 3AI, with just a few days to spare.
I’d written it quickly, but I still think about it, so I thought others might be interested. I didn’t get into the course, mostly because of COVID and international travel restrictions, but maybe I’ll try again another time.
“I offer you some questions I would enjoy exploring with you,” I said.
What are the affordances humans need to see and read AI? How do we show, understand, and appreciate how AI is diverging from human civility? What do humans need to make AI legible?
The machines must be trained with the best humanity has to offer culturally. Imagine if, instead of Asimov’s Three Laws, they were fed a compilation of the commandments of the world’s major religions, so they would have to negotiate a civil system.
What if algorithms ran in concert? Imagine a choir instead of a monologue. What does resonance mean?
What if machines made decisions like the US Supreme Court? What if five programs were required to reach consensus on a desirable programmatic resolution? This is not drones flocking.
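Purely as a toy illustration of that consensus idea (this wasn’t part of the portfolio), you could imagine five independent “judges”, each with its own decision rule, where an action is only taken if they all agree. Everything here is hypothetical and invented for the sketch:

```python
from collections import Counter

def consensus_decision(judges, case, required=5):
    """Ask every judge for a verdict; act only if enough of them agree."""
    verdicts = [judge(case) for judge in judges]
    verdict, votes = Counter(verdicts).most_common(1)[0]
    if votes >= required:
        return verdict   # consensus reached: this is the bench's decision
    return None          # no consensus: withhold the action entirely

# Five hypothetical "programs", each reasoning differently about a case.
judges = [
    lambda case: "allow" if case["risk"] < 0.5 else "deny",
    lambda case: "allow" if case["benefit"] > 0.7 else "deny",
    lambda case: "allow" if case["risk"] < case["benefit"] else "deny",
    lambda case: "deny" if case["risk"] > 0.8 else "allow",
    lambda case: "allow",  # an optimist on the bench
]

print(consensus_decision(judges, {"risk": 0.2, "benefit": 0.9}))  # all five agree
print(consensus_decision(judges, {"risk": 0.9, "benefit": 0.1}))  # split bench, no action
```

The interesting property is the last line: unlike a single algorithm, a split bench produces no decision at all, which is arguably the civil outcome.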
What is a Shitty Robots* version of AI? Does that make it more charming or understandable? Does it mean humans are sympathetic instead of fearful? How can an AI be useless?
* Shitty Robots is the work of Simone Giertz. She designs machines that are clumsy and/or useless.
It is here I recall the work of Professor Stephanie Dinkins, who “employs emerging technologies, documentary practices, and social collaboration toward equity and community sovereignty.” Her talk from Eyeo 2019 about how we know what we know is well worth a watch, and it raises the question: why don’t we have millions of training datasets instead of those few hulking, worn, and prejudiced ones?
I’m enjoying the thought that computers should need to socialise in some way, instead of only being internally consistent. If you know of people researching that direction, I’d love to hear about it.