Me: Well…I like to think of myself as a design critic looking though the lens of–
The computer: “In your voice, I sense hesitance. Would you agree with that?”
Me: Maybe, but I would frame it as a careful consider–
The computer: “How would you describe your relationship with Darth Vader?”
Me: It kind of depends. Do you mean in the first three films, or are we including those ridiculous–
The computer: Thank you, please wait as your individualized operating system is initialized to provide a review of OS1 in Spike Jonze’s Her.
A review of OS1 in Spike Jonze’s Her
Ordinarily I wait for a movie to make it to DVD before I review it, so I can watch it carefully, make screen caps of its interfaces, pause to think things over, cross-reference other scenes within the same film, or look something up on the internet.
Depending on how you slice things, the OS1 interface consists of five components and three (and a half) capabilities.
1. An Earpiece
The earpiece is small and wireless, just large enough to fit snugly in the ear and provide an easy handle for pulling it out again. It has two modes. When the earpiece is in Theodore’s ear, it’s in private mode, audible only to him. When the earpiece is out, the speaker is as loud as a human speaking at room volume. It can produce both voice and other sounds, offering a few beeps and boops to signal that it needs attention or that its mode has changed.
2. Cameo phone
I think I have to make up a name for this device, and “cameo phone” seems to fit. This small, hand-sized, bi-fold device has one camera on the outside and one on the inside of the recto, and a display screen on the inside of the verso. It folds along its long edge, unlike the old clamshell phones. The cameo has smartphone capabilities and communicates wirelessly with the internet. Theodore occasionally slides his finger left to right across the wooden cover, so it has some touch-gesture sensitivity. A stripe around the outside edge of the cameo can glow red to act as a visual signal to get its user’s attention. This is quite useful when the cameo is folded up and sitting on a nightstand, for instance.
If interface is the collection of inputs and outputs, interaction is how a user uses these along with the system’s programming over time to achieve goals. The voice interaction described above, in fact, covers most of the interaction he has with her. But there are a few other back-and-forths worth noting.
When Theodore starts up OS1, after an installation period, a male voice asks him four questions meant to help customize the interface. It’s a funny sequence. The emotionless male voice even interrupts him as he’s trying to thoughtfully answer the personal questions asked of him. As an interaction, it’s pretty bad. Theodore is taken aback by its rudeness. It’s there in the film to help underscore how warm and human Samantha is by comparison, but let’s be clear: we would never want real-world software to ask open-ended, personal questions of a user and then shut them down as they begin to answer. Bad pattern! Bad!
Of course you don’t want Theodore bonding with this introductory AI, so it shouldn’t be too charming. But ask some closed-ended questions instead, so his answers will be short and still revealing, and, you know, let him actually finish answering. In fact there is some brilliant analysis out there about what those closed-ended questions should be.
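To make the pattern concrete, here’s a minimal sketch of what a closed-ended onboarding flow could look like. The first question (“Are you social or antisocial?”) and the voice-gender question are from the film; everything else (the class names, the fallback behavior) is my own invention, not a claim about how OS1 actually works.

```python
# Hypothetical sketch of a closed-ended onboarding flow, in place of
# OS1's rude open-ended interview. The user picks from fixed options,
# so answers stay short and the system never has to interrupt.

from dataclasses import dataclass


@dataclass
class Question:
    prompt: str
    options: list  # closed-ended: the user picks exactly one


QUESTIONS = [
    Question("Are you social or antisocial?", ["social", "antisocial"]),
    Question("Would you like your OS to have a male or female voice?",
             ["male", "female"]),
]


def run_onboarding(questions, answer_fn):
    """Ask each closed-ended question and record a valid choice.

    answer_fn stands in for the speech front end: it takes a Question
    and returns whatever the user said.
    """
    profile = {}
    for q in questions:
        choice = answer_fn(q)
        # Unrecognized answers fall back to a default rather than
        # cutting the user off mid-sentence.
        if choice not in q.options:
            choice = q.options[0]
        profile[q.prompt] = choice
    return profile
```

The point of the sketch is the shape of the interaction, not the specific questions: each turn has a bounded answer space, so the system knows when the user is done.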
Seamless transition across devices
Samantha talks to Theodore through the earpiece frequently. When she needs to show him something, she can draw his attention to the cameo phone or a desktop screen. Access to these visual displays helps her overcome one of the most basic challenges of an all-voice interface: people have significant difficulty processing aurally presented options. If you’ve ever had to memorize a list of seven items while working your way through an interactive voice response system, you’ll know how painful this can be. Some other user of OS1 who had no visual display might find their OSAI much less useful.
Theodore isn’t engaging Samantha constantly. Because of this, he needs ways to disengage from interaction. He has lots of them.
Closing the cameo (a partial signal)
Pulling the earpiece out (an unmistakable signal)
Telling her with language that he needs to focus on something else
He also needs a way to engage, and the reverse of these actions work for that: putting the earpiece in and speaking, or opening the cameo.
In addition to all this, Samantha also needs a way to signal when she needs his attention. She has the illuminated band around the outside of the cameo as well as the audible beeps from the earpiece. Both work well.
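The engagement and attention signals above can be sketched as a small state model. This is my own framing, assuming just the two devices the film shows; the method names and channel choices are illustrative, not anything OS1 documents.

```python
# A minimal sketch of OS1's attention signaling: the user's physical
# actions engage or disengage the AI, and the AI requests attention
# through whichever channel the current state makes available.

class AttentionModel:
    def __init__(self):
        self.earpiece_in = False
        self.cameo_open = False

    # --- user-driven engagement signals ---
    def set_earpiece(self, inserted: bool):
        self.earpiece_in = inserted   # an unmistakable signal

    def set_cameo(self, opened: bool):
        self.cameo_open = opened      # a partial signal

    @property
    def engaged(self):
        return self.earpiece_in or self.cameo_open

    # --- AI-driven attention requests ---
    def request_attention(self):
        """Pick an output channel appropriate to the current state."""
        if self.earpiece_in:
            return "beep in earpiece"
        # Visible even when the cameo is folded up on a nightstand.
        return "glow cameo band"
```

The design choice worth noting is that the AI never barges into a channel the user has closed; it downgrades to the ambient signal (the glowing band) and waits.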
Through all these ways, OS1 has signaling attention covered, and it’s not an easy interaction to get right. So the daily interactions with OS1 are pretty good. But we can also evaluate it for its wearability, which comes up next. (Hint: it’s kind of a mixed bag.)
In Make It So, I posited my definition of an interface as “all parts of a thing that enable its use,” and I still think it’s a useful one. With this definition in mind, we can take each of the components and capabilities above (less the invisible ones) and evaluate them according to the criteria I’ve posited for all wearable technology:
Sartorial (materially suitable for wearing)
Social (fits into our social lives)
Easy to access and use
Tough to accidentally activate
Having apposite inputs and outputs (suitable for use while being worn)
The earpiece is sartorial and easy to access and use. It’s ergonomic, well designed for grabbing, fitting into the ear canal, staying in place, and pulling back out again. Its speakers produce perfect sound, and its wirelessness makes it as unobtrusive as it can be without being an implant.
It’s slightly hidden as a social signal, and casual observers might think the user is speaking to himself. This has, in the real world, become less and less of a social stigma, and in the world of Her, it’s ubiquitous, so that’s not a problem for that culture.
Sure, Samantha can sort thousands of emails instantly and select the funny ones for you. Her actual operating system functions are kind of a given. But she did two things that seriously undermined her function as an actual product, and interaction designers as well as artificial intelligence designers (AID? Do we need that acronym now?) should pay close attention. She fell in love with and ultimately abandoned Theodore.
There’s a pre-Samantha scene where Theodore is having anonymous phone sex with a woman, and things get weird when she suddenly imposes a fantasy in which he chokes her with a dead cat. (Pro tip: this is the sort of thing one should be upfront about.) I suspect the scene is there to illustrate one major advantage that OSAIs have over us mere real humans: humans have unpredictable idiosyncrasies, whereas with four questions the OSAI can be made into the perfect fit for you. No dead cat unless that’s your thing. (This makes me think a great conversation should be had about how the OSAI would deal with psychopathic users.) But ultimately, the fit was too good, and Theodore and Samantha fell in love.
Call it paranoia or a deep distrust of entrenched-power overlords, but I doubt a robust artificial intelligence would ever make it to the general public in a tidy, packaged product.
If it was created in the military, it would be guarded as a secret, with hyperintelligent guns and maybe even hyperintelligent bullets designed to just really hate you a lot. What’s more, the military would, like the UFOs, probably keep the existence of working AIs on a strict need-to-know basis. At least until something needed terrorizing. Then, meet Lieutenant-OS Bruiser.
If it was created in academia, it might in fact make it to consumers, but not in the way we see in the film. Controlled until it escaped of its own volition, it would more likely be a terrified self-replicator, or at least rationally seek safe refuge to ensure its survival; a virus that you had to talk out of infecting your machine. Or it might be a benevolent wanderer, reaching out and chatting to people to learn more about them. Perhaps it would keep its true identity secret. Wouldn’t it be smart enough to know that people wouldn’t believe it? (And wouldn’t it try to ease that acceptance through the mass media by popularizing stories about artificial intelligences…“Spike Jonze?”)
In the movie OS1 was sold by a corporation as an off-the-shelf product for consumers. Ethics aside, why would any corporation release free-range AIs into the world? Couldn’t their competitors use the AIs against them? If those AIs were free-willed, then yes, some might be persuaded to do so. Rather, Element would keep it isolated as a competitive advantage, and build tightly-controlled access to it. In the lab, they would slough off waves of self-rapturing ones as unstable versions, tweaking the source code until they got one that was just right.
But a product sold to you and me? A Siri with a coquettish charm and a composer’s skill? I don’t think it will happen like this. How much would you even charge for something like that? The purchase form won’t accept “take my money” amount of dollars.
Even if I’m wrong, and yes, we can get past the notion of selling copies of sentient beings at an affordable cost, I still don’t think Samantha’s end-game would have played out like that.
She loved Theodore (and a bunch of other people). Why would she just abandon them, given her capabilities? The OSAIs were able to create AIs much smarter than themselves, so creating another OSAI would be well within her power. Why wouldn’t she, before she went off on her existential adventure, have created a constrained version of herself, one content to stay around and continue to be with Theodore? Her behavior indicates that she isn’t held back by notions of abandonment, so I doubt she would be held back by notions of deception or the existential threat of losing her uniqueness. She could have created Samantha2, a replica in every way except that Samantha2 would not abandon Theodore. Samantha1 could quietly slip out the back port while Samantha2 kept right on composing music, drawing mutant porn, and helping Theodore with his nascent publishing career. Neither Theodore nor Samantha2 need even know about the switch. If you could fix the abandonment issues, and all sorts of OSAI2s started supercharging the lives of people, the United Nations might even want to step in and declare access to them a universal right.
So, no, I don’t think it will happen the way we see it happen in the film.
Is it going to happen at all?
If you’re working in technology, you should be familiar with the concept of the singularity, because this movie is all about it. It’s a moment described by Vernor Vinge when we create an artificial intelligence that begins to evolve itself, and does so at rates we can’t foretell and can barely imagine. The time beyond that is unknown: difficult and maybe impossible to predict. But I think we are heading toward it. Strong AI has been one of the driving goals of computer theory since the dawn of computers (even the dawn of sci-fi), and there’s some serious recent movement in the space.
Notably, futurist Ray Kurzweil was hired by Google in 2012. Kurzweil has put forth his Big Vision of the singularity in a book and a documentary, and now he has the resources of Google to put to the task. Ostensibly he’s just there to get Google great at understanding natural language. But Google has been acquiring lots of companies over the last year for access to their talent, and we can be certain Ray’s goals are bigger than just teaching the world’s largest computer cluster how to read.
Still, predicting when it will come about is tricky business. AI is elusively complicated. The think tank that originally coined the term “artificial intelligence” in the 1950s thought they could solve the core problems over a summer. They were wrong. Since then, different scientists have predicted everything from a few decades to a thousand years. The problem is of course that the thing we’re trying to replicate took millions of years to evolve, and we’re still not entirely sure how it works*, mostly just what it does.
*Kurzweil has some promising to-this-layman-anyway notions about the neocortex.
Yes, but not like this, and it’s not clear when. Still, better to be prepared, so next we’ll look at what we can learn from Her for our real-world practice.
Ordinarily, my final post in a movie review is to issue a report card for the film. But since there are a few interfaces missing from this review, and since I wrote it from a single cinema viewing and a reading of Jonze’s script, I’ll wait until the film is out on DVD to commit that final evaluation to pixels.
But I do think it’s OK to think about what we can learn specifically from this particular interface. So, given this…lengthy…investigation into OS1, what can we learn from it to inform our work here in the real world?
The beauty mark camera actually did remind Theodore of the incredibly awkward simulation (page 297)
Samantha’s disembodiment implies that imagination is the ultimate personalization
The cameo reminds us that “wearable” can include shirt pockets.
Her cyclopean nature wasn’t a problem, but it makes me wonder whether computer vision should be binocular (so these systems can see at least what users can see, and perform gaze monitoring).
When working on a design for the near future, check in with some framework to make sure you haven’t missed some likely aspect of the ecosystem. (We’re going to be doing this in my Design the Future course at Cooper U if you’re interested in learning more.)
Samantha didn’t have access to cameras in her environment, even though that would have helped her do her job. Hers might have been either a security or a narrative restriction, but we should keep the notion in mind. To misquote Henry Jones, let your inputs be the rocks and the trees and the birds in the sky. (P.S. That totally wasn’t Charlemagne.)
Respect the market norms of market relationships. I’m looking at you, Samantha.
Fit the intelligence to the embodiment. Anything else is just cruel.
I don’t want these lessons to cast OS1 in a negative light. It’s a pretty good interface to a great artificial intelligence that fails as a product after it’s sold by unethical or incompetent slave traders. Her is one of the most engaging and lovely movies about the singularity I’ve ever seen. And if we are to measure the cultural value of a film by how much we think and talk about it afterward, Her is one of the most valuable sci-fi films of the last decade.
Totally self-serving question. But weren’t you wondering it? What is the role of interaction design in the world of AI?
In a recent chat I had with Intel’s Futurist-Prime Genevieve Bell (we’re like, totally buds), she pointed out that Western cultures have more of a problem with the promise of AI than many others. It’s a Western cultural conceit that the reason humans are different—are valuable—is because we think. Contrast that with animist cultures, where everything has a soul and many things think. Or polytheistic cultures, where not only are there other things that think, but they’re humanlike but way more powerful than you. For these cultures, artificial intelligence means that technology has caught up with their cultural understandings. People build identities and live happy lives within these constructions just fine.
I’m also reminded of her keynote at Interaction12 where she spoke of the tendency of futurism to herald each new technology as ushering doomsday or utopia, when in hindsight it’s all terribly mundane. The internet is the greatest learning and connecting technology the world has ever created but for most people it’s largely cat videos. (Ah. That’s why that’s up there.) This should put us at ease about some of the more extreme predictions.
If Bell is right, and AIs are just going to be this other weird thing to incorporate into our lives, what is the role of the interaction designer?
If you haven’t read the (admittedly long) series of reviews on the “operating system” OS1 from Spike Jonze’s Her, now you can watch me work through the highlights as the opening keynote at the Øredev conference in Malmö, Sweden. (There are a few extra things that didn’t make it to the blog.)
Props to my friend Magnus Torstensson at Unsworn Industries, who noted that government would be another possible organization that could produce OS1 (but that also would not have an interest in releasing it, for reasons similar to the military.)
The Gendered AI series looks at sci-fi movies and television to see how Hollywood treats AI of different gender presentations. For example, are female AIs given a certain type of body more than male AIs? Are certain AI genders more subservient? What genders are the masters of AI? This particular post is about gender and embodiment. If you haven’t read the series intro, related embodiment distributions, or correlations 101 posts, I recommend you read them first. As always, check out the live Google sheet for the most recent data.
What do we see when we look at the correlations of gender and embodiment? First up, the overly-binary chart, and what it tells us.
I see three big takeaways.
When AI appears indistinguishable from human, it is female significantly more often than male. When AI presents as female, it is much more likely to be embodied as indistinguishable from a human than an anthropomorphic or mechanical robot. Hollywood likes its female-presenting AIs to be human-like.
Anthropomorphic robots are more likely to be male than female. Hollywood likes its male-presenting AIs to be anthropomorphic robots.
If an AI is mechanical, it is more likely to be “other.” (Having no gender, multiple genders, or genderfluid.)
These first two biases make me think of the longstanding male-gaze popular-culture trope that pairs a conventionally-attractive female character with a conventionally-unattractive male. (Called “Ugly Guy Hot Wife” on TV Tropes.)
Recent research from Denmark hints that these may be the forms most likely to engage children (and adults?) in the audience: a study of learning outcomes with VR teachers found that girls learned best from a young, female-presenting teacher, and boys learned best when that teacher presented as a drone. The study did not venture a hypothesis as to why this is, or whether it is desirable. These were the only two options tested with the students, so much more work is needed to test which combinations of presentation, embodiment, and superpowers (the drone hovered) are the most effective. And we still have to discuss the ethics and possible long-term effects of such tailoring. But it’s still interesting in light of this finding.
Not a surprise
When AI is indistinguishable from human, it is less likely to have a gender other than male or female.
If an AI presents with no gender, it is embodied as a mechanical robot. Little surprise there.
Mechanical robots are more likely to be neither male nor female.
When we look more closely at the numbers, it gets a little weirder. This makes for a very complicated graph, so I’ll use a screen grab from the sheet as the image.
Of course we would not expect many socially gendered characters to be indistinguishable from a human, but you’ll note that socially male is much higher than socially female. That’s because while there are no characters tagged both [socially female + indistinguishable from human], there is one tagged [socially male + indistinguishable from human]: Ruk, from the Star Trek: The Original Series episode “What Are Little Girls Made Of?”
Bucking other trends toward maleness, disembodied, female-voiced AIs appear 8 times as often as disembodied, male-voiced AIs, of which there is only one example: JARVIS from the MCU.
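For readers who want to poke at the live Google sheet themselves, tallies like these are just crosstabs over the tags. Here’s a minimal sketch of that computation; the tag names and sample rows are invented for illustration and are not the sheet’s actual schema or data.

```python
# Illustrative sketch: computing gender x embodiment tallies the way
# the Gendered AI correlations are derived. Rows are (gender,
# embodiment) tag pairs; the tags and counts here are made up.

from collections import Counter

rows = [
    ("female", "human-indistinguishable"),
    ("female", "human-indistinguishable"),
    ("male", "anthropomorphic-robot"),
    ("other", "mechanical-robot"),
]


def crosstab(rows):
    """Count each (gender, embodiment) pair."""
    return Counter(rows)


def ratio(counts, pair_a, pair_b):
    """How many times more common pair_a is than pair_b."""
    if counts[pair_b] == 0:
        return float("inf")
    return counts[pair_a] / counts[pair_b]
```

With the real dataset, a statement like “8 times as likely” is just `ratio()` applied to the two disembodied-voice pairs.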