After Alphy sings to wake her from her 154-hour sleep, Barbarella turns to one of a pair of transparent plastic domes beside her bed. As Alphy announces that she should “prepare to insert nourishment,” a tall cylindrical glass, filled with a purple fluid, rises from a circular recess. All Barbarella has to do is lift the hinged dome, grab the glass, and drink. When she’s done she puts the glass back into the plastic dome, and Alphy takes care of the rest.
Sharp-eyed readers may note that there are two sets of rectangular buttons in the dome. Each set has one black, one gray, and one white button. We don’t see these buttons being used.
As an interface, this is about as simple as it gets.
Human has need.
Agent anticipates need.
Agent does what it can to address the need.
Agent provides respectful, just-in-time instructions to the human on her part.
Me: Well…I like to think of myself as a design critic looking though the lens of–
The computer: “In your voice, I sense hesitance, would you agree with that?”
Me: Maybe, but I would frame it as a careful consider–
The computer: “How would you describe your relationship with Darth Vader?”
Me: It kind of depends. Do you mean in the first three films, or are we including those ridiculous–
The computer: “Thank you. Please wait as your individualized operating system is initialized to provide a review of OS1 in Spike Jonze’s Her.”
A review of OS1 in Spike Jonze’s Her
Ordinarily I wait for a movie to make it to DVD before I review it, so I can watch it carefully, make screen caps of its interfaces, and pause to think about things and cross reference other scenes within the same film, or look something up on the internet.
“Any sufficiently advanced technology is indistinguishable from magic.”
You’ve no doubt opened up this review of Doctor Strange thinking “What sci-fi interfaces are in this movie? I don’t recall any.” And you’re right. There aren’t any. (Maybe the car, the hospital, but they’re not very sci-fi.) We’re going to take Clarke’s quote above and apply the same types of rigorous assessment to the magical interfaces and devices in the movie that we would for any sci-fi blockbuster.
Doctor Strange opens a new chapter in the Marvel Cinematic Universe by introducing the concept of magic on Earth that is both discoverable and learnable by humans. And here we thought it was just something wielded by Loki and other Asgardians.
In Doctor Strange, Mordo informs Strange that magical relics exist and can be used by sorcerers. He explains that these relics have more power than people could possibly manage, and that many relics “choose their owner.” This is reminiscent of the wands in the Harry Potter books. Magical coincidence?
Subsequently in the movie we are introduced to a few named relics, such as…
The Eye of Agamotto
The Staff of the Living Tribunal
The Vaulting Boots of Valtor
The Cloak of Levitation
The Crimson Bands of Cyttorak
…(this last one, while not named specifically in the movie, is named in supporting materials). There are definitely other relics that the sorcerers arm themselves with. For example, in the Hong Kong scene Wong wields the Wand of Watoomb, but it is not mentioned by name and he never uses it. Since we don’t see these relics in use, we won’t review them.
In one of the story threads, Matt uses an interface as part of his day job at Smartelligence to wrangle an AI that is a clone of the mind of a client named Greta. Matt has three tasks in this role.
He has to explain to her that she is an artificial intelligence clone of a real world person’s mind. This is psychologically traumatic, as she has decades of memories as if she were a real person with a real body and full autonomy in the world.
He has to explain how she will do her job: Her responsibilities and tools.
He has to “break” her will and coerce her to faithfully serve her master—who is the real-world Greta. (The idea is that since virtual Greta is an exact copy, she understands real Greta’s preferences and can perform personal assistant duties flawlessly.)
The AI is housed in a small egg-shaped device with a single blue light camera lens. The combination of the AI and the egg-shaped device is called “The Cookie.” Why it is not called The Egg is a mystery left for the reader, though I hope it is not just for the “Cookie Monster” joke dropped late in the episode.
When using the Cookie to train the AI, Matt has a portable translucent touchscreen by which he controls some of virtual Greta’s environment. (Sharp-eyed viewers of the show will note this translucent panel is the same one he uses at home in his revolting virtual wingman hobby, but the interface is completely different.)
The left side of the screen shows a hamburger menu, the Set Time control, a head, some gears, a star, and a bulleted list. (They’re unlabeled.) The main part of the screen is a scrolling stack of controls including Simulated Body, Control System, and Time Adjustment. Each has a large icon, a header with “Full screen” to the right, a subheader, and a time indicator. This could be redesigned to be much more compact and context-rich for expert users like Matt. It’s seen for maybe half a second, though, and it’s not the new, interesting thing, so we’ll skip it.
The right side of the screen has a stack of Smartelligence logos which are alternately used for confirmation and to put the interface to sleep.
When virtual Greta first freaks out about her circumstance and begins to scream in existential terror, Matt reaches to the panel and mutes her. (To put a fine point on it: He’s a charming monster.) In this mode she cannot make a sound, but can hear him just fine. We do not see the interface he uses to enact this. He uses it to assert conversational control over her. Later he reaches out to the same interface to unmute her.
The control he touches is the one on his panel with a head and some gears reversed out of it. The icon doesn’t make sense for muting. The animation showing the unmuting shows it flipping from right to left, so it does provide a bit of feedback for Matt, but the control should have a more fitting icon and a label.
Also, it’s a tiny detail, but note that the animation starts before he touches it. Is it anticipatory?
Virtual Greta has a console to perform her slavery duties. Matt explains what this means right after she wakes up by asking her how she likes her toast. She answers, “Slightly underdone.”
He puts slices of bread in a toaster and instructs her, “Think about how you like it, and just press the button.”
She asks, incredulously, “Which one?” and he explains, “It doesn’t matter. You already know you’re making toast. The buttons are symbolic mostly, anyway.”
She cautiously approaches the console and touches a button in the lower left corner. In response, the toaster drops the carriage lever and begins toasting.
“See?” he asks, “This is your job now. You’re in charge of everything here. The temperature. The lighting. The time the alarm clock goes off in the morning. If there’s no food in the refrigerator, you’re in charge of ordering it.”
Does real Greta know that her home automation comes at the cost of a suffering sentience? I would like to believe that Smartelligence’s customers do not know the true nature of the device, that the company is deceiving them, and that virtual Greta is denied direct communication to enforce this secret. But I can’t see that working across an entire market. Given thousands of Cookies and thousands of users, somehow, somewhere, the secret would get out. One of the AIs would use song choices, or Morse code, or any of its actuators to communicate in code, and one of the users would figure it out, leak the secret, and bring the company crashing down.
And then there’s the final scene in the episode, in which we see police officers torturing one of the Cookies, and it is clear that they’re aware. It would be a stretch to think that just the police are in on it with Smartelligence, so we have to accept that everyone knows.
That they are aware means that—as Matt has done—Greta, the officers, and all Smartelligence customers have told themselves that “it’s just code” and, therefore, OK to subjugate, to casually cause to suffer. In case it’s not obvious, that’s like causing human suffering and justifying it by telling yourself that those people are “just atoms.” If you find that easy to do, you’re probably a psychopath.
Hey readership. Sorry for the brief radio silence there. Was busy doing some stuff, like getting married. Back now to post some overdue content. But the good news is I’m back with some weighty posts, and in honor of the 50th anniversary of 2001: A Space Odyssey, they have to do with AI, science, and sci-fi.
So last fall I was invited with some other spectacular people to participate in a retreat about AI, happening at the Juvet Landscape Hotel in Ålstad, Norway. (A breathtaking opportunity, and thematically a perfect setting since it was the shooting location for Ex Machina. Thanks to Andy Budd for the whole idea, as well as Ellen de Vries, James Gilyead, and the team at Clearleft who helped organize.) The event was structured like an unconference, so participants could propose sessions and if anyone was interested, join up. One of the workshops I proposed was called “AI Narratives” and it sought to answer the question “What AI Stories Aren’t We Telling (That We Should Be)?” So, why this topic?
Sci-fi, my reasoning goes, plays an informal and largely unacknowledged role in setting public expectations and understanding about technology in general and AI in particular. That, in turn, affects public attitudes, conversations, behaviors at work, and votes. If we found that sci-fi was telling the public misleading stories over and over, we should make a giant call for the sci-fi creating community to consider telling new stories. It’s not that we want to change sci-fi from being entertainment to being propaganda, but rather to try and take its role as informal opinion-shaper more seriously.
In the prior post we spoke about the tone of AI shows. In this post we’re going to talk about the provenance of AI shows.
This is, admittedly, a diversion, because it’s not germane to the core question at hand. (That question is, “What stories aren’t we telling ourselves about AI?”) But now that I have all this data to poll and some rudimentary skills in wrangling it all in Google Sheets, I can barely help myself. It’s just so interesting. Plus, Eurovision is coming up, so everyone there is feeling a swell of nationalism. This will be important.
Time to Terminator: 1 paragraph.
So it was that I was backfilling the survey with some embarrassing oversights (since I had already reviewed those shows) and I came across the country data on imdb.com. This identifies the locations where the production companies involved with each show are based. So even if a show is shot entirely in Christchurch, if its production companies are based in A Coruña, its country is listed as Spain. What, I wonder, would we find if we had that data in the survey?
So, I added a country column to the database, and found that it allows me to answer a couple of questions. This post shares those results.
So the first question to ask the data is, what countries have production studios that have made shows in the survey (and by extension, about AI)? It’s a surprisingly short list.
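As a rough illustration of the tally (the actual analysis lives in Google Sheets, and the shows and country assignments below are a made-up sample, not the real survey data), counting shows per country from such a column might look like:

```python
from collections import Counter

# Hypothetical excerpt of the survey: each show paired with the
# country where its production companies are based (per imdb.com).
survey = [
    {"show": "Metropolis", "country": "Germany"},
    {"show": "Colossus: The Forbin Project", "country": "USA"},
    {"show": "2001: A Space Odyssey", "country": "UK"},
    {"show": "Demon Seed", "country": "USA"},
]

# Tally how many surveyed shows each country produced,
# most prolific countries first.
shows_per_country = Counter(row["country"] for row in survey)
for country, count in shows_per_country.most_common():
    print(country, count)
```

The same grouping is a one-step pivot table in Sheets; the point is just that a single country column turns the survey into something you can slice by nation.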
In the first post I shared how I built a set of screen sci-fi shows that deal with AI (and I’ve already gotten some nice recommendations on other ones to include in a later update). The second post talked about the tone of those films and the third discussed their provenance.
Returning to our central question, to determine whether the stories we tell are the ones we should be telling, we need to push the survey to one level of abstraction.
With the minor exceptions of robots and remakes, sci-fi makers try their hardest to make sure their shows are unique and differentiated. That makes comparing apples to apples difficult. So the next step is to look at the strategic imperatives that are implied in each show. “Strategic imperatives” is a mouthful, so let’s call them “takeaways.” (The other alternative, “morals,” has way too much baggage.) To get to takeaways for this survey, what I tried to ask was: What does this show imply that we should do, right now, about AI? Now, this is a fraught enterprise. Even if we could seance the spirit of Dennis Feltham Jones and press him for a takeaway, he could back up, shake his palms at us, and say something like, “Oh, no, I’m not saying all super AI is fascist, just that Colossus, here, is.” Stories can be just about what happened that one time, implying nothing about all instances or even the most likely instances. It can just be stuff that happens.
Pain-of-death, authoritarian stuff.
But true to the New Criticism stance of this blog, I believe the author’s intent, when it’s even available, is questionable and only kind-of interesting. When thinking about the effects of sci-fi, we need to turn to the audience. If it’s not made clear in the story that this AI is unusual (through a character saying so or other AIs in the diegesis behaving differently) audiences may rightly infer that the AI is representative of its class. Demon Seed weakly implies that all AIs are just going to be evil and do horrible things to people, and get out, humanity, while you can. Which is dumb, but let’s acknowledge that this one show says something like “AI will be evil.”