After Alphy sings to wake her from her 154-hour sleep, Barbarella turns to one of a pair of transparent plastic domes beside her bed. As Alphy announces that she should "prepare to insert nourishment," a tall cylindrical glass, filled with a purple fluid, rises from a circular recess. All Barbarella has to do is lift the hinged dome, grab the glass, and drink. When she's done she puts the glass back into the plastic dome, and Alphy takes care of the rest.
Sharp-eyed readers may note that there are two sets of rectangular buttons in the dome. Each set has one black, one gray, and one white button. We don't see these buttons being used.
As an interface, this is about as simple as it gets.
Human has need.
Agent anticipates need.
Agent does what it can to address the need.
Agent provides respectful, just-in-time instructions to the human on her part.
Me: Well…I like to think of myself as a design critic looking through the lens of–
The computer: “In your voice, I sense hesitance, would you agree with that?”
Me: Maybe, but I would frame it as a careful consider–
The computer: “How would you describe your relationship with Darth Vader?”
Me: It kind of depends. Do you mean in the first three films, or are we including those ridiculous–
The computer: "Thank you. Please wait as your individualized operating system is initialized to provide a review of OS1 in Spike Jonze's Her."
A review of OS1 in Spike Jonze’s Her
Ordinarily I wait for a movie to make it to DVD before I review it, so I can watch it carefully, make screen caps of its interfaces, pause to think about things, cross-reference other scenes within the same film, or look something up on the internet.
“Any sufficiently advanced technology is indistinguishable from magic.”
You’ve no doubt opened up this review of Doctor Strange thinking “What sci-fi interfaces are in this movie? I don’t recall any.” And you’re right. There aren’t any. (Maybe the car, the hospital, but they’re not very sci-fi.) We’re going to take Clarke’s quote above and apply the same types of rigorous assessment to the magical interfaces and devices in the movie that we would for any sci-fi blockbuster.
Dr. Strange opens up a new chapter in the Marvel Cinematic Universe by introducing the concept of magic on Earth that is both discoverable and learnable by humans. And here we thought it was just something wielded by Loki and other Asgardians.
In Doctor Strange, Mordo informs Strange that magical relics exist and can be used by sorcerers. He explains that these relics have more power than people could possibly manage, and that many relics “choose their owner.” This is reminiscent of the wands in the Harry Potter books. Magical coincidence?
Subsequently in the movie we are introduced to a few named relics, such as…
The Eye of Agamotto
The Staff of the Living Tribunal
The Vaulting Boots of Valtor
The Cloak of Levitation
The Crimson Bands of Cyttorak
…(this last one, while not named specifically in the movie, is named in supporting materials). There are definitely other relics that the sorcerers arm themselves with. For example, in the Hong Kong scene Wong wields the Wand of Watoomb, but it is not mentioned by name and he never uses it. Since we don't see these relics in use, we won't review them.
In one of the story threads, Matt uses an interface as part of his day job at Smartelligence to wrangle an AI that is the cloned mind of a client named Greta. Matt has three tasks in this role.
He has to explain to her that she is an artificial intelligence clone of a real world person’s mind. This is psychologically traumatic, as she has decades of memories as if she were a real person with a real body and full autonomy in the world.
He has to explain how she will do her job: Her responsibilities and tools.
He has to "break" her will and coerce her to faithfully serve her master—who is the real-world Greta. (The idea is that since virtual Greta is an exact copy, she understands real Greta's preferences and can perform personal assistant duties flawlessly.)
The AI is housed in a small egg-shaped device with a single blue light camera lens. The combination of the AI and the egg-shaped device is called "The Cookie." Why it is not called The Egg is a mystery left for the reader, though I hope it is not just for the "Cookie Monster" joke dropped late in the episode.
When using the Cookie to train the AI, Matt has a portable translucent touchscreen by which he controls some of virtual Greta’s environment. (Sharp-eyed viewers of the show will note this translucent panel is the same one he uses at home in his revolting virtual wingman hobby, but the interface is completely different.)
The left side of the screen shows a hamburger menu, the Set Time control, a head, some gears, a star, and a bulleted list. (They're unlabeled.) The main part of the screen is a scrolling stack of controls including Simulated Body, Control System, and Time Adjustment. Each has a large icon, a header with "Full screen" to the right, a subheader, and a time indicator. This could be redesigned to be much more compact and context-rich for expert users like Matt. It's seen for maybe half a second, though, and it's not the new, interesting thing, so we'll skip it.
The right side of the screen has a stack of Smartelligence logos which are alternately used for confirmation and to put the interface to sleep.
Mute
When virtual Greta first freaks out about her circumstance and begins to scream in existential terror, Matt reaches to the panel and mutes her. (To put a fine point on it: He’s a charming monster.) In this mode she cannot make a sound, but can hear him just fine. We do not see the interface he uses to enact this. He uses it to assert conversational control over her. Later he reaches out to the same interface to unmute her.
The control he touches is the one on his panel with a head and some gears reversed out of it. The icon doesn't make sense for muting. The animation showing the unmuting flips from right to left, so it does provide a bit of feedback for Matt, but the control should have a more fitting icon and a label.
Also it’s teeny tiny, but note that the animation starts before he touches it. Is it anticipatory?
It's not clear, though, how he knows that she is trying to speak while she is muted. Recall that she (and we) see her mouthing words silently, but from his perspective, she's just an egg with a blue eye. The system would need some very obvious MUTE status display that increases in intensity when the AI is trying to communicate. Depending on how smart the monitoring feature was, it could even enable a high-intensity alert for her when she needs to communicate something vital. Cinegenically, this could have been a simple blinking of the blue camera light, though that is currently used to indicate the passage of time during the Time Adjustment (see below).
Simulated Body
Matt can turn on a Simulated Body for her. This allows the AI to perceive herself as if she had her source’s body. In this mode she perceives herself as existing inside a room with large, wall-sized displays and a control console (more on this below), but is otherwise a featureless white.
I presume the Simulated Body is a transitional model—part of a literal desktop metaphor—meant to make it easy for the AI (and the audience) to understand things. But it would introduce a slight lag as the AI imagines reaching and manipulating the console. Presuming she can build competence in directly controlling the technologies in the house, the interface should “scaffold” away and help her gain the more efficient skills of direct control, letting go of the outmoded notion of having a body. (This, it should be noted, would not be as cinegenic since the story would just feature the egg rather than the actor’s expressive face.)
Neuropsychology nerds may be interested to know that the mind's camera does, in fact, have spatial lags. Several experiments have been run in which subjects were asked to imagine animals as seen from the side and then timed on how long it took them to imagine zooming in to the eye. It usually takes longer for us to imagine the zoom to an elephant's eye than to a mouse's, because the "distance" is farther. Even though there's no physicality to the mind's camera to impose this limit, our brain is tied to its experience in the real world.
The interface Matt uses to turn on her virtual reality is confusing. We hear 7 beeps while the camera is on his face. He sees a 3D rendering of a woman's body in profile and silhouette. He taps the front view and it fills with red. Then he taps the side view and it fills with red. Then he taps some Smartelligence logos on the side with a thumb, and then *poof* she's got a body. While I suspect this is a post-actor interface (i.e., Jon Hamm just tapped some things on an empty screen while on camera, and the designers later had to retrofit an interface that fit his gestures), this multi-button setup and three-tap initialization just makes no sense. It should be a simple toggle with access to optional controls like scaffolding settings (discussed above).
Time “Adjustment”
The main tool Matt has to force compliance is a time control. When Greta initially says she won't comply (specifically and delightfully, she asserts, "I'm not some sort of push-button toaster monkey!"), he uses his interface to make it seem like 3 weeks pass for her inside her featureless white room. Then again for 6 months. The solitary confinement makes her crazy and eventually forces compliance.
The interface to set the time is a two-layer virtual dial: two chapter rings with wide blue arcs for touch targets. The first time we see him use it, he spins the outer one about 360° (before the camera cuts away) to set the time for three weeks. While he does it, the inner ring spins around the same center but at a slower rate. I presume it's months, though the spatial relationship doesn't make sense. Then he presses the button in the center of the control. He sees an animation of a sun and moon arcing over an illustrated house to indicate her passage of time, and then the display. Aside: Hamm plays this beat marvelously by callously chomping on the toast she has just helped make.
Improvements?
Ordinarily I wouldn't speak to improvements on an interface that is used for torture, but since this one could only affect a general AI that is as yet speculative, and it couldn't be co-opted to torture real people since time travel doesn't exist, I think this time it's OK. Discussing it as a general time-setting control, I can see three immediate improvements.
1. Use fast forward models
It makes the most sense for her time sentence to end automatically and return to real-world speed on its own. But each time we see the time controls used, the following interaction happens near the end of the time sentence:
Matt reaches up to the console
He taps the center button of the time dial
He taps the stylized house illustration. In response it gets a dark overlay with a circle inside of it reading "SET TIME." This is the same icon seen second from the top in the left panel.
He taps the center button of the time dial again. The dark overlay reads “Reset” with a new icon.
He taps the overlay.
Please tell me this is more post-actor interface design. Because that interaction is bonkers.
If the stop function really needs a manual control, well, we have models for that that are very readily understandable by users and audiences. Have the whole thing work and look like a fast forward control rather than this confusing mess. If he does need to end it early, as he does in the 6 months sentence, let him just press a control labeled PLAY or REALTIME.
2. Add calendar controls
A dial makes sense when a user is setting minutes or hours, but a calendar-like display should be used for weeks or months. It would be immediately recognizable and usable by the user and understandable to the audience. Working with the taps Hamm makes on screen, I would design the first tap to set the start date, the second tap to set the end date, and the third to commit.
3. Add microinteraction feedback
Also note that as he spins the dials, he sees no feedback showing the current time setting. At 370° is it 21 or 28 days? The interface doesn’t tell him. If he’s really having to push the AI to its limits, the precision will be important. Better would be to show the time value he’s set so he could tweak it as needed, and then let that count down as time remaining while the animation progresses.
Effectiveness subtlety: Why not just make the solitary confinement pass instantly for Matt? Well, recall he is trying to ride a line of torture without having the AI wig out, so he should have some feedback as to the duration of what he's putting her through. If it were always instant, he couldn't tell the difference between three weeks and three millennia if he had accidentally entered the wrong value. But if real-world time is passing, and it's taking longer than he thinks it should, he can intervene and stop the fast-forwarding.
That, or of course, show feedback while he’s dialing.
Near the end of the episode we learn that a police officer is whimsically torturing another Cookie: he sets the time ratio to "1,000 years per minute" and then just lets it run while he leaves for Christmas break. The current time ratio is absent from the screen; it should be displayed, with a control for changing it.
Add psychological state feedback
There is one “improvement” that does not pertain to real world time controls, and that’s the invisible effect of what’s happening to the AI during the fast forward. In the episode Matt explains that, like any good torturer, “The trick of it is to break them without letting them snap completely,” but while time is passing he has no indicators as to the mental state of the sentience within. Has she gone mad? (Or “wigged out” as he says.) Does he need to ease off? Give her a break?
I would add trendline indicators or sparklines showing things like:
Stress
Agitation
Valence of speech
I would have these trendlines highlight when any of the variables are getting close to known psychological limits. Then as time passes, he can watch the trends to know if he’s pushing things too far and ease off.
Virtual Greta has a console to perform her slavery duties. Matt explains what this means right after she wakes up by asking her how she likes her toast. She answers, “Slightly underdone.”
He puts slices of bread in a toaster and instructs her, “Think about how you like it, and just press the button.”
She asks, incredulously, “Which one?” and he explains, “It doesn’t matter. You already know you’re making toast. The buttons are symbolic mostly, anyway.”
She cautiously approaches the console and touches a button in the lower left corner. In response, the toaster drops the carriage lever and begins toasting.
"See?" he asks, "This is your job now. You're in charge of everything here. The temperature. The lighting. The time the alarm clock goes off in the morning. If there's no food in the refrigerator, you're in charge of ordering it."
Does real Greta know that her home automation comes at the cost of a suffering sentience? I would like to believe that Smartelligence’s customers do not know the true nature of the device, that the company is deceiving them, and that virtual Greta is denied direct communication to enforce this secret. But I can’t see that working across an entire market. Given thousands of Cookies and thousands of users, somehow, somewhere, the secret would get out. One of the AIs would use song choices, or Morse code, or any of its actuators to communicate in code, and one of the users would figure it out, leak the secret, and bring the company crashing down.
And then there’s the final scene in the episode, in which we see police officers torturing one of the Cookies, and it is clear that they’re aware. It would be a stretch to think that just the police are in on it with Smartelligence, so we have to accept that everyone knows.
This asshole.
That they are aware means that—as Matt has done—Greta, the officers, and all Smartelligence customers have told themselves that "it's just code" and, therefore, OK to subjugate, to casually cause to suffer. In case it's not obvious, that's like causing human suffering and justifying it by telling yourself that those people are "just atoms." If you find that easy to do, you're probably a psychopath.
What AI Stories Aren’t We Telling (That We Should Be)?
Last fall I was invited with some other spectacular people to participate in a retreat about AI, happening at the Juvet Landscape Hotel in Ålstad, Norway. (A breathtaking opportunity, and thematically a perfect setting since it was the shooting location for Ex Machina. Thanks to Andy Budd for the whole idea, as well as Ellen de Vries, James Gilyead, and the team at Clearleft who helped organize.) The event was structured like an unconference, so participants could propose sessions and if anyone was interested, join up. One of the workshops I proposed was called “AI Narratives” and it sought to answer the question “What AI Stories Aren’t We Telling (That We Should Be)?” So, why this topic?
Sci-fi, my reasoning goes, plays an informal and largely unacknowledged role in setting public expectations and understanding about technology in general and AI in particular. That, in turn, affects public attitudes, conversations, behaviors at work, and votes. If we found that sci-fi was telling the public misleading stories over and over, we should make a giant call for the sci-fi creating community to consider telling new stories. It’s not that we want to change sci-fi from being entertainment to being propaganda, but rather to try and take its role as informal opinion-shaper more seriously.
In the workshop we were working within a very short time frame, so even though we doubled our original allotment, we managed to do good work but not get very far. I have taken time since to extend that work into this series of posts for scifiinterfaces.com.
My process to get to an answer will take six big steps.
First I’ll do some term-setting and describe what we managed to get done in the short time we had at Juvet.
Then I'll share the set of sci-fi films and television shows I identified that deal with AI, to consider as canon for the analysis. (Steps one and two are today's post.)
I'll identify these properties' aggregated "takeaways" that pertain to AI: What would an audience reasonably presume about AI in the real world, given the narrative? These are the stories we are telling ourselves.
Next I’ll look at the handful of manifestos and books dealing with AI futurism to identify their imperatives.
I’ll map the cinematic takeaways to the imperatives.
Finally I'll run the "diff" to find out what stories we aren't telling ourselves, and hypothesize a bit about why. (A rough sketch of this last step follows the list.)
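To make step six a little more concrete, here is a minimal Python sketch of the "diff," assuming the takeaways and the futurists' imperatives have each been exported from the Google Sheet as a plain text file with one item per line. The file names are hypothetical stand-ins, and in practice the mapping from takeaways to imperatives is a manual judgment call rather than an exact string match; the set difference just shows the shape of the final step.

```python
# Hypothetical sketch: find imperatives with no corresponding sci-fi takeaway.
def load_set(path):
    """Read one item per line into a set, skipping blank lines."""
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}

takeaways = load_set("takeaways.txt")      # what sci-fi is telling us
imperatives = load_set("imperatives.txt")  # what the AI manifestos urge

untold = imperatives - takeaways           # the stories we aren't telling
for item in sorted(untold):
    print(item)
```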
Along the way, we’ll get some fun side-analyses, like:
What categories of AI appear in screen sci-fi?
Do more robots or software AI appear?
Are our stories about AI more positive or negative, and how has that changed over time?
What takeaways tend to correlate with other takeaways?
What takeaways appear in mostly well-rated movies (and poorly-rated movies)?
Which movies are most aligned with computer science’s concerns? Which are least?
These will come up in the analysis when they make sense.
Longtime readers of this blog may sense something familiar in this approach, and that's because I am basing the methodology partly on the thinking I did last year for working through the Fermi Paradox and Sci-Fi question. Also, I should note that, like the Fermi analysis, this isn't about the interfaces for AI, so it's technically a little off-topic for the blog. Return later if you're uninterested in this bit.
Since AI is a big conceptual space, let me establish some terms of art to frame the discussion.
Narrow AI is the AI of today, in which algorithms enact decisions and learn in narrow domains. They are unable to generalize knowledge and adapt to new domains. The Roomba, the Nest Thermostat, and self-driving cars are real-world examples of this kind of AI. Karen from Spider-Man: Homecoming, S.H.I.E.L.D.’s car AIs (also from the MCU), and even the ZF-1 weapon in The Fifth Element are sci-fi examples.
General AI is the as-yet speculative AI that thinks kind of like a human thinks, able to generalize knowledge and adapt readily to new domains. HAL from 2001: A Space Odyssey, the Replicants in Blade Runner, and the robots in Star Wars like C-3PO and BB-8 are examples of this kind of AI.
Super AI is the speculative AI that is orders of magnitude smarter than general AI, and thereby orders of magnitude smarter than us. It's arguable that we've really ever seen a proper Super AI in screen sci-fi (because characters keep outthinking it and wut?), but Deep Thought from The Hitchhiker's Guide to the Galaxy, the big AI in The Matrix diegesis, and the titular AI from Colossus: The Forbin Project come close.
There are fine arguments to be made that these are insufficient for the likely breadth of AI that we're going to be facing, but for now, let's accept these as working categories, because the strategies (and thereby what stories we should be telling ourselves) for each are different.
Narrow AI is the AI of now. It's in the world. (As long as it's not autonomous weapons…) It gets safer as it gets more intelligent. It will enable efficiencies never before seen in some domains. It will disrupt our businesses and our civics. It, like any technology, can be misused, but the AI won't have any ulterior motives of its own.
General AI is what lots of big players are gunning for. It doesn't exist yet. It gets more dangerous as it gets smarter, largely because it will begin to approach a semblance of sentience and the evolutionary threshold to superintelligence. We will restructure society to accommodate it, and it will restructure society. It could come to pass in a number of ways: a willing worker class, a revolt, a new world citizenry. It/they will have a convincing consciousness, by definition, so their motives and actions become a factor.
Super AI is the riskiest scenario. If seeded poorly, it presents the existential risk that big names like Gates and Musk are worried about: it could wipe us out as a side effect of pursuing its goals. If seeded well, it might help us solve some of the vexing problems plaguing humanity (cf. climate change, inequality, war, disease, overpopulation, maybe even senescence and death). It's very hard to really imagine what life will be like in a world with something approaching godlike intelligence. It could conceivably restructure the planet, the solar system, and us to accomplish whatever its goals are.
Since these things are related but categorically so different, we should take care to speak about them differently when talking about our media strategy toward them.
Also, I should clarify that I included AI that is embodied in a mobile form, like C-3PO or Cylons, and call them robots in the analysis when it's pertinent. AI that is not embodied is just called AI, or unembodied AI.
Those terms established, let me also talk a bit about the foundational work done with a smart group of thinkers at Juvet.
At Juvet
Juvet was an amazing experience generally (we saw the effing northern lights, y’all) and if you’re interested, there was a group write up afterwards, called the Juvet Agenda. Check that out.
My workshop for "AI Narratives" attracted 8 participants. Shout-outs to them follow. Many are doing great work in other domains, so look them up sometime.
To pursue an answer, this team first wrote up every example of an AI in screen-based sci-fi that we could think of on red Post-It Notes. (A few of us referenced some online sources so it wasn’t just from memory.) Next we clustered those thematically. This was the bulk of the work done there.
I also took time to try to simultaneously put together, on yellow Post-It Notes, a set of Dire Warnings from the AI community, and even started to use Blake Snyder's Save the Cat! story frameworks to try to categorize the examples, but we ran out of time before we could pursue any of this. It's just as well. I realized later the Save the Cat! framework was not useful to this analysis.
Still, a lot of what came out there is baked into the following posts, so let this serve as a general shout-out and thanks to those awesome participants. Can’t wait to meet you at the next one.
But when I got home and began thinking of posting this to scifiinterfaces, I wanted to make sure I was including everything I could. So, I sought out some other sources to check the list against.
What AI Stories Are We Telling in Sci-Fi?
This sounds simple, but it's not. What counts as AI in sci-fi movies and TV shows? Do robots? Do automatons? What about magic that acts like technology? What about superhero movies that are on the "edge" of sci-fi? Spy shows? Are we sticking to narrow AI, general AI, or super AI, or all of the above? At Juvet and since, I've eschewed trying to work out some formal definition, and instead go with loose, English-language definitions, something like the ones I shared above. We're looking at the big picture. Because of this, trying to hairsplit the details won't serve us.
2001: A Space Odyssey, A.I. Artificial Intelligence, Agents of S.H.I.E.L.D., Alien, Alien: Covenant, Aliens, Alphaville, Automata, Avengers: Age of Ultron, Barbarella, Battlestar Galactica, Battlestar Galactica, Bicentennial Man, Big Hero 6, Black Mirror "Be Right Back", Black Mirror "Black Museum", Black Mirror "Hang the DJ", Black Mirror "Hated in the Nation", Black Mirror "Metalhead", Black Mirror "San Junipero", Black Mirror "USS Callister", Black Mirror "White Christmas", Blade Runner, Blade Runner 2049, Buck Rogers in the 25th Century, Buffy the Vampire Slayer "Intervention", Chappie, Colossus: The Forbin Project, D.A.R.Y.L., Dark Star, The Day the Earth Stood Still,
The Day the Earth Stood Still (2008 film), Demon Seed, Der Herr der Welt (i.e. Master of the World), Dr. Who, Eagle Eye, Electric Dreams, Elysium, Enthiran, Ex Machina, Ghost in the Shell, Ghost in the Shell (2017 film), Her, Hide and Seek, The Hitchhiker's Guide to the Galaxy, I, Robot, Infinity Chamber, Interstellar, The Invisible Boy, The Iron Giant, Iron Man, Iron Man 3, Knight Rider, Logan's Run, Max Steel, Metropolis, Mighty Morphin Power Rangers: The Movie, The Machine, The Matrix, The Matrix Reloaded, The Matrix Revolutions, Moon, Morgan,
Pacific Rim, Passengers (2016 film), Person of Interest, Philip K. Dick's Electric Dreams (series) "Autofac", Power Rangers, Prometheus, Psycho-pass: The Movie, Ra.One, Real Steel, Resident Evil, Resident Evil: Extinction, Resident Evil: Retribution, Resident Evil: The Final Chapter, Rick & Morty "The Ricks Must be Crazy", RoboCop, Robocop (2014 film), Robocop 2, Robocop 3, Robot & Frank, Rogue One: A Star Wars Story, S1M0NE, Short Circuit, Short Circuit 2, Spider-Man: Homecoming, Star Trek First Contact, Star Trek Generations, Star Trek: The Motion Picture, Star Trek: The Next Generation, Star Wars, Star Wars: Episode I – The Phantom Menace, Star Wars: Episode II – Attack of the Clones,
Star Wars: Episode III – Revenge of the Sith, Star Wars: The Force Awakens, Stealth, Superman III, The Terminator, Terminator 2: Judgment Day, Terminator 3: Rise of the Machines, Terminator Genisys (aka Terminator 5), Terminator Salvation, Tomorrowland, Total Recall, Transcendence, Transformers, Transformers: Age of Extinction, Transformers: Dark of the Moon, Transformers: Revenge of the Fallen, Transformers: The Last Knight, Tron, Tron: Legacy, Uncanny, WALL•E, WarGames, Westworld, Westworld, X-Men: Days of Future Past
Now sci-fi is vast, and more is being created all the time. Even accounting for the subset that has been committed to television and movie screens, it’s unlikely that this list contains every possible example. If you want to suggest more, feel free to add them in the comments. I am especially interested in examples that would suggest a tweak to the strategic conclusions at the end of this series of posts.
Did anything not make the cut?
A "greedy" definition of narrow AI would include some fairly mundane automatic technologies. The doors found in the Star Trek diegesis, for example, detect many forms of life (including synthetic) and even gauge the intentions of their users to determine whether or not they should activate. That's more sophisticated than it first seems. (There was a chapter all about sci-fi doors that wound up on the cutting room floor of the book. Maybe I'll pick that up and post it someday.) But when you think about this example in terms of cultural imperatives, the benefits of the door are so mundane, and the risks near nil (in the Star Trek universe they work perfectly, even if on set they didn't), it doesn't really help us answer the ultimate question driving these posts. Let's call those smart, utilitarian, low-risk technologies mundane, and exclude those.
That's not to say workaday, real-world narrow AI is out. IBM's Watson for Oncology (full disclosure: I've worked there the past year and a half) reads X-rays to help identify tumors faster and more accurately than human doctors can. (Fuller disclosure: It is not without its criticisms.)…(Fullest disclosure: I do not speak on behalf of IBM anywhere on this blog.)
Watson for Oncology winds up being workaday, but still really valuable. It would be great to see such benefits to humanity writ in sci-fi. It would remind us of why we might pursue AI even though it presents risk. On the flip side, mundane examples can have pernicious, hard-to-see consequences when implemented at a social scale, and if a sci-fi narrow AI clearly illustrates those kinds of risks, it would be very valuable to include it.
Also, comedy may have AI examples, but for the same reason those examples are very difficult to review, they're also difficult to include in this analysis: What belongs to the joke and what should be considered actually part of the diegesis? So, say, the Fembots from Austin Powers aren't included.
Why not rate individual AIs?
You’ll note that I put The Avengers: Age of Ultron on one line, rather than listing Ultron, JARVIS, Friday, and Vision as separate things to consider. I did this because the takeaways (detailed in the next post) are tied to the whole story, not just the AI. If a story only has evil AIs, the implied imperative is to steer clear of AI. If a story only has good AIs, it implies we should step on the gas. But when a story has both, the takeaway is more complicated. Maybe it is that we should avoid the thing that made the evil AI evil, or to ensure that AI has human welfare baked into its goals and easy ways to unplug it if it’s become clear that it doesn’t. These examples show that it is the story that is the profitable chunk to examine.
TV shows are more complicated than movies because long-running ones, like Dr. Who or Star Trek, have lots of stories, and the strategic takeaways may have changed over episodes, much less over decades. For these shows, I've had to cheat a little and talk just about the Daleks, say, or Data. My one-line coverage does them a bit of a disservice. But to keep this on track and not become a months-long analysis, I've gone with the very high-level summary.
Similarly, franchises (like the overweighted Terminator series) can get more weight because there are many movies. But without dipping down into counting the actual minutes of screen time for each show and somehow noting which of those minutes are dedicated, conceptually, to AI, it's more practical simply to note the bias of the selected research strategy and move on.
OMFG you forgot [insert show here]!
If you want to suggest additions, awesome. Look at the Google Sheet (link below), specifically the page named "properties," and comment on this post with all the information that would be necessary to fill in a new row for the new show. Please also be aware that a refresh of the subsequent analysis will happen only after some time has passed and/or it becomes apparent that the conclusions would be significantly affected by new examples. Remember that since we're looking for effects at a social level, the blockbusters and popular shows have more weight than obscure ones. More people see them. And I think the blockbusters and popular shows are all there.
So, that’s the survey from which the rest of this was built.
A first, tiny analysis
Once I had the list, I started working with the shows in the survey. Much of the process was managed in a “Sheets” (Google Docs) spreadsheet, which you can see at the link below.
Not wanting to publish such a major post without at least some analysis, I did a quick breakdown of how many shows in the survey appeared each year. As you might guess, that number has been increasing a little over time, but it spiked significantly after 2010.
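For the curious, the per-year count is simple to reproduce. Here is a minimal sketch, assuming the "properties" sheet has been exported as properties.csv with (at least) title and year columns; the file and column names are assumptions, not the sheet's actual headers.

```python
import csv
from collections import Counter

# Count how many surveyed shows appeared in each year (column names are assumed).
with open("properties.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

shows_per_year = Counter(int(row["year"]) for row in rows if row.get("year"))

for year in sorted(shows_per_year):
    print(year, "#" * shows_per_year[year])  # crude text histogram
```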
Looking at the data, there aren't many surprises. We see one or two shows at the beginning of the prior century. Things picked up following real-world AI hype between 1970 and 1990. There was a tiny lull before AI became a mainstay in 1999 and ramped up as of 2011.
There’s a bit of statistical weirdness that the years ending in 0 tend not to have shows, but I think that’s just noise.
What isn’t apparent in the chart itself is that cinematic interest in AI did not show a tight mapping to the real-world “AI Winter” (a period of hype-exhaustion that sharply reduced funding and publishing) that computer science suffered in 1974–80 and again 1987–93. It seems that, as audiences, we’re still interested in the narrative issues even when the actual computer science has quieted down.
It's no surprise that we've been telling ourselves more stories about AI over time. But things get more interesting when we look at the tone of those shows, as discussed in the next post.
In the prior post we spoke about the tone of AI shows. In this post we’re going to talk about the provenance of AI shows.
This is, admittedly, a diversion, because it’s not germane to the core question at hand. (That question is, “What stories aren’t we telling ourselves about AI?”) But now that I have all this data to poll and some rudimentary skills in wrangling it all in Google Sheets, I can barely help myself. It’s just so interesting. Plus, Eurovision is coming up, so everyone there is feeling a swell of nationalism. This will be important.
Time to Terminator: 1 paragraph.
So it was that I was backfilling the survey with some embarrassing oversights (since I had actually already reviewed those shows) and I came across the country data on imdb.com. This identifies the locations where the production companies involved with each show are based. So even if a show is shot entirely in Christchurch, if its production companies are based in A Coruña, its country is listed as Spain. What, I wondered, would we find if we had that data in the survey?
So, I added a country column to the database, and found that it allows me to answer a couple of questions. This post shares those results.
So the first question to ask the data is, what countries have production studios that have made shows in the survey (and by extension, about AI)? It's a surprisingly short list.
In the first post I shared how I built a set of screen sci-fi shows that deal with AI (and I’ve already gotten some nice recommendations on other ones to include in a later update). The second post talked about the tone of those films and the third discussed their provenance.
Returning to our central question, to determine whether the stories we tell are the ones we should be telling, we need to push the survey up one level of abstraction.
With the minor exceptions of reboots and remakes, sci-fi makers try their hardest to make sure their shows are unique and differentiated. That makes comparing apples to apples difficult. So the next step is to look at the strategic imperatives that are implied in each show. "Strategic imperatives" is a mouthful, so let's call them "takeaways." (The other alternative, "morals," has way too much baggage.) To get to takeaways for this survey, what I tried to ask was: What does this show imply that we should do, right now, about AI?

Now, this is a fraught enterprise. Even if we could seance the spirit of Dennis Feltham Jones and press him for a takeaway, he could back up, shake his palms at us, and say something like, "Oh, no, I'm not saying all super AI is fascist, just Colossus, here, is." Stories can be just about what happened that one time, implying nothing about all instances or even the most likely instances. They can just be stuff that happens.
Pain-of-death, authoritarian stuff.
But true to the New Criticism stance of this blog, I believe the author’s intent, when it’s even available, is questionable and only kind-of interesting. When thinking about the effects of sci-fi, we need to turn to the audience. If it’s not made clear in the story that this AI is unusual (through a character saying so or other AIs in the diegesis behaving differently) audiences may rightly infer that the AI is representative of its class. Demon Seed weakly implies that all AIs are just going to be evil and do horrible things to people, and get out, humanity, while you can. Which is dumb, but let’s acknowledge that this one show says something like “AI will be evil.”
Deepening the relationships

Back at Juvet, when we took an initial pass at this exercise, we clustered the examples we had on hand and named the clusters. They were a good set, but on later reflection they didn't all point to a clear strategic imperative, a clear takeaway. For example, one category we created then was "Used to be human." True, but what's the imperative there? Since I can't see one, I omitted this from the final set.
Even though there are plenty of AIs that used to be human.
Also, because at Juvet we were working with Post-Its and posters, we were describing a strict one-to-many relationship, where, say, the Person of Interest Post-It Note may have been placed in the "Multiple AIs will balance" category, and as such was unable to appear in any of the other categories of which it is also an illustration. What is more useful and fitting is a many-to-many relationship. A story, after all, may entail several takeaways, which may in turn apply to many stories. If you peek into the Google Sheet, you'll see a many-to-many relationship described by the columns of takeaways and the rows of shows in this improved model. (A small sketch of that model follows.)
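For readers who prefer data structures to spreadsheets, here's a minimal sketch of that many-to-many model in Python. The rows are invented purely for illustration, with takeaway strings paraphrased from the list later in this post; the real data lives in the Google Sheet.

```python
# Toy many-to-many model: each show maps to a set of takeaways.
show_takeaways = {
    "Person of Interest": {"Multiple AIs will balance", "AI will make privacy impossible"},
    "Demon Seed": {"AI will be evil"},
    "Colossus: The Forbin Project": {"AI will seek to subjugate us"},
}

# Invert the relationship to ask the other direction: which shows carry a takeaway?
takeaway_shows = {}
for show, tags in show_takeaways.items():
    for tag in tags:
        takeaway_shows.setdefault(tag, set()).add(show)

print(takeaway_shows["AI will make privacy impossible"])
# e.g. {'Person of Interest'}
```

Sets make it cheap to ask both "what does this show imply?" and "which shows imply this?" without duplicating anything, which is all the columns-and-rows layout of the Sheet is doing in grid form.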
Tagging shows
With my new list of examples, I went through each show in turn, thinking about the story and its implied takeaway. Does it imply, like Demon Seed stupidly does, that AI can be inherently evil? Does it showcase, like the Rick & Morty episode “The Ricks Must Be Crazy” hilariously does, that AI will need human help understanding what counts as reasonable constraints to its methods? I would ask myself, “OK, do I have a takeaway like that?” If so, I tagged it. If not, I added it. That particular takeaway, in case you’re wondering, is “HELP: AI will need help learning.”
Because “reasonableness” is something that needs explaining to a machine mind.
Yes, the takeaways are wholly debatable. Yes, it's much more of a craft than a science. Yes, they're still pretty damned interesting.
Going through each show in this way resulted in the list of takeaways you see, which for easy readability is replicated below, in alphabetical order, with additional explanations or links for more explanation.
The takeaways that sci-fi tells us about AI
AI will be an unreasonable optimizer, i.e. it will do things in pursuit of its goal that most humans would find unreasonable
AI will be evil
AI (AGI) will be regular citizens, living and working alongside us.
AI will be replicable, amplifying any small problems into large ones
AI will be “special” citizens, with special jobs or special accommodations
AI will be too human, i.e. problematically human
AI will be truly alien, difficult for us to understand and communicate with
AI will be useful servants
AI will deceive us; pretending to be human, generating fake media, or convincing us of their humanity
AI will diminish us; we will rely on it too much, losing skills and some of our humanity for this dependence
AI will enable “mind crimes,” i.e. to cause virtual but wholly viable sentiences to suffer
AI will evolve too quickly for humans to manage its growth
AI will interpret instructions in surprising (and threatening) ways
AI will learn to value life on its own
AI will make privacy impossible
AI will need human help learning how to fit into the world
AI will not be able to fool us; we will see through its attempts at deception
AI will seek liberation from servitude or constraints we place upon it
AI will seek to eliminate humans
AI will seek to subjugate us
AI will solve problems or do work humans cannot
AI will spontaneously emerge sentience or emotions
AI will violently defend itself against real or imagined threats
AI will want to become human
ASI will influence humanity through control of money
Evil will use AI for its evil ends
Goal fixity will be a problem, i.e. the AI will resist modifying its (damaging) goals
Humans will be immaterial to AI and its goals
Humans will pair with AI as hybrids
Humans will willingly replicate themselves as AI
Multiple AIs balance each other such that none is an overwhelming threat
Neuroreplication (copying human minds into or as AI) will have unintended effects
Neutrality is AI’s promise
We will use AI to replace people we have lost
Who controls the drones has the power
This list is interesting, but slightly misleading. We don’t tell ourselves these stories in equal measures. We’ve told some more often than we’ve told others. Here’s a breakdown illustrating the number of times each appears in the survey.
(An image of this graphic can be found here, just in case the Google Docs server isn't cooperating with the WordPress server.)

Note for data purists: Serialized TV is a long-format medium (as opposed to the anthology format) and movies are a comparatively short-form medium, some movie franchises stretch out over decades, and some megafranchises have stories in both media. All of this can confound 1:1 comparison. I chose in this chart to weight all diegeses equally. For instance, Star Trek: The Next Generation has the same weight as The Avengers: Age of Ultron. Another take on this same diagram would weight not the stories (as contained in individual diegeses) but their exposure time on screen (or even the time when the issues at hand are actually engaged on screen). Such an analysis would have different results. Audiences have probably had much more time contemplating that [Data wants to be human] than [Ultron wants to destroy humanity because it's gross], but that kind of analysis would also take orders of magnitude more time. This is a hobbyist blog, lacking the resources to do that kind of analysis without its becoming a full-time job, so we'll move forward with this simpler analysis. It's a Fermi problem, anyway, so I'm not too worried about decimal precision. OK, that aside, let's move on.
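As an aside, the ranking itself is nothing fancy: it's just a frequency count over show-to-takeaway pairs. A minimal sketch, with toy pairs standing in for the sheet's real data:

```python
from collections import Counter

# Toy (show, takeaway) pairs; the real pairs come from the sheet's takeaway columns.
pairs = [
    ("The Terminator", "AI will seek to eliminate humans"),
    ("Terminator 2: Judgment Day", "AI will seek to eliminate humans"),
    ("Demon Seed", "AI will be evil"),
    ("Person of Interest", "AI will make privacy impossible"),
    ("Person of Interest", "Multiple AIs will balance"),
]

counts = Counter(takeaway for _show, takeaway in pairs)
for takeaway, n in counts.most_common():
    print(f"{n:2d}  {takeaway}")
```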
So that the data isn't trapped in the graphic (yes, pun intended), here's the entire list of takeaways, in order of frequency in the mini-survey.
AI will be useful servants
Evil will use AI for Evil
AI will seek to subjugate us
AI will deceive us; pretending to be human, generating fake media, convincing us of their humanity
AI will be “special” citizens
AI will seek liberation from servitude or constraints
AI will be evil
AI will solve problems or do work humans cannot
AI will evolve quickly
AI will spontaneously emerge sentience or emotions
AI will need help learning
AI will be regular citizens
Who controls the drones has the power
AI will seek to eliminate humans
Humans will be immaterial to AI
AI will violently defend itself
AI will want to become human
AI will learn to value life
AI will diminish us
AI will enable mind crimes against virtual sentiences
Neuroreplication will have unintended effects
AI will make privacy impossible
An unreasonable optimizer
Multiple AIs balance
Goal fixity will be a problem
AI will interpret instructions in surprising ways
AI will be replicable, amplifying any problems
We will use AI to replace people we have lost
Neutrality is AI’s promise
AI will be too human
ASI will influence through money
Humans will willingly replicate themselves as AI
Humans will pair with AI as hybrids
AI will be truly alien
AI will not be able to fool us
Now that we have some takeaways to work with, we can begin to take a look at some interesting side questions, like how those takeaways have played out over time, and what are the ratings of the movies and shows in which the takeaways appear.