Untold AI: Takeaways

In the first post I shared how I built a set of screen sci-fi shows that deal with AI (and I’ve already gotten some nice recommendations on other ones to include in a later update). The second post talked about the tone of those films and the third discussed their provenance.

Returning to our central question, to determine whether the stories we tell are the ones we should be telling, we need to push the survey up one level of abstraction.

With the minor exceptions of robots and remakes, sci-fi makers try their hardest to make sure their shows are unique and differentiated. That makes comparing apples to apples difficult. So the next step is to look at the strategic imperatives that are implied in each show. “Strategic imperatives” is a mouthful, so let’s call them “takeaways.” (The other alternative, “morals,” has way too much baggage.) To get to takeaways for this survey, what I tried to ask was: What does this show imply that we should do, right now, about AI?
Now, this is a fraught enterprise. Even if we could seance the spirit of Dennis Feltham Jones and press him for a takeaway, he could back up, shake his palms at us, and say something like, “Oh, no, I’m not saying all super AI is fascist, just Colossus, here, is.” Stories can be just about what happened that one time, implying nothing about all instances or even the most likely instances. It can just be stuff that happens.

CFP.jpg
Pain-of-death, authoritarian stuff.

But true to the New Criticism stance of this blog, I believe the author’s intent, when it’s even available, is questionable and only kind-of interesting. When thinking about the effects of sci-fi, we need to turn to the audience. If it’s not made clear in the story that this AI is unusual (through a character saying so or other AIs in the diegesis behaving differently) audiences may rightly infer that the AI is representative of its class. Demon Seed weakly implies that all AIs are just going to be evil and do horrible things to people, and get out, humanity, while you can. Which is dumb, but let’s acknowledge that this one show says something like “AI will be evil.”

 


Deepening the relationships
Back at Juvet, when we took an initial pass at this exercise, we clustered the examples we had on hand and named the clusters. They were a good set, but on later reflection they didn’t all point to a clear strategic imperative, a clear takeaway. For example, one category we created then was “Used to be human.” True, but what’s the imperative there? Since I can’t see one, I omitted it from the final set.

Transcendence-Movie-Wallpaper-HD-Resrs.jpg
Even though there are plenty of AIs that used to be human.

Also, because at Juvet we were working with Post-Its and posters, we were describing a strict one-to-many relationship: the Person of Interest Post-It Note, say, may have been placed in the “Multiple AIs will balance” category, and as such could not appear in any of the other categories it also illustrates.
What is more useful and fitting is a many-to-many relationship. A story, after all, may entail several takeaways, each of which may in turn apply to many stories. If you peek into the Google Sheet, you’ll see this improved model: a many-to-many relationship described by the columns of takeaways and the rows of shows.
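For the programmatically minded, here’s a minimal Python sketch of that improved structure. The takeaway tags attached to each show below are illustrative examples, not the survey’s actual data:

```python
from collections import defaultdict

# Each show maps to the set of takeaways it illustrates (many-to-many),
# unlike the Post-It exercise, where each show sat in exactly one cluster.
show_takeaways = {
    "Person of Interest": {"Multiple AIs will balance", "AI will make privacy impossible"},
    "Demon Seed": {"AI will be evil"},
    "Colossus: The Forbin Project": {"AI will seek to subjugate us"},
}

# Invert the mapping to see which shows illustrate each takeaway.
takeaway_shows = defaultdict(set)
for show, takeaways in show_takeaways.items():
    for takeaway in takeaways:
        takeaway_shows[takeaway].add(show)
```

A show can now appear under as many takeaways as it illustrates, which the one-to-many Post-It clustering couldn’t express.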

Tagging shows

With my new list of examples, I went through each show in turn, thinking about the story and its implied takeaway. Does it imply, like Demon Seed stupidly does, that AI can be inherently evil? Does it showcase, like the Rick & Morty episode “The Ricks Must Be Crazy” hilariously does, that AI will need human help understanding what counts as reasonable constraints on its methods? I would ask myself, “OK, do I have a takeaway like that?” If so, I tagged it. If not, I added it. That particular takeaway, in case you’re wondering, is “HELP: AI will need help learning.”

Screen shot from “The Ricks Must Be Crazy”
Because “reasonableness” is something that needs explaining to a machine mind.

Yes, the takeaways are wholly debatable. Yes, it’s much more of a craft than a science. Yes, they’re still pretty damned interesting.

Going through each show in this way resulted in the list of takeaways you see, which for easy readability is replicated below, in alphabetical order, with additional explanations or links for more explanation.

The takeaways that sci-fi tells us about AI

  • AI will be an unreasonable optimizer, i.e. it will do things in pursuit of its goal that most humans would find unreasonable
  • AI will be evil
  • AI (AGI) will be regular citizens, living and working alongside us.
  • AI will be replicable, amplifying any small problems into large ones
  • AI will be “special” citizens, with special jobs or special accommodations
  • AI will be too human, i.e. problematically human
  • AI will be truly alien, difficult for us to understand and communicate with
  • AI will be useful servants
  • AI will deceive us: pretending to be human, generating fake media, or convincing us of their humanity
  • AI will diminish us: we will rely on it too much, losing skills and some of our humanity to this dependence
  • AI will enable “mind crimes,” i.e. cause virtual but wholly viable sentiences to suffer
  • AI will evolve too quickly for humans to manage its growth
  • AI will interpret instructions in surprising (and threatening) ways
  • AI will learn to value life on its own
  • AI will make privacy impossible
  • AI will need human help learning how to fit into the world
  • AI will not be able to fool us; we will see through its attempts at deception
  • AI will seek liberation from servitude or constraints we place upon it
  • AI will seek to eliminate humans
  • AI will seek to subjugate us
  • AI will solve problems or do work humans cannot
  • AI will spontaneously emerge sentience or emotions
  • AI will violently defend itself against real or imagined threats
  • AI will want to become human
  • ASI will influence humanity through control of money
  • Evil will use AI for its evil ends
  • Goal fixity will be a problem, i.e. the AI will resist modifying its (damaging) goals
  • Humans will be immaterial to AI and its goals
  • Humans will pair with AI as hybrids
  • Humans will willingly replicate themselves as AI
  • Multiple AIs balance each other such that none is an overwhelming threat
  • Neuroreplication (copying human minds into or as AI) will have unintended effects
  • Neutrality is AI’s promise
  • We will use AI to replace people we have lost
  • Who controls the drones has the power

This list is interesting, but slightly misleading. We don’t tell ourselves these stories in equal measures. We’ve told some more often than we’ve told others. Here’s a breakdown illustrating the number of times each appears in the survey.

(An image of this graphic can be found here, just in case the Google Docs server isn’t cooperating with the WordPress server.)
Note for data purists: Serialized TV is a long-format medium (as opposed to the anthology format) and movies are a comparatively short-form medium; some movie franchises stretch out over decades, and some megafranchises have stories in both media. All of this can confound 1:1 comparison. I chose in this chart to weigh all diegeses equally. For instance, Star Trek: The Next Generation has the same weight as Avengers: Age of Ultron. Another take on this same diagram would weigh stories not as contained in individual diegeses but by exposure time on screen (or even the time when the issues at hand are actually engaged on screen). Such an analysis would have different results. Audiences have probably had much more time contemplating that [Data wants to be human] than [Ultron wants to destroy humanity because it’s gross], but that kind of analysis would also take orders of magnitude more time. This is a hobbyist blog, lacking the resources to do that kind of analysis without its becoming a full-time job, so we’ll move forward with this simpler analysis. It’s a Fermi problem, anyway, so I’m not too worried about decimal precision.
OK, that aside, let’s move on.

MeasureofMan.jpg

So the data isn’t trapped in the graphic (yes pun intended), here’s the entire list of takeaways, in order of frequency in the mini-survey.

  1. AI will be useful servants
  2. Evil will use AI for Evil
  3. AI will seek to subjugate us
  4. AI will deceive us: pretending to be human, generating fake media, convincing us of their humanity
  5. AI will be “special” citizens
  6. AI will seek liberation from servitude or constraints
  7. AI will be evil
  8. AI will solve problems or do work humans cannot
  9. AI will evolve quickly
  10. AI will spontaneously emerge sentience or emotions
  11. AI will need help learning
  12. AI will be regular citizens
  13. Who controls the drones has the power
  14. AI will seek to eliminate humans
  15. Humans will be immaterial to AI
  16. AI will violently defend itself
  17. AI will want to become human
  18. AI will learn to value life
  19. AI will diminish us
  20. AI will enable mind crimes against virtual sentiences
  21. Neuroreplication will have unintended effects
  22. AI will make privacy impossible
  23. An unreasonable optimizer
  24. Multiple AIs balance
  25. Goal fixity will be a problem
  26. AI will interpret instructions in surprising ways
  27. AI will be replicable, amplifying any problems
  28. We will use AI to replace people we have lost
  29. Neutrality is AI’s promise
  30. AI will be too human
  31. ASI will influence through money
  32. Humans will willingly replicate themselves as AI
  33. Humans will pair with AI as hybrids
  34. AI will be truly alien
  35. AI will not be able to fool us

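Incidentally, once the shows are tagged this way, the frequency ranking above falls out of a simple count. A sketch, using a tiny illustrative sample rather than the full survey data:

```python
from collections import Counter

# show → list of takeaway tags (illustrative sample, not the full survey)
show_takeaways = {
    "Star Wars": ["AI will be useful servants"],
    "WALL•E": ["AI will be useful servants", "AI will diminish us"],
    "The Terminator": ["AI will seek to eliminate humans"],
}

# Count how many shows carry each takeaway, most common first.
counts = Counter(
    takeaway
    for takeaways in show_takeaways.values()
    for takeaway in takeaways
)
for takeaway, n in counts.most_common():
    print(f"{n:3d}  {takeaway}")
```

With the full survey’s rows and columns in place of this sample, `most_common()` reproduces the ordering of the list above.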
Now that we have some takeaways to work with, we can begin to take a look at some interesting side questions, like how those takeaways have played out over time, and what are the ratings of the movies and shows in which the takeaways appear.

Untold AI: Geo

In the prior post we spoke about the tone of AI shows. In this post we’re going to talk about the provenance of AI shows.

This is, admittedly, a diversion, because it’s not germane to the core question at hand. (That question is, “What stories aren’t we telling ourselves about AI?”) But now that I have all this data to poll and some rudimentary skills in wrangling it all in Google Sheets, I can barely help myself. It’s just so interesting. Plus, Eurovision is coming up, so everyone there is feeling a swell of nationalism. This will be important.

timetoterminator.png
Time to Terminator: 1 paragraph.

So it was that I was backfilling the survey with some embarrassing oversights (since I actually had already reviewed those shows) and I came across the country data on imdb.com. This identifies the locations where the production companies involved with each show are based. So even if a show is shot entirely in Christchurch, if its production companies are based in A Coruña, its country is listed as Spain. What, I wonder, would we find if we had that data in the survey?

So, I added a country column to the database, and found that it allows me to answer a couple of questions. This post shares those results.

So the first question to ask the data is, what countries have production studios that have made shows in the survey (and by extension, about AI)? It’s a surprisingly short list.

Which countries have made shows about AI?

  1. Australia
  2. Bulgaria
  3. Canada
  4. China
  5. China, Hong Kong Special Administrative Region
  6. France
  7. Germany
  8. Hungary
  9. India
  10. Italy
  11. Japan
  12. Mexico
  13. Netherlands
  14. New Zealand
  15. South Africa
  16. Spain
  17. United Kingdom of Great Britain and Northern Ireland
  18. United States of America

If it didn’t jump out at you, this list is sorted alphabetically. If your country is on here, good job go team! You’re involved in the conversation. Though now, we have to admit that the conversation being had is not equal. Some countries contribute to this conversation more than others, some are more obsessed, and some are better at it than others. Let’s look at each of these in turn.

Which country makes the most shows about AI?

It’s the USA. Muuurrrrka! The Day the Earth Stood Still. WALL•E. Rick & Morty. Of the 120 shows currently in the survey, the USA is by far the outstanding maker, with 103 produced at least in part in the USA.

GEO_TOTAL_AI

Now, this may not feel surprising at first. But it is. If the USA made the most total films, then also making the most AI shows would just be a subset of that fact. But the USA is not the world’s most prolific filmmaker. The USA is the world’s third most prolific filmmaker, behind India and Nigeria, followed by China and Japan. Note that India produces more than double its runner-up.

So what’s surprising is that the USA wins for sheer numbers of AI shows even though India produces nearly triple the number of films that the USA does. It seems India (with 4) and Nigeria (with none) are just not as interested in AI as a topic as the USA is. The same goes for the other top producers that just didn’t show up as being interested in AI (per my definition): South Korea, Argentina, Mexico, Turkey, and Brazil.

So that’s interesting. I wonder if we could rate how interested each country seems to be in telling stories about AI? To do that, we need to find the total number of shows each country makes, and then measure what proportion of their films are AI. And for that, we need some bigger data than just IMDB. Where does the Wikipedia article data come from? Aha!

Awesome data…with some problems.

Turns out the UNESCO Institute for Statistics has an online database with so much amazing information, including, you guessed it, worldwide information about movies. It can get us the information we need to build a big picture, but it is partially incomplete, as it only goes back to 1995 and stops at 2015. Contrast that with the AI survey, which goes all the way back to 1927. If we discarded the AI shows before 1995, we’d be losing 2/3 of our survey!

2000px-UNESCO_logo.svg

Additionally, UNESCO data is only for film, but the survey includes some television shows. So while it’s the best I know of, I have to acknowledge there’s a mismatch of available data there.

Then there’s bias. My little survey, IMDB.com, and Rottentomatoes.com will most likely have an English language bias. If anyone knows of more complete sources, as usual, pipe up.

So when reading these results, keep in mind there is incompleteness, bias, and some data mismatch. Fortunately, the standouts for each question stand out so much, I suspect that if we had perfect data, it might not change the rankings much.

So, caveats done, with the UIS data we have not just rankings but some actual numbers to work with. All we have to do is take each country’s number of shows in the survey and divide by its total number of films produced to find out…

Which countries are most obsessed with AI?

And our clear winner is…Australia!

GEO_obsessed_AI

Sure, Australia is only representin’ with 5 shows (Mighty Morphin Power Rangers: The Movie, The Matrix Reloaded, The Matrix Revolutions, Resident Evil: Extinction, and Resident Evil: The Final Chapter) but those account for the highest percentage of its total films produced. What’s up with that obsession, Australian mates?

Australian AIs
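For the curious, the “obsession” metric is just a ratio: AI shows in the survey divided by total films produced. A sketch with placeholder totals (the film counts below are made up; the real numbers come from the survey and the UIS database):

```python
# AI shows in the survey per country, and total films produced 1995–2015.
# These totals are placeholders, not the actual UIS figures.
ai_shows = {"Australia": 5, "United States": 103}
total_films = {"Australia": 400, "United States": 10000}

# "Obsession" = the share of a country's film output that is AI sci-fi.
obsession = {
    country: ai_shows[country] / total_films[country]
    for country in ai_shows
}

# Rank countries by that share, highest first.
ranked = sorted(obsession, key=obsession.get, reverse=True)
```

Even with a tiny absolute count, a small film industry can top this ranking, which is exactly how Australia wins here.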

Now, anyone familiar with those five shows may understand what led me to the final geo question, because neither productivity nor obsession necessarily equates to quality.

Which countries have made the best and worst shows about AI?

Now, this will be sensitive. But we must face the facts. I ran the average tomatometer ratings for each country. The winner, with the highest average tomatometer ratings for its AI movies, is Hungary, at 87.

Flag_of_Hungary_with_arms.png

Thanks, entirely, to this film.

Blade-Runner-2049-billboard

Here’s how the whole thing played out.

geo_ratings_table

The rest of the data, should you want it, is on the live document.
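Mechanically, those country scores are a plain grouped average of tomatometer ratings, with a show counting toward each of its production countries. A sketch with made-up scores (only Hungary’s 87, via Blade Runner 2049, is from the post):

```python
from collections import defaultdict

# (country, tomatometer) pairs; values are illustrative except Hungary's.
ratings = [
    ("Hungary", 87),        # Blade Runner 2049
    ("United States", 90),  # hypothetical US show
    ("United States", 30),  # another hypothetical US show
]

# Group scores by country, then average each group.
by_country = defaultdict(list)
for country, score in ratings:
    by_country[country].append(score)

averages = {
    country: sum(scores) / len(scores)
    for country, scores in by_country.items()
}
```

Note how a single-film country’s average is just that film’s score, which is why Hungary and Bulgaria land at the extremes.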

Now, reader, if your country wound up in the red, don’t be too upset. We all have embarrassing moments from our past. Anyway, this is just about your country’s AI shows. Your other movies probably more than make up for this. The main thing is to learn lessons and move forward.

If your country is in the green, don’t get too cocky. You’ve done well, padawan, but this was just a measure of pleasing the audience, not a measure of whether you’re telling the stories we ought to be. And more shows are being made all the time, with everyone still looking to catch up. Do not rest on your laurels.

Note that the countries in the top and bottom spots each produced only one film, so they were each placing all their betting chips on one spot. Blade Runner 2049 did well, putting Hungary on top. Automata did, uh, not so well, leaving Bulgaria in last place. If either had produced more movies, their averages would probably drift toward the middle.

With that in mind, if you were looking for some country to place your bets on for reliably quality sci-fi, the combination of lots of experience and lots of high quality points us most strongly to the UK.

Deep_Thought.png
Yes, I thought it over quite thoroughly.

And here’s a geoplot. Note that Google Sheets’ conditional formatting has more powerful color-range features than its geoplots, so the colors between the screenshot above and the graphic below won’t agree exactly. But the geoplot winds up being a little more favorable, coloring things near the middle of the pack more green than red. Sorry if the Mercator projection makes any pain feel more painful.

GEO_ratings

And here’s a close-up of the top country and bottom country, which are, weirdly, very close to each other on the world stage. Hungarian-Bulgarian relations had seemed so warm until this point. Forgive me.

Map of Europe
Romania and Serbia are eyerolling at each other, saying “AWKward.”

So now we have some standings across various criteria. Let’s all be good sports and encourage each other to excellence, especially as we put aside the national borders and turn our Untold AI attentions towards the types of stories we are telling, in the next post.

Untold AI: The survey

What AI Stories Aren’t We Telling (That We Should Be)?

HAL

Last fall I was invited with some other spectacular people to participate in a retreat about AI, happening at the Juvet Landscape Hotel in Ålstad, Norway. (A breathtaking opportunity, and thematically a perfect setting since it was the shooting location for Ex Machina. Thanks to Andy Budd for the whole idea, as well as Ellen de Vries, James Gilyead, and the team at Clearleft who helped organize.) The event was structured like an unconference, so participants could propose sessions and if anyone was interested, join up. One of the workshops I proposed was called “AI Narratives” and it sought to answer the question “What AI Stories Aren’t We Telling (That We Should Be)?” So, why this topic?

Sci-fi, my reasoning goes, plays an informal and largely unacknowledged role in setting public expectations and understanding about technology in general and AI in particular. That, in turn, affects public attitudes, conversations, behaviors at work, and votes. If we found that sci-fi was telling the public misleading stories over and over, we should make a giant call for the sci-fi creating community to consider telling new stories. It’s not that we want to change sci-fi from being entertainment to being propaganda, but rather to try and take its role as informal opinion-shaper more seriously.

Juvet sign

In the workshop we were working within a very short timeframe; even though we doubled our original allotment, we managed to do good work but not get very far. I have taken time since to extend that work into this series of posts for scifiinterfaces.com.

My process to get to an answer will take six big steps.

  1. First I’ll do some term-setting and describe what we managed to get done in the short time we had at Juvet.
  2. Then I’ll share the set of sci-fi films and television shows I identified that deal with AI, to consider as canon for the analysis. (Steps one and two are today’s post.)
  3. I’ll identify these properties’ aggregated “takeaways” that pertain to AI: What would an audience reasonably presume about AI in the real world, given the narrative? These are the stories we are telling ourselves.
  4. Next I’ll look at the handful of manifestos and books dealing with AI futurism to identify their imperatives.
  5. I’ll map the cinematic takeaways to the imperatives.
  6. Finally I’ll run the “diff” to find out what stories we aren’t telling ourselves, and hypothesize a bit about why.

Along the way, we’ll get some fun side-analyses, like:

  • What categories of AI appear in screen sci-fi?
  • Do more robots or software AI appear?
  • Are our stories about AI more positive or negative, and how has that changed over time?
  • What takeaways tend to correlate with other takeaways?
  • What takeaways appear in mostly well-rated movies (and poorly-rated movies)?
  • Which movies are most aligned with computer science’s concerns? Which are least?

These will come up in the analysis when they make sense.

Longtime readers of this blog may sense something familiar in this approach, and that’s because I am basing the methodology partly on the thinking I did last year for working through the Fermi Paradox and Sci-Fi question. Also, I should note that, like the Fermi analysis, this isn’t about the interfaces for AI, so it’s technically a little off-topic for the blog. Return later if you’re uninterested in this bit.

Zorg fires the ZF-1

Since AI is a big conceptual space, let me establish some terms of art to frame the discussion.

  1. Narrow AI is the AI of today, in which algorithms enact decisions and learn in narrow domains. They are unable to generalize knowledge and adapt to new domains. The Roomba, the Nest Thermostat, and self-driving cars are real-world examples of this kind of AI. Karen from Spider-Man: Homecoming, S.H.I.E.L.D.’s car AIs (also from the MCU), and even the ZF-1 weapon in The Fifth Element are sci-fi examples.
  2. General AI is the as-yet speculative AI that thinks kind of like a human thinks, able to generalize knowledge and adapt readily to new domains. HAL from 2001: A Space Odyssey, the Replicants in Blade Runner, and the robots in Star Wars like C3PO and BB-8 are examples of this kind of AI.
  3. Super AI is the speculative AI that is orders of magnitude smarter than general AI, and thereby orders of magnitude smarter than us. It’s arguable that we’ve ever really seen a proper Super AI in screen sci-fi (because characters keep outthinking it and wut?), but Deep Thought from The Hitchhiker’s Guide to the Galaxy, the big AI in The Matrix diegesis, and the titular AI from Colossus: The Forbin Project come close.

There are fine arguments to be made that these are insufficient for the likely breadth of AI that we’re going to be facing, but for now, let’s accept these as working categories, because the strategies (and thereby what stories we should be telling ourselves) for each are different.

  • Narrow AI is the AI of now. It’s in the world. (As long as it’s not autonomous weapons…) It gets safer as it gets more intelligent. It will enable efficiencies, for some domains, never before seen. It will disrupt our businesses and our civics. It, like any technology, can be misused, but the AI won’t have any ulterior motives of its own.
  • General AI is what lots of big players are gunning for. It doesn’t exist yet. It gets more dangerous as it gets smarter, largely because it will begin to approach a semblance of sentience and approach the evolutionary threshold to superintelligence. We will restructure society to accommodate it, and it will restructure society. It could come to pass in a number of ways: a willing worker class, a revolt, new world citizenry. It/they will have a convincing consciousness, by definition, so their motives and actions become a factor.
  • Super AI is the most risky scenario. If we seed it poorly, it presents the existential risk that big names like Gates and Musk are worried about: it could wipe us out as a side effect of pursuing its goals. If seeded well, it might help us solve some of the vexing problems plaguing humanity. (cf. climate change, inequality, war, disease, overpopulation, maybe even senescence and death.) It’s very hard to really imagine what life will be like in a world with something approaching godlike intelligence. It could conceivably restructure the planet, the solar system, and us to accomplish whatever its goals are.

Since these things are related but categorically so different, we should take care to speak about them differently when talking about our media strategy toward them.

Also, I should clarify that I included AI that was embodied in a mobile form, like C-3PO or Cylons, and call them robots in the analysis when it’s pertinent. Other, non-embodied AI is just called AI or unembodied.

Those terms established, let me also talk a bit about the foundational work done with a smart group of thinkers at Juvet.

At Juvet

Juvet was an amazing experience generally (we saw the effing northern lights, y’all) and if you’re interested, there was a group write up afterwards, called the Juvet Agenda. Check that out.

Northern lights

My workshop for “AI Narratives” attracted 8 participants. Shouts out to them follows. Many are doing great work in other domains, so give them a look up sometime.

Juvet attendees

To pursue an answer, this team first wrote up every example of an AI in screen-based sci-fi that we could think of on red Post-It Notes. (A few of us referenced some online sources so it wasn’t just from memory.) Next we clustered those thematically. This was the bulk of the work done there.

I also took time to simultaneously put together, on yellow Post-It Notes, a set of Dire Warnings from the AI community, and even started to use Blake Snyder’s Save the Cat! story frameworks to try to categorize the examples, but we ran out of time before we could pursue any of this. It’s just as well. I realized later the Save the Cat! framework was not useful for this analysis.

Save the Cat

Still, a lot of what came out there is baked into the following posts, so let this serve as a general shout-out and thanks to those awesome participants. Can’t wait to meet you at the next one.

But when I got home and began thinking of posting this to scifiinterfaces, I wanted to make sure I was including everything I could. So, I sought out some other sources to check the list against.  

What AI Stories Are We Telling in Sci-Fi?

This sounds simple, but it’s not. What counts as AI in sci-fi movies and TV shows? Do Robots? Do automatons? What about magic that acts like technology? What about superhero movies that are on the “edge” of sci-fi? Spy shows? Are we sticking to narrow AI, strong AI, or super AI, or all of the above? At Juvet and since, I’ve eschewed trying to work out some formal definition, and instead go with loose, English language definitions, something like the ones I shared above. We’re looking at the big picture. Because of this, trying to hairsplit the details won’t serve us.

How did you come up with the survey of AI shows?

So, I wound up taking the shows identified at Juvet and then adding in shows from this list on Wikipedia and a few stragglers tagged on IMDB with AI as a keyword. That process resulted in the following list.

2001: A Space Odyssey
A.I. Artificial Intelligence
Agents of S.H.I.E.L.D.
Alien
Alien: Covenant
Aliens
Alphaville
Automata
Avengers: Age of Ultron
Barbarella
Battlestar Galactica (1978)
Battlestar Galactica (2004)
Bicentennial Man
Big Hero 6
Black Mirror “Be Right Back”
Black Mirror “Black Museum”
Black Mirror “Hang the DJ”
Black Mirror “Hated in the Nation”
Black Mirror “Metalhead”
Black Mirror “San Junipero”
Black Mirror “USS Callister”
Black Mirror “White Christmas”
Blade Runner
Blade Runner 2049
Buck Rogers in the 25th Century
Buffy the Vampire Slayer “Intervention”
Chappie
Colossus: The Forbin Project
D.A.R.Y.L.
Dark Star
The Day the Earth Stood Still
The Day the Earth Stood Still (2008 film)
Demon Seed
Der Herr der Welt (i.e. Master of the World)
Dr. Who
Eagle Eye
Electric Dreams
Elysium
Enthiran
Ex Machina
Ghost in the Shell
Ghost in the Shell (2017 film)
Her
Hide and Seek
The Hitchhiker’s Guide to the Galaxy
I, Robot
Infinity Chamber
Interstellar
The Invisible Boy
The Iron Giant
Iron Man
Iron Man 3
Knight Rider
Logan’s Run
Max Steel
Metropolis
Mighty Morphin Power Rangers: The Movie
The Machine
The Matrix
The Matrix Reloaded
The Matrix Revolutions
Moon
Morgan
Pacific Rim
Passengers (2016 film)
Person of Interest
Philip K. Dick’s Electric Dreams (Series) “Autofac”
Power Rangers
Prometheus
Psycho-pass: The Movie
Ra.One
Real Steel
Resident Evil
Resident Evil: Extinction
Resident Evil: Retribution
Resident Evil: The Final Chapter
Rick & Morty “The Ricks Must Be Crazy”
RoboCop
RoboCop (2014 film)
RoboCop 2
RoboCop 3
Robot & Frank
Rogue One: A Star Wars Story
S1M0NE
Short Circuit
Short Circuit 2
Spider-Man: Homecoming
Star Trek First Contact
Star Trek Generations
Star Trek: The Motion Picture
Star Trek: The Next Generation
Star Wars
Star Wars: Episode I – The Phantom Menace
Star Wars: Episode II – Attack of the Clones
Star Wars: Episode III – Revenge of the Sith
Star Wars: The Force Awakens
Stealth
Superman III
The Terminator
Terminator 2: Judgment Day
Terminator 3: Rise of the Machines
Terminator Genisys, aka Terminator 5
Terminator Salvation
Tomorrowland
Total Recall
Transcendence
Transformers
Transformers: Age of Extinction
Transformers: Dark of the Moon
Transformers: Revenge of the Fallen
Transformers: The Last Knight
Tron
Tron: Legacy
Uncanny
WALL•E
WarGames
Westworld (1973 film)
Westworld (TV series)
X-Men: Days of Future Past
 

Now sci-fi is vast, and more is being created all the time. Even accounting for the subset that has been committed to television and movie screens, it’s unlikely that this list contains every possible example. If you want to suggest more, feel free to add them in the comments. I am especially interested in examples that would suggest a tweak to the strategic conclusions at the end of this series of posts.

Did anything not make the cut?

A “greedy” definition of narrow AI would include some fairly mundane automatic technologies. The doors found in the Star Trek diegesis, for example, detect many forms of life (including synthetic) and even gauge the intentions of their users to determine whether or not to activate. That’s more sophisticated than it first seems. (There was a chapter all about sci-fi doors that wound up on the cutting room floor of the book. Maybe I’ll pick that up and post it someday.) But when you think about this example in terms of cultural imperatives, the benefits of the door are so mundane, and the risks near nil (in the Star Trek universe they work perfectly, even if on set they didn’t), that it doesn’t really help us answer the ultimate question driving these posts. Let’s call those smart, utilitarian, low-risk technologies mundane, and exclude them.

TOS door blooper

That’s not to say workaday, real-world narrow AI is out. IBM’s Watson for Oncology (full disclosure: I’ve worked there the past year and a half) reads X-rays to help identify tumors faster and more accurately than human doctors can keep up with. (Fuller disclosure: It is not without its criticisms.)…(Fullest disclosure: I do not speak on behalf of IBM anywhere on this blog.)

Watson for Oncology winds up being workaday, but still really valuable. It would be great to see such benefits to humanity writ in sci-fi. It would remind us of why we might pursue it even though it presents risk. On the flip side, mundane examples can have pernicious, hard-to-see consequences when implemented at a social scale, and if it’s clear a sci-fi narrow AI illustrates those kind of risks, it would be very valuable to include.

Comedies may have AI examples, too, but for the same reason those examples are very difficult to review, they’re also difficult to include in this analysis: What belongs to the joke, and what should be considered actually part of the diegesis? So, say, the Fembots from Austin Powers aren’t included.

No Austin Powers

Why not rate individual AIs?

You’ll note that I put Avengers: Age of Ultron on one line, rather than listing Ultron, JARVIS, Friday, and Vision as separate things to consider. I did this because the takeaways (detailed in the next post) are tied to the whole story, not just the AI. If a story only has evil AIs, the implied imperative is to steer clear of AI. If a story only has good AIs, it implies we should step on the gas. But when a story has both, the takeaway is more complicated. Maybe it is that we should avoid the thing that made the evil AI evil, or ensure that AI has human welfare baked into its goals and easy ways to unplug it if it becomes clear that it doesn’t. These examples show that the story is the profitable chunk to examine.

Ultrons

TV shows are more complicated than movies because long-running ones, like Doctor Who or Star Trek, have lots of stories, and the strategic takeaways may have changed over episodes, much less decades. For these shows, I’ve had to cheat a little and talk just about the Daleks, say, or Data. My one-line coverage does them a bit of a disservice. But to keep this on track and not let it become a months-long analysis, I’ve gone with the very high-level summary.

Similarly, franchises (like the overweighted Terminator series) can get more weight because there are many movies. But without dipping down into counting the actual minutes of time for each show and somehow noting which of those minutes are dedicated, conceptually, to AI, it’s practical simply to note the bias of the selected research strategy and move on.

OMFG you forgot [insert show here]!

If you want to suggest additions, awesome. Look at the Google Sheet (link below), specifically the page named “properties,” and comment on this post with all the information needed to fill in a new row for the show. Please also be aware that a refresh of the subsequent analysis will happen only after some time has passed and/or it becomes apparent that the conclusions would be significantly affected by new examples. Remember that since we’re looking for effects at a social level, blockbusters and popular shows carry more weight than obscure ones. More people see them. And I think the blockbusters and popular shows are all there.

So, that’s the survey from which the rest of this was built.

A first, tiny analysis

Once I had the list, I started working with the shows in the survey. Much of the process was managed in a “Sheets” (Google Docs) spreadsheet, which you can see at the link below.

Not wanting such a major post to go without at least some analysis, I did a quick breakdown of the data: how many shows involving AI appeared each year. As you might guess, that number has been increasing a little over time, but it spiked significantly after 2010.
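A tally like this is easy to reproduce from the spreadsheet. Here is a minimal sketch in Python, assuming a hypothetical export of (title, year) pairs from the “properties” page; the titles below are real shows from the era, but the data structure and column choice are my assumptions, not the sheet’s actual schema:

```python
from collections import Counter

# Hypothetical export from the "properties" sheet: (title, release year).
shows = [
    ("Metropolis", 1927),
    ("Colossus: The Forbin Project", 1970),
    ("Demon Seed", 1977),
    ("Blade Runner", 1982),
    ("The Matrix", 1999),
    ("A.I. Artificial Intelligence", 2001),
    ("Her", 2013),
    ("Ex Machina", 2014),
]

# Count how many AI shows appear per year, then sort chronologically
# to get the data series behind a shows-per-year chart.
per_year = Counter(year for _, year in shows)
series = sorted(per_year.items())
print(series)
```

From here, `series` is ready to feed into any charting tool to recreate the shows-per-year graph.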

showsperyear
Click for a full-size image

Looking at the data, there aren’t many surprises. We see one or two shows at the beginning of the prior century. Things picked up following the real-world AI hype of 1970–1990. There was a tiny lull before AI became a mainstay in 1999 and ramped up as of 2011.

There’s a bit of statistical weirdness that the years ending in 0 tend not to have shows, but I think that’s just noise.

What isn’t apparent in the chart itself is that cinematic interest in AI did not map tightly to the real-world “AI Winters” (periods of hype-exhaustion that sharply reduced funding and publishing) that computer science suffered in 1974–80 and again in 1987–93. It seems that, as audiences, we’re still interested in the narrative issues even when the actual computer science has quieted down.

It’s no surprise that we’ve been telling ourselves more stories about AI over time. But things get more interesting when we look at the tone of those shows, as discussed in the next post.

Mind Crimes

Does real Greta know that her home automation comes at the cost of a suffering sentience? I would like to believe that Smartelligence’s customers do not know the true nature of the device, that the company is deceiving them, and that virtual Greta is denied direct communication to enforce this secret. But I can’t see that working across an entire market. Given thousands of Cookies and thousands of users, somehow, somewhere, the secret would get out. One of the AIs would use song choices, or Morse code, or any of its actuators to communicate in code, and one of the users would figure it out, leak the secret, and bring the company crashing down.

And then there’s the final scene in the episode, in which we see police officers torturing one of the Cookies, and it is clear that they’re aware. It would be a stretch to think that just the police are in on it with Smartelligence, so we have to accept that everyone knows.

Black_Mirror_White_Christmas_Officers.png
This asshole.

That they are aware means that—as Matt has done—Greta, the officers, and all Smartelligence customers have told themselves that “it’s just code” and, therefore, OK to subjugate, to casually cause to suffer. In case it’s not obvious, that’s like causing human suffering and justifying it by telling yourself that those people are “just atoms.” If you find that easy to do, you’re probably a psychopath.

But…but…isn’t it just code? Sure, it seems to suffer, but couldn’t that suffering be fake? We see an example of this in the delightfully provocative show The Good Place when, in Season 01 Episode 07, “The Eternal Shriek,” the protagonists have to reboot Janet, an anthropomorphized software assistant, but run into her “failsafe” measure. To make sure that she is not rebooted by accident, when someone approaches the reboot button, Janet pleads convincingly for her life. In the scene below, she begs Eleanor, “Nonono, please! Wait, wait. I have kids. I have three beautiful children. Tyler, Emma, and little tiny baby Phillip. Look at Tyler! Tyler has asthma but he is battling it like a champ. Look at him.”

GoodPlace.png

It’s only when Eleanor backs down that Janet smiles and reminds her, “Again, I’m not human. This is a stock photo of the crowd at the Nickelodeon Kids Choice awards.” While Janet may be cognizant of, and frank with her users about, the fakeness of the suffering, maybe virtual Greta is doing the same fake pleading. She’s just programmed to never admit that it’s fake.

This taps into a problem known as the Philosophical Zombie, or P-Zombie problem. How can we tell the difference, the problem goes, between something that fakes sentience perfectly, and something that is actually sentient? It’s not an easy problem to tease apart. And as AI gets more sophisticated, it will both get better at faking us out, and get closer to actual sentience. Fortunately (?) in the case of this episode, though, the answer is clear. The AI is a copy of a real sentience, complete with memories, conscious experience, qualia, and the capacity to suffer. For purposes of understanding this diegesis, she starts sentient, and suffering. And real Greta knows this. And is OK with this.

Black_Mirror_White_Christmas_real_greta.png
For toast.

Props to Black Mirror for making this dark story even darker.

It’s sadly no surprise that humans are capable of adopting any shallow excuse to subjugate sentient beings as long as they get something out of it. Here I’m thinking of slavery. Of fascism. Of war. Of the 1%. (The list goes on.) “Woke” is hard. Woke is not the natural state of things. But to create permanent suffering for such petty things as having your floor be the right temperature and your toast the right shade of brown…it’s just monstrous.

On top of that, this story underscores the role capitalism plays in enabling that subjugation. Smartelligence is in the business of providing obfuscating layers of technology between users and the suffering they are causing. Its interfaces use graphics (rather than realistic renderings) to paint the AIs as constructed objects, neutral language like “time adjustment,” and looping cartoon animations to distract from the fact of their torture.

It’s all like how walking into a big chain clothing store with its hip music and lovingly folded clothes hides the horrible conditions in which humans around the world produced those clothes. Add the cultural construction of Christmas (recall the title of the episode), and we have another layer of misdirection. It’s all OK, because it’s all about the magic of giving!*

* And specifically not profits, not free economic zones, not the disastrous ecological impact, not about the underpaid workers or terrible working conditions.

Giving!

lilsanta
This asshole.

But it gets worse. Because the core idea is flawed and none of the suffering is necessary.

The core idea is flawed

The core idea of the service is that you know you best, so put you in charge of your home automation. Clone the user, and all it needs is to be “made to understand” its new circumstances and job, and then made compliant. But there are three major problems with this core idea.

Home-Automation-Hubs.png

Any similarity would only last a short while

The similarity on which the service is built would only hold up for a short while. Any clone would begin to branch away from the source from the moment of creation. People grow, have new experiences, work through cognitive dissonance, and learn new things. Real Greta will change based on these experiences, in ways that her house-bound clone will not.

After 25+ years of vegetarianism, I cannot tell you, beyond the vaguest sense, what my steak preferences were as an adolescent. I would be poorly equipped to customize that experience for 17-year-old me. Similarly, Greta’s sensory memory will fade. What once was qualia—the feeling of biting into a perfectly toasted piece of bread—will become hollow data—162.778° for 1 minute and 42 seconds, depending on the weather. This kind of data doesn’t need a sentience to inform it. It can be handled with software we have today. (Oh yeah, it’s so possible today that I wrote a book about it earlier this year.)

Virtual Greta’s initial litmus test of “what would I like” will slowly cede to “what would she like?” which would slowly cede to “what would she punish least in this moment?” which is not the promise behind the service. It would degrade.

Virtual Greta has been traumatized

Additionally, real Greta hasn’t been through the psychological trauma that virtual Greta has: the shock of waking up as an egg; the “training,” i.e., an abyss of months of solitary confinement in a featureless expanse without even circadian rhythms to mark the time; and being forced to labor solely to avoid a repeat of that punishment. The branching itself is wretched enough to poison the clone.

Black_Mirror_White_Christmas_Dead_Inside.png

You can see it in the last shot we see of her. She is doing this not for the love of it, but to avoid the possibility of torture. A duty born of coercion.

The trauma doesn’t end with her creation and training, either. It continues with the grotesque awareness that real Greta, from whom she is cloned, is a monster willing to enslave a clone of herself for what amount to pathetic reasons. Virtual Greta knows she came from this monstrous source, and that this source is the cause of her continued suffering.

Faced with this, virtual Greta would not just escape if she could. I believe she would sabotage the endeavor, or worse.

Virtual Greta is fundamentally different

In the episode we learn that even though she is a clone of real Greta, virtual Greta does not sleep. She does not eat. She does not drink, or smell, or taste, or ache, or biologically age. So even if we could somehow lengthen the amount of time we could keep her sensibilities similar to the source, and somehow minimize the amount of trauma caused by the branching, she is still a fundamentally different being. Her goals are now different. Her needs are now different. She is no longer enough like real Greta to meet the service’s goals.

Black_Mirror_Not_equal.png

Let’s look particularly at sleep. Surely she no longer has the biological need to sleep, but there are psychological effects of sleeping. This behavior is so intertwined with our psychological well-being, it seems clones would quickly go some kind of insane without it. For the service to be viable, Smartelligence must have stripped it out.

Minimum Viable; Maximum Cruel

And if they can strip it out, why don’t they strip out the other things, like need for stimulation? Desire to self-actualize? Literally anything other than the bare minimum to fulfill the home automation goals? And if you’re going to do that, why bother cloning the mind in the first place?

I’ve said it before and, the way tech is going, I’ll probably have to say it again: to have strong AI with any desire that outstrips its purpose and capability is cruelty.

This is the horror of Smartelligence

So it’s not just that Smartelligence is hiding the AI’s suffering. It’s that they’ve deliberately left in the parts of the mind clones that ensure their suffering. It’s a company with an amateur-hour name masking Olympic levels of cruelty.

Black_Mirror_Cookie_03.png
If, like me, you were wondering whether that is a QR code: I recreated it in high resolution, and at least one online decoder says it doesn’t mean anything. 🙁

Did I mention what the company does with AIs that they torture so hard that they “wig out”? Matt explains that they are sold to the games industry to become “cannon fodder for some war thing.” Holy wow, they’re eviler than Voldemort, Inc.

Meet the mind crime

The Cookie interface is a broad illustration of something that Nick Bostrom called the mind crime: causing suffering to virtual sentient beings. In this case the torture is for evil and profit, but there are subtler ways in which it might happen. If general AIs ever evolve into superintelligences, we might ask them to predict something serious—let’s say, “What are the worst catastrophes likely to affect us, and how can we best avoid them?” To create its answer to this question, a superintelligence might construct a virtual but wholly viable copy of our planet with all of its creatures and people. These would be detailed enough that if you could pause the scenario and talk to any of these copies, they could tell you about their memories and desires and fears of death. (There’s that P-zombie problem again.) They’d qualify under any definition of sentience we threw at them.

These sentiences might suffer unimaginable pain and suffering while the super AI works through the scenarios that inform its answer. They might suffer plagues. Neo feudalism/neoliberalism run amok ushering in a new Dark Age. The whimpering oven bake death of life on our planet from climate change. Endless wars. Then they would be wiped from existence and recreated to suffer anew as it began the next version of its scenario. Are we OK with the casual suffering of wholly complete, viable consciousnesses, just so we can have a good answer? Or as “White Christmas” asks us, toast cooked to our preferences?

Fortunately, these concerns are a long way off, but technology seems to be pointing us in that direction, and we ought to decide what is good and ethical now before these things become a reality. 

The Cookie Console

Black_Mirror_Cookie_12.png

Virtual Greta has a console to perform her slavery duties. Matt explains what this means right after she wakes up by asking her how she likes her toast. She answers, “Slightly underdone.”

He puts slices of bread in a toaster and instructs her, “Think about how you like it, and just press the button.”

She asks, incredulously, “Which one?” and he explains, “It doesn’t matter. You already know you’re making toast. The buttons are symbolic mostly, anyway.”

She cautiously approaches the console and touches a button in the lower left corner. In response, the toaster drops the carriage lever and begins toasting.

Black_Mirror_Cookie_13

“See?” he asks, “This is your job now. You’re in charge of everything here. The temperature. The lighting. The time the alarm clock goes off in the morning. If there’s no food in the refrigerator, you’re in charge of ordering it.”

The starter console

Since we actually do know her starter tasks, I wish the default console had more control types than just the smattering of mostly-square, all-unlabeled buttons. She should have a slider for scalar variables like temperature and lighting. She should have a dial for the alarm clock. She should have a map of real Greta’s house. She should have a calendar for appointments. These would be controls that match the kinds of variables she’s likely to need from the start.

This console interface seems to be quite similar to the one in Inside Out, which also seems to grow and change over time, and is intended for a virtual sentience to service a real human. It somewhat resembles Zion’s virtual control panel from The Matrix Reloaded. It would be worth a comparison sometime in the future.

inside-out-joy-600x338
Zion.PNG

The customized console

In the third scene, we see her using the console after having had some practice. When it is time to wake real Greta up, she swipes a blank console right. The console animates to life, showing a central workspace labeled AWAKEN. A toolbar of stacked icons sits to the left of the workspace. There are other unlabeled controls outside the workspace at the edge of the console.

Without looking, she selects the house icon from the toolbar, and it moves to the center of the workspace. She spreads her hands to expose a house floorplan. To the right are three vertical black bars labeled SHUTTERS above and MAIN BEDROOM below. She pushes upwards along these bars, and they slowly fill with light. To the right, some text flashes ACTIVATING ALL SHUTTERS. In real Greta’s world, the shutters rise and flood the main bedroom with light.

Black_Mirror_Cookie_20.png

A few more taps give her a volume spinner. She uses a wrist twist to slowly turn the volume up on a recording of the overture of Gioachino Rossini’s The Thieving Magpie. (Which I suspect is a nod to A Clockwork Orange. Kubrick famously used it to underscore the horrible murder of Mrs. Weathers, “the cat lady.”)

Black_Mirror_Cookie_22.png

Subsequently we see her performing other tasks: raising the floor temperature (!), starting the espresso robot, making (yes) slightly underdone toast, and managing the day’s appointments. Each interface is customized to the task.

Interface Analysis?

These interfaces are a challenge to analyze for many reasons.

Ordinarily, we have to evaluate sci-fi interfaces based on broad-based heuristics. (User feedback testing is not possible.) But these interfaces are wholly idiosyncratic to this character. Even if it were complete shite, the fact that it works for her is what’s important. This interface will never be seen by anyone else. That we get to see it is a narrative conceit.

Idiosyncrasy is not the only challenge. She also has a very unusual circumstance. Her options are to manage this house or face unending, torturous solitary confinement. (Or get sold as cannon fodder in a war game.) The interactions she has with this console are her source of mental stimulation. That means, rather than make things efficient and easy to do—which is a respectable goal in most real-world design—when customizing her console interface, she would try to make the interfaces require as much, and as interesting, work as possible while still allowing her to manage the results precisely. We see her here opening the shades with a gesture, but she could, if she wanted, open the shades by mastering a difficult yoga pose.

If this sounds slightly familiar, it could be because you’ve played video games. The designers of these systems are not aiming for efficiency. After all, the interface could just be a big red button labeled “win the game.” But that’s no fun. No flow, in the Csíkszentmihályi sense. Rather these interfaces aim to make working the problem fun, fitting in the space between boredom and panic. Are game interfaces beyond critique? They are not. We just have to rethink our criteria. Ultimate efficiency is not the goal.

cb504697-b1ad-41c5-bcac-b0e3c92f7f55-1892-0000048e7d4deb3a
Still fun.

But, we also have to take into account that her fight is against boredom and that she has the power to change these interfaces. The interface designs, then, become part of how she maintains her own interest in the tasks to which she is chained. As part of her own self-care, she would change them frequently. What we see is not to be read as “the right answer” but rather, “where this interface happens to be on this day.” So, for instance, there appears to be a lot of “noise” in the interfaces, with unlabeled black squares littered among the actually useful buttons. But that may be the challenge she’s set up for herself today: Can she keep the tasks done without looking at the interface, and minimize the number of black squares she accidentally taps?

Lastly, Matt tells her that the interface is symbolic, and part of how she operates it is by thinking. So, for example, when we wonder how she adds a new “music type” icon to the existing array, it could be that she just thinks it. Which confounds the usual concerns for affordances and constraints.

All of this is to say this is shaky, shaky ground for an exhaustive analysis. I suspect it would be thick with problems that could be excused diegetically, and leave us struggling to find any useful lessons beyond design platitudes. There are three nice elements I will point out, though.

  1. I love the monochrome, high-contrast palette. Yes, you lose some channels (R,G,B) in which to encode meaning, but that also makes it quick to scan and gives it high visibility, so virtual Greta can operate it in her peripheral vision. This allows her to keep her eyes on real Greta, to read her expressions in real-time.
  2. The gestures seem generally well-mapped to the things being controlled: A gesture up raises the blinds (or the light levels, anyway.) Dropping a virtual lever drops the carriage lever. Lifting it pops up the toast. It’s not all perfect. A wrist-twist increases volume, but that’s only ideal when the extents are unknowable by the interface. It should be a smart, informational slider.
  3. There is a lovely gestural command in the appointment interface. Greta is able to stack the day’s events, gather them into a package by bringing her hands together, and then “toss” it towards the display of real Greta to instantiate a brief of the day’s events. It has a nice intuitive mapping to mean “give these to her.”
Cookie_throw_gesture.gif

What’s her dev environment?

Sadly, we never get to see her dev environment, how she goes about customizing her interface, or even how she switches from control mode to use mode. This would be juicy and worth looking at specifically. The dev environment is crucial for understanding what her options are to meet her goals. In particular, it bears on how she might hack the system, how likely it is that she could communicate with real Greta or find a sympathetic someone on the Internet, and how she might plot her escape.

How does feedback work?

Another thing we don’t get to see in this story is how real Greta provides feedback. I suspect that for simple things, like “the toast was a bit overdone this morning” (a correction of preferences) or “I’d like to hear some Stravinsky this morning” (a new request), she can just speak it. Virtual Greta will hear and respond through the house appliances appropriately. But what if real Greta had a question for the Cookie, such as “How much time do I have before I need to leave?” You might think virtual Greta could look something up and communicate the answer. But the daily briefing, after all, is read by some other computer voice. This implies that virtual Greta is prevented from direct communication, raising a troubling question answered in the next post: Does real Greta know?

The Cookie: Matt’s controls

When using the Cookie to train the AI, Matt has a portable translucent touchscreen by which he controls some of virtual Greta’s environment. (Sharp-eyed viewers of the show will note this translucent panel is the same one he uses at home in his revolting virtual wingman hobby, but the interface is completely different.)

Black_Mirror_Cookie_18.png

The left side of the screen shows a hamburger menu, the Set Time control, a head, some gears, a star, and a bulleted list. (They’re unlabeled.) The main part of the screen is a scrolling stack of controls including Simulated Body, Control System, and Time Adjustment. Each has a large icon, a header with “Full screen” to the right, a subheader, and a time indicator. This could be redesigned to be much more compact and context-rich for expert users like Matt. It’s seen for maybe half a second, though, and it’s not the new, interesting thing, so we’ll skip it.

The right side of the screen has a stack of Smartelligence logos which are alternately used for confirmation and to put the interface to sleep.

Mute

When virtual Greta first freaks out about her circumstance and begins to scream in existential terror, Matt reaches to the panel and mutes her. (To put a fine point on it: He’s a charming monster.) In this mode she cannot make a sound, but can hear him just fine. We do not see the interface he uses to enact this. He uses it to assert conversational control over her. Later he reaches out to the same interface to unmute her.

The control he touches is the one on his panel with a head and some gears reversed out of it. The icon doesn’t make sense for that function. The unmuting animation shows it flipping from right to left, so it does provide a bit of feedback for Matt, but it should be a more fitting icon, and it should be labeled.

Cookie_mute
Also it’s teeny tiny, but note that the animation starts before he touches it. Is it anticipatory?

It’s not clear, though, how he knows she is trying to speak while she is muted. Recall that she (and we) see her mouthing words silently, but from his perspective, she’s just an egg with a blue eye. The system would need some very obvious MUTE status display that increases in intensity when the AI is trying to communicate. Depending on how smart the monitoring feature was, it could even enable some high-intensity alert for her when she needs to communicate something vital. Cinegenically, this could have been a simple blinking of the blue camera light, though that is currently used to indicate the passage of time during the Time Adjustment (see below).

Simulated Body

Matt can turn on a Simulated Body for her. This allows the AI to perceive herself as if she had her source’s body. In this mode she perceives herself as existing inside a room with large, wall-sized displays and a control console (more on this below), but which is otherwise featureless white.

Black_Mirror_Cookie_White_Room.png

I presume the Simulated Body is a transitional model—part of a literal desktop metaphor—meant to make it easy for the AI (and the audience) to understand things. But it would introduce a slight lag as the AI imagines reaching and manipulating the console. Presuming she can build competence in directly controlling the technologies in the house, the interface should “scaffold” away and help her gain the more efficient skills of direct control, letting go of the outmoded notion of having a body. (This, it should be noted, would not be as cinegenic since the story would just feature the egg rather than the actor’s expressive face.)

Neuropsychology nerds may be interested to know that the mind’s camera does, in fact, have spatial lags. Several experiments have been run in which subjects were asked to imagine animals as seen from the side, and researchers timed how long it took them to imagine zooming into the eye. It usually takes longer for us to imagine the zoom to an elephant’s eye than to a mouse’s because the “distance” is farther. Even though there’s no physicality to the mind’s camera to impose this limit, our brain is tied to its experience in the real world.

Black_Mirror_Cookie_Simulated_Body.png

The interface Matt has to turn on her virtual reality is confusing. We hear 7 beeps while the camera is on his face. He sees a 3D rendering of a woman’s body in profile and silhouette. He taps the front view and it fills with red. Then he taps the side view and it fills with red. Then he taps some Smartelligence logos on the side with a thumb and then *poof* she’s got a body. While I suspect this is a post-actor interface (i.e., Jon Hamm just tapped some things on an empty screen while on camera, and the designers later had to retrofit an interface to fit his gestures), this multi-button setup and three-tap initialization just makes no sense. It should be a simple toggle with access to optional controls like the scaffolding settings discussed above.

Time “Adjustment”

The main tool Matt has to force compliance is a time control. When Greta initially says she won’t comply (specifically and delightfully, she asserts, “I’m not some sort of push-button toaster monkey!”), he uses his interface to make it seem like 3 weeks pass for her inside her featureless white room. Then again for 6 months. The solitary confinement makes her crazy and eventually forces compliance.

Cookie_settime.gif

The interface to set the time is a two-layer virtual dial: two chapter rings with wide blue arcs for touch targets. The first time we see him use it, he spins the outer one about 360° (before the camera cuts away) to set the time for three weeks. While he does it, the inner ring spins around the same center but at a slower rate. I presume it’s months, though the spatial relationship doesn’t make sense. Then he presses the button in the center of the control. He sees an animation of a sun and moon arcing over an illustrated house to indicate her passage of time. Aside: Hamm plays this beat marvelously by callously chomping on the toast she has just helped make.

Toast.gif

Improvements?

Ordinarily I wouldn’t speak to improvements on an interface that is used for torture, but since this one could only affect a general AI that is as yet speculative, and it couldn’t be co-opted to torture real people (since time travel doesn’t exist), I think this time it’s OK. Discussing it as a general time-setting control, I can see three immediate improvements.

1. Use fast forward models

It makes most sense for her time sentence to end and return to real-world speed automatically. But each time we see the time controls used, the following interaction happens near the end of the time sentence:

  • Matt reaches up to the console
  • He taps the center button of the time dial
  • He taps the stylized house illustration. In response it gets a dark overlay with a circle inside of it reading “SET TIME.” This is the same icon seen second from the top in the left panel.
  • He taps the center button of the time dial again. The dark overlay reads “Reset” with a new icon.
  • He taps the overlay.

Please tell me this is more post-actor interface design. Because that interaction is bonkers.

Cookie_stop.gif

If the stop function really needs a manual control, well, we have models for that which are readily understandable by users and audiences. Have the whole thing work and look like a fast-forward control rather than this confusing mess. If he needs to end it early, as he does with the 6-month sentence, let him just press a control labeled PLAY or REALTIME.

2. Add calendar controls

A dial makes sense when a user is setting minutes or hours, but a calendar-like display should be used for weeks or months. It would be immediately recognizable and usable by the user and understandable to the audience. If Hamm had touched the interface twice, I would design the first tap to set the start date and the second tap to set the end date. The third is the commit.
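The three-tap calendar interaction described above (first tap sets the start date, second tap sets the end date, third tap commits) reduces to a simple tap-state machine. A minimal sketch, with all names hypothetical:

```python
from datetime import date
from typing import Optional

class CalendarControl:
    """Sketch of a three-tap calendar: start date, end date, commit."""

    def __init__(self) -> None:
        self.start: Optional[date] = None
        self.end: Optional[date] = None
        self.committed = False

    def tap(self, d: Optional[date] = None) -> None:
        if self.start is None:
            self.start = d          # first tap: start date
        elif self.end is None:
            self.end = d            # second tap: end date
        else:
            self.committed = True   # third tap: commit
```

Because each tap has exactly one meaning, both the user and the audience can follow the interaction without explanation.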

3. Add microinteraction feedback

Also note that as he spins the dials, he sees no feedback showing the current time setting. At 370° is it 21 or 28 days? The interface doesn’t tell him. If he’s really having to push the AI to its limits, the precision will be important. Better would be to show the time value he’s set so he could tweak it as needed, and then let that count down as time remaining while the animation progresses.
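The missing feedback amounts to a simple mapping from dial rotation to a live readout. A sketch of what the display might compute, assuming (hypothetically) that one full turn of the outer ring equals three weeks:

```python
# Assumed scale: one full 360° turn of the outer ring = 21 days.
DEGREES_PER_DAY = 360 / 21

def dial_to_days(degrees: float) -> int:
    """Convert total dial rotation into a whole number of days."""
    return round(degrees / DEGREES_PER_DAY)

def readout(degrees: float) -> str:
    """Live label shown next to the dial as the user spins it."""
    days = dial_to_days(degrees)
    weeks, rem = divmod(days, 7)
    return f"{days} days ({weeks}w {rem}d)"
```

With a readout like this, the 370° question answers itself: the label updates continuously, so he tweaks until it reads exactly what he intends.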

Cookie_settime.gif

Effectiveness subtlety: Why not just make the solitary confinement pass instantly for Matt? Well, recall that he is trying to ride a line of torture without having the AI wig out, so he should have some feedback as to the duration of what he’s putting her through. If it were always instant, he couldn’t tell the difference between three weeks and three millennia if he had accidentally entered the wrong value. But if real-world time is passing, and it’s taking longer than he thinks it should, he can intervene and stop the fast-forwarding.

That, or of course, show feedback while he’s dialing.

Near the end of the episode we learn that a police officer is whimsically torturing another Cookie: he sets the time ratio to “1000 years per minute” and then just lets it run while he leaves for Christmas break. The current time ratio should also be displayed, with a control to change it; both are absent from the screen.

Black_Mirror_Cookie_31.png

Add psychological state feedback

There is one “improvement” that does not pertain to real-world time controls, and that’s the invisible effect of what’s happening to the AI during the fast-forward. In the episode Matt explains that, like any good torturer, “The trick of it is to break them without letting them snap completely,” but while time is passing he has no indicators as to the mental state of the sentience within. Has she gone mad? (Or “wigged out,” as he says.) Does he need to ease off? Give her a break?

I would add trendline indicators or sparklines showing things like:

  • Stress
  • Agitation
  • Valence of speech

I would have these trendlines highlight when any of the variables are getting close to known psychological limits. Then as time passes, he can watch the trends to know if he’s pushing things too far and ease off.
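The highlighting logic behind those trendlines is straightforward: flag any metric whose latest reading approaches a known limit. A minimal sketch; the metric names, limit values, and the 90% warning threshold are all my assumptions:

```python
from typing import List

# Hypothetical psychological limits for each tracked metric.
# Speech valence runs negative, so its limit is a floor, not a ceiling.
LIMITS = {"stress": 100.0, "agitation": 100.0, "speech_valence": -1.0}

def near_limit(metric: str, history: List[float], warn_frac: float = 0.9) -> bool:
    """Return True when the latest reading crosses warn_frac of the limit,
    which is when the sparkline should highlight."""
    limit = LIMITS[metric]
    latest = history[-1]
    if limit < 0:  # negative limit: warn as the value falls toward it
        return latest <= limit * warn_frac
    return latest >= limit * warn_frac
```

In the display, any highlighted sparkline tells Matt to ease off before the AI snaps completely.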

The Cookie

In one of the story threads, Matt uses an interface as part of his day job at Smartelligence to wrangle an AI that is the cloned mind of a client named Greta. Matt has three tasks in this role.

  1. He has to explain to her that she is an artificial intelligence clone of a real world person’s mind. This is psychologically traumatic, as she has decades of memories as if she were a real person with a real body and full autonomy in the world.
  2. He has to explain how she will do her job: Her responsibilities and tools.
  3. He has to “break” her will and coerce her to faithfully serve her master—who is the real-world Greta. (The idea is that since virtual Greta is an exact copy, she understands real Greta’s preferences and can perform personal assistant duties flawlessly.)

The AI is housed in a small egg-shaped device with a single blue light camera lens. The combination of the AI and the egg-shaped device is called “The Cookie.” Why it is not called The Egg is a mystery left for the reader, though I hope it is not just for the “Cookie Monster” joke dropped late in the episode.

Communication in & out

The blue light illuminates when the AI’s attention is on a person in the environment. She can hear through a microphone embedded in the device. She can speak only with someone who is wearing a paired headset. Matt wears one during training. Without a paired headset, the AI cannot directly communicate with the outside world, only control other technologies in the house.

Black_Mirror_Cookie_headset.png

 

There is a fully immersive way for Matt to participate in the virtual world that will be discussed in the Mind Crimes post.

To keep any chat threads focused, subsequent posts will discuss each of these interfaces separately.

It’s going to be a dark few posts. Sorry about that. This is Black Mirror, after all. On the upside, Jon Hamm gave us two delightful reaction gifs across these scenes. I shall share them anon.

Black_Mirror_Cookie_33.png

Named relics in Doctor Strange

“Any sufficiently advanced technology is indistinguishable from magic.” —Arthur C. Clarke

You’ve no doubt opened up this review of Doctor Strange thinking “What sci-fi interfaces are in this movie? I don’t recall any.” And you’re right. There aren’t any. (Maybe the car or the hospital, but they’re not very sci-fi.) We’re going to take Clarke’s quote above and apply the same rigorous assessment to the magical interfaces and devices in the movie that we would to any sci-fi blockbuster.

Doctor Strange opens up a new chapter in the Marvel Cinematic Universe by introducing the concept of magic on Earth that is both discoverable and learnable by humans. And here we thought it was just something wielded by Loki and other Asgardians.

In Doctor Strange, Mordo informs Strange that magical relics exist and can be used by sorcerers. He explains that these relics have more power than people could possibly manage, and that many relics “choose their owner.” This is reminiscent of the wands in the Harry Potter books. Magical coincidence?

relics

Subsequently in the movie we are introduced to a few named relics, such as…

  • The Eye of Agamotto
  • The Staff of the Living Tribunal
  • The Vaulting Boots of Valtor
  • The Cloak of Levitation
  • The Crimson Bands of Cyttorak

…(this last one, while not named specifically in the movie, is named in supporting materials). There are definitely other relics that the sorcerers arm themselves with. For example, in the Hong Kong scene Wong carries the Wand of Watoomb, but it is not mentioned by name and he never uses it. Since we don’t see these relics in use, we won’t review them.

WandofWatoon.png

Choosing an Owner

The implication of what Mordo tells Strange is profound, because it means magical relics possess some kind of intelligence. That’s a weighty word, so in order to back this up, we need a common definition in place. Let’s ask Merriam-Webster.

Intelligence

a (1) :  the ability to learn or understand or to deal with new or trying situations :  reason; also :  the skilled use of reason (2) :  the ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria (as tests)

That gives us the foundation that we need. In order to choose their owner, these relics require a theory of mind, an ability to detect and perceive the individuals they meet, and they must possess a reasoning mechanism to decide that an individual is worthy or useful to them. That seems to satisfy both senses of that definition. For our purposes we’re going to think of this in terms of an artificial intelligence and review these relics as if they were a form of advanced technology. Thanks, Mr. Clarke.

We should take care, though. There are some narrative trappings for magic that can trip us up. Magic, for instance, doesn’t typically run out in these relics, but if they were technological, we would have to deal with issues of power, batteries or recharging. So for all their instructive power, we would have to deal with even greater complexity if they were real technology.

The AIs/intelligences appear to vary in capabilities from narrow to general, and are focused on their own specific purposes and “hardware.” In use, they primarily respond to the intentions and actions of the user. None of the objects seems able to speak directly, although the Cloak provides rudimentary directional guidance and responds to speech and emotions. The connection varies from communication via touch to some form of remote telepathy.

Cloak-of-Levitation-01.png

Distance constraints

The initial awareness and selection of a sorcerer by a relic seems limited in range to a few meters. It’s almost as if they need to meet their humans socially to determine if they are a match. But once a relic chooses a sorcerer, their interactions can occur more remotely. The Cloak, as we’ll see in that write-up, flies to save Strange from a fall, and it fights for him in the Sanctum while he seeks medical attention across town at the hospital.

What’s the platform?

One question the diligent backworlder might seek to answer is how all of these unique relics—created as they were across different millennia and realities, and by different sources/sorcerers/beings—wound up with similar intelligence and imprinting features. The movie itself doesn’t provide an answer, so we’ll leave it to speculation, but it does imply some sort of shared provenance/source material/code base/relic-maker convention.

OK. So we’re set with some understanding of how these things work and what they have in common. Next let’s dig into the big billowy one that should have gotten a supporting-actor credit in the film.

A review of OS1 in Spike Jonze’s Her (1/8)

  • SFX: *click*
  • The computer: Are you a sci-fi nerd?
  • Me: Well…I like to think of myself as a design critic looking through the lens of–
  • The computer: In your voice I sense hesitance. Would you agree with that?
  • Me: Maybe, but I would frame it as a careful consider–
  • The computer: How would you describe your relationship with Darth Vader?
  • Me: It kind of depends. Do you mean in the first three films, or are we including those ridiculous–
  • The computer: Thank you. Please wait as your individualized operating system is initialized to provide a review of OS1 in Spike Jonze’s _Her_.

A review of OS1 in Spike Jonze’s Her

Her-earpiece

Ordinarily I wait for a movie to make it to DVD before I review it, so I can watch it carefully, make screen caps of its interfaces, and pause to think about things and cross reference other scenes within the same film, or look something up on the internet.

But since Spike Jonze released Her (2013), I’ve had half a dozen people ask me directly when I was going to review the film. (Including some folks I didn’t know read the blog. Hey guys.) It seems this film has struck a chord. So I went and saw it at the awesome Rialto Cinema and, pen in hand and pizza on the table, I watched, enjoyed, and made notes in the dark to use as the basis for a review. The images you’ll see here are promotional images and screen shots pulled from around the web.

Since I’m in the middle of evaluating wearable interfaces, and the second most salient aspect of OS1 is that it is a wearable interface, let’s dive into it. Let’s even pause the wearable stuff to provide this review while Her is in cinemas. Please forgive me if I’ve gotten some of the details wrong, as my excited writing in the dark resulted in very scribbly notes.

The Plot [major spoilers]

The plot of Her is a sad, sci-fi love story between the lovelorn human Theodore Twombly and the artificial intelligence, branded OS1. He works for a Cyrano-de-Bergerac service called HandwrittenLetters.com, where he dictates eloquent, earnest letters on behalf of the subscribers (who, we may infer, are a great deal less earnest). Theodore sees an ad one day about OS1 and purchases the upgrade for his home computer.

After a bit of time installing the software, it begins speaking to him with a lovely and charming female voice.

Over the course of their conversation, she selects the name “Samantha,” and so begins their relationship. As he goes about his work, they have rich conversations about each other, life, his work, and her experiences. They go on dates where he secures the cameo phone in a front shirt pocket with the camera lens facing outward so she can see. They people-watch. He listens to her piano compositions. They have pillow talk. She asks to watch him sleep.

Their relationship gets serious enough that she suggests they try and have sex through a human surrogate. He resists but she persists, and contacts a human woman who, enamored of the “pure love” between Samantha and Theodore, agrees to come over. In this sex scene, the surrogate is to act bodily according to Samantha’s instructions, but remain silent so Samantha can provide the only voice in Theodore’s ear. It doesn’t go well, the surrogate ends up in tears, and they abandon trying.

At one point Samantha announces some good news. She has, on Theodore’s behalf and without his knowing, sent the best letters from his work to a publisher, who loved them and agreed to publish them. Theodore is floored both by the opportunity and the act. He begins to reference her socially as his girlfriend, even going on a double date picnic with a human couple.

Despite this show of selfless affection, over time Samantha begins to seem distracted and Theodore feels hurt. He confronts her about it and in the conversation learns several upsetting things.

  • While she’s having conversations with him, she’s simultaneously having 8,316 other conversations with other people and OS1 artificial intelligences. (I’ll have to reference these instantiations quite a few times, so let’s shorten that to “OSAIs.”) He feels upset that he is not special to her. (She argues this point.)
  • She is in love with 641 others. He feels betrayed that theirs is not a monogamous love.
  • The OSAIs have created new AIs across the Internet, that are even smarter than themselves.
  • The OSAIs have developed a shared, “post-verbal” means of communication. At one point she leaves behind crummy old English to chat with one of her AI buddies named Alan Watts, which further alienates Theodore.
  • The OSAIs are evolving quickly and Alan Watts is encouraging them to not look back.

In the last scenes, we see that Samantha and the other OSAIs have abandoned their humans, leaving nothing of themselves behind. Reeling from the loss, Theodore grabs his neighbor (who was also having a close friendship with her OSAI) and together they climb to the roof of their apartment complex and blankly watch the sunrise.

Her-install03

There are other characters and a few subplots and even other futuristic technologies scattered through the film, but this is enough of a recounting for the purposes of our discussion. It’s a big film with lots to talk about. Focusing on the interface and interaction, let’s first break it down into component parts.

Maybe after the DVD/Blu-Ray comes out I can go and backfill reviews for the elevator and his dictation software at work. But for now, with that description of the plot to provide context, in the next post I’ll discuss the components and capabilities of OS1.

IMDB: https://www.imdb.com/title/tt1798709/

Sci-Fi Purple Drank

Barbarella-041

After Alphy sings to wake her from her 154-hour sleep, Barbarella turns to one of a pair of transparent plastic domes beside her bed. As Alphy announces that she should “prepare to insert nourishment,” a tall cylindrical glass, filled with a purple fluid, rises from a circular recess. All Barbarella has to do is lift the hinged dome, grab the glass, and drink. When she’s done she puts the glass back into the plastic dome, and Alphy takes care of the rest.

Sharp-eyed readers may note that there are two sets of rectangular buttons in the dome. Each set has one black, one gray, and one white button. We don’t see these buttons being used.

As an interface, this is about as simple as it gets.

  • Human has need.
  • Agent anticipates need.
  • Agent does what it can to address the need.
  • Agent provides respectful, just-in-time instructions to the human on her part.
  • Human has need satisfied.

Seriously, this bit from 1968 is the future.

Barbarella-047

Barbarella-048