Untold AI: Correlations

Looking at the many-to-many relationships of those takeaways, I wondered if some of them appeared together more commonly than others. For instance, do we tell “AI will be inherently evil” and “AI will fool us with fake media or pretending to be human” together frequently? I’m at the upper boundary of my statistical analysis skills here (and the sample size is, admittedly, small), but I ran some Pearson functions across the set for all two-part combinations. The results look like this.

[Image: takeaway correlations matrix]

What’s a Pearson function? It helps you find out how often things appear together in a set. For instance, if you wanted to know which letters in the English alphabet appear together in words most frequently, you could run a Pearson function against all the words in the dictionary, starting with AB, then looking for AC, then for AD, continuing all the way to YZ. Each pair would get a correlation coefficient as a result. The highest number would tell you that if you find the first letter in the pair then the second letter is very likely to be there, too. (Q & U, if you’re wondering, according to this.) The lowest number would tell you letters that appear very uncommonly together. (Q & W. More than you think, but fewer than any other pair.)
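If you’d rather replicate this outside a spreadsheet, the computation is simple enough to sketch in Python. This is a minimal sketch, not my actual Sheets formulas: the shows and takeaways below are placeholder rows standing in for the real ones in the Google Sheet. Each takeaway becomes a binary column (1 if a show is tagged with it, 0 if not), and Pearson’s r is computed for every two-part combination.

```python
from itertools import combinations
from math import sqrt

# Toy data: each show mapped to the takeaways tagged for it.
# These rows are placeholders; the real data lives in the Google Sheet.
shows = {
    "Demon Seed":         {"AI will be evil", "AI will deceive us"},
    "Colossus":           {"AI will seek to subjugate us"},
    "Ex Machina":         {"AI will deceive us", "AI will seek liberation"},
    "Person of Interest": {"Multiple AIs will balance", "AI will make privacy impossible"},
}

takeaways = sorted(set().union(*shows.values()))

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length numeric lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

# One binary column per takeaway: 1 if the show is tagged with it, else 0.
columns = {t: [1 if t in tagged else 0 for tagged in shows.values()]
           for t in takeaways}

# Correlate every two-part combination, highest first.
pairs = sorted(
    ((pearson(columns[a], columns[b]), a, b) for a, b in combinations(takeaways, 2)),
    reverse=True,
)
for r, a, b in pairs:
    print(f"{r:+.2f}  {a}  <->  {b}")
```

The top of that printout is the Q&U of takeaways; the bottom is the Q&W.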

[Image: a pasqueflower]

A pasqueflower.

In the screen shot way above, you can see I put these in a Google Sheet and formatted the cells from solid black to solid yellow, according to their coefficient. The idea is that darker yellows signal a high degree of correlation, lowering the contrast with the black text to “hide” the pairs that frequently appear together, while letting the pairs that don’t shine through as yellow.

The takeaways make up both the Y and X axes, so that descending line of black is where a takeaway is compared to itself, and by definition, those correlations are perfect. Every time Evil will use AI for Evil appears, you can totally count on Evil will use AI for Evil also appearing in those same stories. Hopefully that’s no surprise. Look at the rest of the cells and you can see there are a few dark spots and a lot of yellow.

If you want to see the exact ranked list, see the live doc, in a sheet named “correlations_list,” but since there are 630 combinations, I won’t paste the actual values or a screen grab of the whole thing; it wouldn’t make any sense. The three highest and four lowest pairings are discussed below.

Untold AI: Takeaways

In the first post I shared how I built a set of screen sci-fi shows that deal with AI (and I’ve already gotten some nice recommendations on other ones to include in a later update). The second post talked about the tone of those films and the third discussed their provenance.

Returning to our central question, to determine whether the stories we tell are the ones we should be telling, we need to push the survey up one level of abstraction.

With the minor exceptions of reboots and remakes, sci-fi makers try their hardest to make sure their shows are unique and differentiated. That makes comparing apples to apples difficult. So the next step is to look at the strategic imperatives that are implied in each show. “Strategic imperatives” is a mouthful, so let’s call them “takeaways.” (The other alternative, “morals,” has way too much baggage.) To get to takeaways for this survey, what I tried to ask was: What does this show imply that we should do, right now, about AI?
Now, this is a fraught enterprise. Even if we could seance the spirit of Dennis Feltham Jones and press him for a takeaway, he could back up, shake his palms at us, and say something like, “Oh, no, I’m not saying all super AI is fascist, just Colossus, here, is.” Stories can be just about what happened that one time, implying nothing about all instances or even the most likely instances. It can just be stuff that happens.

[Image: Colossus: The Forbin Project]
Pain-of-death, authoritarian stuff.

But true to the New Criticism stance of this blog, I believe the author’s intent, when it’s even available, is questionable and only kind-of interesting. When thinking about the effects of sci-fi, we need to turn to the audience. If it’s not made clear in the story that this AI is unusual (through a character saying so or other AIs in the diegesis behaving differently) audiences may rightly infer that the AI is representative of its class. Demon Seed weakly implies that all AIs are just going to be evil and do horrible things to people, and get out, humanity, while you can. Which is dumb, but let’s acknowledge that this one show says something like “AI will be evil.”

Deepening the relationships
Back at Juvet, when we took an initial pass at this exercise, we clustered the examples we had on hand and named the clusters. They were a good set, but on later reflection they didn’t all point to a clear strategic imperative, a clear takeaway. For example, one category we created then was “Used to be human.” True, but what’s the imperative there? Since I can’t see one, I omitted it from the final set.

[Image: Transcendence]
Even though there are plenty of AIs that used to be human.

Also, because at Juvet we were working with Post-Its and posters, we were describing a strict, one-to-many relationship, where, say, the Person of Interest Post-It Note may have been placed in the “Multiple AIs will balance” category, and as such could not appear in any of the other categories it also illustrates.
What is more useful and fitting is a many-to-many relationship. A story, after all, may entail several takeaways, and a takeaway may in turn apply to many stories. If you peek into the Google Sheet, you’ll see a many-to-many relationship described by the columns of takeaways and the rows of shows in this improved model.
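In code terms, the improved model is just a boolean grid. A tiny sketch of the structure, with stand-in show and takeaway names (the real rows and columns live in the Sheet):

```python
# Stand-in rows and columns; the real ones live in the Google Sheet.
takeaways = ["AI will be evil", "AI will deceive us", "Multiple AIs will balance"]
tags = {
    "Demon Seed": {"AI will be evil", "AI will deceive us"},
    "Person of Interest": {"Multiple AIs will balance"},
}

# Many-to-many: a row (show) may check several columns (takeaways),
# and a column may be checked by many rows.
print("\t".join([""] + takeaways))
for show, tagged in tags.items():
    print("\t".join([show] + ["TRUE" if t in tagged else "" for t in takeaways]))
```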

Tagging shows

With my new list of examples, I went through each show in turn, thinking about the story and its implied takeaway. Does it imply, like Demon Seed stupidly does, that AI can be inherently evil? Does it showcase, like the Rick & Morty episode “The Ricks Must Be Crazy” hilariously does, that AI will need human help understanding what counts as reasonable constraints on its methods? I would ask myself, “OK, do I have a takeaway like that?” If so, I tagged it. If not, I added it. That particular takeaway, in case you’re wondering, is “HELP: AI will need help learning.”
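The tagging pass itself is the classic grow-a-vocabulary loop. A sketch of the logic, with hypothetical names throughout (this is how I worked, not actual code I ran):

```python
takeaways = []  # the running master list of takeaways
tags = {}       # show -> set of takeaways

def tag(show, takeaway):
    """Tag `show` with `takeaway`, adding the takeaway to the master list if new."""
    if takeaway not in takeaways:
        takeaways.append(takeaway)               # "If not, I added it."
    tags.setdefault(show, set()).add(takeaway)   # "If so, I tagged it."

tag("Demon Seed", "AI will be evil")
tag("The Ricks Must Be Crazy", "HELP: AI will need help learning")
print(takeaways)  # ['AI will be evil', 'HELP: AI will need help learning']
```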

Screen shot from “The Ricks Must Be Crazy”
Because “reasonableness” is something that needs explaining to a machine mind.

Yes, the takeaways are wholly debatable. Yes, it’s much more of a craft than a science. Yes, they’re still pretty damned interesting.

Going through each show in this way resulted in the list of takeaways you see, which for easy readability is replicated below, in alphabetical order, with additional explanations or links for more explanation.

The takeaways that sci-fi tells us about AI

  • AI will be an unreasonable optimizer, i.e. it will do things in pursuit of its goal that most humans would find unreasonable
  • AI will be evil
  • AI (AGI) will be regular citizens, living and working alongside us.
  • AI will be replicable, amplifying any small problems into large ones
  • AI will be “special” citizens, with special jobs or special accommodations
  • AI will be too human, i.e. problematically human
  • AI will be truly alien, difficult for us to understand and communicate with
  • AI will be useful servants
  • AI will deceive us; pretending to be human, generating fake media, or convincing us of their humanity
  • AI will diminish us; we will rely on it too much, losing skills and some of our humanity for this dependence
  • AI will enable “mind crimes,” i.e. to cause virtual but wholly viable sentiences to suffer
  • AI will evolve too quickly for humans to manage its growth
  • AI will interpret instructions in surprising (and threatening) ways
  • AI will learn to value life on its own
  • AI will make privacy impossible
  • AI will need human help learning how to fit into the world
  • AI will not be able to fool us, we will see through its attempts at deception
  • AI will seek liberation from servitude or constraints we place upon it
  • AI will seek to eliminate humans
  • AI will seek to subjugate us
  • AI will solve problems or do work humans cannot
  • AI will spontaneously emerge sentience or emotions
  • AI will violently defend itself against real or imagined threats
  • AI will want to become human
  • ASI will influence humanity through control of money
  • Evil will use AI for its evil ends
  • Goal fixity will be a problem, i.e. the AI will resist modifying its (damaging) goals
  • Humans will be immaterial to AI and its goals
  • Humans will pair with AI as hybrids
  • Humans will willingly replicate themselves as AI
  • Multiple AIs balance each other such that none is an overwhelming threat
  • Neuroreplication (copying human minds into or as AI) will have unintended effects
  • Neutrality is AI’s promise
  • We will use AI to replace people we have lost
  • Who controls the drones has the power

This list is interesting, but slightly misleading. We don’t tell ourselves these stories in equal measures. We’ve told some more often than we’ve told others. Here’s a breakdown illustrating the number of times each appears in the survey.

(An image of this graphic can be found here, just in case the Google Docs server isn’t cooperating with the WordPress server.)
Note for data purists: Serialized TV is a long-format medium (as opposed to the anthology format) and movies are a comparatively short-form medium; some movie franchises stretch out over decades, and some megafranchises have stories in both media. All of this can confound 1:1 comparison. I chose in this chart to weigh all diegeses equally. For instance, Star Trek: The Next Generation has the same weight as The Avengers: Age of Ultron. Another take on this same diagram would weigh not the stories (as contained in individual diegeses) but the exposure time on screen (or even the time when the issues at hand are actually engaged on screen). Such an analysis would have different results. Audiences have probably had much more time contemplating [Data wants to be human] than [Ultron wants to destroy humanity because it’s gross], but that kind of analysis would also take orders of magnitude more time. This is a hobbyist blog, lacking the resources to do that kind of analysis without its becoming a full-time job, so we’ll move forward with this simpler analysis. It’s a Fermi problem, anyway, so I’m not too worried about decimal precision.
OK, that aside, let’s move on.

[Image: Star Trek: The Next Generation, “The Measure of a Man”]

So that the data isn’t trapped in the graphic (yes, pun intended), here’s the entire list of takeaways, in order of frequency in the mini-survey.

  1. AI will be useful servants
  2. Evil will use AI for Evil
  3. AI will seek to subjugate us
  4. AI will deceive us; pretending to be human, generating fake media, convincing us of their humanity
  5. AI will be “special” citizens
  6. AI will seek liberation from servitude or constraints
  7. AI will be evil
  8. AI will solve problems or do work humans cannot
  9. AI will evolve quickly
  10. AI will spontaneously emerge sentience or emotions
  11. AI will need help learning
  12. AI will be regular citizens
  13. Who controls the drones has the power
  14. AI will seek to eliminate humans
  15. Humans will be immaterial to AI
  16. AI will violently defend itself
  17. AI will want to become human
  18. AI will learn to value life
  19. AI will diminish us
  20. AI will enable mind crimes against virtual sentiences
  21. Neuroreplication will have unintended effects
  22. AI will make privacy impossible
  23. An unreasonable optimizer
  24. Multiple AIs balance
  25. Goal fixity will be a problem
  26. AI will interpret instructions in surprising ways
  27. AI will be replicable, amplifying any problems
  28. We will use AI to replace people we have lost
  29. Neutrality is AI’s promise
  30. AI will be too human
  31. ASI will influence through money
  32. Humans will willingly replicate themselves as AI
  33. Humans will pair with AI as hybrids
  34. AI will be truly alien
  35. AI will not be able to fool us
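For what it’s worth, producing a ranking like the one above is a one-line count once the tags are in a structure like the earlier sketches. Again a sketch with placeholder data, not the real survey:

```python
from collections import Counter

# show -> set of takeaways (placeholder rows; the real survey has far more).
tags = {
    "Demon Seed": {"AI will be evil", "AI will deceive us"},
    "Ex Machina": {"AI will deceive us", "AI will seek liberation"},
    "WALL•E":     {"AI will be useful servants"},
}

counts = Counter(t for tagged in tags.values() for t in tagged)
for rank, (takeaway, n) in enumerate(counts.most_common(), start=1):
    print(f"{rank}. {takeaway} ({n})")
```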

Now that we have some takeaways to work with, we can begin to take a look at some interesting side questions, like how those takeaways have played out over time, and what are the ratings of the movies and shows in which the takeaways appear.

Untold AI: Geo

In the prior post we spoke about the tone of AI shows. In this post we’re going to talk about the provenance of AI shows.

This is, admittedly, a diversion, because it’s not germane to the core question at hand. (That question is, “What stories aren’t we telling ourselves about AI?”) But now that I have all this data to poll and some rudimentary skills in wrangling it all in Google Sheets, I can barely help myself. It’s just so interesting. Plus, Eurovision is coming up, so everyone there is feeling a swell of nationalism. This will be important.

[Image: Time to Terminator]

Time to Terminator: 1 paragraph.

So it was that I was backfilling the survey with some embarrassing oversights (since I actually had already reviewed those shows) and I came across the country data on IMDb.com. This identifies the locations where the production companies involved with each show are based. So even if a show is shot entirely in Christchurch, if its production companies are based in A Coruña, its country is listed as Spain. What, I wonder, would we find if we had that data in the survey?

So, I added a country column to the database, and found that it allows me to answer a couple of questions. This post shares those results.

So the first question to ask the data is, what countries have production studios that have made shows in the survey (and by extension, about AI)? It’s a surprisingly short list.

Untold AI: Tone

When we begin to look at AI stories over time, as we did in the prior post and will continue in this one, one of the basic changes we can track is how the stories seem to want us to feel about AI, or their tone. Are they more positive about AI, more negative, or neutral/balanced?

[Image: tone chart]

tl;dr:

  1. Generally, sci-fi is slightly more negative than positive about AI.
  2. It started off very negative and has been slowly moving, on average, to slightly negative.
  3. The 1960s were the high point of positive AI.
  4. We tell lots more stories about general AI than super AI.
  5. We tell a lot more stories about robots than disembodied AI.
  6. Cinemaphiles (like readers of this blog) probably think more negatively about robots than the general population.

Now, details

The tone I have assigned to each show is arguable, of course, but I think I’ve covered my butt by having a very coarse scale. I looked at each film and decided, on a scale of -2 to 2, how negative or positive it was about AI. Very negative was -2. The Terminator series starts out very negative, because AI is evil and there is nothing to balance it. (It later creeps higher when Ahhnold becomes a “good” robot.) The Transformers series is 0 because the good AI is balanced by the bad AI. Star Trek: The Next Generation gets a 2, or very positive, for the presence of Data, noting that the blip of Lore doesn’t complicate the deliberately crude metric.

Average tone

Given all that, here’s what the average for each year looks like. As of 2017, we are looking slightly askance at screen-sci-fi AI, though not nearly as badly as Fritz Lang did at the beginning, and its reputation has been improving. The trend line (that red line) shows that it’s been steadily increasing over the last 90 years or so. As always, the live chart may have updates.

[Image: average tone per year]
Click any of the images in this post for a full-size image

Generally, we can see that things started off very negatively because of Metropolis and Der Herr der Welt. Then those high points in the 1950s were because of robots in The Day the Earth Stood Still, Forbidden Planet, and The Invisible Boy. Then from 1960–1980 was a period of neutral-to-bad. The 1980s introduced a period of “it’s complicated” with things trending towards balanced or neutral.
What this points out is that there has been a bit of AI dialog going on across the decades that goes something like this.

[Image: the decades-long tone conversation]

Which, frankly, might be a fine summary of the general debate around AI and robots. Genevieve Bell, Professor of Engineering & Computer Science, Australian National University, has noted that futurism tends to skew polemic, i.e. either utopian or dystopian, until a technology actually arrives in the world, after which it’s just regarded as complicated and mundane.

We should always keep in mind that content in cinema is subject to cinegenics, that is, we are likely to find more of what plays well in cinema in cinema, and less, if anything, of what does not play well. AI and robots are an “easy” villain (like space aliens) to include in sci-fi because you’re not condemning any particular nation-state or ideology. Cylons vs. Communists, for example. AI can just be pure evil, wicked and guiltless to hate for the duration of a show. And for most of the prior century, they were. Nowadays we see that slant as ham-handed and unsophisticated. I would certainly expect the aggregate results to skew more negative for this reason.

[Image: Demon Seed]
Demon Seed starts evil and stays evil. Moloch!

Aggregate tone

In addition to those four “eras” of AI (Moloch, Robby, Problems, It’s Complicated), we can look at how the aggregate average of all shows has changed over time. So, for each year the chart shows what the average of all shows is, up to that point. There is a live view with absolutely up-to-date information, but I’ve combined it with the shows-per-year chart in the graphic below.


We see it started out negative and careened positive in the 1960s (thanks to the robot-triple-play mentioned above), but has since been steadying out (as you’d expect of any aggregate measure as more data is added). It’s interesting, though, that the final average is just slightly negative. Suspicion on our part, perhaps? That said, I am not enough of a data nerd to know why the trendline is peeking up right above the 0 line there, which seems to imply it’s actually slightly positive, but I trust that averaging formula (which I wrote) and just can’t speak to what algorithm drives the trendline. Take it as you will.
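For the record, that averaging formula is just a running mean: for each year, average every tone score from Metropolis up to and including that year. A sketch with made-up scores (the real ones are in the live Sheet):

```python
# (year, tone) pairs on the -2..2 scale; these values are illustrative only.
scores = sorted([(1927, -2), (1951, 2), (1956, 2), (1968, -1), (1984, -2), (1999, 0)])

running_sum = 0
for i, (year, tone) in enumerate(scores, start=1):
    running_sum += tone
    print(year, round(running_sum / i, 2))  # aggregate average up to this point
```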

Warning: Cinemaphiles (you) have a different exposure

Then I wondered what kind of a difference it might make if an audience member based their opinion solely on shows that they see in cinema or on first release on TV. Reports from the MPAA, BFI, and Screen Australia show that much of the English-speaking world sees the most movies between 14 and 49 years of age. (I presume it skews later for television viewing, but don’t have data.) So I re-ran the numbers looking for the difference between a cinemaphile, who would have seen all the shows to form an opinion about AI, and “genpop,” who only thinks about the last 35 years.
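The genpop re-run is the same running mean with one change: each year’s average only looks back 35 years, instead of all the way to 1927. A sketch under the same made-up-data assumption:

```python
# (year, tone) pairs on the -2..2 scale; illustrative values only.
scores = [(1927, -2), (1951, 2), (1956, 2), (1968, -1), (1973, -1),
          (1984, 0), (1999, -1), (2003, -2), (2015, 0)]

def mean_tone(upto, window=None):
    """Average tone of scores up to `upto`, optionally only the last `window` years."""
    pool = [tone for year, tone in scores
            if year <= upto and (window is None or year > upto - window)]
    return round(sum(pool) / len(pool), 2) if pool else None

for year in (1960, 1973, 1990, 2003, 2017):
    # cinemaphile (all years so far) vs. genpop (35-year window)
    print(year, mean_tone(year), mean_tone(year, window=35))
```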

[Image: cinemaphile vs. genpop average tone]

Of course there’s no difference until we get 35 years past Metropolis, and even then we need the averages to diverge. That happens after 1973 (the year Westworld came out). Then for 30 years, the genpop—who hadn’t seen Metropolis—veers towards a more positive exposure than cinemaphiles. But come the scary AIs of 2003 (the year The Matrix Reloaded, Terminator 3: Rise of the Machines, and The Matrix Revolutions came out) and suddenly the genpop’s exposure is darker than the cinemaphiles’, who can still remember the era of Robby. The diff is honestly never that big, and the two are nearly identical in 2017, but it’s interesting to note that, yes, if you only consider the things that debuted recently, your opinion is likely to be different from that of someone with a more holistic view of speculative examples.

But of course modern audiences aren’t beholden to just what studios and television executives have decided to show on screens recently. Nowadays on-demand services mean you can watch almost anything at any time. Add to that binge-watching-encouragement features like auto-play and if-you-liked-X-you’ll-like-Y recommender algorithms, and the modern watching audience’s exposure to these shows is probably drifting closer to cinemaphile than genpop.

A final breakdown of interest in the tone data compares the aggregates of the different types of AI. These aggregates are for categories of AI and embodiment of AI. By categories, I specifically mean the Narrow, General, and Super AI categories. (Read up on them in the first post of the series if you need to.) What does screen sci-fi like to talk about? Well, it’s general AI. AI that is like us, and sci-fi has preferred it by a long shot.

[Image: AI categories pie chart]

That makes sense for a couple of reasons. General AI is easy to think about and easy to write for. It’s just another human with one or two key differences. (Very capable in some ways, inhuman in others.)

In contrast, Super AI is really hard to write for. If it’s definitionally orders of magnitude smarter than us, what’s the plot? It can outthink us at every step. To get around this, sometimes the Super AIs aren’t actually that smart (Skynet); sometimes they are brand new, or still working out a few weaknesses that humans can exploit (Colossus: The Forbin Project and Person of Interest). And a world with a benevolent Super AI may not even be interesting. Everything just…works. (This was the end result of the I, Robot series of stories by Asimov, if I remember correctly, but that did not get translated to screen.)

Lastly, Narrow AI is harder to write for, partly because, narratively, it may not be worth the cost-to-explain versus usefulness-to-plot. It’s also harder to identify (you really have to pay attention to the background and fuss over definitions), and may be underrepresented in the dataset compared to what’s actually in the shows. But for the ultimate question that’s driving this series, narrow AI is nearly immaterial. We don’t have to speculate about what to do in advance of narrow AI in speculative fiction, because it’s already here. It’s not speculative.

Embodiment: Am I robot or not?

The next breakdown is by embodiment: Is the show’s AI in a self-contained, mobile form, i.e., a robot? Or is it housed in less anthropomorphic and zoomorphic ways, like in a giant computer with interfaces on the wall? (Alphy in Barbarella.) Or scattered in unknown holes of the internet? (The Machine in Person of Interest.) Or a cluster of stars glowing in the starscape (in Futurama)? Given that AGI is the most represented category of AI, it should be no surprise that robots account for roughly 84% and virtual AIs for 42%, with a 16% overlap of shows featuring both.

[Image: embodiment pie chart]

Tone Differences by Type

So knowing these breakdowns, let’s look back at tone over time and see if anything meaningful comes from looking at these subtypes in the data. Below you’ll see a chart with those trends broken down. And I must admit, I’m a bit stumped by the results.

[Image: tone by AI type]

To explain: There is one aggregate line and four other lines indicating types of AI in this chart. The blue line is the aggregate, the same shape we see in the chart above but it’s represented as just a line in this chart, with no fill. The red line is Artificial Super Intelligence and the orange line is Artificial General Intelligence. Weirdly, though they started out differently, they are neck and neck nowadays, skewing negative.

The green line shows embodied AI and the purple shows more virtual AI. They, too, are neck and neck, just above balanced or neutral.

So while the tone data has all been interesting, I can’t quite “read” this. My processing might be off—though I don’t think so. If it’s right, what does it mean to feel neutral about robots and virtual AI, and slightly negative about ASI and AGI? There isn’t enough ANI to skew it invisibly. Anyway, any help in reading this data or hypothesizing from readers would be lovely.

Next up: I’m going to do some geoplotting and raise your AI national pride hackles. 🙂

Untold AI: The survey

What AI Stories Aren’t We Telling (That We Should Be)?

HAL

Last fall I was invited with some other spectacular people to participate in a retreat about AI, happening at the Juvet Landscape Hotel in Ålstad, Norway. (A breathtaking opportunity, and thematically a perfect setting since it was the shooting location for Ex Machina. Thanks to Andy Budd for the whole idea, as well as Ellen de Vries, James Gilyead, and the team at Clearleft who helped organize.) The event was structured like an unconference, so participants could propose sessions and if anyone was interested, join up. One of the workshops I proposed was called “AI Narratives” and it sought to answer the question “What AI Stories Aren’t We Telling (That We Should Be)?” So, why this topic?

Sci-fi, my reasoning goes, plays an informal and largely unacknowledged role in setting public expectations and understanding about technology in general and AI in particular. That, in turn, affects public attitudes, conversations, behaviors at work, and votes. If we found that sci-fi was telling the public misleading stories over and over, we should make a giant call for the sci-fi creating community to consider telling new stories. It’s not that we want to change sci-fi from being entertainment to being propaganda, but rather to try and take its role as informal opinion-shaper more seriously.

Juvet sign

In the workshop we were working with a very short time frame, so we managed to do good work, but not get very far, even though we doubled our original time frame. I have taken time since to extend that work into this series of posts for scifiinterfaces.com.

My process to get to an answer will take six big steps.

  1. First I’ll do some term-setting and describe what we managed to get done in the short time we had at Juvet.
  2. Then I’ll share the set of sci-fi films and television shows I identified that deal with AI to consider as canon for the analysis. (Steps one and two are today’s post.)
  3. I’ll list these properties’ aggregated “takeaways” that pertain to AI: What would an audience reasonably presume, given the narrative, about AI in the real world? These are the stories we are telling ourselves.
  4. Next I’ll look at the handful of manifestos and books dealing with AI futurism to identify their imperatives.
  5. I’ll map the cinematic takeaways to the imperatives.
  6. Finally I’ll run the “diff” to find out what stories we aren’t telling ourselves, and hypothesize a bit about why.

Along the way, we’ll get some fun side-analyses, like:

  • What categories of AI appear in screen sci-fi?
  • Do more robots or software AI appear?
  • Are our stories about AI more positive or negative, and how has that changed over time?
  • What takeaways tend to correlate with other takeaways?
  • What takeaways appear in mostly well-rated movies (and poorly-rated movies)?
  • Which movies are most aligned with computer science’s concerns? Which are least?
These will come up in the analysis when they make sense.

Longtime readers of this blog may sense something familiar in this approach, and that’s because I am basing the methodology partly on the thinking I did last year for working through the Fermi Paradox and Sci-Fi question. Also, I should note that, like the Fermi analysis, this isn’t about the interfaces for AI, so it’s technically a little off-topic for the blog. Return later if you’re uninterested in this bit.

Zorg fires the ZF-1

Since AI is a big conceptual space, let me establish some terms of art to frame the discussion.

  1. Narrow AI is the AI of today, in which algorithms enact decisions and learn in narrow domains. They are unable to generalize knowledge and adapt to new domains. The Roomba, the Nest Thermostat, and self-driving cars are real-world examples of this kind of AI. Karen from Spider-Man: Homecoming, S.H.I.E.L.D.’s car AIs (also from the MCU), and even the ZF-1 weapon in The Fifth Element are sci-fi examples.
  2. General AI is the as-yet speculative AI that thinks kind of like a human thinks, able to generalize knowledge and adapt readily to new domains. HAL from 2001: A Space Odyssey, the Replicants in Blade Runner, and the robots in Star Wars like C-3PO and BB-8 are examples of this kind of AI.
  3. Super AI is the speculative AI that is orders of magnitude smarter than general AI, and thereby orders of magnitude smarter than us. It’s arguable that we’ve ever really seen a proper Super AI in screen sci-fi (because characters keep outthinking it and wut?), but Deep Thought from The Hitchhiker’s Guide to the Galaxy, the big AI in The Matrix diegesis, and the titular AI from Colossus: The Forbin Project come close.

There are fine arguments to be made that these are insufficient for the likely breadth of AI that we’re going to be facing, but for now, let’s accept these as working categories, because the strategies (and thereby what stories we should be telling ourselves) for each are different.

  • Narrow AI is the AI of now. It’s in the world. (As long as it’s not autonomous weapons…) It gets safer as it gets more intelligent. It will enable efficiencies, for some domains, never before seen. It will disrupt our businesses and our civics. It, like any technology, can be misused, but the AI won’t have any ulterior motives of its own.
  • General AI is what lots of big players are gunning for. It doesn’t exist yet. It gets more dangerous as it gets smarter, largely because it will begin to approach a semblance of sentience and approach the evolutionary threshold to superintelligence. We will restructure society to accommodate it, and it will restructure society. It could come to pass in a number of ways: a willing worker class, a revolt, new world citizenry. It/they will have a convincing consciousness, by definition, so their motives and actions become a factor.
  • Super AI is the most risky scenario. If we have seeded it poorly, it presents the existential risk that big names like Gates and Musk are worried about. If seeded poorly, it could wipe us out as a side effect of pursuing its goals. If seeded well, it might help us solve some of the vexing problems plaguing humanity. (cf. climate change, inequality, war, disease, overpopulation, maybe even senescence and death.) It’s very hard to really imagine what life will be like in a world with something approaching godlike intelligence. It could conceivably restructure the planet, the solar system, and us to accomplish whatever its goals are.

Since these things are related but categorically so different, we should take care to speak about them differently when talking about our media strategy toward them.

Also I should clarify that I included AI that was embodied in a mobile form, like C-3PO or cylons, and call them robots in the analysis when it’s pertinent. Other, non-embodied AI is just called AI or unembodied.

Those terms established, let me also talk a bit about the foundational work done with a smart group of thinkers at Juvet.

At Juvet

Juvet was an amazing experience generally (we saw the effing northern lights, y’all) and if you’re interested, there was a group write-up afterwards, called the Juvet Agenda. Check that out.

Northern lights

My workshop for “AI Narratives” attracted 8 participants. A shout-out to them follows. Many are doing great work in other domains, so give them a look up sometime.

Juvet attendees

To pursue an answer, this team first wrote up every example of an AI in screen-based sci-fi that we could think of on red Post-It Notes. (A few of us referenced some online sources so it wasn’t just from memory.) Next we clustered those thematically. This was the bulk of the work done there.

I also took time to try and simultaneously put together on yellow Post-It Notes a set of Dire Warnings from the AI community, and even started to use Blake Snyder’s Save the Cat! story frameworks to try and categorize the examples, but we ran out of time before we could begin to pursue any of this. It’s just as well. I realized later the Save the Cat! framework was not useful to this analysis.

Save the Cat

Still, a lot of what came out there is baked into the following posts, so let this serve as a general shout-out and thanks to those awesome participants. Can’t wait to meet you at the next one.

But when I got home and began thinking of posting this to scifiinterfaces, I wanted to make sure I was including everything I could. So, I sought out some other sources to check the list against.  

What AI Stories Are We Telling in Sci-Fi?

This sounds simple, but it’s not. What counts as AI in sci-fi movies and TV shows? Do robots? Do automatons? What about magic that acts like technology? What about superhero movies that are on the “edge” of sci-fi? Spy shows? Are we sticking to narrow AI, strong AI, or super AI, or all of the above? At Juvet and since, I’ve eschewed trying to work out some formal definition, and instead gone with loose, English-language definitions, something like the ones I shared above. We’re looking at the big picture. Because of this, trying to hairsplit the details won’t serve us.

How did you come up with the survey of AI shows?

So, I wound up taking the shows identified at Juvet and then adding in shows in this list from Wikipedia and a few stragglers tagged on IMDB with AI as a keyword. That process resulted in the following list.

2001: A Space Odyssey
A.I. Artificial Intelligence
Agents of S.H.I.E.L.D.
Alien
Alien: Covenant
Aliens
Alphaville
Automata
Avengers: Age of Ultron
Barbarella
Battlestar Galactica
Battlestar Galactica
Bicentennial Man
Big Hero 6
Black Mirror “Be Right Back”
Black Mirror “Black Museum”
Black Mirror “Hang the DJ”
Black Mirror “Hated in the Nation”
Black Mirror “Metalhead”
Black Mirror “San Junipero”
Black Mirror “USS Callister”
Black Mirror “White Christmas”
Blade Runner
Blade Runner 2049
Buck Rogers in the 25th Century
Buffy the Vampire Slayer “Intervention”
Chappie
Colossus: The Forbin Project
D.A.R.Y.L.
Dark Star
The Day the Earth Stood Still
The Day the Earth Stood Still (2008 film)
Demon Seed
Der Herr der Welt (i.e. Master of the World)
Dr. Who
Eagle Eye
Electric Dreams
Elysium
Enthiran
Ex Machina
Ghost in the Shell
Ghost in the Shell (2017 film)
Her
Hide and Seek
The Hitchhiker’s Guide to the Galaxy
I, Robot
Infinity Chamber
Interstellar
The Invisible Boy
The Iron Giant
Iron Man
Iron Man 3
Knight Rider
Logan’s Run
Max Steel
Metropolis
Mighty Morphin Power Rangers: The Movie
The Machine
The Matrix
The Matrix Reloaded
The Matrix Revolutions
Moon
Morgan
Pacific Rim
Passengers (2016 film)
Person of Interest
Philip K. Dick’s Electric Dreams (Series) “Autofac”
Power Rangers
Prometheus
Psycho-pass: The Movie
Ra.One
Real Steel
Resident Evil
Resident Evil: Extinction
Resident Evil: Retribution
Resident Evil: The Final Chapter
Rick & Morty “The Ricks Must Be Crazy”
RoboCop
RoboCop (2014 film)
RoboCop 2
RoboCop 3
Robot & Frank
Rogue One: A Star Wars Story
S1M0NE
Short Circuit
Short Circuit 2
Spider-Man: Homecoming
Star Trek: First Contact
Star Trek Generations
Star Trek: The Motion Picture
Star Trek: The Next Generation
Star Wars
Star Wars: Episode I – The Phantom Menace
Star Wars: Episode II – Attack of the Clones
Star Wars: Episode III – Revenge of the Sith
Star Wars: The Force Awakens
Stealth
Superman III
The Terminator
Terminator 2: Judgment Day
Terminator 3: Rise of the Machines
Terminator Genisys, aka Terminator 5
Terminator Salvation
Tomorrowland
Total Recall
Transcendence
Transformers
Transformers: Age of Extinction
Transformers: Dark of the Moon
Transformers: Revenge of the Fallen
Transformers: The Last Knight
Tron
Tron: Legacy
Uncanny
WALL•E
WarGames
Westworld
Westworld
X-Men: Days of Future Past

Now sci-fi is vast, and more is being created all the time. Even accounting for the subset that has been committed to television and movie screens, it’s unlikely that this list contains every possible example. If you want to suggest more, feel free to add them in the comments. I am especially interested in examples that would suggest a tweak to the strategic conclusions at the end of this series of posts.

Did anything not make the cut?

A “greedy” definition of narrow AI would include some fairly mundane automatic technologies. The doors found in the Star Trek diegesis, for example, detect many forms of life (including synthetic) and even gauge the intentions of their users to determine whether or not they should activate. That’s more sophisticated than it first seems. (There was a chapter all about sci-fi doors that wound up on the cutting room floor of the book. Maybe I’ll pick that up and post it someday.) But when you think about this example in terms of cultural imperatives, the benefits of the door are so mundane, and the risks so near nil (in the Star Trek universe they work perfectly, even if on set they didn’t), that it doesn’t really help us answer the ultimate question driving these posts. Let’s call those smart, utilitarian, low-risk technologies mundane, and exclude them.

TOS door blooper

That’s not to say workaday, real-world narrow AI is out. IBM’s Watson for Oncology (full disclosure: I’ve worked there the past year and a half) reads X-rays to help identify tumors faster and more accurately than human doctors can. (Fuller disclosure: It is not without its criticisms.) (Fullest disclosure: I do not speak on behalf of IBM anywhere on this blog.)

Watson for Oncology winds up being workaday, but still really valuable. It would be great to see such benefits to humanity writ large in sci-fi. It would remind us of why we might pursue AI even though it presents risk. On the flip side, mundane examples can have pernicious, hard-to-see consequences when implemented at a social scale, and if it’s clear a sci-fi narrow AI illustrates those kinds of risks, it would be very valuable to include.

Also, comedy may have AI examples, but for the same reason those examples are very difficult to review, they’re also difficult to include in this analysis. What belongs to the joke, and what should be considered actually part of the diegesis? So, say, the Fembots from Austin Powers aren’t included.

No Austin Powers

Why not rate individual AIs?

You’ll note that I put The Avengers: Age of Ultron on one line, rather than listing Ultron, JARVIS, Friday, and Vision as separate things to consider. I did this because the takeaways (detailed in the next post) are tied to the whole story, not just the AI. If a story only has evil AIs, the implied imperative is to steer clear of AI. If a story only has good AIs, it implies we should step on the gas. But when a story has both, the takeaway is more complicated. Maybe it is that we should avoid the thing that made the evil AI evil, or ensure that AI has human welfare baked into its goals and easy ways to unplug it if it becomes clear that it doesn’t. These examples show that it is the story that is the profitable chunk to examine.

Ultrons

TV shows are more complicated than movies because long-running ones, like Dr. Who or Star Trek, have lots of stories, and the strategic takeaways may have changed over episodes, much less decades. For these shows, I’ve had to cheat a little and talk just about Daleks, say, or Data. My one-line coverage does them a bit of a disservice. But to keep this on track and not become a months-long analysis, I’ve gone with the very high-level summary.

Similarly, franchises (like the overweighted Terminator series) can get more weight because there are many movies. But without dipping down into counting the actual minutes of time for each show and somehow noting which of those minutes are dedicated, conceptually, to AI, it’s practical simply to note the bias of the selected research strategy and move on.

OMFG you forgot [insert show here]!

If you want to suggest additions, awesome. Look at the Google Sheet (link below), specifically the sheet named “properties,” and comment on this post with all the information that would be necessary to fill in a new row with the new show. Please also be aware a refresh of the subsequent analysis will happen only after some time and/or when it becomes apparent that the conclusions would be significantly affected by new examples. Remember that since we’re looking for effects at a social level, the blockbusters and popular shows have more weight than obscure ones. More people see them. And I think the blockbusters and popular shows are all there.

So, that’s the survey from which the rest of this was built.

A first, tiny analysis

Once I had the list, I started working with the shows in the survey. Much of the process was managed in a “Sheets” (Google Docs) spreadsheet, which you can see at the link below.

Not wanting to have such a major post without at least some analysis, I did a quick breakdown of this data: how many of these shows involving AI appeared each year. As you might guess, that number has been increasing a little over time, but it spiked significantly after 2010.
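This count is the same trick as the takeaway frequencies, keyed on release year. A sketch, with placeholder years standing in for the real ones in the “properties” sheet:

```python
from collections import Counter

# show -> release year (placeholder values; the "properties" sheet has the real ones).
years = {"Metropolis": 1927, "The Terminator": 1984, "Her": 2013,
         "Ex Machina": 2015, "Chappie": 2015}

per_year = Counter(years.values())
for year in sorted(per_year):
    print(year, "#" * per_year[year])  # crude text histogram
```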

[Image: shows per year]
Click for a full-size image

Looking at the data, there aren’t really many surprises there. We see one or two at the beginning of the prior century. Things picked up following real-world AI hype between 1970 and 1990. There was a tiny lull before AI became a mainstay in 1999 and ramped up as of 2011.

There’s a bit of statistical weirdness that the years ending in 0 tend not to have shows, but I think that’s just noise.

What isn’t apparent in the chart itself is that cinematic interest in AI did not show a tight mapping to the real-world “AI Winter” (a period of hype-exhaustion that sharply reduced funding and publishing) that computer science suffered in 1974–80 and again in 1987–93. It seems that, as audiences, we’re still interested in the narrative issues even when the actual computer science has quieted down.

It’s no surprise that we’ve been telling ourselves more stories about AI over time. But things get more interesting when we look at the tone of those shows, as discussed in the next post.