Untold AI: Tone

When we begin to look at AI stories over time, as we did in the prior post and will continue in this one, one of the basic changes we can track is how the stories seem to want us to feel about AI, or their tone. Are they more positive about AI, more negative, or neutral/balanced?

[Figure: tone]

tl;dr:

  1. Generally, sci-fi is slightly more negative than positive about AI.
  2. It started off very negative and has been slowly moving, on average, to slightly negative.
  3. The 1960s were the high point of positive AI.
  4. We tell lots more stories about general AI than super AI.
  5. We tell a lot more stories about robots than disembodied AI.
  6. Cinemaphiles (like readers of this blog) probably think more negatively about robots than the general population.

Now, details

The tone I have assigned to each show is arguable, of course, but I think I’ve covered my butt by using a very coarse scale. I looked at each film and decided, on a scale of -2 to 2, how positive or negative it was about AI. Very negative was -2. The Terminator series starts out very negative, because AI is evil and there is nothing to balance it. (It later creeps higher when Ahhnold becomes a “good” robot.) The Transformers series is 0 because the good AI is balanced by the bad AI. Star Trek: The Next Generation gets a 2, or very positive, for the presence of Data, noting that the blip of Lore doesn’t complicate the deliberately crude metric.
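In code terms, the scoring boils down to a list of shows, each with a year and a tone score from -2 to 2, averaged by year. Here is a minimal Python sketch with a handful of illustrative entries (not the actual dataset):

```python
# A minimal sketch of the -2..2 tone scale, with a few illustrative entries
# (not the actual survey data).
from collections import defaultdict

shows = [
    {"title": "Metropolis", "year": 1927, "tone": -2},
    {"title": "The Day the Earth Stood Still", "year": 1951, "tone": 2},
    {"title": "The Terminator", "year": 1984, "tone": -2},
    {"title": "Star Trek: The Next Generation", "year": 1987, "tone": 2},
    {"title": "Transformers", "year": 2007, "tone": 0},
]

# Average tone of the shows released in each year.
by_year = defaultdict(list)
for show in shows:
    by_year[show["year"]].append(show["tone"])

average_tone = {year: sum(t) / len(t) for year, t in sorted(by_year.items())}
print(average_tone)
```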

Average tone

Given all that, here’s what the average for each year looks like. As of 2017, we are looking slightly askance at screen-sci-fi AI, though not nearly as harshly as Fritz Lang did at the beginning, and its reputation has been improving. The trend line (the red line) shows that it’s been steadily increasing over the last 90 years or so. As always, the live chart may have updates.

[Figure: average tone by year]

Generally, we can see that things started off very negatively because of Metropolis and Der Herr der Welt. The high points in the 1950s came from the robots in The Day the Earth Stood Still, Forbidden Planet, and The Invisible Boy. Then 1960–1980 was a period of neutral-to-bad. The 1980s introduced a period of “it’s complicated,” with things trending towards balanced or neutral.

What this points out is that there has been a bit of an AI dialog going on across the decades that goes something like this.

[Figure: the AI tone conversation across the decades]

Which, frankly, might be a fine summary of the general debate around AI and robots. Genevieve Bell, Professor of Engineering & Computer Science at the Australian National University, has noted that futurism tends to skew polemic, i.e. either utopian or dystopian, until a technology actually arrives in the world, after which it’s just regarded as complicated and mundane.

We should always keep in mind that content in cinema is subject to cinegenics: that is, we are likely to find more of what plays well on screen, and less, if anything, of what does not. AI and robots are an “easy” villain to include in sci-fi (like space aliens) because you’re not condemning any particular nation-state or ideology: Cylons instead of Communists, for example. AI can just be pure evil, wicked and guiltless to hate for the duration of a show. And for most of the prior century, they were. Nowadays we see that slant as ham-handed and unsophisticated. I would certainly expect the aggregate results to skew more negative for this reason.

[Figure: still from Demon Seed]
Demon Seed starts evil and stays evil. Moloch!

Aggregate tone

In addition to those four “eras” of AI (Moloch, Robby, Problems, It’s Complicated), we can look at how the aggregate average of all shows has changed over time. So, for each year, the chart shows the average of all shows up to that point. There is a live view with absolutely up-to-date information, but I’ve combined it with the shows-per-year chart in the graphic below.


We see it started out negative and careened positive in the 1960s (thanks to the robot triple-play mentioned above), but has since been steadying out (as you’d expect of any aggregate measure as more data is added). It’s interesting that the final average is just slightly negative. Suspicion on our part, perhaps? That said, I am not enough of a data nerd to know why the trendline is peeking up just above the 0 line there, which seems to imply it’s actually slightly positive. I trust the averaging formula (which I wrote) but can’t speak to what algorithm drives the trendline. Take it as you will.
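For the curious, the aggregate line is just a running average: for each year, average every show released up to and including that year. A quick sketch, reusing the illustrative `shows` list from the earlier example:

```python
# Running (cumulative) average of tone: for each year, average every show
# released up to and including that year. Reuses the illustrative `shows`
# list from the earlier sketch.
years = sorted({show["year"] for show in shows})

cumulative_average = {}
for year in years:
    tones = [s["tone"] for s in shows if s["year"] <= year]
    cumulative_average[year] = sum(tones) / len(tones)

print(cumulative_average)
```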

Warning: Cinemaphiles (you) have a different exposure

Then I wondered what kind of a difference it might make if an audience member based their opinion solely on shows that they see in cinema or on first release on TV. Reports from the MPAA, BFI, and Screen Australia show that much of the English-speaking world sees the most movies between 14 and 49 years of age. (I presume it skews later for television viewing, but don’t have data.) So I re-ran the numbers looking for the difference between a cinemaphile, who would have seen all the shows to form an opinion about AI, and “genpop,” who only thinks about the last 35 years.

[Figure: cinemaphile vs. genpop average tone]

Of course there’s no difference until we get more than 35 years past Metropolis, and even then we have to wait for the averages to diverge. That happens after 1973 (the year Westworld came out). Then, for 30 years, the genpop opinion (formed without Metropolis) veers more positive than the cinemaphiles’. But come the scary AIs of 2003 (the year The Matrix Reloaded, Terminator 3: Rise of the Machines, and The Matrix Revolutions came out), and suddenly the genpop’s exposure is darker than the cinemaphiles’, who can still remember the era of Robby. The difference is honestly never that big, and the two are nearly identical in 2017, but it’s interesting to note that, yes, if you only consider the things that debuted recently, your opinion is likely to differ from that of someone with a more holistic view of speculative examples.
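In code terms, the only difference between the two audiences is the window: the cinemaphile average looks back over everything, while the genpop average only looks back 35 years. Another rough sketch on the same illustrative data:

```python
# Cinemaphile vs. "genpop" exposure: the same average, but the genpop
# window only reaches back 35 years from the year in question.
WINDOW = 35

def average_tone_up_to(year, window=None):
    """Average tone of shows released up to `year`, optionally limited to
    the `window` most recent years."""
    earliest = year - window + 1 if window is not None else float("-inf")
    tones = [s["tone"] for s in shows if earliest <= s["year"] <= year]
    return sum(tones) / len(tones) if tones else None

for year in sorted({s["year"] for s in shows}):
    print(year, average_tone_up_to(year), average_tone_up_to(year, WINDOW))
```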

But of course modern audiences aren’t beholden to just what studios and television executives have recently decided to put on screens. Nowadays, on-demand services mean you can watch almost anything at any time. Add to that binge-watching-encouragement features like auto-play and if-you-liked-X-you’ll-like-Y recommender algorithms, and the modern audience’s exposure to these shows is probably drifting closer to the cinemaphile’s than the genpop’s.

A final breakdown of interest in the tone data compares the aggregates for the different types of AI, split two ways: by category of AI and by embodiment of AI. By categories, I specifically mean the Narrow, General, and Super AI categories. (Read up on them in the first post of the series if you need to.) What does screen sci-fi like to talk about? Well, it’s general AI: AI that is like us. Sci-fi has preferred it by a long shot.

[Figure: breakdown of shows by AI category]

That makes sense for a couple of reasons. General AI is easy to think about and easy to write for. It’s just another human with one or two key differences. (Very capable in some ways, inhuman in others.)

In contrast, Super AI is really hard to write for. If it’s definitionally orders of magnitude smarter than us, what’s the plot? It can outthink us at every step. To get around this, sometimes the Super AIs aren’t actually that smart (Skynet); sometimes they are brand new, or still working out a few weaknesses that humans can exploit (Colossus: The Forbin Project and Person of Interest). And a world with a benevolent Super AI may not even be interesting. Everything just…works. (This was the end result of the I, Robot series of stories by Asimov, if I remember, but that did not get translated to the screen.)

Lastly, Narrow AI is harder to write for, partly because, narratively, it may not be worth the cost-to-explain versus usefulness-to-plot. It’s also harder to identify (you really have to pay attention to the background and fuss over definitions), and may be underrepresented in the dataset compared to what’s actually in the shows. But for the ultimate question that’s driving this series, narrow AI is nearly immaterial. We don’t have to speculate about what to do in advance of narrow AI in speculative fiction, because it’s already here. It’s not speculative.

Embodiment: Am I robot or not?

The next breakdown is by embodiment: Is the show’s AI in a self-contained, mobile form, i.e., a robot? Or is it housed in less anthropomorphic or zoomorphic ways, like a giant computer with interfaces on the wall (Alphy in Barbarella), scattered in unknown holes of the internet (The Machine in Person of Interest), or a cluster of stars glowing in the starscape (Futurama)? Given that AGI is the most represented category of AI, it should be no surprise that robots account for roughly 84% and virtual AIs for 42%, with a 16% overlap of shows featuring both.
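One note on those percentages: because a single show can feature both a robot and a disembodied AI, the embodiment tags are not exclusive, so the shares can add up to more than 100%. A toy sketch of how overlapping tags get tallied (the tags below are invented for illustration):

```python
# Embodiment tags are non-exclusive: a show can feature both a robot and a
# virtual AI, so the shares can sum past 100%. Tags below are illustrative.
tagged = [
    {"title": "Forbidden Planet", "robot": True, "virtual": False},
    {"title": "Person of Interest", "robot": False, "virtual": True},
    {"title": "Terminator 3: Rise of the Machines", "robot": True, "virtual": True},
]

total = len(tagged)
robot_share = sum(s["robot"] for s in tagged) / total
virtual_share = sum(s["virtual"] for s in tagged) / total
overlap_share = sum(s["robot"] and s["virtual"] for s in tagged) / total
print(f"robots {robot_share:.0%}, virtual {virtual_share:.0%}, overlap {overlap_share:.0%}")
```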

[Figure: breakdown of shows by AI embodiment]

Tone Differences by Type

So knowing these breakdowns, let’s look back at tone over time and see if anything meaningful comes from looking at these subtypes in the data. Below you’ll see a chart with those trends broken down. And I must admit, I’m a bit stumped by the results.

[Figure: tone over time by AI type]

To explain: there is one aggregate line and four other lines indicating types of AI in this chart. The blue line is the aggregate, the same shape we see in the chart above, but represented here as just a line, with no fill. The red line is Artificial Super Intelligence and the orange line is Artificial General Intelligence. Weirdly, though they started out differently, they are neck and neck nowadays, skewing negative.

The green line shows embodied AI and the purple shows more virtual AI. They, too, are neck and neck, just above balanced or neutral.

So while the tone data has all been interesting, I can’t quite “read” this. My processing might be off, though I don’t think so. If it’s right, what does it mean to feel neutral about robots and virtual AI, and slightly negative about ASI and AGI? There isn’t enough ANI to skew it invisibly. Anyway, any help from readers in reading this data or hypothesizing about it would be lovely.

Next up: I’m going to do some geoplotting and raise your AI national pride hackles. 🙂
