Untold AI

Hey readership. Sorry for the brief radio silence there. Was busy doing some stuff, like getting married. Back now to post some overdue content. But the good news is I’m back with some weighty posts, and in honor of the 50th anniversary of 2001: A Space Odyssey, they have to do with AI, science, and sci-fi.

HAL

So last fall I was invited with some other spectacular people to participate in a retreat about AI, happening at the Juvet Landscape Hotel in Ålstad, Norway. (A breathtaking opportunity, and thematically a perfect setting since it was the shooting location for Ex Machina. Thanks to Andy Budd for the whole idea, as well as Ellen de Vries, James Gilyead, and the team at Clearleft who helped organize.) The event was structured like an unconference, so participants could propose sessions and if anyone was interested, join up. One of the workshops I proposed was called “AI Narratives” and it sought to answer the question “What AI Stories Aren’t We Telling (That We Should Be)?” So, why this topic?

Sci-fi, my reasoning goes, plays an informal and largely unacknowledged role in setting public expectations and understanding about technology in general and AI in particular. That, in turn, affects public attitudes, conversations, behaviors at work, and votes. If we found that sci-fi was telling the public misleading stories over and over, we should make a giant call for the sci-fi creating community to consider telling new stories. It’s not that we want to change sci-fi from being entertainment to being propaganda, but rather to try and take its role as informal opinion-shaper more seriously. Continue reading

Untold AI: Tone

When we begin to look at AI stories over time, as we did in the prior post and will continue in this one, one of the basic changes we can track is how the stories seem to want us to feel about AI, or their tone. Are they more positive about AI, more negative, or neutral/balanced?

tone.png

tl;dr:

  1. Generally, sci-fi is slightly more negative than positive about AI.
  2. It started off very negative and has been slowly moving, on average, to slightly negative.
  3. The 1960s were the high point of positive AI.
  4. We tell lots more stories about general AI than super AI.
  5. We tell a lot more stories about robots than disembodied AI.
  6. Cinephiles (like readers of this blog) probably think more negatively about robots than the general population.

Now, details

The tone I have assigned to each show is arguable, of course, but I think I’ve covered my butt by having a very coarse scale. I looked at each film and rated it on a scale of -2 to 2 for how it treats AI. Very negative was -2. The Terminator series starts out very negative, because AI is evil and there is nothing to balance it. (It later creeps higher when Ahhnold becomes a “good” robot.) The Transformers series is 0 because the good AI is balanced by the bad AI. Star Trek: The Next Generation gets a 2, or very positive, for the presence of Data, noting that the blip of Lore doesn’t complicate the deliberately crude metric.

Average tone

Given all that, here’s what the average for each year looks like. As of 2017, we are looking slightly askance at screen-sci-fi AI, though not nearly as badly as Fritz Lang did at the beginning, and its reputation has been improving. The trend line (that red line) shows that it’s been steadily increasing over the last 90 years or so. As always, the live chart may have updates.
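For the curious, the computation behind that chart is simple enough to sketch. The author did the real work in Google Sheets; this is a Python stand-in with made-up scores (the shows and values below are illustrative, not the survey’s actual data), showing the per-year mean and the least-squares slope of that red trend line.

```python
# Sketch of the averaging behind the tone chart, using invented data.
# Each show gets a tone score from -2 (very negative) to +2 (very
# positive); the chart plots the mean score per year.
from collections import defaultdict

tone_scores = [  # (year, score) -- illustrative stand-ins
    (1927, -2),  # e.g. a very negative early film
    (1951, 2),   # e.g. a very positive 1950s robot film
    (1984, -2),
    (1984, 0),
    (2015, 1),
]

def average_tone_by_year(scores):
    """Group scores by year and return {year: mean score}."""
    by_year = defaultdict(list)
    for year, score in scores:
        by_year[year].append(score)
    return {year: sum(vals) / len(vals) for year, vals in sorted(by_year.items())}

def trend_slope(yearly_means):
    """Ordinary least-squares slope of mean tone over years
    (the 'red line' in the chart)."""
    xs, ys = zip(*yearly_means.items())
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

A positive slope out of `trend_slope` is what “steadily increasing over the last 90 years” means in chart terms.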

Generally, we can see that things started off very negatively because of Metropolis and Der Herr der Welt. Then those high points in the 1950s were because of robots in The Day the Earth Stood Still, Forbidden Planet, and The Invisible Boy. Then from 1960–1980 was a period of neutral-to-bad. The 1980s introduced a period of “it’s complicated” with things trending towards balanced or neutral.

What this points out is that there has been a bit of an AI dialogue going on across the decades that goes something like this.

tone_conversation.png

Continue reading

Untold AI: Geo

In the prior post we spoke about the tone of AI shows. In this post we’re going to talk about the provenance of AI shows.

This is, admittedly, a diversion, because it’s not germane to the core question at hand. (That question is, “What stories aren’t we telling ourselves about AI?”) But now that I have all this data to poll and some rudimentary skills in wrangling it all in Google Sheets, I can barely help myself. It’s just so interesting. Plus, Eurovision is coming up, so everyone there is feeling a swell of nationalism. This will be important.

timetoterminator.png

Time to Terminator: 1 paragraph.

So it was that I was backfilling the survey with some embarrassing oversights (since I actually had already reviewed those shows) and I came across the country data on imdb.com. This identifies the locations where the production companies involved with each show are based. So even if a show is shot entirely in Christchurch, if its production companies are based in A Coruña, its country is listed as Spain. What, I wonder, would we find if we had that data in the survey?

So, I added a country column to the database, and found that it allows me to answer a couple of questions. This post shares those results.
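For what it’s worth, once the column exists the query is trivial. Here’s a minimal Python sketch with invented rows (the real wrangling happened in Google Sheets, and, per imdb.com’s convention noted above, each country reflects where the production companies are based):

```python
# Counting shows per production country -- a distinct-count over the
# new country column. The rows below are invented for illustration.
from collections import Counter

show_countries = [
    ("Metropolis", "Germany"),
    ("The Terminator", "USA"),
    ("Ex Machina", "UK"),
    ("2001: A Space Odyssey", "USA"),
]

# The set of keys answers "which countries appear at all?";
# the counts answer "how many shows per country?"
countries = Counter(country for _, country in show_countries)
```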

So the first question to ask the data is, what countries have production studios that have made shows in the survey (and by extension, about AI)? It’s a surprisingly short list. Continue reading

Untold AI: Takeaways

In the first post I shared how I built a set of screen sci-fi shows that deal with AI (and I’ve already gotten some nice recommendations on other ones to include in a later update). The second post talked about the tone of those films and the third discussed their provenance.

Returning to our central question, to determine whether the stories we tell are the ones we should be telling, we need to push the survey up one level of abstraction.

With the minor exceptions of robots and remakes, sci-fi makers try their hardest to make sure their shows are unique and differentiated. That makes comparing apples to apples difficult. So the next step is to look at the strategic imperatives that are implied in each show. “Strategic imperatives” is a mouthful, so let’s call them “takeaways.” (The other alternative, “morals,” has way too much baggage.) To get to takeaways for this survey, what I tried to ask was: What does this show imply that we should do, right now, about AI?

Now, this is a fraught enterprise. Even if we could seance the spirit of Dennis Feltham Jones and press him for a takeaway, he could back up, shake his palms at us, and say something like, “Oh, no, I’m not saying all super AI is fascist, just Colossus, here, is.” Stories can be just about what happened that one time, implying nothing about all instances or even the most likely instances. It can just be stuff that happens.

CFP.jpg

Pain-of-death, authoritarian stuff.

But true to the New Criticism stance of this blog, I believe the author’s intent, when it’s even available, is questionable and only kind-of interesting. When thinking about the effects of sci-fi, we need to turn to the audience. If it’s not made clear in the story that this AI is unusual (through a character saying so or other AIs in the diegesis behaving differently) audiences may rightly infer that the AI is representative of its class. Demon Seed weakly implies that all AIs are just going to be evil and do horrible things to people, and get out, humanity, while you can. Which is dumb, but let’s acknowledge that this one show says something like “AI will be evil.”

Continue reading

Untold AI: Correlations

Looking at the many-to-many relationships of those takeaways, I wondered if some of them appeared together more commonly than others. For instance, do we tell “AI will be inherently evil” and “AI will fool us with fake media or pretending to be human” together frequently? I’m at the upper boundary of my statistical analysis skills here (and the sample size is, admittedly, small), but I ran some Pearson functions across the set for all two-part combinations. The results look like this.

takeaway_correlations

What’s a Pearson function? It measures how strongly two things tend to appear together in a set. For instance, if you wanted to know which letters in the English alphabet appear together in words most frequently, you could run a Pearson function against all the words in the dictionary, starting with AB, then looking for AC, then for AD, continuing all the way to YZ. Each pair would get a correlation coefficient as a result. The highest number would tell you that if you find the first letter in the pair, then the second letter is very likely to be there, too. (Q & U, if you’re wondering, according to this.) The lowest number would tell you letters that appear very uncommonly together. (Q & W. More than you think, but fewer than any other pair.)
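If you want to try the letter version yourself, here’s a rough Python sketch. For each letter, build a 0/1 vector over a word list (“does this word contain the letter?”), then compute the Pearson coefficient for every two-letter combination. The tiny word list is illustrative; a real run would use a full dictionary.

```python
# Pearson correlation of letter co-occurrence across a word list.
from itertools import combinations
from math import sqrt
from string import ascii_lowercase

words = ["queen", "quick", "quilt", "unique", "apple", "banana", "cat"]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den if den else 0.0  # letters absent everywhere get 0

def letter_correlations(words):
    """Return {(a, b): r} for every two-letter combination, AB through YZ."""
    vectors = {c: [int(c in w) for w in words] for c in ascii_lowercase}
    return {
        (a, b): pearson(vectors[a], vectors[b])
        for a, b in combinations(ascii_lowercase, 2)
    }
```

With this toy list, Q and U co-occur perfectly, so their coefficient comes out at 1.0, which is the same shape of result the blog describes for the full dictionary.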


A pasqueflower.

In the screen shot way above, you can see I put these in a Google Sheet and gave the cells a background gradient from solid black to solid yellow, according to their coefficient. The idea is that darker cells signal a high degree of correlation, lowering the contrast with the black text to “hide” the things that have been frequently paired, while simultaneously letting the things that aren’t frequently paired shine through as yellow.

The takeaways make up both the Y and X axes, so that descending line of black is where a takeaway is compared to itself, and by definition, those correlations are perfect. Every time Evil will use AI for Evil appears, you can totally count on Evil will use AI for Evil also appearing in those same stories. Hopefully that’s no surprise. Look at the rest of the cells and you can see there are a few dark spots and a lot of yellow.

If you want to see the exact ranked list, see the live doc, in a sheet named “correlations_list,” but since there are 630 combinations, I won’t paste the actual values or a screen grab of the whole thing; it wouldn’t make any sense. The three highest and four lowest pairings are discussed below. Continue reading
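Incidentally, 630 is exactly C(36, 2), the number of unordered pairs you can draw from 36 takeaways (the count of 36 is inferred from the 630 figure, not stated outright). A quick Python sketch of generating and ranking those pairs, with placeholder coefficients rather than the survey’s real values:

```python
# Every two-takeaway combination, plus the sort behind the ranked list.
from itertools import combinations

n_takeaways = 36  # inferred: C(36, 2) = 36 * 35 / 2 = 630
pairs = list(combinations(range(n_takeaways), 2))

def ranked(coefficients):
    """Sort a {pair: correlation coefficient} mapping from highest to
    lowest, like the 'correlations_list' sheet."""
    return sorted(coefficients.items(), key=lambda kv: kv[1], reverse=True)
```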

Untold AI: Takeaway trends

So as interesting as the big donut of takeaways is, it is just a snapshot of everything, all at once. And of course neither people nor cinema play out that way. Like the tone of shows about AI, we see a few different things when we look at individual takeaways over time.

time00_all.png

So you understand what you’re seeing: These charts are for the top 7 takeaways from sci-fi AI as described in the takeaways post. The colors of each chart correspond to its takeaway in the big donut diagram.

Screen Shot 2018-04-11 at 12.07.14 AM

Compare freely.

Each chart shows, for each year between Metropolis in 1927 and the many films of 2017, what percentage of shows contained that takeaway. The increasing frequency of sci-fi has some effect on the charts. Up until 1977 there was at most one show per year, so it’s more likely during that early period to see any of the charts max out at 100%. And from 2007 until the time of publication, there have been multiple shows each year, so you would expect to see much lower peaks on the chart as many shows differentiate themselves from their competition, rather than cluster around similar themes. In between those dates it’s a bit of a crapshoot. Continue reading
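The per-chart computation is easy to restate in code. This is a hedged Python sketch of the percentage-per-year calculation described above, with invented shows and tags standing in for the survey’s real many-to-many data:

```python
# For one takeaway: what percent of each year's shows carry it?
from collections import defaultdict

shows = [  # (year, set of takeaways) -- illustrative stand-ins
    (1927, {"ai_will_be_evil"}),
    (1984, {"ai_will_be_evil", "ai_will_want_to_destroy_us"}),
    (1984, {"ai_will_help_us"}),
    (2015, {"ai_will_help_us"}),
]

def takeaway_frequency(shows, takeaway):
    """Return {year: percent of that year's shows containing the takeaway}."""
    per_year = defaultdict(lambda: [0, 0])  # year -> [matching, total]
    for year, tags in shows:
        per_year[year][1] += 1
        if takeaway in tags:
            per_year[year][0] += 1
    return {y: 100 * m / t for y, (m, t) in sorted(per_year.items())}
```

This also shows why the early years max out: with one show per year, the only possible values are 0% and 100%, while years with many shows can land anywhere in between.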

Untold AI: Takeaway ratings

This quickie goes out to writers, directors, and producers. On a lark I decided to run an analysis of AI show takeaways by rating. To do this, I matched the Tomatometer ratings from rottentomatoes.com to the shows. Then I computed the average rating of the properties that were tagged with each takeaway, and ranked the results.

V'ger

It knows only that it needs, Commander. But, like so many of us, it does not know what.

For instance, looking at the takeaway “AI will spontaneously emerge sentience or emotions,” we find the following shows and their ratings.

  • Star Trek: The Motion Picture, 44%
  • Superman III, 26%
  • Hide and Seek, none
  • Electric Dreams, 47%
  • Short Circuit, 57%
  • Short Circuit 2, 48%
  • Bicentennial Man, 36%
  • Stealth, 13%
  • Terminator: Salvation, 33%
  • Tron: Legacy, 51%
  • Enthiran, none
  • Avengers: Age of Ultron, 75%
Ultrons

I’ve come to save the world! But, also…yeah.

I dismissed those shows that had no rating, rather than counting them as zero. The average, then, for this takeaway is 43%. (And it can thank the MCU for doing all the heavy lifting for this one.) There are of course data caveats, like that Black Mirror is given a single tomatometer rating (and one that is quite high) rather than one per episode, but I did not claim this was a clean science. Continue reading
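The drop-the-unrated rule above is worth making concrete, since counting missing ratings as zero would badly drag the average down. A Python sketch using the ratings listed for this takeaway:

```python
# Mean Tomatometer score for one takeaway, ignoring unrated shows
# (None) rather than counting them as zero.
ratings = {
    "Star Trek: The Motion Picture": 44,
    "Superman III": 26,
    "Hide and Seek": None,
    "Electric Dreams": 47,
    "Short Circuit": 57,
    "Short Circuit 2": 48,
    "Bicentennial Man": 36,
    "Stealth": 13,
    "Terminator: Salvation": 33,
    "Tron: Legacy": 51,
    "Enthiran": None,
    "Avengers: Age of Ultron": 75,
}

def average_rating(ratings):
    """Average the rated shows only; unrated entries are excluded."""
    rated = [r for r in ratings.values() if r is not None]
    return sum(rated) / len(rated)
```

For this list, the ten rated shows sum to 430, so the mean works out to 43%.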