Untold AI: Takeaway ratings

This quickie goes out to writers, directors, and producers. On a lark I decided to run an analysis of AI show takeaways by rating. To do this, I matched each show to its Tomatometer rating from rottentomatoes.com. Then I computed the average rating of the properties tagged with each takeaway, and ranked the results.

V'ger

It knows only that it needs, Commander. But, like so many of us, it does not know what.

For instance, looking at the takeaway “AI will spontaneously emerge sentience or emotions,” we find the following shows and their ratings.

  • Star Trek: The Motion Picture, 44%
  • Superman III, 26%
  • Hide and Seek, none
  • Electric Dreams, 47%
  • Short Circuit, 57%
  • Short Circuit 2, 48%
  • Bicentennial Man, 36%
  • Stealth, 13%
  • Terminator: Salvation, 33%
  • Tron: Legacy, 51%
  • Enthiran, none
  • Avengers: Age of Ultron, 75%

Ultrons

I’ve come to save the world! But, also…yeah.

I dismissed the shows that had no rating, rather than counting them as zero. The average for this takeaway, then, is 43%. (And it can thank the MCU for doing the heavy lifting on this one.) There are of course data caveats, like the fact that Black Mirror gets a single Tomatometer rating (and a quite high one) rather than one per episode, but I never claimed this was clean science. Continue reading
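That averaging step can be sketched in a few lines of Python. The dictionary below is a hypothetical stand-in for the survey spreadsheet, using the ratings listed above, with None marking the shows that have no Tomatometer rating.

```python
# Average Tomatometer rating for the takeaway "AI will spontaneously
# emerge sentience or emotions," dismissing unrated shows rather than
# counting them as zero.
ratings = {
    "Star Trek: The Motion Picture": 44,
    "Superman III": 26,
    "Hide and Seek": None,
    "Electric Dreams": 47,
    "Short Circuit": 57,
    "Short Circuit 2": 48,
    "Bicentennial Man": 36,
    "Stealth": 13,
    "Terminator: Salvation": 33,
    "Tron: Legacy": 51,
    "Enthiran": None,
    "Avengers: Age of Ultron": 75,
}

rated = [r for r in ratings.values() if r is not None]  # drop the "none" rows
average = sum(rated) / len(rated)
print(round(average))  # → 43
```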

Untold AI: Takeaway trends

So as interesting as the big donut of takeaways is, it's just a snapshot of everything, all at once. And of course neither people nor cinema play out that way. As with the tone of shows about AI, we see a few different things when we look at individual takeaways over time.

[Image: time00_all.png — the takeaway trend charts]

So you understand what you're seeing: these charts cover the top 7 takeaways from sci-fi AI, as described in the takeaways post. The color of each chart corresponds to its takeaway in the big donut diagram.

[Image: the big donut diagram, for color reference]

Compare freely.

Each chart shows, for each year between Metropolis in 1927 and the many films of 2017, what percentage of shows contained that takeaway. The increasing frequency of sci-fi has some effect on the charts. Up until 1977 there was at most one show per year, so it’s more likely during that early period to see any of the charts max out at 100%. And from 2007 until the time of publication, there have been multiple shows each year, so you would expect to see much lower peaks on the chart as many shows differentiate themselves from their competition, rather than cluster around similar themes. In between those dates it’s a bit of a crapshoot. Continue reading
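The per-year math behind each chart can be sketched as follows: for every year, count that year's shows, count how many carry the takeaway, and divide. The show list and tags here are hypothetical stand-ins for the survey's real dataset.

```python
from collections import defaultdict

# Hypothetical sample of the survey: each show has a year and a set of takeaways.
shows = [
    {"title": "Metropolis", "year": 1927, "takeaways": {"AI will be evil"}},
    {"title": "Colossus: The Forbin Project", "year": 1970,
     "takeaways": {"Evil will use AI for Evil"}},
    {"title": "Demon Seed", "year": 1977, "takeaways": {"AI will be evil"}},
]

def takeaway_percent_by_year(shows, takeaway):
    totals = defaultdict(int)  # shows released each year
    tagged = defaultdict(int)  # of those, shows carrying this takeaway
    for show in shows:
        totals[show["year"]] += 1
        if takeaway in show["takeaways"]:
            tagged[show["year"]] += 1
    return {year: 100 * tagged[year] / totals[year] for year in sorted(totals)}

pct = takeaway_percent_by_year(shows, "AI will be evil")
print(pct)  # → {1927: 100.0, 1970: 0.0, 1977: 100.0}
```

With only one show per year, as in the early decades of the survey, each year's value is necessarily 0% or 100%, which is why the early parts of the charts max out so easily.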

Untold AI: Correlations

Looking at the many-to-many relationships of those takeaways, I wondered if some of them appeared together more commonly than others. For instance, do we frequently tell “AI will be inherently evil” and “AI will fool us with fake media or pretending to be human” together? I’m at the upper boundary of my statistical analysis skills here (and the sample size is, admittedly, small), but I ran some Pearson functions across the set for all two-part combinations. The results look like this.

[Image: takeaway_correlations — the grid of correlation coefficients]

What’s a Pearson function? It helps you find out how often things appear together in a set. For instance, if you wanted to know which letters in the English alphabet appear together in words most frequently, you could run a Pearson function against all the words in the dictionary, starting with AB, then looking for AC, then for AD, continuing all the way to YZ. Each pair would get a correlation coefficient as a result. The highest number would tell you that if you find the first letter in the pair then the second letter is very likely to be there, too. (Q & U, if you’re wondering, according to this.) The lowest number would tell you letters that appear very uncommonly together. (Q & W. More than you think, but fewer than any other pair.)

[Image: a pasqueflower]

A pasqueflower.

In the screenshot above, you can see I put these in a Google Sheet and formatted the cells on a gradient from solid black to solid yellow, according to their coefficients. The idea is that darker yellows signal a high degree of correlation, lowering the contrast with the black text to “hide” the takeaways that are frequently paired, while letting the ones that aren’t frequently paired shine through as yellow.

The takeaways make up both the Y and X axes, so the descending line of black is where each takeaway is compared to itself, and by definition, those correlations are perfect. Every time Evil will use AI for Evil appears, you can totally count on Evil will use AI for Evil also appearing in those same stories. Hopefully that’s no surprise. Look at the rest of the cells and you can see there are a few dark spots and a lot of yellow.

If you want to see the exact ranked list, see the live doc, in a sheet named “correlations_list,” but since there are 630 combinations, I won’t paste the actual values or a screen grab of the whole thing; it wouldn’t make any sense. The three highest and four lowest pairings are discussed below. Continue reading

Untold AI: Takeaways

In the first post I shared how I built a set of screen sci-fi shows that deal with AI (and I’ve already gotten some nice recommendations on other ones to include in a later update). The second post talked about the tone of those films and the third discussed their provenance.

Returning to our central question, to determine whether the stories we tell are the ones we should be telling, we need to push the survey up one level of abstraction.

With the minor exceptions of robots and remakes, sci-fi makers try their hardest to make sure their shows are unique and differentiated. That makes comparing apples to apples difficult. So the next step is to look at the strategic imperatives that are implied in each show. “Strategic imperatives” is a mouthful, so let’s call them “takeaways.” (The other alternative, “morals,” has way too much baggage.) To get to takeaways for this survey, I tried to ask: what does this show imply that we should do, right now, about AI?

Now, this is a fraught enterprise. Even if we could seance the spirit of Dennis Feltham Jones and press him for a takeaway, he could back up, shake his palms at us, and say something like, “Oh, no, I’m not saying all super AI is fascist, just Colossus, here, is.” Stories can be just about what happened that one time, implying nothing about all instances or even the most likely instances. It can just be stuff that happens.

[Image: CFP.jpg — Colossus: The Forbin Project]

Pain-of-death, authoritarian stuff.

But true to the New Criticism stance of this blog, I believe the author’s intent, when it’s even available, is questionable and only kind-of interesting. When thinking about the effects of sci-fi, we need to turn to the audience. If the story doesn’t make clear that this AI is unusual (through a character saying so, or through other AIs in the diegesis behaving differently), audiences may rightly infer that the AI is representative of its class. Demon Seed weakly implies that all AIs are just going to be evil and do horrible things to people, and get out, humanity, while you can. Which is dumb, but let’s acknowledge that this one show says something like “AI will be evil.”

Continue reading