Untold AI: Correlations

Looking at the many-to-many relationships of those takeaways, I wondered if some of them appeared together more commonly than others. For instance, do we tell “AI will be inherently evil” and “AI will fool us with fake media or pretending to be human” together frequently? I’m at the upper boundary of my statistical analysis skills here (and the sample size is, admittedly, small), but I ran Pearson functions across the set for all two-part combinations. The results look like this.


What’s a Pearson function? It measures how strongly two variables move together; applied to presence/absence data, it tells you how often two things co-occur relative to chance. For instance, if you wanted to know which letters of the English alphabet appear together in words most frequently, you could run a Pearson function against all the words in the dictionary, starting with AB, then AC, then AD, continuing all the way to YZ. Each pair gets a correlation coefficient as a result. The highest number tells you that where you find the first letter in the pair, the second letter is very likely to be there, too. (Q & U, if you’re wondering, according to this.) The lowest number tells you which letters appear together most uncommonly. (Q & W. More often than you’d think, but less often than any other pair.)
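To make the letter-pair example concrete, here’s a minimal Python sketch. The twelve-word list is a made-up stand-in for a dictionary, and the `pearson` helper is an illustrative assumption, not anything from the post:

```python
from itertools import combinations
from string import ascii_lowercase

# A tiny stand-in word list; the real analysis would use a full dictionary.
words = ["queen", "quick", "quartz", "walrus", "window", "zebra",
         "jazz", "banjo", "cabin", "squid", "quilt", "wombat"]

def pearson(xs, ys):
    """Plain Pearson correlation of two equal-length numeric lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    if vx == 0 or vy == 0:
        return 0.0  # a letter that never (or always) appears has no variance
    return cov / (vx * vy) ** 0.5

# One binary presence vector per letter: does the letter appear in each word?
presence = {c: [1 if c in w else 0 for w in words] for c in ascii_lowercase}

# Correlate every two-letter combination (AB, AC, ... YZ) and rank them.
scores = {(a, b): pearson(presence[a], presence[b])
          for a, b in combinations(ascii_lowercase, 2)}
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
print(ranked[0])  # the most strongly co-occurring pair in this tiny sample
```

Even in this toy sample, Q & U comes out on top and Q & W comes out negative, which matches the dictionary-wide result described above.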


A pasqueflower.

In the screenshot way above, you can see I put these in a Google Sheet and formatted the cells from solid black to solid yellow according to their coefficient. The idea is that darker yellows signal a high degree of correlation: lowering the contrast with the black text “hides” the things that are frequently paired, while letting the things that aren’t frequently paired shine through as yellow.

The takeaways make up both the X and Y axes, so that descending line of black is where a takeaway is compared to itself, and by definition those correlations are perfect. Every time Evil will use AI for Evil appears, you can totally count on Evil will use AI for Evil also appearing in the same story. Hopefully that’s no surprise. Look at the rest of the cells and you can see there are a few dark spots and a lot of yellow.

If you want to see the exact ranked list, see the live doc, in a sheet named “correlations_list.” But since there are 630 combinations, I won’t paste the actual values or a screen grab of the whole thing; it wouldn’t make any sense. The three highest and four lowest pairings are discussed below.
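The 630 figure falls out of simple pair counting: with n takeaways there are n(n − 1)/2 unordered pairs, so 630 corresponds to 36 takeaways. (The 36 is inferred from the count, not stated in the post.)

```python
from math import comb

# 630 unordered pairs implies 36 takeaways, since C(36, 2) = 36 * 35 / 2 = 630.
n = 36  # inferred from the 630 figure, not taken from the post itself
print(comb(n, 2))  # → 630
```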

Untold AI: Takeaways

In the first post I shared how I built a set of screen sci-fi shows that deal with AI (and I’ve already gotten some nice recommendations on other ones to include in a later update). The second post talked about the tone of those films and the third discussed their provenance.

Returning to our central question, to determine whether the stories we tell are the ones we should be telling, we need to push the survey up one level of abstraction.

With the minor exceptions of reboots and remakes, sci-fi makers try their hardest to make sure their shows are unique and differentiated. That makes comparing apples to apples difficult. So the next step is to look at the strategic imperatives that are implied in each show. “Strategic imperatives” is a mouthful, so let’s call them “takeaways.” (The other alternative, “morals,” has way too much baggage.) To get to takeaways for this survey, I tried to ask: What does this show imply that we should do, right now, about AI?
Now, this is a fraught enterprise. Even if we could séance the spirit of Dennis Feltham Jones and press him for a takeaway, he could back up, shake his palms at us, and say something like, “Oh, no, I’m not saying all super AI is fascist. Just Colossus, here, is.” Stories can be just about what happened that one time, implying nothing about all instances or even the most likely instances. They can just be stuff that happens.


Pain-of-death, authoritarian stuff.

But true to the New Criticism stance of this blog, I believe the author’s intent, when it’s even available, is questionable and only kind-of interesting. When thinking about the effects of sci-fi, we need to turn to the audience. If it’s not made clear in the story that this AI is unusual (through a character saying so, or through other AIs in the diegesis behaving differently), audiences may rightly infer that the AI is representative of its class. Demon Seed weakly implies that all AIs are just going to be evil and do horrible things to people, and get out, humanity, while you can. Which is dumb, but let’s acknowledge that this one show says something like “AI will be evil.”


Untold AI: Geo

In the prior post we spoke about the tone of AI shows. In this post we’re going to talk about the provenance of AI shows.

This is, admittedly, a diversion, because it’s not germane to the core question at hand. (That question is, “What stories aren’t we telling ourselves about AI?”) But now that I have all this data to query and some rudimentary skills in wrangling it in Google Sheets, I can barely help myself. It’s just so interesting. Plus, Eurovision is coming up, so everyone there is feeling a swell of nationalism. This will be important.


Time to Terminator: 1 paragraph.

So it was that, while I was backfilling the survey with some embarrassing oversights (since I had actually already reviewed those shows), I came across the country data on IMDb.com. This identifies the locations where the production companies involved with each show are based. So even if a show is shot entirely in Christchurch, if its production companies are based in A Coruña, its country is listed as Spain. What, I wondered, would we find if we had that data in the survey?

So, I added a country column to the database, and found that it allows me to answer a couple of questions. This post shares those results.

So the first question to ask the data is: which countries have production studios that have made shows in the survey (and, by extension, about AI)? It’s a surprisingly short list.

Untold AI: Tone

When we begin to look at AI stories over time, as we did in the prior post and will continue in this one, one of the basic changes we can track is how the stories seem to want us to feel about AI, or their tone. Are they more positive about AI, more negative, or neutral/balanced?



  1. Generally, sci-fi is slightly more negative than positive about AI.
  2. It started off very negative and has been slowly moving, on average, to slightly negative.
  3. The 1960s were the high point of positive AI.
  4. We tell lots more stories about general AI than super AI.
  5. We tell a lot more stories about robots than disembodied AI.
  6. Cinephiles (like readers of this blog) probably think more negatively about robots than the general population does.

Now, the details.

The tone I have assigned to each show is arguable, of course, but I think I’ve covered my butt by using a very coarse scale. I looked at each film and decided, on a scale of -2 to 2, how negative or positive it was about AI. Very negative is -2. The Terminator series starts off very negative, because AI is evil and there is nothing to balance it. (It later creeps higher when Ahhnold becomes a “good” robot.) The Transformers series is 0 because the good AI is balanced by the bad AI. Star Trek: The Next Generation gets a 2, or very positive, for the presence of Data, noting that the blip of Lore doesn’t complicate the deliberately crude metric.

Average tone

Given all that, here’s what the average for each year looks like. As of 2017, we are looking slightly askance at screen-sci-fi AI, though not nearly as badly as Fritz Lang did at the beginning, and its reputation has been improving. The trend line (that red line) shows that it’s been steadily increasing over the last 90 years or so. As always, the live chart may have updates.
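The per-year averaging and trend line described above can be sketched in a few lines of Python. The (year, tone) pairs here are invented placeholders, not the actual survey data; real tones are on the -2 (very negative) to 2 (very positive) scale:

```python
# Invented placeholder data: (release year, tone score) for a handful of shows.
shows = [
    (1927, -2), (1951, 1), (1956, 1), (1968, -1), (1977, 1),
    (1984, -2), (1984, -1), (1999, -1), (2013, 1), (2015, 0),
]

# Average tone per year (a year with several shows gets one averaged value).
by_year = {}
for year, tone in shows:
    by_year.setdefault(year, []).append(tone)
avg_tone = {year: sum(tones) / len(tones) for year, tones in by_year.items()}

# Ordinary least-squares slope for the trend line across the yearly averages.
years = sorted(avg_tone)
xs = years
ys = [avg_tone[y] for y in years]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx
print(f"trend: {slope:+.4f} tone points per year")  # positive means improving
```

With these placeholder numbers the slope comes out slightly positive, matching the steadily improving trend the chart shows.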

Generally, we can see that things started off very negatively because of Metropolis and Der Herr der Welt. Then those high points in the 1950s came from the robots of The Day the Earth Stood Still, Forbidden Planet, and The Invisible Boy. The period from 1960–1980 was neutral-to-bad. The 1980s introduced a period of “it’s complicated,” with things trending toward balanced or neutral.
What this points out is that a dialog about AI has been going on across the decades, something like this.



Untold AI: The survey

Hey readership. Sorry for the brief radio silence there. Was busy doing some stuff, like getting married. Back now to post some overdue content. But the good news is I’m back with some weighty posts, and in honor of the 50th anniversary of 2001: A Space Odyssey, they have to do with AI, science, and sci-fi.


So last fall I was invited with some other spectacular people to participate in a retreat about AI, happening at the Juvet Landscape Hotel in Ålstad, Norway. (A breathtaking opportunity, and thematically a perfect setting since it was the shooting location for Ex Machina. Thanks to Andy Budd for the whole idea, as well as Ellen de Vries, James Gilyead, and the team at Clearleft who helped organize.) The event was structured like an unconference, so participants could propose sessions and if anyone was interested, join up. One of the workshops I proposed was called “AI Narratives” and it sought to answer the question “What AI Stories Aren’t We Telling (That We Should Be)?” So, why this topic?

Sci-fi, my reasoning goes, plays an informal and largely unacknowledged role in setting public expectations and understanding about technology in general and AI in particular. That, in turn, affects public attitudes, conversations, behaviors at work, and votes. If we found that sci-fi was telling the public misleading stories over and over, we should make a giant call for the sci-fi creating community to consider telling new stories. It’s not that we want to change sci-fi from being entertainment to being propaganda, but rather to try to take its role as informal opinion-shaper more seriously.

Reader wish: More interviews with authors


[This is a one-off request from the most recent readership poll.]

This is a great idea! Many times my critiques pass the buck from the interface designers to the script writers, so in all fairness I should also interview them. I would very much want to have completed a review for them to respond to first, though it’s admittedly not a requirement. I do have a personal connection to the author of Arrival. Maybe I’ll get to that one.

One clarification, though, reader: Do you mean authors for the shows I’ve reviewed, any show, or authors of written sci-fi?

Also: does anyone have a connection to authors of sci-fi? Especially of any shows I’ve reviewed already? (If you’re reading via RSS: there’s a list of reviewed shows on the right-hand side of the site.) If so, send me a private message at chris[at]scifiinterfaces.com with the author’s name and how you know them. Then we can discuss your asking them whether they’d be OK with an introduction to me for an interview.

Reader wish: More about the narrative side of things

[This is a one-off request from the most recent readership poll.]

I am actually quite interested in this. I have an outline for a book, tentatively titled Worldbuilding with Interfaces, and in my head this would include individual frameworks for common interfaces and what needed to be shown for several models of interaction, among other things.

While I’m dreaming, let me also put out that I have a daydream where I join the faculty down at Worldbuilding Institute to get deep into this with the pros. Hook a nerd up, will ya. Back to reality.

If I started to include posts as a lead-up to a full book on it, though, this would be a pretty major shift in the tone and content. Would that be worth starting a new blog for just that purpose? Or could it fit in here amongst the other reviews? Would the lines be too blurry? Would it isolate existing readers? It would certainly slow down my already pokey publishing pace.

Since this would be a major shift, I’m putting it out there to see if anyone wants to discuss it. In, of course, comments. Or chris[at]scifiinterfaces.com if you have secret, sage words of advice.

Reader wish: More diverse UI work

[This is a one-off request from the most recent readership poll.]

Reader wish: Most of the content is fixated on one type of FUI. It would be nice to see more diverse UI work.

This was really weird for me to read, since Scout and I are currently reviewing magic items as if they were tech. In the past the blog has covered bizarre gestural interfaces, suicide kits, Krell technology, robot design, ectoplasmic containment units, NUI, AI, service design, and even panopticon teleporting matchmaking interfaces.

I have gone back to the beginning of sci-fi and thereafter spread new reviews out across the decades. I review every interface in any given movie or TV show, using a very broad definition of interfaces. The only types of sci-fi interface I won’t cover are weapons, torture devices, and work done by toxic people.

Some guesses at what you might mean:

  • You could mean games, and here’s why not.
  • You could mean literature or illustration, and the intro to the book covers why that’s a non-starter.
  • You could mean more obscure sci-fi or subgenres, and that’s just a matter of my limited bandwidth.

So if you can comment and help me understand more of what you mean, I’d appreciate it. But if that doesn’t satisfy, the HUDs and GUIs category includes the occasional game and some lightweight analysis, too, so be sure to check it out. And of course anyone is welcome to offer to contribute, to ensure there is more diversity of the sort you are seeking.

I guess what I’m saying is that I think the blog already covers a huge range of FUI, within the constraints of movie and TV sci-fi. If you’ve actually identified a blind spot I’ve had, please email me or comment on the site so I can have my eyes opened.

Reader wish: Talk to more creators

[This is a one-off request from the most recent readership poll.]

Reader wish: I wish there would be more interviews whenever you can get creators to talk about their interfaces, because I’d like to have more context about the story behind them.

Sounds good. I like that content, too.

I’ve been explicit about the virtues of a New Criticism approach to critique, which argues against including a creator’s intention in a critique. I still believe that to be right, despite modern trends toward ad hominem analysis.

But after a review gets completed, I don’t see any harm. Well, except that lots of sites now feature creator interviews, and it’s a time-intensive undertaking for, comparatively, not much payoff.

I’ll do my best. Let me know in the comments if you have any particular interfaces in mind, or any particular creators you already know about.

Reader comment: Sometimes the breakdowns are pretty abstract and pedantic or obscure

[This is a one-off request from the most recent readership poll.]

All true. I follow the analyses where they lead, and I won’t reject a line of inquiry because it’s abstract, pedantic, or obscure. My Twitter description used to note that “I delight in finding truffles in oubliettes,” and that bit of poetry refers to exactly this.

If I were to flatter myself, I’d love for this blog to be considered in a league with PBS Idea Channel: insightful and unapologetically nerdy. Not there yet, of course.

So I had considered this not a bug but a feature.

I’d love to hear from other readers. Do you feel the same way? If a majority of readers feel that the abstraction, depth, and obscure places the blog goes are off-putting, it might be a good moment to consider the future of the blog.