Looking at the many-to-many relationships of those takeaways, I wondered whether some of them appeared together more often than others. For instance, do we tell “AI will be inherently evil” and “AI will fool us with fake media or pretending to be human” together frequently? I’m at the upper boundary of my statistical analysis skills here (and the sample size is, admittedly, small), but I ran Pearson correlations across the set for all two-part combinations. The results look like this.
What’s a Pearson correlation? It measures how strongly two things tend to appear together in a set, on a scale from −1 to 1. For instance, if you wanted to know which letters of the English alphabet appear together in words most frequently, you could compute a Pearson correlation for every pair of letters across all the words in the dictionary, starting with AB, then AC, then AD, continuing all the way to YZ. Each pair would get a correlation coefficient as a result. The highest number would tell you that where you find the first letter in the pair, the second letter is very likely to be there, too. (Q & U, if you’re wondering, according to this.) The lowest number would tell you which letters appear together most uncommonly. (Q & W. More often than you think, but less often than any other pair.)
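If you want to see the mechanics, here’s a minimal sketch in Python (the data is invented for illustration, not from the survey): each takeaway, or letter, becomes a presence/absence vector with one 0-or-1 entry per story, and Pearson’s r is computed over two such vectors.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient r between two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    std_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    std_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (std_x * std_y)

# Hypothetical vectors: 1 = appears in that story (or word), 0 = doesn't.
q = [1, 0, 1, 1, 0, 1]
u = [1, 0, 1, 0, 0, 1]
print(round(pearson(q, u), 2))  # → 0.71; r near +1 means they usually co-occur
```

An r of 1 means the two always appear together, −1 means one reliably appears without the other, and 0 means knowing one tells you nothing about the other.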
In the screen shot way above, you can see I put these in a Google Sheet and conditionally formatted the cells from solid black to solid yellow, according to their coefficients. The idea is that darker yellows signal a high degree of correlation, lowering the contrast with the black text and “hiding” the pairings that frequently appear together, while letting the pairings that don’t shine through as yellow.
The takeaways make up both the Y and X axes, so that descending diagonal of black is where a takeaway is compared to itself, and by definition, those correlations are perfect. Every time Evil will use AI for Evil appears, you can totally count on Evil will use AI for Evil also appearing in the same story. Hopefully that’s no surprise. Look at the rest of the cells and you can see there are a few dark spots and a lot of yellow.
If you want to see the exact ranked list, see the live doc, in a sheet named “correlations_list,” but since there are 630 combinations, I won’t paste the actual values or a screen grab of the whole thing here; at that size it wouldn’t make any sense. The three highest and four lowest pairings are discussed below.
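Incidentally, 630 two-part combinations implies 36 takeaways, since 36 × 35 ÷ 2 = 630. Generating and ranking all the pairings is mechanical; here’s a sketch with invented presence/absence data (the takeaway names come from this post, but the vectors are made up):

```python
from itertools import combinations
from math import comb, sqrt

def pearson(xs, ys):
    """Pearson's r over two equal-length 0/1 vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

# 36 takeaways -> C(36, 2) unordered pairs.
print(comb(36, 2))  # → 630

# Hypothetical data: one 0/1 per story, same story order for every takeaway.
takeaways = {
    "AI will make privacy impossible":   [1, 1, 0, 1, 0, 1],
    "Multiple AIs will balance":         [1, 1, 0, 1, 0, 0],
    "AI will seek to eliminate humans":  [0, 0, 1, 0, 1, 0],
}

# Score every unordered pair and sort from most to least correlated.
ranked = sorted(
    ((pearson(takeaways[a], takeaways[b]), a, b)
     for a, b in combinations(takeaways, 2)),
    reverse=True,
)
for r, a, b in ranked:
    print(f"{r:+.2f}  {a} & {b}")
```

With real data you’d have 36 rows of 0s and 1s, one per takeaway, and the same sweep would produce the full 630-row ranked list.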
The most correlated
These three pairs are each correlated at more than 50%. That means, like the Q & the U, where you find one, you’re much more likely to find the other.
Our uncanny valley detectors are very sensitive
The highest-correlated pair, at 57%, is We will use AI to replace people we have lost & AI will not be able to fool us, which makes sense. If we could replace the people we have lost and could not tell the difference, there would be no dramatic tension. (Though in Black Mirror’s lovely “San Junipero” episode, it makes for a beautiful love story.)
You are the product
The runners-up are a tie at 56%. The first of the two is AI will make privacy impossible & AI will enable mind crimes against virtual sentiences. I suspect this pair is almost entirely due to Black Mirror, which frequently tells tales of unconsenting neuroreplication, the results of which are used as service slaves or virtually tortured, and that’s just in the “White Christmas” episode.
Obey me and live
The other tied-for-second-place pair is AI will make privacy impossible & Multiple AIs will balance. This is probably the combined effect of Colossus: The Forbin Project and Person of Interest, the connection being that multiples only matter when talking about super AI, and in any super AI scenario, privacy is close to impossible. (Interestingly, in the other multiples tale, Ultron vs. JARVIS vs. Vision, privacy didn’t really come up…)
The least correlated
I’m sharing the bottom four because second-to-last place is a three-way tie, and there is actually one pairing with a score of 0. The three pairs tied for second-to-last place are…
DOES NOT COMPUTE 00
Well, OK, of course. AI will solve problems or do work humans cannot & AI will seek to eliminate humans don’t really work together. The former presumes that the AI is working on our behalf, and the latter presumes the opposite. The only time I believe they appear together is in the French film Alphaville.
DOES NOT COMPUTE 01
Again we see “AI problem solving,” and again, almost-incompatible concepts: AI will solve problems or do work humans cannot & Humans will be immaterial to AI. Hey, it’s helping us do things, but we’re immaterial to its existence? Prometheus is the film with both, and it depends on the progressive unfolding of David’s goals: he begins the film piloting the ship for decades while the humans sleep, but by the end of the film his ultimate goal seems to be pure knowledge discovery.
Ought to Compute
Again we see Humans will be immaterial to AI, but this time it’s paired with AI will evolve quickly. If you know the movie Her, you know that this is the one film where a general AI becomes a super AI and decides that humans just aren’t interesting anymore. It’s a bit of narcissism, I suppose, to presume otherwise. We want to believe it will truly care about our well-being and usher in a golden age, or that it will decide we are a plague and seek to eradicate us. Abandonment would be one of the best possible outcomes of a rogue AI, but probably also unlikely. A more likely scenario is that our well-being will be immaterial to it, and we will be regarded only as resources to be incorporated into its goal function. But more on this later.
Has never computed
There is only one pair in all the takeaways that has just not happened: Who controls the drones has the power and AI will seek to subjugate us. Wait. Doesn’t the MCU’s Iron Legion count? Not really. When the Legion was controlled by an AI, it was the friendly proto-Vision JARVIS. When Ultron cobbled together his first body, he wasn’t really controlling the drones, just scavenging parts. And when they clashed, the Ultrons just destroyed the Iron Legion; they did not try to take it over. Otherwise, drones like those seen in Black Mirror’s “Metalhead” are self-contained AIs. Universal negatives are pretty easy to disprove with evidence, so I expect an example from an eagle-eyed reader fairly soon after publication.
All that yellow
Below you’ll see a histogram of the Pearson values. The good news for writers is that not only have the surveyed stories done a pretty good job of staying differentiated so far, but the opportunities for telling new stories that combine takeaways are pretty wide open. (If it were otherwise, we’d see more of a half-bubble shape instead of the steep slope.)
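If you want to reproduce that shape from your own data, here’s a minimal sketch (the coefficients are invented stand-ins for the 630 real ones) that buckets r values into 0.1-wide bins and prints a text histogram:

```python
from math import floor

# Invented coefficients standing in for the real ranked list.
coeffs = [0.57, 0.56, 0.56, 0.31, 0.12, 0.08, 0.05, 0.0, -0.04, -0.11]

# Count coefficients per 0.1-wide bin; key 5 covers [0.5, 0.6), key -1 covers [-0.1, 0.0).
bins = {}
for r in coeffs:
    key = floor(r * 10)
    bins[key] = bins.get(key, 0) + 1

# A text histogram: a steep slope means most pairs are only weakly correlated.
for key in sorted(bins, reverse=True):
    print(f"{key / 10:+.1f}  {'#' * bins[key]}")
```

A “half-bubble” shape would show the counts piling up in the high-r bins instead of the low ones.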
With the exception of those top three, the field is wide open for combinations of other takeaways. Maybe you can use a new combination to spark your imagination?
As promising as that is, though, don’t open your copy of Final Draft just yet. Just because a combination of narrative tropes hasn’t been tried before doesn’t mean it’s a story we should be telling ourselves. Let’s look at takeaway trends and takeaways by rating, and then start moving into the real-world science for some sobering comparisons. Then you’ll be in a better place to start.