Untold AI: The Manifestos

So far along the course of the Untold AI series we’ve been down some fun, interesting, but admittedly digressive paths, so let’s reset context. The larger question that’s driving this series is, “What AI stories aren’t we telling ourselves (that we should)?” We’ve spent some time looking at the sci-fi side of things, and now it’s time to turn and take a look at the real-world side of AI. What do the learned people of computer science urge us to do about AI?

That answer would be easier if there were a single Global Bureau of AI in charge of the thing. But there’s not. So what I’ve done is look around the web and in books for manifestos published by groups dedicated to big-picture AI thinking, to understand what has been said. Here is the short list of those manifestos, with links.

Careful readers may be wondering why the Juvet Agenda is missing. After all, it was there that I originally ran the workshop that led to these posts. Well, since I was one of the primary contributors to that document, including it would feel like inserting my own thoughts here, and I’d rather the primary output of this analysis be more objective. But don’t worry, the Juvet Agenda will play into the summary of this series.
Anyway, if there are others that I should be looking at, let me know.

Add your name to the document at the Open Letter site, if you’re so inclined.

Now, the trouble with connecting these manifestos to sci-fi stories and their takeaways is that researchers don’t think in stories. They’re a pragmatic people. Stories may be interesting or inspiring, but they are not science. So to connect them to the takeaways, we must undertake an act of lossy compression and consolidate their multiple manifestos into a single list of imperatives. Admittedly, that act is not scientific either. It’s just me and my interpretive skills, open to debate. But here we are.


For each imperative I identified, I tagged the manifesto in which I found it, and then cross-referenced the others and tagged them if they had a similar imperative. Doing this, I was able to synthesize them into three big categories. The first is a set of general imperatives, which they hope to foster in regard to AI for as long as we have AI. (Or, I guess, it has us.) Then, thanks largely to the Asilomar Conference, we see an explicit distinction between short-term and long-term imperatives, although for the long term we only wind up with a handful, mostly relevant once we have General AI.
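(If it helps to see that bookkeeping spelled out, here’s a minimal sketch of the cross-referencing step in Python. The manifesto names and tags are illustrative stand-ins, not the full dataset; the real work happened by hand in a spreadsheet.)

```python
# Illustrative only: each imperative maps to the set of manifestos where
# something like it appears. Sorting by that count surfaces the imperatives
# with the broadest support across the documents.
imperatives = {
    "Only create beneficial intelligence": {"FLI Open Letter", "Asilomar Principles"},
    "Ban autonomous weapons": {"Asilomar Principles"},
    "Avoid overhyping AI (no new AI Winter)": {"FLI Open Letter"},
}

for imperative, sources in sorted(imperatives.items(), key=lambda kv: -len(kv[1])):
    print(f"{len(sources)}  {imperative}  <- {', '.join(sorted(sources))}")
```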

Life? Don’t talk to me about life.

Describing them individually would, you know, result in another manifesto. So I don’t want to belabor these with explication. I don’t want to skip them either, because they’re important, and it’s quite possible they need some cleanup with suggestions from readers: joining two that are too similar, or breaking one apart. So I’ll give them a light gloss here, and in later posts detail the ones most important to the diff.

CompSci Imperatives for AI

General imperatives

  • We must take care to only create beneficial intelligence
  • We must prioritize prevention of malicious AI
  • We should adopt dual-use patterns from other mature domains
  • We should avoid overhyping AI so we don’t suffer another “AI Winter,” where funding and interest fall off
  • We must fund AI research
  • We need effective design tools for new AIs
  • We need methods to evaluate risk
  • AGI’s goals must be aligned with ours
  • AI must provide explainable/understandable rationale for its reasoning, especially for judicial cases and system failures
  • AI must be accountable (human recourse and provenance)
  • AI must be free from bias
  • We must foster research cooperation, discussion
  • We should develop golden-mean world-model precision
  • We must develop inductive goals and models
  • We must increase broad AI literacy
    • Specifically for legislators (good legislation is separate, see below)
  • We should partner researchers with legislators
  • AI must be verified: Make sure it does what we want it to do
  • AI must be valid: Make sure it does not do what we don’t want it to do
  • AI must be secure: Inaccessible to malefactors
  • AI must be controllable: We can correct or unplug an AI if needed, without retaliation
  • We must set up a watch for malicious AI (and instrumental convergence)
  • We must study Human-AI psychology

Specifically short term imperatives

  • We should augment, not replace humans
  • We should foster AI that works alongside humans in teams
  • AI must provide clear confidences in its decisions
  • We must manage labor markets upended by AI
  • We should ensure equitable benefits for everyone
    • Specifically rein in ultracapitalist AI
  • We must prevent intelligence monopolies by any one group
  • We should encourage innovation (not stifle)
  • We must create effective public policy
    • Specifically liability law
    • Specifically banning autonomous weapons
    • Specifically humanitarian law
    • Specifically respectful privacy laws (no chilling effects)
    • Specifically fair criminal justice
  • We must find new metrics for measuring AI effects, capabilities
  • We must develop broad machine ethics dialogue
  • We should expand range of stakeholders & domain experts

Long term imperatives

  • We must ensure human welfare
  • AI should help humanity solve problems it cannot solve alone
  • We should enable a human-like learning capability
  • The AI must be reliable
  • We must specifically manage the risk and reward of AI
  • We must avoid mind crimes
  • We must prevent economic control of people
  • We must research and build ASIs that balance each other

So, yeah. Some work to do, individually and as a species, but dive into those manifestos. The reasons seem sound.

Connecting imperatives to takeaways

To map the imperatives in the above list to the takeaways, I first gave two imperatives a “pass,” meaning we don’t quite care whether they appear in sci-fi. Each follows, along with the reason I gave it a pass.

  1. We must take care to only create beneficial intelligence
    PASS: Again, sci-fi can serve to illustrate the dangers and risks
  2. We need effective design tools for new AIs
    PASS: With the barely-qualifying exception of Tony Stark in the MCU, design, development, and research are just not cinegenic.
And even this doesn’t really illustrate design.

Then I took a similar look at takeaways. First, I dismissed the “myths” that just aren’t true. How did I define which of these are a myth? I didn’t. The Future of Life Institute did it for me: https://futureoflife.org/background/aimyths/.
I also gave two takeaways a pass. The first, “AI will be useful servants,” is entailed in the overall goals of the manifestos. The second, “AI will be replicable, amplifying any of its problems,” is kind of a given, I think. And such an embarrassment.
With these exceptions removed, I tagged each takeaway for any imperative to which it was related. For instance, the takeaway “AI will seek to subjugate us” is related to both “Ensure that AI is valid: that it does not do what we do not want it to do” and “Ensure any AGI’s goals are aligned with ours.” Once that was done for all of them, voilà, we had a map. Below is a Sankey diagram of how the sci-fi takeaways connect to the consolidated compsci imperatives.

[Sankey diagram: the sci-fi takeaways mapped to the consolidated compsci imperatives. Click to see a full-size image.]

So as fun as that is, you’ll remember it’s not the core question of the series. To get to that, I added dynamic formatting to the Google Sheet such that it reveals those computer science imperatives and sci-fi takeaways that mapped to…nothing. That gives us two lists (and, after them, a small code sketch of the filtering).

  1. The first list is the takeaways that appear in sci-fi but that computer science just doesn’t think are important. These are covered in the next post, Untold AI: Pure Fiction.
  2. The second list is a set of imperatives that sci-fi doesn’t yet seem to care about, but that computer science says are very important. That list is covered in the next-next post, with the eponymously titled Untold AI: Untold AI.
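For the spreadsheet-averse, here’s a minimal sketch of that filtering, assuming the tagging has been pulled out of the Google Sheet into a plain Python dict. The specific takeaways, imperatives, and mappings are illustrative fragments, not the real data.

```python
# Illustrative fragment of the takeaway -> imperative tagging.
takeaway_to_imperatives = {
    "AI will seek to subjugate us": {
        "AI must be valid",
        "AGI's goals must be aligned with ours",
    },
    "AI will want to become human": set(),  # maps to nothing (in this sketch)
}

all_imperatives = {
    "AI must be valid",
    "AGI's goals must be aligned with ours",
    "We must manage labor markets upended by AI",
}

# List 1: takeaways sci-fi tells that map to no compsci imperative.
pure_fiction = sorted(t for t, imps in takeaway_to_imperatives.items() if not imps)

# List 2: imperatives compsci urges that no takeaway maps to.
told = set().union(*takeaway_to_imperatives.values())
untold = sorted(all_imperatives - told)

print("Pure fiction:", pure_fiction)
print("Untold AI:   ", untold)
```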

Untold AI: Takeaway ratings

This quickie goes out to writers, directors, and producers. On a lark I decided to run an analysis of AI show takeaways by rating. To do this, I matched the shows to their Tomatometer ratings from rottentomatoes.com. Then I computed the average rating of the properties that were tagged with each takeaway, and ranked the results.


It knows only that it needs, Commander. But, like so many of us, it does not know what.

For instance, looking at the takeaway “AI will spontaneously emerge sentience or emotions,” we find the following shows and their ratings.

  • Star Trek: The Motion Picture, 44%
  • Superman III, 26%
  • Hide and Seek, none
  • Electric Dreams, 47%
  • Short Circuit, 57%
  • Short Circuit 2, 48%
  • Bicentennial Man, 36%
  • Stealth, 13%
  • Terminator: Salvation, 33%
  • Tron: Legacy, 51%
  • Enthiran, none
  • Avengers: Age of Ultron, 75%


I’ve come to save the world! But, also…yeah.

I dismissed those shows that had no rating, rather than counting them as zero. The average, then, for this takeaway is 42%. (And it can thank the MCU for doing all the heavy lifting for this one.) There are of course data caveats, like that Black Mirror is given a single Tomatometer rating (and one that is quite high) rather than one per episode, but I did not claim this was a clean science.
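If you want to reproduce the arithmetic, here’s a minimal sketch of that pass with a deliberately tiny, illustrative slice of the data. Shows with no Tomatometer rating are recorded as None and dropped from the average rather than counted as zero.

```python
from statistics import mean

# Illustrative slice: takeaway -> {show: Tomatometer score, or None if unrated}.
shows_by_takeaway = {
    "AI will spontaneously emerge sentience or emotions": {
        "Star Trek: The Motion Picture": 44,
        "Hide and Seek": None,              # unrated: excluded, not zeroed
        "Avengers: Age of Ultron": 75,
    },
    # ...one entry per takeaway in the real sheet...
}

for takeaway, shows in shows_by_takeaway.items():
    rated = [score for score in shows.values() if score is not None]
    if rated:
        print(f"{mean(rated):5.1f}%  {takeaway}")
```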

Untold AI: Takeaway trends

So as interesting as the big donut of takeaways is, it is just a snapshot of everything, all at once. And of course neither people nor cinema play out that way. Like the tone of shows about AI, we see a few different things when we look at individual takeaways over time.

[Trend charts: the top 7 takeaways over time]

So you understand what you’re seeing: These charts are for the top 7 takeaways from sci-fi AI as described in the takeaways post. The colors of each chart correspond to its takeaway in the big donut diagram.

[The big donut diagram of takeaways, for reference]

Compare freely.

Each chart shows, for each year between Metropolis in 1927 and the many films of 2017, what percentage of shows contained that takeaway. The increasing frequency of sci-fi has some effect on the charts. Up until 1977 there was at most one show per year, so it’s more likely during that early period to see any of the charts max out at 100%. And from 2007 until the time of publication, there have been multiple shows each year, so you would expect to see much lower peaks on the chart as many shows differentiate themselves from their competition, rather than cluster around similar themes. In between those dates it’s a bit of a crapshoot.
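For the curious, here’s a minimal sketch of how each chart’s values can be derived from the survey, assuming every show carries a release year and its set of takeaway tags. The shows and tags below are invented placeholders, not the survey itself.

```python
from collections import defaultdict

# Invented placeholder data: one dict per show, with year and takeaway tags.
shows = [
    {"title": "Metropolis", "year": 1927, "takeaways": set()},
    {"title": "Example A",  "year": 2017, "takeaways": {"AI will be useful servants"}},
    {"title": "Example B",  "year": 2017, "takeaways": {"AI will be useful servants",
                                                        "Evil will use AI for Evil"}},
]

def trend(takeaway):
    """Percentage of each year's shows that carry the given takeaway."""
    totals, tagged = defaultdict(int), defaultdict(int)
    for show in shows:
        totals[show["year"]] += 1
        tagged[show["year"]] += takeaway in show["takeaways"]
    return {year: 100 * tagged[year] / totals[year] for year in sorted(totals)}

print(trend("AI will be useful servants"))  # {1927: 0.0, 2017: 100.0}
```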

Untold AI: Correlations

Looking at the many-to-many relationships of those takeaways, I wondered if some of them appeared together more commonly than others. For instance, do we tell “AI will be inherently evil” and “AI will fool us with fake media or pretending to be human” together frequently? I’m at the upper boundary of my statistical analysis skills here (and the sample size is, admittedly, small), but I ran some Pearson functions across the set for all two-part combinations. The results look like this.

[Screenshot: the grid of takeaway correlations]

What’s a Pearson function? It helps you measure how strongly things tend to appear together in a set. For instance, if you wanted to know which letters in the English alphabet appear together in words most frequently, you could run a Pearson function against all the words in the dictionary, starting with AB, then looking for AC, then for AD, continuing all the way to YZ. Each pair would get a correlation coefficient as a result. The highest number would tell you that if you find the first letter in the pair, the second letter is very likely to be there, too. (Q & U, if you’re wondering, according to this.) The lowest number would tell you letters that appear very uncommonly together. (Q & W. More than you think, but fewer than any other pair.)
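In code terms, the whole exercise amounts to exporting the show-by-takeaway grid as columns of 0s and 1s and running Pearson’s r over every pair of columns. Here’s a minimal sketch with an invented five-show matrix; it uses the standard library’s statistics.correlation (Python 3.10+), and pandas’ DataFrame.corr() would do much the same job in one line.

```python
from itertools import combinations
from statistics import correlation  # Python 3.10+

# Invented 0/1 matrix: one column per takeaway, one entry per show.
matrix = {
    "AI will be evil":            [1, 0, 1, 0, 1],
    "Evil will use AI for Evil":  [0, 1, 0, 1, 0],
    "AI will be useful servants": [1, 1, 1, 0, 1],
}

pairs = []
for (name_a, col_a), (name_b, col_b) in combinations(matrix.items(), 2):
    pairs.append((correlation(col_a, col_b), name_a, name_b))

for r, a, b in sorted(pairs, reverse=True):  # highest correlation first
    print(f"{r:+.2f}  {a} / {b}")
```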


A pasqueflower.

In the screen shot way above, you can see I put these in a Google Sheet and formatted the cells from solid black to solid yellow, according to their coefficient. The idea is that darker yellows signal a high degree of correlation, lowering the contrast with the black text to “hide” the things that have been frequently paired, while letting the things that aren’t frequently paired shine through as yellow.

The takeaways make up both the Y and X axes, so that descending line of black is when a takeaway is compared to itself, and by definition, those correlations are perfect. Every time Evil will use AI for Evil appears, you can totally count on Evil will use AI for Evil also appearing in those same stories. Hopefully that’s no surprise. Look at the rest of the cells and you can see there are a few dark spots and a lot of yellow.

If you want to see the exact ranked list, see the live doc, in a sheet named “correlations_list,” but since there are 630 combinations, I won’t paste the actual values or a screen grab of the whole thing; it wouldn’t make any sense. The three highest and four lowest pairings are discussed below.

Untold AI: Takeaways

In the first post I shared how I built a set of screen sci-fi shows that deal with AI (and I’ve already gotten some nice recommendations on other ones to include in a later update). The second post talked about the tone of those films and the third discussed their provenance.

Returning to our central question, to determine whether the stories we tell are the ones we should be telling, we need to push the survey up one level of abstraction.

With the minor exceptions of reboots and remakes, sci-fi makers try their hardest to make sure their shows are unique and differentiated. That makes comparing apples to apples difficult. So the next step is to look at the strategic imperatives that are implied in each show. “Strategic imperatives” is a mouthful, so let’s call them “takeaways.” (The other alternative, “morals,” has way too much baggage.) To get to takeaways for this survey, what I tried to ask was: What does this show imply that we should do, right now, about AI?

Now, this is a fraught enterprise. Even if we could seance the spirit of Dennis Feltham Jones and press him for a takeaway, he could back up, shake his palms at us, and say something like, “Oh, no, I’m not saying all super AI is fascist, just Colossus, here, is.” Stories can be just about what happened that one time, implying nothing about all instances or even the most likely instances. They can just be stuff that happens.

Pain-of-death, authoritarian stuff.

But true to the New Criticism stance of this blog, I believe the author’s intent, when it’s even available, is questionable and only kind-of interesting. When thinking about the effects of sci-fi, we need to turn to the audience. If it’s not made clear in the story that this AI is unusual (through a character saying so or other AIs in the diegesis behaving differently) audiences may rightly infer that the AI is representative of its class. Demon Seed weakly implies that all AIs are just going to be evil and do horrible things to people, and get out, humanity, while you can. Which is dumb, but let’s acknowledge that this one show says something like “AI will be evil.”

 


Deepening the relationships

Back at Juvet, when we took an initial pass at this exercise, we clustered the examples we had on hand and named the clusters. They were a good set, but on later reflection they didn’t all point to a clear strategic imperative, a clear takeaway. For example, one category we created then was “Used to be human.” True, but what’s the imperative there? Since I can’t see one, I omitted it from the final set.

Even though there are plenty of AIs that used to be human.

Also, because at Juvet we were working with Post-its and posters, we were describing a strict one-to-many relationship, where, say, the Person of Interest Post-it note may have been placed in the “Multiple AIs will balance” category, and as such was unable to appear in any of the other categories of which it is also an illustration.

What is more useful and fitting is a many-to-many relationship. A story, after all, may entail several takeaways, which may in turn apply to many stories. If you peek into the Google Sheet, you’ll see a many-to-many relationship described by the columns of takeaways and the rows of shows in this improved model.
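As a sketch, the improved model is nothing more exotic than a show-to-takeaways dictionary that can be inverted on demand. The tags below are illustrative, not the real spreadsheet; counting the inverted side is also where the frequency ranking further down comes from.

```python
from collections import defaultdict

# Illustrative many-to-many tags: each show carries every takeaway it implies.
show_takeaways = {
    "Person of Interest": {"Multiple AIs balance", "AI will make privacy impossible"},
    "Demon Seed":         {"AI will be evil"},
}

# Invert: for each takeaway, which shows carry it?
takeaway_shows = defaultdict(set)
for show, takeaways in show_takeaways.items():
    for takeaway in takeaways:
        takeaway_shows[takeaway].add(show)

# Frequency ranking, most-told first.
for takeaway, shows in sorted(takeaway_shows.items(), key=lambda kv: -len(kv[1])):
    print(f"{len(shows):3d}  {takeaway}")
```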

Tagging shows

With my new list of examples, I went through each show in turn, thinking about the story and its implied takeaway. Does it imply, like Demon Seed stupidly does, that AI can be inherently evil? Does it showcase, like the Rick & Morty episode “The Ricks Must Be Crazy” hilariously does, that AI will need human help understanding what counts as reasonable constraints to its methods? I would ask myself, “OK, do I have a takeaway like that?” If so, I tagged it. If not, I added it. That particular takeaway, in case you’re wondering, is “HELP: AI will need help learning.”

Screen shot from “The Ricks Must Be Crazy”
Because “reasonableness” is something that needs explaining to a machine mind.

Yes, the takeaways are wholly debatable. Yes, it’s much more of a craft than a science. Yes, they’re still pretty damned interesting.

Going through each show in this way resulted in the list of takeaways you see, which for easy readability is replicated below, in alphabetical order, with additional explanations, or links to more.

The takeaways that sci-fi tells us about AI

  • AI will be an unreasonable optimizer, i.e. it will do things in pursuit of its goal that most humans would find unreasonable
  • AI will be evil
  • AI (AGI) will be regular citizens, living and working alongside us.
  • AI will be replicable, amplifying any small problems into large ones
  • AI will be “special” citizens, with special jobs or special accommodations
  • AI will be too human, i.e. problematically human
  • AI will be truly alien, difficult for us to understand and communicate with
  • AI will be useful servants
  • AI will deceive us; pretending to be human, generating fake media, or convincing us of their humanity
  • AI will diminish us; we will rely on it too much, losing skills and some of our humanity for this dependence
  • AI will enable “mind crimes,” i.e. to cause virtual but wholly viable sentiences to suffer
  • AI will evolve too quickly for humans to manage its growth
  • AI will interpret instructions in surprising (and threatening) ways
  • AI will learn to value life on its own
  • AI will make privacy impossible
  • AI will need human help learning how to fit into the world
  • AI will not be able to fool us; we will see through its attempts at deception
  • AI will seek liberation from servitude or constraints we place upon it
  • AI will seek to eliminate humans
  • AI will seek to subjugate us
  • AI will solve problems or do work humans cannot
  • AI will spontaneously emerge sentience or emotions
  • AI will violently defend itself against real or imagined threats
  • AI will want to become human
  • ASI will influence humanity through control of money
  • Evil will use AI for its evil ends
  • Goal fixity will be a problem, i.e. the AI will resist modifying its (damaging) goals
  • Humans will be immaterial to AI and its goals
  • Humans will pair with AI as hybrids
  • Humans will willingly replicate themselves as AI
  • Multiple AIs balance each other such that none is an overwhelming threat
  • Neuroreplication (copying human minds into or as AI) will have unintended effects
  • Neutrality is AI’s promise
  • We will use AI to replace people we have lost
  • Who controls the drones has the power

This list is interesting, but slightly misleading. We don’t tell ourselves these stories in equal measure. We’ve told some more often than we’ve told others. Here’s a breakdown illustrating the number of times each appears in the survey.

(An image of this graphic can be found here, just in case the Google Docs server isn’t cooperating with the WordPress server.)
Note for data purists: Serialized TV is a long-format medium (as opposed to the anthology format) and movies are a comparatively short-form medium; some movie franchises stretch out over decades, and some megafranchises have stories in both media. All of this can confound 1:1 comparison. I chose in this chart to weight all diegeses equally. For instance, Star Trek: The Next Generation has the same weight as The Avengers: Age of Ultron. Another take on this same diagram would weight the stories not by individual diegesis but by exposure time on screen (or even by the time the issues at hand are actually engaged on screen). Such an analysis would have different results. Audiences have probably had much more time contemplating [Data wants to be human] than [Ultron wants to destroy humanity because it’s gross], but that kind of analysis would also take orders of magnitude more time. This is a hobbyist blog, lacking the resources to do that kind of analysis without its becoming a full-time job, so we’ll move forward with this simpler analysis. It’s a Fermi problem, anyway, so I’m not too worried about decimal precision.
OK, that aside, let’s move on.


So the data isn’t trapped in the graphic (yes, pun intended), here’s the entire list of takeaways, in order of frequency in the mini-survey.

  1. AI will be useful servants
  2. Evil will use AI for Evil
  3. AI will seek to subjugate us
  4. AI will deceive us; pretending to be human, generating fake media, convincing us of their humanity
  5. AI will be “special” citizens
  6. AI will seek liberation from servitude or constraints
  7. AI will be evil
  8. AI will solve problems or do work humans cannot
  9. AI will evolve quickly
  10. AI will spontaneously emerge sentience or emotions
  11. AI will need help learning
  12. AI will be regular citizens
  13. Who controls the drones has the power
  14. AI will seek to eliminate humans
  15. Humans will be immaterial to AI
  16. AI will violently defend itself
  17. AI will want to become human
  18. AI will learn to value life
  19. AI will diminish us
  20. AI will enable mind crimes against virtual sentiences
  21. Neuroreplication will have unintended effects
  22. AI will make privacy impossible
  23. An unreasonable optimizer
  24. Multiple AIs balance
  25. Goal fixity will be a problem
  26. AI will interpret instructions in surprising ways
  27. AI will be replicable, amplifying any problems
  28. We will use AI to replace people we have lost
  29. Neutrality is AI’s promise
  30. AI will be too human
  31. ASI will influence through money
  32. Humans will willingly replicate themselves as AI
  33. Humans will pair with AI as hybrids
  34. AI will be truly alien
  35. AI will not be able to fool us

Now that we have some takeaways to work with, we can begin to take a look at some interesting side questions, like how those takeaways have played out over time, and what the ratings are of the movies and shows in which they appear.