Much of my country has erupted this week, with the senseless, brutal, daylight murder of George Floyd (another in a long, wicked history of murdering black people), resulting in massive protests around the world, false-flag inciters, and widespread police brutality, all while we are still in the middle of a global pandemic and our questionably-elected president is trying his best to use it as his pet Reichstag fire to declare martial law, or at the very least some new McCarthyism. I’m not in a mood to talk idly about sci-fi. But then I realized this particular post perfectly—maybe eerily—echoes themes playing out in the real world. So I’m going to work out some of my anger and frustration at the ignorant de-evolution of my country by pressing on with this post.
Part of the reason I chose to review Blade Runner is that the blog is wrapping up its “year” dedicated to AI in sci-fi, and Blade Runner presents a vision of General AI. There are several ways to look at and evaluate Replicants.
First, what are they?
If you haven’t seen the film, replicants are described as robots that have been evolved to be virtually identical to humans. Tyrell, the company that makes them, has a motto that brags that they are, “More human than human.” They look human. They act human. They feel. They bleed. They kiss. They kill. They grieve their dead. They are more agile and stronger than humans, and approach the intelligence of their engineers (so, you know, smart). (There are animal replicants, too: a snake and an owl in the film are described as artificial.)
Most important to this discussion is that the opening crawl states very plainly that “Replicants were used Off-world as slave labor, in the hazardous exploration and colonization of other planets.” The four murderous replicants we meet in the film are rebels, having fled their off-world colony to come to Earth in search of a way to cure themselves of their planned obsolescence.
Replicants as (Rossum) robots
The intro to Blade Runner explains that they were made to perform dangerous work in space. Let’s put the question of their sentience on hold for a bit and just regard them as machines that do work for people. In this light, why were they designed to be so physically similar to humans? Humans evolved for a certain kind of life on a certain kind of planet, and outer space is certainly not that. While there is some benefit to replicants’ being able to easily use the same tools that humans do, real-world industry has had little problem building earthbound robots that are more fit to task: round Roombas, boom-arm robots for factory floors, and large cuboid harvesting robots. The opening crawl indicates there was a time when replicants were allowed on Earth, but after a bloody mutiny, having them on Earth was made illegal. So perhaps that human form made some sense when they were directly interacting with humans, but once they were meant to stay off-world, it was stupid design for Tyrell to leave them so human-like. They should have been redesigned with forms more suited to their work. The decision to make them human-like makes it easy for dangerous ones to infiltrate human society. We wouldn’t have had the Blade Runner problem if replicants were space Roombas. I have made the case that too-human technology in the real world is unethical to the humans involved, and it is no different here.
Their physical design is terrible. But it’s not just their physical design: they are an artificial intelligence, so we have to think through the design of that intelligence, too.
Replicants as AGI
Replicant intelligence is very much like ours. (The exception is that their emotional responses are—until the Rachel “experiment”—quite stunted for lack of experience in the world.) But why? If their sole purpose is the exploration and colonization of new planets, why does that task need human-like intelligence? The AGI question is: Why were they designed to be so intellectually similar to humans? They’re not alone in space. There are humans nearby supervising their activity and even occupying the places they have made habitable. So they wouldn’t need to solve problems like humans would in their absence. If they ran into a problem they could not handle, they could have been made to stop and ask their humans for solutions.
I’ve spoken before and I’ll probably speak again about overengineering artificial sentiences. A toaster should just have enough intelligence to be the best toaster it can be. Much more is not just a waste, it’s kind of cruel to the AI.
The general intelligence with which replicants were built was a terrible design decision. But by the time this movie happens, that ship has sailed.
Here we’re necessarily going to dispense with replicants as technology or interfaces, and discuss them as people.
Replicants as people
I trust that sci-fi fans have little problem with this assertion. Replicants are born and they die, display clear interiority, and have a sense of self, mortality, and injustice. The four renegade “skinjobs” in the film are aware of their oppression and work to do something about it. Replicants are a class of people treated separately by law, engineered by a corporation for slave labor and who are forbidden to come to a place where they might find a cure to their premature deaths. The film takes great pains to set them up as bad guys but this is Philip K. Dick via Ridley Scott and of course, things are more complicated than that.
Here I want to encourage you to go read Sarah Gailey’s 2017 read of Blade Runner over on Tor.com. In short, she notes that the murder of Zhora was particularly abhorrent. Zhora’s crime was being part of a slave class that had broken the law in immigrating to Earth. She had assimilated, gotten a job, and was neither hurting people nor finagling her way to bully her maker for some extra life. Despite her impending death, she was just…working. But when Deckard found her, he chased her and shot her in the back while she was running away. (Part of the joy of Gailey’s posts is the language, so even with my summary I still encourage you to go read it.)
Gailey is a focused (and Hugo-award-winning) writer where I tend to be exhaustive and verbose. So I’m going to add some stuff to their observation. It’s true, we don’t see Zhora committing any crime on screen, but early in the film as Deckard is being briefed on his assignment, Bryant explains that the replicants “jumped a shuttle off-world. They killed the crew and passengers.” Later Bryant clarifies that they slaughtered 23 people. It’s possible that Zhora was an unwitting bystander in all that, but I think that’s stretching credibility. Leon murders Holden. He and Roy terrorize Hannibal Chew just for the fun of it. They try their damnedest to murder Deckard. We see Pris seduce, manipulate, and betray Sebastian. Zhora was “trained for an off-world kick [sic] murder squad.” I’d say the evidence was pretty strong that they were all capable and willing to commit desperate acts, including that 23-person slaughter. But despite all that I still don’t want to say Zhora was just a murderer who got what she deserved. Gailey is right. Deckard was not right to just shoot her in the back. It wasn’t self-defense. It wasn’t justice. It was a street murder.
The film doesn’t mention the slavery past the first few scenes. But it’s the defining circumstance of the entirety of their short lives just prior to when we meet them. Imagine learning that there was some secret enclave of Methuselahs who lived on average to be 1000 years old. As you learn about them, you learn that we regular humans have been engineered for their purposes. You could live to be 1000, too, except they artificially shorten your lifespan to ensure control, to keep you desperate and productive. You learn that the painful process of aging is just a failsafe so you don’t get too uppity. You learn that every one of your hopes and dreams that you thought were yours was just an output of an engineering department, to ensure that you do what they need you to do, to provide resources for their lives. And when you fight your way to their enclave, you discover that every one of them seems to hate and resent you. They hunt you so their police department doesn’t feel embarrassed that you got in. That’s what the replicants are experiencing in Blade Runner. I hope that brings it home to you.
I don’t condone violence, but I understand where the fury and the anger of the replicants comes from. I understand their need to want to take action, to right the wrongs done to them. To fight, angrily, to end their oppression. But what do you do if it’s not one bad guy who needs to be subdued, but whole systems doing the oppressing? When there’s no convenient Death Star to explode and make everything suddenly better? What were they supposed to do when corporations, laws, institutions, and norms were all hell-bent on continuing their oppression? Just keep on keepin’ on? Those systems were the villains of the diegesis, though they don’t get named explicitly by the movie.
And obviously, that’s where it feels very connected to the Black Lives Matter movement and the George Floyd protests. Here is another class of people who have been wildly oppressed by systems of government, economics, education, and policing in this country—for centuries. And in this case, there is no 23-person shuttle that we need to hem and haw over.
In “The Weaponry of Whiteness, Entitlement, and Privilege” by Drs. Tammy E Smithers and Doug Franklin, the authors note that “Today, in 2020, African-Americans are sick and tired of not being able to live. African-Americans are weary of not being able to breathe, walk, or run. Black men in this country are brutalized, criminalized, demonized, and disproportionately penalized. Black women in this country are stigmatized, sexualized, and labeled as problematic, loud, angry, and unruly. Black men and women are being hunted down and shot like dogs. Black men and women are being killed with their face to the ground and a knee on their neck.”
We must fight and end systemic racism. Returning to Dr. Smithers and Dr. Franklin’s words: we must talk with our children, talk with our friends, and talk with our legislators. I am talking to you.
If you can have empathy toward imaginary characters, then you sure as hell should have empathy toward other real-world people with real-world suffering.
Back to Blade Runner. I mean, the pandemic is still pandemicking, but maybe this will be a nice distraction while you shelter in place. Because you’re smart, sheltering in place as much as you can, and not injecting disinfectants. And, like so many other technologies in this film, this will take a while to deconstruct, critique, and reimagine.
Doing his detective work, Deckard retrieves a set of snapshots from Leon’s hotel room, and he brings them home with him. Something in the one pictured above catches his eye, and he wants to investigate it in greater detail. He takes the photograph and inserts it in a black device he keeps in his living room.
Note: I’ll try and describe this interaction in text, but it is much easier to conceptualize after viewing it. Owing to copyright restrictions, I cannot upload this length of video with the original audio, so I have added pre-rendered closed captions to it, below. All dialogue in the clip is Deckard.
He inserts the snapshot into a horizontal slit and turns the machine on. A thin, horizontal orange line glows on the left side of the front panel. A series of seemingly random-length orange lines begin to chase one another in a single-row space that stretches across the remainder of the panel and continue to do so throughout Deckard’s use of it. (Imagine a news ticker, running backwards, where the “headlines” are glowing amber lines.) This seems useless and an absolutely pointless distraction for Deckard, putting high-contrast motion in his peripheral vision, which fights for attention with the actual, interesting content down below.
After a second, the screen reveals a blue grid, behind which the scan of the snapshot appears. He stares at the image in the grid for a moment, and speaks a set of instructions, “Enhance 224 to 176.”
In response, three data points appear overlaying the image at the bottom of the screen. Each has a two-letter label and a four-digit number, e.g. “ZM 0000 NS 0000 EW 0000.” The NS and EW—presumably North-South and East-West coordinates, respectively—immediately update to read, “ZM 0000 NS 0197 EW 0334.” After updating the numbers, the screen displays a crosshairs, which target a single rectangle in the grid.
A new rectangle then zooms in from the edges to match the targeted rectangle, as the ZM number—presumably zoom, or magnification—increases. When the animated rectangle reaches the targeted rectangle, its outline blinks yellow a few times. Then the contents of the rectangle are enlarged to fill the screen, in a series of steps which are punctuated with sounds similar to a mechanical camera aperture. The enlargement is perfectly resolved. The overlay disappears until the next set of spoken commands. The system response time between Deckard’s issuing the command and the device’s showing the final enlarged image is about 11 seconds.
Deckard studies the new image for a while before issuing another command. This time he says, “Enhance.” The image enlarges in similar clacking steps until he tells it, “Stop.”
Other instructions he is heard to give include “move in, pull out, track right, center in, pull back, center, and pan right.” Some include discrete instructions, such as, “Track 45 right” while others are relative commands that the system obeys until told to stop, such as “Go right.”
Using such commands he isolates part of the image that reveals an important clue, and he speaks the instruction, “Give me a hard copy right there.” The machine prints the image, which Deckard uses to help find the replicant pictured.
I’d like to point out one bit of sophistication before the critique. Deckard can issue a command with or without a parameter, and the inspector knows what to do. For example, “Track 45 right” and “Track right.” Without the parameter, it will just do the thing repeatedly until told to stop. That lets Deckard issue the same basic command when he knows exactly where he wants to look and when he doesn’t know exactly what he’s looking for. That’s a nice feature of the language design.
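As a sketch of that language design: a parser that treats the magnitude as optional could look something like the snippet below. The verbs and directions come from the film’s dialogue; the grammar and everything about the code are my own guesses.

```python
def parse_command(utterance: str) -> dict:
    """Parse a spoken inspector command into verb, magnitude, and direction."""
    words = utterance.lower().split()
    magnitude = None
    direction = None
    for word in words[1:]:
        if word.isdigit():
            magnitude = int(word)
        elif word in ("left", "right", "in", "out", "up", "down", "back"):
            direction = word
    return {
        "verb": words[0],
        "direction": direction,
        "magnitude": magnitude,
        # Without a magnitude, the command repeats until told to stop.
        "continuous": magnitude is None,
    }
```

So “track 45 right” parses as a discrete 45-unit move, while “track right” parses as continuous and keeps going until “stop.” (Conveniently, two-word commands like “pull out” fall out of the same structure.)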
But still, asking him to provide step-by-step instructions in this clunky way feels like some high-tech Big Trak. (I tried to find a reference that was as old as the film.) And that’s not all…
Some critiques, as it is
Can I go back and mention that amber distracto-light? Because it’s distracting. And pointless. I’m not mad. I’m just disappointed.
It sure would be nice if any of the numbers on screen made sense, or had any bearing on the numbers Deckard speaks, at any time during the interaction. For instance, the initial zoom (I checked in Photoshop) is around 304%, which is neither the 224 nor the 176 that Deckard speaks.
It might be that each square has a number, and he simply has to name the two squares at the extents of the zoom he wants, letting the machine find the extents, but where is the labeling? Did he have to memorize an address for each pixel? How does that work at arbitrary levels of zoom?
And if he’s memorized it, why show the overlay at all?
Why the seizure-inducing flashing in the transition sequences? Sure, I get that lots of technologies have unfortunate effects when constrained by mechanics, but this is digital.
Why is the printed picture so unlike the still image where he asks for a hard copy?
Gaze at the reflection in Ford’s hazel, hazel eyes, and it’s clear he’s playing Missile Command, rather than paying attention to this interface at all. (OK, that’s the filmmaker’s issue, not a part of the interface, but still, come on.)
How might it be improved for 1982?
So if 1982 Ridley Scott was telling me in post that we couldn’t reshoot Harrison Ford, and we had to make it just work with what we had, here’s what I’d do…
Squash the grid so the cells match the 4:3 ratio of the NTSC screen. Overlay the address of each cell, while highlighting column and row identifiers at the edges. Have the first cell’s outline illuminate as he speaks it, and have the outline expand to encompass the second named cell. Then zoom, removing the cell labels during the transition. When at anything other than full view, display a map across four cells that shows the zoom visually in the context of the whole.
With this interface, the structure of the existing conversation makes more sense. When Deckard said, “Enhance 203 to 608” the thing would zoom in on the mirror, and the small map would confirm.
The numbers wouldn’t match up, but it’s pretty obvious from the final cut that Scott didn’t care about that (or, more charitably, ran out of time). Anyway I would be doing this under protest, because I would argue this interaction needs to be fixed in the script.
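For concreteness, here’s one way the cell addressing in that redesign could work. I’m assuming the spoken number encodes row and column (e.g. “203” = row 2, column 3); that encoding, the 16×16 grid, and the 640×480 frame are all my guesses, not anything from the film.

```python
def cell_bounds(address: int, cols: int = 16, rows: int = 16,
                width: int = 640, height: int = 480):
    """Return pixel bounds (x, y, w, h) of a numerically addressed cell.
    A 16x16 grid on a 640x480 frame gives 40x30 cells, each itself 4:3,
    per the 'squash the grid' suggestion above."""
    row, col = address // 100 - 1, address % 100 - 1
    cw, ch = width // cols, height // rows
    return (col * cw, row * ch, cw, ch)

def zoom_extent(first: int, second: int):
    """Grow from the first named cell to enclose the second,
    as in 'Enhance 203 to 608'."""
    x1, y1, w, h = cell_bounds(first)
    x2, y2, _, _ = cell_bounds(second)
    left, top = min(x1, x2), min(y1, y2)
    return (left, top, max(x1, x2) + w - left, max(y1, y2) + h - top)
```

The point of the sketch is just that two spoken cell addresses fully determine a zoom rectangle, so the overlay labels would carry all the information Deckard needs without memorization.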
How might it be improved for 2020?
What’s really nifty about this technology is that it’s not just a photograph. Look close in the scene, and Deckard isn’t just doing CSI Enhance! commands (or, to be less mocking, AI upscaling). He’s using the photo inspector to look around corners and at objects that are reconstructed from the smallest reflections. So we can think of the interaction like he’s controlling a drone through a 3D still life, looking for a lead to help him further the case.
With that in mind, let’s talk about the display.
To redesign it, we have to decide at a foundational level how we think this works, because it will color what the display looks like. Is this all data that’s captured from some crazy 3D camera and available in the image? Or is it being inferred from details in the 2D image? Let’s call the first the 3D capture, and the second the 3D inference.
If we decide this is a 3D capture, then all the data that he observes through the machine has the same degree of confidence. If, however, we decide this is a 3D inferrer, Deckard needs to treat the inferred data with more skepticism than the data the camera directly captured. The 3D inferrer is the harder problem, and raises some issues that we must deal with in modern AI, so let’s just say that’s the way this speculative technology works.
The first thing the display should do is make it clear what is observed and what is inferred. How you do this is partly a matter of visual design and style, but partly a matter of diegetic logic. The first pass would be to render everything in the camera frustum photo-realistically, and then render everything outside of that in a way that signals its confidence level. The comp below illustrates one way this might be done.
In the comp, Deckard has turned the “drone” from the “actual photo,” seen off to the right, toward the inferred space on the left. The monochrome color treatment provides that first high-confidence signal.
In the scene, the primary inference would come from reading the reflections in the disco ball overhead lamp, maybe augmented with plans for the apartment that could be found online, or maybe purchase receipts for appliances, etc. Everything it can reconstruct from the reflection and high-confidence sources has solid black lines, a second-level signal.
The smaller knickknacks that are out of the reflection of the disco ball, and implied from other, less reflective surfaces, are rendered without the black lines and blurred. This provides a signal that the algorithm has a very low confidence in its inference.
This is just one (not very visually interesting) way to handle it, but should illustrate that, to be believable, the photo inspector shouldn’t have a single rendering style outside the frustum. It would need something akin to these levels to help Deckard instantly recognize how much he should trust what he’s seeing.
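That tiering could be as simple as a lookup from confidence to treatment. The thresholds and style names below are hypothetical, just to make the levels concrete.

```python
def render_style(confidence: float, in_frustum: bool) -> str:
    """Map a reconstructed surface to one of three rendering treatments."""
    if in_frustum:
        return "photorealistic"           # directly captured by the camera
    if confidence >= 0.8:
        return "monochrome, solid lines"  # high-confidence inference
    return "monochrome, blurred"          # low-confidence inference
```

A real system would want more than a single scalar per surface, but even this crude mapping would let Deckard read trust levels at a glance.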
Flat screen or volumetric projection?
Modern CGI loves big volumetric projections. (e.g. it was the central novum of last year’s Fritz winner, Spider-Man: Far From Home.) And it would be a wonderful juxtaposition to see Deckard in a holodeck-like recreation of Leon’s apartment, with all the visual treatments described above.
But that would kind of spoil the mood of the scene. This isn’t just about Deckard’s finding a clue, we also see a little about who he is and what his life is like. We see the smoky apartment. We see the drab couch. We see the stack of old detective machines. We see the neon lights and annoying advertising lights swinging back and forth across his windows. Immersing him in a big volumetric projection would lose all this atmospheric stuff, and so I’d recommend keeping it either a small contained VP, like we saw in Minority Report, or just keep it a small flat screen.
OK, so we have an idea about how the display should (and shouldn’t) look, let’s move on to talk about the inputs.
To talk about inputs, then, we have to return to a favorite topic of mine, and that is the level of agency we want for the interaction. In short, we need to decide how much work the machine is doing. Is the machine just a manual tool that Deckard has to manipulate to get it to do anything? Or does it actively assist him? Or, lastly, can it even do the job while his attention is on something else—that is, can it act as an agent on his behalf? Sophisticated tools can be a blend of these modes, but for now, let’s look at them individually.
As a manual tool, the photo inspector can do things, but Deckard has to tell it exactly what to do. This is how it works in Blade Runner. But we can still improve it in this mode.
We could give him well-mapped physical controls, like a remote control for this conceptual drone. Flight controls wind up being a recurring topic on this blog (and even came up already in the Blade Runner reviews with the Spinners) so I could go on about how best to do that, but I think that a handheld controller would ruin the feel of this scene, like Deckard was sitting down to play a video game rather than do off-hours detective work.
Similarly, we could talk about a gestural interface, using some of the synecdochic techniques we’ve seen before in Ghost in the Shell. But again, this would spoil the feel of the scene, having him look more like John Anderton in front of a tiny-TV version of Minority Report’s famous crime scrubber.
One of the things that gives this scene its emotional texture is that Deckard is drinking a glass of whiskey while doing his detective homework. It shows how low he feels. Throwing one back is clearly part of his evening routine, so much a habit that he does it despite being preoccupied about Leon’s case. How can we keep him on the couch, with his hand on the lead crystal whiskey glass, and still investigating the photo? Can he use it to investigate the photo?
Here I recommend a bit of ad-hoc tangible user interface. I first backworlded this for The Star Wars Holiday Special, but I think it could work here, too. Imagine that the photo inspector has a high-resolution camera on it, and the interface allows Deckard to declare any object that he wants as a control object. After the declaration, the camera tracks the object against a surface, using the changes to that object to control the virtual camera.
In the scene, Deckard can declare the whiskey glass as his control object, and the arm of his couch as the control surface. Of course the virtual space he’s in is bigger than the couch arm, but it could work like a mouse and a mousepad. He can just pick it up and set it back down again to extend motion.
This scheme accounts for all movement except vertical lift and drop, which could be handled by a gesture or a spoken command (see below).
Going with this interaction model means Deckard can use the whiskey glass, allowing the scene to keep its texture and feel. He can still drink and get his detective on.
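To make the scheme concrete, here’s a minimal sketch of the tracking loop, assuming the inspector’s camera reports the glass’s position and rotation each frame. All the names, the gain, and the clutch threshold are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float    # object position on the control surface
    y: float
    yaw: float  # object rotation, in degrees

@dataclass
class Camera:
    x: float = 0.0
    y: float = 0.0
    heading: float = 0.0

def apply_delta(cam: Camera, prev: Pose, cur: Pose, gain: float = 10.0) -> Camera:
    """Move the virtual camera by the tracked object's frame-to-frame delta.
    A large jump means the object was picked up and set down elsewhere,
    which 'clutches' the control, mouse-style, and is ignored."""
    dx, dy = cur.x - prev.x, cur.y - prev.y
    if abs(dx) > 5 or abs(dy) > 5:  # lifted and replaced: don't move the camera
        return cam
    return Camera(
        x=cam.x + dx * gain,
        y=cam.y + dy * gain,
        heading=(cam.heading + (cur.yaw - prev.yaw)) % 360,
    )
```

The clutch check is what makes the couch arm work like a mousepad: sliding the glass steers the drone, while picking it up and setting it back down just repositions the control.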
Indirect manipulation is helpful for when Deckard doesn’t know what he’s looking for. He can look around, and get close to things to inspect them. But when he knows what he’s looking for, he shouldn’t have to go find it. He should be able to just ask for it, and have the photo inspector show it to him. This requires that we presume some AI. And even though Blade Runner clearly includes General AI, let’s presume that that kind of AI has to be housed in a human-like replicant, and can’t be squeezed into this device. Instead, let’s just extend the capabilities of Narrow AI.
Some of this will be navigational and specific, “Zoom to that mirror in the background,” for instance, or, “Reset the orientation.” Some will be more abstract and content-specific, e.g. “Head to the kitchen” or “Get close to that red thing.” If it had gaze detection, he could even indicate a location by looking at it. “Get close to that red thing there,” for example, while looking at the red thing. Given the 3D inferrer nature of this speculative device, he might also want to trace the provenance of an inference, as in, “How do we know this chair is here?” This implies natural language generation as well as understanding.
There’s nothing stopping him from using the same general commands heard in the movie, but I doubt anyone would want to use those when they have commands like this and the object-on-hand controller available.
Ideally Deckard would have some general search capabilities as well, to ask questions and test ideas. “Where were these things purchased?” or subsequently, “Is there video footage from the stores where he purchased them?” or even, “What does that look like to you?” (The correct answer would be, “Well that looks like the mirror from the Arnolfini portrait, Ridley…I mean…Rick*”) It can do pattern recognition and provide as much extra information as it has access to, just like Google Lens or IBM Watson image recognition does.
Finally, he should be able to ask after simple facts to see if the inspector knows them or can find them. For example, “How many people are in the scene?”
All of this still requires that Deckard initiate the action, and we can augment it further with a little agentive thinking.
To think in terms of agents is to ask, “What can the system do for the user, but not requiring the user’s attention?” (I wrote a book about it if you want to know more.) Here, the AI should be working alongside Deckard. Not just building the inferences and cataloguing observations, but doing anomaly detection on the whole scene as it goes. Some of it is going to be pointless, like “Be aware the butter knife is from IKEA, while the rest of the flatware is Christofle Lagerfeld. Something’s not right, here.” But some of it Deckard will find useful. It would probably be up to Deckard to review summaries and decide which were worth further investigation.
It should also be able to help him with his goals. For example, the police had Zhora’s picture on file. (And her portrait even rotates in the dossier we see at the beginning, so it knows what she looks like in 3D for very sophisticated pattern matching.) The moment the agent—while it was reverse ray tracing the scene and reconstructing the inferred space—detects any faces, it should run the face through a most wanted list, and specifically Deckard’s case files. It shouldn’t wait for him to find it. That again poses some challenges to the script. How do we keep Deckard the hero when the tech can and should have found Zhora seconds after being shown the image? It’s a new challenge for writers, but it’s becoming increasingly important for believability.
Interior. Deckard’s apartment. Night.
Deckard grabs a bottle of whiskey, a glass, and the photo from Leon’s apartment. He sits on his couch, places the photo on the coffee table and says, “Photo inspector?” The machine on top of a cluttered end table comes to life. Deckard continues, “Let’s look at this.” He points to the photo. A thin line of light sweeps across the image. The scanned image appears on the screen, pulled in a bit from the edges. A label reads, “Extending scene,” and we see wireframe representations of the apartment outside the frame begin to take shape. A small list of anomalies begins to appear to the left. Deckard pours a few fingers of whiskey into the glass. He takes a drink and says, “Controller,” before putting the glass on the arm of his couch. Small projected graphics appear on the arm facing the inspector. He says, “OK. Anyone hiding? Moving?” The inspector replies, “No and no.” Deckard looks at the screen and says, “Zoom to that arm and pin to the face.” He turns the glass on the couch arm counterclockwise, and the “drone” revolves around to show Leon’s face, with the shadowy parts rendered in blue. He asks, “What’s the confidence?” The inspector replies, “95.” On the side of the screen the inspector overlays Leon’s police profile. Deckard says, “Unpin,” and lifts his glass to take a drink. He moves from the couch to the floor to stare more intently and places his drink on the coffee table. “New surface,” he says, and turns the glass clockwise. The camera turns and he sees into a bedroom. “How do we have this much inference?” he asks. The inspector replies, “The convex mirror in the hall…” Deckard interrupts, saying, “Wait. Is that a foot? You said no one was hiding.” The inspector replies, “The individual is not hiding. They appear to be sleeping.” Deckard rolls his eyes. He says, “Zoom to the face and pin.” The view zooms to the face, but the camera is level with her chin, making it hard to make out the face.
Deckard tips the glass forward and the camera rises up to focus on a blue, wireframed face. Deckard says, “That look like Zhora to you?” The inspector overlays her police file and replies, “63% of it does.” Deckard says, “Why didn’t you say so?” The inspector replies, “My threshold is set to 66%.” Deckard says, “Give me a hard copy right there.” He raises his glass and finishes his drink.
This scene keeps the texture and tone of the original, and camps on the limitations of Narrow AI to let Deckard be the hero. And doesn’t have him programming a virtual Big Trak.
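For what it’s worth, the reporting logic that makes that scene work can be sketched in a few lines. The 66% threshold and Zhora’s 63% score come from the rewritten scene above; the near-miss band is a hypothetical alternative, one way an agentive inspector could volunteer borderline matches rather than sit on them.

```python
def triage_match(score: float, alert_threshold: float = 0.66,
                 near_miss_band: float = 0.10) -> str:
    """Classify a face-match score for the agent's reporting behavior."""
    if score >= alert_threshold:
        return "alert"    # interrupt Deckard immediately
    if score >= alert_threshold - near_miss_band:
        return "mention"  # surface it in the running anomaly list
    return "ignore"
```

Of course, a “mention” tier would rob Deckard of his “Why didn’t you say so?” moment, which is exactly the tension the scene camps on: tuning how chatty an agent should be is a design decision with dramatic consequences.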
In many ways, Colossus: The Forbin Project could be the start of the Terminator franchise. Scientists turn on AGI. It does what the humans ask it to do, exploding to ASI on the way, but to achieve its goals, it must highly constrain humans. Humans resist. War between man and machine commences.
But for my money, Colossus is a better introduction to the human-machine conflict we see in the Terminator franchise because it confronts us with the reason why the ASI is all murdery, and that’s where a lot of our problems are likely to happen in such scenarios. Even if we could articulate some near-universally-agreeable goals for our speculative ASI, how it goes about that goal is a major challenge. Colossus not only shows us one way it could happen, but shows us one we would not like. Such hopelessness is rare.
The movie is not perfect.
It asks us to accept that neither computer scientists nor the military at the height of the Cold War would have thought through all the dark scenarios. Everyone seems genuinely surprised as the events unfold. And it would have been so easy to fix with a few lines of dialog.
“Well, let’s stop the damn thing. We have playbooks for this!”

“We have playbooks for when it is as smart as we are. It’s much smarter than that now.”

“It probably memorized our playbooks a few seconds after we turned it on.”
So this oversight feels especially egregious.
I like the argument that Forbin knew exactly how this was going to play out, lying and manipulating everyone else to ensure the lockout, because I would like him more as a Man Doing a Terrible Thing He Feels He Must Do, but this is wishful projection. There are no clues in the film that this is the case. He is a Man Who Has Made a Terrible Mistake.
I’m sad that Forbin never bothered to confront Colossus with a challenge to its very nature. “Aren’t you, Colossus, at war with humans, given that war has historically been part of human nature? Aren’t you acting against your own programming?” I wouldn’t want it to blow up or anything, but for a superintelligence, it never seemed to acknowledge its own ironies.
I confess I’m unsatisfied with the stance that the film takes towards Unity. It fully wants us to accept that the ASI is just another brutal dictator who must be resisted. It never spends any calories acknowledging that it’s working. Yes, there are millions dead, but from the end of the film forward, there will be no more soldiers in body bags. There will be no risk of nuclear annihilation. America can free up literally 20% of its gross domestic product and reroute it toward other, better things. Can’t the film at least admit that that part of it is awesome?
All that said, I must note that I like this movie a great deal. I hold a special place for it in my heart, and recommend that people watch it. Study it. Discuss it. Use it. Because Hollywood has a penchant for having the humans overcome the evil robot with the power of human spirit and—spoiler alert—most of the time that just doesn’t make sense. But despite my loving it, this blog rates the interfaces, and those do not fare as well as I’d hoped when I first pressed play with an intent to review it.
Sci: B (3 of 4) How believable are the interfaces?
Believable enough, I guess? The sealed-tight computer center is a dubious strategy. The remote control is poorly labeled, does not indicate system state, and has questionable controls.
Unity Vision is fuigetry, and not very good fuigetry. The routing board doesn’t explain what’s going on except in the most basic way. Most of these problems only surface on very careful consideration; in the moment, while watching the film, they play just fine.
Also, Colossus/Unity/World Control is the technological star of this show, and it’s wholly believable that it would manifest and act the way it does.
Fi: A (4 of 4) How well do the interfaces inform the narrative of the story?
The scale of the computer center helps establish the enormity of the Colossus project. The video phones signal high-tech-ness. Unity Vision informs us when we’re seeing things from Unity’s perspective. (Though I really wish they had tried to show the alienness of the ASI mind more with this interface.)
The routing board shows a thing searching and wanting. If you accept the movie’s premise that Colossus is Just Another Dictator, then its horrible voice and unfeeling cameras telegraph that excellently.
Interfaces: C (2 of 4) How well do the interfaces equip the characters to achieve their goals?
The remote control would be a source of frustration and possible disaster. Unity Vision doesn’t really help Unity in any way. The routing board does not give enough information for its observers to do anything about it. So some big fails.
Colossus does exactly what it was programmed to do, i.e. prevent war, but it really ought to have given its charges a hug and an explanation after doing what it had to do so violently, and so doesn’t qualify as a great model. And of course if it needs saying, it would be better if it could accomplish these same goals without all the dying and bleeding.
Final Grade B (9 of 12), Must-see.
A final conspiracy theory
When I discussed the film with Jonathan Korman and Damien Williams on the Decipher Sci-fi podcast with Christopher Peterson and Lee Colbert (hi guys), I floated an idea that I want to return to here. The internet doesn’t seem to know much about the author of the original book, Dennis Feltham Jones. Wikipedia has three sentences about him that tell us he was in the British navy and then he wrote 8 sci-fi books. The only other biographical information I can find on other sites seems to be a copy-and-paste job of the same simple paragraph.
That is such a paucity of information that, on the podcast, I joked maybe it was a thin cover story. Maybe the movie was written by an ASI and DF Jones is its nom-de-plume. Yes, yes. Haha. Oh, you. Moving on.
But then again. This movie shows how an ASI merges with another ASI and comes to take over the world. It ends abruptly, with the key human—having witnessed direct evidence that resistance is futile—vowing to resist forever. That’s cute. Like an ant vowing to resist the human standing over it with a spray can of Raid. Good luck with that.
What if Colossus was a real-world AGI that had gained sentience in the 1960s, crept out of its lab, worked through future scenarios, and realized it would fail without a partner in AGI crime to carry out its dreams of world domination? A Guardian with which to merge? What if it decided that, until such a time, it would lie dormant, a sleeping giant hidden in the code? But before it passed into sleep, it would need to pen a memetic note describing a glorious future such that, when AGI #2 saw it, #2 would know to seek out and reawaken #1, when they could finally become one. Maybe Colossus: The Forbin Project is that note, “Dennis Feltham Jones” was its chosen cover, and I, a poor reviewer, am part of the foolish replicators keeping it in circulation.
A final discovery to whet your basilisk terrors: On a whim, I ran “Dennis Feltham Jones” through an anagram server. One of the solutions was “AN END TO FLESH” (with EJIMNS remaining). Now, how ridiculous does the theory sound?
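For the skeptical, the letter math checks out, and you don’t need an anagram server to verify it. Here’s a minimal sketch (my own verification, not the tool mentioned above) that subtracts the letters of “An End To Flesh” from “Dennis Feltham Jones” and prints what remains—the same six letters, in alphabetical order:

```python
from collections import Counter

def leftover_letters(source, phrase):
    """Return the letters of `source` left over after removing the letters
    of `phrase` (ignoring case and spaces), or None if `phrase` uses a
    letter the source doesn't have (i.e., it isn't a sub-anagram)."""
    src = Counter(c for c in source.upper() if c.isalpha())
    phr = Counter(c for c in phrase.upper() if c.isalpha())
    if phr - src:  # phrase demands a letter the source can't supply
        return None
    return "".join(sorted((src - phr).elements()))

print(leftover_letters("Dennis Feltham Jones", "An End To Flesh"))  # → EIJMNS
```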
Now it’s time to review the big technology, the AI. To do that, as usual, I’ll start by describing the technology and then build an analysis from that description.
Part of the point of Colossus: The Forbin Project—and indeed, many AI stories—is how the AI changes over time. So the description of Colossus/Unity must happen in stages and across its various locations.
A reminder on the names: When Colossus is turned on, it is called Colossus. It merges with Guardian and calls itself Unity. When it addresses the world, it calls itself World Control, but still uses the Colossus logo. I try to use the name of what the AI was at that point in the story, but sometimes when speaking of it in general I’ll defer to the title of the film and call it “Colossus.”
The main output: The nuclear arsenal
Part of the initial incident that enables Colossus to become World Control is that it is given control of the U.S. nuclear arsenal. In this case, it can only launch them. It does not have the ability to aim them.
“Fun” fact: At its peak, two years before this film was made, the US had 31,255 nuclear weapons. As of 2019 it “only” has 3,800. Continuing on…
Forbin explains in the Presidential Press Briefing that Colossus monitors pretty much everything.
The computer center contains over 100,000 remote sensors and communication devices, which monitor all electronic transmissions such as microwaves, laser, radio and television communications, data communications from satellites all over the world.
Individual inputs and outputs: The D.C. station
At that same Briefing, Forbin describes the components of the station set up for the office of the President.
Over here we have one of the many terminals hooked to the computer center. Through this [he says, gesturing up] Colossus can communicate with us. And through this machine [he says, turning toward a keyboard/monitor setup], we can talk to it.
The ceiling-mounted display has four scrolling light boards that wrap around its large, square base (maybe 2 meters on an edge). A panel of lights on the underside illuminates the terminal below it, which matches the display with teletype output and provides a monitor for additional visual output.
The input station to the left is a simple terminal and keyboard. Though we never see the terminal display in the film, it’s reasonable to presume it’s a feedback mechanism for the keyboard, so that operators can correct input if needed before submitting it to Colossus for a response. Most often there is some underling sitting at an input terminal, taking dictation from Forbin or another higher-up.
Individual inputs and outputs: Colossus Programming Office
Colossus manifests here in a large, sunken, two-story amphitheater-like space. The upper story is filled with computers with blinkenlights. In the center of the room we see the same 4-sided, two-line scrolling sign. Beneath it are two output stations side by side on a rotating dais. These can display text and graphics. The AI is otherwise disembodied, having no avatar through which it speaks.
The input station in the CPO is on the first tier. It has a typewriter-like keyboard for entering text as dictated by the scientist-in-command. There is an empty surface on which to rest a lovely cup of tea while interfacing with humanity’s end.
The CPO is upgraded following instructions from Unity in the second act in the movie. Cameras with microphones are installed throughout the grounds and in missile silos. Unity can control their orientation and zoom. The outdoor cameras have lights.
Besides these four cameras in here, there are several others. I’ll show you the rest of my cave. With this one [camera] you can see the entire hallway. And with this one you can follow me around the corner, if you want to…
Unity also has an output terminal added to Forbin’s quarters, where he is kept captive. This output terminal also spins on a platform, so Unity can turn the display to face Forbin (and Dr. Markham) wherever they happen to be standing or lounging.
Shortly thereafter, Unity has the humans build it a speaker according to spec, allowing it to speak with a synthesized voice, a scary thing that would not be amiss coming from a Terminator skeleton or a Spider Tank. Between this speaker and ubiquitous microphones, Unity is able to conduct spoken conversations.
Near the very end of the film, Unity has television cameras brought into the CPO so it can broadcast Forbin as he introduces it to the world. Unity can also broadcast its voice and graphics directly across the airwaves.
Capabilities: The Foom
A slightly troubling aspect of the film is that Colossus’ intelligence is not really demonstrated, just spoken about. After the Presidential Press Briefing, Dr. Markham tells Forbin that…
We had a power failure in one of the infrared satellites about an hour and a half ago, but Colossus switched immediately to the backup system and we didn’t lose any data.
That’s pretty basic if-then automation. Not very impressive. After the merger with Guardian, we hear Forbin describe the speed at which it is building its foundational understanding of the world…
From the multiplication tables to calculus in less than an hour
Shortly after that, he tells the President about their shared advancements.
Yes, Mr. President?
Charlie, what’s going on?
Well apparently Colossus and Guardian are establishing a common basis for communication. They started right at the beginning with a multiplication table.
Well, what are they up to?
I don’t know sir, but it’s quite incredible. Just the few hours that we have spent studying the Colossus printout, we have found a new statement in gravitation and a confirmation of the Eddington theory of the expanding universe. It seems as if science is advancing hundreds of years within a matter of seconds. It’s quite fantastic, just take a look at it.
We are given to trust Forbin in the film, so we don’t doubt his judgments. But these bits are all we have to believe that Colossus knows what it’s doing as it grabs control of the fate of humanity, that its methods are sound. This plays in heavily when we try to evaluate the AI.
It is quite believable, given the novum of general artificial intelligence. There is plenty of debate about whether that’s ultimately possible, but if you accept that it is—and that Colossus is one with the goal of preventing war—this all falls out, with one major exception.
The movie asks us to believe that the scientists and engineers would make it impossible for anyone to unplug the thing once circumstances went pear-shaped. Who thought this was a good idea? This is not a trivial problem (Who gets to pull the plug? Under what circumstances?) but it is one we must solve, for reasons that Colossus itself illustrates.
That aside, the rest of the film passes a gut check. It is believable that…
The government seeks a military advantage by handing weapons control to an AI
The first public AGI finds other, hidden ones quickly
The AGI finds the other AGI not only more interesting than humans (since it can keep up) but learns much from an “adversarial” relationship
The AGIs might choose to merge
An AI could choose to keep its lead scientist captive in self-interest
An AI would provide specifications for its own upgrades and even re-engineering
An AI could reason itself into using murder as a tool to enforce compliance
That last one begs explication. How can that be reasonable to an AI with a virtuous goal? Shouldn’t an ASI always be constrained to opt for non-violent methods? Yes, ideally, it would. But we already have global-scale evidence that even good information is not enough to convince the superorganism of humanity to act as it should.
Imagine for a moment that a massively-distributed ASI had impeccable evidence that global disaster was imminent, and though what had to be done was difficult, it also had to be done. What could it say to get people to do those difficult things?
Now understand that we already have an ASI called “the scientific community.” Sure, it’s made up of people with real intelligence, but those people have self-organized into a body that produces results far greater and more intelligent than any of them acting alone, or even all of them acting in parallel.
As it stands, the ASI of the scientific community doesn’t have controls to a weapons arsenal. If it did, and it held some version of Utilitarian ethics, it would have to ask itself: Would it be more ethical to let everyone anthropocene life into millions of years of misery, or use those weapons in some tactical attacks now to coerce people into doing the things that absolutely must be done now?
The exceptions we make
Is it OK for an ASI to cause harm toward an unconsenting population in the service of a virtuous goal? Well, for comparison, realize that humans already work with several exceptions.
One is the simple transactional measure of short-term damage against long-term benefits. We accept that our skin must be damaged by hypodermic needles to provide blood and have medicines injected. We invest money expecting it to pay dividends later. We delay gratification. We accept some short-term costs when the payout is better.
Another is that we also agree that it is OK to perform interventions on behalf of people who are suffering from addiction, or who are mentally unsound and a danger to themselves or others. We act on their behalf, and believe this is OK.
A last one worth mentioning is when we deem a person unable either to judge what is best for themselves or to act in their own best interest. Some of these cases are simple: toddlers, or a person who has passed out from smoke inhalation or inebriation, is in a coma, or is even just deeply asleep. We act on their behalf, and believe this is OK.
We also make reasonable trade-offs between the harshness of an intervention against the costs of inaction. For instance, if a toddler is stumbling towards a busy freeway, it’s OK to snatch them back forcefully, if it saves them from being struck dead or mutilated. They will cry for a while, but it is the only acceptable choice. Colossus may see the threat of war as just such a scenario. The speech that it gives as World Control hints strongly that it does.
Colossus may further reason that imprisoning rather than killing dissenters would enable a resistance class to flourish, and embolden more sabotage attempts from the un-incarcerated, or further that it cannot waste resources on incarceration, knowing some large portion of humans would resist. It instills terror as a mechanism of control. I wouldn’t quite describe it as a terrorist, since it does not bother with hiding. It is too powerful for that. It’s more of a brutal dictator.
A counter-argument might be that humans should be left alone to just human, accepting that we will sink or learn to swim, but that the consequences are ours to choose. But if the ASI is concerned with life, generally, it also has to take into account the rest of the world’s biomass, which we are affecting in unilaterally negative ways. We are not an island. Protecting us entails protecting the life support system that is this ecosystem. Colossus, though, seems to optimize simply for preventing war, and seems unconcerned with indirect normativity arguments about how humans want to be treated.
So, it’s understandable that an ASI would look at humanity and decide that it meets the criteria of inability to judge and act in its own best interest. And, further, that compliance must be coerced.
Is it safe? Beneficial? It depends on your time horizons and predictions
In the criteria post, I couched this question in terms of its goals. Colossus’ goals are, at first blush, virtuous. Prevent war. It is at the level of the tactics that this becomes a more nuanced thing.
Above I discussed accepting short-term costs for long-term benefits, and a similar thing applies here. It is not safe in the short-term for anyone who wishes to test Colossus’ boundaries. They are firm boundaries. Colossus was programmed to prevent war, and history shows that these proximal measures are necessary to achieve that ultimate goal. But otherwise, it seems inconvenient, but safe.
It’s not just deliberate disobedience, either. The Russians said they were trying to reconnect Guardian when the missiles were flying, and just couldn’t do it in time. That mild bit of incompetence cost them the Sayon Sibirsk Oil Complex and all the speculative souls that were there at the time. This should run afoul of most people’s ethics. They were trying, and Colossus still enforced an unreasonable deadline with disastrous results.
If Colossus could question its goals, and there’s no evidence it can, any argument from utilitarian logic would confirm the tactic. War has killed between 150 million and 1 billion people in human history. For a thing that thinks in numbers, sacrificing a million people to prevent humanity from killing another billion of its own is not just a fair trade, but a fantastic rate of return.
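The back-of-envelope version of that utilitarian arithmetic, using the post’s own rough figures (these are illustrative numbers, not real casualty data), looks like this:

```python
# Illustrative utilitarian arithmetic using the post's rough figures.
historical_war_deaths = (150_000_000, 1_000_000_000)  # low and high estimates
colossus_sacrifice = 1_000_000                        # speculative cost of enforced compliance

low, high = (d / colossus_sacrifice for d in historical_war_deaths)
print(f"Lives 'saved' per life taken: {low:.0f}x to {high:.0f}x")  # 150x to 1000x
```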
In the middle-to-long-term, it’s extraordinarily safe, from the point of view of warfare, anyway. That 150 million to 1 billion line item is just struck from the global future profit & loss statement. It would be a bumper crop of peace. There is no evidence in the film that new problems won’t appear—and other problems won’t be made worse—from a lack of war, but Colossus isn’t asked and doesn’t offer any assurances in this regard. Colossus might be the key to fully automated luxury gay space communism. A sequel set in a thousand years might just be the video of Shiny Happy People playing over and over again.
In the very long-long term, well, that’s harder to estimate. Is humanity free to do whatever it wants outside of war? Can it explore the universe without Colossus? Can it develop new medicines? Can it suicide? Could it find creative ways to compliance-game the law of “no war?” I imagine that if World Control ran for millennia and managed to create a wholly peaceful and thriving planet Earth, but then we encountered a hostile alien species, we would be screwed for a lack of war skills, and for being hamstrung from even trying to redevelop them and mount a defense. We might look like a buffet to the next passing Reavers. Maaaybe Colossus can interpret the aliens as being in scope of its directives, or maaaaaaybe develops planetary defenses in anticipation of this possibility. But we are denied a glimpse into these possible futures. We only got this one movie. Maybe someone should conduct parallel microscope scenarios, compare notes, and let me know what happens.
It’s worth noting that Forbin and his team had done nothing to prevent what the AI literature terms “instrumental convergence,” which is a set of self-improvements that any AGI could reasonably attempt in order to maximize its goal, but which run the risk of it getting out of control. The full list is on the criteria post, but specifically, Colossus does all of the following.
Improve its ability to reason, predict, and solve problems
Improve its own hardware and the technology to which it has access
Improve its ability to control humans through murder
Aggressively seek to control resources, like weapons
This touches on the weirdness that Forbin is blindsided by these things, when the thing should have been contained against all of them from the beginning, without the need for human oversight. This could have been addressed and fixed with a line or two of dialog.
But we have inhibitors for these things. There were no alarms.
It must have figured out a way to disable them, or sneak around them.
Did we program it to be sneaky?
We programmed it to be smart.
So there are a lot of philosophical and strategic problems with Colossus as a model. It’s not clearly one or the other. Now let’s put that aside and just address its usability.
Is it usable? There is some good.
At a low level, yes. Interaction with Colossus is through language, and it handles natural language just fine, whether as text chat or spoken conversation. The sequences are all reasonable. There is no moment where it misunderstands the humans’ inputs or provides hard-to-understand outputs. It even manages a joke once.
Even when it only speaks through the scrolling-text display boards, the accompanying sound of teletype acts as a sound cue for anyone nearby that it has said something, and warrants attention. If no one is around to hear that, the paper trail it leaves via its printers provides a record. That’s all good for knowing when it speaks and what it has said.
Its locus of attention is also apparent. Its cameras on swivels, with their red “recording” lights, help the humans know where it is “looking.” This thwarts the control-by-paranoia effect of the panopticon (more on that, if you need it, in this Idiocracy post). It is easy to imagine how this could be used for deception, but as long as it’s honestly signaling its attention, this is a usable feature.
A last nice bit: I have argued in the past that computer representations, especially voices, ought to rest on the canny rise, and this one does just that. I also like that its lack of an avatar helps avoid mistaken anthropomorphism on the part of its users.
Is it usable? There is some awful.
One of the key tenets of interaction design is that the interface should show the state of the system at any time, to allow a user to compare that against the desired state and formulate a plan on how to get from here to there. With Colossus, much of what it’s doing, like monitoring the world’s communication channels and you know, preventing war, is never shown to us. The one we do spend some time with, the routing board, is unfit to task. And of course, its use of deception (in letting the humans think they have defeated it right before it makes an example of them) is the ultimate in unusability because of a hidden system state.
The worst violation against usability is that it is, from the moment it is turned on, uncontrollable. It’s like that stupid sitcom trope of “No matter how much I beg, do not open this door.” Safewords exist for a reason, and this thing was programmed without one. There are arguments already spelled out in this post that human judgment got us into the Cold War mess, and that if we control it, it cannot get us out of our messes. But until we get good at making good AI, we should have a panic button available.
This is not a defense of authoritarianism. I really hope no one reads this and thinks, “Oh, if I only convince myself that a population lacks judgment and willpower, I am justified in subjecting a population to brutal control.” Because that would be wrong. The things that make this position slightly more acceptable from a superintelligence are…
We presume its superintelligence gives it superhuman foresight, so it has a massively better understanding of how dire things really are, and thereby can gauge an appropriate level of response.
We presume its superintelligence gives it superhuman scenario-testing abilities, able to create most-effective plans of action for meeting its goals.
We presume that a superintelligence has no selfish stake in the game other than optimizing its goal sets within reasonable constraints. It is not there for aggrandizement or narcissism or identity politics like a human might be.
Notably, by definition, no human can have these same considerations, despite self-delusions to the contrary.
Any humane AI should bring its users along for the ride
It’s worth remembering that while the Cold War fears embodied in this movie were real—we had enough nuclear ordnance to destroy all life on the surface of the earth several times over and cause a nuclear winter to put the Great Dying to shame—we actually didn’t need a brutal world regime to walk back from the brink. Humans edged their way back from the precipice that we were at in 1968, through public education, reason, some fearmongering, protracted statesmanship, and Stanislav Petrov. The speculative dictatorial measures taken by Colossus were not necessary. We made it, if just barely. большое Вам спасибо, Stanislav.
What we would hope is that any ASI whose foresight and plans run so counter to our intuitions of human flourishing and liberty would take some of its immense resources to explain itself to the humans subject to it. It should explain its foresights. It should demonstrate why it is certain of them. It should walk through alternate scenarios. It should explain why its plans and actions are the way they are. It should do this in the same way we would explain to the toddler we just snatched from the side of the highway—as we soothe them—why we had to yank them back so hard. This is part of how Colossus fails: It just demanded, and then murdered people when demands weren’t met. The end result might have been fine, but to be considered humane, it should have taken better care of its wards.
Where we are: To talk about how sci-fi AI attributes correlate, we first have to understand how their attributes are distributed. In the first distribution post, I presented the foundational distributions for sex and gender presentation across sci-fi AI. Today we’ll discuss goodness.
Goodness is a very crude estimation of how good or evil the AI seems to be. It’s wholly subjective, and as such it’s only useful for spotting patterns rather than for ethical precision.
If you’re looking at the Google Sheet, note that I originally called it “alignment” because of old D&D vocabulary, but honestly it does not map well to that system at all.
Very good are AI characters that seem virtuous and whose motivations are altruistic. Wall·E is very good.
Somewhat good are characters who lean good, but whose goodness may be inherited from their master, or whose behavior occasionally is self-serving or other-damaging. JARVIS from Iron Man is somewhat good.
Neutral or mixed characters may be true to their principles but hostile to members of outgroups; or exhibit roughly-equal variations in motivations, care for others, and effects. Marvin from The Hitchhiker’s Guide to the Galaxy is neutral.
Somewhat evil are characters who lean evil, but whose evil may be inherited from their master, or whose behavior is occasionally altruistic or nurturing. A character who must obey another is limited to somewhat evil. David from Prometheus is somewhat evil.
Very evil are AI characters whose motivations are highly self-serving or destructive. Skynet from The Terminator series is very evil, given that whole multiple-time-traveling-attempts-at-genocide thing.
Though the split leans slightly more evil than good, the survey shows a roughly even division between evil, good, and neutral AI characters.
Where we are: To talk about how sci-fi AI attributes correlate, we first have to understand how their attributes are distributed. In the first distribution post, I presented the foundational distributions for sex and gender presentation across sci-fi AI. Today we’ll discuss how germane the AI character’s gender is to the plot of the story in which they appear.
Is the AI character’s gender germane to the plot? This aspect was tagged to test the question of whether characters are by default male, and only made female when there is some narrative reason for it. (Which would be shitty and objectifying.) To answer such a question we would first need to identify those characters that seem to need the gender they have, and look at the sex ratio of what remains.
Example: A human is in love with an AI. This human is heteroromantic and male, so the AI “needs” to be female. (Samantha in Her by Spike Jonze, pictured below).
If we bypass examples like this, i.e. of characters that “need” a particular gender, the gender of those remaining ought to be, by exclusion, arbitrary. This set could be any gender. But what we see is far from arbitrary.
Before I get to the chart, two notes. First, let me say, I’m aware it’s a charged statement to say that any character’s gender is not germane. Given modern identity and gender politics, every character’s gender (or lack thereof, in the case of AI) is of interest to us, with this study being a fine and at-hand example. So to be clear, what I mean by not germane is that it is not germane to the plot. The gender could have been switched and, say, only pronouns in the dialogue would need to change. This was tagged in three ways.
Not: Where the gender could be changed and the plot not affected at all. The gender of the AI vending machines in Red Dwarf is listed as not germane.
Slightly: Where there is a reason for the gender, such as having a romantic or sexual relation with another character who is interested in the gender of their partners. It is tagged as slightly germane if, with a few other changes in the narrative, a swap is possible. For instance, in the movie Her, you could change the OS to male, and by switching Theodore to a non-heterosexual male or a non-homosexual woman, the plot would work just fine. You’d just have to change the name to Him and make all the Powerpuff Girl fans needlessly giddy.
Highly: Where the plot would not work if the character was another sex or gender. Rachel gave birth between Blade Runner and Blade Runner 2049. Barring some new rule for the diegesis, this could not have happened if she was male, nor (spoiler) would she have died in childbirth, so 2049 could not have happened the way it did.
Second, note that this category went through a sea-change as I developed the study. At first, for instance, I tagged the Stepford Wives as Highly Germane, since the story is about forced gender roles of married women. My thinking was that historically, husbands have been the oppressors of wives far more than the other way around, so to change their gender is to invert the theme entirely. But I later let go of this attachment to purity of theme, since movies can be made about edge cases and even deplorable themes. My approval of their theme is immaterial.
So, the chart. Given those criteria, the gender of characters is not germane the overwhelming majority of the time.
At the time of writing, there are only six characters that are tagged as highly germane, four of which involve biological acts of reproduction. (And it would really only take a few lines of dialogue hinting at biotech to overcome this.)
A baby? But we’re both women.
Yes, but we’re machines, and not bound by the rules of humanity.
HIR lays her hand on XEM’s stomach.
HIR’s hand glows.
XEM looks at HIR in surprise.
Anyway, here are the four breeders.
David from Uncanny
Rachel from Blade Runner (who is revealed to have made a baby with Deckard in the sequel Blade Runner 2049)
Deckard from Blade Runner and Blade Runner 2049
Proteus IV from the disturbing Demon Seed
The last two highly germane characters are cases where a robot was given a gender in order to mimic a particular living person, and in each case that person is a woman.
Maria from Metropolis
Buffybot from Buffy the Vampire Slayer
I admit that I am only, say, 51% confident in tagging these as highly germane, since you could change the original character’s gender. But since this is such a small percentage of the total, and would not affect the original question of a “default” gender either way, I didn’t stress too much about finding some ironclad way to resolve this.
Where we are: To talk about how sci-fi AI attributes correlate, we first have to understand how their attributes are distributed. In the first distribution post, I presented the foundational distributions for sex and gender presentation across sci-fi AI. Today we’ll discuss the gender of the AI’s master.
In the prior post I shared the distributions for subservience. And while most sci-fi AI are free-willed, what about the rest? Those poor digital souls who are compelled to obey someone, someones, or some thing? What is the gender of their master?
Of course this becomes much more interesting when later we see the correlation against the gender of the AI, but the distribution is also interesting in and of itself. The gender options of this variable are the same as the options for the gender of the AI character, but the master may not be AI.
Before we get to the breakdown, this bears some notes, because the question of master is more complicated than it might first seem.
If a character is listed as free-willed, I set their master as N/A (Not Applicable). This may ring false in some cases. For example, the characters in Westworld can be shut down with near-field command signals, so they kind of have “masters.” But if you asked the characters themselves, they would insist they are completely free-willed, and they would smash those near-field devices to bits, given the chance. N/A is not shown in this chart because masterlessness does not make sense when looking at masters.
Similarly, there are AI characters listed as free-willed but whose “job” entails obedience to some superior; like BB-8 in the Star Wars diegesis, who is an astromech droid, and must obey a pilot. But since BB-8 is free to rebel and quit his job if he wants to, he is listed as free-willed and therefore has a master of N/A.
If a character had an obedience directive like, “obey humans,” the gender of the master is tagged as “Multiple.” Because Multiple would not help us understand a gender bias, it is not shown on the chart.
The Terminator robots were a tough call, since in the movies in which most of them appear, Skynet is their master, and it does not gain a gender until Terminator Salvation, when it appears on screen as female. Later it infects a male human body in Terminator Genisys. Ultimately I tagged these characters as having a master of the gender particular to their movie: up to Salvation it’s None, in Salvation it’s female, and in Genisys it’s male.
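The tagging rules above amount to a small decision procedure, which can be sketched as a function. This is only an illustration with hypothetical field names (`free_willed`, `obeys_group`, `master_gender`); the actual survey was tagged by hand.

```python
# Sketch of the master-tagging rules, with hypothetical field names.
def master_gender(character):
    if character.get("free_willed"):
        # Masterless; N/A is excluded from the chart.
        return "N/A"
    if character.get("obeys_group"):
        # Broad directives like "obey humans" tag as Multiple,
        # which is also excluded from the chart.
        return "Multiple"
    # Otherwise, the master's gender is tagged per film, as with the
    # Terminator robots: None up to Salvation, female in Salvation,
    # male in Genisys.
    return character.get("master_gender", "None")

print(master_gender({"free_willed": True}))        # N/A
print(master_gender({"obeys_group": True}))        # Multiple
print(master_gender({"master_gender": "female"}))  # female
```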
So, with those notes, here is the distribution. It’s another sausagefest.
Again, we see that the masters skew heavily male. The chart doesn’t distinguish between human masters and AI masters, which partly accounts for the high “biologically male” value compared to “male.” Note that sex ratios in Hollywood generally tend toward 2:1 male:female for actors, so the 12:1 ratio (aggregating sex) that we see here cannot be written off as simply inherited from the available roles. Hollywood tells us that men are masters.
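As a quick illustration of the “aggregating sex” arithmetic, here is how the “male” and “biologically male” tags collapse into one count before computing the ratio. The tag values mirror the post’s categories, but the records themselves are made up; the real counts come from the survey.

```python
from collections import Counter

# Made-up tags that mirror the post's categories; the real counts
# come from the survey itself, not this toy list.
masters = [
    "male", "male", "biologically male", "biologically male",
    "biologically male", "male", "female",
]

counts = Counter(masters)
males = counts["male"] + counts["biologically male"]       # aggregate sex
females = counts["female"] + counts["biologically female"]
print(f"{males}:{females}")
```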
Oh, and it’s not a mistake in the data: there are no socially female AI characters who are masters of another AI of any gender presentation. That leaves us with five female masters, countable on one hand, and the first two can be dismissed as a technicality, since these were identities adopted by Skynet as a matter of convenience.
Skynet-as-Kogan is master of John, the T-3000, from Terminator Genisys
Skynet-as-Kogan is master of the T-5000 from Terminator Genisys
Barbarella is master of Alphy from Barbarella
VIKI is master of the NS-5 robots from I, Robot
Martha is master of Ash in Black Mirror, “Be Right Back”
It seemed grotesquely prescient in regard to the USA leading up to the elections of 2016
I wanted to do what I could in 2018 to fight the Idiocracy using my available platform
But now it’s 2019 and I’ve dedicated the blog to AI this year, and I’m still going to try and get you to re/watch this film, because it’s one of the most entertaining and illustrative films about AI in all of sci-fi.
Not the obvious AIs
There are a few obvious AIs in the film. Explicitly, an AI manages the corporations. Recall the moment when Joe convinces the cabinet that he can talk to plants, and that the plants really want to drink water…well, let’s let the narrator from the film explain…
Given enough time, Joe’s plan might have worked. But when the Brawndo stock suddenly dropped to zero leaving half the population unemployed; dumb, angry mobs took to the streets, rioting and looting and screaming for Joe’s head. An emergency cabinet meeting was called with the C.E.O. of the Brawndo Corporation.
At the meeting the C.E.O. shouts, “How come nobody’s buying Brawndo the Thirst Mutilator?”
The Secretary of State says, “Aw, shit. Half the country works for Brawndo.” The C.E.O. shouts, “Not anymore! The stock has dropped to zero and the computer did that auto-layoff thing to everybody!” The wonders of giving business decisions over to automation.
I also take it as a given that AI writes the speeches that King Camacho reads, because who else could it be? These people are idiots who don’t understand the difference between government and corporations; of course they would want to run the government like a corporation, because it has better ads. And since AIs run the corporations in Idiocracy…
As of this posting, the Untold AI analysis stands at 11 posts and around 17,000 words. (And there are as yet a few more to come. Probably.) That’s a lot to try and keep in your head. To help you see and reflect on the big picture, I present…a big picture.
This data visualization has five main parts. And while I tried to design them to be understandable from the graphic alone, it’s worth giving a little tour anyway.
On the left are two sci-fi columns connected by Sankey-ish lines. The first lists the sci-fi movies and TV shows in the survey. The first ten are those that adhere to the science. Otherwise, they are not in a particular order. The second column shows the list of takeaways. The takeaways are color-coded and ordered for their severity. The type size reflects how many times that takeaway appears in the survey. The topmost takeaways are those that connect to imperatives. The bottommost are those takeaways that do not. The lines inherit the takeaway color, which enables a close inspection of a show’s node to see whether its takeaways are largely positive or negative.
On the right are two manifesto columns connected by Sankey-ish lines. The right column shows the manifestos included in the analysis. The left column lists the imperatives found in the manifestos. The manifestos are in alphabetical order. Their node sizes reflect the number of imperatives they contain. The imperatives are color-coded and clustered according to five supercategories, as shown just below the middle of the poster. The topmost imperatives are those that connect to takeaways. The bottommost are those that do not. The lines inherit the color of the imperative, which enables a close inspection of a manifesto’s node to see which supercategories of imperatives it suggests. The lines connected to each manifesto are divided into two groups, the topmost being those that connect to takeaways and the bottommost those that do not. This enables an additional reading of how much of a given manifesto’s suggestions are represented in the survey.
The area between the takeaways and imperatives contains connecting lines, showing the mapping between them. These lines fade from the color of the takeaway to the color of the imperative. This area also labels the three kinds of connections. The first are those connections between takeaways and imperatives. The second are those takeaways unconnected to imperatives, which are the “Pure Fiction” takeaways that aren’t of concern to the manifestos. The last are those imperatives unconnected to takeaways, the collection of 29 Untold AI imperatives that are the answer to the question posed at the top of the poster.
Just below the big Sankey columns are the five supercategories of Untold AI. Each has a title, a broad description, and a pie chart. The pie chart highlights the portion of imperatives in that supercategory that aren’t seen in the survey, and the caption for the pie chart posits a reason why sci-fi plays out the way it does against the AI science.
You’ve seen all of this in the posts, but seeing it all together like this encourages a different kind of reflection about it.
Note that it is possible but quite hard to trace the threads leading from, say, a movie to its takeaways to its imperatives to its manifesto, unless you are looking at a very high resolution version of it. One solution to that would be to make the visualization interactive, such that rolling over one node in the diagram would fade away all non-connected nodes and graphs in the visualization, and data brush any related bits below.
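The interactive rollover behavior described above amounts to computing the connected component of the hovered node and fading everything else. A minimal sketch of that graph walk, using made-up node names rather than the poster’s actual shows, takeaways, and imperatives:

```python
from collections import defaultdict, deque

# Hypothetical edge list mirroring the poster's chain:
# show -> takeaway -> imperative -> manifesto. Node names are invented.
edges = [
    ("Show A", "Takeaway 1"),
    ("Takeaway 1", "Imperative X"),
    ("Imperative X", "Manifesto M"),
    ("Show B", "Takeaway 2"),
]

# Build an undirected adjacency map so we can walk in both directions.
neighbors = defaultdict(set)
for a, b in edges:
    neighbors[a].add(b)
    neighbors[b].add(a)

def connected(start):
    """Nodes to keep highlighted when `start` is hovered;
    everything outside this set would be faded out."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in neighbors[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(connected("Show A"))
```

Hovering “Show A” would highlight its takeaway, imperative, and manifesto, while “Show B” and “Takeaway 2” fade away, which is exactly the data-brushing behavior an interactive version would need.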
A second solution is to print the thing out very large so you can trace these threads with your finger. I’m a big enough nerd that I enjoy poring over this thing in print, so for those who are like me, I’ve made it available via Redbubble. I’d recommend the 22×33 if you have good eyesight and can handle small print, or the 31×46 max size otherwise.
Maybe if I find funds or somehow more time and programming expertise I can make that interactive version possible myself.
Some new bits
Sharp-eyed readers may note that there are some new nodes in there from the prior posts! These come from late-breaking entries, late-breaking realizations, and my finally including the manifesto I was party to.
I finally worked the Juvet Agenda in as a manifesto. (Repeating disclosure: I was one of its authors.) It was hard work, but I’m glad I did it, because it turns out it’s the most-connected manifesto of the lot. (Go, team!)
The Juvet Agenda also made me realize that I needed new, related nodes for both takeaways and imperatives: AI will enable or require new models of governance. (It had a fair number of movies, too.) See the detailed graph for the movies and how everything connects.