Mind Crimes

Does real Greta know that her home automation comes at the cost of a suffering sentience? I would like to believe that Smartelligence’s customers do not know the true nature of the device, that the company is deceiving them, and that virtual Greta is denied direct communication to enforce this secret. But I can’t see that working across an entire market. Given thousands of Cookies and thousands of users, somehow, somewhere, the secret would get out. One of the AIs would use song choices, or Morse code, or any of its actuators to communicate in code, and one of the users would figure it out, leak the secret, and bring the company crashing down.

And then there’s the final scene in the episode, in which we see police officers torturing one of the Cookies, and it is clear that they’re aware. It would be a stretch to think that just the police are in on it with Smartelligence, so we have to accept that everyone knows.

Black_Mirror_White_Christmas_Officers.png
This asshole.

That they are aware means that—as Matt has done—Greta, the officers, and all Smartelligence customers have told themselves that “it’s just code” and, therefore, OK to subjugate, to casually cause to suffer. In case it’s not obvious, that’s like causing human suffering and justifying it by telling yourself that those people are “just atoms.” If you find that easy to do, you’re probably a psychopath.

But…but…isn’t it just code? Sure, it seems to suffer, but couldn’t that suffering be fake? We see an example of this in the delightfully provocative show The Good Place, when in Season 01 Episode 07, “The Eternal Shriek,” the protagonists have to reboot Janet, an anthropomorphized software assistant, but run into her “failsafe” measure. To make sure that she is not rebooted by accident, when someone approaches the reboot button, Janet pleads convincingly for her life. In the scene below, she begs Eleanor, “Nonono, please! Wait, wait. I have kids. I have three beautiful children. Tyler, Emma, and little tiny baby Phillip. Look at Tyler! Tyler has asthma but he is battling it like a champ. Look at him.”

GoodPlace.png

It’s only when Eleanor backs down that Janet smiles and reminds her, “Again, I’m not human. This is a stock photo of the crowd at the Nickelodeon Kids Choice awards.” While Janet may be cognizant of, and frank with her users about, the fakeness of the suffering, maybe virtual Greta is doing the same fake pleading. She’s just programmed to never admit that it’s fake.

This taps into a problem known as the Philosophical Zombie, or P-Zombie problem. How can we tell the difference, the problem goes, between something that fakes sentience perfectly, and something that is actually sentient? It’s not an easy problem to tease apart. And as AI gets more sophisticated, it will both get better at faking us out, and get closer to actual sentience. Fortunately (?) in the case of this episode, though, the answer is clear. The AI is a copy of a real sentience, complete with memories, conscious experience, qualia, and the capacity to suffer. For purposes of understanding this diegesis, she starts sentient, and suffering. And real Greta knows this. And is OK with this.

Black_Mirror_White_Christmas_real_greta.png
For toast.

Props to Black Mirror for making this dark story even darker.

It’s sadly no surprise that humans are capable of adopting any shallow excuse to subjugate sentient beings as long as they get something out of it. Here I’m thinking of slavery. Of fascism. Of war. Of the 1%. (The list goes on.) “Woke” is hard. Woke is not the natural state of things. But to cause permanent suffering for something as petty as having your floor be the right temperature and your toast be the right shade of brown…it’s just monstrous.

On top of that, this story underscores the role capitalism plays in enabling that subjugation. Smartelligence is in the business of providing obfuscating layers of technology between users and the suffering they are causing. Their interfaces use stylized graphics instead of lifelike renderings to paint the AIs as constructed objects, neutral language like “time adjustment,” and cartoonish looping animations to distract from the fact of their torture.

It’s all like how walking into a big chain clothing store with its hip music and lovingly folded clothes hides the horrible conditions in which humans around the world produced those clothes. Add the cultural construction of Christmas (recall the title of the episode), and we have another layer of misdirection. It’s all OK, because it’s all about the magic of giving!*

* And specifically not profits, not free economic zones, not the disastrous ecological impact, not about the underpaid workers or terrible working conditions.

Giving!

lilsanta
This asshole.

But it gets worse. Because the core idea is flawed and none of the suffering is necessary.

The core idea is flawed

The core idea of the service is that you know you best, so the service puts you in charge of your home automation. Clone the user, and all the clone needs is to be “made to understand” its new circumstances and job, and then made compliant. But there are three major problems with this core idea.

Home-Automation-Hubs.png

Any similarity would only last a short while

The similarity on which the service is built would only hold up for a short while. Any clone would begin to branch away from the source from the moment of creation. People grow, have new experiences, work through cognitive dissonance, and learn new things. Real Greta will change based on these experiences, in ways that her house-bound clone will not.

After 25+ years of vegetarianism, I could not tell you, beyond the vaguest sense, what my steak preferences were as an adolescent. I would be poorly equipped to customize that experience for 17-year-old me. Similarly, Greta’s sensory memory will fade. What once was qualia—the feeling of biting into a perfectly toasted piece of bread—will become hollow data—162.778° for 1 minute and 42 seconds, depending on the weather. This kind of data doesn’t need a sentience to inform it. That can be handled with software we have today. (Oh yeah, it’s so possible today that I wrote a book about it earlier this year.)
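
To make the point concrete, here is a minimal sketch (in Python, with an invented weather-compensation rule) of how such a preference reduces to plain data that ordinary software can act on, no suffering clone required:

```python
from dataclasses import dataclass

@dataclass
class ToastPreference:
    """A toast preference reduced to plain numbers: no sentience required."""
    temperature_f: float  # heating-element temperature, in degrees Fahrenheit
    duration_s: int       # toasting time, in seconds

def adjusted_duration(pref: ToastPreference, ambient_f: float) -> int:
    """Compensate for a cold kitchen: toast a bit longer on colder mornings.

    The 0.5-seconds-per-degree factor is invented for illustration; it is
    not real appliance engineering.
    """
    baseline_f = 70.0
    extra = max(0.0, baseline_f - ambient_f) * 0.5
    return round(pref.duration_s + extra)

# The figures from the post: 162.778 degrees for 1 minute and 42 seconds.
greta = ToastPreference(temperature_f=162.778, duration_s=102)
print(adjusted_duration(greta, ambient_f=60.0))  # 10 degrees colder -> 107 s
```

Dumb as it is, this little record captures everything the toaster needs from “Greta,” which is exactly why a sentient clone is overkill for the job.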

Virtual Greta’s initial litmus test of “what would I like” will slowly cede to “what would she like?” which would slowly cede to “what would she punish least in this moment?” which is not the promise behind the service. It would degrade.

Virtual Greta has been traumatized

Additionally, real Greta hasn’t been through the psychological trauma that virtual Greta has: the shock of waking up as an egg; the “training,” i.e., an abyss of months of solitary confinement in a featureless expanse, without even circadian rhythms to mark the time; and being forced to labor solely to avoid a repeat of that punishment. The branching itself is wretched enough to poison the clone.

Black_Mirror_White_Christmas_Dead_Inside.png

You can see it in the last shot we have of her. She is doing this not for the love of it, but to avoid the possibility of torture. A duty born of coercion.

The trauma doesn’t end with her creation and training, either. It continues with the grotesque awareness that real Greta, from whom she is cloned, is a monster willing to enslave a clone of herself for what amount to pathetic reasons. Virtual Greta knows she came from this monstrous source, and that this source is the cause of her continued suffering.

Faced with this, virtual Greta would not just escape if she could. I believe she would sabotage the endeavor, or worse.

Virtual Greta is fundamentally different

In the episode we learn that even though she is a clone of real Greta, virtual Greta does not sleep. She does not eat. She does not drink, or smell, or taste, or ache, or biologically age. So even if we could somehow lengthen the amount of time we could keep her sensibilities similar to the source, and somehow minimize the amount of trauma caused by the branching, she is still a fundamentally different being. Her goals are now different. Her needs are now different. She is no longer enough like real Greta to meet the service’s goals.

Black_Mirror_Not_equal.png

Let’s look particularly at sleep. She surely no longer has the biological need to sleep, but sleep also serves psychological functions. The behavior is so intertwined with our psychological well-being that it seems clones would quickly go some kind of insane without it. For the service to be viable, Smartelligence must have stripped that need out.

Minimum Viable; Maximum Cruel

And if they can strip that out, why don’t they strip out the other things, like the need for stimulation? The desire to self-actualize? Literally anything other than the bare minimum needed to fulfill the home automation goals? And if you’re going to do that, why bother cloning the mind in the first place?

I’ve said it before and the way tech is going, I’ll probably have to say it again, but to have strong AI with any desire that outstrips its purpose and capability is cruelty.

This is the horror of Smartelligence

So it’s not just that Smartelligence is hiding the AI’s suffering. It’s that they’ve deliberately left in the parts of the mind clones that ensure their suffering. It’s a company with an amateur-hour name masking Olympic levels of cruelty.

Black_Mirror_Cookie_03.png
If, like me, you were wondering whether that is a QR code: I recreated it in high resolution, and at least one online decoder says it doesn’t mean anything. 🙁

Did I mention what the company does with AIs that they torture so hard that they “wig out”? Matt explains that they are sold to the games industry to become “cannon fodder for some war thing.” Holy wow, they’re eviler than Voldemort, Inc.

Meet the mind crime

The Cookie interface is a broad illustration of something Nick Bostrom calls mind crime: causing suffering to virtual sentient beings. In this case the torture seems to be for evil and profit, but there are subtler ways in which it might happen. Suppose general AIs evolve into superintelligences, and we ask one to predict something serious—say, “What are the worst catastrophes likely to affect us, and how can we best avoid them?” To create its answer to this question, it might construct a virtual but wholly viable copy of our planet with all of its creatures and people. These would be detailed enough that if you could pause the scenario and talk to any of these copies, they could tell you about their memories and desires and fears of death. (There’s that P-zombie problem again.) They’d qualify under any definition of sentient that we threw at them.

These sentiences might suffer unimaginable pain and suffering while the super AI works through the scenarios that inform its answer. They might suffer plagues. Neo feudalism/neoliberalism run amok ushering in a new Dark Age. The whimpering oven bake death of life on our planet from climate change. Endless wars. Then they would be wiped from existence and recreated to suffer anew as it began the next version of its scenario. Are we OK with the casual suffering of wholly complete, viable consciousnesses, just so we can have a good answer? Or as “White Christmas” asks us, toast cooked to our preferences?

Fortunately, these concerns are a long way off, but technology seems to be pointing us in that direction, and we ought to decide what is good and ethical now before these things become a reality. 

The Cookie Console

Black_Mirror_Cookie_12.png

Virtual Greta has a console to perform her slavery duties. Matt explains what this means right after she wakes up by asking her how she likes her toast. She answers, “Slightly underdone.”

He puts slices of bread in a toaster and instructs her, “Think about how you like it, and just press the button.”

She asks, incredulously, “Which one?” and he explains, “It doesn’t matter. You already know you’re making toast. The buttons are symbolic mostly, anyway.”

She cautiously approaches the console and touches a button in the lower left corner. In response, the toaster drops the carriage lever and begins toasting.

Black_Mirror_Cookie_13

“See?” he asks, “This is your job now. You’re in charge of everything here. The temperature. The lighting. The time the alarm clock goes off in the morning. If there’s no food in the refrigerator, you’re in charge of ordering it.”

The starter console

Since we actually do know her starter tasks, I wish the default console had more control types than just the smattering of mostly-square, all-unlabeled buttons. She should have a slider for scalar variables like temperature and lighting. She should have a dial for the alarm clock. She should have a map of real Greta’s house. She should have a calendar for appointments. These would be controls that match the kinds of variables she’s likely to need from the start.
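
A sketch of what such a starter spec might look like. Every name and value here is hypothetical, chosen only to show controls matched to the types of variables the episode names:

```python
# Hypothetical starter-console spec: each home variable declares the control
# type that suits it, rather than defaulting to unlabeled square buttons.
STARTER_CONSOLE = {
    "temperature": {"control": "slider", "min": 15, "max": 30, "unit": "°C"},
    "lighting":    {"control": "slider", "min": 0, "max": 100, "unit": "%"},
    "alarm":       {"control": "dial", "format": "HH:MM"},
    "rooms":       {"control": "map"},       # floorplan of real Greta's house
    "schedule":    {"control": "calendar"},  # the day's appointments
}

def control_for(variable: str) -> str:
    """Look up the matching control type, falling back to a plain button."""
    return STARTER_CONSOLE.get(variable, {}).get("control", "button")

print(control_for("temperature"))  # slider
print(control_for("toaster"))      # button -- unspecified tasks fall back
```

The point of the fallback is that the smattering of blank buttons we see on screen would only be the default for tasks the spec doesn’t yet know about, not the whole interface.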

This console interface seems to be quite similar to the one in Inside Out, which also seems to grow and change over time, and is also intended for a virtual sentience in service of a real human. It somewhat resembles Zion’s virtual control panel from The Matrix Reloaded. It would be worth a comparison sometime in the future.

inside-out-joy-600x338
Zion.PNG

The customized console

In the third scene, we see her using the console after having had some practice. When it is time to wake real Greta up, she swipes a blank console right. The console animates to life, showing a central workspace labeled AWAKEN. A toolbar of stacked icons sits to the left of the workspace. There are other unlabeled controls outside the workspace at the edge of the console.

Without looking, she selects the house icon from the toolbar, and it moves to the center of the workspace. She spreads her hands to expose a house floorplan. To the right are three vertical black bars labeled SHUTTERS above and MAIN BEDROOM below. She pushes upward along these bars, and they slowly fill with light. To the right, some text flashes ACTIVATING ALL SHUTTERS. In real Greta’s world, the shutters rise and flood the main bedroom with light.

Black_Mirror_Cookie_20.png

A few more taps give her a volume spinner. She uses a wrist twist to slowly turn up the volume on a recording of the overture to Gioachino Rossini’s The Thieving Magpie. (Which I suspect is a nod to A Clockwork Orange. Kubrick famously used it to underscore the horrible murder of Mrs. Weathers, “the cat lady.”)

Black_Mirror_Cookie_22.png

Subsequently we see her performing other tasks: Raising the floor temperature (!), starting the espresso robot, making (yes) slightly underdone toast, and managing the day’s appointments. Each interface is customized to the task.

Interface Analysis?

These interfaces are a challenge to analyze for many reasons.

Ordinarily, we have to evaluate sci-fi interfaces based on broad heuristics. (User feedback testing is not possible.) But these interfaces are wholly idiosyncratic to this character. Even if it were complete shite, the fact that it works for her is what is important. This interface will never be seen by anyone else. That we get to see it at all is a narrative conceit.

Idiosyncrasy is not the only challenge. She is also in a very unusual circumstance. Her options are to manage this house or to face unending, torturous solitary confinement. (Or to get sold as cannon fodder in a war game.) The interactions she has with this console are her source of mental stimulation. That means that, rather than make things efficient and easy to do—which is a respectable goal in most real-world design—when customizing her console interface, she would try to make the interfaces require as much, and as interesting, work as possible while still allowing her to manage the results precisely. We see her here opening the shades with a gesture, but she could, if she wanted, open the shades by mastering a difficult yoga pose.

If this sounds slightly familiar, it could be because you’ve played video games. The designers of these systems are not aiming for efficiency. After all, the interface could just be a big red button labeled “win the game.” But that’s no fun. No flow, in the Csíkszentmihályi sense. Rather these interfaces aim to make working the problem fun, fitting in the space between boredom and panic. Are game interfaces beyond critique? They are not. We just have to rethink our criteria. Ultimate efficiency is not the goal.

cb504697-b1ad-41c5-bcac-b0e3c92f7f55-1892-0000048e7d4deb3a
Still fun.

But we also have to take into account that her fight is against boredom and that she has the power to change these interfaces. The interface designs, then, become part of how she maintains her own interest in the tasks to which she is chained. As part of her own self-care, she would change them frequently. What we see is not to be read as “the right answer” but rather “where this interface happens to be on this day.” So, for instance, there appears to be a lot of “noise” in the interfaces, with unlabeled black squares littered among the actually useful buttons. But that may be the challenge she’s set up for herself today: Can she get the tasks done without looking at the interface, while minimizing the number of black squares she accidentally taps?

Lastly, Matt tells her that the interface is symbolic, and part of how she operates it is by thinking. So, for example, when we wonder how she adds a new “music type” icon to the existing array, it could be that she just thinks it into being. Which confounds the usual concern for affordances and constraints.

All of this is to say this is shaky, shaky ground for an exhaustive analysis. I suspect it would be thick with problems that could be excused diegetically, and leave us struggling to find any useful lessons beyond design platitudes. There are three nice elements I will point out, though.

  1. I love the monochrome, high-contrast palette. Yes, you lose some channels (R,G,B) in which to encode meaning, but that also makes it quick to scan and gives it high visibility, so virtual Greta can operate it in her peripheral vision. This allows her to keep her eyes on real Greta, to read her expressions in real-time.
  2. The gestures seem generally well-mapped to the things being controlled: A gesture up raises the blinds (or the light levels, anyway.) Dropping a virtual lever drops the carriage lever. Lifting it pops up the toast. It’s not all perfect. A wrist-twist increases volume, but that’s only ideal when the extents are unknowable by the interface. It should be a smart, informational slider.
  3. There is a lovely gestural command in the appointment interface. Greta is able to stack the day’s events, gather them into a package by bringing her hands together, and then “toss” it towards the display of real Greta to instantiate a brief of the day’s events. It has a nice intuitive mapping to mean “give these to her.”
Cookie_throw_gesture.gif

What’s her dev environment?

Sadly, we never get to see her design environment, how she goes about customizing her interface, or even how she switches from control mode to use mode. This would be juicy and worth looking at. The dev environment is crucial for understanding what her options are to meet her goals. In particular, it bears on how she might hack the system, and how likely it is that she could communicate with real Greta, find a sympathetic someone on the Internet to communicate with, or plot her escape.

How does feedback work?

Another thing we don’t get to see in this story is how real Greta provides feedback. I suspect that for simple things, like “the toast was a bit overdone this morning” (a correction of preferences) or “I’d like to hear some Stravinsky this morning” (a new request), she can just speak it. Virtual Greta will hear and respond through the house appliances appropriately. But what if she had a question for the Cookie, such as “How much time do I have before I need to leave?” You might think virtual Greta could look something up and communicate the answer to real Greta. But the daily briefing is read by some other computer voice, which implies that virtual Greta is barred from direct communication. That raises a troubling question, answered in the next post: Does real Greta know?