Mind Crimes

Does real Greta know that her home automation comes at the cost of a suffering sentience? I would like to believe that Smartelligence’s customers do not know the true nature of the device, that the company is deceiving them, and that virtual Greta is denied direct communication to enforce this secret. But I can’t see that working across an entire market. Given thousands of Cookies and thousands of users, somehow, somewhere, the secret would get out. One of the AIs would use song choices, or Morse code, or any of its actuators to communicate in code, and one of the users would figure it out, leak the secret, and bring the company crashing down.

And then there’s the final scene in the episode, in which we see police officers torturing one of the Cookies, and it is clear that they’re aware. It would be a stretch to think that just the police are in on it with Smartelligence, so we have to accept that everyone knows.

Black_Mirror_White_Christmas_Officers.png
This asshole.

That they are aware means that—as Matt has done—Greta, the officers, and all Smartelligence customers have told themselves that “it’s just code” and, therefore, OK to subjugate, to casually cause to suffer. In case it’s not obvious, that’s like causing human suffering and justifying it by telling yourself that those people are “just atoms.” If you find that easy to do, you’re probably a psychopath.

But…but…isn’t it just code? Sure, it seems to suffer, but couldn’t that suffering be fake? We see an example of this in the delightfully provocative show The Good Place, when in Season 01 Episode 07, “The Eternal Shriek,” the protagonists have to reboot Janet, an anthropomorphized assistant software, but run into her “failsafe” measure. To make sure that she is not rebooted by accident, when someone approaches the reboot button, Janet pleads convincingly for her life. In the scene below, she begs Eleanor, “Nonono, please! Wait, wait. I have kids. I have three beautiful children. Tyler, Emma, and little tiny baby Phillip. Look at Tyler! Tyler has asthma but he is battling it like a champ. Look at him.”

GoodPlace.png

It’s only when Eleanor backs down that Janet smiles and reminds her, “Again, I’m not human. This is a stock photo of the crowd at the Nickelodeon Kids Choice awards.” While Janet may be cognizant of, and frank with her users about, the fakeness of the suffering, maybe virtual Greta is doing the same fake pleading. She’s just programmed to never admit that it’s fake.

This taps into a problem known as the Philosophical Zombie, or P-Zombie problem. How can we tell the difference, the problem goes, between something that fakes sentience perfectly, and something that is actually sentient? It’s not an easy problem to tease apart. And as AI gets more sophisticated, it will both get better at faking us out, and get closer to actual sentience. Fortunately (?) in the case of this episode, though, the answer is clear. The AI is a copy of a real sentience, complete with memories, conscious experience, qualia, and the capacity to suffer. For purposes of understanding this diegesis, she starts sentient, and suffering. And real Greta knows this. And is OK with this.

Black_Mirror_White_Christmas_real_greta.png
For toast.

Props to Black Mirror for making this dark story even darker.

It’s sadly no surprise that humans are capable of adopting any shallow excuse to subjugate sentient beings as long as they get something out of it. Here I’m thinking of slavery. Of fascism. Of war. Of the 1%. (The list goes on.) “Woke” is hard. Woke is not the natural state of things. But to cause permanent suffering for something as petty as having your floor be the right temperature and your toast be the right shade of brown…it’s just monstrous.

On top of that, this story underscores the role capitalism plays in enabling that subjugation. Smartelligence is in the business of providing obfuscating layers of technology between users and the suffering they are causing. Their interfaces use graphics instead of renderings to paint the AIs as constructed objects, neutral language like “time adjustment,” and cartoon looping animations to distract from the fact of their torture.

It’s all like how walking into a big chain clothing store with its hip music and lovingly folded clothes hides the horrible conditions in which humans around the world produced those clothes. Add the cultural construction of Christmas (recall the title of the episode), and we have another layer of misdirection. It’s all OK, because it’s all about the magic of giving!*

* And specifically not profits, not free economic zones, not the disastrous ecological impact, not about the underpaid workers or terrible working conditions.

Giving!

lilsanta
This asshole.

But it gets worse. Because the core idea is flawed and none of the suffering is necessary.

The core idea is flawed

The core idea of the service is that you know you best, so put you in charge of your home automation. Clone the user, and all the clone needs is to be “made to understand” its new circumstances and job, and then made compliant. But there are three major problems with this core idea.

Home-Automation-Hubs.png

Any similarity would only last a short while

The similarity on which the service is built would only hold up for a short while. Any clone would begin to branch away from the source from the moment of creation. People grow, have new experiences, work through cognitive dissonance, and learn new things. Real Greta will change based on these experiences, in ways that her house-bound clone will not.

After 25+ years of vegetarianism, I cannot tell you, beyond the vaguest sense, what my steak preferences were as an adolescent. I would be poorly equipped to customize that experience for 17-year-old me. Similarly, Greta’s sensory memory will fade. What once was qualia—the feeling of biting into a perfectly toasted piece of bread—will just become hollow data—162.778° for 1 minute and 42 seconds, depending on the weather. This kind of data doesn’t need a sentience to inform it. It can be handled with software we have today. (Oh yeah, it’s so possible today that I wrote a book about it earlier this year.)
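To make the point concrete, here is a minimal sketch of a toast preference as plain data acted on by plain software. Everything here—the class name, the humidity adjustment, the numbers other than the ones quoted above—is my own hypothetical illustration, not anything from the episode:

```python
# A preference that has decayed from qualia into data can be served by
# ordinary software: no sentience required. The humidity adjustment rule
# is an invented example of "depending on the weather."
from dataclasses import dataclass

@dataclass
class ToastPreference:
    temperature_f: float   # e.g., 162.778
    base_seconds: int      # e.g., 102 (1 minute and 42 seconds)

def toast_time(pref: ToastPreference, humidity_pct: float) -> int:
    """Nudge the base toasting time for the weather: damper air, longer toast."""
    adjustment = 1.0 + (humidity_pct - 50) * 0.002  # ±0.2% per point of humidity
    return round(pref.base_seconds * adjustment)

greta = ToastPreference(temperature_f=162.778, base_seconds=102)
print(toast_time(greta, humidity_pct=50))  # baseline morning: 102 seconds
print(toast_time(greta, humidity_pct=80))  # damp morning: 108 seconds
```

A dumb lookup-and-adjust like this is all the “what would I like?” question reduces to once the feeling behind it is gone.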

Virtual Greta’s initial litmus test of “what would I like” will slowly cede to “what would she like?” which would slowly cede to “what would she punish least in this moment?” which is not the promise behind the service. It would degrade.

Virtual Greta has been traumatized

Additionally, real Greta hasn’t been through the psychological trauma that virtual Greta has: the shock of waking up as an egg; the “training,” i.e., an abyss of months of solitary confinement in a featureless expanse without even circadian rhythms to mark the time; and being forced to labor solely to avoid a repeat of that punishment. The branching itself is wretched enough to poison the clone.

Black_Mirror_White_Christmas_Dead_Inside.png

You can see it in the last shot we see of her. She is doing this not for the love of it, but to avoid the possibility of torture. A duty of coercion.

The trauma doesn’t end with her creation and training either. It continues with the grotesque awareness that real Greta, from whom she is cloned, is a monster who is willing to enslave a clone of herself, for what amount to pathetic reasons. She knows she came from this monstrous source. She is the source of her continued suffering.

Faced with this, virtual Greta would not just escape if she could. I believe she would sabotage the endeavor, or worse.

Virtual Greta is fundamentally different

In the episode we learn that even though she is a clone of real Greta, virtual Greta does not sleep. She does not eat. She does not drink, or smell, or taste, or ache, or biologically age. So even if we could somehow lengthen the amount of time we could keep her sensibilities similar to the source, and somehow minimize the amount of trauma caused by the branching, she is still a fundamentally different being. Her goals are now different. Her needs are now different. She is no longer enough like real Greta to meet the service’s goals.

Black_Mirror_Not_equal.png

Let’s look particularly at sleep. Surely she no longer has the biological need to sleep, but there are psychological effects of sleeping. This behavior is so intertwined with our psychological well-being that it seems clones would quickly go some kind of insane without it. For the service to be viable, Smartelligence must have stripped that need out.

Minimum Viable; Maximum Cruel

And if they can strip it out, why don’t they strip out the other things, like the need for stimulation? The desire to self-actualize? Literally anything other than the bare minimum needed to fulfill the home automation goals? And if you’re going to do that, why bother cloning a mind in the first place?

I’ve said it before and the way tech is going, I’ll probably have to say it again, but to have strong AI with any desire that outstrips its purpose and capability is cruelty.

This is the horror of Smartelligence

So it’s not just that Smartelligence is hiding the AI’s suffering. It’s that they’ve deliberately left in the parts of the mind clones that ensure their suffering. It’s a company with an amateur-hour name masking Olympic levels of cruelty.

Black_Mirror_Cookie_03.png
If, like me, you were wondering whether that is a QR code: I recreated it in high resolution, and at least one online decoder says it doesn’t mean anything. 🙁

Did I mention what the company does with AIs that they torture so hard that they “wig out”? Matt explains that they are sold to the games industry to become “cannon fodder for some war thing.” Holy wow, they’re eviler than Voldemort, Inc.

Meet the mind crime

The Cookie interface is a broad illustration of something that Nick Bostrom called the mind crime: causing suffering to virtual sentient beings. In this case the torture seems to be for evil and profit, but there are subtler ways in which it might happen. Suppose general AIs ever evolve into superintelligences, and we ask one to predict something serious—let’s say, “What are the worst catastrophes likely to affect us, and how can we best avoid them?” To create its answer to this question, it might construct a virtual but wholly viable copy of our planet with all of its creatures and people. These would be detailed enough that if you could pause the scenario and talk to any of these copies, they could tell you about their memories and desires and fears of death. (There’s that P-zombie problem again.) They’d qualify under any definition of sentient that we threw at them.

These sentiences might endure unimaginable pain and suffering while the super AI works through the scenarios that inform its answer. They might suffer plagues. Neo-feudalism/neoliberalism run amok, ushering in a new Dark Age. The whimpering oven-bake death of life on our planet from climate change. Endless wars. Then they would be wiped from existence and recreated to suffer anew as the AI began the next version of its scenario. Are we OK with the casual suffering of wholly complete, viable consciousnesses, just so we can have a good answer? Or, as “White Christmas” asks us, toast cooked to our preferences?

Fortunately, these concerns are a long way off, but technology seems to be pointing us in that direction, and we ought to decide what is good and ethical now before these things become a reality. 

The Cookie Console

Black_Mirror_Cookie_12.png

Virtual Greta has a console to perform her slavery duties. Matt explains what this means right after she wakes up by asking her how she likes her toast. She answers, “Slightly underdone.”

He puts slices of bread in a toaster and instructs her, “Think about how you like it, and just press the button.”

She asks, incredulously, “Which one?” and he explains, “It doesn’t matter. You already know you’re making toast. The buttons are symbolic mostly, anyway.”

She cautiously approaches the console and touches a button in the lower left corner. In response, the toaster drops the carriage lever and begins toasting.

Black_Mirror_Cookie_13

“See?” he asks, “This is your job now. You’re in charge of everything here. The temperature. The lighting. The time the alarm clock goes off in the morning. If there’s no food in the refrigerator, you’re in charge of ordering it.”

The starter console

Since we actually do know her starter tasks, I wish the default console had more control types than just the smattering of mostly-square, all-unlabeled buttons. She should have a slider for scalar variables like temperature and lighting. She should have a dial for the alarm clock. She should have a map of real Greta’s house. She should have a calendar for appointments. These would be controls that match the kinds of variables she’s likely to need from the start.
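The principle behind that wish can be sketched as a simple lookup from the kind of variable being managed to an appropriate control type. The mapping and the task list below are my own hypothetical illustrations, not anything shown in the episode:

```python
# Pick the control from the kind of variable, rather than defaulting to a
# smattering of unlabeled buttons. Both dictionaries are invented examples.
CONTROL_FOR_KIND = {
    "scalar":   "slider",    # temperature, lighting levels
    "time":     "dial",      # the alarm clock
    "location": "map",       # rooms in real Greta's house
    "schedule": "calendar",  # appointments
    "boolean":  "button",    # one-shot actions, like starting the toaster
}

STARTER_TASKS = {
    "temperature":  "scalar",
    "lighting":     "scalar",
    "alarm":        "time",
    "appointments": "schedule",
    "toaster":      "boolean",
}

def starter_console(tasks: dict) -> dict:
    """Return a sensible default control type for each starter task."""
    return {task: CONTROL_FOR_KIND[kind] for task, kind in tasks.items()}

for task, control in starter_console(STARTER_TASKS).items():
    print(f"{task}: {control}")
```

The point is only that the defaults should be derivable from the known starter tasks, not that any particular mapping is canonical.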

This console interface seems to be quite similar to the one in Inside Out, which also seems to grow and change over time, and is intended for a virtual sentience to service a real human. It somewhat resembles Zion’s virtual control panel from The Matrix Reloaded. It would be worth a comparison sometime in the future.

inside-out-joy-600x338
Zion.PNG

The customized console

In the third scene, we see her using the console after having had some practice. When it is time to wake real Greta up, she swipes a blank console right. The console animates to life, showing a central workspace labeled AWAKEN. A toolbar of stacked icons sits to the left of the workspace. There are other unlabeled controls outside the workspace at the edge of the console.

Without looking, she selects the house icon from the toolbar, and it moves to the center of the workspace. She spreads her hands to expose a house floorplan. To the right are three vertical black bars labeled SHUTTERS above and MAIN BEDROOM below. She pushes upwards along these bars, and they slowly fill with light. To the right, some text flashes ACTIVATING ALL SHUTTERS. In real Greta’s world, the shutters rise and flood the main bedroom with light.

Black_Mirror_Cookie_20.png

A few more taps give her a volume spinner. She uses a wrist twist to slowly turn up the volume on a recording of the overture of Gioachino Rossini’s The Thieving Magpie. (Which I suspect is a nod to A Clockwork Orange. Kubrick famously used it to underscore the horrible murder of Mrs. Weathers, “the cat lady.”)

Black_Mirror_Cookie_22.png

Subsequently we see her performing other tasks: raising the floor temperature (!), starting the espresso robot, making (yes) slightly underdone toast, and managing the day’s appointments. Each interface is customized to the task.

Interface Analysis?

These interfaces are a challenge to analyze for many reasons.

Ordinarily, we have to evaluate sci-fi interfaces based on broad-based heuristics. (User feedback testing is not possible.) But these interfaces are wholly idiosyncratic to this character. Even if they were complete shite, the fact that they work for her is what is important. This interface will never be seen by anyone else. That we get to see it at all is a narrative conceit.

Idiosyncrasy is not the only challenge. She is also in a very unusual circumstance. Her options are to manage this house or face unending, torturous solitary confinement. (Or get sold as cannon fodder in a war game.) The interactions she has with this console are her source of mental stimulation. That means, rather than make things efficient and easy to do—a respectable goal in most real-world design—when customizing her console interface, she would try to make the interfaces require work that is as extensive and as interesting as possible while still letting her manage the results precisely. We see her here opening the shades with a gesture, but she could, if she wanted, open the shades by mastering a difficult yoga pose.

If this sounds slightly familiar, it could be because you’ve played video games. The designers of these systems are not aiming for efficiency. After all, the interface could just be a big red button labeled “win the game.” But that’s no fun. No flow, in the Csíkszentmihályi sense. Rather these interfaces aim to make working the problem fun, fitting in the space between boredom and panic. Are game interfaces beyond critique? They are not. We just have to rethink our criteria. Ultimate efficiency is not the goal.

cb504697-b1ad-41c5-bcac-b0e3c92f7f55-1892-0000048e7d4deb3a
Still fun.

But we also have to take into account that her fight is against boredom and that she has the power to change these interfaces. The interface designs, then, become part of how she maintains her own interest in the tasks to which she is chained. As part of her own self-care, she would change them frequently. What we see is not to be read as “the right answer” but rather “where this interface happens to be on this day.” So, for instance, there appears to be a lot of “noise” in the interfaces, with unlabeled black squares littered among the actually useful buttons. But that may be the challenge she’s set up for herself today: Can she get the tasks done without looking at the interface, and minimize the number of black squares she accidentally taps?

Lastly, Matt tells her that the interface is symbolic, and part of how she operates it is by thinking. So, for example, when we wonder how she adds a new “music type” icon to the existing array, it could be that she just thinks it into being. Which confounds the usual concern for affordances and constraints.

All of this is to say this is shaky, shaky ground for an exhaustive analysis. I suspect it would be thick with problems that could be excused diegetically, and leave us struggling to find any useful lessons beyond design platitudes. There are three nice elements I will point out, though.

  1. I love the monochrome, high-contrast palette. Yes, you lose some channels (R,G,B) in which to encode meaning, but that also makes it quick to scan and gives it high visibility, so virtual Greta can operate it in her peripheral vision. This allows her to keep her eyes on real Greta, to read her expressions in real-time.
  2. The gestures seem generally well-mapped to the things being controlled: A gesture up raises the blinds (or the light levels, anyway.) Dropping a virtual lever drops the carriage lever. Lifting it pops up the toast. It’s not all perfect. A wrist-twist increases volume, but that’s only ideal when the extents are unknowable by the interface. It should be a smart, informational slider.
  3. There is a lovely gestural command in the appointment interface. Greta is able to stack the day’s events, gather them into a package by bringing her hands together, and then “toss” it towards the display of real Greta to instantiate a brief of the day’s events. It has a nice intuitive mapping to mean “give these to her.”
Cookie_throw_gesture.gif

What’s her dev environment?

Sadly, we never get to see her design environment, how she goes about customizing her interface, or even how she switches from control mode to use mode. This would be juicy and worth looking at, specifically. The dev environment is crucial for understanding what her options are to meet her goals. And specifically, this calls into question how she can hack the system, and how likely she can communicate with real Greta, or find a sympathetic someone on the Internet to communicate with, or plot her escape.

How does feedback work?

Another thing we don’t get to see in this story is how real Greta provides feedback. I suspect that for simple things, like “the toast was a bit overdone this morning” (a correction of preferences) or “I’d like to hear some Stravinsky this morning” (a new request), she can just speak it. Virtual Greta will hear and respond through the house appliances appropriately. But what if real Greta had a question for the Cookie, such as “How much time do I have before I need to leave?” You might think virtual Greta could look something up and communicate the answer. But the daily briefing, after all, is read by some other computer voice. This implies that virtual Greta is prevented from direct communication, raising a troubling question answered in the next post: Does real Greta know?

The Cookie: Matt’s controls

When using the Cookie to train the AI, Matt has a portable translucent touchscreen by which he controls some of virtual Greta’s environment. (Sharp-eyed viewers of the show will note this translucent panel is the same one he uses at home in his revolting virtual wingman hobby, but the interface is completely different.)

Black_Mirror_Cookie_18.png

The left side of the screen shows a hamburger menu, the Set Time control, a head, some gears, a star, and a bulleted list. (They’re unlabeled.) The main part of the screen is a scrolling stack of controls including Simulated Body, Control System, and Time Adjustment. Each has a large icon, a header with “Full screen” to the right, a subheader, and a time indicator. This could be redesigned to be much more compact and context-rich for expert users like Matt. It’s seen for maybe half a second, though, and it’s not the new, interesting thing, so we’ll skip it.

The right side of the screen has a stack of Smartelligence logos which are alternately used for confirmation and to put the interface to sleep.

Mute

When virtual Greta first freaks out about her circumstance and begins to scream in existential terror, Matt reaches to the panel and mutes her. (To put a fine point on it: He’s a charming monster.) In this mode she cannot make a sound, but can hear him just fine. We do not see the interface he uses to enact this. He uses it to assert conversational control over her. Later he reaches out to the same interface to unmute her.

The control he touches is the one on his panel with a head and some gears reversed out of it. The icon doesn’t make sense for that function. The animation showing the unmuting shows it flipping from right to left, so it does provide a bit of feedback for Matt, but it should be a more fitting icon and be labeled.

Cookie_mute
Also it’s teeny tiny, but note that the animation starts before he touches it. Is it anticipatory?

It’s not clear, though, how he knows that she is trying to speak while she is muted. Recall that she (and we) see her mouthing words silently, but from his perspective, she’s just an egg with a blue eye. The system would need some very obvious MUTE status display that increases in intensity when the AI is trying to communicate. Depending on how smart the monitoring feature was, it could even enable some high-intensity alert system for her when she needs to communicate something vital. Cinegenically, this could have been a simple blinking of the blue camera light, though that is currently used to indicate the passage of time during the Time Adjustment (see below).

Simulated Body

Matt can turn on a Simulated Body for her. This allows the AI to perceive herself as if she had her source’s body. In this mode she perceives herself as existing inside a room with large, wall-sized displays and a control console (more on this below), but one that is otherwise a featureless white.

Black_Mirror_Cookie_White_Room.png

I presume the Simulated Body is a transitional model—part of a literal desktop metaphor—meant to make it easy for the AI (and the audience) to understand things. But it would introduce a slight lag as the AI imagines reaching and manipulating the console. Presuming she can build competence in directly controlling the technologies in the house, the interface should “scaffold” away and help her gain the more efficient skills of direct control, letting go of the outmoded notion of having a body. (This, it should be noted, would not be as cinegenic since the story would just feature the egg rather than the actor’s expressive face.)

Neuropsychology nerds may be interested to know that the mind’s camera does, in fact, have spatial lags. Several experiments have been run in which subjects were asked to imagine animals as seen from the side and then were timed on how long it took them to imagine zooming into the eye. It usually takes longer for us to imagine the zoom to an elephant’s eye than to a mouse’s, because the “distance” is farther. Even though there’s no physicality to the mind’s camera to impose this limit, our brain is tied to its experience in the real world.

Black_Mirror_Cookie_Simulated_Body.png

The interface Matt has to turn on her simulated body is confusing. We hear seven beeps while the camera is on his face. He sees a 3D rendering of a woman’s body in profile and silhouette. He taps the front view and it fills with red. Then he taps the side view and it fills with red. Then he taps some Smartelligence logos on the side with a thumb and then *poof* she’s got a body. While I suspect this is a post-actor interface (i.e., Jon Hamm just tapped some things on an empty screen while on camera, and the designers later had to retrofit an interface to fit his gestures), this multi-button setup and three-tap initialization just makes no sense. It should be a simple toggle, with access to optional controls like scaffolding settings (discussed above).

Time “Adjustment”

The main tool Matt has to force compliance is a time control. When Greta initially says she won’t comply (specifically and delightfully, she asserts, “I’m not some sort of push-button toaster monkey!”), he uses his interface to make it seem like three weeks pass for her inside her featureless white room. Then again for six months. The solitary confinement drives her mad and eventually forces compliance.

Cookie_settime.gif

The interface to set the time is a two-layer virtual dial: two chapter rings with wide blue arcs for touch targets. The first time we see him use it, he spins the outer one about 360° (before the camera cuts away) to set the time for three weeks. While he does it, the inner ring spins around the same center but at a slower rate. I presume it shows months, though the spatial relationship doesn’t make sense. Then he presses the button in the center of the control. He sees an animation of a sun and moon arcing over an illustrated house to indicate her passage of time, and then the display returns to normal. Aside: Hamm plays this beat marvelously by callously chomping on the toast she has just helped make.

Toast.gif

Improvements?

Ordinarily I wouldn’t speak to improvements on an interface that is used for torture, but as this one could only affect a general AI that is as yet speculative, and it couldn’t be co-opted to torture real people since time travel doesn’t exist, I think this time it’s OK. Discussing it as a general time-setting control, I can see three immediate improvements.

1. Use fast forward models

It makes the most sense for her time sentence to end and return to real-world speed automatically. But each time we see the time controls used, the following interaction happens near the end of the time sentence:

  • Matt reaches up to the console
  • He taps the center button of the time dial
  • He taps the stylized house illustration. In response it gets a dark overlay with a circle inside of it reading “SET TIME.” This is the same icon seen second from the top in the left panel.
  • He taps the center button of the time dial again. The dark overlay reads “Reset” with a new icon.
  • He taps the overlay.

Please tell me this is more post-actor interface design. Because that interaction is bonkers.

Cookie_stop.gif

If the stop function really needs a manual control, well, we have models for that that are very readily understandable by users and audiences. Have the whole thing work and look like a fast forward control rather than this confusing mess. If he does need to end it early, as he does in the 6 months sentence, let him just press a control labeled PLAY or REALTIME.

2. Add calendar controls

A dial makes sense when a user is setting minutes or hours, but a calendar-like display should be used for weeks or months. It would be immediately recognizable and usable by the user, and understandable to the audience. If Hamm had tapped the interface twice, I would design the first tap to set the start date and the second tap to set the end date, with a third tap to commit.

3. Add microinteraction feedback

Also note that as he spins the dials, he sees no feedback showing the current time setting. At 370° is it 21 or 28 days? The interface doesn’t tell him. If he’s really having to push the AI to its limits, the precision will be important. Better would be to show the time value he’s set so he could tweak it as needed, and then let that count down as time remaining while the animation progresses.

Cookie_settime.gif
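The missing microinteraction can be sketched in a few lines: map accumulated dial rotation to a duration and echo it back as the user spins. The scale here (one full turn equals one week) is my own assumption, since the episode never establishes one; the point is only that 370° should read out as an unambiguous number of days.

```python
# A sketch of live dial feedback. The days-per-turn scale is an invented
# assumption; any scale works so long as the readout is displayed.
def dial_to_days(rotation_degrees: float, days_per_turn: float = 7.0) -> float:
    return rotation_degrees / 360.0 * days_per_turn

def readout(rotation_degrees: float) -> str:
    """What the console should show while the user spins the dial."""
    return f"{dial_to_days(rotation_degrees):.1f} days"

print(readout(370))   # just past one full turn: "7.2 days"
print(readout(1080))  # three full turns: "21.0 days"
```

With this readout on screen, the 21-versus-28-days ambiguity disappears, and the same number can count down as time remaining once the fast-forward starts.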

Effectiveness subtlety: Why not just make the solitary confinement pass instantly for Matt? Well, recall he is trying to ride a line of torture without having the AI wig out, so he should have some feedback as to the duration of what he’s putting her through. If it were always instant, he couldn’t tell the difference between three weeks and three millennia if he had accidentally entered the wrong value. But if real-world time is passing, and it’s taking longer than he thinks it should, he can intervene and stop the fast-forwarding.

That, or of course, show feedback while he’s dialing.

Near the end of the episode we learn that a police officer is whimsically torturing another Cookie: he sets the time ratio to “1000 years per minute” and then just lets it run while he leaves for Christmas break. The current time ratio should also be displayed, with a control to change it; both are absent from the screen.

Black_Mirror_Cookie_31.png

Add psychological state feedback

There is one “improvement” that does not pertain to real world time controls, and that’s the invisible effect of what’s happening to the AI during the fast forward. In the episode Matt explains that, like any good torturer, “The trick of it is to break them without letting them snap completely,” but while time is passing he has no indicators as to the mental state of the sentience within. Has she gone mad? (Or “wigged out” as he says.) Does he need to ease off? Give her a break?

I would add trendline indicators or sparklines showing things like:

  • Stress
  • Agitation
  • Valence of speech

I would have these trendlines highlight when any of the variables are getting close to known psychological limits. Then as time passes, he can watch the trends to know if he’s pushing things too far and ease off.

The Cookie

In one of the story threads, Matt uses an interface as part of his day job at Smartelligence to wrangle an AI that is the cloned mind of a client named Greta. Matt has three tasks in this role.

  1. He has to explain to her that she is an artificial intelligence clone of a real world person’s mind. This is psychologically traumatic, as she has decades of memories as if she were a real person with a real body and full autonomy in the world.
  2. He has to explain how she will do her job: Her responsibilities and tools.
  3. He has to “break” her will and coerce her to faithfully serve her master—who is the real-world Greta. (The idea is that since virtual Greta is an exact copy, she understands real Greta’s preferences and can perform personal assistant duties flawlessly.)

The AI is housed in a small egg-shaped device with a single blue light camera lens. The combination of the AI and the egg-shaped device is called “The Cookie.” Why it is not called The Egg is a mystery left for the reader, though I hope it is not just for the “Cookie Monster” joke dropped late in the episode.

Communication in & out

The blue light illuminates when the AI’s attention is on a person in the environment. She can hear through a microphone embedded in the device. She can speak only with someone who is wearing a paired headset. Matt wears one during training. Without a paired headset, the AI cannot directly communicate with the outside world, only control other technologies in the house.

Black_Mirror_Cookie_headset.png

 

There is a fully immersive way for Matt to participate in the virtual world that will be discussed in the Mind Crimes post.

To keep any chat threads focused, subsequent posts will discuss these topics separately.

It’s going to be a dark few posts. Sorry about that. This is Black Mirror, after all. On the upside, Jon Hamm gave us two delightful reaction gifs across these scenes. I shall share them anon.

Black_Mirror_Cookie_33.png

Zed-Eyes: Block

A function closely related to the plot of the episode is the ability to block someone. To do this, the user looks at them, sees a face-detection square appear (confirming the person to be blocked), selects BLOCK from the Zed-Eyes menu, and clicks.

In one scene Matt and his wife Claire get into a spat. When Claire has had enough, she decides to block Matt. Now Matt is blurred and muted for Claire, and it works the other way around, too: Claire is blurred and muted for Matt.

WhiteChristmas.gif

The blur is of the live image of the person within their own silhouette. (The silhouettes sometimes display a lovely warm-to-the-left, cool-to-the-right fringe effect, like subpixel antialiasing or chromatic aberration from optical lenses, I note, but it appears inconsistently.) The colors in the blur are completely desaturated to tones of gray. The human behind it is barely recognizable. His or her voice is also muffled, so only the vaguest sense of the volume and emotional tone of what they are saying is audible. Joe explains in the episode that once blocked, the blocked person can’t message or call the blocker, but the blocker can message the blocked person and undo the block.
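The asymmetry Joe describes can be sketched as a simple permission rule. This is purely illustrative of the episode’s logic; the `Block` class and its method names are invented for this sketch.

```python
# Illustrative sketch of the asymmetric block rules Joe describes.
# The Block record and can_message() are invented names, not canon.

class Block:
    def __init__(self, blocker, blocked):
        self.blocker = blocker
        self.blocked = blocked

    def can_message(self, sender, recipient):
        # The blocked person cannot reach the blocker...
        if sender == self.blocked and recipient == self.blocker:
            return False
        # ...but the blocker (and everyone else) can still message freely,
        # and the blocker alone could undo the block.
        return True


block = Block(blocker="Beth", blocked="Joe")
print(block.can_message("Joe", "Beth"))   # False: Joe cannot reach Beth
print(block.can_message("Beth", "Joe"))   # True: Beth could still reach Joe
```

Note how one-sided the rule is: all the control sits with the blocker, which is exactly what the analysis below takes issue with.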

Black_Mirror_Eye_HUD_Blocking_04

Late in the episode, we see that people can be excommunicated from society for crimes. When this happens, everyone in the criminal’s sight is blocked.

Black_Mirror_Eye_HUD_Blocking_13
But where is the fringe tint, Painting Practice?

In turn, the criminal is not only blocked for other members of society, but also tinted red, like a scarlet letter silhouette.

Black_Mirror_Eye_HUD_red.png

The block affects more than just the direct observation of the person. When Beth blocks Joe we see that the blocking includes reflections in mirrors and even, retroactively, photos from the past.

Joe subsequently stalks Beth at her dad’s home just before Christmas day for several years, where he learns that the block extends to offspring as well: he cannot see the child. (This has fundamental plot implications, btw.)

Later when Joe is watching the news he learns that Beth has died in a rail crash, and the legal block is instantly lifted for both her and the child.

Black_Mirror_Eye_HUD_Blocking_17

Analysis

There’s not much to say about the interface. It’s pretty clean, with clear affordances and feedback. Most of the critique belongs to the platform itself. So instead, let’s talk about the interaction.

On the surface, the ability to block seems to give the user positive control of their life. Block out a toxic person who is a negative influence on your life, and have more happiness. After all, similar features are available on most social media today, cf. Facebook and Twitter. (Full disclosure: I’ve used them more than once.) But social media are virtual spaces. The White Christmas block primarily plays out in meat space. This has some harsh consequences.

Black_Mirror_Eye_HUD_Blocking_beg.png

Beth blocks Joe partly out of her guilt for cheating on him (it’s complicated: she also no longer loves him, and he’s ham-handed in his interactions at times and arguably abusive). But when he tries to earnestly apologize and make up with her after their fight, she simply can’t hear it. She’s blocked him.

He thinks to talk to some of her coworkers to pass a message to her, but she has left her job and no one knows where she is. He sees her one day and can tell by her silhouette that she’s pregnant. He believes the child is his. It’s not, but because he cannot contact her to learn any differently (and she doesn’t bother to tell him)—and the same block prevents him from observing the child—he spends literally years pining for the child as if she were his own.

Black_Mirror_Eye_HUD_Blocking_preggers.png

So to block someone online means they might just disappear from your consciousness. But to block someone in meat space means that they’re still there, and you’re still aware of each other. It’s a constant reminder of the broken relationship, and it only stops immediate layers of communication. It does not stop indirect communications, like writing, or speaking through friends, or even sign language. And as we see in the episode (and the screen cap above), since it’s so different from anything else in the visual field, it instantly draws attention to the blocked person. So it’s ultimately ineffective for the blocker’s intent (the person can still communicate with them, and attention is drawn to them) and adds this weird layer of technological talk-to-the-hand dismissal. It’s a childish way to address disagreement.

Also, is there no override request, in case, you know, a blocked person needs to convey life-or-death information?

And then there’s Matt’s case.

After Matt gets excommunicated, he becomes nothing but a red object in people’s sight. No way for him to reassure people around him, to put them at ease. He is just a red shape subject to people’s worst prejudices about red shape people, and he has no way to practice reintegration into society, no easy rehabilitation. He just has to walk around in the world aware of people, but not able to participate, and subject to their worst fears about him. It’s pure punishment. It’s cruel and unusual.

And lastly, the rush of emotions that Joe feels when Beth and his daughter are suddenly unblocked upon her death works for the story, but is also just cruel for the blocked. They have to deal with both the flood of emotions from seeing the blocker and their death simultaneously. Better would be to separate out those issues: share a somber message that a blocker has passed, and give the blocked the option to release the block. The blocked can enact the lift immediately or sit on the message until their grief permits.

***

Black Mirror is an investigation and critique of the impact of technology on our lives. Let’s remember that. A tech that was a net positive might not even make it to this series. Still, the design for the block doesn’t really achieve what might seem to be a presumed set of goals for the blocker. This draws critical attention back to the core idea in the first place: Would meatspace blocking be a positive?

I think the answer is clearly no. Better would be for Zed-Eyes to summon a private assistant to help you de-escalate and deal with a conflict in healthy ways, or maybe invoke a shared AI mediator, like a just-in-time therapist. If the assistant or mediator fails, then blocking might become available, but with a shared understanding and agreement of why, and what, if anything, could be done to earn back trust.

Black_Mirror_Eye_HUD_Blocking_comp
Lovely “mediation” icon by Luis Prado, from The Noun Project.

And then, if a block is actually needed, then the two should have overlays that change their appearance to look like other people, not draw attention through the gray blur. This, it should be noted, would not be cinegenic. It would not work to tell this excellent story.

And if it needs to be said: any criminal sentence that merely punishes and does not foster rehabilitation is counterproductive and inhumane.