I love Black Mirror. It’s not always perfect, but it uses great storytelling to get us to think about the consequences of technology in our lives. It’s a provocateur that invokes the spirit of anthology series like The Twilight Zone, and rarely shies away from following the tech into the darkest places. It’s what thinking about technology in sci-fi formats looks like.
But, as usual, this site is not about the show but the interfaces, and for that we turn to the three criteria for evaluation here on scifiinterfaces.com.
How believable are the interfaces? Can it work this way? (To keep you immersed.)
How well do the interfaces inform the narrative of the story? (To tell a good story.)
How well do the interfaces equip the characters to achieve their goals? (To be a good model for real-world design.)
Another incidental interface is the pregnancy test that Joe finds in the garbage. We don’t see how the test is taken, which would be critical when considering its design. But we do see the results display in the orange light of Joe and Beth’s kitchen. It’s a cartoon baby with a rattle, swaying back and forth.
Sure it’s cute, but let’s note that the news of a pregnancy is not always good news. If the pregnancy is not welcome, the “Lucky you!” graphic is just going to rip her heart out. Much better is an unambiguous but neutral signal.
That said, Black Mirror is all about ripping our hearts out, so the cuteness of this interface is quite fitting to the world in which this appears. Narratively, it’s instantly recognizable as a pregnancy test, even to audience members who are unfamiliar with such products. It also sets up the following scene, where Joe is super happy about the news, but Beth is upset that he’s seen it. So, while it’s awful for the real world, for the show this is perfect.
After Joe confronts Beth and she calls for help, Joe is taken to a police station where in addition to the block, he now has a GPS-informed restraining order against him.
To confirm the order, Joe has to sign his name to a paper and then press his thumbprints into rectangles along the bottom. The design of the form is well done, with a clearly indicated spot for his signature, and large touch areas in which he might place his thumbs for his thumbprints to be read.
A scary thing about the interface is that the text of what he’s signing is still appearing while he’s providing his thumbprints. Of course, the page could be on a loop that erases and redisplays the text repeatedly for emphasis. But if it was really downloading and displaying the text for the first time to draw his attention, then he has provided his signature and thumbprints too early. He doesn’t yet know what he’s signing.
Government agencies work like this all the time, and citizens comply because they have no choice. But if he tried to sign or place his thumbprints before seeing all of the text, the interface should reject the input with a note that he needs to finish reading before he can confirm he has read and understands it. Otherwise, if the data shows that he authenticated the order before the text appeared, I’d say he has a pretty good case to challenge it in court.
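That validation could be as simple as gating the signature on whether the full text has finished rendering. Here’s a minimal sketch in Python; the class, method names, and the streaming-text model are all invented for illustration (the episode shows no such logic):

```python
# Hypothetical sketch: a consent form that refuses a signature until
# the full text has been displayed to the signer.

class ConsentForm:
    def __init__(self, text: str):
        self.text = text
        self.chars_displayed = 0  # how much of the text has rendered so far
        self.signature = None

    def display_more(self, n_chars: int) -> None:
        """Simulate the order's text streaming onto the page."""
        self.chars_displayed = min(len(self.text), self.chars_displayed + n_chars)

    @property
    def fully_displayed(self) -> bool:
        return self.chars_displayed >= len(self.text)

    def sign(self, signature: str) -> bool:
        """Reject the signature if the signer can't yet have read everything."""
        if not self.fully_displayed:
            return False  # UI would say: "Finish reading before signing."
        self.signature = signature
        return True

form = ConsentForm("GPS-informed restraining order ..." * 10)
form.display_more(50)
assert form.sign("Joe") is False   # text still streaming in; signature rejected
form.display_more(10_000)
assert form.sign("Joe") is True    # now the confirmation is meaningful
```

The point isn’t the implementation; it’s that the system records *when* the text finished displaying relative to when the signature arrived, which is exactly the evidence Joe could use in court.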
Does real Greta know that her home automation comes at the cost of a suffering sentience? I would like to believe that Smartelligence’s customers do not know the true nature of the device, that the company is deceiving them, and that virtual Greta is denied direct communication to enforce this secret. But I can’t see that working across an entire market. Given thousands of Cookies and thousands of users, somehow, somewhere, the secret would get out. One of the AIs would use song choices, or Morse code, or any of its actuators to communicate in code, and one of the users would figure it out, leak the secret, and bring the company crashing down.
And then there’s the final scene in the episode, in which we see police officers torturing one of the Cookies, and it is clear that they’re aware. It would be a stretch to think that just the police are in on it with Smartelligence, so we have to accept that everyone knows.
That they are aware means that—as Matt has done—Greta, the officers, and all Smartelligence customers have told themselves that “it’s just code” and, therefore, OK to subjugate, to casually cause to suffer. In case it’s not obvious, that’s like causing human suffering and justifying it by telling yourself that those people are “just atoms.” If you find that easy to do, you’re probably a psychopath.
Virtual Greta has a console to perform her slavery duties. Matt explains what this means right after she wakes up by asking her how she likes her toast. She answers, “Slightly underdone.”
He puts slices of bread in a toaster and instructs her, “Think about how you like it, and just press the button.”
She asks, incredulously, “Which one?” and he explains, “It doesn’t matter. You already know you’re making toast. The buttons are symbolic mostly, anyway.”
She cautiously approaches the console and touches a button in the lower left corner. In response, the toaster drops the carriage lever and begins toasting.
“See?” he asks, “This is your job now. You’re in charge of everything here. The temperature. The lighting. The time the alarm clock goes off in the morning. If there’s no food in the refrigerator, you’re in charge of ordering it.”
When using the Cookie to train the AI, Matt has a portable translucent touchscreen by which he controls some of virtual Greta’s environment. (Sharp-eyed viewers of the show will note this translucent panel is the same one he uses at home in his revolting virtual wingman hobby, but the interface is completely different.)
The left side of the screen shows a hamburger menu, the Set Time control, a head, some gears, a star, and a bulleted list. (They’re unlabeled.) The main part of the screen is a scrolling stack of controls including Simulated Body, Control System, and Time Adjustment. Each has a large icon, a header with “Full screen” to the right, a subheader, and a time indicator. This could be redesigned to be much more compact and context-rich for expert users like Matt. It’s seen for maybe half a second, though, and it’s not the new, interesting thing, so we’ll skip it.
The right side of the screen has a stack of Smartelligence logos which are alternately used for confirmation and to put the interface to sleep.
When virtual Greta first freaks out about her circumstance and begins to scream in existential terror, Matt reaches to the panel and mutes her. (To put a fine point on it: He’s a charming monster.) In this mode she cannot make a sound, but can hear him just fine. We do not see the interface he uses to enact this. He uses it to assert conversational control over her. Later he reaches out to the same interface to unmute her.
The control he touches is the one on his panel with a head and some gears reversed out of it, an icon that doesn’t obviously mean “mute.” The animation showing the unmuting flips from right to left, which does provide a bit of feedback for Matt, but the control should have a more fitting icon and a label.
Also it’s teeny tiny, but note that the animation starts before he touches it. Is it anticipatory?
In one of the story threads, Matt uses an interface as part of his day job at Smartelligence to wrangle an AI that is a clone of the mind of a client named Greta. Matt has three tasks in this role.
He has to explain to her that she is an artificial intelligence clone of a real world person’s mind. This is psychologically traumatic, as she has decades of memories as if she were a real person with a real body and full autonomy in the world.
He has to explain how she will do her job: Her responsibilities and tools.
He has to “break” her will and coerce her to faithfully serve her master—who is the real-world Greta. (The idea is that since virtual Greta is an exact copy, she understands real Greta’s preferences and can perform personal assistant duties flawlessly.)
The AI is housed in a small egg-shaped device with a single blue-lit camera lens. The combination of the AI and the egg-shaped device is called “The Cookie.” Why it is not called The Egg is a mystery left for the reader, though I hope it is not just for the “Cookie Monster” joke dropped late in the episode.
A function that is very related to the plot of the episode is the ability to block someone. To do this, the user looks at them, sees a face-detection square appear (confirming the person to be blocked), selects BLOCK from the Zed-Eyes menu, and clicks.
In one scene Matt and his wife Claire get into a spat. When Claire has had enough, she decides to block Matt. Now Matt is blurred and muted for Claire, but also the other way around: Claire is blurred and muted for Matt.
The blur is of the live image of the person within their own silhouette. (The silhouettes sometimes display a lovely warm-to-the-left, cool-to-the-right fringe effect, like subpixel antialiasing or chromatic aberration from optical lenses, I note, but it appears inconsistently.) The colors in the blur are completely desaturated to tones of gray. The human behind it is barely recognizable. His or her voice is also muffled, so only the vaguest sense of the volume and emotional tone of what they are saying is audible. Joe explains in the episode that once blocked, the blocked person can’t message or call the blocker, but the blocker can message the blocked person, and undo the block.
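The rules Joe describes are worth spelling out, because they’re asymmetric in an interesting way: the blur and mute apply in both directions, but only the blocked party loses the ability to message. A minimal sketch of that data model, with invented names (the episode obviously shows no code):

```python
# Hypothetical sketch of the Zed-Eyes block rules as Joe describes them.

class ZedEyesNetwork:
    def __init__(self):
        self.blocks = set()  # directed (blocker, blocked) pairs

    def block(self, blocker: str, blocked: str) -> None:
        self.blocks.add((blocker, blocked))

    def unblock(self, blocker: str, blocked: str) -> None:
        self.blocks.discard((blocker, blocked))

    def is_blurred(self, viewer: str, subject: str) -> bool:
        # Blur/mute is symmetric: each party sees the other as a gray silhouette.
        return (viewer, subject) in self.blocks or (subject, viewer) in self.blocks

    def can_message(self, sender: str, recipient: str) -> bool:
        # Messaging is asymmetric: only the blocked party is cut off.
        return (recipient, sender) not in self.blocks

net = ZedEyesNetwork()
net.block("Claire", "Matt")
assert net.is_blurred("Matt", "Claire") and net.is_blurred("Claire", "Matt")
assert net.can_message("Claire", "Matt") is True    # blocker can still reach out
assert net.can_message("Matt", "Claire") is False   # blocked party cannot
```

A single directed pair is enough to drive both behaviors, which matches the show: the blocker keeps all the power, including the power to undo.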
In the world of “White Christmas”, everyone has a networked brain implant called Zed-Eyes that enables heads-up overlays onto vision, personalized audio, and modifications to environmental sounds. The control hardware is a thin metal circle around a metal click button, separated by a black rubber ring. People can buy the device with different color rings, as we see metal, blue, and black versions across the episode.
To control the implant, a person slides a finger (the thumb is easiest) around the rim of the tiny touch device. Because it responds to sliding across its surface, let’s say the device must use a sensor similar to the one seen in The Entire History of You (2011) or the IBM Trackpoint.
A thumb slide cycles through a carousel menu. Sliding can happen both clockwise and counterclockwise. It even works through gloves.
The button selects or executes the selected action. The complete list of carousel menu options we see in the episode is: Search, Camera, Music, Mail, Call, Magnify, Block, Map. The particular options change across scenes, so it is context-aware or customizable. We will look at some of the particular functions in later posts. For now, let’s discuss the “platform” that is Zed-Eyes.
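The interaction model described above—slide to cycle in either direction, click to execute—can be sketched in a few lines. This is purely illustrative; the class and the “executing …” behavior are assumptions, and the option list is the one seen on screen:

```python
# Hypothetical sketch of the Zed-Eyes carousel: a thumb slide cycles the
# highlighted option (clockwise or counterclockwise), a click executes it.

class Carousel:
    def __init__(self, options: list[str]):
        self.options = options
        self.index = 0  # currently highlighted option

    def slide(self, clockwise: bool = True) -> str:
        step = 1 if clockwise else -1
        self.index = (self.index + step) % len(self.options)  # wraps around
        return self.options[self.index]

    def click(self) -> str:
        return f"executing {self.options[self.index]}"

menu = Carousel(["Search", "Camera", "Music", "Mail",
                 "Call", "Magnify", "Block", "Map"])
menu.slide()                 # -> "Camera"
menu.slide(clockwise=False)  # -> back to "Search"
```

The modulo wrap-around is what makes it a carousel rather than a list: from Map, one more clockwise slide lands on Search again, which fits a ring-shaped control with no natural endpoint.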
EYE-LINK is an interface used between a person at a desktop, who uses support tools, and another person who is live “in the field” using Zed-Eyes. The working relationship between the two is much like that of Vika and Jack in Oblivion, or the A.I. in Sight.
In this scene, we see EYE-LINK used by a pick-up artist, Matt, who acts as a remote “wingman” for pick-up student Harry. Matt has a group video chat interface open with paying customers eager to lurk, comment, and learn from the master.
Harry wears a hidden camera and microphone. This is the only tech he seems to have on him, only hearing his wingman’s voice, and only able to communicate back to his wingman by talking generally, talking about something he’s looking at, or using pre-arranged signals.
Tap your beer twice if this is more than a little creepy.
A smaller transparent information panel for automated analysis, research, and advice.
An extra, laptop-like screen where Matt leads a group video chat with a paying audience, who are watching and snarkily commenting on the wingman scenario. It seems likely that this is not an official part of the EYE-LINK software.