I love Black Mirror. It’s not always perfect, but it uses great storytelling to get us to think about the consequences of technology in our lives. It’s a provocateur that invokes the spirit of anthology series like The Twilight Zone, and it rarely shies away from following the tech into the darkest places. It’s what thinking about technology in sci-fi formats looks like.
But, as usual, this site is not about the show but the interfaces, and for that we turn to the three criteria for evaluation here on scifiinterfaces.com.
How believable are the interfaces? Can it work this way? (To keep you immersed.)
How well do the interfaces inform the narrative of the story? (To tell a good story.)
How well do the interfaces equip the characters to achieve their goals? (To be a good model for real-world design.)
Sci: C (2 of 4) How believable are the interfaces?
There are some problems. Yes, there is the transparent-screen trope, but I regularly give that a cinegenics pass. And for reasons explained in the post I’ll give everything in Virtual Greta’s virtual reality a pass.
But on top of that there are missing navigation elements, missing UI elements, and extraneous UI elements in Matt’s interfaces. And ultimately, I think the whole cloned-you home automation is unworkable. These are key to the episode, so it scores pretty low.
Fi: A (4 of 4) How well do the interfaces inform the narrative of the story?
From the Restraining Order that doesn’t tell you what it’s saying until after you’ve signed it, to the creepy home-hacked wingman interfaces, to the Smartelligence slavery and torture obfuscation, the interfaces help paint the picture of a world full of people and institutions that are psychopathically cruel to each other for pathetic, inhumane reasons. It takes a while to see it, but the only character who can be said to be straight-up good in this episode is the not-Joe’s kid.
Interfaces: A (4 of 4) How well do the interfaces equip the characters to achieve their goals?
Matt wants to secretly help Harry S be more confident and, yeah, “score.” Beth and Claire want to socially block their partners in the real world. Matt needs easy tools to torture virtual Greta into submission. Greta needs to control the house. Joe wants to snoop on what he believes to be his daughter. Matt wants to extract a confession. All the interfaces are driven by clear character, social, and institutional goals. They are largely goal-focused, even if those goals are shitty.
For reasons discussed in the Sci section of this review (above), there are problems with the details of the interfaces, but if you were a designer working with no ethical base in a society of psychopaths, yes, these would be pretty good models to build from.
Final Grade B (10 of 12), Must-see.
Special thanks again to Ianus Keller and his students at TU Delft who began the analysis of this episode and collected many of the screenshots.
I also want to help them give a shout-out to IDE alumnus Frans van Eedena, whose coffee machine wound up being one of the appliances controlled by virtual Greta. Nice work, IDE!
Another incidental interface is the pregnancy test that Joe finds in the garbage. We don’t see how the test is taken, which would be critical when considering its design. But we do see the results display in the orange light of Joe and Beth’s kitchen. It’s a cartoon baby with a rattle, swaying back and forth.
Sure it’s cute, but let’s note that the news of a pregnancy is not always good news. If the pregnancy is not welcome, the “Lucky you!” graphic is just going to rip her heart out. Much better is an unambiguous but neutral signal.
That said, Black Mirror is all about ripping our hearts out, so the cuteness of this interface is quite fitting to the world in which it appears. Narratively, it’s instantly recognizable as a pregnancy test, even to audience members who are unfamiliar with such products. It also sets up the following scene, where Joe is super happy about the news but Beth is upset that he’s seen it. So, while it’s awful for the real world, for the show this is perfect.
After Joe confronts Beth and she calls for help, Joe is taken to a police station where in addition to the block, he now has a GPS-informed restraining order against him.
To confirm the order, Joe has to sign his name to a paper and then press his thumbprints into rectangles along the bottom. The design of the form is well done, with a clearly indicated spot for his signature and large touch areas in which he might place his thumbs for his thumbprints to be read.
A scary thing in the interface is that the text of what he’s signing is still appearing while he’s providing his thumbprints. Of course the page could be on a loop that erases and redisplays the text repeatedly for emphasis. But, if it was really downloading and displaying it for the first time to draw his attention, then he has provided his signature and thumbprints too early. He doesn’t yet know what he’s signing.
Government agencies work like this all the time and citizens comply because they have no choice. But ideally, if he tried to sign or place his thumbprints before seeing all the text of what he’s signing, it would be better for the interface to reject his signature with a note that he needs to finish reading the text before he can confirm he has read and understands it. Otherwise, if the data shows that he authenticated it before the text appeared, I’d say he had a pretty good case to challenge the order in court.
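If a designer wanted to enforce that ordering, the check is trivial. A minimal Python sketch, assuming a hypothetical form object (none of these names appear on screen), that refuses authentication until the full text has been displayed:

```python
class RestrainingOrderForm:
    """Hypothetical sketch: a signing flow that rejects a signature
    until the full text of the order has been displayed."""

    def __init__(self):
        self.text_fully_displayed = False
        self.signature = None

    def display_complete(self):
        # Called once the final line of the order has rendered.
        self.text_fully_displayed = True

    def sign(self, name):
        if not self.text_fully_displayed:
            # The signer cannot yet have read what they are confirming.
            return "REJECTED: finish reading the order before signing"
        self.signature = name
        return "ACCEPTED"
```

With this in place, the data could never show that Joe authenticated the order before its text appeared, closing the legal loophole noted above.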
Does real Greta know that her home automation comes at the cost of a suffering sentience? I would like to believe that Smartelligence’s customers do not know the true nature of the device, that the company is deceiving them, and that virtual Greta is denied direct communication to enforce this secret. But I can’t see that working across an entire market. Given thousands of Cookies and thousands of users, somehow, somewhere, the secret would get out. One of the AIs would use song choices, or Morse code, or any of its actuators to communicate in code, and one of the users would figure it out, leak the secret, and bring the company crashing down.
And then there’s the final scene in the episode, in which we see police officers torturing one of the Cookies, and it is clear that they’re aware. It would be a stretch to think that just the police are in on it with Smartelligence, so we have to accept that everyone knows.
That they are aware means that—as Matt has done—Greta, the officers, and all Smartelligence customers have told themselves that “it’s just code” and, therefore, OK to subjugate, to casually cause to suffer. In case it’s not obvious, that’s like causing human suffering and justifying it by telling yourself that those people are “just atoms.” If you find that easy to do, you’re probably a psychopath.
Virtual Greta has a console to perform her slavery duties. Matt explains what this means right after she wakes up by asking her how she likes her toast. She answers, “Slightly underdone.”
He puts slices of bread in a toaster and instructs her, “Think about how you like it, and just press the button.”
She asks, incredulously, “Which one?” and he explains, “It doesn’t matter. You already know you’re making toast. The buttons are symbolic mostly, anyway.”
She cautiously approaches the console and touches a button in the lower left corner. In response, the toaster drops the carriage lever and begins toasting.
“See?” he asks, “This is your job now. You’re in charge of everything here. The temperature. The lighting. The time the alarm clock goes off in the morning. If there’s no food in the refrigerator, you’re in charge of ordering it.”
When using the Cookie to train the AI, Matt has a portable translucent touchscreen by which he controls some of virtual Greta’s environment. (Sharp-eyed viewers of the show will note this translucent panel is the same one he uses at home in his revolting virtual wingman hobby, but the interface is completely different.)
The left side of the screen shows a hamburger menu, the Set Time control, a head, some gears, a star, and a bulleted list. (They’re unlabeled.) The main part of the screen is a scrolling stack of controls including Simulated Body, Control System, and Time Adjustment. Each has a large icon, a header with “Full screen” to the right, a subheader, and a time indicator. This could be redesigned to be much more compact and context-rich for expert users like Matt. It’s seen for maybe half a second, though, and it’s not the new, interesting thing, so we’ll skip it.
The right side of the screen has a stack of Smartelligence logos which are alternately used for confirmation and to put the interface to sleep.
When virtual Greta first freaks out about her circumstance and begins to scream in existential terror, Matt reaches to the panel and mutes her. (To put a fine point on it: He’s a charming monster.) In this mode she cannot make a sound, but can hear him just fine. We do not see the interface he uses to enact this. He uses it to assert conversational control over her. Later he reaches out to the same interface to unmute her.
The control he touches is the one on his panel with a head and some gears reversed out of it. The icon doesn’t make sense for that function. The animation of the unmuting shows it flipping from right to left, so it does provide a bit of feedback for Matt, but it should be a more fitting icon, and it should be labeled.
It’s not clear, though, how he knows that she is trying to speak while she is muted. Recall that she (and we) see her mouthing words silently, but from his perspective, she’s just an egg with a blue eye. The system would need some very obvious MUTE status display that increases in intensity when the AI is trying to communicate. Depending on how smart the monitoring feature was, it could even enable a high-intensity alert for her when she needs to communicate something vital. Cinegenically, this could have been a simple blinking of the blue camera light, though that is currently used to indicate the passage of time during the Time Adjustment (see below).
Matt can turn on a Simulated Body for her. This allows the AI to perceive herself as if she had her source’s body. In this mode she perceives herself as existing inside a room with large, wall-sized displays and a control console (more on this below), but is otherwise a featureless white.
I presume the Simulated Body is a transitional model—part of a literal desktop metaphor—meant to make it easy for the AI (and the audience) to understand things. But it would introduce a slight lag as the AI imagines reaching and manipulating the console. Presuming she can build competence in directly controlling the technologies in the house, the interface should “scaffold” away and help her gain the more efficient skills of direct control, letting go of the outmoded notion of having a body. (This, it should be noted, would not be as cinegenic since the story would just feature the egg rather than the actor’s expressive face.)
Neuropsychology nerds may be interested to know that the mind’s camera does, in fact, have spatial lags. Several experiments have been run in which subjects were asked to imagine animals as seen from the side, and researchers timed how long it took them to imagine zooming into the eye. It usually takes longer for us to imagine the zoom to an elephant’s eye than to a mouse’s because the “distance” is farther. Even though there’s no physicality to the mind’s camera to impose this limit, our brain is tied to its experience in the real world.
The interface Matt has to turn on her virtual reality is confusing. We hear 7 beeps while the camera is on his face. He sees a 3D rendering of a woman’s body in profile and silhouette. He taps the front view and it fills with red. Then he taps the side view and it fills with red. Then he taps some Smartelligence logos on the side with a thumb and then *poof* she’s got a body. While I suspect this is a post-actor interface (i.e., Jon Hamm just tapped some things on an empty screen while on camera, and the designers had to later retrofit an interface that fit his gestures), this multi-button setup and three-tap initialization just make no sense. It should be a simple toggle, with access to optional controls like scaffolding settings (discussed above).
The main tool Matt has to force compliance is a time control. When Greta initially says she won’t comply (specifically and delightfully, she asserts, “I’m not some sort of push-button toaster monkey!”), he uses his interface to make it seem like 3 weeks pass for her inside her featureless white room. Then again for 6 months. The solitary confinement makes her crazy and eventually forces compliance.
The interface to set the time is a two-layer virtual dial: two chapter rings with wide blue arcs for touch targets. The first time we see him use it, he spins the outer one about 360° (before the camera cuts away) to set the time for three weeks. While he does it, the inner ring spins around the same center but at a slower rate. I presume it’s months, though the spatial relationship doesn’t make sense. Then he presses the button in the center of the control and sees an animation of a sun and moon arcing over an illustrated house to indicate her passage of time. Aside: Hamm plays this beat marvelously by callously chomping on the toast she has just helped make.
Ordinarily I wouldn’t speak to improvements on an interface that is used for torture, but this one could only affect a general AI that is as yet speculative, and it couldn’t be co-opted to torture real people since time travel doesn’t exist, so I think this time it’s OK. Discussing it as a general time-setting control, I can see three immediate improvements.
1. Use fast-forward models
It makes the most sense for her time sentence to end and return to real-world speed automatically. But each time we see the time controls used, the following interaction happens near the end of the time sentence:
Matt reaches up to the console
He taps the center button of the time dial
He taps the stylized house illustration. In response it gets a dark overlay with a circle inside of it reading “SET TIME.” This is the same icon seen 2nd down in the left panel.
He taps the center button of the time dial again. The dark overlay reads “Reset” with a new icon.
He taps the overlay.
Please tell me this is more post-actor interface design. Because that interaction is bonkers.
If the stop function really needs a manual control, well, we have models for that which are readily understandable by users and audiences. Have the whole thing work and look like a fast-forward control rather than this confusing mess. If he does need to end it early, as he does with the 6-month sentence, let him just press a control labeled PLAY or REALTIME.
2. Add calendar controls
A dial makes sense when a user is setting minutes or hours, but a calendar-like display should be used for weeks or months. It would be immediately recognizable and usable by the user and understandable to the audience. Since Hamm touches the interface three times, I would design the first tap to set the start date, the second tap to set the end date, and the third to commit.
3. Add microinteraction feedback
Also note that as he spins the dials, he sees no feedback showing the current time setting. At 370° is it 21 or 28 days? The interface doesn’t tell him. If he’s really having to push the AI to its limits, the precision will be important. Better would be to show the time value he’s set so he could tweak it as needed, and then let that count down as time remaining while the animation progresses.
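To make the missing feedback concrete: assuming one full 360° turn of the outer ring equals the three weeks we see Matt dial in (the 21-days-per-turn mapping is my assumption, not established on screen), the live label is a one-liner. A Python sketch:

```python
def dial_days(total_degrees, days_per_turn=21):
    """Map accumulated dial rotation to a time value.
    Assumes one full 360-degree turn of the outer ring = 21 days,
    matching the roughly 360-degree spin seen for the three-week sentence."""
    return total_degrees / 360.0 * days_per_turn

def feedback_label(total_degrees):
    """Live readout the interface could show while Matt spins the dial."""
    days = dial_days(total_degrees)
    weeks, rem = divmod(days, 7)
    return f"{days:.1f} days ({int(weeks)} wk {rem:.1f} d)"

# At 370 degrees, the answer to "is it 21 or 28 days?" becomes visible:
# feedback_label(370) shows roughly 21.6 days, not 28.
```

The same value could then count down as time remaining while the sun-and-moon animation runs.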
Effectiveness subtlety: Why not just make the solitary confinement pass instantly for Matt? Well, recall he is trying to ride a line of torture without having the AI wig out, so he should have some feedback as to the duration of what he’s putting her through. If it were always instant, he couldn’t tell the difference between three weeks and three millennia if he had accidentally entered the wrong value. But if real-world time is passing, and it’s taking longer than he thinks it should, he can intervene and stop the fast-forwarding.
That, or of course, show feedback while he’s dialing.
Near the end of the episode we learn that a police officer is whimsically torturing another Cookie: he sets the time ratio to “1000 years per minute” and then just lets it run while he leaves for Christmas break. The current time ratio should be displayed and a control for it provided, but both are absent from the screen.
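For a sense of the scale of that throwaway cruelty, the arithmetic is simple. A sketch (the week-long break is my assumption; the episode doesn’t say how long the officer is gone):

```python
def subjective_elapsed(real_minutes, years_per_minute=1000):
    """Subjective years experienced inside the Cookie for a given
    real-world duration at the officer's '1000 years per minute' setting."""
    return real_minutes * years_per_minute

# A hypothetical week-long Christmas break:
minutes_in_week = 7 * 24 * 60  # 10,080 real minutes
# subjective_elapsed(minutes_in_week) -> 10,080,000 subjective years
```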
4. Add psychological state feedback
There is one “improvement” that does not pertain to real world time controls, and that’s the invisible effect of what’s happening to the AI during the fast forward. In the episode Matt explains that, like any good torturer, “The trick of it is to break them without letting them snap completely,” but while time is passing he has no indicators as to the mental state of the sentience within. Has she gone mad? (Or “wigged out” as he says.) Does he need to ease off? Give her a break?
I would add trendline indicators or sparklines showing things like:
Valence of speech
I would have these trendlines highlight when any of the variables are getting close to known psychological limits. Then as time passes, he can watch the trends to know if he’s pushing things too far and ease off.
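A minimal sketch of such a limit monitor in Python (the metric scale, window size, and threshold values are all invented for illustration; the episode shows nothing like this):

```python
from collections import deque

class PsychStateMonitor:
    """Hypothetical sketch: track a rolling window of one metric
    (e.g. valence of speech, scaled -1..1) and flag when the trend
    approaches a known psychological limit."""

    def __init__(self, limit=-0.8, window=50):
        self.limit = limit                      # known breaking point
        self.samples = deque(maxlen=window)     # rolling trendline data

    def record(self, value):
        self.samples.append(value)

    def approaching_limit(self, margin=0.1):
        """True when the recent average is within `margin` of the limit,
        i.e. when the torturer should ease off."""
        if not self.samples:
            return False
        trend = sum(self.samples) / len(self.samples)
        return trend <= self.limit + margin
```

The UI would highlight the sparkline whenever `approaching_limit()` turns true, giving Matt the “ease off” signal the episode’s interface lacks.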
In one of the story threads, Matt uses an interface as part of his day job at Smartelligence to wrangle an AI that is the cloned mind of a client named Greta. Matt has three tasks in this role.
He has to explain to her that she is an artificial intelligence clone of a real world person’s mind. This is psychologically traumatic, as she has decades of memories as if she were a real person with a real body and full autonomy in the world.
He has to explain how she will do her job: Her responsibilities and tools.
He has to “break” her will and coerce her to faithfully serve her master—who is the real-world Greta. (The idea is that since virtual Greta is an exact copy, she understands real Greta’s preferences and can perform personal assistant duties flawlessly.)
The AI is housed in a small egg-shaped device with a single blue light camera lens. The combination of the AI and the egg-shaped device is called “The Cookie.” Why it is not called The Egg is a mystery left for the reader, though I hope it is not just for the “Cookie Monster” joke dropped late in the episode.
A function that is very related to the plot of the episode is the ability to block someone. To do this, the user looks at them, sees a face-detection square appear (confirming the person to be blocked), selects BLOCK from the Zed-Eyes menu, and clicks.
In one scene Matt and his wife Claire get into a spat. When Claire has had enough, she decides to block Matt. Now Matt is blurred and muted for Claire, but also the other way around: Claire is blurred and muted for Matt.
The blur is of the live image of the person within their own silhouette. (The silhouettes sometimes display a lovely warm-to-the-left and cool-to-the-right fringe effects, like subpixel antialiasing or chromatic aberration from optical lenses, I note, but it appears inconsistently.) The colors in the blur are completely desaturated to tones of gray. The human behind it is barely recognizable. His or her voice is also muffled, so only the vaguest sense of the volume and emotional tone of what they are saying is audible. Joe explains in the episode that once blocked, the blocked person can’t message or call the blocker, but the blocker can message the blocked person, and undo the block.
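The asymmetric messaging rules Joe describes boil down to a single membership test. A sketch (the data model is invented for illustration):

```python
def can_message(sender, recipient, blocks):
    """Sketch of the asymmetric rule Joe describes: `blocks` holds
    (blocker, blocked) pairs. The blocked person cannot message the
    blocker, but the blocker can still message the blocked person."""
    return (recipient, sender) not in blocks

blocks = {("Claire", "Matt")}  # Claire has blocked Matt
# can_message("Matt", "Claire", blocks) -> False (blocked can't reach blocker)
# can_message("Claire", "Matt", blocks) -> True  (blocker can still reach blocked)
```

Undoing the block is then just removing the pair from the set, which matches Joe’s note that only the blocker can lift it.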
In the world of “White Christmas”, everyone has a networked brain implant called Zed-Eyes that enables heads-up overlays onto vision, personalized audio, and modifications to environmental sounds. The control hardware is a thin metal circle around a metal click button, separated by a black rubber ring. People can buy the device with different color rings, as we see metal, blue, and black versions across the episode.
To control the implant, a person slides a finger (the thumb is easiest) around the rim of a tiny touch device. Because it responds to sliding across its surface, let’s say the device must use a sensor similar to the one used in The Entire History of You (2011) or the IBM Trackpoint.
A thumb slide cycles through a carousel menu. Sliding can happen both clockwise and counterclockwise. It even works through gloves.
The button selects or executes the selected action. The complete list of carousel menu options we see in the episode is: Search, Camera, Music, Mail, Call, Magnify, Block, Map. The particular options change across scenes, so it is context-aware or customizable. We will look at some of the particular functions in later posts. For now, let’s discuss the “platform” that is Zed-Eyes.
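The carousel behavior we see amounts to modular index arithmetic. A sketch using the menu options listed above (the one-item-per-slide granularity is my assumption):

```python
# The carousel options observed in the episode.
MENU = ["Search", "Camera", "Music", "Mail", "Call",
        "Magnify", "Block", "Map"]

def slide(index, steps, menu=MENU):
    """Advance the carousel selection by thumb-slide steps.
    Positive = clockwise, negative = counterclockwise; the menu wraps,
    which is why sliding works in either direction."""
    return (index + steps) % len(menu)
```

The wrap-around modulo is what lets a user reach Block quickly from either direction, which matters in the episode’s tensest moments.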
EYE-LINK is an interface used between a person at a desktop who uses support tools to help another person who is live “in the field” using Zed-Eyes. The working relationship between the two is very like Vika and Jack in Oblivion, or like the A.I. in Sight.
In this scene, we see EYE-LINK used by a pick-up artist, Matt, who acts as a remote “wingman” for pick-up student Harry. Matt has a group video chat interface open with paying customers eager to lurk, comment, and learn from the master.
Harry wears a hidden camera and microphone. This is the only tech he seems to have on him: he can only hear his wingman’s voice, and can only communicate back by talking generally, talking about something he’s looking at, or using pre-arranged signals.
Tap your beer twice if this is more than a little creepy.
A smaller transparent information panel for automated analysis, research, and advice.
An extra, laptop-like screen where Matt leads a group video chat with a paying audience, who are watching and snarkily commenting on the wingman scenario. It seems likely that this is not an official part of the EYE-LINK software.