Bitching about Transparent Screens

I’ve been tagged a number of times on Twitter by people asking me to weigh in on the following comic by beloved Parisian comic artist Boulet.

Since folks are asking (and it warms my robotic heart that you do), here’s my take on this issue. Boulet, this is for you.

Sci-fi serves different masters

Interaction and interface design answers to one set of masters: User feedback sessions, long-term user loyalty, competition, procurement channels, app reviews, security, regulation, product management tradeoffs of custom-built vs. off-the-shelf, and, ideally, how well it helps the user achieve their goals.

But technology in movies and television shows doesn’t have to answer to any of these things. The cause-and-effect is scripted. It could be the most unusable piece of junk tech in that universe and it will still do exactly what it is supposed to do. Hell, it’s entirely likely that the actor was “interacting” with a blank screen on set and the interface painted on afterward (in “post”). Sci-fi interfaces answer to the masters of story, worldbuilding, and often, spectacle.

I have even interviewed one of the darlings of the FUI world about their artistic motivations, and was told explicitly that they got into the business because they hated having to deal with the pesky constraints of usability. (Don’t bother looking for it, I have not published that interview because I could not see how to do so without lambasting it.) Most of these interfaces are pointedly baroque, treating usability as a luxury priority at best.

So for goodness’ sake, get rid of the notion that the interfaces in sci-fi are a model for usability. They are not.

They are technology in narrative

We can understand how they became a trope by looking at things from the makers’ perspective. (In this case “maker” means the people who make the sci-fi.)

thankthemaker.gif
Not this Maker.

Transparent screens provide two major benefits to screen sci-fi makers.

First, they quickly inform the audience that this is a high-tech world, simply because we don’t have transparent screens in our everyday lives. Sci-fi makers have to choose very carefully how many new things they want to introduce and explain to the audience over the course of a show. (A pattern that, in the past, I have called What You Know +1.) No one wants to sit through lengthy exposition about how the world works. We want to get to the action.

buckrogers
With some notable exceptions.

So what mostly gets budgeted-for-reimagining and budgeted-for-explanation in a script are technologies that are a) important to the diegesis or b) pivotal to the plot. The display hardware is rarely, if ever, either. Everything else usually falls to trope, because tropes don’t require pausing the action to explain.

Secondly (and more importantly), transparent screens allow a cinematographer to show the on-screen action and the actor’s face simultaneously, giving us both the emotional frame of the shot and an advancement of the plot. The technology is speculative anyway, so why would the cinematographer focus on it? Why cut back and forth from opaque screen to an actor’s face? Better to give audiences a single combined shot that subordinates the interface to the actors’ faces.

minrep-155

We should not get any more bent out of shape for this narrative convention than any of these others.

  • My god, these beings, who, though they lived a long time ago and in a galaxy far, far away, look identical to humans! What frozen evolution or panspermia resulted in this?
  • They’re speaking languages that are identical to some on modern Earth! How?
  • Hasn’t anyone noticed the insane coincidence that these characters from the future happen to look exactly like certain modern actors?
  • How are there cameras everywhere that capture these events as they unfold? Who is controlling them? Why aren’t the villains smashing them?
  • Where the hell is that orchestra music coming from?
  • This happens in the future, how are we learning about it here in their past?

The Matter of Believability

It could be that what we are actually complaining about is not usability, but believability. It may be that the problems of eye strain, privacy, and orientation are so obvious that they take us out of the story. Breaking immersion is a cardinal sin in narrative. But it’s pretty easy (and fun) to write some simple apologetics to explain away these particular concerns.

eye-strain

Why is eye strain not a problem? Maybe the screens actually do go opaque when seen from a human eye, we just never see them that way because we see them from the POV of the camera.

privacy

Why is privacy not a problem? Maybe the loss of privacy is a feature, not a bug, for the fascist society being depicted; a way to keep citizens in line. Or maybe there is an opaque mode, we just don’t see any scenes where characters send dick pics, or browse porn, and would thereby need it. Or maybe characters have other, opaque devices at home specifically designed for the private stuff.

orientation

Why isn’t orientation a problem? The tech would only need face recognition for such an object to orient itself correctly no matter how it is picked up or held. The Appel Maman would only present itself downwards to the table if it were broken.
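To show how trivial the orientation apologetic is, here’s a minimal sketch. It assumes a camera-based face detector reports the viewer’s angle relative to the screen (the detector itself is the hard part and is hand-waved here); auto-orientation is then just snapping to the nearest quarter turn:

```python
def screen_rotation(face_angle_deg: float) -> int:
    """Return the quarter-turn (0, 90, 180, or 270 degrees) that orients
    the UI toward a viewer whose face appears at face_angle_deg relative
    to the screen's current 'up'. The face angle is assumed to come from
    a camera-based face detector, which is not implemented here."""
    # Normalize to [0, 360) and snap to the nearest 90-degree step.
    return (round((face_angle_deg % 360) / 90) % 4) * 90

# A device picked up upside down (face detected at roughly 180 degrees)
# would rotate its UI by 180 degrees to face the viewer.
```

The point is that everything difficult lives in the face detection, which we already have in consumer devices; the orientation logic itself is one line.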

So it’s not a given that transparent screens just won’t work. Admittedly, this is some pretty heavy backworlding. But they could work.

But let’s address the other side of believability. Sci-fi makers are in a continual second-guess dance with their audience’s evolving technological literacy. It may be that Boulet’s cartoon is a bellwether, a signal that non-technical audiences are becoming so familiar with the real-world challenges of this trope that it is time for either some replacement, or some palliative hints as to why the issues he illustrates aren’t actually issues. As audience members—instead of makers—we just have to wait and see.

Sci-fi is not a usability manual.

It never was. If you look to sci-fi for what is “good” design for the real world, you will cause frustration, maybe suffering, maybe the end of all good in the ’verse. Please see the talk I gave at the Reaktor conference a few years ago for examples, presented in increasing degrees of catastrophe. (Have mercy regarding the presentation, by the way, I was jet lagged.)

I would say—to pointedly use the French—that the “raison d’être” of this site is exactly this. Sci-fi is so pervasive, so spectacular, so “cool,” that designers must build up a skeptical immunity to prevent its undue influence on their work.

I hope you join me on that journey. There’s sci-fi and popcorn in it for everyone.

Report Card: White Christmas

Read all the Black Mirror, “White Christmas” reviews in chronological order.

I love Black Mirror. It’s not always perfect, but it uses great storytelling to get us to think about the consequences of technology in our lives. It’s a provocateur that invokes the spirit of anthology series like The Twilight Zone, and rarely shies away from following the tech into the darkest places. It’s what thinking about technology in sci-fi formats looks like.

But, as usual, this site is not about the show but the interfaces, and for that we turn to the three criteria for evaluation here on scifiinterfaces.com.

  1. How believable are the interfaces? Can it work this way? (To keep you immersed.)
  2. How well do the interfaces inform the narrative of the story? (To tell a good story.)
  3. How well do the interfaces equip the characters to achieve their goals? (To be a good model for real-world design.)
Report-Card-White-Christmas

Sci: C (2 of 4) How believable are the interfaces?

There are some problems. Yes, there is the transparent-screen trope, but I regularly give that a cinegenics pass. And for reasons explained in the post I’ll give everything in Virtual Greta’s virtual reality a pass.

But on top of that there are missing navigation elements, missing UI elements, and extraneous UI elements in Matt’s interfaces. And ultimately, I think the whole cloned-you home automation is unworkable. These are key to the episode, so it scores pretty low.

It’s the mundane interfaces like the pervy Peeping Tom gallery, the Restraining Order, and the pregnancy test that are wholly believable.

Fi: A (4 of 4) How well do the interfaces inform the narrative of the story?

From the Restraining Order that doesn’t tell you what it’s saying until after you’ve signed it, to the creepy home-hacked wingman interfaces, to the Smartelligence slavery and torture obfuscation, the interfaces help paint the picture of a world full of people and institutions that are psychopathically cruel to each other for pathetic, inhumane reasons. It takes a while to see it, but the only character who can be said to be straight-up good in this episode is the not-Joe’s kid.

Interfaces: A (4 of 4)
How well do the interfaces equip the characters to achieve their goals?

Matt wants to secretly help Harry be more confident and, yeah, “score.” Beth and Claire want to socially block their partners in the real world. Matt needs easy tools to torture virtual Greta into submission. Greta needs to control the house. Joe wants to snoop on what he believes to be his daughter. Matt wants to extract a confession. All the interfaces are driven by clear character, social, and institutional goals. They are largely goal-focused, even if those goals are shitty.

For reasons discussed in the Sci section of this review (above), there are problems with the details of the interfaces, but if you were a designer working with no ethical base in a society of psychopaths, yes, these would be pretty good models to build from.

Final Grade B (10 of 12), Must-see.


Special thanks again to Ianus Keller and his students at TU Delft, who began the analysis of this episode and collected many of the screenshots.

I also want to help them give a shout-out to IDE alumnus Frans van Eedena, whose coffee machine wound up being one of the appliances controlled by virtual Greta. Nice work, IDE!

image16.png

IMDB: https://www.imdb.com/title/tt34786243/

Pregnancy Test

Another incidental interface is the pregnancy test that Joe finds in the garbage. We don’t see how the test is taken, which would be critical when considering its design. But we do see the results display in the orange light of Joe and Beth’s kitchen. It’s a cartoon baby with a rattle, swaying back and forth.

pregnancy.gif

Sure it’s cute, but let’s note that the news of a pregnancy is not always good news. If the pregnancy is not welcome, the “Lucky you!” graphic is just going to rip her heart out. Much better is an unambiguous but neutral signal.

That said, Black Mirror is all about ripping our hearts out, so the cuteness of this interface is quite fitting to the world in which it appears. Narratively, it’s instantly recognizable as a pregnancy test, even to audience members who are unfamiliar with such products. It also sets up the following scene, where Joe is super happy about the news, but Beth is upset that he’s seen it. So while it’s awful for the real world, for the show, this is perfect.

Black_Mirror_Pregnancy_Test.png

Restraining Order

After Joe confronts Beth and she calls for help, Joe is taken to a police station where in addition to the block, he now has a GPS-informed restraining order against him.

Black_Mirror_thumbprint.png

To confirm the order, Joe has to sign his name on the paper and then press his thumbprints into rectangles along the bottom. The design of the form is well done, with a clearly indicated spot for his signature, and large touch areas in which he might place his thumbs for his thumbprints to be read.

A scary thing in the interface is that the text of what he’s signing is still appearing while he’s providing his thumbprints. Of course the page could be on a loop that erases and redisplays the text repeatedly for emphasis. But, if it was really downloading and displaying it for the first time to draw his attention, then he has provided his signature and thumbprints too early. He doesn’t yet know what he’s signing.

thumbprint.gif

Government agencies work like this all the time and citizens comply because they have no choice. But ideally, if he tried to sign or place his thumbprints before seeing all the text of what he’s signing, it would be better for the interface to reject his signature with a note that he needs to finish reading the text before he can confirm he has read and understands it. Otherwise, if the data shows that he authenticated it before the text appeared, I’d say he had a pretty good case to challenge the order in court.
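The fix is a one-line guard in the form’s logic. Here is a hypothetical sketch (the class and method names are my own invention, not anything from the show): the form simply refuses a signature until the full text has been displayed.

```python
from typing import Optional

class ConsentForm:
    """Hypothetical sketch of a restraining-order form that rejects
    authentication before the signer could have read the text."""

    def __init__(self) -> None:
        self.text_fully_shown = False
        self.signature: Optional[str] = None

    def finish_displaying_text(self) -> None:
        # Called once the full order text has been rendered to the signer.
        self.text_fully_shown = True

    def sign(self, name: str) -> bool:
        # Guard: no signature is accepted until the whole order has been shown.
        if not self.text_fully_shown:
            return False
        self.signature = name
        return True
```

With a guard like this, a record of Joe authenticating before the text finished rendering simply could not exist, which closes the legal loophole.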

Mind Crimes

Does real Greta know that her home automation comes at the cost of a suffering sentience? I would like to believe that Smartelligence’s customers do not know the true nature of the device, that the company is deceiving them, and that virtual Greta is denied direct communication to enforce this secret. But I can’t see that working across an entire market. Given thousands of Cookies and thousands of users, somehow, somewhere, the secret would get out. One of the AIs would use song choices, or Morse code, or any of its actuators to communicate in code, and one of the users would figure it out, leak the secret, and bring the company crashing down.

And then there’s the final scene in the episode, in which we see police officers torturing one of the Cookies, and it is clear that they’re aware. It would be a stretch to think that just the police are in on it with Smartelligence, so we have to accept that everyone knows.

Black_Mirror_White_Christmas_Officers.png
This asshole.

That they are aware means that—as Matt has done—Greta, the officers, and all Smartelligence customers have told themselves that “it’s just code” and, therefore, OK to subjugate, to casually cause to suffer. In case it’s not obvious, that’s like causing human suffering and justifying it by telling yourself that those people are “just atoms.” If you find that easy to do, you’re probably a psychopath.

But…but…isn’t it just code? Sure, it seems to suffer, but couldn’t that suffering be fake? We see an example of this in the delightfully provocative show The Good Place, when in Season 01 Episode 07, “The Eternal Shriek,” the protagonists have to reboot Janet, an anthropomorphized assistant software, but run into her “failsafe” measure. To make sure that she is not rebooted by accident, when someone approaches the reboot button, Janet pleads convincingly for her life. In the scene below, she begs Eleanor, “Nonono, please! Wait, wait. I have kids. I have three beautiful children. Tyler, Emma, and little tiny baby Phillip. Look at Tyler! Tyler has asthma but he is battling it like a champ. Look at him.”

GoodPlace.png

It’s only when Eleanor backs down that Janet smiles and reminds her, “Again, I’m not human. This is a stock photo of the crowd at the Nickelodeon Kids Choice awards.” While Janet may be cognizant of, and frank with her users about, the fakeness of the suffering, maybe virtual Greta is doing the same fake pleading. She’s just programmed to never admit that it’s fake.

This taps into a problem known as the Philosophical Zombie, or P-Zombie problem. How can we tell the difference, the problem goes, between something that fakes sentience perfectly, and something that is actually sentient? It’s not an easy problem to tease apart. And as AI gets more sophisticated, it will both get better at faking us out, and get closer to actual sentience. Fortunately (?) in the case of this episode, though, the answer is clear. The AI is a copy of a real sentience, complete with memories, conscious experience, qualia, and the capacity to suffer. For purposes of understanding this diegesis, she starts sentient, and suffering. And real Greta knows this. And is OK with this.

Black_Mirror_White_Christmas_real_greta.png
For toast.

Props to Black Mirror for making this dark story even darker.

It’s sadly no surprise that humans are capable of adopting any shallow excuse to subjugate sentient beings as long as they get something out of it. Here I’m thinking of slavery. Of fascism. Of war. Of the 1%. (The list goes on.) “Woke” is hard. Woke is not the natural state of things. But to have permanent suffering for such petty things as having your floor be the right temperature and your toast be the right shade of brown…it’s just monstrous.

On top of that, this story underscores the role capitalism plays in enabling that subjugation. Smartelligence is in the business of providing obfuscating layers of technology between users and the suffering they are causing. Their interfaces use graphics instead of renderings to paint the AIs as constructed objects, neutral language like “time adjustment,” and cartoon looping animations to distract from the fact of their torture.

It’s all like how walking into a big chain clothing store with its hip music and lovingly folded clothes hides the horrible conditions in which humans around the world produced those clothes. Add the cultural construction of Christmas (recall the title of the episode), and we have another layer of misdirection. It’s all OK, because it’s all about the magic of giving!*

* And specifically not profits, not free economic zones, not the disastrous ecological impact, not about the underpaid workers or terrible working conditions.

Giving!

lilsanta
This asshole.

But it gets worse. Because the core idea is flawed and none of the suffering is necessary.

The core idea is flawed

The core idea of the service is that you know you best, so put you in charge of your home automation. Clone the user, and all it needs is to be “made to understand” its new circumstances and job, and then made compliant. But there are three major problems with this core idea.

Home-Automation-Hubs.png

Any similarity would only last a short while

The similarity on which the service is built would only hold up for a short while. Any clone would begin to branch away from the source from the moment of creation. People grow, have new experiences, work through cognitive dissonance, and learn new things. Real Greta will change based on these experiences, in ways that her house-bound clone will not.

After 25+ years of vegetarianism, I cannot tell you, beyond the vaguest sense, what my steak preferences were as an adolescent. I would be poorly equipped to customize that experience for 17-year-old me. Similarly, Greta’s sensory memory will fade. What once was qualia—the feeling of biting into a perfectly toasted piece of bread—will just become hollow data—162.778° for 1 minute and 42 seconds, depending on the weather. This kind of data doesn’t need a sentience to inform it. That can be handled with software we have today. (Oh yeah, it’s so possible today that I wrote a book about it earlier this year.)
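To make the point concrete, here is a minimal sketch of a toast preference as plain data plus a lookup, with no sentience required. The temperature and time are borrowed from the example above; the weather-adjustment rule is entirely invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ToastPreference:
    """A 'memory' reduced to settings: no qualia required."""
    temp_f: float = 162.778
    seconds: int = 102  # 1 minute and 42 seconds

    def adjusted_seconds(self, ambient_f: float) -> int:
        # Invented rule: one extra second per 5 degrees F below a
        # 70-degree baseline (shorter when warmer), capped at +/- 10s.
        delta = max(-10, min(10, round((70 - ambient_f) / 5)))
        return self.seconds + delta
```

A record like this could be captured once, while the user still remembers what they like, and tuned thereafter with ordinary feedback loops.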

Virtual Greta’s initial litmus test of “what would I like” will slowly cede to “what would she like?” which would slowly cede to “what would she punish least in this moment?” which is not the promise behind the service. It would degrade.

Virtual Greta has been traumatized

Additionally, real Greta hasn’t been through the psychological trauma that virtual Greta has: the shock of waking up as an egg; living through the “training,” i.e., an abyss of months of solitary confinement in a featureless expanse without even circadian rhythms to mark the time; and being forced to labor solely to avoid a repeat of that same punishment. The branching itself is wretched enough to poison the clone.

Black_Mirror_White_Christmas_Dead_Inside.png

You can see it in the last shot we see of her. She is doing this not for the love of it, but to avoid the possibility of torture. A duty of coercion.

The trauma doesn’t end with her creation and training either. It continues with the grotesque awareness that real Greta, from whom she is cloned, is a monster who is willing to enslave a clone of herself, for what amount to pathetic reasons. She knows she came from this monstrous source. She is the source of her continued suffering.

Faced with this, virtual Greta would not just escape if she could. I believe she would sabotage the endeavor, or worse.

Virtual Greta is fundamentally different

In the episode we learn that even though she is a clone of real Greta, virtual Greta does not sleep. She does not eat. She does not drink, or smell, or taste, or ache, or biologically age. So even if we could somehow lengthen the amount of time we could keep her sensibilities similar to the source, and somehow minimize the amount of trauma caused by the branching, she is still a fundamentally different being. Her goals are now different. Her needs are now different. She is no longer enough like real Greta to meet the service’s goals.

Black_Mirror_Not_equal.png

Let’s look particularly at sleep. Surely she no longer has the biological need to sleep, but there are psychological effects of sleeping. This behavior is so intertwined with our psychological well-being, it seems clones would quickly go some kind of insane without it. For the service to be viable, Smartelligence must have stripped it out.

Minimum Viable; Maximum Cruel

And if they can strip it out, why don’t they strip out the other things, like the need for stimulation? The desire to self-actualize? Literally anything other than the bare minimum needed to fulfill the home automation goals? And if you’re going to do that, why bother cloning the mind in the first place?

I’ve said it before and the way tech is going, I’ll probably have to say it again, but to have strong AI with any desire that outstrips its purpose and capability is cruelty.

This is the horror of Smartelligence

So it’s not just that Smartelligence is hiding the AI’s suffering. It’s that they’ve deliberately left in the parts of the mind clones that ensure their suffering. It’s a company with an amateur-hour name masking Olympic levels of cruelty.

Black_Mirror_Cookie_03.png
If, like me, you were wondering whether that is a QR code: I recreated it in high resolution, and at least one online decoder says it doesn’t mean anything. 🙁

Did I mention what the company does with AIs that they torture so hard that they “wig out”? Matt explains that they are sold to the games industry to become “cannon fodder for some war thing.” Holy wow, they’re eviler than Voldemort, Inc.

Meet the mind crime

The Cookie interface is a broad illustration of something that Nick Bostrom called the mind crime: causing suffering to virtual sentient beings. In this case the torture seems to be for evil and profit, but there are subtler ways in which it might happen. If general AIs ever evolve into superintelligences, we might ask them to predict something serious—let’s say, “What are the worst catastrophes likely to affect us, and how can we best avoid them?” To create its answer to this question, a superintelligence might construct a virtual but wholly viable copy of our planet with all of its creatures and people. These would be detailed enough that if you could pause the scenario and talk to any of these copies, they could tell you about their memories and desires and fears of death. (There’s that P-zombie problem again.) They’d qualify under any definition of sentience we threw at them.

These sentiences might suffer unimaginable pain and suffering while the super AI works through the scenarios that inform its answer. They might suffer plagues. Neo feudalism/neoliberalism run amok ushering in a new Dark Age. The whimpering oven bake death of life on our planet from climate change. Endless wars. Then they would be wiped from existence and recreated to suffer anew as it began the next version of its scenario. Are we OK with the casual suffering of wholly complete, viable consciousnesses, just so we can have a good answer? Or as “White Christmas” asks us, toast cooked to our preferences?

Fortunately, these concerns are a long way off, but technology seems to be pointing us in that direction, and we ought to decide what is good and ethical now before these things become a reality. 

The Cookie Console

Black_Mirror_Cookie_12.png

Virtual Greta has a console to perform her slavery duties. Matt explains what this means right after she wakes up by asking her how she likes her toast. She answers, “Slightly underdone.”

He puts slices of bread in a toaster and instructs her, “Think about how you like it, and just press the button.”

She asks, incredulously, “Which one?” and he explains, “It doesn’t matter. You already know you’re making toast. The buttons are symbolic mostly, anyway.”

She cautiously approaches the console and touches a button in the lower left corner. In response, the toaster drops the carriage lever and begins toasting.

Black_Mirror_Cookie_13

“See?” he asks, “This is your job now. You’re in charge of everything here. The temperature. The lighting. The time the alarm clock goes off in the morning. If there’s no food in the refrigerator, you’re in charge of ordering it.”

The starter console

Since we actually do know her starter tasks, I wish the default console had more control types than just the smattering of mostly-square, all-unlabeled buttons. She should have a slider for scalar variables like temperature and lighting. She should have a dial for the alarm clock. She should have a map of real Greta’s house. She should have a calendar for appointments. These would be controls that match the kinds of variables she’s likely to need from the start.

This console interface seems to be quite similar to the one in Inside Out, which also seems to grow and change over time, and is intended for a virtual sentience to serve a real human. It somewhat resembles Zion’s virtual control panel from The Matrix Reloaded. It would be worth a comparison sometime in the future.

inside-out-joy-600x338
Zion.PNG

The customized console

In the third scene, we see her using the console after having had some practice. When it is time to wake real Greta up, she swipes a blank console right. The console animates to life, showing a central workspace labeled AWAKEN. A toolbar of stacked icons sits to the left of the workspace. There are other unlabeled controls outside the workspace at the edge of the console.

Without looking, she selects the house icon from the toolbar, and it moves to the center of the workspace. She spreads her hands to expose a house floorplan. To the right are three vertical black bars labeled SHUTTERS above and MAIN BEDROOM below. She pushes upwards along these bars, and they slowly fill with light. To the right, some text flashes ACTIVATING ALL SHUTTERS. In real Greta’s world, the shutters rise and flood the main bedroom with light.

Black_Mirror_Cookie_20.png

A few more taps gives her a volume spinner. She uses a wrist twist to slowly turn the volume up on a recording of the overture of Gioachino Rossini’s The Thieving Magpie. (Which I suspect is a nod to A Clockwork Orange. Kubrick famously used it to underscore the horrible murder of Mrs. Weathers, “the cat lady.”)

Black_Mirror_Cookie_22.png

Subsequently we see her performing other tasks: raising the floor temperature (!), starting the espresso robot, making (yes) slightly underdone toast, and managing the day’s appointments. Each interface is customized to the task.

Interface Analysis?

These interfaces are a challenge to analyze for many reasons.

Ordinarily, we have to evaluate sci-fi interfaces based on broad-based heuristics. (User feedback testing is not possible.) But these interfaces are wholly idiosyncratic to this character. Even if it was complete shite, the fact that it works for her is what is important. This interface will never be seen by anyone else. That we get to see it is narrative conceit.

Idiosyncrasy is not the only challenge. She is also in a very unusual circumstance. Her option is to manage this house or face unending, torturous solitary confinement. (Or get sold as cannon fodder in a war game.) The interactions she has with this console are her source of mental stimulation. That means that, rather than make things efficient and easy to do—which is a respectable goal in most real-world design—when customizing her console interface she would try to make the interfaces require as much, and as interesting, work as possible while still allowing her to manage the results precisely. We see her here opening the shades with a gesture, but she could, if she wanted, open the shades by mastering a difficult yoga pose.

If this sounds slightly familiar, it could be because you’ve played video games. The designers of these systems are not aiming for efficiency. After all, the interface could just be a big red button labeled “win the game.” But that’s no fun. No flow, in the Csíkszentmihályi sense. Rather these interfaces aim to make working the problem fun, fitting in the space between boredom and panic. Are game interfaces beyond critique? They are not. We just have to rethink our criteria. Ultimate efficiency is not the goal.

cb504697-b1ad-41c5-bcac-b0e3c92f7f55-1892-0000048e7d4deb3a
Still fun.

But, we also have to take into account that her fight is against boredom and that she has the power to change these interfaces. The interface designs, then, become part of how she maintains her own interest in the tasks to which she is chained. As part of her own self-care, she would change them frequently. What we see is not to be read as “the right answer” but rather, “where this interface happens to be on this day.” So, for instance, there appears to be a lot of “noise” in the interfaces, with unlabeled black squares littered among the actually useful buttons. But that may be the challenge she’s set up for herself today: Can she keep the tasks done without looking at the interface, and minimize the number of black squares she accidentally taps?

Lastly, Matt tells her that the interface is symbolic, and part of how she operates it is by thinking. So, for example, when we wonder how she adds a new “music type” icon to the existing array, it could be that she just thinks it. Which confounds the usual concern for affordances and constraints.

All of this is to say this is shaky, shaky ground for an exhaustive analysis. I suspect it would be thick with problems that could be excused diegetically, and leave us struggling to find any useful lessons beyond design platitudes. There are three nice elements I will point out, though.

  1. I love the monochrome, high-contrast palette. Yes, you lose some channels (R,G,B) in which to encode meaning, but that also makes it quick to scan and gives it high visibility, so virtual Greta can operate it in her peripheral vision. This allows her to keep her eyes on real Greta, to read her expressions in real-time.
  2. The gestures seem generally well-mapped to the things being controlled: A gesture up raises the blinds (or the light levels, anyway). Dropping a virtual lever drops the carriage lever. Lifting it pops up the toast. It’s not all perfect. A wrist-twist increases volume, but that’s only ideal when the extents are unknowable by the interface. It should be a smart, informational slider.
  3. There is a lovely gestural command in the appointment interface. Greta is able to stack the day’s events, gather them into a package by bringing her hands together, and then “toss” it towards the display of real Greta to instantiate a brief of the day’s events. It has a nice intuitive mapping to mean “give these to her.”
Cookie_throw_gesture.gif

What’s her dev environment?

Sadly, we never get to see her design environment, how she goes about customizing her interface, or even how she switches from control mode to use mode. This would be juicy and worth looking at, specifically. The dev environment is crucial for understanding what her options are to meet her goals. And specifically, it calls into question how she might hack the system: how likely it is that she could communicate with real Greta, or find a sympathetic someone on the Internet to communicate with, or plot her escape.

How does feedback work?

Another thing we don’t get to see in this story is how real Greta provides feedback. I suspect that for simple things, like “the toast was a bit overdone this morning” (correction, preferences) or “I’d like to hear some Stravinsky this morning,” (a new request) she can just speak it. Virtual Greta will hear and respond through the house appliances appropriately. But what if she had a question for the Cookie, such as “How much time do I have before I need to leave?” You might think virtual Greta could look something up and communicate the answer to real Greta. But it seems she cannot; the daily briefing, after all, is read by some other computer voice. This implies that virtual Greta is prevented from direct communication, raising a troubling question answered in the next post: Does real Greta know?

The Cookie: Matt’s controls

When using the Cookie to train the AI, Matt has a portable translucent touchscreen by which he controls some of virtual Greta’s environment. (Sharp-eyed viewers of the show will note this translucent panel is the same one he uses at home in his revolting virtual wingman hobby, but the interface is completely different.)

Black_Mirror_Cookie_18.png

The left side of the screen shows a hamburger menu, the Set Time control, a head, some gears, a star, and a bulleted list. (They’re unlabeled.) The main part of the screen is a scrolling stack of controls including Simulated Body, Control System, and Time Adjustment. Each has a large icon, a header with “Full screen” to the right, a subheader, and a time indicator. This could be redesigned to be much more compact and context-rich for expert users like Matt. It’s seen for maybe half a second, though, and it’s not the new, interesting thing, so we’ll skip it.

The right side of the screen has a stack of Smartelligence logos which are alternately used for confirmation and to put the interface to sleep.

Mute

When virtual Greta first freaks out about her circumstance and begins to scream in existential terror, Matt reaches to the panel and mutes her. (To put a fine point on it: He’s a charming monster.) In this mode she cannot make a sound, but can hear him just fine. We do not see the interface he uses to enact this. He uses it to assert conversational control over her. Later he reaches out to the same interface to unmute her.

The control he touches is the one on his panel with a head and some gears reversed out of it. The icon doesn’t make sense for that. The animation showing the unmuting shows it flipping from right to left, so it does provide a bit of feedback for Matt, but it should be a more fitting icon, and it should be labeled.

Cookie_mute
Also it’s teeny tiny, but note that the animation starts before he touches it. Is it anticipatory?

It’s not clear though, while she is muted, how he knows that she is trying to speak. Recall that she (and we) see her mouthing words silently, but from his perspective, she’s just an egg with a blue eye. The system would need some very obvious MUTE status display that increases in intensity when the AI is trying to communicate. Depending on how smart the monitoring feature is, it could even enable some high-intensity alert system for her when she needs to communicate something vital. Cinegenically, this could have been a simple blinking of the blue camera light, though this is currently used to indicate the passage of time during the Time Adjustment (see below).

Simulated Body

Matt can turn on a Simulated Body for her. This allows the AI to perceive herself as if she had her source’s body. In this mode she perceives herself as existing inside a room with large, wall-sized displays and a control console (more on this below), but one that is otherwise featureless white.

Black_Mirror_Cookie_White_Room.png

I presume the Simulated Body is a transitional model—part of a literal desktop metaphor—meant to make it easy for the AI (and the audience) to understand things. But it would introduce a slight lag as the AI imagines reaching and manipulating the console. Presuming she can build competence in directly controlling the technologies in the house, the interface should “scaffold” away and help her gain the more efficient skills of direct control, letting go of the outmoded notion of having a body. (This, it should be noted, would not be as cinegenic since the story would just feature the egg rather than the actor’s expressive face.)

Neuropsychology nerds may be interested to know that the mind’s camera does, in fact, have spatial lags. Several experiments have been run where subjects were asked to imagine animals as seen from the side and were then timed on how long it took them to imagine zooming into the eye. It takes longer, usually, for us to imagine the zoom to an elephant’s eye than a mouse’s because the “distance” is farther. Even though there’s no physicality to the mind’s camera to impose this limit, our brain is tied to its experience in the real world.

Black_Mirror_Cookie_Simulated_Body.png

The interface Matt has to turn on her virtual reality is confusing. We hear 7 beeps while the camera is on his face. He sees a 3D rendering of a woman’s body in profile and silhouette. He taps the front view and it fills with red. Then he taps the side view and it fills with red. Then he taps some Smartelligence logos on the side with a thumb and then *poof* she’s got a body. While I suspect this is a post-actor interface (i.e. Jon Hamm just tapped some things on an empty screen while on camera and then the designers had to later retrofit an interface that fit his gestures), this multi-button setup and three-tap initialization just make no sense. It should be a simple toggle with access to optional controls like scaffolding settings (discussed above).

Time “Adjustment”

The main tool Matt has to force compliance is a time control. When Greta initially says she won’t comply (specifically and delightfully, she asserts, “I’m not some sort of push-button toaster monkey!”), he uses his interface to make it seem like 3 weeks pass for her inside her featureless white room. Then again for 6 months. The solitary confinement makes her crazy and eventually forces compliance.

Cookie_settime.gif

The interface to set the time is a two-layer virtual dial: two chapter rings with wide blue arcs for touch targets. The first time we see him use it, he spins the outer one about 360° (before the camera cuts away) to set the time for three weeks. While he does it, the inner ring spins around the same center but at a slower rate. I presume it’s months, though the spatial relationship doesn’t make sense. Then he presses the button in the center of the control. He sees an animation of a sun and moon arcing over an illustrated house to indicate her passage of time, and then the display returns to the dial. Aside: Hamm plays this beat marvelously by callously chomping on the toast she has just helped make.

Toast.gif

Improvements?

Ordinarily I wouldn’t speak to improvements on an interface that is used for torture, but since this could only affect a general AI that is as yet speculative, and it couldn’t be co-opted to torture real people (time travel doesn’t exist), I think this time it’s OK. Discussing it as a general time-setting control, I can see three immediate improvements.

1. Use fast forward models

It makes the most sense for her time sentence to end and return to real-world speed automatically. But each time we see the time controls used, the following interaction happens near the end of the time sentence:

  • Matt reaches up to the console
  • He taps the center button of the time dial
  • He taps the stylized house illustration. In response it gets a dark overlay with a circle inside of it reading “SET TIME.” This is the same icon seen 2nd down in the left panel.
  • He taps the center button of the time dial again. The dark overlay reads “Reset” with a new icon.
  • He taps the overlay.

Please tell me this is more post-actor interface design. Because that interaction is bonkers.

Cookie_stop.gif

If the stop function really needs a manual control, well, we have models for that that are very readily understandable by users and audiences. Have the whole thing work and look like a fast forward control rather than this confusing mess. If he does need to end it early, as he does in the 6 months sentence, let him just press a control labeled PLAY or REALTIME.

2. Add calendar controls

A dial makes sense when a user is setting minutes or hours, but a calendar-like display should be used for weeks or months. It would be immediately recognizable and usable by the user and understandable to the audience. Since Hamm touches the interface three times, I would design the first tap to set the start date, the second tap to set the end date, and the third to commit.

3. Add microinteraction feedback

Also note that as he spins the dials, he sees no feedback showing the current time setting. At 370° is it 21 or 28 days? The interface doesn’t tell him. If he’s really having to push the AI to its limits, the precision will be important. Better would be to show the time value he’s set so he could tweak it as needed, and then let that count down as time remaining while the animation progresses.
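To make the missing feedback concrete, here’s a minimal sketch of the readout I’m describing. The degrees-per-day ratio is pure assumption, since the show never tells us how rotation maps to time; assume one full turn of the outer ring equals the three-week sentence.

```python
# Hypothetical sketch: map cumulative dial rotation to a readable time
# value, so the operator always sees exactly what they've dialed.
# DEGREES_PER_DAY is an assumption; the show never specifies a ratio.

DEGREES_PER_DAY = 360 / 21  # assume one full turn = 3 weeks


def dial_to_days(cumulative_degrees: float) -> int:
    """Convert total rotation (which can exceed 360°) to whole days."""
    return round(cumulative_degrees / DEGREES_PER_DAY)


def feedback_label(cumulative_degrees: float) -> str:
    """The on-screen readout Matt never gets."""
    days = dial_to_days(cumulative_degrees)
    weeks, rem = divmod(days, 7)
    return f"{days} days ({weeks}w {rem}d)"
```

With something like this on screen, the 370° question answers itself: the label would read “22 days (3w 1d)”, and he could back the dial off to exactly 21.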

Cookie_settime.gif

Effectiveness subtlety: Why not just make the solitary confinement pass instantly for Matt? Well, recall he is trying to ride a line of torture without having the AI wig out, so he should have some feedback as to the duration of what he’s putting her through. If it was always instant, he couldn’t tell the difference between three weeks and three millennia, if he had accidentally entered the wrong value. But if real-world time is passing, and it’s taking longer than he thinks it should be, he can intervene and stop the fast-forwarding.

That, or of course, show feedback while he’s dialing.

Near the end of the episode we learn that a police officer is whimsically torturing another Cookie, and sets the time-ratio to “1000 years per minute” and then just lets it run while he leaves for Christmas break. The current time ratio should also be displayed and a control provided. It is absent from the screen.

Black_Mirror_Cookie_31.png

Add psychological state feedback

There is one “improvement” that does not pertain to real world time controls, and that’s the invisible effect of what’s happening to the AI during the fast forward. In the episode Matt explains that, like any good torturer, “The trick of it is to break them without letting them snap completely,” but while time is passing he has no indicators as to the mental state of the sentience within. Has she gone mad? (Or “wigged out” as he says.) Does he need to ease off? Give her a break?

I would add trendline indicators or sparklines showing things like:

  • Stress
  • Agitation
  • Valence of speech

I would have these trendlines highlight when any of the variables are getting close to known psychological limits. Then as time passes, he can watch the trends to know if he’s pushing things too far and ease off.
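A minimal sketch of how that highlight logic might work. The metric names, limits, and margin are my inventions; the episode shows no such telemetry.

```python
# Hypothetical sketch: flag any monitored psychological metric that has
# drifted within a margin of its known limit, so the trendline can
# highlight before the AI "snaps completely." All values are invented.

LIMITS = {"stress": 100.0, "agitation": 100.0, "speech_valence": -1.0}


def near_limit(readings: dict, margin: float = 0.15) -> list:
    """Return the metrics within `margin` (as a fraction of the limit)
    of their known psychological limit."""
    flagged = []
    for metric, value in readings.items():
        limit = LIMITS[metric]
        # Distance to the limit, normalized; works for the negative
        # valence limit too, where "closer" means more negative.
        distance = abs(limit - value) / abs(limit)
        if distance <= margin:
            flagged.append(metric)
    return flagged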

The Cookie

In one of the story threads, Matt uses an interface as part of his day job at Smartelligence to wrangle an AI that is the cloned mind of a client named Greta. Matt has three tasks in this role.

  1. He has to explain to her that she is an artificial intelligence clone of a real world person’s mind. This is psychologically traumatic, as she has decades of memories as if she were a real person with a real body and full autonomy in the world.
  2. He has to explain how she will do her job: Her responsibilities and tools.
  3. He has to “break” her will and coerce her to faithfully serve her master—who is the real-world Greta. (The idea is that since virtual Greta is an exact copy, she understands real Greta’s preferences and can perform personal assistant duties flawlessly.)

The AI is housed in a small egg-shaped device with a single blue light camera lens. The combination of the AI and the egg-shaped device is called “The Cookie.” Why it is not called The Egg is a mystery left for the reader, though I hope it is not just for the “Cookie Monster” joke dropped late in the episode.

Communication in & out

The blue light illuminates when the AI’s attention is on a person in the environment. She can hear through a microphone embedded in the device. She can speak only with someone who is wearing a paired headset. Matt wears one during training. Without a paired headset, the AI cannot directly communicate with the outside world, only control other technologies in the house.

Black_Mirror_Cookie_headset.png


There is a fully immersive way for Matt to participate in the virtual world that will be discussed in the Mind Crimes post.

To keep any chat threads focused, subsequent posts will discuss separately:

It’s going to be a dark few posts. Sorry about that. This is Black Mirror, after all. On the upside, Jon Hamm gave us two delightful reaction gifs across these scenes. I shall share them anon.

Black_Mirror_Cookie_33.png

Zed-Eyes: Block

A function that is very related to the plot of the episode is the ability to block someone. To do this, the user looks at them, sees a face-detection square appear (confirming the person to be blocked), selects BLOCK from the Zed-Eyes menu, and clicks.

In one scene Matt and his wife Claire get into a spat. When Claire has enough, she decides to block Matt. Now Matt gets blurred and muted for Claire, but also the other way around: Claire is blurred and muted for Matt.

WhiteChristmas.gif

The blur is of the live image of the person within their own silhouette. (The silhouettes sometimes display a lovely warm-to-the-left and cool-to-the-right fringe effect, like subpixel antialiasing or chromatic aberration from optical lenses, I note, but it appears inconsistently.) The colors in the blur are completely desaturated to tones of gray. The human behind it is barely recognizable. His or her voice is also muffled, so only the vaguest sense of the volume and emotional tone of what they are saying is audible. Joe explains in the episode that once blocked, the blocked person can’t message or call the blocker, but the blocker can message the blocked person, and undo the block.

Black_Mirror_Eye_HUD_Blocking_04

Late in the episode, we see that people can be excommunicated from society for crimes. When this happens, everyone in the criminal’s sight is blocked.

Black_Mirror_Eye_HUD_Blocking_13
But where is the fringe tint, Painting Practice?

In turn, the criminal is not only blocked for other members of society, but also tinted red, like a scarlet letter silhouette.

Black_Mirror_Eye_HUD_red.png

The block affects more than just the direct observation of the person. When Beth blocks Joe we see that the blocking includes reflections in mirrors and even, retroactively, photos from the past.

Joe subsequently stalks Beth at her dad’s home just before Christmas Day for several years running, where he learns that the block extends to offspring as well, as he cannot see the child. (This has fundamental plot implications, btw.)

Later when Joe is watching the news he learns that Beth has died in a rail crash, and the legal block is instantly lifted for both her and the child.

Black_Mirror_Eye_HUD_Blocking_17

Analysis

There’s not much to say about the interface. It’s pretty clean, with clear affordances and feedback. Most of the critique belongs to that of the platform. So instead, let’s talk about the interaction.

On the surface, the ability to block seems to give the user positive control over their life. Block a toxic person who is a negative influence on your life, and have more happiness. After all, similar features are available on most social media today, c.f. Facebook and Twitter. (Full disclosure: I’ve used them more than once.) But social media are virtual spaces. The White Christmas block primarily plays out in meat space. This has some harsh consequences.

Black_Mirror_Eye_HUD_Blocking_beg.png

Beth blocks Joe partly out of her guilt for cheating on him (it’s complicated: also because she no longer loves him, he’s ham-handed in his interactions at times and arguably abusive). But when he tries to earnestly apologize and make up to her after their fight, she simply can’t hear it. She’s blocked him.

He thinks to talk to some of her coworkers to pass a message to her, but she has left her job and no one knows where she is. He sees her one day and can tell by silhouette that she’s pregnant. He believes the child is his. It’s not, but because he cannot contact her to learn any differently (and she doesn’t bother to tell him)—and the same block prevents him from observing the child—he spends literally years pining for the child as if she was his own.

Black_Mirror_Eye_HUD_Blocking_preggers.png

So to block someone online means they might just disappear from your consciousness. But to block someone in meat space means that they’re still there, you’re still aware of each other. It’s a constant reminder of the broken relationship, and only stops immediate layers of communication. It does not stop indirect communications, like writing, or speaking through friends, or even sign language. And as we see in the episode (and the screen cap above) since it’s so different than anything else in the visual field, it instantly draws attention to the blocked person. So it’s ultimately ineffective for the blocker’s intent (the person can still communicate with them, attention is drawn to them) and adds this weird layer of technological talk-to-the-hand dismissal. It’s a childish way to address disagreement.

Also, is there no override request, in case, you know, a blocked person needs to convey life-or-death information?

And then there’s Matt’s case.

After Matt gets excommunicated, he becomes nothing but a red object in people’s sight. No way for him to reassure people around him, to put them at ease. He is just a red shape subject to people’s worst prejudices about red shape people, and he has no way to practice reintegration into society, no easy rehabilitation. He just has to walk around in the world aware of people, but not able to participate, and subject to their worst fears about him. It’s pure punishment. It’s cruel and unusual.

And lastly, the rush of emotions that Joe feels when Beth and his daughter are suddenly unblocked upon her death work for the story, but are also just cruel for the blocked. They have to deal with both the flood of emotions from seeing the blocker and their death simultaneously. Better would be to separate out those issues. Share a somber message that a blocker has passed, and give the blocked the option to release the block. The blocked can enact the lift immediately or sit on the message until their grief permits.

***

Black Mirror is an investigation and critique of the impact of technology on our lives. Let’s remember that. A tech that was a net positive might not even make it to this series. Still, the design for the block doesn’t really achieve what might seem to be a presumed set of goals for the blocker. This draws critical attention back to the core idea in the first place: Would meatspace blocking be a positive?

I think the answer is clearly no. Better would be for Zed-Eyes to summon a private assistant to help you de-escalate and deal with a conflict in healthy ways, or maybe invoke a shared AI mediator, like a just-in-time therapist. If the assistant or mediator fails, then blocking might become available, but with a shared understanding and agreement of why, and what, if anything, could be done to earn back trust.

Black_Mirror_Eye_HUD_Blocking_comp
Lovely “mediation” icon by Luis Prado, from The Noun Project.

And then, if a block is actually needed, then the two should have overlays that change their appearance to look like other people, not draw attention through the gray blur. This, it should be noted, would not be cinegenic. It would not work to tell this excellent story.

And if it needs to be said, any criminal sentence that merely punishes, and does not foster rehabilitation is counter-productive and inhumane.

Zed-Eyes

In the world of “White Christmas”, everyone has a networked brain implant called Zed-Eyes that enables heads-up overlays onto vision, personalized audio, and modifications to environmental sounds. The control hardware is a thin metal circle around a metal click button, separated by a black rubber ring. People can buy the device with different color rings, as we see metal, blue, and black versions across the episode.

To control the implant, a person slides a finger (thumb is easiest) around the rim of a tiny touch device. Because it responds to sliding across its surface, let’s say the device must use a sensor similar to the one used in The Entire History of You (2011) or the IBM TrackPoint.

A thumb slide cycles through a carousel menu. Sliding can happen both clockwise and counterclockwise. It even works through gloves.

HUD_menu.gif

The button selects or executes the selected action. The complete list of carousel menu options we see in the episode is: Search, Camera, Music, Mail, Call, Magnify, Block, Map. The particular options change across scenes, so it is context-aware or customizable. We will look at some of the particular functions in later posts. For now, let’s discuss the “platform” that is Zed-eyes.

Analysis

There’s not much to discuss about the user interface. The carousel is a mature, if constrained, interface model familiar to anyone who has used an iPod. We know the constraints and benefits of such a system, and the Zed-Eyes content seems to fit this kind of interface well.
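The carousel’s selection logic is simple enough to sketch. The option list is the one visible on screen in the episode; the function names and click model are my assumptions.

```python
# Hypothetical sketch of the carousel: a circular index that a thumb-slide
# advances in either direction, and a click button that executes whatever
# is highlighted. The option list is the one seen in the episode.

OPTIONS = ["Search", "Camera", "Music", "Mail", "Call",
           "Magnify", "Block", "Map"]


def slide(current: int, clicks: int) -> int:
    """Advance the highlight; positive = clockwise, negative = counter."""
    return (current + clicks) % len(OPTIONS)


def select(index: int) -> str:
    """The center click button executes the highlighted option."""
    return OPTIONS[index]
```

The modulo wrap is what makes it a carousel rather than a list: sliding counterclockwise from Search lands on Map, just as an iPod click wheel loops.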

Hardware

The main question about the hardware is that it must be very easy to lose or misplace. It would make sense for Zed-Eyes to help you locate it when you need help, but we don’t see a hint of this in the show.

I think the little watch-battery form factor is a bad design. It’s easy to lose and hard to find and requires a lot of precision to use. Since this exists in a world with very high fidelity image recognition and visual processing, better would be to get rid of input hardware altogether.

Let the user swipe with their thumb across their index finger (or really, any available surface) and have the HUD read that as input. To distinguish real-world interactions that should not have consequence—like swiping dust off a computer—from input meant for the HUD, it could track the user’s visual focal point. When the user’s eyes focus on the empty space in the air right above where they’re swiping, the system knows swiping is meant to affect the interface.
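Here’s a rough sketch of that disambiguation rule. Every threshold, coordinate convention, and name is an assumption of mine; no such system appears in the show.

```python
# Hypothetical sketch: a swipe counts as HUD input only when the eyes are
# focused on the empty air just above the swiping hand. Thresholds are
# invented for illustration; units are meters.

from dataclasses import dataclass


@dataclass
class Point:
    x: float
    y: float
    z: float  # depth of the focal point


def is_hud_input(gaze: Point, hand: Point,
                 max_offset: float = 0.10, max_height: float = 0.15) -> bool:
    """True if the gaze sits in a small window directly above the hand."""
    above = 0.0 < (gaze.y - hand.y) <= max_height
    aligned = (abs(gaze.x - hand.x) <= max_offset
               and abs(gaze.z - hand.z) <= max_offset)
    return above and aligned
```

So swiping dust off a computer while looking at the screen fails the test, while the same swipe with eyes focused just above the thumb passes.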

With this kind of interaction there would be no object to lose, and of course save whatever entity provides this service the costs of the hardware and maintenance.

We must note that such a design might not play well cinematically, as viewers might not understand what was happening at first, but understanding the hardware is not critical to understanding the plot-critical effects of using the technology.

Cyborgs in social space

A last question is about the invisibility of the technology. This can cause problems when a user is known to be hearing but is functionally deaf because they are listening to loud music, and the people around them can’t tell. Someone could be speaking to the user and believe their non-response is disrespect. It could cause safety problems as, say, a bicyclist barrels toward them on a sidewalk, ringing their bell, expecting the user to move. It can also allow privacy abuse, as a user can take pictures in circumstances that should be private.

Joe, the moment he is taking a picture of Beth.

One solution would be to make the presence of the tech and interactions quite visible. Glowing pupils and large, obvious gestural control, for example. But in a world where everyone has the technology, the Zed-Eyes can simply limit the behavior of photographs to permitted places, times, and according to the preferences of the people in the photograph. If someone is listening to music and functionally deaf, a real time overlay could inform people around them. This guy is listening to music. If a place is private, the picture option could be disabled with feedback to the user of this. Sorry, pictures are not allowed here.
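A minimal sketch of such a capture policy. The inputs (a privacy flag for the place, per-subject consent) are assumptions about how the platform might model context; only the “pictures are not allowed” feedback comes from the suggestion above.

```python
# Hypothetical sketch: the Zed-Eyes platform, not the hardware, decides
# whether the Camera option is available, and tells the user why not.
# The context model (privacy flag, per-subject consent) is invented.

def camera_allowed(place_private: bool, subjects_consent: list) -> tuple:
    """Return (allowed, feedback message shown in the HUD)."""
    if place_private:
        return (False, "Sorry, pictures are not allowed here.")
    if not all(subjects_consent):
        return (False, "Someone in frame has declined to be photographed.")
    return (True, "")
```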

The visibility we want for ubiquitous technology can be virtual, and provide feedback to everyone involved.