Another incidental interface is the pregnancy test that Joe finds in the garbage. We don’t see how the test is taken, which would be critical when considering its design. But we do see the results display in the orange light of Joe and Beth’s kitchen. It’s a cartoon baby with a rattle, swaying back and forth.
Sure, it’s cute, but let’s note that news of a pregnancy is not always good news. If the pregnancy is not welcome, the “Lucky you!” graphic is just going to rip her heart out. Much better would be an unambiguous but neutral signal.
That said, Black Mirror is all about ripping our hearts out, so the cuteness of this interface is quite fitting to the world in which it appears. Narratively, it’s instantly recognizable as a pregnancy test, even to audience members who are unfamiliar with such products. It also sets up the following scene, where Joe is super happy about the news but Beth is upset that he’s seen it. So while it would be awful for the real world, for the show, this is perfect.
When using the Cookie to train the AI, Matt has a portable translucent touchscreen by which he controls some of virtual Greta’s environment. (Sharp-eyed viewers of the show will note this translucent panel is the same one he uses at home in his revolting virtual wingman hobby, but the interface is completely different.)
The left side of the screen shows a hamburger menu, the Set Time control, a head, some gears, a star, and a bulleted list. (They’re unlabeled.) The main part of the screen is a scrolling stack of controls including Simulated Body, Control System, and Time Adjustment. Each has a large icon, a header with “Full screen” to the right, a subheader, and a time indicator. This could be redesigned to be much more compact and context-rich for expert users like Matt. It’s seen for maybe half a second, though, and it’s not the new, interesting thing, so we’ll skip it.
The right side of the screen has a stack of Smartelligence logos which are alternately used for confirmation and to put the interface to sleep.
Mute
When virtual Greta first freaks out about her circumstance and begins to scream in existential terror, Matt reaches to the panel and mutes her. (To put a fine point on it: He’s a charming monster.) In this mode she cannot make a sound, but can hear him just fine. We do not see the interface he uses to enact this. He uses it to assert conversational control over her. Later he reaches out to the same interface to unmute her.
The control he touches is the one on his panel with a head and some gears reversed out of it. The icon doesn’t make sense for that function. The animation of the unmuting shows it flipping from right to left, so it does provide a bit of feedback for Matt, but it should be a more fitting icon, and it should be labeled.
Also it’s teeny tiny, but note that the animation starts before he touches it. Is it anticipatory?
It’s not clear, though, how he knows that she is trying to speak while she is muted. Recall that she (and we) see her mouthing words silently, but from his perspective, she’s just an egg with a blue eye. The system would need some very obvious MUTE status display that increases in intensity when the AI is trying to communicate. Depending on how smart the monitoring feature is, it could even enable some high-intensity alert for her when she needs to communicate something vital. Cinegenically, this could have been a simple blinking of the blue camera light, though that is currently used to indicate the passage of time during the Time Adjustment (see below).
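If I were to sketch that status logic in code (the signal names and thresholds here are all invented for illustration), the indicator would escalate with the AI’s attempts to speak:

```typescript
// Hypothetical mute-status logic; everything here is invented for illustration.
type MuteDisplay = {
  muted: boolean;
  intensity: number; // 0 = idle glow, 1 = maximum urgency
};

function updateMuteDisplay(
  muted: boolean,
  speechAttemptsPerMinute: number,
  vitalAlert: boolean // the AI flags something it must communicate
): MuteDisplay {
  if (!muted) return { muted: false, intensity: 0 };
  // Escalate steadily as the muted AI tries harder to be heard...
  const base = Math.min(speechAttemptsPerMinute / 10, 0.8);
  // ...and pin the indicator at maximum for a vital alert.
  return { muted: true, intensity: vitalAlert ? 1 : base };
}
```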
Simulated Body
Matt can turn on a Simulated Body for her. This allows the AI to perceive herself as if she had her source’s body. In this mode she perceives herself as existing inside a room with large, wall-sized displays and a control console (more on this below), but which is otherwise a featureless white.
I presume the Simulated Body is a transitional model—part of a literal desktop metaphor—meant to make it easy for the AI (and the audience) to understand things. But it would introduce a slight lag as the AI imagines reaching and manipulating the console. Presuming she can build competence in directly controlling the technologies in the house, the interface should “scaffold” away and help her gain the more efficient skills of direct control, letting go of the outmoded notion of having a body. (This, it should be noted, would not be as cinegenic since the story would just feature the egg rather than the actor’s expressive face.)
Neuropsychology nerds may be interested to know that the mind’s camera does, in fact, have spatial lags. Several experiments have been run in which subjects were asked to imagine animals as seen from the side and were then timed on how long it took them to imagine zooming in to the eye. It usually takes longer for us to imagine the zoom to an elephant’s eye than to a mouse’s, because the “distance” is farther. Even though there’s no physicality to the mind’s camera to impose this limit, our brain is tied to its experience in the real world.
The interface Matt has to turn on her virtual reality is confusing. We hear seven beeps while the camera is on his face. He sees a 3D rendering of a woman’s body in profile and silhouette. He taps the front view and it fills with red. Then he taps the side view and it fills with red. Then he taps some Smartelligence logos on the side with a thumb, and then *poof* she’s got a body. While I suspect this is a post-actor interface (i.e., Jon Hamm just tapped some things on an empty screen while on camera, and the designers had to later retrofit an interface to fit his gestures), this multi-button setup and three-tap initialization just makes no sense. It should be a simple toggle, with access to optional controls like the scaffolding settings discussed above.
Time “Adjustment”
The main tool Matt has to force compliance is a time control. When Greta initially says she won’t comply (specifically and delightfully, she asserts, “I’m not some sort of push-button toaster monkey!”), he uses his interface to make it seem like three weeks pass for her inside her featureless white room. Then he does it again for six months. The solitary confinement drives her mad and eventually forces compliance.
The interface to set the time is a two-layer virtual dial: two chapter rings with wide blue arcs for touch targets. The first time we see him use it, he spins the outer one about 360° (before the camera cuts away) to set the time for three weeks. While he does it, the inner ring spins around the same center but at a slower rate. I presume it indicates months, though the spatial relationship doesn’t make sense. Then he presses the button in the center of the control, and on the display he sees an animation of a sun and moon arcing over an illustrated house to indicate her passage of time. Aside: Hamm plays this beat marvelously by callously chomping on the toast she has just helped make.
Improvements?
Ordinarily I wouldn’t speak to improvements on an interface that is used for torture, but this one could only affect a general AI, which is as yet speculative, and it couldn’t be co-opted to torture real people, since time travel doesn’t exist. So I think this time it’s OK. Discussing it as a general time-setting control, I can see three immediate improvements.
1. Use fast forward models
It makes the most sense for her time sentence to end and return to real-world speed automatically. But each time we see the time controls used, the following interaction happens near the end of the time sentence:
Matt reaches up to the console
He taps the center button of the time dial
He taps the stylized house illustration. In response it gets a dark overlay with a circle inside of it reading “SET TIME.” This is the same icon seen 2nd down in the left panel.
He taps the center button of the time dial again. The dark overlay reads “Reset” with a new icon.
He taps the overlay.
Please tell me this is more post-actor interface design. Because that interaction is bonkers.
If the stop function really needs a manual control, well, we have models for that which are readily understandable by users and audiences alike. Have the whole thing work and look like a fast-forward control rather than this confusing mess. If he does need to end a sentence early, as he does with the six-month one, let him just press a control labeled PLAY or REALTIME.
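Here is a minimal sketch of that model, assuming invented names and a simple ratio between subjective and real time:

```typescript
// Hypothetical fast-forward model; names and the ratio parameter are invented.
type TimeState =
  | { mode: "realtime" }
  | { mode: "fastForward"; endsAt: number }; // epoch ms

function startFastForward(subjectiveMs: number, ratio: number): TimeState {
  // e.g., six subjective months at a 1000:1 ratio elapse in ~4.4 real hours
  const realMs = subjectiveMs / ratio;
  return { mode: "fastForward", endsAt: Date.now() + realMs };
}

function tick(state: TimeState): TimeState {
  // The sentence ends on its own; no four-tap ritual required.
  if (state.mode === "fastForward" && Date.now() >= state.endsAt) {
    return { mode: "realtime" };
  }
  return state;
}

// The one manual control: a single PLAY/REALTIME button for ending early.
function pressRealtime(): TimeState {
  return { mode: "realtime" };
}
```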
2. Add calendar controls
A dial makes sense when a user is setting minutes or hours, but a calendar-like display should be used for weeks or months. It would be immediately recognizable and usable by the user and understandable to the audience. If Hamm had to tap the interface three times, I would design the first tap to set the start date, the second tap to set the end date, and the third to commit.
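Sketched out (names invented), that tap sequence is about as simple as state machines get:

```typescript
// Hypothetical calendar interaction: tap 1 sets the start, tap 2 the end,
// tap 3 commits the range. All names are invented.
type CalendarSelection = { start?: Date; end?: Date };

function handleTap(
  sel: CalendarSelection,
  tapped: Date,
  commit: (start: Date, end: Date) => void
): CalendarSelection {
  if (!sel.start) return { start: tapped };     // first tap
  if (!sel.end) return { ...sel, end: tapped }; // second tap
  commit(sel.start, sel.end);                   // third tap commits
  return {};                                    // reset for next use
}
```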
3. Add microinteraction feedback
Also note that as he spins the dials, he sees no feedback showing the current time setting. At 370°, is it 21 days or 28? The interface doesn’t tell him. If he’s really having to push the AI to its limits, precision will be important. Better would be to show the time value he’s set so he could tweak it as needed, and then let that count down as time remaining while the animation progresses.
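For instance (assuming, purely for illustration, that one full revolution of the outer ring equals 28 days):

```typescript
// Hypothetical dial feedback; the ratio of degrees to days is invented.
const DAYS_PER_REVOLUTION = 28;

function dialAngleToDays(totalDegrees: number): number {
  return Math.round((totalDegrees / 360) * DAYS_PER_REVOLUTION);
}

// Rendered live beside the dial, so 370° is unambiguously "29 days",
// and Matt can nudge the ring until it reads exactly what he intends.
function feedbackLabel(totalDegrees: number): string {
  return `${dialAngleToDays(totalDegrees)} days`;
}
```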
A subtlety of effectiveness: why not just make the solitary confinement pass instantly for Matt? Well, recall he is trying to ride a line of torture without having the AI wig out, so he should have some feedback as to the duration of what he’s putting her through. If it were always instant, he couldn’t tell the difference between three weeks and three millennia if he had accidentally entered the wrong value. But if real-world time is passing, and it’s taking longer than he thinks it should, he can intervene and stop the fast-forwarding.
That, or of course, show feedback while he’s dialing.
Near the end of the episode we learn that a police officer is whimsically torturing another Cookie: he sets the time ratio to “1000 years per minute” and then just lets it run while he leaves for Christmas break. The current time ratio is absent from the screen; it should be displayed, and a control should be provided to change it.
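To put that ratio in perspective, assume (just for illustration) that the break lasts two weeks: 1,000 years per minute × 1,440 minutes per day × 14 days comes to roughly 20 million subjective years. That’s the kind of number the display should be screaming about.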
Add psychological state feedback
There is one “improvement” that does not pertain to real world time controls, and that’s the invisible effect of what’s happening to the AI during the fast forward. In the episode Matt explains that, like any good torturer, “The trick of it is to break them without letting them snap completely,” but while time is passing he has no indicators as to the mental state of the sentience within. Has she gone mad? (Or “wigged out” as he says.) Does he need to ease off? Give her a break?
I would add trendline indicators or sparklines showing things like:
Stress
Agitation
Valence of speech
I would have these trendlines highlight when any of the variables are getting close to known psychological limits. Then as time passes, he can watch the trends to know if he’s pushing things too far and ease off.
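A minimal sketch of that highlighting logic, with invented metrics, limits, and warning margin:

```typescript
// Hypothetical psychological-state alerts; all values invented for illustration.
type Metric = "stress" | "agitation" | "speechValence";

const LIMITS: Record<Metric, number> = {
  stress: 0.9,
  agitation: 0.85,
  speechValence: -0.7, // strongly negative speech is the danger sign
};

function shouldHighlight(metric: Metric, value: number): boolean {
  const limit = LIMITS[metric];
  // Highlight when within 10% of the known psychological limit.
  return limit >= 0 ? value >= limit * 0.9 : value <= limit * 0.9;
}
```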
We see a completely new mode for the Eye in the Dark Dimension. With a flourish of his right hand over his left forearm, a band of green lines begins orbiting his forearm just below his wrist. (Another orbits just below his elbow, just off-camera in the animated gif.) The band signals that Strange has set this point in time as a “savepoint,” like in a video game. From that point forward, when he dies, time resets and he is returned here, alive and well, though he and anyone else in the loop are aware that it happened.
In the scene he’s confronting a hostile god-like creature on its own mystical turf, so he dies a lot.
An interesting moment happens when Strange is hopping from the blue-ringed planetoid to the one close to the giant Dormammu face. He glances down at his wrist, making sure that his savepoint was set. It’s a nice tell, letting us know that Strange is nervous about facing the giant, Galactus-sized primordial evil that is Dormammu. This nervousness ties right into the analysis of this display. If we changed the design, we could put him more at ease when using this life-critical interface.
Initiating gesture
The initiating gesture doesn’t read as “set a savepoint.” That doesn’t show itself as a problem in this scene, but if the gesture had some sort of semantic meaning, it would be easier for Strange to recall and perform correctly. Maybe if his wrist twist transitioned from splayed fingers to pointing with his index finger at his wrist…OK, that’s a little too on the nose, so maybe…toward the ground, it would help symbolize the here-and-now that is the savepoint. It would be easier for Strange to recall, and he could feel assured that he’d done the right thing.
I have questions about the extents of the time loop effect. Is it the whole Dark Dimension? Is it also Earth? Is it the Universe? Is it just a sphere, like the other modes of the Eye? How does he set these? There’s not enough information in the movie to backworld this, but unless the answer is “it affects everything,” there seem to be some variables missing from the initiating gesture.
Savepoint-active signal
But where the initiating gesture doesn’t appear to be a problem in the scene, the wrist-glance indicates that the display is. Note that, other than being on the left forearm instead of the right, the bands look identical to the ones in the Tibet and Hong Kong modes. (Compare the Tibet screenshot below.) If Strange is relying on the display to ensure that his savepoint was set, having it look identical is not as helpful as it would be if the visual was unique. “Wait,” he might think, “Am I in the right mode, here?”
In a redesign, I would select an animated display that was not a loop, but an indication that time was passing. It can’t be as literal as a clock of course. But something that used animation to suggest time was progressing linearly from a point. Maybe something like the binary clock from Mission to Mars (see below), rendered in the graphic language of the Eye. Maybe make it base-3 to seem not so technological.
Seeing a display that is still, on invocation—that becomes animated upon initialization—would mean that all he has to do is glance to confirm the unique display is in motion. “Yes, it’s working. I’m in the Groundhog Day mode, and the savepoint is set.”
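As a sketch of what that display might encode (assuming, purely for illustration, that it ticks off elapsed seconds since the savepoint in base-3 digits):

```typescript
// Hypothetical base-3 clock for the savepoint display; invented throughout.
function toBase3Digits(elapsedSeconds: number, places = 8): number[] {
  const digits: number[] = [];
  let n = Math.floor(elapsedSeconds);
  for (let i = 0; i < places; i++) {
    digits.unshift(n % 3); // lowest digit ticks every second
    n = Math.floor(n / 3);
  }
  return digits; // e.g., 42 seconds -> [0, 0, 0, 0, 1, 1, 2, 0]
}
```

Because the low-order digits change constantly, a single glance would confirm both that he’s in the right mode and that time is running.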
As mentioned, Johnny in the last phone conversation in the van is not talking to the person he thinks he is. The film reveals Takahashi at his desk, using his hand as if he were a sock puppeteer—but there is no puppet. His desk is emitting a grid of green light to track the movement of his hand and arm.
The Make It So chapter on gestural interfaces suggests Takahashi is using his hand to control the mouth movements of the avatar. I’d clarify this a bit. Lip synching by human animators is difficult even when not done in real time, and while it might be possible to control the upper lip with four fingers, one thumb is not enough to provide realistic motion of the lower lip.
Students in the Starship Troopers academy have access to desktop computing environments during class, including a drawing and animation program called “Fedpaint” that has a number of very forward-looking features.
The screen is housed in a metal bezel that is attached to the desk and can be left flat or angled slightly per the user’s preference. A few hardware buttons sit in a row at the bottom of the bezel. (Quick industrial design aside: those buttons belong at the top of the bezel.) The input device is a stylus. (Styli had been in use in personal digital assistants for over a decade when the film came out, but I don’t think they had been sold as the primary input for a PC.) When we first see Johnny using the computer, he is ignoring his citizenship lesson and using Fedpaint instead.
The main part of the interface is a canvas. Running along the left and bottom edges are a complex tool palette and color picker vaguely reminiscent of Windows 3.0 WIMP applications. It’s easy to tell which category and tool are selected. (Which color is selected is unclear.) I’d even say that most of the icons, while a little ham-handed and completely lacking labels, convey what they would do pretty clearly. The tools also seem to be clustered logically, with categories across the top left, tools in the middle left, a color palette in the lower left corner, and file operations across the bottom. That’s some reasonable, and reasonably convincing, layout design for a movie interface. Nowadays a designer might argue to hide the menus when not in use to maximize the canvas real estate, but the most common OS paradigm at the time was Windows 95, and the most advanced paint program, i.e. Photoshop, looked like this. (Major thanks to Hongkiat for keeping their museum of Photoshop interfaces.)
Using the stylus, Johnny sketches a flirty animation for Carmen. He draws each of their profiles in white lines. He then adds some flat color and animates the profiles (not shown onscreen) such that the faces get closer, their eyes close, and their mouths open in readiness of a kiss. He then sends it to her.
On her desk she receives a notification. (We don’t get to see it. Was she already in the program? Did the notification jump her there?) Carmen grabs her stylus and responds by adding to the animation. She sends the file back to him. He opens it and it plays automatically. In her version of the animation, the profiles approach as before, but as they near for a kiss, the female profile blows a bubble gum bubble that gets so large it pops and covers the face of the male.
What’s nice about this interface is that the narrative seems to have driven some innovation in its design. It’s half gee-whiz-circa-1997, of course, but half character development, as it tells us that Johnny likes Carmen, and Carmen is a bit playfully stand-offish in response. To make this work well narratively, communication of the animation back and forth had to be seamless, and that seems to be the reason we see the communication tools built right into the interface. If ever there was a case for why scenario-driven design for personas works, this is it.
What’s frustrating is that they skipped over the hard part. How does Johnny apply the color? A paint bucket tool is a reasonable guess, but it’s also error-prone. How did he specify the number of frames and their speed? How did he ensure that the motion felt relatively smooth and communicative? Anyone who’s worked with an animation program knows that these aren’t trivial matters, and Starship Troopers took the narrative route: probably best for the story, but less useful for my analysis purposes.
Still, the stylus-driven direct manipulation, the unique layout, and easy, social sharing were big innovations for the time. I don’t know that there’s much to learn from this today, since our OS metaphors have advanced enough to make this seem quaint at best, and social integration is now the norm. But credit where it’s due, this interface was ahead of its time.