The Thanatorium: A beneficiary’s experience

The thanatorium is a speculative service for assisted suicide in Soylent Green. Suicide and death are not easy topics and I will do my best to address them seriously. Let me first take a moment to direct anyone who is considering or dealing with suicide to please stop reading this and talk to someone about it. I am unqualified to address—and this blog is not the place to work through—such issues.

There are four experiences to look at in the interface and service design of the Thanatorium: The patient, their beneficiaries, the usher, and the attendants to the patient. This post is about the least complicated of the bunch, the beneficiaries.

Thorn’s experience

We have to do a little extrapolation here because the way we see it in the movie is not the way we imagine it would work normally. What we see is Thorn entering the building and telling staff there to take him to Sol. He is escorted to an observation room labeled “beneficiaries only” by an usher. (Details about the powerful worldbuilding present in this label can be found in the prior post.) Sol has already drunk the “hemlock” drink by the time Thorn enters this room, so Sol is already dying and the robed room attendants have already left.

Aaand I just noticed that the walls are the same color as the Soylent. Ewww.

This room has a window view of the “theater” proper, with an interface mounted just below the window. At the top of this interface is a mounted microphone. Directly below is an intercom speaker beside a large status alert labeled SPEAKING PERMITTED. When we first see the panel this indicator is off. At the bottom is a plug for headphones to the left, a slot for a square authorization key, and in the middle, a row of square, backlit toggle buttons labeled PORTAL, EFFECTS, CHAMBER 2, AUDIO, VISUAL, and CHAMBER 1. When Sol is mid-show, EFFECTS and VISUAL are the only buttons that are lit.

When the usher closes the viewing window, explaining that it’s against policy for beneficiaries to view the ceremony, Thorn…uh…chokes him in order to persuade him to let him override the policy.

Persuasion.

“Persuaded,” the usher puts his authorization key back in the slot. The window opens again. Thorn observes the ceremony in awe, having never seen the beautiful Earth of Sol’s youth. He mutters “I didn’t know” and “How could I?” as he watches. Sol tries weakly to tell Thorn something, but the speaker starts glitching, with the SPEAKING PERMITTED indicator flashing on and off. Thorn, helpfully, pounds his fist on the panel and demands that the usher do something to fix it. The usher gives Thorn wired earbuds and Thorn continues his conversation. (Extradiegetically, is this so they didn’t have to bother with the usher’s overhearing the conversation? I don’t understand this beat.) The SPEAKING PERMITTED light glows a solid red and they finish their conversation.

Yes, that cable jumps back and forth like that in the movie during the glitch. It was a simpler time.

Sol dies, and the lights come up in the chamber. Two assistants come to push the gurney along a track through a hidden door. Some mechanism in the floor catches the gurney, and the cadaver is whisked away from Thorn’s sight.

Regular experience?

So that’s Thorn’s corrupt, thuggish cop experience of the thanatorium. Let’s now make some educated guesses about what this might imply for the regular, non-thug experience for beneficiaries.

  1. The patient and beneficiaries enter the building and are greeted by staff.
  2. They wait in queue in the lobby for their turn.
  3. The patient is taken by attendants to the “theater” and the beneficiaries taken by the usher to the observation room.
  4. Beneficiaries witness the drinking of the hemlock.
  5. The patient has a moment to talk with the beneficiaries and say their final farewells.
  6. The viewing window is closed as the patient watches the “cinerama” display and dies. The beneficiaries wait quietly in the observation room with the usher.
  7. The viewing window is opened as they watch the attendants wheel the body into the portal.
  8. They return to the lobby to sign some documents for benefits and depart.

So, some UX questions/backworlding

We have to backworld some of the design rationales involved to ground critique and design improvements. After all, design is the optimization of a system for a set of effects, and we want to be certain about what effects we’re targeting. So…

Why would beneficiaries be separated from the patient?

I imagine that the patient might take comfort from holding the hands or being near their loved ones (even if that set didn’t perfectly overlap with their beneficiaries). So why is there a separate viewing room? There are a handful of reasons I can imagine, only one of which is really satisfying.

Maybe it’s to prevent the spread of disease? Certainly given our current multiple pandemics, we understand the need for physical separation in a medical setting. But the movie doesn’t make any fuss about disease being a problem (though with 132,000 people crammed into every square mile of the New York City metropolitan area you’d figure it would be), and in Sol’s case, there’s zero evidence in the film that he’s sick. Why does the usher resist the request from Thorn if this was the case? And why wouldn’t the attendants be in some sort of personal protective gear?

Maybe it’s to hide the ugly facts of dying? Real death is more disconcerting to see than most people are familiar with (take the death rattle as one example) and witnessing it might discourage other citizens from opting in themselves. But, we see that Sol just passes peacefully from the hemlock drink, so this isn’t really at play here.

Maybe it’s to keep the cinerama experience hidden? It’s showing pictures of an old, bountiful Earth that (in the diegesis) no longer exists. Thorn says in the movie that he’s too young to know what “old Earth” was like, so maybe this society wants to prevent false hope? Or maybe to prevent rioting, should the truth of How Far We’ve Fallen get out? Or maybe it’s considered a reward for patients opting in to suicide, thereby creating a false scarcity to further incentivize people to opt in themselves? None of this is super compelling, and we have to ask, why does the usher give in and open the viewport if any of this was the case?

That blue-green in the upper left of this still is the observation booth.

So, maybe it’s to prevent beneficiaries from trying to interfere with the suicide. This society would want impediments against last-minute shouts of, “Wait! Don’t do it!” There’s some slight evidence against this, as when Sol is drinking the hemlock, the viewing port is wide open, so beneficiaries might have pounded on the window if this was standard operating procedure. But its being open might have been an artifact of Sol’s having walked in without any beneficiaries. Maybe the viewport is ordinarily closed until after the hemlock, opened for final farewells, closed for the cinerama, and opened again to watch as the body is sped away?

Ecstasy Meat

This rationale supports another, more horrible argument. What if the reason is that Soylent (the company) wants the patient to have an uninterrupted dopamine and serotonin hit at the point of dying, so those neurotransmitters are maximally available in the “meat” before processing? (Like how antibiotics get passed along to meat-eaters in industrialized food today.) It would explain why they ask Sol for his favorite color in the lobby. Yes it is for his pleasure, but not for humane reasons. It’s so he can be at his happiest at the point of death. Dopamine and serotonin would make the resulting product, Soylent Green, more pleasurable and addictive to consumers. That gives an additional rationale as to why beneficiaries would be prevented from speaking: it would distract from patients’ intense, pleasurable experience of the cinerama.

A quickly-comped up speculative banner ad reading “You want to feel GOOD GOOD. Load up on Soylent Green today!”
Now, with more Clarendon.

For my money, the “ecstasy meat” rationale reinforces and makes worse the movie’s Dark Secret, so I’m going to go with that. Without this rationale, I’d say rewrite the scene so beneficiaries are in the room with the patient. But with this rationale, let’s keep the rooms separate.

Beneficiary interfaces

Which leads us to rethinking this interface.


A first usability note is that the SPEAKING PERMITTED indicator is very confusing. The white text on a black background looks like speaking is, currently, permitted. But then the light behind it illuminates and I guess, then speaking is permitted? But wait, the light is red, so does that mean it’s not permitted, or is it? And then adding to the confusion, it blinks. Is that the glitching, or some third state? Can we send this to its own interface thanatorium? So to make this indicator more usable, we could do a couple of things.

  • Put a ring of lights around the microphone and grill. When illuminated, speaking is permitted. This presumes that the audience can infer what these lights mean, and isn’t accessible to unsighted users, but I don’t think the audio glitch is a major plot point that needs that much reinforcing; see above. If the execs just have to have it crystal clear, then you could…
  • Have two indicators, one reading SPEAKING PERMITTED and another reading SILENCE PLEASE, with one or the other always lit. If you had to do it on the cheap, they don’t need to be backlit panels, but just two labeled indicator lamps would do.

And no effing blinking.

Thorn voice: NO EFFING BLINKING!

I think part of the affective purpose of the interface is to show how cold and mechanistic the thanatorium’s treatment of people is. To keep that, you could add another indicator light on the panel labeled somewhat cryptically, PATIENT. Have it illuminated until Sol passes, and then have a close up shot when it fades, indicating his death.

Ah, yes, good to have a reminder that’s why he’s a critic and not a working FUI designer.

A note on art direction. It would be in Soylent’s and our-real-world interest to make this interface feel as humane as possible. Maybe less steel and backlit toggles? Then again, this world is operating on fumes, so they would make do with what’s available. So this should also feel a little more strung together, maybe with some exposed wires held together with electrical tape, and more tape holding the audio jack in place.

Last note on the accommodations. What are the beneficiaries supposed to do while the patient is watching the cinerama display? Stand there and look awkward? Let’s get some seats in here and pipe the patient’s selection of music in. That way they can listen and think of the patient in the next room.

If you really want it to feel extradiegetically heartless, put a clock on the wall by the viewing window that beneficiaries can check.


Once we simplify this panel and make the room make design sense, we have to figure out what to do with the usher’s interface elements that we’ve just removed, and that’s the next post.

8 Reasons The Voight-Kampff Machine is shit (and a redesign to fix it)

Distinguishing replicants from humans is a tricky business. Since they are indistinguishable biologically, it requires an empathy test, during which the subject hears empathy-eliciting scenarios and is watched carefully for telltale signs such as, “capillary dilation—the so-called blush response…fluctuation of the pupil…involuntary dilation of the iris.” To aid the blade runner in this examination, they use a portable machine called the Voight-Kampff machine, named, presumably, for its inventors.

The device is the size of a thick laptop computer, and rests flat on the table between the blade runner and subject. When the blade runner prepares the machine for the test, they turn it on, and a small adjustable armature rises from the machine, the end of which is an intricate piece of hardware, housing a powerful camera, glowing red.

The blade runner trains this camera on one of the subject’s eyes. Then, while reading from the playbook of scenarios, they keep watch on a large monitor, which shows a magnified image of the subject’s eye. (Ostensibly, anyway. More on this below.) A small bellows on the subject’s side of the machine raises and lowers. On the blade runner’s side of the machine, a row of lights reflect the volume of the subject’s speech. Three square, white buttons sit to the right of the main monitor. In Leon’s test we see Holden press the leftmost of the three, and the iris in the monitor becomes brighter, illuminated from some unseen light source. The purpose of the other two square buttons is unknown. Two smaller monochrome monitors sit to the left of the main monitor, showing moving but otherwise inscrutable forms of information.

In theory, the system allows the blade runner to more easily watch for the minute telltale changes in the eye and blush response, while keeping a comfortable social distance from the subject. Substandard responses reveal a lack of empathy and thereby a high probability that the subject is a replicant. Simple! But on review, it’s shit. I know this is going to upset fans, so let me enumerate the reasons, and then propose a better solution.

-2. Wouldn’t a genetic test make more sense?

If the replicants are genetically engineered for short lives, wouldn’t a genetic test make more sense? Take a drop of blood and look for markers of incredibly short telomeres or something.

-1. Wouldn’t an fMRI make more sense?

An fMRI would reveal empathic responses in the inferior frontal gyrus, or cognitive responses in the ventromedial prefrontal gyrus. (The brain structures responsible for these responses.) Certainly more expensive, but more certain.

0. Wouldn’t a metal detector make more sense?

If you are testing employees to detect which ones are the murdery ones and which ones aren’t, you might want to test whether they are bringing a tool of murder with them. Because once they’re found out, they might want to murder you. This scene should be rewritten such that Leon leaps across the desk and strangles Holden, IMHO. It would make him, and other blade runners, seem much more feral and unpredictable.

(OK, those aren’t interface issues but seriously wtf. Onward.)

1. Labels, people

Controls need labels. Especially when the buttons have no natural affordance and the costs of experimentation to discover the function are high. Remembering the functions of unlabeled controls adds to the cognitive load for a user who should be focusing on the person across the table. At least an illuminated button helps signal the state, so that, at least, is something.

2. It should be less intimidating

The physical design is quite intimidating: The way it puts a barrier in between the blade runner and subject. The fact that all the displays point away from the subject. The weird intricacy of the camera, its ominous HAL-like red glow. Regular readers may note that the eyepiece is red-on-black and pointy. That is to say, it is aposematic. That is to say, it looks evil. That is to say, intimidating.

I’m no emotion-scientist, but I’m pretty sure that if you’re testing for empathy, you don’t want to complicate things by introducing intimidation into the equation. Yes, yes, yes, the machine works by making the subject feel like they have to defend themselves from the accusations in the ethical dilemmas, but that stress should come from the content, not the machine.

2a. Holden should be less intimidating and not tip his hand

While we’re on this point, let me add that Holden should be less intimidating, too. When Holden tells Leon that a tortoise and a turtle are the same thing, (Narrator: They aren’t) he happens to glance down at the machine. At that moment, Leon says, “I’ve never seen a turtle,” a light shines on the pupil and the iris contracts. Holden sees this and then gets all “ok, replicant” and becomes hostile toward Leon.

In case it needs saying: If you are trying to tell whether the person across from you is a murderous replicant, and you suddenly think the answer is yes, you do not tip your hand and let them know what you know. Because they will no longer have a reason to hide their murderyness. Because they will murder you, and then escape, to murder again. That’s like, blade runner 101, HOLDEN.

3. It should display history 

The glance moment points out another flaw in the interface. Holden happens to be looking down at the machine at that moment. If he wasn’t paying attention, he would have missed the signal. The machine needs to display the interview over time, and draw his attention to troublesome moments. That way, when his attention returns to the machine, he can see that something important happened, even if it’s not happening now, and tell at a glance what the thing was.

4. It should track the subject’s eyes

Holden asks Leon to stay very still. But people are bound to involuntarily move as their attention drifts to the content of the empathy dilemmas. Are we going to add noncompliance-guilt to the list of emotional complications? Use visual recognition algorithms and high-resolution cameras to just track the subject’s eyes no matter how they shift in their seat.

5. Really? A bellows?

The bellows doesn’t make much sense either. I don’t believe it could, at the distance it sits from the subject, help detect “capillary dilation” or “ophthalmological measurements”. But it’s certainly creepy and Terry Gilliam-esque. It adds to the pointless intimidation.

6. It should show the actual subject’s eye

The eye color that appears on the monitor (hazel) matches neither Leon’s (a striking blue) nor Rachel’s (a rich brown). Hat tip to Typeset in the Future for this observation. It’s a great review.

7. It should visualize things in ways that make it easy to detect differences in key measurements

Even if the inky, dancing black blob is meant to convey some sort of information, the shape is too organic for anyone to make meaningful readings from it. Like seriously, what is this meant to convey?

The spectrograph to the left looks a little more convincing, but it still requires the blade runner to do all the work of recognizing when things are out of expected ranges.

8. The machine should, you know, help them

The machine asks its blade runner to do a lot of work to use it. This is visual work and memory work and even work estimating when things are out of norms. But this is all something the machine could help them with. Fortunately, this is a tractable problem, using the mighty powers of logic and design.

Pupillary diameter

People are notoriously bad at estimating the sizes of things by sight. Computers, however, are good at it. Help the blade runner by providing a measurement of the thing they are watching for: pupillary diameter. (n.b. The script speaks of both iris constriction and pupillary diameter, but these are the same thing.) Keep it convincing and looking cool by having this be an overlay on the live video of the subject’s eye.

So now there’s some precision to work with. But as noted above, we don’t want to burden the user’s memory with having to remember stuff, and we don’t want them to just be glued to the screen, hoping they don’t miss something important. People are terrible at vigilance tasks. Computers are great at them. The machine should track and display the information from the whole session.

Note that the display illustrates radius, but displays diameter. That buys some efficiencies in the final interface.

Now, with the data-over-time, the user can glance to see what’s been happening and a precise comparison of that measurement over time. But, tracking in detail, we quickly run out of screen real estate. So let’s break the display into increments with differing scales.

There may be more useful increments, but microseconds and seconds feel pretty convincing, with the leftmost column compressing gradually over time to show everything from the beginning of the interview. Now the user has a whole picture to look at. But this still burdens them with noticing when these measurements are out of normal human ranges. So, let’s plot the threshold, and note when measurements fall outside of it. In this case, it feels right that replicants display less than normal pupillary dilation, so it’s a lower-boundary threshold. The interface should highlight when the measurement dips below this.
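That lower-boundary check is simple enough to sketch in a few lines. All values here are hypothetical (the film gives no actual numbers), and `flag_subnormal` is a name I made up for illustration:

```python
# Flag the moments in a session where measured pupillary diameter
# dips below the lower boundary of the expected human response range.
# The threshold is invented for illustration; the film gives no numbers.

HUMAN_LOWER_BOUND_MM = 3.2  # hypothetical lower threshold, in millimeters

def flag_subnormal(diameters_mm):
    """Return the sample indices where the pupil response falls below the human norm."""
    return [i for i, d in enumerate(diameters_mm) if d < HUMAN_LOWER_BOUND_MM]

# A toy session: the subject's pupil barely responds at samples 2 and 3.
session = [4.1, 3.8, 3.0, 2.9, 3.6, 4.0]
print(flag_subnormal(session))  # → [2, 3]
```

The machine would do this continuously, so the blade runner only has to glance at the highlighted spans rather than eyeball every sample.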

Blush

I think that covers everything for the pupillary diameter. The other measurement mentioned in the dialogue is capillary dilation of the face, or the “so-called blush response.” As we did for pupillary diameter, let’s also show a measurement of the subject’s skin temperature over time as a line chart. (You might think skin color is a more natural measurement, but for replicants with a darker skin tone than our two pasty examples Leon and Rachel, temperature via infrared is a more reliable metric.) For visual interest, let’s show thumbnails from the video. We can augment the image with degree-of-blush. Reduce the image to high contrast grayscale, use visual recognition to isolate the face, and then provide an overlay to the face that illustrates the degree of blush.

But again, we’re not just looking for blush changes. No, we’re looking for blush compared to human norms for the test. It would look different if we were looking for more blushing in our subject than humans, but since the replicants are less empathetic than humans, we would want to compare and highlight measurements below a threshold. In the thumbnails, the background can be colored to show the median for expected norms, to make comparisons to the face easy. (Shown in the drawing to the right, below.) If the face looks too pale compared to the norm, that’s an indication that we might be looking at a replicant. Or a psychopath.

So now we have solid displays that help the blade runner detect pupillary diameter and blush over time. But it’s not that any diameter changes or blushing is bad. The idea is to detect whether the subject has less of a reaction than norms to what the blade runner is saying. The display should annotate what the blade runner has said at each moment in time. And since human psychology is a complex thing, it should also track video of the blade runner’s expressions as well, since, as we see above, not all blade runners are able to maintain a poker face. HOLDEN.

Anyway, we can use the same thumbnail display of the face, without augmentation. Below that we can display the waveform (because they look cool), and speech-to-text the words that are being spoken. To ensure that the blade runner’s administration of the test is not unduly influencing the results, let’s add an overlay of the ideal intonation targets. Despite evidence in the film, let’s presume Holden is a trained professional, and he does not stray from those targets, so let’s skip designing the highlight and recourse-for-infraction for now.

Finally, since they’re working from a structured script, we can provide a “chapter” marker at the bottom for easy reference later.

Now we can put it all together, and it looks like this. One last thing we can do to help the blade runner is to highlight when all the signals indicate replicant-ness at once. This signal can’t be too much, or replicants being tested would know from the light on the blade runner’s face when their jig is up, and try to flee. Or murder. HOLDEN.

For this comp, I added a gray overlay to the column where pupillary and blush responses both indicated trouble. A visual designer would find some more elegant treatment.
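That gray-overlay logic is just an AND across the per-signal flags. A minimal sketch, with invented thresholds and sample data (again, nothing here comes from the film):

```python
# Highlight a time column only when BOTH signals indicate trouble:
# pupillary response below the human norm AND facial temperature
# (the blush proxy) below the expected median. All values illustrative.

PUPIL_LOWER_MM = 3.2   # hypothetical human lower bound for pupil diameter
BLUSH_LOWER_C = 34.0   # hypothetical lower bound for facial temperature

def highlight_columns(pupil_mm, face_temp_c):
    """Return one boolean per sample: True where both signals flag replicant-ness."""
    return [p < PUPIL_LOWER_MM and t < BLUSH_LOWER_C
            for p, t in zip(pupil_mm, face_temp_c)]

pupil = [4.0, 3.0, 2.8, 3.9]
temp  = [35.1, 33.5, 33.2, 35.0]
print(highlight_columns(pupil, temp))  # → [False, True, True, False]
```

Requiring both signals keeps a single noisy reading from tripping the highlight, which matters when a false positive could end with the blade runner tipping his hand. HOLDEN.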

If we were redesigning this from scratch, we could specify a wide display to accommodate this width. But if we are trying to squeeze this display into the existing prop from the movie, here’s how we could do it.

Note the added labels for the white squares. I picked some labels that would make sense in the context. “Calibrate” and “record” should be obvious. The idea behind “mark” is an easy button for the blade runner to press when they see something that looks weird, like when doctors manually annotate cardiograph output.

Lying to Leon

There’s one more thing we can add to the machine that would help out, and that’s a display for the subject. Recall the machine is meant to test for replicant-ness, which happens to equate to murdery-ness. A positive result from the machine needs to be handled carefully so what happens to Holden in the movie doesn’t happen. I mentioned making the positive-overlay subtle above, but we can also make a placebo display on the subject’s side of the interface.

The visual hierarchy of this should make the subject feel like its purpose is to help them, but the real purpose is to make them think that everything’s fine. Given the script, I’d say a teleprompt of the empathy dilemma should take up the majority of this display. Oh, they think, this is to help me understand what’s being said, like a closed caption. Below the teleprompt, at a much smaller scale, a bar at the bottom is the real point.

On the left of this bar, a live waveform of the audio in the room helps the subject know that the machine is testing things live. In the middle, we can put one of those bouncy fuiget displays that clutters so many sci-fi interfaces. It’s there to be inscrutable, but convince the subject that the machine is really sophisticated. (Hey, a diegetic fuiget!) Lastly, and this is the important part, an area shows that everything is “within range.” This tells the subject that they can be at ease. This is good for the human subject, because they know they’re innocent. And if it’s a replicant subject, this false comfort protects the blade runner from sudden murder. This text might flicker or change occasionally to something ambiguous like “at range,” to convey that it is responding to real-world input, but it would never change to something incriminating.

This way, once the blade runner has the data to confirm that the subject is a replicant, they can continue to the end of the module as if everything was normal, thank the replicant for their time, and let them leave the room believing they passed the test. Then the results can be sent to the precinct and authorizations returned so retirement can be planned with the added benefit of the element of surprise.

OK

Look, I’m sad about this, too. The Voight-Kampff machine is cool. It fits very well within the art direction of the Blade Runner universe. This coolness burned the machine into my memory when I saw this film the first dozen times, but despite that, it just doesn’t stand up to inspection. It’s not hopeless, but does need a lot of thinkwork and design to make it really fit to task, and convincing to us in the audience.

Sleeping pods

Use

Joe and Rita climb into the pods and situate themselves comfortably. Officer Collins and his assistant approach and insert some necessary intravenous chemicals. We see two canisters, one empty (for waste?) and one filled with the IV fluid. To each side of the subject’s head is a small raised panel with two lights (amber and ruby) and a blue toggle switch. None of these are labeled. The subjects fall into hibernation and the lids close.

Collins and his assistant remove a cable labeled “MASTER” from the interface and close a panel which seals the inputs and outputs. They then close a large steel door, stenciled “TOP SECRET,” to the hibernation chamber.


The external interface panel includes:

  • A red LED display
  • 3 red safety cover toggle switches labeled “SET 1” “SET 2” and “SET 3.”
  • A 5×4 keypad
    • 0-9 numbers
    • Letters A–F
    • Four unlabeled white buttons

500 years later, after the top secret lab is destroyed, the pods become part of the mountains of garbage that just pile up. Sliding down an avalanche of the stuff, the pods wind up in a downtown area. Joe’s crashes through Frito’s window. At this moment the pod decides enough is enough and it wakes him. Clamps around the edge unlock. The panel cover has fallen off somewhere, and the LED display blinks the text, “unfreezing.” Joe drowsily pushes the lid open and gets out.

Its purpose in the narrative

This is a “segue” interface, mostly useful in explaining how Joe and Rita are transported safely 500 years in the future. At its base, all it needs to convey is:

  • Scienciness (lights and interfaces, check)
  • See them pass into sleep (check)
  • See how they are kept safe (rugged construction details, clamped lid, check)
  • See the machine wake them up (check)

Is it ideal?

The ergonomics are nice. A comfortable enough coffin to sleep in. And it seems…uh…well engineered, seeing as how it winds up lasting 500 times its intended lifespan and takes some pretty massive abuse as it slides down the mountains of garbage and through Frito’s window into his apartment. But that’s where the goodness ends. It looks solid enough to last a long long time. But there are questions.

From Collins’ point of view:

  • Why was it engineered to last 500 years, but you know, fail to have any of its interior lights or toggle switches labeled? Or have something more informative on the toggles than “SET 1”?
  • How on earth did they monitor the health of the participants over time? (Compare Prometheus’ hibernation screens.) Did they just expect it to work perfectly? Not a lot of comfort to the subjects. Did they monitor it remotely? Why didn’t that monitoring screen arouse the suspicions of the foreclosers?
  • How are subjects roused? If the procedure is something that Collins just knows, what if something happens to him? That information should be somewhere on the pod with very clear instructions.
  • How does it gracefully degrade as it runs out of resources (power, water, nutrition, air, waste storage or disposal) to keep its occupants alive? What if the appointed person doesn’t answer the initial cry for help?

From the hibernators’ point of view:

  • How do the participants indicate their consent to go into hibernation? Can this be used as an involuntary prison?
  • How do they indicate consent to be awakened? (Not an easy problem, but Passengers illustrates why it’s necessary.)
  • What if they wake early? How do they get out or let anyone know to release them?
  • Why does the subject have to push the lid if they’re going to be weak and woozy when they waken? Can’t it be automatic, like the hibernation lids in Aliens?
  • How does the sleeper know it’s safe to get out? Certainly Joe and Rita expected to wake up in the military laboratory. But while we’re putting in the effort to engineer it to last 500 years, maybe we could account for the possibility that it’s somewhere else.
  • Can’t you put me at ease in the disorientating hypnopompic phase? Maybe some soothing graphic on the interior lid? A big red label reading, “DON’T PANIC” with an explanation?
  • Can you provide some information to help orient me, like where I am and when I am? Why does Joe have to infer the date from a magazine cover?

From a person-in-the-future point of view

  • How do the people nearby know that it contains living humans? That might be important for safekeeping, or even to take care in case the hibernators are carrying some disease to which the population has lost resistance.
  • How do we know if they’ve got some medical conditions that will need specialized care? What food they eat? Whether they are dangerous?
  • Can we get a little warning so we can prepare for all this stuff?

Is the interface believable?

Oh yes. Prototypes tend to be the minimum viable thing, and usability lags far behind basic utility. Plus, this is the military, known for expecting its people to be tough without the need for civilian niceties. Plus, Collins didn’t seem too big on “details.” So very believable.


Note that this doesn’t equate to the thing itself being believable. I mean, it was an experiment meant to last only a year. How did it have the life support resources—including power—to run for 500 times the intended duration? What brown fluid has the 273,750,000 calories needed to sustain Luke Wilson’s physique for 500 years? (Maya Rudolph lucks out needing “only” 219,000,000.) How did it keep them alive and prevent long-term bedridden problems, like pressure sores, pneumonia, constipation, contractures, etc. etc.?
See? Comedy is hard to review.
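For what it’s worth, those calorie figures are just back-of-envelope arithmetic. Here’s a quick sketch of the math (the 1,500 and 1,200 kcal/day baselines are my assumptions, not anything stated in the film):

```python
# Back-of-envelope check of the 500-year calorie figures above.
# Daily baselines (kcal) are assumptions, not canon.
DAYS_PER_YEAR = 365
YEARS = 500

def lifetime_kcal(kcal_per_day, years=YEARS):
    """Total kilocalories needed to sustain someone for `years` years."""
    return kcal_per_day * DAYS_PER_YEAR * years

print(f"{lifetime_kcal(1500):,}")  # 273,750,000 (the Wilson figure)
print(f"{lifetime_kcal(1200):,}")  # 219,000,000 (the Rudolph figure)
```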

Fight US Idiocracy: Donate to close races

Reminder: Every post in this series includes some U.S.-focused calls to action for readers to help reverse the current free fall into our own Idiocracy. In the last post I provided information about how to register to vote in your state. DO THAT.
If you accidentally missed the deadline (and triple check, because many states have some way to register right up to and including election day, which is 06 NOV this year), there are still things you can do. Sadly, one of the most powerful things feels crass: donating money to close campaigns. Much of this money is spent reaching out to undecided voters via media channels, and that means the more money, the more reach.

close_districts.png
ActBlue_logo.png


There are currently 68 highly competitive seats—those considered a toss-up between the two parties or leaning slightly toward one. You can look at the close campaigns and donate directly, or you can donate to ActBlue and let that organization make the call. That’s what I did. Just now. Please join me.

The Cookie: Matt’s controls

When using the Cookie to train the AI, Matt has a portable translucent touchscreen by which he controls some of virtual Greta’s environment. (Sharp-eyed viewers of the show will note this translucent panel is the same one he uses at home in his revolting virtual wingman hobby, but the interface is completely different.)

Black_Mirror_Cookie_18.png

The left side of the screen shows a hamburger menu, the Set Time control, a head, some gears, a star, and a bulleted list. (They’re unlabeled.) The main part of the screen is a scrolling stack of controls including Simulated Body, Control System, and Time Adjustment. Each has a large icon, a header with “Full screen” to the right, a subheader, and a time indicator. This could be redesigned to be much more compact and context-rich for expert users like Matt. It’s seen for maybe half a second, though, and it’s not the new, interesting thing, so we’ll skip it.

The right side of the screen has a stack of Smartelligence logos which are alternately used for confirmation and to put the interface to sleep.

Mute

When virtual Greta first freaks out about her circumstance and begins to scream in existential terror, Matt reaches to the panel and mutes her. (To put a fine point on it: He’s a charming monster.) In this mode she cannot make a sound, but can hear him just fine. We do not see the interface he uses to enact this. He uses it to assert conversational control over her. Later he reaches out to the same interface to unmute her.

The control he touches is the one on his panel with a head and some gears reversed out of it. The icon doesn’t make sense for that function. The unmute animation flips it from right to left, which does provide a bit of feedback for Matt, but it should be a more fitting icon, and it should be labeled.

Cookie_mute
Also it’s teeny tiny, but note that the animation starts before he touches it. Is it anticipatory?

It’s not clear, though, how he knows that she is trying to speak while she is muted. Recall that she (and we) see her mouthing words silently, but from his perspective, she’s just an egg with a blue eye. The system would need some very obvious MUTE status display that increases in intensity when the AI is trying to communicate. Depending on how smart the monitoring feature was, it could even enable some high-intensity alert system for her when she needs to communicate something vital. Cinegenically, this could have been a simple blinking of the blue camera light, though this is currently used to indicate the passage of time during the Time Adjustment (see below).

Simulated Body

Matt can turn on a Simulated Body for her. This allows the AI to perceive herself as if she had her source’s body. In this mode she perceives herself as existing inside a room with large, wall-sized displays and a control console (more on this below) that is otherwise a featureless white.

Black_Mirror_Cookie_White_Room.png

I presume the Simulated Body is a transitional model—part of a literal desktop metaphor—meant to make it easy for the AI (and the audience) to understand things. But it would introduce a slight lag as the AI imagines reaching and manipulating the console. Presuming she can build competence in directly controlling the technologies in the house, the interface should “scaffold” away and help her gain the more efficient skills of direct control, letting go of the outmoded notion of having a body. (This, it should be noted, would not be as cinegenic since the story would just feature the egg rather than the actor’s expressive face.)

Neuropsychology nerds may be interested to know that the mind’s camera does, in fact, have spatial lags. In several experiments, subjects were asked to imagine animals as seen from the side, and were then timed to see how long it took them to imagine zooming into the eye. It usually takes longer to imagine the zoom to an elephant’s eye than to a mouse’s, because the “distance” is farther. Even though there’s no physicality to the mind’s camera to impose this limit, our brain is tied to its experience in the real world.

Black_Mirror_Cookie_Simulated_Body.png

The interface Matt has to turn on her virtual reality is confusing. We hear 7 beeps while the camera is on his face. He sees a 3D rendering of a woman’s body in profile and silhouette. He taps the front view and it fills with red. Then he taps the side view and it fills with red. Then he taps some Smartelligence logos on the side with a thumb and then *poof* she’s got a body. While I suspect this is a post-actor interface (i.e., Jon Hamm just tapped some things on an empty screen while on camera, and the designers had to retrofit an interface that fit his gestures later), this multi-button setup and three-tap initialization just makes no sense. It should be a simple toggle with access to optional controls like scaffolding settings (discussed above).

Time “Adjustment”

The main tool Matt has to force compliance is a time control. When Greta initially refuses to comply (specifically and delightfully, she asserts, “I’m not some sort of push-button toaster monkey!”), he uses his interface to make it seem like 3 weeks pass for her inside her featureless white room. Then again for 6 months. The solitary confinement drives her mad and eventually forces compliance.

Cookie_settime.gif

The interface to set the time is a two-layer virtual dial: two chapter rings with wide blue arcs for touch targets. The first time we see him use it, he spins the outer one about 360° (before the camera cuts away) to set the time for three weeks. While he does it, the inner ring spins around the same center but at a slower rate. I presume it’s months, though the spatial relationship doesn’t make sense. Then he presses the button in the center of the control. He sees an animation of a sun and moon arcing over an illustrated house to indicate her passage of time, and then the display returns to normal. Aside: Hamm plays this beat marvelously by callously chomping on the toast she has just helped make.

Toast.gif

Improvements?

Ordinarily I wouldn’t speak to improvements for an interface that is used for torture, but since this one could only affect a general AI that is as yet speculative, and it couldn’t be co-opted to torture real people (time travel doesn’t exist), I think this time it’s OK. Discussing it as a general time-setting control, I can see three immediate improvements.

1. Use fast forward models

It makes most sense for her time sentence to end and return to real-world speed automatically. But each time we see the time controls used, the following interaction happens near the end of the time sentence:

  • Matt reaches up to the console
  • He taps the center button of the time dial
  • He taps the stylized house illustration. In response it gets a dark overlay with a circle inside of it reading “SET TIME.” This is the same icon seen second from the top in the left panel.
  • He taps the center button of the time dial again. The dark overlay reads “Reset” with a new icon.
  • He taps the overlay.

Please tell me this is more post-actor interface design. Because that interaction is bonkers.

Cookie_stop.gif

If the stop function really needs a manual control, well, we have models for that which are readily understandable by both users and audiences. Have the whole thing work and look like a fast-forward control rather than this confusing mess. If he does need to end it early, as he does with the 6-month sentence, let him just press a control labeled PLAY or REALTIME.

2. Add calendar controls

A dial makes sense when a user is setting minutes or hours, but a calendar-like display should be used for weeks or months. It would be immediately recognizable and usable by the user, and understandable to the audience. Keeping Hamm’s three taps: I would design the first tap to set the start date, the second tap to set the end date, and the third to commit.

3. Add microinteraction feedback

Also note that as he spins the dials, he sees no feedback showing the current time setting. At 370° is it 21 or 28 days? The interface doesn’t tell him. If he’s really having to push the AI to its limits, the precision will be important. Better would be to show the time value he’s set so he could tweak it as needed, and then let that count down as time remaining while the animation progresses.
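To make the ambiguity concrete, here’s a minimal sketch. The mapping is entirely hypothetical (the episode never tells us what one revolution of the dial means), which is exactly the problem:

```python
def dial_to_days(angle_deg, days_per_revolution):
    """Map dial rotation to a time sentence.
    Hypothetical mapping: the show never specifies the dial's scale."""
    return angle_deg / 360 * days_per_revolution

# The same 370° spin yields different sentences under two plausible scales:
weeks3 = dial_to_days(370, 21)  # one revolution = 3 weeks -> ~21.6 days
weeks4 = dial_to_days(370, 28)  # one revolution = 4 weeks -> ~28.8 days
```

Without a numeric readout, Matt has no way of knowing which of those he just dialed in.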

Cookie_settime.gif

Effectiveness subtlety: Why not just make the solitary confinement pass instantly for Matt? Well, recall he is trying to ride a line of torture without having the AI wig out, so he should have some feedback as to the duration of what he’s putting her through. If it were always instant, he couldn’t tell the difference between three weeks and three millennia if he had accidentally entered the wrong value. But if real-world time is passing, and it’s taking longer than he thinks it should, he can intervene and stop the fast-forwarding.

That, or of course, show feedback while he’s dialing.

Near the end of the episode we learn that a police officer is whimsically torturing another Cookie: he sets the time ratio to “1000 years per minute” and then just lets it run while he leaves for Christmas break. The current time ratio is absent from the screen; it should be displayed, and a control for it provided.
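The stakes of that missing readout are easy to quantify. A quick sketch, assuming (hypothetically) that the break lasts two weeks:

```python
# How much subjective time passes at "1000 years per minute"?
YEARS_PER_MINUTE = 1000
MINUTES_PER_DAY = 24 * 60  # 1,440

def subjective_years(real_days, years_per_minute=YEARS_PER_MINUTE):
    """Subjective years the Cookie experiences over `real_days` real-world days."""
    return real_days * MINUTES_PER_DAY * years_per_minute

print(f"{subjective_years(14):,}")  # 20,160,000 years over a two-week break
```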

Black_Mirror_Cookie_31.png

Add psychological state feedback

There is one “improvement” that does not pertain to real world time controls, and that’s the invisible effect of what’s happening to the AI during the fast forward. In the episode Matt explains that, like any good torturer, “The trick of it is to break them without letting them snap completely,” but while time is passing he has no indicators as to the mental state of the sentience within. Has she gone mad? (Or “wigged out” as he says.) Does he need to ease off? Give her a break?

I would add trendline indicators or sparklines showing things like:

  • Stress
  • Agitation
  • Valence of speech

I would have these trendlines highlight when any of the variables are getting close to known psychological limits. Then as time passes, he can watch the trends to know if he’s pushing things too far and ease off.
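As a sketch of how that highlighting might work (the metric names, scales, and limits here are all invented for illustration):

```python
# Hypothetical sketch: flag any psychological metric nearing a known limit,
# so the trendline UI can highlight it and the operator can ease off.
# Metric names, 0..1 / -1..1 scales, and limit values are all invented.
LIMITS = {"stress": 0.9, "agitation": 0.85, "speech_valence": -0.8}

def metrics_to_highlight(readings, margin=0.1):
    """Return the names of metrics within `margin` of their limit."""
    flagged = []
    for name, limit in LIMITS.items():
        value = readings.get(name, 0.0)
        if limit < 0:
            # Negative limits are floors (speech valence too negative).
            if value <= limit + margin:
                flagged.append(name)
        elif value >= limit - margin:
            # Positive limits are ceilings.
            flagged.append(name)
    return flagged

print(metrics_to_highlight({"stress": 0.82, "agitation": 0.5, "speech_valence": -0.75}))
# → ['stress', 'speech_valence']
```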

Ship Console

FaithfulWookie-console.png

The only flight controls we see are an array of stay-state toggle switches (see the lower right hand of the image above) and banks of lights. It’s a terrifying thought that anyone would have to fly a spaceship with binary controls, but we have some evidence that there are analog controls, since Luke moves his arms after the Falcon fires shots across his bow.

Unfortunately we never get a clear view of the full breadth of the cockpit, so it’s really hard to do a proper analysis. Ships in the Holiday Special appear to be based on scenes from A New Hope, but we don’t see the inside of a Y-Wing in that movie. It seems to be inspired by the Falcon. Take a look at the upper right hand corner of the image below.

ANewHope_Falcon_console01.png

R. S. Revenge Comms

Note: In honor of the season, Rogue One opening this week, and the reviews of Battlestar Galactica: The Mini-Series behind us, I’m reopening the Star Wars Holiday Special reviews, starting with the show-within-a-show, The Faithful Wookiee. Refresh yourself on the plot if it’s been a while.

Faithful-Wookiee-02

On board the R.S. Revenge, the purple-skinned communications officer announces he’s picked up something. (Genders are a goofy thing to ascribe to alien physiology, but the voice actor speaks in a masculine register, so I’m going with it.)

faithful-wookiee-01-surrounds

He attends a monitor, below which are several dials and controls in a panel. On the right of the monitor screen there are five physical controls.

  • A stay-state toggle switch
  • A stay-state rocker switch
  • Three dials

The lower two dials have rings under them on the panel that accentuate their color.

Map View

The screen is a dark purple overhead map of the impossibly dense asteroid field in which the Revenge sits. A light purple grid divides the space into 48 squares. This screen has text all over it, written in a constructed orthography not mentioned on Wookieepedia. In the upper center and upper right are unchanging labels. A triangular label sits in the lower left. In the lower right corner, text appears and disappears too fast for (human) reading. The middle right side of the screen is labeled in large characters, but these also change too rapidly to make much sense of.

revengescreen

Sleep Pod—Wake Up Countdown

On each of the sleep pods in which the Odyssey crew sleep, there is a display for monitoring the health of the sleeper. It includes some biometric charts, measurements, a body location indicator, and a countdown timer. This post focuses on that timer.

To show the remaining time until Julia wakes, the pod’s display presents a countdown in hours, minutes, and seconds. It shows the final seconds in red while also beeping for each second. It pops up over the monitoring interface.

image03

Julia’s timer reaches 0:00:01.

The thing with pop-ups

We all know how it goes with pop-ups—pop-ups are bad and you should feel bad for using them. Well, in this case it could actually be not that bad.

The viewer

Although the sleep pod display’s main function is to show biometric data of the sleeper, the system displays a popup to show the remaining time until the sleeper wakes up. And while the display has some degree of redundancy in how it shows data (e.g., heart rate in both graphics and numbers), the design of the countdown brings two downsides for the viewer.

  1. Position: it’s placed right in the middle of the screen.
  2. Size: it’s roughly a quarter of the size of the whole display.

Between the two, it partially covers both the pulse graphics and the numbers: vital (i.e., potentially life-threatening) information of use to the viewer.

Ghost trap

Once ghosts are bound by the streams from the Proton Packs, they can be captured in a special trap. It has two parts: the trap itself, roughly the size of a toaster, and the foot-pedal activation switch, which connects to the trap box by a long black cord.

Trap02

To open the trap, a ghostbuster simply steps on the foot pedal. For a second the trap sparks with some unknown energy and opens to reveal a supernatural light within. Once open, the bound ghost can be manipulated down towards the trap.

Trap08

When the ghost is close to the trap, the Ghostbuster steps on the foot pedal again. Lots of special effects later, the ghost gets sucked down into the trap and it closes.

With a ghost contained inside, a red indicator light illuminates near the handle to let users know that a dangerous thing is contained within. (Also, it emits smoke, but I suspect that’s a side effect rather than a feature that’s been added in.) The trap can be held by the long handle or (and this is the way the Ghostbusters themselves tend to carry it around) by the cord.

Trap16

The design of the trap has so many great aspects. The separate control keeps the ghostbuster a safe distance from the proton streams, the trap, and the ghost. And the use of a foot pedal as a switch keeps his hands free to keep a defensive grip on the proton gun. I should also make note of the industrial design of the thing: the safety stripes, the handle, and the shape tell of a device handmade by scientists that is dangerous and powerful.

Still, some improvements

If the activation was wireless rather than a foot pedal, the Ghostbuster would be free to move to wherever was most tactically sound, rather than constrained to standing near it. Wireless controls have their own tradeoffs, of course, and those may not be acceptable in the mission-critical scenarios of ghostbusting. If that control was also hands-free (gestural, vocal, ocular, brain) then you’d keep the goodness of the hands-free pedal.

The red light is a little ambiguous. It could just mean “power on,” which doesn’t help. Blinking should be used very judiciously, but here it’s warranted, so I’d make that blink to say “Dangerous thing contained. Release only with caution.” Let’s presume the thing automatically locks when a ghost is trapped and can only be unlocked by the containment unit (the next post). Even better might be several lights blinking, perhaps both around the trap doors and around any controls that might release the ghost, e.g. the foot pedal. You could even make it blink similarly to the “working” light animation of the Proton Packs to tie the equipment together.

One problem that’s familiar to software designers is that the control is a stateless toggle, i.e. it looks and behaves the same whether you’re opening the trap or closing it. If the trap doesn’t automatically lock with a ghost in it, that’s a major problem. Imagine if the activator had hidden behind a curtain to trap a poltergeist and wasn’t sure if he’d accidentally stepped on the pedal. A UX 101 rule of thumb is that controls should indicate the state of the thing they control. So the pedal should have a signal to indicate whether the trap is open or closed, even though the trap itself conveys that pretty well. Even better if that signal is something that can be felt with the foot. Maybe it’s a rocker switch? (Like this Linemaster, but more exaggerated.)
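To sketch the difference in code (the pedal/trap model here is invented for illustration): a stateless toggle only knows it was pressed, while a stateful design lets the pedal mirror the trap, so the operator can read it, or feel it, without looking:

```python
# Hypothetical sketch: a pedal that reports the trap's state on every press,
# rather than a blind toggle that gives no feedback at all.
class Trap:
    def __init__(self):
        self.is_open = False

    def pedal_press(self):
        """Toggle the trap and return the new state, so a pedal
        indicator (light, rocker position) can mirror it."""
        self.is_open = not self.is_open
        return self.is_open

trap = Trap()
print(trap.pedal_press())  # True: trap open, pedal signals OPEN
print(trap.pedal_press())  # False: trap closed, pedal signals CLOSED
```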

Lastly, we can also presume that the trap has a power source, and that there’s time pressure to get the trap to the containment unit before that power source dies. But where’s that information? Some indication of how much power and time remain would be very useful to avoid all that work (and, you know, property damage) going to waste.

Small improvements, but each would improve it and not take away from the narrative.

Reckless undocking

After logging in to her station, Ibanez shares a bit of flirty dialog with mushroom-coiffed Zander Barcalow, and Captain Deladier says, “All right, Ibanez. Take her out.” Ibanez grasps the yoke, pulls back, and the ship begins to pull back from the docking station while still attached by two massive cables. Deladier and Barcalow keep silent but watch as the cables grow dangerously taut. At the last minute Ibanez flips a toggle switch on her panel from 0 to 1 and the cables release.

StarshipT-undocking08

StarshipT-undocking16

There’s a lot of wrong in just this sequence. I mean, I get narratively what’s happening here: Check her out, she’s a badass maverick (we’re meant to think). But, come on…

  1. Where is the wisdom of letting a Pilot Trainee take the helm on her first time ever aboard a vessel? OK. Sorry. This is an interface blog. Ignore that one.
  2. The 1 and 0 symbols are International Electrotechnical Commission 60417 standards for on and off, respectively. How is the cable’s detachment caused by something turning on? If it was magnetic, shouldn’t you turn the magnetism off to release the cables?
  3. Why use the symbols for ON and OFF for an infrequent, specific task? Shouldn’t this be reserved for a kill switch or power to the station or something major? Or shouldn’t it bear a label reading “Power Cable Magnets” or something to make it more intelligible?
  4. Why is there no safety mechanism for this switch? A cover? A two-person rule? A timed activation? It’s fairly consequential. The countersink doesn’t feel like it’s enough.
  5. Where is the warning klaxon to alert everyone to this potentially disastrous situation?
  6. Why isn’t she dishonorably discharged the moment she started to maneuver the ship while it was still attached to the dock? Oh, shit. Sorry. Interfaces. Right. Interfaces.

Rhod’s rod

TheFifthElement-Rhod-011

One of the most delightfully flamboyant characters in sci-fi is the radio star in The Fifth Element, Ruby Rhod. He wears a headpiece to hear his producers as well as to record his own voice. But to capture the voices of others, he has a technological staff that he carries.

Function

The handle of the device has a microphone built into it. Because of the length of the staff, his reach to potential interviewees is extended. The literal in-your-face nature of the microphone matches Ruby’s in-your-face show.

TheFifthElement-Rhod-004

To let interviewees know when they’re being recorded, a red light in the handle illuminates. This also lets others nearby know that the interviewee is “on air” and not to interrupt.

Ruby also has a single switch on the handle. It’s a small silver toggle. It’s likely that he can set this switch to function as he likes. The one time we see it in action, he has set it to play back an “audio cut,” (the sound clips morning radio talk show hosts insert into their programs) in this case an intimate recording of the Princess of Kodar Japhet. He flips the toggle to play the cut, and flips it back when it’s done.

Here, a different input would have worked better. The toggle switch is too easy to bump, and it kind of ruins the design of the handle. Better would be a billet button. This sort of momentary button sits flush with a bezel, which prevents accidental activation from, say, a finger lying across it, or from resting the button against a flat surface. If Ruby wants the recorded sound to play out completely, and the button press only starts or stops the playback, it would be good to know the state of the playback, and for that a billet button with an LED ring would be best.

We also know that Ruby is a performer. He would be happier if he had more than a play button: a way to express himself. His hand is already in a grip to hold the staff, so the control should fit that. If you could outfit the billet button with directional pressure sensitivity, he could assign each direction to a control. So, for instance, while he was pressing the button, the audio would play, and the harder he pressed up, the more the volume for each echo would increase. Or pressing down could lower the sample in tone, etc. This would allow him to not just play the audio cut, but perform it.

Fashion

To work as a device that the character would want to carry, it has to match his sense of style. I mean this first in a general sense, and the device does that, with its handle of ornately carved silver. Ruby’s necklaces, bracelets, and rings are all silver, and they work together. The staff also works in his hand like a drum major’s baton, augmenting his larger-than-life presence with an attention-commanding object.

It has to fit his daily fashion as well, and the staff does that, too. The shaft can change appearance. I don’t know if it’s an e-ink-type surface, replaceable staves, or fabric sleeves that change out, but when Ruby’s in leopard print, the staff is in leopard print, too. When Ruby’s decked out in rose-adorned tuxedo black, the staff matches.

TheFifthElement-Rhod-002

TheFifthElement-Rhod-006

Though this is more a portable than a wearable technology, the fact that it can change to match the personal style of the wearer makes it not only functional, but since it fits his persona, desirable as well.