The Thanatorium: Usher Panel

The thanatorium is a speculative device for assisted suicide in Soylent Green. Suicide and death are not easy topics and I will do my best to address them seriously. Let me first take a moment to direct anyone who is considering or dealing with suicide to please stop reading this and talk to someone about it. I am unqualified to address—and this blog is not the place to work through—such issues.

There are four experiences to look at in the interface and service design of the Thanatorium: the patient, their beneficiaries, the usher to the beneficiaries, and the attendants to the patient. This post is about the interface for the usher. This member of the Thanatorium staff acts as a stage manager of sorts: helping the patient and the beneficiaries get where they need to go, ensuring the beneficiaries do not do what they must not, and running the technical aspects of the ceremony.

The usher, left, ushing.

Note that—as I backworlded in the last post—these notes presume that the reason the beneficiaries are separated from the patients is to prevent them from trying to stop the event, and to minimize distractions during the cinerama display for gross biochemical reasons. Also recall that we’re having to go with a best-guess as to what the usual experience is, since we only see Thorn’s tardy thuggery in the film.

The usher’s tasks

Based on what we see in the film, the usher has a lot to do for each event…

  • Receive the patient’s preferences (music category, color, whatever other questions intake asked before we join that scene) from the intake personnel 
  • Escort the patient to the “theater” and the beneficiaries to the observation room
  • Set the color of the light and the music to the patient’s preferences
  • Close the portal for the hemlock drinking
  • Open the portal for last farewells
  • Close the portal for the cinerama display
  • Start the cinerama display
  • Get help if the patient gets up or otherwise interrupts the ceremony
  • Wait for the patient to die
  • Open the portal so the beneficiaries can view the body being shuttled away
  • Ensure the beneficiaries behave, answer any questions
  • Escort the beneficiaries back to the lobby
A screen shot from the movie showing the existing usher panel.

The interface barely touches on any of this

With all that in mind, we can see that this interface is woefully ill-equipped for any of these tasks. In the prior post I argue that the features for speaking to the patient—the speaker, the audio jack, and the SPEAKING PERMITTED indicator—should be separated from the usher’s stage manager functions. So we’re only going to pay attention in this post to the row of backlit rocker toggles labeled PORTAL, EFFECTS, CHAMBER 2, AUDIO, VISUAL, CHAMBER 1 and a little bit of the authorization key that looks like a square metal button in the screen cap above. And note I’m going to make suggestions that are appropriate to the early 1970s rather than use either modern real-world or speculative interface technologies.


First, that authorization key is pretty cool

The fact that it’s a featureless, long metal cuboid is so simple it feels sci-fi. Even the fact that its slot is unlabeled is good—it would help prevent a malicious or grief-panicked user from figuring out how to take control. You could even go one step further and have a hidden magnetic switch, so there’s not even a slot to clue in users. Production designer note, though, this means that the panel needs to be wood (or something non-magnetic) rather than a ferromagnetic metal. Aluminum, maybe, since it’s paramagnetic, but you also don’t want anything that can scratch or wear easily and give away the position of the secret spot.

A side view of a magnetic cabinet lock. When the magnet gets close to the right spot on the cabinet door, it pulls the latching mechanism open, allowing the door to be opened.
This is a cabinet lock, but the same principle would apply.

But, the buttons don’t match the scene

The PORTAL button never changes state, though we see the portal open and close in the scene. AUDIO is dim though we hear the audio. Maybe dim equals on? No, because VISUAL is lit. There’s some gymnastics we could do to apologize for this, but Imma give up and just say it’s a moviemaking error.

And they are poorly clustered

Why is CHAMBER 2 before CHAMBER 1? Why are the three AV buttons split up by CHAMBER 2? A more reasonable chunking of these would be PORTAL on its own, CHAMBER 1 & CHAMBER 2 together, and the remaining A/V buttons together. These groups should be separated to make them easier to identify and help avoid accidental activation (though the stakes here are pretty low.)

One square by itself labeled PORTAL. Two squares side-by-side labeled CHAMBER 1 and CHAMBER 2. And three squares together labeled AUDIO, VISUAL, and EFFECTS.
If we were just dealing with these 6 buttons, this might be a reasonable clustering. But, read on…

The PORTAL button is the wrong type and orientation

Look close at the screen shot and you’ll see that each button consists of three parts. A white, back-lit square which bears the label, and two black pushbuttons that act like rocker switches. That is, press the upper one in, and the lower one pops out. Press that popped-out lower one in, and the upper one pops out again. When the lower button is pressed in, the button is “on,” which you can tell because those are the only ones with the upper button popped out and the back light illuminated.

Rocker switches are good for things with two mutually exclusive states, like ON and OFF. The PORTAL button is the only one for which this makes unambiguous sense, with its two states being OPEN and CLOSED. But, we have to note that it is poorly mapped. The button has a vertical orientation, but the portal closes from right to left. It means the usher has to memorize which toggle state is open and which one is closed. It would be more usable to have an inferrable affordance. Cheapest would be to turn the button sideways so it maps more clearly, but an even tighter mapping would be a slider mounted sideways with OPEN and CLOSED labels. I don’t think the backlit status indicator is necessary here because there’s already a giant signal of the state of the portal, and that’s the adjacent portal.

An image of a slider switch.

What do EFFECTS and CHAMBER even do?

What does the EFFECTS button do? I mean, if AUDIO and VISUAL have their own controls, what’s left? Lasers? A smoke machine? Happiness pheromones? (I’m getting The Cabin in the Woods vibes here.) Whatever it is, if there are multiple, they should have individual controls, in case the patient wants one but not the other, or if there’s any variability that needs controlling.

Also what do CHAMBER 1 and CHAMBER 2 do? It’s very poor labeling. What chambers do they refer to? Maybe the observation room is chamber 1 and the theater is chamber 2? If so, different names could save the usher from having to memorize them. Also, what do these switches control? Lights? Door locks? We would need to know to really make design recommendations for these.

The AV controls are incomplete

Which takes us to AUDIO, and VISUAL. Each of these is missing something.

Sure, they might need ON/OFF controls as we see here. But how about a volume control to accommodate the hard-of-hearing and the sound-sensitive? How about a brightness control for the video? These could have an OFF state and replace the toggle switches.

We know from the movie itself that the service has offered Sol his choice of music genre. Where is the genre selector? This is a non-trivial problem since the number of genres is on the order of 1000. They probably don’t offer all of them, but at intake they do ask Sol his preference as an open-ended question, so it implies a huge selection. Radio band selectors would have made sense to audiences in the 1970s, and signal a huge number of options, but risk being “out of tune” and imply that it’s broadcast. So either have a small number of options with a 15° rotary switch (and rewrite the intake scene so Sol selects from a menu) or three 10-position rotary switches with a “commit” momentary button, and have a small reference booklet hanging there.
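The three-dial scheme is just decimal place-value encoding, which we can sketch in a few lines. The genre numbering and the commit behavior are my assumptions; the film only gives us Sol’s open-ended request at intake.

```python
# A minimal sketch of the three-dial genre selector: three 10-position
# rotary switches combine into one catalog index from 0 to 999.

def genre_index(hundreds: int, tens: int, ones: int) -> int:
    """Combine three decimal rotary switches into a single catalog index."""
    for dial in (hundreds, tens, ones):
        if not 0 <= dial <= 9:
            raise ValueError("each rotary switch has positions 0 through 9")
    return hundreds * 100 + tens * 10 + ones

# The usher looks the genre up in the hanging reference booklet,
# dials it in, then presses the commit button.
assert genre_index(0, 4, 2) == 42
assert genre_index(9, 9, 9) == 999
```

Three dials comfortably cover the ~1000-genre catalog, and the booklet carries the labeling burden the panel can’t.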

I also want to believe that the theme of the video can be selected. Sol has chosen “nature” but you could imagine patients requesting something else for their end-of-life ceremony, like “family,” “celestial,” “food” (given the diegesis, this should be first) or even “religious” (with a different one for each of the world’s twelve major religions). So it would make sense to have a video theme selector as well, say, on the order of 20 options. That could be a 15° rotary switch. Labeling gets tough, but it could just be numbers with an index label to the side.

An image showing the components of such a video selector, including the label.

I’m going to presume that they never need scrubbing controls (REWIND or FAST FORWARD) for the AV. The cinerama plays through once and stops. Sudden rewinding or fast forwarding would be jarring for the patient and ruin the immersion. Have a play button that remains depressed while the cinerama is ongoing. But if the patient passes more quickly than expected, a RESET button would make sense. So would a clock or a countdown timer, since Sol had confirmed at intake that it would be at least 20 minutes, and to let the usher know how much time they have left to get those neurotransmitter numbers up up up.

Some controls are straight up missing

How does an usher set the lights according to the patient’s preferences? They ask at intake, and we see Sol’s face washed with a soothing amber color once the attendants leave, so there should be a color selector. Three RGB slide potentiometers would provide perfect control, but I doubt anyone would quibble that the green they’d asked for was #009440 and not #96b300, so you could go with a selector. The XKCD color survey results show that there are on the order of 30 commonly-named colors, so something similar to the video-theme selector above would work, with a brightness potentiometer to the side.

XKCD color graph showing the outlines with crowdsourced regions and labels. But please check out the full post as linked in the image caption. It’s so much more awesome than just this.
I will always be in awe of this undertaking and visualization, Randall Munroe.
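The quantized-selector idea above amounts to snapping any requested color to the nearest entry in a small named palette, so a request for #009440 simply reads as “green.” Here’s a sketch; the palette entries and the squared-Euclidean distance metric are my illustrative choices, not anything from the film.

```python
# Snap a requested RGB color to the nearest entry in a small named palette.
# Palette values are hypothetical stand-ins for the ~30 XKCD-survey colors.

PALETTE = {
    "red":    (220, 40, 40),
    "amber":  (255, 191, 0),
    "green":  (0, 160, 60),
    "blue":   (40, 90, 220),
    "violet": (140, 60, 200),
}

def nearest_color(r: int, g: int, b: int) -> str:
    """Return the palette name with the smallest squared RGB distance."""
    return min(PALETTE, key=lambda name: sum(
        (p - c) ** 2 for p, c in zip(PALETTE[name], (r, g, b))))

assert nearest_color(0x00, 0x94, 0x40) == "green"
```

A rotary selector with ~30 detents is the electromechanical equivalent: the intake form does the quantizing, and the usher just dials the named position.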

These controls ought to be there

The patient experience is a bit of a show, so to signal its beginning and end, there should be lighting controls for the usher to dim and raise the lights, like in a theater. So let’s add those.

Also, the usher has a minor medical task to accomplish: Monitoring the health of the patient to know when they’ve passed. The three metrics for clinical death are a cessation of all three of…

  1. breath
  2. blood flow
  3. brain activity

…so there should be indicators for each of these. As discussed in the medical chapter of the book, this is ideally a display of values over time, but in the resource-poor and electromechanical world of Soylent Green, it might have to be a collection of gauges, with an indicator bulb near the zero for when activity has stopped. A final, larger indicator bulb should light when all three gauges are still. To really underscore the morbidness of this interface, all those indicators should be green.

A comp showing the three clinical death gauges and indicators, as described.
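The master-bulb logic is just an AND over the three stopped signals, which a sketch makes concrete. The signal names and the zero threshold are my assumptions, not anything shown in the film.

```python
# Sketch of the master-indicator logic: the large bulb lights only when
# breath, blood flow, and brain activity have all ceased.

def all_activity_stopped(breath: float, blood_flow: float,
                         brain_activity: float,
                         epsilon: float = 0.0) -> bool:
    """True when every gauge reads (effectively) zero."""
    return all(reading <= epsilon
               for reading in (breath, blood_flow, brain_activity))

assert all_activity_stopped(0.0, 0.0, 0.0)
assert not all_activity_stopped(0.0, 0.2, 0.0)  # blood still circulating
```

Electromechanically this is three relays in series feeding one bulb: any gauge still registering activity holds its relay open.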

If you buy my backworlding, i.e. that part of the point of preventing interruptions is to maximize the dopamine and serotonin being released into the patient’s body, there should also be status indicators showing the level of these neurotransmitters in the patient’s bloodstream. They can be the same style of gauges, but I’d add a hand drawn arrow to some point along the chapter ring that reads “quota.” Those indicators should be larger than the clinical death indicators to match their importance to Soylent’s purposes.

Lastly, thinking of Thorn’s attack, the usher should have a panic button to summon help if the patient or the beneficiaries are getting violent (especially once they discover they’re locked in.) This should be hidden under the panel so it can be depressed secretly.

Where should this panel go?

As described in the beneficiaries post, we’re going to leave the communication interface just below the portal, where it is now, for those fleeting moments when they can wish the patient goodbye.

A screen shot of the movie showing Thorn at the portal addressing the usher, angrily, as usual.

And there’s no need to put the usher’s controls under the nose of the beneficiaries. (In fact with the medical monitoring it would be kind of cruel.) So let the usher have a lectern beside the door, in a dim pool of light, and mount the controls to the reading top. (Also give them a stool to rest on, have we learned nothing?) Turn the lectern such that the interface is not visible to beneficiaries in the room. This lets the usher remain respectfully out of the center of attention, but in a place where they can keep an eye on both the patient when the portal is open, and the beneficiaries throughout.

An image of a lectern
Looks cheap? Perfect.

In total, the lectern panel would look something like this…

The “READY” indicators are explained in the attendant’s post.

…and the scene could go something like this…

  • Interior. Thanatorium observation room.
  • The Usher escorts Thorn into the room. Thorn rushes to the portal. The usher steps behind a lectern near the door.
  • Usher
  • It’s truly a shame you missed the overture.
  • The Usher slides a switch on the lectern panel, and the portal closes.
  • Thorn
  • I want to see him.
  • Usher, looking down at his interface
  • That is prohibited during the ceremony.
  • Worm’s eye view. Thorn takes a few steps toward him and knocks the lectern to the ground. It falls with its interface in the foreground. In the background, we see Thorn slam the usher against the wall.
  • Thorn
  • Well I can assure you, open that damned thing right now, or I swear to God you’ll die before he does!
  • Usher
  • OK, OK!
  • The usher falls to his hands and knees and we see him slide the switch to open the portal. Thorn steps back to it, and the usher gets on his feet to right the lectern.

The Thanatorium: A beneficiary’s experience

The thanatorium is a speculative service for assisted suicide in Soylent Green. Suicide and death are not easy topics and I will do my best to address them seriously. Let me first take a moment to direct anyone who is considering or dealing with suicide to please stop reading this and talk to someone about it. I am unqualified to address—and this blog is not the place to work through—such issues.

There are four experiences to look at in the interface and service design of the Thanatorium: The patient, their beneficiaries, the usher, and the attendants to the patient. This post is about the least complicated of the bunch, the beneficiaries.

Thorn’s experience

We have to do a little extrapolation here because the way we see it in the movie is not the way we imagine it would work normally. What we see is Thorn entering the building and telling staff there to take him to Sol. He is escorted to an observation room labeled “beneficiaries only” by an usher. (Details about the powerful worldbuilding present in this label can be found in the prior post.) Sol has already drunk the “hemlock” drink by the time Thorn enters this room, so Sol is already dying and the robed room attendants have already left.

Aaand I just noticed that the walls are the same color as the Soylent. Ewww.

This room has a window view of the “theater” proper, with an interface mounted just below the window. At the top of this interface is a mounted microphone. Directly below is an intercom speaker beside a large status alert labeled SPEAKING PERMITTED. When we first see the panel this indicator is off. At the bottom is a plug for headphones to the left, a slot for a square authorization key, and in the middle, a row of square, backlit toggle buttons labeled PORTAL, EFFECTS, CHAMBER 2, AUDIO, VISUAL, and CHAMBER 1. When Sol is mid-show, EFFECTS and VISUAL are the only buttons that are lit.

When the usher closes the viewing window, explaining that it’s against policy for beneficiaries to view the ceremony, Thorn…uh…chokes him in order to persuade him to let him override the policy.

Persuasion.

“Persuaded,” the usher puts his authorization key back in the slot. The window opens again. Thorn observes the ceremony in awe, having never seen the beautiful Earth of Sol’s youth. He mutters “I didn’t know” and “How could I?” as he watches. Sol tries weakly to tell Thorn something, but the speaker starts glitching, with the SPEAKING PERMITTED indicator flashing on and off. Thorn, helpfully, pounds his fist on the panel and demands that the usher do something to fix it. The usher gives Thorn wired earbuds and Thorn continues his conversation. (Extradiegetically, is this so they didn’t have to bother with the usher’s overhearing the conversation? I don’t understand this beat.) The SPEAKING PERMITTED light glows a solid red and they finish their conversation.

Yes, that cable jumps back and forth like that in the movie during the glitch. It was a simpler time.

Sol dies, and the lights come up in the chamber. Two assistants come to push the gurney along a track through a hidden door. Some mechanism in the floor catches the gurney, and the cadaver is whisked away from Thorn’s sight.

Regular experience?

So that’s Thorn’s corrupt, thuggish cop experience of the thanatorium. Let’s now make some educated guesses about what this might imply for the regular, non-thug experience for beneficiaries.

  1. The patient and beneficiaries enter the building and are greeted by staff.
  2. They wait in a queue in the lobby for their turn.
  3. The patient is taken by attendants to the “theater” and the beneficiaries by the usher to the observation room.
  4. Beneficiaries witness the drinking of the hemlock.
  5. The patient has a moment to talk with the beneficiaries and say their final farewells.
  6. The viewing window is closed as the patient watches the “cinerama” display and dies. The beneficiaries wait quietly in the observation room with the usher.
  7. The viewing window is opened as they watch the attendants wheel the body into the portal.
  8. They return to the lobby to sign some documents for benefits and depart.

So, some UX questions/backworlding

We have to backworld some of the design rationales involved to ground critique and design improvements. After all, design is the optimization of a system for a set of effects, and we want to be certain about what effects we’re targeting. So…

Why would beneficiaries be separated from the patient?

I imagine that the patient might take comfort from holding the hands or being near their loved ones (even if that set didn’t perfectly overlap with their beneficiaries). So why is there a separate viewing room? There are a handful of reasons I can imagine, only one of which is really satisfying.

Maybe it’s to prevent the spread of disease? Certainly given our current multiple pandemics, we understand the need for physical separation in a medical setting. But the movie doesn’t make any fuss about disease being a problem (though with 132,000 people crammed into every square mile of the New York City metropolitan area you’d figure it would be), and in Sol’s case, there’s zero evidence in the film that he’s sick. Why does the usher resist the request from Thorn if this was the case? And why wouldn’t the attendants be in some sort of personal protective gear?

Maybe it’s to hide the ugly facts of dying? Real death is more disconcerting to see than most people are familiar with (take the death rattle as one example) and witnessing it might discourage other citizens from opting-in for the same themselves. But, we see that Sol just passes peacefully from the hemlock drink, so this isn’t really at play here.

Maybe it’s to keep the cinerama experience hidden? It’s showing pictures of an old, bountiful earth that—in the diegesis—no longer exists. Thorn says in the movie that he’s too young to know what “old earth” was like, so maybe this society wants to prevent false hope? Or maybe to prevent rioting, should the truth of How Far We’ve Fallen get out? Or maybe it’s considered a reward for patients opting-in to suicide, thereby creating a false scarcity to further incentivize people to opt-in themselves? None of this is super compelling, and we have to ask, why does the usher give in and open the viewport if any of this was the case?

That blue-green in the upper left of this still is the observation booth.

So, maybe it’s to prevent beneficiaries from trying to interfere with the suicide. This society would want impediments against last-minute shouts of, “Wait! Don’t do it!” There’s some slight evidence against this, as when Sol is drinking the hemlock, the viewing port is wide open, so beneficiaries might have pounded on the window if this were standard operating procedure. But its being open might have been an artifact of Sol’s having walked in without any beneficiaries. Maybe the viewport is ordinarily closed until after the hemlock, opened for final farewells, closed for the cinerama, and opened again to watch as the body is sped away?

Ecstasy Meat

This rationale supports another, more horrible argument. What if the reason is that Soylent (the company) wants the patient to have an uninterrupted dopamine and serotonin hit at the point of dying, so those neurotransmitters are maximally available in the “meat” before processing? (Like how antibiotics get passed along to meat-eaters in industrialized food today.) It would explain why they ask Sol for his favorite color in the lobby. Yes it is for his pleasure, but not for humane reasons. It’s so he can be at his happiest at the point of death. Dopamine and serotonin would make the resulting product, Soylent Green, more pleasurable and addictive to consumers. That gives an additional rationale as to why beneficiaries would be prevented from speaking—it would distract from patients’ intense, pleasurable experience of the cinerama.

A quickly-comped up speculative banner ad reading “You want to feel GOOD GOOD. Load up on Soylent Green today!”
Now, with more Clarendon.

For my money, the “ecstasy meat” rationale reinforces and makes worse the movie’s Dark Secret, so I’m going to go with that. Without this rationale, I’d say rewrite the scene so beneficiaries are in the room with the patient. But with this rationale, let’s keep the rooms separate.

Beneficiary interfaces

Which leads us to rethinking this interface.

A first usability note is that the SPEAKING PERMITTED indicator is very confusing. The white text on a black background looks like speaking is, currently, permitted. But then the light behind it illuminates and I guess, then speaking is permitted? But wait, the light is red, so does that mean it’s not permitted, or is? And then adding to the confusion, it blinks. Is that the glitching, or some third state? Can we send this to its own interface thanatorium? So to make this indicator more usable, we could do a couple of things.

  • Put a ring of lights around the microphone and grill. When illuminated, speaking is permitted. This presumes that the audience can infer what these lights mean, and isn’t accessible to unsighted users, but I don’t think the audio glitch is a major plot point that needs that much reinforcing; see above. If the execs just have to have it crystal clear, then you could…
  • Have two indicators, one reading SPEAKING PERMITTED and another reading SILENCE PLEASE, with one or the other always lit. If you had to do it on the cheap, they don’t need to be backlit panels, but just two labeled indicator lamps would do.

And no effing blinking.

Thorn voice: NO EFFING BLINKING!

I think part of the affective purpose of the interface is to show how cold and mechanistic the thanatorium’s treatment of people is. To keep that, you could add another indicator light on the panel labeled somewhat cryptically, PATIENT. Have it illuminated until Sol passes, and then have a close up shot when it fades, indicating his death.

Ah, yes, good to have a reminder of why he’s a critic and not a working FUI designer.

A note on art direction. It would be in Soylent’s and our-real-world interest to make this interface feel as humane as possible. Maybe less steel and backlit toggles? Then again, this world is operating on fumes, so they would make do with what’s available. So this should also feel a little strung together, maybe with some exposed wires bundled with electrical tape, and tape holding the audio jack in place.

Last note on the accommodations. What are the beneficiaries supposed to do while the patient is watching the cinerama display? Stand there and look awkward? Let’s get some seats in here and pipe the patient’s selection of music in. That way they can listen and think of the patient in the next room.

If you really want it to feel extradiegetically heartless, put a clock on the wall by the viewing window that beneficiaries can check.


Once we simplify this panel and make the room make design sense, we have to figure out what to do with the usher’s interface elements that we’ve just removed, and that’s the next post.

Design fiction in sci-fi

As so many of my favorite lines of thought have begun, this one was started with a provocative question lobbed at me across social media. Friend and colleague Jonathan Korman tweeted to ask, above a graphic of the Black Mirror logo, “Surely there is another example of pop design fiction?”

I replied in Twitter, but my reply there was rambling and unsatisfying, so I’m re-answering here with an eye toward being more coherent.

What’s Design Fiction?

If you’re not familiar, design fiction is a practice that focuses on speculative artifacts to raise issues. While leading the interactions program at The Royal College of Art, Anthony Dunne and Fiona Raby catalyzed the practice.

“It thrives on imagination and aims to open up new perspectives on what are sometimes called wicked problems, to create spaces for discussion and debate about alternative ways of being, and to inspire and encourage people’s imaginations to flow freely. Design speculations can act as a catalyst for collectively redefining our relationship to reality.”

Anthony Dunne and Fiona Raby, Speculative Everything: Design, Fiction, and Social Dreaming

Dunne & Raby tend to lean toward provocation more than clarity (“sparking debate” is a stated goal, as opposed to “identifying problems and proposing solutions.”) Where to turn for a less shit-stirring description? Like many related fields there are lots of competing definitions and splintering. John Spacey has listed 26 types of design fiction over on Simplicable. But I am drawn to the more practical definition offered by the Making Tomorrow handbook.

Design Fiction proposes speculative scenarios that aim to stimulate commitment concerning existing and future issues.

Nicolas Minvielle et al., Making Tomorrow Collective

To me, that feels like a useful definition and clearly indicates a goal I can get behind. Your mileage may vary. (Hi, Tony! Hi, Fiona!)

Some examples should help.

Dunne & Raby once designed a mask for dogs called Spymaker, so that the lil’ scamps could help lead their owners to unsurveilled locations in an urban environment.

Julijonas Urbonas while at RCA conceived and designed a “euthanasia coaster” which would impart enough Gs on its passengers to kill them through cerebral hypoxia. While he designed its clothoid inversions and even built a simple physical model, the idea has been recapitulated in a number of other media, including the 3D rendering you see below.

This commercial example from Ericsson is a video with mild narrative about appliances having a limited “social life.”

Corporations create design fictions from time to time to illustrate their particular visions of the future. Such examples are on the verge of the space, since we can be sure those would not be released if they ran significantly counter to the corporation’s goals. They’re rarely about the “wicked” problems invoked above and tend more toward gee-whiz-ism, to coin a deroganym.

How does it differ from sci-fi?

Design Fiction often focuses on artifacts rather than narratives. The euthanasia coaster has no narrative beyond what you bring or apply to it, but I don’t think this lack of narrative is a requirement. For my money, the point of design fiction is focused on exploring the novum more than a particular narrative around the novum. What are its consequences? What are its causes? What kind of society would need to produce it and why? Who would use it and how? What would change? What would lead there and do we want to do that? Contrast Star Wars, which isn’t about the social implications of lightsabers as much as it is space opera about dynasties, light fascism, and the magic of friendship.

Adorable, ravenous friendship.

But, I don’t think there’s any need to consider something invalid as design fiction if it includes narrative. Some works, like Black Mirror, are clearly focused on their novae and their implications and raise all the questions above, but are told with characters and plots and all the usual things you’d expect to find.

So what’s “pop” design fiction?

As a point of clarification, in Korman’s original question, he asked after pop design fiction. I’m taking that not to mean the art movement in the 01950–60s, which Black Mirror isn’t, but rather “accessible” and “popular,” which Black Mirror most definitely is.

So not this, even though it’s also adorable. And ravenous.

What would distinguish other sci-fi works as design fiction?

So if sci-fi can be design fiction, what would we look for in a show to classify it as design fiction? It’s a sloppy science, of course, but here’s a first pass. A show can be said to be design fiction if it…

  • Includes a central novum…
  • …that is explored via the narrative: What are its consequences, direct and indirect?
  • Corollary: The story focuses on a primary novum, not a mish-mash of them. (Too many muddle the thought experiment.)
  • Corollary: The story focuses on characters who are most affected by the novum.
  • Its explorations include the personal and social.
  • It goes where the novum leads, avoiding narrative fiats that sully the thought experiment.
  • Bonus points if it provides illustrative contrasts: Different versions of the novum, characters using it in different ways, or the before and after.

With this stake in the ground, it probably strikes you that some subgenres lend themselves to design fiction and others do not. Anthology series, like Black Mirror, can focus on different characters, novae, and settings each episode. Series and franchises like Star Wars and Star Trek, in contrast, have narrative investments in characters and settings that make it harder to really explore novae on their own terms, but it is not impossible. The most recent season of Black Mirror is pointing at a unified diegesis and recurring characters, which means Brooker may be leaning the series away from design fiction. Meanwhile, I’d posit that the eponymous Game from Star Trek: The Next Generation S05E06 is an episode that acts as a design fiction. So it’s not cut-and-dry.

“It’s your turn. Play the game, Wil Wheaton.”

What makes this even more messy is that you are asking a subjective question, i.e. “Is this focused on its novae?”, or even “Does this intend to spur some commitment about the novae?”, which means second-guessing what you think the maker’s intent was. As I mentioned, it’s messy, and against the normal critical stance of this blog. But there are some examples that lean more toward yes than no.

Jurassic Park

Central novum: What if we use science to bring dinosaurs back to life?

Commitment: Heavy prudence and oversight for genetic sciences, especially if capitalists are doing the thing.

Hey, we’ve reviewed Jurassic Park on this very blog!

This example leads to two observations. First, the franchises that follow successful films are much less likely to be design fiction. I’d argue that every Jurassic X sequel has simply repeated the formula and not asked new questions about that novum. More run-from-the-teeth than do-we-dare?

Second is that big-budget movies are almost required to spend some narrative calories discussing the origin story of novae at the cost of exploring multiple consequences of the same. Anthology series are less likely to need to care about origins, so are a safer bet IMHO.

Minority Report

Central novum: What if we could predict crime? (Presuming Agatha is a stand-in for a regression algorithm and not a psychic drug-baby mutant.)

Commitment: Let’s be cautious about prediction software, especially as it intersects civil rights: It will never be perfect and the consequences are dire.

Blade Runner

Central novum: What if general artificial intelligence was made to look indistinguishable from humans, and kept as an oppressed class?

Commitment: Let’s not do any of that. From the design perspective: Keep AI on the canny rise.

Hey, I reviewed Blade Runner on this very blog!

Ex Machina

Central novum: Will we be able to box a self-interested general intelligence?

Commitment: No. It is folly to think so.

Colossus: The Forbin Project

Central novum: What if we deliberately prevented ourselves from pulling the plug on a superintelligence, and then asked it to end war?

Commitment: We must be extremely careful what we ask a superintelligence to do, how we ask it, and the safeguards we provide ourselves if we find out we messed it up.

Hey, I lovingly reviewed Colossus: The Forbin Project on this very blog!

Person of Interest

Central novum: What if we tried to box a good superintelligence?

Commitment: Heavy prudence and oversight for computer sciences, especially if governments are doing the thing.

Not reviewed, but it won an award for Untold AI

This is probably my favorite example. Even though it is a long-running series with recurring characters, I argue that the leads are all, narratively, highly derived from the novum, so it still counts strongly.

But are they pop?

Each of these is more-or-less accessible and mainstream, even if their actual popularity and interpretations vary wildly. So, yes, from that perspective.

Jurassic Park is at the time of writing the 10th highest-grossing sci-fi movie of all time. So if you agree that it is design fiction, it is the most pop of all. Sadly, that is the only property I’d call design fiction on the entire highest-grossing list.

So, depending on a whole lot of things (see…uh…above), the short answer to Mr. Korman’s original question is yes, with lots of ifs.

What others?

I am not an exhaustive encyclopedia of sci-fi, try though I may. Agree with the list above? What did I miss? If you comment with additions, be sure to list, as I did above, the novum and the commitment.

Spacesuits in Sci-fi: Wrapping up

If you recall from the first entry in this series of posts, the Spacesuits content was originally drafted as a chapter in the Make It So book, but had to be cut for length. Now here we are, six posts and 6,700 words later. So it was probably a good call on the part of the publisher. 🙂 But now that we’re here, what have we learned?

Let’s recap. Spacesuit interfaces have to…

Spacesuits are a particularly interesting example of the relationship between sci-fi and design because a tiny fraction of sci-fi audiences will ever experience being in one, yet many people have seen them being used in real-world circumstances. This gives them a unique representational anchor in sci-fi, and the survey reveals this. Sci-fi makers base their designs on the real-world surface style, with occasional additions or extensions depending on the fashionable technology of the time. These additions rarely make it to the real world because they’re often made without consideration of the real constraints of keeping a human alive in space. But they are still cool.

And the winner is…?

If I had to name the franchise that gets it right most often, it’s probably Star Trek. Keep in mind that this has been far from an exhaustive survey (“Yeah, like, where is The Expanse?” I hear me cry), and the Star Trek franchise is vast and decades old, with most of its stories set on spacecraft. Extravehicular activity is a natural fit for the shows, and there’s been lots of it, so if it were just a numbers game rather than a question of quality design, we would expect it to win anyway. But I’m not dismissing it on those grounds: the work done on Star Trek: Discovery has been beautiful, and the franchise has consistently showcased the most inspiring examples and most(ly) functional interfaces as well. If you’re looking for inspiration, maybe start there.

What lessons can we learn?

As a particular kind of wearables, spacesuit interfaces reinforce all the principles I originally outlined for Ideal Wearables way back in 2014. They must be…

  • Sartorial
  • Social
  • Easy to access and use
  • Tough to accidentally activate
  • Have apposite inputs and outputs

…all pushed through the harder constraints listed at the top of the article. We have some additional lessons about where to put interfaces on spacesuits given those constraints, but those seem pretty well tied to this domain and difficult to generalize. That is, unless climate change has us all donning environmental suits just to enjoy our own planet a few degrees Centigrade from now. Wait, I did not mean to go that dark. Even though climate change is a massive crisis and we should commit to halting it and reversing it if possible. (Hey, check out these cool tree-planting drones.)

Let’s instead focus on a mild prognostication. I expect that we’ll be seeing more sci-fi spacesuits in the near future, partly because space travel has been on a kick lately with the high-profile and branding-conscious missions of SpaceX. Just this week, Crew Dragon took four civilians on the first all-commercial flight into space. (Not the first civilians in space: according to Harvard professor Jonathan McDowell, that honor belongs to the Soyuz TMA-3 mission in 2003, but that was still a government operation.) For better or for worse, part of how SpaceX is making its name is by bringing a new, cool aesthetic to space travel.

So people are seeing spacesuits again (though, am I right…no extravehicular activities?), and that means they will be on the minds of studios and writers, who will give them their own fantastic spin, which will in turn inspire real-world designers, etc., etc. Illustrators and industrial designers are already posting some amazing speculative designs of late, and I look forward to more inspiring designs to come.

I think my spaceship knows which way to go

You may have noticed that this post comes an uncommonly long time after the prior post. I had cut down my publishing cadence at the start of the pandemic to once every other week because stress, and even that has been difficult to keep up. But now we are heading into fall, and the winter holidays and a cluster of family birthdays and whatnot usually keep me busy through March. Plus I’m about to start hosting a regular session with Ambition Group about AI Mastery for Design Leaders, and as a first-time curriculum, it’s going to demand much of me on top of my full-time job. (You didn’t think I did scifiinterfaces professionally, did you? This is a hobby.) And I’m making some baby steps in publishing my own sci-fi short stories. Keep an eye on Escape Pod and Dark Matter Magazine over the fall if you want to catch those. (I’ll almost certainly tweet about them, too.) I want to work on others.

Which is all to say that I’m on the verge of being overcommitted and burnt out, so I’m going to do myself a favor and take a break from posting here for a while. Sadly, I don’t have any guest posts in the works. Who would be crazy enough to critique sci-fi interfaces during a climate crisis, ongoing fascist movements, and a global pandemic?

I do have big plans for a major study of the narrative uses of sci-fi interfaces, which I hope to use time off in the winter holiday to conduct. That will probably be as huge as the Untold AI and the Gendered AI series. I have nascent notions of using that study as a last bit of material to collect into a 10-year retrospective follow-up to Make It So (let me know if that sounds appealing). And I’m committed to another round of Fritz awards for 2022. So more is coming, and I’ll be back before you know it.

But for a while, over and out, readers. And don’t forget while I’m gone…

Stop watching sci-fi. Start using it.

—Me

Sci-fi Spacesuits: Identification

Spacesuits are functional items, built largely identically to each other, adhering to engineering specifications rather than individualized fashion. A resulting problem is that it might be difficult to distinguish between multiple, similarly-sized individuals wearing the same suits. This visual identification problem might be small in routine situations:

  • (Inside the vehicle:) Which of these suits is mine?
  • What’s the body language of the person currently speaking on comms?
  • (With a large team performing a manual hull inspection:) Who is that approaching me? If it’s the Fleet Admiral I may need to stand and salute.

But it could quickly become vital in others:

  • Whose body is that, floating away into space?
  • Ensign Smith just announced they have a tachyon bomb in their suit. Which one is Ensign Smith?
  • Who is this on the security footage cutting the phlebotinum conduit?

There are a number of ways sci-fi has solved this problem.

Name tags

Especially in harder sci-fi shows, spacewalkers have a name tag on the suit. The type is often so small that you’d need to be quite close to read it, and a weird convention has these tags in all-capital letters, even though lower-case is easier to read, especially in low light and at a distance. And the tags are placed near the breast of the suit, so the spacewalker would also have to be facing you. So, all told, not that useful on actual extravehicular missions.

Faces

Screen sci-fi usually gets around the identification problem by having transparent visors. In B-movies and sci-fi illustrations from the 1950s and 60s, the fishbowl helmet was popular, but of course it offered little protection, little light control, and weird audio effects for the wearer. Blockbuster movies were mostly a little smarter about it.

1950s Sci-Fi illustration by Ed Emshwiller
c/o Diane Doniol-Valcroze

Seeing faces allows other spacewalkers/characters (and the audience) to recognize individuals and, to a lesser extent, how their faces synch with their voice and movement. People are generally good at reading the kinesics of faces, so there’s a solid rationale for trying to make transparency work.

Face + illumination

As of the 1970s, filmmakers began to add interior lights that illuminate the wearer’s face. This makes lighting them easier, but face illumination is problematic in the real world. If you illuminate the whole face including the eyes, then the spacewalker is partially blinded. If you illuminate the whole face but not the eyes, they get that whole eyeless-skull effect that makes them look super spooky. (Played to effect by director Scott and cinematographer Vanlint in Alien, see below.)

Identification aside: Transparent visors are problematic for other reasons. Permanently-and-perfectly transparent glass risks the spacewalker being damaged by infrared light or blinded by sudden exposure to nearby suns, or explosions, or engine exhaust ports, etc. etc. This is why NASA helmets have the gold layer on their visors: it lets in visible light and blocks nearly all infrared.

Astronaut Buzz Aldrin walks on the surface of the moon near the leg of the lunar module Eagle during the Apollo 11 mission.

Image Credit: NASA (cropped)

Only in 2001 does the survey show a visor with manually-adjustable translucency. You can imagine that this would be safer if it were automatic. Electronics can respond much faster than people, changing in near-real time to keep sudden environmental illumination within safe human ranges.

You can even imagine smarter visors that selectively dim regions (rather than the whole thing), to just block out, say, the nearby solar flare, or to expose the faces of two spacewalkers talking to each other, but I don’t see this in the survey. It’s mostly just transparency and hope nobody realizes these eyeballs would get fried.
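To make the automatic version concrete: a controller could simply clamp the visor’s transmittance so that the light reaching the eyes never exceeds a safe level. Here is a minimal sketch of that logic in Python; all the numbers, names, and thresholds are illustrative assumptions, not real suit specifications.

```python
# Hypothetical auto-dimming visor controller. Limits and names are
# illustrative assumptions, not from any real spacesuit spec.

SAFE_MAX_LUX = 10_000      # assumed comfortable upper bound at the eye
MIN_TRANSMITTANCE = 0.02   # visor never goes fully opaque
MAX_TRANSMITTANCE = 0.95   # glass is never perfectly clear anyway

def visor_transmittance(ambient_lux: float) -> float:
    """Pick a transmittance so that ambient_lux * t never exceeds the safe max."""
    if ambient_lux <= 0:
        return MAX_TRANSMITTANCE
    needed = SAFE_MAX_LUX / ambient_lux
    # Clamp to the visor's physical range.
    return max(MIN_TRANSMITTANCE, min(MAX_TRANSMITTANCE, needed))

print(visor_transmittance(500))        # dim cabin: visor stays clear
print(visor_transmittance(1_000_000))  # blinding glare: visor clamps way down
```

Run in a fast loop against a light sensor, this keeps sudden flares within human tolerance far faster than a hand on a dial could.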

So, though seeing faces helps solve some of the identification problem, transparent enclosures don’t make a lot of sense from a real-world perspective. But it’s immediate and emotionally rewarding for audiences to see the actors’ faces, and with easy cinegenic workarounds, I suspect identification-by-face is here in sci-fi for the long haul, at least until a majority of audiences experience spacewalking for themselves and realize how much of an artistic convention this is.

Color

Other shows have taken the notion of identification further, and distinguished wearers by color. Mission to Mars, Interstellar, and Stowaway did this similar to the way NASA does it, i.e. with colored bands around upper arms and sometimes thighs.

Destination Moon, 2001: A Space Odyssey, and Star Trek (2009) provided spacesuits in entirely different colors. (Star Trek even equipped the suits with matching parachutes, though for the pedantic, let’s acknowledge these were “just” upper-atmosphere suits.) The full-suit color certainly makes identification easier at a distance, but seems like it would be more expensive and introduce albedo differences between the suits.

One other note: if the visor is opaque and characters are only relying on the color for identification, it becomes easier for someone to don the suit and “impersonate” its usual wearer to commit spacewalking crimes. Oh. My. Zod. The phlebotinum conduit!

According to the Colour Blind Awareness organisation, colour blindness (color vision deficiency) affects approximately 1 in 12 men and 1 in 200 women worldwide, so color coding is not without its problems, and might need to be combined with bold patterns to be more broadly accessible.

What we don’t see

Heraldry

Blog-from-another-mog Project Rho tells us that books have suggested heraldry as spacesuit identifiers. And while it could be a device placed on the chest, like medieval suits of armor, it might be made larger, higher contrast, and wraparound, to be distinguishable from farther away.

Directional audio

Indirect, but if the soundscape inside the helmet can be directional (like personal surround sound), then different voices can come from the direction of each speaker, helping uniquely identify them by position. If two are close together and there are no others to be concerned about, their directions can be shifted to increase their spatial distinction. When no one is speaking, leitmotifs assigned to each spacewalker, with volumes corresponding to distance, could help maintain field awareness.
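The placement math for such positional audio is simple enough to sketch: derive an azimuth from the speaker’s position relative to the listener, and scale volume by distance. The function name, coordinate scheme, and falloff model below are all illustrative assumptions, not from any real comms system.

```python
# Hypothetical positional-comms sketch: each incoming voice is placed at
# the speaker's bearing and attenuated by distance. 2D for simplicity.
import math

def voice_placement(listener_xy, speaker_xy, reference_distance=10.0):
    """Return (azimuth in degrees, volume 0..1) for a speaker relative to a listener."""
    dx = speaker_xy[0] - listener_xy[0]
    dy = speaker_xy[1] - listener_xy[1]
    azimuth = math.degrees(math.atan2(dx, dy))  # 0 degrees = straight ahead
    distance = math.hypot(dx, dy)
    # Inverse-distance falloff, clamped so nearby voices aren't deafening.
    volume = min(1.0, reference_distance / max(distance, reference_distance))
    return azimuth, volume

# A crewmate 20 m straight ahead: centered voice at half volume.
print(voice_placement((0.0, 0.0), (0.0, 20.0)))
```

Feeding these values to a binaural renderer would give each crewmate a stable position in the helmet’s soundscape; nudging two near-identical azimuths apart is the “shifted directions” trick mentioned above.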

HUD Map

Gamers might expect a HUD map showing the environment, with labeled icons for each person.

Search

If the spacewalker can have private audio, shouldn’t she just be able to ask, “Who’s that?” while looking at someone and hear a reply or see a label on a HUD? It would also be very useful if a spacewalker could ask for lights to be illuminated on the exterior of another’s suit. Very useful if that someone is floating unconscious in space.

Mediated Reality Identification

Lastly, I didn’t see any mediated reality assists: augmented or virtual reality. Imagine a context-aware and person-aware heads-up display that labeled the people in sight. Technological identification could also incorporate in-suit biometrics to avoid the spacesuit-as-disguise problem. The helmet camera confirms that the face inside Sergeant McBeef’s suit is actually that dastardly Dr. Antagonist!

We could also imagine a helmet that is completely enclosed but virtually transparent. Retinal projectors would provide the appearance of other spacewalkers—from live cameras in their helmets—as if they had fishbowl helmets. Other information would fill the HUD depending on the context, but such labels would enable identification in a way that is more technology-forward and cinegenic. But, of course, all mediated solutions introduce layers of technology that add more potential points of failure, so it’s not a simple choice for the real world.

Oh, that’s right, he doesn’t do this professionally.

So, as you can read, there’s no slam-dunk solution that meets both cinegenic and real-world needs. Given that so much of our emotional experience is informed by the faces of actors, I expect to see transparent visors in sci-fi for the foreseeable future. But it’s ripe for innovation.

Sci-fi Spacesuits: Audio Comms


A special subset of spacesuit interfaces is the communication subsystem. I wrote a whole chapter about Communications in Make It So, but spacesuit comms bear special mention, since they’re usually used in close physical proximity but must still be mediated by technology, the channels for detailed control are clumsy and packed, and these communicators are often overseen by a mission control center of some sort. You’d think this is rich territory, but spoiler: there’s not a lot of variation to study.

Every single spacesuit in the survey has audio. This is so ubiquitous and accepted that, after 1950, no filmmaker has felt the need to explain it or show an interface for it. So you’d think that we’d see a lot of interactions.

Spacesuit communications in sci-fi tend to be many-to-many with no apparent means of control. Not even a push-to-mute if you sneezed into your mic. It’s as if the spacewalkers were in a group, merely standing near each other in air, chatting. No push-to-talk or volume control is seen. Communication with Mission Control is automatic. No audio cues are given to indicate distance, direction, or source of the sound, or to select a subset of recipients.

The one seeming exception to the many-to-many communication is seen in the reboot of Battlestar Galactica. As Boomer is operating a ship above a ground crew, shining a light down on them for visibility, she has the following conversation with Tyrol.

  • Tyrol
  • Raptor 478, this is DC-1, I have you in my sights.
  • Boomer
  • Copy that, DC-1. I have you in sight.
  • Tyrol
  • Understood.
  • Boomer
  • How’s it looking there? Can you tell what happened?
  • Tyrol
  • Lieutenant, don’t worry…about my team. I got things under control.
  • Boomer
  • Copy that, DC-1. I feel better knowing you’re on it.

Then, when her copilot gives her a look about what she has just said, she says curtly to him, “Watch the light, you’re off target.” In this exchange there is clear evidence that the copilot has heard the first conversation, but it appears that her comment is addressed to him alone and not for the others to hear. Additionally, we do not hear chatter among the ground crew during this exchange. Unfortunately, we never see any of the conversationalists touch a control that would give us an idea of how they switch between these modes. So, you know, still nothing.

More recent films, especially in the MCU, have shown all sorts of communication controlled by voice with the magic of General AI…pause for gif…


…but as I mention more and more, once you have a General AI in the picture, we leave the realm of critique-able interactions. Because an AI did it.

In short, sci-fi just doesn’t care about showing audio controls in spacesuits, and isn’t likely to start caring anytime soon. As always, if you know of something outside my survey, please mention it.

For reference, in the real world, a NASA astronaut has direct control over the volume of audio that she hears, using potentiometer volume controls. (Curiously the numbers on them are not backwards, unlike the rest of the controls.)

A spacewalker uses the COMM dial switch mode selector at the top of the DCM to select between three different frequencies of wireless communication, each of which broadcasts to each other and the vehicle. When an astronaut is on one of the first two channels, transmission is voice-activated. But a backup, “party line” channel requires push-to-talk, and this is what the push-to-talk control is for.
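A minimal way to picture that channel logic—two voice-activated channels plus a push-to-talk party line—is a toy state model. This is purely an illustrative sketch; the channel names and API here are my assumptions, not NASA’s.

```python
# Hypothetical model of the comm-mode behavior described above:
# two voice-activated (VOX) channels plus a push-to-talk "party line".
from dataclasses import dataclass

CHANNELS = {"A": "vox", "B": "vox", "party": "ptt"}  # illustrative names

@dataclass
class CommState:
    channel: str = "A"
    ptt_pressed: bool = False

    def transmitting(self, voice_detected: bool) -> bool:
        mode = CHANNELS[self.channel]
        if mode == "vox":
            return voice_detected   # transmission is voice-activated
        return self.ptt_pressed     # party line requires push-to-talk

comm = CommState(channel="party")
print(comm.transmitting(voice_detected=True))  # party line ignores voice alone
```

The point of the model is the asymmetry: on the primary channels an open mic is the default, while the backup channel stays silent until the wearer deliberately keys it.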

By default, all audio is broadcast to all other spacewalkers, the vehicle, and Mission Control. To speak privately, without Mission Control hearing, spacewalkers don’t have an engineered option. But if one of the radio frequency bands happens to be suffering a loss of signal to Mission Control, she can use this technological blind spot to talk with some degree of privacy.

Sci-fi Spacesuits: Moving around

Whatever it is, it ain’t going to construct, observe, or repair itself. In addition to protection and provision, suits must facilitate the reason the wearer has dared to go out into space in the first place.

One of the most basic tasks of extravehicular activity (EVA) is controlling where the wearer is positioned in space. The survey shows several types of mechanisms for this. First, if your EVA never needs you to leave the surface of the spaceship, you can go with mountaineering gear or sticky feet. (Or sticky hands.) We can think of maneuvering through space as similar to piloting a craft, but the outputs and interfaces have to be made wearable, like wearable control panels. We might also expect to see some tunnel in the sky displays to help with navigation. We’d also want to see some AI safeguard features, to return the spacewalker to safety when things go awry. (Narrator: We don’t.)

Mountaineering gear

In Stowaway (2021), astronauts undertake unplanned EVAs with carabiners and gear akin to what mountaineers use. This makes some sense, though even this equipment would need to be modified for use with astronauts’ thick gloves.

Stowaway (2021) Drs Kim and Levinson prepare to scale to the propellant tank.

Sticky feet (and hands)

Though it’s not extravehicular, I have to give a shout-out to 2001: A Space Odyssey (1968), where we see a flight attendant manage her position in microgravity with special shoes that adhere to the floor. It’s a lovely example of a competent Hand Wave. We don’t need to know how it works because it says, right there, “Grip shoes.” Done. Though props to the actress Heather Downham, who had to make up a funny walk to illustrate that it still isn’t like walking on earth.

2001: A Space Odyssey (1968)
Pan Am: “Thank god we invented the…you know, whatever shoes.”

With magnetic boots, seen in Destination Moon, the wearer simply walks around and manages the slight awkwardness of having to pull a foot up with extra force, and have it snap back down on its own.

Battlestar Galactica added magnetic handgrips to augment the control provided by magnetized boots. With them, Sergeant Mathias is able to crawl around the outside of an enemy vessel, inspecting it. While crawling, she holds grip bars mounted to circles that contain the magnets. A mechanism for turning the magnet off is not seen, but like these portable electric grabbers, it could be as simple as a thumb button.

Iron Man also had his Mark 50 suit form stabilizing suction cups before cutting a hole in the hull of the Q-Ship.

Avengers: Infinity War (2018)

In the electromagnetic version of boots, seen in Star Trek: First Contact, the wearer turns the magnets on with a control strapped to their thigh. Once on, the magnetization seems to be sensitive to the wearer’s walk, automatically lessening when the boot is lifted off. This gives the wearer something of a natural gait. The magnetism can be turned off again to be able to make microgravity maneuvers, such as dramatically leaping away from Borg minions.

Star Trek: Discovery also included this technology, but with what appears to be gestural activation and cool glowing red dots on the sides and backs of the heels. The back of each heel has a stack of red lights that count down to when they turn off as, I guess, a warning to anyone around that the wearer is about to be “air”borne.

Quick “gotcha” aside: neither Destination Moon nor Star Trek: First Contact bothers to explain how characters are meant to be able to kneel while wearing magnetized boots. Yet this very thing happens in both films.

Destination Moon (1950): Kneeling on the surface of the spaceship.
Star Trek: First Contact (1996): Worf rises from operating the maglock to defend himself.

Controlled Propellant

If your extravehicular task has you leaving the surface of the ship and moving around space, you likely need a controlled propellant. This is seen only a few times in the survey.

In the film Mission to Mars, the manned maneuvering unit, or MMU, is based loosely on NASA’s MMU. A nice thing about the device is that, unlike the other controlled-propellant interfaces, we can actually see some of the interaction and not just the effect. The interfaces differ subtly in that the Mission to Mars spacewalkers travel forward and backward by angling the handgrips forward and backward rather than with a joystick on an armrest. This seems like a closer mapping, but also more prone to error from accidental touching or bumping into something.

The plus side is an interface that is much more cinegenic, where the audience is more clearly able to see the cause and effect of the spacewalker’s interactions with the device.

If you have propellant in a Mohs 4 or 5 film, you might need to acknowledge that propellant is a limited resource. Over the course of the same (heartbreaking) scene shown above, we see an interface where one spacewalker monitors his fuel, and another where a spacewalker realizes that she has traveled as far as she can with her MMU and still return to safety.

Mission to Mars (2000): Woody sees that he’s out of fuel.

For those wondering, Michael Burnham’s flight to the mysterious signal in that pilot uses propellant, but is managed and monitored by controllers on Discovery, so it makes sense that we don’t see any maneuvering interfaces for her. We could dive in and review the interfaces the bridge crew uses (and try to map that onto a spacesuit), but we only get snippets of these screens and see no controls.

Iron Man’s suits employ some phlebotinum propellant that lasts forever, fits inside his tailored suit, and is powerful enough to achieve escape velocity.

Avengers: Infinity War (2018)

All in all, though sci-fi seems to understand the need for characters to move around in spacesuits, very little attention is given to the interfaces that enable it. The Mission to Mars MMU is the only one with explicit attention paid to it, and that’s quite derived from NASA models. It’s an opportunity for filmmakers, should the needs of the plot allow, to give this topic some attention.

Sci-fi Spacesuits: Interface Locations

A major concern of the design of spacesuits is basic usability and ergonomics. Given the heavy material needed in the suit for protection and the fact that the user is wearing a helmet, where does a designer put an interface so that it is usable?

Chest panels

Chest panels are those that require the wearer only to look down to manipulate them. These are in easy range of motion for the wearer’s hands. The main problem with this location is the hard trade-off between visibility and bulkiness.

Arm panels

Arm panels are those that are—brace yourself—mounted to the forearm. This placement is within easy reach, but it does mean that the arm on which the panel sits cannot be otherwise engaged, and it seems like it would be prone to accidental activation. It is a greater technological challenge than a chest panel to keep the components small and thin enough to be unobtrusive. It also presents interface challenges: squeezing information and controls into a very small, horizontal format. The survey shows only three arm panels.

The first is the numerical panel seen in 2001: A Space Odyssey (thanks for the catch, Josh!). It provides discrete and easy input, but no feedback. There are inter-button ridges to kind of prevent accidental activation, but they’re quite subtle and I’m not sure how effective they’d be.

2001: A Space Odyssey (1968)

The second is an oversimplified control panel seen in Star Trek: First Contact, where the output is simply the unlabeled lights underneath the buttons indicating system status.

The third is the mission computers seen on the forearms of the astronauts in Mission to Mars. These full color and nonrectangular displays feature rich, graphic mission information in real time, with textual information on the left and graphic information on the right. Input happens via hard buttons located around the periphery.

Side note: One nifty analog interface is the forearm mirror. This isn’t an invention of sci-fi, as it is actually on real world EVAs. It costs a lot of propellant or energy to turn a body around in space, but spacewalkers occasionally need to see what’s behind them and the interface on the chest. So spacesuits have mirrors on the forearm to enable a quick view with just arm movement. This was showcased twice in the movie Mission to Mars.

HUDs

The easiest place to see something is directly in front of your eyes, i.e. in a heads-up display, or HUD. HUDs are seen frequently in sci-fi, and increasingly in sci-fi spacesuits as well. One example is in Sunshine. This HUD provides a real-time view of each individual to whom the wearer is talking while out on an EVA, and a real-time visualization of dangerous solar winds.

These particular spacesuits are optimized for protection very close to the sun, and the visor is limited to a transparent band set near eye level. These spacewalkers couldn’t look down to see any interfaces on the suit itself, so the HUD makes a great deal of sense here.

Star Trek: Discovery’s pilot episode included a sequence that found Michael Burnham flying 2000 meters away from the U.S.S. Discovery to investigate a mysterious Macguffin. The HUD helped her with wayfinding, navigating, tracking time before lethal radiation exposure (a biological concern, see the prior post), and even doing a scan of things in her surroundings, most notably a Klingon warrior who appears wearing unfamiliar armor. Reference information sits on the periphery of Michael’s vision, but the augmentations appear mapped onto her view. (Noting this raises the same issues of binocular parallax seen in the Iron HUD.)

Iron Man’s Mark L armor was able to fly in space, and the Iron HUD came right along with it. Though not designed/built for space, it’s a general AI HUD assisting its spacewalker, so worth including in the sample.

Avengers: Infinity War (2018)

Aside from HUDs, what we see in the survey is similar to what exists in existing real-world extravehicular mobility units (EMUs), i.e. chest panels and arm panels.

Inputs illustrate paradigms

Physical controls range from the provincial switches and dials on the cigarette-girl foldout control panels of Destination Moon, to the simple and restrained numerical button panel of 2001, to the strangely unlabeled buttons of Star Trek: First Contact’s arm panels (above), to the ham-handed touch screens of Mission to Mars.

Destination Moon (1950)
2001: A Space Odyssey (1968)

As the pictures above reveal, the input panels reflect the familiar technology of the time of the creation of the movie or television show. The 1950s were still rooted in mechanistic paradigms, the late 1960s interfaces were electronic pushbutton, the 2000s had touch screens and miniaturized displays.

Real world interfaces

For comparison and reference, NASA’s EMU has a control panel on the front, called the Display and Control Module (DCM), where most of the controls for the EMU sit.

The image shows that these inputs are very different from the ones we see in film and television. The controls are large for easy manipulation even with thick gloves, distinct in type and location for confident identification, analog to allow for a minimum of failure points and in-field debugging and maintenance, and well-protected from accidental actuation with guards and deep recesses. The digital display faces up for the convenience of the spacewalker. The interface text is printed backwards so it can be read with the wrist mirror.

The outputs are fairly minimal. They consist of the pressure suit gauge, audio warnings, and the 12-character alphanumeric LCD panel at the top of the DCM. No HUD.

The gauge is mechanical and standard for its type. The audio warnings are a simple warbling tone when something’s awry. The LCD panel provides information about 16 different values that the spacewalker might need, including estimated time of oxygen remaining, actual volume of oxygen remaining, pressure (redundant to the gauge), battery voltage or amperage, and water temperature. To cycle up and down the list, she presses the Mode Selector Switch forward and backward. She can adjust the contrast using the Display Intensity Control potentiometer on the front of the DCM.
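The cycling behavior described above can be sketched as a tiny state machine: a fixed list of values that the Mode Selector Switch steps through forward and backward, with output constrained to the 12-character panel. Everything here (mode names, their order, the class name) is invented for illustration and not taken from NASA documentation.

```python
# Hypothetical sketch of the DCM's 12-character display cycling.
# Mode names and ordering are illustrative only.

DISPLAY_MODES = [
    "O2 TIME LEFT",   # estimated minutes of oxygen remaining
    "O2 VOLUME",      # actual volume of oxygen remaining
    "SUIT PRESS",     # redundant to the mechanical gauge
    "BATT VOLTS",
    "BATT AMPS",
    "H2O TEMP",
    # ...NASA's list runs to 16 values in total
]

class DCMDisplay:
    def __init__(self):
        self.index = 0

    def mode_switch(self, direction):
        """Press the Mode Selector Switch forward (+1) or backward (-1)."""
        self.index = (self.index + direction) % len(DISPLAY_MODES)
        return DISPLAY_MODES[self.index][:12]  # panel shows 12 characters

display = DCMDisplay()
print(display.mode_switch(+1))  # O2 VOLUME
print(display.mode_switch(-1))  # O2 TIME LEFT
```

Note that the list wraps around at both ends, so the spacewalker can always reach any value by pressing in either direction. (Whether the real switch wraps or stops at the ends is an assumption here.)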

A NASA image tweeted in 2019.

The DCMs referenced in the post are from older NASA documents. In more recent images on NASA’s social media, it looks like there have been significant redesigns to the DCM, but so far I haven’t seen details about the new suit’s controls. (Or about how that tiny thing can house all the displays and controls it needs to.)

Sci-fi Spacesuits: Biological needs

Spacesuits must support the biological functioning of the astronaut. There are probably damned fine psychological reasons not to show astronauts their own biometric data while on stressful extravehicular missions, but there is the issue of comfort. Even if temperature, pressure, humidity, and oxygen levels are kept within safe ranges by automatic features of the suit, there is still a need for comfort and control inside of that range. If the suit is to be worn a long time, there must be some accommodation for food, water, urination, and defecation. Additionally, the medical and psychological status of the wearer should be monitored to warn of stress states and emergencies.

Unfortunately, the survey doesn’t reveal any interfaces being used to control temperature, pressure, or oxygen levels. There are some for low oxygen level warnings and testing conditions outside the suit, but these are more outputs than interfaces where interactions take place.

There are also no nods to toilet necessities, though in fairness Hollywood eschews this topic a lot.

The one example of sustenance seen in the survey appears in Sunshine, when Captain Kaneda takes a sip from his drinking tube while performing a dangerous repair of the solar shields. This is the only food or drink seen in the survey, and it is a simple mechanical interface, held in place by material strength in such a way that he needs only to tilt his head to take a drink.

Similarly, in Sunshine, when Capa and Kaneda perform EVA to repair broken solar shields, Cassie tells Capa to relax because he is using up too much oxygen. We see a brief view of her bank of screens that include his biometrics.

Remote monitoring of people in spacesuits is common enough to be a trope, and it has already been discussed in the Medical chapter of Make It So; see that chapter for more on biometrics in sci-fi.

Crowe’s medical monitor in Aliens (1986).

There are some non-interface biological signals for observers. In the movie Alien, as the landing party investigates the xenomorph eggs, we can see that the suit outgases something like steam—slower than exhalations, but regular. Though not presented as such, this certainly confirms for any onlooker that the wearer is breathing and the suit is functioning.

Given that sci-fi technology glows, it is no surprise to see that lots and lots of spacesuits have glowing bits on the exterior. Though nothing yet in the survey tells us what these lights might be for, it stands to reason that one purpose might be as a simple and immediate line-of-sight status indicator. When things are glowing steadily, it means the life support functions are working smoothly. A blinking red alert on the surface of a spacesuit could draw attention to the individual with the problem, and make finding them easier.

Emergency deployment

One nifty thing that sci-fi can do (but we can’t yet in the real world) is deploy biology-protecting tech at the touch of a button. We see this in the Marvel Cinematic Universe with Star-Lord’s helmet.

If such tech were available, you’d imagine that it would have some smart sensors to know when it must automatically deploy (say, at a sudden loss of oxygen or dangerous impurities in the air), but we don’t see any evidence of this. Still, given this speculative tech, one can imagine it working for a whole spacesuit and not just a helmet. It might speed up scenes like this.
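The automatic-deployment idea can be sketched in a few lines. Everything here—the sensor readings, the threshold values, the function name—is invented for illustration; it comes from neither any film nor any real suit.

```python
# A minimal sketch of smart auto-deployment logic: the suit watches
# ambient oxygen and contaminant levels and deploys the instant either
# crosses a threshold. All numbers are illustrative assumptions.

O2_PARTIAL_PRESSURE_MIN_KPA = 16.0  # below this, hypoxia sets in quickly
TOXIN_PPM_MAX = 50.0                # arbitrary illustrative limit

def should_deploy(o2_kpa: float, toxin_ppm: float) -> bool:
    """Return True when conditions demand immediate deployment."""
    return o2_kpa < O2_PARTIAL_PRESSURE_MIN_KPA or toxin_ppm > TOXIN_PPM_MAX

# Normal shirtsleeve atmosphere: no deployment needed.
print(should_deploy(o2_kpa=21.0, toxin_ppm=0.1))   # False
# Sudden decompression: deploy now, faster than the wearer could react.
print(should_deploy(o2_kpa=2.0, toxin_ppm=0.1))    # True
```

The design point is that the check runs continuously and fires without human input, which is exactly why the wearer never needs a button for it.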

What do we see in the real world?

Are there real-world controls that sci-fi is missing? Let’s turn to NASA’s space suits to compare.

The Primary Life Support System (PLSS) is the complex spacesuit subsystem that provides life support to the astronaut, and biomedical telemetry back to control. Its main components are the closed-loop oxygen-ventilation system for cycling and recycling oxygen, the moisture (sweat and breath) removal system, and the feedwater system for cooling.

The only “biology” controls that the spacewalker has for these systems are a few on the Display and Control Module (DCM) on the front of the suit. They are the cooling control valve, the oxygen actuator slider, and the fan switch. Only the first is explicitly to control comfort. Other systems, such as pressure, are designed to maintain ideal conditions automatically. Other controls are used for contingency systems for when the automatic systems fail.

Hey, isn’t the text on this thing backwards? Yes, because astronauts can’t look down from inside their helmets, and must view these controls via a wrist mirror. More on this later.

The suit is insulated thoroughly enough that the astronaut’s own body heats the interior, even in complete shade. Because the astronaut’s body constantly adds heat, the suit must be cooled. To do this, the suit cycles water through a Liquid Cooling and Ventilation Garment, which has a fine network of tubes held close to the astronaut’s skin. Water flows through these tubes and past a sublimator that cools the water by exposure to space. The astronaut can increase or decrease the speed of this flow, and thereby how much her body is cooled, using the cooling control valve, a recessed radial valve with fixed positions between 0 (the hottest) and 10 (the coolest), located on the front of the Display and Control Module.
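The valve described above is, in effect, a manual input that maps a detented position to a coolant flow rate. A minimal sketch, assuming a simple linear mapping: only the 0–10 positions come from NASA’s documentation, and the maximum flow figure is a made-up placeholder.

```python
# Sketch of the cooling control valve: detent positions 0 (hottest)
# through 10 (coolest) mapped linearly to water flow through the
# Liquid Cooling and Ventilation Garment. Flow numbers are invented.

MAX_FLOW_LPM = 1.8  # assumed maximum water flow, liters per minute

def coolant_flow(valve_position: int) -> float:
    """Linear map from valve detent (0-10) to water flow rate."""
    if not 0 <= valve_position <= 10:
        raise ValueError("valve has fixed positions 0 through 10")
    return MAX_FLOW_LPM * valve_position / 10

print(coolant_flow(0))   # no cooling flow: hottest setting
print(coolant_flow(10))  # full flow: coolest setting
```

The fixed detents are a nice real-world touch: with thick gloves and no fine proprioception, discrete positions are far easier to set confidently than a continuous dial.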

The spacewalker does not have EVA access to her biometric data. Sensors measure oxygen consumption and electrocardiograph data and broadcast it to the Mission Control surgeon, who monitors it on her behalf. So whatever the reason is, if it’s good enough for NASA, it’s good enough for the movies.


Back to sci-fi

So, we do see temperature and oxygen controls on suits in the real world, which underscores their absence in sci-fi. But if there hasn’t been any narrative or plot reason for such things to appear in a story, we should not expect them.

Sci-fi Spacesuits: Protecting the Wearer from the Perils of Space

Space is incredibly inhospitable to life. It is a near-perfect vacuum, lacking air, pressure, and warmth. It is full of radiation that can poison us, light that can blind and burn us, and a darkness that can disorient us. If any hazardous chemicals such as rocket fuel have gotten loose, they need to be kept safely away. There are few of the ordinary spatial clues and tools that humans use to orient and control their position. There is free-floating debris that ranges from bullet-like micrometeorites to gas and rock planets that can pull us toward them to smash into their surfaces or burn in their atmospheres. There are astronomical bodies such as stars and black holes that can boil us or crush us into a singularity. And perhaps most terrifyingly, there is the very real possibility of drifting off into the expanse of space to asphyxiate, starve (though biology will be covered in another post), freeze, and/or go mad.

The survey shows that sci-fi has addressed most of these perils at one time or another.

Alien (1979): Kane’s visor is melted by a facehugger’s acid.

Interfaces

Despite the acknowledgment of all of these problems, the survey reveals only two interfaces related to spacesuit protection.

Battlestar Galactica (2004) handled radiation exposure with a simple, chemical output device. As CAG Lee Adama explains in “The Passage,” the badge, worn on the outside of the flight suit, slowly turns black with radiation exposure. When the badge turns completely black, a pilot is removed from duty for radiation treatment.

This is something of a stretch because it has little to do with the spacesuit itself, and it is strictly an output device. (Note that proper interaction requires human input and state changes.) The badge is not permanently attached to the suit, and it is used inside a spaceship while wearing a flight suit. The flight suit is meant to act as a very short-term extravehicular mobility unit (EMU), but it is not a spacesuit in the strict sense.

The other protection related interface is from 2001: A Space Odyssey. As Dr. Dave Bowman begins an extravehicular activity to inspect seemingly-faulty communications component AE-35, we see him touch one of the buttons on his left forearm panel. Moments later his visor changes from being transparent to being dark and protective.

We should expect to see few interfaces, but still…

As a quick and hopefully obvious critique, Bowman’s visor-darkening function shouldn’t have an interface at all. It should be automatic (not even agentive), since events can happen much faster than human response times. And now that we’ve said that part out loud, maybe it’s true that the protective features of a suit should all be automatic. Interfaces to pre-emptively switch them on or, for exceptional reasons, manually turn them off should be the rarity.

But it would be cool to see more protective features appear in sci-fi spacesuits. An onboard AI detects an incoming micrometeorite storm. Does the HUD show how much time is left? What are the wearer’s options? Can she work through scenarios of action? Can she merely speak which course of action she wants the suit to take? If a wearer is kicked free of the spaceship, the suit should have a homing feature. Think Doctor Strange’s Cloak of Levitation, but for astronauts.

As always, if you know of other examples not in the survey, please put them in the comments.