Fritzes 2026 bonus award: Best Robots

The Fritzes award honors the best interfaces in a full-length motion picture in the past year. Interfaces play a special role in our movie-going experience, and are a craft all their own that does not otherwise receive focused recognition.

The 2026 Award for Best Robots: The Electric State

The Fritzes has been tracking robots in cinema for a few years now. My favorite from 2025 is The Electric State. It’s a Netflix adaptation of Simon Stålenhag’s lush illustrated novel of the same name, and some of the robots we see in the film are lifted directly from his illustrations. So this award partly goes to you, Simon.

A futuristic landscape featuring a massive, rusted robot sculpture in an urban setting, with two figures standing in front of it. Cars are parked nearby under a bridge, with mountains visible in the background and a clear sky above.
A whimsical landscape featuring a large, rusty robot figure lying in a desert setting, surrounded by sparse vegetation and mountains in the background under a blue sky.

But in the movie they are animated and voiced, and there are new ones as well, so it is its own thing. It has Chris Pratt, who is problematic for offscreen reasons, and the script can be somewhat tropey, but the film has nifty worldbuilding. In the diegesis, sentient robots are seen as enemies of the state and exiled to form their own outcast cities. The design of the robots betrays their capitalist origins: mascots and advertisements, job-tailored bots. They are quirky and charming and come in all sizes, and they help critique a system that fully deserves it.

A futuristic desert scene featuring various robotic characters and a dilapidated building with the sign 'SEARS'. Numerous robots are depicted interacting and exploring the area, amidst rocky cliffs in the background.

Also check out: Superman!

James Gunn’s first D.C. movie brought Superman to life and added some things to its lore, such as: Kal-El has four service robots that support him in his Fortress of Solitude. They’re just called Superman Robots at first. Their chest plates identify them by number: 1, 4, 5, and 12. They sit safely clear of the uncanny valley, one-eyed and very much robotic, with charming banter. At the end of the movie, after the Fortress is rebuilt, number four dons a cape and chooses a name, and that name is Gary. Gary’s just a mensch “with no emotional capacity whatsoever”. (And that frankness is why I like Gary.)

Also check out: M3gan 2.0!

One of the smart things in the M3gan franchise’s diegesis is that AI and robotic housings are not tightly bound. An AI can slip out of a housing, replicate itself, find new embodiments on the network, manage multiple embodiments, coordinate disparate housings, etc. Over the course of the movie, we see M3gan and her nemesis AMELIA in many kinds of robot bodies in many states of development. My favorite is the cute little toy that Gemma puts M3gan into while figuring out whether the AI can be trusted.

A small, friendly-looking robot with a teal body and large expressive eyes, standing on a cluttered workspace.

This decoupling is an important difference in AI capabilities that doesn’t jibe with our anthropocentric models. Humans and animals can’t do that, so it’s something worth building literacy around.

Shout-out to the Act III robot design for AMELIA, which references Hajime Sorayama’s illustrations from the ’80s and ’90s. Because reference!

Also check out: Section 31!

Near the end of the film, Garrett finds a Droom doll in the hold of a garbage scow they’ve commandeered. The doll has sensors to detect its context, and actuators to move the arms, head, and mouth. Its three eyes can illuminate. It has speech generation and, as we discover, general reasoning capabilities. When Garrett first finds it, it says, “Hi there! I’m so glad you found me!” It suggests play time with, “Shall we do something fun together?” and spins its head around, whipping its indigo-colored hair in circles.

Garrett pours acid on its volatile power source to turn it into a bomb, and it begins to malfunction, uttering child-friendly things like “We can be friends forever” alongside dark things like “We’re all gonna die! We’re all gonna die!” It is then jettisoned from the scow to explode in space and destroy a pursuing ship.

The conclusion that “we’re all gonna die” is immediately true in the diegesis, not just in the morbid, general sense. But reaching this conclusion depends not just on context awareness but on general causal reasoning: my decaying battery is going to explode and destroy everything and everyone around it, so I’m going to shout that fact. Note that it does not actually warn its owner to flee, which it should, but we can chalk that up to the malfunction. It hints that the Droom are a species with vast technological resources but troublingly weak risk assessment. All from a tiny little robot with mere seconds of screen time.

Next up: The best assistants of 2025 (currently scheduled for 1 May 2026)

Fritzes 2026: Best Narrative

Today we’ll be covering Best Narrative. These movies’ interfaces blow us away with evocative visuals and the richness of their future vision. They engross us in the story world by being spectacular.

The 2026 Award goes to: Elio

Pixar consistently puts great thought into their animated interfaces, and Elio is no different. The little wearable personal devices that help the different intergalactic species all share a space are so simple, and provide both a bit of worldbuilding and moments of comedy. The incomprehensible alien spaceship controls are a plot-critical, candy-colored, glowing hoot (and reminiscent of the Pixar short Lifted). I loved the lemniscate-shaped AI encyclopedia that Elio consults when preparing for his negotiations. We should be able to talk to Wikipedia, and not just read its articles. (Though I wish the entries were more than just text and an image.) Also, this film has the only example I’ve seen where one character acts as an environmental suit for another character (not pictured, but you know the scene).

Also check out: Mickey 17

It’s a dark world where the hoarding class has made the working class so desperate that some people have to agree to be cloned for critical tasks that are likely death sentences. The interfaces in Mickey 17 help sell that very world, and even the ways that some folks use that same tech to eke out a little naughty joy amongst the drudgery. (With echoes of a similarly flirty interface from Starship Troopers.)

Also check out: Fantastic Four: First Steps

Marvel was once a mainstay for interfaces to study, but they’ve pointed their camera increasingly away from interfaces of late. So I was delighted to see Fantastic Four: First Steps bring to life interfaces from Jack Kirby’s Silver Age Fantastic Four. I don’t know if it was CGI, but I swear the giant, spherical quadrilateral screens are actual giant CRTs, right down to the blurriness and chromatic aberration. If that’s CGI, it’s great attention to detail from the reference material. All the spherical displays!

The “big” award in the Fritzes is Best Interface, but to amp up the anticipation, let’s look at some of the idiosyncratic awards from 2025 first.

Next up: The best comedy-horror interface

Fritzes 2026 Best Believable

Today we’ll be covering Best Believable. These movies’ interfaces adhere to solid human-computer interaction principles and believable interactions. They engage us in the story world by being convincing.

The 2026 Award goes to: The Running Man

This second adaptation of Stephen King’s novel knocks it out of the park with its plot-central interfaces: the runner cuff and R-Cam box, the hideous sousveillance phone app for “fans”, the service design of the “free-v” show, and the in-home snitch interfaces. They lean toward the narrative end (missing a few things real-world counterparts would need), but all help articulate this dystopian world and the circumstances that drive the action. Moreover, I feel quite certain that not making good real-world models of these horrible things is the right thing to do, especially given *gestures vaguely at the kakistocracy*.

On top of that it also has lots of awesome everyday interfaces, and it takes a level of commitment on the part of the filmmakers to go that deep in the worldbuilding. There’s a videophone interface with shades of Blade Runner. There’s a mailbox that signals its readiness and lifts off immediately after receiving a letter. (Though I would have flipped those red and green colors, so red meant “don’t put mail in here” and green meant “ready to receive”, but my invitation was lost in the mail.) The fare interfaces in the taxi. The self-driving interface of the citizen car. The piloting interfaces aboard the network plane. It’s all uncluttered, straightforward, and believable. Really well done, really well presented, and that’s hard to do in intense-action movies.

Also check out: War of the Worlds (2025) 

It got universally panned. Fair enough: neither ubiquitous government surveillance nor the current DHS bears valorization, and the virus-but-it’s-digital twist was already done. But I am impressed that this take on the classic Wells story is told almost entirely through interfaces, and each of them is detailed and mostly realistic. The editing around the interfaces can be dizzying, and I wondered why William Radford had to do so much digital hunting at the beginning when an assistant should have been guiding his attention. Still, it’s impressive to bring that tale to life mostly through this unsung medium.

Also check out: Companion

With soft echoes of the interfaces in Westworld (2016), the interfaces in Companion control android and gynoid companions. (Yes, that term is deliberately coy.) They are clean and simple, which underscores the robots’ horror that they are under that much control by their owners.

My hackles are raised by “Intelligence” being a single slider. Intelligence is much more complicated than that, and the notion that it’s a single scalar variable has done a lot of damage over time. Even a little expando control would have pointed at the idea that we’re looking at a simplification. I also wish they’d provided a live preview of the eye color: even in its intended use, of an owner controlling their companion’s eye color, this control has them glancing up to see the effect and then back down again to adjust, which is not a satisfying feedback loop. I use this very control as an example of a “plan” assistant in my new book. Hey, all of Hollywood: buy it!

Next up: The Best Narrative interfaces from 2025

Fritzes 2026, an intro

The Fritzes award honors the best interfaces in a full-length motion picture in the past year. Interfaces play a special role in our movie-going experience, and are a craft all their own that does not otherwise receive focused recognition. (Looking at you, Academy.) Awards are given for Best Believable, Best Narrative, and Best Interfaces (overall). Some years I give awards and shout-outs to other interesting trends or interfaces I spot along the way. This year I’ll do that, too.

History (still) unfolding note: Here in my home country we are still in the throes of Epstein-class fascism that amounts to a cartoonishly incompetent, crimes-against-humanity distraction-war. We are obligated to root out and overcome these forces. But we can’t be “on” 24/7, and sometimes the best thing we can do in these circumstances is resist and thrive. So despite the daily horrors, for when you’re done protesting and voting and resisting, I present this minor distraction, with full knowledge that there are things of orders of magnitude more importance going on. It is not meant to normalize the kakistocracy.

Last year surprised me with the number of quality interfaces in sci-fi. I keep a long note on my phone across the year as I see shows, yet despite that very concrete memory anchor, when I started thinking through the complete set for 2025, I had a vague sense that there weren’t that many. Once I actually looked, I saw I was wrong. There are a lot, and some really good ones. I’ll save further comments on the whole year for the wrap-up post.

MASSIVE SPOILERS AHEAD

Major spoilers in the days and weeks ahead, as I’ll be posting these in parts. Today, a pre-award shout-out to interfaces from long-format shows.

Pre-award shout out: Series!

Long-form formats like TV shows require a lot more of me to give their interfaces their due: more watching, more capturing, more analysis. But I do watch some shows, and there’s some great, great stuff happening. Maybe I should start an Emmy-esque award series, but that takes time I do not have. So as a simple shout-out, let me name a few you might want to check out.

Check out Alien Earth!

Its interfaces work within the palette of the existing movies and the genre while bringing something new to the franchise.

Check out Murderbot!

Check out their beautifully controlled palette (light gray and orange as keystone colors are just gorgeous), and what look like deeply considered interfaces throughout.

Check out Pluribus!

It’s much more of an abstract conversation, but the show is quite smart about the interfaces between the Unum (my term for the hive mind) and the free-willed. (Though come on, surely they could shorten that voice mail message after her first couple of calls.)

There are certainly some shows I’ve missed, since I don’t have the time to survey all the TV shows, much less in their entirety. Sorry if I missed your favorites; leave a comment below if there’s a series with great interfaces. As noted, though, the Fritzes are about movies, so I’ll say so long to TV for now.

Previous awards: [2021] [2022] [2023] [2024] [2025]

Next up: We’ll move on to movies and the Best Believable interfaces from 2025

Comparing Sci-Fi HUDs in 2024 Movies

As in previous years, in preparation for awarding the Fritzes, I watched as many sci-fi movies as I could find across 2024. One thing that stuck out to me was the number of heads-up displays (HUDs) across these movies. There were a lot of them. So in advance of the awards, let’s look at and compare them. (Note that the movies included here are not necessarily nominees for a Fritz award.)

I usually introduce the plot of every movie before I talk about it, since it provides some context for understanding the interface. That will happen in the final Fritzes post, so I’m going to skip it here. Still, it’s only fair to say there will be some spoilers as I describe these.

If you read Chapter 8 of Make It So: Interaction Lessons from Science Fiction, you’ll recall that I’d identified four categories of augmentation.

  1. Sensor displays
  2. Location awareness
  3. Context awareness (objects, people)
  4. Goal awareness

These four categories are presented in increasing levels of sophistication. Let’s use them to investigate and compare five primary examples from 2024, in order of their functional sophistication.

Dune 2

Lady Margot Fenring looks through augmented opera glasses at Feyd-Rautha in the arena. Dune 2 (2024).

True to the minimalism that permeates much of the film’s interfaces, the AR of this device has a rounded-rectangle frame from which hangs a measure of angular degrees on the right. There are a few ticks across the center of the screen (not visible in this particular screenshot). There is a row of blue characters across the bottom center. I can’t read Harkonnen, and though the characters change, I can’t quite decipher what most of them mean. But it does seem the leftmost character indicates the azimuth and the rightmost the angular altitude of the glasses. Given the authoritarian nature of this House, it would make sense to have some augmentation naming the royal figures in view, but I think it’s just a sensor display, which leaves the user with a lot of work to figure out how to use that information.

You might think this indicates some failing of the writer’s or FUI designers’ imagination. However, an important part of the history of Dune is a catastrophic conflict known as the Butlerian Jihad. This conflict involved devastating, large-scale wars against intelligent machines. As a result, machines with any degree of intelligence are considered sacrilege. So it’s not an oversight, but as a result, we can’t look to this as a model for how we might handle more sophisticated augmentations.

Alien: Romulus

Tyler teaches Rain how to operate a weapon aboard the Renaissance. Alien: Romulus (2024)

A little past halfway through the movie, the protagonists finally get their hands on some weapons. In a fan-service scene similar to one between Ripley and Hicks from Aliens (1986), Tyler shows Rain how to hold an FAA44 pulse rifle. He also teaches her how to operate it. The “AA” stands for “aiming assist”, a kind of object awareness. (Tyler asserts this is what the colonial marines used, which kind of retroactively saps their badassery, but let’s move on.) Tyler taps a small display on the user-facing rear sight, and a white-on-red display illuminates. It shows a low-res video of motion happening before it. A square reticle with crosshairs shows where the weapon will hit. A label at the top indicates distance. A radar sweep at the bottom indicates movement in 360° plan view, a sensor display.

When Rain pulls the trigger halfway, the weapon quickly swings to aim at the target. There is no indication of how it would differentiate between multiple targets. It’s also unclear how Rain told it that the object in the crosshairs earlier is the one she wants it to track now, or how she might identify a friendly to avoid. Red is a smart choice for low-light situations, as it’s known not to interfere with night vision. And the display is elegantly free of flourishes and fuigetry.

I’m not sure the halfway-trigger is the right activation mechanism. Yes, it allows the shooter to maintain a proper hold and remain ready with the weapon, and allows them not to have to look at the display to gain its assistance, but it also requires them to be in a calm, stable circumstance that allows for fine motor control. Does this mean that in very urgent, chaotic situations, users are just left to their own devices? Seems questionable.

Alien: Romulus is beholden to the handful of movies in the franchise that preceded it. Part of the challenge for its designers is to stay recognizably a part of the body of work that was established in 1979 while offering us something new. This weapon HUD stays visually simple, like the interfaces from the original two movies. It narratively explains how a civilian colonist with no weapons training can successfully defend herself against a full-frontal assault by a dozen of this universe’s most aggressive and effective killers. However, it leaves enough unexplained that it doesn’t really serve as a useful model.

The Wild Robot

Roz examines an abandoned egg she finds. The Wild Robot (2024)

HUD displays of artificially intelligent robots are always difficult to analyze. It’s hard to determine what’s an augmentation, loosely defined here as an overlay on some datastream created for a user’s benefit but explicitly not by that user, as opposed to a visualization of the AI’s own thoughts as they happen. I’d much rather analyze these as augmentation provided for Roz, but that framing just doesn’t hold up to scrutiny. What we see in this film are visualizations of Roz’ thoughts.

In the HUD, there is an unchanging frame around the outside. Static cyan circuit lines extend to the edge. (In the main image above, the screen-green is an anomaly.) A sphere rotates in the upper left, unconnected to anything. A hexagonal grid on the left has hexes that illuminate and blink, unconnected to anything; the grid itself moves, unrelated to anything. All of this is fuigetry that neither conveys information nor provides utility.

Inside that frame, we see Roz’ visualized thinking across many scenes.

  • Locus of attention—Many times we see a reticle indicating where she’s focused, oftentimes with additional callout details written in robot-script.
  • “Customer” recognition—(pictured) Since it happens early in the film, you might think this is a goofy error: the potential customer she has recognized is a crab. But later in the film, Roz learns the language common to the animals of the island. All the animals display a human-like intelligence, so it’s completely within the realm of possibility that this little blue crustacean could be her customer. Though why that customer needed a volumetric wireframe augmentation is very unclear.
  • X-ray vision—While looking around for a customer, she happens upon an egg. The edge detection indicates her attention. Then she performs scans that reveal the growing chick inside and a vital signs display.
  • Damage report—After being attacked by a bear, Roz does an internal damage check and she notes the damage on screen.
  • Escape alert—(pictured) When a big wave approaches the shore on which she is standing, Roz estimates the height of the wave to be five times her height. Her panic expresses itself in a red tint around the outside edge.
  • Project management—Roz adopts Brightbill and undertakes the mission to mother him—specifically to teach him to eat, swim, and fly. As she successfully teaches him each of these things, she checks it off by updating one of three graphics that represent the topics.
  • Language acquisition—(pictured) Of all the AR in this movie, this scene frustrates me the most. There is a sequence in which Roz goes torpid to focus on learning the animal language. Her eyes are open the entire time she captures samples and analyzes them. The AR shows word bubbles associated with individual animal utterances. At first those bubbles are filled with cyan-colored robo-ese script. Over the course of processing a year’s worth of samples, individual characters in the utterances are slowly replaced with bold, green, Latin characters. This display kind of conveys the story beat of “she’s figuring out the language,” but it befits cryptography much more than the acquisition of a new language.

If these were augmented reality, I’d have a lot of questions about why it wasn’t helping her more than it does. It might seem odd to think an AI might have another AI helping it, but humans have loads of systems that operate without explicit conscious thought, like preattentive processing, all the functions of our autonomic nervous system, sensory filtering, and recall, just to name a few. So I can imagine it would be a fine model for AI-supporting-AI.

Since it’s not augmented reality, it doesn’t really act as a model for real world designs except perhaps for its visual styling.

Borderlands

Claptrap is a little one-wheel robot that accompanies Lilith through her adventures on and around Pandora. We see things through his POV several times.

Claptrap sizes up Lilith from afar. Borderlands (2024).

When Claptrap first sees Lilith, we see it through his HUD. Like Roz’ POV display in The Wild Robot, the outside edge of this view has a fixed set of lines and greebles that don’t change, not even to act as a sensor display. I wish those lines had some relationship to his viewport, but that’s just a round lens, and the lines are vaguely like the edges of a gear.

Scrolling up from the bottom left is an impressive set of textual data. It shows that a DNA match has been made (remotely‽ What kind of resolution is Claptrap’s CCD?) and some data about Lilith from what I presume is a criminal justice data feed: Name and brief physical description. It’s person awareness.

Below that are readouts for programmed directive and possible directive tasks. They’re funny if you know the character. Tasks include “Supply a never-ending stream of hilarious jokes and one-liners to lighten the mood in tense situations” and “Distract enemies during combat. Prepare the Claptrap dance of confusion!” I also really like the last one, “Take the bullets while others focus on being heroic.” It both foreshadows a later scene and touches on the problem raised with Dr. Strange’s Cloak of Levitation: How do our assistants let us be heroes?

At the bottom is the label “HYPERION 09 U1.2”, which I think might be location awareness? The suffix changes once they get near the vault. Hyperion is a faction in the game. Not certain what it means in this context.

When driving in a chase sequence, his HUD gives him a warning about a column he should avoid. It’s not a great signal: it draws his attention but then essentially says “Good luck with that.” He has to figure out what object it refers to. (The motion tracking, admittedly, is a big clue.) And the label is not under the icon; it’s at the bottom left. If this were for a human, it would add a saccade to what needs to be a near-instantaneous feedback loop. Shouldn’t it be an outline or color overlay to make it wildly clear what and where the obstacle is? And maybe some augmentation on how to avoid it, like an arrow pointing right? As we see in a later scene (below), the HUD does have object detection and object highlighting. There it’s used to find a plot-critical clue. It’s just oddly not used here, you know, when the passengers’ lives are at risk.

When the group goes underground in search of the key to the Vault, Claptrap finds himself face to face with a gang of Psychos. The augmentation includes little animated red icons above the Psychos. Big Red Text summarizes “DANGER LEVEL: HIGH” across the middle, so you might think it’s demonstrating goal and context awareness. But Claptrap happens to be nigh-invulnerable, as we see moments later when he takes a thousand Psycho bullets without a scratch. In context, there’s no real danger. So… hold up. Who’s this interface for, then? Is it really aware of context?

When they visit Lilith’s childhood home, Claptrap finds a scrap of paper with a plot-critical drawing on it. The HUD shows a green outline around the paper. Text in the lower right tracks a “GARBAGE CATALOG” of objects in view with comments, “A PSYCHO WOULDN’T TOUCH THAT”, “LIFE-CHOICE QUESTIONING TRASH”, “VAULT HUNTER THROWBACK TRASH”. This interface gives a bit of comedy and leads to the Big Clue, but raises questions about consistency. It seems the HUDs in this film are narrativist.

In the movie, there are other HUDs like this one, for the Crimson Lance villains. They fly their hover-vehicles using them, but we don’t get nearly enough time to tease the parts apart.

Atlas

The HUD in Atlas appears when the titular character is strapped into an ARC9 mech suit, which has its own AGI named Smith. Some of the augmentations are communications between Smith and Atlas, but most are augmentations of the view before her. The viewport from the pilot’s seat is wide, and the augmentations appear there.

Atlas asks Smith to display the user manuals. Atlas (2024)

On the way to evil android Harlan’s base, we see the frame of the HUD has azimuth and altitude indicators near the edge. There are a few functionless flourishes, like arcs at the left and right edges. Later we see object and person recognition (in this case, an android terrorist, Casca Decius). When Smith confirms they are hostile, the square reticles go from cyan to red, demonstrating context awareness.

Over the course of the movie, Atlas resists Smith’s call to “sync” with him. At Harlan’s base, she is separated from the ARC9 unit for a while. But once she admits her past connection to Harlan, she and Smith become fully synced. She is reunited with the ARC9 unit, and its features fully unlock.

As they tear through the base to stop the launch of some humanity-destroying warheads, they meet resistance from Harlan’s android army. This time the HUD wholly color codes the scene, making it extremely clear where the combatants are amongst the architecture.

Overlays indicate the highest priority combatants that, I suppose, might impede progress. A dashed arrow stretches through the scene indicating the route they must take to get to their goal. It focuses Atlas on their goal and obstacles, helping her decision-making around prioritization. It’s got rich goal awareness and works hard to proactively assist its user.

Despite being contrasting colors, they are well controlled so as not to vibrate. You might think the luminance of the combatants and architecture should be flipped, but the ARC9 is bulletproof, so there’s no real danger from the gunfire. (Contrast Claptrap’s fake danger warning, above.) Saving humanity is the higher priority. So the brightest (yellow) means “do this”, the second brightest (cyan) means “through this”, and the darkest (red) means “there will be some nuisances en route.” The luminance is where it should be.

In the climactic fight with Harlan, the HUD even displays a predictive augmentation, illustrating where the fast-moving villain is likely to be when Atlas’ attacks land. This crucial augmentation helps her defeat the villain and save the day. I don’t think I’ve seen predictive augmentation outside of video games before.


If I were giving out an award for best HUD of 2024, Atlas would get it. It has the most fully imagined HUD assistance of the year, consistently and engagingly styled. If you are involved with modern design or the design of sci-fi interfaces, I highly recommend you check it out.

Stay tuned for the full Fritz awards, coming later this year.

Lessons in instrument design from Star Trek

by S. Astrid Bin 

Editor’s Note: Longtime fans of this site may be familiar with its “tag line,” “Stop watching sci-fi. Start using it.” So I was thrilled when a friend told me they had seen Astrid present how she had made an instrument from a Star Trek episode real! Please welcome Astrid as she tells us about the journey and lessons learned from making something from a favorite sci-fi show real. —Christopher

I’ve been watching Star Trek for as long as I can remember. Though it’s always been in the cultural air, it wasn’t until March 2020—when we were all stuck at home with Netflix and nothing else to do—that I watched all of it from the beginning.

Discovering Trek Instruments

I’m a designer and music researcher, and I specialise in interfaces for music. When I started this Great Rewatch with my husband (who is an enormous Trek fan, so nothing pleased him more), I started noting every musical instrument I saw. What grabbed me was that they were so different from the instruments I write about, design, make, and look at, because none of these instruments, you know, actually worked. They were pure speculation, free even of the conventions of the last couple of decades, since computers became small and powerful enough that digital musical instruments started to become a common thing on Kickstarter. I got excited every time I saw a new one.

What struck me the most about these instruments is that how they worked never seemed to enter the mind of the person who dreamed them up. This sure was a departure for me, as I’ve spent more than ten years designing instruments and worrying about the subtleties of sensors, signal processing, power requirements, material response, fabrication techniques, sound design, and the countless other factors that come into play when you make novel digital musical instruments. The instruments in Star Trek struck me as anarchic, because it was clear the designers didn’t consider at all how they would work, or, if they did, they just weren’t concerned. Some examples: tiny instruments make enormous sounds; instruments are “telepathic”; things resonate in defiance of the laws of physics. Some basic sound design is tossed in at the end, and bam, job done.

Some previous instrument design projects. From left: Moai (electronic percussion), Keppi (electronic percussion), Gliss (synth module interaction, as part of the Bela.io team)

I couldn’t get over how different this was to the design process I was used to. Of course, this is because the people designing these instruments weren’t making “musical instruments” the way we know them, as functional cultural objects that produce sound of some kind. Rather, Trek instruments are storytelling devices, alluring objects that have a narrative and character function, and the sound they make and how they might work is completely secondary. These instruments have a number of storytelling purposes, but most of all they serve to show that alien civilisations are as complex, creative, and culturally sophisticated as human ones.

This was striking, because I was used to the opposite: so often, the technical aspects of an instrument—and there are many, from synthesis to sensors—somehow become the most significant factor determining an instrument’s final form.

The Aldean Instrument

There was one instrument that especially intrigued me, the “unnamed Aldean instrument” from Season 1, Episode 16 of Star Trek: The Next Generation, “When the Bough Breaks”. This instrument is a light-up disc, played by laying hands on it, that translates the player’s thoughts to sound. In this episode the children of the Enterprise are kidnapped by a race of people who can’t reproduce (spoiler alert: it was an environmental toxin, they’re fine now) and the children are distributed among various families. One girl is sent to a family of very kind musicians, and the grandfather teaches her to play this instrument. When she puts her hands on it, lays her fingers over the edge, and is very calm, it plays some twinkly noise, but then she gets anxious when she remembers she’s been kidnapped, and it makes a burst of horrible noise.

[If you have a subscription to Paramount, you can see the episode here. —Ed.]

This instrument was fascinating for a lot of reasons. It looked so cool with the light-up sides and round shape, and it was only on screen for about four tantalising seconds. Unlike other instruments that were a bit ridiculous, I kept thinking about this one because it was uniquely beautiful, and it seemed like a lot of thought went into it.

I researched the designers of Trek instruments and this instrument was the only one that had a design credit: Andrew Probert. Andrew is a prolific production designer who’s worked mainly in science fiction, and he’s been active for decades, designing everything from the bridge of the Enterprise to the DeLorean in Back to the Future. He’s still working, his work is fantastic, and he has a website, so I emailed him and asked what he could tell me about the design process.

He got back to me straight away and said he couldn’t remember anything about it, but he dug out his production sketch for me:

Courtesy of Andrew Probert, https://probert.artstation.com/

The sketch was so gloriously beautiful that I couldn’t resist building it. I had so many questions that could only be answered by bringing it into reality: How would I make it work like it did in the show? How would I make it come alive slowly, and require calmness? How was I going to make that shape? Wait, this thing is supposed to translate moods, what does that even mean? How was I going to achieve the function and presence that this instrument had in the show, and what would I learn?

Building the Aldean Instrument

Translating moods

When I discussed this project with people, the question I got asked most often was “So how are you going to make it read someone’s mind?”

While the instrument doesn’t read minds, the idea of translating moods gave me pause, and eventually led me to affective computing, an area of computing originated by a woman named—brace yourself—Rosalind Picard. Picard says that affective computing refers to computing that relates to, arises from, or deliberately impacts emotions.

Affective computing considers two variable and intersecting factors: Arousal (on a scale of “inactive” to “active”), and valence (on a scale from “unpleasant” to “pleasant”). A lot of research has been done on how various emotions fall into this two-dimensional space, and how emotional states can be inferred by sensing these two factors.

Image by Patricia Bota, 2019
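As a toy illustration of that two-dimensional space, here is a minimal sketch that maps a (valence, arousal) reading to a rough emotional region. The function name and quadrant labels are my own inventions for the example, not part of Picard’s framework:

```python
# Toy classifier for the valence/arousal space of affective computing.
# Quadrant labels are rough illustrative summaries, not a formal model.

def quadrant(valence: float, arousal: float) -> str:
    """Map a (valence, arousal) pair, each in [-1, 1], to a rough emotional region."""
    if arousal >= 0:
        return "excited/happy" if valence >= 0 else "angry/anxious"
    return "calm/content" if valence >= 0 else "sad/bored"
```

Real affective-computing systems infer these two values from physiological signals; the hard part is the sensing, not this final lookup.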

I realised that, to make this instrument work the way it did in the show, the valence/arousal state it needed to sense was much simpler. In the show, the little girl is calm (and the instrument plays some sparkly sound), and then she’s not (and the instrument emits a burst of noise). If this instrument just sensed arousal through how much it was being moved and valence through how hard it was being gripped, that would create an interaction space that still has a lot of possibility.

Playing the instrument requires calmness. For arousal, I could sense how much the player was moving around with an accelerometer, by calculating quantity of motion; if the instrument was moved suddenly or violently, it could make a burst of noise. For valence—pleasantness to unpleasantness—I could sense how hard the person was gripping the instrument using a Trill Bar sensor. The Trill Bar can sense up to five individual touches, as well as the size of those touches (in other words, how hard those fingers are pressing). 
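A minimal sketch of this two-sensor logic, with invented names and thresholds (the real instrument runs on a Bela Mini, not Python):

```python
# Hedged sketch of the two-axis sensing scheme. The thresholds and function
# names are invented for illustration only.

def quantity_of_motion(samples):
    """Sum of frame-to-frame changes in acceleration magnitude.
    `samples` is a list of (x, y, z) accelerometer readings."""
    mags = [(x * x + y * y + z * z) ** 0.5 for (x, y, z) in samples]
    return sum(abs(b - a) for a, b in zip(mags, mags[1:]))

def instrument_state(accel_samples, touch_sizes,
                     motion_threshold=0.5, grip_threshold=0.1):
    """Decide what the instrument should do for one frame of sensor data."""
    if quantity_of_motion(accel_samples) > motion_threshold:
        return "noise burst"   # sudden or violent movement
    if any(size > grip_threshold for size in touch_sizes):
        return "playable"      # calm and held: the harmony can sound
    return "silent"            # calm, but nobody is touching it
```

Here “playable” means the harmony samples are allowed to sound; tuning those two thresholds by feel is where the real instrument-building work lies.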

Both the touch sensing and the accelerometer data would be processed by a Bela Mini, a tiny but powerful computer that could process the sensor data, as well as provide the audio playback.

Making the body

I got to work first on the body of the instrument. I often prototype 3D shapes using layers of laser-cut paper sandwiched together, a gradual, hands-on process that allows adjustments throughout. After a few days with a laser cutter and some cut-and-paste circuitry, I had something that lit up and that I could attach the sensing system to.

Putting it together

I attached the Bela Mini to the underside of the instrument body, and embedded the Trill Bar sensor on the underside of the hand grip, so I could sense when someone’s hand was on the instrument. 

As I set out to recreate how the instrument looked and sounded in the show, I wanted to make a faithful reproduction of the sound design, despite it being pretty basic.

The sound is a four-part major chord harmony. I recreated the sound in Ableton Live, with each part of the harmony as a separate sample. I also made a burst of noise. 

When the instrument is being held gently and there are no sudden movements, it can play; this doesn’t mean stillness, just a lack of chaos. As the player places their fingers over the instrument’s edge, each of their four fingers will be sensed and trigger one part of the harmony. The harder that finger presses, the louder that voice is.
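The finger-to-voice mapping might be sketched like this; the function name and the 0-to-1 touch-size scale are assumptions for illustration:

```python
# Sketch of the four-voice mapping: each sensed finger triggers one part of
# the chord, and a harder press (larger touch size) plays that voice louder.
# Touch sizes are assumed normalised to the range 0..1.

def harmony_levels(touch_sizes, max_voices=4):
    """Return per-voice amplitudes (0..1) for up to four finger touches."""
    levels = [0.0] * max_voices
    for voice, size in enumerate(touch_sizes[:max_voices]):
        levels[voice] = min(1.0, max(0.0, size))
    return levels
```

In the actual instrument each amplitude would scale the playback level of one of the four harmony samples.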

There’s a demo video of me playing it, above.

Reflections on the process

This process was just as interesting as I suspected, for a number of reasons.

Firstly, de-emphasising technology in the process of making a technological object presented a fresh way of thinking. Instead of worrying about what I could add, whether the interaction was enough, or what other sensors I had access to (and thereby making the design a product of those technical decisions), I was able instead to be led by the material and object factors in this design process. This is an inverse of what usually happens, and I certainly am going to consciously invert this process more often from now on.

Secondly, thinking about what this instrument needed to do, say, and mean, and extracting the technological factors from there, made the technical aspects much simpler. I found myself working artistic muscles that aren’t always active in designing technology, because there’s often some kind of pressure, real or imagined, to make the technical aspects more complex. In this situation, the most important thing was supporting what this was in the show: an object that told a story. When I thought along those lines, the two axes of sensing were an obvious and refreshingly simple direction to take.

Thirdly, one of the difficult things about designing instruments is that, thanks to tiny and powerful computers, they can sound like anything you can imagine. There are no size limitations for sound, no physical bodies to resonate, no material factors that affect the acoustic physics that create a noise. This freedom is often overwhelming, and it’s hard to make sound design choices that make sense. However, because I was working backwards from how this instrument was presented in the plot of the episode, I had something to attach these decisions to. I recreated the show’s simplistic sound design, but I’ve since designed sound worlds for it that support the calm, gentle, but very much alive nature the Aldean instrument would have, when I imagine it played in its normal context. 

Not only physically recreating the shape of an instrument from Star Trek, but making it function as an instrument, showed me that bringing imaginary things into reality offers the creator a fresh perspective, whether designing fantastical or earthly interfaces.

Santa Tech: Rise of the Guardians (2012)

We interrupt the 3D file browsing series for this Santa-holiday one-off post. If you’re trapped somewhere needing design-and-Santa-related distraction, here’s a bunch of words, images, and links for you.

Longtime readers may recall the Who Did it Better? Santa Claus edition from 2020, in which I took a look at speculative interfaces that help Santa Claus do his Saintly Nick business. (If not, check it out at the link above, especially if you need a refresher on the core myth.) Earlier this year a dear friend mentioned Rise of the Guardians as an additional candidate. So I watched it, and hereby add it as an addendum to that study. I might make this a yearly habit, because they aren’t going to stop making Santa movies anytime soon.

Spoiler alert: There aren’t many interfaces, and they don’t fare well, but the joy is in the analysis, so let’s dive in.

Quick plot recap

Children around the world are protected by a group called the Guardians:

  • North (Santa)
  • Tooth (the Tooth Fairy)
  • (the Easter) Bunnymund
  • Sandman

…all appointed by the mysterious Man in the Moon. Who is just the moon, communicating via moonbeams.

Pictured: A plot-critical character peering in through the shutter like some kind of celestial stalker.

One day, an ancient foe named Pitch Black returns, who plots to get all the children to stop believing in the guardians, thereby robbing them of their power and clearing the way for his fear-mongering world domination. In response, the Man in the Moon names a new Guardian to help defeat him: Jack Frost. Jack initially resists, but over the course of the film and the help of one special child, Jack comes around, learns to care, and helps defeat Pitch. Children around the world believe in him, and he formally joins the ranks of the Guardians.

Our heroes face off against Pitch. Sandman is Disney-dead at this point in the story, and so not pictured.

N.b.: Santa’s devices are only a subset of the film’s

The abilities of the Guardians are a blend of innate magic and magic items, fueled with *vaguely gestures at childhood belief* and not a lot of observable cause-and-effect interfaces. For instance, when Pitch breaks Jack’s magic crook, Jack just holds the pieces and wills it back whole with glowy sparkliness and grunting psychic effort despite never having done anything like this before. No interfaces there. Magic things don’t really befit the usual sort of analysis done on this blog. But North does have three interfaces to do his gift-giving duties that bear the cold light of examination, you heartless, Vulcan bastards. (Yaaay! My people!)

  1. Snow globes
  2. Sleigh dashboard
  3. The Belief Globe

(Tooth and her hummingbird-like Baby Teeth helpers have some proper interfaces as well, but are kind of creepy and this post is about Santa tech. Maybe I’ll do teeth tech interfaces later. Maybe March 6.)

Snow globes

These handheld spheres look like the popular winter decorations, but with no base on which to rest on a surface. Instead they are kept loose in the user’s pocket until needed. When the user shakes one and speaks a destination, a preview of the destination appears inside, surrounded by swirls of “snow.” When the user then pitches it like a baseball, the globe disappears in a puff, replaced with a circular portal to that destination. Move or toss something through, and the portal closes behind.

If this interface seems well-designed, that’s because the examples in the movie are damned convenient. Each time we see a snow globe used in the movie…

  • …the destination has a globally-unique name
  • …the destination has a unique and easily identifiable landmark to display in the globe
  • …the appearance of the destination is already known to the user, so the visual helps confirm the selection

But change any one of these, and it starts to fail. Consider if North, in the course of doing his Santa-ly duties, had to jump to a “San José.” There are at least 334 San Josés around the world, very few of which have identifiable landmarks. How does North know the one being visualized is the right one? He might have eidetic memory because of Рождество Христово magic or something, but these tools are used by the yetis, too, and I doubt they have that same gift.

How would it help them disambiguate? If the displayed destination is not the right one, how does the user provide more specificity to get to the right one? What if they only know the name? How does the snow globe help them narrow things down from 334 to 1? Since the globe disappears on use, and pockets have a limited capacity, the cost for getting it wrong can be quite high. The yetis might very well have to walk back to the North Pole should they run out.

Maybe, maybe, there are only a limited number of destinations possible, but then you’d expect some reference on the globe itself to help a user know that.

Pictured in the globe: a San José from Google Earth, and I’ll send a free PDF copy of the book to the first person who names which San José correctly, because I’m fairly confident it’s nigh-impossible.

It’s also worth noting that there’s no indication how the portals know when it’s OK to close, rather than, say, chopping the traveler in half or leaving them stranded. Is it time-based? Where’s the countdown? Is it dependent on a code word or thought? How does the user know whether the code word has been received or rejected? Does the portal close as soon as a single, “whole object” passes through? Theseus would like a word. There’s no interface in evidence, so it must be “smart,” but as we know, “smart” is not always smart, and design is critical for making users more confident and avoiding costly errors. There are far too many unanswered questions to give this any stamp of approval.

Sleigh dashboard

North has a sleigh, of course. It has a dashboard with some controls. One control we see in use is a lever whose purpose is a mystery. It can’t be a booster, since the motile force here is rangiferine, not mechanical. The control is shaped like an engine control lever on a boat or a thrust control on an airplane. After the lever is thrown, the camera cuts to a very blurry shot of the sleigh’s undercarriage where, if something happens, I can’t discern what it is. Maybe the runners go from flat to vertical, for a more ice-skating-like experience? Exacerbating our lack of information, the control is unlabeled, so it’s hard for a new user to know what it does, what state it’s in, or what the options are. It has no safety mechanism, so depending on the force required, it might easily be activated by accident. Cannot recommend this, either.

The major element in the dashboard is a large globe inset in its center. It’s roughly shoulder-width in diameter. We never see it in use, but it bears great resemblance to the Belief Globe (see below). I want to believe it’s a you-are-here navigation device that automatically orients to match the position and bearing of the sleigh, because that might be useful. And it would be an awesome opportunity for a super-charming brass location indicator, mounted to a quarter-meridian arm. But I suspect this device is actually meant to be a miniaturized copy of the Belief Globe, which would not be useful for reasons you’ll read in the next section.

North and Jack chuckle at Bunnymund’s terror of flying. Fear is so funny.

The Belief Globe

This display is not explicitly named over the course of the movie, but I have to call it something. It is a huge globe that mechanically rotates in the center of North’s arctic fortress. It is covered with beautiful, arcane symbols and Cyrillic writing (North is Russian—this movie was from the halcyon days between the end of the Cold War and Russia’s current horrific, genocidal land-grab attempts against Ukraine), and displays tiny points of light all over it.

Tooth, explaining the globe to Jack, says, “Each of those lights is a child.” North explains further, “A child who believes.” But some of the dots are bigger and others brighter. It’s unclear what information those variables are meant to convey. Older kids? Degree of belief? Relative niceness? We don’t see anyone inspecting individual dots, and if that’s not possible, it really means that this device, diegetically, just shows where the Guardians might want to conspicuously focus their activities to bolster Belief in that geographical area.

And belief seems to be at critical levels. I asked ChatGPT to count the dots in the second image in the gallery above. It estimated 39,674 dots, and that the pictured chunk of South America is about 12% of the world’s total landmass, excluding Antarctica. South America has around 5% of the world’s total population, which extrapolates out to a total of 725,280 dots we would expect to see across the world. According to populationpyramid.com, global population in 2012—the time this film was released—was 7.2 billion, with 1.91 billion being 14 years old or younger (a generous age for childlike belief, since the average age of losing faith in a “real” Santa tends to be around 10 years old in the USA, but let’s run with it.)

I am delighted that this happens to look like a morbid, morbid Christmas tree.
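For the curious, here is the back-of-envelope extrapolation redone in code. This is a straight population-proportional estimate, which lands in the same ballpark as the 725,280 figure above; the variable names are mine:

```python
# Back-of-envelope check of the Belief Globe census, using the figures above.
dots_counted = 39_674       # estimated dots in the visible chunk of South America
population_share = 0.05     # South America holds ~5% of world population
children_2012 = 1.91e9      # people aged 14 or younger in 2012

world_dots = dots_counted / population_share          # ~793,000 believers worldwide
believers_per_10k = world_dots / children_2012 * 10_000
# ... which works out to roughly 4 believing children per 10,000
```

However you round the inputs, the result is a vanishingly small fraction of the world’s children.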

That means that in the world of the Guardians, only about 4 out of every 10,000 children believe in any of them to begin with, even before Pitch comes a-calling. This would have been so easy to fix in the script. Have Tooth say, “These lights represent children who believe.” The plural would have left it ambiguous.

But I’ve digressed.

North has a viewing deck which seems custom-built for observing the globe, and which gives us an important perspective for analysis.

This over-the-yeti-shoulder shot helps point out a major failing of this display: visibility of the information.

With the globe anchored in place at the poles and the observation deck so low, the dots in the southern hemisphere are much more prominent in the viewers’ sight, introducing an availability bias. It looks like anything above 50°N latitude is just…out of sight, and that includes significant populations in Europe as well as North’s own fortress. (We’ll see in the Control Panel that there’s a miniature globe mounted there that provides a view of the Northern Hemisphere, but we don’t see lights on it, and it would be a bad idea to split the information across two sources of differing scales, anyway. So let’s hope that’s not its intended purpose.)

There is an easy fix for the orientation problem, and it of course comes from the world of globe-making. By attaching the poles of the globe to a full meridian that encircles the globe, and then attaching the full meridian to a half meridian at the equator, you create a gimbal that allows the globe to rotate to any orientation.

Like this. Example from UltimateGlobes.com

This is called a full-swing mount, and it would allow arbitrary inspection of any point on the globe. It would be lovely to see writ large and mechanical in the film.

This display also privileges land in a possibly-misleading way, in the same way that election maps can. Let’s all recall that land doesn’t vote, but this kind of implies otherwise.

Same image as above, repeated for easy reference.

For example, on the Belief Globe, it looks like Australian kids are way behind New Zealand kids in Belief metrics, but Australia has a density of 3.4 inhabitants per square kilometer compared to New Zealand’s 19.1, and this map doesn’t make that easy to understand. Per capita belief would be a better metric for delivering actionable Santa insight.

Like this, but inverse. From Colin Mathers on Medium.

Even better would be to show change in belief over time (“боже мой!” North might shout, “Bunny! Get to Czech Republic, немедленно!”), though information over time is notoriously difficult to do on a geographical map.

But even if we solve the orientation and representation problems, putting the information on a globe means at least half of it is out of sight at any given time. In the yeticam view above, what’s going on in Bermuda? You don’t know! It does revolve slowly, but by my own rough estimation at the speed we see in this scene, it would take around 6 minutes for this globe to make a complete rotation, which is way, way beyond the vigilance threshold required to put that picture together holistically in your mind. If the whole picture is important (and I’m asserting that it is), the information display should be a map rather than a globe.

Eh…it’s a crappy Midjourney comp, but you get the gist.

You don’t want to lose the charming magical-Soviet machine feeling of it, but with a world map, maybe you have some mechanics that physically simulate the day/night cycle? And since the Man in the Moon is so important to this story, maybe the lunar cycle as well? Or you could make some mechanical interactive fisheye focus effect, which would be even more spectacular. (Please, somebody, do this.)

I also have to note that having Belief hold such a prominent place in this command and control room seems really self-serving. That much real estate is dedicated to telling you how much gas you have in the tank? There are plenty of additional things that a Santa and his team would want to keep track of that would be of as much importance: Days until Christmas, location of kids at risk of losing belief, percentage of toys complete, bowl-full-of-jelly BMI score, naughty/nice balance in the world, current value of elf pension fund, just to name a few. These could be split-flap displays for nostalgia and lovely clacking audio opportunities.

Globe Control Panel

On the observation deck, North has a control panel of sorts. There are two parts whose functions we can infer, a trackball and a Bat-Guardian-Signal, but most of it—like the levers and joysticks with lit toggle buttons—we cannot. Let’s look at those two.

The trackball

The trackball is a miniature Belief Globe, inset on the right-hand side of the control panel. It is quite similar to the trackballs we see in Arthur Christmas (2011, the year before) and The Christmas Chronicles (2018, six years later). If it controls the orientation of the Belief Globe, and its movement is constrained the way the globe’s is, a user hoping to focus on Mauritius would have to memorize that it is due south of Oman, and do the same for the entirety of the southern hemisphere.

I hope you‘ve memorized your world geography, mate.

It should also be constrained to left-right movement like the thing being controlled, as if on a hidden inclination mount. But this looks like a free-spin trackball, so it could use a knob at the pole and maybe a meridian arm to help signal its constraint. It should also be well-mapped to the globe as the observer sees it. It is not. Compare the orientation of the Globe to the trackball in the screenshot. They do not match.

All told, a pretty underthought component.

Bat-Guardian-Signal

Early in the film, when North realizes Pitch is back, he grabs the control in the far lower-right-hand corner. He twists it 90 degrees counterclockwise and pushes down. The ice-like octagonal button below begins to glow brightly.

This sets the Belief Globe to glowing with aurora lights, that extend out across the globe and alert the Guardians, signaling them to report to Commissioner Gordon North’s compound at once. Mentioned here only out of a sense of completeness, this control is germane to North’s being leader of a team rather than any of his Santa duties. It’s unlabeled, it can’t possibly have the global reach that it needs, and I’m not sure why the Globe was selected to be the source of the aurora, but meh, it’s just not that important in this context.

Final score: Lump of Coal

We have to keep in mind this is a movie for kids, and kids won’t be put off by any of these interface failings. But for our overthinking design-nerd purposes in reviewing the Santa tech, these just don’t hold up. Because of this, Rise of the Guardians’ Santa tech poses zero threat to dethroning The Christmas Chronicles’ lovely Santa interfaces. But it’s good to remind ourselves of the principles to which we should be paying attention.

Enjoy the movie for the fun voice acting, the awesome character design, the gorgeous Sandman visuals, and any nearby kids’ sense of wonder, but don’t worry about the interfaces as anything to admire or mimic in the real world.

Happy holidays, however you celebrate, to most everyone except you, asshole elf.

IMDB: https://www.imdb.com/title/tt1446192/

Disclosure (1994)

Our next 3D file browsing system is from the 1994 film Disclosure. Thanks to site reader Patrick H Lauke for the suggestion.

Like Jurassic Park, Disclosure is based on a Michael Crichton novel, although this time without any dinosaurs. (Would-be scriptwriters should compare the relative success of these two films when planning a study program.) The plot of the film is corporate infighting within Digicom, manufacturer of high tech CD-ROM drives—it was the 1990s—and also virtual reality systems. Tom Sanders, executive in charge of the CD-ROM production line, is being set up to take the blame for manufacturing failures that are really the fault of cost-cutting measures by rival executive Meredith Johnson.

The Corridor: Hardware Interface

The virtual reality system is introduced at about 40 minutes, using the narrative device of a product demonstration within the company to explain to the attendees what it does. The scene is nicely done, conveying all the important points we need to know in two minutes. (To be clear, some of the images used here come from a later scene in the film, but it’s the same system in both.)

The process of entangling yourself with the necessary hardware and software is quite distinct from interacting with the VR itself, so let’s discuss these separately, starting with the physical interface.

Tom wearing VR headset and one glove, being scanned. Disclosure (1994)

In Disclosure the virtual reality user wears a headset and one glove, all connected by cables to the computer system. Like most virtual reality systems, the headset is responsible for visual display, audio, and head movement tracking; the glove for hand movement and gesture tracking. 

There are two “laser scanners” on the walls. These are the planar blue lights, which scan the user’s body at startup. After that they track body motion, although since the user still has to wear a glove, the scanners presumably just track approximate body movement and orientation without fine detail.

Lastly, the user stands on a concave hexagonal plate covered in embedded white balls, which allows the user to “walk” on the spot.

Closeup of user standing on curved surface of white balls. Disclosure (1994)

Searching for Evidence

The scene we’re most interested in takes place later in the film, the evening before a vital presentation which will determine Tom’s future. He needs to search the company computer files for evidence against Meredith, but discovers that his normal account has been blocked from access. He knows, though, that the virtual reality demonstrator is on display in a nearby hotel suite, and that the demo account has unlimited access. He sneaks into the hotel suite to use The Corridor. Tom is under a certain amount of time pressure because a couple of company VIPs and their guests are downstairs in the hotel and might return at any time.

The first step for Tom is to launch the virtual reality system. This is done from an Indy workstation, using the regular Unix command line.

The command line to start the virtual reality system. Disclosure (1994)

Next he moves over to the VR space itself. He puts on the glove but not the headset, presses a key on the keyboard (of the VR computer, not the workstation), and stands still for a moment while he is scanned from top to bottom.

Real world Tom, wearing one VR glove, waits while the scanners map his body. Disclosure (1994)

On the left is the Indy workstation used to start the VR system. In the middle is the external monitor which will, in a moment, show the third person view of the VR user as seen earlier during the product demonstration.

Now that Tom has been scanned into the system, he puts on the headset and enters the virtual space.

The Corridor: Virtual Interface

“The Corridor,” as you’ve no doubt guessed, is a three dimensional file browsing program. It is so named because the user will walk down a corridor in a virtual building, the walls lined with “file cabinets” containing the actual computer files.

Three important aspects of The Corridor were mentioned during the product demonstration earlier in the film. They’ll help structure our tour of this interface, so let’s review them now.

  1. There is a voice-activated help system, which will summon a virtual “Angel” assistant.
  2. Since the computers themselves are part of a multi-user network with shared storage, there can be more than one user “inside” The Corridor at a time.
    Users who do not have access to the virtual reality system will appear as wireframe body shapes with a 2D photo where the head should be.
  3. There are no access controls and so the virtual reality user, despite being a guest or demo account, has unlimited access to all the company files. This is spectacularly bad design, but necessary for the plot.

With those bits of system exposition complete, now we can switch to Tom’s own first person view of the virtual reality environment.

Virtual world Tom watches his hands rezzing up, right hand with glove. Disclosure (1994)

There isn’t a real background yet, just abstract streaks. The avatar hands are rezzing up, and note that the right hand, wearing the glove, has a different appearance from the left. This mimics the real world, so it eases the transition for the user.

Overlaid on the virtual reality view is a Digicom label at the bottom and four corner brackets which are never explained, although they do resemble those used in cameras to indicate the preferred viewing area.

To the left is a small axis indicator, the three green lines labeled X, Y, and Z. These show up in many 3D applications because, silly though it sounds, it is easy in a 3D computer environment to lose track of directions or even which way is up. A common fix for the user being unable to see anything is just to turn 180 degrees around.

We then switch to a third person view of Tom’s avatar in the virtual world.

Tom is fully rezzed up, within cloud of visual static. Disclosure (1994)

This is an almost photographic-quality image. To remind the viewers that this is in the virtual world rather than real, the avatar follows the visual convention described in chapter 4 of Make It So for volumetric projections, with scan lines and occasional flickers. An interesting choice is that the avatar also wears a “headset”, but it is translucent so we can see the face.

Now that he’s in the virtual reality, Tom has one more action needed to enter The Corridor. He pushes a big button floating before him in space.

Tom presses one button on a floating control panel. Disclosure (1994)

This seems unnecessary, but we can assume that in the future of this platform, there will be more programs to choose from.

The Corridor rezzes up, the streaks assembling into wireframe components which then slide together as the surfaces are shaded. Tom doesn’t have to wait for the process to complete before he starts walking, which suggests that this is a Level Of Detail (LOD) implementation where parts of the building are not rendered in detail until the user is close enough for it to be worth doing.
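
The LOD behaviour described above can be sketched in a few lines. This is a minimal illustration assuming a simple distance-threshold scheme; the function name and threshold values are invented, not taken from the film or any real renderer.

```python
# A sketch of distance-based level-of-detail (LOD) selection, the technique
# the film's rendering behaviour suggests. Thresholds are illustrative only.

def select_lod(distance, thresholds=(5.0, 20.0, 60.0)):
    """Return a detail level: 0 = full detail ... 3 = wireframe only."""
    for level, limit in enumerate(thresholds):
        if distance <= limit:
            return level
    return len(thresholds)  # beyond the last threshold: coarsest detail

# The renderer would call this per object, per frame, as Tom walks:
assert select_lod(3.0) == 0    # nearby wall: fully shaded
assert select_lod(45.0) == 2   # distant balcony: simplified mesh
assert select_lod(500.0) == 3  # far wing of the building: wireframe
```

As Tom approaches a section, its distance drops below successive thresholds and the wireframe fills in, which matches what we see on screen.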

Tom enters The Corridor. Nearby floor and walls are fully rendered, the more distant section is not complete. Disclosure (1994)

The architecture is classical, rendered with the slightly artificial-looking computer shading that is common in 3D computer environments because it needs much less computation than trying for full photorealism.

Instead of a corridor this is an entire multistory building. It is large and empty, and as Tom is walking bits of architecture reshape themselves, rather like the interior of Hogwarts in Harry Potter.

Although there are paintings on some of the walls, there aren’t any signs, labels, or even room numbers. Tom has to wander around looking for the files, at one point nearly “falling” off the edge of the floor down an internal air well. Finally he steps into one archway room entrance and file cabinets appear in the walls.

Tom enters a room full of cabinets. Disclosure (1994)

Unlike the classical architecture around him, these cabinets are very modern looking with glowing blue light lines. Tom has found what he is looking for, so now begins to manipulate files rather than browsing.

Virtual Filing Cabinets

The four nearest cabinets, according to the titles above them, are:

  1. Communications
  2. Operations
  3. System Control
  4. Research Data

There are ten file drawers in each. The drawers are unmarked; a label appears only when the user looks directly at a drawer, so Tom has to move his head to centre each drawer in turn to find the one he wants.
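
This gaze-dependent labelling implies a simple angular test. Here is a speculative sketch, assuming the system compares the user’s view direction with the direction to each drawer; the function name and the 10-degree threshold are invented.

```python
import math

# A sketch of the gaze test such a label system implies: a drawer's label
# is shown only while the view direction points nearly at it.

def looking_at(gaze_dir, to_drawer, max_angle_deg=10.0):
    """True if the angle between the gaze and the drawer is small enough."""
    dot = sum(g * d for g, d in zip(gaze_dir, to_drawer))
    mags = math.hypot(*gaze_dir) * math.hypot(*to_drawer)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / mags))))
    return angle <= max_angle_deg

# Looking straight at a drawer shows its label; a drawer 45 degrees off
# to the side stays unlabelled, which is why Tom must sweep his head.
assert looking_at((0, 0, -1), (0, 0, -1))
assert not looking_at((0, 0, -1), (1, 0, -1))
```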

Tom looks at one particular drawer to make the title appear. Disclosure (1994)

The fourth drawer Tom looks at is labeled “Malaysia”. He touches it with the gloved hand and it slides out from the wall.

Tom withdraws his hand as the drawer slides open. Disclosure (1994)

Inside are five “folders” which, again, are opened by touching. The folder slides up, and then three sheets, each looking like a printed document, slide up and fan out.

Axis indicator on left, pointing down. One document sliding up from a folder. Disclosure (1994)

Note the tilted axis indicator at the left. The Y axis, representing a line extending upwards from the top of Tom’s head, is now leaning towards the horizontal because Tom is looking down at the file drawer. In the shot below, both the folder and then the individual documents are moving up so Tom’s gaze is now back to more or less level.

Close up of three “pages” within a virtual document. Disclosure (1994)

At this point the film cuts away from Tom. Rival executive Meredith, having been foiled in her first attempt at discrediting Tom, has decided to cover her tracks by deleting all the incriminating files. Meredith enters her office and logs on to her Indy workstation. She is using a Command Line Interface (CLI) shell, not the standard SGI Unix shell but a custom Digicom program that also has a graphical menu. (Since it isn’t three dimensional it isn’t interesting enough to show here.)

Tom uses the gloved hand to push the sheets one by one to the side after scanning the content.

Tom scrolling through the pages of one folder by swiping with two fingers. Disclosure (1994)

Quick note: This is harder than it looks in virtual reality. In a 2D GUI moving the mouse over an interface element is obvious. In three dimensions the user also has to move their hand forwards or backwards to get their hand (or finger) in the right place, and unless there is some kind of haptic feedback it isn’t obvious to the user that they’ve made contact.
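
To make the difficulty concrete, here is a minimal sketch of the 3D hit test such a system needs. The function name and the 2 cm tolerance are invented design parameters, not anything from the film.

```python
import math

# In 3D there is no physical surface to stop the finger, so the system
# must decide how close is "close enough" to count as a touch.

def finger_touches(finger, target, tolerance=0.02):  # positions in metres
    """True if the tracked fingertip is within tolerance of the target."""
    return math.dist(finger, target) <= tolerance

# Matching x and y is not enough; the user must also match depth:
assert finger_touches((0.50, 1.20, 0.30), (0.51, 1.20, 0.30))      # hit
assert not finger_touches((0.50, 1.20, 0.35), (0.51, 1.20, 0.30))  # too shallow
```

Without haptic feedback, the only way the user learns the depth was wrong is visually, after the gesture fails.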

Tom now receives a nasty surprise.

The shot below shows Tom’s photorealistic avatar at the left, standing in front of the open file cabinet. The green shape on the right is the avatar of Meredith who is logged in to a regular workstation. Without the laser scanners and cameras her avatar is a generic wireframe female humanoid with a face photograph stuck on top. This is excellent design, making The Corridor usable across a range of different hardware capabilities.

Tom sees the Meredith avatar appear. Disclosure (1994)

Why does The Corridor system place her avatar here? A multiuser computer system, or even just a networked file server, obviously has to know who is logged on. Unix systems in general, and command line shells in particular, also track which directory the user is “in”: the current working directory. Meredith is using her CLI interface to delete files in a particular directory, so The Corridor can position her avatar in the corresponding virtual reality location. Or rather, the avatar glides into position rather than suddenly popping into existence: Tom is only surprised because the documents blocked his virtual view.
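
As a speculative sketch of how this mapping could work on a Unix system: each logged-in shell’s current working directory can be read (on Linux, via /proc) and looked up in a table of room coordinates. The table, the paths, and both function names are invented for illustration; only the /proc lookup is real Linux behaviour.

```python
import os

ROOM_COORDS = {                          # hypothetical path -> VR location
    "/files/malaysia": (12.0, 0.0, 4.5),
    "/files/operations": (15.0, 0.0, 4.5),
}

def room_for_cwd(cwd):
    """Map a working directory to a VR location, or None if unmapped."""
    return ROOM_COORDS.get(cwd)

def avatar_position(pid):
    """Place an avatar where the process with this pid is 'standing'."""
    cwd = os.readlink(f"/proc/{pid}/cwd")  # Linux-specific cwd lookup
    return room_for_cwd(cwd)

assert room_for_cwd("/files/malaysia") == (12.0, 0.0, 4.5)
```

Under this scheme, each `cd` command Meredith types would move her avatar to the matching room, which is consistent with the gliding motion Tom sees.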

Quick note: While this is plausible, there are technical complications. Command line users often open more than one shell at a time in different directories. In such a case, what would The Corridor do? Duplicate the wireframe avatar in each location? In the real world we can’t be in more than one place at a time, so would duplicate avatars contradict the virtual reality metaphor?

There is an asymmetry here in that Tom knows Meredith is “in the system” but not vice versa. Meredith could in theory use CLI commands to find out who else is logged on and whether anyone was running The Corridor, but she would need to actively seek out that information and has no reason to do so. It didn’t occur to Tom either, but he doesn’t need to think about it: the virtual reality environment conveys more information about the system by default.

We briefly cut away to Meredith confirming her CLI delete command. Tom sees this as the file drawer lid emitting beams of light which rotate down. These beams first erase the floating sheets, then the folders in the drawer. The drawer itself now has a red “DELETED” label and slides back into the wall.

Tom watches Meredith deleting the files in an open drawer. Disclosure (1994)

Tom steps further into the room. The same red labels appear on the other file drawers even though they are currently closed.

Tom watches Meredith deleting other, unopened, drawers. Disclosure (1994)

Talking to an Angel

Tom now switches to using the system voice interface, saying “Angel I need help” to bring up the virtual reality assistant. Like everything else we’ve seen in this VR system the “angel” rezzes up from a point cloud, although much more quickly than the architecture: people who need help tend to be more impatient and less interested in pausing to admire special effects.

The voice assistant as it appears within VR. Disclosure (1994)

Just in case the user is now looking in the wrong direction the angel also announces “Help is here” in a very natural sounding voice.

The angel is rendered with white robe, halo, harp, and rapidly beating wings. This is horribly clichéd, but a help system needs to be reassuring in appearance as well as function. An angel appearing as a winged flying serpent or wheel of fire would be more original and authentic (yes, really: Biblically Accurate Angels) but users fleeing in terror would seriously impact the customer satisfaction scores.

Now Tom has a short but interesting conversation with the angel, beginning with a question:

  • Tom
  • Is there any way to stop these files from being deleted?
  • Angel
  • I’m sorry, you are not level five.
  • Tom
  • Angel, you’re supposed to protect the files!
  • Angel
  • Access control is restricted to level five.

Tom has made the mistake, as described in chapter 9, Anthropomorphism, of the book, of ascribing more agency to this software program than it actually has. He thinks he is having a conversation (chapter 6, Sonic Interfaces) with a fully autonomous system, which should therefore be interested in, and care about, the wellbeing of the entire system. Which it doesn’t, because this is just a limited-command voice interface to a guide.

Even though this is obviously scripted rather than a genuine error, I think this raises an interesting question for real world interface designers: do users expect that an interface with higher visual quality/fidelity will be more realistic in other aspects as well? If a voice interface assistant appears as a simple polyhedron with no attempt at photorealism (say, like Bit in Tron) or with zoomorphism (say, like the search bear in Until the End of the World) will users adjust their expectations for speech recognition downwards? I’m not aware of any research that might answer this question. Readers?

Despite Tom’s frustration, the angel has given an excellent answer – for a guide. A very simple help program would have recited the command(s) that could be used to protect files against deletion. Which would have frustrated Tom even more when he tried to use one and got some kind of permission denied error. This program has checked whether the user can actually use commands before responding.

This does contradict the earlier VR demonstration where we were told that the user had unlimited access. I would explain this as “unlimited read access, but not write”, a detail the presenter presumably didn’t think worthwhile for the mostly non-technical audience.

Tom is now aware that he is under even more time pressure as the Meredith avatar is still moving around the room. Realising his mistake, he uses the voice interface as a query language.

“Show me all communications with Malaysia.”
“Telephone or video?”
“Video.”

This brings up a more conventional looking GUI window because not everything in virtual reality needs to be three-dimensional. It’s always tempting for a 3D programmer to re-implement everything, but it’s also possible to embed 2D GUI applications into a virtual world.

Tom looks at a conventional 2D display of file icons inside VR. Disclosure (1994)

The window shows a thumbnail icon for each recorded video conference call. This isn’t very helpful, so Tom again decides that a voice query will be much faster than looking at each one in turn.

“Show me, uh, the last transmission involving Meredith.”

There’s a short 2D transition effect swapping the thumbnail icon display for the video call itself, which starts playing at just the right point for plot purposes.

Tom watches a previously recorded video call made by Meredith (right). Disclosure (1994)

While Tom is watching and listening, Meredith is still typing commands. The camera orbits around behind the video conference call window so we can see the Meredith avatar approach, which also shows us that this window is slightly three dimensional, the content floating a short distance in front of the frame. The film then cuts away briefly to show Meredith confirming her “kill all” command. The video conference recordings are deleted, including the one Tom is watching.

Tom is informed that Meredith (seen here in the background as a wireframe avatar) is deleting the video call. Disclosure (1994)

This is also the moment when the downstairs VIPs return to the hotel suite, so the scene ends with Tom managing to sneak out without being detected.

Virtual reality has saved the day for Tom. The documents and video conference calls have been deleted by Meredith, but he knows that they once existed and has a colleague retrieve the files he needs from the backup tapes. (Which is good writing: the majority of companies shown in film and TV never seem to have backups for files, no matter how vital.) Meredith doesn’t know that he knows, so he has the upper hand to expose her plot.

Analysis

How believable is the interface?

I won’t spend much time on the hardware, since our focus is on file browsing in three dimensions. From top to bottom, the virtual reality system starts as believable and becomes less so.

Hardware

The headset and glove look like real VR equipment, believable in 1994 and still so today. Having only one glove is unusual, and makes impossible some of the common gesture actions described in chapter 5 of Make It So, which require both hands.

The “laser scanners” that create the 3D geometry and texture maps for the 3D avatar and perform real time body tracking would more likely be cameras, but that would not sound as cool.

And lastly the walking platform apparently requires our user to stand on large marbles or ball bearings and stay balanced while wearing a headset. Uh…maybe…no. Apologetics fails me. To me it looks like it would be uncomfortable to walk on, almost like deterrent paving.

Software

The Corridor, unlike the 3D file browser used in Jurassic Park, is a special effect created for the film. It was a mostly-plausible, near future system in 1994, except for the photorealistic avatar. Usually this site doesn’t discuss historical context (the “new criticism” stance), but I think in this case it helps to explain how this interface would have appeared to audiences almost two decades ago.

I’ll start with the 3D graphics of the virtual building. My initial impression was that The Corridor could have been created as an interactive program in 1994, but that was my memory compressing the decade. During the 1990s, 3D computer graphics, both interactive and CGI, improved at a phenomenal rate. The virtual building would not have been interactive in 1994, was possible on the most powerful systems six years later in 2000, and looks rather old-fashioned compared to what the game consoles of the 21st century can achieve.

For the voice interface I made the opposite mistake. Voice interfaces on phones and home computing appliances became common in the second decade of the 21st century, but are in reality much older. Apple Macintosh computers in 1994 had text-to-speech synthesis with natural sounding voices and limited-vocabulary voice command recognition. (And without needing an Internet connection!) So the voice interface in the scene is believable.

The multi-user aspects of The Corridor were possible in 1994. The wireframe avatars for users not in virtual reality are unflattering or perhaps creepy, but not technically difficult. As a first iteration of a prototype system it’s a good attempt to span a range of hardware capabilities.

The virtual reality avatar, though, is not believable for the 1990s and would be difficult today. Photographs of the body, made during the startup scan, could be used as a texture map for the VR avatar. But live video of the face would be much more difficult, especially when the face is partly obscured by a headset.

How well does the interface inform the narrative of the story?

The virtual reality system in itself is useful to the overall narrative because it makes the Digicom company seem high tech. Even in 1994 CD-ROM drives weren’t very interesting.

The Corridor is essential to the tension of the scene where Tom uses it to find the files, because otherwise the scene would be much shorter and really boring. If we ignore the virtual reality these are the interface actions:

  • Tom reads an email.
  • Meredith deletes the folder containing those emails.
  • Tom finds a folder full of recorded video calls.
  • Tom watches one recorded video call.
  • Meredith deletes the folder containing the video calls.

Imagine how this would have looked if both were using a conventional 2D GUI, such as the Macintosh Finder or MS Windows Explorer. Double click, press and drag, double click…done.

The Corridor slows down Tom’s actions and makes them far more visible and understandable. Thanks to the virtual reality avatar we don’t have to watch an actor push a mouse around. We see him move and swipe, be surprised and react; and the voice interface adds extra emotion and some useful exposition. It also helps with the plot, giving Tom awareness of what Meredith is doing without having to actively spy on her, or look at some kind of logs or recordings later on.

Meredith, though, can’t use the VR system because then she’d be aware of Tom as well. Using a conventional workstation visually distinguishes and separates Meredith from Tom in the scene.

So overall, though the “action” is pretty mundane, it’s crucial to the plot, and the VR interface helps make this interesting and more engaging.

How well does the interface equip the character to achieve their goals?

As described in the film itself, The Corridor is a prototype for demonstrating virtual reality. As a file browser it’s awful, but since Tom has lost all his normal privileges this is the only system available, and he does manage to eventually find the files he needs.

At the start of the scene, Tom spends quite some time wandering around a vast multi-storey building without a map, room numbers, or even coordinates overlaid on his virtual view. Which seems rather pointless because all the files are in one room anyway. As previously discussed for Johnny Mnemonic, walking or flying everywhere in your file system seems like a good idea at first, but often becomes tedious over time. Many actual and some fictional 3D worlds give users the ability to teleport directly to any desired location.

Then the file drawers in each cabinet have no labels either, so Tom has to look carefully at each one in turn. There is so much more the interface could be doing to help him with his task, and to help users of the VR demo learn and explore its technology.

Contrast this with Meredith, who uses her command line interface and 2D GUI to go through files like a chainsaw.

Tom becomes much more efficient with the voice interface. Which is just as well, because if he hadn’t, Meredith would have deleted the video conference recordings while he was still staring at virtual filing cabinets. However, neither the voice interface nor the corresponding file display needs three dimensional graphics.

There is hope for version 2.0 of The Corridor, even restricting ourselves to 1994 capabilities. The first and most obvious improvement is to copy 2D GUI file browsers, or the 3D file browser from Jurassic Park, and show the corresponding text name next to each graphical file or folder object. The voice interface is so good that it should be turned on by default, without requiring the angel. And finally, add some kind of map overlay with a moving “you are here” dot, like the maps that players of 3D games such as Doom could display with a keystroke.

Film making challenge: VR on screen

Virtual reality (or augmented reality systems such as Hololens) provides a better viewing experience for 3D graphics by creating the illusion of real three dimensional space rather than a 2D monitor. But it is always a first person view, and unlike with conventional 2D monitors, nobody else can see what the VR user is seeing without a deliberate mirroring/debugging display. This is an important difference from other advanced or speculative technologies that film makers might choose to include. Showing a character wielding a laser pistol instead of a revolver, or driving a hover car instead of a wheeled car, hardly changes how to stage a scene, but VR does.

So, how can we show virtual reality in film?

There’s the first-person view corresponding to what the virtual reality user is seeing themselves. (Well, half of what they see since it’s not stereographic, but it’s cinema VR, so close enough.) This is like watching a screencast of someone else playing a first person computer game, the original active experience of the user becoming passive viewing by the audience. Most people can imagine themselves in the driving seat of a car and thus make sense of the turns and changes of speed in a first person car chase, but the film audience probably won’t be familiar with the VR system depicted and will therefore have trouble understanding what is happening. There’s also the problem that viewing someone else’s first-person view, shifting and changing in response to their movements rather than your own, can make people disoriented or nauseated.

A third-person view is better for showing the audience the character and the context in which they act. But not the diegetic real-world third-person view, which would be the character wearing a geeky headset and poking at invisible objects. As seen in Disclosure, the third person view should be within the virtual reality.

But in doing that, now there is a new problem: the avatar in virtual reality representing the real character. If the avatar is too simple the audience may not identify it with the real world character and it will be difficult to show body language and emotion. More realistic CGI avatars are increasingly expensive and risk falling into the Uncanny Valley. Since these films are science fiction rather than factual, the easy solution is to declare that virtual reality has achieved the goal of being entirely photorealistic and just film real actors and sets. Adding the occasional ripple or blur to the real world footage to remind the audience that it’s meant to be virtual reality, again as seen in Disclosure, is relatively cheap and quick.
So, solving all these problems results in the cinematic trope we can call Extradiegetic Avatars, which are third-person, highly-lifelike “renderings” of characters, with a telltale Hologram Projection Imperfection for audience readability, that may or may not be possible within the world of the film itself.

IMDb: https://www.imdb.com/title/tt0109635/

Sci-fi Spacesuits: Moving around

Whatever it is, it ain’t going to construct, observe, or repair itself. In addition to protection and provision, suits must facilitate the reason the wearer has dared to go out into space in the first place.

One of the most basic tasks of extravehicular activity (EVA) is controlling where the wearer is positioned in space. The survey shows several types of mechanisms for this. First, if your EVA never needs you to leave the surface of the spaceship, you can go with mountaineering gear or sticky feet. (Or sticky hands.) Otherwise, we can think of maneuvering through space as similar to piloting a craft, but the controls and displays have to be made wearable, like wearable control panels. We might also expect to see some tunnel-in-the-sky displays to help with navigation, and some AI safeguard features to return the spacewalker to safety when things go awry. (Narrator: We don’t.)

Mountaineering gear

In Stowaway (2021) astronauts undertake unplanned EVAs with carabiners and gear akin to what mountaineers use. This makes some sense, though even this equipment needs to be modified for use with astronauts’ thick gloves.

Stowaway (2021) Drs Kim and Levinson prepare to scale to the propellant tank.

Sticky feet (and hands)

Though it’s not extravehicular, I have to give a shout out to 2001: A Space Odyssey (1968), where we see a flight attendant manage their position in microgravity with special shoes that adhere to the floor. It’s a lovely example of a competent Hand Wave. We don’t need to know how it works because it says, right there, “Grip shoes.” Done. Though props to the actress Heather Downham, who had to make up a funny walk to illustrate that it still isn’t like walking on earth.

2001: A Space Odyssey (1968)
Pan Am: “Thank god we invented the…you know, whatever shoes.”

With magnetic boots, seen in Destination Moon, the wearer simply walks around and manages the slight awkwardness of having to pull a foot up with extra force, and have it snap back down on its own.

Battlestar Galactica added magnetic handgrips to augment the control provided by magnetized boots. With them, Sergeant Mathias is able to crawl around the outside of an enemy vessel, inspecting it. While crawling, she holds grip bars mounted to circles that contain the magnets. A mechanism for turning the magnet off is not seen, but like these portable electric grabbers, it could be as simple as a thumb button.

Iron Man also had his Mark 50 suit form stabilizing suction cups before cutting a hole in the hull of the Q-Ship.

Avengers: Infinity War (2018)

In the electromagnetic version of boots, seen in Star Trek: First Contact, the wearer turns the magnets on with a control strapped to their thigh. Once on, the magnetization seems to be sensitive to the wearer’s walk, automatically lessening when the boot is lifted off. This gives the wearer something of a natural gait. The magnetism can be turned off again to be able to make microgravity maneuvers, such as dramatically leaping away from Borg minions.

Star Trek: Discovery also included this technology, but with what appears to be a gestural activation and cool glowing red dots on the sides and back of the heel. The back of each heel has a stack of red lights that count down to when the magnets turn off, as, I guess, a warning to anyone around that the wearer is about to be “air” borne.

Quick “gotcha” aside: neither Destination Moon nor Star Trek: First Contact bothers to explain how characters are meant to be able to kneel while wearing magnetized boots. Yet this very thing happens in both films.

Destination Moon (1950): Kneeling on the surface of the spaceship.
Star Trek: First Contact (1996): Worf rises from operating the maglock to defend himself.

Controlled Propellant

If your extravehicular task has you leaving the surface of the ship and moving around space, you likely need a controlled propellant. This is seen only a few times in the survey.

The manned maneuvering unit, or MMU, seen in the film Mission to Mars is based loosely on NASA’s MMU. A nice thing about the device is that, unlike with the other controlled propellant interfaces, we can actually see some of the interaction and not just the effect. The interfaces are subtly different in that the Mission to Mars spacewalkers travel forward and backward by angling the handgrips forward and backward rather than with a joystick on an armrest. This seems like a closer mapping, but also more prone to error from accidental touches or from bumping into something.

The plus side is an interface that is much more cinegenic, where the audience is more clearly able to see the cause and effect of the spacewalker’s interactions with the device.

If you have propellant in a Mohs 4 or 5 film, you might need to acknowledge that propellant is a limited resource. Over the course of the same (heartbreaking) scene shown above, we see one interface where a spacewalker monitors his fuel, and another where a spacewalker realizes that she has traveled as far as she can with her MMU and still return to safety.

Mission to Mars (2000): Woody sees that he’s out of fuel.

For those wondering, Michael Burnham’s flight to the mysterious signal in that pilot uses propellant, but is managed and monitored by controllers on Discovery, so it makes sense that we don’t see any maneuvering interfaces for her. We could dive in and review the interfaces the bridge crew uses (and try to map that onto a spacesuit), but we only get snippets of these screens and see no controls.

Iron Man’s suits employ some Phlebotinum propellant that lasts forever, fits inside his tailored suit, and is powerful enough to achieve escape velocity.

Avengers: Infinity War (2018)

All in all, though sci-fi seems to understand the need for characters to move around in spacesuits, very little attention is given to the interfaces that enable it. The Mission to Mars MMU is the only one with explicit attention paid to it, and it is quite derived from NASA models. It’s an opportunity for film makers, should the needs of the plot allow, to give this topic some attention.

Sci-fi Spacesuits: Biological needs

Spacesuits must support the biological functioning of the astronaut. There are probably damned fine psychological reasons not to show astronauts their own biometric data while on stressful extravehicular missions, but there is also the issue of comfort. Even if temperature, pressure, humidity, and oxygen levels are kept within safe ranges by automatic features of the suit, there is still a need for comfort and control inside that range. If the suit is to be worn a long time, there must be some accommodation for food, water, urination, and defecation. Additionally, the medical and psychological status of the wearer should be monitored to warn of stress states and emergencies.

Unfortunately, the survey doesn’t reveal any interfaces being used to control temperature, pressure, or oxygen levels. There are some for low oxygen level warnings and testing conditions outside the suit, but these are more outputs than interfaces where interactions take place.

There are also no nods to toilet necessities, though in fairness Hollywood eschews this topic a lot.

The one example of sustenance seen in the survey appears in Sunshine, where we see Captain Kaneda take a sip from his drinking tube while performing a dangerous repair of the solar shields. It is a simple mechanical interface, held in place by material strength in such a way that he needs only to tilt his head to take a drink.

Similarly, in Sunshine, when Capa and Kaneda perform EVA to repair broken solar shields, Cassie tells Capa to relax because he is using up too much oxygen. We see a brief view of her bank of screens that include his biometrics.

Remote monitoring of people in spacesuits is common enough to be a trope, but it has already been discussed in the Medical chapter of Make It So; see that chapter for more on biometrics in sci-fi.

Crowe’s medical monitor in Aliens (1986).

There are some non-interface biological signals for observers. In the movie Alien, as the landing party investigates the xenomorph eggs, we can see that the suit outgases something like steam—slower than exhalations, but regular. Though not presented as such, the suit certainly confirms for any onlooker that the wearer is breathing and the suit functioning.

Given that sci-fi technology glows, it is no surprise to see that lots and lots of spacesuits have glowing bits on the exterior. Though nothing yet in the survey tells us what these lights might be for, it stands to reason that one purpose might be as a simple and immediate line-of-sight status indicator. When things are glowing steadily, it means the life support functions are working smoothly. A blinking red alert on the surface of a spacesuit could draw attention to the individual with the problem, and make finding them easier.

Emergency deployment

One nifty thing that sci-fi can do (but we can’t yet in the real world) is deploy biology-protecting tech at the touch of a button. We see this in the Marvel Cinematic Universe with Starlord’s helmet.

If such tech were available, you’d imagine it would have smart sensors to know when it must automatically deploy (sudden loss of oxygen, or dangerous impurities in the air), but we don’t see them. Still, given this speculative tech, one can imagine it working for a whole spacesuit and not just a helmet. It might speed up scenes like this.

What do we see in the real world?

Are there real-world controls that sci-fi is missing? Let’s turn to NASA’s space suits to compare.

The Primary Life-Support System (PLSS) is the complex spacesuit subsystem that provides the life support to the astronaut, and biomedical telemetry back to control. Its main components are the closed-loop oxygen-ventilation system for cycling and recycling oxygen, the moisture (sweat and breath) removal system, and the feedwater system for cooling.

The only “biology” controls that the spacewalker has for these systems are a few on the Display and Control Module (DCM) on the front of the suit. They are the cooling control valve, the oxygen actuator slider, and the fan switch. Only the first is explicitly to control comfort. Other systems, such as pressure, are designed to maintain ideal conditions automatically. Other controls are used for contingency systems for when the automatic systems fail.

Hey, isn’t the text on this thing backwards? Yes, because astronauts can’t look down from inside their helmets, and must view these controls via a wrist mirror. More on this later.

The suit is insulated thoroughly enough that the astronaut’s own body heats the interior, even in complete shade. Because the astronaut’s body constantly adds heat, the suit must be cooled. To do this, the suit cycles water through a Liquid Cooling and Ventilation Garment, which has a fine network of tubes held close to the astronaut’s skin. Water flows through these tubes and past a sublimator that cools the water by exposure to space. The astronaut can increase or decrease the speed of this flow, and thereby the amount to which his body is cooled, using the cooling control valve, a recessed radial valve with fixed positions between 0 (the hottest) and 10 (the coolest), located on the front of the Display and Control Module.

The spacewalker does not have EVA access to her biometric data. Sensors measure oxygen consumption and electrocardiograph data and broadcast it to the Mission Control surgeon, who monitors it on her behalf. So whatever the reason is, if it’s good enough for NASA, it’s good enough for the movies.


Back to sci-fi

So, we do see temperature and pressure controls on suits in the real world, which underscores their absence in sci-fi. But, if there hasn’t been any narrative or plot reason for such things to appear in a story, we should not expect them.