Fritzes 2026 bonus award: Best Comedy-Horror Interface

The Fritzes award honors the best interfaces in a full-length motion picture in the past year. Interfaces play a special role in our movie-going experience, and are a craft all their own that does not otherwise receive focused recognition.

In this post, I award the best comedy-horror interface of 2025, then realize it is a special category of thing, gather multiple examples, and propose a name for it. It’s going to be a long one. Buckle in.

This post contains major spoilers (central twist) and a major digression.

A stylized graphic featuring a jellyfish-like creature against a dark background with the text 'MASSIVE SPOILERS AHEAD' in bold yellow lettering.

The movie is Bugonia. It is an English-language remake of the 2003 South Korean film Save the Green Planet! by Jang Joon-hwan. (Which is not streaming anywhere as far as I can tell, so I haven’t seen it yet.)

IMDB: https://www.imdb.com/title/tt0354668/

The plot

Bugonia centers on Teddy, a paranoid beekeeper, and his impressionable cousin Donny, who together kidnap Michelle Fuller. She is CEO of the pharmaceutical conglomerate Auxolith. The pair are convinced she is an extraterrestrial from the Andromeda galaxy, intent on destroying humanity. Their belief is drawn from conspiracy podcasts, fringe online sources, and Teddy’s own experimentation. Having abducted her, they chain her in their basement, shave her head, torture her, and subject her to an extended interrogation in which they hope to get her to agree to arrange a parley with the Andromedan emperor, in turn to negotiate for the withdrawal of Andromedans from Earth.

Michelle tries several tactics to escape, including reason, denial, and bargaining. While Teddy is out of the basement dealing with an investigating sheriff, Donny confesses to Michelle that it’s all gone too far and shoots himself. When Teddy returns, Michelle tries absurdist escalation—agreeing that she is an alien—and convinces Teddy to inject his hospitalized mother with an alien “cure” from her car’s trunk (actually antifreeze). He does so, killing her. Infuriated, he returns to confront Michelle, but she escalates further, claiming that she is in fact alien royalty and that he must do what she says to save humanity. He agrees to take her to her office, where she says she has a teleporter hidden in the coat closet. He steps in, but the explosives he has strapped to his body detonate, killing him and freeing Michelle from the ordeal.

The spoiler

There are lots of hints along the way that Teddy and Donny don’t have a solid grasp on reality. But the sequence at the very end of the movie reframes everything that came before it, showing that Teddy’s conspiracy theories were right all along. (That in and of itself seems like a dangerous thing to put into the world, given current kayfabe fascist politics and their psychotic supporters, but it’s kind of played for comedy, so…sure, I guess?) Michelle really is queen of an alien species.

It means the long story she delivers in the basement is probably diegetically true, rather than a bid to out-conspiracy Teddy, as the audience is led to believe. In this monologue she explains (it’s long, so I’m augmenting with emoji): The Andromedans’ 75th emperor discovered Earth 🛸👑🌎 when it was ruled by dinosaurs. 🦕🦖 After his species accidentally introduced a fatal virus 🦠 that wiped out all life there, he repopulated the planet with beings modeled on the Andromedans. 👽 These early humans eventually flourished into a civilization—Atlantis—that worshipped the Andromedans as gods. 🕉️

That harmony unravelled when some Atlantean humans began engineering 🧬 stronger, more aggressive variants of themselves, triggering a war ⚔️ that ended in thermonuclear catastrophe. 💥 The few survivors drifted at sea for a century. 🌊🚣‍♂️⏳ When they returned to land, their leaders were dead, ☠️ leaving only degraded remnants from which the apes 🦍 and eventually modern humans 🧑‍🤝‍🧑 descended. The new species proved no better. They were driven by war, ⚔️ ecological destruction, 🌲➡️🪵 and self-poisoning, 🍶☠️ incapable of changing course even when confronted with evidence of their own ruin. 📉 [Which, you know, fair enough.]

The Andromedans 👽 determined the flaw was genetic, 🧬 inherited from those ancient engineered ancestors and growing stronger with each generation. Their stated mission became eliminating this suicidal gene. 🔬💉 This would save both humanity and the Earth. 🧑‍🤝‍🧑🌏 For the experiments, including those conducted on Teddy’s mother, 👩 they chose subjects selected for their weakness and brokenness, 💔 on the theory that if the most damaged humans could be corrected, all of humanity might be. 🌍✨

Whew. 😮‍💨

So, after Teddy accidentally kills himself, Michelle teleports back to her ship where she meets with her court, dons her royal regalia, and confers with them on strategy. The hive agrees that humanity is beyond saving, and to enact this decision, she approaches a circular table with a map of the earth on top. Specifically it is a Lambert azimuthal equal-area projection centered on the North Pole. (I’m a sucker for nonstandard projections, as you may recall.)
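(Geeky aside for fellow projection nerds: the polar aspect of that projection is easy to sketch. This is just the standard textbook formula for a sphere, nothing derived from the film, and the function name is mine, so treat it as purely illustrative.)

```python
import math

def lambert_azimuthal_polar(lat_deg, lon_deg, radius=1.0):
    """North-polar Lambert azimuthal equal-area projection of a sphere.

    Returns (x, y) map coordinates. The pole maps to the center, the
    equator to a circle of radius radius*sqrt(2), and the whole globe
    fits in a disk of radius 2*radius, whose area, pi*(2R)^2 = 4*pi*R^2,
    equals the sphere's surface area. That is the equal-area property.
    """
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    rho = 2 * radius * math.sin(math.pi / 4 - lat / 2)  # distance from map center
    return rho * math.sin(lon), -rho * math.cos(lon)
```

The equal-area property is why it’s a sensible choice for a genocide console, if you’ll forgive the phrase: every region of Earth gets map area proportional to its true area, with no Mercator-style inflation of the poles.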

A surreal and eerie underground environment with a circular arrangement of stone-like sculptures, surrounded by red terrain and mist, featuring a small figure in a tattered cloak standing near a central basin.

Encasing this map is a shimmering dome of translucent hexagons. (Like a beehive. I see what you did there.)

A close-up view of a decorative bowl filled with blue liquid, resembling an abstract earth or water scene, surrounded by soft, flowing material in warm colors.

She stares at it for a while.

Close-up portrait of a person with a detailed, artistic headdress, showcasing a serious expression against a dark background.

She presses the tip of a large thorn-like object into the dome. It gives and resists for half a second, but then it pops, leaving tiny clouds above the map that quickly dissipate. And that’s it. All done. She looks down with a hint of sadness. Such a loss.

There follows a 3-minute sequence of eerily still scenes from around the world of the 8 billion humans who have been cut down instantly as a result of that interface, while extradiegetically, we hear Marlene Dietrich’s “Where Have All the Flowers Gone”. Nightclubs and factories. Bedrooms and saunas. Beaches and museums. Everyone’s lying there, dead.

IMDB: https://www.imdb.com/title/tt12300742/

It’s a shockingly simple interface that wildly contrasts with the horror of the mass extermination it causes. There is no second-hand safety switch. No pair of keys that need simultaneous turning. No equivalent of an “are you sure?” confirmation dialog. No big, surging hum from the giant planet-exploding laser that’s powering up. It is just press, pop…death. The need to hold the thorn and keep pressing is a tiny, negligible safety measure, which, again, adds to the horror for being so mismatched to its effects. For a horror movie this thing is bzzz bzzz bzzz (bee’s kiss) perfection.

We do see a few animals, like birds, moving amongst the corpses. So we know the whole biosphere isn’t affected. (Well, at least until the 500 million metric tons of corpses begin to decay and so on.) So at first I thought I would have liked to see some interface preceding the pop where Queen Michelle selects our one species from amongst the 8.7 million on the planet (maybe from an interactive Hillis Plot of the Tree of Life?), but when I imagined it, I thought better of it. It would have lost the horror of its utter simplicity. As it is, it conveys that Homo sapiens sapiens was the singular problem under consideration, and this interface was just about them. Well. Killing them, anyway.

But otherwise, I don’t think the pop-interface itself makes much sense.

  • Why would it need a detailed map when it’s just a giant, momentary mass-murder button? Certainly we want labels, but this label doesn’t really explain what the button does, so it is insufficient.
  • The dome is misleading, since it doesn’t describe some atmospheric protection. The air swirls, as a display, are misleading too, because the Terran atmosphere doesn’t actually dissipate. (Sure, you can’t un-pop a bubble, and this extinction-action is irreversible, so that’s fitting.)
  • It seems prone to accidental activation. The Andromedans are managing a planetary, 66-million-year cover-their-ass project. Its end would involve…more.

So I suspect something else is going on here. I don’t think we’re seeing something literal in this sequence.

But to explain that in any depth I have to veer into some super heady film-critique stuff. If you’re just here for the interfaces, nope-out now. See you next time for Best Robots. But for the rest of you, let’s talk about…

Similar sequences

It’s one of my favorite kinds of sequence in sci-fi, where you suspect the diegetic reality is kind of unfilmable or even incomprehensible to the human mind, but the filmmaker has to show something, so they shift into a close-enough representation.

In these types of sequences, the shift from a more literal depiction to some close-enough stand-in is not marked or explained. You just have to feel that things are uncanny, decide that you’re seeing things in a different narrative register, and interpret from there.

Bugonia is not the first time we see something like this.

Other examples | 2001: A Space Odyssey (1968)

I think the first and biggest example in the survey is the white bedroom sequence at the end of 2001. Bowman’s mind is being shown something beyond his (and our) capabilities to comprehend. Kind of like a monkey mind being blown because tools. So Kubrick uses streaky lights and Louis-XVI-style bedroom furniture, illuminated floor grids and multiple, overlapping reflections of Bowman at different ages staring at each other, and you have to try and figure it out.

Other examples | Under the Skin (2013)

The Female (sorry, that’s the character name on imdb.com) looks like a seductrix, but functions more like the lure on an anglerfish. In the midnight zone where the anglerfish hunts, little fishes just see a pretty blue light and follow it, unable to perceive (or conceive?) the imminent danger of the giant, unseen, terrifying anglerfish controlling it. Similarly, The Female lures female-attracted men through a regular-looking door in a city. Once through the door, things quickly become uncanny, but the victims are so entranced by The Female, they just keep going. They walk deeper and deeper into a pool of inky blackness following her, while she walks on top of it. Once submerged in the weird liquid/not-liquid, after an elongated, spooky beat, they are suddenly flayed and the slurry of their remains goes…somewhere.

The movie, if you haven’t seen it, takes the whole thing several steps further, interrogating the existential crisis and ego death of The Female realizing she is just a lure, and more than that, one that is decaying and being replaced by another. Even though you’ve just read massive spoilers, I highly recommend it. It’s still fantastic and worth watching and contemplating.

Other examples | Interstellar (2014)

This movie features a tesseract, a four-and-a-half dimensional hyper-cube structure built by post-human beings inside the supermassive black hole Gargantua. Astronaut Cooper gets trapped within it. In this space, the film represents time as a physical, navigable dimension, an Escher-esque library with bookshelves running every which way; repeating, stretching, and infused with scenes from Cooper’s daughter’s life. From this vantage he’s able to hit books in the shelves and manipulate gravity across the universe, ultimately sending quantum data Murph’s way that is crucial for saving humanity from itself.

We poor suckers in the audience live constrained in 3 and a half dimensions: we can move in the X, Y, and Z directions, but are passive recipients of the half bit, i.e. time. The tesseract allows time to function like one of those navigable dimensions, which we just aren’t equipped to comprehend, so, OK, a library of books is as good a visualization as any. 

Other examples | Legion (2017–2019)

(Thanks to Jonathan Korman for this last example.) In the Season 2 opener of Legion, we see a choreographed dance-off between Professor X’s psychic son David Haller, psychic parasite Amahl Farouk (posing as Oliver Bird), and fellow Clockworks patient Lenny Busker. It is a mental battle that we can’t possibly imagine, visualized as a dance battle that we can.


In each of these examples, the rest of the movie or TV show works with a standard-issue camera that shows what you might see if you were a fly on the wall in the room. But in these scenes, we’re seeing a weird in-between. It’s an impression of the actual events as they unfold, but not as literal as the rest of the show. But it’s not completely abstract, which takes us to this next not-quite-an-example.

A slightly different example | The End of Evangelion (1997)

The Third Impact sequence from The End of Evangelion features a similar sequence that is not quite the same. In it, humanity is being unified into a single consciousness, and things shift from standard anime into a wholly abstract sequence of still images, text cards, multiple characters overlapped on the same screen from multiple people’s memories, bits of animation that are just fill color with no lines, kids’ illustrations, hand drawings, abstract paint, &c.

Contrast this chaos with the examples above. In those it feels like the art direction may have gotten stranger, but third-person narrative is still happening. Bowman is trying to figure out what he’s seeing. Victims are being eaten. Cooper is sending messages. David is fighting for control.

In The End of Evangelion, we’re seeing the chaos of billions of individuals’ memories and perceptions dissolving and fusing into a new thing. It’s more of a narrative-less, all-of-humanity POV impression. Maybe I’m hair-splitting, but it does feel different.

Now that I’ve corralled those examples and that one near-example, I want to name it.

Naming it

I did a lot of web searching and I couldn’t find a fitting, extant descriptor in film theory for this kind of thing. Important caveat: I have never explicitly studied film theory, so I don’t have the benefit of a community of practice from whom I might have learned of one. But I can use Google and skip past the enshittified results to find some real ones. There were maybe half a dozen candidates. But none of them fit. So I have to coin something. I propose calling this a…

Text graphic displaying the phrase 'NARRATIVE PROXY SEQUENCE' in a stylized black font.
Admittedly setting the damned thing in Churchward Roundsquare does nothing to make it more accessible, but it’s the movie typeface, so…

(If that image didn’t load, know that it read, “narrative proxy sequence.”)

It’s a sequence because it’s unlike the rest of the narrative. It’s special. It’s a “narrative proxy” because while it’s still describing things that happen in the story, it’s using stand-ins for otherwise-unrenderable diegetic elements.

  • We can’t experience the cosmic mind-expansion that Bowman is experiencing, but we can deal with an antique bedroom set on an illuminated grid.
  • We can’t face the man-hunting anglerfish, but we can deal with a beautiful woman and an inky floor.
  • We can’t conceive a tesseract, but we can deal with a twisty library.
  • We can’t perceive a mental battle between omega-level telepaths, but we can go with a dance battle.
  • We can’t face whatever an Andromedan and their evil human-extinction interface is, but we can deal with a bubble map and a pop.

There’s one aspect that I failed to capture in the phrase “narrative proxy sequence”. In the examples, the “grand imagier” behind the film has decided that we couldn’t cope with a literal depiction of the diegetic events—or even that it’s futile to try—so get in, loser, we’re going with this instead. Compare the trope of flashbacks. They’re not happening at the moment they’re remembered, but they’re shown as if the imagier’s camera was there, then. That’s different.

To capture this extra sense, I thought of prepending “mind-sparing”, “cognizable”, “renderable”, “semidiegetic”, or “perceptualized”, but each of them was either too wan or academic or misleading, so I left the intent part out to be inferred from context. Plus it just made the phrase too long. “Perceptualized narrative proxy sequence”, while more precise, is almost double the length. It’s just too much. So let’s go with the shorter phrase.

OK. What does this mean for sci-fi interfaces?

What’s important to us for this blog’s purposes is: When discussing an interface in a narrative proxy sequence, we don’t have access to any of the usual tools. What are the outputs? (We’re not sure.) What are the controls and how do you manipulate them? (We only have a guess.) Does it all fit together? (We can’t say.)

All of these questions are much more possible when we’ve got a literal depiction of a speculative interface. And so though my usual art-criticism stance is to push through and presume the interface is exactly as it appears, that analysis becomes prohibitively convoluted when we’re looking at a narrative proxy. We have to admit that it’s unavailable to the close-read analysis that this blog does.

It doesn’t make it any less awesome, though. So I’m giving it this award.

If you know of other sci-fi examples of this niche trope, feel free to comment. And thank you, Bugonia, for giving us something to think about and giving us this marvelous, funny, terrifying moment of interface horror.

*pop*

The word 'BUGONIA' is displayed in a stylized font featuring various geometric shapes, set against a black background.


Next up: The best robots of 2025

Sci-fi Spacesuits: Identification

Spacesuits are functional items, built largely identically to each other, adhering to engineering specifications rather than individualized fashion. A resulting problem is that it might be difficult to distinguish between multiple, similarly-sized individuals wearing the same suits. This visual identification problem might be small in routine situations:

  • (Inside the vehicle:) Which of these suits is mine?
  • What’s the body language of the person currently speaking on comms?
  • (With a large team performing a manual hull inspection:) Who is that approaching me? If it’s the Fleet Admiral I may need to stand and salute.

But it could quickly become vital in others:

  • Whose body is that floating away into space?
  • Ensign Smith just announced they have a tachyon bomb in their suit. Which one is Ensign Smith?
  • Who is this on the security footage cutting the phlebotinum conduit?

There are a number of ways sci-fi has solved this problem.

Name tags

Especially in harder sci-fi shows, spacewalkers have a name tag on the suit. The type is often so small that you’d need to be quite close to read it, and weird convention has these tags in all-capital letters even though lower-case is easier to read, especially in low light and especially at a distance. And the tags are placed near the breast of the suit, so the spacewalker would also have to be facing you. So all told, not that useful on actual extravehicular missions.

Faces

Screen sci-fi usually gets around the identification problem by having transparent visors. In B-movies and sci-fi illustrations from the 1950s and 60s, the fishbowl helmet was popular, though of course it offered little protection, little light control, and weird audio effects for the wearer. Blockbuster movies were mostly a little smarter about it.

1950s Sci-Fi illustration by Ed Emshwiller
c/o Diane Doniol-Valcroze

Seeing faces allows other spacewalkers/characters (and the audience) to recognize individuals and, to a lesser extent, how their faces synch with their voice and movement. People are generally good at reading the kinesics of faces, so there’s a solid rationale for trying to make transparency work.

Face + illumination

As of the 1970s, filmmakers began to add interior lights that illuminate the wearer’s face. This makes lighting them easier, but face illumination is problematic in the real world. If you illuminate the whole face including the eyes, then the spacewalker is partially blinded. If you illuminate the whole face but not the eyes, they get that whole eyeless-skull effect that makes them look super spooky. (Played to effect by director Scott and cinematographer Vanlint in Alien, see below.)

Identification aside: Transparent visors are problematic for other reasons. Permanently-and-perfectly transparent glass risks the spacewalker getting damaged by infrared light or blinded by sudden exposure to nearby suns, or explosions, or engine exhaust ports, etc. etc. This is why NASA helmets have the gold layer on their visors: it lets in visible light and blocks nearly all infrared.

Astronaut Buzz Aldrin walks on the surface of the moon near the leg of the lunar module Eagle during the Apollo 11 mission.

Image Credit: NASA (cropped)

Only in 2001 does the survey show a visor with a manually-adjustable translucency. You can imagine that this would be safer if it were automatic. Electronics can respond much faster than people, changing in near-real time to keep sudden environmental illumination within safe human ranges.

You can even imagine smarter visors that selectively dim regions (rather than the whole thing), to just block out, say, the nearby solar flare, or to expose the faces of two spacewalkers talking to each other, but I don’t see this in the survey. It’s mostly just transparency and hope nobody realizes these eyeballs would get fried.
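For the curious, the core of such an auto-dimming visor is just a clamped feedback rule: pick a transmittance so the light reaching the eyes stays near a comfortable level, within whatever range the hardware can physically achieve. A minimal sketch, with invented numbers standing in for real eye-safety limits:

```python
def visor_transmittance(ambient_lux, target_lux=500.0, t_min=0.001, t_max=0.95):
    """Pick a visor transmittance so the light reaching the wearer's eyes
    stays near target_lux, clamped to the visor's physical range.
    All constants here are illustrative, not real eye-safety numbers."""
    if ambient_lux <= 0:
        return t_max  # darkness: go as clear as the visor allows
    ideal = target_lux / ambient_lux
    return max(t_min, min(t_max, ideal))
```

A nearby solar flare at a billion lux slams the visor to its darkest setting; a dim cabin lets it go fully clear. The region-selective version I describe above would just run this rule per pixel of a dimmable matrix rather than for the whole visor.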

So, though seeing faces helps solve some of the identification problem, transparent enclosures don’t make a lot of sense from a real-world perspective. But it’s immediate and emotionally rewarding for audiences to see the actors’ faces, and with easy cinegenic workarounds, I suspect identification-by-face is here in sci-fi for the long haul, at least until a majority of audiences experience spacewalking for themselves and realize how much of an artistic convention this is.

Color

Other shows have taken the notion of identification further, and distinguished wearers by color. Mission to Mars, Interstellar, and Stowaway did this similar to the way NASA does it, i.e. with colored bands around upper arms and sometimes thighs.

Destination Moon, 2001: A Space Odyssey, and Star Trek (2009) provided spacesuits in entirely different colors. (Star Trek even equipped the suits with matching parachutes, though for the pedantic, let’s acknowledge these were “just” upper-atmosphere suits.) The full-suit color certainly makes identification easier at a distance, but seems like it would be more expensive and introduce albedo differences between the suits.

One other note: if the visor is opaque and characters are only relying on the color for identification, it becomes easier for someone to don the suit and “impersonate” its usual wearer to commit spacewalking crimes. Oh. My. Zod. The phlebotinum conduit!

According to the Colour Blind Awareness organisation, colour blindness (color vision deficiency) affects approximately 1 in 12 men and 1 in 200 women in the world, so color-coding is not without its problems, and might need to be combined with bold patterns to be more broadly accessible.

What we don’t see

Heraldry

Blog-from-another-mog Project Rho tells us that books have suggested heraldry as spacesuit identifiers. And while it could be a device placed on the chest like medieval suits of armor, it might be made larger, higher contrast, and wraparound to be distinguishable from farther away.

Directional audio

Indirect, but if the soundscape inside the helmet can be directional (like a personal surround sound), then different voices can come from the direction of the speaker, helping uniquely identify them by position. If there are two close together and no others to be concerned about, their directions can be shifted to increase their spatial distinction. When no one is speaking, leitmotifs assigned to each spacewalker, with volumes corresponding to distance, could help maintain field awareness.
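To make that concrete, here’s a minimal constant-power stereo pan sketch. (The function name and the two-channel simplification are mine; a real helmet would more likely use full HRTF-based spatialization, but the principle of mapping a crewmate’s bearing to channel gains is the same.)

```python
import math

def pan_gains(bearing_deg):
    """Constant-power stereo pan for a voice at bearing_deg relative to the
    listener's facing (-90 = hard left, 0 = dead ahead, +90 = hard right).
    Returns (left_gain, right_gain) with L^2 + R^2 == 1, so perceived
    loudness stays constant as the speaker drifts around you."""
    b = max(-90.0, min(90.0, bearing_deg))
    p = (b + 90.0) / 180.0          # 0.0 = full left, 0.5 = center, 1.0 = full right
    return math.cos(p * math.pi / 2), math.sin(p * math.pi / 2)
```

A crewmate dead ahead gets equal gains in both ears; one at your left shoulder collapses to the left channel. Layer per-person leitmotifs through the same function, scaled by distance, and you get the passive field awareness described above.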

HUD Map

Gamers might expect a map in a HUD that showed the environment and icons for people with labeled names.

Search

If the spacewalker can have private audio, shouldn’t she just be able to ask, “Who’s that?” while looking at someone and hear a reply or see a label on a HUD? It would also be very useful if a spacewalker could ask for lights to be illuminated on the exterior of another’s suit, especially if that someone is floating unconscious in space.

Mediated Reality Identification

Lastly, I didn’t see any mediated reality assists: augmented or virtual reality. Imagine a context-aware and person-aware heads-up display that labeled the people in sight. Technological identification could also incorporate in-suit biometrics to avoid the spacesuit-as-disguise problem. The helmet camera confirms that the face inside Sergeant McBeef’s suit is actually that dastardly Dr. Antagonist!

We could also imagine that the helmet could be completely enclosed, but be virtually transparent. Retinal projectors would provide the appearance of other spacewalkers—from live cameras in their helmets—as if they had fishbowl helmets. Other information would fit the HUD depending on the context, but such labels would enable identification in a way that is more technology-forward and cinegenic. But, of course, all mediated solutions introduce layers of technology that also introduces more potential points of failure, so not a simple choice for the real-world.

Oh, that’s right, he doesn’t do this professionally.

So, as you can read, there’s no slam-dunk solution that meets both cinegenic and real-world needs. Given that so much of our emotional experience is informed by the faces of actors, I expect to see transparent visors in sci-fi for the foreseeable future. But it’s ripe for innovation.

Sci-fi Spacesuits: Moving around

Whatever it is, it ain’t going to construct, observe, or repair itself. In addition to protection and provision, suits must facilitate the reason the wearer has dared to go out into space in the first place.

One of the most basic tasks of extravehicular activity (EVA) is controlling where the wearer is positioned in space. The survey shows several types of mechanisms for this. First, if your EVA never needs you to leave the surface of the spaceship, you can go with mountaineering gear or sticky feet. (Or sticky hands.) We can think of maneuvering through space as similar to piloting a craft, but the outputs and interfaces have to be made wearable, like wearable control panels. We might also expect to see some tunnel in the sky displays to help with navigation. We’d also want to see some AI safeguard features, to return the spacewalker to safety when things go awry. (Narrator: We don’t.)

Mountaineering gear

In Stowaway (2021) astronauts undertake unplanned EVAs with carabiners and gear akin to what mountaineers use. This makes some sense, though even this equipment would need to be modified for use with astronauts’ thick gloves.

Stowaway (2021) Drs Kim and Levinson prepare to scale to the propellant tank.

Sticky feet (and hands)

Though it’s not extravehicular, I have to give a shout-out to 2001: A Space Odyssey (1968), where we see a flight attendant manage her position in the microgravity with special shoes that adhere to the floor. It’s a lovely example of a competent Hand Wave. We don’t need to know how it works because it says, right there, “Grip shoes.” Done. Though props to the actress Heather Downham, who had to make up a funny walk to illustrate that it still isn’t like walking on earth.

2001: A Space Odyssey (1968)
Pan Am: “Thank god we invented the…you know, whatever shoes.”

With magnetic boots, seen in Destination Moon, the wearer simply walks around and manages the slight awkwardness of having to pull a foot up with extra force, and have it snap back down on its own.

Battlestar Galactica added magnetic handgrips to augment the control provided by magnetized boots. With them, Sergeant Mathias is able to crawl around the outside of an enemy vessel, inspecting it. While crawling, she holds grip bars mounted to circles that contain the magnets. A mechanism for turning the magnet off is not seen, but like these portable electric grabbers, it could be as simple as a thumb button.

Iron Man also had his Mark 50 suit form stabilizing suction cups before cutting a hole in the hull of the Q-Ship.

Avengers: Infinity War (2018)

In the electromagnetic version of boots, seen in Star Trek: First Contact, the wearer turns the magnets on with a control strapped to their thigh. Once on, the magnetization seems to be sensitive to the wearer’s walk, automatically lessening when the boot is lifted off. This gives the wearer something of a natural gait. The magnetism can be turned off again to be able to make microgravity maneuvers, such as dramatically leaping away from Borg minions.

Star Trek: Discovery also included this technology, but with what appears to be a gestural activation and cool glowing red dots on the sides and back of the heel. The back of each heel has a stack of red lights that count down to when they turn off, as, I guess, a warning to anyone around them that they’re about to be “air” borne.

Quick “gotcha” aside: neither Destination Moon nor Star Trek: First Contact bothers to explain how characters are meant to be able to kneel while wearing magnetized boots. Yet this very thing happens in both films.

Destination Moon (1950): Kneeling on the surface of the spaceship.
Star Trek: First Contact (1996): Worf rises from operating the maglock to defend himself.

Controlled Propellant

If your extravehicular task has you leaving the surface of the ship and moving around space, you likely need a controlled propellant. This is seen only a few times in the survey.

In the film Mission to Mars, the manned maneuvering unit, or MMU, is based loosely on NASA’s MMU. A nice thing about the device is that unlike the other controlled-propellant interfaces, we can actually see some of the interaction and not just the effect. The interfaces are subtly different in that the Mission to Mars spacewalkers travel forward and backward by angling the handgrips forward and backward rather than with a joystick on an armrest. This seems like a closer mapping, but also seems more prone to error from accidentally touching or bumping into something.

The plus side is an interface that is much more cinegenic, where the audience is more clearly able to see the cause and effect of the spacewalker’s interactions with the device.

If you have propellant in a Mohs 4 or 5 film, you might need to acknowledge that propellant is a limited resource. Over the course of the same (heartbreaking) scene shown above, we see an interface where one spacewalker monitors his fuel, and another where a spacewalker realizes that she has traveled as far as she can with her MMU and still return to safety.

Mission to Mars (2000): Woody sees that he’s out of fuel.

For those wondering, Michael Burnham’s flight to the mysterious signal in that pilot uses propellant, but is managed and monitored by controllers on Discovery, so it makes sense that we don’t see any maneuvering interfaces for her. We could dive in and review the interfaces the bridge crew uses (and try to map that onto a spacesuit), but we only get snippets of these screens and see no controls.

Iron Man’s suits employ some phlebotinum propellant that lasts forever, fits inside his tailored suit, and is powerful enough to achieve escape velocity.

Avengers: Infinity War (2018)

All in all, though sci-fi seems to understand the need for characters to move around in spacesuits, very little attention is given to the interfaces that enable it. The Mission to Mars MMU is the only one with explicit attention paid to it, and it is closely derived from NASA models. Should the needs of the plot allow, it’s an opportunity for filmmakers to give this topic some attention.

Sci-fi Spacesuits: Interface Locations

A major concern of the design of spacesuits is basic usability and ergonomics. Given the heavy material needed in the suit for protection and the fact that the user is wearing a helmet, where does a designer put an interface so that it is usable?

Chest panels

Chest panels are those that require the wearer only to look down to manipulate them. They are in easy range of motion for the wearer’s hands. The main problem with this location is the hard trade-off between visibility and bulkiness.

Arm panels

Arm panels are those that are—brace yourself—mounted to the forearm. This placement is within easy reach, but it does mean that the arm on which the panel sits cannot be otherwise engaged, and it seems like it would be prone to accidental activation. Keeping components small and thin enough to be unobtrusive is a greater technological challenge than with a chest panel. It also poses interface challenges: information and controls must squeeze into a very small, horizontal format. The survey shows only three arm panels.

The first is the numerical panel seen in 2001: A Space Odyssey (thanks for the catch, Josh!). It provides discrete and easy input, but no feedback. There are inter-button ridges to kind of prevent accidental activation, but they’re quite subtle and I’m not sure how effective they’d be.

2001: A Space Odyssey (1968)

The second is an oversimplified control panel seen in Star Trek: First Contact, where the output is simply the unlabeled lights underneath the buttons indicating system status.

The third is the mission computers seen on the forearms of the astronauts in Mission to Mars. These full color and nonrectangular displays feature rich, graphic mission information in real time, with textual information on the left and graphic information on the right. Input happens via hard buttons located around the periphery.

Side note: One nifty analog interface is the forearm mirror. This isn’t an invention of sci-fi, as it is actually on real world EVAs. It costs a lot of propellant or energy to turn a body around in space, but spacewalkers occasionally need to see what’s behind them and the interface on the chest. So spacesuits have mirrors on the forearm to enable a quick view with just arm movement. This was showcased twice in the movie Mission to Mars.

HUDs

The easiest place to see something is directly in front of your eyes, i.e. in a heads-up display, or HUD. HUDs are seen frequently in sci-fi, and increasingly in sci-fi spacesuits as well. One example comes from Sunshine. This HUD provides a real-time view of each individual to whom the wearer is talking while out on an EVA, and a real-time visualization of dangerous solar winds.

These particular spacesuits are optimized for protection very close to the sun, and the visor is limited to a transparent band set near eye level. These spacewalkers couldn’t look down to see any interfaces on the suit itself, so the HUD makes a great deal of sense here.

Star Trek: Discovery’s pilot episode included a sequence that found Michael Burnham flying 2,000 meters away from her ship to investigate a mysterious MacGuffin. The HUD helped her with wayfinding, navigation, tracking time before lethal radiation exposure (a biological concern; see the prior post), and even scanning things in her surroundings, most notably a Klingon warrior wearing unfamiliar armor. Reference information sits on the periphery of Michael’s vision, while the augmentations appear mapped onto her view. (Noting this raises the same issues of binocular parallax seen in the Iron HUD.)

Iron Man’s Mark L armor was able to fly in space, and the Iron HUD came right along with it. Though not designed/built for space, it’s a general AI HUD assisting its spacewalker, so worth including in the sample.

Avengers: Infinity War (2018)

Aside from HUDs, what we see in the survey is similar to what exists in real-world extravehicular mobility units (EMUs), i.e. chest panels and arm panels.

Inputs illustrate paradigms

Physical controls range from the provincial switches and dials on the cigarette-girl foldout control panels of Destination Moon, to the simple and restrained numerical button panel of 2001, to the strangely unlabeled buttons of Star Trek: First Contact’s arm panels (above), to the ham-handed touch screens of Mission to Mars.

Destination Moon (1950)
2001: A Space Odyssey (1968)

As the pictures above reveal, the input panels reflect the familiar technology of the time the movie or television show was created. The 1950s were still rooted in mechanistic paradigms, the late-1960s interfaces were electronic pushbuttons, and the 2000s had touch screens and miniaturized displays.

Real world interfaces

For comparison and reference, NASA’s EMU has a control panel on the front, called the Display and Control Module (DCM), where most of the controls for the EMU sit.

The image shows that these inputs are very different from what we see as inputs in film and television. The controls are large for easy manipulation even with thick gloves, distinct in type and location for confident identification, analog to minimize failure points and allow in-field debugging and maintenance, and well protected from accidental actuation by guards and deep recesses. The digital display faces up for the convenience of the spacewalker. The interface text is printed backwards so it can be read with the wrist mirror.

The outputs are fairly minimal. They consist of the pressure suit gauge, audio warnings, and the 12-character alphanumeric LCD panel at the top of the DCM. No HUD.

The gauge is mechanical and standard for its type. The audio warnings are a simple warbling tone when something’s awry. The LCD panel provides information about 16 different values that the spacewalker might need, including estimated time of oxygen remaining, actual volume of oxygen remaining, pressure (redundant to the gauge), battery voltage or amperage, and water temperature. To cycle up and down the list, she presses the Mode Selector Switch forward and backward. She can adjust the contrast using the Display Intensity Control potentiometer on the front of the DCM.

A NASA image tweeted in 2019.

The DCMs referenced in the post are from older NASA documents. In more recent images on NASA’s social media, it looks like there have been significant redesigns to the DCM, but so far I haven’t seen details about the new suit’s controls. (Or about how that tiny thing can house all the displays and controls it needs to.)

Sci-fi Spacesuits: Protecting the Wearer from the Perils of Space

Space is incredibly inhospitable to life. It is a near-perfect vacuum, lacking air, pressure, and warmth. It is full of radiation that can poison us, light that can blind and burn us, and a darkness that can disorient us. If any hazardous chemicals such as rocket fuel have gotten loose, they need to be kept safely away. There are few of the ordinary spatial cues and tools that humans use to orient and control their position. There is free-floating debris, ranging from bullet-like micrometeorites to gas and rock planets that can pull us toward them to smash into their surfaces or burn up in their atmospheres. There are astronomical bodies such as stars and black holes that can boil us or crush us into a singularity. And perhaps most terrifyingly, there is the very real possibility of drifting off into the expanse of space to asphyxiate, starve (though biology will be covered in another post), freeze, and/or go mad.

The survey shows that sci-fi has addressed most of these perils at one time or another.

Alien (1979): Kane’s visor is melted by a facehugger’s acid.

Interfaces

Despite the acknowledgment of all of these problems, the survey reveals only two interfaces related to spacesuit protection.

Battlestar Galactica (2004) handled radiation exposure with a simple, chemical output device. As CAG Lee Adama explains in “The Passage,” the badge, worn on the outside of the flight suit, slowly turns black with radiation exposure. When the badge turns completely black, a pilot is removed from duty for radiation treatment.

This is something of a stretch because it has little to do with the spacesuit itself, and it is strictly an output device. (Noting that proper interaction requires human input and state changes.) The badge is not permanently attached to the suit, and it is used inside a spaceship while wearing a flight suit. The flight suit is meant to act as a very short term extravehicular mobility unit (EMU), but is not a spacesuit in the strict sense.

The other protection related interface is from 2001: A Space Odyssey. As Dr. Dave Bowman begins an extravehicular activity to inspect seemingly-faulty communications component AE-35, we see him touch one of the buttons on his left forearm panel. Moments later his visor changes from being transparent to being dark and protective.

We should expect to see few interfaces, but still…

As a quick and hopefully obvious critique, Bowman’s visor-darkening function shouldn’t have an interface. It should be automatic (not even agentive), since events can happen much faster than human response times. And, now that we’ve said that part out loud, maybe it’s true that all protection features of a suit should be automatic. Interfaces to switch them on pre-emptively or, for exceptional reasons, turn them off manually should be the rarity.

But it would be cool to see more protective features appear in sci-fi spacesuits. An onboard AI detects an incoming micrometeorite storm. Does the HUD show how much time is left? What are the wearer’s options? Can she work through scenarios of action? Can she merely speak which course of action she wants the suit to take? If a wearer is kicked free of the spaceship, the suit should have a homing feature. Think Doctor Strange’s Cloak of Levitation, but for astronauts.

As always, if you know of other examples not in the survey, please put them in the comments.

Bot envoys (for extremely-high-latency communications)

While recording a podcast with the guys at DecipherSciFi about the twee(n) love story The Space Between Us, we spent some time kvetching about how silly it was that many of the scenes involved Gardner, on Mars, in a real-time text chat with a girl named Tulsa, on Earth. It’s partly bothersome because throughout the rest of the movie, the story tries for a Mohs sci-fi hardness of, like, 1.5, somewhere between Real Life and Speculative Science, so it can’t really excuse itself through the Applied Phlebotinum that, say, Star Wars might use. The rest of the film feels like it’s trying to have believable science, but during these scenes it just whistles, looks the other way, and hopes you don’t notice that the two lovebirds are breaking the laws of physics as they swap flirt emoji.

Hopefully unnecessary science brief: Mars and Earth are far away from each other. Even if the communications transmissions are sent at light speed between them, it takes much longer than the 1 second of response time required to feel “instant.” How much longer? It depends. The planets orbit the sun at different speeds, so they aren’t a constant distance apart. At their closest, it takes light about 3 minutes to travel between Mars and Earth, and at their farthest—while not being blocked by the sun—it takes about 21 minutes. A round trip is double that. So nothing akin to real-time chat is going to happen.
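As a sanity check on those numbers, the delay is just distance divided by the speed of light. A minimal sketch (the distances are round approximations, not precise ephemeris values):

```python
# Back-of-envelope Mars-Earth light delay. Distances are rough
# approximations for closest approach and an unobstructed farthest
# line of sight, not precise ephemeris values.
C = 299_792_458  # speed of light, m/s

def one_way_delay_minutes(distance_m: float) -> float:
    """One-way light travel time, in minutes."""
    return distance_m / C / 60

CLOSEST_M = 54.6e9   # ~54.6 million km
FARTHEST_M = 378e9   # ~378 million km

for label, d in [("closest", CLOSEST_M), ("farthest", FARTHEST_M)]:
    one_way = one_way_delay_minutes(d)
    print(f"{label}: one-way ~{one_way:.0f} min, round trip ~{2 * one_way:.0f} min")
```

At the farthest distance, a question and its answer are about 42 minutes apart, which is why no amount of fast typing will save a “real-time” chat.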

But I’m a designer, a sci-fi apologist, and a fairly talented backworlder. I want to make it work. And perhaps because of my recent dive into narrow AI, I began to realize that, well, in a way, maybe it could. It just requires rethinking what’s happening in the chat.

Let’s first acknowledge that we solved long-distance communications a long time ago. Gardner and Tulsa could just, you know, swap letters or, like the characters in 2001: A Space Odyssey, recorded video messages. There. Problem solved. It’s not real-time interaction, but it gets the job done. But kids aren’t so much into pen pals anymore, and we have to acknowledge that Gardner doesn’t want to tip his hand that he’s on Mars (it’s a grave NASA secret, for plot reasons). So the question is how we could make it work so that it feels like a real-time chat to her. Let’s first solve it for the case where he’s trying to disguise his location, and then consider how it might work when both participants are in the know.

Fooling Tulsa

Since 1984 (ping me, as always, if you can think of an earlier reference) sci-fi has had the notion of a digitally replicated personality. Here I’m thinking of Gibson’s Neuromancer and the ROM construct on which Dixie Flatline “lives.” It houses an interactive digital personality of a person, built out of a lifetime of digital traces left behind: social media, emails, photos, video clips, connections, expressed interests, etc. Anyone in that story could hook the construct up to a computer and have conversations with the personality housed there that closely approximate how that person would respond (or would have responded) in real life.

Listen to the podcast for a mini-rant on translucent screens, followed by apologetics.

Is this likely to actually happen? Well, it kind of already is. Here in the real world, we’re seeing early, crude “me bots” populate the net, taking baby steps toward the same thing. (See MessinaBot, https://bottr.me/, https://sensay.it/, and the forthcoming http://bot.me/.) By the time we actually get a colony to Mars (plus the 16 years for Gardner to mature), mebot technology should be able to stand in for him convincingly enough in basic online conversations.

Training the bot

So in the story, he would look through cached social media feeds to find a young lady he wanted to strike up a conversation with, and then ask his bot-maker engine to look at her public social media and build a herBot with whom he could chat in order to train his own bot for conversations. During this training, the TulsaBot would chat about topics of interest gathered from her social media. He could pause the conversation to look up references or prepare convincing answers to the trickier questions TulsaBot asks. He could also add some topics to the conversation they might have in common, and questions he might want to ask her. By doing this, his GardnerBot isn’t just some generic thing he sends out to troll any young woman with. It’s a more genuine, interactive first “letter,” sent directly to her. He sends this GardnerBot to servers on Earth.

A demonstration of a chat with a short Martian delay. (Yes, it’s an animated gif.)

Launching the bot

GardnerBot would wait until it saw Tulsa online and strike up the conversation with her. It would send a signal back to Gardner that the chat has begun so he can sit on his end and read a space-delayed transcript of the chat. GardnerBot would try its best to manage the chat based on what it knows about awkward teen conversation, Turing test best practices, what it knows about Gardner, and how it has been trained specifically for Tulsa. Gardner would assuage some of his guilt by having it dodge and carefully frame the truth, but not outright lie.

Buying time

If during the conversation she raised a topic or asked a question for which GardnerBot was not trained, it could promise an answer later, and then deflect, knowing that it should pad the conversation in the meantime:

  • Ask her to answer the same question first, probing into details to understand rationale and buy more time
  • Dive down into a related subtopic in which the bot has confidence, and which promises to answer the initial question
  • Deflect conversation to another topic in which it has a high degree of confidence and lots of detail to share
  • Text a story that Gardner likes to tell that is known to take about as long as the current round-trip signal

Example

  • TULSA
  • OK, here’s one: If you had to live anywhere on Earth where they don’t speak English, where would you live?

GardnerBot has a low confidence that it knows Gardner’s answer. It could respond…

  1. (you first) “Oh wow. That is a tough one. Can I have a couple of minutes to think about it? I promise I’ll answer, but you tell me yours first.”
  2. (related subtopic) “I’m thinking about this foreign movie that I saw one time. There were a lot of animals in it and a waterfall. Does that sound familiar?”
  3. (new topic) “What? How am I supposed to answer that one? 🙂 Umm…While I think about it, tell me…what kind of animal would you want to be reincarnated as. And you have to say why.”
  4. (story delay) “Ha. Sure, but can I tell a story first? When I was a little kid, I used to be obsessed with this music that I would hear drifting into my room from somewhere around my house…”
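To make the mechanism concrete, here is a minimal sketch of that tactic selection. Everything in it (the function name, the 0.8 confidence threshold, the tactic labels) is invented for illustration; it is not taken from any real chatbot framework.

```python
import random

ROUND_TRIP_SECONDS = 42 * 60  # worst-case Earth-Mars signal round trip

def pick_tactic(confidence: float, longest_story_s: int = 0) -> str:
    """Pick how the bot should respond to a question.

    confidence: how sure the bot is that it knows Gardner's answer (0-1).
    longest_story_s: length of the longest pre-recorded story available.
    """
    if confidence >= 0.8:
        return "answer"  # confident enough to just respond
    tactics = ["you_first", "related_subtopic", "new_topic"]
    # A stock story is only useful if it can cover the signal round trip.
    if longest_story_s >= ROUND_TRIP_SECONDS:
        tactics.append("story_delay")
    return random.choice(tactics)
```

A real GardnerBot would also weigh the stakes of the topic and how recently each tactic was used, so Tulsa doesn’t notice a pattern.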

Lagged-realtime training

Each of those responses is a delay tactic that buys time for the chat transcript to travel to Mars, where Gardner can do some bot training on the topic. He would be watching the time-delayed transcript of the chat, keeping an eye on an adjacent track of data containing meta information about what the bot is doing, conversationally speaking. When he saw it hit a low-confidence or high-stakes topic and deflect, a chat window would open for him to tell GardnerBot what it should do or say.

  • To the stalling GARDNERBOT…
  • GARDNER
  • For now, I’m going to pick India, because it’s warm and I bet I would really like the spicy food and the rain. Whatever that colored powder festival is called. I’m also interested in their culture, Bollywood, and Hinduism.
  • As he types, the message travels back to Earth where GardnerBot begins to incorporate his answers to the chat…
  • At a natural break in the conversation…
  • GARDNERBOT
  • OK. I think I finally have an answer to your earlier question. How about…India?
  • TULSA
  • India?
  • GARDNERBOT
  • Think about it! Running around in warm rain. Or trying some of the street food under an umbrella. Have you seen YouTube videos from that festival with the colored powder everywhere? It looks so cool. Do you know what it’s called?

Note that the bot could easily look it up and replace “that festival with the colored powder everywhere” with “Holi, the Festival of Colors,” but it shouldn’t. Gardner doesn’t know that fact, so the bot shouldn’t pretend it knows it. Cyrano de Bergerac software that made him sound more eloquent, intelligent, or charming than he really is in order to woo her would be a worse kind of deception. Gardner wants to hide where he is, not who he is.

That said, Gardner should be able to direct the bot, to change its tactics. “OMG. GardnerBot! You’re getting too personal! Back off!” It might not be enough to cover a flub made 42 minutes ago, but of course the bot should know how to apologize on Gardner’s behalf and ask conversational forgiveness.

Gotta go

If the signal to Mars got interrupted or the bot got into too much trouble with pressure to talk about low confidence or high stakes topics, it could use a believable, pre-rolled excuse to end the conversation.

  • GARDNERBOT
  • Oh crap. Will you be online later? I’ve got chores I have to do.

Then, Gardner could chat with TulsaBot on his end without time pressure to refine GardnerBot per their most recent topics, which would be sent back to Earth servers to be ready for the next chat.

In this way he could have “chats” with Tulsa that are run by a bot but quite custom to the two of them. It’s really Gardner’s questions, topics, jokes, and interest, but a bot-managed delivery of these things.

So it could work. But does it fit the movie? I think so. It would be believable because he’s a nerd raised by scientists. He made his own robot; why not his own bot?

From the audience’s perspective, it might look like they’re chatting in real time, but subtle cues on Gardner’s interface reward the diligent with hints that he’s watching a time delay. Maybe the chat we see in the film is even just cleverly edited to remove the bots.

How he manages to hide this data stream from NASA to avoid detection is another question better handled by someone else.


An honest version: bot envoy

So that solves the logic from the movie’s perspective, but of course it’s still squickish. He is ultimately deceiving her. Once he returns to Mars and she is back on Earth, could they still use the same system, but with full knowledge of its botness? Would real-world astronauts use it?

Would it be too fake?

I don’t think it would be too fake. Sure, the bot is not the real person, but neither are the pictures, videos, and letters we fondly keep with us as we travel far from home. We know they’re just simulacra, souvenir likenesses of someone we love. We don’t throw these away in disgust for being fakes. They are precious because they are reminders of the real thing. The themBot would be, too.

  • GARDNER
  • Hey, TulsaBot. Remember when we were knee deep in the Pacific Ocean? I was thinking about that today.
  • TULSABOT
  • I do. It’s weird how it messes with your sense of balance, right? Did you end up dreaming about it later? I sometimes do after being in waves a long time.
  • GARDNER
  • I can’t remember, but someday I hope to come back to Earth and feel it again. OK. I have to go, but let me know how training is going. Have you been on the G machine yet?

Nicely, you wouldn’t need stall tactics in the honest version. Or maybe it uses them, but can be called out.

  • TULSA
  • GardnerBot, you don’t have to stall. Just tell Gardner to watch Mission to Mars and update you. Because it’s hilarious and we have to go check out the face when I’m there.

Sending your loved one the transcript will turn it into a kind of love letter. The transcript could even be appended with a letter that jokes about the bot. The example above was too short for any semi-realtime insertions in the text, but maybe that would encourage longer chats. Then the bot serves as charming filler, covering the delays between real contact.

Ultimately, yes, I think we can backworld what looks physics-breaking into something that makes sense, and might even be a new kind of interactive memento between interplanetary sweethearts, family, and friends.

The Fermi Paradox and Sci-fi

In the prior post we introduced the Fermi paradox—or Fermi question—before an overview of the many hypotheses that try to answer the question, and ended noting that we must consider what we are to do, given the possibilities. In this post I’m going to share which of those hypotheses that screen-based sci-fi has chosen to tell stories about.

First we should note that screen sci-fi (this is, recall, a blog that concerns itself with sci-fi in movies and television) has, since the very, very beginning, embraced questionably imperialist thrills. In Le Voyage dans la Lune, Georges Méliès’ professor-astronomers encounter a “primitive” alien culture when they land on Earth’s moon, replete with costumes, dances, and violent responses to accidental manslaughter. Hey, we get it, aliens are part of why audiences and writers are in it: as a thin metaphor for speculative human cultures that bring our own into relief. So, many properties are unconcerned with the *yawn* boring question of the Fermi paradox, instead imagining a diegesis with a whole smörgåsbord of alien civilizations that are explicitly engaged with humans, at times killing, trading with, or kissing us, depending on which story you ask.


But some screen sci-fi does occasionally concern itself with the Fermi question.

Which are we telling stories about?

Screen sci-fi is a vast library, and more is being produced all the time, so it’s hard to give an exact breakdown. But if Drake can do it for Fermi’s question, we can at least ballpark it, too. To do this, I took a look at every sci-fi property in the survey that produced Make It So and that has been extended here on scifiinterfaces.com, and I tallied the breakdown between aliens, no aliens, and silent aliens. Here’s the Google Sheet with the data. And here’s what we see.


No aliens is the clear majority of stories! This is kind of surprising for me, since when I think of sci-fi my brain pops bug eyes and tentacles alongside blasters and spaceships. But it also makes sense because a lot of sci-fi is near future or focused on the human condition.

Some notes about these numbers.

I counted all the episodes or movies that exist in a single diegesis as one. So the two single largest properties in the sci-fi universe, Star Trek and Star Wars, only count once each. That seems unfair, since we’ve spent lots more total minutes of our lives with C3PO and the Enterprise crews than we have with Barbarella. This results in low-seeming numbers: there are only 53 diegeses at the time of this writing, even though they span thousands of hours of shows. But all that said, this is a ballpark problem, meant to tally rationales across diegeses, so we’ll deal with numbers that skew differently than our instincts would suggest. Someone else with a bigger budget of time or money can try to get exhaustive with the numbers, attempt to normalize for total minutes of media produced, again for the number of alien species referenced, and again for how popular the particular show was. Those numbers may be different.


Additionally, the categorizations can be ambiguous. Should Star Trek go in “Silent Aliens” because of the Prime Directive, or under “Aliens” since the show has lots and lots and lots of aliens? Since the Fermi question seeks to answer why Silent Aliens are silent in our real world now, I opted for Silent Aliens, but that’s an arguable choice. Should The Martian count as “Life is Rare” since it’s competence porn that underscores how fragile life is? Should Deep Impact count as showing that life doesn’t last, even though they never talk about aliens? It’s questionable to categorize something on a strong implication, but I did it where I felt the connection was strong. Additionally, I may have ranked something as “no reason” because I missed an explanatory line of dialog somewhere. Please let me know in the comments if I missed something major or got something wrong.

All that said, let’s look back and see how those broad numbers break down when we look at individual Fermi hypotheses. First, we should omit shows with aliens; they categorically exclude themselves. Aliens is an obvious example. Also, let’s exclude shows that are utterly unconcerned with the question of aliens, e.g. Logan’s Run, or those that never bother to provide an explanation as to why aliens may have been silent for so long, e.g. The Fifth Element. We also have to dismiss the other show in the survey that shows a long-dead species but does not investigate why: Total Recall (1990). Aaaaand holy cow, that takes us down to only 12 shows that give some explanation for the historical absence or silence of aliens. Since that number is so low, I’ll list the shows explicitly to the right of their numbers. I’ll leave the numbers as percentages for consistency when I get to increase the data set.

No Aliens

8% Life is rare: Battlestar Galactica (2004)
25% Life doesn’t last (Natural disasters): Deep Impact, The Core, Armageddon
8% Life doesn’t last (Technology will destroy us): Forbidden Planet

Silent Aliens

8% Superpredators: Oblivion
0% Information is dangerous
33% Prime directive: The Day the Earth Stood Still, 2001: A Space Odyssey, Mission to Mars, Star Trek
0% Isolationism
0% Zoo
0% Planetarium
0% Lighthouse hello
0% Still ringing
8% Hicksville: The Hitchhiker’s Guide to the Galaxy
0% Too distributed
0% Tech mismatch
0% Inconceivability
0% Too expensive
8% Cloaked: Men in Black

(*2% lost to rounding)
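For anyone who wants to check where that stray 2% went, the tally can be re-derived from the titles listed above in a few lines. This is just the rounding arithmetic, with the categories copied from the list:

```python
from collections import Counter

# Titles and Fermi-hypothesis categories copied from the list above.
titles = {
    "Battlestar Galactica (2004)": "Life is rare",
    "Deep Impact": "Natural disasters",
    "The Core": "Natural disasters",
    "Armageddon": "Natural disasters",
    "Forbidden Planet": "Technology will destroy us",
    "Oblivion": "Superpredators",
    "The Day the Earth Stood Still": "Prime directive",
    "2001: A Space Odyssey": "Prime directive",
    "Mission to Mars": "Prime directive",
    "Star Trek": "Prime directive",
    "The Hitchhiker's Guide to the Galaxy": "Hicksville",
    "Men in Black": "Cloaked",
}

counts = Counter(titles.values())
percentages = {cat: round(100 * n / len(titles)) for cat, n in counts.items()}
print(percentages)
print("sum:", sum(percentages.values()))  # 98, not 100: the 2% lost to rounding
```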

It’s at this point that some readers are sharpening their keyboards to inform me of the shows I’ve missed, and that’s great. I would rather have had the data before, but I’m just a guy and nothing motivates geeks like an incorrect pop culture data set. We can run these numbers again when more come in and see what changes.


In the meantime, the first thing we note is that of those that concern themselves with the question of Silent Aliens, most use some version of the prime directive.

Respectively, they say we have to do A Thing before they’ll contact us.

  • Mature ethically
  • Mature technologically by finding the big obelisk on the moon (and then the matching one around Jupiter)
  • Find the explanatory kiosk/transportation station on Mars
  • Mature technologically by mastering faster-than-light travel

It’s easy to understand why Prime Directives would be attractive as narrative rationales. They explain why things are so silent now, and they put the onus on us as a species to achieve The Thing, to do good, to improve. They are inspirational and encourage us to commit to space travel.

The second thing to note is that the shows concerned with the notion that Life Doesn’t Last err toward disaster porn, which is attractive because such films are tried-and-true formulas. The dog gets saved along with the planet, that one person dies, there’s a ticker-tape parade after they land, and the love interests reconcile. Some are ridiculous. Some are competent. None stand out to me as particularly memorable or life changing. I can’t think of one that illustrates how such an end is inevitable.

So prime directives and disaster porn are the main answers we see in sci-fi. Are those the right ones? I’ll discuss that in the next post. Stay Tuned.

Introducing Heath Rezabek

MLIS—Librarian and Futurist.


Hi there. Tell us a bit about yourself. What’s your name, where are you from, how do you spend your time?

I’m Heath Rezabek. I live in Austin, Texas, and have been an enthusiast of user interface design for many years. By career and calling I’m a librarian, and I’m a library services and technology grant manager by day. I have long been interested in how information is portrayed, symbolized, and accessed. I’m also a writer of experimental speculative fiction, and I have an interest in how the future is seen by creators and audiences. Interfaces play a key role in my fiction series as well, from holographic to virtual-world-driven to all-out surrealist.


What are some of your favorite sci-fi interfaces (Other than in Oblivion)? (And, of course, why.)

In the realm of sci-fi interfaces, I’m quite drawn to the interplay between computer-based systems and the more physical failsafes often used to counterbalance or circumvent them. Two favorite examples would be the range of interfaces found in 2001: A Space Odyssey (from vocal interface to highly abstracted displays to physical systems such as HAL’s memory chamber), and the blend of failsafe systems in Danny Boyle’s Sunshine. Another favorite interface is that of the infamous self-destruct levers in Ridley Scott’s Alien. Gmunk’s interfaces in TRON Legacy, particularly the ISO DNA editing orb, are another key inspiration. Again: information as alive, as primal, as root-level mission-critical source code.

2001: A Space Odyssey


Sunshine


Alien


Tron Legacy


Why did you decide to participate in the group review of Oblivion for your first scifiinterfaces review?

I decided to participate in the group review of Oblivion partly for a behind-the-scenes look at how Chris Noessel / scifiinterfaces approached such a project, and partly to get myself to take a deep look at interfaces I might otherwise only have considered from a distance. I’m an admirer of gmunk’s design work, on TRON Legacy as well as here, and that was another draw.

What was your biggest surprise when doing the review?

I don’t know whether this bit of analysis will make the final cut in the review, but my biggest surprise came as a mental leap while evaluating the direct drone linking and maintenance system Jack uses before deploying the hacked drone. In the end, I arrived at the idea that in tech-heavy stories, low-level physical interfaces (such as the thick external cable that not only carried data from the reprogramming unit to the drone but also sparked, livewire-like, when detached) might often be symbolic signifiers of particularly root-level or fundamental information, commands, and, ultimately, plot points. Just as important as a fictional interface itself is the way in which it is (or isn’t) eventually circumvented (the circumvention being part of the interface as a whole system), and what that moment means for the story.

In the case of Oblivion, I ended up drawing a connection between this brute, physically hazardous (sparking data cable!) reprogramming method and the sudden, stunning, reorienting effect that finding the crumbling book of poetry had on Jack. It’s no surprise to me that this particular moment had such an impact, given my interest in the role of physical-level and failsafe systems in overall fictional interfaces elsewhere. I’ll have to rewatch 2001 and Sunshine with this thought in mind.

What else are you working on? (Alternately: What other awesomeness should we know about you?)

I’m the Director of Strategic Initiatives at Icarus Interstellar, a research group focused on developing our prospects for eventual interstellar travel.  (Yes, actual eventual interstellar travel.)

I’m Deputy Lead of Project Astrolabe (also via Icarus Interstellar), a project to research long-term models of civilization. My main research focus is very-long-term archival of the biological, scientific, and cultural record as a mitigation of risk to civilization’s capabilities over the long term. I’ve interned with the Long Now Foundation on their Manual for Civilization, and am advising Lunar Mission One on their Public Archive.

I’m also a lead for a project called the FarMaker Design Corps (also via Icarus Interstellar), which at a basic level is a biannual concept art contest with brackets for starship visualizations as well as (if all goes well) interface design. Chris Noessel is one of our Judges, and joins an amazing team of Advisors:

  • Mike Okuda (Star Trek)
  • Mark Rademaker (freelance ship concept designer)
  • Stephan Martiniere (Guardians of the Galaxy)
  • Steve Burg (Prometheus & Nolan’s Interstellar)
  • Oliver Scholl (Edge of Tomorrow)
  • Doug Drexler (Star Trek & Battlestar Galactica)
  • Thomas Marrone (UI for Star Trek Online)
  • Chuck Beaver (story, game, and UI director for the Dead Space series, formerly at EA)

We’ve started with an art contest to help find and encourage artists envisioning an interstellar future. Of course, with an advisory team like that, I most definitely look forward to seeing what the future holds.

Wearable Control Panels

As I said in the first post of this topic, exosuits and environmental suits fall outside the definition of wearable computers. But there is one item commonly found on them that does count as wearable, and that’s the forearm control panel. In the survey these appear in three flavors.

Just Buttons

Sci-fi was fairly late to acknowledge the need for environmental suits, and later still to give them controls. The first wearable control panel belongs to the original series of Star Trek, in “The Naked Time” (S01E04). The sparkly orange suits have a white cuff with a red and a black button. In the opening scene we see Mr. Spock press the red button to communicate with the Enterprise.

This control panel is crap. The buttons are huge momentary buttons that exist without a billet, and would be extremely easy to press accidentally. The cuff is quite loose, meaning Spock or the redshirt has to fumble around to locate it each time. Weeeeaak.

Star Trek (1966)

TOS_orangesuit

Some of these problems were solved when another WCP appeared three decades later in the Next Generation movie First Contact.

Star Trek First Contact (1996)

ST1C-4arm

This panel is at least anchored, and positioned where it could be found fairly easily via proprioception. It seems to have a facing that acts as a billet, and so might be tough to activate accidentally. It works against its wearer’s social goals, though, since it glows. The colored buttons help distinguish controls when you’re looking at the panel, but the glow sure makes it tough to sneak around in darkness. Also, no labels? Missing labels seem to be a thing with WCPs, since even Pixar thought they weren’t necessary.

The Incredibles (2004)

Admittedly, this WCP belonged to a villain who had no interest in others’ use of it. So that’s at least diegetically excusable.

TheIncredibles_327

Hey, Labels, that’d be greeeeeat

Zipping back to the late 1960s, Kubrick’s 2001 nailed most everything. Sartorial, easy to access and use (look, labels! color differentiation! clustering!), social enough for an environmental suit, billeted, and the inputs are nice and discrete, even though as momentary buttons they don’t announce their state. Better would have been toggle buttons.

2001: A Space Odyssey (1968)

2001-spacesuit-021

Also, what the heck does the “IBM” button do, call a customer service representative from space? Embarrassing. What’s next, a huge Mercedes-Benz logo on the chest plate? Actually, no, it’s a Compaq logo.

A monitor on the forearm

The last category of WCP in the survey is seen in Mission to Mars, and it’s a full-color monitor on the forearm.

Mission to Mars

M2Mars-242

This is problematic for general use but fine for this particular application. These are scientists conducting a near-future trip to Mars, so having access to rich data is quite important. They’re not facing dangerous Borg-like things, so they don’t need to worry about the light. I’d be a bit worried about the giant buttons sticking out on every edge, which seem to be begging to be bumped. I also question whether those particular buttons and that particular screen layout are wise choices, but that’s for the formal M2M review. A touchscreen might be possible. You might think that would be easy to activate accidentally, but not if it could only be activated by the fingertips of the exosuit’s gloves.

Wearableness

This isn’t an exhaustive list of every wearable control panel from the survey, but a fair enough recounting to point out some things about them as wearable objects.

  • The forearm is a fitting place for controls and information. Wristwatches have taken advantage of this for…some time. 😛
  • Socially, it’s kind of awkward to have an array of buttons on your clothing. Unless it’s an exosuit, in which case knock yourself out.
  • If you’re meant to be sneaking around, lit buttons are counterindicated. As are extruded switch surfaces that can be glancingly activated.
  • The fitness of the inputs and outputs depends on the particular application, but don’t drop understandability (read: labels) simply for the sake of fashion. (I’m looking at you, Roddenberry.)