Sci-fi Spacesuits: Identification

Spacesuits are functional items, built to engineering specifications rather than individualized fashion, and so largely identical to one another. A resulting problem is that it might be difficult to distinguish between multiple, similarly-sized individuals wearing the same model of suit. This visual identification problem might be small in routine situations:

  • (Inside the vehicle:) Which of these suits is mine?
  • What’s the body language of the person currently speaking on comms?
  • (With a large team performing a manual hull inspection:) Who is that approaching me? If it’s the Fleet Admiral I may need to stand and salute.

But it could quickly become vital in others:

  • Whose body is that floating away into space?
  • Ensign Smith just announced they have a tachyon bomb in their suit. Which one is Ensign Smith?
  • Who is this on the security footage cutting the phlebotinum conduit?

There are a number of ways sci-fi has solved this problem.

Name tags

Especially in harder sci-fi shows, spacewalkers have a name tag on the suit. The type is often so small that you’d need to be quite close to read it, and a weird convention has these tags in all-capital letters, even though lower case is easier to read, especially in low light and at a distance. The tags are also placed near the breast of the suit, so the spacewalker would have to be facing you. So, all told, not that useful on actual extravehicular missions.

Faces

Screen sci-fi usually gets around the identification problem by having transparent visors. In B-movies and sci-fi illustrations from the 1950s and 60s, the fishbowl helmet was popular, though of course it offered little protection, little light control, and weird audio effects for the wearer. Blockbuster movies were mostly a little smarter about it.

1950s Sci-Fi illustration by Ed Emshwiller
c/o Diane Doniol-Valcroze

Seeing faces allows other spacewalkers/characters (and the audience) to recognize individuals and, to a lesser extent, see how their faces sync with their voices and movements. People are generally good at reading the kinesics of faces, so there’s a solid rationale for trying to make transparency work.

Face + illumination

As of the 1970s, filmmakers began to add interior lights that illuminate the wearer’s face. This makes lighting them easier, but face illumination is problematic in the real world. If you illuminate the whole face including the eyes, then the spacewalker is partially blinded. If you illuminate the whole face but not the eyes, they get that whole eyeless-skull effect that makes them look super spooky. (Played to effect by director Scott and cinematographer Vanlint in Alien, see below.)

Identification aside: Transparent visors are problematic for other reasons. Permanently-and-perfectly transparent glass risks the spacewalker being damaged by infrared light or blinded by sudden exposure to nearby suns, explosions, engine exhaust ports, etc. This is why NASA helmets have the gold layer on their visors: it lets in visible light and blocks nearly all infrared.

Astronaut Buzz Aldrin walks on the surface of the moon near the leg of the lunar module Eagle during the Apollo 11 mission.

Image Credit: NASA (cropped)

Only 2001: A Space Odyssey in the survey shows a visor with a manually-adjustable translucency. You can imagine that this would be safer if it were automatic. Electronics can respond much faster than people, changing in near-real time to keep sudden environmental illumination within safe human ranges.
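To make the automation concrete, here’s a minimal sketch of such an auto-dimming loop, assuming a hypothetical ambient-light sensor and an electrochromic visor with settable opacity (both interfaces and the safe-light constant are invented for illustration):

```python
# Illustrative auto-dimming visor loop. The sensor/visor objects and the
# safe-exposure constant are assumptions, not from any real suit spec.
import time

SAFE_LUX = 10_000.0  # assumed upper bound on light transmitted to the eyes

def required_opacity(ambient_lux: float) -> float:
    """Return visor opacity in [0.0, 1.0] so that transmitted light
    (ambient * (1 - opacity)) stays at or below SAFE_LUX."""
    if ambient_lux <= SAFE_LUX:
        return 0.0  # fully transparent is already safe
    return min(1.0, 1.0 - SAFE_LUX / ambient_lux)

def visor_loop(sensor, visor, hz: float = 120.0):
    """Poll the light sensor and update the visor many times per second,
    far faster than a human could react to a sudden flare."""
    while True:
        visor.set_opacity(required_opacity(sensor.read_lux()))
        time.sleep(1.0 / hz)
```

The same loop could run per region of a segmented visor, which leads to the next idea.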

You can even imagine smarter visors that selectively dim regions (rather than the whole thing), to block out, say, just the nearby solar flare, or to expose the faces of two spacewalkers talking to each other, but I don’t see this in the survey. It’s mostly just transparency and a hope that nobody notices these eyeballs would get fried.

So, though seeing faces helps solve some of the identification problem, transparent enclosures don’t make a lot of sense from a real-world perspective. But it’s immediate and emotionally rewarding for audiences to see the actors’ faces, and with easy cinegenic workarounds, I suspect identification-by-face is here in sci-fi for the long haul, at least until a majority of audiences experience spacewalking for themselves and realize how much of an artistic convention this is.

Color

Other shows have taken the notion of identification further and distinguished wearers by color. Mission to Mars, Interstellar, and Stowaway did this much the way NASA does it, i.e. with colored bands around the upper arms and sometimes the thighs.

Destination Moon, 2001: A Space Odyssey, and Star Trek (2009) provided spacesuits in entirely different colors. (Star Trek even equipped the suits with matching parachutes, though for the pedantic, let’s acknowledge these were “just” upper-atmosphere suits.) The full-suit color certainly makes identification easier at a distance, but it seems like it would be more expensive and would introduce albedo differences between the suits.

One other note: if the visor is opaque and characters are only relying on the color for identification, it becomes easier for someone to don the suit and “impersonate” its usual wearer to commit spacewalking crimes. Oh. My. Zod. The phlebotinum conduit!

According to the Colour Blind Awareness organisation, colour blindness (color vision deficiency) affects approximately 1 in 12 men and 1 in 200 women in the world, so color coding is not without its problems, and it might need to be combined with bold patterns to be more broadly accessible.

What we don’t see

Heraldry

Blog-from-another-mog Project Rho tells us that sci-fi books have suggested heraldry as spacesuit identifiers. And while the device could be placed on the chest, like medieval suits of armor, it might be made larger, higher contrast, and wraparound to be distinguishable from farther away.

Directional audio

Indirect, but if the soundscape inside the helmet can be directional (like personal surround sound), then different voices can come from the direction of each speaker, helping uniquely identify them by position. If two are close together and there are no others to be concerned about, their apparent directions can be shifted to increase their spatial distinction. When no one is speaking, leitmotifs assigned to each of the other spacewalkers, with volumes corresponding to distance, could help maintain field awareness.
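Here’s a minimal sketch of the per-speaker math, using the standard constant-power panning law for direction and inverse-distance attenuation for range; the reference distance and clamping are assumptions:

```python
import math

def speaker_gains(bearing_deg: float, distance_m: float, ref_m: float = 2.0):
    """Return (left, right) gains for a voice arriving from bearing_deg
    (0 = dead ahead, negative = listener's left) at distance_m meters.
    ref_m is the distance at which the voice plays at full volume."""
    clamped = max(-90.0, min(90.0, bearing_deg))
    pan = (clamped + 90.0) / 180.0 * (math.pi / 2)  # 0..pi/2, left..right
    attenuation = min(1.0, ref_m / max(distance_m, ref_m))
    return math.cos(pan) * attenuation, math.sin(pan) * attenuation

# Two crewmates on comms: one close and to the left, one far to the right.
print(speaker_gains(-30.0, 3.0))   # louder, biased toward the left ear
print(speaker_gains(75.0, 40.0))   # faint, hard right
```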

HUD Map

Gamers might expect a map in a HUD that shows the environment, with labeled icons for the people in it.

Search

If the spacewalker can have private audio, shouldn’t she just be able to ask, “Who’s that?” while looking at someone, and hear a reply or see a label on a HUD? It would also be very useful if a spacewalker could ask for lights to be illuminated on the exterior of another’s suit, especially if that someone is floating unconscious in space.

Mediated Reality Identification

Lastly, I didn’t see any mediated-reality assists: augmented or virtual reality. Imagine a context-aware and person-aware heads-up display that labeled the people in sight. Technological identification could also incorporate in-suit biometrics to avoid the spacesuit-as-disguise problem. The helmet camera confirms that the face inside Sergeant McBeef’s suit is actually that of the dastardly Dr. Antagonist!
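As a sketch of how that check might work, here’s a hypothetical label resolver that combines a suit’s broadcast ID with an in-helmet face embedding; every name, structure, and threshold here is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Crewmate:
    name: str
    suit_id: str
    face_template: list  # enrolled biometric embedding (illustrative)

def cosine_similarity(a: list, b: list) -> float:
    """Stand-in for a real biometric matcher."""
    dot = sum(x * y for x, y in zip(a, b))
    mag = lambda v: sum(x * x for x in v) ** 0.5
    return dot / (mag(a) * mag(b))

def resolve_label(suit_id: str, live_face: list,
                  roster: list, threshold: float = 0.8) -> str:
    """Return the HUD label for a sighted suit, flagging a disguised wearer."""
    wearer = next((c for c in roster if c.suit_id == suit_id), None)
    if wearer is None:
        return "UNKNOWN SUIT"
    if cosine_similarity(wearer.face_template, live_face) < threshold:
        return f"ALERT: {wearer.name}'s suit, wearer unverified!"
    return wearer.name
```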

We could also imagine a helmet that is completely enclosed but virtually transparent. Retinal projectors would provide the appearance of other spacewalkers—fed by live cameras in their helmets—as if they had fishbowl helmets. Other information could fill the HUD depending on the context, and such labels would enable identification in a way that is more technology-forward and cinegenic. But, of course, all mediated solutions introduce layers of technology, and with them more potential points of failure, so they’re not a simple choice for the real world.


So, as you can read, there’s no slam-dunk solution that meets both cinegenic and real-world needs. Given that so much of our emotional experience is informed by the faces of actors, I expect to see transparent visors in sci-fi for the foreseeable future. But it’s ripe for innovation.

Sci-fi Spacesuits: Moving around

Whatever it is, it ain’t going to construct, observe, or repair itself. In addition to protection and provision, suits must facilitate the reason the wearer has dared to go out into space in the first place.

One of the most basic tasks of extravehicular activity (EVA) is controlling where the wearer is positioned in space. The survey shows several types of mechanisms for this. First, if your EVA never needs you to leave the surface of the spaceship, you can go with mountaineering gear or sticky feet. (Or sticky hands.) We can think of maneuvering through space as similar to piloting a craft, but the outputs and interfaces have to be made wearable, like wearable control panels. We might also expect to see some tunnel in the sky displays to help with navigation. We’d also want to see some AI safeguard features, to return the spacewalker to safety when things go awry. (Narrator: We don’t.)

Mountaineering gear

In Stowaway (2021), astronauts undertake unplanned EVAs with carabiners and gear akin to what mountaineers use. This makes some sense, though even this equipment would need to be modified for use with astronauts’ thick gloves.

Stowaway (2021): Drs. Kim and Levinson prepare to scale to the propellant tank.

Sticky feet (and hands)

Though it’s not extravehicular, I have to give a shout-out to 2001: A Space Odyssey (1968), where we see a flight attendant manage her position in microgravity with special shoes that adhere to the floor. It’s a lovely example of a competent Hand Wave. We don’t need to know how it works because it says, right there, “Grip shoes.” Done. Props to the actress Heather Downham, who had to make up a funny walk to illustrate that it still isn’t like walking on Earth.

2001: A Space Odyssey (1968)
Pan Am: “Thank god we invented the…you know, whatever shoes.”

With magnetic boots, seen in Destination Moon, the wearer simply walks around and manages the slight awkwardness of having to pull each foot up with extra force and having it snap back down on its own.

Battlestar Galactica added magnetic handgrips to augment the control provided by magnetized boots. With them, Sergeant Mathias is able to crawl around the outside of an enemy vessel, inspecting it. While crawling, she holds grip bars mounted to circles that contain the magnets. A mechanism for turning the magnet off is not seen, but like these portable electric grabbers, it could be as simple as a thumb button.

Iron Man also had his Mark 50 suit form stabilizing suction cups before cutting a hole in the hull of the Q-Ship.

Avengers: Infinity War (2018)

In the electromagnetic version of boots, seen in Star Trek: First Contact, the wearer turns the magnets on with a control strapped to their thigh. Once on, the magnetization seems to be sensitive to the wearer’s walk, automatically lessening when the boot is lifted off. This gives the wearer something of a natural gait. The magnetism can be turned off again to be able to make microgravity maneuvers, such as dramatically leaping away from Borg minions.
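Here’s a speculative sketch of how that gait sensitivity might work, assuming a heel pressure sensor and an electromagnet with settable strength (none of this is from the film; it’s one plausible mechanism):

```python
# Speculative model of First Contact-style gait-sensitive magnetic boots.
FULL_HOLD = 1.0   # magnet strength while the boot bears weight
LIFT_HOLD = 0.1   # residual strength so a lifting boot releases easily

class MagBoot:
    def __init__(self, magnet, heel_sensor):
        self.magnet = magnet            # hypothetical electromagnet driver
        self.heel_sensor = heel_sensor  # hypothetical pressure sensor
        self.enabled = False

    def toggle(self):
        """Thigh-mounted master control, e.g. before leaping away from Borg."""
        self.enabled = not self.enabled
        if not self.enabled:
            self.magnet.set_strength(0.0)

    def update(self):
        """Run continuously: strong hold when weighted, weak when lifting,
        which is what gives the wearer a near-natural gait."""
        if self.enabled:
            weighted = self.heel_sensor.read_pressure() > 0.2
            self.magnet.set_strength(FULL_HOLD if weighted else LIFT_HOLD)
```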

Star Trek: Discovery also included this technology, but with what appears to be a gestural activation and cool glowing red dots on the sides and back of each heel. The back of each heel has a stack of red lights that count down to when the magnets turn off as, I guess, a warning to anyone around that the wearer is about to be “air”-borne.

Quick “gotcha” aside: neither Destination Moon nor Star Trek: First Contact bothers to explain how characters are meant to be able to kneel while wearing magnetized boots. Yet this very thing happens in both films.

Destination Moon (1950): Kneeling on the surface of the spaceship.
Star Trek: First Contact (1996): Worf rises from operating the maglock to defend himself.

Controlled Propellant

If your extravehicular task has you leaving the surface of the ship and moving around space, you likely need a controlled propellant. This is seen only a few times in the survey.

The manned maneuvering unit, or MMU, seen in the film Mission to Mars is based loosely on NASA’s MMU. A nice thing about the device is that, unlike the other controlled-propellant interfaces, we can actually see some of the interaction and not just the effect. The interfaces are subtly different in that the Mission to Mars spacewalkers travel forward and backward by angling the handgrips forward and backward rather than with a joystick on an armrest. This seems like a closer mapping, but also more prone to error from accidental touches or bumps.

The plus side is an interface that is much more cinegenic, where the audience is more clearly able to see the cause and effect of the spacewalker’s interactions with the device.

If you have propellant in a Mohs 4 or 5 film, you might need to acknowledge that propellant is a limited resource. Over the course of the same (heartbreaking) scene shown above, we see one interface where a spacewalker monitors his fuel, and another where a spacewalker realizes that she has traveled as far as she can with her MMU and still return to safety.

Mission to Mars (2000): Woody sees that he’s out of fuel.

For those wondering, Michael Burnham’s flight to the mysterious signal in that pilot uses propellant, but is managed and monitored by controllers on Discovery, so it makes sense that we don’t see any maneuvering interfaces for her. We could dive in and review the interfaces the bridge crew uses (and try to map that onto a spacesuit), but we only get snippets of these screens and see no controls.

Iron Man’s suits employ some Phlebotinum propellant that lasts forever, fits inside his tailored suit, and is powerful enough to achieve escape velocity.

Avengers: Infinity War (2018)

All in all, though sci-fi seems to understand the need for characters to move around in spacesuits, very little attention is given to the interfaces that enable it. The Mission to Mars MMU is the only one with explicit attention paid to it, and that’s quite derived from NASA models. It’s an opportunity for filmmakers, should the needs of the plot allow, to give this topic some attention.

Sci-fi Spacesuits: Interface Locations

A major concern of the design of spacesuits is basic usability and ergonomics. Given the heavy material needed in the suit for protection and the fact that the user is wearing a helmet, where does a designer put an interface so that it is usable?

Chest panels

Chest panels are those that require the wearer only to look down to manipulate them. These are in easy range of motion for the wearer’s hands. The main problem with this location is that there is a hard trade-off between visibility and bulkiness.

Arm panels

Arm panels are those that are—brace yourself—mounted to the forearm. This placement is within easy reach, but does mean that the arm on which the panel sits cannot be otherwise engaged, and it seems like it would be prone to accidental activation. This is a greater technological challenge than a chest panel to keep components small and thin enough to be unobtrusive. It also provides some interface challenges to squeeze information and controls into a very small, horizontal format. The survey shows only three arm panels.

The first is the numerical panel seen in 2001: A Space Odyssey (thanks for the catch, Josh!). It provides discrete and easy input, but no feedback. There are inter-button ridges to kind of prevent accidental activation, but they’re quite subtle and I’m not sure how effective they’d be.

2001: A Space Odyssey (1968)

The second is an oversimplified control panel seen in Star Trek: First Contact, where the output is simply the unlabeled lights underneath the buttons indicating system status.

The third is the mission computers seen on the forearms of the astronauts in Mission to Mars. These full-color, nonrectangular displays feature rich, graphic mission information in real time, with textual information on the left and graphic information on the right. Input happens via hard buttons located around the periphery.

Side note: One nifty analog interface is the forearm mirror. This isn’t an invention of sci-fi; it actually appears on real-world EVA suits. It costs a lot of propellant or energy to turn a body around in space, but spacewalkers occasionally need to see what’s behind them or check the interface on their chest. So spacesuits have mirrors on the forearm to enable a quick view with just an arm movement. This is showcased twice in the movie Mission to Mars.

HUDs

The easiest place to see something is directly in front of your eyes, i.e. in a heads-up display, or HUD. HUDs are seen frequently in sci-fi, and increasingly in sci-fi spacesuits as well. One example is Sunshine. This HUD provides a real-time view of each individual to whom the wearer is talking while out on an EVA, and a real-time visualization of dangerous solar winds.

These particular spacesuits are optimized for protection very close to the sun, and the visor is limited to a transparent band set near eye level. These spacewalkers couldn’t look down to see any interfaces on the suit itself, so the HUD makes a great deal of sense here.

Star Trek: Discovery’s pilot episode included a sequence that found Michael Burnham flying 2000 meters away from the U.S.S. Discovery to investigate a mysterious Macguffin. The HUD helped her with wayfinding, navigating, tracking time before lethal radiation exposure (a biological concern, see the prior post), and even scanning things in her surroundings, most notably a Klingon warrior wearing unfamiliar armor. Reference information sits on the periphery of Michael’s vision, but the augmentations appear mapped to her view. (Noting this raises the same issues of binocular parallax seen in the Iron HUD.)

Iron Man’s Mark L armor was able to fly in space, and the Iron HUD came right along with it. Though not designed/built for space, it’s a general AI HUD assisting its spacewalker, so worth including in the sample.

Avengers: Infinity War (2018)

Aside from HUDs, what we see in the survey is similar to what exists in real-world extravehicular mobility units (EMUs), i.e. chest panels and arm panels.

Inputs illustrate paradigms

Physical controls range from the provincial switches and dials on the cigarette-girl foldout control panels of Destination Moon, to the simple and restrained numerical button panel of 2001, to the strangely unlabeled buttons of Star Trek: First Contact’s arm panels (above), and the ham-handed touch screens of Mission to Mars.

Destination Moon (1950)
2001: A Space Odyssey (1968)

As the pictures above reveal, the input panels reflect the familiar technology of the time the movie or television show was created. The 1950s were still rooted in mechanistic paradigms, the late-1960s interfaces were electronic pushbutton, and the 2000s had touch screens and miniaturized displays.

Real world interfaces

For comparison and reference, NASA’s EMU has a control panel on the front, called the Display and Control Module (DCM), where most of the controls for the EMU sit.

The image shows that these inputs are very different from what we see in film and television. The controls are large for easy manipulation even with thick gloves, distinct in type and location for confident identification, analog to allow for a minimum of failure points and for in-field debugging and maintenance, and well-protected from accidental actuation with guards and deep recesses. The digital display faces up for the convenience of the spacewalker. The interface text is printed backwards so it can be read with the wrist mirror.

The outputs are fairly minimal. They consist of the pressure suit gauge, audio warnings, and the 12-character alphanumeric LCD panel at the top of the DCM. No HUD.

The gauge is mechanical and standard for its type. The audio warnings are a simple warbling tone when something’s awry. The LCD panel provides information about 16 different values that the spacewalker might need, including estimated time of oxygen remaining, actual volume of oxygen remaining, pressure (redundant to the gauge), battery voltage or amperage, and water temperature. To cycle up and down the list, she presses the Mode Selector Switch forward and backward. She can adjust the contrast using the Display Intensity Control potentiometer on the front of the DCM.
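As a toy model of that interaction, here’s how the mode-switch cycling might be implemented; the value names, formats, and data below are stand-ins, not NASA’s actual 16-item list:

```python
READOUTS = [
    ("O2 TIME EST", lambda s: f"O2 T {s['o2_min']:>4}M"),
    ("O2 VOLUME",   lambda s: f"O2 V {s['o2_l']:>4}L"),
    ("PRESSURE",    lambda s: f"P {s['psi']:>6.1f}PSI"),
    ("BATTERY",     lambda s: f"BATT {s['volts']:>4.1f}V"),
    # ...the real DCM cycles through 16 such values
]

class DCMDisplay:
    def __init__(self):
        self.index = 0

    def mode_switch(self, direction: int):
        """+1 for pressing the switch forward, -1 for backward."""
        self.index = (self.index + direction) % len(READOUTS)

    def render(self, suit_state: dict) -> str:
        _, fmt = READOUTS[self.index]
        return fmt(suit_state)[:12]  # clip to the 12-character LCD

dcm = DCMDisplay()
dcm.mode_switch(+1)  # cycle from O2 time to O2 volume
print(dcm.render({"o2_min": 312, "o2_l": 870, "psi": 4.3, "volts": 16.8}))
```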

A NASA image tweeted in 2019.

The DCMs referenced in the post are from older NASA documents. In more recent images on NASA’s social media, it looks like there have been significant redesigns to the DCM, but so far I haven’t seen details about the new suit’s controls. (Or about how that tiny thing can house all the displays and controls it needs to.)

Sci-fi Spacesuits: Protecting the Wearer from the Perils of Space

Space is incredibly inhospitable to life. It is a near-perfect vacuum, lacking air, pressure, and warmth. It is full of radiation that can poison us, light that can blind and burn us, and a darkness that can disorient us. If any hazardous chemicals such as rocket fuel have gotten loose, they need to be kept safely away. There are few of the ordinary spatial cues and tools that humans use to orient and control their position. There is free-floating debris, ranging from bullet-like micrometeorites to gas and rock planets that can pull us toward them to smash into their surfaces or burn in their atmospheres. There are astronomical bodies such as stars and black holes that can boil us or crush us into a singularity. And perhaps most terrifyingly, there is the very real possibility of drifting off into the expanse of space to asphyxiate, starve (though biology will be covered in another post), freeze, and/or go mad.

The survey shows that sci-fi has addressed most of these perils at one time or another.

Alien (1979): Kane’s visor is melted by a facehugger’s acid.

Interfaces

Despite the acknowledgment of all of these problems, the survey reveals only two interfaces related to spacesuit protection.

Battlestar Galactica (2004) handled radiation exposure with a simple, chemical output device. As CAG Lee Adama explains in “The Passage,” the badge, worn on the outside of the flight suit, slowly turns black with radiation exposure. When the badge turns completely black, a pilot is removed from duty for radiation treatment.

This is something of a stretch because it has little to do with the spacesuit itself, and it is strictly an output device. (Note that proper interaction requires human input and state changes.) The badge is not permanently attached to the suit, and it is used inside a spaceship while wearing a flight suit. The flight suit is meant to act as a very short-term extravehicular mobility unit (EMU), but is not a spacesuit in the strict sense.

The other protection-related interface is from 2001: A Space Odyssey. As Dr. Dave Bowman begins an extravehicular activity to inspect the seemingly-faulty communications component AE-35, we see him touch one of the buttons on his left forearm panel. Moments later his visor changes from transparent to dark and protective.

We should expect to see few interfaces, but still…

As a quick and hopefully obvious critique, Bowman’s function shouldn’t have an interface. It should be automatic (not even agentive), since events can happen much faster than human response times. And, now that we’ve said that part out loud, maybe it’s true that protection features of a suit should all be automatic. Interfaces to pre-emptively switch them on or, for exceptional reasons, manually turn them off, should be the rarity.

But it would be cool to see more protective features appear in sci-fi spacesuits. An onboard AI detects an incoming micrometeorite storm. Does the HUD show how much time is left? What are the wearer’s options? Can she work through scenarios of action? Can she merely speak which course of action she wants the suit to take? If a wearer is kicked free of the spaceship, the suit should have a homing feature. Think Doctor Strange’s Cloak of Levitation, but for astronauts.

As always, if you know of other examples not in the survey, please put them in the comments.

“Real-time,” Interplanetary Chat

While recording a podcast with the guys at DecipherSciFi about the twee(n) love story The Space Between Us, we spent some time kvetching about how silly it was that many of the scenes involved Gardner, on Mars, in a real-time text chat with a girl named Tulsa, on Earth. It’s partly bothersome because throughout the rest of the movie, the story tries for a Mohs sci-fi hardness of, like, 1.5, somewhere between Real Life and Speculative Science, so it can’t really excuse itself through the Applied Phlebotinum that, say, Star Wars might use. The rest of the film feels like it’s trying to have believable science, but during these scenes it just whistles, looks the other way, and hopes you don’t notice that the two lovebirds are breaking the laws of physics as they swap flirt emoji.

Hopefully unnecessary science brief: Mars and Earth are far away from each other. Even if the communications transmissions are sent at light speed between them, it takes much longer than the 1 second of response time required to feel “instant.” How much longer? It depends. The planets orbit the sun at different speeds, so aren’t a constant distance apart. At their closest, it takes light 3 minutes to travel between Mars and Earth, and at their farthest—while not being blocked by the sun—it takes about 21 minutes. A round-trip is double that. So nothing akin to real-time chat is going to happen.
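The arithmetic is simple enough to sketch, using approximate closest and farthest (unoccluded) Earth-Mars separations:

```python
C_KM_PER_S = 299_792  # speed of light in km/s

def one_way_delay_min(distance_km: float) -> float:
    return distance_km / C_KM_PER_S / 60.0

for label, km in [("closest", 54_600_000), ("farthest", 378_000_000)]:
    one_way = one_way_delay_min(km)
    print(f"{label}: one-way {one_way:.0f} min, round trip {2 * one_way:.0f} min")

# closest: one-way 3 min, round trip 6 min
# farthest: one-way 21 min, round trip 42 min
```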

But I’m a designer, a sci-fi apologist, and a fairly talented backworlder. I want to make it work. And perhaps because of my recent dive into narrow AI, I began to realize that, well, in a way, maybe it could. It just requires rethinking what’s happening in the chat.

Let’s first acknowledge that we solved long-distance communications long ago. Gardner and Tulsa could just, you know, swap letters or, like the characters in 2001: A Space Odyssey, recorded video messages. There. Problem solved. It’s not real-time interaction, but it gets the job done. But kids aren’t so much into pen pals anymore, and we have to acknowledge that Gardner doesn’t want to tip his hand that he’s on Mars (it’s a grave NASA secret, for plot reasons). So the question is how we could make it work so that it feels like a real-time chat to her. Let’s first solve it for the case where he’s trying to disguise his location, and then see how it might work when both participants are in the know.

Fooling Tulsa

Since 1984 (ping me, as always, if you can think of an earlier reference), sci-fi has had the notion of a digitally-replicated personality. Here I’m thinking of Gibson’s Neuromancer and the RAM boards on which Dixie Flatline “lives.” These RAM boards house an interactive digital personality, built out of a lifetime of digital traces left behind: social media, emails, photos, video clips, connections, expressed interests, etc. Anyone in that story could hook the RAM board up to a computer and have conversations with the personality housed there that would closely approximate how that person would respond (or would have responded) in real life.

Listen to the podcast for a mini-rant on translucent screens, followed by apologetics.

Is this likely to actually happen? Well, it kind of already is. Here in the real world, we’re seeing early, crude “me bots” populate the net, taking baby steps toward the same thing. (See the MessinaBot, https://bottr.me/, https://sensay.it/, and the forthcoming http://bot.me/.) By the time we actually get a colony to Mars (plus the 16 years for Gardner to mature), mebot technology should be able to stand in for him convincingly enough in basic online conversations.

Training the bot

So, in the story, he would look through cached social media feeds to find a young lady he wanted to strike up a conversation with, and then ask his bot-maker engine to look at her public social media and build a herBot he could chat with, to train it for conversations. During this training, the TulsaBot would chat about topics of interest gathered from her social media. He could pause the conversation to look up references or prepare convincing answers to the trickier questions TulsaBot asks. He could also add some topics to the conversation they might have in common, and questions he might want to ask her. By doing this, his GardnerBot isn’t just some generic thing he sends out to troll any young woman with. It’s a more genuine, interactive first “letter” sent directly to her. He sends this GardnerBot to servers on Earth.

A demonstration of a chat with a short Martian delay. (Yes, it’s an animated gif.)

Launching the bot

GardnerBot would wait until it saw Tulsa online and strike up the conversation with her. It would send a signal back to Gardner that the chat has begun so he can sit on his end and read a space-delayed transcript of the chat. GardnerBot would try its best to manage the chat based on what it knows about awkward teen conversation, Turing test best practices, what it knows about Gardner, and how it has been trained specifically for Tulsa. Gardner would assuage some of his guilt by having it dodge and carefully frame the truth, but not outright lie.

Buying time

If during the conversation she raised a topic or asked a question for which GardnerBot was not trained, it could promise an answer later and then deflect, knowing that it should pad the conversation in the meantime (a sketch of this tactic selection follows the list):

  • Ask her to answer the same question first, probing into details to understand rationale and buy more time
  • Dive down into a related subtopic in which the bot has confidence, and which promises to answer the initial question
  • Deflect conversation to another topic in which it has a high degree of confidence and lots of detail to share
  • Text a story that Gardner likes to tell that is known to take about as long as the current round-trip signal
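Here’s a minimal sketch of how GardnerBot might choose among those tactics; the confidence scores, thresholds, and story data are all invented for illustration:

```python
import random

def pick_stall(topic: str, confidence: float,
               stories: dict, round_trip_min: float):
    """Return a (tactic, line) pair for a question GardnerBot can't
    confidently answer. stories maps a story opener to roughly how many
    minutes Gardner takes to tell it."""
    if confidence > 0.7:
        return ("answer", None)  # no stall needed
    # Prefer a story that roughly covers the current signal round trip.
    fitting = [s for s, mins in stories.items()
               if abs(mins - round_trip_min) < 5.0]
    if fitting:
        return ("story_delay", random.choice(fitting))
    if confidence > 0.4:
        return ("related_subtopic",
                f"That reminds me of something about {topic}...")
    return ("you_first",
            "Oh wow, that is a tough one. You tell me yours first?")

print(pick_stall("living abroad", 0.2,
                 {"the music drifting into my room": 40.0}, 42.0))
# -> ('story_delay', 'the music drifting into my room')
```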

Example

  • TULSA
  • OK, here’s one: If you had to live anywhere on Earth where they don’t speak English, where would you live?

GardnerBot has a low confidence that it knows Gardner’s answer. It could respond…

  1. (you first) “Oh wow. That is a tough one. Can I have a couple of minutes to think about it? I promise I’ll answer, but you tell me yours first.”
  2. (related subtopic) “I’m thinking about this foreign movie that I saw one time. There were a lot of animals in it and a waterfall. Does that sound familiar?”
  3. (new topic) “What? How am I supposed to answer that one? 🙂 Umm…While I think about it, tell me…what kind of animal would you want to be reincarnated as. And you have to say why.”
  4. (story delay) “Ha. Sure, but can I tell a story first? When I was a little kid, I used to be obsessed with this music that I would hear drifting into my room from somewhere around my house…”

Lagged-realtime training

Each of those responses is a delay tactic that allows the chat transcript to travel to Mars for Gardner to do some bot training on the topic. He would be watching the time-delayed transcript of the chat, keeping an eye on an adjacent track of data containing meta information about what the bot is doing, conversationally speaking. When he saw it hit a low-confidence or high-stakes topic and deflect, a chat window would let him tell GardnerBot what it should do or say.

  • To the stalling GARDNERBOT…
  • GARDNER
  • For now, I’m going to pick India, because it’s warm and I bet I would really like the spicy food and the rain. Whatever that colored powder festival is called. I’m also interested in their culture, Bollywood, and Hinduism.
  • As he types, the message travels back to Earth, where GardnerBot begins to incorporate his answers into the chat…
  • At a natural break in the conversation…
  • GARDNERBOT
  • OK. I think I finally have an answer to your earlier question. How about…India?
  • TULSA
  • India?
  • GARDNERBOT
  • Think about it! Running around in warm rain. Or trying some of the street food under an umbrella. Have you seen YouTube videos from that festival with the colored powder everywhere? It looks so cool. Do you know what it’s called?

Note that the bot could easily look it up and replace “that festival with the colored powder everywhere” with “Holi Festival of Color,” but it shouldn’t. Gardner doesn’t know that fact, so the bot shouldn’t pretend to know it. Cyrano-de-Bergerac software—where it makes him sound more eloquent, intelligent, or charming than he really is to woo her—would be a worse kind of deception. Gardner wants to hide where he is, not who he is.

That said, Gardner should be able to direct the bot, to change its tactics. “OMG. GardnerBot! You’re getting too personal! Back off!” It might not be enough to cover a flub made 42 minutes ago, but of course the bot should know how to apologize on Gardner’s behalf and ask conversational forgiveness.

Gotta go

If the signal to Mars got interrupted or the bot got into too much trouble with pressure to talk about low confidence or high stakes topics, it could use a believable, pre-rolled excuse to end the conversation.

  • GARDNERBOT
  • Oh crap. Will you be online later? I’ve got chores I have to do.

Then, Gardner could chat with TulsaBot on his end without time pressure to refine GardnerBot per their most recent topics, which would be sent back to Earth servers to be ready for the next chat.

In this way he could have “chats” with Tulsa that are run by a bot but quite custom to the two of them. It’s really Gardner’s questions, topics, jokes, and interest, but a bot-managed delivery of these things.

So it could work, but does it fit the movie? I think so. It would be believable because he’s a nerd raised by scientists. He made his own robot; why not his own bot?

From the audience’s perspective, it might look like they’re chatting in real time, but subtle cues on Gardner’s interface reward the diligent with hints that he’s watching a time delay. Maybe the chat we see in the film is even just cleverly edited to remove the bots.

How he manages to hide this data stream from NASA to avoid detection is another question better handled by someone else.


An honest version: bot envoy

So that solves the logic from the movie’s perspective, but of course it’s still squickish. He is ultimately deceiving her. Once he returns to Mars and she is back on Earth, could they still use the same system, but with full knowledge of its botness? Would real-world astronauts use it?

Would it be too fake?

I don’t think it would be too fake. Sure, the bot is not the real person, but neither are the pictures, videos, and letters we fondly keep with us as we travel far from home. We know they’re just simulacra, souvenir likenesses of someone we love. We don’t throw these away in disgust for being fakes. They are precious because they are reminders of the real thing. So would the themBot.

  • GARDNER
  • Hey, TulsaBot. Remember when we were knee deep in the Pacific Ocean? I was thinking about that today.
  • TULSABOT
  • I do. It’s weird how it messes with your sense of balance, right? Did you end up dreaming about it later? I sometimes do after being in waves a long time.
  • GARDNER
  • I can’t remember, but someday I hope to come back to Earth and feel it again. OK. I have to go, but let me know how training is going. Have you been on the G machine yet?

Nicely, you wouldn’t need stall tactics in the honest version. Or maybe it uses them, but can be called out.

  • TULSA
  • GardnerBot, you don’t have to stall. Just tell Gardner to watch Mission to Mars and update you. Because it’s hilarious and we have to go check out the face when I’m there.

Sending your loved one the transcript will turn it into a kind of love letter. The transcript could even be appended with a letter that jokes about the bot. The example above was too short for any semi-realtime insertions in the text, but maybe that would encourage longer chats. Then the bot serves as charming filler, covering the delays between real contact.

Ultimately, yes, I think we can backworld what looks physics-breaking into something that makes sense, and might even be a new kind of interactive memento between interplanetary sweethearts, family, and friends.

The Fermi Paradox and Sci-fi

In the prior post we introduced the Fermi paradox—or Fermi question—before an overview of the many hypotheses that try to answer it, and ended by noting that we must consider what we are to do, given the possibilities. In this post I’m going to share which of those hypotheses screen-based sci-fi has chosen to tell stories about.

First we should note that screen sci-fi (this is, recall, a blog that concerns itself with sci-fi in movies and television) has, since the very, very beginning, embraced questionably imperialist thrills. In Le Voyage dans la Lune, Georges Méliès’ professor-astronomers encounter a “primitive” alien culture on Earth’s moon when they land there, replete with costumes, dances, and violent responses to accidental manslaughter. Hey, we get it, aliens are part of why audiences and writers are in it: as a thin metaphor for speculative human cultures that bring our own into relief. So, many properties are unconcerned with the *yawn* boring question of the Fermi paradox, instead imagining a diegesis with a whole smörgåsbord of alien civilizations that are explicitly engaged with humans, at times killing, trading, or kissing us, depending on which story you ask.


But some screen sci-fi does occasionally concern itself with the Fermi question.

Which are we telling stories about?

Screen sci-fi is a vast library, and more is being produced all the time, so it’s hard to give an exact breakdown, but if Drake can do it for Fermi’s question, we can at least ballpark it, too. To do this, I took a look at every sci-fi in the survey that produced Make It So and has been extended here on scifiinterfaces.com, and I tallied the breakdown between aliens, no aliens, and silent aliens. Here’s the Google Sheet with the data. And here’s what we see.


No aliens is the clear majority of stories! This is kind of surprising for me, since when I think of sci-fi my brain pops bug eyes and tentacles alongside blasters and spaceships. But it also makes sense because a lot of sci-fi is near future or focused on the human condition.

Some notes about these numbers.

I counted all the episodes or movies that exist in a single diegesis as one. So the two single largest properties in the sci-fi universe, Star Trek and Star Wars, each count only once. That seems unfair, since we’ve spent many more total minutes of our lives with C-3PO and the Enterprise crews than we have with Barbarella. This results in low-seeming numbers: there are only 53 diegeses at the time of this writing, even though they span thousands of hours of shows. But all that said, this is a ballpark problem, meant to tally rationales across diegeses, so we’ll deal with numbers that skew differently than our instincts would suggest. Someone else with a bigger budget of time or money can try to get exhaustive with the numbers, attempt to normalize for total minutes of media produced, for the number of alien species referenced, and again for how popular the particular show was. Those numbers may be different.


Additionally, the categorizations can be ambiguous. Should Star Trek go in “Silent Aliens” because of the Prime Directive, or under “Aliens” since the show has lots and lots and lots of aliens? Since the Fermi question seeks to answer why Silent Aliens are silent in our real world now, I opted for Silent Aliens, but that’s an arguable choice. Should The Martian count as “Life is Rare” since it’s competence porn that underscores how fragile life is? Should Deep Impact count as showing that life is rare even though they never talk about aliens? It’s questionable to categorize something on a strong implication, but I did it where I felt the connection was strong. Additionally, I may have ranked something as “no reason” because I missed an explanatory line of dialog somewhere. Please let me know in the comments if I missed something major or got something wrong.

All that said, let’s look back and see how those broad numbers break down when we look at individual Fermi hypotheses. First, we should omit shows with aliens. They categorically exclude themselves. Aliens is an obvious example. Also, let’s exclude shows that are utterly unconcerned with the question of aliens, e.g. Logan’s Run (or those that never bother to provide an explanation as to why aliens may have been silent for so long, e.g. The Fifth Element). We also have to dismiss the other show in the survey that shows a long-dead species but does not investigate why, Total Recall (1990). Aaaaand holy cow, that takes us down to only 12 shows that give some explanation for the historical absence or silence of aliens. Since that number is so low, I’ll list the shows explicitly to the right of their numbers. I’ll leave the numbers as percentages for consistency when I get to increase the data set. (The arithmetic behind the percentages is sketched after the list.)

No Aliens

8% Life is rare: Battlestar Galactica (2004)
25% Life doesn’t last (Natural disasters): Deep Impact, The Core, Armageddon
8% Life doesn’t last (Technology will destroy us): Forbidden Planet

Silent Aliens

8% Superpredators: Oblivion
0% Information is dangerous
33% Prime directive: The Day the Earth Stood Still, 2001: A Space Odyssey, Mission to Mars, Star Trek
0% Isolationism
0% Zoo
0% Planetarium
0% Lighthouse hello
0% Still ringing
8% Hicksville: The Hitchhiker’s Guide to the Galaxy
0% Too distributed
0% Tech mismatch
0% Inconceivability
0% Too expensive
8% Cloaked: Men in Black

(*2% lost to rounding)
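For the curious, here’s how those ballpark percentages fall out of the raw counts above, including the 2% lost to rounding:

```python
counts = {
    "Life is rare": 1, "Natural disasters": 3, "Tech will destroy us": 1,
    "Superpredators": 1, "Prime directive": 4, "Hicksville": 1, "Cloaked": 1,
}
total = sum(counts.values())  # 12 diegeses that offer an explanation
pcts = {k: round(100 * v / total) for k, v in counts.items()}
for k, p in pcts.items():
    print(f"{p:>3}%  {k}")
print("sum:", sum(pcts.values()), "% (the remainder lost to rounding)")
```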

It’s at this point that some readers are sharpening their keyboards to inform me of the shows I’ve missed, and that’s great. I would rather have had the data before, but I’m just a guy and nothing motivates geeks like an incorrect pop culture data set. We can run these numbers again when more come in and see what changes.


In the meantime, the first thing we note is that of those that concern themselves with the question of Silent Aliens, most use some version of the prime directive.

Respectively, they say we have to do A Thing before they’ll contact us.

  • Mature ethically
  • Mature technologically by finding the big obelisk on the moon (and then the matching one around Jupiter)
  • Mature technologically by mastering faster-than-light travel
  • Find the explanatory kiosk/transportation station on Mars

It’s easy to understand why Prime Directives would be attractive as narrative rationales. They explain why things are so silent now, and they put the onus on us as a species to achieve The Thing, to do good, to improve. They are inspirational and encourage us to commit to space travel.

The second thing to note is that those that concern themselves with the notion that Life Doesn’t Last err toward disaster porn, which is attractive because such films are tried-and-true formulas. The dog gets saved along with the planet, that one person dies, there’s a ticker-tape parade after they land, and the love interests reconcile. Some are ridiculous. Some are competent. None stand out to me as particularly memorable or life-changing. I can’t think of one that illustrates how it might be inevitable.

So prime directives and disaster porn are the main answers we see in sci-fi. Are those the right ones? I’ll discuss that in the next post. Stay tuned.

Introducing Heath Rezabek

MLIS—Librarian and Futurist.


Hi there. Tell us a bit about yourself. What’s your name, where are you from, how do you spend your time?

I’m Heath Rezabek. I live in Austin, Texas, and have been an enthusiast of user interface design for many years. By career and calling I’m a librarian, and I’m a library services and technology grant manager by day. I have long been interested in how information is portrayed, symbolized, and accessed. I’m also a writer of experimental speculative fiction, and have an interest in how the future is seen by creators and audiences. Interfaces play a key role in my fiction series as well, from holographic to virtual-world-driven to all-out surrealist.


What are some of your favorite sci-fi interfaces (Other than in Oblivion)? (And, of course, why.)


Wearable Control Panels

As I said in the first post of this topic, exosuits and environmental suits are out of the definition of wearable computers. But there is one item commonly found on them that can count as wearable, and that’s the forearm control panels. In the survey these appear in three flavors.

Just Buttons

Sci-fi was fairly late to acknowledge the need for environmental suits, and the need for controls on them. The first wearable control panel belongs to the original series of Star Trek, “The Naked Time” (S01E04). The sparkly orange suits have a white cuff with a red and a black button. In the opening scene we see Mr. Spock press the red button to communicate with the Enterprise.

This control panel is crap. The buttons are huge momentary buttons that exist without a billet, and they would be extremely easy to press accidentally. The cuff is quite loose, meaning Spock or the redshirt has to fumble around to locate it each time. Weeeeaak.

Star Trek (1966)


Some of these problems were solved when another WCP appeared three decades later in the Next Generation movie First Contact.

Star Trek: First Contact (1996)


This panel is at least anchored, and positioned where it could be found fairly easily via proprioception. It seems to have a facing that acts as a billet, so it might be tough to accidentally activate. It runs counter to its wearer’s social goals, though, since it glows. The colored buttons help to distinguish it when you’re looking at it, but it sure makes it tough to sneak around in darkness. Also, no labels? No labels seems to be a thing with WCPs, since even Pixar thought they weren’t necessary.

The Incredibles (2004)

Admittedly, this WCP belonged to a villain who had no interest in others’ use of it. So that’s at least diegetically excusable.


Hey, Labels, that’d be greeeeeat

Zipping back to the late 1960s, Kubrick’s 2001 nailed most everything. Sartorial, easy to access and use (look, labels! color differentiation! clustering!), social enough for an environmental suit, billeted, and the inputs are nice and discrete, even though as momentary buttons they don’t announce their state. Better would have been toggle buttons.

2001: A Space Odyssey (1968)


Also, what the heck does the “IBM” button do, call a customer service representative from space? Embarrassing. What’s next, a huge Mercedes-Benz logo on the chest plate? Actually, no, it’s a Compaq logo.

A monitor on the forearm

The last category of WCP in the survey is seen in Mission to Mars, and it’s a full-color monitor on the forearm.

Mission to Mars


This would be problematic for general use but is fine for this particular application. These are scientists conducting a near-future trip to Mars, so having access to rich data is quite important. They’re not facing dangerous Borg-like things, so they don’t need to worry about the light. I’d be a bit worried about the giant buttons that stick out on every edge, which seem to be begging to be bumped. I also question whether those particular buttons and that particular screen layout are wise choices, but that’s for the formal M2M review. A touchscreen might be possible. You might think that would be easy to accidentally activate, but not if it could only be activated by the fingertips of the exosuit’s gloves.

Wearableness

This isn’t an exhaustive list of every wearable control panel from the survey, but a fair enough recounting to point out some things about them as wearable objects.

  • The forearm is a fitting place for controls and information. Wristwatches have taken advantage of this for…some time. 😛
  • Socially, it’s kind of awkward to have an array of buttons on your clothing. Unless it’s an exosuit, in which case knock yourself out.
  • If you’re meant to be sneaking around, lit buttons are contraindicated. As are extruded switch surfaces that can be glancingly activated.
  • The fitness of the inputs and outputs depend on the particular application, but don’t drop the understandability (read: labels) simply for the sake of fashion. (I’m looking at you, Roddenberry.)