Spacesuits in Sci-fi

“Why cannot we walk outside [the spaceship] like the meteor? Why cannot we launch into space through the scuttle? What enjoyment it would be to feel oneself thus suspended in ether, more favored than the birds who must use their wings to keep themselves up!”

—The astronaut Michel Ardan in Round the Moon by Jules Verne (1870)

When we were close to publication on Make It So, we wound up being way over the maximum page count for a Rosenfeld Media book. We really wanted to keep the components and topics sections, and that meant we had to cut the section on things. Spacesuits was one of the chapters I drafted about things. I am presenting that chapter here on the blog. n.b. This was written ten years ago in 2011. There are almost certainly other more recent films and television shows that can serve as examples. If you, the reader, notice any…well, that's what the comments section is for.

Sci-fi doesn’t have to take place in interplanetary space, but a heck of a lot of it does. In fact, the first screen-based science fiction film is all about a trip to the moon.

Le Voyage dans la Lune (1902): The professors suit up for their voyage to the moon by donning conical caps, neck ruffles, and dark robes.

Most of the time, traveling in this dangerous locale happens inside spaceships, but occasionally a character must travel out bodily into the void of space. Humans—and pretty much everything (no not them) we would recognize as life—cannot survive there for very long at all. Fortunately, the same conceits that sci-fi adopts to get characters into space can help them survive once they're there.

Establishing terms

An environmental suit is any suit that helps the wearer survive in an inhospitable environment. Environmental suits began with underwater suits, and later high-altitude suits. For space travel, pressure suits are worn during the most dangerous times, i.e. liftoff and landing, when an accident may suddenly decompress a spacecraft. A spacesuit is an environmental suit designed specifically for survival in outer space. NASA refers to spacesuits as Extravehicular Mobility Units, or EMUs. Individuals who wear the spacesuits are known as spacewalkers. The additional equipment that helps a spacewalker move around space in a controlled manner is the Manned Maneuvering Unit, or MMU.

Additionally, though many other agencies around the world participate in the design and engineering of spacesuits, there is no convenient way to reference them and their efforts as a group, so Aerospace Community is used as a shorthand. This also helps to acknowledge that my research and interviews were primarily with sources from NASA.

The design of the spacesuit is an ongoing and complicated affair. To speak of “the spacesuit” as if it were a single object ignores the vast number of iterations and changes made to the suits between each cycle of engineering, testing, and deployment, much less between different agencies working on their own designs. So, for those wondering, I'm using the Russian Orlan spacesuit currently used aboard the International Space Station as the default design when speaking about modern spacesuits.

Spacesuit Orlan-MKS at MAKS-2013 (air show) (fragment), CC BY-SA 4.0

What the thing’s got to do

A spacesuit, whether in sci-fi or the real world, has to do three things.

  1. It has to protect the wearer from the perils of interplanetary space.
  2. It has to accommodate the wearer’s ongoing biological needs.
  3. Since space is so dangerous, the suit and tools must help the wearer accomplish their extravehicular tasks efficiently and get them back to safer environs as quickly as possible.

Each of these categories of functions, and the related interfaces, is discussed in the following posts.

Tunnel-in-the-Sky Displays

“Tunnel in the Sky” is the name of a 1955 Robert Heinlein novel that has nothing to do with this post. It is also the title of the following illustration by Muscovite digital artist Vladimir Manyukhin, which also has nothing to do with this post, but is gorgeous and evocative, and included here solely for visual interest.

See more of Vladimir’s work here https://www.artstation.com/mvn78.

Instead, this post is about the piloting display of the same name, and written specifically for sci-fi interface designers.


Last week in reviewing the spinners in Blade Runner, I included mention and a passing critique of the tunnel-in-the-sky display that sits in front of the pilot. While publishing, I realized that I’d seen this a handful of other times in sci-fi, and so I decided to do more focused (read: Internet) research about it. Turns out it’s a real thing, and it’s been studied and refined a lot over the past 60 years, and there are some important details to getting one right.

Though I looked at a lot of sources for this article, I must give a shout-out to Max Mulder of TU Delft. (Hallo, TU Delft!) Mulder’s PhD thesis paper from 1999 on the subject is truly a marvel of research and analysis, and it pulls in one of my favorite nerd topics: Cybernetics. Throughout this post I rely heavily on his paper, and you could go down many worse rabbit holes than cybernetics. n.b., it is not about cyborgs. Per se. Thank you, Max.

I’m going to breeze through the history, issues, and elements from the perspective of sci-fi interfaces, and then return to the three examples in the survey. If you want to go really in depth on the topic (and encounter awesome words like “psychophysics” and “egomotion” in their natural habitat), Mulder’s paper is available online for free from researchgate.net: “Cybernetics of Tunnel-in-the-Sky Displays.”

What the heck is it?

A tunnel-in-the-sky display assists pilots, helping them know where their aircraft is in relation to an ideal flight path. It consists of a set of similar shapes projected out into 3D space, circumscribing the ideal path. The pilot monitors their aircraft’s trajectory through this tunnel, and makes course corrections as they fly to keep themselves near its center.

This example comes from Michael P. Snow, as part of his “Flight Display Integration” paper, also on researchgate.net.

Please note that throughout this post, I will spell out the lengthy phrase “tunnel-in-the-sky” because the acronym is pointlessly distracting.

Quick History

In 1973, Volkmar Wilckens was a research engineer and experimental test pilot for the German Research and Testing Institute for Aerospace (now called the German Aerospace Center). He was doing a lot of thinking about flight safety in all-weather conditions, and came up with an idea. In his paper “Improvements In Pilot/Aircraft-Integration by Advanced Contact Analog Displays,” he sort of says, “Hey, it's hard to put all the information from all the instruments together in your head and use that to fly, especially when you're stressed out and flying conditions are crap. What if we took that data and rolled it up into a single easy-to-use display?” Figure 6 is his comp of just such a system. It was tested thoroughly in simulators and shown to improve pilot performance by making the key information (attitude, flight path, and position) perceivable rather than readable. It also granted the pilot greater agency: rather than just following rules based on instrument readings, the pilot was empowered to navigate multiple variables within parameters to stay on target.

In Wilckens’ Fig. 6, above, you can see the basics of what would wind up on sci-fi screens decades later: shapes repeated into 3D space ahead of the aircraft to give the pilot a sense of an ideal path through the air. Stay in the tunnel and keep the plane safe.

Mulder notes that the next landmark developments come from the work of Arthur Grunwald & S. J. Merhav between 1976–1978. Their research illustrates the importance of augmenting the display and of including a preview of the aircraft in the display. They called this preview the Flight Path Predictor Symbol, or FPS. I've also seen it called the birdie in more modern papers, which is a lot more charming. It's that plus symbol in the Grunwald illustration, below. Later, in 1984, Grunwald also showed that a heads-up display increased precision in adhering to a curved path. So, HUDs good.

 n.b. This is Mulder’s representation of Grunwald’s display format.

I have also seen lots of examples of—but cannot find the research provenance for—tools for helping the pilot stay centered, such as a “ghost” reticle at the center of each frame, or alternately brackets around the FPS, called the Flight Director Box, that the pilot can align to the corners of the frames. (I'll just reference the brackets. Gestalt be damned!) The value of the birdie combined with the brackets seems very great, so though I can't cite their inventor, and they weren't in Mulder's thesis, I'll include them as canon.

The takeaway is really that these displays have a rich and studied history, and we can treat the pattern with high confidence.

Elements of an archetypical tunnel-in-the-sky display

There are lots of nuances that have been studied for these displays. Take for example the effect that angling the frames has on pilot banking, and the perfect time offset to nudge pilot behavior closer to ideal banking. For the purposes of sci-fi interfaces, however, we can reduce the critical components of the real-world pattern down to four, listed below and sketched in code just after.

  1. Square shapes (called frames) extending into the distance that describe an ideal path through space
    1. The frame should be about five times the width of the craft. (The birdie you see below is not proportional and I don’t think it’s standard that they are.)
    2. The distances between frames will change with speed, but be set such that the pilot encounters a new one every three seconds.
    3. The frames should adopt perspective as if they were in the world, being perpendicular to the flight path. They should not face the display.
    4. The frames should tilt, or bank, on curves.
    5. The tunnel only needs to extend so far, about 20 seconds ahead in the flight path. This makes for about 6 frames visible at a time.
  2. An aircraft reference symbol or Flight Path Predictor Symbol (FPS, or “birdie”) that predicts where the plane will be when it meets the position of the nearest frame. It can appear off-facing in relation to the cockpit.
    1. These are often rendered as two L shapes turned base-to-base with some space between them. (See one such symbol in the Snow example above.)
    2. Sometimes (and more intuitively, imho) as a circle with short lines extending out the sides and the top. Like a cartoon butt of a plane. (See below.)
  3. Contour lines connect matching corners across frames
  4. A horizon line
This comp illustrates those critical features.
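
To make those numbers concrete, here's a minimal sketch in Python of how the heuristics combine. The frame size, spacing, and extent come straight from the list above; the craft width and speed in the example are made up, and a real display would of course compute this against a curved path rather than straight-and-level flight.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    distance_m: float   # how far ahead of the craft this frame sits
    width_m: float      # edge length of the square frame
    bank_deg: float     # tilt applied on curves (0 for straight flight)

def tunnel_frames(craft_width_m, speed_mps, bank_deg=0.0,
                  seconds_between=3.0, seconds_ahead=20.0):
    """Place square frames along the ideal path: about five times the
    craft's width, one frame every three seconds, extending about
    twenty seconds ahead."""
    frames = []
    t = seconds_between
    while t <= seconds_ahead:
        frames.append(Frame(distance_m=speed_mps * t,
                            width_m=craft_width_m * 5,
                            bank_deg=bank_deg))
        t += seconds_between
    return frames

# A 10 m wide craft at 100 m/s gets six frames, one every 300 m.
for frame in tunnel_frames(10, 100):
    print(frame)
```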

There are of course lots of other bits of information that a pilot needs. Altitude and speed, for example. If you're feeling ambitious, and want more than those four, there are other details directly related to steering that may help a pilot (the first two are sketched in code after this list).

  • Degree-of-vertical-deviation indicator at a side edge
  • Degree-of-horizontal-deviation indicator at the top edge
  • Center-of-frame indicator, such as a reticle, appearing in the upcoming frame
  • A path predictor 
  • Some sense of objects in the environment: If the display is a heads-up display, this can be a live view. If it is a separate screen, some stylized representation of what the pilot would see if the display were superimposed onto their view.
  • What the risk is when off path: Just fuel? Passenger comfort? This is most important if that risk is imminent (collision with another craft, mountain) but then we’re starting to get agentive and I said we wouldn’t go there, so *crumbles up paper, tosses it*.
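
Of those, the two deviation indicators are the easiest to picture in code. Here's a minimal sketch, assuming we already know the craft's offset from the center of the upcoming frame; deviation is normalized against the frame's half-width so the indicator can be drawn along a display edge.

```python
def deviation_indicators(offset_right_m, offset_up_m, frame_width_m):
    """Return (horizontal, vertical) deviation in the range -1..1,
    where 0 means centered and +/-1 means at the frame's edge."""
    half_width = frame_width_m / 2
    clamp = lambda v: max(-1.0, min(1.0, v))
    horizontal = clamp(offset_right_m / half_width)  # drawn at the top edge
    vertical = clamp(offset_up_m / half_width)       # drawn at a side edge
    return horizontal, vertical

# Drifting 10 m right and 5 m low of center in a 50 m frame:
print(deviation_indicators(10, -5, 50))  # -> (0.4, -0.2)
```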

I haven’t seen a study showing efficacy of color and shading and line scale to provide additional cues, but look closely at that comp and you’ll see…

  • The background has been level-adjusted to increase contrast with the heads-up display
  • A dark outline around the white birdie and brackets to help visually distinguish them from the green lines and the clouds
  • A shadow under the birdie and brackets onto the frames and contours as an additional signal of 3D position
  • Contour lines diminishing in size as they extend into the distance, adding an additional perspective cue and limiting the amount of contour to the 20-second extents.
Some other interface elements added.

What can you play with when designing one in sci-fi?

Everything, of course. Signaling future-ness means extending known patterns, and sci-fi doesn’t answer to usability. Extend for story, extend for spectacle, extend for overwhelmedness. You know your job better than me. But if you want to keep a foot in believability, you should understand the point of each thing as you modify it and try not to lose that.

  1. Each frame serves as a mini-game, challenging the pilot to meet its center. Once that frame passes, that game is done and the next one is the new goal. Frames describe the near term. Having corners to the frame shape helps convey banking better. Circles would hide banking.
  2. Contour lines, if well designed, help describe the overall path and disambiguate the stack of frames. (As does lighting and shading and careful visual design, see above.) Contour lines convey the shape of the overall path and help guide steering between frames. Kind of like how you'd need to see the whole curve before drifting your car through one, the contour lines help the pilot plan for the near future.
  3. The birdie and brackets are what a pilot uses to know how close to the center they are. The birdie needs a center point. The brackets need to match the corners of the frame. Without these, it’s easier to drift off center.
  4. A horizon line provides feedback for when the plane is banked.
THIS BAD: You can kill the sense of the display by altering (or in this case, omitting) too much.

Since I mentioned that each frame acts as a mini-game, a word of caution: Just as you should be skeptical when looking to sci-fi, you should be skeptical when looking to games for their interfaces. The simulator most known for accuracy (Microsoft Flight Simulator) doesn't appear to have a tunnel-in-the-sky display, and other categories of games may not be optimizing for usability as much as just plain fun, with crashing your virtual craft just being part of the game. That's not an acceptable outcome in real-world piloting. So, be cautious considering game interfaces as models for this, too.

This clip of stall-testing in the forthcoming MSFS2020 still doesn’t appear to show one. 

So now let’s look at the three examples of sci-fi tunnel-in-the-sky displays in chronological order of release, and see how they fare.

Three examples from sci-fi

With those ideal components in mind, here are the three examples from the survey.

Alien (1979)
Blade Runner (1982)

Quick aside on the Blade Runner interface: The spikes at the top and bottom of the frame serve, in straight tunnels, as a horizontal degree-of-deviation indicator. They would not help as much in curved tunnels, and there is no matching vertical degree-of-deviation indicator. Unless that's handled automatically, like a car on a road, its absence is notable.

Starship Troopers (1997) We only get 15 frames of this interface in Starship Troopers, as Ibanez pilots the escape shuttle to the surface of Planet P. It is very jarring to see as a repeating gif, so accept this still image instead.

Some obvious things we see missing from all of them are the birdie, the box, and the contour lines. Why is this? My guess is that the computational power available in 1979 was not enough to manage those extra lines, and Ridley Scott just went with the frames. Then, once the trope had been established in a blockbuster, designers just kept repeating it rather than looking to see how it worked in the real world, or having the time to work through the interaction logic. So let me say:

  • Without the birdie and box, the pilot has far too much leeway to make mistakes. And in sci-fi contexts, where the tunnel-in-the-sky display is shown mostly during critical ship maneuvers, their absence is glaring.
  • Also, the lack of contour lines might not seem as important, since the screens typically aren't shown for very long, but when tunnels twist in crazy ways, contour lines would help signal the difficulty of the task ahead of the pilot very quickly.

Note that sci-fi will almost certainly encounter problems that real-world researchers will not have needed to consider, and so there’s plenty of room for imagination and additional design. Imagine helping a pilot…

  • Navigating the weird spacetime around a singularity
  • Bouncing close to a supernova while in hyperspace
  • Dodging chunks of spaceship, the bodies of your fallen comrades, and rising plasma bombs as you pilot shuttlecraft to safety on the planet below
  • AI on the ships that can predict complex flight paths, modify them in real time, and even assist with it all
  • Needing to have the tunnel be occluded by objects visible in a heads up display, such as when a pilot is maneuvering amongst an impossibly-dense asteroid field. 

…to name a few off the top of my head. These things don't happen in the real world, so they would be novel design challenges for the sci-fi interface designer.


So, now we have a deeper basis for discussing, critiquing, and designing sci-fi tunnel-in-the-sky displays. If you are an aeronautical engineer and have some more detail, let me hear it! I'd love for this to be a good general reference for sci-fi interface designers.

If you are a fan, and can provide other examples in the comments, it would be great to see how they compare.

Happy flying, and see you back in Blade Runner in the next post.

Ship Console

FaithfulWookie-console.png

The only flight controls we see are an array of stay-state toggle switches (see the lower right hand of the image above) and banks of lights. It's a terrifying thought that anyone would have to fly a spaceship with binary controls, but we have some evidence that there are analog controls, as when Luke moves his arms after the Falcon fires shots across his bow.

Unfortunately we never get a clear view of the full breadth of the cockpit, so it’s really hard to do a proper analysis. Ships in the Holiday Special appear to be based on scenes from A New Hope, but we don’t see the inside of a Y-Wing in that movie. It seems to be inspired by the Falcon. Take a look at the upper right hand corner of the image below.

ANewHope_Falcon_console01.png

Piloting Controls

Firefly_piloting

Pilot's controls (in a spaceship) are one of the categories of “things” that remained on the cutting room floor of Make It So when we realized we had about 50% too much material before publishing. I'm about to discuss such pilot's controls as part of the review of Starship Troopers, and I realized that I'll first need to establish the core issues in a way that will be useful for discussions of pilot's controls from other movies and TV shows. So in this post I'll describe the key issues independent of any particular movie.

A big shout out to commenters Phil (no last name given) and Clayton Beese for helping point me towards some great resources and doing some great thinking around this topic originally with the Mondoshawan spaceship in The Fifth Element review.

So let’s dive in. What’s at issue when designing controls for piloting a spaceship?

BuckRogers_piloting

First: Spaceships are not (cars|planes|submarines|helicopters|Big Wheels…)

One thing to be careful about is mistaking a spacecraft for similar-but-not-the-same Terran vehicles. Most of us have driven a car, and so we carry those mental models with us. But a car moves across 2(.1?) dimensions. The well-matured controls for piloting roadcraft have optimized for those dimensions. You basically get a steering wheel for your hands to specify change-of-direction on the driving plane, and controls for speed.

Planes or even helicopters seem like they might be a closer fit, moving as they do more fully across a third dimension, but they’re not right either. For one thing, those vehicles are constantly dealing with air resistance and gravity. They also rely on constant thrust to stay aloft. Those facts alone distinguish them from spacecraft.

These familiar models (cars and planes) are made worse by the fact that so many sci-fi piloting interfaces are based on them, putting yokes in the hands of the pilots, when yokes only fit plane-like tasks. A spaceship is a different thing, piloted in a different environment with different rules, making it a different task.

2001_piloting

Maneuvering in space

Space is upless and downless, except as a point relates to other things, like other spacecraft, ecliptic planes, or planets. That means that a spacecraft may need to be angled in fully 3-dimensional ways in order to orient it to the needs of the moment. (Note that you can learn more about flight dynamics and attitude control on Wikipedia, but those articles are sorely lacking in details about the interfaces.)

Orientation

By convention, rotation is broken out along the Cartesian axes (and sketched in code below the figure).

  • X: Tipping the nose of the craft up or down is called pitch.
  • Y: Moving the nose left or right around a vertical axis, like turning your head left and right, is called yaw.
  • Z: Tilting the craft left or right around an axis that runs from the front of the craft to the back is called roll.

Angles_620
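
For those who want the math, here's a minimal sketch of those three rotations using the axis conventions above. This is generic rotation math rather than anything from a particular spacecraft; angles are in radians, and vectors are (x, y, z) tuples.

```python
import math

def pitch(v, a):
    """Rotate about the left-right (X) axis: nose tips up or down."""
    x, y, z = v
    return (x,
            y * math.cos(a) - z * math.sin(a),
            y * math.sin(a) + z * math.cos(a))

def yaw(v, a):
    """Rotate about the vertical (Y) axis: nose swings left or right."""
    x, y, z = v
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))

def roll(v, a):
    """Rotate about the nose-to-tail (Z) axis: craft tilts left or right."""
    x, y, z = v
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a),
            z)

# Yawing the nose-forward vector a quarter turn swings it onto the X axis.
print(yaw((0, 0, 1), math.pi / 2))  # ≈ (1.0, 0.0, 0.0)
```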

In addition to angle, since you’re not relying on thrust to stay aloft, and you’ve already got thrusters everywhere for arbitrary rotation, the ship can move (or translate, to use the language of geometry) in any direction without changing orientation.

Translation

Translation is also broken out along the Cartesian axes.

  • X: Moving to the left or right, like strafing in the FPS sense. In Cartesian systems, this axis is called the abscissa.
  • Y: Moving up or down. This axis is called the ordinate.
  • Z: Moving forward or backward. This axis is less frequently named, but is called the applicate.

Translations_620

Thrust

I'll make a nod to the fact that thrust also works differently in space when traveling over long distances between planets. Spacecraft don't need continuous thrust to keep moving along the same vector, so it makes sense that the “gas pedal” would be different in these kinds of situations. But then, looking into it, you run into the theory of constant-thrust or constant-acceleration travel, and bam, suddenly you're into astrodynamics and equations peppered with sigmas, and you're in way over my head. It's probably best to presume that the thrust controls are set-point rather than throttle, meaning the pilot specifies a desired speed rather than an amount of thrust, and some smart algorithm handles all the rest.
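
Here's a minimal sketch of what that set-point logic might look like, assuming a drag-free craft and a bare-bones proportional controller (a real system would be far more sophisticated). The division of labor is the point: the pilot names a speed; the algorithm decides the burn.

```python
def thrust_command(current_speed, target_speed, max_thrust, gain=0.5):
    """Return a thrust level in [-max_thrust, max_thrust] that closes
    the gap between current and target speed along the flight vector."""
    error = target_speed - current_speed
    return max(-max_thrust, min(max_thrust, gain * error))

# Pilot dials in 250 m/s while coasting at 100 m/s: burn hard, then
# taper. Once the speeds match, the command falls to zero, and with no
# drag, zero thrust simply holds the velocity.
print(thrust_command(100, 250, max_thrust=50))  # -> 50 (saturated)
print(thrust_command(249, 250, max_thrust=50))  # -> 0.5 (tapering)
```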

Given these tasks of rotation, translation, and thrust, when evaluating pilot's controls, we first have to ask how the pilot goes about specifying these things. But even that answer isn't simple, because you first need to determine what kind of interface agency the controls are built with.

Max was a fully sentient AI who helped David pilot.


Interface Agency

If you're not familiar with my categories of agency in technology, I'll cover them briefly here. (I'll be publishing them in full in an upcoming book with Rosenfeld Media, if you want to know more.) In short, you can think of interfaces as having four categories of agency.

  • Manual: In which the technology shapes the (strictly) physical forces the user applies to it, like a pencil. Such interfaces optimize for good ergonomics.
  • Powered: In which the user is manipulating a powered system to do work, like a typewriter. Such interfaces optimize for good feedback.
  • Assistive: In which the system can offer low-level feedback, like a spell checker. Such interfaces optimize for good flow, in the Csikszentmihalyi sense.
  • Agentive: In which the system can pursue primitive goals on behalf of the user, like software that could help you construct a letter. This would be categorized as “weak” artificial intelligence, and specifically not the sentience of “strong” AI. Such interfaces optimize for good conversation.

So what would these categories mean for piloting controls? Manual controls might not really exist, since humans can't travel in space without powered systems. Powered controls would be much like early real-world spacecraft. Assistive controls might provide collision warnings or basic help with plotting a course. Agentive controls would allow a pilot to specify the destination and timing, and the system would handle things until it encountered a situation it couldn't handle. Of course, this being sci-fi, these interfaces can pass beyond the singularity to full, sentient artificial intelligence, like HAL.

Understanding the agency helps contextualize the rest of the interface.

Firefly_piloting03

Inputs

How does the pilot provide input, how does she control the spaceship? With her hands? Partially with her feet? Via a yoke, buttons on a panel, gestural control of a volumetric projection, or talking to a computer?

If manual, we’ll want to look at the ergonomics, affordances, and mappings.

Even agentive controls need to gracefully degrade to assistive and powered interfaces for dire circumstances, so we'd expect to see physical controls of some sort. But these interfaces would additionally need some way to specify more abstract variables like goals, preferences, and constraints.

Consolidation

Because of the predominance of the yoke interface trope, a major consideration is how consolidated the controls are. Is there a single control that the pilot uses? Or multiple? What variables does each control? If the apparent interface can’t seem to handle all of orientation, translation, and thrust, how does the pilot control those? Are there separate controls for precision maneuvering and speed maneuvering (for, say, evasive maneuvers, dog fights, or dodging asteroids)?

The yoke is popular since it’s familiar to audiences. They see it and instantly know that that’s the pilot’s seat. But as a control for that pilot to do their job, it’s pretty poor. Note that it provides only two variables. In a plane, this means the following: Turn it clockwise or counterclockwise to indicate roll, and push it forward or pull it back for pitch. You’ll also notice that while roll is mapped really well to the input (you roll the yoke), the pitch is less so (you don’t pitch the yoke).

So when we see a yoke for piloting a spaceship, we must acknowledge that a) it's missing an axis of rotation that spacecraft need, i.e. yaw, and b) it presumes only one type of translation, which is forward. That leaves us looking about the cockpit for clues about how the pilot might accomplish these other kinds of maneuvers.
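
To see the gap concretely, here's a sketch of the yoke's mapping. The two input variables and the missing maneuvers come from the critique above; the names are just for illustration.

```python
def yoke_to_commands(rotation_cw, push_pull):
    """Map the yoke's two inputs to the six things a spacecraft can do.
    None marks a maneuver the yoke simply has no input for."""
    return {
        "roll": rotation_cw,   # well mapped: you roll the yoke
        "pitch": push_pull,    # less well mapped: you don't pitch a yoke
        "yaw": None,           # missing axis of rotation
        "translate_x": None,   # no strafing input
        "translate_y": None,   # no up/down input
        "translate_z": None,   # presumed forward, via a separate throttle
    }
```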

StarshipTroopers_pilotingoutput

Output

How does the pilot know that her inputs have registered with the ship? How can she see the effects or the consequences of her choices? How does an assistive interface help her identify problems and opportunities? How does an agentive or even AI interface engage the pilot, asking for goals, constraints, and exceptions? I have the sense that human perception is optimized for a mostly-two-dimensional plane with a predator's eyes-forward gaze. How does the interface help the pilot expand her perception fully to 360° and three dimensions, to the distances relevant for space, and to see the invisible landscape of gravity, radiation, and interstellar material?

Narrative POV

An additional issue is that of narrative POV. (Readers of the book will recall this concept came up in the Gestural Interfaces chapter.) All real-world vehicles work from a first-person perspective. That is, the pilot faces the direction of travel and steers the vehicle almost as if it were their own body.

But if you've ever played a racing game, you'll recognize that there's another possible perspective. It's called the third-person perspective, and it's where the camera sits up above the vehicle, slightly back. It's less immediate than first person, but provides greater context. It's quite popular with gamers in racing games, being rated twice as popular in one informal poll from The Escapist magazine. What POV is the pilot's display? Which one would be of greater use?

MatrixREV_piloting

The consequent criteria

I think these are all the issues, but this is new thinking for me, so I'll leave it up a bit for others to comment or correct. If I've nailed them, then for any piloting controls we review in the future, these are the lenses through which we'll look and begin our evaluation:

  • Agency [ manual | powered | assistive | agentive | AI ]
  • Inputs
    • Affordances
    • Ergonomics
    • Mappings
      • Orientation
      • Translation
      • Thrust
    • Consolidation
  • Outputs (especially Narrative POV)

This checklist won’t magically give us insight into the piloting interface, but will be a great place to start, and a way to compare apples to apples between these interfaces.

Brain interfaces as wearables

There are lots of brain devices, and the book has a whole chapter dedicated to them. Most of these brain devices are passive, merely needing to be near the brain to have whatever effect they are meant to have. (The chapter discusses, in turn: reading from the brain, writing to the brain, telexperience, telepresence, manifesting thought, virtual sex, piloting a spaceship, and playing an addictive game. It's a good chapter that never got that much love. Check it out.)


This is a composite rendering of the shapes of most of the wearable brain control devices in the survey. Who can name the “tophat”?

Since the vast majority of these devices are activated by, well, you know, invisible brain waves, the most that can be pulled from them is the sartorial- and social-ness of their industrial design. But there are two with genuine state-change interactions of note for interaction designers.

Star Trek: The Next Generation

The eponymous game of “The Game” (S05E06) is delivered through a wearable headset. It is a thin band that arcs over the head from ear to ear, with two extensions out in front of the face that project visuals into the wearer's eyes.

STTNG The Game-02

The only physical interaction with the device is activation, which is accomplished by depressing a momentary button located at the top of one of the temples. It’s a nice placement since the temple affords placing a thumb beneath it to provide a brace against which a forefinger can push the button. And even if you didn’t want to brace with the thumb, the friction of the arc across the head provides enough resistance on its own to keep the thing in place against the pressure. Simple, but notable. Contrast this with the buttons on the wearable control panels that are sometimes quite awkward to press into skin.

Minority Report (2002)

The second is the Halo coercion device from Minority Report. This is barely worth mentioning, since the only interaction is by the PreCrime cop, and it is only to extend the device from a compact shape to one suitable for placing on a PreCriminal's head. Push the button and pop! it opens. While it's actually being worn, there is no interacting with it…or much of anything, really.

MinRep-313

MinRep-314

Head: Y U No house interactions?

There is a solid physiological reason why the head isn't a common place for interactions: raising the hands above the heart requires a small bit of cardiac effort, and wouldn't be suitable for frequent interactions simply because over time it would add up to work. Google Glass faced similar challenges, and my guess is that's why it uses a blended interface of voice, head gestures, and a few manual gestures. Relying on purely manual interactions would violate the wearable principle of apposite I/O.

At least as far as sci-fi is telling us, the head is not often a fitting place for manual interactions.

Wearable Control Panels

As I said in the first post of this topic, exosuits and environmental suits are out of the definition of wearable computers. But there is one item commonly found on them that can count as wearable, and that’s the forearm control panels. In the survey these appear in three flavors.

Just Buttons

Sci-fi acknowledged the need for environmental suits fairly late, and the need for controls on them later still. The first wearable control panel belongs to the original series of Star Trek (“The Naked Time,” S01E04). The sparkly orange suits have a white cuff with a red and a black button. In the opening scene we see Mr. Spock press the red button to communicate with the Enterprise.

This control panel is crap. The buttons are huge momentary buttons that exist without a billet, and would be extremely easy to press accidentally. The cuff is quite loose, meaning Spock or the redshirt has to fumble around to locate it each time. Weeeeaak.

Star Trek (1966)

TOS_orangesuit

Some of these problems were solved when another WCP appeared three decades later in the Next Generation movie First Contact.

Star Trek First Contact (1996)

ST1C-4arm

This panel is at least anchored, and positioned where it could be found fairly easily via proprioception. It seems to have a facing that acts as a billet, and so might be tough to accidentally activate. It works counter to its wearer's social goals, though, since it glows. The colored buttons help to distinguish it when you're looking at it, but the glow sure makes it tough to sneak around in darkness. Also, no labels? Missing labels seem to be a thing with WCPs, since even Pixar thought they weren't necessary.

The Incredibles (2004)

Admittedly, this WCP belonged to a villain who had no interest in others’ use of it. So that’s at least diegetically excusable.

TheIncredibles_327

Hey, Labels, that’d be greeeeeat

Zipping back to the late 1960s, Kubrick’s 2001 nailed most everything. Sartorial, easy to access and use (look, labels! color differentiation! clustering!), social enough for an environmental suit, billeted, and the inputs are nice and discrete, even though as momentary buttons they don’t announce their state. Better would have been toggle buttons.

2001: A Space Odyssey (1968)

2001-spacesuit-021

Also, what the heck does the “IBM” button do, call a customer service representative from space? Embarrassing. What's next, a huge Mercedes-Benz logo on the chest plate? Actually, no, it's a Compaq logo.

A monitor on the forearm

The last category of WCP in the survey is seen in Mission to Mars, and it’s a full-color monitor on the forearm.

Mission to Mars

M2Mars-242

This is problematic for general use but fine for this particular application. These are scientists conducting a near-future trip to Mars, and so having access to rich data is quite important. They're not facing dangerous Borg-like things, so they don't need to worry about the light. I'd be a bit worried about the giant buttons that stick out on every edge, which seem to be begging to be bumped. I also question whether those particular buttons and that particular screen layout are wise choices, but that's for the formal M2M review. A touchscreen might be possible. You might think that would be easy to accidentally activate, but not if it could only be activated by the fingertips of the exosuit's gloves.

Wearableness

This isn’t an exhaustive list of every wearable control panel from the survey, but a fair enough recounting to point out some things about them as wearable objects.

  • The forearm is a fitting place for controls and information. Wristwatches have taken advantage of this for…some time. 😛
  • Socially, it’s kind of awkward to have an array of buttons on your clothing. Unless it’s an exosuit, in which case knock yourself out.
  • If you're meant to be sneaking around, lit buttons are contraindicated. As are extruded switch surfaces that can be glancingly activated.
  • The fitness of the inputs and outputs depend on the particular application, but don’t drop the understandability (read: labels) simply for the sake of fashion. (I’m looking at you, Roddenberry.)

(Other) wearable communications

The prior posts discussed the Star Trek combadge and the Minority Report forearm-comm. In the name of completeness, there are other wearable communications in the survey.

There are tons of communication headsets, such as those found in Aliens. These are mostly off-the-shelf varieties and don’t bear a deep investigation. (Though readers interested in the biometric display should check out the Medical Chapter in the book.)

Besides these, there are three unusual ones in the survey worth noting. (Here we should give a shout-out to Star Wars' Lobot, who might count, except that in the short scenes where he appears in Empire, it seems he cannot remove these implants, so they're more cybernetic enhancements than wearable technology.)

Gattaca-159

In Gattaca, Vincent and his brother Anton use wrist telephony. These are notable for their push-while-talking activation. Though it would be a pain for long conversations, it's certainly a clear social signal that a microphone is on, it telegraphs the status of the speaker, and it would make accidental activation somewhat difficult.

Firefly_E11_036

In the Firefly episode “Trash,” the one-shot character Durran summons the police by pressing the side of a ring he wears on his finger. Though this exact mechanism is not given screen time, it has some challenging constraints. It's a panic button, and meant to be hidden-in-plain-sight most of the time. This is how it stays social. How does he avoid accidental activation? There could be some complicated tap or gesture, but I'd design it to require contact from the thumb for some duration, say three seconds. This would prevent accidental activation most of the time, and still not draw attention to itself. Adding an increasingly intense haptic feedback after a second of hold would confirm the process in intended activations and signal him to move his thumb in unintended activations.
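
Here's a minimal sketch of that activation logic. The one-second haptic onset and three-second hold are the values proposed above; the tick source and haptics hardware are assumed.

```python
HAPTIC_ONSET_S = 1.0   # haptics begin ramping after one second of hold
HOLD_TO_FIRE_S = 3.0   # the panic signal fires after three seconds

def ring_state(held_s):
    """Given how long the thumb has held the ring, return
    (haptic_intensity 0..1, summon_police?)."""
    if held_s < HAPTIC_ONSET_S:
        return 0.0, False
    if held_s < HOLD_TO_FIRE_S:
        # Ramp intensity: intended holds feel confirmed; unintended
        # ones prompt the wearer to move his thumb before firing.
        ramp = (held_s - HAPTIC_ONSET_S) / (HOLD_TO_FIRE_S - HAPTIC_ONSET_S)
        return ramp, False
    return 1.0, True

print(ring_state(2.0))  # -> (0.5, False): halfway through the ramp
print(ring_state(3.1))  # -> (1.0, True): police summoned
```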

BttF_066

In Back to the Future Part II, one member of the gang of bullies that Marty encounters wears a plastic soundboard vest. (That's him on the left, officer. His character name was Data.) To use the vest, he presses buttons to play prerecorded sounds. He emphasizes Griff's accusation of “chicken” with a quick cluck. Though this fails the sartorial criteria, being hard plastic, as a fashion choice it does fit the punk character type for being arresting and even uncomfortable, per the Handicap Principle.

There are certainly other wearable communications in the deep waters of sci-fi, so any additional examples are welcome.

Next up we’ll take a look at control panels on wearables.

Precrime forearm-comm

MinRep-068

Though most everyone in the audience left Minority Report with the precrime scrubber interface burned into their minds (see Chapter 5 of the book for more on that interface), the film was loaded with lots of other interfaces to consider, not the least of which were the wearable devices.

Precrime forearm devices

These devices are worn when Anderton is in his field uniform while on duty, and are built into the material across the left forearm. On the anterior side just at the wrist is a microphone for communications with dispatch and other officers. By simply raising that side of his forearm near his mouth, Anderton opens the channel for communication. (See the image above.)

MinRep-080

There is also a basic circular display in the middle of the posterior left forearm that displays a countdown for the current mission: the time remaining before the crime that was predicted to occur should take place. The text is large white characters against a dark background. Although the translucency of the display against the noisy background of the watch (what is that in there, a Joule heating coil?) presents some visual challenge, the jump-cut transitions of the seconds ticking by command the user's visual attention.

On the anterior forearm there are two visual output devices: one rectangular display for perpetrator information (and general use?) and one amber-colored circular one we never see up close. In the beginning of the film Anderton has a man pinned to the ground and scans his eyes with a handheld Eyedentiscan device. Through retinal biometrics, the pre-offender's identity is confirmed and sent to the rectangular display, where Anderton can confirm that the man is a citizen named Howard Marks.

Wearable analysis

Checking these devices against the criteria established in the combadge writeup, they fare well. This is partially because they build on a century of product evolution for the wristwatch.

They are sartorial, bearing displays that lie flat against the skin, connected to soft parts that hold them in place.

They are social, being in a location other people are used to seeing similar technology.

They are easy to access and use for being along the forearm. Placing different kinds of information at different spots of the body means the officer can count on body memory to access particular data, e.g. perp info is anterior middle forearm. That saves him the cognitive load of managing modes on the device.

The display size for this rectangle is smallish considering the amount of data being displayed, but being on the forearm means that Anderton can adjust its apparent size by bringing it closer or farther from his face. (Though we see no evidence of this in the film, it would be cool if the amount of information changed based on distance-to-the-observer’s face. Writing that distanceFromFace() algorithm might be tricky though.)
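
For fun, here's a sketch of what that might look like, assuming a hypothetical distance-from-face reading in meters. The detail tiers and thresholds are made up for illustration; nothing like this appears in the film.

```python
# Hypothetical detail tiers: (max distance in meters, what to show).
DETAIL_TIERS = [
    (0.25, "full dossier: name, photo, priors, countdown"),
    (0.45, "name, photo, countdown"),
    (0.80, "countdown only"),
]

def display_contents(distance_from_face_m):
    """Pick how much to show based on how close the forearm is held."""
    for max_distance, contents in DETAIL_TIERS:
        if distance_from_face_m <= max_distance:
            return contents
    return "idle glyph"  # arm at the officer's side

print(display_contents(0.3))  # -> "name, photo, countdown"
```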

There might be some question about accidental activation, since Anderton could be shooting the breeze with his buddies while scratching his nose and mistakenly send a dirty joke to a dispatcher, but this seems like an unlikely and uncommon enough occurrence to simply not worry about it.

Using voice as the input is cinegenic, but especially in his line of work a subvocalization input would keep him more quiet—and therefore safer—in the field. Still, voice inputs are fast and intuitive, making for fairly apposite I/O. Ideally he might have some haptic augmentation of the countdown, and audio augmentation of the info, so Anderton wouldn't have to pull his arm and attention away from the perpetrator; but as long as the information is glanceable and Anderton is merely confirming data (rather than absorbing new information), recognition is a fast enough cognitive process that this isn't too much of a problem.

All in all, not bad for a “throwaway” wearable technology.

The combadge & ideal wearables

There’s one wearable technology that, for sheer amount of time on screen and number of uses, eclipses all others, so let’s start with that. Star Trek: The Next Generation introduced a technology called a combadge. This communication device is a badge designed with the Starfleet insignia, roughly 10cm wide and tall, that affixes to the left breast of Starfleet uniforms. It grants its wearer a voice communication channel to other personnel as well as the ship’s computer. (And as Memory Alpha details, the device can also do so much more.)

Chapter 10 of Make It So: Interaction Design Lessons from Science Fiction covers the combadge as a communication device. But in this writeup we’ll consider it as a wearable technology.

Enterprise-This-is-Riker

Wearable technologies in sci-fi

IMG_8951_

Recently I was interviewed for The Creators Project about wearable technologies for the Intel Make It Wearable Challenge, both for my (old) role as a designer and managing director at Cooper and in relation to sci-fi interfaces. In that interview I referenced a few technologies from the survey relevant to our conversation. Video is a medium constrained by time, so here on scifiinterfaces.com I hope to give the topic a more thorough consideration.

This is a different sort of post than I’ve put to the blog before, more akin to the chapters from the book. This won’t be about a single movie or television show as much as it is a cross-section from many shows.


Image courtesy of Creative Applications Network

Defining wearable

What counts? Fortunately we don't have to work too hard on this definition. The name makes it pretty clear that these are technologies worn on the body, either directly or incorporated into clothing. But there are two edge cases that might seem to count, which I'll call out as specifically not wearable.

TheFifthElement-Rhod-002

Carryable technologies—like cell phones, most weapons, or even Ruby Rhod's staff from The Fifth Element—aren't quite the same thing. When in use, these technologies occupy one or both of the user's hands. They also have to be holstered or manually put away when not in use. That introduces some different constraints, microinteractions, and ergonomic considerations. In contrast, wearable technologies don't need to be fetched from storage. They're just…there, usable at a moment's notice. So for purposes of the sci-fi interfaces from the survey, I'm only looking at wearable technologies and not these carryable ones.

IronMan_186

Perhaps more controversially, exosuits lie outside the definition. Certainly, by definition, exosuits are worn. Tony Stark's Iron Man suit, the loader that Ripley wears in Aliens, and the APUs used to defend Zion in The Matrix Revolutions are all worn by their users. But these technologies can't really be donned or removed casually. Users climb into them and strap in, or as with Iron Man, are mechanically sealed inside. That breaks a connotation of the term “wearable” as it's used today: that wearable technology fits into our everyday lives. It's thin, light, and flexible enough to let us ride the bus, have coffee with a friend, or attend to our jobs with little to no disruption. I can't really see trying to use Ripley's loader to grab hold of my espresso cup and ask someone about how their day's gone, so exosuits are out. (Attentive readers will note that exosuits are also called out as excluded from gestural technologies in Chapter 5 of the book. Fans of these cool interfaces must still wait, but someday these devices will get their due attention.)

Catch me soon if I’m wrong in excluding these two categories of tech from wearables, because the remainder of the writeups are based on this boundary.

Even excluding these two, we're left with quite a bit to consider, reaching almost back to the beginning of cinema. The first sci-fi film, Le Voyage dans la Lune, had nothing we'd recognize as an interface, so of course that's off the hook. The second, Metropolis, for all of its prescience, puts technology in the furniture and walls of its Upper City, as monstrous edifices in the Lower City, or as the wicked robot Maria.

But the next thing in the survey is the Buck Rogers serials from the 1930s, and there we see a few technologies that are worn. Since then, we’ve seen devices for communication, mind control, biometrics, fashion, gaming, tracking, plus a few nifty one-offs. Of course the survey is just that, the catalog of interfaces captured and documented so far. Sci-fi is vast and has continued since the book was published. If you see any missing by the time I wrap these up, please let me know.

With this introduction complete, in the next several posts we'll look at several examples in detail. But the first one is the big one, and that's the Star Trek combadge.