Since my last post, news broke that Chadwick Boseman has passed away after a four-year battle with cancer. He kept his struggles private, so the news was sudden and hard-hitting. The fandom is still reeling. Black people, especially, have lost a powerful, inspirational figure. The world has also lost a courageous and talented young actor. Rise in Power, Mr. Boseman. Thank you for your integrity, bearing, and strength.
Black Panther’s airship is a triangular vertical-takeoff-and-landing vehicle called the Royal Talon. We see its piloting interface twice in the film.
The first time is near the beginning of the movie. Okoye and T’Challa are flying at night over the Sambisa forest in Nigeria. Okoye sits in the pilot’s seat in a meditative posture, facing a large forward-facing bridge window with a heads-up display. A horseshoe-shaped shelf around her is filled with unactivated vibranium sand. Around her left wrist, her kimoyo beads glow amber, projecting a volumetric display around her forearm.
She announces to T’Challa, “My prince, we are coming up on them now.” As she disengages from the interface, retracting her hands from the pose, the kimoyo projection shifts and shrinks. (See more detail in the video clip, below.)
The second time we see it is when they pick up Nakia and save the kidnapped girls. On their way back to Wakanda we see Okoye again in the pilot’s seat. No new interactions are shown in this scene, though the camera lingers on the shot from behind, with the glowing seatback looking like some high-tech spine.
Now, these brief glimpses don’t give a review a lot to go on. But for the sake of completeness, let’s talk about that volumetric projection around her wrist. I’ll note that it is a lovely echo of Dr. Strange’s interface for controlling the Eye of Agamotto (the time stone).
Wrist projections are going to be all the rage at the next Snap, I predict.
But we never really see Okoye look at this VP or use it. Cross-referencing the Wakandan alphabet, those five symbols at the top translate to 1 2 K R I, which doesn’t tell us much. (It doesn’t match the letters seen on the HUD.) It might be a visual do-not-disturb signal to onlookers, but if there’s other meaning that the letters and petals are meant to convey to Okoye, I can’t figure it out. At worst, I think having the wrist movements of one hand emphasized in your peripheral vision with a glowing display is a dangerous distraction from piloting. Her eyes should be on the “road” ahead of her.
The image has been flipped horizontally to illustrate how Okoye would see the display.
Similarly, we never get a good look at the HUD, or see Okoye interact with it, so I’ve got little to offer other than a mild critique that it looks full of pointless ornamental lines, many of which would obscure things in her peripheral vision, which is where humans need the most help detecting things other than motion. But modern sci-fi interfaces generally (and the MCU in particular) are in a baroque period, and this is partly how audiences recognize sci-fi-ness.
I also think that requiring a pilot to maintain full lotus position while flying is a little much, but certainly, if there’s anyone who can handle it, it’s the leader of the Dora Milaje.
One remarkable thing to note is that this is the first brain-input piloting interface in the survey. Okoye thinks what she wants the ship to do, and it does it. I expect, given what we know about kimoyo beads in Wakanda (more on these in a later post), what’s happening is she is sending thoughts to the bracelet, and the beads are conveying the instructions to the ship. As a way to show Okoye’s self-discipline and Wakanda’s incredible technological advancement, this is awesome.
Unfortunately, I don’t have good models for evaluating this interaction. And I have a lot of questions. As with gestural interfaces, how does she avoid a distracted thought from affecting the ship? Why does she not need a tunnel-in-the-sky assist? Is she imagining what the ship should do, or a route, or something more abstract, like her goals? How does the ship grant her its field awareness for a feedback loop? When does the vibranium dashboard get activated? How does it assist her? How does she hand things off to the autopilot? How does she take it back? Since we don’t have good models, and it all happens invisibly, we’ll have to let these questions lie. But that’s part of us, from our less-advanced viewpoint, having to marvel at this highly-advanced culture from the outside.
Black Health Matters
Each post in the Black Panther review is followed by actions that you can take to support black lives.
Thinking back to the terrible loss of Boseman: Fuck cancer. (And not to imply that his death was affected by this, but also:) Fuck the racism that leads to worse medical outcomes for black people.
One thing you can do is to be aware of the diseases that disproportionately affect black people (diabetes, asthma, lung scarring, strokes, high blood pressure, and cancer) and be aware that no small part of these poorer outcomes is racism, systemic and individual. Listen to Dorothy Roberts’ TED talk, calling for an end to race-based medicine.
If you’re the reading sort, check out the books Black Man in a White Coat by Damon Tweedy, or the infuriating history covered in Medical Apartheid by Harriet Washington.
If you are black, in Boseman’s memory, get screened for cancer as often as your doctor recommends it. If you think you cannot afford it and you are in the USA, this CDC website can help you determine your eligibility for free or low-cost screening: https://www.cdc.gov/cancer/nbccedp/screenings.htm. If you live elsewhere, you almost certainly have a better healthcare system than we do, but a quick search should tell you your options.
Cancer treatment is equally successful for all races. Yet black men have a 40% higher cancer death rate than white men and black women have a 20% higher cancer death rate than white women. Your best bet is to detect it early and get therapy started as soon as possible. We can’t always win that fight, but better to try than to find out when it’s too late to intervene. Your health matters. Your life matters.
So the first Fritzes are now a thing. Before I went off on that awesome tangent, where were we? Oh that’s right. I was reviewing Blade Runner as part of a series on AI in sci-fi. I was just about to get to Spinners. Now, vehicles are complicated things as it is, much more so when they are navigating proper 3D space. Additionally, the police force is, ostensibly, a public service, which complicates things even further. So this will get lengthy. Still, I think I can get this down to eight or so subtopics.
In the distant future of 2019⸮, flying cars, called “spinners,” are a reality. They’re largely for the wealthy and powerful (including law enforcement). The main protagonist, Deckard, is only ever a passenger in a few over the course of the film. His partner Gaff flies one, though, so we have enough usage to review.
Opening the skies to automobile-like traffic poses challenges, especially when those skies are full of lightning bolts, ever-present massive flares, distracting building-sized video advertisements, and of course, other spinners.
Piloting controls
To pilot the spinner, Gaff keeps his hands on each handle of a split yoke. Within easy reach of his fingers are a few unlabeled buttons and small lights. Once, we see him reach with his right thumb to press one of the buttons, but we don’t see any result, so it’s not clear what these buttons do. It’s nice that they don’t require him to take his hands off the controls. (This might seem like a prescient concept, but WP tells me the first non-horn wheel-mounted controls date as far back as 1966.)
It is contextualizing to note the mode of agency here. That is, the controls are manual, with no AI offering assistance or acting as an agent. (The AI is in the passenger’s seat, lol fight me.) It appears to be up to Gaff to observe conditions, monitor displays, perform wayfinding, and keep the spinner on track.
Note that we never see what his feet are doing, and never see him do anything with his hands other than putting on a headset before lift-off. There are lots of other controls to the pilot’s left and in the console between seats, but we never see them in use. So, you know, approach with caution. There are a lot of unknowns here.
The Traditional Chinese characters on the window read “No entry,” for citizens outside the spinner, passing by when it is on the ground. (Hat tips for the translation to Mischa Park-Doob and Frank Chung.)
The spinner is more like a VTOL aircraft or helicopter than a spaceship. That is, it is constantly in the presence of planetary gravity and must overcome the constant resistance of air. So the standards I established in the piloting controls post are of only limited use to us here.
So let’s look at how helicopter controls work. The FAA Helicopter Flying Handbook tells us that a pilot has controls for…
The vertical velocity, up or down. (Controlled by the angle of the control stick called the collective. The collective is to the left of the pilot’s hip when they are seated.)
The thrust. (Controlled by the twistgrip on the collective.)
Movement forward, rearward, left, and right. (Controlled with the stick in front of the pilot, called the cyclic.)
Yaw of the vehicle. (Controlled with the pair of antitorque pedals at the pilot’s feet.)
Since we don’t see Gaff when the spinner is moving up and down, let’s presume that the thing he’s gripping is like a Y-shaped cyclic, with lots of little additional controls around the handles. Then, if we presume he has a collective somewhere out of sight to his left and antitorque pedals at his feet, this interface meets modern helicopter standards for control. From the outside, those appear to be well mapped (collective up = helicopter up, cyclic right = helicopter right). Twist for thrust is a little weird, but it’s a standard and certainly learnable, as I recall from my motorcycling days. So let’s say it’s complete and convincing. Is it the best it could be? I’m not enough of an aeronautical engineer (read: not at all) to imagine better options, so let’s move along. I might have more to say if it was agentive.
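If we take that model seriously, the control mapping is simple enough to write down. Here’s a minimal sketch in Python; the function, names, and value ranges are all my own assumptions for illustration, not anything the film establishes.

```python
# A sketch of the helicopter control model mapped onto hypothetical
# spinner commands. Inputs are normalized to -1.0..1.0; all names and
# mappings are assumptions, following the FAA model described above.

def spinner_command(collective, twist, cyclic_x, cyclic_y, pedals):
    """Translate helicopter-style inputs into a motion command,
    using the well-mapped conventions: collective up = craft up,
    cyclic right = craft right, pedals = yaw."""
    return {
        "vertical": collective,  # collective angle -> climb / descend
        "thrust":   twist,       # twistgrip -> engine thrust
        "lateral":  cyclic_x,    # cyclic left/right -> drift left/right
        "fore_aft": cyclic_y,    # cyclic forward/back -> move fore/aft
        "yaw":      pedals,      # antitorque pedals -> nose left/right
    }

# Gaff climbs while easing the nose right:
print(spinner_command(collective=0.4, twist=0.5, cyclic_x=0.0,
                      cyclic_y=0.1, pedals=0.2))
```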
Dashboard
There are two large screens in the dashboard. The one directly in front of Gaff shows a stylized depiction of the 3D surfaces around him as cyan highlights on a navy blue background. Approaching red shapes describe a pill-shaped tunnel-in-the-sky display. These have been tested since 1981 and found to provide better tracking of ideal paths in manual flight, lower cognitive workload, and enhanced situational awareness. (https://arc.aiaa.org/doi/abs/10.2514/3.56119) So, this is believable and well done. I’m not sure that Gaff could readily use the 3D background to effectively understand the 3D terrain, but it is tertiary, after the real world and the tunnel display.
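Since tunnel-in-the-sky displays come up again later, it’s worth seeing how simple the core geometry is. This sketch (a bare pinhole projection; the focal length, gate sizes, and route are invented) projects rectangular “gates” along a planned route onto screen coordinates, which is why the pills appear to shrink and drift as they recede.

```python
# The core geometry of a tunnel-in-the-sky display: project rectangular
# "gates" along the planned route onto the screen with a pinhole camera
# model. Real avionics add much more; all values here are invented.

FOCAL = 500.0  # assumed focal length, in pixels

def project(x, y, z):
    """Perspective-project a point (x right, y up, z meters ahead of
    the craft) onto screen coordinates centered on the display."""
    if z <= 0:
        return None  # behind the pilot; not drawn
    return (round(FOCAL * x / z), round(FOCAL * y / z))

# Four gates, 50 m apart, on a route drifting gently right and downward.
for i in range(1, 5):
    z = 50.0 * i
    cx, cy = 2.0 * i, -1.0 * i      # gate center follows the route
    w, h = 20.0, 8.0                # gate half-width and half-height (m)
    corners = [(cx - w, cy - h, z), (cx + w, cy - h, z),
               (cx + w, cy + h, z), (cx - w, cy + h, z)]
    print(f"gate {i}:", [project(*p) for p in corners])
# Nearer gates render larger; the shrinking stack reads as a tunnel.
```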
I have to say that it’s a frustrating anti-trope to run into again, but it must be said: If the spinner knows where the ship should be, and general artificial intelligence exists in this diegesis, why exactly are humans doing the piloting? Shouldn’t the spinner fly itself? But back to the interfaces…
Above the tunnel-in-the-sky display is a cyan 7-segment LED scroll display. In the gif above it displays “MAXIMUM SPEED” and later it provides some wayfinding text. I’m not sure how many different types of information it is meant to cycle through, but it sure would be a pain to wait for vital information to appear, and distracting to have to control it to get to the one you wanted.
There is also a vertical screen in the middle of the console listing cyan labels ALT, VEL, and PTCH. These match to altitude, velocity, and pitch variables, reinforcing the helicopter model. The yellow numbers below these labels change in the scene very slowly, and—remarkably for a four-second interface from 1982—do not appear to change randomly. That’s awesome.
But then, there’s a paragraph of cyan text in the middle of the screen that appears over the course of the scene, letter by letter. This animation calls unnecessary attention to itself. There are also smaller, thin screens in the pilot’s door that continually scroll that same teeny-tiny cyan text. I’m not sure WTF all this text is supposed to be, since it would be horribly distracting to a pilot. There are also a few rows of white LEDs with cylon-eye displays traveling back and forth. They are distracting, but at least they’re regular, and might be habituate-able enough to act as some sort of ambient display. Anyway, if we were building this thing for real, we’d want to eliminate these.
Lastly, at the bottom of the center screen are some unlabeled bar charts depicting some variables that appear to be wiggling randomly. So, like, only the top fifth of this screen can be lauded. The rest is fuigetry. *sigh* It’s hard to escape.
Wayfinding
To help navigate the 3D space, pilots have a number of tools. First, there are windows where you expect windows to be in a car, and there are also glass panels under their feet. The movie doesn’t make a big deal out of it, but it’s clear in the scene where the spinner lifts off from the street level. These transparent panes surround pilots and passengers and allow them to track visual cues for landmarks and to identify collision threats.
It’s reflecting some neon on the street below.
The tunnel-in-the-sky display above is the most obvious wayfinding tool. Somehow Gaff has entered a destination, and the tunnel guides him where he needs to go. Since this entails a safe path through the air, it’s the most important display. Other bits of information (like the ALT, VEL, and PTCH in the center screen) should be oriented around it. This would make them glanceable, allowing Gaff to check them quickly and return his eyes to the windshield. In fact, we have to admit that a heads-up display would allow Gaff to keep his attention where it needs to be rather than splitting it between the real world and these dashboard displays. Modern vehicle drivers are used to this split attention, and can manage it well enough. But I suspect that a HUD would be better.
It’s also at this point that you begin to wonder if these are the scout ships we see in Close Encounters.
There is also that crawling LED display above the tunnel-in-the-sky screen. In one scene it shows “SECTOR FOUR (4)…QUAD-” (we don’t get to see the end of this phrase) but it implies that one of the bits of information this scroll provides is a reminder of the name of the neighborhood you’re currently in. That really only helps if you’re way off course, and seems too low a fidelity for actual wayfinding assistance, but presuming the tunnel-in-the-sky is helping provide the rest of the wayfinding, this information is of secondary importance.
A special note about takeoff: ENVIRON CTR
The display sequence infamous for appearing in both Alien and Blade Runner happens as Gaff lifts off in a spinner early in the film. White all-cap letters label this blue screen “ENVIRON CTR,” above a grid of square characters. Then two 8-digit sequences “drop” down the center of the square grid: 92886599 | 95654085. Once they drop 3 rows, the background turns red, the grid disappears to be replaced by a big blinking label PURGE. Characters at the bottom read “24556 DR 5”, and don’t change.
After the spinner lifts off, the display shows a complex diagram of a circle-within-a-circle, illustrating the increasing elevation from the ground below. The delightful worldbuilding thing about the sequence is that it is inscrutable, legible only to a trained driver, yet gets full focus on screen. There’s not really enough information about the speculative engineering or functional constraints of the spinner to say why these screens would be necessary or useful. I have a suspicion that a live camera view would be more useful than the circle-within-a-circle view, but gosh, it sure is cool. Here’s the shot from Alien, by the way, for easy comparison.
Since people seem to be all over this one now, let me also interject that Alien is also connected to Firefly, since Mal’s anti-aircraft HUD in the pilot episode had a Weyland-Yutani logo. Chew on that trivia, Internet.
Intercar communication
Of special note is a scene just before his call to Sebastian’s apartment. Deckard is sitting in his parked vehicle in a call with Bryant. A police spinner glides by and we hear an announcement over its loudspeaker, directed to Deckard’s vehicle: “This sector’s closed to ground traffic. What are you doing here?” From inside his vehicle, Deckard looks toward his video phone in the console (we never see if there is video, but he’s looking in that direction rather than out the window) and, without touching a thing, responds defensively, “I’m working. What are you doing?” The policeman’s reply comes through the videophone’s speakers: “Arresting you, that’s what I’m doing.”
Note that Deckard did not have to answer the call or even put Bryant on hold. We don’t know what the police officer did on their end, but this interaction implies that the police can make an instant, intrusive audio connection with vehicles it finds suspicious. It’s so seamless it will slip by you if you don’t know to look for it, but it paints quite a picture of intercar communication. Can you imagine if our cars automatically shared an audio space with the cars around it?
External interfaces
Another aspect of the spinner is that it is an interface not just for the people using it, but for the citizens observing or near it as it goes about its business. There are a number of features that help it act as an interface to the public.
Police exist as a social service, and the 995 repeated around the outside helps remind citizens of the number they can call in case of an emergency.
Modern patrol cars have beacons and sirens to tell other drivers to get out of the way when they are on urgent business. Police spinners are gravid with beacons, having 12 of them visible from the front alone. (See below.) As the spinner is taking off, yellow and blue beacons circle as a warning. This would be of no help to a blind person nearby, but the vehicle does make some incidental noise that serves as an audible warning.
The rich light strip makes sense because it has such a greater range of movement than ground-based cars, and needs more attention grabbing power. Another nice touch is that, since the spinner can be above people, there are also beacons on the chassis.
Upshot: Spinners do well
So, all in all, the spinner fares quite well on close inspection. It builds on known models of piloting, shows mostly-relevant data, uses known best practices for assistance, and has a lot of well-considered surface features for citizens.
Now if only I could figure out why they’re called spinners.
The only flight controls we see are an array of stay-state toggle switches (see the lower right hand of the image above) and banks of lights. It’s a terrifying thought that anyone would have to fly a spaceship with binary controls, but we have some evidence that there are analog controls, since Luke moves his arms after the Falcon fires shots across his bow.
Unfortunately we never get a clear view of the full breadth of the cockpit, so it’s really hard to do a proper analysis. Ships in the Holiday Special appear to be based on scenes from A New Hope, but we don’t see the inside of a Y-Wing in that movie. It seems to be inspired by the Falcon. Take a look at the upper right hand corner of the image below.
After recklessly undocking we see Ibanez using an interface of…an indeterminate nature.
Through the front viewport Ibanez can see the cables and some small portion of the docking station. That’s not enough for her backup maneuver. To help her with that, she uses the display in front of her…or at least I think she does.
The display is a yellow wireframe box that moves “backwards” as the vessel moves backwards. It’s almost as if the screen displayed a giant wireframe airduct through which they moved. That might be useful for understanding the vessel’s movement when visual data is scarce, such as navigating in empty space with nothing but distant stars for reckoning. But here she has more than enough visual cues to understand the motion of the ship: if the massive space dock was not enough, there’s that giant moon thing just beyond. So I think understanding the vessel’s basic motion in space isn’t a priority while undocking. More important is to help her understand the position of collision threats, and I cannot explain how this interface does that in any but the feeblest of ways.
If you watch the motion of the screen, it stays perfectly still even as you can see the vessel moving and turning. (In that animated gif I steadied the camera motion.) So what’s it describing? The ideal maneuver? Why doesn’t it show her a visual signal of how well she’s doing against that goal? (Video games have nailed this. The “driving line” in Gran Turismo 6 comes to mind.)
If it’s not helping her avoid collisions, the high-contrast motion of the “airduct” is a great deal of visual distraction for very little payoff. That wouldn’t be interaction so much as a neurological distraction from the task at hand. So I even have to dispense with my usual New Criticism stance of accepting it as if it was perfect. Because if this was the intention of the interface, it would be encouraging disaster.
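For what it’s worth, the feedback the scene is missing is cheap to compute. Here’s a minimal sketch, assuming a straight ideal undocking line: measure the ship’s cross-track error against it and turn that into a go/caution/wave-off signal, the way a racing game’s driving line shifts color. All names and thresholds are invented for illustration.

```python
# The missing feedback, sketched: measure the ship's cross-track error
# against an ideal (here, straight) undocking line and map it to a
# go / caution / wave-off signal, like a racing game's driving line.

import math

def cross_track_error(pos, line_start, line_end):
    """Perpendicular distance (m) from pos to the ideal line."""
    dx, dy = line_end[0] - line_start[0], line_end[1] - line_start[1]
    return abs((pos[0] - line_start[0]) * dy -
               (pos[1] - line_start[1]) * dx) / math.hypot(dx, dy)

def deviation_signal(error_m):
    if error_m < 1.0:
        return "green"   # on the line
    if error_m < 3.0:
        return "yellow"  # drifting; correct now
    return "red"         # collision risk; wave off

err = cross_track_error((2.2, 40.0), (0.0, 0.0), (0.0, 100.0))
print(err, deviation_signal(err))  # -> 2.2 yellow
```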
The ship does have some environmental sensors, since when it is 5 meters from the “object,” i.e. the dock, a voiceover states this fact to everyone on the bridge. Note that it’s not panicked, even though that’s relatively like being a peach-skin away from a hull breach and bajillions of credits of damage. No, the voice just says it, like it was remarking about a penny it happened to see on the sidewalk. “Three meters from object,” is said with the same dispassion moments later, even though that’s a loss of 40% of the prior distance. “Clear” is spoken with the same dispassion, even though it should be saying, “Court martial in process…” Even the tiny little rill of an “alarm” that plays under the scene sounds more like your sister hasn’t responded to her Radio Shack alarm clock in the next room rather than—as it should be—a throbbing alert.
Since the interface does not help her, actively distracts her, and underplays the severity of the danger, is there any apology for this?
1. Better: A viewscreen
Starship Troopers happened before the popularization of augmented reality, so we can forgive the film for not adopting that technology, even though it might have been useful. AR might have been a lot for the film to explain to a 1997 audience. But the movie was made long after the popularization of the viewscreen forward display in Star Trek. Of course it’s embracing a unique aesthetic, but focusing on utility: Replace the glass in front of her with a similar viewscreen, and you can even virtually shift her view to the back of the Rodger Young. If she is distracted by the “feeling” of the thrusters, perhaps a second screen behind her will let her swivel around to pilot “backwards.” With this viewscreen she’s got some (virtual) visual information about collision threats coming her way. Plus, you could augment that view with precise proximity warnings, and yes, if you want, air duct animations showing the ideal path (similar to what they did in Alien).
2. VP
The viewscreen solution still puts some burden on her as a pilot to translate 2D information on the viewscreen to 3D reality. Sure, that’s often the job of a pilot, but can we make that part of the job easier? Note that Starship Troopers was also created after the popularization of volumetric projections in Star Wars, so that might have been a candidate, too, with some third person display nearby that showed her the 3D information in an augmented way that is fast and easy for her to interpret.
3. Autopilot or docking tug-drones
Yes, this scene is about her character, but if you were designing for the real world, this is a maneuver that an agentive interface can handle. Let the autopilot handle it, or adorable little “tug-boat” drones.
Pilot’s controls (in a spaceship) are one of the categories of “things” that remained on the editing room floor of Make It So when we realized we had about 50% too much material before publishing. I’m about to discuss such pilot’s controls as part of the review of Starship Troopers, and I realized that I’ll first need to establish the core issues in a way that will be useful for discussions of pilot’s controls from other movies and TV shows. So in this post I’ll describe the key issues independent of any particular movie.
A big shout out to commenters Phil (no last name given) and Clayton Beese for helping point me towards some great resources and doing some great thinking around this topic originally with the Mondoshawan spaceship in The Fifth Element review.
So let’s dive in. What’s at issue when designing controls for piloting a spaceship?
First: Spaceships are not (cars|planes|submarines|helicopters|Big Wheels…)
One thing to be careful about is mistaking a spacecraft for similar-but-not-the-same Terran vehicles. Most of us have driven a car, and so have these mental models with us. But a car moves across 2(.1?) dimensions. The well-matured controls for piloting roadcraft have optimized for those dimensions. You basically get a steering wheel for your hands to specify change-of-direction on the driving plane, and controls for speed.
Planes or even helicopters seem like they might be a closer fit, moving as they do more fully across a third dimension, but they’re not right either. For one thing, those vehicles are constantly dealing with air resistance and gravity. They also rely on constant thrust to stay aloft. Those facts alone distinguish them from spacecraft.
These familiar models (cars and planes) are made more misleading by the fact that so many sci-fi piloting interfaces are based on them, putting yokes in the hands of the pilots, even though yokes only fit plane-like tasks. A spaceship is a different thing, piloted in a different environment with different rules, making it a different task.
Maneuvering in space
Space is upless and downless, except as a point relates to other things, like other spacecraft, ecliptic planes, or planets. That means that a spacecraft may need to be angled in fully 3-dimensional ways in order to orient it to the needs of the moment. (Note that you can learn more about flight dynamics and attitude control on Wikipedia, but it is sorely lacking in details about the interfaces.)
Orientation
By convention, rotation is broken out along the Cartesian coordinates.
X: Tipping the nose of the craft up or down is called pitch.
Y: Moving the nose left or right around a vertical axis, like turning your head left and right, is called yaw.
Z: Tilting the craft left or right around an axis that runs from its nose to its tail is called roll.
In addition to angle, since you’re not relying on thrust to stay aloft, and you’ve already got thrusters everywhere for arbitrary rotation, the ship can move (or translate, to use the language of geometry) in any direction without changing orientation.
Translation
Translation is also broken out along Cartesian coordinates. (A code sketch after this list pulls rotation and translation together.)
X: Moving to the left or right, like strafing in the FPS sense. In Cartesian systems, this axis is called the abscissa.
Y: Moving up or down. This axis is called the ordinate.
Z: Moving forward or backward. This axis is less frequently named, but is called the applicate.
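Here’s that promised sketch: the two lists above together define a craft’s six degrees of freedom, so a minimal maneuvering state is just six numbers. The pitch/yaw/roll names are the standard ones from above; for the translations I’ve borrowed the marine-engineering terms sway, heave, and surge, which are my labels, not established ones in this post.

```python
# The six degrees of freedom from the two lists above, as one data
# structure. Any piloting interface has to give the pilot some way to
# command all six, directly or via automation.

from dataclasses import dataclass

@dataclass
class Maneuver:
    pitch: float = 0.0  # rotate nose up/down (around X)
    yaw:   float = 0.0  # rotate nose left/right (around Y)
    roll:  float = 0.0  # tilt around the nose-to-tail axis (around Z)
    sway:  float = 0.0  # translate left/right (the abscissa)
    heave: float = 0.0  # translate up/down (the ordinate)
    surge: float = 0.0  # translate forward/backward (the applicate)

# Strafe right while rolling, with no change in heading or forward
# speed -- a maneuver a plane simply cannot do:
print(Maneuver(roll=0.3, sway=1.0))
```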
Thrust
I’ll make a nod to the fact that thrust also works differently in space when traveling over long distances between planets. Spacecraft don’t need continuous thrust to keep moving along the same vector, so it makes sense that the “gas pedal” would be different in these kinds of situations. But then, looking into it, you run into a theory of constant-thrust or constant-acceleration travel, and bam, suddenly you’re into astrodynamics and equations peppered with sigmas, and I’m in way over my head. It’s probably best to presume that the thrust controls are set-point rather than throttle, meaning the pilot is specifying a desired speed rather than the amount of thrust, and some smart algorithm is handling all the rest.
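That set-point model is easy to sketch without any sigmas. Below, a bare proportional controller stands in for the “smart algorithm”; the gain, limits, and names are all invented for illustration.

```python
# A sketch of the set-point model: the pilot dials in a desired speed,
# and a control loop decides how much thrust to apply.

def thrust_command(target_speed, current_speed, gain=0.5, max_thrust=30.0):
    """Return thrust (here, acceleration in m/s^2) toward the set-point,
    clamped to the engine's limits. Negative values mean retro-thrust.
    Output falls toward zero as the set-point nears: a coasting
    spacecraft keeps its vector for free."""
    error = target_speed - current_speed
    return max(-max_thrust, min(max_thrust, gain * error))

speed = 0.0
for _ in range(10):
    speed += thrust_command(100.0, speed)  # toy integration, 1 s steps
    print(round(speed, 1))
# 30.0, 60.0, 80.0, 90.0, 95.0 ... thrust tapers off near the target.
```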
Given these tasks of rotation, translation, and thrust, when evaluating pilot’s controls, we first have to ask how the pilot goes about specifying these things. But even that answer isn’t simple, because you first need to determine what kind of interface agency the controls are built with.
Max was a fully sentient AI who helped David pilot.
Interface Agency
If you’re not familiar with my categories of agency in technology, I’ll cover them briefly here. I’ll be publishing them in an upcoming book with Rosenfeld Media, which you can read there if you want to know more. In short, you can think of interfaces as having four categories of agency.
Manual: In which the technology shapes the (strictly) physical forces the user applies to it, like a pencil. Such interfaces optimize for good ergonomics.
Powered: In which the user is manipulating a powered system to do work, like a typewriter. Such interfaces optimize for good feedback.
Assistive: In which the system can offer low-level feedback, like a spell checker. Such interfaces optimize for good flow, in the Csikszentmihalyi sense.
Agentive: In which the system can pursue primitive goals on behalf of the user, like software that could help you construct a letter. This would be categorized as “weak” artificial intelligence, and specifically not the sentience of “strong” AI. Such interfaces optimize for good conversation.
So what would these categories mean for piloting controls? Manual controls might not really exist, since humans can’t travel in space without powered systems. Powered controls would be much like early real-world spacecraft. Assistive controls might provide collision warnings or basic help with plotting a course. Agentive controls would allow a pilot to specify the destination and timing, and the system would handle things until it encountered a situation that it couldn’t handle. Of course, this being sci-fi, these interfaces can pass beyond the singularity to full, sentient artificial intelligence, like HAL.
Understanding the agency helps contextualize the rest of the interface.
Inputs
How does the pilot provide input, how does she control the spaceship? With her hands? Partially with her feet? Via a yoke, buttons on a panel, gestural control of a volumetric projection, or talking to a computer?
If manual, we’ll want to look at the ergonomics, affordances, and mappings.
Even agentive controls need to gracefully degrade to assistive and powered interfaces for dire circumstances, so we’d expect to see physical controls of some sorts. But these interfaces would additionally need some way to specify more abstract variables like goals, preferences, and constraints.
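To make that degradation concrete, here’s a sketch in logic, entirely my own construction, reusing the category names above: the ship steps down from agentive to assistive to plain powered control as subsystems fail or the pilot takes over.

```python
# A toy mode-selector for graceful degradation across the categories of
# agency. All conditions and names are invented for illustration.

def control_mode(autopilot_ok: bool, sensors_ok: bool,
                 pilot_override: bool) -> str:
    """Pick the highest level of agency the situation supports."""
    if pilot_override or not autopilot_ok:
        # No goal-pursuit available; can the ship at least warn and advise?
        return "assistive" if sensors_ok else "powered"
    return "agentive"

print(control_mode(autopilot_ok=True,  sensors_ok=True,  pilot_override=False))  # agentive
print(control_mode(autopilot_ok=False, sensors_ok=True,  pilot_override=False))  # assistive
print(control_mode(autopilot_ok=False, sensors_ok=False, pilot_override=True))   # powered
```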
Consolidation
Because of the predominance of the yoke interface trope, a major consideration is how consolidated the controls are. Is there a single control that the pilot uses? Or multiple? What variables does each control? If the apparent interface can’t seem to handle all of orientation, translation, and thrust, how does the pilot control those? Are there separate controls for precision maneuvering and speed maneuvering (for, say, evasive maneuvers, dog fights, or dodging asteroids)?
The yoke is popular since it’s familiar to audiences. They see it and instantly know that that’s the pilot’s seat. But as a control for that pilot to do their job, it’s pretty poor. Note that it provides only two variables. In a plane, this means the following: Turn it clockwise or counterclockwise to indicate roll, and push it forward or pull it back for pitch. You’ll also notice that while roll is mapped really well to the input (you roll the yoke), the pitch is less so (you don’t pitch the yoke).
So when we see a yoke for piloting a spaceship, we must acknowledge that a) it’s missing an axis of rotation that spacecraft need, i.e. yaw, and b) it presumes only one type of translation, which is forward. That leaves us looking about the cockpit for clues about how the pilot might accomplish these other kinds of maneuvers.
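As a concrete tally, reusing the degree-of-freedom labels from the Maneuver sketch earlier (a hypothetical illustration, not a real control spec), here’s how little of a spacecraft’s movement a yoke actually covers:

```python
# Tallying the yoke's two variables against a spacecraft's six degrees
# of freedom. Labels reused from the Maneuver sketch above.

YOKE = {
    "roll":  "rotate yoke clockwise/counterclockwise",  # well mapped
    "pitch": "push yoke forward / pull it back",        # less well mapped
}

SPACECRAFT_DOF = ["pitch", "yaw", "roll", "sway", "heave", "surge"]

print([dof for dof in SPACECRAFT_DOF if dof not in YOKE])
# -> ['yaw', 'sway', 'heave', 'surge']: four axes with no home on the
# yoke (forward surge usually falls to a separate throttle).
```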
Output
How does the pilot know that her inputs have registered with the ship? How can she see the effects or the consequences of her choices? How does an assistive interface help her identify problems and opportunities? How does an agentive or even AI interface engage the pilot, asking for goals, constraints, and exceptions? I have the sense that human perception is optimized for a mostly-two-dimensional plane with a predator’s eyes-forward gaze. How does the interface help the pilot expand her perception fully to 360° and three dimensions, to the distances relevant for space, and to see the invisible landscape of gravity, radiation, and interstellar material?
Narrative POV
An additional issue is that of narrative POV. (Readers of the book will recall this concept came up in the Gestural Interfaces chapter.) All real-world vehicles work from a first-person perspective. That is, the pilot faces the direction of travel and steers the vehicle almost as if it was their own body.
But if you’ve ever played a racing game, you’ll recognize that there’s another possible perspective. It’s called the third-person perspective, and it’s where the camera sits up above the vehicle, slightly back. It’s less immediate than first person, but provides greater context. It’s quite popular with gamers, being rated twice as popular in one informal racing-game poll from The Escapist magazine. What POV is the pilot’s display? Which one would be of greater use?
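The two POVs differ by nothing more than camera placement. A minimal sketch, with invented offsets: first person puts the eye at the vehicle; third person pulls it up and back along the heading, trading immediacy for context.

```python
# First person vs. third person, as camera math. Offsets are arbitrary
# illustration values.

import math

def chase_camera(pos, heading_rad, back=8.0, up=3.0):
    """Third-person camera: sit `back` meters behind the vehicle along
    its heading and `up` meters above it, looking at the vehicle.
    (First person would simply return pos.)"""
    cx = pos[0] - back * math.cos(heading_rad)
    cy = pos[1] - back * math.sin(heading_rad)
    return (cx, cy, pos[2] + up)

vehicle = (100.0, 50.0, 10.0)
print(chase_camera(vehicle, heading_rad=0.0))  # -> (92.0, 50.0, 13.0)
```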
The consequent criteria
I think these are all the issues. This is new thinking for me, so I’ll leave it up a bit for others to comment or correct. If I’ve nailed them, then for any future piloting controls, these are the lenses through which we’ll look and begin our evaluation:
Interface agency: Is the control manual, powered, assistive, or agentive, and does it degrade gracefully?
Inputs: How does the pilot specify orientation, translation, and thrust?
Consolidation: How many controls are there, and which variables does each cover?
Output: How does the pilot perceive that inputs registered, and perceive threats and opportunities?
Narrative POV: Is the display first person or third person?
This checklist won’t magically give us insight into the piloting interface, but will be a great place to start, and a way to compare apples to apples between these interfaces.
The reawakened alien places his hand in the green display and holds it there for a few seconds. This summons a massive pilot seat. If the small green sphere is meant to be a map to the large cyan astrometric sphere, the mapping is questionable. Better perhaps would be to touch where the seat would appear and lift upwards through the sphere.
He climbs into the seat and presses some of the “egg buttons” arrayed on the armrests and on an oval panel above his head. The buttons illuminate in response, blinking individually from within. The blink pattern for each is regular, so it’s difficult to understand what information this visual noise conveys. A few more egg presses re-illuminate the cyan astrometric display.
A few more presses on the overhead panel rev up the spaceship’s engines and seal him in an organic spacesuit. The overhead panel slowly advances toward his face. The purpose of this seems inexplicable. If it was meant to hold the alien in place, why would it do so with controls? Even if they’re just navigation controls that no longer matter since he is on autopilot, he wouldn’t be able to take back sudden navigation control in a crisis. If the armrest panels also let him navigate, why are the controls split between the two parts?
On automatic at this point, the VP traces a thin green arc from the chair to the VP Earth and adds highlight graphics around it. Then the ceiling opens and the spaceship lifts up into the air.
There are a great many interfaces seen on the bridge of the Prometheus, and like most flight instrument panels in sci-fi, they are largely about storytelling and less about use.
The captain of the Prometheus is also a pilot, and has a captain’s chair with a heads-up display. This HUD has real-time wireframe displays of the spaceship in plan view, presumably for glanceable damage feedback.
He can also stand afore at a waist-high panel that overlooks the ship’s view ports. This panel has a main screen in the center, grouped arrays of backlit keys to either side, a few blinking components, and an array of red and blue lit buttons above. We only see Captain Janek touch this panel once, and we do not see the effects.
Navigator Chance’s instrument panel below consists of four 4:3 displays with inscrutable moving graphs and charts, one very wide display showing a topographic scan of terrain, one dim panel, two backlit reticles, and a handful of lit switches and buttons. Yellow lines surround most dials and group clusters of controls. When Chance “switches to manual,” he flips the lit switches from right to left (nicely accomplishable with a single wave of the hand) and the switch lights illuminate to confirm the change of state. This state would also be visible from a distance, useful for all crew within line of sight. Presumably, this is a dangerous state for the ship to be in, though, so some greater emphasis might be warranted: either a blinking warning, or audio feedback, or possibly both.
Captain Janek has a joystick control for manual landing control. It has a line of light at the top rear-facing part, but its purpose is not apparent. The degree of differentiation in the controls is great, and they seem to be clustered well.
A few contextless flight screens are shown. One for the scientist known only as Ford features 3D charts, views of spinning spaceships, and other inscrutable graphs, all of which are moving.
A contextless view shows the points of metal detected overlaid on a live view from the ship.
There is a weather screen as well that shows air density. Nearby there’s a push control, which Chance presses and keeps held down when he says, “Boss, we’ve got an incoming storm front. Silica and lots of static. This is not good.” Though we never see the control up close, it’s curious how such a thing could work. Would it be an entire-ship intercom, or did Chance somehow specify Janek as a recipient with a single button?
Later we see Chance press a single button that illuminates red, after which the screens nearby change to read “COLLISION IMMINENT,” and an all-ship prerecorded announcement begins to repeat its evacuation countdown.
This single button is perhaps the most egregious of the flight controls. As Janek says to Shaw late in the film, “This is not a warship.” If that’s the case, why would Chance have a single control that automatically knows to turn all screens red with the Big Label and provide a countdown? And why should the crew ever have to turn this switch on? Isn’t a collision one of the most serious things that could happen to the ship? Shouldn’t it be hard to, you know, turn off?