Spinners (flying cars)

So the first Fritzes are now a thing. Before I went off on that awesome tangent, where were we? Oh that's right. I was reviewing Blade Runner as part of a series on AI in sci-fi. I was just about to get to spinners. Vehicles are complicated things as they are, let alone when they're navigating proper 3D space. Additionally, the police force is, ostensibly, a public service, which complicates things even further. So this will get lengthy. Still, I think I can get this down to eight or so subtopics.

In the distant future of 2019, flying cars, called “spinners,” are a reality. They’re largely for the wealthy and powerful (including law enforcement). The main protagonist, Deckard, is only ever a passenger in a few over the course of the film. His partner Gaff flies one, though, so we have enough usage to review.

Opening the skies to automobile-like traffic poses challenges, especially when those skies are full of lightning bolts, ever-present massive flares, distracting building-sized video advertisements, and, of course, other spinners.

Piloting controls

To pilot the spinner, Gaff keeps his hands on each handle of a split yoke. Within easy reach of his fingers are a few unlabeled buttons and small lights. We once see him reach with his right thumb to press one of the buttons, but we don't see any result, so it's not clear what these buttons do. It's nice that they don't require him to take his hands off the controls. (This might seem like a prescient concept, but WP tells me the first non-horn wheel-mounted controls date back as far as 1966.)

It helps to note the mode of agency here. That is, the controls are manual, with no AI offering assistance or acting as an agent. (The AI is in the passenger's seat, lol fight me.) It appears to be up to Gaff to observe conditions, monitor displays, perform wayfinding, and keep the spinner on track.

Note that we never see what his feet are doing, and we never see him do anything with his hands other than putting on a headset before lift-off. There are lots of other controls to the pilot's left and in the console between seats, but we never see them in use. So, you know, approach with caution. There are a lot of unknowns here.

The Traditional Chinese characters on the window read "No entry," a message for citizens passing by outside when the spinner is on the ground. (Hat tips for the translation to Mischa Park-Doob and Frank Chung.)

The spinner is more like a VTOL aircraft or helicopter than a spaceship. That is, it is constantly in the presence of planetary gravity and must overcome the constant resistance of air. So the standards I established in the piloting controls post are of only limited use to us here.

So let’s look at how helicopter controls work. The FAA Helicopter Flying Handbook tells us that a pilot has controls for…

  1. The vertical velocity, up or down. (Controlled by the angle of the control stick called the collective. The collective is to the left of the pilot’s hip when they are seated.)
  2. The thrust. (Controlled by the twistgrip on the collective.)
  3. Movement forward, rearward, left, and right. (Controlled with the stick in front of the pilot, called the cyclic.)
  4. Yaw of the vehicle. (Controlled with the pair of antitorque pedals at the pilot’s feet.)

Since we don't see Gaff when the spinner is moving up and down, let's presume that the thing he's gripping is like a Y-shaped cyclic, with lots of little additional controls around the handles. Then, if we presume he has a collective somewhere out of sight to his left and antitorque pedals at his feet, this interface meets modern helicopter standards for control. From the outside, those appear to be well mapped (collective up = helicopter up, cyclic right = helicopter right). Twist for thrust is a little weird, but it's a standard and certainly learnable, as I recall from my motorcycling days. So let's say it's complete and convincing. Is it the best it could be? I'm not enough of an aeronautical engineer (read: not at all) to imagine better options, so let's move along. I might have more to say if it were agentive.
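If it helps to see that mapping spelled out, here's a minimal sketch in Python. The names, ranges, and gains are all invented; the point is just the one-control-per-variable mapping described above, not any real flight control law.

```python
from dataclasses import dataclass

@dataclass
class PilotInputs:
    collective: float   # 0..1, lever angle: commands vertical velocity
    twistgrip: float    # 0..1, throttle/thrust setting on the collective
    cyclic_x: float     # -1..1, stick left/right: lateral movement
    cyclic_y: float     # -1..1, stick forward/back: longitudinal movement
    pedals: float       # -1..1, antitorque pedals: yaw rate

def commanded_state(inputs: PilotInputs) -> dict:
    """Map each control to the single variable it is supposed to change.

    Gains are hypothetical; what matters is the well-mapped, one-to-one
    relationship (collective up = craft up, cyclic right = craft right).
    """
    return {
        "vertical_velocity_mps": 5.0 * (inputs.collective - 0.5),
        "thrust_fraction": inputs.twistgrip,
        "lateral_velocity_mps": 10.0 * inputs.cyclic_x,
        "longitudinal_velocity_mps": 15.0 * inputs.cyclic_y,
        "yaw_rate_dps": 30.0 * inputs.pedals,
    }

# Example: nudge the cyclic right and add a touch of left pedal.
print(commanded_state(PilotInputs(0.6, 0.7, 0.25, 0.0, -0.1)))
```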

Dashboard

There are two large screens in the dashboard. The one directly in front of Gaff shows a stylized depiction of the 3D surfaces around him as cyan highlights on a navy blue background. Approaching red shapes describe a pill-shaped tunnel-in-the-sky display. Such displays have been tested since 1981 and found to provide better tracking of ideal paths in manual flight, lower cognitive workload, and enhanced situational awareness. (https://arc.aiaa.org/doi/abs/10.2514/3.56119) So, this is believable and well done. I'm not sure that Gaff could readily use the 3D background to effectively understand the 3D terrain, but it is tertiary, after the real world and the tunnel display.
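For the curious, here's a rough sketch (Python, with an invented path and invented dimensions) of the kind of math that could drive a tunnel-in-the-sky display: sample the ideal path ahead of the craft and turn each sample into a rectangular gate for the renderer to draw. It's an assumption about how such a display might be generated, not a claim about what the film's graphics are doing.

```python
import math

def ideal_path(t):
    """Hypothetical ideal path: a gentle climbing turn.
    Returns an (x, y, z) point in meters for a time parameter t in seconds."""
    return (200.0 * math.sin(0.02 * t),
            2.0 * t,
            200.0 * (1.0 - math.cos(0.02 * t)))

def tunnel_gates(t_now, count=10, spacing_s=2.0, half_w=15.0, half_h=10.0):
    """Sample the ideal path ahead of the craft and build one rectangular
    'gate' (four corner points) per sample. A renderer would project these
    into the pilot's view; drawn in series they form the tunnel."""
    gates = []
    for i in range(1, count + 1):
        cx, cy, cz = ideal_path(t_now + i * spacing_s)
        gates.append([
            (cx - half_w, cy + half_h, cz),   # top-left corner
            (cx + half_w, cy + half_h, cz),   # top-right
            (cx + half_w, cy - half_h, cz),   # bottom-right
            (cx - half_w, cy - half_h, cz),   # bottom-left
        ])
    return gates

# Print the next three gates from 30 seconds into the flight.
for gate in tunnel_gates(t_now=30.0, count=3):
    print(gate)
```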

It's a frustrating anti-trope to run into yet again, but it must be said: If the spinner knows where the ship should be, and general artificial intelligence exists in this diegesis, why exactly are humans doing the piloting? Shouldn't the spinner fly itself? But back to the interfaces…

Above the tunnel-in-the-sky display is a cyan 7-segment LED scroll display. In the gif above it displays "MAXIMUM SPEED," and later it provides some wayfinding text. I'm not sure how many different types of information it is meant to cycle through, but it sure would be a pain to wait for vital information to appear, and distracting to have to control the display to get to the information you want.

There is also a vertical screen in the middle of the console listing the cyan labels ALT, VEL, and PTCH. These correspond to altitude, velocity, and pitch, reinforcing the helicopter model. The yellow numbers below these labels change very slowly over the scene, and (remarkably for a four-second interface from 1982) do not appear to change randomly. That's awesome.

But then, there's a paragraph of cyan text in the middle of the screen that appears over the course of the scene, letter by letter. This animation calls unnecessary attention to itself. There are also smaller, thin screens in the pilot's door that continually scroll that same teeny tiny cyan text. I'm not sure WTF all this text is supposed to be, since it would be horribly distracting to a pilot. There are also a few rows of white LEDs with Cylon-eye displays traveling back and forth. They are distracting, but at least they're regular, so they might be habituate-able and act as some sort of ambient display. Anyway, if we were building this thing for real, we'd want to eliminate these.

Lastly, at the bottom of the center screen are some unlabeled bar charts depicting some variables that appear to be wiggling randomly. So, like, only the top fifth of this screen can be lauded. The rest is fuigetry. *sigh* It’s hard to escape.

Wayfinding

To help navigate the 3D space, pilots have a number of tools. First, there are windows where you expect windows to be in a car, and there are also glass panels under the occupants' feet. The movie doesn't make a big deal out of it, but it's clear in the scene where the spinner lifts off from street level. These transparent panes surround pilots and passengers, allowing them to track visual cues for landmarks and to identify collision threats.

It’s reflecting some neon on the street below.

The tunnel-in-the-sky display above is the most obvious wayfinding tool. Somehow Gaff has entered a destination, and the tunnel guides him where he needs to go. Since this entails a safe path through the air, it's the most important display. Other bits of information (like the ALT, VEL, and PTCH in the center screen) should be oriented around it. This would make them glanceable, allowing Gaff to check them and quickly return his eyes to the windshield. In fact, we have to admit that a heads-up display would allow Gaff to keep his attention where it needs to be rather than splitting it between the real world and these dashboard displays. Modern vehicle drivers are used to this split attention and can manage it well enough. But I suspect that a HUD would be better.

It’s also at this point that you begin to wonder if these are the scout ships we see in Close Encounters.

There is also that crawling LED display above the tunnel-in-the-sky screen. In one scene it shows "SECTOR FOUR (4)…QUAD-" (we don't get to see the end of this phrase), implying that one of the bits of information this scroll provides is a reminder of the name of the neighborhood you're currently in. That really only helps if you're way off course, and it seems too low-fidelity for actual wayfinding assistance, but presuming the tunnel-in-the-sky provides the rest of the wayfinding, this information is of secondary importance.

A special note about takeoff: ENVIRON CTR

The display sequence infamous for appearing in both Alien and Blade Runner happens as Gaff lifts off in a spinner early in the film. White all-caps letters label this blue screen "ENVIRON CTR," above a grid of square characters. Then two 8-digit sequences "drop" down the center of the grid: 92886599 | 95654085. Once they drop 3 rows, the background turns red and the grid disappears, replaced by a big blinking label, PURGE. Characters at the bottom read "24556 DR 5" and don't change.

After the spinner lifts off, the display shows a complex diagram of a circle within a circle, illustrating the increasing elevation from the ground below. The delightful worldbuilding thing about the sequence is that it is inscrutable, legible only to a trained pilot, yet it gets full focus on screen. There's not really enough information about the speculative engineering or functional constraints of the spinner to say why these screens would be necessary or useful. I suspect a live camera view would be more useful than the circle-within-a-circle view, but gosh, it sure is cool. Here's the shot from Alien, by the way, for easy comparison.

Since people seem to be all over this one now, let me also interject that Alien is connected to Firefly, since Mal's anti-aircraft HUD in the pilot episode had a Weyland-Yutani logo. Chew on that trivia, Internet.

Intercar communication

Of special note is a scene just before Deckard's call to Sebastian's apartment. Deckard is sitting in his parked vehicle in a call with Bryant. A police spinner glides by and we hear an announcement over its loudspeaker, directed at Deckard's vehicle: "This sector's closed to ground traffic. What are you doing here?" From inside his vehicle, Deckard looks towards his video phone in the console (we never see if there is video, but he's looking in that direction rather than out the window) and, without touching a thing, responds defensively, "I'm working. What are you doing?" The policeman's reply comes through the videophone's speakers: "Arresting you, that's what I'm doing."

Note that Deckard did not have to answer the call or even put Bryant on hold. We don't know what the police officer did on their end, but this interaction implies that the police can make an instant, intrusive audio connection with any vehicle they find suspicious. It's so seamless it will slip by you if you don't know to look for it, but it paints quite a picture of intercar communication. Can you imagine if our cars automatically shared an audio space with the cars around them?

External interfaces

Another aspect of the spinner is that it is an interface not just for the people using it, but for the citizens observing or near it as it goes about its business. There are a number of features that help it act as an interface to the public.

Police exist as a social service, and the 995 repeated around the outside helps remind citizens of the number they can call in case of an emergency. 

Modern patrol cars have beacons and sirens to tell other drivers to get out of the way when they are on urgent business. Police spinners are gravid with beacons, having 12 of them visible from the front alone. (See below.) As the spinner is taking off, yellow and blue beacons circle as a warning. This would be of no help to a blind person nearby, but the vehicle does make some incidental noise that serves as an audible warning.

The rich light strip makes sense because the spinner has a much greater range of movement than ground-based cars and needs more attention-grabbing power. Another nice touch is that, since the spinner can be above people, there are also beacons on the chassis.

Upshot: Spinners do well

So, all in all, the spinner fares quite well on close inspection. It builds on known models of piloting, shows mostly-relevant data, uses known best practices for assistance, and has a lot of well-considered surface features for citizens.

Now if only I could figure out why they’re called spinners.

Ship Console

FaithfulWookie-console.png

The only flight controls we see are an array of stay-state toggle switches (see the lower right hand of the image above) and banks of lights. It's a terrifying thought that anyone would have to fly a spaceship with binary controls, but we have some evidence that there are analog controls, since Luke moves his arms after the Falcon fires shots across his bow.

Unfortunately we never get a clear view of the full breadth of the cockpit, so it’s really hard to do a proper analysis. Ships in the Holiday Special appear to be based on scenes from A New Hope, but we don’t see the inside of a Y-Wing in that movie. It seems to be inspired by the Falcon. Take a look at the upper right hand corner of the image below.

ANewHope_Falcon_console01.png

Escape pod and insertion windows

vlcsnap-2014-12-09-21h15m14s193

When the Rodger Young is destroyed by fire from the Plasma Bugs on Planet P, Ibanez and Barcalow luckily find a functional escape pod and jettison. Though this pod's interface stays off camera for almost the whole scene, the pod is knocked and buffeted by collisions in the debris cloud outside the ship, and in one jolt we see the interface for a fraction of a second. If it looks familiar, that's because it isn't from anything in Starship Troopers.

vlcsnap-2014-12-09-21h16m18s69

Rescue Shuttle

shuttle01

After the ambush on Planet P, Ibanez pilots the shuttle that rescues survivors and…and Diz. We have a shot of the display that appears on the dashboard between the pilot and copilot. Tiny blue columns of text, too small to read, spill off to the left. One big column of tiny green text wipes on and flashes. Seizure-inducing yellow dots spaz around on red grids. A blue circle on the right is probably Planet P or a radar, but the graphic…it spins about its center so quickly you cannot follow it. There's not…I can't…how is this supposed to…I'm just going to call it: fuigetry.

Little boxes on the interface

StarshipT-undocking01

After recklessly undocking we see Ibanez using an interface of…an indeterminate nature.

Through the front viewport Ibanez can see the cables and some small portion of the docking station. That’s not enough for her backup maneuver. To help her with that, she uses the display in front of her…or at least I think she does.

Undocking_stabilization

The display is a yellow wireframe box that moves "backwards" as the vessel moves backwards. It's almost as if the screen displayed a giant wireframe airduct through which they moved. That might be useful for understanding the vessel's movement when visual data is scarce, such as navigating in empty space with nothing but distant stars for reckoning. But here she has more than enough visual cues to understand the motion of the ship: if the massive space dock were not enough, there's that giant moon thing just beyond. So I think understanding the vessel's basic motion in space isn't the priority while undocking. More important is to help her understand the position of collision threats, and I cannot explain how this interface does that in any but the feeblest of ways.

If you watch the motion of the screen, it stays perfectly still even as you can see the vessel moving and turning. (In that animated gif I steadied the camera motion.) So what's it describing? The ideal maneuver? Why doesn't it show her a visual signal of how well she's doing against that goal? (Video games have nailed this. The "driving line" in Gran Turismo 6 comes to mind.)

Gran Turismo driving line
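To make the Gran Turismo point concrete, here's a little Python sketch of the missing feedback: compute how far the vessel has drifted from an ideal backing line and map that deviation to a color cue, the way a racing line shifts hue. The line, thresholds, and numbers are all invented for illustration.

```python
def cross_track_error(pos, line_point, line_dir):
    """Perpendicular distance (meters) from the vessel position to an
    ideal straight backing line, defined by a point and a unit direction."""
    dx = [p - q for p, q in zip(pos, line_point)]
    along = sum(d * u for d, u in zip(dx, line_dir))
    perp = [d - along * u for d, u in zip(dx, line_dir)]
    return sum(c * c for c in perp) ** 0.5

def guidance_color(error_m, tolerance_m=2.0):
    """Map deviation to a simple color cue, racing-line style."""
    if error_m < 0.5 * tolerance_m:
        return "blue"      # on the ideal line
    if error_m < tolerance_m:
        return "yellow"    # drifting
    return "red"           # correct now

# Hypothetical: vessel 1.4 m off a straight-back undocking line along -Z.
err = cross_track_error(pos=(1.0, 1.0, -30.0),
                        line_point=(0.0, 0.0, 0.0),
                        line_dir=(0.0, 0.0, -1.0))
print(err, guidance_color(err))
```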

If it's not helping her avoid collisions, the high-contrast motion of the "airduct" is a great deal of visual distraction for very little payoff. That wouldn't be interaction so much as a neurological distraction from the task at hand. So I even have to dispense with my usual New Criticism stance of accepting it as if it were perfect. Because if this were the intention of the interface, it would be encouraging disaster.

StarshipT-undocking17

The ship does have some environmental sensors, since when it is 5 meters from the "object," i.e. the dock, a voiceover states this fact to everyone on the bridge. Note that it's not panicked, even though that's roughly the equivalent of being a peach-skin away from a hull breach and bajillions of credits of damage. No, the voice just says it, like it was remarking on a penny it happened to see on the sidewalk. "Three meters from object" is said with the same dispassion moments later, even though that's a loss of 40% of the prior distance. "Clear" is spoken with the same dispassion, even though it should be saying, "Court martial in progress…" Even the tiny little rill of an "alarm" that plays under the scene sounds less like the throbbing alert it should be and more like your sister failing to respond to her Radio Shack alarm clock in the next room.
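For contrast, here's a sketch in Python of the kind of escalation the voiceover is missing: the same distance readouts, but with urgency that scales as the margin shrinks. The thresholds and wording are invented.

```python
def proximity_alert(distance_m, hull_margin_m=1.0):
    """Return an announcement whose urgency scales with how close the
    vessel is to the object. Thresholds are invented for illustration."""
    if distance_m > 20.0:
        return f"{distance_m:.0f} meters from object."
    if distance_m > 5.0:
        return f"Caution: {distance_m:.0f} meters from object."
    if distance_m > hull_margin_m:
        return f"WARNING: {distance_m:.1f} meters from object. Collision likely."
    return "COLLISION IMMINENT. All hands brace."

# The readouts from the scene, plus one the scene thankfully never needs.
for d in (30.0, 5.0, 3.0, 0.8):
    print(proximity_alert(d))
```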

StarshipT-undocking24

Since the interface does not help her, actively distracts her, and underplays the severity of the danger, is there any apology for this?

1. Better: A viewscreen

Starship Troopers was made before the popularization of augmented reality, so we can forgive the film for not adopting that technology, even though it might have been useful. AR might have been a lot to explain to a 1997 audience. But the movie was made long after the popularization of the viewscreen forward display in Star Trek. Of course the film is embracing a unique aesthetic, but focusing on utility: replace the glass in front of her with a similar viewscreen, and you can even virtually shift her view to the back of the Rodger Young. If she is distracted by the "feeling" of the thrusters, perhaps a second screen behind her would let her swivel around and pilot "backwards." With this viewscreen she's got some (virtual) visual information about collision threats coming her way. Plus, you could augment that view with precise proximity warnings and, yes, if you want, airduct animations showing the ideal path (similar to what they did in Alien).

2. VP

The viewscreen solution still puts some burden on her as a pilot to translate 2D information on the viewscreen into 3D reality. Sure, that's often the job of a pilot, but can we make that part of the job easier? Note that Starship Troopers was also created after the popularization of volumetric projections in Star Wars, so that might have been a candidate, too: some third-person display nearby that showed her the 3D information in an augmented way that is fast and easy to interpret.

3. Autopilot or docking tug-drones

Yes, this scene is about her character, but if you were designing for the real world, this is a maneuver that an agentive interface can handle. Let the autopilot handle it, or adorable little “tug-boat” drones.

StarshipT-undocking25

Piloting Controls

Firefly_piloting

Pilot's controls (in a spaceship) are one of the categories of "things" that ended up on the editing room floor of Make It So when we realized we had about 50% too much material before publishing. I'm about to discuss such pilot's controls as part of the review of Starship Troopers, and I realized that I'll first need to establish the core issues in a way that will be useful for discussions of pilot's controls from other movies and TV shows. So in this post I'll describe the key issues independent of any particular movie.

A big shout-out to commenters Phil (no last name given) and Clayton Beese for pointing me towards some great resources and doing some great thinking around this topic, originally for the Mondoshawan spaceship in The Fifth Element review.

So let’s dive in. What’s at issue when designing controls for piloting a spaceship?

BuckRogers_piloting

First: Spaceships are not (cars|planes|submarines|helicopters|Big Wheels…)

One thing to be careful about is mistaking a spacecraft for similar-but-not-the-same Terran vehicles. Most of us have driven a car, and so we carry those mental models with us. But a car moves across 2(.1?) dimensions. The well-matured controls for piloting roadcraft have been optimized for those dimensions. You basically get a steering wheel for your hands to specify change of direction on the driving plane, and controls for speed.

Planes or even helicopters seem like they might be a closer fit, moving as they do more fully across a third dimension, but they’re not right either. For one thing, those vehicles are constantly dealing with air resistance and gravity. They also rely on constant thrust to stay aloft. Those facts alone distinguish them from spacecraft.

These familiar models (cars and planes) are made worse by the fact that so many sci-fi piloting interfaces are based on them, putting yokes in the hands of pilots even though yokes only fit plane-like tasks. A spaceship is a different thing, piloted in a different environment with different rules, making piloting it a different task.

2001_piloting

Maneuvering in space

Space is upless and downless, except as a point relates to other things, like other spacecraft, ecliptic planes, or planets. That means that a spacecraft may need to be angled in fully 3-dimensional ways in order to orient it to the needs of the moment. (Note that you can learn more about flight dynamics and attitude control on Wikipedia, but it is sorely lacking in details about the interfaces.)

Orientation

By convention, rotation is broken out along the Cartesian axes.

  • X: Tipping the nose of the craft up or down is called pitch.
  • Y: Moving the nose left or right around a vertical axis, like turning your head left and right, is called yaw.
  • Z: Tilting the craft left or right around an axis that runs from the front of the craft to the back is called roll.

Angles_620
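Using the same screen-style axes as the list above (X to the right, Y up, Z forward), here's a quick Python sketch of how the three rotations compose into a single attitude. It's standard rotation-matrix math, included only to ground the terms; the example angles are arbitrary.

```python
import math

def rot_x(pitch):  # rotate about X: nose up or down
    c, s = math.cos(pitch), math.sin(pitch)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(yaw):    # rotate about Y: nose left or right
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(roll):   # rotate about Z: tilt left or right
    c, s = math.cos(roll), math.sin(roll)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def attitude(pitch, yaw, roll):
    """Compose the three rotations into one attitude matrix.
    (Order matters; this composes roll, then pitch, then yaw.)"""
    return matmul(rot_y(yaw), matmul(rot_x(pitch), rot_z(roll)))

# Example: 10 degrees of pitch and 30 degrees of yaw, no roll.
m = attitude(math.radians(10), math.radians(30), 0.0)
print([[round(v, 3) for v in row] for row in m])
```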

In addition to changing its angle, since the ship isn't relying on thrust to stay aloft, and it's already got thrusters everywhere for arbitrary rotation, it can move (or translate, to use the language of geometry) in any direction without changing orientation.

Translation

Translation is also broken out along the Cartesian axes.

  • X: Moving to the left or right, like strafing in the FPS sense. In Cartesian systems, this axis is called the abscissa.
  • Y: Moving up or down. This axis is called the ordinate.
  • Z: Moving forward or backward. This axis is less frequently named, but is called the applicate.

Translations_620
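And a companion sketch for translation: a body-frame move (strafe, climb, advance) rotated into world coordinates and added to the current position, leaving the craft's orientation untouched. For simplicity this sketch accounts only for yaw; it's illustrative, not a flight model.

```python
import math

def translate_in_body_frame(position, yaw, move):
    """Move the craft by a body-frame vector (strafe, climb, advance)
    without changing its orientation. Only yaw is considered when
    converting body axes to world axes."""
    strafe, climb, advance = move            # X: strafe, Y: climb, Z: advance
    c, s = math.cos(yaw), math.sin(yaw)
    world_dx = c * strafe + s * advance      # body X/Z rotated into world X
    world_dz = -s * strafe + c * advance     # body X/Z rotated into world Z
    x, y, z = position
    return (x + world_dx, y + climb, z + world_dz)

# Example: craft yawed 90 degrees, told to advance 10 m and climb 2 m.
print(translate_in_body_frame((0.0, 0.0, 0.0), math.radians(90), (0.0, 2.0, 10.0)))
```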

Thrust

I'll make a nod to the fact that thrust also works differently in space when traveling over long distances between planets. Spacecraft don't need continuous thrust to keep moving along the same vector, so it makes sense that the "gas pedal" would be different in these kinds of situations. But then, looking into it, you run into the theory of constant-thrust or constant-acceleration travel, and bam, suddenly you're into astrodynamics and equations peppered with sigmas, and I'm in way over my head. It's probably best to presume that the thrust controls are set-point rather than throttle, meaning the pilot specifies a desired speed rather than an amount of thrust, and some smart algorithm handles all the rest.
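Here's what "set-point rather than throttle" might look like in its simplest possible form: a Python sketch in which the pilot dials in a desired speed and a little proportional loop decides how much thrust to apply each tick. The gains and limits are made up; a real system would be far more sophisticated.

```python
def thrust_command(desired_speed, current_speed, max_accel=5.0, gain=0.5):
    """Proportional set-point control: the pilot specifies a speed,
    the system computes the acceleration (thrust) to get there.
    Constants are invented for illustration."""
    error = desired_speed - current_speed
    return max(-max_accel, min(max_accel, gain * error))

# Simulate a few seconds of closing on a 100 m/s set-point from 80 m/s.
speed, dt = 80.0, 1.0
for t in range(8):
    a = thrust_command(100.0, speed)
    speed += a * dt
    print(f"t={t}s  accel={a:+.2f} m/s^2  speed={speed:.2f} m/s")
```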

Given these tasks of rotation, translation, and thrust, when evaluating pilot's controls we first have to ask how the pilot goes about specifying these things. But even that answer isn't simple, because you first need to determine what kind of interface agency it is built with.

Max was a fully sentient AI who helped David pilot.


Interface Agency

If you're not familiar with my categories of agency in technology, I'll cover them briefly here. I'll be publishing them in an upcoming book with Rosenfeld Media, which you can read if you want to know more. In short, you can think of interfaces as having four categories of agency.

  • Manual: In which the technology shapes the (strictly) physical forces the user applies to it, like a pencil. Such interfaces optimize for good ergonomics.
  • Powered: In which the user is manipulating a powered system to do work, like a typewriter. Such interfaces optimize for good feedback.
  • Assistive: In which the system can offer low-level feedback, like a spell checker. Such interfaces optimize for good flow, in the Csikszentmihalyi sense.
  • Agentive: In which the system can pursue primitive goals on behalf of the user, like software that could help you construct a letter. This would be categorized as “weak” artificial intelligence, and specifically not the sentience of “strong” AI. Such interfaces optimize for good conversation.

So what would these categories mean for piloting controls? Manual controls might not really exist, since humans can't travel in space without powered systems. Powered controls would be much like early real-world spacecraft. Assistive controls might provide collision warnings or basic help with plotting a course. Agentive controls would allow a pilot to specify the destination and timing, and the system would handle things until it encountered a situation it couldn't manage. Of course, this being sci-fi, these interfaces can pass beyond the singularity to full, sentient artificial intelligence, like HAL.

Understanding the agency helps contextualize the rest of the interface.

Firefly_piloting03

Inputs

How does the pilot provide input? How does she control the spaceship? With her hands? Partially with her feet? Via a yoke, buttons on a panel, gestural control of a volumetric projection, or talking to a computer?

If manual, we’ll want to look at the ergonomics, affordances, and mappings.

Even agentive controls need to gracefully degrade to assistive and powered interfaces for dire circumstances, so we'd expect to see physical controls of some sort. But these interfaces would additionally need some way to specify more abstract variables like goals, preferences, and constraints.

Consolidation

Because of the predominance of the yoke interface trope, a major consideration is how consolidated the controls are. Is there a single control that the pilot uses? Or multiple? What variables does each control? If the apparent interface can’t seem to handle all of orientation, translation, and thrust, how does the pilot control those? Are there separate controls for precision maneuvering and speed maneuvering (for, say, evasive maneuvers, dog fights, or dodging asteroids)?

The yoke is popular since it’s familiar to audiences. They see it and instantly know that that’s the pilot’s seat. But as a control for that pilot to do their job, it’s pretty poor. Note that it provides only two variables. In a plane, this means the following: Turn it clockwise or counterclockwise to indicate roll, and push it forward or pull it back for pitch. You’ll also notice that while roll is mapped really well to the input (you roll the yoke), the pitch is less so (you don’t pitch the yoke).

So when we see a yoke for piloting a spaceship, we must acknowledge that a) it's missing an axis of rotation that spacecraft need, i.e. yaw, and b) it presumes only one type of translation, which is forward. That leaves us looking around the cockpit for clues about how the pilot might accomplish these other kinds of maneuvers.
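To make the gap explicit, here's a small Python sketch of the six degrees of freedom a spacecraft pilot needs to command versus the two a plane-style yoke actually provides. Everything left as None is what we go hunting for elsewhere in the cockpit.

```python
# The six degrees of freedom a spacecraft pilot needs to command,
# versus what a plane-style yoke actually gives you.
SPACECRAFT_DOF = ["pitch", "yaw", "roll", "strafe", "climb", "advance"]

def yoke_mapping(yoke_twist, yoke_push_pull):
    """A yoke covers roll and pitch; nothing else. (Forward translation is
    typically delegated to a separate throttle, which isn't on the yoke.)"""
    mapping = {dof: None for dof in SPACECRAFT_DOF}
    mapping["roll"] = yoke_twist        # twist the yoke -> roll
    mapping["pitch"] = yoke_push_pull   # push/pull the yoke -> pitch
    return mapping

unmapped = [dof for dof, ctrl in yoke_mapping(0.0, 0.0).items() if ctrl is None]
print("Degrees of freedom with no yoke control:", unmapped)
```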

StarshipTroopers_pilotingoutput

Output

How does the pilot know that her inputs have registered with the ship? How can she see the effects or consequences of her choices? How does an assistive interface help her identify problems and opportunities? How does an agentive or even AI interface engage the pilot, asking for goals, constraints, and exceptions? I have the sense that human perception is optimized for a mostly two-dimensional plane with a predator's eyes-forward gaze. How does the interface help the pilot expand her perception fully to 360° and three dimensions, to the distances relevant for space, and to see the invisible landscape of gravity, radiation, and interstellar material?

Narrative POV

An additional issue is that of narrative POV. (Readers of the book will recall this concept came up in the Gestural Interfaces chapter.) All real-world vehicles work from a first-person perspective. That is, the pilot faces the direction of travel and steers the vehicle almost as if it were their own body.

But if you've ever played a racing game, you'll recognize that there's another possible perspective. It's called the third-person perspective, and it's where the camera sits up above the vehicle and slightly back. It's less immediate than first person, but it provides greater context. It's quite popular with racing gamers, being rated twice as popular in one informal poll from The Escapist. What POV is the pilot's display? Which one would be of greater use?
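A concrete way to think about the difference: here's a Python sketch of where the "camera" sits in each POV, first person at the pilot's eye point, third person pulled up and back behind the craft, the way racing games do it. The offsets are arbitrary.

```python
def camera_position(craft_pos, forward, pov="first",
                    back_offset=30.0, up_offset=12.0):
    """Return a camera position for the given point of view.
    'forward' is a unit vector along the craft's nose; offsets are arbitrary."""
    x, y, z = craft_pos
    fx, fy, fz = forward
    if pov == "first":
        # First person: camera at the pilot's eye point, looking along 'forward'.
        return (x, y, z)
    # Third person: behind and above the craft, looking down at it.
    return (x - back_offset * fx, y + up_offset, z - back_offset * fz)

craft, nose = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
print("first person :", camera_position(craft, nose, "first"))
print("third person :", camera_position(craft, nose, "third"))
```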

MatrixREV_piloting

The consequent criteria

I think these are all the issues. This is new thinking for me, so I'll leave it up a bit for others to comment on or correct. If I've nailed them, then for any piloting controls we review in the future, these are the lenses through which we'll look and begin our evaluation:

  • Agency [ manual | powered | assistive | agentive | AI ]
  • Inputs
    • Affordance
    • Ergonomics
    • Mappings
      • orientation
      • translation
      • thrust
    • Consolidation
  • Outputs (especially Narrative POV)

This checklist won’t magically give us insight into the piloting interface, but will be a great place to start, and a way to compare apples to apples between these interfaces.