Piloting Controls

[Image: piloting controls in Firefly]

Pilot’s controls (in a spaceship) are one of the categories of “things” that remained on the editing room floor of Make It So when we realized we had about 50% too much material before publishing. I’m about to discuss such pilot’s controls as part of the review of Starship Troopers, and I realized that I’ll first need to establish the core issues in a way that will be useful for discussions of pilot’s controls in other movies and TV shows. So in this post I’ll describe the key issues independent of any particular movie.

A big shout out to commenters Phil (no last name given) and Clayton Beese for helping point me toward some great resources and for the thinking they originally did around this topic with the Mondoshawan spaceship in The Fifth Element review.

So let’s dive in. What’s at issue when designing controls for piloting a spaceship?

[Image: piloting controls in Buck Rogers]

First: Spaceships are not (cars|planes|submarines|helicopters|Big Wheels…)

One thing to be careful about is mistaking a spacecraft for similar-but-not-the-same Terran vehicles. Most of us have driven a car, and so carry those mental models with us. But a car moves across 2(.1?) dimensions. The well-matured controls for piloting roadcraft have been optimized for those dimensions. You basically get a steering wheel for your hands to specify change-of-direction on the driving plane, and controls for speed.

Planes or even helicopters seem like they might be a closer fit, moving as they do more fully across a third dimension, but they’re not right either. For one thing, those vehicles are constantly dealing with air resistance and gravity. They also rely on constant thrust to stay aloft. Those facts alone distinguish them from spacecraft.

These familiar models (cars and planes) are made even more misleading by the fact that so many sci-fi piloting interfaces are based on them, putting yokes in the hands of pilots even though yokes only fit plane-like tasks. A spaceship is a different thing, piloted in a different environment with different rules, making piloting it a different task.

[Image: piloting controls in 2001: A Space Odyssey]

Maneuvering in space

Space is upless and downless, except as a point relates to other things, like other spacecraft, ecliptic planes, or planets. That means that a spacecraft may need to be angled in fully 3-dimensional ways in order to orient it to the needs of the moment. (Note that you can learn more about flight dynamics and attitude control on Wikipedia, but it is sorely lacking in details about the interfaces.)

Orientation

By convention, rotation is broken out along the Cartesian axes.

  • X: Tipping the nose of the craft up or down is called pitch.
  • Y: Moving the nose left or right around a vertical axis, like turning your head left and right, is called yaw.
  • Z: Tilting the craft left or right around an axis that runs from the front of the craft to the back is called roll.

[Image: diagram of the three rotation axes]

In addition to rotating, since you’re not relying on thrust to stay aloft, and you’ve already got thrusters everywhere for arbitrary rotation, the ship can move (or translate, to use the language of geometry) in any direction without changing its orientation.

Translation

Translation is also broken out along the Cartesian axes. (A minimal code sketch of the full six degrees of freedom follows the diagram below.)

  • X: Moving to the left or right, like strafing in the FPS sense. In Cartesian systems, this axis is called the abscissa.
  • Y: Moving up or down. This axis is called the ordinate.
  • Z: Moving forward or backward. This axis is less frequently named, but is called the applicate.

[Image: diagram of the three translation axes]
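
Taken together, rotation and translation give the pilot six independent degrees of freedom to specify. Here’s a minimal sketch in Python of that full set of inputs as a data structure. The names, units, and ranges are my own, purely for illustration, not drawn from any real flight software:

    from dataclasses import dataclass

    @dataclass
    class PilotingCommand:
        """One frame of pilot input: six independent degrees of freedom."""
        # Rotation (orientation), e.g. as commanded rates in degrees per second
        pitch: float = 0.0   # nose up / down
        yaw: float = 0.0     # nose left / right
        roll: float = 0.0    # tilt left / right
        # Translation, e.g. as thruster demand from -1.0 to 1.0 along each axis
        strafe: float = 0.0  # left / right   (X, the abscissa)
        heave: float = 0.0   # up / down      (Y, the ordinate)
        surge: float = 0.0   # forward / back (Z, the applicate)

    # A car's controls cover roughly two of these; a plane's yoke covers two
    # rotations plus forward thrust. A spacecraft may need several at once,
    # e.g. yawing while sliding sideways and creeping forward to dock:
    docking_nudge = PilotingCommand(yaw=0.5, strafe=-0.2, surge=0.1)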

Thrust

I’ll make a nod to the fact that thrust also works differently in space when traveling over long distances between planets. Spacecraft don’t need continuous thrust to keep moving along the same vector, so it makes sense that the “gas pedal” would be different in these kinds of situations. But then, looking into it, you run into theories of constant-thrust or constant-acceleration travel, and bam, suddenly you’re into astrodynamics and equations peppered with sigmas, and I’m in way over my head. It’s probably best to presume that the thrust controls are set-point rather than throttle, meaning the pilot is specifying a desired speed rather than the amount of thrust, and some smart algorithm is handling all the rest.
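
To make that concrete, here’s a toy sketch of what a set-point thrust control might compute: a bare-bones proportional controller with invented names and numbers, standing in for whatever the real “smart algorithm” would be.

    def thrust_for_setpoint(current_speed, target_speed, max_thrust, gain=0.5):
        """Toy set-point controller: the pilot asks for a speed, not a burn.

        Returns a thrust command clamped to [-max_thrust, +max_thrust];
        negative values mean firing retro-thrusters. A real controller would
        account for mass, fuel, and trajectory, but the interface idea is the
        same: the human states the goal, the system works out the burn.
        """
        error = target_speed - current_speed
        return max(-max_thrust, min(max_thrust, gain * error))

    # The pilot sets 120 m/s while the ship coasts at 100 m/s: burn forward.
    print(thrust_for_setpoint(current_speed=100.0, target_speed=120.0, max_thrust=5.0))
    # Once the ship reaches 120 m/s the commanded thrust falls to zero and it
    # simply coasts -- no continuous burn needed, unlike an aircraft.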

Given these tasks of rotation, translation, and thrust, when evaluating pilot’s controls we first have to ask how the pilot goes about specifying these things. But even that answer isn’t simple, because you first need to determine what kind of interface agency the controls are built with.

Max was a fully sentient AI who helped David pilot.

Interface Agency

If you’re not familiar with my categories of agency in technology, I’ll cover them briefly here. I’ll be publishing them in an upcoming book with Rosenfeld Media, where you can read more. In short, you can think of interfaces as having four categories of agency.

  • Manual: In which the technology shapes the (strictly) physical forces the user applies to it, like a pencil. Such interfaces optimize for good ergonomics.
  • Powered: In which the user is manipulating a powered system to do work, like a typewriter. Such interfaces optimize for good feedback.
  • Assistive: In which the system can offer low-level feedback, like a spell checker. Such interfaces optimize for good flow, in the Csikszentmihalyi sense.
  • Agentive: In which the system can pursue primitive goals on behalf of the user, like software that could help you construct a letter. This would be categorized as “weak” artificial intelligence, and specifically not the sentience of “strong” AI. Such interfaces optimize for good conversation.

So what would these categories mean for piloting controls? Manual controls might not really exist, since humans can’t travel in space without powered systems. Powered controls would be much like those of early real-world spacecraft. Assistive controls might provide collision warnings or basic help with plotting a course. Agentive controls would allow a pilot to specify the destination and timing, and the system would handle the journey until it encountered a situation it couldn’t resolve. Of course, this being sci-fi, these interfaces can pass beyond the singularity to full, sentient artificial intelligence, like HAL.

Understanding the agency helps contextualize the rest of the interface.

[Image: piloting controls in Firefly]

Inputs

How does the pilot provide input? How does she control the spaceship? With her hands? Partially with her feet? Via a yoke, buttons on a panel, gestural control of a volumetric projection, or talking to a computer?

If manual, we’ll want to look at the ergonomics, affordances, and mappings.

Even agentive controls need to gracefully degrade to assistive and powered interfaces for dire circumstances, so we’d expect to see physical controls of some sort. But these interfaces would additionally need some way to specify more abstract variables like goals, preferences, and constraints.

Consolidation

Because of the predominance of the yoke interface trope, a major consideration is how consolidated the controls are. Is there a single control that the pilot uses? Or multiple? What variables does each control? If the apparent interface can’t seem to handle all of orientation, translation, and thrust, how does the pilot control those? Are there separate controls for precision maneuvering and speed maneuvering (for, say, evasive maneuvers, dog fights, or dodging asteroids)?

The yoke is popular since it’s familiar to audiences. They see it and instantly know that that’s the pilot’s seat. But as a control for that pilot to do their job, it’s pretty poor. Note that it provides only two variables. In a plane, this means the following: Turn it clockwise or counterclockwise to indicate roll, and push it forward or pull it back for pitch. You’ll also notice that while roll is mapped really well to the input (you roll the yoke), the pitch is less so (you don’t pitch the yoke).

So when we see a yoke for piloting a spaceship, we must acknowledge that (a) it’s missing an axis of rotation that spacecraft need, i.e. yaw, and (b) it presumes only one type of translation, which is forward. That leaves us looking around the cockpit for clues about how the pilot might accomplish these other kinds of maneuvers.
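
Put in terms of the six-degrees-of-freedom sketch from earlier, a bare yoke-plus-throttle setup fills in only a few of the slots. Again, this is a hypothetical mapping, reusing the illustrative PilotingCommand above, just to make the gap visible:

    def yoke_to_command(yoke_twist, yoke_forward, throttle):
        """Hypothetical mapping of a plane-style yoke onto the PilotingCommand
        sketched earlier. Yaw, strafe, and heave simply have no home here."""
        return PilotingCommand(
            roll=yoke_twist,      # turning the yoke maps well to roll
            pitch=-yoke_forward,  # push forward for nose-down: a weaker mapping
            surge=throttle,       # forward thrust only
            # yaw, strafe, and heave stay at zero -- uncontrolled
        )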

[Image: piloting output displays in Starship Troopers]

Output

How does the pilot know that her inputs have registered with the ship? How can she see the effects or the consequences of her choices? How does an assistive interface help her identify problems and opportunities? How does an agentive or even AI interface engage the pilot, asking for goals, constraints, and exceptions? I have the sense that human perception is optimized for a mostly-two-dimensional plane with a predator’s eyes-forward gaze. How does the interface help the pilot expand her perception fully to 360° and three dimensions, to the distances relevant for space, and to see the invisible landscape of gravity, radiation, and interstellar material?

Narrative POV

An additional issue is that of narrative POV. (Readers of the book will recall this concept came up in the Gestural Interfaces chapter.) All real-world vehicles work from a first-person perspective. That is, the pilot faces the direction of travel and steers the vehicle almost as if it were their own body.

But if you’ve ever played a racing game, you’ll recognize that there’s another possible perspective. It’s called the third-person perspective, and it’s where the camera sits up above the vehicle, slightly back. It’s less immediate than first person, but provides greater context. It’s quite popular with racing gamers, being rated twice as popular as first person in one informal poll from The Escapist magazine. What POV is the pilot’s display? Which one would be of greater use?

[Image: piloting controls in The Matrix Revolutions]

The consequent criteria

I think these are all the issues. This is new thinking for me, so I’ll leave it up a bit for others to comment on or correct. If I’ve nailed them, then for any piloting controls we encounter in future reviews, these are the lenses through which we’ll look to begin our evaluation:

  • Agency [ manual | powered | assistive | agentive | AI ]
  • Inputs
    • Affordance
    • Ergonomics
    • Mappings
      • orientation
      • translation
      • thrust
    • Consolidation
  • Outputs (especially Narrative POV)

This checklist won’t magically give us insight into the piloting interface, but will be a great place to start, and a way to compare apples to apples between these interfaces.

31 thoughts on “Piloting Controls”

  1. Oh this is an interesting one! Being into interaction design, and having piloted planes (some time ago), I was hooked from the start.

    I would say that keeping the stick can definitely help in many respects. For one, many pilots would already have the mental models associated with those controls, and although they would work a bit differently in space, that familiarity can help with learnability. Maybe having a look at the space shuttle controls would help here?

    I also think that the matter of agency can vary with the purpose of the ship, of course, and the same goes for the output. Anyway, the list sure sounds like a good place to make comparisons.

  2. To be honest, for a robust and mature spacecraft technology, there would only be a set of controls for emergency use. They would offer only rudimentary control, because spacecraft are all fly-by-wire. And that wire goes through a set of computers, and if they go down/rebel/strike for better pay, you’re screwed.

    There would be no bridge per se on a civilian craft, the “captain” of said vessel could control it from any interface on the ship or his own data device. That or just say out loud, “Computer? Set course to Titan, best possible arrival time.” and that would be that.

    On warships there would be a CnC, similar to what we saw on the reimagined Battlestar Galactica.

    • The fly-by-wire is certainly the threshold between manual and powered, but we have to leave room for earlier eras, or low-rent, smaller, or gracefully degrading craft that might still need manual controls, right? When I took a Tiger Cruise on a U.S. Navy ship I was impressed with how many drills were about being able to still do the job even when the systems were down.

      I like your thoughts on the captain’s control. Would the bridge become obsolete? (or rather, would anyplace he happened to be become the bridge?)

      • The best way to handle controlling a ship is to make it a distributed system, where if one system goes down, the rest of the ship is still operational. The problem is that it wouldn’t be a one person job. If the central computer went down, then you’d need several people to replace it, each coordinating their actions from access stations around the ship.

        Heaven help you if the ship’s drive control system goes down. That part of the ship is very radioactive in realistic ships; there’s no brave captain going down there, heroically switching the polarity, and not dying.

  3. Fascinating stuff about ergometrics, et al.
    I have a few notes about ship controls here
    http://www.projectrho.com/public_html/rocket/controldeck.php#id–Flight_Controls
    What it boils down to is that the computer game Kerbal Space Program has a bare minimum interface that will get the job done.

    The navigator will give a “maneuver” to the pilot. It has 3 parts: axis of thrust (where the ship’s nose should point), amount of deltaV to expend (how long you burn the engine), and the precise time to start the maneuver. The pilot will use a rotation (orientation) control, a throttle control, a clock indicator, a deltaV expended indicator, and an axis of thrust indicator.

    Translation controls are generally only used for docking.

    Rotation control in the Apollo program was a joystick. Push forward/pull back for pitch, push left/right for roll, twist joystick clockwise/counterclockwise for yaw.
    Ergometrically the joystick acts as if there were a little model of the spacecraft glued on top of the joystick; movement of the model mirrors movement of the spacecraft.

    And movement is from point of view of operator. If the operator with joystick is in the spacecraft’s tail facing to the rear, pushing forwards on the stick will make the nose pitch up, not down. This is because the operator views the joystick controlling orientation of the spacecraft’s tail, not the nose.

    • These are good “powered” controls. (Thanks very much for the rundown.) I wonder how much this might change in a world with nearly-unlimited energy and super-durable spacecraft? I think the limited/costly energy and (relative) fragility of spacecraft are the reasons for this kind of precision, don’t you, Winchell?

      • Agreed, current spacecraft propulsion systems force spacecraft to be flimsy tinfoil structures, which would need powered controls. If you have powerful propulsion systems, that changes everything.

    • Also, I forgot to clarify: after we nail the powered controls, I’m most interested in the next layer, the agentive layer. It’s not about specifying a maneuver; it should be about declaring the goal and working with a smart spaceship to get there.

    • And I finally spent real quality time with your post there, Winchell. Great, detailed write-up. Props for being willing to delve into written fiction as well. Looks like you believe that NASA got it right, and that consolidation of controls is pointless, yes?

      • Again, NASA’s solution is for a spacecraft with a weak propulsion system and a premium on reducing structural mass to a bare minimum.
        If you have a torchship with a powerful propulsion system and no motivation to reduce structural mass, then consolidation of controls might make sense.

  4. I’m glad KSP has already been mentioned. In considering piloting interfaces, one must remember that most sci-fi films ignore, for all intents and purposes, Newton’s laws of motion (2001: A Space Odyssey being a notable exception). The interface is not realistic, but it needn’t be, as neither are the physical laws under which the ships operate.

  5. First a quibble: the “humans are predators” meme comes from a few macho anthropologists. Our evolutionary cousins and ancestors are mostly fruit eaters, from whom we get our above-average (by mammal standards) color vision. We have our binocular vision and depth perception, like the majority of primates, because our ancestors lived in the treetops and when you’re swinging from branch to branch it’s really useful to know how far away the next handhold is.

    On to controls. For short range, short time scale navigation I can’t see why aircraft style flight controls shouldn’t work. For the Starship Troopers style yoke, I would have forward/back motion controlling pitch, rotation of the yoke for yaw or heading, and add left/right downwards movement for roll. A control doesn’t need to actually move for this: a strain gauge in the yoke could detect how much pressure was being applied to the left and right handles by the pilot and roll left or right accordingly. (OK, it’s a feeble justification for what we see in the film.) Add an aircraft style throttle for speeding up or slowing down.

    The disadvantage is that this setup prevents you from pointing the ship in a different direction to the vector of motion. For something that isn’t a warship, I think that this might not be a problem.

    Maybe helicopter controls would be a better match? Or Harrier jumpjets?

    • Which leads us to my favorite crackpot theory, i.e., the Aquatic Ape Theory, which challenges the tree-swinging explanation. But I’m intrigued: What’s a good source for the treetops explanation?

      I agree that the yoke _could_ work, but I’m curious to know whether, given the different circumstances of space, we can design one that would work better. Your yoke + throttle solution is nice. Could it be better mapped? Also, could we add translation to it, or would that be another, separate control?

      Helicopters and jumpjets are closer in metaphor, but still have the issue of constant thrust, constant gravity, and air resistance, which we get to design without.

    • The “strain gauge” is actually a completely real thing used in some aircraft and at least one high-end gaming joystick, though it’s usually just referred to as “force sensing controls”.

  6. Pingback: Shuttle Yoke | Sci-fi interfaces

  7. On the treetops, any evolutionary history of mammals will have primate ancestors living in the trees as far back as the dinosaurs, and binocular color vision is almost universal among monkeys today. The various carnivorous dogs and cats don’t have very good color vision. Obviously we humans have some evolutionary adaptations for the ground plane, for example we’re much better at running and throwing than any other ape. But as any children’s playground will show, we’re still pretty good at climbing and swinging around.

    Assuming spaceships with only one main engine, my understanding of translation is that it’s most likely used for micro adjustments of position, for example while docking. You want a puff of gas to move you sideways and then an opposite push to cancel it. No, I don’t think a yoke would be very good for that. I’d go with a cursor key arrangement for discrete steps left/right and up/down, and something like a mouse scroll wheel on the throttle for forward/back.

    Once you’re under way, like Ibanez piloting the shuttle, you can’t really translate without first coming to a stop. Instead you’re adding vectors by rotating and firing the main engine. If you’re not trying for an agentive system, I think a yoke and throttle are a reasonably simple way for someone to express “point this way.”

  8. Chris – I was thinking about your categories of interface agency while pondering some of the interesting controls (and wrestling of controls) in Sunshine (Boyle 2007). This movie is a thorn in my side, since the third act runs so far off the rails that it nearly ruins the whole, which contains a wealth of interesting situations.

    When discarding the bizarre third act to look solely at interface and control issues, Sunshine suddenly becomes even more fruitful. In the commentary track by Brian Cox, the film’s physicist consultant, he mentions that several of the plot pivots turn on moments when the human crew contests something that the autopilot had under control, or when the autopilot had to pull rank, or when the humans erroneously pulled rank, or when failsafe control systems worked well / failed to work effectively.

    In fact, the tightwire of protocols, mission objectives, and the fallibility of human instinct when it overruled them seems like the main focus of the movie: the entirety of the Icarus II, including its protocols, is a finely tuned interface for achieving the mission. One which the crew repeatedly breaks.

    The film has a fascination with failsafe controls, and the puzzle-like logic of their interventions. Much of this apparently drew from their research on actual space mission protocols, but more interesting than this is the fact that the autopilot here occupies what I’d see as a rare middle ground in sci-fi between Assistive and Agentive. It’s a rudimentary AI, and far from the standard trope where the AI turns out to be the problem, it attempts to save their bacon several times. From the TV Tropes page on Sunshine:

    Icarus: Resuming computer control of Icarus II.
    Cassie: Negative, Icarus. Manual control.
    Icarus: Negative, Cassie. Computer control.
    Cassie: Icarus, override computer to manual control.
    Icarus: Negative. Mission in jeopardy. Override command statement “manual flight controls” removed.
    Cassie: Negative, Icarus, negative. State reason immediately.
    Icarus: Fire in oxygen garden.

    All this is preamble to asking: This kind of failsafe or “assertive throttle” role nearly seems like it belongs to a different category than either Assistive or Agentive. The computer isn’t warning the human; the computer is taking care of things the human would likely mess up completely. But neither is the computer in complete, uncontested control. If it had been, the crew could not have broken the mission. It’s a primitive AI at best, itself yoked to its own protocols.

    This may not warrant an in-between category, but it seems like such a crucially important real-world role for automated interfaces as tasks grow more complex and mission critical that I had to point it out.

    Any thoughts?

  9. I’m not going to even read this. There are so many things which people criticize in fiction, and very much in space fiction. But not a very large percentage of the supposed movie myths and flaws are real or realistically messed up in a given story. Only very few and very specific mistakes in continuity with the laws of physics (both ours and theirs) of a given story are ever there and bring down the story individually, not as “Hollywood”. The big sweeping generalizations are poor, and very few unrealisms are consistent. Most of it is mental disbelief, a decision people made in their heads because of one disappointing story long ago, unrealistic or just bad.

    Don’t go criticizing control schemes because you find another favorable to you or can list the pros and cons. I’m not even going to read this list. But in a spaceship, actually, a joystick isn’t that bad, as long as you are not concerned about controlling acceleration with the stick. Nobody is. Nor ever has. Yaw? It should be pitch and yaw under the stick’s control, and you need the second analog stick for roll, if you ever need to change that. Just be careful and think it through. Is this a criticism of fiction because it is fiction, or because there is a genuine mistake in unknown, technologically and culturally aloof control schemes?

  10. Pingback: Ship Console | Sci-fi interfaces

  11. You might find it interesting to look at some of the stranger game controllers that came out in the late 90s. There was a brief period where it looked like six-axis games were the way of the future (Descent being the most famous example: you can rotate or add propulsion along any axis you want, at any time). So people made game controllers to work with that.

    Lazy Game Reviews did a recent review of the Spaceorb: https://www.youtube.com/watch?v=UtRqgszxZlg&list=PLB9FA1979AB986522&index=1 (It seems to work very well, but the ergonomics are terrible.) One of the higher-end ones from the CAD industry would probably work quite well as part of the solution to these controls.

    Sadly most of the links are dead in the post I saw these devices in, but one of the controllers was the Falcon, by this company: https://en.wikipedia.org/wiki/Novint_Technologies

  12. Pingback: Sci-fi Spacesuits: Moving around | Sci-fi interfaces
