The SHIELD Helicarrier cockpit has dozens and dozens of agents sitting at desktop screens, working 3D mice and keyboards, speaking into headsets, and doing superspy information work.
The camera mostly sweeps past these interfaces, never lingering on any of them for long. The motion blur makes details hard to see, but in the few pauses we do get, we can make out:
Wireframe of the Helicarrier (A map to help locate problems?)
Gantt chart (Literally for the nascent Avengers initiative?)
Complex, node-network diagram (Datamining as part of the ongoing search for Loki?)
View from a flying camera pointing down. (You might think this is a live view from the bottom of the Helicarrier, but the ship is above water and this view seems to show land, so it is likely recorded, perhaps part of the search?)
Live-video displays of cameras around the Helicarrier
There are others that appear later (see the next entry), but these bear special note for a couple of reasons.
The ones that are instantly recognizable make sense at this glanceable level.
I couldn’t spot any repeats, even among the fuidget-filled screens (this represents a lot of work).
The screens are all either orange or blue. Not as in orange and blue highlights. I mean each screen is either strictly values of orange or strictly values of blue. Maybe cyan.
In Starship Troopers, after Ibanez explains that the new course she plotted for the Rodger Young (without oversight, explicit approval, or notification to superiors) is “more efficient this way,” Barcalow walks to the navigator’s chair, presses a few buttons, and the computer responds with a blinking-red Big Text Label reading “COURSE OPTIMAL” and a spinning graphic of two intersecting grids.
Yep, that’s enough for a screed, one addressed first to sci-fi writers.
A plea to sci-fi screenwriters: Change your mental model
Think about this for a minute. In the Starship Troopers universe, Barcalow can press a button to ask the computer to run some function to determine if a course is good (I’ll discuss “good” vs. “optimal” below). But if it could do that, why would it wait for the navigator to ask it after each and every possible course? Computers are built for this kind of repetition. It should not wait to be asked. It should just do it. This interaction raises the difference between two mental models of interacting with a computer: the Stoic Guru and the Active Academy.
Stoic Guru vs. Active Academy
This movie was written when computation cycles may have seemed to be a scarce resource. (Around 1997 only IBM could afford a computer and program combination to outthink Kasparov.) Even if computation cycles were scarce, navigating the ship safely would be the second most important non-combat function it could possibly do, losing out only to safekeeping its inhabitants. So I can’t see an excuse for the stoic-guru-on-the-hill model of interaction here. In this model, the guru speaks great truth, but only when asked a direct question. Otherwise it sits silently, contemplating whatever it is gurus contemplate, stoically. Computers might have started that way in the early part of the last century, but there’s no reason they should work that way today, much less by the time we’re battling space bugs between galaxies.
A better model for thinking about interaction with these kinds of problems is as an active academy, where a group of learned professors is continually working on difficult questions. For a new problem—like “which of the infinite number of possible courses from point A to point B is optimal?”—they would first discuss it among themselves and provide an educated guess with caveats, then continue to work on the problem, contacting the querent when they found a better answer or when new information arrived that changed the answer. (As a metaphor for agentive technologies, the active academy has some conceptual problems, but it’s good enough for the purposes of this article.)
Consider this model as you write scenes. Nowadays computation is rarely a scarce resource in your audience’s lives. Most processors are bored, sitting idle and not living up to their full potential. Pretending computation is scarce breaks believability. If eBay can keep looking on my behalf for a great deal on a Ted Baker shirt, the ship’s computer can keep looking for optimal courses on the mission’s behalf.
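To make the mental model concrete, here is a minimal sketch, in Python, of what an active-academy-style (agentive) navigator might look like: it accepts supplied courses, but it also keeps searching in the background and notifies the crew only when it finds something better. All of the names (Course, NavAgent, notify) and the toy “transit days” metric are invented for illustration; nothing here comes from the film or any real system.

```python
# Toy sketch of the "active academy" model: keep working on the problem,
# speak up only when the answer improves.
import random
import threading
import time
from dataclasses import dataclass


@dataclass
class Course:
    label: str
    transit_days: float  # lower is better in this toy model


class NavAgent:
    def __init__(self, notify):
        self.best = None
        self.notify = notify          # callback: how the crew hears about improvements
        self._stop = threading.Event()

    def propose(self, course: Course):
        """A human (or another system) can still supply candidate courses."""
        self._consider(course, source="supplied")

    def _consider(self, course: Course, source: str):
        # Only interrupt the crew when the new course beats the current best.
        if self.best is None or course.transit_days < self.best.transit_days:
            previous, self.best = self.best, course
            if previous:
                gain = 100 * (previous.transit_days - course.transit_days) / previous.transit_days
                self.notify(f"{course.label}: {gain:.0f}% better than previous course ({source})")
            else:
                self.notify(f"{course.label}: best course found so far ({source})")

    def run(self, seconds: float = 2.0):
        """Keep searching on the mission's behalf instead of waiting to be asked."""
        def loop():
            while not self._stop.is_set():
                candidate = Course(f"auto-{random.randint(100, 999)}",
                                   transit_days=random.uniform(10, 40))
                self._consider(candidate, source="background search")
                time.sleep(0.1)

        worker = threading.Thread(target=loop, daemon=True)
        worker.start()
        time.sleep(seconds)
        self._stop.set()
        worker.join()


if __name__ == "__main__":
    agent = NavAgent(notify=print)
    agent.propose(Course("Ibanez's course", transit_days=18.0))
    agent.run()
```

The point of the sketch is the shape of the interaction, not the math: the crew is notified by the system when something changes, rather than having to ask after every candidate.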
In this particular scene, the stoic guru has, for some reason, neglected up to this point to provide a crucial piece of information: the optimal path. Why was it holding this information back if it knew it? How does it know it now? “Well,” I imagine Barcalow saying as he slaps the side of the monitor, “why didn’t you tell me that the first time I asked you to navigate?” I suspect that, if the scene had been written with the active academy in mind, it would not have ended up in the stupid COURSE OPTIMAL zone.
Optimal vs. more optimal than
Part of the believability problem of this particular case may come from the word “optimal,” since that word implies the best out of all possible choices.
But if it’s a stoic guru, it wouldn’t know from optimal. It would only know the courses it had been asked about or supplied in the past, so it could only judge relative optimalness among that set. If the system worked that way, the screen text should read something like “34% more optimal than previous course” or “Most optimal of supplied courses.” Either text could be accompanied, below the Big Text Label, by some fuigetry comparing the relevant parameters. But of course that text conveys how embarrassingly limited this would be for a computer. It shouldn’t have to wait for supplied courses.
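For comparison, here is a toy sketch of the most honest label a stoic-guru system could display: it can only rate a new course against the courses it has already been handed. The function name and the “efficiency” numbers are invented for the example, not taken from the film.

```python
# What a stoic guru can honestly claim: relative optimality among supplied courses only.
def course_label(supplied_efficiencies: list[float], new_efficiency: float) -> str:
    """Return Big Text Label copy for a newly supplied course (higher = better)."""
    if not supplied_efficiencies:
        return "ONLY COURSE SUPPLIED"
    best_known = max(supplied_efficiencies)
    if new_efficiency <= best_known:
        return "LESS OPTIMAL THAN A PREVIOUSLY SUPPLIED COURSE"
    improvement = 100 * (new_efficiency - best_known) / best_known
    return f"{improvement:.0f}% MORE OPTIMAL THAN PREVIOUS COURSES"


# e.g. course_label([1.00, 1.05], 1.41) -> "34% MORE OPTIMAL THAN PREVIOUS COURSES"
```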
If it’s an active academy model, this scene would work differently. The computer would either have shown him the optimal course long ago, or shown him that it’s still working on the problem and that Ibanez’s is the “most optimal found.” Neither is entirely satisfying for purposes of the story.
How could this scene have gone?
We need a quick beat here to show that in fact, Ibanez is not just some cocky upstart. She really knows what’s up. An appeal to authority is a quick way to do it, but then you have to provide some reason the authority—in this case the computer—hasn’t provided that answer already.
A bigger problem than Starship Troopers
This is a perennial problem for sci-fi, and one that’s becoming more pressing as technology gets more and more powerful. Heroes need to be heroic. But how can they be heroic if computers can and do heroic things for them? What’s the hero doing? Being a heroic babysitter to a vastly powerful force? This will ultimately culminate once we get to the questions raised in Her about actual artificial intelligence.
Fortunately, the navigator is not a full-blown artificial intelligence. It’s something less than A.I.: an agentive interface, and that gives us our answer. Agentive algorithms can only process what they know, and Ibanez could have been working with an algorithm that the computer didn’t know about. She’s just wrapped up school, so maybe it’s something she developed or co-developed there:
Barcalow turns to the nav computer and sees a label: “Custom Course: 34% more efficient than models.”
BARCALOW
Um…OK…How did you find a better course than the computer could?
IBANEZ
My grad project nailed the formula for gravity assist through trinary star systems. It hasn’t been published yet.
BAM. She sounds like a badass and the computer doesn’t sound like a character in a cheap sitcom.
So, writers, hopefully that model will help you not make the mistake of penning your computers to be stoic gurus. Next up, we’ll discuss this same short scene with more of a focus on interaction designers.
The first bit of human technology we see belongs to the Federated Territories, as a spaceship engages the planet-sized object that is the Ultimate Evil. The interfaces are the screen-based systems that the bridge crew uses to scan the object and report back to General Staedert so he can make tactical decisions.
We see very few input mechanisms and very little interaction with the system. The screen includes a large image on the right-hand side of the display and smaller, detailed bits of information on the left. Inputs include:
Rows of backlit modal pushbuttons adjacent to red LEDs
A few red 7-segment displays
An underlit trackball
A keyboard
An analog, underlit, grease-pencil plotting board. (Nine Inch Nails fans may be pleased to find that initialism written near the top.)
The operator of the first of these screens touches one of the pushbuttons, to no effect. He then rolls the trackball downward, which scrolls the green text in the middle-left part of the screen as the graphics in the main section resolve from wireframes to photographic renderings of three stars, three planets, and the evil planet in the foreground, in blue.
The main challenge with the system is figuring out what the heck is being visualized. Professor Pacoli says at the beginning of the film that, “When the three planets are in eclipse, the black hole, like a door, is open.” This must refer to an unusual, trinary star system. But if that’s the case, the perspective is all wrong on screen.
Plus, the main sphere in the foreground is the evil planet, but it resolves to a blue-tinted circle before the evil planet actually appears. So is it a measure of the gravity and event horizon of the “black hole”? Then why are the others photo-real?
Where is the big red gas giant planet that the ship is currently orbiting? And where is the ship? As we know from racing game interfaces and first-person shooters, having an avatar representation of yourself is useful for orientation, and that’s missing.
And finally, why does the operator need to memorize what “Code 487” is? That places a burden on his memory that would be better spent on things of more human value. This is something of a throwaway interface, meant only to show the high-tech nature of the Federated Territories and to give the movie’s editor an alternate view to cut to, but even so it presents a lot of problems.
The main interface on the bridge is the volumetric projection display. This device takes up the center of the bridge and is the size of a long billiards table. It serves multiple purposes for the crew. Its later use is to display the real-time map of the alien complex.
Map of the alien complex
The landing party’s redshirt geologist, Fifield, uses some nifty tools to initiate mapping of the alien complex. The information is sent from these floating sensors back to the ship, which displays the results in real time.
The display of this information is rich, in a saturated, color-coded, edge-opacity style: outer surfaces are rendered in a gossamer cyan, and internal features in an edge-lit green wireframe. In the area above the VP surface, other arbitrary rectangles of data can be summoned for particular tasks, including in-air volumetric keyboards. The flat base of the bridge VP is mirrored, which, given the complex 3D nature of the information, causes a bit of visual confusion. (Am I seeing two diamonds reflected, or four on two levels?)
Later in the film, Janek tells Ravel to modify the display; specifically, to “strip away the dome” and “isolate that area, bring it up.” He is even asked to enlarge and rotate the alien spaceship when they find it. Ravel makes these modifications through a touch-screen panel at his station, though he routes the results to the “table.” We don’t see the controls in use, so we can’t evaluate them. But being able to modify displays is one of the ways that people look for patterns and make sense of such information.
A major question about this interface is why this information is not routed back to the people who could use it most, i.e., the landing party. In one scene, Fifield has to speak to Janek over the intercom just to figure out his cardinal directions. I know they’re redshirts, but they’re already wearing high-tech spacesuits. And in the image below we see that this diegesis has handheld volumetric projections. They couldn’t integrate one of those into a sleeve to help with life-critical wayfinding?