Escape door

Faithful-Wookiee-door-10.png


There is one last interface in The Faithful Wookiee we see in use. It’s one of those small interfaces, barely seen, but that invites lots of consideration. In the story, Boba and Chewie have returned to the Falcon and administered to Luke and Han the cure to the talisman virus. Relieved, Luke (who assigns loyalty like a puppy at a preschool) says,

“Boba, you’re a hero and a faithful friend. [He isn’t. —Editor] You must come back with us. [He won’t.] What’s the matter with R2?”

C3PO says, “I’m afraid, sir, it’s because you said Boba is a faithful friend and faithful ally. [He didn’t.] That simply does not feed properly into R2’s information banks.”

Luke asks, “What are you talking about?”

“We intercepted a message between Boba and Darth Vader, sir. Boba Fett is Darth Vader’s right-hand man. I’m afraid this whole adventure has been an Imperial plot.”

faithful-wookiee-door-09
Luke did not see this coming.

Luke gapes towards Boba, who has his blaster drawn and is backing up into an alcove with an escape hatch. Boba glances at a box on the wall, slides some control sideways, and a hatch opens in the ceiling. He says, deadpan, “We’ll meet again…friend,” before touching some control on his belt that sends him flying into the clear green sky, leaving behind a trail of smoke.

Faithful-Wookiee-door-07.png

A failure of door

Let’s all keep in mind that the Falcon isn’t a boat or a car. It is a spaceship. On the other side of the hatch could be breathable air at the same pressure as the ship’s interior, or it could be…

  • The bone-cracking 2.7-kelvin emptiness of space
  • The physics-defying vortex of hyperspace
  • Some poisonous atmosphere like Venus’, complete with sulfuric acid clouds
  • A hungry flock of neebrays.

There should be no easy way to open any of its external doors.

Think of an airplane hatch. On the other side of that thing is an atmosphere known to support human life, and it sure as hell doesn’t open like a gen-1 iPhone. For safety, it should take some doing.

Hatch.png

If we’re being generous, maybe there’s some mode by which each door can be marked as “safe” and thereby made this easy to open. But that raises issues of security and authorizations and workflow that probably aren’t worth going into without a full redesign and inserting some new technological concepts into the diegesis.

Let’s also not forget that to secure that most precious of human biological needs, i.e. air, there should be an airlock, where the outer door and inner door can’t be opened at the same time without extensive override. But that’s not a hindrance. It could have made for an awesome moment.

LUKE gapes at Boba. Cut to HAN.

HAN
You won’t get any information out of us, alive or dead. Even the droids are programmed to self-destruct. But there’s a way out for you.

HAN lowers his hand to a panel and presses a few buttons. An escape hatch opens behind Boba Fett.

BOBA FETT
We’ll meet again…friend.

That quick change might have helped explain why Boba didn’t just kill everyone and steal the Falcon and the droids (along with their information banks) then and there.
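The airlock constraint described above, where the inner and outer doors can’t be open at the same time without an extensive override, is a classic interlock. Here’s a minimal sketch of that logic; it’s a toy model, and every class and method name in it is hypothetical, not anything from the diegesis.

```python
# Toy airlock interlock: each door refuses to open while the other is
# open, unless an explicit override is supplied. Hypothetical names.

class AirlockError(Exception):
    """Raised when a door operation would violate the interlock."""
    pass

class Airlock:
    def __init__(self):
        self.inner_open = False
        self.outer_open = False

    def open_inner(self, override=False):
        # Interlock: inner door stays shut while the outer door is open.
        if self.outer_open and not override:
            raise AirlockError("outer door open; interlock engaged")
        self.inner_open = True

    def open_outer(self, override=False):
        # Interlock: outer door stays shut while the inner door is open.
        if self.inner_open and not override:
            raise AirlockError("inner door open; interlock engaged")
        self.outer_open = True

    def close_inner(self):
        self.inner_open = False

    def close_outer(self):
        self.outer_open = False
```

A normal cycle closes one door before opening the other; Boba’s slide-one-control hatch would have to take the override path every single time, which is exactly the problem.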

Security is often sacrificed to keep narrative flowing, so I get why makers are tempted to bypass these issues. But it’s also worth mentioning two other failures that this 58-second scene illustrates.

faithful-wookiee-door-12

A failure to droid

Why the hell did C3PO and R2D2 wait to tell Luke and Han of this betrayal until Luke happened to say something that didn’t fit into the “information banks”? C3PO could have made up some bullshit excuse to pull Luke aside and whisper the news. But no, he waits, maybe letting Luke and Han spill vital information about the Rebellion, and only when something doesn’t compute does he blurt out that the only guy in the room with a blaster drawn happens to be in bed with Space Voldemort.

I can’t apologize for this. It’s a failure of writing and an unimaginative mental model. If you are a writer wondering how droids would behave, think of them less as stoic gurus and more as active academies.

A failure of plot

Worse, given that C3PO says this is all an Imperial plot, we’re meant to understand that in an attempt to discover the Rebel base, the Empire…

  • Successfully routed rumors of a mystical talisman, which the Empire was just about to find, to the Rebels in a way they would trust
  • Actually created a talisman
  • Were right in their long-shot bet that the Rebels would bite at the lure
  • Bioengineered a virus that
    • Caused a sleeping sickness affecting only humans
    • Survived on the talisman indefinitely
  • Somehow protected Boba Fett from the virus even though he is human
  • Planted a cure for the virus on a planet near where Han and Chewie would find the talisman
  • Successfully routed the location of the cure to Chewbacca so he would know where to go
  • Got Boba Fett—riding an ichthyodont—within minutes to the exact site on the planet where Chewie would crash-land the Falcon.

Had any one of these steps failed, the plan would not have worked. Yet despite the massive logistical, technological, and scientific effort, this same Empire had to be stupid enough to…

  • Bother to interrupt the mission in progress to report that the mission was on track
  • Use insecure, unencrypted, public channels for that report

Also note that despite all this effort (and buffoonery), they never, ever used this insanely effective bioweapon against the Rebels again.


I know, you’re probably thinking this is just some kid’s cartoon in the Star Wars diegesis, but that only raises more problems, which I’ll address in the final post on this crazy movie within a crazy movie.


Course optimal, the Stoic Guru, and the Active Academy

After Ibanez explains that the new course she plotted for the Rodger Young (without oversight, explicit approval, or notification to superiors) is “more efficient this way,” Barcalow walks to the navigator’s chair, presses a few buttons, and the computer responds with a blinking-red Big Text Label reading “COURSE OPTIMAL” and a spinning graphic of two intersecting grids.

STARSHIP_TROOPERS_Course-Optimal

Yep, that’s enough for a screed, one addressed first to sci-fi writers.

A plea to sci-fi screenwriters: Change your mental model

Think about this for a minute. In the Starship Troopers universe, Barcalow can press a button to ask the computer to run some function to determine if a course is good (I’ll discuss “good” vs. “optimal” below). But if it could do that, why would it wait for the navigator to ask it after each and every possible course? Computers are built for this kind of repetition. It should not wait to be asked. It should just do it. This interaction raises the difference between two mental models of interacting with a computer: the Stoic Guru and the Active Academy.

A-writer

Stoic Guru vs. Active Academy

This movie was written when computation cycles may have seemed to be a scarce resource. (Around 1997 only IBM could afford a computer and program combination to outthink Kasparov.) Even if computation cycles were scarce, navigating the ship safely would be the second most important non-combat function it could possibly do, losing out only to safekeeping its inhabitants. So I can’t see an excuse for the stoic-guru-on-the-hill model of interaction here. In this model, the guru speaks great truth, but only when asked a direct question. Otherwise it sits silently, contemplating whatever it is gurus contemplate, stoically. Computers might have started that way in the early part of the last century, but there’s no reason they should work that way today, much less by the time we’re battling space bugs between galaxies.

A better model for thinking about interaction with these kinds of problems is as an active academy, where a group of learned professors is continually working on difficult questions. For a new problem—like “which of the infinite number of possible courses from point A to point B is optimal?”—they would first discuss it among themselves and provide an educated guess with caveats, and continue to work on the problem afterward, continuously, contacting the querent when they found a better answer or when new information came in that changed the answer. (As a metaphor for agentive technologies, the active academy has some conceptual problems, but it’s good enough for purposes of this article.)
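The contrast between the two models fits in a few lines of code. This is a toy illustration, not anything from the film: the stoic guru answers only the course it is asked about, while the active academy keeps evaluating candidates and proactively reports each improvement. All names here are hypothetical.

```python
# Toy contrast of the two mental models. Hypothetical names throughout.

def stoic_guru(score, course):
    """Answers only when asked, and only about the supplied course."""
    return score(course)

def active_academy(score, candidates, notify):
    """Keep evaluating candidate courses; call notify() each time one
    beats the best seen so far. In a real system this loop would run
    continuously in the background for the life of the mission."""
    best, best_score = None, float("-inf")
    for course in candidates:
        s = score(course)
        if s > best_score:
            best, best_score = course, s
            notify(course, s)  # unprompted report, not a reply to a query
    return best, best_score
```

In the Rodger Young’s case, score would be something like fuel or time efficiency, and notify would post “better course found” to the bridge display without anyone pressing a button.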

guruacademy

Consider this model as you write scenes. Nowadays computation is rarely a scarce resource in your audience’s lives. Most processors sit idle, not living up to their full potential. Pretending computation is scarce breaks believability. If eBay can continuously keep looking on my behalf for a great deal on a Ted Baker shirt, the ship’s computer can keep looking for optimal courses on the mission’s behalf.

In this particular scene, the stoic guru has for some reason neglected up to this point to provide a crucial piece of information: the optimal path. Why was it holding this information back if it knew it? How does it know it now? “Well,” I imagine Barcalow saying as he slaps the side of the monitor, “why didn’t you tell me that the first time I asked you to navigate?” I suspect that, if the scene had been written with the active academy in mind, it would not have ended up in the stupid COURSE OPTIMAL zone.

Optimal vs. more optimal than

Part of the believability problem of this particular case may come from the word “optimal,” since that word implies the best out of all possible choices.

But if it’s a stoic guru, it wouldn’t know from optimal. It would only know what you’d asked of it or provided to it in the past, so it would only know relative optimalness amongst the set of courses it had access to. If the system worked that way, the screen text should read something like “34% more optimal than previous course” or “Most optimal of supplied courses.” Either text could sit above some fuigetry comparing the relevant parameters, below the Big Text Label. But of course that text conveys how embarrassingly limited this would be for a computer. It shouldn’t wait for supplied courses.
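To make the stoic guru’s honest screen text concrete, here is a sketch of what it could actually compute, treating “efficiency” as a single scalar per course. The units and function names are hypothetical.

```python
# What a stoic guru can honestly display: comparisons over only the
# courses it was actually given. Hypothetical names and units.

def relative_improvement(new_eff, old_eff):
    """Percent improvement of the new course over the previous one,
    i.e. the "34% more optimal than previous course" label."""
    return 100.0 * (new_eff - old_eff) / old_eff

def honest_label(supplied_effs):
    """Best of the *supplied* courses only; no claim about all courses."""
    return f"Most optimal of {len(supplied_effs)} supplied courses: {max(supplied_effs):.2f}"
```

Note that neither function can ever justify the bare word “optimal”; everything it knows is relative to what it was handed.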

If it’s an active academy model, this scene would work differently. It would either have shown him the optimal course long ago, or shown him that it’s still working on the problem and that Ibanez’s course is the “most optimal found.” Neither is entirely satisfying for purposes of the story.

Hang-on-idea

How could this scene have gone?

We need a quick beat here to show that in fact, Ibanez is not just some cocky upstart. She really knows what’s up. An appeal to authority is a quick way to do it, but then you have to provide some reason the authority—in this case the computer—hasn’t provided that answer already.

A bigger problem than Starship Troopers

This is a perennial problem for sci-fi, and one that’s becoming more pressing as technology gets more and more powerful. Heroes need to be heroic. But how can they be heroic if computers can do the heroic things for them, and do? What’s the hero doing then? Being a heroic babysitter to a vastly powerful force? This tension will culminate once we get to the questions raised in Her about actual artificial intelligence.

Fortunately, the navigator is not a full-blown artificial intelligence. It’s something less than A.I., an agentive interface, and that gives us our answer. Agentive algorithms can only process what they know, and Ibanez could have been working with an algorithm the computer didn’t know about. She’s just wrapped up school, so maybe it’s something she developed or co-developed there:

Barcalow turns to the nav computer and sees a label: “Custom Course: 34% more efficient than models.”

BARCALOW
Um…OK…How did you find a better course than the computer could?

IBANEZ
My grad project nailed the formula for gravity assist through trinary star systems. It hasn’t been published yet.

BAM. She sounds like a badass and the computer doesn’t sound like a character in a cheap sitcom.

So, writers, hopefully that model will help you not make the mistake of penning your computers to be stoic gurus. Next up, we’ll discuss this same short scene with more of a focus on interaction designers.