The Royal Talon piloting interface

Since my last post, news broke that Chadwick Boseman has passed away after a four year battle with cancer. He kept his struggles private, so the news was sudden and hard-hitting. The fandom is still reeling. Black people, especially, have lost a powerful, inspirational figure. The world has also lost a courageous and talented young actor. Rise in Power, Mr. Boseman. Thank you for your integrity, bearing, and strength.

Photo CC BY-SA 2.0, by Gage Skidmore.

Black Panther’s airship is a triangular vertical-takeoff-and-landing vehicle called the Royal Talon. We see its piloting interface twice in the film.

The first time is near the beginning of the movie. Okoye and T’Challa are flying at night over the Sambisa forest in Nigeria. Okoye sits in the pilot’s seat in a meditative posture, facing a large forward-facing bridge window with a heads-up display. A horseshoe-shaped shelf around her is filled with unactivated vibranium sand. Around her left wrist, her kimoyo beads glow amber, projecting a volumetric display around her forearm.

She announces to T’Challa, “My prince, we are coming up on them now.” As she disengages from the interface, retracting her hands from the pose, the kimoyo projection shifts and shrinks. (See more detail in the video clip, below.)

The second time we see it is when they pick up Nakia and save the kidnapped girls. On their way back to Wakanda we see Okoye again in the pilot’s seat. No new interactions are seen in this scene though we linger on the shot from behind, with its glowing seatback looking like some high-tech spine.

Now, these brief glimpses don’t give a review a lot to go on. But for the sake of completeness, let’s talk about that volumetric projection around her wrist. I note that it is a lovely echo of Dr. Strange’s interface for controlling the Eye of Agamotto, home of the time stone.

Wrist projections are going to be all the rage at the next Snap, I predict.

But we never really see Okoye look at this VP or use it. Cross-referencing the Wakandan alphabet, those five symbols at the top translate to 1 2 K R I, which doesn’t tell us much. (It doesn’t match the letters seen on the HUD.) It might be a visual do-not-disturb signal to onlookers, but if there’s other meaning that the letters and petals are meant to convey to Okoye, I can’t figure it out. At worst, I think having the movements of one wrist emphasized in your peripheral vision with a glowing display is a dangerous distraction from piloting. Her eyes should be on the “road” ahead of her.

The image has been flipped horizontally to illustrate how Okoye would see the display.

Similarly, we never get a good look at the HUD, or see Okoye interact with it, so I’ve got little to offer other than a mild critique that it looks full of pointless ornamental lines, many of which would obscure things in her peripheral vision, which is where humans need the most help detecting things other than motion. But modern sci-fi interfaces generally (and the MCU in particular) are in a baroque period, and this is partly how audiences recognize sci-fi-ness.

I also think that requiring a pilot to maintain full lotus position while flying is a little much, but certainly, if there’s anyone who can handle it, it’s the leader of the Dora Milaje.

One remarkable thing to note is that this is the first brain-input piloting interface in the survey. Okoye thinks what she wants the ship to do, and it does it. I expect, given what we know about kimoyo beads in Wakanda (more on these in a later post), what’s happening is she is sending thoughts to the bracelet, and the beads are conveying the instructions to the ship. As a way to show Okoye’s self-discipline and Wakanda’s incredible technological advancement, this is awesome.

Unfortunately, I don’t have good models for evaluating this interaction. And I have a lot of questions. As with gestural interfaces, how does she avoid a distracted thought from affecting the ship? Why does she not need a tunnel-in-the-sky assist? Is she imagining what the ship should do, or a route, or something more abstract, like her goals? How does the ship grant her its field awareness for a feedback loop? When does the vibranium dashboard get activated? How does it assist her? How does she hand things off to the autopilot? How does she take it back? Since we don’t have good models, and it all happens invisibly, we’ll have to let these questions lie. But that’s part of us, from our less-advanced viewpoint, having to marvel at this highly-advanced culture from the outside.

Black Health Matters

Each post in the Black Panther review is followed by actions that you can take to support black lives.

Thinking back to the terrible loss of Boseman: Fuck cancer. (And not to imply that his death was affected by this, but also:) Fuck the racism that leads to worse medical outcomes for black people.

One thing you can do is to be aware of the diseases that disproportionately affect black people (diabetes, asthma, lung scarring, strokes, high blood pressure, and cancer) and be aware that no small part of these poorer outcomes is racism, systemic and individual. Listen to Dorothy Roberts’ TED talk, calling for an end to race-based medicine.

If you’re the reading sort, check out the books Black Man in a White Coat by Damon Tweedy, or the infuriating history covered in Medical Apartheid by Harriet Washington.

If you are black, in Boseman’s memory, get screened for cancer as often as your doctor recommends it. If you think you cannot afford it and you are in the USA, this CDC website can help you determine your eligibility for free or low-cost screening. If you live elsewhere, you almost certainly have a better healthcare system than we do, but a quick search should tell you your options.

Cancer treatment is equally successful for all races. Yet black men have a 40% higher cancer death rate than white men and black women have a 20% higher cancer death rate than white women. Your best bet is to detect it early and get therapy started as soon as possible. We can’t always win that fight, but better to try than to find out when it’s too late to intervene. Your health matters. Your life matters.

Panther Suit 2.0

The suit that the Black Panther wears is critical to success. At the beginning of the movie, this is “just” a skintight bulletproof suit with homages to its namesake. But, after T’Challa is enthroned, Shuri takes him to her lab and outfits him with a new one with some nifty new features. This write-up is about Shuri’s 2.0 Panther Suit.


At the demonstration of the new suit, Shuri first takes a moment to hold up a bracelet of black kimoyo beads (more on these in a later post) to his neck. With a bubbly computer sound, the glyphs on the beads begin to glow vibranium-purple, projecting two particular symbols on his neck. (The one that looks kind of like a reflective A, and the other that looks like a ligature of a T and a U.)

This is done without explanation, so we have to make some assumptions here, which is always shaky ground for critique.

I think she’s authorizing him to use the suit. At first I thought the interaction was her “pairing” him with the suit, but I can’t imagine that the bead would need to project something onto his skin to read his identity or DNA. So my updated guess is this is a dermal mark that, like the Wakandan tattoos, the suit will check for with an “intra-skin scan,” like the HAN/BAN concepts from the early aughts. This would enable her to authorize many people, which is, perhaps, not as secure.

This interpretation is complicated by Killmonger’s wearing one of the other Black Panther suits when he usurps T’Challa. Shuri had fled with Queen Ramonda to the Jabari stronghold, so Shuri couldn’t have authorized him. Maybe some lab tech who stayed behind? If there was some hint of what’s supposed to be happening here, we would have more grounds to evaluate this interaction.

There might be some hint if there were an online reference to these particular symbols, but they are not part of the Wakandan typeface, or the Adinkra symbols, or the Nsibidi symbols that are seen elsewhere in the film. (I have emails out to the creator of the above image to see if I can learn more there. Will update if I get a response.)


When she finishes whatever the bead did, she says, “Now tell it to go on.” T’Challa looks at it intensely, and the suit spreads from the “teeth” in the necklace with an insectoid computer sound, over the course of about 6 seconds.

We see him activate the suit several more times over the course of the movie, but learn nothing new about activation beyond this. How does he mentally tell it to turn it on? I presume it’s the same mental skill he’s built up across his lifetime with kimoyo beads, but it’s not made explicit in the movie.

A fun detail is that while the suit activates in 6 seconds in the lab—far too slow for action in the field considering Shuri’s sardonic critique of the old suit (“People are shooting at me! Wait! Let me put on my helmet!”)—when T’Challa uses it in Korea, it happens in under 3. Shuri must have slowed it down to be more intelligible and impressive in the lab.

Another nifty detail that is seen but not discussed is that the nanites will also shred any clothes being worn at the time of transformation, as seen at the beginning of the chase sequence outside the casino and when Killmonger is threatened by the Dora Milaje.

Hopefully they weren’t royal…oh. Oh well?


T’Challa thinks the helmet off a lot over the course of the movie, even in some circumstances where I am not sure it was wise. We don’t see the mechanism. I expect it’s akin to kimoyo communication, again. He thinks it, and it’s done. (n.b. “It’s mental” is about as satisfying from a designer’s critique as “a wizard did it”, because it’s almost like a free pass, but *sigh* perfectly justifiable given precedent in the movie.)

Kinetic storage & release

At the demonstration in her lab, Shuri tells T’Challa to, “Strike it.” He performs a turning kick to the mannequin’s ribcage and it goes flying. When she fetches it from across the lab, he marvels at the purple light emanating from Nsibidi symbols that fill channels in the suit where his strike made contact. She explains, “The nanites have absorbed the kinetic energy. They hold it in place for redistribution.”

He then strikes it again in the same spot, and the nanites release the energy, knocking him back across the lab, like all those nanites had become a million microscopic bigclaw snapping shrimp all acting in explosive concert. Cool as it is, this is my main critique of the suit.

First, the good. As a point of illustration of how cool their mastery of tech is, and how it works, this is pretty sweet. Even the choice of purple is smart because it is a hard color to match in older chemical film processes, and can only happen well in a modern, digital film. So extradiegetically, the color is new and showing off a bit.

Tactically though, I have to note that it broadcasts his threat level to his adversaries. Learning this might take a couple of beatings, but word would get around. Faithful readers will know we’ve looked at aposematic signaling before, but those kinds of markings are permanent. The suit changes as he gets technologically beefier. Wouldn’t people just avoid him when he was more glowy, or throw something heavy at him to force him to expend it, and then attack when he was weaker? More tactical I think to hold those cards close to the chest, and hide the glow.

Now it is quite useful for him to know the level of charge. Maybe some tactile feedback, like warmth or a vibration at the medial edge of his wrists. Cinegenics win for actual movie-making of course, but designers take note: what looks cool is not always smart design.

Not really a question for me: Can he control how much he releases? If he’s trying to just knock someone out, it would be crappy if he accidentally killed them, or expected to knock out the big bad with a punch, only to find it just tickled him like a joy buzzer. But if he already knows how to mentally activate the suit, I’m sure he has the skill down to mentally clench a bit to control the output. Wizards.

Regarding Shuri’s description, I think she’s dumbing things down for her brother. If the suit actually absorbed the kinetic energy, the suit would not have moved when he kicked it. (Right?) But let’s presume if she were talking to someone with more science background, she would have been more specific to say, “absorbed some of the kinetic energy.”

Explosive release

When the suit has absorbed enough kinetic energy, T’Challa can release it all at once as a concussive blast. He punches the ground to trigger it, but it’s not clear how he signals to the suit that he wants to blast everyone around him back rather than, say, create a crater, but again, I think we can assume it’s another mental command. Wizards.


To activate the suit’s claws, T’Challa quickly extends curved fingers and holds them there, and they pop out.

This gesture is awesome, and completely fit for purpose. Shaping the fingers like claws makes claws. It’s also when fingers are best positioned to withstand the raking motion. The second of hold ensures it’s not an accidental activation. Easy to convey, easy to remember, easy to intuit. Kids playing Black Panther on the sidewalk would probably do the same without even seeing the movie.

We have an unanswered question about how those claws retract. Certainly the suit is smart enough to retract automatically so he doesn’t damage himself. Probably more mental commands, but whatever. I wouldn’t change a thing here.

Black Lives Matter

Each post in the Black Panther review is followed by actions that you can take to support black lives. I had something else planned for this post, but just before publication another infuriating incident has happened.

While the GOP rallies to the cause of the racist-in-chief in Charlotte, right thinking people are taking to the streets in Kenosha, Wisconsin, to protest the unjust shooting of a black man, Jacob Blake. The video is hard to watch. Watch it. It’s especially tragic, especially infuriating, because Kenosha had gone through “police reform” initiatives in 2014 meant to prevent exactly this sort of thing. It didn’t prevent this sort of thing. As a friend of mine says, it’s almost enough to make you an abolitionist.

Raysean White via

Information is still coming in as to what happened, but here’s the narrative we understand right now: It seems that Blake had pulled over his car to stop a fight in progress. When the police arrived, he figured they had control of the situation, and he walked back to his car to leave. That’s when officers shot him in the back multiple times, while his family—who were still waiting for him in the car—watched. He’s out of surgery and stable, but rather than some big-picture to-do tonight, please donate to support his family. They have witnessed unconscionable trauma.

Blake and kids, in happier times

Several fundraisers posted to support Blake’s family have been taken down by GoFundMe for being fake, but “Justice for Jacob Blake” remains active as of Monday evening. Please donate.

8 Reasons The Voight-Kampff Machine is shit (and a redesign to fix it)

Distinguishing replicants from humans is a tricky business. Since they are indistinguishable biologically, it requires an empathy test, during which the subject hears empathy-eliciting scenarios and is watched carefully for telltale signs such as, “capillary dilation—the so-called blush response…fluctuation of the pupil…involuntary dilation of the iris.” To aid the blade runner in this examination, they use a portable machine called the Voight-Kampff machine, named, presumably, for its inventors.

The device is the size of a thick laptop computer, and rests flat on the table between the blade runner and subject. When the blade runner prepares the machine for the test, they turn it on, and a small adjustable armature rises from the machine, the end of which is an intricate piece of hardware, housing a powerful camera, glowing red.

The blade runner trains this camera on one of the subject’s eyes. Then, while reading from the playbook of scenarios, they keep watch on a large monitor, which shows a magnified image of the subject’s eye. (Ostensibly, anyway. More on this below.) A small bellows on the subject’s side of the machine raises and lowers. On the blade runner’s side of the machine, a row of lights reflect the volume of the subject’s speech. Three square, white buttons sit to the right of the main monitor. In Leon’s test we see Holden press the leftmost of the three, and the iris in the monitor becomes brighter, illuminated from some unseen light source. The purpose of the other two square buttons is unknown. Two smaller monochrome monitors sit to the left of the main monitor, showing moving but otherwise inscrutable forms of information.

In theory, the system allows the blade runner to more easily watch for the minute telltale changes in the eye and blush response, while keeping a comfortable social distance from the subject. Substandard responses reveal a lack of empathy and thereby a high probability that the subject is a replicant. Simple! But on review, it’s shit. I know this is going to upset fans, so let me enumerate the reasons, and then propose a better solution.

-2. Wouldn’t a genetic test make more sense?

If the replicants are genetically engineered for short lives, wouldn’t a genetic test make more sense? Take a drop of blood and look for markers of incredibly short telomeres or something.

-1. Wouldn’t an fMRI make more sense?

An fMRI would reveal empathic responses in the inferior frontal gyrus, or cognitive responses in the ventromedial prefrontal gyrus. (The brain structures responsible for these responses.) Certainly more expensive, but more certain.

0. Wouldn’t a metal detector make more sense?

If you are testing employees to detect which ones are the murdery ones and which ones aren’t, you might want to test whether they are bringing a tool of murder with them. Because once they’re found out, they might want to murder you. This scene should be rewritten such that Leon leaps across the desk and strangles Holden, IMHO. It would make him, and other blade runners, seem much more feral and unpredictable.

(OK, those aren’t interface issues but seriously wtf. Onward.)

1. Labels, people

Controls need labels. Especially when the buttons have no natural affordance and the costs of experimentation to discover the function are high. Remembering the functions of unlabeled controls adds to the cognitive load for a user who should be focusing on the person across the table. At least an illuminated button helps signal the state, so that, at least, is something.

2. It should be less intimidating

The physical design is quite intimidating: The way it puts a barrier in between the blade runner and subject. The fact that all the displays point away from the subject. The weird intricacy of the camera, its ominous HAL-like red glow. Regular readers may note that the eyepiece is red-on-black and pointy. That is to say, it is aposematic. That is to say, it looks evil. That is to say, intimidating.

I’m no emotion-scientist, but I’m pretty sure that if you’re testing for empathy, you don’t want to complicate things by introducing intimidation into the equation. Yes, yes, yes, the machine works by making the subject feel like they have to defend themselves from the accusations in the ethical dilemmas, but that stress should come from the content, not the machine.

2a. Holden should be less intimidating and not tip his hand

While we’re on this point, let me add that Holden should be less intimidating, too. When Holden tells Leon that a tortoise and a turtle are the same thing, (Narrator: They aren’t) he happens to glance down at the machine. At that moment, Leon says, “I’ve never seen a turtle,” a light shines on the pupil and the iris contracts. Holden sees this and then gets all “ok, replicant” and becomes hostile toward Leon.

In case it needs saying: If you are trying to tell whether the person across from you is a murderous replicant, and you suddenly think the answer is yes, you do not tip your hand and let them know what you know. Because they will no longer have a reason to hide their murderyness. Because they will murder you, and then escape, to murder again. That’s like, blade runner 101, HOLDEN.

3. It should display history 

The glance moment points out another flaw in the interface. Holden happens to be looking down at the machine at that moment. If he wasn’t paying attention, he would have missed the signal. The machine needs to display the interview over time, and draw his attention to troublesome moments. That way, when his attention returns to the machine, he can see that something important happened, even if it’s not happening now, and tell at a glance what the thing was.

4. It should track the subject’s eyes

Holden asks Leon to stay very still. But people are bound to involuntarily move as their attention drifts to the content of the empathy dilemmas. Are we going to add noncompliance-guilt to the list of emotional complications? Use visual recognition algorithms and high-resolution cameras to just track the subject’s eyes no matter how they shift in their seat.

5. Really? A bellows?

The bellows doesn’t make much sense either. I don’t believe it could, at the distance it sits from the subject, help detect “capillary dilation” or “ophthalmological measurements”. But it’s certainly creepy and Terry Gilliam-esque. It adds to the pointless intimidation.

6. It should show the actual subject’s eye

The eye color that appears on the monitor (hazel) matches neither Leon’s (a striking blue) nor Rachael’s (a rich brown). Hat tip to Typeset in the Future for this observation. It’s a great review.

7. It should visualize things in ways that make it easy to detect differences in key measurements

Even if the inky, dancing black blob is meant to convey some sort of information, the shape is too organic for anyone to make meaningful readings from it. Like seriously, what is this meant to convey?

The spectrograph to the left looks a little more convincing, but it still requires the blade runner to do all the work of recognizing when things are out of expected ranges.

8. The machine should, you know, help them

The machine asks its blade runner to do a lot of work to use it. This is visual work and memory work and even work estimating when things are out of norms. But this is all something the machine could help them with. Fortunately, this is a tractable problem, using the mighty powers of logic and design.

Pupillary diameter

People are notoriously bad at estimating the sizes of things by sight. Computers, however, are good at it. Help the blade runner by providing a measurement of the thing they are watching for: pupillary diameter. (n.b. The script speaks of both iris constriction and pupillary diameter, but these are the same thing.) Keep it convincing and looking cool by having this be an overlay on the live video of the subject’s eye.

So now there’s some precision to work with. But as noted above, we don’t want to burden the user’s memory with having to remember stuff, and we don’t want them to just be glued to the screen, hoping they don’t miss something important. People are terrible at vigilance tasks. Computers are great at them. The machine should track and display the information from the whole session.

Note that the diagram illustrates the radius, but the display shows the diameter. That buys some efficiencies in the final interface.

Now, with the data-over-time, the user can glance to see what’s been happening and a precise comparison of that measurement over time. But, tracking in detail, we quickly run out of screen real estate. So let’s break the display into increments with differing scales.

There may be more useful increments, but microseconds and seconds feel pretty convincing, with the leftmost column compressing gradually over time to show everything from the beginning of the interview. Now the user has a whole picture to look at. But this still burdens them into noticing when these measurements are out of normal human ranges. So, let’s plot the threshold, and note when measurements fall outside of that. In this case, it feels right that replicants display less than normal pupillary dilation, so it’s a lower-boundary threshold. The interface should highlight when the measurement dips below this.
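To make the lower-boundary idea concrete, here’s a minimal sketch of the check the display would be doing behind the scenes. Everything here—the function name, the sample data, and the 2.0 mm floor—is invented for illustration, not anything specified in the film.

```python
# Hypothetical sketch: flag moments where pupillary diameter falls
# below a lower-boundary human norm. All numbers are invented.

HUMAN_DILATION_FLOOR_MM = 2.0  # invented lower threshold

def flag_subnormal(samples, floor=HUMAN_DILATION_FLOOR_MM):
    """samples: list of (time_s, diameter_mm). Returns times below the floor."""
    return [t for t, d in samples if d < floor]

samples = [(0.0, 3.1), (0.5, 2.6), (1.0, 1.8), (1.5, 1.7), (2.0, 2.4)]
print(flag_subnormal(samples))  # → [1.0, 1.5]
```

The real machine would run this continuously against the video feed; the point is just that the machine, not the blade runner, does the comparing.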


I think that covers everything for the pupillary diameter. The other measurement mentioned in the dialogue is capillary dilation of the face, or the “so-called blush response.” As we did for pupillary diameter, let’s also show a measurement of the subject’s skin temperature over time as a line chart. (You might think skin color is a more natural measurement, but for replicants with a darker skin tone than our two pasty examples Leon and Rachael, temperature via infrared is a more reliable metric.) For visual interest, let’s show thumbnails from the video. We can augment the image with degree-of-blush. Reduce the image to high contrast grayscale, use visual recognition to isolate the face, and then provide an overlay to the face that illustrates the degree of blush.

But again, we’re not just looking for blush changes. No, we’re looking for blush compared to human norms for the test. It would look different if we were looking for more blushing in our subject than humans, but since the replicants are less empathetic than humans, we would want to compare and highlight measurements below a threshold. In the thumbnails, the background can be colored to show the median for expected norms, to make comparisons to the face easy. (Shown in the drawing to the right, below.) If the face looks too pale compared to the norm, that’s an indication that we might be looking at a replicant. Or a psychopath.
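The same logic applies to the blush norm: compare each frame’s reading against the expected median and flag anything that runs too pale. A hypothetical sketch—all names, temperatures, and tolerances are my invention:

```python
# Hypothetical sketch: compare per-frame face temperature (infrared)
# against an expected blush norm, flagging frames that run too cool/pale.
# Thresholds and readings are invented for illustration.

NORM_MEDIAN_C = 34.5   # invented expected skin temp during a blush response
TOLERANCE_C = 0.6      # how far below the norm still counts as human

def too_pale(frames, norm=NORM_MEDIAN_C, tol=TOLERANCE_C):
    """frames: list of (frame_id, mean_face_temp_c). Returns suspicious frames."""
    return [f for f, temp in frames if temp < norm - tol]

frames = [(1, 34.6), (2, 34.0), (3, 33.7), (4, 34.5)]
print(too_pale(frames))  # → [3]
```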

So now we have solid displays that help the blade runner detect pupillary diameter and blush over time. But it’s not that any diameter changes or blushing is bad. The idea is to detect whether the subject has less of a reaction than norms to what the blade runner is saying. The display should be annotating what the blade runner has said at each moment in time. And since human psychology is a complex thing, it should also track video of the blade runner’s expressions as well, since, as we see above, not all blade runners are able to maintain a poker face. HOLDEN.

Anyway, we can use the same thumbnail display of the face, without augmentation. Below that we can display the waveform (because they look cool), and speech-to-text the words that are being spoken. To ensure that the blade runner’s administration of the text is not unduly influencing the results, let’s add an overlay showing the ideal intonation targets. Despite evidence in the film, let’s presume Holden is a trained professional, and he does not stray from those targets, so let’s skip designing the highlight and recourse-for-infraction for now.

Finally, since they’re working from a structured script, we can provide a “chapter” marker at the bottom for easy reference later.

Now we can put it all together, and it looks like this. One last thing we can do to help the blade runner is to highlight when all the signals indicate replicant-ness at once. This signal can’t be too much, or replicants being tested would know from the light on the blade runner’s face when their jig is up, and try to flee. Or murder. HOLDEN.

For this comp, I added a gray overlay to the column where pupillary and blush responses both indicated trouble. A visual designer would find some more elegant treatment.

If we were redesigning this from scratch, we could specify a wide display to accommodate this width. But if we are trying to squeeze this display into the existing prop from the movie, here’s how we could do it.

Note the added labels for the white squares. I picked some labels that would make sense in the context. “Calibrate” and “record” should be obvious. The idea behind “mark” is an easy button for the blade runner to press when they see something that looks weird, like when doctors manually annotate cardiograph output.

Lying to Leon

There’s one more thing we can add to the machine that would help out, and that’s a display for the subject. Recall the machine is meant to test for replicant-ness, which happens to equate to murdery-ness. A positive result from the machine needs to be handled carefully so what happens to Holden in the movie doesn’t happen. I mentioned making the positive-overlay subtle above, but we can also make a placebo display on the subject’s side of the interface.

The visual hierarchy of this should make the subject feel like its purpose is to help them, but the real purpose is to make them think that everything’s fine. Given the script, I’d say a teleprompt of the empathy dilemma should take up the majority of this display. Oh, they think, this is to help me understand what’s being said, like a closed caption. Below the teleprompt, at a much smaller scale, a bar at the bottom is the real point.

On the left of this bar, a live waveform of the audio in the room helps the subject know that the machine is testing things live. In the middle, we can put one of those bouncy fuiget displays that clutters so many sci-fi interfaces. It’s there to be inscrutable, but convince the subject that the machine is really sophisticated. (Hey, a diegetic fuiget!) Lastly—and this is the important part—an area shows that everything is “within range.” This tells the subject that they can be at ease. This is good for the human subject, because they know they’re innocent. And if it’s a replicant subject, this false comfort protects the blade runner from sudden murder. This text might flicker or change occasionally to something ambiguous like “at range,” to convey that it is responding to real world input, but it would never change to something incriminating.

This way, once the blade runner has the data to confirm that the subject is a replicant, they can continue to the end of the module as if everything was normal, thank the replicant for their time, and let them leave the room believing they passed the test. Then the results can be sent to the precinct and authorizations returned so retirement can be planned with the added benefit of the element of surprise.


Look, I’m sad about this, too. The Voight-Kampff machine is cool. It fits very well within the art direction of the Blade Runner universe. This coolness burned the machine into my memory when I saw this film the first dozen times, but despite that, it just doesn’t stand up to inspection. It’s not hopeless, but does need a lot of thinkwork and design to make it really fit to task, and convincing to us in the audience.

Routing Board

When the two AIs Colossus and Guardian are disconnected from communicating with each other, they try and ignore the spirit of the human intervention and reconnect on their own. We see the humans monitoring Colossus’ progress in this task on a big board in the U.S. situation room. It shows a translucent projection map of the globe with white dots representing data centers and red icons representing missiles. Beneath it, glowing arced lines illustrate the connection routes Colossus is currently testing. When it finds that a current segment is ineffective, that line goes dark, and another segment extending from the same node illuminates.

For a smaller file size, the animated gif has been stilled between state changes, but the timing is as close as possible to what is seen in the film.

Forbin explains to the President, “It’s trying to find an alternate route.”

A first in sci-fi: Routing display 🏆

First, props to Colossus: The Forbin Project for being the first show in the survey to display something like a routing board, that is, a network of nodes through which connections are visible, variable, and important to stakeholders.

Paul Baran and Donald Davies had published their notion of a network that could, in real-time, route information dynamically around partial destruction of the network in the early 1960s, and this packet switching had been established as part of ARPAnet in the late 1960s, so Colossus was visualizing cutting edge tech of the time.

This may even be the first depiction of a routing display in all of screen sci-fi or even cinema, though I don’t have a historical perspective on other genres, like the spy genre, which is another place you might expect to see something like this. As always, if you know of an earlier one, let me know so I can keep this record up to date and honest.

A nice bit: curvy lines

Should the lines be straight or curvy? From Colossus’ point of view, the network is a simple graph. Straight lines between its nodes would suffice. But from the humans’ point of view, the literal shape of the transmission lines is important, in case they need to scramble teams to a location to manually cut the lines. Presuming these arcs mean that (and not just the way neon in a prop could bend), then the arcs are the right display. So this is good.

But, it breaks some world logic

The board presents some challenges with the logic of what’s happening in the story. If Colossus exists as a node in a network, and its managers want to cut it off from communication along that network, where is the most efficient place to “cut” communications? It is not at many points along the network. It is at the source.

Imagine painting one knot in a fishing net red and another one green. If you were trying to ensure that none of the strings that touch the red knot could trace a line to the green one, do you trim a bunch of strings in the middle, or do you cut the few that connect directly to the knot? Presuming that it’s as easy to cut any one segment as any other, the fewer the cuts, the better. In this case that means more secure.
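The fishing-net logic is easy to verify in code. In this sketch (a toy graph with invented node names, not the film’s actual network), cutting a segment in the middle of the network leaves an alternate route open, while cutting the few segments at the source isolates the node completely:

```python
from collections import deque

def reachable(edges, start, goal):
    """Breadth-first search: is goal reachable from start over these edges?"""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# A tiny stand-in network: Colossus and Guardian joined through relay nodes.
edges = [("colossus", "r1"), ("colossus", "r2"),
         ("r1", "r3"), ("r2", "r3"), ("r3", "guardian")]

# Cutting one mid-network segment leaves an alternate route...
mid_cut = [e for e in edges if e != ("r1", "r3")]
print(reachable(mid_cut, "colossus", "guardian"))     # True

# ...but cutting the two segments at the source isolates Colossus.
source_cut = [e for e in edges if "colossus" not in e]
print(reachable(source_cut, "colossus", "guardian"))  # False
```

Two cuts at the source do what one (or even three) mid-network cuts cannot.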

The network in Colossus looks to be about 40 nodes, so it’s less complicated than the fishing net. Still, it raises the question, what did the computer scientists in Colossus do to sever communications? Three lines disappear after they cut communications, but even if they disabled those lines, the rest of the network still exists. The display just makes no sense.

Before, happy / After, I will cut a Prez

Per the logic above, they would cut it off at its source. But the board shows it reaching out across the globe. You might think maybe they just cut Guardian off, leaving Colossus to flail around the network, but that’s not explicitly said in the communications between the Americans and the Russians, and the U.S. President is genuinely concerned about the AIs at this point, not trying to pull one over on the “pinkos.” So there’s not a satisfying answer.

It’s true that at this point in the story, the humans are still letting Colossus do its primary job, so it may be looking at every alternate communication network to which it has access: telephony, radio, television, and telegraph. It would be ringing every “phone” it thought Guardian might pick up, and leaving messages behind for possible asynchronous communications. I wish a script doctor had added in a line or three to clarify this.

  • We’ve cut off its direct lines to Guardian. Now it’s trying to find an indirect line. We’re confident there isn’t one, but the trouble will come when Colossus realizes it, too.

Too slow

Another thing that seems troubling is the slow speed of the shifting route. The segments stay illuminated for nearly a full second at a time. Even with 1960s copper undersea cables and switches, electronic signals should not take that long. Telephony around the world was switched from manual to automatic switching by the 1930s, so it’s not like it’s waiting on a human operating a switchboard.

You’re too slow!

Even if it was just scribbling its phone number on each network node and the words “CALL ME” in computerese, it should go much faster than this. Cinematically, you can’t go too fast or the sense of anticipation and wonder is lost, but it would be better to have it zooming through a much more complicated network to buy time. It should feel just a little too fast to focus on—frenetic, even.

This screen gets 15 seconds of screen time, and if you showed one new node per frame, that’s only 360 states you need to account for, a paltry sum compared to the number of possible paths it could test across a 38 node graph between two points.
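The arithmetic bears this out. Between two fixed endpoints, a loop-free route can pass through any ordered subset of the remaining nodes, so the count explodes combinatorially. A back-of-envelope sketch (assuming, generously, a fully connected network):

```python
from math import perm

def simple_path_count(n):
    """Simple paths between two fixed endpoints in a complete graph on n nodes:
    choose and order any k of the other n - 2 nodes as intermediate hops."""
    return sum(perm(n - 2, k) for k in range(n - 1))

# Sanity check on a 4-node complete graph: direct, two 1-hop, two 2-hop = 5.
print(simple_path_count(4))   # 5

# On a 38-node graph, the count dwarfs 360 displayable frames.
print(simple_path_count(38) > 360)  # True
```

Even testing one route per frame, 15 seconds barely scratches the surface.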

Plus the speed would help underscore the frightening intelligence and capabilities of the thing. And yes, I understand that this would be far easier to do nowadays with digital tools than it was with this analog prop.

Realistic-looking search strategies

Again, I know this was a neon, analog prop, but let’s just note that it’s not testing the network in anything that looks like a computery way. It even retraces some routes. A brute force algorithm would just test every possibility sequentially. In larger networks there are pathfinding algorithms that are optimized in different ways to find routes faster, but they don’t look like this. They look more like what you see in the video below. (Hat tip to YouTuber gray utopia.)

This would need a lot of art direction and the aforementioned speed, but it would be more believable than what we see.
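For contrast, here is what sequential, non-retracing testing might look like: a depth-first enumeration of every loop-free route between the two endpoints, tried in a fixed order. This is a toy sketch over an invented four-node graph, not a claim about Colossus’ actual algorithm:

```python
def all_simple_paths(adj, start, goal, path=None):
    """Depth-first enumeration of every loop-free route, in a fixed order."""
    path = [start] if path is None else path
    if start == goal:
        yield list(path)
        return
    for nxt in sorted(adj.get(start, ())):
        if nxt not in path:  # never revisit a node: no retracing
            path.append(nxt)
            yield from all_simple_paths(adj, nxt, goal, path)
            path.pop()

# Invented toy network.
adj = {"colossus": {"a", "b"}, "a": {"colossus", "b", "guardian"},
       "b": {"colossus", "a"}, "guardian": {"a"}}

for route in all_simple_paths(adj, "colossus", "guardian"):
    print(" -> ".join(route))
# colossus -> a -> guardian
# colossus -> b -> a -> guardian
```

Note that each route is tested exactly once; a display driven by this would never double back the way the prop does.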

What’s the right projection?

Is this the right projection to use? Of course the most accurate representation of the earth is a globe, but it has many challenges in presenting a phenomenon that could happen anywhere in the world. Not the least of these is that it occludes about half of itself, a problem that is not well-solved by making it transparent. So, a projection it must be. There are many, many ways to transform a spherical surface into a 2D image, so the question becomes which projection and why.

The map uses what looks like a hand-drawn version of the Peirce quincuncial projection. (But n.b. none of the projection types I compared against it matched exactly, which is why I say it was hand-drawn.) Also those longitude and latitude lines don’t make any sense; though again, a prop. I like that it’s a non-standard projection because screw Mercator, but still, why Peirce? Why at this angle?

Also, why place time zone clocks across the top as if they corresponded to the map in some meaningful way? Move those clocks.

I have no idea why the Peirce map would be the right choice here, when its principal virtue is that it can be tessellated. That’s kind of interesting if you’re scrolling and can’t dynamically re-project the coastlines. But I am pretty sure the Colossus map does not scroll. And if the map is meant to act as a quick visual reference, having it dynamic means time is wasted when users look to the map and have to orient themselves.

If this map was only for tracking issues relating to Colossus, it should be an azimuthal map, but not over the north pole. The center should be the Colossus complex in Colorado. That might be right for a monitoring map in the Colossus Programming Office. This map is over the north pole, which certainly highlights the fact that the core concern of this system is the Cold War tensions between Moscow and D.C. But when you consider that, it points out another failing. 

Later in the film the map tracks missiles (not with projected paths, sadly, but with Mattel Classic Football style yellow rectangles). But missiles could conceivably come from places not on this map. What is this office to do with a ballistic-missile submarine off of the Baja peninsula, for example? Just wait until it makes its way on screen? That’s a failure. Which takes us to the crop.


The map isn’t just about missiles. Colossus can look anywhere on the planet to test network connections. (Even nowadays, near-earth orbit and outer space.) Unless the entire network was contained just within the area described on the map, it’s excluding potentially vital information. If Colossus routed itself through Mexico, South Africa, and Uzbekistan before finally reconnecting to Guardian, users would be flat out of luck using that map to determine the leak route. And I’m pretty sure Mexico, South Africa, and Uzbekistan all had functioning telephone networks in the 1960s.

This needs a complete picture

Since the missiles and networks with which Colossus is concerned are potentially global, this should be a global map. Here I will offer my usual fanboy shout-outs to the Dymaxion and Pacific-focused Waterman projection for showing connectedness and physical flow, but there would be no shame in showing the complete Peirce quincuncial. Just show the whole thing.

Maybe fill in some of the Pacific “wasted space” with a globe depiction turned to points of interest, or some other fuigetry. Which gives us a new comp something like this.

I created this proof of concept manually. With more time, I would comp it up in Processing or Python and it would be even more convincing. (And might have reached London.)

All told, this display was probably eye-opening for its original audience. Golly jeepers! This thing can draw upon resources around the globe! It has intent, and a method! And they must have cool technological maps in D.C.! But from our modern-day vantage point, it has a lot to learn. If they ever remake the film, this would be a juicy thing to fully redesign.

Life Day Orbs

The last interface in The Star Wars Holiday Special is one of the handful of ritual interfaces we see in the scifiinterfaces survey. After Saun Dann leaves, the Wookiee family solemnly proceeds to a shelf in the living room. One by one they retrieve hand-sized transparent orbs with a few lights glowing inside of each. They gather together in the center of the living room, and a watery light floods them from stage right while the rest of the house lights dim. They hold the orbs up, with heads tilted reverently. Then the shot goes blurry before refocusing, and now they’re wearing blood-red robes and floating in a sea of stars.


Then we cut to a long procession of Wookiees walking single file across an invisible space bridge into a glowing ball of space light, which explodes in sparkles at no particular time, and to which no one in the procession reacts in any way.


Break for commercial.


Lights up, and dozens of blood-robed Wookiees are gathered in a dark space at the foot of a great, uplit tree called The Tree of Life. Stars occasionally, but not consistently, appear behind the tree. Fog hugs the floor and covers randomly distributed strings of fairy lights. Everyone carries the glowing orbs. They greet newcomers arriving from the star bridge with moans and bows (n.b. sloppy seiritsu form). Then C3PO and R2D2 appear from behind the Tree and walk out onto an elevated platform to greet Chewbacca (who seems to be some sort of spiritual leader in addition to being a Rebel Leader) with a “Happy Life Day!” An unholy chorus of Wookiee howls emerges from the gathered crowd. C3PO turns to the audience and says, “Happy Life Day, everyone!” C3PO expresses his and R2’s Pinocchio Syndrome to the crowd, though no one asked. Then Leia, Luke, and Han arrive.

Leia speaks (in English), explaining to the Wookiees gathered there the meaning of their own, dearest holiday. She then sings the Life Day Carol. (Again, in English.) No Wookiee has the biological morphology to participate, so they just watch. As a public service, I have transcribed these lyrics. Posthumous props to Carrie Fisher for delivering this with complete earnestness.

Life Day Carol

Sung by Princess Leia


We celebrate a day of peace
A day of har-moh-neeeee
A day of joy we all can share
Together joyously [thx to scifihugh for this line]
A day that takes us through the darkness
A day that leads into light
A day that makes us want to celebrate
The light

[Horn section gets exuberant]

A day that brings the promise
That one day we’ll be free
To live
To laugh
To dream
To grow
To trust
To know
To be

Once the song is done, the Wookiees file up a ramp and past the humans, greeting them each in turn with nods, and exit back over the star bridge.

Then Chewbacca has a sudden dissociative fugue episode, where he relives moments from his recent past. (I’m going to sidestep the troubling but wholly possible implication that he has PTSD from his experiences with the Rebellion.) When he finally recovers, his family is back in their living room, staring at their glowing orbs, which sit in a basket in the center of the dining room table. The robes are gone. They are gathered for a family meal of fruit. (Since Mala’s actual cooking would probably not go down well.) They gather hands and bow their heads reverently in a deeply disturbing, ethnocentric gesture. Fade to black.



The design of ritual is a fascination of mine. So if there’s ever a sci-fi movie showing of The Star Wars Holiday Special, that should be one topic for the hangout afterward. What does it purport to mean? Why do non-Wookiees get the starring role? Why the robes? What’s with the unsettling self-centeredness of having essentially North-American Christian rites?

But in this house we talk interface, and that means those orbs.

Physical Interface

The orbs’ physical interface is fit to task. Because they’re spherical, they can’t be easily set on a surface and put “out of mind.” (Kind of like a drinking horn, but no one gets inebriated in the Star Wars diegesis.) The orbs must be held and cared for, which is a nice way to get participants into a reverent mood. It also means that at least one hand is dedicated to holding it throughout the ceremony, which might put participants into a bit of active meditation, to free the body so the mind can focus and contemplate: Life and Days.

Visual design

The transparency and little lights within are also nice. Like the fairy lights common to many winter celebrations, they engage a sense of wonder and spectacle. Like holding fireflies, or stars in the palm of your hand. They speak a bit to the Pareto Principle, related to the notion that life is rare, precious, and valuable. The transparency also brings the color and motion of the surrounding environment into attention as well, speaking of the connectedness of all things.


Turning them on

I presume this is automatic, i.e. the lights illuminate just ahead of the datetime of the ritual. They either have a calendar or some technology in the home automatically broadcasts the signal to come on. They could even slowly warm up as the ritual approached to help with a sense of anticipation. This automation would make them seem more natural, like a blossoming flower or budding fruit. You know, life.

Activation: Go there

If part of the celebration of Life Day is about togetherness, well then having the activation require literally gathering the family together with the spheres in hand is pretty on point. There’s even feedback for the family that they’re close enough together when the orbs signal the family’s Hue lights to dim and turn on the watery-reflection projection.

It also has to have some pretty sophisticated contextual awareness: it only started once all four Wookiees were close together. Recall that Chewie almost didn’t make it home for Life Day. Would they have just been unable to participate without him? Doubtful. More likely the orbs somehow know, like a Nest Thermostat, who’s home, and wait for all of them to be in proximity to kick things off.
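That kind of presence-gated trigger is simple to model: the ritual begins only when every registered household member is within some radius of the gathering spot. A speculative sketch (the names, coordinates, and threshold are all invented for illustration):

```python
import math

GATHER_RADIUS = 2.0  # meters; an invented "close enough" threshold

def all_gathered(positions, center):
    """True only when every household member is near the gathering point."""
    return all(math.dist(p, center) <= GATHER_RADIUS
               for p in positions.values())

# Hypothetical sensed positions of the family in home coordinates.
family = {"Chewbacca": (0.5, 0.2), "Malla": (0.8, -0.4),
          "Lumpy": (-0.3, 0.6), "Itchy": (1.1, 0.9)}
living_room = (0.0, 0.0)

if all_gathered(family, living_room):
    print("Dim the house lights; start the orb projection.")
```

With Chewie still light-years away, `all_gathered` stays false and the living room lights stay up.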

Note also that it did not start when the orbs were in their storage basket, but only when they’re held up in the living room. So it also has some precise location awareness, too.

Sidenote: Where is there?

Where is the Tree of Life and how does the orb help them get there?


The Tree of Life is real, on Kazook/Kashyyyk and the orbs provide a trippy means of teleportation to this site. This would mean the Wookiees have access to teleportation tech that they don’t use in any other way—like, say, in their struggle against the Empire. So, this seems unlikely.


Since it’s not literal, and I can’t imagine the whole thing being some sort of metaphor, the other possibility is that the tree is virtual. This would help explain why there are only a few dozen Wookiees around this single sacred tree on its high holy day: It’s not bound by actual physical constraints. This raises a whole host of other questions, such as how does it project the perceptual data into the Wookiees’ senses that they’re robed, and walking the star bridge, and at the tree?


So…pretty nice

All told, the orbs’ design helps reinforce the themes of Life Day, cheesy and creepy as they are.

You know, when The Star Wars Holiday Special came out, this “technology” was pure fancy. But now we have cheap, ultrabright LEDs, tiny processors, Wi-Fi chips, identity servers, all sorts of sensors, and Hue lights. If anyone wanted to build working models of these as an homage to an obscure sci-fi interface, it’s entirely possible now.

The Cookie

In one of the story threads, Matt uses an interface as part of his day job at Smartelligence to wrangle an AI that is the cloned mind of a client named Greta. Matt has three tasks in this role.

  1. He has to explain to her that she is an artificial intelligence clone of a real world person’s mind. This is psychologically traumatic, as she has decades of memories as if she were a real person with a real body and full autonomy in the world.
  2. He has to explain how she will do her job: Her responsibilities and tools.
  3. He has to “break” her will and coerce her to faithfully serve her master—who is the real-world Greta. (The idea is that since virtual Greta is an exact copy, she understands real Greta’s preferences and can perform personal assistant duties flawlessly.)

The AI is housed in a small egg-shaped device with a single blue light camera lens. The combination of the AI and the egg-shaped device is called “The Cookie.” Why it is not called The Egg is a mystery left for the reader, though I hope it is not just for the “Cookie Monster” joke dropped late in the episode. Continue reading

Syd’s dash display


If Jasper’s car is aftermarket, Syd’s built-in display seems to be more consumer-savvy. It is a blue electroluminescent flat display built into the dashboard. It has more glanceable information with a cleaner information hierarchy. It has no dangerous keyboard entry. All we see of the display in these few glimpses is the speedometer, but even that’s enough to illustrate these differences.

Door Bomb and Safety Catches

Johnny leaves the airport by taxi, ending up in a disreputable part of town. During his ride we see another video phone call with a different interface, and the first brief appearance of some high tech binoculars. I’ll return to these later, for the moment skipping ahead to the last of the relatively simple and single-use physical gadgets.

Johnny finds the people he is supposed to meet in a deserted building but, as events are not proceeding as planned, he attaches another black box with glowing red status light to the outside of the door as he enters. Although it looks like the motion detector we saw earlier, this is a bomb.


This is indeed a very bad neighbourhood of Newark. Inside are the same Yakuza from Beijing, who plan to remove Johnny’s head. There is a brief fight, which ends when Johnny uses his watch to detonate the bomb. It isn’t clear whether he pushes or rotates some control, but it is a single quick action. Continue reading

Brain Upload

Once Johnny has installed his motion detector on the door, the brain upload can begin.

3. Building it

Johnny starts by opening his briefcase and removing various components, which he connects together into the complete upload system. Some of the parts are disguised, and the whole sequence is similar to an assassin in a thriller film assembling a gun out of harmless looking pieces.


It looks strange today to see a computer system with so many external devices connected by cables. We’ve become accustomed to one piece computing devices with integrated functionality, and keyboards, mice, cameras, printers, and headphones that connect wirelessly.

Cables and other connections are not always considered as interfaces, but “all parts of a thing which enable its use” is the definition according to Chris. In the early to mid 1990s most computer users were well aware of the potential for confusion and frustration in such interfaces. A personal computer could have connections to a monitor, keyboard, mouse, modem, CD drive, and joystick – and every single device would use a different type of cable. USB, while not perfect, is one of the greatest ever improvements in user interfaces. Continue reading

The Memory Doubler

In Beijing, Johnny steps into a hotel lift and pulls a small package out of his pocket. He unwraps it to reveal the “Pemex MemDoubler”.


Johnny extends the cable from the device and plugs it into the implant in his head. The socket glows red once the connection is made.


Continue reading