Vibranium sand tables

A number of vibranium sand tables appear in Black Panther.

  1. The horseshoe-shaped shelf in which Okoye sits as she pilots the Royal Talon. (We never see it activated.)
  2. The small sand table in the center of the Royal Talon.
  3. The big sand tables in Shuri’s lab.

You can see the Royal Talon one in the post about piloting that craft. The other two are described below.

All of these build on the given that vibranium is a very powerful substance and that Wakanda’s scientists have managed to gain a very, very sophisticated control over it.

In the Talon

This table is about a meter square, and raised off the floor to around knee height. As Okoye and T’Challa close in on the traffickers in the Sambisa Forest, T’Challa steps up to the table and it springs to life, showing him a real-time model of the traffickers’ vehicle train. T’Challa picks up the model of the small transport truck and, with a finger, wipes off its roof, revealing that there are over a dozen people huddled within. One of the figures glows amber. (It’s Nakia.) He places the truck back into the display, and the display collapses back to inert sand.

A quick critique of this interaction: the sand highlights Nakia for T’Challa, but why did it wait for him to find her truck and wipe off its roof to look inside? It knew his goal (find Nakia), could clearly scan into the vehicles, and understood the context (she’s in one of those trucks). It should not make him pick up each vehicle and scrape off its roof to check which one she was in. The interface should have drawn his attention to the truck it knew she was in. This is a “stoic guru” mistake that I’ve critiqued before: the computer knows all, but only tells you when you ask. It would be much more sensible for the transport truck to be glowing from the moment the table goes live, as in the comp below.

Designers: Don’t wait for users to ask just the right thing at the right time.
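To make the principle concrete, here is a minimal Python sketch. Every class and field name here is hypothetical (nothing from the film specifies how the table works); it just contrasts the stoic-guru behavior we see on screen with the proactive behavior argued for above:

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    vehicle_id: str
    occupants: list  # names the table's interior scan can reveal

@dataclass
class SandTable:
    vehicles: list
    mission_targets: set  # e.g. {"Nakia"}; known before the table activates

    def highlighted_on_activation(self):
        """Proactive behavior: the moment the display goes live,
        glow any vehicle containing a mission target."""
        return [v.vehicle_id for v in self.vehicles
                if self.mission_targets & set(v.occupants)]

    def inspect(self, vehicle_id):
        """Stoic-guru behavior: reveal occupants only when the user
        picks up a model and wipes off its roof."""
        v = next(v for v in self.vehicles if v.vehicle_id == vehicle_id)
        return [(name, name in self.mission_targets) for name in v.occupants]

table = SandTable(
    vehicles=[Vehicle("lead_truck", ["driver"]),
              Vehicle("transport", ["driver", "Nakia", "captive_1"])],
    mission_targets={"Nakia"},
)

# Proactive: the transport glows immediately; no roof-wiping required.
print(table.highlighted_on_activation())  # ['transport']
```

The point of the sketch: the mission targets are known before the table activates, so the proactive highlight costs nothing extra. The `inspect` path is the interaction the film makes T’Challa perform by hand, truck by truck.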

Otherwise, this is a good high-tech use of the sand table in the more common meaning of “sand table,” which is a 3-dimensional surface for understanding a theatre of conflict. It doesn’t really help him run through scenarios and test various tactics, but T’Challa is a warrior king; he can do all that in his head.

The interaction also nicely blurs the line between display and gestural interactive tool, in the same way that the Prometheus astrometrics display did. Like that other example, it would be useful for the display to distinguish when it is representing reality and when it is being manipulated or modified. Also, T’Challa is nice enough to put the truck back where it “belongs,” but a real design would also need to handle how to respond when T’Challa put the truck back in the wrong place, or, say, crushed the truck model with his hand in fury.

In Prometheus it was an Earth, not a truck, but still focused on Africa.

Shuri’s lab

The largest table we see in the movie is in Shuri’s lab. After Black Panther challenges Killmonger and engages him in battle outside the capital city, Shuri, Nakia, and Agent Ross rush down to the lab. As they approach an edge-lit hexagonal table, the vibranium sand lowers to reveal 3D-printed armor and weaponry for Shuri and Nakia to join the fight. (Though it’s not quite modern 3D printing: these are powered weapons and kimoyo beads, items with very sophisticated functionality.)

Shuri outfits Ross with kimoyo beads from the print and takes off to join the fight. In the lab, the table creates a seat for Ross to remote-pilot the Royal Talon. Up on the flight deck, Shuri throws a control bead onto the Talon, and an AI in the lab named Griot announces to Agent Ross, “Remote piloting system activated.” (Hey, Trevor Noah, we hear you there!)

Around the seat, a volumetric projection of the Talon appears, including a 360° display just beyond the windshield that gives him a very immersive remote flying experience. We hear Shuri’s voice explain to Ross, “I made it American style for you. Get in!”

Ross sits down, grabs joystick controls, and begins remote-chasing down the cargo ships that are carrying munitions to Killmonger’s War Dogs around the world. (The piloting controls and HUD for Ross are a separate issue, and will be handled in their own post.)

The moment Ross pilots the Talon through the last cargo ship, the volumetric projection disappears and the piloting seat returns to sand, ungraciously plopping Ross down to the floor of the lab.

It is in this shot that we realize that the dark tiles of the lab’s floor are all recessed vibranium sand tables. I can count seven in the shot. So the lab is full of them.

Display material

Let’s talk for a bit about the display choices. Vibranium can change to display any color and a shape down to a fine level of detail. See the screen cap below for an example of perfectly lifelike (if scaled) representation.

This is a vibranium-powered volumetric display.
It raises the gaze matching issues we’ve seen before.

So why would it be designed so that in most cases, the display is sparkly and black like black tourmaline? Wouldn’t the truck that T’Challa picks up be most useful if it was photographically rendered? Wouldn’t the remote piloting chair be more comfortable if it had pleather- and silicone-like surfaces?

Extradiegetically, I understand the reason is art direction. We want Wakandan tech to be visibly different from other tech in the MCU, and having it look like vibranium dust ties it back to that key plot element.

But, per the stance of this blog, I try to look for a diegetic reason. It might be a deliberate reminder of the resource on which their technological fortunes are built. And as the Okoye VP above shows, they aren’t purists about it. When detail is needed, it’s included. So perhaps this is it. That implies a great deal of sophistication on the part of the displays to know when photorealism is needed and when it is not, but the presence of Griot there tells us that they have something approaching general AI.

Missing interactions

So, just like I had to do for the Royal Talon, I have to throw my hands up at reviewing interactions with the sand tables, because we never see the inputs that produce these results.

How were the mission goals communicated to the Royal Talon table? Is it programmed to activate when someone approaches it, or did T’Challa issue a mental command? How did Shuri specify those weapons and that armor? What did she do to make the ship “American style” for Ross? Is that a template? Was it Griot’s interpretation of her intention? Why did the remote piloting seat vanish the moment the mission was complete? Was this something Shuri set up in advance, or Griot’s way of telling Agent Ross to GTFO for his own safety? How does someone in the lab instruct a floor tile to leap up and become a table and do stuff? It’s almost certainly via mental commands through the kimoyo beads, but that’s conjecture. The film really provides little evidence.

On the one hand, this is appropriate for us mere non-Wakandans observing the most technologically advanced society on earth. Much of it would feel like inexplicable magic to us.

On the other, sci-fi routinely introduces us to advanced technologies, and doesn’t always eschew the explanatory interactions, so the absence is notable here. It’s magic.

Black Lives Matter

Each post in the Black Panther review is followed by actions that you can take to support black lives.

In the last post we grieved Chadwick Boseman’s passing. This week we’re grieving the loss of Ruth Bader Ginsburg. May her memory be a blessing. With her loss, the GOP is ratcheting up its outrageous hypocrisy by reversing a precedent that they themselves established when Obama was president. The “Moscow Mitch Rule” (oh, oops, sorry) “McConnell Rule” was that new Justices should not be appointed within a year of a general election, so the people’s voice can be taken into account. Of course, the bastards are just ignoring that now and trying to ram through one of their own before election day. This Justice will certainly be a conservative, and we know with this administration that means reactionary, loyal to tiny-hand Twittler, and racist as a Jim Crow law.

There are a few arrows in citizens’ quivers to stop this. One is to convince at least 4 Republican Senators to reject this outright hypocrisy, put country over party, and adhere to the McConnell rule.

Brilliant image by Jesse Duquette

To help put pressure where it might work, you can leave voicemails with Republican Senators who may be mulling whether to put country over party. Those 6 Senators’ names and numbers are below. Here’s a script for your message:

Hello, my name is ______. In 2016, Mitch McConnell created the principle of not confirming a Supreme Court Justice in an election year until after the next inauguration. For the legitimacy of the Court in the eyes of the people, I’m asking Senator ________ to uphold that principle by refusing to confirm a new Justice until after a new President is installed. Thank you.

—You, hopefully
  • Lisa Murkowski, Alaska: (202) 224-6665
  • Mitt Romney, Utah: (202) 224-5251
  • Susan Collins, Maine: (202) 224-2523
  • Martha McSally, Arizona: (202) 224-2235
  • Cory Gardner, Colorado: (202) 224-5941
  • Chuck Grassley, Iowa: (202) 224-3744

I’ve made my calls and left my messages. Can you do the same to stop the hypocritical Trumpian power grab that would tip the Supreme Court for generations?

UPDATE: Nevermind. Romney caved.

Course Optimal (for IXD)

In the prior post, I spoke about how the COURSE OPTIMAL display betrays the Starship Troopers writer’s mental model of technology as a “stoic guru,” and implored writers to shift that model to one of an “active academy.” It’s a good post (if I do say so myself). Check it out if you haven’t yet.


But this blog is ostensibly for interaction design (and also a thinly veiled résumé for my wealthtastic and fameulous future career consulting for sci-fi movies). What do the stoic guru and active academy metaphors do for us?

Is it only for strategists?

To change a writer’s metaphor is to encourage them to conceive of technology differently at a strategic level. That is, what strategic role does the technology play in its users’ lives? If you as an interaction designer have the luxury of consulting on projects at a strategic level, then this metaphor is as powerful for you as it is for the writer. Are you writing scenarios where your personas query technology? Or is the technology getting to know its user and then doing work for them? (Don’t worry, there’s plenty for interaction designers to still do.)


In-house designers are often inheriting projects where the strategy was done by someone else, a fait accompli. What if you weren’t asked about the strategic implications of the design task at hand (but you’re still thinking about them)? Here I must encourage some upstartness, some whippersnapper piss n’ vinegar. I used to work with a smaller interaction design consultancy in my day job, and even then we never let a design brief get in the way of a great idea. That is, we would solve the problem as the client framed it first, and then deliver a But Wait, There’s More second idea for consideration, if not for this project, then for a later one. Even if it can’t be acted on in the moment, it can plant a seed that germinates later.


So don’t fret if it’s not your job. Make it part of your job to send these ideas up the chain, and more than likely it will eventually become your job. Sure, design the thing, but then design the thing you want to design.

Course optimal, the Stoic Guru, and the Active Academy

After Ibanez explains that the new course she plotted for the Rodger Young (without oversight, explicit approval, or notification to superiors) is “more efficient this way,” Barcalow walks to the navigator’s chair, presses a few buttons, and the computer responds with a blinking-red Big Text Label reading “COURSE OPTIMAL” and a spinning graphic of two intersecting grids.


Yep, that’s enough for a screed, one addressed first to sci-fi writers.

A plea to sci-fi screenwriters: Change your mental model

Think about this for a minute. In the Starship Troopers universe, Barcalow can press a button to ask the computer to run some function to determine if a course is good (I’ll discuss “good” vs. “optimal” below). But if it could do that, why would it wait for the navigator to ask it after each and every possible course? Computers are built for this kind of repetition. It should not wait to be asked. It should just do it. This interaction raises the difference between two mental models of interacting with a computer: the Stoic Guru and the Active Academy.


Stoic Guru vs. Active Academy

This movie was written when computation cycles may have seemed to be a scarce resource. (Around 1997 only IBM could afford a computer and program combination to outthink Kasparov.) Even if computation cycles were scarce, navigating the ship safely would be the second most important non-combat function it could possibly do, losing out only to safekeeping its inhabitants. So I can’t see an excuse for the stoic-guru-on-the-hill model of interaction here. In this model, the guru speaks great truth, but only when asked a direct question. Otherwise it sits silently, contemplating whatever it is gurus contemplate, stoically. Computers might have started that way in the early part of the last century, but there’s no reason they should work that way today, much less by the time we’re battling space bugs between galaxies.

A better model for thinking about interaction with these kinds of problems is an active academy, where a group of learned professors is continually working on difficult questions. For a new problem—like “which of the infinite number of possible courses from point A to point B is optimal?”—they would first discuss it among themselves and provide an educated guess with caveats, then keep working on the problem afterward, contacting the querent when they found a better answer or when new information changed the answer. (As a metaphor for agentive technologies, the active academy has some conceptual problems, but it’s good enough for purposes of this article.)
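A minimal sketch of the difference, with all names hypothetical and “cost” standing in for whatever fuel-and-time measure a nav computer would minimize: the stoic guru exposes only a query, while the active academy keeps a best-so-far answer and pushes updates to the bridge as candidates arrive, whether from crew members or from its own background search.

```python
class ActiveAcademy:
    """Keeps working on 'which course is best?' and notifies the bridge
    whenever a better answer turns up. Lower cost is better."""

    def __init__(self, notify):
        self.notify = notify          # callback to the bridge crew
        self.best_course = None
        self.best_cost = float("inf")

    def consider(self, course, cost):
        """Called for every candidate course, whether a crew member
        supplied it or the background search generated it."""
        if cost < self.best_cost:
            # Compute improvement against the previous best, if there was one.
            improvement = ((self.best_cost - cost) / self.best_cost
                           if self.best_course else None)
            self.best_course, self.best_cost = course, cost
            self.notify(course, improvement)

messages = []
academy = ActiveAcademy(lambda course, imp: messages.append(
    f"New best course: {course}" +
    (f" ({imp:.0%} better than previous)" if imp is not None else "")))

academy.consider("standard route", 100.0)
academy.consider("Ibanez's route", 66.0)  # pushed to the bridge unprompted
print(messages[-1])  # New best course: Ibanez's route (34% better than previous)
```

The design point is in `notify`: the bridge never has to ask. A stoic guru would have the same `consider` logic but no callback, so the better course would sit unreported until someone pressed the button.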


Consider this model as you write scenes. Nowadays computation is rarely a scarce resource in your audience’s lives. Most processors sit idle, not living up to their full potential. Pretending computation is scarce breaks believability. If eBay can continuously keep looking on my behalf for a great deal on a Ted Baker shirt, the ship’s computer can keep looking for optimal courses on the mission’s behalf.

In this particular scene, the stoic guru has up to this point neglected to provide a crucial piece of information: the optimal path. If it knew it, why was it holding it back? And how does it know it now? “Well,” I imagine Barcalow saying as he slaps the side of the monitor, “why didn’t you tell me that the first time I asked you to navigate?” I suspect that, had the scene been written with the active academy in mind, it would not have ended up in the stupid COURSE OPTIMAL zone.

Optimal vs. more optimal than

Part of the believability problem of this particular case may come from the word “optimal,” since that word implies the best out of all possible choices.

But if it’s a stoic guru, it wouldn’t know from optimal. It would only know what you’d asked of it or supplied to it in the past: relative optimalness amongst the set of courses it had access to. If the system worked that way, the screen text should read something like “34% more optimal than previous course” or “Most optimal of supplied courses.” Either text could be accompanied by some fuigetry, below the Big Text Label, that conveys a comparison of the relevant parameters. But of course such text conveys how embarrassingly limited this would be for a computer. It shouldn’t wait for supplied courses.
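What the computer can honestly claim is easy to pin down in a sketch (a hypothetical function with assumed cost units, nothing from the film): the defensible label depends entirely on whether the search was exhaustive or merely over supplied courses.

```python
def honest_label(candidate_cost, previous_cost=None, search_complete=False):
    """Label a course the way a stoic-guru computer could honestly claim it:
    it knows only relative optimality among the courses it has been given,
    unless it has actually finished an exhaustive search."""
    if search_complete:
        return "COURSE OPTIMAL"  # defensible only after a full search
    if previous_cost is None:
        return "No prior course to compare"
    if candidate_cost >= previous_cost:
        return "No better than previous course"
    pct = (previous_cost - candidate_cost) / previous_cost
    return f"{pct:.0%} more optimal than previous course"

print(honest_label(66.0, previous_cost=100.0))
# -> 34% more optimal than previous course
```

Note that the only branch that ever yields the film’s Big Text Label is the one the stoic guru, waiting to be handed courses one at a time, can never reach.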

If it’s an active academy model, this scene would work differently. It would either have shown him the optimal course long ago, or shown him that it’s still working on the problem and that Ibanez’s is the “most optimal found.” Neither is entirely satisfying for purposes of the story.


How could this scene have gone?

We need a quick beat here to show that in fact, Ibanez is not just some cocky upstart. She really knows what’s up. An appeal to authority is a quick way to do it, but then you have to provide some reason the authority—in this case the computer—hasn’t provided that answer already.

A bigger problem than Starship Troopers

This is a perennial problem for sci-fi, and one that’s becoming more pressing as technology gets more and more powerful. Heroes need to be heroic. But how can they be heroic if computers can and do heroic things for them? What’s the hero doing? Being a heroic babysitter to a vastly powerful force? This will ultimately culminate once we get to the questions raised in Her about actual artificial intelligence.

Fortunately the navigator is not a full-blown artificial intelligence. It’s something less than A.I.: an agentive interface. And that gives us our answer. Agentive algorithms can only process what they know, and Ibanez could have been working with an algorithm that the computer didn’t know about. She’s just wrapped up school, so maybe it’s something she developed or co-developed there:

  • Barcalow turns to the nav computer and sees a label: “Custom Course: 34% more efficient than models.”
  • Um…OK…How did you find a better course than the computer could?
  • My grad project nailed the formula for gravity assist through trinary star systems. It hasn’t been published yet.

BAM. She sounds like a badass and the computer doesn’t sound like a character in a cheap sitcom.

So, writers, hopefully that model will help you not make the mistake of penning your computers to be stoic gurus. Next up, we’ll discuss this same short scene with more of a focus on interaction designers.