Caveat: This is definitely me reading into things. Or even inferring something that I’d like to see in the world. But why not?
Black Panther begins with a conversation between a son and father.
- SON
- Baba?
- FATHER
- Yes, my son?
- SON
- Tell me a story.
- FATHER
- Which one?
- SON
- The story of home.
The conversation continues with the father describing the history of Wakanda. On screen, we see a lovely sequence of shapes that illustrate the story. A meteor strikes Africa and the nearby flora and fauna change. Five hands form a pentagram version of the four-handed carry grip to represent the five tribes. The hands shift to become warring tribespeople. Their armor. Their weapons. Their animals.
All these shapes are made from vibranium sand—sparkling, gunmetal-gray particles (see the screen caps)—that move and re-form fluidly, with a unifying highlight of glowing blue.
Now, this opening sequence isn’t presented as an interface, or really, as anything in the diegesis at all. We understand it is exposition, for us in the audience. But what if it wasn’t? What if this is showing us a close up of a display that illustrates in real-time what the storyteller is saying? Something just over the shoulder of Baba that the child can watch?
The display would not be prerecorded, which would require the storyteller to match its fixed pace. (Presenters who have tried pecha-kucha-style presentations, with 20 slides at 20 seconds each, will know how awkward this can be.) Instead, this display responds instantly to the storyteller’s tone and pace, allowing them to tailor the story to the responses of the audience: emphasizing the things that seem exciting, or heartwarming, or whatever the storyteller wants.
It’s a given in the MCU that Wakanda has developed the technology to control vibranium down to a very small scale, including levitating it, shaping it, and having it form materials of widely varying properties. Nearly all of the technology we see in the film is made from it. So, the diegetic technology for such a display is there.
It’s not that far a stretch from 2D technology we have now. The game Scribblenauts lets players type in phrases and *poof* that thing appears in the scene with your characters. I doubt it’s, like, dictionary-exhaustive, but the vast majority of things my son and I have typed in have appeared.
- Black panther? Check. (Well, it’s the large cat version, anyway.)
- Huge pink Cthulhu? Check.
- Teeny tiny singularity? Check!
- Enraged plaid Beowulf? OK. Not that. But if enough people typed it in, I have a feeling it would eventually show up.
Pipe a speech-to-text engine into something like that, skin it with vibranium sand, and you’re most of the way there.
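For what it’s worth, here’s a minimal sketch of that pipeline in Python. Every name in it (transcribe_stream, extract_entities, SandDisplay) is a hypothetical stand-in rather than a real library; the point is only the shape of the loop: speech comes in, renderable things come out, and the display keeps pace with the speaker.

```python
# A minimal sketch of the speech-to-scene pipeline. All of the pieces here
# (transcribe_stream, extract_entities, SandDisplay) are hypothetical
# placeholders, not real libraries.

RENDERABLE = {"meteor", "vibranium", "tribes", "panther", "war"}

def transcribe_stream(source):
    """Placeholder for a streaming speech-to-text engine."""
    yield from source

def extract_entities(fragment):
    """Placeholder entity extraction: pick out the nouns worth rendering."""
    return [w for w in fragment.lower().split() if w.strip(".,") in RENDERABLE]

class SandDisplay:
    """Placeholder for the render-almost-anything engine, skinned in vibranium sand."""
    def show(self, entity):
        print(f"[sand re-forms into: {entity}]")

def storytelling_loop(source):
    display = SandDisplay()
    for fragment in transcribe_stream(source):
        for entity in extract_entities(fragment):
            display.show(entity)  # the display keeps pace with the speaker

# Feeding it a canned transcript in place of a live microphone:
storytelling_loop(["A meteor of pure vibranium struck Africa", "and five tribes went to war"])
```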
The interface issues for such a thing probably center on (1) interpretation and (2) control.
1. Natural language understanding of the story
I work on a natural language AI system in my day job at IBM, and disambiguation is one of the major challenges we face: teaching the system enough about the world and language to understand what a user might have meant when they type something like “deliveries tuesday.” But I work with real-world narrow artificial intelligence, and getting it to understand the way a human understands is a massive undertaking.
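To make the problem concrete, here is a toy sketch of what disambiguation looks like: enumerate a few plausible readings of “deliveries tuesday” and score them against whatever conversational context is available. The readings, the scoring, and the names are all invented for illustration; this is not how any real system at IBM works.

```python
# A toy illustration of the disambiguation problem (not how any real system
# at IBM works): enumerate plausible readings of an ambiguous utterance and
# score each against whatever context is at hand.

from dataclasses import dataclass

@dataclass
class Reading:
    paraphrase: str
    score: float = 0.0  # how well this reading fits the conversation so far

def interpret(utterance, context):
    """Return the best-scoring reading of an ambiguous utterance."""
    candidates = [
        Reading("show me the deliveries scheduled for Tuesday"),
        Reading("schedule a new delivery for Tuesday"),
        Reading("were there any deliveries last Tuesday?"),
    ]
    # Crude context scoring: count how many contextual cue words appear
    # in each paraphrase. A real system would use far richer signals.
    for reading in candidates:
        reading.score = sum(1 for cue in context if cue in reading.paraphrase)
    return max(candidates, key=lambda r: r.score)

best = interpret("deliveries tuesday", context={"show", "scheduled"})
print(best.paraphrase)  # -> show me the deliveries scheduled for Tuesday
```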
The MCU generally, and Wakanda in particular, has speculative, human-like Artificial General Intelligences (AGIs) like J.A.R.V.I.S., F.R.I.D.A.Y., and Ultron, so the disambiguation problems we face in the real world are trivial there. (Noting that Shuri’s AGI isn’t named in the film.)
An AGI could interpret the language the same reasonable way a person would, then design and render the story like some magical realtime scene painter, only much, much faster. (Plus, I’m pretty sure the display has heard Baba tell this exact same myth before, so its confidence that it is displaying the right thing is even greater.)
2. Controlling the display
The other issue is controlling the display. How does Baba start and stop the rendering? How does he correct something it has misunderstood, or change the styling? In the real world we have to work out escape sequences for opt-out systems (like “//” for comments in code) and wake words for opt-in systems (like “Hey, Google” or “Alexa”), but in the MCU we get to rely on the speculative AGI again. Just as a person would know to listen for cues about when to start and stop, it can reasonably interpret commands like “pause display” or “hold here,” as we would expect of a person in a tech booth overseeing a theatrical performance.
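As a thought experiment, here is a minimal sketch of that opt-in control layer: a tiny, made-up list of command cues gets routed to display controls, and everything else is treated as story to render. Wakanda’s AGI would obviously do this with far more nuance.

```python
# A minimal sketch of the "wake word / command cue" problem. The phrase list
# and the routing are invented for illustration only.

CONTROL_PHRASES = {
    "pause display": "pause",
    "hold here": "pause",
    "resume": "resume",
    "start over": "restart",
}

def route(utterance):
    """Return ('control', action) for command cues, ('story', text) for everything else."""
    normalized = utterance.lower().strip(" .!?")
    if normalized in CONTROL_PHRASES:
        return ("control", CONTROL_PHRASES[normalized])
    return ("story", utterance)

for line in ["Long ago, a meteor struck Africa.", "Hold here.", "Resume"]:
    print(route(line))
# -> ('story', 'Long ago, a meteor struck Africa.')
#    ('control', 'pause')
#    ('control', 'resume')
```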
***
Given the AGI in Wakanda, vibranium sand, and the render-almost-anything engines in the real world, we don’t even have to add anything to the diegesis to make it work; we just need a new combination of existing parts.
So while there is zero evidence that this is a diegetic interface, I’m choosing to believe it is one, and hope somebody makes something like it one day.
Black Lives Matter: A first reading list
The Black Lives Matter movement needs to be much more than education—we need action to dismantle the unjust and racist systems it brings to light—but education can be a first place to start. So for this first post, let’s talk about how to educate yourself on the issues at hand. This is especially for white people, since this can be so far out of our lived experience that the claims can seem implausible at first.
Here, biracial/black filmmaker Maria Breaux has given me permission to share the books she shared with me, which are a kind of 101 syllabus. Pick one, any one, and read.
- The New Jim Crow by Michelle Alexander
- Stamped from the Beginning by Ibram X. Kendi
- How to Be an Antiracist, also by Ibram X. Kendi
- So You Want to Talk about Race by Ijeoma Oluo
- Just Mercy by Bryan Stevenson
In full disclosure, I have not read any of these yet. (I’m a notoriously slow reader.) I’m on this journey, too. I’m starting with The New Jim Crow, because it seems the most painful to read.
