before we get into the Kimoyo beads, or the Cape Shields, or the remote driving systems…
before I have to dismiss these interactions as “a wizard did it” style non-designs…
before I review other brain-computer interfaces in other shows…
…I wanted to check on the state of the art of brain-computer interfaces (or BCIs) and see how our understanding has advanced since I wrote the Brain interface chapter in the book, back in the halcyon days of 2012.
Note that I am deliberately avoiding the tech side of this question. I’m not going to talk about EEG, PET, MRI, and fMRI. (Though they’re linked in case you want to learn more.) Modern BCI technologies are evolving too rapidly to bother with an overview of them. They’ll change in the real world by the time I press “publish,” much less by the time you read this. And sci-fi tech is most often a black box anyway. But the human part of the human-computer interaction model changes much more slowly. We can look to the brain as a relatively unalterable component of the BCI question, leading us to two believability questions of sci-fi BCI.
How can people express intent using their brains?
How do we prevent accidental activation using BCI?
Let’s discuss each.
1. How can people express intent using their brains?
In the see-think-do loop of human-computer interaction…
See (perceive) has been a subject of visual, industrial, and auditory design.
Think has been a matter of human cognition as informed by system interaction and content design.
Do has long been a matter of some muscular movement that the system can detect, to start its matching input-process-output loop. Tap a button. Move a mouse. Touch a screen. Focus on something with your eyes. Hold your breath. These are all ways of “doing” with muscles.
But the first promise of BCI is to let that doing part happen with your brain. The brain isn’t a muscle, so what actions are BCI users able to take in their heads to signal to a BCI system what they want it to do? The answer to this question is partly physiological, about the way the brain changes as it goes about its thinking business.
Our brains are a dense network of bioelectric signals, chemicals, and blood flow. But it’s not chaos. It’s organized. It’s locally functionalized, meaning that certain parts of the brain are predictably activated when we think about certain things. But it’s not like the Christmas lights in Stranger Things, with one part lighting up discretely at a time. It’s more like an animated proportional symbol map, with lots of places lighting up at the same time to different degrees.
The sizes and shapes of what’s lighting up may vary slightly between people, but basic maps of healthy, undamaged brains will be similar to one another. A great deal of work has gone into mapping these functional areas, with researchers showing subjects lots of stimuli and noting what areas of the brain light up. Test enough of these subjects and you can build a pretty good functional map of concepts. Thereafter, you can take a “picture” of the brain and cross-reference your maps to reverse-engineer what is being thought.
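That cross-referencing step can be sketched as a toy nearest-neighbor lookup. This is strictly illustrative: the concept names, the four “regions,” and every number below are invented, and real decoding involves far higher-dimensional data and statistical models.

```python
import math

# Hypothetical functional "map": each concept is associated with a pattern
# of activation levels across a few brain regions (all values invented).
CONCEPT_MAP = {
    "light": [0.9, 0.1, 0.3, 0.0],
    "truck": [0.2, 0.8, 0.1, 0.4],
    "water": [0.1, 0.2, 0.9, 0.3],
}

def decode(observed):
    """Return the concept whose stored pattern is closest (by Euclidean
    distance) to the observed activation snapshot."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CONCEPT_MAP, key=lambda c: dist(CONCEPT_MAP[c], observed))

# A noisy snapshot that most resembles the stored "truck" pattern:
print(decode([0.25, 0.7, 0.15, 0.35]))  # truck
```

The point of the sketch is just that once a map exists, a new “picture” of the brain becomes a lookup problem: find the stored pattern it most resembles.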
Right now those pictures are pretty crude and slow, but so were the first actual photographs in the world. In 20–50 years, we may be able to wear baseball caps that provide much higher-resolution, real-time input of the concepts being thought. In the far future (or, say, the alternate history of the MCU) it is conceivable to read these things from a distance. (Though there are significant ethical questions involved in such a technology, this post is focused on questions of viability and interaction.)
Similarly, the brain maps we have cover only a small percentage of an average adult vocabulary. Jack Gallant’s semantic map viewer (pictured and linked above) shows the maps for about 140 concepts, and estimates of the average active vocabulary run around 20,000 words, so we’re looking at roughly a tenth of a tenth of what we can imagine (not even counting the infinite composability of language). But in the future we will not only have more concepts mapped, more confidently, but we will also have idiographs for each individual, like the personal dictionary in your smartphone.
All this is to say that our extant real-world technology confirms that thoughts are a believable input for a system. This includes linguistic inputs like “Turn on the light” and “activate the vibranium sand table” and “Sincerely, Chris,” and even imagining the desired change, like a light going from dark to light. It might even include subconscious thoughts that have yet to be formed into words.
2. How do we prevent accidental activation?
But we know from personal experience that we don’t want all our thoughts to be acted on. Take, for example, the thoughts you have when you’re feeling hangry, or snarky, or dealing with a jerk-in-authority. Or those texts and emails that you’ve composed in the heat of the moment but wisely deleted before they could get you in trouble.
If a speculative BCI is being read by a general artificial intelligence, it can manage that just like a smart human partner would.
He is composing a blog post, reasons the AGI, so I will just disregard his thought that he needs to pee.
And if there’s any doubt, an AGI can ask. “Did you intend me to include the bit about pee in the post?” Me: “Certainly not. Also BRB.” (Readers following the Black Panther reviews will note that AGI is available to Wakandans in the form of Griot.)
If AGI is unavailable to the diegesis (and it would significantly change any diegesis of which it is a part) then we need some way to indicate when a thought is intended as input and when it isn’t. Having that be some mode of thought feels complicated and error-prone, like when programmers have to write regular expressions that escape escape characters. Better, I think, is to use some secondary channel, like a bodily interaction. Touch forefinger and pinky together, for instance, and the computer understands you intend your thoughts as input.
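The secondary-channel idea can be sketched as a simple gate: decoded thoughts are only acted on while the deliberate bodily signal is held. A minimal sketch, with the pinch gesture and all event names invented for illustration:

```python
def gate_thoughts(events):
    """events: a sequence of ("pinch", bool) or ("thought", str) tuples.
    Returns only the thoughts that arrived while the pinch was held."""
    intended = []
    pinching = False
    for kind, value in events:
        if kind == "pinch":
            pinching = value          # gesture opens/closes the gate
        elif kind == "thought" and pinching:
            intended.append(value)    # deliberate input
    return intended

stream = [
    ("thought", "I need to pee"),      # idle musing: ignored
    ("pinch", True),
    ("thought", "turn on the light"),  # gated in: acted on
    ("pinch", False),
    ("thought", "what a jerk"),        # ignored again
]
print(gate_thoughts(stream))  # ['turn on the light']
```

The design choice here is the same push-to-talk pattern radio operators use: the cheap, unambiguous muscular channel arbitrates the rich, noisy mental one.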
So, for any BCI that appears in sci-fi, we would want to look for the presence or absence of AGI as a reasonableness interpreter, and, barring that, for some alternate-channel mechanism for indicating deliberateness. We would also hope to see some feedback and correction loops to understand the nuances of the edge-case interactions, but these are rare in sci-fi.
Even more future-full
This all points to the question of what seeing/perceiving via a BCI might be. A simple example might be a disembodied voice that only the user can hear.
A woman walks alone at night. Lost in thoughts, she hears her AI whisper to her thoughts, “Ada, be aware that a man has just left a shadowy doorstep and is following, half a block behind you. Shall I initialize your shock shoes?”
What other than language can be written to the brain in the far future? Images? Movies? Ideas? A suspicion? A compulsion? A hunch? How will people know what are their own thoughts and what has been placed there from the outside? I look forward to the stories and shows that illustrate new ideas, and warn us of the dark pitfalls.
All of these build on the given that vibranium is a very powerful substance and that Wakanda’s scientists have managed to gain a very, very sophisticated control over it.
In the Talon
This table is about a meter square, and raised off the floor to around knee height. As Okoye and T’Challa approach the traffickers in the Sambisa Forest, T’Challa approaches the table and it springs to life, showing him a real-time model of the traffickers’ vehicle train. T’Challa picks up the model of the small transport truck and, with a finger, wipes off its roof, revealing that there are over a dozen people huddled within. One of the figures glows amber. (It’s Nakia.) He places the truck back into the display, and the display collapses back to inert sand.
A quick critique of this interaction. The sand highlights Nakia for T’Challa, but why did it wait for him to find her truck and wipe off the top of it to look inside? It knew his goal (find Nakia), could clearly conduct a scan into the vehicle, and understood the context (she’s in one of those trucks), so it should not have waited for him to pick up each vehicle and scrape off its roof to see which one she was in. The interface should have drawn his attention to the truck it knew she was in. This is a “stoic guru” mistake that I’ve critiqued before. You know: the computer knows all, but only tells you when you ask it. It would be much more sensible for the transport truck to be glowing from the moment the table goes live, as in the comp below.
Otherwise, this is a good high-tech use of the sand table in the more common sense of the term: a 3-dimensional surface for understanding a theatre of conflict. It doesn’t really help him run through scenarios, testing various tactics, but T’Challa is a warrior king; he can do all that in his head.
The interaction also nicely blurs the line between display and gestural interactive tool, in the same way that the Prometheus astrometrics display did. Like that other example, it would be useful for the display to distinguish when it is representing reality, and when the display is being interrupted or modified. Also, T’Challa is nice enough to put the truck back where it “belongs,” but a design would also need to handle how to respond when T’Challa put the truck back in the wrong place, or, say, crushed the truck model with his hand in fury.
The largest table we see in the movie is in Shuri’s lab. After Black Panther challenges Killmonger and engages in battle outside the capital city, Shuri, Nakia, and Agent Ross rush down to the lab. As they approach an edge-lit hexagonal table, the vibranium sand lowers to reveal 3D-printed armor and weaponry for Shuri and Nakia to join the fight. (Though it’s not like modern 3D printing: these are powered weapons and kimoyo beads, items with very sophisticated functionality.)
Shuri outfits Ross with kimoyo beads from the print and takes off to join the fight. In the lab, the table creates a seat for Ross to remote-pilot the Royal Talon. Up on the flight deck, Shuri throws a control bead onto the Talon, and an AI in the lab named Griot announces to Agent Ross, “Remote piloting system activated.” (Hey, Trevor Noah, we hear you there!)
A volumetric projection of the Talon appears around the seat, including a 360° display just beyond the windshield that gives him a very immersive remote flying experience. We hear Shuri’s voice explain to Ross, “I made it American style for you. Get in!”
Ross sits down, grabs joystick controls, and begins remote-chasing down the cargo ships that are carrying munitions to Killmonger’s War Dogs around the world. (The piloting controls and HUD for Ross are a separate issue, and will be handled in their own post.)
The moment that Ross pilots the Talon through the last cargo ship, the volumetric projection disappears and the piloting seat returns to sand, ungraciously plopping Ross down onto the floor of the lab.
It is in this shot that we realize that the dark tiles of the lab’s floor are all recessed vibranium sand tables. I can count seven in the shot. So the lab is full of them.
Let’s talk for a bit about the display choices. Vibranium can change to display any color and a shape down to a fine level of detail. See the screen cap below for an example of perfectly lifelike (if scaled) representation.
So why would it be designed so that in most cases, the display is sparkly and black like black tourmaline? Wouldn’t the truck that T’Challa picks up be most useful if it was photographically rendered? Wouldn’t the remote piloting chair be more comfortable if it had pleather- and silicone-like surfaces?
Extradiegetically, I understand the reason is because art direction. We want Wakandan tech to be visibly different than other tech in the MCU, and having it look like vibranium dust ties it back to that key plot element.
But, per the stance of this blog, I try to look for a diegetic reason. It might be a deliberate reminder of the resource on which their technological fortunes are built. And as the Okoye VP above shows, they aren’t purists about it. When detail is needed, it’s included. So perhaps this is it. That implies a great deal of sophistication on the part of the displays to know when photorealism is needed and when it is not, but the presence of Griot there tells us that they have something approaching general AI.
So, just like I had to do for the Royal Talon, I have to throw my hands up about reviewing the interactions with the sand tables, because we don’t see the interactions that would give these results.
How were the mission goals communicated to the Royal Talon table? Is it programmed to activate when someone approaches it, or did T’Challa issue a mental command? How did Shuri specify those weapons and that armor? What did she do to make the ship “American style” for Ross? Is that a template? Was it Griot’s interpretation of her intention? Why did the remote piloting seat vanish the moment the mission was complete? Was this something Shuri set up in advance, or Griot’s way of telling Agent Ross to GTFO for his own safety? How does someone in the lab instruct a floor tile to leap up and become a table and do stuff? It’s almost certainly via mental commands through the kimoyo beads, but that’s conjecture. The film really provides little evidence.
On the one hand, this is appropriate for us mere non-Wakandans observing the most technologically advanced society on earth. Much of it would feel like inexplicable magic to us.
On the other, sci-fi routinely introduces us to advanced technologies, and doesn’t always eschew the explanatory interactions, so the absence is notable here. It’s magic.
Black Lives Matter
Each post in the Black Panther review is followed by actions that you can take to support black lives.
In the last post we grieved Chadwick Boseman’s passing. This week we’re grieving the loss of Ruth Bader Ginsburg. May her memory be a blessing. With her loss, the GOP is ratcheting up its outrageous hypocrisy by reversing a precedent that they themselves established when Obama was president. The “Moscow Mitch Rule” (oh, oops, sorry) “McConnell Rule” was that new Justices should not be appointed within a year of a general election, so the people’s voice can be taken into account. Of course, the bastards are just ignoring that now and trying to ram through one of their own before election day. This Justice will certainly be a conservative, and we know with this administration that means reactionary, loyal to tiny-hand Twittler, and racist as a Jim Crow law.
There are a few arrows in citizen’s quivers to stop this. One is to convince at least 4 Republican Senators to reject this outright hypocrisy, put country over party, and adhere to the McConnell rule.
To help put pressure where it might work, you can leave voicemails with Republican Senators who may be mulling whether to put country over party. Those 6 Senators’ names and numbers are below. Here’s a script for your message:
Hello, my name is ______. In 2016, Mitch McConnell created the principle of not confirming a Supreme Court Justice in an election year until after the next inauguration. For the legitimacy of the Court in the eyes of the people, I’m asking Senator ________ to uphold that principle by refusing to confirm a new Justice until after a new President is installed. Thank you.
Lisa Murkowski, Alaska; (202) 224-6665
Mitt Romney, Utah: (202) 224-5251
Susan Collins, Maine: (202) 224-2523
Martha McSally, Arizona: (202) 224-2235
Cory Gardner, Colorado: (202) 224-5941
Chuck Grassley, Iowa: (202) 224-3744
I’ve made my calls and left my messages. Can you do the same to stop the hypocritical Trumpian power grab that would tip the Supreme Court for generations?
Since my last post, news broke that Chadwick Boseman has passed away after a four-year battle with cancer. He kept his struggles private, so the news was sudden and hard-hitting. The fandom is still reeling. Black people, especially, have lost a powerful, inspirational figure. The world has also lost a courageous and talented young actor. Rise in Power, Mr. Boseman. Thank you for your integrity, bearing, and strength.
Black Panther’s airship is a triangular vertical-takeoff-and-landing vehicle called the Royal Talon. We see its piloting interface twice in the film.
The first time is near the beginning of the movie. Okoye and T’Challa are flying at night over the Sambisa forest in Nigeria. Okoye sits in the pilot’s seat in a meditative posture, facing a large forward-facing bridge window with a heads up display. A horseshoe-shaped shelf around her is filled with unactivated vibranium sand. Around her left wrist, her kimoyo beads glow amber, projecting a volumetric display around her forearm.
She announces to T’Challa, “My prince, we are coming up on them now.” As she disengages from the interface, retracting her hands from the pose, the kimoyo projection shifts and shrinks. (See more detail in the video clip, below.)
The second time we see it is when they pick up Nakia and save the kidnapped girls. On their way back to Wakanda we see Okoye again in the pilot’s seat. No new interactions are seen in this scene though we linger on the shot from behind, with its glowing seatback looking like some high-tech spine.
Now, these brief glimpses don’t give a review a lot to go on. But for the sake of completeness, let’s talk about that volumetric projection around her wrist. I note that it is a lovely echo of Dr. Strange’s interface for controlling the time stone (that is, the Eye of Agamotto).
Wrist projections are going to be all the rage at the next Snap, I predict.
But we never really see Okoye look at this VP or use it. Cross-referencing the Wakandan alphabet, those five symbols at the top translate to 1 2 K R I, which doesn’t tell us much. (It doesn’t match the letters seen on the HUD.) It might be a visual do-not-disturb signal to onlookers, but if there’s other meaning that the letters and petals are meant to convey to Okoye, I can’t figure it out. At worst, I think having the wrist movements of one hand emphasized in your peripheral vision with a glowing display is a dangerous distraction from piloting. Her eyes should be on the “road” ahead of her.
Similarly, we never get a good look at the HUD, or see Okoye interact with it, so I’ve got little to offer other than a mild critique that it looks full of pointless ornamental lines, many of which would obscure things in her peripheral vision, which is where humans need the most help detecting things other than motion. But modern sci-fi interfaces generally (and the MCU in particular) are in a baroque period, and this is partly how audiences recognize sci-fi-ness.
I also think that requiring a pilot to maintain full lotus to pilot is a little much, but certainly, if there’s anyone who can handle it, it’s the leader of the Dora Milaje.
One remarkable thing to note is that this is the first brain-input piloting interface in the survey. Okoye thinks what she wants the ship to do, and it does it. I expect, given what we know about kimoyo beads in Wakanda (more on these in a later post), what’s happening is she is sending thoughts to the bracelet, and the beads are conveying the instructions to the ship. As a way to show Okoye’s self-discipline and Wakanda’s incredible technological advancement, this is awesome.
Unfortunately, I don’t have good models for evaluating this interaction. And I have a lot of questions. As with gestural interfaces, how does she avoid a distracted thought from affecting the ship? Why does she not need a tunnel-in-the-sky assist? Is she imagining what the ship should do, or a route, or something more abstract, like her goals? How does the ship grant her its field awareness for a feedback loop? When does the vibranium dashboard get activated? How does it assist her? How does she hand things off to the autopilot? How does she take it back? Since we don’t have good models, and it all happens invisibly, we’ll have to let these questions lie. But that’s part of us, from our less-advanced viewpoint, having to marvel at this highly-advanced culture from the outside.
Black Health Matters
Each post in the Black Panther review is followed by actions that you can take to support black lives.
Thinking back to the terrible loss of Boseman: Fuck cancer. (And not to imply that his death was affected by this, but also:) Fuck the racism that leads to worse medical outcomes for black people.
One thing you can do is to be aware of the diseases that disproportionately affect black people (diabetes, asthma, lung scarring, strokes, high blood pressure, and cancer) and be aware that no small part of these poorer outcomes is racism, systemic and individual. Listen to Dorothy Roberts’ TED talk, calling for an end to race-based medicine.
If you are black, in Boseman’s memory, get screened for cancer as often as your doctor recommends it. If you think you cannot afford it and you are in the USA, this CDC website can help you determine your eligibility for free or low-cost screening: https://www.cdc.gov/cancer/nbccedp/screenings.htm. If you live elsewhere, you almost certainly have a better healthcare system than we do, but a quick search should tell you your options.
Cancer treatment is equally successful for all races. Yet black men have a 40% higher cancer death rate than white men and black women have a 20% higher cancer death rate than white women. Your best bet is to detect it early and get therapy started as soon as possible. We can’t always win that fight, but better to try than to find out when it’s too late to intervene. Your health matters. Your life matters.
The suit that the Black Panther wears is critical to success. At the beginning of the movie, this is “just” a skintight bulletproof suit with homages to its namesake. But, after T’Challa is enthroned, Shuri takes him to her lab and outfits him with a new one with some nifty new features. This write-up is about Shuri’s 2.0 Panther Suit.
At the demonstration of the new suit, Shuri first takes a moment to hold up a bracelet of black Kimoyo beads (more on these in a later post) to his neck. With a bubbly computer sound, the glyphs on the beads begin to glow vibranium-purple, projecting two particular symbols on his neck. (The one that looks kind of like a reflective A, and the other that looks like a ligature of a T and a U.)
This is done without explanation, so we have to make some assumptions here, which is always shaky ground for critique.
I think she’s authorizing him to use the suit. At first I thought the interaction was her “pairing” him with the suit, but I can’t imagine that the bead would need to project something onto his skin to read his identity or DNA. So my updated guess is that this is a dermal mark that the suit will check for, like the Wakandan tattoos, with an “intra-skin scan,” like the HAN/BAN concepts from the early aughts. This would enable her to authorize many people, which is, perhaps, not as secure.
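Under that reading, the protocol is token-based: the bead writes a mark, and the suit later scans for it. A minimal sketch of the logic, with every identifier invented (the film shows none of this machinery):

```python
# Marks the bead has written to someone's skin and thereby authorized.
AUTHORIZED_MARKS = set()

def project_mark(wearer):
    """The bead projects a dermal mark and records it as authorized."""
    mark = f"panther-mark:{wearer}"  # stands in for the projected glyphs
    AUTHORIZED_MARKS.add(mark)
    return mark

def suit_activates(scanned_mark):
    """The suit's intra-skin scan: deploy only for a known mark."""
    return scanned_mark in AUTHORIZED_MARKS

mark = project_mark("tchalla")
print(suit_activates(mark))                  # True
print(suit_activates("panther-mark:ross"))   # False
```

Note the weakness the post flags: the check binds to the mark, not the person, so anyone who acquires a valid mark is authorized. That is exactly why this scheme is “not as secure” than true identity pairing would be.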
This interpretation is complicated by Killmonger’s wearing one of the other Black Panther suits when he usurps T’Challa. Shuri had fled with Queen Ramonda to the Jabari stronghold, so Shuri couldn’t have authorized him. Maybe some lab tech who stayed behind? If there were some hint of what’s supposed to be happening here, we would have more grounds to evaluate this interaction.
There might be some hint if there was an online reference to these particular symbols, but they are not part of the Wakandan typeface, or the Andinkra symbols, or the Nsibidi symbols that are seen elsewhere in the film. (I have emails out to the creator of the above image to see if I can learn more there. Will update if I get a response.)
When she finishes whatever the bead did, she says, “Now tell it to go on.” T’Challa looks at it intensely, and the suit spreads from the “teeth” in the necklace with an insectoid computer sound, over the course of about 6 seconds.
We see him activate the suit several more times over the course of the movie, but learn nothing new about activation beyond this. How does he mentally tell it to turn it on? I presume it’s the same mental skill he’s built up across his lifetime with kimoyo beads, but it’s not made explicit in the movie.
A fun detail is that while the suit activates in 6 seconds in the lab—far too slow for action in the field considering Shuri’s sardonic critique of the old suit (“People are shooting at me! Wait! Let me put on my helmet!”)—when T’Challa uses it in Korea, it happens in under 3. Shuri must have slowed it down to be more intelligible and impressive in the lab.
Another nifty detail that is seen but not discussed is that the nanites will also shred any clothes being worn at the time of transformation, as seen at the beginning of the chase sequence outside the casino and when Killmonger is threatened by the Dora Milaje.
T’Challa thinks the helmet off a lot over the course of the movie, even in some circumstances where I am not sure it was wise. We don’t see the mechanism. I expect it’s akin to kimoyo communication, again. He thinks it, and it’s done. (n.b. “It’s mental” is about as satisfying from a designer’s critique as “a wizard did it”, because it’s almost like a free pass, but *sigh* perfectly justifiable given precedent in the movie.)
Kinetic storage & release
At the demonstration in her lab, Shuri tells T’Challa to, “Strike it.” He performs a turning kick to the mannequin’s ribcage and it goes flying. When she fetches it from across the lab, he marvels at the purple light emanating from Nsibidi symbols that fill channels in the suit where his strike made contact. She explains “The nanites have absorbed the kinetic energy. They hold it in place for redistribution.”
He then strikes it again in the same spot, and the nanites release the energy, knocking him back across the lab, like all those nanites had become a million microscopic bigclaw snapping shrimp all acting in explosive concert. Cool as it is, this is my main critique of the suit.
First, the good. As a point of illustration of how cool their mastery of tech is, and how it works, this is pretty sweet. Even the choice of purple is smart because it is a hard color to match in older chemical film processes, and can only happen well in a modern, digital film. So extradiegetically, the color is new and showing off a bit.
Tactically though, I have to note that it broadcasts his threat level to his adversaries. Learning this might take a couple of beatings, but word would get around. Faithful readers will know we’ve looked at aposematic signaling before, but those kinds of markings are permanent. The suit changes as he gets technologically beefier. Wouldn’t people just avoid him when he was more glowy, or throw something heavy at him to force him to expend it, and then attack when he was weaker? More tactical I think to hold those cards close to the chest, and hide the glow.
Now, it would be quite useful for him to know the level of charge. Maybe some tactile feedback, like a warmth or a vibration at the medial edge of his wrists. Cinegenics win for actual movie-making, of course, but designers take note: what looks cool is not always smart design.
Not really a question for me: Can he control how much he releases? If he’s trying to just knock someone out, it would be crappy if he accidentally killed them, or expected to knock out the big bad with a punch, only to find it just tickled him like a joy buzzer. But if he already knows how to mentally activate the suit, I’m sure he has the skill down to mentally clench a bit to control the output. Wizards.
Regarding Shuri’s description, I think she’s dumbing things down for her brother. If the suit actually absorbed the kinetic energy, the suit would not have moved when he kicked it. (Right?) But let’s presume if she were talking to someone with more science background, she would have been more specific to say, “absorbed some of the kinetic energy.”
When the suit has absorbed enough kinetic energy, T’Challa can release it all at once as a concussive blast. He punches the ground to trigger it, but it’s not clear how he signals to the suit that he wants to blast everyone around him back rather than, say, create a crater, but again, I think we can assume it’s another mental command. Wizards.
To activate the suit’s claws, T’Challa quickly extends curved fingers and holds them there, and they pop out.
This gesture is awesome, and completely fit for purpose. Shaping the fingers like claws makes claws. It’s also the position in which fingers are best able to withstand the raking motion. The second-long hold ensures it’s not an accidental activation. Easy to convey, easy to remember, easy to intuit. Kids playing Black Panther on the sidewalk would probably do the same without even seeing the movie.
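That hold requirement is the classic hold-to-confirm pattern. A minimal sketch of how a suit controller might debounce the gesture, with the threshold and sensor format invented for illustration:

```python
HOLD_THRESHOLD = 1.0  # seconds; invented stand-in for the "second of hold"

def claws_out(samples):
    """samples: chronological (timestamp_sec, fingers_curved) readings.
    Claws deploy only once the curved pose has been held continuously
    for HOLD_THRESHOLD seconds, filtering out accidental flexes."""
    hold_start = None
    for t, curved in samples:
        if curved:
            if hold_start is None:
                hold_start = t               # pose just began
            if t - hold_start >= HOLD_THRESHOLD:
                return True                  # held long enough: deploy
        else:
            hold_start = None                # pose broken: reset the timer
    return False

# A brief accidental flex (0.3 s) does not deploy; a deliberate hold does.
print(claws_out([(0.0, True), (0.3, False)]))                       # False
print(claws_out([(0.0, True), (0.3, False), (0.5, True), (1.6, True)]))  # True
```

The same time-gating logic shows up everywhere from long-press touch targets to dead-man switches: duration is a cheap proxy for intent.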
We have an unanswered question about how those claws retract. Certainly the suit is smart enough to retract automatically so he doesn’t damage himself. Probably more mental commands, but whatever. I wouldn’t change a thing here.
Black Lives Matter
Each post in the Black Panther review is followed by actions that you can take to support black lives. I had something else planned for this post, but just before publication another infuriating incident has happened.
While the GOP rallies to the cause of the racist-in-chief in Charlotte, right-thinking people are taking to the streets in Kenosha, Wisconsin, to protest the unjust shooting of a black man, Jacob Blake. The video is hard to watch. Watch it. It’s especially tragic, especially infuriating, because Kenosha had gone through “police reform” initiatives in 2014 meant to prevent exactly this sort of thing. It didn’t prevent this sort of thing. As a friend of mine says, it’s almost enough to make you an abolitionist.
Information is still coming in as to what happened, but here’s the narrative we understand right now: It seems that Blake had pulled over his car to stop a fight in progress. When the police arrived, he figured they had control of the situation, and he walked back to his car to leave. That’s when officers shot him in the back multiple times, while his family—who were still waiting for him in the car—watched. He’s out of surgery and stable, but rather than some big-picture to-do tonight, please donate to support his family. They have witnessed unconscionable trauma.
Several fundraisers posted to support Blake’s family have been taken down by GoFundMe for being fake, but “Justice for Jacob Blake” remains active as of Monday evening. Please donate.
He is lead organizer in Oakland and advisory board member for the Black Speculative Arts Movement (BSAM), a national and global movement dedicated to celebrating the Black imagination and design, co-founded by Reynaldo Anderson. Dr. Brooks serves as Creative Director for BSAM Futures, which aims to promote, publish, and teach forecasting with Afrocentric perspectives in mind, using gaming and facilitation for imaginative, action-oriented thinking.
He also volunteers as a core member for outreach at Dynamicland.org, a pioneering non-profit dedicated to creating a more collaborative and dynamic computational medium for the long term. He has a passion for creating games that envision social justice futures, including black and queer liberation, from Afro-Rithms From The Future to United Queerdom and Futurescope. He and his co-game designer Eli Kosminsky are committed to articulating new future visions for traditionally underrepresented voices.
He is currently writing Imagining Queer Futures with Afrofuturism@Futureland: Circulating Afro-Queer futuretypes of Work, Culture and Racial Identity.
“As a forecaster and Afrofuturist who imagines alternative futures from a Black Diaspora perspective, I think about long-term signals that will shape the next 10 to 100 years.”
Caveat: This is definitely me reading into things. Or even, inferring something that I’d like to see in the world. But why not?
Black Panther begins with a conversation between a son and father.
Yes, my son?
Tell me a story.
The story of home.
The conversation continues with the father describing the history of Wakanda. On screen, we see a lovely sequence of shapes that illustrate the story. A meteor strikes Africa and the nearby flora and fauna change. Five hands form a pentagram version of the four-handed carry grip to represent the five tribes. The hands shift to become warring tribespeople. Their armor. Their weapons. Their animals.
All these shapes are made from vibranium sand: gunmetal-gray, sparkling particles (see the screen caps) that move and reform fluidly, with a unifying highlight of glowing blue.
Now, this opening sequence isn’t presented as an interface, or really, as anything in the diegesis at all. We understand it is exposition, for us in the audience. But what if it wasn’t? What if this is showing us a close-up of a display that illustrates in real time what the storyteller is saying? Something just over the shoulder of Baba that the child can watch?
The display would not be prerecorded, which would require the storyteller to match its fixed pace. (Presenters who have tried pecha-kucha-style presentations, 20 slides at 20 seconds each, will know how awkward this can be.) Instead, this display responds instantly to the storyteller’s tone and pace, allowing them to tailor the story to the responses of the audience: emphasizing the things that seem exciting, or heartwarming, or whatever the storyteller wants.
It’s a given in the MCU that Wakanda has developed the technology to control vibranium down to a very small scale, including levitating it, shaping it, and having it form materials of widely varying properties. Nearly all of the technology we see in the film is made from it. So, the diegetic technology for such a display is there.
It’s not that far a stretch from 2D technology we have now. The game Scribblenauts lets players type in phrases and *poof* that thing appears in the scene with your characters. I doubt it’s, like, dictionary-exhaustive, but the vast majority of things my son and I have typed in have appeared.
Black panther? Check. (Well, it’s the large cat version, anyway.)
Huge pink Cthulhu? Check.
Teeny tiny singularity? Check!
Enraged plaid Beowulf? OK. Not that. But if enough people typed it in, I have a feeling it would eventually show up.
Pipe a speech-to-text engine into something like that, skin it with vibranium sand, and you’re most of the way there.
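For fun, here’s a toy sketch of that pipeline in Python. Everything in it is invented for illustration: the asset catalog, the model names, and the naive noun-spotting stand in for a real speech-to-text engine and a Scribblenauts-style asset library.

```python
# Toy pipeline: story transcript -> noun spotting -> scene "models" to render.
# The catalog and model names are made up; a real system would stream audio
# through a speech-to-text engine and query a vast asset library.

ASSET_CATALOG = {
    "meteor": "meteor.model",
    "panther": "panther.model",
    "tribes": "tribespeople.model",
}

def extract_nouns(transcript):
    """Naive noun spotting: keep any word that appears in the asset catalog."""
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    return [w for w in words if w in ASSET_CATALOG]

def scene_for(transcript):
    """Map one line of the story to the vibranium-sand models to render."""
    return [ASSET_CATALOG[w] for w in extract_nouns(transcript)]

print(scene_for("A meteor strikes Africa, and a panther appears."))
# ['meteor.model', 'panther.model']
```

The skeleton really is that simple: transcript in, scene out. The hard parts are the interpretation and control issues discussed next.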
The interface issues for such a thing probably center around (1) interpretation and (2) control.
1. Natural language understanding of the story
I work on a natural language AI system in my day job at IBM, and disambiguation is one of the major challenges we face: teaching the systems enough about the world and language to understand what a user might have meant when they typed something like “deliveries tuesday.” But I work with real-world narrow artificial intelligence, and getting it to understand the way a human would is a massive undertaking.
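To give a flavor of what disambiguation means in practice, here’s a toy sketch. The intent names and context cues are invented, and real systems rank candidate interpretations with statistical models rather than hand-written rules.

```python
# Toy disambiguation: "deliveries tuesday" has several plausible readings,
# and a narrow NLU system must pick one. Intent names are invented.

def disambiguate(utterance, context):
    """Pick an intent for an ambiguous utterance using simple context cues."""
    words = utterance.lower().split()
    if "deliveries" in words and "tuesday" in words:
        if context.get("last_screen") == "scheduling":
            return "schedule_delivery_for_tuesday"
        if context.get("last_screen") == "reports":
            return "filter_report_to_tuesdays"
        return "show_tuesday_deliveries"  # most common reading as a fallback
    return "unknown"

print(disambiguate("deliveries tuesday", {"last_screen": "scheduling"}))
# schedule_delivery_for_tuesday
```

The toy version makes the point: the words alone underdetermine the meaning, so the system has to lean on context, and context is exactly what narrow AI is starved of.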
The MCU generally, and Wakanda in particular, has speculative, human-like Artificial General Intelligences (AGIs) like J.A.R.V.I.S., F.R.I.D.A.Y., and Ultron, so all the disambiguation problems we face in the real world become trivial. (Noting that Shuri’s AGI isn’t named in the film.)
AGI can interpret and design and render the story like some magical realtime scene painter in the same way a person would—only much, much faster—and would interpret the language in the same reasonable way. (Plus, I’m pretty sure the display has heard Baba tell this exact same myth before, so its confidence that it is displaying the right thing is even greater.)
2. Controlling the display
The other issue is controlling the display. How does Baba start and stop the rendering? How does he correct something it misunderstood, or change the styling? In the real world we have to work out escape sequences for opt-out systems (like “//” for comments in code) and wake words for opt-in systems (like “Hey, Google” or “Alexa”), but in the MCU we get to rely on the speculative AGI again. Just as a person would know to listen for cues about when to start and stop, it can reasonably interpret commands like “pause display” or “hold here,” as we would expect of a person in a tech booth overseeing a theatrical performance.
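A real-world, non-AGI version of that control scheme might look something like this sketch, where cue phrases opt the system into control mode and everything else is treated as story content. The cue list and command names are invented for illustration.

```python
# Sketch of opt-in control cues ("wake phrases") for a storytelling display.
# The cues and command names are invented for illustration.

CONTROL_CUES = {
    "pause display": "PAUSE",
    "hold here": "PAUSE",
    "resume": "RESUME",
}

def interpret(utterance):
    """Return a control command if the utterance starts with a known cue,
    or None, meaning the words are story content to be rendered."""
    u = utterance.lower().strip()
    for cue, command in CONTROL_CUES.items():
        if u.startswith(cue):
            return command
    return None

print(interpret("Pause display"))            # PAUSE
print(interpret("Five tribes went to war"))  # None
```

The brittleness is obvious: if the story itself happens to begin with “hold here,” the display pauses. That is exactly the gap the speculative AGI papers over.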
Given the AGI in Wakanda, vibranium sand, and the render-almost-anything engines in the real world, we don’t even have to add anything to the diegesis to make it work, just make a new combination of existing parts.
So while there is zero evidence that this is a diegetic interface, I’m choosing to believe it is one, and hope somebody makes something like it one day.
Black Lives Matter: A first reading list
The Black Lives Matter movement needs to be much more than education—we need action to dismantle the unjust and racist systems it brings to light—but education can be a place to start. So for this first post, let’s talk about how to educate yourself on the issues at hand. This is especially for white people, since this can be so far out of our lived experience that the claims seem at first implausible.
Here, biracial/black filmmaker Maria Breaux has given me permission to share the books she has shared with me, which form a kind of 101 syllabus. Pick one, any one, and read.
In the Marvel Cinematic Universe, Wakanda is a greatly advanced nation in Africa, which hides from the world both its true nature and the great deposit of valuable vibranium on top of which the capital city is built. The vibranium causes purple flowers to grow in underground caves, the essence of which grants an imbiber superhuman abilities. Wakandans reserve the essence for their reigning monarch, who is then called the Black Panther.
In 1992 T’Chaka, then king of Wakanda, confronts his brother, Prince N’Jobu, in an Oakland apartment, accusing him of treason and collusion with the murderous vibranium-trafficker Ulysses Klaue. N’Jobu explains his radicalization, “I observed for as long as I could. But their leaders have been assassinated, communities flooded with drugs and weapons. They are overly policed and incarcerated.” He urges T’Chaka to end Wakandan isolationism. Unmoved, the king insists N’Jobu face trial. N’Jobu draws a weapon and aims it at T’Chaka, who in self-defense kills N’Jobu.
In 2018, following the death of T’Chaka, his son Prince T’Challa is to be crowned king. At the ceremony, he is challenged to trial-by-combat by M’Baku, leader of the Jabari tribe, but T’Challa proves victorious.
Meanwhile, ex-military supervillain Killmonger is collaborating with Klaue. Together they violently liberate a Wakandan treasure made of vibranium from a British colonialist museum. Word gets back to Okoye, who is the badass general of the all-female Wakandan royal military, the Dora Milaje. She recommends they follow the lead to bring Klaue to justice, and the royal court agrees. T’Challa is outfitted with a new Black Panther suit and weapons by his science nerd sister, Shuri.
They travel to a South Korean casino to intercept the sale of the vibranium to CIA agent Everett Ross. Klaue arrives and after a gunfight and car chase, is captured. The arrest is short-lived as, after a day, Klaue is busted out of CIA custody by Killmonger and some goons. Agent Ross is wounded in the process, and taken back to Wakanda for healing.
Killmonger betrays Klaue, killing him and bringing his body to Wakanda. There, he reveals that he is son of N’Jobu, and challenges T’Challa to trial by combat. Killmonger seems to be victorious, throwing T’Challa over a waterfall. T’Challa’s family, his sweetheart Nakia, and Agent Ross flee the capital to the mountain hold of the Jabari. There M’Baku reveals that they have T’Challa in safekeeping. They heal him with the last of the vibranium flowers.
Killmonger reveals his murderous plans of revenge and global conquest to the Wakandan court. As equipment and ships are being loaded for the war, T’Challa appears, challenging Killmonger to finish the trial-by-combat. The fight involves the Border tribe fighting T’Challa out of national duty, the Jabari arriving as cavalry, Agent Ross preventing the ships from leaving Wakandan airspace by remote pilot, and Shuri and the Dora Milaje’s mutiny against the usurper. In the end, Black Panther defeats Killmonger, wounding him. Though he could be healed, Killmonger opts to die before a Wakandan sunset instead. He asks that he be buried in the ocean with the Africans who jumped from slave ships, because “they knew death was better than bondage.”
The final scene has T’Challa and Shuri visiting Oakland, where he explains that this will be the site of the first of a series of community outreach centers around the world, ending Wakandan isolationism and hiding, and promising a better, more communal future.
(The stinger has him making a similar announcement to the U.N.)
I ordinarily reserve the introductory post of a series to just a summary of its story. But I chose Black Panther to follow Blade Runner because of the surge of the Black Lives Matter movement following the unjust murder of George Floyd. Protests have died down somewhat since that tragedy, but these issues are far from resolved. Given my pandemic-slowed posting rate, I trust this will help keep these issues visible on this forum for months to come. After all, there is more work to do.
Similar to the anti-fascist series that accompanied the review of Idiocracy, the posts in these reviews will be followed by ways that you can take action against white supremacy and white nationalism, especially in the context of ending police brutality against black lives and the carceral state.
To amplify some awesome voices, I have invited several black writers and futurists to join me in the critique of Black Panther’s interfaces. It is important to note that I am paying them for their efforts, directly or to a charity of their choice. I hope you look forward as much as I do to the Black Panther reviews, and their call to continued activism.
What we think about AI largely depends on how we know AI, and most people “know” AI through science fiction. But how well do the AIs in these shows match up with the science? What kinds of stories are we telling ourselves about AI that are pure fiction? And more importantly, what stories _aren’t_ we telling ourselves that we should be? Hear Chris Noessel of scifiinterfaces.com talk about this study and rethink what you “know” about #AI.
The network of in-house, studio, and freelance professionals who work together to create the interfaces in the sci-fi shows we know, love, and critique is large, complicated, and obfuscated. It’s very hard as an outsider to find out who should get the credit for what. So, I don’t try. I rarely identify the creators of the things I critique, trusting that they know who they are. Because of all this, I’m delighted when one of the studios reaches out to me directly. That’s what happened when Territory Studio recently reached out to me regarding the Fritz Awards that went out in early February. They’d been involved with four of them! So, we set up our socially-distanced, pandemic-approved keyboards, and here are the results.
First, congratulations to Territory Studio on having worked in four of the twelve 2019 Fritz Award nominees!
Chris: What exactly did you do on each of the films?
Ad Astra (winner of Best Believable)
Marti Romances (founding partner and creative director of Territory Studio San Francisco): We were one of the screen graphic vendors on Ad Astra, and our brief was to support specific storybeats in which the screen content helped to explain or clarify complex plot points. As a speculative vision of the near future, the design brief was to create realistic-looking user interfaces that were grounded in military or scientific references and functionality, with the clean, minimal look of high-end tech firms, and simple colour palettes befitting the military nature of the mission. Our screen interfaces can be seen on consoles, monitors and tablet displays, signage and infographics on the Lunar Shuttle, moon base, rovers and Cepheus cockpit sets, among others.
The biggest challenge on the project was to maintain a balance between the minimalistic and highly technical style that the director requested and the needs of the audience to quickly and easily follow narrative points.
Men In Black International (nominated for Best Overall)
Andrew Popplestone (creative director of Territory Studio London): The art department asked us to create holotech concepts for MIB Int’l HQ in London, and we were then asked to deliver those in VFX. We worked closely with Dneg to create holographic content and interfaces for their environmental extensions (digital props) in the Lobby and Briefing Room sets. Our work included volumetric wayfinding systems, information points, desk screens and screen graphics. We also created holographic vehicle HUDs.
What I loved about our challenge on this film was to create a design aesthetic that felt part of the MIB universe yet stood on its own as the London HQ. We developed a visual language that drew upon the Art Deco influences from the set design, which helped create a certain timeless flavour, both classic and futuristic.
Spider-Man: Far from Home (winner of Best Overall)
Andrew Popplestone: Territory were invited to join the team in pre-production and we started creating visual language and screen interface concepts for Stark technology, Nick Fury technology and Beck / Mysterio technology. We went on to deliver shots for the Stark and Fury technology, including the visual language and interface for Fury Ops Centre in Prague, a holographic display sequence that Fury shows Peter Parker/Spider-Man, and all the shots relating to Stark/E.D.I.T.H. glasses tech.
The EDITH sequence was a really interesting challenge from a storytelling perspective. There was a lot of back and forth editorially with the logic and how the technology would help tell the story and that is when design for film is most rewarding.
Avengers: Endgame (winner of Audience Choice)
Marti Romances: We were also pleased to see that Endgame won Audience Choice because that was based on work we had produced for the first part, Avengers: Infinity War. We joined Marvel’s team on Infinity War and created all the technology interfaces seen in Peter Quill’s new spaceship, a more evolved version of the original Milano. We also created screen graphics for the Avengers Compound set.
We then continued to work on screen graphics for Endgame, and as Quill’s ship had been badly damaged at the end of Infinity War, we reflected this in the screens by overlaying our original UI animations with glitches signifying damage. We also updated Avengers Compound screens, created original content for Stark Labs and the 1960s lab, and created a holographic dancing robots sequence for the Karaoke set.
What did you find challenging and rewarding about the work on these films?
David Sheldon-Hicks (Founder & Executive Creative Director): It’s always a challenge to create original designs that support a director’s vision and story and actor’s performance. There are so many factors and conversations that play into the choices we make about visual language, colour palette, iconography, data visualisation, animation, 3D elements, aesthetic embellishments, story beats, how to time content to tie into actor’s performance, how to frame content to lead the audience to the focal point, and more. The reward is that our work becomes part of the storytelling and if we did it well, it feels natural and credible within the context and narrative.
Hollywood seems to make it really hard to find out who contributed what to a film. Any idea why this is?
David Sheldon-Hicks: Well, the studio controls the press strategy and their focus is naturally all about the big vision and the actors and actresses. Also, creative vendors are subject to press embargoes with restrictions on image sharing which means that it’s challenging for us to take advantage of the release window to talk about our work. Having said that, there are brilliant magazines like Cinefex that work closely with the studios to cover the making of visual effects films. So, once we are able to talk about our work we try to as much as is possible.
But Territory do more than films; we work with game developers, brands, museums and expos, and more recently with smartwatch and automobile manufacturers.
Chris: To make sure I understand that correctly, the difference is that Art Department work is all about FUI, where VFX are the creation of effects (not on screen in the diegesis) like light sabers, spaceships, and creatures? Things like that?
When we first started out, our work for the Art Department was strictly screen graphics and FUI. Screen graphics can be any motion design on a screen that gives life to a set or explains a storybeat, and FUI (Fictional User Interface) is a technology interface, for example screens for navigation, engineering, weapons systems, communications, drone feeds, etc.
VFX relates to Visual Effects (not to be confused with Special Effects, which describes physical effects on set, for example explosions or fires). VFX include full CGI environments, set extensions, CGI props, etc. Think of the giant holograms that walk through Ghost in the Shell (2017), or the holographic signage and screens seen in the Men in Black International lobby. And while some screens are shot live on-set, some of those screens may need to be adjusted in post, using a VFX pipeline. In this case we work with the Production VFX Supervisor to make sure that our design concept can be taken into post.
What, in your opinion, makes for a great fictional user interface?
David Sheldon-Hicks: That’s a good question. Different screens need to do different things. For example, there are ambient screens that help to create background ‘noise’ – think of a busy mission control and all the screens that help set the scene and create a tense atmosphere. The audience doesn’t need to see all those screens in detail, but they need to feel coherent and do that by reinforcing the overall visual language.
Then there are the hero screens that help to explain plot points. These tie into specific ‘story beats’ and are only in shot for about 3 seconds. There’s a lot that needs to come together in that moment. The FUI has to clearly communicate the narrative point, visualise and explain often complex information at a glance. If it’s a science fiction story, the screen has to convey something about that future and about its purpose; it has to feel futuristic yet be understandable at the same time. The interaction should feel credible in that world so that the audience can accept it as a natural part of the story. If it achieves all that and manages to look and feel fresh and original, I think it could be a great FUI.
Chris: What about “props”? Say, the door security in Prometheus, or the tablets in Ad Astra. Are those ambient or hero?
That depends on whether they are created specifically to support a storybeat. For example, the tablet in Ad Astra and the screen in The Martian where the audience and characters understand that Watney is still alive both help to explain context, while door furniture is often embellishment used to convey a standard of technology, and if it doesn’t work or is slow to work it can be a narrative device to build tension and drama. Because a production can be fluid and we never really know exactly which screens will end up in camera and for how long, we try to give the director and DOP (director of photography) as much flexibility as possible by taking as much care over ambient screens as we do for hero screens.
Where do you look for inspiration when designing?
David Sheldon-Hicks: Another good question! Prometheus really set our approach in that director Ridley Scott wanted us to stay away from other cinematic sci-fi references and instead draw on art, modern dance choreography and organic and marine life for our inspiration. We did this and our work took on an organic feel that felt fresh and original. It was a great insight that we continue to apply when it’s appropriate. In other situations, the design brief and references are more tightly controlled, for good reason. I’m thinking of Ad Astra and The Martian, which are both based on science fact, and Zero Dark Thirty and Wolf’s Call, which are in effect docudramas that require absolute authenticity in terms of design.
What makes for a great FUI designer?
David Sheldon-Hicks: We look for great motion designers, creatively curious team players who enjoy R&D and data visualisation, are quick learners with strong problem-solving skills.
There are so many people involved in sci-fi interfaces for blockbusters. How is consistency maintained across all the teams?
David Sheldon-Hicks: We have great producers, and a structured approach to briefings and reviews to ensure the team is on track. Also, we use Autodesk Shotgun, which helps to organise, track and share the work to required specifications and formats, and remote review and approve software which enables us to work and collaborate effectively across teams and time zones.
I understand the work is very often done at breakneck speeds. How do you create something detailed and spectacular with such short turnaround times?
David Sheldon-Hicks: Broadly speaking, the visual language is the first thing we tackle and once approved, that sets the design aesthetic across an asset package. We tend to take a modular approach that allows us to create a framework into which elements can plug and play. On big shows we look at design behaviours for elements, animations and transitions and set those up as widgets. After we have automated as much as we can, we can become more focussed on refining the specific look and feel of individual screens to tie into storybeats.
That sounds fascinating. Can you share a few images that allow us to see a design language across these phases?
I can share a few screens from The Martian that show you how the design language and all screens are developed to feel cohesive across a set.
What thing about the industry do you think most people in audiences would be surprised by?
David Sheldon-Hicks: It would probably surprise most people to know how unglamorous filmmaking is and how much thought goes into the details. It’s an incredible effort by a huge number of people, and from creative vendors it demands 24-hour delivery, instant response times, time zone challenges, early-morning starts on-set, and so on. It can be incredibly challenging and draining, but we give so much to it; like every prop and costume accessory, every detail on a screen has a purpose and is weighed up and discussed.
How do you think that FUI in cinema has evolved over the past, say, 10 years?
David Sheldon-Hicks: When we first started out in 2010, green screen dominated and it was rare to find directors who preferred to work with on-set screens. Directors like Ridley Scott (Prometheus, 2012), Kathryn Bigelow (Zero Dark Thirty, 2012) and James Gunn (Guardians of the Galaxy, 2014), who liked it for how it supports actors’ performances and contributes to ambience and lighting in-camera, used it, and eventually it gained in popularity, as is reflected in our film credits. In time, volumetric design came to suggest advanced technology and we incorporated 3D elements into our screens, as in Avengers: Age of Ultron (2015). Ultimately this led to full holographic elements, like the giant advertising holograms and 3D signage we created for Ghost in the Shell (2017). Today, briefs still vary, but we find that authenticity and credibility continue to be paramount. Whatever we make, it has to feel seamless and natural to the story world.
Where do you expect the industry might go in the future? (Acknowledging that it’s really hard to see past the COVID-19 pandemic.)
David Sheldon-Hicks: On the industry front, virtual production has come into its own by necessity and we expect to see more of that in future. We also now find that the art department and VFX are collaborating as more integrated teams, with conversations that cross production and post-production. As live-rendered CG becomes more established in production, it will be interesting to see what becomes of on-set props and screens. I suspect that some directors will continue to favour them while others will enjoy the flexibility that VFX offers. Whatever happens, we have made sure to gear up to work as the studios and directors prefer.
I know that Territory does work for “real world” clients in addition to cinema. How does your work in one domain influence work in the other?
David Sheldon-Hicks: Clients often come to us because they have seen our FUI in a Marvel film, or in The Martian or Blade Runner 2049, and they want that forward-facing look and feel to their product UI. We try, within the limitations of real-world constraints, to apply a similar creative approach to client briefs as we do to film briefs, combining high production values with a future-facing aesthetic style. Hence, our work on the Huami Amazfit smartwatch tapped into a superhero aesthetic that gave data visualisations and infographics a minimalistic look with smooth animated details and transitions between functions and screens. We applied the same approach to our work with Medivis’ innovative biotech AR application which allows doctors to use a HoloLens headset to see holographically rendered clinical images and transpose these on to a physical body to better plan surgical procedures.
Similarly, our work for automobile manufacturers applies our experience of designing HUDs and navigation screens for futuristic vehicles to next-generation cars.
Lastly, I like finishing interviews with these two questions. What’s your favorite sci-fi interface that someone else designed?
David Sheldon-Hicks: Well, I have to say the FUI in the original Star Wars film is what made me want to design film graphics. But my favourite has got to be the physical interface seen in Flight of the Navigator. There is something so human about how the technology adapts to serve the character, rather than the other way around, that it feels like all the technology we create is leading up to that moment.
What’s next for the studio?
David Sheldon-Hicks: We want to come out of the pandemic lockdown in a good place to continue our growth in London and San Francisco, and over time pursue plans to open in other locations. But in terms of projects, we’ve got a lot of exciting stuff coming up and look forward to Series 1 of Brave New World this summer and of course, No Time To Die in November.
The Black Lives Matter protests are still going strong, 14 days after George Floyd was murdered by police in Minneapolis, and thank goodness. Things have to change. It still feels a little wan to post anything to this blog about niche interests in the design of interfaces in science fiction, but I also want to wrap Blade Runner up and post an interview I’ve had waiting in the wings for a bit so I can get to a review of Black Panther (2018) to further support black visibility and Black Lives Matter issues on this platform that I have. So in the interest of that, here’s the report card for Blade Runner.
It is hard to overstate Blade Runner’s cultural impact. It is #29 of hollywoodreporter.com’s best movies of all time. Note that that is not a list of the best sci-fi of all time, but of all movies.
When we look specifically at sci-fi, Blade Runner has tons of accolades as well. Metacritic gave it a score of 84% based on 15 critics, citing “universal acclaim” across 1137 ratings. It was voted best sci-fi film by The Guardian in 2004. In 2008, Blade Runner was voted “all-time favourite science fiction film” in the readers’ poll in New Scientist (requires a subscription, but you can see what you need to in the “peek” first paragraph). The Final Cut (the version used for this review) boasts a 92% on rottentomatoes.com. In 1993 the U.S. National Film Registry selected it for preservation in the Library of Congress as being “culturally, historically, or aesthetically significant.” Adam Savage penned an entire article in 2007 for Popular Mechanics, praising the practical special effects, which still hold up. It just…it means a lot to people.
As is my usual caveat, though, this site reviews not the film, but the interfaces that appear in the film, and specifically, across three aspects.
Sci: B (3 of 4) How believable are the interfaces?
It’s not all 4th-wall-crumbling-ness. Setting aside the magical anti-gravity of the spinners, the pilot interfaces are pretty nice. The elevator is bad design, but quite believable. The VID-PHŌN is believable enough. Replicants are the primary novum in the story, so the AGI gets a kind-of genre-wide pass, and though the design is terrible, it’s the kind of stupidity we see in the world, so, sure.
Fi: B (3 of 4) How well do the interfaces inform the narrative of the story?
The Voight-Kampff Machine excels at this. It’s uncanny and unsettling, and provides nice cinegenic scenes that telegraph a broader diegesis and even feel philosophical. The Photo Inspector, on the surface, tells us that Deckard is good at his job, as morally bankrupt as it is.
The Spinners and VID-PHŌN do some heavy lifting for worldbuilding, and as functional interfaces do what they need to do, though they are not key storybeats.
But there were lots of missed opportunities. The Elevator and the VID-PHŌN could have reinforced the constant assault of advertisement. The Photo Inspector could have used an ad-hoc tangible user interface to more tightly integrate who Deckard is with how he does his work and the despair of his situation. So no full marks.
Interfaces: F (0 of 4) How well do the interfaces equip the characters to achieve their goals?
This is where the interfaces fail the worst. The Voight-Kampff Machine is, as mentioned in the title of the post, shit. Deckard’s elevator forces him to share personally identifiable information. The Front Door key cares nothing about his privacy and lacks multifactor authentication. The Spinner looks like a car but works like a VTOL aircraft. The Replicants were engineered specifically to suffer, and rebel, and infiltrate society, to no real diegetic point.
The VID-PHŌN is OK, I guess.
Most of the interfaces in the film “work” because they were scripted to work, not because they were designed to work, and that makes for very low marks.
Final Grade C (6 of 12), Matinée.
I have a special place in my heart for both great movies with faltering interfaces, and unappreciated movies with brilliant ones. Blade Runner is one of the former. But for its rich worldbuilding, its mood, and the timely themes of members of an oppressed class coming head-to-head with a murderous police force, it will always be a favorite. Don’t not watch this film because of this review. Watch it for all the other reasons.