Panther Glove Guns

As a rule I don’t review lethal weapons on scifiinterfaces.com. The Panther Glove Guns appear to be remote-bludgeoning weapons, so this one kind of sneaks by. Also, I’ll confess in advance that there’s not a lot here that affords critique.

We first see the glove guns in the 3D printer output, alongside the kimoyo beads for Agent Ross and the Dora Milaje outfit for Nakia. They are thick weapons that fit over Shuri’s hands and wrists. I imagine they would be very useful for blocking blades and even disarming an opponent in melee combat, but we don’t see them used this way.

The next time we see them, Shuri is activating them (though we don’t see how). The panther heads thrust forward, their mouths open wide, and the “necks” glow a hot blue. When the door before her opens, she immediately raises them at the guards (who are loyal to the usurper Killmonger) and fires.

A light-blue beam shoots out of the mouths of the weapons, knocking the guards off the platform. Interestingly, one guard is lifted up and thrown to his 4 o’clock. The other is lifted up and thrown to his 7 o’clock. It’s not clear how Shuri instructs the weapons to have different and particular knock-down effects. But we’ve seen all over Black Panther that brain-computer interfaces (BCIs) are a thing, so it’s diegetically possible she’s simply imagining where she wants them to be thrown, and then pulling a trigger or clenching her fist around a rod or just thinking “BAM!” to activate. The force-bolt strikes each guard right where it needs to so that, like a billiard ball, he gets knocked in the desired direction. As with all(?) brain-computer interfaces, there is not an interaction to critique.

After she dispatches the two guards, still wearing the gloves, she throws a control bead onto the Talon. The scene is fast and blurry, so it’s unclear how she holds and releases the bead while wearing the glove. Was it in the panther’s jaw the whole time? Could be another BCI, of course: she just thought about where she wanted it, flung her arm, and let the AI decide when to release it for perfect targeting. The Talon is large and she doesn’t seem to need a great deal of accuracy with the bead, but for more precise operations, AI targeting would make more sense than, say, having the panther heads disintegrate on command to free her hands.

Later, after Killmonger dispatches the Dora Milaje, Shuri and Nakia confront him by themselves. Nakia gets in a few good hits, but is thrown from the walkway. Shuri throws some more bolts his way, though he doesn’t appear to even notice. I note that the panther gloves would be very difficult to aim, since there’s no continuous beam providing feedback and she doesn’t have a gun sight to help her. So, again—and I’m sorry because it feels like cheating—I have to fall back on an AI assist here. Otherwise it doesn’t make sense.

Then Shuri switches from one blast at a time to a continuous beam. It seems to be working, as Killmonger kneels under the onslaught.

This is working! How can I eff it up?

But then for some reason she—with a projectile weapon that is actively subduing the enemy and keeping her safe at a distance—decides to close the distance, allowing Killmonger to knock the glove guns aside with a spear tip, free himself, and destroy the gloves with a clutch of his Panther claws. I mean, I get that she was furious, but I expected better tactics from the chief nerd of Wakanda. Thereafter, the gloves spark when she tries to fire them. So ends this print of the Panther Glove Guns.

As with all combat gear, the glow looks cool, but we don’t want coolness helping an enemy target the weapon. So if it were possible to suppress the glow, that would be advisable. It might glow just for the intimidation factor, but for a projectile weapon that seems strange.

The panther-head shapes remind an opponent that she is royalty (note that no other Wakandan combatants have ranged weapons) and fighting in Bast’s name, which, if you’re in the business of theocratic warfare, is fine, I suppose.

It’s worked so well in the past. More on this aspect later.

So, if you buy the brain-computer interface interpretation, the AI targeting assist, and the theocratic design, these are fine, with the cinegenic exception of the attention-drawing glow.


Black History Matters

Each post in the Black Panther review is followed by actions that you can take to support black lives.

When the Watchmen series opened with the Tulsa Race Massacre, many people were shocked to learn that the event was not fiction, a reminder of just how much of black history is erased and whitewashed for the comfort of white supremacy (and fuck that). Today marks the beginning of Black History Month, and it’s a good opportunity to look back and (re)learn the heroic figures and stories of both terror and triumph that fill black struggles to have their citizenship and lives fully recognized.

Library of Congress, American National Red Cross Photograph Collection

There are lots of events across the month. The African American History Month site is a collaboration of several government organizations (and it feels so much safer to share such a thing now that the explicitly racist administration is out of office and facing a second impeachment):

  • The Library of Congress
  • National Archives and Records Administration
  • National Endowment for the Humanities
  • National Gallery of Art
  • National Park Service
  • Smithsonian Institution
  • United States Holocaust Memorial Museum

The site, https://www.africanamericanhistorymonth.gov/, has a number of resources for you, including images, video, and a calendar of events.

Today we can take a moment to remember and honor the Greensboro Four.

On this day, February 1, 1960: Through careful planning and enlisting the help of a local white businessman named Ralph Johns, four Black college students—Ezell A. Blair, Jr., Franklin E. McCain, Joseph A. McNeil, and David L. Richmond—sat down at a segregated lunch counter at Woolworth’s in Greensboro, North Carolina, and politely asked for service. Their request was refused. When asked to leave, they remained in their seats.

Police arrived on the scene, but were unable to take action due to the lack of provocation. By that time, Ralph Johns had already alerted the local media, who had arrived in full force to cover the events on television. The Greensboro Four stayed put until the store closed, then returned the next day with more students from local colleges.

Their passive resistance and peaceful sit-down demand helped ignite a youth-led movement to challenge racial inequality throughout the South.

A last bit of amazing news to share today is that Black Lives Matter has been nominated for the Nobel Peace Prize! The movement was co-founded by Alicia Garza, Patrisse Cullors, and Opal Tometi in response to the acquittal of Trayvon Martin’s murderer. It got a major boost from the outrage that followed, and has grown into a global movement working to improve the lives of the entire black diaspora. May it win!

UX of Speculative Brain-Computer Inputs

So much of the technology in Black Panther appears to work by mental command (so far: Panther Suit 2.0, the Royal Talon, and the vibranium sand tables) that…

  • before we get into the kimoyo beads, or the Cape Shields, or the remote driving systems…
  • before I have to dismiss these interactions as “a wizard did it” style non-designs…
  • before I review other brain-computer interfaces in other shows…

…I wanted to check on the state of the art of brain-computer interfaces (or BCIs) and see how our understanding had advanced since I wrote the Brain interface chapter in the book, back in the halcyon days of 2012.

Note that I am deliberately avoiding the tech side of this question. I’m not going to talk about EEG, PET, MRI, and fMRI. (Though they’re linked in case you want to learn more.) Modern BCI technologies are evolving too rapidly to bother with an overview of them. They’ll change in the real world by the time I press “publish,” much less by the time you read this. And sci-fi tech is most often a black box anyway. But the human part of the human-computer interaction model changes much more slowly. We can look to the brain as a relatively unalterable component of the BCI question, leading us to two believability questions of sci-fi BCI.

  1. How can people express intent using their brains?
  2. How do we prevent accidental activation using BCI?

Let’s discuss each.

1. How can people express intent using their brains?

In the see-think-do loop of human-computer interaction…

  • See (perceive) has been a subject of visual, industrial, and auditory design.
  • Think has been a matter of human cognition as informed by system interaction and content design.
  • Do has long been a matter of some muscular movement that the system can detect, to start its matching input-process-output loop. Tap a button. Move a mouse. Touch a screen. Focus on something with your eyes. Hold your breath. These are all ways of “doing” with muscles. (A code sketch of this loop follows the diagram below.)

The “bowtie” diagram I developed for my book on agentive tech.
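
For the code-minded, here is a minimal sketch of the system’s side of that loop, in Python. The function names are invented for illustration; they come from neither the book nor the film:

```python
# A minimal input-process-output loop (names invented for illustration).
# The human sees the output, thinks, and "does" again, closing the loop.
def interaction_loop(detect_input, process, render_output):
    while True:
        event = detect_input()    # the human "does": tap, click, gaze... or thinks?
        state = process(event)    # the system works out what that means
        render_output(state)      # the human "sees" the result, and the loop repeats
```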

But the first promise of BCI is to let that doing part happen with your brain. The brain isn’t a muscle, so what actions are BCI users able to take in their heads to signal to a BCI system what they want it to do? The answer to this question is partly physiological, about the way the brain changes as it goes about its thinking business.

Ah, the 1800s. Such good art. Such bad science.

Our brains are a dense network of bioelectric signals, chemicals, and blood flow. But it’s not chaos. It’s organized. It’s functionally localized, meaning that certain parts of the brain are predictably activated when we think about certain things. But it’s not like the Christmas lights in Stranger Things, with one part lighting up discretely at a time. It’s more like an animated proportional symbol map, with lots of places lighting up at the same time to different degrees.

Illustrative composite of a gif and an online map demo.

The sizes and shapes of what lights up may vary slightly between people, but basic maps of healthy, undamaged brains will be similar to one another. Lots of work has gone into mapping these functional areas, with researchers showing subjects lots of stimuli and noting which areas of the brain light up. Test enough subjects and you can build a pretty good functional map of concepts. Thereafter, you can take a “picture” of the brain and cross-reference your maps to reverse-engineer what is being thought.
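
To make that cross-referencing concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the concepts, the tiny four-region activation vectors, the choice of cosine similarity); a real semantic map spans tens of thousands of voxels:

```python
import numpy as np

# Invented concept map: each concept gets an activation pattern across
# four brain "regions." Real maps cover tens of thousands of voxels.
CONCEPT_MAP = {
    "light": np.array([0.9, 0.1, 0.3, 0.0]),
    "door":  np.array([0.2, 0.8, 0.1, 0.4]),
    "throw": np.array([0.1, 0.3, 0.9, 0.6]),
}

def decode(activation):
    """Return the concept whose stored pattern best matches the
    observed activation, using cosine similarity."""
    def cosine(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(CONCEPT_MAP, key=lambda c: cosine(activation, CONCEPT_MAP[c]))

# A noisy reading that mostly resembles the stored "throw" pattern
print(decode(np.array([0.15, 0.25, 0.85, 0.5])))  # -> throw
```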

From Jack Gallant’s semantic map viewer.

Right now those pictures are pretty crude and slow, but so were the first actual photographs in the world. In 20–50 years, we may be able to wear baseball caps that provide much higher-resolution, real-time input of the concepts being thought. In the far future (or, say, the alternate history of the MCU) it is conceivable that these things could be read from a distance. (Though there are significant ethical questions involved in such a technology, this post is focused on questions of viability and interaction.)

From Jack Gallant’s semantic map viewer.

Similarly, the brain maps we have cover only a small percentage of an average adult vocabulary. Jack Gallant’s semantic map viewer (pictured and linked above) shows the maps for about 140 concepts, and estimates of the average active vocabulary run around 20,000 words, so we’re looking at a tenth of a tenth of what we can imagine (not even counting the infinite composability of language). But in the future we will not only have more concepts mapped, more confidently; we will also have idiographs for each individual, like the personal dictionary in your smartphone.

All this is to say that our extant real-world technology confirms that thoughts are a believable input for a system. This includes linguistic inputs like “Turn on the light” and “activate the vibranium sand table” and “Sincerely, Chris,” and even imagining the desired change, like a light going from dark to lit. It might even include subconscious thoughts that have yet to be formed into words.

2. How do we prevent accidental activation?

But we know from personal experience that we don’t want all our thoughts to be acted on. Take, for example, the thoughts you have when you’re feeling hangry, or snarky, or dealing with a jerk-in-authority. Or those texts and emails that you’ve composed in the heat of the moment but wisely deleted before they got you in trouble.

If a speculative BCI is being read by an artificial general intelligence (AGI), it can manage that just like a smart human partner would.

He is composing a blog post, reasons the AGI, so I will just disregard his thought that he needs to pee.

And if there’s any doubt, an AGI can ask. “Did you intend me to include the bit about pee in the post?” Me: “Certainly not. Also BRB.” (Readers following the Black Panther reviews will note that AGI is available to Wakandans in the form of Griot.)

If AGI is unavailable to the diegesis (and it would significantly change any diegesis of which it is a part), then we need some way to indicate when a thought is intended as input and when it isn’t. Having that be some mode of thought feels complicated and error-prone, like when programmers have to write regexes that escape escape characters. Better, I think, is to use some secondary channel, like a bodily interaction: touch forefinger and pinky together, for instance, and the computer understands you intend your thoughts as input.
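
Here is a minimal sketch of that second-channel gating in Python. The names (Reading, pinch_detected) are invented stand-ins for whatever a real system’s sensors would report:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """One tick of (hypothetical) sensor data."""
    thought: str          # decoded thought, e.g. from a semantic map
    pinch_detected: bool  # forefinger-to-pinky contact: the intent channel

def interpret(readings):
    """Treat a decoded thought as a command only while the secondary
    muscular channel confirms deliberate intent."""
    return [r.thought for r in readings if r.pinch_detected]

stream = [
    Reading("ugh, this meeting", False),  # stray thought: ignored
    Reading("lights on", True),           # pinched: executed
    Reading("I need coffee", False),      # stray thought: ignored
]
print(interpret(stream))  # -> ['lights on']
```

The appeal of this design is that the gate is cheap, deliberate, and hard to trigger by accident, so stray thoughts cost the user nothing.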

So, for any BCI that appears in sci-fi, we would want to look for the presence or absence of AGI as a reasonableness interpreter and, barring that, for some alternate-channel mechanism for indicating deliberateness. We would also hope to see some feedback and correction loops that handle the nuances of edge-case interactions, but these are rare in sci-fi.

Even more future-full

This all points to the question of what seeing/perceiving via a BCI might be like. A simple example might be a disembodied voice that only the user can hear.

A woman walks alone at night. Lost in thoughts, she hears her AI whisper to her thoughts, “Ada, be aware that a man has just left a shadowy doorstep and is following, half a block behind you. Shall I initialize your shock shoes?”

What, other than language, can be written to the brain in the far future? Images? Movies? Ideas? A suspicion? A compulsion? A hunch? How will people know which thoughts are their own and which have been placed there from outside? I look forward to the stories and shows that illustrate new ideas, and warn us of the dark pitfalls.