Panther Glove Guns

As a rule I don’t review lethal weapons on scifiinterfaces.com. The Panther Glove Guns appear to be remote-bludgeoning beams, so this one kind of sneaks by. Also, I’ll confess in advance that there’s not a lot here that affords critique.

We first see the glove guns in the 3D printer output with the kimoyo beads for Agent Ross and the Dora Milaje outfit for Nakia. They are thick weapons that fit over Shuri’s hands and wrists. I imagine they would be very useful to block blades and even disarm an opponent in melee combat, but we don’t see them in use this way.

The next time we see them, Shuri is activating them (though we don’t see how). The panther heads thrust forward, their mouths open wide, and the “neck” glows a hot blue. When the door before her opens, she immediately raises them at the guards (who are loyal to the usurper Killmonger) and fires.

A light-blue beam shoots out of the mouths of the weapons, knocking the guards off the platform. Interestingly, one guard is lifted up and thrown to his 4 o’clock. The other is lifted up and thrown to his 7 o’clock. It’s not clear how Shuri instructs the weapons to have different and particular knock-down effects. But we’ve seen all over Black Panther that brain-computer interfaces (BCI) are a thing, so it’s diegetically possible she’s simply imagining where she wants them to be thrown, and then pulling a trigger, clenching her fist around a rod, or just thinking “BAM!” to activate. Each force bolt strikes its guard right where it needs to so that, like a billiard ball, he gets knocked in the desired direction. As with all(?) brain-computer interfaces, there is no interaction to critique.
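
If you want to get nerdy about how an AI assist might translate an imagined throw direction into a strike point, here’s a minimal sketch. Every name, number, and coordinate convention below is my invention for illustration, not anything from the film:

```python
import math

def strike_point(target_xy, desired_dir_deg, radius=0.25):
    """Speculative sketch: where a force bolt must hit a target (modeled
    as a disc of `radius` meters in a top-down view) so the impulse,
    like a billiard shot, knocks the target toward the direction the
    wearer imagines.

    The impact point sits on the target's edge opposite the desired
    travel direction, so the push passes through the center of mass.
    Angles are clock-style: 0 degrees is straight ahead, increasing
    clockwise.
    """
    theta = math.radians(desired_dir_deg)
    dx, dy = math.sin(theta), math.cos(theta)   # unit vector of desired travel
    tx, ty = target_xy
    # Hit the edge opposite the travel direction.
    return (tx - radius * dx, ty - radius * dy)

# A guard standing at (2.0, 5.0), to be thrown toward his 4 o'clock (120 degrees):
print(strike_point((2.0, 5.0), 120.0))
```

The point is only that the hard part, picking the impact point, is pure geometry once the BCI supplies the intended direction.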

After she dispatches the two guards, still wearing the gloves, she throws a control bead onto the Talon. The scene is fast and blurry, and it’s unclear how she holds and releases the bead from the glove. Was it in the panther’s jaw the whole time? Could be another BCI, of course: she just thought about where she wanted it, flung her arm, and let the AI decide when to release it for perfect targeting. The Talon is large and she doesn’t seem to need a great deal of accuracy with the bead, but for more precise operations, AI targeting would make more sense than, say, having the panther heads disintegrate on command to free her hands.

Later, after Killmonger dispatches the Dora Milaje, Shuri and Nakia confront him by themselves. Nakia gets in a few good hits, but is thrown from the walkway. Shuri throws some more bolts his way, though he doesn’t appear to even notice. I note that the panther gloves would be very difficult to aim, since there’s no continuous beam providing feedback, and she doesn’t have a gun sight to help her. So, again—and I’m sorry, because it feels like cheating—I have to fall back on an AI assist here. Otherwise it doesn’t make sense.

Then Shuri switches from one blast at a time to a continuous beam. It seems to be working, as Killmonger kneels from the onslaught.

This is working! How can I eff it up?

But then for some reason she—with a projectile weapon that is actively subduing the enemy and keeping her safe at a distance—decides to close the distance, allowing Killmonger to knock the glove guns aside with a spear tip, free himself, and destroy the gloves with a clutch of his Panther claws. I mean, I get that she was furious, but I expected better tactics from the chief nerd of Wakanda. Thereafter, the gloves spark when she tries to fire them. So ends this print of the Panther Glove Guns.

As with all combat gear, a glow looks cool, but we don’t want coolness helping an enemy target the weapon. So if it were possible to suppress the glow, that would be advisable. It might be glowing just for the intimidation factor, but for a projectile weapon that seems strange.

The panther head shapes remind an opponent that she is royalty (note that no other Wakandan combatants have ranged weapons) and fighting in Bast’s name, which, if you’re in the business of theocratic warfare, is fine, I guess.

It’s worked so well in the past. More on this aspect later.

So, if you buy the brain-computer interface interpretation, AI targeting assist, and theocratic design, these are fine, with the cinegenic exception of the attention-drawing glow.


Black History Matters

Each post in the Black Panther review is followed by actions that you can take to support black lives.

When the Watchmen series opened with the Tulsa Race Massacre, many people were shocked to learn that this event was not fiction, reminding us just how much of black history is erased and whitewashed for the comfort of white supremacy (and fuck that). Today marks the beginning of Black History Month, and it’s a good opportunity to look back and (re)learn the heroic figures and stories of both terror and triumph that fill black struggles to have their citizenship and lives fully recognized.

Library of Congress, American National Red Cross Photograph Collection

There are lots of events across the month. The African American History Month site is a collaboration of several government organizations (and it feels so much safer to share such a thing now that the explicitly racist administration is out of office and facing a second impeachment):

  • The Library of Congress
  • National Archives and Records Administration
  • National Endowment for the Humanities
  • National Gallery of Art
  • National Park Service
  • Smithsonian Institution
  • United States Holocaust Memorial Museum

The site, https://www.africanamericanhistorymonth.gov/, has a number of resources for you, including images, video, and a calendar of events.

Today we can take a moment to remember and honor the Greensboro Four.

On this day, February 1, 1960: Through careful planning and enlisting the help of a local white businessman named Ralph Johns, four Black college students—Ezell A. Blair, Jr., Franklin E. McCain, Joseph A. McNeil, David L. Richmond—sat down at a segregated lunch counter at Woolworth’s in Greensboro, North Carolina, and politely asked for service. Their request was refused. When asked to leave, they remained in their seats.

Police arrived on the scene, but were unable to take action due to the lack of provocation. By that time, Ralph Johns had already alerted the local media, who had arrived in full force to cover the events on television. The Greensboro Four stayed put until the store closed, then returned the next day with more students from local colleges.

Their passive resistance and peaceful sit-down demand helped ignite a youth-led movement to challenge racial inequality throughout the South.

A last bit of amazing news to share today is that Black Lives Matter has been nominated for the Nobel Peace Prize! The movement was co-founded by Alicia Garza, Patrisse Cullors, and Opal Tometi in response to the acquittal of Trayvon Martin’s murderer, got a major boost with the outrage that followed, and has grown into a global movement working to improve the lives of the entire black diaspora. May it win!

Staff of the Living Tribunal

This staff appears to be made of wood and is approximately a meter long in its normal form. When activated by Mordo it has several powers. With a strong pull on both ends, the staff expands into a jointed energy nunchaku. It can also extend to an even greater length, like a bullwhip. When it impacts a solid object such as a floor, it seems to release a loud crack of energy. Too bad we only ever see it in demo mode.

How might this work as technology?

The staff is composed of concentric rings within rings of material similar to a collapsing travel cup. This allows the device to expand and contract in length. The handle would likely contain the artificial intelligence and a power source that activates when Mordo gives it a gestural command, or if we’re thinking far future, a mental one. There might also be an additional control for energy discharge.

In the movie, sadly, Mordo does not use the Staff to its best effect, especially when Kaecilius returns to the New York sanctum. Mordo could easily have disrupted the spell being cast by the disciples by using the staff like a whip, but instead he leaps off the balcony to physically attack them. Dude, you’re the franchise’s next Big Bad? But let’s put down the character’s missteps to look at the interface.

Mode switching and inline meta-signals

Any time you design a thing with modes, you have to design the state changes between those modes. Let’s look at how Mordo moves between staff, nunchaku, and whip in this short demonstration scene.

To go from staff to nunchaku, Mordo pulls it apart. It’s now in a dangerous state, so is there any authentication or safety switch here? There could be, but it would be entirely passive, via contact sensors, which would be best, since the weapon could then be activated in a hurry. The film doesn’t give us any clue, really, so that’s an open question.

How does it know to go from nunchaku to whip? It sure would be crappy to bet on a disabling thwack against your opponent only to find it lazily draping over a shoulder instead. (Pere Perez might have advanced thoughts here, given his ideas on lightsaber tactics.) Again, this state change could be passive, detecting in real time the subtle gestural differences between a distal snap, which a bullwhip would need, and lateral force, which sets the nunchaku spinning, and adjusting between the two accordingly. Gestural and predictive technologies are not particularly cinegenic, so let’s give it the benefit of the doubt and say that’s what’s happening.
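
For what it’s worth, that passive detection doesn’t need to be exotic. Here’s a toy sketch of the mode selection, with wholly invented sensor features and thresholds (nothing here comes from the film):

```python
def select_mode(distal_snap_g, lateral_force_g):
    """Speculative sketch of passive mode detection: the staff reads
    the wielder's wrist motion and picks whichever mode the gesture
    implies. Inputs are peak accelerations in g; thresholds are
    invented for illustration.

    - A sharp distal snap (whiplike flick along the staff's axis),
      stronger than any lateral component -> whip
    - Strong lateral force (a spinning swing) -> nunchaku
    - Neither -> stay a plain staff
    """
    if distal_snap_g > 3.0 and distal_snap_g > lateral_force_g:
        return "whip"
    if lateral_force_g > 2.0:
        return "nunchaku"
    return "staff"

print(select_mode(4.2, 0.8))  # a crisp forward snap -> "whip"
print(select_mode(0.5, 2.9))  # a circular swing     -> "nunchaku"
```

In a real device you’d want hysteresis and a confidence margin, of course, so a sloppy swing doesn’t flip modes mid-strike.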

A last mode switch: after Mordo cracks it against the ground, it retracts back to staff form. This is the hardest one to buy. Certainly it’s a most dramatic ending for Mordo’s demonstration. But does it snap back automatically after it strikes a surface? Automation is not always the answer. Deliberate control would mean Mordo doesn’t have to waste time undoing unwanted automatic actions.

Critical systems must be extremely confident in their interpretations before automation is the right choice.

It might be that this particular gesture is a retraction signal, but how the Staff distinguishes this from a mid-combat strike is tricky. It would have to have sophisticated situational awareness to know the difference, and it doesn’t display this. Better backworlding would point at some subtle gestural signal from Mordo. A double-tightening of his grip, maybe. Or even a double-slight-release of his grip, since that’s something he’s quite unlikely to do in combat.

This is a broad pattern for designers to remember. Inline control signals should be simple-to-provide, but unlikely to occur in literal use. Imagine if the Winter Soldier’s Trigger Phrase wasn’t “Longing, rusted, 17, daybreak, furnace, 9, benign, homecoming, 1, freight car” but instead was the word “the.” He’d be berserking every few seconds. Unworkable. So, if you were designing the Staff’s retraction command gesture, you’d have to pick something he could remember and perform easily, and that would be difficult to accidentally provide.
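
To make that concrete, here’s a toy sketch of such an inline retraction signal, a double grip-squeeze, with all parameters invented for illustration:

```python
def is_retract_signal(squeeze_times, window=0.6, min_gap=0.08):
    """Speculative sketch of an inline control signal: two deliberate
    grip squeezes in quick succession retract the staff.

    `squeeze_times` is a list of timestamps (seconds) at which the
    grip sensor fired. The gap floor rejects a single long clench
    read as two events (common in combat); the window rejects two
    unrelated squeezes far apart in time.
    """
    if len(squeeze_times) < 2:
        return False
    t1, t2 = squeeze_times[-2], squeeze_times[-1]
    gap = t2 - t1
    return min_gap <= gap <= window

print(is_retract_signal([10.00, 10.25]))  # deliberate double squeeze -> True
print(is_retract_signal([10.00, 10.02]))  # sensor jitter on one clench -> False
print(is_retract_signal([10.00, 45.00]))  # unrelated squeezes -> False
```

The timing window is doing exactly the Winter Soldier trick: the signal is trivial to perform on purpose and statistically unlikely to happen by accident.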

If Mordo has the staff in the next film, I hope the control modes are clearer and of course well-designed.

Iron Man HUD: A Breakdown

So this is going to take a few posts. You see, the next interface that appears in The Avengers is a video conference between Tony Stark in his Iron Man supersuit and his partner in romance and business, Pepper Potts, about switching Stark Tower from the electrical grid to their independent power source. Here’s what a still from the scene looks like.

Avengers-Iron-Man-Videoconferencing01

So on the surface of this scene, it’s a communications interface.

But that chat exists inside of an interface with a conceptual and interaction framework that has been laid down since the original Iron Man movie in 2008, and built upon with each sequel, one in 2010 and one in 2013. (With rumors aplenty for a fourth one…sometime.)

So to review the video chat, I first have to talk about the whole interface, and that has about 6 hours of prologue occurring across 4 years of cinema informing it. So let’s start, as I do with almost every interface, simply by describing it and its components.

Exosuit

Iron Man is the name of the series of superpowered exosuits designed by Tony Stark. They range from the Mark I, a comparatively crude suit of armor built to escape imprisonment by terrorists, through the Mark XLV, the armor seen in Avengers: Age of Ultron. The suit acts as defense against nearly every type of weapon known. It has repulsor beams built into the palms, and in later models the arc reactor mounted in the chest can be used to deliver concussive force. It allows the wearer to fly. Offensive weaponry varies between models, but has included a high-powered laser system, an auto-targeting minigun pod, and missiles. The suit can act semi-autonomously or via remote control. One of the models in The Avengers has parts that are seen to self-propel to Tony, targeting a beacon bracelet he wears, and self-assemble around him very quickly.

Marks1and43

Immersive display

Though Tony’s head is completely covered, he has a virtual reality display within his helmet. It is a full-field-of-vision, very-high-resolution, full-color display that provides stereoscopic imaging. It allows Tony to see the world around him as if he were not wearing the helmet, and augments the view with goal-, person-, location-, and object-sensitive awareness.

The display varies a great deal, changing to the needs of the situation. But five icons persist in the lower part of the display; they seem to be: suit status, targeting and optics, radar, artificial horizon, and map.

An interpretive view of Tony’s experience, from Iron Man (2008).
A first-person view from within the HUD, Iron Man (2008).

There is much to critique about the readability of the complex layering and translucency, the limits of human perception, and the necessarily- (and strictly-) interpretive nature of what we as audience see, but let me save those three points for a later post. For now it’s enough to log the features as aspects of the system.

Head NUI

Though Tony could use his hands to interact with an interface projected into the augmented reality view around him, his hands are often occupied in controlling flight or in combat. For this reason the means of input are head gestures, eye gestures, and voice. A bit more on each follows.

Elements within the HUD, such as the reticles around his eyes, follow and track his head movements. Other elements stay locked in place. The HUD can track his gaze perfectly, allowing him to designate targets for his weapons with a fixation. Using this perfect eye tracking, Tony can also speak about something he is looking at, either in the real world or in the interface, and the system understands exactly what he’s talking about.
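
If you want to imagine how JARVIS might resolve a fixation into a designated target, here’s a toy sketch. The cone size, dwell time, and data shapes are all my invention, not anything the films specify:

```python
import math

def designate(gaze_dir, targets, dwell_s, min_dwell=0.3, cone_rad=0.05):
    """Speculative sketch of gaze-based target designation: find the
    tracked object closest to the gaze ray; designate it only if it
    lies inside a small cone around the gaze and the fixation has
    lasted long enough to be deliberate.

    `gaze_dir` and each target's "dir" are unit 3-vectors in helmet
    coordinates; angles are in radians.
    """
    if dwell_s < min_dwell:
        return None  # a passing glance, not a fixation

    def angle_to(t):
        dot = sum(g * c for g, c in zip(gaze_dir, t["dir"]))
        return math.acos(max(-1.0, min(1.0, dot)))  # clamp for safety

    best = min(targets, key=angle_to, default=None)
    if best is not None and angle_to(best) <= cone_rad:
        return best["id"]
    return None

targets = [{"id": "drone-1", "dir": (0.0, 0.0, 1.0)},
           {"id": "drone-2", "dir": (0.7071, 0.0, 0.7071)}]
print(designate((0.01, 0.0, 0.99995), targets, dwell_s=0.5))  # -> drone-1
```

The dwell threshold is the interesting design choice: it’s what keeps the suit from shooting everything Tony happens to glance at.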

In fact, Tony is able to speak fully natural language commands, and indeed, carry out full-Turing conversations with the suit because of the presence of…

Strong artificial intelligence: JARVIS

An on-board artificial intelligence known as JARVIS handles any information task Tony asks of it, monitors the surroundings, and anticipates informational needs. There is strong evidence that most of the functions of the suit are handled by JARVIS behind the scenes. The importance of the artificial intelligence to the function of the suit cannot be overstated. It’s difficult to imagine how most of the suit could function as it does without an artificial intelligence behind the scenes facilitating results and even guiding Tony. With this in mind it is instructive to reframe the AI as the thing being named the Iron Man, with Tony Stark being an onboard manager, or, more charitably, a command-and-control center. Who quips.

Next up in the Iron HUD series: Let’s review the functions of the suit.

Avengers-Iron-Man-Videoconferencing02