Panther Glove Guns

As a rule I don’t review lethal weapons on scifiinterfaces.com. The Panther Glove Guns appear to be remote-bludgeoning beams, so this kind of sneaks by. Also, I’ll confess in advance that there’s not a lot that affords critique.

We first see the glove guns in the 3D printer output with the kimoyo beads for Agent Ross and the Dora Milaje outfit for Nakia. They are thick weapons that fit over Shuri’s hands and wrists. I imagine they would be very useful to block blades and even disarm an opponent in melee combat, but we don’t see them in use this way.

The next time we see them, Shuri is activating them (though we don’t see how). The panther heads thrust forward, their mouths open wide, and the “neck” glows a hot blue. When the door before her opens, she immediately raises them at the guards (who are loyal to the usurper Killmonger) and fires.

A light-blue beam shoots out of the mouths of the weapons, knocking the guards off the platform. Interestingly, one guard is lifted up and thrown toward his 4 o’clock. The other is lifted up and thrown toward his 7 o’clock. It’s not clear how Shuri instructs the weapons to produce different, particular knock-down effects. But we’ve seen all over Black Panther that brain-computer interfaces (BCI) are a thing, so it’s diegetically possible she’s simply imagining where she wants them to be thrown, and then pulling a trigger, or clenching her fist around a rod, or just thinking “BAM!” to activate. The force bolt strikes them right where it needs to so that, like a billiard ball, they get knocked in the desired direction. As with all(?) brain-computer interfaces, there is no interaction to critique.
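If you buy the AI-assist reading, the billiard-ball targeting is straightforward vector math: strike the side of the target opposite the direction you want it thrown. Here is a minimal sketch; the coordinates, the “effective radius,” and the function name are all invented for illustration.

```python
import math

def strike_point(center, radius, desired_dir):
    """Where should the force bolt hit a target (modeled as a ball of the
    given effective radius at `center`) so the impulse knocks it in
    `desired_dir`? Like a billiard ball, the impulse travels from the
    impact point through the center, so we strike the opposite side."""
    mag = math.hypot(desired_dir[0], desired_dir[1])
    unit = (desired_dir[0] / mag, desired_dir[1] / mag)
    return (center[0] - radius * unit[0], center[1] - radius * unit[1])

# Knock a guard at the origin toward his 4 o'clock (down and to the right):
print(strike_point((0.0, 0.0), 0.5, (0.87, -0.5)))
```

The interesting design work, of course, is everything the sketch takes for granted: sensing the target, estimating its mass and footing, and reading Shuri’s intent in the first place.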

After she dispatches the two guards, still wearing the gloves, she throws a control bead onto the Talon. The scene is fast and blurry, and it’s unclear how she holds and releases the bead from the glove. Was it in the panther’s jaw the whole time? It could be another BCI, of course: she just thought about where she wanted it, flung her arm, and let the AI decide when to release it for perfect targeting. The Talon is large and she doesn’t seem to need a great deal of accuracy with the bead, but for more precise operations, AI targeting would make more sense than, say, letting the panther heads disintegrate on command to free up her hands.

Later, after Killmonger dispatches the Dora Milaje, Shuri and Nakia confront him by themselves. Nakia gets in a few good hits, but is thrown from the walkway. Shuri throws some more bolts his way, though he doesn’t appear to even notice. I note that the panther gloves would be very difficult to aim, since there’s no continuous beam providing feedback, and she doesn’t have a gun sight to help her. So, again—and I’m sorry because it feels like cheating—I have to fall back to an AI assist here. Otherwise it doesn’t make sense.

Then Shuri switches from one blast at a time to a continuous beam. It seems to be working, as Killmonger kneels from the onslaught.

This is working! How can I eff it up?

But then for some reason she—with a projectile weapon that is actively subduing the enemy and keeping her safe at a distance—decides to close ranks, allowing Killmonger to knock the glove guns aside with a spear tip, thereby freeing himself, and to destroy the gloves with a clutch of his Panther claws. I mean, I get she was furious, but I expected better tactics from the chief nerd of Wakanda. Thereafter, the gloves spark when she tries to fire them. So ends this print of the Panther Guns.

As with all combat gear, it looks cool for it to glow, but we don’t want coolness to help an enemy target the weapon. So if it was possible to suppress the glow, that would be advisable. It might be glowing just for the intimidation factor, but for a projectile weapon that seems strange.

The panther head shapes remind an opponent that she is royalty (note that no other Wakandan combatants have ranged weapons) and fighting in Bast’s name, which, if you’re in the business of theocratic warfare, is fine, I guess.

It’s worked so well in the past. More on this aspect later.

So, if you buy the brain-computer interface interpretation, AI targeting assist, and theocratic design, these are fine, with the cinegenic exception of the attention-drawing glow.


Black History Matters

Each post in the Black Panther review is followed by actions that you can take to support black lives.

When the Watchmen series opened with the Tulsa Race Massacre, many people were shocked to learn that this event was not fiction, reminding us just how much of black history is erased and whitewashed for the comfort of white supremacy (and fuck that). Today marks the beginning of Black History Month, and it’s a good opportunity to look back and (re)learn the heroic figures and the stories of both terror and triumph that fill black struggles to have their citizenship and lives fully recognized.

Library of Congress, American National Red Cross Photograph Collection

There are lots of events across the month. The African American History Month site is a collaboration of several government organizations (and it feels so much safer to share such a thing now that the explicitly racist administration is out of office and facing a second impeachment):

  • The Library of Congress
  • National Archives and Records Administration
  • National Endowment for the Humanities
  • National Gallery of Art
  • National Park Service
  • Smithsonian Institution
  • United States Holocaust Memorial Museum

The site, https://www.africanamericanhistorymonth.gov/, has a number of resources, including images, video, and a calendar of events.

Today we can take a moment to remember and honor the Greensboro Four.

On this day, February 1, 1960: Through careful planning and enlisting the help of a local white businessman named Ralph Johns, four Black college students—Ezell A. Blair, Jr., Franklin E. McCain, Joseph A. McNeil, David L. Richmond—sat down at a segregated lunch counter at Woolworth’s in Greensboro, North Carolina, and politely asked for service. Their request was refused. When asked to leave, they remained in their seats.

Police arrived on the scene, but were unable to take action due to the lack of provocation. By that time, Ralph Johns had already alerted the local media, who had arrived in full force to cover the events on television. The Greensboro Four stayed put until the store closed, then returned the next day with more students from local colleges.

Their passive resistance and peaceful sit-down demand helped ignite a youth-led movement to challenge racial inequality throughout the South.

A last bit of amazing news to share today is that Black Lives Matter has been nominated for the Nobel Peace Prize! The movement was co-founded by Alicia Garza, Patrisse Cullors, and Opal Tometi in response to the acquittal of Trayvon Martin’s murderer. It got a major boost from the waves of outrage that followed, and it has grown into a global movement working to improve the lives of the entire black diaspora. May it win!

Okoye’s grip shoes

Like so much of the tech in Black Panther, this wearable battle gear is quite subtle, but critical to the scene, and much more than it seems at first. When Okoye and Nakia are chasing Klaue through the streets of Busan, South Korea, she realizes she would be better positioned on top of their car than within it.

She holds one of her spears out of the window, stabs it into the roof, and uses it to pull herself out on top of the swerving, speeding car. Once there, she places her feet into position, and the moment the sole of her foot touches the roof, it glows cyan for a moment.

She then holds onto the stuck spear to stabilize herself, rears back with her other spear, and throws it forward through the rear window and windshield of some minions’ car, where it sticks in the road before them. Their car strikes the spear and gets crushed. It’s a kickass moment in a film of kickass moments. But by all means let’s talk about the footwear.

Now, the effect the shoes have in the world of the story is not explicit. But we can guess, given the context, that we are meant to believe the shoes grip the car roof, giving her a firm enough anchor to stay on top of the car and not tumble off when it swerves.

She can’t just be stuck

I have never thrown a javelin or a hyper-technological vibranium spear. But Mike Barber, PhD scholar in Biomechanics at Victoria University and the Australian Institute of Sport, wrote this article about the mechanics of javelin throwing, and it seems that throwing force is not achieved by sheer strength of the rotator cuff alone. Rather, the thrower builds force across their entire body and whips the momentum around their shoulder joint.

 Ilgar Jafarov, CC BY-SA 4.0, via Wikimedia Commons

Okoye is a world-class warrior, but doesn’t have superpowers, so…while I understand she does not want the car to yank itself from underneath her with a swerve, it seems that being anchored in place, like some Wakandan air tube dancer, will not help her with her mighty spear throwing. She needs to move.

It can’t just be manual

Imagine being on a mechanical bull jerking side to side—being stuck might help you stay upright. But imagine it jerking forward suddenly, and you’d wind up on your butt. If it jerked backwards, you’d be thrown forward, and it might be much worse. All are possibilities in the car chase scenario.

If those jerking motions happened to Okoye faster than she could react and release her shoes, it could be disastrous. So it can’t be a thing she needs to manually control. Which means it needs to be some blend of manual, agentive, and assistant. Autonomic, maybe, to borrow the term from physiology?

So…

To really be of help, it has to…

  • monitor the car’s motion
  • monitor her center of balance
  • monitor her intentions
  • predict the future motions of the cars
  • handle all the cybernetics math (in the Norbert Wiener sense, not the sci-fi sense)
  • know when it should just hold her feet in place, and when it should signal for her to take action
  • know what action she should ideally take, so it knows what to nudge her to do

These are no mean feats, especially in real-time. So, I don’t see any explanation except…

An A.I. did it.

AGI is in the Wakandan arsenal (cf. Griot helping Ross), so this is credible given the diegesis, but I did not expect to find it in shoes.
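Just to underscore how much work is hiding in those bullet points, here is a toy sketch of a single tick of such an autonomic loop. Every threshold, signal name, and unit is invented for illustration; the real problem (prediction, cybernetics math, intent-reading) is vastly harder.

```python
def grip_decision(predicted_accel, balance_offset, intent):
    """One tick of a (speculative) autonomic grip loop.

    predicted_accel: (lateral, longitudinal) m/s^2 the AI expects next tick
    balance_offset:  wearer's center-of-mass offset from her base, in meters
    intent:          what the wearer is trying to do, e.g. "throw" or "stand"

    Returns (grip, nudge): whether to lock the soles, plus an optional
    cue to signal the wearer. All thresholds are invented.
    """
    lat, lon = predicted_accel
    if intent == "throw":
        # Spear throwing needs whole-body motion; don't fight it.
        return ("free", None)
    if abs(lon) > 3.0:
        # A hard forward/backward jerk would topple a locked stance:
        # stay gripped at the feet but cue a compensating lean.
        return ("lock", "lean_forward" if lon > 0 else "lean_back")
    if abs(lat) > 2.0 or abs(balance_offset) > 0.1:
        # Side-to-side swerves are exactly what the grip is for.
        return ("lock", None)
    return ("lock", None)
```

Note that even this cartoon version has to reconcile three inputs per tick, which is why the AI interpretation is the only one that holds up.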

An interesting design question is how it might deliver warning signals about predicted motions. Is it tangible, like vibration? Or a mild electrical buzz? Or a writing-to-the-brain urge to move? The movie gives us no clues, but if you’re up for a design challenge, give it a speculative design pass.

Wearable heuristics

As part of my 2014 series about wearable technologies in sci-fi, I identified a set of heuristics we can use to evaluate such things. A quick check against those shows that the shoes fare well. They are quite sartorial, and they look like shoes, so they are social as well. As a brain interface, they are supremely easy to access and use. Two of the heuristics raise questions, though.

  1. Wearables must be designed so they are difficult to accidentally activate. It would have been very inconvenient for Okoye to find herself stuck to the surface of Wakanda while trying to chase Killmonger later in the film, for example. It would be safer to ensure deliberateness with some mode-confirming physical gesture, but there’s no evidence of it in the movie.
  2. Wearables should have apposite I/O. The soles glow. Okoye doesn’t need that information. I’d say in a combat situation it’s genuinely bad design to require her to look down to confirm any mode of the shoes. They’re worn: she will immediately feel whether her shoes are fixed in place. I can’t name exactly how an enemy might use the knowledge of whether she is stuck in place, but on general principle, the less information we give the enemy, the safer she’ll be. So if this were real-world, we would seek to eliminate the glow. That said, we know that undetectable interactions are not cinegenic in the slightest, so for the film this is a nice “throwaway” addition to the cache of amazing Wakandan technology.

Black Georgia Matters and Today is the Day

Each post in the Black Panther review is followed by actions that you can take to support black lives.

Today is the last day in the Georgia runoff elections. It’s hard to overstate how important this is. If Ossoff and Warnock win, the future of the country has a much better likelihood of taking Black Lives Matter (and lots of other issues) more seriously. Actual progress might be made. Without it, the obstructionist and increasingly-frankly-racist Republican party (and Moscow Mitch) will hold much of the Biden-Harris administration back. If you know of any Georgians, please check with them today to see if they voted in the runoff election. If not—and they’re going to vote Democrat—see what encouragement and help you can give them.

Some ideas…

  • Pay for a ride there and back remotely.
  • Buy a meal to be delivered for their family.
  • Make sure they are protected and well-masked.
  • Encourage them to check their absentee ballot, if they cast one, here. https://georgia.ballottrax.net/voter/
  • If their absentee ballot has not been registered, they can go to the polls and tell the workers there that they want to cancel their absentee ballot and vote in person. Help them know their poll at My Voter Page: https://www.mvp.sos.ga.gov/MVP/mvp.do

This vote matters, matters, matters.

UX of Speculative Brain-Computer Inputs

So much of the technology in Black Panther appears to work by mental command (so far: Panther Suit 2.0, the Royal Talon, and the vibranium sand tables) that…

  • before we get into the Kimoyo beads, or the Cape Shields, or the remote driving systems…
  • before I have to dismiss these interactions as “a wizard did it” style non-designs
  • before I review other brain-computer interfaces in other shows…

…I wanted to check on the state of the art of brain-computer interfaces (or BCIs) and see how our understanding has advanced since I wrote the Brain interface chapter in the book, back in the halcyon days of 2012.

Note that I am deliberately avoiding the tech side of this question. I’m not going to talk about EEG, PET, MRI, and fMRI. (Though they’re linked in case you want to learn more.) Modern BCI technologies are evolving too rapidly to bother with an overview. They’ll change in the real world by the time I press “publish,” much less by the time you read this. And sci-fi tech is most often a black box anyway. But the human part of the human-computer interaction model changes much more slowly. We can look to the brain as a relatively unalterable component of the BCI question, leading us to two believability questions of sci-fi BCI.

  1. How can people express intent using their brains?
  2. How do we prevent accidental activation using BCI?

Let’s discuss each.

1. How can people express intent using their brains?

In the see-think-do loop of human-computer interaction…

  • See (perceive) has been a subject of visual, industrial, and auditory design.
  • Think has been a matter of human cognition as informed by system interaction and content design.
  • Do has long been a matter of some muscular movement that the system can detect, to start its matching input-process-output loop. Tap a button. Move a mouse. Touch a screen. Focus on something with your eyes. Hold your breath. These are all ways of “doing” with muscles.
The “bowtie” diagram I developed for my book on agentive tech.

But the first promise of BCI is to let that doing part happen with your brain. The brain isn’t a muscle, so what actions are BCI users able to take in their heads to signal to a BCI system what they want it to do? The answer to this question is partly physiological, about the way the brain changes as it goes about its thinking business.

Ah, the 1800s. Such good art. Such bad science.

Our brains are a dense network of bioelectric signals, chemicals, and blood flow. But it’s not chaos. It’s organized. It’s locally functionalized, meaning that certain parts of the brain are predictably activated when we think about certain things. But it’s not like the Christmas lights in Stranger Things, with one part lighting up discretely at a time. It’s more like an animated proportional symbol map, with lots of places lighting up at the same time to different degrees.

Illustrative composite of a gif and an online map demo.

The sizes and shapes of what’s lighting up may change slightly between people, but a basic map of healthy, undamaged brains will be similar to each other. Lots of work has gone on to map these functional areas, with researchers showing subjects lots of stimuli and noting what areas of the brain light up. Test enough of these subjects and you can build a pretty good functional map of concepts. Thereafter, you can take a “picture” of the brain, and you can cross-reference your maps to reverse-engineer what is being thought.
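That cross-referencing step is, at heart, a pattern lookup: compare the measured activation against the functional map and pick the best match. A toy sketch follows; the regions, concepts, and activation values are all invented, and real decoders work over tens of thousands of voxels, not four numbers.

```python
import math

# A toy functional "map": each concept is a pattern of activation
# across (here) four brain regions. Values are invented.
CONCEPT_MAP = {
    "dog":   [0.9, 0.1, 0.3, 0.0],
    "house": [0.1, 0.8, 0.2, 0.4],
    "run":   [0.2, 0.1, 0.9, 0.7],
}

def decode(activation):
    """Reverse-engineer the thought: return the mapped concept whose
    activation pattern is most similar (by cosine similarity) to the
    measured one."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)
    return max(CONCEPT_MAP, key=lambda c: cosine(activation, CONCEPT_MAP[c]))

print(decode([0.85, 0.15, 0.25, 0.05]))  # prints "dog"
```

The hard part isn’t the lookup; it’s building a map dense and personal enough that the nearest pattern is actually what the person meant.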

From Jack Gallant’s semantic maps viewer.

Right now those pictures are pretty crude and slow, but so were the first actual photographs in the world. In 20–50 years, we may be able to wear baseball caps that provide much higher-resolution, real-time input of the concepts being thought. In the far future (or, say, the alternate history of the MCU) it is conceivable to read these things from a distance. (Though there are significant ethical questions involved in such a technology, this post is focused on questions of viability and interaction.)

From Jack Gallant’s semantic map viewer

Similarly, the brain maps we have cover only a small percentage of an average adult vocabulary. Jack Gallant’s semantic map viewer (pictured and linked above) shows maps for about 140 concepts, and estimates of average active vocabulary run around 20,000 words, so we’re looking at a tenth of a tenth of what we can imagine (not even counting the infinite composability of language). But in the future we will not only have more concepts mapped, more confidently; we will also have idiographs for each individual, like the personal dictionary in your smart phone.

All this is to say that our extant real-world technology confirms that thoughts are a believable input for a system. This includes linguistic inputs like “Turn on the light” and “activate the vibranium sand table” and “Sincerely, Chris,” and even imagining the desired change, like a light changing from dark to light. It might even include subconscious thoughts that have yet to be formed into words.

2. How do we prevent accidental activation?

But we know from personal experience that we don’t want all our thoughts to be acted on. Take, for example, the thoughts you have when you’re feeling hangry, or snarky, or dealing with a jerk-in-authority. Or those texts and emails that you’ve composed in the heat of the moment but wisely deleted before they get you in trouble.

If a speculative BCI is being read by a general artificial intelligence, it can manage that just like a smart human partner would.

He is composing a blog post, reasons the AGI, so I will just disregard his thought that he needs to pee.

And if there’s any doubt, an AGI can ask. “Did you intend me to include the bit about pee in the post?” Me: “Certainly not. Also BRB.” (Readers following the Black Panther reviews will note that AGI is available to Wakandans in the form of Griot.)

If AGI is unavailable to the diegesis (and it would significantly change any diegesis of which it is a part), then we need some way to indicate when a thought is intended as input and when it isn’t. Having that be some mode of thought feels complicated and error-prone, like when programmers have to write regex expressions that escape escape characters. Better, I think, is to use some secondary channel, like a bodily interaction. Touch forefinger and pinky together, for instance, and the computer understands you intend your thoughts as input.
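That secondary-channel gate is trivial to express. The forefinger-pinky pinch comes from the paragraph above; the function name and shape of the inputs are invented for the sketch.

```python
def bci_command(decoded_thought, pinch_held):
    """Treat a decoded thought as a command only while the deliberate
    confirmation gesture (forefinger touching pinky) is held; otherwise
    it's just thinking, and the system discards it."""
    return decoded_thought if pinch_held else None

# Pinch held: the thought is input.
print(bci_command("activate the vibranium sand table", True))
# No pinch: the system ignores the stray thought.
print(bci_command("ugh, I need coffee", False))
```

The simplicity is the point: all the complexity lives in the thought decoder, while deliberateness rides on a cheap, reliable, muscular channel.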

So, for any BCI that appears in sci-fi, we would want to look for the presence or absence of AGI as a reasonableness interpreter, and, barring that, for some alternate-channel mechanism for indicating deliberateness. We would also hope to see some feedback and correction loops to understand the nuances of the edge-case interactions, but these are rare in sci-fi.

Even more future-full

This all points to the question of what seeing/perceiving via a BCI might be. A simple example might be a disembodied voice that only the user can hear.

A woman walks alone at night. Lost in thoughts, she hears her AI whisper to her thoughts, “Ada, be aware that a man has just left a shadowy doorstep and is following, half a block behind you. Shall I initialize your shock shoes?”

What other than language can be written to the brain in the far future? Images? Movies? Ideas? A suspicion? A compulsion? A hunch? How will people know what are their own thoughts and what has been placed there from the outside? I look forward to the stories and shows that illustrate new ideas, and warn us of the dark pitfalls.

The Royal Talon piloting interface

Since my last post, news broke that Chadwick Boseman has passed away after a four-year battle with cancer. He kept his struggles private, so the news was sudden and hard-hitting. The fandom is still reeling. Black people, especially, have lost a powerful, inspirational figure. The world has also lost a courageous and talented young actor. Rise in Power, Mr. Boseman. Thank you for your integrity, bearing, and strength.

Photo CC BY-SA 2.0,
by Gage Skidmore.

Black Panther’s airship is a triangular vertical-takeoff-and-landing vehicle called the Royal Talon. We see its piloting interface twice in the film.

The first time is near the beginning of the movie. Okoye and T’Challa are flying at night over the Sambisa forest in Nigeria. Okoye sits in the pilot’s seat in a meditative posture, facing a large forward-facing bridge window with a heads up display. A horseshoe-shaped shelf around her is filled with unactivated vibranium sand. Around her left wrist, her kimoyo beads glow amber, projecting a volumetric display around her forearm.

She announces to T’Challa, “My prince, we are coming up on them now.” As she disengages from the interface, retracting her hands from the pose, the kimoyo projection shifts and shrinks. (See more detail in the video clip, below.)

The second time we see it is when they pick up Nakia and save the kidnapped girls. On their way back to Wakanda we see Okoye again in the pilot’s seat. No new interactions are seen in this scene though we linger on the shot from behind, with its glowing seatback looking like some high-tech spine.

Now, these brief glimpses don’t give a review a lot to go on. But for the sake of completeness, let’s talk about that volumetric projection around her wrist. I note that it is a lovely echo of Dr. Strange’s interface for controlling the time stone in the Eye of Agamotto.

Wrist projections are going to be all the rage at the next Snap, I predict.

But we never really see Okoye look at this VP or use it. Cross-referencing the Wakandan alphabet, those five symbols at the top translate to 1 2 K R I, which doesn’t tell us much. (It doesn’t match the letters seen on the HUD.) It might be a visual do-not-disturb signal to onlookers, but if there’s other meaning the letters and petals are meant to convey to Okoye, I can’t figure it out. At worst, I think having the wrist movements of one hand emphasized in your peripheral vision with a glowing display is a dangerous distraction from piloting. Her eyes should be on the “road” ahead of her.

The image has been flipped horizontally to illustrate how Okoye would see the display.

Similarly, we never get a good look at the HUD, or see Okoye interact with it, so I’ve got little to offer other than a mild critique that it looks full of pointless ornamental lines, many of which would obscure things in her peripheral vision, which is where humans need the most help detecting things other than motion. But modern sci-fi interfaces generally (and the MCU in particular) are in a baroque period, and this is partly how audiences recognize sci-fi-ness.

I also think that requiring a pilot to maintain full lotus is a little much, but certainly, if there’s anyone who can handle it, it’s the leader of the Dora Milaje.

One remarkable thing to note is that this is the first brain-input piloting interface in the survey. Okoye thinks what she wants the ship to do, and it does it. I expect, given what we know about kimoyo beads in Wakanda (more on these in a later post), what’s happening is she is sending thoughts to the bracelet, and the beads are conveying the instructions to the ship. As a way to show Okoye’s self-discipline and Wakanda’s incredible technological advancement, this is awesome.

Unfortunately, I don’t have good models for evaluating this interaction. And I have a lot of questions. As with gestural interfaces, how does she keep a distracted thought from affecting the ship? Why does she not need a tunnel-in-the-sky assist? Is she imagining what the ship should do, or a route, or something more abstract, like her goals? How does the ship grant her its field awareness for a feedback loop? When does the vibranium dashboard get activated? How does it assist her? How does she hand things off to the autopilot? How does she take it back? Since we don’t have good models, and it all happens invisibly, we’ll have to let these questions lie. But that’s part of us, from our less-advanced viewpoint, having to marvel at this highly advanced culture from the outside.


Black Health Matters

Each post in the Black Panther review is followed by actions that you can take to support black lives.

Thinking back to the terrible loss of Boseman: Fuck cancer. (And not to imply that his death was affected by this, but also:) Fuck the racism that leads to worse medical outcomes for black people.

One thing you can do is to be aware of the diseases that disproportionately affect black people (diabetes, asthma, lung scarring, strokes, high blood pressure, and cancer) and be aware that no small part of these poorer outcomes is racism, systemic and individual. Listen to Dorothy Roberts’ TED talk, calling for an end to race-based medicine.

If you’re the reading sort, check out the books Black Man in a White Coat by Damon Tweedy, or the infuriating history covered in Medical Apartheid by Harriet Washington.

If you are black, in Boseman’s memory, get screened for cancer as often as your doctor recommends it. If you think you cannot afford it and you are in the USA, this CDC website can help you determine your eligibility for free or low-cost screening: https://www.cdc.gov/cancer/nbccedp/screenings.htm. If you live elsewhere, you almost certainly have a better healthcare system than we do, but a quick search should tell you your options.

Cancer treatment is equally successful for all races. Yet black men have a 40% higher cancer death rate than white men and black women have a 20% higher cancer death rate than white women. Your best bet is to detect it early and get therapy started as soon as possible. We can’t always win that fight, but better to try than to find out when it’s too late to intervene. Your health matters. Your life matters.

Trivium Bracelet

The control token in Las Luchadoras is a bracelet that slaps on and instantly renders its wearer an automaton, subject to remote control.

Here’s something to note about this speculative technology. Orlak could have sold this, just this, to law enforcement around the world and made himself a very rich and powerful person. But the movie makes clear he is a mad engineer, not a mad businessperson, so we have to move on.

From Orlak’s point of view, getting the bracelet onto a victim should be very easy, and it is: he can slap it on in a flick. But it’s also trivially easy for a bystander to remove, which seems like…a design oversight. It should work more like a handcuff, requiring a key to remove. It can’t look like a handcuff, of course, since Orlak wants it to go unnoticed. But in addition to the security, the handcuff mechanism would let the device fit wrists of many sizes. As it is, it appears to be tailor-made for an individual.

As the diagram illustrates, not all wrists are made the same, and it would not help Orlak to have to carry around a sizing set when he hasn’t had time to secretly get the victim’s measurements.

Lastly, the audience might have benefited from seeing some visual connection between the bracelet and the remote, like a shared material that had an unusual color or glow, but Orlak would not want this connection since it could help someone identify him as the controller.

Zed-Eyes

In the world of “White Christmas”, everyone has a networked brain implant called Zed-Eyes that enables heads-up overlays onto vision, personalized audio, and modifications to environmental sounds. The control hardware is a thin metal circle around a metal click button, separated by a black rubber ring. People can buy the device with different color rings, as we see metal, blue, and black versions across the episode.

To control the implant, a person slides a finger (the thumb is easiest) around the rim of a tiny touch device. Because it responds to sliding across its surface, let’s say the device uses a sensor similar to the one seen in The Entire History of You (2011) or the IBM TrackPoint.

A thumb slide cycles through a carousel menu. Sliding can happen both clockwise and counterclockwise. It even works through gloves.


The button selects or executes the highlighted action. The complete list of carousel menu options we see in the episode is: Search, Camera, Music, Mail, Call, Magnify, Block, and Map. The particular options change across scenes, so the menu is context-aware or customizable. We will look at some of the particular functions in later posts. For now, let’s discuss the “platform” that is Zed-Eyes.

Analysis

There’s not much to discuss about the user interface. The carousel is a mature, if constrained, interface model familiar to anyone who has used an iPod. We know the constraints and benefits of such a system, and the Zed-Eyes content seems to fit this kind of interface well.
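For the curious, the whole interface model fits in a few lines. A sketch of the carousel, using the option list from the episode (the class and method names are, of course, invented):

```python
class Carousel:
    """A minimal sketch of the Zed-Eyes carousel: a circular menu that a
    thumb-slide cycles through in either direction, over whatever
    context-dependent option set is currently loaded."""
    def __init__(self, options):
        self.options = options
        self.index = 0

    def slide(self, clockwise=True):
        # Wrap around in either direction, like a thumb circling the rim.
        step = 1 if clockwise else -1
        self.index = (self.index + step) % len(self.options)
        return self.options[self.index]

    def click(self):
        # The center button selects/executes the highlighted option.
        return self.options[self.index]

menu = Carousel(["Search", "Camera", "Music", "Mail",
                 "Call", "Magnify", "Block", "Map"])
menu.slide()                 # advances to "Camera"
menu.slide(clockwise=False)  # back to "Search"
print(menu.click())          # prints "Search"
```

Swapping the options list per scene is all the “context-awareness” the episode requires.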

Hardware

The main issue with the hardware is that it must be very, very easy to lose or misplace. It would make sense for Zed-Eyes to help you locate the device when you need help, but we don’t see a hint of this in the show.

I think the little watch-battery form factor is a bad design. It’s easy to lose, hard to find, and requires a lot of precision to use. Since this exists in a world with very high-fidelity image recognition and visual processing, better would be to get rid of input hardware altogether.

Let the user swipe with their thumb across their index finger (or really, any available surface) and have the HUD read that as input. To distinguish real-world interactions that should not have consequence—like swiping dust off a computer—from input meant for the HUD, it could track the user’s visual focal point. When the user’s eyes focus on the empty space in the air right above where they’re swiping, the system knows swiping is meant to affect the interface.
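That gaze-gated swipe could be sketched like so; the normalized coordinates, the “just above” offset, and the threshold are all invented for illustration.

```python
def accept_swipe(gaze_point, swipe_point, threshold=0.15):
    """Accept a thumb swipe as HUD input only when the user's gaze is
    focused on the empty space just above where they're swiping.
    Points are (x, y) in a normalized field of view; the 0.1 offset
    and the threshold are invented."""
    expected = (swipe_point[0], swipe_point[1] + 0.1)  # just above the swipe
    dx = gaze_point[0] - expected[0]
    dy = gaze_point[1] - expected[1]
    return (dx * dx + dy * dy) ** 0.5 < threshold
```

Brushing dust off a desk fails the check because the eyes are on the desk, not the air above the hand, so it never reaches the interface.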

With this kind of interaction there would be no object to lose, and it would of course save whatever entity provides this service the costs of hardware and maintenance.

We must note that such a design might not play well cinematically, as viewers might not understand what was happening at first, but understanding the hardware is not critical to understanding the plot-critical effects of using the technology.

Cyborgs in social space

A last question is about the invisibility of the technology. This can cause problems when a user is known to be hearing but is functionally deaf because they are listening to music loudly, and the people around them can’t tell. Someone could speak to the user and read their non-response as disrespect. It could cause safety problems as, say, a bicyclist barrels toward them on a sidewalk, ringing their bell, expecting the user to move. And it allows privacy abuse, as a user can take pictures in circumstances that should be private.

Joe, the moment he is taking a picture of Beth.

One solution would be to make the presence of the tech and its interactions quite visible: glowing pupils and large, obvious gestural controls, for example. But in a world where everyone has the technology, Zed-Eyes can simply limit photography to permitted places and times, and according to the preferences of the people in the photograph. If someone is listening to music and functionally deaf, a real-time overlay could inform the people around them: “This guy is listening to music.” If a place is private, the picture option could be disabled, with feedback to the user: “Sorry, pictures are not allowed here.”
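That policy amounts to a straightforward gate in front of the HUD’s camera option. A hypothetical sketch, assuming Zed-Eyes already knows the place’s policy and each in-frame subject’s preference (both are my assumptions):

```python
def may_photograph(place_allows: bool,
                   subjects_consent: list) -> tuple:
    """Gate the photo option on place policy and on the preferences
    of everyone in the frame; return (allowed, feedback message)."""
    if not place_allows:
        return False, "Sorry, pictures are not allowed here."
    if not all(subjects_consent):
        return False, "Someone in frame has opted out of photos."
    return True, ""
```

The feedback string matters as much as the refusal: a silently greyed-out option would leave the user wondering whether their grain was malfunctioning.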

The visibility we want for ubiquitous technology can be virtual, and provide feedback to everyone involved.

3 of 3: Brain Hacking

The hospital doesn’t have the equipment to decrypt and download the actual data. But Jane knows that the LoTeks can, so they drive to the ruined bridge that is the LoTek home base. As mentioned earlier under Door Bombs and Safety Catches, the bridge guards nearly kill them thanks to a poorly designed defensive system. Once again Johnny is not impressed by the people who are supposed to help him.

When Johnny has calmed down, he is introduced to Jones, the LoTek codebreaker who decrypts corporate video broadcasts. Jones is a cyborg dolphin.

jm-24-jones-animated

Jones has not just an implant like Johnny or an augmented nervous system like Jane, but a full neural brain interface that gives him active control. The thing behind his eye and under the cable can rotate, and he can also direct and control an external microwave radar dish. In the background there are a lot of cables and blinking lights apparently connecting Jones to the LoTek video broadcast gear.

For his part, Johnny is sitting in a chair, his upper head strapped into a helmet-like brain scanner. This one is very big and clunky, perhaps because it is salvaged old technology, or perhaps because it is not just a passive scanner and so needs additional elements and power to actively modify the brain.

jm-26-download-b

When this starts operating, we see the same strobing white light flashes that the first scanner used.

J-Bone, the LoTek leader, uses a handheld camera to feed the first access code image into the system. This is yet another piece of talking technology, announcing that the first image has been loaded.

jm-26-download-c

The captured image is processed to remove the perspective keystoning and displayed on one of three small panels mounted side by side on the wall. That specialised displays exist solely for showing three such images suggests this form of access code is a standard method of data protection in 2021. The other two panels display rolling static.

jm-26-download-d-adjusted

Wait…static?

Why is there static in a 2021 system for displaying computer images? It’s not just because they’re analog: old CRT computer monitors went blank if there was nothing to display. This is a missing or scrambled signal.

Since the LoTeks rely on scavenged technology, it’s quite likely that they are the last people on the planet still using coax video cables. Another possibility is that this is a deliberate imitation, as we saw earlier with the digital fax machine that made analog sounds. Computer graphics programmers are constantly wondering whether the screen is black because they didn’t draw anything, or black because they accidentally drew everything in that color. The rolling static makes it clear that there is no image to display, not that the image is blank.
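In other words, static is the panel’s null indicator, distinct from any legitimate image. The fallback is trivial to express; this sketch is my own illustration of the idea, not anything shown on screen:

```python
import random

def make_static_frame(width=8, height=8):
    """A frame of rolling static: random black-and-white pixels."""
    return [[random.choice((0, 255)) for _ in range(width)]
            for _ in range(height)]

def panel_frame(image):
    """What a code-image panel draws this tick: static when there is
    no signal at all, the image itself (even an all-black one) otherwise."""
    return make_static_frame() if image is None else image
```

An all-black code image still renders as black, so “nothing received” and “received nothing visible” can never be confused.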

The first download attempt is interrupted by the Yakuza attacking the bridge. There’s some equipment damage, but by the end of the fight Johnny and company have recovered the second access code image.

jm-26-download-f-adjusted

Still not enough, but Johnny now attempts to “hack his own brain” which is successful (discussed below). The data is finally downloaded and the LoTeks broadcast the cure for NAS worldwide.

Tech Tease

The hacking and downloading take place in another virtual reality space, the internal representation of the implant. These sequences are action-packed and filled with eye catching visuals. If we wanted, there’s much that could be written about, from the visual representations of hacking used in film and TV to the advisability of transmitting vital scientific data through a video encoder. But we never get to see the interface!

Instead, we see Johnny just sit and do nothing other than maintain a death grip on the chair armrests and try not to grind his teeth into fragments. According to the running commentary on the hack provided by J-Bone, Johnny is performing actions in VR. It’s possible that the LoTek brain scanner is a true brain interface that gives him active control by thought alone, with no visual or audio feedback.

But this is evidently high-grade encryption, which could only be broken by an expert hacker. Without visible controls for the brain scanner, the expert hacker would need to be using a direct brain interface. And the hacker would naturally have their own avatar. The only person present who definitely meets all these requirements is not Johnny, but Jones.

Could Jones be really doing all the work? In the original short story it was Jones, and here he’s certainly doing something in virtual reality. Johnny would make a useful distraction, and J-Bone might deliberately mislead the non-LoTek bystanders to keep Jones a secret.

jm-26-download-g

Whether it’s Johnny or Jones, we only get to see what happens, not how. Rather than end on this disappointing note, I’ll now jump back to discuss the more rewarding interfaces for the phone calls and the cyberspace search sequence.

Brain Scanning

The second half of the film is all about retrieving the data from Johnny’s implant without the full set of access codes. Johnny needs to get the data downloaded soon or he will die from the “synaptic seepage” caused by squeezing 320G of data into a system with 160G capacity. The bad guys would prefer to remove his head and cryogenically freeze it, allowing them to take their time over retrieval.

1 of 3: Spider’s Scanners

The implant cable interface won’t allow access to the data without the codes. To bypass this protection requires three increasingly complicated brain scanners, two of them medical systems and the final a LoTek hacking device. Although the implant stores data, not human memories, all of these brain scanners work in the same way as the Non-invasive, “Reading from the brain” interfaces described in Chapter 7 of Make It So.

The first system is owned by Spider, a Newark body modification specialist. Johnny sits in a chair, with an open metal framework surrounding his head. There’s a bright strobing light, switching on and off several times a second.

jm-20-spider-scan-a

Nearby a monitor shows a large rotating image of his head and skull, and three smaller images on the left labelled as Scans 1 to 3.

jm-20-spider-scan-b

The largest image resembles a current-day MRI or CT display. It is drawn on a regular flat 2D display rather than as a 3D holographic projection, so it does not qualify as a volumetric projection, even though a current-day computer graphics programmer might call it one. The topmost Scan 1 is the head viewed from above in the same rendering style. Scan 2 in the middle shows a bright spot around the implant, and Scan 3 shows a circuit board, presumably the implant itself. The background is blue, which so far has been common but not as predominant as in other science fiction interfaces. Chris suggests this is because blue LEDs were not common in 1995: the physical lights we see are red and green, and likewise the onscreen graphics use many bright colors.

jm-20-spider-scan-c

Occasionally a purple bar slides across the main image. It perhaps represents some kind of processing update, but since the image is already rotating, that seems superfluous. At one point the color of the main image changes to red, with a matching red sliding bar, but we don’t know why. All the smaller images rotate or flash regularly, with faint ticking sounds as they do.

From this system, Spider is able to tell Johnny that there is a problem with his implant and it must be painful. (Understandably, Johnny is not impressed with this less than helpful diagnosis.) Unlike either the scanner at Newark Airport or the LoTek binoculars, there are no obvious messages or indicators providing this information. But this is a specialised piece of medical technology rather than a public access system, so presumably Spider has sufficient expertise to interpret the displays without needing large popup text.

2 of 3: Hospital Scanner

Spider takes Johnny to a hospital for a more thorough scan. Here the first step is attaching a black flexible strip with various cables around his head. His implant cable is also connected.

jm-21-hospital-scan-b

There isn’t a clear shot of the entire system, but behind Johnny is a CRT monitor and to his left, our right, is a bank of displays that look like electronic oscilloscopes. Since embedded body electronics are common in the world of Johnny Mnemonic, that is probably exactly what they are intended to be. Spider adjusts some controls on these.

jm-21-hospital-scan-c

The oscilloscopes show no text, just green lines and shapes. The CRT behind Johnny is now showing the same head image that we saw at the end of the previous scan.

jm-21-hospital-scan-d

In front of the oscilloscopes is a PC keyboard from the 1990s. In 2021 this will look even older, but this entire hospital is portrayed as a shoestring operation relying on donations and salvage. Spider types on the keyboard, and the CRT changes to show a lot of scrolling text.

jm-21-hospital-scan-e

This is enough for Spider to announce that the “data” is the cure for NAS, the worldwide epidemic disease that Jane is showing symptoms of. Again it’s not clear how he can determine this, as the data is still protected by the access codes. Perhaps the scrolling text is unencrypted metadata in the implant that is more easily retrieved. Given the apparently hazardous life of a mnemonic courier, it would make sense to attach the equivalent of a sticky label to the implant, briefly describing the contents and who they should be delivered to.
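Such a label is easy to imagine as a small cleartext header prepended to the encrypted payload, readable without the access codes. A sketch of the idea; the format and field names here are entirely my invention:

```python
import hashlib
import json

def pack_courier_payload(encrypted_blob: bytes,
                         contents: str, recipient: str) -> bytes:
    """Prepend a cleartext 'sticky label' to an encrypted payload so a
    scanner can read what it is and who it's for, without the codes."""
    label = json.dumps({
        "contents": contents,
        "deliver_to": recipient,
        "sha256": hashlib.sha256(encrypted_blob).hexdigest(),
    }).encode()
    # 4-byte length prefix, then the label, then the opaque ciphertext.
    return len(label).to_bytes(4, "big") + label + encrypted_blob

def read_label(payload: bytes) -> dict:
    """Recover only the cleartext label; the blob stays encrypted."""
    n = int.from_bytes(payload[:4], "big")
    return json.loads(payload[4:4 + n])
```

The hash lets a recipient verify the payload arrived intact, which seems prudent for cargo that degrades the courier’s synapses in transit.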

(This is also the point where one has to ask why this valuable data is encrypted and protected to begin with. Using a mnemonic courier for distribution makes sense, to avoid content filters on the Internet. But now the data is here in Newark, with the intended recipients, so why is it so hard to get at? The best answer I can think of is that the scientists wanted to ensure that the mnemonic courier couldn’t keep the data for themselves and sell it to the highest bidder.)

The third of the three brain interfaces warrants its own post, coming up next. 

The Memory Doubler

In Beijing, Johnny steps into a hotel lift and pulls a small package out of his pocket. He unwraps it to reveal the “Pemex MemDoubler”.

jm-4-memdoubler-a

Johnny extends the cable from the device and plugs it into the implant in his head. The socket glows red once the connection is made.

jm-4-memdoubler-b-adjusted
jm-4-memdoubler-c-adjusted

Analysis: The jack

The jack looks like an audio plug and, like most audio plugs, is round with no rotational-orientation requirement. It also has a bulbous rather than pointed tip. Both of these are good design choices: Johnny can’t see the socket directly, and while accidentally poking yourself with a headphone-style point is unlikely to be harmful, it would certainly be irritating.

The socket’s glow would be a useful indicator that the thing is working, but Johnny can’t see it! Probably these sockets and jacks are produced and used for other devices as well, as red status lights are common in this world.

There are easier and more convenient fictional brain plug interfaces, such as the neck plugs previously discussed on this website for Ghost In The Shell. But Johnny doesn’t want his implant to be too obvious, so this not so convenient plug may be a deliberate choice. Perhaps he tells inquisitive people that it’s for his Walkman.

Analysis: The device

The product name got a few chuckles from audiences in the 1990s, as the name is similar to a common classic Macintosh extension at the time, the Connectix RAM Doubler. This applied in-memory lossless data compression techniques to allow more or larger programs to run within the existing RAM.

The MemDoubler is apparently a software or firmware updater, modifying Johnny’s implant to use brain tissue twice as efficiently as before. It has voice output, again in a slightly artificial-sounding but not unpleasant voice. This announces that Johnny’s current capacity is 80 gigabytes. As the update is applied, a glowing progress bar gradually fills until the voice announces the new capacity of 160G.

jm-4-memdoubler-d

(Going from 80G to 160G seems quaint today. But we should remember that the value of a mnemonic courier is secrecy, not quantity.)

Why does the MemDoubler need voice output? For such a simple task, the progress bar and a three-digit numeric counter would seem adequate. But if there are complications—which for something wired into the brain might give an all too literal meaning to “fatal error”—a voice announcement could include much more detail about the problem, or even alert bystanders if Johnny is rendered unconscious by it. (Given how current software installers operate, Johnny is fortunate that the MemDoubler did not insist on reciting the entire end user license agreement and warranty before the update could start.) Maybe the visual should be the default (to respect his professional need for secrecy), with the voice announcement reserved for alerts.

It’s also interesting that Johnny installs this immediately before he needs it, in the lift that is taking him to the hotel room where he will receive the data to be stored. Suppose someone else had been in the lift with him? In this world of routine body implants doubling your memory is probably not a crime, but at the time of writing diabetics will inject themselves in private even though that is harmless and necessary. Perhaps body-connected technology will be common enough in 2021 that public operation is considered normal, just as we have become accustomed to mobile phone conversations being carried out in public.

Johnny Mnemonic (1995): Overview

The “Internet 2021” shot introduces the cyberspace interface and environment that forms the backdrop for the film. (There’s also a lengthy and unhelpful text crawl, but we’ll pass over that.) Now let’s introduce the film using plain words instead.

johnny-mnemonic-film-images-8a812d52-ea68-4621-bf4a-e4855cf1bb6

When discussing the interfaces in a film it helps to know a little about the context in which it was made. I’ll talk more about this at the end, but for now you need to know that Johnny Mnemonic was released in 1995 and is both a cyberpunk and virtual reality film.

Cyberpunk was a subgenre of science fiction which began in the 1980s. Cyberpunk authors were the first to write extensively about personal computing technology, world wide computer networks, and virtual reality. By the end of the 1990s cyberpunk ideas had been absorbed into mainstream science fiction.

At the time of writing, 2016, virtual reality is a hot topic with megabytes devoted online to the prospects and implications of the Oculus Rift, HTC Vive, and others. This “VR Boom” is actually the second of these, not something new. The first virtual reality boom took place in the mid 1990s, and Johnny Mnemonic was released in the middle of it. By the end of the 1990s virtual reality, like cyberpunk, had largely faded away.

The plot

Johnny Mnemonic takes place in 2021. It’s a cyberpunk world, with corporations that are more powerful than governments and employ Yakuza gangsters to do their dirty work. There’s also a serious new disease, Nerve Attenuation Syndrome, with no known cure. The Johnny of the title is a mnemonic courier, someone who physically transports important data from place to place by embedding it in their brain. He needs to do one last job before retiring.

In a Beijing hotel he uploads 320G of “data” from a small group of renegade scientists employed by the Pharmakom medical corporation, to be delivered to Newark, New Jersey. The 320G is significant because it has overloaded Johnny’s capacity, and he will die if the data is not downloaded soon. In what will be a recurring plot element, heavily armed thugs who want to prevent the data being released kill the scientists and attempt to kill Johnny. During the fight, three images, the “Access Code” needed to download the data, are partly lost.

Johnny arrives in Newark, where the same people try to kill him again. He is rescued by the other lead character, Jane, a bodyguard who comes to his aid on the promise of lots of money. On the run from an ever-increasing number of people trying to find and kill them, Johnny and Jane fall in with the LoTeks, resistance fighters who hack into corporate networks and release information that corporations want to keep secret. (The LoTeks themselves are not against technology, but their chosen lifestyle restricts them to using what they can scavenge rather than being lavishly equipped with the latest and greatest.)

Johnny learns in quick succession that Jane has early onset NAS symptoms and that the “data” locked up in his head is a cure for NAS. As a cyberpunk corporation, Pharmakom is naturally keeping it secret just to make more money. Without the full access code, the only hope to extract the data is Jones, a cybernetically enhanced dolphin working with the LoTeks. After a last climactic battle, Johnny with the help of Jones is able to “hack his own brain” and recover the data, the cure is released to the world, and Johnny and Jane can live somewhat more happily (this is cyberpunk) ever after.

Johnny Mnemonic (in this review always referring to the film, not the short story, unless stated otherwise) is packed with interfaces, of which the most interesting and memorable is an extended cyberspace scene around the middle. Like the gestural interface of Minority Report, it is a wonderfully, almost obsessively, detailed imagining of the near future. The value of these predictions, as with most science fiction, is not whether they were correct or not. Predictions are much more interesting for what they tell us about the hopes, expectations, and dreams at the time they were made. Johnny Mnemonic, made in 1995 and set in 2021, shows us how the Internet and World Wide Web were expected to develop over the next twenty five years. As I write this, there’s five years to go.

Let’s jack in and see how it holds up!

IMDB: https://www.imdb.com/title/tt0113481/