Panther Glove Guns

As a rule I don’t review lethal weapons on scifiinterfaces.com. The Panther Glove Guns appear to be remote-bludgeoning beams, so this one kind of sneaks by. Also, I’ll confess in advance that there’s not a lot that affords critique.

We first see the glove guns in the 3D printer output with the kimoyo beads for Agent Ross and the Dora Milaje outfit for Nakia. They are thick weapons that fit over Shuri’s hands and wrists. I imagine they would be very useful to block blades and even disarm an opponent in melee combat, but we don’t see them in use this way.

The next time we see them, Shuri is activating them, though we don’t see how. The panther heads thrust forward, their mouths open wide, and the “necks” glow a hot blue. When the door before her opens, she immediately raises the weapons at the guards (who are loyal to the usurper Killmonger) and fires.

A light-blue beam shoots out of the mouths of the weapons, knocking the guards off the platform. Interestingly, one guard is lifted up and thrown to his 4 o’clock. The other is lifted up and thrown to his 7 o’clock. It’s not clear how Shuri instructs the weapons to produce different, particular knock-down effects. But we’ve seen all over Black Panther that brain-computer interfaces (BCIs) are a thing, so it’s diegetically possible she’s simply imagining where she wants the guards to be thrown, and then pulling a trigger, or clenching her fist around a rod, or just thinking “BAM!” to activate. The force-bolt strikes them right where it needs to so that, like billiard balls, they get knocked in the desired direction. As with all(?) brain-computer interfaces, there is no interaction to critique.

After she dispatches the two guards, still wearing the gloves, she throws a control bead onto the Talon. The scene is fast and blurry, and it’s unclear how she holds and releases the bead from the glove. Was it in the panther’s jaw the whole time? It could be another BCI, of course: she just thought about where she wanted it, flung her arm, and let the AI decide when to release it for perfect targeting. The Talon is large and she doesn’t seem to need a great deal of accuracy with the bead, but for more precise operations, AI targeting would make more sense than, say, having the panther heads disintegrate on command to free her hands.

Later, after Killmonger dispatches the Dora Milaje, Shuri and Nakia confront him by themselves. Nakia gets in a few good hits but is thrown from the walkway. Shuri throws some more bolts his way, though he doesn’t appear to even notice. I note that the panther gloves would be very difficult to aim, since there’s no continuous beam providing feedback and she doesn’t have a gun sight to help her. So, again—and I’m sorry, because it feels like cheating—I have to fall back on an AI assist here. Otherwise it doesn’t make sense.

Then Shuri switches from one blast at a time to a continuous beam. It seems to be working, as Killmonger kneels from the onslaught.

This is working! How can I eff it up?

But then for some reason she—with a projectile weapon that is actively subduing the enemy and keeping her safe at a distance—decides to close the distance, allowing Killmonger to knock the glove guns aside with a spear tip, free himself, and destroy the gloves with a clutch of his Panther claws. I mean, I get that she was furious, but I expected better tactics from the chief nerd of Wakanda. Thereafter, the gloves spark when she tries to fire. So ends this print of the Panther Glove Guns.

As with all combat gear, it looks cool for it to glow, but we don’t want coolness to help an enemy target the weapon. So if it was possible to suppress the glow, that would be advisable. It might be glowing just for the intimidation factor, but for a projectile weapon that seems strange.

The panther head shapes remind an opponent that she is royalty (note that no other Wakandan combatants have ranged weapons) and fighting in Bast’s name, which, if you’re in the business of theocratic warfare, is fine, I suppose.

It’s worked so well in the past. More on this aspect later.

So, if you buy the brain-computer interface interpretation, AI targeting assist, and theocratic design, these are fine, with the cinegenic exception of the attention-drawing glow.


Black History Matters

Each post in the Black Panther review is followed by actions that you can take to support black lives.

When The Watchmen series opened with the Tulsa Race Massacre, many people were shocked to learn that this event was not fiction, reminding us just how much of black history is erased and whitewashed for the comfort of white supremacy (and fuck that). Today marks the beginning of Black History Month, and it’s a good opportunity to look back and (re)learn of the heroic figures and stories of both terror and triumph that fill black struggles to have their citizenship and lives fully recognized.

Library of Congress, American National Red Cross Photograph Collection

There are lots of events across the month. The African American History Month site is a collaboration of several government organizations (and it feels so much safer to share such a thing now that the explicitly racist administration is out of office and facing a second impeachment):

  • The Library of Congress
  • National Archives and Records Administration
  • National Endowment for the Humanities
  • National Gallery of Art
  • National Park Service
  • Smithsonian Institution
  • United States Holocaust Memorial Museum

The site, https://www.africanamericanhistorymonth.gov/, has a number of resources, including images, videos, and a calendar of events.

Today we can take a moment to remember and honor the Greensboro Four.

On this day, February 1, 1960: Through careful planning and enlisting the help of a local white businessman named Ralph Johns, four Black college students—Ezell A. Blair, Jr., Franklin E. McCain, Joseph A. McNeil, David L. Richmond—sat down at a segregated lunch counter at Woolworth’s in Greensboro, North Carolina, and politely asked for service. Their request was refused. When asked to leave, they remained in their seats.

Police arrived on the scene, but were unable to take action due to the lack of provocation. By that time, Ralph Johns had already alerted the local media, who had arrived in full force to cover the events on television. The Greensboro Four stayed put until the store closed, then returned the next day with more students from local colleges.

Their passive resistance and peaceful sit-down demand helped ignite a youth-led movement to challenge racial inequality throughout the South.

A last bit of amazing news to share today is that Black Lives Matter has been nominated for the Nobel Peace Prize! The movement was co-founded by Alicia Garza, Patrisse Cullors, and Opal Tometi in response to the acquittal of Trayvon Martin’s murderer. It got a major boost amid the worldwide outrage of 2020, and has grown into a global movement working to improve the lives of the entire black diaspora. May it win!

UX of Speculative Brain-Computer Inputs

So much of the technology in Black Panther appears to work by mental command (so far: Panther Suit 2.0, the Royal Talon, and the vibranium sand tables) that…

  • before we get into the Kimoyo beads, or the Cape Shields, or the remote driving systems…
  • before I have to dismiss these interactions as “a wizard did it” style non-designs
  • before I review other brain-computer interfaces in other shows…

…I wanted to check on the state of the art of brain-computer interfaces (BCIs) and see how our understanding has advanced since I wrote the Brain interface chapter in the book, back in the halcyon days of 2012.

Note that I am deliberately avoiding the tech side of this question. I’m not going to talk about EEG, PET, MRI, or fMRI. (Though they’re linked in case you want to learn more.) Modern BCI technologies are evolving too rapidly to bother with an overview of them; they’ll have changed in the real world by the time I press “publish,” much less by the time you read this. And sci-fi tech is most often a black box anyway. But the human part of the human-computer interaction model changes much more slowly. We can look to the brain as a relatively unalterable component of the BCI question, leading us to two believability questions for sci-fi BCIs.

  1. How can people express intent using their brains?
  2. How do we prevent accidental activation using BCI?

Let’s discuss each.

1. How can people express intent using their brains?

In the see-think-do loop of human-computer interaction…

  • See (perceive) has been a subject of visual, industrial, and auditory design.
  • Think has been a matter of human cognition as informed by system interaction and content design.
  • Do has long been a matter of some muscular movement that the system can detect, to start its matching input-process-output loop. Tap a button. Move a mouse. Touch a screen. Focus on something with your eyes. Hold your breath. These are all ways of “doing” with muscles.
The “bowtie” diagram I developed for my book on agentive tech.

But the first promise of BCI is to let that doing part happen with your brain. The brain isn’t a muscle, so what actions are BCI users able to take in their heads to signal to a BCI system what they want it to do? The answer to this question is partly physiological, about the way the brain changes as it goes about its thinking business.

Ah, the 1800s. Such good art. Such bad science.

Our brains are a dense network of bioelectric signals, chemicals, and blood flow. But it’s not chaos. It’s organized. It’s locally functionalized, meaning that certain parts of the brain are predictably activated when we think about certain things. But it’s not like the Christmas lights in Stranger Things, with one part lighting up discretely at a time. It’s more like an animated proportional symbol map, with lots of places lighting up at the same time to different degrees.

Illustrative composite of a gif and an online map demo.

The sizes and shapes of what lights up may vary slightly between people, but basic maps of healthy, undamaged brains are similar to each other. Lots of work has gone into mapping these functional areas, with researchers showing subjects lots of stimuli and noting what areas of the brain light up. Test enough subjects and you can build a pretty good functional map of concepts. Thereafter, you can take a “picture” of a brain and cross-reference your maps to reverse-engineer what is being thought.

From Jack Gallant’s semantic maps viewer.

Right now those pictures are pretty crude and slow, but so were the first photographs in the world. In 20–50 years, we may be able to wear baseball caps that provide much higher-resolution, real-time input of the concepts being thought. In the far future (or, say, the alternate history of the MCU) it is conceivable to read these things from a distance. (Though there are significant ethical questions involved in such a technology, this post is focused on questions of viability and interaction.)

From Jack Gallant’s semantic map viewer

Similarly, the brain maps we have cover only a small percentage of an average adult vocabulary. Jack Gallant’s semantic map viewer (pictured and linked above) shows the maps for about 140 concepts, and estimates of an average active vocabulary run around 20,000 words, so we’re looking at less than one percent of what we can imagine (not even counting the infinite composability of language). But in the future we will not only have more concepts mapped, more confidently, but we will also have idiographs for each individual, like the personal dictionary in your smartphone.

All this is to say that our extant real-world technology confirms that thoughts are a believable input for a system. This includes linguistic inputs like “Turn on the light,” “activate the vibranium sand table,” and “Sincerely, Chris,” and even imagining the desired change, like a light going from dark to lit. It might even include subconscious thoughts that have yet to be formed into words.
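To make the decoding idea above concrete, here is a toy sketch in Python. The concept names and four-element “activation vectors” are entirely invented for illustration; real decoders like the Gallant Lab’s regress over thousands of fMRI voxels. But the underlying move is the same: take a “picture” of the brain and find the best match in your functional map.

```python
import math

# Hypothetical functional map: each concept's template activation pattern.
# Values are invented; a real map would hold thousands of voxel readings.
CONCEPT_MAP = {
    "light":   [0.9, 0.1, 0.2, 0.0],
    "dark":    [0.1, 0.9, 0.2, 0.0],
    "table":   [0.2, 0.1, 0.8, 0.3],
    "panther": [0.1, 0.2, 0.3, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two activation vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def decode(activation):
    """Return the concept whose template best matches a brain 'picture'."""
    return max(CONCEPT_MAP, key=lambda c: cosine(activation, CONCEPT_MAP[c]))

print(decode([0.85, 0.15, 0.25, 0.05]))  # → light
```

The point is just that decoding reduces to a similarity search: given a good enough map and a sharp enough picture, the best-matching concept is the thought.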

2. How do we prevent accidental activation?

But we know from personal experience that we don’t want all our thoughts to be acted on. Take, for example, the thoughts you have when you’re feeling hangry, or snarky, or dealing with a jerk in authority. Or the texts and emails you’ve composed in the heat of the moment but wisely deleted before they got you in trouble.

If a speculative BCI is being read by a general artificial intelligence, it can manage that just like a smart human partner would.

He is composing a blog post, reasons the AGI, so I will just disregard his thought that he needs to pee.

And if there’s any doubt, an AGI can ask. “Did you intend me to include the bit about pee in the post?” Me: “Certainly not. Also BRB.” (Readers following the Black Panther reviews will note that AGI is available to Wakandans in the form of Griot.)

If AGI is unavailable to the diegesis (and it would significantly change any diegesis of which it is a part), then we need some way to indicate when a thought is intended as input and when it isn’t. Having that be some mode of thought feels complicated and error-prone, like when programmers have to write regular expressions that escape escape characters. Better, I think, is to use a secondary channel, like a bodily interaction: touch forefinger and pinky together, for instance, and the computer understands that you intend your thoughts as input.
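The secondary-channel idea can be sketched in a few lines. Everything here is invented for illustration—the decoded-thought stream and the pinky-to-forefinger gesture sensor are assumptions, not any real API:

```python
# Minimal sketch: decoded thoughts count as input only while a deliberate
# bodily signal (a hypothetical finger-contact sensor) reads active.
def bci_input(decoded_thoughts, gesture_active):
    """Yield only the thoughts the user flagged as deliberate commands."""
    for thought, gesture in zip(decoded_thoughts, gesture_active):
        if gesture:        # secondary channel says "this one is intended"
            yield thought  # pass it along as a command

thoughts = ["turn on the light", "I need to pee", "activate the sand table"]
intended = [True, False, True]
print(list(bci_input(thoughts, intended)))
# → ['turn on the light', 'activate the sand table']
```

The design choice matters: the gating happens in a separate channel from the thoughts themselves, so there is no mode of thinking to get wrong.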

So, for any BCI that appears in sci-fi, we would want to look for the presence or absence of AGI as a reasonableness interpreter, and, barring that, for some alternate-channel mechanism for indicating deliberateness. We would also hope to see some feedback and correction loops to understand the nuances of the edge-case interactions, but these are rare in sci-fi.

Even more future-full

This all points to the question of what seeing/perceiving via a BCI might be like. A simple example might be a disembodied voice that only the user can hear.

A woman walks alone at night. Lost in thoughts, she hears her AI whisper to her thoughts, “Ada, be aware that a man has just left a shadowy doorstep and is following, half a block behind you. Shall I initialize your shock shoes?”

What other than language can be written to the brain in the far future? Images? Movies? Ideas? A suspicion? A compulsion? A hunch? How will people know what are their own thoughts and what has been placed there from the outside? I look forward to the stories and shows that illustrate new ideas, and warn us of the dark pitfalls.

The Royal Talon piloting interface

Since my last post, news broke that Chadwick Boseman passed away after a four-year battle with cancer. He kept his struggles private, so the news was sudden and hard-hitting. The fandom is still reeling. Black people, especially, have lost a powerful, inspirational figure. The world has also lost a courageous and talented young actor. Rise in Power, Mr. Boseman. Thank you for your integrity, bearing, and strength.

Photo CC BY-SA 2.0, by Gage Skidmore.

Black Panther’s airship is a triangular vertical-takeoff-and-landing vehicle called the Royal Talon. We see its piloting interface twice in the film.

The first time is near the beginning of the movie. Okoye and T’Challa are flying at night over the Sambisa forest in Nigeria. Okoye sits in the pilot’s seat in a meditative posture, facing a large forward-facing bridge window with a heads up display. A horseshoe-shaped shelf around her is filled with unactivated vibranium sand. Around her left wrist, her kimoyo beads glow amber, projecting a volumetric display around her forearm.

She announces to T’Challa, “My prince, we are coming up on them now.” As she disengages from the interface, retracting her hands from the pose, the kimoyo projection shifts and shrinks. (See more detail in the video clip, below.)

The second time we see it is when they pick up Nakia and save the kidnapped girls. On their way back to Wakanda we see Okoye again in the pilot’s seat. No new interactions are seen in this scene though we linger on the shot from behind, with its glowing seatback looking like some high-tech spine.

Now, these brief glimpses don’t give a review a lot to go on. But for the sake of completeness, let’s talk about that volumetric projection around her wrist. I note that it is a lovely echo of Dr. Strange’s interface for controlling the Eye of Agamotto and its time stone.

Wrist projections are going to be all the rage at the next Snap, I predict.

But we never really see Okoye look at this VP or use it. Cross-referencing the Wakandan alphabet, those five symbols at the top translate to 1 2 K R I, which doesn’t tell us much. (It doesn’t match the letters seen on the HUD.) It might be a visual do-not-disturb signal to onlookers, but if there’s other meaning that the letters and petals are meant to convey to Okoye, I can’t figure it out. At worst, I think having the movements of one wrist emphasized in your peripheral vision with a glowing display is a dangerous distraction from piloting. Her eyes should be on the “road” ahead of her.

The image has been flipped horizontally to illustrate how Okoye would see the display.

Similarly, we never get a good look at the HUD, or see Okoye interact with it, so I’ve got little to offer other than a mild critique: it looks full of pointless ornamental lines, many of which would obscure things in her peripheral vision, which is where humans most need help detecting anything other than motion. But modern sci-fi interfaces generally (and the MCU in particular) are in a baroque period, and this is partly how audiences recognize sci-fi-ness.

I also think that requiring a pilot to maintain full lotus is a little much, but certainly, if anyone can handle it, it’s the leader of the Dora Milaje.

One remarkable thing to note is that this is the first brain-input piloting interface in the survey. Okoye thinks what she wants the ship to do, and it does it. I expect, given what we know about kimoyo beads in Wakanda (more on these in a later post), what’s happening is she is sending thoughts to the bracelet, and the beads are conveying the instructions to the ship. As a way to show Okoye’s self-discipline and Wakanda’s incredible technological advancement, this is awesome.

Unfortunately, I don’t have good models for evaluating this interaction. And I have a lot of questions. As with gestural interfaces, how does she avoid a distracted thought from affecting the ship? Why does she not need a tunnel-in-the-sky assist? Is she imagining what the ship should do, or a route, or something more abstract, like her goals? How does the ship grant her its field awareness for a feedback loop? When does the vibranium dashboard get activated? How does it assist her? How does she hand things off to the autopilot? How does she take it back? Since we don’t have good models, and it all happens invisibly, we’ll have to let these questions lie. But that’s part of us, from our less-advanced viewpoint, having to marvel at this highly-advanced culture from the outside.


Black Health Matters

Each post in the Black Panther review is followed by actions that you can take to support black lives.

Thinking back to the terrible loss of Boseman: Fuck cancer. (And not to imply that his death was affected by this, but also:) Fuck the racism that leads to worse medical outcomes for black people.

One thing you can do is to be aware of the diseases that disproportionately affect black people (diabetes, asthma, lung scarring, strokes, high blood pressure, and cancer) and be aware that no small part of these poorer outcomes is racism, systemic and individual. Listen to Dorothy Roberts’ TED talk, calling for an end to race-based medicine.

If you’re the reading sort, check out the books Black Man in a White Coat by Damon Tweedy, or the infuriating history covered in Medical Apartheid by Harriet Washington.

If you are black, in Boseman’s memory, get screened for cancer as often as your doctor recommends it. If you think you cannot afford it and you are in the USA, this CDC website can help you determine your eligibility for free or low-cost screening: https://www.cdc.gov/cancer/nbccedp/screenings.htm. If you live elsewhere, you almost certainly have a better healthcare system than we do, but a quick search should tell you your options.

Cancer treatment is equally successful for all races. Yet black men have a 40% higher cancer death rate than white men and black women have a 20% higher cancer death rate than white women. Your best bet is to detect it early and get therapy started as soon as possible. We can’t always win that fight, but better to try than to find out when it’s too late to intervene. Your health matters. Your life matters.

Blade Runner (1982)

Whew. So we all waited on tenterhooks through November to see if somehow the Tyrell Corporation would be founded, develop and commercialize general AI, and then advance robot evolution into the NEXUS phase, all while in the background space travel was perfected, Off-world colonies and asteroid mining were established, global warming somehow drenched Los Angeles in permanent rain and flares, and flying cars appeared on the market. None of that happened. At least not publicly. So, with Blade Runner squarely part of the paleofuture past, let’s grab our neon-tube umbrellas and head into the rain to check out this classic, which features some interesting technologies and some interesting AI.

Release date: 25 Jun 1982

The punctuation-challenged crawl for the film:

“Early in the 21st Century, THE TYRELL CORPORATION advanced Robot evolution into the NEXUS phase—a being virtually identical to a human—known as a Replicant. [sic] The NEXUS 6 Replicants were superior in strength and agility, and at least equal in intelligence, to the genetic engineers who created them. Replicants were used Off-world as slave labor, in the hazardous exploration and colonisation of other planets. After a bloody mutiny by a NEXUS 6 combat team in an Off-world colony, Replicants were declared illegal on Earth—under penalty of death. Special police squads—BLADE RUNNER UNITS—had orders to shoot to kill, upon detection, any trespassing Replicants.

“This was not called execution. It was called retirement.”

Four murderous replicants make their way to Earth to try to find a way to extend their genetically shortened life spans. The Blade Runner named Deckard is coerced out of retirement by his ex-superior Bryant and the detective Gaff, and set to finding and “retiring” these replicants.

Deckard meets Dr. Tyrell to interview him, and at Tyrell’s request tests Rachel on a Voight-Kampff machine, which is designed to help blade runners tell replicants from people. Deckard and Rachel learn that she is a replicant. Then with Gaff, he follows clues to the apartment of one exposed replicant, Leon, where he finds a synthetic snake scale in the bathtub and a set of photographs in a drawer. Using a sophisticated image inspection tool in his home, he scans one of the photos taken in Leon’s apartment, until he finds the reflection of a face. He prints the image to take with him.

He takes the snake scale to someone with an electron microscope who is able to read the micrometer-scale “maker’s serial number” there. He visits the maker, a person named “the Egyptian,” who tells Deckard he sold the snake to Taffey Lewis. Deckard visits Taffey’s bar, where he sees Zhora, another of the wanted replicants, perform a stage act with a snake. She matches the picture he holds. He heads backstage to talk to her in her dressing room, posing as a representative of the “American Federation of Variety Artists, Confidential Committee on Moral Abuses.” When she finishes pretending to prepare for her next act, she attacks him and flees. He chases and retires her. Leon happens to witness the killing, and attacks Deckard. Leon has the upper hand but Deckard is saved when Rachel appears from the crowd and shoots Leon in the head. They return to his apartment. They totally make out.

Meanwhile, Roy has learned of a Tyrell employee named Sebastian who does genetic design. On orders, Pris befriends Sebastian and dupes him into letting her into his apartment. She then lets Roy in. Sebastian figures out that they are replicants, but confesses he cannot help them directly. Roy intimidates Sebastian into arranging a meeting between him and Dr. Tyrell. At the meeting, Tyrell says there is nothing that can be done. In fury, Roy kills Tyrell and Sebastian.

The police investigating the scene contact Deckard with Sebastian’s address. Deckard heads there, where he finds, fights, and retires Pris. Roy is there, too, but proves too tough for Deckard to retire. Roy could kill Deckard but instead opts to die peacefully, even poetically. Witnessing this act of grace, Deckard comes to appreciate the “humanity” of the replicants, and returns home to elope with Rachel.

In the last scene, Gaff hints to Deckard, with his unicorn origami, that Deckard himself is a replicant.


P.S. This series uses “The Final Cut” edit of the movie, so I don’t have to hear that wretchedly scripted voiceover from the theatrical release. If you can, I recommend watching the Final Cut.


Trivium Bracelet

The control token in Las Luchadoras is a bracelet that slaps on and instantly renders its wearer an automaton, subject to remote control.

Here’s something to note about this speculative technology. Orlak could have sold this, just this, to law enforcement around the world and made himself a very rich and powerful person. But the movie makes clear he is a mad engineer, not a mad businessperson, so we have to move on.

From Orlak’s point of view, getting the bracelet onto a victim should be very easy, and it is: he can slap it on in a flick. But it’s also trivially easy for a bystander to remove, which seems like…a design oversight. It should work more like a handcuff, requiring a key to remove. It can’t look like a handcuff, of course, since Orlak wants it to go unnoticed. But in addition to the security, a ratcheting handcuff mechanism would enable the device to fit wrists of many sizes. As it is, it appears to be tailor-made to an individual.

As the diagram illustrates, not all wrists are made the same, and it would not help Orlak to have to carry around a sizing set when he hasn’t had time to secretly get the victim’s measurements.

Lastly, the audience might have benefited from seeing some visual connection between the bracelet and the remote, like a shared material that had an unusual color or glow, but Orlak would not want this connection since it could help someone identify him as the controller.

Zed-Eyes

In the world of “White Christmas,” everyone has a networked brain implant called Zed-Eyes that enables heads-up overlays onto vision, personalized audio, and modifications to environmental sounds. The control hardware is a thin metal circle around a metal click button, separated by a black rubber ring. People can buy the device with different colored rings; we alternately see metal, blue, and black versions across the episode.

To control the implant, a person slides a finger (the thumb is easiest) around the rim of a tiny touch device. Because it responds to sliding across its surface, let’s say the device must use a sensor similar to the one seen in The Entire History of You (2011) or the IBM TrackPoint.

A thumb slide cycles through a carousel menu. Sliding can happen both clockwise and counterclockwise. It even works through gloves.


The button selects or executes the highlighted action. The complete list of carousel menu options we see in the episode:

  • Search
  • Camera
  • Music
  • Mail
  • Call
  • Magnify
  • Block
  • Map

The particular options change across scenes, so the menu is either context-aware or customizable. We will look at some of the particular functions in later posts. For now, let’s discuss the “platform” that is Zed-Eyes.
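The carousel interaction amounts to a small state machine. Here is a hedged sketch in Python; the wrap-around indexing is my assumption, since the episode only shows sliding and clicking, and the option order is as seen on screen:

```python
# Sketch of the Zed-Eyes carousel: a thumb slide cycles the highlight
# clockwise or counterclockwise, and the center button executes it.
OPTIONS = ["Search", "Camera", "Music", "Mail", "Call", "Magnify", "Block", "Map"]

class Carousel:
    def __init__(self, options):
        self.options = options
        self.index = 0  # "Search" starts highlighted (an assumption)

    def slide(self, clockwise=True):
        """Move the highlight one step in either direction, wrapping around."""
        step = 1 if clockwise else -1
        self.index = (self.index + step) % len(self.options)
        return self.options[self.index]

    def press(self):
        """The center click button selects the highlighted option."""
        return self.options[self.index]

menu = Carousel(OPTIONS)
menu.slide()                 # clockwise one step, highlight "Camera"
menu.slide(clockwise=False)  # back one step, highlight "Search"
print(menu.press())          # → Search
```

Note how the two input channels map cleanly: the continuous one (sliding) navigates, and the discrete one (clicking) commits. That separation is what keeps a two-control device usable for an eight-option menu.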

And now after the trailer: Johnny Mnemonic

The “Internet 2021” shot introduces the cyberspace interface and environment that forms the backdrop for the film. (There’s also a lengthy and unhelpful text crawl, but we’ll pass over that.) Now let’s introduce the film using plain words instead.


When discussing the interfaces in a film it helps to know a little about the context in which it was made. I’ll talk more about this at the end, but for now you need to know that Johnny Mnemonic was released in 1995 and is both a cyberpunk and virtual reality film.

Cyberpunk was a subgenre of science fiction which began in the 1980s. Cyberpunk authors were the first to write extensively about personal computing technology, world wide computer networks, and virtual reality. By the end of the 1990s cyberpunk ideas had been absorbed into mainstream science fiction.

At the time of writing, 2016, virtual reality is a hot topic, with megabytes devoted online to the prospects and implications of the Oculus Rift, HTC Vive, and others. This “VR boom” is actually the second of its kind, not something new. The first virtual reality boom took place in the mid-1990s, and Johnny Mnemonic was released in the middle of it. By the end of the 1990s virtual reality, like cyberpunk, had largely faded away.

Everything about the cast and crew

Everything about the tropes

Find it on iTunes (US)

Buy from Amazon

The plot.

Johnny Mnemonic takes place in 2021. It’s a cyberpunk world, with corporations that are more powerful than governments and employ Yakuza gangsters to do their dirty work. There’s also a serious new disease, Nerve Attenuation Syndrome, with no known cure. The Johnny of the title is a mnemonic courier, someone who physically transports important data from place to place by embedding it in their brain. He needs to do one last job before retiring.

In a Beijing hotel he uploads 320G of “data” from a small group of renegade scientists employed by the Pharmakom medical corporation, to be delivered to Newark, New Jersey. The 320G is significant because it has overloaded Johnny’s capacity, and he will die if the data is not downloaded soon. In what will be a recurring plot element, heavily armed thugs who want to prevent the data being released kill the scientists and attempt to kill Johnny. During the fight, three images, the “Access Code” needed to download the data, are partly lost.

Johnny arrives in Newark, where the same people try to kill him again. He is rescued by the other lead character, Jane, a bodyguard who comes to his aid on the promise of lots of money. On the run from an ever-increasing number of people trying to find and kill them, Johnny and Jane fall in with the LoTeks, resistance fighters who hack into corporate networks and release information that corporations want to keep secret. (The LoTeks themselves are not against technology, but their chosen lifestyle restricts them to using what they can scavenge rather than being lavishly equipped with the latest and greatest.)

Johnny learns in quick succession that Jane has early onset NAS symptoms and that the “data” locked up in his head is a cure for NAS. As a cyberpunk corporation, Pharmakom is naturally keeping it secret just to make more money. Without the full access code, the only hope to extract the data is Jones, a cybernetically enhanced dolphin working with the LoTeks. After a last climactic battle, Johnny with the help of Jones is able to “hack his own brain” and recover the data, the cure is released to the world, and Johnny and Jane can live somewhat more happily (this is cyberpunk) ever after.

Johnny Mnemonic (in this review always referring to the film, not the short story, unless stated otherwise) is packed with interfaces, of which the most interesting and memorable is an extended cyberspace scene around the middle. Like the gestural interface of Minority Report, it is a wonderfully, almost obsessively, detailed imagining of the near future. The value of these predictions, as with most science fiction, is not whether they were correct. Predictions are much more interesting for what they tell us about the hopes, expectations, and dreams of the time they were made. Johnny Mnemonic, made in 1995 and set in 2021, shows us how the Internet and World Wide Web were expected to develop over the following twenty-five years. As I write this, there are five years to go.

Let’s jack in and see how it holds up!

Brain VP

[Image: GitS-VPbrain-04]

When trying to understand the Puppet Master, Kusanagi’s team consults with their staff Cyberneticist, who displays for them in his office a volumetric projection of the cyborg’s brain. The brain floats free of any surrounding tissue, underlit in a screen-green translucent monochrome. The edge of the projection is a sphere that extends a few centimeters beyond the surface of the brain. A pattern of concentric lines periodically sweeps across the surface of this sphere. Otherwise, the "content" of the VP, that is, the brain itself, does not appear to move or change.

The Cyberneticist explains, while the team looks at the VP, "It isn’t unlike the virtual ghost-line you get when a real ghost is dubbed off. But it shows none of the data degradation dubbing would produce. Well, until we map the barrier perimeter and dive in there, we won’t know anything for sure."

[Image: GitS-VPbrain-01]

[Image: GitS-VPbrain-02]

[Image: GitS-VPbrain-03]

The VP does not appear to be interactive; it’s just an output. In fact, it’s just an output of the surface features of a brain. There’s no other information called out, no measurements, no augmenting data. Just a brain. Which raises the question: what purpose does this projection serve? Narratively, of course, it tells us that the Cyberneticist is getting deep into the neurobiology of the cyborg. But he doesn’t need that information. Kusanagi’s team doesn’t even need that information. Is this some sort of screen saver?

And what’s up with the little ripples? It’s possible that these little waves are more than just an artifact of the speculative technology’s refresh. Perhaps they convey that a process is currently underway, such as "mapping the barrier perimeter." But if that were the case, the Cyberneticist would want to see some sense of progress against a goal: How much time is estimated before the mapping is complete, and how much time has elapsed?

Of course any trained brain specialist would gain more information from looking at the surface features of a brain than we laypersons could. But if he’s really using this to do such an examination, the translucency and saturated monochrome color make that task prohibitively harder than just looking at the real thing an office away, or at a photograph, to say nothing of the rippling occlusion that routinely passes over the material being studied.

Unless there’s something I’m not seeing, this VP seems as useless as an electric paperweight.

Headrest jack

[Image: GitS-3Dscanner-010]

The jack mechanism in the intercept van is worth noting for its industrial design. Kusanagi has four jacks on the back of her neck in a square pattern, and four plugs sit on the headrest of her seat. To jack in, she simply leans back, and they seat perfectly. When she leans forward, the cables extend from the seat. Given the simple back-and-forward motion, it takes all of a second. Seems simple enough. But I’ve devoted a blog post to it, so of course you can guess it’s not really that simple. I can see two issues with this interface.

How do the jacks and plugs meet so perfectly?

Of course, she’s a super cyborg, so we can presume she can be quite precise in her movements. But does she have eyes/cameras on the back of her head, or precision kinesthetics and a perfect body memory for position? Even if she does, it would be better to accommodate some margin of error, to account for bumpy roads or action-packed driving maneuvers.

How to do this? One way would be a countersink, so that a sloppy approach is corrected by shape. The popular (and difficult-to-source) keyhole for drunk people uses this same principle. Unfortunately, in the case of this headrest jack, the base object is Kusanagi’s neck, which is functionally a cylinder. The cones on the back of her neck would have to be awkwardly large, or a miss would splay the plugs and force her to retry. Fortunately, the second issue leads us to another solution.

[Image: keyhole]
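The tolerance trade-off can be put in rough numbers. Here is a minimal back-of-the-envelope sketch, with every dimension and the half-angle being my own hypothetical assumptions: the cone mouth must be at least as wide as the worst expected miss, and a gentler funnel angle means a deeper, bulkier cone.

```python
import math

# Sketch of why neck-mounted countersinks get bulky.
# All dimensions and angles below are hypothetical assumptions.

def cone_mouth_radius(pin_radius_mm, max_miss_mm):
    """The cone mouth must cover the pin plus the worst expected miss."""
    return pin_radius_mm + max_miss_mm

def cone_depth(mouth_radius_mm, half_angle_deg):
    """Depth of a cone with the given mouth radius and half-angle.
    Gentler (smaller) half-angles funnel more forgivingly but protrude more."""
    return mouth_radius_mm / math.tan(math.radians(half_angle_deg))

# A 2 mm pin that must tolerate an 8 mm miss needs a 10 mm mouth;
# at a 45-degree half-angle, that cone sticks out 10 mm from her neck.
mouth = cone_mouth_radius(2.0, 8.0)
depth = cone_depth(mouth, 45.0)
```

Ten millimeters of cone protruding from the back of the neck is exactly the "awkwardly large" problem, which is why the countersink alone doesn't save the design.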

How does she genuinely rest against the seat when she doesn’t want to jack in?

Is that even an option here? How does she simply lean back for a road trip nap without being blasted awake by a neon green 3D Google Map?

If it were a magnetic connection, like Apple’s MagSafe power connectors, the jacks and plugs could be designed so that magnetic forces pull them together. But unlike MagSafe, these jacks could be electromagnets controlled by Kusanagi. This would not only ensure that only intended connections are made, but also help with the precision issues raised above: the electromagnets would snap the plugs into place even if they were slightly misaligned.

[Image: MagSafe]

An electromagnetic interface would also answer the question of how this works for taller or shorter cyborgs hoping to use the same headrest jack.
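To make both behaviors concrete, here is a toy model of the rider-controlled electromagnet idea. Every name and the capture radius are my assumptions, not anything shown in the film: the plugs connect only when Kusanagi energizes the magnets and each plug lands within their pull-in range.

```python
CAPTURE_RADIUS_MM = 5.0  # assumed pull-in range of the electromagnets

def connects(plug_offsets_mm, magnets_energized):
    """True when the rider wants to jack in AND every plug is close
    enough for the magnets to snap it home despite misalignment."""
    if not magnets_energized:
        return False  # leaning back for a nap: no unwanted connection
    return all(
        abs(dx) <= CAPTURE_RADIUS_MM and abs(dy) <= CAPTURE_RADIUS_MM
        for dx, dy in plug_offsets_mm
    )
```

One mechanism folds in both fixes: intent (the magnets are off unless she wants them on) and tolerance (small misses are pulled true).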

An automated solution

This solution does require complex mechanics in the body of the rider. That’s no problem in the Ghost in the Shell diegesis, but if we faced a challenge like this in the real world, implanting users with tech wouldn’t be a viable solution. Instead, we could push the technology back onto the van by letting it do the aiming. In the half-second she leans back, the van itself could look through a camera in the headrest to gauge the fit and position the plugs correctly with, say, linear actuators. This solution lets human users stay human, but would still ensure a precision fit where it is needed.
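The aiming step the van would perform might look something like this sketch. The square four-jack layout is from the film; every name, number, and the camera/actuator interface are hypothetical: detect the four jacks in the headrest camera’s frame, average their offset from the nominal pattern, and hand that correction to the actuators before the plugs seat.

```python
from dataclasses import dataclass

# Nominal positions (mm) of the four neck jacks in the headrest camera's
# frame. The square layout matches the film; the numbers are made up.
JACK_PATTERN = [(-10.0, -10.0), (10.0, -10.0), (-10.0, 10.0), (10.0, 10.0)]

@dataclass
class AlignmentCommand:
    dx_mm: float  # lateral correction for the linear actuators
    dy_mm: float  # vertical correction (also handles taller/shorter riders)

def compute_correction(detected_jacks):
    """Average offset between detected jack centers and the nominal pattern."""
    n = len(JACK_PATTERN)
    assert len(detected_jacks) == n, "need all four jacks in view"
    dx = sum(d[0] - p[0] for d, p in zip(detected_jacks, JACK_PATTERN)) / n
    dy = sum(d[1] - p[1] for d, p in zip(detected_jacks, JACK_PATTERN)) / n
    return AlignmentCommand(dx, dy)
```

A rider sitting 2 mm to the right of and 3 mm above nominal yields a (2, 3) correction for the actuators to apply in that half-second lean-back.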