UX of Speculative Brain-Computer Inputs

So much of the technology in Black Panther appears to work by mental command (so far: Panther Suit 2.0, the Royal Talon, and the vibranium sand tables) that…

  • before we get into the Kimoyo beads, or the Cape Shields, or the remote driving systems…
  • before I have to dismiss these interactions as “a wizard did it” style non-designs
  • before I review other brain-computer interfaces in other shows…

…I wanted to check on the state of the art of brain-computer interfaces (or BCIs) and see how our understanding had advanced since I wrote the Brain interface chapter in the book, back in the halcyon days of 2012.

Note that I am deliberately avoiding the tech side of this question. I’m not going to talk about EEG, PET, MRI, and fMRI. (Though they’re linked in case you want to learn more.) Modern BCI technologies are evolving too rapidly to bother with an overview of them. They’ll have changed in the real world by the time I press “publish,” much less by the time you read this. And sci-fi tech is most often a black box anyway. But the human part of the human-computer interaction model changes much more slowly. We can treat the brain as a relatively unalterable component of the BCI question, which leads us to two believability questions for sci-fi BCI.

  1. How can people express intent using their brains?
  2. How do we prevent accidental activation using BCI?

Let’s discuss each.

1. How can people express intent using their brains?

In the see-think-do loop of human-computer interaction…

  • See (perceive) has been a subject of visual, industrial, and auditory design.
  • Think has been a matter of human cognition as informed by system interaction and content design.
  • Do has long been a matter of some muscular movement that the system can detect, to start its matching input-process-output loop. Tap a button. Move a mouse. Touch a screen. Focus on something with your eyes. Hold your breath. These are all ways of “doing” with muscles.

The “bowtie” diagram I developed for my book on agentive tech.

But the first promise of BCI is to let that doing part happen with your brain. The brain isn’t a muscle, so what actions are BCI users able to take in their heads to signal to a BCI system what they want it to do? The answer to this question is partly physiological, about the way the brain changes as it goes about its thinking business.

Ah, the 1800s. Such good art. Such bad science.

Our brains are dense networks of bioelectric signals, chemicals, and blood flow. But they’re not chaos. They’re organized. They’re locally functionalized, meaning that certain parts of the brain are predictably activated when we think about certain things. But it’s not like the Christmas lights in Stranger Things, with one part lighting up discretely at a time. It’s more like an animated proportional symbol map, with lots of places lighting up at the same time to different degrees.

Illustrative composite of a gif and an online map demo.

The sizes and shapes of what’s lighting up may vary slightly between people, but basic maps of healthy, undamaged brains will be similar to one another. Lots of work has gone into mapping these functional areas, with researchers showing subjects lots of stimuli and noting what areas of the brain light up. Test enough subjects and you can build a pretty good functional map of concepts. Thereafter, you can take a “picture” of the brain and cross-reference your maps to reverse-engineer what is being thought.
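
To make that cross-referencing idea concrete, here is a toy sketch (mine, not how any lab actually does it) in which each concept has a template activation pattern and decoding a new scan just means finding the most similar template:

```python
# Toy sketch of "cross-referencing" a brain scan against a concept map.
# Entirely illustrative: real decoding uses far richer models; the concepts,
# regions, and numbers here are made up.
import numpy as np

# Hypothetical "functional map": average activation across a handful of
# regions when a subject thinks about each concept, learned from many scans.
concept_templates = {
    "dog":    np.array([0.9, 0.1, 0.3, 0.7]),
    "house":  np.array([0.2, 0.8, 0.6, 0.1]),
    "mother": np.array([0.5, 0.4, 0.9, 0.2]),
}

def decode(scan: np.ndarray) -> str:
    """Return the concept whose template best matches this scan (cosine similarity)."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(concept_templates, key=lambda c: cosine(scan, concept_templates[c]))

# A new, noisy "picture" of the brain while the subject thinks of... something.
new_scan = np.array([0.85, 0.15, 0.35, 0.6])
print(decode(new_scan))  # -> "dog"
```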

From Jack Gallant’s semantic maps viewer.

Right now those pictures are pretty crude and slow, but so were the first actual photographs in the world. In 20–50 years, we may be able to wear baseball caps that provide much higher-resolution, real-time input of the concepts being thought. In the far future (or, say, the alternate history of the MCU) it is conceivable that these things could be read from a distance. (Though there are significant ethical questions involved in such a technology, this post is focused on questions of viability and interaction.)

From Jack Gallant’s semantic map viewer

Similarly, the brain maps we have cover only a small percentage of an average adult vocabulary. Jack Gallant’s semantic map viewer (pictured and linked above) shows the maps for about 140 concepts, and estimates of an average active vocabulary run around 20,000 words, so we’re looking at less than one percent of what we can imagine (not even counting the infinite composability of language). But in the future we will not only have more concepts mapped, more confidently, but we will also have idiographic maps for each individual, like the personal dictionary on your smartphone.

All this is to say that our extant real-world technology confirms that thoughts are a believable input for a system. This includes linguistic inputs like “Turn on the light” and “activate the vibranium sand table” and “Sincerely, Chris,” and even imagining the desired change, like a light going from dark to lit. It might even include subconscious thoughts that have yet to be formed into words.

2. How do we prevent accidental activation?

But we know from personal experience that we don’t want all our thoughts to be acted on. Take, for example, the thoughts you have when you’re feeling hangry, or snarky, or dealing with a jerk in authority. Or those texts and emails that you’ve composed in the heat of the moment but wisely deleted before they get you in trouble.

If a speculative BCI is being read by a general artificial intelligence, it can manage that just like a smart human partner would.

He is composing a blog post, reasons the AGI, so I will just disregard his thought that he needs to pee.

And if there’s any doubt, an AGI can ask. “Did you intend me to include the bit about pee in the post?” Me: “Certainly not. Also BRB.” (Readers following the Black Panther reviews will note that AGI is available to Wakandans in the form of Griot.)

If AGI is unavailable to the diegesis (and it would significantly change any diegesis of which it is a part), then we need some way to indicate when a thought is intended as input and when it isn’t. Having that be some mode of thought feels complicated and error-prone, like when programmers have to write regular expressions that escape escape characters. Better, I think, is to use some secondary channel, like a bodily interaction. Touch forefinger and pinky together, for instance, and the computer understands that you intend your thoughts as input.
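
Here is a minimal sketch of that secondary-channel idea, with the hard parts (the thought decoder and the gesture sensor) stubbed out as hypothetical functions:

```python
# Minimal sketch of gating BCI input on a secondary channel.
# decode_thought() and confirm_gesture_held() are hypothetical stand-ins for
# a thought decoder and a forefinger-to-pinky contact sensor.
import time

def decode_thought():
    """Hypothetical: return the latest decoded thought as text, or None."""
    ...

def confirm_gesture_held():
    """Hypothetical: True only while forefinger and pinky are touching."""
    ...

def run_bci(execute):
    """Act on decoded thoughts only while the confirmation gesture is held."""
    while True:
        thought = decode_thought()
        if thought and confirm_gesture_held():
            execute(thought)      # deliberate: treat the thought as a command
        # otherwise the thought stays private and is discarded
        time.sleep(0.05)          # poll ~20 times per second
```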

So, for any BCI that appears in sci-fi, we would want to look for the presence or absence of AGI as a reasonableness interpreter, and, barring that, for some alternate-channel mechanism for indicating deliberateness. We would also hope to see some feedback and correction loops to understand the nuances of the edge-case interactions, but these are rare in sci-fi.

Even more future-full

This all points to the question of what seeing/perceiving via a BCI might be. A simple example might be a disembodied voice that only the user can hear.

A woman walks alone at night. Lost in thoughts, she hears her AI whisper to her thoughts, “Ada, be aware that a man has just left a shadowy doorstep and is following, half a block behind you. Shall I initialize your shock shoes?”

What other than language can be written to the brain in the far future? Images? Movies? Ideas? A suspicion? A compulsion? A hunch? How will people know what are their own thoughts and what has been placed there from the outside? I look forward to the stories and shows that illustrate new ideas, and warn us of the dark pitfalls.

The Royal Talon piloting interface

Since my last post, news broke that Chadwick Boseman has passed away after a four-year battle with cancer. He kept his struggles private, so the news was sudden and hard-hitting. The fandom is still reeling. Black people, especially, have lost a powerful, inspirational figure. The world has also lost a courageous and talented young actor. Rise in Power, Mr. Boseman. Thank you for your integrity, bearing, and strength.

Photo CC BY-SA 2.0, by Gage Skidmore.

Black Panther’s airship is a triangular vertical-takeoff-and-landing vehicle called the Royal Talon. We see its piloting interface twice in the film.

The first time is near the beginning of the movie. Okoye and T’Challa are flying at night over the Sambisa forest in Nigeria. Okoye sits in the pilot’s seat in a meditative posture, facing a large bridge window with a heads-up display. A horseshoe-shaped shelf around her is filled with unactivated vibranium sand. Around her left wrist, her kimoyo beads glow amber, projecting a volumetric display around her forearm.

She announces to T’Challa, “My prince, we are coming up on them now.” As she disengages from the interface, retracting her hands from the pose, the kimoyo projection shifts and shrinks. (See more detail in the video clip, below.)

The second time we see it is when they pick up Nakia and save the kidnapped girls. On their way back to Wakanda we see Okoye again in the pilot’s seat. No new interactions are seen in this scene, though we linger on the shot from behind, with its glowing seatback looking like some high-tech spine.

Now, these brief glimpses don’t give a review a lot to go on. But for the sake of completeness, let’s talk about that volumetric projection around her wrist. I note that it is a lovely echo of Dr. Strange’s interface for controlling the Eye of Agamotto, which houses the Time Stone.

Wrist projections are going to be all the rage at the next Snap, I predict.

But we never really see Okoye look at this VP or use it. Cross-referencing the Wakandan alphabet, those five symbols at the top translate to 1 2 K R I, which doesn’t tell us much. (It doesn’t match the letters seen on the HUD.) It might be a visual do-not-disturb signal to onlookers, but if there’s other meaning that the letters and petals are meant to convey to Okoye, I can’t figure it out. At worst, I think having the wrist movements of one hand emphasized in your peripheral vision by a glowing display is a dangerous distraction from piloting. Her eyes should be on the “road” ahead of her.

The image has been flipped horizontally to illustrate how Okoye would see the display.

Similarly, we never get a good look at the HUD, or see Okoye interact with it, so I’ve got little to offer other than a mild critique: it looks full of pointless ornamental lines, many of which would obscure things in her peripheral vision, precisely where humans need the most help detecting anything other than motion. But modern sci-fi interfaces generally (and the MCU in particular) are in a baroque period, and this is partly how audiences recognize sci-fi-ness.

I also think that requiring a pilot to maintain full lotus position is a little much, but certainly, if there’s anyone who can handle it, it’s the leader of the Dora Milaje.

One remarkable thing to note is that this is the first brain-input piloting interface in the survey. Okoye thinks what she wants the ship to do, and it does it. I expect, given what we know about kimoyo beads in Wakanda (more on these in a later post), that she is sending thoughts to the bracelet, and the beads are conveying those instructions to the ship. As a way to show Okoye’s self-discipline and Wakanda’s incredible technological advancement, this is awesome.

Unfortunately, I don’t have good models for evaluating this interaction. And I have a lot of questions. As with gestural interfaces, how does she keep a stray thought from affecting the ship? Why does she not need a tunnel-in-the-sky assist? Is she imagining what the ship should do, or a route, or something more abstract, like her goals? How does the ship grant her its field awareness for a feedback loop? When does the vibranium dashboard get activated? How does it assist her? How does she hand things off to the autopilot? How does she take it back? Since we don’t have good models, and it all happens invisibly, we’ll have to let these questions lie. But that’s part of what it means to marvel, from our less-advanced viewpoint, at this highly advanced culture from the outside.


Black Health Matters

Each post in the Black Panther review is followed by actions that you can take to support black lives.

Thinking back to the terrible loss of Boseman: Fuck cancer. (And not to imply that his death was affected by this, but also:) Fuck the racism that leads to worse medical outcomes for black people.

One thing you can do is to be aware of the diseases that disproportionately affect black people (diabetes, asthma, lung scarring, strokes, high blood pressure, and cancer) and be aware that no small part of these poorer outcomes is racism, systemic and individual. Listen to Dorothy Roberts’ TED talk, calling for an end to race-based medicine.

If you’re the reading sort, check out the books Black Man in a White Coat by Damon Tweedy, or the infuriating history covered in Medical Apartheid by Harriet Washington.

If you are black, in Boseman’s memory, get screened for cancer as often as your doctor recommends it. If you think you cannot afford it and you are in the USA, this CDC website can help you determine your eligibility for free or low-cost screening: https://www.cdc.gov/cancer/nbccedp/screenings.htm. If you live elsewhere, you almost certainly have a better healthcare system than we do, but a quick search should tell you your options.

Cancer treatment is equally successful for all races. Yet black men have a 40% higher cancer death rate than white men and black women have a 20% higher cancer death rate than white women. Your best bet is to detect it early and get therapy started as soon as possible. We can’t always win that fight, but better to try than to find out when it’s too late to intervene. Your health matters. Your life matters.

3 of 3: Brain Hacking

The hospital doesn’t have the equipment to decrypt and download the actual data. But Jane knows that the LoTeks can, so they drive to the ruined bridge that is the LoTek home base. As mentioned earlier under Door Bombs and Safety Catches, the bridge guards nearly kill them due to a poorly designed defensive system. Once again Johnny is not impressed by the people who are supposed to help him.

When Johnny has calmed down, he is introduced to Jones, the LoTek codebreaker who decrypts corporate video broadcasts. Jones is a cyborg dolphin.

Brain Scanning

The second half of the film is all about retrieving the data from Johnny’s implant without the full set of access codes. Johnny needs to get the data downloaded soon or he will die from the “synaptic seepage” caused by squeezing 320G of data into a system with 160G capacity. The bad guys would prefer to remove his head and cryogenically freeze it, allowing them to take their time over retrieval.

1 of 3: Spider’s Scanners

The implant cable interface won’t allow access to the data without the codes. To bypass this protection requires three increasingly complicated brain scanners, two of them medical systems and the final a LoTek hacking device. Although the implant stores data, not human memories, all of these brain scanners work in the same way as the Non-invasive, “Reading from the brain” interfaces described in Chapter 7 of Make It So.

The first system is owned by Spider, a Newark body modification specialist. Johnny sits in a chair, with an open metal framework surrounding his head. There’s a bright strobing light, switching on and off several times a second.


Nearby, a monitor shows a large rotating image of his head and skull, and three smaller images on the left labelled as Scans 1 to 3.

The Memory Doubler

In Beijing, Johnny steps into a hotel lift and pulls a small package out of his pocket. He unwraps it to reveal the “Pemex MemDoubler”.


Johnny extends the cable from the device and plugs it into the implant in his head. The socket glows red once the connection is made.



Itchy’s SFW Masturbation Chair

With the salacious introduction, “Itchy, I know what you’d like,” Saun Dann reveals himself as a peddler of not just booby-trapped curling irons, but also softcore erotica! The Life Day gift he gives to the old Wookiee is a sexy music video for his immersive media chair.


The chair sits in the family living room, and has a sort of helmet fixed in place such that Itchy can sit down and rest his head within it. On the outside of the helmet are lights that continuously blink out of sync with each other and seem unrelated to the actual function of the chair. Maybe a fairy-lights power indicator?



Dat glaive: Projectile gestures

TRIGGER WARNING: IF YOU ARE PRONE TO SEIZURES, this is not the post for you. In fact, you can just read the text and be quit of it. The more neurologically daring of you can press “MORE,” but you have been forewarned.

If the first use of Loki’s glaive is as a melee weapon, the second is as a projectile weapon. Loki primes it, it glows fiercely blue-white, and then he fires it with usually-deadly accuracy, to the sorrow of his foes.

This blog is not interested in the details of the projectile, but what is interesting is the interface by which he primes and fires it. How does he do it? Let’s look. He fires the thing 8 times over the course of the movie. What do we see there?

Brain interfaces as wearables

There are lots of brain devices, and the book has a whole chapter dedicated to them. Most of these brain devices are passive, merely needing to be near the brain to have whatever effect they are meant to have. (The chapter discusses, in turn: reading from the brain, writing to the brain, telexperience, telepresence, manifesting thought, virtual sex, piloting a spaceship, and playing an addictive game. It’s a good chapter that never got that much love. Check it out.)


This is a composite rendering of the shapes of most of the wearable brain control devices in the survey. Who can name the “tophat”?

Since the vast majority of these devices are activated by, well, you know, invisible brain waves, the most that can be pulled from them is the sartorial and social character of their industrial design. But there are two with genuine state-change interactions of note for interaction designers.

Star Trek: The Next Generation

The eponymous Game of S05E06 is delivered through a wearable headset. It is a thin band that arcs over the head from ear to ear, with two extensions out in front of the face that project visuals into the wearer’s eyes.


The only physical interaction with the device is activation, which is accomplished by depressing a momentary button located at the top of one of the temples. It’s a nice placement since the temple affords placing a thumb beneath it to provide a brace against which a forefinger can push the button. And even if you didn’t want to brace with the thumb, the friction of the arc across the head provides enough resistance on its own to keep the thing in place against the pressure. Simple, but notable. Contrast this with the buttons on the wearable control panels that are sometimes quite awkward to press into skin.

Minority Report (2002)

The second is the Halo coercion device from Minority Report. This is barely worth mentioning, since the interaction is by the PreCrime cop, and it is only to extend it from a compact shape to one suitable for placing on a PreCriminal’s head. Push the button and pop! it opens. While it’s actually being worn there is no interacting with it…or much of anything, really.


Head: Y U No house interactions?

There is a solid physiological reason why the head isn’t a common place for interactions: raising the hands above the heart requires a small bit of cardiac effort, so it wouldn’t be suitable for frequent interactions simply because, over time, that effort would add up to real work. Google Glass faced similar challenges, and my guess is that’s why it uses a blended interface of voice, head gestures, and a few manual gestures. Relying on purely manual interactions would violate the wearable principle of apposite I/O.

At least as far as sci-fi is telling us, the head is not often a fitting place for manual interactions.

The secret of the tera-keyboard


Many characters in Ghost in the Shell have a particular cybernetic augmentation that lets them use specially-designed keyboards for input.

Triple-hands

To control this input device, the user’s hands are replaced with cybernetic ones. Normally they look and behave like ordinary human hands. But when needed, the fingers of each hand split into three separate mini-fingers, which can move independently. These 30 spidery fingerlets triple the number of digits at play, dancing across the keyboard at a blinding 24 positions per second.


The tera-keyboard

The keyboards for which these hands were built have eight rows. The five rows nearest the user have single symbols. (QWERTY English?) The three rows farthest from the user have keys labeled with individual words. Six other keys at the top right are unlabeled. Each key glows cyan when pressed and is flush with the board itself. In this sense it works more like a touch panel than a keyboard. The board has around 100 keys in total.


What’s nifty about the keyboard itself is not the number of keys. Modern keyboards have about that many. What’s nifty is that you can see these keyboards are massively chorded, with screen captures from the film showing nine keys being pressed at once.


Let’s compare. (And here I owe a great mathematical debt of thanks to Nate Clinton for his mastery of combinatorics.) The keyboard I’m typing this blog post on has 104 keys and can handle five keys being pressed at once, i.e., a base key like “S” plus up to four modifier keys: shift, control, option, and command. If you do the math (100 base keys times 2⁴ modifier combinations), this allows for 1,600 different keypresses. That’s quite a large range of momentary inputs.

But on the tera-keyboard you’re able to press nine keys at once, and more importantly, it looks like any key can be chorded with any other key. If we’re conservative in the interpretation and presume that 9 keys must be pressed at once—leaving 6 fingerlets free to move into position for the next bit of input—that still adds up to 2,747,472,247,520 possible keypresses (≈2.7 trillion). That’s about nine orders of magnitude more than our measly 1,600. At 24 keypresses per second, that’s roughly 6.6 × 10¹³ distinguishable inputs per second.
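
If you want to check that arithmetic yourself (assuming the tera-keyboard offers the same 104 keys as my keyboard, and that any 9 of them can be chorded), a few lines of Python reproduce the numbers:

```python
# Back-of-the-envelope check of the chording math (Python 3.8+ for math.comb).
from math import comb

# My keyboard (assumed): 100 base keys, each combinable with any subset of
# 4 modifier keys (shift, control, option, command).
ordinary_keyboard = 100 * 2**4
print(ordinary_keyboard)                    # 1600 distinct keypresses

# Tera-keyboard: any 9 keys pressed at once, assuming 104 keys to choose from.
tera_keyboard = comb(104, 9)
print(tera_keyboard)                        # 2,747,472,247,520 (about 2.7 trillion)

print(tera_keyboard // ordinary_keyboard)   # ~1.7 billion times as many
print(tera_keyboard * 24)                   # ~6.6e13 distinguishable inputs per second
```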


So, ok, yes, fast, but it only raises the question:

What exactly is being input?

It’s certainly more than just characters. Unicode’s roughly 110,000 characters are a fraction of a fraction of this amount of data, and they cover most of the world’s scripts.

Is it words? Steven Pinker, in his book The Language Instinct, cites sources estimating that the number of words in an educated person’s vocabulary is around 60,000. This excludes proper names, numbers, foreign words, scientific terms, and acronyms, so it’s pretty conservative. Even if we double it, we’re still around the number of characters in Unicode. So even if the keyboard had one keypress for every word the user could possibly know and be thinking at any particular moment, the typist would only be using a fragment of its capacity.


The only thing that nears this level of data on a human scale is the human brain. With a common estimate of 100 billion neurons, each keypress could, for instance, single out any one of those neurons and report one of ten different states for it, 24 times a second, and still have capacity to spare.
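
For scale, here is the same back-of-the-envelope math, with the neuron count and the ten-state assumption as round placeholder numbers:

```python
# Rough scale comparison (same hedged assumptions as above).
from math import comb

states_per_press = comb(104, 9)          # ~2.7 trillion

candidates = {
    "Unicode characters":             110_000,
    "doubled 60k-word vocabulary":    120_000,
    "1 of 100 billion neurons x 10":  100_000_000_000 * 10,
}
for label, n in candidates.items():
    print(f"{label}: {n / states_per_press:.2e} of one keypress's state space")
# Characters and words barely register; only brain-scale data comes close.
```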

This also bypasses one of the concerns of introducing an input mechanism like this that requires active manipulation: the human brain doesn’t have the mechanisms to manage 30 digits and 9-key chording at this rate. To get it to where it could manage this kind of task would require fairly massive rewiring of the user’s brain. (And if you could do that, why bother with the computer?)

But if it’s a passive device, simply taking “pictures” of the brain and sharing those pictures with the computer, it doesn’t require that the human be reengineered, just re-equipped. It requires a very smart computer system able to cope with and respond to that kind of input, but we see that exact kind of artificial intelligence elsewhere in the film.

The “secret”

Because of the form factor of hands and keyboard, it looks like a manual input device. But looking at the data throughput, the evidence suggests that it’s actually a brain interface, meant to keep the computer up to date with whatever the user is thinking at that exact moment and responding appropriately. For all the futurism seen in this film, this is perhaps the most futuristic, and perhaps the most surprising.


Thermoptic camouflage


Kusanagi is able to mentally activate a feature of her skintight bodysuit and hair(?!) that renders her mostly invisible. It does not seem to affect her face by default. After her suit has activated, she waves her hand over her face to hide it. We do not see how she activates or deactivates the suit in the first place. She seems to be able to do so at will. Since this is not based on any existing human biological capacity, a manual control mechanism would need some biological or cultural referent. The gesture she uses—covering her face with open-fingered hands—makes the most sense, since even as a bare-handed gesture it means, “I can see you but you can’t see me.”

In the film we see Ghost Hacker using the same technology embedded in a hooded coat he wears. He activates it by pulling the hood over his head. This gesture makes a great deal of physical sense, similar to the face-hiding gesture. Donning a hood would hide your most salient physical identifier, your face, so having it activate the camouflage is a simple synecdochic extension.


The spider tank also features this same technology on its surface, where we learn that the surface is delicate: it is disabled by a rain of glass falling on it.


This tech is less than perfect, distorting the background behind it and occasionally flashing with vigorous physical activity. And of course it cannot hide the effects that the wearer is creating in the environment, as we see with splashes in the water and citizens in a crowd being bumped aside.

Since this imperfection runs counter to the wearer’s goal, I’d design a silent, perhaps haptic, feedback to let the wearer know when they’re moving too fast for the suit’s processors to keep up, as a reinforcement to whatever visual effects they themselves are seeing.
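
Something like this sketch, where the sensor and actuator calls are invented for illustration:

```python
# Illustrative sketch only: buzz the wearer when they outrun the camouflage.
# read_speed(), suit_max_speed, and pulse_haptics() are invented for this example.
def camouflage_feedback_loop(read_speed, suit_max_speed, pulse_haptics):
    """read_speed() -> meters/sec; pulse_haptics(strength) takes 0.0 to 1.0."""
    while True:
        speed = read_speed()
        if speed > suit_max_speed:
            # Buzz harder the further the wearer is over the limit,
            # reinforcing whatever visual shimmer they can already see.
            overage = min((speed - suit_max_speed) / suit_max_speed, 1.0)
            pulse_haptics(overage)
```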

UPDATE: When this was originally posted, I used the incorrect concept “metonym” to describe these gestures. The correct term is “synecdoche,” and the post has been updated to reflect that.