A special subset of spacesuit interfaces is the communication subsystem. I wrote a whole chapter about Communications in Make It So, but spacesuit comms bear special mention, since they're usually used in close physical proximity but still must be mediated by technology, the channels for detailed control are clumsy and packed, and these communicators are often being overseen by a mission control center of some sort. You'd think this is rich territory, but spoiler: There's not a lot of variation to study.
Every single spacesuit in the survey has audio. This is so ubiquitous and accepted that, after 1950, no filmmaker has seen the need to explain it or show an interface for it. So you'd think that we'd see a lot of interactions.
Spacesuit communications in sci-fi tend to be many-to-many with no apparent means of control. Not even a push-to-mute if you sneezed into your mic. It’s as if the spacewalkers were in a group, merely standing near each other in air, chatting. No push-to-talk or volume control is seen. Communication with Mission Control is automatic. No audio cues are given to indicate distance, direction, or source of the sound, or to select a subset of recipients.
The one seeming exception to the many-to-many communication is seen in the reboot of Battlestar Galactica. As Boomer is operating a ship above a ground crew, shining a light down on them for visibility, she has the following conversation with Tyrol.
Raptor 478, this is DC-1, I have you in my sights.
Copy that, DC-1. I have you in sight.
How’s it looking there? Can you tell what happened?
Lieutenant, don’t worry…about my team. I got things under control.
Copy that, DC-1. I feel better knowing you’re on it.
Then, when her copilot gives her a look about what she has just said, she says curtly to him, "Watch the light, you're off target." In this exchange there is clear evidence that the copilot has heard the first conversation, but her comment appears to be addressed to him alone, not broadcast for the others to hear. Additionally, we do not hear chatter going on between the ground crew during this exchange. Unfortunately, we do not see any of the conversationalists touch a control to give us an idea about how they switch between these modes. So, you know, still nothing.
More recent films, especially in the MCU, have seen all sorts of communication controlled by voice with the magic of General AI…pause for gif…
…but as I mention more and more, once you have a General AI in the picture, we leave the realm of critique-able interactions. Because an AI did it.
In short, sci-fi just doesn't care about showing audio controls in spacesuits, and isn't likely to start caring anytime soon. As always, if you know of something outside my survey, please mention it.
For reference, in the real world, a NASA astronaut has direct control over the volume of audio that she hears, using potentiometer volume controls. (Curiously, the numbers on them are not backwards, unlike the rest of the controls.)
A spacewalker uses the COMM dial switch mode selector at the top of the DCM to select between three different frequencies of wireless communication, each of which broadcasts to the other spacewalkers and the vehicle. When an astronaut is on one of the first two channels, transmission is voice-activated. But a backup, "party line" channel requires push-to-talk, and this is what the push-to-talk control is for.
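As an interaction model, that channel logic is simple enough to sketch. This is a hypothetical reconstruction for illustration, not NASA flight software; the channel names and the `ptt_pressed` flag are my own invention.

```python
# Hypothetical sketch of the EMU comm-mode logic described above:
# the two primary channels are voice-activated (VOX), while the backup
# "party line" channel transmits only while push-to-talk is held.

CHANNELS = {
    "A": {"vox": True},           # primary frequency 1
    "B": {"vox": True},           # primary frequency 2
    "PARTY_LINE": {"vox": False}, # backup channel, push-to-talk only
}

def is_transmitting(channel: str, voice_detected: bool, ptt_pressed: bool) -> bool:
    """Return True if the suit radio should key up on this channel."""
    mode = CHANNELS[channel]
    if mode["vox"]:
        return voice_detected   # VOX: any detected speech transmits
    return ptt_pressed          # party line: only while PTT is held

# On a VOX channel, speaking is enough; on the party line, it is not.
assert is_transmitting("A", voice_detected=True, ptt_pressed=False)
assert not is_transmitting("PARTY_LINE", voice_detected=True, ptt_pressed=False)
assert is_transmitting("PARTY_LINE", voice_detected=True, ptt_pressed=True)
```

Note how this model has no notion of recipients at all, which is exactly the many-to-many default the survey keeps showing.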
By default, all audio is broadcast to all other spacewalkers, the vehicle, and Mission Control. To speak privately, without Mission Control hearing, spacewalkers don’t have an engineered option. But if one of the radio frequency bands happens to be suffering a loss of signal to Mission Control, she can use this technological blind spot to talk with some degree of privacy.
Whatever it is, it ain’t going to construct, observe, or repair itself. In addition to protection and provision, suits must facilitate the reason the wearer has dared to go out into space in the first place.
One of the most basic tasks of extravehicular activity (EVA) is controlling where the wearer is positioned in space. The survey shows several types of mechanisms for this. First, if your EVA never needs you to leave the surface of the spaceship, you can go with mountaineering gear or sticky feet. (Or sticky hands.) We can think of maneuvering through space as similar to piloting a craft, but the outputs and interfaces have to be made wearable, like wearable control panels. We might also expect to see some tunnel in the sky displays to help with navigation. We’d also want to see some AI safeguard features, to return the spacewalker to safety when things go awry. (Narrator: We don’t.)
In Stowaway (2021) astronauts undertake unplanned EVAs with carabiners and gear akin to what mountaineers use. This makes some sense, though even this equipment needs to be modified for use with astronauts' thick gloves.
Sticky feet (and hands)
Though it's not extravehicular, I have to give a shout out to 2001: A Space Odyssey (1968), where we see a flight attendant manage their position in microgravity with special shoes that adhere to the floor. It's a lovely example of a competent Hand Wave. We don't need to know how it works because it says, right there, "Grip shoes." Done. Though props to the actress Heather Downham, who had to make up a funny walk to illustrate that it still isn't like walking on earth.
With magnetic boots, seen in Destination Moon, the wearer simply walks around and manages the slight awkwardness of having to pull a foot up with extra force, and having it snap back down on its own.
Battlestar Galactica added magnetic handgrips to augment the control provided by magnetized boots. With them, Sergeant Mathias is able to crawl around the outside of an enemy vessel, inspecting it. While crawling, she holds grip bars mounted to circles that contain the magnets. A mechanism for turning the magnet off is not seen, but like these portable electric grabbers, it could be as simple as a thumb button.
Iron Man also had his Mark 50 suit form stabilizing suction cups before cutting a hole in the hull of the Q-Ship.
In the electromagnetic version of boots, seen in Star Trek: First Contact, the wearer turns the magnets on with a control strapped to their thigh. Once on, the magnetization seems to be sensitive to the wearer’s walk, automatically lessening when the boot is lifted off. This gives the wearer something of a natural gait. The magnetism can be turned off again to be able to make microgravity maneuvers, such as dramatically leaping away from Borg minions.
Star Trek: Discovery also included this technology, but with what appears to be a gestural activation and cool glowing red dots on the sides and back of the heel. The back of each heel has a stack of red lights that count down to when they turn off, as, I guess, a warning to anyone around them that they're about to be "air" borne.
Quick “gotcha” aside: neither Destination Moon nor Star Trek: First Contact bothers to explain how characters are meant to be able to kneel while wearing magnetized boots. Yet this very thing happens in both films.
If your extravehicular task has you leaving the surface of the ship and moving around space, you likely need a controlled propellant. This is seen only a few times in the survey.
In the film Mission to Mars, the manned maneuvering unit, or MMU, is based loosely on NASA's MMU. A nice thing about the device is that, unlike the other controlled-propellant interfaces, we can actually see some of the interaction and not just the effect. The interfaces are subtly different in that the Mission to Mars spacewalkers travel forward and backward by angling the handgrips forward and backward rather than with a joystick on an armrest. This seems like a closer mapping, but also more prone to error from accidental touching or bumping into something.
The plus side is an interface that is much more cinegenic, where the audience is more clearly able to see the cause and effect of the spacewalker’s interactions with the device.
If you have propellant in a Mohs 4 or 5 film, you might need to acknowledge that propellant is a limited resource. Over the course of the same (heartbreaking) scene shown above, we see an interface where one spacewalker monitors his fuel, and another where a spacewalker realizes that she has traveled as far as she can with her MMU and still return to safety.
For those wondering, Michael Burnham’s flight to the mysterious signal in that pilot uses propellant, but is managed and monitored by controllers on Discovery, so it makes sense that we don’t see any maneuvering interfaces for her. We could dive in and review the interfaces the bridge crew uses (and try to map that onto a spacesuit), but we only get snippets of these screens and see no controls.
Iron Man's suits employ some Phlebotinum propellant that lasts forever, fits inside his tailored suit, and is powerful enough to achieve escape velocity.
All-in-all, though sci-fi seems to understand the need for characters to move around in spacesuits, very little attention is given to the interfaces that enable it. The Mission to Mars MMU is the only one with explicit attention paid to it, and that's quite derived from NASA models. It's an opportunity for filmmakers, should the needs of the plot allow, to give this topic some attention.
A major concern of the design of spacesuits is basic usability and ergonomics. Given the heavy material needed in the suit for protection and the fact that the user is wearing a helmet, where does a designer put an interface so that it is usable?
Chest panels are those that require the wearer only to look down to manipulate them. These are in easy range of motion for the wearer's hands. The main problem with this location is that there is a hard trade-off between visibility and bulkiness.
Arm panels are those that are—brace yourself—mounted to the forearm. This placement is within easy reach, but does mean that the arm on which the panel sits cannot be otherwise engaged, and it seems like it would be prone to accidental activation. Keeping components small and thin enough to be unobtrusive is a greater technological challenge than with a chest panel. It also presents interface challenges in squeezing information and controls into a very small, horizontal format. The survey shows only three arm panels.
The first is the numerical panel seen in 2001: A Space Odyssey (thanks for the catch, Josh!). It provides discrete and easy input, but no feedback. There are inter-button ridges to kind of prevent accidental activation, but they’re quite subtle and I’m not sure how effective they’d be.
The second is an oversimplified control panel seen in Star Trek: First Contact, where the output is simply the unlabeled lights underneath the buttons indicating system status.
The third is the mission computers seen on the forearms of the astronauts in Mission to Mars. These full color and nonrectangular displays feature rich, graphic mission information in real time, with textual information on the left and graphic information on the right. Input happens via hard buttons located around the periphery.
Side note: One nifty analog interface is the forearm mirror. This isn't an invention of sci-fi; it actually appears on real-world EVA suits. It costs a lot of propellant or energy to turn a body around in space, but spacewalkers occasionally need to see what's behind them and the interface on the chest. So spacesuits have mirrors on the forearm to enable a quick view with just arm movement. This was showcased twice in the movie Mission to Mars.
The easiest place to see something is directly in front of your eyes, i.e. in a heads-up display, or HUD. HUDs are seen frequently in sci-fi, and increasingly in sci-fi spacesuits as well. One example is Sunshine. This HUD provides a real-time view of each individual with whom the wearer is talking while out on an EVA, and a real-time visualization of dangerous solar winds.
These particular spacesuits are optimized for protection very close to the sun, and the visor is limited to a transparent band set near eye level. These spacewalkers couldn't look down to see any interfaces on the suit itself, so the HUD makes a great deal of sense here.
Star Trek: Discovery's pilot episode included a sequence that found Michael Burnham flying 2000 meters away from the U.S.S. Discovery to investigate a mysterious Macguffin. The HUD helped her with wayfinding, navigating, tracking time before lethal radiation exposure (a biological concern, see the prior post), and even doing a scan of things in her surroundings, most notably a Klingon warrior who appears wearing unfamiliar armor. Reference information sits on the periphery of Michael's vision, but the augmentations appear mapped to her view. (Noting this raises the same issues of binocular parallax seen in the Iron HUD.)
Iron Man’s Mark L armor was able to fly in space, and the Iron HUD came right along with it. Though not designed/built for space, it’s a general AI HUD assisting its spacewalker, so worth including in the sample.
Aside from HUDs, what we see in the survey is similar to what exists in real-world extravehicular mobility units (EMUs), i.e. chest panels and arm panels.
Inputs illustrate paradigms
Physical controls range from the provincial switches and dials on the cigarette-girl foldout control panels of Destination Moon to the simple and restrained numerical button panel of 2001, to strangely unlabeled buttons of Star Trek: First Contact’s arm panels (above), and the ham-handed touch screens of Mission to Mars.
As the pictures above reveal, the input panels reflect the familiar technology of the time of the creation of the movie or television show. The 1950s were still rooted in mechanistic paradigms, the late 1960s interfaces were electronic pushbutton, the 2000s had touch screens and miniaturized displays.
Real world interfaces
For comparison and reference, NASA's EMU has a control panel on the front, called the Display and Control Module, where most of the controls for the EMU sit.
The image shows that these inputs are very different from what we see as inputs in film and television. The controls are large for easy manipulation even with thick gloves, distinct in type and location for confident identification, analog to allow for a minimum of failure points and in-field debugging and maintenance, and well-protected from accidental actuation with guards and deep recesses. The digital display faces up for the convenience of the spacewalker. The interface text is printed backwards so it can be read with the wrist mirror.
The outputs are fairly minimal. They consist of the pressure suit gauge, audio warnings, and the 12-character alphanumeric LCD panel at the top of the DCM. No HUD.
The gauge is mechanical and standard for its type. The audio warnings are a simple warbling tone when something’s awry. The LCD panel provides information about 16 different values that the spacewalker might need, including estimated time of oxygen remaining, actual volume of oxygen remaining, pressure (redundant to the gauge), battery voltage or amperage, and water temperature. To cycle up and down the list, she presses the Mode Selector Switch forward and backward. She can adjust the contrast using the Display Intensity Control potentiometer on the front of the DCM.
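The mode-selector interaction lends itself to a short sketch. The value labels below are drawn from the list above; the wraparound behavior at the ends of the list is my assumption, since the documents I've seen don't say whether the list cycles or stops.

```python
# Sketch of cycling the DCM's 12-character LCD through its value list.
# Pressing the Mode Selector Switch forward or backward steps the index;
# wraparound at the ends is an assumption, not documented behavior.

VALUES = [
    "O2 TIME LEFT",   # estimated time of oxygen remaining
    "O2 VOLUME",      # actual volume of oxygen remaining
    "SUIT PRESSURE",  # redundant to the mechanical gauge
    "BATTERY VOLTS",
    "H2O TEMP",
    # ...plus the rest of the real list of 16 values
]

class DisplayCycler:
    def __init__(self, values):
        self.values = values
        self.index = 0  # start on the first value

    def press_forward(self) -> str:
        self.index = (self.index + 1) % len(self.values)
        return self.values[self.index]

    def press_backward(self) -> str:
        self.index = (self.index - 1) % len(self.values)
        return self.values[self.index]

lcd = DisplayCycler(VALUES)
assert lcd.press_forward() == "O2 VOLUME"
assert lcd.press_backward() == "O2 TIME LEFT"
assert lcd.press_backward() == "H2O TEMP"  # wraps to the end of the list
```

One switch, one display line, sixteen values: it's about as minimal as an output interface gets, which makes sci-fi's total silence on the subject all the stranger.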
The DCMs referenced in the post are from older NASA documents. In more recent images on NASA’s social media, it looks like there have been significant redesigns to the DCM, but so far I haven’t seen details about the new suit’s controls. (Or about how that tiny thing can house all the displays and controls it needs to.)
Spacesuits must support the biological functioning of the astronaut. There are probably damned fine psychological reasons to not show astronauts their own biometric data while on stressful extravehicular missions, but there is the issue of comfort. Even if temperature, pressure, humidity, and oxygen levels are kept within safe ranges by automatic features of the suit, there is still a need for comfort and control inside of that range. If the suit is to be worn a long time, there must be some accommodation for food, water, urination, and defecation. Additionally, the medical and psychological status of the wearer should be monitored to warn of stress states and emergencies.
Unfortunately, the survey doesn’t reveal any interfaces being used to control temperature, pressure, or oxygen levels. There are some for low oxygen level warnings and testing conditions outside the suit, but these are more outputs than interfaces where interactions take place.
There are also no nods to toilet necessities, though in fairness Hollywood eschews this topic a lot.
The one example of sustenance seen in the survey appears in Sunshine, where we see Captain Kaneda take a sip from his drinking tube while performing a dangerous repair of the solar shields. This is the only food or drink seen in the survey, and it is a simple mechanical interface, held in place by material strength in such a way that he needs only to tilt his head to take a drink.
Similarly, in Sunshine, when Capa and Kaneda perform EVA to repair broken solar shields, Cassie tells Capa to relax because he is using up too much oxygen. We see a brief view of her bank of screens that include his biometrics.
Remote monitoring of people in spacesuits is common enough to be a trope, but it has already been discussed in the Medical chapter of Make It So; see that chapter for more on biometrics in sci-fi.
There are some non-interface biological signals for observers. In the movie Alien, as the landing party investigates the xenomorph eggs, we can see that the suit outgasses something like steam—slower than exhalations, but regular. Though not presented as such, the suit certainly confirms for any onlooker that the wearer is breathing and the suit is functioning.
Given that sci-fi technology glows, it is no surprise to see that lots and lots of spacesuits have glowing bits on the exterior. Though nothing yet in the survey tells us what these lights might be for, it stands to reason that one purpose might be as a simple and immediate line-of-sight status indicator. When things are glowing steadily, it means the life support functions are working smoothly. A blinking red alert on the surface of a spacesuit could draw attention to the individual with the problem, and make finding them easier.
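That speculative purpose could be sketched as a simple mapping from suit status to light behavior. To be clear, everything here—the states, colors, and blink behavior—is my own illustration of the idea, not anything shown on screen.

```python
# Speculative sketch: exterior suit lights as a line-of-sight status
# indicator. States, colors, and blink behavior are all illustrative.

def exterior_light(life_support_nominal: bool, emergency: bool) -> dict:
    """Return how the suit's exterior lights should behave."""
    if emergency:
        # Blinking red draws onlookers' attention to the suit in trouble.
        return {"color": "red", "blinking": True}
    if life_support_nominal:
        # Steady glow signals that life support is working smoothly.
        return {"color": "white", "blinking": False}
    # Degraded but not yet an emergency: steady amber caution.
    return {"color": "amber", "blinking": False}

assert exterior_light(True, False) == {"color": "white", "blinking": False}
assert exterior_light(False, True) == {"color": "red", "blinking": True}
```

The design value is purely for observers: the wearer never needs to see their own lights, which is exactly why a line-of-sight signal beats a HUD for this job.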
One nifty thing that sci-fi can do (but we can't yet in the real world) is deploy biology-protecting tech at the touch of a button. We see this in the Marvel Cinematic Universe with Star-Lord's helmet.
If such tech were available, you'd imagine that it would have some smart sensors to know when it must automatically deploy (sudden loss of oxygen or dangerous impurities in the air), but we don't see it. But given this speculative tech, one can imagine it working for a whole spacesuit and not just a helmet. It might speed up scenes like this.
What do we see in the real world?
Are there real-world controls that sci-fi is missing? Let’s turn to NASA’s space suits to compare.
The Primary Life-Support System (PLSS) is the complex spacesuit subsystem that provides life support to the astronaut, and biomedical telemetry back to control. Its main components are the closed-loop oxygen-ventilation system for cycling and recycling oxygen, the moisture (sweat and breath) removal system, and the feedwater system for cooling.
The only “biology” controls that the spacewalker has for these systems are a few on the Display and Control Module (DCM) on the front of the suit. They are the cooling control valve, the oxygen actuator slider, and the fan switch. Only the first is explicitly to control comfort. Other systems, such as pressure, are designed to maintain ideal conditions automatically. Other controls are used for contingency systems for when the automatic systems fail.
The suit is insulated thoroughly enough that the astronaut’s own body heats the interior, even in complete shade. Because the astronaut’s body constantly adds heat, the suit must be cooled. To do this, the suit cycles water through a Liquid Cooling and Ventilation Garment, which has a fine network of tubes held closely to the astronaut’s skin. Water flows through these tubes and past a sublimator that cools the water with exposure to space. The astronaut can increase or decrease the speed of this flow and thereby the amount to which his body is cooled, by the cooling control valve, a recessed radial valve with fixed positions between 0 (the hottest) and 10 (the coolest), located on the front of the Display Control Module.
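For illustration, here is the valve as a simple mapping from detent position to coolant flow. The linear curve and the flow range in liters per minute are my assumptions; NASA documents give the positions (0 through 10) but I don't have the actual flow figures.

```python
# Hypothetical model of the cooling control valve: fixed positions from
# 0 (hottest, minimum flow) to 10 (coolest, maximum flow). The linear
# mapping and the flow range in L/min are illustrative assumptions.

MIN_FLOW_LPM = 0.0  # assumed flow at position 0
MAX_FLOW_LPM = 1.8  # assumed flow at position 10

def coolant_flow(position: int) -> float:
    """Map a detented valve position (0-10) to a flow rate in L/min."""
    if not 0 <= position <= 10:
        raise ValueError("valve has fixed positions 0 through 10")
    return MIN_FLOW_LPM + (MAX_FLOW_LPM - MIN_FLOW_LPM) * position / 10

assert coolant_flow(0) == 0.0
assert coolant_flow(10) == 1.8
```

Note the interface choice embedded here: fixed detents rather than a continuous dial, so a gloved hand gets confident, repeatable settings without looking.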
The spacewalker does not have EVA access to her biometric data. Sensors measure oxygen consumption and electrocardiograph data and broadcast it to the Mission Control surgeon, who monitors it on her behalf. So whatever the reason is, if it’s good enough for NASA, it’s good enough for the movies.
Back to sci-fi
So, we do see temperature and pressure controls on suits in the real world, which underscores their absence in sci-fi. But, if there hasn’t been any narrative or plot reason for such things to appear in a story, we should not expect them.
“Why cannot we walk outside [the spaceship] like the meteor? Why cannot we launch into space through the scuttle? What enjoyment it would be to feel oneself thus suspended in ether, more favored than the birds who must use their wings to keep themselves up!”
—The astronaut Michel Ardan in Round the Moon by Jules Verne (1870)
When we were close to publication on Make It So, we wound up being way over the maximum page count for a Rosenfeld Media book. We really wanted to keep the components and topics sections, and that meant we had to cut the section on things. Spacesuits was one of the chapters I drafted about things. I am re-presenting that chapter here on the blog. n.b. This was written ten years ago in 2011. There are almost certainly other more recent films and television shows that can serve as examples. If you, the reader, notice any…well, that's what the comments section is for.
Sci-fi doesn’t have to take place in interplanetary space, but a heck of a lot of it does. In fact, the first screen-based science fiction film is all about a trip to the moon.
Most of the time, traveling in this dangerous locale happens inside spaceships, but occasionally a character must travel out bodily into the void of space. Humans—and pretty much everything (no not them) we would recognize as life—cannot survive there for very long at all. Fortunately, the same conceits that sci-fi adopts to get characters into space can help them survive once they're there.
An environmental suit is any that helps the wearer survive in an inhospitable environment. Environment suits first began with underwater suits, and later high-altitude suits. For space travel, pressure suits are to be worn during the most dangerous times, i.e. liftoff and landing, when an accident may suddenly decompress a spacecraft. A spacesuit is an environmental suit designed specifically for survival in outer space. NASA refers to spacesuits as Extravehicular Mobility Units, or EMUs. Individuals who wear the spacesuits are known as spacewalkers. The additional equipment that helps a spacewalker move around space in a controlled manner is the Manned Maneuvering Unit, or MMU.
Additionally, though many other agencies around the world participate in the design and engineering of spacesuits, there is no convenient way to reference them and their efforts as a group, so Aerospace Community is used as a shorthand. This also helps to acknowledge that my research and interviews were primarily with sources from NASA.
The design of the spacesuit is an ongoing and complicated affair. To speak of "the spacesuit" as if it were a single object ignores the vast number of iterations and changes made to the suits between each cycle of engineering, testing, and deployment, much less between different agencies working on their own designs. So, for those wondering, I'm using the NASA EMU used in International Space Station and shuttle missions as the default design when speaking about modern spacesuits.
What the thing’s got to do
A spacesuit, whether in sci-fi or the real world, has to do three things.
It has to protect the wearer from the perils of interplanetary space.
It has to accommodate the wearer’s ongoing biological needs.
Since space is so dangerous, the suit and tools must help the wearer accomplish their extravehicular tasks efficiently and get them back to safer environs as quickly as possible.
Each of these categories of functions, and the related interfaces, are discussed in following posts.
First, congratulations to Perception Studio for the excellent work on Black Panther! Readers can see Perception’s own write up about the interfaces on their website. (Note that the reviewers only looked at this after the reviews were complete, to ensure we were looking at end-result, not intent. Also all images in this post were lifted from that page, with permission, unless otherwise noted.)
John LePore of Perception Studio reached out to me when we began to publish the reviews, asking if he could shed light on anything. So I asked if he would be up for an email interview when the reviews were complete. This post is all that wonderful shed light.
What exactly did Perception do for the film?
John: Perception was brought aboard early in the process for the specific purpose of consulting on potential areas of interest in science and technology. A brief consulting sprint evolved into 18 months of collaboration that included conceptual development and prototyping of various technologies for use in multiple sequences and scenarios. The most central of these elements was the conceptualization and development of the vibranium sand interfaces throughout the film. Some of this work was used as design guidelines for various vfx houses while other elements were incorporated directly into the final shots by Perception. In addition to the various technologies, Perception worked closely on two special sequences in the film—the opening ‘history of Wakanda’ prologue, and the main-on-end title sequence, both of which were based on the technological paradigm of vibranium sand.
What were some of the unique challenges for Black Panther?
John: We encountered various challenges on Black Panther, both conceptual and technical. An inspiring challenge was the need to design the most advanced technology in the Marvel Cinematic Universe, while conceptualizing something that had zero influence from any existing technologies. There were lots of challenges around dynamic sand, and even difficulty rendering when a surge in the crypto market made GPUs hard to come by!
One of the things that struck me about Black Panther is the ubiquity of (what appear to be) brain-computer interfaces. How was it working with speculative tech that seemed so magical?
John: From the very start, it was very important to us that all of the technology we conceptualized was grounded in logic, and had a pathway to feasibility. We worked hard to hold ourselves to these constraints, and looked for every opportunity to include signals for the audience (sometimes nuanced, sometimes obvious) as to how these technologies worked. At the same time, we know the film will never stop dead in its tracks to explain technology paradigm #6. In fact, one of our biggest concerns was that any of the tech would appear to be ‘made of magic’.
Chris: Ooh, now I want to know what some of the nuanced signals were!
John: One of the key nuances that made it from rough tests to the final film was that the vibranium Sand ‘bounces’ to life with a pulse. This is best seen in the tactical table in the Royal Talon at the start of the film. The ‘bounce’ was intended to be a rhythmic cue to the idea of ultrasonic soundwaves triggering the levitating sand.
Did you know going in that you’d be creating something that would be so important to black lives?
John: On a film, it is often hard to imagine how it will be received. On Black Panther, all the signals were clear that the film would be deeply important. From our early peeks at concept art of Wakanda, to witnessing the way Marvel Studios supported Ryan Coogler’s vision. The whole time working on the film the anticipation kept growing, and at the core of the buzz was an incredibly strong black fandom. Late in our process, the hype was still increasing—It was becoming obvious that Black Panther could be the biggest Marvel film to date. I remember working on the title sequence one night, a couple months before release, and Ryan played (over speakerphone) the song that would accompany the sequence. We were bugging out— “Holy shit that’s Kendrick!”… it was just another sign that this film would be truly special, and deeply dedicated to an under-served audience.
How did working on the film affect the studio?
John: For us it’s been one of our proudest moments— it combined everything we love in terms of exciting concept development, aesthetic innovation and ambitious technical execution. The project is a key trophy in our portfolio, and I revisit it regularly when presenting at conferences or attracting new clients, and I’m deeply proud that it continues to resonate.
Where did you look for inspiration when designing?
John: When we started, the brief was simple: Best tech, most unique tech, and centered around vibranium. With a nearly open canvas, the element of vibranium (only seen previously as Captain America’s shield) sent us pursuing vibration and sound as a starting point. We looked deeply into cymatic patterns and other sound-based phenomena like echo-location. About a year prior, we were working with an automotive supplier on a technology that used ultrasonic soundwaves to create ‘mid-air haptics’… tech that lets you feel things that aren’t really there. We then discovered that the University of Tokyo was doing experiments with the same hardware to levitate styrofoam particles with limited movement. Our theory was that with the capabilities of vibranium, this effect could levitate and translate millions of particles simultaneously.
Beyond technical and scientific phenomena, there was tremendous inspiration to be taken from African culture in general. From textile patterns, to colors of specific spices and more, there were many elements that influenced our process.
What thing about working on the film do you think most people in audiences would be surprised by?
John: I think the average audience member would be surprised by how much time and effort goes into these pieces of the film. There are so many details that are considered and developed, without explicitly figuring into the plot of the film. We consider ourselves fortunate that film after film Marvel Studios pushes to develop these ideas that in other films are simply ‘set dressing’.
Chris: Lastly, I like finishing interviews with these questions.
What, in your opinion, makes for a great fictional user interface?
John: I love it when you are presented with innovative tech in a film and just by seeing it you can understand the deeper implications. Having just enough information to make assumptions about how it works, why it works, and what it means to a culture or society. If you can invite this kind of curiosity, and reward this fascination, the audience gets a satisfying gift. And if these elements pull me in, I will almost certainly get ‘lost’ in a film…in the best way.
What’s your favorite sci-fi interface that someone else designed? (and why)
John: I always loved two that stood out to me for the exact reasons mentioned above.
One is Westworld’s tablet-based Dialog Tree system. It’s not the most radical UI design etc, but it means SO much to the story in that moment, and immediately conveys a complicated concept effortlessly to the viewer.
Another see-it-and-it-makes-sense tech concept is the live-tracked projection camera system from Mission Impossible: Ghost Protocol. It’s so clever, so physical, and you understand exactly how it works (and how it fails!). When I saw this in the theatre, I turned to my wife and whispered, “You see, the camera is moving to match the persp…” and she glared at me and said “I get it! Everybody gets it!” The clever execution of the gadget and scene made me, the viewer, feel smarter than I actually was!
What’s next for the studio?
The Perception team is continuing to work hard in our two parallel paths of exploration—film and real-world tech. This year we have seen our work appear in Marvel’s streaming shows, with more to come. We’ve also been quite busy in the technology space, working on next-generation products from technology platforms to exciting automobiles. The past year has been busy and full of changes, but no matter how we work, we continue to be fascinated and inspired by the future ahead.
Black Panther’s financial success is hard to ignore. From the Wikipedia page:
Black Panther grossed $700.1 million in the United States and Canada, and $646.9 million in other territories, for a worldwide total of $1.347 billion. It became the highest-grossing solo superhero film, the third-highest-grossing film of the MCU and superhero film overall, the ninth-highest-grossing film of all time, and the highest-grossing film by a black director. It is the fifth MCU film and 33rd overall to surpass $1 billion, and the second-highest-grossing film of 2018. Deadline Hollywood estimated the net profit of the film to be $476.8 million, accounting for production budgets, P&A, talent participations and other costs, with box office grosses and ancillary revenues from home media, placing it second on their list of 2018’s “Most Valuable Blockbusters”.
It was also a critical success (96% Tomatometer, anyone?) as well as a fan…well, “favorite” seems too small a word. Here, let me let clinical psychologist, researcher, and trusted media expert Erlanger Turner speak to this.
Many have wondered why Black Panther means so much to the black community and why schools, churches and organizations have come to the theaters with so much excitement. The answer is that the movie brings a moment of positivity to a group of people often not the centerpiece of Hollywood movies… [Racial and ethnic socialization] helps to strengthen identity and helps reduce the likelihood of internalizing negative stereotypes about one’s ethnic group.
People—myself included—just love this movie. As is my usual caveat, though, this site reviews not the film, but the interfaces that appear in the film, and specifically, across three aspects.
Sci: B (3 of 4) How believable are the interfaces?
This category (and Interfaces, I’ll be repeating myself later) is complicated because Wakanda is the most technologically-advanced culture on Earth as far as the MCU goes. So who’s to say what’s believable when you have general artificial intelligence, nanobots, brain interfaces, and technology barely distinguishable from magic? But this sort of challenge is what I signed up for, so…pressing on.
The interfaces are mostly internally consistent and believable within their (admittedly large) scope of nova.
There are plenty of weird wtf moments, though. Why do remote piloting interfaces routinely drop their users onto their tailbones? Why are the interfaces sometimes photo-real and sometimes sandpaper? Why does the Black Panther suit glow with a Here-I-Am light? Why have a recovery room in the middle of a functioning laboratory? Why have a control where thrusting one way is a throttle and the other fires weapons?
Fi: A (4 of 4) How well do the interfaces inform the narrative of the story?
Here’s where Black Panther really shines. The wearable technology tells of a society built around keeping its advancement secret. The glowing tech gives clues as to what’s happening where. The kimoyo beads help describe a culture that—even if it is trapped in a might-makes-right and isolationist belief system—is still marvelous and equitable. The tech helps tell a wholly believable story that this is the most technologically advanced society on MCU Earth 616.
Interfaces: B (3 of 4) How well do the interfaces equip the characters to achieve their goals?
As I mentioned above, this is an especially tough determination given the presence of nanobots, AGI, and brain interfaces. All these things confound usual heuristic approaches.
But they do not make it impossible. The suit and Talon provide gorgeous displays. (As does the med table, even if its interaction model has issues.) The claws, the capes, and the sonic overload incorporate well-designed gestures. Griot (the unnamed AI) must be doing an awful lot of the heavy lifting, but this model of AI is one that appears increasingly in the MCU, where the AI is the thing in the background that lets the heroes be heroes (which I’m starting to tag as sidekick AI).
All that said, we still see the same stoic guru mistakes in the sand table that seem to plague sci-fi. In the med station we see a red-thing-bad oversimplicity, mismatched gestures-to-effects, and a display that pulls attention away from a patient, which keeps it from an A grade.
Final Grade A- (10 of 12), Blockbuster.
It was an unfortunately poignant time to have been writing these reviews. I started them because of the unconscionable murders of Breonna Taylor and George Floyd—in the long line of unconscionable black deaths at the hands of police—and, knowing the pandemic was going to slow posting frequency, hoped they would keep these issues alive at least on this forum long after the initial public fury had died down.
But across the posts, Raysean White was killed. Cops around the nation responded with inappropriate force. Chadwick Boseman died of cancer. Ruth Bader Ginsburg died, exposing one of the most blatant hypocrisies of the GOP and tilting the Supreme Court tragically toward the conservative. The U.S. ousted its racist-in-chief and Democrats took control of the Senate for the first time since 2011, despite a coordinated attempt by the GOP to suppress votes while peddling the lie that the election was stolen (for which lawmakers involved have yet to suffer any consequences).
It hasn’t ended. Just yesterday began the trial of the officer who murdered George Floyd. It’s going to take about a month just to hear the main arguments. The country will be watching.
Meanwhile Georgia just passed new laws that are so restrictive journalists are calling it the new Jim Crow. This is part of a larger conservative push to disenfranchise Democrats and voters of color in particular. We have a long way to go, but even though this wraps the Black Panther reviews, our work bending the arc of the moral universe is ongoing. Science fiction is about imagining other worlds so we can make this one better.
Black Panther II is currently scheduled to come out July 8, 2022.
I presume my readership are adults. I honestly cannot imagine this site has much to offer the 3-to-8-year-old. That said, if you are less than 8.8 years old, be aware that reading this will land you FIRMLY on the naughty list. Leave before it’s too late. Oooh, look! Here’s something interesting for you.
For those who celebrate Yule (and the very hybridized version of the holiday that I’ll call Santa-Christmas to distinguish it from Jesus-Christmas or Horus-Christmas), it’s that one time of year where we watch holiday movies. Santa features in no small number of them, working against the odds to save Christmas and Christmas spirit from something that threatens it. Santa accomplishes all that he does by dint of holiday magic, but increasingly, he has magic-powered technology to help him. These technologies are different for each movie in which they appear, with different sci-fi interfaces, which raises the question: Who did it better?
Unraveling this stands to be even more complicated than usual sci-fi fare.
These shows are largely aimed at young children, who haven’t developed the critical thinking skills to doubt the core premise, so the makers don’t have much pressure to present wholly-believable worlds. The makers also enjoy putting in some jokes for adults that are non-diegetic and confound analysis.
Despite the fact that these magical technologies are speculative just as in sci-fi, makers cannot presume that their audience are sci-fi fans who are familiar with those tropes. And things can’t seem too technical.
The sci in this fi is magical, which allows makers to do all sorts of hand-wavey things about how it’s doing what it’s doing.
Many of the choices are whimsical and serve to reinforce core tenets of the Santa Claus mythos rather than any particular story or worldbuilding purpose.
But complicated-ness has rarely cowed this blog’s investigations before, so why let a little thing like holiday magic do it now?
A Primer on Santa
I have readers from all over the world. If you’re from a place that does not celebrate the Jolly Old Elf, a primer should help. And if you’re from a non-USA country, your Saint Nick mythos will be similar but not the same one that these movies are based on, so a clarification should help. To that end, here’s what I would consider the core of it.
Santa Claus is a magical, jolly, heavyset old man with white hair, mustache, and beard who lives at the North Pole with his wife, Mrs. Claus. The two are almost always caucasian. He is alternately called Kris Kringle, Saint Nick, Father Christmas, or Klaus. The Clement Clarke Moore poem calls him a “jolly old elf.” He dresses in a warm red suit fringed with white fur, big black boots, a thick black belt, and a stocking hat with a furry ball at the end.

He is aware of the behavior of children, and tallies their good and bad behavior over the year, ultimately landing them on the “naughty” or “nice” list. Santa brings the nice ones presents. (The naughty ones are canonically supposed to get coal in their stockings, though in all my years I have never heard of any kids actually getting coal in lieu of presents.) Children also hang special stockings, often on a mantle, to be filled with treats or smaller presents. Adults encourage children to be good in the fall to ensure they get presents. As December approaches, children write letters to Santa telling him what presents they hope for. Santa and his elves read the letters and make all the requested toys by hand in a workshop.

Then, on the evening of 24 DEC, he puts all the toys in a large sack and loads it into a sleigh led by 8 flying reindeer. Most of the time there is a ninth reindeer up front with a glowing red nose, named Rudolph. Over the evening, as children sleep, he delivers the presents to their homes, where he places them beneath the Christmas tree for them to discover in the morning. Families often leave out cookies and milk for Santa to snack on, and sometimes carrots for the reindeer. Santa often tries to avoid detection for reasons that are diegetically vague.
There is no single source of truth for this mythos, though the current core text might be the 1823 C.E. poem, “A Visit from St. Nicholas” by Clement Clarke Moore. Visually, Santa’s modern look is often traced back to the depictions by Civil War cartoonist Thomas Nast, which the Coca-Cola Corporation built upon for their holiday advertisements in 1931.
There are all sorts of cultural conversations to have about normalizing a magical panopticon, about the effect of hiding the actual supply chain, and about what perpetuating this myth trains children for; but for now let’s stick to evaluating the interfaces in terms of Santa’s goals.
Given all of the above, we can say that the following are Santa’s goals.
Sort kids by behavior as naughty or nice
Many tellings have him observing actions directly
Manage the lists of names, usually on separate lists
Sending toy requests to the workshop
Travel to kids’ homes
Find the most-efficient way there
Control the reindeer
Maintain air safety
Avoid air obstacles
Find a way inside and to the tree
Enjoy the cookies / milk
Deliver all presents before sunrise
For each child:
Know whether they are naughty or nice
If nice, match the right toy to the child
Stage presents beneath the tree
Avoid being seen
We’ll use these goals to contextualize the Santa interfaces.
Nearly every story tells of Santa working with other characters to save Christmas. (The metaphor that we have to work together to make Christmas happen is appreciated.) The challenges in the stories can be almost anything, but often include…
Inclement weather (usually winter, but Santa is a global phenomenon)
Air obstacles (Planes, helicopters, skyscrapers)
Ingress/egress into homes
Home security systems / guard dogs
Imdb.com lists 847 films tagged with the keyword “santa claus,” which is far too many to review. So I looked through “best of” lists (two are linked below) and watched those films for interfaces. There weren’t many. I even had to blend CGI and live action shows, which I’m normally hesitant to do. As always, if you know of any additional shows that should be considered, please mention them in the comments.
After reviewing these films, the ones with Santa interfaces came down to four, presented below in chronological order.
The Santa Clause (1994)
This movie deals with the lead character, Scott Calvin, inadvertently taking on the “job” of Santa Claus. (If you’ve read Piers Anthony’s Incarnations of Immortality series, this plot will feel quite familiar.)
The sleigh he inherits has a number of displays that are largely unexplained, but little Charlie figures out that the center console includes a hot chocolate and cookie dispenser. There is also a radar, and far away from it, push buttons for fog, planes, rain, and lightning. There are several controls with Christmas bell icons associated with them, but their meaning is unclear.
This is the oldest of the candidates. Its interfaces are quite sterile and “tacked on” compared to the others, but they were novel for their time.
Fred Claus (2007)
This movie tells the story of Santa’s ne’er-do-well brother Fred, who has to work in the workshop for one season to work off bail money. While there he winds up helping forestall foreclosure from an underhanded supernatural efficiency expert, and un-estranging himself from his family. A really nice bit in this critically-panned film is that Fred helps Santa understand that there are no bad kids, just kids in bad circumstances.
Fred is taken to the North Pole in a sled with switches that are very reminiscent of the ones in The Santa Clause. A funny touch is the “fasten your seatbelt” sign like you might see in a commercial airliner. The use of Lombardic Capitals font is a very nice touch given that much of modern Western Santa Claus myth (and really, many of our traditions) come from Germany.
This chamber is where Santa is able to keep an eye on children. (Seriously panopticony. They have no idea they’re being surveilled.) Merely by reading a child’s name and address, Santa summons a volumetric display within the giant snowglobe. The naughtiest children’s names are displayed on a digital split-flap display, including their greatest offenses. (The nicest are as well, but we don’t get a close up of it.)
The final tally is put into a large book that one of the elves manages from the sleigh while Santa does the actual gift-distribution. The text in the book looks like it was printed from a computer.
Arthur Christmas (2011)
In this telling, the Santa job is passed down patrilineally. The oldest Santa, GrandSanta, is retired. The dad, Malcolm, is the current acting Santa, and he has two sons. One is Steve, a by-the-numbers type into military efficiency and modern technology. The other son, Arthur, is an awkward fellow who has a semi-disposable job responding to letters. Malcolm currently pilots a massive mile-wide spaceship from which ninja elves do the gift distribution. They have a lot of tech to help them do their job. The plot involves Arthur working with Grandsanta, using his old sleigh, to get a last forgotten gift to a young girl before the sun rises.
To help manage loud pets in the home who might wake up sleeping people, this gun has a dial for common pets that delivers a treat to distract them.
Elves have face scanners which determine each kid’s naughty/nice percentage. The elf then enters this into a stocking-filling gun, which affects the contents in some unseen way. A sweet touch is when one elf scans a kid who is read as quite naughty, the elf scans his own face to get a nice reading instead.
The S-1 is the name of the spaceship sleigh at the beginning (at the end it is renamed after Grandsanta’s sleigh). Its bridge is loaded with controls, volumetric displays, and even a Little Tree air freshener. It has a cloaking display on its underside which is strikingly similar to the MCU S.H.I.E.L.D. helicarrier cloaking. (And this came out the year before The Avengers, I’m just sayin’.)
The North Pole houses the command-and-control center, which Steve manages. Thousands of elves manage workstations here, and there is a huge shared display for focusing and informing the team at once when necessary. Smaller displays help elf teams manage certain geographies. Its interfaces fall to comedy and trope, mostly, but are germane to the story beats.
One of the crisis scenarios that this system helps manage is for a “waker,” a child who has awoken and is at risk of spying Santa.
Grandsanta’s outmoded sleigh is named Eve. Its technology is much more from the early 20th century, with switches and dials, buttons and levers. It’s a bit janky and overly complex, but gets the job done.
One notable control on S-1 is this trackball with dark representations of the continents. It appears to be a destination selector, but we do not see it in use. It is remarkable because it is very similar to one of the main interface components in the next candidate movie, The Christmas Chronicles.
The Christmas Chronicles (2018)
The Christmas Chronicles follows two kids who stowaway on Santa’s sleigh on Christmas Eve. His surprise when they reveal themselves causes him to lose his magical hat and wreck his sleigh. They help him recover the items, finish his deliveries, and (well, of course) save Christmas just in time.
Santa’s sleigh enables him to teleport to any place on earth. The main control is a trackball location selector. Once he spins it and confirms that the city readout looks correct, he can press the “GO” button for a portal to open in the air just ahead of the sleigh. After traveling in an aurora borealis realm filled with famous landmarks for a bit, another portal appears. They pass through this and appear at the selected location. A small magnifying glass above the selection point helps with precision.
Santa wears a watch that measures not time, but Christmas spirit, which ranges from 0 to 100. In the bottom half, chapter rings and a magnifying window seem designed to show the date, with 12 and 31 sequential numbers, respectively. It’s not clear why it shows mid-May. A hemisphere in the middle of the face looks like it’s almost a globe, which might be a nice way to display and change time zone, but that may be wishful thinking on my part.
Santa also has a tracking device for finding his sack of toys. (Apparently this has happened enough times to warrant such a thing.) It is an intricate filigree over a cool green and blue glass. A light within blinks faster the closer the sphere is to the sack.
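As an aside, that closer-means-faster-blinking behavior is a nice bit of ambient display, and it’s simple enough to sketch. Here’s a minimal, hypothetical Python sketch of the mapping; the function name, sensing range, and interval bounds are all invented for illustration, not anything shown in the film:

```python
def blink_interval(distance_m: float,
                   min_interval_s: float = 0.1,
                   max_interval_s: float = 2.0,
                   range_m: float = 1000.0) -> float:
    """Map distance-to-sack onto a blink interval:
    closer means faster blinking, clamped to the device's range."""
    # Normalize distance to [0, 1] over the assumed sensing range.
    t = min(max(distance_m / range_m, 0.0), 1.0)
    # Linear interpolation: at 0 m blink every 0.1 s; at max range, every 2 s.
    return min_interval_s + t * (max_interval_s - min_interval_s)
```

Under these made-up numbers, the light would pulse lazily at the edge of its range and speed up to ten blinks per second as Santa closes in on the sack.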
Since he must finish delivering toys before Christmas morning, the dashboard has a countdown clock with Nixie tube numbers showing hours, minutes, and milliseconds. They ordinarily glow cyan, but when time runs out, they turn red and blink.
This Santa also manages his list in a large book with lovely handwritten calligraphy. The kids whose gifts remain undelivered glow golden to draw his attention.
The hard problem here is that there are a lot of apples-to-oranges comparisons to do. Even though the mythos seems pretty locked down, each movie takes liberties with one or two aspects. As a result, not all these Santas are created equal. Calvin’s elves know he is completely new to his job and will need support. Christmas Chronicles Santa has perfect memory, magical abilities, and handles nearly all the delivery duties himself, unless he’s enacting a clever scheme to impart Christmas wisdom. Arthur Christmas has intergenerational technology and Santas who may not be magical at all but fully know their duty from their youths, and who rely on a huge army of shock-troop elves to make things happen. So it’s hard to name just one. But absent a point-by-point detailed analysis, there are two that really stand out to me.
Coverage of goals
Arthur Christmas has, by far, the most interfaces of any of the candidates, and the most coverage of the Santa-family’s goals. Managing noisy pets? Check. Dealing with wakers? Check. Navigating the globe? Check. As far as thinking through speculative technology that assists its Santa, this film does the most.
Keeping the holiday spirit
I’ll confess, though, that extradiegetically, one of the purposes of annual holidays is to mark the passage of time. By trying to adhere to traditions as much as we can, we mark time and memory by those things that we cannot control (like, say, a pandemic keeping everyone at home and hanging with friends and family virtually). So for my money, the thoroughly modern interfaces that flood Arthur Christmas don’t work that well. They’re so modern they’re not…Christmassy. Grandsanta’s sleigh Eve points to an older tradition, but it’s also clearly framed as outdated in the context of the story.
Compare this to The Christmas Chronicles, with its gorgeous steampunk-y interfaces that combine a sense of magic and mechanics. These are things that a centuries-old Santa would have built and used. They feel rooted in tradition while still helping Santa accomplish as many of his goals as he needs (in the context of his Christmas adventure for the stowaway kids). These interfaces evoke a sense of wonder, add significantly to the worldbuilding, and are what I’d rather have as a model for magical interfaces in the real world.
Of course it’s a personal call, given the differences, but The Christmas Chronicles wins in my book.
For those that celebrate Santa-Christmas, I hope it’s a happy one, given the strange, strange state of the world. May you be on the nice list.
Remote operation appears twice during Black Panther. This post describes the second, in which CIA Agent Ross remote-pilots the Talon in order to chase down cargo airships carrying Killmonger’s war supplies. The prior post describes the first, in which Shuri remotely drives an automobile.
In this sequence, Shuri equips Ross with kimoyo beads and a bone-conducting communication chip, and tells him that he must shoot the cargo ships down before they cross beyond the Wakandan border. As soon as she tosses a remote-control kimoyo bead onto the Talon, Griot announces to Ross in the lab “Remote piloting system activated” and creates a piloting seat out of vibranium dust for him. Savvy watchers may wonder at this, since Okoye pilots the thing by meditation and Ross would have no meditation-pilot training, but Shuri explains to him, “I made it American style for you. Get in!” He does, grabs the sparkly black controls, and gets to business.
The most remarkable thing to me about the interface is how seamlessly the Talon can be piloted by vastly different controls. Meditation brain control? Can do. Joystick-and-throttle? Just as can do.
Now, generally, I have a beef with the notion of hyperindividualized UI tailoring—it prevents vital communication across a community of practice (read more about my critique of this goal here)—but in this case, there is zero time for Ross to learn a new interface. So sure, give him a control system with which he feels comfortable to handle this emergency. It makes him feel more at ease.
The mutable nature of the controls tells us that there is a robust interface layer that is interpreting whatever inputs the pilot supplies and applying them to the actuators in the Talon. More on this below. Spoiler: it’s Griot.
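To make that interpretation-layer idea concrete, here is a minimal, hypothetical Python sketch in which wildly different control schemes normalize to a single “intent” that the ship’s actuators consume. Every name here (PilotIntent, the adapter functions, the neural-signal keys) is invented for illustration, not anything established in the film:

```python
from dataclasses import dataclass

@dataclass
class PilotIntent:
    """The normalized command the ship actually executes."""
    pitch: float     # -1.0 (dive) .. 1.0 (climb)
    yaw: float       # -1.0 (left) .. 1.0 (right)
    throttle: float  #  0.0 .. 1.0
    fire: bool = False

def from_joystick(stick_x: float, stick_y: float,
                  throttle: float, trigger: bool) -> PilotIntent:
    """'American-style' stick-and-throttle adapter."""
    return PilotIntent(pitch=-stick_y, yaw=stick_x,
                       throttle=throttle, fire=trigger)

def from_meditation(neural_signal: dict) -> PilotIntent:
    """Meditation/brain-interface adapter: same intent, different source."""
    return PilotIntent(pitch=neural_signal.get("climb", 0.0),
                       yaw=neural_signal.get("turn", 0.0),
                       throttle=neural_signal.get("speed", 0.0),
                       fire=neural_signal.get("attack", False))

def apply_to_actuators(intent: PilotIntent) -> dict:
    """The airframe only ever sees the normalized intent."""
    return {"elevators": intent.pitch, "rudder": intent.yaw,
            "engines": intent.throttle, "weapons": intent.fire}
```

The point of the pattern is that the Talon never needs to know whether it is being flown by joystick or by meditation; supporting a third control scheme means writing one more adapter, not touching the ship.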
Too sparse HUD
The HUD presents a simple circle-in-a-triangle reticle that lights up red when a target is in sights. Otherwise it’s notably empty of augmentation. There’s no tunnel in the sky display to describe the ideal path, or proximity warnings about skyscrapers, or airspeed indicator, or altimeter, or…anything. This seems a glaring omission since we can be certain other “American-style” airships have such things. More on why this might be below, but spoiler: It’s Griot.
What do these controls do, exactly?
I take no joy in gotchas. That said…
When Ross launches the Talon, he does so by pulling the right joystick backward.
When he shoots down the first cargo ship over Birnin Zana, he pushes the same joystick forward as he pulls the trigger, firing energy weapons.
Why would the same control do both? It’s hard to believe it’s modal. Extradiegetically, this is probably an artifact of actor Martin Freeman’s just doing what feels dramatic, but for a real-world equivalent I would advise against having physical controls have wholly different modes on the same grip, lest we risk confusing pilots on mission-critical tasks. But spoiler…oh, you know where this is going.
Diegetically, Shuri is flat-out wrong that Ross is an experienced pilot. But she also knew that it didn’t matter, because her lab has him covered anyway. Griot is an AI with a brain interface, and can read Ross’ intentions, handling all the difficult execution itself.
This would also explain the lack of better HUD augmentation. That absence seems especially egregious considering that the first cargo ship was flying over a crowded city at the time it was being targeted. If Ross had fired in the wrong place, the cargo ship might have crashed into a building, or down to the bustling city street, killing people. But, instead, Griot quietly, precisely targets the ship for him, to ensure that it would crash safely in nearby water.
This would also explain how wildly different interfaces can control the Talon with similar efficacy.
So, Occam’s-apology says, yep, it’s Griot.
An AI-wizard did it?
In the post about Shuri’s remote driving, I suggested that Griot was also helping her execute driving behind the scenes. This hearkens back to both the Iron HUD and Doctor Strange’s Cloak of Levitation. It could be that the MCU isn’t really worrying about the details of its enabling technologies, or that this is a brilliant model for our future relationship with technology. Let us feel like heroes, and let the AI manage all the details. I worry that I’m building myself into a wizard-did-it pattern, inserting AI for wizard. Maybe that’s worth another post all its own.
But there is one other thing about Ross’ interface worth noting.
The sonic overload
When the last of the cargo ships is nearly at the border, Ross reports to Shuri that he can’t chase it, because Killmonger-loyal dragon flyers have “got me trapped with some kind of cables.” She instructs him to, “Make an X with your arms!” He does. A wing-like display appears around him, confirming its readiness.
Then she shouts, “Now break it!” he does, and the Talon goes boom shaking off the enemy ships, allowing Ross to continue his pursuit.
First, what a great gesture for this function. Ordinarily, Wakandans pilot the Talon, and each of them would be deeply familiar with this gesture, and even prone to think of it when executing a Hail Mary move like this.
Second, when an outsider needed to perform the action, why didn’t she just tell Griot to just do it? If there’s an interpretation layer in the system, why not just speak directly to that controller? It might be so the human knows how to do it themselves next time, but this is the last cargo ship he’s been tasked with chasing, and there’s little chance of his officially joining the Wakandan air force. The emergency will be over after this instance. Maybe Wakandans have a principle that they are first supposed to engage the humans before bringing in the machines, but that’s heavy conjecture.
Third, I have a beef about gestures—there are often zero affordances to tell users what gestures they can do, and what effects those gestures will have. If Shuri had not been there to answer Ross’ urgent question, would the mission have just…failed? Seems like a bad design.
How else could he have known he could do this? If Griot is on board, Griot could have mentioned it. But avoiding wizard-did-it solutions, some sort of context-aware display could detect that the ship is tethered to something, and display the gesture on the HUD for him. This violates the principle of letting the humans be the heroes, but would be a critical inclusion in any similar real-world system.
Any time we are faced with “intuitive” controls that don’t map 1:1 to the thing being controlled, we’re faced with similar problems. (We’ve seen the same problems in Sleep Dealer and Lost in Space (1998). Maybe that’s worth its own write-up.) Some controls won’t map to anything. More problematic is that there will be functions which don’t have controls. Designers can’t rely on having a human cavalry like Shuri there to save the day, and should take steps to find ways that the system can inform users of how to activate those functions.
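One low-tech version of that “inform the user” step can be sketched as a simple condition-to-hint lookup: the system watches for triggering conditions and surfaces the matching gesture on the HUD. This is a hypothetical Python sketch; the condition names and gesture descriptions are invented for illustration:

```python
# Map detected ship states to the gesture that resolves them.
# (Hypothetical conditions and wordings, not from the film.)
GESTURE_HINTS = {
    "tethered": "Make an X with your arms, then break it (sonic overload)",
    "waker_nearby": "Raise a flat palm (silent running)",
}

def active_hints(ship_state: dict) -> list:
    """Return a HUD hint for every condition currently detected as true."""
    return [hint for condition, hint in GESTURE_HINTS.items()
            if ship_state.get(condition, False)]
```

A real system would need to prioritize and throttle these hints so they don’t become their own distraction, but even this trivial mapping would have saved Ross a panicked radio call.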
Fit to purpose?
I’ve had to presume a lot about this interface. But if those things are correct, then, sure, this mostly makes it possible for Ross, a novice to piloting, to contribute something to the team mission, while upholding the directive that AI Cannot Be Heroes.
If Griot is not secretly driving, and that directive not really a thing, then the HUD needs more work, I can’t diegetically explain the controls, and they need to develop just-in-time suggestions to patch the gap of the mismatched interface.
Black Georgia Matters
Each post in the Black Panther review is followed by actions that you can take to support black lives. As this critical special election is still coming up, this is a repeat of the last one, modified to reflect passed deadlines.
Despite outrageous, anti-democratic voter suppression by the GOP, for the first time in 28 years, Georgia went blue for the presidential election, verified with two hand recounts. Credit to Stacey Abrams and her team’s years of effort to get out the Georgian—and particularly the powerful black Georgian—vote.
But the story doesn’t end there. Though the Biden/Harris ticket won the election, if the Senate stays majority red, Moscow Mitch McConnell will continue the infuriating obstructionism with which he held back Obama’s efforts in office for eight years. The Republicans will, as they have done before, ensure that nothing gets done.
To start to undo the damage the fascist and racist Trump administration has done, and maybe make some actual progress in the US, we need the Senate majority blue. Georgia is providing that opportunity. Neither of the wretched Republican incumbents got 50% of the vote, resulting in a special runoff election January 5, 2021. If these two seats go to the Democratic challengers, Warnock and Ossoff, it will flip the Senate blue, and the nation can begin to seriously right the sinking ship that is America.
Georgia residents can also volunteer to become canvassers for either of the campaigns, though it’s a tough thing to ask in the middle of the raging pandemic.
The rest of us (yes, even non-American readers) can contribute either to the campaigns directly using the links above, or to Stacey Abrams’ Fair Fight campaign. From the campaign’s web site:
We promote fair elections in Georgia and around the country, encourage voter participation in elections, and educate voters about elections and their voting rights. Fair Fight brings awareness to the public on election reform, advocates for election reform at all levels, and engages in other voter education programs and communications.
We will continue moving the country into the anti-racist future regardless of the runoff, but we can make much, much more progress if we win this election. Please join the efforts as best you can even as you take care of yourself and your loved ones over the holidays. So very much depends on it.
Black Reparations Matter
This is timely, so I’m adding it here rather than waiting for the next post: A bill is before the House to set up a commission to examine the institution of slavery and its impact, and to make recommendations to Congress for reparations. If you are an American citizen, please consider sending a message to your congresspeople asking them to support the bill.
On this ACLU site you will find a form and suggested wording to help you along.
before we get into the Kimoyo beads, or the Cape Shields, or the remote driving systems…
before I have to dismiss these interactions as “a wizard did it” style non-designs…
before I review other brain-computer interfaces in other shows…
…I wanted to check on the state of the art of brain-computer interfaces (or BCIs) and see how our understanding had advanced since I wrote the Brain interface chapter in the book, back in the halcyon days of 2012.
Note that I am deliberately avoiding the tech side of this question. I’m not going to talk about EEG, PET, MRI, and fMRI. (Though they’re linked in case you want to learn more.) Modern BCI technologies are evolving too rapidly to bother with an overview of them. They’ll change in the real world by the time I press “publish,” much less by the time you read this. And sci-fi tech is most often a black box anyway. But the human part of the human-computer interaction model changes much more slowly. We can treat the brain as a relatively unalterable component of the BCI question, leading us to two believability questions of sci-fi BCI.
How can people express intent using their brains?
How do we prevent accidental activation using BCI?
Let’s discuss each.
1. How can people express intent using their brains?
In the see-think-do loop of human-computer interaction…
See (perceive) has been a subject of visual, industrial, and auditory design.
Think has been a matter of human cognition as informed by system interaction and content design.
Do has long been a matter of some muscular movement that the system can detect, to start its matching input-process-output loop. Tap a button. Move a mouse. Touch a screen. Focus on something with your eyes. Hold your breath. These are all ways of “doing” with muscles.
But the first promise of BCI is to let that doing part happen with your brain. The brain isn’t a muscle, so what actions are BCI users able to take in their heads to signal to a BCI system what they want it to do? The answer to this question is partly physiological, about the way the brain changes as it goes about its thinking business.
Our brains are a dense network of bioelectric signals, chemicals, and blood flow. But it’s not chaos. It’s organized. It’s functionally localized, meaning that certain parts of the brain are predictably activated when we think about certain things. But it’s not like the Christmas lights in Stranger Things, with one part lighting up discretely at a time. It’s more like an animated proportional symbol map, with lots of places lighting up at the same time to different degrees.
The sizes and shapes of the regions that light up vary slightly from person to person, but basic maps of healthy, undamaged brains are broadly similar. Lots of work has gone into mapping these functional areas, with researchers showing subjects lots of stimuli and noting which areas of the brain light up. Test enough subjects and you can build a pretty good functional map of concepts. Thereafter, you can take a “picture” of a brain and cross-reference your maps to reverse-engineer what is being thought.
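To make the cross-referencing idea concrete, here’s a deliberately toy sketch in Python. All the numbers, region counts, and concept labels are invented for illustration; real decoding works on vastly higher-dimensional scan data with statistical models, but the core move is the same: compare a new activation pattern against stored per-concept maps and pick the best match.

```python
import math

# Invented functional maps: each concept is a vector of average
# activation levels across four hypothetical brain regions, built
# from scans taken while subjects viewed known stimuli.
concept_maps = {
    "face":  [0.9, 0.1, 0.3, 0.2],
    "tool":  [0.2, 0.8, 0.1, 0.6],
    "place": [0.1, 0.3, 0.9, 0.4],
}

def cosine(a, b):
    """Similarity between two activation patterns (1.0 = identical shape)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def decode(scan):
    """'Reverse-engineer' a thought: return the concept whose stored
    map most closely matches the new scan's activation pattern."""
    return max(concept_maps, key=lambda c: cosine(concept_maps[c], scan))

print(decode([0.85, 0.15, 0.25, 0.2]))  # prints "face"
```

The point isn’t the math; it’s that once you have the maps, decoding is just pattern matching, which is why individual variation in those maps matters so much.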