Space is incredibly inhospitable to life. It is a near-perfect vacuum, lacking air, pressure, and warmth. It is full of radiation that can poison us, light that can blind and burn us, and a darkness that can disorient us. If any hazardous chemicals such as rocket fuel have gotten loose, they must be kept safely away from us. There are few of the ordinary spatial cues and tools that humans use to orient themselves and control their position. There is free-floating debris, ranging from bullet-like micrometeorites to gas and rock planets that can pull us in to smash against their surfaces or burn up in their atmospheres. There are astronomical bodies such as stars and black holes that can boil us or crush us into a singularity. And perhaps most terrifyingly, there is the very real possibility of drifting off into the expanse of space to asphyxiate, starve (though biology will be covered in another post), freeze, and/or go mad.
The survey shows that sci-fi has addressed most of these perils at one time or another.
Mission to Mars (2000): A rock violently strikes Renée’s visor.
Alien (1979): Kane’s visor is melted by a facehugger’s acid.
Sunshine (2007): Kaneda faces death in the solar winds.
Star Trek: First Contact (1996): Worf’s suit outgasses after being cut in a fight with a Borg minion.
Firefly (Episode 14, “Objects in Space,” 2002): Mal throws Early off the ship and into the reaches of space.
Interfaces
Despite the acknowledgment of all of these problems, the survey reveals only two interfaces related to spacesuit protection.
Battlestar Galactica (2004) handled radiation exposure with a simple chemical output device. As CAG Lee Adama explains in “The Passage,” the badge, worn on the outside of the flight suit, slowly turns black with radiation exposure. When the badge turns completely black, a pilot is removed from duty for radiation treatment.
Battlestar Galactica (Season 3, Episode 10, “The Passage”): Lee Adama explains the function of the radiation badge to his pilots.
This is something of a stretch because it has little to do with the spacesuit itself, and is strictly an output device. (Noting that proper interaction requires human input and state changes.) The badge is not permanently attached to the suit, and it is used inside a spaceship while wearing a flight suit. The flight suit is meant to act as a very short-term extravehicular mobility unit (EMU), but is not a spacesuit in the strict sense.
The other protection related interface is from 2001: A Space Odyssey. As Dr. Dave Bowman begins an extravehicular activity to inspect seemingly-faulty communications component AE-35, we see him touch one of the buttons on his left forearm panel. Moments later his visor changes from being transparent to being dark and protective.
2001: A Space Odyssey (1968): Dr. Bowman changes the opacity of his visor.
We should expect to see few interfaces, but still…
As a quick and hopefully obvious critique, Bowman’s visor function shouldn’t have an interface at all. It should be automatic (not even agentive), since events can happen much faster than human response times. And, now that we’ve said that part out loud, maybe it’s true that the protection features of a suit should all be automatic. Interfaces to pre-emptively switch them on or, for exceptional reasons, manually turn them off should be the exception.
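To make that “automatic by default, manual by exception” principle concrete, here is a minimal sketch in Python. The names and thresholds are mine, not anything from the survey: the suit darkens the visor the moment a light sensor reads a dangerous level, far faster than a person could react, and the only interfaces the wearer needs are an optional pre-emptive switch and an explicit manual override.

```python
from dataclasses import dataclass

LUX_DANGER = 50_000  # hypothetical threshold for eye-damaging glare

@dataclass
class VisorController:
    """Automatic-by-default visor protection, with a manual exception path."""
    opacity: float = 0.0           # 0.0 transparent .. 1.0 fully dark
    manual_override: bool = False  # wearer has explicitly taken control
    pre_emptive: bool = False      # wearer darkened the visor ahead of a task

    def on_light_sample(self, lux: float) -> None:
        """Called by the suit's light-sensor loop, far faster than human reaction."""
        if self.manual_override:
            return  # the rare, explicit exception: the wearer owns the setting
        self.opacity = 1.0 if (lux >= LUX_DANGER or self.pre_emptive) else 0.0

    def set_manual(self, opacity: float) -> None:
        """The exceptional interface: take over, e.g. to inspect something dim."""
        self.manual_override = True
        self.opacity = opacity

    def release_manual(self) -> None:
        self.manual_override = False

# A solar flash is handled with no wearer action at all.
visor = VisorController()
visor.on_light_sample(lux=120_000)
assert visor.opacity == 1.0
```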
But it would be cool to see more protective features appear in sci-fi spacesuits. An onboard AI detects an incoming micrometeorite storm. Does the HUD show how much time is left? What are the wearer’s options? Can she work through scenarios of action? Can she simply speak the course of action she wants the suit to take? If a wearer is kicked free of the spaceship, the suit should have a homing feature. Think Doctor Strange’s Cloak of Levitation, but for astronauts.
As always, if you know of other examples not in the survey, please put them in the comments.
“Why cannot we walk outside [the spaceship] like the meteor? Why cannot we launch into space through the scuttle? What enjoyment it would be to feel oneself thus suspended in ether, more favored than the birds who must use their wings to keep themselves up!”
—The astronaut Michel Ardan in Round the Moon by Jules Verne (1870)
When we were close to publication on Make It So, we wound up being way over the maximum page count for a Rosenfeld Media book. We really wanted to keep the components and topics sections, and that meant we had to cut the section on things. Spacesuits was one of the chapters I drafted for that section. I am re-presenting that chapter here on the blog. N.b. this was written ten years ago, in 2011. There are almost certainly other, more recent films and television shows that can serve as examples. If you, the reader, notice any…well, that’s what the comments section is for.
Sci-fi doesn’t have to take place in interplanetary space, but a heck of a lot of it does. In fact, the first screen-based science fiction film is all about a trip to the moon.
Le Voyage dans la Lune (1902): The professors suit up for their voyage to the moon by donning conical caps, neck ruffles, and dark robes.
Most of the time, traveling in this dangerous locale happens inside spaceships, but occasionally a character must travel out bodily into the void of space. Humans—and pretty much everything (no, not them) we would recognize as life—cannot survive there for very long at all. Fortunately, the same conceits that sci-fi adopts to get characters into space can help them survive once they’re there.
Establishing terms
An environmental suit is any suit that helps the wearer survive in an inhospitable environment. Environmental suits began with underwater diving suits and, later, high-altitude suits. For space travel, pressure suits are worn during the most dangerous times, i.e. liftoff and landing, when an accident may suddenly decompress a spacecraft. A spacesuit is an environmental suit designed specifically for survival in outer space. NASA refers to spacesuits as Extravehicular Mobility Units, or EMUs. Individuals who wear the spacesuits are known as spacewalkers. The additional equipment that helps a spacewalker move around in space in a controlled manner is the Manned Maneuvering Unit, or MMU.
Additionally, though many other agencies around the world participate in the design and engineering of spacesuits, there is no convenient way to reference them and their efforts as a group, so Aerospace Community is used as a shorthand. This also helps to acknowledge that my research and interviews were primarily with sources from NASA.
The design of the spacesuit is an ongoing and complicated affair. To speak of “the spacesuit” as if it were a single object ignores the vast number of iterations and changes made to the suits between each cycle of engineering, testing, and deployment, much less between different agencies working on their own designs. So, for those wondering, I’m using the Russian Orlan spacesuit currently used on the International Space Station as the default design when speaking about modern spacesuits.
Spacesuit Orlan-MKS at MAKS-2013 (air show) (fragment), CC BY-SA 4.0
What the thing’s got to do
A spacesuit, whether in sci-fi or the real world, has to do five things.
Protect the wearer from the perils of interplanetary space.
Accommodate the wearer’s ongoing biological needs.
Help them move around.
Facilitate communication between them, other spacewalkers, and mission control.
Identify who is wearing the suit to others.
Each of these categories of functions, and the related interfaces, is discussed in the following posts.
First, congratulations to Perception Studio for the excellent work on Black Panther! Readers can see Perception’s own write-up about the interfaces on their website. (Note that the reviewers only looked at this after the reviews were complete, to ensure we were looking at end result, not intent. Also, all images in this post were lifted from that page, with permission, unless otherwise noted.)
John LePore of Perception Studio reached out to me when we began to publish the reviews, asking if he could shed light on anything. So I asked if he would be up for an email interview when the reviews were complete. This post is all that wonderful shed light.
What exactly did Perception do for the film?
John: Perception was brought aboard early in the process for the specific purpose of consulting on potential areas of interest in science and technology. A brief consulting sprint evolved into 18 months of collaboration that included conceptual development and prototyping of various technologies for use in multiple sequences and scenarios. The most central of these elements was the conceptualization and development of the vibranium sand interfaces throughout the film. Some of this work was used as design guidelines for various VFX houses while other elements were incorporated directly into the final shots by Perception. In addition to the various technologies, Perception worked closely on two special sequences in the film—the opening ‘history of Wakanda’ prologue, and the main-on-end title sequence, both of which were based on the technological paradigm of vibranium sand.
What were some of the unique challenges for Black Panther?
John: We encountered various challenges on Black Panther, both conceptual and technical. An inspiring challenge was the need to design the most advanced technology in the Marvel Cinematic Universe, while conceptualizing something that had zero influence from any existing technologies. There were lots of challenges around dynamic sand, and even difficulty rendering when a surge in the crypto market made GPUs hard to come by!
One of the things that struck me about Black Panther is the ubiquity of (what appear to be) brain-computer interfaces. How was it working with speculative tech that seemed so magical?
John: From the very start, it was very important to us that all of the technology we conceptualized was grounded in logic, and had a pathway to feasibility. We worked hard to hold ourselves to these constraints, and looked for every opportunity to include signals for the audience (sometimes nuanced, sometimes obvious) as to how these technologies worked. At the same time, we know the film will never stop dead in its tracks to explain technology paradigm #6. In fact, one of our biggest concerns was that any of the tech would appear to be ‘made of magic’.
Chris: Ooh, now I want to know what some of the nuanced signals were!
John: One of the key nuances that made it from rough tests to the final film was that the vibranium sand ‘bounces’ to life with a pulse. This is best seen in the tactical table in the Royal Talon at the start of the film. The ‘bounce’ was intended to be a rhythmic cue to the idea of ultrasonic soundwaves triggering the levitating sand.
Similarly, you can find cymatic patterns in numerous effects in the film.
Did you know going in that you’d be creating something that would be so important to black lives?
John: Sometimes on a film it is hard to imagine how it will be received. On Black Panther, all the signals were clear that the film would be deeply important, from our early peeks at concept art of Wakanda to witnessing the way Marvel Studios supported Ryan Coogler’s vision. The whole time working on the film the anticipation kept growing, and at the core of the buzz was an incredibly strong black fandom. Late in our process, the hype was still increasing—it was becoming obvious that Black Panther could be the biggest Marvel film to date. I remember working on the title sequence one night, a couple months before release, and Ryan played (over speakerphone) the song that would accompany the sequence. We were bugging out— “Holy shit, that’s Kendrick!”… it was just another sign that this film would be truly special, and deeply dedicated to an under-served audience.
How did working on the film affect the studio?
John: For us it’s been one of our proudest moments— it combined everything we love in terms of exciting concept development, aesthetic innovation and ambitious technical execution. The project is a key trophy in our portfolio, and I revisit it regularly when presenting at conferences or attracting new clients, and I’m deeply proud that it continues to resonate.
Where did you look for inspiration when designing?
John: When we started, the brief was simple: Best tech, most unique tech, and centered around vibranium. With a nearly open canvas, the element of vibranium (only seen previously as Captain America’s shield) sent us pursuing vibration and sound as a starting point. We looked deeply into cymatic patterns and other sound-based phenomena like echo-location. About a year prior, we were working with an automotive supplier on a technology that used ultrasonic soundwaves to create ‘mid-air haptics’… tech that lets you feel things that aren’t really there. We then discovered that the University of Tokyo was doing experiments with the same hardware to levitate styrofoam particles with limited movement. Our theory was that with the capabilities of vibranium, this effect could levitate and translate millions of particles simultaneously.
Beyond technical and scientific phenomena, there was tremendous inspiration to be taken from African culture in general. From textile patterns to the colors of specific spices and more, there were many elements that influenced our process.
What thing about working on the film do you think most people in audiences would be surprised by?
John: I think the average audience member would be surprised by how much time and effort goes into these pieces of the film. There are so many details that are considered and developed, without explicitly figuring into the plot of the film. We consider ourselves fortunate that film after film Marvel Studios pushes to develop these ideas that in other films are simply ‘set dressing’.
Chris: Lastly, I like finishing interviews with these questions.
What, in your opinion, makes for a great fictional user interface?
John: I love it when you are presented with innovative tech in a film and just by seeing it you can understand the deeper implications. Having just enough information to make assumptions about how it works, why it works, and what it means to a culture or society. If you can invite this kind of curiosity, and reward this fascination, the audience gets a satisfying gift. And if these elements pull me in, I will almost certainly get ‘lost’ in a film…in the best way.
What’s your favorite sci-fi interface that someone else designed? (and why)
John: I always loved two that stood out to me for the exact reasons mentioned above.
One is Westworld’s tablet-based Dialog Tree system. It’s not the most radical UI design, but it means SO much to the story in that moment, and immediately conveys a complicated concept effortlessly to the viewer.
from Westworld Season 01 Episode 06, “The Adversary”
Another see-it-and-it-makes-sense tech concept is the live-tracked projection camera system from Mission Impossible: Ghost Protocol. It’s so clever, so physical, and you understand exactly how it works (and how it fails!). When I saw this in the theatre, I turned to my wife and whispered, “You see, the camera is moving to match the persp…” and she glared at me and said “I get it! Everybody gets it!” The clever execution of the gadget and scene made me, the viewer, feel smarter than I actually was!
from Mission: Impossible – Ghost Protocol (2011)
What’s next for the studio?
The Perception team is continuing to work hard in our two similar paths of exploration— film and real-world tech. This year we have seen our work appear in Marvel’s streaming shows, with more to come. We’ve also been quite busy in the technology space, working on next-generation products from technology platforms to exciting automobiles. The past year has been busy and full of changes, but no matter how we work, we continue to be fascinated and inspired by the future ahead.
Black Panther’s financial success is hard to ignore. From the Wikipedia page:
Black Panther grossed $700.1 million in the United States and Canada, and $646.9 million in other territories, for a worldwide total of $1.347 billion. It became the highest-grossing solo superhero film, the third-highest-grossing film of the MCU and superhero film overall, the ninth-highest-grossing film of all time, and the highest-grossing film by a black director. It is the fifth MCU film and 33rd overall to surpass $1 billion, and the second-highest-grossing film of 2018. Deadline Hollywood estimated the net profit of the film to be $476.8 million, accounting for production budgets, P&A, talent participations and other costs, with box office grosses and ancillary revenues from home media, placing it second on their list of 2018’s “Most Valuable Blockbusters”.
It was also a critical success (96% on the Tomatometer, anyone?) and a fan…well, “favorite” seems too small a word. Here, let me let clinical psychologist, researcher, and trusted media expert Erlanger Turner speak to this.
Many have wondered why Black Panther means so much to the black community and why schools, churches and organizations have come to the theaters with so much excitement. The answer is that the movie brings a moment of positivity to a group of people often not the centerpiece of Hollywood movies… [Racial and ethnic socialization] helps to strengthen identity and helps reduce the likelihood of internalizing negative stereotypes about one’s ethnic group.
People—myself included—just love this movie. As is my usual caveat, though, this site reviews not the film, but the interfaces that appear in the film, and specifically, across three aspects.
Sci: B (3 of 4) How believable are the interfaces?
This category (and Interfaces, I’ll be repeating myself later) is complicated because Wakanda is the most technologically-advanced culture on Earth as far as the MCU goes. So who’s to say what’s believable when you have general artificial intelligence, nanobots, brain interfaces, and technology barely distinguishable from magic? But this sort of challenge is what I signed up for, so…pressing on.
The interfaces are mostly internally consistent and believable within their (admittedly large) scope of nova.
There are plenty of weird wtf moments, though. Why do remote piloting interfaces routinely drop their users onto their tailbones? Why are the interfaces sometimes photo-real and sometimes sandpaper? Why does the Black Panther suit glow with a Here-I-Am light? Why have a recovery room in the middle of a functioning laboratory? Why have a control where thrusting one way is a throttle and the other fires weapons?
Fi: A (4 of 4) How well do the interfaces inform the narrative of the story?
Here’s where Black Panther really shines. The wearable technology tells of a society built around keeping its advancement secret. The glowing tech gives clues as to what’s happening where. The kimoyo beads help describe a culture that—even if it is trapped in a might-makes-right and isolationist belief system—is still marvelous and equitable. The tech helps tell a wholly believable story that this is the most technologically advanced society on MCU Earth 616.
Interfaces: B (3 of 4) How well do the interfaces equip the characters to achieve their goals?
As I mentioned above, this is an especially tough determination given the presence of nanobots, AGI, and brain interfaces. All these things confound usual heuristic approaches.
It even made me make this Simpsons-riff animated gif, which I expect I’ll be using increasingly in the future. In this metaphor I am Frink.
But they do not make it impossible. The suit and Talon provide gorgeous displays. (As does the med table, even if its interaction model has issues.) The claws, the capes, and the sonic overload incorporate well-designed gestures. Griot (the unnamed AI) must be doing an awful lot of the heavy lifting, but this model of AI is one that appears increasingly in the MCU, where the AI is the thing in the background that lets the heroes be heroes (which I’m starting to tag as sidekick AI).
All that said, we still see the same stoic guru mistakes in the sand table that seem to plague sci-fi. In the med station we see a red-thing-bad oversimplicity, mismatched gestures-to-effects, and a display that pulls attention away from a patient, which keeps it from an A grade.
Final Grade A- (10 of 12), Blockbuster.
It was an unfortunately poignant time to have been writing these reviews. I started them because of the unconscionable murders of Breonna Taylor and George Floyd—in the long line of unconscionable black deaths at the hands of police—and because, knowing the pandemic was going to slow posting frequency, the reviews would keep these issues alive at least on this forum long after the initial public fury died down.
But across the posts, Raysean White was killed. Cops around the nation responded with inappropriate force. Chadwick Boseman died of cancer. Ruth Bader Ginsburg died, exposing one of the most blatant hypocrisies of the GOP and tilting the Supreme Court tragically toward the conservative. The U.S. ousted its racist-in-chief, and Democrats took control of the Senate for the first time since 2015, despite a coordinated attempt by the GOP to suppress votes while peddling the lie that the election was stolen (for which the lawmakers involved have yet to suffer any consequences).
It hasn’t ended. Just yesterday began the trial of the officer who murdered George Floyd. It’s going to take about a month just to hear the main arguments. The country will be watching.
Meanwhile Georgia just passed new laws that are so restrictive journalists are calling it the new Jim Crow. This is part of a larger conservative push to disenfranchise Democrats and voters of color in particular. We have a long way to go, but even though this wraps the Black Panther reviews, our work bending the arc of the moral universe is ongoing. Science fiction is about imagining other worlds so we can make this one better.
Black Panther II is currently scheduled to come out July 8, 2022.
I presume my readership are adults. I honestly cannot imagine this site has much to offer the 3-to-8-year-old. That said, if you are less than 8.8 years old, be aware that reading this will land you FIRMLY on the naughty list. Leave before it’s too late. Oooh, look! Here’s something interesting for you.
For those who celebrate Yule (and the very hybridized version of the holiday that I’ll call Santa-Christmas to distinguish it from Jesus-Christmas or Horus-Christmas), it’s that one time of year where we watch holiday movies. Santa features in no small number of them, working against the odds to save Christmas and Christmas spirit from something that threatens it. Santa accomplishes all that he does by dint of holiday magic, but increasingly, he has magic-powered technology to help him. These technologies are different for each movie in which they appear, with different sci-fi interfaces, which raises the question: Who did it better?
Unraveling this stands to be even more complicated than usual sci-fi fare.
These shows are largely aimed at young children, who haven’t developed the critical thinking skills to doubt the core premise, so the makers don’t have much pressure to present wholly-believable worlds. The makers also enjoy putting in some jokes for adults that are non-diegetic and confound analysis.
Despite the fact that these magical technologies are speculative just as in sci-fi, makers cannot presume that their audience are sci-fi fans who are familiar with those tropes. And things can’t seem too technical.
The sci in this fi is magical, which allows makers to do all sorts of hand-wavey things about how it’s doing what it’s doing.
Many of the choices are whimsical and serve to reinforce core tenets of the Santa Claus mythos rather than any particular story or worldbuilding purpose.
But complicated-ness has rarely cowed this blog’s investigations before, so why let a little thing like holiday magic do it now?
Ho-Ho-hubris!
A Primer on Santa
I have readers from all over the world. If you’re from a place that does not celebrate the Jolly Old Elf, a primer should help. And if you’re from a non-USA country, your Saint Nick mythos will be similar to, but not the same as, the one these movies are based on, so a clarification should help. To that end, here’s what I would consider the core of it.
Santa Claus is a magical, jolly, heavyset old man with white hair, mustache, and beard who lives at the North Pole with his wife, Mrs. Claus. The two are almost always caucasian. He can alternately be called Kris Kringle, Saint Nick, Father Christmas, or Klaus. The Clement Clarke Moore poem calls him a “jolly old elf.” He is aware of the behavior of children, and tallies their good and bad behavior over the year, ultimately landing them on the “naughty” or “nice” list. Santa brings the nice ones presents. (The naughty ones are canonically supposed to get coal in their stockings, though in all my years I have never heard of any kids actually getting coal in lieu of presents.) Children also hang special stockings, often on a mantle, to be filled with treats or smaller presents. Adults encourage children to be good in the fall to ensure they get presents. As December approaches, children write letters to Santa telling him what presents they hope for. Santa and his elves read the letters and make all the requested toys by hand in a workshop. Then, the evening of 24 DEC, he puts all the toys in a large sack and loads it into a sleigh led by 8 flying reindeer. Most of the time there is a ninth reindeer named Rudolph, with a glowing red nose, up front. He dresses in a warm red suit fringed with white fur, big black boots, a thick black belt, and a stocking hat with a furry ball at the end. Over the evening, as children sleep, he delivers the presents to their homes, where he places them beneath the Christmas tree for them to discover in the morning. Families often leave out cookies and milk for Santa to snack on, and sometimes carrots for the reindeer. Santa often tries to avoid detection, for reasons that are diegetically vague.
There is no single source of truth for this mythos, though the current core text might be the 1823 C.E. poem “A Visit from St. Nicholas” by Clement Clarke Moore. Visually, Santa’s modern look is often traced back to depictions by Civil War cartoonist Thomas Nast, which the Coca-Cola Company built upon for its holiday advertisements beginning in 1931.
Both these illustrations are by Nast.
There are all sorts of cultural conversations to have about normalizing a magical panopticon, about what effect hiding the actual supply chain has, and about what perpetuating this myth trains children to expect; but for now let’s stick to evaluating the interfaces in terms of Santa’s goals.
Santa’s goals
Given all of the above, we can say that the following are Santa’s goals.
Sort kids by behavior as naughty or nice
Many tellings have him observing actions directly
Manage the lists of names, usually on separate lists
Manage letters
Reading letters
Sending toy requests to the workshop
Storing letters
Make presents
Travel to kids’ homes
Find the most-efficient way there
Control the reindeer
Maintain air safety
Avoid air obstacles
Find a way inside and to the tree
Enjoy the cookies / milk
Deliver all presents before sunrise
For each child:
Know whether they are naughty or nice
If nice, match the right toy to the child
Stage presents beneath the tree
Avoid being seen
We’ll use these goals as the context against which to evaluate the Santa interfaces.
This is the Worst Santa, but the image is illustrative of the weather challenges.
Typical Challenges
Nearly every story tells of Santa working with other characters to save Christmas. (The metaphor that we have to work together to make Christmas happen is appreciated.) The challenges in the stories can be almost anything, but often include…
Inclement weather (usually winter, but Santa is a global phenomenon)
Air safety
Air obstacles (Planes, helicopters, skyscrapers)
Ingress/egress into homes
Home security systems / guard dogs
The Contenders
Imdb.com lists 847 films tagged with the keyword “santa claus,” which is far too many to review. So I looked through “best of” lists (two are linked below) and watched those films for interfaces. There weren’t many. I even had to blend CGI and live-action shows, which I’m normally hesitant to do. As always, if you know of any additional shows that should be considered, please mention them in the comments.
After reviewing these films, the ones with Santa interfaces came down to four, presented below in chronological order.
The Santa Clause (1994)
This movie deals with the lead character, Scott Calvin, inadvertently taking on the “job” of Santa Claus. (If you’ve read Piers Anthony’s Incarnations of Immortality series, this plot will feel quite familiar.)
The sleigh he inherits has a number of displays that are largely unexplained, but little Charlie figures out that the center console includes a hot chocolate and cookie dispenser. There is also a radar and, far away from it, push buttons for fog, planes, rain, and lightning. There are several controls with Christmas bell icons associated with them, but the meanings of these are unclear.
Santa’s hat in this story has headphones and the ball has a microphone for communicating with elves back in the workshop.
This is the oldest of the candidates. Its interfaces are quite sterile and “tacked on” compared to the others, but they were novel for their time.
Fred Claus (2007)
This movie tells the story of Santa’s ne’er-do-well brother Fred, who has to work in the workshop for one season to work off bail money. While there, he winds up helping forestall foreclosure by an underhanded supernatural efficiency expert, and un-estranging himself from his family. A really nice bit in this critically-panned film is that Fred helps Santa understand that there are no bad kids, just kids in bad circumstances.
Fred is taken to the North Pole in a sled with switches that are very reminiscent of the ones in The Santa Clause. A funny touch is the “fasten your seatbelt” sign like you might see in a commercial airliner. The use of the Lombardic Capitals font is a very nice touch, given that much of the modern Western Santa Claus myth (and really, many of our traditions) comes from Germany.
The workshop has an extensive pneumatic tube system for getting letters to the right craftself.
This chamber is where Santa is able to keep an eye on children. (Seriously panopticony. They have no idea they’re being surveilled.) Merely by reading a child’s name and address, Santa summons a volumetric display of them within the giant snowglobe. The naughtiest children’s names are displayed on a digital split-flap display, including their greatest offenses. (The nicest are as well, but we don’t get a close up of it.)
The final tally is put into a large book that one of the elves manages from the sleigh while Santa does the actual gift-distribution. The text in the book looks like it was printed from a computer.
Arthur Christmas (2011)
In this telling, the Santa job is passed down patrilineally. The oldest Santa, Grandsanta, is retired. The dad, Malcolm, is the current acting Santa, and he has two sons. One is Steve, a by-the-numbers type into military efficiency and modern technology. The other son, Arthur, is an awkward fellow who has a semi-disposable job responding to letters. Malcolm currently pilots a massive mile-wide spaceship from which ninja elves do the gift distribution. They have a lot of tech to help them do their job. The plot involves Arthur working with Grandsanta, using his old sleigh, to get a last forgotten gift to a young girl before the sun rises.
To help manage loud pets in the home who might wake up sleeping people, this gun has a dial for common pets that delivers a treat to distract them.
Elves have face scanners which determine each kid’s naughty/nice percentage. The elf then enters this into a stocking-filling gun, which affects the contents in some unseen way. A sweet touch: when one elf scans a kid who reads as quite naughty, the elf scans his own face to get a nice reading instead.
The S-1 is the name of the spaceship sleigh at the beginning (at the end it is renamed after Grandsanta’s sleigh). Its bridge is loaded with controls, volumetric displays, and even a Little Tree air freshener. It has a cloaking display on its underside which is strikingly similar to the MCU S.H.I.E.L.D. helicarrier cloaking. (And this came out the year before The Avengers, I’m just sayin’.)
The North Pole houses the command-and-control center, which Steve manages. Thousands of elves manage workstations here, and there is a huge shared display for focusing and informing the team at once when necessary. Smaller displays help elf teams manage certain geographies. Its interfaces fall to comedy and trope, mostly, but are germane to the story beats.
One of the crisis scenarios that this system helps manage is for a “waker,” a child who has awoken and is at risk of spying Santa.
Grandsanta’s outmoded sleigh is named Eve. Its technology is much more from the early 20th century, with switches and dials, buttons and levers. It’s a bit janky and overly complex, but gets the job done.
One notable control on S-1 is this trackball with dark representations of the continents. It appears to be a destination selector, but we do not see it in use. It is remarkable because it is very similar to one of the main interface components in the next candidate movie, The Christmas Chronicles.
The Christmas Chronicles (2018)
The Christmas Chronicles follows two kids who stow away on Santa’s sleigh on Christmas Eve. His surprise when they reveal themselves causes him to lose his magical hat and wreck his sleigh. They help him recover the items, finish his deliveries, and (well, of course) save Christmas just in time.
Santa’s sleigh enables him to teleport to any place on earth. The main control is a trackball location selector. Once he spins it and confirms that the city readout looks correct, he can press the “GO” button for a portal to open in the air just ahead of the sleigh. After traveling for a bit in an aurora borealis realm filled with famous landmarks, another portal appears. They pass through this and appear at the selected location. A small magnifying glass above the selection point helps with precision.
Santa wears a watch that measures not time, but Christmas spirit, which ranges from 0 to 100. In the bottom half, chapter rings and a magnifying window seem designed to show the date, with 12 and 31 sequential numbers, respectively. It’s not clear why it shows mid-May. A hemisphere in the middle of the face looks like it’s almost a globe, which might be a nice way to display and change time zones, but that may be wishful thinking on my part.
Santa also has a tracking device for finding his sack of toys. (Apparently this has happened enough times to warrant such a thing.) It is an intricate filigree over cool green and blue glass. A light within blinks faster the closer the sphere is to the sack.
Since he must finish delivering toys before Christmas morning, the dashboard has a countdown clock with Nixie tube numbers showing hours, minutes, and milliseconds. They ordinarily glow cyan, but when time runs out, they turn red and blink.
This Santa also manages his list in a large book with lovely handwritten calligraphy. The kids whose gifts remain undelivered glow golden to draw his attention.
The hard problem here is that there are a lot of apples-to-oranges comparisons to make. Even though the mythos seems pretty locked down, each movie takes liberties with one or two aspects. As a result, not all these Santas are created equal. Calvin’s elves know he is completely new to his job and will need support. Christmas Chronicles Santa has a perfect memory, magical abilities, and handles nearly all the delivery duties himself, unless he’s enacting a clever scheme to impart Christmas wisdom. Arthur Christmas has intergenerational technology and Santas who may not be magic at all, but who have known their duty from their youths and rely on a huge army of shock-troop elves to make things happen. So it’s hard to name just one. But absent a point-by-point detailed analysis, there are two that really stand out to me.
The weathered surface of this camouflage button is delightful (Arthur Christmas).
Coverage of goals
The Arthur Christmas movie has, by far, the most interfaces of any of the candidates, and the most coverage of the Santa-family’s goals. Managing noisy pets? Check. Dealing with wakers? Check. Navigating the globe? Check. As far as thinking through speculative technology that assists its Santa, this film has the most.
Keeping the holiday spirit
I’ll confess, though, that extradiegetically, one of the purposes of annual holidays is to mark the passage of time. By trying to adhere to traditions as much as we can, time and our memories get marked by those things that we cannot control (like, say, a pandemic keeping everyone at home and hanging with friends and family virtually). So for my money, the thoroughly modern interfaces that flood Arthur Christmas don’t work that well. They’re so modern they’re not…Christmassy. Grandsanta’s sleigh Eve points to an older tradition, but it’s also clearly framed as outdated in the context of the story.
Gorgeous steampunkish binocular HUD from The Christmas Chronicles 2, which was not otherwise included in this post.
Compare this to The Christmas Chronicles, with its gorgeous steampunk-y interfaces that combine a sense of magic and mechanics. These are things that a centuries-old Santa would have built and used. They feel rooted in tradition while still helping Santa accomplish as many of his goals as he needs (in the context of his Christmas adventure for the stowaway kids). These interfaces evoke a sense of wonder, add significantly to the worldbuilding, and are the ones I’d rather have as a model for magical interfaces in the real world.
Of course it’s a personal call, given the differences, but The Christmas Chronicles wins in my book.
Ho, Ho, HEH.
For those that celebrate Santa-Christmas, I hope it’s a happy one, given the strange, strange state of the world. May you be on the nice list.
Remote operation appears twice during Black Panther. This post describes the second, in which CIA Agent Ross remote-pilots the Talon in order to chase down cargo airships carrying Killmonger’s war supplies. The prior post describes the first, in which Shuri remotely drives an automobile.
In this sequence, Shuri equips Ross with kimoyo beads and a bone-conducting communication chip, and tells him that he must shoot down the cargo ships before they cross beyond the Wakandan border. As soon as she tosses a remote-control kimoyo bead onto the Talon, Griot announces to Ross in the lab, “Remote piloting system activated,” and creates a piloting seat out of vibranium dust for him. Savvy watchers may wonder at this, since Okoye pilots the thing by meditation and Ross would have no meditation-pilot training, but Shuri explains to him, “I made it American style for you. Get in!” He does, grabs the sparkly black controls, and gets to business.
The most remarkable thing to me about the interface is how seamlessly the Talon can be piloted by vastly different controls. Meditation brain control? Can do. Joystick-and-throttle? Just as can do.
Now, generally, I have a beef with the notion of hyperindividualized UI tailoring—it prevents vital communication across a community of practice (read more about my critique of this goal here)—but in this case, there is zero time for Ross to learn a new interface. So sure, give him a control system with which he feels comfortable to handle this emergency. It makes him feel more at ease.
The mutable nature of the controls tells us that there is a robust interface layer that is interpreting whatever inputs the pilot supplies and applying them to the actuators in the Talon. More on this below. Spoiler: it’s Griot.
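Here is a minimal sketch of what such an interpretation layer might look like, in Python. The names and signatures are my own invention rather than anything shown in the film: each control scheme, whether joystick-and-throttle or decoded meditation intent, translates into one shared command vocabulary, and only that vocabulary ever reaches the Talon’s actuators.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class FlightCommand:
    """The single vocabulary the Talon's actuators understand."""
    pitch: float = 0.0     # -1 .. 1
    roll: float = 0.0      # -1 .. 1
    throttle: float = 0.0  # 0 .. 1
    fire: bool = False

class ControlScheme(Protocol):
    """Any control scheme only has to translate its inputs into FlightCommands."""
    def read(self) -> FlightCommand: ...

class JoystickScheme:
    """'American style': stick deflection and trigger map directly to commands."""
    def __init__(self, stick_x: float, stick_y: float, throttle: float, trigger: bool):
        self.state = (stick_x, stick_y, throttle, trigger)
    def read(self) -> FlightCommand:
        x, y, t, trig = self.state
        return FlightCommand(pitch=y, roll=x, throttle=t, fire=trig)

class MeditationScheme:
    """Wakandan style: decoded intent from a brain interface, same output vocabulary."""
    def __init__(self, decoded_intent: dict):
        self.intent = decoded_intent
    def read(self) -> FlightCommand:
        return FlightCommand(pitch=self.intent.get("climb", 0.0),
                             roll=-self.intent.get("bank_left", 0.0),
                             throttle=self.intent.get("speed", 0.0),
                             fire=self.intent.get("fire", False))

def apply_to_talon(scheme: ControlScheme) -> FlightCommand:
    """The Talon never needs to know which scheme produced the command."""
    return scheme.read()

# Either pilot produces the same kind of command:
print(apply_to_talon(JoystickScheme(0.1, -0.3, 0.8, trigger=True)))
print(apply_to_talon(MeditationScheme({"climb": 0.4, "speed": 0.8})))
```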
Too sparse HUD
The HUD presents a simple circle-in-a-triangle reticle that lights up red when a target is in its sights. Otherwise it’s notably empty of augmentation. There’s no tunnel-in-the-sky display to describe the ideal path, no proximity warnings about skyscrapers, no airspeed indicator, no altimeter, no…anything. This seems a glaring omission since we can be certain other “American-style” airships have such things. More on why this might be below, but spoiler: It’s Griot.
What do these controls do, exactly?
I take no joy in gotchas. That said…
When Ross launches the Talon, he does so by pulling the right joystick backward.
When he shoots down the first cargo ship over Birnin Zana, he pushes the same joystick forward as he pulls the trigger, firing energy weapons.
Why would the same control do both? It’s hard to believe it’s modal. Extradiegetically, this is probably an artifact of actor Martin Freeman’s just doing what feels dramatic, but for a real-world equivalent I would advise against having physical controls have wholly different modes on the same grip, lest we risk confusing pilots on mission-critical tasks. But spoiler…oh, you know where this is going.
It’s Griot
Diegetically, Shuri is flat-out wrong that Ross is an experienced pilot. But she also knew that it didn’t matter, because her lab has him covered anyway. Griot is an AI with a brain interface, and can read Ross’ intentions, handling all the difficult execution itself.
This would also explain the lack of better HUD augmentation. That absence seems especially egregious considering that the first cargo ship was flying over a crowded city at the time it was being targeted. If Ross had fired in the wrong place, the cargo ship might have crashed into a building, or down to the bustling city street, killing people. But, instead, Griot quietly, precisely targets the ship for him, to ensure that it would crash safely in nearby water.
This would also explain how wildly different interfaces can control the Talon with similar efficacy.
So, Occam’s apology says: yep, it’s Griot.
An AI-wizard did it?
In the post about Shuri’s remote driving, I suggested that Griot was also helping her execute driving behind the scenes. This hearkens back to both the Iron HUD and Doctor Strange’s Cloak of Levitation. It could be that the MCU isn’t really worrying about the details of its enabling technologies, or that this is a brilliant model for our future relationship with technology. Let us feel like heroes, and let the AI manage all the details. I worry that I’m building myself into a wizard-did-it pattern, inserting AI for wizard. Maybe that’s worth another post all its own.
But there is one other thing about Ross’ interface worth noting.
The sonic overload
When the last of the cargo ships is nearly at the border, Ross reports to Shuri that he can’t chase it, because Killmonger-loyal dragon flyers have “got me trapped with some kind of cables.” She instructs him to, “Make an X with your arms!” He does. A wing-like display appears around him, confirming its readiness.
Then she shouts, “Now break it!” He does, and the Talon goes boom, shaking off the enemy ships and allowing Ross to continue his pursuit.
First, what a great gesture for this function. Ordinarily, Wakandans are piloting the Talon, and each of them would be deeply familiar with this gesture, and even prone to think of it when executing a Hail Mary move like this.
Second, when an outsider needed to perform the action, why didn’t she just tell Griot to do it? If there’s an interpretation layer in the system, why not speak directly to that controller? It might be so the human knows how to do it themselves next time, but this is the last cargo ship he’s been tasked with chasing, and there’s little chance of his officially joining the Wakandan air force. The emergency will be over after this instance. Maybe Wakandans have a principle that they are supposed to engage the humans first before bringing in the machines, but that’s heavy conjecture.
Third, I have a beef about gestures—there are often zero affordances to tell users what gestures they can do and what effects those gestures will have. If Shuri had not been there to answer Ross’ urgent question, would the mission have just…failed? Seems like a bad design.
How else could he have known he could do this? If Griot is on board, Griot could have mentioned it. But avoiding wizard-did-it solutions, some sort of context-aware display could detect that the ship is tethered to something and display the gesture on the HUD for him. This violates the principle of letting the humans be the heroes, but would be a critical inclusion in any similar real-world system.
Any time we are faced with “intuitive” controls that don’t map 1:1 to the thing being controlled, we’re faced with similar problems. (We’ve seen the same problems in Sleep Dealer and Lost in Space (1998). Maybe that’s worth its own write-up.) Some controls won’t map to anything. More problematic, there will be functions that don’t have controls. Designers can’t rely on having a human cavalry like Shuri there to save the day, and should take steps to find ways that the system can inform users of how to activate those functions.
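As a sketch of the pattern I’m suggesting (with hypothetical state and control names, not anything from the film): the system keeps a table of states that the active control scheme has no control mapped to, and surfaces the relevant gesture as a just-in-time HUD hint the moment that state is detected, instead of waiting for a Shuri.

```python
# A hypothetical just-in-time hint table: system states that have no physical
# control in the current scheme, mapped to the gesture or command that handles them.
UNMAPPED_STATE_HINTS = {
    "tethered_by_cables": "Cross your arms in an X, then snap them apart: sonic overload",
    "pilot_unresponsive": "Say 'Griot, take over' to hand control to the AI",
}

def hud_hints(active_states: set, scheme_controls: set) -> list:
    """Return hints only for active states the pilot's current controls cannot address."""
    return [hint for state, hint in UNMAPPED_STATE_HINTS.items()
            if state in active_states and state not in scheme_controls]

# Ross's joystick scheme has no control mapped to 'tethered_by_cables', so the
# HUD would show the sonic-overload gesture the moment the tether is detected.
print(hud_hints({"tethered_by_cables"}, scheme_controls={"throttle", "fire"}))
```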
Fit to purpose?
I’ve had to presume a lot about this interface. But if those things are correct, then, sure, this mostly makes it possible for Ross, a novice to piloting, to contribute something to the team mission, while upholding the directive that AI Cannot Be Heroes.
If Griot is not secretly driving, and that directive is not really a thing, then the HUD needs more work, I can’t diegetically explain the controls, and they need to develop just-in-time suggestions to patch the gap of the mismatched interface.
Black Georgia Matters
Each post in the Black Panther review is followed by actions that you can take to support black lives. As this critical special election is still coming up, this is a repeat of the last one, modified to reflect passed deadlines.
Always on my mind, or at least until July 06.
Despite outrageous, anti-democratic voter suppression by the GOP, for the first time in 28 years, Georgia went blue for the presidential election, verified with two hand recounts. Credit to Stacey Abrams and her team’s years of effort to get out the Georgian—and particularly the powerful black Georgian—vote.
But the story doesn’t end there. Though the Biden/Harris ticket won the election, if the Senate stays majority red, Moscow Mitch McConnell will continue the infuriating obstructionism with which he held back Obama’s efforts in office for eight years. The Republicans will, as they have done before, ensure that nothing gets done.
To start to undo the damage the fascist and racist Trump administration has done, and maybe make some actual progress in the US, we need the Senate majority blue. Georgia is providing that opportunity. Neither of the wretched Republican incumbents got 50% of the vote, resulting in a special runoff election January 5, 2021. If these two seats go to the Democratic challengers, Warnock and Ossoff, it will flip the Senate blue, and the nation can begin to seriously right the sinking ship that is America.
Residents can also volunteer to become a canvasser for either of the campaigns, though it’s a tough thing to ask in the middle of the raging pandemic.
The rest of us (yes, even non-American readers) can contribute either to the campaigns directly using the links above, or to Stacey Abrams’ Fair Fight campaign. From the campaign’s web site:
We promote fair elections in Georgia and around the country, encourage voter participation in elections, and educate voters about elections and their voting rights. Fair Fight brings awareness to the public on election reform, advocates for election reform at all levels, and engages in other voter education programs and communications.
We will continue moving the country into the anti-racist future regardless of the runoff, but we can make much, much more progress if we win this election. Please join the efforts as best you can even as you take care of yourself and your loved ones over the holidays. So very much depends on it.
Black Reparations Matter
This is timely, so I’m adding this on as well rather than waiting for the next post: A bill is in the house to set up a commission to examine the institution of slavery and its impact and make recommendations for reparations to Congress. If you are an American citizen, please consider sending a message to your congresspeople asking them to support the bill.
Image, uncredited, from the ACLU site. Please contact me if you know the artist.
On this ACLU site you will find a form and suggested wording to help you along.
before we get into the Kimoyo beads, or the Cape Shields, or the remote driving systems…
before I have to dismiss these interactions as “a wizard did it” style non-designs…
before I review other brain-computer interfaces in other shows…
…I wanted to check on the state of the art of brain-computer interfaces (or BCIs) and see how our understanding has advanced since I wrote the Brain interface chapter in the book, back in the halcyon days of 2012.
Note that I am deliberately avoiding the tech side of this question. I’m not going to talk about EEG, PET, MRI, and fMRI. (Though they’re linked in case you want to learn more.) Modern BCI technologies are evolving too rapidly to bother with an overview of them. They’ll change in the real world by the time I press “publish,” much less by the time you read this. And sci-fi tech is most often a black box anyway. But the human part of the human-computer interaction model changes much more slowly. We can look to the brain as a relatively-unalterable component of the BCI question, leading us to two believability questions of sci-fi BCI.
How can people express intent using their brains?
How do we prevent accidental activation using BCI?
Let’s discuss each.
1. How can people express intent using their brains?
In the see-think-do loop of human-computer interaction…
See (perceive) has been a subject of visual, industrial, and auditory design.
Think has been a matter of human cognition as informed by system interaction and content design.
Do has long been a matter of some muscular movement that the system can detect, to start its matching input-process-output loop. Tap a button. Move a mouse. Touch a screen. Focus on something with your eyes. Hold your breath. These are all ways of “doing” with muscles.
The “bowtie” diagram I developed for my book on agentive tech.
But the first promise of BCI is to let that doing part happen with your brain. The brain isn’t a muscle, so what actions are BCI users able to take in their heads to signal to a BCI system what they want it to do? The answer to this question is partly physiological, about the way the brain changes as it goes about its thinking business.
Ah, the 1800s. Such good art. Such bad science.
Our brains are a dense network of bioelectric signals, chemicals, and blood flow. But it’s not chaos. It’s organized. It’s locally functionalized, meaning that certain parts of the brain are predictably activated when we think about certain things. But it’s not like the Christmas lights in Stranger Things, with one part lighting up discretely at a time. It’s more like an animated proportional symbol map, with lots of places lighting up at the same time to different degrees.
The sizes and shapes of what lights up may change slightly between people, but basic maps of healthy, undamaged brains will be similar to each other. Lots of work has gone into mapping these functional areas, with researchers showing subjects lots of stimuli and noting what areas of the brain light up. Test enough of these subjects and you can build a pretty good functional map of concepts. Thereafter, you can take a “picture” of the brain and cross-reference your maps to reverse-engineer what is being thought.
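As a toy illustration of that cross-referencing, here is a sketch with entirely made-up activation vectors: store an average activation pattern per concept while building the map, then classify a new “picture” of the brain by finding the nearest stored pattern. Real decoding pipelines are vastly more sophisticated, but the logic is the same.

```python
import math

# Hypothetical functional map: concept -> average activation across a few regions
# (visual cortex, auditory cortex, motor cortex, semantic hub), built during mapping.
CONCEPT_MAP = {
    "light":   [0.9, 0.1, 0.1, 0.6],
    "music":   [0.1, 0.9, 0.2, 0.5],
    "walking": [0.2, 0.1, 0.9, 0.4],
}

def decode(activation: list) -> str:
    """Reverse-engineer the likeliest concept from a new brain 'picture'
    by nearest-neighbor distance to the stored maps."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CONCEPT_MAP, key=lambda concept: dist(activation, CONCEPT_MAP[concept]))

# A noisy scan that mostly lights up visual and semantic areas decodes as "light".
print(decode([0.8, 0.2, 0.2, 0.55]))  # -> "light"
```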
Right now those pictures are pretty crude and slow, but so were the first actual photographs in the world. In 20–50 years, we may be able to wear baseball caps that provide much higher-resolution, real-time input of the concepts being thought. In the far future (or, say, the alternate history of the MCU) it is conceivable to read these things from a distance. (Though there are significant ethical questions involved in such a technology, this post is focused on questions of viability and interaction.)
Similarly, the brain maps we have cover only a small percentage of an average adult vocabulary. Jack Gallant’s semantic map viewer (pictured and linked above) shows the maps for about 140 concepts, and estimates of the average active vocabulary are around 20,000 words, so we’re looking at roughly a tenth of a tenth of what we can imagine (not even counting the infinite composability of language). But in the future we will not only have more concepts mapped, more confidently, but we will also have idiographs for each individual, like the personal dictionary in your smartphone.
All this is to say that our extant real-world technology confirms that thoughts are a believable input for a system. This includes linguistic inputs like “Turn on the light” and “activate the vibranium sand table” and “Sincerely, Chris” and even imagining the desired change, like a light changing from dark to light. It might even include subconscious thoughts that have yet to be formed into words.
2. How do we prevent accidental activation?
But we know from personal experience that we don’t want all our thoughts to be acted on. Take, for example, those thoughts you have when you’re feeling hangry, or snarky, or dealing with a jerk-in-authority. Or those texts and emails that you’ve composed in the heat of the moment but wisely deleted before they got you in trouble.
If a speculative BCI is being read by a general artificial intelligence, it can manage that just like a smart human partner would.
He is composing a blog post, reasons the AGI, so I will just disregard his thought that he needs to pee.
And if there’s any doubt, an AGI can ask. “Did you intend me to include the bit about pee in the post?” Me: “Certainly not. Also BRB.” (Readers following the Black Panther reviews will note that AGI is available to Wakandans in the form of Griot.)
If AGI is unavailable to the diegesis (and it would significantly change any diegesis of which it is a part), then we need some way to indicate when a thought is intended as input and when it isn’t. Having that be some mode of thought feels complicated and error-prone, like when programmers have to write regular expressions that escape escape characters. Better, I think, is to use some secondary channel, like a bodily interaction. Touch forefinger and pinky together, for instance, and the computer understands you intend your thoughts as input.
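A minimal sketch of that secondary-channel idea, with hypothetical signal names of my own: decoded thoughts stream in continuously, but they are only forwarded to the system as commands while the “clutch” gesture (say, forefinger touching pinky) is held.

```python
from typing import Iterable, Iterator, Tuple

# Each sample pairs a decoded thought with whether the clutch gesture is held.
ThoughtSample = Tuple[str, bool]

def gated_commands(stream: Iterable[ThoughtSample]) -> Iterator[str]:
    """Pass decoded thoughts through as commands only while the clutch is engaged."""
    for thought, clutch_engaged in stream:
        if clutch_engaged:
            yield thought
        # Every thought without the clutch stays private.

stream = [
    ("turn on the light", True),    # deliberate: forwarded
    ("I need to pee", False),       # idle thought: ignored
    ("send the draft to Shuri", True),
]
print(list(gated_commands(stream)))  # ['turn on the light', 'send the draft to Shuri']
```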
So, for any BCI that appears in sci-fi, we would want to look for the presence or absence of AGI as a reasonableness interpreter, and, barring that, for some alternate-channel mechanism for indicating deliberateness. We would also hope to see some feedback and correction loops to understand the nuances of the edge-case interactions, but these are rare in sci-fi.
Even more future-full
This all points to the question of what seeing/perceiving via a BCI might be. A simple example might be a disembodied voice that only the user can hear.
A woman walks alone at night. Lost in thoughts, she hears her AI whisper to her thoughts, “Ada, be aware that a man has just left a shadowy doorstep and is following, half a block behind you. Shall I initialize your shock shoes?”
What other than language can be written to the brain in the far future? Images? Movies? Ideas? A suspicion? A compulsion? A hunch? How will people know what are their own thoughts and what has been placed there from the outside? I look forward to the stories and shows that illustrate new ideas, and warn us of the dark pitfalls.
He is a lead organizer in Oakland and an advisory board member for the Black Speculative Arts Movement (BSAM), a national and global movement co-founded by Reynaldo Anderson and dedicated to celebrating the Black imagination and design. Dr. Brooks serves as Creative Director for BSAM Futures, which aims to promote, publish, and teach forecasting with Afrocentric perspectives in mind, using gaming and facilitation for imaginative, action-oriented thinking.
Cover art for Afrofuturism 2.0: The Rise of Astro-Blackness, by John Jennings.
He also volunteers as a core member for outreach at Dynamicland.org, a pioneering non-profit dedicated to creating a more collaborative and dynamic computational medium for the long term. He has a passion for creating games that envision social-justice futures, including black and queer liberation, from Afro-Rithms From The Future to United Queerdom and Futurescope. He and his co-game designer Eli Kosminsky are committed to articulating emerging new future visions for traditionally underrepresented voices.
He is currently writing Imagining Queer Futures with Afrofuturism@Futureland: Circulating Afro-Queer futuretypes of Work, Culture and Racial Identity.
“As a forecaster and Afrofuturist who imagines alternative futures from a Black Diaspora perspective, I think about long-term signals that will shape the next 10 to 100 years.”
What we think about AI largely depends on how we know AI, and most people “know” AI through science fiction. But how well do the AIs in these shows match up with the science? What kinds of stories are we telling ourselves about AI that are pure fiction? And more importantly, what stories _aren’t_ we telling ourselves that we should be? Hear Chris Noessel of scifiinterfaces.com talk about this study and rethink what you “know” about #AI.
The network of in-house, studio, and freelance professionals who work together to create the interfaces in the sci-fi shows we know, love, and critique is large, complicated, and obfuscated. It’s very hard as an outsider to find out who should get the credit for what. So, I don’t try. I rarely identify the creators of the things I critique, trusting that they know who they are. Because of all this, I’m delighted when one of the studios reaches out to me directly. That’s what happened when Territory Studio recently reached out to me regarding the Fritz awards that went out in early February. They’d been involved with four of them! So, we set up our socially-distanced pandemic-approved keyboards, and here are the results.
First, congratulations to Territory Studio on having worked on four of the twelve 2019 Fritz Award nominees!
Chris: What exactly did you do on each of the films?
Marti Romances (founding partner and creative director of Territory Studio San Francisco): We were one of the screen graphic vendors on Ad Astra, and our brief was to support specific story beats in which the screen content helped to explain or clarify complex plot points. Because the film is a speculative vision of the near future, the design brief was to create realistic-looking user interfaces that were grounded in military or scientific references and functionality, with the clean, minimal look of high-end tech firms and simple colour palettes befitting the military nature of the mission. Our screen interfaces can be seen on consoles, monitors and tablet displays, and in signage and infographics on the Lunar Shuttle, moon base, rovers and Cepheus cockpit sets, among others.
The biggest challenge on the project was to maintain a balance between the minimalistic and highly technical style that the director requested and the audience’s need to quickly and easily follow narrative points.
Ad Astra (New Regency Pictures, 2019)
Men in Black: International (nominated for Best Overall)
Andrew Popplestone (creative director of Territory Studio London): The art department asked us to create holotech concepts for MIB Int’l HQ in London, and we were then asked to deliver those in VFX. We worked closely with Dneg to create holographic content and interfaces for their environmental extensions (digital props) in the Lobby and Briefing Room sets. Our work included volumetric wayfinding systems, information points, desk screens and screen graphics. We also created holographic vehicle HUDs.
What I loved about our challenge on this film was creating a design aesthetic that felt part of the MIB universe yet stood on its own as the London HQ. We developed a visual language that drew upon the Art Deco influences in the set design, which helped create a certain timeless flavour that felt both classic and futuristic.
Men in Black: International (Sony Pictures, 2019)
Spider-Man: Far from Home (winner of Best Overall)
Spider-Man: Far From Home (Marvel Studios, 2019)
Andrew Popplestone: Territory were invited to join the team in pre-production and we started creating visual language and screen interface concepts for Stark technology, Nick Fury technology and Beck / Mysterio technology. We went on to deliver shots for the Stark and Fury technology, including the visual language and interface for Fury Ops Centre in Prague, a holographic display sequence that Fury shows Peter Parker/Spider-Man, and all the shots relating to Stark/E.D.I.T.H. glasses tech.
The EDITH sequence was a really interesting challenge from a storytelling perspective. There was a lot of back and forth editorially with the logic of how the technology would help tell the story, and that is when design for film is most rewarding.
Spider-Man: Far From Home (Columbia Pictures, 2019)
Marti Romances: We were also pleased to see that Endgame won Audience Choice because that was based on work we had produced for the first part, Avengers: Infinity War. We joined Marvel’s team on Infinity War and created all the technology interfaces seen in Peter Quill’s new spaceship, a more evolved version of the original Milano. We also created screen graphics for the Avengers Compound set.
We then continued to work on screen graphics for Endgame, and as Quill’s ship had been badly damaged at the end of Infinity War, we reflected this in the screens by overlaying our original UI animations with glitches signifying damage. We also updated the Avengers Compound screens, created original content for Stark Labs and the 1960s lab, and created a holographic dancing-robots sequence for the Karaoke set.
Avengers: Endgame (Marvel Studios, 2019)
What did you find challenging and rewarding about the work on these films?
David Sheldon-Hicks (Founder & Executive Creative Director): It’s always a challenge to create original designs that support a director’s vision, the story, and the actors’ performances. There are so many factors and conversations that play into the choices we make about visual language, colour palette, iconography, data visualisation, animation, 3D elements, aesthetic embellishments, story beats, how to time content to tie into an actor’s performance, how to frame content to lead the audience to the focal point, and more. The reward is that our work becomes part of the storytelling, and if we did it well, it feels natural and credible within the context and narrative.
Hollywood seems to make it really hard to find out who contributed what to a film. Any idea why this is?
David Sheldon-Hicks: Well, the studio controls the press strategy, and their focus is naturally all about the big vision and the actors and actresses. Also, creative vendors are subject to press embargoes with restrictions on image sharing, which means that it’s challenging for us to take advantage of the release window to talk about our work. Having said that, there are brilliant magazines like Cinefex that work closely with the studios to cover the making of visual effects films. So, once we are able to talk about our work, we try to as much as possible.
But Territory do more than films; we work with game developers, brands, museums and expos, and more recently with smartwatch and automobile manufacturers.
Chris: To make sure I understand that correctly, the difference is that Art Department work is all about FUI, whereas VFX is the creation of effects (not on a screen in the diegesis) like light sabers, spaceships, and creatures? Things like that?
When we first started out, our work for the Art Department was strictly screen graphics and FUI. Screen graphics can be any motion design on a screen that gives life to a set or explains a story beat, and FUI (Fictional User Interface) is a technology interface, for example screens for navigation, engineering, weapons systems, communications, drone feeds, etc.
VFX stands for Visual Effects (not to be confused with Special Effects, which describes physical effects such as explosions or fires on set). VFX includes full CGI environments, set extensions, CGI props, etc. Think the giant holograms that walk through Ghost in the Shell (2017), or the holographic signage and screens seen in the Men in Black: International lobby. And while some screens are shot live on set, some of those screens may need to be adjusted in post, using a VFX pipeline. In this case we work with the Production VFX Supervisor to make sure that our design concept can be taken into post.
Mindhunter (Denver and Delilah Productions, 2017)
Shanghai Fortress (HS Entertainment Group, 2019)
Goldfish holograms and street furniture CG props from Ghost in the Shell (Paramount Pictures, 2017)
What, in your opinion, makes for a great fictional user interface?
David Sheldon-Hicks: That’s a good question. Different screens need to do different things. For example, there are ambient screens that help to create background ‘noise’ – think of a busy mission control and all the screens that help set the scene and create a tense atmosphere. The audience doesn’t need to see all those screens in detail, but they need to feel coherent, and they do that by reinforcing the overall visual language.
Then there are the hero screens that help to explain plot points. These tie into specific ‘story beats’ and are only in shot for about 3 seconds. There’s a lot that needs to come together in that moment. The FUI has to clearly communicate the narrative point, visualise and explain often complex information at a glance. If it’s a science fiction story, the screen has to convey something about that future and about its purpose; it has to feel futuristic yet be understandable at the same time. The interaction should feel credible in that world so that the audience can accept it as a natural part of the story. If it achieves all that and manages to look and feel fresh and original, I think it could be a great FUI.
Chris: What about “props”? Say, the door security in Prometheus, or the tablets in Ad Astra. Are those ambient or hero?
That depends on whether they are created specifically to support a story beat. For example, the tablet in Ad Astra and the screen in The Martian where the audience and characters understand that Watney is still alive both help to explain context, while door furniture is often embellishment used to convey a standard of technology, and if it doesn’t work or is slow to work, it can be a narrative device to build tension and drama. Because a production can be fluid and we never really know exactly which screens will end up in camera and for how long, we try to give the director and DOP (director of photography) as much flexibility as possible by taking as much care over ambient screens as we do over hero screens.
The Martian (Twentieth Century Fox, 2015)
Where do you look for inspiration when designing?
David Sheldon-Hicks: Another good question! Prometheus really set our approach in that director Ridley Scott wanted us to stay away from other cinematic sci-fi references and instead draw on art, modern dance choreography, and organic and marine life for our inspiration. We did this, and our work took on an organic feel that felt fresh and original. It was a great insight that we continue to apply when it’s appropriate. In other situations, the design brief and references are more tightly controlled, for good reason. I’m thinking of Ad Astra and The Martian, which are both based on science fact, and Zero Dark Thirty and The Wolf’s Call, which are in effect docudramas that require absolute authenticity in terms of design.
Ad Astra (New Regency Pictures, 2019), The Martian (Twentieth Century Fox, 2015), Zero Dark Thirty (Columbia Pictures, 2012), and The Wolf’s Call (Pathé, 2019)
What makes for a great FUI designer?
David Sheldon-Hicks: We look for great motion designers: creatively curious team players who enjoy R&D and data visualisation and are quick learners with strong problem-solving skills.
There are so many people involved in sci-fi interfaces for blockbusters. How is consistency maintained across all the teams?
David Sheldon-Hicks: We have great producers, and a structured approach to briefings and reviews to ensure the team is on track. Also, we use Autodesk Shotgun, which helps to organise, track, and share the work to required specifications and formats, and remote review-and-approve software, which enables us to work and collaborate effectively across teams and time zones.
I understand the work is very often done at breakneck speeds. How do you create something detailed and spectacular with such short turnaround times?
David Sheldon-Hicks: Broadly speaking, the visual language is the first thing we tackle and once approved, that sets the design aesthetic across an asset package. We tend to take a modular approach that allows us to create a framework into which elements can plug and play. On big shows we look at design behaviours for elements, animations and transitions and set those up as widgets. After we have automated as much as we can, we can become more focussed on refining the specific look and feel of individual screens to tie into storybeats.
That sounds fascinating. Can you share a few images that allow us to see a design language across these phases?
I can share a few screens from The Martian that show you how the design language and all screens are developed to feel cohesive across a set.
What thing about the industry do you think most people in audiences would be surprised by?
David Sheldon-Hicks: It would probably surprise most people to know how unglamorous filmmaking is and how much thought goes into the details. It’s an incredible effort by a huge number of people, and from creative vendors it demands 24-hour delivery, instant response times, time-zone challenges, early morning starts on set, and so on. It can be incredibly challenging and draining, but we give so much to it; like every prop and costume accessory, every detail on a screen has a purpose and is weighed up and discussed.
How do you think that FUI in cinema has evolved over the past, say, 10 years?
David Sheldon-Hicks: When we first started out in 2010, green screen dominated and it was rare to find directors who preferred to work with on-set screens. Directors like Ridley Scott (Prometheus, 2012), Kathryn Bigelow (Zero Dark Thirty, 2012), and James Gunn (Guardians of the Galaxy, 2014), who liked it for how it supports actors’ performances and contributes to ambience and lighting in-camera, used it, and eventually it gained in popularity, as is reflected in our film credits. In time, volumetric design came to suggest advanced technology, and we incorporated 3D elements into our screens, as in Avengers: Age of Ultron (2015). Ultimately this led to full holographic elements, like the giant advertising holograms and 3D signage we created for Ghost in the Shell (2017). Today, briefs still vary, but we find that authenticity and credibility continue to be paramount. Whatever we make, it has to feel seamless and natural to the story world.
Where do you expect the industry might go in the future? (Acknowledging that it’s really hard to see past the COVID-19 pandemic.)
David Sheldon-Hicks: On the industry front, virtual production has come into its own by necessity, and we expect to see more of that in future. We also now find that the art department and VFX are collaborating as more integrated teams, with conversations that cross production and post-production. As live-rendered CG becomes more established in production, it will be interesting to see what becomes of on-set props and screens. I suspect that some directors will continue to favour them while others will enjoy the flexibility that VFX offers. Whatever happens, we have made sure to gear up to work as the studios and directors prefer.
I know that Territory does work for “real world” clients in addition to cinema. How does your work in one domain influence work in the other?
David Sheldon-Hicks: Clients often come to us because they have seen our FUI in a Marvel film, or in The Martian or Blade Runner 2049, and they want that forward-facing look and feel for their product UI. We try, within the limitations of real-world constraints, to apply a similar creative approach to client briefs as we do to film briefs, combining high production values with a future-facing aesthetic style. Hence, our work on the Huami Amazfit smartwatch tapped into a superhero aesthetic that gave data visualisations and infographics a minimalistic look with smooth animated details and transitions between functions and screens. We applied the same approach to our work with Medivis’ innovative biotech AR application, which allows doctors to use a HoloLens headset to see holographically rendered clinical images and transpose them onto a physical body to better plan surgical procedures.
Similarly, our work for automobile manufacturers applies our experience of designing HUDs and navigation screens for futuristic vehicles to next-generation cars.
Avengers: Age of Ultron (Marvel Studios, 2015)
Blade Runner 2049 (Columbia Pictures, 2017)
Huami Amazfit
Medivis
Lastly, I like finishing interviews with these two questions. What’s your favorite sci-fi interface that someone else designed?
David Sheldon-Hicks: Well, I have to say the FUI in the original Star Wars film is what made me want to design film graphics. But my favourite has got to be the physical interface seen in Flight of the Navigator. There is something so human about how the technology adapts to serve the character, rather than the other way around, that it feels like all the technology we create is leading up to that moment.
Flight of the Navigator (Producers Sales Organization, 1986)
What’s next for the studio?
David Sheldon-Hicks: We want to come out of the pandemic lockdown in a good place to continue our growth in London and San Francisco, and over time pursue plans to open in other locations. But in terms of projects, we’ve got a lot of exciting stuff coming up and look forward to Series 1 of Brave New World this summer and, of course, No Time To Die in November.