Who did it better? Santa Claus edition

I presume my readership are adults. I honestly cannot imagine this site has much to offer the 3-to-8-year-old. That said, if you are less than 8.8 years old, be aware that reading this will land you FIRMLY on the naughty list. Leave before it’s too late. Oooh, look! Here’s something interesting for you.


For those who celebrate Yule (and the very hybridized version of the holiday that I’ll call Santa-Christmas to distinguish it from Jesus-Christmas or Horus-Christmas), it’s that one time of year where we watch holiday movies. Santa features in no small number of them, working against the odds to save Christmas and Christmas spirit from something that threatens it. Santa accomplishes all that he does by dint of holiday magic, but increasingly, he has magic-powered technology to help him. These technologies are different for each movie in which they appear, with different sci-fi interfaces, which raises the question: Who did it better?

Unraveling this stands to be even more complicated than usual sci-fi fare.

  • These shows are largely aimed at young children, who haven’t developed the critical thinking skills to doubt the core premise, so the makers don’t have much pressure to present wholly-believable worlds. The makers also enjoy putting in some jokes for adults that are non-diegetic and confound analysis.
  • Despite the fact that these magical technologies are speculative just as in sci-fi, makers cannot presume that their audience are sci-fi fans who are familiar with those tropes. And things can’t seem too technical.
  • The sci in this fi is magical, which allows makers to do all sorts of hand-wavy things about how it’s doing what it’s doing.
  • Many of the choices are whimsical and serve to reinforce core tenets of the Santa Claus mythos rather than any particular story or worldbuilding purpose.

But complicatedness has rarely cowed this blog’s investigations before, so why let a little thing like holiday magic do it now?

Ho-Ho-hubris!

A Primer on Santa

I have readers from all over the world. If you’re from a place that does not celebrate the Jolly Old Elf, a primer should help. And if you’re from a non-USA country, your Saint Nick mythos will be similar to, but not the same as, the one these movies are based on, so a clarification should help. To that end, here’s what I would consider the core of it.

Santa Claus is a magical, jolly, heavyset old man with white hair, mustache, and beard who lives at the North Pole with his wife, Mrs. Claus. The two are almost always Caucasian. He is also known as Kris Kringle, Saint Nick, Father Christmas, or Klaus. The Clement Clarke Moore poem calls him a “jolly old elf.” He is aware of the behavior of children, and tallies their good and bad behavior over the year, ultimately landing them on the “naughty” or “nice” list. Santa brings the nice ones presents. (The naughty ones are canonically supposed to get coal in their stockings, though in all my years I have never heard of any kids actually getting coal in lieu of presents.) Children also hang special stockings, often on a mantle, to be filled with treats or smaller presents. Adults encourage children to be good in the fall to ensure they get presents. As December approaches, children write letters to Santa telling him what presents they hope for. Santa and his elves read the letters and make all the requested toys by hand in a workshop. Then, on the evening of 24 DEC, he puts all the toys in a large sack and loads it into a sleigh led by eight flying reindeer. Most of the time there is a ninth reindeer named Rudolph up front, with a glowing red nose. He dresses in a warm red suit fringed with white fur, big black boots, a thick black belt, and a stocking hat with a furry ball at the end. Over the evening, as children sleep, he delivers the presents to their homes, where he places them beneath the Christmas tree for them to discover in the morning. Families often leave out cookies and milk for Santa to snack on, and sometimes carrots for the reindeer. Santa often tries to avoid detection, for reasons that are diegetically vague.

There is no single source of truth for this mythos, though the current core text might be the 1823 C.E. poem, “A Visit from St. Nicholas” by Clement Clarke Moore. Visually, Santa’s modern look is often traced back to the depictions by Civil War cartoonist Thomas Nast, which the Coca-Cola Company built upon for its holiday advertisements beginning in 1931.

Both these illustrations are by Nast.

There are all sorts of cultural conversations to have about normalizing a magical panopticon, about what effect hiding the actual supply chain has, and about what perpetuating this myth trains children for; but for now let’s stick to evaluating the interfaces in terms of Santa’s goals.

Santa’s goals

Given all of the above, we can say that the following are Santa’s goals.

  • Sort kids by behavior as naughty or nice
    • Many tellings have him observing actions directly
    • Manage the lists of names (naughty and nice are usually kept as separate lists)
  • Manage letters
    • Reading letters
    • Sending toy requests to the workshop
    • Storing letters
  • Make presents
  • Travel to kids’ homes
    • Find the most-efficient way there
    • Control the reindeer
    • Maintain air safety
      • Avoid air obstacles
    • Find a way inside and to the tree
    • Enjoy the cookies / milk
  • Deliver all presents before sunrise
  • For each child:
    • Know whether they are naughty or nice
    • If nice, match the right toy to the child
    • Stage presents beneath the tree
  • Avoid being seen

We’ll use these goals as the context against which to evaluate the Santa interfaces.

This is the Worst Santa, but the image is illustrative of the weather challenges.

Typical Challenges

Nearly every story tells of Santa working with other characters to save Christmas. (The metaphor that we have to work together to make Christmas happen is appreciated.) The challenges in the stories can be almost anything, but often include…

  • Inclement weather (usually winter, but Santa is a global phenomenon)
  • Air safety
    • Air obstacles (Planes, helicopters, skyscrapers)
  • Ingress/egress into homes
  • Home security systems / guard dogs

The Contenders

Imdb.com lists 847 films tagged with the keyword “santa claus,” which is far too many to review. So I looked through “best of” lists (two are linked below) and watched those films for interfaces. There weren’t many. I even had to blend CGI and live-action shows, which I’m normally hesitant to do. As always, if you know of any additional shows that should be considered, please mention them in the comments.

https://editorial.rottentomatoes.com/guide/best-christmas-movies/
https://screenrant.com/best-santa-claus-holiday-movies-ranked/

After reviewing these films, the ones with Santa interfaces came down to four, presented below in chronological order.

The Santa Clause (1994)

This movie deals with the lead character, Scott Calvin, inadvertently taking on the “job” of Santa Claus. (If you’ve read Piers Anthony’s Incarnations of Immortality series, this plot will feel quite familiar.)

The sleigh he inherits has a number of displays that are largely unexplained, but little Charlie figures out that the center console includes a hot chocolate and cookie dispenser. There is also a radar, and far away from it, push buttons for fog, planes, rain, and lightning. There are several controls with Christmas bell icons associated with them, but the meaning of these is unclear.

Santa’s hat in this story has headphones and the ball has a microphone for communicating with elves back in the workshop.

This is the oldest of the candidates. Its interfaces are quite sterile and “tacked on” compared to the others, but they were novel for their time.

The Santa Clause on imdb.com

Fred Claus (2007)

This movie tells the story of Santa’s ne’er-do-well brother Fred, who has to work in the workshop for one season to work off bail money. While there, he winds up helping forestall foreclosure by an underhanded supernatural efficiency expert, and un-estranging himself from his family. A really nice bit in this critically panned film is that Fred helps Santa understand that there are no bad kids, just kids in bad circumstances.

Fred is taken to the North Pole in a sled with switches that are very reminiscent of the ones in The Santa Clause. A funny touch is the “fasten your seatbelt” sign like you might see in a commercial airliner. The use of Lombardic Capitals font is a very nice touch given that much of the modern Western Santa Claus myth (and really, many of our traditions) comes from Germany.

The workshop has an extensive pneumatic tube system for getting letters to the right craft elf.

This chamber is where Santa is able to keep an eye on children. (Seriously panopticony. They have no idea they’re being surveilled.) Merely by reading a child’s name and address, Santa summons a volumetric display within the giant snowglobe. The naughtiest children’s names are displayed on a digital split-flap display, including their greatest offenses. (The nicest are as well, but we don’t get a close-up of it.)

The final tally is put into a large book that one of the elves manages from the sleigh while Santa does the actual gift-distribution. The text in the book looks like it was printed from a computer.

Fred Claus on imdb.com

Arthur Christmas (2011)

In this telling, the Santa job is passed down patrilineally. The oldest Santa, Grandsanta, is retired. The dad, Malcolm, is the currently acting Santa, and he has two sons. One is Steve, a by-the-numbers type into military efficiency and modern technology. The other son, Arthur, is an awkward fellow who has a semi-disposable job responding to letters. Malcolm currently pilots a massive mile-wide spaceship from which ninja elves do the gift distribution. They have a lot of tech to help them do their job. The plot involves Arthur working with Grandsanta, using his old sleigh, to get a last forgotten gift to a young girl before the sun rises.

To help manage loud pets in the home who might wake up sleeping people, this gun has a dial for common pets that delivers a treat to distract them.

Elves have face scanners which determine each kid’s naughty/nice percentage. The elf then enters this into a stocking-filling gun, which affects the contents in some unseen way. A sweet touch: when one elf scans a kid who reads as quite naughty, the elf scans his own face instead to get a nice reading.

The S-1 is the name of the spaceship sleigh at the beginning (at the end it is renamed after Grandsanta’s sleigh). Its bridge is loaded with controls, volumetric displays, and even a Little Tree air freshener. It has a cloaking display on its underside which is strikingly similar to the MCU S.H.I.E.L.D. helicarrier cloaking. (And this came out the year before The Avengers, I’m just sayin’.)

The North Pole houses the command-and-control center, which Steve manages. Thousands of elves manage workstations here, and there is a huge shared display for focusing and informing the team at once when necessary. Smaller displays help elf teams manage certain geographies. Its interfaces fall to comedy and trope, mostly, but are germane to the story beats.

One of the crisis scenarios that this system helps manage is a “waker,” a child who has awoken and is at risk of spotting Santa.

Grandsanta’s outmoded sleigh is named Eve. Its technology is much more from the early 20th century, with switches and dials, buttons and levers. It’s a bit janky and overly complex, but gets the job done.

One notable control on S-1 is this trackball with dark representations of the continents. It appears to be a destination selector, but we do not see it in use. It is remarkable because it is very similar to one of the main interface components in the next candidate movie, The Christmas Chronicles.

Arthur Christmas on imdb.com

The Christmas Chronicles (2018)

The Christmas Chronicles follows two kids who stow away on Santa’s sleigh on Christmas Eve. His surprise when they reveal themselves causes him to lose his magical hat and wreck his sleigh. They help him recover the items, finish his deliveries, and (well, of course) save Christmas just in time.

Santa’s sleigh enables him to teleport to any place on earth. The main control is a trackball location selector. Once he spins it and confirms that the city readout looks correct, he can press the “GO” button for a portal to open in the air just ahead of the sleigh. After traveling for a bit through an aurora borealis realm filled with famous landmarks, another portal appears. They pass through this and appear at the selected location. A small magnifying glass above the selection point helps with precision.

Santa wears a watch that measures not time, but Christmas spirit, which ranges from 0 to 100. In the bottom half, chapter rings and a magnifying window seem designed to show the date, with 12 and 31 sequential numbers, respectively. It’s not clear why it shows mid-May. A hemisphere in the middle of the face looks like it’s almost a globe, which might be a nice way to display and change time zone, but that may be wishful thinking on my part.

Santa also has a tracking device for finding his sack of toys. (Apparently this has happened enough times to warrant such a thing.) It is an intricate filigree over cool green and blue glass. A light within blinks faster the closer the sphere is to the sack.

Since he must finish delivering toys before Christmas morning, the dashboard has a countdown clock with Nixie tube numbers showing hours, minutes, and milliseconds. They ordinarily glow cyan, but when time runs out, they turn red and blink.

This Santa also manages his list in a large book with lovely handwritten calligraphy. The kids whose gifts remain undelivered glow golden to draw his attention.

The Christmas Chronicles on imdb.com

So…who did it better?

The hard problem here is that there are a lot of apples-to-oranges comparisons to make. Even though the mythos seems pretty locked down, each movie takes liberties with one or two aspects. As a result, not all these Santas are created equal. Calvin’s elves know he is completely new to his job and will need support. Christmas Chronicles Santa has perfect memory, magical abilities, and handles nearly all the delivery duties himself, unless he’s enacting a clever scheme to impart Christmas wisdom. Arthur Christmas has intergenerational technology and Santas who may not be magic at all but have known their duty since youth, and who rely on a huge army of shock-troop elves to make things happen. So it’s hard to name just one. But absent a point-by-point detailed analysis, there are two that really stand out to me.

The weathered surface of this camouflage button is delightful (Arthur Christmas).

Coverage of goals

The Arthur Christmas movie has, by far, the most interfaces of any of the candidates, and more coverage of the Santa family’s goals. Managing noisy pets? Check. Dealing with wakers? Check. Navigating the globe? Check. As far as thinking through speculative technology that assists its Santa, this film has the most.

Keeping the holiday spirit

I’ll confess, though, that extradiegetically, one of the purposes of annual holidays is to mark the passage of time. By trying to adhere to traditions as much as we can, we let time and memory be marked by the things we cannot control (like, say, a pandemic keeping everyone at home and hanging with friends and family virtually). So for my money, the thoroughly modern interfaces that flood Arthur Christmas don’t work that well. They’re so modern they’re not…Christmassy. Grandsanta’s sleigh Eve points to an older tradition, but it’s also clearly framed as outdated in the context of the story.

Gorgeous steampunkish binocular HUD from The Christmas Chronicles 2, which was not otherwise included in this post.

Compare this to The Christmas Chronicles, with its gorgeous steampunk-y interfaces that combine a sense of magic and mechanics. These are things that a centuries-old Santa would have built and used. They feel rooted in tradition while still helping Santa accomplish as many of his goals as he needs (in the context of his Christmas adventure with the stowaway kids). These interfaces evoke a sense of wonder, add significantly to the worldbuilding, and are what I’d rather have as a model for magical interfaces in the real world.

Of course it’s a personal call, given the differences, but The Christmas Chronicles wins in my book.

Ho, Ho, HEH.

For those who celebrate Santa-Christmas, I hope it’s a happy one, given the strange, strange state of the world. May you be on the nice list.


For more Who Did it Better, see the tag.

Agent Ross’ remote piloting

Remote operation appears twice during Black Panther. This post describes the second, in which CIA Agent Ross remote-pilots the Talon in order to chase down cargo airships carrying Killmonger’s war supplies. The prior post describes the first, in which Shuri remotely drives an automobile.

In this sequence, Shuri equips Ross with kimoyo beads and a bone-conducting communication chip, and tells him that he must shoot the cargo ships down before they cross beyond the Wakandan border. As soon as she tosses a remote-control kimoyo bead onto the Talon, Griot announces to Ross in the lab “Remote piloting system activated” and creates a piloting seat out of vibranium dust for him. Savvy watchers may wonder at this, since Okoye pilots the thing by meditation and Ross would have no meditation-pilot training, but Shuri explains to him, “I made it American style for you. Get in!” He does, grabs the sparkly black controls, and gets to business.

The most remarkable thing to me about the interface is how seamlessly the Talon can be piloted by vastly different controls. Meditation brain control? Can do. Joystick-and-throttle? Just as can do.

Now, generally, I have a beef with the notion of hyperindividualized UI tailoring—it prevents vital communication across a community of practice (read more about my critique of this goal here)—but in this case, there is zero time for Ross to learn a new interface. So sure, give him a control system with which he feels comfortable to handle this emergency. It makes him feel more at ease.

The mutable nature of the controls tells us that there is a robust interface layer that is interpreting whatever inputs the pilot supplies and applying them to the actuators in the Talon. More on this below. Spoiler: it’s Griot.
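To make that interpretation-layer idea concrete, here’s a minimal sketch (in Python; every class name and field here is my own illustrative assumption, not anything established in the film) of how two very different control schemes could resolve to one device-agnostic intent that the craft’s actuators consume.

```python
from dataclasses import dataclass

@dataclass
class PilotIntent:
    """Device-agnostic description of what the pilot wants the craft to do."""
    pitch: float     # -1.0 (dive) .. +1.0 (climb)
    yaw: float       # -1.0 (left) .. +1.0 (right)
    throttle: float  #  0.0 .. 1.0
    fire: bool = False

class JoystickAdapter:
    """'American style': stick-and-trigger hardware."""
    def read(self, stick_y: float, stick_x: float, trigger: bool) -> PilotIntent:
        return PilotIntent(pitch=-stick_y, yaw=stick_x, throttle=0.6, fire=trigger)

class MeditationAdapter:
    """Brain-derived input, e.g., intentions decoded by an AI."""
    def read(self, decoded: dict) -> PilotIntent:
        return PilotIntent(pitch=decoded.get("climb", 0.0),
                           yaw=decoded.get("turn", 0.0),
                           throttle=decoded.get("urgency", 0.5),
                           fire=decoded.get("attack", False))

def apply_to_actuators(intent: PilotIntent) -> None:
    # Stand-in for the layer that turns intent into thruster and weapon commands.
    print(f"pitch={intent.pitch:+.2f} yaw={intent.yaw:+.2f} "
          f"throttle={intent.throttle:.2f} fire={intent.fire}")

# Either control scheme produces the same intent object:
apply_to_actuators(JoystickAdapter().read(stick_y=-0.4, stick_x=0.1, trigger=True))
apply_to_actuators(MeditationAdapter().read({"climb": 0.4, "turn": 0.1, "attack": True}))
```

The point is the seam: as long as every input device can produce the same intent object, the piloting layer—Griot, in this reading—doesn’t care whether the intent came from a joystick or a meditating brain.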

Too sparse HUD

The HUD presents a simple circle-in-a-triangle reticle that lights up red when a target is in its sights. Otherwise it’s notably empty of augmentation. There’s no tunnel-in-the-sky display to describe the ideal path, or proximity warnings about skyscrapers, or airspeed indicator, or altimeter, or…anything. This seems a glaring omission, since we can be certain other “American-style” airships have such things. More on why this might be below, but spoiler: It’s Griot.

What do these controls do, exactly?

I take no joy in gotchas. That said…

  • When Ross launches the Talon, he does so by pulling the right joystick backward.
  • When he shoots down the first cargo ship over Birnin Zana, he pushes the same joystick forward as he pulls the trigger, firing energy weapons.

Why would the same control do both? It’s hard to believe it’s modal. Extradiegetically, this is probably an artifact of actor Martin Freeman’s just doing what feels dramatic, but for a real-world equivalent I would advise against having physical controls take on wholly different modes on the same grip, lest we risk confusing pilots on mission-critical tasks. But spoiler…oh, you know where this is going.

It’s Griot

Diegetically, Shuri is flat-out wrong that Ross is an experienced pilot. But she also knew that it didn’t matter, because her lab has him covered anyway. Griot is an AI with a brain interface, and can read Ross’ intentions, handling all the difficult execution itself.

This would also explain the lack of better HUD augmentation. That absence seems especially egregious considering that the first cargo ship was flying over a crowded city at the time it was being targeted. If Ross had fired in the wrong place, the cargo ship might have crashed into a building, or down to the bustling city street, killing people. But, instead, Griot quietly, precisely targets the ship for him, to ensure that it would crash safely in nearby water.

This would also explain how wildly different interfaces can control the Talon with similar efficacy.

A stained-glass image of William of Ockham. A modern blackletter caption reads, “It was always Griot.”

So, Occam’s razor says: yep, it’s Griot.

An AI-wizard did it?

In the post about Shuri’s remote driving, I suggested that Griot was also helping her execute driving behind the scenes. This hearkens back to both the Iron HUD and Doctor Strange’s Cloak of Levitation. It could be that the MCU isn’t really worrying about the details of its enabling technologies, or that this is a brilliant model for our future relationship with technology. Let us feel like heroes, and let the AI manage all the details. I worry that I’m building myself into a wizard-did-it pattern, inserting AI for wizard. Maybe that’s worth another post all its own.

But there is one other thing about Ross’ interface worth noting.

The sonic overload

When the last of the cargo ships is nearly at the border, Ross reports to Shuri that he can’t chase it, because Killmonger-loyal dragon flyers have “got me trapped with some kind of cables.” She instructs him, “Make an X with your arms!” He does. A wing-like display appears around him, confirming its readiness.

Then she shouts, “Now break it!” He does, and the Talon goes boom, shaking off the enemy ships and allowing Ross to continue his pursuit.

First, what a great gesture for this function. Ordinarily, Wakandans are piloting the Talon, and each of them would be deeply familiar with this gesture, and even prone to think of it when executing a Hail Mary move like this.

Second, when an outsider needed to perform the action, why didn’t she just tell Griot to do it? If there’s an interpretation layer in the system, why not speak directly to that controller? It might be so the human knows how to do it themselves next time, but this is the last cargo ship he’s been tasked with chasing, and there’s little chance of his officially joining the Wakandan air force. The emergency will be over after this instance. Maybe Wakandans have a principle that they are supposed to engage the humans first before bringing in the machines, but that’s heavy conjecture.

Third, I have a beef about gestures—there are often zero affordances to tell users what gestures they can do, and what effects those gestures will have. If Shuri had not been there to answer Ross’ urgent question, would the mission have just…failed? Seems like a bad design.

How else could he have known he could do this? If Griot is on board, Griot could have mentioned it. But avoiding the wizard-did-it solutions, some sort of context-aware display could detect that the ship is tethered to something, and display the gesture on the HUD for him, as sketched below. This violates the principle of letting the humans be the heroes, but would be a critical inclusion in any similar real-world system.
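As a sketch of what that context-aware prompting might look like—with the condition names, sensors, and hint text all invented for illustration, not taken from the film—a simple rule table could map detected ship states to gesture prompts the HUD surfaces:

```python
# Hypothetical rule table: detected ship states -> gesture prompts for the HUD.
# Both the condition names and the hint text are invented for illustration.
GESTURE_HINTS = {
    "tethered_by_cables": "Make an X with your arms, then break it: sonic overload",
    "stall_imminent": "Pull both grips back sharply: emergency climb",
}

def hud_hints(ship_state: dict) -> list:
    """Return the gesture prompts relevant to the ship's current situation."""
    return [hint for condition, hint in GESTURE_HINTS.items() if ship_state.get(condition)]

# Ross' situation: dragon flyers have tethered the Talon.
print(hud_hints({"tethered_by_cables": True}))
# -> ['Make an X with your arms, then break it: sonic overload']
```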

Any time we are faced with “intuitive” controls that don’t map 1:1 to the thing being controlled, we run into similar problems. (We’ve seen the same problems in Sleep Dealer and Lost in Space (1998). Maybe that’s worth its own write-up.) Some controls won’t map to anything. More problematic is that there will be functions which don’t have controls. Designers can’t rely on having a human cavalry like Shuri there to save the day, and should take steps to find ways that the system can inform users of how to activate those functions.

Fit to purpose?

I’ve had to presume a lot about this interface. But if those things are correct, then, sure, this mostly makes it possible for Ross, a novice to piloting, to contribute something to the team mission, while upholding the directive that AI Cannot Be Heroes.

If Griot is not secretly driving, and that directive not really a thing, then the HUD needs more work, I can’t diegetically explain the controls, and they need to develop just-in-time suggestions to patch the gap of the mismatched interface. 


Black Georgia Matters

Each post in the Black Panther review is followed by actions that you can take to support black lives. As this critical special election is still coming up, this is a repeat of the last one, modified to reflect passed deadlines.

The state flag of Georgia, whose motto clearly violates the doctrine of separation of church and state.
Always on my mind, or at least until July 06.

Despite outrageous, anti-democratic voter suppression by the GOP, for the first time in 28 years, Georgia went blue for the presidential election, verified with two hand recounts. Credit to Stacey Abrams and her team’s years of effort to get out the Georgian—and particularly the powerful black Georgian—vote.

But the story doesn’t end there. Though the Biden/Harris ticket won the election, if the Senate stays majority red, Moscow Mitch McConnell will continue the infuriating obstructionism with which he held back Obama’s efforts in office for eight years. The Republicans will, as they have done before, ensure that nothing gets done.

To start to undo the damage the fascist and racist Trump administration has done, and maybe make some actual progress in the US, we need the Senate majority blue. Georgia is providing that opportunity. Neither of the wretched Republican incumbents got 50% of the vote, resulting in a special runoff election January 5, 2021. If these two seats go to the Democratic challengers, Warnock and Ossoff, it will flip the Senate blue, and the nation can begin to seriously right the sinking ship that is America.

Photograph: Erik S Lesser/EPA

What can you do?

If you live in Georgia, vote blue, of course. You can check your registration status online. You can also help others vote. Important dates to remember, according to the Georgia website:

  • 14 DEC Early voting begins
  • 05 JAN 2021 Final day of voting

Residents can also volunteer to become a canvasser for either of the campaigns, though it’s a tough thing to ask in the middle of the raging pandemic.

The rest of us (yes, even non-American readers) can contribute either to the campaigns directly using the links above, or to Stacey Abrams’ Fair Fight campaign. From the campaign’s web site:

We promote fair elections in Georgia and around the country, encourage voter participation in elections, and educate voters about elections and their voting rights. Fair Fight brings awareness to the public on election reform, advocates for election reform at all levels, and engages in other voter education programs and communications.

We will continue moving the country into the anti-racist future regardless of the runoff, but we can make much, much more progress if we win this election. Please join the efforts as best you can even as you take care of yourself and your loved ones over the holidays. So very much depends on it.

Black Reparations Matter

This is timely, so I’m adding this on as well rather than waiting for the next post: A bill is in the house to set up a commission to examine the institution of slavery and its impact and make recommendations for reparations to Congress. If you are an American citizen, please consider sending a message to your congresspeople asking them to support the bill.

Image, uncredited, from the ACLU site. Please contact me if you know the artist.

On this ACLU site you will find a form and suggested wording to help you along.

UX of Speculative Brain-Computer Inputs

So much of the technology in Black Panther appears to work by mental command (so far: Panther Suit 2.0, the Royal Talon, and the vibranium sand tables) that…

  • before we get into the Kimoyo beads, or the Cape Shields, or the remote driving systems…
  • before I have to dismiss these interactions as “a wizard did it” style non-designs
  • before I review other brain-computer interfaces in other shows…

…I wanted to check on the state of the art of brain-computer interfaces (or BCIs) and see how our understanding had advanced since I wrote the Brain interface chapter in the book, back in the halcyon days of 2012.

Note that I am deliberately avoiding the tech side of this question. I’m not going to talk about EEG, PET, MRI, and fMRI. (Though they’re linked in case you want to learn more.) Modern BCI technologies are evolving too rapidly to bother with an overview of them. They’ll change in the real world by the time I press “publish,” much less by the time you read this. And sci-fi tech is most often a black box anyway. But the human part of the human-computer interaction model changes much more slowly. We can look to the brain as a relatively unalterable component of the BCI question, leading us to two believability questions of sci-fi BCI.

  1. How can people express intent using their brains?
  2. How do we prevent accidental activation using BCI?

Let’s discuss each.

1. How can people express intent using their brains?

In the see-think-do loop of human-computer interaction…

  • See (perceive) has been a subject of visual, industrial, and auditory design.
  • Think has been a matter of human cognition as informed by system interaction and content design.
  • Do has long been a matter of some muscular movement that the system can detect, to start its matching input-process-output loop. Tap a button. Move a mouse. Touch a screen. Focus on something with your eyes. Hold your breath. These are all ways of “doing” with muscles.
The “bowtie” diagram I developed for my book on agentive tech.

But the first promise of BCI is to let that doing part happen with your brain. The brain isn’t a muscle, so what actions are BCI users able to take in their heads to signal to a BCI system what they want it to do? The answer to this question is partly physiological, about the way the brain changes as it goes about its thinking business.

Ah, the 1800s. Such good art. Such bad science.

Our brains are a dense network of bioelectric signals, chemicals, and blood flow. But it’s not chaos. It’s organized. It’s locally functionalized, meaning that certain parts of the brain are predictably activated when we think about certain things. But it’s not like the Christmas lights in Stranger Things, with one part lighting up discretely at a time. It’s more like an animated proportional symbol map, with lots of places lighting up at the same time to different degrees.

Illustrative composite of a gif and an online map demo.

The sizes and shapes of what’s lighting up may change slightly between people, but a basic map of healthy, undamaged brains will be similar to each other. Lots of work has gone on to map these functional areas, with researchers showing subjects lots of stimuli and noting what areas of the brain light up. Test enough of these subjects and you can build a pretty good functional map of concepts. Thereafter, you can take a “picture” of the brain, and you can cross-reference your maps to reverse-engineer what is being thought.

From Jack Gallant’s semantic maps viewer.

Right now those pictures are pretty crude and slow, but so were the first actual photographs in the world. In 20–50 years, we may be able to wear baseball caps that provide much higher-resolution, real-time input of the concepts being thought. In the far future (or, say, the alternate history of the MCU) it is conceivable to read these things from a distance. (Though there are significant ethical questions involved in such a technology, this post is focused on questions of viability and interaction.)

From Jack Gallant’s semantic map viewer

Similarly, the brain maps we have cover only a small percentage of an average adult vocabulary. Jack Gallant’s semantic map viewer (pictured and linked above) shows the maps for about 140 concepts, and estimates of average active vocabulary run around 20,000 words, so we’re looking at a tenth of a tenth of what we can imagine (not even counting the infinite composability of language). But in the future we will not only have more concepts mapped, more confidently, but we will also have idiographs for each individual, like the personal dictionary in your smart phone.

All this is to say that our extant real-world technology confirms that thoughts are a believable input for a system. This includes linguistic inputs like “Turn on the light” and “activate the vibranium sand table” and “Sincerely, Chris,” and even imagining the desired change, like a light changing from dark to light. It might even include subconscious thoughts that have yet to be formed into words.
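To make the earlier “cross-reference your maps” idea a bit more concrete, here is a toy sketch of concept decoding: compare a fresh activation pattern against a small dictionary of previously mapped concepts and pick the best match. The five-region vectors and the concepts are invented for illustration; real decoding is vastly higher-dimensional and probabilistic.

```python
import math

# Toy functional map: each concept is a pattern of activation across five brain regions.
# The concepts and numbers are invented for illustration only.
CONCEPT_MAPS = {
    "light":     [0.9, 0.1, 0.3, 0.0, 0.2],
    "on":        [0.2, 0.8, 0.1, 0.4, 0.0],
    "sincerely": [0.1, 0.2, 0.7, 0.6, 0.3],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def decode(activation):
    """Return the mapped concept whose activation pattern best matches this reading."""
    return max(CONCEPT_MAPS, key=lambda concept: cosine(CONCEPT_MAPS[concept], activation))

# A new "picture" of the brain, taken while the user thinks something:
reading = [0.85, 0.15, 0.25, 0.05, 0.1]
print(decode(reading))  # -> "light"
```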

2. How do we prevent accidental activation?

But we know from personal experience that we don’t want all our thoughts to be acted on. Take, for example, the thoughts you have when you’re feeling hangry, or snarky, or dealing with a jerk-in-authority. Or those texts and emails that you’ve composed in the heat of the moment but wisely deleted before they get you in trouble.

If a speculative BCI is being read by a general artificial intelligence, it can manage that just like a smart human partner would.

He is composing a blog post, reasons the AGI, so I will just disregard his thought that he needs to pee.

And if there’s any doubt, an AGI can ask. “Did you intend me to include the bit about pee in the post?” Me: “Certainly not. Also BRB.” (Readers following the Black Panther reviews will note that AGI is available to Wakandans in the form of Griot.)

If AGI is unavailable to the diegesis (and it would significantly change any diegesis of which it is a part), then we need some way to indicate when a thought is intended as input and when it isn’t. Having that be some mode of thought feels complicated and error-prone, like when programmers have to write regexes that escape escape characters. Better, I think, is to use some secondary channel, like a bodily interaction. Touch forefinger and pinky together, for instance, and the computer understands you intend your thoughts as input.
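Here’s a minimal sketch of that alternate-channel gating, with the thought decoder and pinch sensor assumed rather than real: decoded thoughts only become commands while the deliberate gesture is held.

```python
def accepted_commands(decoded_thoughts, pinch_held):
    """Keep only thoughts that arrived while the forefinger-pinky pinch was held.

    Thoughts without the deliberate gesture are discarded as inner monologue.
    """
    return [thought for thought, pinched in zip(decoded_thoughts, pinch_held) if pinched]

thoughts = ["I need to pee", "turn on the light", "ugh, this jerk", "send the reply"]
pinched  = [False,           True,                False,            True]
print(accepted_commands(thoughts, pinched))
# -> ['turn on the light', 'send the reply']
```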

So, for any BCI that appears in sci-fi, we would want to look for the presence or absence of AGI as a reasonableness interpreter, and, barring that, for some alternate-channel mechanism for indicating deliberateness. We would also hope to see some feedback and correction loops to understand the nuances of the edge-case interactions, but these are rare in sci-fi.

Even more future-full

This all points to the question of what seeing/perceiving via a BCI might be. A simple example might be a disembodied voice that only the user can hear.

A woman walks alone at night. Lost in thoughts, she hears her AI whisper to her thoughts, “Ada, be aware that a man has just left a shadowy doorstep and is following, half a block behind you. Shall I initialize your shock shoes?”

What other than language can be written to the brain in the far future? Images? Movies? Ideas? A suspicion? A compulsion? A hunch? How will people know what are their own thoughts and what has been placed there from the outside? I look forward to the stories and shows that illustrate new ideas, and warn us of the dark pitfalls.

Untold AI video

What we think about AI largely depends on how we know AI, and most people “know” AI through science fiction. But how well do the AIs in these shows match up with the science? What kinds of stories are we telling ourselves about AI that are pure fiction? And more importantly, what stories _aren’t_ we telling ourselves that we should be? Hear Chris Noessel of scifiinterfaces.com talk about this study and rethink what you “know” about #AI.

You can see the entire Untold AI study at https://scifiinterfaces.com/tag/untold-ai/?order=asc

See the big overview poster of the project at https://scifiinterfaces.com/2018/07/10/untold-ai-poster/

Recorded for the MEDIA, ARTS AND DESIGN conference, 19 JUN 2020. https://www.mad-conferences.com #madai2020

SciFi Interfaces Q&A with Territory Studio

The network of in-house, studio, and freelance professionals who work together to create the interfaces in the sci-fi shows we know, love, and critique is large, complicated, and obfuscated. It’s very hard as an outsider to find out who should get the credit for what. So, I don’t try. I rarely identify the creators of the things I critique, trusting that they know who they are. Because of all this, I’m delighted when one of the studios reaches out to me directly. That’s what happened when Territory Studio recently reached out to me regarding the Fritz awards that went out in early February. They’d been involved with four of them! So, we set up our socially-distanced pandemic-approved keyboards, and here are the results.

First, congratulations to Territory Studio on having worked on four of the twelve 2019 Fritz Award nominees!

Chris: What exactly did you do on each of the films?

Ad Astra (winner of Best Believable)

Ad Astra Screen Graphics Reel from Territory Studio.

Marti Romances (founding partner and creative director of Territory Studio San Francisco): We were one of the screen graphic vendors on Ad Astra and our brief was to support specific storybeats, in which the screen content helped to explain or clarify complex plot points. As a speculative vision of the near future, the design brief was to create realistic-looking user interfaces that were grounded in military or scientific references and functionality, with the clean minimal look of high-end tech firms, and simple colour palettes befitting of the military nature of the mission. Our screen interfaces can be seen on consoles, monitors and tablet displays, signage and infographics on the Lunar Shuttle, moon base, rovers and Cepheus cockpit sets, among others.

The biggest challenge on the project was to maintain a balance between the minimalistic and highly technical style that the director requested and the needs of the audience to quickly and easily follow narrative points.

Ad Astra (New Regency Pictures, 2019)

Men In Black International (nominated for Best Overall)

Men in Black: International | Screen Graphics | © Sony Pictures

Andrew Popplestone (creative director of Territory Studio London): The art department asked us to create holotech concepts for MIB Int’l HQ in London, and we were then asked to deliver those in VFX. We worked closely with Dneg to create holographic content and interfaces for their environmental extensions (digital props) in the Lobby and Briefing Room sets. Our work included volumetric wayfinding systems, information points, desk screens and screen graphics. We also created holographic vehicle HUDs.

What I loved about our challenge on this film was to create a design aesthetic that felt part of the MIB universe yet stood on its own as the London HQ. We developed a visual language that drew upon the Art Deco influences from the set design which helped create a certain timeless flavour which was both classic yet futuristic.

Men in Black: International (Sony Pictures, 2019)

Spider-Man: Far from Home (winner of Best Overall)

Spider-man Far From Home (Marvel Studios, 2019)

Andrew Popplestone: Territory were invited to join the team in pre-production and we started creating visual language and screen interface concepts for Stark technology, Nick Fury technology and Beck / Mysterio technology. We went on to deliver shots for the Stark and Fury technology, including the visual language and interface for Fury Ops Centre in Prague, a holographic display sequence that Fury shows Peter Parker/Spider-Man, and all the shots relating to Stark/E.D.I.T.H. glasses tech.

The EDITH sequence was a really interesting challenge from a storytelling perspective. There was a lot of back and forth editorially with the logic and how the technology would help tell the story and that is when design for film is most rewarding.

Spider-Man far from Home (Columbia Pictures, 2019)

Avengers: Endgame (winner of Audience Choice)

See more at Marvel’s Avengers: Infinity War & Endgame

Marti Romances: We were also pleased to see that Endgame won Audience Choice because that was based on work we had produced for the first part, Avengers: Infinity War.  We joined Marvel’s team on Infinity War and created all the technology interfaces seen in Peter Quill’s new spaceship, a more evolved version of the original Milano. We also created screen graphics for the Avengers Compound set.

We then continued to work on screen graphics for Endgame, and as Quill’s ship had been badly damaged at the end of Infinity War, we reflected this in the screens by overlaying our original UI animations with glitches signifying damage. We also updated Avengers Compound screens, created original content for Stark Labs and the 1960s lab, and created a holographic dancing robots sequence for the Karaoke set.

Avengers: Endgame (Marvel Studios, 2019)

What did you find challenging and rewarding about the work on these films?

David Sheldon-Hicks (Founder & Executive Creative Director): It’s always a challenge to create original designs that support a director’s vision and story and actor’s performance.  There are so many factors and conversations that play into the choices we make about visual language, colour palette, iconography, data visualisation, animation, 3D elements, aesthetic embellishments, story beats, how to time content to tie into actor’s performance, how to frame content to lead the audience to the focal point, and more. The reward is that our work becomes part of the storytelling and if we did it well, it feels natural and credible within the context and narrative.

Hollywood seems to make it really hard to find out who contributed what to a film. Any idea why this is?

David Sheldon-Hicks: Well, the studio controls the press strategy and their focus is naturally all about the big vision and the actors and actresses. Also, creative vendors are subject to press embargoes with restrictions on image sharing which means that it’s challenging for us to take advantage of the release window to talk about our work. Having said that, there are brilliant magazines like Cinefex that work closely with the studios to cover the making of visual effects films. So, once we are able to talk about our work we try to as much as is possible. 

But Territory do more than films; we work with game developers, brands, museums and expos, and more recently with smartwatch and automobile manufacturers.

Chris: To make sure I understand that correctly, the difference is that Art Department work is all about FUI, whereas VFX is the creation of effects (not on screens in the diegesis) like light sabers, spaceships, and creatures? Things like that?

When we first started out, our work for the Art Department was strictly screen graphics and FUI. Screen graphics can be any motion design on a screen that gives life to a set or explains a storybeat, and FUI (Fictional User Interface) is a technology interface, for example screens for navigation, engineering, weapons systems, communications, drone feeds, etc.

VFX relates to Visual Effects, (not to be confused with Special Effects which describes physical effects, explosions or fires on set, for example.) VFX include full CGI environments, set extensions, CGI props, etc. Think the giant holograms that walk through Ghost In the Shell (2017), or the holographic signage and screens seen in the Men In Black International lobby.  And while some screens are shot live on-set, some of those screens may need to be adjusted in post, using a VFX pipeline. In this case we work with the Production VFX Supervisor to make sure that our design concept can be taken into post. 

Mindhunter (Denver and Delilah Productions, 2017)
Shanghai Fortress (HS Entertainment Group, 2019)
Goldfish holograms and street furniture CG props from Ghost in the Shell (Paramount Pictures, 2017)

What, in your opinion, makes for a great fictional user interface?

David Sheldon-Hicks: That’s a good question. Different screens need to do different things. For example, there are ambient screens that help to create background ‘noise’ – think of a busy mission control and all the screens that help set the scene and create a tense atmosphere. The audience doesn’t need to see all those screens in detail, but they need to feel coherent and do that by reinforcing the overall visual language.

Then there are the hero screens that help to explain plot points. These tie into specific ‘story beats’ and are only in shot for about 3 seconds. There’s a lot that needs to come together in that moment. The FUI has to clearly communicate the narrative point, visualise and explain often complex information at a glance. If it’s a science fiction story, the screen has to convey something about that future and about its purpose; it has to feel futuristic yet be understandable at the same time. The interaction should feel credible in that world so that the audience can accept it as a natural part of the story.  If it achieves all that and manages to look and feel fresh and original, I think it could be a great FUI.

Chris: What about “props”? Say, the door security in Prometheus, or the tablets in Ad Astra. Are those ambient or hero?

That depends on whether they are created specifically to support a storybeat. For example, the tablet in Ad Astra and the screen in The Martian where the audience and characters understand that Watney is still alive both help to explain context, while door furniture is often embellishment used to convey a standard of technology, and if it doesn’t work or is slow to work it can be a narrative device to build tension and drama. Because a production can be fluid and we never really know exactly which screens will end up in camera and for how long, we try to give the director and DOP (director of photography) as much flexibility as possible by taking as much care over ambient screens as we do for hero screens.

The Martian (Twentieth Century Fox, 2015)

Where do you look for inspiration when designing?

David Sheldon-Hicks: Another good question! Prometheus really set our approach in that director Ridley Scott wanted us to stay away from other cinematic sci-fi references and instead draw on art, modern dance choreography and organic and marine life for our inspiration. We did this and our work took on an organic feel that felt fresh and original. It was a great insight that we continue to apply when it’s appropriate. In other situations, the design brief and references are more tightly controlled, for good reason. I’m thinking of Ad Astra and The Martian, which are both based on science fact, and Zero Dark Thirty and Wolf’s Call, which are in effect docudramas that require absolute authenticity in terms of design. 

What makes for a great FUI designer?

David Sheldon-Hicks: We look for great motion designers, creatively curious team players who enjoy R&D and data visualisation, are quick learners with strong problem-solving skills.

There are so many people involved in sci-fi interfaces for blockbusters. How is consistency maintained across all the teams?

David Sheldon-Hicks: We have great producers, and a structured approach to briefings and reviews to ensure the team is on track. Also, we use Autodesk Shotgun, which helps to organise, track and share the work to required specifications and formats, and remote review and approve software which enables us to work and collaborate effectively across teams and time zones. 

I understand the work is very often done at breakneck speeds. How do you create something detailed and spectacular with such short turnaround times?

David Sheldon-Hicks: Broadly speaking, the visual language is the first thing we tackle and once approved, that sets the design aesthetic across an asset package. We tend to take a modular approach that allows us to create a framework into which elements can plug and play. On big shows we look at design behaviours for elements, animations and transitions and set those up as widgets. After we have automated as much as we can, we can become more focussed on refining the specific look and feel of individual screens to tie into storybeats. 

That sounds fascinating. Can you share a few images that allow us to see a design language across these phases?

I can share a few screens from The Martian that show you how the design language and all screens are developed to feel cohesive across a set. 

What thing about the industry do you think most people in audiences would be surprised by?

David Sheldon-Hicks: It would probably surprise most people to know how unglamorous filmmaking is and how much thought goes into the details. It’s an incredible effort by a huge number of people, and from creative vendors it demands 24-hour delivery, instant response times, time zone challenges, early morning starts on set, and so on. It can be incredibly challenging and draining, but we give so much to it; like every prop and costume accessory, every detail on a screen has a purpose and is weighed up and discussed.

How do you think that FUI in cinema has evolved over the past, say, 10 years?

David Sheldon-Hicks: When we first started out in 2010, green screen dominated and it was rare to find directors who preferred to work with on-set screens. Directors like Ridley Scott (Prometheus, 2012), Kathryn Bigelow (Zero Dark Thirty, 2012) and James Gunn (Guardians of the Galaxy, 2014), who liked it for how it supports actors’ performances and contributes to ambience and lighting in-camera, used it, and eventually it gained in popularity, as is reflected in our film credits. In time, volumetric design came to suggest advanced technology and we incorporated 3D elements into our screens, like in Avengers: Age of Ultron (2015). Ultimately this led to full holographic elements, like the giant advertising holograms and 3D signage we created for Ghost in the Shell (2017). Today, briefs still vary but we find that authenticity and credibility continue to be paramount. Whatever we make, it has to feel seamless and natural to the story world.

Where do you expect the industry might go in the future? (Acknowledging that it’s really hard to see past the COVID-19 pandemic.)

David Sheldon-Hicks: On the industry front, virtual production has come into its own by necessity and we expect to see more of that in the future. We also now find that the art department and VFX are collaborating as more integrated teams, with conversations that cross production and post-production. As live-rendered CG becomes more established in production, it will be interesting to see what becomes of on-set props and screens. I suspect that some directors will continue to favour them while others will enjoy the flexibility that VFX offers. Whatever happens, we have made sure to gear up to work as the studios and directors prefer.

I know that Territory does work for “real world” clients in addition to cinema. How does your work in one domain influence work in the other?

David Sheldon-Hicks: Clients often come to us because they have seen our FUI in a Marvel film, or in The Martian or Blade Runner 2049, and they want that forward-facing look and feel to their product UI. We try, within the limitations of real-world constraints, to apply a similar creative approach to client briefs as we do to film briefs, combining high production values with a future-facing aesthetic style.  Hence, our work on the Huami Amazfit smartwatch tapped into a superhero aesthetic that gave data visualisations and infographics a minimalistic look with smooth animated details and transitions between functions and screens. We applied the same approach to our work with Medivis’ innovative biotech AR application which allows doctors to use a HoloLens headset to see holographically rendered clinical images and transpose these on to a physical body to better plan surgical procedures.

Similarly, our work for automobile manufacturers applies our experience of designing HUDS and navigation screens for futuristic vehicles to next-generation cars.  

Lastly, I like finishing interviews with these two questions. What’s your favorite sci-fi interface that someone else designed?

David Sheldon-Hicks: Well, I have to say the FUI in the original Star Wars film is what made me want to design film graphics. But my favourite has got to be the physical interface seen in Flight of the Navigator. There is something so human about how the technology adapts to serve the character, rather than the other way around, that it feels like all the technology we create is leading up to that moment.

Flight of the Navigator (Producers Sales Organization, 1986)

What’s next for the studio?

David Sheldon-Hicks: We want to come out of the pandemic lockdown in a good place to continue our growth in London and San Francisco, and over time pursue plans to open in other locations. But in terms of projects, we’ve got a lot of exciting stuff coming up and look forward to Series 1 of Brave New World this summer and of course, No Time To Die in November.

Report Card: Blade Runner (1982)

Read all the Blade Runner posts in chronological order.

The Black Lives Matter protests are still going strong, 14 days after George Floyd was murdered by police in Minneapolis, and thank goodness. Things have to change. It still feels a little wan to post anything to this blog about niche interests in the design of interfaces in science fiction, but I also want to wrap Blade Runner up and post an interview I’ve had waiting in the wings for a bit so I can get to a review of Black Panther (2018) to further support black visibility and Black Lives Matter issues on this platform that I have. So in the interest of that, here’s the report card for Blade Runner.


It is hard to overstate Blade Runner’s cultural impact. It is #29 on hollywoodreporter.com’s list of the best movies of all time. Note that that is not a list of the best sci-fi of all time, but of all movies.

When we look specifically at sci-fi, Blade Runner has tons of accolades as well. Metacritic gave it a score of 84% based on 15 critics, citing “universal acclaim” across 1137 ratings. It was voted best sci-fi film by The Guardian in 2004. In 2008, Blade Runner was voted “all-time favourite science fiction film” in the readers’ poll in New Scientist (requires a subscription, but you can see what you need to in the “peek” first paragraph). The Final Cut (the version used for this review) boasts a 92% on rottentomatoes.com. In 1993 the U.S. National Film Registry selected it for preservation in the Library of Congress as being “culturally, historically, or aesthetically significant.” Adam Savage penned an entire article in 2007 for Popular Mechanics, praising the practical special effects, which still hold up. It just…it means a lot to people.

Drew Struzan’s gorgeous movie poster.

As is my usual caveat, though, this site reviews not the film, but the interfaces that appear in the film, and specifically, across three aspects.

Sci: B (3 of 4) How believable are the interfaces?

My first review was titled “8 Reasons the Voight-Kampff Machine is shit,” so you know I didn’t think too highly of that. But also Deckard’s front door key wouldn’t work like that, and the photo inspector couldn’t work like that. So I’m taken out of the film a lot by these things just breaking believability.

It’s not all 4th-wall-crumbling-ness. Bypassing the magical anti-gravity of the spinners, the pilot interfaces are pretty nice. The elevator is bad design, but quite believable. The VID-PHŌN is fine. Replicants are the primary novum in the story, so the AGI gets a kind-of genre-wide pass, and though the design is terrible, it’s the kind of stupidity we see in the world, so, sure.

Fi: B (3 of 4) How well do the interfaces inform the narrative of the story?

The Voight-Kampff Machine excels at this. It’s uncanny and unsettling, and provides nice cinegenic scenes that telegraph a broader diegesis and even feel philosophical. The Photo Inspector, on the surface, tells us that Deckard is good at his job, as morally bankrupt as it is.

The Spinners and VID-PHŌN do some heavy lifting for worldbuilding, and as functional interfaces do what they need to do, though they are not key storybeats.

But there were lots of missed opportunities. The Elevator and the VID-PHŌN could have reinforced the constant assault of advertisement. The Photo Inspector could have used an ad-hoc tangible user interface to more tightly integrate who Deckard is with how he does his work and the despair of his situation. So no full marks.

The official, meh, John Alvin poster.

Interfaces: F (0 of 4) How well do the interfaces equip the characters to achieve their goals?

This is where the interfaces fail the worst. The Voight-Kampf Machine is, as mentioned in the title of the post, shit. Deckard’s elevator forces him to share personally-identifiable information. The Front Door key cares nothing about his privacy and misses multifactor authentication. The Spinner looks like a car, but works like a VTOL aircraft. The Replicants were engineered specifically to suffer, and rebel, and infiltrate society, to no real diegetic point.

 The VID-PHŌN is OK, I guess.

Most of the interfaces in the film “work” because they were scripted to work, not because they were designed to work, and that makes for very low marks.

Final Grade C (6 of 12), Matinée.

I have a special place in my heart for both great movies with faltering interfaces, and unappreciated movies with brilliant ones. Blade Runner is one of the former. But for its rich worldbuilding, its mood, and the timely themes of members of an oppressed class coming head-to-head with a murderous police force, it will always be a favorite. Don’t not watch this film because of this review. Watch it for all the other reasons.

The lovely Hungarian poster.

VID-PHŌN

At around the midpoint of the movie, Deckard calls Rachel from a public videophone in a vain attempt to get her to join him in a seedy bar. Let’s first look at the device, then the interactions, and finally take a critical eye to this thing.

The panel

The lower part of the panel is a set of back-lit instructions and an input panel, which consists of a standard 12-key numeric input and a “start” button. Each of these momentary pushbuttons is back-lit white and has a red outline.

In the middle-right of the panel we see an illuminated orange logo panel, bearing the Saul Bass Bell System logo and the text reading, “VID-PHŌN” in some pale yellow, custom sans-serif logotype. The line over the O, in case you are unfamiliar, is a macron, indicating that the vowel below should be pronounced as a long vowel, so the brand should be pronounced “vid-phone” not “vid-fahn.”

In the middle-left there is a red “transmitting” button (in all lower case, a rarity) and a black panel that likely houses the camera and microphone. The transmitting button is dark until he interacts with the 12-key input, see below.

At the top of the panel, a small cathode-ray tube screen at face height displays data before and after the call as well as the live video feed during the call. All the text on the CRT is in a fixed-width typeface. A nice bit of worldbuilding sees this screen covered in Sharpie graffiti.

The interaction

His interaction is straightforward. He approaches the nook and inserts a payment card. In response, the panel—including its instructions and buttons—illuminates. A confirmation of the card holder’s identity appears in the upper left of the CRT, i.e. “Deckard, R.,” along with his phone number, “555-6328” (Fun fact: if you misdialed those last four numbers you might end up talking to the Ghostbusters) and some additional identifying numbers.

A red legend at the bottom of the CRT prompts him to “PLEASE DIAL.” It is outlined with what look like ASCII box-drawing characters. He presses the START button and then dials “555-7583” on the 12-key. As soon as the first number is pressed, the “transmitting” button illuminates. As he enters digits, they are simultaneously displayed for him on screen.

His hands are not in-frame as he commits the number and the system calls Rachel. So whether he pressed an enter key, #, or *; or the system just recognizes he’s entered seven digits is hard to say.

After their conversation is complete, her live video feed goes blank, and “TOTAL CHARGE $1.25” is displayed for his review.

Chapter 10 of the book Make It So: Interaction Design Lessons from Science Fiction is dedicated to Communication, and in this post I’ll use the framework I developed there to review the VID-PHŌN, with one exception: this device is public and Deckard has to pay to use it, so he has to specify a payment method, and then the system will report back total charges. That wasn’t in the original chapter and in retrospect, it should have been.

Ergonomics

Turns out this panel is just the right height for Deckard. How do people of different heights or seated in a wheelchair fare? It would be nice if it had some apparent ability to adjust for various body heights. Similarly, I wonder how it might work for differently-abled users, but of course in cinema we rarely get to closely inspect devices for such things.

Activating

Deckard has to insert a payment card before the screen illuminates. It’s nice that the activation entails specifying payment, but how would someone new to the device know to do this? At the very least there should be some illuminated call to action like “insert payment card to begin,” or better yet some iconography so there is no language dependency. Then when the payment card was inserted, the rest of the interface can illuminate and act as a sort of dial-tone that says, “OK, I’m listening.”

Specifying a recipient: Unique Identifier

In Make It So, I suggest five methods of specifying a recipient: fixed connection, operator, unique identifier, stored contacts, and global search. Since this interaction builds on the experience of using a 1982 public pay phone, the 7-digit identifier quickly helps audiences familiar with American telephone standards understand what’s happening. So even if Scott had foreseen the phone explosion that led in 1994 to the ten-digit-dialing standard, or the 2053 events that led to the thirteen-digit-dialing standard, using either would likely have confused audiences and risked the read of this scene. It’s forgivable.

Page 204–205 in the PDF and dead tree versions.

I have a tiny critique about the transmitting button. It should only turn on once he’s finished entering the phone number. That way the system isn’t wasting bandwidth on his dialing speed or on misdials. Let the user finish, review, correct if they need to, and then send. But, again, this is 1982, and direct entry is the way phones worked: if you misdialed, you had to hang up and start over. Still, I don’t think having the transmitting button light up after he entered the 7th digit would have caused any viewers to go all hruh?
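For what it’s worth, here’s a minimal sketch (in Python, just to be concrete) of the dial-then-transmit behavior I’m arguing for, assuming a seven-digit number and a hypothetical CLEAR key. None of this is in the film; it’s just an illustration.

NUMBER_LENGTH = 7

class Dialer:
    def __init__(self):
        self.digits = []            # buffered locally; nothing is transmitted yet
        self.transmitting = False

    def press(self, key: str) -> None:
        if key == "CLEAR":
            self.digits.clear()     # fix a misdial without hanging up
            return
        if key.isdigit() and len(self.digits) < NUMBER_LENGTH:
            self.digits.append(key)
        if len(self.digits) == NUMBER_LENGTH:
            self.transmitting = True    # only now light the "transmitting" lamp
            self.place_call("".join(self.digits))

    def place_call(self, number: str) -> None:
        print(f"transmitting... calling {number}")

d = Dialer()
for k in "5557583":     # Rachel's number from the scene
    d.press(k)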

There are important privacy questions about displaying a recipient’s number in a way that any passer-by can see. Better would have been to mount the input and the contact display on a transverse panel where he could enter and confirm it with little risk of lookie-loos and identity thieves.

Audio & Video

Hopefully, when Rachel received the call, she was informed who it was and that the call was coming from a public video phone. Hopefully it also provided controls for only accepting the audio, in case she was not camera-ready, but we don’t see things from her side in this scene.

Gaze correction is usually needed in video conversation systems since each participant naturally looks at the center of the screen and not at the camera lens mounted somewhere near its edge. Unless the camera is located in the center of the screen (or in the center of the other person’s image on the screen), people would not be “looking” at the other person as is almost always portrayed. Instead, their gaze would appear slightly off-screen. This is a common trope in cinema, but one in which we’ve become increasingly literate, as many of us are working from home much more and gaining experience with videoconferencing systems, so it’s beginning to strain suspension of disbelief.

Also how does the sound work here? It’s a noisy street scene outside of a cabaret. Is it a directional mic and directional speaker? How does he adjust the volume if it’s just too loud? How does it remain audible yet private? Small directional speakers that followed his head movements would be a lovely touch.

And then there’s video privacy. If this were the real world, it would be nice if the video had a privacy screen filter. That would have the secondary effect of keeping his head in the right place for the camera. But that is difficult to show cinegenically, so it wouldn’t work for a movie.

Ending the call

Rachel leans forward to press a button on her home video phone to end her part of the call. Presumably Deckard has a similar button to press on his end as well. He should be able to just yank his card out, too.

The closing screen is a nice touch, though total charges may not be the most useful thing. Are VID-PHŌN calls a fixed price? Then this information is not really of use to him after the call as much as it is beforehand. If the call has a variable cost, depending on long distance and duration, for example, then he would want to know the charges as the call is underway, so he can wrap things up if it’s getting too expensive. (Admittedly the Bell System wouldn’t want that, so it’s sensible worldbuilding to omit it.) Also if this is a pre-paid phone card, seeing his remaining balance would be more useful.

But still, the point of the $1.25 total charge was to future-shock audiences of the time, since a public phone call in the United States then cost $0.10. Showing only his remaining balance wouldn’t have delivered that shock. Maybe both? It might have been a cool bit of worldbuilding and callback to follow that outrageous price with “Get this call free! Watch a video of life in the offworld colonies! Press START and keep your eyes ON THE SCREEN.”

Because the world just likes to hurt Deckard.

Deckard’s Photo Inspector

Back to Blade Runner. I mean, the pandemic is still pandemicking, but maybe this will be a nice distraction while you shelter in place. Because you’re smart, sheltering in place as much as you can, and not injecting disinfectants. And, like so many other technologies in this film, this will take a while to deconstruct, critique, and reimagine.

Description

Doing his detective work, Deckard retrieves a set of snapshots from Leon’s hotel room, and he brings them home. Something in the one pictured above catches his eye, and he wants to investigate it in greater detail. He takes the photograph and inserts it in a black device he keeps in his living room.

Note: I’ll try and describe this interaction in text, but it is much easier to conceptualize after viewing it. Owing to copyright restrictions, I cannot upload this length of video with the original audio, so I have added pre-rendered closed captions to it, below. All dialogue in the clip is Deckard.

Deckard does digital forensics, looking for a lead.

He inserts the snapshot into a horizontal slit and turns the machine on. A thin, horizontal orange line glows on the left side of the front panel. A series of seemingly random-length orange lines begin to chase one another in a single-row space that stretches across the remainder of the panel and continue to do so throughout Deckard’s use of it. (Imagine a news ticker, running backwards, where the “headlines” are glowing amber lines.) This seems useless and an absolutely pointless distraction for Deckard, putting high-contrast motion in his peripheral vision, which fights for attention with the actual, interesting content down below.

If this is distracting you from reading, YOU SEE MY POINT.

After a second, the screen reveals a blue grid, behind which the scan of the snapshot appears. He stares at the image in the grid for a moment, and speaks a set of instructions, “Enhance 224 to 176.”

In response, three data points appear overlaying the image at the bottom of the screen. Each has a two-letter label and a four-digit number, e.g. “ZM 0000 NS 0000 EW 0000.” The NS and EW—presumably North-South and East-West coordinates, respectively—immediately update to read, “ZM 0000 NS 0197 EW 0334.” After updating the numbers, the screen displays a crosshairs, which target a single rectangle in the grid.

A new rectangle then zooms in from the edges to match the targeted rectangle, as the ZM number—presumably zoom, or magnification—increases. When the animated rectangle reaches the targeted rectangle, its outline blinks yellow a few times. Then the contents of the rectangle are enlarged to fill the screen, in a series of steps which are punctuated with sounds similar to a mechanical camera aperture. The enlargement is perfectly resolved. The overlay disappears until the next set of spoken commands. The system response between Deckard’s issuing the command and the device’s showing the final enlarged image is about 11 seconds.

Deckard studies the new image for a while before issuing another command. This time he says, “Enhance.” The image enlarges in similar clacking steps until he tells it, “Stop.”

Other instructions he is heard to give include “move in, pull out, track right, center in, pull back, center, and pan right.” Some include discrete instructions, such as, “Track 45 right” while others are relative commands that the system obeys until told to stop, such as “Go right.”

Using such commands he isolates part of the image that reveals an important clue, and he speaks the instruction, “Give me a hard copy right there.” The machine prints the image, which Deckard uses to help find the replicant pictured.

This image helps lead him to Zhora.

I’d like to point out one bit of sophistication before the critique. Deckard can issue a command with or without a parameter, and the inspector knows what to do. For example, “Track 45 right” and “Track right.” Without the parameter, it will just do the thing repeatedly until told to stop. That lets Deckard issue the same basic command both when he knows exactly where he wants to look and when he doesn’t know exactly what he’s looking for. That’s a nice feature of the language design.
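Just to show how simple that language is, here’s a minimal sketch of the grammar in Python: a verb, an optional numeric parameter (or an “A to B” pair for enhance), and an optional direction, with parameterless commands running until “stop.” The verb list and structure are my assumptions; the film never shows how the device parses speech.

import re

COMMAND = re.compile(
    r"^(?P<verb>enhance|track|pan|move|pull|center|go)"
    r"(?:\s+(?P<start>\d+))?"
    r"(?:\s+to\s+(?P<end>\d+))?"
    r"(?:\s+(?P<direction>left|right|in|out|back))?$"
)

def parse(utterance: str) -> dict:
    """Turn one spoken instruction into a structured command."""
    text = utterance.strip().lower()
    if text == "stop":
        return {"verb": "stop"}
    match = COMMAND.match(text)
    if not match:
        raise ValueError(f"Unrecognized instruction: {utterance!r}")
    start, end = match.group("start"), match.group("end")
    return {
        "verb": match.group("verb"),
        "start": int(start) if start else None,
        "end": int(end) if end else None,
        "direction": match.group("direction"),
        "continuous": start is None,   # no parameter => repeat until "stop"
    }

print(parse("Enhance 224 to 176"))   # discrete, two-point zoom
print(parse("Track 45 right"))       # discrete, single parameter
print(parse("track right"))          # continuous, runs until "stop"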

But still, asking him to provide step-by-step instructions in this clunky way feels like some high-tech Big Trak. (I tried to find a reference that was as old as the film.) And that’s not all…

Some critiques, as it is

  • Can I go back and mention that amber distracto-light? Because it’s distracting. And pointless. I’m not mad. I’m just disappointed.
  • It sure would be nice if any of the numbers on screen made sense, or had any bearing on the numbers Deckard speaks, at any time during the interaction. For instance, the initial zoom (I checked in Photoshop) is around 304%, which is neither the 224 nor the 176 that Deckard speaks.
  • It might be that each square has a number, and he simply has to name the two squares at the extents of the zoom he wants, letting the machine find the extents, but where is the labeling? Did he have to memorize an address for each pixel? How does that work at arbitrary levels of zoom?
  • And if he’s memorized it, why show the overlay at all?
  • Why the seizure-inducing flashing in the transition sequences? Sure, I get that lots of technologies have unfortunate effects when constrained by mechanics, but this is digital.
  • Why is the printed picture so unlike the still image where he asks for a hard copy?
  • Gaze at the reflection in Ford’s hazel, hazel eyes, and it’s clear he’s playing Missile Command, rather than paying attention to this interface at all. (OK, that’s the filmmaker’s issue, not a part of the interface, but still, come on.)
The photo inspector: My interface is up HERE, Rick.

How might it be improved for 1982?

So if 1982 Ridley Scott was telling me in post that we couldn’t reshoot Harrison Ford, and we had to make it just work with what we had, here’s what I’d do…

Squash the grid so the cells match the 4:3 ratio of the NTSC screen. Overlay the address of each cell, while highlighting column and row identifiers at the edges. Have the first cell’s outline illuminate as he speaks it, and have the outline expand to encompass the second named cell. Then zoom, removing the cell labels during the transition. When at anything other than full view, display a map across four cells that shows the zoom visually in the context of the whole.

Rendered in glorious 4:3 NTSC dimensions.

With this interface, the structure of the existing conversation makes more sense. When Deckard said, “Enhance 203 to 608” the thing would zoom in on the mirror, and the small map would confirm.

The numbers wouldn’t match up, but it’s pretty obvious from the final cut that Scott didn’t care about that (or, more charitably, ran out of time). Anyway I would be doing this under protest, because I would argue this interaction needs to be fixed in the script.
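To make the cell addressing concrete, here’s a minimal sketch of the math this redesign assumes. The grid size, scan resolution, and row-major numbering are all invented for illustration; nothing on screen establishes them.

# Hypothetical 4:3 grid laid over a hypothetical scan resolution.
GRID_COLS, GRID_ROWS = 40, 30
IMG_W, IMG_H = 1280, 960
CELL_W, CELL_H = IMG_W // GRID_COLS, IMG_H // GRID_ROWS

def cell_to_rowcol(cell: int) -> tuple[int, int]:
    """Convert a row-major cell number to (row, col)."""
    return divmod(cell, GRID_COLS)

def enhance(cell_a: int, cell_b: int) -> tuple[int, int, int, int]:
    """Return the pixel rectangle (x, y, w, h) spanning both named cells,
    i.e. the target of an "Enhance A to B" command."""
    (r1, c1), (r2, c2) = cell_to_rowcol(cell_a), cell_to_rowcol(cell_b)
    left, right = min(c1, c2), max(c1, c2) + 1
    top, bottom = min(r1, r2), max(r1, r2) + 1
    return (left * CELL_W, top * CELL_H,
            (right - left) * CELL_W, (bottom - top) * CELL_H)

# "Enhance 203 to 608" on this made-up grid:
print(enhance(203, 608))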

How might it be improved for 2020?

What’s really nifty about this technology is that it’s not just a photograph. Look close in the scene, and Deckard isn’t just doing CSI Enhance! commands (or, to be less mocking, AI upscaling). He’s using the photo inspector to look around corners and at objects that are reconstructed from the smallest reflections. So we can think of the interaction like he’s controlling a drone through a 3D still life, looking for a lead to help him further the case.

With that in mind, let’s talk about the display.

Display

To redesign it, we have to decide at a foundational level how we think this works, because it will color what the display looks like. Is this all data that’s captured from some crazy 3D camera and available in the image? Or is it being inferred from details in the two-dimensional image? Let’s call the first the 3D capture, and the second the 3D inference.

If we decide this is a 3D capture, then all the data that he observes through the machine has the same degree of confidence. If, however, we decide this is a 3D inferrer, Deckard needs to treat the inferred data with more skepticism than the data the camera directly captured. The 3D inferrer is the harder problem, and raises some issues that we must deal with in modern AI, so let’s just say that’s the way this speculative technology works.

The first thing the display should do is make it clear what is observed and what is inferred. How you do this is partly a matter of visual design and style, but partly a matter of diegetic logic. The first pass would be to render everything in the camera frustum photo-realistically, and then render everything outside of that in a way that signals its confidence level. The comp below illustrates one way this might be done.

Modification of a pair of images found on Evermotion
  • In the comp, Deckard has turned the “drone” from the “actual photo,” seen off to the right, toward the inferred space on the left. The monochrome color treatment provides that first high-confidence signal.
  • In the scene, the primary inference would come from reading the reflections in the disco ball overhead lamp, maybe augmented with plans for the apartment that could be found online, or maybe purchase receipts for appliances, etc. Everything it can reconstruct from the reflection and high-confidence sources has solid black lines, a second-level signal.
  • The smaller knickknacks that are out of the reflection of the disco ball, and implied from other, less reflective surfaces, are rendered without the black lines and blurred. This provides a signal that the algorithm has a very low confidence in its inference.

This is just one (not very visually interesting) way to handle it, but should illustrate that, to be believable, the photo inspector shouldn’t have a single rendering style outside the frustum. It would need something akin to these levels to help Deckard instantly recognize how much he should trust what he’s seeing.
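As a sketch of that tiering, here is one way the inferrer might pick a treatment for each reconstructed surface. The thresholds and style names are assumptions for illustration, not anything from the film or the comp.

from enum import Enum

class RenderStyle(Enum):
    PHOTOREAL = "full color, as captured inside the camera frustum"
    OUTLINED = "monochrome with solid outlines (high-confidence inference)"
    BLURRED = "monochrome, no outlines, blurred (low-confidence inference)"

def style_for(observed: bool, confidence: float) -> RenderStyle:
    """Choose a treatment that signals how much Deckard should trust a surface."""
    if observed:                  # directly visible in the original photo
        return RenderStyle.PHOTOREAL
    if confidence >= 0.7:         # e.g. read off the disco-ball reflection or floor plans
        return RenderStyle.OUTLINED
    return RenderStyle.BLURRED    # implied from weaker, less reflective cues

print(style_for(observed=True, confidence=1.0))
print(style_for(observed=False, confidence=0.85))
print(style_for(observed=False, confidence=0.3))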

Flat screen or volumetric projection?

Modern CGI loves big volumetric projections. (e.g. it was the central novum of last year’s Fritz winner, Spider-Man: Far From Home.) And it would be a wonderful juxtaposition to see Deckard in a holodeck-like recreation of Leon’s apartment, with all the visual treatments described above.

But…

Also seriously who wants a lamp embedded in a headrest?

…that would kind of spoil the mood of the scene. This isn’t just about Deckard’s finding a clue; we also see a little about who he is and what his life is like. We see the smoky apartment. We see the drab couch. We see the stack of old detective machines. We see the neon lights and annoying advertising lights swinging back and forth across his windows. Immersing him in a big volumetric projection would lose all this atmospheric stuff, so I’d recommend keeping it either a small contained VP, like we saw in Minority Report, or just a small flat screen.


OK, so now that we have an idea of how the display should (and shouldn’t) look, let’s move on to talk about the inputs.

Inputs

To talk about inputs, then, we have to return to a favorite topic of mine, and that is the level of agency we want for the interaction. In short, we need to decide how much work the machine is doing. Is the machine just a manual tool that Deckard has to manipulate to get it to do anything? Or does it actively assist him? Or, lastly, can it even do the job while his attention is on something else—that is, can it act as an agent on his behalf? Sophisticated tools can be a blend of these modes, but for now, let’s look at them individually.

Manual Tool

This is how the photo inspector works in Blade Runner. It can do things, but Deckard has to tell it exactly what to do. But we can still improve it in this mode.

We could give him well-mapped physical controls, like a remote control for this conceptual drone. Flight controls wind up being a recurring topic on this blog (and even came up already in the Blade Runner reviews with the Spinners) so I could go on about how best to do that, but I think that a handheld controller would ruin the feel of this scene, like Deckard was sitting down to play a video game rather than do off-hours detective work.

Special edition made possible by our sponsor, Tom Nook.
(I hope we can pay this loan back.)

Similarly, we could talk about a gestural interface, using some of the synecdochic techniques we’ve seen before in Ghost in the Shell. But again, this would spoil the feel of the scene, having him look more like John Anderton in front of a tiny-TV version of Minority Report’s famous crime scrubber.

One of the things that gives this scene its emotional texture is that Deckard is drinking a glass of whiskey while doing his detective homework. It shows how low he feels. Throwing one back is clearly part of his evening routine, so much a habit that he does it despite being preoccupied about Leon’s case. How can we keep him on the couch, with his hand on the lead crystal whiskey glass, and still investigating the photo? Can he use it to investigate the photo?

Here I recommend a bit of ad-hoc tangible user interface. I first backworlded this for The Star Wars Holiday Special, but I think it could work here, too. Imagine that the photo inspector has a high-resolution camera on it, and the interface allows Deckard to declare any object that he wants as a control object. After the declaration, the camera tracks the object against a surface, using the changes to that object to control the virtual camera.

In the scene, Deckard can declare the whiskey glass as his control object, and the arm of his couch as the control surface. Of course the virtual space he’s in is bigger than the couch arm, but it could work like a mouse and a mousepad. He can just pick it up and set it back down again to extend motion.

This scheme takes into account all movement except vertical lift and drop. This could be a gesture or a spoken command (see below).

Going with this interaction model means Deckard can use the whiskey glass, allowing the scene to keep its texture and feel. He can still drink and get his detective on.

Tipping the virtual drone to the right.
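Here’s a minimal sketch of that mapping, assuming the inspector’s camera reports the glass’s slide and twist as deltas. The scale factors and the camera model are invented for illustration.

from dataclasses import dataclass

@dataclass
class DroneCamera:
    x: float = 0.0     # meters in the reconstructed scene
    y: float = 0.0
    z: float = 1.5     # height; changed by gesture or voice, not by the glass
    yaw: float = 0.0   # degrees

PAN_SCALE = 20.0       # scene meters per meter the glass travels on the couch arm
TWIST_SCALE = 1.0      # camera degrees per degree the glass is twisted

def apply_glass_delta(cam: DroneCamera, dx: float, dy: float, dtheta: float) -> None:
    """Map tracked movement of the control object onto the virtual camera.
    Lifting the glass and setting it down elsewhere sends no delta, which is
    what lets the small couch arm cover the whole apartment, mousepad-style."""
    cam.x += dx * PAN_SCALE
    cam.y += dy * PAN_SCALE
    cam.yaw = (cam.yaw + dtheta * TWIST_SCALE) % 360

cam = DroneCamera()
apply_glass_delta(cam, dx=0.02, dy=0.0, dtheta=-45)   # slide the glass a little, twist it left
print(cam)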

Assistant Tool

Indirect manipulation is helpful for when Deckard doesn’t know what he’s looking for. He can look around, and get close to things to inspect them. But when he knows what he’s looking for, he shouldn’t have to go find it. He should be able to just ask for it, and have the photo inspector show it to him. This requires that we presume some AI. And even though Blade Runner clearly includes General AI, let’s presume that that kind of AI has to be housed in a human-like replicant, and can’t be squeezed into this device. Instead, let’s just extend the capabilities of Narrow AI.

Some of this will be navigational and specific, “Zoom to that mirror in the background,” for instance, or, “Reset the orientation.” Some will be more abstract and content-specific, e.g. “Head to the kitchen” or “Get close to that red thing.” If it had gaze detection, he could even indicate a location by looking at it: “Get close to that red thing there,” for example, while looking at the red thing. Given the 3D-inferrer nature of this speculative device, he might also want to trace the provenance of an inference, as in, “How do we know this chair is here?” This implies natural language generation as well as understanding.
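As a sketch of how a gaze-qualified reference like “that red thing there” might be resolved, the narrow AI could intersect the spoken description with whatever the gaze tracker reports. The scene objects, color tags, and tolerance below are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SceneObject:
    name: str
    color: str
    position: tuple   # (x, y, z) meters in the reconstructed scene

def resolve_reference(description: str, gaze_point: tuple,
                      scene: list, tolerance: float = 0.5) -> Optional[SceneObject]:
    """Pick the object matching the spoken description that is nearest the gaze point."""
    candidates = [o for o in scene if o.color in description.lower()]
    if not candidates:
        return None
    def dist(o: SceneObject) -> float:
        return sum((a - b) ** 2 for a, b in zip(o.position, gaze_point)) ** 0.5
    best = min(candidates, key=dist)
    return best if dist(best) <= tolerance else None

scene = [SceneObject("lamp", "red", (2.0, 1.0, 1.5)),
         SceneObject("kettle", "red", (5.0, 3.0, 0.9))]
print(resolve_reference("Get close to that red thing there", (2.1, 1.2, 1.4), scene))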

There’s nothing stopping him from using the same general commands heard in the movie, but I doubt anyone would want to use those when they have commands like this and the object-on-hand controller available.

Ideally Deckard would have some general search capabilities as well, to ask questions and test ideas. “Where were these things purchased?” or subsequently, “Is there video footage from the stores where he purchased them?” or even, “What does that look like to you?” (The correct answer would be, “Well that looks like the mirror from the Arnolfini portrait, Ridley…I mean…Rick*”) It can do pattern recognition and provide as much extra information as it has access to, just like Google Lens or IBM Watson image recognition does.

*Left: The convex mirror in Leon’s 21st century apartment.
Right: The convex mirror in Arnolfini’s 15th century apartment

Finally, he should be able to ask after simple facts to see if the inspector knows or can find it. For example, “How many people are in the scene?”

All of this still requires that Deckard initiate the action, and we can augment it further with a little agentive thinking.

Agentive Tool

To think in terms of agents is to ask, “What can the system do for the user, but not requiring the user’s attention?” (I wrote a book about it if you want to know more.) Here, the AI should be working alongside Deckard. Not just building the inferences and cataloguing observations, but doing anomaly detection on the whole scene as it goes. Some of it is going to be pointless, like “Be aware the butter knife is from IKEA, while the rest of the flatware is Christofle Lagerfeld. Something’s not right, here.” But some of it Deckard will find useful. It would probably be up to Deckard to review summaries and decide which were worth further investigation.

It should also be able to help him with his goals. For example, the police had Zhora’s picture on file. (And her portrait even rotates in the dossier we see at the beginning, so it knows what she looks like in 3D for very sophisticated pattern matching.) The moment the agent—while it was reverse ray tracing the scene and reconstructing the inferred space—detects any faces, it should run the face through a most wanted list, and specifically Deckard’s case files. It shouldn’t wait for him to find it. That again poses some challenges to the script. How do we keep Deckard the hero when the tech can and should have found Zhora seconds after being shown the image? It’s a new challenge for writers, but it’s becoming increasingly important for believability.
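A sketch of that unprompted matching might look like the following. The similarity scores stand in for whatever pattern matcher the inspector uses, and the 66% threshold echoes the rewritten scene below; none of it is established by the film.

MATCH_THRESHOLD = 0.66   # echoes the "threshold is set to 66%" line in the scene below

# Hypothetical output of the inspector's pattern matcher:
# detected face -> {case-file name: similarity score}
detected = {
    "face_01": {"Leon": 0.95, "Zhora": 0.12},
    "face_02": {"Leon": 0.08, "Zhora": 0.63},   # close, but under threshold
}

def unprompted_alerts(scores: dict) -> list:
    """What the agent volunteers without being asked."""
    alerts = []
    for face, matches in scores.items():
        name, score = max(matches.items(), key=lambda kv: kv[1])
        if score >= MATCH_THRESHOLD:
            alerts.append(f"{face}: {score:.0%} match to {name}'s case file")
    return alerts

def answer_query(scores: dict, face: str, name: str) -> str:
    """What the agent says when Deckard asks about a specific face directly."""
    return f"{scores[face][name]:.0%} of it does"

print(unprompted_alerts(detected))                 # only the Leon match is volunteered
print(answer_query(detected, "face_02", "Zhora"))  # "63% of it does"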

I’ve never figured out why she has a snake tattoo here (it seems really important to the plot), but when Deckard finally meets her, it has disappeared.

Scene

Interior. Deckard’s apartment. Night.

Deckard grabs a bottle of whiskey, a glass, and the photo from Leon’s apartment. He sits on his couch, places the photo on the coffee table and says “Photo inspector?” The machine on top of a cluttered end table comes to life. Deckard continues, “Let’s look at this.” He points to the photo. A thin line of light sweeps across the image. The scanned image appears on the screen, pulled in a bit from the edges. A label reads, “Extending scene,” and we see wireframe representations of the apartment outside the frame begin to take shape. A small list of anomalies begins to appear to the left. Deckard pours a few fingers of whiskey into the glass. He takes a drink and says, “Controller,” before putting the glass on the arm of his couch. Small projected graphics appear on the arm facing the inspector. He says, “OK. Anyone hiding? Moving?” The inspector replies, “No and no.” Deckard looks at the screen and says, “Zoom to that arm and pin to the face.” He turns the glass on the couch arm counterclockwise, and the “drone” revolves around to show Leon’s face, with the shadowy parts rendered in blue. He asks, “What’s the confidence?” The inspector replies, “95.” On the side of the screen the inspector overlays Leon’s police profile. Deckard says, “Unpin,” and lifts his glass to take a drink. He moves from the couch to the floor to stare more intently and places his drink on the coffee table. “New surface,” he says, and turns the glass clockwise. The camera turns and he sees into a bedroom. “How do we have this much inference?” he asks. The inspector replies, “The convex mirror in the hall…” Deckard interrupts, saying, “Wait. Is that a foot? You said no one was hiding.” The inspector replies, “The individual is not hiding. They appear to be sleeping.” Deckard rolls his eyes. He says, “Zoom to the face and pin.” The view zooms to the face, but the camera is level with her chin, making it hard to make out the face. Deckard tips the glass forward and the camera rises up to focus on a blue, wireframed face. Deckard says, “That look like Zhora to you?” The inspector overlays her police file and replies, “63% of it does.” Deckard says, “Why didn’t you say so?” The inspector replies, “My threshold is set to 66%.” Deckard says, “Give me a hard copy right there.” He raises his glass and finishes his drink.


This scene keeps the texture and tone of the original, and camps on the limitations of Narrow AI to let Deckard be the hero. And doesn’t have him programming a virtual Big Trak.

Pathogen Movie Backgrounds

While we’re all sheltering-at-home, trying to contain the COVID-19 virus, many of us are doing business through videoconferencing apps. Some of these let you add backgrounds, and people are having some fun with these. We need all the levity we can get.

The Killer that Stalked New York (1950)

Also, those of us with kids are slammed, suddenly doing daycare and homeschooling. The lucky ones among us are also trying to hold down jobs.

While I’m scrambling to do all my stuff, there’s not a ton of time for blog-related things. (I spent quite a bit of time making the last post, and need to catch up on some of those other spinning plates.) So, this week I’m doing a low-effort but still-timely post of backgrounds grabbed from the movies referenced in the Spreading Pathogen Maps post.

Hopefully this will prove fun for you, and will buy me a bit of time to get back to Blade Runner.

The Andromeda Strain (1971)

Outbreak (1995)

Evolution (2001)

Contagion (2011)

Rise of the Planet of the Apes (2011)

World War Z (2013)

Edge of Tomorrow (2014)

Dawn of the Planet of the Apes (2014)

Spreading pathogen maps

So while the world is in the grip of the novel COVID-19 coronavirus pandemic, I’ve been thinking about those fictional user interfaces that appear in pandemic movies that project how quickly the infectious-agent-in-question will spread. The COVID-19 pandemic is a very serious situation. Most smart people are sheltering in place to prevent an overwhelmed health care system and finding themselves with some newly idle cycles (or if you’re a parent like me, a lot fewer idle cycles). Looking at this topic through the lens of sci-fi is not to minimize what’s happening around us as trivial, but to process the craziness of it all through this channel that I’ve got on hand. I did it for fascism, I’ll do it for this. Maybe this can inform some smart speculative design.

Caveat #1: As a public service I have included some information about COVID-19 in the body of the post with a link to sources. These are called out the way this paragraph is, with a SARS-CoV-2 illustration floated on the left. I have done as much due diligence as one blogger can do to not spread disinformation, but keep in mind that our understanding of this disease and the context are changing rapidly. By the time you read this, facts may have changed. Follow links to sources to get the latest information. Do not rely solely on this post as a source. If you are reading this from the relative comfort of the future after COVID-19, feel free to skip these.

A screen grab from a spreading pathogen map from Contagion (2011), focused on Africa and Eurasia, with red patches surrounding major cities, including Hong Kong.
Get on a boat, Hongkongers, you can’t even run for the hills! Contagion (2011)

And yes, this is less of my normal fare of sci-fi and more bio-fi, but it’s still clearly a fictional user interface, so between that and the world going pear-shaped, it fits well enough. I’ll get back to Blade Runner soon enough. I hope.

Giving credit where it’s due: all but one of the examples in this post were found via the TV Tropes Spreading Disaster Map Graphic page, under live-action film examples. I’m sure I’ve missed some. If you know of others, please mention them in the comments.

Four that are extradiegetic and illustrative

This first set of pandemic maps is extradiegetic.

Vocabulary sidebar: I use that term a lot on this blog, but if you’re new here or new to literary criticism, it bears explanation. Diegesis is used to mean “the world of the story,” as the world in which the story takes place is often distinct from our own. We distinguish things as diegetic and extradiegetic to describe when they occur within the world of the story, or outside of it, respectively. My favorite example is when we see a character in a movie walking down a hallway looking for a killer, and we hear screechy violins that raise the tension. When we hear those violins, we don’t imagine that there is someone in the house who happens to be practicing their creepy violin. We understand that this is extradiegetic music, something put there to give us a clue about how the scene is meant to feel.

So, like those violins, these first examples aren’t something that someone in the story is looking at. (Claude Paré? Who the eff is—Johnson! Get engineering! Why are random names popping up over my pandemic map?) They’re something the film is doing for us in the audience.

The Killer that Stalked New York (1950) is a noir film about a smallpox outbreak in New York City.