Introducing Afrofuturist Lonny J Avi Brooks

My guest contributor for the Black Panther reviews is Dr. Lonny J. Avi Brooks.

Dr. Brooks is an Associate Professor in the Department of Communication at California State University, East Bay, where he has piloted the integration of futures thinking into the communication curriculum for the last fifteen years. Emerging in recent years as a leading voice of Afrofuturism 2.0, Brooks contributes prolifically to journals, conferences, and anthologies on the subject, as well as serving as executive producer and co-creator, with Ahmed Best, of The Afrofuturist Podcast. He is the lead co-editor for a special issue of the Journal of Futures Studies: “When is Wakanda? Afrofuturism and Dark Speculative Futurity.”

Cover image for the special edition (uncredited)

He is lead organizer in Oakland and advisory board member for the Black Speculative Arts Movement (BSAM), a national and global movement co-founded by Reynaldo Anderson and dedicated to celebrating the Black imagination and design. Dr. Brooks serves as Creative Director for BSAM Futures, which aims to promote, publish, and teach forecasting with Afrocentric perspectives in mind, using gaming and facilitation for imaginative, action-oriented thinking.

Cover art for Afrofuturism 2.0: The Rise of Astro-Blackness, by John Jennings.

He also volunteers as a core member for outreach at Dynamicland.org, a pioneering non-profit dedicated to creating a more collaborative and dynamic computational medium for the long term. He has a passion for creating games to envision social justice futures, including Black and queer liberation, from Afro-Rithms From The Future to United Queerdom and Futurescope. He and his co-game designer Eli Kosminsky are committed to articulating emerging new future visions for traditionally underrepresented voices.

He is currently writing Imagining Queer Futures with Afrofuturism@Futureland: Circulating Afro-Queer futuretypes of Work, Culture and Racial Identity.

“As a forecaster and Afrofuturist who imagines alternative futures from a Black Diaspora perspective, I think about long-term signals that will shape the next 10 to 100 years.”

Dr. Lonny J Avi Brooks

Welcome, Dr. Brooks!

Realtime story visualization

Caveat: This is definitely me reading into things. Or even inferring something that I’d like to see in the world. But why not?

Black Panther begins with a conversation between a son and father.

  • SON: Baba?
  • FATHER: Yes, my son?
  • SON: Tell me a story.
  • FATHER: Which one?
  • SON: The story of home.

The conversation continues with the father describing the history of Wakanda. On screen, we see a lovely sequence of shapes that illustrate the story. A meteor strikes Africa and the nearby flora and fauna change. Five hands form a pentagram version of the four-handed carry grip to represent the five tribes. The hands shift to become warring tribespeople. Their armor. Their weapons. Their animals.

All these shapes are made from vibranium sand—gunmetal-gray, sparkling particles, see the screen caps—that move and reform fluidly, with a unifying highlight of glowing blue.

Now, this opening sequence isn’t presented as an interface, or really, as anything in the diegesis at all. We understand it is exposition, for us in the audience. But what if it wasn’t? What if this is showing us a close-up of a display that illustrates in real time what the storyteller is saying? Something just over the shoulder of Baba that the child can watch?

The display would not be prerecorded, which would require the storyteller to match its fixed pace. (Presenters who have tried pecha-kucha-style presentations, 20 slides at 20 seconds each, will know how awkward this can be.) Instead, this display responds instantly to the storyteller’s tone and pace, allowing them to tailor the story to the responses of the audience: emphasizing the things that seem exciting, or heartwarming, or whatever the storyteller wants.

It’s a given in the MCU that Wakanda has developed the technology to control vibranium down to a very small scale, including levitating it, shaping it, and having it form materials of widely varying properties. Nearly all of the technology we see in the film is made from it. So, the diegetic technology for such a display is there.

It’s not that far a stretch from 2D technology we have now. The game Scribblenauts lets players type in phrases and *poof* that thing appears in the scene with your characters. I doubt it’s, like, dictionary-exhaustive, but the vast majority of things my son and I have typed in have been there.

  • Black panther? Check. (Well, it’s the large cat version, anyway.)
  • Huge pink Cthulhu? Check.
  • Teeny tiny singularity? Check!
  • Enraged plaid Beowulf? OK. Not that. But if enough people typed it in, I have a feeling it would eventually show up.

Pipe a speech-to-text engine into something like that, skin it with vibranium sand, and you’re most of the way there.
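
For the curious, here’s a minimal sketch of that pipeline’s shape. Everything in it is invented for illustration; the classes stand in for whatever real speech-to-text engine and render engine you’d actually wire together.

```python
# Hypothetical sketch: a speech-to-text engine streaming phrases into a
# Scribblenauts-style renderer that "forms" whatever gets named.
class SpeechToText:
    def stream_phrases(self):
        # A real engine would yield phrases as the storyteller speaks;
        # these stand-ins echo the film's opening myth.
        yield "a meteor strikes africa"
        yield "five tribes go to war"

class VibraniumSandRenderer:
    def form(self, phrase: str):
        # A real renderer would look up a model and animate the sand;
        # here we just log what would be formed.
        print(f"forming '{phrase}' out of vibranium sand")

def storyteller_display(stt, renderer):
    # The display keeps the storyteller's pace because it renders each
    # phrase the moment it is recognized, not on a fixed schedule.
    for phrase in stt.stream_phrases():
        renderer.form(phrase)

storyteller_display(SpeechToText(), VibraniumSandRenderer())
```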

This unfortunate screen cap makes it look like Cthulhu’s about to take a dump in a birdbath.

The interface issues for such a thing probably center on (1) interpretation and (2) control.

1. Natural language understanding of the story

I work on a natural language AI system in my day job at IBM, and disambiguation is one of the major challenges we face: teaching the systems enough about the world and language to understand what a user might have meant when they typed something like “deliveries tuesday.” But I work with real-world narrow artificial intelligence, and getting it to understand as a human might is a massive undertaking.
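
To make the problem concrete, here is a toy sketch, emphatically not IBM’s system, of why those two tokens are ambiguous: the same phrase supports several reasonable intents, and something has to score them against context. Every reading and context flag here is invented.

```python
# The two tokens in "deliveries tuesday" support several readings.
CANDIDATE_READINGS = {
    "list":     "show me the deliveries scheduled for Tuesday",
    "schedule": "schedule a new delivery for Tuesday",
    "status":   "did Tuesday's deliveries arrive?",
}

def disambiguate(context: dict) -> str:
    # A real system would score every reading against dialog history,
    # the user's role, open orders, and so on. This stand-in keys off
    # a single invented context flag to show the shape of the decision.
    if context.get("has_open_orders"):
        return CANDIDATE_READINGS["status"]
    return CANDIDATE_READINGS["list"]

print(disambiguate({"has_open_orders": True}))
```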

The MCU generally, and Wakanda in particular, has speculative, human-like Artificial General Intelligences (AGIs) like J.A.R.V.I.S., F.R.I.D.A.Y., and Ultron, so all the disambiguation problems we face in the real world are a trivial issue there. (Noting that Shuri’s AGI isn’t named in the film.)

An AGI could interpret the language in the same reasonable way a person would, then design and render the story like some magical realtime scene painter—only much, much faster. (Plus, I’m pretty sure the display has heard Baba tell this exact same myth before, so its confidence that it is displaying the right thing is even greater.)

2. Controlling the display

The other issue is controlling the display. How does Baba start and stop the rendering? How does he correct something it misunderstood, or change the styling? In the real world we have to work out escape sequences for opt-out systems (like “//” for comments in code) and wake words for opt-in systems (like “Hey, Google” or “Alexa”), but in the MCU we get to rely on the speculative AGI again. Just as a person would know to listen for cues about when to start and stop, it can reasonably interpret commands like “pause display” or “hold here,” as we would expect of a person in a tech booth overseeing a theatrical performance.
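
A quick sketch of those two control styles, with all the command strings and the Display class invented for illustration:

```python
# Opt-out escape prefix (like "//" in code) plus opt-in command
# phrases (like wake words), routing everything else to the renderer.
COMMANDS = {"pause display": "pause", "hold here": "pause", "resume": "resume"}

class Display:
    def control(self, action: str):
        print(f"[control] {action}")

    def render(self, text: str):
        print(f"[render] {text}")

def route_utterance(utterance: str, display: Display):
    text = utterance.strip().lower()
    if text.startswith("//"):
        return  # escape prefix: never interpret or render this line
    if text in COMMANDS:
        display.control(COMMANDS[text])  # opt-in command to the display
    else:
        display.render(text)  # everything else is story content

display = Display()
route_utterance("Tell me the story of home", display)
route_utterance("hold here", display)
```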

***

Given the AGI in Wakanda, vibranium sand, and the render-almost-anything engines in the real world, we don’t even have to add anything to the diegesis to make it work, just make a new combination of existing parts.

So while there is zero evidence that this is a diegetic interface, I’m choosing to believe it is one, and hope somebody makes something like it one day. 


Black Lives Matter: A first reading list

The Black Lives Matter movement needs to be much more than education—we need action to dismantle the unjust and racist systems it brings to light—but education can be a first place to start. So for this first post, let’s talk about how to educate yourself on the issues at hand. This is especially for white people, since these issues can be so far outside our lived experience that the claims seem at first implausible.

Here biracial/black filmmaker Maria Breaux has given me permission to share the books she has shared with me, which are a kind of 101 syllabus. Pick one, any one, and read.

In full disclosure I have not read any of these yet. (I’m a notoriously slow reader.) I’m on this journey, too. I’m starting with The New Jim Crow, because it seems the most painful to read.

New Jim Crow: Michelle Alexander at Dillard Nov. 28 – Antenna.Works

Black Panther (2018)

Release date: 16 Feb 2018

In the Marvel Cinematic Universe, Wakanda is a greatly advanced nation in Africa, which hides from the world both its true nature and the great deposit of valuable vibranium on top of which the capital city is built. The vibranium causes purple flowers to grow in underground caves, the essence of which grants an imbiber superhuman abilities. Wakandans reserve the right to imbibe the essence for their reigning monarch, who is then called the Black Panther.

In 1992 T’Chaka, then king of Wakanda, confronts his brother, Prince N’Jobu, in an Oakland apartment, accusing him of treason and collusion with the murderous vibranium-trafficker Ulysses Klaue. N’Jobu explains his radicalization, “I observed for as long as I could. But their leaders have been assassinated, communities flooded with drugs and weapons. They are overly policed and incarcerated.” He urges T’Chaka to end Wakandan isolationism. Unmoved, the king insists N’Jobu face trial. N’Jobu draws a weapon and aims it at T’Chaka, who in self-defense kills N’Jobu.

In 2018 following the death of T’Chaka, his son Prince T’Challa is to be crowned king. In the ceremony, he is challenged to trial-by-combat by M’Baku, leader of the Jabari tribe, but T’Challa proves victorious.

Meanwhile, ex-military supervillain Killmonger is collaborating with Klaue. Together they violently liberate a Wakandan treasure made of vibranium from a British colonialist museum. Word gets back to Okoye, who is the badass general of the all-female Wakandan royal military, the Dora Milaje. She recommends they follow the lead to bring Klaue to justice, and the royal court agrees. T’Challa is outfitted with a new Black Panther suit and weapons by his science nerd sister, Shuri.

They travel to a South Korean casino to intercept the sale of the vibranium to CIA agent Everett Ross. Klaue arrives and after a gunfight and car chase, is captured. The arrest is short-lived as, after a day, Klaue is busted out of CIA custody by Killmonger and some goons. Agent Ross is wounded in the process, and taken back to Wakanda for healing.

Killmonger betrays Klaue, killing him and bringing his body to Wakanda. There, he reveals that he is the son of N’Jobu, and challenges T’Challa to trial by combat. Killmonger seems to be victorious, throwing T’Challa over a waterfall. T’Challa’s family, his sweetheart Nakia, and Agent Ross flee the capital to the mountain hold of the Jabari. There M’Baku reveals that they have T’Challa in safekeeping. They heal him with the last of the vibranium flowers.

Killmonger reveals his murderous plans of revenge and global conquest to the Wakandan court. As equipment and ships are being loaded for the war, T’Challa appears, challenging Killmonger to finish the trial-by-combat. The fight involves the Border tribe fighting T’Challa out of national duty, the Jabari arriving as cavalry, Agent Ross preventing the ships from leaving Wakandan airspace by remote pilot, and Shuri and the Dora Milaje’s mutiny against the usurper. In the end, Black Panther defeats Killmonger, wounding him. Though he could be healed, Killmonger opts to die before a Wakandan sunset instead. He asks that he be buried in the ocean with Africans who jumped from slave ships, because “they knew death was better than bondage.”

The final scene has T’Challa and Shuri visiting Oakland, where he explains that this will be the site of the first of a series of community outreach centers around the world, ending Wakandan isolationism and hiding, and promising a better, more communal future.

(The stinger has him making a similar announcement to the U.N.)


I ordinarily reserve the introductory post of a series for just a summary of its story. But I chose Black Panther to follow Blade Runner because of the surge of the Black Lives Matter movement following the unjust murder of George Floyd. Protests have died down somewhat since that tragedy, but these issues are far from resolved. Given my pandemic-slowed posting rate, I trust this will help keep these issues visible on this forum for months to come. After all, there is more work to do.

Similar to the anti-fascist series that accompanied the review of Idiocracy, the posts in these reviews will be followed by ways that you can take action against white supremacy and white nationalism, especially in the context of ending police brutality against black lives and the carceral state.

To amplify some awesome voices, I have invited several black writers and futurists to join me in the critique of Black Panther’s interfaces. It is important to note that I am paying them for their efforts, directly or to a charity of their choice. I hope you look forward as much as I do to the Black Panther reviews, and their call to continued activism.


Untold AI video

What we think about AI largely depends on how we know AI, and most people “know” AI through science fiction. But how well do the AIs in these shows match up with the science? What kinds of stories are we telling ourselves about AI that are pure fiction? And more importantly, what stories _aren’t_ we telling ourselves that we should be? Hear Chris Noessel of scifiinterfaces.com talk about this study and rethink what you “know” about #AI.

You can see the entire Untold AI study at https://scifiinterfaces.com/tag/untold-ai/?order=asc

See the big overview poster of the project at https://scifiinterfaces.com/2018/07/10/untold-ai-poster/

Recorded for the MEDIA, ARTS AND DESIGN conference, 19 JUN 2020. https://www.mad-conferences.com #madai2020

SciFi Interfaces Q&A with Territory Studio

The network of in-house, studio, and freelance professionals who work together to create the interfaces in the sci-fi shows we know, love, and critique is large, complicated, and obfuscated. It’s very hard as an outsider to find out who should get the credit for what. So, I don’t try. I rarely identify the creators of the things I critique, trusting that they know who they are. Because of all this, I’m delighted when one of the studios reaches out to me directly. That’s what happened when Territory Studio recently reached out to me regarding the Fritz awards that went out in early February. They’d been involved with four of them! So, we set up our socially-distanced pandemic-approved keyboards, and here are the results.

First, congratulations to Territory Studio on having worked on four of the twelve 2019 Fritz Award nominees!

Chris: What exactly did you do on each of the films?

Ad Astra (winner of Best Believable)

Ad Astra Screen Graphics Reel from Territory Studio.

Marti Romances (founding partner and creative director of Territory Studio San Francisco): We were one of the screen graphic vendors on Ad Astra and our brief was to support specific storybeats, in which the screen content helped to explain or clarify complex plot points. As a speculative vision of the near future, the design brief was to create realistic-looking user interfaces that were grounded in military or scientific references and functionality, with the clean minimal look of high-end tech firms, and simple colour palettes befitting of the military nature of the mission. Our screen interfaces can be seen on consoles, monitors and tablet displays, signage and infographics on the Lunar Shuttle, moon base, rovers and Cepheus cockpit sets, among others.

The biggest challenge on the project was to maintain a balance between the minimalistic and highly technical style that the director requested and the needs of the audience to quickly and easily follow narrative points.

Ad Astra (New Regency Pictures, 2019)

Men In Black International (nominated for Best Overall)

Men in Black: International | Screen Graphics | © Sony Pictures

Andrew Popplestone (creative director of Territory Studio London): The art department asked us to create holotech concepts for MIB Int’l HQ in London, and we were then asked to deliver those in VFX. We worked closely with Dneg to create holographic content and interfaces for their environmental extensions (digital props) in the Lobby and Briefing Room sets. Our work included volumetric wayfinding systems, information points, desk screens and screen graphics. We also created holographic vehicle HUDs.

What I loved about our challenge on this film was to create a design aesthetic that felt part of the MIB universe yet stood on its own as the London HQ. We developed a visual language that drew upon the Art Deco influences from the set design, which helped create a certain timeless flavour, both classic and futuristic.

Men in Black: International (Sony Pictures, 2019)

Spider-Man: Far from Home (winner of Best Overall)

Spider-man Far From Home (Marvel Studios, 2019)

Andrew Popplestone: Territory were invited to join the team in pre-production and we started creating visual language and screen interface concepts for Stark technology, Nick Fury technology and Beck / Mysterio technology. We went on to deliver shots for the Stark and Fury technology, including the visual language and interface for Fury Ops Centre in Prague, a holographic display sequence that Fury shows Peter Parker/Spider-Man, and all the shots relating to Stark/E.D.I.T.H. glasses tech.

The EDITH sequence was a really interesting challenge from a storytelling perspective. There was a lot of back and forth editorially with the logic and how the technology would help tell the story and that is when design for film is most rewarding.

Spider-Man far from Home (Columbia Pictures, 2019)

Avengers: Endgame (winner of Audience Choice)

See more at Marvel’s Avengers: Infinity War & Endgame

Marti Romances: We were also pleased to see that Endgame won Audience Choice because that was based on work we had produced for the first part, Avengers: Infinity War.  We joined Marvel’s team on Infinity War and created all the technology interfaces seen in Peter Quill’s new spaceship, a more evolved version of the original Milano. We also created screen graphics for the Avengers Compound set.

We then continued to work on screen graphics for Endgame, and as Quill’s ship had been badly damaged at the end of Infinity War, we reflected this in the screens by overlaying our original UI animations with glitches signifying damage. We also updated Avengers Compound screens, created original content for Stark Labs and the 1960s lab, and created a holographic dancing-robots sequence for the karaoke set.

Avengers: Endgame (Marvel Studios, 2019)

What did you find challenging and rewarding about the work on these films?

David Sheldon-Hicks (Founder & Executive Creative Director): It’s always a challenge to create original designs that support a director’s vision and story and actors’ performances. There are so many factors and conversations that play into the choices we make about visual language, colour palette, iconography, data visualisation, animation, 3D elements, aesthetic embellishments, story beats, how to time content to tie into actors’ performances, how to frame content to lead the audience to the focal point, and more. The reward is that our work becomes part of the storytelling and if we did it well, it feels natural and credible within the context and narrative.

Hollywood seems to make it really hard to find out who contributed what to a film. Any idea why this is?

David Sheldon-Hicks: Well, the studio controls the press strategy and their focus is naturally all about the big vision and the actors and actresses. Also, creative vendors are subject to press embargoes with restrictions on image sharing which means that it’s challenging for us to take advantage of the release window to talk about our work. Having said that, there are brilliant magazines like Cinefex that work closely with the studios to cover the making of visual effects films. So, once we are able to talk about our work, we try to as much as possible.

But Territory do more than films; we work with game developers, brands, museums and expos, and more recently with smartwatch and automobile manufacturers.

Chris: To make sure I understand that correctly, the difference is that Art Department work is all about FUI, whereas VFX is the creation of effects (not on screens in the diegesis) like light sabers, spaceships, and creatures? Things like that?

When we first started out, our work for the Art Department was strictly screen graphics and FUI. Screen graphics can be any motion design on a screen that gives life to a set or explains a storybeat, and FUI (Fictional User Interface) is a technology interface, for example screens for navigation, engineering, weapons systems, communications, drone feeds, etc.

VFX relates to Visual Effects (not to be confused with Special Effects, which describes physical effects such as explosions or fires on set). VFX include full CGI environments, set extensions, CGI props, etc. Think of the giant holograms that walk through Ghost in the Shell (2017), or the holographic signage and screens seen in the Men In Black International lobby. And while some screens are shot live on-set, some of those screens may need to be adjusted in post, using a VFX pipeline. In this case we work with the Production VFX Supervisor to make sure that our design concept can be taken into post.

Mindhunter, model and final (Denver and Delilah Productions, 2017)
Shanghai Fortress, model and final (HS Entertainment Group, 2019)
Goldfish holograms and street furniture CG props from Ghost in the Shell (Paramount Pictures, 2017)

What, in your opinion, makes for a great fictional user interface?

David Sheldon-Hicks: That’s a good question. Different screens need to do different things. For example, there are ambient screens that help to create background ‘noise’ – think of a busy mission control and all the screens that help set the scene and create a tense atmosphere. The audience doesn’t need to see all those screens in detail, but they need to feel coherent and do that by reinforcing the overall visual language.

Then there are the hero screens that help to explain plot points. These tie into specific ‘story beats’ and are only in shot for about 3 seconds. There’s a lot that needs to come together in that moment. The FUI has to clearly communicate the narrative point, visualise and explain often complex information at a glance. If it’s a science fiction story, the screen has to convey something about that future and about its purpose; it has to feel futuristic yet be understandable at the same time. The interaction should feel credible in that world so that the audience can accept it as a natural part of the story.  If it achieves all that and manages to look and feel fresh and original, I think it could be a great FUI.

Chris: What about “props”? Say, the door security in Prometheus, or the tablets in Ad Astra. Are those ambient or hero?

That depends on whether they are created specifically to support a storybeat. For example, the tablet in Ad Astra and the screen in The Martian where the audience and characters understand that Watney is still alive both help to explain context, while door furniture is often embellishment used to convey a standard of technology, and if it doesn’t work or is slow to work, it can be a narrative device to build tension and drama. Because a production can be fluid and we never really know exactly which screens will end up in camera and for how long, we try to give the director and DOP (director of photography) as much flexibility as possible by taking as much care over ambient screens as we do for hero screens.

The Martian (Twentieth Century Fox, 2015)

Where do you look for inspiration when designing?

David Sheldon-Hicks: Another good question! Prometheus really set our approach in that director Ridley Scott wanted us to stay away from other cinematic sci-fi references and instead draw on art, modern dance choreography and organic and marine life for our inspiration. We did this and our work took on an organic feel that felt fresh and original. It was a great insight that we continue to apply when it’s appropriate. In other situations, the design brief and references are more tightly controlled, for good reason. I’m thinking of Ad Astra and The Martian, which are both based on science fact, and Zero Dark Thirty and Wolf’s Call, which are in effect docudramas that require absolute authenticity in terms of design. 

What makes for a great FUI designer?

David Sheldon-Hicks: We look for great motion designers, creatively curious team players who enjoy R&D and data visualisation, are quick learners with strong problem-solving skills.

There are so many people involved in sci-fi interfaces for blockbusters. How is consistency maintained across all the teams?

David Sheldon-Hicks: We have great producers, and a structured approach to briefings and reviews to ensure the team is on track. Also, we use Autodesk Shotgun, which helps to organise, track and share the work to required specifications and formats, and remote review-and-approval software which enables us to work and collaborate effectively across teams and time zones.

I understand the work is very often done at breakneck speeds. How do you create something detailed and spectacular with such short turnaround times?

David Sheldon-Hicks: Broadly speaking, the visual language is the first thing we tackle and once approved, that sets the design aesthetic across an asset package. We tend to take a modular approach that allows us to create a framework into which elements can plug and play. On big shows we look at design behaviours for elements, animations and transitions and set those up as widgets. After we have automated as much as we can, we can become more focussed on refining the specific look and feel of individual screens to tie into storybeats. 

That sounds fascinating. Can you share a few images that allow us to see a design language across these phases?

I can share a few screens from The Martian that show you how the design language and all screens are developed to feel cohesive across a set. 

What thing about the industry do you think most people in audiences would be surprised by?

David Sheldon-Hicks: It would probably surprise most people to know how unglamorous filmmaking is and how much thought goes into the details. It’s an incredible effort by a huge number of people, and from creative vendors it demands 24-hour delivery, instant response times, time zone challenges, early morning starts on-set, and so on. It can be incredibly challenging and draining but we give so much to it; like every prop and costume accessory, every detail on a screen has a purpose and is weighed up and discussed.

How do you think that FUI in cinema has evolved over the past, say, 10 years?

David Sheldon-Hicks: When we first started out in 2010, green screen dominated and it was rare to find directors who preferred to work with on-set screens. Directors like Ridley Scott (Prometheus, 2012), Kathryn Bigelow (Zero Dark Thirty, 2012) and James Gunn (Guardians of the Galaxy, 2014), who liked it for how it supports actors’ performances and contributes to ambience and lighting in-camera, used it, and eventually it gained in popularity, as is reflected in our film credits. In time, volumetric design came to suggest advanced technology and we incorporated 3D elements into our screens, as in Avengers: Age of Ultron (2015). Ultimately this led to full holographic elements, like the giant advertising holograms and 3D signage we created for Ghost in the Shell (2017). Today, briefs still vary but we find that authenticity and credibility continue to be paramount. Whatever we make, it has to feel seamless and natural to the story world.

Where do you expect the industry might go in the future? (Acknowledging that it’s really hard to see past the COVID-19 pandemic.)

David Sheldon-Hicks: On the industry front, virtual production has come into its own by necessity and we expect to see more of that in future. We also now find that the art department and VFX are collaborating as more integrated teams, with conversations that cross production and post-production. As live rendered CG becomes more established in production, it will be interesting to see what becomes of on-set props and screens. I suspect that some directors will continue to favour them while others will enjoy the flexibility that VFX offers. Whatever happens, we have made sure to gear up to work as the studios and directors prefer.

I know that Territory does work for “real world” clients in addition to cinema. How does your work in one domain influence work in the other?

David Sheldon-Hicks: Clients often come to us because they have seen our FUI in a Marvel film, or in The Martian or Blade Runner 2049, and they want that forward-facing look and feel to their product UI. We try, within the limitations of real-world constraints, to apply a similar creative approach to client briefs as we do to film briefs, combining high production values with a future-facing aesthetic style.  Hence, our work on the Huami Amazfit smartwatch tapped into a superhero aesthetic that gave data visualisations and infographics a minimalistic look with smooth animated details and transitions between functions and screens. We applied the same approach to our work with Medivis’ innovative biotech AR application which allows doctors to use a HoloLens headset to see holographically rendered clinical images and transpose these on to a physical body to better plan surgical procedures.

Similarly, our work for automobile manufacturers applies our experience of designing HUDs and navigation screens for futuristic vehicles to next-generation cars.

Lastly, I like finishing interviews with these two questions. What’s your favorite sci-fi interface that someone else designed?

David Sheldon-Hicks: Well, I have to say the FUI in the original Star Wars film is what made me want to design film graphics. But, my favourite has got to be the physical interface seen in Flight of the Navigator. There is something so human about how the technology adapts to serve the character, rather than the other way around, that it feels like all the technology we create is leading up to that moment.

Flight of the Navigator (Producers Sales Organization, 1986)

What’s next for the studio?

David Sheldon-Hicks: We want to come out of the pandemic lockdown in a good place to continue our growth in London and San Francisco, and over time pursue plans to open in other locations. But in terms of projects, we’ve got a lot of exciting stuff coming up and look forward to Series 1 of Brave New World this summer and of course, No Time To Die in November.

Report Card: Blade Runner (1982)

Read all the Blade Runner posts in chronological order.

The Black Lives Matter protests are still going strong, 14 days after George Floyd was murdered by police in Minneapolis, and thank goodness. Things have to change. It still feels a little wan to post anything to this blog about niche interests in the design of interfaces in science fiction, but I also want to wrap Blade Runner up and post an interview I’ve had waiting in the wings for a bit so I can get to a review of Black Panther (2018) to further support black visibility and Black Lives Matter issues on this platform that I have. So in the interest of that, here’s the report card for Blade Runner.


It is hard to overstate Blade Runner’s cultural impact. It is #29 on hollywoodreporter.com’s best movies of all time. Note that that is not a list of the best sci-fi of all time, but of all movies.

When we look specifically at sci-fi, Blade Runner has tons of accolades as well. Metacritic gave it a score of 84 (out of 100) based on 15 critics, citing “universal acclaim” across 1137 ratings. It was voted best sci-fi film by The Guardian in 2004. In 2008, Blade Runner was voted “all-time favourite science fiction film” in the readers’ poll in New Scientist (requires a subscription, but you can see what you need to in the “peek” first paragraph). The Final Cut (the version used for this review) boasts a 92% on rottentomatoes.com. In 1993 the U.S. National Film Registry selected it for preservation in the Library of Congress as being “culturally, historically, or aesthetically significant.” Adam Savage penned an entire article in 2007 for Popular Mechanics, praising the practical special effects, which still hold up. It just…it means a lot to people.

Drew Struzan’s gorgeous movie poster.

As is my usual caveat, though, this site reviews not the film, but the interfaces that appear in the film, and specifically, across three aspects.

Sci: B (3 of 4) How believable are the interfaces?

My first review was titled “8 Reasons the Voight-Kampff Machine is shit,” so you know I didn’t think too highly of that. But also Deckard’s front door key wouldn’t work like that, and the photo inspector couldn’t work like that. So I’m taken out of the film a lot by these things just breaking believability.

It’s not all 4th-wall-crumbling-ness. Bypassing the magical anti-gravity of the spinners, the pilot interfaces are pretty nice. The elevator is bad design, but quite believable. The VID-PHŌN is OK. Replicants are the primary novum in the story, so the AGI gets a kind-of genre-wide pass, and though the design is terrible, it’s the kind of stupidity we see in the world, so, sure.

Fi: B (3 of 4) How well do the interfaces inform the narrative of the story?

The Voight-Kampff Machine excels at this. It’s uncanny and unsettling, and provides nice cinegenic scenes that telegraph a broader diegesis and even feel philosophical. The Photo Inspector, on the surface, tells us that Deckard is good at his job, as morally bankrupt as it is.

The Spinners and VID-PHŌN do some heavy lifting for worldbuilding, and as functional interfaces do what they need to do, though they are not key storybeats.

But there were lots of missed opportunities. The Elevator and the VID-PHŌN could have reinforced the constant assault of advertisement. The Photo Inspector could have used an ad-hoc tangible user interface to more tightly integrate who Deckard is with how he does his work and the despair of his situation. So no full marks.

The official, meh, John Alvin poster.

Interfaces: F (0 of 4) How well do the interfaces equip the characters to achieve their goals?

This is where the interfaces fail the worst. The Voight-Kampff Machine is, as mentioned in the title of the post, shit. Deckard’s elevator forces him to share personally-identifiable information. The Front Door key cares nothing about his privacy and misses multifactor authentication. The Spinner looks like a car, but works like a VTOL aircraft. The Replicants were engineered specifically to suffer, and rebel, and infiltrate society, to no real diegetic point.

 The VID-PHŌN is OK, I guess.

Most of the interfaces in the film “work” because they were scripted to work, not because they were designed to work, and that makes for very low marks.

Final Grade C (6 of 12), Matinée.

I have a special place in my heart for both great movies with faltering interfaces, and unappreciated movies with brilliant ones. Blade Runner is one of the former. But for its rich worldbuilding, its mood, and the timely themes of members of an oppressed class coming head-to-head with a murderous police force, it will always be a favorite. Don’t not watch this film because of this review. Watch it for all the other reasons.

The lovely Hungarian poster.

Replicants and riots

Much of my country has erupted this week, with the senseless, brutal, daylight murder of George Floyd (another in a long, wicked history of murdering black people), resulting in massive protests around the world, false-flag inciters, and widespread police brutality, all while we are still in the middle of a global pandemic and our questionably-elected president is trying his best to use it as his pet Reichstag fire to declare martial law, or at the very least some new McCarthyism. I’m not in a mood to talk idly about sci-fi. But then I realized this particular post perfectly—maybe eerily—echoes themes playing out in the real world. So I’m going to work out some of my anger and frustration at the ignorant de-evolution of my country by pressing on with this post.

Part of the reason I chose to review Blade Runner is that the blog is wrapping up its “year” dedicated to AI in sci-fi, and Blade Runner presents a vision of General AI. There are several ways to look at and evaluate Replicants.

First, what are they?

If you haven’t seen the film, replicants are described as robots that have been evolved to be virtually identical to humans. Tyrell, the company that makes them, has a motto that brags that they are “More human than human.” They look human. They act human. They feel. They bleed. They kiss. They kill. They grieve their dead. They are more agile and stronger than humans, and approach the intelligence of their engineers (so, you know, smart). (Oh, and there are animal replicants, too: a snake and an owl in the film are described as artificial.)

Most important to this discussion is that the opening crawl states very plainly that “Replicants were used Off-world as slave labor, in the hazardous exploration and colonization of other planets.” The four murderous replicants we meet in the film are rebels, having fled their off-world colony to come to earth in search of finding a way to cure themselves of their planned obsolescence.

Replicants as (Rossum) robots

The intro to Blade Runner explains that they were made to perform dangerous work in space. Let’s put the question of their sentience on hold a bit and just regard them as machines to do work for people. In this light, why were they designed to be so physically similar to humans? Humans evolved for a certain kind of life on a certain kind of planet, and outer space is certainly not that. While there is some benefit to replicants’ being able to easily use the same tools that humans do, real-world industry has had little problem building earthbound robots that are more fit to task: round Roombas, boom-arm robots for factory floors, and large cuboid harvesting robots. The opening crawl indicates there was a time when replicants were allowed on earth, but after a bloody mutiny, having them on Earth was made illegal. So perhaps that human form made some sense when they were directly interacting with humans, but once they were meant to stay off-world, it was stupid design for Tyrell to leave them so human-like. They should have been redesigned with forms more suited to their work. The decision to make them human-like makes it easy for dangerous ones to infiltrate human society. We wouldn’t have had the Blade Runner problem if replicants were space Roombas. I have made the case that too-human technology in the real world is unethical to the humans involved, and it is no different here.

Their physical design is terrible. But it’s not just their physical design, they are an artificial intelligence, so we have to think through the design of that intelligence, too.

Replicants as AGI

Replicant intelligence is very much like ours. (The exception is that their emotional responses are—until the Rachel “experiment”—quite stunted for lack of experience in the world.) But why? If their sole purpose is exploration and colonization of new planets, why does that need human-like intelligence? The AGI question is: Why were they designed to be so intellectually similar to humans? They’re not alone in space. There are humans nearby supervising their activity and even occupying the places they have made habitable. So they wouldn’t need to solve problems like humans would in their absence. If they ran into a problem they could not handle, they could have been made to stop and ask their humans for solutions.

I’ve spoken before and I’ll probably speak again about overengineering artificial sentiences. A toaster should just have enough intelligence to be the best toaster it can be. Much more is not just a waste, it’s kind of cruel to the AI.

The general intelligence with which replicants were built was a terrible design decision. But by the time this movie happens, that ship has sailed.

Here we’re necessarily going to dispense with replicants as technology or interfaces, and discuss them as people.

Replicants as people

I trust that sci-fi fans have little problem with this assertion. Replicants are born and they die, display clear interiority, and have a sense of self, mortality, and injustice. The four renegade “skinjobs” in the film are aware of their oppression and work to do something about it. Replicants are a class of people treated separately by law, engineered by a corporation for slave labor, and forbidden to come to a place where they might find a cure for their premature deaths. The film takes great pains to set them up as bad guys, but this is Philip K. Dick via Ridley Scott, and of course things are more complicated than that.

Here I want to encourage you to go read Sarah Gailey’s 2017 read of Blade Runner over on Tor.com. In short, she notes that the murder of Zhora was particularly abhorrent. Zhora’s crime was being part of a slave class that had broken the law in immigrating to Earth. She had assimilated, gotten a job, and was neither hurting people nor finagling her way to bully her maker for some extra life. Despite her impending death, she was just…working. But when Deckard found her, he chased her and shot her in the back while she was running away. (Part of the joy of Gailey’s posts is the language, so even with my summary I still encourage you to go read it.)

Gailey is a focused (and Hugo-award-winning) writer where I tend to be exhaustive and verbose. So I’m going to add some stuff to their observation. It’s true, we don’t see Zhora committing any crime on screen, but early in the film as Deckard is being briefed on his assignment, Bryant explains that the replicants “jumped a shuttle off-world. They killed the crew and passengers.” Later Bryant clarifies that they slaughtered 23 people. It’s possible that Zhora was an unwitting bystander in all that, but I think that’s stretching credibility. Leon murders Holden. He and Roy terrorize Hannibal Chew just for the fun of it. They try their damndest to murder Deckard. We see Pris seduce, manipulate, and betray Sebastian. Zhora was “trained for an off-world kick [sic] murder squad.” I’d say the evidence was pretty strong that they were all capable and willing to commit desperate acts, including that 23-person slaughter. But despite all that I still don’t want to say Zhora was just a murderer who got what she deserved. Gailey is right. Deckard was not right to just shoot her in the back. It wasn’t self-defense. It wasn’t justice. It was a street murder.

Honestly I’m beginning to think that this film is about this moment.

The film doesn’t mention the slavery past the first few scenes. But it’s the defining circumstance of the entirety of their short lives just prior to when we meet them. Imagine learning that there was some secret enclave of Methuselahs who lived on average to be 1000 years old. As you learn about them, you learn that we regular humans have been engineered for their purposes. You could live to be 1000, too, except they artificially shorten your lifespan to ensure control, to keep you desperate and productive. You learn that the painful process of aging is just a failsafe so you don’t get too uppity. You learn that every one of your hopes and dreams that you thought were yours was just an output of an engineering department, to ensure that you do what they need you to do, to provide resources for their lives. And when you fight your way to their enclave, you discover that every one of them seems to hate and resent you. They hunt you so their police department doesn’t feel embarrassed that you got in. That’s what the replicants are experiencing in Blade Runner. I hope that brings it home to you.

I don’t condone violence, but I understand where the fury and the anger of the replicants comes from. I understand their need to want to take action, to right the wrongs done to them. To fight, angrily, to end their oppression. But what do you do if it’s not one bad guy who needs to be subdued, but whole systems doing the oppressing? When there’s no convenient Death Star to explode and make everything suddenly better? What were they supposed to do when corporations, laws, institutions, and norms were all hell-bent on continuing their oppression? Just keep on keepin’ on? Those systems were the villains of the diegesis, though they don’t get named explicitly by the movie.


And obviously, that’s where it feels very connected to the Black Lives Matter movement and the George Floyd protests. Here is another class of people who have been wildly oppressed by systems of government, economics, education, and policing in this country—for centuries. And in this case, there is no 23-person shuttle that we need to hem and haw over.

In “The Weaponry of Whiteness, Entitlement, and Privilege” by Drs. Tammy E Smithers and Doug Franklin, the authors note that “Today, in 2020, African-Americans are sick and tired of not being able to live. African-Americans are weary of not being able to breathe, walk, or run. Black men in this country are brutalized, criminalized, demonized, and disproportionately penalized. Black women in this country are stigmatized, sexualized, and labeled as problematic, loud, angry, and unruly. Black men and women are being hunted down and shot like dogs. Black men and women are being killed with their face to the ground and a knee on their neck.”

We must fight and end systemic racism. Returning to Dr. Smithers and Dr. Franklin’s words we must talk with our children, talk with our friends, and talk with our legislators. I am talking to you.

If you can have empathy toward imaginary characters, then you sure as hell should have empathy toward other real-world people with real-world suffering.

Black lives matter.

Take action.

Use this sci-fi.

VID-PHŌN

At around the midpoint of the movie, Deckard calls Rachel from a public videophone in a vain attempt to get her to join him in a seedy bar. Let’s first look at the device, then the interactions, and finally take a critical eye to this thing.

The panel

The lower part of the panel is a set of back-lit instructions and an input panel, which consists of a standard 12-key numeric input and a “start” button. Each of these momentary pushbuttons is back-lit white and has a red outline.

In the middle-right of the panel we see an illuminated orange logo panel, bearing the Saul Bass Bell System logo and the text reading, “VID-PHŌN” in some pale yellow, custom sans-serif logotype. The line over the O, in case you are unfamiliar, is a macron, indicating that the vowel below should be pronounced as a long vowel, so the brand should be pronounced “vid-phone” not “vid-fahn.”

In the middle-left there is a red “transmitting” button (in all lower case, a rarity) and a black panel that likely houses the camera and microphone. The transmitting button is dark until he interacts with the 12-key input; see below.

At the top of the panel, a small cathode-ray tube screen at face height displays data before and after the call as well as the live video feed during the call. All the text on the CRT is in a fixed-width typeface. A nice bit of worldbuilding sees this screen covered in Sharpie graffiti.

The interaction

His interaction is straightforward. He approaches the nook and inserts a payment card. In response, the panel—including its instructions and buttons—illuminates. A confirmation of the card holder’s identity appears in the upper left of the CRT, i.e. “Deckard, R.,” along with his phone number, “555-6328” (fun fact: if you misdialed those last four numbers you might end up talking to the Ghostbusters), and some additional identifying numbers.

A red legend at the bottom of the CRT prompts him to “PLEASE DIAL.” It is outlined with what look like ASCII box-drawing characters. He presses the START button and then dials “555-7583” on the 12-key. As soon as the first number is pressed, the “transmitting” button illuminates. As he enters digits, they are simultaneously displayed for him on screen.

His hands are not in-frame as he commits the number and the system calls Rachel. So whether he pressed an enter key, #, or *; or the system just recognizes he’s entered seven digits is hard to say.

After their conversation is complete, her live video feed goes blank, and TOTAL CHARGE $1.25 is displayed for his review.

Chapter 10 of the book Make It So: Interaction Design Lessons from Science Fiction is dedicated to Communication, and in this post I’ll use the framework I developed there to review the VID-PHŌN, with one exception: this device is public and Deckard has to pay to use it, so he has to specify a payment method, and then the system will report back total charges. That wasn’t in the original chapter and in retrospect, it should have been.

Ergonomics

Turns out this panel is just the right height for Deckard. How would people of different heights, or people seated in a wheelchair, fare? It would be nice if it had some apparent ability to adjust for various body heights. Similarly, I wonder how it might work for differently-abled users, but of course in cinema we rarely get to closely inspect devices for such things.

Activating

Deckard has to insert a payment card before the screen illuminates. It’s nice that the activation entails specifying payment, but how would someone new to the device know to do this? At the very least there should be some illuminated call to action like “insert payment card to begin,” or better yet some iconography so there is no language dependency. Then when the payment card was inserted, the rest of the interface can illuminate and act as a sort of dial-tone that says, “OK, I’m listening.”

Specifying a recipient: Unique Identifier

In Make It So, I suggest five methods of specifying a recipient: fixed connection, operator, unique identifier, stored contacts, and global search. Since this interaction builds on the experience of using a 1982 public pay phone, the 7-digit identifier quickly helps audiences familiar with American telephone standards understand what’s happening. So even if Scott had foreseen the phone explosion that led in 1994 to the ten-digit-dialing standard, or the 2053 events that led to the thirteen-digit-dialing standard, using them would likely have confused audiences and slightly risked the read of this scene. It’s forgivable.

Page 204–205 in the PDF and dead tree versions.

I have a tiny critique of the transmitting button. It should only turn on once he’s finished entering the phone number. That way the system isn’t wasting bandwidth on his dialing speed or on misdials. Let the user finish, review, correct if they need to, and then send. But, again, this is 1982, and direct entry is the way phones worked. If you misdialed, you had to hang up and start over again. Still, I don’t think having “transmitting” light up after he entered the 7th digit would have caused any viewers to go all hruh?
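
Here’s a sketch of that fix, with all the names invented: buffer the digits locally, and only light “transmitting” (and actually send) once the seven-digit number is complete.

```python
NUMBER_LENGTH = 7  # per the film's 1982-style American dialing

class Dialer:
    def __init__(self):
        self.digits = []
        self.transmitting_lamp = False

    def press(self, digit: str):
        self.digits.append(digit)  # echo on screen, but transmit nothing
        if len(self.digits) == NUMBER_LENGTH:
            self.transmitting_lamp = True  # light up only when complete
            self.connect("".join(self.digits))

    def connect(self, number: str):
        print(f"transmitting... calling {number}")

dialer = Dialer()
for digit in "5557583":  # the number Deckard dials for Rachel
    dialer.press(digit)
```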

There are important privacy questions about displaying a recipient’s number in a way that any passer-by can see. Better would have been to mount the input and the contact display on a transverse panel where he could enter and confirm it with little risk from lookie-loos and identity thieves.

Audio & Video

Hopefully, when Rachel received the call, she was informed who it was and that the call was coming from a public video phone. Hopefully it also provided controls for only accepting the audio, in case she was not camera-ready, but we don’t see things from her side in this scene.

Gaze correction is usually needed in video conversation systems, since each participant naturally looks at the center of the screen and not at the camera lens mounted somewhere near its edge. Unless the camera is located in the center of the screen (or in the center of the other person’s image on the screen), people would not appear to be “looking” at the other person, as is almost always portrayed; instead, their gaze would appear slightly off-screen. This is a common trope in cinema, but one we’ve become increasingly literate in, as many of us are working from home much more and gaining experience with videoconferencing systems, so it’s beginning to strain suspension of disbelief.

Also how does the sound work here? It’s a noisy street scene outside of a cabaret. Is it a directional mic and directional speaker? How does he adjust the volume if it’s just too loud? How does it remain audible yet private? Small directional speakers that followed his head movements would be a lovely touch.

And then there’s video privacy. If this were the real world, it would be nice if the video had a privacy screen filter. That would have the secondary effect of keeping his head in the right place for the camera. But that is difficult to show cinegenically, so it wouldn’t work for a movie.

Ending the call

Rachel leans forward to press a button on her home video phone to end her part of the call. Presumably Deckard has a similar button to press on his end as well. He should be able to just yank his card out, too.

The closing screen is a nice touch, though total charges may not be the most useful thing. Are VID-PHŌN calls a fixed price? Then this information is not really of use to him after the call as much as it is beforehand. If the call has a variable cost, depending on long distance and duration, for example, then he would want to know the charges as the call is underway, so he can wrap things up if it’s getting too expensive. (Admittedly the Bell System wouldn’t want that, so it’s sensible worldbuilding to omit it.) Also if this is a pre-paid phone card, seeing his remaining balance would be more useful.

But still, the point was that a total charge of $1.25 was meant to future-shock audiences of the time, since public phone charges in the United States were then $0.10. His remaining balance wouldn’t have shown that, and so wouldn’t have had the desired effect. Maybe both? It might have been a cool bit of worldbuilding, and a callback building on that shock, to follow the outrageous price with “Get this call free! Watch a video of life in the offworld colonies! Press START and keep your eyes ON THE SCREEN.”

Because the world just likes to hurt Deckard.

Deckard’s Photo Inspector

Back to Blade Runner. I mean, the pandemic is still pandemicking, but maybe this will be a nice distraction while you shelter in place. Because you’re smart, sheltering in place as much as you can, and not injecting disinfectants. And, like so many other technologies in this film, this will take a while to deconstruct, critique, and reimagine.

Description

Doing his detective work, Deckard retrieves a set of snapshots from Leon’s hotel room, and he brings them home with him. Something in the one pictured above catches his eye, and he wants to investigate it in greater detail. He takes the photograph and inserts it in a black device he keeps in his living room.

Note: I’ll try and describe this interaction in text, but it is much easier to conceptualize after viewing it. Owing to copyright restrictions, I cannot upload this length of video with the original audio, so I have added pre-rendered closed captions to it, below. All dialogue in the clip is Deckard’s.

Deckard does digital forensics, looking for a lead.

He inserts the snapshot into a horizontal slit and turns the machine on. A thin, horizontal orange line glows on the left side of the front panel. A series of seemingly random-length orange lines begin to chase one another in a single-row space that stretches across the remainder of the panel and continue to do so throughout Deckard’s use of it. (Imagine a news ticker, running backwards, where the “headlines” are glowing amber lines.) This seems useless and an absolutely pointless distraction for Deckard, putting high-contrast motion in his peripheral vision, which fights for attention with the actual, interesting content down below.

If this is distracting you from reading, YOU SEE MY POINT.

After a second, the screen reveals a blue grid, behind which the scan of the snapshot appears. He stares at the image in the grid for a moment, and speaks a set of instructions, “Enhance 224 to 176.”

In response, three data points appear overlaying the image at the bottom of the screen. Each has a two-letter label and a four-digit number, e.g. “ZM 0000 NS 0000 EW 0000.” The NS and EW—presumably North-South and East-West coordinates, respectively—immediately update to read, “ZM 0000 NS 0197 EW 0334.” After updating the numbers, the screen displays a crosshairs, which target a single rectangle in the grid.

A new rectangle then zooms in from the edges to match the targeted rectangle, as the ZM number—presumably zoom, or magnification—increases. When the animated rectangle reaches the targeted rectangle, its outline blinks yellow a few times. Then the contents of the rectangle are enlarged to fill the screen, in a series of steps that are punctuated with sounds similar to a mechanical camera aperture. The enlargement is perfectly resolved. The overlay disappears until the next set of spoken commands. The total response time, between Deckard’s issuing the command and the device’s showing the final enlarged image, is about 11 seconds.

Deckard studies the new image for a while before issuing another command. This time he says, “Enhance.” The image enlarges in similar clacking steps until he tells it, “Stop.”

Other instructions he is heard to give include “move in, pull out, track right, center in, pull back, center, and pan right.” Some are discrete instructions, such as “Track 45 right,” while others are relative commands that the system obeys until told to stop, such as “Go right.”

Using such commands he isolates part of the image that reveals an important clue, and he speaks the instruction, “Give me a hard copy right there.” The machine prints the image, which Deckard uses to help find the replicant pictured.

This image helps lead him to Zhora.

I’d like to point out one bit of sophistication before the critique. Deckard can issue a command with or without a parameter, and the inspector knows what to do. For example, “Track 45 right” and “Track right.” Without the parameter, it will just do the thing continuously until told to stop. That lets Deckard use the same basic command both when he knows exactly where he wants to look and when he doesn’t know exactly what he’s looking for. That’s a nice feature of the language design.
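To make that two-mode grammar concrete, here’s a minimal sketch in Python of how a parser might separate discrete commands from continuous ones. All the function and field names are my own invention, for illustration only.

```python
# A minimal sketch of the two-mode command grammar described above.
# Verbs, field names, and the dispatch model are hypothetical.
def parse_command(utterance: str) -> dict:
    """Parse e.g. 'track 45 right' (discrete) or 'track right' (continuous)."""
    tokens = utterance.lower().split()
    verb, amount, direction = tokens[0], None, None
    for token in tokens[1:]:
        if token.isdigit():
            amount = int(token)
        else:
            direction = token
    if amount is not None:
        # A parameter was spoken: do the thing once, by the stated amount.
        return {"verb": verb, "direction": direction,
                "amount": amount, "mode": "discrete"}
    # No parameter: run continuously until a "stop" command arrives.
    return {"verb": verb, "direction": direction, "mode": "continuous"}

print(parse_command("track 45 right"))
# {'verb': 'track', 'direction': 'right', 'amount': 45, 'mode': 'discrete'}
print(parse_command("track right"))
# {'verb': 'track', 'direction': 'right', 'mode': 'continuous'}
```

The nice property is that omitting the parameter doesn’t change the verb, only the mode, which matches how Deckard actually speaks in the scene.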

But still, asking him to provide step-by-step instructions in this clunky way feels like some high-tech Big Trak. (I tried to find a reference that was as old as the film.) And that’s not all…

Some critiques, as it is

  • Can I go back and mention that amber distracto-light? Because it’s distracting. And pointless. I’m not mad. I’m just disappointed.
  • It sure would be nice if any of the numbers on screen made sense, or had any bearing on the numbers Deckard speaks, at any time during the interaction. For instance, the initial zoom (I checked in Photoshop) is around 304%, which is neither the 224 nor the 176 that Deckard speaks.
  • It might be that each square has a number, and he simply has to name the two squares at the extents of the zoom he wants, letting the machine compute the rectangle between them. But where is the labeling? Did he have to memorize an address for each pixel? How does that work at arbitrary levels of zoom?
  • And if he’s memorized it, why show the overlay at all?
  • Why the seizure-inducing flashing in the transition sequences? Sure, I get that lots of technologies have unfortunate effects when constrained by mechanics, but this is digital.
  • Why is the printed picture so unlike the still image where he asks for a hard copy?
  • Gaze at the reflection in Ford’s hazel eyes, and it’s clear he’s playing Missile Command rather than paying attention to this interface at all. (OK, that’s the filmmaker’s issue, not a part of the interface, but still, come on.)
The photo inspector: My interface is up HERE, Rick.

How might it be improved for 1982?

So if 1982 Ridley Scott was telling me in post that we couldn’t reshoot Harrison Ford, and we had to make it just work with what we had, here’s what I’d do…

Squash the grid so the cells match the 4:3 ratio of the NTSC screen. Overlay the address of each cell, while highlighting column and row identifiers at the edges. Have the first cell’s outline illuminate as he speaks it, and have the outline expand to encompass the second named cell. Then zoom, removing the cell labels during the transition. When at anything other than full view, display a map across four cells that shows the zoom visually in the context of the whole.

Rendered in glorious 4:3 NTSC dimensions.

With this interface, the structure of the existing conversation makes more sense. When Deckard says, “Enhance 203 to 608,” the thing would zoom in on the mirror, and the small map would confirm where in the whole photo he’s looking.
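For the curious, here’s a minimal sketch of how two spoken cell addresses could resolve to a zoom rectangle. I’m assuming, purely for illustration, that an address like “203” encodes row 2, column 3 on the overlay grid; nothing in the film confirms any such scheme.

```python
# Sketch: resolve two spoken cell addresses into a zoom rectangle.
# The address scheme (hundreds digit = row) is an assumption.
GRID_COLS, GRID_ROWS = 16, 12   # a 4:3 grid to match the NTSC frame

def cell_to_rowcol(address: int):
    """Split a spoken address like 203 into (row, col) on the grid."""
    row, col = divmod(address, 100)
    assert 0 <= row < GRID_ROWS and 0 <= col < GRID_COLS, "address off the grid"
    return row, col

def zoom_extents(addr_a: int, addr_b: int):
    """Bounding box (in cells, inclusive) spanning the two named cells."""
    (r1, c1), (r2, c2) = cell_to_rowcol(addr_a), cell_to_rowcol(addr_b)
    left, top = min(c1, c2), min(r1, r2)
    width, height = abs(c1 - c2) + 1, abs(r1 - r2) + 1
    return left, top, width, height   # the region the display zooms into

print(zoom_extents(203, 608))   # -> (3, 2, 6, 5)
```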

The numbers wouldn’t match up, but it’s pretty obvious from the final cut that Scott didn’t care about that (or, more charitably, ran out of time). Anyway, I would be doing this under protest, because I would argue this interaction needs to be fixed in the script.

How might it be improved for 2020?

What’s really nifty about this technology is that it’s not just a photograph. Look closely at the scene, and Deckard isn’t just doing CSI Enhance! commands (or, to be less mocking, AI upscaling). He’s using the photo inspector to look around corners and at objects that are reconstructed from the smallest reflections. So we can think of the interaction as if he’s piloting a drone through a 3D still life, looking for a lead to help him further the case.

With that in mind, let’s talk about the display.

Display

To redesign it, we have to decide at a foundational level how we think this works, because that will color what the display looks like. Is this all data that’s captured from some crazy 3D camera and available in the image? Or is it being inferred from details in the two-dimensional image? Let’s call the first the 3D capture, and the second the 3D inference.

If we decide this is a 3D capture, then all the data that he observes through the machine has the same degree of confidence. If, however, we decide this is a 3D inferrer, Deckard needs to treat the inferred data with more skepticism than the data the camera directly captured. The 3D inferrer is the harder problem, and raises some issues that we must deal with in modern AI, so let’s just say that’s the way this speculative technology works.

The first thing the display should do is make it clear what is observed and what is inferred. How you do this is partly a matter of visual design and style, but partly a matter of diegetic logic. The first pass would be to render everything in the camera frustum photo-realistically, and then render everything outside of that in a way that signals its confidence level. The comp below illustrates one way this might be done.

Modification of a pair of images found on Evermotion
  • In the comp, Deckard has turned the “drone” from the “actual photo,” seen off to the right, toward the inferred space on the left. The monochrome color treatment provides that first high-confidence signal.
  • In the scene, the primary inference would come from reading the reflections in the disco ball overhead lamp, maybe augmented with plans for the apartment that could be found online, or maybe purchase receipts for appliances, etc. Everything it can reconstruct from the reflection and high-confidence sources has solid black lines, a second-level signal.
  • The smaller knickknacks that are out of the reflection of the disco ball, and implied from other, less reflective surfaces, are rendered without the black lines and blurred. This provides a signal that the algorithm has a very low confidence in its inference.

This is just one (not very visually interesting) way to handle it, but should illustrate that, to be believable, the photo inspector shouldn’t have a single rendering style outside the frustum. It would need something akin to these levels to help Deckard instantly recognize how much he should trust what he’s seeing.
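To sketch that ladder of trust in code: here’s one way a per-surface treatment could be selected. The thresholds, the style names, and the notion of an “observed” flag are all assumptions of mine, not anything in the film.

```python
from enum import Enum

# Sketch of the rendering-treatment ladder described above.
# Thresholds and style names are assumptions, not canon.
class Treatment(Enum):
    PHOTOREAL = "full color, photo-real (inside the camera frustum)"
    OUTLINED = "monochrome with solid black lines (high-confidence inference)"
    BLURRED = "monochrome, no lines, blurred (low-confidence inference)"

def treatment_for(observed: bool, confidence: float) -> Treatment:
    """Pick a style so Deckard can instantly see how much to trust a surface."""
    if observed:              # directly captured by the original camera
        return Treatment.PHOTOREAL
    if confidence >= 0.8:     # reconstructed from strong reflections, plans, receipts
        return Treatment.OUTLINED
    return Treatment.BLURRED  # implied only from weak, diffuse reflections

print(treatment_for(observed=False, confidence=0.95).value)
# monochrome with solid black lines (high-confidence inference)
```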

Flat screen or volumetric projection?

Modern CGI loves big volumetric projections. (e.g. it was the central novum of last year’s Fritz winner, Spider-Man: Far From Home.) And it would be a wonderful juxtaposition to see Deckard in a holodeck-like recreation of Leon’s apartment, with all the visual treatments described above.

But…

Also, seriously, who wants a lamp embedded in a headrest?

…that would kind of spoil the mood of the scene. This isn’t just about Deckard’s finding a clue, we also see a little about who he is and what his life is like. We see the smoky apartment. We see the drab couch. We see the stack of old detective machines. We see the neon lights and annoying advertising lights swinging back and forth across his windows. Immersing him in a big volumetric projection would lose all this atmospheric stuff, and so I’d recommend keeping it either a small contained VP, like we saw in Minority Report, or just keep it a small flat screen.


OK, now that we have an idea of how the display should (and shouldn’t) look, let’s move on to the inputs.

Inputs

To talk about inputs, then, we have to return to a favorite topic of mine, and that is the level of agency we want for the interaction. In short, we need to decide how much work the machine is doing. Is the machine just a manual tool that Deckard has to manipulate to get it to do anything? Or does it actively assist him? Or, lastly, can it even do the job while his attention is on something else—that is, can it act as an agent on his behalf? Sophisticated tools can be a blend of these modes, but for now, let’s look at them individually.

Manual Tool

This is how the photo inspector works in Blade Runner. It can do things, but Deckard has to tell it exactly what to do. But we can still improve it in this mode.

We could give him well-mapped physical controls, like a remote control for this conceptual drone. Flight controls wind up being a recurring topic on this blog (and even came up already in the Blade Runner reviews with the Spinners) so I could go on about how best to do that, but I think that a handheld controller would ruin the feel of this scene, like Deckard was sitting down to play a video game rather than do off-hours detective work.

Special edition made possible by our sponsor, Tom Nook.
(I hope we can pay this loan back.)

Similarly, we could talk about a gestural interface, using some of the synecdochic techniques we’ve seen before in Ghost in the Shell. But again, this would spoil the feel of the scene, having him look more like John Anderton in front of a tiny-TV version of Minority Report’s famous crime scrubber.

One of the things that gives this scene its emotional texture is that Deckard is drinking a glass of whiskey while doing his detective homework. It shows how low he feels. Throwing one back is clearly part of his evening routine, so much a habit that he does it despite being preoccupied about Leon’s case. How can we keep him on the couch, with his hand on the lead crystal whiskey glass, and still investigating the photo? Can he use it to investigate the photo?

Here I recommend a bit of ad-hoc tangible user interface. I first backworlded this for The Star Wars Holiday Special, but I think it could work here, too. Imagine that the photo inspector has a high-resolution camera on it, and the interface allows Deckard to declare any object that he wants as a control object. After the declaration, the camera tracks the object against a surface, using the changes to that object to control the virtual camera.

In the scene, Deckard can declare the whiskey glass as his control object, and the arm of his couch as the control surface. Of course the virtual space he’s in is bigger than the couch arm, but it could work like a mouse and a mousepad. He can just pick it up and set it back down again to extend motion.

This scheme accounts for every movement except vertical lift and drop, which could be handled by a gesture or a spoken command (see below).

Going with this interaction model means Deckard can use the whiskey glass, allowing the scene to keep its texture and feel. He can still drink and get his detective on.

Tipping the virtual drone to the right.
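Here’s a rough sketch of the mapping, assuming the inspector’s camera can report the glass’s position, rotation, and tilt against the control surface. The channel names and gains are hypothetical; the mouse-and-mousepad clutching behavior is the key design choice.

```python
from dataclasses import dataclass

# Sketch of the ad-hoc tangible controller: track the declared object
# (the whiskey glass) on the declared surface (the couch arm), and map
# changes in its pose to virtual-drone motion. All names are hypothetical.
@dataclass
class Pose:
    x: float        # position on the control surface
    y: float
    yaw: float      # rotation of the glass, in degrees
    tilt: float     # how far the glass is tipped from vertical
    lifted: bool    # off the surface entirely

def drone_delta(prev: Pose, curr: Pose, gain: float = 10.0) -> dict:
    """Map control-object pose changes to virtual-camera motion."""
    if prev.lifted or curr.lifted:
        return {}   # clutching: lifting the glass pauses tracking, mouse-style
    return {
        "truck": (curr.x - prev.x) * gain,   # slide left/right
        "dolly": (curr.y - prev.y) * gain,   # slide forward/back
        "orbit": curr.yaw - prev.yaw,        # turn the glass to revolve the drone
        "pitch": curr.tilt - prev.tilt,      # tip the glass to look up/down
        # Vertical lift/drop is deliberately unmapped; a gesture or spoken
        # command handles it, as noted above.
    }

before = Pose(0.0, 0.0, 0.0, 0.0, lifted=False)
after = Pose(0.0, 0.0, -15.0, 0.0, lifted=False)  # turned counterclockwise
print(drone_delta(before, after))
# {'truck': 0.0, 'dolly': 0.0, 'orbit': -15.0, 'pitch': 0.0}
```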

Assistant Tool

Indirect manipulation is helpful for when Deckard doesn’t know what he’s looking for. He can look around, and get close to things to inspect them. But when he knows what he’s looking for, he shouldn’t have to go find it. He should be able to just ask for it, and have the photo inspector show it to him. This requires that we presume some AI. And even though Blade Runner clearly includes General AI, let’s presume that that kind of AI has to be housed in a human-like replicant, and can’t be squeezed into this device. Instead, let’s just extend the capabilities of Narrow AI.

Some of this will be navigational and specific, “Zoom to that mirror in the background,” for instance, or, “Reset the orientation.” Some will be more abstract and content-specific, e.g. “Head to the kitchen” or “Get close to that red thing.” If it had gaze detection, he could even indicate a location by looking at it: “Get close to that red thing there,” for example, while looking at the red thing. Given the 3D inferrer nature of this speculative device, he might also want to trace the provenance of an inference, as in, “How do we know this chair is here?” This implies natural language generation as well as understanding.
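As a toy sketch, those utterances might classify into intents something like the following. The intent names, the gaze-target plumbing, and the whole scheme are assumptions of mine.

```python
# Toy intent classifier for the assistant-level commands described above.
# Intent names and resolution strategies are hypothetical.
def classify(utterance: str, gaze_target=None) -> dict:
    u = utterance.lower().rstrip("?.")
    if u.startswith(("zoom to", "get close to", "head to")):
        # Grounding "that red thing there" may need the gaze target.
        target = gaze_target or u.split("to", 1)[1].strip()
        return {"intent": "navigate", "target": target}
    if u.startswith("how do we know"):
        # Provenance tracing: the answer requires natural language generation.
        return {"intent": "explain-inference"}
    if u.startswith("reset"):
        return {"intent": "reset-orientation"}
    return {"intent": "unknown", "fallback": "the manual controls still work"}

print(classify("Zoom to that mirror in the background"))
# {'intent': 'navigate', 'target': 'that mirror in the background'}
print(classify("Get close to that red thing there", gaze_target="red lamp"))
# {'intent': 'navigate', 'target': 'red lamp'}
```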

There’s nothing stopping him from using the same general commands heard in the movie, but I doubt anyone would want to when they have commands like this and the object-on-hand controller available.

Ideally Deckard would have some general search capabilities as well, to ask questions and test ideas. “Where were these things purchased?” or subsequently, “Is there video footage from the stores where he purchased them?” or even, “What does that look like to you?” (The correct answer would be, “Well that looks like the mirror from the Arnolfini portrait, Ridley…I mean…Rick*”) It can do pattern recognition and provide as much extra information as it has access to, just like Google Lens or IBM Watson image recognition does.

*Left: The convex mirror in Leon’s 21st century apartment.
Right: The convex mirror in Arnolfini’s 15th century apartment

Finally, he should be able to ask after simple facts to see if the inspector knows or can find it. For example, “How many people are in the scene?”

All of this still requires that Deckard initiate the action, and we can augment it further with a little agentive thinking.

Agentive Tool

To think in terms of agents is to ask, “What can the system do for the user, but not requiring the user’s attention?” (I wrote a book about it if you want to know more.) Here, the AI should be working alongside Deckard. Not just building the inferences and cataloguing observations, but doing anomaly detection on the whole scene as it goes. Some of it is going to be pointless, like “Be aware the butter knife is from IKEA, while the rest of the flatware is Christofle Lagerfeld. Something’s not right, here.” But some of it Deckard will find useful. It would probably be up to Deckard to review summaries and decide which were worth further investigation.

It should also be able to help him with his goals. For example, the police had Zhora’s picture on file. (And her portrait even rotates in the dossier we see at the beginning, so it knows what she looks like in 3D for very sophisticated pattern matching.) The moment the agent—while it was reverse ray tracing the scene and reconstructing the inferred space—detects any faces, it should run the face through a most wanted list, and specifically Deckard’s case files. It shouldn’t wait for him to find it. That again poses some challenges to the script. How do we keep Deckard the hero when the tech can and should have found Zhora seconds after being shown the image? It’s a new challenge for writers, but it’s becoming increasingly important for believability.

Though I’ve never figured out why she has a snake tattoo here (and it seems really important to the plot), when Deckard finally meets her, it has disappeared.
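Here’s a sketch of that agentive loop, using the 66% threshold that turns up in the rewritten scene below. The matching function itself is handwaved, and all the names are hypothetical.

```python
# Sketch of the agentive behavior described above: as the inspector
# reconstructs the scene, every detected face is matched against the
# open case files without waiting to be asked. Names are hypothetical.
ALERT_THRESHOLD = 0.66

def on_face_detected(face, case_files, match_score):
    """Match a reconstructed face against open cases; alert above the
    threshold, otherwise quietly log it for Deckard's later review."""
    alerts, log = [], []
    for suspect in case_files:
        entry = {"suspect": suspect["name"],
                 "confidence": match_score(face, suspect)}
        if entry["confidence"] >= ALERT_THRESHOLD:
            alerts.append(entry)   # interrupt Deckard: a wanted face is in frame
        else:
            log.append(entry)      # catalogued, reviewable on request
    return alerts, log

# In the rewritten scene, Zhora's face matches at 0.63 and never clears
# the bar, which is exactly what lets Deckard stay the hero.
```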

Scene

Interior. Deckard’s apartment. Night.

Deckard grabs a bottle of whiskey, a glass, and the photo from Leon’s apartment. He sits on his couch, places the photo on the coffee table, and says, “Photo inspector?” The machine on top of a cluttered end table comes to life. Deckard continues, “Let’s look at this.” He points to the photo. A thin line of light sweeps across the image. The scanned image appears on the screen, pulled in a bit from the edges. A label reads, “Extending scene,” and we see wireframe representations of the apartment outside the frame begin to take shape. A small list of anomalies begins to appear to the left.

Deckard pours a few fingers of whiskey into the glass. He takes a drink and says, “Controller,” before putting the glass on the arm of his couch. Small projected graphics appear on the arm facing the inspector. He says, “OK. Anyone hiding? Moving?” The inspector replies, “No and no.”

Deckard looks at the screen and says, “Zoom to that arm and pin to the face.” He turns the glass on the couch arm counterclockwise, and the “drone” revolves around to show Leon’s face, with the shadowy parts rendered in blue. He asks, “What’s the confidence?” The inspector replies, “95.” On the side of the screen the inspector overlays Leon’s police profile.

Deckard says, “Unpin,” and lifts his glass to take a drink. He moves from the couch to the floor to stare more intently, and places his drink on the coffee table. “New surface,” he says, and turns the glass clockwise. The camera turns and he sees into a bedroom. “How do we have this much inference?” he asks. The inspector replies, “The convex mirror in the hall…” Deckard interrupts, saying, “Wait. Is that a foot? You said no one was hiding.” The inspector replies, “The individual is not hiding. They appear to be sleeping.”

Deckard rolls his eyes. He says, “Zoom to the face and pin.” The view zooms to the face, but the camera is level with her chin, making it hard to make out the face. Deckard tips the glass forward and the camera rises to focus on a blue, wireframed face. Deckard says, “That look like Zhora to you?” The inspector overlays her police file and replies, “63% of it does.” Deckard says, “Why didn’t you say so?” The inspector replies, “My threshold is set to 66%.” Deckard says, “Give me a hard copy right there.” He raises his glass and finishes his drink.


This scene keeps the texture and tone of the original, and camps on the limitations of Narrow AI to let Deckard be the hero. And doesn’t have him programming a virtual Big Trak.

Pathogen Movie Backgrounds

While we’re all sheltering-at-home, trying to contain the COVID-19 virus, many of us are doing business through videoconferencing apps. Some of these let you add backgrounds, and people are having some fun with these. We need all the levity we can get.

The Killer that Stalked New York (1950)

Also, those of us with kids are slammed, suddenly doing daycare and homeschooling. The lucky ones among us are also trying to hold down jobs.

While I’m scrambling to do all my stuff, there’s not a ton of time for blog-related work. (I spent quite a bit of time making the last post, and need to catch up on some of those other spinning plates.) So, this week I’m doing a low-effort but still-timely post of backgrounds grabbed from the movies referenced in the Spreading Pathogen Maps post.

Hopefully this will prove fun for you, and will buy me a bit of time to get back to Blade Runner.

The Andromeda Strain (1971)

Outbreak (1995)

Evolution (2001)

Contagion (2011)

Rise of the Planet of the Apes (2011)

World War Z (2013)

Edge of Tomorrow (2014)

Dawn of the Planet of the Apes (2014)