Spreading pathogen maps

So while the world is in the grip of the novel COVID-19 coronavirus pandemic, I’ve been thinking about those fictional user interfaces that appear in pandemic movies that project how quickly the infectious-agent-in-question will spread. The COVID-19 pandemic is a very serious situation. Most smart people are sheltering in place to prevent an overwhelmed health care system and finding themselves with some newly idle cycles (or if you’re a parent like me, a lot fewer idle cycles). Looking at this topic through the lens of sci-fi is not to minimize what’s happening around us as trivial, but to process the craziness of it all through this channel that I’ve got on hand. I did it for fascism, I’ll do it for this. Maybe this can inform some smart speculative design.

Caveat #1: As a public service I have included some information about COVID-19 in the body of the post with a link to sources. These are called out the way this paragraph is, with a SARS-CoV-2 illustration floated on the left. I have done as much due diligence as one blogger can do to not spread disinformation, but keep in mind that our understanding of this disease and the context are changing rapidly. By the time you read this, facts may have changed. Follow links to sources to get the latest information. Do not rely solely on this post as a source. If you are reading this from the relative comfort of the future after COVID-19, feel free to skip these.

A screen grab from a spreading pathogen map from Contagion (2011), focused on Africa and Eurasia, with red patches surrounding major cities, including Hong Kong.
Get on a boat, Hongkongers, you can’t even run for the hills! Contagion (2011)

And yes, this is less of my normal fare of sci-fi and more bio-fi, but it’s still clearly a fictional user interface, so between that and the world going pear-shaped, it fits well enough. I’ll get back to Blade Runner soon enough. I hope.

Giving credit where it’s due: All but one of the examples in this post were found via the TV Tropes page for the Spreading Disaster Map Graphic trope, under live-action film examples. I’m sure I’ve missed some. If you know of others, please mention them in the comments.

Four that are extradiegetic and illustrative

This first set of pandemic maps are extradiegetic.

Vocabulary sidebar: I use that term a lot on this blog, but if you’re new here or new to literary criticism, it bears explanation. Diegesis is used to mean “the world of the story,” as the world in which the story takes place is often distinct from our own. We distinguish things as diegetic and extradiegetic to describe when they occur within the world of the story, or outside of it, respectively. My favorite example is when we see a character in a movie walking down a hallway looking for a killer, and we hear screechy violins that raise the tension. When we hear those violins, we don’t imagine that there is someone in the house who happens to be practicing their creepy violin. We understand that this is extradiegetic music, something put there to give us a clue about how the scene is meant to feel.

So, like those violins, these first examples aren’t something that someone in the story is looking at. (Claude Paré? Who the eff is—Johnson! Get engineering! Why are random names popping up over my pandemic map?) They’re something the film is doing for us in the audience.

The Killer that Stalked New York (1950) is a film about a smallpox outbreak in New York City.
Edge of Tomorrow (2014) has this bit showing the Mimics, spreading their way across Europe.
The end of Rise of the Planet of the Apes (2011) shows the fictional virus ALZ-113 spreading.
The beginning of Dawn of the Planet of the Apes (2014) repeats the fictional virus ALZ-113 spreading, but augments it with video overlays.

There’s not much I feel the need to say about these kinds of maps, as they are a matter of motion graphics and animation style. I note that at least two use aposematic signals in their color palettes and shapes, but that’s just because it helps reinforce for the audience that whatever is being shown here is a major threat to human life. But I have much more authoritative things to say about systems that are meant to be used.

Before we move on, here’s a bonus set of extradiegetic spreading-pathogen maps I saw while watching the Netflix docuseries Pandemic: How to Prevent an Outbreak, as background info for this post.

A supercut from Pandemic: How to Prevent an Outbreak.
Motion graphics by Zero Point Zero Productions.

Five that are diegetic and informative

The five examples in this section are spread throughout the text for visual interest, but presented in chronological order. They are The Andromeda Strain (1971), Outbreak (1995), Evolution (2001), Contagion (2011), and World War Z (2013). I highly recommend Contagion for the acting, the filmmaking, the modeling, and some of the facts it conveys. For instance, I think it’s the only film that discusses fomites. Everyone should know about fomites.

Since I raise their specter: As of publication of this post the CDC stated that fomites are not thought to be the main way the COVID-19 novel coronavirus spreads, but there are recent and conflicting studies. The scientific community is still trying to figure this out. The CDC says for certain it spreads primarily through sneezes, coughs, and being in close proximity to an infected person, whether or not they are showing symptoms.

Note that these five spreading pathogen examples are things that characters are seeing in the diegesis, that is, in the context of the story. These interfaces are meant to convey useful information to the characters as well as us in the audience.

Which is as damning a setup as I can imagine for this first example from The Andromeda Strain (1971). Because as much as I like this movie, WTF is this supposed to be? “601” is explained in the dialogue as the “overflow error” of this computer, but the pop-art seizure graphics? C’mon. There’s no way to apologize for this monstrosity.

This psychedelic nonsense somehow tells the bunkered scientists about how fast the eponymous Andromeda Strain will spread. (1971) Somehow the CRT gets nervous, too.

I’m sorry that you’ll never get those 24 seconds back. But at least we can now move on to look at the others, which we can break down into the simple case of persuasion, and the more complex case of use.

The simple case

In the simplest case, these graphics are shown to persuade an authority to act. That’s what’s happening in this clip from Outbreak (1995).

General Donald McClintock delivers a terrifying White House Chief-of-Staff Briefing about the Motaba virus. Outbreak (1995)

But if the goal is to persuade one course of action over another, some comparison should be made between two options, like, say, what happens if action is taken sooner rather than later. While that is handled in the dialogue of many of these films—and it may be more effective for in-person persuasion—I can’t help but think it would be reinforcing to have it as part of the image itself. Yet none of our examples do this.

Compare the “flatten the curve” graphics that have been going around. They provide a visual comparison between two options and make it very plain which is the right one to pick, a comparison that stays in the mind of the observer even after they look away. This is one I’ve synthesized and tweaked from other sources.

This is a conceptual diagram, not a chart. The capacity bar is terrifyingly lower on actual charts. Stay home as much as you can. Special shouts out to Larry West.
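The logic behind “flatten the curve” can be sketched with a toy model. Below is a minimal discrete-time SIR simulation (all parameters here are hypothetical, chosen only for illustration) showing that lowering the transmission rate lowers the peak of simultaneous infections, which is the whole point of the graphic:

```python
# A minimal discrete-time SIR model (hypothetical parameters) showing why
# lowering the transmission rate "flattens the curve": the peak number of
# simultaneous infections drops, easing the load on the health care system.

def sir_peak(beta, gamma=0.1, days=400, n=1_000_000, i0=10):
    """Return the peak number of simultaneous infections."""
    s, i, r = n - i0, i0, 0
    peak = i
    for _ in range(days):
        new_inf = beta * s * i / n   # new infections this day
        new_rec = gamma * i          # recoveries this day
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        peak = max(peak, i)
    return peak

unmitigated = sir_peak(beta=0.4)   # no distancing (illustrative value)
mitigated = sir_peak(beta=0.15)    # with distancing (illustrative value)

print(mitigated < unmitigated)  # the mitigated curve peaks lower
```

The same total number of people may eventually get sick in both runs; what changes is how many are sick at once.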

There is a diegetic possibility, i.e., that no one amidst the panic of the epidemic has the time to thoughtfully do more than spit out the data and handle the rest with conversation. But we shouldn’t leave it at that, because there’s not much for us to learn there.

More complex case

The harder problem is when these displays are for people who need to understand the nature of the threat and determine the best course of action, and now we need to talk about epidemiology.

Caveat #2: I am not an epidemiologist. They are all really occupied for the foreseeable future, so I’m not even going to reach out and bother one of them to ask their opinions on this post. Like I said before about COVID-19, I really hope you don’t come to sci-fi interfaces to become an expert in epidemiology. And, since I’m just Some Guy on the Internet Who Has Read Some Stuff on the Internet, you should take whatever you learn here with a grain of salt. If I get something wrong, please let me know. Here are my major sources:

A screen grab from Contagion (2011) showing Dr. Erin Mears standing before a white board, explaining to the people in the room what R-naught is.
Kate Winslet, playing epidemiologist Dr. Erin Mears in Contagion (2011), is probably more qualified than me. Hey, Kate: Call me. I have questions.

Caveat #3: To discuss using technology in our species’ pursuit of an effective global immune system is to tread into some uncomfortable territory. ​Because of the way disease works, it is not enough to surveil the infected. We must always surveil the entire population, healthy or not, for signs of a pathogen outbreak, so responses can be as swift and certain as possible. We may need to surveil certain at-risk or risk-taking populations quite closely, as potential superspreaders. Otherwise we risk getting…well…*gestures vaguely at the USA*. I am pro-privacy, so know that when I speak about health surveillance in this post, I presume that we are simultaneously trying to protect as much “other” privacy as we can, maybe by tracking less-abusable, less-personally identifiable signals. I don’t pretend this is a trivial task, and I suspect the problem is more wicked than merely difficult to execute. But health surveillance must happen, and for this reason I will speak of it as a good thing in this context.

A screen grab from Idiocracy (2006) showing one of the vending machines that continually scanned citizens bar codes and reported their location.
Making this seem a lot less stupid than it first appeared.

Caveats complete? We’ll see.


Epidemiology is a large field of study, so for purposes of this post, we’re talking about someone who studies disease at the level of the population, rather than individual cases. Fictional epidemiologists appear when there is an epidemic or pandemic in the plot, and so are concerned with two questions: What are we dealing with? and What do we need to do?

Part 1: What are we dealing with?

Our response should change for different types of threat. So it’s important for an epidemiologist to understand the nature of a pathogen. There are a few scenes in Contagion where we see scientists studying a screen with gene sequences and a protein-folding diagram, and this touches on understanding the nature of the virus. But this is a virologist’s view, and doesn’t touch on most of what an epidemiologist is ultimately hoping to build first, and that’s a case definition. It is unlikely to appear in a spreading pathogen map, but it should inform one. So even if your pathogen is fictional, you ought to understand what one is.

A screen grab from Contagion (2011), showing a display for a virologist, including gene sequences, and spectroscopy.
“We’ve sequenced the virus and determined its origin, and we’ve modeled the way it enters the cells of the lung and the brain…” —Dr. Hextall, Contagion (2011)

A case definition is the standard shared definition of what a pathogen is; how a real, live human case is classified as belonging to an epidemic or not. Some case definitions are built for non-emergency cases, like for influenza. The flu is practically a companion to humanity, i.e., with us all the time, and mutates, so its base definition for health surveillance can be a little vague. But for the epidemics and pandemics seen in sci-fi, epidemiologists are building a case definition for outbreak investigations. These are for a pathogen in a particular time and place, and act as a standard for determining whether or not a given person is counted as a case for the purposes of studying the event.

Case definition for outbreak investigations

The CDC lists the following as the components of a case definition.

  • Clinical criteria
    • Clinical description
    • Confirmatory laboratory tests
      • These can be pages long, with descriptions of recommended specimen collections, transportation protocols, and reporting details.
    • Combinations of symptoms (subjective complaints)
    • Signs (objective physical findings)
    • Source
  • (Sometimes) Specifics of time and place.

There are sometimes different case definitions based on the combination of factors. COVID-19 case definitions from the World Health Organization, for instance, are broken down into suspect, probable, and confirmed. A person showing all the symptoms who has been in an area where an infected person was would be suspect. A person whose laboratory results confirmed the presence of SARS-CoV-2 is confirmed. Notably for a map, these three levels might warrant three levels of color.
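As a sketch of how those levels might drive a display, here is a hypothetical classifier. The criteria are simplified placeholders, not the actual WHO definition, and the color values are arbitrary:

```python
# Hypothetical three-level case classifier echoing the WHO-style breakdown
# (suspect / probable / confirmed). Criteria are simplified placeholders.

def classify_case(symptomatic, exposed, lab_result):
    """lab_result: 'positive', 'inconclusive', or 'untested' (simplified)."""
    if lab_result == "positive":
        return "confirmed"
    if symptomatic and exposed:
        # inconclusive lab work upgrades a suspect case to probable
        return "probable" if lab_result == "inconclusive" else "suspect"
    return "not a case"

# One color per level, for the spreading pathogen map idea mentioned above.
CASE_COLORS = {"suspect": "#f5c542", "probable": "#f58e42", "confirmed": "#c0392b"}

print(classify_case(symptomatic=True, exposed=True, lab_result="untested"))
```

A map layer could then tint each plotted case with `CASE_COLORS[level]`, giving the audience a read on certainty as well as spread.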

As an example, here is the CDC case definition for Ebola, as of 09 JUL 2019.

n.b. Case definitions are unlikely to work on screen

Though the case definition is critical to epidemiology, and may help the designer create the spreading pathogen map (see the note about three levels of color, above), the thing itself is too text-heavy to be of much use in a sci-fi interface, which relies much more on visuals. Better might be the name or an identifying UUID that references the definition. WHO case references look like this: WHO/COVID-19/laboratory/2020.5. I do not believe the CDC has any kind of UUID for its case definitions.

While case definitions don’t work on screen, counts and rates do. See below under Surveil Public Health for more on counts and rates.

Disease timeline

Infectious disease follows a fairly standard order of events, depicted in the graphic below. Understanding this typical timeline of events helps you understand four key metrics for a given pathogen: chains of transmission, R0, SI, and CFR.

A redesigned graphic from the CDC Principles of Epidemiology handbook, showing susceptibility, exposure, subclinical disease with pathologic changes and the beginning of an infectious period, the onset of symptoms and beginning of clinical disease, diagnosis, the end of the infectious period, and a resolution of recovery, life-long disability, or death.

For each of the key metrics, I’ll list ranges and variabilities where appropriate. These are observed attributes in the real world, but an author creating a fictional pathogen, or a maker of sci-fi interfaces needing to illustrate them, may need to know what those numbers look like and how they tend to behave over time in order to craft these attributes.

Chains of Transmission

What connects the individual cases in an epidemic are the methods of transmission. The CDC lists the following as the basics of transmission.

  • Reservoir: where the pathogen is collected. This could be the human body, or a colony of infected mynocks, a zombie, or a moldy Ameglian Major flank steak forgotten in a fridge. Or your lungs.
  • Portal of exit: how the pathogen leaves the reservoir. Say, the open wound of a zombie, or an innocent recommendation, or an uncovered cough.
  • Mode of transmission: how the pathogen gets from the portal of exit to the portal of entry. Real-world examples include mosquitos, fomites (you remember fomites from the beginning of this post, don’t you?), sex, or respiratory particles.
  • Portal of entry: how the pathogen infects a new host. Did you inhale that invisible cough droplet? Did you touch that light saber and then touch your gills? Now it’s in you like midichlorians.
  • Susceptible host: someone more likely than not to get the disease.

A map of this chain of transmission would be a fine secondary-screen to a spreading pathogen map, illustrating how the pathogen is transmitted. After all, this will inform the containment strategies.

Variability: Once the chain of transmission is known, it would only change if the pathogen mutated.

Basic Reproduction Number = How contagious it is

A famous number that’s associated with contagiousness is the basic reproduction number. If you saw Contagion you’ll recall this is written as R0, and pronounced “R-naught.” It describes, on average, how many people an infected person will infect before they stop being infectious.

  • If R0 is below 1, an infected person is unlikely to infect another person, and the pathogen will quickly die out.
  • If R0 is 1, an infected person is likely to infect one other, and the disease will continue through a population at a steady rate without intervention.
  • If R0 is higher than 1, a pathogen stands to explode through a population.
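Those three regimes can be illustrated with a naive branching-process sketch, which assumes a fixed R0 and ignores the depletion of susceptible people (so it only describes the early phase of an outbreak):

```python
# Expected new cases per transmission generation under a fixed R0.
# A naive branching-process sketch: each case infects R0 others on average,
# and we ignore immunity, intervention, and running out of susceptible hosts.

def cases_per_generation(r0, generations):
    """Expected new cases in each generation, starting from one case."""
    return [round(r0 ** g) for g in range(generations)]

print(cases_per_generation(0.5, 4))  # R0 < 1: the chain fizzles out
print(cases_per_generation(1, 4))    # R0 = 1: a steady simmer
print(cases_per_generation(4, 4))    # R0 > 1: explosive growth
```

Even this crude model makes the dramatic stakes of an R0 of 4 obvious: four generations in, one case has become hundreds.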

The CDC book tells me that R0 describes how the pathogen would reproduce through the population with no intervention, but other sources talk of lowering the R0 so I’m not certain if those other sources are using it less formally, or if my understanding is wrong. For now I’ll go with the CDC, and talk about R0 as a thing that is fixed.

It, too, is not an easy thing to calculate. It depends on the duration of contagiousness after a person becomes infected, the likelihood of infection for each contact between a susceptible person and an infectious person or vector, and the contact rate.

Variability: It can change over time. When a novel pathogen first emerges, the data is too sparse and epidemiologists are scrambling to do the field work to confirm cases. As more data comes in and numbers get larger, the number will converge toward what will be its final number.

It can also differ based on geography, culture, geopolitical boundaries, and the season, but the literature (such as I’ve read) refers to R0 as a single number.

Range: R0 for known pathogens can run as high as 12–18, but that’s measles morbillivirus, an infectious outlier. The average range of R0 in this sample, not including measles, is 2.5–5.2. MEV-1 from Contagion has a major dramatic moment when it mutates and its predicted R0 becomes 4, making it roughly as contagious as the now-eradicated killer smallpox.

Data from https://en.wikipedia.org/wiki/Basic_reproduction_number

Serial Interval = How fast it spreads

Serial interval is the average time between successive cases in a chain of transmission. This tells the epidemiologist how fast a pathogen stands to spread through a population.

Variability: Like the other numbers, SI is calculated and updated with new cases while an epidemic is underway, but it tends to converge toward a number. SI for some respiratory diseases is charted below. Influenza A moves very fast. Pertussis is much slower.

Range: As you can see in the chart, SI can be as fast as 2.2 days, or as slow as 22.8 days. The median in this set is 14 days and the average is 12.8. SARS-CoV-2 is currently estimated to be about 4 days, which is very fast.

Data from: https://academic.oup.com/aje/article/180/9/865/2739204
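R0 and SI combine into a useful back-of-envelope figure: during early exponential growth, cases multiply by roughly R0 every serial interval, which gives a doubling time of about SI × ln(2) / ln(R0). A sketch (a simplification that ignores variance in the generation interval):

```python
import math

# Back-of-envelope early-growth estimate: cases multiply by ~R0 every
# serial interval, so doubling time ≈ SI * ln(2) / ln(R0).
# Only valid for R0 > 1 and only during the early exponential phase.

def doubling_time_days(r0, serial_interval_days):
    return serial_interval_days * math.log(2) / math.log(r0)

# With ballpark figures like those above (R0 ≈ 4, SI ≈ 4 days):
print(round(doubling_time_days(4, 4), 1))  # → 2.0 days
```

A pathogen doubling every two days is the kind of number that justifies the panicked briefing-room scenes in these films.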

CFR = How deadly it is

The case fatality rate is the percentage of cases that prove fatal. It is very often shortened to CFR. This is not always easy to calculate.

Variability: Early in a pandemic it might be quite low because hospital treatment is still available. Later in a pandemic, as hospitals and emergency rooms are packed full, the CFR might rise quite high. Until a pathogen is eradicated, the precise CFR is changing with each new case. Updates can occur daily, or in real time with reports. In a sci-fi world, it could update in real time directly from ubiquitous sensors, and perhaps be predicted by a specialty A.I. or precognitive character.

Range: Case fatality rates range from the incurable, like kuru, at 100%, down to 0.001% for chickenpox affecting unvaccinated children. The CFR changes greatly at the start of a pandemic and slowly converges towards its final number.
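The naive calculation is just deaths over confirmed cases, and the caveats above are exactly why it’s naive. A sketch:

```python
# Naive case fatality rate: deaths as a percentage of confirmed cases.
# Naive because, early in an outbreak, open cases haven't resolved yet
# (undercounting deaths-to-come) while mild cases go untested
# (inflating the apparent rate). Both biases shrink as data accumulates.

def naive_cfr(deaths, confirmed_cases):
    return 100 * deaths / confirmed_cases

print(naive_cfr(10, 1000))  # → 1.0 (%)
```

A sci-fi display updating this number in real time would visibly converge as the epidemic matures, which could itself be a nice storytelling beat.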

So, if the spreading pathogen map is meant to convey to an epidemiologist the nature of the pathogen, it should display these four factors:

  1. Mode of Transmission: How it spreads
  2. R0: How contagious it is
  3. SI: How fast it spreads
  4. CFR: How deadly it is

Part 2: What do we do?

An epidemiologist during an outbreak has a number of important responsibilities beyond understanding the nature of the pathogen. I’ve taken a crack at listing those below. Note: this list is my interpretation of the CDC materials, rather than their list. As always, offer corrections in comments.

  • Surveil the current state of things
  • Prevent further infections
  • Communicate recommendations

Epidemiology has other non-outbreak functions, but those routine, non-emergency responsibilities rarely make it to cinema. And since “communicate recommendations” is pretty covered under “The Simple Case,” above, the rest of this post will be dedicated to health surveillance and prevention tools.

Surveil the current state of things

In movies the current state of things is often communicated via the spreading pathogen map in some command and control center. The key information on these maps is counts and rates.

Counts and Rates

The case definition (above) helps field epidemiologists know which cases to consider in the data set for a given outbreak. They routinely submit reports of their cases to central authorities like the CDC or WHO, who aggregate them into counts, which are tallies of known cases. (And though official sources in the real world are rightly cautious about doing so, sci-fi could also include an additional layer of suspected or projected cases.) Counts, especially over time, are important for tracking the spread of a virus. Most moviegoers have basic numeracy, so red number going up = bad is an easy read for an audience.

Counts can be broken down into many variables. Geopolitical regions make sense as governmental policies and cultural beliefs can make meaningful distinctions in how a pathogen spreads. In sci-fi a speculative pathogen might warrant different breakdowns, like frequency of teleportation, or time spent in FTL warp fields, or genetic distance from the all-mother.

In the screen cap of the Johns Hopkins COVID-19 tracker, you can see counts high in the visual hierarchy for total confirmed (in red), total deaths (in white), and total recovered (in green). The map plots the current status of the counts.

From the Johns Hopkins COVID-19 tracker, screen capped in the halcyon days of 23 MAR 2020.

Rates are another number that epidemiologists are interested in, because they normalize the spread of a pathogen across different group sizes. (Colloquially, rate often implies change over time, but in the field of epidemiology, it is a static per capita measurement at a point in time.) For example, 100 cases is around a 0.00001% rate in China, with its population of 1.386 billion, but it would be a full 10% rate in Vatican City, so count can be a poor comparison for understanding how much of a given population is affected. By representing the rates alongside the counts you can detect whether the pathogen is affecting a subgroup of the global population more or less than others of its kind, which may warrant investigation into causes, or provide a grim lesson to those who take the threat lightly.
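The normalization is simple division, but it changes the read entirely. A sketch, using rough population figures for illustration:

```python
# Counts vs. rates: the same count reads very differently once normalized
# by population. Population figures are rough and for illustration only.

def rate_percent(count, population):
    """Cases as a percentage of the population: a static per capita rate."""
    return 100 * count / population

print(rate_percent(100, 1_386_000_000))  # China: a vanishingly small rate
print(rate_percent(100, 800))            # Vatican City: 12.5%
```

A map pairing each count with its rate would keep a small, hard-hit population from disappearing next to a large, lightly-affected one.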

Counts and rates over time

The trend line in the bottom right of the Johns Hopkins dashboard helps the viewer understand how case counts are trending over time, and might be quite useful for helping telegraph the state of the pandemic to an audience, though having it tucked in a corner and in orange may not draw the attention it needs for instant understanding.

These two displays show different data, and one is more cinemagenic than the other. Confirmed cases, on the left, is a total, and at best will only ever level off. If you know what you’re looking at, you know that older cases represented by the graph are…uh…resolved (i.e. recovery, disability, or death) and that a level-off is the thing we want to see there. But the chart on the right plots the daily increase, and will look something like a bell curve when the pandemic comes to an end. That is a more immediate read (bad thing was increasing, bad thing peaked, bad thing is on the decline) and so I think is better for cinema.
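The daily-increase view is just the first difference of the cumulative series. A sketch, with made-up numbers shaped like a small outbreak:

```python
# Converting a cumulative confirmed-case series into daily new cases:
# the cumulative total only levels off, but the daily increase traces a
# bell-ish curve that reads instantly as "rising, peaked, declining."

def daily_increase(cumulative):
    return [b - a for a, b in zip(cumulative, cumulative[1:])]

cumulative = [10, 25, 60, 120, 190, 240, 265]  # illustrative totals
print(daily_increase(cumulative))  # → [15, 35, 60, 70, 50, 25]
```

The second list rises and falls even though the first only climbs, which is why the daily view is the more cinemagenic of the two.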

At a glance you can also tell that China appears to have its shit sorted. [Obviously this is an old screen grab.]

In the totals, sparklines would additionally help a viewer know whether things are getting better or getting worse in the individual geos, and would help sell the data via small multiples on a close-up.

Plotting cases on maps

Counts and rates are mostly tables of numbers with a few visualizations. The most cinemagenic thing you can show is cases on geopolitical maps. All of the examples, except the trainwreck that is The Andromeda Strain pathogen map, show this, even the extradiegetic ones. Real-world pathogens mostly spread through physical means, so counts plotted by area help you understand where the confirmed cases are.

Which projection?

But as we all remember from that one West Wing scene, projections have consequences. When wondering where in the world to send much-needed resources, Mercator will lie to you, exaggerating land at the poles at the expense of equatorial regions. I am a longtime advocate for alternate projections, such as—from the West Wing scene—the Gall-Peters. I am an even bigger fan of the Dymaxion and Waterman projections. I think they look quite sci-fi because they are familiar-but-unfamiliar, and they have some advantages for showing things like abstract routes across the globe.

A Dymaxion or Fuller projection of the earth.

If any supergenre is here to help model the way things ought to be, it’s sci-fi. If you only have a second or less of time to show the map, then you may be locked to Mercator for its instant-recognizability, but if the camera lingers, or you have dialogue to address the unfamiliarity, or if the art direction is looking for uncanny-ness, I’d try for one of the others.

What is represented?

Of course you’re going to want to represent the cases on the map. That’s the core of it. And it may be enough if the simple takeaway is thing bad getting worse. But if the purpose of the map is to answer the question “what do we do,” the cases may not be enough. Recall that another primary goal of epidemiologists is to prevent further infections. And the map can help indicate this and inform strategy.

Take, for instance, 06 APR 2020 of the COVID-19 epidemic in the United States. If you had just looked at a static map of cases, blue states had higher counts than red states. But blue states had been much more aggressive in adopting “flattening the curve” tactics, while red states had been listening to Trump and right wing media that had downplayed the risk for many weeks in many ways. (Read the Nate Silver post for more on this.) If you were an epidemiologist, seeing just the cases on that date might have led you to want to focus social persuasion resources on blue states. But those states had already taken the science to heart. Red states, on the other hand, needed a heavy blitz of media to convince them that it was necessary to adopt social distancing and shelter-in-place directives. A map showing both cases and social acceptance of the pandemic might have helped an epidemiologist make the right resource allocation decision quickly.

Another example is travel routes. International travel played a huge role in spreading COVID-19, and visualizations of transportation routes can prove more informative in understanding its spread than geographic maps. Below is a screenshot of the New York Times’ beautiful COVID-19 MAR 2020 visualization How the Virus Got Out, which illustrates this point.

Other things that might be visualized depend, again, on the chain of transmission.

  • Is the pathogen airborne? Then you might need to show upcoming wind and weather forecasts.
  • Is the reservoir mosquitoes? Then you might want to show distance to bodies of still water.
  • Is the pathogen spread through the mycelial network? Then you might need to show an overlay of the cosmic mushroom threads.

Whatever your pathogen, use the map to show the epidemiologist ways to think about its future spread, and decide what to do. Give access to multiple views if needed.

How do you represent it?

When showing intensity-by-area, there are lots of ways to do it. All of them have trade-offs. The Johns Hopkins dashboard uses a proportional symbol map, with a red dot, centered on the country or state, whose radius is larger for more confirmed cases. I don’t like this for pandemics, mostly because the red dots begin to overlap and make it difficult to see any detail without interacting with the map to get a better focus. It does make for an immediate read. In this 23 MAR 2020 screen cap, it’s pretty obvious that the US, Europe, and China are current hotspots, but to get more detail you have to zoom in, and the audience, if not the characters, doesn’t have that option. I suppose it also provides a tone-painting sense of unease when the symbols become larger than the area they are meant to represent. It looks and feels like the area is overwhelmed with the pathogen, which is an appropriate, if emotional and uninformative, read.

The Johns Hopkins dashboard uses a proportional symbol map. And I am distraught at how quaint those numbers seem now, much less what they will be in the future.
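For the curious, the usual cartographic convention for sizing a proportional symbol is to scale its area, not its radius, with the count, so radius goes with the square root of cases. (I don’t know exactly how the Johns Hopkins dashboard computes its dot sizes; the scaling factor below is arbitrary.) A sketch:

```python
import math

# Proportional symbol sizing: scale the symbol's *area* with the count,
# so radius ∝ sqrt(cases). Scaling the radius linearly would make large
# counts look quadratically worse than they are.

def symbol_radius(cases, px_per_sqrt_case=0.5):
    """Radius in pixels for a dot representing `cases` confirmed cases."""
    return px_per_sqrt_case * math.sqrt(cases)

print(symbol_radius(100))     # → 5.0 px
print(symbol_radius(10_000))  # → 50.0 px: 100× the cases, only 10× the radius
```

Of course, for a film, the perceptually honest square-root scaling may be exactly what the art director overrides to get that overwhelmed-by-red feeling.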

Most of the sci-fi maps we see are a variety of chorochromatic map, where color is applied to the discrete thing where it appears on the map. (This is as opposed to a choropleth map, where color fills in existing geopolitical regions.) The chorochromatic option is nice for sci-fi because the color makes a shape—a thing—that does not know of or respect geopolitical boundaries. See the example from Evolution below.

Governor Lewis watches the predicted spread of the Glen Canyon asteroid organisms out of Arizona and to the whole of North America. Evolution (2001)

It can be hard (or pointlessly detailed) to show exactly where a given thing is on a map, like, say, where infected people literally are. To overcome this you could use a dot-distribution map, as in the Outbreak example (repeated below so you don’t have to scroll that far back up).

Outbreak (1995), again.

Like many such maps, the dot-distribution becomes solid red to emphasize passing over some magnitude threshold. For my money, the dots are a little deceptive, as if each dot represented a person rather than part of a pattern that indicates magnitude, but a glance at the whole map gives the right impression.

For a real world example of dot-distribution for COVID-19, see this example posted to reddit.com by user Edward-EFHIII.

COVID-19 spread from January 23 through March 14th.

Oftentimes dot-distribution is reserved for low magnitudes, and once infections cross a threshold, the maps become choropleth maps. See this example from the world of gaming.

A screen grab of the game Plague, Inc., about 1/3 of the way through a game.
In Plague, Inc., you play the virus, hoping to win against humanity.

Here you can see that India and Australia have dots, while China, Kyrgyzstan, Tajikistan, Turkmenistan, and Afghanistan (I think) are “solid” red.

The other representation that might make sense is a cartogram, in which predefined areas (like country or state boundaries) are scaled to show the magnitude of a variable. Continuous-area cartograms can look hallucinogenic, and would need some explanation by dialogue, but can overcome the inherent bias that size = importance. It might be a nice secondary screen alongside a more traditional one.

A side by side comparison of a standard and cartographic projection.
On the left, a Choropleth map of the 2012 US presidential election, where it looks like red states should have won. On the right, a continuous cartogram with state sizes scaled to reflect states’ populations, making more intuitive sense why blue states carried the day.

Another gorgeous projection dispenses with the geographic layout. Dirk Brockmann, professor at the Institute for Theoretical Biology, Humboldt University, Berlin, developed a visualization that places the epicenter of a disease at the center of a node graph, and plots every city around it based on how many airport flights it takes to get there. Plotting proportional symbols onto this graph makes the spread of the disease radiate in mostly-predictable waves. Pause the animation below and look at the red circles. You can easily predict where the next ones will likely be. That’s an incredibly useful display for the epidemiologist. And as a bonus, it’s gorgeous and a bit mysterious, so would make a fine sci-fi companion to a more traditional map display. Read more about this innovative display on the CityLab blog. (And thanks, Mark Coleran, for the pointer.)
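For the curious, the core of that layout is just graph distance: a breadth-first search over the airline network, measuring each city’s distance from the epicenter in flights rather than kilometers. Here’s a toy sketch (the cities and routes are invented, not real flight data):

```python
from collections import deque

# Toy airline network: each city maps to the cities it has direct flights to.
routes = {
    "Epicenter": ["Beijing", "Bangkok", "Hong Kong"],
    "Beijing":   ["Epicenter", "Berlin", "New York"],
    "Bangkok":   ["Epicenter", "Sydney"],
    "Hong Kong": ["Epicenter", "London"],
    "Berlin":    ["Beijing"],
    "New York":  ["Beijing"],
    "Sydney":    ["Bangkok"],
    "London":    ["Hong Kong"],
}

def flight_hops(origin: str) -> dict:
    """Breadth-first search: minimum number of flights from the origin."""
    hops = {origin: 0}
    queue = deque([origin])
    while queue:
        city = queue.popleft()
        for neighbor in routes.get(city, []):
            if neighbor not in hops:
                hops[neighbor] = hops[city] + 1
                queue.append(neighbor)
    return hops

# Each city's radius on the display would be its hop count.
print(flight_hops("Epicenter"))
```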

How does it move?

First I should say I don’t know that it needs to move. We have information graphics that display predicted change-over-area without motion: Hurricane forecast maps. These describe a thing’s location in time, and simultaneously, the places it is likely to be in the next few days.

National Hurricane Center’s 5-day forecast for Hurricane Florence, 08 SEP 2018.
Image: NHC

If you are showing a chorochromatic map, then you can use “contour lines” or color regions to demonstrate the future predictions.

Not based on any real pathogen.

Another possibility is small multiples, where the data is spread out over space instead of time. This makes it harder to compare stages, but doesn’t have the user searching for the view they want. You can mitigate this with small lines on each view representing the boundaries of other stages.

Not based on any real pathogen.

The side views could also represent scenarios. Instead of +1, +2, etc., the side views could show the modeled results for different choices. Perhaps those scenario side views and their projected counts could be animated.

To sing the praises of the static map: Such a view, updated as data comes in, means a user does not have to wait for the right frame to pop up, or interact with a control to get the right piece of information, or miss some detail when they just happened to have the display paused on the wrong frame of an animation.

But, I realize that static maps are not as cinemagenic as a moving map. Movement is critical to cinema, so a static map, updating only occasionally as new data comes in, could look pretty lifeless. Animation gives the audience more to feel as some red shape slowly spreads to encompass the whole world. So, sure. I think there are better things to animate than the primary map, but doing so puts us back into questions of style rather than usability, so I’ll leave off that chain of thought and instead show you the fourth example in this section, Contagion.

MEV-1 spreads from fomites! It’s fomites! Contagion (2011), designed by Cory Bramall of Decca Digital.

Prevent further transmissions: Containment strategies

The main tactic for epidemiological intervention is to deny pathogens the opportunity to jump to new hosts. The top-down way to do this is to persuade community leaders to issue broad instructions, like the ones around the world that have us keeping our distance from strangers, wearing masks and gloves, and sheltering-in-place. The bottom-up tactic is to identify those who have been infected or put at risk for contracting a pathogen from an infected person. This is done with contact tracing.

Contain Known Cases

When susceptible hosts simply do not know whether or not they are infected, some people will take their lack of symptoms to mean they are not infectious and do risky things. If these people are infectious but not yet showing symptoms, they spread the disease. For this reason, it’s critical to do contact tracing of known cases to inform and encourage people to get tested and adopt containment behaviors.

Contact tracing

There are lots of scenes in pathogen movies where scientists stand around whiteboards with hastily-written diagrams of who-came-into-contact-with-whom, as they hope to find and isolate cases, or to find “patient 0,” or to identify super-spreaders and isolate them.

An infographic from Wikimedia showing a flow chart of contact tracing. Its label reads “Contact tracing finds cases quickly so they can be isolated and reduce spread.”
Wikimedia file, CC BY-SA 4.0

These scenes seem ripe for improvement by technology and AI. There are opt-in self-reporting systems, like those that were used to contain COVID-19 in South Korea, or the proposed NextTrace system in the West. In sci-fi, this can go further.

Scenario: Imagine an epidemiologist talking to the WHO AI and asking it to review public footage, social media platforms, and cell phone records to identify all the people that a given case has been in contact with. It could even reach out and do field work, calling humans (think Google Duplex) who might be able to fill in its information gaps. Field epidemiologists could then focus on situations where suspected cases don’t have phones or computers.

Or, for that matter, we should ask why the machine should wait to be asked. It should be set up as an agent, reviewing these data feeds continually, and reaching out in real time to manage an outbreak.

  • SCENE: Karen is walking down the sidewalk when her phone rings.
  • Computer voice:
  • Good afternoon, Karen. This is Florence, the AI working on behalf of the World Health Organization.
  • Karen:
  • Oh no. Am I sick?
  • Computer voice:
  • Public records indicate you were on a bus near a person who was just confirmed to be infected. Your phone tells me your heart rate has been elevated today. Can you hold the phone up to your face so I can check for a fever?
  • Karen does. As the phone does its scan, people on the sidewalk behind her can be seen to read texts on their phone and move to the other side of the street. Karen sees that Florence is done, and puts the phone back to her ear.
  • Computer voice:
  • It looks as if you do have a fever. You should begin social distancing immediately, and improvise a mask. But we still need a formal test to be sure. Can you make it to the testing center on your own, or may I summon an ambulance? It is a ten minute walk away.
  • Karen:
  • I think I can make it, but I’ll need directions.
  • Computer voice:
  • Of course. I have also contacted your employer and spun up an AI which will be at work in your stead while you self-isolate. Thank you for taking care of yourself, Karen. We can beat this together.

Design challenge: In the case of an agentive contact tracer, the display would be a social graph displayed over time, showing confirmed cases as they connect to suspected cases (using evidence-of-proximity or evidence-of-transmission) as well as the agent’s ongoing work in contacting them and arranging testing. It would show isolation monitoring and predicted risks of broken isolation. It would prioritize cases that are at greatest risk of spreading the pathogen, and reach out for human intervention when its contact attempts failed or met resistance. It could be simultaneously tracing contacts “forward” to minimize new infections and tracing contacts backward to find a pathogen’s origins.
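The forward/backward tracing such a display would drive is straightforward graph traversal. A hypothetical sketch, with a toy contact graph where an edge A → B means A may have exposed B:

```python
# Toy directed contact graph: edge A -> B means A may have exposed B.
contacts = {
    "patient0": ["alice", "bob"],
    "alice":    ["carol"],
    "bob":      [],
    "carol":    ["dan"],
}

def trace_forward(case: str) -> set:
    """Everyone the case may have exposed, directly or indirectly."""
    exposed, stack = set(), [case]
    while stack:
        person = stack.pop()
        for contact in contacts.get(person, []):
            if contact not in exposed:
                exposed.add(contact)
                stack.append(contact)
    return exposed

def trace_backward(case: str) -> set:
    """Possible sources: everyone whose forward trace reaches the case."""
    return {p for p in contacts if case in trace_forward(p)}

print(trace_forward("alice"))   # downstream: who alice may have exposed
print(trace_backward("carol"))  # upstream: who may have exposed carol
```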

Another consideration for such a display is extension beyond the human network. Most pathogens mutate much more freely in livestock and wild animal populations, making their way into humans occasionally. It happened this way for SARS (bats → civets → people), MERS (bats → camels → people), and COVID-19 (bats → pangolins → people). (Read more about bats as a reservoir.) It’s not always bats, by the way; livestock are also notorious breeding grounds for novel pathogens. Remember bird flu? Swine flu? This “zoonotic network” should be a part of any pathogen forensic or surveillance interface.

A photograph of an adorable pangolin, the most trafficked animal in the world. According to the International Union for Conservation of Nature (IUCN), more than a million pangolins were poached in the decade prior to 2014.
As far as SARS-CoV-2 is concerned, this is a passageway.
U.S. Fish and Wildlife Service Headquarters / CC BY (https://creativecommons.org/licenses/by/2.0)

Design idea: Even the notion of what it means to do contact tracing can be rethought in sci-fi. Have you seen the Mythbusters episode “Contamination”? In it, Adam Savage has a tube latexed to his face, right near his nose, that drips a fluorescent dye at the same rate a person’s runny nose might drip. Then he attends a staged dinner party where, despite keeping a napkin on hand to dab at the fluid, the dye gets everywhere except on the one germophobe. It brilliantly illustrates the notion of fomites and how quickly an individual can spread a pathogen socially.

Now imagine this same sort of tracing, but instead of dye, it is done with computation. A camera watches, say, grocery shelves, and notes who touched what where and records the digital “touch,” or touchprint, along with an ID for the individual and the area of contact. This touchprint could be exposed directly with augmented reality, appearing much like the dye under black light. The digital touch mark would only be removed from the digital record of the object if it is disinfected, or after the standard duration of surface stability expires. (Surface stability is how long a pathogen remains a threat on a given surface). The computer could further watch the object for who touches it next, and build an extended graph of the potential contact-through-fomites.

Ew, I got touchprint on me.

You could show the AR touchprint to the individual doing the touching; this would help remind them to wear protective gloves if the science calls for it, or prompt them to disinfect the object themselves. A digital touchprint could also be used by workers tasked with disinfecting the surfaces, or by disinfecting drones. Lastly, if an individual is confirmed to have the pathogen, the touchprint graph could immediately identify those who had touched an object at the same spot as the infected person. The system could provide field epidemiologists with an instant list of people to contact (and things to clean), or, if the Florence AI described above was active, the system could reach out to individuals directly. The amount of data in such a system would be massive, and the aforementioned privacy issues would be similarly massive, but in sci-fi you can bypass the technical constraints, and the privacy issues might just be a part of the diegesis.

In case you’re wondering how long that touch mark would last for SARS-CoV-2 (the virus that causes COVID-19), this study from the New England Journal of Medicine says it’s 4 hours for copper, 24 hours for paper and cardboard, and 72 hours on plastic and steel.
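Those surface-stability numbers are exactly what a touchprint registry would key its expiry off of. A hypothetical sketch (the registry API is my invention; only the durations come from the study cited above):

```python
# Surface-stability windows (hours), per the NEJM study cited above.
SURFACE_STABILITY_HOURS = {
    "copper": 4,
    "paper": 24,
    "cardboard": 24,
    "plastic": 72,
    "steel": 72,
}

touchprints = []  # each entry: (person_id, object_id, material, touch_time_hours)

def record_touch(person, obj, material, t):
    """Log a digital 'touch' on an object at time t (hours)."""
    touchprints.append((person, obj, material, t))

def active_touchprints(obj, now):
    """Touches on an object still within their surface-stability window."""
    return [
        (person, material, t)
        for person, o, material, t in touchprints
        if o == obj and now - t < SURFACE_STABILITY_HOURS[material]
    ]

record_touch("karen", "shelf-12", "steel", t=0)
record_touch("omar", "shelf-12", "copper", t=0)
# Six hours later, the copper touch has expired; the steel touch remains.
print(active_touchprints("shelf-12", now=6))
```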

Anyway, all of this is to say that the ongoing efforts by the agent to do the easy contact tracing would be an excellent, complicated, cinemagenic side-display to a spreading pathogen map.

Destroying non-human reservoirs

Another way to reduce the risk of infection is to seal or destroy reservoirs. Communities encourage residents to search their properties and remove any standing water to eliminate breeding grounds for mosquitos, for example. There is the dark possibility that a pathogen is so lethal that a government might want to “nuke it from orbit” and kill even human reservoirs. Outbreak features an extended scene in which soldiers seek to secure a neighborhood known to be infected with the fictional Motaba virus, and threaten to murder a man trying to escape with his family. For this dark reason, in addition to distance-from-reservoir, the location of actual reservoirs may be important to your spreading pathogen map. Maybe also counts of the Hail Mary tools that are available, their readiness, effects, etc.

To close out the topic of what to do, let me now point you to the excellent and widely-cited Medium article by Tomas Pueyo, “Act Today or People Will Die,” for thoughts on that real-world question.

The…winner(?)

At the time of publication, this is the longest post I’ve written on this blog. Partly that’s because I wanted to post it as a single thing, but also because it’s a deep subject that’s very important to the world, and there are lots and lots of variables to consider when designing one.

Which makes it not surprising that most of the examples in this mini survey are kind of weak, with only one true standout. That standout is the World War Z spreading disaster map, shown below.

World War Z (2013)

It goes by pretty quickly, but you can see more features discussed above in this clip than in any of the other examples.

Description in the caption.
A combination of chorochromatic marking for the zombie infection, and choropleth marking for countries. Note the signals showing countries where data is unavailable.
Description in the caption.
Along the bottom, rates (not cases) are expressed as “Population remaining.” That bar of people along the bottom would start slow and then just explode to red, but it’s a nice “things getting worse” moment. Maybe it’s a log scale?
Description in the caption.
A nice augmentation of the main graphic is down the right-hand side. A day count in the upper right (with its shout-out to zombie classic 28 Days Later), and what I’m guessing are resources, including nukes.

It doesn’t have that critical layer of forecasting data, but it got so much more right than its peers, I’m still happy to have it. Thanks to Mark Coleran for pointing me to it.


Let’s not forget that we are talking about fiction, and few people in the audience will be epidemiologists, standing up in the middle of the cinema (remember when we could go to cinemas?) to shout, “What’s with this R0 of 0.5? What is this, the LaCroix of viruses?” But c’mon, surely we can make something other than Andromeda Strain’s Pathogen Kaleidoscope, or Contagion’s Powerpoint wipe. Modern sci-fi interfaces are about spectacle, about overwhelming the users with information they can’t possibly process, and which they feel certain our heroes can—but they can still be grounded in reality.

Lastly, while I’ve enjoyed the escapism of talking about pandemics in fiction, COVID-19 is very much with us and very much a threat. Please take it seriously and adopt every containment behavior you can. Thank you for taking care of yourself. We can beat this together.


Call for examples: Spreading Disaster Interfaces

So of the bumper crop of our current dystopias, the COVID-19 novel coronavirus feels the most pressing. While everyone is washing their hands regularly, working from home, conducting social isolation, and trying like hell not to touch their face (you’re doing all these things, right?), they are also downloading and processing the pandemic via cinema. Contagion, in particular, from 2011, seems to be the film people are scrambling to find and watch. These films are more bio-fi than sci-fi, but these interfaces are clearly in the realm of Fictional User Interfaces, and regular readers know I often go off-leash to follow interests wherever they lead.

Contagion (2011)

While it’s a questionable kind of global-disaster therapy (Does it make people too paranoid? Does it give people false hope? Does it model the right behavior?) it makes me want to investigate the displays from such movies.

  • What diegetic questions do these displays hope to answer?
  • How well do the Fictional User Interfaces help answer these questions?
  • What ideally should these characters/teams be monitoring?
  • Are there better forms for this task?

And while there are lots of possible displays for all the permutations of these questions, I expect the anchor display will be what tvtropes.com calls the Spreading Disaster Map Graphic.

Contagion (2011+some change)

You know this one. The map starts with a few red dots labeled “today,” then transitions to another version with more red dots labeled something like “a little future,” and then lands on a final version absolutely covered in red death with a label like “a little more future.” Holy wow, we think, the stakes are dire.

TV tropes has a number of examples. The list below includes those that are closer to sci-fi, filtered for disease vectors rather than, say, human armies.

  • The Andromeda Strain
  • Rise of the Planet of the Apes
  • Dawn of the Planet of the Apes
  • Evolution
  • Outbreak
  • Jurassic World (kind of. Not armies but Indominus Rexes)
  • The Killer That Stalked New York
  • Edge of Tomorrow
  • Moana (no, really)

But I suspect they don’t have them all. (Like, where are all the zombie movies?) So scour your brain for examples of these kinds of interfaces, and comment so I have a good sample to work from.

In the meantime, while we’re on the topic, the most useful, informative, and even-keeled post I’ve seen on the issue is this one by Tomas Pueyo, “Coronavirus: Why You Must Act Now.” Please give it a read.

Tunnel-in-the-Sky Displays

“Tunnel in the Sky” is the name of a 1955 Robert Heinlein novel that has nothing to do with this post. It is also the title of the following illustration by Muscovite digital artist Vladimir Manyukhin, which also has nothing to do with this post, but is gorgeous and evocative, and included here solely for visual interest.

See more of Vladimir’s work here https://www.artstation.com/mvn78.

Instead, this post is about the piloting display of the same name, and written specifically to sci-fi interface designers.


Last week in reviewing the spinners in Blade Runner, I included mention and a passing critique of the tunnel-in-the-sky display that sits in front of the pilot. While publishing, I realized that I’d seen this a handful of other times in sci-fi, and so I decided to do more focused (read: Internet) research about it. Turns out it’s a real thing, and it’s been studied and refined a lot over the past 60 years, and there are some important details to getting one right.

Though I looked at a lot of sources for this article, I must give a shout-out to Max Mulder of TU Delft. (Hallo, TU Delft!) Mulder’s 1999 PhD thesis on the subject is truly a marvel of research and analysis, and it pulls in one of my favorite nerd topics: Cybernetics. Throughout this post I rely heavily on his paper, and you could go down many worse rabbit holes than cybernetics. n.b., it is not about cyborgs. Per se. Thank you, Max.

I’m going to breeze through the history, issues, and elements from the perspective of sci-fi interfaces, and then return to the three examples in the survey. If you want to go really in depth on the topic (and encounter awesome words like “psychophysics” and “egomotion” in their natural habitat), Mulder’s paper is available online for free from researchgate.net: “Cybernetics of Tunnel-in-the-Sky Displays.”

What the heck is it?

A tunnel-in-the-sky display assists pilots, helping them know where their aircraft is in relation to an ideal flight path. It consists of a set of similar shapes projected out into 3D space, circumscribing the ideal path. The pilot monitors their aircraft’s trajectory through this tunnel, and makes course corrections as they fly to keep themselves near its center.

This example comes from Michael P. Snow, as part of his “Flight Display Integration” paper, also on researchgate.net.

Please note that throughout this post, I will spell out the lengthy phrase “tunnel-in-the-sky” because the acronym is pointlessly distracting.

Quick History

In 1973, Volkmar Wilckens was a research engineer and experimental test pilot for the German Research and Testing Institute for Aerospace (now called the German Aerospace Center). He was doing a lot of thinking about flight safety in all-weather conditions, and came up with an idea. In his paper “Improvements In Pilot/Aircraft-Integration by Advanced Contact Analog Displays,” he sort of says, “Hey, it’s hard to put all the information from all the instruments together in your head and use that to fly, especially when you’re stressed out and flying conditions are crap. What if we took that data and rolled it up into a single easy-to-use display?” Figure 6 is his comp of just such a system. It was tested thoroughly in simulators and shown to improve pilot performance by making the key information (attitude, flight-path and position) perceivable rather than readable. It also enabled the pilot greater agency, by not having them just follow rules after instrument readings, but empowering them to navigate multiple variables within parameters to stay on target.

In Wilckens’ Fig. 6, above, you can see the basics of what would wind up on sci-fi screens decades later: shapes repeated into 3D space ahead of the aircraft to give the pilot a sense of an ideal path through the air. Stay in the tunnel and keep the plane safe.

Mulder notes that the next landmark developments come from the work of Arthur Grunwald & S. J. Merhav between 1976 and 1978. Their research illustrates the importance of augmenting the display and of including a preview of the aircraft in the display. They called this preview the Flight Path Predictor Symbol, or FPS. I’ve also seen it called the birdie in more modern papers, which is a lot more charming. It’s that plus symbol in the Grunwald illustration, below. Later, in 1984, Grunwald also showed that a heads-up display increased precision adhering to a curved path. So, HUDs good.

 n.b. This is Mulder’s representation of Grunwald’s display format.

I have also seen lots of examples of—but cannot find the research provenance for—tools for helping the pilot stay centered, such as a “ghost” reticle at the center of each frame, or alternately brackets around the FPS, called the Flight Director Box, that the pilot can align to the corners of the frames. (I’ll just reference the brackets. Gestalt be damned!) The value of the birdie combined with the brackets seems very great, so though I can’t cite their inventor, and it wasn’t in Mulder’s thesis, I’ll include them as canon.

The takeaway is really that these displays are richly studied, and we can have high confidence in the pattern.

Elements of an archetypical tunnel-in-the-sky display

There are lots of nuances that have been studied for these displays. Take for example the effect that angling the frames has on pilot banking, and the perfect time offset to nudge pilot behavior closer to ideal banking. For the purposes of sci-fi interfaces, however, we can reduce the critical components of the real-world pattern down to four.

  1. Square shapes (called frames) extending into the distance that describe an ideal path through space
    1. The frame should be about five times the width of the craft. (The birdie you see below is not proportional and I don’t think it’s standard that they are.)
    2. The distances between frames will change with speed, but be set such that the pilot encounters a new one every three seconds.
    3. The frames should adopt perspective as if they were in the world, being perpendicular to the flight path. They should not face the display.
    4. The frames should tilt, or bank, on curves.
    5. The tunnel only needs to extend so far, about 20 seconds ahead in the flight path. This makes for about 6 frames visible at a time.
  2. An aircraft reference symbol or Flight Path Predictor Symbol (FPS, or “birdie”) that predicts where the plane will be when it meets the position of the nearest frame. It can appear off-facing in relation to the cockpit.
    1. These are often rendered as two L shapes turned base-to-base with some space between them. (See one such symbol in the Snow example above.)
    2. Sometimes (and more intuitively, imho) as a circle with short lines extending out the sides and the top. Like a cartoon butt of a plane. (See below.)
  3. Contour lines connect matching corners across frames
  4. A horizon line
This comp illustrates those critical features.
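The spacing rules in that list reduce to simple arithmetic: frames sit three seconds of flight apart, and the tunnel extends twenty seconds ahead. A quick sketch:

```python
# Sketch of the frame-spacing rules above: a new frame every 3 seconds of
# flight, tunnel extending 20 seconds ahead of the craft.
FRAME_INTERVAL_S = 3
TUNNEL_HORIZON_S = 20

def frame_positions(speed_m_s: float) -> list:
    """Distances (meters) ahead of the craft at which to draw frames."""
    spacing = speed_m_s * FRAME_INTERVAL_S
    horizon = speed_m_s * TUNNEL_HORIZON_S
    positions = []
    d = spacing
    while d <= horizon:
        positions.append(d)
        d += spacing
    return positions

# At 100 m/s: frames at 300 m, 600 m, ... out to 1,800 m -- about six visible,
# matching the rule of thumb above.
print(frame_positions(100.0))
```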

There are of course lots of other bits of information that a pilot needs. Altitude and speed, for example. If you’re feeling ambitious, and want more than those four, there are other details directly related to steering that may help a pilot.

  • Degree-of-vertical-deviation indicator at a side edge
  • Degree-of-horizontal-deviation indicator at the top edge
  • Center-of-frame indicator, such as a reticle, appearing in the upcoming frame
  • A path predictor 
  • Some sense of objects in the environment: If the display is a heads-up display, this can be a live view. If it is a separate screen, some stylized representation of what the pilot would see if the display were superimposed onto their view.
  • What the risk is when off path: Just fuel? Passenger comfort? This is most important if that risk is imminent (collision with another craft, mountain) but then we’re starting to get agentive and I said we wouldn’t go there, so *crumbles up paper, tosses it*.

I haven’t seen a study showing efficacy of color and shading and line scale to provide additional cues, but look closely at that comp and you’ll see…

  • The background has been level-adjusted to increase contrast with the heads-up display
  • A dark outline around the white birdie and brackets to help visually distinguish them from the green lines and the clouds
  • A shadow under the birdie and brackets onto the frames and contours as an additional signal of 3D position
  • Contour lines diminishing in size as they extend into the distance, adding an additional perspective cue and limiting the amount of contour to the 20 second extents.
Some other interface elements added.

What can you play with when designing one in sci-fi?

Everything, of course. Signaling future-ness means extending known patterns, and sci-fi doesn’t answer to usability. Extend for story, extend for spectacle, extend for overwhelmedness. You know your job better than me. But if you want to keep a foot in believability, you should understand the point of each thing as you modify it and try not to lose that.

  1. Each frame serves as a mini-game, challenging the pilot to meet its center. Once that frame passes, that game is done and the next one is the new goal. Frames describe the near term. Having corners to the frame shape helps convey banking better. Circles would hide banking.
  2. Contour lines, if well designed, help describe the overall path and disambiguate the stack of frames. (As does lighting and shading and careful visual design, see above.) Contour lines convey the shape of the overall path and help guide steering between frames. Kind of like how you’d need to see the whole curve before drifting your car through one, the contour lines help the pilot plan for the near future. 
  3. The birdie and brackets are what a pilot uses to know how close to the center they are. The birdie needs a center point. The brackets need to match the corners of the frame. Without these, it’s easier to drift off center.
  4. A horizon line provides feedback for when the plane is banked.
THIS BAD: You can kill the sense of the display by altering (or in this case, omitting) too much.

Since I mentioned that each frame acts as a mini-game, a word of caution: Just as you should be skeptical when looking to sci-fi, you should be skeptical when looking to games for their interfaces. The simulator best known for accuracy (Microsoft Flight Simulator) doesn’t appear to have a tunnel-in-the-sky display, and other categories of games may not be optimizing for usability as much as for plain fun, with crashing your virtual craft just being part of the game. That’s not an acceptable outcome in real-world piloting. So be cautious about using game interfaces as models for this, too.

This clip of stall-testing in the forthcoming MSFS2020 still doesn’t appear to show one. 

So now let’s look at the three examples of sci-fi tunnel-in-the-sky displays in chronological order of release, and see how they fare.

Three examples from sci-fi

Here’s how each fares against those ideal components.

Alien (1979)
Blade Runner (1982)

Quick aside on the Blade Runner interface: The spikes at the top and the bottom of the frame serve, in straight tunnels, as a horizontal degree-of-deviation indicator. They would not help as much in curved tunnels, and there is no matching vertical degree-of-deviation indicator. Unless that’s handled automatically, like a car on a road, its absence is notable.

Starship Troopers (1997) We only get 15 frames of this interface in Starship Troopers, as Ibanez pilots the escape shuttle to the surface of Planet P. It is very jarring to see as a repeating gif, so accept this still image instead. 

Some obvious things we see missing from all of them are the birdie, the box, and the contour lines. Why is this? My guess is that the computational power available in 1979 was not enough to manage those extra lines, and Ridley Scott just went with the frames. Then, once the trope had been established in a blockbuster, designers just kept repeating the trope rather than looking to see how it worked in the real world, or having the time to work through the interaction logic. So let me say:

  • Without the birdie and box, the pilot has far too much leeway to make mistakes. And in sci-fi contexts, where the tunnel-in-the-sky display is shown mostly during critical ship maneuvers, their absence is glaring.
  • Also the lack of contour lines might not seem as important, since the screens typically aren’t shown for very long, but when they twist in crazy ways they should help signal the difficulty of the task ahead of the pilot very quickly.

Note that sci-fi will almost certainly encounter problems that real-world researchers will not have needed to consider, and so there’s plenty of room for imagination and additional design. Imagine helping a pilot…

  • Navigating the weird spacetime around a singularity
  • Bouncing close to a supernova while in hyperspace
  • Dodging chunks of spaceship, the bodies of your fallen comrades, and rising plasma bombs as you pilot shuttlecraft to safety on the planet below
  • AI on the ships that can predict complex flight paths, modify them in real time, and even assist with it all
  • Needing to have the tunnel be occluded by objects visible in a heads up display, such as when a pilot is maneuvering amongst an impossibly-dense asteroid field. 

…to name a few off the top of my head. These things don’t happen in the real world, so they would be novel design challenges for the sci-fi interface designer.


So, now we have a deeper basis for discussing, critiquing, and designing sci-fi tunnel-in-the-sky displays. If you are an aeronautic engineer, and have some more detail, let me hear it! I’d love for this to be a good general reference for sci-fi interface designers.

If you are a fan, and can provide other examples in the comments, it would be great to see other ones to compare.

Happy flying, and see you back in Blade Runner in the next post.

The Design of Evil

The exports from my keynote at Dark Futures.

Way back in the halcyon days of 2015 I was asked by Phil Martin and Jordan of Speculative Futures SF to make a presentation for one of their early meetings. I immediately thought of one of the chapters that I had wanted to write for Make It So: Interaction Design Lessons from Sci-Fi, but had been cut for space reasons, and that is: How is evil (in sci-fi interfaces) designed? There were some sub-questions in the outline that went something like this.

  • What does evil look like?
  • Are there any recurring patterns we can see?
  • What are those patterns?
  • Why would they be the way they are?
  • What would we do with this information?

I made that presentation. It went well, I must say. Then I forgot about it until Nikolas Badminton of Dark Futures invited me to participate in his first-ever San Francisco edition of that meetup in November of 2019. In hindsight, maybe I should have done a reading from one of my short stories that detail dark (or very, very dark) futures, but instead, I dusted off this 45 minute presentation and cut it down to 15 minutes. That also went well I daresay. But I figure it’s time to put these thoughts into some more formal place for a wider audience. And here we are.

Nah, they’re cool!

Wait…Evil?

That’s a loaded term, I hear you say, because you’re smart, skeptical, loathe bandying about such dehumanizing terms lightly, and relish nuance. And you’re right. If you were to ask this question outside the domain of fiction, you’d run up against lots of problems. Most notably that—as Socrates says in Plato’s Meno dialogue—by the time someone commits something that most people would call “evil,” they have gone through the mental gymnastics to convince themselves that whatever they’re doing is not evil. A handy example menu of such lies-to-self follows.

  • It’s horrible but necessary.
  • They deserve it.
  • The sky god is on my side.
  • It is not my decision.
  • I am helpless to stop myself.
  • The victim is subhuman.
  • It’s not really that bad.
  • I and my tribe are exceptional and not subject to norms of ethics.
  • There is no quid pro quo.

And so, we must conclude, since nobody thinks they’re evil, and most people design for themselves, no one in the real world designs for evil.

Oh well?

But the good news is we are not outside the domain of fiction; we’re soaking in it! And in fiction, there are definitely characters and organizations who are meant to be—and be read by the audience as—evil, as the bad guys. The Empire. The First Order. Zorg! The Alliance! Norsefire! All evil, and all meant to be unambiguously so.

Norsefire, from V for Vendetta.

And while alien biology, costume, set, and prop design all enable creators to signal evil, this blog is about interfaces. So we’ll be looking at eeeevil interfaces.

What we find

Note that in earlier cinema and television, technology was less art directed and less branded than it is today. Even into the 1970s, art direction seemed to be trying to signal the sci-fi-ness of interfaces rather than the character of the organizations that produced them. Kubrick expertly signaled HAL’s psychopathy in 1968’s 2001: A Space Odyssey, and by the early 1980s more and more films had begun to follow suit, not just with evil AI but with interfaces created and used by evil organizations. Nowadays I’d be surprised to find an interface in sci-fi that didn’t signal the character of its user or the source organization.

Evil interfaces, circa Buck Rogers (1939).

Note that some evil interfaces don’t adhere to the pattern. They don’t in and of themselves signal evil, even if someone is using them to commit evil acts. Physical controls, especially, are most often bound by functional and ergonomic considerations rather than style, whereas digital interfaces are much less constrained.

Many of the interfaces fall into two patterns: one of visual appearance, the other of a recurrent shape. More about each follows.

1. High-contrast, high-saturation, bold elements

Evil has little filigree. Elements are high-contrast and bold with sharp edges. The colors are highly saturated, very often against black. The colors vary, but the palette is primarily red-on-black, green-on-black, and blue-on-black.

Mostly red-on-black

The overwhelming majority of evil technologies are blood-red on black. This pattern appears across the technologies of evil, whether screen, costume, sets, or props.

Red-on-black accounts for maybe 3/4 of the examples I gathered.

Sometimes a sickly green

Less than a quarter focus on a sickly or unnatural green.

Occasionally calculating blue

A handful of examples are a cold-and-calculating blue on black.

A note of caution: While evil is most often red-on-black, red does not, in and of itself, denote evil. It is a common color for urgency warnings in sci-fi. See the tag for big red label examples.

Not evil, just urgent.

2. Also, evil is pointy

Evil also has a lot of acute angles in its interfaces. Spikes, arrows, and spurs appear frequently. In a word, evil is often pointy.

Why would this be?

Where would this pattern of high-saturation, high-contrast, pointy, mostly red-on-black come from?

Now, usually, I try to run numbers and do due diligence: looking for counter-evidence, checking scope, and testing statistical significance. But this post is going to be less research and more reason. I’d be interested if anyone else wants to run or share a more academically grounded study.

I can’t imagine that these patterns in sci-fi are arbitrary. While a great number of shows may be camping on tropes that were established in shows that came before them, the tropes would not have survived if they didn’t tap some ground truth. And there are universal ground truths to work with.

My favorite example of this is the takete-maluma effect from phonosemantics, first tested by Wolfgang Köhler in 1929. Given the two images below, and the two names “maluma” and “takete”, 95–98% of people would rather assign the name “takete” to the spiky shape on the left, and “maluma” to the curvy shape on the right. This effect has been tested in 1947 and again in 2001, with slightly different names but similar results, across cultures and continents.

What this tells us is that there are human universals in the interpretation of forms.

I believe these universals come from nature. So if we turn to nature, where do we see this kind of high-contrast, high-saturation patterning? There is a place. To explain it, we have to dip a bit into evolution.

Aposematics: Signaling theory

Evolution, in the absence of heavy reproductive pressures, will experiment with forms, often as a result of sexual selection. If through this experimentation a species develops conspicuousness, and the members are tasty and defenseless, that trait will be devoured right out of the gene pool by predators. So conspicuousness in tasty and defenseless species is generally selected against. Inconspicuousness and camouflage are selected for.

Would not last long outside of a pig disco.

But if the species is unpalatable, like a ladybug, or aggressive, like a wolverine, or with strong defenses, like a wasp, the naïve predator learns quickly that the conspicuous signal is to be avoided. The signal means Don’t Fuck with Me. After a few experiences, the predator will learn to steer clear of the signal. Even if the defense kills the attacker (and the lesson lost to the grave), other attackers may learn in their stead, or evolution will favor creatures with an instinct to avoid the signal.

In short, a conspicuous signal that survives becomes a reinforcing advertisement in its ecosystem. This is called aposematic signaling.

There are many interesting mimicry tactics you should check out (if for no other reason than that they can explain things like Dolores Umbridge), but for our purposes, it is enough to know that danger has a pattern in nature, and it tends toward, you guessed it, bold, high-contrast, high-saturation patterns, including spikes.

Looking at the color palette in nature’s examples, though, we see many saturated colors, including lots of yellows. We don’t see yellow predominant in sci-fi evil interfaces. So why is sci-fi human evil red & black? Here I go out on a limb without even the benefit of an evolutionary theory, but I think it’s simply blood and night.

Not blood, just cherry glazing.

When we see blood on a human outside of menstruation and childbirth, it means some violence or sickness has happened to them. (And childbirth is pretty violent.) So, blood red is often a signal of danger.

And we are a diurnal species, optimized for daylight, and maladapted for night. Darkness is low-information, and with nocturnal predators around, high-risk. Black is another signal for danger.

This is fine.

And spikes? Spikes are just physics. Thorns and claws tell us this shape means pointy, puncturing danger.

So I believe the design of evil in sci-fi interfaces (and really, sci-fi shows generally) looks the way it does because of aposematics, because of these patterns that are familiar to us from our experience of the world. We should expect most of evil to embody these same patterns.

What do designers do with this?

So if I’m right, it bears asking: What do we do with this? (Recall that the “tag line” for this project is “Stop watching sci-fi. Start using it.”) I think it’s a big start to simply be aware of these patterns. Once you are, you can use them for products and services whose brand promise includes the anti-social, tough-guy message Don’t Fuck with Me.

Or, conversely, if you are hoping to create an impression of goodness, safety, and nurturance, avoid these patterns. Choose different palettes, roundness, and softness.

What should people not do with this?

As a last note, it’s important not to overgeneralize this. While a lot of evil, like, say, Nazis, utilizes aposematic signals directly, some will adopt mimicry patterns to appear safe, welcoming, and friendly. Some evil will wear beige slacks and carry tiki torches. Others will surround themselves with in-group signals, like wrapping themselves in the flag, to make you think they’re a-OK. Still others will hang fuzzy-wuzzy kitty-witty pictures all over their office.

Is there a better example in sci-fi? @me.

Do not be fooled. Evil is as evil does, and signaling in sci-fi is a narrative convenience. Treat the surface of things as a signal to consider, subordinate to a person’s or a group’s actual behavior.

Evaluating strong AI interfaces in sci-fi

Regular readers have detected a pause. I introduced Colossus to review it, and then went silent. This is because I am wrestling with some foundational ideas on how to proceed. Namely, how do you evaluate the interfaces to speculative strong artificial intelligence? This, finally, is that answer. Or at least a first draft. It’s giant and feels sprawling and almost certainly wrong, but trying to get this perfect is a fool’s errand, and I need to get this out there so we can move on.

This is a draft.

I expect most readers are less interested in this kind of framework than they are how it gets applied to their favorite sci-fi AIs. If you’re mostly here for the fiction, skip this one. It’s long.



Gendered AI: An infographic

To date, the #GenderedAI study spans many posts, lots of words and some admittedly deep discussion. If you’re a visual person like me, sometimes you just want to see a picture. So, I made an infographic. It’s way too big for WordPress, so you’ll have to peruse this preview and head over to IMGUR to scroll through the full-size thing in all its nerdy glory. (https://imgur.com/k6wtuop) That site does marvelously with long, tall images.

Anyway this should make it easy to grok the big takeaways from the study and to share on social media so more people can get sensitized to these issues. Also… (more below)

…Please help me get this content in front of creators at SxSW 2020. Head over to their panelpicker and vote up the submission (You can use that link or this one). If accepted, the panel will include awesome sci-fi author and futurist Madeline Ashby and awesome author and podcaster Leila A. McNeill of the Lady Science podcast and of course myself! Thank you!

http://panelpicker.sxsw.com/vote/98525

A Default Gender?

By guest blogger Cathy Pearl

In 8th grade, I went on our class trip to Washington D.C. The hotel we were staying at had kids from all over the country, and one night they held a dance.  I had changed into sweats and a t-shirt and was dancing away with my friends when a boy walked up behind me, tapped me on the shoulder, and said, “Fairy!”

“I think we both know the answer to that.” —Cortana, Halo: Combat Evolved

When I turned around and the boy realized I was a girl, he got a confused look on his face, mumbled something and walked off.  I was left feeling angry and hurt.

Humans have a strong pull to identify gender not just in people, but in robots, animals, and even smart speakers.  (Whether that is wrong or right is another matter that I don’t address here, but many people are uncomfortable when gender is ambiguous.)

Even robots, which could easily be genderless, are assigned a gender.

Author Chris Noessel has accumulated an amazing set of data that looks at hundreds of characters in science fiction, and has found, among many other things, that of the 327 AI characters he looked at, about twice as many are male as female.

Social Gender

Noessel has further broken down gender assignment into types:  social, bodily, and biological. I find the “social” category particularly interesting, which he defines as follows:

Characters are tagged as socially male or female if the only cues are the voice of the actor or other characters use gendered pronouns to refer to it. R2D2 from Star Wars, for example, is referred to as “him” or “he” many times, even though he has no other gender markers, not even voice. For this reason, R2D2 is tagged as “socially male.”

Disturbingly, Noessel found that the gender ratio was skewed most for this category, at 5 male characters for every 1 female.

I believe that much of the time, when writers create an AI character, it is male by default, unless there is something important about being female.  For example, if the character is a love interest or mother, then it must be female; otherwise, by default, it’s male. This aligns with the “Men Are Generic, Women Are Special” theory from TV Tropes, which states:

This leads to the Smurfette Principle, in which a character’s femaleness is the most important and interesting thing about her, often to exclusion of all else. It also tends to result in works failing The Bechdel Test, because if there’s a potential character who doesn’t have to be any particular gender, the role will probably be filled by a male character by default. 

TV Tropes

Having been designing and researching voice interfaces for twenty years, I’d like to add some perspective on how gender and AI is applied to our current technology.

In the real world

One exception to this rule is voice assistants, such as Siri, Cortana, and Alexa.  The majority of voice assistants have a female voice, although some allow you to change the default to a male voice. On the other hand, embodied robots (such as Jibo (pictured below), Vector, Pepper, and Kuri) are more often gendered as male.

When a robot is designed, gender does not have to be immediately assigned.  In a voice assistant, however, it’s the most apparent characteristic.

In his book Wired for Speech, Clifford Nass wrote that individuals generally perceive female voices as helping us solve our problems by ourselves, while they view male voices as authority figures who tell us the answers to our problems.

If voice-only assistants are predominantly given female voices, why are robots any different?

Why are robots different?

One reason is androcentrism: the default for many things in society is male, and whatever differs from that default must be marked in some way. When people see a robot with no obviously “female” traits (such as long hair, breasts, or, in the case of Rosie from the Jetsons, an apron) they usually assign a male gender, as this study found. It’s similar for cartoons such as stick figures, and animals in animated movies. Animals are often given unrealistic bodies (such as a nipped-in waist), a hairbow, or larger, pink lips to “mark” them as female.  

It would not be surprising if designers felt that to make a robot NOT male, they would have to add exaggerated features. Imagine if, after R2D2 was constructed, George Lucas said “let’s make R2D2 female”.  Despite the fact that nothing would have to be changed (apart from the “he” pronoun in the script), I have no doubt the builders would have scrambled to “female-ize” R2D2 by adding a pink bow or something equally unnecessary. 

“There. Perfect!” (This is actually R2-KT. Yes, she was created to be the female R2-D2.)

In addition, male characters in fictional works are often more defined by their actions, and female characters by their looks and/or personalities.  In this light, it makes sense that a more physical assistant would be more likely to be male.

There are some notable exceptions to this, mainly in the area of home health robots (such as Mabu). It is interesting to note that although Mabu has a physical form, the body doesn’t move, just the head and eyes; it serves mainly as a holder for an iPad. Again, she’s an assistant.

So what?

One may ask, what’s the harm in these gendered assistants? One problem is the continued reinforcement of women as always helpful, pleasant, organized, and never angry.  They’re not running things; they’re simply paving the way to make your life easier. But if you want a computer that’s “knowledgeable”—such as IBM’s Watson that took on the Jeopardy! Challenge—the voice is male.  These stereotypes have an impact on our relationships with real people, and not for the better. There shouldn’t be a “default” gender, and it’s time to move past our tired stereotypes of women as the gender that’s always helpful and accommodating. 

As fans of sci-fi, we should become at least sensitized, and hopefully vocal and active, about this portrayal of women, and do our part to create more equal technology.


My donation

Thanks to all who donated to compensate underrepresented voices! I am donating the monies I’ve received to the Geena Davis Institute on Gender in Media. This group “is the first and only research-based organization working within the media and entertainment industry to engage, educate, and influence content creators, marketers and audiences about the importance of eliminating unconscious bias, highlighting gender balance, challenging stereotypes, creating role models and scripting a wide variety of strong female characters in entertainment and media that targets and influences children ages 11 and under.” Check them out.

Gendered AI: Germane-ness Correlations

The Gendered AI series looks at sci-fi movies and television to see how Hollywood treats AI of different gender presentations. For example…

  • Do female- and male-presenting AIs get different bodies? Yes.
  • Are female AIs more subservient? No.
  • How does gender correlate to an AI’s goodness? Males are extremists.
  • Men are more often masters of female AIs. Women are more often masters of non-binary AIs. Male AIs shy away from having women masters. No, really.

This last correlations post investigates the complicated question of which genders are assigned when gender is not germane to the plot. If you haven’t read the series intro, related germane-ness distributions, or correlations 101 posts, I recommend you read them first. As always, check out the live Google sheet for the most recent data.

Recall from the germane distribution post that the germane tag is about whether the gender is important to the plot. (Yes, it’s fairly subjective.)

  • If an AI character makes a baby via common biological means, or their sex-related organs play a critical role, then the gender of the character is highly germane. Rachael in the Blade Runner franchise gestates a baby, so her having a womb is critical, and as we’ve seen in the survey, gender stacks, so her gender is highly germane.
  • If an AI character has a romantic relationship with a mono-sexual partner, or is themselves mono-sexual, or they occupy a gendered social role that is important to the plot, the character is listed as slightly germane. For example, all you’d have to do is, say, make Val Com bisexual or gay, and then they could present as female and nothing else in the plot of Heartbeeps would need to change to accommodate it.
  • If the character’s gender could be swapped to another gender and it not change the story much, then we say that the character’s gender is not germane. BB-8, for instance, could present as female, and nothing in the canon Star Wars movies would change.
Yes, this matters.

I need to clarify that I’m talking about plot—what happens in the show—rather than story—which entails the reasons it is told and effects—because given the nature of identity politics, a change in gender presentation would often change how the story is received and interpreted by the audience.

All the characters in Alien, for instance, were written unisex, to be playable by actors of any sex or gender presentation. So while it “didn’t matter” that Ripley was cast as Sigourney Weaver, it totally did matter because she was such a bad-ass female character whose gender was immaterial to the plot (we hadn’t had a lot of those at this point in cinematic history). She was just a bad-ass who happened to be female, not female because she “needed” to be. So, yes, it does matter. But diegetically, had she been Alan Ripley, the plot and character relationships of Alien would not need to change. He still damned well better save Jonesy.

So what do we see when we look at the germane-ness of AI characters in a mostly-binary way?

Sure enough, when gender matters to the plot—slightly or highly—the character is 5.47% more likely to present female, or about 7% more likely than presenting male. When gender presentation does not matter, that value flips, being around 7% more male than female, and around 9% more other than female.
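If you want to poke at the live sheet yourself, deviation-from-expected figures like these can be computed with a row-normalized cross-tab. Here is a minimal sketch in Python; note that the rows, column names, and tag values below are invented placeholders, not the survey’s actual schema:

```python
import pandas as pd

# Placeholder rows standing in for the survey's character tags.
# The real data lives in the live Google Sheet; these are invented.
df = pd.DataFrame({
    "gender":  ["female", "male", "female", "other", "male", "female", "male", "other"],
    "germane": ["slightly", "not", "highly", "not", "not", "slightly", "not", "slightly"],
})

# Observed share of each gender within each germane-ness level.
observed = pd.crosstab(df["germane"], df["gender"], normalize="index")

# Expected share is simply each gender's share of the whole survey.
expected = df["gender"].value_counts(normalize=True)

# Deviation from expectation, in percentage points: positive means
# that gender is over-represented at that germane-ness level.
deviation = (observed - expected) * 100
print(deviation.round(2))
```

Each row of `deviation` sums to zero: over-representation of one gender at a given germane-ness level is exactly balanced by under-representation of the others.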

The sample size for highly germane is vanishingly small, and one would expect the coupling to include a male, so the under-noise values for that category are not too surprising. But the other categories. Holy cow.

Put another way…

AI characters more often present as female only when they need to be.

Otherwise, they’re more often male or not gendered at all.

That is shitty. It’s like Hollywood thinks men are the default gender, and I know I just said it, but I’m going to say it again—that’s shitty. Hey, Hollywood. Women are people.

Ayup.

Gendered AI: Gender of Master Correlations

The Gendered AI series looks at sci-fi movies and television to see how Hollywood treats AI of different gender presentations. For example…

  • Do female-presenting AIs get different bodies than male-presenting AIs? Yes.
  • Are female AIs more subservient? No.
  • How does gender correlate to an AI’s goodness? Males are extremists.

This particular post asks who the masters of AIs are. If you haven’t read the series intro, related master distributions, or correlations 101 posts, I recommend you read them first. As always, check out the live Google sheet for the most recent data.

Barbarella (female-presenting human) is master of Alphy (an AI whose voice presents male.) This is, statistically, an unlikely and unrepresentative relationship, but spot on for the late 01960s-feminist bent of Barbarella.

You may be wondering how this is different from the earlier subservience posts. Recall that the subservience studies look at gender presentation of AI as it relates to their own degree of freedom. Are most AIs free-willed? Yes. Do free-willed AI tend to present as boys more often than as girls or other? Yes. But these tell us nothing about the relationship between a subservient AI’s gender and its master’s gender. It would be one thing if all the male-presenting AIs were “owned” by male-presenting owners. It would be another if female-presenting AIs were owned much more often by male-presenting masters. This post exposes those correlations in the survey. Chart time!

Data nerds (high fives) may note that unlike every other correlations chart in the series, these numbers don’t balance. For instance, looking at the Male AI in the left chart, -1.63 + 3.97 + 3.97 = 6.31. Shouldn’t they zero out? If we were looking at the entire survey, they would. But in this case, free-willed AI only muddy this picture, so those AIs are omitted, making the numbers seem wonky. Check the live sheet if you’re eager to dig into the data.
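That wonkiness is easy to reproduce: shares computed over a filtered population no longer cancel against a full-population baseline, because the omitted category’s share gets redistributed. A toy illustration, with invented tags rather than the survey’s data:

```python
import pandas as pd

# Invented tags: gender of each AI's master, with "free" standing in
# for free-willed AIs that have no master at all.
masters = pd.Series(["male", "male", "female", "free", "free", "other", "male", "free"])

# Share of each category across the whole toy survey, in percent.
share_all = masters.value_counts(normalize=True) * 100

# Share recomputed after omitting the free-willed AIs, as the charts do.
share_owned = masters[masters != "free"].value_counts(normalize=True) * 100

# The per-category deviations no longer cancel out: every remaining
# category gains share from the omitted one, so the sum is positive.
deviation = share_owned - share_all.drop("free")
print(deviation.round(2))
```

The deviations sum to exactly the share held by the omitted free-willed category, which is why the numbers in the charts look wonky without being wrong.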

This is two charts in one.

The left chart groups the data by genders of master. Turns out if you have a female-presenting master, you are unlikely to be male- or female-presenting. (Recall that there are only 5 female-presenting masters in the entire Gendered AI survey, so the number of data points is low.) If you present as male, you’re more likely to be master of a gendered AI. Otherwise, you are more likely to be master of a male-presenting AI.

Your AI may not be happy about it, though.

The right chart is the same data, but pivoted to look at it from genders of AI. That’s where the clusters are a little more telling.

  • If you are a female-presenting AI, you are more likely to have a male-presenting master.
  • If you are non-binary AI, you are more likely to have a female-presenting master.
  • If you are a male AI, you have anything but a female-presenting master.

The detailed chart doesn’t reveal anything more than we see from this aggregate, so it isn’t shown.

The notion of people owning people is revolting, but the notion of owning an AI is still not universally reviled. (With nods to the distinctions of ANI and AGI.) That means that sci-fi AI serves as a unique metaphor for taboo questions of gender and ownership. The results are upsetting for their social implications, of course. And sci-fi needs to do better. Hey, maybe this gives you an idea…

And yet this isn’t the most upsetting correlations finding in the study. I saved that for last, which is next, which is when we look at gender and germaneness. Gird your loins.

Gendered AI: Gender and Goodness

The Gendered AI series looks at sci-fi movies and television to see how Hollywood treats AI of different gender presentations. For example, do female-presenting AIs get different bodies than male-presenting AIs? (Yes.) Are female AIs more subservient? (No.) What genders are the masters of AI? This particular post is about gender and goodness. If you haven’t read the series intro, related goodness distributions, or correlations 101 posts, I recommend you read them first. As always, check out the live Google sheet for the most recent data.

n.b. If you’re looking at the live sheet, you may note it says “alignment” rather than “goodness” in the dropdown and sheets. Sorry about the D&D roots showing. But by this, I mean a rough, highly debatable scale of saintliness to villainy.

Gender and goodness

What do we see when we look at the correlations of gender and level of goodness? There are three big trends.

  1. The aggregate picture shows a tendency for female-presenting AIs to be closer to neutral, rather than extreme.
  2. It shows a tendency for male-presenting AIs to be very good, or very evil.
  3. It shows a slight tendency for nonbinary-presenting AIs to be slightly evil, but not full-bore.
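Trends like these fall out of a cross-tab of gender against the goodness scale. A hedged sketch follows; the rows are invented placeholders (deliberately constructed to mirror the three trends above), not the survey’s real tags:

```python
import pandas as pd

# Placeholder rows; the survey's actual alignment scale runs from
# saintly to villainous, collapsed here to a few coarse levels.
df = pd.DataFrame({
    "gender":   ["male", "male", "female", "female",
                 "nonbinary", "male", "female", "nonbinary"],
    "goodness": ["very good", "very evil", "neutral", "neutral",
                 "somewhat evil", "very evil", "somewhat evil", "somewhat evil"],
})

# Row-normalized cross-tab: for each gender, the percentage of its
# characters at each goodness level.
profile = pd.crosstab(df["gender"], df["goodness"], normalize="index") * 100
print(profile.round(1))
```

In this toy data, the male row concentrates at the extremes, the female row at neutral, and the nonbinary row at somewhat evil; the real chart is read the same way.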

When we look into the detailed chart, some additional trends appear.

  • Biologically- and bodily-presenting female AIs tend toward somewhat evil, but not very evil.
  • Socially female AIs (voice or pronouns only) tend toward neutral.
  • Gender-less AI spike at somewhat evil.
  • Genderfluid characters (noting that this occurs mostly as a tool of deception) spike at very evil, like, say, Skynet.
  • AIs showing multiple genders tend toward neutral, like Star Trek TOS’s Exo III androids, or somewhat evil, like Mudd’s androids.