Make It So: The Clippy Theory of Star Trek Action

My partner and I spent much of March watching episodes of Star Trek: The Next Generation in mostly random order. I’d seen plenty of Trek before—watching pretty much all of DS9 and Voyager as a teenager, and enjoying the more recent J.J. Abrams reboot—but it’s been years since I really considered the franchise as a piece of science fiction. My big takeaway is…TNG is bonkers, and that’s okay. The show is highly watchable because it’s really just a set of character moments, risk taking, and ethical conundrums strung together with pleasing technobabble, which soothes and hushes the parts of our brain that might object to the plot based on some technicality. It’s a formula that will probably never lose its appeal.

But there is one thing that does bother me: how can the crew respond to Picard’s orders so fast? Like, beyond-the-limits-of-reason fast.

A 2-panel “photonovella.” Above, Picard approaches Data and says, “Data, ask the computer if it can use the Voynich Manuscript and i-propyl cyanide to somehow solve the Goldbach Conjecture.” Below, under the caption, “Two taps later…” Data replies, “It says it will have the answer by the commercial break, Captain.”

How are you making that so?

When the Enterprise-D encounters hostile aliens, ship malfunctions, or a mysterious space-time anomaly, we often get dynamic moments on the bridge that work like this. Data, Worf, and the other bridge crew, sometimes with input from Geordi in engineering, call out sensor readings and ship functionality metrics. Captain Picard stares toward the viewscreen/camera and gives orders, sometimes intermediated by Commander Riker. Worf or Data will tap once or twice on their consoles and then quickly report the results—e.g. “our phasers have no effect” or “the warp containment field is stabilizing,” that sort of thing. It all moves very quickly, and even though the audience doesn’t quite know the dangers of tachyon radiation or how tricky it is to compensate for subspace interference, we feel a palpable urgency. It’s probably one of the most recognizable scene types in television.

Now, extradiegetically, I think there are very good reasons to structure the action this way. It keeps the show moving and keeps the focus on the choices rather than the tech. And of course, diegetically, their computers would be faster than ours, responding nearly instantaneously. The crew are also highly trained military personnel, whose focus, reaction speed, and knowledge of the ship’s systems are kept sharp by regular drills. The occasional scenes we get of tertiary characters struggling with the controls only drive home how elite the Enterprise senior staff are.

A screen cap from TNG with Wil Wheaton as Wesley in the navigator seat, saying to the bridge crew, “Does…uh…anyone know where the ‘engage’ key is?”
Just kidding, we love ya, Wil.

Nonetheless, it is one thing to shout out the strength of the ship’s shields. No doubt Worf has an indicator at tactical that’s as easy to read as your laptop’s battery level. That’s bound to be routine. But it’s quite another for a crewmember to complete a very specific and unusual request in what seems like one or two taps on a console. There are countless cases of the deflector dish or tractor beam being “reconfigured” to emit this or that kind of force or radiation. Power is constantly being rerouted from one system to another. There’s a great deal of improvisational engineering by all characters.

Just to pick examples from my most recent days of binging: in “Descent, Part 2,” Beverly Crusher, as acting captain, tells the ensign at ops to launch a probe with the ship’s recent logs on it, as a warning to Starfleet, thus freeing the Enterprise to return through a transwarp conduit to take on the Borg. Or in the DS9 episode “Equilibrium”—yes, we’ve started on the next series now that TNG is off Netflix—while investigating a mysterious figure from Jadzia’s past, Sisko instructs Bashir to “check the enrollment records of all the Trill music academies during Belar’s lifetime.” In both cases, the order is complete in barely a second.

Even for Julian Bashir—a doctor and secretly a mutant genius—there is no way for a human to perform such a narrow and out-of-left-field search without entering a few parameters, perhaps navigating via menus to the correct database. From a UX perspective, we’re talking several clicks at least!

There is a tension in design between…

  • Interface elements that allow you to perform a handful of very specific operations quickly (if you know where the switch is), and…
  • Those that let you do almost anything, but slower.

For instance, this blog has big colorful buttons that make it easy to get email updates about new posts or to donate to a tip jar. If you want to find a specific post, however, you have to type something into the search box or perhaps scroll through the list of TV/movie properties on the right. While the 24th Century no doubt has somewhat better design than WordPress, they are still bound by this tension.

Of course it would be boring to wait while Bashir made the clicks required to bring up the Trill equivalent of census records or LexisNexis. With movie magic they simply edit out those seconds. But I think it’s interesting to indulge in a little backworlding and imagine that Starfleet really does have the technology to make complex general computing a breeze. How might they do it?

Enter the Ship’s AI

One possible answer is that the ship’s Computer—a ubiquitous and omnipresent AI—is doing most of the heavy lifting. Much like how Iron Man is really Jarvis with a little strategic input from Tony, I suspect that the Computer listens to the captain’s orders and puts the appropriate commands on the relevant crewman’s console the instant the words are out of Picard’s mouth. (With predictive algorithms, maybe even just before.) The crewman then merely has to confirm that the Computer correctly interpreted the orders and press execute. Similarly, the Computer must be constantly analyzing sensor data and internal metrics and curating the most important information for the crew to call out. This would be in line with the Active Academy model proposed in relation to Starship Troopers.
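To make the backworlding concrete, here’s a toy sketch of the idea. Everything in it is invented for illustration—the keyword routing table, the function names, the whole lot—and a real Computer would of course use full natural-language understanding rather than keyword matching. But the shape of the interaction is the point: the Computer stages its best interpretation of the order on the right console, and the crew member’s one or two taps are just the confirmation.

```python
# Toy model of "Clippy theory" command staging: the Computer hears an
# order, guesses which console it belongs to, and stages it there for
# a one-tap human confirmation. All names and rules here are invented.

CONSOLE_ROUTES = {
    "probe": "ops",
    "phasers": "tactical",
    "shields": "tactical",
    "warp": "engineering",
}

def stage_order(order: str) -> dict:
    """Guess which console an order belongs to and stage it there."""
    for keyword, console in CONSOLE_ROUTES.items():
        if keyword in order.lower():
            return {"console": console, "command": order, "status": "staged"}
    # The Computer couldn't interpret the order; a human must step in.
    return {"console": "unknown", "command": order, "status": "needs input"}

def confirm(staged: dict) -> dict:
    """The crew member's single tap: confirm the interpretation and execute."""
    if staged["status"] == "staged":
        staged["status"] = "executed"
    return staged

order = stage_order("Launch a probe with our recent logs")
print(confirm(order))
```

Note that the human never types the command; they only approve or reject the Computer’s guess, which is what compresses a complex request down to a tap or two.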

Centaurs, Minotaurs, and anticipatory computing

I’ve heard this kind of human-machine relationship called “Centaur Computing.” In chess, for instance, some tournaments have found that human-computer teams outperform either humans or computers working on their own. This is not necessarily intuitive, as one would think that computers, as the undisputed better chess players, would be hindered by having an imperfect human in the mix. But in fact, when humans can offer strategic guidance, choosing between potential lines that the computer games out, they often outmaneuver pure AIs.

I often contrast Centaur Computing with something I call “Minotaur Computing.” In the Centaur version—head of a man on the body of a beast—the human makes the top-level decision and the computer executes. In Minotaur Computing—head of a beast with the body of a man—the computer calls the shots and leaves it up to human partners to execute. An example of this would be the machine gods in Person of Interest, which have no Skynet Terminator armies but instead recruit and hire human operatives to carry out their cryptic plans.

In some ways this kind of anticipatory computing is simply a hyper-advanced version of AI features we already have today, such as when Gmail offers to complete my sentence when I begin to type “thank you for your time and consideration” at the end of a cover letter.
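For a sense of the mechanics, here’s a deliberately simple sketch of that completion idea. Real systems like Gmail’s Smart Compose use neural language models; this toy version (with made-up stock phrases and function names) just ranks known phrases by prefix match, but the interaction pattern is the same: the machine anticipates, the human accepts.

```python
# A minimal sketch of anticipatory text completion: given the start of
# a phrase, suggest the most likely known completion. The phrase list
# and matching strategy are invented for illustration.

STOCK_PHRASES = [
    "thank you for your time and consideration",
    "thank you for reaching out",
    "looking forward to hearing from you",
]

def suggest(prefix):
    """Return the first stock phrase that starts with the typed prefix."""
    matches = [p for p in STOCK_PHRASES if p.startswith(prefix.lower())]
    return matches[0] if matches else None

print(suggest("thank you for your t"))
# suggests the full "thank you for your time and consideration"
```

The human still makes the call: the suggestion costs one keystroke to accept and zero to ignore, which is why the pattern feels helpful rather than intrusive when it works.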

Hi, it looks like you’re trying to defeat the Borg…

In this formulation, the true spiritual ancestor of the Starfleet Computer is Clippy, the notorious anthropomorphic paperclip helper from Microsoft Office, which would pop up and make suggestions like “It looks like you’re writing a letter. Would you like help?” Clippy was much maligned in popular culture for being annoying, distracting, and the face of what was in many ways a clunky, imperfect software product. But the idea of making sense of the user’s intentions and offering relevant options isn’t always a bad one. The Computer in Star Trek performs this task so smoothly, efficiently, and in-the-background, that Starfleet crews are able to work in fast-paced harmony, acting on both instinct and expertise, and staying the heroes of their stories.

One to beam into the Sun, Captain.

Admittedly, this deftness is a bit at odds with the somewhat obtuse behavior the Computer often displays when asked a question directly, such as demanding you specify a temperature when you request a glass of water. Given how often the Computer suffers strange malfunctions that complicate life on the Enterprise for days at a time, one wonders if the crew feel as though they are constantly negotiating with a kind of capricious spirit—usually benign but occasionally temperamental and even dangerously creative in its interpretations of one’s wishes, like a djinn. Perhaps they rarely complain about or even mention the Computer’s role in Clippy-ing orders onto their consoles because they know better than to insult the digital fairies that run the turbolifts and replicate their food.

All of which lends a kind of mystical cast to those rapid, chain-of-command-tightened exchanges amongst the bridge crew when shit hits the fan. When Picard gives his crew an order, he’s really talking to the Computer. When Riker offers a sub-order, he’s making a judgment call that the Computer might need a little more guidance. The crew are there to act as QA—a general-intelligence safeguard—confirming with human eyes and brains that the Computer is interpreting Picard correctly. The one or two beeps we often hear as they execute a complex command are them merely dismissing incorrect or confused operation-lines. They report back that the probe is ready or the phasers are locked, as the captain wished, and Picard double confirms with his iconic “make it so.” It’s a multilayered checking and rechecking of intentions and plans, much like the protocols militaries use today to prevent miscommunication, but in this case with the added bonus of keeping the reins on a powerful but not always cooperative genie.

There’s a good argument to be made that this is the relationship we want to have with technology. Smooth and effective, but with plenty of oversight, and without the kind of invasive elements that right now make tech the center of so many conversations. We want AI that gives us computational superpowers, but still keeps us the heroes of our stories.


Andrew Dana Hudson is a speculative fiction author, researcher, and theorist. His first book, Our Shared Storm: A Novel of Five Climate Futures, is fresh off the press. Check it out here. And follow his work via his newsletter, solarshades.club.

Tattoo surveillance

In the prior Idiocracy post I discussed the car interface, especially in terms of how it informs the passengers about what is happening when it is remotely shut down. Today let’s talk about the passive interface that shuts it down: namely, Joe’s tattoo and the distance-scanning vending machine.

It’s been a while since that prior post, so here’s a recap of what’s happening in Idiocracy in this scene:

When Frito is driving Joe and Rita away from the cops, Joe happens to gesture with his hand above the car window, where a vending machine he happens to be passing spots the tattoo. Within seconds two harsh beeps sound in the car and a voice says, “You are harboring a fugitive named NOT SURE. Please, pull over and wait for the police to incarcerate your passenger.”

Frito’s car begins slowing down, and the dashboard screen shows a picture of Not Sure’s ID card and big red text zooming in a loop reading PULL OVER.

It’s a fast scene and the beat feels more like a filmmaker’s excuse to get them out of the car and on foot as they hunt for the Time Masheen. I breezed by it in an earlier post, but it bears some more investigation.

This is a class of transaction where, like taxes and advertising, the subject is an unwilling and probably uncooperative participant. But this same interface has to work for payment, in which the subject is a willing participant. Keep this in mind as we look first at the proximate problem, i.e. locating the fugitive for apprehension; and at the ultimate goal, i.e. how a culture deals with crime.

A quick caveat: While it’s fair to say I’m an expert on interaction design, I’m Just a Guy when it comes to criminology and jurisprudence. And these are ideas with some consequence. Feel free to jump in and engage in friendly debate on any of these points.

Proximate problem: Finding the fugitive

The red scan is fast, but it’s very noticeable. The sudden flash of light, the red color. This could easily tip a fugitive off and cause them to redouble their efforts at evasion, maybe even covering up the tattoo, making the law’s job of apprehending them that much harder. Better would be some stealthier means of detection, like RFID chips. I know, that’s not as cinegenic, so the movie version would instead use image recognition, showing the point of view from the vending machine camera (machine point of view, or MPOV), with some UI clues showing it identifying, zooming in on, and confirming the barcode.

Yes, that’s a shout-out.
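As a sketch of how that MPOV confirmation beat might work under the hood, the scanner could hold off on declaring a match until recognition confidence stays high across several consecutive frames—the on-screen “identifying… zooming… confirmed” sequence is really just this loop made visible. The threshold, frame count, and scores below are all made up for illustration.

```python
# Toy confidence-gated identification: only "commit" to a match once
# recognition confidence clears a threshold for several frames in a
# row, so a single noisy frame can't trigger a false positive.

CONFIDENCE_THRESHOLD = 0.9
FRAMES_REQUIRED = 3

def confirm_identification(frame_scores):
    """Return True once enough consecutive frames clear the threshold."""
    streak = 0
    for score in frame_scores:
        streak = streak + 1 if score >= CONFIDENCE_THRESHOLD else 0
        if streak >= FRAMES_REQUIRED:
            return True
    return False

# A jittery lock-on: confidence builds as the camera zooms in.
print(confirm_identification([0.4, 0.7, 0.92, 0.95, 0.97]))  # True
```

Requiring a streak rather than a single high score is the standard way such systems trade a little latency for far fewer false alarms—which matters when the consequence is a car being remotely shut down.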

So we can solve stealth-detection cinematically, using tropes. But anytime a designer is asked to consider a scenario, it is a good idea to see if the problem can be more effectively addressed somewhere higher up the goal chain. Is stealth-detection really better?

Goal chain

  • Why is the system locating him? To tell authorities so they can go there and apprehend him.
  • Why are they apprehending him? He has shown an inability to regulate damaging anti-social behavior (in the eyes of the law, anyway) and the offender must be incarcerated.
  • Why do we try to incarcerate criminals? To minimize potential damage to society while the offender is rehabilitated.
  • Why do we try to rehabilitate criminals? Well, in the Idiocracy, it’s an excuse for damnatio ad vehiculum, that is, violent public spectacle based on the notion that jurisprudence is about punishment-as-deterrent. (Pro-tip: That doesn’t work. Did I say that doesn’t work? Because that doesn’t work.) In a liberal democracy like ours, it’s because we understand that the mechanisms of law are imperfect and we don’t want the state to enact irreversible capital punishment when it could be wrong, and, moreover, that human lives have intrinsic value. We should try to give people who have offended a chance to demonstrate an understanding of their crime and the willingness to behave lawfully in the future. Between incarceration and rehabilitation, we seek to minimize crime.
  • Why do we try to minimize crime? (This ought to be self-evident, but juuust in case…) Humans thrive when they do not need to guard against possible attack by every other human they encounter. They can put their resources towards the pursuit of happiness rather than the defense of encroachment. Such lawful societies benefit from network effects.

The MPOV suggestion above fixes the problem at the low level of detection, but each step in the goal chain invites design at a more effective level. It’s fun to look at each of these levels and imagine an advanced-technology solution (and even find sci-fi examples of each), but for this post, let’s look at the last one, minimizing crime, in the context of the tattoo scanner.

Ultimate problem: Preventing crime

In his paper “Deterrence in the Twenty-First Century,” Daniel Nagin reviewed state-of-the-art criminology findings and listed five conclusions about deterrence. Number one on his list is that the chance of being caught is a vastly more effective deterrent than even draconian punishment.

Research shows clearly that the chance of being caught is a vastly more effective deterrent than even draconian punishment.

Daniel S. Nagin, 2013

How might we increase the evident chance of being caught?

  1. Fund police forces well so they are well-staffed, well-trained, and have a near-constant, positive presence in communities, and impressive capture rates. Word would get around.
  2. Nagin himself suggests concentrating police presence in criminal hotspots, ensuring that they have visible handcuffs and walkie-talkies.
  3. Another way might be media: making sure that potential criminals hear, through their networks, an overwhelming number of stories of criminals being captured. This could involve editorial choice, or even media manipulation, filtering to ensure that “got caught” narratives appear in feeds more than “got away with it” ones. But we’re hopefully becoming more media savvy as a result of Recent Things, and this seems more deceptive than persuasive.
  4. A fourth way is to increase the sense of observation. And that leads us (as so many things do) to the panopticon.

The Elaboratory*

The Panopticon is almost a trope at this point, but that’s what this scene points to. If you’re not familiar, it is an idea about the design of buildings in which “a number of persons are meant to be kept under inspection,” conceived in the late 1700s by Samuel Bentham and formalized by his brother Jeremy in a series of letters. Here is a useful illustration.

*Elaboratory was one of the alternate terms Bentham suggested for the idea. It didn’t catch on, since it didn’t have the looming all-seeing-eye ring of the other term.

Elevation, section, and plan as drawn by Willey Reveley, 1791

The design of the panopticon is circular, with prisoners living in isolated cells along the perimeter. The interior wall of each cell is open to view so the inmate can be observed by a person in a central tower or “inspector’s lodge.” Things are structured so the inmates cannot tell whether or not they are being observed. (Bentham suggested louvers.) Over time, the idea goes, the inmate internalizes the unseen authority as a constant presence, and begins to regulate themselves, behaving as they believe the guard would have them behave. Bentham thought this was ideal from the standpoints of both efficacy and economy.

“Ideal perfection, if that were the object, would require that each person should actually [be under the eyes of the persons who should inspect them], during every instant of time.”

—Jeremy Bentham

It’s an idea that has certainly enjoyed currency. If you hadn’t come across the idea via Bentham, you may have come across it via Foucault in Discipline and Punish, who regarded it not as a money-saving design but as an illustration of the effect of power. Or maybe Orwell, who did not use the term but extended the idea to all of society in 1984. Or perhaps you heard it from Shoshana Zuboff, who in In the Age of the Smart Machine reconceived it for information technology in a work environment.

Umm…Carol? Why aren’t you at your centrifuge?

In his podcast Theory of Everything, Benjamen Walker dedicates an episode to the argument that, as a metaphor, the panopticon needs to be put away, since…

  1. It builds on one-way observation, and modern social media has us sharing information about ourselves willingly, all the time. The diagram is more dream catcher than bicycle wheel. We volunteer ourselves to the inspector, any inspector, and can become inspectors to anyone else any time. Sousveillance. Stalking.
  2. Most modern uses of the metaphor are anti-government, but surveillance capitalism is a more pernicious problem (here in the West), where advertising uses all the information it can to hijack your reward systems and schlorp money out of you.
  3. Bentham regarded it as a tool for behavior modification, but the metaphor today is used not to talk about how surveillance changes us and our identities, but rather to decry a violation of privacy rights.

It’s a good series, check it out, and hat tip to Brother-from-a-Scottish-Mother John V Willshire for pointing me in its direction.

To Walker’s list I will add another major difference: Panopticon inmates must know they are being watched. It’s critical to the desired internalization of authority. But modern surveillance tries its best to be invisible despite the fact that it gathers an enormous amount of information. (Fortunately it often fails to be invisible, and social media channels can be used to expose the surveillance.)

Guns are bad.

But then, Idiocracy

In Idiocracy, this interface—of the tattoo and the vending machine—is what puts this squarely back in Bentham’s metaphor. The ink is in a place that will be seen very often by the owner, and a place that’s very difficult to casually hide. (I note that the overwhelming majority of Hillfinger [sic] shirts in the movie are even short-sleeved.) So it serves as that permanent—and permanently-visible—identifier. You are being watched. (Holy crap, now I have yet another reason to love Person of Interest. It’s adding to our collective media impression the notion of AI surveillance. Anyway…) In this scene, it’s a clear signal that Joe and his co-offenders could see, which means they would tell their friends this story of how easily Joe was caught. It’s pretty cunningly designed as a conspicuous signal.

Imagine how this might work throughout that world. As people went about their business in the Idiocracy, stochastic flashes of light on their own and other people’s wrists would keep sending a signal that everyone is being watched. It’s crappy surveillance, which we don’t like for all the usual reasons, but it illustrates why stealth detection may not be the ideal for crime prevention, and why this horrible tattoo might be just the thing a bunch of doomed eggheads would have designed for a future when all that was left was morons. Turns out, at least for the Idiocracy, this is a pretty well-designed signal for deterrence, which is the ultimate goal of this interface.

Beep.


Trivium remotes

Once a victim is wearing a Trivium Bracelet, any of Orlak’s henchmen can control the wearer’s actions. The victim’s expression is blank, suggesting that their consciousness is either comatose, twilit, or in some sort of locked-in state. Their actions are controlled via a handheld remote control.

We see the remote control in use in four places in Las Luchadoras vs El Robot Asesino.

  1. One gets clapped on Dr. Chavez to test it.
  2. One goes on Gemma to demonstrate it.
  3. One is removed from the robot.
  4. One goes on Berthe to transform her to Black Electra.

St. God’s: Healthmaster Inferno

After Joe goes through triage, he is directed to the “diagnosis area to the right.” He waits in a short queue, and then enters the diagnosis bay.

The attendant wears a SMARTSPEEK that says, “Your illness is very important to us. Welcome to the Healthmaster Inferno.”

The attendant, DR. JAGGER, holds three small metal probes, and hands each one to him in turn saying, “Uh, this one goes in your mouth. This one goes in your ear. And this one goes up your butt.” (Dark-side observation about St. God’s: apparently what it takes to become a doctor in Idiocracy is an ability to actually speak to patients and not just let the SMARTSPEEK do all the talking.)

Joe puts one in his mouth and is getting ready to insert the rest, when a quiet beeping causes the attendant to pause and correct himself. “Shit. Hang on a second.” He takes the mouth one back and hands him another one. “This one…No.” He gathers them together, and unable to tell them apart, he shuffles them trying to figure it out, saying “This one. This one goes in your mouth.” Joe reluctantly puts the offered probe into his mouth and continues.

The diagnosis is instant (and almost certainly UNKNOWN). SMARTSPEEK says, “Thank you for waiting. Dr. Lexus will be with you shortly.”


The probes

The probes are rounded metal cylinders, maybe a decimeter in length. They look like 3.5mm audio plugs with the tips ground off. The interface-slash-body-horror joke is that we in the audience know that you shouldn’t cross-contaminate between those orifices in a single person, much less between multiple people, and the probes look identical. (Not only that, but they aren’t cleaned or used with a sterile disposable sheath, etc.) So Joe’s not sure what he’s about to have to put in his mouth, and DR. JAGGER is too dumb to know or care.


The bay

Modeled on car wash aesthetics, the bay is a molded-plastic arch, about 4 meters to a side. The inside has a bunch of janky and unsanitary-looking medical probes and tools. Around the entrance of the bay is an array of backlit signs, clockwise from 7 o’clock:

  • Form one line | Do not push
  • (Two right-facing arrows, one blue, one orange)
  • (A stop sign)
  • (A hepatitis readout, from Hepatitis A to Hepatitis F, which does not exist.)
  • Tumor | E-Coli | Just gas | Tapeworm | Unknown
  • Gout | Lice | Leprosy | Malaria
  • (Three left-facing arrows, orange, blue, and magenta)
  • (The comp created for the movie reads…) Be probe ready | Thank you!

Theoretically, the lights help patients understand what to do and what their diagnosis is. But the instruction panels don’t seem to change, and once the patient is inside the bay, they can no longer see the diagnosis panels. The people in the queue and the lobby, however, can. So not only does it rob the patients of any bodily privacy (as they’re having to ram a probe up their rears), but it also robs them of any privacy about their diagnosis. HIPAA and GDPR are rolling around in their then-500-year-old graves.

Hygiene

A better solution would of course focus on hygiene first, offering a disposable sheath for the probes. They should still be sterilized between patients.

Because this is such a visceral reminder, I’m nominating this as the top anti-example of affordances and constraints for new designers.

Better affordances

Second would be changing the design of the probes so that they are easy to distinguish from one another. Color, shape, and labeling are initial ideas.

Better constraints

Third would be to constrain the probes so that…

  • The butt probe can’t reach up beyond the butt (maybe tying the cable to the floor? Though that means it’s likely to drop to the ground, which is clearly not sterile in this place, so maybe tying it to the wall and having it klaxon loudly if it’s above butt height.)
  • The mouth probe can’t reach below the head (maybe tying the cable to the ceiling)
  • The ear probe should be smaller and ear-shaped rather than some huge eardrum-piercing thing.

And while modesty is clearly not an issue for people of Idiocracy, convention, modesty, and the law require us in our day to make this a LOT more private.

Prevention > remedy

Note that there is an error beep when Joe puts the wrong probe in his mouth. Like many errors, by that time it is too late. It makes engineering sense for the machine to complain when there is a problem. It makes human sense to constrain the design so that errors are not possible, or at the very least, to put the alarm where it will dissuade from error.

Also, can we turn the volume up on those quiet beeps to, say, 80 decibels? I think everyone’s interested in more of an alarm than a whisper for this.
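The contrast between the two philosophies can be put in a few lines of toy code (the pairing table and function names are invented for illustration): the error beep allows the mistake and then complains, while a constraint—what interaction designers call a forcing function—refuses to allow the mistake in the first place.

```python
# Remedy vs. prevention: an after-the-fact error beep compared with a
# forcing-function constraint. All names here are invented.

VALID_PAIRINGS = {
    "mouth_probe": "mouth",
    "ear_probe": "ear",
    "butt_probe": "butt",
}

def beep_after(probe, orifice):
    """Remedy: allow the action, then complain. Returns (inserted, error)."""
    inserted = True  # the deed is already done...
    error = VALID_PAIRINGS[probe] != orifice  # ...before the beep sounds
    return inserted, error

def constrain_before(probe, orifice):
    """Prevention: a forcing function refuses the wrong pairing outright."""
    if VALID_PAIRINGS[probe] != orifice:
        raise ValueError(f"{probe} does not go in the {orifice}")
    return True
```

In the first version the butt probe reaches Joe’s mouth and only then does the system object; in the second, the wrong pairing simply cannot complete, which is the design the Healthmaster Inferno needed.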


A hidden, eviscerating joke

In addition to the base comedy—of treating diagnosis like a carwash, the interaction design of the missing affordances and constraints, and the poop humor of sticking a butt probe in your mouth—there is yet another layer of stupid evident here. Many of the diseases listed on the “proscenium” of the bay are ones that can be caused by, yep, ingesting feces. (Hepatitis A, Hepatitis E, tapeworm, E. “boli.”) Enjoy the full, appetizing list on Wikipedia. It’s a whole other layer of funny, and hearkens back to stories of when mid-1800s doctors took umbrage at Ignaz Semmelweis’ suggestion that they wash their hands. (*huffgrumble* But we’re gentlemen! *monocle pop*) This is that special kind of stupid when people are the cause of their own problems, and refuse to believe it because they are either proud…or idiots.

But of course, we’re so much wiser today. People are never, say, duped into voting for some sense of tribal identity despite mountains of evidence that they are voting against their community, or even their own self-interest.

Fighting the unsanitary butt plugs of the Idiocracy

“Action by action, day by day, group by group, Indivisibles are remaking our democracy. They make calls. They show up. They speak with their neighbors. They organize. And through that work, they’ve built hundreds of mini-movements in support of their local values. And now, after practice, training, and repetition, they’ve built lasting power on their home turf and a massive, collective political muscle ready to be exercised each and every day in every corner of the country.”


Donate or join the phone bankers at Indivisible to talk people into voting, and perhaps some sanity into Idiocrats. Indivisible’s mission is “to cultivate and lift up a grassroots movement of local groups to defeat the Trump agenda, elect progressive leaders, and realize bold progressive policies.”

Bitching about Transparent Screens

I’ve been tagged a number of times on Twitter by people asking me to weigh in on the following comic by beloved Parisian comic artist Boulet.

Since folks are asking (and it warms my robotic heart that you do), here’s my take on this issue. Boulet, this is for you.

Sci-fi serves different masters

Interaction and interface design answers to one set of masters: User feedback sessions, long-term user loyalty, competition, procurement channels, app reviews, security, regulation, product management tradeoffs of custom-built vs. off-the-shelf, and, ideally, how well it helps the user achieve their goals.

But technology in movies and television shows doesn’t have to answer to any of these things. The cause-and-effect is scripted. It could be the most unusable piece of junk tech in that universe and it will still do exactly what it is supposed to do. Hell, it’s entirely likely that the actor was “interacting” with a blank screen on set and the interface was painted on afterward (in “post”). Sci-fi interfaces answer to the masters of story, worldbuilding, and often, spectacle.

I have even interviewed one of the darlings of the FUI world about their artistic motivations, and was told explicitly that they got into the business because they hated having to deal with the pesky constraints of usability. (Don’t bother looking for it; I have not published that interview because I could not see how to do so without lambasting it.) Most of these things are pointedly baroque, and usability is at best a luxury priority.

So for goodness’ sake, get rid of the notion that the interfaces in sci-fi are a model for usability. They are not.

They are technology in narrative

We can understand how they became a trope by looking at things from the makers’ perspective. (In this case “maker” means the people who make the sci-fi.)


Transparent screens provide two major benefits to screen sci-fi makers.

First, they quickly inform the audience that this is a high-tech world, simply because we don’t have transparent screens in our everyday lives. Sci-fi makers have to choose very carefully how many new things they want to introduce and explain to the audience over the course of a show. (A pattern that, in the past, I have called What You Know +1.) No one wants to sit through lengthy exposition about how the world works. We want to get to the action.


With some notable exceptions.

So what mostly gets budgeted-for-reimagining and budgeted-for-explanation in a script are technologies that are a) important to the diegesis or b) pivotal to the plot. The display hardware is rarely, if ever, either. Everything else usually falls to trope, because tropes don’t require pausing the action to explain.

Second (and more importantly), transparent screens allow a cinematographer to show the on-screen action and the actor’s face simultaneously, giving us both the emotional frame of the shot and an advancement of the plot. The technology is speculative anyway; why would the cinematographer focus on it? Why cut back and forth between an opaque screen and an actor’s face? Better to give audiences a single combined shot that subordinates the interface to the actors’ faces.

minrep-155

We should not get any more bent out of shape over this narrative convention than over any of these others.

  • My god, these beings, who lived a long time ago and in a galaxy far, far away, look identical to humans! What frozen evolution or panspermia resulted in this?
  • They’re speaking languages that are identical to some on modern Earth! How?
  • Hasn’t anyone noticed the insane coincidence that these characters from the future happen to look exactly like certain modern actors?
  • How are there cameras everywhere that capture these events as they unfold? Who is controlling them? Why aren’t the villains smashing them?
  • Where the hell is that orchestra music coming from?
  • This happens in the future, how are we learning about it here in their past?

The Matter of Believability

It could be that what we are actually complaining about is not usability, but believability. It may be that the problems of eye strain, privacy, and orientation are so obvious that they take us out of the story. Breaking immersion is a cardinal sin in narrative. But it’s pretty easy (and fun) to write some simple apologetics to explain away these particular concerns.

eye-strain

Why is eye strain not a problem? Maybe the screens actually do go opaque when seen from a human eye, we just never see them that way because we see them from the POV of the camera.

privacy

Why is privacy not a problem? Maybe the loss of privacy is a feature, not a bug, for the fascist society being depicted; a way to keep citizens in line. Or maybe there is an opaque mode, we just don’t see any scenes where characters send dick pics, or browse porn, and would thereby need it. Or maybe characters have other, opaque devices at home specifically designed for the private stuff.

orientation

Why isn’t orientation a problem? The tech would only need face recognition for such an object to orient itself correctly no matter how it is picked up or held. The Appel Maman would only present itself screen-down to the table if it were broken.

So it’s not a given that transparent screens just won’t work. Admittedly, this is some pretty heavy backworlding. But they could work.

But let’s address the other side of believability. Sci-fi makers are in a continual second-guess dance with their audience’s evolving technological literacy. It may be that Boulet’s cartoon is a bellwether, a signal that non-technological audiences are becoming so familiar with the real-world challenges of this trope that it is time for either some replacement, or some palliative hints as to why the issues he illustrates aren’t actually issues. As audience members—instead of makers—we just have to wait and see.

Sci-fi is not a usability manual.

It never was. If you look to sci-fi for what is “good” design in the real world, you will cause frustration, maybe suffering, maybe the end of all good in the ’verse. Please see the talk I gave at the Reaktor conference a few years ago for examples, presented in increasing degrees of catastrophe.

I would say—to pointedly use the French—that the “raison d’être” of this site is exactly this. Sci-fi is so pervasive, so spectacular, so “cool,” that designers must build up a skeptical immunity to prevent its undue influence on their work.

I hope you join me on that journey. There’s sci-fi and popcorn in it for everyone.

The Cloak of Levitation, Part 4: Improvements

In prior posts we looked at an overview of the cloak, pondered whether it could ever work in reality (mostly, in the far future), and whether or not the cloak could be considered agentive (mostly, yes). In this last post I want to look at what improvements we might make if we were designing something akin to it for the real world.

Given its wealth of capabilities, the main complaint might be its lack of language.

A mute sidekick

It has a working theory of mind, a grasp of abstract concepts, and intention, so why does it not use language as part of its toolkit to fulfill its duties? Let’s first admit that mute sidekicks are kind of a trope at this point. Think R2-D2, Silent Bob, BB-8, Aladdin’s Magic Carpet (Disney), Teller, Harpo, Bernardo/Paco (admittedly obscure), and Mini-Me. They’re a thing.

tankerbell.gif

Yes, I know she could talk to other fairies, but not to Peter.

Trope or not, muteness is a significant impediment in a combat partner. Imagine if it could say, “Hey Steve, he’s immune to the halberd. But throw that ribcage-looking thing on the wall at him, and you’ll be good.” Strange finds himself in life-or-death situations pretty much constantly, so having to disambiguate vague gestures wastes precious time that might make the difference between life and death. For, like, everyone on Earth.

“Real-time,” Interplanetary Chat

While recording a podcast with the guys at DecipherSciFi about the twee(n) love story The Space Between Us, we spent some time kvetching about how silly it was that many of the scenes involved Gardner, on Mars, in a real-time text chat with a girl named Tulsa, on Earth. It’s partly bothersome because throughout the rest of the movie, the story tries for a Mohs sci-fi hardness of, like, 1.5, somewhere between Real Life and Speculative Science, so it can’t really excuse itself with the Applied Phlebotinum that, say, Star Wars might use. The rest of the film feels like it’s trying to have believable science, but during these scenes it just whistles, looks the other way, and hopes you don’t notice that the two lovebirds are breaking the laws of physics as they swap flirt emoji.

Hopefully unnecessary science brief: Mars and Earth are far away from each other. Even if communications are transmitted at light speed between them, the trip takes much longer than the 1 second of response time required to feel “instant.” How much longer? It depends. The planets orbit the sun at different speeds, so they aren’t a constant distance apart. At their closest, it takes light about 3 minutes to travel between Mars and Earth, and at their farthest—while not being blocked by the sun—it takes about 21 minutes. A round trip is double that. So nothing akin to real-time chat is going to happen.
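To see where those numbers come from, here is a quick back-of-the-envelope sketch; the distances are approximate values I’m assuming, not figures from the film:

```python
# One-way light delay between Earth and Mars at the closest and
# farthest (sun-unobstructed) separations. Distances approximate.
C_KM_PER_S = 299_792  # speed of light in km/s

def light_delay_minutes(distance_km: float) -> float:
    """One-way signal travel time, in minutes."""
    return distance_km / C_KM_PER_S / 60

CLOSEST_KM = 54_600_000    # closest approach, ~0.37 AU
FARTHEST_KM = 378_000_000  # roughly the farthest with the sun out of the way

print(round(light_delay_minutes(CLOSEST_KM)))   # → 3
print(round(light_delay_minutes(FARTHEST_KM)))  # → 21
```

Double either figure for the round trip, and the best case is still a six-minute wait between a question and its answer.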

But I’m a designer, a sci-fi apologist, and a fairly talented backworlder. I want to make it work. And perhaps because of my recent dive into narrow AI, I began to realize that, well, in a way, maybe it could. It just requires rethinking what’s happening in the chat.

Let’s first acknowledge that we solved long-distance communication a long time ago. Gardner and Tulsa could just, you know, swap letters or, like the characters in 2001: A Space Odyssey, recorded video messages. There. Problem solved. It’s not real-time interaction, but it gets the job done. But kids aren’t so much into pen pals anymore, and we have to acknowledge that Gardner doesn’t want to tip his hand that he’s on Mars (it’s a grave NASA secret, for plot reasons). So the question is how we could make it feel like a real-time chat to her. Let’s first solve it for the case where he’s trying to disguise his location, and then see how it might work when both participants are in the know.

Fooling Tulsa

Since 1984 (ping me, as always, if you can think of an earlier reference) sci-fi has had the notion of a digitally-replicated personality. Here I’m thinking of Gibson’s Neuromancer and the RAM boards on which Dixie Flatline “lives.” These RAM boards house an interactive digital personality of a person, built out of a lifetime of digital traces left behind: social media, emails, photos, video clips, connections, expressed interests, etc. Anyone in that story could hook the RAM board up to a computer and have conversations with the personality housed there that would closely approximate how that person would respond (or would have responded) in real life.

SBU_Tulsa.png
Listen to the podcast for a mini-rant on translucent screens, followed by apologetics.

Is this likely to actually happen? Well, it kind of already is. Here in the real world, we’re seeing early, crude “me bots” populate the net, taking baby steps toward the same thing. (See the MessinaBot, https://bottr.me/, https://sensay.it/, and the forthcoming http://bot.me/.) By the time we actually get a colony to Mars (plus the 16 years it takes Gardner to mature), mebot technology should be able to stand in for him convincingly enough in basic online conversations.

Training the bot

So in the story, he would look through cached social media feeds to find a young lady he wanted to strike up a conversation with, and then ask his bot-maker engine to look at her public social media and build a herBot he could chat with, in order to train it for conversation. During this training, the TulsaBot would chat about topics of interest gathered from her social media. He could pause the conversation to look up references or prepare convincing answers to the trickier questions TulsaBot asks. He could also add some topics to the conversation they might have in common, and questions he might want to ask her. By doing this, his GardnerBot isn’t just some generic thing he sends out to troll any young woman with. It’s a more genuine, interactive first “letter” sent directly to her. He sends this GardnerBot to servers on Earth.

Hey-mars-chat.gif
A demonstration of a chat with a short Martian delay. (Yes, it’s an animated gif.)

Launching the bot

GardnerBot would wait until it saw Tulsa online and strike up the conversation with her. It would send a signal back to Gardner that the chat has begun so he can sit on his end and read a space-delayed transcript of the chat. GardnerBot would try its best to manage the chat based on what it knows about awkward teen conversation, Turing test best practices, what it knows about Gardner, and how it has been trained specifically for Tulsa. Gardner would assuage some of his guilt by having it dodge and carefully frame the truth, but not outright lie.

Buying time

If during the conversation she raised a topic or asked a question for which GardnerBot was not trained, it could promise an answer later, and then deflect, knowing that it should pad the conversation in the meantime:

  • Ask her to answer the same question first, probing into details to understand rationale and buy more time
  • Dive down into a related subtopic in which the bot has confidence, and which promises to answer the initial question
  • Deflect conversation to another topic in which it has a high degree of confidence and lots of detail to share
  • Text a story that Gardner likes to tell that is known to take about as long as the current round-trip signal
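A minimal sketch of how the bot might choose among these tactics, assuming a simple per-topic confidence score; all of the names and the threshold here are invented for illustration:

```python
import random

# The four delay tactics from the list above, paired with canned openers.
TACTICS = [
    ("you_first", "Tough one. You tell me yours first."),
    ("subtopic", "That reminds me of this foreign movie I saw once..."),
    ("new_topic", "While I think about it: what animal would you be?"),
    ("story", "Ha. Sure, but can I tell a story first?"),
]

def flag_for_training(topic: str, tactic: str) -> None:
    """Notify Gardner (over the delayed link) that the bot stalled."""
    print(f"meta: stalled on {topic!r} via {tactic}")

def respond(topic: str, confidence: float, threshold: float = 0.7) -> str:
    """Answer directly when confident; otherwise stall to buy one
    round trip of signal time while Gardner trains the bot."""
    if confidence >= threshold:
        return f"[answer about {topic}]"
    tactic, opener = random.choice(TACTICS)
    flag_for_training(topic, tactic)
    return opener
```

The point of the sketch is only that the stall is chosen and flagged on the meta track in one move; a smarter bot would pick the tactic whose expected padding time best matches the current round-trip delay, rather than choosing at random.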

Example

  • TULSA
  • OK, here’s one: If you had to live anywhere on Earth where they don’t speak English, where would you live?

GardnerBot has a low confidence that it knows Gardner’s answer. It could respond…

  1. (you first) “Oh wow. That is a tough one. Can I have a couple of minutes to think about it? I promise I’ll answer, but you tell me yours first.”
  2. (related subtopic) “I’m thinking about this foreign movie that I saw one time. There were a lot of animals in it and a waterfall. Does that sound familiar?”
  3. (new topic) “What? How am I supposed to answer that one? 🙂 Umm…While I think about it, tell me…what kind of animal would you want to be reincarnated as. And you have to say why.”
  4. (story delay) “Ha. Sure, but can I tell a story first? When I was a little kid, I used to be obsessed with this music that I would hear drifting into my room from somewhere around my house…”

Lagged-realtime training

Each of those responses is a delay tactic that allows the chat transcript to travel to Mars for Gardner to do some bot training on the topic. He would watch the time-delayed transcript of the chat, keeping an eye on an adjacent track of data containing meta information about what the bot is doing, conversationally speaking. When he saw it hit a low-confidence or high-stakes topic and deflect, the interface would provide a chat window for him to tell GardnerBot what it should do or say.

  • To the stalling GARDNERBOT…
  • GARDNER
  • For now, I’m going to pick India, because it’s warm and I bet I would really like the spicy food and the rain. Whatever that colored powder festival is called. I’m also interested in their culture, Bollywood, and Hinduism.
  • As he types, the message travels back to Earth where GardnerBot begins to incorporate his answers to the chat…
SBU_Gardner.png
  • At a natural break in the conversation…
  • GARDNERBOT
  • OK. I think I finally have an answer to your earlier question. How about…India?
  • TULSA
  • India?
  • GARDNERBOT
  • Think about it! Running around in warm rain. Or trying some of the street food under an umbrella. Have you seen YouTube videos from that festival with the colored powder everywhere? It looks so cool. Do you know what it’s called?

Note that the bot could easily look it up and replace “that festival with the colored powder everywhere” with “Holi Festival of Color,” but it shouldn’t. Gardner doesn’t know that fact, so the bot shouldn’t pretend it knows it. Cyrano de Bergerac software—making him sound more eloquent, intelligent, or charming than he really is in order to woo her—would be a worse kind of deception. Gardner wants to hide where he is, not who he is.

That said, Gardner should be able to direct the bot, to change its tactics. “OMG. GardnerBot! You’re getting too personal! Back off!” It might not be enough to cover a flub made 42 minutes ago, but of course the bot should know how to apologize on Gardner’s behalf and ask conversational forgiveness.
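The two-track feed described above could be as simple as pairing each transcript line with the bot’s meta data, so the interface knows when to open a training window. This is a hypothetical sketch, not anything shown in the film:

```python
from dataclasses import dataclass

@dataclass
class TranscriptEvent:
    speaker: str       # "TULSA" or "GARDNERBOT"
    text: str
    confidence: float  # bot's confidence on the current topic
    tactic: str        # "answered", "you_first", "story", ...

def needs_training(event: TranscriptEvent, threshold: float = 0.7) -> bool:
    """True when the bot stalled and Gardner should send guidance."""
    return event.speaker == "GARDNERBOT" and event.confidence < threshold

# A tiny slice of the delayed transcript as Gardner might see it.
feed = [
    TranscriptEvent("TULSA", "Where would you live?", 1.0, "n/a"),
    TranscriptEvent("GARDNERBOT", "You tell me yours first.", 0.2, "you_first"),
]
for event in feed:
    if needs_training(event):
        print(f"open training window for: {event.text!r}")
```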

Gotta go

If the signal to Mars got interrupted or the bot got into too much trouble with pressure to talk about low confidence or high stakes topics, it could use a believable, pre-rolled excuse to end the conversation.

  • GARDNERBOT
  • Oh crap. Will you be online later? I’ve got chores I have to do.

Then, Gardner could chat with TulsaBot on his end without time pressure to refine GardnerBot per their most recent topics, which would be sent back to Earth servers to be ready for the next chat.

In this way he could have “chats” with Tulsa that are run by a bot but quite custom to the two of them. It’s really Gardner’s questions, topics, jokes, and interests, but a bot-managed delivery of these things.

So it could work, but does it fit the movie? I think so. It would be believable because he’s a nerd raised by scientists. He made his own robot; why not his own bot?

From the audience’s perspective, it might look like they’re chatting in real time, but subtle cues on Gardner’s interface reward the diligent with hints that he’s watching a time delay. Maybe the chat we see in the film is even just cleverly edited to remove the bots.

How he manages to hide this data stream from NASA to avoid detection is another question better handled by someone else.

SBU_whodis.png

An honest version: bot envoy

So that solves the logic from the movie’s perspective, but of course it’s still squickish. He is ultimately deceiving her. Once he returns to Mars and she is back on Earth, could they still use the same system, but with full knowledge of its botness? Would real-world astronauts use it?

Would it be too fake?

I don’t think it would be too fake. Sure, the bot is not the real person, but neither are the pictures, videos, and letters we fondly keep with us as we travel far from home. We know they’re just simulacra, souvenir likenesses of someone we love. We don’t throw these away in disgust for being fakes. They are precious because they are reminders of the real thing. So, too, would the themBot be.

  • GARDNER
  • Hey, TulsaBot. Remember when we were knee deep in the Pacific Ocean? I was thinking about that today.
  • TULSABOT
  • I do. It’s weird how it messes with your sense of balance, right? Did you end up dreaming about it later? I sometimes do after being in waves a long time.
  • GARDNER
  • I can’t remember, but someday I hope to come back to Earth and feel it again. OK. I have to go, but let me know how training is going. Have you been on the G machine yet?

Nicely, you wouldn’t need stall tactics in the honest version. Or maybe it uses them, but can be called out.

  • TULSA
  • GardnerBot, you don’t have to stall. Just tell Gardner to watch Mission to Mars and update you. Because it’s hilarious and we have to go check out the face when I’m there.

Sending your loved one the transcript will turn it into a kind of love letter. The transcript could even be appended with a letter that jokes about the bot. The example above was too short for any semi-realtime insertions in the text, but maybe that would encourage longer chats. Then the bot serves as charming filler, covering the delays between real contact.

Ultimately, yes, I think we can backworld what looks physics-breaking into something that makes sense, and might even be a new kind of interactive memento between interplanetary sweethearts, family, and friends.

R. S. Revenge Comms

Note: In honor of the season, Rogue One opening this week, and the reviews of Battlestar Galactica: The Mini-Series behind us, I’m reopening the Star Wars Holiday Special reviews, starting with the show-within-a-show, The Faithful Wookiee. Refresh yourself on the plot if it’s been a while.

Faithful-Wookiee-02

On board the R.S. Revenge, the purple-skinned communications officer announces he’s picked up something. (Genders are a goofy thing to ascribe to alien physiology, but the voice actor speaks in a masculine register, so I’m going with it.)

faithful-wookiee-01-surrounds

He attends a monitor, below which are several dials and controls in a panel. On the right of the monitor screen there are five physical controls.

  • A stay-state toggle switch
  • A stay-state rocker switch
  • Three dials

The lower two dials have rings under them on the panel that accentuate their color.

Map View

The screen is a dark purple overhead map of the impossibly dense asteroid field in which the Revenge sits. A light purple grid divides the space into 48 squares. This screen has text all over it, written in a constructed orthography unmentioned on Wookieepedia. In the upper center and upper right are unchanging labels. A triangular label sits in the lower left. In the lower right corner, text appears and disappears too fast for (human) reading. The middle right side of the screen is labeled in large characters, but they also change too rapidly to make much sense of.

revengescreen

Lumpy’s Brilliant Cartoon Player

I am pleased to report that with this post, we are over 50% of the way through this wretched, wretched Holiday Special.

SWHS-Cartoon-Player-07

Description

After Lumpy tries to stop stormtroopers from going upstairs, an Imperial Officer commands Malla to keep him quiet. To do so, she does what any self-respecting mother of a pre-teen in the age of technology does, and sits him down to watch cartoons. The player is a small, yellow device that sits flat on an angled tabletop, like a writing desk.

Two small silver buttons stack vertically on the left, with what looks like an upside-down plughole strainer on the right. A video screen sits above these controls. Since no one else in his family wants to hear the cartoon introduction of Boba Fett, he dons a pair of headphones, which are actually kind of stylish in that the earpieces are square and perforated, but not beveled. Some pointless animations play at startup, but then the cartoon starts and Lumpy is, in fact, quiet for the duration. So, OK, point one, Malla.

SWHS-Cartoon-Player-08
Why no budding DJ has glommed onto this for an album cover is beyond me.

Analysis
