As I said in the first post of this topic, exosuits and environmental suits fall outside the definition of wearable computers. But there is one item commonly found on them that can count as wearable, and that’s the forearm control panel. In the survey these appear in three flavors.
Sci-fi was fairly late to acknowledge the need for environmental suits, and the need for controls on them. The first wearable control panel belongs to the original series of Star Trek, in “The Naked Time” (S01E04). The sparkly orange suits have a white cuff with a red and a black button. In the opening scene we see Mr. Spock press the red button to communicate with the Enterprise.
This control panel is crap. The buttons are huge momentary buttons that exist without a billet, and would be extremely easy to press accidentally. The cuff is quite loose, meaning Spock or the redshirt has to fumble around to locate it each time. Weeeeaak.
Star Trek (1966)
Some of these problems were solved when another WCP appeared three decades later in the Next Generation movie First Contact.
Star Trek First Contact (1996)
This panel is at least anchored, and positioned where it could be found fairly easily via proprioception. It seems to have a facing that acts as a billet, and so might be tough to accidentally activate. It’s counter to its wearer’s social goals, though, since it glows. The colored buttons help to distinguish controls when you’re looking at them, but the glow sure makes it tough to sneak around in darkness. Also, no labels? No labels seems to be a thing with WCPs, since even Pixar thought they weren’t necessary.
The Incredibles (2004)
Admittedly, this WCP belonged to a villain who had no interest in others’ use of it. So that’s at least diegetically excusable.
Hey, Labels, that’d be greeeeeat
Zipping back to the late 1960s, Kubrick’s 2001 nailed most everything. Sartorial, easy to access and use (look, labels! color differentiation! clustering!), social enough for an environmental suit, billeted, and the inputs are nice and discrete, even though as momentary buttons they don’t announce their state. Better would have been toggle buttons.
2001: A Space Odyssey (1968)
Also, what the heck does the “IBM” button do, call a customer service representative from space? Embarrassing. What’s next, a huge Mercedes-Benz logo on the chest plate? Actually, no, it’s a Compaq logo.
A monitor on the forearm
The last category of WCP in the survey is seen in Mission to Mars, and it’s a full-color monitor on the forearm.
Mission to Mars (2000)
This is problematic for general use but fine for this particular application. These are scientists conducting a near-future trip to Mars, and so having access to rich data is quite important. They’re not facing dangerous Borg-like things, so they don’t need to worry about the light. I’d be a bit worried about the giant buttons sticking out on every edge, which seem to be begging to be bumped. Also I question whether those particular buttons and that particular screen layout are wise choices, but that’s for the formal M2M review. A touchscreen might be possible. You might think that would be easy to accidentally activate, but not if it could only be activated by the fingertips of the exosuit’s gloves.
This isn’t an exhaustive list of every wearable control panel from the survey, but a fair enough recounting to point out some things about them as wearable objects.
The forearm is a fitting place for controls and information. Wristwatches have taken advantage of this for…some time. 😛
Socially, it’s kind of awkward to have an array of buttons on your clothing. Unless it’s an exosuit, in which case knock yourself out.
If you’re meant to be sneaking around, lit buttons are contraindicated. As are extruded switch surfaces that can be glancingly activated.
The fitness of the inputs and outputs depends on the particular application, but don’t drop understandability (read: labels) simply for the sake of fashion. (I’m looking at you, Roddenberry.)
Hi there. Tell us a bit about yourself. What’s your name, where are you from, how do you spend your time?
I’m Heath Rezabek. I live in Austin, Texas, and have been an enthusiast of user interface design for many years. By career and calling I’m a librarian, and am a library services and technology grant manager by day. I have long been interested in how information is portrayed, symbolized, and accessed. I’m also a writer of experimental speculative fiction, and have an interest in how the future is seen by creators and audiences. Interfaces play a key role in my fiction series as well, from holographic to virtual-world-driven to all-out surrealist.
What are some of your favorite sci-fi interfaces (Other than in Oblivion)? (And, of course, why.)
In the prior post we introduced the Fermi paradox—or Fermi question—before an overview of the many hypotheses that try to answer the question, and ended noting that we must consider what we are to do, given the possibilities. In this post I’m going to share which of those hypotheses screen-based sci-fi has chosen to tell stories about.
First we should note that screen sci-fi (this is, recall, a blog that concerns itself with sci-fi in movies and television) has, since the very, very beginning, embraced questionably imperialist thrills. In Le Voyage dans la Lune, Georges Méliès’ professor-astronomers encounter a “primitive” alien culture when they land on Earth’s moon, replete with costumes, dances, and violent responses to accidental manslaughter. Hey, we get it, aliens are part of why audiences and writers are in it: as a thin metaphor for speculative human cultures that bring our own into relief. So, many properties are unconcerned with the *yawn* boring question of the Fermi paradox, instead imagining a diegesis with a whole smörgåsbord of alien civilizations that are explicitly engaged with humans, at times killing, trading, or kissing us, depending on which story you ask.
But some screen sci-fi does occasionally concern itself with the Fermi question.
Which are we telling stories about?
Screen sci-fi is a vast library, and more is being produced all the time, so it’s hard to give an exact breakdown, but if Drake can do it for Fermi’s question, we can at least ballpark it, too. To do this, I took a look at every property in the survey that produced Make It So and that has been extended here on scifiinterfaces.com, and I tallied the breakdown among aliens, no aliens, and silent aliens. Here’s the Google Sheet with the data. And here’s what we see.
No aliens is the clear majority of stories! This is kind of surprising for me, since when I think of sci-fi my brain pops bug eyes and tentacles alongside blasters and spaceships. But it also makes sense because a lot of sci-fi is near future or focused on the human condition.
Some notes about these numbers.
I counted all the episodes or movies that exist in a single diegesis as one. So the two single largest properties in the sci-fi universe, Star Trek and Star Wars, only count once each. That seems unfair, since we’ve spent lots more total minutes of our lives with C-3PO and the Enterprise crews than we have with Barbarella. This results in low-seeming numbers: there are only 53 diegeses at the time of this writing, even though the survey spans thousands of hours of shows. But all that said, this is a ballpark problem, meant to tally rationales across diegeses, so we’ll deal with numbers that skew differently than our instincts would suggest. Someone else with a bigger budget of time or money can try to get exhaustive with the numbers, attempt to normalize for total minutes of media produced, again for number of alien species referenced, and then again for how popular the particular show was. Those numbers may be different.
Additionally the categorizations can be ambiguous. Should Star Trek go in “Silent Aliens” because of the Prime Directive, or under “Aliens” since the show has lots and lots and lots of aliens? Since the Fermi question seeks to answer why Silent Aliens are silent in our real world now, I opted for Silent Aliens, but that’s an arguable choice. Should The Martian count as “Life is Rare” since it’s competence porn that underscores how fragile life is? Should Deep Impact show that life is rare even though they never talk about aliens? It’s questionable to categorize something on a strong implication, but I did it where I felt the connection was strong. Additionally I may have ranked something as “no reason” because I missed an explanatory line of dialog somewhere. Please let me know if I missed something major or got something wrong in the comments.
All that said, let’s look back and see how those broad numbers break down when we look at individual Fermi hypotheses. First, we should omit shows with aliens; they categorically exclude themselves. Aliens is an obvious example. Also, let’s exclude shows that are utterly unconcerned with the question of aliens, e.g. Logan’s Run, or those that never bother to provide an explanation as to why aliens may have been silent for so long, e.g. The Fifth Element. We also have to dismiss the other show in the survey that shows a long-dead species but does not investigate why, Total Recall (1990). Aaaaand holy cow, that takes us down to only 12 shows that give some explanation for the historical absence or silence of aliens. Since that number is so low, I’ll list the shows explicitly to the right of their numbers. I’ll leave the numbers as percentages for consistency when I get to increase the data set.
8% Life is rare: Battlestar Galactica (2004)
25% Life doesn’t last (natural disasters): Deep Impact, The Core, Armageddon
8% Life doesn’t last (technology will destroy us): Forbidden Planet
8% Superpredators: Oblivion
0% Information is dangerous
33% Prime directive: The Day the Earth Stood Still, 2001: A Space Odyssey, Mission to Mars, Star Trek
0% Isolationism
0% Zoo
0% Planetarium
0% Lighthouse hello
0% Still ringing
8% Hicksville: The Hitchhiker’s Guide to the Galaxy
0% Too distributed
0% Tech mismatch
0% Inconceivability
0% Too expensive
8% Cloaked: Men in Black
(*2% lost to rounding)
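For anyone wanting to reproduce the math, the percentages above can be regenerated from the raw tallies. A minimal sketch, assuming the twelve shows as listed (category names shortened for brevity; the zero-count hypotheses are omitted since they don’t affect the sums):

```python
# Reproduce the tally percentages from the raw counts of the 12 shows above.
tallies = {
    "Life is rare": 1,
    "Life doesn't last (natural disasters)": 3,
    "Life doesn't last (technology will destroy us)": 1,
    "Superpredators": 1,
    "Prime directive": 4,
    "Hicksville": 1,
    "Cloaked": 1,
}

total = sum(tallies.values())  # 12 shows

# Round each category to a whole percentage, as in the table.
percentages = {name: round(100 * count / total) for name, count in tallies.items()}

# Five categories round 8.33% down to 8%, which is where the "lost" 2% goes.
lost = 100 - sum(percentages.values())
```

Running this gives the same 8/25/33 split as the table, and confirms the 2% lost to rounding.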
It’s at this point that some readers are sharpening their keyboards to inform me of the shows I’ve missed, and that’s great. I would rather have had the data before, but I’m just a guy and nothing motivates geeks like an incorrect pop culture data set. We can run these numbers again when more come in and see what changes.
In the meantime, the first thing we note is that of those that concern themselves with the question of Silent Aliens, most use some version of the prime directive.
Respectively, they say we have to do A Thing before they’ll contact us.
Mature technologically by finding the big obelisk on the moon (and then the matching one around Jupiter)
Mature technologically by mastering faster-than-light travel
Find the explanatory kiosk/transportation station on Mars
It’s easy to understand why Prime Directives would be attractive as narrative rationales. They explain why things are so silent now, and put the onus on us as a species to achieve The Thing, to do good, to improve. They are inspirational and encourage us to commit to space travel.
The second thing to note is that those that concern themselves with the notion that Life Doesn’t Last err toward disaster porn, which is attractive because such films are tried-and-true formulas. The dog gets saved along with the planet, that one person dies, there’s a ticker-tape parade after they land, and the love interests reconcile. Some are ridiculous. Some are competent. None stand out to me as particularly memorable or life-changing. I can’t think of one that illustrates how such an end is inevitable.
So prime directives and disaster porn are the main answers we see in sci-fi. Are those the right ones? I’ll discuss that in the next post. Stay Tuned.
While recording a podcast with the guys at DecipherSciFi about the twee(n) love story The Space Between Us, we spent some time kvetching about how silly it was that many of the scenes involved Gardner, on Mars, in a real-time text chat with a girl named Tulsa, on Earth. It’s partly bothersome because throughout the rest of the movie, the story tries for a Mohs sci-fi hardness of, like, 1.5, somewhere between Real Life and Speculative Science, so it can’t really excuse itself through the Applied Phlebotinum that, say, Star Wars might use. The rest of the film feels like it’s trying to have believable science, but during these scenes it just whistles, looks the other way, and hopes you don’t notice that the two lovebirds are breaking the laws of physics as they swap flirt emoji.
Hopefully unnecessary science brief: Mars and Earth are far away from each other. Even if the communications transmissions are sent at light speed between them, it takes much longer than the 1 second of response time required to feel “instant.” How much longer? It depends. The planets orbit the sun at different speeds, so aren’t a constant distance apart. At their closest, it takes light 3 minutes to travel between Mars and Earth, and at their farthest—while not being blocked by the sun—it takes about 21 minutes. A round-trip is double that. So nothing akin to real-time chat is going to happen.
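If you want to check that math yourself, it’s a one-liner’s worth of arithmetic. A quick sketch, using approximate distances of 0.38 AU at closest approach and about 2.5 AU at the farthest point not blocked by the sun (both values are rough illustrations, not precise orbital mechanics):

```python
# Rough one-way light delay between Earth and Mars.
AU_KM = 149_597_871          # kilometers in one astronomical unit
LIGHT_SPEED_KM_S = 299_792   # speed of light, kilometers per second

def one_way_delay_minutes(distance_au):
    """Minutes for a light-speed signal to cross the given distance."""
    return distance_au * AU_KM / LIGHT_SPEED_KM_S / 60

closest = one_way_delay_minutes(0.38)    # roughly 3 minutes
farthest = one_way_delay_minutes(2.52)   # roughly 21 minutes
round_trip_worst = 2 * farthest          # roughly 42 minutes
```

So even in the best case, a “Hi” and its reply are separated by over six minutes; in the worst case, the better part of an hour.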
But I’m a designer, a sci-fi apologist, and a fairly talented backworlder. I want to make it work. And perhaps because of my recent dive into narrow AI, I began to realize that, well, in a way, maybe it could. It just requires rethinking what’s happening in the chat.
Let’s first acknowledge that we solved long-distance communication a long time ago. Gardner and Tulsa could just, you know, swap letters or, like the characters in 2001: A Space Odyssey, recorded video messages. There. Problem solved. It’s not real-time interaction, but it gets the job done. But kids aren’t so much into pen pals anymore, and we have to acknowledge that Gardner doesn’t want to tip his hand that he’s on Mars (it’s a grave NASA secret, for plot reasons). So the question is how we could make it work so it feels like a real-time chat to her. Let’s first solve it for the case where he’s trying to disguise his location, and then for how it might work when both participants are in the know.
Since 1984 (ping me, as always, if you can think of an earlier reference) sci-fi has had the notion of a digitally-replicated personality. Here I’m thinking of Gibson’s Neuromancer and the RAM boards on which Dixie Flatline “lives.” These RAM boards house an interactive digital personality of a person, built out of a lifetime of digital traces left behind: social media, emails, photos, video clips, connections, expressed interests, etc. Anyone in that story could hook the RAM board up to a computer, and have conversations with the personality housed there that would closely approximate how that person would (or would have) respond in real life.
Is this likely to actually happen? Well, it kind of already is. Here in the real world, we’re seeing early, crude “me bots” populate the net, taking baby steps toward the same thing. (See MessinaBot, https://bottr.me/, https://sensay.it/, the forthcoming http://bot.me/) By the time we actually get a colony to Mars (plus the 16 years for Gardner to mature), mebot technology should be able to stand in for him convincingly enough in basic online conversations.
Training the bot
So in the story, he would look through cached social media feeds to find a young lady he wanted to strike up a conversation with, and then ask his bot-maker engine to look at her public social media and build a herBot he could chat with, training it for conversation. During this training, the TulsaBot would chat about topics of interest gathered from her social media. He could pause the conversation to look up references or prepare convincing answers to the trickier questions TulsaBot asks. He could also add some topics to the conversation they might have in common, and questions he might want to ask her. By doing this, his GardnerBot isn’t just some generic thing he sends out to troll any young woman with. It’s a more genuine, interactive first “letter” sent directly to her. He sends this GardnerBot to servers on Earth.
Launching the bot
GardnerBot would wait until it saw Tulsa online and strike up the conversation with her. It would send a signal back to Gardner that the chat has begun so he can sit on his end and read a space-delayed transcript of the chat. GardnerBot would try its best to manage the chat based on what it knows about awkward teen conversation, Turing test best practices, what it knows about Gardner, and how it has been trained specifically for Tulsa. Gardner would assuage some of his guilt by having it dodge and carefully frame the truth, but not outright lie.
If during the conversation she raised a topic or asked a question for which GardnerBot was not trained, it could promise an answer later, and then deflect, knowing that it should pad the conversation in the meantime:
Ask her to answer the same question first, probing into details to understand rationale and buy more time
Dive down into a related subtopic in which the bot has confidence, and which promises to answer the initial question
Deflect conversation to another topic in which it has a high degree of confidence and lots of detail to share
Text a story that Gardner likes to tell that is known to take about as long as the current round-trip signal
OK, here’s one: If you had to live anywhere on Earth where they don’t speak English, where would you live?
GardnerBot has a low confidence that it knows Gardner’s answer. It could respond…
(you first) “Oh wow. That is a tough one. Can I have a couple of minutes to think about it? I promise I’ll answer, but you tell me yours first.”
(related subtopic) “I’m thinking about this foreign movie that I saw one time. There were a lot of animals in it and a waterfall. Does that sound familiar?”
(new topic) “What? How am I supposed to answer that one? 🙂 Umm…While I think about it, tell me…what kind of animal would you want to be reincarnated as. And you have to say why.”
(story delay) “Ha. Sure, but can I tell a story first? When I was a little kid, I used to be obsessed with this music that I would hear drifting into my room from somewhere around my house…”
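The four tactics above amount to a simple confidence-gated dispatch. Here’s a minimal sketch of that logic; the function names, confidence threshold, and tactic labels are all invented for illustration, not anything from the film:

```python
import random

# Hypothetical stall logic for GardnerBot. When confidence in an answer is
# low, pick a delay tactic that buys at least one round-trip signal time
# to Mars so Gardner can train the bot on the topic.
STALL_TACTICS = [
    "ask_her_first",     # probe for her answer and rationale first
    "related_subtopic",  # dive into an adjacent, well-trained subtopic
    "new_topic",         # deflect to a high-confidence topic
    "story_delay",       # tell a story timed to the current signal delay
]

def respond(topic, confidence, threshold=0.6):
    """Answer directly when confident; otherwise stall and flag the
    topic for training via the time-delayed transcript."""
    if confidence >= threshold:
        return ("answer", topic)
    return ("stall", random.choice(STALL_TACTICS))
```

In the India exchange, the bot’s confidence on Gardner’s answer is low, so it falls into the stall branch and picks one of the four padding moves while the transcript crawls to Mars.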
Each of those responses is a delay tactic that allows the chat transcript to travel to Mars for Gardner to do some bot training on the topic. He would be watching the time-delayed transcript of the chat, keeping an eye on an adjacent track of data containing the meta information about what the bot is doing, conversationally speaking. When he saw it hit a low-confidence or high-stakes topic and deflect, it would provide a chat window for him to tell the GardnerBot what it should do or say.
To the stalling GARDNERBOT…
For now, I’m going to pick India, because it’s warm and I bet I would really like the spicy food and the rain. Whatever that colored powder festival is called. I’m also interested in their culture, Bollywood, and Hinduism.
As he types, the message travels back to Earth where GardnerBot begins to incorporate his answers to the chat…
At a natural break in the conversation…
OK. I think I finally have an answer to your earlier question. How about…India?
Think about it! Running around in warm rain. Or trying some of the street food under an umbrella. Have you seen YouTube videos from that festival with the colored powder everywhere? It looks so cool. Do you know what it’s called?
Note that the bot could easily look it up and replace “that festival with the colored powder everywhere” with “Holi Festival of Color,” but it shouldn’t. Gardner doesn’t know that fact, so the bot shouldn’t pretend it knows it. Cyrano-de-Bergerac software—where it makes him sound more eloquent, intelligent, or charming than he really is to woo her—would be a worse kind of deception. Gardner wants to hide where he is, not who he is.
That said, Gardner should be able to direct the bot, to change its tactics. “OMG. GardnerBot! You’re getting too personal! Back off!” It might not be enough to cover a flub made 42 minutes ago, but of course the bot should know how to apologize on Gardner’s behalf and ask conversational forgiveness.
If the signal to Mars got interrupted or the bot got into too much trouble with pressure to talk about low confidence or high stakes topics, it could use a believable, pre-rolled excuse to end the conversation.
Oh crap. Will you be online later? I’ve got chores I have to do.
Then, Gardner could chat with TulsaBot on his end without time pressure to refine GardnerBot per their most recent topics, which would be sent back to Earth servers to be ready for the next chat.
In this way he could have “chats” with Tulsa that are run by a bot but quite custom to the two of them. It’s really Gardner’s questions, topics, jokes, and interest, but a bot-managed delivery of these things.
So it could work, but does it fit the movie? I think so. It would be believable because he’s a nerd raised by scientists. He made his own robot; why not his own bot?
From the audience’s perspective, it might look like they’re chatting in real time, but subtle cues on Gardner’s interface reward the diligent with hints that he’s watching a time delay. Maybe the chat we see in the film is even just cleverly edited to remove the bots.
How he manages to hide this data stream from NASA to avoid detection is another question better handled by someone else.
An honest version: bot envoy
So that solves the logic from the movie’s perspective, but of course it’s still squickish. He is ultimately deceiving her. Once he returns to Mars and she is back on Earth, could they still use the same system, but with full knowledge of its botness? Would real-world astronauts use it?
Would it be too fake?
I don’t think it would be too fake. Sure, the bot is not the real person, but neither are the pictures, videos, and letters we fondly keep with us as we travel far from home. We know they’re just simulacra, souvenir likenesses of someone we love. We don’t throw these away in disgust for being fakes. They are precious because they are reminders of the real thing. So would the themBot.
Hey, TulsaBot. Remember when we were knee deep in the Pacific Ocean? I was thinking about that today.
I do. It’s weird how it messes with your sense of balance, right? Did you end up dreaming about it later? I sometimes do after being in waves a long time.
I can’t remember, but someday I hope to come back to Earth and feel it again. OK. I have to go, but let me know how training is going. Have you been on the G machine yet?
Nicely, you wouldn’t need stall tactics in the honest version. Or maybe it still uses them, but it can be called out on them.
GardnerBot, you don’t have to stall. Just tell Gardner to watch Mission to Mars and update you. Because it’s hilarious and we have to go check out the face when I’m there.
Sending your loved one the transcript will turn it into a kind of love letter. The transcript could even be appended with a letter that jokes about the bot. The example above was too short for any semi-realtime insertions in the text, but maybe that would encourage longer chats. Then the bot serves as charming filler, covering the delays between real contact.
Ultimately, yes, I think we can backworld what looks physics-breaking into something that makes sense, and might even be a new kind of interactive memento between interplanetary sweethearts, family, and friends.