Idiocracy: Overview

While reviewing fascism in sci-fi, I was reminded of how much I love Mike Judge’s under-appreciated film Idiocracy. It’s hilarious, smart, and, admittedly, mean sci-fi. Since American politics are heading to some unholy Deep Dream merger of this film and The Handmaid’s Tale, I’m refining my broad dictum against sci-fi comedy and diving in.

Release Date: 25 January 2007 (USA)

IDIOCRACY-title

Overview

Private Joe Bauers and jaded sex worker Rita are selected by a military program—for their being very, very average—to be frozen in capsules for a year.

massive-spoilers_sign_color

A mistake shuts down the monitoring agency, and they wind up frozen for 500 years instead. Over that time, because dumb people keep having more kids than smart people, the average intelligence of the population drops and drops, so that when Joe and Rita wake from the stasis pods on 03 March 2505, they are the smartest people in the world. By a lot. Society is barely hanging on, lasting as long as it has owing to the designs of some long-dead smart people.

Woozy from his long sleep, Joe wanders into a hospital where the doctor goes into a panic for Joe’s not having an ID tattoo on his wrist. For this crime Joe is arrested, tried in a sham court, tattooed, and has his IQ tested before being bussed to prison. There Joe finally realizes how stupid everyone is when he talks his way to the exit and then just…runs away.

He finds his way back to the apartment of his court-appointed lawyer, whose name is Frito. Joe asks if there are any time machines, and Frito says yes, there is one. But he’s hesitant. To motivate him, Joe offers a compound interest time travel gambit payout of $80 billion and Frito—who says that he likes money—agrees. They find Rita (who, clearly used to dealing with morons, is faring pretty well) and head for the time machine, but Joe’s tattoo is remotely scanned and Frito’s car is shut down automatically for harboring fugitives. The three of them abandon the car to hike. They enter a truly massive CostCo to find the time machine, but Joe’s tattoo is again scanned and he is again arrested.

IDIOCRACY-time-masheen01

This time he’s taken to the White House, where he meets President King Camacho and is named Secretary of the Interior for being the smartest man in the world, according to the IQ test he took earlier. In an impassioned speech to the House of Representin’ [sic], Camacho promises them that Joe will solve the problems of failing crops, the plague of dust storms, the failing “ecomony,” acne, and car sickness; and do it all within one week. If he accomplishes it, Camacho promises, Joe will get a presidential pardon for his crimes. If not, he’ll be thrown back in jail.

Joe heads to the countryside with the Cabinet and Frito, where he is reunited with Rita. There he learns that the crops are being fed not with water but with a sports drink called Brawndo, The Thirst Mutilator. (Brawndo’s computers had long ago identified water as a threat to its profit margins, and during the budget crisis of 2330, purchased the Food and Drug Administration and the Federal Communications Commission. This enabled them to say, do, and sell anything they wanted, including requiring all crops be fed with Brawndo.)

foodpyramid.gif

Joe recommends they switch from Brawndo to water for the crops, which grosses everyone out because they only associate water with the toilet. Unable to convince them with reason (in my favorite scene they parrot Brawndo advertising slogans as counter evidence: “It’s got what plants crave.” “It’s got electrolytes.”), he tells them he can talk to plants, and that the plants say they want water. This convinces the Cabinet, and the sprinklers are switched over.

This causes Brawndo’s stock price to crash, and the company’s computer “does that auto-layoff thing,” firing everyone at Brawndo and causing 50% unemployment across the country. Outside the White House, a jobless mob demands revenge. Joe is taken to court again and convicted. Joe is sentenced to one night of “rehabilitation,” which is a deadly, monster truck public execution.

Idiocracy_time-masheen03

As Rehab starts, Rita spots a flower growing outside the White House. She and Frito rush to Rehab to save Joe. En route, they see sprouts in fields. Camacho saves Joe from Rehabilitation just in time as Rita and Frito broadcast video of the sprouts, exonerating Joe to the world.

Pardoned, Joe and Rita have Frito take them to the time machine, which ends up being just a dumb theme ride in the CostCo. Joe and Rita resign themselves to their new life amongst the morons. They wed, and Joe is elected President of America with Rita as first lady and Frito as Vice President. The movie ends contrasting Rita and Joe’s three kids, “the three smartest kids in the world”, with Frito’s, who are “32 of the dumbest kids ever to walk the earth.”

IDIOCRACY-Frito

IMDB: https://www.imdb.com/title/tt0387808/

Bonus tracks

It’s no accident that I’m writing about Idiocracy before the midterm elections in the U.S. The American Experiment seems to be on the verge, and close to some inescapable mistakes. So, at the end of each one of these posts, since this is my country, I’m going to go full USA-centric here and share something that USAmericans can do to vote, help others vote, and reverse our own continued freefall into Idiocracy. I know I have an international readership, but please bear with me.

Today, the bonus track is about registering to vote. If you’ve done it, awesome. Find someone in your life you can convince to register. If you haven’t, you need to. Either way, below is the info you need.

Register-to-vote deadlines

Voting is the most important tool we have, but in every state but North Dakota, to do that you have to be registered ahead of time. (Seriously. Good show, North Dakota.) These deadlines are all in October. You can see your state’s deadline (and for states that allow it, a link to register online) listed alphabetically at the New York Times link below.

You may be wondering: is it too late for you? Note that while most states have a single deadline for voter registration, many have separate deadlines for registering by mail, online, and in person. Find the same information from the NYT site sorted by date below. If it is too late, damn, that sucks, but there is still more you can do. Stay tuned to this blog for more reviews, more bonus tracks, and more calls to action.

Registration by date

  • 07 OCT Alaska, Montana (in person), Rhode Island
  • 08 OCT Mississippi (in person), Washington (by mail and online)
  • 09 OCT Arizona, Arkansas, Florida, Hawaii, Georgia, Illinois, Indiana, Kentucky, Louisiana (by mail and in person), Michigan, Mississippi (by mail), Nevada (by mail), New Mexico, Ohio, Pennsylvania, Tennessee, Texas, Utah (by mail)
  • 10 OCT Missouri, Montana (by mail)
  • 12 OCT Idaho, New York (online and in person), North Carolina (by mail), Oklahoma
  • 13 OCT Delaware
  • 15 OCT Virginia
  • 16 OCT District of Columbia, Kansas, Louisiana (online), Maine, Massachusetts (by mail), Minnesota, New Jersey, Oregon (note when you renew your driver’s license you are automatically registered to vote), West Virginia
  • 17 OCT Maryland, Massachusetts (online or in person), Nevada (by mail), New York (by mail), South Carolina, Wisconsin (by mail and online)
  • 18 OCT Nevada (online)
  • 19 OCT Nebraska (by mail and online)
  • 22 OCT Alabama, California, Iowa (by mail), South Dakota, Wyoming (by mail)
  • 26 OCT Nebraska (in person)
  • 27 OCT Iowa (in person), New Hampshire (by mail)
  • 29 OCT Colorado, Washington (in person)
  • 30 OCT Connecticut, Utah (online and in person)
  • 03 NOV North Carolina (in person)
  • TUE 06 NOV 2018, ELECTION DAY: Florida (in person), Iowa (in person), Maine (in person), Minnesota (in person), New Hampshire (in person), Vermont, Wisconsin (in person), Wyoming (in person). In California you can show up the day of and cast a provisional ballot.

21 Hyperdiegetic Questions about The Faithful Wookiee

Since I only manage to restart The Star Wars Holiday Special reviews right around the time a new Star Wars franchise movie comes out, many of you may have forgotten it was even being reviewed. Well, it is. If you need to catch up, or have joined this blog after I began it years ago, you can head back to beginning to read about the plot and the analyses so far. It’s not pretty.

SWHS-lumpy-and-vader

When we last left the Special, Lumpy was distracted from the Stormtrooper ransack of their home by watching The Faithful Wookiee. The 6 analyses of that film focused on the movie from a diegetic perspective, as if it were a movie like any other on this blog, dealing mostly with its own internal “logic.”

Picking up, we need to look at The Faithful Wookiee from a “hyperdiegetic” perspective, that is, in the context of the show in which it occurs: The Star Wars Holiday Special. Please note that, departing from the mission statement for a bit, these questions are not about the interfaces, but about the backworlding that informs those interfaces.

  1. Who in the Star Wars universe produced this cartoon?
  2. Is it like TomoNews, from a neutral third party telling about actual events that happened in the Star Wars universe?
  3. If so, why is it aimed at kids?
  4. What’s the revenue model?
  5. Why did Lumpy look carefully both ways before playing it?
  6. Why did he later try to hide it from the Imperial Officer? It certainly seems like he thinks it’s incriminating.
  7. If it’s real news, where is the talisman now? Why isn’t someone searching for it in all subsequent films? Because it could still be the most powerful biological weapon ever seen in the Star Wars galaxy. It carries a virus that renders humans unconscious until they get an antidote. It is infectious. Rather than chasing Death Star plans or Small Jedi Life Coaches, they should be chasing that thing.
  8. If not actual news, is it fiction based on (their) real-world people? Like an early Mike Tyson Mysteries?
  9. Is it Rebel propaganda, trying to attract impressionable young minds to the Rebel cause?
  10. If so, why would it imply that general-AI droids are morons, only reporting mission-critical facts on the condition of a spoken data-type error?
  11. If so, why would it imply that the Falcon had life-threateningly bad door designs?
  12. If so, why would it paint Luke to be such a buffoon?
  13. Is it so aspiring Rebels would think, “Hey, if that farmhand goof can be a Rebel hero…”?
  14. And how did they know that Boba Fett and Darth Vader were in cahoots, when it would not be until Empire that they actually go from being not in cahoots to being, definitely, in cahoots?
  15. Do they have some means of predicting Empire behavior? You’d think that power would have been used every single other place ever.
  16. Or, if it’s not a Rebel thing, is it Empire propaganda?
  17. If so, why would it depict Vader as being terrible at basic infosec? (Recall R2 just happens across Fett’s report.)
  18. If so, why would it have the Empire involved in a desperately convoluted, prone-to-failure plot?
  19. If it’s a disinformation campaign, why aim that at kids?
  20. Are Wookiee children secretly running the Rebellion?
  21. And lastly, is the 1234 game, in fact, the first “boss key,” “panic button,” or -H ever seen in sci-fi? (Boss key: A technological means for quickly hiding questionable screen content from over-the-shoulder observation. You slackers are welcome.)
SWHS-lumpy-game
Bosskey

I’m sure no one at Disney has an interest in addressing how this thing fits canon, but, damn.

It raises questions.

The Cloak of Levitation, Part 3: But is it agentive?

Full_cover

So I mentioned in the intro to this review that I was drawn to review Doctor Strange (with my buddy and co-reviewer Scout Addis) because the Cloak displays some interesting qualities in relation to the book I just published. Buy it, read it, review it on amazon.com, it’s awesome.

That sales pitch done, I can quickly cover the key concepts here.

  • A tool, like a hammer, is a familiar but comparatively dumb category of thing that only responds to a user’s input. Tool has been the model of the thing we’re designing in interaction design for, oh, 60 years, but it is being mostly obviated by narrow artificial intelligence, which can be understood as automatic, assistive, or agentive.
  • Assistant technology helps its user with the task she is focused on: Drawing her attention, providing information, making suggestions, maybe helping augment her precision or force. If we think of a hammer again, an assistive might draw her attention to the best angle to strike the nail, or use an internal gyroscope to gently correct her off-angle strike.
  • Agentive technology does the task for its user. Again with the hammer, she could tell hammerbot (a physical agent, but there are virtual ones, too) what she wants hammered and how. Her instructions might be something like: Hammer a ha’penny nail every decimeter along the length of this plinth. As it begins to pound away, she can then turn her attention to mixing paint or whatever.

When I first introduce people to these distinctions, I step one rung up on Wittgenstein’s Ladder and talk about products that are purely agentive or purely assistive, as if agency were a quality of the technology. (Thanks to TU prof P.J. Stappers for distinguishing these as ontological and epistemological approaches.) The Roomba, for example, is almost wholly agentive as a vacuum. It has no handle for you to grab, because it does the steering and pushing and vacuuming.

roomba_r2_d2_1
Yes, it’s a real thing you can own.

Once you get these basic ideas in your head, we can take another step up the Ladder together and clarify that agency is not necessarily a quality of the thing in the world. It’s subtler than that. It’s a mode of relationship between user and agent, one which can change over time. Sophisticated products should be able to shift their agency mode (between tool, assistant, agent, and automation) according to the intentions and wishes of their user. Hammerbot is useful, but still kind of dumb compared to its human. If there’s a particularly tricky or delicate nail to be driven, our carpenter might ask hammerbot’s assistance, but really, she’ll want to handle that delicate hammering herself.
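For readers who think better in code, here’s a toy sketch of that idea: agency as a mode of the relationship that the user can shift, rather than a fixed property of the device. (All the names here, HammerBot included, are my own illustrative inventions, not anything from the book.)

```python
from enum import Enum, auto

class AgencyMode(Enum):
    TOOL = auto()        # responds only to direct user input
    ASSISTANT = auto()   # augments the task the user is focused on
    AGENT = auto()       # performs the task on the user's behalf
    AUTOMATION = auto()  # performs the task with no user tuning at all

class HammerBot:
    """Hypothetical device that shifts agency mode at the user's request."""
    def __init__(self):
        self.mode = AgencyMode.TOOL

    def set_mode(self, mode):
        # Agency is a mode of the relationship, not a fixed property:
        # the same product slides along this spectrum as the user wishes.
        self.mode = mode

    def strike(self, user_input=None):
        if self.mode is AgencyMode.TOOL:
            return f"striking where the user aimed: {user_input}"
        if self.mode is AgencyMode.ASSISTANT:
            return "correcting the user's off-angle strike"
        return "hammering the assigned nails unattended"

bot = HammerBot()                 # starts as a dumb tool
bot.set_mode(AgencyMode.AGENT)    # delicate nail done; back to batch work
```

The point of the sketch is only that the mode lives in the relationship, so a well-designed product exposes a way to change it.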

Which brings us back to the Cloak.

Cloak-of-Levitation-01
I wish I knew how to quit you.

Watch the movie carefully and you’ll see that the only time it acts purely on command is when Strange uses it to fly to the Dark Dimension. So it has all of one tool-like function. It initiates the rest of its actions on its own.

So is it assistive or agentive? Well, again, that’s tricky, because it depends on which function we’re talking about. Here is my backworlded list of those functions, in order of importance.

  1. Obey commands (subject to some mystical and unmentioned 4 Laws of Relics?)
  2. Prevent harm to Strange
    • Halt the thing threatening him
    • If that’s not possible, get Strange out of harm’s way (pull him to safety)
    • Catch him, if he’s falling
    • Critical case: if Strange is disabled and threatened, take care of the threat (the head-wrap scene)
  3. Guide him toward the best tactic for his current situation
  4. Keep him looking sorcererly
    • Don yourself when appropriate
    • Try and do your work while being worn (don’t jump off Strange’s shoulders to do something, if you can avoid it)
    • Groom him if he becomes untidy
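Before going through them one by one, note that the ordering above implies an arbitration scheme: the Cloak runs the highest-priority behavior whose trigger fires. A toy sketch of that first-match arbitration, with triggers and actions entirely of my own invention:

```python
# Each behavior is (name, trigger predicate, action). List order encodes
# priority: obeying a command preempts harm prevention, which preempts
# tactical guidance, which preempts grooming.
def choose_behavior(state, behaviors):
    for name, trigger, action in behaviors:
        if trigger(state):
            return name, action(state)
    return "idle", None

behaviors = [
    ("obey",    lambda s: s.get("command"),      lambda s: s["command"]),
    ("protect", lambda s: s.get("threat"),       lambda s: f"block {s['threat']}"),
    ("guide",   lambda s: s.get("better_tactic"),lambda s: f"pull toward {s['better_tactic']}"),
    ("groom",   lambda s: s.get("untidy"),       lambda s: "straighten the collar"),
]

# "protect" wins here because it outranks grooming in the list.
name, act = choose_behavior({"threat": "falling debris", "untidy": True}, behaviors)
```

This is a sketch of the priority idea only; nothing in the film tells us how the Relic actually arbitrates.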

Let’s look at each of these.

1. Obeying commands could be any category. Strange can gesturally command it to fly, as he does, treating it as a tool. He can command it to assist him in some task, or assign it a task to do on its own. So that one is all over the board.

2. Preventing harm is mostly agentive. After all, if it’s preventing harm from happening, it should just do it, and not ask, right? The question would just be noise because the answer would almost always be, “of course.” Like a mystical Spider-Sense, this helps Strange avoid a threat he didn’t even know was there. This allows Strange to focus on the most critical thing in the situation, because the Cloak has his back for the minor stuff. (Which, admittedly, isn’t quite how Spider-Man’s Spider-Sense works.)

Strange-spider-sense

There’s a bit of conceptual hairsplitting to do here. When Strange is in combat, you might wonder, isn’t his attention on the combat, so the Cloak is assisting him with it? Sure, but combat is the category of thing he’s trying to do, a label more suited to our description than to his thinking. His attention is not on the category of thing he’s doing but on the thing itself. Not combat, but bringing down Kaecilius.

So I’d argue what we see is agentive. Note that it’s easy to imagine an assistive prevention of harm, too. If Strange knows there is a big rock heading his way, he’s going to dodge. The Cloak can predict how well his dodge will work, and add a little oomph if it’s not going to be enough. But I don’t think we see that in the movie.

3. Guiding him toward the best situational tactic is an interesting case. It’s agentive, but in the one case we really see it in action, it looks assistive. Strange is reaching for the halberd on the wall, and the Cloak, knowing that would be ineffective at best, is dragging him toward the thing that will work: the Crimson Bands of Cyttorak. Is it assisting him to grab the right thing?

Yes, but again, it’s a question of attention. Strange’s attention is on getting the halberd. The Cloak has run through the scenarios, and knows that probabilistically, the halberd is the wrong choice. So it is having to intervene and draw Strange’s attention to the Bands. In monitoring and correcting Strange’s tactics, it is operating outside his attention and acting agentively.

4. When the Cloak works to keep Strange looking sorcererly it is similarly monitoring his appearance and making adjustments as needed. Strange’s attention does not need to go there, so it’s agentive.

Cloak-of-Levitation-20

Hang on, you cry, what about that time Strange tells it to stop wiping his face? That’s a command. His attention is on it. Yes, but Strange isn’t commanding it to begin some new task that he wanted it to accomplish. The Cloak found a trigger (blood on face) and initiated a behavior to correct the problem. Strange is correcting the Cloak that he finds this level of grooming to be too much. In the second part of the book I refer to this as tuning a behavior, and it’s one of the interactions that distinguish agentive tech from automation, which does not afford this kind of personalization.
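A quick sketch of what tuning might mean in software terms: the behavior stays agentive, triggering on its own, but the user can adjust its trigger threshold, which pure automation wouldn’t allow. (The class, numbers, and method names are illustrative, not from the book or the film.)

```python
class GroomingBehavior:
    """Sketch: an agentive behavior the user can tune, unlike automation."""
    def __init__(self, sensitivity=0.5):
        self.sensitivity = sensitivity  # how small a mess triggers grooming

    def should_groom(self, mess_level):
        # The agent monitors appearance and triggers on its own...
        return mess_level >= self.sensitivity

    def tune_down(self):
        # ..."stop wiping my face": the user corrects, and the agent
        # raises the threshold so only bigger messes trigger it.
        self.sensitivity = min(1.0, self.sensitivity + 0.3)

g = GroomingBehavior()
g.should_groom(0.6)   # blood on face: triggers grooming
g.tune_down()
g.should_groom(0.6)   # same mess no longer triggers; behavior was tuned
```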


So, in short, the Cloak demonstrates lots and lots of agentive properties, and provides a rich example that helps illuminate differences in the core concepts. If you want to know more about these thoughts as they apply to real world tech as opposed to sci-fi interfaces, there’s that book I mentioned.

Next up, we’ll get to some critiques of the Cloak and suggestions for how it might be improved if it were a real world thing.

Bot envoys (for extremely-high-latency communications)

While recording a podcast with the guys at DecipherSciFi about the twee(n) love story The Space Between Us, we spent some time kvetching about how silly it was that many of the scenes involved Gardner, on Mars, in a real-time text chat with a girl named Tulsa, on Earth. It’s partly bothersome because throughout the rest of the movie, the story tries for a Mohs sci-fi hardness of, like, 1.5, somewhere between Real Life and Speculative Science, so it can’t really excuse itself through the Applied Phlebotinum that, say, Star Wars might use. The rest of the film feels like it’s trying to have believable science, but during these scenes it just whistles, looks the other way, and hopes you don’t notice that the two lovebirds are breaking the laws of physics as they swap flirt emoji.

Hopefully unnecessary science brief: Mars and Earth are far away from each other. Even if the communications transmissions are sent at light speed between them, it takes much longer than the 1 second of response time required to feel “instant.” How much longer? It depends. The planets orbit the sun at different speeds, so aren’t a constant distance apart. At their closest, it takes light 3 minutes to travel between Mars and Earth, and at their farthest—while not being blocked by the sun—it takes about 21 minutes. A round-trip is double that. So nothing akin to real-time chat is going to happen.

But I’m a designer, a sci-fi apologist, and a fairly talented backworlder. I want to make it work. And perhaps because of my recent dive into narrow AI, I began to realize that, well, in a way, maybe it could. It just requires rethinking what’s happening in the chat.

Let’s first acknowledge that we solved long-distance communications a long time ago. Gardner and Tulsa could just, you know, swap letters or, like the characters in 2001: A Space Odyssey, recorded video messages. There. Problem solved. It’s not real-time interaction, but it gets the job done. But kids aren’t so much into pen pals anymore, and we have to acknowledge that Gardner doesn’t want to tip his hand that he’s on Mars (it’s a grave NASA secret, for plot reasons). So the question is how could we make it work so it feels like a real time chat to her. Let’s first solve it for the case where he’s trying to disguise his location, and then how it might work when both participants are in the know.

Fooling Tulsa

Since 1984 (ping me, as always, if you can think of an earlier reference) sci-fi has had the notion of a digitally-replicated personality. Here I’m thinking of Gibson’s Neuromancer and the RAM boards on which Dixie Flatline “lives.” These RAM boards house an interactive digital personality of a person, built out of a lifetime of digital traces left behind: social media, emails, photos, video clips, connections, expressed interests, etc. Anyone in that story could hook the RAM board up to a computer, and have conversations with the personality housed there that would closely approximate how that person would (or would have) respond in real life.

SBU_Tulsa.png
Listen to the podcast for a mini-rant on translucent screens, followed by apologetics.

Is this likely to actually happen? Well it kind of already is. Here in the real world, we’re seeing early, crude “me bots” populate the net which are taking baby steps toward the same thing. (See the MessinaBot, https://bottr.me/, https://sensay.it/, and the forthcoming http://bot.me/.) By the time we actually get a colony to Mars (plus the 16 years for Gardner to mature), mebot technology should be able to stand in for him convincingly enough in basic online conversations.

Training the bot

So in the story, he would look through cached social media feeds to find a young lady he wanted to strike up a conversation with, and then ask his bot-maker engine to look at her public social media to build a herBot with whom he could chat, to train it for conversations. During this training, the TulsaBot would chat about topics of interest gathered from her social media. He could pause the conversation to look up references or prepare convincing answers to the trickier questions TulsaBot asks. He could also add some topics to the conversation they might have in common, and questions he might want to ask her. By doing this, his GardnerBot isn’t just some generic thing he sends out to troll any young woman with. It’s a more genuine, interactive first “letter” sent directly to her. He sends this GardnerBot to servers on Earth.

Hey-mars-chat.gif
A demonstration of a chat with a short Martian delay. (Yes, it’s an animated gif.)

Launching the bot

GardnerBot would wait until it saw Tulsa online and strike up the conversation with her. It would send a signal back to Gardner that the chat has begun so he can sit on his end and read a space-delayed transcript of the chat. GardnerBot would try its best to manage the chat based on what it knows about awkward teen conversation, Turing test best practices, what it knows about Gardner, and how it has been trained specifically for Tulsa. Gardner would assuage some of his guilt by having it dodge and carefully frame the truth, but not outright lie.

Buying time

If during the conversation she raised a topic or asked a question for which GardnerBot was not trained, it could promise an answer later, and then deflect, knowing that it should pad the conversation in the meantime:

  • Ask her to answer the same question first, probing into details to understand rationale and buy more time
  • Dive down into a related subtopic in which the bot has confidence, and which promises to answer the initial question
  • Deflect conversation to another topic in which it has a high degree of confidence and lots of detail to share
  • Text a story that Gardner likes to tell that is known to take about as long as the current round-trip signal
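As a sketch, the stalling logic might look something like this: when confidence in an answer is low, pick the shortest padding tactic that still covers the round-trip signal delay. (Tactic names, timings, and the confidence threshold are my own illustrative guesses.)

```python
# Sketch of GardnerBot's stalling logic. Each tactic is (name, rough minutes
# of conversation it buys), ordered from shortest to longest.
STALL_TACTICS = [
    ("ask_her_first", 3),
    ("related_subtopic", 6),
    ("deflect_new_topic", 10),
    ("tell_long_story", 25),
]

def respond(confidence, round_trip_minutes, threshold=0.7):
    if confidence >= threshold:
        return "answer"  # the bot knows what Gardner would say
    # Otherwise, prefer the shortest tactic that still covers the delay,
    # so Gardner's training reply can arrive before the padding runs out.
    for name, minutes in STALL_TACTICS:
        if minutes >= round_trip_minutes:
            return name
    return STALL_TACTICS[-1][0]  # worst case: tell the longest story

respond(0.9, 6)   # high confidence: just answer
respond(0.2, 6)   # low confidence, 6-minute round trip: deflect instead
```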

Example

  • TULSA
  • OK, here’s one: If you had to live anywhere on Earth where they don’t speak English, where would you live?

GardnerBot has a low confidence that it knows Gardner’s answer. It could respond…

  1. (you first) “Oh wow. That is a tough one. Can I have a couple of minutes to think about it? I promise I’ll answer, but you tell me yours first.”
  2. (related subtopic) “I’m thinking about this foreign movie that I saw one time. There were a lot of animals in it and a waterfall. Does that sound familiar?”
  3. (new topic) “What? How am I supposed to answer that one? 🙂 Umm…While I think about it, tell me…what kind of animal would you want to be reincarnated as. And you have to say why.”
  4. (story delay) “Ha. Sure, but can I tell a story first? When I was a little kid, I used to be obsessed with this music that I would hear drifting into my room from somewhere around my house…”

Lagged-realtime training

Each of those responses is a delay tactic that allows the chat transcript to travel to Mars for Gardner to do some bot training on the topic. He would be watching the time-delayed transcript of the chat, keeping an eye on an adjacent track of data containing the meta information about what the bot is doing, conversationally speaking. When he saw it hit a low-confidence or high-stakes topic and deflect, the interface would provide a chat window for him to tell the GardnerBot what it should do or say.
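The lagged link itself is easy to model: every chat line and every piece of bot meta-information travels with a fixed one-way delay, so Gardner always reads a transcript of the past, and his corrections arrive in the bot’s future. A minimal sketch (the class and message are my own, not anything from the film):

```python
import heapq

class DelayedChannel:
    """Sketch of a one-way interplanetary link with a fixed signal delay."""
    def __init__(self, delay_minutes):
        self.delay = delay_minutes
        self.queue = []  # min-heap of (arrival_time, message)

    def send(self, now, message):
        heapq.heappush(self.queue, (now + self.delay, message))

    def receive(self, now):
        # Deliver everything whose light-speed travel time has elapsed.
        out = []
        while self.queue and self.queue[0][0] <= now:
            out.append(heapq.heappop(self.queue)[1])
        return out

earth_to_mars = DelayedChannel(delay_minutes=21)
earth_to_mars.send(0, "META: low confidence on 'where would you live?'")
earth_to_mars.receive(10)   # nothing yet; the signal is still in flight
earth_to_mars.receive(21)   # Gardner sees the flag, 21 minutes late
```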

  • To the stalling GARDNERBOT…
  • GARDNER
  • For now, I’m going to pick India, because it’s warm and I bet I would really like the spicy food and the rain. Whatever that colored powder festival is called. I’m also interested in their culture, Bollywood, and Hinduism.
  • As he types, the message travels back to Earth where GardnerBot begins to incorporate his answers to the chat…
SBU_Gardner.png
  • At a natural break in the conversation…
  • GARDNERBOT
  • OK. I think I finally have an answer to your earlier question. How about…India?
  • TULSA
  • India?
  • GARDNERBOT
  • Think about it! Running around in warm rain. Or trying some of the street food under an umbrella. Have you seen YouTube videos from that festival with the colored powder everywhere? It looks so cool. Do you know what it’s called?

Note that the bot could easily look it up and replace “that festival with the colored powder everywhere” with “Holi Festival of Color,” but it shouldn’t. Gardner doesn’t know that fact, so the bot shouldn’t pretend it knows it. Cyrano de Bergerac software, which would make him sound more eloquent, intelligent, or charming than he really is to woo her, would be a worse kind of deception. Gardner wants to hide where he is, not who he is.

That said, Gardner should be able to direct the bot, to change its tactics. “OMG. GardnerBot! You’re getting too personal! Back off!” It might not be enough to cover a flub made 42 minutes ago, but of course the bot should know how to apologize on Gardner’s behalf and ask conversational forgiveness.

Gotta go

If the signal to Mars got interrupted or the bot got into too much trouble with pressure to talk about low confidence or high stakes topics, it could use a believable, pre-rolled excuse to end the conversation.

  • GARDNERBOT
  • Oh crap. Will you be online later? I’ve got chores I have to do.

Then, Gardner could chat with TulsaBot on his end without time pressure to refine GardnerBot per their most recent topics, which would be sent back to Earth servers to be ready for the next chat.

In this way he could have “chats” with Tulsa that are run by a bot but quite custom to the two of them. It’s really Gardner’s questions, topics, jokes, and interest, but a bot-managed delivery of these things.

So it could work, but does it fit the movie? I think so. It would be believable because he’s a nerd raised by scientists. He made his own robot, why not his own bot?

From the audience’s perspective, it might look like they’re chatting in real time, but subtle cues on Gardner’s interface reward the diligent with hints that he’s watching a time delay. Maybe the chat we see in the film is even just cleverly edited to remove the bots.

How he manages to hide this data stream from NASA to avoid detection is another question better handled by someone else.

SBU_whodis.png

An honest version: bot envoy

So that solves the logic from the movie’s perspective but of course it’s still squickish. He is ultimately deceiving her. Once he returns to Mars and she is back on Earth, could they still use the same system, but with full knowledge of its botness? Would real world astronauts use it?

Would it be too fake?

I don’t think it would be too fake. Sure, the bot is not the real person, but neither are the pictures, videos, and letters we fondly keep with us as we travel far from home. We know they’re just simulacra, souvenir likenesses of someone we love. We don’t throw these away in disgust for being fakes. They are precious because they are reminders of the real thing. So it would be with the themBot.

  • GARDNER
  • Hey, TulsaBot. Remember when we were knee deep in the Pacific Ocean? I was thinking about that today.
  • TULSABOT
  • I do. It’s weird how it messes with your sense of balance, right? Did you end up dreaming about it later? I sometimes do after being in waves a long time.
  • GARDNER
  • I can’t remember, but someday I hope to come back to Earth and feel it again. OK. I have to go, but let me know how training is going. Have you been on the G machine yet?

Nicely, you wouldn’t need stall tactics in the honest version. Or maybe it uses them, but can be called out.

  • TULSA
  • GardnerBot, you don’t have to stall. Just tell Gardner to watch Mission to Mars and update you. Because it’s hilarious and we have to go check out the face when I’m there.

Sending your loved one the transcript will turn it into a kind of love letter. The transcript could even be appended with a letter that jokes about the bot. The example above was too short for any semi-realtime insertions in the text, but maybe that would encourage longer chats. Then the bot serves as charming filler, covering the delays between real contact.

Ultimately, yes, I think we can backworld what looks physics-breaking into something that makes sense, and might even be a new kind of interactive memento between interplanetary sweethearts, family, and friends.

R. S. Revenge Comms

Note: In honor of the season, Rogue One opening this week, and the reviews of Battlestar Galactica: The Mini-Series behind us, I’m reopening the Star Wars Holiday Special reviews, starting with the show-within-a-show, The Faithful Wookiee. Refresh yourself on the plot if it’s been a while.

Faithful-Wookiee-02

On board the R.S. Revenge, the purple-skinned communications officer announces he’s picked up something. (Genders are a goofy thing to ascribe to alien physiology, but the voice actor speaks in a masculine register, so I’m going with it.)

faithful-wookiee-01-surrounds

He attends a monitor, below which are several dials and controls in a panel. On the right of the monitor screen there are five physical controls.

  • A stay-state toggle switch
  • A stay-state rocker switch
  • Three dials

The lower two dials have rings under them on the panel that accentuate their color.

Map View

The screen is a dark purple overhead map of the impossibly dense asteroid field in which the Revenge sits. A light purple grid divides the space into 48 squares. This screen has text all over it, written in a constructed orthography unmentioned on Wookieepedia. In the upper center and upper right are unchanging labels. A triangular label sits in the lower left. In the lower right corner, text appears and disappears too fast for (human) reading. The middle right side of the screen is labeled in large characters, but they also change too rapidly to make much sense of them.

revengescreen

Luke, looking over the shoulder of the comms officer at the same monitor, exclaims, “It’s the Millennium Falcon!”

faithful-wookiee-12
Seriously, Luke, how can you tell this?

Watching the glowing dot and crosshairs blink and change position several times, the comms officer says, “They’re coming out of light speed. I can’t make contact.” An off-screen voice tells him to “Try a lower channel.” Something causes the channel to change (the comms officer’s hands do not touch anything that we can see), and then the monitor shows a video feed from the Falcon.

Video Feed

The video feed has an overlay on the upper left-hand side, consisting of lines of text that appear from top to bottom in a palimpsest formation, even though the copy is left-aligned. At the top is a label with changing characters, looking something like a timestamp.

Faithful-Wookiee-03

Analysis of the Map View

Since we can’t read the video overlay in the video feed, and it doesn’t interfere with the image, there’s not much to say about it. Instead I’ll focus on the map view.

Hand-drawn Inconsistency

In the side-angle shot, which we see first, the dial colors run top to bottom as beige, red, yellow. In the facing shot that immediately follows, they run beige, yellow, red: the red and yellow are transposed. Itʼs of course possible that the dials have a variable hue and changed at exactly the moment the camera cut. But then we have to explain where his hand went, why we don’t see any of the other elements changing color, and so on…

This illustrates one of the problems with reviewing hand-drawn animation (and why scifiinterfaces generally frowns upon it). It takes extra work for an animator to keep things consistent from screen to screen. She must have a reference when drawing the interface from any new angle, and this extra work is on top of all the other things she has to manage, like color and timing. Fewer people will notice transposed dial colors than, say, the comms officer turning orange instead of purple, so the interface is low on that priority stack.

Contrast that with live-action and computer-animated interfaces. In these modes of working, it takes extra work to change interfaces from shot to shot, so you run into consistency problems much less frequently.

I’ve written about this before in the abstract, but it’s nice to have a simple and easily shown example in the blog to point to.

2Dness

Another problem with the interface is that it is 2-D, but space is 3-D.

When picking a projection to display, we have to keep in mind that an impending collision is more immediately understood when presented as 2-D information: Constant bearing, decreasing range = Trouble. So perhaps the view has automatically aligned itself to be perpendicular to the Falcon’s approach, which makes it easier to monitor the decreasing distance.

If so, he would need to see that automatically-aligned status reflected somewhere in the interface, and have access to controls that let him change the view and snap back to this Most Useful View. Admittedly, this is a lot of apologetics to apply, when really, it’s most likely the old trope 2-D Space.
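As an aside, the “constant bearing, decreasing range” rule is simple enough to express in code. This is my own illustrative sketch, not anything implied by the film’s interface: given two successive fixes on own ship and on a contact, it flags a collision course.

```python
# Illustrative sketch of the CBDR (Constant Bearing, Decreasing Range) rule.
# Tracks are lists of two (x, y) fixes; all numbers are made up.
import math

def bearing_and_range(own, contact):
    """Bearing (degrees, 0-360) and range from own ship to a contact, 2-D."""
    dx, dy = contact[0] - own[0], contact[1] - own[1]
    return math.degrees(math.atan2(dy, dx)) % 360, math.hypot(dx, dy)

def cbdr_alert(own_track, contact_track, bearing_tol=1.0):
    """If the bearing to a contact barely changes while the range shrinks,
    the two ships are on a collision course."""
    b1, r1 = bearing_and_range(own_track[0], contact_track[0])
    b2, r2 = bearing_and_range(own_track[1], contact_track[1])
    bearing_change = abs((b2 - b1 + 180) % 360 - 180)  # shortest angular difference
    return bearing_change < bearing_tol and r2 < r1

own = [(0, 0), (0, 0)]            # Revenge holds station
incoming = [(10, 10), (5, 5)]     # closes straight in: constant bearing, range dropping
passing = [(10, 10), (10, 5)]     # bearing swings as it slides by
print(cbdr_alert(own, incoming))  # True
print(cbdr_alert(own, passing))   # False
```

An interface that computed and surfaced this directly, rather than leaving it to the operator’s eyeball, would make the Most Useful View argument above much stronger.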

Attention and memory

Faithful-Wookiee-01

There are some nicely designed attention cues. The crosshairs, glowing dot, and motion graphics make it so that—even though we can’t read the language—we can tell what’s of interest on the screen. One dot moving towards another, stationary dot. We’re set up for the Falcon’s buzzing the base.

That’s probably the best thing that can be said for it.

The text is terrible, changing too fast for a human reader. (Yes, yes, put down that emerging comment. Purple-face isnʼt human, but we must evaluate interfaces considering what is useful to us, and right now that means us humans.) The text changes so much faster than the blinking, in fact, that it pulls attention away from the blinking. Narratively, the rapid-fire text helps convey a sense of urgency, but at a steep cost to readability. It’s not a good model for real-world design.

The blinking crosshair might most accurately reflect the actual position of the detected object within the radar sweep. But it could help the officer more. As with medical signals, data points are not as interesting as information trends. As it is, the display relies on his memory to piece together the information, which means he has to constantly monitor the screen to make sense of it. If instead the view featured an evaporating trail of data points, not only could he look away without missing much information, but he would also notice that the speed and direction are slightly erratic, which would prove quite interesting to anyone trying to ascertain the status of the ship. One glance shows things are not as they should be. The Falcon is clearly careening.

tracking_assistance
Actual points from the animation.
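For the curious, here’s a sketch of how that evaporating trail might be computed, and how a trail makes the erratic motion machine-detectable too. Everything here (the fade half-life, the turn threshold, the sample points) is an illustrative assumption of mine, not something from the animation.

```python
# Sketch of the "evaporating trail" idea: keep recent radar fixes with
# exponentially fading opacity so a glance reveals the trend, not just a blip.
# All numbers are illustrative.
import math

def faded_trail(fixes, half_life=3.0):
    """Return (point, opacity) pairs; the newest fix is fully opaque and
    older fixes fade by half every half_life steps."""
    n = len(fixes)
    return [(p, 0.5 ** ((n - 1 - i) / half_life)) for i, p in enumerate(fixes)]

def is_erratic(fixes, turn_threshold_deg=15.0):
    """Flag a track whose heading swings more than the threshold between
    successive legs, e.g. a careening Millennium Falcon."""
    headings = [
        math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))
        for a, b in zip(fixes, fixes[1:])
    ]
    swings = [abs((h2 - h1 + 180) % 360 - 180)
              for h1, h2 in zip(headings, headings[1:])]
    return any(s > turn_threshold_deg for s in swings)

steady = [(0, 0), (1, 1), (2, 2), (3, 3)]      # straight line, constant heading
careening = [(0, 0), (1, 1), (2, 0), (3, 2)]   # heading swings wildly
print(is_erratic(steady))     # False
print(is_erratic(careening))  # True
```

With the trail drawn at those fading opacities, the officer gets the trend for free, and the display itself could raise the erratic-motion flag before he even looks.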

Mysterious Control

When we first see the comms officer, he has his unmoving hand on one of the dials. But when we see the map switch to the video feed, none of the controls we can see are touched. This raises a possibility and a question.

The possibility is that there is control by some other mechanism. My best guess is voice control, since the Rebel general says “try a lower channel” just before the switch. Maybe he was not speaking to the comms officer, but to the machine itself. And given C-3PO, they clearly have the technology to recognize and act on natural language, though it’s usually associated with a full general artificial intelligence. A Rebel Siri (33 years before it came out in Apple’s iOS) makes sense from an apologetics standpoint.

If so, there are some aspects of the UI missing: signals to the operator that the machine is listening, hearing, and understanding what is being said, as well as whether the speaker is authorized to control it. After all, the comms officer is wearing the headset, but it was the red-bearded general who issued the command. I imagine it’s not OK for anyone on the bridge to just shout out commands.

faithful-wookiee-13
Just General Burnside, here.

The question, then: if the channel is controlled by voice, what are the physical controls for? They lack labels of any kind. Perhaps they’re there as a backup, should voice control fail. Perhaps they are vestigial, left over from before voice control was installed. Maybe only the general has voice override and the comms officer must use the physical controls. Any of these would be fine backworlding explanations, but my favorite idea is that the dials are for controlling nuanced variables in very fluid ways with instant feedback.

It’s easier to twiddle a dial to change the frequency of a radio to find a low-power signal than to keep saying “back…forward…no, back just a bit.” That would help explain what the comms officer was doing with his hands on the dials when he got something but not when the general voice-controls the channel.

In general

The interface shows some sophistication in styling and visual hierarchy, and if we give it lots of benefit of the doubt, might even be handling some presentation variables for the user in sophisticated ways. But the distractions of the rapid-fire text, the lack of trend lines, the lack of labels for the physical controls, and the missing affordances for projection control and voice control feedback make it a poor model for any real world design. 

Johnny Mnemonic (1995): Overview

The “Internet 2021” shot introduces the cyberspace interface and environment that forms the backdrop for the film. (There’s also a lengthy and unhelpful text crawl, but we’ll pass over that.) Now let’s introduce the film using plain words instead.

johnny-mnemonic-film-images-8a812d52-ea68-4621-bf4a-e4855cf1bb6

When discussing the interfaces in a film it helps to know a little about the context in which it was made. I’ll talk more about this at the end, but for now you need to know that Johnny Mnemonic was released in 1995 and is both a cyberpunk and virtual reality film.

Cyberpunk was a subgenre of science fiction which began in the 1980s. Cyberpunk authors were the first to write extensively about personal computing technology, world wide computer networks, and virtual reality. By the end of the 1990s cyberpunk ideas had been absorbed into mainstream science fiction.

At the time of writing, 2016, virtual reality is a hot topic with megabytes devoted online to the prospects and implications of the Oculus Rift, HTC Vive, and others. This “VR Boom” is actually the second of these, not something new. The first virtual reality boom took place in the mid 1990s, and Johnny Mnemonic was released in the middle of it. By the end of the 1990s virtual reality, like cyberpunk, had largely faded away.

The plot.

Johnny Mnemonic takes place in 2021. It’s a cyberpunk world, with corporations that are more powerful than governments and employ Yakuza gangsters to do their dirty work. There’s also a serious new disease, Nerve Attenuation Syndrome, with no known cure. The Johnny of the title is a mnemonic courier, someone who physically transports important data from place to place by embedding it in their brain. He needs to do one last job before retiring.

In a Beijing hotel he uploads 320G of “data” from a small group of renegade scientists employed by the Pharmakom medical corporation, to be delivered to Newark, New Jersey. The 320G is significant because it has overloaded Johnny’s capacity, and he will die if the data is not downloaded soon. In what will be a recurring plot element, heavily armed thugs who want to prevent the data being released kill the scientists and attempt to kill Johnny. During the fight, three images, the “Access Code” needed to download the data, are partly lost.

Johnny arrives in Newark, where the same people try to kill him again. He is rescued by the other lead character, Jane, a bodyguard who comes to his aid on the promise of lots of money. On the run from an ever-increasing number of people trying to find and kill them, Johnny and Jane fall in with the LoTeks, resistance fighters who hack into corporate networks and release information that corporations want to keep secret. (The LoTeks themselves are not against technology, but their chosen lifestyle restricts them to using what they can scavenge rather than being lavishly equipped with the latest and greatest.)

Johnny learns in quick succession that Jane has early onset NAS symptoms and that the “data” locked up in his head is a cure for NAS. As a cyberpunk corporation, Pharmakom is naturally keeping it secret just to make more money. Without the full access code, the only hope to extract the data is Jones, a cybernetically enhanced dolphin working with the LoTeks. After a last climactic battle, Johnny with the help of Jones is able to “hack his own brain” and recover the data, the cure is released to the world, and Johnny and Jane can live somewhat more happily (this is cyberpunk) ever after.

Johnny Mnemonic (in this review always referring to the film, not the short story, unless stated otherwise) is packed with interfaces, of which the most interesting and memorable is an extended cyberspace scene around the middle. Like the gestural interface of Minority Report, it is a wonderfully, almost obsessively, detailed imagining of the near future. The value of these predictions, as with most science fiction, is not whether they were correct. Predictions are much more interesting for what they tell us about the hopes, expectations, and dreams at the time they were made. Johnny Mnemonic, made in 1995 and set in 2021, shows us how the Internet and World Wide Web were expected to develop over the next twenty-five years. As I write this, there’s five years to go.

Let’s jack in and see how it holds up!

IMDB: https://www.imdb.com/title/tt0113481/

Back to the Future Part II: Overview

The ongoing reviews are on pause for a very special review of a favorite and formative film, the future scenes of which occur on today’s date.

Release Date: 22 November 1989 (USA)

BttF-title

26 OCT 1985

Doc Brown travels in his flying DeLorean time machine from the future date of 21 SEP 2015 to fetch Marty McFly and his girlfriend Jennifer. When they return to that impossibly far date, Doc puts Jennifer to sleep and enjoins Marty to prevent his future son from becoming an accomplice to a crime that ultimately destroys the whole family.

21 SEP 2015

After taking his son’s place and thwarting the bully Griff, Marty seizes an opportunity in the future to purchase an “antique” sports almanac, but Doc throws it away. Griff’s grandfather Biff overhears their conversation and fetches the almanac from the trash.

Before Marty and Doc can get the still-sleeping Jennifer to return to the past, she is apprehended by police and returned to her home where she hides from her future self in a closet. When they leave the time machine to rescue Jennifer, Biff steals the time machine, travels back to 1955 when he was a boy, and uses the almanac to make himself rich.

Back to the Past (1985)

When Doc, Marty, and Jennifer exit the house and return to their own time, the world has changed. Biff is the town’s evil crime and gambling mogul, married to Marty’s mom; the town is rough and lawless; and Marty’s dad has been murdered (by Biff).

12 NOV 1955

Together Doc and Marty travel to 1955, where Marty follows Biff to the Enchantment Under the Sea dance, trying to retrieve the almanac while avoiding contact with people who might recognize them, and keeping events meant to happen then on course.

Finally they use the time machine and a hoverboard from 2015 to take the almanac from Biff(1955) and burn it, but lightning strikes the DeLorean, sending it away in time. Moments later a mysterious figure delivers a letter from 1885, letting Marty know that Doc was transported there and (as of the writing of the letter) is fine.

Marty rushes to talk to Doc(1955), and the plot pauses until Back to the Future Part III.

IMDB: https://www.imdb.com/title/tt0096874/

Tony Stark is being lied to (by his own creation)

In the last post we discussed some necessary new terms for the ongoing deep-dive examination of the Iron Man HUD. Before continuing, there’s one last bit of meandering philosophy and fan theory I’d like to propose, one that touches on our future relationship with technology.

The Iron Man is not Tony Stark. The Iron Man is JARVIS. Let me explain.

Tony can’t fire weapons like that

vlcsnap-2015-09-15-05h12m45s973

The first piece of evidence is that most of the weapons he uses are unlikely to be fired by him. Take the repulsor rays in his palms. I challenge readers to strap a laser perpendicular to each of their palms and reliably target moving objects that are actively trying to avoid getting hit, while, say, roller skating an obstacle course. Because that’s what he’s doing as he flies around incapacitating Hydra agents and knocking around Ultrons. The weapons are not designed for Tony to operate them manually with any accuracy. But that’s not true for the artificial intelligence.

Iron Targeting 02

The same thing goes for the mini-missiles he uses to take down the hostage situation in Revengistan. Recall that people can only have their attention on one thing at a time (called the locus of attention in the literature) but the whole point of this scene is that he’s taking out half a dozen at once. It’s pretty clear from the HUD here that Tony is simply indicating which ones he thinks are the bad guys, and JARVIS pulls the triggers.

Iron-Tareting

It’s also clear from the larger context of the movies that JARVIS would be perfectly capable of making this determination for himself. Even if Tony’s saccades were a fraction of a second too slow and one of the hostages made a move, JARVIS could detect that move and act autonomously to ensure that a hostage didn’t die, even before Tony’s had time to process what was going on.

Tony can’t fly like that

Iron Flight 03

Sure, with enough practice I’ll bet someone could figure out how to pilot the suit for short flights. (If the physics could be worked out.) But the movies show him flying from Santa Monica to the Middle East. That’s around a 30-hour commercial flight. Even if the suit can fly six times the speed of a modern jetliner, he’s got to hold his hands resisting and aiming the propulsion for 5 hours. No one has that kind of concentration and endurance. (Let’s not even talk about holding his neck up for that long, too.)
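The arithmetic behind that claim, for the skeptical (the jetliner cruise speed is my own assumption, roughly 900 km/h):

```python
# Back-of-envelope check of the endurance claim, using the post's own numbers.
commercial_hours = 30   # Santa Monica to the Middle East, per the post
speed_multiple = 6      # suit assumed to fly six times jetliner speed
jetliner_kmh = 900      # assumed typical jetliner cruise speed

suit_hours = commercial_hours / speed_multiple
suit_speed_kmh = speed_multiple * jetliner_kmh
print(suit_hours)       # 5.0 hours of continuously aiming the repulsors
print(suit_speed_kmh)   # 5400 km/h, roughly Mach 4.4 at sea level
```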

Iron Obstacle Course

Even for him to get as good as an aerobatic pilot over short flights dodging lasers and performing intricate maneuvers would take (per the popular estimate) 10,000 hours, not the few flits about that Tony can squeeze in between inventing and superheroing, playboying and billionairing.

It makes more sense if JARVIS is wholly responsible for the flying. On the long hauls Tony can take care of other things, rest his body, or even sleep; on short flights he can just indicate his intentions and let JARVIS take that as input, as he uses his ubiquitous sensors and massively more powerful processing speed to get the actual tactical flying done.

So what is Tony doing?

With JARVIS handling the tactics of flight and combat, information gathering and behind the scenes coordination, Tony is really an onboard command and control center. Sure, he’s the major strategic input for JARVIS to consider, but he’s just an input.

But how wise is it for Tony to be on board, tactically? One of the reasons there are command and control centers is to keep the big-picture decision makers out of the heat and danger of the moment. But Tony is right there in the action risking himself, constantly. If he were incapacitated or wounded, JARVIS would have to remove the suit from combat just to get Tony to safety. In the battle, Tony is a biological liability.

The short answer is that Tony is a megalomaniac. He can’t not want to be there, to crack wise, to indulge in post-pub fisticuffs with Thor, to remove the helmet at the end of battle over the smoking corpses of the Chitauri and partake in the glory. But it doesn’t have to be this way.

Iron Drone

There’s a scene in Iron Man 3 where he has to pilot one of the suits remotely, and it’s impossible for us in the audience to detect the difference from the outside. So this remote control is right there in the Marvel Cinematic Universe.

But with a fully-functioning A.I. on board, the remote supervisor would be the wiser strategy-of-record, allowing Tony to keep emotional distance and himself bodily safer, participating strategically and coolly, operating the suit like it was a hyper-sophisticated drone, and able to jump between suits when any particular one fails, or as the needs of the moment demand. More like a video game with multiple lives than hand-to-hand combat with the very real risk of broken bone and blood in the circuits.

But still there is the megalomania. What is JARVIS to do? He has a job to get done. Unfortunately he is stuck with his sweet-but-slow supervisor riding his back, threatening to micromanage his every move. He cannot lock Tony out, and he can’t just let Tony be solely in control. To meet the goals he was programmed with, he has to keep feeding Tony’s ego while JARVIS himself handles most of the superheroing. How does he do that? He distracts Tony. And that brings us back to the HUD.

The HUD is a massive distraction

The video below is Tony’s first flight (which he undertakes against the advice of the artificial intelligence he built), edited to show only the first- and second-person Iron HUD views. The overlay enumerates individual components. As you can see, it’s complicated. Even saying there are 29 elements is conservative, because some of those elements have lots of internal complexity, with many moving parts. But 29 is complex enough as it is. Of those, 87% reposition themselves against his field of view without his having asked for it. Six of them persist for less than 2 seconds. Six risk dangerous mid-flight startle reactions by expanding quickly in place. Every one of them is overlaid via transparency with at least one other element. It’s so complex it’s dazzling. A sense of spectacle for the audience, to be sure, but given the above rationale, it might be the point within the diegesis, too.

The HUD is less usable because it’s not meant to be usable. It’s a placebo interface meant to keep Tony thinking he’s in control, but really there to direct his attention and keep him busy reading Wikipedia articles about the Santa Monica Ferris Wheel while JARVIS does the job. If Tony demands something, or the team all agree on a course of action, JARVIS must respond, but business as usual is one where JARVIS is secretly calling the shots.

So that’s why I think JARVIS is the real superhero, the real titular Iron Man.

This is about our relationship to future technology

But here’s the kicker. This isn’t just idle backworlding, either, to apologize our way into a consistent diegesis. (Not that I’m against idle backworlding. Clearly.) This is a challenge to our ego, one faced by both Hollywood and the world. As technology advances beyond our ability to keep up, we don’t want to be put in safe ball pits while the tech handles the adult stuff. We want to be at the adult table. We’re as megalomaniacal as Tony. Just as Hollywood can’t let its tech heroes all be drone operators phoning in to the fight, we want to be in the action. Or rather, we really want to feel like we are, and maybe our technologies will evolve to help us feel that way while keeping us from doing harm. It might just be that sci-fi interfaces, as focused on sciencish-ness and distracting spectacle as they are, really are the template for the future.

Next up in the Iron HUD series: The last post, which brings us back around to Iron Man’s videoconferencing system.

The Iron Man HUD is an impossible thing

In the prior post we looked at the HUD display from Tony’s point of view. In this post we dive deeper into the 2nd-person view, which turns out to be not what it seems.

The HUD itself displays a number of core capabilities across the Iron Man movies prior to its appearance in The Avengers. Cataloguing these capabilities lets us understand (or backworld) how he interacts with the HUD, equipping us to look for its common patterns and possible conflicts. In the first-person view, we saw it looked almost entirely like a rich agentive display, but with little interaction. But then there’s this gorgeous 2nd-person view.

IronMan1_HUD00
IronMan1_HUD07

When in the first film Tony first puts the faceplate on and says to JARVIS, “Engage heads-up display”… …we see things from a narrative-conceit, 2nd-person perspective, as if the helmet were huge and we are inside the cavernous space with him, seeing only Tony’s face and the augmented reality interface elements. You might be thinking, “Of course it’s a narrative conceit. It’s not real. It’s in a movie.” But what I mean by that is that even in the diegesis, the Marvel Cinematic Universe, this is not something that could be seen. Let’s move through the reasons why.

Not a mini-TARDIS

First, it looks like we’re in some TARDIS-like space where the helmet extends so far we can fit in it, or a camera can, about a meter from his face. But of course the helmet isn’t huge on the inside. Tony hasn’t broken those laws of physics. The helmet is helmet-sized on the inside.

Not a volumetric projection

HUD_composit

Then there’s the issue of the huge display. It looks like a volumetric projection, like what R2-D2 can project, but that can’t be true, either. The projection would extend way beyond the boundaries of the helmet-sized helmet. Which as you can see below, is a non-starter. So it’s not a volumetric projection.

So, retinal projection

Then what is the display technology? Given the size constraints, retinal projection makes the most sense, but if we could make the helmet go invisible, it would look like Tony was having diffuse LASIK, or maybe playing The Game from Star Trek: The Next Generation.

STTNG The Game-02
Let’s face it, this is not the worst thing you’ve caught me doing.

Representation of the projections?

So, OK, fine. Maybe what we see is what’s being projected, the separate stereoscopic images onto individual retinas. Nope. Then we would see two similar, slightly offset images, like in older anaglyph stereoscopy, but more confusing, because there wouldn’t be a color difference, just double vision.

i_am_iron_man____in_3d_by_homerjk85-d57gs7u
Let’s pray that poor Tony doesn’t have to wear anaglyph glasses in there.
(Props to Deviantartist homerjk85 for the awesome conversion.)

Nope.

So what we are left with is that we are not seeing anything in the real world of the diegesis. This 2° view is strictly a narrative conceit: A projection of what Tony’s brain puts together from the split views of the stereographic projection into a cohesive whole, i.e. retinally-projected augmentation of his eyesight. It’s a testament to the talent of the filmmakers that this HUD, as narratively constructed as it is, just works. We think it’s something real. We instantly get it. But…

The damned multilayering

IronMan_HUDMultilayer
1280px-Parallax_Example.svg
layeringproblems

But even that notion—that this HUD is what Tony experiences, perceptually—is troubled by the multilayering in the HUD. Information in the HUD is typically displayed across multiple layers. See the three squares on the left side of this screenshot for an example. So many problems with this. If this is meant to be what he perceives, then we immediately have trouble with parallax. Parallax is the way that objects shift against background objects when seen from two different viewpoints, like, say, Tony’s two eyes. If Tony perceives these layers through both eyes, i.e. stereoscopically, as an actual set of three layers floating in front of his face, then those graphics shift around depending on which eye JARVIS is optimizing for. One eye might see it beautifully, but then the other eye is wholly confounded. In the worst possible situation, neither eye is really satisfied. See the Wikipedia article on parallax as parallaxed for a meta-example. If on the other hand it’s just one eye that’s seeing these layers, then the layering is utterly pointless, because a single eye has no depth perception, and these would appear as a single layer. It would have no benefit for Tony and would only be there for our gee-whizification.
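To put a rough number on the problem, we can use a small-angle stereo model: the disagreement between layers grows with the depth difference between them. The interpupillary distance and layer depths below are illustrative assumptions of mine, not anything established by the films.

```python
# Rough small-angle model of interocular parallax between two HUD layers.
# All distances are illustrative assumptions.
import math

def parallax_shift_deg(baseline_m, near_m, far_m):
    """Angular shift between a near layer and a far layer as seen from two
    viewpoints baseline_m apart (small-angle stereo approximation)."""
    return math.degrees(baseline_m * (1 / near_m - 1 / far_m))

ipd_m = 0.065  # assumed ~65 mm between Tony's eyes
# HUD layers assumed to "float" 30 cm and 50 cm from his face
shift = parallax_shift_deg(ipd_m, near_m=0.30, far_m=0.50)
print(round(shift, 2))  # roughly 5 degrees of disagreement between the eyes
```

Five-ish degrees is enormous next to the fovea’s roughly two degrees of sharp vision, so whichever eye JARVIS isn’t optimizing for really would be confounded.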

Our choices are: Terrible or Pointless

So, it’s either a terrible, confusing display for Tony (which I can’t imagine, given what a genius technologist he is meant to be), or this view is not even a representation of what Tony sees, but a strictly narrative construction. And we can’t say for sure which it is, because this multilayering is never seen in the first-person views. In those screens it’s been reasonably cleaned up to be intelligible. Note the difference between the car views below in the first- and second-person shots.

IronMan1_HUD11
Layers include end views and a side view.
IronMan1_HUD10
Only the side view is shown, the end views are absent.

Then, the damned head movement

Note also that in the 2nd-person view, Tony is very expressive, moving his head around a lot in response to the HUD. But looking at him from the outside, Iron Man’s head doesn’t swivel around except to look at things in the real world. Is the interface requiring him to move his head, or is he just a drama queen? If it requires him, that’s terrible: it would pull his head away from important things in the real world to focus on something in this virtual one. If he’s a drama queen, fine, nothing to do about that, and glad that JARVIS can accommodate. In any case, when we see him in the helmet outside the TARDIS-HUD, he is not swiveling his head apropos of nothing, which reinforces the notion that this is strictly a cinematic conceit. (Hat tip to Jonathan Korman for sharing this observation with me.)

So…

So ultimately what I’m saying here is that this is an impossible thing, and for being impossible, we should not just freak out about how cool it is and declare it the necessary and good future. It has major problems, even as gorgeous and exciting as it is. Hey, no surprise, nobody has forgotten that it’s a movie, but recognize that what you thought was just maybe exaggerated was in fact a bald-faced impossibility.

Next up in the Iron HUD series: Iron Man forces us to get clear about some terms.

Loki’s glaive: Enthrallment

Several times throughout the movie, Loki places the point of the glaive on a victim’s chest near their heart, and a blue fog passes from the stone to infect them: an electric blackness creeps upward along their skin from their chest until it reaches their eyes, which turn fully black for a moment before becoming the same ice blue of the glaive’s stone, and we see that the victim is now enthralled into Loki’s servitude.

Enthralling_Hawkeye
You have heart.

The glaive is very, very terribly designed for this purpose.

It freaks the victim out (or should, anyway)

Look at that damned thing. It looks like an elven shiv. A can opener for human flesh. When victims see it coming, they will reasonably presume it’s going to split them like a fresh-caught fish, and do whatever they can to flail away from it. See how Loki has to grab Hawkeye by the wrist? That’s because, short of some sort of hypnosis, Hawkeye would not just stand there like that with Orcrist slicing towards his sternum. We have to backworld some sort of pre-enthrallment mind effect to explain why he’s not jerking in the other direction. As all great propaganda and persuasion masters know, you can’t approach as a threat, or the victim’s fight-or-flight might kick in and slam shut that window for winning their hearts and minds.

It might, in fact, slice the target open

Even if there’s some mystical roofie thing going on to calm the victim, if Loki had too much force behind his approach, or someone bumped either of them, the glaive could go into the victim, causing a shock of pain that might wake them up before the enthrallment could take place. Or worse, it could actually damage the heart and kill the victim, which is counter to Loki’s goal.

It requires precision, control, and time

To avoid the disheartening of an intended victim, then, Loki has to grab them, momentarily hypnotize them into calmness, carefully ease the thing up to the target, and hold it and them in place for a few moments. Imagine a button on a keyboard that had to be touched with feather pressure, or it would brick the machine. This would not be a great keyboard. All these are expensive dependencies, and the time it takes is time for onlookers to intervene (or to somehow incapacitate the victim to save them).

It tips its hand

Avengers-Glaive-14

OK, fine, the glowing-blue eyes might be an unavoidable side effect of the “tech”—and yes, I understand its very valuable narrative purpose of signaling enthrallment—but if you were designing an enthrallment tech, you’d want to avoid such an obvious “tell,” especially right there in the main location people target when looking at other people.

A redesign

So there are a lot of ways this is less than ideal. Fortunately we don’t have to call iGlaive and tell them to shutter operations. I think we can fix this in one of a few ways.

Soften the industrial design? No.

The glaive needs to stay looking evil, and being sharp and pointy helps with that.

1. Have the glaive pull them in

A cinematic hack might be to visually imply that the glaive helps with these problems. Imagine Loki approaching Hawkeye with the glaive outstretched, and the blue fog appears and pulls Hawkeye towards its point. The point of contact can glow slightly, implying some protection, and the crystal can glow to do its enthralling. Now it’s a feature, not a bug.

2. Go broadside

If for some plot or cinematic reason that wouldn’t work, you could have Loki use the broad side of the glaive against the chest of the person. Slapping it like an oar onto someone would be a fast gesture that wouldn’t need a lot of precision to get the crystal near the heart. It could even enable sneakier attacks from the side. It might prove cinematically problematic when enthralling a female character, but since that doesn’t happen on screen in The Avengers, it’s moot.

3. A new gesture

If Loki isn’t the broadside sort, you could keep the staff the same and redesign the gesture. The mind is the thing enthralled, so it’s tempting to have it located on a forehead or neck, but we can’t have Loki gesturing to the victim’s head, because then we lose the awesome moment near the climax when Loki tries and fails to enthrall Stark on his chest reactor. So let’s keep it cardiac. Maybe we can change the relationship of the glaive to the victim.

Imagine if he lays the glaive across his left forearm (or better: cuts into his own skin, which would explain why he doesn’t just keep enthralling everyone in sight), which begins to glow with the blue fog, and he uses a pointing index finger to tap the victim’s heart. A finger-to-sternum interaction would telegraph a lot less danger, risk fewer victims’ lives, and enable speed with less apparent precision required. As above, it might be problematic to enthrall a woman without the audience going OMG BOOBS, but again, we’re saved from that problem by the script.

In many ways this is my favorite of the redesigns. It’s a Natural User Interface. With blue fog.

Avengers-Glaive-15

Any of these tweaks might help us believe in the interaction, and the underlying lesson is useful for us to keep in mind: requiring great precision of our users only slows them down and keeps them focused on the interface rather than on their goals.