The Cloak of Levitation, Part 4: Improvements

In prior posts we looked at an overview of the cloak, pondered whether it could ever work in reality (mostly, in the far future), and asked whether the cloak could be considered agentive (mostly, yes). In this last post I want to look at what improvements we might make if we were designing something akin to it for the real world.

Given its wealth of capabilities, the main complaint might be its lack of language.

A mute sidekick

It has a working theory of mind, a grasp of abstract concepts, and intention, so why does it not use language as part of its toolkit to fulfill its duties? Let’s first admit that mute sidekicks are kind of a trope at this point. Think R2-D2, Silent Bob, BB-8, Aladdin’s Magic Carpet (Disney), Teller, Harpo, Bernardo / Paco (admittedly obscure), Mini-Me. They’re a thing.

tankerbell.gif

Yes, I know she could talk to other fairies, but not to Peter.

Despite being a trope, muteness in a combat partner is a significant impediment. Imagine it being able to say, “Hey Steve, he’s immune to the halberd. But throw that ribcage-looking thing on the wall at him, and you’ll be good.” Strange finds himself in life-or-death situations pretty much constantly, so having to disambiguate vague gestures wastes precious time that might make the difference between life and death. For, like, everyone on Earth.


The Cloak of Levitation, Part 3: But is it agentive?

Full_cover

So I mentioned in the intro to this review that I was drawn to review Doctor Strange (with my buddy and co-reviewer Scout Addis) because the Cloak displays some interesting qualities in relation to the book I just published. Buy it, read it, review it on amazon.com; it’s awesome.

That sales pitch done, I can quickly cover the key concepts here.

  • A tool, like a hammer, is a familiar but comparatively dumb category of thing that only responds to a user’s input. The tool has been the model of the thing we’re designing in interaction design for, oh, 60 years, but it is being mostly obviated by narrow artificial intelligence, which can be understood as automatic, assistive, or agentive.
  • Assistive technology helps its user with the task she is focused on: Drawing her attention, providing information, making suggestions, maybe helping augment her precision or force. If we think of a hammer again, an assistive might draw her attention to the best angle to strike the nail, or use an internal gyroscope to gently correct her off-angle strike.
  • Agentive technology does the task for its user. Again with the hammer, she could tell hammerbot (a physical agent, but there are virtual ones, too) what she wants hammered and how. Her instructions might be something like: Hammer a ha’penny nail every decimeter along the length of this plinth. As it begins to pound away, she can then turn her attention to mixing paint or whatever.

When I first introduce people to these distinctions, I step one rung up on Wittgenstein’s Ladder and talk about products that are purely agentive or purely assistive, as if agency were a quality of the technology. (Thanks to TU prof P.J. Stappers for distinguishing these as ontological and epistemological approaches.) The Roomba, for example, is almost wholly agentive as a vacuum. It has no handle for you to grab, because it does the steering and pushing and vacuuming.

roomba_r2_d2_1

Yes, it’s a real thing you can own.

Once you get these basic ideas in your head, we can take another step up the Ladder together and clarify that agency is not necessarily a quality of the thing in the world. It’s subtler than that. It’s a mode of relationship between user and agent, one which can change over time. Sophisticated products should be able to shift their agency mode (between tool, assistant, agent, and automation) according to the intentions and wishes of their user. Hammerbot is useful, but still kind of dumb compared to its human. If there’s a particularly tricky or delicate nail to be driven, our carpenter might ask hammerbot’s assistance, but really, she’ll want to handle that delicate hammering herself.
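To make that mode-shifting concrete, here is a minimal sketch of a product that models its agency as a changeable relationship rather than a fixed trait. All the names here are mine, not from the book; it is illustrative only.

```python
from enum import Enum, auto

class AgencyMode(Enum):
    TOOL = auto()        # responds only to direct user input
    ASSISTANT = auto()   # augments the task the user is focused on
    AGENT = auto()       # performs the task on the user's behalf
    AUTOMATION = auto()  # runs with no ongoing user involvement

class Hammerbot:
    """A product whose agency is a mode of relationship, not a fixed trait."""
    def __init__(self):
        self.mode = AgencyMode.AGENT  # by default, it does the hammering

    def request_mode(self, mode: AgencyMode):
        # The user can shift the relationship at will, e.g. dropping to
        # ASSISTANT when she wants to drive a delicate nail herself.
        self.mode = mode

bot = Hammerbot()
bot.request_mode(AgencyMode.ASSISTANT)  # "I'll handle this tricky nail myself."
```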

Which brings us back to the Cloak.

“Real-time” Interplanetary Chat

While recording a podcast with the guys at DecipherSciFi about the twee(n) love story The Space Between Us, we spent some time kvetching about how silly it was that many of the scenes involved Gardner, on Mars, in a real-time text chat with a girl named Tulsa, on Earth. It’s partly bothersome because throughout the rest of the movie, the story tries for a Mohs sci-fi hardness of, like, 1.5, somewhere between Real Life and Speculative Science, so it can’t really excuse itself through the Applied Phlebotinum that, say, Star Wars might use. The rest of the film feels like it’s trying to have believable science, but during these scenes it just whistles, looks the other way, and hopes you don’t notice that the two lovebirds are breaking the laws of physics as they swap flirt emoji.

Hopefully unnecessary science brief: Mars and Earth are far away from each other. Even if the communications transmissions are sent at light speed between them, it takes much longer than the 1 second of response time required to feel “instant.” How much longer? It depends. The planets orbit the sun at different speeds, so aren’t a constant distance apart. At their closest, it takes light 3 minutes to travel between Mars and Earth, and at their farthest—while not being blocked by the sun—it takes about 21 minutes. A round-trip is double that. So nothing akin to real-time chat is going to happen.
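If you want to check the math, it is just distance divided by the speed of light. The distances below are rough values I am assuming for the two extremes:

```python
# One-way light delay between Earth and Mars at the extremes of their orbits.
C = 299_792  # speed of light, km/s

closest_km  = 54_600_000   # approximate minimum Earth-Mars distance
farthest_km = 378_000_000  # approximate maximum (not occluded by the sun)

for label, d in [("closest", closest_km), ("farthest", farthest_km)]:
    one_way = d / C / 60  # minutes
    print(f"{label}: one-way {one_way:.0f} min, round trip {2 * one_way:.0f} min")
# closest: one-way 3 min, round trip 6 min
# farthest: one-way 21 min, round trip 42 min
```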

But I’m a designer, a sci-fi apologist, and a fairly talented backworlder. I want to make it work. And perhaps because of my recent dive into narrow AI, I began to realize that, well, in a way, maybe it could. It just requires rethinking what’s happening in the chat.

Let’s first acknowledge that we solved long-distance communications a long time ago. Gardner and Tulsa could just, you know, swap letters or, like the characters in 2001: A Space Odyssey, recorded video messages. There. Problem solved. It’s not real-time interaction, but it gets the job done. But kids aren’t so much into pen pals anymore, and we have to acknowledge that Gardner doesn’t want to tip his hand that he’s on Mars (it’s a grave NASA secret, for plot reasons). So the question is how we could make it work so that it feels like real-time chat to her. Let’s first solve it for the case where he’s trying to disguise his location, and then consider how it might work when both participants are in the know.

Fooling Tulsa

Since 1984 (ping me, as always, if you can think of an earlier reference), sci-fi has had the notion of a digitally replicated personality. Here I’m thinking of Gibson’s Neuromancer and the RAM boards on which Dixie Flatline “lives.” These RAM boards house an interactive digital personality, built out of a lifetime of digital traces left behind: social media, emails, photos, video clips, connections, expressed interests, etc. Anyone in that story could hook the RAM board up to a computer and have conversations with the personality housed there that would closely approximate how that person would respond (or would have responded) in real life.

SBU_Tulsa.png
Listen to the podcast for a mini-rant on translucent screens, followed by apologetics.

Is this likely to actually happen? Well, it kind of already is. Here in the real world, we’re seeing early, crude “me bots” populate the net, taking baby steps toward the same thing. (See the MessinaBot, https://bottr.me/, https://sensay.it/, and the forthcoming http://bot.me/.) By the time we actually get a colony to Mars (plus the 16 years for Gardner to mature), mebot technology should be able to stand in for him convincingly enough in basic online conversations.

Training the bot

So in the story, he would look through cached social media feeds to find a young lady he wanted to strike up a conversation with, and then ask his bot-maker engine to look at her public social media and build a herBot with whom he could chat in order to train his own bot for conversations. During this training, the TulsaBot would chat about topics of interest gathered from her social media. He could pause the conversation to look up references or prepare convincing answers to the trickier questions TulsaBot asks. He could also add some topics to the conversation they might have in common, and questions he might want to ask her. By doing this, his GardnerBot isn’t just some generic thing he sends out to troll any young woman with. It’s a more genuine, interactive first “letter” sent directly to her. He sends this GardnerBot to servers on Earth.
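The movie gives us nothing about how a bot-maker engine would actually work, but as a hand-wavy sketch, it might mine public posts into topic buckets the bot can chat from. Everything here, from the names to the data format, is a hypothetical of mine:

```python
from dataclasses import dataclass, field

@dataclass
class ChatPersona:
    """Crude stand-in for a Neuromancer-style digital personality."""
    name: str
    topics: dict = field(default_factory=dict)  # topic tag -> talking points

def build_persona(name, public_posts):
    """Mine public social media into topic buckets the bot can chat from."""
    persona = ChatPersona(name)
    for post in public_posts:
        for tag in post.get("tags", []):
            persona.topics.setdefault(tag, []).append(post["text"])
    return persona

# Gardner rehearses against a TulsaBot built this way, and his answers
# during rehearsal become training data for his own GardnerBot.
tulsa_bot = build_persona("Tulsa", [
    {"text": "Night surfing again. Worth it.", "tags": ["ocean", "surfing"]},
])
```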

Hey-mars-chat.gif
A demonstration of a chat with a short Martian delay. (Yes, it’s an animated gif.)

Launching the bot

GardnerBot would wait until it saw Tulsa online and strike up the conversation with her. It would send a signal back to Gardner that the chat had begun so he could sit on his end and read a space-delayed transcript of the chat. GardnerBot would try its best to manage the chat based on what it knows about awkward teen conversation, Turing test best practices, what it knows about Gardner, and how it has been trained specifically for Tulsa. Gardner would assuage some of his guilt by having it dodge and carefully frame the truth, but not outright lie.

Buying time

If during the conversation she raised a topic or asked a question for which GardnerBot was not trained, it could promise an answer later and then deflect, knowing that it should pad the conversation in the meantime (a sketch of this selection logic follows the list):

  • Ask her to answer the same question first, probing into details to understand rationale and buy more time
  • Dive down into a related subtopic in which the bot has confidence, and which promises to answer the initial question
  • Deflect conversation to another topic in which it has a high degree of confidence and lots of detail to share
  • Text a story that Gardner likes to tell that is known to take about as long as the current round-trip signal
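Here is a minimal sketch of how GardnerBot might pick among those four tactics. The confidence threshold, the story-length heuristic, and all the names are assumptions on my part:

```python
import random

CONFIDENCE_FLOOR = 0.6  # below this, GardnerBot stalls instead of answering

def choose_tactic(confidence, round_trip_s, stories):
    """Pick a delay tactic when the bot can't answer on its own.

    stories maps a story name to roughly how long it takes to tell,
    so a story can be sized to cover the current round-trip delay.
    """
    if confidence >= CONFIDENCE_FLOOR:
        return None  # answer directly; no stalling needed
    fitting = [name for name, secs in stories.items()
               if 0.8 * round_trip_s <= secs <= 1.5 * round_trip_s]
    if fitting:
        return ("story_delay", random.choice(fitting))
    # Otherwise fall back to one of the conversational deflections.
    return (random.choice(["you_first", "related_subtopic", "new_topic"]), None)

# At closest approach the round trip is about 6 minutes:
print(choose_tactic(0.2, 360, {"music_in_the_walls": 400}))
```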

Example

  • TULSA
  • OK, here’s one: If you had to live anywhere on Earth where they don’t speak English, where would you live?

GardnerBot has a low confidence that it knows Gardner’s answer. It could respond…

  1. (you first) “Oh wow. That is a tough one. Can I have a couple of minutes to think about it? I promise I’ll answer, but you tell me yours first.”
  2. (related subtopic) “I’m thinking about this foreign movie that I saw one time. There were a lot of animals in it and a waterfall. Does that sound familiar?”
  3. (new topic) “What? How am I supposed to answer that one? 🙂 Umm…while I think about it, tell me: what kind of animal would you want to be reincarnated as? And you have to say why.”
  4. (story delay) “Ha. Sure, but can I tell a story first? When I was a little kid, I used to be obsessed with this music that I would hear drifting into my room from somewhere around my house…”

Lagged-realtime training

Each of those responses is a delay tactic that allows the chat transcript to travel to Mars for Gardner to do some bot training on the topic. He would be watching the time-delayed transcript of the chat, keeping an eye on an adjacent track of data containing the meta information about what the bot is doing, conversationally speaking. When he saw it hit a low-confidence or high-stakes topic and deflect, the interface would provide a chat window for him to tell GardnerBot what it should do or say.
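That adjacent meta track could be as simple as a stream of structured events. A hypothetical sketch, with field names of my own invention:

```python
import json, time

def meta_event(topic, confidence, tactic):
    """One entry on the meta track riding alongside the chat transcript."""
    return json.dumps({
        "utc": time.time(),        # when the bot handled the exchange
        "topic": topic,
        "confidence": confidence,  # how sure the bot was of its material
        "tactic": tactic,          # which stall it deployed, if any
        "needs_training": confidence < 0.6,
    })

# Gardner's console watches for needs_training events and opens the
# training chat window so his answer rides the next transmission to Earth.
print(meta_event("live-anywhere-on-Earth question", 0.2, "you_first"))
```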

  • To the stalling GARDNERBOT…
  • GARDNER
  • For now, I’m going to pick India, because it’s warm and I bet I would really like the spicy food and the rain. Whatever that colored powder festival is called. I’m also interested in their culture, Bollywood, and Hinduism.
  • As he types, the message travels back to Earth where GardnerBot begins to incorporate his answers to the chat…
SBU_Gardner.png
  • At a natural break in the conversation…
  • GARDNERBOT
  • OK. I think I finally have an answer to your earlier question. How about…India?
  • TULSA
  • India?
  • GARDNERBOT
  • Think about it! Running around in warm rain. Or trying some of the street food under an umbrella. Have you seen YouTube videos from that festival with the colored powder everywhere? It looks so cool. Do you know what it’s called?

Note that the bot could easily look it up and replace “that festival with the colored powder everywhere” with “Holi Festival of Color,” but it shouldn’t. Gardner doesn’t know that fact, so the bot shouldn’t pretend it knows it. Cyrano-de-Bergerac-style software—making him sound more eloquent, intelligent, or charming than he really is in order to woo her—would be a worse kind of deception. Gardner wants to hide where he is, not who he is.

That said, Gardner should be able to direct the bot, to change its tactics. “OMG. GardnerBot! You’re getting too personal! Back off!” It might not be enough to cover a flub made 42 minutes ago, but of course the bot should know how to apologize on Gardner’s behalf and ask conversational forgiveness.

Gotta go

If the signal to Mars got interrupted or the bot got into too much trouble with pressure to talk about low confidence or high stakes topics, it could use a believable, pre-rolled excuse to end the conversation.

  • GARDNERBOT
  • Oh crap. Will you be online later? I’ve got chores I have to do.

Then Gardner could chat with TulsaBot on his end, without time pressure, to refine GardnerBot per their most recent topics; the updates would be sent back to Earth servers to be ready for the next chat.

In this way he could have “chats” with Tulsa that are run by a bot but quite custom to the two of them. It’s really Gardner’s questions, topics, jokes, and interests, just with a bot-managed delivery of these things.

So it could work, but does it fit the movie? I think so. It would be believable because he’s a nerd raised by scientists. He made his own robot; why not his own bot?

From the audience’s perspective, it might look like they’re chatting in real time, but subtle cues on Gardner’s interface reward the diligent with hints that he’s watching a time delay. Maybe the chat we see in the film is even just cleverly edited to remove the bots.

How he manages to hide this data stream from NASA to avoid detection is another question better handled by someone else.

SBU_whodis.png

An honest version: bot envoy

So that solves the logic from the movie’s perspective, but of course it’s still squickish. He is ultimately deceiving her. Once he returns to Mars and she is back on Earth, could they still use the same system, but with full knowledge of its botness? Would real-world astronauts use it?

Would it be too fake?

I don’t think it would be too fake. Sure, the bot is not the real person, but neither are the pictures, videos, and letters we fondly keep with us as we travel far from home. We know they’re just simulacra, souvenir likenesses of someone we love. We don’t throw these away in disgust for being fakes. They are precious because they are reminders of the real thing. So would the themBot be.

  • GARDNER
  • Hey, TulsaBot. Remember when we were knee deep in the Pacific Ocean? I was thinking about that today.
  • TULSABOT
  • I do. It’s weird how it messes with your sense of balance, right? Did you end up dreaming about it later? I sometimes do after being in waves a long time.
  • GARDNER
  • I can’t remember, but someday I hope to come back to Earth and feel it again. OK. I have to go, but let me know how training is going. Have you been on the G machine yet?

Nicely, the honest version wouldn’t need stall tactics. Or maybe it would still use them, but could be called out.

  • TULSA
  • GardnerBot, you don’t have to stall. Just tell Gardner to watch Mission to Mars and update you. Because it’s hilarious and we have to go check out the face when I’m there.

Sending your loved one the transcript would turn it into a kind of love letter. The transcript could even be appended with a letter that jokes about the bot. The example above was too short for any semi-real-time insertions in the text, but maybe that would encourage longer chats. Then the bot serves as charming filler, covering the delays between real contact.

Ultimately, yes, I think we can backworld what looks physics-breaking into something that makes sense, and might even be a new kind of interactive memento between interplanetary sweethearts, family, and friends.

Luke’s predictive HUD

When Luke is driving Kee and Theo to a boat on the coast, the car’s heads-up display shows him the car’s speed with a translucent red number and speed gauge. There are also two broken, blurry gauges showing unknown information.

Suddenly the road becomes blocked by a flaming car rolled into its path by a then-unknown gang. In response, an IMPACT warning triangle zooms in several times to warn the driver of the danger, accompanied by a persistent dinging sound.

childrenofmen-impact-08

It commands attention effectively


Rebel videoscope

Talking to Luke

SWHS-rebelcomms-02

Hidden behind a bookshelf console is the family’s other comm device. When they first use it in the show, Malla and Itchy have a quick discussion, approach the console, and slide two panels aside. The device is small and rectangular, like an oscilloscope, sitting on a shelf at about eye level. It has a small, palm-sized color cathode ray tube on the left. On the right is an LED display strip and an array of red buttons over an array of yellow buttons. Along the bottom are two dials.

SWHS-rebelcomms-03

Without any other interaction, the screen goes from static to a direct connection to a hangar where Luke Skywalker is working with R2-D2 to repair some mechanical part. He simply looks up to the camera, sees Malla and Itchy, and starts talking. He does nothing to accept the call or end it. Neither do they.

The Mechanized Squire

Avengers-Iron-Man-Gear-Down06

Having completed the welding he did not need to do, Tony flies home to a ledge atop Stark Tower and lands. As he begins his strut to the interior, a complex, ring-shaped mechanism rises around him and follows along as he walks. From the ring, robotic arms extend to unharness each component of the suit from Tony in turn. After each arm precisely unscrews a component, it whisks it away for storage under the platform. It performs this task so smoothly and efficiently that Tony is able to maintain his walking stride throughout the 24-second walk up the ramp while carrying on a conversation with JARVIS. His last steps on the ramp land on two plates that unharness his boots and lower them into the floor as Tony steps into his living room.

Yes, yes, a thousand times yes.

This is exactly how a mechanized squire should work. It is fast and efficient, supports Tony in his task of getting unharnessed quickly and easily, and—perhaps most importantly—matches how he wants his transitions from superhero to playboy to feel: cool, effortless, and seamless. If there were a party happening inside, I would not be surprised to see a last robotic arm handing him a whiskey.

This is the Jetsons vision of coming home to one’s robotic castle writ beautifully.

There is a strategic question about removing the suit while still outside of the protection of the building itself. If a flying villain popped up over the edge of the building at about 75% of the unharnessing, Tony would be at a significant tactical disadvantage. But JARVIS is probably watching out for any threats to avoid this possibility.

Another improvement would be if it did not need a specific landing spot. If, say…

  • The suit could just open to let him step out, like a human-shaped elevator (this happens with a later model of the suit seen in The Avengers 2)
  • The suit could be composed of fully autonomous components, each of which could simply fly off of him to storage (this kind of happens with Veronica later in The Avengers 2)
  • The suit could be composed of self-assembling nanoparticles that flowed off of him, or, perhaps, reassembled into a tuxedo (if I understand correctly, this is kind of how the suit currently works in the comic books)

These would allow him to enact this same transition anywhere.

Iron Welding

Avengers-Underwater_welding01

Cut to the bottom of the Hudson River, where some electrical “transmission lines” rest. Tony, in his Iron Man supersuit, has his palm-mounted repulsor rays configured to create a focused beam capable of cutting through an iron pipe to reveal power cables within. Once the pipe casing is removed, he slides a circular cuff onto the cabling. The cuff automatically closes, screws itself tight, and expands to replace the section of casing. Dim white lights burn brighter as hospital-green rings glow around the cable’s circumference. His task done, he underwater-flies away, up the southern tip of Manhattan to Stark Tower.

It’s a quick scene that sets up the fact that they’re using Tony’s arc reactor technology to liberate Stark Tower from the electrical grid (incidentally implying that the Avengers will never locate a satellite headquarters anywhere in Florida. Sorry, Jeb.) So, since it’s a quick scene, we can just skip the details and interaction design issues, right?

Of course not. You know better from this blog.

Avengers-Underwater_welding02

Odyssey Navigation

image07

When the Odyssey needs to reverse thrust to counter a descent toward the TET, Jack calls for a full OMS (Orbital Maneuvering System) burn. We do not see what information he looks at to determine how fast he is approaching the TET, or how he knows that the OMS will provide enough thrust.

We do see four motor systems on board the Odyssey:

  1. The Main Engines (which appear to be Ion Engines)
  2. The OMS system (4 large chemical thrusters up front)
  3. A secondary set of thrusters (similar to, but larger than, the OMS system) on the sleep module
  4. Tiny chemical thrusters like those used to change current spacecraft yaw/pitch/roll (the shuttle’s RCS).

image05

After Jack calls out for an OMS burn, Vika punches a series of numbers into her keypad, and Jack flips two switches under the keypad. After flipping the switches ‘up’, Jack calls out “Gimbals set” and Vika says “System active.”

Finally, Jack pulls back on a silver thrust lever to activate the OMS.

OMS

Why A Reverse Lever?

Typically, throttles are pushed forward to increase thrust. Why is this reversed? On current NASA spacecraft, the flight stick is set up like an airplane’s control, i.e., back pitches up, forward pitches down, left/right rolls the same. Note that the pilot moves the stick in the direction he wants the craft to move. In this case, the OMS control works the same way: Jack wants the ship to thrust backwards, so he moves the control backwards. This is a semi-direct mapping of control to actuator. (It might be improved if it moved not in an arc but in a straight forward-and-backward motion like the THC control, below. But you also want controls to feel different for instant differentiation, so it’s not a clear-cut case.)

image03

Source: NASA

What is interesting is that, in NASA craft, the control that works the main thrusters forward is the same control used for lateral, longitudinal, and vertical translation:

image00

Source: NASA

Why are those controls different in the Odyssey? My guess is that, because the OMS thrusters are so much more powerful than the smaller RCS thrusters, the RCS thrusters are on a separate controller much like the Space Shuttle’s (shown above).

And, look! We see evidence of just such a control, here:

image06

Separating the massive OMS thrusters from the more delicate RCS controls makes sense here because the two controls have such different effects—and such different fuel costs. Jack knows that by grabbing the RCS knob he is making small tweaks to the Odyssey’s flight path, while the OMS handle will make large changes in only one direction.
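If you were coding that asymmetry, it might look something like this sketch. The scales and names are mine, purely illustrative of the authority gap between the two controllers:

```python
def thrust_command(controller, deflection):
    """Semi-direct mapping: the craft accelerates the way the hand moves.

    'oms' is the big single-axis lever (pull back = retrograde burn);
    'rcs' is the fine-grained knob for small translations in any axis.
    The 0.05 scale is an assumption standing in for the authority gap.
    """
    if controller == "oms":
        # Clamp to [-1, 0]: this lever only ever burns retrograde.
        return {"axis": "longitudinal", "thrust": max(-1.0, min(0.0, deflection))}
    if controller == "rcs":
        # Small, symmetric corrections at a fraction of OMS authority.
        return {"axis": "selected", "thrust": 0.05 * max(-1.0, min(1.0, deflection))}
    raise ValueError(f"unknown controller: {controller}")

print(thrust_command("oms", -1.0))  # full retrograde burn
print(thrust_command("rcs", 0.3))   # gentle translational tweak
```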

The “Targets” Screen

image02

When Jack is about to make the final burn to slow the Odyssey down and hold position 50 km away from the TET, he briefly looks at this screen and says that the “targets look good.”

It is not immediately obvious what he is looking at here.

Typically, NASA uses oval patterns like this to detail orbits. The top of the pattern would be the closest distance to an object, while the farther line would indicate the farthest point. If that still holds true here, we see that Jack is at the closest he is going to get to the TET, and that on another orbit he would be on a path to travel away from the TET at escape velocity.

Alternatively, this plot shows the Odyssey’s entire voyage. In that case, the red dotted line shows the Odyssey’s previous positions. It would have entered range of the TET, made a deceleration burn, then dropped in close.

Either way, this is a far less useful or obvious interface than others we see in the Odyssey.

The bars on the right-hand panel do not change, and might indicate fuel or power reserves for various thruster banks aboard the Odyssey.

Why is Jack the only person operating the ship during the burn?

This is the final burn, and if Jack makes a mistake then the Odyssey won’t be on target and will require much more complicated math and piloting to fix its position relative to the TET. These burns would have been calculated back on Earth, double-checked by supercomputers, and monitored all the way out.

A second observer would be needed to confirm that Jack is following procedure and gets his timing right. NASA missions have one person (typically the co-pilot) reading from the checklist, and the Commander carrying out the procedure. This two-person check confirms that both people are on the same page and following procedure. It isn’t perfect, but it is far more effective than having a single person completing a task from memory.

Likely, this falls under the same situation as the Odyssey’s controls: there is a powerful computer on board checking Jack’s progress and procedure. If so, then only one person would be required on the command deck during the burn, and he or she would merely be making sure that the computer was honest.

This argument is strengthened by the lack of specificity in Jack’s motions. He doesn’t take time to confirm the length of the burn required, or double-check his burn’s start time.

image01

If the computer was doing all that for him, and he was merely pushing the right button at the indicated time, the system could be very robust.

This also allows Vika to focus on making sure that the rest of the crew is still alive and healthy in suspended animation. It lowers the active flight crew requirement on the Odyssey, and frees up berths and sleep pods for more scientific-minded crew members.

Help your users

Detail-oriented tasks, like a deceleration burn, are important but, let’s face it, boring. These kinds of tasks require a lot of memory on the part of users, and pinpoint precision in timing. Neither is something humans are good at.

If you can have your software take care of these tasks for your users, you can save on the cost of labor (one user instead of two or three), increase reliability, and decrease mistakes.

Just make sure that your computer works, and that your users have a backup method in case it fails.
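A bare-bones sketch of that division of labor—the computer owns the clock and the checklist, the human owns the go/no-go, and a manual cutoff stays available as backup—might look like this. All names here are hypothetical:

```python
import time

def run_burn(start_epoch, duration_s, confirm, fire, cutoff):
    """Computer owns the clock and the checklist; the human owns go/no-go.

    confirm: callable that prompts the crew member (the honesty check)
    fire/cutoff: engine callbacks; a manual cutoff lever stays the backup
    """
    if not confirm(f"Burn at {start_epoch}, duration {duration_s}s. Proceed?"):
        return "scrubbed"
    while time.time() < start_epoch:  # the machine, not the human, watches time
        time.sleep(0.1)
    fire()
    time.sleep(duration_s)            # pinpoint duration, no memory required
    cutoff()
    return "complete"

# e.g. run_burn(time.time() + 5, 12.0, lambda msg: True, engine_on, engine_off)
```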

Homing Beacon

image04

After following a beacon signal, Jack makes his way through an abandoned building, tracking the source. At one point he stops by a box on the wall, sees a couple of cables coming out of it, and cautiously opens it.

The repeater

I can’t talk much about interactions on this one, given that he does not do much with it. But I guess readers might be interested to know about the actual prop used in the movie, so after zooming in on a screen capture, and with a bit of help from Google, I found the actual radio.

image05

When Jack opens the box he finds the repeater device inside. He realizes that it’s connected to the building structure, using it as an antenna, and over their audio connection asks Vika to decrypt the signal.

The desktop interface

Although this sequence centers on the transmission from the repeater, most of the interactions take place on Vika’s desktop interface. A modal window on the display shows her two slightly different waveforms that overlap one another. But it’s not clear at all why the display shows two signals instead of just one, let alone what the second signal means.

After Jack identifies it as a repeater and asks her to decrypt the signal, Vika touches a DECODE button on her screen. With a flourish of orange and white, the display changes to reveal a new panel of information, providing a LATITUDE INPUT and LONGITUDE INPUT, which eventually resolve to 41.146576 -73.975739. (Which, for the curious, resolves to Stelfer Trading Company in Fairfield, Connecticut here on Earth. Hi, M. Stelfer!) Vika says, “It’s a set of coordinates. Grid 17. It’s a goddamn homing beacon.”

DECODE_15FPS

Back at the control tower, Vika is already tracking the signal through her desktop interface. As she hears Jack’s request, she presses the DECODE button at the top of the signal window to start the process.
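We never see how the decode actually works, but the LATITUDE INPUT and LONGITUDE INPUT fields imply the decrypted payload is just a coordinate pair. A hypothetical parser, assuming a plain-text payload:

```python
def parse_beacon(payload: str):
    """Split a decoded beacon payload into latitude/longitude.

    Assumes the decrypted payload is a plain coordinate pair, which is
    all the on-screen LATITUDE/LONGITUDE fields imply.
    """
    lat_str, lon_str = payload.split()
    lat, lon = float(lat_str), float(lon_str)
    if not (-90 <= lat <= 90 and -180 <= lon <= 180):
        raise ValueError("not a coordinate pair")
    return lat, lon

print(parse_beacon("41.146576 -73.975739"))  # the grid-17 homing beacon
```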


Communications with Sally

image01

While Vika and Jack are conducting their missions on the ground, Sally is their main point of contact in orbital TET command. Vika and Sally communicate through a video feed located in the top left corner of the TETVision screen. There is no camera visible in the film, but it is made obvious that Sally can see Vika, and at one point Jack as well.

image00

The controls for the communications feed are located in the bottom left corner of the TETVision screen. There are only two controls, one for command and one for Jack. The interaction is pretty standard—tap to enable, tap again to disable. It can be assumed that conferencing is possible, although certain scenes in the film indicate that this has never taken place.