Odyssey Communications

The TET is far enough away from Earth that the crew goes into suspended animation for the initial travel to it. This initial travel is either automated or controlled from Earth. After waking up, the crew speak conversationally with their mission controller Sally.

This conversation between Jack, Vika, and [actual human] Sally happens over a small 2D video communication system. The screen in the middle of the Odyssey’s control panel shows Sally and a small section of Mission Control, presumably back on Earth. Sally confirms with Jack that the readings Earth is getting remotely from the Odyssey match what is actually happening on site.

Soon after, mission control is able to respond immediately to Jack’s initial OMS burn and let him know that he is over-stressing the ship trying to escape the TET. Jack is then able to make adjustments (cut thrust) before the stress damages the Odyssey.

FTL Communication

Communication between the Odyssey and Earth happens in real time. When you look at the science of it all, this is more than a little surprising. Continue reading

Dat glaive: Enthrallment

Several times throughout the movie, Loki places the point of the glaive on a victim’s chest near their heart, and a blue fog passes from the stone to infect them: an electric blackness creeps upward along their skin from their chest until it reaches their eyes, which turn fully black for a moment before becoming the same ice blue of the glaive’s stone, and we see that the victim is now enthralled into Loki’s servitude.

You have heart.

The glaive is very, very badly designed for this purpose. Continue reading

Iron Man HUD: 2nd-person view

In the prior post we looked at the HUD from Tony’s point of view. In this post we dive deeper into the 2nd-person view, which turns out not to be what it seems.

The HUD demonstrates a number of core capabilities across the Iron Man movies prior to its appearance in The Avengers. Cataloguing these capabilities lets us understand (or backworld) how Tony interacts with the HUD, equipping us to look for its common patterns and possible conflicts. In the first-person view, we saw it looked almost entirely like a rich agentive display, but with little interaction. But then there’s this gorgeous 2nd-person view.

When in the first film Tony first puts the faceplate on and says to JARVIS, “Engage heads-up display”…we see things from a narrative-conceit, 2nd-person perspective, as if the helmet were huge and we were inside the cavernous space with him, seeing only Tony’s face and the augmented reality interface elements. You might be thinking, “Of course it’s a narrative conceit. It’s not real. It’s in a movie.” But what I mean by that is that even in the diegesis, the Marvel Cinematic World, this is not something that could be seen. Let’s move through the reasons why. Continue reading

Tony Stark is being lied to (by his own creation)

In the last post we discussed some necessary new terms to have in place for the ongoing deep-dive examination of the Iron Man HUD. Before we continue, there’s one last bit of meandering philosophy and fan theory I’d like to propose, one that touches on our future relationship with technology.

The Iron Man is not Tony Stark. The Iron Man is JARVIS. Let me explain.

Tony can’t fire weapons like that

The first piece of evidence is that most of the weapons he uses are unlikely to be fired by Tony himself. Take the repulsor rays in his palms. I challenge readers to strap a laser perpendicular to each of their palms and reliably target moving objects that are actively trying to avoid getting hit, while, say, roller skating through an obstacle course. Because that’s what he’s doing as he flies around incapacitating Hydra agents and knocking around Ultrons. The weapons are not designed for Tony to operate them manually with any accuracy. But that’s not true for the artificial intelligence.

Continue reading

R. S. Revenge Comms

Note: In honor of the season, Rogue One opening this week, and the reviews of Battlestar Galactica: The Mini-Series behind us, I’m reopening the Star Wars Holiday Special reviews, starting with the show-within-a-show, The Faithful Wookiee. Refresh yourself on the plot if it’s been a while.

On board the R.S. Revenge, the purple-skinned communications officer announces he’s picked up something. (Genders are a goofy thing to ascribe to alien physiology, but the voice actor speaks in a masculine register, so I’m going with it.)

He attends a monitor, below which are several dials and controls in a panel. On the right of the monitor screen there are five physical controls.

  • A stay-state toggle switch
  • A stay-state rocker switch
  • Three dials

The lower two dials have rings under them on the panel that accentuate their color.

Map View

The screen is a dark purple overhead map of the impossibly dense asteroid field in which the Revenge sits. A light purple grid divides the space into 48 squares. This screen has text all over it, but written in a constructed orthography unmentioned on Wookieepedia. In the upper center and upper right are unchanging labels. A triangular label sits in the lower left. In the lower right corner, text appears and disappears too fast for (human) reading. The middle right side of the screen is labeled in large characters, but they also change too rapidly to make much sense of them.

Continue reading

“Real-time,” Interplanetary Chat

While recording a podcast with the guys at DecipherSciFi about the twee(n) love story The Space Between Us, we spent some time kvetching about how silly it was that many of the scenes involved Gardner, on Mars, in a real-time text chat with a girl named Tulsa, on Earth. It’s partly bothersome because throughout the rest of the movie, the story tries for a Mohs sci-fi hardness of, like, 1.5, somewhere between Real Life and Speculative Science, so it can’t really excuse itself through the Applied Phlebotinum that, say, Star Wars might use. The rest of the film feels like it’s trying to have believable science, but during these scenes it just whistles, looks the other way, and hopes you don’t notice that the two lovebirds are breaking the laws of physics as they swap flirt emoji.

Hopefully unnecessary science brief: Mars and Earth are far away from each other. Even if the transmissions are sent at light speed between them, it takes much longer than the 1 second of response time required to feel “instant.” How much longer? It depends. The planets orbit the sun at different speeds, so they aren’t a constant distance apart. At their closest, it takes light 3 minutes to travel between Mars and Earth, and at their farthest (while not being blocked by the sun) it takes about 21 minutes. A round-trip is double that. So nothing akin to real-time chat is going to happen.
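
If you want to check that arithmetic yourself, here’s a minimal sketch in Python; the distances are rough orbital figures I’m assuming for illustration, not anything taken from the film.

    # Back-of-the-envelope check on the one-way light delay between Earth and Mars.
    # The distances below are approximate figures assumed for illustration.
    SPEED_OF_LIGHT_KM_S = 299_792

    def one_way_delay_minutes(distance_km: float) -> float:
        """Light travel time across the given distance, in minutes."""
        return distance_km / SPEED_OF_LIGHT_KM_S / 60

    closest_km = 54.6e6   # approximate closest approach
    far_km = 378e6        # a wide separation, short of solar conjunction (assumed)

    print(f"Closest:  {one_way_delay_minutes(closest_km):.1f} minutes one way")
    print(f"Farthest: {one_way_delay_minutes(far_km):.1f} minutes one way")
    # A reply has to make the same trip back, so round-trip latency is double these figures.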

But I’m a designer, a sci-fi apologist, and a fairly talented backworlder. I want to make it work. And perhaps because of my recent dive into narrow AI, I began to realize that, well, in a way, maybe it could. It just requires rethinking what’s happening in the chat. Continue reading

The Cloak of Levitation, Part 3: But is it agentive?

So I mentioned in the intro to this review that I was drawn to review Doctor Strange (with my buddy and co-reviewer Scout Addis) because the Cloak displays some interesting qualities in relation to the book I just published. Buy it, read it, review it on amazon.com, it’s awesome.

That sales pitch done, I can quickly cover the key concepts here.

  • A tool is a familiar but comparatively dumb category of thing that only responds to a user’s input. A hammer is an example. The tool has been the model of the thing we’re designing in interaction design for, oh, 60 years, but it is being mostly obviated by narrow artificial intelligence, which can be understood as automatic, assistive, or agentive.
  • Assistive technology helps its user with the task she is focused on: drawing her attention, providing information, making suggestions, maybe helping augment her precision or force. If we think of a hammer again, an assistive version might draw her attention to the best angle to strike the nail, or use an internal gyroscope to gently correct her off-angle strike.
  • Agentive technology does the task for its user. Again with the hammer, she could tell hammerbot (a physical agent, but there are virtual ones, too) what she wants hammered and how. Her instructions might be something like: Hammer a ha’penny nail every decimeter along the length of this plinth. As it begins to pound away, she can then turn her attention to mixing paint or whatever.

When I first introduce people to these distinctions, I step one rung up on Wittgenstein’s Ladder and talk about products that are purely agentive or purely assistive, as if agency were a quality of the technology. (Thanks to TU Delft prof P.J. Stappers for distinguishing these as ontological and epistemological approaches.) The Roomba, for example, is almost wholly agentive as a vacuum. It has no handle for you to grab, because it does the steering and pushing and vacuuming.

[Image: an R2-D2-themed Roomba]

Yes, it’s a real thing you can own.

Once you get these basic ideas in your head, we can take another step up the Ladder together and clarify that agency is not necessarily a quality of the thing in the world. It’s subtler than that. It’s a mode of relationship between user and agent, one which can change over time. Sophisticated products should be able to shift their agency mode (between tool, assistant, agent, and automation) according to the intentions and wishes of their user. Hammerbot is useful, but still kind of dumb compared to its human. If there’s a particularly tricky or delicate nail to be driven, our carpenter might ask hammerbot’s assistance, but really, she’ll want to handle that delicate hammering herself.
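
If it helps to see that “mode of relationship” idea in more concrete terms, here is a toy sketch in Python. Everything in it (the HammerBot class, the mode names, the behaviors) is hypothetical and mine, not anything from the book; the point is just that agency is a switchable mode of the relationship, not a fixed property of the product.

    from enum import Enum, auto

    class AgencyMode(Enum):
        """How the user is currently relating to the product."""
        TOOL = auto()        # user does everything; product just responds
        ASSISTANT = auto()   # user does the task; product advises or corrects
        AGENT = auto()       # product does the task on the user's behalf
        AUTOMATION = auto()  # product does the task with no user in the loop

    class HammerBot:
        """Hypothetical hammering product that can shift its agency mode."""
        def __init__(self) -> None:
            self.mode = AgencyMode.AGENT

        def set_mode(self, mode: AgencyMode) -> None:
            # The product doesn't become a different thing; the relationship changes.
            self.mode = mode

        def strike(self) -> str:
            if self.mode is AgencyMode.TOOL:
                return "inert head: the carpenter swings it like any hammer"
            if self.mode is AgencyMode.ASSISTANT:
                return "carpenter swings; the gyroscope nudges an off-angle strike"
            return "the bot drives the nail itself while the carpenter mixes paint"

    bot = HammerBot()
    print(bot.strike())                 # agentive by default
    bot.set_mode(AgencyMode.ASSISTANT)  # a tricky nail: the carpenter takes over
    print(bot.strike())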

Which brings us back to the Cloak. Continue reading

21 Hyperdiegetic Questions about The Faithful Wookiee

Since I only manage to restart The Star Wars Holiday Special reviews right around the time a new Star Wars franchise movie comes out, many of you may have forgotten it was even being reviewed. Well, it is. If you need to catch up, or have joined this blog after I began it years ago, you can head back to the beginning to read about the plot and the analyses so far. It’s not pretty.

When we last left the Special, Lumpy was distracted from the Stormtrooper ransack of their home by watching The Faithful Wookiee. The 6 analyses of that film focused on the movie from a diegetic perspective, as if it were a movie like any other on this blog, dealing mostly with its own internal “logic.”

Picking up, we need to look at The Faithful Wookiee from a “hyperdiegetic” perspective, that is, in the context of the show in which it occurs, The Star Wars Holiday Special. Please note that, departing from the mission statement for a bit, these questions are not about the interfaces, but about the backworlding that informs those interfaces. Continue reading

Mission slot

To provide a Victim Card to the Robot Asesino, Orlak inserts it into an open slot in the robot’s chest, which then illuminates, confirming that the instructions have been received.

There is, I must admit, a sort of lovely, morbid poetry to a cardiogram being inserted into a slot where the robot heart would be to give the robot instructions to end the beating of the human heart described in the cardiogram. And we don’t see a lot of poetry in sci-fi interface designs. So, props for that.

The illumination is a nice bit of feedback, but I think it could convey the information in more useful and cinemagenic ways.

In this new scenario…

  • Orlak has the robot pull back its coat.
  • The chamfered slot is illuminated, signaling “card goes here.”
  • As Orlak inserts the target card, the slot light dims as the chest-cavity light brightens, signaling “I have the card.”
  • After a moment, the chest-cavity light turns blood red, signaling confirmation of the victim and the new dastardly mission.

When the robot returns to Orlak after completing a mission, the red light would dim as the slot light illuminates again, signaling that it is ready for its next mission.
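
If it helps, the proposed light choreography is really just a small state machine. Here’s a minimal sketch in Python; the class and state names are mine, invented only to make the sequence explicit, not anything implied by the film.

    from enum import Enum, auto

    class SlotState(Enum):
        READY = auto()          # slot lit: "card goes here"
        CARD_ACCEPTED = auto()  # slot dims, chest cavity glows: "I have the card"
        MISSION_SET = auto()    # chest light turns blood red: target confirmed

    class ChestSlot:
        """Hypothetical model of the proposed feedback sequence."""
        def __init__(self) -> None:
            self.state = SlotState.READY

        def insert_card(self) -> None:
            if self.state is not SlotState.READY:
                raise RuntimeError("A mission is already loaded.")
            self.state = SlotState.CARD_ACCEPTED

        def confirm_target(self) -> None:
            self.state = SlotState.MISSION_SET

        def complete_mission(self) -> None:
            # Red dims, slot lights up again: ready for the next card.
            self.state = SlotState.READY

    slot = ChestSlot()
    slot.insert_card()
    slot.confirm_target()
    slot.complete_mission()
    print(slot.state)  # SlotState.READY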

These changes improve the interface by first drawing the user’s locus of attention exactly where it needs to go, and then distinguishing the internal system states as they happen. It would also work for the audience, who understands by association that red means danger.

The shape of the slot is pretty good for its base usability. It has clear affordances with its placement, orientation, and metallic lining. There’s plenty of room to insert the target card. It might benefit from a fillet or chamfer for the slot, to help avoid accidentally crumpling the paper cards when they are aimed poorly.

In addition to the tactical questions of illumination and shape of the slot, I have a few strategic questions.

  • There is no authorization in evidence. Can just anyone specify a target? Why doesn’t Gaby use her luchadora powers to Spin-A-Roonie a target card with Orlak’s face on it and let the robot save the day? Maybe the robot has a whitelist of heartbeats, and would fight to resist anyone else, but that’s just me making stuff up.
  • Also I’m not sure why the card stays in the robot. That leaves a discoverable paper trail of its crimes, perfect for a Scooby to hand over to the federales. Maybe the robot has some incinerator or shredder inside? If not, it would be better from Orlak’s perspective to design it as an insert-and-hold slot, which would in turn require a redesign of the card to have some obvious spot to hold it, and a bump-in on the slot to make way for fingers. Then he could remove the incriminating evidence and destroy it himself and not worry whether the robot’s paper shredder was working or not.
  • Another problem is that, since the robot doesn’t talk, it would be difficult to find out who its current target is at any given time. Since anyone can supply a target, Orlak can’t just rely on his memory to be certain. If the card was going to stay inside, it would be better to have it displayed so it’s easy to check.
  • How would Orlak cancel a target?
  • It is unclear how Orlak specifies whether a target is to be kidnapped or killed, even though some targets are kidnapped and others are killed.
  • It’s also unclear how Orlak might rescind or change an order once it has been given.
  • It is also unclear how the assassin finds its target. Does it have internal maps with addresses? Or does it have unbelievably good hearing that can listen to every sound nearby, isolate the particular heartbeat in question, and just head in that direction, destroying any walls it encounters? Or can it reasonably navigate human cities and interiors to maintain its disguise? Because that would be some amazing technology for 1969. This last is admittedly not an interface question, but a backworlding question for believability.

So there’s a lot missing from the interface.

It’s the robot assassin designer’s job to not just tick a box to tell themselves that they have provided feedback, but to push through the scenarios of use to understand in detail how to convey to the evil scientist what’s happening with his murderous intent.