Given its wealth of capabilities, the main complaint might be its lack of language.
A mute sidekick
It has a working theory of mind, a grasp of abstract concepts, and intention, so why does it not use language as part of a toolkit to fulfill its duties? Let’s first admit that mute sidekicks are kind of a trope at this point. Think R2-D2, Silent Bob, BB-8, Aladdin’s Magic Carpet (Disney), Teller, Harpo, Bernardo / Paco (admittedly obscure), Mini-me. They’re a thing.
Yes, I know she could talk to other fairies, but not to Peter.
Trope or not, muteness in a combat partner is a significant impediment. Imagine it being able to say, “Hey Steve, he’s immune to the halberd. But throw that ribcage-looking thing on the wall at him, and you’ll be good.” Strange finds himself in life-or-death situations pretty much constantly, so having to disambiguate vague gestures wastes precious time that might make the difference between life and death. For, like, everyone on Earth.
So I mentioned in the intro to this review that I was drawn to review Doctor Strange (with my buddy and co-reviewer Scout Addis) because the Cloak displays some interesting qualities in relation to the book I just published. Buy it, read it, review it on amazon.com, it’s awesome.
That sales pitch done, I can quickly cover the key concepts here.
A tool, like a hammer, is a familiar but comparatively dumb category of thing that only responds to a user’s input. Tool has been the model of the thing we’re designing in interaction design for, oh, 60 years, but it is being mostly obviated by narrow artificial intelligence, which can be understood as automatic, assistive, or agentive.
Assistive technology helps its user with the task she is focused on: drawing her attention, providing information, making suggestions, maybe helping augment her precision or force. If we think of the hammer again, an assistive version might draw her attention to the best angle to strike the nail, or use an internal gyroscope to gently correct her off-angle strike.
Agentive technology does the task for its user. Again with the hammer, she could tell hammerbot (a physical agent, but there are virtual ones, too) what she wants hammered and how. Her instructions might be something like: Hammer a ha’penny nail every decimeter along the length of this plinth. As it begins to pound away, she can then turn her attention to mixing paint or whatever.
When I first introduce people to these distinctions, I step one rung up on Wittgenstein’s Ladder and talk about products that are purely agentive or purely assistive, as if agency were a quality of the technology. (Thanks to TU prof P.J. Stappers for distinguishing these as ontological and epistemological approaches.) The Roomba, for example, is almost wholly agentive as a vacuum. It has no handle for you to grab, because it does the steering and pushing and vacuuming.
Once you get these basic ideas in your head, we can take another step up the Ladder together and clarify that agency is not necessarily a quality of the thing in the world. It’s subtler than that. It’s a mode of relationship between user and agent, one which can change over time. Sophisticated products should be able to shift their agency mode (between tool, assistant, agent, and automation) according to the intentions and wishes of their user. Hammerbot is useful, but still kind of dumb compared to its human. If there’s a particularly tricky or delicate nail to be driven, our carpenter might ask for hammerbot’s assistance, but really, she’ll want to handle that delicate hammering herself.
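If it helps to see that mode-shifting idea concretely, here is a minimal sketch in Python. It is purely illustrative: Hammerbot, AgencyMode, and every method name are hypothetical, invented for this post rather than drawn from any real product.

```python
# Illustrative sketch only: hypothetical names, not a real product or API.
# Models agency as a mode of the user-product relationship that can shift at runtime.
from enum import Enum, auto

class AgencyMode(Enum):
    TOOL = auto()       # responds only to direct user input
    ASSISTANT = auto()  # augments the user's attention, precision, or force
    AGENT = auto()      # performs the task on the user's behalf
    AUTOMATIC = auto()  # runs without ongoing user involvement

class Hammerbot:
    def __init__(self):
        self.mode = AgencyMode.AGENT

    def set_mode(self, mode: AgencyMode):
        """The user, not the product, decides how much agency to hand over."""
        self.mode = mode

    def drive_nail(self, spacing_cm: float, length_cm: float):
        if self.mode is AgencyMode.AGENT:
            print(f"Driving a nail every {spacing_cm} cm along {length_cm} cm; "
                  "the user is free to go mix paint.")
        elif self.mode is AgencyMode.ASSISTANT:
            print("Suggesting strike angle and correcting off-angle swings; "
                  "the user drives the nail herself.")
        else:
            print("Waiting for direct user input.")

bot = Hammerbot()
bot.drive_nail(10, 200)              # agentive: the bot does the work
bot.set_mode(AgencyMode.ASSISTANT)   # the tricky, delicate nail: she takes agency back
bot.drive_nail(10, 200)
```

The design point is in set_mode: the shift between modes belongs to the user’s intentions, and she can reclaim agency for the delicate nail at any moment.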
While recording a podcast with the guys at DecipherSciFi about the twee(n) love story The Space Between Us, we spent some time kvetching about how silly it was that many of the scenes involved Gardner, on Mars, in a real-time text chat with a girl named Tulsa, on Earth. It’s partly bothersome because throughout the rest of the movie, the story tries for a Mohs sci-fi hardness of, like, 1.5, somewhere between Real Life and Speculative Science, so it can’t really excuse itself through the Applied Phlebotinum that, say, Star Wars might use. The rest of the film feels like it’s trying to have believable science, but during these scenes it just whistles, looks the other way, and hopes you don’t notice that the two lovebirds are breaking the laws of physics as they swap flirt emoji.
Hopefully unnecessary science brief: Mars and Earth are far away from each other. Even if transmissions are sent at light speed between them, it takes much longer than the one second of response time required to feel “instant.” How much longer? It depends. The planets orbit the sun at different speeds, so they aren’t a constant distance apart. At their closest, it takes light 3 minutes to travel between Mars and Earth, and at their farthest (while not being blocked by the sun) it takes about 21 minutes. A round trip is double that. So nothing akin to real-time chat is going to happen.
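If you want to check the arithmetic yourself, here is a quick back-of-envelope calculation in Python. The distances are rough orbital figures I am assuming for illustration, not anything from the film.

```python
# Rough one-way and round-trip light delay between Earth and Mars.
# Distances are approximate assumptions, good to a few percent at best.
SPEED_OF_LIGHT_KM_S = 299_792

distances_km = {
    "closest approach": 54_600_000,                              # ~54.6 million km
    "farthest while still visible past the sun": 378_000_000,    # rough figure
}

for label, km in distances_km.items():
    one_way_min = km / SPEED_OF_LIGHT_KM_S / 60
    print(f"{label}: one-way ~{one_way_min:.0f} min, round trip ~{2 * one_way_min:.0f} min")
```

Run it and you get roughly 3 minutes one-way at closest approach and about 21 minutes at the far end, which is why a flirty back-and-forth measured in seconds does not hold up.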
But I’m a designer, a sci-fi apologist, and a fairly talented backworlder. I want to make it work. And perhaps because of my recent dive into narrow AI, I began to realize that, well, in a way, maybe it could. It just requires rethinking what’s happening in the chat.
When Luke is driving Kee and Theo to a boat on the coast, the car’s heads-up display shows him its speed as a translucent red number and speed gauge. There are also two broken, blurry gauges showing unknown information.
Suddenly the road is blocked by a flaming car, rolled into its path by a then-unknown gang. In response, an IMPACT warning triangle zooms in several times to warn the driver of the danger, accompanied by a persistent dinging sound.
Hidden behind a bookshelf console is the family’s other comm device. When they first use it in the show, Malla and Itchy have a quick discussion, approach the console, and slide two panels aside. The device is small and rectangular, like an oscilloscope, sitting on a shelf at about eye level. It has a small, palm-sized color cathode ray tube on the left. On the right is an LED display strip and an array of red buttons over an array of yellow buttons. Along the bottom are two dials.
Without any other interaction, the screen goes from static to a direct connection to a hangar where Luke Skywalker is working with R2-D2 to repair some mechanical part. He simply looks up to the camera, sees Malla and Itchy, and starts talking. He does nothing to accept the call or end it. Neither do they.
Having completed the welding he did not need to do, Tony flies home to a ledge atop Stark Tower and lands. As he begins his strut to the interior, a complex, ring-shaped mechanism rises around him and follows along as he walks. From the ring, robotic arms extend to unharness each component of the suit from Tony in turn. After each arm precisely unscrews a component, it whisks it away for storage under the platform. It performs this task so smoothly and efficiently that Tony is able to keep his stride throughout the 24-second walk up the ramp while maintaining a conversation with JARVIS. His last steps on the ramp land on two plates that unharness his boots and lower them into the floor as Tony steps into his living room.
Yes, yes, a thousand times yes.
This is exactly how a mechanized squire should work. It is fast and efficient, it supports Tony in getting unharnessed quickly and easily, and, perhaps most importantly, it makes his transition from superhero to playboy feel the way he wants it to: cool, effortless, and seamless. If there were a party happening inside, I would not be surprised to see a last robotic arm handing him a whiskey.
This is the Jetsons vision of coming home to one’s robotic castle writ beautifully.
There is a strategic question about removing the suit while still outside the protection of the building itself. If a flying villain popped up over the edge of the building when the unharnessing was about 75% complete, Tony would be at a significant tactical disadvantage. But JARVIS is probably watching out for any threats to avoid this possibility.
Another improvement would be if it did not need a specific landing spot. If, say…
The suit could just open to let him step out like a human-shaped elevator (this happens in a later model of the suit seen in The Avengers 2)
The suit was composed of fully autonomous components and each could simply fly off of him to its storage (this kind of happens with Veronica later in The Avengers 2)
It was composed of self-assembling nanoparticles that flowed off of him, or, perhaps, reassembled into a tuxedo (if I understand correctly, this is kind of how the suit currently works in the comic books)
These would allow him to enact this same transition anywhere.
Cut to the bottom of the Hudson River where some electrical “transmission lines” rest. Tony, in his Iron Man supersuit, has his palm-mounted repulsor rays configured to create a focused beam, capable of cutting through an iron pipe to reveal the power cables within. Once the pipe casing is removed, he slides a circular cuff onto the cabling. The cuff automatically closes, screws itself tight, and expands to replace the section of casing. Dim white lights burn brighter as hospital-green rings glow around the cable’s circumference. His task done, he underwater-flies away, up the southern tip of Manhattan to Stark Tower.
It’s a quick scene that sets up the fact that they’re using Tony’s arc reactor technology to liberate Stark Tower from the electrical grid (incidentally implying that the Avengers will never locate a satellite headquarters anywhere in Florida. Sorry, Jeb.) So, since it’s a quick scene, we can just skip the details and interaction design issues, right?