So I mentioned in the intro to this review that I was drawn to review Doctor Strange (with my buddy and co-reviewer Scout Addis) because the Cloak displays some interesting qualities in relation to the book I just published. Buy it, read it, review it on amazon.com, it’s awesome.
That sales pitch done, I can quickly cover the key concepts here.
- A tool, like a hammer, is a familiar but comparatively dumb category of thing that only responds to a user’s input. The tool has been the model of the thing we’re designing in interaction design for, oh, 60 years, but it is being mostly obviated by narrow artificial intelligence, which can be understood as automatic, assistive, or agentive.
- Assistive technology helps its user with the task she is focused on: Drawing her attention, providing information, making suggestions, maybe helping augment her precision or force. If we think of the hammer again, an assistive version might draw her attention to the best angle to strike the nail, or use an internal gyroscope to gently correct her off-angle strike.
- Agentive technology does the task for its user. Again with the hammer, she could tell hammerbot (a physical agent, but there are virtual ones, too) what she wants hammered and how. Her instructions might be something like: Hammer a ha’penny nail every decimeter along the length of this plinth. As it begins to pound away, she can then turn her attention to mixing paint or whatever.
When I first introduce people to these distinctions, I step one rung up on Wittgenstein’s Ladder and talk about products that are purely agentive or purely assistive, as if agency were a quality of the technology. (Thanks to TU Delft prof P.J. Stappers for distinguishing these as ontological and epistemological approaches.) The Roomba, for example, is almost wholly agentive as a vacuum. It has no handle for you to grab, because it does the steering and pushing and vacuuming.

(Image caption: Yes, it’s a real thing you can own.)
Once you get these basic ideas in your head, we can take another step up the Ladder together and clarify that agency is not necessarily a quality of the thing in the world. It’s subtler than that. It’s a mode of relationship between user and agent, one which can change over time. Sophisticated products should be able to shift their agency mode (between tool, assistant, agent, and automation) according to the intentions and wishes of their user. Hammerbot is useful, but still kind of dumb compared to its human. If there’s a particularly tricky or delicate nail to be driven, our carpenter might ask for hammerbot’s assistance, but really, she’ll want to handle that delicate hammering herself.
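If it helps to see that as structure rather than prose, here’s a minimal Python sketch of the idea (my own toy illustration, not anything from the book): the mode lives in the relationship, it’s set by the user, and the same product behaves differently in each mode.

```python
from enum import Enum, auto

class AgencyMode(Enum):
    """The four modes a product can occupy relative to its user."""
    TOOL = auto()        # responds only to the user's direct input
    ASSISTANT = auto()   # helps with the task the user is attending to
    AGENT = auto()       # pursues the task on the user's behalf
    AUTOMATION = auto()  # runs without the user's involvement or tuning

class Hammerbot:
    """A toy product that shifts agency modes at its user's request."""
    def __init__(self):
        self.mode = AgencyMode.AGENT  # by default, it does the hammering itself

    def set_mode(self, requested: AgencyMode):
        # The shift is driven by the user's intention, not decided by the product.
        self.mode = requested

    def handle_nail(self):
        if self.mode is AgencyMode.TOOL:
            return "transmit the user's swing, nothing more"
        if self.mode is AgencyMode.ASSISTANT:
            return "nudge her off-angle swing back toward true"
        return "drive the nail on its own while she mixes paint"

# The delicate nail: the carpenter pulls agency back toward herself.
bot = Hammerbot()
bot.set_mode(AgencyMode.ASSISTANT)
print(bot.handle_nail())  # -> nudge her off-angle swing back toward true
```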
Which brings us back to the Cloak.

(Image caption: I wish I knew how to quit you.)
Watch the movie carefully and you’ll see that the only time it acts purely on command is when Strange uses it to fly to the Dark Dimension. So it has all of one tool-like function. It initiates the rest of its actions on its own.
So is it assistive or agentive? Well, again, that’s tricky, because it depends on which function we’re talking about. Here is my backworlded list of those functions, in order of importance (see the little sketch just after the list for one way to read that ordering).
- Obey commands (subject to some mystical and unmentioned 4 Laws of Relics?)
- Prevent harm to Strange
  - Halt the thing threatening him
  - If that’s not possible, get Strange out of harm’s way (pull him to safety)
  - Catch him, if he’s falling
  - Critical case: if Strange is disabled and threatened, take care of the threat (the head-wrap scene)
- Guide him toward the best tactic for his current situation
- Keep him looking sorcererly
  - Don yourself when appropriate
  - Try and do your work while being worn (don’t jump off Strange’s shoulders to do something, if you can avoid it)
  - Groom him if he becomes untidy
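Since the list is ordered by importance, you can read it as a simple priority scheme: check the rules from the top and act on the highest one that applies. Here’s that reading as a quick Python sketch; the rule names, the situation flags, and the decorative fallback are all my own invention, not anything from the film.

```python
# A toy reading of the Cloak's backworlded priorities as an ordered rule list.
# The rule names and the fallback are my invention, not canon.
CLOAK_PRIORITIES = [
    "obey_command",     # tool-like: do what Strange explicitly asks
    "prevent_harm",     # halt the threat, pull him clear, catch him, or handle it yourself
    "guide_tactics",    # steer him toward the option most likely to work
    "keep_sorcererly",  # don yourself, stay worn, groom as needed
]

def choose_behavior(situation: dict) -> str:
    """Walk the priorities in order and act on the first rule that applies."""
    for behavior in CLOAK_PRIORITIES:
        if situation.get(behavior):
            return behavior
    return "hang around looking dramatic"

# The head-wrap scene: no command given, but Strange is down and threatened.
print(choose_behavior({"prevent_harm": True, "keep_sorcererly": True}))
# -> prevent_harm
```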
Let’s look at each of these.
1. Obeying commands could be any category. Strange can gesturally command it to fly, as he does, treating it as a tool. He can command it to assist him in some task, or assign it a task to do on its own. So that one is all over the board.
2. Preventing harm is mostly agentive. After all, if it’s preventing harm from happening, it should just do it, and not ask, right? The question would just be noise because the answer would almost always be, “of course.” Like a mystical Spider-Sense, this helps Strange avoid a threat he didn’t even know was there. This allows Strange to focus on the most critical thing in the situation, because the Cloak has his back for the minor stuff. (Which, admittedly, isn’t quite how Spider-Man’s Spider-Sense works.)
There’s a bit of conceptual hairsplitting to do here. When Strange is in combat, you might wonder, isn’t his attention on the combat, so the Cloak is assisting him with it? Sure, but “combat” is the category of thing he’s trying to do, which is more appropriate to our description of the scene than to what he’s actually thinking about. His attention is not on the category of thing he’s doing but on the thing itself. Not combat, but bringing down Kaecilius.
So I’d argue what we see is agentive. Note that it’s easy to imagine an assistive prevention of harm, too. If Strange knows there is a big rock heading his way, he’s going to dodge. The Cloak can predict how well his dodge will work, and add a little oomph if it’s not going to be enough. But I don’t think we see that in the movie.
3. Guiding him toward the best situational tactic is an interesting case. It’s agentive, but in the one case where we really see it in action, it looks assistive. Strange is reaching for the halberd on the wall, and the Cloak, knowing that would be ineffective at best, is dragging him toward the thing that will work: the Crimson Bands of Cyttorak. Is it assisting him to grab the right thing?
Yes, but again, it’s a question of attention. Strange’s attention is on getting the halberd. The Cloak has run through the scenarios, and knows that probabilistically, the halberd is the wrong choice. So it is having to intervene and draw Strange’s attention to the Bands. In monitoring and correcting Strange’s tactics, it is operating outside his attention and acting agentively.
4. When the Cloak works to keep Strange looking sorcererly it is similarly monitoring his appearance and making adjustments as needed. Strange’s attention does not need to go there, so it’s agentive.
Hang on, you cry, what about that time Strange tells it to stop wiping his face? That’s a command. His attention is on it. Yes, but Strange isn’t commanding it to begin some new task that he wanted it to accomplish. The Cloak found a trigger (blood on face) and initiated a behavior to correct the problem. Strange is correcting the Cloak that he finds this level of grooming to be too much. In the second part of the book I refer to this as tuning a behavior, and it’s one of the interactions that distinguish agentive tech from automation, which does not afford this kind of personalization.
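In code terms, the difference is roughly that an agentive behavior keeps its trigger and threshold exposed where the user can reach and adjust them, whereas an automation would bake them in. A tiny sketch, with parameter names I’m making up for illustration:

```python
# A toy illustration of tuning a behavior. The trigger and threshold stay
# exposed to the user rather than baked in; the names here are my own.
class GroomingBehavior:
    def __init__(self, trigger="blood on face", fussiness=0.9):
        self.trigger = trigger
        self.fussiness = fussiness  # 0.0 = never groom, 1.0 = wipe at the first speck

    def tune(self, fussiness: float):
        """Strange's 'stop that' amounts to a call like this one."""
        self.fussiness = max(0.0, min(1.0, fussiness))

cloak_grooming = GroomingBehavior()
cloak_grooming.tune(0.3)  # less face-wiping mid-fight, please
```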
So, in short, the Cloak demonstrates lots and lots of agentive properties, and provides a rich example that helps illuminate differences in the core concepts. If you want to know more about these thoughts as they apply to real world tech as opposed to sci-fi interfaces, there’s that book I mentioned.
Next up, we’ll get to some critiques of the Cloak and suggestions for how it might be improved if it were a real world thing.