Zed-Eyes

In the world of “White Christmas”, everyone has a networked brain implant called Zed-Eyes that enables heads-up overlays onto vision, personalized audio, and modifications to environmental sounds. The control hardware is a thin metal circle around a metal click button, separated by a black rubber ring. People can buy the device with different color rings, as we alternately see metal, blue, and black versions across the episode.

To control the implant, a person slides a finger (thumb is easiest) around the rim of a tiny touch device. Because it responds to sliding across its surface, let’s presume the device uses a sensor similar to the one used in The Entire History of You (2011) or the IBM TrackPoint.

A thumb slide cycles through a carousel menu. Sliding can happen both clockwise and counterclockwise. It even works through gloves.

HUD_menu.gif

The button selects or executes the highlighted action. The complete list of carousel menu options we see in the episode is: Search, Camera, Music, Mail, Call, Magnify, Block, and Map. The particular options change across scenes, so the menu is either context-aware or customizable. We will look at some of the particular functions in later posts. For now, let’s discuss the “platform” that is Zed-Eyes.
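To make the input model concrete, here’s a minimal sketch (in Python, purely illustrative and not anything specified in the episode) of a carousel driven by clockwise/counterclockwise slides and a single select click:

```python
# Minimal sketch of the Zed-Eyes carousel input model (hypothetical).
# A slide moves the highlight around a circular list of options;
# the click button executes whatever is currently highlighted.

OPTIONS = ["Search", "Camera", "Music", "Mail", "Call", "Magnify", "Block", "Map"]

class Carousel:
    def __init__(self, options):
        self.options = options
        self.index = 0  # currently highlighted option

    def slide(self, clockwise=True):
        # Clockwise advances, counterclockwise goes back; modular
        # arithmetic makes the menu wrap around like an iPod wheel.
        step = 1 if clockwise else -1
        self.index = (self.index + step) % len(self.options)
        return self.options[self.index]

    def click(self):
        # The center button selects/executes the highlighted option.
        return f"Executing: {self.options[self.index]}"

menu = Carousel(OPTIONS)
menu.slide()                  # highlight moves to "Camera"
menu.slide(clockwise=False)   # back to "Search"
print(menu.click())           # Executing: Search
```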

Analysis

There’s not much to discuss about the user interface. The carousel is a mature, if constrained, interface model familiar to anyone who has used an iPod. We know the constraints and benefits of such a system, and the Zed-Eyes content seems to fit this kind of interface well.

Hardware

The main question about the hardware is that it must be very easy to lose or misplace. It would make sense for the Zed-Eyes to help you locate it when it goes missing, but we don’t see a hint of this in the show.

I think the little watch-battery form factor is a bad design. It’s easy to lose, hard to find, and requires a lot of precision to use. Since this exists in a world with very high fidelity image recognition and visual processing, better would be to get rid of input hardware altogether.

Let the user swipe with their thumb across their index finger (or really, any available surface) and have the HUD read that as input. To distinguish real-world interactions that should not have consequence—like swiping dust off a computer—from input meant for the HUD, it could track the user’s visual focal point. When the user’s eyes focus on the empty space in the air right above where they’re swiping, the system knows swiping is meant to affect the interface.
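As a rough sketch of that disambiguation rule (all structures and thresholds below are invented for illustration, not derived from the show), the system would treat a thumb swipe as HUD input only when the gaze focal point rests near the virtual interface region above the swiping hand:

```python
# Hedged sketch: treat a swipe as HUD input only when the user's gaze
# rests on the (virtual) interface region above the swiping hand.
# Thresholds and structures are invented for illustration.

from dataclasses import dataclass
import math

@dataclass
class Point3D:
    x: float
    y: float
    z: float

GAZE_RADIUS_M = 0.08  # how close the focal point must be to the HUD anchor

def distance(a: Point3D, b: Point3D) -> float:
    return math.dist((a.x, a.y, a.z), (b.x, b.y, b.z))

def is_hud_input(swipe_hand: Point3D, gaze_focal_point: Point3D) -> bool:
    # The HUD "anchor" floats in the empty space just above the swiping hand.
    hud_anchor = Point3D(swipe_hand.x, swipe_hand.y + 0.10, swipe_hand.z)
    # Gaze near the anchor: input meant for the interface.
    # Gaze elsewhere: just brushing dust off a computer.
    return distance(gaze_focal_point, hud_anchor) <= GAZE_RADIUS_M

hand = Point3D(0.0, 1.0, 0.4)
print(is_hud_input(hand, Point3D(0.01, 1.09, 0.41)))  # True: looking at the HUD
print(is_hud_input(hand, Point3D(0.5, 1.2, 1.5)))     # False: looking elsewhere
```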

With this kind of interaction there would be no object to lose, and it would of course save whatever entity provides this service the costs of hardware and maintenance.

We must note that such a design might not play well cinematically, as viewers might not understand what was happening at first, but understanding the hardware is not essential to understanding the plot-critical effects of using the technology.

Cyborgs in social space

A last question is about the invisibility of the technology. This can cause problems when a user is known to be hearing but is functionally deaf because they are listening to loud music, and the people around them can’t tell. Someone could be speaking to the user and believe their non-response is disrespect. It could cause safety problems as, say, a bicyclist barrels towards them on a sidewalk, ringing their bell, expecting the user to move. It can also enable privacy abuse, as a user can take pictures in circumstances that should be private.

Joe, the moment he is taking a picture of Beth.

One solution would be to make the presence of the tech and its interactions quite visible: glowing pupils and large, obvious gestural controls, for example. But in a world where everyone has the technology, the Zed-Eyes can simply limit photography to permitted places and times, and according to the preferences of the people in the photograph. If someone is listening to music and functionally deaf, a real-time overlay could inform the people around them: “This guy is listening to music.” If a place is private, the picture option could be disabled, with feedback to the user explaining why: “Sorry, pictures are not allowed here.”
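Here’s a hedged sketch of what that policy check might look like; every rule, parameter, and feedback message below is an invented illustration of the idea that the platform itself enforces the social contract:

```python
# Sketch of a context-aware photo-permission check. All rules, names,
# and feedback messages are hypothetical illustrations.

from datetime import time

def may_take_photo(venue_allows_photos, now, quiet_hours, subjects_consent):
    """Return (allowed, feedback shown to the would-be photographer)."""
    if not venue_allows_photos:
        return False, "Sorry, pictures are not allowed here."
    start, end = quiet_hours
    if start <= now <= end:
        return False, "Sorry, pictures are not allowed at this time."
    if not all(subjects_consent):
        return False, "Someone in frame has not consented to photos."
    return True, "OK"

allowed, feedback = may_take_photo(
    venue_allows_photos=True,
    now=time(23, 30),
    quiet_hours=(time(22, 0), time(23, 59)),
    subjects_consent=[True, True],
)
print(allowed, feedback)  # False Sorry, pictures are not allowed at this time.
```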

The visibility we want for ubiquitous technology can be virtual, and provide feedback to everyone involved.

Remote wingman via EYE-LINK

EYE-LINK is an interface used between a person at a desktop who uses support tools to help another person who is live “in the field” using Zed-Eyes. The working relationship between the two is very like Vika and Jack in Oblivion, or like the A.I. in Sight.

In this scene, we see EYE-LINK used by a pick-up artist, Matt, who acts as a remote “wingman” for pick-up student Harry. Matt has a group video chat interface open with paying customers eager to lurk, comment, and learn from the master.

Harry’s interface

Harry wears a hidden camera and microphone. This is the only tech he seems to have on him. He hears only his wingman’s voice, and can only communicate back by talking generally, talking about something he’s looking at, or using pre-arranged signals.

image1.gif
Tap your beer twice if this is more than a little creepy.

Matt’s interface

Matt has a three-screen setup:

  1. A big screen (similar to the Samsung Series 9 displays) which shows a live video image of Harry’s view.
  2. A smaller transparent information panel for automated analysis, research, and advice.
  3. An extra, laptop-like screen where Matt leads a group video chat with a paying audience, who are watching and snarkily commenting on the wingman scenario. It seems likely that this is not an official part of the EYE-LINK software.
image55.png
image47.png
image28.png
Please make a note of the hilarious and condemning screen names of the peanut gallery: Pie Ape, Popkorn, El Nino, Nixon, Fappucino [sic], Stingray, I_AM_WALDO, and Wigwam.

Harry communicates to Matt by speaking or enacting a crude sign language for the video camera. Matt communicates back to Harry using an audio link through a headset. Setting up the connection is similar to Skype/Hangouts (even featuring an icon of an archaic laptop). Every first-person EYE-LINK view is characterized by a pixelated gradient at the sides of the screen.

Matt’s wingman support tools

We see that Matt has a number of tools to help him act as a remote wingman for Harry, evident through six main navigation items on his side screen: a home icon, Web, News, Image, Video, and Social Media. The home icon is always bright white, but the section he’s currently viewing is a bolded gray.

In the Image mode, it runs face recognition on a still image from the video feed of Harry’s view, and provides its best match for further research.

image20.png
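As a sketch of what the Image mode’s matching step might look like with today’s tools (this uses the open-source face_recognition library; the “database” of known faces is a toy stand-in for whatever web and social sources EYE-LINK would actually query):

```python
# Hedged sketch of the Image mode's face-matching step, using the
# open-source face_recognition library. The "database" is a toy
# stand-in for whatever sources EYE-LINK would query.

import face_recognition

# A still frame grabbed from the live feed of Harry's view.
frame = face_recognition.load_image_file("still_from_feed.jpg")
found = face_recognition.face_encodings(frame)

# Hypothetical database of (name, reference photo) pairs built beforehand.
known_names = ["Candidate A", "Candidate B", "Candidate C"]
known_encodings = [
    face_recognition.face_encodings(
        face_recognition.load_image_file(path))[0]
    for path in ["a.jpg", "b.jpg", "c.jpg"]
]

if found:
    # Smallest face distance = best match, which the panel would then
    # hand off to the Web/News/Social Media tabs for further research.
    distances = face_recognition.face_distance(known_encodings, found[0])
    best = distances.argmin()
    print(f"Best match: {known_names[best]} (distance {distances[best]:.2f})")
else:
    print("No face found in the frame.")
```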

Somehow he can also get information on the event that Harry is attending. In this view, there’s a floor plan of the venue, which Matt can use to instruct Harry.

image11.png

OK. This is of course a creepy use of this interface, but it’s easy to imagine scenarios where something like the EYE-LINK is used virtuously:

  • A nurse practitioner needing to call on the expertise of a remote, more senior caregiver.
  • An airplane maintenance worker needing to speak to the aircraft engineers about a problem she’s encountering.
  • Paintball players coordinating their game through a centralized team captain.

So with that in mind, let’s review this with the caveat that of course the specific wingman scenario is super creepy.

Analysis: Harry’s feedback

The communication channel back from Harry to Matt doesn’t need to be too rich for these purposes, but there are ways that it could be richer. Of course Harry could pick up his phone and simply type something that Matt could see. But if the communication needed to be undetectable to a casual observer, there are other options. Subvocalization detection is nascent, but it’s a possibility, and mostly natural for the speaker.

78105main_ACD04-0024-001.jpg
Image courtesy of the NASA Ames Research Center demo of subvocalization.

If the remote user has time for training, subgestural detection might be another option. This is like subvocal detection, but instead of detecting the throat movements used in speech, it would be an armband (like the Myo) that detects gentle finger presses, allowing the user chorded keyboard input that he could use while, say, gripping the beer bottle.

tw_hand.png
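A minimal sketch of the chorded-input idea follows; the chord table is invented, and a real system would need the armband’s own SDK plus the training mentioned above:

```python
# Sketch of chorded input: combinations of gently pressed fingers map to
# whole pre-arranged messages. The chord table is invented for
# illustration; a real system would read finger activity from an EMG
# armband's SDK.

CHORDS = {
    frozenset(["index"]):             "yes",
    frozenset(["middle"]):            "no",
    frozenset(["index", "middle"]):   "she's into it",
    frozenset(["ring", "pinky"]):     "abort",
    frozenset(["index", "ring"]):     "need a suggestion",
}

def decode_chord(pressed_fingers):
    """Map a set of pressed fingers to a message (empty if unrecognized)."""
    return CHORDS.get(frozenset(pressed_fingers), "")

# Harry grips the beer bottle and gently squeezes index + middle:
print(decode_chord({"index", "middle"}))  # she's into it
```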

Either way, richer “undetectable” communication mechanisms exist, and could be incorporated.

Analysis: Graphics

One of the refreshing things about the interfaces in Black Mirror generally—and these screens in particular—is how understated they are, especially compared to the Rococo interfaces that populate much of sci-fi. (Compare the two below.)

The color palette is spartan grayscale. The typeface is Helvetica (or adjacent). Nothing 3D, nothing swoopy, no complexity for complexity’s sake.

Analysis: Navigation and layout

The navigation for the information panel is a little confusing. Sure, it looks like lots of websites. But this chunking of information into separate screens requires that Matt hunt for information that’s of interest. Better would be to have a single, dynamic screen, and have the system do real-time parsing, providing suggestions and notifications in the context of the event. If he needed to dive down into some full-screen mode, let it fill the screen with some easy way to return to context.

Also, how did he get to the event view? Is that just a web view? What bar puts its floorplan on its site? There is no primary navigation element that would on first glance explain how he got there, or once there, how he might get back to other screens. The home icon is obscured. (Maybe this is designed by Apple, though, and has some entirely hidden swipe gesture or long press to request the event screen or force a return to home?) It’s really hard to say, and so fails affordance.

Analysis: Group chat

A quick look at any modern group video chat software shows that this is too pared down, with lots of audio and video controls missing, as well as controls for the “meeting.” It’s possible that these appear only if Matt interacts with the cursor on that laptop, but again, affordances.

Analysis: More wingman tools?

There are more tools that would be useful to a wingman’s job, which could be built even now—without the strong AI that this diegesis has. They could be more virtuous, like…

  • Ways to keep Harry calm, focused, and feeling confident.
  • Reminders of general best practices for making a good impression.
  • Automatic privacy blackout when Harry approaches people for conversation.
thegame

Or they could be…uh…more questionable. (Here I’ll confess to referencing The Game: Penetrating the Secret Society of Pickup Artists by Neil Strauss, for how a real PUA might handle it.)

  • A transcript of the conversation with key phrases highlighted, indicating the “target’s” attitudes and levels of interest.
  • Personality analysis on social media, listing derived topics that these particular “targets” would find engaging.
  • A list of Harry’s practiced “routines” for Matt to quickly review, and suggest. The AI could even highlight its best-guess suggestion.
  • Counts of “indicators of interest.”
  • An overview of Matt’s favored stages of pickup, with an indicator of where Harry is and how well he performed on the prior stages.

Either way, the support that these tools offer is pretty minimal compared to what could be done, but then again, that kind of fits the story. Yes, the creepiness of the remote wingman support tools is part of the point. But the whole reason the peanut gallery pays for the honor of watching Matt coach Harry is (yes, voyeurism, but also) to witness a master wingman at his work. If the system did too much of the work, the peanut gallery would be less incentivized to pay to see him in action.

Black Mirror: White Christmas (2014)

As part of my visit to Delft University earlier this year, Ianus Keller asked his IDE Master Students to do some analysis of the amazing British sci-fi interface series Black Mirror, specifically the “White Christmas” episode. While I ordinarily wait for television programs to be complete before reviewing them, Black Mirror is an anthology series, where each new show presents a new story world, or diegesis.

image58.gif

Overview

Matt (Jon Hamm) and Potter (Rafe Spall) are in a cabin sharing stories about their relationships with technology and their loved ones. Matt tells stories about his past career of (1) delivering “romantic services” to “dorks” using a direct link to his client’s eyes and (2) his regular job of training clones of people’s personalities as assistive Artificial Intelligences. Potter tells the story of his relationship with his wife and alleged daughter, who blocks him through the same vision-controlling interface. In the end…

massive-spoilers_sign_color

…it turns out Matt and Potter are actually talking to each other as interrogator and artificial intelligence respectively, in order to get Potter convicted.

IMDB: https://www.imdb.com/title/tt34786243/

RSW CalArts: Rebel bombing target computer 2

I have, over the past several years, conducted a workshop at a handful of conferences, companies, and universities called Redesigning Star Wars. (Read more about that workshop on its dedicated page.) It’s one of my favorite workshops to run.

In April of 2016 I was invited to run the workshop at CalArts in Southern California for some of the interaction design students. Normally I ask attendees to illustrate their design ideas on paper, but the CalArts students went the extra mile to illustrate their ideas in video comps! So with complete apologies for being impossibly late, here are some of those videos.

Next up, a second redesign of the Rebel bombing target computer.

Redesigning Star Wars_UX London 2015_Interfaces_Page_19.png

Monique Wilmoth and Andrea Yasko redesigned the controls to keep the Rebel bomber’s hands on the controls, added voice control, and reconsidered the display. Take a look at their video, below.

 

If you’d like to discuss a workshop for your org, contact workshop@scifiinterfaces.com.

RSW CalArts: Rebel bombing target computer

I have, over the past several years, conducted a workshop at a handful of conferences, companies, and universities called Redesigning Star Wars. (Read more about that workshop on its dedicated page.) It’s one of my favorite workshops to run.

In April of 2016 I was invited to run the workshop at CalArts in Southern California for some of the interaction design students. Normally I ask attendees to illustrate their design ideas on paper, but the CalArts students went the extra mile to illustrate their ideas in video comps! So with complete apologies for being impossibly late, here are some of those videos.

Next up, a redesign of the Rebel bombing target computer.

Redesigning Star Wars_UX London 2015_Interfaces_Page_19

Abby Chang and Julianna Bach redesigned the controls to keep the Rebel bomber’s hands on the controls, and reconsidered the display. Take a look at their video, below.

If you’d like to discuss a workshop for your org, contact workshop@scifiinterfaces.com.

RSW CalArts: Luke’s binoculars

I have, over the past several years, conducted a workshop at a handful of conferences, companies, and universities called Redesigning Star Wars. (Read more about that workshop on its dedicated page.) It’s one of my favorite workshops to run.

In April of 2016 I was invited to run the workshop at CalArts in Southern California for some of the interaction design students. Normally I ask attendees to illustrate their design ideas on paper, but the CalArts students went the extra mile to illustrate their ideas in video comps! So with complete apologies for being impossibly late, here are some of those videos.

First up, a redesign of Luke’s binoculars.

Redesigning Star Wars_UX London 2015_Interfaces_Page_03.png

 Yinchin Niu and Samantha Shiu redesigned the control buttons to make them more accessible to Luke and reconsidered the augmentations through the viewfinder. Take a look at their demonstration video, below.

If you’d like to discuss a workshop for your org, contact workshop@scifiinterfaces.com.

Using iMovie

If you prefer to use iMovie (it’s free for Mac users) for contributing to the blog, here’s how. Once your file is in a digital format, you can extract both clips and screenshots in iMovie. All of the clips will be stored in events and projects in iMovie regardless of whether or not you export the files for use elsewhere.

First, import the video into iMovie

  1. Create a new library in iMovie by going to File > Open Library > New from the main menu. Name the library and save.
    image11
  2. A new event should have been automatically created. To rename it, double-click on the name. (Since I’m doing a TV series, I named the event “eps” for episodes.)
  3. Once the event has been renamed, either select the option to “Import” into the new event or drag and drop the film into the box from the Finder.
    image8
  4. The screen should look something like this when the movie has finished importing.
    image3
  5. Select File > New Movie from the top menu bar.
    image22
  6. The library should automatically be set to the one you’re working with.
    image16
  7. The screen should look like this with a blank timeline at the bottom.
    image21
  8. Select the filmstrip (or strips if it’s a TV show), then drag it down to the timeline.
    image15
  9. You can adjust the zoom of the filmstrip with the slider.
    You can scrub just by hovering over the filmstrip with your mouse.
    image1
  10. You’ll want to save the movie you just created as a project. To do this, select the Projects button in the top option bar.
    image20
  11. Be sure to name the project something clear that you’ll be able to quickly refer to as you start editing and scrubbing for interface footage. For example, since this project will be the master of all of the footage where I do all of the slicing, I’ll name it “eps cut”.
    image6

To slice the filmstrip…

Before you can extract video clips, you first need to slice the filmstrip.

  1. Click on the timeline where you want to slice and type Cmd+B. Continue to slice the beginning and end of each of your clips, all the way through the footage.
    image13
  2. When you’re done, it should look something like this.
    image17

To snag a screenshot…

To snag a screenshot, just click on the timeline to pick the frame. You’ll see a preview in the viewer. Then select Share > Image from the top menu bar and save as usual.

image7
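(If you’d rather skip iMovie for this step, a frame can also be grabbed from the command line. This is a hedged aside, not part of the iMovie workflow above, and it assumes you have ffmpeg installed.)

```python
# Hedged alternative: grab a single frame at a timestamp with ffmpeg,
# called here through Python's subprocess. Filenames are placeholders.

import subprocess

def grab_frame(movie, timestamp, out_png):
    """Save the frame at `timestamp` (e.g. "00:12:34") as a PNG."""
    subprocess.run(
        ["ffmpeg", "-ss", timestamp, "-i", movie, "-frames:v", "1", out_png],
        check=True,
    )

grab_frame("eps.mp4", "00:12:34", "interface_still.png")
```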

Then to organize all of those clips by tech…

Grouping all of your clips together by each piece of tech can be a real time saver when you need to refer back to all of the clips during your analysis.

For iMovie, this is where the process begins to fall apart. iMovie is great for assembling movies, but not necessarily for disassembling them like we do for the blog.

You’ll need to create a new project for each piece of tech under the library you created previously. The easiest way I’ve found to do this in the latest iteration of iMovie is to…

  1. Go to the Projects view and duplicate the project with all the sliced footage by either using the contextual menu, or by selecting the project and using the keyboard shortcut Cmd+D.
    image18
  2. It will be automatically named, so rename it by tech type or interface. You can do this by either double-clicking on the project name, or selecting the option from the contextual menu.
    image10
  3. Double click on the new project thumbnail to open it, and delete all of the sliced clips that are not part of that specified tech.

    This is an odd way of doing it, but after Apple’s “improvements” to iMovie, the drag and drop feature doesn’t work the way it did before.
    image12
  4. Do this for each type of tech. In the end, your project library should look something like this.
    image19

And extract a clip for animated gifs…

This will be similar to how you organize the clips by tech. You’ll start by duplicating projects and deleting the clips you don’t want.

  1. Go to the Projects view and duplicate the project that has the clip you want to extract by either using the contextual menu, or by selecting the project and using the keyboard shortcut Cmd+D.
    image14
  2. Rename the project something that describes the clip. You can do this by either double-clicking on the project name, or selecting the option from the contextual menu.

    Since you can’t create subfolders to keep everything organized by type, it’s best to name the clips so that like stays with like.
    image4
  3. Delete all of the slices you don’t want in the extracted clip.
    image2
  4. Export the clip by selecting Share > File from the top menu bar.
    image9
  5. In the settings window that pops up, select the quality settings you want to use. I usually pick no more than 720 for the quality. Anything bigger will create a ginormous file.
    image5
  6. The file will save as an mp4, so you’ll still need to take it into Photoshop or your preferred image editing tool for converting it to an animated gif (or see the command-line sketch below).
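Here’s that command-line sketch. It assumes ffmpeg is installed, and the flags below are just one reasonable starting point:

```python
# Hedged alternative to the Photoshop step: convert the exported mp4
# clip straight to a looping, blog-friendly animated gif with ffmpeg.

import subprocess

def clip_to_gif(clip_mp4, out_gif, fps=10, width=480):
    """Convert an exported clip to a looping gif scaled to `width` pixels."""
    subprocess.run(
        [
            "ffmpeg", "-i", clip_mp4,
            "-vf", f"fps={fps},scale={width}:-1:flags=lanczos",
            "-loop", "0",
            out_gif,
        ],
        check=True,
    )

clip_to_gif("HUD_menu.mp4", "HUD_menu.gif")
```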

And that’s it. If you know of better ways to use iMovie to organize your clips for contributing to the blog, feel free to let me know in the comments and I’ll update the article.

Report Card: Doctor Strange

Read all Doctor Strange reviews in chronological order.

Chris: I really enjoyed Doctor Strange. Sure, it’s a blockbuster squarely in the origin-story formula, but the trippiness, action, special effects, and performances made it fun. And the introduction of the new overlapping rulespace of magic makes it a great addition to the Marvel Cinematic Universe. And hey, another Infinity Stone! It’s well-connected to the other films.

Scout: Doctor Strange is another delightful film that further rounds out the Marvel universe. It remained faithful (enough) to the comics that I loved growing up and the casting of Benedict Cumberbatch was spot-on perfect, much as Robert Downey Jr. was for Tony Stark. It is a joyful and at times psychedelic ride that I’m eager to take again. “The Infinity Wars” will be very interesting indeed.

But, as usual, this site is not about the movie but the interfaces, and for that we turn to the three criteria for evaluating movies here on scifiinterfaces.com.

  1. How believable are the interfaces? (To keep you immersed.)
  2. How well do the interfaces inform the narrative of the story? (To tell a good story.)
  3. How well do the interfaces equip the characters to achieve their goals? (To be a good model for real-world design.)
Report-Card-Doctor-Strange

Sci: B- (3 of 4) How believable are the interfaces?

Magic might be a tricky question for narrative believability, as by definition it is a breaking of some set of rules. It’s a tempting laziness to patch every hole we find by proclaiming “it’s magic!” and moving on. But in most modern stories, magic does have narrative rules; what it breaks are the known laws of physics or the capabilities of known technology, but it remains consistent within the world. Oh, hey, kind of like a regular sci-fi story.

The artifacts mostly score quite well for believability. The Boots, the Staff, and the Bands are constrained in what they do, so no surprise there. Even the Cloak is a believable intelligent agent acting for Strange. Its flight-granting and ability to pull in any spatial direction arbitrarily don’t quite jibe, but they don’t contradict each other, just raise questions that aren’t answered in the movie itself.

But, the Sling Rings are a trainwreck in terms of usability and believability. With that and the Eye missing some key variables that simply must be specified for it to do what we see it doing, it breaks the diegesis, taking us out of the movie.

Fi: A (4 of 4) How well do the interfaces inform the narrative of the story?

None of these are tacked-on gee-whiz.

  • Since Strange is occupying an office (Master) that is part of a venerated and peacekeeping secret organization (the Masters of the Mystic Arts), we would expect it to have some tools in place to help the infantry and the boss.
  • That the powerful artifacts choose their masters helps establish Strange as unique and worthy.
  • The Eye is core to the plot, and the film uses it to convey how much of a talent and rulebreaking maverick Strange is.
  • The Staff helps us see Mordo’s militancy, threat, and lawful neutral-ness.
  • The laugh-out-loud comedy of the Cloak comes from its earnestly trying to help, its constraints, and how Strange is really, really new to this job.
  • Even the dumb Sling Ring helps show Strange’s learning and confidence, and set up how Strange gets stabbed and yadda yadda yadda begins his reconciliation with Dr. Palmer.
Cloak-of-Levitation-pulling
Once more, because it was so damned funny.

All great narrative uses of the “tech” in the film.

Interfaces: C+ (2 of 4) How well do the interfaces equip the characters to achieve their goals?

The Boots do. The Cloak totally does. The “AR” surgical assistant does. (And it’s not even an artifact.) If we ever get to technologies that would enable such things, these would be fine models for real world equivalents. (With the long note about general intelligence needing language for strategic discussions with humans.)

DoctorStrange_AR_ER_assistant-05

That aside, the Sling Ring serves a damned useful purpose, but its design is a serious impediment to its utility, and all the Masters of the Mystic Arts use it. The Staff kind of helps its user, i.e. Mordo, but you have to credit it with a great deal of contextual intelligence or some super-subtle control mechanism. The Bands are so clunky that they’re only useful in the exact context in which they are used. And the Eye, with its missing controls, missing displays, and dangerously ambiguous modes, is a universe-crashing temporal crisis just waiting to happen. This is where the artifacts suffer the most. For that, it gets the biggest hit.

Final Grade B- (9 of 12), Must-see.

Definitely see it. It’s got some obvious misses, but a lot of inventive, interesting stuff, and some truly cutting-edge concepts. In a hat tip to Arthur C. Clarke’s famous third law, I suppose this is “sufficiently advanced technology.”

IMDB: https://www.imdb.com/title/tt1211837/

Dr. Strange’s augmented reality surgical assistant

We’re actually done with all of the artifacts from Doctor Strange. But there’s one last kind-of interface that’s worth talking about, and that’s when Strange assists with surgery on his own body.

After being shot with a soul-arrow by the zealot, Strange is in bad shape. He needs medical attention. He recovers his sling ring and creates a portal to the emergency room where he once worked. Stumbling with the pain, he manages to find Dr. Palmer and tell her he has a cardiac tamponade. They head to the operating theater and get Strange on the table.

DoctorStrange_AR_ER_assistant-02.png

When Strange passes out, his “spirit” is ejected from his body as an astral projection. Once he realizes what’s happened, he gathers his wits and turns to observe the procedure.

DoctorStrange_AR_ER_assistant-05.png

When Dr. Palmer approaches his body with a pericardiocentesis needle, Strange manifests so she can sense him and recommends that she aim “just a little higher.” At first she is understandably scared, but once he explains what’s happening, she gets back to business, and he acts as a virtual coach.

DoctorStrange_AR_ER_assistant-08.png

In this role he points at the place she should insert the needle, and illuminates the chest cavity from within so she can kind of see the organ she’s targeting and the surrounding tissue. She asks him, “What were you stabbed with?” and he must confess, “I don’t know.”

DoctorStrange_AR_ER_assistant-11.png

Things go off the rails when the zealot who stabbed him shows up also as an astral projection and begins to fight Strange, but that’s where we can leave off the narrative and focus on everything up to this point as an interface.

Imagine with me, if you will, that this is not magic, but a kind of augmented reality available to the doctor. Strange is an unusual character in that he is both one of the world’s great surgeons and the patient in the scene, so let’s tease apart each.

An augmented reality coach

Realize that Dr. Palmer is getting assistance from one of the world’s greatest surgeons, rendered as a volumetric projection (“hologram” in the vernacular). She can talk to him as if he were there to get his advice, and, I presume, even dismiss him if she believes he is wrong. Wouldn’t doctors working in new domains relish the opportunity to get advice from experts until they have built their own mastery?

Two notes to extend this idea.

In the spirit of evidence-based medicine (and thinking bigger), we must admit that it would be better to have diagnoses and advice based on the entirety of the medical record and current, ethical best practices, not just one individual expert. But if an individual doctor prefers to have that information delivered through an avatar of a favored mentor, why not?

The second note to anyone thinking of this as a real world model for an AR assistant: I would expect a fully realized solution to include augmentations other than just a human, of course, such as ideal angles for incisions, depth meters, and life signs.

A (crude) body visualization

One of the challenges surgeons have when working with internal damage is that the body is largely opaque. They have to use visualization tools like radiographs and (very) educated guesses to diagnose and treat what’s going on inside these fleshy boxes of ours. How awesome, then, that the AR coach can illuminate (in both senses) the body so Dr. Palmer can perform the procedure correctly.

DoctorStrange_AR_ER_assistant-13.png

Admittedly, what we see in Doctor Strange is a crude version. This same x-ray vision appeared with more clarity and higher resolution in two other films, as cited in the Medical chapter of Make It So. In Lost in Space, the medical table projects a real-time volumetric scan of the organs inside Judy’s body into the eyes of the observers.

7809592870_cf8f3fdccf_o.jpg

In Chrysalis, Dr. Bruger sees a volumetric display of the patient on which she is teleoperating.

7809609070_b65b6fb936_o.jpg

But despite its low resolution, I wanted to draw it out as another awesome and somewhat subtle part of the way this AR assistant helps the doctor.

A queryable patient avatar

Lastly, consider that Dr. Palmer is able to ask her patient what happened to him. Of course, in the real world, passed-out patients aren’t able to answer questions, but understanding the events that led to a crisis is important. I can imagine several sci-fi ways that this information might be retrievable from the world.

  • Trace evidence on the patient’s body: High-resolution sensors throughout the operating theater could have automatically run forensic analysis on the patient the moment they entered the room to determine type of wound and likely causes, such as  microscopic detection of soot in entrance wounds.
  • Environmental sensors: If the accident happened in a place with sensors that are queryable, then the assistant could look at video footage, or listen in to microphones in the environment to help piece together what happened. Of course the notion of a queryable technological panopticon has massive privacy issues which cannot be overlooked, but if the information is available to medical professionals, it would be tragic to ignore it in genuine crises.
  • Human witnesses can provide informative narratives. Witnesses and first responders may be on record already. But in looking at the environmental sensors, the assistant might be able to instantly reach out to those who have not. Imagine one of these witnesses, shaken by the event he saw, on a commute home. His phone buzzes and it is the assistant saying, “Hello, Mr. Mackinnon. Records indicate that you were witness to a violent crime today, and your account of the event is needed for the victim, who is currently in surgery. Can you take a moment to answer some questions?”
  • Patient preferences should be automatically exposed and incorporated via the assistant as well. If the patient was a Jehovah’s Witness, for instance, then their desire not to have a blood transfusion should be raised in whatever form the assistant takes.

A surgical assistant could automatically query all of these sources to make a hypothesis of what happened and advise the procedure. This could be available to the doctor for the asking, volunteered by the assistant at a lull in more critical action, or offered by the assistant as a preventative. I suspect it’s more likely the doctor would ask the assistant than the patient, e.g. “OK, ERbot, what happened to this guy?” but if the doctor prefers, she should be able to ask in the second person, as Dr. Palmer does in the scene, and the system should reply appropriately.
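To make that concrete, here’s a hedged sketch of an assistant aggregating those sources into a working hypothesis; every source, name, and confidence value is invented for illustration:

```python
# Hypothetical sketch: aggregate findings from the sources listed above
# (trace evidence, environmental sensors, witnesses, preferences) into a
# working hypothesis the doctor can ask about. Everything is invented.

from dataclasses import dataclass, field

@dataclass
class Finding:
    source: str        # "trace", "environment", "witness", "preferences"
    detail: str
    confidence: float  # 0.0 to 1.0

@dataclass
class Hypothesis:
    summary: str
    support: list = field(default_factory=list)

def build_hypothesis(findings):
    # Lead with the highest-confidence finding about cause; keep the rest
    # as support for follow-up questions ("OK, ERbot, what happened?").
    causal = [f for f in findings if f.source != "preferences"]
    ranked = sorted(causal, key=lambda f: f.confidence, reverse=True)
    summary = ranked[0].detail if ranked else "Cause unknown"
    return Hypothesis(summary=summary, support=ranked)

findings = [
    Finding("trace", "Puncture wound, no soot: stabbing, not gunshot", 0.95),
    Finding("environment", "Street camera shows an altercation at 21:40", 0.70),
    Finding("preferences", "No directive against transfusion on record", 0.90),
]
print(build_hypothesis(findings).summary)
```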

Sure, in this context, it’s magic, but since we can imagine how it could be done with technology, this scene gives us a very dense set of inspirational ideas for the future of surgical assistants.

The Dark Dimension mode (5 of 5)

We see a completely new mode for the Eye in the Dark Dimension. With a flourish of his right hand over his left forearm, a band of green lines begins orbiting his forearm just below his wrist. (Another orbits just below his elbow, just off-camera in the animated gif.) The band signals that Strange has set this point in time as a “save point,” like in a video game. From that point forward, when he dies, time resets and he is returned here, alive and well, though he and anyone else in the loop are aware that it happened.

Dark-Dimension-savepoint.gif

In the scene he’s confronting a hostile god-like creature on its own mystical turf, so he dies a lot.

DoctorStrange-disintegrate.png

An interesting moment happens when Strange is hopping from the blue-ringed planetoid to the one close to the giant Dormammu face. He glances down at his wrist, making sure that his savepoint was set. It’s a nice tell, letting us know that Strange is nervous about facing the giant, Galactus-sized primordial evil that is Dormammu. This nervousness ties right into the analysis of this display. If we changed the design, we could put him more at ease when using this life-critical interface.

DoctorStrange-thisthingon.png

Initiating gesture

The initiating gesture doesn’t read as “set a savepoint.” This doesn’t show itself as a problem in this scene, but if the gesture did have some sort of semantic meaning, it would make it easier for Strange to recall and perform correctly. Maybe if his wrist twist transitioned from moving splayed fingers to his pointing with his index finger to his wrist…ok, that’s a little too on the nose, so maybe…toward the ground, it would help symbolize the here & now that is the savepoint. It would be easier for Strange to recall and feel assured that he’d done the right thing.

I have questions about the extent of the time loop effect. Is it the whole Dark Dimension? Is it also Earth? Is it the Universe? Is it just a sphere, like the other modes of the Eye? How does he set these? There’s not enough information in the movie to backworld this, but unless the answer is “it affects everything,” there seem to be some variables missing in the initiating gesture.

Savepoint-active signal

But where the initiating gesture doesn’t appear to be a problem in the scene, the wrist-glance indicates that the display is. Note that, other than being on the left forearm instead of the right, the bands look identical to the ones in the Tibet and Hong Kong modes. (Compare the Tibet screenshot below.) If Strange is relying on the display to ensure that his savepoint was set, having it look identical is not as helpful as it would be if the visual were unique. “Wait,” he might think, “am I in the right mode here?”

Eye-of-Agamoto10.png

In a redesign, I would select an animated display that was not a loop, but an indication that time was passing. It can’t be as literal as a clock of course. But something that used animation to suggest time was progressing linearly from a point. Maybe something like the binary clock from Mission to Mars (see below), rendered in the graphic language of the Eye. Maybe make it base-3 to seem not so technological.

binary_clock_10fps.gif
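The underlying logic of such a display is trivial; here’s a toy sketch of an elapsed-time counter rendered as base-3 digits, purely illustrative and obviously not how the Eye works:

```python
# Toy sketch of the "savepoint clock" idea: show seconds elapsed since
# the savepoint as base-3 digits, so the display visibly ticks forward
# from a fixed point instead of looping. Purely illustrative.

import time

def to_base3(n, width=8):
    digits = []
    while n:
        n, r = divmod(n, 3)
        digits.append(str(r))
    return "".join(reversed(digits)).rjust(width, "0")

savepoint = time.time()  # the moment the savepoint is set

def savepoint_display():
    elapsed = int(time.time() - savepoint)
    return to_base3(elapsed)

print(savepoint_display())  # "00000000" right after setting; ticks upward after
```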

Seeing a display that is still on invocation, and that becomes animated once the savepoint is set, would mean that all he has to do is glance to confirm the unique display is in motion: “Yes, it’s working. I’m in the Groundhog Day mode, and the savepoint is set.”