Mind Crimes

Does real Greta know that her home automation comes at the cost of a suffering sentience? I would like to believe that Smartelligence’s customers do not know the true nature of the device, that the company is deceiving them, and that virtual Greta is denied direct communication to enforce this secret. But I can’t see that working across an entire market. Given thousands of Cookies and thousands of users, somehow, somewhere, the secret would get out. One of the AIs would use song choices, or Morse code, or any of its actuators to communicate in code, and one of the users would figure it out, leak the secret, and bring the company crashing down.

And then there’s the final scene in the episode, in which we see police officers torturing one of the Cookies, and it is clear that they’re aware of what it is. It would be a stretch to think that just the police are in on it with Smartelligence, so we have to accept that everyone knows.

Black_Mirror_White_Christmas_Officers.png

This asshole.

That they are aware means that—as Matt has done—Greta, the officers, and all Smartelligence customers have told themselves that “it’s just code” and, therefore, OK to subjugate, to casually cause to suffer. In case it’s not obvious, that’s like causing human suffering and justifying it by telling yourself that those people are “just atoms.” If you find that easy to do, you’re probably a psychopath.


Tibet Mode Analysis: Representing the future (3 of 5)

A major problem with the use of the Eye is that it treats the past and the future similarly. But they’re not the same. The past is a long chain of arguably-knowable causes and effects. So, sure, we can imagine that as a movie to be scrubbed.

But the future? Not so much. Which brings us, briefly, to this dude.

pierre-simon-laplace.png

If we knew everything, Pierre-Simon Laplace argued in 1814, down to the state of every molecule, and we had a processor capable of crunching it all, we would be able to predict with perfect precision the events of the future. (You might think he’s talking about a computer or an AI, but in 1814 they used demons for their thought experiments.) In the two centuries since, there have been several major repudiations of Laplace’s demon, chaos theory and quantum indeterminacy chief among them. So let’s stick to the near-term, where there’s not one known future waiting to happen, but a set of probabilities. That means we have to rethink what the Eye shows when it lets Strange scrub the future.

Note that in the film, the “future” of the apple shown to Strange was just a likelihood, not a fact. The Eye shows it being eaten. In the actual events of the film, after the apple is set aside:

  • Strange repairs the tome
  • Mordo and Wong interrupt Strange
  • They take him into the next room for some exposition
  • The Hong Kong sanctum portal swings open
  • Kaecilius murders a redshirt
  • Kaecilius explodes Strange into the New York sanctum

Then for the next 50 minutes, the Masters of the Mystic Arts are scrambling to save the world. I doubt any of them have time to while away in a library, there to discover an abandoned apple with a bite taken out of it, and decide—Staphylococcus aureus be damned—a snack’s a snack. No, it’s safe to say the apple does not get eaten.

post-eye-no-apple.png

So the Eye got the apple wrong, but it showed Strange that future as if it were a certainty. That’s a problem. Sure, when asked about the future, it ought to show something, but better would be to…

  • Indicate somewhere that what is being displayed is one of a set of possibilities
  • Provide options to understand the probability distribution among the set
  • Allow exploration of the alternates
  • Notify the user when new data shifts the probability distribution or introduces important new possibilities

So how to display probabilities? There are lots of ways, but I am most fond of probability tree diagrams. In nerd parlance, this is a directed tree where the nodes are states and the edges are labeled with probabilities. In regular language they look like sideways two-dimensional trees. See an example below from mathisfun.com. These diagrams seem to me a quick way to understand branching possibilities. (I couldn’t find any studies giving me more to work on than “seem to me”.)

probability-tree-coin2.png

In addition to being easy to understand, they afford visual manipulation. You can work branching lines around an existing design.
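If it helps to make the structure concrete, here is a minimal sketch in Python (the names are mine and purely illustrative, not anything from the film or the diagram’s source) of the data such a tree holds: each node is a state, each edge carries the probability of reaching the child state, and a scenario’s overall likelihood is the product of the probabilities along its path, which is the same math the coin-toss diagram above illustrates.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One state in a probability tree; children maps an outcome label to (probability, child node)."""
    label: str
    children: dict = field(default_factory=dict)

    def add(self, outcome, probability, child):
        """Attach a child state reached with the given probability; returns the child for chaining."""
        self.children[outcome] = (probability, child)
        return child


def leaf_probabilities(node, prob=1.0, path=()):
    """Yield (path, probability) for every terminal scenario in the tree."""
    path = path + (node.label,)
    if not node.children:
        yield path, prob
        return
    for outcome, (p, child) in node.children.items():
        yield from leaf_probabilities(child, prob * p, path)


# The two-coin-toss example from the diagram above.
root = Node("start")
heads = root.add("H", 0.5, Node("heads"))
tails = root.add("T", 0.5, Node("tails"))
heads.add("H", 0.5, Node("heads, heads"))
heads.add("T", 0.5, Node("heads, tails"))
tails.add("H", 0.5, Node("tails, heads"))
tails.add("T", 0.5, Node("tails, tails"))

for path, p in leaf_probabilities(root):
    print(" -> ".join(path), p)  # each of the four two-toss outcomes lands at 0.25
```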

Now if we were actually working out a future-probabilities gestural scrubber attached to the Eye of Agamotto saucer, we’d have a whole host of things to get into next, like designing…

  1. A compact but informative display that signals the relative probabilities of each timeline
  2. The mechanism for opening that display so probabilities can be seen rather than read
  3. Labels so Strange wouldn’t have to hunt through all of them for the thing of interest (or some means of search)
  4. A selection process for picking the new timeline
  5. A comparison mode
  6. A means of collapsing the display to return to scrub mode
  7. A you-are-here signal in the display to indicate the current timeline

Which is a big set of design tasks for a hobbyist website. Fortunately for us, Strange only deals with a simple, probable (but wrong) scenario of the apple’s future as an illustration for the audience of what the Eye can do; and he only deals with the past of the tome. So while we could get into all of the above, it’s most expedient just to resolve the first one for the scene and tidy up the interface so that it helps illustrate a well-thought-out and usable world.

Below I’ve drafted up an extension of my earlier conceptual diagram. I’ve added a tree to the future part of the chapter ring, using some dots to indicate the comparative likelihood of each branch. This could be made more compact, and might be good to put on a second z-axis layer to distinguish it from the saucer, but again: conceptual diagram.

Eye-of-Agamoto-tail.png

If this were implemented in the film, we would want to make sure that the probability tree begins to flicker right before Wong and Mordo shut him down, as a nod to the events happening off screen with Kaecilius that are changing those futures. This would give a clue that the Eye is smartly keeping track of real-world events and adjusting its predictions appropriately.

These changes would make the Eye more usable for Strange and smart as a model for us.

Eye-of-Agamoto-01_comp.png

Twist ending: This is a real problem we will have to solve

I skipped those design tasks for this comp, but we may not be able to avoid those problems forever. As it turns out, this is not (just) an idle sci-fi problem. One of the promises of assistive AI is that it will be giving its humans advice based on predictive algorithms, and that advice will come as a set of probabilistic scenarios. There may be an overwhelmingly likely next scenario, but there may also be several alternatives that users will need to explore and understand before deciding the best strategy. So, yeah, an exercise for the reader.
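As a starting point for that exercise, here’s a rough Python sketch (the function name, thresholds, and scenario labels are all hypothetical) of one policy an assistant might use to decide whether to surface only the dominant scenario or lay out the alternatives for the user to weigh:

```python
def scenarios_to_present(scenarios, dominance=0.8, noise_floor=0.05):
    """scenarios is a list of (label, probability) pairs from some predictive model.

    If one scenario is overwhelmingly likely, present only that one;
    otherwise present every alternative above a noise floor so the user
    can explore and compare before picking a strategy.
    """
    ranked = sorted(scenarios, key=lambda s: s[1], reverse=True)
    top = ranked[0]
    if top[1] >= dominance:
        return [top]
    return [(label, p) for label, p in ranked if p >= noise_floor]


# No single future dominates here, so the user sees the three options above the noise floor.
print(scenarios_to_present([
    ("apple gets eaten", 0.55),
    ("apple gets thrown away", 0.30),
    ("apple rots where it sits", 0.12),
    ("apple is returned to the tree", 0.03),
]))
```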

Wrapping up the Tibet Mode

So three posts is not the longest analysis I’ve done, but it was a lot. To recap: Gestural time scrubbing seems like a natural interaction that maps well to analog clocks. The Eye’s saucer display is cool, but insufficient. We can help Strange much more by adding an events-based chapter ring detailing the facts of the past and the probabilities of the future.

Alas. We’re not done yet. As you’ll recall from the intro post, there are two other modes: the Hong Kong and Dark Dimension modes. Let’s talk next about the Hong Kong mode, which is like the Tibet mode, but different.

Little boxes on the interface

StarshipT-undocking01

After recklessly undocking, we see Ibanez using an interface of…an indeterminate nature.

Through the front viewport Ibanez can see the cables and some small portion of the docking station. That’s not enough for her backup maneuver. To help her with that, she uses the display in front of her…or at least I think she does.

Undocking_stabilization

The display is a yellow wireframe box that moves “backwards” as the vessel moves backwards. It’s almost as if the screen displayed a giant wireframe airduct through which they moved. That might be useful for understanding the vessel’s movement when visual data is scarce, such as navigating in empty space with nothing but distant stars for reckoning. But here she has more than enough visual cues to understand the motion of the ship: if the massive space dock were not enough, there’s that giant moon thing just beyond. So I think understanding the vessel’s basic motion in space isn’t the priority while undocking. More important is to help her understand the position of collision threats, and I cannot explain how this interface does that in any but the feeblest of ways.

If you watch the motion of the screen, it stays perfectly still even as you can see the vessel moving and turning. (In that animated gif I steadied the camera motion.) So what’s it describing? The ideal maneuver? Why doesn’t it show her a visual signal of how well she’s doing against that goal? (Video games have nailed this. The “driving line” in Gran Turismo 6 comes to mind.)

Gran Turismo driving line

If it’s not helping her avoid collisions, the high-contrast motion of the “airduct” is a great deal of visual distraction for very little payoff. That wouldn’t be interaction so much as a neurological distraction from the task at hand. So I even have to dispense with my usual New Criticism stance of accepting it as if it were perfect. Because if this were the intention of the interface, it would be encouraging disaster.

StarshipT-undocking17

The ship does have some environmental sensors, since when it is 5 meters from the “object,” i.e. the dock, a voiceover states this fact to everyone on the bridge. Note that it’s not panicked, even though that’s relatively like being a peach-skin away from a hull breach and bajillions of credits of damage. No, the voice just says it, like it was remarking about a penny it happened to see on the sidewalk. “Three meters from object” is said with the same dispassion moments later, even though that’s a loss of 40% of the prior distance. “Clear” is spoken with the same dispassion, even though it should be saying, “Court martial in process…” Even the tiny little rill of an “alarm” that plays under the scene sounds more like your sister hasn’t responded to her Radio Shack alarm clock in the next room than—as it should be—a throbbing alert.

StarshipT-undocking24

Since the interface does not help her, actively distracts her, and underplays the severity of the danger, is there any apology for this?

1. Better: A viewscreen

Starship Troopers happened before the popularization of augmented reality, so we can forgive the film for not adopting that technology, even though it might have been useful. AR might have been a lot for the film to explain to a 1997 audience. But the movie was made long after the popularization of the viewscreen forward display in Star Trek. Of course the film is embracing a unique aesthetic, but focusing on utility: replace the glass in front of her with a similar viewscreen, and you can even virtually shift her view to the back of the Rodger Young. If she is distracted by the “feeling” of the thrusters, perhaps a second screen behind her will let her swivel around to pilot “backwards.” With this viewscreen she’s got some (virtual) visual information about collision threats coming her way. Plus, you could augment that view with precise proximity warnings and, yes, if you want, airduct animations showing the ideal path (similar to what they did in Alien).

2. A volumetric projection

The viewscreen solution still puts some burden on her as a pilot to translate 2D information on the viewscreen to 3D reality. Sure, that’s often the job of a pilot, but can we make that part of the job easier? Note that Starship Troopers was also created after the popularization of volumetric projections in Star Wars, so that might have been a candidate, too, with some third-person display nearby that showed her the 3D information in an augmented way that is fast and easy for her to interpret.

3. Autopilot or docking tug-drones

Yes, this scene is about her character, but if you were designing for the real world, this is a maneuver that an agentive interface can handle. Let the autopilot do it, or adorable little “tug-boat” drones.

StarshipT-undocking25