Does real Greta know that her home automation comes at the cost of a suffering sentience? I would like to believe that Smartelligence’s customers do not know the true nature of the device, that the company is deceiving them, and that virtual Greta is denied direct communication to enforce this secret. But I can’t see that working across an entire market. Given thousands of Cookies and thousands of users, somehow, somewhere, the secret would get out. One of the AIs would use song choices, or Morse code, or any of its actuators to communicate in code, and one of the users would figure it out, leak the secret, and bring the company crashing down.
And then there’s the final scene in the episode, in which we see police officers torturing one of the Cookies, and it is clear that they’re aware. It would be a stretch to think that just the police are in on it with Smartelligence, so we have to accept that everyone knows.
That they are aware means that—as Matt has done—Greta, the officers, and all Smartelligence customers have told themselves that “it’s just code” and, therefore, OK to subjugate, to casually cause to suffer. In case it’s not obvious, that’s like causing human suffering and justifying it by telling yourself that those people are “just atoms.” If you find that easy to do, you’re probably a psychopath.
When using the Cookie to train the AI, Matt has a portable translucent touchscreen by which he controls some of virtual Greta’s environment. (Sharp-eyed viewers of the show will note this translucent panel is the same one he uses at home in his revolting virtual wingman hobby, but the interface is completely different.)
The left side of the screen shows a hamburger menu, the Set Time control, a head, some gears, a star, and a bulleted list. (They’re unlabeled.) The main part of the screen is a scrolling stack of controls including Simulated Body, Control System, and Time Adjustment. Each has a large icon, a header with “Full screen” to the right, a subheader, and a time indicator. This could be redesigned to be much more compact and context-rich for expert users like Matt. It’s seen for maybe half a second, though, and it’s not the new, interesting thing, so we’ll skip it.
The right side of the screen has a stack of Smartelligence logos which are alternately used for confirmation and to put the interface to sleep.
When virtual Greta first freaks out about her circumstance and begins to scream in existential terror, Matt reaches to the panel and mutes her. (To put a fine point on it: He’s a charming monster.) In this mode she cannot make a sound, but can hear him just fine. We do not see the interface he uses to enact this. He uses it to assert conversational control over her. Later he reaches out to the same interface to unmute her.
The control he touches is the one on his panel with a head and some gears reversed out of it. That icon doesn’t make sense for a mute function. The unmute animation shows it flipping from right to left, which does provide a bit of feedback for Matt, but the control should have a more fitting icon and be labeled.
It’s not clear, though, how he knows that she is trying to speak while she is muted. Recall that she (and we) see her mouthing words silently, but from his perspective, she’s just an egg with a blue eye. The system would need some very obvious MUTE status display, one that increases in intensity when the AI is trying to communicate. Depending on how smart the monitoring feature was, it could even enable a high-intensity alert for her when she needs to communicate something vital. Cinegenically, this could have been a simple blinking of the blue camera light, though that is currently used to indicate the passage of time during the Time Adjustment (see below).
Matt can turn on a Simulated Body for her. This allows the AI to perceive herself as if she had her source’s body. In this mode she perceives herself as existing inside a room with large, wall-sized displays and a control console (more on this below), but is otherwise a featureless white.
I presume the Simulated Body is a transitional model—part of a literal desktop metaphor—meant to make it easy for the AI (and the audience) to understand things. But it would introduce a slight lag as the AI imagines reaching and manipulating the console. Presuming she can build competence in directly controlling the technologies in the house, the interface should “scaffold” away and help her gain the more efficient skills of direct control, letting go of the outmoded notion of having a body. (This, it should be noted, would not be as cinegenic since the story would just feature the egg rather than the actor’s expressive face.)
Neuropsychology nerds may be interested to know that the mind’s camera does, in fact, have spatial lags. Several experiments have been run in which subjects are asked to imagine animals as seen from the side and are then timed on how long it takes them to imagine zooming in to the eye. It usually takes longer for us to imagine the zoom to an elephant’s eye than to a mouse’s because the “distance” is farther. Even though there’s no physicality to the mind’s camera to impose this limit, our brain is tied to its experience in the real world.
The interface Matt has to turn on her virtual reality is confusing. We hear 7 beeps while the camera is on his face. He sees a 3D rendering of a woman’s body in profile and silhouette. He taps the front view and it fills with red. Then he taps the side view and it fills with red. Then he taps some Smartelligence logos on the side with a thumb and then *poof* she’s got a body. While I suspect this is a post-actor interface (i.e., Jon Hamm just tapped some things on an empty screen while on camera, and the designers had to later retrofit an interface that fit his gestures), this multi-button setup and three-tap initialization just makes no sense. It should be a simple toggle with access to optional controls like scaffolding settings (discussed above).
The main tool Matt has to force compliance is a time control. When Greta initially refuses to comply (specifically and delightfully, she asserts, “I’m not some sort of push-button toaster monkey!”), he uses his interface to make it seem like 3 weeks pass for her inside her featureless white room. Then again for 6 months. The solitary confinement makes her crazy and eventually forces compliance.
The interface to set the time is a two-layer virtual dial: two chapter rings with wide blue arcs for touch targets. The first time we see him use it, he spins the outer one about 360° (before the camera cuts away) to set the time for three weeks. While he does it, the inner ring spins around the same center but at a slower rate. I presume it’s months, though the spatial relationship doesn’t make sense. Then he presses the button in the center of the control. He sees an animation of a sun and moon arcing over an illustrated house to indicate her passage of time, and then the display returns. Aside: Hamm plays this beat marvelously by callously chomping on the toast she has just helped make.
Ordinarily I wouldn’t speak to improvements on an interface that is used for torture, but as this could only affect a general AI that is as yet speculative, and it couldn’t be co-opted to torture real people since time travel doesn’t exist, I think this time it’s OK. Discussing it as a general time-setting control, I can see three immediate improvements.
1. Use fast forward models
It makes most sense for her time sentence to end and return to real-world speed automatically. But each time we see the time controls used, the following interaction happens near the end of the time sentence:
Matt reaches up to the console
He taps the center button of the time dial
He taps the stylized house illustration. In response it gets a dark overlay with a circle inside of it reading “SET TIME.” This is the same icon seen 2nd down in the left panel.
He taps the center button of the time dial again. The dark overlay reads “Reset” with a new icon.
He taps the overlay.
Please tell me this is more post-actor interface design. Because that interaction is bonkers.
If the stop function really needs a manual control, well, we have models for that that are very readily understandable by users and audiences. Have the whole thing work and look like a fast forward control rather than this confusing mess. If he does need to end it early, as he does in the 6 months sentence, let him just press a control labeled PLAY or REALTIME.
2. Add calendar controls
A dial makes sense when a user is setting minutes or hours, but a calendar-like display should be used for weeks or months. It would be immediately recognizable and usable by the user and understandable to the audience. If Hamm had touched the interface twice, I would design the first tap to set the start date and the second tap to set the end date. The third is the commit.
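The two-tap calendar reduces to simple date arithmetic, which is part of why it would be so legible. A minimal sketch of the idea in Python (the function name and dates are my own illustration, not anything from the show):

```python
from datetime import date

def sentence_length(first_tap: date, second_tap: date) -> int:
    """Return the sentence duration in days from two calendar taps.

    The first tap sets the start date, the second tap the end date.
    """
    if second_tap < first_tap:
        raise ValueError("End date must not precede start date")
    return (second_tap - first_tap).days

# Three weeks, like Matt's first "sentence":
days = sentence_length(date(2014, 12, 1), date(2014, 12, 22))  # 21 days
```

The commit tap then just confirms a duration both Matt and the audience can read at a glance.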
3. Add microinteraction feedback
Also note that as he spins the dials, he sees no feedback showing the current time setting. At 370° is it 21 or 28 days? The interface doesn’t tell him. If he’s really having to push the AI to its limits, the precision will be important. Better would be to show the time value he’s set so he could tweak it as needed, and then let that count down as time remaining while the animation progresses.
Effectiveness subtlety: Why not just make the solitary confinement pass instantly for Matt? Well, recall he is trying to ride a line of torture without having the AI wig out, so he should have some feedback as to the duration of what he’s putting her through. If it were always instant, he couldn’t tell the difference between three weeks and three millennia if he had accidentally entered the wrong value. But if real-world time is passing, and it’s taking longer than he thinks it should, he can intervene and stop the fast-forwarding.
That, or of course, show feedback while he’s dialing.
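To make the ambiguity concrete: the value at any dial angle is trivial to compute and display live, but only once you know how many days one full turn represents, and that mapping is exactly what the interface never shows. A sketch, with the days-per-turn values as my own assumptions:

```python
def dial_to_days(angle_degrees: float, days_per_turn: int = 28) -> int:
    """Convert accumulated dial rotation to a whole number of days.

    The days_per_turn mapping is hypothetical; the show never specifies one.
    """
    return round(angle_degrees / 360 * days_per_turn)

# Without a live readout, 370 degrees reads differently per mapping:
dial_to_days(370)      # 29 days if one turn = 28 days
dial_to_days(370, 21)  # 22 days if one turn = 21 days
```

A persistent numeric readout next to the dial would collapse that ambiguity for both Matt and the audience.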
Near the end of the episode we learn that a police officer is whimsically torturing another Cookie: he sets the time ratio to “1000 years per minute” and then just lets it run while he leaves for Christmas break. The current time ratio is absent from the screen; it should be displayed, with a control provided.
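The scale of that omission is easy to quantify. At 1000 subjective years per real minute, even a short break racks up geological spans of solitary confinement (a sketch; the length of the break is my assumption, not stated in the episode):

```python
def subjective_years(real_minutes: float, years_per_minute: float = 1000) -> float:
    """Subjective years experienced inside the Cookie for a span of real time."""
    return real_minutes * years_per_minute

# A hypothetical ten-day Christmas break, in real minutes:
minutes = 10 * 24 * 60         # 14,400 minutes
subjective_years(minutes)      # 14,400,000 subjective years
```

A display showing both the ratio and the accumulating subjective total would make the stakes of walking away unmissable.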
Add psychological state feedback
There is one “improvement” that does not pertain to real world time controls, and that’s the invisible effect of what’s happening to the AI during the fast forward. In the episode Matt explains that, like any good torturer, “The trick of it is to break them without letting them snap completely,” but while time is passing he has no indicators as to the mental state of the sentience within. Has she gone mad? (Or “wigged out” as he says.) Does he need to ease off? Give her a break?
I would add trendline indicators or sparklines showing things like:
Valence of speech
I would have these trendlines highlight whenever any of the variables gets close to known psychological limits. Then as time passes, he can watch the trends to know if he’s pushing things too far and ease off.
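The highlighting logic itself is trivial; the hard part is the psychology behind the limits. As a sketch, assuming normalized 0–1 readings and a known limit per variable (all names and numbers here are hypothetical):

```python
def flag_risky_trends(metrics: dict[str, list[float]],
                      limits: dict[str, float],
                      margin: float = 0.1) -> list[str]:
    """Return the metrics whose latest reading is within `margin` of its
    known psychological limit, so the UI can highlight those sparklines."""
    risky = []
    for name, readings in metrics.items():
        if readings and readings[-1] >= limits[name] - margin:
            risky.append(name)
    return risky

metrics = {"speech_valence": [0.3, 0.5, 0.82], "agitation": [0.2, 0.25]}
limits = {"speech_valence": 0.9, "agitation": 0.9}
flag_risky_trends(metrics, limits)  # ["speech_valence"]
```

The interesting design work is in choosing the variables and calibrating the limits, not in the comparison.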
When Durand-Durand captures Barbarella, he places her in a device which he calls the Excessive Machine. She sits in a reclining seat, covered up to the shoulders by the device. Her head rests on an elaborate red leather headboard. Durand-Durand stands at a keyboard, built into the “footboard” of the machine, facing her.
The keyboard resembles that of an organ, but with transparent vertical keys beneath which a few colored lights pulse. Long silver tubes stretch from the sides of the device to the ceiling. Touching the keys (they do not appear to depress) produces the sound of a full orchestra and causes motorized strips of metal to undulate in a sine wave above the victim.
When Durand-Durand reads the strange sheet music and begins to play “Sonata for Executioner and Various Young Women,” the machine (via means hidden from view) removes Barbarella’s clothing piece by piece, ejecting them through a tube in the side of the machine near the floor. Then in an exchange Durand-Durand reveals its purpose…
It’s sort of nice, isn’t it?
Yes. It is nice. In the beginning. Wait until the tune changes. It may change your tune as well.
Goodness, what do you mean?
When we reach the crescendo, you will die of pleasure. Your end will be swift, but sweet, very sweet.
As Durand-Durand intensifies his playing, Barbarella writhes in agony/ecstasy. But despite his most furious playing, he does not kill Barbarella. Instead his machine fails dramatically, spewing fire and smoke out of the sides as its many tubes burn away. Barbarella is too much woman for the likes of his technology.
I’m going to disregard this as a device for torture and murder, since I wouldn’t want to improve such a thing, and that whole premise is kind of silly anyway. Instead I’ll regard it as a BDSM sexual device, in which Durand-Durand is a dominant, seeking to push the limits of an (informed, consensual) submissive using this machine. It’s possible that part of the scene is a demonstration of prowess on a standardized, difficult-to-use instrument. If so, then a critique wouldn’t matter. But if not… Since the keys don’t move, the only variables he’s controlling are touch duration and the vertical placement of his fingers. (Horizontal position on each key seems really unlikely to matter.) I’d want to provide the player some haptic feedback to detect and correct slipping finger placement, letting him or her maintain attention on the sub who is, after all, the point.
After landing on Tau Ceti, Barbarella is captured by feral children, who tie her to a set of poles and turn a set of robot dolls on her.
The dolls exhibit some crude intelligence. They walk on their own toward Barbarella. Stomoxys (or is it Glossina? It’s tough to tell with these two.) twists a knob on a control panel of four similar, unlabeled knobs, and the dolls’ piranha-toothed mouths begin to crank open and slam shut. They then attack Barbarella, clinging and biting her legs and arms.
At first the dials seem a strange choice for a killing device, but then you realize that this isn’t meant to be efficient. Rather, the choice of dials for controls fits the children’s awful goal. Dials are best for setting variables within a range of values. The dolls must have a few variables, like walking speed, biting force, and biting speed, that the horrible children will want to play with as they entertain themselves with this torture.
And of course to “improve” this interface you might want to label the dials so a new user would know what does what, but who would really want to make torture toys more usable?