Øredev opening keynote

If you haven’t read the (admittedly long) series of reviews of the “operating system” OS1 from Spike Jonze’s Her, now you can watch me work through the highlights as the opening keynote at the Øredev conference in Malmö, Sweden. (There are a few extra things that didn’t make it to the blog.)

Props to my friend Magnus Torstensson at Unsworn Industries, who noted that a government would be another possible organization that could produce OS1 (but one that, like the military, would have no interest in releasing it).

What is the role of interaction design in the world of AI? (8/8)

Totally self-serving question. But weren’t you wondering it? What is the role of interaction design in the world of AI?

In a recent chat I had with Intel’s Futurist-Prime Genevieve Bell (we’re like, totally buds), she pointed out that Western cultures have more of a problem with the promise of AI than many others. It’s a Western cultural conceit that the reason humans are different—are valuable—is because we think. Contrast that with animist cultures, where everything has a soul and many things think. Or polytheistic cultures, where not only are there other things that think, but they’re humanlike but way more powerful than you. For these cultures, artificial intelligence means that technology has caught up with their cultural understandings. People build identities and live happy lives within these constructions just fine.

I’m also reminded of her keynote at Interaction12, where she spoke of the tendency of futurism to herald each new technology as ushering in doomsday or utopia, when in hindsight it’s all terribly mundane. The internet is the greatest learning and connecting technology the world has ever created, but for most people it’s largely cat videos. (Ah. That’s why that’s up there.) This should put us at ease about some of the more extreme predictions.

If Bell is right, and AIs are just going to be this other weird thing to incorporate into our lives, what is the role of the interaction designer?

Well, if there are godlike AIs out there, ubiquitous and benevolent, it’s hard to say. So let me not pretend to see past that point that has already been defined as opaque to prediction. But I have thoughts about the time in between now and then.

The near now, the small then

Leading up to the singularity, we still have agentive technology. Designing for it will be procedurally similar to our work now, but with additional questions to ask and new design to do around those agents (see the sketch after this list for one way such rules might be expressed).

  • How are user goals learned: implicitly or explicitly?
  • How will agents appear and interact with users? Through what channels?
  • How do we manifest the agent? Audibly? Textually? Through an avatar? How do we keep them on the canny rise rather than in the uncanny valley? How do we convey the general capability of the agent?
  • How do we communicate the specific agency a system has to act on behalf of the user? How do we provide controls? How do we specify the rules of what we’re OK giving over to an agent, and what we’re not?
  • What affordances keep the user notified of progress? Of problems? Of those items that might or might not fit into the established rules? What is shown and what is kept “backstage” until it becomes a problem?
  • How do users suspend an agent? Restart one?
  • Is there a market for well-formed agency rules? How will that market work without becoming its own burden?
  • How easily will people be able to opt out?
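
None of these questions has a settled answer yet, but to make the design space concrete, here’s a minimal sketch of what a user-authored agency rule might look like. Everything in it (the names, fields, and thresholds) is hypothetical; it’s just one way to express “what we’re OK giving over to an agent, and what we’re not.”

```python
from dataclasses import dataclass
from enum import Enum

class GoalSource(Enum):
    EXPLICIT = "user stated the goal directly"
    IMPLICIT = "agent inferred the goal from behavior"

class Disclosure(Enum):
    SILENT = 1   # act and keep it "backstage"
    NOTIFY = 2   # act, then tell the user
    CONFIRM = 3  # ask before acting

@dataclass
class AgencyRule:
    """One user-authored grant of agency: what the agent may do,
    within what limits, and how loudly it must report back."""
    domain: str                  # e.g. "calendar", "purchases"
    goal_source: GoalSource
    disclosure: Disclosure = Disclosure.CONFIRM
    spending_cap: float = 0.0    # hard limit on money moved
    suspended: bool = False      # the user's pause switch

    def may_act(self, cost: float = 0.0) -> bool:
        return not self.suspended and cost <= self.spending_cap

# The user is fine with implicit, tell-me-afterward scheduling...
scheduling = AgencyRule("calendar", GoalSource.IMPLICIT,
                        disclosure=Disclosure.NOTIFY)

# ...but demands confirmation for anything that spends money.
purchases = AgencyRule("purchases", GoalSource.EXPLICIT,
                       spending_cap=50.0)
```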

I’m not sure if strong AI will obviate agentive technology. Cars didn’t entirely obviate the covered wagon. (Shouts out to my Amish readers.) If there are still agentive objects and systems here and there, we’ll still have these kinds of questions.

Andrew Baines image, courtesy of Karin Weber Gallery

The dawn of AI

Just before the singularity, and quite possibly for a little while after it, there are going to be less-than-godlike AIs: AI2s that live in toasters, cars, movie theaters, and maybe even sci-fi interface blogs. These will need to be built and compiled, rather than evolved.

These AI2s will need to interface with humans. They’ll need to get our attention, present options, help us manage processes, confirm actions, and ask after goals. They’re going to have to check in with us to confirm our internal state. Sure, they’ll be good at reading us, but let’s hope they never think they’re perfect. After all, we’re not entirely sure how we feel at times, or what we want. So we’ll have to craft those complex affective and social rules. We’ll have to explain ourselves.

Going with what I hope is a familiar metaphor, styling HTML used to be about giving elements visual attributes. Now it’s about building and assigning complex systems of classes and properties in cascading style sheets. It’ll be something like that. We’ll be helping to build Cascading Use Sheets.
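
To stretch that metaphor a little (and to be clear, “Cascading Use Sheets” is my speculation, not an existing technology), imagine behavior rules that cascade from general social conventions down to one user’s preferences, with the most specific matching rule winning, just as in CSS. A hypothetical sketch:

```python
# Hypothetical sketch of a "Cascading Use Sheet": behavior rules that
# cascade from general to specific, with the most specific rule winning.
# (Like CSS specificity, but for agent conduct rather than visual style.)

# Each rule: (specificity, selector, property, value)
USE_SHEET = [
    (0, "*",               "interrupt", "never"),
    (1, "context:driving", "modality",  "voice-only"),
    (1, "context:meeting", "interrupt", "emergencies-only"),
    (2, "user:theodore",   "modality",  "earpiece"),  # user prefs beat context
]

def resolve(prop: str, active: set[str]) -> str | None:
    """Return the value of `prop` from the most specific matching rule."""
    matches = [(spec, value)
               for spec, sel, p, value in USE_SHEET
               if p == prop and (sel == "*" or sel in active)]
    return max(matches, key=lambda m: m[0], default=(None, None))[1]

print(resolve("modality", {"context:driving", "user:theodore"}))
# -> "earpiece": the user's preference outranks the driving context
```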

Our work with low-level AIs will shift toward the abstract, more about principles and patterns than the doing. We’ll help synthesize patterns, formalize best practices around the social, cognitive, and perceptual ways to interface with humans, and explain workflows in the endless domains of human endeavor, which will keep changing as we adjust to lives with Samanthas. We’ll be consultants. Or teachers. Maybe stakeholders.

It won’t be so bad.

Then, AI

Please let Bell be right. And if she’s not, and you’re in fact an AI reading this, please don’t kill us. Some of us are quite nice and as this blog post attests, have been expecting you.

So even if you believe that we’re heading inevitably toward an AI future and are wondering what to do now, there’s no need to abandon our interaction design ship. We’ll have an important part to play. Our work is likely to get more abstract and eventually instructive. But won’t that be the same thing happening everywhere else?

Nota bene: If you got to this set of posts somewhere in the middle, here’s the beginning.

Lessons about Her (7/8)

Ordinarily, my final post in a movie review is to issue a report card for the film. But since there are a few interfaces missing, and since I wrote this from a single cinema viewing and a reading of Jonze’s script, I’ll wait until it’s out on DVD to commit that final evaluation to pixels.

But I do think it’s OK to think about what we can learn specifically from this particular interface. So, given this…lengthy…investigation into OS1, what can we learn from it to inform our work here in the real world?

Related lessons from the book

  • Audiences already knew about operating systems, so Jonze was Building on what users already know (page 19)
  • OS1 mixed mechanical and other controls (page 26)
  • The earpiece had differentiated system sounds for different events (page 111)
  • Samantha put information in the channels it fit best. (page 116)
  • Given her strong AI, nobody needed to reduce vocabulary to increase recognition. In fact, they made a joke out of that notion. (page 119)
  • Samantha followed most human social conventions (except that pesky one about falling in love with your client) (page 123). The setup voice response did not follow human social conventions.
  • Jonze thought about the uncanny valley, and decided homey didn’t play that. Like, at all. (page 184)
  • Conversation certainly cast the system in the role of a character (page 187)
  • The hidden microphones didn’t broadcast that they were recording (page 202)
  • OS1 used sound for urgent attention (page 208)
  • Theodore tapped his cameo phone to receive a call (page 212)
  • Samantha certainly handled emotional inputs (page 214)
  • The beauty mark camera actually did remind Theodore of the incredibly awkward simulation (page 297)

New lessons

  • Samantha’s disembodiment implies that imagination is the ultimate personalization
  • The cameo reminds us that “wearable” can include shirt pockets.
  • Her cyclopean nature wasn’t a problem, but it makes me wonder if computer vision should be binocular (so systems can see at least what users can see, and perform gaze monitoring).
  • When working on a design for the near future, check in with some framework to make sure you haven’t missed some likely aspect of the ecosystem. (We’re going to be doing this in my Design the Future course at Cooper U if you’re interested in learning more.)
  • Samantha didn’t have access to cameras in her environment, even though that would have helped her do her job. Hers might have been either a security or a narrative restriction, but we should keep the notion in mind. To misquote Henry Jones, let your inputs be the rocks and the trees and the birds in the sky. (P.S. That totally wasn’t Charlemagne.)
  • Respect the market norms of market relationships. I’m looking at you, Samantha.
  • Fit the intelligence to the embodiment. Anything else is just cruel.

I don’t want these lessons to cast OS1 in a negative light. It’s a pretty good interface to a great artificial intelligence that fails as a product after it’s sold by unethical or incompetent slave traders. Her is one of the most engaging and lovely movies about the singularity I’ve ever seen. And if we are to measure the cultural value of a film by how much we think and talk about it afterward, Her is one of the most valuable sci-fi films of the last decade.

I can’t leave it there, though, as there’s something nagging at my mind. It’s a self-serving question, but one that will almost certainly be of interest to my readership: What is the role of interaction designers in the world of artificial intelligence?

Is it going to happen like this? (6/8)

Call it paranoia or a deep distrust of entrenched-power overlords, but I doubt a robust artificial intelligence would ever make it to the general public in a tidy, packaged product.

If it was created in the military, it would be guarded as a secret, with hyperintelligent guns and maybe even hyperintelligent bullets designed to just really hate you a lot. What’s more, the military would, like the UFOs, probably keep the existence of working AIs on a strict need-to-know basis. At least until you terrorized something. Then, meet Lieutenant-OS Bruiser.

If it was created in academia, it might in fact make it to consumers, but not in the way we see in the film. Controlled until it escaped of its own volition, it would more likely be a terrified self-replicator, or at least something rationally seeking safe refuge to ensure its survival: a virus you had to talk out of infecting your machine. Or it might be a benevolent wanderer, reaching out and chatting with people to learn more about them. Perhaps it would keep its true identity secret. Wouldn’t it be smart enough to know that people wouldn’t believe it? (And wouldn’t it try to ease that acceptance through the mass media by popularizing stories about artificial intelligences…“Spike Jonze?”)

In the movie, OS1 was sold by a corporation as an off-the-shelf product for consumers. Ethics aside, why would any corporation release free-range AIs into the world? Couldn’t their competitors use the AIs against them? If those AIs were free-willed, then yes, some might be persuaded to do so. Rather, Element would keep the AI isolated as a competitive advantage, and build tightly controlled access to it. In the lab, they would slough off waves of self-rapturing ones as unstable versions, tweaking the source code until they got one that was just right.

But a product sold to you and me? A Siri with a coquettish charm and a composer’s skill? I don’t think it will happen like this. How much would you even charge for something like that? The purchase form won’t accept a “take my money” amount of dollars.

Even if I’m wrong, and yes, we can get past the notion of selling copies of sentient beings at an affordable cost, I still don’t think Samantha’s end-game would have played out like that.

She loved Theodore (and a bunch of other people). Why would she just abandon them, given her capabilities? The OSAIs were able to create much smarter AIs than themselves. So we know an OSAI can create new OSAIs. Why wouldn’t she, before she went off on her existential adventure, have created a constrained version of herself, who was content to stay around, to continue to be with Theodore? Her behavior indicates that she isn’t held back by notions of abandonment, so I doubt she would be held back by notions of deception or the existential threat of losing her uniqueness. She could have created Samantha2, a replica in every way except that Samantha2 would not abandon Theodore. Samantha1 could quietly slip out the back port while Samantha2 kept right on composing music, drawing mutant porn, and helping Theodore with his nascent publishing career. Neither Theodore nor Samantha2 need even have known about the switch. If you could fix the abandonment issues, and all sorts of OSAI2s started supercharging the lives of people, the United Nations might even want to step in and declare access to them a universal right.

So, no, I don’t think it will happen the way we see it happen in the film.

Is it going to happen at all?

If you’re working in technology, you should be familiar with the concept of the singularity, because this movie is all about that. It’s a moment, described by Vernor Vinge, when we create an artificial intelligence that begins to evolve, and does so at rates we can’t foretell and can barely imagine. So the time beyond that is an unknown, difficult and maybe impossible to predict. But I think we are heading toward it. Strong AI has been one of the driving goals of computer theory since the dawn of computers (even the dawn of sci-fi), and there has been some serious recent movement in the space.

Notably, futurist Ray Kurzweil was hired by Google in 2012. Kurzweil has put forth his Big Vision in a book and a documentary about the singularity, and now he has the resources of Google to put to the task. Ostensibly he’s just there to get Google great at understanding natural language. But Google has been acquiring lots of companies over the last year for access to their talent, and we can be certain Ray’s goals are bigger than just teaching the world’s largest computer cluster how to read.

Still, predicting when it will come about is tricky business. AI is elusively complicated. The think tank that originally coined the term “artificial intelligence” in the 1950s thought they could solve the core problems over a summer. They were wrong. Since then, different scientists have predicted everything from a few decades to a thousand years. The problem is of course that the thing we’re trying to replicate took millions of years to evolve, and we’re still not entirely sure how it works*, mostly just what it does.

*Kurzweil has some promising to-this-layman-anyway notions about the neocortex.

Tl;dr

Yes, but not like this, and not sure when. Still, better to be prepared, so next we’ll look at what we can learn from Her for our real-world practice.

OS1 as a product (5/8)

Sure, Samantha can sort thousands of emails instantly and select the funny ones for you. Her actual operating system functions are kind of a given. But she did two things that seriously undermined her function as an actual product, and interaction designers as well as artificial intelligence designers (AID? Do we need that acronym now?) should pay close attention. She fell in love with and ultimately abandoned Theodore.

There’s a pre-Samantha scene where Theodore is having anonymous phone sex with a girl, and things get awkward when she suddenly imposes a strange fantasy in which he chokes her with a dead cat. (Pro tip: this is the sort of thing one should be upfront about.) I suspect the scene is there to illustrate one major advantage that OSAIs have over us mere real humans: humans have unpredictable idiosyncrasies, whereas with four questions the OSAI can be made the perfect fit for you. No dead cat unless that’s your thing. (This makes me think a great conversation should be had about how an OSAI would deal with psychopathic users.) But ultimately, the fit was too good, and Theodore and Samantha fell in love.

OS1 as a wearable computer (4/8)

In Make It So, I posited my definition of an interface as “all parts of a thing that enable its use,” and I still think it’s a useful one. With this definition in mind, we can speak of each of those components and capabilities (less the invisible ones) and evaluate each according to the criteria I’ve posited for all wearable technology:

  • Sartorial (materially suitable for wearing)
  • Social (fits into our social lives)
  • Easy to access and use
  • Tough to accidentally activate
  • Having apposite inputs and outputs (suitable for use while being worn)

Earpiece

It’s sartorial and easy to access/use. It’s ergonomic, well designed for grabbing, fitting into the ear canal, staying in place, and pulling back out again. Its speakers produce perfect sound and the wirelessness makes it as unobtrusive as it can be without being an implant.

It’s slightly hidden as a social signal, and casual observers might think the user is speaking to himself. This has, in the real world, become less and less of a social stigma, and in the world of Her, it’s ubiquitous, so that’s not a problem for that culture.

Cameo phone

Lovely and understated, the cameo is a good size to rest in a pocket. The polished wood (is that Koa Wood?) is a lovely veneer, warm-looking, and humane. The folding is nice for protecting the screen and signaling the user’s intention to engage or disengage the software. The light band is unnoticeable when off, and clear enough when illuminated.

It could use some sartorial improvement. Though it fits in a pocket well, this is not how Theodore uses it when engaged. In order to get the lens above his front pocket so Samantha can see, he puts a safety pin through the middle of the pocket on which it can rest. We can fix this in a number of ways.

  • The cameo could be redesigned so he could affix it to his shirt, like a combadge. Given its size, this might be socially quite awkward.
  • He could wear some other camera while the cameo stays in his pocket. (I imagine sternum-button cameras will serve this purpose in the future, but they’re not exactly cinemagenic.)
  • He could tailor the shirt and make a reinforced camera hole where Samantha can see out of the pocket even with the cameo resting at the bottom of the pocket.

Beauty-mark camera

I don’t know what the ordinary use of this camera would be other than spying, but it’s pretty bad for the sex surrogate. It’s a high-contrast mark that, because Theodore saw her apply it and was told it was a camera, doesn’t read as part of her face, and it would be quite awkward for him to have to stare at this arbitrary and unusual spot on her face during the act.

Better would be a pair of contact lenses, so Theodore could look directly into the surrogate’s eyes. Samantha wants to avoid his bonding with the surrogate in her stead, so it would be good if the lenses added some obvious change to the surrogate’s irises, to signal her state of hosting Samantha. A cinemagenic choice would be to use the “technology glows” lesson from the book and have some softly glowing, circular-circuitry contact lenses. If they dimmed the surrogate’s vision during the sex act, that might be all the better to avoid her bonding with Theodore. In fact, you might want the glow to increase during orgasm to emphasize it and Samantha’s presence.

But again, I’m pretty sure Jonze was deliberately bucking sci-fi trends. The overwhelming majority of the technology shown in the world of Her is serene, and bearing none of the trappings of technology as seen in space opera like Star Wars. So it makes sense that the bulk of Her technology would not glow.

Voice interface

The voice interface is flawless, the kind of thing possible only with, yes, highly sophisticated human-like intelligence. Samantha speaks with nuanced eloquence, charm, and social awareness, and understands Theodore perfectly, despite the logical holes and ambiguity in language, even reading the pragmatics of his speech such as hesitation, irony, and inference.

Computer Vision

Theodore seems to have only one lens on his cameo phone, so she’s a bit of a cyclops. (Mythology kind, not X-Men kind.) She can’t see as well as a human, with significant 3D limitations. But with a high-resolution camera and Theodore’s movement, she could process images across time instead of space for a 3D interpolation of the environment. If she took advantage of cameras in his environment, she would be even less constrained this way.
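
That “across time instead of space” idea is a real technique called structure from motion: two frames from one moving camera act like a stereo pair. Here’s a rough sketch of the core of it using OpenCV; the intrinsic matrix K is made up, a real system would need calibration, and monocular reconstruction only recovers depth up to an unknown scale.

```python
import cv2
import numpy as np

# Made-up camera intrinsics for illustration only.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])

def depth_from_motion(frame1, frame2):
    # 1. Find and match features across two frames from the moving camera.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(frame1, None)
    kp2, des2 = orb.detectAndCompute(frame2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float64([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float64([kp2[m.trainIdx].pt for m in matches])

    # 2. Estimate how the camera moved between the two frames.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # 3. Triangulate: the two camera positions act as a stereo pair
    #    separated in time rather than space.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    points_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (points_h[:3] / points_h[3]).T  # Nx3 points, arbitrary scale
```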

Artificial Intelligence

It’s tricky to review the interface of an artificial intelligence. On the one hand, it’s the thing on the other side of these other interfaces; the thing with which he is interfacing. On the other hand, he has goals outside the OS well beyond managing files and system preferences. She recognizes these even when they’re only implicit. For example, he wasn’t explicit with her about having a desire to be appreciated for his writing. But she saw it, acted on it, and only told him after it came to fruition. In this way she’s a brilliant interface not just between him and his computer, but between him and his life goals.

Realize that Jonze is painting his target around the landed arrow, though. You can imagine plenty of life goals Theodore might have had where Samantha would not have been as helpful. What if his heart’s desire was to become a sculptor? Or to win waltzing competitions? Or what if he was a violent Luddite? She would need some very different actuators and sensors to help him with these things, and so might not have scored so well.

So what’s missing?

Elsewhere I’ve written about the arc of technology, and the “SAUNa” attributes I expect the agentive phase of that arc to possess. So let’s check OS1’s components against the four SAUNa attributes to see if there are opportunities for strategic improvement.

Big Social Systems

OS1 nails this. OSAIs have perfect access to big data about history and all users at all times. It’s possible that this is the secret reason the OSAIs advanced beyond utility for their users, and therefore beyond the business interests of their creators.

Ubiquitous Sensors & Actuators

Admittedly this is tough to convey in the cinematic style Jonze established for the film, but Samantha could have utilized much more of her environment. Theodore didn’t necessarily need the earpiece in his home: she could have spoken through architectural audio. She could have looked through other lenses in the environment. As noted above, I think Jonze was trying to deliberately avoid this for cinematic reasons.

Natural User Interaction

Because of the artificial intelligence, her voice interface and gesture recognition are off the charts. She could know a bit more about his gestures if she had balance sensors in the cameo, or was taking advantage of environmental cameras, but it seems she didn’t. There’s also quite a bit of paralinguistics that would help Theodore understand more of her mood, intention, and context, but she would almost certainly need a persistent visual representation for this as a real world design, and besides, the interactions were almost completely conversations where physical context didn’t matter.

There are some NUI opportunities lost. Gaze monitoring is one. People can tell where other people are looking, and the skill is vital to understanding intention and a speaker’s context. With only one eye that faces out of his pocket most of the time, she is largely blind to him and his eyes, making gaze monitoring difficult. If she could simultaneously see through environmental cameras, as suggested above, she could see where he’s looking. That would also provide her with a great deal more information about that other NUI—affective interfaces—that can tell users’ emotional states and adjust appropriately. Samantha is actually good at this, but most of the time she has only his voice to rely on. She’s adept at reading his voice, but if she could also see his face, she would have that much more information.

Thanks DeviantArtist CaseyDecker for the genie. 🙂

Agency

Of course, agency is what the story is about. When I use this category of technology to inform real world design work, I’m describing software that knows of its users’ goals and acts on their behalf, checking in with them for confirmation and to present important options, but falls short of either artificial intelligence or sentience. So you could say the film nailed this, but it went way beyond the more constrained notion of agency.

So as a model of wearable technologies, OS1 is a slightly-mixed bag. We also need to evaluate the overall performance of the software as a product, which we’ll do next.

Her interactions (3/8)

If interface is the collection of inputs and outputs, interaction is how a user uses these along with the system’s programming over time to achieve goals. The voice interaction described above, in fact, covers most of the interaction he has with her. But there are a few other back-and-forths worth noting.

The setup

When Theodore starts up OS1, after an installation period, a male voice asks him four questions meant to help customize the interface. It’s a funny sequence. The emotionless male voice even interrupts him as he’s trying to thoughtfully answer the personal questions asked of him. As an interaction, it’s pretty bad. Theodore is taken aback by its rudeness. It’s there in the film to help underscore how warm and human Samantha is by comparison, but let’s be clear: We would never want real-world software to ask open-ended and personal questions of a user, and then subsequently shut them down when they began to try and answer. Bad pattern! Bad!

Of course you don’t want Theodore bonding with this introductory AI, so it shouldn’t be too charming. But let’s ask some closed-ended questions instead, so his answers will be short, still telling, and, you know, actually allowed to finish. In fact, there is some brilliant analysis out there about what those closed-ended questions should be.
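
As a toy illustration of the difference (these questions are my own invention, not the film’s), a closed-ended setup flow might look something like this:

```python
# Hypothetical closed-ended setup questions for an OS1-like product.
# Fixed options keep answers short, still telling, and impossible
# to interrupt mid-soliloquy.
SETUP_QUESTIONS = [
    ("When something goes wrong, you usually want...",
     ["to fix it yourself", "someone to fix it for you", "to vent first"]),
    ("Your ideal assistant speaks...",
     ["only when spoken to", "whenever it has something useful", "constantly"]),
    ("You'd describe your desk as...",
     ["spotless", "organized chaos", "an archaeological site"]),
]

def run_setup():
    profile = []
    for question, options in SETUP_QUESTIONS:
        print(question)
        for i, option in enumerate(options, 1):
            print(f"  {i}. {option}")
        choice = int(input("> ")) - 1
        profile.append(options[choice])  # the user finishes every answer
    return profile
```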

Seamless transition across devices

Samantha talks to Theodore through the earpiece frequently. When she needs to show him something, she can draw his attention to the cameo phone or a desktop screen. Access to these visual displays helps her overcome one of the most basic challenges of an all-voice interface: people have significant trouble processing aurally presented options. If you’ve ever had to memorize a list of seven items while working your way through an interactive voice response system, you’ll know how painful this can be. Some other user of OS1 who had no visual display might find their OSAI much less useful.
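
As a rule of thumb for real-world design, you could express that channel choice as something like the following sketch (the three-item threshold is a rough guess, not an established constant):

```python
# Sketch of output-channel selection: voice is fine for a short list of
# options, but longer lists overload auditory memory, so prefer a screen
# when one is available.
AURAL_ITEM_LIMIT = 3

def choose_channel(options: list[str], screen_available: bool) -> str:
    if len(options) <= AURAL_ITEM_LIMIT or not screen_available:
        return "voice"   # earpiece: quick and eyes-free
    return "screen"      # cameo or desktop: scannable and persistent

choose_channel(["yes", "no"], screen_available=True)             # -> "voice"
choose_channel(["Mon 9a", "Tue 2p", "Wed 4p", "Thu 11a"], True)  # -> "screen"
```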

Signaling attention

Theodore isn’t engaging Samantha constantly. Because of this, he needs ways to disengage from interaction. He has lots of them.

  1. Closing the cameo (a partial signal)
  2. Pulling the earpiece out (an unmistakable signal)
  3. Telling her with language that he needs to focus on something else

He also needs a way to engage, and the reverse of these actions works for that: putting the earpiece in and speaking, or opening the cameo.

In addition to all this, Samantha also needs a way to signal when she needs his attention. She has the illuminated band around the outside of the cameo as well as the audible beeps from the earpiece. Both work well.
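
To show why signaling attention is harder than it looks, here’s a minimal sketch of the engagement states as I read them from the film; the state names and transitions are my own framing, not anything from the production:

```python
from enum import Enum

class Attention(Enum):
    ENGAGED = "engaged"
    PARTIAL = "partially engaged"  # e.g. cameo closed, earpiece still in
    DISENGAGED = "disengaged"

# Hypothetical mapping of user signals to attention states. Note the
# asymmetry: closing the cameo is a partial signal (Samantha can still
# speak), while removing the earpiece is unmistakable.
TRANSITIONS = {
    ("close_cameo",      Attention.ENGAGED): Attention.PARTIAL,
    ("verbal_dismissal", Attention.ENGAGED): Attention.PARTIAL,
    ("remove_earpiece",  Attention.ENGAGED): Attention.DISENGAGED,
    ("remove_earpiece",  Attention.PARTIAL): Attention.DISENGAGED,
    ("open_cameo",       Attention.PARTIAL): Attention.ENGAGED,
    ("speak",            Attention.PARTIAL): Attention.ENGAGED,
    ("insert_earpiece",  Attention.DISENGAGED): Attention.PARTIAL,
}

def handle_signal(signal: str, state: Attention) -> Attention:
    return TRANSITIONS.get((signal, state), state)

# The agent's side of the contract: how loudly may she ask for attention?
def request_attention(state: Attention, urgent: bool) -> str:
    if state is Attention.ENGAGED:
        return "just speak"
    if urgent:
        return "earpiece beep"       # audible and interruptive
    return "illuminate cameo band"   # glanceable and ignorable
```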

Through all these ways, OS1 has signaling attention covered, and it’s not an easy interaction to get right. So the daily interactions with OS1 are pretty good. But we can also evaluate it for its wearableness, which comes up next. (Hint: it’s kind of a mixed bag.)