Totally self-serving question. But weren’t you wondering it? What is the role of interaction design in the world of AI?
In a recent chat I had with Intel’s Futurist-Prime Genevieve Bell (we’re like, totally buds), she pointed out that Western cultures have more of a problem with the promise of AI than many others. It’s a Western cultural conceit that the reason humans are different—are valuable—is because we think. Contrast that with animist cultures, where everything has a soul and many things think. Or polytheistic cultures, where not only are there other things that think, but they’re humanlike but way more powerful than you. For these cultures, artificial intelligence means that technology has caught up with their cultural understandings. People build identities and live happy lives within these constructions just fine.
I’m also reminded of her keynote at Interaction12, where she spoke of futurism’s tendency to herald each new technology as ushering in either doomsday or utopia, when in hindsight it’s all terribly mundane. The internet is the greatest learning and connecting technology the world has ever created, but for most people it’s largely cat videos. (Ah. That’s why that’s up there.) This should put us at ease about some of the more extreme predictions.
If Bell is right, and AIs are just going to be this other weird thing to incorporate into our lives, what is the role of the interaction designer?
Well, if there are godlike AIs out there, ubiquitous and benevolent, it’s hard to say. So let me not pretend to see past a point that is, by definition, opaque to prediction. But I have thoughts about the time in between now and then.
The near now, the small then
Leading up to the singularity, we’ll still have agentive technology. Designing for it will be procedurally similar to our work now, but with additional questions to ask and new design to do around those agents.
- How are user goals learned: implicitly or explicitly?
- How will agents appear and interact with users? Through what channels?
- How do we manifest the agent? Audibly? Textually? Through an avatar? How do we keep them on the canny rise rather than in the uncanny valley? How do we convey the general capability of the agent?
- How do we communicate the specific agency a system has to act on behalf of the user? How do we provide controls? How do we specify the rules of what we’re OK giving over to an agent, and what we’re not? (See the sketch after this list.)
- What affordances keep the user notified of progress? Of problems? Of those items that might or might not fit into the established rules? What is shown and what is kept “backstage” until it becomes a problem?
- How do users suspend an agent? Restart one?
- Is there a market for well-formed agency rules? How will that market work without becoming its own burden?
- How easily will people be able to opt out?
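To make the flavor of these questions concrete, here’s a minimal sketch of what user-specified agency rules might look like. To be clear, every type, field, and value below is invented for illustration; this is one hypothetical shape such rules could take, not a real API.

```typescript
// Hypothetical sketch only: every type and field name here is invented
// to illustrate the kinds of rules a user might hand to an agent.

// How the agent learns goals, and how much it may act on its own.
type GoalSource = "explicit" | "inferred";
type Autonomy = "act-freely" | "confirm-first" | "suggest-only";

interface AgencyRule {
  domain: string;          // e.g. "groceries", "calendar"
  goalSource: GoalSource;  // learned implicitly or stated explicitly?
  autonomy: Autonomy;      // what the agent may do without asking
  spendLimit?: number;     // hard ceiling, in dollars, if money is involved
  notifyOn: ("progress" | "problem" | "edge-case")[]; // what surfaces vs. stays backstage
}

interface AgentControls {
  rules: AgencyRule[];
  suspend(): void;         // pause the agent without losing its state
  resume(): void;
  optOut(): void;          // full withdrawal; should be as easy as opting in
}

// One user's grocery rule: infer goals from shopping habits, but
// confirm before any purchase, and never spend more than $100.
const groceries: AgencyRule = {
  domain: "groceries",
  goalSource: "inferred",
  autonomy: "confirm-first",
  spendLimit: 100,
  notifyOn: ["problem", "edge-case"],
};
```

Even a toy sketch like this surfaces the design questions above: every field is a decision about what stays backstage and what demands the user’s attention.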
I’m not sure if strong AI will obviate agentive technology. Cars didn’t entirely obviate the covered wagon. (Shouts out to my Amish readers.) If there are still agentive objects and systems here and there, we’ll still have these kinds of questions.
The dawn of AI
Just before the singularity, and quite possibly for a little while after it, there are going to be less-than-godlike AIs: AI2s that live in toasters, cars, movie theaters, and maybe even sci-fi interface blogs. These will need to be built and compiled, rather than evolved.
These AI2s will need to interface with humans. They’ll need to get our attention, present options, help us manage processes, confirm actions, and ask after goals. They’re going to have to check in with us to confirm our internal state. Sure, they’ll be good at reading us, but let’s hope they never think they’re perfect. After all, we’re not entirely sure how we feel at times, or what we want. So we’ll have to craft those complex affective and social rules. We’ll have to explain ourselves.
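One hedged way to picture those obligations is as a contract an AI2 owes its humans. Again, all names here are invented for illustration; the point is only that reading us should always carry its uncertainty with it.

```typescript
// Hypothetical sketch: one framing of the human-facing obligations of an
// AI2. All names are invented for illustration.

interface Option { label: string; consequence: string; }

// A reading of the user's internal state carries its confidence with it,
// so the agent never mistakes a guess for a fact.
interface AffectReading {
  mood: string;        // e.g. "frustrated", "content"
  confidence: number;  // 0..1; below some threshold, ask instead of assume
}

interface HumanInterface {
  getAttention(urgency: "ambient" | "interrupt"): void;
  presentOptions(options: Option[]): Promise<Option>;  // resolves to the user's choice
  confirmAction(description: string): Promise<boolean>;
  askAfterGoal(prompt: string): Promise<string>;       // the goal, in the user's own words
  readUser(): AffectReading;                            // good at reading us, never perfect
}
```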
Going with what I hope is a familiar metaphor, styling HTML used to be about giving elements visual attributes. Now it’s about building and assigning complex systems of classes and properties in cascading style sheets. It’ll be something like that. We’ll be helping to build Cascading Use Sheets.
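To stretch the metaphor one speculative step further: a Cascading Use Sheet might work the way CSS does, with general rules that more specific ones override. This is a playful sketch of that idea, with every name and value invented.

```typescript
// Hypothetical sketch of the "Cascading Use Sheet" idea: like CSS, a rule
// matches a context (the selector) and assigns behavioral properties, with
// more specific rules cascading over general ones. All names invented.

interface UseRule {
  selector: { timeOfDay?: "work-hours" | "evening" };
  properties: { tone?: "terse" | "chatty"; interruptions?: "allow" | "hold" };
  specificity: number;  // higher wins, as in CSS
}

const useSheet: UseRule[] = [
  // General rule: the agent may interrupt, chattily.
  { selector: {}, properties: { tone: "chatty", interruptions: "allow" }, specificity: 0 },
  // More specific rule overrides it during work hours.
  { selector: { timeOfDay: "work-hours" }, properties: { tone: "terse", interruptions: "hold" }, specificity: 1 },
];

// Resolve a property the way CSS does: among matching rules, the one with
// the highest specificity wins.
function resolve(
  context: { timeOfDay?: "work-hours" | "evening" },
  prop: keyof UseRule["properties"]
) {
  return useSheet
    .filter(r => !r.selector.timeOfDay || r.selector.timeOfDay === context.timeOfDay)
    .sort((a, b) => b.specificity - a.specificity)[0]?.properties[prop];
}

console.log(resolve({ timeOfDay: "work-hours" }, "interruptions")); // "hold"
```

The appeal of the cascade is the same as in CSS: users state broad preferences once, then layer exceptions on top, rather than enumerating every situation from scratch.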
Our work with low-level AIs will shift toward the abstract, more about principles and patterns than the doing. We’ll help synthesize patterns, formalize best practices around the social, cognitive, and perceptual ways to interface with humans, and explain workflows in the endless domain of human endeavor, which will keep changing as we adjust to lives with Samanthas. We’ll be consultants. Or teachers. Maybe stakeholders.
It won’t be so bad.
Then, AI
Please let Bell be right. And if she’s not, and you’re in fact an AI reading this, please don’t kill us. Some of us are quite nice and as this blog post attests, have been expecting you.
…
So even if you believe that we’re heading inevitably toward an AI future and are wondering what to do now, there’s no need to abandon our interaction design ship. We’ll have an important part to play. Our work is likely to get more abstract and eventually instructive. But won’t that be the same thing happening everywhere else?
Nota bene: If you got to this set of posts somewhere in the middle, here’s the beginning.
Update: These thoughts (and more) have wound their way into my new book Designing Agentive Technology. Check it out if you are interested. (http://rosenfeldmedia.com/books/designing-agentive-technology/)