Untold AI: The survey

What AI Stories Aren’t We Telling (That We Should Be)?

HAL

Last fall I was invited with some other spectacular people to participate in a retreat about AI, happening at the Juvet Landscape Hotel in Ålstad, Norway. (A breathtaking opportunity, and thematically a perfect setting since it was the shooting location for Ex Machina. Thanks to Andy Budd for the whole idea, as well as Ellen de Vries, James Gilyead, and the team at Clearleft who helped organize.) The event was structured like an unconference, so participants could propose sessions and if anyone was interested, join up. One of the workshops I proposed was called “AI Narratives” and it sought to answer the question “What AI Stories Aren’t We Telling (That We Should Be)?” So, why this topic?

Sci-fi, my reasoning goes, plays an informal and largely unacknowledged role in setting public expectations and understanding of technology in general and AI in particular. That, in turn, affects public attitudes, conversations, behaviors at work, and votes. If we found that sci-fi was telling the public misleading stories over and over, we should make a giant call for the sci-fi-creating community to consider telling new stories. It’s not that we want to change sci-fi from entertainment into propaganda, but rather to take its role as informal opinion-shaper more seriously.

Juvet sign

In the workshop we were working with a very short timeframe; we managed to do good work, but not get very far, even after doubling our original time slot. I have taken time since to extend that work into this series of posts for scifiinterfaces.com.

My process to get to an answer will take six big steps.

  1. First I’ll do some term-setting and describe what we managed to get done in the short time we had at Juvet.
  2. Then I’ll share the set of sci-fi films and television shows I identified that deal with AI to consider as canon for the analysis. (Steps one and two are today’s post.)
  3. I’ll share these properties’ aggregated “takeaways” that pertain to AI: What would an audience reasonably presume about AI in the real world, given the narrative? These are the stories we are telling ourselves.
  4. Next I’ll look at the handful of manifestos and books dealing with AI futurism to identify their imperatives.
  5. I’ll map the cinematic takeaways to the imperatives.
  6. Finally I’ll run the “diff” to identify what stories we aren’t telling ourselves, and hypothesize a bit about why.
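For the programmatically inclined, that final “diff” step can be sketched in a few lines. This is only an illustration of the idea, not the real analysis: the labels below are invented placeholders, and the actual takeaways and imperatives come later in the series.

```python
# Treat the cinematic takeaways and the futurists' imperatives as labeled
# sets; the "untold" stories are the imperatives with no matching takeaway.
takeaways = {
    "AI will turn on us",
    "AI can be outsmarted",
    "AI makes a good servant",
}
imperatives = {
    "AI will turn on us",
    "Get AI's goals right before switching it on",
    "Plan for economic disruption",
}

told = imperatives & takeaways    # imperatives sci-fi already dramatizes
untold = imperatives - takeaways  # imperatives sci-fi is ignoring

print(sorted(untold))
```

The real version of this is done by hand in a spreadsheet, but the logic is the same set difference.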

Along the way, we’ll get some fun side-analyses, like:

  • What categories of AI appear in screen sci-fi?
  • Do more robots or software AI appear?
  • Are our stories about AI more positive or negative, and how has that changed over time?
  • What takeaways tend to correlate with other takeaways?
  • What takeaways appear in mostly well-rated movies (and poorly-rated movies)?
  • Which movies are most aligned with computer science’s concerns? Which are least?
These will come up in the analysis when they make sense.

Longtime readers of this blog may sense something familiar in this approach, and that’s because I am basing the methodology partly on the thinking I did last year for working through the Fermi Paradox and Sci-Fi question. Also, I should note that, like the Fermi analysis, this isn’t about the interfaces for AI, so it’s technically a little off-topic for the blog. Return later if you’re uninterested in this bit.

Zorg fires the ZF-1

Since AI is a big conceptual space, let me establish some terms of art to frame the discussion.

  1. Narrow AI is the AI of today, in which algorithms enact decisions and learn in narrow domains. They are unable to generalize knowledge and adapt to new domains. The Roomba, the Nest Thermostat, and self-driving cars are real-world examples of this kind of AI. Karen from Spider-Man: Homecoming, S.H.I.E.L.D.’s car AIs (also from the MCU), and even the ZF-1 weapon in The Fifth Element are sci-fi examples.
  2. General AI is the as-yet speculative AI that thinks kind of like a human thinks, able to generalize knowledge and adapt readily to new domains. HAL from 2001: A Space Odyssey, the Replicants in Blade Runner, and the robots in Star Wars like C3PO and BB-8 are examples of this kind of AI.
  3. Super AI is the speculative AI that is orders of magnitude smarter than general AI, and thereby orders of magnitude smarter than us. It’s arguable whether we’ve ever really seen a proper Super AI in screen sci-fi (because characters keep outthinking it, and wut?), but Deep Thought from The Hitchhiker’s Guide to the Galaxy, the big AI in The Matrix diegesis, and the titular AI from Colossus: The Forbin Project come close.

There are fine arguments to be made that these are insufficient for the likely breadth of AI that we’re going to be facing, but for now, let’s accept these as working categories, because the strategies (and thereby what stories we should be telling ourselves) for each are different.

  • Narrow AI is the AI of now. It’s in the world. (As long as it’s not autonomous weapons…) It gets safer as it gets more intelligent. It will enable efficiencies never before seen in some domains. It will disrupt our businesses and our civics. Like any technology, it can be misused, but the AI won’t have any ulterior motives of its own.
  • General AI is what lots of big players are gunning for. It doesn’t exist yet. It gets more dangerous as it gets smarter, largely because it will begin to approach a semblance of sentience and the evolutionary threshold to superintelligence. We will restructure society to accommodate it, and it will restructure society. It could come to pass in a number of ways: a willing worker class, a revolt, a new world citizenry. It/they will have a convincing consciousness, by definition, so their motives and actions become a factor.
  • Super AI is the riskiest scenario. If we have seeded it poorly, it presents the existential risk that big names like Gates and Musk are worried about; it could wipe us out as a side effect of pursuing its goals. If seeded well, it might help us solve some of the vexing problems plaguing humanity (cf. climate change, inequality, war, disease, overpopulation, maybe even senescence and death). It’s very hard to really imagine what life will be like in a world with something approaching godlike intelligence. It could conceivably restructure the planet, the solar system, and us to accomplish whatever its goals are.

Since these things are related but categorically so different, we should take care to speak about them differently when talking about our media strategy toward them.

Also, I should clarify that I included AI embodied in a mobile form, like C-3PO or Cylons, and call them robots in the analysis when it’s pertinent. Non-embodied AI is just called AI, or unembodied.

Those terms established, let me also talk a bit about the foundational work done with a smart group of thinkers at Juvet.

At Juvet

Juvet was an amazing experience generally (we saw the effing northern lights, y’all), and if you’re interested, there was a group write-up afterwards, called the Juvet Agenda. Check that out.

Northern lights

My workshop for “AI Narratives” attracted 8 participants. Shout-outs to them follow. Many are doing great work in other domains, so look them up sometime.

Juvet attendees

To pursue an answer, this team first wrote up every example of an AI in screen-based sci-fi that we could think of on red Post-It Notes. (A few of us referenced some online sources so it wasn’t just from memory.) Next we clustered those thematically. This was the bulk of the work done there.

I also took time to try to simultaneously put together, on yellow Post-It Notes, a set of Dire Warnings from the AI community, and even started to use Blake Snyder’s Save the Cat! story frameworks to try to categorize the examples, but we ran out of time before we could pursue any of this. It’s just as well: I realized later the Save the Cat! framework was not useful to this analysis.

Save the Cat

Still, a lot of what came out there is baked into the following posts, so let this serve as a general shout-out and thanks to those awesome participants. Can’t wait to meet you at the next one.

But when I got home and began thinking of posting this to scifiinterfaces, I wanted to make sure I was including everything I could. So, I sought out some other sources to check the list against.  

What AI Stories Are We Telling in Sci-Fi?

This sounds simple, but it’s not. What counts as AI in sci-fi movies and TV shows? Do robots? Do automatons? What about magic that acts like technology? What about superhero movies that are on the “edge” of sci-fi? Spy shows? Are we sticking to narrow AI, general AI, or super AI, or all of the above? At Juvet and since, I’ve eschewed trying to work out a formal definition, and instead gone with loose, English-language definitions, something like the ones I shared above. We’re looking at the big picture. Because of this, trying to hairsplit the details won’t serve us.

How did you come up with the survey of AI shows?

So, I wound up taking the shows identified at Juvet and then adding in shows from this list on Wikipedia and a few stragglers tagged with AI as a keyword on IMDB. That process resulted in the following list.

2001: A Space Odyssey
A.I. Artificial Intelligence
Agents of S.H.I.E.L.D.
Alien
Alien: Covenant
Aliens
Alphaville
Automata
Avengers: Age of Ultron
Barbarella
Battlestar Galactica (1978)
Battlestar Galactica (2004)
Bicentennial Man
Big Hero 6
Black Mirror “Be Right Back”
Black Mirror “Black Museum”
Black Mirror “Hang the DJ”
Black Mirror “Hated in the Nation”
Black Mirror “Metalhead”
Black Mirror “San Junipero”
Black Mirror “USS Callister”
Black Mirror “White Christmas”
Blade Runner
Blade Runner 2049
Buck Rogers in the 25th Century
Buffy the Vampire Slayer “Intervention”
Chappie
Colossus: The Forbin Project
D.A.R.Y.L.
Dark Star
The Day the Earth Stood Still
The Day the Earth Stood Still (2008 film)
Demon Seed
Der Herr der Welt (i.e. Master of the World)
Doctor Who
Eagle Eye
Electric Dreams
Elysium
Enthiran
Ex Machina
Ghost in the Shell
Ghost in the Shell (2017 film)
Her
Hide and Seek
The Hitchhiker’s Guide to the Galaxy
I, Robot
Infinity Chamber
Interstellar
The Invisible Boy
The Iron Giant
Iron Man
Iron Man 3
Knight Rider
Logan’s Run
Max Steel
Metropolis
Mighty Morphin Power Rangers: The Movie
The Machine
The Matrix
The Matrix Reloaded
The Matrix Revolutions
Moon
Morgan
Pacific Rim
Passengers (2016 film)
Person of Interest
Philip K. Dick’s Electric Dreams (Series) “Autofac”
Power Rangers
Prometheus
Psycho-pass: The Movie
Ra.One
Real Steel
Resident Evil
Resident Evil: Extinction
Resident Evil: Retribution
Resident Evil: The Final Chapter
Rick & Morty “The Ricks Must be Crazy”
RoboCop
RoboCop (2014 film)
RoboCop 2
RoboCop 3
Robot & Frank
Rogue One: A Star Wars Story
S1M0NE
Short Circuit
Short Circuit 2
Spider-Man: Homecoming
Star Trek First Contact
Star Trek Generations
Star Trek: The Motion Picture
Star Trek: The Next Generation
Star Wars
Star Wars: Episode I – The Phantom Menace
Star Wars: Episode II – Attack of the Clones
Star Wars: Episode III – Revenge of the Sith
Star Wars: The Force Awakens
Stealth
Superman III
The Terminator
Terminator 2: Judgment Day
Terminator 3: Rise of the Machines
Terminator Genisys, aka Terminator 5
Terminator Salvation
Tomorrowland
Total Recall
Transcendence
Transformers
Transformers: Age of Extinction
Transformers: Dark of the Moon
Transformers: Revenge of the Fallen
Transformers: The Last Knight
Tron
Tron: Legacy
Uncanny
WALL•E
WarGames
Westworld (1973 film)
Westworld (2016 TV series)
X-Men: Days of Future Past

Now sci-fi is vast, and more is being created all the time. Even accounting for the subset that has been committed to television and movie screens, it’s unlikely that this list contains every possible example. If you want to suggest more, feel free to add them in the comments. I am especially interested in examples that would suggest a tweak to the strategic conclusions at the end of this series of posts.

Did anything not make the cut?

A “greedy” definition of narrow AI would include some fairly mundane automatic technologies. The doors found in the Star Trek diegesis, for example, detect many forms of life (including synthetic) and even gauge the intentions of their users to determine whether or not they should activate. That’s more sophisticated than it first seems. (There was a chapter all about sci-fi doors that wound up on the cutting room floor of the book. Maybe I’ll pick that up and post it someday.) But when you think about this example in terms of cultural imperatives, the benefits of the door are so mundane, and the risks near nil (in the Star Trek universe they work perfectly, even if on set they didn’t), that it doesn’t really help us answer the ultimate question driving these posts. Let’s call those smart, utilitarian, low-risk technologies mundane, and exclude them.

TOS door blooper

That’s not to say workaday, real-world narrow AI is out. IBM’s Watson for Oncology (full disclosure: I’ve worked there the past year and a half) reads X-rays to help identify tumors faster and more accurately than human doctors can. (Fuller disclosure: It is not without its criticisms.) (Fullest disclosure: I do not speak on behalf of IBM anywhere on this blog.)

Watson for Oncology winds up being workaday, but still really valuable. It would be great to see such benefits to humanity writ large in sci-fi. It would remind us of why we might pursue AI even though it presents risk. On the flip side, mundane examples can have pernicious, hard-to-see consequences when implemented at a social scale, and if a sci-fi narrow AI clearly illustrates those kinds of risks, it would be very valuable to include.

Comedy may have AI examples, too, but for the same reason those examples are very difficult to review, they’re also difficult to include in this analysis: What belongs to the joke, and what should be considered actually part of the diegesis? So, say, the Fembots from Austin Powers aren’t included.

No Austin Powers

Why not rate individual AIs?

You’ll note that I put Avengers: Age of Ultron on one line, rather than listing Ultron, JARVIS, Friday, and Vision as separate things to consider. I did this because the takeaways (detailed in the next post) are tied to the whole story, not just the AI. If a story only has evil AIs, the implied imperative is to steer clear of AI. If a story only has good AIs, it implies we should step on the gas. But when a story has both, the takeaway is more complicated. Maybe it is that we should avoid the thing that made the evil AI evil, or ensure that AI has human welfare baked into its goals and easy ways to unplug it if it becomes clear that it doesn’t. These examples show that the story is the profitable chunk to examine.

Ultrons

TV shows are more complicated than movies because long-running ones, like Doctor Who or Star Trek, have lots of stories, and the strategic takeaways may have changed over episodes, much less decades. For these shows, I’ve had to cheat a little and talk just about the Daleks, say, or Data. My one-line coverage does them a bit of a disservice. But to keep this on track and not let it become a months-long analysis, I’ve gone with the very high-level summary.

Similarly, franchises (like the overweighted Terminator series) can get more weight because there are many movies. But without dipping down into counting the actual minutes of screen time for each show and somehow noting which of those minutes are dedicated, conceptually, to AI, it’s practical simply to note the bias of the selected research strategy and move on.

OMFG you forgot [insert show here]!

If you want to suggest additions, awesome. Look at the Google Sheet (link below), specifically the page named “properties,” and comment on this post with all the information that would be necessary to fill in a new row for the new show. Please also be aware that a refresh of the subsequent analysis will happen only after some time and/or if it becomes apparent that the conclusions would be significantly affected by new examples. Remember that since we’re looking for effects at a social level, the blockbusters and popular shows have more weight than obscure ones. More people see them. And I think the blockbusters and popular shows are all there.

So, that’s the survey from which the rest of this was built.

A first, tiny analysis

Once I had the list, I started working with the shows in the survey. Much of the process was managed in a “Sheets” (Google Docs) spreadsheet, which you can see at the link below.

Not wanting such a major post to go out without at least some analysis, I did a quick breakdown of the data: how many shows involving AI appeared each year. As you might guess, that number has been increasing a little over time, but it spiked significantly after 2010.

showsperyear
Click for a full-size image

Looking at the data, there aren’t many surprises. We see one or two shows at the beginning of the prior century. Things picked up following real-world AI hype between 1970 and 1990. There was a tiny lull before AI became a mainstay in 1999 and ramped up as of 2011.
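If you want to reproduce the per-year tally behind the chart, it’s a one-liner over the survey data. The titles and years below are just a tiny hypothetical subset; the real data lives in the “properties” page of the Google Sheet.

```python
from collections import Counter

# Hypothetical subset of the survey: (title, release year) pairs.
shows = [
    ("Metropolis", 1927),
    ("2001: A Space Odyssey", 1968),
    ("The Terminator", 1984),
    ("The Matrix", 1999),
    ("Her", 2013),
    ("Ex Machina", 2014),
]

# Count how many AI shows appeared each year.
per_year = Counter(year for _, year in shows)
for year in sorted(per_year):
    print(year, per_year[year])
```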

There’s a bit of statistical weirdness that the years ending in 0 tend not to have shows, but I think that’s just noise.

What isn’t apparent in the chart itself is that cinematic interest in AI did not map tightly to the real-world “AI Winter” (a period of hype-exhaustion that sharply reduced funding and publishing) that computer science suffered in 1974–80 and again in 1987–93. It seems that, as audiences, we’re still interested in the narrative issues even when the actual computer science has quieted down.

It’s no surprise that we’ve been telling ourselves more stories about AI over time. But things get more interesting when we look at the tone of those shows, as discussed in the next post.

Reader wish: More interviews with authors

A-writer

[This is a one-off request from the most recent readership poll.]

This is a great idea! Many times my critiques pass the buck from the interface designers to the script writers, so in all fairness I should also interview them. I would very much want to have completed a review for them to respond to first, though it’s admittedly not a requirement. I do have a personal connection to the author of Arrival. Maybe I’ll get to that one.

One clarification, though, reader: Do you mean authors for the shows I’ve reviewed, any show, or authors of written sci-fi?

Also: Does anyone have a connection to authors of sci-fi? Especially of any shows that I’ve reviewed already? (If you’re reading via RSS, there’s a list of shows on the right-hand side of the site.) If so, send me a private message at chris[at]scifiinterfaces.com with the author’s name and how you know them. Then we can discuss your asking them if they’d be OK with an introduction to me for an interview.

Reader wish: More about the narrative side of things

[This is a one-off request from the most recent readership poll.]

I am actually quite interested in this. I have an outline for a book, tentatively titled Worldbuilding with Interfaces, and in my head this would include individual frameworks for common interfaces and what needed to be shown for several models of interaction, among other things.

While I’m dreaming, let me also put out that I have a daydream where I join the faculty down at Worldbuilding Institute to get deep into this with the pros. Hook a nerd up, will ya. Back to reality.

If I started to include posts as a lead-up to a full book on it, though, this would be a pretty major shift in the tone and content. Would that be worth starting a new blog for just that purpose? Or could it fit in here amongst the other reviews? Would the lines be too blurry? Would it isolate existing readers? It would certainly slow down my already pokey publishing pace.

Since this would be a major shift, I’m putting it out there to see if anyone wants to discuss it. In, of course, comments. Or chris[at]scifiinterfaces.com if you have secret, sage words of advice.

Reader wish: More diverse UI work

[This is a one-off request from the most recent readership poll.]

Reader wish: Most of the content is fixated on one type of FUI. It would be nice to see more diverse UI work.

This was really weird for me to read, since Scout and I are currently reviewing magic items as if they were tech. In the past the blog has covered bizarre gestural interfaces, suicide kits, Krell technology, robot design, ectoplasmic containment units, NUI, AI, service design, and even panopticon teleporting matchmaking interfaces.

I have gone back to the beginning of sci-fi and thereafter spread new reviews out amongst the decades. I review every interface in any given movie or TV show, using a very broad definition of interfaces. The only types of sci-fi interface I won’t cover are weapons, torture devices, or work done by toxic people.

So if you can comment and help me understand more of what you mean, I’d appreciate it. But if that doesn’t satisfy, the HUDs and GUIs categories include the occasional game and some lightweight analysis, too, so be sure to check them out. And of course anyone is welcome to offer to contribute to ensure there is more of the diversity you are seeking.

  • You could mean games, and here’s why not.
  • You could mean literature or illustration, and the intro to the book covers why that’s a non-starter.
  • You could mean more obscure sci-fi or subgenres, and that’s just a matter of my limited bandwidth.

I guess what I’m saying is I think the blog already covers a huge range of FUI, within the constraints of movie and TV sci-fi. If you’ve actually identified a blind spot I’ve had, please email me or comment on the site so I can have my eyes opened.

Reader wish: Talk to more creators

[This is a one-off request from the most recent readership poll.]

Reader wish: I wish there would be more interviews whenever you can get creators to talk about their interfaces, because I’d like to have more context about the story behind them.

Sounds good. I like that content, too.

I’ve been explicit about the virtues of a New Criticism approach to critique, which argues against including a creator’s intention in a critique. I still believe that to be true, despite modern trends toward ad hominem analysis.

But after a review gets completed, I don’t see any harm. Well, except that lots of sites now feature creator interviews, and it’s a time-intensive undertaking for comparatively little payoff.

I’ll do my best. Let me know if you have any particular interfaces that you’re thinking of, or even any particular creators you already know about in the comments.

Reader comment: Sometimes the breakdowns are pretty abstract and pedantic or obscure

[This is a one-off request from the most recent readership poll.]

All true. I follow the analyses where they lead, and I won’t reject a line of inquiry because it’s abstract, pedantic, or obscure. My Twitter description used to note that I “delight in finding truffles in oubliettes,” and that bit of poetry refers to exactly this.

If I were to flatter myself, I would love for this blog to be considered in a league with PBS Idea Channel: insightful and unapologetically nerdy. Not there yet, of course.

So I had considered this not a bug but a feature.

I’d love to hear from other readers. Do you feel the same way? If a majority of readers feel that the abstraction, depth, and obscure places the blog goes are off-putting, it might be a good moment to consider the future of the blog.

Reader complaint: Boring

[This is a one-off request from the most recent readership poll.]

This reader free-form comment has two parts.

1. All the analysis lately has just been of lo-res/boring/barely seen interfaces from old programs…

I presume you mean the Star Wars Holiday Special and perhaps Johnny Mnemonic, but Doctor Strange is from 2016, and that analysis began 30 May, five weeks before this reader poll. So…maybe check out those?

Also, note that I’m in this for the insight, and hi-res/explosion-filled/blockbuster interfaces have no monopoly on insightful ideas. In fact, if anything, I’d wager they’re most often the shallow ones. I hope to encourage readers to explore more sci-fi to learn the cool stuff that is out there, well beyond the most-hyped stuff at Comic-Con. So, reader, please join me in judging books by their contents, and looking across the whole library.

2. …and now every show is stretched thin over many separate articles.

If it helps to know, my writing style is quite the opposite. I tend to write things out as a single post to get the thinking right, and then, yes, make a call as to how to divide it up. For instance, the readership-poll posts started out as a single post that scrolled for miles, and I just couldn’t see asking anyone to set aside a vacation to read it in one go. Reader logs show me that people don’t read the longer posts, so I cut things down into digestible chunks. My mental model is something someone can read in a short break at work. My apologies if that feels thin rather than digestible.

I should do my due diligence, though, and just ask: Are people more interested in long-form posts, like the ones I began the blog with (see Metropolis and The Cabin in the Woods), rather than the short-form posts adopted since?

Reader free-form comment: Would be cool to know how (and if) you apply these reviews to your design work

[This is a one-off request from the most recent readership poll.]

Short answer: Yes, through critique practice and design patterns. Longer answer follows.

Brainscan-2
Exactly like this.

Generally, improving my thinking

This is broad, but quite true. After making a practice of looking at interfaces systematically, and putting that critique into words that I can read, and vet, and feel comfortable posting on the frakking internet for anyone to read, I’ve gotten better at it. As a design manager, learning to quickly critique others’ work is invaluable. As a direct contributor, I can bring a more sophisticated real-time critique of my own ideas, which makes the design that much better, even when doing pair design.

Apologetics

It would be easy to just rag on sci-fi interfaces. But having to put critiques of them out in the world, I have to remember that they’re created by talented (or at least well-meaning) people, and I should seek to understand what they were doing, and even give an interface a thought pass, imagining that it’s not broken, but brilliant. That doesn’t always pay off, but when it does the results are golden: deep insight that is shareable in fun memetic stories. So I’ve developed apologetics as part of my critiques, and it allows me to see the good in a design rather than just trashing it. Which is a lesson the whole Internet could take to heart, n’est-ce pas?

Better skepticism

I’ve spoken at conferences about the risk in conflating sci-fi interfaces’ cinematic coolness with their real-world goodness. By systematically, pedantically, deconstructing them to understand them, I feel more confident in my ability to not get misled by the cool things I see in movies and TV.

A rich backpack of inspirations

At the same time as I’m building up skepticism, I have to admit that these interfaces are really, really cool. In getting to know the survey intimately, I have a century-wide pool of examples and inspiration to pull from when tackling a new design problem.

Giving me new patterns to work with

Occasionally I’ll run into genuine new patterns (in the Alexander sense) that I can incorporate into my work. Should I need to design a chat feature, I can always remember the Empire and consider a hierarchical display option as seen in the Star Wars volumetric projection interfaces.

34287983545_cd4106b758_o.jpg

But let me give a more concrete answer. Big thoughts often coalesce from many places at once, and my latest book was just that. One major place it came from was an analysis of the HUD in the Firefly pilot, i.e., if the HUD (above) knows where the bad guy is, why does it ask Mal to aim it? It seemed like Hollywood had this conceptual challenge, and then I realized that humans may have it, too. We don’t like the thought that computers can do some things better than us, but they can. After a lot of exploration, I realized they do, and more importantly, that in some domains they should. No one had written a book about it, so I did. And part of it came out of sci-fi.

You may be surprised to note that I don’t get visual ideas from sci-fi. Part of that is that I haven’t done visual design for interfaces in about 20 years. Part of it is that I really am a function guy at heart. Other people are really invested in the presentation layer (and it’s very important to the success of a given interface), but that’s not me.

***

I realize all this is kind of vague, but without giving away client IP, that’s the most concrete answer I could give, reader. Here’s one for you: Has anything in sci-fi ever influenced your design work? (Comment! Comment!)

Reader wish: I’d really like a better WordPress theme. This one is tricky to navigate at times

[This is a one-off request from the most recent readership poll.]

Yes, yes, yes! I agree. Way back when I started the blog, I modified a default WordPress theme, and even I get frustrated with it sometimes. But I’m better at content than I am at WordPress design, and honestly would rather spend my time doing more writing and creating more reviews than selecting and modifying another template. Is there anyone who wants to volunteer to improve the template or suggest a new one? I’d love it. Email me at chris[at]scifiinterfaces.com if so.

Alternatively, I might run a Kickstarter to see if we can raise the money for a professional WordPress developer to improve things. (This is an idea from another commenter, which I found awesome.) Until then, please comment with the particular problems you find frustrating, and I’ll see if I can incrementally improve those things in the meantime.

Reader wish: Video games, space combat simulators in particular. :-)

[This is a one-off request from the most recent readership poll.]

I’m a gamer myself, so I’m tempted to venture out. But there’s some stuff to discuss.

Let’s first distinguish between interfaces in cut-scenes, which are very much like the sci-fi interfaces I review here, and the interfaces of the games themselves.

Cut-scene interfaces might be candidates for review, except that they don’t exist in isolation: they’re most often quite tied in with the game-itself interfaces, and those are entirely different beasts. The rest of this post discusses how different those beasts are.

Game-itself interfaces answer to different masters than sci-fi interfaces, even if they share surface similarities.

  • They are subject to pressures of usability, but the game is not meant to be perfectly usable. (That would be a button saying “win game.”)
  • They have to work exhaustively, meaning that if there’s a button, it has to do something. Sci-fi interfaces often have parts that actors are told work and parts they’re warned won’t.
  • Makers of sci-fi interfaces often tell the actor to just do their acting thing, and the makers will go back in later and backfill the interface around the actor’s motions. This of course affects the interface. Game-itself interfaces never backfill around users.
  • Sci-fi interface designers may have had no formal training in interaction design, but rather in art and motion graphics. This makes those interfaces a kind of outsider art, which is kind of why they are sometimes brilliant and sometimes shite. (Even as more and more sci-fi interface studios are also doing real-world projects, they are clear about which one they’re working on.)
  • Game-itself interfaces are limited by the inputs of the gaming system: keyboard or handheld controller. Sci-fi interfaces have few restrictions.
  • Sci-fi interfaces only occupy the full screen for at most seconds at a time. Game-itself interfaces are up the whole time during gameplay.
  • Sci-fi interfaces just always work. Even if the actor does something wrong, the effect that the story needs still happens.
  • Game-itself interfaces are customizable, so different people will be using different instantiations of the same thing.
  • Sci-fi interfaces have as their goal to tell the audience something, and can fudge most of the other semiotic layers beneath in the service of that. They mostly convey narrative information. A caused B change. C is happening. D is the intended plan. E is how F is doing this cool thing. The audience never has to use that information except in the service of understanding the story. Game-itself interfaces are about both knowing and using that information directly.
  • The reviews would be different: we want to evaluate sci-fi interfaces for being believable, for how they contribute to the narrative, and for what we can learn from them. Game-itself interfaces would be reviewed for usability, for how well they equip you to play the game. Just not the same thing.

So it’s because they are such different beasts, requiring a whole different conceptual framework, that I don’t think it’s right to include them here on this blog. A fellow started his own blog of game interface reviews a few years ago, but I can’t find the URL in my inbox or via search, and anyway I don’t think he was able to keep it up. Maybe he or someone else will pick it up sometime.

But if someone started a blog on this topic (or wrote a nice in-depth article about it), I think it would be informative to analyses here. And heck, space agencies and sci-fi makers should pay attention to the lessons learned there.

Also, I’m loath to give too much attention to reviewing warfare and weapons interfaces. Hollywood already glamorizes war a little too much.