Dust Storm Alert

WallE-DustStorm04

While preparing for his night cycle, Wall-E is standing at the back of his transport/home. On the back drop door of the transport, he is cleaning out his collection cooler. In the middle of this ritual, an alert sounds from his external speakers. Concerned by the sound, Wall-E looks up to see a dust storm approaching. After seeing this, he hurries to finish cleaning his cooler and seal the door of the transport.

A Well Practiced Design

The Dust Storm Alert appears to override Wall-E’s main window into the world: his eyes. This is done to warn him of a very serious event that could damage him or permanently shut him down. Interestingly, the visual channel isn’t the first to respond: we hear the audio alert first, and only then does Wall-E’s eye-view show the visual alert.

Given the order of the two parts of the alert, Wall-E’s designers considered the audible part the most important piece of information. It comes first, is omnidirectional and loud enough for everyone to hear, and is followed by more explicit information.

WallE-DustStorm01

Equal Opportunity Alerts

By having the audible alert sound first, all Wall-E units, other robots, and people in the area would be alerted to a major event. Then, the Wall-E units would be given the additional information, like range and direction, that they need to act. Whether because of training or pre-programmed instructions, the visual alert does not actually tell Wall-E what it is for, or what action he should take to be safe. This is similar to tornado sirens, where each individual is expected to know where they are and what the safest nearby location is.

For humans working alongside Wall-E units, each person should have their own heads-up display, likely similar to a Google Glass device. When a Wall-E unit gets a dust storm alert, the human could then receive a sympathetic alert and guidance to the nearest safe area. Combined with regular training and storm drills, people in the wastelands of Earth would then know exactly what to do.

Why Not Network It?

Whether by luck or proper programming, the alert is triggered with just enough time for Wall-E to get back to his shelter before the worst of the storm hits. Given that the alert didn’t trigger until Wall-E could see the dust cloud for himself, this feels like very short notice. Too short. A good improvement to the system would be a connection to a weather satellite in orbit, or to a weather broadcast from the city. This would allow him to be warned early and take shelter well before any of the storm hits, protecting him and his solar collectors.

Other than this, the alert system is effective. It warns Wall-E of the approaching storm in time to act, and it also warns everyone in the local vicinity of the same issue. While the alert doesn’t inform everyone of what is happening, at least one actor (Wall-E) knows what it means and knows how to react. As with any storm warning system, having a connection that can provide forecasts of potentially dangerous weather would be a huge plus.

Eve’s Gun

EvesGun02

For personal security during her expeditions on Earth, Eve is equipped with a powerful energy weapon in her right arm. Her gun has a variable power setting, and is shown firing blasts ranging from “Melt that small rock” to “Mushroom cloud visible from several miles away.”

EvesGun03_520

After each shot, the weapon is shown charging up before it is ready to fire again. This status is displayed by three small yellow lights on the exterior, as well as a low, audible charging whine. Smaller blasts appear to use less energy than large blasts, since the recharge cycle is shorter or longer depending on the size of the preceding shot.

EvesGun01

On the Axiom, Eve’s weapon is removed during her service check-up and tested separately from her other systems. It is shown recharging without firing, implying an internal safety or energy shunt in case the weapon needs to be discharged without firing.

While detached, Wall-E manages to grab the gun away from the maintenance equipment. Through an unseen switch, Wall-E then accidentally fires the charged weapon. This shot destroys the systems keeping the broken robots in the Axiom’s repair ward secured and restrained.

Awesome but Irresponsible

I am assuming here that BNL has a serious need for a weapon of Eve’s strength. Good reasons for this are:

  • They have no idea what possible threats may still lurk on Earth (a possible radioactive wasteland), or
  • They are worried about looters, or
  • They are protecting their investment in Eve from any residual civilization that may see a giant dropship (See the ARV) as a threat.

In any of those cases, Eve would have to defend herself until more Eve units or the ARV could arrive as backup.

Given that the need exists, the weapon should protect Eve and the Axiom. It fails to do this because of its flawed activation (firing when it wasn’t intended). The accidental firing scheme is an anti-pattern that shouldn’t be allowed into the design.

EvesGun05

The only lucky part of Wall-E’s mistake is that he doesn’t destroy the entire repair ward. Eve’s gun is shown to have the power to do just that, but Wall-E fires the weapon at a lower power setting than full blast. Whatever the reason for the accidental shot, Wall-E should never have been able to fire the weapon in that situation.

First, Wall-E was holding the gun awkwardly. It was designed to attach at Eve’s shoulder and float via a technology we haven’t invented yet. In the other shots we see, there are no physical buttons or connection points. This means that whatever Wall-E hits to fire the gun is either pressure sensitive or location sensitive. Either way, Wall-E was handling the weapon unsafely, and it should not have fired.

EvesGun00

Second, the gun is nowhere near Eve (relatively speaking) when Wall-E fires. She had no control over it, as shown by her very cautious approach and “wait a minute” gestures to Wall-E. Since it was not connected to her or the Axiom, the weapon should not have been active.

EvesGun04

Third, they were in the “repair ward”, which implies that the ship knows that anything inside that area may be broken and do something wildly unpredictable. We see broken styling machines going haywire, tennis ball servers firing non-stop, and an umbrella that opens involuntarily. Any robot that could be dangerous to the Axiom was locked in a space where it couldn’t do harm. Everything was safely locked down except Eve’s gun. The repair ward was too sensitive an area for the weapon to be active.

In short:

  1. Unsafe handling
  2. Unauthorized user
  3. Extremely sensitive area

Any one of those three should have kept Eve’s gun from firing.
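The argument above amounts to a chain of independent interlocks, any one of which should veto firing. A minimal sketch of that logic, in Python (the state names and checks are my own invention; nothing of BNL’s firmware is shown on screen):

```python
# Hypothetical firing interlock for a weapon like Eve's gun.
# A single failed check vetoes the shot; all three must pass.

from dataclasses import dataclass

@dataclass
class WeaponState:
    securely_mounted: bool      # attached to Eve or the maintenance rig?
    operator_authorized: bool   # bound to a known, trained operator?
    zone_weapons_free: bool     # outside sensitive areas like the repair ward?

def may_fire(state: WeaponState) -> bool:
    """Fire only if every independent interlock passes."""
    return (state.securely_mounted
            and state.operator_authorized
            and state.zone_weapons_free)

# Wall-E's shot in the repair ward: unsafe handling, unauthorized user,
# and an extremely sensitive area -- three separate vetoes.
assert may_fire(WeaponState(False, False, False)) is False

# Eve on an Earth expedition: all interlocks pass.
assert may_fire(WeaponState(True, True, True)) is True
```

The key design choice is that the default is “don’t fire”: every interlock must actively pass, so a missing or failed check fails safe.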

Automatic Safeties

Eve’s gun should have been locked down the moment she arrived on the Axiom, through the gun’s location-aware internal safeties and exterior signals broadcast by the Axiom. Barring that, the gun should have locked itself down and discharged safely the moment it was disconnected from either Eve or the maintenance equipment.

A Possible Backup?

There is a rationale for having a free-form weapon like this: as a backup system for human crew accompanying an Eve probe during an expedition. In a situation where the Eve pod was damaged, or where humans had to take control, the gun could be detached and wielded by a senior officer.

Still, given that it can create mushroom clouds, it feels grossly irresponsible.

In a “fallback” mode, a simple digital totem (such as biometrics or an RFID chip) could tie the human wielder to the weapon and make sure the gun was used only by authorized personnel. (Notably, Wall-E is not an authorized wielder.) By tying the safety trigger to the person using the weapon, or to a specific action like the physical safeties on today’s firearms, the gun would prevent someone untrained in its operation from using it.
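A sketch of how such a totem might gate the trigger (the roster and the token IDs here are invented stand-ins; the film shows nothing of the mechanism):

```python
# Hypothetical totem check: the trigger arms only when the wielder's
# token appears on a roster of authorized, trained personnel.
# Both the roster and the token IDs are invented for illustration.

AUTHORIZED_TOKENS = {"eve-probe-01", "axiom-officer-07"}

def trigger_armed(wielder_token):
    """Arm the trigger only for a recognized token; fail safe otherwise."""
    return wielder_token in AUTHORIZED_TOKENS

assert trigger_armed("eve-probe-01") is True
assert trigger_armed(None) is False        # no token read at all
assert trigger_armed("wall-e") is False    # not an authorized wielder
```

Because membership in the roster is the only path to arming, an unreadable or missing token leaves the weapon safe by default.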

If something this powerful is required for exploration and protection, it should protect its user in all reasonable situations. While we can expect Eve to understand the danger and capabilities of her weapon, we cannot assume the same of anyone else who might come into contact with it. Physical safeties, the removal of easy-to-press external buttons, and proper handling would protect everyone involved in the Axiom exploration team.

The Dropship

WallEDropShip-08

The Axiom Return Vehicle’s (ARV’s) first job is to drop off Eve and activate her for her mission on Earth. The ARV acts as the transport from the Axiom, landing on the surface of Earth to drop off Eve pods, then returning after an allotted time to retrieve the pods and return them to the Axiom.

The ARV drops Eve at the landing site near Wall-E’s home, then pushes a series of buttons on her chest. The buttons light up as they’re pushed, glowing blue just after the arm clicks them. At the end of the button sequence, Eve wakes up and immediately begins scanning the ground directly in front of her. She then continues scanning the environment, leaving the ARV to drop off more Eve pods elsewhere.

If It Ain’t Broke…

There’s an oddity in the ARV’s use of such a crude input device to activate Eve. At first, it seems like a system meant to provide a backup interface for a human user, allowing Eve to be activated by a person on the ground in the event of an AI failure, or on a human-led research mission. But this seems awkward in use, because Eve’s front contains no indication of what each button does, or what sequence is required.

A human user of the system would have to memorize the proper sequence as a physical set of relationships. Without more visual cues, it would be incredibly easy for a person in that situation to push the wrong button to start with, then continue pushing wrong buttons without realizing it (unless they remembered what sound the first button was supposed to make, but then they have one more piece of information to memorize, and it spirals out of control from there).
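The failure mode described here, confidently entering a wrong sequence and learning nothing until the end, is a classic validation problem. A small sketch of the difference immediate feedback makes (the key labels and the correct sequence are invented):

```python
# Hypothetical activation keypad: validating only at the end of the
# sequence vs. flagging the first wrong key immediately.
# The key labels and the correct sequence are invented for illustration.

CORRECT_SEQUENCE = ["dial-3", "sq-1", "sq-4", "sq-2"]

def validate_at_end(entered):
    """No feedback until the whole sequence is in; errors compound."""
    return entered == CORRECT_SEQUENCE

def first_error_index(entered):
    """Immediate feedback: the position of the first wrong key, if any."""
    for i, (key, expected) in enumerate(zip(entered, CORRECT_SEQUENCE)):
        if key != expected:
            return i
    return None

attempt = ["dial-3", "sq-4", "sq-1", "sq-2"]   # two keys transposed
assert validate_at_end(attempt) is False       # user learns only "it failed"
assert first_error_index(attempt) == 1         # user learns *where* it failed
```

With end-only validation, every keypress after the first mistake is wasted effort; per-key feedback stops the spiral at the first wrong button.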

What was originally for people is now best used by robots.

WallEDropShip-03

So if it’s not for humans, what’s going on? The minimal interface has strong hints of having been designed for legacy support: large physical buttons, a coded interface, and a panel tilted upward toward a person standing above it. BNL shows a strong desire to design people out of its systems while keeping the interactions (see The Gatekeeper). This style of button interface looks like a legacy control kept by BNL because, by the time people weren’t needed in the system anymore, the automated technology had already been adapted to the same interface.

The biggest hints come from the labels. Each label is an abstract symbol, with the keys grouped into two major areas (the radial selector on the top, and the line of large squares on the bottom). For highly trained technicians meant to interact only rarely with an Eve pod, these cryptic labels would either be memorized or referenced in a maintenance manual. For BNL, the problem would only appear after both the technicians and the manual were gone.

It’s an interface that sticks around because it’s more expensive to completely redo a piece of technology than to simply iterate it.

Despite the information hurdles, the physical parts of this interface look usable. Angling the panel makes the keypad easier to see from a standing position, and the keys are large enough to press without accidentally landing on the wrong one. The feedback is also excellent, with a physical depression, a tactile click, and a backlight that trails slightly to confirm the last key hit.

If I were redesigning this, I would make it possible for a basic- or intermediate-skill technician to use this keypad quickly. An immediate win would be labeling the keys on the panel with their functions, or at least their position in the correct activation sequence. Small hints would make a big difference for a technician’s memory.

WallEDropShip-04

To improve it even more, I would bring in the holographic technology BNL has shown elsewhere. With an overlay hologram, the pod itself could display real-time assistance: the right sequence of keypresses for whatever function the technician needed.

This small keypad continues to build on the movie’s themes of systems that evolve: Wall-E is still controllable and serviceable by a human, but Eve from the very start has probably never even seen a human being. BNL has automated science to make it easier on their customers.

WALL·E (2008): Overview

Release date: 27 June 2008, United States

image02

Humanity’s materialistic society has left Earth uninhabitable. A powerful mega-corporation – Buy-N-Large – offered resort-like spaceships for humanity to live on while they left behind an army of solar-powered trash collection robots to clean up the mess. Fast forward seven centuries, and Wall·E is the only trash robot left. He continues to do his job, kept company by a cockroach who has also managed to survive the end of civilization.

Wall·E’s life is abruptly interrupted by the arrival of Eve, a BNL (Buy-N-Large) probe meant to search for any signs that life is returning to Earth. After becoming emotionally attached to Eve, Wall·E shows her a plant he found during his trash collection. She “freaks out”, collects the plant, and leaves the planet. He follows her back to her home and what may be the last surviving outpost of humanity: the Axiom, an all-inclusive lifeboat and pleasure cruise that has been waiting to return to Earth for 700 years.

Aboard the Axiom, Wall·E opens the eyes of people who have become so accustomed to having every wish granted to them by BNL that they have stopped doing anything for themselves. Eve and Wall·E fight the Axiom’s autopilot and security systems for the plant, and draw the humans aboard the Axiom into the fight.

After the people on board the Axiom realize that they need to start doing things for themselves, they are able to deactivate the Axiom’s deceitful autopilot and trigger the ship’s return mode to land back on Earth. Wall·E, Eve, and the other robots of the Axiom then help people rebuild civilization on an Earth that is beginning to heal.

image00

IMDB: https://www.imdb.com/title/tt0910970/

Introducing Clayton Beese

Now this is exciting. Scifiinterfaces.com’s first guest review begins this week! That’s right, someone took a look at the terrifying Contribute! page and stepped up to the sci-fi plate. So without further ado, let me introduce Clayton Beese, and share his answers to a few questions I posed to him.

Clayton Beese

Hi there. Tell us a bit about yourself. What’s your name, where are you from, how do you spend your time?

Hi! I’m Clayton Beese, a User Experience designer from Overland Park, Kansas, and I’m someone who is drawn to the idea of storytelling as a very basic human activity. Outside of work I bike, I’m an amateur writer, I am usually the designated photographer on family trips, and I like taking random classes in things like rock climbing, blacksmithing, and Tai Chi to see what they’re like. Science fiction has always captured my interest because it asks questions about our needs as people, and what we want to see out of our future.

What are some of your favorite sci-fi interfaces (Other than in Wall•E)? (And why.)

ironMan_mkVII_HUD_01_jayse_hansen1

Iron Man: Tony Stark has managed to create an interface that tracks his eye focus, his conversational commands, and his gestures perfectly, and he has mastered their use. I think one of the biggest gaps in current gesture technology is that it tracks only one piece of the cognition its user is working with. By capturing just the motion of a person’s arms, the interface misses out on eye focus, which is a huge hint as to what the user actually wants to work with; and without being able to capture the random vocal thoughts that seep out, the interface lacks the context to add richness to the choices of gesture commands.

battlestar_galactica-last-supper

The new Battlestar Galactica: The show got to play with two completely different technology aesthetics. The Galactica is a brutalist machine, meant to take a beating and come out the other side still working. It has redundant systems, physical hardware everywhere it can, and a lived-in quality that says those systems are well used. The Galactica isn’t the most efficient, but it gets across its needs. The Cylon ships are the complete opposite. They speak to interactions that don’t require memorization or practice, and instead speak of an intuitive grasp of a system that can figure out what you want to do. The ships are so built around the idea that their users interact at a higher level than mere physical-ness that the walls aren’t even painted.

gurren_lagann_387_1680

And, from pure enjoyment of how intuitive and awesome they’ve managed to make everything, Gurren Lagann.

Why did you pick Wall-E for your first scifiinterfaces review?

Screen Shot 2013-12-02 at 5.03.12 PM

I really like how Wall-E shows the two spectrums of endurance technology: both post-apocalyptic hardened ruggedness, and post-AI takeover hands-off automation. There was also a lot to work with in the motivations that the different interfaces were built to serve. For me, watching how the Axiom works is always fun because it describes what Buy-N-Large expected out of its customers. I think there are a lot of companies that would like to do what BNL did, and the movie asks us if it’s really a world we want to live in.

Also, of the Pixar movies that have interfaces to review, Wall-E is my favorite.

What was your biggest surprise when doing the review?

I was surprised at how easy it was to descend into nitpicking aspects of the interface designs without acknowledging that they were very effective at their purpose. After writing a review, I could go back and realize that I had made it sound like the interface failed completely, when that wasn’t my intention at all, especially since most of the interfaces I went into the most detail on were the ones I really enjoyed. Pixar did a good job relaying Buy-N-Large’s design goals and intentions through their interfaces, and I really wanted to get that idea through in my reviews.

What else are you working on?

The biggest single other project I’m working on (outside of work at least!) is my first full-length novel. I’ve written a lot of stuff for fun, but this novel has grown far past my initial attempts to just write down some fun scenes and concepts I was working on. It’s turned into a way of getting down ideas I’ve had floating around about artificial intelligence and having fun writing long form plot that is actually intended to be marketable:

When a vindictive fellow pilot tries to steal Elizabeth’s warship and Artificial Intelligence, Phi, Elizabeth suddenly realizes that there’s more to an interplanetary war than just fighting against enemy forces. Elizabeth will have to tease out who she can trust among her fellow pilots, and whether she should hide the surprising intelligence Phi displays from her paranoid superiors if she wants to survive the growing conflict.

It’s aimed at a high school reader, and just needs a bit more work before it’ll be ready to go out and look for an agent.

Report Card: Logan’s Run

LogansRun-Report-Card

For our purposes, Dome City is a service, provided by the city’s ancestors to ensure a “good life” for their cloned descendants in a sustainable way, i.e., a way that does not risk the problems of overpopulation. The “good life” in this case is a particular hedonistic vision full of fashion, time at the spa, and easy casual sex.

There’s an ethical, philosophical, and anthropological question of whether this is the “right” sort of life to structure a service around. I suspect it’s a good conversation that will last at least a few beers. Fascinating as that question may be, looking into the interaction design requires us to accept those goals as a given and see how well the touchpoints help these personas address them within this framework.

Sci: F (0 of 4)

How believable are the interfaces?

The Fade Out drug is the only, only interface that’s perfectly believable. And while I can make up some reasons the Clean Up Rig is cool, that’s clearly what I’m bringing to it, and the rest of the bunch, to an interface, has massive problems with fundamental believability and usability. Seriously, the movie is a study in bad design.

Fi: A (4 of 4)

How well do the interfaces inform the narrative of the story?

Here the interfaces are fine. The Lifeclock tells us of their forced life limit. The Circuit tells us of the easy sex. Fade Out tells of easy inebriation. New You of easy physical changes.

The interfaces help tell the story of this bizarre dystopia, and help paint the “vast, silly spectacle” that Roger Ebert criticized when he wrote his original review in 1976.

Other interfaces help move the plot along in effective, if sometimes ham-handed ways, like the SandPhone and Aesculator Mark III. So even when they’re background tech, they help. Full marks.

Interfaces: D (1 of 4)

How well do the interfaces equip the characters to achieve their goals?

Sure, if you ignore all the usability problems and handwaving the movie does, the characters are getting what they want on a surface level. But ultimately, the service design of Dome City fails for every reason it could fail.

  • The system was poorly implemented.
  • Its touchpoints are unusable.
  • Its touchpoints don’t let its users achieve the system goals.

But the main reason it fails is that it fails to take into account some fundamental aspects of human nature, such as

  • Biophilia
  • The (entirely questionable) tendency towards punctuated serial monogamy in pair bonds
  • A desire for self-determination
  • Basic self-preservation.

If you don’t understand the goals of your users, you really have no hope of designing for them. And if you’re designing an entire, all-consuming world for those same users, misjudging the human universals puts your entire project—and their world—at risk.

Final Grade C- (5 of 12), MATINEE

Related lessons from the book

  • The Übercomputer’s all caps and fixed-width text evoke “that look” of early computer interfaces (page 33), as do its OCR sans-serif typeface (page 37) and blue color (page 42).
  • The SandPhone would have been much more useful as Augmented Reality (chapter 8, page 157).
  • The Aesculaptor could use a complete revamp from the Medical chapter (chapter 9, page 258), most notably using waveforms (page 263) and making it feel humane (page 281).
  • The Evidence Tray reminds us of multifactor authentication (page 118).
  • Of course The Circuit appears in the Sex chapter (chapter 13, page 293) and, as my redesign showed, needed to modernize its matchmaking (page 295) and use more subtle cues (page 301). Certainly Jessica-5 could have used a safeword (page 303).

New lessons

  • The Lifeclock reminds us to keep meaningful colors distinguishable.
  • The Circuit shows why a serial presentation democratizes options.
  • The Circuit also shows us that matchmaking must account for compatibility, availability, and interest.
  • The Aesculaptor tells us why a system should never fail into a worse state.
  • Carrousel implies that we shouldn’t hide the worst of a system, but instead cover it in a dazzle pattern.
  • The improvements I suggested for the SandPhone imply that solving problems higher up the goal chain is much harder but more disruptive.
  • The Evidence Tray gives us the opposite of the “small interfaces” lesson (page 296): too large an interface can overpromise for small interactions.

I grew up in Texas, and had the chance to visit the Fort Worth Water Gardens and Market Center where some of the scenes were shot. So I have a weirdly personal connection to this movie. Despite that, on review, the interfaces just suck, bless their little interactive hearts. Use them as fodder for apologetics and perhaps as a cautionary tale, but little, little else.

IMDB: https://www.imdb.com/title/tt0074812/

Scripts!

Such a cool collection of interactive voice response systems, with high fives out to everyone who thought up great (and ofttimes obscure) “talkie computers” from decades of sci-fi from the 1950s to the 2000-teens. By name…

ForbiddenPlanet-085
  • kedamono x7
  • Joe Bloch x10
  • dhwood
  • Burning x4
  • Kelley Strang
  • dhwood
  • brightrock
  • Clayton
  • Pixel I/O
  • pavellishin x2
  • Clayton
  • @CarsTheElectric
  • Steve Silvas x2
  • Matt Sheehe
  • Ben
  • Matt Sheehe
  • Joe Bloch
  • pavellishin
  • Matt Sheehe x2
  • Lela x2
  • NP
  • Clayton x2

The list of talkie computers we collected is “Robby the Robot, Adam Link, Jupiter 2, Landru, M-5, Nomad probe, The Oracle, Beta-V, HAL, Colossus, BOXX, Thermostellar Triggering Device, IRAC, the Übercomputer, C-3PO, Alex 7000, Proteus IV, Zen, Orac, Slave, V-Ger, Artificial persons, Dr. Theopolis and TWKE-4, MU-TH-UR 6000, KITT, Replicants, Image Machine, MCP, SAL, Max, Holly, Kryten!, L7, 790, Sphere, Ship [sic], AMEE, Ship, Andromeda Ascendant, Zero, S.A.R.A.H., Andy the Deputy AI, Icarus, KITT, Otto, Gerty, and Jarvis.” Think you could name the movies and TV shows these are from just from these names?

colossus-and-forbin

The next step is to build a collection of the scripts of these interactions, since we’ll be analyzing any peculiar, non-standard English that we find. I’m down to provide these scripts myself, but it would be easier if we crowdsource it. If you’re up to it, head to the following form to add the metadata and line-by-line script of the interaction. You can often find the scripts with a simple Google search, or by popping in the VHS/DVD/Blu-Ray you own, or by finding a video of the scene on some online video service and transcribing it from there. We are interested in word-perfect transcriptions. Don’t sweat it if you don’t have the time yourself. As of Thanksgiving weekend, I’ll manually complete any unfinished ones that I find.

KITT2000

The form to add scripts: https://docs.google.com/forms/d/1fLJKW_PviuWezpDKtUrnMO8IH3CE_f9fT4FpQznI6oo/viewform

Screen Shot 2013-11-24 at 22.19.39

The answer does not program

LogansRun224

Logan’s life is changed when he surrenders an ankh found on a particular runner. Instead of being asked to identify, the central computer merely stays quiet for a long while as it scans the object. Then its lights shut off, and Logan has a discussion with the computer unlike any he has had before.

The computer asks him to “approach and identify,” and gives him, by name, explicit instructions to sit facing the screen. Lights below the seat illuminate. He identifies in this chair by positioning his lifeclock in a recess in the chair’s arm, and a light above him illuminates. Then a conversation ensues between Logan and the computer.

LogansRun113

The computer communicates through a combination of voice and screen, on which it shows blue text and occasional illustrative shapes. The computer’s voice is emotionless and soothing. For the most part it speaks in complete sentences. In contrast, Logan’s responses are stilted and constrained, saying “negative” instead of “no,” and prefacing all questions with the word, “Question,” as in, “Question: What is it?”

On the one hand it’s linguistically sophisticated

Speech recognition and generation would not see a commercially released product until four years after the release of Logan’s Run, but there is an odd inconsistency here even for those unfamiliar with the actual constraints of the technology. The computer is sophisticated enough to generate speech with demonstrative pronouns, referring to the picture of the ankh as “this object” and to the label with “that is the name of the object.” It can even communicate with pragmatic meaning. When Logan says,

“Question: Nobody reached renewal,”

…and receives nothing but silence, the computer doesn’t object to the fact that his question is not a question. It infers the most reasonable interpretation, as we see when Logan’s following objection is cut off by the computer saying,

“The question has been answered.”

Despite these linguistic sophistications, it cannot parse anything but the most awkwardly structured inputs? Sadly, this is just an introduction to the silliness that is this interface.

Logan undergoes procedure “033-03,” in which his lifeclock is artificially set to blinking. He is then instructed to become a runner himself and discover where “Sanctuary” is. After his adventure outside performing the assignment he was forced to accept, he is brought back in as a prisoner. The computer traps him in a ring of bars, demanding to know the location of Sanctuary. Logan reports (correctly) that Sanctuary doesn’t exist.

LogansRun206
LogansRun205
hologram

On the other hand, it explodes

This freaks the computer out. Seriously. Now, the crazy thing is that the computer actually understands Logan’s answer, because it comments on it. It says, “Unacceptable. The answer does not program [sic].” That means it’s not a data-type error, as if it got the wrong kind of input. No, the thing heard what Logan was saying. It’s just unsatisfied, and the programmer decided that the best response to dissatisfaction was to engage the heretofore unused red and green pixels in the display, randomly delete letters from the text, and explode. That’s right. He decided that in addition to the Dissatisfaction() subroutine calling the FreakOut(Seriously) subroutine, the FreakOut(Seriously) subroutine in its turn calls the Explode(Yourself), Release(ThePrisoner), and WhileYoureAtItRuinAllStructuralIntegrityOfTheSurroundingArchitecture() subroutines.

LogansRun221

Frankly, if this is the kind of coding that this entire society was built upon, this whole social collapse thing was less deep social commentary and really just a matter of technical debt.

LogansRun223
LogansRun225
LogansRun226
LogansRun227

Technical. Debt.

A call out for call outs

Hey, small slice of the internet. I’m working with an awesome linguist, Anthony Stone of operativewords.com, on a project, and since I don’t know everything but you do, I’m wondering if you can help. We’re collecting examples of scenes from more serious movies and TV shows where a human is interacting with an artificial intelligence primarily through speech.

Example 1

In the ST:TOS episode "Mirror, Mirror," Captain Kirk speaks with his computer to learn if the ship could be used to get him back to the "good universe." (This dialogue was featured in the Learning chapter of the book.)

Computer

Example 2

In the movie "Logan’s Run," Logan speaks with the Übercomputer twice: once for questioning about the ankh, and once to report his findings about Sanctuary.

LogansRun128

There are others, but we’d like to collect as many examples as we can to get a good "corpus" to work from on this sooper secret thingy. But of course it’s in the service of a blog post, so contribute away, and we’ll thank you in the post once it finally comes out. What do you think: Can you name any?