If you’d like to read the reviews for Logan’s Run in chronological order, use this WordPress-hack link: http://scifiinterfaces.com/category/logans-run-1976/?order=asc
For our purposes, Dome City is a service, created by the city’s ancestors to provide a “good life” for their cloned descendants in a sustainable way, i.e., a way that does not risk the problems of overpopulation. The “good life” in this case is a particular hedonistic vision full of fashion, time at the spa, and easy casual sex.
There’s an ethical, philosophical, and anthropological question of whether this is the “right” sort of life to structure a service around. I suspect it’s a good conversation that will last at least a few beers. Fascinating as that question may be, looking into the interaction design requires us to accept these as a given and see how well the touchpoints help these personas pursue their goals within this framework.
Sci: F (0 of 4)
How believable are the interfaces?
The Fade Out drug is the only, only interface that’s perfectly believable. And while I can make up some reasons the Clean Up Rig is cool, that’s clearly what I’m bringing to it, and the rest of the bunch, to an interface, has massive problems with fundamental believability and usability. Seriously, the movie is a study in bad design.
Fi: A (4 of 4)
How well do the interfaces inform the narrative of the story?
The interfaces help tell the story of this bizarre dystopia, help paint the “vast, silly spectacle” that Roger Ebert criticized when he wrote his original review in 1976.
Interfaces: D (1 of 4)
How well do the interfaces equip the characters to achieve their goals?
Sure, if you ignore all the usability problems and handwaving the movie does, the characters are getting what they want on a surface level. But ultimately, the service design of Dome City fails for every reason it could fail.
- The system was poorly implemented.
- Its touchpoints are unusable.
- Its touchpoints don’t let its users achieve the system goals.
But the main reason it fails is that it fails to take into account some fundamental aspects of human nature, such as
- The (entirely questionable) tendency towards punctuated serial monogamy in pair bonds
- A desire for self-determination
- Basic self-preservation.
If you don’t understand the goals of your users, you really have no hope of designing for them. And if you’re designing an entire, all-consuming world for those same users, misjudging the human universals puts your entire project—and their world—at risk.
Final Grade C- (5 of 12), MATINEE
Related lessons from the book
- The Übercomputer’s all-caps, fixed-width text evokes “that look” of early computer interfaces (page 33), as do its OCR sans-serif typeface (page 37) and blue color (page 42).
- The SandPhone would have been much more useful as Augmented Reality (chapter 8, page 157).
- The Aesculaptor could use a complete revamp from the Medical Chapter (chapter 9, page 258), most notably using waveforms (page 263) and making it feel humane (page 281).
- The Evidence Tray reminds us of multifactor authentication (page 118).
- Of course The Circuit appears in the Sex chapter (chapter 13, page 293), and as my redesign showed, it needed to modernize its matchmaking (page 295) and use more subtle cues (page 301). Certainly Jessica-5 could have used a safeword (page 303).
- The Lifeclock reminds us to keep meaningful colors distinguishable.
- The Circuit shows why a serial presentation democratizes options.
- The Circuit also shows us that matchmaking must account for compatibility, availability, and interest.
- The Aesculaptor tells us why a system should never fail into a worse state.
- Carrousel implies that we shouldn’t hide the worst of a system, but should instead cover it in a dazzle pattern.
- The improvements I suggested for the SandPhone imply that solving problems higher up the goal chain is much harder but more disruptive.
- The Evidence Tray gives us the opposite of the “small interfaces” lesson (page 296): too large an interface can overpromise for small interactions.
I grew up in Texas, and had the chance to visit the Fort Worth Water Gardens and Market Center where some of the scenes were shot. So I have a weirdly personal connection to this movie. Despite that, on review, the interfaces just suck, bless their little interactive hearts. Use them as fodder for apologetics and perhaps as a cautionary tale, but little, little else.
Such a cool collection of interactive voice response systems, with high fives out to everyone who thought up great (and ofttimes obscure) “talkie computers” from decades of sci-fi from the 1950s to the 2000-teens. By name…
- kedamono x7
- Joe Bloch x10
- Burning x4
- Kelley Strang
- Pixel I/O
- pavellishin x2
- Steve Silvas x2
- Matt Sheehe
- Joe Bloch
- Matt Sheehe x2
- Lela x2
- Clayton x2
The list of talkie computers we collected is “Robby the Robot, Adam Link, Jupiter 2, Landru, M-5, Nomad probe, The Oracle, Beta-V, HAL, Colossus, BOXX, Thermostellar Triggering Device, IRAC, the Übercomputer, C-3PO, Alex 7000, Proteus IV, Zen, Orac, Slave, V-Ger, Artificial persons, Dr. Theopolis and TWKE-4, MU-TH-UR 6000, KITT, Replicants, Image Machine, MCP, SAL, Max, Holly, Kryten!, L7, 790, Sphere, Ship [sic], AMEE, Ship, Andromeda Ascendant, Zero, S.A.R.A.H., Andy the Deputy AI, Icarus, KITT, Otto, Gerty, and Jarvis.” Think you could name the movies and TV shows these are from just from these names?
The next step is to build a collection of the scripts of these interactions, since we’ll be analyzing any peculiar, non-standard English that we find. I’m down to provide these scripts myself, but it would be easier if we crowdsource it. If you’re up to it, head to the following form to add the metadata and line-by-line script of the interaction. You can often find the scripts with a simple Google Search, by popping in the VHS/DVD/Blu-Ray you own, or by finding a video of the scene on some online video service and transcribing it from there. We are interested in word-perfect transcriptions. Don’t sweat it if you don’t have the time yourself. As of Thanksgiving weekend, I’ll manually complete any unfinished ones that I find.
The form to add scripts: https://docs.google.com/forms/d/
Logan’s life is changed when he surrenders an ankh found on a particular runner. Instead of being asked to identify, the central computer merely stays quiet a long while as it scans the object. Then its lights shut off, and Logan has a conversation with the computer unlike any he has had before.
The computer asks him to “approach and identify.” It gives him, by name, explicit instructions to sit facing the screen. Lights below the seat illuminate. He identifies in this chair by positioning his lifeclock in a recess in the chair’s arm, and a light above him illuminates. Then a conversation ensues between Logan and the computer.
Hey small slice of the internet. I’m working with an awesome linguist, Anthony Stone of operativewords.com, on a project, and since I don’t know everything but you do, I’m wondering if you can help. We’re collecting examples of scenes from more serious movies and TV shows where a human is interacting with an artificial intelligence primarily through speech.
Example 1: In the ST:TOS episode "Mirror, Mirror" Captain Kirk speaks with his computer to learn if the ship could be used to get him back in the "good universe." (This dialogue was featured in the Learning chapter of the book.)
Example 2: In the movie "Logan’s Run" Logan speaks with the Übercomputer twice: once for questioning about the ankh, and once to report his findings about Sanctuary.
There are others, but we’d like to collect as many examples as we can to get a good "corpus" to work from on this sooper secret thingy. But of course it’s in the service of a blog post, so contribute away, and we’ll thank you in the post once it finally comes out. What do you think: Can you name any?
Sandmen surrender any physical objects recovered from the bodies of runners to the Übercomputer for evaluation via a strange device I’m calling The Evidence Tray.
As a Sandman enters the large interrogation chamber, a transparent cylinder lowers from the ceiling. At the top of this cylinder an arm continuously rotates, bearing four pin lights. A chrome cone sits in the center of the base. The Sandman can access the interior of the cylinder through a large oblong opening in the side, the top of which is just taller than the Sandmen (who seem to be of near-uniform height).
At dispatch for the central computer, Sandmen monitor a large screen that displays a wireframe plan of the city, including architectural detail and even plants, all color-coded using saturated reds, greens, and blues. When a Sandman has accepted the case of a runner, he appears as a yellow dot on the screen. The runner appears as a red dot. Weapons fire can even be seen as a bright flash of blue. The red dots of terminated runners fade from view.
Using the small screens and unlabeled arrays of red- and yellow-lit buttons situated on an angled panel in front of them, the seated Sandmen can send a call out to catch runners, listen to any spoken communications, and respond with text and images.
*UXsigh* What are we going to do with this thing? With an artificial intelligence literally steps behind them, why rely on a slow bunch of humans at all for answering questions and transmitting data? It might be better to just let the Sandmen do what they’re good at, and let the AI handle what it’s good at.
Sandmen have a clean-up crew to quickly rid the city’s floors of the unsightly corpses they create when they terminate runners. Logan summons one through the CB function of his SandPhone, telling dispatch, “Runner terminated, 0.31, ready for cleanup.”
Minutes later, Cleanup arrives. The crew floats around the city on slow-moving hover platforms that look a little like a vertical knee raise machine with anti-gravity pads and a faulty muffler. The controls aren’t apparent, but the operator maneuvers the platform over the cadaver to spray it with a fast-acting solvent emitted from the base.
That is the surface question of the Cleanup platform interface: if the operators don’t move, how are they controlling the platform? Of course it could be a brain interface, but that’s an easy answer. There are at least three alternative types of input that could explain what we see on screen.