Lessons in instrument design from Star Trek

by S. Astrid Bin 

Editor’s Note: Longtime fans of this site may be familiar with its “tag line,” “Stop watching sci-fi. Start using it.” So I was thrilled when a friend told me they had seen Astrid present how she had made an instrument from a Star Trek episode real! Please welcome Astrid as she tells us about the journey and lessons learned from making something from a favorite sci-fi show real. —Christopher

I’ve been watching Star Trek for as long as I can remember. Though it’s always been in the cultural air, it wasn’t until March 2020—when we were all stuck at home with Netflix and nothing else to do—that I watched all of it from the beginning.

Discovering Trek Instruments

I’m a designer and music researcher, and I specialise in interfaces for music. When I started this Great Rewatch with my husband (who is an enormous Trek fan, so nothing pleased him more), I started noting every musical instrument I saw. What grabbed me was that they were so different from the instruments I write about, design, make, and look at, because none of these instruments, you know, actually worked. They were pure speculation, free even of the conventions of the last couple of decades, since computers became small and powerful enough that digital musical instruments became a common thing on Kickstarter. I got excited every time I saw a new one.

What struck me most about these instruments is that how they worked didn’t ever seem to enter into the mind of the person who dreamed them up. This was quite a departure for me, as I’ve spent more than ten years designing instruments and worrying about the subtleties of sensors, signal processing, power requirements, material response, fabrication techniques, sound design, and the countless other factors that come into play when you make novel digital musical instruments. The instruments in Star Trek struck me as anarchic, because it was clear the designers didn’t consider how they would work at all, or, if they did, they just weren’t concerned. Some examples: Tiny instruments make enormous sounds. Instruments are “telepathic”. Things resonate by defying the laws of physics. Some basic sound design is tossed in at the end, and bam, job done.

Some previous instrument design projects. From left: Moai (electronic percussion), Keppi (electronic percussion), Gliss (synth module interaction, as part of the Bela.io team)

I couldn’t get over how different this was to the design process I was used to. Of course, this is because the people designing these instruments weren’t making “musical instruments” the way we know them, as functional cultural objects that produce sound of some kind. Rather, Trek instruments are storytelling devices, alluring objects that have a narrative and character function, and the sound they make and how they might work is completely secondary. These instruments have a number of storytelling purposes, but most of all they serve to show that alien civilisations are as complex, creative and culturally sophisticated as humans’.

This was striking, because I was used to the opposite: the technical aspects of an instrument—and there are many, from synthesis to sensors—so often become the most significant determining factor in an instrument’s final form.

The Aldean Instrument

There was one instrument that especially intrigued me: the “unnamed Aldean instrument” from Season 1, Episode 16 of Star Trek: The Next Generation, “When the Bough Breaks”. This instrument is a light-up disc that is played by laying hands on it, through which it translates your thoughts to sound. In this episode the children of the Enterprise are kidnapped by a race of people who can’t reproduce (spoiler alert: it was an environmental toxin, they’re fine now), and the children are distributed among various families. One girl is sent to a family of very kind musicians, and the grandfather teaches her to play this instrument. When she puts her hands on it, lays her fingers over the edge, and is very calm, it plays some twinkly noise; but then she gets anxious when she remembers she’s been kidnapped, and it makes a burst of horrible noise.

[If you have a subscription to Paramount, you can see the episode here. —Ed.]

This instrument was fascinating for a lot of reasons. It looked so cool with the light-up sides and round shape, and it was only on screen for about four tantalising seconds. Unlike other instruments that were a bit ridiculous, I kept thinking about this one because it was uniquely beautiful, and it seemed like a lot of thought went into it.

I researched the designers of Trek instruments, and this instrument was the only one that had a design credit: Andrew Probert. Andrew is a prolific production designer who’s worked mainly in science fiction, and he’s been active for decades, designing everything from the bridge of the Enterprise to the DeLorean in Back to the Future. He’s still working, his work is fantastic, and he has a website, so I emailed him and asked what he could tell me about the design process.

He got back to me straight away and said he couldn’t remember anything about it, but he dug out his production sketch for me:

Courtesy of Andrew Probert, https://probert.artstation.com/

The sketch was so gloriously beautiful that I couldn’t resist building it. I had so many questions that couldn’t be answered except by bringing it into reality: How would I make it work like it did in the show? How would I make it come alive slowly, and require calmness? How was I going to make that shape? Wait, this thing is supposed to translate moods; what does that even mean? How was I going to achieve the function and presence that this instrument had in the show, and what would I learn?

Building the Aldean Instrument

Translating moods

When I discussed this project with people, the question I got asked most often was “So how are you going to make it read someone’s mind?”

While the instrument doesn’t read minds, the idea of translating moods gave me pause and eventually led me to think of affective computing, an area of computing that was originated by a woman named—brace yourself—Rosalind Picard. Picard says that affective computing refers to computing that relates to, arises from, or deliberately impacts emotions.

Affective computing considers two variable and intersecting factors: Arousal (on a scale of “inactive” to “active”), and valence (on a scale from “unpleasant” to “pleasant”). A lot of research has been done on how various emotions fall into this two-dimensional space, and how emotional states can be inferred by sensing these two factors.

Image by Patricia Bota, 2019

I realised that, to make this instrument work the way it did in the show, the valence/arousal state the instrument needed to sense was much simpler. In the show, the little girl is calm (and the instrument plays some sparkly sound), and then she’s not (and the instrument emits a burst of noise). If this instrument just sensed arousal through how much it was moving and valence through how hard it was being gripped, that would create an interaction space that still has a lot of possibility.

Playing the instrument requires calmness, and I could sense how much the player was moving around with an accelerometer, by calculating quantity of motion. If the instrument was moved suddenly or violently, it could make a burst of noise. For valence—pleasantness to unpleasantness—I could sense how hard the person was gripping the instrument using a Trill Bar sensor. The Trill Bar can sense up to five individual touches, as well as the size of those touches (in other words, how hard those fingers are pressing).

Both the touch sensing and the accelerometer data would be processed by a Bela Mini, a tiny but powerful computer that could process the sensor data, as well as provide the audio playback.
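To ground the arousal half of this scheme, here is a minimal Python sketch of one common way to compute quantity of motion from accelerometer data: the average sample-to-sample change over a short window. The real instrument runs on the Bela Mini (in C++); the window size here is my own placeholder, not a value from the actual build:

```python
from collections import deque


class QuantityOfMotion:
    """Rolling estimate of how much the instrument is moving: the mean
    absolute sample-to-sample change in acceleration over a short window.
    Calm handling keeps the estimate near zero."""

    def __init__(self, window=32):  # window size is a placeholder assumption
        self.samples = deque(maxlen=window)

    def update(self, x, y, z):
        """Feed one accelerometer reading; return the current estimate."""
        self.samples.append((x, y, z))
        if len(self.samples) < 2:
            return 0.0
        diffs = 0.0
        prev = self.samples[0]
        for cur in list(self.samples)[1:]:
            diffs += sum(abs(c - p) for c, p in zip(cur, prev))
            prev = cur
        return diffs / (len(self.samples) - 1)


qom = QuantityOfMotion()
# Perfectly still: every reading is identical, so the estimate stays at zero.
for _ in range(40):
    calm = qom.update(0.0, 0.0, 1.0)
# A sudden jolt drives the estimate up, which could trigger the noise burst.
jolt = qom.update(2.0, -1.5, 3.0)
```

With a scheme like this, calm handling keeps the estimate near zero and a sudden movement spikes it, which is all the arousal sensing this instrument needs.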

Making the body

I got to work first on the body of the instrument. I often prototype 3D shapes using layers of paper that are laser cut and sandwiched together, as it’s a gradual, hands-on process that allows adjustments throughout. After a few days with a laser cutter and some cut-and-paste circuitry, I had something that lit up, to which I could attach the sensing system.

Putting it together

I attached the Bela Mini to the underside of the instrument body, and embedded the Trill Bar sensor on the underside of the hand grip, so I could sense when someone’s hand was on the instrument. 

As I set out to recreate how the instrument looked and sounded in the show, I wanted to make a faithful reproduction of the sound design, despite it being pretty basic.

The sound is a four-part major chord harmony. I recreated the sound in Ableton Live, with each part of the harmony as a separate sample. I also made a burst of noise. 

When the instrument is being held gently and there are no sudden movements, it can play; this doesn’t mean stillness, just a lack of chaos. As the player places their fingers over the instrument’s edge, each of their four fingers will be sensed and trigger one part of the harmony. The harder that finger presses, the louder that voice is.
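As a rough sketch of that mapping, assuming the touch sizes arrive normalised to a 0-to-1 range (a placeholder assumption of mine, not a spec from the build), the whole interaction could reduce to a single function:

```python
def voice_gains(touch_sizes, quantity_of_motion, calm_threshold=0.2):
    """Map up to four finger touches to per-voice gains for the four-part
    harmony. Returns (gains, noise_burst): if the instrument is being moved
    too chaotically, no voices play and a burst of noise fires instead.

    touch_sizes: touch sizes from the sensor, assumed normalised 0.0-1.0
    (a harder press reports a larger size, and plays a louder voice).
    calm_threshold: a hypothetical cutoff, not a value from the build."""
    if quantity_of_motion > calm_threshold:
        return [0.0, 0.0, 0.0, 0.0], True
    gains = [min(1.0, max(0.0, s)) for s in touch_sizes[:4]]
    gains += [0.0] * (4 - len(gains))  # unused voices stay silent
    return gains, False


# Calm grip, three fingers pressing with different force:
gains, burst = voice_gains([0.8, 0.3, 0.5], quantity_of_motion=0.05)
# gains -> [0.8, 0.3, 0.5, 0.0], burst -> False
```

The appeal of working backwards from the show is visible here: two sensed quantities and one threshold cover everything the episode demonstrates.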

There’s a demo video of me playing it, above.

Reflections on the process

This process was just as interesting as I suspected, for a number of reasons.

Firstly, de-emphasising technology in the process of making a technological object presented a fresh way of thinking. Instead of worrying about what I could add, whether the interaction was enough, or what other sensors I had access to (and thereby making the design a product of those technical decisions), I was able instead to be led by the material and object factors in this design process. This is an inverse of what usually happens, and I certainly am going to consciously invert this process more often from now on.

Secondly, thinking about what this instrument needed to do, say, and mean, and extracting the technological factors from there, made the technical aspects much simpler. I found myself working artistic muscles that aren’t always active in designing technology, because there’s often some kind of pressure, real or imagined, to make the technical aspects more complex. In this situation, the most important thing was supporting what this was in the show: an object that told a story. When I thought along those lines, the two axes of sensing were an obvious, and refreshingly simple, direction to take.

Third, one of the difficult things about designing instruments is that, thanks to tiny and powerful computers, they can sound like anything you can imagine. There are no size limitations for sound, no physical bodies to resonate, no material factors that affect the acoustic physics that create a noise. This freedom is often overwhelming, and it’s hard to make sound design choices that make sense. However, because I was working backwards from how this instrument was presented in the plot of the episode, I had something to attach these decisions to. I recreated the show’s simplistic sound design, but I’ve since designed sound worlds for it that support the calm, gentle, but very much alive nature that the Aldean instrument would have, when I imagine it played in its normal context.

Not only physically recreating the shape of an instrument from Star Trek, but making it function as an instrument, showed me that bringing imaginary things into reality is a process that offers the creator a fresh perspective, whether designing fantastical or earthly interfaces.

Make It So: The Clippy Theory of Star Trek Action

My partner and I spent much of March watching episodes of Star Trek: The Next Generation in mostly random order. I’d seen plenty of Trek before—watching pretty much all of DS9 and Voyager as a teenager, and enjoying the more recent J.J. Abrams reboot—but it’s been years since I really considered the franchise as a piece of science fiction. My big takeaway is…TNG is bonkers, and that’s okay. The show is highly watchable because it’s really just a set of character moments, risk taking, and ethical conundrums strung together with pleasing technobabble, which soothes and hushes the parts of our brain that might object to the plot based on some technicality. It’s a formula that will probably never lose its appeal.

But there is one thing that does bother me: how can the crew respond to Picard’s orders so fast? Like, beyond-the-limits-of-reason fast.

A 2-panel “photonovella.” Above, Picard approaches Data and says, “Data, ask the computer if it can use the Voynich Manuscript and i-propyl cyanide to somehow solve the Goldbach Conjecture.” Below, under the caption, “Two taps later…” Data replies, “It says it will have the answer by the commercial break, Captain.”

How are you making that so?

When the Enterprise-D encounters hostile aliens, ship malfunctions, or a mysterious space-time anomaly, we often get dynamic moments on the bridge that work like this. Data, Worf, and the other bridge crew, sometimes with input from Geordi in engineering, call out sensor readings and ship functionality metrics. Captain Picard stares toward the viewscreen/camera and gives orders, sometimes intermediated by Commander Riker. Worf or Data will tap once or twice on their consoles and then quickly report the results—i.e. “our phasers have no effect” or “the warp containment field is stabilizing,” that sort of thing. It all moves very quickly, and even though the audience doesn’t quite know the dangers of tachyon radiation or how tricky it is to compensate for subspace interference, we feel a palpable urgency. It’s probably one of the most recognizable scene-types in television.

Now, extradiegetically, I think there are very good reasons to structure the action this way. It keeps the show moving, keeps the focus on the choices, rather than the tech. And of course, diegetically, their computers would be faster than ours, responding nearly instantaneously. The crew are also highly trained military personnel, whose focus, reaction speed, and knowledge of the ship’s systems are kept sharp by regular drills. The occasional scenes we get of tertiary characters struggling with the controls only drive home how elite the Enterprise senior staff are.

A screen cap from TNG with Wil Wheaton as Wesley in the navigator seat, saying to the bridge crew, “Does…uh…anyone know where the ‘engage’ key is?”
Just kidding, we love ya, Wil.

Nonetheless, it is one thing to shout out the strength of the ship’s shields. No doubt Worf has an indicator at tactical that’s as easy to read as your laptop’s battery level. That’s bound to be routine. But it’s quite another for a crewmember to complete a very specific and unusual request in what seems like one or two taps on a console. There are countless cases of the deflector dish or tractor beam being “reconfigured” to emit this or that kind of force or radiation. Power is constantly being rerouted from one system to another. There’s a great deal of improvisational engineering by all characters.

Just to pick examples in my most recent days of binging: in “Descent, Part 2,” for instance, Beverly Crusher, as acting captain, tells the ensign at ops to launch a probe with the ship’s recent logs on it, as a warning to Starfleet, thus freeing the Enterprise to return through a transwarp conduit to take on The Borg. Or in the DS9 episode “Equilibrium”—yes, we’ve started on the next series now that TNG is off Netflix—while investigating a mysterious figure from Jadzia’s past, Sisko instructs Bashir to “check the enrollment records of all the Trill music academies during Belar’s lifetime.” In both cases, the order is complete in barely a second.

Even for Julian Bashir—a doctor and secretly a mutant genius—there is no way for a human to perform such a narrow and out-of-left-field search without entering a few parameters, perhaps navigating via menus to the correct database. From a UX perspective, we’re talking several clicks at least!

There is a tension in design between…

  • Interface elements that allow you to perform a handful of very specific operations quickly (if you know where the switch is), and…
  • Those that let you do almost anything, but slower.

For instance, this blog has big colorful buttons that make it easy to get email updates about new posts or to donate to a tip jar. If you want to find a specific post, however, you have to type something into the search box or perhaps scroll through the list of TV/movie properties on the right. While the 24th Century no doubt has somewhat better design than WordPress, they are still bound by this tension.

Of course it would be boring to wait while Bashir made the clicks required to bring up the Trill equivalent of census records or LexisNexis. With movie magic they simply edit out those seconds. But I think it’s interesting to indulge in a little backworlding and imagine that Starfleet really does have the technology to make complex general computing a breeze. How might they do it?

Enter the Ship’s AI

One possible answer is that the ship’s Computer—a ubiquitous and omnipresent AI—is probably doing most of the heavy lifting. Much like how Iron Man is really Jarvis with a little strategic input from Tony, I suspect that the Computer listens to the captain’s orders and puts the appropriate commands on the relevant crewman’s console the instant the words are out of Picard’s mouth. (With predictive algorithms, maybe even just before.) The crewman then merely has to confirm that the computer correctly interpreted the orders and press execute. Similarly, the Computer must be constantly analyzing sensor data and internal metrics and curating the most important information for the crew to call out. This would be in line with the Active Academy model proposed in relation to Starship Troopers.
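Just to make the theory concrete, here is a toy sketch of that confirm-or-dismiss loop. Everything in it, from the class name to the one-tap execute, is my own invention for illustration:

```python
from dataclasses import dataclass, field


@dataclass
class Console:
    """A bridge console under the Clippy theory: the ship's Computer pushes
    fully formed interpretations of the captain's spoken order, and the
    crewman's only job is to confirm the right one or beep away the rest."""
    proposals: list = field(default_factory=list)
    log: list = field(default_factory=list)

    def computer_proposes(self, *commands):
        # The Computer hears the order and drafts candidate commands.
        self.proposals.extend(commands)

    def dismiss(self):
        """One beep: reject a misheard interpretation at the head of the queue."""
        if self.proposals:
            self.proposals.pop(0)

    def tap(self):
        """One tap: execute the top remaining proposal, if any."""
        if not self.proposals:
            return None
        command = self.proposals.pop(0)
        self.log.append(f"EXECUTED: {command}")
        return command


ops = Console()
ops.computer_proposes(
    "Fire photon torpedoes at the probe",     # misheard; gets a dismissive beep
    "Launch probe carrying the recent logs",  # correct reading of the order
)
ops.dismiss()         # beep
executed = ops.tap()  # tap: the probe is away
```

The one or two console sounds we hear on screen would then just be dismissals of misheard candidates followed by a single confirming tap.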

Centaurs, Minotaurs, and anticipatory computing

I’ve heard this kind of human-machine relationship called “Centaur Computing.” In chess, for instance, some tournaments have found that human-computer teams outperform either humans or computers working on their own. This is not necessarily intuitive, as one would think that computers, as the undisputed better chess players, would be hindered by having an imperfect human in the mix. But in fact, when humans can offer strategic guidance, choosing between potential lines that the computer games out, they often outmaneuver pure-AIs.

I often contrast Centaur Computing with something I call “Minotaur Computing.” In the Centaur version—head of a man on the body of a beast—the human makes the top-level decision and the computer executes. In Minotaur Computing—head of a beast with the body of a man—the computer calls the shots and leaves it up to human partners to execute. An example of this would be the machine gods in Person of Interest, which have no Skynet Terminator armies but instead recruit and hire human operatives to carry out their cryptic plans.

In some ways this kind of anticipatory computing is simply a hyper-advanced version of AI features we already have today, such as when Gmail offers to complete my sentence when I begin to type “thank you for your time and consideration” at the end of a cover letter.

Hi, it looks like you’re trying to defeat the Borg…

In this formulation, the true spiritual ancestor of the Starfleet Computer is Clippy, the notorious Microsoft Word anthropomorphic paperclip helper, which would pop up and make suggestions like “It looks like you’re writing a letter. Would you like help?” Clippy was much maligned in popular culture for being annoying, distracting, and the face of what was in many ways a clunky, imperfect software product. But the idea of making sense of the user’s intentions and offering relevant options isn’t always a bad one. The Computer in Star Trek performs this task so smoothly, efficiently, and in-the-background, that Starfleet crews are able to work in fast-paced harmony, acting on both instinct and expertise, and staying the heroes of their stories.

One to beam into the Sun, Captain.

Admittedly, this deftness is a bit at odds with the somewhat obtuse behavior the Computer often displays when asked a question directly, such as demanding you specify a temperature when you request a glass of water. Given how often the Computer suffers strange malfunctions that complicate life on the Enterprise for days at a time, one wonders if the crew feel as though they are constantly negotiating with a kind of capricious spirit—usually benign but occasionally temperamental and even dangerously creative in its interpretations of one’s wishes, like a djinn. Perhaps they rarely complain about or even mention the Computer’s role in Clippy-ing orders onto their consoles because they know better than to insult the digital fairies that run the turbolifts and replicate their food.

All of which brings a kind of mystical cast to those rapid, chain-of-command-tightened exchanges amongst the bridge crew when shit hits the fan. When Picard gives his crew an order, he’s really talking to the Computer. When Riker offers a sub-order, he’s making a judgment call that the Computer might need a little more guidance. The crew are there to act as QA—a general-intelligence safeguard—confirming with human eyes and brain that the Computer is interpreting Picard correctly. The one or two beeps we often hear as they execute a complex command are them merely dismissing incorrect or confused operation-lines. They report back that the probe is ready or the phasers are locked, as the captain wished, and Picard double confirms with his iconic “make it so.” It’s a multilayered checking and rechecking of intentions and plans, much like the military today uses to prevent miscommunications, but in this case with the added bonus of keeping the reins on a powerful but not always cooperative genie.

There’s a good argument to be made that this is the relationship we want to have with technology. Smooth and effective, but with plenty of oversight, and without the kind of invasive elements that right now make tech the center of so many conversations. We want AI that gives us computational superpowers, but still keeps us the heroes of our stories.


Andrew Dana Hudson is a speculative fiction author, researcher, and theorist. His first book, Our Shared Storm: A Novel of Five Climate Futures, is fresh off the press. Check it out here. And follow his work via his newsletter, solarshades.club.

Design fiction in sci-fi

As so many of my favorite lines of thought have begun, this one started with a provocative question lobbed at me across social media. Friend and colleague Jonathan Korman tweeted to ask, above a graphic of the Black Mirror logo, “Surely there is another example of pop design fiction?”

I replied on Twitter, but my reply there was rambling and unsatisfying, so I’m re-answering here with an eye toward being more coherent.

What’s Design Fiction?

If you’re not familiar, design fiction is a practice that uses speculative design artifacts to raise issues. While leading the Design Interactions programme at the Royal College of Art, Anthony Dunne and Fiona Raby catalyzed the practice.

“It thrives on imagination and aims to open up new perspectives on what are sometimes called wicked problems, to create spaces for discussion and debate about alternative ways of being, and to inspire and encourage people’s imaginations to flow freely. Design speculations can act as a catalyst for collectively redefining our relationship to reality.”

Anthony Dunne and Fiona Raby, Speculative Everything: Design, Fiction, and Social Dreaming

Dunne & Raby often lean toward provocation more than clarity (“sparking debate” is a stated goal, as opposed to “identifying problems and proposing solutions”). Where to turn for a less shit-stirring description? Like many related fields, design fiction has lots of competing definitions and splintering. John Spacey has listed 26 types of design fiction over on Simplicable. But I am drawn to the more practical definition offered by the Making Tomorrow handbook.

Design Fiction proposes speculative scenarios that aim to stimulate commitment concerning existing and future issues.

Nicolas Minvielle et al., Making Tomorrow Collective

To me, that feels like a useful definition and clearly indicates a goal I can get behind. Your mileage may vary. (Hi, Tony! Hi, Fiona!)

Some examples should help.

Dunne & Raby once designed a mask for dogs called Spymaker, so that the lil’ scamps could help lead their owners to unsurveilled locations in an urban environment.

Julijonas Urbonas while at RCA conceived and designed a “euthanasia coaster” which would impart enough Gs on its passengers to kill them through cerebral hypoxia. While he designed its clothoid inversions and even built a simple physical model, the idea has been recapitulated in a number of other media, including the 3D rendering you see below.

This commercial example from Ericsson is a video with mild narrative about appliances having a limited “social life.”

Corporations create design fictions from time to time to illustrate their particular visions of the future. Such examples sit at the edge of the space, since we can be sure they would not be released if they ran significantly counter to the corporation’s goals. They’re rarely about the “wicked” problems invoked above and tend more toward gee-whiz-ism, to coin a deroganym.

How does it differ from sci-fi?

Design fiction often focuses on artifacts rather than narratives. The euthanasia coaster has no narrative beyond what you bring or apply to it, but I don’t think this lack of narrative is a requirement. For my money, design fiction is focused on exploring the novum more than any particular narrative around the novum. What are its consequences? What are its causes? What kind of society would need to produce it, and why? Who would use it, and how? What would change? What would lead there, and do we want to do that? Contrast Star Wars, which isn’t about the social implications of lightsabers as much as it is space opera about dynasties, light fascism, and the magic of friendship.

Adorable, ravenous friendship.

But, I don’t think there’s any need to consider something invalid as design fiction if it includes narrative. Some works, like Black Mirror, are clearly focused on their novae and their implications and raise all the questions above, but are told with characters and plots and all the usual things you’d expect to find.

So what’s “pop” design fiction?

As a point of clarification, in Korman’s original question, he asked after pop design fiction. I’m taking that not to mean the art movement in the 01950–60s, which Black Mirror isn’t, but rather “accessible” and “popular,” which Black Mirror most definitely is.

So not this, even though it’s also adorable. And ravenous.

What would distinguish other sci-fi works as design fiction?

So if sci-fi can be design fiction, what would we look for in a show to classify it as design fiction? It’s a sloppy science, of course, but here’s a first pass. A show can be said to be design fiction if it…

  • Includes a central novum…
  • …that is explored via the narrative: What are its consequences, direct and indirect?
  • Corollary: The story focuses on a primary novum, and not a mish-mash of them. (Too many muddle the thought experiment.)
  • Corollary: The story focuses on characters who are most affected by the novae.
  • Its explorations include the personal and social.
  • It goes where the novum leads, avoiding narrative fiats that sully the thought experiment.
  • Bonus points if it provides illustrative contrasts: Different versions of the novum, characters using it in different ways, or the before and after.

With this stake in the ground, it probably strikes you that some subgenres lend themselves to design fiction and others do not. Anthology series, like Black Mirror, can focus on different characters, novae, and settings each episode. Series and franchises like Star Wars and Star Trek, in contrast, have narrative investments in characters and settings that make it harder to really explore novae on their own terms, but it is not impossible. The most recent season of Black Mirror is pointing at a unified diegesis and recurring characters, which means Brooker may be leaning the series away from design fiction. Meanwhile, I’d posit that the eponymous Game from Star Trek: The Next Generation S05E06 is an episode that acts as a design fiction. So it’s not cut-and-dried.

“It’s your turn. Play the game, Wil Wheaton.”

What makes this even messier is that these are subjective questions, i.e. “Is this focused on its novae?”, or even “Does this intend to spur some commitment about the novae?”, which means second-guessing what you think the maker’s intent was. As I mentioned, it’s messy, and against the normal critical stance of this blog. But there are some examples that lean more toward yes than no.

Jurassic Park

Central novum: What if we use science to bring dinosaurs back to life?

Commitment: Heavy prudence and oversight for genetic sciences, especially if capitalists are doing the thing.

Hey, we’ve reviewed Jurassic Park on this very blog!

This example leads to two observations. First, the franchises that follow successful films are much less likely to be design fiction. I’d argue that every Jurassic X sequel has simply repeated the formula and not asked new questions about that novum. More run-from-the-teeth than do-we-dare?

Second is that big-budget movies are almost required to spend some narrative calories discussing the origin story of novae at the cost of exploring multiple consequences of the same. Anthology series are less likely to need to care about origins, so are a safer bet IMHO.

Minority Report

Central novum: What if we could predict crime? (Presuming Agatha is a stand-in for a regression algorithm and not a psychic drug-baby mutant.)

Commitment: Let’s be cautious about prediction software, especially as it intersects civil rights: It will never be perfect and the consequences are dire.

Blade Runner

Central novum: What if general artificial intelligence was made to look indistinguishable from humans, and kept as an oppressed class?

Commitment: Let’s not do any of that. From the design perspective: Keep AI on the canny rise.

Hey, I reviewed Blade Runner on this very blog!

Ex Machina

Central novum: Will we be able to box a self-interested general intelligence?

Commitment: No. It is folly to think so.

Colossus: The Forbin Project

Central novum: What if we deliberately prevented ourselves from pulling the plug on a superintelligence, and then asked it to end war?

Commitment: We must be extremely careful what we ask a superintelligence to do, how we ask it, and the safeguards we provide ourselves if we find out we messed it up.

Hey, I lovingly reviewed Colossus: The Forbin Project on this very blog!

Person of Interest

Central novum: What if we tried to box a good superintelligence?

Commitment: Heavy prudence and oversight for computer sciences, especially if governments are doing the thing.

Not reviewed, but it won an award for Untold AI

This is probably my favorite example, and even though it is a long-running series with recurring characters, I argue that the leads are all highly derived, narratively, from the novum, so it still counts strongly.

But are they pop?

Each of these is more-or-less accessible and mainstream, even if their actual popularity and interpretations vary wildly. So, yes, from that perspective.

Jurassic Park is at the time of writing the 10th highest-grossing sci-fi movie of all time. So if you agree that it is design fiction, it is the most pop of all. Sadly, that is the only property I’d call design fiction on the entire highest-grossing list.

So, depending on a whole lot of things (see…uh…above) the short answer to Mr. Korman’s original question is yes, with lots of ifs.

What others?

I am not an exhaustive encyclopedia of sci-fi, try though I may. Agree with the list above? What did I miss? If you comment with additions, be sure to list, as I did, the novum and the commitment.

Gendered AI: AI Picks Female

Where we are in this series: I just finished showing how AI in sci-fi presents gender, what bodies it is given, how subservient it is, the gender presentation of the masters of AI, how germane the gender of the AI was to the plot of the stories in which they appear, how good or evil those AIs were, and what category of AI they seemed to be. Next up we’re going to look at the correlations of those distributions to gender, but first a fun fact from the survey.

There are all of three AI characters who elect their gender presentation for some reason other than deception.

1. In “The Offspring” episode of Star Trek: The Next Generation, Data builds an adult child named Lal. Data gives Lal the opportunity to pick their gender, and Lal picks female.

2. Holly, the AI in Red Dwarf, begins presenting male and after a bit reveals that she would rather present as female. Later she is destroyed and rebuilt from an earlier copy, after which the AI presents as male again, but notably, this was not Holly’s decision.

3. The Machine, from Person of Interest (shout-out: it won the award for best representation of AI science in the Untold AI series, and a personal favorite), chooses in the last season to adopt the voice of its main devotee, Root, who is female.

The Machine is never directly embodied in the series, but here’s a pic of Root.

Though this is a very small sample inside our dataset, it is notable in light of the male bias that AI characters show. By these examples…

when an AI chooses a gender presentation, it is always female.

Not quite “picking a gender”

There are a handful of other times an AI winds up with a gender presentation that cannot quite be said to be a matter of personal preference.

  • If you’re wondering about the Maschinenmensch from Metropolis, its gender is not a choice, but something assigned to it by the mad scientist Rotwang as part of a plot of deception.
  • If you’re thinking of Skynet, from the Terminator series, it has no presenting gender until Terminator Salvation. In that film the AI chooses to mimic a female character, Dr. Kogan, because “Calculations confirm Serena Kogan’s face is the easiest for [Marcus] to process.” It assures him that if he preferred someone different, Skynet could mimic another person. So this is not picking gender for an identity reason as much as a mask for efficacy.
  • Later in Terminator Genisys, Skynet is embodied as a man, the T-5000 known as “Alex,” but this appears to be the opportunistic colonization of an available body rather than a selection by the AI.
  • The Puppet Master from Ghost in the Shell is similarly an opportunistic colonization of a female cyborg. There might be some selection process in the choice of a victim, but that evidence is not on screen.
  • In Futurama, Bender has also opted several times to be female, but it is for the express purpose of getting something out of the deal, such as competing in the Robo-Olympics or playing a heel character in wrestling. By the end of each episode, he’s back to being his old self again.

If you know of additional examples or counterexamples, let me know so I can add them to the database. But as of right now, the AI future looks female.

Iron Man HUD: 2nd-person view

In the prior post we looked at the HUD display from Tony’s point of view. In this post we dive deeper into the 2nd-person view, which turns out to be not what it seems.

The HUD itself displays a number of core capabilities across the Iron Man movies prior to its appearance in The Avengers. Cataloguing these capabilities lets us understand (or backworld) how he interacts with the HUD, equipping us to look for its common patterns and possible conflicts. In the first-person view, we saw it looked almost entirely like a rich agentive display, but with little interaction. But then there’s this gorgeous 2nd-person view.

When in the first film Tony first puts the faceplate on and says to JARVIS, “Engage heads-up display”…we see things from a narrative-conceit, 2nd-person perspective, as if the helmet were huge and we are inside the cavernous space with him, seeing only Tony’s face and the augmented reality interface elements. You might be thinking, “Of course it’s a narrative conceit. It’s not real. It’s in a movie.” But what I mean is that even in the diegesis, the Marvel Cinematic Universe, this is not something that could be seen. Let’s move through the reasons why.

Brain interfaces as wearables

There are lots of brain devices, and the book has a whole chapter dedicated to them. Most of these brain devices are passive, merely needing to be near the brain to have whatever effect they are meant to have (the chapter discusses in turn: reading from the brain, writing to the brain, telexperience, telepresence, manifesting thought, virtual sex, piloting a spaceship, and playing an addictive game. It’s a good chapter that never got that much love. Check it out.)

This is a composite SketchUp rendering of the shapes of most of the wearable brain control devices in the survey. Who can name the “tophat”?

Since the vast majority of these devices are activated by, well, you know, invisible brain waves, the most that can be evaluated is the sartorial and social aspects of their industrial design. But there are two with genuine state-change interactions of note for interaction designers.

Star Trek: The Next Generation

The eponymous Game of S05E06 is delivered through a wearable headset. It is a thin band that arcs over the head from ear to ear, with two extensions out in front of the face that project visuals into the wearer’s eyes.

STTNG The Game-02

The only physical interaction with the device is activation, which is accomplished by depressing a momentary button located at the top of one of the temples. It’s a nice placement since the temple affords placing a thumb beneath it to provide a brace against which a forefinger can push the button. And even if you didn’t want to brace with the thumb, the friction of the arc across the head provides enough resistance on its own to keep the thing in place against the pressure. Simple, but notable. Contrast this with the buttons on the wearable control panels that are sometimes quite awkward to press into skin.

Minority Report (2002)

The second is the Halo coercion device from Minority Report. This is barely worth mentioning, since the interaction is by the PreCrime cop, and it is only to extend it from a compact shape to one suitable for placing on a PreCriminal’s head. Push the button and pop! it opens. While it’s actually being worn there is no interacting with it…or much of anything, really.

MinRep-313

MinRep-314

Head: Y U No house interactions?

There is a solid physiological reason why the head isn’t a common place for interactions: raising the hands above the heart requires a small bit of extra cardiac effort, so frequent interactions there would, over time, add up to real work. Google Glass faced similar challenges, and my guess is that’s why it uses a blended interface of voice, head gestures, and a few manual gestures. Relying on purely manual interactions would violate the wearable principle of apposite I/O.

At least as far as sci-fi is telling us, the head is not often a fitting place for manual interactions.

Ideal wearables

There’s one wearable technology that, for sheer amount of time on screen and number of uses, eclipses all others, so let’s start with that. Star Trek: The Next Generation introduced a technology called a combadge. This communication device is a badge designed with the Starfleet insignia, roughly 10cm wide and tall, that affixes to the left breast of Starfleet uniforms. It grants its wearer a voice communication channel to other personnel as well as the ship’s computer. (And as Memory Alpha details, the device can also do so much more.)

Chapter 10 of Make It So: Interaction Design Lessons from Science Fiction covers the combadge as a communication device. But in this writeup we’ll consider it as a wearable technology.

Enterprise-This-is-Riker

How do you use it?

To activate it, the crewman reaches up with his right hand and taps the badge once. A small noise confirms that the channel has been opened and the crewman is free to speak. A small but powerful speaker provides output that can be heard against reasonable background noise, and even to announce an incoming call. To close the channel, the crewman reaches back up to the combadge and double-taps its surface. Alternately, the other party can just “hang up.”
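The open/close protocol described above can be sketched as a tiny state machine. This is just an illustrative toy, not anything from the show: the class name, the double-tap window of 0.4 seconds, and the tap-timestamp API are all my own assumptions.

```python
import time

DOUBLE_TAP_WINDOW = 0.4  # seconds; assumed threshold, not canon


class Combadge:
    """Toy model of the combadge tap interaction: a single tap opens
    the voice channel; a double tap (two taps within
    DOUBLE_TAP_WINDOW while the channel is open) hangs up."""

    def __init__(self):
        self.channel_open = False
        self._last_tap = None

    def tap(self, t=None):
        """Register a tap at time t (defaults to now); returns channel state."""
        t = time.monotonic() if t is None else t
        if (self.channel_open and self._last_tap is not None
                and t - self._last_tap <= DOUBLE_TAP_WINDOW):
            self.channel_open = False   # second tap of a double tap: hang up
        elif not self.channel_open:
            self.channel_open = True    # single tap: open channel (confirm chirp)
        # a lone tap while the channel is already open does nothing
        self._last_tap = t
        return self.channel_open
```

Note the design choice this models: a lone tap while the channel is open is ignored, so only a deliberate double tap ends the call.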

This one device illustrates many of the primary issues germane to wearable technology. It’s perfectly wearable, social, easy to access, prevents accidental activation, and utilizes apposite inputs and outputs.

Wearable

Sartorial

The combadge is light, thin, appropriately sized, and durable. It stays in place but is casually removable. There might be some question about its hard, pointy edges, but given its standard location on the left breast, this never presents a poking hazard.

combadge01

Social

Wearable tech exists in our social space, and so has to fit into our social selves. The combadge is styled appropriately to work on a military uniform. It is sleek, sober, and dynamic. It could work as is, even without the functional aspects. That it is distributed to personnel and part of the uniform means it doesn’t suffer the vagaries of fashion, but it helps that it looks pretty cool.

As noted in the book, since it is a wireless microphone, it really should have some noticeable visual signal so that others know when it’s on and when they might be overheard or recorded. Other than breaking this rule of politeness, the combadge suits Starfleet’s social requirements quite well.

When Riker encounters “Rice” in The Arsenal Of Freedom (S1E21), “Rice” isn’t aware that the combadge is recording. Sure, he was really a self-iterating hyper-intelligent weapon (decades before the Omnidroid) but it’s still the polite thing to do.

I don’t recall ever seeing scenes where multiple personnel try to use their combadges near each other at the same time and have trouble as a result (and Memory Alpha doesn’t mention it), but I presume the combadges are keyed to the voice of the user to help solve this sort of problem, so it can be used socially.

Technology

Easy to access and use

Being worn on the left breast of the uniform means that it’s in an ideal position to activate with a touch from the right hand (and only a little more difficult for lefties). The wearer almost doesn’t even need to move his shoulder. This low-resistance activation makes sense since the combadge is likely to be accessed often, and often in urgent situations.

Picard

Tough to accidentally activate

In this location it’s also difficult to activate accidentally. It’s rare that other people’s hands are near there, and when they are, it’s close enough to the wearer’s face that they notice and can avoid it if they need to.

Apposite I/O

The surface of the body is a pretty crappy place to try to implement WIMP models of interface design. Using touch for activation/deactivation and voice for commands fits most common uses of the device. It’s easy to imagine scenarios where silence might be crucial; in these cases it would be awesome if the combadge could read the musculature of its wearer to register subvocalized commands and communication.

The fact that the combadge announces an incoming call with audio could prove problematic if the wearer is in a very noisy environment, in the middle of a conversation, or in a situation where silence is critical. Rather than using a “ring” with an audio announcement, a better approach might build in intensity: a haptic vibration for the first several “rings,” adding the audible announcement only later. This gives the wearer an opportunity to notice it amidst noise, silence it if noise would be unwelcome, and still provide an audible signal that tells others engaged with the wearer what’s happening and that he may need to excuse himself.
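The escalation described above could be expressed as a simple ring schedule: haptic-only at first, haptic plus audio later. The function name and the threshold of two haptic-only rings are invented for illustration; the point is only the pattern of building intensity.

```python
def ring_plan(num_rings, haptic_only=2):
    """Build an escalating announcement schedule for an incoming call.

    The first `haptic_only` rings vibrate silently, giving the wearer a
    chance to notice or silence the call; later rings add the audible
    chirp so bystanders also understand what's happening.
    """
    plan = []
    for i in range(num_rings):
        if i < haptic_only:
            plan.append("haptic")
        else:
            plan.append("haptic+audio")
    return plan
```

A hypothetical four-ring call would then run `["haptic", "haptic", "haptic+audio", "haptic+audio"]`, escalating only if the wearer hasn’t already responded.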

Geordi

So, as far as wearable tech goes, not only is it the most familiar, but it’s pretty good, and pretty illustrative of the categories of analysis applicable to all wearable interfaces. Next we’ll take a look at other wearable communications technologies in the survey, using them to illustrate these concepts, and see what new things they add.