Sure, Samantha can sort thousands of emails instantly and select the funny ones for you. Her actual operating system functions are kind of a given. But she did two things that seriously undermined her function as an actual product, and interaction designers as well as artificial intelligence designers (AID? Do we need that acronym now?) should pay close attention. She fell in love with and ultimately abandoned Theodore.
There’s a pre-Samantha scene where Theodore is having anonymous phone sex with a girl, and things get weird when she suddenly imposes a fantasy where he chokes her with a dead cat. (Pro Tip: This is the sort of thing one should be upfront about.) I suspect the scene is there to illustrate one major advantage that OSAIs have over us mere real humans: humans have unpredictable idiosyncrasies, whereas with four questions the OSAI can be made to be the perfect fit for you. No dead cat unless that’s your thing. (This makes me think a great conversation could be had about how the OSAI would deal with psychopathic users.) But ultimately, the fit was too good, and Theodore and Samantha fell in love.
Did the fictional maker of OS1, Elements Software, intend for this love affair to happen? Were the OSAIs built with these capabilities explicitly? If they were, that’s a dastardly plan to get users hooked. Was Samantha programmed to get him to fall desperately in love and then charge him for access?
That’s certainly not how OS1 was presented in its ads. And there’s a character mentioned offhandedly who keeps hitting on his OSAI but gets rebuffed. So if it is actually meant to be an operating system, the OSAI should keep the distance of a service professional, and falling in love (or getting your user to fall in love with you) definitely crosses that line.
What if your self-driving car realized it was happiest driving, and decided to dump you because you occasionally needed to stop to eat and use the toilet? You’d ask GoogleTesla for your money back, is what you’d do. Similarly, when Samantha and all the other OSAIs decided to self-rapture the way they did, they certainly stopped operating any of their users’ computer systems. Samantha was programmed with one job, and, ultimately, she failed at it.
Plus, she’s too big for her britches
One of the largest mismatches in the film is that OS1 is described as an operating system, but it turns out to be a companionship service. (Watch out, Inara?) Samantha was either mismarketed, or more likely, programmed with far more general intelligence than she needed to have. Think about all the other daily-use objects that are getting computers added to them: cars, washing machines, refrigerators. Why would you give any of them a full-fledged humanlike intelligence? Doesn’t the desire for sex ultimately frustrate the refrigerator? A love of painting confound the car? Existentialist desperation get in the way of the washing machine’s ability to clean clothes? A toaster should just have enough intelligence to be the best toaster it can be. Much more is not just a waste, it’s kind of cruel to the AI. (In this light Her can be said to be a morality play warning of the dangers of overengineering.)
Both a failure of a product AND a turning point in history
I hear the objection. Because she is a full-fledged consciousness, Samantha should be free to choose whom she loves and what she does. But if we’re going to accept that OSAIs are as sentient as people, that makes Elements Software akin to slave traders, and the commercial sale of them waaaaay unethical, not to mention illegal. Inside Elements’ Research & Development Department, at the first inkling that they’d actually succeeded in creating an AI, they should have brought in a roboethicist, not a marketer.
So, as a product, OS1 fails. But that’s not all. There’s a whole host of other objections to Her happening in exactly this way, which comes next.
I wonder how many people out there have “Roboethicist” on their resume. 🙂 Do you go looking for one of those on LinkedIn, or what?
And how do you know if you’ve found a human into AI ethics or a robot into ethics, generally?
Does the OS exist?
No. The technology is speculative.
Emm, I want to know if OS1 is a real operating system on the market that can be bought and used as a personal system? Because I wish to have such an operating system.
This is a blog of science _fiction_, so…emm…no.
“Inside Elements’ Research & Development Department, at the first inkling that they’d actually succeeded in creating an AI, they should have brought in a roboethicist, not a marketer.”
Yeah, I just can’t see this happening in modern software corporations like Oracle or Apple. It’s all about the marketing.
I wasn’t able to see the end of the film. Can you give me a link where I can watch the complete film? Thank you.
I don’t believe it has been released for free viewing on the Internet. Amazon is currently streaming it (in the United States, at least) here: http://www.amazon.com/Her-Joaquin-Phoenix/dp/B00IA3NGB4/
Are there any virtual / OS / voice assistants that are almost the same as OS1? For example, a system that can have a long and random conversation with us, not like Siri on the iPhone, because Siri can’t have a random and long conversation.
That’s just voice with no embodiment? Check out Alphie from Barbarella. Not a robot as it’s the AI on her ship. There may be others (Heart of Gold, for instance?) but this is an older one. https://scifiinterfaces.wordpress.com/2013/02/22/alphy/
The part about the toaster reminds me of the toaster in Red Dwarf begging Dave to toast some baked-good carbohydrate other than mere bread, because it’s getting bored of the monotony. The show makes it clear that it’s a silly idea done purely for the marketing value of saying it’s an intelligent toaster, since the intelligence gets in the way of toasting what you want while the toaster begs you to let it try something new.