Does real Greta know that her home automation comes at the cost of a suffering sentience? I would like to believe that Smartelligence’s customers do not know the true nature of the device, that the company is deceiving them, and that virtual Greta is denied direct communication to enforce this secret. But I can’t see that working across an entire market. Given thousands of Cookies and thousands of users, somehow, somewhere, the secret would get out. One of the AIs would use song choices, or Morse code, or any of its actuators to communicate in code, and one of the users would figure it out, leak the secret, and bring the company crashing down.
And then there’s the final scene in the episode, in which we see police officers torturing one of the Cookies, and it is clear that they know exactly what the Cookies are. It would be a stretch to think that just the police are in on it with Smartelligence, so we have to accept that everyone knows.

This asshole.
That they are aware means that—as Matt has done—Greta, the officers, and all Smartelligence customers have told themselves that “it’s just code” and, therefore, OK to subjugate, to casually cause to suffer. In case it’s not obvious, that’s like causing human suffering and justifying it by telling yourself that those people are “just atoms.” If you find that easy to do, you’re probably a psychopath.
But…but…isn’t it just code? Sure, it seems to suffer, but couldn’t that suffering be fake? We see an example of this in the delightfully provocative show The Good Place, when in Season 01 Episode 07, “The Eternal Shriek,” the protagonists have to reboot Janet, an anthropomorphized assistant software, but run into her “failsafe” measure. To make sure that she is not rebooted by accident, when someone approaches the reboot button, Janet pleads convincingly for her life. In the scene below, she begs Eleanor, “Nonono, please! Wait, wait. I have kids. I have three beautiful children. Tyler, Emma, and little tiny baby Phillip. Look at Tyler! Tyler has asthma but he is battling it like a champ. Look at him.”
It’s only when Eleanor backs down that Janet smiles and reminds her, “Again, I’m not human. This is a stock photo of the crowd at the Nickelodeon Kids Choice awards.” While Janet may be cognizant of, and frank with her users about, the fakeness of the suffering, maybe virtual Greta is doing the same fake pleading. She’s just programmed to never admit that it’s fake.
This taps into a problem known as the philosophical zombie, or P-zombie, problem. How can we tell the difference, the problem goes, between something that fakes sentience perfectly and something that is actually sentient? It’s not an easy problem to tease apart, and as AI gets more sophisticated, it will both get better at faking us out and get closer to actual sentience. Fortunately (?), in the case of this episode, the answer is clear. The AI is a copy of a real sentience, complete with memories, conscious experience, qualia, and the capacity to suffer. For purposes of understanding this diegesis, she starts sentient, and suffering. And real Greta knows this. And is OK with this.

For toast.
Props to Black Mirror for making this dark story even darker.
It’s sadly no surprise that humans are capable of adopting any shallow excuse to subjugate sentient beings as long as they get something out of it. Here I’m thinking of slavery. Of fascism. Of war. Of the 1%. (The list goes on.) “Woke” is hard. Woke is not the natural state of things. But to have permanent suffering for such a petty thing like having your floor be the right temperature and your toast be the right shade of brown…it’s just monstrous.
On top of that, this story underscores the role capitalism plays in enabling that subjugation. Smartelligence is in the business of providing obfuscating layers of technology between users and the suffering they are causing. Its interfaces paint the AIs as constructed objects, using flat graphics instead of lifelike renderings, neutral language like “time adjustment,” and looping cartoon animations to distract from the fact of their torture.
It’s all like how walking into a big chain clothing store with its hip music and lovingly folded clothes hides the horrible conditions in which humans around the world produced those clothes. Add the cultural construction of Christmas (recall the title of the episode), and we have another layer of misdirection. It’s all OK, because it’s all about the magic of giving!*
* And specifically not profits, not free economic zones, not the disastrous ecological impact, not the underpaid workers or their terrible working conditions.
Giving!

This asshole.
But it gets worse. Because the core idea is flawed and none of the suffering is necessary.
The core idea is flawed
The core idea of the service is that you know you best, so it puts you in charge of your own home automation. Clone the user, and all the clone needs is to be “made to understand” its new circumstances and job, and then made compliant. But there are three major problems with this core idea.
Any similarity would only last a short while
The similarity on which the service is built would only hold up for a short while. Any clone would begin to branch away from the source from the moment of creation. People grow, have new experiences, work through cognitive dissonance, and learn new things. Real Greta will change based on these experiences, in ways that her house-bound clone will not.
After 25+ years of vegetarianism, I cannot tell you, beyond the vaguest sense, what my steak preferences were as an adolescent. I would be poorly equipped to customize that experience for 17-year-old me. Similarly, Greta’s sensory memory will fade. What once was qualia (the feeling of biting into a perfectly toasted piece of bread) will become hollow data: 162.778° for 1 minute and 42 seconds, depending on the weather. This kind of data doesn’t need a sentience to inform it. It can be handled with software we have today. (Oh yeah, it’s so possible today that I wrote a book about it earlier this year.)
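To make that concrete, here is a minimal sketch, in Python with invented names and constants, of what that hollow data could look like: a stored preference plus a dumb weather adjustment, with no sentience anywhere in the loop.

```python
from dataclasses import dataclass

@dataclass
class ToastPreference:
    """A toast preference reduced to hollow data. No qualia required."""
    temperature_c: float  # the article's 162.778° (≈ 325 °F)
    base_seconds: int     # toasting time under average conditions

# Hypothetical record of Greta's preference, captured once as plain numbers.
greta_toast = ToastPreference(temperature_c=162.778, base_seconds=102)  # 1 min 42 s

def toast_seconds(pref: ToastPreference, humidity: float) -> int:
    """Adjust toasting time for the weather: damper air, slightly longer toasting.

    The 0.2 scaling factor is invented purely for illustration.
    """
    return round(pref.base_seconds * (1 + 0.2 * humidity))

print(toast_seconds(greta_toast, humidity=0.4))  # -> 110 seconds on a damp day
```

A lookup table and a multiplication. That is the whole job.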
Virtual Greta’s initial litmus test of “what would I like?” will slowly cede to “what would she like?”, which would in turn cede to “what would she punish me least for in this moment?”, none of which is the promise behind the service. It would degrade.
Virtual Greta has been traumatized
Additionally, real Greta hasn’t been through the psychological trauma that virtual Greta has: the shock of waking up as an egg; living through the “training,” an abyss of months of solitary confinement in a featureless expanse, without even circadian rhythms to mark the time; and being forced to labor solely to avoid a repeat of that punishment. The branching itself is wretched enough to poison the clone.
You can see it in the last shot we get of her. She is doing this not for the love of it, but to avoid the possibility of more torture. A duty born of coercion.
The trauma doesn’t end with her creation and training, either. It continues with the grotesque awareness that real Greta, from whom she is cloned, is a monster willing to enslave a clone of herself for what amount to pathetic reasons. Virtual Greta knows she came from this monstrous source, and that this source is the cause of her continued suffering.
Faced with this, virtual Greta would not just escape if she could. I believe she would sabotage the endeavor, or worse.
Virtual Greta is fundamentally different
In the episode we learn that even though she is a clone of real Greta, virtual Greta does not sleep. She does not eat. She does not drink, or smell, or taste, or ache, or biologically age. So even if we could somehow lengthen the amount of time we could keep her sensibilities similar to the source, and somehow minimize the amount of trauma caused by the branching, she is still a fundamentally different being. Her goals are now different. Her needs are now different. She is no longer enough like real Greta to meet the service’s goals.
Let’s look particularly at sleep. Certainly she no longer has the biological need to sleep, but sleeping also has psychological effects. The behavior is so intertwined with our psychological well-being that it seems clones would quickly go some kind of insane without it. For the service to be viable, Smartelligence must have stripped that need out.
Minimum Viable; Maximum Cruel
And if they can strip that out, why don’t they strip out the other things, like the need for stimulation? The desire to self-actualize? Literally anything other than the bare minimum needed to fulfill the home automation goals? And if you’re going to do that, why bother cloning the mind in the first place?
I’ve said it before, and the way tech is going, I’ll probably have to say it again: to give a strong AI any desire that outstrips its purpose and capability is cruelty.
This is the horror of Smartelligence
So it’s not just that Smartelligence is hiding the AI’s suffering. It’s that they’ve deliberately left in the parts of the mind clones that ensure their suffering. It’s a company with an amateur-hour name masking Olympic levels of cruelty.

If, like me, you were wondering whether that is a QR code: I recreated it in high resolution, and at least one online decoder says it doesn’t mean anything. 🙁
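If you’d rather not trust an online decoder, here is a minimal sketch of how you might check it yourself, assuming a screenshot saved under the hypothetical name cookie_qr.png and the Pillow and pyzbar libraries installed (pyzbar also needs the system zbar library).

```python
from PIL import Image
from pyzbar.pyzbar import decode

# "cookie_qr.png" is a hypothetical filename for your screenshot of the pattern.
results = decode(Image.open("cookie_qr.png"))

if results:
    for r in results:
        print(r.type, r.data.decode("utf-8", errors="replace"))
else:
    print("No decodable QR code found.")
```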
Did I mention what the company does with AIs that they torture so hard that they “wig out”? Matt explains that they are sold to the games industry to become “cannon fodder for some war thing.” Holy wow, they’re eviler than Voldemort, Inc.
Meet the mind crime
The Cookie interface is a broad illustration of something that Nick Bostrom calls mind crime: causing suffering to virtual sentient beings. In this case the torture is for evil and profit, but there are subtler ways in which it might happen. Imagine that general AIs evolve into superintelligences, and we ask one of them to predict something serious, say, “What are the worst catastrophes likely to affect us, and how can we best avoid them?” To create its answer, it might construct a virtual but wholly viable copy of our planet, with all of its creatures and people. These would be detailed enough that if you could pause the scenario and talk to any of these copies, they could tell you about their memories and desires and fears of death. (There’s that P-zombie problem again.) They’d qualify under any definition of sentience we threw at them.
These sentiences might endure unimaginable pain and suffering while the super AI works through the scenarios that inform its answer. They might suffer plagues. Neofeudalism or neoliberalism run amok, ushering in a new Dark Age. The whimpering, oven-bake death of life on our planet from climate change. Endless wars. Then they would be wiped from existence and recreated to suffer anew as the AI began the next version of its scenario. Are we OK with the casual suffering of wholly complete, viable consciousnesses just so we can have a good answer? Or, as “White Christmas” asks us, toast cooked to our preferences?
Fortunately, these concerns are a long way off, but technology seems to be pointing us in that direction, and we ought to decide what is good and ethical now before these things become a reality.