Now that we’ve compared sci-fi’s takeaways to compsci’s imperatives, we can see that some movies and TV shows featuring AI just don’t connect to the concerns of AI professionals. They may be narratively expedient or simply misinformed, but whatever the reason, if we want audiences to think about AI rationally, we should stop telling these kinds of stories. Or, at the very least, we should try to educate audiences to understand these stories for what they are.
The list of 12 pure fiction takeaways falls into four main Reasons They Might Not Be of Interest to Scientists.
1. AGI is still a long way off
The first two takeaways concern the legal personhood of AI. Are they people, or machines? Do we have a moral obligation to them? What status should they hold in our societies? These are good questions, somewhat entailed in the calls to develop a robust ethics around AI. They are even important questions for the clarity they bring to moral reasoning about the world around us now. But the current consensus is that artificial general intelligence is still a long way off, and these issues won’t be of concrete relevance until we are close.
- AI will be regular citizens: In these shows, AI is largely just another character. They might be part of the crew, or elected to government, but society treats them as people with only slight differences.
- AI will be “special” citizens: By special, I mean that they are categorically a different class of citizen, either explicitly as a servant class, legally constrained from personhood, or with artificially constrained capabilities.
Now, science fiction isn’t constrained to the near future, nor should it be. Sometimes its power comes from illustrating modern problems with futuristic metaphors. But pragmatically, we’re a long way from concerns about whether an AI can legally run for office.