It’s the (drone) ethics

Today, a post touching on some of the ethical issues raised by drone technology.


Much of the debate about drone technologies, both today and in science fiction, rightly focuses on ethics. To start, it is valuable to remember that drones are merely a technology like any other. While the technology’s roots lie in military research and military applications (much like, say, the internet’s), the examples in the prior posts demonstrate that the technology can be so much more. But of course it’s not that simple.

Hang on, it’s going to get nerdy. But that’s why you come to this blog, innit?

Where drones become particularly challenging to assess in an ethical context is in their blurring of the lines of agency and accountability. As such, we must consider the ethics of the relationship between user/creator and the technology/device itself. A gun owner doesn’t worry about the gun…itself…turning on him or her. In Oblivion, however, Tom Cruise’s character flies and repairs the Predator-esque drones, only to have them turn on him when their sensors identify him as a threat.


While obviously not an autonomous, strong artificial intelligence, a real-world drone alternates between being operated manually and autonomously. Even a hobbyist with a commercial quadcopter can feel this awkward transition when they switch from flying their drone like an RC plane to preprogramming it to fly a pattern or take photos. This transition in agency has serious implications.
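To make that handoff concrete, here is a minimal sketch of the mode switch in Python. The class and method names are hypothetical, invented for illustration rather than taken from any real drone SDK:

```python
# A hypothetical sketch of the manual/autonomous mode switch.
# Names are invented for illustration, not drawn from any real SDK.
from enum import Enum, auto

class ControlMode(Enum):
    MANUAL = auto()      # a human pilot holds the sticks
    AUTONOMOUS = auto()  # the drone executes a preprogrammed plan

class FlightController:
    def __init__(self):
        self.mode = ControlMode.MANUAL
        self.waypoints = []

    def load_mission(self, waypoints):
        """Preprogram a flight pattern, e.g. a photo survey."""
        self.waypoints = list(waypoints)

    def engage_autonomy(self):
        """The moment agency shifts from the pilot to the machine."""
        if not self.waypoints:
            raise RuntimeError("no mission loaded; refusing to fly itself")
        self.mode = ControlMode.AUTONOMOUS
        print("Autonomy engaged: the operator is now a supervisor.")

    def take_manual_control(self):
        """The pilot can always pull agency back."""
        self.mode = ControlMode.MANUAL
        print("Manual control restored: the actions are the pilot's again.")

controller = FlightController()
controller.load_mission([(47.60, -122.33), (47.61, -122.33)])
controller.engage_autonomy()      # whose actions are the next ten minutes?
controller.take_manual_control()
```

The interesting ethical moment is the engage_autonomy() call: the human is still accountable, but is no longer the one acting.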

Anonymity

When you see someone standing in a park, looking at a handheld controller or staring up into the sky at their quadcopter buzzing around, the actions of that device are attributed to them. But seeing a drone flying without a clear operator in sight gives any observer a bit of the creeps. Science fiction’s focus on military drones has meant that the depictions can bypass questions about who owns and operates them—it is always the state or the military. But as consumer drones become increasingly available, it will become unclear to whom or what we can attribute the actions of a drone. Just this year there was serious public concern when drones were spotted flying around historical landmarks in Paris, because their control was entirely anonymous. Should drones have physical or digital identification to serve as a “license plate” of sorts that could link any actions of the drone to its owner? Most probably. But the bad guys will just yank them off.
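For the technically curious, here is one way such a digital plate could work, sketched in Python. Everything in it is an assumption for illustration (the registry key, the ID format, the signing scheme); it is loosely in the spirit of remote-ID proposals, not any real specification:

```python
# A hypothetical "digital license plate": the drone periodically
# broadcasts its registry ID and position with a signature that an
# authority holding the registry key could verify. Illustrative only.
import hashlib
import hmac
import json
import time

REGISTRY_KEY = b"secret-shared-with-aviation-registry"  # assumed setup

def broadcast_beacon(drone_id: str, lat: float, lon: float) -> dict:
    """What the drone would transmit every few seconds in flight."""
    payload = {"id": drone_id, "lat": lat, "lon": lon, "ts": int(time.time())}
    msg = json.dumps(payload, sort_keys=True).encode()
    payload["sig"] = hmac.new(REGISTRY_KEY, msg, hashlib.sha256).hexdigest()
    return payload

def verify_beacon(beacon: dict) -> bool:
    """What a regulator with registry access could check."""
    claimed = beacon.pop("sig")
    msg = json.dumps(beacon, sort_keys=True).encode()
    expected = hmac.new(REGISTRY_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

beacon = broadcast_beacon("FR-75-00421", 48.8584, 2.2945)  # near the Eiffel Tower
print(verify_beacon(dict(beacon)))  # True, if nothing was tampered with
```

Of course, a beacon like this only ties actions to an owner while it stays attached and powered, which is exactly the yank-it-off problem.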

The author (in blue) and Chris Noessel (in black), at InfoCamp.

Gap between agent and outcome

Many researchers have explored the difference between online behaviour and in-person behaviour. (There’s a huge body of research here. Contact me if you want more information.) People have far less trouble typing a vitriolic tweet when they don’t have to face the actual impact of that tweet on its receiver. Will drones have a similar effect? Unlike a car, where the driver is physically inside the device and operating it by hand and foot, a drone might be operated from hundreds of feet away (or even thousands of miles, for military drones). Will this physical distance, mediated by a handheld controller, a computer interface, or a smartphone application, change how the operator behaves? For instance, research has found that military drone operators, thousands of miles away from combat and working with a semi-autonomous technology, are in fact at risk for PTSD. They still feel connected to, and in enough control of, the drone that its effects have a significant impact on their mental health.

However, as drones become more ubiquitous, their applications become more diverse (and mundane), and their agency increases, will this connection remain as strong? If Amazon has tens of thousands of drones flying from house to house, do the technicians managing their flight paths feel the ethical implications of one crashing through a window the same as if they had accidentally knocked a baseball through it? If I own a drone and program it to pick up milk from the store, do I feel fully responsible for its behaviour as part of that flight pattern? Or does the physical distance, the intangible interface, and the mediating technology (the software and hardware purchased from a drone company) disassociate the agent from the effect of the technology? Just imagine the court cases.

From Breaking Defense: As drone operations increase, the military is researching effective interfaces to support human operators

What are the ethical consequences of the different levels of agency an operator can provide to the drone? What are the moral consequences of increased anonymity in the use of these drone technologies? The U.S. military designs computer interfaces to help its drone pilots make effective decisions to achieve mission targets. Can science fiction propose designs, interfaces and experiences that help users of drone technologies achieve ethical missions?

Come on, sci-fi. Show the way.

People don’t know what to make of this technology. It’s currently the domain of the military, big technology companies, a few startups, and a hobbyist community generally ignored outside of beautiful, drone-filmed YouTube videos. Science fiction is a valuable (and enjoyable) tool for understanding technology and envisioning the implications of new technologies. Science fiction should be pushing the limits of how drone technologies will change our world, not just exaggerating today’s worst applications.

As journalist and robotics researcher P.W. Singer puts it in his TED talk, “As exciting as a drone for search-and-rescue is, or environmental monitoring or filming your kids playing soccer, that’s still doing surveillance. It’s what comes next, using it in ways that no one’s yet imagined, that’s where the boom will be.”

5 thoughts on “It’s the (drone) ethics”

  1. While it’s not the exact focus of this blog, my graduate advisor wrote an article in the journal IEEE Intelligent Systems a few years ago rewriting Asimov’s Laws to be a little more practical for today’s robots: http://researchnews.osu.edu/archive/roblaw.htm

    One of the points he likes to drive home (and one touched on in this post, I think) is that all robots, regardless of how advanced they are, have limits in the situations they can handle. When they are in those situations, they must 1) recognize that they are operating beyond their capability, something they struggle with, and 2) smoothly alert, and transfer control to, a human operator or decision maker (see the sketch after these comments). One of the major results of this is that when you design a robotic platform, you’re not just designing the physical thing (or the software to run the thing), you’re designing a human-robot SYSTEM that requires the human and robot to coordinate with one another.

  2. Drones can and most likely will be used to spy on countries’ own citizens. Drones should be outlawed. No one in the world should be able to use them except in an extreme emergency.

  3. Christian Enemark is an academic who studies just this topic and has written a book, “Armed Drones and the Ethics of War”. At a talk, he gave three possible models for thinking about drones that cause harm. Oversimplifying, these are:

    Product liability. If a drone malfunctions, that’s the fault of the operator or programmers. But when you have autonomous drones with learning algorithms this gets fuzzy …
    Owner responsibility. Think of the drone as a large dog. If it runs loose and bites someone, the owner is responsible. (Commanding officer for military drones.) And when we get to something like full AI …
    The drone is responsible. This itself is a tricky problem, because can you punish a drone? And will humans be satisfied that the drone is being punished?
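A rough Python sketch of the handoff idea described in the first comment above, for the technically inclined. The confidence scores and threshold are made up for illustration; real systems struggle precisely with step 1, knowing when they don’t know:

```python
# A rough sketch of capability-limit handoff. The confidence numbers
# and threshold are invented for illustration only.
from typing import Callable

class HumanRobotSystem:
    """Design the human and robot together as one system."""

    def __init__(self, confidence_floor: float,
                 alert_human: Callable[[str], None]):
        self.confidence_floor = confidence_floor  # below this, hand off
        self.alert_human = alert_human
        self.human_in_control = False

    def act(self, situation: str, confidence: float) -> None:
        # 1) Recognize operation beyond capability (the hard part in practice).
        if confidence < self.confidence_floor:
            # 2) Smoothly alert, and transfer control to, the human.
            self.human_in_control = True
            self.alert_human(
                f"Low confidence ({confidence:.2f}) in '{situation}'; "
                "please take over."
            )
            return
        print(f"Robot handling '{situation}' autonomously.")

system = HumanRobotSystem(confidence_floor=0.8, alert_human=print)
system.act("clear skies, mapped terrain", confidence=0.95)  # robot proceeds
system.act("unexpected obstacle in fog", confidence=0.40)   # hands off
```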
