A Deadly Pattern

The Drones’ primary task is to patrol the surface for threats and eliminate them. The drones are always on guard, responding swiftly and violently against anything they perceive as a threat.


During his day-to-day maintenance work, Jack often encounters active drones. Initially, the drones always regard him as a threat and offer him a brief window of time to speak his name and tech number (for example, “Jack, Tech 49”) to authenticate. The drone then compares this speech against some database, shown on its HUD as a zoomed-in image of Jack’s mouth and a vocal frequency readout.
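To make the check concrete, here is a minimal sketch of how a voiceprint comparison like this might work, assuming the drone reduces the spoken challenge phrase to a frequency-feature vector and compares it against a stored profile for that tech. The threshold, names, and feature format are my own assumptions, not anything the film confirms.

```python
import numpy as np

# Hypothetical sketch of the drone's voiceprint check: compare the spectral
# signature of the spoken challenge phrase against a stored tech profile.
# The threshold and feature representation are assumptions for illustration.

MATCH_THRESHOLD = 0.90  # assumed minimum similarity to accept the speaker


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two frequency-feature vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def authenticate(sample: np.ndarray, stored_profile: np.ndarray) -> bool:
    """True if the live vocal sample matches the stored profile closely enough."""
    return cosine_similarity(sample, stored_profile) >= MATCH_THRESHOLD
```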


Occasionally, Jack’s identification doesn’t work immediately. In those cases, the drone gives him a second chance to confirm his identity.


Although it is never shown, it is almost certain that failing to identify himself properly would get Jack killed on the spot. We never see any backup mechanism, and when Jack’s response doesn’t work immediately, he gets visibly worried. He knows what happens when the drone detects a threat.

Zero Error Tolerance

This pattern is deadly because it offers very little tolerance for error. The drone does show some willingness to give Jack a second chance on his vocal pattern, but it is unclear how many chances he gets in total.

On a website, entering my password wrong too many times locks me out. With this system, giving the wrong password too many times gets Jack killed.
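To make the contrast concrete, here is a small sketch (with an assumed attempt limit the film never reveals) of a forgiving lockout policy next to the drone’s lethal one:

```python
from enum import Enum, auto


class Outcome(Enum):
    ACCEPTED = auto()
    RETRY = auto()       # ask again, ideally reporting attempts remaining
    LOCKED_OUT = auto()  # the website's worst case
    ENGAGED = auto()     # the drone's worst case

MAX_ATTEMPTS = 3  # assumed; the film never reveals the real limit


def website_policy(failed_attempts: int) -> Outcome:
    """A forgiving policy: too many failures just lock the account."""
    return Outcome.LOCKED_OUT if failed_attempts >= MAX_ATTEMPTS else Outcome.RETRY


def drone_policy(failed_attempts: int) -> Outcome:
    """The pattern as shown: identical logic, but the failure mode is lethal."""
    return Outcome.ENGAGED if failed_attempts >= MAX_ATTEMPTS else Outcome.RETRY
```

Either way, Jack deserves to know how many attempts he has left, and the drone never communicates that.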

There are many situations where Jack may not be able to respond immediately:

  • Falling off his bike and knocking himself out
  • Being focused on repairing one drone when a second drone swoops in to check out the situation
  • Severe shock after breaking a limb
  • etc…

As we see in the crashed shuttle scene, the drones have no hesitation in killing unconscious targets. This means Jack stands a strong chance of being killed by his drone protector in some of the situations where he needs help the most.


A more effective method would be a passive recognition system. We already know that the drone can remotely detect Jack’s biosignature, and that the Tet has full access to the drone’s HUD feed.

The drone could then be set to never attack Jack unless the Tet gives a very specific override. Alternatively, the drone could be hard-wired to never attack Jack at all (though this would complicate the movie’s plot). In any situation where it looks like the drone might attack anyway, the remote software Vika uses could act as a secondary switch, providing a backup confirmation message.
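A minimal sketch of that default-safe rule, assuming hypothetical signals such as a detected biosignature, an explicit Tet override, and a confirmation from Vika’s console, might look like this:

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of the proposed default-safe rule. The signal names (biosignature,
# tet_override, operator_confirms) are illustrative assumptions, not
# interfaces shown in the film.


@dataclass
class Contact:
    biosignature: Optional[str]   # e.g. "TECH-49" when a known tech is detected
    tet_override: bool            # explicit engage command from the Tet
    operator_confirms: bool       # backup confirmation from Vika's console

KNOWN_TECHS = {"TECH-49", "TECH-52"}


def run_threat_analysis(contact: Contact) -> bool:
    """Placeholder for the drone's existing threat assessment (out of scope here)."""
    return False


def should_engage(contact: Contact) -> bool:
    """Never engage a recognized tech without two independent, explicit signals."""
    if contact.biosignature in KNOWN_TECHS:
        return contact.tet_override and contact.operator_confirms
    return run_threat_analysis(contact)
```

The point of requiring two independent signals is that no single glitch, and no single command, can turn the drone on its own technician.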

That said, we must acknowledge that this system excels at keeping Jack nervous and afraid of active drones. While they help him, he knows that they can turn on him at any moment. This serves the Tet by keeping Jack cowed, obedient, and always looking over his shoulder.

Ethical Ramifications

The drones are built as autonomous sentries, able to protect extraordinarily expensive infrastructure against attack. They need to be able to eliminate any threat quickly and efficiently. Current militaries face the exact same issues. Even though they have pledged (for now) not to build autonomous kill systems, modern military planners may find value in having a robot perform a dull, dangerous task like patrolling remote infrastructure.

The question Oblivion asks best is: “What should constitute a threat?”


Drones fire mercilessly on unarmed civilians and armed enemy militia, but do not attack armed friendly soldiers (Jack). This already implies some level of advanced threat analysis, even if we abhor the choices the drones make.

The Future

Military planners will need to answer the same question: how does the algorithm determine a threat? With human labor becoming ever more expensive, both monetarily and emotionally, the push for autonomous drone systems in future conflicts will only grow stronger.

There is still enough time to research and test potential concepts before we have to make a decision on autonomous drones.

Interaction Design Lessons:

  1. Don’t threaten civilians and non-combatants.
  2. Give clear feedback of limits and consequences if a deadly pattern is about to be activated.
  3. Give users a second chance.

5 thoughts on “A Deadly Pattern”

  1. An answer here (for real military planners) might be similar to computer-assisted chess: let the drones patrol with little human input, and provide a system that lets human operators intervene more strongly when a more complex situation (moral issues, identification) arises. As in assisted chess, the AI suggests courses of action which can be vetoed, modified, or accepted unchanged by a human operator.

  2. You are assuming that the “drone” is in service of the humans. It is not. It is in service of the alien invader. In fact, the “human” SHOULD be a KNOWN quantity because he’s a clone. So, the base assumptions on WHY the interface should work a certain way are false.

    Analyze it from the point of view of the invader. Then maybe a second chance to authenticate is too risky.

    • From the point of view of the TET, I agree: a second chance to authenticate is risky for the TET’s goals.

      Here, however, I think it asks questions that we should be answering for human systems; and I think those questions are a lot more interesting than just analyzing what the TET wants (resources).

      I also talked briefly with Chris when doing the analysis for Eve’s gun in Wall-E about a similar issue: I’m not sure I’m comfortable taking an interface from a sci-fi movie and figuring out how to make it more deadly in an analysis like this. I don’t think it serves human interests on either side of the interface to do that, and I think Science Fiction is at its best when it’s asking questions about what people /should/ want.

      • It is interesting to analyse how a system can be designed differently if the operators are clones.

  3. Peter Singer has written about the “humans in the loop” model and points out that while in peacetime everyone is in favour of it, in wartime priorities change. Modern combat happens so fast that weapon systems have to be increasingly autonomous, such as the Phalanx or Goalkeeper radar-guided anti-missile guns on modern warships. Human intervention is limited to switching them on or off.

    Professor Ron Arkin of Georgia Tech has argued that autonomous systems could be more ethical than humans. Under stress humans make mistakes, e.g. the USS Vincennes shooting down an Iranian airliner in 1988. Drones might make better decisions, not worse.

    It’s a complex and evolving field. I can recommend Peter Singer’s book “Wired for War” as an excellent overview of how military robots are affecting a wide variety of people.
