Sunday, March 17, 2013

NEXT: Drone Killing Decisions Made By Software Programs Requiring No Human Input

NYT's Bill Keller outlines the scary scenario:
If you find the use of remotely piloted warrior drones troubling, imagine that the decision to kill a suspected enemy is not made by an operator in a distant control room, but by the machine itself. Imagine that an aerial robot studies the landscape below, recognizes hostile activity, calculates that there is minimal risk of collateral damage, and then, with no human in the loop, pulls the trigger.

Welcome to the future of warfare. While Americans are debating the president’s power to order assassination by drone, powerful momentum — scientific, military and commercial — is propelling us toward the day when we cede the same lethal authority to software.

Next month, several human rights and arms control organizations are meeting in London to introduce a campaign to ban killer robots before they leap from the drawing boards. Proponents of a ban include many of the same people who succeeded in building a civilized-world consensus against the use of crippling and indiscriminate land mines. This time they are taking on what may be the trickiest problem arms control has ever faced.
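
To see just how little "deciding" there would be, here is a purely illustrative Python sketch of the loop Keller describes: sense, classify, weigh collateral risk, fire. Every function name, class, and threshold below is invented for illustration; it depicts no real system. The unsettling point is that once the thresholds are fixed, the entire lethal decision reduces to a couple of comparison operators.

    # Illustrative sketch only: a fully autonomous engagement loop with
    # no human input anywhere. All names and thresholds are hypothetical.
    import random
    from dataclasses import dataclass

    @dataclass
    class Track:
        track_id: int
        hostile_score: float      # classifier's confidence the activity is hostile
        collateral_risk: float    # estimated chance of harming bystanders

    HOSTILE_THRESHOLD = 0.95      # hypothetical confidence cutoff
    COLLATERAL_THRESHOLD = 0.05   # hypothetical "minimal risk" cutoff

    def sense() -> list[Track]:
        """Stand-in for the sensor/classifier stage ('studies the landscape')."""
        return [Track(i, random.random(), random.random()) for i in range(5)]

    def decide(track: Track) -> bool:
        """The entire 'decision to kill', reduced to two comparisons."""
        return (track.hostile_score >= HOSTILE_THRESHOLD
                and track.collateral_risk <= COLLATERAL_THRESHOLD)

    def engage(track: Track) -> None:
        print(f"ENGAGE track {track.track_id} (no human in the loop)")

    if __name__ == "__main__":
        for track in sense():
            if decide(track):   # no operator, no review, no appeal
                engage(track)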

3 comments:

  1. "Skynet. We care... About you!"
    Of course, no one would ever hack the targeting software. What could possibly go wrong?

  2. Skip "Who is John Galt?"; here it's "Who is John Connor?"

  3. Hell, current systems require no human input. No human would do this to his fellow man...
