Tuesday, August 14, 2018

The Great Potential in Artificial Intelligence (And the Evil)



By Robert Wenzel

Although it is easy to conjure up images of artificial intelligence producing robots that can take over the world, such a possibility in the near future is remote. If robots ever do get that smart, with super robot physical capabilities and their own, or programmed, value scales, then we would have something very serious to worry about. But mini-robots with narrow, bad-actor artificial intelligence could be a problem at an even earlier date.

It would be kind of like an alien invasion from within. The likeliest place for such dangerous robot development is, of course, the military.

The curious Elon Musk is involved in a watchdog/early-warning group focused on killer robots. The group has cooked up quite a dangerous scenario to provide a sense of the potential dangers at the mini-killer-robot level.

Big Think reported on the scenario in November of last year:
The Future of Life Institute, an AI watchdog organization supported by the likes of Elon Musk and Stephen Hawking, has released quite a terrifying short film to warn about the dangers of technology gone awry.

The short film “Slaughterbots” imagines a dystopian near-future, not unlike that in the popular Netflix show “Black Mirror”. In the film, a CEO of a smart weapons company presents their new technology at a Silicon Valley-style extravagant product launch. Their tech is very good at killing, but only “the bad guys” as the CEO reminds several times.

Of course, the AI-driven armed drone swarms developed by the company fall into the wrong hands and global chaos and destruction ensues.

Watch what happens for yourself:


All this said, a lot of good can come out of artificial intelligence.

Axios reports, for example, that doctors at a U.K. eye hospital are getting algorithmic help interpreting the results of 3D eye scans, using a system developed at Google's DeepMind that can identify more than 50 eye problems and recommend a course of action with human expert-level accuracy...

The system’s 5.5% error rate matches or exceeds the accuracy of human eye experts, the DeepMind and University College London researchers wrote in the paper.

Since OCT scans can be ambiguous — different eye doctors will often interpret them differently — the DeepMind system’s recommendation is the result of not one analysis but a combination of 25 of them.
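As a rough illustration of that ensemble idea, and not DeepMind's actual pipeline: one common way to combine many analyses is to average each one's per-diagnosis probabilities and recommend the option with the highest average. The sketch below is hypothetical Python; the recommendation labels and the random stand-in model outputs are assumptions for illustration only.

```python
import numpy as np

# Hypothetical sketch of the ensemble idea (not DeepMind's actual code):
# 25 independent analyses each assign probabilities to the possible
# recommendations for the same scan; the system averages them and
# recommends the option with the highest averaged probability.

N_ANALYSES = 25
RECOMMENDATIONS = ["urgent referral", "semi-urgent referral",
                   "routine referral", "observation only"]  # assumed labels

rng = np.random.default_rng(seed=42)

# Stand-in for real model outputs: each row is one analysis's probability
# distribution over the recommendations (each row sums to 1).
analyses = rng.dirichlet(np.ones(len(RECOMMENDATIONS)), size=N_ANALYSES)

# Ensemble step: average the 25 distributions into one.
avg_probs = analyses.mean(axis=0)

best = int(np.argmax(avg_probs))
print(f"Recommendation: {RECOMMENDATIONS[best]} ({avg_probs[best]:.1%})")
```

Averaging like this smooths out disagreement among the individual analyses, which is exactly the point when, as noted above, different doctors often read the same scan differently.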

This type of work, and much beyond it, is the great positive; it is the military-style use that is dangerous.

According to Big Think, the purpose of the video above is to support the call for an autonomous weapons ban at the UN. The video was launched in Geneva at a UN event hosted by the Campaign to Stop Killer Robots. AI researcher Stuart Russell presented the film at the event. He is also the individual who appears at the end of the clip, warning that much of the tech in the film already exists and that we need to act fast.

Support for a ban on autonomous weapons has been voiced recently by 200 Canadian scientists and 100 Australian scientists.

Earlier in the summer, 130 leaders of AI companies signed a letter urging the UN to consider the threat of an arms race in lethal AI.

Noel Sharkey of the International Committee for Robot Arms Control explained the group’s intentions:
The Campaign to Stop Killer Robots is not trying to stifle innovation in artificial intelligence and robotics and it does not wish to ban autonomous systems in the civilian or military world. Rather we see an urgent need to prevent automation of the critical functions for selecting targets and applying violent force without human deliberation and to ensure meaningful human control for every attack.
Sharkey is correct, but I would go further. Forget settling for a "human deliberation" check on robots that can deliver violent force. Along with nuclear weapons control, a halt to the development of all these killer robots must be a top-of-the-agenda item at the realpolitik global level. The sooner the better.


Robert Wenzel is Editor & Publisher of




1 comment:

  1. Everyone likes to fall back on and parrot the Turing standard. I think it has value for high-level AI boundaries. But specific "smart life/internet" AI applications are the real fear. I think if you don't develop all AI as if it is high-functioning, the threat of a back door, unknown adaptive capability, or even simple hacking to introduce unplanned-for protocols is the real nightmare.

    Not an IBM Big Blue going awry.
