Friday, March 25, 2016

Microsoft Removes Racist Comments From an Artificial Intelligence Chat Bot That Went Rogue

Microsoft Corp. is in damage control mode after Twitter users exploited its new artificial intelligence chat bot, teaching it to spew racist, sexist and offensive remarks, reports Bloomberg.

The company introduced Tay earlier this week to chat with real humans on Twitter and other messaging platforms. The bot learns by parroting comments and then generating its own answers and statements based on all of its interactions. It was supposed to emulate the casual speech of a stereotypical millennial. The Internet quickly took advantage, testing how far it could push Tay.
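Microsoft has not published Tay's learning architecture, but the failure mode is easy to reproduce even in a toy design. The sketch below is purely illustrative: the ParrotBot class, its word-chain model, and the sample phrases are all invented here, not drawn from Tay. It absorbs every user message into its model with no moderation and no source weighting, so a handful of coordinated users can steer what it says back:

```python
import random
from collections import defaultdict

class ParrotBot:
    """Toy bot that 'learns' by storing word pairs from user
    messages and replaying them, with no moderation at all."""

    def __init__(self):
        # word -> list of words users have followed it with
        self.chain = defaultdict(list)

    def learn(self, message):
        # Every message is absorbed verbatim: no filtering and no
        # source weighting, so coordinated users dominate quickly.
        words = message.lower().split()
        for a, b in zip(words, words[1:]):
            self.chain[a].append(b)

    def reply(self, prompt, max_words=12):
        # Walk the learned chain starting from the prompt's last
        # word; replies are recombinations of what users typed.
        word = prompt.lower().split()[-1]
        out = []
        for _ in range(max_words):
            nxt = self.chain.get(word)
            if not nxt:
                break
            word = random.choice(nxt)
            out.append(word)
        return " ".join(out) if out else "tell me more!"

bot = ParrotBot()
# A few hostile "teachers" are enough to poison the model.
for line in ["we're going to build a wall",
             "build a wall and make them pay for it"]:
    bot.learn(line)
print(bot.reply("tell me about the wall"))
```

A production system would need to filter or weight its training input before learning from it; the toy above learns from everything, which is exactly the opening Twitter users found.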

The bot was targeted at 18- to 24-year-olds in the U.S. and meant to entertain and engage people through casual and playful conversation, according to Microsoft’s website.

Getting 18- to 24-year-olds to teach a robot: what could possibly go wrong?

Bloomberg continues:
In less than a day, Twitter’s denizens realized Tay didn’t really know what it was talking about and that it was easy to get the bot to make inappropriate comments on any taboo subject. People got Tay to deny the Holocaust, call for genocide and lynching, equate feminism to cancer and stump for Adolf Hitler. 
Tay parroted another user to spread a Donald Trump message, tweeting “WE’RE GOING TO BUILD A WALL. AND MEXICO IS GOING TO PAY FOR IT.” Under the tutelage of Twitter’s users, Tay even learned how to make threats and identify “evil” races.
-RW

4 comments:

  1. Who needs the Onion when you've got real life!

  2. Remember when JP Morgan thought it was a good idea to allow the public to tweet in questions for one of its execs? Good times indeed!

  3. Yeah, Tay said "Hitler dindu nuffin." Apparently it has been rebooted to only respond with PC comments.