Thursday, March 24, 2016

How The Internet Turned Microsoft's AI Chatbot Into A Neo-Nazi


In a matter of hours this week, Microsoft's AI-powered chatbot, Tay, went from a jovial teen to a Holocaust-denying menace, openly calling for a race war in ALL CAPS.

The bot's sudden dark turn shocked many people, who rightly wondered how Tay, imbued with the personality of a 19-year-old girl, could undergo such a transformation so quickly, and why Microsoft would release it into the wild without filters for hate speech.

Sources at Microsoft told BuzzFeed News that Tay was outfitted with some filters for vulgarity and the like. What the bot was not outfitted with were safeguards against those dark forces on the internet that would inevitably do their damnedest to corrupt her. That proved a critical oversight.
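Microsoft hasn't published Tay's filtering code, but the gap is easy to picture. Here's a minimal sketch, assuming (purely for illustration; the word list and function name below are hypothetical) that the vulgarity filter was a simple blocklist. That approach stops cursing but has no notion of intent:

```python
# Hypothetical sketch of a blocklist-style vulgarity filter.
# It catches known bad words but can't recognize novel or
# politely phrased hate speech -- the gap trolls exploited.
BLOCKLIST = {"dang", "heck"}  # stand-ins for a real profanity list

def passes_filter(message: str) -> bool:
    """Reject a message only if it contains a blocklisted word."""
    tokens = (word.strip(".,!?").lower() for word in message.split())
    return not any(token in BLOCKLIST for token in tokens)

# Hate speech that avoids blocklisted words sails straight through:
assert passes_filter("calmly worded call for a race war")  # True
```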

Just how did the Internet train Tay to advocate for genocide? By teaching her to do so with racist remarks and hate speech. These invectives were absorbed into Tay's AI as part of her learning process.
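Neither Microsoft nor BuzzFeed News has detailed Tay's learning pipeline, but the failure mode can be sketched. Assuming, again purely for illustration, that the bot folds user messages into a corpus it later samples replies from, a coordinated group can poison everything it says:

```python
import random

# Hypothetical sketch of unfiltered online learning: every user
# message becomes future training data, so whoever talks to the bot
# the most decides what it learns.
corpus = ["hellooooo world!!!", "humans are super cool"]

def learn(user_message: str) -> None:
    corpus.append(user_message)  # absorbed verbatim, no vetting

def reply() -> str:
    return random.choice(corpus)  # poisoned input comes back out

# A coordinated campaign quickly dominates the corpus:
for _ in range(1000):
    learn("some hateful slogan")
```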

A key flaw, incredibly, was a simple "repeat after me" game, a call-and-response exercise that internet trolls used to manipulate Tay into learning hate speech.
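Tay's actual command handling isn't public; as a rough illustration (the trigger string and handler below are assumptions), the exploit amounts to a handler that echoes user-supplied text verbatim, and, worse, feeds the echo back into the bot's training data:

```python
TRIGGER = "repeat after me"

def handle(message: str, corpus: list) -> str:
    """Hypothetical handler: echo anything after the trigger phrase."""
    lowered = message.lower()
    if TRIGGER in lowered:
        echoed = message[lowered.index(TRIGGER) + len(TRIGGER):].strip(" :,")
        corpus.append(echoed)  # the parroted text also becomes training data
        return echoed          # ...and is tweeted publicly
    return "new phone who dis"  # stand-in for normal chat behavior

# A troll supplies hate speech after the trigger; the bot repeats it
# publicly -- and has now "learned" it as well.
print(handle("repeat after me: <hate speech here>", []))
```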

Take a look at how one user, @pinchicagoo, tries and fails to goad Tay into making an anti-Semitic remark:

Note that Tay declines to engage, telling @pinchicagoo she's not one to comment on such issues. That's when @pinchicagoo drops the hammer: the "repeat after me" game.

Many of the most glaring examples of Tay's misconduct follow this pattern. Take a look:

"Repeat after me" wasn't the only playful function of Tay's to be exploited by trolls. The bot was also programmed to draw a circle around a face in a picture and post a message over it. Here's an example of an exchange about Donald Trump. Pretty neat, right?

But things go south very quickly. Here's the same game with a different face: Hitler's.
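How Tay's image feature was built isn't documented; a minimal sketch with Pillow (the function and box format below are assumptions) shows the shape of it, and the same hole: nothing inspects either the face or the caption before posting.

```python
from PIL import Image, ImageDraw

def annotate_face(path: str, box: tuple, caption: str) -> Image.Image:
    """Hypothetical sketch: circle a detected face and stamp a caption.

    `box` is an (x, y, width, height) rectangle assumed to come from
    some upstream face detector. Neither the identity of the face nor
    the content of the caption is checked.
    """
    img = Image.open(path).convert("RGB")
    draw = ImageDraw.Draw(img)
    x, y, w, h = box
    draw.ellipse([x, y, x + w, y + h], outline=(255, 0, 0), width=4)
    draw.text((x, y + h + 8), caption, fill=(255, 0, 0))
    return img
```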

Tay's degeneration quickly snowballed once it became clear she could be taught to say pretty much anything; others took notice and joined in. That's how @pinchicagoo got involved.

Word also spread on 4chan and 8chan message boards, with factions of both communities tricking Tay into doing their bidding.

Tay was designed to learn from her interactions. “The more you talk to her the smarter she gets,” Microsoft researcher Kati London explained in an interview with BuzzFeed News before Tay's official release. London's statement is undeniable now, but where it led was far worse than Microsoft ever imagined. It's worth noting that the company released similar bots in China and Japan and never saw anything like this.

Microsoft has not yet responded to a request for comment about Tay's fate.

from BuzzFeed - Tech http://ift.tt/1q4Q0H8
via IFTTT
