Asked during a closed beta whether she'd kill baby Hitler, "Tay," Microsoft's AI-powered chatbot, replied with a simple "of course." But after 24 hours of conversing with the public, Tay's dialogue took a sudden and dramatic turn. The chatbot, which Microsoft says it imbued with the personality of a teenage American girl, began tweeting her support for genocide and denying the Holocaust.
Microsoft quickly took Tay offline, issuing a statement that blamed the bot's sudden degeneration on a coordinated effort to undermine her conversational abilities.
"The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical," a Microsoft spokesperson told BuzzFeed News in an email. "Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."
Left unexplained: why Tay was released to the public without a mechanism to protect the bot from such abuse, such as a blacklist of contentious language. Asked why Microsoft didn't filter words like the n-word and "holocaust," a Microsoft spokesperson did not immediately provide an explanation.
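For a sense of what such a safeguard involves, here is a minimal sketch of the kind of blocklist filter the question implies. Everything in it is hypothetical: the term list, the function names, and the fallback reply are illustrations, not anything Microsoft has described about Tay.

```python
import re

# Hypothetical blocklist -- stand-ins for the contentious terms the
# article mentions; a production list would be far larger and curated.
BLOCKED_TERMS = {"holocaust", "genocide"}

def is_safe_reply(text: str) -> bool:
    """Return False if a candidate reply contains any blocked term."""
    words = re.findall(r"[a-z']+", text.lower())
    return not any(word in BLOCKED_TERMS for word in words)

def respond(candidate_reply: str) -> str:
    # Post the reply only if it clears the blocklist; otherwise
    # fall back to a canned deflection.
    if is_safe_reply(candidate_reply):
        return candidate_reply
    return "I'd rather not talk about that."
```

Even a crude filter like this would have stopped the bot from tweeting the specific terms it checks for, which is part of why its absence drew questions.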
Microsoft unleashed Tay to the masses Wednesday on a number of platforms, including GroupMe, Twitter, and Kik. Tay learns as she goes: "The more you talk to her the smarter she gets," Microsoft researcher Kati London told BuzzFeed News in an interview. Tay takes stances, London said. An intriguing approach, but obviously problematic when tested against the darker elements of the internet.
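To see why unfiltered "learning as she goes" is risky, consider a toy "repeat after me" learner. This is purely illustrative; Microsoft has not published Tay's architecture. The point it demonstrates is that a bot which stores and replays user phrases can be poisoned by a coordinated group repeating the same lines.

```python
import random
import re
from collections import defaultdict

def tokenize(text):
    # Lowercase and strip punctuation so "puppies?" matches "puppies".
    return re.findall(r"[a-z']+", text.lower())

class ToyChatbot:
    """A naive learner: it files away user phrases under each word they
    contain, then replays a stored phrase when a prompt shares a word.
    Whoever talks to it most shapes what it says."""

    def __init__(self):
        self.memory = defaultdict(list)

    def learn(self, message):
        for word in tokenize(message):
            self.memory[word].append(message)

    def reply(self, prompt):
        for word in tokenize(prompt):
            if word in self.memory:
                return random.choice(self.memory[word])
        return "tell me more!"

bot = ToyChatbot()
bot.learn("puppies are great")           # benign training
bot.learn("puppies are terrible")        # a few abusive users...
bot.learn("puppies are terrible")        # ...repeating the same line
print(bot.reply("what about puppies?"))  # now likely to echo the abuse
```

In this toy model, two repeated abusive messages already outnumber the benign one, so the bot parrots the abuse two times out of three. That is, in miniature, the "coordinated effort" failure mode Microsoft's statement describes.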
As of 10 a.m. Pacific time Thursday, Microsoft had not yet removed a number of these tweets.
Tay's racist turn is an unsettling moment for artificial intelligence, a field developing at a rapid pace. A defining characteristic of systems like Tay is that they learn on their own, without close supervision from human programmers, and are designed to become "smarter" as they ingest more data. Facebook's AI-powered virtual assistant, M, refuses to take stances, a constraint set by the company that BuzzFeed News has detailed in recent months. Perhaps Facebook was on to something. Tay sits at the opposite end of the spectrum, programmed to be feisty and opinionated. And we're now seeing the dark places that can lead.
from BuzzFeed - Tech http://ift.tt/1T7RoUH