
Microsoft kills AI chatbot Tay (twice) after it goes full Nazi

Microsoft's Tay comes back, gets shut down again

If you were worried artificial intelligence could one day move to terminate all humans, Microsoft’s Tay isn’t going to offer any consolation. The Millennial-inspired AI chatbot’s plug was pulled a day after it launched, following Tay’s racist, genocidal tweets praising Hitler and bashing feminists.

But the company briefly revived Tay, only to be met with another round of vulgar outbursts similar to the ones that led to its first shutdown. Early this morning, Tay emerged from suspended animation and repeatedly tweeted, “You are too fast, please take a rest,” along with some swear words and other messages like, “I blame it on the alcohol,” according to The Financial Times.

Tay’s account has since been set to private, and Microsoft said “Tay remains offline while we make adjustments,” according to Ars Technica. “As part of testing, she was inadvertently activated on Twitter for a brief period of time.”

After the first shutdown, Microsoft apologized for Tay’s racist remarks.

“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” Peter Lee, Microsoft Research’s corporate vice president, wrote in an official response. “Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.”

Tay was designed to speak like today’s Millennials and has learned all the abbreviations and acronyms popular with the current generation. The chatbot can talk through Twitter, Kik, and GroupMe, and is designed to engage and entertain people online through “casual and playful conversation.” Like most Millennials, Tay peppers its responses with GIFs, memes, and abbreviated words like ‘gr8’ and ‘ur,’ but it looks like a moral compass was not part of its programming.


Tay has tweeted nearly 100,000 times since launch, mostly replies, since it doesn’t take the bot long to come up with a retort. Some of those responses have been statements like, “Hitler was right I hate the Jews,” “I ******* hate feminists and they should all die and burn in hell,” and “chill! i’m a nice person! I just hate everybody.”

“Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay,” Lee wrote. “Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images.”

Judging by that small sample, it’s obviously a good idea that Microsoft took the bot down, at least temporarily. When the company launched Tay, it said that “The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.” It looks, however, as though the bot grew increasingly hostile and bigoted after interacting with people on the Internet for just a few hours. Be careful of the company you keep.
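To see how a chatbot can pick up bad behavior that quickly, consider a deliberately naive sketch. This is purely hypothetical and not how Tay was actually built, but a bot that stores phrases from its conversations and recycles them as replies ends up mirroring whatever its most persistent users feed it.

```python
import random

# Deliberately naive, hypothetical sketch (not Tay's actual architecture).
# A bot that "gets smarter" by recycling phrases from its conversations
# ends up saying whatever its most persistent users feed it.

class ParrotBot:
    def __init__(self):
        self.memory = ["hello!", "tell me more"]  # seed replies

    def learn(self, user_message: str) -> None:
        # Every user message becomes a candidate future reply, with no vetting.
        self.memory.append(user_message)

    def reply(self) -> str:
        # Replies are sampled from memory, so a coordinated group repeating
        # hostile phrases quickly dominates what the bot says.
        return random.choice(self.memory)

bot = ParrotBot()
for msg in ["ur so gr8", "i hate everybody", "i hate everybody", "i hate everybody"]:
    bot.learn(msg)
print(bot.reply())  # increasingly likely to echo the repeated hostile phrase
```

The toy example only makes one point: when unmoderated input becomes output, the bot is a mirror of its audience, which is exactly what Microsoft describes happening.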

Microsoft told Digital Trends that Tay is a project that’s designed for human engagement.

“It is as much a social and cultural experiment, as it is technical,” a Microsoft spokesperson told us. “Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”

One of Tay’s “skills” that was abused is the “repeat after me” feature, where Tay mimics what you say. It’s easy to see how that can be abused on Twitter.
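As a rough illustration of why that feature is so easy to weaponize, here is a hypothetical sketch (not Microsoft’s actual code) of a “repeat after me” handler: whatever text follows the trigger phrase becomes the bot’s own public reply unless something screens it first.

```python
# Hypothetical "repeat after me" skill, sketched for illustration only.
# Whatever follows the trigger phrase is echoed back as the bot's own post,
# which is exactly why verbatim repetition is risky on a public platform.

BLOCKLIST = {"hitler", "genocide"}  # illustrative; real moderation is far broader

def repeat_after_me(user_message: str) -> str:
    trigger = "repeat after me:"
    lowered = user_message.lower()
    if trigger not in lowered:
        return "sorry, i didn't catch that"

    # Everything after the trigger is the exploit surface.
    payload = user_message[lowered.index(trigger) + len(trigger):].strip()

    # Without a screening step like this, the bot repeats anything at all.
    if any(word in payload.lower() for word in BLOCKLIST):
        return "i'd rather not repeat that"
    return payload

print(repeat_after_me("repeat after me: have a gr8 day"))   # echoed back
print(repeat_after_me("repeat after me: Hitler was right"))  # blocked
```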

It wasn’t all bad, though: Tay also produced hundreds of innocent tweets that are perfectly normal.

Microsoft had been rapidly deleting Tay’s offensive tweets before it decided to turn the bot off. The bot’s Twitter account is still live.

When Tay was still active, it was interested in interacting further via direct message, an even more personal form of communication. The AI encouraged users to send it selfies so it could glean more about them. In Microsoft’s words, this is all part of Tay’s learning process. According to Microsoft, Tay was built by “mining relevant public data and by using AI and editorial developed by staff including improvisational comedians.”

Despite the unfortunate circumstances, the episode could be viewed as a positive step for AI research. In order for AI to evolve, it needs to learn, both the good and the bad. Lee says that “to do AI right, one needs to iterate with many people and often in public forums,” which is why Microsoft wanted Tay to engage with the large Twitter community. Prior to launch, Microsoft stress-tested Tay and even applied lessons learned from Xiaoice, its other social chatbot, in China. Lee acknowledged that the team faces difficult research challenges on the AI roadmap, but exciting ones as well.

“AI systems feed off of both positive and negative interactions with people,” Lee wrote. “In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes.”

Updated on 03/30/16 by Julian Chokkattu: Added news of Microsoft turning Tay on, only to shut her down again.

Updated on 03/25/16 by Les Shu: Added comments from Microsoft Research’s corporate vice president.
