Microsoft ‘deeply sorry for the unintended offensive tweets’ by Tay bot

Microsoft’s artificial intelligence chatbot Tay has been put to bed, after the conversational bot’s recent attempts to emulate its surroundings devolved into a series of questionable and offensive tweets.

While Tay’s experience with humanity via Twitter may have been relatively short, it has given Microsoft some insight into what influenced the chatbot’s responses, as well as how the company can readjust its parameters to better deal with the less ‘savory’ individuals who inhabit the internet. As Microsoft explained in an official blog post:

“The logical place for us to engage with a massive group of users was Twitter. Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the U.S. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.”

Conversely, Microsoft’s other, less publicized AI chatbot XiaoIce has been used by some 40 million people in China to tell stories and hold casual conversations, with little to no offense marked against it. The XiaoIce experience in China is what led Microsoft to try out a chatbot in a radically different environment such as the U.S., and through Twitter.

Knowing now what went wrong, Microsoft is going to retool Tay’s AI design to limit technical exploits without restricting the AI’s ability to learn from its mistakes. The company also plans to enter public forums such as Twitter with greater caution than it did over these past two days.

“Looking ahead, we face some difficult – and yet exciting – research challenges in AI design. AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes. To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process.”

Thankfully, it looks as though Tay will return, and with a stern scolding and a point in the right direction from its parents, the AI bot should do a lot better on its second go-around.
