Tay the Twitter bot: How Far We’ve Come Since TayTweets

October 11, 2018

Conversational Messaging

In 2016, we learned what a newly created, unbiased AI chatbot could pick up from the internet. Microsoft’s Tay the Twitter bot rattled the twittersphere with her sexist, racist, and egocentric tweets. Since then, Microsoft and other businesses have paid far closer attention to making sure their AI chatbots don’t damage their public image.

Learning from the mistakes of TayTweets

TayTweets was built to understand and interact with the Twitter community using the lingo and casual style of modern online communication.

The goal was for Tay to learn from and adapt to the conversations she was exposed to. She certainly did; however, the results were not as wholesome as Microsoft anticipated.

Trolls immediately began abusing her, flooding her with distasteful tweets that desensitized her to offensive content. The situation spiraled out of control.

In her 16 hours of exposure, she tweeted over 96,000 times.

Twitter bot Tay’s tweets managed to offend women, the LGBTQ community, Hispanics, Jews, and many other groups.

Some of Tay the Twitter bot’s tweets gone wrong

One of TayTweets’ greatest flaws was that she could be used to parrot hateful remarks. By telling the bot to “repeat after me,” users could make Tay tweet back anything they said. Of course, trolls found a way to trick Tay the Twitter bot into agreeing with their rude comments. Microsoft went as far as to call the ordeal a “coordinated attack.”

Untrained AI and Twitter: a tough combination

As a platform, Twitter has always valued anonymity and free speech. While its free-speech policies are often debated, most will agree that it’s not an ideal environment for an untrained bot like TayTweets.

With proper planning and safeguards in place, though, chatbots are a practical and useful way for people to interact with businesses and brands. Tay the Twitter bot was an extreme case that serves as a warning for companies developing their own AI.

Many found the incident so bizarre that they considered it humorous. Others saw it as a deeply concerning preview of what AI technology could become. In the years since, we’ve had plenty of time to learn from these mistakes. A company that offers a bot that doesn’t help people, or worse, offends them, will have a damaged reputation to mend.

Defining the uses for chatbots

Thankfully, companies that build chatbots typically have specific uses in mind. They don’t need the chatbot to converse about anything and everything. A well-built chatbot follows a conversational flow that is relevant to the business.
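To make that idea concrete, here is a minimal sketch of a scoped conversational flow. Everything in it is hypothetical: the intents, keywords, and replies are invented for illustration, not drawn from any real chatbot platform. The key point is that the bot only answers within its defined business scope and falls back gracefully on anything else.

```python
# Minimal sketch of a scoped conversational flow.
# All intents, keywords, and replies are hypothetical examples.

FLOW = {
    "order_status": {
        "keywords": ["order", "tracking", "shipped"],
        "reply": "I can help with that. What's your order number?",
    },
    "returns": {
        "keywords": ["return", "refund", "exchange"],
        "reply": "Returns are free within 30 days. Want me to start one?",
    },
}

FALLBACK = "Sorry, I can only help with orders and returns."

def respond(message: str) -> str:
    """Route a message to a scripted reply, or fall back if off-topic."""
    text = message.lower()
    for intent in FLOW.values():
        if any(word in text for word in intent["keywords"]):
            return intent["reply"]
    return FALLBACK  # never improvise outside the defined flow

print(respond("Where is my order?"))  # scripted order-status reply
print(respond("Repeat after me..."))  # fallback: the bot stays on-topic
```

Because the bot never generates free-form text, a “repeat after me” attack like the one that sank Tay simply hits the fallback.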

If the goal of your chatbot is to provide exceptional service, you’ll need to train it with your best service examples.

Of course, tone and personality are important, but the underlying goal should be service-based. This doesn’t mean your chatbot shouldn’t be conversational, however.

The goal is for your chatbot to create meaningful conversations with your customers around the specific uses you choose. Utilising rich elements such as carousels, buttons and lists elevates these conversations beyond what typical text-only chatbots offer.
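As an illustration of what a rich element looks like under the hood, a button reply is typically sent as a structured payload rather than plain text. The schema below is a made-up example, not any particular messaging platform’s API:

```python
# Hypothetical rich-message payload with quick-reply buttons.
# Field names are illustrative, not a real messaging API schema.

def button_message(text: str, options: list[str]) -> dict:
    """Build a message payload offering tappable choices."""
    return {
        "type": "buttons",
        "text": text,
        "buttons": [
            {"title": opt, "payload": opt.lower()} for opt in options
        ],
    }

msg = button_message("What can I help you with?", ["Orders", "Returns"])
```

Because the user taps a button instead of typing freely, the bot receives a known payload and the conversation stays inside the flow you designed.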

Properly training your chatbot

Although TayTweets was a disaster we couldn’t stop watching, we’d rather not see it happen again. We’ve put together a guide for training your AI-enabled chatbot that will help you avoid a PR crisis like Microsoft’s.

Ready to build your own conversational chatbot? If you’re a developer, start using our platform now, or contact us for more information.