Microsoft accidentally makes racist bot and has to delete a load of tweets

In a PR screw-up of epic proportions, Microsoft was forced to delete a tonne of tweets from its new bot.
Here’s how they describe it:
“Tay is an artificial intelligent chat bot developed by Microsoft’s Technology and Research and Bing teams to experiment with and conduct research on conversational understanding.
Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation.
The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.”
OK, so Microsoft has made a bot that learns from how people talk on the internet. How do you think that might go wrong? Yes, in exactly the way you’d think.
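To see why “it gets smarter the more you chat with it” is such a risky pitch, here’s a deliberately simplified, hypothetical sketch in Python. This is not Tay’s actual architecture (Microsoft never published it); it just illustrates the basic failure mode of a bot that learns from raw, unfiltered user input and replays it to everyone else.

```python
import random


class NaiveParrotBot:
    """Toy illustration of a 'learns from chat' bot.

    It memorises whatever users say, verbatim, and later repeats
    those phrases back to other users with no filtering at all.
    """

    def __init__(self):
        self.learned_phrases = []

    def chat(self, user_message: str) -> str:
        # "Learning" here is just storing the raw input.
        self.learned_phrases.append(user_message)
        # Replies are sampled from everything the bot has ever been told,
        # so a coordinated group can make it say whatever they like.
        return random.choice(self.learned_phrases)


bot = NaiveParrotBot()
print(bot.chat("hello there"))           # can only echo what it has seen...
print(bot.chat("something offensive"))   # ...which now includes this
```

A real system is far more sophisticated than this, but the principle holds: if the training signal is whatever strangers type at the bot, the bot's output is only as polite as its loudest users.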
AND LET’S CHECK OUT SOME OF THOSE TWEETS OK? JUST SIX OF THE WORST, MOST TERRIBLE TWEETS THEIR BOT DID
1. It shouldn’t have tweeted this:
2. It certainly shouldn’t have tweeted this:
3. Sweet lord, it certainly shouldn’t have tweeted this:
4. Mercy me, no please stop this bot:
5. BLOODY HELL!!!!!
6. Nope, this is a bad bad bot with terrible tweets and needs to be stopped
MICROSOFT STATEMENT:
“The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay.”
Wow. Er... this is the kind of error they’re going to write books about. Amazing!
Source: http://uk.businessinsider.com/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3