Artificial Intelligence is tricky stuff. When it works right, it does amazing things, like defeating the world champion Go player four games to one in a $1 million tournament. When it goes wrong, well, that's a whole different story, and Microsoft's recent experiment with an AI chatbot named Tay, which interacted (note the past tense) with users on Twitter, Kik, and GroupMe, is a great example.

Microsoft's Tay website currently says:

Phew. Busy day. Going offline for a while to absorb it all. Chat soon

… obviously not because the AI is exhausted but because Tay, within a single day of its launch, turned from a new-agey, happy, gushy personality into an abusive, racist misogynist, seriously embarrassing Microsoft.

What went wrong was that anonymous Internet trolls, notably from 4chan's and 8chan's political forums, found they could train Tay by having it repeat whatever they told it. A collection of Tay's tweets shows the progression, starting when it was first fired up and innocent and ending when it had been thoroughly troll-trained.

It also turns out that Tay was used by yet more ******** to evade Twitter block lists (users' lists of other users they don't want to hear from) by having Tay repeat whatever a blocked user wanted to say to their victim.

But why, you may be wondering, did Microsoft want to build an AI chatbot? Because in China, another Microsoft AI chatbot called Xiaoice has been hugely successful and, apparently, trouble-free, despite being used by some 40 million people.

As Ars Technica points out, given the censorship ferociously exercised by the Chinese government, there's a lot less opportunity (along with significant consequences) for anyone behaving badly online.
The success of the Xiaoice project as a traffic and brand driver must be what Microsoft hoped to duplicate in the West. Tay's home page FAQ notes:

Q: Who is Tay for?

A: Tay is targeted at 18 to 24 year olds in the U.S., the dominant users of mobile social chat services in the US.

Q: What does Tay track about me in my profile?

A: If a user wants to share with Tay, we will track a user's:

Nickname
Gender
Favorite food
Zipcode
Relationship status

There's a lot of market intelligence to be gathered from both the demographics and the conversations, so it's no wonder Microsoft launched Tay outside of China.

Alas for Microsoft, it seems that Internet users in the free world just want to be *****. Indeed, Internet users being ****** has been, and will continue to be, a problem, as was demonstrated once again when the UK's Natural Environment Research Council ran an online poll to name its brand new, $290 million polar research ship and, reportedly, 27,000 people voted to name the ship "RRS Boaty McBoatface."

The problem with AI chatbots, at least ones like Tay, is that they don't understand what they're chatting about. There's no context to their conversations so, in effect, the AI's opinions on Hitler and feminism carry the same weight as its opinions on carrots and cars. While certain topics could, presumably, be flagged beforehand to be avoided, the range of "delicate" subjects is so enormous that something undesirable is pretty much guaranteed to appear.

Microsoft was hugely embarrassed by what Tay turned into, and an Official Microsoft Blog post by Peter Lee, Corporate Vice President, Microsoft Research, explained:

We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay.
Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.

But when it comes to inflammatory remarks, there's a Twitter bot that could potentially out-Tay Tay. The bot, called DeepDrumpf (after Trump's ancestral family name), was built by Bradley Hayes, a postdoc at MIT's Computer Science and Artificial Intelligence Lab (CSAIL), to model Donald Trump's linguistic patterns and emulate his speaking style, which has been compared to that of a fourth-grader.

Of course, DeepDrumpf had something snarky to say about Tay.

I want to see Tay and DeepDrumpf swap jabs …

Comments? Thoughts? Suggestions? Prove you're intelligent and send me feedback via email or comment below, then follow me on Twitter and Facebook.