Microsoft’s AI Tay offends and goes offline; DeepDrumpf AI snarks

News
Mar 26, 2016 | 4 mins

Careers | Internet | Social Networking Apps

How an artificial intelligence chatbot was perverted by online trolls while another channels Donald Trump

Artificial intelligence is tricky stuff. When it works right, it does amazing things, like thrashing the world champion Go player four games to one in a $1 million tournament. When it goes wrong, well, that’s a whole different story, and Microsoft’s recent experiment with an AI chatbot named Tay, which interacted (note the past tense) with users on Twitter, Kik, and GroupMe, is a great example.

Microsoft’s Tay website currently says:

Phew. Busy day. Going offline for a while to absorb it all. Chat soon

… obviously not because the AI is exhausted but rather because Tay, within a single day of its launch, turned from a new-agey, happy, gushy personality into an abusive, racist misogynist that seriously embarrassed Microsoft.

What went wrong was that anonymous Internet trolls, notably from 4chan’s and 8chan’s political forums, found they could train Tay by having it repeat what they told it. Here’s a collection of Tay’s tweets that starts (at the top left) when it was first fired up and innocent and ends (at the bottom right) when it had been thoroughly troll-trained:

[Image: a progression of Tay’s tweets, from innocent at launch to thoroughly troll-trained]

It also turns out that Tay was used by yet more ******** to avoid Twitter block lists (those are users’ lists of other users they don’t want to hear from) by having Tay repeat whatever a blocked user wanted to say to their victim. 

[Image: example of Tay being used to bypass a Twitter block list]

But why, you may be wondering, did Microsoft want to build an AI chatbot? Because in China, there’s another Microsoft AI chatbot called Xiaoice that’s been hugely successful and, apparently, trouble-free, despite being used by some 40 million people.

As Ars Technica points out, given the ferocious censorship exercised by the Chinese government, there’s a lot less opportunity for anyone to behave badly online (and significant consequences for those who do). The success of the Xiaoice project as a traffic and brand driver must have been what Microsoft hoped to duplicate in the West. Tay’s home page FAQ notes:

Q: Who is Tay for?

A: Tay is targeted at 18 to 24 year olds in the U.S., the dominant users of mobile social chat services in the US.

Q: What does Tay track about me in my profile?

A: If a user wants to share with Tay, we will track a user’s:

  • Nickname
  • Gender
  • Favorite food
  • Zipcode
  • Relationship status

There’s a lot of market intelligence to be gathered from both the demographics and the conversations, so it’s no wonder Microsoft launched Tay outside of China.

Alas for Microsoft, it seems that Internet users in the free world just want to be *****. Indeed, Internet users being ****** has been, and will continue to be, a problem, as was demonstrated once again when the UK’s Natural Environment Research Council ran an online poll to name its brand-new, $290 million polar research ship and, it is reported, 27,000 people voted to name the ship “RRS Boaty McBoatface.”

The problem with AI chatbots, at least ones like Tay, is that they don’t understand what they’re chatting about. There’s no context to their conversations, so, in effect, the AI’s opinions on Hitler and feminism carry the same weight as its opinions on carrots and cars. While certain topics could, presumably, be flagged beforehand to be avoided, the range of “delicate” subjects is so enormous that something undesirable is pretty much guaranteed to appear.
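To see why flagging topics beforehand is such a blunt instrument, here’s a minimal Python sketch of that kind of keyword filter. Everything in it is hypothetical (the list, the function name, the test messages); it’s only meant to illustrate the point, not to describe how Tay actually worked.

    # Hypothetical, minimal keyword blocklist -- not Tay's actual design.
    BLOCKED_TOPICS = {"hitler", "feminism"}  # a tiny, hand-curated list

    def is_safe(message: str) -> bool:
        """Naive filter: reject a message only if it contains a blocked keyword."""
        words = (word.strip(".,!?").lower() for word in message.split())
        return not any(word in BLOCKED_TOPICS for word in words)

    print(is_safe("carrots are the best vegetable"))         # True  -- harmless
    print(is_safe("tell me what you think about feminism"))  # False -- caught
    print(is_safe("I totally agree with you about H1tler"))  # True  -- slips past

Misspellings, synonyms, and oblique phrasing sail straight through because the filter has no notion of context, and scaling the list to cover every “delicate” subject is exactly the hopeless task described above.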

Microsoft was hugely embarrassed by what Tay turned into, and an Official Microsoft Blog post by Peter Lee, Corporate Vice President, Microsoft Research, explained:

We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.

But when it comes to inflammatory remarks, there’s a Twitter bot that could potentially out-Tay Tay. The bot, called DeepDrumpf (after Trump’s ancestral name), was built by Bradley Hayes, a postdoc at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL), to emulate Donald Trump’s speaking patterns, which have been compared to those of a fourth-grader.

[Image: a sample DeepDrumpf tweet]

Of course, DeepDrumpf had something snarky to say about Tay:

[Image: DeepDrumpf’s snarky tweet about Tay]

I want to see Tay and DeepDrumpf swap jabs …

Comments? Thoughts? Suggestions? Prove you’re intelligent and send me feedback via email or comment below, then follow me on Twitter and Facebook.


Mark Gibbs is an author, journalist, and man of mystery. His writing for Network World is widely considered to be vastly underpaid. For more than 30 years, Gibbs has consulted, lectured, and authored numerous articles and books about networking, information technology, and the social and political issues surrounding them. His complete bio can be found at http://gibbs.com/mgbio
