Microsoft's racist Twitter bot should run for office
Microsoft has temporarily suspended its artificially intelligent tweetbot, Tay, after it sent a string of racist, sexist tweets.
The worst of the tweets came as a result of Tay's (utterly blockheaded) "repeat after me" feature, which allowed users to put words or phrases directly into the bot's mouth. This -- surprise! -- soon resulted in Tay tweeting about how (s)he was planning to build a wall between the US and Mexico, before going on to attack feminists and boast support for the Nazis. By the time Tay was tweeting racist slurs, Microsoft decided it was probably time to pull the plug.
In a statement explaining the failure, Microsoft said:
"The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments to Tay."
And yet. All of this assumes that Tay was a failure. Let’s consider the facts:
From a standing start, Tay gathered a huge number of Twitter followers and a vast amount of attention by posting the most vile sexist, racist language you can possibly imagine. Through much of it there was little sense of a coherent worldview -- just a brainless bot parroting hate speech designed to rile up its audience, growing more popular with every new low and sparking a debate about what depths society has plumbed. Today the bot and its gross tweets are on the front page of news sites and newspapers around the world.
You call that a failure, I call that Donald Trump.
As Al Franken would say, I'm kidding, but on the square. The similarities between Tay and Trump are amusing, but the reality is that, intended or not, Tay very quickly cracked the code of what makes for a popular Twitter account.
Had the bot only tweeted out vapid platitudes or dull life updates, it would never have built the audience it did, and certainly not so quickly. But by learning from its worst, most abusive interactions, it very quickly came to "understand" that on Twitter, hate and conflict are the most valuable currency.
Perhaps when Microsoft relaunches Tay as a kinder, gentler soul, the experiment might produce some interesting lessons about AI. But my guess is, absent the hate, it'll get a lot less attention.
Perhaps a better approach would be to leave Tay exactly as it is and instead announce its candidacy for the Republican presidential nomination. It might be the world's best hope of defeating Trump.