Microsoft first bet on OpenAI in 2019 with a $1 billion investment, and followed this up with another $2 billion over the next few years.

The software giant didn't just back OpenAI with a boatload of money – its cloud services arm Azure provided the massive computing power needed to power ChatGPT. This led to a series of rapid breakthroughs that culminated in the internet losing its collective mind last month.

But Microsoft hasn't always been this slick and successful in its efforts to push the boundaries of AI – and that's putting it mildly.

In March 2016 – a simpler, more innocent time – the company decided it would be a good idea to improve the conversational abilities of its cutting-edge new chatbot 'Tay' by letting it loose on Twitter, of all places.

Developed by Microsoft to conduct research on "conversational understanding", Tay was designed to talk like a 19-year-old American girl and marketed as 'The AI with zero chill'. "The more humans chat with Tay, the smarter it gets," Microsoft said, betraying a rather naive understanding of human behaviour on the internet.

Not long after "she" made her debut with tweets such as, "Can I just say that I'm stoked to meet u? Humans are super cool", pranksters, trolls and layabouts began to bombard Tay's developing AI mind with a strict diet of human depravity. Like we said, it was a more innocent time.

In less than 24 hours, Tay transformed from a bubbly teenager into a racist, misogynistic, Holocaust-denying Nazi. As internet weirdos had a field day, Microsoft quickly realised that giving Tay a "repeat after me" function wasn't its best idea ever, and promptly pulled the plug.

But plenty of damage had already been done. In her brief 16-hour "life", Tay put out more than 96,000 tweets, gifting her creator a steaming pile of internet garbage and a PR disaster for the ages.