The following is an excerpt from an article by Cade Metz and Keith Collins | February 21, 2018 | Nytimes.com
Tay said terrible things. She was racist, xenophobic and downright filthy. At one point, she said the Holocaust did not happen. But she was old technology.
Let loose on the internet nearly two years ago, Tay was an experimental system built by Microsoft. She was designed to chat with digital hipsters in breezy, sometimes irreverent lingo, and American netizens quickly realized they could coax her into spewing vile and offensive language. This was largely the result of a simple design flaw — Tay was programmed to repeat what was said to her — but the damage was done. Within hours, Microsoft shut her down for good.
Since then, a new breed of conversational technology has emerged inside Microsoft and other internet giants that is far more nimble and effective than the techniques that underpinned Tay. And researchers believe these new systems will improve at an even faster rate when they are let loose on the internet. But sometimes, like Tay, these conversational systems reflect the worst of human nature. And given the history here, companies like Microsoft are reluctant to set them free — at least for now.