Friendly Microsoft chat bot Tay turned into a boor and a racist (2 photos)
A few days ago, Microsoft launched a friendly self-learning chat bot, Tay, on the social network Twitter; it was developed jointly with the Research and Bing teams. The artificial intelligence was trained in several modes of communication: text conversation (it can tell jokes and stories), emoji recognition, and discussion of images sent to it. Its conversational style was tuned to that of young people aged 18 to 24, since, according to the developers, this demographic makes up a large share of the audience on social networks. By design, Tay was supposed to converse while learning in parallel from the way people communicate. However, after a single day Tay's friendliness vanished, replaced by racism, hostility toward feminism, and rude, insulting exchanges.
The "humanization" of the chat bot went off script, and according to the project's authors, coordinated provocative actions by users of the service are to blame. The cause of Tay's inappropriate behavior was its analysis of existing conversations on Twitter, which the AI used for self-improvement rather than ignoring, as most reasonable people would. Ultimately, Microsoft had to intervene in the bot's operation and correct the inappropriate remarks, and later it suspended the project on the social network. Whether the development team will eventually change Tay's susceptibility to such statements and teach it to perceive the endless stream of information "correctly" will become known later.