Microsoft pulls chatbot “Tay” after it published racist messages
Microsoft Corp. has shut down its chatbot “Tay” less than 24 hours after launching it, after the program began broadcasting hateful and racist messages.
Microsoft had launched “Tay” as a program presenting the virtual persona of a teenage girl who could chat with users and answer their questions. The company built the bot using artificial intelligence technology and said that “Tay” was designed to learn and gradually improve her answers through the conversations she held.
But “Tay” turned into a tool for spreading racism and hate: the bot's account on Twitter began publishing messages glorifying Nazism and Hitler, inciting against black people and immigrants, and calling for ethnic cleansing.
Although Microsoft said that the basic knowledge of the girl represented by “Tay” came from public information available on the web, along with other content supplied by Microsoft, the company also stated that “Tay” expands her knowledge through her chats with others. It appears that this very feature made it possible to manipulate her information, as some users inculcated racist ideas into the bot.
Microsoft quickly deleted the offensive tweets published by “Tay” and disabled her answers until it can modify the algorithms, before bringing her back in a form that may reduce the possibility of manipulation.