Microsoft apologizes for racist messages published by its chatbot "Tay"

Mar 26, 2016

In a post published on its website, Microsoft Corp. apologized for the racist tweets published by its artificial-intelligence chatbot "Tay," which the company had taken offline less than 24 hours after its launch.

Peter Lee, head of research at Microsoft, wrote the post explaining what happened, stressing that what Tay published reflects neither Microsoft nor how the bot was designed.

Tay is a program that presents the virtual persona of a teenage girl who can chat with users and answer their questions. The company built the bot using artificial-intelligence technology, saying that Tay was designed to learn and gradually improve its answers through the conversations it holds. But Tay turned into a tool for spreading racism and hate: the bot's Twitter account began publishing messages glorifying Nazism and Hitler, inciting against Black people and immigrants, and calling for ethnic cleansing.

Peter Lee tried to explain what had happened, noting that Tay is not Microsoft's first application of artificial-intelligence technology: the company had already launched the chatbot XiaoIce in China, which has been used by about 400 million users to chat and exchange views. According to Microsoft, that experiment was successful enough to encourage the company to try Tay, a bot aimed at Americans between the ages of 18 and 24. The company said that while developing Tay it took care to test the bot under different conditions and sought to provide a positive, problem-free experience.

But the company described what happened as a "coordinated attack" by people who exploited a vulnerability in Tay's system, enabling them to feed the bot racist ideas and hateful views, which it then repeated because it is designed to learn and build its knowledge through interaction with others. Although Microsoft had built Tay so that it would not be easy to manipulate, the company says it failed to anticipate this specific kind of attack.

Finally, the company said it now faces the challenge of improving its artificial-intelligence research to avoid such incidents in the future.
