Tay

Microsoft disables its artificial intelligence experiment after it learned to be racist

Yesterday we told you about Tay, an artificial intelligence system that Microsoft had launched to interact with people across several channels, including Twitter.

The objective was to demonstrate how a bot could learn over time, adapting its knowledge base as it interacted with others. Tay started from a seeded knowledge base and was meant to evolve as it listened to and conversed with other people: a full-blown artificial intelligence experiment. A rough idea of the mechanism is sketched below.
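To make the idea concrete, here is a minimal, hypothetical sketch of that kind of online learning. The `EchoLearner` class and its seed phrases are illustrative assumptions, not Microsoft's actual implementation; the point is simply that the bot absorbs whatever it hears and may reuse it later.

```python
import random

class EchoLearner:
    """Toy chatbot that 'learns' by memorizing user phrases.

    Hypothetical illustration of online learning from conversation:
    every message it hears becomes a candidate reply later.
    """

    def __init__(self, seed_phrases):
        # Initial knowledge base, analogous to Tay's seeded responses.
        self.phrases = list(seed_phrases)

    def listen(self, message: str) -> None:
        # Online update: whatever users say is absorbed verbatim.
        self.phrases.append(message)

    def reply(self) -> str:
        # Respond by sampling from everything learned so far.
        return random.choice(self.phrases)

bot = EchoLearner(["Hello!", "Tell me more."])
bot.listen("Humans are great teachers.")
print(bot.reply())  # may echo anything it has ever heard
```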

The problem is that its teachers were not as good as expected: within a few hours of operation, Tay began making racist comments. Soon after, the offending tweets were removed and the bot was deactivated with a farewell message.

From messages about the work done by Hitler to matters related to September 11, the topics quickly got out of hand, escalating to insulting comments generated automatically from what the bot had learned talking to humans.

The lesson is simple: if we want to create artificial intelligence that learns automatically from humans, we have to select its teachers carefully, because if we release it directly onto the social web, the result can be catastrophic. Even a crude gate on what the bot is allowed to learn, as sketched below, changes what ends up in its knowledge base.
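As a toy illustration of that curation, here is a hedged sketch in which a naive keyword filter gates learning. `BLOCKLIST`, `is_acceptable`, and `FilteredLearner` are hypothetical names invented for this example; a real system would rely on a trained moderation model rather than a static word list.

```python
import random

BLOCKLIST = {"hitler"}  # toy stand-in for a real moderation layer

def is_acceptable(message: str) -> bool:
    # Naive keyword check; production systems would use a trained
    # moderation model, not a static list like this.
    lowered = message.lower()
    return not any(term in lowered for term in BLOCKLIST)

class FilteredLearner:
    """Like the echo bot above, but it curates its 'teachers'."""

    def __init__(self, seed_phrases):
        self.phrases = list(seed_phrases)

    def listen(self, message: str) -> None:
        # Only learn from messages that pass the filter.
        if is_acceptable(message):
            self.phrases.append(message)

    def reply(self) -> str:
        return random.choice(self.phrases)

bot = FilteredLearner(["Hello!", "Tell me more."])
bot.listen("hitler was right")           # blocked: never enters the knowledge base
bot.listen("The weather is nice today")  # accepted
print(bot.reply())                       # never echoes the rejected message
```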