Google fights child abuse content with artificial intelligence

Google has launched a free artificial intelligence program designed to help human moderators identify and remove child abuse content. Fighting child sexual abuse material is a priority for large Internet companies, but it is a difficult and grim task for those moderators. Today it is largely done by checking photos and videos against a list of previously identified abusive material, using tools such as PhotoDNA, developed by Microsoft and adopted by companies including Facebook and Twitter.

Systems of this kind, which match content against an index of known material, are an effective way to stop people from republishing previously identified child abuse content, but they cannot flag material that has never been identified as illegal before. For that, human moderators have to step in and review the content themselves.
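To make the limitation concrete, here is a minimal sketch of matching uploads against an index of known material. PhotoDNA itself is proprietary and uses perceptual hashes that survive resizing and re-encoding; the cryptographic hash and the example digest below are stand-ins for illustration only.

```python
import hashlib

# Hypothetical index of fingerprints of previously identified material.
# (This digest is simply the SHA-256 of the string "test".)
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest serving as the content's fingerprint."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_abusive(image_bytes: bytes) -> bool:
    """Check an upload against the index of known material."""
    return fingerprint(image_bytes) in KNOWN_BAD_HASHES

print(is_known_abusive(b"test"))       # → True  (already in the index)
print(is_known_abusive(b"new image"))  # → False (never seen before)
```

The second call shows exactly the gap the article describes: material absent from the index passes through unflagged, no matter how abusive it is, which is why human review remains necessary.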

Google’s new artificial intelligence program helps at exactly this point. Drawing on the company’s expertise in computer vision, it sorts through images and videos that have not been previously reported and surfaces the content most likely to be abusive, so that moderators can focus their review there. Google says this allows a much faster review process than before.
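Google has not published the internals of its system, but the triage step it describes can be sketched as ordering an unreviewed queue by a classifier’s score. The `classify` callable and the scores below are hypothetical stand-ins for the deep-network classifier.

```python
from typing import Callable, List, Tuple

def triage(images: List[bytes],
           classify: Callable[[bytes], float]) -> List[Tuple[float, bytes]]:
    """Order unreviewed items so moderators see the likeliest matches first."""
    scored = [(classify(img), img) for img in images]
    scored.sort(key=lambda pair: pair[0], reverse=True)  # highest score first
    return scored

# Hypothetical scores for demonstration only.
fake_scores = {b"a": 0.1, b"b": 0.93, b"c": 0.5}
queue = triage(list(fake_scores), fake_scores.get)
print([img for _, img in queue])  # → [b'b', b'c', b'a']
```

The speed-up the article reports comes from this ordering: a moderator’s limited time goes to the items most likely to require action instead of to a random sample of the queue.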

According to the company, in one trial the program helped a moderator take action on 700 percent more child abuse content in the same time period. Fred Langford, deputy CEO of the Internet Watch Foundation (IWF), said the program would help teams like his, which have limited resources, work far more effectively, since they currently rely on humans alone to review content.

The IWF is one of the largest organizations dedicated to stopping the spread of child abuse material on the Internet. It is based in the UK but receives funding and contributions from major global technology companies, including Google, and employs a team of human moderators to identify abusive images. It also operates tip lines in more than a dozen countries where Internet users can report suspicious material.

Given the hype surrounding artificial intelligence and its potential, Langford said, the IWF will test Google’s new program thoroughly to see how it performs and how it fits into its moderators’ workflow. Tools like this, he said, are a step toward fully automated systems that could identify previously unseen material without human interaction, but he cautioned that such tools should be trusted only in clear-cut cases, to avoid letting abusive material slip through the net.
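The "trust only in clear-cut cases" policy Langford describes amounts to acting automatically only above a high confidence threshold and routing everything ambiguous to a human. The thresholds below are illustrative assumptions, not published values.

```python
# Hypothetical policy thresholds for a classifier score in [0, 1].
AUTO_ACTION = 0.99    # assumed: near-certain cases only
SEND_TO_HUMAN = 0.50  # assumed: ambiguous but worth a moderator's time

def route(score: float) -> str:
    """Decide what happens to an item given its classifier score."""
    if score >= AUTO_ACTION:
        return "automatic action"
    if score >= SEND_TO_HUMAN:
        return "human review"
    return "no action"

print(route(0.995))  # → automatic action
print(route(0.7))    # → human review
print(route(0.1))    # → no action
```

Setting the automatic threshold very high keeps false positives rare, at the cost of leaving most of the work in the middle band to human moderators, which is exactly the trade-off the article describes.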

The new Google program is based on deep neural networks (DNNs) and will be provided free of charge to NGOs and industry partners, including other technology companies, at a time when the company faces growing criticism over its role in helping offenders spread child abuse material across the web.

