Facebook announced (see below) on Wednesday that it removed 8.7 million pieces of content that violated its child nudity or child sexual exploitation policies in a three-month span. About 99 percent of the affected content was removed before any user reported it, the company said.
The company said it had used artificial intelligence and machine-learning software over the past year to flag the images as they were uploaded. The figure it gave, 8.7 million pieces of content found worldwide, covered actions taken between July and September. The approach is an improvement on photo matching, which Facebook has used for years to stop the sharing of known child exploitation images.
The software is able to “get in the way of inappropriate actions with children, review them and if it looks like there’s something problematic, take action,” Antigone Davis, Facebook’s global head of safety, said.
Artificial intelligence lets the company quickly identify such content, notify the National Center for Missing and Exploited Children, and close the accounts of Facebook users promoting inappropriate actions with children.
“We have specially trained teams with backgrounds in law enforcement, online safety, analytics, and forensic investigations, which review content and report findings to NCMEC,” the company said on Wednesday.
Facebook has historically erred on the side of caution in deleting and reporting inappropriate photos of children. The process has led to the removal of photographs of emaciated children taken at Nazi concentration camps, as well as a Pulitzer Prize-winning war photo of a naked Vietnamese girl after a napalm attack.
In the past, though, the company has relied largely on users who flag and report inappropriate images.
The new system allows Facebook to “proactively detect child nudity and previously unknown child exploitative content when it’s uploaded,” it said.
Here is the Facebook announcement:
New Technology to Fight Child Exploitation
One of our most important responsibilities is keeping children safe on Facebook. We do not tolerate any behavior or content that exploits them online and we develop safety programs and educational resources with more than 400 organizations around the world to help make the internet a safer place for children. For years our work has included using photo-matching technology to stop people from sharing known child exploitation images, reporting violations to the National Center for Missing and Exploited Children (NCMEC), requiring children to be at least 13 to use our services, and limiting the people that teens can interact with after they sign up.
Today we are sharing some of the work we’ve been doing over the past year to develop new technology in the fight against child exploitation. In addition to photo-matching technology, we’re using artificial intelligence and machine learning to proactively detect child nudity and previously unknown child exploitative content when it’s uploaded. We’re using this and other technology to more quickly identify this content and report it to NCMEC, and also to find accounts that engage in potentially inappropriate interactions with children on Facebook so that we can remove them and prevent additional harm.
Our Community Standards ban child exploitation and to avoid even the potential for abuse, we take action on nonsexual content as well, like seemingly benign photos of children in the bath. With this comprehensive approach, in the last quarter alone, we removed 8.7 million pieces of content on Facebook that violated our child nudity or sexual exploitation of children policies, 99% of which was removed before anyone reported it. We also remove accounts that promote this type of content. We have specially trained teams with backgrounds in law enforcement, online safety, analytics, and forensic investigations, which review content and report findings to NCMEC. In turn, NCMEC works with law enforcement agencies around the world to help victims, and we’re helping the organization develop new software to help prioritize the reports it shares with law enforcement in order to address the most serious cases first.
We also collaborate with other safety experts, NGOs and companies to disrupt and prevent the s****l exploitation of children across online technologies. For example, we work with the Tech Coalition to eradicate online child exploitation, the Internet Watch Foundation, and the multi-stakeholder WePROTECT Global Alliance to End Child Exploitation Online. And next month, Facebook will join Microsoft and other industry partners to begin building tools for smaller companies to prevent the grooming of children online. You can learn more about all of our efforts at facebook.com/safety.