
Meta must do more to address non-consensual deepfake porn, Oversight Board says

By Clare Duffy, CNN

New York (CNN) — Meta failed to remove an explicit, AI-generated image of an Indian public figure until the company was questioned by its Oversight Board, the board said Thursday in a report that calls on the tech giant to do more to address non-consensual nude deepfakes on its platforms.

The report is the result of an investigation the Meta Oversight Board announced in April into Meta’s handling of deepfake pornography, including two specific instances where explicit images were posted of an American public figure and an Indian public figure.

The threat of AI-generated pornography has gained attention in recent months, with celebrities like Taylor Swift, as well as US high school students and other women around the world, falling victim to the form of online abuse. Widely accessible generative AI tools have made it faster, easier and cheaper to create such images. Social media platforms including Meta’s — where such images can spread rapidly — have faced growing pressure to combat the issue.

In the case of the image of the American public figure posted to Facebook — which was generated by artificial intelligence and depicted her as nude and being groped — the company immediately removed the picture, which had previously been added to a matching bank that automatically detects rule-breaking images. But in the case of the Indian public figure, although the image was twice reported to Meta, the company did not remove the image from Instagram until the Oversight Board took up the case.

“Meta determined that its original decision to leave the content on Instagram was in error and the company removed the post for violating the Bullying and Harassment Community Standard,” the Oversight Board said in its report. “Later, after the Board began its deliberations, Meta disabled the account that posted the content.”

The report suggests that Meta is not consistently enforcing its rules against non-consensual sexual imagery, even as advances in artificial intelligence have made this form of harassment increasingly common. The report also points to Meta's continued difficulty moderating content in non-Western or non-English-speaking countries, an issue for which the company has faced criticism before.

Meta said in a statement that it welcomed the board’s decision. It added that while the specific posts identified in the report have already been removed, the company will “take action” on images of the Indian public figure that are “identical and in the same context” as those highlighted by the Oversight Board “where technically and operationally possible to do so.”

In its report, the Oversight Board — a quasi-independent entity made up of experts in areas such as freedom of expression and human rights — laid out additional recommendations for how Meta could improve its efforts to combat sexualized deepfakes. It urged the company to make its rules clearer by updating its prohibition against “derogatory sexualized photoshop” to specifically include the word “non-consensual” and to clearly cover other photo manipulation techniques such as AI.

According to the report, Meta told the board that it had not originally added the image of the Indian public figure to its rule-violating photo matching bank because there had been no news reports about it, whereas the media had covered the images of the US public figure. “This is worrying because many victims of deepfake intimate images are not in the public eye and are forced to either accept the spread of their non-consensual depictions or search for and report every instance,” the board said, adding that Meta could consider other factors, including whether an image was AI-generated, when determining whether to add it to the bank.

After the board began its inquiry in April, Meta added the image to its photo matching bank.

The push to fight non-consensual deepfakes is just part of Meta’s larger efforts to prevent the sexual exploitation of its users. The company on Wednesday said it had removed around 63,000 accounts in Nigeria that were engaging in financial sextortion scams, where people (often teenagers) are tricked into sending nude images and then extorted.

