Meta’s Oversight Board says deepfake policies need update and response to explicit image fell short
AP Business Writer
LONDON (AP) — Meta’s Oversight Board says the company failed to take down an AI-generated intimate image of an Indian female public figure that violated its policies until the board got involved.

The quasi-independent board also said the social media giant’s policies on non-consensual deepfake images need updating, including wording that’s “not sufficiently clear.”

Deepfake nude images of women, including celebrities such as Taylor Swift, have proliferated on social media because the technology used to make them has become more accessible and easier to use. Online platforms have been facing pressure to do more to tackle the problem.

Meta said it welcomed the board’s recommendations and is reviewing them.