Meta, the parent company of Facebook and Instagram, has revealed significant changes to its policies on AI-generated and manipulated media, following criticism from its Oversight Board. The revisions, set to take effect next month, will involve more extensive labeling of potentially deceptive content, with a specific focus on deepfakes. Meta intends to introduce a "Made with AI" badge on deepfake videos and to provide additional context for other manipulated media that could mislead the public on important issues. The changes could result in far more misleading content being labeled, which matters in a year marked by elections around the world. However, Meta will only label deepfakes when industry-standard AI image indicators are present or when the uploader discloses that the content is AI-generated.
The policy shift signals a move toward keeping AI-generated and manipulated media on Meta's platforms, prioritizing transparency and context over removal. The company views this as a more effective way of addressing problematic content, given the free-speech risks of outright removal: more labels, fewer takedowns across platforms like Facebook and Instagram. Starting in July, Meta will stop removing content solely on the basis of its current manipulated-video policy, giving users a transition period to become familiar with the self-disclosure process. The decision also reflects growing legal pressure on content moderation, particularly from the European Union's Digital Services Act.
The Oversight Board, although funded by Meta, operates independently and reviews only a small fraction of the company's content moderation decisions. Nonetheless, Meta has agreed to adapt its approach based on the Board's recommendations. Monika Bickert, Meta's VP of content policy, acknowledged the need to broaden policies on AI-generated and manipulated media following the Board's feedback. Notably, the Board scrutinized Meta's approach after reviewing a manipulated video of President Biden, urging the company to reconsider its handling of AI-generated content. In response, Meta announced plans to expand its labeling of synthetic media, including video and audio, based on industry-shared signals or user disclosures, giving users more context when they encounter potentially deceptive content.
Despite these changes, Meta clarified that it will remove manipulated content only if it violates other policies, such as those against voter interference or incitement. In cases of high public interest, Meta may opt to add informational labels and context rather than remove the content outright. The company also highlighted its collaboration with independent fact-checkers, who will play a crucial role in identifying and flagging false or altered content, enabling Meta to reduce its reach and give users additional information when they encounter it. As synthetic content becomes more prevalent, Meta's policy shift underscores a reliance on labeling and contextualization over removal, in line with evolving demands around content moderation and free speech.