Meta to Flag AI-Generated Content on Facebook, Instagram
TEHRAN (Tasnim) – Meta announced that starting in May, it will begin labeling AI-generated content on Facebook and Instagram. The decision marks a departure from its previous policy of removing such computer-generated content.

In a blog post on Friday, Meta explained that it will apply "Made with AI" labels to photo, audio, or video content generated using artificial intelligence. These labels will be automatically applied when Meta detects "industry-shared signals" of AI content or when users voluntarily disclose that their post was created with AI.

Meta stated that if the content poses a high risk of materially deceiving the public on an important matter, a more prominent label may be applied. Presently, Meta's 'manipulated media' policy only covers videos altered by AI to make a person appear to say something they didn't say, and such content is removed rather than labeled.

The new policy expands to cover videos showing someone "doing something they didn't do," as well as photos and audio. However, unlike the previous approach, this content will be allowed to remain online.

"Our manipulated media policy was written in 2020 when realistic AI-generated content was rare...In the last four years, and particularly in the last year, people have developed other kinds of realistic AI-generated content like audio and photos, and this technology is quickly evolving," Meta explained.

Earlier this year, US regulators banned AI-generated "robocalls" after New Hampshire residents received computer-generated calls urging them to sit out the state's Democratic primary election. Former US President Donald Trump has accused media outlets of using AI to alter his appearance in photographs.

Meta is not alone in using labels to address AI-generated content. TikTok began asking users to label their AI-generated content last year, and YouTube recently introduced a similar system.

Lawmakers have urged tech firms to take action against AI-created "deepfakes," particularly with crucial elections approaching in the EU and the US. Microsoft, Meta, Google, and other industry leaders have pledged to prevent deceptive AI content from interfering with global elections.

Under the EU's AI Act, which takes effect next summer, platforms may face fines for failing to detect and identify AI-created content, including text related to public matters. This provision could compel platforms like TikTok and YouTube to adopt Meta's labeling approach.