Meta Plans a Less Punitive AI-Generated Content Policy


Meta announced an update to its AI tagging policy, expanding its definition of “manipulated media” beyond AI-generated videos to include misleading audio and images on Facebook, Instagram, and Threads.

A notable characteristic of the new policy is its sensitivity to perceptions of restricting freedom of expression. Rather than removing problematic content, Meta simply labels it. The company introduced two tags, “Made with AI” and “Imagined with AI,” to make clear which content was created or modified with AI.

New warning labels

Labeling of AI-generated content will be based on identifying authorship signals and on self-reporting by uploaders:

“Our ‘Made with AI’ tags on AI-generated video, audio and images will be based on our detection of signals shared by the AI imagery industry or people who reveal that they are uploading AI-generated content.”

Highly misleading content may be tagged more prominently so that users have better context for what they are seeing.

Harmful content that violates community standards, such as incitement to violence, election interference, bullying, or harassment, may still be removed, regardless of whether it is human- or AI-generated.

Reason for Meta’s updated policy

The original AI flagging policy was created in 2020 and, due to the state of the technology, was limited to addressing misleading videos (the kind that depicted public figures saying things they never did). Meta’s Oversight Board recognized that technology has advanced to the point where a new policy is needed. Accordingly, the new policy is expanded to now address AI-generated audio and images, in addition to video.

Based on user feedback

Meta’s process for updating its rules appears to have anticipated pushback from all sides. The new policy is based on extensive comment from a wide range of stakeholders and input from the general public, and it has built-in flexibility to adapt as circumstances change.

Meta explains:

“In the spring of 2023, we began reassessing our policies to see if we needed a new approach to keep pace with the rapid developments… We completed consultations with more than 120 stakeholders in 34 countries in all major regions of the world. Overall, we heard broad support for labeling AI-generated content and strong support for more prominent labeling in high-risk scenarios. Many stakeholders were receptive to the concept of people self-disclosing content as generated by AI.

… We also conducted public opinion research with more than 23,000 respondents in 13 countries, asking people how social media companies, like Meta, should approach AI-generated content on their platforms. A large majority (82%) favor warning labels for AI-generated content that depicts people saying things they didn’t say.

… And the Oversight Board noted that its recommendations were informed by consultations with civil society organizations, academics, intergovernmental organizations and other experts.”

Collaboration and consensus

Meta’s announcement explains that they plan to keep the policy in step with technology by reviewing it with organizations like the AI Association, governments, and non-governmental organizations.

Meta’s revised policy emphasizes the need for transparency and context for AI-generated content, that content removal will be based on violations of its community standards, and that the preferred response will be to flag potentially problematic content.

Read Meta’s announcement:

Our approach to tagging AI-generated content and manipulated media

Featured image by Shutterstock/Boumen Japet



About the Author: Ted Simmons

I follow and report the current news trends on Google News.
