To allay concerns among users and governments about deepfakes, Meta, the parent company of Facebook and Instagram, announced on Friday that it will start labeling AI-generated content in May. The social media giant also stated that, in order to respect the right to free speech, it will no longer delete manipulated images and audio that do not violate its policies, relying instead on labels and context, including a “Made with AI” label. The policy applies to posts on Facebook, Instagram, and Threads.
Meta announced that it will begin designating more audio, video, and image content as AI-generated, admitting that its present policy is “too narrow.” Although the company did not elaborate on its detection method, labels will be applied either when users disclose that they used AI tools or when Meta recognizes the content through industry-standard AI image indicators.
Meta responds to Oversight Board criticism
The modifications were prompted by criticism from Meta’s Oversight Board, the independent panel that reviews the company’s content moderation decisions. Pointing to the tremendous advancements in AI and the ease with which media can be manipulated into highly convincing deepfakes, the board asked Meta in February to rapidly reassess its approach to manipulated media. The board issued its warning in the middle of an important election year, amid concerns about the growing misuse of AI-powered applications for disinformation on social platforms.
Monika Bickert, Vice President of Content Policy, said,
“We agree with the Oversight Board’s argument that our existing approach is too narrow since it only covers videos that are created or altered by ai to make a person appear to say something they didn’t say.”
Source: Meta
In response to the Oversight Board’s recommendations, Meta opted to shift its approach to manipulated media. Rather than removing such content outright, the company will now rely on labeling and contextualization to provide transparency about its origin and authenticity. According to Meta, the change reflects its commitment to freedom of expression while still safeguarding users from the harm posed by misleading or deceptive media.
Implementation of AI content labeling
Meta’s labeling project will be rolled out in two stages, the first of which begins in May 2024. During this stage, AI-generated content across a variety of media formats, including audio, video, and images, will be identified and labeled accordingly. Content judged to carry a high risk of deceiving the public will also receive a more prominent label to warn users of possible manipulation.
In the second rollout phase, set to go live in July, Meta will stop removing manipulated media solely on the basis of its previous policy. Under the new standard, AI-manipulated content will remain accessible on the platform unless it violates other Community Standards, such as those prohibiting hate speech or election interference. This approach reflects Meta’s stated commitment to balancing freedom of expression with preserving the integrity of its platforms.
Bickert added,
“The labels will cover a broader range of content in addition to the manipulated content that the Oversight Board recommended labeling.”
Source: Meta
Content that breaks other guidelines, such as those pertaining to bullying, election interference, and harassment, will still be taken down regardless of whether it was made with AI tools.
These new labeling approaches are connected to an agreement reached in February, in which major digital platforms and AI companies pledged to crack down on manipulated content meant to mislead voters. Meta, Google, and OpenAI had already agreed on a common watermarking standard for images generated by AI applications.
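Neither Meta nor the February agreement is described here in technical detail, so as a rough illustration only: one commonly cited provenance signal is the IPTC “Digital Source Type” value `trainedAlgorithmicMedia`, which AI image tools can embed in a photo’s XMP metadata. The minimal Python sketch below shows how a naive check for that marker might look; the marker choice and the approach are assumptions for illustration, not a description of Meta’s actual detection pipeline.

```python
# Illustrative sketch only: naive check for an AI-provenance marker in image metadata.
# Assumes the generator embedded the IPTC digital source type "trainedAlgorithmicMedia"
# as plain text in the file's XMP block; this is NOT Meta's actual detection system.

AI_SOURCE_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_generated(image_path: str) -> bool:
    """Return True if the file's embedded metadata contains the AI-source marker."""
    with open(image_path, "rb") as f:
        data = f.read()
    # XMP metadata is stored as plain text inside the image file,
    # so a simple byte search is enough for this rough check.
    return AI_SOURCE_MARKER in data

if __name__ == "__main__":
    import sys
    for path in sys.argv[1:]:
        label = "Made with AI" if looks_ai_generated(path) else "no AI marker found"
        print(f"{path}: {label}")
```

A real system would parse and verify signed provenance metadata (and watermarks that survive re-encoding) rather than searching raw bytes, but the sketch conveys the basic idea of reading a disclosure signal embedded by the generating tool.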