Facebook and Instagram users will start seeing labels on AI-generated images that appear on their social media feeds, part of a broader tech industry initiative to distinguish what is real from what is not.
Meta stated this week it was working with industry partners on technical standards that would make it easier to identify images and eventually video and audio generated by artificial intelligence tools.
What remained to be seen was how well it would work at a time when it was easier than ever to make and distribute AI-generated imagery that could cause harm, from election misinformation to non-consensual fake nudes of celebrities.
Meta’s president of Global Affairs, Nick Clegg, did not specify when the labels would appear.
Clegg, however, indicated that it would be “in the coming months” and in different languages, noting that a “number of important elections are taking place around the world.”
“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” he said in a blog post.
Meta already puts an “Imagined with AI” label on photorealistic images made by its own tool, but most of the AI-generated content flooding its social media services comes from elsewhere.
A number of tech industry collaborations, including the Adobe-led Content Authenticity Initiative, had been working to set standards.
A push for digital watermarking and labeling of AI-generated content was also part of an executive order that US President Joe Biden signed in October.
Clegg said Meta would be working to label images from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock as those companies implement their plans for adding metadata to images created by their tools.