Facebook and Instagram users will start seeing labels on AI-generated images that appear in their social media feeds, part of a broader tech industry initiative to help distinguish what's real from what's not.
Photorealistic images created using Meta's AI imaging tool are already labelled as AI. However, the company's president of global affairs, Nick Clegg, announced on Tuesday that Meta would begin labelling AI-generated images created on rival services as well.
Meta's AI images already contain metadata and invisible watermarks that signal to other organisations that the image was produced by AI. The company is now developing tools to identify these kinds of markers when they are applied by other companies, such as Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock, in their own AI image generators, Clegg said.
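The article does not specify which metadata standard these markers use, but one widely discussed approach is the IPTC "Digital Source Type" property, which tools can embed in an image's XMP metadata with the value `trainedAlgorithmicMedia` to flag synthetic content. A minimal sketch of checking for such a marker, assuming the XMP packet has already been extracted from the image file as text, might look like this:

```python
# Illustrative sketch only: checks an XMP metadata packet for the IPTC
# "Digital Source Type" value used to mark AI-generated media. Whether a
# given image actually carries this property depends entirely on the tool
# that produced it; watermark detection is a separate, more complex problem.

# IPTC controlled-vocabulary term for media created by a trained algorithm.
AI_SOURCE_TYPE = "trainedAlgorithmicMedia"

def declares_ai_source(xmp_packet: str) -> bool:
    """Return True if the XMP packet declares an AI digital source type."""
    return AI_SOURCE_TYPE.lower() in xmp_packet.lower()

# Hypothetical XMP fragment of the kind an AI image generator might embed.
sample_xmp = (
    '<rdf:Description '
    'Iptc4xmpExt:DigitalSourceType='
    '"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"/>'
)

print(declares_ai_source(sample_xmp))   # a packet carrying the marker
print(declares_ai_source("<rdf:Description/>"))  # a packet without it
```

A real pipeline would also need to parse the image container to locate the XMP packet in the first place, and metadata can be stripped or altered, which is why invisible watermarks are mentioned as a complementary signal.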
“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” Clegg said. “People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology. So it’s important that we help people know when photorealistic content they’re seeing has been created using AI.”
Clegg said the capability was being built and the labels would be applied in all languages in the coming months. He said the company would also place a more prominent label on “digitally created or altered” images, video or audio that “creates a particularly high risk of materially deceiving the public on a matter of importance”.