Earlier this year, Meta announced an initiative to label AI-generated photos across its social networks. Since May, the company has been applying a “Made with AI” label to some photos on Facebook, Instagram, and Threads. However, the approach has drawn criticism from users and photographers, who say Meta is applying the label to photos that were not created with AI tools.
Many users have reported instances where Meta automatically tagged non-AI photos with the “Made with AI” label. In one case, a photo of the Kolkata Knight Riders winning the Indian Premier League cricket tournament was incorrectly labeled. Notably, the label appears only in Meta’s mobile apps, not on the web. Numerous photographers have voiced frustration over their images being wrongly tagged, arguing that making routine edits with a tool should not trigger the label.
Former White House photographer Pete Souza shared on Instagram that one of his photos was tagged with the new label. Souza explained that Adobe’s updated cropping tool now requires users to “flatten the image” before saving it as a JPEG. He suspects that this action may have triggered Meta’s algorithm to label the photo as AI-generated. Despite unchecking the “Made with AI” option, Souza found the label was still applied.
Meta’s Response and Criteria
Meta has not provided on-the-record responses to questions about specific cases like Souza’s or other mislabeled photos. In a February blog post, Meta said it uses image metadata to detect AI-generated content and apply the label. The company said it was developing tools to identify invisible markers in images, specifically the “AI generated” information defined in the C2PA and IPTC technical standards, so it can label images from sources such as Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as those companies add this metadata to content created with their tools.
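As a rough illustration of what detection based on such markers can look like, the sketch below scans a JPEG’s raw bytes for the IPTC DigitalSourceType value “trainedAlgorithmicMedia,” which the IPTC standard uses to identify media created by generative AI. This is only an assumption-laden approximation: Meta has not published its detection code, the file path and function name here are hypothetical, and a production system would parse XMP packets and cryptographically signed C2PA manifests properly rather than searching for a string.

```python
# Minimal sketch: look for the IPTC "trainedAlgorithmicMedia" marker in a
# JPEG's embedded metadata. This is an illustrative approximation, not
# Meta's actual detection pipeline.
import sys

# IPTC NewsCodes value that identifies media created by generative AI.
AI_SOURCE_TYPE = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def has_ai_marker(path: str) -> bool:
    """Return True if the file's raw bytes contain the IPTC AI marker.

    A real detector would parse the XMP/JUMBF structures and validate any
    C2PA manifest; a byte scan is only a rough check.
    """
    with open(path, "rb") as f:
        data = f.read()
    return AI_SOURCE_TYPE in data

if __name__ == "__main__":
    for image in sys.argv[1:]:
        result = "AI marker found" if has_ai_marker(image) else "no AI marker"
        print(f"{image}: {result}")
```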
Reports indicate that Meta applies the “Made with AI” label when photographers use tools such as Adobe’s Generative Fill to remove objects. While Meta has not clarified the exact criteria for applying the label, some photographers support the approach, believing that any use of AI tools should be disclosed. However, there is no separate label to distinguish photos merely cleaned up with such tools from those created entirely with AI, which makes it difficult for users to gauge the extent of AI involvement in an image.
Challenges and Implications
Despite Meta’s labeling efforts, many clearly AI-generated photos on its platforms remain unlabeled. With U.S. elections approaching, social media companies face increased pressure to handle AI-generated content accurately. Meta’s current label specifies that “Generative AI may have been used to create or edit content in this post,” but this information is only visible if users tap on the label.
Meta’s initiative to label AI-generated photos aims to provide transparency but has faced significant backlash due to incorrect tagging of real photos. As social media platforms navigate the complexities of AI content, ensuring accurate labeling will be crucial, particularly in the context of upcoming elections and the growing use of AI tools in digital media.