Elon Musk’s platform, X, is the latest social networking service to introduce a feature that labels edited images as “manipulated media,” according to a post by Musk. However, the company has yet to clarify the criteria for this determination or whether it includes images modified with traditional editing tools, such as Adobe Photoshop.
The only information available about this new feature comes from a vague post on X by Elon Musk, stating simply "Edited visuals warning," as he reshared an announcement from the anonymous account DogeDesigner. That account frequently acts as a conduit for new feature announcements on X, as Musk often reposts its updates.
Details regarding the new system remain scarce. According to DogeDesigner’s post, the feature could potentially complicate the dissemination of misleading clips or images by traditional media outlets. Additionally, the post asserted that this feature is novel to X.
Prior to its acquisition and rebranding as X, the platform known as Twitter labeled tweets containing manipulated, deceptively altered, or fabricated media instead of removing them. This policy covered not only AI-generated content but also alterations such as selective editing, cropping, slowing down footage, overdubbing, and subtitle manipulation, as Yoel Roth, Twitter's then-head of site integrity, explained in 2020.
It remains unclear whether X will adopt similar guidelines or if there have been substantial changes regarding the handling of AI-generated content. Currently, its help documentation mentions a policy against disseminating inauthentic media, but enforcement appears limited, as evidenced by the recent deepfake controversy involving the sharing of non-consensual nude images. Moreover, even the White House has shared manipulated visuals.
The distinction between “manipulated media” and “AI-generated images” can be intricate.
Considering that X serves as a platform for political propaganda both domestically and internationally, it is essential to document how the company will define “edited” content, including what qualifies as AI-generated or AI-manipulated. Furthermore, users should be informed about the existence of a dispute resolution process beyond X’s crowdsourced Community Notes.
As demonstrated by Meta when it launched AI image labeling in 2024, detection systems can sometimes yield erroneous results. For instance, Meta inadvertently tagged authentic photographs with its “Made with AI” label, even when they were not produced using generative AI.
This confusion arose because AI functionalities are increasingly integrated into the creative tools utilized by photographers and graphic designers. A recent example is Apple’s new Creator Studio suite, which launched today.
This situation confused Meta's identification algorithms. For instance, Adobe's cropping tool was flattening images before saving them as JPEGs, which triggered Meta's AI detector. Similarly, Adobe's Generative AI Fill tool, used for object removal, like eliminating wrinkles from clothing or unwanted reflections, was also causing images to be labeled "Made with AI," even though they had only been retouched with AI-assisted tools rather than generated by AI.
Ultimately, Meta revised its labeling to “AI info” to avoid mislabeling images that had not been produced entirely with AI technology.
Currently, a standards-setting body known as the C2PA (Coalition for Content Provenance and Authenticity) exists to verify the authenticity and provenance of digital content. Related initiatives, such as the Content Authenticity Initiative (CAI) and Project Origin, focus on embedding tamper-evident provenance metadata into media content.
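To make the provenance idea concrete: in JPEG files, C2PA embeds its manifest as JUMBF boxes inside APP11 marker segments. The sketch below is a minimal, illustrative detector that scans a JPEG byte stream for APP11 segments and checks for a C2PA signature. It only detects the *presence* of a manifest; the byte-substring check is a heuristic, not full JUMBF parsing, and real verification of authenticity requires cryptographic validation of the manifest's signature, which a production tool would delegate to a C2PA SDK.

```python
def find_app11_segments(jpeg_bytes: bytes) -> list[bytes]:
    """Return the payloads of APP11 (0xFFEB) segments in a JPEG stream.

    C2PA stores its JUMBF manifest boxes in APP11 segments. Each JPEG
    marker segment is: 0xFF, marker byte, 2-byte big-endian length
    (which counts the length field itself), then the payload.
    """
    segments = []
    i = 2  # skip the SOI marker (0xFFD8)
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # not at a marker: stop scanning
        marker = jpeg_bytes[i + 1]
        if marker == 0xD9:  # EOI: end of image
            break
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        payload = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB:  # APP11
            segments.append(payload)
        i += 2 + length  # advance past marker bytes plus segment
    return segments


def has_c2pa_manifest(jpeg_bytes: bytes) -> bool:
    """Heuristic: does any APP11 segment mention the 'c2pa' label?"""
    return any(b"c2pa" in seg for seg in find_app11_segments(jpeg_bytes))
```

A platform could run a check like this at upload time to decide whether provenance data exists at all, then hand the manifest to a full C2PA validator to confirm it has not been tampered with.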
It is presumed that X’s implementation will adhere to some established process for identifying AI-generated content; however, Elon Musk has not divulged what that process entails. Additionally, it remains uncertain whether he is specifically addressing AI-generated images or any edited content that is not directly uploaded from a smartphone camera. Furthermore, it is debatable whether the feature is indeed new, as DogeDesigner claims.
X is not the only entity attempting to tackle manipulated media. Alongside Meta, TikTok has also begun labeling AI-generated content. Streaming platforms such as Deezer and Spotify are scaling initiatives to identify and label AI-generated music. Google Photos has also incorporated C2PA standards to indicate how photos on its platform were created. Major players including Microsoft, the BBC, Adobe, Arm, Intel, Sony, and OpenAI sit on the C2PA's steering committee, with numerous additional companies joining as members.
Currently, X is not listed among the members of the C2PA. We have reached out to the company for clarification, though X typically does not respond to press inquiries.

