China’s internet regulators have stepped up enforcement against AI-generated content that lacks mandatory disclosure, removing more than 543,000 pieces of illegal or non-compliant material and sanctioning 13,421 accounts across major platforms. The campaign, announced by the country’s internet information authorities, targets a range of harms—from fabricated human-interest and disaster footage to deepfaked impersonations of public figures and grotesque edits of children’s characters.
Regulators published a series of representative cases to illustrate the problem. On short-video and social platforms, accounts posted AI-created clips of dogs purportedly rescuing infants and defusing bombs, or of crocodile attacks, without labeling them as synthetic; others circulated fabricated fire scenes. Separate clusters of accounts used face swaps and voice cloning to impersonate athletes, entertainers and entrepreneurs, selling “personalized” AI greetings and monetizing falsified endorsements.
The authorities singled out content they judged particularly dangerous to minors, citing AI-altered clips that mutilated or sexualized beloved animated characters and footage that promoted violence and shock value. E-commerce and lifestyle platforms were also implicated: users shared tutorials and software to strip AI watermarks or remove disclosure labels, and several online shops were taken down or had offending goods delisted.
The move forms part of a broader "clean-up" drive to protect online order and public sentiment during the Lunar New Year period, reflecting Beijing's emphasis on social stability and the health of the online ecosystem. Platforms named in the notice included Weibo, Douyin, Kuaishou, Bilibili, WeChat, Xiaohongshu and major e-commerce marketplaces; platform operators were ordered to "deeply investigate and rectify" the distribution chains for such content and to take swift, lawful action.
For platform operators and creators the immediate consequence is intensified compliance pressure: stronger detection, faster takedowns and more aggressive policing of monetization pathways. For the wider AI ecosystem it signals a regulatory preference for top-down enforcement and platform accountability rather than laissez-faire experimentation—policies that will shape how generative tools are deployed, labeled and monetized inside China.
Internationally, the announcement echoes parallel concerns in Europe and the United States about deepfakes and synthetic-media disclosure, but China’s approach is conditioned by a different mix of priorities. Beijing frames the issue primarily in terms of misinformation, protection of minors and public order, and it is deploying administrative authority to compel platforms to act immediately rather than waiting for protracted legislative debates.
The crackdown also exposed a flourishing secondary market for anti-detection tools: tutorials and services that strip AI marks are financially incentivized and thus likely to persist, pushing enforcement into a technical arms race. Absent stricter legal prohibitions on the sale of such tools, regulators may need to combine takedowns with measures targeting the supply side—payment channels, hosting and app-distribution pathways.
Ultimately the campaign underscores an accelerating tension between creative uses of generative AI and the state’s insistence on visible safeguards. Platforms will bear the brunt of implementation, but the government’s message is clear: synthetic media that deceives, profits from impersonation, or harms minors will face swift sanction, and operators who fail to police their ecosystems risk sustained regulatory intervention.
