Shanghai Lawmaker Calls for Citywide AI Governance — From Mandatory Watermarks to a Content‑Safety Detection Hub

A Shanghai deputy and securities technologist, Zhan Tingting, has called for a Shanghai‑specific, citywide AI governance system combining a municipal content‑safety detection centre, mandatory non‑removable digital watermarks for AIGC, new local AI laws including protections for minors, and shared tools to detect AI‑enabled financial manipulation. The proposals aim to balance rapid AI adoption with data security, market integrity and public resilience.


Key Takeaways

  • Proposal for a municipal AI content‑safety detection centre to monitor, warn and trace high‑risk generative AI outputs.
  • Call to mandate non‑removable digital watermarks on AIGC services used in Shanghai to ensure traceability.
  • Push for a Shanghai‑level AI law and a minors’ AI protection regulation to lead local and international governance norms.
  • Creation of a shared database of AI false‑information signatures and tighter audits to prevent AI‑driven market manipulation and data leakage.
  • Integration of AI safety education into citywide digital‑literacy efforts targeting vulnerable groups and corporate finance staff.

Editor's Desk

Strategic Analysis

Zhan’s package signals a pragmatic city‑level turn in China’s AI governance: instead of waiting for national legislation alone, Shanghai is proposing enforceable technical requirements and institutional mechanisms to manage immediate risks. That matters because Shanghai is both a global financial centre and a major AI development hub; local rules there will ripple across industries and could become de facto standards for firms operating in China.

Technically, mandatory watermarking and a city detection centre would increase compliance costs and raise the bar for smaller AIGC providers, while demanding technical advances in watermark robustness and provenance tracing to resist removal and adversarial attacks. Politically, a Shanghai law that aspires to “lead global governance norms” reflects a desire to shape international discourse and to reconcile openness with security — but it also risks fragmenting regulatory expectations between city, provincial and national authorities.

For markets and platforms, the proposals create both enforcement burdens and potential stability benefits by improving attribution and response capabilities for AI‑enabled fraud. The path forward will depend on detailed rule‑making, cross‑agency coordination, and the technical feasibility of watermarking and detection at scale; failure to get those elements right could either choke innovation or leave gaps exploitable by bad actors.

China Daily Brief Editorial

A Shanghai municipal deputy and securities‑industry technologist has proposed a comprehensive, city‑level approach to governing generative artificial intelligence that blends technical defences, new local laws and public education. Zhan Tingting, a deputy to the Shanghai People’s Congress and an assistant general manager in Guotai Haitong Securities’ R&D division, warned that the rapid spread of AIGC (AI‑generated content) has lowered barriers to entry while amplifying misuse through industrialised deepfakes, concealed attacks and cross‑sector harms.

Zhan recommends creating a municipal “AI content safety detection centre” in partnership with research institutions and leading firms. The centre would combine monitoring, early warning and provenance tracing, require technical filings for high‑risk scenarios and run periodic, penetration‑style assessments to shift defences from passive blocking to active protection.

On the content side, she proposes strict enforcement of national digital‑watermarking standards and a mandate that AIGC services used or hosted in Shanghai embed non‑removable watermarks to make generated material traceable across platforms. Zhan also urges accelerated local legislation to produce a Shanghai‑tailored AI law that both supports the city’s tech ecosystem and aims to set governance norms internationally, alongside a targeted regulation protecting minors from violent, biased or otherwise harmful generated content.

Financial market integrity and data security are central to her pitch. Zhan suggests a shared “AI false‑information and anomalous‑data feature library” to spot coordinated fabrication used to manipulate stocks or regulatory narratives, plus stricter security audits of third‑party AI tools to block leakage of commercial secrets into public large models. She argues these measures are necessary to raise the city’s ability to detect, attribute and respond to AI‑enabled financial manipulation and data exfiltration.

Finally, Zhan stresses social resilience: AI safety education should be folded into a municipal digital‑literacy campaign aimed at vulnerable groups — the elderly, children and corporate finance staff. Practical tips, such as checking for implausible private details or staged micro‑actions in videos, are proposed to build a “psychological firewall” against deepfakes and misleading predictions.

The proposal frames Shanghai’s dilemma plainly: how to reconcile the city’s role as a global financial and technology hub with the security demands posed by generative AI. If adopted, the package would raise compliance and technical requirements for local AIGC providers, push enterprises to audit their AI supply chains more rigorously, and could position Shanghai as an influential testbed for urban‑scale AI governance inside China and abroad.

