As the Lunar New Year approaches, short videos generated by artificial intelligence and circulating widely on Chinese social platforms have become a festive oddity — and a legal headache. Clips that stitch together familiar faces and voices into personalised greetings are striking a chord with audiences hungry for novelty, but many are created without the consent of the people they portray.
Major commercial AI services and professional video‑synthesis tools have begun to adopt basic anti‑abuse measures. Platforms are increasingly embedding visible watermarks or mandatory AI‑generated labels, imposing contractual bans on the unauthorised use of another person’s likeness or voice, and deploying automated filters that use machine learning, often the same techniques that power the generators, to flag suspicious deep‑synthesis content.
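The labelling measure is, at its simplest, a persistent disclosure stamped onto every frame. The sketch below, written in Python against the opencv-python package, is a minimal illustration of that idea; the file names and label text are hypothetical, and real platforms typically pair such visible marks with signed provenance metadata rather than relying on either alone.

```python
# Minimal sketch of a visible AI-disclosure watermark, assuming the
# opencv-python package; file names and label text are illustrative.
import cv2

def label_video(src_path: str, dst_path: str, label: str = "AI-generated") -> None:
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS metadata is missing
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (width, height))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Stamp the disclosure in the lower-left corner of every frame, so the
        # label travels with the pixels rather than with strippable metadata.
        cv2.putText(frame, label, (10, height - 20),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2,
                    cv2.LINE_AA)
        writer.write(frame)
    cap.release()
    writer.release()

label_video("greeting.mp4", "greeting_labeled.mp4")
```

A burned-in mark of this kind is the easiest disclosure to notice, but, as the next paragraph notes, it is also the easiest for a determined creator to crop or paint out.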
Those safeguards, however, are uneven. Open‑source models and smaller apps often omit clear labelling or usage constraints, and some creators deliberately strip identifiers to improve realism. That patchwork leaves room for misuse: a cheerful holiday clip today can become a vehicle for harassment, fraud or reputational harm tomorrow.
The legal risks are immediate and multi‑layered. Using someone’s image or voice without permission can trigger civil claims over portrait and reputation rights; altered or fabricated speech may amount to defamation; and cloned likenesses and voices can be repurposed for social‑engineering scams that carry criminal liability. Enforcement is complicated by jurisdictional ambiguity when tools, content and viewers cross administrative or national borders.
The timing amplifies the stakes. New Year greetings travel fast within family groups and community chat channels, a viral pathway that can spread manipulated content before platforms or rights‑holders can react. Cultural resonance gives such clips outsized visibility and increases the chance that an unauthorised synthetic likeness will inflict real social or economic harm.
For platform operators and policymakers, the dilemma is stark: restrict creativity and convenience, or tolerate a permissive environment that enables abuse. Tech firms face pressure to roll out robust provenance systems and tougher verification measures while balancing user experience against commercial incentives. Regulators face the classic trade‑off between technology neutrality and targeted rules that prevent harm without hindering innovation.
For international observers, China’s experience illustrates a universal problem. The rise of easy, inexpensive tools for producing realistic synthetic media exposes gaps in governance and user awareness everywhere. The immediate remedy combines better platform controls, clearer legal remedies for victims and sustained public education about the provenance of digital content.
Absent faster and more consistent safeguards, unauthorised AI greetings will remain a test case for whether societies can preserve trust in everyday digital interactions while embracing generative technologies. The holiday cheer may fade quickly; the legal and social fallout may not.
