A high-profile roundtable in January brought a blunt question to the fore: are China and the United States now pursuing fundamentally different paths in artificial intelligence? The discussion, staged by the Observer and featuring commentator Liu Ge and veteran internet scholar Jiang Qiping, used ChatGPT and China's DeepSeek as shorthand for two divergent technical and strategic logics that are already shaping markets, research agendas and policy debates.
Jiang framed the divergence as more than a competition over resources. On the technical front he described a split between what he called “violent computation” — hardware‑intensive, brute‑force approaches associated with U.S. tech giants — and “clever computation,” which prizes algorithmic efficiency and software ingenuity and is exemplified, in his telling, by recent Chinese projects such as DeepSeek. He argued that this difference reflects deeper scientific assumptions: in the West, a material‑science paradigm grounded in traditional mathematics, physics and chemistry; in China, a shift toward an information‑science paradigm that some researchers there favour.
The exchange quickly moved from methods to values. Jiang suggested that the United States tends toward a form of tool rationality — treating AI primarily as a technical instrument — while China emphasizes an ecological or human‑centred rationality, in which machines are designed with social and environmental affinities in mind. He invoked classical Chinese ideas about marrying technology to humanistic ends to explain why Chinese researchers and policymakers might prioritise “embodied” or context‑aware intelligence over purely abstract, scale‑driven models.
Geopolitics and policy are feeding and amplifying the technical divergence. U.S. export controls and the “small yard, high wall” approach to sensitive chips and tools have incentivised Chinese firms and labs to pursue self‑reliant or different technical routes, including open‑source alternatives. Jiang argued that such restrictions can spur indigenous innovation, but warned that unilateral containment also risks hardening two separate ecosystems that are costly to reconcile later.
The speakers were also clear that governance is now urgent. If core standards and rules for safety, interoperability and responsibility remain unresolved, the world could fragment into competing technical regimes — a scenario Jiang compared to historical cases of divergent standards and spheres of influence. He urged that embedding ethics and human‑control mechanisms into AI architectures needs to be a global priority rather than an afterthought.
On the question of timeline and impact, Jiang stressed the practical difference between chasing an elusive “general intelligence” and incrementally applying powerful models across industries. He said many leading Western firms are still debating the commercial viability and timeline of AGI, whereas China’s strategy of diffusing AI into manufacturing, logistics and services can produce measurable economic gains sooner. Regardless of timing, both discussants agreed on a basic premise: preserving human agency and responsibility in AI design is a political and technical imperative.
The roundtable did not produce a neat forecast, but it underscored a simple point with strategic weight: technical choices, governance philosophies and geopolitical policies are interacting to produce distinct AI ecosystems. Whether those ecosystems converge, compete peacefully, or collide will be one of the defining geopolitical and economic questions of the coming decade.
