Anthropic’s Claude Code, powered by the new Opus 4.5 model, has burst into mainstream use and is prompting engineers and non‑engineers alike to rethink what software development means. Users report that the tool can autonomously access files, browse the web and orchestrate desktop workflows, enabling people with no formal coding background to assemble bespoke applications in days rather than months. High‑profile endorsements have turned heads: Vercel’s CTO Malte Ubl says he compressed a year’s worth of work into a single week, and Shopify’s Tobias Lütke has used the system to prototype medical image analysis tools.
The practical effects are already visible in two contrasting trends. At one end, a wave of “micro‑apps” is emerging—small, single‑purpose tools built by individuals to solve immediate personal or team problems and then discarded. At the other, a new breed of highly productive, AI‑augmented developer—dubbed “Cracked Engineers”—is consolidating value, producing in weeks what used to take teams months, and changing hiring calculus in Silicon Valley.
The economics behind this shift are straightforward. As generative models become competent at translating natural‑language intent into working code and integrating across systems, the transaction cost of obtaining software falls. Individuals no longer need to subscribe to off‑the‑shelf SaaS for niche needs; they can commission a tailored script or web app in a matter of hours. Companies, meanwhile, can multiply output with fewer engineers—if those engineers can operate at the new, AI‑enabled frontier.
That productivity boon has immediate winners and losers. Small teams and solo entrepreneurs can produce more for less, and some founders are shelving hiring plans because AI raises individual throughput. But the same dynamics intensify competition for scarce, digitally native talent and concentrate bargaining power in the hands of those who can best pair human judgment with machine speed. Several founders and investors quoted in the original Chinese‑language article describe a hiring environment that increasingly prizes relentless, gamelike performance and penalizes mediocrity.
Risks multiply as well. Micro‑apps are often built without standard software engineering practices: security, maintenance, provenance and compliance can be afterthoughts. Agents that require broad access to personal files and browsers amplify data‑protection and insider‑threat concerns. At the macro level, firms that lean on a handful of “super‑producers” may be papering over deeper product or business model flaws—an observation recruiters and VCs in the article explicitly warn about.
Regulatory and societal implications are significant. If AI agents become standard workplace tools, regulators will wrestle with questions of liability for errors, intellectual property for AI‑generated code, and workforce displacement. Educational institutions and employers will face pressure to retrain people not just to code, but to orchestrate and audit AI systems. The immediate result is a two‑track market: broad democratization at the edges and extreme differentiation at the core.
Anthropic’s adoption metrics underscore the momentum. The piece cites more than a doubling of Claude’s web audience year‑on‑year in December and a 12% increase in global desktop daily active users, signaling that Claude Code is moving beyond lab demos into everyday workflows. Comparable tools, such as OpenAI’s Codex and platforms like Replit and Bolt, are part of the same ecosystem, meaning this is not a single‑vendor story but a structural change in how software is produced and consumed.
For businesses and policymakers the question is not whether these tools will change work, but how quickly and with what safeguards. The near future will likely be noisy: a profusion of ephemeral apps, new roles for super‑productive individuals, and a scramble by incumbents to adapt processes, hiring and compliance to an environment where code can be created—and broken—faster than ever.
