Musk Revives Dojo 3 to Power Tesla’s Push into Full Self‑Driving and Robots

Elon Musk has ordered the restart of Tesla’s Dojo supercomputer project after completing the AI5 chip design, pivoting to a Dojo 3 architecture that densely integrates hundreds of AI5/AI6 chips per board. The reboot aims to cut training costs and support Tesla’s Full Self‑Driving and Optimus robot programmes, but it faces major technical, supply‑chain and competitive challenges.


Key Takeaways

  • Tesla has restarted the Dojo supercomputer programme (Dojo 3) after completing its AI5 chip design, five months after a full stop and team dispersal.
  • Dojo 3 will move away from bespoke D1 chips and wafer‑level packaging toward boards integrating up to 512 AI5 or AI6 chips to reduce wiring complexity and cost.
  • AI6 is planned on a 2nm process; the architecture is intended to support FSD development, Optimus robot training, and integration with xAI’s Grok via synthetic data loops.
  • The restart includes an aggressive hiring push; success could lower AI training costs and speed commercialisation, but challenges include foundry access, yields, power and cooling, and software scaling.

Editor's Desk

Strategic Analysis

Tesla’s Dojo reboot is a high‑stakes bet on vertical integration: combine proprietary data, bespoke silicon and in‑house model development to create a cost advantage over cloud providers and chip vendors. If Tesla delivers materially cheaper, large‑scale training capacity, it can accelerate iteration cycles for FSD and robotics and widen the lead its data advantage already gives it. Yet the economics and technical difficulty of moving to bleeding‑edge nodes like 2nm, while simultaneously redesigning the system architecture and rebuilding a core team, make success uncertain. The outcome will test whether Tesla’s engineering culture can translate ambition into repeatable infrastructure delivery, or whether the company will be compelled to partner with or buy compute from specialised providers, reshaping its long‑term strategy for autonomy and embodied AI.

China Daily Brief Editorial

Elon Musk has ordered a restart of Tesla’s Dojo supercomputer programme, announcing on social media that work on a third-generation Dojo will resume now that the company’s AI5 chip design is complete. The decision comes five months after Tesla abruptly halted the Dojo programme and dismantled its core team, and was accompanied by a recruitment drive aimed at engineers willing to “build the world’s highest‑yield chip.”

The revived Dojo 3 is a deliberate redesign rather than a continuation of prior hardware choices. Tesla says it will abandon the earlier path of using bespoke D1 chips and wafer‑level packaging in favour of a dense single‑board cluster approach that integrates up to 512 AI5 or next‑generation AI6 chips per board. That architecture is pitched as a way to cut wiring complexity and hardware cost by orders of magnitude while retaining large‑scale parallel training capacity.

AI6 is slated to be manufactured on a 2nm process, and the plan explicitly aims to reconcile training and inference workloads across Tesla’s vehicle fleet, data centres and the Optimus humanoid‑robot project. Musk also signalled integration with xAI’s Grok model, suggesting synthetic data pipelines will be used to create an iterative training loop for both autonomous driving and robotics systems.

Dojo’s resurrection is as much strategic as it is technical. Tesla needs more on‑premises compute to shorten development cycles for its Full Self‑Driving (FSD) stack and to train motion‑control and perception models for Optimus. The timing coincides with Tesla’s robotaxi ambitions, with the service recently securing rideshare licences in Texas, and with a promised FSD software update intended to improve handling of rare road scenarios.

The move also reflects Tesla’s broader chip strategy shift of the past year. When Musk halted Dojo in August, he argued that splitting resources across two different chip architectures was inefficient and announced a focus on AI5 and AI6 designs intended to handle both efficient inference and core training. The stoppage precipitated talent flight: the former Dojo lead, Peter Bannon, left, and roughly 20 engineers formed a start‑up, DensityAI.

Tesla’s re‑entry into supercomputing places it squarely in competition with cloud hyperscalers and specialised accelerator makers such as Nvidia and Google’s TPU teams. Tesla’s vertically integrated model—building data, models and silicon under one roof—could yield an advantage if it meaningfully lowers per‑petaflop training costs. But the technical and supply‑chain hurdles are large: securing 2nm wafers, achieving acceptable yields, designing power‑efficient boards and scaling distributed training software are non‑trivial challenges.

The public hiring notice and Musk’s request that applicants list three key technical problems they’ve solved underline the urgency of the project. Restarting Dojo while pursuing a 2nm AI6 product will demand close coordination with foundries and significant capital expenditure, even as Tesla seeks to keep development cycles tight to feed rapid model iteration.

If Tesla succeeds in delivering a cheaper, high‑performance training cluster, Dojo 3 could accelerate the commercial roll‑out of FSD and make Optimus development more tractable. Failure, however, would not merely be an engineering setback; it would squander talent and capital and raise questions about the limits of Tesla’s ambition to own both hardware and software stacks for next‑generation AI.
