Elon Musk has ordered a restart of Tesla’s Dojo supercomputer programme, announcing on social media that work on a third-generation Dojo will resume now that the company’s AI5 chip design is complete. The decision comes five months after Tesla abruptly halted the Dojo programme and dismantled its core team, and was accompanied by a recruitment drive aimed at engineers willing to “build the world’s highest‑yield chip.”
The revived Dojo 3 is a deliberate redesign rather than a continuation of prior hardware choices. Tesla says it will abandon the earlier path of using bespoke D1 chips and wafer‑level packaging in favour of a dense single‑board cluster approach that integrates up to 512 AI5 or next‑generation AI6 chips per board. That architecture is pitched as a way to cut wiring complexity and hardware cost by orders of magnitude while retaining large‑scale parallel training capacity.
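As a rough illustration of why board density matters, the sketch below counts off-board links in a simple nearest-neighbour chip mesh. Every figure in it, including the mesh shape, the four-neighbour topology and the 4×2 board tiling, is an assumption made for the example’s sake rather than a disclosed Dojo 3 parameter.

```python
# Back-of-envelope sketch; all topology and counts are illustrative
# assumptions, not published Tesla specifications. It compares how many
# chip-to-chip links must leave a board when 512 chips are tiled across
# many small boards versus packed onto one board, assuming a 2D mesh
# where each chip talks to its four nearest neighbours.

def external_links(mesh_w: int, mesh_h: int, tile_w: int, tile_h: int) -> int:
    """Count links in a mesh_w x mesh_h chip mesh that cross board (tile) edges."""
    tiles_x, tiles_y = mesh_w // tile_w, mesh_h // tile_h
    horiz = (tiles_x - 1) * tiles_y * tile_h  # links crossing vertical board seams
    vert = (tiles_y - 1) * tiles_x * tile_w   # links crossing horizontal board seams
    return horiz + vert

# 512 chips as a 32x16 mesh, split into 64 boards of 4x2 chips each:
print(external_links(32, 16, 4, 2))    # -> 336 off-board links (cables/connectors)
# The same 512 chips on a single dense board:
print(external_links(32, 16, 32, 16))  # -> 0 off-board links; traces stay on-PCB
```

Under those assumptions, consolidating 64 small boards into one eliminates hundreds of cables and connectors along with their drivers and retimers, which is the kind of reduction the “orders of magnitude” claim gestures at.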
AI6 is slated to be manufactured on a 2nm process, and the plan explicitly aims to unify training and inference workloads across Tesla’s vehicle fleet, its data centres and the Optimus humanoid-robot project. Musk also signalled integration with xAI’s Grok model, suggesting synthetic-data pipelines will be used to create an iterative training loop for both autonomous-driving and robotics systems.
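What such a loop could look like is sketched below. Tesla has published no details, so the structure, the function names and the use of Grok as a scenario generator are all assumptions made purely for illustration.

```python
# Conceptual sketch only: Tesla has not described this pipeline. The
# names (generate_scenarios, train, evaluate) and the role of a Grok-like
# model as scenario generator are hypothetical, used to show one plausible
# shape of an iterative synthetic-data training loop.

def synthetic_data_loop(driving_model, generator, rounds: int = 5):
    """A generator model proposes rare scenarios, the driving model trains
    on them, and its remaining failures seed the next round of generation."""
    hard_cases = []
    for _ in range(rounds):
        # 1. Ask the generator for scenarios, biased toward past failures.
        scenarios = generator.generate_scenarios(seed_failures=hard_cases)
        # 2. Train the perception/control model on the synthetic batch.
        driving_model.train(scenarios)
        # 3. Keep the cases the model still handles incorrectly.
        hard_cases = [s for s in scenarios if not driving_model.evaluate(s)]
    return driving_model
```

The design intuition, if this reading is right, is that each round concentrates training data on exactly the rare road and manipulation scenarios the current model fails, rather than on easy cases it already handles.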
Dojo’s resurrection is as much strategic as it is technical. Tesla needs more on-premises compute to shorten development cycles for its Full Self-Driving (FSD) stack and to train motion-control and perception models for Optimus. The timing coincides with Tesla’s robotaxi ambitions (the service recently secured rideshare licences in Texas) and with a promised FSD software update that aims to improve handling of rare road scenarios.
The move also reflects a broader shift in Tesla’s chip strategy over the past year. When Musk halted Dojo in August, he argued that splitting resources across two different chip architectures was inefficient and announced a focus on the AI5 and AI6 designs, intended to handle both efficient inference and core training. The stoppage precipitated a talent exodus: the former Dojo lead, Peter Bannon, departed, and roughly 20 engineers left to found a start-up, DensityAI.
Tesla’s re-entry into supercomputing places it squarely in competition with cloud hyperscalers and specialised accelerator makers such as Nvidia and Google’s TPU division. Tesla’s vertically integrated model, building data, models and silicon under one roof, could yield an advantage if it meaningfully lowers per-petaflop training costs. But the technical and supply-chain hurdles are large: securing 2nm wafer capacity, achieving acceptable yields, designing power-efficient boards and scaling distributed training software are all non-trivial challenges.
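The metric at stake is simple to state. The sketch below shows the arithmetic with invented numbers; the capital costs, lifetimes and throughput figures are assumptions, not figures from Tesla or any vendor.

```python
# Illustrative arithmetic only: the dollar and throughput figures are
# assumptions, not Tesla or Nvidia numbers. "Per-petaflop training cost"
# here divides the amortised cost of a cluster by the sustained compute
# it delivers over its lifetime.

def cost_per_petaflop(capex_usd: float, lifetime_years: float,
                      sustained_pflops: float) -> float:
    """Amortised dollars per petaflop-year of sustained training throughput."""
    return capex_usd / (lifetime_years * sustained_pflops)

# Hypothetical comparison over a 4-year life at 5,000 sustained PFLOPS:
print(cost_per_petaflop(500e6, 4, 5000))  # in-house cluster: $25,000/PFLOP-year
print(cost_per_petaflop(800e6, 4, 5000))  # bought-in GPUs:   $40,000/PFLOP-year
```

On numbers like these, vertical integration only pays if the in-house cluster actually achieves comparable sustained throughput, which is precisely where yield, power and software-scaling risks bite.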
The public hiring notice and Musk’s request that applicants list three key technical problems they’ve solved underline the urgency of the project. Restarting Dojo while pursuing a 2nm AI6 product will demand close coordination with foundries and significant capital expenditure, even as Tesla seeks to keep development cycles tight to feed rapid model iteration.
If Tesla succeeds in delivering a cheaper, high‑performance training cluster, Dojo 3 could accelerate the commercial roll‑out of FSD and make Optimus development more tractable. Failure, however, would not merely be an engineering setback; it would squander talent and capital and raise questions about the limits of Tesla’s ambition to own both hardware and software stacks for next‑generation AI.
