Apple is accelerating development of three new wearable devices as part of a broader pivot to AI-first hardware. The company is working on smart glasses, a wearable pendant that can clip to clothing or be worn as a necklace, and AirPods with expanded AI capabilities, each designed to work closely with the iPhone and to draw on its own camera system.
All three products are being built around Siri, which will draw on visual context to carry out commands. That implies multimodal capabilities where sight and sound combine to let Siri understand a user’s surroundings, identify objects or text, and perform tasks that go beyond conventional voice queries.
The move reflects a wider industry shift toward embedding advanced generative and multimodal AI into consumer hardware. Rivals such as Meta and newer entrants linked to OpenAI are competing for leadership in AR-enabled wearables and ambient AI assistants, but Apple’s advantage lies in its control of hardware, software and the iPhone ecosystem—allowing it to stitch cameras, on-device processing and cloud services into a single experience.
The technical design choices matter: each device will rely on a different camera array to supply visual context, and all will interoperate with an iPhone. That iPhone-tethered model can reduce latency, enable richer experiences and keep heavy computation partly on-device, but it also raises hard questions about sensor validation, battery life and real-world accuracy for visual AI features.
Privacy and regulation will be central constraints. Cameras and always-on sensing invite scrutiny from regulators and privacy advocates in major markets, particularly in Europe and the United States. Apple has historically marketed privacy as a differentiator; how it balances on-device processing, data flows to servers and transparency around what is captured will shape adoption.
From a market perspective, the three-device strategy spreads risk across form factors while expanding Apple’s addressable market. Smart glasses target augmented-reality interactions, a pendant offers an unobtrusive always-with-you sensor, and enhanced AirPods aim at the high-volume audio accessory market. Success will depend on convincing users these devices add utility without imposing unacceptable privacy, cost or battery trade-offs.
Manufacturing and supply-chain implications are non-trivial. Advanced camera modules, miniaturised sensors and new materials will be required, creating opportunities for component suppliers but also engineering headaches for Apple. Timelines are unclear and hinge on R&D progress, regulatory clearance and the company’s appetite for staggered launches to test consumer demand.
If Apple can deliver a reliable multimodal assistant experience that feels natural and respects privacy, it could reframe the way people interact with AI—shifting many interactions from screens to ambient, context-aware devices. But the path will demand technical restraint, careful product design and clear communication about data use before these wearables move from lab prototypes to everyday accessories.
