For years, startups have sought to expand headphone functionality beyond music and phone calls. Nearly a decade ago, Waverly Labs and Mymanu introduced real-time translation, followed by Google’s launch of a voice-activated AI assistant in 2020. More recently, Samsung and Apple have entered the space, making noise cancellation a near-standard feature.
Many emerging companies attending this week’s Consumer Electronics Show (CES) in Las Vegas are now refining these technologies for specific uses. OSO, for example, aims to turn earbuds into a professional assistant capable of recording meetings and retrieving information using natural language.
Viaim offers similar services but focuses on cross-platform compatibility, targeting users whose phones lack built-in AI features. Meanwhile, Timekettle has found success in education: most of its sales come from schools that use its earbuds to help non-English-speaking students follow lessons without human translators.
Experts say earbuds provide a more accessible entry point for AI than smart glasses due to their lower cost and widespread use. However, analysts caution that they rely on voice interaction, are not worn continuously, and lack cameras—factors that limit their potential as a dominant AI interface.
Some startups are pushing boundaries further. Naqi Logix’s Neural Earbuds use ultra-sensitive sensors to detect tiny movements, allowing users with severe disabilities to control devices discreetly. Neurable is exploring headsets that read brain activity, envisioning communication through thought alone. For now, analysts say such breakthroughs remain niche, and most headphones will continue to focus on listening.