Can Google finally get smart glasses right?


Google has announced plans to launch two new types of smart glasses in 2026, powered by AI. But will it have better luck than its previous attempt?

Developed with partners Samsung, Gentle Monster, and Warby Parker, the glasses are promised to be “stylish, lightweight glasses” that users can wear comfortably all day.

The first set of glasses is designed to be screen-free. They will feature built-in speakers, microphones, and cameras so users can talk to Gemini, take pictures, and receive help.

The second set is display-oriented, with in-lens displays that privately show users helpful information such as navigation or translation.

Driving this vision is Android XR – the first Android platform built in the Gemini era and the ecosystem behind the glasses’ functionality. Created in collaboration with Samsung, Android XR is the culmination of years of investment in AI and AR/VR.

“These glasses work in tandem with your phone,” says Google, “giving you access to your apps without ever having to reach in your pocket … Pairing these glasses with Gemini means they see and hear what you do, so they understand your context, remember what’s important to you and can help you throughout your day.”

Google says that these Android XR glasses can be used for messaging friends, making appointments, asking for turn-by-turn directions, taking photos, and much more. The company has also demonstrated live language translation between two people.

“We know that glasses can only truly be helpful if you want to wear them all day,” Google states.

Android XR will also support wired XR glasses, combining headset-level immersion with real-world awareness. Google has offered an early look at Project Aura from XREAL, the first Android XR device in this category. Featuring a 70° field of view and optical see-through technology, the glasses overlay digital content directly onto the user’s view of the physical environment. This creates a large, private canvas for multiple windows, enabling users to carry their workspace or entertainment with them without shutting out the world around them.

The glasses are also suited to everyday tasks, such as following a floating recipe while cooking or viewing step-by-step visual instructions anchored to an appliance during repairs.

More details about the launch of Project Aura are expected next year.

However, whilst all of this sounds promising, this isn’t the first time the tech giant has attempted to launch smart glasses.

What was Google Glass?

Google Glass was a wearable smart display developed by Google’s experimental Google X lab, first unveiled in 2012.

Designed to look like a pair of lightweight glasses with a small prism positioned above the right eye, it aimed to bring hands-free computing into everyday life.

Users could interact with the device using voice commands, view notifications, take photos and videos, access navigation, and perform basic searches.

At the time, the concept felt revolutionary and positioned Google Glass as a glimpse into the future of personal computing.

What happened to Google Glass?

The initial launch generated enormous hype, particularly after a high-profile demonstration at Google I/O in which skydivers wearing Glass live-streamed their descent.

In 2013, Google opened the invitation-only Explorer Program, allowing early adopters to purchase the device for $1,500. However, once in real-world use, the limitations became clear. Battery life was short, the display was small, the interface was awkward, and the overall experience felt more like a prototype than a finished consumer product.

Public reaction quickly turned negative, largely due to privacy concerns. The built-in camera raised fears that people could be recorded without their knowledge, and the device was banned in many bars, cinemas, and private venues. Social discomfort also played a major role: talking to glasses in public felt unnatural, eye contact was disrupted, and wearers were often perceived as intrusive or arrogant.

Beyond social issues, Google also made strategic missteps. Glass was pushed toward consumers before clear everyday use cases had been established, leaving many people unsure why they needed it instead of a smartphone. The high price point further limited adoption, especially given the lack of a compelling ‘killer app’ that justified the cost. Together, these factors made it difficult for Glass to move beyond a niche audience.

In January 2015, Google announced the end of the consumer-facing Explorer Program, leading many to believe the project had been cancelled entirely. In reality, Glass was restructured and shifted away from consumer markets. Development moved under Nest CEO Tony Fadell, and the focus turned to enterprise and industrial applications, where hands-free access to information made far more sense.

This new direction resulted in Google Glass Enterprise Edition, launched in 2017. In workplaces such as manufacturing, logistics, healthcare, and field services, Glass proved genuinely useful. It allowed workers to view instructions, receive remote assistance, and capture information without stopping their tasks. The enterprise versions improved on earlier flaws, offering better battery life, more comfortable designs, and integrations with safety equipment, while avoiding many of the privacy and social issues that plagued the consumer model.

Despite this success in professional settings, the enterprise AR market remained relatively small. In 2023, Google officially discontinued Glass Enterprise Edition as part of a broader shift away from hardware and toward software, AI, and platform partnerships. Competition from other AR devices and changing strategic priorities contributed to the final decision.

Learning from past mistakes

Some argue that Google Glass did not fail because the idea was flawed, but because it was ahead of its time. The technology was not mature enough, social norms were unprepared, and the product was aimed at the wrong audience first.

Either way, its legacy is significant. Glass influenced the development of smartwatches, modern AR headsets, and today’s more cautious approaches to smart glasses.

But it does stand as a reminder that even groundbreaking technology must align with human behaviour, culture, and clear practical value to succeed.

Will Google get it right this time around? Let’s wait and see.
