As a child, my first vision for the future of technology was shaped by an aspect of Star Trek that’s different from most people’s: the absolute ubiquity of touchscreens and tablets (PADDs).
I was enamored with the idea of interacting with information via touch and being able to do so anywhere instead of in a fixed place. All that became reality with the iPhone and Android, and I live it every day.
My next vision of the future came with a 2:30-long concept video. It ingrained in me that what comes after the smartphone is information overlaid on your line of sight as it’s contextually needed.
Something appearing where you’re already looking without having to do anything — like take out your phone or raise your wrist — makes information feel instantaneous.
On the contextual front, the defining moment of that concept for me is nearing a transit station, being told that service is down, and getting a suggestion for walking directions that are then delivered in real time.
That was 2012, and using everything that has come since – even tech introduced earlier in 2024 – made it seem like we were still 5+ years away.
Last week, for 20 minutes, I lived that future through Google’s prototype Android XR glasses. It was glorious and shockingly realized.
The first thing that Google realized is the physical size. This device, with a “monocular” (or single) in-lens display and camera, was indistinguishable in size and weight from today’s Meta Ray-Bans. That absolutely cannot be said of the smart glasses with in-lens screens that I’ve used in the past.
The difference here is that this display is primarily for delivering glanceable information rather than being a full computer, which is what others in the glasses space are showing off. In the short term, I do not want to browse the web on my glasses.
(Their approach is almost like how pre-iPhone smartphones tried to jam the desktop paradigm into a significantly smaller screen and failed. It’s a somewhat understandable overcompensation for not having a computing platform.)
Unfortunately, Google did not allow any pictures or videos to be taken of the prototypes, which amusingly had full USB-C ports; some were clear plastic units that showed all the components.
With today’s announcement, the company shared several first-person clips, and I can affirm that the display they depict matches what I experienced.
What you see in Google’s glasses does not take up your entire vision, but it is more than enough for delivering rich, graphical information, including crisp text.
The screen output is simply good. I saw reality with information augmented over it, with no obvious boundary between where the display starts and stops.
I also used a second “binocular” prototype with two displays; its bulk is a bit more pronounced, but not absurdly so.
The frames don’t make you look like a raccoon, and they let me watch a video that appeared about the size of a phone held maybe an arm’s length away.
This was more a technology demonstration of what’s possible than an expected behavior.
The more realistic use case, which I tried out, is previewing an image right after taking a picture to help gauge framing.
It’s using the Raxium microLED tech that Google acquired in 2022. Google says the technology, still in the R&D phase, is a “highly unique monolithic RGB microLED technology using a wafer level process to achieve all colors on a single panel without the need for color conversion.”
The next things Google realized are the interaction method and feature set.
There’s a touchpad on the right stem, with one button on the top edge and another below. The top button takes pictures, while the second activates Gemini (specifically, Project Astra powered by Gemini 2.0).
Gemini performed incredibly well as a control method and excelled at contextual awareness. Once Gemini is turned on, it stays active. (You can tap the touchpad to pause the assistant and have it stop listening/seeing.)
I picked up a book and just asked “who wrote this” without having to preface anything. Then I flipped to a random page and asked for a summary, which it successfully provided.
I got a live translation demo — in a callback to Google I/O 2022 — and it was perfect. It could also translate signs.
While looking at a vinyl sleeve, I asked Gemini to play a song from the album, and YouTube Music (which it explicitly referenced) started the playback session.
A Google Maps navigation demo showed the next direction in my line of sight, and, in a very delightful design touch, a full map appeared when I looked down.
The fidelity was fantastic, fulfilling the dream of Live View AR. Notably, these two navigation concepts were part of the Glass vision. I also got a Google Chat notification and was able to reply.
I demoed all these capabilities back-to-back, with no resets; the original Gemini session I began just continued. It was exactly like what Gemini Live is capable of today, but with the addition of vision.
What was phenomenal about Astra is that Googlers were prompting me on what to say as I went to different tables set up with objects for me to pick up, yet the glasses never interpreted their voices or my side conversations with them as commands. Gemini responded only when I intended to speak to it, and I didn’t go through any Voice Match training ahead of time.
These features are useful and reveal how Google is building for the here and now, where the smartphone isn’t going away anytime soon, with Android XR offloading work to its paired device just like other wearables.
(I’ve long believed that the 2012 concept video hit all the feature marks for what successful consumer smart glasses look like.) This is incredibly promising and shows that these prototypes are beyond the tech demo stage.
Compared to playing ping pong, I’d use these navigation and notification capabilities every day. There are clear use cases here that I’d argue compete with smartwatches, and beat them.
If Google did Project Glass Explorer for this hardware and priced them at, say, $3,499, I’d buy a pair and integrate them into my day-to-day. The utility and quality were that apparent to me after 20 minutes.
From what has leaked about Google’s AR work, you’d think the company was scrambling, and that the effort was suffering from the worst of Google’s restart and pivot tendencies.
Having actually used what Google has ready — admittedly still in R&D — I’m blown away. A prototype in the lab is of course different from something that’s ready for mass production, but the future is bright.
Google is already realizing the target hardware size, the use cases, and Gemini as the input and output method.