Here’s how Google ‘translation’ Glass will open doors to real-world apps

Alphabet Inc’s new augmented reality glasses can automatically translate speech in different languages in real time.
23 May 2022

Here’s how the Google ‘translation’ glass would open new doors to real-world applications. (Photo by Josh Edelson / AFP)

  • The Google ‘translation’ glass could prove a valuable tool to deaf or hard-of-hearing users, in cases where they do not or cannot wear hearing aids
  • The new augmented reality display works directly with Google Translate
  • The glasses are still an unnamed prototype, and Google has not indicated whether they will ever be released

Seems like Google is giving the ‘smart glasses’ segment another try, a decade after its Google Glass did a brief stint on the commercial market and flopped in spectacular fashion. Being too niche and ahead of its time back then didn’t really pay off — but that has not stopped the Alphabet Inc-owned company from trying again.

Following recent reports that Google has been working on a revival of augmented reality (AR) glasses, the company introduced a prototype pair of “translation” eyewear at its annual developer conference this year. Demonstrated in a video by company executives during Google I/O 2022, the highlight of the yet-to-be-named product is its ability to automatically translate speech in real time.

Instead of acting like a wearable computer, the new AR wearable harnesses Google Translate to listen to speech in different languages in real time, and can even detect American Sign Language, all while projecting a heads-up translation in front of the user’s eyes. In short, it offers real-time, AI-enabled subtitling.
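To make that pipeline concrete, here is a minimal, purely illustrative sketch in Python of what a “subtitles for the world” loop could look like: capture audio, transcribe it, translate it, and render the result as a heads-up caption. Every helper function here (capture_audio_chunk, transcribe, translate, render_subtitle) is a hypothetical placeholder standing in for the glasses’ microphone, speech recognition, Google Translate, and in-lens display; Google has not published any API for the prototype.

```python
# Conceptual sketch of the "subtitles for the world" loop described above.
# All helper functions are hypothetical stand-ins, not Google's actual APIs.

import time


def capture_audio_chunk() -> bytes:
    """Hypothetical: grab a short buffer of audio from the glasses' microphone."""
    return b""  # placeholder audio data


def transcribe(audio: bytes, source_lang: str) -> str:
    """Hypothetical: speech-to-text in the selected source language."""
    return "hola, ¿cómo estás?"  # placeholder transcript


def translate(text: str, source_lang: str, target_lang: str) -> str:
    """Hypothetical: machine translation (the real device leans on Google Translate)."""
    return "hello, how are you?"  # placeholder translation


def render_subtitle(text: str) -> None:
    """Hypothetical: draw the caption on the in-lens heads-up display."""
    print(f"[HUD] {text}")


def run_live_subtitles(source_lang: str = "es", target_lang: str = "en") -> None:
    """Loop: listen, transcribe, translate, display (real-time subtitling)."""
    for _ in range(3):  # a real device would loop until switched off
        audio = capture_audio_chunk()
        transcript = transcribe(audio, source_lang)
        caption = translate(transcript, source_lang, target_lang)
        render_subtitle(caption)
        time.sleep(1)  # pace the loop roughly like a live caption refresh


if __name__ == "__main__":
    run_live_subtitles()
```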

Alphabet’s first foray into smart glasses met sizable backlash years ago, as the built-in camera stirred privacy concerns and a high price slowed adoption. Although details on the new translation glasses remain sparse this time around, the design alone marks a departure, shifting from a sci-fi look to more traditional frames, likely to counter earlier complaints that the original Google Glass looked kitschy and unwearable in public.

Other details, such as a name, technical specifications, or any indication of whether it will be released as an actual product, remain unshared for now. In fact, Google has so far only shown the device in a demonstration video.

Live ‘translation’-enabled Google Glass a gamechanger?

As it stands, the device is a more inclusive product than its predecessor. Worn like a normal pair of glasses, it incorporates a small screen in the lens that shows a real-time translation of another language in AR, overlaying what the wearer normally sees. The functionality, as Google product manager Max Spear summed it up, is like “subtitles for the world.”

In a blog post summing up the announcements made during Google I/O, the tech giant said AR lets it spend more time focusing on real-world applications that really matter. “It can break down communication barriers — and help us better understand each other by making language visible. Watch what happens when we bring technologies like transcription and translation to your line of sight,” it reads.

For Google, AR represents a new frontier of computing with huge potential. The company has invested heavily in the area, building AR into many Google products, from Google Lens to MultiSearch, scene exploration, and Live and immersive views in Maps. “These AR capabilities are already useful on phones and the magic will really come alive when you can use them in the real world without the technology getting in the way,” the post reads.

“Let’s take language as an example. Language is just so fundamental to connecting with one another,” Google added. “And yet, understanding someone who speaks a different language, or trying to follow a conversation if you are deaf or hard of hearing can be a real challenge. Let’s see what happens when we take our advancements in translation and transcription and deliver them in your line of sight in one of the early prototypes we’ve been testing.”

In comparison, as Channel Asia puts it, the original “Google Glass was not augmented reality — it was a heads-up display. The only contextual or environmental awareness it could deal with was location. Based on location, it could give turn-by-turn directions or location-based reminders. But it couldn’t normally harvest visual or audio data, then return to the user information about what they were seeing or hearing.”

The translation glasses, on the other hand, qualify as AR: they take audio data from the environment and return to the user a transcript of what is being said, in the language of their choice. That said, even if Google never ships them, it is only a matter of time before someone else does, and the category is likely to prove a strong one for business users.