The Google logo is pictured during the annual Web Summit technology conference in Lisbon, Portugal. Picture: Pedro Fiúza/NurPhoto via Getty Images

"I have a peanut allergy and it is life-threatening," is what you are trying to convey to your French waiter.

"Je ne comprends pas," is the response.

Things could go horribly wrong. But artificial intelligence (AI) can come to your rescue.

Last week Google’s AI gurus unveiled a feature that will delight frequent travellers: an interpreter mode for Google Assistant that provides real-time translation.

Speak or type a sentence to your phone, and the assistant will dish up an instant translation. It works with 44 languages.

To trigger the interpreter mode, use the command: "Hey Google, be my French translator." Start speaking, and you’ll see the translated conversation appear on your device.
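Interpreter mode itself is a voice feature rather than something you script, but the same translation service is exposed to developers through Google’s Cloud Translation API. A minimal sketch, assuming a Google Cloud account, the google-cloud-translate package and credentials already configured:

```python
# Hedged sketch: uses the Cloud Translation API (basic edition), not the
# Assistant itself. Requires GOOGLE_APPLICATION_CREDENTIALS to be set.
from google.cloud import translate_v2 as translate

client = translate.Client()
result = client.translate(
    "I have a peanut allergy and it is life-threatening",
    target_language="fr",
)
print(result["translatedText"])           # the French rendering
print(result["detectedSourceLanguage"])   # "en", detected automatically
```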

It isn’t the only travel-related advance worth paying attention to. Earlier this year, Google unveiled instant camera translation, which the company regards as one of the most significant advances its AI investment has yielded, says Google Translate’s engineering director Macduff Hughes.

Instant camera translation lets users see the world in their own language by pointing a camera lens at foreign text, such as a billboard or newspaper. The foreign text is instantly replaced by a translation.

Hughes says: "We ... can transition between an online model on the server if you have a good internet connection, or it can work entirely offline if you download the language." This is ideal if you are on the move in a foreign country.
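Google’s pipeline uses its own proprietary models, but the underlying idea, recognise the text and then translate it, can be approximated with open-source tools. A rough sketch using the Tesseract OCR engine (via pytesseract, with its French language pack installed) and the same Cloud Translation client as above; the image file name is a placeholder:

```python
# Hedged sketch: Tesseract OCR stands in for Google's on-device
# text recognition.
from PIL import Image
import pytesseract
from google.cloud import translate_v2 as translate

# Step 1: extract the foreign text from a photo of a sign or menu.
french_text = pytesseract.image_to_string(Image.open("menu.jpg"), lang="fra")

# Step 2: translate the recognised text into the user's language.
client = translate.Client()
print(client.translate(french_text, target_language="en")["translatedText"])
```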

The tech giant has a vision to use AI in a way that frees up human time. The company held an event at its Zurich office recently where it went behind the scenes on apps like Translate, Live Transcribe, Gboard keyboard and the Pixel smartphone.

AI is the capability of machines to mimic human thinking and behaviour — and Google is one of the world’s biggest investors in it. The company believes AI can make "it easier for people to do things every day".

Those "things" are expanding rapidly. The gallery of photos on a smartphone can use facial recognition to group photos based on the faces it detects. Google Assistant — or Apple’s Siri and Amazon’s Alexa — can set reminders, perform web searches or type messages on your behalf. Or AI-powered automation in a home can reduce power use by letting you control thermostats, plugs and lighting sensors from a device.

Google acquired Nest Labs in 2014 for $3.2bn. The search giant has since opened a dedicated division called Google AI, whose staff have published more than 5,500 research publications that are free to download online.

Its other AI advancements include apps for people with disabilities or physical limitations. Live Caption, for example, is a real-time, on-device automatic captioning system that uses machine learning to turn spoken words (such as those in videos) into text, even without an internet connection. Lookout is an app that uses computer vision to tell blind people about their surroundings. And Live Transcribe converts speech into text in real time.

All three services were demonstrated at the Zurich event by Dimitri Kanevsky, a deaf scientist and mathematician from the Google accessibility team. The apps have changed his life, he says, because they allow him to communicate directly with people and to deliver presentations.
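Live Transcribe runs on Google’s own speech models, but the general speech-to-text flow can be sketched with the open-source SpeechRecognition package. The offline CMU Sphinx engine (installed via pocketsphinx) loosely stands in for on-device recognition; the file name is a placeholder:

```python
# Hedged sketch: CMU Sphinx stands in for Google's speech models.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("presentation.wav") as source:
    audio = recognizer.record(source)   # read the whole file

# Runs entirely offline, in the spirit of Live Caption's
# on-device captioning.
print(recognizer.recognize_sphinx(audio))
```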

The company also showcased how these products work on its Pixel 4 smartphone, which features a world first: a miniature radar chip. Combined with machine learning, it detects movement around the device.

Google calls it Motion Sense. It lets you skip tracks, silence calls, snooze an alarm or interact with its Pokémon Pikachu wallpaper, all without touching the handset. It senses when you’re reaching for the phone and initiates facial unlocking, or turns off the screen when you’re not around.

Brandon Barbello, product manager at Google Hardware, says: "The AI behind this is fascinating because the radar has a signal that is so different from computer vision — there is no human sense. It’s like an ear that hears motion, and it sees things like blobs in space."

The Pixel 4 also has new AI enhancements to its camera, bringing features like astrophotography, a mode that captures the stars at night by combining multiple long-exposure photos. Super Res Zoom zooms in close to a subject without losing image quality, and Top Shot recognises facial expressions to identify when someone has blinked or was not looking at the camera.
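The astrophotography mode rests on a classic trick: stack many frames so the true signal adds up while random sensor noise averages out. A toy numpy sketch of the stacking step, assuming the exposures are already aligned (Google’s real pipeline handles alignment itself, and the file names are placeholders):

```python
# Hedged sketch of frame stacking, not Google's actual pipeline.
import numpy as np
from PIL import Image

# Load several consecutive exposures of the same night-sky scene.
frames = [
    np.asarray(Image.open(f"exposure_{i}.jpg"), dtype=np.float32)
    for i in range(15)
]

# Averaging N frames preserves the stars while shrinking random
# sensor noise by roughly a factor of sqrt(N).
stacked = np.mean(frames, axis=0)
Image.fromarray(stacked.astype(np.uint8)).save("night_sky.jpg")
```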

The smartphone’s Gboard keyboard supports over 1,000 languages. Its users have given consent, whether knowingly or not, for Google to use their data to improve its machine-learning algorithms. Is that safe?

Françoise Beaufays, principal scientist at Google Search, says the keyboard’s models combine data straight from books with data from users, but to protect privacy the user data isn’t uploaded to Google’s servers. What gets uploaded is only the difference between the model a device received and the model obtained after local training. "Now you have a new model that represents a variety of users and the details of the user have never left their device; it’s not exposed to the Google servers."
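What Beaufays describes matches the federated learning approach Google has published for Gboard: devices train locally and upload only model deltas, which the server averages into a new shared model. A toy numpy simulation of that averaging step, using a stand-in linear model rather than anything resembling Gboard’s actual architecture:

```python
# Hedged sketch of federated averaging on simulated devices.
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_weights, user_data, lr=0.1):
    """Train briefly 'on-device' and return only the weight delta."""
    w = global_weights.copy()
    for x, y in user_data:            # toy linear model, squared-error SGD
        w -= lr * (w @ x - y) * x
    return w - global_weights         # raw data never leaves the device

# Simulate ten devices, each holding its own private data.
true_w = np.array([1.0, -2.0, 0.5])
devices = []
for _ in range(10):
    xs = rng.normal(size=(20, 3))
    devices.append([(x, x @ true_w) for x in xs])

# Server side: start from a shared model and average the uploaded deltas.
global_w = np.zeros(3)
for _ in range(5):
    deltas = [local_update(global_w, data) for data in devices]
    global_w += np.mean(deltas, axis=0)   # federated averaging step

print(global_w)   # approaches true_w; no device ever shared its raw data
```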

This raises obvious data security concerns, but the reality is that good data is vital. Without it, artificial intelligence simply cannot work.

Akabor visited Zurich as a guest of Google
