- The Google Assistant now sounds more natural thanks to software from Alphabet's DeepMind artificial intelligence research group.
- The announcement comes a few months after Apple said it was making Siri sound more natural.
In the intensifying battle to have the best voice-powered technology, Google is making its virtual assistant sound more human and less robotic.
The speech-activated Google Assistant now relies on software from DeepMind, the artificial intelligence research group under Alphabet. For American English and Japanese, the Assistant uses a version of DeepMind's WaveNet speech-synthesis system, according to a blog post published on Wednesday.
It's a timely shift. Two weeks ago, Apple released an upgraded version of its Siri virtual assistant, which is available on iPhones, iPads, Macs and other devices. The news also comes as Google introduces new versions of its Pixel smartphones, along with speakers and earbuds that will let users talk to the Google Assistant.
The blog post, written by DeepMind research scientists Aäron van den Oord and Tom Walters and Google speech software engineer Trevor Strohman, said the WaveNet-powered Assistant "is the first product to launch" using Google's second-generation AI chip, the Tensor Processing Unit, or TPU. Google also uses graphics cards from Nvidia to train certain AI systems.