Google's virtual assistant now sounds less like a robot and more like a person

Key Points
  • The Google Assistant now sounds more natural thanks to software from Alphabet's DeepMind artificial intelligence research group.
  • The announcement comes a few months after Apple said it was making Siri sound more natural.
Sundar Pichai, chief executive officer of Google, discusses the Google Pixel virtual assistant during a Google product launch event in San Francisco on Oct. 4, 2016.
Michael Short | Bloomberg | Getty Images

In the intensifying battle to have the best voice-powered technology, Google is making its virtual assistant sound more human and less robotic.

The speech-activated Google Assistant now relies on software from DeepMind, the artificial intelligence research group under Alphabet. It uses a version of DeepMind's WaveNet speech-synthesis system for American English and Japanese, according to a blog post published on Wednesday.

It's a timely shift. Two weeks ago Apple released an upgraded version of the Siri virtual assistant, which is available on iPhones, iPads, Macs and other devices. The news also comes as Google introduces new versions of its Pixel smartphones, as well as speakers and earbuds that will let users talk to the Google Assistant.

The blog post, written by DeepMind research scientists Aäron van den Oord and Tom Walters and Google speech software engineer Trevor Strohman, said the WaveNet-powered Assistant "is the first product to launch" using Google's second-generation AI chip, the tensor processing unit, or TPU. Google also uses graphics cards from Nvidia to train certain AI systems.

The Google Home Mini, which lets you speak with the Google Assistant.
Source: Google

Google acquired DeepMind in 2014 and has since used its technology to help lower the cost of cooling its data centers.

DeepMind first unveiled WaveNet a year ago, but it was "too computationally intensive to work in consumer products," according to the blog post.
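To see why the original WaveNet was so demanding, note that it generates raw audio autoregressively, one sample at a time, at speech sample rates of roughly 16,000 samples per second. The back-of-the-envelope sketch below (not Google's or DeepMind's code; the function name and the 16 kHz figure are illustrative assumptions based on the original WaveNet design) shows how quickly the number of sequential model evaluations adds up.

```python
# Illustrative sketch (hypothetical helper, not DeepMind code):
# the original WaveNet synthesizes raw audio one sample at a time,
# so every output sample requires a full forward pass of the model.

SAMPLE_RATE_HZ = 16_000  # a typical rate for raw speech audio


def model_invocations(seconds: float, sample_rate: int = SAMPLE_RATE_HZ) -> int:
    """Number of sequential forward passes needed to synthesize
    `seconds` of audio when one model step emits one sample."""
    return int(seconds * sample_rate)


# One second of speech already means 16,000 sequential forward passes,
# and a ten-second reply means 160,000 -- none of which can be
# parallelized across time, since each sample depends on the last.
print(model_invocations(1.0))
print(model_invocations(10.0))
```

That strictly sequential dependency, sample by sample, is what made the first version impractical for a consumer product until the system was reworked to run efficiently on hardware like the TPU.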