Google is making a big-time move in silicon that should scare Nvidia

Key Points
  • The upgraded TPU takes on more of the artificial intelligence workload than the initial version did, handling training as well as inference.
  • Google will now rely more on its own silicon, meaning there will likely be less need for Nvidia's technology.
  • Google introduced the first TPU at its developer conference last year.

Google has spent a decade building servers that can handle billions of web searches a day. The company is now developing chips to deliver the smartest results.

At its annual developer conference on Wednesday, Alphabet introduced the second generation of Google's tensor processing unit (TPU), which is designed for artificial intelligence (AI) workloads. Google unveiled the first version in 2016 and said it had started work on the "stealthy project" a few years earlier.

The upgraded version is the latest indication that Google doesn't want to depend on other companies for core computing infrastructure. It's potentially troubling news for Nvidia, whose graphics processing units (GPUs) have been used by Google for intensive machine learning applications. Nvidia even named Google Cloud as a notable customer in its latest annual report.

Deep learning, a trendy type of AI, typically involves two stages: training artificial neural networks on large amounts of data, then directing the networks to make inferences about new data. Over the past five years, GPUs have become the standard hardware for the training stage of deep learning, which powers applications such as image recognition and speech recognition.
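
In code terms, training is an iterative loop that adjusts a model's parameters to fit labeled examples, while inference is a single forward pass that applies those frozen parameters to unseen input. The sketch below illustrates the two stages with a toy NumPy logistic regression standing in for a neural network; the data and numbers are invented for illustration.

    # Minimal illustration of deep learning's two stages (toy model, made-up data).
    import numpy as np

    rng = np.random.default_rng(0)

    # Stage 1: training -- fit parameters to a large batch of labeled data.
    X = rng.normal(size=(1000, 2))              # inputs
    y = (X[:, 0] + X[:, 1] > 0).astype(float)   # labels
    w, b = np.zeros(2), 0.0
    for _ in range(500):
        p = 1 / (1 + np.exp(-(X @ w + b)))      # sigmoid predictions
        w -= 0.1 * (X.T @ (p - y)) / len(y)     # gradient step on log loss
        b -= 0.1 * np.mean(p - y)

    # Stage 2: inference -- apply the trained parameters to new data.
    x_new = np.array([0.5, 1.2])
    print(1 / (1 + np.exp(-(x_new @ w + b))))   # predicted probability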

While the original TPU was only meant for the inference stage of deep learning, the new version can handle training as well.

"I would expect us to use these TPUs more and more for our training needs, making our experiment cycle faster and more rapid," said Jeff Dean, a senior fellow and head of the Google Brain research team, in a media briefing on Tuesday. The company is sill using "GPUs internally for some kinds of models," he said.

Google’s second-generation tensor processing unit (TPU).
Source: Google

It takes a day to train a machine translation system using 32 of the best commercially available GPUs, and the same workload takes six hours on eight connected TPUs, Dean said.
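
Taken at face value, those figures imply more than a simple four-fold wall-clock speedup, since the TPU setup also uses a quarter as many chips. A quick back-of-the-envelope calculation on Dean's numbers (this arithmetic is ours, not Google's):

    # Dean's stated figures, compared in device-hours.
    gpu_hours = 32 * 24   # 32 GPUs for a day  -> 768 GPU-hours
    tpu_hours = 8 * 6     # 8 TPUs for 6 hours ->  48 TPU-hours
    print(24 / 6)                 # 4.0x faster in wall-clock time
    print(gpu_hours / tpu_hours)  # 16.0x fewer device-hours

By that measure, each TPU does the work of roughly 16 of the GPUs in Dean's comparison, assuming the two runs were otherwise identical.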

Unlike Nvidia and Intel, Google operates this equipment inside its own data centers rather than selling it to other device makers. Facebook has done much the same, although it has opted to share the designs publicly through the Open Compute Project it established in 2011.

People outside Google will be able to rent virtual machines (VMs) accelerated by the second-generation TPUs. Google will also introduce VMs that draw on the Volta GPU that Nvidia announced earlier this month.
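
For a sense of what renting TPU-accelerated capacity looks like from a developer's seat, here is a hypothetical sketch using TensorFlow's later, publicly documented Cloud TPU APIs; the TPU name is an assumption, and nothing here is drawn from Wednesday's announcement itself.

    # Hypothetical sketch: attaching TensorFlow 2.x code to a rented Cloud TPU.
    import tensorflow as tf

    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")  # assumed name
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)

    with strategy.scope():
        # Anything built in this scope is replicated across the TPU's cores.
        model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
        model.compile(optimizer="sgd", loss="mse")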

Over time, Nvidia could find itself getting less business for AI directly from Google. There could also be a more indirect impact, as some users of Nvidia GPUs for AI processing might run that work in Google's data centers.

Nvidia has been a Wall Street darling of late. The stock price has surged nine-fold over the past four years and the company is now worth more than $80 billion. In addition to data center customers, Nvidia also sells GPUs for professional workstations, gaming rigs and even vehicles.

Last month, Google published a paper comparing TPUs with existing chips, saying its own processors run 15 to 30 times faster and are 30 to 80 times more power-efficient than the competition. Nvidia CEO Jen-Hsun Huang shot back and said his company's current chips have "approximately twice the performance of the TPU — the first-generation TPU."

A spokesperson for Nvidia did not respond to a request for comment on Wednesday's announcement.