- Professor David Patterson retired from U.C. Berkeley last summer after a 40-year academic career in computer architecture.
- He's now a key part of the team behind a critical chip that Google uses for artificial intelligence processing.
- Without this chip, Google's top executives estimated, the company would have had to double its data centers to support even a limited amount of voice processing.
A year ago the University of California at Berkeley hosted a retirement celebration for David Patterson, who was hanging it up after a 40-year academic career in computer architecture.
Patterson capped the event last May with a personal 16-minute history, chronicling his days as a wrestler in high school and college and a math major at UCLA, followed by a job at Hughes Aircraft and four decades at Berkeley.
From writing two books with Stanford University's John Hennessy to chairing the Computing Research Association, Patterson told the audience that a key to his success was doing "one big thing at a time."
His next big thing could be enormous.
Rather than hitting the beach after retirement, Patterson joined Google in July to work on an ambitious new chip that's designed to run at least 10 times faster than today's processors and is sophisticated enough to handle the intensive computations required for artificial intelligence.
It's called the Tensor Processing Unit (TPU), and Patterson has emerged as one of the principal evangelists. He spoke to about 100 students and faculty members at the Berkeley campus on Wednesday, a few days shy of the anniversary of his retirement celebration.
"Four years ago they had this worry and it went to the top of the corporation," said Patterson, 69, while sporting a T-shirt for Google Brain, the company's research group. The fear was that if every Android user had three minutes of conversation translated a day using Google's machine learning technology, "we'd have to double our data centers," he said.
Google parent Alphabet already spends $10 billion a year on capital expenses, largely tied to data center costs. And now it's addressing what it calls a "renaissance in machine learning." Deep neural networks, systems that learn and improve as data sets grow bigger and more complicated, require big breakthroughs in hardware efficiency.
Patterson, who gave the same talk at Stanford on Thursday, was among the lead authors of a paper detailing the TPU. Written by 75 engineers, the paper will be delivered next month at the International Symposium on Computer Architecture in Toronto.
The report was Patterson's debut project at Google. Once a week he treks down to the Mountain View headquarters and twice a week he works at the company's office in San Francisco. He reports to Jeff Dean, an 18-year Google veteran and head of Google Brain.
It's not Patterson's first Google gig — he worked there while on academic sabbatical from 2013 to 2014. This time, he joined the TPU project as a distinguished engineer, a year after the chips were first put to use in Google's data centers and just two months after his retirement party.
Patterson has never been one to sit idle. In 2013, while still teaching, he participated in a powerlifting competition — and set a new California state record for his age group.
"Now that I think back, there was no evidence for the assumption that he was retiring except that it was called a retirement celebration," said Mark Hill, who now chairs the computer sciences department at the University of Wisconsin in Madison. Hill said that in computer architecture, Patterson is "on the short list of the great ones of the last half of the 20th century." He called the computer architecture book that Patterson wrote with Hennessy the field's most influential textbook of the last 25 years.
Google says the TPU is being tested broadly across the company. It's used for every search query as well as for improving maps and navigation, and it was the technology used to power DeepMind's AlphaGo victory over Go legend Lee Sedol last year in Seoul.
But don't expect Google to compete in the semiconductor market against the likes of Intel and Nvidia. Outside developers who want the sort of performance that Google's processors can offer have to turn to the company's cloud computing service and products like TensorFlow, an open-source software library for machine learning workloads.
"They're much better at building tools for themselves than for others," said Chris Nicholson.
At least one start-up may be trying to fill in the gaps where Google won't tread. Several engineers from the project have teamed up with investor Chamath Palihapitiya to form a mysterious start-up.
It's still very early days for the TPU. Thus far, the processor has proven effective at what's called inference, the second phase of deep learning, in which a trained model is applied to new data. The first phase is training, and for that, as far as we know, Google still counts on off-the-shelf processors.
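The training/inference split can be illustrated with a minimal sketch, a hypothetical toy model in plain Python rather than anything resembling Google's code: training repeatedly adjusts a model's weights against known examples, while inference simply applies the frozen weights to new input, a much cheaper operation and the one the TPU was first built to accelerate.

```python
# Toy single-weight linear model: y = w * x.
# Training (phase 1): fit w to labeled examples via gradient descent.
# Inference (phase 2): apply the frozen weight to new inputs.

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # targets follow y = 2x

w = 0.0    # initial weight
lr = 0.05  # learning rate

# Training loop: nudge w downhill on the squared error for each example.
for _ in range(200):
    for x, y in examples:
        pred = w * x
        grad = 2.0 * (pred - y) * x  # derivative of (w*x - y)^2 w.r.t. w
        w -= lr * grad

# Inference: no gradients, no updates; just a forward pass with fixed w.
def infer(x):
    return w * x

print(round(w, 3))           # learned weight, converges near 2.0
print(round(infer(5.0), 2))  # prediction for unseen input, near 10.0
```

Real deep networks have millions of weights instead of one, but the asymmetry is the same: training is the expensive, iterative phase, and inference is the per-request workload that dominates a production data center.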
But Google is famous for keeping quiet about its most cutting-edge projects.
During his speech at Berkeley, Patterson was asked what's next for his team. He didn't take the bait.
Patterson said that one thing he's learned at Google is that, unlike in academia, he's not allowed to talk publicly about future products.
"I can say that Google isn't resting," he said. "It seemed like it was a good experience. They didn't give up. Nobody got laid off."