Google is moving beyond developing artificial intelligence chips for its own data centers and is now designing them to work inside products made by other companies.
After unveiling the Tensor Processing Unit two years ago, Google on Wednesday announced the Edge TPU, a chip that will let sensors and other gadgets process data more quickly. The chips could be used in a wide variety of scenarios, but one initial use is in industrial manufacturing: Consumer electronics maker LG is testing them in a system that detects manufacturing defects in glass for displays.
Google's jump into custom silicon is one way it's trying to expand its cloud market share against Amazon and Microsoft. Since 2015, Google has been using TPUs to accelerate certain workloads in its own data centers, rather than relying on commercially available hardware from vendors like Nvidia.
Last year, Google said its AI silicon was becoming even more strategically important.
In AI, researchers train models with lots of data so that machines are capable of making predictions as new data arrive. The initial version of the TPU could only make those predictions, whereas the second version, in 2017, could also be used to train models — an update that made the chips competitive with Nvidia's graphics cards.
A third-generation TPU was announced earlier this year.
The new Edge TPUs are tiny chips designed specifically for the prediction part of AI, which is less computationally intensive than training models. Because Edge TPUs can run their calculations without connecting to powerful remote servers, applications can work faster and more reliably. They can handle AI work alongside a standard chip or microcontroller in a sensor or gateway device.
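The distinction between training and prediction can be seen in a minimal sketch (plain Python, purely illustrative and not related to Google's software): training requires many expensive passes over data to fit a model's parameters, while prediction with those fitted parameters is a single cheap calculation, which is the workload an Edge TPU is built to accelerate on-device.

```python
def train(data, epochs=1000, lr=0.01):
    """Fit y = w*x + b by gradient descent: many repeated passes over the data."""
    w, b = 0.0, 0.0
    for _ in range(epochs):                  # the costly, iterative part
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x                # nudge parameters toward the data
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Inference: one multiply-add per input, far cheaper than training."""
    return w * x + b

# Train once (expensive), then predict many times (cheap enough for the edge).
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]  # samples of y = 2x + 1
w, b = train(data)
print(round(predict(w, b, 3.0)))             # -> 7
```

In a real deployment the roles are split the same way: the heavy training loop runs in a data center, and only the lightweight prediction step runs on the device.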
Google isn't making the Edge TPU to compete with traditional chips, said Injong Rhee, Samsung's former chief technology officer who joined Google as an entrepreneur in residence in February.
"It's very good for all the silicon vendors and the device makers," Rhee told CNBC.
Rhee said Edge TPUs might be "disruptive for the cloud competition," because some computation can now take place at the device level rather than all of it being sent to data centers. For certain types of computing, the Google chip can be more efficient than traditional chips in terms of cost and energy usage, he said.
Google isn't the only cloud provider putting an emphasis on processing for the so-called internet of things, which is centered on managing and processing data from many small embedded devices. Earlier this year, Microsoft announced a chip design for IoT.
Google's new chips will run models built on a simplified version of the TensorFlow AI software, which the company released under an open-source license in 2015.
LG CNS, an organization at LG that handles IT services internally and for other companies, has tested Edge TPUs and plans to start using them inside inspection devices on production lines.
Currently, the inspection devices process more than 200 images of glass per second during the production of glass for display panels. Any issues that arise are manually inspected, and the existing system has about 50 percent accuracy, said Shingyoon Hyun, chief technology officer of LG's CNS organization. Google's AI delivers 99.9 percent accuracy, Hyun said.
"My expectation is to save money in terms of detecting anomalies and imperfections that really impact our quality," Hyun said, adding that his group previously looked at a computing system from Nvidia.
Google has built a kit that includes the Edge TPU, an NXP chip and Wi-Fi connectivity for developers to try out. The company is partnering with manufacturers like Arm, Harting, Hitachi Vantara, Nexcom, Nokia and NXP.
Rhee wouldn't say if Google is planning to build a more powerful Edge TPU for training models.