Like an increasing number of other Google products, the $249 camera relies on artificial intelligence. Whenever it's on, Google Clips is constantly looking out for certain people, facial expressions like smiles, and other indications that it should record footage, Google product manager Juston Payne said at Google's hardware event in San Francisco on Wednesday. The automatic editing capability is reminiscent of the small company Graava's miniature camera.
"It looks for stable, clear shots of people you know, and you help the camera learn who's important to you," Payne said. "And finally, all the machine learning happens on the device itself, so just like any point-and-shoot, nothing leaves your device until you decide to save and share it."
That last part is important. AI tasks can require heavy computation, and Google has historically offloaded that work to its cloud servers, particularly for AI features on mobile devices. With Clips, the company has made it possible for the camera to work its magic entirely locally.
Additionally, a Now Playing feature on the second generation of Google's Pixel smartphones "detects what song is playing and matches it with tens of thousands of song patterns on your phone" to quickly identify it without ever reaching out to cloud servers for assistance, Google senior director of product management Sabrina Ellis said at the Google event.
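The idea behind that kind of local matching can be illustrated with a toy sketch. This is not Google's implementation — the fingerprinting scheme, the `LOCAL_DB` table, and all function names here are hypothetical — but it shows the basic shape of identifying a song by comparing a coarse pattern against a database stored on the device, with no network call involved.

```python
# Toy illustration (hypothetical, not Google's Now Playing code):
# match a short audio clip against a small on-device pattern
# database, so identification never requires a cloud server.

def fingerprint(samples, bands=4):
    """Reduce amplitude samples to a coarse pattern: for each
    adjacent pair of bands, record whether the average rises."""
    size = len(samples) // bands
    means = [sum(samples[i * size:(i + 1) * size]) / size
             for i in range(bands)]
    return tuple(1 if b > a else 0 for a, b in zip(means, means[1:]))

# Hypothetical on-device database: pattern -> song title.
LOCAL_DB = {
    fingerprint([0, 1, 3, 7, 6, 4, 2, 1]): "Song A",
    fingerprint([9, 7, 5, 3, 4, 6, 8, 9]): "Song B",
}

def identify(samples):
    """Look up a captured clip entirely locally; None if unknown."""
    return LOCAL_DB.get(fingerprint(samples))
```

A real system would use far more robust acoustic fingerprints, but the structure is the same: the lookup table ships with the phone, so the query never leaves it.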
This isn't the first time Google has done a type of artificial intelligence computation locally on a mobile device. For example, in 2015 Google made it possible to automatically translate text captured with a phone's camera.
What's notable this time is that Google is positioning Now Playing and the Clips camera as examples of "on-device machine learning." Just a few months ago, Apple emphasized the same idea when it introduced Core ML, a framework developers can use to run AI models inside iOS apps without pinging the cloud.
Google didn't announce a release date for Clips but did say the product is coming soon. GoPro stock fell after Google revealed Clips.