KEY POINTS
  • Google will embed information called a markup inside images created by its AI models that will warn people the image was originally created by a computer, the company said Wednesday.
  • "Image self-labeled as AI generated," reads one example warning provided by Google.
  • The move is the most significant effort by a big technology company so far to label and classify output from so-called generative AI.

Alphabet CEO Sundar Pichai delivers the keynote address at the Google I/O developers conference at Shoreline Amphitheatre in Mountain View, California, on May 10, 2023.

Google will embed information called a markup inside images created by its AI models to warn people the images were originally created by a computer, it said on Wednesday.

The data inside the images won't be visible to the human eye, but software such as Google Search will be able to read it and then display a label warning users. Google will also provide additional information about all images in its results to help prevent deception, including when the image was first uploaded to the search engine and whether it's been cited by news sites.
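Invisible provenance labels like this are typically carried in an image's embedded metadata rather than its pixels; the IPTC vocabulary, for example, defines a `DigitalSourceType` value of `trainedAlgorithmicMedia` for AI-generated media. As a rough sketch of how reading software might detect such a label (the exact format Google will use and the helper names below are assumptions for illustration):

```python
# Hypothetical sketch: detecting an AI-provenance label embedded as
# metadata inside an image file. The IPTC "DigitalSourceType" value
# "trainedAlgorithmicMedia" marks computer-generated media; whether
# Google's markup uses this exact token is an assumption here.

AI_SOURCE_TOKEN = b"trainedAlgorithmicMedia"

def is_labeled_ai_generated(image_bytes: bytes) -> bool:
    """Return True if the raw image bytes carry the IPTC
    'trainedAlgorithmicMedia' digital-source-type token."""
    return AI_SOURCE_TOKEN in image_bytes

# Minimal stand-in for an image carrying such a metadata packet
# (the XMP-style layout is illustrative, not Google's actual format):
fake_image = (
    b"\xff\xd8"  # JPEG start-of-image marker
    b"<Iptc4xmpExt:DigitalSourceType>"
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
    b"</Iptc4xmpExt:DigitalSourceType>"
    b"\xff\xd9"  # JPEG end-of-image marker
)

print(is_labeled_ai_generated(fake_image))  # prints True
```

Because the token lives in metadata, not pixels, it is invisible to viewers but trivially readable by software such as a search engine's indexer.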