
It’s getting harder to trust what you read online—a Google exec explains why, and what you can do about it


Gen Zers might be the most digitally savvy group on the planet, but that doesn't make them immune to believing — and spreading — false information online.

That's a big problem for everyone, Google executive Beth Goldberg tells CNBC Make It.

"I do think Gen Z's susceptibility to misinformation is more important than other cohorts. Definitely," says Goldberg, the head of research and development at Jigsaw, a Google unit created in 2010 to study and monitor online abuse, harassment and disinformation.

Gen Z is "just so online," Goldberg says, citing a recent study backed by Google on explanations for young people's susceptibility to misinformation. It can lead to "information overload," where the sheer volume of available information can make it harder to discern fact from fiction, she adds.

More than half of Gen Z's members get most of their news directly from social media, and younger generations are much more likely to believe online influencers over other sources of information, research shows. These traits aren't necessarily unique to Gen Z, but they're more prevalent among that cohort, Goldberg says.

And more than half of Gen Z members spend more than four hours per day on social media, nearly twice the percentage of all U.S. adults who are online that much, according to a 2022 Morning Consult survey.

"You have this, sort of, amplification [of misinformation] — not just of Gen Z being susceptible as consumers, but also then propagating that misinformation as commenters and creators themselves," Goldberg says. "It has an outsized risk for all of us online."

What you can do to make the internet more trustworthy

The increasing spread of false information online is a significant concern, especially as a threat to democracy and public health. The issue is only expected to get worse, as artificial intelligence technology could make convincing disinformation easier to create and spread.

But the problem is solvable, Goldberg and other experts say. Addressing it will take commitment from many parties, from units like Jigsaw — which develops AI tools that can identify toxic speech online — and its parent company Google to world governments and individual internet users like you.
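For readers who want a concrete sense of what those tools look like, Jigsaw's publicly available Perspective API scores a piece of text for attributes such as toxicity. Below is a minimal sketch of calling it from Python; the API key and the sample comment are placeholders, and the request format follows the API's public documentation.

```python
import requests

# Placeholder key: a real one comes from the Perspective API developer console.
API_KEY = "YOUR_API_KEY"
URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    f"comments:analyze?key={API_KEY}"
)

# Ask the API to score a sample comment for toxicity.
payload = {
    "comment": {"text": "You are an idiot and nobody should listen to you."},
    "languages": ["en"],
    "requestedAttributes": {"TOXICITY": {}},
}

response = requests.post(URL, json=payload, timeout=10)
response.raise_for_status()

# summaryScore.value ranges from 0.0 (unlikely toxic) to 1.0 (very likely toxic).
score = response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"Toxicity score: {score:.2f}")
```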

You can learn to spot misinformation by practicing something called "lateral reading," where you verify information you read online by opening new tabs to search for supporting evidence and for details about the website where the original information was published.

"[It's] looking up the funder, looking up the name of the website and where it's from, and really digging in and getting other sources to verify what's in the first tab that you're on," says Goldberg.

From there, you can call out misinformation in the comments of social media posts, supplying evidence to show why certain claims might be off-base or unverified. Those friendly fact-checks can be incredibly effective "because it's someone you already trust in your peer group," Goldberg says.

Other potential solutions for online misinformation

On a more structural level, internet literacy programs in schools can help people learn to fact-check and identify false information, Goldberg says. Tech companies should publish better summaries of content on social media platforms to combat information overload, she adds.

And the social platforms themselves need to amplify trustworthy sources of information, Goldberg says. Google, Meta, Twitter and other companies have faced intense criticism for allowing false information to spread on their platforms.

In response, those tech giants typically cite the difficulty of stamping out every instance of misinformation in real time. Google and Meta have demonetized some high-profile accounts linked to disinformation, including Russian state media following the invasion of Ukraine, cutting them off from ad dollars.

Demonetization is a surprisingly successful tactic, Goldberg says: The revenue brought in by disinformation is often "a big driver of [bad actors] spending that much time and energy creating harmful content."

Google, Meta and Twitter didn't immediately respond to CNBC Make It's requests for comment.

Jigsaw has also experimented with "prebunking," or combating conspiracy theories by creating short video ads that highlight misinformation tactics, from scapegoating to fear-mongering.

Its prebunking ads on YouTube videos in Eastern Europe have reached tens of millions of viewers, with one video breaking down false claims that had been circulating about Ukrainian refugees, Goldberg says.

"We can anticipate, 'What are [people] falling for online right now? What are the types of misinformation narratives or techniques that are convincing them of lies, essentially?'" she says. "And can we then design either videos, or some sort of prebunking message to help them gain a little bit of resilience, a little mental armor, ahead of time?"

