Carolyn Everson, Facebook's vice president of global marketing solutions, regularly sends out video messages to communicate with the company's internal ad-sales teams.
But in mid-February, Everson did something unusual: She recorded a video for clients explaining what Facebook was doing to prevent ads from ending up next to undesirable content.
She sent the video to a group of advertising agencies and consumer brands that represent some of the company's biggest ad buyers.
Among the topics discussed was how Facebook was working to combat hate speech, terrorism and what the company calls "false news."
Everson's decision to share the video with Facebook's Client Council shows how concerned the company and its customers are about the problem of misinformation. Rival Google is concerned, too.
"They're both taking this very seriously," said John Montgomery, executive vice president for brand safety at GroupM, a unit of the giant ad agency WPP.
A Facebook spokesperson explained, "We regularly share updates with our advertising partners around the world. This is one example of those communications."
False news is an issue that rears up each time misinformation and other inaccurate stories spread on Facebook and YouTube after mass shootings, like the ones in Florida in February and Las Vegas in October.
"Every time something like this happens, it undermines confidence in social as a platform" for brand advertising, said Montgomery, an industry veteran who speaks with Facebook, Google and large brands on a weekly basis.
Some of the posts that followed the Florida high school shootings are a reminder that "no un-curated social advertising is 100 percent safe," he said.
The latest round of false social media-conspiracy theories following a mass shooting came two weeks after CEO Mark Zuckerberg said on an earnings conference call that "preventing false news, hate speech and other abuse is another important area of focus for us."
In a speech Wednesday, Facebook COO Sheryl Sandberg gave a clear account of the nature of the problem.
"People are writing outlandish headlines so they can get clicks and can get ad money, so probably the most important thing we can do is go after the economic incentives," Sandberg told the audience of investors at the Morgan Stanley Technology, Media and Telecom conference.
Facebook wants to "make sure the people who are purveying false news are not making money on it," Sandberg said.
Much of the source of false news on the platform is "fake accounts," she said.
Facebook says it now catches the vast majority of improper posts with its content-filtering systems that combine artificial intelligence-based software with human workers.
The company has said, for example, that its AI systems filter out 99 percent of content from terror groups like ISIS and al-Qaeda.
And Zuckerberg said last month that the company "made progress demoting false news in News Feed, which typically reduces an article's traffic by 80 percent and destroys the economic incentives that most spammers and troll farms have to generate these false articles."
The company has more than doubled the number of workers it employs to filter content, to 14,000, and plans to have 20,000 dedicated to that task by January.
Yet even if 1 percent gets through, that's a lot of risky content for brands, given the scale of Facebook, which has 1.4 billion daily users who watch 100 million hours of video per day.
After the Feb. 14 Florida massacre, Facebook and Google's YouTube hosted content that was later proved false.
One set of posts, later debunked, claimed that one of the high school students who spoke in media interviews about gun control was a paid actor.
In another case, the father of a student who had accused CNN of scripting questions for a town hall meeting later admitted he had left out some information in an email to Fox News and other outlets that reported the story.
Ending up next to blatantly fake news isn't the only emerging concern digital marketers have about social media advertising, according to executives at two digital ad agencies that represent national and regional brands.
Verifying whether ad clicks they are charged for are coming from real humans or bots is another, said these executives, who asked not to be identified to avoid harming their relationships with Facebook and Google.
These issues are "top of mind" for brands right now, said one of the executives, who spoke to CNBC in a phone interview.
Violent content is also still a problem, as demonstrated by the shooting death of a North Carolina man that was shown in real time on Facebook.
These concerns are part of a longer trend going back two decades, during which advertisers have surrendered control of their digital campaigns, first to ad-buying networks and later to automated ad auctions.
In exchange, ad-buying on social media and the broader internet has become more efficient and able to reach a broader audience.
So-called programmatic advertising, or automated buying in which advertisers let Facebook and Google select an audience matching their target demographic, has exacerbated the problem.
Yet while these concerns have grown, they haven't stopped advertisers from using Facebook.
The company's revenue surged 47 percent last year to $40.7 billion, as more marketers used Facebook and its Instagram service to market their goods and services.