Russia meddling mess will cost tech giants big bucks to fix

  • Congress grilled Facebook, Google and Twitter execs this week over issues related to fake news and Russian meddling in the U.S. election.
  • The tech giants will need more than artificial intelligence to fix the problem.
  • Here are the expensive steps the companies need to take to avoid regulation.

Committee ranking member Rep. Adam Schiff (D-CA) speaks during a hearing before the House (Select) Intelligence Committee November 1, 2017 on Capitol Hill in Washington, DC.
Getty Images

During a series of hearings before House and Senate committees this week, members of Congress trotted out poster boards showing graphic examples of social media advertisements that attempted to influence the 2016 election. With headlines like "Heritage, not hate. The South will rise again!" and "Join us because we care. Black matters!" these ads focused on polarizing, hot-button issues including gun ownership, race relations, immigration, and religion, simultaneously targeting both sides of each debate in an effort to foment unrest.

Attorneys for Facebook, Google, and Twitter sat in the hot seat during these hearings and offered Congress assurances that they take the issue seriously and are implementing new controls to prevent misleading advertising. The issue with those safeguards, however, is that they are not likely to be effective. Many of them depend heavily upon artificial intelligence and machine learning technologies that simply aren't yet up to the challenge, at least on their own.

At the heart of these approaches is the belief that social media companies can develop models that automatically identify false and misleading advertisements, as well as advertisers operating under a false flag. The reality is that parties seeking to defeat these automated safeguards can continually alter their advertisements until they discover content that passes through the algorithm's filters.
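To make that cat-and-mouse dynamic concrete, here is a minimal sketch in Python of the trial-and-error loop an adversary can run. Everything in it is hypothetical: the blocklist filter stands in for whatever proprietary model a platform actually uses, and the substitutions stand in for the rewording a real propagandist would try.

```python
import random

# Hypothetical stand-in for a platform's automated ad filter.
# Real filters are proprietary ML models; this toy version just
# rejects ads containing terms from a fixed blocklist.
BLOCKLIST = {"hate", "rise again"}

def filter_blocks(ad_text):
    """Return True if the automated filter would reject this ad."""
    lowered = ad_text.lower()
    return any(term in lowered for term in BLOCKLIST)

# Illustrative substitutions an adversary might try.
REWRITES = {"hate": "h8", "rise again": "return"}

def probe_until_accepted(ad_text, max_attempts=100):
    """Keep rewording the ad until the filter passes it, or give up."""
    for _ in range(max_attempts):
        if not filter_blocks(ad_text):
            return ad_text  # found wording that slips through
        # Apply one random substitution and try again.
        term, replacement = random.choice(list(REWRITES.items()))
        ad_text = ad_text.replace(term, replacement)
    return None

print(probe_until_accepted("Heritage, not hate. The South will rise again!"))
```

The point is not the toy filter but the loop around it: as long as an adversary can submit unlimited variations at essentially no cost, any static automated check will eventually leak.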

This doesn't mean, however, that artificial intelligence can't be a valuable tool in the fight against fake news. It just can't be our only tool. Effective approaches to combating misleading advertisements must combine technology with old-fashioned human investigative skills. There are two key actions industry can take to protect the integrity of online advertising.

"Make no mistake about it. These mechanisms will be expensive, but they are necessary to preserving the integrity of online platforms as trusted forums."

First, companies should verify and disclose the identities of online advertisers. Americans have a right to know when they're viewing paid content and deserve to understand who is putting up the money to sponsor those messages. During Tuesday's hearings, Senator John Kennedy of Louisiana addressed this issue bluntly: "I don't believe that you have the ability to determine the identity of all of your advertisers," he told attorneys for Facebook, Google, and Twitter. "You're good, but you're not that good." Overcoming Senator Kennedy's skepticism will require social media firms to roll up their sleeves and dig into records to confirm the identities of their advertisers.

Second, the content of advertisements also requires careful scrutiny. Before publication, ads from new advertisers should be reviewed by a person trained to identify blatantly false and misleading content. This single control would prevent propaganda purveyors from simply creating new advertising accounts each time they are shut down.
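As a rough sketch of where that pre-publication gate could sit in an ad pipeline, consider the Python snippet below. The function, field, and record names are all hypothetical, not any platform's real API.

```python
from dataclasses import dataclass

@dataclass
class Ad:
    advertiser_id: str
    content: str

# Hypothetical records standing in for real account history
# and a moderation team's work queue.
established_advertisers = {"acct-001", "acct-002"}
human_review_queue = []

def submit_ad(ad):
    """Gate ads from new accounts behind human review before publication."""
    if ad.advertiser_id not in established_advertisers:
        human_review_queue.append(ad)  # a trained reviewer decides
        return "held for human review"
    return "published"  # established accounts follow the normal pipeline

print(submit_ad(Ad("acct-999", "Join us because we care!")))  # held for human review
```

The design choice doing the work here is that a brand-new account gets no automatic path to publication, which is exactly what defeats the create-a-new-account-after-each-shutdown tactic.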

Make no mistake about it: these mechanisms will be expensive, but they are necessary to preserve the integrity of online platforms as trusted forums. Recent estimates place the size of the online advertising market at $83 billion this year. Surely, there's room in that budget for the staff required to screen advertisers and advertising content.

In fact, according to a report in Wired magazine, Google has had a similar model in place since 2004. The company employs a virtual army of content screeners who comb through YouTube videos in search of offensive material. Their goal? To make sure that paid advertisements don't wind up running alongside offensive content, potentially tarnishing the reputations of its corporate customers. It's time to take the same approach to protect society from misleading and dangerous advertisements.

Social media companies find themselves at a crucial crossroads. They may choose to apply their considerable talent and wealth toward the goal of creating what Sean Edgett, attorney for Twitter, described to Congress as "a safe, open, transparent, and positive platform." Or they may decide that rigorous screening of potential customers is not a justifiable investment.

Will Google, Facebook, Twitter, and their counterparts take these steps on their own? It would be ideal if they did, but history tells us that self-regulation rarely works in the absence of government oversight. That's why we have the FTC, FDA, and other watchdog agencies. Social media platforms would be wise to heed the warning issued by Senator Dianne Feinstein during Wednesday's hearings: "You have to be the ones to do something about it…or we will."

Senator Feinstein is right. The American people are watching. Do the right thing.

Commentary by Mike Chapple, academic director of the Master of Science in Business Analytics program and associate teaching professor of IT, Analytics, and Operations in the University of Notre Dame's Mendoza College of Business. Follow him on Twitter @mchapple.
