Op-Ed Contributor

How to Monitor Fake News

WASHINGTON — The indictment of 13 Russians filed on Friday by Robert Mueller, the special counsel investigating Russian efforts to influence the 2016 presidential election, details the secret workings of the Internet Research Agency, an organization in St. Petersburg, Russia, that disseminates false information online. According to American intelligence officials, the Kremlin oversaw this shadowy operation, which made extensive use of social media accounts to foster conflict in the United States and erode public faith in its democracy.

But the Kremlin’s operation relied on more than just its own secrecy. It also benefited from the secrecy of social media platforms like Facebook and Twitter. Their algorithms for systematically targeting users to receive certain content are off limits to the public, and the output of these algorithms is almost impossible to monitor. The algorithms make millions of what amount to editorial decisions, pumping out content without anyone fully understanding what is happening.

The editorial decisions of a newspaper or television news program are immediately apparent (articles published, segments aired) and so can be readily analyzed for bias and effect. By contrast, the editorial decisions of social media algorithms are opaque and slow to come to light, even for those who run the platforms. It can take days or weeks before anyone finds out what has been disseminated by social media software.

[Image: Displays of social media posts at a congressional hearing in November about Russian interference in the election. Credit: Andrew Harrer/Bloomberg]

The Mueller investigation is shining a welcome light on the Kremlin’s covert activity, but there is no similar effort to shine a light on the social media algorithms that helped the Russians spread their messages. There needs to be. This effort should begin by “opening up” the results of the algorithms.

In computer-speak, this “opening up” would involve something called an open application programming interface. This is a common software technique that allows different programs to work with one another. For instance, Uber uses the open application programming interface of Google Maps to get information about a rider’s pickup point and destination. It is not Uber’s own mapping algorithm, but rather Google’s open application programming interface, that makes it possible for Uber to build its own algorithms for its distinctive functions.
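
To make the idea concrete, here is a minimal sketch, in Python, of one program using another's open application programming interface. It calls Google's public geocoding endpoint, the kind of open interface Uber builds on; the function name and placeholder key are illustrative, not Uber's actual code.

    import requests  # third-party HTTP client

    # Google's public geocoding endpoint: a real open API that
    # other companies' software can call to turn an address
    # into map coordinates.
    GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"

    def locate_pickup(address, api_key):
        """Return (latitude, longitude) for a rider's pickup address."""
        resp = requests.get(GEOCODE_URL, params={"address": address, "key": api_key})
        resp.raise_for_status()
        loc = resp.json()["results"][0]["geometry"]["location"]
        return loc["lat"], loc["lng"]

    # Example: lat, lng = locate_pickup("350 Fifth Ave, New York, NY", "YOUR_KEY")

The point is that Uber never sees how Google computes the coordinates; it sees only the published interface and the results.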

The government should require social media platforms like Facebook and Twitter to use a similar open application programming interface. This would make it possible for third parties to build software to monitor and report on the effects of social media algorithms. (This idea has been proposed by Wael Ghonim, the Egyptian Google employee who helped organize the Tahrir Square uprising in 2011.)

To be clear, the proposal is not to force companies to open up their algorithms — just the results of the algorithms. The goal is to make it possible to understand what content is fed into the algorithms and how the algorithms distribute that content. Who created the information or advertisement? And to what groups of users was it directed? An open application programming interface would therefore threaten neither a social media platform’s intellectual property nor the privacy of its individual users.
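
Because no platform offers such an interface today, any example must be hypothetical. The sketch below invents an endpoint and field names to show what a third-party query for a post's distribution record might return: the creator and the targeted audiences, in aggregate, with no individual user identified.

    import requests

    # Hypothetical endpoint: no platform exposes this today. The
    # URL and field names are invented to illustrate the proposal.
    PLATFORM_API = "https://api.example-platform.com/v1/distribution"

    def distribution_record(post_id):
        """Fetch who created a post and which audience groups the
        platform's algorithm delivered it to, in aggregate."""
        resp = requests.get(PLATFORM_API + "/" + post_id)
        resp.raise_for_status()
        return resp.json()

    # An illustrative record: the creator and the targeted groups
    # are exposed, but no individual user is identified.
    # {"post_id": "abc123",
    #  "creator": "PatriotNews_Org",
    #  "paid_promotion": true,
    #  "targeting": ["swing-state residents", "ages 45-65"],
    #  "impressions": 1400000}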

Media watchdog groups have long been able to assess the results of the editorial decisions of newspapers and television. Whether those stories express the left, right or center of the political spectrum, they are openly available to independent organizations that want to understand what is being communicated.

Extending this practice to social media would mean that a watchdog group could create software to analyze and make public whatever information from the platforms it might consider important: the demographics of the readership of a certain article, for instance, or whether a fake story continued to be widely disseminated even after being debunked.
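
Continuing the hypothetical interface above, a watchdog's check for the second case, a debunked story that keeps spreading, might be as simple as the following sketch; the endpoint, the response format and the threshold are all assumptions.

    from datetime import date
    import requests

    PLATFORM_API = "https://api.example-platform.com/v1/distribution"  # hypothetical

    def still_spreading(post_id, debunked_on):
        """Report whether a story kept circulating widely after being
        debunked, assuming the interface returns daily impression
        counts keyed by ISO date, e.g. {"2018-02-16": 90000, ...}."""
        resp = requests.get(PLATFORM_API + "/" + post_id + "/daily_impressions")
        resp.raise_for_status()
        daily = resp.json()
        after = sum(count for day, count in daily.items()
                    if date.fromisoformat(day) > debunked_on)
        return after > 100000  # an arbitrary threshold a watchdog might pick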

After the Mueller indictment, Twitter issued a statement noting that technology companies “cannot defeat this novel, shared threat alone” — referring to efforts like the Russian disinformation campaign. “The best approach,” the statement continued, “is to share information and ideas to increase our collective knowledge, with the full weight of government and law enforcement leading the charge against threats to our democracy.”

This is true. And one effective form of information-sharing would be legally mandated open application programming interfaces for social media platforms. They would help the public identify what is being delivered by social media algorithms, and thus help protect our democracy.

Tom Wheeler, the chairman of the Federal Communications Commission from 2013 to 2017, is a visiting fellow at the Brookings Institution and a fellow at Harvard Kennedy School.


A version of this article appears in print in Section A, Page 23 of the New York edition with the headline: How to Monitor Fake News.
