The European Commission’s disinformation fail


L. Gordon Crovitz, a former publisher of the Wall Street Journal, is co-CEO of NewsGuard. 

Europe has missed a great opportunity.  

Announcing a new anti-disinformation code last month, European Commission Vice President Věra Jourová noted that "Russia is weaponizing disinformation as part of its military aggression against Ukraine." She also highlighted how the Kremlin has been using it more broadly as a tool against democracies.

Jourová is right about the urgency of the disinformation challenge we face, particularly from Russia. However, rather than comprehensively addressing the problem, the new European code steps back from the fight at hand, capitulating once more to the Silicon Valley platforms.

The Strengthened Code of Practice on Disinformation, announced in June, continues to insulate platforms from the known harms they cause, as each one is now allowed to pick and choose which sections of the code it will be bound by. The result is that the large platforms, with the exception of Microsoft, have all declined to follow key "user empowerment" steps that would sharply limit the spread of disinformation.

As NewsGuard, we were asked by Commission staff to become a signatory to this revised code, intended to strengthen the original 2018 version, which had failed to make a difference in the battle against disinformation. Throughout this process, my colleagues got a bird's-eye view of just how far Facebook, Google/YouTube, Twitter, TikTok and others will go to evade responsibility for the disinformation they distribute and promote via their algorithms.

NewsGuard is an organization that rates news and information sources on how well they adhere to basic, apolitical criteria of journalistic practice. We agreed to become a signatory in the hope that platforms would make tools like ours available to their users, so that they would know exactly who is feeding them information. It would finally give users the ability to know which sources are generally reliable, and which ones aren't.

The Commission’s goal for the revised code stated that “Users should have access to tools to understand and flag disinformation and safely navigate in the online environment.” It encouraged platforms to provide access to “indicators of trustworthiness, focused on the integrity of the source,” emphasizing how this “can support users in making informed choices.”  

However, this is not what was achieved. As we had to agree to keep the negotiations over the revised code confidential, I cannot detail the filibustering that took place over the course of those eight months, as platform representatives watered down user-empowerment commitments paragraph by paragraph, word by word. And in the end — besides Microsoft — they refused to sign anyway. 

The key provision that caused this refusal called on platforms to “make it possible for users of their services to access indicators of trustworthiness (such as trust marks focused on the integrity of the source and the methodology behind such indicators) developed by independent third parties.”  

And in line with this provision, services that provide such indicators, including NewsGuard, pledged that “information sources [would be] reviewed in a transparent, apolitical, unbiased, and independent manner, applying fully disclosed criteria equally to all sources and allowing independent audits by independent regulatory authorities or other competent bodies.”  

They also agreed to “provide compliance and correction mechanisms and respect the right of publishers to be heard, including to engage in the assessment process before indicators are applied and to have their responses available to consumers after assessments are published.” 

[Photo caption: European Commissioner for Values and Transparency Věra Jourová in Brussels | Thierry Monasse/Getty Images]

This user empowerment provision is critical, as it goes beyond the failed steps platforms already offer. For example, while platforms claim to use fact-checkers, this doesn’t reduce disinformation because, by definition, fact-checking only happens after the falsehood has spread across the internet.  

In contrast, research shows that people consume and share less disinformation when they have some indication of the trustworthiness of their sources. A study published by analysts at New York University and Princeton University in May found that among the 10 to 20 percent of users whose news diets included a meaningful percentage of untrustworthy sites, access to NewsGuard's ratings led to a "substantively meaningful boost in news quality."

Princeton researcher Andrew Guess told Nieman Lab, “It’s really rare to find any lasting effects on people’s behavior the way we did here,” adding that “For a very subtle intervention of that kind, I think that’s quite remarkable.” Providing news-literacy tools helps undermine publishers of disinformation, reducing the impact of hoaxes about COVID-19, genetically modified foods, measles vaccines, the 2020 United States presidential election — and even Russian disinformation. 

Jourová had good reason to single out Russian propaganda as among the most harmful disinformation spread by digital platforms. The Kremlin identified Silicon Valley as its best partner for spreading falsehoods years ago: its Russia Today (RT) outlet performed so well on YouTube that it became the leading source of news on the platform in many countries, including the U.S. And when RT became the first news channel to hit 1 billion views on YouTube, a senior Google executive even appeared on the network, describing its reporting as "authentic" and without "agenda or propaganda."

The Kremlin’s disinformation industry had spread the false claim that the Ukrainian government is run by Nazis long before its invasion of the country. And NewsGuard has now tracked dozens of false claims regarding the war, including that Ukraine is slaughtering its own people. 

So, when Russia invaded Ukraine, the European Commission made a big show of requiring platforms to drop RT and Sputnik News in Europe. However, data shows that these two sites are but a small part of Russian President Vladimir Putin’s active information operations. 

Our analysts have so far identified 236 websites spreading disinformation about the war, including sites like Pravda — which is run by a Putin supporter — and numerous others claiming to be independent think tanks, while failing to disclose the source of their funding. Many of them are operating in multiple languages too, targeting Europe and North America.  

Platforms know what percentage of their users are getting most of their information from unreliable sources. And sharing this information would make clear to users and the public just how much of the news people see in their feeds comes from propagandists and publishers of disinformation. It would, of course, embarrass the platforms as well, which is perhaps why they refuse to provide users or researchers with this data.  

Facebook, YouTube and the other platforms designed their products to maximize engagement and boost advertising revenues, regardless of the trustworthiness of the information they spread. Under basic liability laws, any other industry would be required to design products for safety by taking reasonable steps to prevent known harms.  

So, while platforms now boast that they’ve signed onto the EU’s revised code, they are doing so without mentioning the critical provisions they’ve ignored. Maybe they assume the pressure’s now off. But unless the stronger requirements are adopted, they will simply continue to operate as usual — failing to warn their users about sources of falsehoods and carrying on as useful accomplices of Putin and his propagandists.

[Microsoft Edge uses NewsGuard technology to identify disinformation, and Microsoft is a sponsor of NewsGuard's news literacy program for schools and libraries. The company has signed on to the EU's Strengthened Code of Practice on Disinformation.]