Such is the speed with which the internet and social media can amplify misinformation. To give the truth a fighting chance, we urgently need to make important changes to our digital lives.

The COVID-19 pandemic created a unique moment of crisis that left all of us more engaged than ever with digital platforms, and our government’s inability to adequately regulate the tech sector has left the whole world imperiled by the speed with which a lie can travel.


The perils of disinformation are all around us. Disinformation was present long before the Trump administration, but it was exacerbated by a president who lied more than 30,000 times in office, and who was surrounded by a political machine and media ecosystem that amplified those lies. This culminated in the “Big Lie” of a stolen election, and a violent insurrection in our nation’s capital on January 6th that attacked the very pillars of American democracy.

If that wasn’t bad enough, misinformation is hindering vaccination efforts by spreading false rumors about vaccines — rumors that could lead to more deaths and to the development of new, vaccine-resistant variants. These are just two of the most glaring examples of the many societal ills that arise from vast systems of disinformation brainwashing people with lies.

Both the fault and solution largely lie with the big tech companies that have come to dominate our lives. Never in the history of the world have there been companies as powerful as the likes of Facebook, Google and Twitter. Facebook, for instance, has more information on U.S. citizens than the U.S. government. With billions of people using the products of big tech every day, these platforms have an unprecedented ability to influence our world.

These are publicly traded behemoths that might spout clever marketing slogans like “Don’t Be Evil,” but in reality they are driven solely by the bottom line. We are mistaken in thinking that companies like Facebook are social media platforms that help people connect. Their true function is advertising. Facebook might hide behind the fig leaf of saying it doesn’t sell your personal information, but in reality it sells access to you through the vast number of data points it has collected about you. These are “walled gardens” that offer companies the chance to micro-target people to sell their products. Your activity on these platforms – and the way they follow you around the internet on your phone or PC – enables them to understand you at a shockingly granular level. So granular, in fact, that they can predict your behavior, needs and wants with incredible precision.

Some might be unbothered by this. A new mother might benefit from Facebook seeing her pictures of her baby and targeting her with diaper advertisements. A young artist may feel that all of the content on their feed is supportive of their work, or even inspiring. But of course the algorithms behind such targeting are far more nefarious than these benign examples suggest.


Guillaume Chaslot is a former Google engineer who helped develop the YouTube algorithm that keeps viewers glued to the platform to the tune of one billion hours per day. First as a whistleblower, and now as a researcher of these platforms, he has documented how their algorithms can send users down a rabbit hole of disinformation in an effort to keep them hooked. In other words, a Second Amendment supporter viewing related videos on YouTube might find themselves slowly sucked into the world of dangerous, but popular and influential, lies that is QAnon.

We are not powerless against the dangerous speed with which these lies travel, but the road to reducing the amount of disinformation plaguing our society is a challenging one. We have to tackle these issues from all sides: increased digital literacy education, regulatory and legislative reform, and the design of innovative new technologies to solve these systemic problems.

We must start with greater regulation of the tech platforms that are radicalizing societies around the world. A first step should be requiring and enforcing identity checks. It is speculated that more than half of the users on Facebook and Twitter are bots, and these are often the worst offenders in spreading lies, whether they are Russian agents pushing a divisive agenda to undermine American democracy or anti-vaxxer propaganda undermining vaccination efforts. There is no reason these anonymous bots should be able to wreak havoc as they do, and getting rid of anonymized accounts would go a long way toward ridding the platforms of the accounts that amplify disinformation.


Revisiting Section 230 of the Communications Decency Act is also essential. This is, however, a highly nuanced undertaking: reform should seek a middle ground that allows these platforms to continue to thrive as forums for free speech but doesn’t allow tech companies to simply hide behind the law while disinformation flourishes. Hiring more moderators and investing in technological solutions to police malicious content is a good place to start.

Another important step is to require these big tech companies to pay for the journalism that is featured on their platforms, as has been done recently in Australia. The Australian law is far from perfect, but government regulation there has provided a much-needed lifeline to publications whose business model has been upended by tech platforms that rely heavily on the sharing of news stories but don’t compensate publications for it. The disinformation problem is exacerbated when credible publications that can debunk lies are increasingly marginalized and put out of business.

Civil society and foundations have an important role to play too. The challenge of misinformation is not going to be fully solved anytime soon, even with strong regulation, so we need to be teaching people, especially children, how to identify it and become more conscious consumers of digital content.


The tech companies are feeling the pressure on this issue and are taking steps to reduce the heat on them. Twitter put warning labels on election-related tweets by partisans spreading misinformation, and Facebook and others have invested in both human and technical solutions to weed out bad offenders. But despite seemingly endless financial resources to throw at the problem, that won’t be enough. A recent study showed that disinformation on Facebook is 68% higher in Italy than in Ireland, because Facebook is better equipped to handle this challenge in English than in other languages. Consider Facebook’s global reach and how many languages there are, and the scope of this global issue becomes even more apparent.

As we get closer to another pivotal election – the 2022 midterms – the time is now to reduce the speed by which a lie can travel and give the truth a chance to set the record straight. It is not an exaggeration to say that democracy, science, health and the ties that bind our society are on the line if we do not rise to the challenge of this dangerous trend.


Brittany Kaiser was the whistleblower in the Cambridge Analytica scandal and is the founder of the Own Your Own Data Foundation. Ann Ravel is the Director of the Digital Deception project at Maplight and the former Chair of the Federal Election Commission. Jeremy Hurewitz is Curation Director at NationSwell.