When The Truth No Longer Matters: How Social Media’s Engagement Obsession Is Killing Democracy

From left: Maria Ressa, CEO of Rappler; Eryk Salvaggio, author of Cybernetic Forests; Maria Amelie, CEO of Factiverse
Photos: Getty, Salvaggio, Amelie

Meta’s recent decision to end its third-party fact-checking program has reignited fears about misinformation’s growing influence. The move reflects a broader trend among social media giants: stepping away from content moderation in favor of engagement-driven algorithms. As these companies strip away safeguards, journalism faces an existential crisis—one where facts struggle to compete with viral falsehoods. The implications are sweeping worldwide, fueling deeper polarization and the rise of authoritarian rule.

Social media platforms have become the world’s most powerful distributors of news, shaping public opinion at an unprecedented scale. But rather than prioritizing accuracy, these platforms are engineered to maximize engagement. Sensationalism, controversy, and misinformation outperform factual reporting, eroding trust in journalism and deepening societal divisions.

A System Designed For Clicks, Not Truth

Meta’s fact-checking initiative was once seen as a modest attempt to combat misinformation. Under the program, independent fact-checkers flagged misleading content, reducing its reach and adding context to prevent the spread of falsehoods. But earlier this year, Meta scrapped the initiative, citing concerns over “political bias” in fact-checking organizations and “too much censorship.” CEO Mark Zuckerberg framed the decision as a step toward “more speech, fewer mistakes,” arguing that the public should determine truth through a system called “community notes.”

In fact, PolitiFact and FactCheck.org, both of which previously collaborated with Meta, have dismissed claims of liberal bias. They emphasize that they never had the power to remove content or censor posts, reiterating that Meta has always retained ultimate control over content moderation decisions.

Community notes, modeled after X’s (formerly Twitter’s) user-driven fact-checking system, allow users to append additional context to posts they find misleading. Proponents claim this method democratizes fact-checking, but critics argue it’s an ineffective and unreliable replacement. A study by Alexios Mantzarlis and Alex Mahadevan on the 2024 U.S. election concluded that “At the broadest level, it’s hard to call the program a success,” noting that only 29% of fact-checkable tweets in their sample carried a helpful note.

Maria Amelie is the CEO of Factiverse, which she cofounded with researcher Vinay Setty. Factiverse specializes in AI-powered fact verification. Amelie believes Meta’s justification for scrapping its program is flawed. “Fact-checkers were never removing content or censoring speech,” she explained. “They were simply labeling misinformation to give users more context.”

Amelie also points out that Meta and other platforms could drastically reduce the spread of misinformation with a simple fix: limiting the ability to share unverified viral content. “Platforms like Meta could have eliminated a lot of the misinformation if they just removed the share button,” she said. “But they’re not doing that because it’s not good for business.”

Eryk Salvaggio is a visiting professor at the Rochester Institute of Technology whose background in media studies and applied cybernetics shapes his analysis of social media platforms and AI systems. Salvaggio was not surprised by Meta’s decision, seeing it as part of a broader pattern in the tech industry of aligning with political shifts. “This is a calculated move to position themselves with the new administration,” he said.

Salvaggio pointed to a seemingly small but telling change: Meta quietly removed the option for users to display transgender pride colors on their profiles. “It might seem trivial on the surface, but it’s actually a significant signal,” he explained. “It’s not just about content moderation—it’s about what they are explicitly allowing now, including harassment of marginalized groups.”

He argued that this decision is part of a larger trend of platforms scaling back support for LGBTQ+ communities in response to political pressure. “By eliminating something as simple as pride-themed profile options, Meta is signaling a retreat from inclusion,” he said. “It’s a move that, whether intentional or not, aligns with a broader right-wing push to erase visibility and protections for trans people.”

Salvaggio also noted that this shift is reflected in Meta’s content moderation choices, where anti-trans harassment remains widespread while safeguards are quietly dismantled. “We’ve seen this playbook before,” he said. “It’s about making the platform more hospitable to reactionary movements while claiming neutrality.”

Misinformation As A Business Model And The ‘Halo Of Reality’

Falsehoods spread faster than facts. A 2018 MIT study found that misinformation travels six times faster than the truth on social media. While this issue was widely acknowledged even then, it has since worsened. The rise of AI-generated disinformation and the decline of content moderation have further tilted the playing field in favor of misleading content.

At the recent International Association for Safe and Ethical AI conference in Paris, Maria Ressa, CEO of Rappler and a 2021 Nobel Peace Prize recipient, spoke about the impact of social media-driven misinformation campaigns. A leading voice on the societal implications of AI for free expression and accountability, she has shown through her research that disinformation isn’t just an unfortunate byproduct of engagement-driven algorithms—it is the business model.

Ressa argues that personalization, designed to enhance user experience, has instead shattered any sense of shared reality. “The goal was to keep users engaged, but in doing so, they’ve built individual echo chambers,” she said. “Because if every single person in this room has their own personalized reality, this would be an insane asylum, right?” If everyone is living in their own version of reality, we could not function as a society, Ressa argues.

She criticizes tech companies for prioritizing rapid deployment over responsibility. “They justify it by calling it ‘emergent behavior’—as if the technology is just reflecting human nature,” Ressa said. “But in reality, they’ve created a self-reinforcing loop where the more extreme the content, the more it spreads, the more divided we become.”

Salvaggio, also an author at Tech Policy Press and Cybernetic Forests, reinforces Ressa’s view that journalism is struggling to compete with a system optimized for outrage. “What people care about isn’t whether something is true, but how it makes them feel,” he said. “The algorithm doesn’t reward truth—it rewards hostility. If an accurate news report gets less engagement than a sensationalized falsehood, the falsehood wins.”

This shift has profound implications for public discourse. Rather than fostering informed debate, social media platforms encourage tribalism, where users seek out content that reinforces their beliefs and dismiss anything that challenges them. “It’s not about getting to the truth anymore,” Salvaggio said. “It’s about picking a side.”

Users often prioritize emotional resonance over factual accuracy when engaging with content on social media platforms. Salvaggio suggests that “people oftentimes don’t even necessarily care about the ‘factuality’ of information; what they care about is the feeling that they are expressing by sharing information that may actually not be true and knowing it is not true.” This behavior reflects a climate where politics has become deeply personal, and interactions are often aimed at provoking or “owning” opposing viewpoints rather than fostering constructive debate.

Salvaggio emphasizes that content sharing on social media has evolved into a means of asserting one’s identity and ideological position. He states, “Everything that we post is about who we are. And now that includes the facts that we are selecting.” What is clear is that the pursuit of truth on these platforms has been overshadowed by the desire for ideological affirmation and identity expression.

In Salvaggio’s view, the current social media environment does not facilitate meaningful debates about reality or democratic issues. Instead, it encourages users to create what he calls a “halo of reality” around themselves and around the things they see, the things they share, and the things people are responding to. That halo confirms a certain position in the world, and, as Salvaggio claims, oftentimes an ideological one.

Ressa points to a troubling fact: “Look at where the world is as of last year–71% of the world is under authoritarian rule, and we ourselves are part of a democracy where individual voters are electing illiberal leaders democratically, and part of that is because the platform, social media, our public information ecosystem, is designed to trigger our emotions. It hacks our biology. Lies spread six times faster than truth on these channels.”

She argued that in almost every election where people get their news from social media, she saw information operations influencing the outcome. Ressa reiterated, “The angrier or more afraid we are, the longer we stay engaged. And the longer we stay engaged, the more money they make.”

The consequences of this system have been devastating. In countries like the Philippines, Brazil, and the United States, misinformation has fueled political division, undermined trust in democratic institutions, and even incited violence. Ressa cites the example of the Philippines, where her team at Rappler tracked a network of just 26 fake accounts that manipulated the views of up to three million people.

“This isn’t free speech,” she explained. “It’s mass manipulation. And it’s happening at scale.”

Ressa explains this mass manipulation goes viral in two ways: “There’s the viral spread, and the meta narrative spread, where you just bury the meta narrative, and then you pound it opportunistically every single time the news allows. An example of that is Ukraine. In 2014 Russia seeded the meta narrative that allowed it to annex Crimea. And in 2022 the same meta narrative was used by Putin to invade Ukraine itself.”

The Dangers Of Eroding Discourse

For Salvaggio, the implication beyond social media platforms is the erosion of speech as a tool for achieving political goals. As he puts it, “In societies where speech no longer serves that purpose, we are clearing the way towards… what I call ‘AI slop.’” He refers to the overwhelming volume of unverified, fake information that floods the platforms, making it impossible to discern truth from fiction. As a result, it becomes increasingly easy to dismiss opinions we disagree with.

As a result, most people walk away from social media feeling worse, not enriched or inspired. Instead, they ask themselves, “What is wrong with people?” When society is filled with people who say, “You can’t talk to those people,” division hardens. These “other people” are labeled as unreachable or insane, further entrenching polarization.

This erosion of meaningful discourse goes beyond political speech and paves the way for more extreme measures, as people begin to seek alternative avenues to achieve their goals—often much more extreme than the speech we currently see online.

Salvaggio worries about the far-reaching implications, as he states, “It undermines the core principle of democratic society: the idea that truth and falsehood can grapple, and truth will ultimately prevail. If the system doesn’t allow that grappling to happen—if we can’t even have meaningful debates—then we risk short-circuiting the very foundation of democratic participation. That’s what worries me most.”

For Amelie, a former journalist and now cofounder of a real-time fact-checking company, times have indeed changed. “Events like the Arab Spring initially painted social networks as revolutionary tools for change,” she reflects. She adds that, in hindsight, there was a lack of critical scrutiny of how these platforms developed their algorithms and of their broader societal impacts, which has led to significant consequences in how information is consumed and shared. Today, virality drives business growth for the platforms at the cost of distorting public discourse.

Users who grew up with social media now engage with information in ways that are deeply influenced by these platforms. As Amelie adds, this has created a concerning dynamic in which critical thinking is diminished and misinformation thrives. “In Europe, we question–almost too much–when it comes to the effects algorithms will have on our future, our democracy, economies, and even warfare,” she remarks.

Amelie adds that fact-checking initiatives have also evolved over the past decade, but they face challenges. While early efforts focused on labeling information as true or false, users often rejected such binary approaches, expressing distrust in technology dictating what is true. Instead, she explains, people seek to understand complex issues more deeply, forming their own opinions based on well-rounded information. This highlights a cognitive resistance to being told what to believe, emphasizing the need for platforms that educate rather than dictate. “You don’t want technology to tell you what truth is, but you want to understand complex topics faster,” Amelie explains.

Meta and X: The Path To An Unregulated Internet

Meta’s decision to abandon fact-checking mirrors trends at other platforms. Since Elon Musk took over Twitter, now rebranded as X, the company has dismantled most of its content moderation policies. Musk has framed these moves as a defense of free speech, but the result has been an explosion of misinformation, harassment, and hate speech.

X’s lax approach has driven away advertisers and alienated many users, but Musk appears willing to accept those losses in favor of positioning the platform as a haven for unfiltered speech. Meta seems to be taking a similar path—one that prioritizes political favor and regulatory avoidance over responsibility.

Ressa warns that these decisions are part of a broader global trend. “We are in a period of anti-regulation,” she said. “Regulation, like facts, is now seen as ‘woke.’ The companies that profited from misinformation in the past are doubling down, refusing to acknowledge the harm they’ve caused.”

Europe has taken a more aggressive stance on regulating tech companies. The EU’s Digital Services Act (DSA) and AI Act impose stricter oversight on content moderation and algorithmic transparency. Amelie believes these regulations could force platforms to take more responsibility in Europe, even if they continue deregulating elsewhere. “I think Meta might keep some fact-checking in Europe because they’ll be legally required to,” she said. “But they’re not doing it out of a sense of duty—they’re doing it because they have no choice.”

Can Journalism Survive The Social Media Machine?

Despite the dire state of news distribution, some journalists and independent media outlets are finding ways to adapt. Many have turned to platforms like Substack, YouTube, and Patreon to bypass social media algorithms and reach their audiences directly.

Amelie sees promise in these emerging models but acknowledges that they remain small compared to the reach of misinformation-driven content. “There’s great investigative journalism happening outside traditional media, but it’s not being amplified the way false narratives are,” she said.

One potential solution is to rethink the role of AI in news consumption. Amelie’s company, Factiverse, focuses on explainable AI—technology that helps users verify information while being transparent about its sources. “If AI is going to be part of journalism, it needs to be accountable,” she said. “People should be able to see where the information is coming from and why it’s being presented to them.”

Perhaps it is up to individuals to make their own informed choices, even to go analog, armed with an understanding of the manipulative algorithms that seek to influence them. Salvaggio agrees. He argues that true freedom of speech cannot exist on algorithmic platforms and that the common belief in unfettered social media conversations is fundamentally flawed: “We don’t have freedom of speech because the algorithm is going to decide whether your speech is amplified or whether it is not.”

He explains that algorithms don’t operate neutrally but instead surface content based on user similarities or stark oppositions. These filtering systems, including those used by Facebook, Meta, and Google, effectively delegate our information access to external forces rather than relying on personal critical reasoning. In addition, Salvaggio points out search engines can reinforce confirmation bias–searches aimed at proving a point will return supporting evidence, while those seeking to disprove something will find contradicting information. And even when platforms present multiple viewpoints, the boundaries of these perspectives are determined by algorithmic design choices.

He emphasizes that algorithms aren’t neutral entities, but deliberately designed products resulting from specific decisions. In his view, the challenge isn’t just about accessing good information; it’s about understanding the filters through which we receive it, “What is that filter? Who’s responsible for building it? Is it a journalist that we trust, or is it some massive prescriptive process, where 16 engineers in four different states and three different countries are building tiny pieces and don’t even see the whole thing themselves?”

A Fight for Truth

Ressa believes this moment represents a turning point. “If we don’t demand a fact-based public space, we will lose it,” she warned. “Journalism exists to hold power accountable. If platforms won’t prioritize truth, then we need to find ways to take back control.”

As Meta, X, and other platforms continue to prioritize engagement over accuracy, the responsibility falls on governments, journalists, and the public to push back. Salvaggio argues that real change will only happen when people stop passively accepting the information fed to them by algorithms, “We need to start asking who is deciding what we see, and why.”

The battle for journalism’s future is not just about misinformation. It’s about whether truth itself can survive in an information ecosystem designed to bury it.

This originally appeared on Forbes.
