With deepfakes and misinformation spreading like wildfire amidst some genuinely strange-but-true news stories, how can people separate fact from fiction?
Earlier this week, a fake AI-generated video of an earthquake went viral, gaining millions of views in the wake of the devastating Myanmar earthquake that killed thousands of people.
Last week, a top vaccine regulator in the US Food and Drug Administration resigned, citing what he described as Robert F Kennedy Jr’s efforts to spread misinformation about the safety of immunisations.
And last year, amid the chaos that was the US presidential election, Elon Musk shared a fake campaign video for Kamala Harris on X in violation of the platform’s rules – a platform he owns.
Misinformation and disinformation continue to run rampant across social media, particularly amid the rise of AI-generated content and deepfakes.
Ongoing false information can have significant consequences for all walks of society, shaping public opinion, influencing elections, and undermining trust in democratic institutions, scientific research and journalism.
And with the news earlier this year that Meta is putting an end to fact-checking, I wanted to dive a little further into the potential future we’re facing, where the flow of information is no longer linear and, in many cases, no longer verified.
Prof Deidre Ahern is director of the technologies, law and society research group at Trinity College Dublin. She is a fellow of the Information Society Law Centre at the University of Milan and an appointed member of Ireland’s AI Advisory Council.
In 2022, she became a member of the Royal Irish Academy’s ethics, politics, law and philosophy committee.
She said Meta’s decision to scale back its fact-checking efforts is deeply concerning. “Without dedicated fact-checking teams, social media platforms risk becoming playgrounds for misinformation, as false narratives spread unchecked and unregulated,” she said.
“By deprioritising fact-checking as a form of risk mitigation, Meta is placing a community burden on users to use notes to verify the accuracy of information and add notes to posts. From a numbers game, this seems untenable given the vast volume of content generated daily. But more than that is the potential of Meta being passive or asleep at the wheel.”
She also flagged the concern that Meta may set a precedent for other tech companies and social platforms, leading to a “race to the bottom” in efforts to combat misinformation.
“Without fact-checkers, misinformation will spread more freely, potentially influencing elections, public health decisions, and social attitudes in ways that are difficult to reverse.”
‘Incredibly powerful and shockingly dangerous’
Three months into US President Donald Trump’s second term, we have only seen the spread of misinformation increase, thanks in no small part to US cabinet officials.
This upward trajectory, coupled with misinformation’s ability to spread across social media, can be “incredibly powerful and shockingly dangerous”, according to Ahern.
“The challenge lies not only in the sheer volume of misleading information but also in the sophisticated ways it is presented, making it increasingly difficult for individuals to differentiate between credible sources and falsehoods,” she said.
“Misinformation by its very nature preys on emotional responses. Content designed to evoke outrage or fear will often spread far wider and more rapidly than factual information.”
The consequences of misinformation are widespread and far-reaching. In the realm of public health, the Covid-19 pandemic provided a stark example of how false information about vaccines can spread online.
The 2014 documentary Merchants of Doubt, inspired by the book of the same name, delved into lobbying techniques that have been used for decades to create sophisticated misinformation campaigns in areas such as the climate crisis, the dangers of forever chemicals and the negative effects of tobacco products.
These techniques, along with the advances in AI-generated deepfakes flooding the internet, lead to distrust in credible institutions.
“When people are repeatedly exposed to false or misleading narratives, they may begin to doubt the credibility of traditional news sources, scientific consensus and even government institutions,” said Ahern.
“This erosion of trust weakens democratic society, as it creates fertile ground for conspiracy theories, political polarisation and in some cases, violence and unrest.”
She added that the extent to which far-right social media accounts and decentralised platforms are being used to spread false narratives in order to stoke fear and fuel riots is hugely concerning. “The ripple impact of this negative messaging on the culture of our society and democratic values is massive.”
Limitations of fact-checkers
While the removal of fact-checkers at Meta is an alarming precedent to set, content moderation even with fact-checkers has always been a struggle. The sheer volume of content uploaded every single second makes it impossible for human moderators to keep up. According to YouTube, more than 500 hours of content are uploaded every single minute.
Add to that the fact that the job itself is extremely distressing, with moderators tasked with watching some of the internet’s worst content, day in and day out, in order to decide whether it should be removed. The trauma these moderators have experienced has been widely documented.
Ahern said that while AI-driven moderation tools can help, they often struggle with nuance, context and satire, leading to inconsistencies in enforcement.
“Another challenge is the perception of bias in content moderation. Users who have their content removed or flagged may claim censorship, leading to further distrust in platforms. This dynamic has been exploited to wrongly frame fact-checking as an attack on free speech,” she said.
“Ultimately, while fact-checking and content moderation are important tools, they are not sufficient on their own. A more holistic approach, combining media literacy, transparent platform policies, and AI-driven misinformation detection, is necessary to effectively combat misinformation at scale.”
The battle rages on
So with an unfathomable amount of content being uploaded every day, the dramatic rise of AI-generated content and false information spreading through both orchestrated campaigns and rumour mill-style sharing, how can we as a society even begin to tackle this problem?
Unsurprisingly, there is no silver bullet to this because a complicated, multi-faceted problem needs complicated, multi-faceted solutions. And even then, the solutions will not be absolute, but they may help in the fight against false information.
Ahern said a combination of education, regulation and technological innovation is necessary to address this growing challenge.
“Governments and regulatory bodies must ensure that major platforms remain committed to combating misinformation,” she said.
“For instance, while Meta has now begun this step [ending fact-checking] in the US for Facebook, Instagram and WhatsApp, interestingly if this same pause on rigorous fact-checking were pursued in EU markets, it could be considered to violate platform obligations applicable to Meta under the EU’s Digital Services Act to take steps to counter the facilitation of fake news and hate speech.”
She also said that platforms should be held accountable for designing systems that prioritise sensationalised misinformation over accurate content. “Algorithmic transparency and responsible content moderation are crucial in ensuring that users are not constantly bombarded with misleading and potentially harmful information.”
‘We must also ensure people don’t fall into the trap of dismissing all sources as equally unreliable’
Building digital literacy is another vital component to combatting misinformation. Even with government policies, EU investigations and platforms forced to do the right thing, we will still be bombarded with plenty of false information that we have to analyse.
Ahern said a key part of this should be bringing media literacy education into schools at an early age. “Students should be taught how to evaluate different sources, practise critical thinking, and identify misleading or manipulative content.
“Beyond formal education, public awareness campaigns and accessible digital literacy resources are essential. Governments, civil society organisations, and tech companies should collaborate to provide user-friendly tools and guidelines that help people assess online information.”
As part of her own work, the Royal Irish Academy hosts free events in its Discourse Series, which are then uploaded online and are free to access. The series explores the multifaceted landscape of AI from different perspectives. The next event, taking place on 24 April 2025, focuses on responsible AI.
“Another critical aspect of digital literacy is fostering scepticism without cynicism. While it’s important to question information, we must also ensure that people do not fall into the trap of dismissing all sources as equally unreliable,” she said.
In my own opinion, traditional news and media outlets must take on this responsibility too. It has always been the job of journalists to separate fact from fiction and report the truthful news readers need to know.
Last year, I interviewed Alex Mahadevan, director of MediaWise at the Poynter Institute, a nonprofit media institute and newsroom, and we talked about the changes within the journalism industry and its responsibility to “avoid clickbait” even when for-profit models require as many eyeballs on pages or sites as possible.
One thing Mahadevan said at the time that I found encouraging was that studies show audiences like fact-checks, which can come in the form of explainer pieces, dedicated fact-checking articles or initiatives such as BBC Verify or RTÉ Clarity, the latter of which launched today (3 April).
As we continue down the rabbit hole, surrounded by scarily realistic deepfakes and orchestrated disinformation campaigns, I have hope that a new wave of fact-checking will rise.
Don’t miss out on the knowledge you need to succeed. Sign up for the Daily Brief, Silicon Republic’s digest of need-to-know sci-tech news.