This article draws on material first published in the Straits Times on 15 January 2024.
Fake news through the eyes of young artists
A 2021 online exhibition featured the digital artwork “The Web — Friend or Foe?” by Eesa Hussain, along with pieces from 58 sixth-form students across England, all exploring the theme of fake news.
“Fake news” is an old problem. But generative AI and a bumper crop of elections have made 2024 a pivotal year for addressing it. Governments and citizens need to be ready. This essay discusses three categories of problematic information, namely misinformation, disinformation, and mal-information, and examines the efforts of governments to regulate them. Such efforts face challenges, particularly if they limit access to information through censorship.
Though Donald Trump would like credit for coining the term, “fake news” was a headline in the New York Times more than a century ago.[1] The history of propaganda is far older. An early example was Egypt’s Rameses the Great, who decorated temples with monuments to his tremendous victory in the 13th century BCE Battle of Kadesh.[2] (The outcome was, at best, a stalemate.)

The truth is hard
The New York Times launched the ad campaign “The Truth is Hard” shortly after the 2016 US presidential election to counter escalating anti-press rhetoric from Donald Trump and his supporters. It aimed to remind audiences of the importance of truth and of independent, quality journalism.
Source: Silvercast Media Facebook
But the tools for generating, sharing, and consuming dubious information are now very different in the age of AI. This is cause for concern because, over the course of 2024, countries with more than half the world’s population have held, or will hold, national elections. Not all will be free and fair — Vladimir Putin has been returned to power, for example. Yet it is the first time that most of humanity will participate in varying forms of democratic process within a single year.
Elections are foundational to the legitimacy of government. That in turn relies on trust in the process of voting and the determination of the results. And if voters do not accept the outcome, they may be unwilling to accept the authority of the winners.

The first ever AI-driven election
Argentina’s 2023 presidential race became a battleground of AI-generated imagery, with the two contending candidates relying heavily on AI-powered posters and videos to bolster their own campaigns or to attack each other. This heralded a new era of politics, marked by the strategic use of generative AI.
Source: Sergio Massa
THE LIAR’S DIVIDEND
None of this will surprise those who have followed the Donald Trump show over the past few years. Clumsy election denial is one thing, however. Now imagine authentic-looking CCTV footage of corrupt activity and realistic news accounts of vote-rigging, with AI-empowered agents producing endless variations on these themes, and the prospect of a larger crisis increases.
Though generative AI’s role in Argentina’s poll last year appears to have been limited, analysts warn that its sophistication is only going to increase. This has implications far beyond election outcomes.
It’s helpful to distinguish between three types of information, based on veracity and the intention of the creator or sharer. Misinformation is false information shared without intent to cause harm; disinformation is false information created or shared deliberately to deceive or cause harm; and mal-information is genuine information shared, often out of context, in order to cause harm.

These distinctions are important because the information itself is not the problem, but how it affects the real world. The fact that something isn’t true does not mean it is a worthy target of concern — as Singapore’s Media Literacy Council accepted in 2019 after it erroneously included “satire”[3] as an example of fake news.
Harms that we should worry about include the impact on vulnerable individuals, public institutions, and the bonds of trust that hold society together.

Fake anchor, real propaganda
Days after the Russian concert hall attack in March, Islamic State supporters utilised their AI-driven platform, “News Harvest”, to release a video featuring an AI-generated anchor who presented the incident as part of ongoing wars rather than as terrorism. This marked the beginning of the group’s use of AI-generated news bulletins to boost its propaganda efforts.
Source: The Washington Post
The use of generative AI to target individuals for the purposes of fraud, for example, is on the increase.
Though much attention focuses on deepfakes of famous people, we should also be concerned about the prospect of getting a phone call or even a video from a loved one that is faked. I may not be willing to help that Nigerian prince get access to his millions, but who among us would not assist a child or spouse in a panic?
During the pandemic, we saw the consequences of distorted public health messages. In addition to limiting uptake of COVID-19 vaccines in some locations, this continues to hurt acceptance of vaccinations more generally. Deaths due to measles have increased 43% worldwide, for example, more than doubling in the United States.[4]
In addition to specific deepfakes gaining traction, a perverse product of their proliferation is the “liar’s dividend”, where people dismiss genuine scandals or allegations of wrongdoing because the basis of truth or falsity has become so muddied and confused. That leads to the broadest type of harm, which is the decline of trust more generally.

The gemoy rebranding
A charming animated AI avatar resembling a “gemoy” (cuddly) grandpa was created in February for the presidential campaign of Indonesia’s then Defence Minister, 72-year-old Prabowo Subianto. Although AI providers such as Midjourney and OpenAI prohibit the use of their tools in political campaigns, enforcement of these policies remains questionable.
Photo: Willy Kurniawan / REUTERS
If synthetic content ends up flooding the Internet, the result may not be that people believe the lies, but that they cease to believe anything at all.
Any uncomfortable or inconvenient information will be dismissed as “fake”, while many credulously accept data that reinforce their own worldview as “true”. Generative AI chatbots like ChatGPT and Gemini (formerly Bard) could exacerbate this if we switch from searching for information on the internet, which yields multiple possible responses, to asking an intelligent agent that gives us a single answer. As it learns our preferences — and prejudices — it may serve to reinforce them.
If you think you’ve seen this movie before, you have. Social media and the algorithms that determine news feeds started us down this path. Yet, doomscrolling through content curated for us will not be as effective and affective as personalised answers to our questions.

In bot we trust
When Google’s chatbot Bard (now Gemini) debuted in 2023, the company imposed a safety policy prohibiting its use for “generating and distributing content intended to misinform, misrepresent, or mislead.” Nonetheless, a study by the Center for Countering Digital Hate found that Bard generated “persuasive misinformation” in 78 out of 100 test cases, including content denying climate change, mischaracterising the war in Ukraine, questioning vaccine efficacy, and calling Black Lives Matter activists actors.
Photo: Valerio Rosati / Alamy Stock Photo
Governments are alive to these concerns, motivated also by a measure of regret for failing to regulate social media over the past two decades. Dozens of countries — from the United States to China — are debating policies and legislation.
So what, if anything, should be done?
REGULATORY AND POLICY OPTIONS
Up to now, there has been excessive reliance on technical interventions — with limited success. I’m part of a team at the National University of Singapore looking at a broader approach to what we call “digital information resilience”.[5] This emphasises the role of consumer behaviour in understanding why people consume fake news and how it affects them, as well as the important role of technology.
My focus is on how regulation and policy can shape supply and demand.
Efforts to regulate any aspect of the digital information pipeline face challenges, particularly if they limit access to information through censorship.
One of the key virtues of the internet is the ability to access data from around the world. In the context of larger debates over the governance of AI, regulators across the globe are struggling to address perceived harms associated with generative AI while not unduly limiting innovation or driving it elsewhere.
The starting point is to be clear about what the objectives are and the tools and levers available. Regulation is understood here to include rules, standards, and less formal forms of supervised self-regulation. Policy interventions are broader still, including educational and social policies intended to build resilience among consumers.
Spreading malicious content is already the subject of regulation in many jurisdictions. Though there is wariness about unnecessary limits on freedom of speech, even in broadly libertarian jurisdictions like the United States, one is not allowed to falsely yell “fire!” in a crowded theatre.

Teen fact-checkers
MediaWise’s Teen Fact-Checking Network is an initiative by The Poynter Institute that empowers teenagers to debunk misinformation and educate their peers. Through this programme, teens create fact-checking content, promoting media literacy and resilience against fake news among young audiences in an engaging, relatable manner.
Source: The Poynter Institute

Sharing a lie makes you a liar
Malaysia’s Anti-Fake News Act, passed in April 2018, sparked public concerns over its potential to silence dissent before the May national election. The law was repealed the following year. During the COVID-19 pandemic, however, the Emergency (Essential Powers) Ordinance 2021 was enacted; seen by some as resurrecting aspects of the Anti-Fake News Act, it raised fears of an erosion of civil liberties.
Photo: Stringer / REUTERS
Generative AI has raised the question of whether the tools that generate content should themselves be regulated. We do not normally regulate private activity — a hateful lie written in my personal diary is not a crime; nor do we punish word processing software for the threats typed on it.
A notable exception is that many jurisdictions make it an offence to create or possess child pornography, including synthetic images in which no actual child was harmed and even if the images are not shared.
For the most part, however, the harm is in the impact the information has on other users and society. In addition to punishing those who intend harms such as fraud, hate speech, or defamation, much attention has focused on the responsibility of platforms that host and facilitate access.
In the United States, this would require a review of Section 230 of the 1996 Communications Decency Act, which largely shields internet platforms from liability for the content that users post on them.
Singapore adopted the Protection from Online Falsehoods and Manipulation Act (POFMA), which empowers ministers to make correction orders for false statements of fact if it is in the public interest to do so.
In the name of disinformation
Following the Russian military invasion of Ukraine in 2022, the European Union (EU) blocked the Russian state media outlets Russia Today and Sputnik within its borders, citing their use for propaganda purposes. In May 2024, four more Kremlin-linked networks were added to the sanctions list. However, the European Federation of Journalists strongly opposed these measures, arguing that “combating disinformation with censorship is a mistake.”
Photo: Pool New / REUTERS

WORK IN PROGRESS
Though Singapore was criticised when it adopted POFMA in 2019, governments around the world are considering similar legislation to deal with the problem of fake news.
Australia released a draft bill in 2023 on Combatting Misinformation and Disinformation that has been hotly debated — including its fair share of fake news. Around the same time, the EU’s Digital Services Act came into force, while Britain passed a new Online Safety Act.
All struggle with the problem of how to deal with “legal but harmful” content online.
Australia’s bill would have granted its media regulator more power to question platforms on their efforts to combat misinformation. A backlash against perceived threats to free speech led the government to postpone its introduction to Parliament until later this year, with promises to “improve the bill.”
The EU legislation avoids defining disinformation, but limits measures on socially harmful (as opposed to “illegal”) content to “very large online platforms” and “very large online search engines”: in essence, big tech companies such as Google and Meta.
Ofcom, the body tasked with enforcing the new UK law, states that it is “not responsible for removing online content”, but will help ensure that firms have effective systems in place to prevent harm.
Such gentle measures may be contrasted with China’s more robust approach, where the “great firewall” is often characterised by over-inclusion. Some years ago, Winnie the Pooh was briefly blocked because of memes comparing him to President Xi Jinping;[6] earlier efforts to limit discussion of the “Jasmine Revolution” unfolding across the Arab world in 2011 led to a real-world impact on online sales of jasmine tea.[7]
Correcting or blocking content is not the only means of addressing the problem, of course. Limiting the speed with which false information can be transmitted is another option, analogous to the circuit breakers that protect stock exchanges from high-frequency trading algorithms sending prices spiralling.
In India in 2018, WhatsApp began limiting the ability to forward messages after lynch mobs killed several people following rumours circulated on the platform. A study based on real data in India, Brazil, and Indonesia showed that such methods can delay the spread of information, but are not effective in blocking the propagation of disinformation campaigns in public groups.
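To make the circuit-breaker idea concrete, here is a minimal sketch in Python of a platform-side forwarding cap. The names and thresholds are invented for illustration, and WhatsApp's actual limits have varied over time: each forward increments a hop counter, and a message that has already travelled widely can only be passed on to one chat at a time.

```python
from dataclasses import dataclass

# Illustrative thresholds only; real platforms tune these empirically.
FORWARD_LIMIT = 5            # max chats per forward for ordinary messages
HIGHLY_FORWARDED_AFTER = 5   # hop count at which a message counts as "viral"
VIRAL_FORWARD_LIMIT = 1      # max chats per forward once viral

@dataclass
class Message:
    text: str
    hops: int = 0  # how many times this message has been forwarded

def max_recipients(msg: Message) -> int:
    """How many chats this message may be forwarded to in one action."""
    if msg.hops >= HIGHLY_FORWARDED_AFTER:
        return VIRAL_FORWARD_LIMIT  # throttle widely travelled content hardest
    return FORWARD_LIMIT

def forward(msg: Message, chats: list[str]) -> list[str]:
    """Deliver to at most the permitted number of chats and record the hop."""
    delivered = chats[:max_recipients(msg)]
    msg.hops += 1
    return delivered
```

The design choice is friction rather than prohibition: the message remains available, but its rate of reproduction is slowed, which is consistent with the finding that such limits delay rather than block propagation.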
Another platform-based approach is to be more transparent about the provenance of information. Several now promise to label content that is synthetic, though the ease of creation now makes this a challenging game of catch-up.
Tellingly, the US tech companies that agreed to voluntary watermarking last year limited those commitments to images and video, a restriction echoed in the Biden Administration’s October 2023 executive order. Synthetic text is nearly impossible to label consistently; as multimedia becomes easier to generate, images and video are likely to go the same way.
In fact, as synthetic media becomes more common, it may be easier to label content that is human rather than AI.
Trusted organisations may also watermark images so that users can identify where a photo comes from. The problem here is that tracking such data requires effort and many users demonstrate little interest in spending the time to verify whether information is true or not.
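As a rough illustration of how such provenance labelling can work, the sketch below (in Python, with hypothetical names throughout) binds an image to a signed record of its source. It uses a shared secret and an HMAC purely for simplicity; real provenance standards, such as C2PA’s Content Credentials, instead embed publisher certificates and public-key signatures in the file itself.

```python
import hashlib
import hmac
import json

# Hypothetical publisher key, for illustration only; real schemes use
# per-publisher certificates rather than a shared secret.
PUBLISHER_KEY = b"example-newsroom-signing-key"

def attach_provenance(image_bytes: bytes, source: str) -> dict:
    """Create a provenance record binding an image to its claimed source."""
    record = {"source": source, "sha256": hashlib.sha256(image_bytes).hexdigest()}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(image_bytes: bytes, record: dict) -> bool:
    """Check that the image matches the record and the record is untampered."""
    if hashlib.sha256(image_bytes).hexdigest() != record.get("sha256"):
        return False  # the image was altered after the record was made
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record.get("signature", ""))
```

The weak link, though, is rarely the cryptography: as noted above, a provenance record only helps if readers take the trouble to check it.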
Twitter (prior to its acquisition by Elon Musk) introduced a “read before you retweet” prompt, which was intended to stop knee-jerk sharing of news based solely on the headline. It appeared to have a positive impact but was not enough to stop the slide into toxicity post-Musk.
READER BEWARE
The ideal, of course, is for users to take responsibility for what they consume and share. Those of us who grew up watching curated nightly news or scanning a physical newspaper may be mystified by a generation that learns about current events from social media feeds and the next video on TikTok.
Yet concerns about the information diet of the public are as old as democracy itself. Some months before the US Constitution was drafted in 1787, Thomas Jefferson pondered whether it would be better to have a government without newspapers or newspapers without a government. “I should not hesitate a moment to prefer the latter,” he concluded, making clear that he meant that all citizens should receive those papers and be capable of reading them.
As voters around the world head to the polls this year, no government has solved the problem of fake news. But as you consider your own information diet, exercise common sense. Try to remain critical without becoming cynical.
And if you see something that seems too good to be true, it probably is. ∞
SIMON CHESTERMAN
Simon Chesterman is Senior Director of AI Governance at AI Singapore and Editor of the Asian Journal of International Law. He is also David Marshall Professor and Vice Provost (Educational Innovation) at the National University of Singapore (NUS), where he is the founding Dean of NUS College. Previously, he was Dean of NUS Law from 2012 to 2022, Co-President of the Law Schools Global League (LSGL) from 2021 to 2023, and Global Professor and Director of the New York University (NYU) School of Law Singapore Programme from 2006 to 2011. Professor Chesterman is the author or editor of more than twenty books, including We, the Robots? Regulating Artificial Intelligence and the Limits of the Law (CUP, 2021), One Nation Under Surveillance, You, the People, and Just War or Just Peace?

1. “‘Fake’ News for Spain.” The New York Times, 6 Oct 1901. https://www.nytimes.com/1901/10/06/archives/fake-news-for-spain.html.
2. “Battle of Kadesh.” Wikipedia, 16 May 2024. https://en.wikipedia.org/wiki/Battle_of_Kadesh.
3. Zhuo, Tee. “Media Literacy Council Apologises for Facebook Post on Satire Being Fake News.” The Straits Times, 8 Sept 2019. https://www.straitstimes.com/singapore/is-satire-fake-news-medialiteracy-council-post-sparks-backlash-from-netizens.
4. Coblentz, Emilee, and Sara Chernikoff. “‘Staggering’: Measles Deaths Have Nearly Doubled Globally, According to New CDC Data. Here’s Why.” USA Today, 18 Nov 2023. https://www.usatoday.com/story/news/health/2023/11/18/measles-deaths-nearly-double-globally-past-year/71622796007/.
5. “Digital Information Resilience: Restoring Trust and Nudging Behaviors in Digitalisation.” NUS Centre for Trusted Internet and Community. https://ctic.nus.edu.sg/ctic.moeT3/. Accessed 27 May 2024.
6. “‘Oh, Bother’: Chinese Censors Can’t Bear Winnie the Pooh.” The Straits Times, 17 Jul 2017. https://www.straitstimes.com/asia/east-asia/oh-bother-chinese-censors-cant-bear-winnie-the-pooh.
7. Dickson, Bruce J. “No ‘Jasmine’ for China.” Current History, vol 110, no 737, 2011, pp 211–216. JSTOR. http://www.jstor.org/stable/45319730.