
Fact or Fiction: GenAI's Truth Crisis?

Updated: Aug 20

In this article, Xuhong discusses how the spread of false information and deepfakes may be intensified by Generative AI. The article also highlights existing legislative solutions within the online harms space.


In late 2024, shocking news gripped local headlines: male students at Singapore Sports School had created and disseminated AI-generated deepfake nude images of their female peers. The proliferation of generative AI (GenAI) tools has made it increasingly simple to create realistic yet false content, fuelling a rise in image-based sexual abuse (IBSA) and identity fraud. A 2023 survey also revealed that 57% of female youth in Singapore are concerned about IBSA, compared with 39% of their male counterparts.


As Singapore transitions into a digital-centric society, the swift adoption of AI in our daily lives has been accompanied by an equally worrying rise in online harms. Online harms generally refer to forms of Internet activity that cause users distress or alarm. Examples include (but are not limited to) cyberbullying, sexual harassment, intimate image abuse, child abuse material, impersonation, deepfakes and hate speech. This piece will first examine a subset of online harms associated with AI-generated content before moving on to an analysis of Singapore's approach to tackling online harms on social media more generally.


Now that we have a clearer idea of what online harms are, let’s take a closer look at how GenAI has contributed to inauthenticity in the online world. GenAI refers to a subset of AI systems capable of generating new content based on patterns learned from analysing vast amounts of training data drawn from public databases and private sources. We will examine two main online harms associated with GenAI: the spread of false information created with the assistance of GenAI (regardless of intention) and the rise of deceptive synthetic media (deepfakes).
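
To illustrate just how low the barrier to generating fluent text has become, here is a minimal sketch in Python. It assumes the open-source Hugging Face transformers library and the small, freely downloadable gpt2 model; neither is named in this article, and both merely stand in for the far more capable tools actually in circulation.

# A minimal sketch of text generation with an off-the-shelf model.
# Assumptions: the Hugging Face `transformers` library (pip install transformers torch)
# and the small `gpt2` model - neither is named in this article.
from transformers import pipeline

# Download and load a small text-generation model (a few hundred MB on first run).
generator = pipeline("text-generation", model="gpt2")

# A single prompt line is enough to produce paragraphs of plausible-sounding prose.
prompt = "BREAKING: Officials confirmed today that"
outputs = generator(prompt, max_new_tokens=60, num_return_sequences=1)

print(outputs[0]["generated_text"])

Modern hosted models produce far more convincing output than this toy example, which is precisely what makes the harms discussed below so scalable.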


Case Study 1: False Information

Pictured: How misinformation, disinformation and malinformation differ

While misinformation and disinformation are largely similar in content, what sets them apart is the intent of the creator (see above!).

Misinformation

The more benign of the two, misinformation is typically spread without any intention to cause harm. Common examples include social media posts peppered with false or exaggerated claims and incorrectly tagged image captions.

Disinformation

In contrast, disinformation is usually more sinister in nature, ranging from sophisticated campaigns designed to sway public opinion to anti-vaccine narratives propagated by conspiracy theorists. Regardless of the intent, easy access to GenAI tools has revolutionised the information landscape.

Today, GenAI allows just about anyone to create a convincing news article that looks and sounds like the real deal in mere seconds. Unsurprisingly, falsehoods have proliferated across all corners of the Internet in an unprecedented setback to information integrity. Crucially, the scourge of fake news threatens a fundamental tenet of modern-day democracy by paving the way for malicious actors to manipulate elections. This worry is reflected in survey results: a 2024 study found that 83% of respondents in Singapore are concerned that AI can influence election outcomes. For instance, polling data from the 2024 US presidential election suggested that disinformation adversely affected voter perception of candidates. Stories that rose to infamy included false claims that immigrants were eating cats and dogs, and that a Haitian voter had voted for Kamala Harris in two Georgia counties simultaneously. Can we even trust anything that we read online?


In this regard, the Singapore government has adopted a targeted approach to managing online harms from AI-generated content that involves elections. Individuals who publish digitally manipulated content during elections may be liable for offences under the Elections (Integrity of Online Advertising) (Amendment) Act, while those involved in hostile information campaigns may be penalised under the Foreign Interference (Countermeasures) Act (FICA). A similar approach has been adopted in multiple overseas jurisdictions: Brazil, South Korea and several US states have enacted restrictions on distributing deepfake content under specific circumstances, such as during election seasons or without prior permission from the person depicted in the AI-generated media.


Case Study 2: Deepfakes

Next, another major cause for concern is the rise of deepfakes. As the name suggests, deepfakes are AI-generated media that typically depict real people in fictional scenarios. With a few clicks, AI can swap the face or voice in an image or video for that of pretty much anyone else. The term also covers realistic-looking still images of people who simply don’t exist. The versatility of deepfake technology has seen it put to a wide range of nefarious purposes, from generating regime propaganda to perpetrating scams and even creating non-consensual explicit content. More recently, firms hiring for remote roles have been inundated by fake job seekers who use deepfake technology to falsify resumes and pass video interviews.


Still unconvinced of how dangerous deepfakes are? Just last month, the finance director of a local company nearly lost US$500k to a business impersonation scam. After being told to join a video conference with the firm’s executives to discuss a potential restructuring, the victim was instructed to make a transfer to facilitate the business transaction. The problem? None of the conference attendees were real. It was all part of an elaborate ruse by scammers who used deepfake technology to mimic the real executives.

  

The trend of rising online harms means that Singapore is no longer leaving platform regulation to chance. Having found self-regulation by primarily foreign-based online platforms slow, inefficient and inadequate, Singapore has toughened its stance and taken on the role of policeman. The Online Safety (Miscellaneous Amendments) Act 2022 holds social media platforms liable for failing to take down objectionable content. Serious consequences apply: offenders face fines of up to S$1m, while repeated non-compliance may lead to the service being blocked for Singapore-based users. Apart from removing objectionable content in a timely manner, social media platforms must also proactively prevent users from encountering such content in the first place.


What Are We Doing to Solve the Problem?

Another survey, by the Ministry of Digital Development and Information, found that while two-thirds of Singapore users encountered harmful content online, nearly half did not report it to the hosting platforms, often because they did not expect reporting to make a difference. Unsurprisingly, most of those who did file reports encountered issues with the platform’s reporting process. In response, the Infocomm Media Development Authority (IMDA) drafted an accompanying Code of Practice for Online Safety (2023) to shore up protections for social media users against online harms. It spells out additional obligations that social media platforms must meet to protect children, as well as requiring a well-defined user reporting and resolution framework for harmful or inappropriate content. The Code builds on existing legislation (such as the Online Safety (Miscellaneous Amendments) Act) by mandating that social media platforms submit annual online safety reports on their approach to objectionable content. These reports must detail the measures the platform has implemented to protect users from objectionable content, along with metrics on the actions taken in response to valid end-user reports of such content.


Despite these legislative efforts, it is evident that more needs to be done to address the rising prevalence of online harms. Modelled after similar legislation in the European Union and Australia, Singapore plans to table legislation creating an Online Safety Commission (OSC) in late 2025. Envisioned as a dedicated agency providing expedient relief to victims of online harms, the OSC will be able to issue takedown orders to online platforms hosting such content upon request from victims.


An important finding from a public consultation on enhancing online safety (late 2024) is strong public support (95%) for allowing victims to take legal action against those responsible for the online harm.

Regulatory remedies aside, new legislation expected to be tabled later this year will also introduce several new statutory torts (civil causes of action) allowing victims to seek compensation in court for the harm suffered. The OSC is also expected to evaluate victims’ requests to disclose perpetrators’ user information for specified purposes, such as taking legal action or protecting themselves from perpetrators. Such a mechanism should significantly dent the shield of anonymity that perpetrators hide behind, discouraging such online harms in the first place. The SHE (SG Her Empowerment) report revealed that the majority of respondents preferred remedies involving swift and permanent removal of the online harm, while two in three respondents agreed that monetary compensation was a useful remedy. Hence, statutory torts that improve legal clarity will help victims should they eventually choose to pursue a civil claim for compensation or a court order stopping the harm.


A parallel can be seen in the regulatory approach to harassment: a new statutory tort of harassment was codified in the Protection from Harassment Act 2014. Victims of harassment can obtain remedies such as protection orders through a specialised Protection from Harassment Court, whose simplified proceedings are designed to let victims navigate the court process with relative ease.


Maintaining information integrity is everyone’s responsibility. Targeted legislation is only the beginning of the fight against online harms: beyond the safety net that the law provides, continued collaboration with industry stakeholders and youth engagement are key to sustaining this fight and making cyberspace a safer place for all.



