The Future of GenAI in Classrooms
- Luo Xuhong
- Jan 31
- 9 min read
Reflecting on the NTU saga, #Scriptblr conducted interviews with a range of Singaporean students, from secondary school to local universities, to understand their use of Generative AI in education, their schools' Generative AI use policies, and the appeals process. Xuhong shares his insights and some policy recommendations that may more accurately reflect how students use Generative AI in education today.
*All names have been changed for anonymity.
Do you remember your first encounter with Generative AI (GenAI) models?
Mine was fairly uneventful, but I was particularly surprised by the story of a friend here at YouthTech. His first use of GenAI was actually prompted by his teacher, who had voluntarily taken and completed a SkillsFuture course on GenAI and prompt engineering. After enthusiastically praising the efficiency and breadth of AI-generated responses, the teacher encouraged the use of GenAI for GP practices. While it is not unheard of for educators to encourage the assistive use of AI, this is not necessarily the status quo. There are educators who prohibit the use of GenAI in their classrooms and assignments (a rule that may or may not be heeded by students who have seen its effective use in other classes and modules). This raises the question: are GenAI use policies appropriately matched to the ways that students use AI today?
This issue hits especially close to home following an incident that went viral in June last year, in which a trio of Nanyang Technological University (NTU) students were penalised for the use of GenAI. The episode garnered extensive media coverage in Singapore and even beyond. Key issues that emerged included whether the rules regarding AI use were articulated clearly enough, as well as concerns over the appeals process, with all three students feeling that they were not given a fair chance to explain themselves.
To find out what the youth think, we conducted a series of interviews with students from different fields and age groups, ranging from a secondary school Science-stream student to a first-year university student in a Communications course. Before we make our recommendations, we would first like to lay out a summary of the interview responses.
Approaches to GenAI
Students’ frequency and nature of use
“I use AI in almost all of the work that doesn’t require creative thinking. If there is ‘labour’ I use AI, for example if I need to do research like filling up Excel sheets, I use GenAI (Perplexity or ChatGPT) because that saves me a lot of time” - John, Year 3 Cybersecurity student
Generally, our respondents are well-versed in how to use GenAI. Some also shared their varied uses for different GenAI tools (e.g. Claude, Perplexity, ChatGPT). While ChatGPT is the most commonly cited, some opt for other models in specialised use cases such as graphics generation and taking meeting minutes. The duration and nature of the task is also taken into consideration: one respondent reported using GenAI for “labour”, while another acknowledged “widespread use among peers for assignments that take more than a day”.
Despite the seemingly extensive nature of GenAI use, the majority of our respondents agreed on what would constitute “healthy” usage of GenAI tools. Most avoided relying wholly upon GenAI outputs for schoolwork, noting the relatively poorer quality of GenAI output vis-à-vis human content (though some personally knew acquaintances who did so anyway). One student believed that using GenAI for “creative work … is a huge no-no” and elaborated that those who valued “independent thought” would typically avoid GenAI tools entirely.
When asked why they set boundaries on their use of GenAI, respondents cited concerns such as “start[ing] to lose what the concept of good writing is like” as GenAI usage becomes increasingly prevalent, especially for tasks that demand critical thinking. We were most surprised by one university respondent who shared that they did not use GenAI at all because of its perceived impact on the environment.
GenAI Use in the Classroom
The interview responses suggest that policy frameworks vary vastly across educational institutions in Singapore. Even though some schools may implement centralised guidelines, these may not be uniformly followed or enforced by all teaching staff. Lecturers are often afforded discretion to set their own preferences for GenAI use (or the lack thereof).
For a start, some secondary schools seem to offer clearer guidance on GenAI use for their students. One respondent told us about a novel “traffic light” framework that indicates the degree of acceptable GenAI use in a particular assignment. Under this framework, “green” indicates that GenAI use is permitted (and may form part of the assessment), “yellow” indicates that GenAI use may be limited to specific components of the project, while “red” signifies that GenAI use is prohibited throughout. He agreed that the traffic-light framework was intuitive and easy to follow for routine assignments.

Sample of a traffic-light framework indicating varying degrees of acceptable AI use. Image credit: UCD College of Arts and Humanities.
At the university level, however, the extent of acceptable GenAI use appears to be less clear, in the sense that no guidelines similar to the traffic-light system are implemented university-wide. This is possibly in line with the belief that older (and presumably more mature) students can take greater ownership in ensuring that their learning isn’t outsourced to a chatbot.
Respondents studying at local universities have indicated that professors may include disclaimers highlighting the extent of acceptable GenAI use in assignment briefings or on the pages of online submission portals. However, this practice is not mandated across all modules. This becomes a concern when professors suspect or flag up work that they deem to have used GenAI inappropriately or without permission.
“Most professors either give [the student] benefit of doubt or are too lazy to fill out the paperwork and accuse the student [of undeclared GenAI use]” - Jane, Year 3 University Student
One practice observed in both universities and polytechnics is the presence of compulsory forms in which students are expected to declare any use of GenAI. Yet, respondents suggested that under-declarations and false declarations are common among students, and that the inability to detect inaccurate declarations leads to inconsistent enforcement of GenAI policies.
To Ban or Not to Ban
When prompted to share their thoughts on the reasonableness of restrictions on GenAI use, our respondents largely agreed that such restrictions were justifiable but more flexibility could be accorded in specific contexts.
“The justifiable part is that [the lecturers] are ultimately trying to assess your ability to think/write, so it would be concerning if you used AI for the tasks which they are assessing you on. It’s justifiable to restrict AI for these tasks.” - Jane, Year 1 Communications Student
In particular, some also thought that such restrictions were justified because “that’s how we used to do stuff (in a pre-AI era)”, and cited the dangers of an overreliance on AI, echoing the above-mentioned concerns about an erosion of critical thinking and writing abilities.
On the other hand, one student felt strongly about the presence of excessive guardrails in the form of blanket bans on GenAI use that could limit students’ potential, especially for STEM-related courses. He went on to propose an alternative scenario in which students could be assessed on their proficiency in the use of GenAI instead.
“[Schools] should be willing to take on more risk to help students grow and learn more.” - John, Year 3 Cybersecurity Student
Another respondent described her thoughts regarding restrictions on ChatGPT simply: “[Restrictions are] very backwards… GPT is here to stay, restricting or asking them to declare it is not going to work in the first place”. This response raises the consideration that workforces are also becoming increasingly AI-focused to streamline slower processes and promote efficiency at work — are we doing our students a disservice by disincentivising or prohibiting the use of AI?
The Common Denominator For A Lack of Appeals: A Trust Deficit
Despite our respondents’ differing opinions of GenAI restrictions, most were unfamiliar with the appeals process surrounding GenAI misuse.
When faced with a hypothetical investigation for the alleged misuse of GenAI, students mentioned that they would submit evidence in support of their case such as recordings of Google Docs editing histories and conversation logs with AI models.
Yet, multiple respondents also expressed concerns that escalating the dispute and appealing an unfavourable outcome would do more harm than good. Some raised the concern that academic staff were unlikely to be familiar with the signs of GenAI use beyond certain mainstream markers (e.g. the em-dash controversy), and that appeals might result in a worse outcome. Unsurprisingly, those same students indicated that they would rather bite the bullet and accept whatever outcome was given as final.
“There are so many people using [GenAI tools] and the university is so big, so I don’t know how helpful it is to have independent counsel review it anyway. And there would be distrust between the student and faculty anyway.” - Jane, Year 3 University Student
In response to the suggestion of involving a third-party mediator who independently reviews evidence from both students and faculty, most respondents believed that such a measure would be neither practical nor helpful.
What’s next?
In response to the pain points that students highlighted in their interactions with GenAI policies, this section discusses several reforms that could theoretically improve the student experience.
“[There should be a] template for your case, like a police report that you have to fill out. Having it down on paper and having a process to handle these cases would be a good thing because then at least each case is documented and gives the school some data to assess if their GenAI rules are too strict... Also helps the school to track what the use cases are for GenAI and evaluate what the punishments should be for certain use cases.” - Jane, Year 3 University Student
Admittedly, the evolving nature of GenAI technology means that present guidelines are far less settled than those governing other categories of academic misconduct, such as plagiarism. Yet, the aforementioned NTU saga revealed the urgent need for a formalised set of guidelines (which some universities have since implemented).
Ideally, a single SOP that clearly spells out the avenues of appeal and recourse for accusations of academic misconduct involving AI use would be helpful. If such a policy were uniformly applied and enforced across all colleges and departments at a university, the consistency would reduce unnecessary friction between the student populace and teaching staff. Navigating accusations of academic misconduct is inherently a serious matter, and the school can – and should – make the process less stressful for all through an easily navigable standard set of procedures. This is one example of what the SOP could look like:
Guidance for investigating suspected academic misconduct: “The first action for the Chair or delegate to take is to gather the relevant evidence; this will look different in every case but could include the following documentation …” (Example 3: Suspected Use of AI)
This is adapted from Section 4, Page 7 of the University of Cambridge's 'Process for Examiners investigating and responding to suspicions of academic misconduct', created by the Office of Student Conduct, Complaints and Appeals (OSCCA) and published in September 2023.
Next, students also raised concerns over the unsustainability of the “cat-and-mouse” style of policing unauthorised GenAI use that seems to be the norm in classrooms today. Rather than relying solely on (reportedly unreliable) commercial AI detectors such as Turnitin, respondents largely agreed that a more holistic assessment is necessary in determining unauthorised GenAI use. Examples include verbal presentations and comparisons with the author’s past written work for inconsistencies in word choice and language proficiency.
Some Closing Thoughts
One good initiative that has sprung forth is the GenAI workshops/introductory courses offered by schools, such as the one offered to NTU freshmen or now mandated for Ngee Ann Polytechnic students. Some respondents also suggested that undergraduates beyond their first year should reasonably be expected to understand the acceptable limits of GenAI use, and thus be given the academic freedom to explore GenAI freely. One respondent shares:
“I think schools should allow unfettered use [of GenAI] and then penalise students who cannot even meet the standard of work produced by GenAI [since human output will always be better]”. - Jane, Year 3 University Student
Amidst rising concerns over the risk of undetected GenAI use compromising assessments, SMU Associate Professor of Marketing Education Seshan Ramaswami suggested in a Straits Times interview that he was open to exploring alternative “hyper-local assignments based on Singapore-specific contexts, oral examinations … and in-class discussions”.
When all else fails, some educators have decided on a return to more “old-school” pen-and-paper examination formats instead. After all, research has revealed that handwriting is a superior choice for memory retention as compared to typing – not to mention it’s infinitely harder to cheat with GenAI in the first place.
Irrespective of your personal take on GenAI use in the classroom, it is clear that all of us need to work together in navigating this paradigm shift in our education landscape. Educators must reimagine teaching methods and embrace, rather than shun, the integration of GenAI tools in their curriculum. On the flip side, students must similarly demonstrate genuine engagement with course material and not take the easy way out just because they can.
The views and opinions expressed in this article are those of the interviewees and do not necessarily reflect the views or positions of any entities they represent, unless otherwise stated.