As we wrap up 2025, Artificial Intelligence continues to feature regularly in the headlines, with conversations ranging from the ethical challenges and impacts of its rapidly increasing capabilities to fears of an AI bubble burst. One prominent change is in the way we search for and consume information. Where in the past we were used to conducting keyword searches, generative AI tools now also offer summarised answers to natural language questions. While the latter certainly promises efficiency and convenience, such answers strip away the context that traditional search results provide. The danger is not just that the information might be wrong, but that it is presented with a confidence that discourages scrutiny. This paradox was explored in SMU Libraries’ Trust, Truth and Technology: Navigating Information in the AI Age series of activities, held as part of UNESCO’s Global Media and Information Literacy Week.
The tussle between convenience and critical thinking
As alluded to above, the greatest allure of this new way of information seeking is speed. However, as Dr. Leo Lo, Dean of Libraries at the University of Virginia, noted in his keynote address, speed comes with a hidden tax. He warned that “AI can generate a convincing lie in seconds,” while “verifying that lie takes time, scale, and critical thinking.”

This vulnerability pervades the whole of society. In the panel discussion following his keynote, moderated by SMU’s Professor Sun Sun Lim and featuring experts Professor Lim How Kang (SMU), William Tjhi (AI Singapore), Rachel Teo (Google) and Karamjit Kaur (The Straits Times), the panellists shared personal experiences, with one panellist admitting that it was easy to fall prey "even though [they] lead anti-scam work, [they] got scammed [themselves] during a late-night scroll". In an algorithmic world, everyone is at risk, and skepticism of the information we consume is both necessary and useful.

If the problem is blind acceptance, is active engagement the solution?
In a masterclass focussed on verifying and contextualising online information with LLMs, Mike Caulfield, co-author of Verified and creator of the SIFT method, argued that we need to treat AI as a "co-reasoning tool."

He suggested one possible means of verification using AI tools, consisting of the following three stages: “Get it in” to start the investigation, “track it down” to verify the citations, and “follow up” with evidence-focused queries. The goal here is not to avoid AI, but to interact with it to foster critical thinking.
The call to focus on critical thinking to maximise AI’s benefits was also seconded by Nicholas Quaass (Statista). In his workshop, Navigating the Risks of AI Adoption, Nicholas highlighted that while AI tools can boost productivity and democratise access to information, they are not infallible and require curated data and human expertise to ensure accuracy and coherence. He further emphasised the importance of critical verification across four dimensions: facts, sources, bias, and logic.

Algorithms, accountability and the balancing act
Beyond individual skills, we also need to be cognisant of the invisible influence of "Big Tech". Algorithms shape what appears on our social media feeds, and this has significant influence on public perception. Samantha Seah (SMU Libraries), in her workshop, Shifting Power and Community Revolts: A Balancing Act in this Uncertain Technological Age, highlighted the lack of transparency in how recommendation algorithms work, and called for greater public oversight of these powerful technologies.

The theme of collective responsibility was also explored during the panel discussion, where panellists examined the roles of newsrooms and tech companies in ensuring data accuracy, as well as the gap between legal frameworks and technical realities. The panellists agreed that it was critical for regulators to focus on rules for specific high-risk AI uses instead of creating a broad, one-size-fits-all framework. Tailoring regulations to each sector could help ensure transparency without stifling innovation.
T.H.A.T.S.: A student’s perspective
Perhaps the most inspiring takeaway came from our student community via the #GetReelSMU Challenge. Students were invited to create a short video or reel exploring any aspect of media and information literacy in digital spaces under the theme “Minds Over AI – MIL in Digital Spaces”.
First-prize winner Hew Si Min coined the acronym T.H.A.T.S., born from a realisation in class that "AI is never reliable". Her framework prioritises "Thinking critically" and "Treasuring Authenticity”, while reminding us that “Halting dependency” and “Adding your own flavour” are important considerations too. Her conclusion? "Smart one wins."
Moving forward
SMU Libraries’ events in October 2025 serve as reminders that while AI tools can boost productivity, we must remain aware of their risks and limitations. As we continue to integrate these technologies into our learning and work, adoption must be considered, balancing each use case against the possible downsides. Indeed, whether we are students, educators, or working professionals, our goal remains the same: AI should help us, not replace us. That means humans still need to be in the loop to verify what the AI produces.