By Aaron Tay, Head, Data Services
This is the second in a three-part series on FORCE2026, the international scholarly communication conference SMU is hosting from 3–5 June 2026. The first instalment covered sessions relevant to research assessment and impact. This piece focuses on programmes for researchers interested in the use of AI tools across the research cycle.
The critical question: Can you trust your AI literature search?
AI-powered research tools are proliferating rapidly. Consensus, Elicit, Undermind, Scholar Labs—these tools promise to transform how we find and synthesise research. Many researchers are already using them for literature discovery, evidence synthesis, and even drafting.
But how reliable are they? When you run the same query twice, do you get the same results? When a colleague runs your search, do they find what you found? And if you're conducting a systematic review or building evidence for a grant application, what are the methodological implications of using tools whose inner workings remain opaque?
FORCE2026 has a substantial track addressing these questions directly—not just from vendors, but from researchers and information professionals grappling with evaluation and implementation.
Why this matters for SMU researchers
If you're using AI tools for research—or supervising students who do—critical literacy matters. The sessions at FORCE2026 go beyond "here's a cool new tool" demonstrations to examine fundamental questions about reproducibility, transparency, and fitness for purpose in scholarly work.
Ian Mulvany, CTO of BMJ Group, delivers the Day 2 keynote. With experience spanning Nature, Mendeley, eLife, and SAGE Publishing, he brings deep perspective on how major publishers are responding to AI disruption—and what that means for researchers.
Who should attend?
Researchers and librarians across all disciplines who:
- Use or are considering AI tools for literature search and synthesis
- Supervise students using AI in their research workflows
- Want to understand the methodological implications before committing to these tools
Selected sessions on AI search reliability and reproducibility
- Does Agentic Deep Search Converge? Reproducibility Questions for LLM-Driven Literature Discovery — This session, led by our own Aaron Tay and Dong Danping, examines what happens when you ask AI search tools to find literature on a topic. Do different runs produce consistent results? What are the implications for research that depends on comprehensive literature coverage?
- Can AI Help in Systematic Reviews? A Comparative Study of Manual, AI-Assisted, and AI-Dominant Workflows — Aster Zhao from HKUST directly compares three approaches to systematic review: fully manual, AI-assisted, and AI-dominant. For anyone weighing whether to integrate AI tools into evidence synthesis, this offers empirical grounding rather than speculation.
- Developing an assessment framework to support critical evaluation of AI-powered academic search engines — Researchers from Carnegie Mellon present practical criteria for evaluating AI search tools. If you've wondered how to compare these tools systematically rather than relying on marketing claims, this session offers a framework.
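The convergence question above can be made concrete with a simple overlap measure: run the same query twice, then compare the two result sets. A minimal sketch, assuming results are identified by DOI (the lists below are invented placeholders, not output from any real tool):

```python
def jaccard_overlap(run_a, run_b):
    """Jaccard similarity between two sets of retrieved paper IDs:
    |A ∩ B| / |A ∪ B|. 1.0 means identical results; 0.0 means disjoint."""
    a, b = set(run_a), set(run_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical DOIs returned by two runs of the same query.
run1 = ["10.1/aaa", "10.1/bbb", "10.1/ccc", "10.1/ddd"]
run2 = ["10.1/bbb", "10.1/ccc", "10.1/ddd", "10.1/eee"]

print(f"Overlap: {jaccard_overlap(run1, run2):.2f}")  # → Overlap: 0.60
```

An overlap well below 1.0 across repeated runs would suggest the tool's coverage is non-deterministic, which matters most for systematic reviews, where comprehensiveness is a methodological requirement.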
Selected sessions on AI and scholarly infrastructure
- Managing AI Bot Access to Open Scholarly Infrastructures — Petr Knoth from CORE (one of the world's largest aggregators of open access research) addresses what happens when AI systems scrape scholarly content at scale. This has implications for how research infrastructure adapts—and for what researchers can expect from AI tools trained on this content.
- Using AI to determine how interesting citations are — Euan Adie from Overton presents work on using AI to analyse citation context and significance. This moves beyond simple citation counts toward understanding how research is actually being used.
- Using AI-assisted programming to Develop Research Services Tools — A practical session from HKUST on building tools for research impact assessment and open access publishing using AI-assisted coding approaches.
Selected sessions on AI literacy and practical application
- AI Literacy for Non-Coders — An accessible entry point for researchers without technical backgrounds who want to understand AI capabilities and limitations in research contexts.
- GenAI in Qualitative Data Analysis: Framework-Guided Prompt Engineering in Library Research Practice — For researchers in qualitative traditions, this session from Singapore Institute of Technology examines how to use generative AI systematically in qualitative analysis while maintaining methodological rigour.
- Redefining the Role: How Artificial Intelligence is Shaping Librarian Occupational Identity — While focused on librarians, this session from USC addresses broader questions about how AI changes professional practice in information-intensive work—relevant to anyone whose role involves literature search and synthesis.
Selected sessions on AI governance and policy
- Operationalizing data sovereignty and open science: AI governance for the future of scholarly publishing — A birds-of-a-feather session examining how AI governance frameworks should account for research data and scholarly communication.
- Trust in Turbulent Times: Safeguarding Knowledge in the Age of Generative AI — Addressing the broader challenge of maintaining trust in scholarly knowledge when AI can generate plausible but unreliable content at scale.
The bottom line
The AI tools reshaping research workflows are here to stay. The question is whether researchers adopt them critically or uncritically. FORCE2026 is where researchers, tool developers, publishers, and information professionals are asking the hard questions together—on our campus.
Interested in joining us?
- Dates: 3–5 June 2026 (pre-conference 2 June)
- Venue: SMU campus
- Early bird rate: USD 275 until 15 March
- Registration site: https://event.fourwaves.com/force2026/
Coming up in ResearchRadar
The final instalment will highlight sessions relevant to faculty with journal editorial responsibilities—covering generative AI policies, research integrity infrastructure, and the pressures reshaping scholarly publishing.