By Aaron Tay, Head, Data Services
At the start of 2025, I wrote about new Google tools to consider for research. It is perhaps fitting that I end the year with yet another update: Google launched Gemini 3 Pro last month and has continued to introduce a massive number of research- and AI-related features across Google NotebookLM, Google Scholar and Gemini.
I will cover three interesting new research-related tools from Google, launched just last month, that might help optimise your research workflow:
- Scholar Labs by Google Scholar
- Google NotebookLM adds generation of slides and infographics
- Dynamic views in Gemini 3 Pro
Google Scholar Labs: AI-Powered Search Arrives at the World's Largest Academic Database

After years of watching AI-powered academic search engines proliferate, Google Scholar has finally entered the arena with Scholar Labs, a new AI-enhanced search interface that launched on November 18, 2025.
Researchers are of course very excited, but what does it do? Let me break it down for you.
First, unlike tools that generate synthesized research reports (like Gemini Deep Research or Consensus), Scholar Labs takes a "Deep Search" approach as opposed to "Deep Research".
What is "Deep Search"? Compared to normal search which provides results in a few seconds, Deep Search tends to take longer, typically minutes if not hours. Typically, this additional time is used by the system to conduct multiple iterative searches, and use AI to evaluate results in depth to find the most relevant papers to your query.
An example of such a tool is AI2 Paper Finder, which runs a series of iterative searches to create a candidate set of papers and then uses LLMs to evaluate the relevance of each paper and generate a relevance explanation for each one. Other Deep Search tools include Undermind.ai, Consensus and Scopus AI.
How It Works
We are given very few details about how Scholar Labs works, so this is my best speculation of what is going on, based on clues in the interface and some testing.
Scholar Labs appears to work similarly to AI2 Paper Finder: it analyzes your research question, runs multiple search queries (typically 10 or so variations), and presumably combines and ranks the results into a single list.

It then goes down the list, displaying an "Evaluated X top results" counter as it judges the relevance of each result (probably using Gemini to screen) one by one, flagging papers the system "thinks" are relevant.
While the papers retrieved are usually relevant (high precision), there will be some false positives; still, this is much more accurate than traditional Google Scholar.
The main interface shows papers as they're found, with AI-generated explanations of why the paper is relevant to your query.

It will continue running until the first 10 relevant papers are found, after which you can click the "more results" button and it will evaluate further, stopping again when 20, 30 or 40 relevant papers are found.

The system almost always hard-stops, with no option to go further, after finding 50 relevant papers or after evaluating the top 300 results, whichever comes first.
There are rare queries where I find it stops before either condition is met, typically queries with almost no hits, so there might be other stopping rules built into the system.
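Putting those observations together, the behaviour is consistent with a loop like the sketch below. To be clear, this is speculation: the `deep_search` function, the dummy relevance judge and the pause-and-resume behaviour are stand-ins based on my testing, not Google's actual implementation.

```python
# Speculative sketch of Scholar Labs' Deep Search loop and stopping
# rules, using a dummy ranked candidate list and a dummy relevance
# judge in place of Google's real pipeline (likely Gemini screening).

def deep_search(candidates, judge, target_relevant=10,
                max_relevant=50, max_evaluated=300):
    """Walk down a ranked list, screening each paper one by one."""
    relevant, evaluated = [], 0
    for paper in candidates:
        if len(relevant) >= target_relevant:
            break  # pause until the user clicks "more results"
        if len(relevant) >= max_relevant or evaluated >= max_evaluated:
            break  # observed hard stop, whichever comes first
        evaluated += 1
        if judge(paper):  # stand-in for the LLM relevance screen
            relevant.append(paper)
    return relevant, evaluated

# Toy simulation: 1,000 ranked results, every 4th one relevant.
rel, seen = deep_search(list(range(1000)), judge=lambda p: p % 4 == 0)
print(len(rel), seen)  # → 10 37 (10 relevant found after evaluating 37)
```

Clicking "more results" would correspond to calling the loop again with a higher `target_relevant`, which matches the observed 20/30/40 pauses.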

Why It Matters
In a world with so many AI search tools, why are people so excited about Scholar Labs?
The game-changer is scale. Scholar Labs leverages Google Scholar's massive index (estimated at 389M records by some), far larger than Scopus or Web of Science (estimated at 80-90M each) and even OpenAlex or Semantic Scholar. Moreover, the Google Scholar index is updated more frequently than many competitors, often daily for key sources.
In particular, the Google Scholar index is not only broader but also deeper, with more full text available than any other single academic source, as publishers often allow its crawlers to index full text that users cannot access. Other AI search tools like Undermind.ai or Consensus typically have only abstracts and open-access full text.
This larger and deeper academic index proves particularly valuable for:
- Non-STEM disciplines where specialized databases have weaker coverage
- Finding obscure or forgotten papers on niche topics
Limitations to Note
Scholar Labs has notable constraints:
- Strict query filtering (it doesn't accept author-specific searches or queries interrogating individual papers)
- Limited to 300 evaluated results maximum
- Can't be used alone for systematic reviews
Why It's Not Suitable for Systematic Reviews
Despite its impressive precision and unmatched index, Scholar Labs faces fundamental limitations for evidence synthesis work that requires comprehensive recall:
The core problem: coverage vs. findability. Research shows Google Scholar typically has nearly 100% coverage of relevant papers you would find in a systematic review (you can verify this yourself by putting in a paper title from a systematic review, and checking if you can find it). In other words, nearly all relevant papers are indexed by Google Scholar.
But coverage is not the same as practical findability where you don't know the relevant paper titles in advance!
One study found that while Google Scholar had 97.2% coverage of papers retrieved in a set of systematic reviews, recall dropped to only 46.4% when you actually had to run a subject search for your topic, which is what you would do if you wanted to conduct a "proper" systematic review!
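To make the distinction concrete, here is the arithmetic behind those two figures. The gold-set size of 1,000 papers is a hypothetical round number for illustration; only the percentages come from the study.

```python
# Coverage vs. recall, using the percentages from the study cited above.
# The gold-set size (1,000) is hypothetical; the percentages are real.

relevant_papers = 1000              # papers included in the reviews' gold set
indexed_in_scholar = 972            # findable by searching a known title
found_by_subject_search = 464       # found by actually searching the topic

coverage = indexed_in_scholar / relevant_papers      # indexed / relevant
recall = found_by_subject_search / relevant_papers   # retrieved / relevant

print(f"coverage: {coverage:.1%}, recall: {recall:.1%}")
# → coverage: 97.2%, recall: 46.4%
```

The gap between the two numbers is exactly the "findability" problem: over half the relevant papers are in the index but never surface in a realistic subject search.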
Why can't you find papers that are demonstrably in the database?
Google Scholar's Structural Limitations:
Compared to specialised databases like PubMed or Scopus, Google Scholar lacks critical search features that help generate high-recall searches with reasonable precision:
- No complex Boolean logic: Cannot build nested searches with proximity operators
- No controlled vocabulary: No MeSH terms or Emtree equivalents for precise topic searching
- Limited field searching: Crucially, you cannot restrict searches to Title/Abstract only. Google Scholar searches the full text, which increases recall but creates a lot of noise
- Strict query length limits: Cannot build comprehensive search strings beyond 256 characters
- 1,000 result hard cap: Results beyond this threshold are completely inaccessible
Why This Creates a Findability Crisis:
The full-text searching problem is particularly acute. When you search "machine learning" in PubMed and limit to Title/Abstract, you get papers about machine learning. In Google Scholar, you get every paper that mentions "machine learning" anywhere in the full text—including papers where it's tangentially mentioned in the discussion or references.
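A toy example shows why full-text matching is noisier. The two mini-papers and matching functions below are invented for illustration; real PubMed field tags and Scholar's matching are of course more sophisticated.

```python
# Toy corpus: two invented mini-papers. One is about machine learning;
# the other only mentions it in passing in the full text.
papers = [
    {"title": "Machine learning for medical imaging",
     "abstract": "We apply machine learning to CT scans.",
     "full_text": "...machine learning models for CT..."},
    {"title": "A survey of hospital staffing models",
     "abstract": "We review nurse staffing models.",
     "full_text": "...future work could use machine learning..."},
]

def tiab_match(p, term):      # PubMed-style Title/Abstract restriction
    return term in p["title"].lower() or term in p["abstract"].lower()

def fulltext_match(p, term):  # Google Scholar-style full-text matching
    return tiab_match(p, term) or term in p["full_text"].lower()

term = "machine learning"
print(sum(tiab_match(p, term) for p in papers))      # → 1 (on-topic paper only)
print(sum(fulltext_match(p, term) for p in papers))  # → 2 (tangential hit too)
```

Scale the second, tangential kind of hit up to an index of hundreds of millions of documents and the precision problem becomes obvious.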
Even if you could craft a perfect search strategy, the 1,000-result cap means relevant papers ranking 1,001st or lower simply vanish. In systematic reviews covering broad topics, this cap becomes a ceiling on comprehensiveness.
How Scholar Labs Inherits These Problems:
Scholar Labs doesn't solve these fundamental issues:
- Limited evaluation depth: Screens only the top 300 results maximum, leaving potentially relevant papers in positions 301-1,000+ invisible
- Multiple queries may not help: Scholar Labs runs 10-11 queries, which mitigates the limits of a single query of restricted length, but if these queries are LLM-generated, research suggests LLM-generated queries cannot produce the kind of comprehensive searches needed for systematic reviews anyway.
- Systematic bias risk: The AI might consistently miss certain types of relevant papers (false negatives). More study needed on how high the risk is here.
Bottom Line for SMU Researchers
Scholar Labs excels at exploratory research and finding papers with high precision, essentially automating what many researchers already do manually when scanning the first few pages of Google Scholar results.
Access currently requires signing in to Google Scholar (education accounts may have better access). Despite its limitations, Scholar Labs represents a significant step forward in making AI-assisted literature discovery accessible at scale.
Google NotebookLM adds generation of slides and infographics
The Gemini 3 launch last month drew a lot of deserved attention; however, almost as significant was the launch of Nano Banana Pro.
The earlier version of Nano Banana was already a state-of-the-art image generator; in my view, the improved Nano Banana Pro is the clear #1 image generator at the time of writing.
- Among other improvements, it leverages Gemini 3's reasoning and integrates with Google Search to create accurate, context-rich visuals like educational explainers, diagrams and infographics.
- Its generation of text in images has improved even further, making it capable of reliably reproducing long text sequences in an image
- It claims to be able to blend up to 14 images as references and maintain consistent identity and resemblance for up to 5 different people
- All in all, I find the model a lot more steerable, allowing you to "photoshop with prompting" more than ever
Here's an amazing example of Nano Banana Pro's text-generation capability, using the following prompt:
Put this whole text, verbatim, into a photo of a glossy magazine article on a desk, with photos, beautiful typography design, pull quotes and brave formatting. Use attached images when necessary. Text
The text I used came from https://library.smu.edu.sg/topics-insights/force2026-coming-smu
This is the result

More excitingly, Nano Banana Pro's capabilities have been brought into Google NotebookLM.
We have covered the amazing capabilities of Google NotebookLM twice this year: first an introduction in January, followed by another update in the middle of the year about the new Video Overview feature.
Many of these existing features have been further enhanced with more options, and some even use the updated Nano Banana Pro as the image generator, which leads to even richer and more detailed images.
In addition, NotebookLM now allows you to generate far more types of content beyond just Audio Overviews and Video Overviews.

The ones you might be interested in are "Reports", "Infographic" and "Slide Deck", but I will focus on the last two, as this updates our older Research Radar piece, Audio, Video, Posters and Slides - Oh My!
Gemini Dynamic Views
Google has been touting the power of Generative Interfaces in Gemini 3. Instead of generating an answer in text, it generates HTML/CSS/JS in the background in response to a query, presenting the answer as a highly interactive HTML page.
In the Gemini app, this is available as an experimental feature called Dynamic Views.

I've tried attaching PDFs of papers or adding text from blog posts, and it generates really interesting dynamic webpages in response.
Here is an example created for the famous paper, Attention Is All You Need.
The Bottom Line
Google's end-of-year updates have provided a powerful toolkit to bookend 2025. Scholar Labs brings much-needed AI reasoning to the massive Google Scholar index, though researchers must remain wary of its limitations for systematic work.
Meanwhile, the integration of Nano Banana Pro into NotebookLM and the arrival of Dynamic Views have transformed Gemini from a text generator into a multimedia design partner.
These tools are not perfect. However, they significantly lower the barrier to entry for discovering complex papers and creating beautiful research communication. As you plan your research projects for 2026, they are worth considering for your workflow.

When generating infographics using Google NotebookLM, you can select the orientation (Landscape, Portrait, Square), the level of detail (Concise, Standard, Detailed) and any specific instructions.
This works for PDFs of research papers, but here I try it on a personal blog post of mine on Scholar Labs.

Looks pretty good. But the usual word of warning applies: people have found errors in the infographics generated, for example in formulas.
Almost as interesting, Google NotebookLM can now generate slide decks, with the option to produce either a detailed deck (for reading on its own) or presenter slides (cleaner for live presentation).

Using the same base content, this is the slide deck generated

I have been very impressed by the slides and infographics generated; they are often very detailed and beautiful, showing off the power of Nano Banana Pro.
But there are two drawbacks to slide decks generated this way. Firstly, they are only exportable as PDF, making them hard to edit. You can use tools to convert to PPT, but the results may not always be as good.
More importantly, because the deck is generated via Google NotebookLM, it does not use the images in your source.
Imagine uploading your paper, full of tables and figures, into Google NotebookLM and asking it to generate a slide deck: instead of reproducing your table or figure, it creates its own version.
Still, all in all, these are very interesting features worth trying.
Finally, Google NotebookLM also introduced "Deep Research", but I think this is a bit of a misnomer and it should be called "Deep Search": you enter your query, wait about 5 minutes, and it generates a research report for you.

