Optimal use of the Google Search tool

Hi everyone,

I’m developing a program to validate news events, checking aspects like the accuracy of announced times and main content. To save on API costs, I’m batching multiple event validations into a single API call instead of processing them one by one.

My current understanding from quick testing is that a single API call internally triggers about 10 web searches. Since my input items are independent, I’m aiming for an optimal batch size of 10 events per API call, with each event ideally getting its own dedicated web search.
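Since the items are independent, the batching itself can be plain client-side chunking before each API call. A minimal sketch (the function name is my own, and the batch size of 10 reflects my observation above, not a documented quota):

```python
def chunk_events(events, batch_size=10):
    """Split independent events into batches of at most batch_size,
    one batch per API call. 10 matches the ~10 internal searches I
    observed per call; this is an assumption, not a documented limit."""
    return [events[i:i + batch_size] for i in range(0, len(events), batch_size)]

batches = chunk_events(list(range(25)))
print([len(b) for b in batches])  # [10, 10, 5]
```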

Here’s my main question and concern:

What’s the best way to guide the LLM to intelligently and efficiently utilize these 10 internal Google Web Search quotas per batch?

Currently, I’m considering explicitly defining how each of the 10 search quotas should be allocated within my prompt, essentially mapping one web search per news item. However, I’m concerned this might not be the most optimal approach. I worry it could potentially collide or interfere with the internal prompt logic that Google developers have already built into the Google Web Search tool.

My goal is to maximize the output quality for each news event validation. While using a custom web search tool is an option, I’d prefer to leverage Google’s robust internal search tool if possible.

Any advice on how to properly instruct the LLM to generate effective, item-specific web search queries within a batch, without counteracting the tool’s inherent capabilities, would be greatly appreciated!

Hi @sseo,

To maximize the efficiency and accuracy of your web search tool usage, here are my two cents:

Instead of explicitly mapping each event to a specific search, I suggest leveraging Gemini’s internal web search logic. By doing so, you let the system optimize the internal search flow, ensuring it dynamically allocates search quotas.

While batching 10 events into a single API call is efficient, present each event as a clearly delimited item within the prompt rather than trying to script the individual web searches. You can instruct the model to run a search per item, but avoid over-defining the allocation process; the model can naturally handle independent items.

To maximize quality and efficiency, ensure each event description is as clear and specific as possible.
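If you want to verify that the model actually issued an item-specific query per event, you can inspect the grounding metadata on the response. A hedged sketch, assuming the `google-genai` SDK's response shape (`candidates[0].grounding_metadata.web_search_queries`); the mock object below stands in for a real grounded `generate_content()` result:

```python
from types import SimpleNamespace

def searches_in_batch(response):
    """Return the search queries the model issued for a grounded call.
    Assumes the google-genai response shape:
    candidates[0].grounding_metadata.web_search_queries (list of strings)."""
    meta = response.candidates[0].grounding_metadata
    return list(meta.web_search_queries or [])

# Mock standing in for a real API response, for illustration only.
mock = SimpleNamespace(candidates=[SimpleNamespace(
    grounding_metadata=SimpleNamespace(
        web_search_queries=["Innovate 2025 start time July 14",
                            "Centauri probe Jupiter orbit July 13"]))])

print(searches_in_batch(mock))  # two item-specific queries
```

Comparing the number of returned queries against the number of events in the batch is a cheap signal of whether each item got its own search.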


Hi @Krish_Varnakavi1

Thank you so much for your tips.

“Avoid over-defining the allocation process, which could collide with the internal tool prompts” is the point I am most curious about now.

I really like how well the Gemini API docs are designed, and I found several notes about over-definition in them. One example is the advice to avoid over-defining the output structure when using structured output.

However, I believe the tool’s built-in instructions are written for general use cases, so there is room for improvement by adding instructions tailored to my purpose. To find the balance point between under- and over-definition, I would need to understand how the internal tool guide is written.

Is that internal prompt or code publicly available? If not, could you explain at a high level how it is written, and which parts a user can safely amend to optimize tool use?

Hi @sseo,

Here is one prompting technique to achieve your goal.

Structure your prompt so the model immediately recognizes that the batch contains distinct items to be processed separately. Using structured formats like Markdown lists or JSON arrays is highly effective.

Event 1:

  • Claim: “Major tech conference ‘Innovate 2025’ was announced to start at 9:00 AM PT on July 14, 2025.”
  • Source: “TechCrunch article”

Event 2:

  • Claim: “The Centauri space probe successfully entered Jupiter’s orbit on the evening of July 13, 2025.”
  • Source: “NASA press release”

This structure helps the model understand it needs to perform a distinct validation process for “Event 1,” “Event 2,” and so on. It will then trigger searches as needed for each item independently.
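The structure above can be generated programmatically from your event list. A minimal sketch (the helper name, field names, and instruction wording are illustrative, not part of any SDK):

```python
def build_batch_prompt(events):
    """Render independent events in the 'Event N:' structure above so the
    model treats each one as a separate validation task."""
    lines = ["Validate each event below independently. "
             "Use one targeted web search per event."]
    for i, ev in enumerate(events, start=1):
        lines.append(f"\nEvent {i}:")
        lines.append(f'  - Claim: "{ev["claim"]}"')
        lines.append(f'  - Source: "{ev["source"]}"')
    return "\n".join(lines)

events = [
    {"claim": "Major tech conference 'Innovate 2025' was announced to "
              "start at 9:00 AM PT on July 14, 2025.",
     "source": "TechCrunch article"},
    {"claim": "The Centauri space probe successfully entered Jupiter's "
              "orbit on the evening of July 13, 2025.",
     "source": "NASA press release"},
]
print(build_batch_prompt(events))
```

The returned string is then passed as the request contents with the Google Search tool enabled, with no per-search mapping spelled out.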

Hope this helps