BUG REPORT: search_web tool fails with backend error

SUMMARY

The search_web tool fails with the error “failed to make code assist
backend request”. The failures are intermittent but have persisted for
approximately 17 hours across multiple AI models.

ENVIRONMENT

  • Platform: macOS
  • Tool: Gemini Code Assist / Antigravity Agent
  • Models tested: Gemini 3 (M8, M18), Opus 4.5 (M12), GPT-OSS 120B, Sonnet 4.5
  • User network: Verified working (curl, ping, DNS all functional)

TIMELINE

  • Issue first observed: 2026-01-22 approximately 17:37 CET
  • Brief recovery: 2026-01-23 approximately 10:04 CET (worked once with Opus 4.5/M12)
  • Issue persisted: 2026-01-23 10:37 CET (still failing)
  • Total duration: approximately 17 hours (intermittent)

TOOL INVOCATION

Query used to call the tool (the XML wrapper was not rendered by the forum):

prasglandeno meaning

Tool schema (from system prompt):

{
  "name": "search_web",
  "description": "Performs a web search for a given query. Returns a summary of relevant information along with URL citations.",
  "parameters": {
    "properties": {
      "query": { "type": "string" },
      "domain": {
        "type": "string",
        "description": "Optional domain to recommend the search prioritize"
      }
    },
    "required": ["query"]
  }
}
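For reference, a call payload can be checked against the schema above before invocation. This is a minimal sketch; `validate_call` is a hypothetical helper written for illustration, not part of the tool's API:

```python
# Parameter names taken from the search_web schema quoted above.
REQUIRED = ("query",)
OPTIONAL = ("domain",)

def validate_call(args: dict) -> list[str]:
    """Return a list of problems; an empty list means the payload is schema-valid."""
    problems = []
    for key in REQUIRED:
        if key not in args:
            problems.append(f"missing required parameter: {key}")
    for key, value in args.items():
        if key not in REQUIRED + OPTIONAL:
            problems.append(f"unknown parameter: {key}")
        elif not isinstance(value, str):
            problems.append(f"parameter {key} must be a string")
    return problems
```

A well-formed call such as `{"query": "prasglandeno meaning"}` passes this check, which supports the conclusion below that the failure is on the backend rather than in the invocation.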

ERROR RESPONSE

Full JSON response received:

{
  "output": "Step Id: 136\nEncountered error in step execution: error executing cascade step: CORTEX_STEP_TYPE_SEARCH_WEB: failed to make code assist backend request"
}

Error breakdown:

  • Step Type: CORTEX_STEP_TYPE_SEARCH_WEB
  • Error Message: failed to make code assist backend request
  • HTTP Status: Not provided in response
  • Additional details: None provided
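The breakdown above can be extracted mechanically from the `output` string. A small sketch, assuming only the layout shown in the response above (a `Step Id: N` line followed by an error line ending in `<STEP_TYPE>: <message>`):

```python
import re

def parse_step_error(output: str) -> dict:
    """Split the agent's error output into step id, step type, and message."""
    step_id = re.search(r"Step Id: (\d+)", output)
    tail = re.search(r"(CORTEX_STEP_TYPE_\w+): (.+)$", output)
    return {
        "step_id": int(step_id.group(1)) if step_id else None,
        "step_type": tail.group(1) if tail else None,
        "message": tail.group(2) if tail else None,
    }
```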

DIAGNOSTIC TESTS PERFORMED

  1. Network connectivity test (PASSED):

    • curl to google (dot) com: HTTP 200 (0.9s)
    • curl to github (dot) com: HTTP 200 (4.2s)
    • ping 8.8.8.8: 0% packet loss (243ms avg)
    • DNS lookup google (dot) com: Resolved to 142.250.184.14
  2. Other tools test:

    • run_command: WORKING
    • read_url_content: PARTIALLY WORKING (some sites return 403)
    • browser_subagent: WORKING
    • search_web: FAILING
  3. Model comparison:

    Model            search_web Status
    ---------------  ------------------------
    Gemini 3 (M18)   FAILED
    Gemini 3 (M8)    FAILED
    Opus 4.5 (M12)   WORKED ONCE, THEN FAILED
    GPT-OSS 120B     FAILED
    Sonnet 4.5       FAILED
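The connectivity checks in step 1 can be scripted to rule out the local network quickly on each retest. A minimal sketch (the URLs mirror the tests above; `curl` is assumed to be on PATH, and `probe`/`parse_metrics` are illustrative helper names):

```python
import subprocess

def probe(url: str, timeout: int = 10) -> tuple[int, float]:
    """Probe a URL with curl; return (HTTP status, total time in seconds)."""
    out = subprocess.run(
        ["curl", "-s", "-o", "/dev/null",
         "-w", "%{http_code} %{time_total}",
         "--max-time", str(timeout), url],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_metrics(out)

def parse_metrics(raw: str) -> tuple[int, float]:
    """Parse curl's -w "%{http_code} %{time_total}" output, e.g. "200 0.912"."""
    code, elapsed = raw.split()
    return int(code), float(elapsed)
```

Usage: `probe("https://google.com")` returning `(200, ...)` reproduces the passing result above while search_web keeps failing, which isolates the fault to the backend.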

REPRODUCTION STEPS

  1. Open Gemini Code Assist / Antigravity Agent
  2. Select any AI model
  3. Ask the agent to search for anything (e.g., “search the web for weather in Tokyo”)
  4. Agent invokes search_web tool
  5. Tool returns error: “failed to make code assist backend request”

EXPECTED BEHAVIOR

The search_web tool should return a JSON response containing:

  • A summary of search results
  • URL citations/sources

ACTUAL BEHAVIOR

Tool returns error immediately without performing the search.

WORKAROUND

Use browser_subagent to perform searches manually. This is slower but reliable.
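Until the backend is fixed, the workaround can be encoded as an explicit fallback strategy. A hedged sketch; `search_web` and `browser_search` here are injected callables standing in for the two tools, not real APIs:

```python
def search_with_fallback(query, search_web, browser_search):
    """Try the fast search_web path first; fall back to the browser sub-agent.

    Both backends are passed in as callables so the strategy can be
    exercised without either real service being available.
    """
    try:
        return search_web(query)
    except RuntimeError:  # e.g. "failed to make code assist backend request"
        return browser_search(query)
```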

ADDITIONAL NOTES

  • The issue appears to be on the backend/server side
  • User network and local installation are confirmed working
  • The error is model-agnostic (affects all models)
  • The issue is intermittent (worked briefly once during testing)

Can confirm: since this morning I have been having the exact same issue as OP. I went through the same debugging process and came to the same conclusion that the search_web tool call is broken. I think it has something to do with Vertex AI (I believe search_web was linked to Google’s vertexaisearch, which no longer seems to exist in AG).

I am still facing the same problem. The search_web tool is completely broken and unusable. I am being forced to rely on the browser sub-agent, which is extremely slow and inefficient.

This issue has been persisting for the last 3–4 days without any resolution, and there has been no response from the Antigravity team, which is frankly unacceptable for a production system.

On top of this, the Claude Opus models are repeatedly failing, making the platform unreliable for serious work.

This is a critical service degradation, and it needs immediate attention and acknowledgment.

I truly think Google wants to force us to use the browser sub-agent.
A lot of improvements need to be made before that happens.

+1

What a nightmare! Every day I ask myself what the new issue will be.


I am repeatedly encountering the exact same web search error. It fails almost 100% of the time, but occasionally the web search succeeds.

I’ve tried with and without a VPN; it makes no difference.

Problem has continued for two days at least.

Account is on Google AI Pro plan.

Wondering whether it is a region-specific issue; the search seems to be working for me.

No, it is not.
I have tried from Argentina, the USA, and Spain. Same issue.
I have switched Google accounts, same issue.
I have even tried with different computers, same issue.
It is so sad to see Google selling a “no solution” solution.
I am going back to Cursor. As @Saad_Haddi said, “It’s a nightmare”, and really a bad one.
At least at Cursor they live from the software they sell, so they care about customers.

There was an upgrade today, so I installed it hoping for a patch for this, but no… everything is as broken as before.


The latest update did work for me, search_web is working fine now

It did not work for me. Same error. I am using Gemini 3 Flash. Which model are you using?

Gemini 3 Pro (High). Ask the agent to debug the search_web tool call; for me (using the same conversation from yesterday) it basically just confirmed that it’s working now. Opus is still extremely buggy for me, though. I tried AI Pro as well as Ultra (two different accounts); Pro seems to be working fine, but Ultra is still failing now and again. It’s very strange.

For context, vertexaisearch was available when AG came out and basically returned very detailed search results, but it seems to have been removed a while back. Gotta update my prompts to remove that requirement, haha.

Good luck! I will also quickly say that, as an Ultra user, the experience is MUCH worse than Pro. Ultra fails so often, I get errors, I get tool call fails like crazy. Pro works fine for the most part but limits recently suck.

Can anyone else confirm this? I am getting the same error, which differs from @Tenerife’s result.