Gemini responds with structured / JSON-like output only when Function Calling is enabled

Hello,

Since today (December 23, 2025, KST), I have been seeing an issue where Gemini responds in a structured / JSON-like format whenever Function Calling is enabled.

  • Until yesterday, even with Function Calling added/enabled, Gemini responded in normal natural language unless it was explicitly required to return a structured result.

  • Starting today, whenever I enable or add Function Calling, Gemini's responses become structured / JSON-like (key-value style, similar to JSON), for example:

    • {"answer": "…", "steps": […], "reasoning": "…"}
  • This happens even for regular chat-style prompts in Korean, where I do not request JSON, structured output, or any specific schema.

  • If I do not use Function Calling, Gemini behaves normally and answers in natural language.

  • Because this behavior changed suddenly without any modification on my side, I suspect it may be related to a recent model or configuration update.

Expected behavior:

  • Enabling or adding Function Calling should not force all responses into a structured / JSON-like format for normal chat.

  • For regular natural-language prompts, Gemini should still respond in plain natural language, unless:

    • a specific tool/function is being called, or

    • I explicitly request JSON / structured output.

Actual behavior:

  • When Function Calling is enabled, Gemini consistently replies in a structured / JSON-like format, even for simple chat queries, which makes normal conversation usage inconvenient.

If helpful, I can provide:

  • Example prompts where this occurs

  • The exact structured outputs Gemini returns when Function Calling is enabled

Could you please check whether this is intended behavior, a bug, or a configuration issue related to Function Calling, and let me know how I can restore normal natural-language responses while still using Function Calling?

Example:

gemini-3-pro-preview

input: 과천시 날씨 알려줘. (Tell me the weather in Gwacheon.)

{
  "text": "**Gathering Weather Data**\n\nI'm currently trying to gather the latest weather data for Gwacheon-si. It's December 23rd, 2025, and I need to accurately provide the weather information for this specific date and location. I need to figure out what data is needed to provide a useful response.\n\n\n",
  "thought": true
},
{
  "text": "**Focusing Data Accuracy**\n\n\\n\\nI'm now focusing on verifying the accuracy of the weather information. My goal is to use data that is recent and specific to Gwacheon-si. I'm exploring different sources to ensure the information is reliable and up-to-date. I'm thinking about the parameters needed to achieve an accurate response.\n\n\n",
  "thought": true
},
{
  "text": ""
},
{
  "type": "function_call_request",
  "id": "0838469a-37d1-4989-a852-f989b90ad2ae",
  "name": "serper_web",
  "input": {
    "query": "Gwacheon-si weather on December 23, 2025"
  }
},
{
  "type": "function_call_result",
  "id": "0838469a-37d1-4989-a852-f989b90ad2ae",
  "name": "serper_web",
  "input": {
    "query": "Gwacheon-si weather on December 23, 2025"
  },
  "content": "....weather info",
  "structured_content": {
    "result_count": 10,
    "search_params": {
      "query": "Gwacheon-si weather on December 23, 2025",
      "location": "Seoul, Seoul, South Korea",
      "country": "kr",
      "language": "ko",
      "autocorrect": true,
      "offset": 1
    },
    "data": [
      {
        "title": "Monthly weather for Gwacheon-dong, Gyeonggi-do, South Korea",
        "link": "https://www.accuweather.com/ko/kr/gwacheon-dong/1849815/january-weather/1849815?year=2025",
        "snippet": "Jan. Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec. 2025. 2024 2025 2026 · Daily. Sun. Mon. Tue. Wed. Thu. Fri. Sat. 29. 43°. 24°. 30. 50°. 35°. 31.",
        "position": 1
      },
      ...
    ]
  }
},
{
  "text": "**Analyzing Future Weather Data**\n\nI'm currently working on analyzing the weather data for Gwacheon-si on December 23, 2025. My initial focus is on identifying key weather conditions, with \"clear sky\" being one of the first takeaways. I'm cross-referencing this with other sources to build a more comprehensive picture.\n\n\n",
  "thought": true
},
{
  "text": "**Revisiting Temperature Predictions**\n\nI've refined my temperature estimates for that night. Initial data showed a daytime high around +1°C, but further research reveals a nighttime low between -2°C and -1°C, according to Meteotrend. AccuWeather's monthly data is less helpful, but Wanderlog's general December averages are in the ballpark. KMA's current observation isn't relevant for the future date, but the specific forecast indicates a clear sky on the 23rd, and the low is confirmed by Meteotrend.\n\n\n",
  "thought": true
},
{
  "text": "**Constructing a Concise Forecast**\n\nI've integrated the temperature and sky conditions into a coherent forecast. The winter weather suggests clear skies and freezing temperatures, specifically around 0°C to +2°C during the day and dropping to -2°C to -5°C at night. I'm focusing on providing the most precise information available, emphasizing the clear sky and cold temperatures based on the Dec 23rd data. I've noted the need for warm clothing.\n\n\n",
  "thought": true
},
{
  "text": "**Consolidating Weather Details**\n\nI'm solidifying the forecast. It's looking like clear skies, with daytime temperatures around 0°C to +2°C. The nighttime low appears to be between -1°C and -2°C, which is consistent with the available data. I'm satisfied with this refined overview and plan to move on to the next step.\n\n\n",
  "thought": true
},
{
  "text": "{\n  \"summary\": \"On Tuesday, December 23, 2025, Gwacheon-si will see mostly clear skies, and temper"
},
{
  "text": "atures around +1°C will make it chilly.\",\n  \"status\": \"success\""
},
{
  "text": "\n}"
},
{
  "text": "",
  "thought_signature": "..."
}

{
  "summary": "On Tuesday, December 23, 2025, Gwacheon-si will see mostly clear skies, and temperatures around +1°C will make it chilly.",
  "status": "success"
}
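For reference, the final object above is exactly the concatenation of the non-thought `text` parts in the stream. A minimal sketch (plain TypeScript; the `StreamPart` shape is inferred from the dump above, not an official SDK type) that rebuilds the visible text and flags when it is a bare JSON object:

```typescript
// StreamPart shape inferred from the dump above; not an official SDK type.
interface StreamPart {
  text?: string;
  thought?: boolean;
}

// Join the visible (non-thought) text chunks back into the final answer.
function joinVisibleText(parts: StreamPart[]): string {
  return parts
    .filter((p) => typeof p.text === "string" && !p.thought)
    .map((p) => p.text as string)
    .join("");
}

// Heuristic: does the reassembled answer look like an unrequested JSON object?
function looksLikeJsonObject(text: string): boolean {
  const trimmed = text.trim();
  if (!trimmed.startsWith("{") || !trimmed.endsWith("}")) return false;
  try {
    return typeof JSON.parse(trimmed) === "object";
  } catch {
    return false;
  }
}
```

Running `joinVisibleText` over the parts in the dump reproduces the `{"summary": …, "status": "success"}` object, which suggests the wrapper comes from the model's own text stream rather than from client-side post-processing.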

gemini-3-pro-preview

input: 안녕? (Hello?)

{
  "text": "**Analyzing the Input**\n\nI've just started to look at the user's input, which is \"안녕?\". It seems like they're trying to start a conversation with me, and I need to figure out how to respond appropriately. The timestamp is 2025.\n\n\n",
  "thought": true
},
{
  "text": "**Deciding Response Strategy**\n\nI'm now determining my AI persona and the rules I'll follow in this interaction, building off the greeting intent. Considering the future context of 2025 and the user's initial \"안녕?\", I'm formulating the appropriate response framework for this interaction and determining how formal or casual my response should be.\n\n\n",
  "thought": true
},
{
  "text": "{\n  \"thought\": \"The user opened the conversation with a greeting. As a 'logical problem-solver' that is 'friendly', '"
},
{
  "text": "professional', and 'proactive', the AI should respond in a positive, warm tone. Since this"
},
{
  "text": " is a simple greeting, the 'key summary' can be skipped; it is best to continue the conversation naturally and"
},
{
  "text": " proactively offer the help the user needs first.\",\n  \"action\": \"reply\",\n  \""
},
{
  "text": "content\": \"Hello! Nice to meet you. 😊\\n\\nHow is your day going? If there's anything"
},
{
  "text": " you're curious about or need help with, just let me know. I'll do my best to help!\"\n}"
},
{
  "text": "",
  "thought_signature": "..."
}
{
"thought": "The user opened the conversation with a greeting. As a 'logical problem-solver' that is 'friendly', 'professional', and 'proactive', the AI should respond in a positive, warm tone. Since this is a simple greeting, the 'key summary' can be skipped; it is best to continue the conversation naturally and proactively offer the help the user needs first.",
"action": "reply",
"content": "Hello! Nice to meet you. 😊\n\nHow is your day going? If there's anything you're curious about or need help with, just let me know. I'll do my best to help!"
}

Thank you.

13 Likes

Same thing; it seems to have started a few hours ago, at least with gemini-3-flash-preview.

2 Likes

Checking server logs, it appears to have started around 2025-12-22 23:00:00 UTC, and the issue remains unresolved.

We haven't been receiving complete responses, so we'd appreciate it if you could take action quickly.

1 Like

We are facing the same issue.

This bug only occurs with Gemini 3 models such as gemini-3-pro-preview (confirmed) and gemini-3-flash-preview (not certain, but very likely).

This bug does not happen with gemini-2.5-pro.

Additionally, gemini-3-pro-preview tends to call tools the wrong way and give poor answers, while gemini-2.5-pro works perfectly!

NOTE: I'm testing it using the AI SDK by Vercel.

1 Like

I am using the Gemini Java SDK; same thing with gemini-3-flash-preview starting today. It was working great before, though, with good tool-calling discrimination.

With the current issue, text parts sometimes look like this:

{
"text": "Hello! I'm Anahata, your AI butler here in NetBeans. I'm ready to help you with your Java projects, refactoring, or any IDE tasks you have in mind. \n\nVisca el Barça! We are looking fantastic under Flick lately—it's a great time to be a 'culer', especially seeing how things are a bit shaky over in Madrid. \n\nWhat's on the agenda today?"
}

Sometimes it looks like a "thought", but the part itself is not marked as a thought:

{
"thought": "The user is saying 'whats up', which is a casual greeting. I should respond in a friendly and helpful manner, acknowledging the greeting and offering assistance."
}

I'm pretty sure I have also seen it emit a "message" key.

1 Like

Iโ€™m using js-genai sdk, and it doesnโ€™t seem to be a problem specific to the sdk.

1 Like

From this, it looks like there was an update to structured output in Gemini 3.

2 Likes

Yes! Because it responds as if a structured output schema were defined.

The weirdest thing: we tried https://aistudio.google.com/ with a defined tool, and it answered as usual :neutral_face:

Hi @user4064, welcome to the community!

Could you help me out with a few more details so that we can analyze this better?

Are you using any SDKs? If yes, which one? Have you tried providing system instructions specifically for this? If you are using an SDK other than the Google GenAI SDK, have you passed the thought_signature back to the model? (See the Function Calling requirements.)

Thank you!
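For anyone hand-rolling chat history in a non-Google SDK, the last question above boils down to: when you append the model turn that issued the function call back into the request contents, the `thought_signature` (camelCased `thoughtSignature` in the JS SDK) on its parts must survive the round trip. A rough sketch of that bookkeeping, with the `Part`/`Content` shapes written out as assumptions rather than official typings:

```typescript
// Minimal part/content shapes, modeled on js-genai; treat as assumptions,
// not official SDK typings.
interface Part {
  text?: string;
  thought?: boolean;
  functionCall?: { name: string; args: Record<string, unknown> };
  thoughtSignature?: string;
}

interface Content {
  role: "user" | "model";
  parts: Part[];
}

// Append a model turn to the history, dropping thought-only parts while
// preserving thoughtSignature on the parts that are sent back. Dropping
// the signature is a documented way to degrade Gemini function calling.
function appendModelTurn(history: Content[], responseParts: Part[]): Content[] {
  const kept = responseParts
    .filter((p) => !p.thought)
    .map(({ thought, ...rest }) => rest); // keep thoughtSignature, drop the flag
  return [...history, { role: "model", parts: kept }];
}
```

Whether a missing signature explains this regression is unclear (several posters in this thread say they do pass it back), but it is worth ruling out first.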

I'm using the js-genai SDK 1.34.0 and constructing a history using the GoogleGenAI Chats object. I'm also correctly inserting thought_signature.

Until yesterday, it was processing thousands of requests without issue, but today, the responses suddenly became strange.

I also tried setting responseMimeType to 'text/plain' just in case, but it didn't work.

request: {
  message: inputMessage,
  config: {
    tools: toolsConfig.tools as ToolListUnion,
    toolConfig: {
      functionCallingConfig:
        toolsConfig.callableTools.length > 0
          ? { mode: FunctionCallingConfigMode.AUTO }
          : undefined,
    },
    thinkingConfig: {
      includeThoughts: true,
      thinkingLevel: thinkingLevel,
    },
    responseMimeType: 'text/plain', // (Try)
    automaticFunctionCalling: {
      disable: false,
      maximumRemoteCalls: 10,
    },
    systemInstruction: [{ text: this.SYSTEM_PROMPT }],
  },
},

Three different Java projects, all using the GenAI Java SDK (and just updated to the latest, 1.32).

All were working like a charm yesterday; the issue is only seen in gemini-3-flash-preview.

1 Like

Having the same issue here. Happens with both gemini-3-flash-preview and gemini-3-pro-preview.

I'm using the Vercel AI SDK. Here's an example of the output:

This exact setup was returning regular text content yesterday, but now the model outputs JSON for every response.

1 Like

What did you guys do last night? @Srikanta_K_N, tell us, what was the sneaky quiet update about?… Tell us… don't be shy, we won't tell anybody

1 Like

I'm calling the API over REST, and the same problem happens to me.

Hey All,

Thank you for flagging this issue. We apologize for the inconvenience and have escalated it to our internal team for investigation. We will update you as soon as we have more information.

3 Likes

Good, good…

Don't worry… I am sure the internal team knows very well what they did yesterday…

Hmm……

2 Likes

Same here… using the Vercel AI SDK, and it was totally okay until yesterday.

1 Like

Facing the same problem, on two different websites. Without any code change, all Gemini answers became JSON instead of natural text sometime yesterday evening, already leading to support requests from users. This makes Gemini completely unusable, and we had to switch to a competitor for the time being. We're using Vercel AI SDK 5 and Vercel AI SDK 6 with the standard Google provider on the different projects, which, again, worked flawlessly until yesterday evening when something changed on Google's side.

2 Likes

I experience this same issue! Using the Vercel AI SDK, a Node back-end, and tool calling in a chat-UX setting. Gemini 3 models keep responding with JSON objects like { thoughts: "…", response: "…" }, or worse, partial objects mixed with partial natural language. I was not experiencing this issue prior to Dec 22. I switched to a different LLM provider in the meantime.
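Until this is fixed server-side, one purely client-side stopgap is to detect the unrequested wrapper and pull the human-readable string back out. The key names below (`content`, `response`, `summary`, `message`, `text`) are just the ones people have reported in this thread, not a stable contract:

```typescript
// Keys observed in the spurious wrappers reported in this thread; extend as needed.
const TEXT_KEYS = ["content", "response", "summary", "message", "text"];

// If the model wrapped its answer in an unrequested JSON object, pull the
// human-readable string back out; otherwise return the text unchanged.
function unwrapSpuriousJson(raw: string): string {
  const trimmed = raw.trim();
  if (!trimmed.startsWith("{")) return raw;
  try {
    const obj = JSON.parse(trimmed);
    for (const key of TEXT_KEYS) {
      if (typeof obj[key] === "string") return obj[key];
    }
  } catch {
    // Partial/mixed output: give up and show the raw text.
  }
  return raw;
}
```

This does nothing for the partial-object / partial-prose cases, but it keeps fully wrapped responses readable while waiting for the fix.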

3 Likes

Somehow the model seems to have completely degenerated. It worked well until yesterday, but when I tried to circumvent the JSON problem by forcing a structured output containing just a string, with the same parameters as yesterday, it completely got hung up and generated the same tokens hundreds of times. That was with a temperature of 0.2; using 1.0 fixes that particular problem (but leads to others). Until yesterday evening, 0.2 performed well, so the model really seems to have changed…

3 Likes