Hello,
Since today (December 23, 2025, KST), I have noticed an issue where Gemini starts responding in a structured / JSON-like format only when Function Calling is enabled.
- Until yesterday, even with Function Calling added/enabled, Gemini responded in normal natural language unless it was explicitly required to return a structured result.
- Starting today, whenever I enable or add Function Calling, Gemini's responses become structured / JSON-like (key-value style, similar to JSON), for example: {"answer": "…", "steps": […], "reasoning": "…"}
- This happens even for regular chat-style prompts in Korean, where I do not request JSON, structured output, or any specific schema.
- If I do not use Function Calling, Gemini behaves normally and answers in natural language.
- Because this behavior changed suddenly without any modification on my side, I suspect it may be related to a recent model or configuration update.
Expected behavior:
- Enabling or adding Function Calling should not force all responses into a structured / JSON-like format for normal chat.
- For regular natural-language prompts, Gemini should still respond in plain natural language, unless:
  - a specific tool/function is being called, or
  - I explicitly request JSON / structured output.
Actual behavior:
- When Function Calling is enabled, Gemini consistently replies in a structured / JSON-like format, even for simple chat queries, which makes normal conversation usage inconvenient.
If helpful, I can provide:
- Example prompts where this occurs
- The exact structured outputs Gemini returns when Function Calling is enabled
Could you please check whether this is intended behavior, a bug, or a configuration issue related to Function Calling, and let me know how I can restore normal natural-language responses while still using Function Calling?
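For reproduction, the only difference between my two requests is the presence of a `tools` entry in the `generateContent` request body. Below is a minimal sketch of the payloads I compare; the `get_weather` declaration is a hypothetical stand-in for illustration, not my actual tool, and the field names follow the public REST API shape:

```python
import json

# Hypothetical function declaration, for illustration only; my real tool set differs.
WEATHER_TOOL = {
    "function_declarations": [
        {
            "name": "get_weather",
            "description": "Look up the weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ]
}

def build_request(prompt: str, with_tools: bool) -> dict:
    """Build a generateContent-style request body for the same chat prompt."""
    body = {"contents": [{"role": "user", "parts": [{"text": prompt}]}]}
    if with_tools:
        # This is the only difference between the two runs.
        body["tools"] = [WEATHER_TOOL]
    return body

plain = build_request("안녕?", with_tools=False)   # still answers in natural language
tooled = build_request("안녕?", with_tools=True)   # answers JSON-like since Dec 23
print(json.dumps(tooled, ensure_ascii=False, indent=2))
```

Until yesterday both payloads produced a plain natural-language reply for a prompt like this; now only the one without `tools` does.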
Example:
gemini-3-pro-preview
input: 과천시 날씨 알려줘. (Tell me the weather in Gwacheon.)
{
"text": "**Gathering Weather Data**\n\nI'm currently trying to gather the latest weather data for Gwacheon-si. It's December 23rd, 2025, and I need to accurately provide the weather information for this specific date and location. I need to figure out what data is needed to provide a useful response.\n\n\n",
"thought": true
},
{
"text": "**Focusing Data Accuracy**\n\n\\n\\nI'm now focusing on verifying the accuracy of the weather information. My goal is to use data that is recent and specific to Gwacheon-si. I'm exploring different sources to ensure the information is reliable and up-to-date. I'm thinking about the parameters needed to achieve an accurate response.\n\n\n",
"thought": true
},
{
"text": ""
},
{
"type": "function_call_request",
"id": "0838469a-37d1-4989-a852-f989b90ad2ae",
"name": "serper_web",
"input": {
"query": "2025년 12월 23일 과천시 날씨"
}
},
{
"type": "function_call_result",
"id": "0838469a-37d1-4989-a852-f989b90ad2ae",
"name": "serper_web",
"input": {
"query": "2025년 12월 23일 과천시 날씨"
},
"content": "....weather info",
"structured_content": {
"result_count": 10,
"search_params": {
"query": "2025년 12월 23일 과천시 날씨",
"location": "Seoul, Seoul, South Korea",
"country": "kr",
"language": "ko",
"autocorrect": true,
"offset": 1
},
"data": [
{
"title": "과천동, 경기도, 대한민국 월별 날씨",
"link": "https://www.accuweather.com/ko/kr/gwacheon-dong/1849815/january-weather/1849815?year=2025",
"snippet": "1월. 1월 2월 3월 4월 5월 6월 7월 8월 9월 10월 11월 12월. 2025. 2024 2025 2026 · 일별. 일. 월. 화. 수. 목. 금. 토. 29. 43°. 24°. 30. 50°. 35°. 31.",
"position": 1
},
...
]
}
},
{
"text": "**Analyzing Future Weather Data**\n\nI'm currently working on analyzing the weather data for Gwacheon-si on December 23, 2025. My initial focus is on identifying key weather conditions, with \"clear sky\" being one of the first takeaways. I'm cross-referencing this with other sources to build a more comprehensive picture.\n\n\n",
"thought": true
},
{
"text": "**Revisiting Temperature Predictions**\n\nI've refined my temperature estimates for that night. Initial data showed a daytime high around +1°C, but further research reveals a nighttime low between -2°C and -1°C, according to Meteotrend. AccuWeather's monthly data is less helpful, but Wanderlog's general December averages are in the ballpark. KMA's current observation isn't relevant for the future date, but the specific forecast indicates a clear sky on the 23rd, and the low is confirmed by Meteotrend.\n\n\n",
"thought": true
},
{
"text": "**Constructing a Concise Forecast**\n\nI've integrated the temperature and sky conditions into a coherent forecast. The winter weather suggests clear skies and freezing temperatures, specifically around 0°C to +2°C during the day and dropping to -2°C to -5°C at night. I'm focusing on providing the most precise information available, emphasizing the clear sky and cold temperatures based on the Dec 23rd data. I've noted the need for warm clothing.\n\n\n",
"thought": true
},
{
"text": "**Consolidating Weather Details**\n\nI'm solidifying the forecast. It's looking like clear skies, with daytime temperatures around 0°C to +2°C. The nighttime low appears to be between -1°C and -2°C, which is consistent with the available data. I'm satisfied with this refined overview and plan to move on to the next step.\n\n\n",
"thought": true
},
{
"text": "{\n \"summary\": \"2025년 12월 23일(화) 과천시는 대체로 맑은 날씨가 이어지겠으며, 기"
},
{
"text": "온은 영상 1도 내외로 쌀쌀할 전망입니다.\",\n \"status\": \"success\""
},
{
"text": "\n}"
},
{
"text": "",
"thought_signature": "..."
}
{
"summary": "2025년 12월 23일(화) 과천시는 대체로 맑은 날씨가 이어지겠으며, 기온은 영상 1도 내외로 쌀쌀할 전망입니다.",
"status": "success"
}
(Summary translation: "On Tuesday, December 23, 2025, Gwacheon is expected to have mostly clear weather, with chilly temperatures around +1°C.")
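As a temporary client-side workaround, I currently detect and unwrap these JSON-wrapped replies before displaying them. A rough sketch; the key names (`summary`, `content`, `answer`) are just the ones I have observed in outputs like the above, not anything documented:

```python
import json

def unwrap_structured_text(text: str) -> str:
    """If the model's final text is a JSON object such as
    {"summary": ..., "status": ...} or {"action": "reply", "content": ...},
    return the natural-language field; otherwise return the text unchanged."""
    stripped = text.strip()
    if not (stripped.startswith("{") and stripped.endswith("}")):
        return text
    try:
        obj = json.loads(stripped)
    except json.JSONDecodeError:
        return text
    if not isinstance(obj, dict):
        return text
    # Keys observed in the structured replies shown above.
    for key in ("content", "summary", "answer", "text"):
        if isinstance(obj.get(key), str):
            return obj[key]
    return text

print(unwrap_structured_text('{"summary": "Clear skies.", "status": "success"}'))  # Clear skies.
print(unwrap_structured_text("Just a normal sentence."))  # unchanged
```

This keeps chat usable for now, but it is obviously fragile, which is why I would prefer the model to stop emitting the wrapper in the first place.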
gemini-3-pro-preview
input: 안녕? (Hello?)
{
"text": "**Analyzing the Input**\n\nI've just started to look at the user's input, which is \"안녕?\". It seems like they're trying to start a conversation with me, and I need to figure out how to respond appropriately. The timestamp is 2025.\n\n\n",
"thought": true
},
{
"text": "**Deciding Response Strategy**\n\nI'm now determining my AI persona and the rules I'll follow in this interaction, building off the greeting intent. Considering the future context of 2025 and the user's initial \"안녕?\", I'm formulating the appropriate response framework for this interaction and determining how formal or casual my response should be.\n\n\n",
"thought": true
},
{
"text": "{\n \"thought\": \"사용자가 인사를 건네며 대화를 시작했습니다. AI는 '친절함', '전문성', '적극성'을 갖춘 '"
},
{
"text": "논리적인 해결사'로서, 긍정적이고 상냥한 말투로 응대해야 합니다. 간단"
},
{
"text": "한 인사말이므로 '핵심 요약'은 생략하고, 자연스러운 대화로 이어가며 사용자가"
},
{
"text": " 필요로 하는 도움을 먼저 제안하는 것이 적절합니다.\",\n \"action\": \"reply\",\n \""
},
{
"text": "content\": \"안녕하세요! 만나서 반갑습니다. 😊\\n\\n오늘 하루 어떻게 보내고 계신가요? 궁금"
},
{
"text": "한 점이 있거나 도움이 필요한 일이 있다면 언제든지 말씀해 주세요. 최선을 다해 도와드리겠습니다!\"\n}"
},
{
"text": "",
"thought_signature": "..."
}
{
"thought": "사용자가 인사를 건네며 대화를 시작했습니다. AI는 '친절함', '전문성', '적극성'을 갖춘 '논리적인 해결사'로서, 긍정적이고 상냥한 말투로 응대해야 합니다. 간단한 인사말이므로 '핵심 요약'은 생략하고, 자연스러운 대화로 이어가며 사용자가 필요로 하는 도움을 먼저 제안하는 것이 적절합니다.",
"action": "reply",
"content": "안녕하세요! 만나서 반갑습니다. 😊\n\n오늘 하루 어떻게 보내고 계신가요? 궁금한 점이 있거나 도움이 필요한 일이 있다면 언제든지 말씀해 주세요. 최선을 다해 도와드리겠습니다!"
}
(Translation: the "thought" says the user opened with a greeting and the AI should reply in a positive, friendly tone as a "logical problem-solver", skipping the "key summary" for a simple greeting. The "content" says: "Hello! Nice to meet you. 😊 How is your day going? If you have any questions or need help with anything, please let me know anytime. I'll do my best to help!")
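For context on how the output above reaches the user: my client simply concatenates the non-thought `text` parts of the stream, which is why the raw JSON wrapper ends up shown in the chat. A simplified sketch of that join step, using shortened parts from the log above:

```python
import json

# Streamed parts as in the log above ("thought" parts are filtered out for display).
parts = [
    {"text": "**Deciding Response Strategy**\n\n...", "thought": True},
    {"text": '{\n  "action": "reply",\n  "content": "'},
    {"text": '안녕하세요! 만나서 반갑습니다.'},
    {"text": '"\n}'},
]

# Join only the non-thought text chunks, as my client does before rendering.
final_text = "".join(p["text"] for p in parts if not p.get("thought"))

# The user-visible text is now itself a JSON object rather than plain chat.
obj = json.loads(final_text)
print(obj["content"])  # 안녕하세요! 만나서 반갑습니다.
```

So nothing on my side asks for JSON; the wrapper arrives inside the ordinary `text` parts themselves.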
Thank you.
