I used https://aistudio.google.com/live to test function calling with gemini-2.5-flash-preview-native-audio-dialog, adding this tool declaration:
```json
[
  {
    "name": "set_light_values",
    "description": "Sets the brightness and color temperature of a light.",
    "parameters": {
      "type": "object",
      "properties": {
        "brightness": {
          "type": "number",
          "description": "Light level from 0 to 100. Zero is off and 100 is full brightness"
        },
        "color_temp": {
          "type": "string",
          "description": "Color temperature of the light fixture, which can be daylight, cool or warm.",
          "enum": ["daylight", "cool", "warm"]
        }
      },
      "required": ["brightness", "color_temp"]
    }
  }
]
```
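For anyone wiring this declaration up client-side, here's a minimal sketch of the handler and dispatch step: when a tool call named set_light_values arrives from the model, run a local function and build the payload to send back. The helper names and the response shape are my assumptions, not the SDK's.

```python
# Hypothetical local handler for the set_light_values declaration above.
def set_light_values(brightness: float, color_temp: str) -> dict:
    """Stub device control: clamp brightness and echo the applied values."""
    if color_temp not in ("daylight", "cool", "warm"):
        raise ValueError(f"unexpected color_temp: {color_temp}")
    return {"brightness": max(0.0, min(100.0, brightness)), "color_temp": color_temp}

# Registry so each tool call from the model maps to a local function.
HANDLERS = {"set_light_values": set_light_values}

def dispatch_tool_call(name: str, args: dict) -> dict:
    """Run the matching handler and wrap its result as a function response."""
    result = HANDLERS[name](**args)
    return {"name": name, "response": {"result": result}}
```

With 2.0-flash-live the model emits the call and this dispatch runs; with the 2.5 native-audio model the call never arrives in the first place.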
The model can't trigger this function either.
+1.
In my project, gemini-2.0-flash-live-001 performs the function call, but gemini-2.5-flash-preview-native-audio-dialog does not with the same code.
Hey! Same thing on my end, but I seem to have found a workaround that's worth sharing.
If you start the conversation with a text message from the User (it can be just a single space) and only feed the audio in after the assistant's turn completes, it works.
Try it in Google AI Studio: start the stream with a text message, don't activate the microphone, let the assistant finish speaking (no interruption), then activate the mic and ask for tool use. It should work.
It's a bit of an odd one, but that's the pattern I've observed. Let me know if you can reproduce it as well.
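The ordering above can be sketched in code. This is a hedged sketch assuming the google-genai SDK's Live API (client.aio.live.connect, session.send_client_content, session.receive); treat those names and the dict shapes as my assumptions. The essential trick is purely the sequence: text primer first, wait for turn_complete, and only then stream audio.

```python
import asyncio

MODEL = "gemini-2.5-flash-preview-native-audio-dialog"
PRIMER = " "  # a single space as the opening User text turn is enough

def primer_turn(text: str = PRIMER) -> dict:
    """Build the opening text turn that seems to unlock tool calling."""
    return {"role": "user", "parts": [{"text": text}]}

async def run_session() -> None:
    # Import here so the sketch stays importable without the SDK installed.
    from google import genai

    client = genai.Client()
    config = {"response_modalities": ["AUDIO"], "tools": [...]}  # your declarations
    async with client.aio.live.connect(model=MODEL, config=config) as session:
        # 1) Open with a text turn, NOT audio.
        await session.send_client_content(turns=primer_turn(), turn_complete=True)
        # 2) Drain responses until the assistant's turn completes (no interruption).
        async for response in session.receive():
            sc = getattr(response, "server_content", None)
            if sc is not None and getattr(sc, "turn_complete", False):
                break
        # 3) Only now start streaming microphone audio and ask for tool use.
        ...

if __name__ == "__main__":
    asyncio.run(run_session())
```

Again, this is a reconstruction of the workaround's ordering, not a confirmed fix from the SDK docs.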
And now I can't make it work anymore... Curious to know if someone manages to get something working.
Continuing on this thread: if Google Search grounding is activated, I get a 100% success rate on tool calling, in pure audio-to-audio scenarios.
If you activate google search grounding, function calling works 100% of the time
No, it only works with small prompts, and on GitHub they still haven't marked it as a bug.