Gemini 2.5 TTS Model Not Working at Low Temperatures

Is anyone having this issue with Gemini 2.5-Flash-TTS?

The model will not produce any output when the temperature is set to a low value (below roughly 0.5). It either processes and times out with no response, or returns a short snippet with the rest cut off (about 2 minutes of silence). I can say with some degree of certainty that this is a problem with the model, since responses work at higher temperatures and the only variable changing between my experiments is the temperature. This is using the `GoogleGenAI` library in a TS project.

My prompt:
```ts
const response = await ai.models.generateContent({
  model: "gemini-2.5-flash-preview-tts",
  contents: [
    {
      parts: [
        {
          text: `Read aloud in a narration style: ${transcript}`,
        },
      ],
    },
  ],
  config: {
    responseModalities: ["AUDIO"],
    temperature: 0.4,
    speechConfig: {
      voiceConfig: {
        prebuiltVoiceConfig: { voiceName: "Kore" },
      },
    },
  },
});
```
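For anyone reproducing this, a small guard makes the failure easy to detect programmatically. This is a hypothetical helper (not part of the SDK) that navigates the response shape shown in the dumps below and returns the base64 audio payload only when the model actually finished with `STOP`:

```typescript
// Minimal structural type matching the fields used below; the real
// GenerateContentResponse from @google/genai has more fields.
type TtsResponse = {
  candidates?: Array<{
    finishReason?: string;
    content?: { parts?: Array<{ inlineData?: { data?: string } }> };
  }>;
};

// Hypothetical helper: returns the base64 audio string on success,
// or undefined when the model cut the response short (e.g. the
// finishReason 'OTHER' case with no content parts).
function extractAudioBase64(response: TtsResponse): string | undefined {
  const candidate = response.candidates?.[0];
  if (candidate?.finishReason !== "STOP") return undefined;
  return candidate.content?.parts?.[0]?.inlineData?.data;
}

// Mocks mirroring the two responses in this thread:
const ok: TtsResponse = {
  candidates: [
    {
      finishReason: "STOP",
      content: { parts: [{ inlineData: { data: "UklGR..." } }] },
    },
  ],
};
const bad: TtsResponse = { candidates: [{ finishReason: "OTHER" }] };

console.log(extractAudioBase64(ok)); // "UklGR..."
console.log(extractAudioBase64(bad)); // undefined
```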

Successful response (temp = 0.5):

```
GenerateContentResponse {
  sdkHttpResponse: {
    headers: {
      'alt-svc': 'h3=":443"; ma=2592000,h3-29=":443"; ma=2592000',
      'content-encoding': 'gzip',
      'content-type': 'application/json; charset=UTF-8',
      date: 'Thu, 02 Apr 2026 07:40:02 GMT',
      server: 'scaffolding on HTTPServer2',
      'server-timing': 'gfet4t7; dur=21244',
      'transfer-encoding': 'chunked',
      vary: 'Origin, X-Origin, Referer',
      'x-content-type-options': 'nosniff',
      'x-frame-options': 'SAMEORIGIN',
      'x-gemini-service-tier': 'standard',
      'x-xss-protection': '0'
    }
  },
  candidates: [ { content: [Object], finishReason: 'STOP', index: 0 } ],
  modelVersion: 'gemini-2.5-flash-preview-tts',
  responseId: 'UR3OadmXLdvJjuMPg_ra0A8',
  usageMetadata: {
    promptTokenCount: 149,
    candidatesTokenCount: 896,
    totalTokenCount: 1045,
    promptTokensDetails: [ [Object] ],
    candidatesTokensDetails: [ [Object] ]
  }
}
```

Unsuccessful response (temp = 0.1):

```
GenerateContentResponse {
  sdkHttpResponse: {
    headers: {
      'alt-svc': 'h3=":443"; ma=2592000,h3-29=":443"; ma=2592000',
      'content-encoding': 'gzip',
      'content-type': 'application/json; charset=UTF-8',
      date: 'Thu, 02 Apr 2026 07:38:56 GMT',
      server: 'scaffolding on HTTPServer2',
      'server-timing': 'gfet4t7; dur=148571',
      'transfer-encoding': 'chunked',
      vary: 'Origin, X-Origin, Referer',
      'x-content-type-options': 'nosniff',
      'x-frame-options': 'SAMEORIGIN',
      'x-gemini-service-tier': 'standard',
      'x-xss-protection': '0'
    }
  },
  candidates: [ { finishReason: 'OTHER', index: 0 } ],
  modelVersion: 'gemini-2.5-flash-preview-tts',
  responseId: 'EB3OabBVtrKO4w_t-v6gDw',
  usageMetadata: {
    promptTokenCount: 149,
    totalTokenCount: 149,
    promptTokensDetails: [ [Object] ]
  }
}
```

Note the failing call also returns no candidate tokens (`totalTokenCount` equals `promptTokenCount`), and `server-timing` shows it ran for roughly 148 seconds before giving up.

Hello @Christian_Twemlow,
Thanks for bringing this to our attention. I'd like to understand your use case for a low temperature, and whether setting the temperature to 1 returns the expected result.

Hi there,

I'm using the API to convert text to speech. Low temperatures return nothing (some sort of timeout error) or silent audio. With the temperature set to 1, the API works as expected.
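Until this is fixed server-side, one workaround is to retry the call at progressively higher temperatures. This is a sketch under the assumption that the failure mode is deterministic per temperature; `callModel` is a stand-in for the real `ai.models.generateContent` call and should resolve to the base64 audio string, or `undefined` when the response has no usable parts:

```typescript
// Hypothetical fallback wrapper: try each temperature in order and return
// the first result that actually contains audio.
async function ttsWithTemperatureFallback(
  callModel: (temperature: number) => Promise<string | undefined>,
  temperatures: number[] = [0.4, 0.7, 1.0],
): Promise<string> {
  for (const temperature of temperatures) {
    const audio = await callModel(temperature);
    if (audio !== undefined) return audio;
  }
  throw new Error("TTS failed at every temperature tried");
}

// Mock reproducing the failure mode from this thread: temperatures below
// 0.5 come back empty, higher ones succeed.
const mockCall = async (t: number) =>
  t < 0.5 ? undefined : "audio-bytes";

ttsWithTemperatureFallback(mockCall).then((audio) => {
  console.log(audio); // "audio-bytes" (from the second attempt, t = 0.7)
});
```

The trade-off is that each failed low-temperature attempt can hang for minutes before timing out, so in practice you may want to start directly at a temperature known to work.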