Is the Batch API available for Gemini 2.5 Flash Preview TTS? [Documentation says it is supported, but the API says it is not]

Hi, I’m currently working on a project using gemini-2.5-flash-preview-tts.

According to the documentation (Models Documentation, Pricing), this model is listed as supporting the Batch API.

When I attempt to call the Batch API, I get this error:


2025-09-15 07:32:43,762 - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-tts:batchGenerateContent "HTTP/1.1 404 Not Found"
2025-09-15 07:32:43,763 - ERROR - Error occurred: 404 NOT_FOUND. {'error': {'code': 404, 'message': 'models/gemini-2.5-flash-preview-tts is not found for API version v1beta, or is not supported for batchGenerateContent. Call ListModels to see the list of available models and their supported methods.', 'status': 'NOT_FOUND'}}

However, when I query the model info, the only supported_actions returned are:
['countTokens', 'generateContent']

Is Batch API actually available for this model, or is the documentation ahead of the implementation?

Hello,

Welcome to the Forum!

The Batch API is supported with gemini-2.5-flash-preview-tts, so we will need to analyze this issue in more detail. Could you please share your code so we can reproduce the problem? That will help us provide a more accurate answer.

Hi, @Lalit_Kumar thanks for the reply!

I’ve already prepared JSONL files to run batch audio generation. Example:


{"key": "<Key>", "request": {"contents": "<Contents>", "generation_config": {"response_modalities": ["AUDIO"], "speech_config": {"voice_config": {"prebuilt_voice_config": {"voice_name": "Leda"}}}}}}

{"key": "<Key>", "request": {"contents": "<Contents>", "generation_config": {"response_modalities": ["AUDIO"], "speech_config": {"voice_config": {"prebuilt_voice_config": {"voice_name": "Leda"}}}}}}
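For reference, records in this shape can be generated with a short helper script. This is just a sketch; the record structure mirrors the JSONL lines above, and `build_tts_record`/`write_jsonl` are hypothetical helper names:

```python
import json


def build_tts_record(key, text, voice_name="Leda"):
    """Build one Batch API JSONL record for TTS generation."""
    return {
        "key": key,
        "request": {
            "contents": text,
            "generation_config": {
                "response_modalities": ["AUDIO"],
                "speech_config": {
                    "voice_config": {
                        "prebuilt_voice_config": {"voice_name": voice_name}
                    }
                },
            },
        },
    }


def write_jsonl(records, path):
    """Write one JSON object per line, as the Batch API input format expects."""
    with open(path, "w", encoding="utf-8") as f:
        for key, text in records:
            f.write(json.dumps(build_tts_record(key, text)) + "\n")
```

Each line in the output file is then an independent request keyed by `key`, which the batch results use to match outputs back to inputs.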

I tested this with 5–10 records, and followed the documentation to upload the JSONL file and create the batch job:


import logging

from google import genai

logger = logging.getLogger(__name__)

# client, API_KEY, and jsonl_file_path are initialized earlier
MODEL_NAME = "models/gemini-2.5-flash-preview-tts"
BATCH_DISPLAY_NAME = "batch-narration"

# Upload JSONL file
logger.info(f"Uploading JSONL file: {jsonl_file_path}")
batch_input_file = client.files.upload(file=str(jsonl_file_path))
logger.info(f"Successfully uploaded JSONL file: {batch_input_file.name}")

# Create batch job
logger.info("Creating batch job...")
batch_multimodal_job = client.batches.create(
    model=MODEL_NAME,
    src=batch_input_file.name,
    config={
        'display_name': BATCH_DISPLAY_NAME,
    },
)

Error observed

Instead of succeeding, I get the following error:


2025-09-24 14:23:09,297 - INFO - HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-tts:batchGenerateContent "HTTP/1.1 404 Not Found"

2025-09-24 14:23:09,299 - ERROR - Error occurred: 404 NOT_FOUND. {'error': {'code': 404, 'message': 'models/gemini-2.5-flash-preview-tts is not found for API version v1beta, or is not supported for batchGenerateContent. Call ListModels to see the list of available models and their supported methods.', 'status': 'NOT_FOUND'}}

I also tried using just "gemini-2.5-flash-preview-tts" as the model name, but the error persists.

Model inspection

When I query the model info directly:


client = genai.Client(api_key=API_KEY, http_options={'api_version': 'v1beta'})
model_info = client.models.get(model=MODEL_NAME)
print(model_info)

The result is:


2025-09-24 14:23:06,150 - INFO - HTTP Request: GET https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-preview-tts "HTTP/1.1 200 OK"

name='models/gemini-2.5-flash-preview-tts' display_name='Gemini 2.5 Flash Preview TTS' description='Gemini 2.5 Flash Preview TTS' version='gemini-2.5-flash-exp-tts-2025-05-19' endpoints=None labels=None tuned_model_info=TunedModelInfo() input_token_limit=8192 output_token_limit=16384 supported_actions=['countTokens', 'generateContent'] default_checkpoint_id=None checkpoints=None

Analysis

From the supported_actions field, it looks like this model only supports:

  • countTokens

  • generateContent

There’s no support for batchGenerateContent, which explains the 404 error when trying to create a batch job.

It looks like you’re encountering an issue where the documentation indicates that the gemini-2.5-flash-preview-tts model supports the Batch API, but your attempt to use the batchGenerateContent method returns a 404 error.

Here’s how to approach the situation:

1. Check Model Support and Documentation:

  • The error message indicates that the model either does not exist in the specified API version or does not support the batchGenerateContent method. This could suggest a discrepancy between the documentation and the actual implementation.

  • Double-check the documentation and release notes for any updates regarding which models support the Batch API, as implementations can sometimes lag behind documentation updates.

2. Use the ListModels Method:

  • The error message suggests calling the ListModels method to see the models and their supported actions. You can do this to confirm whether gemini-2.5-flash-preview-tts supports the Batch API.

  • If batchGenerateContent is not listed as a supported action for that model, then it’s likely that the Batch feature is not implemented for it yet.

3. Reach Out for Support:

  • If you’re still uncertain, or if the documentation seems to be incorrect, consider reaching out to customer support or the technical team for the API. They can clarify the availability of the Batch API for the specific model you are working with.

4. Check for Alternative Models:

  • If batch processing is critical for your project, you might want to explore other models that do support the Batch API as indicated in the documentation.

5. Stay Updated:

  • Keep an eye on any updates to the API documentation or announcements, as there may be future enhancements or corrections regarding model capabilities.

Given your situation, it’s likely a case of either inconsistent documentation or a feature that hasn’t been fully rolled out. Checking the supported actions and reaching out for support would be your best next steps.
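For step 2, the ListModels check can be automated. A minimal sketch that filters a ListModels-style REST response for Batch support (the dict shape follows the v1beta `supportedGenerationMethods` field; the helper names are illustrative, not part of any SDK):

```python
def supports_batch(model: dict) -> bool:
    """True if a ListModels entry advertises batchGenerateContent."""
    return "batchGenerateContent" in model.get("supportedGenerationMethods", [])


def batch_capable_models(list_models_response: dict) -> list:
    """Names of all models in the response that support the Batch API."""
    return [
        m["name"]
        for m in list_models_response.get("models", [])
        if supports_batch(m)
    ]
```

Running this over the actual ListModels response makes the documentation/implementation gap visible at a glance: the TTS models will simply not appear in the result.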

Joseph Lual

I am also facing the same issue.

As per the documentation below, the Batch API is said to be supported for gemini-2.5-flash-preview-tts, but batchGenerateContent is not present in the model’s supported_actions and does not work.

This doc was last updated in May 2025; I am not sure why this has not been resolved yet.

I am also seeing that the TTS models below are listed in the v1beta ListModels response, but batchGenerateContent is not included in their supportedGenerationMethods.

{
    "name": "models/gemini-2.5-flash-preview-tts",
    "version": "gemini-2.5-flash-exp-tts-2025-05-19",
    "displayName": "Gemini 2.5 Flash Preview TTS",
    "description": "Gemini 2.5 Flash Preview TTS",
    "inputTokenLimit": 8192,
    "outputTokenLimit": 16384,
    "supportedGenerationMethods": [
        "countTokens",
        "generateContent"
    ],
    "temperature": 1,
    "topP": 0.95,
    "topK": 64,
    "maxTemperature": 2
},
{
    "name": "models/gemini-2.5-pro-preview-tts",
    "version": "gemini-2.5-pro-preview-tts-2025-05-19",
    "displayName": "Gemini 2.5 Pro Preview TTS",
    "description": "Gemini 2.5 Pro Preview TTS",
    "inputTokenLimit": 8192,
    "outputTokenLimit": 16384,
    "supportedGenerationMethods": [
        "countTokens",
        "generateContent"
    ],
    "temperature": 1,
    "topP": 0.95,
    "topK": 64,
    "maxTemperature": 2
},

I believe the documentation is wrong, or is well ahead of the implementation.

@Lalit_Kumar Please let us know if you can help here.

Hello,

Apologies for the earlier miscommunication. Upon further investigation, we found that the Batch API is not yet supported with the TTS model. There was an error in our documentation, and it will be corrected shortly.

We sincerely apologize for the inconvenience and for the delay in providing you with this clarification. Thank you for your patience and understanding.
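Until batch support lands for the TTS models, one workaround is to replay the prepared JSONL records through ordinary synchronous generateContent calls. A sketch under that assumption (the actual SDK call is left commented out since it needs an API key; `to_generate_kwargs` and `run_sequentially` are hypothetical helper names):

```python
import json


def to_generate_kwargs(record: dict, model: str) -> dict:
    """Reshape one Batch-API JSONL record into generate_content arguments."""
    req = record["request"]
    return {
        "model": model,
        "contents": req["contents"],
        # The batch record's generation_config maps onto the per-request config
        "config": req["generation_config"],
    }


def run_sequentially(jsonl_path: str, model: str):
    """Yield (key, kwargs) for each JSONL record; the caller makes the API call."""
    with open(jsonl_path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            yield record["key"], to_generate_kwargs(record, model)
            # With a google-genai client, the caller would then do something like:
            # response = client.models.generate_content(**kwargs)
```

This loses the Batch API's pricing and throughput benefits, but it keeps the same JSONL input files so no pipeline changes are needed once batchGenerateContent is actually enabled for these models.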