Structured outputs in batch using OpenAI compatibility mode

We’re migrating some batch workloads from OpenAI to Gemini using the OpenAI compatibility layer. Real-time (non-batch) requests work perfectly with response_format: {"type": "json_schema", …}, but the Batch API rejects it.

What works in batch:

  • response_format: {"type": "json_object"} — accepted
  • response_format: {"type": "text"} — accepted
  • No response_format — accepted

What fails in batch:

  • response_format: {"type": "json_schema", "json_schema": {…}} — fails with invalid JSON, near column N: no such field: 'type' (referring to "type": "object" inside the schema definition)
  • Any variation of json_schema (with/without strict, additionalProperties, name) — same error
  • Gemini-native fields in the body (response_mime_type, response_schema) — no such field: 'response_mime_type'

Our setup:

JSONL format (standard OpenAI batch format):
{"custom_id":"req-1","method":"POST","url":"/v1/chat/completions","body":{"model":"gemini-2.5-flash","messages":[…],"temperature":0,"response_format":{"type":"json_schema","json_schema":{"name":"my_schema","schema":{"type":"object","properties":{"color":{"type":"string"}},"required":["color"]}}}}}
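For completeness, each line of that file is built with a plain json.dumps over the request dict, so quoting/escaping shouldn't be the issue. A minimal sketch of the generator (the message content here is illustrative; "my_schema" and the "color" field stand in for our real schema):

```python
import json

# One batch request in the standard OpenAI batch JSONL format.
# The message text is a placeholder; the schema mirrors the one above.
request = {
    "custom_id": "req-1",
    "method": "POST",
    "url": "/v1/chat/completions",
    "body": {
        "model": "gemini-2.5-flash",
        "messages": [{"role": "user", "content": "What color is the sky?"}],
        "temperature": 0,
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "my_schema",
                "schema": {
                    "type": "object",
                    "properties": {"color": {"type": "string"}},
                    "required": ["color"],
                },
            },
        },
    },
}

# One request per line; compact separators, no embedded newlines.
line = json.dumps(request, separators=(",", ":"))
print(line)
```

The same generator with "json_object" substituted for the json_schema block produces lines the Batch API accepts, which is why we suspect the rejection is specific to the json_schema field rather than to our JSONL.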

Is json_schema structured output intentionally unsupported in the OpenAI-compat Batch API? If so, is it on the roadmap? The workaround of using json_object and describing the schema in the prompt works, but we lose server-side schema enforcement.
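In case it's useful to others hitting this, our interim approach is json_object plus a client-side check on each batch result. A minimal sketch using only the standard library (the validate helper is illustrative and only covers required keys and primitive types, not full JSON Schema; the jsonschema package would be the real replacement):

```python
import json

# Schema mirroring the one from our batch request above.
SCHEMA = {
    "type": "object",
    "properties": {"color": {"type": "string"}},
    "required": ["color"],
}

# Map of JSON Schema type names to Python types for this tiny subset.
_TYPES = {"object": dict, "string": str, "number": (int, float), "array": list}

def validate(data, schema):
    """Return a list of violations for this (very small) schema subset."""
    if not isinstance(data, _TYPES[schema["type"]]):
        return [f"expected {schema['type']}, got {type(data).__name__}"]
    errors = []
    for key in schema.get("required", []):
        if key not in data:
            errors.append(f"missing required field: {key}")
    for key, sub in schema.get("properties", {}).items():
        if key in data and not isinstance(data[key], _TYPES[sub["type"]]):
            errors.append(f"field {key!r}: expected {sub['type']}")
    return errors

# With response_format {"type": "json_object"}, each batch result's message
# content is a JSON string; parse it and check it against the schema.
raw = '{"color": "blue"}'  # stand-in for one batch response body
problems = validate(json.loads(raw), SCHEMA)
print(problems)  # [] when the output matches the schema
```

This catches malformed results after the fact, but it can't steer generation the way server-side json_schema enforcement does, so we'd still much prefer native support.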