Gemini Python SDK: response schema output cut off when using max_output_tokens

I'm using gemini-1.5-flash-001 with the response schema defined below and max_output_tokens set. Sometimes it generates unusual, repeated responses, and the text is cut off, leaving invalid JSON syntax.

Can this issue be resolved in the SDK, for example by mapping the class to a Pydantic model with strict mode set to True?

My response schema, which asks Gemini to provide some opinions (pros and cons):

import typing

class GeminiResponse(typing.TypedDict):
    pros: list[str]
    cons: list[str]
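
For reference, here is a minimal sketch (not the original code, which isn't shown in full) of how this schema might be passed to gemini-1.5-flash-001 through the google-generativeai SDK; the prompt, API key placeholder, and the max_output_tokens value are assumptions for illustration. If the cap is hit, the candidate's finish_reason comes back as MAX_TOKENS and the JSON is returned incomplete, which matches the invalid syntax described above.

import typing

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, assumes a valid key

class GeminiResponse(typing.TypedDict):  # same schema as above
    pros: list[str]
    cons: list[str]

model = genai.GenerativeModel("gemini-1.5-flash-001")
result = model.generate_content(
    "List the pros and cons of static typing in Python.",  # illustrative prompt
    generation_config=genai.GenerationConfig(
        response_mime_type="application/json",
        response_schema=GeminiResponse,
        max_output_tokens=256,  # a low cap like this can cut the JSON off mid-string
    ),
)

# If the model ran out of tokens, the JSON is almost certainly incomplete.
print(result.candidates[0].finish_reason)  # e.g. FinishReason.MAX_TOKENS
print(result.text)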

Yeah, I'm getting the same issue using structured outputs. Gemini responses are cut off.

{"isSolution": false, "response": "Okay, here's the Python code for the function. It's assuming that the input array is a standard Python list. It will raise an IndexError if the input i is invalid, and it returns 1 if the input array is empty.\n

^ The response doesn't finish generating before it's returned.
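
One hedged workaround sketch, not a confirmed fix: check the candidate's finish_reason before trusting the JSON, and validate the parsed payload with a Pydantic model (optionally in strict mode, as asked above). Client-side validation can only detect the truncated output; it does not stop the model from hitting max_output_tokens. The model and function names below are assumptions, with fields matching the isSolution/response example above, and the response object is assumed to come from the google-generativeai SDK.

import json

from pydantic import BaseModel, ConfigDict, ValidationError

class SolutionResponse(BaseModel):
    model_config = ConfigDict(strict=True)  # strict mode, as asked above
    isSolution: bool
    response: str

def parse_gemini_response(response) -> SolutionResponse | None:
    # MAX_TOKENS means the output hit max_output_tokens, so the JSON is
    # almost certainly cut off and will not parse.
    if response.candidates[0].finish_reason.name == "MAX_TOKENS":
        return None
    try:
        return SolutionResponse(**json.loads(response.text))
    except (json.JSONDecodeError, ValidationError):
        return None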