Gemini-robotics-er-1.6-preview input token limit doesn't match documentation

The documentation for gemini-robotics-er-1.6-preview says the input token limit is 1,048,576, but when using the Python API I've been seeing the following error consistently: ClientError: 400 INVALID_ARGUMENT. {'error': {'code': 400, 'message': 'The input token count exceeds the maximum number of tokens allowed 131072.', 'status': 'INVALID_ARGUMENT'}}

I have also checked with the method below and see the same 131,072 limit.

model_info = client.models.get(model='gemini-robotics-er-1.6-preview')
print(f"{model_info.input_token_limit=}")

# Returns:
# model_info.input_token_limit=131072
# model_info.output_token_limit=65536

I’m on a Tier 3 plan. Is there a bug in the configuration of the model in the Python API?

I'm running into the same issue, also on Tier 3 :confused:

I see the documentation has now been updated and the input token limit is listed as 131,072. So gemini-robotics-er-1.6-preview has a much lower input limit than the now-shut-down gemini-robotics-er-1.5-preview, which had an input token limit of 1,048,576.
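In the meantime, it may help to pre-flight prompts against the limit that `models.get` actually reports rather than the documented figure. Here's a minimal sketch of such a guard; note the 4-characters-per-token ratio is only a rough heuristic I'm assuming for English text, not the model's real tokenizer (for an exact count you'd call the API's token-counting endpoint, e.g. `client.models.count_tokens`, before sending):

```python
# Coarse pre-flight guard against the 131,072-token input limit.
# The chars/4 ratio is a rough estimate, NOT the model's tokenizer;
# use the token-counting endpoint for an exact figure.

INPUT_TOKEN_LIMIT = 131_072  # value reported by models.get for 1.6-preview

def estimated_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits_input_limit(text: str, limit: int = INPUT_TOKEN_LIMIT) -> bool:
    """Return True if the prompt is likely under the model's input limit."""
    return estimated_tokens(text) <= limit
```

This at least catches oversized prompts client-side instead of waiting for the 400 INVALID_ARGUMENT to come back.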