The documentation for gemini-robotics-er-1.6-preview lists an input token limit of 1,048,576, but when using the Python API I have been seeing the following error consistently:

ClientError: 400 INVALID_ARGUMENT. {'error': {'code': 400, 'message': 'The input token count exceeds the maximum number of tokens allowed 131072.', 'status': 'INVALID_ARGUMENT'}}
I have also checked with the method below, and it reports the same 131072 limit:
model_info = client.models.get(model='gemini-robotics-er-1.6-preview')
print(f"{model_info.input_token_limit=}")
print(f"{model_info.output_token_limit=}")
# Returns:
# model_info.input_token_limit=131072
# model_info.output_token_limit=65536
I'm on a Tier 3 plan. Is this a bug in how the model is configured for the Python API, or is the documented 1,048,576 limit simply not what the API enforces?
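In case it helps anyone hitting the same error, here is a minimal sketch of the pre-check I've been using to compare my prompt size against whatever limit the API actually reports. It assumes a google-genai `client` like the one above, and the helper name `fits_input_limit` is just mine, not from the SDK:

```python
def fits_input_limit(client, model, contents):
    """Return True if `contents` fits within the model's reported input limit.

    Uses the limit the API itself reports via models.get (131072 in my case,
    despite the documented 1,048,576) rather than the documented value.
    """
    limit = client.models.get(model=model).input_token_limit
    used = client.models.count_tokens(model=model, contents=contents).total_tokens
    return used <= limit
```

Calling `fits_input_limit(client, 'gemini-robotics-er-1.6-preview', my_prompt)` before each request at least avoids burning a call on a guaranteed 400, but it obviously doesn't explain the mismatch with the documentation.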