Logprobs is not enabled for Gemini models

Hello everyone,
I’m encountering an issue with the Gemini API: a request that worked just fine yesterday now fails with this error:
{'error': {'code': 400, 'message': 'Logprobs is not enabled for models/gemini-2.5-flash', 'status': 'INVALID_ARGUMENT'}}

Here is the structure of my request (prompts replaced with dummy ones):

from google import genai
from google.genai import types

g_client = genai.Client()  # assumes GEMINI_API_KEY is set in the environment

resp = g_client.models.generate_content(
    model='gemini-2.5-flash',
    contents="how are you?",
    config=types.GenerateContentConfig(
        system_instruction="you are a helpful assistant",
        temperature=0,
        response_logprobs=True,  # ask for log probabilities in the response
        logprobs=0,              # number of top alternative tokens per step
        response_mime_type="application/json",
    ),
)
  1. Yesterday this worked: I got log probabilities in the response.
  2. Today: same code, no change on my end, just the error above.
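
For reference, this is roughly how I was reading the log probabilities out of the response while it still worked (a sketch; the field names come from the SDK’s LogprobsResult type and may differ across google-genai versions):

# Assumes `resp` from the request above.
logprobs_result = resp.candidates[0].logprobs_result
for chosen in logprobs_result.chosen_candidates:
    # Each entry carries the decoded token and its log probability
    print(chosen.token, chosen.log_probability)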

Questions:

  1. Has anyone else seen the same error?
  2. Is there a known change/announcement that logprobs support in gemini-2.5-flash was disabled or temporarily removed?
  3. Are there models where logprobs are still supported?
  4. Any recommended workaround until this is resolved?

Thanks in advance for any help or insight!


I have encountered the same issue … it worked yesterday, and today it is throwing the error. gemini-2.0-flash and gemini-2.0-flash-lite still work, though.
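
Since gemini-2.0-flash still accepts the parameter, a fallback along these lines could tide things over (a sketch: the error classes come from google.genai.errors, and matching on the error message is an assumption):

from google import genai
from google.genai import errors, types

client = genai.Client()

def generate_with_logprobs(contents: str):
    config = types.GenerateContentConfig(
        response_logprobs=True,
        logprobs=0,
    )
    try:
        # Try the 2.5 model first
        return client.models.generate_content(
            model='gemini-2.5-flash', contents=contents, config=config)
    except errors.ClientError as e:
        # Fall back only on the specific 400 logprobs rejection
        if e.code == 400 and 'Logprobs is not enabled' in str(e):
            return client.models.generate_content(
                model='gemini-2.0-flash', contents=contents, config=config)
        raise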


I got the same error.

I encountered the same error starting Oct 24th. I wonder why they suddenly disabled this parameter. :thinking:


I have the same issue; it started yesterday. Does anyone have any updates on this topic?
Thanks :slight_smile:

I’m having the same issue, and it is getting quite frustrating.

I haven’t come across an announcement either; it seems logprobs support for the gemini-2.5 series of models has just been silently removed.

Same issue, any updates?

For context, I have been trying to use Gemini’s logprobs for almost a year now across several important projects. TL;DR: support is very, very inconsistent. Sometimes access is allowed. Then it only works with Vertex AI. Then it works with the standard Gemini API. But then the package updates and the API usage is completely different. There was a brief period of peace these past few months where it worked on both the Vertex AI and GenAI APIs. Now we are back to square one, as far as I can see :frowning: …
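
If it helps anyone who needs to hop between the two backends, the google-genai SDK lets the same code target either one; a minimal sketch (the project and location values here are placeholders):

from google import genai

# Gemini Developer API backend (reads GEMINI_API_KEY from the environment)
dev_client = genai.Client()

# Vertex AI backend ('my-project' and 'us-central1' are placeholder values)
vertex_client = genai.Client(
    vertexai=True,
    project='my-project',
    location='us-central1',
)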

Would appreciate if anyone has a solution that works.