Please fix the Gemini API

The Gemini API has been literally unusable for the last 3 days. All you get are empty messages or error 500. A lot of people are reporting this. Please fix it.

12 Likes

Hi @Mohamed_Amine,

Welcome to the Forum!
Could you please let us know which Gemini model you are using?

2 Likes

Gemini 2.5 Pro, and I am a paid user.

1 Like

I’m using Gemini 2.5 Pro to extract visual information and text from PDFs into Google Sheets. Half of the outputs are empty, with no error…
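
For reference, this is roughly the shape of the call that comes back empty: a simplified sketch, not my exact code, using the google-genai Python SDK, where sample.pdf and the prompt are placeholders.

# pdf_extract_sketch.py
# Prereq:
#   pip install --upgrade google-genai
#   export GEMINI_API_KEY="your_api_key"

from pathlib import Path

from google import genai
from google.genai import types

client = genai.Client()  # picks up the API key from the environment

pdf_bytes = Path("sample.pdf").read_bytes()  # placeholder file name

response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents=[
        types.Part.from_bytes(data=pdf_bytes, mime_type="application/pdf"),
        "Extract all text and describe every figure and table in this PDF.",
    ],
)

# This is where the problem shows up: no exception is raised, but
# response.text is often empty instead of containing the extracted content.
print(response.text)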

1 Like

The same thing is happening to me and a lot of other users. I have a lot of apps that use Gemini 2.5 Pro, and they are all broken because of this.

For the last 5 hours, the video understanding API has not been working for the 2.5 models; it works fine for 2.0 Flash.

The GenAI SDK in Python throws a 500:
2025-08-14 11:35:41,623 [INFO] httpx – HTTP Request: POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-pro:generateContent “HTTP/1.1 500 Internal Server Error”

Using AI Studio (free) gives Internal Server Error after uploading the video file.
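
In case it helps anyone else who is stuck, here is a simplified sketch of the failing call wrapped in a retry on server errors, assuming the google-genai Python SDK; clip.mp4 and the retry settings are placeholders, not my production code.

# video_retry_sketch.py
# Prereq:
#   pip install --upgrade google-genai
#   export GEMINI_API_KEY="your_api_key"

import time

from google import genai
from google.genai import errors

client = genai.Client()

# Upload a placeholder video via the File API and wait until it is processed.
video = client.files.upload(file="clip.mp4")
while video.state and video.state.name == "PROCESSING":
    time.sleep(5)
    video = client.files.get(name=video.name)

def generate_with_retry(contents, retries=3, delay=10.0):
    """Retry generateContent a few times when the API returns a 5xx."""
    for attempt in range(1, retries + 1):
        try:
            return client.models.generate_content(
                model="gemini-2.5-pro",
                contents=contents,
            )
        except errors.ServerError as e:  # 500 Internal Server Error, 503, ...
            print(f"Attempt {attempt} failed with {e.code}; retrying in {delay}s")
            time.sleep(delay)
    raise RuntimeError("Still getting server errors after all retries")

resp = generate_with_retry([video, "Describe what happens in this video."])
print(resp.text)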

I encountered the same problem and don’t know how to solve it.

I’ve been having the same problem for about 3 days now. I’m using an API key for the Generative Language API, and most of the time I’m getting either no response or a truncated response, especially on longer prompts. It’s a bit of a disaster of a service at the moment. gemini-2.5-flash is slightly better, but it too is intermittent.
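
For anyone trying to tell the failure modes apart, here is a rough diagnostic sketch (not my actual code) using the google-genai Python SDK, with a placeholder prompt. A genuinely truncated response normally reports a finish_reason such as MAX_TOKENS, whereas the empty responses in this thread come back with no content parts at all.

# response_check_sketch.py
# Prereq:
#   pip install --upgrade google-genai
#   export GEMINI_API_KEY="your_api_key"

from google import genai

client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Write a detailed, multi-paragraph history of the printing press.",
)

if not response.candidates:
    print("Empty response: no candidates returned at all")
else:
    candidate = response.candidates[0]
    # STOP means a normal finish; MAX_TOKENS indicates the output was cut off.
    print("finish_reason:", candidate.finish_reason)
    parts = candidate.content.parts if candidate.content else None
    if not parts:
        print("Candidate returned, but with no content parts (the 'empty message' case)")
    else:
        print(response.text)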

2 Likes

Hello guys,

Thank you for reporting this. The engineering team is aware of the empty response issue and is actively working on a fix. We will keep you updated as the fix is identified and rolled out.

1 Like

Yes, it’s currently unusable: it’s not parsing params properly and keeps giving 500 status errors.

1 Like

Same here. I get a constant 500 error when I try to use URIs from the File API upload that point to .txt files. It works fine for PDFs, but not for .txt files. I don’t get it.
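
To make it concrete, here is a minimal reproduction sketch of what I mean, simplified and with placeholder file names (notes.txt and sample.pdf), using the google-genai Python SDK: both files are uploaded through the File API and passed by reference into the same prompt, and on my side only the .txt call errors out.

# txt_vs_pdf_sketch.py
# Prereq:
#   pip install --upgrade google-genai
#   export GEMINI_API_KEY="your_api_key"

from google import genai
from google.genai import errors

client = genai.Client()

# Placeholder local files; both are uploaded via the File API.
documents = [("sample.pdf", "application/pdf"), ("notes.txt", "text/plain")]

for path, mime_type in documents:
    uploaded = client.files.upload(file=path, config={"mime_type": mime_type})
    try:
        response = client.models.generate_content(
            model="gemini-2.5-pro",
            contents=[uploaded, "Summarize this document in one paragraph."],
        )
        print(f"{path}: OK -> {str(response.text)[:80]!r}")
    except errors.ServerError as e:
        # The behavior being reported: PDFs succeed, .txt uploads come back 500.
        print(f"{path}: server error {e.code}")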

1 Like

Same problem for the last 48 hours.

1 Like

Any update on the issue?

1 Like

Hi @vjaykrsna,

This issue has been escalated to our engineering team, and they are actively working on a resolution.

2 Likes

We request that Google update the Gemini API status.

1 Like

I raised this months ago, and I see it’s still an issue, so I moved to OpenAI. I’m hoping this gets fixed so I can move back, as it’s unusable with the current 500 errors and blank responses.

2 Likes

Yes, the API is very unstable. It’s not just an inconvenience; it’s basically unusable. I ended up switching to OpenAI o3.

1 Like

I am also getting the same empty responses or 500 errors. Does the OpenAI API take in a whole PDF? Last I checked, one could only pass images (or an image of a PDF page) to it.
I started using Gemini because I can pass a full PDF to it.

1 Like

It does take full PDF files.

Recently, OpenAI’s API also started supporting PDF uploads. Unlike Gemini, you don’t have to wait for the upload to finish before using the PDF in a prompt — you can use it right away. However, in my experience, it feels about three times slower than Gemini. If Gemini becomes stable again, I’m planning to switch back to it.

# pdf_to_markdown_min.py
# Prereq:
#   pip install --upgrade openai python-dotenv
#   export OPENAI_API_KEY="your_api_key"

from __future__ import annotations
import os
from pathlib import Path
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

MODEL = os.getenv("OPENAI_MODEL", "gpt-5")  
INPUT_PDF = "sample.pdf"
OUTPUT_MD = "sample.md"

# Minimal prompt for a doc example
PROMPT = """Convert the following PDF into Markdown.
Output only Markdown text.
"""

def main() -> None:
    client = OpenAI()

    # 1) Upload PDF
    with open(INPUT_PDF, "rb") as f:
        uploaded = client.files.create(
            file=f,
            purpose="assistants"  # or "responses"
        )

    # 2) Call Responses API
    resp = client.responses.create(
        model=MODEL,
        input=[{
            "role": "user",
            "content": [
                {"type": "input_text", "text": PROMPT},
                {"type": "input_file", "file_id": uploaded.id}
            ]
        }],
        response_format={"type": "text"},
        # Optional:
        # reasoning={"effort": "low"},
        # max_output_tokens=60000,
    )

    # 3) Save Markdown
    md = resp.output_text
    Path(OUTPUT_MD).write_text(md, encoding="utf-8")
    print(f"Saved: {Path(OUTPUT_MD).resolve()}")

if __name__ == "__main__":
    main()