I saw this post yesterday: Gemini is now accessible from the OpenAI Library - Google Developers Blog
It worked yesterday, but it is not working right now.
Same here. Anyone know what’s up?
This should be fixed now! Please let me know if you’re still running into any issues.
response_object.usage is None when calling Gemini via the OpenAI API
Among other things
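For context, a minimal repro sketch (assuming the compatibility base_url from the blog post and the standard openai Python client; the key value is a placeholder) looks something like this:

from openai import OpenAI

client = OpenAI(
    api_key="GEMINI_API_KEY",  # placeholder for your Gemini API key
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)

response_object = client.chat.completions.create(
    model="gemini-1.5-flash",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)

print(response_object.choices[0].message.content)
print(response_object.usage)  # comes back as None instead of the token counts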
I’ve been using the following endpoint:
https://generativelanguage.googleapis.com/v1beta/openai/chat/completions
However, I was authenticating with an API key in the x-goog-api-key header instead of a bearer access token, like so:
POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions
x-goog-api-key: {{apiKey}}
Content-Type: application/json

{
  "model": "gemini-1.5-flash",
  "messages": [
    { "role": "user", "content": [
      { "type": "text", "text": "Why is the sky blue?" }
    ]}
  ]
}
Unfortunately, that returns an HTTP 400 Bad Request.
After re-reading the original article and swapping to the Authorization header, I can confirm it’s working (for me).
POST https://generativelanguage.googleapis.com/v1beta/openai/chat/completions
Authorization: Bearer {{apiKey}}
Content-Type: application/json

{
  "model": "gemini-1.5-flash",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Explain to me how AI works" }
  ]
}
This returns an HTTP 200 OK response.
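For anyone scripting this outside the OpenAI client, a rough Python requests equivalent of the working call above would be something like the sketch below (the Authorization header is the piece that changed from my failing attempt; the API key value is a placeholder):

import requests

API_KEY = "YOUR_GEMINI_API_KEY"  # placeholder

url = "https://generativelanguage.googleapis.com/v1beta/openai/chat/completions"
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
payload = {
    "model": "gemini-1.5-flash",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain to me how AI works"},
    ],
}

resp = requests.post(url, headers=headers, json=payload)
print(resp.status_code)  # 200 with the Bearer header; 400 with x-goog-api-key in my tests
print(resp.json()["choices"][0]["message"]["content"])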