Hi OpenAI Team,
I hope you’re doing well.
I’m experimenting with the Gemini models through the OpenAI libraries, following the compatibility documentation. However, I’m running into an issue with the new “gemini-2.0-flash-001” model: every time I set the model parameter to “gemini-2.0-flash-001”, the request fails with an error. If I switch back to the experimental version, the same code works as expected.
Below is the code snippet I’m using:
import OpenAI from "openai";

// Note: "GEMINI_API_KEY" is a placeholder; I use my actual key
// (loaded from an environment variable) in the real code.
const openai = new OpenAI({
  apiKey: "GEMINI_API_KEY",
  baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/",
});

const response = await openai.chat.completions.create({
  model: "gemini-2.0-flash-001",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Explain to me how AI works" },
  ],
});

console.log(response.choices[0].message);
Could you please help me understand what might be causing this error with “gemini-2.0-flash-001”? Is this a known issue, or is additional configuration required for this model? Any guidance or suggestions would be greatly appreciated.
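In case it helps with diagnosis, I can wrap the call in a try/catch and log the failing response in a readable form. Below is a minimal sketch of the helper I’d use; the formatApiError name is my own, and the error fields (status, code, message) are assumptions about what the thrown error carries, not something confirmed by the docs:

```javascript
// Turn a thrown API error into a single readable diagnostic line.
// Fields that are missing on the error object fall back to "unknown".
function formatApiError(err) {
  const status = err.status ?? "unknown";
  const code = err.code ?? "unknown";
  const message = err.message ?? String(err);
  return `API error (status ${status}, code ${code}): ${message}`;
}

// Usage around the completion call:
// try {
//   const response = await openai.chat.completions.create({ /* ... */ });
// } catch (err) {
//   console.error(formatApiError(err));
// }
```

I’m happy to share the exact status code and message this produces for the “gemini-2.0-flash-001” request if that would help.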
Thank you for your time and assistance.
Best regards,
Thiago Felizola