I’ve encountered a significant issue when trying to use the OpenAI JavaScript library with Google’s Gemini API. It appears there’s a compatibility problem leading to a CORS (Cross-Origin Resource Sharing) error.
Specifically, the OpenAI library seems to be making requests in a way that's not compatible with how Google's Gemini API handles CORS. The browser blocks the request because the Gemini API's response lacks the appropriate Access-Control-Allow-Origin header.
Here’s the code I’m using to reproduce the issue:
```ts
import { OpenAI } from "openai";

// Base URL for Gemini's OpenAI-compatible endpoint (this matches the
// /v1beta/openai/chat/completions path shown in the error message below).
const openai = new OpenAI({
  baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/",
  apiKey: import.meta.env.VITE_GEMINI_API_KEY,
  dangerouslyAllowBrowser: true,
});

const response = await openai.chat.completions.create({
  model: "gemini-1.5-flash",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Explain to me how AI works" },
  ],
});
```
Steps to Reproduce:
- Create a new Vite React project.
- Install the `openai` library: `npm install openai`.
- Replace the contents of `src/App.tsx` with the code above.
- Make sure to set the `VITE_GEMINI_API_KEY` environment variable.
- Run the app using `npm run dev`.
- Open the browser console, and you will see the CORS error.
The error message I receive in the browser console is:
```
Access to fetch at 'https://generativelanguage.googleapis.com/v1beta/openai/chat/completions' from origin 'http://localhost:5173' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
```
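For context on why the browser sends that OPTIONS preflight at all: under the Fetch spec, a cross-origin request only skips the preflight when every header is "CORS-safelisted". A JSON POST with an `Authorization` header (plus the extra headers the SDK attaches) is never a "simple" request, so the server must answer the preflight with the right `Access-Control-Allow-*` headers. A simplified sketch of the rule (`needsPreflight` is just an illustration, not part of any library):

```typescript
// Simplified version of the Fetch spec's rule: a request avoids the OPTIONS
// preflight only if all headers are CORS-safelisted, and Content-Type is one
// of three "simple" MIME types. (The real spec also checks method and more.)
const SAFELISTED = new Set(["accept", "accept-language", "content-language", "content-type"]);
const SIMPLE_CONTENT_TYPES = new Set([
  "application/x-www-form-urlencoded",
  "multipart/form-data",
  "text/plain",
]);

function needsPreflight(headers: Record<string, string>): boolean {
  return Object.entries(headers).some(([name, value]) => {
    const n = name.toLowerCase();
    if (!SAFELISTED.has(n)) return true; // e.g. Authorization, x-goog-*, SDK telemetry headers
    // Content-Type is only safelisted for the three "simple" MIME types.
    return n === "content-type" && !SIMPLE_CONTENT_TYPES.has(value.toLowerCase());
  });
}

console.log(needsPreflight({ "Content-Type": "application/json" })); // true: JSON POST is preflighted
console.log(needsPreflight({ Accept: "text/plain" })); // false: Accept alone is safelisted
```

So any OpenAI-compatible chat call from a browser is going to be preflighted; the difference between providers is whether their endpoint answers the preflight with CORS headers.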
Crucially, I’ve successfully used this same OpenAI library with other API providers without encountering this CORS issue. This suggests the problem is specific to how the Gemini API handles the library's requests, not a general CORS problem on my end. For example, the identical code works against Groq's OpenAI-compatible endpoint:
```ts
const openai = new OpenAI({
  baseURL: "https://api.groq.com/openai/v1",
  apiKey: import.meta.env.VITE_GROQ_API_KEY,
  dangerouslyAllowBrowser: true,
});

const response = await openai.chat.completions.create({
  model: "llama-3.3-70b-versatile",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Explain to me how AI works" },
  ],
});
```
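In the meantime, a workaround that lets me keep developing is to route the request through Vite's dev-server proxy, so the browser only ever talks same-origin and CORS never applies. This is a dev-only sketch, not a fix for the underlying issue; the `/gemini` prefix is an arbitrary name I chose:

```typescript
// vite.config.ts — dev-only workaround: proxy same-origin /gemini/* requests
// to Google's API host, so the browser never makes a cross-origin request.
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    proxy: {
      // "/gemini" is an arbitrary prefix chosen for this sketch.
      "/gemini": {
        target: "https://generativelanguage.googleapis.com",
        changeOrigin: true,
        rewrite: (path) => path.replace(/^\/gemini/, ""),
      },
    },
  },
});
```

With this in place, the client's `baseURL` can point at `/gemini/v1beta/openai/` and the request succeeds in dev, which further supports the idea that the failure is purely a CORS/preflight issue on the Gemini endpoint.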