I'm working on a project I started a long time ago, and it has used the OpenAI SDK from the beginning. Now I'm integrating Gemini models and want to add Gemini 2.5 Pro to my application. Since the whole app is built around the OpenAI SDK, I'm calling Gemini through the OpenAI-compatible endpoint, but I can't find any reasoning/thinking content in the responses from the Gemini 2.5 models, even with reasoning_effort set to "high". Can anyone tell me whether I'm doing something wrong, or is Google just late again?
import OpenAI from "openai";

// Point the OpenAI SDK at Gemini's OpenAI-compatible endpoint
const openai = new OpenAI({
  apiKey: "API_KEY",
  baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/",
});

async function main() {
  const completion = await openai.chat.completions.create({
    model: "gemini-2.5-pro",
    reasoning_effort: "high",
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "Hello!" },
    ],
    stream: true,
  });

  // Log every streamed chunk as raw JSON; I never see any reasoning/thinking content here
  for await (const chunk of completion) {
    console.log(JSON.stringify(chunk));
  }
}

main();
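For anyone trying to reproduce this: Google's OpenAI-compatibility docs describe a Google-specific extra_body passthrough with a thinking_config (thinking_budget, include_thoughts) for requesting thought summaries. The sketch below is what I plan to try next; the field names are taken from those docs and I haven't confirmed how (or whether) the thought summaries show up in the response from Node, so treat it as an assumption rather than a working solution. It's non-streaming here just so the full response can be dumped and inspected.

import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: "API_KEY",
  baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/",
});

async function main() {
  const completion = await openai.chat.completions.create({
    model: "gemini-2.5-pro",
    reasoning_effort: "high",
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "Hello!" },
    ],
    // Google-specific passthrough field, not part of OpenAI's schema; the SDK
    // forwards unknown body fields. In TypeScript you may need @ts-expect-error
    // or a cast, since extra_body isn't in the SDK typings.
    extra_body: {
      google: {
        thinking_config: {
          thinking_budget: 1024,  // token budget for thinking (field name from Google's compat docs)
          include_thoughts: true, // ask for thought summaries (field name from Google's compat docs)
        },
      },
    },
  });

  // Dump the whole response to see whether any thought/reasoning content comes back
  console.log(JSON.stringify(completion, null, 2));
}

main();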