Confirm this is a Node library issue and not an underlying OpenAI API issue
- This is an issue with the Node library
Describe the bug
I am following the openai-node/examples/tool-call-helpers.ts example (master branch of openai/openai-node on GitHub), with modifications for my app. The runner is able to make the tool call and get the results, but during the final call to get the final answer it fails and gives me this error -
{
  failed: true,
  error: BadRequestError: 400 status code (no body)
      at APIError.generate (file:///Users/prenx4x/WebstormProjects/hissab/node_modules/.pnpm/openai@5.1.0_zod@3.25.30/node_modules/openai/src/core/error.ts:72:14)
      at OpenAI.makeStatusError (file:///Users/prenx4x/WebstormProjects/hissab/node_modules/.pnpm/openai@5.1.0_zod@3.25.30/node_modules/openai/src/client.ts:400:28)
      at OpenAI.makeRequest (file:///Users/prenx4x/WebstormProjects/hissab/node_modules/.pnpm/openai@5.1.0_zod@3.25.30/node_modules/openai/src/client.ts:616:24)
      at async _ChatCompletionRunner._createChatCompletion (file:///Users/prenx4x/WebstormProjects/hissab/node_modules/.pnpm/openai@5.1.0_zod@3.25.30/node_modules/openai/src/lib/AbstractChatCompletionRunner.ts:235:28)
      at async _ChatCompletionRunner._runTools (file:///Users/prenx4x/WebstormProjects/hissab/node_modules/.pnpm/openai@5.1.0_zod@3.25.30/node_modules/openai/src/lib/AbstractChatCompletionRunner.ts:318:46) {
    status: 400,
    headers: Headers(11) {
      'accept-ranges' => 'none',
      'alt-svc' => 'h3=":443"; ma=2592000,h3-29=":443"; ma=2592000',
      'content-type' => 'application/json; charset=UTF-8',
      'date' => 'Tue, 03 Jun 2025 17:21:16 GMT',
      'server' => 'scaffolding on HTTPServer2',
      'server-timing' => 'gfet4t7; dur=33',
      'transfer-encoding' => 'chunked',
      'vary' => 'X-Origin, Referer, Origin,Accept-Encoding',
      'x-content-type-options' => 'nosniff',
      'x-frame-options' => 'SAMEORIGIN',
      'x-xss-protection' => '0',
      [immutable]: true
    },
    requestID: null,
    error: undefined,
    code: undefined,
    param: undefined,
    type: undefined
  }
}
Console Logs -
msg {
  role: 'assistant',
  tool_calls: [ { function: [Object], id: '', type: 'function' } ],
  parsed: null,
  content: null
}
functionCall {
  arguments: '{"expressions":["20+30"]}',
  name: 'calculate_with_hissab',
  parsed_arguments: null
}
msg {
  role: 'tool',
  tool_call_id: '', // Empty
  content: '[{"expression":"20+30","result":"50"}]'
}
functionCallResult [{"expression":"20+30","result":"50"}]
My function -
export async function generate(
  openai: OpenAI,
  model: Models,
  systemInstructions: string,
  userPrompt: string,
): Promise<any> {
  const messages: ChatCompletionMessageParam[] = [
    { role: "system", content: systemInstructions },
    { role: "user", content: userPrompt },
  ];

  const tools = [hissabExpFunction];

  const runner = openai.chat.completions
    .runTools({
      model,
      stream: false,
      tools,
      messages,
    })
    .on("message", (msg) => console.log("msg", msg))
    .on("functionToolCall", (functionCall) =>
      console.log("functionCall", functionCall),
    )
    .on("functionToolCallResult", (functionCallResult) =>
      console.log("functionCallResult", functionCallResult),
    )
    .on("content", (diff) => process.stdout.write(diff));

  console.log("Running...");

  const result = await runner.finalChatCompletion(); // This fails and gives error

  console.log("Ran...");
  console.dir(runner.messages, { depth: null });
  console.log("result", result);

  return {
    // some json
  };
}
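For completeness, this is roughly how I invoke generate. The client setup, model value, and prompts below are simplified placeholders for my actual app code (Models is my own union of model name strings, not something from the SDK):

```ts
import OpenAI from "openai";

// Simplified stand-in for how I call generate() in my app.
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const result = await generate(
  openai,
  "gpt-4o-mini" as Models,           // placeholder model value from my Models type
  "You are a calculator assistant.", // placeholder system instructions
  "What is 20 + 30?",                // placeholder user prompt
);
console.log(result);
```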
My Tool definition -
export const hissabExpFunction: RunnableToolFunction<{
  expressions: string[];
}> = {
  type: "function",
  function: {
    name: "calculate_with_hissab",
    description: "Hissab expressions corresponding to the user prompt",
    parameters: {
      type: "object",
      properties: {
        expressions: {
          type: "array",
          description: "List of Hissab expressions",
          items: {
            type: "string",
          },
        },
      },
    },
    function: calcExp,
    parse: JSON.parse,
  },
};
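calcExp itself is not shown above. A simplified stand-in with the same shape as my real implementation (which evaluates each expression with the Hissab engine) looks roughly like this, matching the functionCallResult log above:

```ts
// Simplified stand-in for my real calcExp. The actual implementation evaluates each
// expression with the Hissab engine; here the result is a hard-coded placeholder.
async function calcExp(args: { expressions: string[] }): Promise<string> {
  const results = args.expressions.map((expression) => ({
    expression,
    result: "50", // placeholder; the real value comes from the Hissab evaluator
  }));
  return JSON.stringify(results);
}
```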
A few other things I noticed:
The tool call id is blank in the messages after the tool call: both the assistant's tool_calls entry (id: '') and the tool message's tool_call_id are empty strings. parsed_arguments is also null, although I am not sure what parsed_arguments is supposed to contain.
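For reference, this is the shape I expected the assistant tool call to have based on the Chat Completions docs (the id value below is made up):

```ts
// Expected shape (id value is made up); the tool message's tool_call_id should echo this id.
const expectedAssistantMessage = {
  role: "assistant",
  content: null,
  tool_calls: [
    {
      id: "call_abc123", // non-empty in my understanding, but comes back as '' in my logs
      type: "function",
      function: {
        name: "calculate_with_hissab",
        arguments: '{"expressions":["20+30"]}',
      },
    },
  ],
};
```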
Is this a bug in the library, or am I doing something wrong here? Thanks.
OS
macOS
Node version
Node v22
Library version
5.1.0