OpenAI Compatibility with NodeJS - Error

If you send a completion with just a single system-role message, it also fails with the same bodyless error.

OpenAI supports it, but Gemini doesn’t, so for a single-shot prompt the message must use the user role.
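
For reference, here is the shape that works for me (a minimal sketch, assuming the official openai Node package and the documented Gemini compatibility base URL; the model name and prompt are placeholders):

   import OpenAI from "openai";

   const client = new OpenAI({
     apiKey: process.env.GEMINI_API_KEY,
     baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/",
   });

   // Works: the single-shot prompt uses the "user" role.
   // Sending only a { role: "system", ... } message fails with a bodyless 400.
   const completion = await client.chat.completions.create({
     model: "gemini-2.0-flash",
     messages: [{ role: "user", content: "Say hello." }],
   });

   console.log(completion.choices[0].message.content);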


I am having no luck this time getting and responding to tool calls using the OpenAI compatibility layer for Gemini.

I define a function properly and ask the model to use it with a simple request, like: remember my birth date.

In response, I receive a tool_calls chunk:

 "choices": [
      {
         "delta": {
            "role": "assistant",
            "tool_calls": [
               {
                  "function": {
                     "arguments": "{\"argument_2\":\"False\",\"argument_1\":\"[{\\\"key_name\\\": \\\"birthdate\\\", \\\"key_value\\\":\\\"tomorrow\\\", \\\"key_description\\\": \\\"The user's birthdate\\\", \\\"key_namespace\\\": \\\"personal_data\\\"}]\"}",
                     "name": "set_in_user_memory"
                  },
                  "id": "",
                  "type": "function"
               }
            ]
         },
         "finish_reason": "stop",
         "index": 0
      }
   ],
   "created": 1739723061,
   "model": "gemini-2.0-flash",
   "object": "chat.completion.chunk",
   "usage": {
      "completion_tokens": 54,
      "prompt_tokens": 4319,
      "total_tokens": 4373
   }

Important: the tool_calls entry is missing an index and does not have a valid id. Both are present with OpenAI models and with Mistral AI.

  • All arguments are present at once (not sent as deltas in parts).
  • The finish_reason is ‘stop’; there is no follow-up chunk with finish_reason=‘tool_calls’ like I receive with OpenAI.
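
Given those differences, here is a sketch of how I consume the stream (assuming the openai Node package; client, messages, and tools are defined elsewhere). Because the arguments arrive fully formed and the only finish_reason is ‘stop’, I treat any chunk carrying tool_calls as a complete call instead of accumulating deltas:

   const stream = await client.chat.completions.create({
     model: "gemini-2.0-flash",
     messages,
     tools,
     stream: true,
   });

   const toolCalls: { name: string; arguments: string }[] = [];
   for await (const chunk of stream) {
     const delta = chunk.choices[0]?.delta;
     for (const call of delta?.tool_calls ?? []) {
       // With this bridge, name and arguments arrive together in one chunk.
       if (call.function?.name) {
         toolCalls.push({
           name: call.function.name,
           arguments: call.function.arguments ?? "",
         });
       }
     }
   }
   // If toolCalls is non-empty here, execute the functions even though
   // finish_reason was "stop" rather than "tool_calls".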

So I assume I need to execute the function as if this chunk were the complete tool_calls, then add the result to the messages. Here is what the conversation, including the tool execution result, looks like:

   {
      "role": "user",
      "content": "save my birthdate as tomorrow"
   },
   {
      "role": "assistant",
      "tool_calls": [
         {
            "function": {
               "arguments": "{\"argument_2\":\"False\",\"argument_1\":\"[{\\\"key_name\\\": \\\"date_of_birth\\\", \\\"key_value\\\":\\\"tomorrow\\\", \\\"key_description\\\": \\\"The birthday date of the user\\\", \\\"key_namespace\\\": \\\"personal_data\\\"}]\"}",
               "name": "set_in_user_memory"
            },
            "id": "",
            "type": "function"
         }
      ]
   },
   {
      "id": "",
      "role": "tool",
      "name": "set_in_user_memory",
      "content": "The parameters were successfully saved or updated"
   }

When I send these messages, I get a 400 with no details of what the error is.

Here are my asks:

  • Can anyone provide a real-life example of using the OpenAI library with streaming and tools against Gemini? Maybe what I am doing is not correct.
  • What can be done to receive the response body text for a 400 error, instead of no content, so the library can surface the error up to the client? (A sketch of what I currently do follows this list.)
  • When will this bridge become production-ready? It’s getting difficult to get something fully working (streaming + tools).
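
For context, here is how I currently try to surface the error (a sketch, continuing the openai Node package setup from above; client and messages are defined elsewhere). On these failures the SDK has no body to show:

   try {
     await client.chat.completions.create({ model: "gemini-2.0-flash", messages });
   } catch (err) {
     if (err instanceof OpenAI.APIError) {
       // err.error should hold the parsed response body, but on these
       // 400s it is empty, so there is nothing useful to surface.
       console.error(err.status, err.message, err.error);
     } else {
       throw err;
     }
   }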

Thank you for your help as always

Hugo

I have the same issue here.
There is no documentation of what a valid JSON tool response should look like.

If you simply send the messages without the tool execution response, it works:


   {
      "role": "user",
      "content": "remember that I am 52 years old"
   },
   {
      "role": "assistant",
      "tool_calls": [
         {
            "function": {
               "arguments": "{\"argument_1\":\"[{\\\"key_name\\\": \\\"age\\\", \\\"key_value\\\":\\\"52\\\", \\\"key_description\\\": \\\"The age of the user\\\", \\\"key_namespace\\\": \\\"personal_data\\\"}]\",\"argument_2\":\"False\"}",
               "name": "set_in_user_memory"
            },
            "id": "",
            "type": "function"
         }
      ]
   },

but this doesn’t:

   {
      "role": "user",
      "content": "remember that I am 52 years old"
   },
   {
      "role": "assistant",
      "tool_calls": [
         {
            "function": {
               "arguments": "{\"argument_1\":\"[{\\\"key_name\\\": \\\"age\\\", \\\"key_value\\\":\\\"52\\\", \\\"key_description\\\": \\\"The age of the user\\\", \\\"key_namespace\\\": \\\"personal_data\\\"}]\",\"argument_2\":\"False\"}",
               "name": "set_in_user_memory"
            },
            "id": "",
            "type": "function"
         }
      ]
   },
   {
      "tool_call_id": "",
      "role": "tool",
      "name": "set_in_user_memory",
      "content": "The parameters were successfully saved or updated in the user memory."
   }

To make it work, you need to set the id to a non-empty value (the tool function name, for example) and make sure it matches your tool_call_id value.

But this does:

   {
      "role": "user",
      "content": "remember that I am 52 years old"
   },
   {
      "role": "assistant",
      "tool_calls": [
         {
            "function": {
               "arguments": "{\"argument_1\":\"[{\\\"key_name\\\": \\\"age\\\", \\\"key_value\\\":\\\"52\\\", \\\"key_description\\\": \\\"The age of the user\\\", \\\"key_namespace\\\": \\\"personal_data\\\"}]\",\"argument_2\":\"False\"}",
               "name": "set_in_user_memory"
            },
            "id": "set_in_user_memory",
            "type": "function"
         }
      ]
   },
   {
      "tool_call_id": "set_in_user_memory",
      "role": "tool",
      "name": "set_in_user_memory",
      "content": "The parameters were successfully saved or updated in the user memory."
   }

Voilà!
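
In code, the working shape looks roughly like this (a sketch; argsJson stands in for the stringified arguments taken from the tool_calls chunk):

   // Placeholder for the arguments string returned in the tool_calls chunk.
   const argsJson = JSON.stringify({ argument_1: "[...]", argument_2: "False" });

   const callId = "set_in_user_memory"; // any non-empty string seems to work

   const messages = [
     { role: "user", content: "remember that I am 52 years old" },
     {
       role: "assistant",
       tool_calls: [
         {
           id: callId, // must be non-empty
           type: "function",
           function: { name: "set_in_user_memory", arguments: argsJson },
         },
       ],
     },
     {
       role: "tool",
       tool_call_id: callId, // must match the id above, or you get a bodyless 400
       name: "set_in_user_memory",
       content: "The parameters were successfully saved or updated in the user memory.",
     },
   ];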

It’s the end of February, and while presence_penalty was accepted, I still received a 400 error due to frequency_penalty.

This helped me so much! I was stuck on this for ages. I love the Gemini models, but they could indeed use some maturity work on reporting back useful errors.

This is driving me crazy as well. Just tell me what’s wrong with the request! Two things I found that trigger this:

  • setting reasoning_effort in the request
  • using a union or discriminated union in the response-format JSON schema (see the sketch after this list)
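
For example, a response_format along these lines (a hypothetical sketch; the schema content is made up) gets the bodyless 400 for me, while the same request goes through once the anyOf union is flattened into a single object schema:

   const responseFormat = {
     type: "json_schema" as const,
     json_schema: {
       name: "result",
       schema: {
         // This anyOf union triggers the 400 on the Gemini bridge.
         anyOf: [
           { type: "object", properties: { ok: { type: "boolean" } }, required: ["ok"] },
           { type: "object", properties: { error: { type: "string" } }, required: ["error"] },
         ],
       },
     },
   };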

Hi @Nathan_Glenn

Welcome to the forum.

Kindly consider that the OpenAI compatibility is still experimental and therefore not all attributes or features have been implemented.

Cheers

Thank you for the welcome! My main complaint is not that support is incomplete. I understand that (and I’m really excited to see more complete support!). The issue is the lack of any indication of the error cause. The “400 (no body)” response is very vague, and I wasn’t sure if it meant that I had the wrong endpoint, a bad API key, a prompt that was too long, a malformed request somehow, etc.