OpenAI compatibility + multimodal?

I want to adapt my current REST requests for multimodal Gemini models (e.g. gemini-2.0-flash-exp-image-generation) to the OpenAI compatibility endpoint, but then of course I'm no longer able to pass the "responseModalities" parameter (it isn't supported in the OpenAI library), which I need in order to make the responses multimodal (in my case, text and/or image).

Is there any workaround I'm missing, or are there any plans to integrate native Gemini API parameters into the OpenAI compatibility layer in the future?

Thanks

Hi @Javier_De_Pedro_Lope ,

Welcome to the forum!

The Gemini API provides OpenAI REST library compatibility. Please refer to OpenAI compatibility | Gemini API | Google AI for Developers.
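One thing worth trying while the compatibility layer lacks a dedicated keyword: since you're already making raw REST requests, you can add the Gemini-native `responseModalities` field directly to the JSON body sent to the compatibility endpoint. This is a minimal sketch, assuming the endpoint accepts and forwards that extra field (that behavior is an assumption, not something I can confirm from the docs); the endpoint URL is the documented OpenAI-compatibility base URL, and the helper names are mine.

```python
import json
import os
import urllib.request

# Documented OpenAI-compatibility endpoint for the Gemini API.
BASE_URL = "https://generativelanguage.googleapis.com/v1beta/openai/chat/completions"


def build_payload(prompt: str) -> dict:
    """Build an OpenAI-style chat body, plus the Gemini-native field.

    "responseModalities" is not part of the OpenAI schema; whether the
    compatibility layer honors it here is an assumption to verify.
    """
    return {
        "model": "gemini-2.0-flash-exp-image-generation",
        "messages": [{"role": "user", "content": prompt}],
        "responseModalities": ["TEXT", "IMAGE"],
    }


def send(prompt: str, api_key: str) -> bytes:
    """POST the payload to the compatibility endpoint and return raw bytes."""
    req = urllib.request.Request(
        BASE_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()


if __name__ == "__main__":
    payload = build_payload("Generate a picture of a lighthouse at dusk")
    print(json.dumps(payload, indent=2))
    # Uncomment to actually call the endpoint:
    # print(send("Generate a picture of a lighthouse at dusk",
    #            os.environ["GEMINI_API_KEY"]))
```

If you use the OpenAI Python SDK instead of raw requests, its `extra_body` argument to `client.chat.completions.create(...)` serves the same purpose: it injects arbitrary fields into the request body that the SDK has no keyword for. Either way, check the response to confirm the field was not silently dropped.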

Thank you!