Hi – the gemini-2.0-flash-live “family” of models has been great.
It would be super useful if, in addition to PCM16 i/o, these streaming models supported mulaw / g711_ulaw. Is that possible?
My application pipes audio to/from a phone system, so currently I have to re-encode to/from PCM16 on the fly. This works, but it's choppy. The OpenAI realtime system supports g711_ulaw i/o and it's much smoother – but I'd rather stick with Gemini.
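For anyone stuck on this in the meantime, the on-the-fly re-encoding described above can be done without third-party dependencies. Below is a minimal sketch of the standard G.711 μ-law codec in pure Python (the function names and the little-endian PCM16 framing are my choices, not anything from the Gemini API; this is the classic Sun/G.711 bit-twiddling algorithm, assuming signed 16-bit input samples):

```python
import struct

# Constants from the standard G.711 mu-law algorithm.
BIAS = 0x84   # 132, added to the magnitude before the segment search
CLIP = 32635  # max magnitude that still fits after biasing

def linear_to_ulaw(sample: int) -> int:
    """Encode one signed 16-bit PCM sample to an 8-bit mu-law byte."""
    sign = 0x80 if sample < 0 else 0x00
    magnitude = min(abs(sample), CLIP) + BIAS
    # Find the segment (exponent): position of the highest set bit.
    exponent = 7
    mask = 0x4000
    while exponent > 0 and not (magnitude & mask):
        exponent -= 1
        mask >>= 1
    mantissa = (magnitude >> (exponent + 3)) & 0x0F
    # Mu-law bytes are stored bit-inverted on the wire.
    return ~(sign | (exponent << 4) | mantissa) & 0xFF

def ulaw_to_linear(byte: int) -> int:
    """Decode one 8-bit mu-law byte back to a signed 16-bit PCM sample."""
    byte = ~byte & 0xFF
    sign = byte & 0x80
    exponent = (byte >> 4) & 0x07
    mantissa = byte & 0x0F
    magnitude = (((mantissa << 3) + BIAS) << exponent) - BIAS
    return -magnitude if sign else magnitude

def ulaw_bytes_to_pcm16(data: bytes) -> bytes:
    """Decode a mu-law byte stream to little-endian PCM16 frames."""
    return struct.pack("<%dh" % len(data), *(ulaw_to_linear(b) for b in data))
```

Note this only changes the sample format, not the sample rate – phone audio is typically 8 kHz, so a resampling step is still needed on top of this, and doing both per-chunk is exactly the overhead that native g711_ulaw support would remove.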
4 Likes
Hi @MB_AST,
Thank you for your valuable suggestions. We appreciate your input and will be sure to share this with the team.
1 Like
Bumping this – we were in the process of switching from OpenAI to Gemini, but got stuck when we found out that the g711_ulaw format is not supported. Any news on this?
2 Likes
+1
Same issue here… workarounds like converting on the fly from μ-law to PCM usually introduce unnecessary noise.
2 Likes
Same here… please fix this, we need it urgently. Many startups I know are using it and hit the same issue.
1 Like
+1
Bumping this as well – currently stuck with OpenAI Realtime because of this issue.
1 Like