Gemini Live API WebSocket Error 1008: "Operation is not implemented, or supported, or enabled"

I’m experiencing an issue with the Gemini Live API where the WebSocket connection unexpectedly closes with error code 1008 and the reason “Operation is not implemented, or supported, or enabled.” Here’s the context:

I’m using the @google/genai SDK (latest version) with the gemini-2.5-flash-native-audio-preview-12-2025 model in a Next.js 16 browser environment. I’m trying to establish a direct WebSocket connection to the Live API for audio streaming.

For my setup, I initialize the GoogleGenAI client with a valid API key and configure it with audio modality, the Aoede voice, and generation parameters like temperature 0.7, topP 0.9, and topK 40. I’m sending 16-bit PCM audio at 16kHz mono little-endian in 20-40ms chunks as recommended, and expecting 24kHz audio back.
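For reference, here is the shape of the config I'm passing, written out as a plain object. The field names (`responseModalities`, `speechConfig`, and top-level `temperature`/`topP`/`topK`) follow my reading of the current `@google/genai` `LiveConnectConfig` — treat the exact names as assumptions if you're on a different SDK version.

```typescript
// Sketch of my Live API config as a plain object (field names assumed
// from @google/genai's LiveConnectConfig; verify against your SDK version).
const liveConfig = {
  responseModalities: ["AUDIO"], // audio-only responses
  speechConfig: {
    voiceConfig: { prebuiltVoiceConfig: { voiceName: "Aoede" } },
  },
  // Generation parameters at the top level, not nested under a
  // generationConfig object (see the deprecation warning below).
  temperature: 0.7,
  topP: 0.9,
  topK: 40,
};

console.log(JSON.stringify(liveConfig, null, 2));
```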

The connection pattern is this: it establishes successfully and the onopen callback fires. Sometimes I even start receiving the first audio response. But then the connection abruptly closes with code 1008. According to RFC 6455, code 1008 indicates a policy violation where the endpoint terminates because it received a message violating its policy.
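To make the close events easier to read in my logs, I map the RFC 6455 close codes to their meanings — 1008 is "policy violation", and 1011 (which also shows up later in this thread) is "internal server error":

```typescript
// RFC 6455 close-code lookup for logging. 1008 = policy violation,
// 1011 = internal server error; these are the two codes seen in this thread.
function describeCloseCode(code: number): string {
  const rfc6455: Record<number, string> = {
    1000: "normal closure",
    1001: "going away",
    1002: "protocol error",
    1003: "unsupported data",
    1008: "policy violation",
    1011: "internal server error",
  };
  return rfc6455[code] ?? "unknown code";
}
```

Wired into the standard WebSocket close handler, this surfaces the server's reason string instead of a bare disconnect: `ws.onclose = (e) => console.error(e.code, describeCloseCode(e.code), e.reason);`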

I’ve tried several things to fix this. Initially I had a deprecation warning about setting generation_config in a nested object, so I moved those fields to the top level of LiveConnectConfig as the warning suggested. I also tried adding realtimeInputConfig with automaticActivityDetection disabled, reducing chunk sizes, validating my audio format multiple times, and implementing reconnection logic. None of these resolved the 1008 error.

My audio streaming follows Google’s best practices: chunk sizes of 20-40ms, 16kHz input sample rate, 24kHz output, 16-bit PCM mono little-endian format. Everything seems correct according to the documentation.
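The chunk-size arithmetic behind those numbers: bytes = sample rate × (chunk duration / 1000) × 2 bytes per 16-bit mono sample, so at 16 kHz a 20 ms chunk is 640 bytes and a 40 ms chunk is 1280 bytes:

```typescript
// Chunk size in bytes for 16-bit PCM mono at a given sample rate.
function pcmChunkBytes(sampleRateHz: number, chunkMs: number): number {
  const bytesPerSample = 2; // 16-bit mono
  return Math.round(sampleRateHz * (chunkMs / 1000)) * bytesPerSample;
}

console.log(pcmChunkBytes(16000, 20)); // 640
console.log(pcmChunkBytes(16000, 40)); // 1280
```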

What I’m seeing in the logs is a successful connection, then I send audio chunks using sendRealtimeInput with the media object containing mimeType “audio/pcm;rate=16000” and base64-encoded data. The server responds with setupComplete, receives my audio chunks, sometimes starts sending audio response frames, then suddenly closes the connection with code 1008.
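For completeness, this is how I build that base64 payload from the Float32 samples the Web Audio API produces (values in [-1, 1]): clamp, scale to 16-bit, serialize little-endian, then base64-encode. The base64 encoder is written out by hand here only so the sketch is self-contained; in practice you'd use `btoa` or `Buffer`:

```typescript
// Float32 mic samples in [-1, 1] -> 16-bit signed PCM.
function floatTo16BitPcm(samples: Float32Array): Int16Array {
  const out = new Int16Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    const s = Math.max(-1, Math.min(1, samples[i])); // clamp to [-1, 1]
    out[i] = s < 0 ? s * 0x8000 : s * 0x7fff;        // scale to int16 range
  }
  return out;
}

// Serialize explicitly little-endian, matching the wire format.
function int16ToLeBytes(pcm: Int16Array): Uint8Array {
  const buf = new ArrayBuffer(pcm.length * 2);
  const view = new DataView(buf);
  for (let i = 0; i < pcm.length; i++) view.setInt16(i * 2, pcm[i], true); // true = LE
  return new Uint8Array(buf);
}

// Minimal base64 encoder (self-contained stand-in for btoa/Buffer).
function bytesToBase64(bytes: Uint8Array): string {
  const B64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
  let out = "";
  for (let i = 0; i < bytes.length; i += 3) {
    const n = Math.min(3, bytes.length - i);
    const b0 = bytes[i];
    const b1 = n > 1 ? bytes[i + 1] : 0;
    const b2 = n > 2 ? bytes[i + 2] : 0;
    out += B64[b0 >> 2];
    out += B64[((b0 & 3) << 4) | (b1 >> 4)];
    out += n > 1 ? B64[((b1 & 15) << 2) | (b2 >> 6)] : "=";
    out += n > 2 ? B64[b2 & 63] : "=";
  }
  return out;
}

const pcm = floatTo16BitPcm(new Float32Array([0, 1, -1]));
console.log(Array.from(pcm)); // [ 0, 32767, -32768 ]
console.log(bytesToBase64(int16ToLeBytes(pcm)));
```

The resulting string goes into the `data` field alongside `mimeType: "audio/pcm;rate=16000"`.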

I’m trying to figure out what specific operation or configuration is triggering this error. Is it related to the model I’m using? Is this model still supported and available? Am I missing required fields in LiveConnectConfig? Could this be a rate limit or quota issue that’s showing up as error 1008 instead of something more specific? Does my API key need special permissions or scopes for Live API access?

This is happening in a real-time voice interview application where users speak and their audio is captured at 16kHz, sent to the API in small chunks, and the AI responds with audio at 24kHz that gets played back. The application works for the initial connection but fails intermittently with these 1008 errors, making it impossible to complete full voice conversations.

I really need to understand the root cause of this error, get the correct configuration for stable Live API WebSocket connections, learn about any model-specific requirements for audio streaming, and find out if the API behavior has changed since the documentation was written.

I am experiencing the exact same issue starting Jan 8th.

My Setup:

  • Interface: Multimodal Live API (WebSocket) / Python backend

  • Model: gemini-2.5-flash-native-audio-preview-12-2025

Observations:

  1. Regression: This setup was working perfectly in production until Jan 8th.

  2. No Client Changes: I reverted my codebase to a commit from Jan 4th (proven working state) and the error persists, confirming this is not a client-side code issue.

  3. Crash Timing: The 1008 error occurs immediately when the model decides to use a tool.

    • The connection closes before the client receives any toolCall frame.

    • It seems the backend terminates the session the moment it attempts to generate the function call arguments.


Coincidentally, I added the tool-calling feature to my app on January 8, and that is exactly when this error started. Without tool calling the production app is basically unusable, so this is urgent. Did you find any way to fix this, or is this an issue on Gemini’s side?

Update: After switching from models/gemini-2.5-flash-native-audio-preview-12-2025 to models/gemini-2.5-flash-native-audio-preview-09-2025, it started working.

But now I’m considering moving to the Vertex AI API, since this app runs in a production environment. If something breaks again, the whole system is at risk. I have already lost three days on this issue, and I cannot afford to go through that again.
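In the meantime, one way to keep sessions alive is to fall back to the older model on an abnormal close. The model names below are the two from this thread; the helper is just a sketch of the retry policy, not an SDK API:

```typescript
// Fallback policy sketch: on a 1008 (policy violation) or 1011
// (internal error) close, reconnect with the 09-2025 model instead.
const PRIMARY = "models/gemini-2.5-flash-native-audio-preview-12-2025";
const FALLBACK = "models/gemini-2.5-flash-native-audio-preview-09-2025";

function nextModel(closeCode: number, currentModel: string): string {
  // 1008 and 1011 are the codes reported in this thread; any other
  // close code keeps the current model for the reconnect attempt.
  if ((closeCode === 1008 || closeCode === 1011) && currentModel === PRIMARY) {
    return FALLBACK;
  }
  return currentModel;
}

console.log(nextModel(1008, PRIMARY)); // falls back to the 09-2025 model
```

The trade-off, as discussed below, is that the fallback model handles tool calling and multilingual transcription noticeably worse.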


It’s working in my case too (after changing back to gemini-2.5-flash-native-audio-preview-09-2025). The problem happens randomly but frequently, and it disappears after switching back to the older model.

same problem… same solution

This wasn’t a solution for me. While the 09-2025 model didn’t crash out, it doesn’t appear to support calling tools, and the December build (12-2025) has a bug that causes function calls to crash the connection. Watching this thread.

No, it does support function/tool calling. It handles tool calls successfully in my application.

same problem… same solution


I swapped to the previous version of the model mentioned in earlier replies, gemini-2.5-flash-native-audio-preview-09-2025. However, this model seems to refuse to call tools, even with a prompt that is very explicit about only replying AFTER checking the available tools.

The latest model calls the tools appropriately but randomly causes the annoying 1008 connection errors. Still haven’t found a reliable way to overcome this issue.

I am not sure if this will help, but I followed this guide while building my TypeScript application, and it seems to be working correctly: Best practices with Gemini Live API | Generative AI on Vertex AI | Google Cloud Documentation. While browsing discussions, I also noticed your specific issue mentioned elsewhere, although it appears to affect only a few users: Live API Native Model doesnt do Function Calls - Gemini API - Google AI Developers Forum, and function calling is not working for gemini-2.5-flash-preview-native-audio-dialog · Issue #843 · googleapis/python-genai.


Thanks for the helpful suggestions. However, when I use the tool with gemini-2.5-flash-native-audio-preview-12-2025, I still occasionally run into errors 1011 and 1008. It doesn’t happen all the time — just intermittently — and I’d like to understand the exact cause of these errors on the model side.

Try gemini-2.5-flash-native-audio-preview-09-2025? Or use Vertex AI.


Thanks for the feedback. The language processing and transcription of gemini-2.5-flash-native-audio-preview-09-2025 are not accurate. I haven’t tested it on Vertex AI yet.

That’s true. It has noticeably worse processing and transcription, including pronunciation, when handling multilingual input. The only solution I can think of is to use the Vertex API or wait until a 03-2026 preview is released, if one becomes available.
