Google vs OpenRouter API differences

I’ve noticed that Gemini 2.0 Flash Thinking Experimental and Pro Experimental seem to lose context very quickly (they lose track of a long, complex set of instructions in my prompt) when I use them through the OpenRouter API versus the Google API. This has been consistent and repeatable for me since the models were released. Is anyone else experiencing this, or am I crazy?

You’re not imagining it; different APIs can behave differently even when they front the same underlying model. An aggregator like OpenRouter sits between you and Google, so the request can be modified in transit: prompts may be compressed or truncated to fit context limits, system instructions may be mapped onto the upstream API differently, and default generation parameters can differ from what Google’s own endpoint uses. In OpenRouter’s case, its documented “middle-out” message transform compresses the middle of prompts that approach the context window, which would produce exactly the symptom you describe with long instruction sets. A useful first step is to disable transforms explicitly and compare the output against a direct call to the Google API.
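
If you want to test this yourself, here is a minimal side-by-side sketch. It assumes the `openai` and `google-generativeai` Python packages and API keys in environment variables; the model slugs are illustrative and worth verifying against Google’s and OpenRouter’s current model lists, and the `transforms` field is OpenRouter-specific:

```python
# Side-by-side check: send the same long prompt through Google's API and
# through OpenRouter, with OpenRouter's prompt transforms disabled.
# Assumes: pip install openai google-generativeai
import os

import google.generativeai as genai
from openai import OpenAI

# Your long, complex instruction set.
LONG_PROMPT = open("long_instructions.txt").read()

# --- Google API directly ---
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
# Model name is illustrative; check the current experimental slug.
google_model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")
google_reply = google_model.generate_content(LONG_PROMPT).text

# --- OpenRouter (OpenAI-compatible endpoint) ---
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)
or_reply = client.chat.completions.create(
    # Slug may differ; check openrouter.ai/models.
    model="google/gemini-2.0-flash-thinking-exp",
    messages=[{"role": "user", "content": LONG_PROMPT}],
    # Disable prompt compression such as "middle-out".
    extra_body={"transforms": []},
).choices[0].message.content

print("--- Google ---\n", google_reply)
print("--- OpenRouter ---\n", or_reply)
```

If the OpenRouter reply starts honoring your instructions once transforms are disabled, prompt compression was the culprit; if not, the difference may lie in provider routing or default sampling parameters, which OpenRouter also lets you pin in the request.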