Agreed. It’s absurd to me that in 2025 this still isn’t standard practice. Gemini 2.5’s own knowledge is based on the previous SDK, so you’re forced to feed it the updated docs manually by default. Everyone has to go through this same process. Why are we still manually finding docs, copying and pasting, converting to markdown, and then feeding them to the LLM?
Every LLM company should have a standard, LLM-friendly version of their docs. Just throw it in the SDK’s GitHub in a markdown folder and boom, it’s easy to pull locally and pass to the model.
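To make the point concrete, here's a minimal sketch of that workflow. The repo path and filename are hypothetical; the idea is just that if the docs live as markdown in the SDK's GitHub, "pull locally and pass to the model" is a few lines of stdlib code:

```python
# Hedged sketch: fetch an SDK's markdown docs straight from its repo
# and prepend them to a prompt. The URL below is a made-up example,
# not a real docs location.
import urllib.request

DOCS_URL = (
    "https://raw.githubusercontent.com/example-org/example-sdk/"
    "main/docs/quickstart.md"  # hypothetical markdown docs folder
)

def fetch_docs(url: str) -> str:
    """Download an LLM-friendly markdown doc to use as context."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

def build_prompt(question: str, docs: str) -> str:
    """Prefix the question with current docs so the model isn't stuck
    on whatever SDK version was in its training data."""
    return (
        "Here are the latest SDK docs; trust them over your training data.\n\n"
        f"{docs}\n\n---\n\nQuestion: {question}"
    )
```

That's the whole "tool" — which is exactly why it's frustrating that vendors don't just ship the markdown folder themselves.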
In the future, every SDK should come with this. Right now, the community has to build tools just to scrape, convert, and serve docs to LLMs because we’re constantly fighting outdated knowledge.
If the industry is serious about coding being the first major breakthrough use case, then it makes no sense to me that this is still a problem.
We don’t need ten different niche startups, services, or workflows to fix this. It should be a basic standard, like MCP, maybe even part of MCP. Companies like OpenAI, Google, Anthropic, and others should all offer a link to some kind of MCP server you can just point the model to as a default.
Pretrained models are always stuck in time until the next round of training, so this issue is built in and, on its own, insurmountable. It’s part of the nature of LLMs. And if that’s the case, the only real solution is having docs that are always up to date, structured, and easy for a model to consume.
Honestly, given how baked-in this problem is, models should be fine-tuned/post-trained with the assumption that their SDK knowledge is out of date, and should default to encouraging the use of the latest docs. Instead, Gemini 2.5 Pro will confidently tell you you’re wrong, because its knowledge of the Gemini SDK doesn’t match reality, and it’ll break working code and just rewrite it.
It should be the opposite. Models should default to uncertainty unless you plug in the most recent docs.