@google/genai API docs in Markdown format for LLM consumption


Is the preview @google/genai API documentation available in a single flat Markdown document for LLM consumption?

It seems like a basic necessity and a no-brainer nowadays with LLM coding assistants/agents, and it has been advocated by several well-known developers, from Andrej Karpathy to Jeremy Howard.

There would ideally be different files with different levels of detail (e.g., from just an overview/quickstart covering the basic interface and examples, up to the full reference).

This should ideally be directly accessible from the main docs page, with a clear set of buttons at the top. Such a simple feature would greatly speed up switching from the old API to the new one, and help devs (human, AI, and hybrid) catch up quickly.


Agreed. It’s absurd to me that in 2025 this still isn’t standard practice. Gemini 2.5’s own knowledge is based on the previous SDK, so you’re forced to feed it the updated docs manually by default. Everyone has to go through this same process. Why are we still manually finding docs, copying and pasting, converting to markdown, and then feeding them to the LLM?

Every LLM company should have a standard, LLM-friendly version of their docs. Just throw it in the SDK’s GitHub in a markdown folder and boom, it’s easy to pull locally and pass to the model.
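
As a rough sketch of what that workflow looks like with the new @google/genai SDK (the local docs path and model name below are placeholders, and `GEMINI_API_KEY` is assumed to be set):

```ts
import { GoogleGenAI } from "@google/genai";
import { readFile } from "node:fs/promises";

// Hypothetical local checkout of a flat Markdown dump of the SDK docs.
const docs = await readFile("./genai-docs/llms.txt", "utf8");

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

// Prepend the current docs so the model isn't working from stale training data.
const response = await ai.models.generateContent({
  model: "gemini-2.5-pro",
  contents: [
    { role: "user", parts: [{ text: `Here are the current @google/genai docs:\n\n${docs}` }] },
    { role: "user", parts: [{ text: "Using only the API shown above, write a minimal streaming chat example." }] },
  ],
});

console.log(response.text);
```

It's trivial once the Markdown exists; the missing piece is the vendor actually publishing it.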

In the future, every SDK should come with this. Right now, the community has to build tools just to scrape, convert, and serve docs to LLMs because we’re constantly fighting outdated knowledge.

If the industry is serious about coding being the first major breakthrough use case, then it makes no sense to me that this is still a problem.

We don’t need ten different niche startups, services, or workflows to fix this. It should be a basic standard, like MCP, maybe even part of MCP. Companies like OpenAI, Google, Anthropic, and others should all offer a link to some kind of MCP server you can just point the model to as a default.
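
A docs-over-MCP server really could be tiny. Here's a rough sketch using the official TypeScript SDK; the server name, tool name, and file layout are made up for illustration:

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { readFile } from "node:fs/promises";
import { z } from "zod";

const server = new McpServer({ name: "genai-docs", version: "0.1.0" });

// Hypothetical tool: return a vendor-published Markdown docs section on demand.
server.tool(
  "get_sdk_docs",
  { section: z.string().describe("Docs section to fetch, e.g. 'quickstart'") },
  async ({ section }) => {
    const text = await readFile(`./docs/${section}.md`, "utf8");
    return { content: [{ type: "text", text }] };
  }
);

await server.connect(new StdioServerTransport());
```

Point any MCP-capable client at that and the model can pull fresh docs whenever it needs them instead of guessing from training data.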

Pretrained models are always stuck in time until the next round of training, so this issue is built in and can’t be eliminated on its own; it’s part of the nature of LLMs. If that’s the case, then the only real solution is having docs that are always up to date, structured, and easy for a model to consume.

Honestly, given how baked-in this problem is, models should be fine-tuned / post-trained with the assumption that their SDK knowledge is out of date, and should default to encouraging the use of the latest docs. Gemini 2.5 Pro will confidently tell you you’re wrong, because its knowledge of the Gemini SDK doesn’t match reality, and it’ll break working code and just rewrite it.

It should be the opposite. Models should default to uncertainty unless you plug in the most recent docs.


@Luigi_Acerbi just found my first example of a company actually doing this in the wild. Bravo!!!


Indeed, great to see!
The llms.txt standard was proposed by Jeremy Howard, see here.

To be honest, the obvious thing should be to have something like three levels:

  1. llms_micro.txt
  2. llms_mini.txt
  3. llms.txt

or something similar, because sometimes you just want to give the LLM a “high-level” overview and this might be enough.
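
For context, an llms.txt file is itself just plain Markdown: an H1 with the project name, a blockquote summary, then H2 sections of annotated links (plus an optional "Optional" section for things that can be skipped). An illustrative, made-up example for this SDK might look like:

```markdown
# @google/genai

> Unified Google Gen AI SDK for the Gemini API and Vertex AI.

## Docs

- [Quickstart](https://example.com/quickstart.md): install, auth, and a first generateContent call
- [API reference](https://example.com/reference.md): every class, method, and type

## Optional

- [Migration guide](https://example.com/migration.md): mapping from the old SDK to the new one
```

The micro/mini variants would just trim that down to whatever a model needs for a quick, high-level overview.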


Interesting. I had no idea it was an actual proposed standard, but it makes total sense. If we have robots.txt, why not? All it will take is for Google to set this standard, and others will follow.