Is it possible to get token-level embeddings from gemini-embedding-001?

Hey Community,

Is there a way to get the individual token-level embeddings from the `gemini-embedding-001` model, instead of the single, aggregated vector for the full text input?

My use case requires this for Word Sense Disambiguation (analyzing a specific word’s vector in its context).

Thanks!

Hi @Dyson
Welcome to the AI Forum!

Thank you for bringing this to our attention.
Currently, the `gemini-embedding-001` model only returns a single aggregated embedding for the full text input. Token-level embeddings are not supported.
For more information, please refer to this document.
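Since only whole-input vectors are available, one possible workaround for Word Sense Disambiguation is to embed a short context window around the target word and compare those context-level vectors against reference vectors for each sense. This is a sketch of the comparison step only; the vectors below are hypothetical placeholders standing in for actual API responses, not real model output.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Placeholder embeddings for the word "bank" in two contexts. In practice
# these would come from embedding the full context windows, e.g.
# "sat on the river bank" vs. "deposited money at the bank".
river_context = [0.9, 0.1, 0.2]
money_context = [0.1, 0.95, 0.3]

# Hypothetical reference vector for the financial sense of "bank".
sense_money = [0.15, 0.9, 0.25]

# The context whose vector is closer to the sense vector wins.
print(cosine_similarity(money_context, sense_money))
print(cosine_similarity(river_context, sense_money))
```

The idea is that the surrounding words dominate the aggregated vector, so contexts using the same sense of the target word cluster together even without per-token embeddings.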