Is there a way to get the individual token-level embeddings from the `gemini-embedding-001` model, instead of the single, aggregated vector for the full text input?
My use case requires this for Word Sense Disambiguation (analyzing a specific word’s vector in its context).
Thanks for the question.
Currently, the `gemini-embedding-001` model returns only a single aggregated embedding for the full input text; token-level embeddings are not supported.
For more information, please refer to this document.