Few-Shot best practices and experiences

Hi everyone,

I’m used to exploring few-shot learning with the OpenAI API and other vendors’ APIs, but I was wondering how best to use it with the Gemini API. Does anyone have experience with few-shot learning and the Gemini API? Should I put all the examples in one big user prompt, or use the chat history object to supply the examples?


Welcome to the forum. I prefer a single prompt containing several examples, with the actual question the model is expected to answer placed at the end. The reason: if you provide one example and the model then responds to it (offering its own “take”), you run the risk that its reasoning was already flawed on that first example. You then have to explain why that reasoning was inappropriate before continuing with the second example. You might think this back-and-forth is even better, but in my testing it just confuses the model.

As the saying goes, your mileage may vary. If the examples are straightforward and even the first one is perfectly handled, it should make no difference whether the examples are fed all at once in the first prompt or one after the other in chat session turns.
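
For concreteness, here is a minimal sketch of the single-prompt approach, assuming the `google-generativeai` Python SDK and a made-up sentiment-classification task. The model name, API key placeholder, and example reviews are purely illustrative, not anything prescribed by the API:

```python
# Single-prompt few-shot: all examples packed into one user prompt,
# with the real question appended at the end.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model name

prompt = """Classify the sentiment of each review as POSITIVE or NEGATIVE.

Review: "Absolutely loved it, would buy again."
Sentiment: POSITIVE

Review: "Broke after two days, very disappointed."
Sentiment: NEGATIVE

Review: "Exceeded my expectations in every way."
Sentiment: POSITIVE

Review: "The battery barely lasts an hour."
Sentiment:"""

response = model.generate_content(prompt)
print(response.text)
```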

AI Studio lets you easily test out these alternative scenarios. Your specific application might benefit from the chat-turns approach. To automate it, create synthetic model responses that match your requirements, pre-populate the chat history with them, and then append the actual question the model is expected to process.
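
Here is a sketch of the same made-up task using a pre-populated chat history with synthetic model turns, again assuming the `google-generativeai` SDK; the role/history format follows the SDK’s chat conventions, but the content itself is illustrative only:

```python
# Chat-history few-shot: examples supplied as prior user/model turns,
# with synthetic model responses standing in for the expected answers.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative model name

few_shot_history = [
    {"role": "user",  "parts": ['Review: "Absolutely loved it, would buy again."\nSentiment:']},
    {"role": "model", "parts": ["POSITIVE"]},
    {"role": "user",  "parts": ['Review: "Broke after two days, very disappointed."\nSentiment:']},
    {"role": "model", "parts": ["NEGATIVE"]},
]

chat = model.start_chat(history=few_shot_history)

# The real question goes in as the final user turn.
response = chat.send_message('Review: "The battery barely lasts an hour."\nSentiment:')
print(response.text)
```

Either way, the final user turn carries the real question, and the examples act purely as context for the model.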

Hope that helps.
