candidate_count for multiple outputs from the Gemini LLM

Hello,

I have recently been using Gemini models via the generativeai package in Python.
When calling the generate_content function, I set candidate_count=3 as follows:

response = model.generate_content(
    contents=input_text,
    generation_config=palm.GenerationConfig(
        temperature=0.7,
        candidate_count=3,
    ),
)

but it says

“google.api_core.exceptions.InvalidArgument: 400 Only one candidate can be specified”

I would like to know how to receive multiple outputs from the Gemini LLM by changing the value of the candidate_count parameter. Or does the current Gemini API only allow one output per query?

I would appreciate it if you could help me with this.
Thanks

I believe the Vertex AI Gemini API currently allows up to 4 candidates, but the AI Studio one still only allows 1.

Hi @Jaeseong_Lee. Currently, the Gemini API doesn't support generating multiple candidates in a single call. If you want to generate multiple responses, you can make multiple separate calls to the API in a for loop. You can use the code below:

# Collect one response per call; the API returns a single candidate each time
responses = []
for _ in range(3):
    response = model.generate_content(
        contents=input_text,
        generation_config=palm.GenerationConfig(
            temperature=0.7,
        ),
    )
    responses.append(response)
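If it helps, the loop above can be wrapped in a small helper that fans out N independent calls and collects the results. A minimal sketch, where `collect_texts` and the stand-in generator are hypothetical names (not part of the SDK), and the lambda would be replaced by a real `model.generate_content(...).text` call:

```python
from typing import Callable, List

def collect_texts(generate: Callable[[], str], n: int) -> List[str]:
    """Call `generate` n times and collect each result.

    Each invocation is an independent API call, so with temperature > 0
    the outputs will generally differ from one another.
    """
    return [generate() for _ in range(n)]

# Example with a stand-in generator (replace with a real API call):
outputs = collect_texts(lambda: "sample response", 3)
print(len(outputs))  # 3
```

Note that this costs N separate requests (and N times the tokens), unlike a true candidate_count batch.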

@Logan_Kilpatrick any chance to have this Vertex-only feature for the AI Studio too?

We have candidate count, presence & frequency penalties, and response logprobs available in our new 002 models! Check out: cookbook/quickstarts/New_in_002.ipynb at main · google-gemini/cookbook · GitHub
