Multi-turn nano banana example?

In the docs here, it is mentioned that there are three methods to generate images with gemini-2.5-flash-image-preview:

  1. Text to image(s) and text (interleaved): Outputs images with related text.
  2. Image(s) and text to image(s) and text (interleaved): Uses input images and text to create new related images and text.
  3. Multi-turn image editing (chat): Keep generating and editing images conversationally.
  • Example prompts: [upload an image of a blue car], "Turn this car into a convertible.", "Now change the color to yellow."

But I couldn’t find any examples on that page for the 3rd case: multi-turn image editing (chat).

I also tried generating code after a multi-turn conversation in AI Studio, but the generated code is a generic text-to-image example, not a multi-turn image editing one.

Is there an example somewhere else that I’m missing?

After looking into it a bit more, I found an example in this cookbook.

Here is a simplified version that can be run in a Jupyter notebook:

from google import genai
from IPython.display import display, Markdown, Image

# The client reads the API key from the GEMINI_API_KEY (or GOOGLE_API_KEY)
# environment variable.
client = genai.Client()

# Loop over all parts and display them either as text or images
def display_response(response):
  print(response.usage_metadata)
  for part in response.parts:
    if part.text:
      display(Markdown(part.text))
    elif image := part.as_image():
      display(Image(data=image.image_bytes))

model = "gemini-2.5-flash-image-preview"
chat = client.chats.create(
    model=model,
)

# First turn: generate the initial image
message = "create an image of a dog running on a beach"
response = chat.send_message(message)
display_response(response)

# Second turn: edit the previous image conversationally;
# the chat object carries the history, including the image
message = "what would the scene look like at night?"
response = chat.send_message(message)
display_response(response)
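If you want to keep the generated images rather than only displaying them inline, the image_bytes from each part can be written straight to disk. A minimal sketch (the save_image_bytes helper and the file name are my own, not from the cookbook):

```python
from pathlib import Path

def save_image_bytes(data: bytes, path: str) -> Path:
    """Write raw image bytes (e.g. part.as_image().image_bytes) to a file."""
    out = Path(path)
    out.write_bytes(data)
    return out

# Hypothetical usage inside display_response, next to the display() call:
#   save_image_bytes(image.image_bytes, "dog_on_beach.png")
```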

In the above code, we call chat.send_message. If this code has to run on a backend server, how do we get a reference to the chat object so we can call chat.send_message again when the user sends a second message?
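One common pattern (a sketch, not an official SDK feature): keep one chat object per user session in server memory, keyed by a session id, and look it up on each request. The SessionStore class and chat_factory argument below are my own invention for illustration; a real deployment would also need eviction, and across multiple server processes either sticky sessions or persisted chat history.

```python
# Sketch: map session ids to live chat objects so a second request
# can continue the same conversation. All names here are illustrative.
class SessionStore:
    def __init__(self, chat_factory):
        # chat_factory would be e.g. lambda: client.chats.create(model=model)
        self._chat_factory = chat_factory
        self._chats = {}

    def get_chat(self, session_id):
        # Create the chat lazily on the first message of a session,
        # then reuse the same object for every later message.
        if session_id not in self._chats:
            self._chats[session_id] = self._chat_factory()
        return self._chats[session_id]

# In a request handler you would then do something like:
#   chat = store.get_chat(request.session_id)
#   response = chat.send_message(user_message)
```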
