Why does Gemma 3's image classification change on inconsequential prompt changes?

I got Gemma 3 running on my ancient 1080 Ti (yay me!) via LM Studio. It runs, but it seems to guess wildly when attempting to classify images of microscopic organisms.

Furthermore, the guess changes when I make minor rewordings of the prompt. Is this a sign I’m doing something wrong? Or is this a sign that it is hallucinating?

For each of these I fed in the same 640x640 PNG of a tardigrade.
Model: gemma-3-12b-instruct-gguf, Q3_K_L quant.

  • “What is this?” = “Based on the image, it appears to be a foraminifera.” - INCORRECT.
  • No prompt: “Based on the image, it appears you’ve captured a microscopic view of what is likely a water bear, also known as a tardigrade.” - CORRECT!
  • “Given this microscope image, identify the species.” = “Based on the microscopic image provided, it appears to be a diatom.” - INCORRECT.
  • “Given this microscope image, identify the species. Be concise.” = “Based on the image, it appears to be a Paramecium.” - INCORRECT BUT DIFFERENT.
  • “Given this microscope image, identify the organism.” = “Based on the microscopic image provided, the organism appears to be a Rotifer.” - INCORRECT AND MORE DIFFERENT.

Why would such inconsequential prompt changes produce such different replies?

I’m getting the same issue with 12b.
27b is working well. Did you solve it?

Nope! No solution yet. Glad to hear I’m not the only one. Maybe 12b or smaller is just too small for this?

Hi,

The different answers you’re getting come down to prompt sensitivity, which is amplified by aggressive model quantization. A Q3 quant discards a lot of weight precision, so when the model’s top candidate answers are nearly tied, even a small change to the prompt shifts the scores enough to swap them. This is a form of hallucination caused by running a large model on limited hardware, leading to a loss of accuracy. It’s not a user error, but a consequence of the hardware constraints.
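The near-tie effect can be sketched with toy numbers. To be clear, the logits, labels, and the 8-level grid below are illustrative assumptions, not Gemma’s actual internals:

```python
import numpy as np

# Toy sketch with made-up numbers (not Gemma's real logits): when two
# classes are nearly tied, a tiny logit shift -- a stand-in for rewording
# the prompt -- flips the argmax, and coarse 3-bit quantization alone
# introduces errors of the same size as the gap.

labels = ["tardigrade", "rotifer", "diatom", "paramecium"]

def top_label(logits):
    return labels[int(np.argmax(logits))]

# Hypothetical near-tied scores for the same image.
base = np.array([2.10, 2.05, 1.40, 1.30])
print(top_label(base))                               # tardigrade

# A shift much smaller than the gap to the distractors flips the answer.
print(top_label(base + [-0.04, 0.04, 0.0, 0.0]))     # rotifer

def quantize_3bit(x):
    """Crude 3-bit quantization: snap values to 8 evenly spaced levels."""
    lo, hi = x.min(), x.max()
    codes = np.round((x - lo) / (hi - lo) * 7)       # integer codes 0..7
    return lo + codes / 7 * (hi - lo)

# Worst-case error the 8-level grid introduces on these scores is 0.05,
# the same size as the gap between the top two candidates.
print(round(float(np.abs(base - quantize_3bit(base)).max()), 3))
```

If you want the replies to at least be repeatable, setting the sampling temperature to 0 in LM Studio makes generation deterministic for a given prompt, though it won’t fix the underlying precision loss.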

Thanks.
