Hello members, is anyone else having issues with 1.5-Flash-8B being a bit stupid?
Recently I was using AI Studio to investigate how different AI models handle counting the number of Rs in the word “strawberry”, and 8B gave a questionable response.
It counted 0 Rs in the word strawberry. I’m unsure whether this is a tokenization issue or just the model being so small that it can’t count in this specific scenario.
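To illustrate the tokenization angle: the model never sees “strawberry” letter by letter, it sees a handful of subword tokens, so counting characters inside a token is something it has to have memorized rather than something it can compute. Here’s a minimal sketch using OpenAI’s tiktoken purely as a stand-in (Gemini uses its own tokenizer, so the exact split will differ):

```python
# Minimal sketch of why letter counting is hard for LLMs.
# tiktoken is used here only as an illustrative stand-in tokenizer;
# Gemini's actual tokenizer will split the word differently.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")
pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in tokens]

# The model operates on these subword pieces, none of which is a single letter.
print(pieces)

# At the character level the count is trivial: 3.
print("strawberry".count("r"))
```

So the “right” answer depends on the model having effectively memorized the spelling of the word behind its tokens, which smaller models like 8B are worse at.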
Furthermore, Gemini 1.5 also seems slightly worse than 1.0 here: both 1.5 Flash AND Pro guessed 2, whereas Gemini 1.0 got it exactly right.
Anyone know why this happens?
Thanks!
Hat