I think Gemini should learn from DeepSeek

Gemini Think is inconsistent. Sometimes it summarizes like an outline; sometimes it uses "we" when it should use "I." I can't follow its reasoning; it often jumps straight to the answer.
For familiar questions (for example, "How many 'r's are in 'strawberrry'?", where I added an extra 'r' so the correct answer is 4), all three of the newest models answer incorrectly.

The current Gemini 2.0, Gemini 2.0 Think, and Gemini 1206 are terrible compared to DeepSeek R1, a free model. Their only advantage is the massive token limit, and even then the output contains many errors.

I think Google should open-source the Gemini models and focus on services. It seems pointless to train a closed model when, a few months later, DeepSeek R2, R3, and other better models will be available for free.

Also, I recommend that the Gemini app use only two safety settings: one for children and one for adults. It's frustrating that I can't ask many adult questions; it feels like the Gemini app is for kids only.


Gemma is an open-source model family from Google that started with a strong foundation (better than some LLMs from startups that put 1,000 ads on a chat page). However, that alone does not make the Gemma models better.
It's not about being open source (open source mostly benefits developers rather than end users); it's about priorities.

No one can guarantee their own future, including the people who put "Open" in OpenAI (it started open and remains closed to date because of its success).
Not to impute bad faith or pessimism, but MAYBE DeepSeek will make v4 closed if they decide that is more sustainable.


Regarding the topic's title: yes, Google can learn from the DeepSeek models, since v1, v2, and v3 are open source and their papers are publicly accessible.
