Totally SURPRISED by Gemini

I can use it to understand really long text files, and it retains and understands all of them.
A few days ago it still worked even when the token count exceeded 1 million. Today, this bug seems to have been fixed.
I’d like to know how Gemini will be priced in the future. The pay-as-you-go option is a little confusing. For example: after paying, I only received a higher per-minute token allowance, but I’m still billed even if I don’t use that many tokens. Meanwhile, using the free version with the same usage pattern wouldn’t result in any charges.

For personal use, I encourage you to base the choice between the free plan and the pay-as-you-go plan on whether the data you provide to the model has any value to you.

If the data you provide has little or no value to you and will likely not have any value to you in the future, you donate that data to Google and get free use in return.

If the data you provide is valuable to you, or has reasonable prospects to become valuable to you in the future, you ensure that you retain ownership of that data by paying the reasonable fees that Google charges in the pay-as-you-go option.

Hope that helps


Thanks a lot. It seems the crux of the issue lies in the privacy and value of the data. I hope there will be a solution where I can utilize Gemini’s ability to handle over 1 million tokens in cases where I don’t care about the privacy or value of the data.

Welcome to the forums!

For detailed numbers for each model, make sure you check out the pricing page.

Let’s look at the Gemini 1.5 Flash pricing in detail:

  • There is a free tier, but there are restrictions
    • The data you submit may be used by Google so, as @OrangiaNebula says, it depends on the value of this data to you. You’re paying with the data.
    • Because of this, it isn’t available in every region.
    • There are rate limits
  • There is a paid tier, and the price depends on how many tokens are in your prompt
    • Data is kept private from Google’s training.
    • It is available in most regions (though not all)
    • There are higher rate limits (and you can request even higher limits)
    • If your prompt exceeds 128k tokens, all of your tokens are billed at double the rate (not just the tokens beyond the threshold)
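The paid-tier rule above (prompts over 128k tokens are billed at double the rate for all tokens) can be sketched as a small calculation. The rate and threshold values here are illustrative placeholders, not the actual published prices; always check the pricing page for real numbers.

```python
def prompt_cost(prompt_tokens: int, base_rate_per_million: float,
                threshold: int = 128_000) -> float:
    """Illustrative cost of a single prompt under a tiered scheme where
    prompts above `threshold` are billed at double the base rate for ALL
    tokens, not just the tokens past the threshold.

    `base_rate_per_million` is a hypothetical dollar rate per 1M tokens.
    """
    rate = base_rate_per_million * (2 if prompt_tokens > threshold else 1)
    return prompt_tokens / 1_000_000 * rate

# With a hypothetical base rate of $0.10 per 1M tokens:
small = prompt_cost(100_000, 0.10)  # below threshold, billed at base rate
large = prompt_cost(200_000, 0.10)  # above threshold, all 200k at double rate
print(f"100k-token prompt: ${small:.4f}, 200k-token prompt: ${large:.4f}")
```

Note that a 200k-token prompt costs four times a 100k one here, not two: twice the tokens, each at twice the rate. That cliff is why it can pay to keep prompts just under the threshold.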

While it may be frustrating that such limits exist, keep in mind that there are real (and somewhat expensive) resources on the back end. (Also compare this to what the free tier on other platforms looks like.)
