How can I change my tier?

It looks like you’re on the paid tier.
Are you having issues?

Regarding the 50-response limit: I don’t understand why I’m still running into it, but it’s really bothering me.

The AI Studio UI is always on the free tier, even if you have a project attached to a billing account.
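
In practice, the paid tier only applies to calls made against the API with a key from the billing-enabled project, not to chats in the AI Studio UI. A minimal sketch of such a call, assuming the google-generativeai Python SDK (the model name and key placeholder are just examples):

```python
# Sketch: call the API directly with a key created under the billed project.
# The paid-tier quota applies to these calls, not to the AI Studio web UI.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key from the billing-enabled project

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content("Hello from the API")
print(response.text)
```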

This is happening to me also (though I’m not on a paid plan). It has drastically reduced the usefulness for me, since my chat is up around 1,000,000 tokens or so. Now, if a request succeeds, I get one question and I’m done. If it doesn’t succeed, it errors out, the rate limit hits me, and I’m done after the second attempt (even though the first attempt failed). I understand the free plan is just the free plan, but if it doesn’t work at all (which is the case now), then it doesn’t work at all.

edit: I imagine this sounded harsher than I intended. I am extremely impressed with Gemini, especially Pro 1.5+, which was able to grok my 50k of code much more deeply than I expected. Indeed, since my protocol is unique (it’s a DLT for distributed computation, evolved apart from Bitcoin, that looks more like IPFS + Ceramic + git), Gemini Pro 1.5 has been the one and only entity that I can get to understand the nuances - which was a real shocker for me! WOW! So great work on the models. But it’s now unusable!

Requests that error out (say, with HTTP code 429, 400, or 500) don’t count against the daily quota of requests.
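
If you want to confirm what the backend actually returned, one way (a rough sketch, assuming the public REST endpoint and the requests library; the model name and key placeholder are examples) is to make the call outside AI Studio and look at the status code yourself:

```python
# Check the HTTP status of a direct generateContent call.
import requests

url = ("https://generativelanguage.googleapis.com/v1beta/"
       "models/gemini-1.5-pro:generateContent?key=YOUR_API_KEY")
body = {"contents": [{"parts": [{"text": "ping"}]}]}

resp = requests.post(url, json=body, timeout=120)
if resp.status_code == 429:
    print("Rate limited - should not count against the daily quota")
elif resp.status_code in (400, 500):
    print(f"Errored out with HTTP {resp.status_code}")
else:
    print("Success:", resp.status_code)
```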

Thanks for responding so quickly @OrangiaNebula. There usually are no error codes when it errors out, though I didn’t check dev tools. The errors appear anywhere from 10s to 30s in, and less frequently at 60s+, so once a request passes 30s I am optimistic. But it has happened many, many times: it errors out, I delete my post, try to reword it, try again, and it says I’ve hit my rate limit. This was the case even before the most recent change (past couple of days) to the more constrained rate limiting (I had previously been having error-free chats with 1.2M+ tokens with relatively few problems).

Wait, what is your definition of ‘errors out’? It is usual to consider the backend responding with HTTP codes outside the 200 range as erroring out. Describe what it means for you; otherwise we won’t know how to be helpful.

A red message pops up saying “There was an Error” or similar, with no error code. I can copy/paste it the next time I hit it for the exact wording. The next message will use the same red toast and say the rate limit has been hit and to try again later.

Ah, I hit the rate limit just now after a single chat iteration, so I didn’t get a chance to see the error message.

edit: The one chat iteration was probably 10+ minutes earlier, by the way, though I do have an older laptop. Perhaps the error is driven by the fact that my machine is too slow for the latest changes.

You could try going to the Google console to see if something strange can be seen in the usage and error charts there. Other than that, I am out of ideas. Try to capture what that error says when it shows up next time.

By Google console, do you mean the console in dev tools? I’m using Vimium, if that matters, on a Chromium-based browser.

To get to the console, start from the Get API key page in AI Studio, that is https://aistudio.google.com/app/apikey. Near the bottom is a table. Under the second heading (Project name) there will be an entry “Generative Language Client”; next to that string is an icon indicating a web link. Follow that link and you will land in your project’s Cloud console, where there are charts showing usage and errors.

Thanks for that @OrangiaNebula. I had not visited the back end of the app. I’m a bit more versed in AWS, but it appears there are several places the errors could be located.

Under the “Overview” section, everything shows zeros/no information. Obviously, since this is the first time I’ve been here, there are no custom configurations!

Under “Dashboards”, there is a single entry, “Logs Dashboard”, which contains no data in any of its charts. The charts are set to the past 24 hours, during which time I’ve had about ten error messages (not including the rate-limiting errors, which would roughly double that number).

Under “Logs Explorer” with the Generative Language Client, there are 8 info messages for SearchProjects for the past 24 hours.

All of the other tabs there are either unconfigured or show no data.

Do you have a recommended configuration for logging the error messages (which show as toasts on mobile and as plain red error text on desktop) so I might capture more detail?

The console would have shown something if there were a serious problem with the API key or backend setup. So your problem seems to be client related.

There are two possible scenarios I can think of.

Scenario 1. Your key was compromised and someone else is using up your quota. They would have to be a bit of an evil genius if they are systematically using up, say, 48 of your 50 requests, but not all of them. Seems a bit unlikely, but can’t be ruled out.

Scenario 2. Your browser is causing some problem, maybe an unusual plugin or browser extension. Simple to test out: just try a different browser.
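
Along the same lines, you can take the browser out of the picture entirely with a small script: if the same key still hits the rate limit from a plain script, the problem is on the quota/backend side rather than in the client. A sketch, assuming the google-generativeai SDK (the model name, prompt, and key placeholder are just examples):

```python
# Out-of-browser sanity check for the API key and quota.
import google.generativeai as genai
from google.api_core import exceptions

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

try:
    reply = model.generate_content("One-line sanity check, please.")
    print("OK:", reply.text[:80])
except exceptions.ResourceExhausted:
    print("429 from the backend - quota, not the browser")
except exceptions.GoogleAPIError as err:
    print("Other backend error:", err)
```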

Hope that helps.

Probably client side, since the web app seems to pull quite a few resources and neither of my devices (phone and laptop) is a real workhorse. Next time it happens on my laptop, I’ll see what the dev tools say (if anything). Also, I would think a compromised API key would show more activity in the existing logs, similar to the SearchProjects calls?

That said, this is still a severe change of behavior with respect to the rate limiting on both devices - of that there is no doubt. I’d imagine it has something to do with clamping down, a change in the rate-limiting implementation (tokens vs. requests), or a change in internal architecture, perhaps related to Google ramping up for a strawberry competitor.

Regardless, thanks for your help! Maybe one day I’ll get some investment from someone who actually wants to build the next architecture and be able to afford a new machine! :pray: