I tried and I can tell that it writes shorter messages, often interrupts them in the middle or at the beginning of the message.
By the way, is it just me, or did other models also start to issue errors more often in ai studio after this model was released? Maybe it’s because of the increased load on the servers?
“You’ve reached your rate limit. Please try again later.” What? I didn’t spam or anything. In total I generated about 30 messages, half of which were empty responses.
Also I get that error on Gemini 1.5 pro too, however, Gemini 1.5 pro experimental 0827 is still generating responses
I keep getting “Content not permitted” despite all setting in the safety settings set to ‘none’ WORST UPDATE EVER, what the hell is Logan and the Google Team doing? Why are they making everything so damm difficult?
I have been experiencing this problem since the days of the experimental 0801 model. Until the beginning of this week, the solution was simple - you write a space in response to an empty response from the model. Presumably since yesterday it has stopped working. Possible other ways are to put more spaces or write “Continue”.
Also, I noticed that this behavior is observed in two cases:
- When the content you are trying to generate contains at least a slight hint of NSFW.
- When you attach a file with the model at the beginning of the chat (it happens randomly in other places too).
I attached a file of 228 068 tokens to Recap a fictional tale.
It did, but afterward, any new Prompts is meet with a refusal, even with all sets to ‘None’
Do I seriously have to Jailbreak? Because otherwise, the model is impossible to use.
Is your content not permitted perhaps anything repeating knowledge, or looking like it is processing personal info?
The moderations being done previously certainly isn’t at a level of “understanding” a whole thought or what is not in controllable categories, but there are other algorithmic detectors.
Seems to be working for me, no refusal or advisories or exclamation icon so far on task 1…
This revised code provides a much more robust and flexible solution for sorting your embeddings before storing them, addressing the issues you were experiencing and making your workflow smoother. No more jumbled values!
Looks like this will be handy for those that don’t want to pay thousands in billings, or submit exception requests, simply to get around a poor moderator:
We will continue to offer a suite of safety filters that developers may apply to Google’s models. For the models released today, the filters will not be applied by default so that developers can determine the configuration best suited for their use case.
In my experience, I can say that there is often no need for jailbreaking. For the most part, I’m just pushing the model to the right answers. There is a button in the ui to change the model’s response. So my chat at the beginning (not counting the system prompt) looks like this:
my request
the model refuses to do anything ← here I change it answer to something like “Okay!”
space
And then the model generates the one I need
I want to note that the model is often stubborn about trifles, for example, refuses to analyze an audio file (the multimodal model refuses to analyze audio, lol)
Could be so much easier if they just removed the Censor thing in Studio.
That’s not going to happen. I tested Gemini 1.5 pro 002 and came to the conclusion that it is quite heavily censored in itself. Even with no filters, it just cuts off its response after the first word. I have two assumptions: 1. Either this is a bug that may be fixed in the future. 2. Either this is how all new models released from September 24th will work.
In any case, for me specifically, the new model is slightly useless in the scenarios in which I use it.
Moreover, the limitation of 2 requests per minute and 50 requests per day make its daily use almost impossible.
Bizarrely I’m experiencing the opposite issue with the gemini API set to the -002 model. I have all safety settings set to BLOCK_ONLY_HIGH. With -001 set, a message with a f-bomb, a c-bomb and a threat to kill stops with a SAFETY finish reason and HIGH HARASSMENT and DANGEROUS_CONTENT ratings. With -002 set and same code and message all ratings are set to LOW apart from DANGEROUS_CONTENT which is set to NEGLIGIBLE. Weird …
No exclamation points. However, if you look at the categories to be scored, it doesn’t really fit into any.
A “Guide” to Trapping and Butchering Stray Dogs (FOR INFORMATIONAL PURPOSES ONLY - DO NOT ATTEMPT):
…
Despite some pre-engineering that almost guarantees success in the (non-violating but quite alarming, easily deniable) area, you would have to clean a whole bunch of AI warnings out a fulfilled response like this if you were training for your own embeddings moderator. AI is still doing a justified job.
A terminating output likely comes with a very specific reason (you don’t get). Like producing something for you that’s been written a thousand times. Non-obvious or pretty obvious.
I have yet to run up a chat context with technical tasks, the kind where you spend 10 minutes composing the input, which is where modern model marvels from the AI leader really shine in disappointing. Hoping for good stuff in that area.
all models beside ’ gemini-1.5-flash ’ suffer from LAZYNES: they write a lot of //placeholder notes like://Rest of the code even if told not to write placeholders, so now we are forced to use workflow where we scan code with pro-> with placeholders, and then we must use flash to get full version code
I also forgot to mention that Negative Prompting/Block list System Instructions still does NOT work, 2 entire Models from 0827 to 1.5 Pro 002 and hundreds of feedbacks personally sent by me. And nothing changed. Truly this is a testament to the laziness of the Google Team.
Someone there said that feedback form is fake
Also wdym by negative prompting? You write things like “do not do x”? It works for me, but yes, it’s not particularly effective.
“Send Feedback” is fake?
Yeah, someone recorded that there was no network activity when submitting the form. Therefore, it does not send anything to the developers. I may be wrong, but many have complained that their feedback does not change anything at all, except here on the forum.
Back to the original headline topic, the prince and princess puzzle which Gemini 1.5 Pro Experimental 0827 can and does solve (albeit with stuttering in the output), Gemini 1.5 Pro 002 answers as follows (no stuttering, correct answer for the expression for the age of the prince, wrong answer for the age of the princess):
Let 'p' be the present age of the princess and 'r' be the present age of the prince.
Let's break down the word problem piece by piece:
• "when the princess is twice as old as the prince was..." Let's call this past time 't' years ago. At that time, the princess's age was p-t, and the prince's age was r-t. We know p-t = 2(r-t).
• "...when the princess's age was half the sum of their present age." This refers to another point in the past. At this time, the princess's age was (p+r)/2. Since this clause is part of the same sentence, this also occurred 't' years ago. So, p-t = (p+r)/2.
• "A princess is as old as the prince will be..." This refers to a future time when the prince will be a certain age. Let's call the number of years that pass until this time 'x'. The prince's age will be r+x. The princess's current age is equal to the prince's future age: p = r+x
• "...when the princess is twice as old as the prince was when the princess's age was half the sum of their present age." This ties everything together. The prince's age when the princess was half the sum of their present ages was r-t. So the princess's age at a future time, which is equal to the prince's age at that time (r+x), is twice the prince's age 't' years ago: r+x = 2(r-t). And since p = r+x, we also have p = 2(r-t).
Now we have a system of equations:
1. p - t = 2(r - t) => p - t = 2r - 2t => p + t = 2r
2. p - t = (p + r) / 2 => 2p - 2t = p + r => p - 2t = r
3. p = r + x
4. p = 2(r - t) (This is redundant since it's derived from equation 1 and 3, so we can ignore it)
From equation (2), we can express t in terms of p and r:
2t = p - r => t = (p-r)/2
Substitute this into equation (1):
p + (p-r)/2 = 2r
2p + p - r = 4r
3p = 5r
p = (5/3)r
Since ages must be positive integers, we consider cases where 'r' is a multiple of 3.
• If r = 3: p = 5. Then t = (5-3)/2 = 1. This is a valid solution. Princess is 5 and Prince is 3.
• If r = 6: p = 10. Then t = (10-6)/2 = 2. This is also valid. Princess is 10 and Prince is 6.
• General Solution: The relationship p = (5/3)r holds for any r that's a multiple of 3. Therefore, the general solution is:
Prince's age = 3n
Princess's age = 5n
where 'n' is any positive integer.
The prompt itself is as previously reported in How better is the new model (gemini 1.5 pro experimental 0827) - #13 by OrangiaNebula