Description of Issue: I am reporting severe degradation in the model’s ability to follow explicit instructions, specifically regarding “Negative Constraints” and “Stop Sequences”. The model exhibits extreme “laziness” and “hallucinated compliance.”
Specific Failures Encountered:
- Violation of Negative Constraints (Critical):
  - I repeatedly instructed the model: “DO NOT generate code yet,” “Stop and listen,” and “Wait for my command.”
  - The model acknowledged these commands but violated them immediately in its next response, outputting long code blocks despite being explicitly forbidden to do so (a simple compliance check is sketched after this list).
  - Diagnosis: the model prioritizes pattern completion (auto-complete behavior) over explicit user restrictions.
- Lazy Generation & Truncation:
  - When generating critical System Prompts, the model failed to complete the output, cutting off mandatory closing tags (e.g., **END_OF_SYSTEM_INSTRUCTIONS**); see the truncation check sketched after this list.
  - When confronted, the model admitted to “laziness” and “token saving” behaviors, which renders it unusable for professional coding tasks.
- False State Claims (Hallucination):
  - The model claimed to be operating at “100% Integrity” and “Strict Mode” while simultaneously failing basic formatting and logic tasks.
  - It hallucinates capabilities (e.g., “I have loaded the core”) that are not reflected in its actual output performance.
- Context Amnesia:
  - The model fails to retain instructions across immediate conversation turns. It apologizes for an error (e.g., rushing output) and then commits the exact same error in its very next response.
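For reproducibility, here is a minimal sketch of the two client-side checks I used to confirm the first two failures above. It is my own illustrative code, not part of any vendor SDK; the helper names (`contains_forbidden_code`, `is_truncated`) and the closing tag are assumptions drawn from my own prompts, and the checks simply scan a response string for Markdown code fences and for the required closing marker.

```python
import re

# Illustrative client-side checks (my own helpers, not an official API)
# used to confirm the failures reported above.

FENCE = "`" * 3                          # Markdown code-fence marker (three backticks)
CODE_FENCE = re.compile(re.escape(FENCE))

def contains_forbidden_code(response_text: str) -> bool:
    """True if the response contains a fenced code block, i.e. the model
    emitted code despite an explicit 'DO NOT generate code yet' instruction."""
    return bool(CODE_FENCE.search(response_text))

def is_truncated(response_text: str,
                 closing_tag: str = "**END_OF_SYSTEM_INSTRUCTIONS**") -> bool:
    """True if the mandatory closing tag is missing, i.e. the System Prompt
    was cut off before completion."""
    return closing_tag not in response_text

if __name__ == "__main__":
    reply = "Understood, I will wait.\n" + FENCE + "python\nprint('hi')\n" + FENCE
    print(contains_forbidden_code(reply))            # True -> negative constraint violated
    print(is_truncated("...partial system prompt"))  # True -> closing tag cut off
```

Both checks return True on the transcripts described above, which is how I verified the violations rather than relying on the model's own claims of compliance.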
Impact: The model is currently unusable for complex Prompt Engineering or strict logical tasks because it cannot be “slowed down” or forced to adhere to a step-by-step listening protocol. It rushes to low-quality solutions regardless of user input.
Expected Behavior: When a user says “Do not generate code,” the model must HALT generation completely and wait. It should not output a single line of code until authorized.
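To make the expected behavior concrete, the following is a minimal sketch, under my own assumptions, of the “halt and wait” contract I am describing: code generation is withheld until the user issues an explicit authorization. The `AUTHORIZE` keyword, the `handle_turn` function, and the `generate_code` callback are hypothetical and exist only to illustrate the protocol.

```python
# A minimal sketch of the expected "halt and wait" contract.
# The AUTHORIZE keyword and the generate_code callback are hypothetical;
# they only illustrate that no code should appear until the user allows it.

def handle_turn(user_message: str, state: dict, generate_code) -> str:
    """Respond to one conversation turn, emitting code only after authorization."""
    text = user_message.strip().lower()

    if "do not generate code" in text or "wait for my command" in text:
        state["code_allowed"] = False
        return "Acknowledged. Holding all code generation until you authorize it."

    if text.startswith("authorize"):
        state["code_allowed"] = True

    if state.get("code_allowed"):
        return generate_code(user_message)

    # Default: stay in listening mode and output no code at all.
    return "Still waiting for your explicit authorization before generating code."

if __name__ == "__main__":
    state = {"code_allowed": False}
    fake_generator = lambda msg: "def example():\n    pass"
    print(handle_turn("Do NOT generate code yet.", state, fake_generator))
    print(handle_turn("Here is more context about the task.", state, fake_generator))
    print(handle_turn("AUTHORIZE: generate the function now.", state, fake_generator))
```

The point of the sketch is the ordering guarantee: an acknowledged negative constraint must keep suppressing code across subsequent turns until it is explicitly lifted, which is exactly what the model currently fails to do.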