For a simple translation prompt, I consistently get back an empty candidate with finishReason: "MALFORMED_RESPONSE". Full dump below:
chatConfig:
  model: "gemini-3-flash-preview"
  config:
    temperature: 0
    maxOutputTokens: 1024
    systemInstruction: "Follow the instructions given to you precisely, and perform your task with accuracy and attention to detail. Unless otherwise instructed, your response must contain nothing beyond the return item(s) explicitly requested."
    safetySettings:
      - category: "HARM_CATEGORY_HARASSMENT"
        threshold: "BLOCK_NONE"
      - category: "HARM_CATEGORY_HATE_SPEECH"
        threshold: "BLOCK_NONE"
      - category: "HARM_CATEGORY_SEXUALLY_EXPLICIT"
        threshold: "BLOCK_NONE"
      - category: "HARM_CATEGORY_DANGEROUS_CONTENT"
        threshold: "BLOCK_NONE"
    responseModalities:
      - "TEXT"
    thinkingConfig:
      thinkingLevel: "minimal"
  history: []
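To make the repro concrete outside my app, here is a sketch of the JSON body I believe this chatConfig corresponds to for a raw REST `generateContent` call. The field placement (notably `thinkingConfig` nested under `generationConfig`) is my reading of the API docs, not something the SDK dump above guarantees:

```python
# Sketch of the JSON body for
# POST /v1beta/models/gemini-3-flash-preview:generateContent
# Field placement is an assumption from the REST docs; the SDK may shape it differently.
SYSTEM = (
    "Follow the instructions given to you precisely, and perform your task "
    "with accuracy and attention to detail. Unless otherwise instructed, your "
    "response must contain nothing beyond the return item(s) explicitly requested."
)

def build_payload(user_text: str) -> dict:
    """Assemble the request body matching the chatConfig dumped above."""
    return {
        "systemInstruction": {"parts": [{"text": SYSTEM}]},
        "contents": [{"role": "user", "parts": [{"text": user_text}]}],
        "safetySettings": [
            {"category": c, "threshold": "BLOCK_NONE"}
            for c in (
                "HARM_CATEGORY_HARASSMENT",
                "HARM_CATEGORY_HATE_SPEECH",
                "HARM_CATEGORY_SEXUALLY_EXPLICIT",
                "HARM_CATEGORY_DANGEROUS_CONTENT",
            )
        ],
        "generationConfig": {
            "temperature": 0,
            "maxOutputTokens": 1024,
            "responseModalities": ["TEXT"],
            "thinkingConfig": {"thinkingLevel": "minimal"},
        },
    }
```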
---
lastMessage:
  role: "user"
  parts:
    - text: |-
        I will give you a yaml "clauses" codeblock containing a numbered set of Estonian clauses.
        Here it is:
        ```yaml
        clauses:
          1: "Ma saan."
        ```
        Return a yaml "translated_clauses" codeblock, with each clause of the original "clauses" codeblock independently and faithfully translated from Estonian to American English, something like this:
        ```yaml
        translated_clauses:
          1: "${translated_clause_1}"
        ```
        If the text is explanatory prose that wraps Estonian words in quotation marks and already provides American English translations for those wrapped words, do not translate the words inside those quotation marks. If such quoted words appear without an accompanying translation, translate them like everything else.
        Note: The provided Estonian text was itself translated/derived from the following true original text. If relevant, use the original to improve fidelity and avoid mistakes introduced by intermediate translation.
        Do not translate or include any parenthetical explanatory text found in the original; treat such parentheses as meta-notes, not content.
        Original source text:
        ```
        I can.
        ```
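For context, this is how I consume a well-formed reply. A stdlib-only sketch of the extraction step; `extract_translated_clauses` is my own helper (a real YAML parser would be more robust than this flat `N: "text"` regex):

```python
import re

def extract_translated_clauses(reply: str) -> dict:
    """Pull the clause map out of a ```yaml translated_clauses codeblock.

    Minimal parser for the flat `N: "text"` shape the prompt requests;
    returns {} when no matching codeblock is present (e.g. an empty reply).
    """
    m = re.search(r"```yaml\s*\ntranslated_clauses:\n(.*?)```", reply, re.DOTALL)
    if not m:
        return {}
    clauses = {}
    for num, text in re.findall(r'^\s*(\d+):\s*"(.*)"\s*$', m.group(1), re.MULTILINE):
        clauses[int(num)] = text
    return clauses

reply = '```yaml\ntranslated_clauses:\n  1: "I can."\n```'
print(extract_translated_clauses(reply))  # {1: 'I can.'}
```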
---
apiResult:
  sdkHttpResponse:
    headers:
      alt-svc: "h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000"
      content-encoding: "gzip"
      content-type: "application/json; charset=UTF-8"
      date: "Wed, 28 Jan 2026 04:16:06 GMT"
      server: "scaffolding on HTTPServer2"
      server-timing: "gfet4t7; dur=1099"
      transfer-encoding: "chunked"
      vary: "Origin, X-Origin, Referer"
      x-content-type-options: "nosniff"
      x-frame-options: "SAMEORIGIN"
      x-xss-protection: "0"
  candidates:
    - content:
        parts:
          - text: ""
        role: "model"
      finishReason: "MALFORMED_RESPONSE"
      index: 0
  modelVersion: "gemini-3-flash-preview"
  responseId: "ho15abrUArqs4-EPyq72-QE"
  usageMetadata:
    promptTokenCount: 278
    totalTokenCount: 278
    promptTokensDetails:
      - modality: "TEXT"
        tokenCount: 278
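In the meantime I detect this case and retry. A sketch against the response shape dumped above; `is_malformed` is a hypothetical helper of mine, not part of any SDK:

```python
def is_malformed(response: dict) -> bool:
    """True when the first candidate finished with MALFORMED_RESPONSE
    or produced no non-empty text part, as in the apiResult above."""
    candidates = response.get("candidates") or []
    if not candidates:
        return True
    cand = candidates[0]
    if cand.get("finishReason") == "MALFORMED_RESPONSE":
        return True
    parts = (cand.get("content") or {}).get("parts") or []
    return not any(p.get("text") for p in parts)
```

A caller can loop on this (with a retry cap) before surfacing an error to the user.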
This prompt template typically works just fine with this model and these parameters.
