Hi guys!
After participating in the Gemini AI competition, I wanted to share a straightforward approach that avoids the unnecessarily complex patterns found in Google's official examples.
While Google's documentation and learning material push you toward intricate, Google-specific implementations, here is a cleaner solution that keeps you vendor-independent.
The example is Go, because you need a backend anyway, and I do not recommend proprietary services like Firebase: in production they can easily cost 10-100 times more than necessary.
import (
    "context"
    "errors"
    "fmt"
    "time"

    "github.com/google/generative-ai-go/genai"
    "google.golang.org/api/option"
)

// Options (Key, Scantimeouts) is the app's own config struct.
func callGemini(instruction, prompt string, temp float32) (string, error) {
    ctx, cancel := context.WithTimeout(context.Background(), time.Duration(Options.Scantimeouts)*time.Second)
    defer cancel()

    // Buffered channels (capacity 1) so the goroutine can always complete
    // its send and exit, even if we return early on timeout. With unbuffered
    // channels the goroutine would block forever and leak.
    resultChan := make(chan string, 1)
    errChan := make(chan error, 1)

    client, err := genai.NewClient(ctx, option.WithAPIKey(Options.Key))
    if err != nil {
        return "", fmt.Errorf("failed to create client: %w", err)
    }
    defer client.Close()

    model := client.GenerativeModel("gemini-1.5-flash")
    model.SetTemperature(temp)
    model.SystemInstruction = &genai.Content{
        Parts: []genai.Part{genai.Text(instruction)},
    }

    go func() {
        resp, err := model.GenerateContent(ctx, genai.Text(prompt))
        if err != nil {
            errChan <- fmt.Errorf("failed to generate content: %w", err)
            return
        }
        // Content can be nil when the response was blocked, so check it too.
        if len(resp.Candidates) == 0 || resp.Candidates[0].Content == nil || len(resp.Candidates[0].Content.Parts) == 0 {
            errChan <- errors.New("no content generated")
            return
        }
        text, ok := resp.Candidates[0].Content.Parts[0].(genai.Text)
        if !ok {
            errChan <- errors.New("unexpected content type in response")
            return
        }
        resultChan <- string(text)
    }()

    select {
    case <-ctx.Done():
        return "", errors.New("operation timed out")
    case err := <-errChan:
        return "", err
    case result := <-resultChan:
        return result, nil
    }
}
The wrapper takes a Promise-like approach built on Go's native concurrency: the API call runs in a goroutine, and a select over the result channel, the error channel, and the context deadline gives you proper error boundaries and timeouts - patterns you want in production anyway. By treating the LLM API as a simple text-in, text-out service, you get an abstraction that is both robust and portable.
Because all the provider-specific code lives inside the goroutine, adapting it to a different AI provider's API means changing only a few lines.
I hope this helps someone.