Communicate with any LLM provider using a single, unified interface.
Switch between OpenAI, Anthropic, DeepSeek, Mistral, Ollama, and more without changing your code.
```shell
go get github.com/mozilla-ai/any-llm-go
```

```shell
# Set up your API key(s)
export OPENAI_API_KEY="YOUR_KEY_HERE" # or ANTHROPIC_API_KEY, etc.
```

```go
package main

import (
	"context"
	"fmt"
	"log"

	anyllm "github.com/mozilla-ai/any-llm-go"
	"github.com/mozilla-ai/any-llm-go/providers/openai"
)

func main() {
	ctx := context.Background()

	provider, err := openai.New()
	if err != nil {
		log.Fatal(err)
	}

	response, err := provider.Completion(ctx, anyllm.CompletionParams{
		Model: "gpt-4o-mini",
		Messages: []anyllm.Message{
			{Role: anyllm.RoleUser, Content: "Hello!"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(response.Choices[0].Message.Content)
}
```

That's it! To switch providers, change the import and constructor (e.g., `anthropic.New()` instead of `openai.New()`).
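The switch-by-constructor idea boils down to programming against one interface. A toy sketch of that pattern — these types are illustrative only, not the library's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// Provider is a stand-in for a unified completion interface:
// any backend that can turn a prompt into a reply satisfies it.
type Provider interface {
	Complete(prompt string) (string, error)
}

// echoProvider and shoutProvider play the role of two different
// vendor backends hidden behind the same interface.
type echoProvider struct{}

func (echoProvider) Complete(prompt string) (string, error) {
	if prompt == "" {
		return "", errors.New("empty prompt")
	}
	return "echo: " + prompt, nil
}

type shoutProvider struct{}

func (shoutProvider) Complete(prompt string) (string, error) {
	return "SHOUT: " + prompt, nil
}

// ask is written once against the interface; swapping providers
// means changing only the value passed in, never this code.
func ask(p Provider, prompt string) string {
	reply, err := p.Complete(prompt)
	if err != nil {
		return "error: " + err.Error()
	}
	return reply
}

func main() {
	for _, p := range []Provider{echoProvider{}, shoutProvider{}} {
		fmt.Println(ask(p, "Hello!"))
	}
}
```

Application code written this way never names a concrete vendor, which is what makes the one-line constructor swap possible.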
- Go 1.25 or newer
- API keys for whichever LLM providers you want to use
Import the main package and the providers you need:

```go
import (
	anyllm "github.com/mozilla-ai/any-llm-go"

	"github.com/mozilla-ai/any-llm-go/providers/anthropic" // Anthropic
	"github.com/mozilla-ai/any-llm-go/providers/openai"    // OpenAI
)
```

See our list of supported providers to choose which ones you need.
Set environment variables for your chosen providers:
```shell
export OPENAI_API_KEY="your-key-here"
export ANTHROPIC_API_KEY="your-key-here"
export DEEPSEEK_API_KEY="your-key-here"
# ... etc.
```

Alternatively, pass API keys directly in your code using options:

```go
provider, err := openai.New(anyllm.WithAPIKey("your-key-here"))
```

- Simple, unified interface - Same types and patterns across all providers
- Idiomatic Go - Follows Go conventions with proper error handling and context support
- Leverages official provider SDKs - Uses `github.com/openai/openai-go` and `github.com/anthropics/anthropic-sdk-go`
- Type-safe - Full type definitions for all request and response types
- Streaming support - Channel-based streaming that's natural in Go
- Battle-tested patterns - Based on the proven any-llm Python library
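The channel-based streaming mentioned above is ordinary Go fan-in: a producer goroutine sends chunks on one channel and reports a final error on another. A minimal sketch with a mock producer, using no any-llm-go types:

```go
package main

import "fmt"

// streamWords mimics the shape of a streaming API: it returns a chunk
// channel plus a one-shot error channel, producing in a goroutine.
func streamWords(words []string) (<-chan string, <-chan error) {
	chunks := make(chan string)
	errs := make(chan error, 1) // buffered so the producer never blocks on it
	go func() {
		defer close(chunks)
		for _, w := range words {
			chunks <- w
		}
		errs <- nil // signal clean completion
	}()
	return chunks, errs
}

// collect drains the chunk channel, then reads the final error once.
func collect(chunks <-chan string, errs <-chan error) (string, error) {
	var out string
	for c := range chunks { // range ends when the producer closes the channel
		out += c
	}
	return out, <-errs
}

func main() {
	chunks, errs := streamWords([]string{"Go ", "is ", "fun"})
	text, err := collect(chunks, errs)
	if err != nil {
		panic(err)
	}
	fmt.Println(text)
}
```

Draining the chunk channel before reading the error channel ensures the consumer sees every chunk produced before the stream ended.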
Create a provider instance and use it for requests:

```go
import (
	"context"
	"fmt"
	"log"

	anyllm "github.com/mozilla-ai/any-llm-go"
	"github.com/mozilla-ai/any-llm-go/providers/openai"
)

// Create the provider once, reuse it for multiple requests.
provider, err := openai.New(anyllm.WithAPIKey("your-api-key"))
if err != nil {
	log.Fatal(err)
}

ctx := context.Background()

response, err := provider.Completion(ctx, anyllm.CompletionParams{
	Model: "gpt-4o-mini",
	Messages: []anyllm.Message{
		{Role: anyllm.RoleUser, Content: "Hello!"},
	},
})
if err != nil {
	log.Fatal(err)
}

fmt.Println(response.Choices[0].Message.Content)
```

Provider instances are reusable and recommended for production applications.
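Reusing one instance across goroutines is the usual production shape; whether a given provider is safe for concurrent use depends on that provider, so check its documentation. A toy sketch of the fan-out pattern, with a fake provider standing in for the real one:

```go
package main

import (
	"fmt"
	"sync"
)

// fakeProvider stands in for a shared provider instance; the real
// any-llm-go types are assumed here, not shown.
type fakeProvider struct{}

func (fakeProvider) Complete(prompt string) string { return "ok:" + prompt }

// fanOut sends n requests through one shared provider instance,
// one goroutine per request, and gathers the replies in order.
func fanOut(p fakeProvider, n int) []string {
	results := make([]string, n)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			// Each goroutine writes only its own slot, so no mutex is needed.
			results[i] = p.Complete(fmt.Sprintf("req-%d", i))
		}(i)
	}
	wg.Wait()
	return results
}

func main() {
	fmt.Println(fanOut(fakeProvider{}, 3))
}
```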
Use channels for streaming responses:

```go
chunks, errs := provider.CompletionStream(ctx, anyllm.CompletionParams{
	Model: "gpt-4o-mini",
	Messages: []anyllm.Message{
		{Role: anyllm.RoleUser, Content: "Write a short poem about Go."},
	},
})

for chunk := range chunks {
	if len(chunk.Choices) > 0 {
		fmt.Print(chunk.Choices[0].Delta.Content)
	}
}

if err := <-errs; err != nil {
	log.Fatal(err)
}
```

Describe the functions the model may call, then inspect the response for tool calls:

```go
response, err := provider.Completion(ctx, anyllm.CompletionParams{
	Model: "gpt-4o-mini",
	Messages: []anyllm.Message{
		{Role: anyllm.RoleUser, Content: "What's the weather in Paris?"},
	},
	Tools: []anyllm.Tool{
		{
			Type: "function",
			Function: anyllm.Function{
				Name:        "get_weather",
				Description: "Get the current weather for a location",
				Parameters: map[string]any{
					"type": "object",
					"properties": map[string]any{
						"location": map[string]any{
							"type":        "string",
							"description": "The city name",
						},
					},
					"required": []string{"location"},
				},
			},
		},
	},
	ToolChoice: "auto",
})

// Check for tool calls.
if len(response.Choices[0].Message.ToolCalls) > 0 {
	tc := response.Choices[0].Message.ToolCalls[0]
	fmt.Printf("Function: %s, Args: %s\n", tc.Function.Name, tc.Function.Arguments)
}
```

For models that support extended thinking (like Claude):
```go
response, err := provider.Completion(ctx, anyllm.CompletionParams{
	Model: "claude-sonnet-4-20250514",
	Messages: []anyllm.Message{
		{Role: anyllm.RoleUser, Content: "Solve this step by step: What is 15% of 80?"},
	},
	ReasoningEffort: anyllm.ReasoningEffortMedium,
})
if err != nil {
	log.Fatal(err)
}

if response.Choices[0].Message.Reasoning != nil {
	fmt.Println("Thinking:", response.Choices[0].Message.Reasoning.Content)
}
fmt.Println("Answer:", response.Choices[0].Message.Content)
```

All provider errors are normalized to common error types:
```go
response, err := provider.Completion(ctx, params)
if err != nil {
	switch {
	case errors.Is(err, anyllm.ErrRateLimit):
		// Handle rate limiting - maybe retry with backoff.
	case errors.Is(err, anyllm.ErrAuthentication):
		// Handle auth errors - check API key.
	case errors.Is(err, anyllm.ErrContextLength):
		// Handle context too long - reduce input.
	default:
		// Handle other errors.
	}
}
```

You can also use `errors.As` to extract provider-specific details:
```go
var rateLimitErr *anyllm.RateLimitError
if errors.As(err, &rateLimitErr) {
	fmt.Printf("Rate limited by %s: %s\n", rateLimitErr.Provider, rateLimitErr.Message)
}
```

Each provider uses its own model identifiers. To find available models:
- Check the provider's documentation
- Use the `ListModels` API (if the provider supports it):

```go
provider, _ := openai.New()
models, err := provider.ListModels(ctx)
if err != nil {
	log.Fatal(err)
}
for _, model := range models.Data {
	fmt.Println(model.ID)
}
```

| Provider | Completion | Streaming | Tools | Reasoning | Embeddings |
|---|---|---|---|---|---|
| Anthropic | ✅ | ✅ | ✅ | ✅ | ❌ |
| DeepSeek | ✅ | ✅ | ✅ | ✅ | ❌ |
| Gemini | ✅ | ✅ | ✅ | ✅ | ✅ |
| Groq | ✅ | ✅ | ✅ | ❌ | ❌ |
| llama.cpp | ✅ | ✅ | ✅ | ❌ | ✅ |
| Llamafile | ✅ | ✅ | ✅ | ❌ | ✅ |
| Mistral | ✅ | ✅ | ✅ | ✅ | ✅ |
| Ollama | ✅ | ✅ | ✅ | ✅ | ✅ |
| OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ |
| z.ai | ✅ | ✅ | ✅ | ✅ | ❌ |
More providers coming soon! See docs/providers.md for the full list.
- Quickstart Guide - Get up and running quickly
- Supported Providers - List of all supported LLM providers
- API Reference - Complete API documentation
- Examples - Code examples for common use cases
This is the official Go port of any-llm. Key differences:
| Feature | Python any-llm | Go any-llm |
|---|---|---|
| Async support | `async`/`await` | Goroutines + channels |
| Streaming | Iterators | Channels |
| Error handling | Exceptions | error return values |
| Type hints | Type annotations | Static types |
| Provider usage | String-based | Direct instantiation |
We welcome contributions from developers of all skill levels! Please see our Contributing Guide or open an issue to discuss changes.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.