A simple Go SDK for interacting with LLM providers. Supports streaming completions, custom instructions, and easy provider integration.
- Easy-to-use Go SDK
- Chat completions (non-streaming and streaming)
- Easily switch between providers and models
- Options for customizing requests (model, system prompt, max tokens, temperature, reasoning effort)
- Support for tool calling
- Anannas
- Anthropic
- Gemini
- GroqCloud
- Mistral
- OpenAI
- OpenRouter
- Perplexity
- Xai
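Every constructor returns the same client type, so swapping providers is a one-line change. A minimal sketch of runtime provider selection (the `ai.Provider` return type is an assumption, based on the `Provider` interface declared in sdk/provider.go):

```go
package main

import ai "github.com/unsafe0x0/ai/v2"

// newClient picks a provider at runtime. Each constructor is assumed to
// return the same ai.Provider interface (see sdk/provider.go), so the
// rest of the program stays provider-agnostic.
func newClient(provider, apiKey string) ai.Provider {
	switch provider {
	case "anthropic":
		return ai.Anthropic(apiKey)
	case "gemini":
		return ai.Gemini(apiKey)
	default:
		return ai.OpenAi(apiKey)
	}
}
```

Because the request and response types are shared, the same `ChatCompletion` call works unchanged against whichever client this returns.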
go.mod # Go module file
LICENSE # License file
readme.md # Project documentation
ai.go # Main package entrypoint
base/
│ ├── base.go # Base provider
│ └── shared.go # Shared logic
sdk/ # Core SDK interfaces and types
│ ├── errors.go # API errors handling
│ ├── message.go # Message type and roles
│ ├── options.go # Options type for request customization
│ └── provider.go # Provider interface and SDK wrapper
providers/ # Provider implementations
│ ├── anannas.go # Anannas provider
│ ├── anthropic.go # Anthropic provider
│ ├── gemini.go # Gemini provider
│ ├── groqcloud.go # GroqCloud provider
│ ├── mistral.go # Mistral provider
│ ├── openai.go # OpenAI provider
│ ├── openrouter.go # OpenRouter provider
│ ├── perplexity.go # Perplexity provider
│ └── xai.go # Xai provider
example/ # Example usage of the SDK
│ └── readme.md
Import the SDK in your Go project:
import "github.com/unsafe0x0/ai/v2"

To use a provider, initialize it with your API key using the provided constructor functions:
// Anannas
client := ai.Anannas("YOUR_ANANNAS_API_KEY")
// Anthropic
client := ai.Anthropic("YOUR_ANTHROPIC_API_KEY")
// Gemini
client := ai.Gemini("YOUR_GEMINI_API_KEY")
// GroqCloud
client := ai.GroqCloud("YOUR_GROQ_API_KEY")
// Mistral
client := ai.Mistral("YOUR_MISTRAL_API_KEY")
// OpenAI
client := ai.OpenAi("YOUR_OPENAI_API_KEY")
// OpenRouter
client := ai.OpenRouter("YOUR_OPEN_ROUTER_API_KEY")
// Perplexity
client := ai.Perplexity("YOUR_PERPLEXITY_API_KEY")
// Xai
client := ai.Xai("YOUR_XAI_API_KEY")

Create a `CompletionRequest` to specify messages, model, and other options, then call `ChatCompletion()`:
ctx := context.Background()
resp := client.ChatCompletion(ctx, &ai.CompletionRequest{
Messages: []ai.Message{
{Role: "user", Content: "Your message here"},
},
Model: "llama3-8b-8192",
SystemPrompt: "You are a helpful assistant.",
MaxTokens: 150,
Temperature: 0.7,
Stream: true,
})
if resp.Error != nil {
log.Fatalf("Chat completion failed: %v", resp.Error)
}
if resp.Stream != nil {
defer resp.Stream.Close()
fmt.Println("Response:")
if _, err := io.Copy(os.Stdout, resp.Stream); err != nil {
log.Fatalf("Failed to read stream: %v", err)
}
fmt.Println()
} else {
fmt.Println("Response:", resp.Content)
}

- `Model` (string): The model to use (e.g., "gpt-4o", "llama3-8b-8192").
- `SystemPrompt` (string): Custom system prompt to guide the AI's behavior.
- `MaxTokens` (int): The maximum number of tokens to generate.
- `ReasoningEffort` (string): Custom reasoning effort (e.g., "low", "medium", "high").
- `Temperature` (float32): Controls randomness of the output (0.0 to 1.0).
- `Stream` (bool): Set to `true` for a streaming response, `false` for a single response.
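For a one-shot reply, the same request shape applies with `Stream` set to `false`; the full text then arrives in `resp.Content` instead of `resp.Stream`. A sketch under the same assumptions as the example above:

```go
// Non-streaming completion: resp.Content holds the entire reply.
ctx := context.Background()
resp := client.ChatCompletion(ctx, &ai.CompletionRequest{
	Messages: []ai.Message{
		{Role: "user", Content: "Summarize Go error handling in one sentence."},
	},
	Model:       "gpt-4o",
	MaxTokens:   100,
	Temperature: 0.2,
	Stream:      false, // single response, no io.ReadCloser to drain
})
if resp.Error != nil {
	log.Fatalf("Chat completion failed: %v", resp.Error)
}
fmt.Println("Response:", resp.Content)
```

This avoids the `resp.Stream` branch entirely, which is usually simpler when you do not need tokens as they arrive.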
All code examples for the latest version of this SDK can be found in the ai-sdk-examples repository.
Contributions are welcome!
- Fork the repository.
- Create a new branch.
- Commit your changes.
- Push to the branch.
- Open a pull request.
If you find a bug or have a feature request, please open an issue on GitHub.
Note: This project is in early development. Features and structure may change frequently.