LLM Environment Manager


Version 1.1.0 · License: MIT · Built with Bash

A powerful bash script for seamlessly switching between different LLM providers and models. Perfect for developers who work with multiple AI services and need to quickly switch between free tiers, paid models, or other providers based on availability and cost.

New in v1.1.0: Enhanced with a comprehensive help system, API connectivity testing, configuration backup/restore, bulk operations, and debug mode for easier troubleshooting.

Overview

Easily manage LLM credentials for any OpenAI-compatible provider, including OpenAI, OpenRouter, Cerebras, Groq, and 15+ others. llm-env is designed for applications that use the emerging OPENAI_* environment variable standard, and it makes cost management simple: switch from a free tier to a paid model the moment a quota runs out. Any tool that reads OpenAI-compatible environment variables works unchanged. API keys are stored in your shell profile, never in code, and because it is a pure bash script with zero dependencies, it runs everywhere.

The Problem

If you work with multiple AI providers, you've likely experienced these pain points:

  • Multiple providers, different endpoints: Each provider has unique API endpoints and authentication methods
  • OPENAI_* is the standard: Most AI tools expect OPENAI_* environment variables, but not every provider uses those names
  • Constant configuration editing: You end up editing ~/.bashrc or ~/.zshrc repeatedly (see the sketch after this list)
  • Context switching kills flow: Small mistakes cause mysterious 401s/404s, breaking your development rhythm
  • Configuration drift: Different setups across development, staging, and production environments

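Without a switcher, every change means re-exporting the full set of variables by hand. A rough sketch of that manual routine (values illustrative):

# Manual switching, repeated every time you change providers
export OPENAI_API_KEY="your_cerebras_key_here"
export OPENAI_BASE_URL="https://api.cerebras.ai/v1"
export OPENAI_MODEL="llama-3.3-70b"   # each provider names its models differently
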
The Solution: llm-env

llm-env --help

Supported Providers

This tool supports any OpenAI API compatible provider, including:

  • OpenAI: Industry-standard GPT models
  • Cerebras: Fast inference with competitive pricing
  • Groq: Lightning-fast inference
  • OpenRouter: Access to multiple models through one API
  • xAI Grok: Advanced reasoning and coding capabilities
  • DeepSeek: Excellent coding and reasoning models
  • Together AI: Competitive pricing with wide model selection
  • Fireworks AI: Ultra-fast inference optimized for production
  • And any OpenAI API compatible provider!

Installation

Quick Install

# Download and install (recommended); may need sudo
curl -fsSL https://raw.githubusercontent.com/samestrin/llm-env/main/install.sh | bash

Manual Install

  1. Clone this repository:

    git clone https://github.com/samestrin/llm-env.git
    cd llm-env
  2. Copy the script to your PATH:

    sudo cp llm-env /usr/local/bin/
    sudo chmod 755 /usr/local/bin/llm-env
  3. Add the helper function to your shell profile (~/.bashrc or ~/.zshrc):

    # LLM Environment Manager
    llm-env() {
      source /usr/local/bin/llm-env "$@"
    }
  4. Set up your API keys in your shell profile:

    # Add these to ~/.bashrc or ~/.zshrc
    export LLM_CEREBRAS_API_KEY="your_cerebras_key_here"
    export LLM_OPENAI_API_KEY="your_openai_key_here"
    export LLM_GROQ_API_KEY="your_groq_key_here"
    export LLM_OPENROUTER_API_KEY="your_openrouter_key_here"
  5. Reload your shell:

    source ~/.bashrc  # or ~/.zshrc
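
The wrapper in step 3 must source the script rather than execute it: a script run as a child process cannot modify your current shell's environment, so the exported variables would be lost. To confirm the wrapper is active after reloading:

type llm-env   # a correctly installed wrapper reports: llm-env is a function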

Usage

Basic Commands

# List all available providers
llm-env list

# Set a provider (switches all OpenAI-compatible env vars)
llm-env set cerebras
llm-env set openai
llm-env set groq

# Show current configuration
llm-env show

# Unset all LLM environment variables
llm-env unset

# Get help
llm-env --help

# Test provider connectivity
llm-env test cerebras
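
After llm-env set, the standard OPENAI_* variables point at the chosen provider. To inspect exactly what was exported:

llm-env set cerebras
env | grep '^OPENAI_'    # e.g. OPENAI_API_KEY, OPENAI_BASE_URL, OPENAI_MODEL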

Example Workflow

# Start with free tier
llm-env set openrouter2  # Uses the free DeepSeek model (with the default config)

# When free tier is exhausted, switch to paid
llm-env set cerebras     # Fast and affordable

# For specific tasks, use specialized models
llm-env set groq         # For speed
llm-env set openai       # For quality
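
The free-to-paid fallback can also be scripted. Below is a hypothetical helper, not part of llm-env, that starts on the free tier and retries on a paid provider when the response is HTTP 429 (rate limited):

# Hypothetical fallback helper; JSON quoting is naive, fine for simple prompts
ask() {
  local prompt="$1" status
  llm-env set openrouter2    # free tier first
  status=$(curl -s -o /tmp/llm-reply.json -w '%{http_code}' \
    "$OPENAI_BASE_URL/chat/completions" \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"model":"'"$OPENAI_MODEL"'","messages":[{"role":"user","content":"'"$prompt"'"}]}')
  if [ "$status" = "429" ]; then
    llm-env set cerebras     # quota exhausted: switch to the paid tier
    curl -s -o /tmp/llm-reply.json \
      "$OPENAI_BASE_URL/chat/completions" \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{"model":"'"$OPENAI_MODEL"'","messages":[{"role":"user","content":"'"$prompt"'"}]}'
  fi
  cat /tmp/llm-reply.json
}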

Integration Examples

Once you've set a provider, any tool using OpenAI-compatible environment variables will work:

# With curl
curl -H "Authorization: Bearer $OPENAI_API_KEY" \
     -H "Content-Type: application/json" \
     -d '{"model":"'$OPENAI_MODEL'","messages":[{"role":"user","content":"Hello!"}]}' \
     $OPENAI_BASE_URL/chat/completions
# With Python OpenAI client
python -c "import os, openai; print(openai.chat.completions.create(model=os.environ['OPENAI_MODEL'], messages=[{'role':'user','content':'Hello!'}]))"

With any LLM CLI tool that supports OpenAI-compatible environment variables:

qwen -p "What is the capital of France?"  # Uses current provider automatically

Common Use Cases

Development Workflow

# Set up for development
llm-env set cerebras     # Fast and cost-effective for testing

# Test your application
./your-app.py

# Switch to production model when ready
llm-env set openai       # Higher quality for production

Multiple Provider Setup

# Configure different providers for different tasks
llm-env set deepseek     # Excellent for code generation
llm-env set groq         # Fast inference for real-time apps
llm-env set openai       # Complex reasoning tasks

# Switch between providers as needed
llm-env list             # See all available providers
llm-env show             # Check current configuration
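
To sanity-check every provider you rely on in one pass (assuming a key is configured for each), loop over the test command:

for p in deepseek groq openai; do
  llm-env test "$p"    # verify connectivity per provider
done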

Integration with Tools

# Use with curl
curl -H "Authorization: Bearer $OPENAI_API_KEY" \
     -H "Content-Type: application/json" \
     $OPENAI_BASE_URL/models

# Use with Python scripts
python your_script.py    # Uses current provider automatically

# Test connectivity
llm-env test cerebras    # Verify provider is working

Configuration

The script uses a flexible configuration system that allows you to customize providers and models without modifying the script itself.

Quick Setup

# Create a user configuration file
source llm-env config init

# Edit your configuration
source llm-env config edit
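
The exact file format and key names are documented in the Configuration Guide. Purely as an illustration, a provider entry ties together a base URL, a default model, and the environment variable that holds the key, along these lines (hypothetical syntax, not the literal format):

# Illustrative sketch only; consult the Configuration Guide for real syntax
[cerebras]
base_url = https://api.cerebras.ai/v1
model = llama-3.3-70b
api_key_var = LLM_CEREBRAS_API_KEY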

Configuration Management

# Add a new provider
source llm-env config add my-provider

# Validate configuration
source llm-env config validate

# Backup configuration
source llm-env config backup

# Restore from backup
source llm-env config restore /path/to/backup.conf

# Bulk operations
source llm-env config bulk enable cerebras openai
source llm-env config bulk disable groq openrouter

For detailed configuration options, examples, and advanced setup, see the Configuration Guide.

Troubleshooting

Quick Diagnostics

# Verify setup
llm-env list
llm-env show

# Test API connectivity
llm-env test cerebras

# Enable debug mode for detailed troubleshooting
LLM_ENV_DEBUG=1 llm-env list

# Or manual test
curl -H "Authorization: Bearer $OPENAI_API_KEY" $OPENAI_BASE_URL/models
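
Most 401/404-style failures trace back to a variable that is empty or stale in the current shell. A minimal preflight check before suspecting the provider:

# Confirm the active shell actually has the variables exported
[ -n "$OPENAI_API_KEY" ]  || echo "OPENAI_API_KEY is not set; run: llm-env set <provider>"
[ -n "$OPENAI_BASE_URL" ] || echo "OPENAI_BASE_URL is not set"
[ -n "$OPENAI_MODEL" ]    || echo "OPENAI_MODEL is not set"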

For detailed troubleshooting, common issues, and solutions, see the Troubleshooting Guide.

Why Bash?

llm-env is written in Bash so it runs anywhere Bash runs—macOS, Linux, containers, CI—without asking you to install Python or Node first. It's intentionally compatible with older shells and includes compatibility shims for legacy behavior.

Universal Compatibility:

  • Works out-of-the-box on macOS's default Bash 3.2 and modern Bash 5.x installations
  • Linux distros with Bash 4.0+ are fully supported
  • Backwards-compatible layer ensures features like associative arrays "just work," even on Bash 3.2
  • Verified by automated test matrix across Bash 3.2, 4.0+, and 5.x on macOS and Linux

Security Benefits:

  • Keys live in environment variables—never written to config files
  • Outputs are masked (e.g., ••••abcd) to keep secrets safe on screen and in screenshots, as sketched after this list
  • Switching is local; nothing is sent over the network except your own API calls during tests
  • No external dependencies means fewer attack vectors
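
As a sketch of the masking technique mentioned above (not llm-env's exact implementation), plain bash can print only the last four characters of a key:

# Mask all but the last 4 characters of the active key
printf '••••%s\n' "${OPENAI_API_KEY: -4}"   # note the space before -4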

Documentation

Complete documentation is available in the docs directory.

Testing

A comprehensive test suite ensures reliability across platforms and Bash versions.

Running Tests

# Run all tests
./tests/run_tests.sh

# Run specific test suites
./tests/run_tests.sh --unit-only
./tests/run_tests.sh --integration-only
./tests/run_tests.sh --system-only

# Run individual test files
bats tests/unit/test_validation.bats
bats tests/integration/test_providers.bats

Test Structure

  • Unit Tests (tests/unit/) - Core functionality and validation
  • Integration Tests (tests/integration/) - Provider management and configuration
  • System Tests (tests/system/) - Cross-platform compatibility and edge cases
  • Regression Tests - Prevent known issues from reoccurring

Current Test Results

All test suites passing across supported platforms:

  • Unit Tests: 40/40 passing
  • Integration Tests: 13/13 passing
  • System Tests: 40/40 passing
  • Total Coverage: 93 test cases

Platform Support:

  • macOS (Bash 3.2+ and 5.x)
  • Ubuntu/Linux (Bash 4.0+)
  • Multi-version compatibility testing

Test Requirements

  • BATS testing framework
  • Bash 3.2+ (automatically tested across versions)
  • No external dependencies required for basic tests
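
One common way to install BATS, assuming Homebrew or apt is available:

brew install bats-core      # macOS (Homebrew)
sudo apt-get install bats   # Debian/Ubuntu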

Contributing

Contributions are welcome! See the Development Guide for details on:

  • Adding new providers
  • Improving functionality
  • Testing and validation
  • Code style guidelines

Version

Current Version: 1.1.0

For detailed version history, feature updates, and breaking changes, see CHANGELOG.md.

Find This Useful?

If you find llm-env useful, please consider starring the repository and supporting the project:

Buy Me A Coffee

License

MIT License - see LICENSE file for details.

Related Tools

This tool works great with:

  • llm - Simon Willison's LLM CLI
  • aider - AI pair programming
  • LiteLLM - A library to simplify calling all LLM APIs
  • LangChain - A framework for building LLM applications

It also works well with CLI coding tools. I use it with qwen-code + qwen-prompts, a collection of "hybrid prompt chaining" slash prompts, but it will work with any tool that uses OpenAI-compatible APIs via environment variables.

Additional: Applications, Scripts, and Frameworks compatible with llm-env
