LLM Mock

OpenAI-compatible mock server for testing LLM integrations.

Features

  • OpenAI API compatibility with key endpoints (/v1/models, /v1/chat/completions, /v1/responses)
  • Configurable mock responses via strategies
  • Default mirror strategy (echoes input as output)
  • Streaming support for both Chat Completions and Responses APIs

Quick Start

Option A: Docker

docker container run -p 8000:8000 ghcr.io/modai-systems/llmock:latest

Test it with this sample request (yes, the default API key really is your-secret-api-key):

curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-secret-api-key" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

The default mirror strategy echoes the input back, so the response content is simply Hello!.
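Because the mirror strategy is deterministic, it is straightforward to assert on in integration tests. Here is a minimal sketch using pytest and the official OpenAI Python client, assuming the mock server is already running on port 8000 (the test name is illustrative, not part of the repo):

from openai import OpenAI

def test_mirror_reply():
    client = OpenAI(
        base_url="http://localhost:8000/v1",
        api_key="your-secret-api-key",
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    # The mirror strategy echoes the prompt back verbatim
    assert response.choices[0].message.content == "Hello!"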

Option B: Local Build

Prerequisites:

  • Python 3.14+
  • uv (package manager)

Installation:

git clone https://github.com/modAI-systems/llmock.git
cd llmock
uv sync --all-extras

Run the Server:

uv run uvicorn llmock.app:app --host 0.0.0.0 --port 8000

For development with auto-reload:

uv run uvicorn llmock.app:app --host 0.0.0.0 --port 8000 --reload

The server will be available at http://localhost:8000. A health check endpoint is available at /health.
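If you want to script a readiness check, here is a minimal sketch, assuming /health returns HTTP 200 once the server is up (the exact response body is not documented here):

import urllib.request

# Poll the mock server's health endpoint; a 200 status means it is ready.
with urllib.request.urlopen("http://localhost:8000/health") as response:
    print("server healthy:", response.status == 200)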

Usage Example

Point your OpenAI client to the mock server:

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="mock-key"  # Any key works
)

# Chat Completions API
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)

# Responses API
response = client.responses.create(
    model="gpt-4o",
    input="Hello!"
)
print(response.output[0].content[0].text)
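Streaming is supported as well (see Features). A minimal sketch of a streamed chat completion, assuming the mock emits standard OpenAI-style chunks:

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="mock-key",
)

# Chat Completions API with streaming; the mirrored reply arrives in chunks
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="")
print()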

Configuration

Edit config.yaml to configure available models:

models:
  - id: "gpt-4o"
    created: 1715367049
    owned_by: "openai"
  - id: "gpt-4o-mini"
    created: 1721172741
    owned_by: "openai"
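To confirm the configuration is picked up, you can list the models through the same OpenAI client (a sketch; the ids and owned_by values come straight from your config.yaml):

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="mock-key",
)

# /v1/models should reflect the entries in config.yaml
for model in client.models.list():
    print(model.id, model.owned_by)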

Development

Run Tests

uv run pytest -v

Lint & Format

uv run ruff format src tests    # Format code
uv run ruff check src tests     # Lint code

Documentation

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Run tests and linting (uv run pytest && uv run ruff check src tests)
  4. Commit your changes (git commit -m 'Add amazing feature')
  5. Push to the branch (git push origin feature/amazing-feature)
  6. Open a Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.
