A FastAPI-based proxy server that allows using remote OpenAI-compatible services through an Ollama-compatible interface.
This script creates a proxy service that mimics the Ollama API interface while actually connecting to OpenAI-compatible services. It allows users to use more advanced remote models while maintaining compatibility with local Ollama setups.
Compatible with the JetBrains IntelliJ IDEA AI Assistant. Also filters sections out of R1 responses, for features such as commit message generation and applying changes from the assistant.
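The R1 filtering could look roughly like this: a minimal sketch that assumes the reasoning sections are delimited by `<think>` tags, as DeepSeek R1 emits them (the function name is illustrative; the actual filtering in this project may differ):

```python
import re

def strip_reasoning(text: str) -> str:
    """Remove <think>...</think> reasoning blocks that DeepSeek R1
    emits before its final answer, leaving only the usable output."""
    cleaned = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL)
    return cleaned.strip()
```

Stripping these blocks matters for features like commit message generation, where the assistant expects the response to contain only the final text.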
```shell
virtualenv venv && source venv/bin/activate && pip install -r requirements.txt
```

Example way to run the service:
```shell
export OPENAI_API_URL="https://api.sambanova.ai/v1"
export OPENAI_API_KEY="your-api-key-here"
uvicorn main:app --host 0.0.0.0 --port 8000
```

Point your AI Assistant to use the service:
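Under the hood, a service like this has to re-encode each OpenAI streaming chunk into the newline-delimited JSON that Ollama clients expect. A simplified sketch of that conversion (the function name is hypothetical, and fields such as `created_at` are omitted):

```python
import json

def openai_chunk_to_ollama_line(chunk: dict, model: str) -> str:
    """Convert one OpenAI streaming chat chunk into the newline-delimited
    JSON line used by Ollama's /api/chat streaming format."""
    choice = chunk["choices"][0]
    delta = choice.get("delta", {})
    line = {
        "model": model,
        "message": {"role": "assistant", "content": delta.get("content") or ""},
        # OpenAI marks the end of a stream with a non-null finish_reason;
        # Ollama marks it with "done": true on the last line.
        "done": choice.get("finish_reason") is not None,
    }
    return json.dumps(line) + "\n"
```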

Aims to follow the Ollama REST API.
- Provides Ollama API compatibility
- Pretends to be Ollama for model inference
- For some requests, filters out sections of the model response
- Supports SambaNova models through OpenAI API compatibility
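To provide that compatibility, incoming Ollama-style request bodies have to be mapped onto the OpenAI chat schema. A minimal sketch of such a mapping (the helper name is hypothetical; field names follow the two public APIs, where Ollama's `num_predict` corresponds to OpenAI's `max_tokens`):

```python
def ollama_to_openai_chat(body: dict) -> dict:
    """Map an Ollama /api/chat request body onto the OpenAI
    /v1/chat/completions request schema."""
    payload = {
        "model": body["model"],
        "messages": body["messages"],
        "stream": body.get("stream", True),  # Ollama streams by default
    }
    # Ollama tucks sampling parameters under "options"; OpenAI takes
    # them as top-level fields.
    options = body.get("options", {})
    if "temperature" in options:
        payload["temperature"] = options["temperature"]
    if "num_predict" in options:
        payload["max_tokens"] = options["num_predict"]
    return payload
```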
Similar projects:

- ollama-proxy: abandoned
- enchanted-ollama-openrouter-proxy: a proxy for use with OpenRouter