# ReliAPI

Reliability layer for API calls: retries, caching, dedup, circuit breakers.

## Installation

- **NPM Package**: [reliapi-sdk](https://www.npmjs.com/package/reliapi-sdk) - `npm install reliapi-sdk`
- **PyPI Package**: [reliapi-sdk](https://pypi.org/project/reliapi-sdk/) - `pip install reliapi-sdk`
- **Docker Image**: [kikudoc/reliapi](https://hub.docker.com/r/kikudoc/reliapi) - `docker pull kikudoc/reliapi`
- **CLI Package**: [reliapi-cli](https://pypi.org/project/reliapi-cli/) - `pip install reliapi-cli`

## Features

- **Retries with Backoff** - Automatic retries with exponential backoff
- **Circuit Breaker** - Prevent cascading failures when an upstream keeps erroring
- **Caching** - TTL cache for GET requests and LLM responses
- **Idempotency** - Request coalescing with idempotency keys
- **Rate Limiting** - Per-tier request rate limits
- **LLM Proxy** - Unified interface for OpenAI, Anthropic, and Mistral
- **Cost Control** - Budget caps and cost estimation

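The first two features work together: each retry waits an exponentially growing, jittered delay before hitting the upstream again. A minimal sketch of that policy (an illustration of the general technique, not ReliAPI's actual implementation):

```python
import random

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Exponential backoff with full jitter: the ceiling doubles each
    attempt (base * 2**attempt) up to a cap, and the actual wait is a
    uniform random value below that ceiling so retries spread out."""
    return random.uniform(0.0, min(cap, base * 2 ** attempt))

# attempt 0 waits up to 0.5s, attempt 3 up to 4s; attempt 10 is capped at 30s
```

Jitter matters here: without it, many clients that failed at the same moment would all retry at the same moment too.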
## Quick Start

### Using the SDK

**JavaScript/TypeScript:**

```bash
npm install reliapi-sdk
```

```typescript
import { ReliAPI } from 'reliapi-sdk';

const client = new ReliAPI({
  baseUrl: 'https://api.reliapi.dev',
  apiKey: 'your-api-key'
});

// HTTP proxy with retries
const response = await client.proxyHttp({
  target: 'my-api',
  method: 'GET',
  path: '/users/123',
  cache: 300 // cache for 5 minutes
});

// LLM proxy with idempotency
const llmResponse = await client.proxyLlm({
  target: 'openai',
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello!' }],
  idempotencyKey: 'unique-key-123'
});
```

**Python:**

```bash
pip install reliapi-sdk
```

```python
from reliapi_sdk import ReliAPI

client = ReliAPI(
    base_url="https://api.reliapi.dev",
    api_key="your-api-key"
)

# HTTP proxy with retries
response = client.proxy_http(
    target="my-api",
    method="GET",
    path="/users/123",
    cache=300  # cache for 5 minutes
)

# LLM proxy with idempotency
llm_response = client.proxy_llm(
    target="openai",
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
    idempotency_key="unique-key-123"
)
```

### Using the CLI

```bash
pip install reliapi-cli
```

```bash
# Check health
reli ping

# Make an HTTP request
reli request --method GET --url https://api.example.com/users

# Make an LLM request
reli llm --target openai --message "Hello, world!"
```

### Using the GitHub Action

```yaml
- uses: KikuAI-Lab/reliapi@v1
  with:
    api-url: 'https://api.reliapi.dev'
    api-key: ${{ secrets.RELIAPI_KEY }}
    endpoint: '/proxy/http'
    method: 'POST'
    body: '{"target": "my-api", "method": "GET", "path": "/health"}'
```

## API Endpoints

### HTTP Proxy

```
POST /proxy/http
```

Proxy any HTTP API with reliability layers.

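Going by the SDK calls in Quick Start, the JSON body for this endpoint presumably carries the same fields (`target`, `method`, `path`, `cache`) - treat this as an assumption and check the OpenAPI spec for the authoritative schema:

```python
import json

# Hypothetical request body for POST /proxy/http, mirroring the SDK
# fields above; the OpenAPI spec in ./openapi is authoritative.
payload = {
    "target": "my-api",    # upstream target configured in ReliAPI
    "method": "GET",       # method forwarded to the upstream
    "path": "/users/123",  # path on the upstream
    "cache": 300,          # cache the response for 5 minutes
}
body = json.dumps(payload)
```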
### LLM Proxy

```
POST /proxy/llm
```

Proxy LLM requests with idempotency, caching, and cost control.

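As with the HTTP proxy, the body presumably mirrors the SDK's `proxy_llm` arguments - again an assumption, with the OpenAPI spec as the source of truth. The idempotency key is what lets the gateway coalesce duplicate submissions of the same request:

```python
import json

# Hypothetical request body for POST /proxy/llm, mirroring the SDK
# fields above; verify the exact schema against the OpenAPI spec.
payload = {
    "target": "openai",
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}],
    "idempotency_key": "unique-key-123",  # dedups repeated submissions
}
body = json.dumps(payload)
```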
### Health Check

```
GET /healthz
```

Health check endpoint for monitoring.

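A deploy script can gate on this endpoint by polling until it answers 200. A sketch with the HTTP call injected as a callable, so the loop itself stays testable (a real version would sleep between attempts, e.g. using the backoff helper above):

```python
from typing import Callable

def wait_until_healthy(fetch_status: Callable[[], int], attempts: int = 5) -> bool:
    """Poll GET /healthz via the injected fetch_status callable (which
    returns the HTTP status code) up to `attempts` times; True on 200."""
    for _ in range(attempts):
        if fetch_status() == 200:
            return True
    return False
```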
## Self-Hosting

```bash
docker run -d -p 8000:8000 \
  -e REDIS_URL="redis://localhost:6379/0" \
  kikudoc/reliapi:latest
```

## Documentation

- [OpenAPI Spec](./openapi/openapi.yaml)
- [Postman Collection](./postman/collection.json)
- [Full Documentation](https://reliapi.kikuai.dev)

## License

MIT License - see [LICENSE](./LICENSE) for details.

## Support

- GitHub Issues: https://github.com/KikuAI-Lab/reliapi/issues
- Email: dev@kikuai.dev