InputX is a high-speed API gateway for AI inference, providing low-latency access to large language models through a simple, scalable interface.
- 🔐 API key-based authentication
- ⚡ Fast and lightweight Node.js backend using Fastify
- 🧠 OpenAI integration (with more models coming soon)
- 📊 Usage logging for metering and billing
- 🧱 Ready for deployment on Vercel, AWS Lambda, or Fly.io
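The first and fourth features above (API-key auth, usage logging) can be sketched as two small helpers. This is an illustrative sketch only: the names `validateKey`, `recordUsage`, and the demo key are hypothetical and not part of InputX's actual code.

```javascript
// Hypothetical sketch of API-key authentication and usage logging.
// VALID_KEYS would come from a datastore in a real gateway; the
// in-memory Set and demo key here are placeholders.
const VALID_KEYS = new Set(["demo-key-123"]);
const usageLog = [];

// Parse an "Authorization: Bearer <key>" header and return the key
// if it is known, or null otherwise.
function validateKey(authHeader) {
  if (!authHeader || !authHeader.startsWith("Bearer ")) return null;
  const key = authHeader.slice("Bearer ".length);
  return VALID_KEYS.has(key) ? key : null;
}

// Append one usage record per request, for later metering/billing.
function recordUsage(key, model, promptTokens) {
  usageLog.push({ key, model, promptTokens, at: Date.now() });
}

const key = validateKey("Bearer demo-key-123");
if (key) recordUsage(key, "gpt-3.5-turbo", 7);
console.log(key, usageLog.length); // → demo-key-123 1
```

In a Fastify app these helpers would typically run in a preHandler hook, so every route gets authentication and metering without repeating the logic.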
```bash
git clone https://github.com/yourusername/inputx.git
cd inputx
npm install
cp .env.example .env
# Add your OpenAI API key to .env
npm start
```

**API**

```http
POST /inference
Authorization: Bearer <your-api-key>

{
  "prompt": "What is the capital of France?",
  "model": "gpt-3.5-turbo"
}
```

**Roadmap**

- Add support for Ollama / local models
- Token-based usage tracking and billing
- Web3 wallet-based access and payments
- Caching layer for repeated queries
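The `/inference` request shown above can be exercised with curl once the server is running. This is a command fragment, not a runnable script: the README does not state which port the server listens on, so `localhost:3000` is an assumption, and `<your-api-key>` is a placeholder.

```shell
# Assumes the server from `npm start` is listening on port 3000 (not
# specified in this README) and that <your-api-key> is replaced with a
# real key.
curl -X POST http://localhost:3000/inference \
  -H "Authorization: Bearer <your-api-key>" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is the capital of France?", "model": "gpt-3.5-turbo"}'
```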
MIT - see LICENSE
InputX: Serve AI like infrastructure.