Description
Hi Deepset team,
We're the team behind n1n API, a robust API that provides access to 400+ large language models (LLMs) as well as multimodal generation (text, image, video, and audio). Our platform makes it easy to integrate these capabilities into your project through simple, OpenAI-compatible endpoints, covering everything from text generation to media synthesis.
Adding n1n API would benefit Deepset users by:
- Providing access to 400+ models through a single API key configuration
- Supporting all major models (OpenAI, Anthropic, Gemini, Llama, and many more)
- Offering competitive pricing with unified billing (some models cost as little as 1/10 of the official provider price)
- Enabling easy model switching for different assistants without managing multiple API keys
Describe the solution you'd like
Integrate n1n API as an additional AI model provider in Deepset. n1n API is OpenAI-compatible (base URL: https://n1n.ai/v1/) and offers extended multimodal generation capabilities beyond standard text completion, including image, video, and audio generation endpoints.
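For illustration, here is a minimal sketch of calling the n1n endpoint through the official openai Python client, assuming the standard /chat/completions path is served under the base URL above; the environment variable name and model id below are placeholders, not confirmed values:

```python
import os

from openai import OpenAI  # official OpenAI SDK, pointed at the n1n base URL

# Assumption: N1N_API_KEY holds a key created in the n1n console.
client = OpenAI(base_url="https://n1n.ai/v1/", api_key=os.environ["N1N_API_KEY"])

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use any model id exposed by n1n
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```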
Resources
Website: https://n1n.ai/
Dashboard: https://n1n.ai/console
Base URL: https://n1n.ai/v1/
Pricing: https://n1n.ai/pricing
API Documentation: https://docs.n1n.ai/
Support: [email protected]
We are excited to contribute to the Deepset ecosystem and provide users with access to the latest AI models through our robust API platform. We look forward to collaborating with the Deepset maintainers and community to make this integration successful.
Is your feature request related to a problem? Please describe.
We need more robust, region-diverse, and cost-effective LLM options in Haystack pipelines. Relying on a small set of providers can cause issues during rate limiting, regional outages, or price-sensitive workloads. Adding n1n API as an OpenAI-compatible provider would improve availability, portability, and resilience without disrupting existing workflows.
Describe the solution you'd like
Add n1n API as a first-class LLM provider in Haystack, or officially support configuring the existing OpenAI components via base_url to point to n1n API.
Desired capabilities:
- Chat and text generation via OpenAI-compatible endpoints
- Model listing and selection via n1n API model APIs
- API key configuration and credential management
- Streaming and non-streaming support (leveraging Haystack abstractions if available)
- Minimal example + docs showing how to use n1n API in a Haystack pipeline (a sketch follows this list)
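As a starting point for that documentation, here is a sketch of what such a minimal example might look like in Haystack 2.x, reusing the existing OpenAIGenerator with its api_base_url parameter pointed at n1n; the environment variable name and model id are assumptions for illustration:

```python
from haystack import Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.generators import OpenAIGenerator
from haystack.utils import Secret

# Assumption: N1N_API_KEY is set in the environment; the model id is a placeholder.
llm = OpenAIGenerator(
    api_key=Secret.from_env_var("N1N_API_KEY"),
    api_base_url="https://n1n.ai/v1",
    model="gpt-4o-mini",
)

pipeline = Pipeline()
pipeline.add_component("prompt_builder", PromptBuilder(template="Answer briefly: {{ question }}"))
pipeline.add_component("llm", llm)
pipeline.connect("prompt_builder.prompt", "llm.prompt")

result = pipeline.run({"prompt_builder": {"question": "What is Haystack?"}})
print(result["llm"]["replies"][0])
```

Under this assumption nothing n1n-specific is needed beyond the base URL; credential handling and serialization follow the same patterns Haystack already uses for its OpenAI components.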
Describe alternatives you've considered
- Use current OpenAI adapters with a custom base_url (if already supported). This works but lacks discoverability and may have subtle compatibility gaps.
- Go through third-party proxies/aggregators. This adds an extra dependency and reduces control.
- Stay with current providers only. This limits resilience, vendor flexibility, and cost optimization.
Additional context
Compatibility:
n1n API exposes OpenAI-compatible endpoints, enabling low-friction integration by reusing Haystack's existing OpenAI components where possible.
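Under that assumption, streaming could also be exercised through the existing OpenAIChatGenerator, as in this sketch (again, the key variable and model id are placeholders):

```python
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.components.generators.utils import print_streaming_chunk
from haystack.dataclasses import ChatMessage
from haystack.utils import Secret

# Assumption: the OpenAI-compatible endpoint also supports streamed responses.
chat_llm = OpenAIChatGenerator(
    api_key=Secret.from_env_var("N1N_API_KEY"),
    api_base_url="https://n1n.ai/v1",
    model="gpt-4o-mini",  # placeholder model id
    streaming_callback=print_streaming_chunk,  # prints tokens as they arrive
)

result = chat_llm.run(messages=[ChatMessage.from_user("Summarize Haystack in one sentence.")])
# result["replies"] holds the full completion as ChatMessage objects
```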