feat: Add n1n.ai as a model provider #8659
base: main
Conversation
- Added n1n provider type definitions
- Created `N1nHandler` extending `OpenAiHandler` for API compatibility (sketched below)
- Integrated n1n into provider settings, the API factory, and `ProfileValidator`
- Added an n1n models fetcher for dynamic model discovery
- Updated the webview message handler and shared API configuration
- Supports 400+ models through an OpenAI-compatible API at https://n1n.ai/v1/

Addresses #8657
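For context, a minimal sketch of what such a handler subclass can look like; the option names (`openAiBaseUrl`, `openAiApiKey`, `n1nApiKey`) and the import paths are assumptions, not the PR's actual code:

```typescript
// Hypothetical sketch: reuse OpenAiHandler by pointing it at n1n.ai's
// OpenAI-compatible endpoint. Option and import names are assumptions.
import { OpenAiHandler } from "./openai"
import { type ApiHandlerOptions, n1nDefaultModelId } from "../../shared/api"

export class N1nHandler extends OpenAiHandler {
	constructor(options: ApiHandlerOptions) {
		super({
			...options,
			openAiBaseUrl: "https://n1n.ai/v1",
			openAiApiKey: options.n1nApiKey ?? "",
			openAiModelId: options.n1nModelId ?? n1nDefaultModelId,
		})
	}
}
```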
```typescript
try {
	const response = await axios.get(url, { headers })
	const parsed = N1nModelsResponseSchema.safeParse(response.data)
```
Consider logging additional details from the Zod `safeParse` result when parsing fails (e.g. logging `parsed.error`) to aid in debugging schema mismatches.
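A sketch of how that might look in the fetcher; the function signature and the minimal response schema here are assumptions, while `parsed.error.format()` is standard Zod:

```typescript
import axios from "axios"
import { z } from "zod"

// Assumed minimal shape of n1n.ai's model-list response (for illustration).
const N1nModelsResponseSchema = z.object({
	data: z.array(z.object({ id: z.string() })),
})

async function getN1nModels(url: string, headers: Record<string, string>) {
	try {
		const response = await axios.get(url, { headers })
		const parsed = N1nModelsResponseSchema.safeParse(response.data)
		if (!parsed.success) {
			// Log the concrete Zod issues so schema mismatches are debuggable.
			console.error("n1n models response failed validation:", parsed.error.format())
			return {}
		}
		return Object.fromEntries(parsed.data.data.map((m) => [m.id, {}]))
	} catch (error) {
		console.error("Failed to fetch n1n models:", error)
		return {}
	}
}
```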
```typescript
override getModel() {
	const id = this.options.n1nModelId ?? n1nDefaultModelId
	// Since n1n.ai supports 400+ models dynamically, we use default model info
	// unless we implement dynamic model fetching
	const info = n1nDefaultModelInfo
	const params = getModelParams({ format: "openai", modelId: id, model: info, settings: this.options })
	return { id, info, ...params }
}
```
The `getModel()` method always returns `n1nDefaultModelInfo` instead of using dynamically fetched models from the cache. This means the 400+ models from n1n.ai won't have accurate metadata (context windows, pricing, vision support, etc.).

The fetcher in `src/api/providers/fetchers/n1n.ts` is implemented and integrated, but `N1nHandler` never uses it. Compare with `DeepInfraHandler.getModel()`, which properly uses cached models:

```typescript
const info = this.models[id] ?? deepInfraDefaultModelInfo
```

`N1nHandler` should follow the same pattern: store fetched models and use them in `getModel()`.
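Applied to `N1nHandler`, the fix might look like the following sketch; `this.models` as the cache field name mirrors `DeepInfraHandler` and is an assumption about how the fetched models would be stored:

```typescript
// Sketch (inside N1nHandler): prefer dynamically fetched metadata from the
// cache and fall back to the static default only for unknown model ids.
override getModel() {
	const id = this.options.n1nModelId ?? n1nDefaultModelId
	const info = this.models[id] ?? n1nDefaultModelInfo
	const params = getModelParams({ format: "openai", modelId: id, model: info, settings: this.options })
	return { id, info, ...params }
}
```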
This PR attempts to address Issue #8657 by adding n1n.ai as a new model provider in Roo Code.
What Changed
Technical Details
n1n.ai provides an OpenAI-compatible API that gives access to 400+ large language models through a single API key. The implementation details are summarized in the auto-generated list at the end of this description.
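Because the endpoint is OpenAI-compatible, any standard OpenAI client can reach it by overriding the base URL. Here is a minimal sketch using the official `openai` npm package; the model id and the `N1N_API_KEY` environment variable name are placeholders, not part of this PR:

```typescript
import OpenAI from "openai"

// Point the stock OpenAI client at n1n.ai's OpenAI-compatible endpoint.
const client = new OpenAI({
	baseURL: "https://n1n.ai/v1",
	apiKey: process.env.N1N_API_KEY ?? "", // placeholder env var name
})

const completion = await client.chat.completions.create({
	model: "gpt-4o", // placeholder: any of the 400+ models n1n.ai exposes
	messages: [{ role: "user", content: "Hello from Roo Code!" }],
})
console.log(completion.choices[0]?.message?.content)
```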
Testing
Related Issue
Closes #8657
Feedback and guidance are welcome!
Important
Adds n1n.ai as a new model provider with dynamic model fetching and integrates it into the existing system.
- Adds `n1n.ai` as a new model provider with base URL `https://n1n.ai/v1`.
- Implements `getN1nModels()` in `n1n.ts`.
- Extends `OpenAiHandler` in `N1nHandler` to support the n1n.ai API.
- Updates `provider-settings.ts` to include `n1n` in `dynamicProviders` and `providerNames` (see the sketch after this list).
- Adds `n1nSchema` to `providerSettingsSchemaDiscriminated` and `providerSettingsSchema`.
- Updates `modelIdKeys` and `modelIdKeysByProvider` to include `n1nModelId`.
- Adds `N1nHandler` to `buildApiHandler()` in `index.ts`.
- Updates `webviewMessageHandler.ts` to handle n1n model fetching.
- Includes `n1n` in `MODELS_BY_PROVIDER` in `provider-settings.ts`.
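As a rough sketch of the settings wiring described above; field names other than `n1nModelId`, and the contents of `dynamicProviders`, are assumptions for illustration:

```typescript
import { z } from "zod"

// Hypothetical base schema shared by providers (assumption).
const baseProviderSettingsSchema = z.object({
	apiModelId: z.string().optional(),
})

// n1n-specific settings; `n1nApiKey` is an assumed field name.
const n1nSchema = baseProviderSettingsSchema.extend({
	n1nApiKey: z.string().optional(),
	n1nModelId: z.string().optional(),
})

// n1n joins the providers whose model lists are fetched at runtime.
const dynamicProviders = ["deepinfra", "n1n"] as const

// ...and the discriminated union of all provider settings.
const providerSettingsSchemaDiscriminated = z.discriminatedUnion("apiProvider", [
	n1nSchema.extend({ apiProvider: z.literal("n1n") }),
	// ...schemas for the other providers...
])
```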