docs/pages/providers/cloud/inception.mdx (+6 −1)

@@ -2,7 +2,12 @@
 Website: https://inceptionlabs.ai/

-Inception powers Mercury Coder, a diffusion LLM (dLLM) tuned for fast, consistent multi-line code edits. Unlike token-by-token generation, a dLLM refines drafts across many spans at once, which makes it especially strong at structural changes and predictive edits across files.
+Inception powers Mercury 2 and Mercury Edit 2, diffusion LLMs (dLLMs) tuned for fast, consistent code generation and multi-line edits. Unlike token-by-token generation, a dLLM refines drafts across many spans at once, which makes it especially strong at structural changes and predictive edits across files.
+
+In the plugin, Inception uses:
+
+- `mercury-2` as the general-purpose model for workflows such as Chat and Agent
+- `mercury-edit-2` for Autocomplete and Next-Edit Suggestions
docs/pages/providers/cloud/proxyai.mdx (+5 −6)

@@ -2,7 +2,7 @@
 Website: https://tryproxy.io

-ProxyAI is the default cloud provider that powers this plugin. By creating a [free account](https://tryproxy.io/signin), you can access advanced open source models to enhance your coding experience.
+ProxyAI is the default cloud provider that powers this plugin. By creating a [free account](https://tryproxy.io/signin), you can access advanced AI models to enhance your coding experience.

 ## Getting Started

@@ -13,11 +13,10 @@ import { Steps } from 'nextra/components'
 <Steps>
 ### Create Your Free Account (optional)

-ProxyAI offers three different tiers: Anonymous, Free, and Individual.
-
-**Anonymous** - Rate limited access to `gpt-4o-mini`, `codestral` and `zeta` models.
-
-**Free** - Token limited access to features and models, including `deepseek-v3`, `qwen-2.5-coder-32b`, `llama-3.1-405b`, and others.
-
-**Individual** - Unlimited access to all models and features.
+ProxyAI offers two tiers: Free and Individual.
+
+**Free** - Token-limited access to ProxyAI features and selected managed models.
+
+**Individual** - Higher limits and access to the full managed ProxyAI catalog.

 ### Get Your API Key

@@ -40,4 +39,4 @@ import { Steps } from 'nextra/components'
 ## Models

-ProxyAI Cloud gives you access to a range of powerful AI models. You can see the full list and learn more about model capabilities on our [Models page](/models).
+ProxyAI Cloud gives you access to the plugin's managed model catalog. You can see the full per-provider breakdown on our [Models page](/models).
-description: Learn about the AI models available through ProxyAI and how context windows work.
+description: Learn which model catalogs the plugin currently exposes for ProxyAI, Inception, and the other built-in providers.
 ---

 # Models

@@ -13,7 +13,7 @@ You can choose your preferred model in two ways:
 ### From the Chat Window:

-Select directly from the dropdown in the chat interface.
+Use the model dropdown in the Chat or Agent toolwindow to switch the active model for the current conversation. This is the fastest way to try a different model while you work.
@@ -28,69 +28,49 @@ Select directly from the dropdown in the chat interface.
 ### From Settings:

-Go to **Settings/Preferences > Tools > ProxyAI > Providers**. Select your provider and choose your model.
+Go to **Settings/Preferences > Tools > ProxyAI > Models** to manage model selection per feature. From this page, you can configure separate models for Chat, Agent, Autocomplete, Next-Edit Suggestions, and the other model-backed features, depending on which providers you have enabled.
-  alt="Selecting a model within the provider settings panel"
-  width="1200"
-  height="800"
-  className="nx-rounded-lg nx-my-4"
-  autoPlay
-  muted
-  loop
-/>
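The per-feature model selection the new Settings paragraph describes can be pictured as a simple feature-to-model map. The keys, values, and fallback behaviour below are assumptions for illustration only, not the plugin's real persisted settings format.

```python
# Illustrative per-feature model map (hypothetical keys and defaults,
# not ProxyAI's actual configuration schema).
feature_models = {
    "chat": "gemini-3-flash-preview",
    "agent": "claude-sonnet-4-6",
    "autocomplete": "mercury-edit-2",
    "next_edit": "mercury-edit-2",
}

def model_for(feature: str) -> str:
    # Assumed behaviour: unknown features fall back to the chat model.
    return feature_models.get(feature, feature_models["chat"])

print(model_for("autocomplete"))
```

The point of the sketch is simply that each feature resolves its own model independently, so Autocomplete can use a fast edit model while Agent uses a stronger general-purpose one.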
-## Available Models via ProxyAI Cloud
-
-The models listed below are available through the default **ProxyAI Cloud** service. Model availability and usage limits depend on your ProxyAI Cloud plan (Free or Pro).
-
-### Chat Models
-
-| Model | Provider | Free | Pro |
-|----------------------|:---------:|:----:|:---:|
-|`o3-mini`| OpenAI || ✅ |
-|`gpt-4o`| OpenAI || ✅ |
-|`gpt-4o-mini`| OpenAI | ✅ | ✅ |
-|`claude-3.7-sonnet`| Anthropic || ✅ |
-|`gemini-pro-2.5`| Google || ✅ |
-|`gemini-flash-2.0`| Google | ✅ | ✅ |
-|`qwen-2.5-coder-32b`| Fireworks | ✅ | ✅ |
-|`llama-3.1-405b`| Fireworks | ✅ | ✅ |
-|`deepseek-r1`| Fireworks || ✅ |
-|`deepseek-v3`| Fireworks | ✅ | ✅ |
+## Built-In Model Catalogs
+
+The tables below reflect the models currently exposed by ProxyAI Cloud. Models for `Ollama`, `llama.cpp`, `Custom OpenAI`, and other BYOK providers are determined by the configured provider and may change independently, so for those providers you should check the model picker in ProxyAI settings for the current list.
+
+### Agent & Chat Models
+
+| Model | Provider | Free | Pro |
+|---|---|:---:|:---:|
+| `auto` | Fireworks | ✅ | ✅ |
+| `gpt-5.4` | OpenAI | | ✅ |
+| `gpt-5.3-codex` | OpenAI | | ✅ |
+| `gpt-5-mini` | OpenAI | ✅ | ✅ |
+| `claude-opus-4-6` | Anthropic | | ✅ |
+| `claude-sonnet-4-6` | Anthropic | | ✅ |
+| `claude-haiku-4-5` | Anthropic | ✅ | ✅ |
+| `gemini-3.1-pro-preview` | Google | | ✅ |
+| `gemini-3-flash-preview` | Google | ✅ | ✅ |
+
+`auto` is a dynamic selection and may change over time. ProxyAI chooses the model automatically based on the best quality-to-price ratio. It currently routes through Fireworks and uses `GLM-5`.

 *Note: Model availability may change over time. When using your own API key, availability depends on the provider's offerings.*
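The "best quality-to-price ratio" selection described for `auto` can be sketched as a simple argmax over a routing table. The candidate names, quality scores, and prices below are invented for illustration; they are not ProxyAI's real routing data or pricing.

```python
# Hypothetical sketch of quality-to-price routing, as described for
# `auto`. All numbers here are made up for the example.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    quality: float          # invented benchmark-style score, 0..1
    price_per_mtok: float   # invented blended price per million tokens

def pick_auto(candidates: list[Candidate]) -> Candidate:
    """Return the candidate with the best quality-to-price ratio."""
    return max(candidates, key=lambda c: c.quality / c.price_per_mtok)

catalog = [
    Candidate("glm-5", quality=0.90, price_per_mtok=0.50),
    Candidate("gpt-5-mini", quality=0.84, price_per_mtok=0.55),
    Candidate("gemini-3-flash-preview", quality=0.82, price_per_mtok=0.60),
]
print(pick_auto(catalog).name)  # the ratio winner under these made-up numbers
```

Because the ratio depends on live quality and pricing data, the winner can change over time, which is why the docs call `auto` a dynamic selection.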
 ## Context Windows

 A model's context window defines how much information (measured in tokens) it can process at once, including both your inputs and the model's responses.

-### ProxyAI Cloud
-
-- Each chat session uses a managed context window up to 16,000 tokens
-- ProxyAI automatically summarizes or removes older parts of the conversation to stay within this service-specific limit
-- Keep your total input context (files, selections, etc.) under 200,000 tokens for optimal processing
-
-### Other Providers (OpenAI, Anthropic, Local, Custom)
-
-- When using your own API key or running models locally, context window size is determined by the specific model and provider you choose
-- ProxyAI passes your context to the provider, but the ultimate limit is set by the provider
-- Check your chosen provider's documentation for their specific context window limitations
+- Managed providers such as ProxyAI Cloud can apply product-level limits in addition to the underlying model limits.
+- Bring-your-own-key providers follow the limits of the selected upstream model and API.
+- Local and custom providers depend on the model and server configuration you run.
+- Large files and long conversations still benefit from keeping context focused, even when a model advertises a large context window.
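The trimming behaviour those bullets describe (staying under a limit by dropping the oldest material first) can be sketched as follows. The 4-characters-per-token estimate is a common rough heuristic, an assumption here, not any provider's real tokenizer, and the function names are invented for the example.

```python
# Rough sketch of fitting a conversation into a context window:
# estimate token cost, then keep the newest messages that fit.
def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token (assumption).
    return max(1, len(text) // 4)

def fit_context(messages: list[str], limit: int) -> list[str]:
    """Keep the newest messages whose combined estimate fits `limit`."""
    kept, used = [], 0
    for msg in reversed(messages):  # walk newest to oldest
        cost = estimate_tokens(msg)
        if used + cost > limit:
            break                   # oldest messages fall off first
        kept.append(msg)
        used += cost
    return list(reversed(kept))     # restore chronological order

history = ["a" * 400, "b" * 40, "c" * 8]
print(fit_context(history, limit=20))
```

Real providers may summarize rather than drop old turns, but the budget arithmetic is the same: every retained file, selection, and message spends part of the window.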
 For complex or distinct tasks, regardless of the provider, starting a new chat session can improve performance and relevance.

 ## Model Hosting and Privacy

 All **ProxyAI Cloud** models are hosted by their original providers (OpenAI, Anthropic, etc.), trusted partners, or ProxyAI directly, primarily on US-based infrastructure.

 When connecting to other providers or using local models, hosting location and privacy considerations follow those specific services or your local environment settings.