Commit b616cf7

Update docs
1 parent 6a4a46c commit b616cf7

6 files changed: +44 -15 lines

docs/concepts/models/custom-model-settings.md

Lines changed: 1 addition & 1 deletion
@@ -96,7 +96,7 @@ preview_result.display_sample_record()
 ```
 
 !!! note "Default Providers Always Available"
-    When you only specify `model_configs`, the default model providers (NVIDIA and OpenAI) are still available. You only need to create custom providers if you want to connect to different endpoints or modify provider settings.
+    When you only specify `model_configs`, the default model providers (NVIDIA, OpenAI, and OpenRouter) are still available. You only need to create custom providers if you want to connect to different endpoints or modify provider settings.
 
 !!! tip "Mixing Custom and Default Models"
     When you provide custom `model_configs` to `DataDesignerConfigBuilder`, they **replace** the defaults entirely. To use custom model configs in addition to the default configs, use the `add_model_config` method:
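The tip above stops at the colon where the page's own example continues; as review context, the pattern it refers to looks roughly like the sketch below. The import path and the model identifier are assumptions for illustration and are not taken from this diff.

```python
# Minimal sketch of adding a custom model config on top of the defaults.
# NOTE: the import path is a placeholder; use the package's actual module.
from data_designer import DataDesignerConfigBuilder, ModelConfig

# Building without model_configs keeps the default aliases available.
builder = DataDesignerConfigBuilder()

# add_model_config layers a custom alias on top of those defaults instead
# of replacing them, which is the behavior the tip describes.
builder.add_model_config(
    ModelConfig(
        alias="my-text-model",                   # any unique alias
        model="nvidia/nemotron-3-nano-30b-a3b",  # example model id from the defaults table
        provider="nvidia",                       # optional; falls back to the default provider
    )
)
```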

docs/concepts/models/default-model-settings.md

Lines changed: 35 additions & 12 deletions
@@ -4,7 +4,7 @@ Data Designer ships with pre-configured model providers and model configurations
 
 ## Model Providers
 
-Data Designer includes two default model providers that are configured automatically:
+Data Designer includes a few default model providers that are configured automatically:
 
 ### NVIDIA Provider (`nvidia`)
@@ -24,6 +24,15 @@ The NVIDIA provider gives you access to state-of-the-art models including Nemotr
 
 The OpenAI provider gives you access to GPT models and other OpenAI offerings.
 
+### OpenRouter Provider (`openrouter`)
+
+- **Endpoint**: `https://openrouter.ai/api/v1`
+- **API Key**: Set via `OPENROUTER_API_KEY` environment variable
+- **Models**: Access to a wide variety of models through OpenRouter's unified API
+- **Getting Started**: Get your API key from [openrouter.ai](https://openrouter.ai)
+
+The OpenRouter provider gives you access to a unified interface for many different language models from various providers.
+
 ## Model Configurations
 
 Data Designer provides pre-configured model aliases for common use cases. When you create a `DataDesignerConfigBuilder` without specifying `model_configs`, these default configurations are automatically available.
@@ -32,22 +41,35 @@ Data Designer provides pre-configured model aliases for common use cases. When y
 
 The following model configurations are automatically available when `NVIDIA_API_KEY` is set:
 
-| Alias | Model | Use Case | Temperature | Top P |
-|-------|-------|----------|-------------|-------|
-| `nvidia-text` | `nvidia/nemotron-3-nano-30b-a3b` | General text generation | 0.85 | 0.95 |
-| `nvidia-reasoning` | `openai/gpt-oss-20b` | Reasoning and analysis tasks | 0.35 | 0.95 |
-| `nvidia-vision` | `nvidia/nemotron-nano-12b-v2-vl` | Vision and image understanding | 0.85 | 0.95 |
+| Alias | Model | Use Case | Inference Parameters |
+|-------|-------|----------|---------------------|
+| `nvidia-text` | `nvidia/nemotron-3-nano-30b-a3b` | General text generation | `temperature=1.0, top_p=1.0` |
+| `nvidia-reasoning` | `openai/gpt-oss-20b` | Reasoning and analysis tasks | `temperature=0.35, top_p=0.95` |
+| `nvidia-vision` | `nvidia/nemotron-nano-12b-v2-vl` | Vision and image understanding | `temperature=0.85, top_p=0.95` |
+| `nvidia-embedding` | `nvidia/llama-3.2-nv-embedqa-1b-v2` | Text embeddings | `encoding_format="float", extra_body={"input_type": "query"}` |
 
 
 ### OpenAI Models
 
 The following model configurations are automatically available when `OPENAI_API_KEY` is set:
 
-| Alias | Model | Use Case | Temperature | Top P |
-|-------|-------|----------|-------------|-------|
-| `openai-text` | `gpt-4.1` | General text generation | 0.85 | 0.95 |
-| `openai-reasoning` | `gpt-5` | Reasoning and analysis tasks | 0.35 | 0.95 |
-| `openai-vision` | `gpt-5` | Vision and image understanding | 0.85 | 0.95 |
+| Alias | Model | Use Case | Inference Parameters |
+|-------|-------|----------|---------------------|
+| `openai-text` | `gpt-4.1` | General text generation | `temperature=0.85, top_p=0.95` |
+| `openai-reasoning` | `gpt-5` | Reasoning and analysis tasks | `temperature=0.35, top_p=0.95` |
+| `openai-vision` | `gpt-5` | Vision and image understanding | `temperature=0.85, top_p=0.95` |
+| `openai-embedding` | `text-embedding-3-large` | Text embeddings | `encoding_format="float"` |
+
+### OpenRouter Models
+
+The following model configurations are automatically available when `OPENROUTER_API_KEY` is set:
+
+| Alias | Model | Use Case | Inference Parameters |
+|-------|-------|----------|---------------------|
+| `openrouter-text` | `nvidia/nemotron-3-nano-30b-a3b` | General text generation | `temperature=1.0, top_p=1.0` |
+| `openrouter-reasoning` | `openai/gpt-oss-20b` | Reasoning and analysis tasks | `temperature=0.35, top_p=0.95` |
+| `openrouter-vision` | `nvidia/nemotron-nano-12b-v2-vl` | Vision and image understanding | `temperature=0.85, top_p=0.95` |
+| `openrouter-embedding` | `openai/text-embedding-3-large` | Text embeddings | `encoding_format="float"` |
 
 
 ## Using Default Settings
@@ -83,7 +105,7 @@ Both methods operate on the same files, ensuring consistency across your entire
 ## Important Notes
 
 !!! warning "API Key Requirements"
-    While default model configurations are always available, you need to set the appropriate API key environment variable (`NVIDIA_API_KEY` or `OPENAI_API_KEY`) to actually use the corresponding models for data generation. Without a valid API key, any attempt to generate data using that provider's models will fail.
+    While default model configurations are always available, you need to set the appropriate API key environment variable (`NVIDIA_API_KEY`, `OPENAI_API_KEY`, or `OPENROUTER_API_KEY`) to actually use the corresponding models for data generation. Without a valid API key, any attempt to generate data using that provider's models will fail.
 
 !!! tip "Environment Variables"
     Store your API keys in environment variables rather than hardcoding them in your scripts:
@@ -92,6 +114,7 @@ Both methods operate on the same files, ensuring consistency across your entire
     # In your .bashrc, .zshrc, or similar
     export NVIDIA_API_KEY="your-api-key-here"
     export OPENAI_API_KEY="your-openai-api-key-here"
+    export OPENROUTER_API_KEY="your-openrouter-api-key-here"
     ```
 
 ## See Also
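Because each block of default aliases only activates when its API key is present, a reviewer trying the new OpenRouter aliases needs `OPENROUTER_API_KEY` exported as shown above. Below is a small, library-free sketch of that precondition; nothing in it is part of the Data Designer API.

```python
import os

# The default aliases documented above are only usable when the matching
# environment variable is set; fail fast with a clear message otherwise.
required_key = {
    "nvidia-text": "NVIDIA_API_KEY",
    "openai-text": "OPENAI_API_KEY",
    "openrouter-text": "OPENROUTER_API_KEY",
}

alias = "openrouter-text"
if required_key[alias] not in os.environ:
    raise RuntimeError(f"Set {required_key[alias]} before generating with {alias!r}")
```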

docs/concepts/models/model-configs.md

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ The `ModelConfig` class has the following fields:
 | `alias` | `str` | Yes | Unique identifier for this model configuration (e.g., `"my-text-model"`, `"reasoning-model"`) |
 | `model` | `str` | Yes | Model identifier as recognized by the provider (e.g., `"nvidia/nemotron-3-nano-30b-a3b"`, `"gpt-4"`) |
 | `inference_parameters` | `InferenceParamsT` | No | Controls model behavior during generation. Use `ChatCompletionInferenceParams` for text/code/structured generation or `EmbeddingInferenceParams` for embeddings. Defaults to `ChatCompletionInferenceParams()` if not provided. The generation type is automatically determined by the inference parameters type. See [Inference Parameters](inference_parameters.md) for details. |
-| `provider` | `str` | No | Reference to the name of the Provider to use (e.g., `"nvidia"`, `"openai"`). If not specified, one set as the default provider, which may resolve to the first provider if there are more than one |
+| `provider` | `str` | No | Reference to the name of the Provider to use (e.g., `"nvidia"`, `"openai"`, `"openrouter"`). If not specified, the one set as the default provider is used, which may resolve to the first provider when more than one is configured |
 
 
 ## Examples
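The file's own Examples section follows this table in the source. Independent of it, here is one hedged illustration of how the updated `provider` field ties into the new OpenRouter defaults; the import path and constructor keywords are assumptions inferred from the tables in this commit, not verified against the package.

```python
# Placeholder import path; the field names follow the table above.
from data_designer import ModelConfig, ChatCompletionInferenceParams

reasoning_via_openrouter = ModelConfig(
    alias="my-openrouter-reasoning",    # unique alias, required
    model="openai/gpt-oss-20b",         # model id as the provider expects it
    inference_parameters=ChatCompletionInferenceParams(temperature=0.35, top_p=0.95),
    provider="openrouter",              # omit to fall back to the default provider
)
```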

docs/concepts/models/model-providers.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ The `ModelProvider` class has the following fields:
 
 | Field | Type | Required | Description |
 |-------|------|----------|-------------|
-| `name` | `str` | Yes | Unique identifier for the provider (e.g., `"nvidia"`, `"openai"`) |
+| `name` | `str` | Yes | Unique identifier for the provider (e.g., `"nvidia"`, `"openai"`, `"openrouter"`) |
 | `endpoint` | `str` | Yes | API endpoint URL (e.g., `"https://integrate.api.nvidia.com/v1"`) |
 | `provider_type` | `str` | No | Provider type (default: `"openai"`). Uses OpenAI-compatible API format |
 | `api_key` | `str` | No | API key or environment variable name (e.g., `"NVIDIA_API_KEY"`) |
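Read against this field table, the OpenRouter provider added elsewhere in this commit would be expressible with the same four fields, roughly as below. This is a sketch under the assumption that `api_key` takes the environment-variable name, as the table states; the import path is again a placeholder.

```python
from data_designer import ModelProvider  # placeholder import path

openrouter_provider = ModelProvider(
    name="openrouter",                        # referenced by ModelConfig.provider
    endpoint="https://openrouter.ai/api/v1",  # endpoint documented for this provider
    provider_type="openai",                   # default value; OpenAI-compatible API format
    api_key="OPENROUTER_API_KEY",             # env var that holds the key
)
```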

docs/notebook_source/_README.md

Lines changed: 3 additions & 0 deletions
@@ -46,6 +46,9 @@ export NVIDIA_API_KEY="your-api-key-here"
 
 # For OpenAI
 export OPENAI_API_KEY="your-api-key-here"
+
+# For OpenRouter
+export OPENROUTER_API_KEY="your-api-key-here"
 ```
 
 For more information, check the [Quick Start](../quick-start.md), [Default Model Settings](../concepts/models/default-model-settings.md) and how to [Configure Model Settings Using The CLI](../concepts/models/configure-model-settings-with-the-cli.md).

docs/quick-start.md

Lines changed: 3 additions & 0 deletions
@@ -8,13 +8,16 @@ Before you begin, you'll need an API key from one of the default providers:
 
 - **NVIDIA API Key**: Get yours from [build.nvidia.com](https://build.nvidia.com)
 - **OpenAI API Key** (optional): Get yours from [platform.openai.com](https://platform.openai.com/api-keys)
+- **OpenRouter API Key** (optional): Get yours from [openrouter.ai](https://openrouter.ai)
 
 Set your API key as an environment variable:
 
 ```bash
 export NVIDIA_API_KEY="your-api-key-here"
 # Or for OpenAI
 export OPENAI_API_KEY="your-openai-api-key-here"
+# Or for OpenRouter
+export OPENROUTER_API_KEY="your-openrouter-api-key-here"
 ```
 
 ## Example
