@@ -148,7 +148,6 @@ main()
 ├── 1. _bootstrap_config_cache()
 │       If /tmp/nemoclaw-provider-config-cache.json does NOT exist:
 │         Write defaults for:
-│           - nvidia-inference → OPENAI_BASE_URL=https://inference-api.nvidia.com/v1
 │           - nvidia-endpoints → NVIDIA_BASE_URL=https://integrate.api.nvidia.com/v1
 │       If it already exists: skip (no-op)
 │
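The bootstrap step removed and kept in this hunk can be sketched as a small helper. Only the cache path and the `nvidia-endpoints` default come from the document; `bootstrap_config_cache` and `DEFAULTS` are hypothetical names, not the real implementation:

```python
import json
import os

# Path and default entry taken from the doc; everything else is a sketch.
CACHE_PATH = "/tmp/nemoclaw-provider-config-cache.json"
DEFAULTS = {
    "nvidia-endpoints": {"NVIDIA_BASE_URL": "https://integrate.api.nvidia.com/v1"},
}

def bootstrap_config_cache(path: str = CACHE_PATH) -> bool:
    """Write default provider config if the cache file is missing; no-op otherwise."""
    if os.path.exists(path):
        return False  # already bootstrapped: skip
    with open(path, "w") as f:
        json.dump(DEFAULTS, f, indent=2)
    return True
```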
@@ -436,13 +435,12 @@ Step 10: Cleanup temp policy file
 ```
 Step 1: Log receipt (hash prefix)
 Step 2: Run CLI command:
-          nemoclaw provider update nvidia-inference \
-            --type openai \
-            --credential OPENAI_API_KEY=<key> \
-            --config OPENAI_BASE_URL=https://inference-api.nvidia.com/v1
+          nemoclaw provider update nvidia-endpoints \
+            --credential NVIDIA_API_KEY=<key> \
+            --config NVIDIA_BASE_URL=https://integrate.api.nvidia.com/v1
           Timeout: 120s
 Step 3: If success:
-          - Cache config {"OPENAI_BASE_URL": "https://inference-api.nvidia.com/v1"} under name "nvidia-inference"
+          - Cache config under name "nvidia-endpoints"
           - State → "done"
         If failure:
           - State → "error" with stderr/stdout message
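The updated inject-key flow above can be sketched as follows. The command, the 120s timeout, and the `done`/`error` state transitions come from the document; the `run_inject_key` helper and its injectable `runner` parameter are assumptions made for testability:

```python
import subprocess

# Command shape taken from the hunk above; {key} is filled in at call time.
INJECT_CMD_TEMPLATE = [
    "nemoclaw", "provider", "update", "nvidia-endpoints",
    "--credential", "NVIDIA_API_KEY={key}",
    "--config", "NVIDIA_BASE_URL=https://integrate.api.nvidia.com/v1",
]

def run_inject_key(api_key: str, runner=subprocess.run) -> dict:
    """Run the provider-update CLI and map the outcome onto the state machine."""
    cmd = [arg.format(key=api_key) for arg in INJECT_CMD_TEMPLATE]
    try:
        proc = runner(cmd, capture_output=True, text=True, timeout=120)
    except subprocess.TimeoutExpired:
        return {"state": "error", "message": "timed out after 120s"}
    if proc.returncode == 0:
        return {"state": "done"}
    return {"state": "error", "message": (proc.stderr or proc.stdout).strip()}
```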
@@ -537,10 +535,10 @@ Step 3: Merge with config cache values
 The CLI outputs text like:
 ```
 Id: abc-123
-Name: nvidia-inference
-Type: openai
-Credential keys: OPENAI_API_KEY
-Config keys: OPENAI_BASE_URL
+Name: nvidia-endpoints
+Type: nvidia
+Credential keys: NVIDIA_API_KEY
+Config keys: NVIDIA_BASE_URL
 ```
 
 Parsing rules:
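Given the sample output above, line-by-line prefix matching might be sketched like this; the full parsing rules are outside this hunk, so the helper name and exact field handling are assumptions:

```python
def parse_provider_detail(text: str) -> dict:
    """Prefix-match each line of the CLI text shown above (hypothetical sketch)."""
    out = {}
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith("Id:"):
            out["id"] = line[3:].strip()
        elif line.startswith("Name:"):
            out["name"] = line[5:].strip()
        elif line.startswith("Type:"):
            out["type"] = line[5:].strip()
        elif line.startswith("Credential keys:"):
            out["credentialKeys"] = [k.strip() for k in line[16:].split(",") if k.strip()]
        elif line.startswith("Config keys:"):
            out["configKeys"] = [k.strip() for k in line[12:].split(",") if k.strip()]
    return out
```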
@@ -560,11 +558,11 @@ After parsing, if the provider name has an entry in the config cache, a `configV
   "providers": [
     {
       "id": "abc-123",
-      "name": "nvidia-inference",
-      "type": "openai",
-      "credentialKeys": ["OPENAI_API_KEY"],
-      "configKeys": ["OPENAI_BASE_URL"],
-      "configValues": {"OPENAI_BASE_URL": "https://inference-api.nvidia.com/v1"}
+      "name": "nvidia-endpoints",
+      "type": "nvidia",
+      "credentialKeys": ["NVIDIA_API_KEY"],
+      "configKeys": ["NVIDIA_BASE_URL"],
+      "configValues": {"NVIDIA_BASE_URL": "https://integrate.api.nvidia.com/v1"}
     }
   ]
 }
@@ -671,7 +669,7 @@ nemoclaw cluster inference get
 
 **Output Parsing (`_parse_cluster_inference`):**
 ```
-Provider: nvidia-inference
+Provider: nvidia-endpoints
 Model: meta/llama-3.1-70b-instruct
 Version: 2
 ```
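The three-line output above maps directly onto line-based parsing; a minimal sketch, assuming `_parse_cluster_inference` does roughly this (the real implementation may differ):

```python
def parse_cluster_inference(text: str) -> dict:
    """Parse the `nemoclaw cluster inference get` text output shown above (sketch)."""
    result = {}
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith("Provider:"):
            result["providerName"] = line.split(":", 1)[1].strip()
        elif line.startswith("Model:"):
            result["modelId"] = line.split(":", 1)[1].strip()
        elif line.startswith("Version:"):
            result["version"] = int(line.split(":", 1)[1].strip())
    return result
```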
@@ -688,7 +686,7 @@ Version: 2
 ```json
 {
   "ok": true,
-  "providerName": "nvidia-inference",
+  "providerName": "nvidia-endpoints",
   "modelId": "meta/llama-3.1-70b-instruct",
   "version": 2
 }
@@ -703,7 +701,7 @@ Version: 2
 **Request Body:**
 ```json
 {
-  "providerName": "nvidia-inference",
+  "providerName": "nvidia-endpoints",
   "modelId": "meta/llama-3.1-70b-instruct"
 }
 ```
@@ -721,7 +719,7 @@ nemoclaw cluster inference set --provider <name> --model <model>
 ```json
 {
   "ok": true,
-  "providerName": "nvidia-inference",
+  "providerName": "nvidia-endpoints",
   "modelId": "meta/llama-3.1-70b-instruct",
   "version": 3
 }
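The request body maps onto the `nemoclaw cluster inference set --provider <name> --model <model>` invocation shown in the hunk header; a minimal sketch of that mapping, with the helper name assumed:

```python
def build_set_command(body: dict) -> list:
    """Translate the JSON request body into the CLI argument list (sketch)."""
    return [
        "nemoclaw", "cluster", "inference", "set",
        "--provider", body["providerName"],
        "--model", body["modelId"],
    ]
```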
@@ -925,7 +923,7 @@ The `nemoclaw provider get` CLI only returns config **key names**, not their val
 - Read on every `GET /api/providers` request
 - Written on every `POST` (create) and `PUT` (update) that includes config values
 - Cleaned up on `DELETE`
-- Bootstrapped at server startup with a default for `nvidia-inference`
+- Bootstrapped at server startup with a default for `nvidia-endpoints`
 
 ---
 
@@ -948,8 +946,8 @@ Output is parsed the same way as provider detail (line-by-line, prefix matching,
 **Format:**
 ```json
 {
-  "nvidia-inference": {
-    "OPENAI_BASE_URL": "https://inference-api.nvidia.com/v1"
+  "nvidia-endpoints": {
+    "NVIDIA_BASE_URL": "https://integrate.api.nvidia.com/v1"
   },
   "my-custom-provider": {
     "CUSTOM_URL": "https://example.com"
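Reading a cache in that format and attaching `configValues` to a parsed provider might look like the following sketch; both function names are assumptions, and only the path and JSON shape come from the document:

```python
import json
import os

def load_config_cache(path: str = "/tmp/nemoclaw-provider-config-cache.json") -> dict:
    """Return the whole cache dict, or {} if the file is absent or unreadable."""
    if not os.path.exists(path):
        return {}
    try:
        with open(path) as f:
            return json.load(f)
    except (OSError, json.JSONDecodeError):
        return {}

def attach_config_values(provider: dict, cache: dict) -> dict:
    """Add configValues when the provider name has a cache entry, per the doc."""
    values = cache.get(provider.get("name", ""))
    if values:
        provider["configValues"] = values
    return provider
```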
@@ -1237,7 +1235,7 @@ This means it uses a local sandbox name rather than a container image reference.
 
 ### 18.10 Inject Key Hardcodes Provider Name
 
-The `_run_inject_key` function hardcodes `nvidia-inference` as the provider name. This is not configurable via the API.
+The `_run_inject_key` function hardcodes `nvidia-endpoints` as the provider name. This is not configurable via the API.
 
 ### 18.11 Error State Truncation
 