- Set up a provider override for Fireworks, routing requests for this provider directly to the specified Fireworks endpoint (bypassing Cody Gateway)
- Add two Fireworks models (see the sketch below):
  - `"fireworks::v1::mixtral-8x7b-instruct"` with "chat" capability - used for "chat" and "fastChat"
  - `"fireworks::v1::starcoder-16b"` with "autocomplete" capability - used for "autocomplete"
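For reference, a minimal sketch of how such a Fireworks override and the two models might be expressed in the `modelConfiguration` site config. The endpoint URL, access-token placeholder, `modelName` paths, display names, and context-window numbers below are illustrative assumptions rather than recommended values:

```json
{
  "modelConfiguration": {
    "providerOverrides": [
      {
        // Route "fireworks" requests directly to Fireworks, bypassing Cody Gateway
        "id": "fireworks",
        "displayName": "Fireworks",
        "serverSideConfig": {
          "type": "fireworks",
          "accessToken": "<FIREWORKS_API_KEY>",
          "endpoint": "https://api.fireworks.ai/inference/v1"
        }
      }
    ],
    "modelOverrides": [
      {
        // Chat-capable model, used as the default for "chat" and "fastChat"
        "modelRef": "fireworks::v1::mixtral-8x7b-instruct",
        "displayName": "Mixtral 8x7B Instruct",
        "modelName": "accounts/fireworks/models/mixtral-8x7b-instruct",
        "capabilities": ["chat"],
        "category": "speed",
        "status": "stable",
        "contextWindow": { "maxInputTokens": 7168, "maxOutputTokens": 4000 }
      },
      {
        // Autocomplete-capable model, used as the default for "autocomplete"
        "modelRef": "fireworks::v1::starcoder-16b",
        "displayName": "StarCoder 16B",
        "modelName": "accounts/fireworks/models/starcoder-16b",
        "capabilities": ["autocomplete"],
        "category": "speed",
        "status": "stable",
        "contextWindow": { "maxInputTokens": 7168, "maxOutputTokens": 4000 }
      }
    ],
    "defaultModels": {
      "chat": "fireworks::v1::mixtral-8x7b-instruct",
      "fastChat": "fireworks::v1::mixtral-8x7b-instruct",
      "codeCompletion": "fireworks::v1::starcoder-16b"
    }
  }
}
```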
</Accordion>
}
```

In the configuration above,
- Set up a provider override for Azure OpenAI, routing requests for this provider directly to the specified Azure OpenAI endpoint (bypassing Cody Gateway).
- Set up a provider override for Google Gemini, routing requests for this provider directly to the specified endpoint (bypassing Cody Gateway)
- Add the `"google::v1::gemini-1.5-pro"` model, which is used for all Cody features. We do not add other models for simplicity, as adding multiple models is already covered in the examples above (see the sketch below)