Commit 5e87db8: fixing bookmarks

1 parent 385c9d8 commit 5e87db8

File tree

3 files changed: +8 -8 lines changed

articles/machine-learning/how-to-deploy-models-cohere-command.md

Lines changed: 2 additions & 2 deletions

@@ -112,7 +112,7 @@ To create a deployment:
 1. Select the endpoint to open its Details page.
 1. Select the **Test** tab to start interacting with the model.
 1. You can always find the endpoint's details, URL, and access keys by navigating to **Workspace** > **Endpoints** > **Serverless endpoints**.
-2. Take note of the **Target** URL and the **Secret Key**. For more information on using the APIs, see the [reference](#reference-for-cohere-models-deployed-as-a-service) section.
+2. Take note of the **Target** URL and the **Secret Key**. For more information on using the APIs, see the [reference](#reference-for-cohere-models-deployed-as-a-serverless-api) section.

 To learn about billing for models deployed with pay-as-you-go, see [Cost and quota considerations for Cohere models deployed as a service](#cost-and-quota-considerations-for-models-deployed-as-a-service).

@@ -125,7 +125,7 @@ The previously mentioned Cohere models can be consumed using the chat API.
 1. Copy the **Target** URL and the **Key** token values.
 2. Cohere exposes two routes for inference with the Command R and Command R+ models. The [Azure AI Model Inference API](reference-model-inference-api.md) on the route `/chat/completions` and the native [Cohere API](#cohere-chat-api).

-   For more information on using the APIs, see the [reference](#reference-for-cohere-models-deployed-as-a-service) section.
+   For more information on using the APIs, see the [reference](#reference-for-cohere-models-deployed-as-a-serverless-api) section.

 ## Reference for Cohere models deployed as a serverless API
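The two Cohere chat routes named in this diff can be sketched as a request builder. This is an illustrative sketch only: the endpoint URL and key below are placeholders, not values from this commit, and the payload shape is a minimal assumption about the chat-completions schema.

```python
# Sketch of a request to a Cohere Command serverless endpoint. The Target URL
# and Secret Key here are hypothetical; copy the real values from
# Workspace > Endpoints > Serverless endpoints.

def chat_completions_request(target_url: str, secret_key: str, prompt: str) -> dict:
    """Build a request for the /chat/completions (Azure AI Model Inference API) route."""
    return {
        "url": target_url.rstrip("/") + "/chat/completions",
        "headers": {
            "Authorization": f"Bearer {secret_key}",
            "Content-Type": "application/json",
        },
        "json": {"messages": [{"role": "user", "content": prompt}]},
    }

req = chat_completions_request(
    "https://example-endpoint.eastus2.inference.ai.azure.com",  # placeholder
    "<secret-key>",
    "Hello",
)
print(req["url"])  # route exposed for Command R and Command R+
```

Sending it would be, for example, `requests.post(**req)` against a live deployment; the builder itself runs without one.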

articles/machine-learning/how-to-deploy-models-cohere-embed.md

Lines changed: 2 additions & 2 deletions

@@ -91,7 +91,7 @@ To create a deployment:
 1. Select the endpoint to open its Details page.
 1. Select the **Test** tab to start interacting with the model.
 1. You can always find the endpoint's details, URL, and access keys by navigating to **Workspace** > **Endpoints** > **Serverless endpoints**.
-1. Take note of the **Target** URL and the **Secret Key**. For more information on using the APIs, see the [reference](#embed-api-reference-for-cohere-embed-models-deployed-as-a-service) section.
+1. Take note of the **Target** URL and the **Secret Key**. For more information on using the APIs, see the [reference](#embed-api-reference-for-cohere-embed-models-deployed-as-a-serverless-api) section.

 To learn about billing for models deployed with pay-as-you-go, see [Cost and quota considerations for Cohere models deployed as a service](#cost-and-quota-considerations-for-models-deployed-as-a-service).

@@ -104,7 +104,7 @@ The previously mentioned Cohere models can be consumed using the chat API.
 1. Copy the **Target** URL and the **Key** token values.
 1. Cohere exposes two routes for inference with the Embed v3 - English and Embed v3 - Multilingual models. `v1/embeddings` adheres to the Azure AI Generative Messages API schema, and `v1/embed` supports Cohere's native API schema.

-   For more information on using the APIs, see the [reference](#embed-api-reference-for-cohere-embed-models-deployed-as-a-service) section.
+   For more information on using the APIs, see the [reference](#embed-api-reference-for-cohere-embed-models-deployed-as-a-serverless-api) section.

 ## Embed API reference for Cohere Embed models deployed as a serverless API
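The two embedding routes this diff distinguishes (`v1/embeddings` for the Azure schema, `v1/embed` for Cohere's native schema) can be sketched as a small URL helper. The base URL is a placeholder assumption, not a value from this commit.

```python
# Hedged sketch of the two Embed routes named in the diff. The endpoint URL
# below is hypothetical; use your deployment's Target URL.

def embed_url(target_url: str, native: bool = False) -> str:
    """Return the inference URL for a Cohere Embed serverless endpoint.

    v1/embeddings adheres to the Azure schema; v1/embed is Cohere's native schema.
    """
    route = "v1/embed" if native else "v1/embeddings"
    return target_url.rstrip("/") + "/" + route

base = "https://example-embed-endpoint.eastus2.inference.ai.azure.com"  # placeholder
print(embed_url(base))               # Azure schema route
print(embed_url(base, native=True))  # Cohere native schema route
```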

articles/machine-learning/how-to-deploy-models-llama.md

Lines changed: 4 additions & 4 deletions

@@ -43,7 +43,7 @@ The following models are available in Azure Marketplace for Meta Llama models wh
 * [Meta Llama-3-8B (preview)](https://aka.ms/aistudio/landing/meta-llama-3-8b-base)
 * [Meta Llama-3-70B (preview)](https://aka.ms/aistudio/landing/meta-llama-3-70b-base)

-If you need to deploy a different model, [deploy it to real-time endpoints](#deploy-meta-llama-models-to-real-time-endpoints) instead.
+If you need to deploy a different model, [deploy it to managed compute](#deploy-meta-llama-models-to-managed-compute) instead.

 # [Meta Llama 2](#tab/llama-two)

@@ -54,7 +54,7 @@ If you need to deploy a different model, [deploy it to real-time endpoints](#dep
 * Meta Llama-2-70B (preview)
 * Meta Llama 2 70B-Chat (preview)

-If you need to deploy a different model, [deploy it to managed compute](#deploy-meta-llama-models-to-real-time-endpoints) instead.
+If you need to deploy a different model, [deploy it to managed compute](#deploy-meta-llama-models-to-managed-compute) instead.

 ---

@@ -199,7 +199,7 @@ Models deployed as a service can be consumed using either the chat or the comple
 - For completions models, such as `Llama-3-8B`, use the [`<target_url>/v1/completions`](#completions-api) API.
 - For chat models, such as `Llama-3-8B-Instruct`, use the [`<target_url>/v1/chat/completions`](#chat-api) API.

-For more information on using the APIs, see the [reference](#reference-for-meta-llama-models-deployed-as-a-service) section.
+For more information on using the APIs, see the [reference](#reference-for-meta-llama-models-deployed-a-serverless-api) section.

 # [Meta Llama 2](#tab/llama-two)

@@ -211,7 +211,7 @@ Models deployed as a service can be consumed using either the chat or the comple
 - For completions models, such as `Meta-Llama-2-7B`, use the [`/v1/completions`](#completions-api) API or the [Azure AI Model Inference API](reference-model-inference-api.md) on the route `/completions`.
 - For chat models, such as `Meta-Llama-2-7B-Chat`, use the [`/v1/chat/completions`](#chat-api) API or the [Azure AI Model Inference API](reference-model-inference-api.md) on the route `/chat/completions`.

-For more information on using the APIs, see the [reference](#reference-for-meta-llama-models-deployed-as-a-service) section.
+For more information on using the APIs, see the [reference](#reference-for-meta-llama-models-deployed-a-serverless-api) section.

 ---
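The completions-vs-chat split in the Llama diffs (base models on `/v1/completions`, chat/instruct models on `/v1/chat/completions`) can be sketched as a route chooser. The suffix convention below is an assumption inferred from the model names quoted in the diff, not a documented rule.

```python
# Illustrative route selection for Meta Llama serverless endpoints. The
# "-Instruct"/"-Chat" suffix check is an assumed naming convention based on
# the example model names in the diff; the endpoint URL is a placeholder.

def llama_route(target_url: str, model_name: str) -> str:
    """Pick the chat route for chat-tuned models, completions otherwise."""
    base = target_url.rstrip("/")
    if model_name.lower().endswith(("-instruct", "-chat")):
        return base + "/v1/chat/completions"
    return base + "/v1/completions"

endpoint = "https://example-llama-endpoint.eastus2.inference.ai.azure.com"  # placeholder
print(llama_route(endpoint, "Llama-3-8B-Instruct"))  # chat route
print(llama_route(endpoint, "Llama-3-8B"))           # completions route
```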
