Cohere Embed English is a text representation model used for semantic search, retrieval-augmented generation (RAG), classification, and clustering. Embed English performs well on the HuggingFace MTEB (Massive Text Embedding Benchmark) and on use cases for various industries, such as finance, legal, and general-purpose corpora. Embed English also has the following attributes:
* Embed English has 1,024 dimensions.
* The context window of the model is 512 tokens.
## Work with embeddings
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with an embeddings model.
Cohere Embed V3 models can generate multiple embeddings for the same input depending on how you plan to use them. This capability allows you to retrieve more accurate embeddings for RAG patterns.
The following example shows how to create an embedding for a document that will be stored in a vector database:
```python
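# A minimal sketch assuming the azure-ai-inference Python package and a serverless
# endpoint exposed through the environment variables AZURE_INFERENCE_ENDPOINT and
# AZURE_INFERENCE_CREDENTIAL (both names are illustrative).
import os

from azure.ai.inference import EmbeddingsClient
from azure.ai.inference.models import EmbeddingInputType
from azure.core.credentials import AzureKeyCredential

client = EmbeddingsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_CREDENTIAL"]),
)

# input_type=DOCUMENT asks for embeddings optimized for content that will be stored
# in a vector database, as opposed to QUERY embeddings used at search time.
response = client.embed(
    input=["The answer to the ultimate question of life, the universe, and everything is 42"],
    input_type=EmbeddingInputType.DOCUMENT,
)

for item in response.data:
    # Each Embed English vector has 1,024 dimensions.
    print(f"index={item.index}, dimensions={len(item.embedding)}")
```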
The Cohere family of models for embeddings includes the following models:
Cohere Embed English is a text representation model used for semantic search, retrieval-augmented generation (RAG), classification, and clustering. Embed English performs well on the HuggingFace MTEB (Massive Text Embedding Benchmark) and on use cases for various industries, such as finance, legal, and general-purpose corpora. Embed English also has the following attributes:
Cohere Embed V3 models can generate multiple embeddings for the same input depending on how you plan to use them. This capability allows you to retrieve more accurate embeddings for RAG patterns.
The following example shows how to create an embedding for a document that will be stored in a vector database:
```javascript
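// A minimal sketch assuming the @azure-rest/ai-inference package, an ES module with
// top-level await, and a serverless endpoint exposed through the environment variables
// AZURE_INFERENCE_ENDPOINT and AZURE_INFERENCE_CREDENTIAL (both names are illustrative).
import ModelClient, { isUnexpected } from "@azure-rest/ai-inference";
import { AzureKeyCredential } from "@azure/core-auth";

const client = ModelClient(
  process.env.AZURE_INFERENCE_ENDPOINT,
  new AzureKeyCredential(process.env.AZURE_INFERENCE_CREDENTIAL)
);

// input_type "document" asks for embeddings optimized for content that will be stored
// in a vector database, as opposed to "query" embeddings used at search time.
const response = await client.path("/embeddings").post({
  body: {
    input: ["The answer to the ultimate question of life, the universe, and everything is 42"],
    input_type: "document",
  },
});

if (isUnexpected(response)) {
  throw response.body.error;
}

for (const item of response.body.data) {
  console.log(`index=${item.index}, dimensions=${item.embedding.length}`);
}
```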
The Cohere family of models for embeddings includes the following models:
Cohere Embed English is a text representation model used for semantic search, retrieval-augmented generation (RAG), classification, and clustering. Embed English performs well on the HuggingFace MTEB (Massive Text Embedding Benchmark) and on use cases for various industries, such as finance, legal, and general-purpose corpora. Embed English also has the following attributes:
* Embed English has 1,024 dimensions.
* The context window of the model is 512 tokens.
## Work with embeddings
In this section, you use the [Azure AI model inference API](https://aka.ms/azureai/modelinference) with an embeddings model.
### Create a client to consume the model
Cohere Embed V3 models can generate multiple embeddings for the same input depending on how you plan to use them. This capability allows you to retrieve more accurate embeddings for RAG patterns.
The following example shows how to create an embedding for a document that will be stored in a vector database:
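As a minimal sketch of the request body for the `/embeddings` route, `input_type` is set to `document` so the embedding is optimized for content stored in a vector database:

```json
{
    "input": [
        "The answer to the ultimate question of life, the universe, and everything is 42"
    ],
    "input_type": "document"
}
```

The response contains a `data` array with one embedding per input string, together with the model's usage statistics.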
Cohere Embed V3 models can optimize the embeddings they produce based on the intended use case.
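For example, the text sent at query time can be embedded differently from the documents being indexed. The following is a minimal sketch assuming the `azure-ai-inference` Python package and the same illustrative environment variables as in the earlier Python example:

```python
import os

from azure.ai.inference import EmbeddingsClient
from azure.ai.inference.models import EmbeddingInputType
from azure.core.credentials import AzureKeyCredential

client = EmbeddingsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_CREDENTIAL"]),
)

# QUERY embeddings are tuned for the search side of a RAG workload, while
# DOCUMENT embeddings are tuned for the content being indexed.
response = client.embed(
    input=["What is the answer to the ultimate question?"],
    input_type=EmbeddingInputType.QUERY,
)

query_vector = response.data[0].embedding
```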
| Description | Language | Sample |
|-------------|----------|--------|
| Azure AI Inference package for JavaScript | JavaScript |[Link](https://aka.ms/azsdk/azure-ai-inference/javascript/samples)|
| Azure AI Inference package for Python | Python |[Link](https://aka.ms/azsdk/azure-ai-inference/python/samples)|