
Commit 2c4f430

Merge pull request #3560 from santiagxf/santiagxf-patch-1: Update python.md
2 parents 745d5fc + e26a32e
File tree

1 file changed: +7 −7 lines changed
  • articles/ai-foundry/model-inference/includes/use-image-embeddings
articles/ai-foundry/model-inference/includes/use-image-embeddings/python.md

Lines changed: 7 additions & 7 deletions
@@ -7,7 +7,7 @@ author: msakande
 reviewer: santiagxf
 ms.service: azure-ai-model-inference
 ms.topic: how-to
-ms.date: 01/22/2025
+ms.date: 03/17/2025
 ms.author: mopeakande
 ms.reviewer: fasantia
 ms.custom: generated
@@ -41,7 +41,7 @@ import os
 from azure.ai.inference import ImageEmbeddingsClient
 from azure.core.credentials import AzureKeyCredential
 
-model = ImageEmbeddingsClient(
+client = ImageEmbeddingsClient(
     endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
     credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_CREDENTIAL"]),
     model="Cohere-embed-v3-english"
@@ -55,7 +55,7 @@ import os
 from azure.ai.inference import ImageEmbeddingsClient
 from azure.identity import DefaultAzureCredential
 
-model = ImageEmbeddingsClient(
+client = ImageEmbeddingsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=DefaultAzureCredential(),
    model="Cohere-embed-v3-english"
@@ -70,7 +70,7 @@ To create image embeddings, you need to pass the image data as part of your requ
 from azure.ai.inference.models import ImageEmbeddingInput
 
 image_input= ImageEmbeddingInput.load(image_file="sample1.png", image_format="png")
-response = model.embed(
+response = client.embed(
     input=[ image_input ],
 )
 ```
@@ -102,7 +102,7 @@ Some models can generate embeddings from images and text pairs. In this case, yo
 ```python
 text_image_input= ImageEmbeddingInput.load(image_file="sample1.png", image_format="png")
 text_image_input.text = "A cute baby sea otter"
-response = model.embed(
+response = client.embed(
     input=[ text_image_input ],
 )
 ```
@@ -117,7 +117,7 @@ The following example shows how to create embeddings that are used to create an
 ```python
 from azure.ai.inference.models import EmbeddingInputType
 
-response = model.embed(
+response = client.embed(
     input=[ image_input ],
     input_type=EmbeddingInputType.DOCUMENT,
 )
@@ -129,7 +129,7 @@ When you work on a query to retrieve such a document, you can use the following
 ```python
 from azure.ai.inference.models import EmbeddingInputType
 
-response = model.embed(
+response = client.embed(
     input=[ image_input ],
     input_type=EmbeddingInputType.QUERY,
 )
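
Taken together, these hunks rename the `ImageEmbeddingsClient` instance from `model` to `client`, which avoids shadowing the `model="Cohere-embed-v3-english"` keyword argument passed to the same constructor. The sketch below illustrates the updated call pattern; it uses a hypothetical in-memory stand-in for `azure.ai.inference.ImageEmbeddingsClient` (same constructor and `embed` shape as shown in the diff) so it runs without an Azure endpoint or credentials:

```python
import os


# Hypothetical local stub mirroring azure.ai.inference.ImageEmbeddingsClient;
# the real client is constructed the same way but calls an Azure endpoint.
class ImageEmbeddingsClient:
    def __init__(self, endpoint, credential, model):
        self.endpoint = endpoint
        self.credential = credential
        # 'model' is a constructor kwarg, which is why naming the
        # instance itself 'model' was confusing.
        self.model = model

    def embed(self, input, input_type=None):
        # Return one placeholder embedding vector per input item.
        return {"data": [[0.0, 0.0, 0.0] for _ in input]}


os.environ.setdefault("AZURE_INFERENCE_ENDPOINT", "https://example.invalid")

client = ImageEmbeddingsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=None,  # real code: AzureKeyCredential(...) or DefaultAzureCredential()
    model="Cohere-embed-v3-english",
)

response = client.embed(input=["placeholder-image-input"])
print(len(response["data"]))  # prints 1
```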
