docs/customization.md (2 additions, 2 deletions)
@@ -50,7 +50,7 @@ The prompts are currently tailored to the sample data since they start with "Ass
TODO FIX THIS!

-If you followed the instructions in [the GPT vision guide](gpt4v.md) to enable the vision approach and the "Use GPT vision model" option is selected, then the chat tab will use the `chatreadretrievereadvision.py` approach instead. This approach is similar to the `chatreadretrieveread.py` approach, with a few differences:
+If you followed the instructions in [the multimodal guide](multimodal.md) to enable the vision approach and the "Use GPT vision model" option is selected, then the chat tab will use the `chatreadretrievereadvision.py` approach instead. This approach is similar to the `chatreadretrieveread.py` approach, with a few differences:
1. Step 1 is the same as before, except it uses the GPT-4 Vision model instead of the default GPT-3.5 model.
2. For this step, it also calculates a vector embedding for the user question using [the Computer Vision vectorize text API](https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/image-retrieval#call-the-vectorize-text-api), and passes that to Azure AI Search to compare against the `imageEmbeddings` fields in the indexed documents. For each matching document, it downloads the image blob and converts it to a base64 encoding.
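
As a minimal sketch of what that flow can look like (not the repo's actual code), the question is vectorized with the public vectorize-text REST call and then used in a vector query via the `azure-search-documents` SDK; the endpoints, keys, index name, and question below are placeholders:

```python
import base64

import requests
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery

# Placeholders: substitute your own resource names and keys.
VISION_ENDPOINT = "https://<your-computer-vision>.cognitiveservices.azure.com"
VISION_KEY = "<computer-vision-key>"
SEARCH_ENDPOINT = "https://<your-search-service>.search.windows.net"
SEARCH_KEY = "<search-query-key>"

question = "What does the company org chart look like?"

# Vectorize the user question with the Computer Vision retrieval API.
resp = requests.post(
    f"{VISION_ENDPOINT}/computervision/retrieval:vectorizeText",
    params={"api-version": "2023-02-01-preview"},
    headers={"Ocp-Apim-Subscription-Key": VISION_KEY},
    json={"text": question},
)
resp.raise_for_status()
question_vector = resp.json()["vector"]

# Compare that vector against the imageEmbeddings field in the index.
search_client = SearchClient(SEARCH_ENDPOINT, "<your-index>", AzureKeyCredential(SEARCH_KEY))
results = search_client.search(
    search_text=question,
    vector_queries=[
        VectorizedQuery(vector=question_vector, k_nearest_neighbors=3, fields="imageEmbeddings")
    ],
)

# For each match, the approach downloads the image blob and base64-encodes it.
for doc in results:
    blob_bytes = b"..."  # stand-in for the bytes downloaded from Blob Storage
    image_base64 = base64.b64encode(blob_bytes).decode("ascii")
```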
@@ -70,7 +70,7 @@ The prompt for step 2 is currently tailored to the sample data since it starts w
#### Ask with vision
TODO FIX THIS!

-If you followed the instructions in [the GPT vision guide](gpt4v.md) to enable the vision approach and the "Use GPT vision model" option is selected, then the ask tab will use the `retrievethenreadvision.py` approach instead. This approach is similar to the `retrievethenread.py` approach, with a few differences:
+If you followed the instructions in [the multimodal guide](multimodal.md) to enable the vision approach and the "Use GPT vision model" option is selected, then the ask tab will use the `retrievethenreadvision.py` approach instead. This approach is similar to the `retrievethenread.py` approach, with a few differences:
1. For this step, it also calculates a vector embedding for the user question using [the Computer Vision vectorize text API](https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/image-retrieval#call-the-vectorize-text-api), and passes that to Azure AI Search to compare against the `imageEmbeddings` fields in the indexed documents. For each matching document, it downloads the image blob and converts it to a base64 encoding.
2. When it combines the search results and user question, it includes the base64-encoded images, and sends both the text and images to the GPT-4 Vision model (similar to this [documentation example](https://platform.openai.com/docs/guides/vision/quick-start)). The model generates a response that includes citations to the images, and the UI renders the base64-encoded images when a citation is clicked.
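
For illustration, here is a hedged sketch of that final call with the `openai` Python SDK, passing text and a base64-encoded image as separate content parts; the deployment name, API version, and image are placeholders, not the repo's exact configuration:

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-openai>.openai.azure.com",
    api_key="<openai-key>",
    api_version="2024-02-01",
)

image_base64 = "<base64-encoded image from the search step>"

# Text and images travel together as "content parts" in one user message.
response = client.chat.completions.create(
    model="<your-gpt4-vision-deployment>",  # placeholder deployment name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Sources: ...\n\nQuestion: What does the org chart look like?"},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_base64}"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```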
docs/deploy_features.md (4 additions, 4 deletions)
@@ -135,14 +135,14 @@ This process does *not* delete your previous model deployment. If you want to de
## Using reasoning models
-⚠️ This feature is not currently compatible with [vision integration](./gpt4v.md). TODO: OR IS IT?
+⚠️ This feature is not currently compatible with the [multimodal feature](./multimodal.md). TODO: OR IS IT?
This feature allows you to use reasoning models to generate responses based on retrieved content. These models spend more time processing and understanding the user's request.
To enable reasoning models, follow the steps in [the reasoning models guide](./reasoning.md).
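
As a rough sketch (the deployment name and API version below are assumptions, not this repo's exact settings), calling a reasoning model differs from a standard chat call mainly in the optional `reasoning_effort` parameter:

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-openai>.openai.azure.com",
    api_key="<openai-key>",
    api_version="2024-12-01-preview",  # assumed: a version that supports o-series models
)

response = client.chat.completions.create(
    model="<your-o3-mini-deployment>",  # placeholder reasoning model deployment
    reasoning_effort="medium",  # low | medium | high; trades latency for deeper reasoning
    messages=[{"role": "user", "content": "Summarize the retrieved sources."}],
)
print(response.choices[0].message.content)
```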
## Using agentic retrieval
-⚠️ This feature is not currently compatible with [vision integration](./gpt4v.md). TODO: OR IS IT?
+⚠️ This feature is not currently compatible with the [multimodal feature](./multimodal.md). TODO: OR IS IT?
This feature allows you to use agentic retrieval in place of the Search API. To enable agentic retrieval, follow the steps in [the agentic retrieval guide](./agentic_retrieval.md).
@@ -259,7 +259,7 @@ to experiment with different options before committing to them.
⚠️ This feature is not currently compatible with [integrated vectorization](#enabling-integrated-vectorization).
-It is compatible with [GPT vision integration](./gpt4v.md), but the features provide similar functionality. TODO: UPDATE
+It is compatible with the [multimodal feature](./multimodal.md), but the features provide similar functionality. TODO: UPDATE
By default, if your documents contain image-like figures, the data ingestion process will ignore those figures,
so users will not be able to ask questions about them.
@@ -347,7 +347,7 @@ azd env set USE_SPEECH_OUTPUT_BROWSER true
## Enabling Integrated Vectorization
-⚠️ This feature is not currently compatible with the [GPT vision integration](./gpt4v.md). TODO: UPDATE
+⚠️ This feature is not currently compatible with the [multimodal feature](./multimodal.md). TODO: UPDATE
Azure AI Search recently introduced an [integrated vectorization feature in preview mode](https://techcommunity.microsoft.com/blog/azure-ai-services-blog/announcing-the-public-preview-of-integrated-vectorization-in-azure-ai-search/3960809). This feature is a cloud-based approach to data ingestion, which takes care of document format cracking, data extraction, chunking, vectorization, and indexing, all with Azure technologies.
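
To give a feel for what the service does on your behalf, here is a hedged sketch of a skillset that chunks documents and then embeds each chunk, defined through the preview REST API; all names are placeholders, and this is illustrative only rather than how this repo wires it up:

```python
import requests

SEARCH_ENDPOINT = "https://<your-search-service>.search.windows.net"
ADMIN_KEY = "<search-admin-key>"

# A skillset that splits documents into chunks, then embeds each chunk.
skillset = {
    "name": "<your-skillset>",
    "skills": [
        {
            "@odata.type": "#Microsoft.Skills.Text.SplitSkill",
            "textSplitMode": "pages",
            "maximumPageLength": 2000,
            "inputs": [{"name": "text", "source": "/document/content"}],
            "outputs": [{"name": "textItems", "targetName": "pages"}],
        },
        {
            "@odata.type": "#Microsoft.Skills.Text.AzureOpenAIEmbeddingSkill",
            "context": "/document/pages/*",
            "resourceUri": "https://<your-openai>.openai.azure.com",
            "deploymentId": "<your-embedding-deployment>",
            "inputs": [{"name": "text", "source": "/document/pages/*"}],
            "outputs": [{"name": "embedding", "targetName": "vector"}],
        },
    ],
}

resp = requests.put(
    f"{SEARCH_ENDPOINT}/skillsets/<your-skillset>",
    params={"api-version": "2023-10-01-Preview"},
    headers={"api-key": ADMIN_KEY, "Content-Type": "application/json"},
    json=skillset,
)
resp.raise_for_status()
```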
docs/productionizing.md (1 addition, 1 deletion)
@@ -106,7 +106,7 @@ First make sure you have the locust package installed in your Python environment
```
python -m pip install locust
```

-Then run the locust command, specifying the name of the User class to use from `locustfile.py`. We've provided a `ChatUser` class that simulates a user asking questions and receiving answers, as well as a `ChatVisionUser` to simulate a user asking questions with [multimodal answering enabled](/docs/gpt4v.md). TODO
+Then run the locust command, specifying the name of the User class to use from `locustfile.py`. We've provided a `ChatUser` class that simulates a user asking questions and receiving answers, as well as a `ChatVisionUser` to simulate a user asking questions with [multimodal answering enabled](/docs/multimodal.md). TODO
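
For orientation, a minimal sketch of what such a user class looks like (the `/chat` request body here is simplified relative to the app's real schema):

```python
from locust import HttpUser, between, task

class ChatUser(HttpUser):
    # Simulated users pause 5-20 seconds between questions.
    wait_time = between(5, 20)

    @task
    def ask_question(self):
        # Simplified payload; the app's actual /chat schema includes more fields.
        self.client.post(
            "/chat",
            json={"messages": [{"role": "user", "content": "What is included in my plan?"}]},
        )
```

You would then run locust against a deployed instance, for example `locust ChatUser --host https://<your-app>`.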