
Commit 7d576f0

Update links to multimodal

1 parent dae363f commit 7d576f0

4 files changed: +8 -8 lines changed


docs/README.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ These are advanced topics that are not necessary for a basic deployment.
 - [Enabling optional features](deploy_features.md)
 - [All features](docs/deploy_features.md)
 - [Login and access control](login_and_acl.md)
-- [GPT-4 Turbo with Vision](gpt4v.md) TODO
+- [Multimodal](multimodal.md)
 - [Private endpoints](deploy_private.md)
 - [Agentic retrieval](agentic_retrieval.md)
 - [Sharing deployment environments](sharing_environments.md)

docs/customization.md

Lines changed: 2 additions & 2 deletions
@@ -50,7 +50,7 @@ The prompts are currently tailored to the sample data since they start with "Ass
 
 TODO FIX THIS!
 
-If you followed the instructions in [the GPT vision guide](gpt4v.md) to enable the vision approach and the "Use GPT vision model" option is selected, then the chat tab will use the `chatreadretrievereadvision.py` approach instead. This approach is similar to the `chatreadretrieveread.py` approach, with a few differences:
+If you followed the instructions in [the multimodal guide](multimodal.md) to enable the vision approach and the "Use GPT vision model" option is selected, then the chat tab will use the `chatreadretrievereadvision.py` approach instead. This approach is similar to the `chatreadretrieveread.py` approach, with a few differences:
 
 1. Step 1 is the same as before, except it uses the GPT-4 Vision model instead of the default GPT-3.5 model.
 2. For this step, it also calculates a vector embedding for the user question using [the Computer Vision vectorize text API](https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/image-retrieval#call-the-vectorize-text-api), and passes that to the Azure AI Search to compare against the `imageEmbeddings` fields in the indexed documents. For each matching document, it downloads the image blob and converts it to a base 64 encoding.
@@ -70,7 +70,7 @@ The prompt for step 2 is currently tailored to the sample data since it starts w
 #### Ask with vision
 
 TODO FIX THIS!
-If you followed the instructions in [the GPT vision guide](gpt4v.md) to enable the vision approach and the "Use GPT vision model" option is selected, then the ask tab will use the `retrievethenreadvision.py` approach instead. This approach is similar to the `retrievethenread.py` approach, with a few differences:
+If you followed the instructions in [the multimodal guide](multimodal.md) to enable the vision approach and the "Use GPT vision model" option is selected, then the ask tab will use the `retrievethenreadvision.py` approach instead. This approach is similar to the `retrievethenread.py` approach, with a few differences:
 
 1. For this step, it also calculates a vector embedding for the user question using [the Computer Vision vectorize text API](https://learn.microsoft.com/azure/ai-services/computer-vision/how-to/image-retrieval#call-the-vectorize-text-api), and passes that to the Azure AI Search to compare against the `imageEmbeddings` fields in the indexed documents. For each matching document, it downloads the image blob and converts it to a base 64 encoding.
 2. When it combines the search results and user question, it includes the base 64 encoded images, and sends along both the text and images to the GPT4 Vision model (similar to this [documentation example](https://platform.openai.com/docs/guides/vision/quick-start)). The model generates a response that includes citations to the images, and the UI renders the base64 encoded images when a citation is clicked.
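
As background for the vision approaches described in the hunks above, here is a minimal, illustrative sketch (not part of this commit, and not the repo's actual code) of the two operations both approaches rely on: vectorizing the user question with the Computer Vision vectorize text API, and base64-encoding a downloaded image blob before it is sent to the vision model. The endpoint, key, and API/model version values shown are assumptions.

```python
# Illustrative sketch only; endpoint, key, and version values are assumptions.
import base64

import requests

COMPUTER_VISION_ENDPOINT = "https://<your-computer-vision-resource>.cognitiveservices.azure.com"
COMPUTER_VISION_KEY = "<your-key>"


def vectorize_text(text: str) -> list[float]:
    # POST the user question to the vectorize text API and return the embedding,
    # which can be compared against the imageEmbeddings fields in the search index.
    response = requests.post(
        f"{COMPUTER_VISION_ENDPOINT}/computervision/retrieval:vectorizeText",
        params={"api-version": "2024-02-01", "model-version": "2023-04-15"},
        headers={"Ocp-Apim-Subscription-Key": COMPUTER_VISION_KEY},
        json={"text": text},
    )
    response.raise_for_status()
    return response.json()["vector"]


def encode_image(image_bytes: bytes) -> str:
    # Convert a downloaded image blob to the base64 string included in the model request.
    return base64.b64encode(image_bytes).decode("ascii")
```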

docs/deploy_features.md

Lines changed: 4 additions & 4 deletions
@@ -135,14 +135,14 @@ This process does *not* delete your previous model deployment. If you want to de
 
 ## Using reasoning models
 
-⚠️ This feature is not currently compatible with [vision integration](./gpt4v.md). TODO: OR IS IT?
+⚠️ This feature is not currently compatible with [multimodal feature](./multimodal.md). TODO: OR IS IT?
 
 This feature allows you to use reasoning models to generate responses based on retrieved content. These models spend more time processing and understanding the user's request.
 To enable reasoning models, follow the steps in [the reasoning models guide](./reasoning.md).
 
 ## Using agentic retrieval
 
-⚠️ This feature is not currently compatible with [vision integration](./gpt4v.md). TODO: OR IS IT?
+⚠️ This feature is not currently compatible with [multimodal feature](./multimodal.md). TODO: OR IS IT?
 
 This feature allows you to use agentic retrieval in place of the Search API. To enable agentic retrieval, follow the steps in [the agentic retrieval guide](./agentic_retrieval.md)
 
@@ -259,7 +259,7 @@ to experiment with different options before committing to them.
 
 ⚠️ This feature is not currently compatible with [integrated vectorization](#enabling-integrated-vectorization).
 
-It is compatible with [GPT vision integration](./gpt4v.md), but the features provide similar functionality. TODO: UPDATE
+It is compatible with the [multimodal feature](./multimodal.md), but the features provide similar functionality. TODO: UPDATE
 
 By default, if your documents contain image-like figures, the data ingestion process will ignore those figures,
 so users will not be able to ask questions about them.
@@ -347,7 +347,7 @@ azd env set USE_SPEECH_OUTPUT_BROWSER true
 
 ## Enabling Integrated Vectorization
 
-⚠️ This feature is not currently compatible with the [GPT vision integration](./gpt4v.md). TODO: UPDATE
+⚠️ This feature is not currently compatible with the [multimodal feature](./multimodal.md). TODO: UPDATE
 
 Azure AI search recently introduced an [integrated vectorization feature in preview mode](https://techcommunity.microsoft.com/blog/azure-ai-services-blog/announcing-the-public-preview-of-integrated-vectorization-in-azure-ai-search/3960809). This feature is a cloud-based approach to data ingestion, which takes care of document format cracking, data extraction, chunking, vectorization, and indexing, all with Azure technologies.
 

docs/productionizing.md

Lines changed: 1 addition & 1 deletion
@@ -106,7 +106,7 @@ First make sure you have the locust package installed in your Python environment
 python -m pip install locust
 ```
 
-Then run the locust command, specifying the name of the User class to use from `locustfile.py`. We've provided a `ChatUser` class that simulates a user asking questions and receiving answers, as well as a `ChatVisionUser` to simulate a user asking questions with [multimodal answering enabled](/docs/gpt4v.md). TODO
+Then run the locust command, specifying the name of the User class to use from `locustfile.py`. We've provided a `ChatUser` class that simulates a user asking questions and receiving answers, as well as a `ChatVisionUser` to simulate a user asking questions with [multimodal answering enabled](/docs/multimodal.md). TODO
 
 ```shell
 locust ChatUser
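
For background on the load test touched by this hunk, here is a minimal sketch of a locust user class along the lines of the `ChatUser` mentioned above. It is illustrative only, not the repository's actual `locustfile.py`; the host and the `/chat` request payload shape are assumptions.

```python
# Illustrative sketch of a locust user class; host and payload shape are assumptions.
from locust import HttpUser, between, task


class ChatUser(HttpUser):
    host = "http://localhost:50505"  # assumed local backend URL; override with --host
    wait_time = between(5, 20)  # seconds to pause between simulated questions

    @task
    def ask_question(self):
        # Send one chat turn to the backend and let locust record the response time.
        self.client.post(
            "/chat",
            json={"messages": [{"role": "user", "content": "What does a product manager do?"}]},
        )
```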
