
Commit 7df3449

Update use-your-data.md
1 parent 01f5be6 commit 7df3449

File tree

1 file changed (+6, -6 lines changed)


articles/ai-services/openai/concepts/use-your-data.md

Lines changed: 6 additions & 6 deletions
@@ -68,7 +68,7 @@ For some data sources such as uploading files from your local machine (preview)
 |Data source | Description |
 |---------|---------|
 | [Azure AI Search](/azure/search/search-what-is-azure-search) | Use an existing Azure AI Search index with Azure OpenAI On Your Data. |
-| [Azure Cosmos DB](/azure/cosmos-db/introduction) | Azure Cosmos DB's API for Postgres and vCore-based API for MongoDB offer natively integrated vector indexing; therefore, they don't require Azure AI Search. However, its other APIs do require Azure AI Search for vector indexing. Azure Cosmos DB for NoSQL's natively integrated vector database bebuts in mid-2024. |
+| [Azure Cosmos DB](/azure/cosmos-db/introduction) | Azure Cosmos DB's API for Postgres and vCore-based API for MongoDB offer natively integrated vector indexing; therefore, they don't require Azure AI Search. However, its other APIs do require Azure AI Search for vector indexing. Azure Cosmos DB for NoSQL's natively integrated vector database debuts in mid-2024. |
 |Upload files (preview) | Upload files from your local machine to be stored in an Azure Blob Storage database, and ingested into Azure AI Search. |
 |URL/Web address (preview) | Web content from the URLs is stored in Azure Blob Storage. |
 |Azure Blob Storage (preview) | Upload files from Azure Blob Storage to be ingested into an Azure AI Search index. |
@@ -493,9 +493,9 @@ As part of this RAG pipeline, there are three steps at a high-level:
 
 In total, there are two calls made to the model:
 
-* For processing the intent: The token estimate for the *intent prompt* includes those for the user question, conversation history and the instructions sent to the model for intent generation.
+* For processing the intent: The token estimate for the *intent prompt* includes those for the user question, conversation history, and the instructions sent to the model for intent generation.
 
-* For generating the response: The token estimate for the *generation prompt* includes those for the user question, conversation history, the retrieved list of document chunks, role information and the instructions sent to it for generation.
+* For generating the response: The token estimate for the *generation prompt* includes those for the user question, conversation history, the retrieved list of document chunks, role information, and the instructions sent to it for generation.
 
 The model generated output tokens (both intents and response) need to be taken into account for total token estimation. Summing up all the four columns below gives the average total tokens used for generating a response.
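The total-token estimate described in this hunk is just a sum of four components: the intent prompt, the model-generated intents, the generation prompt, and the model-generated response. A minimal sketch, where every token count is a made-up placeholder rather than a figure from the article:

```python
# Hypothetical token counts for a single user turn; all numbers are
# illustrative placeholders, not measurements from the documentation.
intent_prompt_tokens = 300       # user question + conversation history + intent instructions
intent_output_tokens = 20        # intents generated by the model
generation_prompt_tokens = 2500  # question + history + retrieved chunks + role info + instructions
response_output_tokens = 350     # final answer generated by the model

# Summing all four components gives the estimated total tokens for one response.
total_tokens = (
    intent_prompt_tokens
    + intent_output_tokens
    + generation_prompt_tokens
    + response_output_tokens
)
print(total_tokens)  # 3170
```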

@@ -577,9 +577,9 @@ Upgrade to a higher pricing tier or delete unused assets.
 
 **Preprocessing Timeout Issues**
 
-*couldn't execute skill because the Web API request failed*
+*Couldn't execute skill because the Web API request failed*
 
-*couldn't execute skill because Web API skill response is invalid*
+*Couldn't execute skill because Web API skill response is invalid*
 
 Resolution:

@@ -595,7 +595,7 @@ This means the storage account isn't accessible with the given credentials. In t
 
 ### 503 errors when sending queries with Azure AI Search
 
-Each user message can translate to multiple search queries, all of which get sent to the search resource in parallel. This can produce throttling behavior when the amount of search replicas and partitions is low. The maximum number of queries per second that a single partition and single replica can support may not be sufficient. In this case, consider increasing your replicas and partitions, or adding sleep/retry logic in your application. See the [Azure AI Search documentation](../../../search/performance-benchmarks.md) for more information.
+Each user message can translate to multiple search queries, all of which get sent to the search resource in parallel. This can produce throttling behavior when the number of search replicas and partitions is low. The maximum number of queries per second that a single partition and single replica can support may not be sufficient. In this case, consider increasing your replicas and partitions, or adding sleep/retry logic in your application. See the [Azure AI Search documentation](../../../search/performance-benchmarks.md) for more information.
 
 ## Regional availability and model support
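The sleep/retry suggestion in the 503 section above could be sketched roughly as below. `ThrottledError` and `send_query` are hypothetical stand-ins for however your application surfaces a 503 from the search service; this is not an official SDK pattern:

```python
import random
import time


class ThrottledError(Exception):
    """Hypothetical error raised when the search service returns HTTP 503."""


def query_with_retry(send_query, max_retries=5, base_delay=1.0):
    """Retry a throttled search query with exponential backoff.

    `send_query` is a caller-supplied callable that performs one search
    request and raises ThrottledError when the service responds with 503.
    """
    for attempt in range(max_retries):
        try:
            return send_query()
        except ThrottledError:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            # Exponential backoff with a little jitter so parallel
            # queries don't all retry at the same instant.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```

Backing off exponentially (rather than retrying immediately) gives a low-partition search resource time to recover instead of being hit again while still throttled.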
