README.md (+7 −7 lines changed)
@@ -295,19 +295,19 @@ The resource group and all the resources will be deleted.
### FAQ

-<details>
+<details><a id="ingestion-why-chunk"></a>
<summary>Why do we need to break up the PDFs into chunks when Azure Cognitive Search supports searching large documents?</summary>

Chunking lets us stay within OpenAI's token limits by controlling how much text we send in each request. Breaking up the content also makes it easier to find the chunks most relevant to a question and inject just those into the prompt. Our chunking method uses a sliding window of text, so sentences that end one chunk also start the next, which reduces the chance of losing context at chunk boundaries. (A simplified sketch of this sliding window follows this entry.)
</details>
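(Not from the repo: the following is a minimal, character-based sketch of a sliding-window chunker, assuming plain-text input; the chunk size and overlap values are illustrative, not the settings used by the actual prepdocs script.)

```python
# Minimal sliding-window chunking sketch (illustrative values only).
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        # Slide forward but keep `overlap` characters, so the text that
        # ends one chunk also begins the next.
        start = end - overlap
    return chunks


if __name__ == "__main__":
    sample = "First sentence. Second sentence. Third sentence. " * 100
    print(f"{len(chunk_text(sample))} chunks produced")
```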
-<details>
+<details><a id="ingestion-more-pdfs"></a>
<summary>How can we upload additional PDFs without redeploying everything?</summary>

To upload more PDFs, put them in the data/ folder and run `./scripts/prepdocs.sh` or `./scripts/prepdocs.ps1`. To avoid re-uploading existing docs, move them out of the data/ folder first. You could also implement a check for what has already been uploaded; our code doesn't yet include such a check (a rough sketch of one follows this entry).
</details>
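(Not part of the repo: a hedged sketch of one way such a check could look, comparing local file names against blobs already in a storage container. The account URL and container name are placeholders, and the real ingestion pipeline may store per-page or chunked blobs, so a plain name match is only a starting point.)

```python
# Illustrative only: list which local PDFs do not yet exist as blobs.
# Assumes azure-identity and azure-storage-blob are installed and that
# you are signed in (e.g. via `az login`); names below are placeholders.
from pathlib import Path

from azure.identity import DefaultAzureCredential
from azure.storage.blob import ContainerClient

container = ContainerClient(
    account_url="https://<your-storage-account>.blob.core.windows.net",
    container_name="content",
    credential=DefaultAzureCredential(),
)
already_uploaded = {blob.name for blob in container.list_blobs()}

for pdf in Path("data").glob("*.pdf"):
    status = "skip (already uploaded)" if pdf.name in already_uploaded else "needs upload"
    print(f"{pdf.name}: {status}")
```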
-<details>
+<details><a id="compare-samples"></a>
<summary>How does this sample compare to other Chat with Your Data samples?</summary>

Another popular repository for this use case is here:
@@ -322,13 +322,13 @@ The primary differences:
</details>

-<details>
+<details><a id="switch-gpt4"></a>
<summary>How do you use GPT-4 with this sample?</summary>

In `infra/main.bicep`, change `chatGptModelName` to 'gpt-4' instead of 'gpt-35-turbo'. You may also need to adjust the capacity set just above that line, depending on how much TPM (tokens per minute) your account is allowed.
</details>
-<details>
+<details><a id="chat-ask-diff"></a>
<summary>What is the difference between the Chat and Ask tabs?</summary>

The chat tab uses the approach programmed in [chatreadretrieveread.py](https://github.com/Azure-Samples/azure-search-openai-demo/blob/main/app/backend/approaches/chatreadretrieveread.py).
@@ -345,7 +345,7 @@ The ask tab uses the approach programmed in [retrievethenread.py](https://github
There are also two other /ask approaches that take a slightly different tack, but they aren't currently working due to [langchain compatibility issues](https://github.com/Azure-Samples/azure-search-openai-demo/issues/541). (A simplified outline of the two flows follows this entry.)
</details>
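(A rough, hypothetical outline of the difference, not the actual code in either file: the ask flow searches with the user's question and answers in a single completion, while the chat flow first asks the model to turn the conversation into a standalone search query. `search` and `complete` below are placeholder stubs.)

```python
# Hypothetical outline only; search() and complete() are stand-ins for the
# Azure Cognitive Search and OpenAI calls, not the repo's implementation.
def search(query: str) -> str:
    return f"[top documents matching: {query}]"

def complete(prompt: str) -> str:
    return f"[model answer for prompt of {len(prompt)} chars]"

def ask(question: str) -> str:
    # Retrieve-then-read: search directly with the question, then answer once.
    sources = search(question)
    return complete(f"Answer using only these sources:\n{sources}\n\nQuestion: {question}")

def chat(history: list[str], question: str) -> str:
    # Chat flow: rewrite the conversation into a search query first,
    # then retrieve and answer grounded on the results.
    query = complete("Rewrite as a search query:\n" + "\n".join(history) + "\n" + question)
    sources = search(query)
    return complete(f"Answer using only these sources:\n{sources}\n\nQuestion: {question}")
```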
-<details>
+<details><a id="azd-up-explanation"></a>
<summary>What does the `azd up` command do?</summary>

The `azd up` command comes from the [Azure Developer CLI](https://learn.microsoft.com/en-us/azure/developer/azure-developer-cli/overview), and takes care of both provisioning the Azure resources and deploying code to the selected Azure hosts.
@@ -359,7 +359,7 @@ Finally, it looks at `azure.yaml` to determine the Azure host (appservice, in th
Related commands are `azd provision` for just provisioning (if infra files change) and `azd deploy` for just deploying updated app code.
</details>
-<details>
+<details><a id="appservice-logs"></a>
<summary>How can we view logs from the App Service app?</summary>

You can view production logs in the Portal, either with the Log stream or by downloading the `default_docker.log` file from Advanced tools.