
Commit b2d8788

Adding anchors (#638)

1 parent 444ea09 commit b2d8788

1 file changed: +7 -7 lines changed

README.md

Lines changed: 7 additions & 7 deletions
@@ -295,19 +295,19 @@ The resource group and all the resources will be deleted.
 
 ### FAQ
 
-<details>
+<details><a id="ingestion-why-chunk"></a>
 <summary>Why do we need to break up the PDFs into chunks when Azure Cognitive Search supports searching large documents?</summary>
 
 Chunking allows us to limit the amount of information we send to OpenAI due to token limits. By breaking up the content, we can easily find potential chunks of text to inject into OpenAI. The chunking method we use leverages a sliding window of text, so that sentences that end one chunk start the next. This reduces the chance of losing the context of the text.
 </details>
 
-<details>
+<details><a id="ingestion-more-pdfs"></a>
 <summary>How can we upload additional PDFs without redeploying everything?</summary>
 
 To upload more PDFs, put them in the data/ folder and run `./scripts/prepdocs.sh` or `./scripts/prepdocs.ps1`. To avoid re-uploading existing docs, move them out of the data folder. You could also implement checks to see what's been uploaded before; our code doesn't yet have such checks.
 </details>
 
-<details>
+<details><a id="compare-samples"></a>
 <summary>How does this sample compare to other Chat with Your Data samples?</summary>
 
 Another popular repository for this use case is here:
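The sliding-window chunking described in the first FAQ entry above can be sketched as follows. This is a minimal illustration only: the character-based split and the sizes are hypothetical, and the repo's actual ingestion logic (in its prepdocs script) works on parsed page text rather than raw strings.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split text into overlapping chunks.

    Hypothetical sketch of a sliding window: the tail of each chunk
    is repeated at the head of the next, so context spanning a chunk
    boundary is not lost. `overlap` must be smaller than `chunk_size`
    or the window would never advance.
    """
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        # Slide the window back by `overlap` so the next chunk
        # starts before the current one ends.
        start = end - overlap
    return chunks
```

With these parameters, a 1200-character document yields three chunks, and the first 100 characters of each chunk repeat the last 100 characters of the previous one.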
@@ -322,13 +322,13 @@ The primary differences:
 
 </details>
 
-<details>
+<details><a id="switch-gpt4"></a>
 <summary>How do you use GPT-4 with this sample?</summary>
 
 In `infra/main.bicep`, change `chatGptModelName` to 'gpt-4' instead of 'gpt-35-turbo'. You may also need to adjust the capacity above that line, depending on how much TPM your account is allowed.
 </details>
 
-<details>
+<details><a id="chat-ask-diff"></a>
 <summary>What is the difference between the Chat and Ask tabs?</summary>
 
 The chat tab uses the approach programmed in [chatreadretrieveread.py](https://github.com/Azure-Samples/azure-search-openai-demo/blob/main/app/backend/approaches/chatreadretrieveread.py).
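The GPT-4 change described in the FAQ entry above would look roughly like this in `infra/main.bicep`. This is a sketch, not the file's exact contents: the parameter names come from the FAQ text, but the declarations and the capacity value shown here are assumptions.

```bicep
// Hypothetical fragment of infra/main.bicep after the FAQ's suggested edit:
param chatGptDeploymentCapacity int = 30      // may need lowering depending on your TPM quota
param chatGptModelName string = 'gpt-4'       // was 'gpt-35-turbo'
```

Re-run provisioning after the change so the model deployment is updated.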
@@ -345,7 +345,7 @@ The ask tab uses the approach programmed in [retrievethenread.py](https://github
 There are also two other /ask approaches that take a slightly different tack, but they aren't currently working due to [langchain compatibility issues](https://github.com/Azure-Samples/azure-search-openai-demo/issues/541).
 </details>
 
-<details>
+<details><a id="azd-up-explanation"></a>
 <summary>What does the `azd up` command do?</summary>
 
 The `azd up` command comes from the [Azure Developer CLI](https://learn.microsoft.com/en-us/azure/developer/azure-developer-cli/overview), and takes care of both provisioning the Azure resources and deploying code to the selected Azure hosts.
@@ -359,7 +359,7 @@ Finally, it looks at `azure.yaml` to determine the Azure host (appservice, in th
 Related commands are `azd provision` for just provisioning (if infra files change) and `azd deploy` for just deploying updated app code.
 </details>
 
-<details>
+<details><a id="appservice-logs"></a>
 <summary>How can we view logs from the App Service app?</summary>
 
 You can view production logs in the Portal using either the Log stream or by downloading the default_docker.log file from Advanced tools.
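The anchors this commit adds give each FAQ entry a stable URL fragment, so other docs and issues can deep-link to a single answer. For example, using the `ingestion-why-chunk` id added in the first hunk (the link text here is illustrative):

```markdown
<!-- Deep link to one FAQ entry via the anchor id added in this commit -->
See the [FAQ entry on chunking](#ingestion-why-chunk) for why PDFs are split.
```

The same fragment works in full URLs, e.g. appending `#ingestion-why-chunk` to the README's GitHub URL.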

0 commit comments