
Commit 1a8a1ce

Update readme FAQs (#417)

* Update readme FAQs
* Update readme
* README summary details

1 parent 755188c, commit 1a8a1ce

File tree

1 file changed: +21 −8 lines changed

README.md

Lines changed: 21 additions & 8 deletions
```diff
@@ -69,7 +69,7 @@ It will look like the following:

 > NOTE: It may take a minute for the application to be fully deployed. If you see a "Python Developer" welcome screen, then wait a minute and refresh the page.

-#### Use existing resources
+#### Using existing resources

 1. Run `azd env set AZURE_OPENAI_SERVICE {Name of existing OpenAI service}`
 1. Run `azd env set AZURE_OPENAI_RESOURCE_GROUP {Name of existing resource group that OpenAI service is provisioned to}`
```
```diff
@@ -79,9 +79,15 @@ It will look like the following:

 > NOTE: You can also use existing Search and Storage Accounts. See `./infra/main.parameters.json` for the list of environment variables to pass to `azd env set` to configure those existing resources.

-#### Deploying or re-deploying a local clone of the repo
+#### Deploying again

-* Simply run `azd up`
+If you've only changed the backend/frontend code in the `app` folder, then you don't need to re-provision the Azure resources. You can just run:
+
+```azd deploy```
+
+If you've changed the infrastructure files (`infra` folder or `azure.yaml`), then you'll need to re-provision the Azure resources. You can do that by running:
+
+```azd up```

 #### Running locally
```
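The new text above distinguishes `azd deploy` (app code only) from `azd up` (infrastructure changed). As a quick illustration of that rule, here is a hypothetical Python helper; the function name and the path-based check are our own, not part of the repo:

```python
from pathlib import PurePosixPath

def pick_azd_command(changed_paths):
    """Illustrative helper (not part of the repo): given repo-relative paths
    that changed since the last deployment, apply the README's rule --
    changes under infra/ or to azure.yaml need `azd up` (re-provision),
    anything else only needs `azd deploy`."""
    for path in changed_paths:
        parts = PurePosixPath(path).parts
        # `infra/...` or the top-level `azure.yaml` means infrastructure changed.
        if parts[:1] == ("infra",) or parts == ("azure.yaml",):
            return "azd up"
    return "azd deploy"
```

For example, `pick_azd_command(["app/backend/app.py"])` returns `"azd deploy"`, while `pick_azd_command(["infra/main.bicep"])` returns `"azd up"`.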
```diff
@@ -121,12 +127,19 @@ Once in the web app:

 ### FAQ

-***Question***: Why do we need to break up the PDFs into chunks when Azure Cognitive Search supports searching large documents?
+<details>
+<summary>Why do we need to break up the PDFs into chunks when Azure Cognitive Search supports searching large documents?</summary>

-***Answer***: Chunking allows us to limit the amount of information we send to OpenAI due to token limits. By breaking up the content, it allows us to easily find potential chunks of text that we can inject into OpenAI. The method of chunking we use leverages a sliding window of text such that sentences that end one chunk will start the next. This allows us to reduce the chance of losing the context of the text.
+Chunking allows us to limit the amount of information we send to OpenAI due to token limits. By breaking up the content, we can more easily find the potential chunks of text to inject into OpenAI. The method of chunking we use leverages a sliding window of text, such that sentences that end one chunk will start the next; this reduces the chance of losing the context of the text.
+</details>
```
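The sliding-window chunking that the new answer describes can be sketched as follows. This is an illustrative sketch only: the naive sentence splitter, the character budget, and the overlap size are assumptions, not the repo's actual prepdocs implementation.

```python
import re

def chunk_text(text, max_chars=500, overlap_sentences=1):
    """Split text into chunks of roughly max_chars using a sliding window:
    the last sentence(s) of one chunk also start the next, reducing the
    chance of losing context at chunk boundaries.
    Illustrative sketch -- not the repo's actual prepdocs logic."""
    # Naive sentence splitter; real code would use a proper tokenizer.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    chunks, window, fresh = [], [], 0
    for sentence in sentences:
        window.append(sentence)
        fresh += 1
        if sum(len(s) + 1 for s in window) >= max_chars:
            chunks.append(" ".join(window))
            window = window[-overlap_sentences:]  # slide: carry the tail over
            fresh = 0
    if fresh:  # flush sentences not yet emitted in any chunk
        chunks.append(" ".join(window))
    return chunks
```

With `overlap_sentences=1`, each chunk after the first begins with the final sentence of the previous chunk, which is the overlap behavior the answer describes.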

```diff
-### Troubleshooting
+<details>
+<summary>How can we upload additional PDFs without redeploying everything?</summary>
+
+To upload more PDFs, put them in the `data/` folder and run `./scripts/prepdocs.sh` or `./scripts/prepdocs.ps1`. To avoid re-uploading existing docs, move them out of the `data/` folder. You could also implement checks to see what's been uploaded before; our code doesn't yet have such checks.
+</details>
```
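The new answer notes that the prepdocs scripts don't yet check what has already been uploaded. One hypothetical way to add such a check is a manifest of content hashes; the manifest file, function name, and JSON format below are our own invention, not part of the repo:

```python
import hashlib
import json
from pathlib import Path

def files_to_process(data_dir, manifest_path):
    """Return the PDFs in data_dir whose content hash is not yet recorded
    in the manifest, plus a record() callback to persist the hashes once
    the files have been processed. Hypothetical sketch -- the repo's
    prepdocs scripts do not currently have such a check."""
    manifest = Path(manifest_path)
    seen = set(json.loads(manifest.read_text())) if manifest.exists() else set()
    pending = []
    for pdf in sorted(Path(data_dir).glob("*.pdf")):
        digest = hashlib.sha256(pdf.read_bytes()).hexdigest()
        if digest not in seen:
            pending.append(pdf)
            seen.add(digest)

    def record():
        # Persist the updated hash set after a successful upload run.
        manifest.write_text(json.dumps(sorted(seen)))

    return pending, record
```

Hashing file contents (rather than tracking filenames) means a renamed copy of an already-uploaded PDF is still skipped.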

```diff
-If you see this error while running `azd deploy`: `read /tmp/azd1992237260/backend_env/lib64: is a directory`, then delete the `./app/backend/backend_env` folder and re-run the `azd deploy` command. This issue is being tracked here: <https://github.com/Azure/azure-dev/issues/1237>
+### Troubleshooting

-If the web app fails to deploy and you receive a '404 Not Found' message in your browser, run `azd deploy`.
+If the web app fails to deploy and you receive a '404 Not Found' message in your browser, run `azd deploy`. If you still encounter errors with the deployed app, consult these [tips for debugging Flask app deployments](http://blog.pamelafox.org/2023/06/tips-for-debugging-flask-deployments-to.html) and file an issue if the error logs don't help you resolve the issue.
```
