Changes to `docs/README_LOCAL.md` (4 additions, 2 deletions):
```diff
@@ -6,7 +6,6 @@
 These variables are required:
 - `AZURE_OPENAI_RESOURCE`
 - `AZURE_OPENAI_MODEL`
-- `AZURE_OPENAI_KEY`
 
 These variables are optional:
 - `AZURE_OPENAI_TEMPERATURE`
```
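After this change only `AZURE_OPENAI_RESOURCE` and `AZURE_OPENAI_MODEL` remain required. A minimal sketch of validating them at startup (the `check_required` helper is hypothetical, not part of the repo):

```python
import os

# Required settings per the list above; AZURE_OPENAI_KEY is no longer required.
REQUIRED = ("AZURE_OPENAI_RESOURCE", "AZURE_OPENAI_MODEL")

def check_required(env=os.environ):
    """Raise if any required setting is missing or empty (hypothetical helper)."""
    missing = [name for name in REQUIRED if not env.get(name)]
    if missing:
        raise RuntimeError("Missing required settings: " + ", ".join(missing))

# Example values only:
check_required({"AZURE_OPENAI_RESOURCE": "my-resource",
                "AZURE_OPENAI_MODEL": "my-deployment"})
```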
```diff
@@ -174,6 +173,9 @@ Note: settings starting with `AZURE_SEARCH` are only needed when using Azure Ope
 
 | App Setting | Value | Note |
 | --- | --- | ------------- |
+|AZURE_AI_AGENT_API_VERSION|2025-01-01-preview|API version when using the Azure Foundry agent on your data.|
+|AZURE_AI_AGENT_ENDPOINT||The endpoint of the Azure AI Foundry project|
+|AZURE_AI_AGENT_MODEL_DEPLOYMENT_NAME||The name of the GPT model|
 |AZURE_SEARCH_SERVICE||The name of your Azure AI Search resource|
 |AZURE_SEARCH_INDEX||The name of your Azure AI Search index|
 |AZURE_SEARCH_KEY||An **admin key** for your Azure AI Search resource|
```
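The three new `AZURE_AI_AGENT_*` settings could be read like this, falling back to the documented default API version (sketch only; `load_agent_settings` is a hypothetical helper name):

```python
import os

# Hypothetical helper: gather the new AZURE_AI_AGENT_* app settings,
# defaulting the API version to the value shown in the table above.
def load_agent_settings(env=os.environ):
    return {
        "api_version": env.get("AZURE_AI_AGENT_API_VERSION", "2025-01-01-preview"),
        "endpoint": env.get("AZURE_AI_AGENT_ENDPOINT", ""),
        "model_deployment": env.get("AZURE_AI_AGENT_MODEL_DEPLOYMENT_NAME", ""),
    }

settings = load_agent_settings({})  # empty env -> defaults only
```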
```diff
@@ -193,7 +195,6 @@ Note: settings starting with `AZURE_SEARCH` are only needed when using Azure Ope
 |AZURE_OPENAI_MODEL||The name of your model deployment|
 |AZURE_OPENAI_ENDPOINT||The endpoint of your Azure OpenAI resource.|
 |AZURE_OPENAI_MODEL_NAME|gpt-35-turbo-16k|The name of the model|
-|AZURE_OPENAI_KEY||One of the API keys of your Azure OpenAI resource|
 |AZURE_OPENAI_TEMPERATURE|0|What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. A value of 0 is recommended when using your data.|
 |AZURE_OPENAI_TOP_P|1.0|An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. We recommend setting this to 1.0 when using your data.|
 |AZURE_OPENAI_MAX_TOKENS|1000|The maximum number of tokens allowed for the generated answer.|
```
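A sketch of reading these `AZURE_OPENAI_*` settings with the table's documented defaults (the helper name and coercion logic are assumptions, not the repo's actual code):

```python
import os

# Hypothetical helper: parse the AZURE_OPENAI_* settings, applying the
# defaults documented in the table above (temperature 0, top_p 1.0,
# max_tokens 1000, model name gpt-35-turbo-16k).
def load_openai_settings(env=os.environ):
    return {
        "model": env.get("AZURE_OPENAI_MODEL", ""),
        "endpoint": env.get("AZURE_OPENAI_ENDPOINT", ""),
        "model_name": env.get("AZURE_OPENAI_MODEL_NAME", "gpt-35-turbo-16k"),
        "temperature": float(env.get("AZURE_OPENAI_TEMPERATURE", "0")),
        "top_p": float(env.get("AZURE_OPENAI_TOP_P", "1.0")),
        "max_tokens": int(env.get("AZURE_OPENAI_MAX_TOKENS", "1000")),
    }

defaults = load_openai_settings({})  # empty env -> documented defaults
```

Coercing to `float`/`int` up front keeps a bad value (e.g. `AZURE_OPENAI_MAX_TOKENS=abc`) failing loudly at startup rather than deep inside a request.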
```diff
@@ -211,6 +212,7 @@ Note: settings starting with `AZURE_SEARCH` are only needed when using Azure Ope
 |SANITIZE_ANSWER|False|Whether to sanitize the answer from Azure OpenAI. Set to True to remove any HTML tags from the response.|
 |USE_PROMPTFLOW|False|Use an existing deployed Promptflow endpoint. If set to `True`, both `PROMPTFLOW_ENDPOINT` and `PROMPTFLOW_API_KEY` must also be set.|
+|USE_AI_FOUNDRY_SDK|False|Boolean flag to determine whether to use the AI Foundry SDK instead of the OpenAI SDK.|
 |PROMPTFLOW_ENDPOINT||URL of the deployed Promptflow endpoint, e.g. https://pf-deployment-name.region.inference.ml.azure.com/score|
 |PROMPTFLOW_API_KEY||Auth key for the deployed Promptflow endpoint. Note: only key-based authentication is supported.|
 |PROMPTFLOW_RESPONSE_TIMEOUT|120|Timeout in seconds for the Promptflow endpoint to respond.|
```
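Flags such as `USE_PROMPTFLOW` and the new `USE_AI_FOUNDRY_SDK` arrive as the strings shown in the table, so they need explicit boolean coercion. A hedged sketch (the `as_bool` helper and the example env values are hypothetical):

```python
# Sketch only: coerce "True"/"False" string settings to booleans and
# enforce the dependency documented for USE_PROMPTFLOW.
def as_bool(value):
    return str(value).strip().lower() in ("true", "1", "yes")

# Example values only, mirroring the table's documented settings:
env = {
    "USE_PROMPTFLOW": "True",
    "PROMPTFLOW_ENDPOINT": "https://pf-deployment-name.region.inference.ml.azure.com/score",
    "PROMPTFLOW_API_KEY": "example-key",
}

use_promptflow = as_bool(env.get("USE_PROMPTFLOW", "False"))
use_foundry_sdk = as_bool(env.get("USE_AI_FOUNDRY_SDK", "False"))
timeout_seconds = int(env.get("PROMPTFLOW_RESPONSE_TIMEOUT", "120"))

# Per the table: when Promptflow is enabled, its endpoint and key are required.
if use_promptflow:
    for name in ("PROMPTFLOW_ENDPOINT", "PROMPTFLOW_API_KEY"):
        if not env.get(name):
            raise RuntimeError(f"{name} is required when USE_PROMPTFLOW is True")
```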