
Commit f75c7fb

Merge pull request #470 from microsoft/dev
feat: Added Foundry SDK, FDP changes
2 parents: 741cc14 + 5fe69e8

13 files changed: +459 -891 lines changed


docs/README_LOCAL.md

Lines changed: 4 additions & 2 deletions
```diff
@@ -6,7 +6,6 @@
 These variables are required:
 - `AZURE_OPENAI_RESOURCE`
 - `AZURE_OPENAI_MODEL`
-- `AZURE_OPENAI_KEY`
 
 These variables are optional:
 - `AZURE_OPENAI_TEMPERATURE`
```
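With `AZURE_OPENAI_KEY` dropped from the required list, only the resource and model variables remain mandatory for local runs. Below is a minimal sketch of reading these settings at startup, assuming the app now authenticates without an API key; the variable names come from the list above, and the snippet is illustrative rather than the app's actual startup code.

```python
# Illustrative only: read the required and optional variables described above.
import os

azure_openai_resource = os.environ["AZURE_OPENAI_RESOURCE"]  # required
azure_openai_model = os.environ["AZURE_OPENAI_MODEL"]        # required

# AZURE_OPENAI_KEY is no longer listed as required, so key-based auth is
# assumed to be replaced by identity-based auth elsewhere in the app.
azure_openai_temperature = float(os.getenv("AZURE_OPENAI_TEMPERATURE", "0"))  # optional
```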
```diff
@@ -174,6 +173,9 @@ Note: settings starting with `AZURE_SEARCH` are only needed when using Azure Ope
 
 | App Setting | Value | Note |
 | --- | --- | ------------- |
+|AZURE_AI_AGENT_API_VERSION|2025-01-01-preview|API version to use with the Azure AI Foundry agent on your data.|
+|AZURE_AI_AGENT_ENDPOINT||The endpoint of the Azure AI Foundry project|
+|AZURE_AI_AGENT_MODEL_DEPLOYMENT_NAME||The name of the GPT model deployment|
 |AZURE_SEARCH_SERVICE||The name of your Azure AI Search resource|
 |AZURE_SEARCH_INDEX||The name of your Azure AI Search Index|
 |AZURE_SEARCH_KEY||An **admin key** for your Azure AI Search resource|
```
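The three new `AZURE_AI_AGENT_*` settings configure the Azure AI Foundry agent path. A minimal sketch of how they might be grouped when the app starts; only the setting names and the default API version come from the table above, and the `AgentSettings` container is hypothetical.

```python
# Hypothetical grouping of the new agent settings; names match the table above.
import os
from dataclasses import dataclass

@dataclass
class AgentSettings:
    endpoint: str               # AZURE_AI_AGENT_ENDPOINT
    model_deployment_name: str  # AZURE_AI_AGENT_MODEL_DEPLOYMENT_NAME
    api_version: str            # AZURE_AI_AGENT_API_VERSION

agent_settings = AgentSettings(
    endpoint=os.environ["AZURE_AI_AGENT_ENDPOINT"],
    model_deployment_name=os.environ["AZURE_AI_AGENT_MODEL_DEPLOYMENT_NAME"],
    api_version=os.getenv("AZURE_AI_AGENT_API_VERSION", "2025-01-01-preview"),
)
```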
```diff
@@ -193,7 +195,6 @@ Note: settings starting with `AZURE_SEARCH` are only needed when using Azure Ope
 |AZURE_OPENAI_MODEL||The name of your model deployment|
 |AZURE_OPENAI_ENDPOINT||The endpoint of your Azure OpenAI resource.|
 |AZURE_OPENAI_MODEL_NAME|gpt-35-turbo-16k|The name of the model|
-|AZURE_OPENAI_KEY||One of the API keys of your Azure OpenAI resource|
 |AZURE_OPENAI_TEMPERATURE|0|What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. A value of 0 is recommended when using your data.|
 |AZURE_OPENAI_TOP_P|1.0|An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. We recommend setting this to 1.0 when using your data.|
 |AZURE_OPENAI_MAX_TOKENS|1000|The maximum number of tokens allowed for the generated answer.|
```
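This hunk removes `AZURE_OPENAI_KEY` from the settings table. A common keyless pattern for Azure OpenAI, shown here only as an assumption about what replaces the key rather than this app's exact code, exchanges the signed-in Azure identity for a bearer token:

```python
# Keyless Azure OpenAI client sketch; the API version below is a placeholder.
import os
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    azure_ad_token_provider=token_provider,
    api_version="2025-01-01-preview",  # placeholder; use the version your deployment supports
)
```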
```diff
@@ -211,6 +212,7 @@ Note: settings starting with `AZURE_SEARCH` are only needed when using Azure Ope
 |UI_SHOW_SHARE_BUTTON|True|Share button (right-top)
 |SANITIZE_ANSWER|False|Whether to sanitize the answer from Azure OpenAI. Set to True to remove any HTML tags from the response.|
 |USE_PROMPTFLOW|False|Use existing Promptflow deployed endpoint. If set to `True` then both `PROMPTFLOW_ENDPOINT` and `PROMPTFLOW_API_KEY` also need to be set.|
+|USE_AI_FOUNDRY_SDK|False|Boolean flag to determine whether to use the AI Foundry SDK instead of the OpenAI SDK.|
 |PROMPTFLOW_ENDPOINT||URL of the deployed Promptflow endpoint e.g. https://pf-deployment-name.region.inference.ml.azure.com/score|
 |PROMPTFLOW_API_KEY||Auth key for deployed Promptflow endpoint. Note: only Key-based authentication is supported.|
 |PROMPTFLOW_RESPONSE_TIMEOUT|120|Timeout value in seconds for the Promptflow endpoint to respond.|
```
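`USE_AI_FOUNDRY_SDK` is a plain boolean app setting. The sketch below shows how such a flag is typically parsed and used to pick a client path; only the setting name comes from the table above, and the `env_flag` helper and branch bodies are hypothetical.

```python
# Hypothetical flag parsing; only the setting name comes from the table above.
import os

def env_flag(name: str, default: str = "False") -> bool:
    """Interpret common truthy strings from an app setting."""
    return os.getenv(name, default).strip().lower() in ("true", "1", "yes")

if env_flag("USE_AI_FOUNDRY_SDK"):
    ...  # route requests through the AI Foundry SDK
else:
    ...  # fall back to the OpenAI SDK path
```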

infra/abbreviations.json

Lines changed: 2 additions & 2 deletions
```diff
@@ -18,8 +18,8 @@
     "languageService": "lang-",
     "speechService": "spch-",
     "translator": "trsl-",
-    "aiHub": "aih-",
-    "aiHubProject": "aihp-"
+    "aiFoundry": "aif-",
+    "aiFoundryProject": "aifp-"
   },
   "analytics": {
     "analysisServicesServer": "as",
```
