Merged · Changes from all commits · 19 commits
4 changes: 2 additions & 2 deletions .github/workflows/CAdeploy.yml
@@ -10,8 +10,8 @@ on:
- cron: '0 6,18 * * *' # Runs at 6:00 AM and 6:00 PM GMT

env:
GPT_MIN_CAPACITY: 250
TEXT_EMBEDDING_MIN_CAPACITY: 40
GPT_MIN_CAPACITY: 200
TEXT_EMBEDDING_MIN_CAPACITY: 80
BRANCH_NAME: ${{ github.head_ref || github.ref_name }}

jobs:
3 changes: 2 additions & 1 deletion docs/CustomizingAzdParameters.md
@@ -19,9 +19,10 @@ By default this template will use the environment name as the prefix to prevent
| `AZURE_ENV_EMBEDDING_MODEL_CAPACITY` | integer | `80` | Set the capacity for embedding model deployment. |
| `AZURE_ENV_IMAGETAG` | string | `latest` | Set the image tag (allowed values: `latest`, `dev`, `hotfix`). |
| `AZURE_LOCATION` | string | `japaneast` | Sets the Azure region for resource deployment. |
| `AZURE_ENV_LOG_ANALYTICS_WORKSPACE_ID` | string | `<Existing Workspace Id>` | Reuses an existing Log Analytics Workspace instead of provisioning a new one. |
| `AZURE_ENV_LOG_ANALYTICS_WORKSPACE_ID` | string | See the [Existing Workspace ID guide](/docs/re-use-log-analytics.md) | Reuses an existing Log Analytics Workspace instead of provisioning a new one. |
| `AZURE_EXISTING_AI_PROJECT_RESOURCE_ID` | string | `<Existing AI Foundry Project Resource Id>` | Reuses an existing AI Foundry Project Resource Id instead of provisioning a new one. |


## How to Set a Parameter
To customize any of the above values, run the following command **before** `azd up`:

8 changes: 8 additions & 0 deletions docs/DeploymentGuide.md
@@ -137,6 +137,14 @@ To adjust quota settings, follow these [steps](./AzureGPTQuotaSettings.md).

</details>

<details>

<summary><b>Reusing an Existing Log Analytics Workspace</b></summary>

Follow this guide to get your [Existing Workspace ID](/docs/re-use-log-analytics.md).

</details>

### Deploying with AZD

Once you've opened the project in [Codespaces](#github-codespaces), [Dev Containers](#vs-code-dev-containers), or [locally](#local-environment), you can deploy it to Azure by following these steps:
8 changes: 4 additions & 4 deletions docs/QuotaCheck.md
@@ -11,7 +11,7 @@ azd auth login

### 📌 Default Models & Capacities:
```
gpt-4o-mini:30, text-embedding-ada-002:80
gpt-4o-mini:200, text-embedding-ada-002:80
```
### 📌 Default Regions:
```
@@ -37,19 +37,19 @@ eastus, uksouth, eastus2, northcentralus, swedencentral, westus, westus2, southc
```
✔️ Check specific model(s) in default regions:
```
./quota_check_params.sh --models gpt-4o-mini:30,text-embedding-ada-002:80
./quota_check_params.sh --models gpt-4o-mini:200,text-embedding-ada-002:80
```
✔️ Check default models in specific region(s):
```
./quota_check_params.sh --regions eastus,westus
```
✔️ Passing Both models and regions:
```
./quota_check_params.sh --models gpt-4o-mini:30 --regions eastus,westus2
./quota_check_params.sh --models gpt-4o-mini:200 --regions eastus,westus2
```
✔️ All parameters combined:
```
./quota_check_params.sh --models gpt-4o-mini:30,text-embedding-ada-002:80 --regions eastus,westus --verbose
./quota_check_params.sh --models gpt-4o-mini:200,text-embedding-ada-002:80 --regions eastus,westus --verbose
```

### **Sample Output**
Binary file added docs/images/re_use_log/logAnalytics.png
Binary file added docs/images/re_use_log/logAnalyticsJson.png
Binary file added docs/images/re_use_log/logAnalyticsList.png
31 changes: 31 additions & 0 deletions docs/re-use-log-analytics.md
@@ -0,0 +1,31 @@
[← Back to *DEPLOYMENT* guide](/docs/DeploymentGuide.md#deployment-options--steps)

# Reusing an Existing Log Analytics Workspace
To configure your environment to use an existing Log Analytics Workspace, follow these steps:
---
### 1. Go to Azure Portal
Go to https://portal.azure.com

### 2. Search for Log Analytics
In the search bar at the top, type "Log Analytics workspaces", open it, and select the workspace you want to reuse.

![List of Log Analytics workspaces](../docs/images/re_use_log/logAnalyticsList.png)

### 3. Copy the Resource ID
In the Overview pane, click **JSON View**.

![Workspace Overview pane with JSON View link](../docs/images/re_use_log/logAnalytics.png)

Copy the **Resource ID**; this is your Workspace ID.

![JSON view showing the Resource ID](../docs/images/re_use_log/logAnalyticsJson.png)

### 4. Set the Workspace ID in Your Environment
Run the following command in your terminal:
```bash
azd env set AZURE_ENV_LOG_ANALYTICS_WORKSPACE_ID '<Existing Log Analytics Workspace Id>'
```
Replace `<Existing Log Analytics Workspace Id>` with the value obtained from Step 3.
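Before setting it, you can sanity-check that the value you copied is a full Azure resource ID. A minimal sketch, assuming bash; the ID below is a placeholder, not a real workspace:

```shell
# Placeholder value; substitute the Resource ID copied in Step 3.
workspace_id="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg/providers/Microsoft.OperationalInsights/workspaces/my-workspace"

# A Log Analytics workspace resource ID always contains this provider segment.
if [[ "$workspace_id" == /subscriptions/*/providers/Microsoft.OperationalInsights/workspaces/* ]]; then
  echo "Workspace ID format looks correct."
else
  echo "This does not look like a Log Analytics workspace resource ID." >&2
fi
```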

### 5. Continue Deployment
Proceed with the next steps in the [deployment guide](/docs/DeploymentGuide.md#deployment-options--steps).
2 changes: 1 addition & 1 deletion infra/deploy_ai_foundry.bicep
@@ -37,7 +37,7 @@ var aiModelDeployments = [
name: embeddingModel
model: embeddingModel
sku: {
name: 'Standard'
name: 'GlobalStandard'
capacity: embeddingDeploymentCapacity
}
raiPolicyName: 'Microsoft.Default'
2 changes: 1 addition & 1 deletion infra/main.bicep
@@ -63,7 +63,7 @@ param imageTag string = 'latest'
type: 'location'
usageName: [
'OpenAI.GlobalStandard.gpt-4o-mini,200'
'OpenAI.Standard.text-embedding-ada-002,80'
'OpenAI.GlobalStandard.text-embedding-ada-002,80'
]
}
})
8 changes: 4 additions & 4 deletions infra/main.json
@@ -5,7 +5,7 @@
"_generator": {
"name": "bicep",
"version": "0.36.177.2456",
"templateHash": "1860841716622591379"
"templateHash": "2238194529646818649"
}
},
"parameters": {
@@ -114,7 +114,7 @@
"type": "location",
"usageName": [
"OpenAI.GlobalStandard.gpt-4o-mini,200",
"OpenAI.Standard.text-embedding-ada-002,80"
"OpenAI.GlobalStandard.text-embedding-ada-002,80"
]
},
"description": "Location for AI Foundry deployment. This is the location where the AI Foundry resources will be deployed."
@@ -744,7 +744,7 @@
"_generator": {
"name": "bicep",
"version": "0.36.177.2456",
"templateHash": "15524709584492116446"
"templateHash": "1124249040831466979"
}
},
"parameters": {
@@ -1038,7 +1038,7 @@
"name": "[parameters('embeddingModel')]",
"model": "[parameters('embeddingModel')]",
"sku": {
"name": "Standard",
"name": "GlobalStandard",
"capacity": "[parameters('embeddingDeploymentCapacity')]"
},
"raiPolicyName": "Microsoft.Default"
2 changes: 1 addition & 1 deletion infra/scripts/checkquota.sh
@@ -33,7 +33,7 @@ echo "✅ Azure subscription set successfully."
# Define models and their minimum required capacities
declare -A MIN_CAPACITY=(
["OpenAI.GlobalStandard.gpt-4o-mini"]=$GPT_MIN_CAPACITY
["OpenAI.Standard.text-embedding-ada-002"]=$TEXT_EMBEDDING_MIN_CAPACITY
["OpenAI.GlobalStandard.text-embedding-ada-002"]=$TEXT_EMBEDDING_MIN_CAPACITY
)

VALID_REGION=""
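The associative array in this hunk maps model identifiers to minimum required capacities. A standalone sketch of the same idiom, with the workflow defaults inlined (the real script reads them from `GPT_MIN_CAPACITY` and `TEXT_EMBEDDING_MIN_CAPACITY`):

```shell
#!/usr/bin/env bash
# Map each model deployment type to its minimum required capacity.
declare -A MIN_CAPACITY=(
  ["OpenAI.GlobalStandard.gpt-4o-mini"]=200
  ["OpenAI.GlobalStandard.text-embedding-ada-002"]=80
)

# Iterate the map, e.g. to compare against quota returned per region.
for model in "${!MIN_CAPACITY[@]}"; do
  echo "$model requires at least ${MIN_CAPACITY[$model]} capacity"
done
```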
28 changes: 18 additions & 10 deletions infra/scripts/copy_kb_files.sh
@@ -123,46 +123,54 @@ echo "Uploading files to Azure Blob Storage"
# Using az storage blob upload-batch to upload files with managed identity authentication, as the az storage fs directory upload command is not working with managed identity authentication.
az storage blob upload-batch --account-name "$storageAccount" --destination data/"$extractedFolder1" --source $extractionPath1 --auth-mode login --pattern '*' --overwrite --output none
if [ $? -ne 0 ]; then
retries=3
maxRetries=5
retries=$maxRetries
sleepTime=10
echo "Error: Failed to upload files to Azure Blob Storage. Retrying upload...($((4 - retries)) of 3)"
attempt=1
while [ $retries -gt 0 ]; do
echo "Error: Failed to upload files to Azure Blob Storage. Retrying upload...$attempt of $maxRetries in $sleepTime seconds"
sleep $sleepTime
az storage blob upload-batch --account-name "$storageAccount" --destination data/"$extractedFolder1" --source $extractionPath1 --auth-mode login --pattern '*' --overwrite --output none
if [ $? -eq 0 ]; then
echo "Files uploaded successfully to Azure Blob Storage."
break
else
((retries--))
echo "Retrying upload... ($((4 - retries)) of 3)"
((attempt++))
sleepTime=$((sleepTime * 2))
sleep $sleepTime
fi
done
exit 1
if [ $retries -eq 0 ]; then
echo "Error: Failed to upload files after all retry attempts."
exit 1
fi
else
echo "Files uploaded successfully to Azure Blob Storage."
fi

az storage blob upload-batch --account-name "$storageAccount" --destination data/"$extractedFolder2" --source $extractionPath2 --auth-mode login --pattern '*' --overwrite --output none
if [ $? -ne 0 ]; then
retries=3
maxRetries=5
retries=$maxRetries
attempt=1
sleepTime=10
echo "Error: Failed to upload files to Azure Blob Storage. Retrying upload...($((4 - retries)) of 3)"
while [ $retries -gt 0 ]; do
echo "Error: Failed to upload files to Azure Blob Storage. Retrying upload...$attempt of $maxRetries in $sleepTime seconds"
sleep $sleepTime
az storage blob upload-batch --account-name "$storageAccount" --destination data/"$extractedFolder2" --source $extractionPath2 --auth-mode login --pattern '*' --overwrite --output none
if [ $? -eq 0 ]; then
echo "Files uploaded successfully to Azure Blob Storage."
break
else
((retries--))
echo "Retrying upload... ($((4 - retries)) of 3)"
((attempt++))
sleepTime=$((sleepTime * 2))
sleep $sleepTime
fi
done
exit 1
if [ $retries -eq 0 ]; then
echo "Error: Failed to upload files after all retry attempts."
exit 1
fi
else
echo "Files uploaded successfully to Azure Blob Storage."
fi
Review comment from Copilot AI (Jul 18, 2025) on lines 124 to 176: The retry logic duplicates the same code block twice with only minor variations. Consider extracting this into a reusable function to reduce code duplication.
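As the review comment notes, both upload blocks repeat the same retry logic. One possible shape for a shared helper, as a sketch: `upload_with_retry` is a hypothetical name, and `RETRY_SLEEP` is an assumed override hook, not part of the script:

```shell
#!/usr/bin/env bash
# Sketch: run a command, retrying with exponential backoff on failure.
upload_with_retry() {
  local maxRetries=5
  local sleepTime="${RETRY_SLEEP:-10}"   # initial backoff; overridable for tests
  local attempt=1
  "$@" && return 0                        # first attempt, no delay
  while [ "$attempt" -le "$maxRetries" ]; do
    echo "Upload failed. Retrying ($attempt of $maxRetries) in $sleepTime seconds..."
    sleep "$sleepTime"
    if "$@"; then
      echo "Files uploaded successfully to Azure Blob Storage."
      return 0
    fi
    attempt=$((attempt + 1))
    sleepTime=$((sleepTime * 2))          # double the delay each round
  done
  echo "Error: Failed to upload files after all retry attempts." >&2
  return 1
}

# Illustrative usage: both upload-batch calls could share the one helper.
# upload_with_retry az storage blob upload-batch --account-name "$storageAccount" \
#   --destination data/"$extractedFolder1" --source "$extractionPath1" \
#   --auth-mode login --pattern '*' --overwrite --output none
```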
8 changes: 2 additions & 6 deletions infra/scripts/quota_check_params.sh
@@ -47,7 +47,7 @@ log_verbose() {
}

# Default Models and Capacities (Comma-separated in "model:capacity" format)
DEFAULT_MODEL_CAPACITY="gpt-4o-mini:30,text-embedding-ada-002:80"
DEFAULT_MODEL_CAPACITY="gpt-4o-mini:200,text-embedding-ada-002:80"

# Convert the comma-separated string into an array
IFS=',' read -r -a MODEL_CAPACITY_PAIRS <<< "$DEFAULT_MODEL_CAPACITY"
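The `IFS` line above splits the comma-separated defaults into an array of `model:capacity` pairs; each pair is then split on the colon. A standalone sketch of that parsing idiom:

```shell
#!/usr/bin/env bash
DEFAULT_MODEL_CAPACITY="gpt-4o-mini:200,text-embedding-ada-002:80"

# Split the comma-separated string into an array of "model:capacity" pairs.
IFS=',' read -r -a MODEL_CAPACITY_PAIRS <<< "$DEFAULT_MODEL_CAPACITY"

for pair in "${MODEL_CAPACITY_PAIRS[@]}"; do
  model="${pair%%:*}"      # text before the first ':'
  capacity="${pair##*:}"   # text after the last ':'
  echo "model=$model capacity=$capacity"
done
```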
@@ -165,11 +165,7 @@ for REGION in "${REGIONS[@]}"; do
FOUND=false
INSUFFICIENT_QUOTA=false

if [ "$MODEL_NAME" = "text-embedding-ada-002" ]; then
MODEL_TYPES=("openai.standard.$MODEL_NAME")
else
MODEL_TYPES=("openai.standard.$MODEL_NAME" "openai.globalstandard.$MODEL_NAME")
fi
MODEL_TYPES=("openai.standard.$MODEL_NAME" "openai.globalstandard.$MODEL_NAME")

for MODEL_TYPE in "${MODEL_TYPES[@]}"; do
FOUND=false
46 changes: 45 additions & 1 deletion src/App/backend/agents/agent_factory.py
@@ -41,7 +41,8 @@ async def get_wealth_advisor_agent(cls):
)

agent_name = "WealthAdvisor"
agent_instructions = "You are a helpful assistant to a Wealth Advisor."
agent_instructions = '''You are a helpful assistant to a Wealth Advisor.
If the question is unrelated to data but is conversational (e.g., greetings or follow-ups), respond appropriately using context, do not use external tools or perform any web searches for these conversational inputs.'''

agent_definition = await client.agents.create_agent(
model=ai_agent_settings.model_deployment_name,
@@ -105,3 +106,46 @@ async def delete_all_agent_instance(cls):
)
cls._search_agent["client"].close()
cls._search_agent = None

@classmethod
async def get_sql_agent(cls) -> dict:
"""
Get or create a singleton SQLQueryGenerator AzureAIAgent instance.
This agent is used to generate T-SQL queries from natural language input.
"""
async with cls._lock:
if not hasattr(cls, "_sql_agent") or cls._sql_agent is None:

agent_instructions = config.SQL_SYSTEM_PROMPT or """
You are an expert assistant in generating T-SQL queries based on user questions.
Always use the following schema:
1. Table: Clients (ClientId, Client, Email, Occupation, MaritalStatus, Dependents)
2. Table: InvestmentGoals (ClientId, InvestmentGoal)
3. Table: Assets (ClientId, AssetDate, Investment, ROI, Revenue, AssetType)
4. Table: ClientSummaries (ClientId, ClientSummary)
5. Table: InvestmentGoalsDetails (ClientId, InvestmentGoal, TargetAmount, Contribution)
6. Table: Retirement (ClientId, StatusDate, RetirementGoalProgress, EducationGoalProgress)
7. Table: ClientMeetings (ClientId, ConversationId, Title, StartTime, EndTime, Advisor, ClientEmail)

Rules:
- Always filter by ClientId = <provided>
- Do not use client name for filtering
- Assets table contains snapshots by date; do not sum values across dates
- Use StartTime for time-based filtering (meetings)
- Only return the raw T-SQL query. No explanations or comments.
"""

Review comment from Copilot AI (Jul 18, 2025) on lines +119 to +137: [nitpick] The fallback instructions should be stored as a constant rather than embedded as a multi-line string. This improves maintainability and makes the instructions easier to update. Suggested change: move the multi-line fallback prompt into a named constant and use `agent_instructions = config.SQL_SYSTEM_PROMPT or SQL_AGENT_FALLBACK_PROMPT`.
project_client = AIProjectClient(
endpoint=config.AI_PROJECT_ENDPOINT,
credential=DefaultAzureCredentialSync(),
api_version="2025-05-01",
)

agent = project_client.agents.create_agent(
model=config.AZURE_OPENAI_MODEL,
instructions=agent_instructions,
name="SQLQueryGeneratorAgent",
)

cls._sql_agent = {"agent": agent, "client": project_client}
Review comment from Copilot AI (Jul 18, 2025): The singleton pattern implementation lacks proper cleanup. Consider adding a cleanup method similar to delete_all_agent_instance for the SQL agent to prevent resource leaks.
return cls._sql_agent
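The review comment about missing cleanup could be addressed with a method mirroring `delete_all_agent_instance`. A minimal, self-contained sketch; the class body is reduced to the two attributes `get_sql_agent` uses, and the method name is an assumption:

```python
import asyncio


class AgentFactory:
    """Reduced sketch of the factory; only the SQL-agent state is shown."""

    _lock = asyncio.Lock()
    _sql_agent = None  # populated by get_sql_agent as {"agent": ..., "client": ...}

    @classmethod
    async def delete_sql_agent_instance(cls):
        # Mirror delete_all_agent_instance: delete the remote agent,
        # close the project client, and clear the cached singleton.
        async with cls._lock:
            if cls._sql_agent is not None:
                client = cls._sql_agent["client"]
                client.agents.delete_agent(cls._sql_agent["agent"].id)
                client.close()
                cls._sql_agent = None
```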