Commit 12e8460

feat: Unit test code changes and EXP changes for Log analytics

2 parents (32d7c0b + afcb78e), commit 12e8460

29 files changed: +3520 / -71 lines

.github/workflows/test.yml

Lines changed: 42 additions & 44 deletions

```diff
@@ -6,6 +6,7 @@ on:
       - main
       - dev
       - demo
+      - psl-backend-unittest
   pull_request:
     types:
       - opened
@@ -18,49 +19,48 @@ on:
       - demo
 
 jobs:
-  # frontend_tests:
-  #   runs-on: ubuntu-latest
-
-  #   steps:
-  #     - name: Checkout code
-  #       uses: actions/checkout@v3
-
-  #     - name: Set up Node.js
-  #       uses: actions/setup-node@v3
-  #       with:
-  #         node-version: '20'
-
-  #     - name: Check if Frontend Test Files Exist
-  #       id: check_frontend_tests
-  #       run: |
-  #         if [ -z "$(find App/frontend/src -type f -name '*.test.js' -o -name '*.test.ts' -o -name '*.test.tsx')" ]; then
-  #           echo "No frontend test files found, skipping frontend tests."
-  #           echo "skip_frontend_tests=true" >> $GITHUB_ENV
-  #         else
-  #           echo "Frontend test files found, running tests."
-  #           echo "skip_frontend_tests=false" >> $GITHUB_ENV
-  #         fi
-
-  #     - name: Install Frontend Dependencies
-  #       if: env.skip_frontend_tests == 'false'
-  #       run: |
-  #         cd App/frontend
-  #         npm install
-
-  #     - name: Run Frontend Tests with Coverage
-  #       if: env.skip_frontend_tests == 'false'
-  #       run: |
-  #         cd App/frontend
-  #         npm run test -- --coverage
-
-  #     - name: Skip Frontend Tests
-  #       if: env.skip_frontend_tests == 'true'
-  #       run: |
-  #         echo "Skipping frontend tests because no test files were found."
+  # frontend_tests:
+  #   runs-on: ubuntu-latest
+
+  #   steps:
+  #     - name: Checkout code
+  #       uses: actions/checkout@v3
+
+  #     - name: Set up Node.js
+  #       uses: actions/setup-node@v3
+  #       with:
+  #         node-version: '20'
+
+  #     - name: Check if Frontend Test Files Exist
+  #       id: check_frontend_tests
+  #       run: |
+  #         if [ -z "$(find App/frontend/src -type f -name '*.test.js' -o -name '*.test.ts' -o -name '*.test.tsx')" ]; then
+  #           echo "No frontend test files found, skipping frontend tests."
+  #           echo "skip_frontend_tests=true" >> $GITHUB_ENV
+  #         else
+  #           echo "Frontend test files found, running tests."
+  #           echo "skip_frontend_tests=false" >> $GITHUB_ENV
+  #         fi
+
+  #     - name: Install Frontend Dependencies
+  #       if: env.skip_frontend_tests == 'false'
+  #       run: |
+  #         cd App/frontend
+  #         npm install
+
+  #     - name: Run Frontend Tests with Coverage
+  #       if: env.skip_frontend_tests == 'false'
+  #       run: |
+  #         cd App/frontend
+  #         npm run test -- --coverage
+
+  #     - name: Skip Frontend Tests
+  #       if: env.skip_frontend_tests == 'true'
+  #       run: |
+  #         echo "Skipping frontend tests because no test files were found."
 
   backend_tests:
     runs-on: ubuntu-latest
-
 
     steps:
       - name: Checkout code
@@ -69,7 +69,7 @@ jobs:
       - name: Set up Python
         uses: actions/setup-python@v4
         with:
-          python-version: '3.11'
+          python-version: "3.11"
 
       - name: Install Backend Dependencies
         run: |
@@ -81,7 +81,7 @@ jobs:
       - name: Check if Backend Test Files Exist
         id: check_backend_tests
         run: |
-          if [ -z "$(find src/api -type f -name 'test_*.py')" ]; then
+          if [ -z "$(find src/tests/api -type f -name 'test_*.py')" ]; then
            echo "No backend test files found, skipping backend tests."
            echo "skip_backend_tests=true" >> $GITHUB_ENV
          else
@@ -94,8 +94,6 @@ jobs:
         run: |
           pytest --cov=. --cov-report=term-missing --cov-report=xml
 
-
-
       - name: Skip Backend Tests
         if: env.skip_backend_tests == 'true'
         run: |
```
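The backend detection step hinges on `find` returning an empty string when no `test_*.py` files exist under the (now relocated) `src/tests/api` directory. A minimal Python sketch of the same logic, run against a scratch directory (the `/tmp/ckm_demo` path is illustrative, not part of the repo):

```python
from pathlib import Path

# Scratch layout standing in for the repo checkout; paths are illustrative only.
root = Path("/tmp/ckm_demo/src/tests/api")
root.mkdir(parents=True, exist_ok=True)
(root / "test_health.py").touch()

# Mirrors: find src/tests/api -type f -name 'test_*.py'
matches = [p for p in root.rglob("test_*.py") if p.is_file()]

# Mirrors the skip flag the workflow writes to $GITHUB_ENV
skip_backend_tests = "true" if not matches else "false"
print(skip_backend_tests)
```

With at least one matching file present, the flag comes out `"false"` and the pytest step would run.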

documents/CustomizingAzdParameters.md

Lines changed: 5 additions & 0 deletions

````diff
@@ -45,4 +45,9 @@ Change the Embedding Deployment Capacity (choose a number based on available emb
 
 ```shell
 azd env set AZURE_OPENAI_EMBEDDING_MODEL_CAPACITY 80
+```
+
+Set the Log Analytics Workspace ID if you need to reuse an existing workspace:
+```shell
+azd env set AZURE_ENV_LOG_ANALYTICS_WORKSPACE_ID '<Existing Log Analytics Workspace Id>'
 ```
````

documents/DeploymentGuide.md

Lines changed: 2 additions & 0 deletions

```diff
@@ -116,6 +116,8 @@ When you start the deployment, most parameters will have **default values**, but
 | **GPT Model Deployment Capacity** | Configure capacity for **GPT models**. | 30k |
 | **Embedding Model** | Default: **text-embedding-ada-002**. | text-embedding-ada-002 |
 | **Embedding Model Capacity** | Set the capacity for **embedding models**. | 80k |
+| **Existing Log Analytics Workspace** | Resource ID of an existing Log Analytics workspace to reuse. | |
+
 
 </details>
```

documents/Fabric_deployment.md

Lines changed: 7 additions & 5 deletions

````diff
@@ -3,11 +3,13 @@
 ### How to customize
 
 If you'd like to customize the solution accelerator, here are some ways you might do that:
-- Ingest your own [audio conversation files](./ConversationalDataFormat.md) by uploading them into the `cu_audio_files_all` lakehouse folder and run the data pipeline
-- Deploy with Microsoft Fabric by following the steps in [Fabric_deployment.md](./Fabric_deployment.md)
+1. Ingest your own [audio conversation files](./ConversationalDataFormat.md) by uploading them into the `cu_audio_files_all` lakehouse folder and running the data pipeline
+2. Deploy with Microsoft Fabric by following the steps in [Fabric_deployment.md](./Fabric_deployment.md)
 
 
-3. **Create Fabric workspace**
+3. **Create or Use an Existing Microsoft Fabric Workspace**
+
+   > ℹ️ **Note:** If you already have an existing Microsoft Fabric Workspace, you can skip workspace creation and **continue from Point 5 (Environment Creation)**.
    1. Navigate to the [Fabric Workspace](https://app.fabric.microsoft.com/)
    2. Click on Data Engineering experience
    3. Click on Workspaces from left Navigation
@@ -38,7 +40,7 @@ If you'd like to customize the solution accelerator, here are some ways you migh
 4. ```cd ./Conversation-Knowledge-Mining-Solution-Accelerator/Deployment/scripts/fabric_scripts```
 5. ```sh ./run_fabric_items_scripts.sh keyvault_param workspaceid_param solutionprefix_param```
    1. keyvault_param - the name of the keyvault that was created in Step 1
-   2. workspaceid_param - the workspaceid created in Step 2
+   2. workspaceid_param - the existing workspace ID, or the one created in Step 3
    3. solutionprefix_param - prefix used to append to lakehouse upon creation
 5. **Add App Authentication**
@@ -53,4 +55,4 @@ All files WAV files can be uploaded in the corresponding Lakehouse in the data/F
 
 ### Post-deployment
 - To process additional files, manually execute the pipeline_notebook after uploading new files.
-- The OpenAI prompt can be modified within the Fabric notebooks.
+- The OpenAI prompt can be modified within the Fabric notebooks.
````
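The `run_fabric_items_scripts.sh` invocation documented above takes three positional parameters. A hedged Python sketch of that mapping (all values are placeholders, and the variable names simply echo the parameter names from the docs):

```python
# Positional-argument order as documented for run_fabric_items_scripts.sh;
# argv[0] is the script name, arguments follow in order. Values are made up.
argv = [
    "run_fabric_items_scripts.sh",
    "my-keyvault",                              # keyvault_param (Step 1)
    "00000000-1111-2222-3333-444444444444",     # workspaceid_param (Step 3 or existing)
    "ckmdemo",                                  # solutionprefix_param
]
keyvault_param, workspaceid_param, solutionprefix_param = argv[1:4]
print(keyvault_param, workspaceid_param, solutionprefix_param)
```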

infra/deploy_ai_foundry.bicep

Lines changed: 18 additions & 3 deletions

```diff
@@ -5,11 +5,14 @@ param keyVaultName string
 param cuLocation string
 param deploymentType string
 param gptModelName string
+param gptModelVersion string
 param azureOpenAIApiVersion string
 param gptDeploymentCapacity int
 param embeddingModel string
 param embeddingDeploymentCapacity int
 param managedIdentityObjectId string
+param existingLogAnalyticsWorkspaceId string = ''
+
 var abbrs = loadJsonContent('./abbreviations.json')
 // var storageName = '${solutionName}hubstorage'
 // var storageSkuName = 'Standard_LRS'
@@ -58,6 +61,7 @@ var aiModelDeployments = [
       name: deploymentType
       capacity: gptDeploymentCapacity
     }
+    version: gptModelVersion
     raiPolicyName: 'Microsoft.Default'
   }
   {
@@ -73,11 +77,20 @@ var aiModelDeployments = [
 
 var containerRegistryNameCleaned = replace(containerRegistryName, '-', '')
 
+var useExisting = !empty(existingLogAnalyticsWorkspaceId)
+var existingLawResourceGroup = useExisting ? split(existingLogAnalyticsWorkspaceId, '/')[4] : ''
+var existingLawName = useExisting ? split(existingLogAnalyticsWorkspaceId, '/')[8] : ''
+
 resource keyVault 'Microsoft.KeyVault/vaults@2022-07-01' existing = {
   name: keyVaultName
 }
 
-resource logAnalytics 'Microsoft.OperationalInsights/workspaces@2023-09-01' = {
+resource existingLogAnalyticsWorkspace 'Microsoft.OperationalInsights/workspaces@2023-09-01' existing = if (useExisting) {
+  name: existingLawName
+  scope: resourceGroup(existingLawResourceGroup)
+}
+
+resource logAnalytics 'Microsoft.OperationalInsights/workspaces@2023-09-01' = if (!useExisting) {
   name: workspaceName
   location: location
   tags: {}
@@ -97,7 +110,7 @@ resource applicationInsights 'Microsoft.Insights/components@2020-02-02' = {
     Application_Type: 'web'
     publicNetworkAccessForIngestion: 'Enabled'
     publicNetworkAccessForQuery: 'Disabled'
-    WorkspaceResourceId: logAnalytics.id
+    WorkspaceResourceId: useExisting ? existingLogAnalyticsWorkspace.id : logAnalytics.id
   }
 }
 
@@ -714,7 +727,9 @@ output aiSearchService string = aiSearch.name
 output aiProjectName string = aiHubProject.name
 
 output applicationInsightsId string = applicationInsights.id
-output logAnalyticsWorkspaceResourceName string = logAnalytics.name
+output logAnalyticsWorkspaceResourceName string = useExisting ? existingLogAnalyticsWorkspace.name : logAnalytics.name
+output logAnalyticsWorkspaceResourceGroup string = useExisting ? existingLawResourceGroup : resourceGroup().name
+
 output storageAccountName string = storageNameCleaned
 
 output azureOpenAIKeyName string = azureOpenAIApiKeyEntry.name
```
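The Bicep change above derives the existing workspace's resource group and name by splitting the resource ID on `/` and taking segments 4 and 8. A short Python sketch confirming those indices against a hypothetical (made-up) Log Analytics workspace resource ID:

```python
# Hypothetical resource ID; segment layout follows the standard ARM format:
# /subscriptions/<sub>/resourceGroups/<rg>/providers/<ns>/workspaces/<name>
workspace_id = (
    "/subscriptions/00000000-0000-0000-0000-000000000000"
    "/resourceGroups/rg-shared-monitoring"
    "/providers/Microsoft.OperationalInsights"
    "/workspaces/law-shared"
)

parts = workspace_id.split("/")
resource_group = parts[4]   # matches split(existingLogAnalyticsWorkspaceId, '/')[4]
workspace_name = parts[8]   # matches split(existingLogAnalyticsWorkspaceId, '/')[8]
print(resource_group, workspace_name)
```

Index 0 is the empty string before the leading slash, which is why the resource group lands at 4 rather than 3.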

infra/deploy_post_deployment_scripts.bicep

Lines changed: 2 additions & 1 deletion

```diff
@@ -17,11 +17,12 @@ param sqlDbName string
 param sqlUsers array = [
 ]
 param logAnalyticsWorkspaceResourceName string
+param logAnalyticsWorkspaceResourceGroup string
 var resourceGroupName = resourceGroup().name
 
 resource logAnalytics 'Microsoft.OperationalInsights/workspaces@2020-10-01' existing = {
   name: logAnalyticsWorkspaceResourceName
-  scope: resourceGroup()
+  scope: resourceGroup(logAnalyticsWorkspaceResourceGroup)
 }
 
 resource containerAppEnv 'Microsoft.App/managedEnvironments@2022-03-01' = {
```

infra/main.bicep

Lines changed: 10 additions & 9 deletions

```diff
@@ -6,6 +6,9 @@ var abbrs = loadJsonContent('./abbreviations.json')
 @description('A unique prefix for all resources in this deployment. This should be 3-20 characters long:')
 param environmentName string
 
+@description('Optional: Existing Log Analytics Workspace Resource ID')
+param existingLogAnalyticsWorkspaceId string = ''
+
 @minLength(1)
 @description('Location for the Content Understanding service deployment:')
 @allowed(['swedencentral', 'australiaeast'])
@@ -28,18 +31,12 @@ param secondaryLocation string
 ])
 param deploymentType string = 'GlobalStandard'
 
-@minLength(1)
 @description('Name of the GPT model to deploy:')
-@allowed([
-  'gpt-4o-mini'
-  'gpt-4o'
-  'gpt-4'
-])
 param gptModelName string = 'gpt-4o-mini'
 
-// @minLength(1)
-// @description('Version of the GPT model to deploy:')
-// param gptModelVersion string = '2024-02-15-preview' //'2024-08-06'
+@description('Version of the GPT model to deploy:')
+param gptModelVersion string = '2024-07-18'
+
 var azureOpenAIApiVersion = '2024-02-15-preview'
 
 @minValue(10)
@@ -102,11 +99,14 @@ module aifoundry 'deploy_ai_foundry.bicep' = {
     cuLocation: contentUnderstandingLocation
     deploymentType: deploymentType
     gptModelName: gptModelName
+    gptModelVersion: gptModelVersion
     azureOpenAIApiVersion: azureOpenAIApiVersion
     gptDeploymentCapacity: gptDeploymentCapacity
     embeddingModel: embeddingModel
     embeddingDeploymentCapacity: embeddingDeploymentCapacity
     managedIdentityObjectId: managedIdentityModule.outputs.managedIdentityOutput.objectId
+    existingLogAnalyticsWorkspaceId: existingLogAnalyticsWorkspaceId
+
   }
   scope: resourceGroup(resourceGroup().name)
 }
@@ -168,6 +168,7 @@ module uploadFiles 'deploy_post_deployment_scripts.bicep' = {
     managedIdentityClientId: managedIdentityModule.outputs.managedIdentityOutput.clientId
     keyVaultName: aifoundry.outputs.keyvaultName
     logAnalyticsWorkspaceResourceName: aifoundry.outputs.logAnalyticsWorkspaceResourceName
+    logAnalyticsWorkspaceResourceGroup: aifoundry.outputs.logAnalyticsWorkspaceResourceGroup
     sqlServerName: sqlDBModule.outputs.sqlServerName
     sqlDbName: sqlDBModule.outputs.sqlDbName
     sqlUsers: [
```

infra/main.bicepparam

Lines changed: 2 additions & 0 deletions

```diff
@@ -6,6 +6,8 @@ param contentUnderstandingLocation = readEnvironmentVariable('AZURE_CONTENT_UNDE
 param secondaryLocation = readEnvironmentVariable('AZURE_SECONDARY_LOCATION', 'eastus2')
 param deploymentType = readEnvironmentVariable('AZURE_OPEN_AI_MODEL_DEPLOYMENT_TYPE', 'GlobalStandard')
 param gptModelName = readEnvironmentVariable('AZURE_OPEN_AI_DEPLOYMENT_MODEL', 'gpt-4o-mini')
+param gptModelVersion = readEnvironmentVariable('AZURE_ENV_MODEL_VERSION', '2024-07-18')
 param gptDeploymentCapacity = int(readEnvironmentVariable('AZURE_OPEN_AI_DEPLOYMENT_MODEL_CAPACITY', '30'))
 param embeddingModel = readEnvironmentVariable('AZURE_OPENAI_EMBEDDING_MODEL', 'text-embedding-ada-002')
 param embeddingDeploymentCapacity = int(readEnvironmentVariable('AZURE_OPENAI_EMBEDDING_MODEL_CAPACITY', '80'))
+param existingLogAnalyticsWorkspaceId = readEnvironmentVariable('AZURE_ENV_LOG_ANALYTICS_WORKSPACE_ID', '')
```
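Each of these parameters reads an environment variable and falls back to a default when the variable is unset, which is how the new options stay optional. A Python sketch of the same lookup semantics (the environment dict and helper are illustrative, not part of the repo):

```python
# Hypothetical environment; keys mirror main.bicepparam, values are made up.
env = {"AZURE_OPEN_AI_DEPLOYMENT_MODEL": "gpt-4o-mini"}

def read_environment_variable(name, default):
    """Mimics Bicep's readEnvironmentVariable(name, default)."""
    return env.get(name, default)

# Neither new variable is set, so both take their declared defaults.
gpt_model_version = read_environment_variable("AZURE_ENV_MODEL_VERSION", "2024-07-18")
existing_law_id = read_environment_variable("AZURE_ENV_LOG_ANALYTICS_WORKSPACE_ID", "")
print(gpt_model_version, repr(existing_law_id))
```

An empty `existingLogAnalyticsWorkspaceId` is what makes `useExisting` evaluate to false downstream, so a fresh workspace is created unless the azd env var is explicitly set.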

infra/main.json

Lines changed: 16 additions & 8 deletions

```diff
@@ -5,7 +5,7 @@
     "_generator": {
       "name": "bicep",
       "version": "0.35.1.17967",
-      "templateHash": "15074305105133532037"
+      "templateHash": "17280998536631599018"
     }
   },
   "parameters": {
@@ -53,16 +53,17 @@
     "gptModelName": {
       "type": "string",
       "defaultValue": "gpt-4o-mini",
-      "allowedValues": [
-        "gpt-4o-mini",
-        "gpt-4o",
-        "gpt-4"
-      ],
-      "minLength": 1,
       "metadata": {
         "description": "Name of the GPT model to deploy:"
       }
     },
+    "gptModelVersion": {
+      "type": "string",
+      "defaultValue": "2024-07-18",
+      "metadata": {
+        "description": "Version of the GPT model to deploy:"
+      }
+    },
     "gptDeploymentCapacity": {
       "type": "int",
       "defaultValue": 30,
@@ -589,6 +590,9 @@
           "gptModelName": {
             "value": "[parameters('gptModelName')]"
           },
+          "gptModelVersion": {
+            "value": "[parameters('gptModelVersion')]"
+          },
           "azureOpenAIApiVersion": {
             "value": "[variables('azureOpenAIApiVersion')]"
           },
@@ -612,7 +616,7 @@
             "_generator": {
               "name": "bicep",
               "version": "0.35.1.17967",
-              "templateHash": "5794496619234186044"
+              "templateHash": "796973952642771216"
             }
           },
           "parameters": {
@@ -634,6 +638,9 @@
             "gptModelName": {
               "type": "string"
             },
+            "gptModelVersion": {
+              "type": "string"
+            },
             "azureOpenAIApiVersion": {
               "type": "string"
             },
@@ -903,6 +910,7 @@
                   "name": "[parameters('deploymentType')]",
                   "capacity": "[parameters('gptDeploymentCapacity')]"
                 },
+                "version": "[parameters('gptModelVersion')]",
                 "raiPolicyName": "Microsoft.Default"
               },
               {
```

pytest.ini

Lines changed: 7 additions & 0 deletions

```diff
@@ -0,0 +1,7 @@
+[pytest]
+markers =
+    unittest: Unit Tests (relatively fast)
+    functional: Functional Tests (tests that require a running server, with stubbed downstreams)
+    azure: marks tests as extended (run less frequently, relatively slow)
+pythonpath = ./src/api
+log_level=debug
```
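The markers registered in this new pytest.ini can be attached to individual tests and then selected at run time with `-m`. A minimal sketch, assuming pytest is installed (the test file and function names are hypothetical, not from the repo):

```python
import pytest

# Hypothetical test (e.g. in src/tests/api/test_example.py); the marker name
# "unittest" comes from the markers list registered in pytest.ini above.
@pytest.mark.unittest
def test_addition_is_fast():
    assert 1 + 1 == 2

# Run only the fast tests with:
#   pytest -m unittest
```

Registering the markers in pytest.ini avoids `PytestUnknownMarkWarning` when these decorators are used.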
