
Commit 5222368

Bug#5250: Update the Process audio files on the read
1 parent b83b5f8 commit 5222368


README.md

Lines changed: 48 additions & 35 deletions
@@ -84,39 +84,51 @@ Simple deploy
 4. Open Workspace
 5. Retrieve the Workspace ID from the URL; refer to the documentation for additional assistance ([here](https://learn.microsoft.com/en-us/fabric/admin/portal-workspace#identify-your-workspace-id))
 
-3. **Deploy Fabric resources and artifacts**
-   1. Navigate to ([Azure Portal](https://portal.azure.com/))
-   2. Click on Azure Cloud Shell in the top right of navigation Menu (add image)
-   3. Run the run the following command:
-      1. ```az login``` ***Follow instructions in Azure Cloud Shell for login instructions
-      2. ```rm -rf ./Customer-Service-Conversational-Insights-with-Azure-OpenAI-Services```
-      3. ```git clone https://github.com/microsoft/Customer-Service-Conversational-Insights-with-Azure-OpenAI-Services```
-      4. ```cd ./Customer-Service-Conversational-Insights-with-Azure-OpenAI-Services/Deployment/scripts/fabric_scripts```
-      5. ```sh ./run_fabric_items_scripts.sh keyvault_param workspaceid_param solutionprefix_param```
-         1. keyvault_param - the name of the keyvault that was created in Step 1
-         2. workspaceid_param - the workspaceid created in Step 2
-         3. solutionprefix_param - prefix used to append to lakehouse upon creation
-   4. Get Fabric Lakehouse connection details:
-      1. Once deployment is complete, navigate to Fabric Workspace
-      2. Find Lakehouse in workspace (ex.lakehouse_*solutionprefix_param*)
-      3. Click on the ```...``` next to the SQL Analytics Endpoint
-      4. Click on ```Copy SQL connection string```
-      5. Click Copy button in popup window.
-   5. Wait 10-15 minutes to allow the data pipelines to finish processing then proceed to next step.
-4. **Deploy Power BI report**
-   1. Download the .pbix file from the [Reports folder](Deployment/Reports/).
-   2. Open Power BI report in Power BI Dashboard
-   3. Click on Transform Data menu option from the Task Bar
-   4. Click Data source settings
-   5. Click Change Source...
-   6. Input the Server link (from Fabric Workspace)
-   7. Input Database name (from Fabric Workspace)
-   8. Click OK
-   9. Click Edit Permissions
-   10. If not signed in, sign in your credentials and proceed to click OK
-   11. Click Close
-   12. Report should refresh with need connection.
-5. **Schedule Post-Processing Notebook**
+3. **Deploy Fabric resources and artifacts**
+   1. Navigate to the [Azure Portal](https://portal.azure.com/)
+   2. Click on Azure Cloud Shell in the top right of the navigation menu (add image)
+   3. Run the following commands:
+      1. ```az login``` (follow the sign-in instructions shown in Azure Cloud Shell)
+      2. ```rm -rf ./Customer-Service-Conversational-Insights-with-Azure-OpenAI-Services```
+      3. ```git clone https://github.com/microsoft/Customer-Service-Conversational-Insights-with-Azure-OpenAI-Services```
+      4. ```cd ./Customer-Service-Conversational-Insights-with-Azure-OpenAI-Services/Deployment/scripts/fabric_scripts```
+      5. ```sh ./run_fabric_items_scripts.sh keyvault_param workspaceid_param solutionprefix_param```
+         1. keyvault_param - the name of the Key Vault created in Step 1
+         2. workspaceid_param - the Workspace ID retrieved in Step 2
+         3. solutionprefix_param - the prefix appended to the lakehouse name upon creation
+   4. Get Fabric Lakehouse connection details:
+   5. Once deployment is complete, navigate to the Fabric Workspace
+   6. Find the Lakehouse in the workspace (e.g. lakehouse_*solutionprefix_param*)
+   7. Click on the ```...``` next to the SQL Analytics Endpoint
+   8. Click on ```Copy SQL connection string```
+   9. Click the Copy button in the popup window.
+   10. Wait 10-15 minutes for the data pipelines to finish processing, then proceed to the next step.
+4. **Open Power BI report**
+   1. Download the .pbix file from the [Reports folder](Deployment/Reports/).
+   2. Open the Power BI report in Power BI Desktop
+   3. Click the `Transform Data` menu option on the ribbon
+   4. Click `Data source settings`
+   5. Click `Change Source...`
+   6. Input the Server link (the SQL connection string copied from the Fabric Workspace)
+   7. Input the Database name (the lakehouse name from the Fabric Workspace)
+   8. Click `OK`
+   9. Click `Edit Permissions`
+   10. If not signed in, sign in with your credentials, then click `OK`
+   11. Click `Close`
+   12. The report should refresh with the new connection.
+5. **Publish Power BI**
+   1. Click `Publish` (from the report in the Power BI Desktop application)
+   2. Select the Fabric Workspace
+   3. Click `Select`
+   4. After publishing is complete, navigate to the Fabric Workspace
+   5. Click `...` next to the Semantic model for the Power BI report
+   6. Click on `Settings`
+   7. Click on `Edit credentials` (under Data source credentials)
+   8. Select `OAuth2` for the Authentication method
+   9. Select an option for `Privacy level setting for this data source`
+   10. Click `Sign in`
+   11. Navigate back to the Fabric workspace and click on the Power BI report
+6. **Schedule Post-Processing Notebook**
 It is essential to update the dates daily, since they are set relative to the current day at the time of deployment. Because the Power BI report relies on the current date, we highly recommend scheduling or running the 03_post_processing notebook daily in the workspace. Please note that this process modifies the original dates of the processed data; if you do not want those dates modified, do not execute the 03_post_processing notebook.
 
 To schedule the notebook, follow these steps:
@@ -129,9 +141,10 @@ Simple deploy
 
 ### Process audio files
 Currently, audio files are not processed during deployment. To manually process audio files, follow these steps:
-- Open the pipeline_notebook
+- Open the `pipeline_notebook`
+- Comment out cell 2 (only if there are zero files in the `conversation_input` data folder waiting for JSON processing)
 - Uncomment cells 3 and 4
-- Run pipeline_notebook
+- Run `pipeline_notebook`
 
 
 ### Upload additional files
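
For reference, the Cloud Shell deployment steps in the diff above condense to a single shell session. The sketch below is minimal and uses hypothetical placeholder values: substitute the Key Vault name created in Step 1, the Workspace ID retrieved in Step 2, and your own solution prefix.

```bash
# Sign in first; Azure Cloud Shell prints the login instructions to follow.
az login

# Start from a clean clone of the repository.
rm -rf ./Customer-Service-Conversational-Insights-with-Azure-OpenAI-Services
git clone https://github.com/microsoft/Customer-Service-Conversational-Insights-with-Azure-OpenAI-Services
cd ./Customer-Service-Conversational-Insights-with-Azure-OpenAI-Services/Deployment/scripts/fabric_scripts

# Hypothetical placeholder values -- replace with your own.
KEYVAULT_NAME="kv-convinsights-demo"                 # Key Vault created in Step 1
WORKSPACE_ID="00000000-0000-0000-0000-000000000000"  # Workspace ID from Step 2
SOLUTION_PREFIX="convdemo"                           # appended to the lakehouse name

sh ./run_fabric_items_scripts.sh "$KEYVAULT_NAME" "$WORKSPACE_ID" "$SOLUTION_PREFIX"
```

Once the script completes, allow 10-15 minutes for the data pipelines to finish before connecting the Power BI report.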
