
Commit 720414c

Eugene Fedorenko authored and committed

typo
1 parent bc372bc commit 720414c

File tree: 1 file changed, +2 −2 lines


articles/machine-learning/how-to-cicd-data-ingestion.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -42,7 +42,7 @@ The team members work in slightly different ways to collaborate on the Python no
 
 ### Python Notebook Source Code
 
-The data engineers work with the Python notebook source code either locally in a an IDE (for example, [Visual Studio Code](https://code.visualstudio.com)) or directly in the Databricks workspace. The latter gives the ability to debug the code on the development environment. In any case, the code is going to be merged to the repository following a branching policy.
+The data engineers work with the Python notebook source code either locally in an IDE (for example, [Visual Studio Code](https://code.visualstudio.com)) or directly in the Databricks workspace. The latter gives the ability to debug the code on the development environment. In any case, the code is going to be merged to the repository following a branching policy.
 It's highly recommended to store the code in `.py` files rather than in `.ipynb` Jupyter notebook format. It improves the code readability and enables automatic code quality checks in the CI process.
 
 ### Azure Data Factory Source Code
```
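The `.py` recommendation in the hunk above works because Databricks can round-trip a notebook to a plain Python source file. A minimal sketch of that source format follows; the `# Databricks notebook source` and `# COMMAND ----------` markers are the real Databricks conventions, while the cell contents (`clean_rows`) are hypothetical, not from the article:

```python
# Databricks notebook source
# The line above is the marker Databricks uses to treat a plain .py
# file as a notebook; "# COMMAND ----------" lines separate cells.
# The cell contents below are illustrative placeholders.

def clean_rows(rows):
    """Drop empty rows; a stand-in for the real ingestion logic."""
    return [r for r in rows if r]

# COMMAND ----------

# A second cell: still plain Python, so CI tools such as flake8 can
# lint the file and pytest can import and unit-test clean_rows directly,
# which is not possible with the JSON-based .ipynb format.
print(clean_rows(["a", "", "b"]))
```

Because the file is ordinary Python, the same artifact serves both the interactive Databricks workspace and the automated quality checks in the CI pipeline.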
```diff
@@ -183,7 +183,7 @@ The values in the json file are default values configured in the pipeline defini
 
 ## Continuous Delivery (CD)
 
-The Continuous Delivery process takes the artifacts and deploys them to the first target environment. It makes sure that the solution works by running tests. If succesful, it continues to the next environment. The CD Azure Pipeline consists of multiple stages representing the environments. Each stage contains [deployments](https://docs.microsoft.com/azure/devops/pipelines/process/deployment-jobs?view=azure-devops) and [jobs](https://docs.microsoft.com/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml) that perform the following steps:
+The Continuous Delivery process takes the artifacts and deploys them to the first target environment. It makes sure that the solution works by running tests. If successful, it continues to the next environment. The CD Azure Pipeline consists of multiple stages representing the environments. Each stage contains [deployments](https://docs.microsoft.com/azure/devops/pipelines/process/deployment-jobs?view=azure-devops) and [jobs](https://docs.microsoft.com/azure/devops/pipelines/process/phases?view=azure-devops&tabs=yaml) that perform the following steps:
 * Deploy a Python Notebook to Azure Databricks workspace
 * Deploy an Azure Data Factory pipeline
 * Run the pipeline
```
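The three steps listed in that hunk could be sketched as one stage of a multi-stage Azure Pipeline. This is an assumption-laden sketch, not the article's actual pipeline: the stage name, environment, resource-group and factory names, file paths, and the exact CLI invocations (`databricks workspace import`, `az datafactory pipeline create`/`create-run`) are illustrative placeholders.

```yaml
# Illustrative CD stage; all names and paths below are hypothetical.
stages:
- stage: QA                      # one stage per target environment
  jobs:
  - deployment: deploy_and_test
    environment: qa
    strategy:
      runOnce:
        deploy:
          steps:
          # 1. Deploy the Python notebook to the Databricks workspace
          - script: databricks workspace import ingest.py /Shared/ingest.py --language PYTHON --overwrite
            displayName: Deploy notebook
          # 2. Deploy the Azure Data Factory pipeline from its JSON definition
          - script: az datafactory pipeline create --resource-group rg-qa --factory-name adf-qa --name data-ingestion --pipeline @pipeline.json
            displayName: Deploy ADF pipeline
          # 3. Trigger a run of the deployed pipeline (the test step would poll its status)
          - script: az datafactory pipeline create-run --resource-group rg-qa --factory-name adf-qa --name data-ingestion
            displayName: Run pipeline
```

A subsequent production stage would repeat the same deployment job with `dependsOn: QA` and its own environment-specific resource names, which is what gives the "promote only if successful" behavior described above.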
