articles/ai-studio/how-to/llmops-azure-devops-prompt-flow.md (24 additions & 23 deletions)
@@ -38,28 +38,28 @@ LLMOps with prompt flow is a "LLMOps template and guidance" to help you build LL
- It supports pure **Python-based evaluation** as well, using the promptflow-evals package.
-- It should be used for INNER-LOOP Experimentation and Evaluation.
+- It supports INNER-LOOP Experimentation and Evaluation.
-- It should be used for OUTER-LOOP Deployment and Inferencing.
+- It supports OUTER-LOOP Deployment and Inferencing.
-- **Centralized Code Hosting**: This repo supports hosting code for multiple flows based on prompt flow, providing a single repository for all your flows. Think of this platform as a single repository where all your prompt flow code resides. It's like a library for your flows, making it easy to find, access, and collaborate on different projects.
+- It supports **Centralized Code Hosting** for multiple flows based on prompt flow, providing a single repository for all your flows. Think of this repository as a library for your flows, making it easy to find, access, and collaborate on different projects.
-- **Lifecycle Management**: Each flow enjoys its own lifecycle, allowing for smooth transitions from local experimentation to production deployment.
-:::image type="content" source="../media/prompt-flow/llmops/pipeline.png" alt-text="Screenshot of pipeline." lightbox = "../media/prompt-flow/llmops/pipeline.png":::
+- Each flow enjoys its own **Lifecycle Management**, allowing for smooth transitions from local experimentation to production deployment.
+:::image type="content" source="../media/prompt-flow/llmops/workflow.png" alt-text="Screenshot of workflow." lightbox = "../media/prompt-flow/llmops/workflow.png":::
-- **Variant and Hyperparameter Experimentation**: Experiment with multiple variants and hyperparameters, evaluating flow variants with ease. Variants and hyperparameters are like ingredients in a recipe. This platform allows you to experiment with different combinations of variants across multiple nodes in a flow.
+- It supports **Variant and Hyperparameter Experimentation**, letting you evaluate flow variants with ease. Variants and hyperparameters are like ingredients in a recipe: the platform lets you experiment with different combinations of variants across multiple nodes in a flow.
-- **Multiple Deployment Targets**: The repo supports deployment of flows to **Azure App Services, Kubernetes, Azure Managed computes** driven through configuration ensuring that your flows can scale as needed. It also generates **Docker images** infused with Flow compute session and your flows for deployment to **any target platform and Operating system** supporting Docker.
+- The repo supports configuration-driven deployment of flows to **Azure App Services, Kubernetes, and Azure Managed computes**, ensuring that your flows can scale as needed. It also generates **Docker images** that bundle the flow compute session and your flows, for deployment to **any target platform and operating system** that supports Docker.
:::image type="content" source="../media/prompt-flow/llmops/endpoints.png" alt-text="Screenshot of endpoints." lightbox = "../media/prompt-flow/llmops/endpoints.png":::
-- **A/B Deployment**: Seamlessly implement A/B deployments, enabling you to compare different flow versions effortlessly. Like in traditional A/B testing for websites, this platform facilitates A/B deployment for prompt flow. This means you can effortlessly compare different versions of a flow in a real-world setting to determine which performs best.
+- Seamlessly implement **A/B Deployment**, enabling you to compare different flow versions. As in traditional A/B testing for websites, this platform facilitates A/B deployment for prompt flow, so you can compare different versions of a flow in a real-world setting to determine which performs best.
:::image type="content" source="../media/prompt-flow/llmops/a-b-deployments.png" alt-text="Screenshot of deployments." lightbox = "../media/prompt-flow/llmops/a-b-deployments.png":::
-- **Many-to-many dataset/flow relationships**: Accommodate multiple datasets for each standard and evaluation flow, ensuring versatility in flow test and evaluation. The platform is designed to accommodate multiple datasets for each flow.
+- It accommodates **Many-to-many dataset/flow relationships** for each standard and evaluation flow, ensuring versatility in flow testing and evaluation. The platform is designed to accommodate multiple datasets for each flow.
-- **Conditional Data and Model registration**: The platform creates a new version for dataset in Azure AI Studio Data Asset and flows in model registry only when there's a change in them, not otherwise.
+- It supports **Conditional Data and Model registration**, creating a new version of a dataset in Azure AI Studio data assets, and of flows in the model registry, only when they change.
-- **Comprehensive Reporting**: Generate detailed reports for each variant configuration, allowing you to make informed decisions. Provides detailed Metric collection, experiment, and variant bulk runs for all runs and experiments, enabling data-driven decisions in csv as well as HTML files.
+- It generates **Comprehensive Reporting** for each **variant configuration**, allowing you to make informed decisions. It collects detailed metrics for all runs, experiments, and variant bulk runs, enabling data-driven decisions in CSV as well as HTML files.
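The pure Python evaluation mentioned above is driven by the promptflow-evals package. The snippet below is a minimal, illustrative sketch rather than the template's own evaluation code: the dataset file `eval_data.jsonl`, its `answer` column, and the `answer_length` metric are hypothetical, and exact module paths can vary between promptflow-evals versions.

```python
# Minimal sketch of a code-based evaluation with promptflow-evals.
# Assumes a JSONL dataset (eval_data.jsonl) with an "answer" column; both are hypothetical.
from promptflow.evals.evaluate import evaluate


def answer_length(answer: str, **kwargs):
    """Trivial custom evaluator: reports the length of the generated answer."""
    return {"answer_length": len(answer)}


if __name__ == "__main__":
    result = evaluate(
        data="eval_data.jsonl",                # one JSON object per line
        evaluators={"length": answer_length},  # plain Python callables can serve as evaluators
    )
    print(result["metrics"])                   # aggregated metrics across all rows
```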
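The A/B deployment capability maps onto traffic splitting for an Azure Machine Learning managed online endpoint. Here is a minimal sketch using the azure-ai-ml SDK; the endpoint name `chat-flow-endpoint`, the `blue`/`green` deployment names, and the 90/10 split are illustrative assumptions, not the template's own deployment code.

```python
# Sketch: shift a share of live traffic to a new flow deployment for A/B comparison.
# Endpoint/deployment names and the traffic split are illustrative placeholders.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<ai-project-or-workspace>",
)

endpoint = ml_client.online_endpoints.get(name="chat-flow-endpoint")
# Send 90% of requests to the current version and 10% to the candidate version.
endpoint.traffic = {"blue": 90, "green": 10}
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```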
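Conditional data registration amounts to "register a new version only when the content changed." A sketch of that idea with the azure-ai-ml SDK follows, assuming a content hash stored as an asset tag; the tag name `content_sha256` and the helper are hypothetical, and the template's actual change-detection logic may differ.

```python
# Sketch: register a new data asset version only when the file content has changed.
# The tag name "content_sha256" and the asset/file names are illustrative assumptions.
import hashlib

from azure.ai.ml import MLClient
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.entities import Data


def register_if_changed(ml_client: MLClient, name: str, path: str) -> Data:
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    try:
        latest = ml_client.data.get(name=name, label="latest")
        if latest.tags and latest.tags.get("content_sha256") == digest:
            return latest  # unchanged content: reuse the existing version
    except Exception:
        pass  # no version registered yet
    asset = Data(name=name, path=path, type=AssetTypes.URI_FILE,
                 tags={"content_sha256": digest})
    return ml_client.data.create_or_update(asset)
```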
@@ -79,13 +79,13 @@ LLMOps with prompt flow provides capabilities for both simple and complex LLM-in
The lifecycle comprises four distinct stages:
-- **Initialization:** Clearly define the business objective, gather relevant data samples, establish a basic prompt structure, and craft a flow that enhances its capabilities.
+1. **Initialization:** Clearly define the business objective, gather relevant data samples, establish a basic prompt structure, and craft a flow that enhances its capabilities.
-- **Experimentation:** Apply the flow to sample data, assess the prompt's performance, and refine the flow as needed. Continuously iterate until satisfied with the results.
+2. **Experimentation:** Apply the flow to sample data, assess the prompt's performance, and refine the flow as needed. Continuously iterate until satisfied with the results.
-- **Evaluation & Refinement:** Benchmark the flow's performance using a larger dataset, evaluate the prompt's effectiveness, and make refinements accordingly. Progress to the next stage if the results meet the desired standards.
+3. **Evaluation & Refinement:** Benchmark the flow's performance using a larger dataset, evaluate the prompt's effectiveness, and make refinements accordingly. Progress to the next stage if the results meet the desired standards.
-- **Deployment:** Optimize the flow for efficiency and effectiveness, deploy it in a production environment including A/B deployment, monitor its performance, gather user feedback, and use this information to further enhance the flow.
+4. **Deployment:** Optimize the flow for efficiency and effectiveness, deploy it in a production environment including A/B deployment, monitor its performance, gather user feedback, and use this information to further enhance the flow.
By adhering to this structured methodology, prompt flow empowers you to confidently develop, rigorously test, fine-tune, and deploy flows, leading to the creation of robust and sophisticated AI applications.
@@ -111,13 +111,13 @@ The repository for this article is available at [LLMOps with Prompt flow templat
4. The merge to Main triggers the build and release process for the Development environment. Specifically:
a. The CI pipeline is triggered from the merge to Main. The CI pipeline performs all the steps done in the PR pipeline, and the following steps:
-- Experimentation flow
-- Evaluation flow
-- Registers the flows in the AI Studio Registry when changes are detected
+1. Experimentation flow
+2. Evaluation flow
+3. Registers the flows in the AI Studio Registry when changes are detected
b. The CD pipeline is triggered after the completion of the CI pipeline. This flow performs the following steps:
-- Deploys the flow from the AI Studio registry to a AI Studio deployment
-- Runs integration tests that target the online endpoint
-- Runs smoke tests that target the online endpoint
+1. Deploys the flow from the AI Studio registry to an AI Studio deployment
+2. Runs integration tests that target the online endpoint
+3. Runs smoke tests that target the online endpoint
5. An approval process is built into the release promotion process – upon approval, the CI & CD processes described in steps 4.a. & 4.b. are repeated, targeting the Test environment. Steps 4.a. and 4.b. are the same, except that user acceptance tests are run after the smoke tests in the Test environment.
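The smoke tests in step 4.b can be as simple as invoking the online endpoint with a known-good request and checking that a response comes back. Below is a minimal sketch with the azure-ai-ml SDK; the endpoint name `chat-flow-endpoint` and the `sample_request.json` file are hypothetical, and the template's actual tests are more thorough.

```python
# Sketch: smoke test against the deployed online endpoint.
# "chat-flow-endpoint" and sample_request.json are illustrative placeholders.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<ai-project-or-workspace>",
)

# Invoke the endpoint with a small, known-good payload and check for a non-empty reply.
response = ml_client.online_endpoints.invoke(
    endpoint_name="chat-flow-endpoint",
    request_file="sample_request.json",
)
assert response, "Smoke test failed: empty response from the endpoint"
print("Smoke test passed:", response[:200])
```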
@@ -224,7 +224,7 @@ The test outputs should be similar to ones shown at [here](https://github.com/mi
## Local execution
-Experiments can be executed using the prompt_pipeline.py python script locally. The script takes the experiment.yaml file as input and runs the evaluations defined in the experiment.yaml file along with use case name and environment name. This generates RUN_ID.txt file containing the run id's which is later used for evaluation phase.
+To harness the capabilities of **local execution**, follow these installation steps:
1. **Clone the Repository**: Begin by cloning the template's repository from its [GitHub repository](https://github.com/microsoft/llmops-promptflow-template.git).
5. Bring or write your flows into the template based on documentation [here](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/how_to_onboard_new_flows.md).
## Next steps
-* [LLMOps with Prompt flow template](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/Azure_devops_how_to_setup.md) on GitHub
+* [LLMOps with Prompt flow template](https://github.com/microsoft/llmops-promptflow-template/) on GitHub
+* [LLMOps with Prompt flow template documentation](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/Azure_devops_how_to_setup.md) on GitHub
* [FAQs for LLMOps with Prompt flow template](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/faqs.md)
* [Prompt flow open source repository](https://github.com/microsoft/promptflow)
* [Install and set up Python SDK v2](/python/api/overview/azure/ai-ml-readme)