Commit 7663a8a

fixes for comments raised

1 parent 0e571a2 commit 7663a8a

File tree

3 files changed: +58 -48 lines changed


articles/ai-studio/how-to/llmops-azure-devops-promptflow.md renamed to articles/ai-studio/how-to/llmops-azure-devops-prompt-flow.md

Lines changed: 24 additions & 23 deletions
```diff
@@ -38,28 +38,28 @@ LLMOps with prompt flow is a "LLMOps template and guidance" to help you build LL
 
 - It supports pure **python based Evaluation** as well using promptflow-evals package.
 
-- It should be used for INNER-LOOP Experimentation and Evaluation.
+- It supports INNER-LOOP Experimentation and Evaluation.
 
-- It should be used for OUTER-LOOP Deployment and Inferencing.
+- It supports OUTER-LOOP Deployment and Inferencing.
 
-- **Centralized Code Hosting**: This repo supports hosting code for multiple flows based on prompt flow, providing a single repository for all your flows. Think of this platform as a single repository where all your prompt flow code resides. It's like a library for your flows, making it easy to find, access, and collaborate on different projects.
+- It supports **Centralized Code Hosting** for multiple flows based on prompt flow, providing a single repository for all your flows. Think of this platform as a single repository where all your prompt flow code resides. It's like a library for your flows, making it easy to find, access, and collaborate on different projects.
 
-- **Lifecycle Management**: Each flow enjoys its own lifecycle, allowing for smooth transitions from local experimentation to production deployment.
-:::image type="content" source="../media/prompt-flow/llmops/pipeline.png" alt-text="Screenshot of pipeline." lightbox = "../media/prompt-flow/llmops/pipeline.png":::
+- Each flow enjoys its own **Lifecycle Management**, allowing for smooth transitions from local experimentation to production deployment.
+:::image type="content" source="../media/prompt-flow/llmops/workflow.png" alt-text="Screenshot of workflow." lightbox = "../media/prompt-flow/llmops/workflow.png":::
 
-- **Variant and Hyperparameter Experimentation**: Experiment with multiple variants and hyperparameters, evaluating flow variants with ease. Variants and hyperparameters are like ingredients in a recipe. This platform allows you to experiment with different combinations of variants across multiple nodes in a flow.
+- Experiment with multiple **Variant and Hyperparameter Experimentation**, evaluating flow variants with ease. Variants and hyperparameters are like ingredients in a recipe. This platform allows you to experiment with different combinations of variants across multiple nodes in a flow.
 
-- **Multiple Deployment Targets**: The repo supports deployment of flows to **Azure App Services, Kubernetes, Azure Managed computes** driven through configuration ensuring that your flows can scale as needed. It also generates **Docker images** infused with Flow compute session and your flows for deployment to **any target platform and Operating system** supporting Docker.
+- The repo supports deployment of flows to **Azure App Services, Kubernetes, Azure Managed computes** driven through configuration ensuring that your flows can scale as needed. It also generates **Docker images** infused with Flow compute session and your flows for deployment to **any target platform and Operating system** supporting Docker.
 :::image type="content" source="../media/prompt-flow/llmops/endpoints.png" alt-text="Screenshot of endpoints." lightbox = "../media/prompt-flow/llmops/endpoints.png":::
 
-- **A/B Deployment**: Seamlessly implement A/B deployments, enabling you to compare different flow versions effortlessly. Like in traditional A/B testing for websites, this platform facilitates A/B deployment for prompt flow. This means you can effortlessly compare different versions of a flow in a real-world setting to determine which performs best.
+- Seamlessly implement **A/B Deployment**, enabling you to compare different flow versions effortlessly. As in traditional A/B testing for websites, this platform facilitates A/B deployment for prompt flow. This means you can effortlessly compare different versions of a flow in a real-world setting to determine which performs best.
 :::image type="content" source="../media/prompt-flow/llmops/a-b-deployments.png" alt-text="Screenshot of deployments." lightbox = "../media/prompt-flow/llmops/a-b-deployments.png":::
 
-- **Many-to-many dataset/flow relationships**: Accommodate multiple datasets for each standard and evaluation flow, ensuring versatility in flow test and evaluation. The platform is designed to accommodate multiple datasets for each flow.
+- Accommodates **Many-to-many dataset/flow relationships** for each standard and evaluation flow, ensuring versatility in flow test and evaluation. The platform is designed to accommodate multiple datasets for each flow.
 
-- **Conditional Data and Model registration**: The platform creates a new version for dataset in Azure AI Studio Data Asset and flows in model registry only when there's a change in them, not otherwise.
+- It supports **Conditional Data and Model registration** by creating a new version for dataset in Azure AI Studio Data Asset and flows in model registry only when there's a change in them, not otherwise.
 
-- **Comprehensive Reporting**: Generate detailed reports for each variant configuration, allowing you to make informed decisions. Provides detailed Metric collection, experiment, and variant bulk runs for all runs and experiments, enabling data-driven decisions in csv as well as HTML files.
+- Generates **Comprehensive Reporting** for each **variant configuration**, allowing you to make informed decisions. Provides detailed Metric collection, experiment, and variant bulk runs for all runs and experiments, enabling data-driven decisions in csv as well as HTML files.
 :::image type="content" source="../media/prompt-flow/llmops/variants.png" alt-text="Screenshot of flow variants report." lightbox = "../media/prompt-flow/llmops/variants.png":::
 
 Other features for customization:
```

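The **A/B Deployment** bullet in the hunk above maps onto splitting traffic between two deployments of a single online endpoint. A minimal sketch with the Azure ML CLI v2, assuming hypothetical `blue` and `green` deployments already exist; the endpoint, deployment, resource group, and workspace names below are placeholders, not values from the template:

```bash
# Shift 10% of traffic to the candidate deployment while the current one keeps 90%.
# All names and percentages are placeholders for illustration only.
az ml online-endpoint update \
  --name my-flow-endpoint \
  --traffic "blue=90 green=10" \
  --resource-group my-rg \
  --workspace-name my-project
```
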
```diff
@@ -79,13 +79,13 @@ LLMOps with prompt flow provides capabilities for both simple and complex LLM-in
 
 The lifecycle comprises four distinct stages:
 
-- **Initialization:** Clearly define the business objective, gather relevant data samples, establish a basic prompt structure, and craft a flow that enhances its capabilities.
+1. **Initialization:** Clearly define the business objective, gather relevant data samples, establish a basic prompt structure, and craft a flow that enhances its capabilities.
 
-- **Experimentation:** Apply the flow to sample data, assess the prompt's performance, and refine the flow as needed. Continuously iterate until satisfied with the results.
+2. **Experimentation:** Apply the flow to sample data, assess the prompt's performance, and refine the flow as needed. Continuously iterate until satisfied with the results.
 
-- **Evaluation & Refinement:** Benchmark the flow's performance using a larger dataset, evaluate the prompt's effectiveness, and make refinements accordingly. Progress to the next stage if the results meet the desired standards.
+3. **Evaluation & Refinement:** Benchmark the flow's performance using a larger dataset, evaluate the prompt's effectiveness, and make refinements accordingly. Progress to the next stage if the results meet the desired standards.
 
-- **Deployment:** Optimize the flow for efficiency and effectiveness, deploy it in a production environment including A/B deployment, monitor its performance, gather user feedback, and use this information to further enhance the flow.
+4. **Deployment:** Optimize the flow for efficiency and effectiveness, deploy it in a production environment including A/B deployment, monitor its performance, gather user feedback, and use this information to further enhance the flow.
 
 By adhering to this structured methodology, prompt flow empowers you to confidently develop, rigorously test, fine-tune, and deploy flows, leading to the creation of robust and sophisticated AI applications.
 
```

```diff
@@ -111,13 +111,13 @@ The repository for this article is available at [LLMOps with Prompt flow templat
 4. The merge to Main triggers the build and release process for the Development environment. Specifically:
 
 a. The CI pipeline is triggered from the merge to Main. The CI pipeline performs all the steps done in the PR pipeline, and the following steps:
-- Experimentation flow
-- Evaluation flow
-- Registers the flows in the AI Studio Registry when changes are detected
+1. Experimentation flow
+2. Evaluation flow
+3. Registers the flows in the AI Studio Registry when changes are detected
 b. The CD pipeline is triggered after the completion of the CI pipeline. This flow performs the following steps:
-- Deploys the flow from the AI Studio registry to a AI Studio deployment
-- Runs integration tests that target the online endpoint
-- Runs smoke tests that target the online endpoint
+1. Deploys the flow from the AI Studio registry to a AI Studio deployment
+2. Runs integration tests that target the online endpoint
+3. Runs smoke tests that target the online endpoint
 
 5. An approval process is built into the release promotion process – upon approval, the CI & CD processes described in steps 4.a. & 4.b. are repeated, targeting the Test environment. Steps 4.a. and 4.b. are the same, except that user acceptance tests are run after the smoke tests in the Test environment.
 
```

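The CD steps in the hunk above end with smoke tests against the online endpoint. A smoke test can be as small as a single scoring call; the sketch below uses the Azure ML CLI v2 and is illustrative only, not the template's actual test code, with the endpoint name, request file, and workspace details as placeholders:

```bash
# Minimal smoke test: send one sample request to the deployed online endpoint.
# The command returns a nonzero exit code on failure, which fails the pipeline step.
# All names below are placeholders, not values from the template.
az ml online-endpoint invoke \
  --name my-flow-endpoint \
  --request-file sample_request.json \
  --resource-group my-rg \
  --workspace-name my-project
```
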
```diff
@@ -224,7 +224,7 @@ The test outputs should be similar to ones shown at [here](https://github.com/mi
 
 ## Local execution
 
-Experiments can be executed using the prompt_pipeline.py python script locally. The script takes the experiment.yaml file as input and runs the evaluations defined in the experiment.yaml file along with use case name and environment name. This generates RUN_ID.txt file containing the run id's which is later used for evaluation phase.
+To harness the capabilities of the **local execution**, follow these installation steps:
 
 1. **Clone the Repository**: Begin by cloning the template's repository from its [GitHub repository](https://github.com/microsoft/llmops-promptflow-template.git).
 
```

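The paragraph removed in the hunk above described the local experimentation loop: prompt_pipeline.py takes experiment.yaml plus a use case name and environment name, runs the defined evaluations, and writes the resulting run IDs to a run ID file (run_id.txt in the command below) that the evaluation phase consumes. A rough sketch of that loop follows; the prompt_pipeline flag names are assumptions inferred from that description, not the template's verified CLI, while the prompt_eval command mirrors the context line of the next hunk:

```bash
# Hypothetical local experimentation run; the flag names for prompt_pipeline are
# assumptions based on the removed description, not the documented interface.
python -m llmops.common.prompt_pipeline \
  --subscription_id <subscription-id> \
  --base_path <use-case-folder> \
  --env_name dev            # writes the bulk-run IDs to run_id.txt

# Evaluation phase, reusing the generated run IDs (as shown in the diff below).
python -m llmops.common.prompt_eval \
  --run_id run_id.txt \
  --subscription_id <subscription-id>
```
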
```diff
@@ -276,7 +276,8 @@ python -m llmops.common.prompt_eval --run_id run_id.txt --subscription_id xxxxx
 5. Bring or write your flows into the template based on documentation [here](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/how_to_onboard_new_flows.md).
 
 ## Next steps
-* [LLMOps with Prompt flow template](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/Azure_devops_how_to_setup.md) on GitHub
+* [LLMOps with Prompt flow template](https://github.com/microsoft/llmops-promptflow-template/) on GitHub
+* [LLMOps with Prompt flow template documentation](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/Azure_devops_how_to_setup.md) on GitHub
 * [FAQS for LLMOps with Prompt flow template](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/faqs.md)
 * [Prompt flow open source repository](https://github.com/microsoft/promptflow)
 * [Install and set up Python SDK v2](/python/api/overview/azure/ai-ml-readme)
```
