Commit 7788904

update prompt builder doc with comment
1 parent c3d1c60 commit 7788904

File tree

1 file changed: +36 -36 lines changed


docs/intelligentapps/promptbuilder.md

Lines changed: 36 additions & 36 deletions
@@ -2,65 +2,65 @@
 Order: 4
 Area: intelligentapps
 TOCTitle: Prompt Builder
-ContentId:
+ContentId: bd3d7555-3d84-4500-ae95-6dcd39641af0
 PageTitle: Prompt Builder
-DateApproved:
+DateApproved: 04/09/2025
 MetaDescription: Get Started with creating, iterating and optimizing your prompts in AI Toolkit.
 ---
-# Prompt Engineering in AI Toolkit
+# Prompt engineering in AI Toolkit
 
-Prompt Builder in AI Toolkit streamlines the prompt engineering workflow by generating starter prompts, helping you iterate and refine with each run, breaking down complex tasks through prompt chaining and structured outputs, and providing easy access to code for seamless LLM integration via APIs.
+Prompt builder in AI Toolkit streamlines the prompt engineering workflow by generating starter prompts, helping you iterate and refine with each run, breaking down complex tasks through prompt chaining and structured outputs, and providing easy access to code for seamless LLM integration via APIs.
 
-![PromptBuilder demo](./images/promptbuilder/promptbuilder.gif)
+![Getting started with prompt builder](./images/promptbuilder/promptbuilder.gif)
 
 ## Create, edit and test prompts
 
-To access the Prompt Builder:
-- In AI Toolkit view, select **Prompt Builder**
+To access the prompt builder, use one of these options:
+- In the AI Toolkit view, select **Prompt Builder**
 - Select **Try in Prompt Builder** from a model card in the model catalog
 
 To test a prompt in the prompt builder, follow these steps:
-1. In **Models**, select a model from the dropdown list or click **Browser models** to add another model from model catalog.
-![select_model](./images/promptbuilder/s1_models.png)
-2. Enter **System prompt** and **User prompt**
+1. In **Models**, select a model from the dropdown list, or select **Browse models** to add another model from the model catalog.
+![select a model](./images/promptbuilder/s1_models.png)
+2. Enter a **User prompt** and optionally enter a **System prompt**
+The user prompt is the input that you want to send to the model. The optional system prompt is used to provide instructions with relevant context to guide the model response.
+
 > [!TIP]
-> - System prompt is optional, it is used to provide instructions with relevant context to guide the model response
-> - User prompt is mandatory, it is the input you want to send to the model
-> - If you don't know how to input these prompts, you can simply describe your project idea in natural language and let the AI-powered feature generate prompts for you to experiment with
-> ![generate_prompts](./images/promptbuilder/generate_prompt.gif)
-3. Click **Run** to send the prompts to the selected model
-4. Optionally, you can click **Add Prompts** to add more User and Assistant prompts to the conversation, or **Use Response as Assistant Prompt** as the history and context you send to the model to further guide the model's behavior
-5. You can repeat the above steps to iterate over your prompts by observing the model response and making changes to the prompts
+> If you don't know how to input these prompts, you can describe your project idea in natural language and let the AI-powered feature generate prompts for you to experiment with.
+> ![generate prompts with natural language](./images/promptbuilder/generate_prompt.gif)
+3. Select **Run** to send the prompts to the selected model
+4. Optionally, select **Add Prompts** to add more user and assistant prompts to the conversation, or select **Use Response as Assistant Prompt** as the history and context you send to the model to further guide the model's behavior
+5. Repeat the previous steps to iterate over your prompts by observing the model response and making changes to the prompts
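
The system, user, and assistant prompts in the steps above map directly onto the chat message roles that most model APIs consume. The following is only a rough hand-written sketch, not output of Prompt Builder, and the prompt text is invented for illustration; it shows how a run that reuses a previous response as an assistant prompt corresponds to a message list:

```python
# Illustrative only: how system, user, and assistant prompts map to chat message roles.
# The prompt text below is invented for the example.
messages = [
    # Optional system prompt: instructions and context that guide the model's responses.
    {"role": "system", "content": "You are a concise assistant that answers in bullet points."},
    # User prompt: the input you want to send to the model.
    {"role": "user", "content": "Summarize the benefits of prompt chaining."},
    # A previous model response kept as an assistant prompt ("Use Response as Assistant Prompt"),
    # so it becomes history and context that further guides the model's behavior.
    {"role": "assistant", "content": "- Breaks a complex task into smaller steps\n- Makes each output easier to check"},
    # Follow-up user prompt added with "Add Prompts" that builds on that history.
    {"role": "user", "content": "Now give one concrete example."},
]
```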
 
 ## Structured output
 
 Structured output support helps you design prompts to deliver outputs in a structured, predictable format.
-![structured_output](./images/promptbuilder/structured_output.gif)
+![Use structured output](./images/promptbuilder/structured_output.gif)
 
 To test a prompt in the prompt builder, follow these steps:
 
-1. Click on **Format** dropdown In **Response** area, and select **json_schema**
-2. Click on **Prepare schema**
-3. From the command palette, you have two options, you can either use your own schema by selecting **Select local file**, or use a predefined schema by selecting **Use an example**
-4. If you proceed with an example, you can select a schema from the dropdown list
-5. Click on **Run** to send the prompts to the selected model
-6. You can also edit the schema by clicking on **Edit**
-![edit_schema](./images/promptbuilder/edit_schema.png)
+1. Select the **Format** dropdown in the **Response** area, and select **json_schema**
+2. Select **Prepare schema**
+3. From the Command Palette, either select **Select local file** to use your own schema, or select **Use an example** to use a predefined schema
+If you proceed with an example, you can select a schema from the dropdown list
+5. Select **Run** to send the prompts to the selected model
+6. You can also edit the schema by selecting **Edit**
+![edit schema](./images/promptbuilder/edit_schema.png)
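
When you choose **Select local file** for the **json_schema** format, the file you supply is a JSON Schema describing the shape of the response. The sketch below is only an assumption of what such a file could contain: the `support_ticket` name, every field, the file name, and the outer wrapper are invented for illustration and may differ from what AI Toolkit's predefined examples use.

```python
import json

# Hypothetical JSON Schema for a structured response; all names below are invented
# for illustration and are not a schema that ships with AI Toolkit.
support_ticket_schema = {
    "name": "support_ticket",
    "schema": {
        "type": "object",
        "properties": {
            "title": {"type": "string", "description": "Short summary of the issue"},
            "severity": {"type": "string", "enum": ["low", "medium", "high"]},
            "steps_to_reproduce": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["title", "severity"],
        "additionalProperties": False,
    },
}

# Write it to disk so it can be picked through "Select local file" when preparing the schema.
with open("support_ticket_schema.json", "w", encoding="utf-8") as f:
    json.dump(support_ticket_schema, f, indent=2)
```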
 
 ## Integrate prompt engineering into your application
-After experimenting with models and prompts, you can get into coding right away by viewing ready-to-use Python code automatically generated
-![view_code](./images/promptbuilder/view_code.gif)
 
-To access the code, follow these steps:
-1. Click on **View Code**
-2. Select the inference SDK you want to use if it's hosted by GitHub
-> [!TIP]
-> AI Toolkit will generate the code for the corresponding model you selected using the provider's client SDK. For models hosted by GitHub, you have the option to select the inference SDK you want to use: [`Azure AI Inference SDK`](https://learn.microsoft.com/python/api/overview/azure/ai-inference-readme?view=azure-python-preview) or the SDK from model provider such as [OpenAI SDK](https://platform.openai.com/docs/libraries) or [Mistral API](https://docs.mistral.ai/api).
-3. You can view the generated code snippet in a new file window and copy them into your application.
-> [!TIP]
-> To authenticate with the model you will normally need API key from the provider. To access models hosted by GitHub, [generate a personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens) (PAT) in your GitHub settings
+After experimenting with models and prompts, you can get into coding right away with the automatically generated Python code.
+![view code](./images/promptbuilder/view_code.gif)
+To view the Python code, follow these steps:
+1. Select **View Code**
+1. Select the inference SDK you want to use if it's hosted by GitHub
+
+AI Toolkit generates the code for the model you selected by using the provider's client SDK. For models hosted by GitHub, you have the option to select the inference SDK you want to use: [Azure AI Inference SDK](https://learn.microsoft.com/python/api/overview/azure/ai-inference-readme?view=azure-python-preview) or the SDK from the model provider, such as [OpenAI SDK](https://platform.openai.com/docs/libraries) or [Mistral API](https://docs.mistral.ai/api).
+1. The generated code snippet is shown in a new editor, where you can copy it into your application.
+1. You can view the generated code snippet in a new file window and copy them into your application.
+> To authenticate with the model you usually need an API key from the provider. To access models hosted by GitHub, [generate a personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens) (PAT) in your GitHub settings
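
For reference, a generated snippet for a GitHub-hosted model using the Azure AI Inference SDK tends to follow the pattern below. This is only a hedged sketch, not the exact code AI Toolkit produces: the endpoint URL, the model name, and the `GITHUB_TOKEN` environment variable are assumptions for this example, so prefer the values that **View Code** generates for your selected model.

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# Authenticate with a GitHub personal access token (PAT) exposed as an environment variable.
# The endpoint and model name are example values; use the ones from the generated code.
client = ChatCompletionsClient(
    endpoint="https://models.inference.ai.azure.com",
    credential=AzureKeyCredential(os.environ["GITHUB_TOKEN"]),
)

response = client.complete(
    model="gpt-4o-mini",
    messages=[
        SystemMessage(content="You are a concise assistant that answers in bullet points."),
        UserMessage(content="Summarize the benefits of prompt chaining."),
    ],
)

print(response.choices[0].message.content)
```

Install the SDK with `pip install azure-ai-inference` before running a snippet like this.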
 
 ## Next steps
 
-- [Run a set of prompts](/docs/intelligentapps/bulkrun.md) in an imported dataset, individually or in a full batch
+- [Run an evaluation job](/docs/intelligentapps/evaluation.md) for the popular evaluators
 - [Run evaluation](/docs/intelligentapps/evaluation.md) job for the popular evaluators
