# Hands-on activity: Train and customize a dialogue model in Azure OpenAI Studio

**Time:** 20–25 min

**Objective:**

Enhance the chatbot created in module 1 by integrating a custom-trained dialogue model using Azure OpenAI Studio. Use domain-specific data to improve response accuracy, refine tone, and simulate memory through prompt engineering. Test and iterate on the chatbot's performance in real time.

**Materials needed:**

- Laptop or desktop with internet access
- Azure OpenAI Studio access
- Dataset of sample dialogues or FAQs (provided or created by the learner)

## Part 1: Upload and train with custom data

1. **Prepare a dataset**
   - Use a collection of relevant user queries and ideal responses (such as customer FAQs, support logs, or product descriptions).
   - Format the data in CSV or plain text, with clear input/output pairs.

2. **Upload the dataset**
   - In Azure OpenAI Studio, navigate to your chatbot project.
   - Upload your dataset under the **Files** section and connect it to your deployment using the Playground.

3. **Apply supervised learning via prompt examples**
   - Provide a few sample dialogues using system + user + assistant messages to demonstrate ideal behavior.

   Example:

   ```
   User: "How do I reset my password?"
   Assistant: "To reset your password, go to the login page and click 'Forgot Password.' Follow the instructions sent to your email."
   ```

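The three steps above can be sketched end to end in Python: write the input/output pairs as a small CSV file, then convert them into system/user/assistant few-shot messages in the shape used by the Playground and the Chat Completions API. This is a minimal sketch under assumptions: the filename `faq_dataset.csv`, the `input`/`output` column names, and the sample pairs are illustrative choices, not a format Azure OpenAI Studio mandates.

```python
import csv

# Step 1: a small dataset of input/output pairs (illustrative content).
rows = [
    {"input": "How do I reset my password?",
     "output": "To reset your password, go to the login page and click "
               "'Forgot Password.' Follow the instructions sent to your email."},
    {"input": "Do you ship internationally?",
     "output": "Yes, we ship to most countries; see the shipping page for rates."},
]

with open("faq_dataset.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["input", "output"])
    writer.writeheader()
    writer.writerows(rows)

# Step 3: turn the stored pairs into few-shot chat messages. The system
# message sets the role; each user/assistant pair demonstrates ideal behavior.
with open("faq_dataset.csv", newline="", encoding="utf-8") as f:
    pairs = list(csv.DictReader(f))

messages = [{"role": "system",
             "content": "You are a helpful customer support assistant."}]
for pair in pairs:
    messages.append({"role": "user", "content": pair["input"]})
    messages.append({"role": "assistant", "content": pair["output"]})

# Append the live question last; the model imitates the examples above.
messages.append({"role": "user", "content": "I can't log in to my account."})
```

The same `messages` list is what you would paste into the Playground's example turns or send to your deployment, so keeping the dataset in simple input/output pairs makes this conversion trivial.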
## Part 2: Customize tone and interaction style

1. **Set a system prompt**
   - Define the assistant's role and tone.

   Example:

   *"You are a friendly and professional assistant for an online bookstore. Respond with clear, concise, and warm answers."*

2. **Simulate context awareness**
   - Create a prompt that includes conversation history, enabling the model to handle follow-up questions.

   Example:

   *"The user previously asked about shipping policies. Now they are asking, 'How long does express shipping take?'"*

3. **Test and iterate**
   - Interact with your chatbot in the Playground.
   - Ask questions covered in your dataset and test how it handles follow-ups.
   - Tweak your examples or prompt instructions to improve continuity and engagement.

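The pattern above — a fixed system prompt plus replayed conversation history — can be sketched as a small helper. This is an assumption-laden sketch, not an Azure API: `MAX_TURNS`, `build_messages`, and the hard-coded first exchange are illustrative, and the assistant turn is canned so the example runs without a network call.

```python
# A fixed system prompt defines the tone; recent turns are replayed with
# every request so the model can resolve follow-up questions.
SYSTEM_PROMPT = ("You are a friendly and professional assistant for an "
                 "online bookstore. Respond with clear, concise, and "
                 "warm answers.")
MAX_TURNS = 6  # arbitrary window; tune it to fit your model's context limit

def build_messages(history, new_user_message):
    """Assemble system prompt + recent history + the new question."""
    recent = history[-MAX_TURNS:]
    return ([{"role": "system", "content": SYSTEM_PROMPT}]
            + recent
            + [{"role": "user", "content": new_user_message}])

# First exchange (the assistant turn would normally come from the model;
# it is hard-coded here so the sketch is self-contained).
history = [
    {"role": "user", "content": "What are your shipping policies?"},
    {"role": "assistant",
     "content": "We offer standard (3-5 days) and express shipping."},
]

# The follow-up question now arrives with its context attached.
messages = build_messages(history, "How long does express shipping take?")
```

In a real deployment you would append each model reply back onto `history`; the sliding window keeps the request within the model's context limit while preserving the illusion of memory.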
## Part 3: Evaluate and improve

1. **Identify strengths and gaps**
   - Does the chatbot handle follow-ups appropriately?
   - Does it reflect the desired tone and style?
   - Are any answers too vague, repetitive, or off-topic?

2. **Refine and re-test**
   - Update the dataset, adjust the prompt, or provide additional sample dialogues.
   - Test variations in the Playground until the responses align with expectations.

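The evaluate-and-refine loop above is easier to track if the gap check is repeatable. One lightweight option is a keyword spot check over Playground transcripts; in this sketch, `chatbot_reply` is a stub standing in for responses you paste from the Playground, and the expected keywords are illustrative.

```python
# Expected keywords per question; replace the stubbed replies below
# with real transcripts copied from the Playground after each change.
test_cases = [
    ("How do I reset my password?", ["login", "Forgot Password"]),
    ("How long does express shipping take?", ["express"]),
]

def chatbot_reply(question):
    # Placeholder stub: paste actual Playground responses here.
    canned = {
        "How do I reset my password?":
            "To reset it, go to the login page and click 'Forgot Password.'",
        "How long does express shipping take?":
            "Express shipping usually takes 1 to 2 business days.",
    }
    return canned[question]

def spot_check(cases):
    """Return (question, missing keywords) for replies that fall short."""
    failures = []
    for question, keywords in cases:
        reply = chatbot_reply(question).lower()
        missing = [k for k in keywords if k.lower() not in reply]
        if missing:
            failures.append((question, missing))
    return failures

failures = spot_check(test_cases)
```

Re-running the same checks after every dataset or prompt tweak makes it obvious whether a refinement fixed a gap or introduced a new one.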
## Expected outcome

Learners will transform a basic chatbot into a refined, domain-specific assistant by uploading a custom dataset, configuring system prompts, and simulating context awareness. This activity demonstrates how Azure OpenAI Studio can be used to train and refine dialogue behavior to better meet user needs and organizational goals.