Commit d3aba10

update
1 parent bc0cfbe commit d3aba10

1 file changed: +9 -23 lines changed


articles/ai-foundry/openai/how-to/fine-tuning-cost-management.md

Lines changed: 9 additions & 23 deletions
@@ -15,8 +15,8 @@ show_latex: true
 
 Fine-tuning can be intimidating: unlike base models, where you're just paying for input and output tokens for inferencing, fine-tuning requires training your custom models and dealing with hosting. This guide is intended to help you better understand the costs of fine-tuning and how to manage them.
 
-> [!NOTE]
-> The prices in this article are for example purposes only. In some cases they may match current pricing, but you should refer to the official [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service) for exact pricing details to use in the formulas provided in this article.
+> [!IMPORTANT]
+> The numbers in this article are for example purposes only. You should always refer to the official [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service) for pricing details to use in the formulas provided in this article.
 
 ## Upfront investment - training your model
 
@@ -39,7 +39,7 @@ We offer both regional and global training for SFT; if you don't need data resid
 > [!IMPORTANT]
 > We don't charge you for time spent in queue, failed jobs, jobs canceled prior to training beginning, or data safety checks.
 
-#### Example: Supervised Fine-Tuning
+#### Example: Supervised fine-tuning (SFT)
 
 Projecting the costs to fine-tune a model that takes natural language and outputs code.
 
@@ -57,7 +57,7 @@ $$
 \$2 \div 1\text{M tokens} \times 1\text{M training tokens} \times 2\text{ epochs} = \$4
 $$
 
-### Reinforcement Fine-Tuning (RFT)
+### Reinforcement fine-tuning (RFT)
 
 The cost is determined by the time spent on training the model for Reinforcement fine tuning technique.
 
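As a quick cross-check of the supervised fine-tuning example above, here is a minimal sketch of the same arithmetic. The $2-per-million-token rate, 1M training tokens, and 2 epochs are the illustrative figures from the article, not current pricing.

```python
# Sketch of the SFT training-cost formula from the article:
# cost = price per 1M training tokens x training tokens (in millions) x epochs.
# The rate below is the article's example figure, not live pricing.

def sft_training_cost(price_per_million: float, training_tokens: int, epochs: int) -> float:
    """Estimate supervised fine-tuning training cost in dollars."""
    return price_per_million * (training_tokens / 1_000_000) * epochs

if __name__ == "__main__":
    # $2 / 1M tokens x 1M training tokens x 2 epochs = $4
    print(sft_training_cost(price_per_million=2.00, training_tokens=1_000_000, epochs=2))  # 4.0
```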
@@ -175,27 +175,13 @@ Let's assume your chatbot handles 10,000 customer conversations in its first mon
 - Hosting charges: $1.70 per hour
 - Total Input: The user queries sent to the model total 20 million tokens.
 - Total Output: The model's responses to users total 40 million tokens.
+- Input Cost Calculation: 20 × $1.10 = $22.00
+- Output Cost Calculation: 40 × $4.40 = $176.00
 
-$$
-\textbf{Input Cost Calculation:} \quad 20 \times \$1.10 = \$22.00
-$$
+Your total operational cost for the month would be:
 
-$$
-\textbf{Output Cost Calculation:} \quad 40 \times \$4.40 = \$176.00
-$$
+Total Cost = Hosting charges + Token usage cost
 
 $$
-\textbf{Your total operational cost for the month would be:}
-$$
-
-$$
-= \text{Hosting charges} + \text{Token usage cost}
-$$
-
-$$
-= (1.70 \times 30 \times 24) + (22 + 176) = 1422.00
-$$
-
+\text{Total Cost} = (1.70 \times 30 \times 24) + (22 + 176) = \$1422.00
 $$
-\text{Total Monthly Cost} = \$1422.00
-$$
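The simplified formula this commit settles on can be reproduced with a short sketch. The hosting rate, token volumes, and per-million-token prices below are the illustrative example numbers from the article (assuming a 30-day month), not official pricing.

```python
# Monthly operational cost = hosting charges + token usage cost,
# using the example figures from the article (not live pricing).

HOURS_PER_MONTH = 30 * 24          # assuming a 30-day month

hosting_per_hour = 1.70            # $ per hour for the hosted fine-tuned model
input_price_per_million = 1.10     # $ per 1M input tokens
output_price_per_million = 4.40    # $ per 1M output tokens
input_tokens_millions = 20         # 20M input tokens from user queries
output_tokens_millions = 40        # 40M output tokens in model responses

hosting_cost = hosting_per_hour * HOURS_PER_MONTH                 # 1224.00
input_cost = input_tokens_millions * input_price_per_million      # 22.00
output_cost = output_tokens_millions * output_price_per_million   # 176.00

total_cost = hosting_cost + input_cost + output_cost
print(f"Total monthly cost: ${total_cost:.2f}")                   # $1422.00
```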
