Commit 73fbe2b

Merge pull request #6552 from voutilad/ft-papercut-1
Update model table to include fine tuning for GPT-4o series.
2 parents 311ea0c + 4764555

1 file changed: +11 −9 lines changed

articles/ai-foundry/concepts/fine-tuning-overview.md

Lines changed: 11 additions & 9 deletions
@@ -72,16 +72,18 @@ Follow this link to view and download [example datasets](https://github.com/Azur
 ## Model Comparison Table
 This table provides an overview of the models available
 
-| Model | Modalities | Techniques | Strengths |
-|----------------------|---------------|--------------|--------------------------------------|
+| Model | Modalities | Techniques | Strengths |
+|----------------------|---------------|--------------|--------------------------------------------------------------------|
 | GPT 4.1 | Text, Vision | SFT, DPO | Superior performance on sophisticated tasks, nuanced understanding |
-| GPT 4.1-mini | Text | SFT, DPO | Fast iteration, cost-effective, good for simple tasks |
-| GPT 4.1-nano | Text | SFT, DPO | Fast, cost-effective, and minimal resource usage |
-| o4-mini | Text | RFT | Reasoning model suited for complex logical tasks |
-| Phi 4 | Text | SFT | Cost effective option for simpler tasks |
-| Ministral 3B | Text | SFT | Low-cost option for faster iteration |
-| Mistral Nemo | Text | SFT | Balance between size and capability |
-| Mistral Large (2411) | Text | SFT | Most capable Mistral model, better for complex tasks |
+| GPT 4.1-mini | Text | SFT, DPO | Fast iteration, cost-effective, good for simple tasks |
+| GPT 4.1-nano | Text | SFT, DPO | Fast, cost-effective, and minimal resource usage |
+| GPT 4o | Text, Vision | SFT, DPO | Previous generation flagship model for complex tasks |
+| GPT 4o-mini | Text | SFT | Previous generation small model for simple tasks |
+| o4-mini | Text | RFT | Reasoning model suited for complex logical tasks |
+| Phi 4 | Text | SFT | Cost effective option for simpler tasks |
+| Ministral 3B | Text | SFT | Low-cost option for faster iteration |
+| Mistral Nemo | Text | SFT | Balance between size and capability |
+| Mistral Large (2411) | Text | SFT | Most capable Mistral model, better for complex tasks |
 
 ## Get Started with Fine Tuning
 
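The SFT and DPO techniques listed in the table consume chat-formatted JSONL training data (one JSON object per line, each with a `messages` array). As a minimal sketch — the message contents below are illustrative, not part of this commit — one SFT training record can be built and serialized like this:

```python
import json

# One SFT training record in the chat-style JSONL format that
# OpenAI-compatible fine-tuning endpoints accept.
record = {
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Which technique suits GPT 4o-mini?"},
        {"role": "assistant", "content": "GPT 4o-mini supports supervised fine-tuning (SFT)."},
    ]
}

# A training file is many such lines, one serialized record per line.
line = json.dumps(record)
roles = [m["role"] for m in json.loads(line)["messages"]]
print(roles)  # ['system', 'user', 'assistant']
```

A full training file simply repeats this pattern, and the linked example datasets in the article follow the same shape.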