Commit df72018: Final tweaks

1 parent b1e0265 commit df72018

File tree

1 file changed

+6
-6
lines changed

articles/aks/concepts-fine-tune-language-models.md

Lines changed: 6 additions & 6 deletions

@@ -2,14 +2,14 @@
 title: Concepts - Fine-tuning language models for AI and machine learning workflows
 description: Learn about how you can customize language models to use in your AI and machine learning workflows on Azure Kubernetes Service (AKS).
 ms.topic: conceptual
-ms.date: 07/01/2024
+ms.date: 07/15/2024
 author: schaffererin
 ms.author: schaffererin
 ---
 
 # Concepts - Fine-tuning language models for AI and machine learning workflows
 
-In this article, you learn about fine-tuning [language models][language-models], including some common methods and how applying the results to your models can improve the performance of your AI and machine learning workflows on Azure Kubernetes Service (AKS).
+In this article, you learn about fine-tuning [language models][language-models], including some common methods and how applying the tuning results can improve the performance of your AI and machine learning workflows on Azure Kubernetes Service (AKS).
 
 ## Pre-trained language models
 
@@ -23,7 +23,7 @@ The following table lists some pros and cons of using PLMs in your AI and machin
 
 | Pros | Cons |
 |------|------|
-| • Get started quickly with development and deployment in your machine learning lifecycle. <br> • Avoid heavy compute costs associated with model training.. <br> • Reduces the need to store large, labeled datasets. | • Might provide generalized or outdated responses based on pre-training data sources. <br> • Might not be suitable for all tasks or domains. <br> • Performance can vary depending on inferencing context. |
+| • Get started quickly with deployment in your machine learning lifecycle. <br> • Avoid heavy compute costs associated with model training. <br> • Reduces the need to store large, labeled datasets. | • Might provide generalized or outdated responses based on pre-training data sources. <br> • Might not be suitable for all tasks or domains. <br> • Performance can vary depending on inferencing context. |
 
 ## Fine-tuning methods
 
@@ -39,15 +39,15 @@ The following table lists some pros and cons of using PLMs in your AI and machin
 
 ## Experiment with fine-tuning language models on AKS
 
-Kubernetes AI Toolchain Operator (KAITO) is an open-source operator that automates small and large language model deployments in Kubernetes clusters. The KAITO add-on for AKS simplifies onboarding and reduces the time-to-inference for open-source models on your AKS clusters. The add-on automatically provisions right-sized GPU nodes and sets up the associated interference server as an endpoint server to your chosen model.
+Kubernetes AI Toolchain Operator (KAITO) is an open-source operator that automates small and large language model deployments in Kubernetes clusters. The AI toolchain operator add-on leverages KAITO to simplify onboarding, save on infrastructure costs, and reduce the time-to-inference for open-source models on an AKS cluster. The add-on automatically provisions right-sized GPU nodes and sets up the associated inference server as an endpoint server to your chosen model.
 
-In the upcoming open source KAITO release, you can efficiently fine-tune supported MIT and Apache 2.0 licensed models with the following features:
+With KAITO version 0.3.0 or later, you can efficiently fine-tune supported MIT and Apache 2.0 licensed models with the following features:
 
 * Store your retraining data as a container image in a private container registry.
 * Host the new adapter layer image in a private container registry.
 * Efficiently pull the image for inferencing with adapter layers in new scenarios.
 
-For guidance on getting started with fine-tuning on KAITO, see the [Kaito Tuning Workspace API documentation][kaito-fine-tuning]. To learn more about deploying language models with KAITO in your AKS clusters, see the [KAITO model GitHub repository][kaito-repo].
+For guidance on getting started with fine-tuning on KAITO, see the [Kaito Tuning Workspace API documentation][kaito-fine-tuning]. To learn more about deploying language models with KAITO in your AKS clusters, see the [KAITO model GitHub repository][kaito-repo].
 
 ## Next steps
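The three tuning features added in this diff (dataset as a container image, adapter output image, registry pull for inference) map onto a single KAITO `Workspace` manifest. The following is a hypothetical sketch assembled from memory of the KAITO Tuning Workspace API examples, not from this commit: the preset name, GPU SKU, registry paths, and secret name are all placeholders, and field names should be verified against the linked Kaito Tuning Workspace API documentation before use.

```yaml
# Hypothetical KAITO tuning Workspace sketch (assumed v1alpha1 schema;
# verify all field names against the Kaito Tuning Workspace API docs).
apiVersion: kaito.sh/v1alpha1
kind: Workspace
metadata:
  name: workspace-tuning-example
resource:
  instanceType: "Standard_NC6s_v3"      # placeholder GPU SKU; KAITO provisions a right-sized node
  labelSelector:
    matchLabels:
      app: tuning-example
tuning:
  preset:
    name: phi-3-mini-4k-instruct        # placeholder; must be a supported MIT/Apache 2.0 preset
  method: qlora                          # parameter-efficient fine-tuning method
  input:
    image: myregistry.azurecr.io/train-data:v1   # retraining data stored as a container image
    imagePullSecrets:
      - my-registry-secret
  output:
    image: myregistry.azurecr.io/adapter:v1      # new adapter layer hosted in a private registry
    imagePushSecret: my-registry-secret
```

At inference time, the adapter image produced in `output.image` can then be pulled and layered onto the base model in a new workspace, which is the "efficiently pull the image for inferencing" scenario the bullet list describes.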
