articles/aks/concepts-fine-tune-language-models.md
---
title: Concepts - Fine-tuning language models for AI and machine learning workflows
description: Learn about how you can customize language models to use in your AI and machine learning workflows on Azure Kubernetes Service (AKS).
ms.topic: conceptual
ms.date: 07/15/2024
author: schaffererin
ms.author: schaffererin
---
# Concepts - Fine-tuning language models for AI and machine learning workflows
In this article, you learn about fine-tuning [language models][language-models], including some common methods and how applying the tuning results can improve the performance of your AI and machine learning workflows on Azure Kubernetes Service (AKS).
## Pre-trained language models
The following table lists some pros and cons of using PLMs in your AI and machine learning workflows:
| Pros | Cons |
|------|------|
| • Get started quickly with deployment in your machine learning lifecycle. <br> • Avoid heavy compute costs associated with model training. <br> • Reduces the need to store large, labeled datasets. | • Might provide generalized or outdated responses based on pre-training data sources. <br> • Might not be suitable for all tasks or domains. <br> • Performance can vary depending on inferencing context. |
## Fine-tuning methods
## Experiment with fine-tuning language models on AKS
Kubernetes AI Toolchain Operator (KAITO) is an open-source operator that automates small and large language model deployments in Kubernetes clusters. The AI toolchain operator add-on uses KAITO to simplify onboarding, save on infrastructure costs, and reduce the time-to-inference for open-source models on an AKS cluster. The add-on automatically provisions right-sized GPU nodes and sets up the associated inference server as an endpoint server to your chosen model.
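To illustrate the deployment flow, the following is a minimal sketch of a KAITO inference workspace manifest, based on the open-source KAITO Workspace API. The workspace name, label, GPU VM size, and model preset shown here are illustrative assumptions, not values from this article; check the KAITO documentation for the presets and instance types supported in your environment.

```yaml
# Hypothetical KAITO inference workspace (illustrative values).
apiVersion: kaito.sh/v1alpha1
kind: Workspace
metadata:
  name: workspace-falcon-7b        # assumed name
resource:
  instanceType: "Standard_NC12s_v3" # assumed GPU VM size; KAITO provisions a right-sized node
  labelSelector:
    matchLabels:
      apps: falcon-7b
inference:
  preset:
    name: "falcon-7b"              # assumed open-source model preset
```

Applying a manifest like this (for example, with `kubectl apply -f workspace.yaml`) is what triggers the add-on to provision GPU nodes and expose the model behind an inference endpoint.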
With KAITO version 0.3.0 or later, you can efficiently fine-tune supported MIT and Apache 2.0 licensed models with the following features:
* Store your retraining data as a container image in a private container registry.
* Host the new adapter layer image in a private container registry.
* Efficiently pull the image for inferencing with adapter layers in new scenarios.
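The workflow above — retraining data packaged as a container image in, an adapter-layer image out — can be sketched as a KAITO tuning workspace. This is an illustrative example based on the open-source KAITO tuning API, not a configuration from this article: the model preset, registry paths, VM size, and secret name are all assumptions you would replace with your own values.

```yaml
# Hypothetical KAITO tuning workspace (illustrative values).
apiVersion: kaito.sh/v1alpha1
kind: Workspace
metadata:
  name: workspace-tuning-phi-3     # assumed name
resource:
  instanceType: "Standard_NC6s_v3" # assumed GPU VM size
  labelSelector:
    matchLabels:
      app: tuning-phi-3
tuning:
  preset:
    name: phi-3-mini-4k-instruct   # assumed supported model preset
  method: qlora                    # parameter-efficient fine-tuning method
  input:
    image: myregistry.azurecr.io/tuning-data:v1   # retraining data stored as a container image
  output:
    image: myregistry.azurecr.io/adapter:v1       # new adapter layer image pushed here
    imagePushSecret: myregistry-secret            # assumed registry push secret
```

Because the adapter layer is itself an OCI image in your private registry, inference workspaces can later pull it alongside the base model for new scenarios.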
For guidance on getting started with fine-tuning on KAITO, see the [KAITO Tuning Workspace API documentation][kaito-fine-tuning]. To learn more about deploying language models with KAITO in your AKS clusters, see the [KAITO model GitHub repository][kaito-repo].