
Commit e10ae28

added simple finetune
1 parent 2bbac4e commit e10ae28

7 files changed: +348 -0 lines changed

simple-llm-finetuning/.dockerignore

Lines changed: 5 additions & 0 deletions

*
!/materializers/**
!/pipelines/**
!/steps/**
!/utils/**

simple-llm-finetuning/LICENSE

Lines changed: 15 additions & 0 deletions

Apache Software License 2.0

Copyright (c) ZenML GmbH 2024. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

simple-llm-finetuning/README.md

Lines changed: 180 additions & 0 deletions

# LLM Fine-Tuning with ZenML and Lightning AI Studios

## Overview

In the fast-paced world of AI, the ability to efficiently fine-tune Large Language Models (LLMs) for specific tasks is crucial. This project combines ZenML with Lightning AI Studios to streamline and automate the LLM fine-tuning process, enabling rapid iteration and deployment of task-specific models. It is a toy showcase, but it can be extended for full production use.

## Table of Contents

1. [Introduction](#introduction)
2. [Installation](#installation)
3. [Running the Pipeline](#running-the-pipeline)
4. [Configuration](#configuration)
5. [Running with Remote Stack](#running-with-remote-stack)
6. [Customizing Data Preparation](#customizing-data-preparation)
7. [Project Structure](#project-structure)
8. [Benefits & Future](#benefits--future)
9. [Credits](#credits)

## Introduction

As LLMs such as GPT-4, Llama 3.1, and Mistral become more accessible, companies aim to adapt these models for specialized tasks like customer service chatbots, content generation, and specialized data analysis. This project addresses the challenge of scaling fine-tuning and managing numerous LLM variants by combining Lightning AI Studios with the automation capabilities of ZenML.

### Key Benefits

- **Efficient Fine-Tuning:** Fine-tune models with minimal computational resources.
- **Ease of Management:** Store and distribute adapter weights efficiently.
- **Scalability:** Serve thousands of fine-tuned variants from a single base model.

## Installation

To set up your environment, follow these steps:

```bash
# Set up a Python virtual environment, if you haven't already
python3 -m venv .venv
source .venv/bin/activate

# Install requirements
pip install -r requirements.txt

# Install ZenML and Lightning integrations
pip install zenml
zenml integration install lightning s3 aws -y

# Initialize and connect to a deployed ZenML server
zenml init
zenml connect --url <MYZENMLSERVERURL>
```

## Running the Pipeline

To run the fine-tuning pipeline, use the `run.py` script with the appropriate configuration file:

```shell
python run.py --config configs/config_large_gpu.yaml
```

## Configuration

The fine-tuning process can be configured using YAML files located in the `configs` directory. Here are two examples:

### Example `config_large_gpu.yaml`

```yaml
model:
  name: llm-finetuning-gpt2-large
  description: "Fine-tune GPT-2 on a larger GPU."
  tags:
    - llm
    - finetuning
    - gpt2-large

parameters:
  base_model_id: gpt2-large

steps:
  prepare_data:
    parameters:
      dataset_name: squad
      dataset_size: 1000
      max_length: 512

  finetune:
    parameters:
      num_train_epochs: 3
      per_device_train_batch_size: 8

settings:
  orchestrator.lightning:
    machine_type: A10G
```

### Example `config_small_cpu.yaml`

```yaml
model:
  name: llm-finetuning-distilgpt2-small
  description: "Fine-tune DistilGPT-2 on a smaller CPU machine."
  tags:
    - llm
    - finetuning
    - distilgpt2

parameters:
  base_model_id: distilgpt2

steps:
  prepare_data:
    parameters:
      dataset_name: squad
      dataset_size: 100
      max_length: 128

  finetune:
    parameters:
      num_train_epochs: 1
      per_device_train_batch_size: 4
```

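Both configurations are consumed the same way: `run.py` hands the chosen file to ZenML, which applies the `parameters`, step `parameters`, and `settings` blocks to the pipeline run. Below is a minimal sketch of that wiring, mirroring the call at the bottom of `run.py` (it is not a separate script in this repo):

```python
# Sketch: apply a YAML config to the pipeline, as run.py does.
from run import llm_finetune_pipeline

# with_options() reads the YAML and attaches its parameters and settings to the pipeline.
configured = llm_finetune_pipeline.with_options(config_path="configs/config_large_gpu.yaml")
configured()  # triggers a run on the active ZenML stack
```
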
## Running with Remote Stack

Set up a remote Lightning stack with ZenML to run the fine-tuning on remote infrastructure:

1. **Register Orchestrator and Artifact Store:**

   ```shell
   zenml integration install lightning s3
   zenml orchestrator register lightning_orchestrator --flavor=lightning --machine_type=CPU --user_id=<YOUR_LIGHTNING_USER_ID> --api_key=<YOUR_LIGHTNING_API_KEY> --username=<YOUR_LIGHTNING_USERNAME>
   zenml artifact-store register s3_store --flavor=s3 --path=s3://yourpath
   ```

2. **Set Up and Register the Stack:**

   ```shell
   zenml stack register lightning_stack -o lightning_orchestrator -a s3_store
   zenml stack set lightning_stack
   ```

## Customizing Data Preparation

Customize the `prepare_data` step for different datasets by modifying its loading logic or tokenization pattern, and update the relevant YAML configuration parameters (`dataset_name`, `dataset_size`, `max_length`) to fit your dataset and requirements; a sketch follows below.

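As a minimal, purely illustrative sketch (the `text` and `summary` column names are hypothetical stand-ins for whatever fields your dataset provides), the tokenization inside `prepare_data` could be adapted like this:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

def prepare_custom_data(base_model_id: str, dataset_name: str, dataset_size: int, max_length: int):
    """Variant of the prepare_data logic for a dataset with `text` and `summary` fields."""
    tokenizer = AutoTokenizer.from_pretrained(base_model_id)
    tokenizer.pad_token = tokenizer.eos_token
    dataset = load_dataset(dataset_name, split=f"train[:{dataset_size}]")

    def tokenize_function(example):
        # Change the prompt template to match your dataset's columns.
        prompt = f"Text: {example['text']}\nSummary: {example['summary']}"
        return tokenizer(prompt, truncation=True, padding="max_length", max_length=max_length)

    # Drop the original columns so only the tokenized fields remain.
    return dataset.map(tokenize_function, remove_columns=dataset.column_names)
```
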
## Project Structure

The project follows a structured layout for easy navigation and management:

```
.
├── configs                    # Configuration files for the pipeline
│   ├── config_large_gpu.yaml  # Config for large GPU setup
│   └── config_small_cpu.yaml  # Config for small CPU setup
├── .dockerignore              # Docker ignore file
├── LICENSE                    # License file
├── README.md                  # This file
├── requirements.txt           # Python dependencies
└── run.py                     # CLI tool to run pipelines on ZenML Stack
```

## Benefits & Future

Using smaller, task-specific models is more efficient and cost-effective than relying on large general-purpose models. This strategy allows for:

- **Cost-Effectiveness:** Lower computational requirements reduce operational costs.
- **Improved Performance:** Models fine-tuned on specific data often outperform general models on specialized tasks.
- **Faster Iteration:** Quicker experimentation and iteration cycles.
- **Data Privacy:** Control over training data, which is crucial for industries with strict privacy requirements.

## Credits

This project relies on several tools and libraries:

- [Hugging Face Transformers](https://huggingface.co/transformers/)
- [Hugging Face Datasets](https://huggingface.co/datasets)
- [ZenML](https://zenml.io/)
- [Lightning AI Studios](https://www.lightning.ai/)

With these tools, you can efficiently manage the lifecycle of multiple fine-tuned LLM variants, benefiting from the robust infrastructure provided by ZenML and the scalable resources of Lightning AI Studios.

For more details, consult the [ZenML documentation](https://docs.zenml.io) and the [Lightning AI Studio documentation](https://lightning.ai).

simple-llm-finetuning/configs/config_large_gpu.yaml

Lines changed: 44 additions & 0 deletions

settings:
  docker:
    requirements: requirements.txt
    python_package_installer: uv
    apt_packages:
      - git
    environment:
      PJRT_DEVICE: CUDA
      USE_TORCH_XLA: "false"
      MKL_SERVICE_FORCE_INTEL: "1"
      PYTORCH_CUDA_ALLOC_CONF: "expandable_segments:True"
  orchestrator.lightning:
    machine_type: CPU
    user_id: USERID
    api_key: APIKEY
    username: USERNAME
    teamspace: TEAMSPACE

model:
  name: llm-finetuning-gpt2-large
  description: "Fine-tune GPT-2 on a larger GPU."
  tags:
    - llm
    - finetuning
    - gpt2-large

parameters:
  base_model_id: gpt2-large

steps:
  prepare_data:
    parameters:
      dataset_name: squad
      dataset_size: 1000
      max_length: 256

  finetune:
    parameters:
      num_train_epochs: 3
      per_device_train_batch_size: 4
    settings:
      orchestrator.lightning:
        machine_type: A10G

simple-llm-finetuning/configs/config_small_cpu.yaml

Lines changed: 22 additions & 0 deletions

model:
  name: llm-finetuning-distilgpt2-small
  description: "Fine-tune DistilGPT-2 on a smaller CPU machine."
  tags:
    - llm
    - finetuning
    - distilgpt2

parameters:
  base_model_id: distilgpt2

steps:
  prepare_data:
    parameters:
      dataset_name: squad
      dataset_size: 100
      max_length: 128

  finetune:
    parameters:
      num_train_epochs: 1
      per_device_train_batch_size: 4

simple-llm-finetuning/requirements.txt

Lines changed: 14 additions & 0 deletions

datasets>=2.19.1
transformers>=4.43.1
peft
bitsandbytes>=0.41.3
scipy
evaluate
rouge_score
nltk
accelerate>=0.30.0
urllib3<2
zenml>=0.62.0
torch>=2.2.0
sentencepiece
huggingface_hub

simple-llm-finetuning/run.py

Lines changed: 68 additions & 0 deletions

import torch
from datasets import load_dataset, Dataset
from transformers import AutoTokenizer, AutoModelForCausalLM, Trainer, TrainingArguments, DataCollatorForLanguageModeling
from zenml import pipeline, step, log_model_metadata
from typing_extensions import Annotated
import argparse
from zenml.integrations.huggingface.materializers.huggingface_datasets_materializer import HFDatasetMaterializer

@step(output_materializers=HFDatasetMaterializer)
def prepare_data(base_model_id: str, dataset_name: str, dataset_size: int, max_length: int) -> Annotated[Dataset, "tokenized_dataset"]:
    """Load a slice of the dataset and tokenize it into question/answer prompts."""
    tokenizer = AutoTokenizer.from_pretrained(base_model_id)
    tokenizer.pad_token = tokenizer.eos_token
    dataset = load_dataset(dataset_name, split=f"train[:{dataset_size}]")

    def tokenize_function(example):
        prompt = f"Question: {example['question']}\nAnswer: {example['answers']['text'][0]}"
        return tokenizer(prompt, truncation=True, padding="max_length", max_length=max_length)

    tokenized_data = dataset.map(tokenize_function, remove_columns=dataset.column_names)
    log_model_metadata(metadata={"dataset_size": len(tokenized_data), "max_length": max_length})
    return tokenized_data

@step
def finetune(base_model_id: str, tokenized_dataset: Dataset, num_train_epochs: int, per_device_train_batch_size: int) -> None:
    """Fine-tune the base model on the tokenized dataset and save the result locally."""
    torch.cuda.empty_cache()
    model = AutoModelForCausalLM.from_pretrained(
        base_model_id,
        device_map="auto",
        torch_dtype=torch.float32,  # use full float32 precision rather than float16
        low_cpu_mem_usage=True
    )
    tokenizer = AutoTokenizer.from_pretrained(base_model_id)
    tokenizer.pad_token = tokenizer.eos_token
    model.config.pad_token_id = tokenizer.pad_token_id

    training_args = TrainingArguments(
        output_dir="./results",
        num_train_epochs=num_train_epochs,
        per_device_train_batch_size=per_device_train_batch_size,
        gradient_accumulation_steps=8,
        logging_steps=10,
        save_strategy="epoch",
        learning_rate=2e-5,
        weight_decay=0.01,
        optim="adamw_torch",
    )

    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=tokenized_dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
    )

    train_result = trainer.train()
    log_model_metadata(metadata={"metrics": {"train_loss": train_result.metrics.get("train_loss")}})
    trainer.save_model("finetuned_model")

@pipeline
def llm_finetune_pipeline(base_model_id: str):
    """Two-step pipeline: prepare the data, then fine-tune; remaining step parameters come from the YAML config."""
    tokenized_dataset = prepare_data(base_model_id)
    finetune(base_model_id, tokenized_dataset)

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('--config', type=str, required=True, help='Path to the YAML config file')
    args = parser.parse_args()
    llm_finetune_pipeline.with_options(config_path=args.config)()
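As a usage sketch only (assuming the `finetuned_model` directory written by `trainer.save_model` above is available locally, and using `gpt2-large` as a stand-in for whichever `base_model_id` was configured), the saved weights could be loaded for a quick generation check:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the weights written by trainer.save_model("finetuned_model") in the finetune step.
model = AutoModelForCausalLM.from_pretrained("finetuned_model")

# The tokenizer is not saved by that step, so load it from the base model id.
tokenizer = AutoTokenizer.from_pretrained("gpt2-large")
tokenizer.pad_token = tokenizer.eos_token

# Reuse the prompt template from prepare_data().
prompt = "Question: In what country is Normandy located?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```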
