
Commit 6f9e15f

Add Ray-KFT example for Ray-based data processing and Kubeflow Training Operator V1-based fine-tuning capabilities
1 parent 5bd9a40 commit 6f9e15f

16 files changed: +3193 -0 lines changed

examples/ray-kft-v1/1_ray_sdg.ipynb

Lines changed: 462 additions & 0 deletions
Large diffs are not rendered by default.
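
The diff for 1_ray_sdg.ipynb (Phase 1, the Ray-based synthetic data generation step) is not rendered in this excerpt. Purely for orientation, the sketch below shows the kind of Ray Data preprocessing such a notebook could perform, assuming it produces the synthetic_gsm8k dataset with a "text" column on the shared PVC that Phase 2 consumes; the function names, prompt format, and paths are illustrative assumptions, not the committed code.

import ray
from datasets import Dataset, load_dataset

ray.init()  # assumption: connect to the Ray cluster set up for Phase 1

# Start from GSM8K and distribute the generation work with Ray Data
gsm8k = load_dataset("gsm8k", "main", split="train")
ds = ray.data.from_huggingface(gsm8k)

def generate_synthetic(batch):
    # Placeholder: the real notebook presumably calls a teacher model here to
    # produce new question/answer pairs; this only reformats the originals
    # into the single "text" column that Phase 2 expects.
    batch["text"] = [
        f"Question: {q}\nAnswer: {a}"
        for q, a in zip(batch["question"], batch["answer"])
    ]
    return batch

synthetic = ds.map_batches(generate_synthetic, batch_format="pandas")

# Persist train/test splits on the shared PVC for the Phase 2 training job
hf_ds = Dataset.from_pandas(synthetic.to_pandas())
hf_ds.train_test_split(test_size=0.1, seed=42).save_to_disk(
    "/shared/datasets/synthetic_gsm8k"
)
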
Lines changed: 323 additions & 0 deletions
@@ -0,0 +1,323 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d1c9049a-7daa-43aa-9e50-5f9f951a8324",
"metadata": {},
"source": [
"## Phase 2: Distributed Training using Kubeflow Training Operator and SDK\n",
"\n",
"- **kubeflow-training SDK**: PyTorchJob creation and management\n",
"- **TRL + PEFT**: Modern fine-tuning with LoRA adapters\n",
"- **Distributed Training**: Multi-node GPU coordination "
]
},
{
"cell_type": "markdown",
"id": "035727e0",
"metadata": {},
"source": [
"### Training Configuration using kubeflow-training SDK"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "92017175-8d63-4dbe-ac8d-f2724b57f9a8",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"%pip install kubernetes yamlmagic"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "c82140d0",
"metadata": {},
"outputs": [],
"source": [
"%load_ext yamlmagic"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7be28c8f",
"metadata": {},
"outputs": [],
"source": [
"%%yaml training_parameters\n",
"\n",
"# Model configuration\n",
"model_name_or_path: ibm-granite/granite-3.1-2b-instruct\n",
"model_revision: main\n",
"torch_dtype: bfloat16\n",
"attn_implementation: flash_attention_2\n",
"use_liger: false\n",
"\n",
"# PEFT / LoRA configuration\n",
"use_peft: true\n",
"lora_r: 16\n",
"lora_alpha: 16 # Changed from 8 to 16 for better scaling\n",
"lora_dropout: 0.05\n",
"lora_target_modules: [\"q_proj\", \"v_proj\", \"k_proj\", \"o_proj\", \"gate_proj\", \"up_proj\", \"down_proj\"]\n",
"lora_modules_to_save: []\n",
"\n",
"# QLoRA (BitsAndBytes)\n",
"load_in_4bit: false\n",
"load_in_8bit: false\n",
"\n",
"# Dataset configuration (synthetic data from Ray preprocessing)\n",
"dataset_path: synthetic_gsm8k\n",
"dataset_config: main\n",
"dataset_train_split: train\n",
"dataset_test_split: test\n",
"dataset_text_field: text\n",
"dataset_kwargs:\n",
" add_special_tokens: false\n",
" append_concat_token: false\n",
"\n",
"# SFT configuration\n",
"max_seq_length: 1024\n",
"dataset_batch_size: 1000\n",
"packing: false\n",
"\n",
"# Training hyperparameters\n",
"num_train_epochs: 3\n",
"per_device_train_batch_size: 8\n",
"per_device_eval_batch_size: 8\n",
"auto_find_batch_size: false\n",
"eval_strategy: epoch\n",
"\n",
"# Precision and optimization\n",
"bf16: true\n",
"tf32: false\n",
"learning_rate: 1.0e-4 # Reduced from 2.0e-4 for more stable LoRA training\n",
"warmup_steps: 100 # Increased from 10 for better stability\n",
"lr_scheduler_type: inverse_sqrt\n",
"optim: adamw_torch_fused\n",
"max_grad_norm: 1.0\n",
"seed: 42\n",
"\n",
"# Gradient settings\n",
"gradient_accumulation_steps: 1\n",
"gradient_checkpointing: false\n",
"gradient_checkpointing_kwargs:\n",
" use_reentrant: false\n",
"\n",
"# FSDP for distributed training\n",
"fsdp: \"full_shard auto_wrap\"\n",
"fsdp_config:\n",
" activation_checkpointing: true\n",
" cpu_ram_efficient_loading: false\n",
" sync_module_states: true\n",
" use_orig_params: true\n",
" limit_all_gathers: false\n",
"\n",
"# Checkpointing and logging\n",
"save_strategy: epoch\n",
"save_total_limit: 1\n",
"resume_from_checkpoint: false\n",
"log_level: warning\n",
"logging_strategy: steps\n",
"logging_steps: 10 # Reduced frequency from 1 to 10\n",
"report_to:\n",
"- tensorboard\n",
"\n",
"output_dir: /shared/models/granite-3.1-2b-instruct-synthetic2"
]
},
{
"cell_type": "markdown",
"id": "20521af6",
"metadata": {},
"source": [
"### Configure kubeflow-training Client\n",
"\n",
"Set up the kubeflow-training SDK client following the sft.ipynb pattern:\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "deb20fde",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"kubeflow-training client configured\n"
]
}
],
"source": [
"# Configure kubeflow-training client (following sft.ipynb pattern)\n",
"from kubernetes import client\n",
"from kubeflow.training import TrainingClient\n",
"from kubeflow.training.models import V1Volume, V1VolumeMount, V1PersistentVolumeClaimVolumeSource\n",
"\n",
"token=\"<auth_token>\"\n",
"api_server=\"<api_server_url>\"\n",
"\n",
"configuration = client.Configuration()\n",
"configuration.host = api_server\n",
"configuration.api_key = {\"authorization\": f\"Bearer {token}\"}\n",
"# Disable certificate verification if your cluster API server uses a self-signed certificate or an untrusted CA\n",
"configuration.verify_ssl = False\n",
"\n",
"api_client = client.ApiClient(configuration)\n",
"training_client = TrainingClient(client_configuration=api_client.configuration)\n",
"\n",
"print(\"kubeflow-training client configured\")"
]
},
{
"cell_type": "markdown",
"id": "08beef7d",
"metadata": {},
"source": [
"### Create Training Job using kubeflow-training SDK\n",
"\n",
"Create and submit the distributed training job following the sft.ipynb pattern:\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "91d0b76b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"PyTorchJob submitted successfully\n"
]
}
],
"source": [
"from scripts.kft_granite_training import training_func\n",
"\n",
"job = training_client.create_job(\n",
"    job_kind=\"PyTorchJob\",\n",
"    name=\"test1-training\",\n",
"    # Training function imported from the scripts package\n",
"    train_func=training_func,\n",
"    # Pass YAML parameters as config\n",
"    parameters=training_parameters,\n",
"    # Distributed training configuration\n",
"    num_workers=2,\n",
"    num_procs_per_worker=2,\n",
"    resources_per_worker={\n",
"        \"nvidia.com/gpu\": 2,  # GPUs per worker\n",
"        \"memory\": \"24Gi\",\n",
"        \"cpu\": 4,\n",
"    },\n",
"    base_image=\"quay.io/modh/training:py311-cuda124-torch251\",\n",
"    # Environment variables for training\n",
"    env_vars={\n",
"        # HuggingFace configuration - use shared storage\n",
"        \"HF_HOME\": \"/shared/huggingface_cache\",\n",
"        \"HF_DATASETS_CACHE\": \"/shared/huggingface_cache/datasets\",\n",
"        \"TOKENIZERS_PARALLELISM\": \"false\",\n",
"        # Training configuration\n",
"        \"PYTHONUNBUFFERED\": \"1\",\n",
"        \"NCCL_DEBUG\": \"INFO\",\n",
"    },\n",
"    # Package dependencies\n",
"    packages_to_install=[\n",
"        \"transformers>=4.36.0\",\n",
"        \"trl>=0.7.0\",\n",
"        \"datasets>=2.14.0\",\n",
"        \"peft>=0.6.0\",\n",
"        \"accelerate>=0.24.0\",\n",
"        \"torch>=2.0.0\",\n",
"    ],\n",
"    volumes=[\n",
"        V1Volume(\n",
"            name=\"shared\",\n",
"            persistent_volume_claim=V1PersistentVolumeClaimVolumeSource(claim_name=\"shared\")\n",
"        ),\n",
"    ],\n",
"    volume_mounts=[\n",
"        V1VolumeMount(name=\"shared\", mount_path=\"/shared\"),\n",
"    ],\n",
")\n",
"\n",
"print(\"PyTorchJob submitted successfully\")\n"
]
},
{
"cell_type": "markdown",
"id": "cac9307d",
"metadata": {},
"source": [
"### Monitor Training Job\n",
"\n",
"Follow the training progress and logs:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a7f61439",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# Monitor training job logs (following sft.ipynb pattern)\n",
"training_client.get_job_logs(\n",
"    name=\"test1-training\",\n",
"    job_kind=\"PyTorchJob\",\n",
"    follow=True,\n",
")\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "8571ae47",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"PyTorchJob deleted!\n"
]
}
],
"source": [
"# Delete the Training Job\n",
"training_client.delete_job(\"test1-training\")\n",
"print(\"PyTorchJob deleted!\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.12",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
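
The train_func passed to create_job, training_func from scripts/kft_granite_training.py, is added elsewhere in this commit and is not shown in this excerpt. For orientation only, the sketch below shows one way such a function could consume the YAML parameters above; it assumes TRL's SFTTrainer with a PEFT LoRA adapter, that the SDK hands the parameters through as a single dict argument, and that the synthetic dataset sits under /shared/datasets on the shared PVC. None of this is the committed implementation, and the FSDP, scheduler, and sequence-length options from the YAML are omitted for brevity.

# Illustrative sketch only; the real scripts/kft_granite_training.py is part of
# this commit but not shown here. Assumes TRL's SFTTrainer + PEFT LoRA driven by
# the YAML parameters defined in the notebook.
def training_func(parameters=None):
    import torch
    from datasets import load_from_disk
    from peft import LoraConfig
    from transformers import AutoModelForCausalLM
    from trl import SFTConfig, SFTTrainer

    # Base model and attention implementation come from the model_* keys
    model = AutoModelForCausalLM.from_pretrained(
        parameters["model_name_or_path"],
        torch_dtype=torch.bfloat16,
        attn_implementation=parameters["attn_implementation"],
    )

    # LoRA adapter mirroring the use_peft / lora_* keys
    peft_config = LoraConfig(
        r=parameters["lora_r"],
        lora_alpha=parameters["lora_alpha"],
        lora_dropout=parameters["lora_dropout"],
        target_modules=parameters["lora_target_modules"],
        task_type="CAUSAL_LM",
    )

    # Synthetic dataset written by the Ray preprocessing phase; the exact
    # location on the shared PVC is an assumption
    dataset = load_from_disk(f"/shared/datasets/{parameters['dataset_path']}")

    # Only widely supported arguments are set here; the dataset is expected to
    # carry a "text" column (dataset_text_field: text in the YAML)
    args = SFTConfig(
        output_dir=parameters["output_dir"],
        num_train_epochs=parameters["num_train_epochs"],
        per_device_train_batch_size=parameters["per_device_train_batch_size"],
        per_device_eval_batch_size=parameters["per_device_eval_batch_size"],
        learning_rate=parameters["learning_rate"],
        bf16=True,
        logging_steps=parameters["logging_steps"],
        save_strategy=parameters["save_strategy"],
        seed=parameters["seed"],
    )

    trainer = SFTTrainer(
        model=model,
        args=args,
        train_dataset=dataset[parameters["dataset_train_split"]],
        eval_dataset=dataset[parameters["dataset_test_split"]],
        peft_config=peft_config,
    )
    trainer.train()
    trainer.save_model(args.output_dir)
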
