Commit 05858ac

Use latest Ray custom images in the LLM fine-tuning with Ray example
1 parent 2788499 commit 05858ac

File tree

2 files changed (+3, -9 lines)


examples/ray-finetune-llm-deepspeed/ray_finetune_llm_deepspeed.ipynb

Lines changed: 3 additions & 3 deletions
@@ -67,12 +67,12 @@
     " worker_memory_requests=128,\n",
     " worker_memory_limits=256,\n",
     " head_memory=128,\n",
-    " # Use the following parameters with NVIDIA GPUs \n",
-    " image=\"quay.io/rhoai/ray:2.23.0-py39-cu121\",\n",
+    " # Use the following parameters with NVIDIA GPUs\n",
+    " image=\"quay.io/rhoai/ray:2.35.0-py39-cu121-torch24-fa26\",\n",
     " head_extended_resource_requests={'nvidia.com/gpu':1},\n",
     " worker_extended_resource_requests={'nvidia.com/gpu':1},\n",
     " # Or replace them with these parameters for AMD GPUs\n",
-    " # image=\"quay.io/rhoai/ray:2.35.0-py39-rocm61-torch24\",\n",
+    " # image=\"quay.io/rhoai/ray:2.35.0-py39-rocm61-torch24-fa26\",\n",
     " # head_extended_resource_requests={'amd.com/gpu':1},\n",
     " # worker_extended_resource_requests={'amd.com/gpu':1},\n",
     "))"
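For context, the notebook cell this hunk edits builds a CodeFlare SDK `ClusterConfiguration`. The sketch below reconstructs the surrounding call: the parameters shown in the diff are verbatim, while the import, the cluster name, and the worker count are illustrative assumptions not present in the hunk.

```python
# Sketch of the cluster definition edited by this commit.
# Parameters from the diff are verbatim; name/num_workers are assumed.
from codeflare_sdk import Cluster, ClusterConfiguration

cluster = Cluster(ClusterConfiguration(
    name="ray-finetune-llm-deepspeed",  # assumed, not in the hunk
    num_workers=7,                      # assumed, not in the hunk
    worker_memory_requests=128,
    worker_memory_limits=256,
    head_memory=128,
    # Use the following parameters with NVIDIA GPUs
    image="quay.io/rhoai/ray:2.35.0-py39-cu121-torch24-fa26",
    head_extended_resource_requests={'nvidia.com/gpu': 1},
    worker_extended_resource_requests={'nvidia.com/gpu': 1},
    # Or replace them with these parameters for AMD GPUs
    # image="quay.io/rhoai/ray:2.35.0-py39-rocm61-torch24-fa26",
    # head_extended_resource_requests={'amd.com/gpu': 1},
    # worker_extended_resource_requests={'amd.com/gpu': 1},
))
```

The new `torch24-fa26` image tags suggest PyTorch 2.4 and Flash Attention 2.6 are baked into the image, which is consistent with the second file in this commit dropping the `torch`, `deepspeed`, and `flash_attn` wheel pins from the pip requirements.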
Lines changed: 0 additions & 6 deletions
@@ -1,11 +1,5 @@
 accelerate==0.31.0
 awscliv2==2.3.0
 datasets==2.19.2
-deepspeed==0.14.4
-# Flash Attention 2 requires PyTorch to be installed first
-# See https://github.com/Dao-AILab/flash-attention/issues/453
-https://github.com/Dao-AILab/flash-attention/releases/download/v2.6.3/flash_attn-2.6.3+cu123torch2.3cxx11abiFALSE-cp39-cp39-linux_x86_64.whl
 peft==0.11.1
-ray[train]==2.23.0
-torch==2.3.1
 transformers==4.44.0

0 commit comments

Comments
 (0)