Commit b717f6c

Docs to build and run image

Signed-off-by: Thara Palanivel <[email protected]>

build/README.md

# Building fms-model-optimizer as an Image

The Dockerfile provides a way of running FMS Model Optimizer (FMS MO). It installs the needed dependencies and adds two scripts that help parse the arguments passed to FMS MO. The `accelerate_launch.py` script runs by default when the image starts; it triggers FMS MO on a single GPU or multiple GPUs by parsing the arguments and running `accelerate launch fms_mo.run_quant.py`.
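
At a high level, the launch script turns the supplied config into an `accelerate launch` command line. The sketch below is illustrative only; the helper name and the exact argument handling are assumptions, not the real contents of `accelerate_launch.py`:

```py
import subprocess


def build_and_launch(accelerate_args: dict, fms_mo_args: dict) -> None:
    """Illustrative sketch: turn config dicts into an accelerate launch call."""
    cmd = ["accelerate", "launch"]
    for key, value in accelerate_args.items():
        # Boolean True becomes a bare --flag; everything else is --key value.
        cmd += [f"--{key}"] if value is True else [f"--{key}", str(value)]
    cmd.append("fms_mo.run_quant.py")  # entry point as described above
    for key, value in fms_mo_args.items():
        cmd += [f"--{key}", str(value)]
    subprocess.run(cmd, check=True)
```
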
## Configuration

The scripts accept a JSON-formatted config, which is set via environment variables. `FMS_MO_CONFIG_JSON_PATH` can be set to the mounted path of the JSON config. Alternatively, `FMS_MO_CONFIG_JSON_ENV_VAR` can be set to the base64-encoded JSON config produced by the function below:

```py
import base64


def encode_json(my_json_string):
    base64_bytes = base64.b64encode(my_json_string.encode("ascii"))
    txt = base64_bytes.decode("ascii")
    return txt


with open("test_config.json") as f:
    contents = f.read()

# Print the encoded config so it can be exported as FMS_MO_CONFIG_JSON_ENV_VAR
print(encode_json(contents))
```
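
To sanity-check the encoding, you can decode it back before launching. This small helper is for illustration only; the launch scripts are assumed to perform the equivalent decode internally:

```py
import base64
import json


def decode_json(encoded: str) -> dict:
    # Reverse of encode_json: base64 text back to a parsed config dict.
    return json.loads(base64.b64decode(encoded.encode("ascii")).decode("ascii"))
```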

The keys for the JSON config are all of the flags available to use with [FMS Model Optimizer](../fms_mo/training_args.py).

For configuring `accelerate launch`, use the key `accelerate_launch_args` and pass the set of flags accepted by [accelerate launch](https://huggingface.co/docs/accelerate/package_reference/cli#accelerate-launch). Since these flags are passed via the JSON config, each key must match the long-form flag name. For example, to enable the flag `--quiet`, use the JSON key `"quiet"`; using the short form `"q"` will fail.

For example, the config below creates a GPTQ checkpoint of LLAMA-3-8B using two GPUs:

```json
{
    "accelerate_launch_args": {
        "main_process_port": 1234
    },
    "model_name_or_path": "meta-llama/Meta-Llama-3-8B",
    "training_data_path": "data_train",
    "quant_method": "gptq",
    "bits": 4,
    "group_size": 128,
    "output_dir": "/output/Meta-Llama-3-8B-GPTQ-MULTIGPU"
}
```

`num_processes` defaults to the number of GPUs allocated for optimization, unless the user sets `SET_NUM_PROCESSES_TO_NUM_GPUS` to `False`. Note that `num_processes`, the total number of processes to launch in parallel, should match the number of GPUs to run on. The number of GPUs used can also be set via the environment variable `CUDA_VISIBLE_DEVICES`. If `num_processes=1`, the script assumes a single GPU.
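
The precedence sketched below is an assumption based on the description above, not the launcher's actual code; `resolve_num_processes` is a hypothetical helper:

```py
import os

import torch


def resolve_num_processes(config: dict) -> int:
    # An explicit num_processes in the accelerate section wins.
    explicit = config.get("accelerate_launch_args", {}).get("num_processes")
    if explicit is not None:
        return int(explicit)
    # Otherwise default to the visible GPU count, which honors
    # CUDA_VISIBLE_DEVICES, unless the opt-out variable is set to False.
    if os.environ.get("SET_NUM_PROCESSES_TO_NUM_GPUS", "True").lower() != "false":
        return max(torch.cuda.device_count(), 1)
    return 1
```
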
## Building the Image

With Docker, build the image from the top level of the repository:

```sh
docker build . -t fms-model-optimizer:mytag -f build/Dockerfile
```

## Running the Image

Run the fms-model-optimizer image with the JSON config and volume mounts set up:

```sh
docker run -v $PWD/config.json:/app/config.json -v $MODEL_PATH:/models --env FMS_MO_CONFIG_JSON_PATH=/app/config.json fms-model-optimizer:mytag
```

This runs `accelerate_launch.py` with the given JSON config.
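
Alternatively, the config can be passed through `FMS_MO_CONFIG_JSON_ENV_VAR` instead of a mounted file. The invocation below is a sketch of that route using Python's `subprocess`; the host paths are placeholders:

```py
import base64
import subprocess

# Encode the config and hand it to the container via the env-var route
# described in the Configuration section.
with open("config.json", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("ascii")

subprocess.run(
    [
        "docker", "run",
        "-v", "/path/to/models:/models",  # placeholder host model path
        "--env", f"FMS_MO_CONFIG_JSON_ENV_VAR={encoded}",
        "fms-model-optimizer:mytag",
    ],
    check=True,
)
```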

Below is an example Kubernetes Pod for deploying fms-model-optimizer. It requires PVCs holding the model and input dataset, plus whatever mounts are needed for the output quantized model:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fms-model-optimizer-config
data:
  config.json: |
    {
      "accelerate_launch_args": {
        "main_process_port": 1234
      },
      "model_name_or_path": "meta-llama/Meta-Llama-3-8B",
      "training_data_path": "data_train",
      "quant_method": "gptq",
      "bits": 4,
      "group_size": 128,
      "output_dir": "/output/Meta-Llama-3-8B-GPTQ-MULTIGPU"
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: fms-model-optimizer-test
spec:
  containers:
    - env:
        - name: FMS_MO_CONFIG_JSON_PATH
          value: /config/config.json
      image: fms-model-optimizer:mytag
      imagePullPolicy: IfNotPresent
      name: fms-mo-test
      resources:
        limits:
          nvidia.com/gpu: "2"
          memory: 200Gi
          cpu: "10"
          ephemeral-storage: 2Ti
        requests:
          memory: 80Gi
          cpu: "5"
          ephemeral-storage: 1600Gi
      volumeMounts:
        - mountPath: /data/input
          name: input-data
        - mountPath: /data/output
          name: output-data
        - mountPath: /config
          name: fms-model-optimizer-config
  restartPolicy: Never
  terminationGracePeriodSeconds: 30
  volumes:
    - name: input-data
      persistentVolumeClaim:
        claimName: input-pvc
    - name: output-data
      persistentVolumeClaim:
        claimName: output-pvc
    - name: fms-model-optimizer-config
      configMap:
        name: fms-model-optimizer-config
```

The resource values above are not hard requirements, but they are useful starting points when running some models (such as LLaMA-13B). If ephemeral storage is not defined, you will likely hit the error `The node was low on resource: ephemeral-storage. Container was using 1498072868Ki, which exceeds its request of 0.`, where the pod runs out of storage while quantizing the model.

Note that additional `accelerate launch` arguments can be passed; however, FSDP defaults are already set, so no `accelerate_launch_args` need to be passed.

Another good example can be found [here](../examples/kfto-kueue-fms-model-optimizer.yaml); it launches a Kubernetes-native `PyTorchJob` using the [Kubeflow Training Operator](https://github.com/kubeflow/training-operator/) with [Kueue](https://github.com/kubernetes-sigs/kueue) for queue management of optimization jobs. The KFTO example runs GPTQ quantization of LLAMA-3-8B with two GPUs.
