@@ -70,8 +70,8 @@ accelerate launch train_controlnet.py \
  --learning_rate=1e-5 \
  --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \
  --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \
- --train_batch_size=4
- --trust_remote_code=True
+ --train_batch_size=4 \
+ --trust_remote_code
 ```

 This default configuration requires ~38GB VRAM.
@@ -94,7 +94,8 @@ accelerate launch train_controlnet.py \
  --validation_image "./conditioning_image_1.png" "./conditioning_image_2.png" \
  --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \
  --train_batch_size=1 \
- --gradient_accumulation_steps=4
+ --gradient_accumulation_steps=4 \
+ --trust_remote_code
 ```

 ## Training with multiple GPUs
@@ -117,7 +118,8 @@ accelerate launch --mixed_precision="fp16" --multi_gpu train_controlnet.py \
  --train_batch_size=4 \
  --mixed_precision="fp16" \
  --tracker_project_name="controlnet-demo" \
- --report_to=wandb
+ --report_to=wandb \
+ --trust_remote_code
 ```

 ## Example results
@@ -164,7 +166,8 @@ accelerate launch train_controlnet.py \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --gradient_checkpointing \
- --use_8bit_adam
+ --use_8bit_adam \
+ --trust_remote_code
 ```

 ## Training on a 12 GB GPU
@@ -192,7 +195,8 @@ accelerate launch train_controlnet.py \
  --gradient_checkpointing \
  --use_8bit_adam \
  --enable_xformers_memory_efficient_attention \
- --set_grads_to_none
+ --set_grads_to_none \
+ --trust_remote_code
 ```

 When using `enable_xformers_memory_efficient_attention`, please make sure to install `xformers` by running `pip install xformers`.
@@ -251,7 +255,8 @@ accelerate launch train_controlnet.py \
  --gradient_checkpointing \
  --enable_xformers_memory_efficient_attention \
  --set_grads_to_none \
- --mixed_precision fp16
+ --mixed_precision fp16 \
+ --trust_remote_code
 ```

 ## Performing inference with the trained ControlNet
@@ -390,7 +395,8 @@ python3 train_controlnet_flax.py \
  --tracker_project_name=$HUB_MODEL_ID \
  --num_train_epochs=11 \
  --push_to_hub \
- --hub_model_id=$HUB_MODEL_ID
+ --hub_model_id=$HUB_MODEL_ID \
+ --trust_remote_code
 ```

 Since we passed the `--push_to_hub` flag, it will automatically create a model repo under your Hugging Face account based on `$HUB_MODEL_ID`. By the end of training, the final checkpoint will be automatically stored on the Hub. You can find an example model repo [here](https://huggingface.co/YiYiXu/fill-circle-controlnet).
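For reference alongside the "Performing inference with the trained ControlNet" section touched by this diff, here is a minimal sketch of how a checkpoint pushed via `--push_to_hub` could be loaded for inference with diffusers' PyTorch API. The repo id, base model, and conditioning-image path are placeholders (not values from this diff), and it assumes the checkpoint is available in a PyTorch-compatible format (the Flax example above stores Flax weights).

```python
# Hedged sketch: load a ControlNet checkpoint from the Hub and run inference.
# Repo names and paths below are placeholders, not values from the diff.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "your-username/your-controlnet",  # the repo created by --push_to_hub / $HUB_MODEL_ID
    torch_dtype=torch.float16,
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed base model; use the one you trained against
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# A conditioning image of the same kind used during training (fill50k-style circles here).
control_image = load_image("./conditioning_image_1.png")

image = pipe(
    "red circle with blue background",
    image=control_image,
    num_inference_steps=20,
).images[0]
image.save("output.png")
```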