src/content/docs/workers-ai/features/fine-tunes/loras.mdx (1 addition, 12 deletions)
@@ -17,18 +17,7 @@ Workers AI supports fine-tuned inference with adapters trained with [Low-Rank Ad
## Limitations

-- We only support LoRAs for the following models (must not be quantized):
-
-  - `@cf/meta/llama-3.2-11b-vision-instruct`
-  - `@cf/meta/llama-3.3-70b-instruct-fp8-fast`
-  - `@cf/meta/llama-guard-3-8b`
-  - `@cf/meta/llama-3.1-8b-instruct-fast (soon)`
-  - `@cf/deepseek-ai/deepseek-r1-distill-qwen-32b`
-  - `@cf/qwen/qwen2.5-coder-32b-instruct`
-  - `@cf/qwen/qwq-32b`
-  - `@cf/mistralai/mistral-small-3.1-24b-instruct`
-  - `@cf/google/gemma-3-12b-it`
-
+- We only support LoRAs for a [variety of models](/workers-ai/models/?capabilities=LoRA) (must not be quantized)
- Adapter must be trained with rank `r <= 8`, though larger ranks up to 32 are also supported. You can check the rank of a pre-trained LoRA adapter through the adapter's `config.json` file (a quick rank check is sketched after this diff)
- LoRA adapter file must be < 300MB
- LoRA adapter files must be named `adapter_config.json` and `adapter_model.safetensors` exactly
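For reference on the rank check mentioned in the limitations: adapters exported with Hugging Face PEFT record the LoRA rank in the `r` field of `adapter_config.json`. The snippet below is an illustrative sketch, not part of the documented API; the file path, the field layout, and the rank-32 ceiling (taken from the limitation above) are assumptions. It simply reads the config and reports whether the rank is within range.

```ts
// Minimal local sanity check of a LoRA adapter's rank before uploading it.
// Assumes a PEFT-style adapter_config.json in the current directory.
import { readFileSync } from "node:fs";

const config = JSON.parse(readFileSync("adapter_config.json", "utf8"));
const rank: unknown = config.r;

if (typeof rank !== "number") {
  console.error("No numeric `r` field found; is this a PEFT LoRA adapter_config.json?");
} else if (rank > 32) {
  console.error(`Adapter rank ${rank} exceeds the documented maximum of 32.`);
} else {
  console.log(`Adapter rank ${rank} is within the supported range.`);
}
```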