Commit f7bbf4e

Update src/content/docs/workers-ai/features/fine-tunes/loras.mdx
Co-authored-by: Kevin Jain <[email protected]>
1 parent 50545d8 commit f7bbf4e

File tree

1 file changed: +1 −1

  • src/content/docs/workers-ai/features/fine-tunes/loras.mdx


src/content/docs/workers-ai/features/fine-tunes/loras.mdx

Lines changed: 1 addition & 1 deletion
```diff
@@ -30,7 +30,7 @@ Workers AI supports fine-tuned inference with adapters trained with [Low-Rank Ad
 - `@cf/google/gemma-3-12b-it`
 
 - Adapter must be trained with rank `r <= 8`, as well as larger ranks up to 32. You can check the rank of a pre-trained LoRA adapter through the adapter's `config.json` file
-- LoRA adapter file must be < 500MB
+- LoRA adapter file must be < 300MB
 - LoRA adapter files must be named `adapter_config.json` and `adapter_model.safetensors` exactly
 - You can test up to 30 LoRA adapters per account
 
```
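For context, the constraints listed in this hunk are easy to sanity-check locally before uploading an adapter. The sketch below is illustrative and not part of the Cloudflare docs or Workers AI tooling: it assumes a local adapter directory path, the 300MB limit introduced by this commit, and the fact that PEFT-style LoRA configs store the rank under the `"r"` key in `adapter_config.json`.

```python
import json
from pathlib import Path

MAX_RANK = 32                   # per the doc: ranks above 8, up to 32, are accepted
MAX_BYTES = 300 * 1024 * 1024   # per this commit: adapter file must be < 300MB
REQUIRED_FILES = ["adapter_config.json", "adapter_model.safetensors"]  # exact names


def check_adapter(adapter_dir: str) -> list[str]:
    """Return a list of problems found with a local LoRA adapter directory."""
    root = Path(adapter_dir)
    problems = []

    # Both files must exist with exactly these names.
    for name in REQUIRED_FILES:
        if not (root / name).is_file():
            problems.append(f"missing required file: {name}")

    # PEFT-style configs keep the LoRA rank under the "r" key (an assumption here).
    config_path = root / "adapter_config.json"
    if config_path.is_file():
        rank = json.loads(config_path.read_text()).get("r")
        if rank is None or rank > MAX_RANK:
            problems.append(f"rank {rank!r} exceeds the supported maximum of {MAX_RANK}")

    # The weights file must stay under the documented size limit.
    weights = root / "adapter_model.safetensors"
    if weights.is_file() and weights.stat().st_size >= MAX_BYTES:
        problems.append("adapter_model.safetensors is not under 300MB")

    return problems


if __name__ == "__main__":
    for issue in check_adapter("./my-lora-adapter"):  # hypothetical path
        print(issue)
```

Checking these limits locally avoids a failed upload; the 30-adapters-per-account limit, by contrast, can only be verified against the account itself.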
