
Commit d13bd68

[Workers AI] Use model catalog instead of static lists (#23168)
1 parent a9bc93a

2 files changed: +2 -18 lines changed

src/content/docs/workers-ai/features/batch-api/index.mdx

Lines changed: 1 addition & 6 deletions
@@ -33,9 +33,4 @@ This will create a repository in your GitHub account and deploy a ready-to-use W
 
 ## Supported Models
 
-- [@cf/meta/llama-3.3-70b-instruct-fp8-fast](/workers-ai/models/llama-3.3-70b-instruct-fp8-fast/)
-- [@cf/baai/bge-small-en-v1.5](/workers-ai/models/bge-small-en-v1.5/)
-- [@cf/baai/bge-base-en-v1.5](/workers-ai/models/bge-base-en-v1.5/)
-- [@cf/baai/bge-large-en-v1.5](/workers-ai/models/bge-large-en-v1.5/)
-- [@cf/baai/bge-m3](/workers-ai/models/bge-m3/)
-- [@cf/meta/m2m100-1.2b](/workers-ai/models/m2m100-1.2b/)
+Refer to our [model catalog](/workers-ai/models/?capabilities=Batch) for supported models.
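
The removed list above named the models the Batch API page advertised; the page now defers to the filtered model catalog. For context, here is a minimal sketch of what a queued batch request from a Worker can look like. It is not taken from the changed files: it assumes an `AI` binding is configured, that the `queueRequest` option behaves as the Batch API docs describe, and the model name is purely illustrative; check the catalog link for models that actually support batch.

```ts
// Minimal sketch of a queued (asynchronous) Workers AI batch request.
// Assumptions: an `AI` binding exists in wrangler config, `queueRequest`
// works as described in the Batch API docs, and the model name is only
// illustrative -- consult the model catalog for batch-capable models.
export interface Env {
  AI: Ai;
}

export default {
  async fetch(_request: Request, env: Env): Promise<Response> {
    const batch = await env.AI.run(
      "@cf/meta/llama-3.3-70b-instruct-fp8-fast",
      {
        requests: [
          { prompt: "Summarize LoRA fine-tuning in one sentence." },
          { prompt: "List three embedding use cases." },
        ],
      },
      { queueRequest: true }, // queue the requests instead of running inline
    );

    // Expected to return an async request id / status that can be polled later.
    return Response.json(batch);
  },
} satisfies ExportedHandler<Env>;
```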

src/content/docs/workers-ai/features/fine-tunes/loras.mdx

Lines changed: 1 addition & 12 deletions
@@ -17,18 +17,7 @@ Workers AI supports fine-tuned inference with adapters trained with [Low-Rank Ad
 
 ## Limitations
 
-- We only support LoRAs for the following models (must not be quantized):
-
-  - `@cf/meta/llama-3.2-11b-vision-instruct`
-  - `@cf/meta/llama-3.3-70b-instruct-fp8-fast`
-  - `@cf/meta/llama-guard-3-8b`
-  - `@cf/meta/llama-3.1-8b-instruct-fast (soon)`
-  - `@cf/deepseek-ai/deepseek-r1-distill-qwen-32b`
-  - `@cf/qwen/qwen2.5-coder-32b-instruct`
-  - `@cf/qwen/qwq-32b`
-  - `@cf/mistralai/mistral-small-3.1-24b-instruct`
-  - `@cf/google/gemma-3-12b-it`
-
+- We only support LoRAs for a [variety of models](/workers-ai/models/?capabilities=LoRA) (must not be quantized)
 - Adapter must be trained with rank `r <= 8`, though larger ranks up to 32 are also supported. You can check the rank of a pre-trained LoRA adapter through the adapter's `config.json` file
 - LoRA adapter file must be < 300MB
 - LoRA adapter files must be named `adapter_config.json` and `adapter_model.safetensors` exactly
