Commit 939ff8e

Update src/routes/blog/post/chatbot-with-webllm-and-webgpu/+page.markdoc
Co-authored-by: Aditya Oberai <[email protected]>
1 parent 7b2bbe1 commit 939ff8e

File tree

1 file changed, 5 insertions(+), 5 deletions(-)

src/routes/blog/post/chatbot-with-webllm-and-webgpu/+page.markdoc

Lines changed: 5 additions & 5 deletions
@@ -135,11 +135,11 @@ In the HTML file, we've created a chat interface with controls for model selection
 
 Notice that in the `div` with class `controls`, we have a `select` element for model selection and a `button` for loading the model. Here are the specifications for each model:
 
-| Model        | Parameters   | Q4 file size (MB) | VRAM needed (MB) |
-| ------------ | ------------ | ----------------- | ---------------- |
-| SmolLM2-360M | 360 million  | ~270 MB           | ~380 MB          |
-| Phi-3.5-mini | 3.8 billion  | ~2,400 MB         | ~3,700 MB        |
-| Llama-3.1-8B | 8.03 billion | ~4,900 MB         | ~5,000 MB        |
+| Model        | Parameters   | Q4 file size | VRAM needed |
+| ------------ | ------------ | ------------ | ----------- |
+| SmolLM2-360M | 360 million  | ~270 MB      | ~380 MB     |
+| Phi-3.5-mini | 3.8 billion  | ~2.4 GB      | ~3.7 GB     |
+| Llama-3.1-8B | 8.03 billion | ~4.9 GB      | ~5 GB       |
 
 When you're deciding which of these models to use in a browser environment with WebLLM, think first about what kind of work you want it to handle.
 
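For context, here is a minimal sketch of what loading the selected model with WebLLM could look like. This is an illustration, not the post's actual code: the model-ID strings and the option-to-ID mapping are assumptions, though `CreateMLCEngine` and `initProgressCallback` come from the published `@mlc-ai/web-llm` API.

```ts
import { CreateMLCEngine, type InitProgressReport } from "@mlc-ai/web-llm";

// Map the <select> options to prebuilt WebLLM model IDs.
// These IDs are assumptions; check the library's prebuilt model list.
const MODEL_IDS: Record<string, string> = {
  "SmolLM2-360M": "SmolLM2-360M-Instruct-q4f16_1-MLC",
  "Phi-3.5-mini": "Phi-3.5-mini-instruct-q4f16_1-MLC",
  "Llama-3.1-8B": "Llama-3.1-8B-Instruct-q4f32_1-MLC",
};

// Download the Q4 weights (the file sizes in the table above) and
// initialize the model on the GPU through WebGPU.
async function loadModel(selection: string) {
  const engine = await CreateMLCEngine(MODEL_IDS[selection], {
    initProgressCallback: (report: InitProgressReport) =>
      console.log(report.text), // download/compile progress updates
  });
  return engine;
}
```

Once loaded, the engine exposes an OpenAI-style `chat.completions.create()` method for running inference against the model.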
