Commit 7b2bbe1 (parent d47c2ae)

Update src/routes/blog/post/chatbot-with-webllm-and-webgpu/+page.markdoc

Co-authored-by: Aditya Oberai <[email protected]>

1 file changed: src/routes/blog/post/chatbot-with-webllm-and-webgpu/+page.markdoc (3 additions, 3 deletions)
```diff
@@ -143,11 +143,11 @@ Notice that in the `div` with class `controls`, we have a `select` element for m
 
 When you're deciding which of these models to use in a browser environment with WebLLM, think first about what kind of work you want it to handle.
 
-**SmolLM2-360M** is the smallest by a wide margin, which means it loads quickly and puts the least strain on your device. If you're writing short notes, rewriting text, or making quick coding helpers that run in a browser, this might be all you need.
+- **SmolLM2-360M** is the smallest by a wide margin, which means it loads quickly and puts the least strain on your device. If you're writing short notes, rewriting text, or making quick coding helpers that run in a browser, this might be all you need.
 
-**Phi-3.5-mini** brings more parameters and more capacity for reasoning, even though it still runs entirely in your browser. It's good for handling multi-step explanations, short document summarisation, or answering questions about moderately long prompts. If you're looking for a balance between size and capability, Phi-3.5-mini has a comfortable middle ground.
+- **Phi-3.5-mini** brings more parameters and more capacity for reasoning, even though it still runs entirely in your browser. It's good for handling multi-step explanations, short document summarisation, or answering questions about moderately long prompts. If you're looking for a balance between size and capability, Phi-3.5-mini has a comfortable middle ground.
 
-**Llama-3.1-8B** is the largest of the three and carries more of the general knowledge and pattern recognition that bigger models can offer. It's more reliable if you're dealing with open-ended dialogue, creative writing, or complex coding tasks. But you'll need more memory.
+- **Llama-3.1-8B** is the largest of the three and carries more of the general knowledge and pattern recognition that bigger models can offer. It's more reliable if you're dealing with open-ended dialogue, creative writing, or complex coding tasks. But you'll need more memory.
 
 Each of these models trades off size, memory use, and output quality in different ways. So choosing the right one depends on what your hardware can handle and what kind of prompts you plan to work with. All can run directly in modern browsers with WebGPU support.
 
```
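As a rough illustration of the trade-offs the edited passage describes, the mapping from task to model could be sketched as a small helper. This is a hypothetical sketch, not code from the post: the short model names stand in for the three options discussed and are not exact WebLLM model IDs (check the prebuilt model list WebLLM ships before wiring this into a real `select` element).

```javascript
// Hypothetical helper: map a task type to one of the three models the post
// compares. Names are shorthand for the post's options, not real model IDs.
const MODELS = {
  light: "SmolLM2-360M",   // smallest: fast load, short notes, rewriting
  balanced: "Phi-3.5-mini", // middle ground: reasoning, summarisation
  heavy: "Llama-3.1-8B",    // largest: open-ended dialogue, complex coding
};

function pickModel(task) {
  switch (task) {
    case "notes":
    case "rewrite":
      return MODELS.light;
    case "summarise":
    case "explain":
      return MODELS.balanced;
    case "dialogue":
    case "creative":
    case "coding":
      return MODELS.heavy;
    default:
      // Default to the smallest model: least memory pressure on the device.
      return MODELS.light;
  }
}
```

Something like this could set the default option of the model-selection `select` element before handing the chosen ID to WebLLM's engine loader.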
