
Commit 6e0dc08

Update the example
1 parent 5075732 commit 6e0dc08

6 files changed: 114 additions & 50 deletions

README.md

Lines changed: 23 additions & 27 deletions
````diff
@@ -18,55 +18,51 @@ our companion project that runs LLMs natively on iPhone and other native local e
 ## Get Started
 
 WebLLM offers a minimalist and modular interface to access the chatbot in the browser.
-The following code demonstrates the basic usage.
-
-```typescript
-import { ChatModule } from "@mlc-ai/web-llm";
-
-async function main() {
-  const chat = new ChatModule();
-  // load a prebuilt model
-  await chat.reload("RedPajama-INCITE-Chat-3B-v1-q4f32_0");
-  // generate a reply base on input
-  const prompt = "What is the capital of Canada?";
-  const reply = await chat.generate(prompt);
-  console.log(reply);
-}
-```
-
 The WebLLM package itself does not come with UI, and is designed in a
 modular way to hook to any of the UI components. The following code snippet
-contains part of the program that generates a streaming response on a webpage.
+demonstrates a simple example that generates a streaming response on a webpage.
 You can check out [examples/get-started](examples/get-started/) to see the complete example.
 
 ```typescript
+import * as webllm from "@mlc-ai/web-llm";
+
+// We use a label to intentionally keep it simple
+function setLabel(id: string, text: string) {
+  const label = document.getElementById(id);
+  if (label == null) {
+    throw Error("Cannot find label " + id);
+  }
+  label.innerText = text;
+}
+
 async function main() {
   // create a ChatModule,
-  const chat = new ChatModule();
+  const chat = new webllm.ChatModule();
   // This callback allows us to report initialization progress
-  chat.setInitProgressCallback((report: InitProgressReport) => {
+  chat.setInitProgressCallback((report: webllm.InitProgressReport) => {
     setLabel("init-label", report.text);
   });
-  // pick a model. Here we use red-pajama
-  const localId = "RedPajama-INCITE-Chat-3B-v1-q4f32_0";
-  await chat.reload(localId);
+  // You can also try out "RedPajama-INCITE-Chat-3B-v1-q4f32_0"
+  await chat.reload("vicuna-v1-7b-q4f32_0");
 
-  // callback to refresh the streaming response
   const generateProgressCallback = (_step: number, message: string) => {
     setLabel("generate-label", message);
   };
+
   const prompt0 = "What is the capital of Canada?";
-  // generate response
+  setLabel("prompt-label", prompt0);
   const reply0 = await chat.generate(prompt0, generateProgressCallback);
   console.log(reply0);
 
-  const prompt1 = "How about France?";
-  const reply1 = await chat.generate(prompt1, generateProgressCallback)
+  const prompt1 = "Can you write a poem about it?";
+  setLabel("prompt-label", prompt1);
+  const reply1 = await chat.generate(prompt1, generateProgressCallback);
   console.log(reply1);
 
-  // We can print out the status
   console.log(await chat.runtimeStatsText());
 }
+
+main();
 ```
 
 Finally, you can find a complete
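The new README snippet writes its output into elements with the ids `init-label`, `prompt-label`, and `generate-label`, which it expects the surrounding page to provide. A minimal sketch of that host-page wiring (illustrative only, not part of this commit) could look like:

```typescript
// Hypothetical host-page setup for the README snippet: create the label
// elements the example writes to before main() runs. Not part of this commit.
for (const id of ["init-label", "prompt-label", "generate-label"]) {
  const label = document.createElement("p");
  label.id = id; // ids match the setLabel() calls in the example
  document.body.appendChild(label);
}
```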

examples/get-started/README.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -7,7 +7,7 @@ To try it out, you can do the following steps
 - `@mlc-ai/web-llm` points to a valid npm version e.g.
   ```js
   "dependencies": {
-    "@mlc-ai/web-llm": "^0.1.0"
+    "@mlc-ai/web-llm": "^0.1.3"
   }
   ```
   Try this option if you would like to use WebLLM without building it yourself.
````
examples/get-started/src/get_started.ts

Lines changed: 9 additions & 18 deletions
```diff
@@ -1,6 +1,5 @@
-import { ChatModule, InitProgressReport } from "@mlc-ai/web-llm";
+import * as webllm from "@mlc-ai/web-llm";
 
-// We use label to intentionally keep it simple
 function setLabel(id: string, text: string) {
   const label = document.getElementById(id);
   if (label == null) {
@@ -10,37 +9,29 @@ function setLabel(id: string, text: string) {
 }
 
 async function main() {
-  // create a ChatModule,
-  const chat = new ChatModule();
+  const chat = new webllm.ChatModule();
 
-  // This callback allows us to report initialization progress
-  chat.setInitProgressCallback((report: InitProgressReport) => {
+  chat.setInitProgressCallback((report: webllm.InitProgressReport) => {
     setLabel("init-label", report.text);
   });
-  // pick a model, here we use red-pajama
-  // at any time point, you can call reload
-  // to switch the underlying model
-  const localId = "RedPajama-INCITE-Chat-3B-v1-q4f32_0";
-  await chat.reload(localId);
 
-  // this callback allows us to stream result back
+  await chat.reload("vicuna-v1-7b-q4f32_0");
+
   const generateProgressCallback = (_step: number, message: string) => {
     setLabel("generate-label", message);
   };
+
   const prompt0 = "What is the capital of Canada?";
   setLabel("prompt-label", prompt0);
-
-  // generate response
   const reply0 = await chat.generate(prompt0, generateProgressCallback);
   console.log(reply0);
 
-  const prompt1 = "How about France?";
+  const prompt1 = "Can you write a poem about it?";
   setLabel("prompt-label", prompt1);
-  const reply1 = await chat.generate(prompt1, generateProgressCallback)
+  const reply1 = await chat.generate(prompt1, generateProgressCallback);
   console.log(reply1);
 
-  // We can print out the statis
   console.log(await chat.runtimeStatsText());
 }
 
-main()
+main();
```
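One of the comments removed here noted that `reload` can be called at any point to switch the underlying model. A short sketch of that pattern (illustrative only, reusing the two prebuilt model ids that appear in this commit):

```typescript
// Illustrative sketch, not part of this commit: switch the loaded model at
// runtime by calling reload() again with a different prebuilt model id.
async function switchModel(chat: webllm.ChatModule) {
  await chat.reload("RedPajama-INCITE-Chat-3B-v1-q4f32_0");
  const reply = await chat.generate("What is the capital of Canada?");
  console.log(reply);
}
```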

examples/simple-chat/README.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -7,7 +7,7 @@ chat app based on WebLLM. To try it out, you can do the following steps
 - Option 1: `@mlc-ai/web-llm` points to a valid npm version e.g.
   ```js
   "dependencies": {
-    "@mlc-ai/web-llm": "^0.1.0"
+    "@mlc-ai/web-llm": "^0.1.3"
   }
   ```
   Try this option if you would like to use WebLLM.
````

package-lock.json

Lines changed: 75 additions & 0 deletions
Some generated files are not rendered by default.

package.json

Lines changed: 5 additions & 3 deletions
```diff
@@ -22,18 +22,20 @@
   "license": "Apache-2.0",
   "homepage": "https://github.com/mlc-ai/web-llm",
   "devDependencies": {
+    "@mlc-ai/web-tokenizers": "^0.1.0",
     "@rollup/plugin-commonjs": "^20.0.0",
     "@rollup/plugin-node-resolve": "^13.0.4",
     "@typescript-eslint/eslint-plugin": "^5.59.6",
     "@typescript-eslint/parser": "^5.59.6",
+    "@webgpu/types": "^0.1.24",
+    "buffer": "^5.7.1",
     "eslint": "^8.41.0",
+    "process": "^0.11.10",
     "rollup": "^2.56.2",
     "rollup-plugin-ignore": "^1.0.10",
     "rollup-plugin-typescript2": "^0.34.1",
     "tslib": "^2.3.1",
-    "@webgpu/types": "^0.1.24",
     "tvmjs": "file:./tvm_home/web",
-    "typescript": "^4.9.5",
-    "@mlc-ai/web-tokenizers": "^0.1.0"
+    "typescript": "^4.9.5"
   }
 }
```
