1 parent d64baee · commit 4b50517
README.md
@@ -138,6 +138,7 @@ Then you will find that the execution speed is as fast as native GPU environment
 ## Links
 
 - [Demo page](https://mlc.ai/web-llm/)
+- If you want to run LLM on native runtime, check out [MLC-LLM](https://github.com/mlc-ai/mlc-llm)
 - You might also be interested in [Web Stable Diffusion](https://github.com/mlc-ai/web-stable-diffusion/).
 
 ## Acknowledgement