docs/hub/local-apps.md: 7 additions & 5 deletions
@@ -33,19 +33,21 @@ The best way to check if a local app is supported is to go to the Local Apps set
<Tip>
- To use these local apps, copy the snippets from the model card as above.
+ 👨‍💻 To use these local apps, copy the snippets from the model card as above.
+
+ 👷 If you're building a local app, you can learn about integrating with the Hub in [this guide](https://huggingface.co/docs/hub/en/models-adding-libraries).
</Tip>
### Llama.cpp
- Llama.cpp is a high-performance C/C++ library for running LLMs locally with optimized inference across different hardware. If you are running a CPU, this is the best option.
+ Llama.cpp is a high-performance C/C++ library for running LLMs locally with optimized inference across a wide range of hardware, including CPUs, CUDA and Metal.
**Advantages:**
- - Extremely fast performance for CPU-based models
+ - Extremely fast performance for CPU-based models on multiple CPU families
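
For reference, the snippet that a model card's "Use this model" menu produces for Llama.cpp is typically a single `llama-cli` command. The sketch below assumes a hypothetical GGUF repository and quantization tag; both are placeholders for illustration, not values taken from this change.

```bash
# Minimal sketch of a llama.cpp snippet copied from a model card.
# The repo id and the :Q4_K_M quantization tag are illustrative placeholders.
# -hf tells llama-cli to fetch the GGUF weights directly from the Hugging Face Hub.
llama-cli -hf bartowski/Llama-3.2-1B-Instruct-GGUF:Q4_K_M -p "Why is the sky blue?"
```

Because the backend (CPU, CUDA or Metal) is chosen when llama.cpp is built, the copied command stays the same across hardware.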