Commit 0e17e45

Use Streamline to analyze an LLM running on CPU with llama.cpp: ready for push to the Arm Learning Paths repo
1 parent 540cf22 commit 0e17e45

File tree (1 file changed: +1 −1 lines)

  • content/learning-paths/servers-and-cloud-computing/llama_cpp_streamline


content/learning-paths/servers-and-cloud-computing/llama_cpp_streamline/Conclusion.md

Lines changed: 1 addition & 1 deletion
@@ -9,5 +9,5 @@ layout: learningpathall
  # Conclusion
  By leveraging Streamline together with a good understanding of the llama.cpp code, you can visualize how the LLM executes, which helps you analyze code efficiency and investigate potential optimizations.

- Note that the additional annotation code in llama.cpp and gatord may affect performance.
+ Note that the additional annotation code in llama.cpp and gatord may affect performance.
