Note: This issue was copied from ggml-org#6506
Original Author: @tomsanbear
Original Issue Number: ggml-org#6506
Created: 2024-04-05T14:08:32Z
We're doing some work over at https://github.com/huggingface/candle to improve our Metal backend. I've been collecting gputraces for various frameworks, and I was wondering whether there is a documented or known way to generate one for llama.cpp during model inference.
Specifically, I'm referring to this type of debugger output: https://developer.apple.com/documentation/xcode/metal-debugger
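For context, the kind of `.gputrace` file the Metal debugger consumes can be captured programmatically with `MTLCaptureManager`, independent of any framework. The sketch below is a minimal, hedged example of that general mechanism; the function and variable names are placeholders, not llama.cpp APIs, and the capture requires running with the `MTL_CAPTURE_ENABLED=1` environment variable set.

```swift
import Metal

// Hypothetical sketch: capture one span of GPU work into a .gputrace
// bundle using MTLCaptureManager. Requires MTL_CAPTURE_ENABLED=1 in the
// process environment; `queue` is whatever MTLCommandQueue the backend
// submits its compute command buffers to.
func captureTrace(on queue: MTLCommandQueue) throws {
    let manager = MTLCaptureManager.shared()

    let desc = MTLCaptureDescriptor()
    desc.captureObject = queue            // scope the capture to this queue
    desc.destination = .gpuTraceDocument  // write a .gputrace bundle to disk
    desc.outputURL = URL(fileURLWithPath: "/tmp/inference.gputrace")

    try manager.startCapture(with: desc)

    // ... encode and commit the command buffers you want traced ...

    manager.stopCapture()
}
```

The resulting `/tmp/inference.gputrace` bundle can then be opened in Xcode's Metal debugger for inspection.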