
Conversation

@danbev danbev commented May 9, 2025

This commit deprecates the WHISPER_CCACHE CMake option in favor of the GGML_CCACHE option.

The motivation for this change is that currently, whether or not WHISPER_CCACHE is set, the output message from ggml states that to enable ccache you need to set GGML_CCACHE, which can be confusing. This also seems to be in line with what llama.cpp does, which does not have a LLAMA_CCACHE option as far as I know.
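The usual way to deprecate a CMake option while keeping old build invocations working is a small helper that warns when the old option is set and forwards its value to the new one. A minimal sketch of that pattern (the helper name, message wording, and file placement are illustrative assumptions, not taken from this PR's diff):

```cmake
# Hypothetical helper in the top-level CMakeLists.txt: warn about a
# deprecated option and forward its value to the replacement option.
function(whisper_option_depr TYPE OLD NEW)
    if (DEFINED ${OLD})
        message(${TYPE} "${OLD} is deprecated and will be removed in the future.\nUse ${NEW} instead.")
        set(${NEW} ${${OLD}} PARENT_SCOPE)
    endif()
endfunction()

# Forward the deprecated whisper.cpp option to the ggml option.
whisper_option_depr(WARNING WHISPER_CCACHE GGML_CCACHE)
```

With this in place, `cmake -DWHISPER_CCACHE=OFF ...` would still take effect (via GGML_CCACHE) while printing a deprecation warning, so the ggml-side message about GGML_CCACHE no longer contradicts the user's configuration.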

Resolves: #3063

danbev added 2 commits May 9, 2025 13:00
This commit deprecates the WHISPER_CCACHE CMake option in favor of
the GGML_CCACHE option.

The motivation for this change is that currently, whether or not
WHISPER_CCACHE is set, the output message from ggml states that to
enable ccache you need to set GGML_CCACHE, which can be confusing.
This also seems to be in line with what llama.cpp does, which does
not have a LLAMA_CCACHE option as far as I know.

Resolves: ggml-org#3063
@danbev danbev merged commit 288304e into ggml-org:master May 9, 2025
@danbev danbev deleted the deprecate-whisper-ccache branch May 22, 2025 04:21


Development

Successfully merging this pull request may close these issues.

CCache not found warning persists even when GGML_CCACHE is set to off
