Labels: new feature (new feature or request), released on @beta, roadmap (part of the roadmap for node-llama-cpp: https://github.com/orgs/withcatai/projects/1)
Description
Feature Description
Hide all the logs that llama.cpp produces about model loading, parameters, etc.
The Solution
Either suppress the output entirely, or capture it and hand it to the caller via a callback when the model is run internally (see the sketch below).
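For illustration, here is a minimal sketch of what such a callback option could look like. The `logger` option and its signature are hypothetical (this is the feature being requested, not the library's actual API); `LlamaModel` and `modelPath` are assumed from the library's model-loading API.

```ts
import {LlamaModel} from "node-llama-cpp";

const model = new LlamaModel({
    modelPath: "./models/model.gguf",

    // Hypothetical option: receive llama.cpp's log lines here instead of
    // having them printed to stderr. Passing a no-op (or omitting logs by
    // default) would silence them entirely.
    logger(level: "error" | "warn" | "info" | "debug", message: string) {
        // Route llama.cpp output wherever the app wants, or drop it.
        if (level === "error")
            console.error(`[llama.cpp] ${message}`);
    }
});
```

On the native side, llama.cpp exposes `llama_log_set(callback, user_data)` in `llama.h` for exactly this purpose, so the binding could forward each log line to a JavaScript callback rather than letting it reach stderr.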
Considered Alternatives
In general, non-optional logging in any library is bad practice.
Additional Context
Thanks for your work ❤️
Related Features to This Feature Request
- Metal support
- CUDA support
- Grammar
Are you willing to resolve this issue by submitting a Pull Request?
Yes, I have the time, and I know how to start.