Replies: 2 comments 2 replies
-
Hey @mtcl, we are a very small team, so we cannot do everything. Please enter a feature request in the Issues. I'll label it with "help wanted" and we will see what happens.
2 replies
-
This is great, will give it a try!
-
Hey Team,
Amazing work here. Compared to llama.cpp, the biggest feature I see missing is support for tool calling. Do you have any plans to include it on the future roadmap? Or am I missing something and it already exists?
I am forced to use other frameworks, even though I like the inference speeds of ik_llama.cpp, just because I can't live without these features and want to swap it in natively for the OpenAI python client in my project.
I know that I can prompt the model in a particular way to force it to produce a JSON response; I am not looking for that.
Thank you in advance!
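For context, here is a minimal sketch of the kind of flow I mean, using the OpenAI python client against an OpenAI-compatible local endpoint. The base URL, model name, and `get_weather` tool are illustrative assumptions, not anything that exists in ik_llama.cpp today:

```python
import json

# Hypothetical local function we want the model to be able to call.
def get_weather(city: str) -> str:
    # Stub: a real implementation would query a weather API.
    return json.dumps({"city": city, "forecast": "sunny"})

# OpenAI-style tool schema describing get_weather to the model.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the weather forecast for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def dispatch_tool_call(name: str, arguments_json: str) -> str:
    """Route a model-emitted tool call to the matching local function."""
    registry = {"get_weather": get_weather}
    return registry[name](**json.loads(arguments_json))

# Against a server that supports tool calling, the client side would be:
#
#   from openai import OpenAI
#   client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")
#   resp = client.chat.completions.create(
#       model="local-model",  # assumed model name
#       messages=[{"role": "user", "content": "Weather in Paris?"}],
#       tools=TOOLS,
#   )
#   call = resp.choices[0].message.tool_calls[0]
#   result = dispatch_tool_call(call.function.name, call.function.arguments)
```

The point is that the server has to parse the model's tool-call output and return it in the structured `tool_calls` field, so the client code needs no model-specific prompting.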