Replies: 1 comment 1 reply
-
In my experience you generally can't call the LLM with a system message only. When we want the system to initiate, what we usually do is send a very simple user message like "Start the conversation" and hide it from the UI. You should be able to use markers like |
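A minimal sketch of that hidden-kickoff pattern. The RubyLLM calls are shown in comments and assume its `RubyLLM.chat` / `chat.ask` API; `HIDDEN_KICKOFF` and `visible_messages` are illustrative names, not part of the library:

```ruby
# Synthetic user message that triggers the model's opening turn.
HIDDEN_KICKOFF = "Start the conversation".freeze

# Drop the synthetic kickoff before rendering the transcript in the UI.
def visible_messages(messages)
  messages.reject { |m| m[:role] == :user && m[:content] == HIDDEN_KICKOFF }
end

# Usage (requires the ruby_llm gem and an API key; API assumed):
#   chat = RubyLLM.chat
#   chat.with_instructions("Greet the user and offer to help.")
#   chat.ask(HIDDEN_KICKOFF)   # model produces the opening message
#   render visible_messages(
#     chat.messages.map { |m| { role: m.role, content: m.content } }
#   )
```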
-
Hi there, I've been using RubyLLM with great satisfaction in a new project for a few weeks now.
Recently, while experimenting, I ran into a problem where an LLM went into a loop of repeatedly calling the same tool. Clearly this was no good, and I wouldn't want it to happen in production since it's both bad UX and just plain expensive! My solution was to count tool calls within the ask block: if too many happen, raise an exception that breaks out of the loop, and then I can ask the user to abort or continue.
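For context, the guard I described looks roughly like this. It's a sketch: `ToolCallBudget` and `ToolLoopDetected` are made-up names, and the callback wiring in the comment assumes RubyLLM exposes a tool-call hook:

```ruby
class ToolLoopDetected < StandardError; end

# Counts invocations per tool and raises once any single tool
# exceeds the allowed number of calls in one turn.
class ToolCallBudget
  def initialize(limit)
    @limit = limit
    @counts = Hash.new(0)
  end

  def record!(tool_name)
    @counts[tool_name] += 1
    if @counts[tool_name] > @limit
      raise ToolLoopDetected, "#{tool_name} called #{@counts[tool_name]} times"
    end
  end
end

# Usage (callback API assumed):
#   budget = ToolCallBudget.new(5)
#   chat.on_tool_call { |tool_call| budget.record!(tool_call.name) }
#   begin
#     chat.ask(user_message)
#   rescue ToolLoopDetected
#     # surface an abort/continue prompt to the user
#   end
```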
My problem is that to continue I need to prompt the model, but the model seems to get confused by the synthetic prompt. For instance, I tried "&lt;resume_generation /&gt;" as the message sent when the user selects continue, but the model responded with something like "I see you are interested in generating a resume now. Would you like to work on that instead?". LOL.
I think this would work much better if I could trigger generation in the context of a system message rather than a user message. Starting a conversation with a system message would also enable a workflow where the system initiates the conversation with the user, which would be a useful capability in its own right.
Thoughts?
Cheers,
Darrick