I think the best approach for you right now is to create multiple LlamaContexts, each with its own LlamaChatSession, load the same chat history into every session, and then treat each session's response as a separate option.
I'll look into making it possible to clone a context while keeping its current state, which would improve the performance of this operation.
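Here's a rough sketch of that approach. This assumes a v3-style node-llama-cpp API (`getLlama`, `model.createContext()`, `LlamaChatSession` with a `contextSequence`, `setChatHistory`) and a local GGUF model; `MODEL_PATH` and `sampleOptions` are placeholder names, and the exact signatures may differ in your installed version:

```typescript
// Hedged sketch: sample several alternative responses by running one
// context + session per candidate, all seeded with the same history.
import {getLlama, LlamaChatSession, type ChatHistoryItem} from "node-llama-cpp";

const MODEL_PATH = "path/to/model.gguf"; // placeholder — point at your model

async function sampleOptions(
    history: ChatHistoryItem[],
    prompt: string,
    optionCount: number
): Promise<string[]> {
    const llama = await getLlama();
    const model = await llama.loadModel({modelPath: MODEL_PATH});

    // Each candidate gets its own context and session, loaded with the
    // same history, so every response is an independent option.
    const responses = await Promise.all(
        Array.from({length: optionCount}, async () => {
            const context = await model.createContext();
            const session = new LlamaChatSession({
                contextSequence: context.getSequence()
            });
            session.setChatHistory(history);

            // A non-zero temperature keeps the options from being identical
            const response = await session.prompt(prompt, {temperature: 0.8});
            await context.dispose();
            return response;
        })
    );

    return responses;
}
```

Note that each context allocates its own KV cache, so memory grows with `optionCount`; a future context-clone feature would avoid re-evaluating the shared history in every context.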
