Replies: 6 comments
-
Interesting idea, like that one game show I'm not gonna name. I can see the potential benefits.
-
By prompt, do you mean the prompt that would generate the reply? I wonder what we could expect to see from that. I'm assuming the user would have to provide something that would work as an initial prompt, because "Now give me a list of these places." wouldn't be a very effective example. Then again, the assistant could be making a reference to an earlier statement, in which case the correct prompt would be a user reply - unless you limit the assistant messages to the first ones of each tree only.
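For the standalone case, a rough sketch of what reverse prompt prediction could look like, assuming a seq2seq model fine-tuned for reply-to-prompt generation (the checkpoint name here is made up):

```python
# Rough sketch of reverse prompt prediction: given an assistant reply,
# generate candidate user prompts that could have produced it.
# Assumes a seq2seq model fine-tuned for reply->prompt generation;
# the checkpoint name is hypothetical.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "your-org/reply-to-prompt-t5"  # hypothetical checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

# A self-contained reply works; something like "Now give me a list of
# these places." would only be recoverable with the earlier turns
# supplied as extra context.
reply = "1. The Louvre\n2. The Eiffel Tower\n3. Notre-Dame Cathedral"

inputs = tokenizer(reply, return_tensors="pt", truncation=True)
outputs = model.generate(
    **inputs, num_beams=3, num_return_sequences=3, max_new_tokens=64
)
for candidate in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(candidate)
```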
-
Yes
-
This could be used as a way to train and tune doc2query as well - there's a somewhat related issue in this project, #645. It could also be used to connect a given response to more prompts, thereby increasing the dataset for the reward model to train on.
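A minimal sketch of that augmentation side, assuming some prompt generator like the one sketched above; the function name and pair layout are illustrative, not the project's actual schema:

```python
# Sketch: doc2query-style augmentation for the reward model. Each
# existing response gets paired with several predicted prompts, so the
# reward model sees more (prompt, response) pairs. Names and the pair
# layout are assumptions, not the Open-Assistant schema.
from typing import Callable, List, Tuple

def expand_reward_pairs(
    responses: List[str],
    generate_prompts: Callable[[str, int], List[str]],
    prompts_per_response: int = 3,
) -> List[Tuple[str, str]]:
    """Pair each response with several predicted prompts."""
    pairs: List[Tuple[str, str]] = []
    for response in responses:
        for prompt in generate_prompts(response, prompts_per_response):
            pairs.append((prompt, response))
    return pairs
```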
-
This is actually a brilliant idea @erkinalp. This would create diversity for the data. You would then be creating a new sister subtree, though, which could complicate things. I will leave it to the webteam to decide whether this can be implemented. I'm adding the appropriate labels to let that team know.
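To make the sister-subtree concern concrete, a toy sketch; the node structure is invented for illustration and isn't the real message-tree schema:

```python
# Toy model of the "sister subtree" situation: a predicted prompt would
# be attached as a sibling of the original prompt, starting a new branch
# instead of extending the existing one. Illustrative structure only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MessageNode:
    role: str  # "prompter" or "assistant"
    text: str
    parent: Optional["MessageNode"] = None
    children: List["MessageNode"] = field(default_factory=list)

def add_sister_prompt(original: MessageNode, predicted_text: str) -> MessageNode:
    """Attach a predicted prompt as a sibling of the original prompt node."""
    sibling = MessageNode(role="prompter", text=predicted_text, parent=original.parent)
    if original.parent is not None:
        original.parent.children.append(sibling)
    return sibling
```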
-
Maybe the trees created from this should be in a different dataset altogether? That'd make things easier, but I don't know if there's a benefit in putting the predicted prompts in the same dataset.
-
This training scenario might be useful for training latent diffusion models (http://arxiv.org/abs/2212.09462) and combined diffusion+GAN generative architectures.