Extract class log probabilities from LLM classifier using vLLM #3680
CoolFish88 asked this question in Q&A (unanswered)
Hello,
I am using a Mistral 7B Instruct v3 model to classify text documents. The output of the model takes the form of multi-token class labels. Currently, I am using the djl_python.lmi_vllm.vllm_async_service entrypoint, which runs blazing fast and returns top-K log probabilities (if requested).
I am interested in extracting the log probabilities for the class labels, since the top-K tokens (where K is set to the number of classes) may be different from the tokens that make up the class labels.
What would be the best approach to do this?
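One way to approximate this with the top-K log probabilities the endpoint already returns is to sum each label's per-token log probabilities along the generated sequence and pick the best-scoring label. Below is a minimal pure-Python sketch; the function names `score_label` and `classify` are mine, and it assumes you already hold the per-step `token_id -> logprob` dicts that a `logprobs=K` request yields (tokens missing from the top-K dict fall back to a pessimistic floor):

```python
def score_label(label_token_ids, per_step_logprobs, floor=-100.0):
    """Sum the log probabilities of a multi-token label.

    per_step_logprobs: list of dicts mapping token_id -> logprob,
    one dict per generation step (as returned when logprobs=K is
    requested). Tokens absent from the top-K dict get `floor`.
    """
    total = 0.0
    for step, token_id in enumerate(label_token_ids):
        total += per_step_logprobs[step].get(token_id, floor)
    return total


def classify(label_to_token_ids, per_step_logprobs):
    """Return the label whose token sequence has the highest summed
    logprob, plus the full score table."""
    scores = {
        label: score_label(ids, per_step_logprobs)
        for label, ids in label_to_token_ids.items()
    }
    best = max(scores, key=scores.get)
    return best, scores


# Illustrative usage with made-up token IDs and logprobs:
per_step = [{5: -0.1, 7: -2.3}, {6: -0.2, 8: -1.0}]
best, scores = classify({"positive": [5, 6], "negative": [7, 8]}, per_step)
```

Note the caveat: if a label token falls outside the top K at some step, its true log probability is unknown and the floor is only a bound, which is exactly why constraining generation (as discussed below) may be more reliable.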
I am thinking about the following solution, but require some expertise, as evidenced by the associated questions:

- Restrict generation via `allowed_token_ids` to a set of single-token class labels, then write a `custom_output_formatter` function that maps each single label token back to its original multi-token equivalent.

Thank you for looking into this!
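The mapping step of that idea could look like the sketch below. This is only an illustration: the surrogate token IDs and label names are made up, and the formatter signature is a simplification, not the actual `djl_python` output-formatter interface. In vLLM itself, the constrained decoding half would pair with something like `SamplingParams(allowed_token_ids=[...], logprobs=<num_classes>, max_tokens=1)` (assuming a vLLM version whose `SamplingParams` supports `allowed_token_ids`):

```python
# Map each hypothetical single-token surrogate ID back to its full label.
SURROGATE_TO_LABEL = {
    1234: "invoice",         # made-up token IDs for illustration
    5678: "purchase_order",
}


def custom_output_formatter(generated_token_ids, token_logprobs):
    """Map the single emitted surrogate token back to its multi-token
    label, returning the label together with its log probability.

    generated_token_ids: token IDs emitted by the constrained decode
    (length 1 when max_tokens=1).
    token_logprobs: list of per-step token_id -> logprob dicts.
    """
    token_id = generated_token_ids[0]
    return {
        "label": SURROGATE_TO_LABEL[token_id],
        "logprob": token_logprobs[0].get(token_id),
    }


# Illustrative usage:
result = custom_output_formatter([1234], [{1234: -0.05, 5678: -3.0}])
```

With `allowed_token_ids` plus `logprobs` set to the number of classes, every returned top-K entry is by construction a class token, so the logprobs can be renormalized over the classes if calibrated probabilities are needed.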