My llava1.5 7b can get output, but how to set it up to get the same output as transformers #1180
Unanswered. bleedingfight asked this question in Q&A.
Replies: 0 comments
I can use the transformers interface to pass in text and images and get output, but the parameters accepted by TensorRT-LLM are not the same. How can I ensure that the same model produces results consistent with transformers? The parameters of TensorRT-LLM's `model.generate()` do not match those of transformers' `LlavaForConditionalGeneration.generate()`. Is there a one-to-one mapping between the two parameter sets?
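As a starting point, here is a minimal sketch of how such a parameter mapping might look. The transformers kwarg names are real; the TensorRT-LLM-side names (`end_id`, `pad_id`, etc.) are assumptions based on its runtime `SamplingConfig` and may differ by version, so verify them against your installed `tensorrt_llm`. To reproduce transformers output exactly, greedy decoding (transformers `do_sample=False`, roughly `top_k=1` on the TensorRT-LLM side) is the safest configuration, since any sampling introduces randomness.

```python
def hf_to_trtllm(hf_kwargs):
    """Map transformers generate() kwargs to TensorRT-LLM-style names.

    The right-hand names are hypothetical and should be checked against
    the SamplingConfig of your tensorrt_llm version.
    """
    name_map = {
        "max_new_tokens": "max_new_tokens",
        "temperature": "temperature",
        "top_k": "top_k",
        "top_p": "top_p",
        "num_beams": "num_beams",
        "repetition_penalty": "repetition_penalty",
        "eos_token_id": "end_id",   # assumed TensorRT-LLM name
        "pad_token_id": "pad_id",   # assumed TensorRT-LLM name
    }
    # Drop kwargs with no known counterpart instead of guessing.
    return {name_map[k]: v for k, v in hf_kwargs.items() if k in name_map}


# Greedy decoding: the setting most likely to reproduce transformers output.
hf_kwargs = {
    "max_new_tokens": 128,
    "num_beams": 1,
    "top_k": 1,
    "eos_token_id": 2,
    "pad_token_id": 0,
}
print(hf_to_trtllm(hf_kwargs))
```

Even with matched parameters, small numerical differences (FP16/INT8 kernels, fused attention) can still cause the two runtimes to diverge on long generations, so comparing greedy outputs token by token is the most reliable consistency check.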