Add truss example for Qwen1.5-110B with vllm & streaming support #282
base: main
Commits: 1bcdbbb, ecc0070, 5c90be0, ede8d24, ee5a83a
config.yaml (new file):

```yaml
environment_variables: {CUDA_VISIBLE_DEVICES: "0,1,2,3"}
external_package_dirs: []
model_metadata:
  example_model_input: {"prompt": "How long would it take to reach the sun?"}
model_name: Qwen1.5-vllm-streaming
python_version: py310
requirements:
  - torch==2.1.2
  - transformers==4.37.0
  - vllm
  - asyncio==3.4.3
  - ray
resources:
  accelerator: A100
  cpu: '40'
  memory: 100Gi
  use_gpu: true
secrets: {}
system_packages: []
```
model.py (new file):

```python
import subprocess
import uuid

from transformers import AutoTokenizer
from vllm import SamplingParams
from vllm.engine.arg_utils import AsyncEngineArgs
from vllm.engine.async_llm_engine import AsyncLLMEngine


class Model:
    def __init__(self, model_name="Qwen/Qwen1.5-110B-Chat"):
        self.model_name = model_name
        self.tokenizer = None
        self.sampling_params = None

        # Start a local Ray head node; vLLM uses Ray to coordinate tensor parallelism.
        command = "ray start --head"
        subprocess.check_output(command, shell=True, text=True)
```

Review comment on lines +16 to +17: I don't think this is still necessary with newer vLLM versions.

```python
    def load(self):
        self.model_args = AsyncEngineArgs(
            model=self.model_name,
            dtype='auto',
            enforce_eager=True,
            # Shard the 110B model across 4 GPUs.
            tensor_parallel_size=4,
        )

        self.tokenizer = AutoTokenizer.from_pretrained(self.model_name)

        self.sampling_params = SamplingParams(  # Using default values
            temperature=0.7,
            top_p=0.8,
            repetition_penalty=1.05,
            max_tokens=512,
        )

        self.llm_engine = AsyncLLMEngine.from_engine_args(self.model_args)

    async def predict(self, model_input):
        message = model_input.pop("prompt")

        prompt = [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": message},
        ]

        text = self.tokenizer.apply_chat_template(
            prompt,
            tokenize=False,
            add_generation_prompt=True,
        )

        # Each request needs a unique id for the async engine.
        idx = str(uuid.uuid4().hex)
        vllm_generator = self.llm_engine.generate(text, self.sampling_params, idx)

        async def generator():
            # vLLM yields the full text generated so far; stream only the new suffix.
            full_text = ""
            async for output in vllm_generator:
                text = output.outputs[0].text
                delta = text[len(full_text):]
                full_text = text
                yield delta

        return generator()
```
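Not part of the diff, but a minimal sketch of how the streamed deltas returned by `predict` could be consumed in a local smoke test. The `from model import Model` import path, and running outside the truss server, are assumptions made for illustration:

```python
# Hypothetical local smoke test for the streaming Model above; assumes the file
# is importable as `model` and that 4 GPUs are available for tensor_parallel_size=4.
import asyncio

from model import Model  # assumed import path


async def main():
    m = Model()
    m.load()

    # Mirrors example_model_input from config.yaml.
    stream = await m.predict({"prompt": "How long would it take to reach the sun?"})
    async for delta in stream:
        print(delta, end="", flush=True)


if __name__ == "__main__":
    asyncio.run(main())
```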
Reviewer: To use 4 devices, please up the resources below to grant you access to 4 GPUs by changing the accelerator to `A100:4`. If you don't need 4 devices, you can drop this env var.
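A sketch of what that suggested `resources` change would presumably look like (the surrounding keys are copied from the config above; `A100:4` is the reviewer's suggestion, not yet part of this diff):

```yaml
resources:
  accelerator: A100:4  # request 4 GPUs to match tensor_parallel_size=4
  cpu: '40'
  memory: 100Gi
  use_gpu: true
```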
Author: Sure, let me change the resources section in the config file. However, I am getting this error when dropping this env: [error screenshot omitted]
Author: It turns out this error goes away when passing any random env var, not just `CUDA_VISIBLE_DEVICES`. I just tried setting `{test: "okok"}` in env and model loading was a breeze.
Reviewer: Hi Immar, can you just drop the whole `environment_variables` entry from the config? It should work better that way. I think something is off with the YAML config of this dictionary, and the default should work well.
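For reference, a sketch of the top of the config with that suggestion applied, i.e. the `environment_variables` entry removed entirely (this reflects the suggestion above, not the committed diff):

```yaml
external_package_dirs: []
model_metadata:
  example_model_input: {"prompt": "How long would it take to reach the sun?"}
model_name: Qwen1.5-vllm-streaming
python_version: py310
```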
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Hey Bola, I have made the necessary changes. I think the workflow's awaiting approval from a maintainer.
Best