fix: Enable quantization and compilation in the same optimization job via ModelBuilder and add validations to prevent compilation for Llama-3.1 on TRTLLM. #4875
Conversation
# TRTLLM is used by Neo if the following are provided:
# 1) a GPU instance type
# 2) compilation config
gpu_instance_families = ["g4", "g5", "p4d"]
Need to consider "g6" and "p5" as well. And I don't think "g4" should be included.
g4 is not supported.
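To make the routing rule above concrete, here is a minimal sketch of how the instance-family gate could look if the review suggestions were applied (drop "g4", consider "g6" and "p5"). The helper name and list contents are illustrative assumptions, not the merged code:

```python
from typing import Optional

# Illustrative only: GPU instance families that would route a Neo optimization
# job to the TRTLLM path if the suggestions above were applied (g4 dropped,
# g6/p5 added). Not the merged list.
TRTLLM_GPU_INSTANCE_FAMILIES = ["g5", "g6", "p4d", "p5"]


def uses_trtllm(instance_type: str, compilation_config: Optional[dict]) -> bool:
    """Return True when both a GPU instance type and a compilation config are given."""
    if not compilation_config:
        return False
    # Instance types look like "ml.g5.12xlarge"; the family is the middle token.
    parts = instance_type.split(".")
    family = parts[1] if len(parts) > 1 else instance_type
    return any(family.startswith(prefix) for prefix in TRTLLM_GPU_INSTANCE_FAMILIES)
```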
mock_metadata_config = Mock()
mock_metadata_config.resolved_config = {
    "supported_inference_instance_types": ["ml.inf2.48xlarge"],
    "hosting_neuron_model_id": "huggingface-llmneuron-mistral-7b",
Why is this model ID not the same as the MB's init argument?
Missing items (see the sketch after this list):
- The ability to add JS models as draft models for speculative decoding
- If the draft model is a gated model, the customer needs to provide an additional accept_eula parameter during model initialization.
- TRT compilation cannot be stacked with Speculative Decoding (SD)
- New quantization option - FP8
- New quantization option - SmoothQuant
- SmoothQuant requires TRT compilation (Llama-3.1 not supported, cannot be stacked with SD)
- ModelSharding ...
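As a reference point for the items above, a minimal sketch of stacking quantization and compilation in one optimization job through ModelBuilder.optimize(). The model ID, bucket, instance type, and environment values are placeholders, and the exact option names should be checked against the container documentation; this is not taken from the PR itself:

```python
from sagemaker.serve import ModelBuilder, SchemaBuilder

# Placeholder JumpStart model ID and sample payloads.
model_builder = ModelBuilder(
    model="meta-textgeneration-llama-3-8b",
    schema_builder=SchemaBuilder("sample input", "sample output"),
)

# With this change, quantization and compilation can be requested in the same
# optimization job on a TRTLLM (GPU) target. Values below are illustrative;
# accept_eula is needed when the model (or a gated draft model) requires it.
optimized_model = model_builder.optimize(
    instance_type="ml.g5.12xlarge",
    quantization_config={"OverrideEnvironment": {"OPTION_QUANTIZE": "fp8"}},
    compilation_config={"OverrideEnvironment": {"OPTION_TENSOR_PARALLEL_DEGREE": "2"}},
    accept_eula=True,
    output_path="s3://my-bucket/optimized/",  # placeholder bucket
)
```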
# HF Model ID format = "meta-llama/Meta-Llama-3.1-8B"
# JS Model ID format = "meta-textgeneration-llama-3-1-8b"
llama_3_1_keywords = ["llama-3.1", "llama-3-1"]
Can we swap this for a more dynamic check based on JS metadata?
I am not sure if this code path caters to non-JS models as well. So maybe we want to keep this for HF models.
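For reference, a self-contained sketch of the keyword check under discussion, covering both ID spellings quoted in the diff; whether a JumpStart-metadata-based check should replace it for JS models is left open, as in the thread:

```python
# Keyword match covering both ID formats quoted above; purely illustrative.
LLAMA_3_1_KEYWORDS = ["llama-3.1", "llama-3-1"]  # HF and JS spellings


def is_llama_3_1(model_id: str) -> bool:
    """True for e.g. "meta-llama/Meta-Llama-3.1-8B" or "meta-textgeneration-llama-3-1-8b"."""
    lowered = model_id.lower()
    return any(keyword in lowered for keyword in LLAMA_3_1_KEYWORDS)
```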
},
compilation_config={"OverrideEnvironment": {"OPTION_TENSOR_PARALLEL_DEGREE": "2"}},
env_vars={
    "OPTION_TENSOR_PARALLEL_DEGREE": "1",
Why is there a mismatch in TP degree here?
Are we trying to compile with TP=2 and then deploy with TP=1 ?
This behavior is unsupported. Essentially LMI will ignore the runtime TP Degree in favor of what was compiled.
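To illustrate the point, a hedged example of keeping the tensor-parallel degree consistent between the compilation request and the runtime environment (the value itself is a placeholder):

```python
# Compile and serve with the same TP degree. Per the review comment, LMI ignores
# a conflicting runtime value in favor of the compiled one, so a mismatch is
# unsupported and at best misleading.
TP_DEGREE = "2"  # placeholder value

compilation_config = {"OverrideEnvironment": {"OPTION_TENSOR_PARALLEL_DEGREE": TP_DEGREE}}
env_vars = {"OPTION_TENSOR_PARALLEL_DEGREE": TP_DEGREE}
```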
Issue #, if available:
Description of changes: TRTLLM containers allow compilation and quantization in the same job, so the validations that enforced mutual exclusivity must be removed. Compilation of Llama-3.1 on TRTLLM v11 also doesn't currently work, so validations are added to prevent those jobs from being created.
Testing done: New and updated unit tests on jumpstart_builder and model_builder. Locally tested the TRTLLM/Llama-3.1 validations from both the JS and HF flows.
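As a rough sketch of the kind of validation described above (names and error message are illustrative assumptions, not the merged implementation):

```python
# Illustrative guard: reject an optimization job that would compile Llama-3.1
# on the TRTLLM container. Keyword list mirrors the diff snippet above.
LLAMA_3_1_KEYWORDS = ["llama-3.1", "llama-3-1"]


def validate_trtllm_compilation(model_id: str, has_compilation_config: bool) -> None:
    if has_compilation_config and any(k in model_id.lower() for k in LLAMA_3_1_KEYWORDS):
        raise ValueError(
            "Compilation of Llama-3.1 models is not currently supported on TRTLLM."
        )
```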
Merge Checklist

Put an `x` in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your pull request.

General

Tests

- Used `unique_name_from_base` to create resource names in integ tests (if appropriate)

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.