
fix: Enable quantization and compilation in the same optimization job via ModelBuilder and add validations to prevent compilation for Llama-3.1 on TRTLLM. #4875


Closed

Conversation

@cj-zhang cj-zhang commented Sep 18, 2024

…ModelBuilder.

Issue #, if available:

Description of changes: TRTLLM containers allow compilation and quantization at the same time, so the validations that enforced their mutual exclusivity have been removed. Compilation of Llama-3.1 on TRTLLM v11 also doesn't currently work, so validations have been added to prevent those jobs from being created.

Testing done: New and updated unit tests for jumpstart_builder and model_builder. Locally tested the TRTLLM/Llama-3.1 validations from both the JS and HF flows.
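For context, a minimal sketch of the call pattern this change enables, requesting quantization and compilation in a single optimization job. The model ID, role, instance type, and S3 path below are illustrative placeholders, and the OverrideEnvironment values follow the LMI container convention rather than anything prescribed by this PR:

from sagemaker.serve.builder.model_builder import ModelBuilder
from sagemaker.serve.builder.schema_builder import SchemaBuilder

# Illustrative values only; a Llama-3.1 model ID would now be rejected by the
# TRTLLM compilation validation added in this PR.
model_builder = ModelBuilder(
    model="meta-textgeneration-llama-3-8b",  # assumed JumpStart model ID
    schema_builder=SchemaBuilder(sample_input="Hello", sample_output="World"),
    role_arn="arn:aws:iam::111122223333:role/ExampleRole",  # hypothetical role
)

# With this change, quantization_config and compilation_config can be passed to
# the same optimize() call instead of raising a mutual-exclusivity error.
optimized_model = model_builder.optimize(
    instance_type="ml.g5.12xlarge",
    quantization_config={"OverrideEnvironment": {"OPTION_QUANTIZE": "awq"}},
    compilation_config={"OverrideEnvironment": {"OPTION_TENSOR_PARALLEL_DEGREE": "4"}},
    output_path="s3://example-bucket/optimized/",  # hypothetical bucket
    accept_eula=True,
)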

Merge Checklist

Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your pull request.

General

  • I have read the CONTRIBUTING doc
  • I certify that the changes I am introducing will be backward compatible, and I have discussed concerns about this, if any, with the Python SDK team
  • I used the commit message format described in CONTRIBUTING
  • I have passed the region in to all S3 and STS clients that I've initialized as part of this change.
  • I have updated any necessary documentation, including READMEs and API docs (if appropriate)

Tests

  • I have added tests that prove my fix is effective or that my feature works (if appropriate)
  • I have added unit and/or integration tests as appropriate to ensure backward compatibility of the changes
  • I have checked that my tests are not configured for a specific region or account (if appropriate)
  • I have used unique_name_from_base to create resource names in integ tests (if appropriate)

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

@cj-zhang cj-zhang requested a review from a team as a code owner September 18, 2024 00:45
@cj-zhang cj-zhang requested a review from Aditi2424 September 18, 2024 00:45
@cj-zhang cj-zhang changed the title bugfix: Enable quantization and compilation in the same optimization job via ModelBuilder. fix: Enable quantization and compilation in the same optimization job via ModelBuilder. Sep 18, 2024
@cj-zhang cj-zhang changed the title fix: Enable quantization and compilation in the same optimization job via ModelBuilder. fix: Enable quantization and compilation in the same optimization job via ModelBuilder and add validations to prevent compilation for Llama-3.1 on TRTLLM. Sep 19, 2024
# TRTLLM is used by Neo if the following are provided:
# 1) a GPU instance type
# 2) compilation config
gpu_instance_families = ["g4", "g5", "p4d"]
Collaborator
Need to consider "g6" and "p5" as well. And I don't think "g4" should be included.

Contributor
g4 is not supported.
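For illustration, a minimal sketch of how the family check might look after incorporating this feedback (dropping "g4", adding "g6" and "p5"); the helper name, the exact family list, and the instance-type parsing are assumptions, not the PR's actual implementation:

# Families suggested in the review above; the exact list is an assumption.
gpu_instance_families = ["g5", "g6", "p4d", "p5"]

def _uses_gpu_instance(instance_type: str) -> bool:
    """Return True if an instance type such as "ml.g5.12xlarge" belongs to a listed GPU family."""
    parts = instance_type.split(".")
    family = parts[1] if len(parts) > 1 else instance_type
    return any(family.startswith(gpu_family) for gpu_family in gpu_instance_families)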

mock_metadata_config = Mock()
mock_metadata_config.resolved_config = {
    "supported_inference_instance_types": ["ml.inf2.48xlarge"],
    "hosting_neuron_model_id": "huggingface-llmneuron-mistral-7b",
Collaborator
Why is this model ID not the same as the ModelBuilder's init argument?

Contributor

@Lokiiiiii Lokiiiiii left a comment

Missing items -

  1. The ability to add JS models as draft models for speculative decoding
  2. If the draft model is a gated model, the customer needs to provide an additional accept_eula parameter during model initialization.
  3. TRT compilation cannot be stacked with Speculative Decoding (SD)
  4. New quantization option - FP8
  5. New quantization option - SmoothQuant
  6. SmoothQuant requires TRT compilation (Llama 3.1 not supported, cannot be stacked with SD)
  7. ModelSharding ...
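As a hedged illustration of items 4-6, the new quantization options would presumably be requested through the same OverrideEnvironment shape already used elsewhere in this PR; the OPTION_QUANTIZE values below follow the LMI container convention and are assumptions, not part of this change:

# FP8 quantization (assumed OPTION_QUANTIZE value):
quantization_config={"OverrideEnvironment": {"OPTION_QUANTIZE": "fp8"}}

# SmoothQuant, which per item 6 also requires TRT compilation:
quantization_config={"OverrideEnvironment": {"OPTION_QUANTIZE": "smoothquant"}}
compilation_config={"OverrideEnvironment": {"OPTION_TENSOR_PARALLEL_DEGREE": "2"}}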


# HF Model ID format = "meta-llama/Meta-Llama-3.1-8B"
# JS Model ID format = "meta-textgeneration-llama-3-1-8b"
llama_3_1_keywords = ["llama-3.1", "llama-3-1"]
Contributor
Can we swap this for a more dynamic check based on JS metadata?

I am not sure if this code path caters to non-JS models as well, so maybe we want to keep this for HF models.
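A minimal sketch of how the keyword check could be applied at validation time; the helper name and the surrounding condition are hypothetical, not the PR's exact code:

# HF Model ID format = "meta-llama/Meta-Llama-3.1-8B"
# JS Model ID format = "meta-textgeneration-llama-3-1-8b"
llama_3_1_keywords = ["llama-3.1", "llama-3-1"]

def _is_llama_3_1(model_id: str) -> bool:
    """Return True if the model ID matches a Llama-3.1 naming pattern (hypothetical helper)."""
    model_id_lower = model_id.lower()
    return any(keyword in model_id_lower for keyword in llama_3_1_keywords)

# Hypothetical usage inside the TRTLLM validation:
# if is_compilation and _is_llama_3_1(model_id):
#     raise ValueError("Compilation is not supported for Llama-3.1 on the TRT-LLM container.")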

},
compilation_config={"OverrideEnvironment": {"OPTION_TENSOR_PARALLEL_DEGREE": "2"}},
env_vars={
    "OPTION_TENSOR_PARALLEL_DEGREE": "1",
Contributor
Why is there a mismatch in TP degree here?

Are we trying to compile with TP=2 and then deploy with TP=1?

This behavior is unsupported. Essentially, LMI will ignore the runtime TP degree in favor of what was compiled.
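If the test is meant to reflect a supported configuration, a sketch with a consistent TP degree (values illustrative, mirroring the fragment above) would look like:

compilation_config={"OverrideEnvironment": {"OPTION_TENSOR_PARALLEL_DEGREE": "2"}},
env_vars={
    "OPTION_TENSOR_PARALLEL_DEGREE": "2",  # keep the runtime TP degree equal to the compiled one
},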

@Lokiiiiii
Contributor
self.artifact_version: str = json_obj["artifact_version"]
- ArtifactVersion is now optional and might not always be available in JS Metadata.
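If artifact_version is indeed optional in the JS metadata, a minimal sketch of the corresponding change; the class name is a hypothetical stand-in for the actual metadata type:

from typing import Any, Dict, Optional

class MetadataConfig:  # hypothetical stand-in for the JS metadata class
    def __init__(self, json_obj: Dict[str, Any]):
        # .get() tolerates metadata that omits "artifact_version" instead of raising KeyError.
        self.artifact_version: Optional[str] = json_obj.get("artifact_version")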
