[TRTLLM-10591][feat] Add programmatic model-feature support matrix & runtime auto-fallback support #10127
venkywonka wants to merge 15 commits into NVIDIA:main
Conversation
/bot run --disable-fail-fast

/bot run --disable-fail-fast
Force-pushed 389910f to a276656
/bot run --disable-fail-fast

PR_Github #29610 [ run ] triggered by Bot. Commit:

PR_Github #29610 [ run ] completed with state
Force-pushed 7b38734 to 6f35665
📝 Walkthrough

This PR introduces a model support matrix system with automatic feature fallback handling. It defines structured support data for model architectures and features, adds a fallback mechanism to automatically disable unsupported features during LLM initialization, and provides documentation generation and validation utilities. The behavior is controlled via a new LLM API flag.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant BaseLLM as BaseLLM.__init__
    participant ModelLoader
    participant HFConfig as HF Model Config
    participant SupportMatrix as Support Matrix
    participant UserArgs as User Args
    Client->>BaseLLM: __init__(args)
    BaseLLM->>ModelLoader: load model
    ModelLoader->>HFConfig: get HuggingFace config
    BaseLLM->>BaseLLM: _apply_model_feature_fallbacks()
    BaseLLM->>HFConfig: read architecture
    BaseLLM->>SupportMatrix: get_status(architecture, feature)
    alt Feature Unsupported (NO/NA)
        SupportMatrix-->>BaseLLM: SupportStatus.NO/NA
        BaseLLM->>BaseLLM: check if feature enabled in args
        BaseLLM->>UserArgs: disable feature flag
        BaseLLM->>BaseLLM: emit warning
    else Feature Supported
        SupportMatrix-->>BaseLLM: SupportStatus.YES
        BaseLLM->>UserArgs: keep feature enabled
    end
    BaseLLM-->>Client: return LLM instance
```
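The lookup-and-disable flow in the diagram can be sketched with a minimal, stdlib-only model. The names and matrix contents below (`SupportStatus`, `Feature`, `get_status`, `apply_fallbacks`, the `Args` knobs) are illustrative stand-ins, not the PR's actual definitions:

```python
from dataclasses import dataclass, field
from enum import Enum

class SupportStatus(Enum):
    YES = "Yes"
    NO = "No"
    NA = "N/A"

class Feature(Enum):
    CHUNKED_PREFILL = "chunked_prefill"
    KV_CACHE_REUSE = "kv_cache_reuse"

# Hypothetical matrix: architecture -> feature -> status.
MATRIX = {
    "Qwen3NextForCausalLM": {
        Feature.CHUNKED_PREFILL: SupportStatus.YES,
        Feature.KV_CACHE_REUSE: SupportStatus.NO,
    },
}

def get_status(arch, feature):
    """Return the support status, or None for unknown architectures (fallback becomes a no-op)."""
    return MATRIX.get(arch, {}).get(feature)

@dataclass
class Args:
    enable_chunked_prefill: bool = True
    enable_block_reuse: bool = True
    warnings: list = field(default_factory=list)

def apply_fallbacks(arch, args):
    # Map each feature to the user-facing knob that enables it.
    knobs = {
        Feature.CHUNKED_PREFILL: "enable_chunked_prefill",
        Feature.KV_CACHE_REUSE: "enable_block_reuse",
    }
    for feature, attr in knobs.items():
        status = get_status(arch, feature)
        if status in (SupportStatus.NO, SupportStatus.NA) and getattr(args, attr):
            setattr(args, attr, False)  # auto-disable and warn instead of failing
            args.warnings.append(f"{feature.value} unsupported for {arch}; disabled {attr}")

args = Args()
apply_fallbacks("Qwen3NextForCausalLM", args)
print(args.enable_block_reuse, args.enable_chunked_prefill)  # False True
```

An unknown architecture leaves every knob untouched, which matches the "at least as good as today" behavior discussed later in the thread.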
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks: ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@tensorrt_llm/llmapi/model_support_matrix.py`:
- Around line 76-462: The MULTIMODAL_MATRIX uses the key "LlavaLlamaModel
(VILA)" which doesn't match the HF architecture name "LlavaLlamaModel" used by
BaseLLM._apply_model_feature_fallbacks (cfg.architectures[0]), causing
get_cell() to return None and skip fallbacks; fix by renaming the matrix key to
"LlavaLlamaModel" (and update MULTIMODAL_ARCH_ORDER to the same name) or add an
alias mapping lookup in the fallback path so
BaseLLM._apply_model_feature_fallbacks resolves "LlavaLlamaModel" to the
existing "LlavaLlamaModel (VILA)" entry before calling get_cell().
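The alias-mapping option can be implemented as a small resolution step before the matrix lookup. The alias table and simplified matrix below are a hypothetical sketch, not the PR's code:

```python
# Hypothetical display-name aliases: HF architecture name -> matrix key.
ARCH_ALIASES = {
    "LlavaLlamaModel": "LlavaLlamaModel (VILA)",
}

# Simplified stand-in for MULTIMODAL_MATRIX.
MULTIMODAL_MATRIX = {
    "LlavaLlamaModel (VILA)": {"kv_cache_reuse": "NO"},
}

def get_cell(architecture: str, feature: str):
    """Resolve aliases first so HF config names still hit display-style matrix keys."""
    key = ARCH_ALIASES.get(architecture, architecture)
    row = MULTIMODAL_MATRIX.get(key)
    return None if row is None else row.get(feature)

print(get_cell("LlavaLlamaModel", "kv_cache_reuse"))  # NO
```

This keeps the human-friendly "(VILA)" key in the rendered table while the runtime fallback path still resolves the bare `cfg.architectures[0]` name.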
In `@tests/unittest/llmapi/test_model_feature_fallbacks.py`:
- Around line 254-262: The helper function mock_status has an unused parameter
(arch) which triggers Ruff ARG001; update the signature of mock_status used in
the patch around BaseLLM._apply_model_feature_fallbacks (and the similar helper
at lines ~303-311) to accept a placeholder for unused args (for example: def
mock_status(_arch, feature): or def mock_status(*_, feature): or def
mock_status(*args, feature):) so the linter no longer flags the unused argument
while preserving the side_effect behavior for get_support_status.
🧹 Nitpick comments (6)
tensorrt_llm/llmapi/llm_args.py (1)
2112-2119: Clarify “silently … with a warning” in the `auto_disable_unsupported_features` description. Right now it reads as both silent and warning-emitting.

Proposed fix

```diff
- "features are silently disabled with a warning instead of causing errors.",
+ "features are disabled with a warning instead of causing errors.",
```

tensorrt_llm/llmapi/model_support_matrix.py (2)
486-505: Guard `_render_md_table()` against row/header length mismatches. Right now a malformed row will blow up with an `IndexError` in `_row()`. A small validation makes failures actionable.

Proposed fix

```diff
 def _render_md_table(headers: Sequence[str], rows: Sequence[Sequence[str]]) -> str:
+    for row in rows:
+        if len(row) != len(headers):
+            raise ValueError(
+                f"Markdown table row has {len(row)} cells but expected {len(headers)}: {row}"
+            )
     widths = [len(h) for h in headers]
     for row in rows:
         for i, cell in enumerate(row):
             widths[i] = max(widths[i], len(cell))
```
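A self-contained version of that guard looks like the following; the rendering body is simplified here and is not the module's actual implementation:

```python
from typing import Sequence

def render_md_table(headers: Sequence[str], rows: Sequence[Sequence[str]]) -> str:
    # Fail fast with an actionable message instead of an IndexError deep in rendering.
    for row in rows:
        if len(row) != len(headers):
            raise ValueError(
                f"Markdown table row has {len(row)} cells but expected {len(headers)}: {row}"
            )
    widths = [len(h) for h in headers]
    for row in rows:
        for i, cell in enumerate(row):
            widths[i] = max(widths[i], len(cell))

    def line(cells):
        return "| " + " | ".join(c.ljust(w) for c, w in zip(cells, widths)) + " |"

    sep = "|" + "|".join("-" * (w + 2) for w in widths) + "|"
    return "\n".join([line(headers), sep] + [line(r) for r in rows])
```

The early `ValueError` turns a malformed matrix row into a one-line diagnosis at generation time rather than a stack trace inside the column-width loop.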
507-576: Use consistent heading levels in generated markdown. You already have `# Supported Models`; consider making subsequent top-level sections `##` to avoid multiple H1s, and fix the missing space in “Matrix(Key Models)”.

Proposed fix

```diff
- out.append("## Model-Feature Support Matrix(Key Models)")
+ out.append("## Model-Feature Support Matrix (Key Models)")
@@
- out.append("# Multimodal Feature Support Matrix (PyTorch Backend)")
+ out.append("## Multimodal Feature Support Matrix (PyTorch Backend)")
```

tensorrt_llm/llmapi/llm.py (1)
258-347: Fallback logic is solid; please verify backend scope and fix Ruff B009.

- The matrix/module is framed as PyTorch backend support, but `_apply_model_feature_fallbacks()` is invoked for both TRT and Torch builds; please confirm that's intentional and that statuses are valid for TRT too.
- Ruff B009: `hasattr()` already guards the attribute, so `getattr()` isn't buying safety here.

Proposed fix (Ruff B009)

```diff
 _disable_if_unsupported(
     SupportFeature.OVERLAP_SCHEDULER,
     enabled=hasattr(self.args, "disable_overlap_scheduler")
-    and getattr(self.args, "disable_overlap_scheduler") is False,
+    and self.args.disable_overlap_scheduler is False,
     arg_path="disable_overlap_scheduler",
-    disable=lambda: setattr(self.args, "disable_overlap_scheduler", True
-                            ),
+    disable=lambda: setattr(self.args, "disable_overlap_scheduler", True),
 )
```

tests/unittest/llmapi/test_model_feature_fallbacks.py (1)
81-385: Optional: consider parametrizing the repetitive “disable/no-change” cases. This would shrink the file and make it easier to extend when new features are added.
tests/unittest/tools/test_supported_models_sync.py (1)
28-38: Consider extracting shared module loading logic. This function duplicates the module loading logic from `docs/source/helper.py` (lines 354-362). While test isolation is sometimes preferred, you could consider:

- Extracting the module loading into a shared utility in `tensorrt_llm/llmapi/model_support_matrix.py` itself
- Accepting the duplication for test independence
The current approach works correctly, so this is purely a maintainability consideration.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- docs/source/conf.py
- docs/source/helper.py
- docs/source/models/supported-models.md
- tensorrt_llm/llmapi/llm.py
- tensorrt_llm/llmapi/llm_args.py
- tensorrt_llm/llmapi/model_support_matrix.py
- tests/integration/test_lists/test-db/l0_a100.yml
- tests/unittest/api_stability/references/llm.yaml
- tests/unittest/llmapi/test_model_feature_fallbacks.py
- tests/unittest/tools/test_supported_models_sync.py
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces. Do not use tabs
Always maintain the namespace when importing Python modules, even if only one class or function from a module is used
Python filenames should use snake_case (e.g.,some_file.py)
Python classes should use PascalCase (e.g.,class SomeClass)
Python functions and methods should use snake_case (e.g.,def my_awesome_function():)
Python local variables should use snake_case, with prefixkfor variable names that start with a number (e.g.,k_99th_percentile)
Python global variables should use upper snake_case with prefixG(e.g.,G_MY_GLOBAL)
Python constants should use upper snake_case (e.g.,MY_CONSTANT)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Use comments in Python for code within a function, or interfaces that are local to a file
Use Google-style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with the format"""<type>: Description"""
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of errors possible
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block for the main logic
Files:
- tests/unittest/llmapi/test_model_feature_fallbacks.py
- docs/source/conf.py
- tensorrt_llm/llmapi/llm_args.py
- docs/source/helper.py
- tensorrt_llm/llmapi/llm.py
- tensorrt_llm/llmapi/model_support_matrix.py
- tests/unittest/tools/test_supported_models_sync.py
**/*.{cpp,cc,cxx,h,hpp,hxx,cu,cuh,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
All TensorRT-LLM source files (.cpp, .h, .cu, .py, and other source files) should contain an NVIDIA copyright header with the year of latest meaningful modification
Files:
- tests/unittest/llmapi/test_model_feature_fallbacks.py
- docs/source/conf.py
- tensorrt_llm/llmapi/llm_args.py
- docs/source/helper.py
- tensorrt_llm/llmapi/llm.py
- tensorrt_llm/llmapi/model_support_matrix.py
- tests/unittest/tools/test_supported_models_sync.py
**/*.{md,rst}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
When documenting CLI commands for TensorRT-LLM tools like
trtllm-serve,trtllm-bench, ortrtllm-eval, prefer using--configover--extra_llm_api_optionsfor specifying configuration files
Files:
docs/source/models/supported-models.md
🧠 Learnings (12)
📚 Learning: 2025-08-14T15:38:01.771Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: cpp/tensorrt_llm/pybind/thop/bindings.cpp:55-57
Timestamp: 2025-08-14T15:38:01.771Z
Learning: In TensorRT-LLM Python bindings, tensor parameter collections like mla_tensor_params and spec_decoding_tensor_params are kept as required parameters without defaults to maintain API consistency, even when it might affect backward compatibility.
Applied to files:
- tests/unittest/api_stability/references/llm.yaml
- tensorrt_llm/llmapi/llm_args.py
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Applied to files:
- tests/unittest/llmapi/test_model_feature_fallbacks.py
- tests/integration/test_lists/test-db/l0_a100.yml
- docs/source/models/supported-models.md
- tensorrt_llm/llmapi/llm.py
- tensorrt_llm/llmapi/model_support_matrix.py
- tests/unittest/tools/test_supported_models_sync.py
📚 Learning: 2025-08-29T14:07:45.863Z
Learnt from: EmmaQiaoCh
Repo: NVIDIA/TensorRT-LLM PR: 7370
File: tests/unittest/trt/model_api/test_model_quantization.py:24-27
Timestamp: 2025-08-29T14:07:45.863Z
Learning: In TensorRT-LLM's CI infrastructure, pytest skip markers (pytest.mark.skip) are properly honored even when test files have __main__ blocks that call test functions directly. The testing system correctly skips tests without requiring modifications to the __main__ block execution pattern.
Applied to files:
tests/unittest/llmapi/test_model_feature_fallbacks.py
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.
Applied to files:
- tests/unittest/llmapi/test_model_feature_fallbacks.py
- tests/integration/test_lists/test-db/l0_a100.yml
- tests/unittest/tools/test_supported_models_sync.py
📚 Learning: 2025-08-26T09:49:04.956Z
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").
Applied to files:
- tests/unittest/llmapi/test_model_feature_fallbacks.py
- tests/integration/test_lists/test-db/l0_a100.yml
📚 Learning: 2025-09-17T02:48:52.732Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7781
File: tests/integration/test_lists/waives.txt:313-313
Timestamp: 2025-09-17T02:48:52.732Z
Learning: In TensorRT-LLM, `tests/integration/test_lists/waives.txt` is specifically for waiving/skipping tests, while other test list files like those in `test-db/` and `qa/` directories are for different test execution contexts (pre-merge, post-merge, QA tests). The same test appearing in both waives.txt and execution list files is intentional - the test is part of test suites but will be skipped due to the waiver.
Applied to files:
tests/integration/test_lists/test-db/l0_a100.yml
📚 Learning: 2025-09-18T05:41:45.847Z
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7120
File: tensorrt_llm/llmapi/llm.py:690-697
Timestamp: 2025-09-18T05:41:45.847Z
Learning: Kimi model support is currently focused on the PyTorch backend path, with TRT path support potentially coming later.
Applied to files:
docs/source/models/supported-models.md
📚 Learning: 2025-08-15T06:46:53.813Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6767
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-15T06:46:53.813Z
Learning: In the TensorRT-LLM KV cache manager, SWA (Sliding Window Attention) combined with beam search is currently in a broken/non-functional state and is planned for future rework. During preparatory refactoring phases, code related to SWA+beam search may intentionally remain in a non-working state until the broader rework is completed.
Applied to files:
docs/source/models/supported-models.md
📚 Learning: 2025-08-14T15:43:23.107Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: tensorrt_llm/_torch/attention_backend/trtllm.py:259-262
Timestamp: 2025-08-14T15:43:23.107Z
Learning: In TensorRT-LLM's attention backend, tensor parameters in the plan() method are assigned directly without validation (dtype, device, contiguity checks). This maintains consistency across all tensor inputs and follows the pattern of trusting callers to provide correctly formatted tensors.
Applied to files:
docs/source/models/supported-models.md
📚 Learning: 2025-08-14T06:36:40.701Z
Learnt from: timlee0212
Repo: NVIDIA/TensorRT-LLM PR: 6886
File: tensorrt_llm/_torch/models/modeling_deepseekv3.py:0-0
Timestamp: 2025-08-14T06:36:40.701Z
Learning: In DeepSeek V3 model (tensorrt_llm/_torch/models/modeling_deepseekv3.py), the disagreement between AllReduce.__init__ guard and _compute_mlp_tp_size logic for MNNVL usage is expected by design. The AllReduce component and MLP TP-size computation intentionally use different criteria for MNNVL availability decisions.
Applied to files:
docs/source/models/supported-models.md
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which can contain default `cuda_graph_config` values, so `llm_args` may already have this config before the extra options processing.
Applied to files:
tensorrt_llm/llmapi/llm_args.py
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
Repo: NVIDIA/TensorRT-LLM PR: 6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.
Applied to files:
- tensorrt_llm/llmapi/llm.py
- tensorrt_llm/llmapi/model_support_matrix.py
- tests/unittest/tools/test_supported_models_sync.py
🧬 Code graph analysis (5)

tests/unittest/llmapi/test_model_feature_fallbacks.py (2)
- tensorrt_llm/llmapi/llm.py (1): `_apply_model_feature_fallbacks` (258-347)
- tensorrt_llm/llmapi/model_support_matrix.py (2): `SupportStatus` (25-29), `Feature` (32-49)

docs/source/conf.py (1)
- docs/source/helper.py (1): `generate_supported_models` (349-374)

docs/source/helper.py (1)
- tensorrt_llm/llmapi/model_support_matrix.py (1): `render_supported_models_markdown` (507-576)

tensorrt_llm/llmapi/llm.py (2)
- tensorrt_llm/llmapi/model_support_matrix.py (3): `Feature` (32-49), `SupportStatus` (25-29), `get_status` (479-483)
- tensorrt_llm/llmapi/llm_utils.py (2): `ModelLoader` (97-600), `load_hf_model_config` (589-600)

tests/unittest/tools/test_supported_models_sync.py (1)
- tensorrt_llm/llmapi/model_support_matrix.py (1): `render_supported_models_markdown` (507-576)
🪛 Ruff (0.14.11)
tests/unittest/llmapi/test_model_feature_fallbacks.py
254-254: Unused function argument: arch
(ARG001)
303-303: Unused function argument: arch
(ARG001)
docs/source/helper.py
358-358: Avoid specifying long messages outside the exception class
(TRY003)
tensorrt_llm/llmapi/llm.py
328-328: Do not call getattr with a constant attribute value. It is not any safer than normal property access.
Replace getattr with attribute access
(B009)
tests/unittest/tools/test_supported_models_sync.py
32-32: Avoid specifying long messages outside the exception class
(TRY003)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (16)
tests/unittest/api_stability/references/llm.yaml (1)
234-237: API stability reference update looks correct for the new flag.

tensorrt_llm/llmapi/model_support_matrix.py (1)

16-74: Clean stdlib-only primitives for docs + runtime lookups. Frozen dataclasses + `FeatureCell.render()` make the table rendering straightforward.

tensorrt_llm/llmapi/llm.py (3)

47-49: Imports for support matrix utilities look fine.

905-910: HF config loading: using `_hf_model_dir` + `trust_remote_code` is the right direction.

1017-1022: Good wiring: fallbacks run immediately after HF config load. This is the right spot (before input processor / executor config consumption).
Also applies to: 1212-1217
tests/unittest/llmapi/test_model_feature_fallbacks.py (1)
31-385: Test coverage is well targeted (esp. multimodal precedence + “unknown architecture” no-ops).

tests/integration/test_lists/test-db/l0_a100.yml (1)

24-24: LGTM! Adding the sync validation test to the pre-merge test list is appropriate. This ensures docs/model matrix consistency is verified before merging. Based on learnings, test duplication across list files is common when tests serve different CI contexts.
docs/source/conf.py (2)
180-181: LGTM! The import extension follows the existing pattern for importing helper functions.

193-193: LGTM! Calling `generate_supported_models()` during Sphinx setup ensures the model support documentation stays synchronized with the programmatic matrix. The placement after `generate_llmapi()` maintains logical ordering of documentation generation steps.

docs/source/models/supported-models.md (2)

2-3: LGTM! The generation comments clearly indicate this file is auto-generated and provide instructions for regeneration. This is good practice for maintaining generated documentation.

8-45: LGTM! The table formatting is consistent and the content aligns with the programmatic model support matrix. The standardized column widths and alignment improve readability.
docs/source/helper.py (2)
5-5: LGTM! The `sys` import is needed for the `sys.modules` registration in the new function.

349-374: LGTM! The implementation follows the established pattern in this file for dynamic module loading (similar to `update_version()`). Key aspects are well-handled:

- `sys.modules` registration ensures dataclass/type evaluation works during module execution
- Conditional write avoids unnecessary file modifications
- Clear error messaging if module loading fails

The static analysis hint (TRY003) about the exception message is a style preference; the inline message is appropriate here for debugging purposes.
tests/unittest/tools/test_supported_models_sync.py (3)
1-15: LGTM! The copyright header is properly formatted with the 2025 year and Apache 2.0 license.

41-56: LGTM! The sync test correctly validates that the committed markdown matches the generated content. The `.strip()` comparison handles trailing whitespace differences gracefully, and the error message provides actionable guidance.

58-103: LGTM! The invariant tests provide comprehensive validation:

- Order lists match matrix keys (catches missing/extra rows)
- Duplicate detection in order lists and SUPPORTED_MODELS_PYTORCH
- Footnote reference validation ensures all used footnotes are defined

One observation: the footnote check (line 101) looks for a `{fn}:` pattern. If footnote definitions use the format `[^1]: text`, this check correctly identifies whether a referenced footnote (e.g., `[^1]`) has a corresponding definition starting with `[^1]:`.
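The footnote invariant described here can be expressed as a small standalone check; this is a sketch of the idea, not the test file's code:

```python
import re

def undefined_footnotes(markdown: str):
    """Return footnote refs like [^1] that have no matching [^1]: definition."""
    refs = set(re.findall(r"\[\^(\w+)\](?!:)", markdown))   # usages, excluding definitions
    defs = set(re.findall(r"\[\^(\w+)\]:", markdown))        # definitions
    return sorted(refs - defs)

doc = "Feature X [^1] and Y [^2]\n\n[^1]: only this one is defined\n"
print(undefined_footnotes(doc))  # ['2']
```

A non-empty result is exactly the drift the sync test is meant to catch: a cell cites a footnote that the footnote block no longer defines.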
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
/bot run --disable-fail-fast
- Generate docs/source/models/supported-models.md from code via docs/source/helper.py
- Gate unsupported features per-model with warnings (kv_cache_reuse, chunked_prefill, attention_dp, overlap scheduler)
- Add CI sync test to keep committed markdown in sync

Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com>
Use the support matrix as the docs source-of-truth, regenerate supported-models.md with stable formatting, and apply minimal runtime fallbacks based on HF architectures. Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com>
Add unittest/tools/test_supported_models_sync.py to the A100 pre-merge unit test list so docs/support-matrix drift is caught before merge. Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com>
Centralize model feature fallback disabling and standardize the warning format. Mark supported-models.md as generated and add basic generator/data invariants to prevent silent drift. Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com>
Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com>
/bot run --disable-fail-fast

PR_Github #32376 [ run ] triggered by Bot. Commit:

PR_Github #32376 [ run ] completed with state
Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com>
/bot run --disable-fail-fast

PR_Github #33002 [ run ] triggered by Bot. Commit:

/bot kill

PR_Github #33019 [ kill ] triggered by Bot. Commit:

PR_Github #33019 [ kill ] completed with state

/bot run

PR_Github #33022 [ run ] triggered by Bot. Commit:
model="Qwen3Next",
huggingface="Qwen/Qwen3-Next-80B-A3B-Thinking",
),
)
Is it better to add an assertion here and ensure all architectures in MODEL_CLASS_MAPPING are covered?
It may be even better to reuse the existing register_auto_model, which is used to register a new model arch. Just adding the new fields and applying the changes for all the arches.
Thanks @Superjomn - the register_auto_model is probably the way to go, i missed it initially. let me refactor to include that.
exception_handler.register(self, 'shutdown')
atexit.register(LLM._shutdown_wrapper, weakref.ref(self))

def _apply_model_feature_fallbacks(self) -> None:
Could we move this logic to a model_validator of BaseLlmArgs / TorchLlmArgs / etc.?
We could define one validator in BaseLlmArgs for the common attributes, and then additional validators in TorchLlmArgs / TrtLlmArgs etc. for the backend-specific validators. Ideally this would mean we avoid the use of getattr entirely.
I'm mainly assuming this is possible because BaseLlmArgs has a field for model and it seems possible to derive things like the architecture from that.
you're right @anish-shanbhag - but the complication is with the timing - the stage when i'm checking is when the full model or config is already loaded and i'm leveraging that.
If I move the validation to a much earlier LLMArgs-parsing stage, I'll need to find a way to reliably obtain the hf.architecture arg, by essentially replicating a part of the model-loading code just to get the architecture metadata.
Let me prototype if that's cleaner
That's a good point - might end up adding unnecessary complexity.
Another option is to not use a model_validator but instead add something like def update_from_feature_matrix(architecture: str) to each of the LlmArgs classes, which would update the right attributes using the architecture that's passed in.
I think my main initial concern was just with the getattr / hasattr logic which could end up being harder to maintain.
status="prototype")

auto_disable_unsupported_features: bool = Field(
    default=False,
I know it's a breaking change, but I think it could be worth enabling this by default in this PR since we aren't improving the out-of-the-box functionality if this is disabled - and also, this is required to achieve the original goal of enabling things like KV cache reuse by default.
But I'll definitely defer to others here. If we leave the default here as False, I think we should at least have TODOs/tickets tracking (1) eventually enabling auto_disable_unsupported_features by default and (2) enabling other settings like chunked prefill by default
agreed - meaningful impact is only when its enabled by default - originally i was planning on doing that but deferred it. It's probably worth toggling it here itself, just a line change. Either way, i shall create tickets for this and add TODOs here.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Supported models and feature support matrices (stdlib-only)."""
nit: maybe move the stdlib-only part to a separate comment alongside the imports for clarity
SupportedModel(
    architecture="DeepseekV3ForCausalLM",
    model="DeepSeek-V3",
    huggingface="deepseek-ai/DeepSeek-V3",
How will this currently work with quantized variants like Deepseek-R1-FP4?
Feature.CHUNKED_PREFILL: FeatureCell(status=SupportStatus.YES),
Feature.MTP: FeatureCell(status=SupportStatus.NO),
Feature.EAGLE3_ONE_MODEL_ENGINE: FeatureCell(status=SupportStatus.YES),
Feature.EAGLE3_TWO_MODEL_ENGINE: FeatureCell(status=SupportStatus.YES, footnote="[^4]"),
Could we somehow directly specify the full footnote here ("Overlap scheduler isn't...") and then dynamically change it to [^4] while rendering? This might help with maintainability.
Feature.KV_CACHE_REUSE: FeatureCell(status=SupportStatus.YES),
Feature.LOGITS_POST_PROCESSOR: FeatureCell(status=SupportStatus.YES),
Feature.EPD_DISAGG_SERVING: FeatureCell(status=SupportStatus.NO),
Feature.MODALITY: FeatureCell(text="L + I + V"),
It looks like the text field is only used for this modality feature - should we split this into 3 separate features for language, image, and video (at least that's what I assume these represent)? That way they'll conform to the status approach
yep that's a good catch - this was a simple reverse-engineering of the documentation where they kept this "L,I,V" convention
}

KEY_MODEL_FOOTNOTES: Tuple[str, ...] = (
    "[^1]: Chunked Prefill for MLA can only be enabled on SM100/SM103.",
It seems like these could eventually be auto-enabled/disabled with some extra logic (though it's probably out of scope for this PR)
Should we add some TODOs here for that?
yep i shall add todos for this thanks for the catch!
)

KEY_MODEL_MATRIX: Mapping[str, Mapping[Feature, FeatureCell]] = {
    "DeepseekV3ForCausalLM": {
It looks like the current assumption is that all models under the same architecture will have the same feature support - is this always true?
hmm great question - I guess you're talking about complications arising from quantized checkpoints that share the same architecture metadata (DeepSeek-R1-0528-NVFP4-v2 vs DeepSeek-R1-0528) or simply size (llama-3.1-405b vs llama-3.1-8b). I'm not sure that's currently a problem, but I can imagine features in the future that need this distinction, unfortunately - that means we'll need to make our key-value mappings more granular.
But practically, for example, say a feature is marked as supported for DeepseekV3ForCausalLM, and is supported for DeepSeek-V3 but is not supported in DeepSeek-R1-0528-NVFP4-v2 - then the auto-fallback will miss this and we're at least as good as today.
i think the inverse rarely happens - where we mark it as unsupported and auto-disable it but a specific checkpoint/size variant supports that feature. I can't seem to think of a situation like that.
Feature.GUIDED_DECODING,
)

KEY_MODEL_MATRIX: Mapping[str, Mapping[Feature, FeatureCell]] = {
How about scattering this large table into register_auto_model? Like upgrading it to support:

```python
supported_features = [
    Feature.OVERLAP_SCHEDULER,
    ...
]

@register_auto_model("LlamaForCausalLM", features=supported_features, ...other_fields)
class LlamaForCausalLM(...):
```

The advantage is that registration and updates are already handled within each model, with model PICs tracking them. This way, we avoid duplicating model architecture entries.
cc @QiJune, what do you think?
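The decorator idea above can be prototyped with a plain registry. `register_auto_model` here is a self-contained stand-in for illustration, not TensorRT-LLM's actual implementation:

```python
# Hypothetical registry: architecture name -> (model class, supported features).
MODEL_REGISTRY = {}

def register_auto_model(architecture, features=()):
    """Class decorator that records an architecture and its feature support in one place."""
    def wrap(cls):
        MODEL_REGISTRY[architecture] = {"cls": cls, "features": frozenset(features)}
        return cls
    return wrap

@register_auto_model("LlamaForCausalLM", features=("overlap_scheduler", "chunked_prefill"))
class LlamaForCausalLM:
    pass

def supports(architecture, feature):
    """Feature lookup driven by the registry instead of a separate matrix dict."""
    entry = MODEL_REGISTRY.get(architecture)
    return entry is not None and feature in entry["features"]

print(supports("LlamaForCausalLM", "chunked_prefill"))  # True
```

Keeping the feature list next to the model class means a model owner updates support status in the same file that defines the model, which is the maintainability argument made in the comment.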
Feature.MODALITY,
)

MULTIMODAL_MATRIX: Mapping[str, Mapping[Feature, FeatureCell]] = {
This too, scatter to register_auto_model may be better.
self.args.cuda_graph_config = None

if getattr(self.args, "guided_decoding_backend", None) is not None:
    if _disable(SupportFeature.GUIDED_DECODING,
It seems like there are many more SupportFeatures than the ones listed here - should we track the remaining ones that aren't being auto-disabled somewhere?
yea - the reason they aren't auto-disabled is that they don't neatly map to a "knob" in llmapi to auto-disable, unfortunately
| return "\n".join(lines) | ||
|
|
||
|
|
||
| def render_supported_models_markdown() -> str: |
A minor suggestion—perhaps overkill—but using a Jinja2 template could help clarify the document structure. The headings and some parts of the content are fixed, and we're essentially just inserting content into predefined fields.
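For illustration, a minimal sketch of the Jinja2 approach, assuming a plain dict-shaped matrix (the template fields and function names here are made up):

```python
from jinja2 import Template

# Illustrative only: the fixed headings and table layout live in the
# template; the support matrix is just data poured into it.
DOC_TEMPLATE = Template(
    "# Supported Models\n\n"
    "| Architecture | {{ features | join(' | ') }} |\n"
    "|---|{{ '---|' * (features | length) }}\n"
    "{% for arch, cells in matrix.items() %}"
    "| {{ arch }} | {{ cells | join(' | ') }} |\n"
    "{% endfor %}"
)


def render_supported_models_markdown(matrix, features):
    # Render the markdown doc from matrix data alone.
    return DOC_TEMPLATE.render(matrix=matrix, features=features)
```

This keeps the parity test simple too: regenerate with the same data and compare strings.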
```yaml
annotation: int
default: 1000
status: prototype
auto_disable_unsupported_features:
```
How about shortening this to `ignore_unsupported_features`? Though I am not sure "auto" + "disable" equals "ignore".
PR_Github #33022 [ run ] completed with state
Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com>
Closing this in favour of #11035
Summary
This PR introduces a programmatic model support matrix (`model_support_matrix.py`) that serves as the single source of truth for model-feature compatibility in TensorRT-LLM. It also introduces an associated LLM API arg, `auto_disable_unsupported_features`, to toggle the auto-fallback behavior.

Motivation

- Some models don't support `enable_chunked_prefill=True`, even though for all the other models, enabling it as a default would result in OOTB long-context support for free.
- With auto-fallback, `enable_chunked_prefill` can be enabled (`True`) globally by default, so all models that support it can benefit from long-context support, while models that don't support it can auto-fall back.
- Similarly, `enable_block_reuse=True` (currently the default) will not break QwenNext or other models that don't support it.

Things to note
- `auto_disable_unsupported_features` is set to `False` by default to be safe and not change the behavior of existing workloads (what if some downstream workload breaks when the trtllm runtime auto-disables an llmapi flag because it wasn't supported in the model?). One can always toggle it later as a breaking change.
- Manually editing the `supported-models.md` file will break the test that checks for parity with the script.

PR items:
- Auto-fallback applies only when `auto_disable_unsupported_features=True`; else just a warning and continue with current behavior (failure).
- Generate `docs/source/models/supported-models.md` from the matrix.

Test Plan
- Fallback behavior tests (`test_model_feature_fallbacks.py`)
- Supported-models doc parity test (`test_supported_models_sync.py`)

Summary by CodeRabbit
New Features
- Added `auto_disable_unsupported_features` flag to automatically disable unsupported model features with warnings instead of errors.

Documentation
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
`/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...`

Provide a user-friendly way for developers to interact with a Jenkins server.
Run `/bot [-h|--help]` to print this help message.

See details below for each supported subcommand.
Details
`run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]`

Launch build/test pipelines. All previously running jobs will be killed.

- `--reuse-test (optional)pipeline-id` (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- `--disable-reuse-test` (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.
- `--disable-fail-fast` (OPTIONAL) : Disable fail fast on build/tests/infra failures.
- `--skip-test` (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- `--stage-list "A10-PyTorch-1, xxx"` (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- `--test-backend "pytorch, cpp"` (OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- `--only-multi-gpu-test` (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"` (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: `--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"`.
- `--detailed-log` (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- `--debug` (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the `stage-list` parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see `docs/source/reference/ci-overview.md` and the `scripts/test_to_stage_mapping.py` helper.

kill

`kill`

Kill all running builds associated with pull request.

skip

`skip --comment COMMENT`

Skip testing for latest commit on pull request. `--comment "Reason for skipping build/test"` is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

`reuse-pipeline`

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.