
[TRTLLM-10591][feat] Add programmatic model-feature support matrix & runtime auto-fallback support#10127

Closed
venkywonka wants to merge 15 commits intoNVIDIA:mainfrom
venkywonka:llmapi-model-support-matrix

Conversation


@venkywonka venkywonka commented Dec 18, 2025

Summary

This PR introduces a programmatic model support matrix (model_support_matrix.py) that serves as the single source of truth for model-feature compatibility in TensorRT-LLM. It also introduces an associated LLMAPI argument, auto_disable_unsupported_features, to toggle the auto-fallback behavior.

Motivation

  • Currently we don’t have a programmatic way to track which features are supported by which models. We expect users to configure them themselves instead of giving an optimal out-of-the-box (OOTB) config, referring them to the docs here.
  • For example:
    • QwenNext breaks with the default setting of enable_block_reuse=True, so the user has to disable it explicitly.
    • DSv3.2 and VLMs break with enable_chunked_prefill=True, even though for all other models enabling it by default would give OOTB long-context support for free.
  • This PR makes a subset of inference features that map cleanly to an LLMAPI argument (chunked prefill, KV cache reuse, CUDA graphs, etc.) programmatically tracked. That enables intelligent fallback in LLMAPI usage (when a feature is enabled by default but not supported by the specific model), as well as auto-generation of the model support matrix in the documentation.
  • In the future, this would let us make the auto-fallback behavior the default and then confidently enable certain LLMAPI flags globally that give good OOTB performance for the majority of models, without breaking the models that don't support them. Specifically,
    • enable_chunked_prefill can be set to True globally by default, so all models that support it benefit from long-context support, while models that don't auto-fall back.
    • enable_block_reuse=True (currently the default) will no longer break QwenNext or other models that don't support it.
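The shape of such a programmatic matrix can be sketched as below; the architecture strings, feature names, and enum values here are illustrative assumptions, not the PR's actual API:

```python
from enum import Enum
from typing import Optional


class SupportStatus(Enum):
    YES = "Yes"
    NO = "No"
    NA = "N/A"


# Illustrative subset only; the real entries live in model_support_matrix.py.
SUPPORT_MATRIX = {
    "Qwen3NextForCausalLM": {
        "kv_cache_reuse": SupportStatus.NO,
        "chunked_prefill": SupportStatus.YES,
    },
    "LlamaForCausalLM": {
        "kv_cache_reuse": SupportStatus.YES,
        "chunked_prefill": SupportStatus.YES,
    },
}


def get_support_status(arch: str, feature: str) -> Optional[SupportStatus]:
    """Return None for unknown architectures so callers can no-op safely."""
    return SUPPORT_MATRIX.get(arch, {}).get(feature)
```

A single source of truth like this can back both the runtime fallback and the generated docs table.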

Things to note

  • These changes are not breaking. The added param auto_disable_unsupported_features defaults to False, so existing workloads keep their current behavior (a downstream workload might otherwise break if the TRT-LLM runtime auto-disabled an LLMAPI flag the model doesn't support). Flipping the default can always be done later as a deliberate breaking change.
  • Any new model- or feature-support update to the docs now has to be made in code, and the docs are auto-generated from it. Direct modifications to the supported-models.md file will break the test that checks for parity with the script.

PR items:

  1. Optional runtime feature gating - If auto_disable_unsupported_features=True, unsupported features are automatically disabled based on the model architecture; otherwise a warning is emitted and the current behavior (failure) is kept.
  2. Documentation generation - Auto-generate docs/source/models/supported-models.md from the matrix
  3. Sync validation - CI check to ensure docs stay in sync with the programmatic matrix
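Item 1 above can be sketched as a small gating helper; the function name, return values, and lookup table are assumptions for illustration, not the PR's actual implementation:

```python
import warnings
from typing import Callable, Optional

# Hypothetical lookup table; real data comes from model_support_matrix.py.
_MATRIX = {"Qwen3NextForCausalLM": {"kv_cache_reuse": "No"}}


def apply_feature_fallback(arch: str, feature: str, enabled: bool,
                           disable: Callable[[], None],
                           auto_disable: bool) -> Optional[str]:
    """Return the action taken: None (no-op), 'disabled', or 'warned'."""
    status = _MATRIX.get(arch, {}).get(feature)
    if status is None or status == "Yes" or not enabled:
        return None  # unknown arch, supported feature, or feature already off
    if auto_disable:
        disable()  # flip the corresponding LLMAPI arg off
        warnings.warn(f"{feature} is unsupported for {arch}; auto-disabling.")
        return "disabled"
    warnings.warn(f"{feature} is unsupported for {arch}; may fail at runtime.")
    return "warned"
```

With auto_disable=False (the PR's default) only the warning path runs, preserving current behavior.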

Test Plan

  • Unit tests for all fallback scenarios (test_model_feature_fallbacks.py)
  • Docs sync validation test (test_supported_models_sync.py)

Summary by CodeRabbit

  • New Features

    • Added auto_disable_unsupported_features flag to automatically disable unsupported model features with warnings instead of errors.
    • Introduced comprehensive model support matrix tracking feature compatibility across architectures.
  • Documentation

    • Updated Supported Models documentation with auto-generated content reflecting current architecture and feature support.


Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@venkywonka
Collaborator Author

/bot run --disable-fail-fast

@venkywonka venkywonka changed the title llmapi: add programmatic model support matrix [TRTC-119][feat] Add automatic model-feature support-matrix for tracking and functional robustness Dec 18, 2025
@venkywonka venkywonka changed the title [TRTC-119][feat] Add automatic model-feature support-matrix for tracking and functional robustness [TRTC-119][feat] BREAKING: LLMAPI auto-fallback for unsupported features Dec 18, 2025
@venkywonka
Collaborator Author

/bot run --disable-fail-fast

@venkywonka venkywonka force-pushed the llmapi-model-support-matrix branch from 389910f to a276656 Compare December 23, 2025 11:13
@venkywonka
Collaborator Author

/bot run --disable-fail-fast

@venkywonka venkywonka self-assigned this Dec 23, 2025
@tensorrt-cicd
Collaborator

PR_Github #29610 [ run ] triggered by Bot. Commit: a276656

@tensorrt-cicd
Collaborator

PR_Github #29610 [ run ] completed with state SUCCESS. Commit: a276656
/LLM/main/L0_MergeRequest_PR pipeline #22777 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@venkywonka venkywonka force-pushed the llmapi-model-support-matrix branch 2 times, most recently from 7b38734 to 6f35665 Compare January 15, 2026 03:20
@venkywonka venkywonka changed the title [TRTC-119][feat] BREAKING: LLMAPI auto-fallback for unsupported features [TRTC-119][feat] LLMAPI auto-fallback for unsupported features Jan 15, 2026
@venkywonka venkywonka marked this pull request as ready for review January 15, 2026 03:23
@venkywonka venkywonka requested review from a team as code owners January 15, 2026 03:23
@coderabbitai
Contributor

coderabbitai bot commented Jan 15, 2026

📝 Walkthrough

Walkthrough

This PR introduces a model support matrix system with automatic feature fallback handling. It defines structured support data for model architectures and features, adds a fallback mechanism to automatically disable unsupported features during LLM initialization, and provides documentation generation and validation utilities. The configuration is controlled via a new auto_disable_unsupported_features flag.

Changes

Cohort / File(s) Summary
Model Support Matrix Core
tensorrt_llm/llmapi/model_support_matrix.py
New module introducing SupportStatus and Feature enums, SupportedModel and FeatureCell data classes, support matrices for key and multimodal models across architectures, and utility functions to query support status and render Markdown documentation.
LLM Feature Fallback Mechanism
tensorrt_llm/llmapi/llm.py
Added _apply_model_feature_fallbacks() method to BaseLLM that automatically disables unsupported features (KV_CACHE_REUSE, CHUNKED_PREFILL, ATTENTION_DP, etc.) based on architecture support status. Integrated into initialization and model building pipelines. Enhanced HF model config loading to respect trust_remote_code and configurable model directories.
Configuration & Arguments
tensorrt_llm/llmapi/llm_args.py, tests/unittest/api_stability/references/llm.yaml
Added new public field auto_disable_unsupported_features: bool (default False, beta status) to BaseLlmArgs to control automatic feature disabling behavior.
Documentation & Build Helpers
docs/source/conf.py, docs/source/helper.py, docs/source/models/supported-models.md
Added generate_supported_models() helper function to regenerate supported-models.md from the model support matrix module. Updated documentation build configuration and supported models markdown with generated content and formatting adjustments.
Test Coverage & Validation
tests/unittest/llmapi/test_model_feature_fallbacks.py
Comprehensive test suite validating the fallback mechanism across multiple model architectures, feature combinations, and edge cases including multimodal models and flag toggling behavior.
Sync & Invariant Testing
tests/unittest/tools/test_supported_models_sync.py, tests/integration/test_lists/test-db/l0_a100.yml
Added test suite to validate synchronization between generated markdown documentation and the programmatic support matrix, and tests for data structure invariants (arch ordering, footnote references, uniqueness).

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant BaseLLM as BaseLLM.__init__
    participant ModelLoader
    participant HFConfig as HF Model Config
    participant SupportMatrix as Support Matrix
    participant UserArgs as User Args

    Client->>BaseLLM: __init__(args)
    BaseLLM->>ModelLoader: load model
    ModelLoader->>HFConfig: get HuggingFace config
    BaseLLM->>BaseLLM: _apply_model_feature_fallbacks()
    BaseLLM->>HFConfig: read architecture
    BaseLLM->>SupportMatrix: get_status(architecture, feature)
    alt Feature Unsupported (NO/NA)
        SupportMatrix-->>BaseLLM: SupportStatus.NO/NA
        BaseLLM->>BaseLLM: check if feature enabled in args
        BaseLLM->>UserArgs: disable feature flag
        BaseLLM->>BaseLLM: emit warning
    else Feature Supported
        SupportMatrix-->>BaseLLM: SupportStatus.YES
        BaseLLM->>UserArgs: keep feature enabled
    end
    BaseLLM-->>Client: return LLM instance

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 57.14% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Title check ✅ Passed The title accurately describes the main changes: adding a programmatic model-feature support matrix and implementing runtime auto-fallback support for unsupported features.
Description check ✅ Passed PR description adequately covers motivation, implementation approach, test plan, and includes completed checklist items.





@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@tensorrt_llm/llmapi/model_support_matrix.py`:
- Around line 76-462: The MULTIMODAL_MATRIX uses the key "LlavaLlamaModel
(VILA)" which doesn't match the HF architecture name "LlavaLlamaModel" used by
BaseLLM._apply_model_feature_fallbacks (cfg.architectures[0]), causing
get_cell() to return None and skip fallbacks; fix by renaming the matrix key to
"LlavaLlamaModel" (and update MULTIMODAL_ARCH_ORDER to the same name) or add an
alias mapping lookup in the fallback path so
BaseLLM._apply_model_feature_fallbacks resolves "LlavaLlamaModel" to the
existing "LlavaLlamaModel (VILA)" entry before calling get_cell().

In `@tests/unittest/llmapi/test_model_feature_fallbacks.py`:
- Around line 254-262: The helper function mock_status has an unused parameter
(arch) which triggers Ruff ARG001; update the signature of mock_status used in
the patch around BaseLLM._apply_model_feature_fallbacks (and the similar helper
at lines ~303-311) to accept a placeholder for unused args (for example: def
mock_status(_arch, feature): or def mock_status(*_, feature): or def
mock_status(*args, feature):) so the linter no longer flags the unused argument
while preserving the side_effect behavior for get_support_status.
🧹 Nitpick comments (6)
tensorrt_llm/llmapi/llm_args.py (1)

2112-2119: Clarify “silently … with a warning” in auto_disable_unsupported_features description.

Right now it reads as both silent and warning-emitting.

Proposed fix
-        "features are silently disabled with a warning instead of causing errors.",
+        "features are disabled with a warning instead of causing errors.",
tensorrt_llm/llmapi/model_support_matrix.py (2)

486-505: Guard _render_md_table() against row/header length mismatches.

Right now a malformed row will blow up with an IndexError in _row(). A small validation makes failures actionable.

Proposed fix
 def _render_md_table(headers: Sequence[str], rows: Sequence[Sequence[str]]) -> str:
+    for row in rows:
+        if len(row) != len(headers):
+            raise ValueError(
+                f"Markdown table row has {len(row)} cells but expected {len(headers)}: {row}"
+            )
     widths = [len(h) for h in headers]
     for row in rows:
         for i, cell in enumerate(row):
             widths[i] = max(widths[i], len(cell))
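With that guard in place, a minimal width-aligned renderer might look like the following sketch (an illustration of the suggested validation, not the PR's actual implementation):

```python
from typing import Sequence


def render_md_table(headers: Sequence[str], rows: Sequence[Sequence[str]]) -> str:
    # Validate up front so malformed rows fail with an actionable message
    # instead of an IndexError deep inside the row formatter.
    for row in rows:
        if len(row) != len(headers):
            raise ValueError(
                f"row has {len(row)} cells but expected {len(headers)}: {row}")
    widths = [len(h) for h in headers]
    for row in rows:
        for i, cell in enumerate(row):
            widths[i] = max(widths[i], len(cell))

    def fmt(cells: Sequence[str]) -> str:
        return "| " + " | ".join(c.ljust(w) for c, w in zip(cells, widths)) + " |"

    separator = fmt(["-" * w for w in widths])
    return "\n".join([fmt(headers), separator] + [fmt(r) for r in rows])
```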

507-576: Use consistent heading levels in generated markdown.

You already have # Supported Models; consider making subsequent top-level sections ## to avoid multiple H1s and fix the missing space in “Matrix(Key Models)”.

Proposed fix
-    out.append("## Model-Feature Support Matrix(Key Models)")
+    out.append("## Model-Feature Support Matrix (Key Models)")
@@
-    out.append("# Multimodal Feature Support Matrix (PyTorch Backend)")
+    out.append("## Multimodal Feature Support Matrix (PyTorch Backend)")
tensorrt_llm/llmapi/llm.py (1)

258-347: Fallback logic is solid; please verify backend scope + fix Ruff B009.

  • The matrix/module is framed as PyTorch backend support, but _apply_model_feature_fallbacks() is invoked for both TRT and Torch builds—please confirm that’s intentional and statuses are valid for TRT too.
  • Ruff B009: hasattr() already guards the attribute, so getattr() isn’t buying safety here.
Proposed fix (Ruff B009)
         _disable_if_unsupported(
             SupportFeature.OVERLAP_SCHEDULER,
             enabled=hasattr(self.args, "disable_overlap_scheduler")
-            and getattr(self.args, "disable_overlap_scheduler") is False,
+            and self.args.disable_overlap_scheduler is False,
             arg_path="disable_overlap_scheduler",
-            disable=lambda: setattr(self.args, "disable_overlap_scheduler", True
-                                    ),
+            disable=lambda: setattr(self.args, "disable_overlap_scheduler", True),
         )
tests/unittest/llmapi/test_model_feature_fallbacks.py (1)

81-385: Optional: consider parametrizing the repetitive “disable/no-change” cases.

This would shrink the file and make it easier to extend when new features are added.

tests/unittest/tools/test_supported_models_sync.py (1)

28-38: Consider extracting shared module loading logic.

This function duplicates the module loading logic from docs/source/helper.py (lines 354-362). While test isolation is sometimes preferred, you could consider:

  1. Extracting the module loading into a shared utility in tensorrt_llm/llmapi/model_support_matrix.py itself
  2. Or accepting the duplication for test independence

The current approach works correctly, so this is purely a maintainability consideration.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f7de285 and 6f35665.

📒 Files selected for processing (10)
  • docs/source/conf.py
  • docs/source/helper.py
  • docs/source/models/supported-models.md
  • tensorrt_llm/llmapi/llm.py
  • tensorrt_llm/llmapi/llm_args.py
  • tensorrt_llm/llmapi/model_support_matrix.py
  • tests/integration/test_lists/test-db/l0_a100.yml
  • tests/unittest/api_stability/references/llm.yaml
  • tests/unittest/llmapi/test_model_feature_fallbacks.py
  • tests/unittest/tools/test_supported_models_sync.py
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces. Do not use tabs
Always maintain the namespace when importing Python modules, even if only one class or function from a module is used
Python filenames should use snake_case (e.g., some_file.py)
Python classes should use PascalCase (e.g., class SomeClass)
Python functions and methods should use snake_case (e.g., def my_awesome_function():)
Python local variables should use snake_case, with prefix k for variable names that start with a number (e.g., k_99th_percentile)
Python global variables should use upper snake_case with prefix G (e.g., G_MY_GLOBAL)
Python constants should use upper snake_case (e.g., MY_CONSTANT)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Use comments in Python for code within a function, or interfaces that are local to a file
Use Google-style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with the format """<type>: Description"""
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of errors possible
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block for the main logic

Files:

  • tests/unittest/llmapi/test_model_feature_fallbacks.py
  • docs/source/conf.py
  • tensorrt_llm/llmapi/llm_args.py
  • docs/source/helper.py
  • tensorrt_llm/llmapi/llm.py
  • tensorrt_llm/llmapi/model_support_matrix.py
  • tests/unittest/tools/test_supported_models_sync.py
**/*.{cpp,cc,cxx,h,hpp,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

All TensorRT-LLM source files (.cpp, .h, .cu, .py, and other source files) should contain an NVIDIA copyright header with the year of latest meaningful modification

Files:

  • tests/unittest/llmapi/test_model_feature_fallbacks.py
  • docs/source/conf.py
  • tensorrt_llm/llmapi/llm_args.py
  • docs/source/helper.py
  • tensorrt_llm/llmapi/llm.py
  • tensorrt_llm/llmapi/model_support_matrix.py
  • tests/unittest/tools/test_supported_models_sync.py
**/*.{md,rst}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

When documenting CLI commands for TensorRT-LLM tools like trtllm-serve, trtllm-bench, or trtllm-eval, prefer using --config over --extra_llm_api_options for specifying configuration files

Files:

  • docs/source/models/supported-models.md
🧠 Learnings (12)
📚 Learning: 2025-08-14T15:38:01.771Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: cpp/tensorrt_llm/pybind/thop/bindings.cpp:55-57
Timestamp: 2025-08-14T15:38:01.771Z
Learning: In TensorRT-LLM Python bindings, tensor parameter collections like mla_tensor_params and spec_decoding_tensor_params are kept as required parameters without defaults to maintain API consistency, even when it might affect backward compatibility.

Applied to files:

  • tests/unittest/api_stability/references/llm.yaml
  • tensorrt_llm/llmapi/llm_args.py
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tests/unittest/llmapi/test_model_feature_fallbacks.py
  • tests/integration/test_lists/test-db/l0_a100.yml
  • docs/source/models/supported-models.md
  • tensorrt_llm/llmapi/llm.py
  • tensorrt_llm/llmapi/model_support_matrix.py
  • tests/unittest/tools/test_supported_models_sync.py
📚 Learning: 2025-08-29T14:07:45.863Z
Learnt from: EmmaQiaoCh
Repo: NVIDIA/TensorRT-LLM PR: 7370
File: tests/unittest/trt/model_api/test_model_quantization.py:24-27
Timestamp: 2025-08-29T14:07:45.863Z
Learning: In TensorRT-LLM's CI infrastructure, pytest skip markers (pytest.mark.skip) are properly honored even when test files have __main__ blocks that call test functions directly. The testing system correctly skips tests without requiring modifications to the __main__ block execution pattern.

Applied to files:

  • tests/unittest/llmapi/test_model_feature_fallbacks.py
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.

Applied to files:

  • tests/unittest/llmapi/test_model_feature_fallbacks.py
  • tests/integration/test_lists/test-db/l0_a100.yml
  • tests/unittest/tools/test_supported_models_sync.py
📚 Learning: 2025-08-26T09:49:04.956Z
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7192
File: tests/integration/test_lists/test-db/l0_dgx_b200.yml:56-72
Timestamp: 2025-08-26T09:49:04.956Z
Learning: In TensorRT-LLM test configuration files, the test scheduling system handles wildcard matching with special rules that prevent duplicate test execution even when the same tests appear in multiple yaml files with overlapping GPU wildcards (e.g., "*b200*" and "*gb200*").

Applied to files:

  • tests/unittest/llmapi/test_model_feature_fallbacks.py
  • tests/integration/test_lists/test-db/l0_a100.yml
📚 Learning: 2025-09-17T02:48:52.732Z
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 7781
File: tests/integration/test_lists/waives.txt:313-313
Timestamp: 2025-09-17T02:48:52.732Z
Learning: In TensorRT-LLM, `tests/integration/test_lists/waives.txt` is specifically for waiving/skipping tests, while other test list files like those in `test-db/` and `qa/` directories are for different test execution contexts (pre-merge, post-merge, QA tests). The same test appearing in both waives.txt and execution list files is intentional - the test is part of test suites but will be skipped due to the waiver.

Applied to files:

  • tests/integration/test_lists/test-db/l0_a100.yml
📚 Learning: 2025-09-18T05:41:45.847Z
Learnt from: pengbowang-nv
Repo: NVIDIA/TensorRT-LLM PR: 7120
File: tensorrt_llm/llmapi/llm.py:690-697
Timestamp: 2025-09-18T05:41:45.847Z
Learning: Kimi model support is currently focused on the PyTorch backend path, with TRT path support potentially coming later.

Applied to files:

  • docs/source/models/supported-models.md
📚 Learning: 2025-08-15T06:46:53.813Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6767
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-15T06:46:53.813Z
Learning: In the TensorRT-LLM KV cache manager, SWA (Sliding Window Attention) combined with beam search is currently in a broken/non-functional state and is planned for future rework. During preparatory refactoring phases, code related to SWA+beam search may intentionally remain in a non-working state until the broader rework is completed.

Applied to files:

  • docs/source/models/supported-models.md
📚 Learning: 2025-08-14T15:43:23.107Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: tensorrt_llm/_torch/attention_backend/trtllm.py:259-262
Timestamp: 2025-08-14T15:43:23.107Z
Learning: In TensorRT-LLM's attention backend, tensor parameters in the plan() method are assigned directly without validation (dtype, device, contiguity checks). This maintains consistency across all tensor inputs and follows the pattern of trusting callers to provide correctly formatted tensors.

Applied to files:

  • docs/source/models/supported-models.md
📚 Learning: 2025-08-14T06:36:40.701Z
Learnt from: timlee0212
Repo: NVIDIA/TensorRT-LLM PR: 6886
File: tensorrt_llm/_torch/models/modeling_deepseekv3.py:0-0
Timestamp: 2025-08-14T06:36:40.701Z
Learning: In DeepSeek V3 model (tensorrt_llm/_torch/models/modeling_deepseekv3.py), the disagreement between AllReduce.__init__ guard and _compute_mlp_tp_size logic for MNNVL usage is expected by design. The AllReduce component and MLP TP-size computation intentionally use different criteria for MNNVL availability decisions.

Applied to files:

  • docs/source/models/supported-models.md
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which can contain default `cuda_graph_config` values, so `llm_args` may already have this config before the extra options processing.

Applied to files:

  • tensorrt_llm/llmapi/llm_args.py
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
Repo: NVIDIA/TensorRT-LLM PR: 6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.

Applied to files:

  • tensorrt_llm/llmapi/llm.py
  • tensorrt_llm/llmapi/model_support_matrix.py
  • tests/unittest/tools/test_supported_models_sync.py
🧬 Code graph analysis (5)
tests/unittest/llmapi/test_model_feature_fallbacks.py (2)
tensorrt_llm/llmapi/llm.py (1)
  • _apply_model_feature_fallbacks (258-347)
tensorrt_llm/llmapi/model_support_matrix.py (2)
  • SupportStatus (25-29)
  • Feature (32-49)
docs/source/conf.py (1)
docs/source/helper.py (1)
  • generate_supported_models (349-374)
docs/source/helper.py (1)
tensorrt_llm/llmapi/model_support_matrix.py (1)
  • render_supported_models_markdown (507-576)
tensorrt_llm/llmapi/llm.py (2)
tensorrt_llm/llmapi/model_support_matrix.py (3)
  • Feature (32-49)
  • SupportStatus (25-29)
  • get_status (479-483)
tensorrt_llm/llmapi/llm_utils.py (2)
  • ModelLoader (97-600)
  • load_hf_model_config (589-600)
tests/unittest/tools/test_supported_models_sync.py (1)
tensorrt_llm/llmapi/model_support_matrix.py (1)
  • render_supported_models_markdown (507-576)
🪛 Ruff (0.14.11)
tests/unittest/llmapi/test_model_feature_fallbacks.py

254-254: Unused function argument: arch

(ARG001)


303-303: Unused function argument: arch

(ARG001)

docs/source/helper.py

358-358: Avoid specifying long messages outside the exception class

(TRY003)

tensorrt_llm/llmapi/llm.py

328-328: Do not call getattr with a constant attribute value. It is not any safer than normal property access.

Replace getattr with attribute access

(B009)

tests/unittest/tools/test_supported_models_sync.py

32-32: Avoid specifying long messages outside the exception class

(TRY003)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (16)
tests/unittest/api_stability/references/llm.yaml (1)

234-237: API stability reference update looks correct for the new flag.

tensorrt_llm/llmapi/model_support_matrix.py (1)

16-74: Clean stdlib-only primitives for docs + runtime lookups.

Frozen dataclasses + FeatureCell.render() make the table rendering straightforward.

tensorrt_llm/llmapi/llm.py (3)

47-49: Imports for support matrix utilities look fine.


905-910: HF config loading: using _hf_model_dir + trust_remote_code is the right direction.


1017-1022: Good wiring: fallbacks run immediately after HF config load.

This is the right spot (before input processor / executor config consumption).

Also applies to: 1212-1217

tests/unittest/llmapi/test_model_feature_fallbacks.py (1)

31-385: Test coverage is well targeted (esp. multimodal precedence + “unknown architecture” no-ops).

tests/integration/test_lists/test-db/l0_a100.yml (1)

24-24: LGTM!

Adding the sync validation test to the pre-merge test list is appropriate. This ensures docs/model matrix consistency is verified before merging. Based on learnings, test duplication across list files is common when tests serve different CI contexts.

docs/source/conf.py (2)

180-181: LGTM!

The import extension follows the existing pattern for importing helper functions.


193-193: LGTM!

Calling generate_supported_models() during Sphinx setup ensures the model support documentation stays synchronized with the programmatic matrix. The placement after generate_llmapi() maintains logical ordering of documentation generation steps.

docs/source/models/supported-models.md (2)

2-3: LGTM!

The generation comments clearly indicate this file is auto-generated and provide instructions for regeneration. This is good practice for maintaining generated documentation.


8-45: LGTM!

The table formatting is consistent and the content aligns with the programmatic model support matrix. The standardized column widths and alignment improve readability.

docs/source/helper.py (2)

5-5: LGTM!

The sys import is needed for the sys.modules registration in the new function.


349-374: LGTM!

The implementation follows the established pattern in this file for dynamic module loading (similar to update_version()). Key aspects are well-handled:

  • sys.modules registration ensures dataclass/type evaluation works during module execution
  • Conditional write avoids unnecessary file modifications
  • Clear error messaging if module loading fails

The static analysis hint (TRY003) about the exception message is a style preference; the inline message is appropriate here for debugging purposes.

tests/unittest/tools/test_supported_models_sync.py (3)

1-15: LGTM!

The copyright header is properly formatted with the 2025 year and Apache 2.0 license.


41-56: LGTM!

The sync test correctly validates that the committed markdown matches the generated content. The .strip() comparison handles trailing whitespace differences gracefully, and the error message provides actionable guidance.


58-103: LGTM!

The invariant tests provide comprehensive validation:

  • Order lists match matrix keys (catches missing/extra rows)
  • Duplicate detection in order lists and SUPPORTED_MODELS_PYTORCH
  • Footnote reference validation ensures all used footnotes are defined

One observation: the footnote check (line 101) looks for {fn}: pattern. If footnote definitions use the format [^1]: text, this check correctly identifies if a referenced footnote (e.g., [^1]) has a corresponding definition starting with [^1]:.
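The footnote-sync check described above can be sketched as a small regex invariant. This is an illustrative stand-in, not the PR's actual test code; the function name and document snippet are hypothetical.

```python
import re

def undefined_footnotes(markdown: str) -> set:
    """Return footnote ids referenced (e.g. [^1]) but never defined ([^1]: ...)."""
    refs = set(re.findall(r"\[\^(\w+)\](?!:)", markdown))
    defs = set(re.findall(r"^\[\^(\w+)\]:", markdown, flags=re.MULTILINE))
    return refs - defs

doc = "| Yes[^1] | Yes[^2] |\n\n[^1]: Only on SM100/SM103."
assert undefined_footnotes(doc) == {"2"}
```

The negative lookahead `(?!:)` keeps definition lines from counting as references, so a definition-only document reports no missing footnotes.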


@venkywonka venkywonka marked this pull request as draft January 16, 2026 02:10
@venkywonka
Collaborator Author

/bot run --disable-fail-fast

- Generate docs/source/models/supported-models.md from code via docs/source/helper.py

- Gate unsupported features per-model with warnings (kv_cache_reuse, chunked_prefill, attention_dp, overlap scheduler)

- Add CI sync test to keep committed markdown in sync

Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com>
Use the support matrix as the docs source-of-truth, regenerate supported-models.md with stable formatting, and apply minimal runtime fallbacks based on HF architectures.

Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com>
Add unittest/tools/test_supported_models_sync.py to the A100 pre-merge unit test list so docs/support-matrix drift is caught before merge.

Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com>
Centralize model feature fallback disabling and standardize the warning format. Mark supported-models.md as generated and add basic generator/data invariants to prevent silent drift.

Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com>
Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com>
@venkywonka
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #32376 [ run ] triggered by Bot. Commit: 265cee6

@tensorrt-cicd
Collaborator

PR_Github #32376 [ run ] completed with state SUCCESS. Commit: 265cee6
/LLM/main/L0_MergeRequest_PR pipeline #25090 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com>
@venkywonka
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #33002 [ run ] triggered by Bot. Commit: aa69102

@venkywonka venkywonka changed the title [TRTC-119][feat] LLMAPI auto-fallback for unsupported features [TRTC-119][feat] Add programmatic model-feature support matrix + auto-fallback for unsupported features Jan 21, 2026
@venkywonka venkywonka changed the title [TRTC-119][feat] Add programmatic model-feature support matrix + auto-fallback for unsupported features [TRTC-119][feat] Add programmatic model-feature support matrix & runtime auto-fallback support Jan 21, 2026
@venkywonka venkywonka changed the title [TRTC-119][feat] Add programmatic model-feature support matrix & runtime auto-fallback support [TRTLLM-10591][feat] Add programmatic model-feature support matrix & runtime auto-fallback support Jan 21, 2026
Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com>
@venkywonka
Collaborator Author

/bot kill

@tensorrt-cicd
Collaborator

PR_Github #33019 [ kill ] triggered by Bot. Commit: 11d6485

@tensorrt-cicd
Collaborator

PR_Github #33019 [ kill ] completed with state SUCCESS. Commit: 11d6485
Successfully killed previous jobs for commit 11d6485

@venkywonka
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #33022 [ run ] triggered by Bot. Commit: 11d6485

model="Qwen3Next",
huggingface="Qwen/Qwen3-Next-80B-A3B-Thinking",
),
)
Collaborator

Would it be better to add an assertion here to ensure all architectures in MODEL_CLASS_MAPPING are covered?

Collaborator
@Superjomn Superjomn Jan 22, 2026

It may be even better to reuse the existing register_auto_model, which is used to register a new model arch: just add the new fields and apply the changes for all the arches.

Collaborator Author

Thanks @Superjomn - register_auto_model is probably the way to go; I missed it initially. Let me refactor to include that.
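The coverage assertion asked about above could be sketched as a small invariant check. The two dicts here are hypothetical stand-ins for TensorRT-LLM's real MODEL_CLASS_MAPPING and the PR's KEY_MODEL_MATRIX, populated with sample keys only.

```python
# Stand-in registries; the real ones live in TensorRT-LLM's model registry
# and in model_support_matrix.py respectively.
MODEL_CLASS_MAPPING = {"DeepseekV3ForCausalLM": object, "Qwen3NextForCausalLM": object}
KEY_MODEL_MATRIX = {"DeepseekV3ForCausalLM": {}, "Qwen3NextForCausalLM": {}}

def matrix_gaps() -> set:
    """Architectures that can be loaded but have no feature-matrix entry."""
    return set(MODEL_CLASS_MAPPING) - set(KEY_MODEL_MATRIX)

assert not matrix_gaps(), f"Missing matrix rows: {sorted(matrix_gaps())}"
```

Running this as a unit test would catch a newly registered architecture that was never given a matrix row.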

exception_handler.register(self, 'shutdown')
atexit.register(LLM._shutdown_wrapper, weakref.ref(self))

def _apply_model_feature_fallbacks(self) -> None:
Collaborator

Could we move this logic to a model_validator of BaseLlmArgs / TorchLlmArgs / etc.?

We could define one validator in BaseLlmArgs for the common attributes, and then additional validators in TorchLlmArgs / TrtLlmArgs etc. for the backend-specific validators. Ideally this would mean we avoid the use of getattr entirely.

I'm mainly assuming this is possible because BaseLlmArgs has a field for model and it seems possible to derive things like the architecture from that.

Collaborator Author

You're right, @anish-shanbhag - but the complication is timing: the stage where I'm checking is after the full model config is already loaded, and I'm leveraging that.
If I move the validation to the much earlier LlmArgs-parsing stage, I'll need a way to reliably obtain the HF architecture, essentially replicating part of the model-loading code just to get that metadata.
Let me prototype whether that's cleaner.

Collaborator

That's a good point - might end up adding unnecessary complexity.

Another option is to not use a model_validator but instead add something like def update_from_feature_matrix(architecture: str) to each of the LlmArgs classes, which would update the right attributes using the architecture that's passed in.

I think my main initial concern was just with the getattr / hasattr logic which could end up being harder to maintain.
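The update_from_feature_matrix idea above might be sketched as follows. The flag names, the UNSUPPORTED table, and this stripped-down TorchLlmArgs are illustrative assumptions, not the real fields or support matrix; the point is explicit attribute access instead of getattr/setattr probing.

```python
from dataclasses import dataclass

# Hypothetical per-architecture table of unsupported flags (stand-in data).
UNSUPPORTED = {"Qwen3NextForCausalLM": {"enable_block_reuse"}}

@dataclass
class TorchLlmArgs:
    enable_block_reuse: bool = True
    enable_chunked_prefill: bool = False

    def update_from_feature_matrix(self, architecture: str) -> list:
        """Disable enabled flags the matrix marks unsupported; return what changed."""
        unsupported = UNSUPPORTED.get(architecture, set())
        changed = []
        if "enable_block_reuse" in unsupported and self.enable_block_reuse:
            self.enable_block_reuse = False  # explicit access, no getattr
            changed.append("enable_block_reuse")
        if "enable_chunked_prefill" in unsupported and self.enable_chunked_prefill:
            self.enable_chunked_prefill = False
            changed.append("enable_chunked_prefill")
        return changed

args = TorchLlmArgs()
assert args.update_from_feature_matrix("Qwen3NextForCausalLM") == ["enable_block_reuse"]
assert args.enable_block_reuse is False
```

An unknown architecture becomes a no-op, matching the "unknown architecture" behavior exercised in the PR's tests.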

status="prototype")

auto_disable_unsupported_features: bool = Field(
default=False,
Collaborator

I know it's a breaking change, but I think it could be worth enabling this by default in this PR since we aren't improving the out-of-the-box functionality if this is disabled - and also, this is required to achieve the original goal of enabling things like KV cache reuse by default.

But I'll definitely defer to others here. If we leave the default here as False, I think we should at least have TODOs/tickets tracking (1) eventually enabling auto_disable_unsupported_features by default and (2) enabling other settings like chunked prefill by default

Collaborator Author
@venkywonka venkywonka Jan 22, 2026

Agreed - the meaningful impact only comes when it's enabled by default; originally I was planning on doing that but deferred it. It's probably worth toggling it in this PR itself, just a one-line change. Either way, I shall create tickets for this and add TODOs here.

# See the License for the specific language governing permissions and
# limitations under the License.

"""Supported models and feature support matrices (stdlib-only)."""
Collaborator

nit: maybe move the stdlib-only part to a separate comment alongside the imports for clarity

SupportedModel(
architecture="DeepseekV3ForCausalLM",
model="DeepSeek-V3",
huggingface="deepseek-ai/DeepSeek-V3",
Collaborator

How will this currently work with quantized variants like Deepseek-R1-FP4?

Feature.CHUNKED_PREFILL: FeatureCell(status=SupportStatus.YES),
Feature.MTP: FeatureCell(status=SupportStatus.NO),
Feature.EAGLE3_ONE_MODEL_ENGINE: FeatureCell(status=SupportStatus.YES),
Feature.EAGLE3_TWO_MODEL_ENGINE: FeatureCell(status=SupportStatus.YES, footnote="[^4]"),
Collaborator

Could we somehow directly specify the full footnote here (Overlap scheduler isn't...) and then dynamically change it to [^4] while rendering? This might help with maintainability.

Collaborator Author

agreed lol
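The suggestion above (store the full footnote sentence on the cell and assign [^n] labels at render time) might be sketched like this; this FeatureCell is a simplified stand-in for the PR's dataclass, and the footnote text is illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class FeatureCell:
    status: str
    footnote: Optional[str] = None  # full sentence, not a "[^4]" label

def assign_footnote_labels(cells) -> dict:
    """Map each distinct footnote text to a stable [^n] label, in first-use order."""
    labels = {}
    for cell in cells:
        if cell.footnote and cell.footnote not in labels:
            labels[cell.footnote] = f"[^{len(labels) + 1}]"
    return labels

cells = [
    FeatureCell("Yes"),
    FeatureCell("Yes", footnote="Chunked prefill for MLA needs SM100/SM103."),
    FeatureCell("No", footnote="Chunked prefill for MLA needs SM100/SM103."),
]
assert assign_footnote_labels(cells) == {
    "Chunked prefill for MLA needs SM100/SM103.": "[^1]"
}
```

Deduplicating by text means two cells sharing a footnote automatically share one label, so renumbering never drifts between the table and the definitions.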

Feature.KV_CACHE_REUSE: FeatureCell(status=SupportStatus.YES),
Feature.LOGITS_POST_PROCESSOR: FeatureCell(status=SupportStatus.YES),
Feature.EPD_DISAGG_SERVING: FeatureCell(status=SupportStatus.NO),
Feature.MODALITY: FeatureCell(text="L + I + V"),
Collaborator

It looks like the text field is only used for this modality feature - should we split this into 3 separate features for language, image, and video (at least that's what I assume these represent)? That way they'll conform to the status approach

Collaborator Author

Yep, that's a good catch - this was a simple reverse-engineering of the documentation, which kept this "L, I, V" convention.

}

KEY_MODEL_FOOTNOTES: Tuple[str, ...] = (
"[^1]: Chunked Prefill for MLA can only be enabled on SM100/SM103.",
Collaborator

It seems like these could eventually be auto-enabled/disabled with some extra logic (though it's probably out of scope for this PR)

Should we add some TODOs here for that?

Collaborator Author

Yep, I shall add TODOs for this - thanks for the catch!

)

KEY_MODEL_MATRIX: Mapping[str, Mapping[Feature, FeatureCell]] = {
"DeepseekV3ForCausalLM": {
Collaborator

It looks like the current assumption is that all models under the same architecture will have the same feature support - is this always true?

Collaborator Author

Hmm, great question - I guess you're talking about complications arising from quantized checkpoints that share the same architecture metadata (DeepSeek-R1-0528-NVFP4-v2 vs DeepSeek-R1-0528), or simply size variants (llama-3.1-405b vs llama-3.1-8b). I'm not sure that's currently a problem, but I can imagine future features that unfortunately need this distinction - which means we'll need to make our key-value mappings more granular.

Collaborator Author

But practically, for example, say a feature is marked as supported for DeepseekV3ForCausalLM, and is supported for DeepSeekV3 but is not supported in DeepSeek-R1-0528-NVFP4-v2 - then the auto-fallback will miss it, and we're at least as good as today.

Collaborator Author

I think the inverse rarely happens - where we mark a feature as unsupported and auto-disable it, but a specific checkpoint/size variant actually supports it. I can't think of a situation like that.

Feature.GUIDED_DECODING,
)

KEY_MODEL_MATRIX: Mapping[str, Mapping[Feature, FeatureCell]] = {
Collaborator
@Superjomn Superjomn Jan 22, 2026

How about scatter this large table to register_auto_model? Like upgrade it to support:

supported_features = [
    Feature.OVERLAP_SCHEDULER,
    ...
]

@register_auto_model("LlamaForCausalLM", features=supported_features, ...other_fields)
class LlamaForCausalLM(...):

The advantage is that registration and updates are already handled within each model, with model PICs tracking them. This way, we avoid duplicating model architecture entries.

cc @QiJune, what do you think?
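A minimal version of the decorator upgrade sketched above might look like the following. This registry and decorator are stand-ins under the assumption that register_auto_model can grow a features parameter; the real TensorRT-LLM signature may differ.

```python
# Stand-in registries; the real ones live in TensorRT-LLM's auto-model machinery.
MODEL_REGISTRY = {}
FEATURE_MATRIX = {}

def register_auto_model(architecture: str, features=()):
    """Hypothetical upgraded decorator: record the class and its supported features."""
    def wrap(cls):
        MODEL_REGISTRY[architecture] = cls
        FEATURE_MATRIX[architecture] = frozenset(features)
        return cls
    return wrap

@register_auto_model("LlamaForCausalLM", features=("OVERLAP_SCHEDULER", "KV_CACHE_REUSE"))
class LlamaForCausalLM:
    pass

assert FEATURE_MATRIX["LlamaForCausalLM"] == {"OVERLAP_SCHEDULER", "KV_CACHE_REUSE"}
```

Since registration already happens once per model file, the feature table is populated as a side effect and the central KEY_MODEL_MATRIX dict would no longer need to duplicate the architecture names.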

Feature.MODALITY,
)

MULTIMODAL_MATRIX: Mapping[str, Mapping[Feature, FeatureCell]] = {
Collaborator

This too, scatter to register_auto_model may be better.

self.args.cuda_graph_config = None

if getattr(self.args, "guided_decoding_backend", None) is not None:
if _disable(SupportFeature.GUIDED_DECODING,
Collaborator

It seems like there are many more SupportFeatures than the ones listed here - should we track the remaining ones that aren't being auto-disabled somewhere?

Collaborator Author

Yeah - unfortunately, the reason they aren't auto-disabled is that they don't neatly map to a "knob" in the LLM API that can be auto-disabled.

return "\n".join(lines)


def render_supported_models_markdown() -> str:
Collaborator

A minor suggestion—perhaps overkill—but using a Jinja2 template could help clarify the document structure. The headings and some parts of the content are fixed, and we're essentially just inserting content into predefined fields.
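The templating split can be shown even with the stdlib's string.Template (Jinja2 would add loops and conditionals, but the structure-vs-content separation is the same). The page skeleton and table content here are illustrative, not the PR's actual layout.

```python
from string import Template

# Fixed page skeleton with a single insertion point for the generated table.
PAGE = Template(
    "<!-- GENERATED FILE: regenerate via docs/source/helper.py -->\n"
    "# Supported Models\n\n"
    "## Models (PyTorch Backend)\n\n"
    "$table\n"
)

def render_page(table_markdown: str) -> str:
    return PAGE.substitute(table=table_markdown)

page = render_page("| Architecture | Model |\n|---|---|\n| LlamaForCausalLM | Llama 3 |")
assert page.startswith("<!-- GENERATED FILE")
assert "LlamaForCausalLM" in page
```

Keeping the skeleton in one template (rather than scattered "\n".join calls) makes it obvious which parts of supported-models.md are fixed and which are generated.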

annotation: int
default: 1000
status: prototype
auto_disable_unsupported_features:
Collaborator

How about shortening this to ignore_unsupported_features? Though I'm not sure whether "auto" + "disable" = "ignore".

@QiJune QiJune requested a review from leslie-fang25 January 22, 2026 01:47
@tensorrt-cicd
Copy link
Collaborator

PR_Github #33022 [ run ] completed with state SUCCESS. Commit: 11d6485
/LLM/main/L0_MergeRequest_PR pipeline #25530 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@venkywonka venkywonka marked this pull request as draft January 22, 2026 22:57
Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com>
Signed-off-by: Venky Ganesh <23023424+venkywonka@users.noreply.github.com>
@venkywonka
Copy link
Collaborator Author

Closing this in favour of #11035

@venkywonka venkywonka closed this Feb 3, 2026