Pull requests: NVIDIA/TransformerEngine

ONNX: Fix FP8 quantization for the second MLP in LayerNormMLP
#2577 opened Jan 8, 2026 by victoroliv2 · 1 of 13 tasks

Debug doc generation
#2576 opened Jan 7, 2026 by timmoon10 · 9 of 13 tasks · labels: bug, documentation

fix(examples): te_llama compatibility with transformers >= 4.57
#2572 opened Jan 7, 2026 by sbhavani · 6 of 13 tasks

Update THD sink attention logic for cudnn >=9.18.0
#2568 opened Jan 6, 2026 by cuichenx · 13 tasks

CPU Optimizations for FP8
#2559 opened Jan 5, 2026 by vthumbe1503 · 13 tasks · label: cpu_overhead

[PyTorch] Remove unnecessary save of weights
#2549 opened Dec 30, 2025 by pggPL (Draft) · 8 of 13 tasks

[PyTorch] Add Casting-Free FP8-Flow-MoE Blockwise Optimizations
#2544 opened Dec 26, 2025 by xiaoxi-wangfj · 4 of 13 tasks · label: community-contribution

[PyT] Plumbing correct bias dims from TE to cudnn attention
#2537 opened Dec 20, 2025 by KshitijLakhani · 4 of 11 tasks · labels: bug, pytorch

[PyTorch] Bunch of fixes for cpu offloading
#2535 opened Dec 19, 2025 by pggPL (Draft) · 13 tasks

Documentation for cpu offloading
#2520 opened Dec 16, 2025 by pggPL · 8 of 13 tasks · label: documentation

[JAX] HLO FFI tests
#2517 opened Dec 16, 2025 by jberchtold-nvidia · 7 of 13 tasks · label: jax

Cpu optimizations v2
#2514 opened Dec 12, 2025 by vthumbe1503 (Draft) · 13 tasks · label: cpu_overhead

[Common] Optimize fused RoPE kernel performance
#2508 opened Dec 11, 2025 by yaox12 (Draft) · 13 tasks · label: performance

[common] Add support for cuBLASLt GEMM for GroupedTensor MoE
#2502 opened Dec 10, 2025 by pggPL · 8 tasks done

Add logic for block-scaled tensors with GEMM swizzled scales
#2486 opened Dec 6, 2025 by timmoon10 · 14 of 19 tasks · labels: enhancement, MoE, performance, refactor

Add support for SWA (left, right) with FusedAttention
#2477 opened Dec 4, 2025 by sudhakarsingh27 · 22 of 28 tasks · milestone: 2.12.0

[JAX] Einsum with quantization
#2474 opened Dec 3, 2025 by phu0ngng (Draft) · 13 tasks