forked from pytorch/pytorch
[AUTOGENERATED] rocm7.1_internal_testing_IFU_2025-09-24 #2678
Merged: pragupta merged 694 commits into rocm7.1_internal_testing from rocm7.1_internal_testing_IFU_2025-09-24 on Oct 1, 2025.
Commits (694):
76a841f
Port OpSchema.__post_init__ and OpSchema._recompute_comparison_key to…
swolchok 46c647d
[vllm hash update] update the pinned vllm hash (#163304)
pytorchupdatebot 3016616
[BE] Update Python min version to 3.10 (#162310)
malfet c91f59b
Fix performance regression when indexing by Numpy arrays (#163280)
ezyang ce5637b
Fix invalid indices bug for max_unpool2d/3d on MPS (#163036)
can-gaa-hou 5780478
Revert "[BE] Update Python min version to 3.10 (#162310)"
pytorchmergebot 1708120
Revert "[CI] Move Windows build/tests to Python-3.10 (#162862)"
pytorchmergebot e0bcd58
[MTIA] Add MTIA dispatch for kernel foreach_maximum(Add D80022242 bac…
DoubleBiao 1302637
Revert "[dynamo][guards] Do not construct entire framelocals dict for…
pytorchmergebot 32ad29b
Revert "[dynamo][guards] Fail on an unknown framelocals to dict conve…
pytorchmergebot 0815091
[CP][BE] Cosmetic refactors for CP code base (#163115)
fegin ab5086a
[WOQ] Add XPU kernel for _weight_int8pack_mm (#160938)
xiaowangintel 33e6c5a
[Dependabot] Update(deps): Bump transformers from 4.54.0 to 4.56.0 in…
dependabot[bot] bee362c
[ROCm][SymmMem] Fix skip condition for PLATFORM_SUPPORTS_SYMM_MEM (#1…
pragupta 264e7f6
[ROCm] Fix mx fp8 and fp4 code after scaling refactor changes. (#163127)
jagadish-amd f8f230a
[FP8][cuBLAS][H100] only test fp32 outputs for rowwise `_scaled_mm` o…
eqy e631d76
[Flex] Changing how bwd configs are setup and updating default b200 c…
drisspg 4967ad8
[Graph Partition] improve custom op output alias (#163227)
BoyuanFeng 3e663ce
[Inductor][Triton][FP8] Add a Blackwell-specific scaled persistent + …
jananisriram 2984bfe
[ez][CI] Run vllm workflow on vllm pin updates (#163353)
clee2000 a3b68c7
Revert "Fix boxcox to return same result for same input in one batch …
pytorchmergebot 607469b
Revert "[ROCm] Bump FBGEMM commit to avoid CK errors (#162590)"
pytorchmergebot a0d2d84
Handling overflow for long int overflow for the product of kernel_hei…
arkadip-maitra b8c5ec5
[CD] Simplify NVIDIA driver installation step (#163349)
malfet 52dd7a8
Move ROCM trunk wheel builds to 3.10 (#163339)
malfet 03f34fd
Add explicit typing to nn.Module.__init__() parameters (#157389)
dsashidh bc7b17a
Realize LazyVariableTracker before raising exception (#163350)
guilhermeleobas 979e10f
[Bugfix] Match eager stride semantics for cloned tensors with preserv…
Lucaskabela a273475
[BE] Introduce `CONDA_ROOT_DIR` (#163341)
malfet 4a160da
[CUDA] revert PR 130472 (#162950)
thenumberouscode 2a308c7
Revert "Improve device info with new flops and bandwidth formula base…
pytorchmergebot f8fb437
[SymmMem] Barrier on team instead of world (#163298)
kwen2501 7130b17
[SymmMem] Fix memory allocation hold-up (#162680)
kwen2501 ba3c2c8
SDP Backend function fix (#161169)
ahkush 466122b
[inductor] avoid creating LoopBody twice (#162101)
shunting314 e88460f
[Inductor] don't call sympy_str when not needed (#162126)
shunting314 248156e
[Inductor] do loop reordering in a separate final round (#162355)
shunting314 df9a482
Bugfix for doing negative padding (#161639)
skpark-rh 9f8a311
[Inductor][Intel GPU] Save `threads_per_warp` from tirton compiled ke…
etaf fab8455
Don't use declarations in global namespace in stable headers (#163352)
mikaylagawarecki e6a9db5
Add analytics ID to cpp docs (#163370)
svekars 9b5ec0f
Use computed buffer sizes of torch for cusparseLt metadata (#163125)
aartbik 0098e56
[CI] Move Windows build/tests to Python-3.10 (#162862)
malfet ee7bdd8
[graph partition] Add way to register custom rule (#163310)
zou3519 093f064
[CP][BE] Correct an incorrect docstring (#163131)
fegin 8225a26
[dynamo] Fix issue with namedtuple slicing (#163351)
jansel bfe9e60
Simplify PrecompileContext to no longer be a CacheArtifactManager (#1…
jamesjwu a1df0b4
Lazy import to avoid circular import issue for DebugMode (#163381)
SherlockNoMad a31acf3
Clean up obsoleted vLLM tests (#163383)
huydhn e56dd5d
[Inductor-FX] Support torch.cond (#163234)
blaine-rister a87aea0
Update RandomSampler docstring. data_source must be Sized not Dataset…
dsashidh 0b5a99b
remove duplicate import for defaultdict (#160519)
parsshar-RH df5d6d5
[inductor][triton heuristics] move allow tf32 out of config params (#…
coconutruben 0ee331b
[inductor][choices] move extra kwargs out of get_template_configs (#1…
coconutruben d55c9d5
[CP] Fix cuDNN CP LSE dimension bug (#163231)
fegin 5050cfa
[Opitmus] fix fp8 activation quatization for duplicates forward outpu…
mengluy0125 eb11d17
[Caffe2] Improve SVE batch box cox by 2% (#163360)
Nicoshev f9074c7
[STABLE ABI] Add copy_ operation. (#161895)
pearu d70c0ba
minimize graph capture output (#162211)
avikchaudhuri 3938175
[1/n] Support cpu_tensor.to("cuda:0") in FakeTensorMode on cuda-less …
SherlockNoMad 9e3725e
make fullgraph_capture work on mod, args, kwargs (#162849)
avikchaudhuri 8e3fd3d
[AI Codemod][DevmatePerfOptimizationVectorReallocation] fbcode/caffe2…
yfeldblum e37b600
[CUDA][cuBLAS][FP8] Forward-fix #162022 (#163354)
eqy 2887f3f
[BE] Slight improvements to documentation in python_dispatch (#162963)
ezyang 97eb7a2
torchdim Python port (#160236)
ezyang 5b386ee
[vllm hash update] update the pinned vllm hash (#163392)
pytorchupdatebot 1ca9445
[BE][Ez]: Prevent copies of std::vector in CUDA ForeachOps (#163416)
Skylion007 f591bb5
Remove data_source argument from Sampler (#163134)
cyyever 4a96a6f
[Docs] Fix indentations in cond.md (#156147)
windsonsea 1faf636
Delete functorch C extension entirely. (#163340)
ezyang 9ba9180
Add api info for torch._C._nn.pyi (#162707)
orangeH25 d8cbbc0
[Easy][AMP] Refactor the AMP logic for getting dtype (#162796)
fffrog 5d8a226
[SymmMem] Promote `@requires_nvshmem` instead of `enable_triton` (#16…
kwen2501 f34744d
[inductor] bugfix: keep WeakDeps (WAR deps) during fusion (#162316)
v0i0 51152ef
Remove autograd code for Python < 3.9 (#163313)
cyyever 5599f48
Fully native DTensor.__new__ (#162508)
swolchok 4d3d32f
Add torchfuzz initial impl. (#163417)
laithsakka 8b14f43
[torch] DRY a couple of lines in unpickler (#163447)
yfeldblum 6ac2b3a
[BE] Adding aliases for CUDA and XPU API documentation (#162984)
jiannanWang 8a281d7
[submodule] Bump libfmt to 12.0.0 (#163441)
cyyever 0b59492
[export] Fix wrap_with_set_grad_enabled retracing (#163295)
angelayi 01f927e
Remove workarounds for Python 3.6 (#163440)
cyyever 281bb56
Enable half precision types on test_conv_cudnn_nhwc_support (#163444)
cyyever 3a7db34
Revert "[SymmMem] Promote `@requires_nvshmem` instead of `enable_trit…
pytorchmergebot f007894
Revert "[RELAND] Always build USE_DISTRIBUTED (#160449) and Make dist…
pytorchmergebot ae5be03
Revert "Delete functorch C extension entirely. (#163340)"
pytorchmergebot edafc90
Revert "[BE] Make PyObjectSlot use a global PyInterpreter (#162659)"
pytorchmergebot 96a3afb
Simplify BFLOAT16_AVAILABLE (#163445)
cyyever 60b4791
[MPS] Fix compile linalg inv (#163452)
Isalia20 9f5a644
[BE] Update Python min version to 3.10 (#162310)
malfet 10adeb9
Revert "[BE] Update Python min version to 3.10 (#162310)"
pytorchmergebot 509c4e8
Update cutlass version for fbcode (#163091)
henrylhtsang eaac218
[ROCm] Fix environment variable AOTRITON_INSTALLED_PREFIX (#163373)
xinyazhang e310cc5
Update fbgemm submodule (#163411)
cthi 9ca183e
switch from stack based to graph based aproach (#163459)
laithsakka 06fe5b9
[AOTI] fix TestAOTInductorPackage temp file locked handler. (#163499)
xuhancn 5e7be98
[BE] Update Python min version to 3.10 (#162310)
malfet 281f8f4
Combine strong and weak refcounts in intrusive_ptr in a single refcou…
mcfi d279a6a
ci: Add a way to lint all files in a PR from label (#163525)
seemethere bec967e
Remove C++ and test branches for CUDA<12 (#163443)
cyyever 3be9c86
[opaque obj] Initial OpaqueObject (#162660)
angelayi dd30667
[opaque_obj] Add set_payload + docs (#163276)
angelayi 4941719
Enable logging for absolute memory estimation (#158799)
basilwong 7e97811
Fix lint (#163542)
angelayi 1818c36
[Fix] Restrict stride normalization to 1D tensors on export (#163282)
Kathryn-cat eaa613b
Revert "[opaque_obj] Add set_payload + docs (#163276)"
pytorchmergebot bf28990
Add support for NestedTensor share_memory_ (#162272)
adabeyta d150484
[opaque_obj] Add set_payload + docs (#163276)
angelayi 6f9aef5
[2/n] Support module.to("cuda:0") in FakeTensorMode on cuda-less mach…
SherlockNoMad d008670
[triton] update 3.5 pin to bbb06c0334a6772b92d24bde54956e675c8c6604 (…
davidberard98 fd785b1
Add NestedTensor dispatch for _is_any_true/_is_all_true (#162096)
adabeyta e065d35
[BE]: Add a few more missing move from return indices (#163456)
Skylion007 46e1b7d
remove allow-untyped-defs from ./torch/utils/data/datapipes/iter/file…
bobrenjc93 cf28ab2
remove allow-untyped-defs from ./torch/ao/quantization/pt2e/duplicate…
bobrenjc93 02da475
Triton template IMA reads on B200 (#163460)
drisspg 8abc2af
[STABLE ABI] Add clone method to torch::stable::Tensor (#161896)
pearu 8e62d01
Add dynamic shapes doc (#159428)
svekars 4027e97
[BE] Delete `skipIfMPSOnMacOS13` (#163515)
malfet 09cb34c
[RELAND] Always build USE_DISTRIBUTED (#160449) and Make distributed …
ezyang e558f7a
[vllm hash update] update the pinned vllm hash (#163463)
pytorchupdatebot da05aa7
[BE] Use `output_t` directly (#163518)
malfet 0256f91
[BUG] MaxUnpool2d/3d should check output dim before accessing its ele…
can-gaa-hou 2b03663
Allow add_persistent_r_block to scale up rblock up to a limit (#162296)
PaulZhang12 7ea8998
Better decomp for torch.eye (#163386)
jansel 36c2a13
[inductor] Fix bug where viewed outputs get padded (#163398)
jansel a1bd924
[inductor] Fallback on strided complex add (#163387)
jansel c8fd2b4
[inductor] Skip test_baddmm on XPU (#163414)
jansel 4fc271e
[inductor] Don't require_dense for grid_sampler_2d_backward (#163415)
jansel e0cbab4
[Inductor] avoid CUDA__equal when constant tensors are from different…
cp2923 b756b58
Improve fake tensor leakage detection in export by not relying on gc …
tugsbayasgalan 60c2bde
Replace Literal[None] with None in typing (#163489)
cyyever 33daaad
dynamo: Handle objects in graph that do not support weakref (#163168)
c00w fa15fb0
[EZ] Remove XLA from unstable.yml (#163564)
malfet 8da0086
Remove outdated commented CMake code (#163442)
cyyever 68e75be
Update pytorch_sphinx_theme2 to latest hash (#163269)
svekars 539e84e
[precompile] Add option to disable guard check on aot-compiled functi…
zhxchen17 3ef1bef
[sdpa] make sure to recompile if alignment is different than before (…
ColinPeppler 2c7959e
[ignore][codex-test] Add typing to simple library registry (#161367)
bobrenjc93 8f30a8d
[AOTInductor] Add grid information for Triton Kernels (#160131)
muchulee8 e9300b2
remove allow-untyped-defs from ./torch/onnx/_internal/torchscript_exp…
bobrenjc93 6a48f57
[1/N] Remove 'type: ignore' suppressions (#163468)
cyyever 447b8fc
[2/N] Use filesystem in inductor (#163465)
cyyever 27164b6
Add fake_impl for _native_multi_head_attention (#163167)
ydwu4 0b75a16
[torchfuzz] Encapsulate fuzzing and codegen logic into ops (#163547)
bobrenjc93 95ac7d7
Rename to _debug_mode.py to make it private (#163534)
SherlockNoMad fcd79d5
[vllm hash update] update the pinned vllm hash (#163590)
pytorchupdatebot 0e12238
[torchfuzz] remove supports_variable_inputs for now (#163553)
bobrenjc93 bb5be56
[torch][cuda][device_limits] Library for querying device hardware lim…
valentinandrei e3b392b
[BC breaking] Remove deprecated imports for torch.utils.data.datapipe…
cyyever d3a1345
Use functools.cache on has_efa (#163439)
cyyever 19b754d
Revert "Update cutlass version for fbcode (#163091)"
pytorchmergebot 08c5efd
[torchfuzz] cache operators (#163554)
bobrenjc93 d5e51d3
[torchfuzz] decompose -> fuzz_inputs_specs (#163555)
bobrenjc93 1545bb1
[torchfuzz] shuffle compatible ops (#163556)
bobrenjc93 309fe03
[torchfuzz] remove unneeded try catch (#163557)
bobrenjc93 45d9dcc
Update Kineto Submodule (#162222)
sraikund16 375f3e3
[OpenReg][Docs] Correct docs about `openreg` usage example. (#163235)
KarhouTam b426ba1
[torchfuzz] introduce tensor and scalar pointwise ops (#163558)
bobrenjc93 8d81564
[pt2][cache] rework cache for true generic usage + better tests (#163…
nmacchioni 5d749ce
Remove test conditions for CUDA<12 (#163495)
cyyever 3c64b2a
CUDA 13.0 Warning update for supported architectures (#163585)
atalman bda9ab2
[inductor] fix as_strided lowering with .view(dtype) inputs (#163319)
xmfan 1a42656
[Flex attention] Fix flex attention head broadcast (#163426)
Isalia20 aff76c0
Revert "Add fake_impl for _native_multi_head_attention (#163167)"
pytorchmergebot e05c9c0
[ROCm][CI] cudagraph trees ut fixes (#163592)
jeffdaily 4264fd3
Add basic tests for torch.distributed.tensor._utils.compute_global_te…
swolchok 518c320
[inductor] libdevice.sqrt => tl.sqrt_rn (#163419)
jansel ed84e80
[inductor] Freeze layouts in FlexAttention (#163434)
jansel 9c4d9f9
[inductor] Support out_dtype arg to matmul (#163393)
jansel 6ef7487
[dynamo] Fix TorchFunctionMode handling with get_rng_state (#163412)
jansel 49e7b2f
[inductor] Fix error from custom CUDA allocators (#163422)
jansel 720a7b2
[export] Remove .contiguous() when saving weights to raw bytes (#163587)
yiming0416 0f67407
Large tests failing on bfloat16 (#163537)
drisspg b3cf5c7
Skip on sm100 later since Tests are non determinisitic (#163552)
drisspg 5f0c7cb
Add B200 smoke test (#159494)
drisspg ebddbe7
[ROCm][CI] skip test_sparse_triangular_solve (#163651)
jeffdaily 6e5dddb
Use accelerator API in common_dtensor (#163498)
dilililiwhy 221ac81
Revert "[precompile] Add option to disable guard check on aot-compile…
pytorchmergebot 134dfbe
[DCP] DTensor slice dequantization with proper block alignment (#163532)
saumishr fde929c
[AOTI] Fix model_package_loader get_cpp_compile_command (#163561)
xuhancn 2aadcea
[ROCm] Improve perf for elementwise broadcast with mixed dtype (#163562)
jerrymannil 649ceda
[export] handling NamedTuple inputs (#162959)
Raman-RH ca35dc2
[EZ] Fix UP041 violations (#163648)
malfet 0696a4b
[EZ] Perma-ignore UP038 (#163649)
malfet 8e6b0c7
[Inductor] Remove `no_type_check` annotation on properties (#163570)
blaine-rister bcb893a
[ROCm] Build FBGEMM_GENAI for gfx942 only (#162648)
jithunnair-amd 22c5e8c
Add num_store to inductor_meta and use it to scale persistent reducti…
PaulZhang12 2a9745d
[multi-kernel] shape-similarity kernel selection (#163090)
pianpwk fc84743
Implement CUDA stream protocol (#163614)
msaroufim e671dcc
Update tests to check for more robust pattern (#163107)
tugsbayasgalan 5ca563e
symintify fill_diagonol_ (#163485)
bobrenjc93 b182365
[ez] use list initializer syntax in fill_diagonal_ (#163607)
bobrenjc93 8c8416b
Update pytorch.org links in docs/conf.py (#163682)
svekars 29af258
Less aggressive persistent reduction when it could induce large maski…
eellison c3d9f08
[torchfuzz] introduce multi process fuzzer (#163560)
bobrenjc93 c63e417
use reduction hint for aggressive rblock (#163371)
eellison b879ef7
[ROCm][CI] skip TestCudaPrimaryCtx.test_set_device_0 (#163693)
jeffdaily 2014908
[MPS] Compute `offset2bag/bag_size/max_indices` in `_embedding_bag` (…
kurtamohler 6b5ad5f
[Kineto] Add list of string parsing for profiler (#163593)
muchulee8 f3f67ff
Fix warn message (#163578)
drisspg f9fa138
[BE] Delete all pre py-3.10 checks (#163653)
malfet ee75c3d
Support for amin, amax, and aminmax (#163669)
srsuryadev eb3fbf5
[inductor] in emulate_precision_casts, disable fma fusion in triton (…
v0i0 4535254
[3/N] Use std::filesystem in inductor (#163632)
cyyever dc93529
[Triton] [Inductor] Restrict subprocess autotuning to just Triton (#1…
njriasan 1e754d5
docs and optional kwargs for full graph capture (#163550)
avikchaudhuri be6c127
[AOTI] Pass comments from metadata to the autotune block (#163600)
desertfire e2ce79e
[Flex] Fix silent correctness w/ backpropping grads (#163677)
drisspg c261c71
Simplify _compute_local_shape_and_global_offset and make it SPMD. (#1…
ezyang ca512af
[inductor] Fix issue with scalar arg handling (#163481)
jansel 6fa9727
[inductor] Fix bugs in emulate_precision_casts (#163520)
jansel d746b98
[inductor] Fix divmod error in decomp (#163482)
jansel 42e9902
cd: Move arm64 to linux.arm64.r7g.12xlarge.memory (#163681)
seemethere 6f1d962
[vllm hash update] update the pinned vllm hash (#163711)
pytorchupdatebot 20eeb54
Add api info for torch._C._nn.pyi (#162936)
orangeH25 124dd36
[hop] support local_map + SAC (#163322)
xmfan 0390798
[Triton] [Inductor] Enable Epilogue Subtiling in the blackwell ws tem…
njriasan a8e9ed2
[inductor] turn on loaf (for oss) by default (#162030)
shunting314 f68de58
[Inductor-FX] Support symbol and dynamic scalar graph inputs and outp…
blaine-rister 2c5a3d7
Delete functorch C extension entirely. (#163340)
ezyang dad54ca
Add mistral/gpt-oss to benchmarks (#163565)
angelayi 11a231e
[c10d] P2P tensors must be dense (#163719)
kwen2501 bf0747c
[Code Clean] Remove deadcodes about Python3.9 [1/N] (#163626)
fffrog 0bca779
[Code Clean] Remove deadcodes about Python3.9 [2/N] (#163627)
fffrog 33aabdd
[Code Clean] Remove deadcodes about Python3.9 [3/N] (#163629)
fffrog ec0cd81
[Code Clean] Remove deadcodes about Python3.9 [4/N] (#163643)
fffrog 6f34cc0
[Code Clean] Remove deadcodes about Python3.9 [5/N] (#163644)
fffrog a635505
[Code Clean] Remove deadcodes about Python3.9 [6/N] (#163645)
fffrog 2390d34
[Code Clean] Remove deadcodes about Python3.9 [7/N] (#163646)
fffrog 3e1b1a3
Revert "[inductor] Fix issue with scalar arg handling" (#163737)
jansel 207f104
[Triton] [Inductor] Set default configs for Blackwell Matmul Template…
njriasan b66aa1a
[ARM] Add test_memory_profiler to aarch64 tests (#145260)
robert-hardwick 141fc72
[CD] CUDA 13.0 fix preload logic to include nvidia/cu13/lib/ (#163661)
atalman 3b73841
update test_quantization tests to run weekly (#163077)
liangel-02 9d0d98a
Use cuda nvrtc so file based on cuda version used by torch (#163642)
atalman 5d0f639
Make `Tensor.__dlpack__(stream=None)` capture-safe during CUDA Graph …
eee4017 4c2c401
Record redistribute_local_tensor in DebugMode (#163704)
SherlockNoMad 9341ede
Revert to old behaviour of not padding strides if shape or stride is …
nandesuka 768361e
Add less warps config to inner reductions (#162447)
PaulZhang12 c414f75
[WOQ][Inductor] Enable CUDA coverage for _weight_int8pack_mm (#163461)
bbeckca 0456b23
[AOTI] Add verbose error information for extract file (#163718)
xuhancn 71eec6a
[dist] handle discontiguous allgather/reducescatter inputs (#163712)
ngimel 0dce2af
[ROCm][CI] adjust tf32 tolerance for test_compile_kernel_advanced (#1…
jeffdaily 90a2825
Add `inference_mode` hint message to use `eval` with inference. (#163…
zeshengzong 1495b35
Remove Python 3.9 for Triton builds (#163778)
atalman b40191b
Merge remote-tracking branch 'upstream/main' into rocm7.1_internal_te…
github-actions[bot] f3e8213
Fix merge conflicts
pragupta 0ad8381
Address review comments wrt triton_heuristics and install_rocm
pragupta 63fcd9b
update related_commits
pragupta 77f4534
Fix more conflicts with triton_heuristics.py
pragupta
@@ -1 +1 @@
-56392aa978594cc155fa8af48cd949f5b5f1823a
+e0dda9059d082537cee36be6c5e4fe3b18c880c0
@@ -1,2 +1,2 @@
-transformers==4.54.0
+transformers==4.56.0
 soxr==0.5.0
@@ -0,0 +1 @@
+7fe50dc3da2069d6645d9deb8c017a876472a977
@pragupta This change was supposed to be temporary as per f1ad49a (cc @pruthvistony).
Can we please ascertain whether it is really needed for ROCm 7.1 mainline?
cc @jeffdaily to comment on whether this is needed for the ROCm 7.0 CI upstream enablement.
The ROCm 7 CI upgrade doesn't have this line. What was this fixing?
Removed.