
Support rotation on beta cuda #1235

Merged
mollyxu merged 8 commits into meta-pytorch:main from mollyxu:beta-cuda-rot90 on Feb 13, 2026

Conversation

@mollyxu (Contributor) commented Feb 11, 2026

Add rotation support to BetaCudaDeviceInterface using torch::rot90.

Parametrize over cpu and cuda:beta in test_rotation_applied_to_frames. (Note: since we plan to move to the beta CUDA interface eventually, we've decided not to support rotation on CUDA with the FFmpeg backend for now.)
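For context, a minimal sketch of the rot90-based approach; the enum and helper name here are hypothetical stand-ins, not the PR's actual code:

#include <torch/torch.h>

// Hypothetical names for illustration; the PR's actual enum and helper
// may differ.
enum class Rotation { NONE, CW90, CW180, CW270 };

torch::Tensor rotateHWCFrame(const torch::Tensor& frame, Rotation rotation) {
  // torch::rot90 rotates counter-clockwise for positive k, so a 90-degree
  // clockwise rotation corresponds to k = 3 (equivalently k = -1).
  int64_t k = 0;
  switch (rotation) {
    case Rotation::NONE:
      return frame;
    case Rotation::CW90:
      k = 3;
      break;
    case Rotation::CW180:
      k = 2;
      break;
    case Rotation::CW270:
      k = 1;
      break;
  }
  // Rotate over the H and W dims (0 and 1 of an HWC tensor). rot90 returns
  // a view, so make the result contiguous before handing it downstream.
  return torch::rot90(frame, k, {0, 1}).contiguous();
}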

@pytorch-bot bot commented Feb 11, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/meta-pytorch/torchcodec/1235

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 298c9ef with merge base 2d1b5c6:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla bot added the CLA Signed label Feb 11, 2026
@mollyxu changed the title from [wip] beta cuda rot90 to Support rotation on BetaCuda Feb 11, 2026
@mollyxu marked this pull request as ready for review February 11, 2026 19:06
@mollyxu changed the title from Support rotation on BetaCuda to Support rotation on beta cuda Feb 11, 2026
// Apply rotation using torch::rot90 on the H and W dims of our HWC tensor.
// torch::rot90 returns a view, so we need to make it contiguous.
frameOutput.data = torch::rot90(frameOutput.data, k, {0, 1}).contiguous();
Contributor commented:

The code structure makes it a bit hard to follow what preAllocatedOutputTensor is storing; perhaps we could make it more explicit when changes to the preallocated tensor are needed:

if rotation_ == Rotation::NONE:
  convertNV12FrameToRGB() # regular call
else:
  convertNV12FrameToRGB() # call without preallocated tensor
  applyRotation()

Ideally we could handle the rotation switch statement and call to torch::rot90 in a separate function.

There might be nuances to the logic I am missing that make this structure difficult, please double check me on this.

Contributor commented:

Agreed, it might help to have an applyRotation() helper.
I'm not sure about putting convertNV12FrameToRGB() inside if/else blocks; it might be tricky, since validatePreAllocatedTensorShape also depends on preAllocatedOutputTensor having the correct size (i.e. we'd still need the savedPreAllocatedOutputTensor hack above, I think).

Contributor commented:

I'm still a little hesitant about this approach: when the hack sets preAllocatedOutputTensor = std::nullopt, validatePreAllocatedTensorShape does not actually do any validation. Excluding the validation call in the else block would be more explicit about that.

This is minor though, and could be addressed when we implement the less hacky solution, which I believe will use the NPP functions?
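For illustration, a hedged fragment of the more explicit structure being discussed, reusing the hypothetical rotateHWCFrame helper sketched earlier. All names and argument lists here are stand-ins drawn from this thread, not the merged code:

// Sketch only: convertNV12FrameToRGB, validatePreAllocatedTensorShape, and
// the surrounding variables are assumed names with assumed signatures.
if (rotation_ == Rotation::NONE) {
  // The preallocated tensor is used directly, so validating it is meaningful.
  validatePreAllocatedTensorShape(preAllocatedOutputTensor, frameDims);
  convertNV12FrameToRGB(avFrame, preAllocatedOutputTensor);
} else {
  // The decoded frame's shape differs from the (post-rotation) preallocated
  // tensor, so skip validation, decode into a fresh tensor, then rotate.
  convertNV12FrameToRGB(avFrame, std::nullopt);
  frameOutput.data = rotateHWCFrame(frameOutput.data, rotation_);
}

This keeps the skipped validation visible at the call site rather than hiding it behind a std::nullopt.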

@mollyxu (Author) commented:

Thanks for the discussion! I agree that it's clearer to be explicit about excluding the validation call, and to reduce the number of variables.

@NicolasHug (Contributor) left a comment:
Thanks @mollyxu, this LGTM. I'll let @Dan-Flores make another pass as well.

@Dan-Flores (Contributor) left a comment:
Thanks @mollyxu!

@mollyxu merged commit cc15044 into meta-pytorch:main Feb 13, 2026
68 checks passed

Labels

CLA Signed

3 participants