Decoder-native resize public implementation #1003
Changes from 20 commits
New file, `@@ -0,0 +1,17 @@`:

```rst
.. _samplers:


===================
torchcodec.transforms
===================
```

Suggested change (an RST section-title underline must be at least as long as the title text):

```diff
-===================
-torchcodec.transforms
-===================
+=====================
+torchcodec.transforms
+=====================
```
```diff
@@ -125,3 +125,4 @@ Encoding
 api_ref_decoders
 api_ref_encoders
 api_ref_samplers
+api_ref_transforms
```
```diff
@@ -4,3 +4,4 @@ files = src/torchcodec
 show_error_codes = True
 pretty = True
 allow_redefinition = True
+follow_untyped_imports = True
```
Contributor (Author):

I was getting linting errors like https://github.com/meta-pytorch/torchcodec/actions/runs/19157614790/job/54761644331, which point to the docs that recommend the above change: https://mypy.readthedocs.io/en/stable/running_mypy.html#missing-imports
```diff
@@ -8,17 +8,18 @@
 import json
 import numbers
 from pathlib import Path
-from typing import Literal, Optional, Tuple, Union
+from typing import List, Literal, Optional, Sequence, Tuple, Union

 import torch
-from torch import device as torch_device, Tensor
+from torch import device as torch_device, nn, Tensor

 from torchcodec import _core as core, Frame, FrameBatch
 from torchcodec.decoders._decoder_utils import (
     _get_cuda_backend,
     create_decoder,
     ERROR_REPORTING_INSTRUCTIONS,
 )
+from torchcodec.transforms import DecoderTransform, Resize


 class VideoDecoder:

@@ -66,6 +67,13 @@ class VideoDecoder:
         probably is. Default: "exact".
         Read more about this parameter in:
         :ref:`sphx_glr_generated_examples_decoding_approximate_mode.py`
+    transforms (sequence of transform objects, optional): Sequence of transforms to be
+        applied to the decoded frames by the decoder itself, in order. Accepts both
+        :class:`~torchcodec.transforms.DecoderTransform` and
+        `torchvision.transforms.v2.Transform <https://docs.pytorch.org/vision/stable/transforms.html#v2-api-reference-recommended>`_
+        objects. All transforms are applied
```
Suggested change:

```python
intersphinx_mapping = {
    "python": ("https://docs.python.org/3/", None),
    "torch": ("https://pytorch.org/docs/stable/", None),
    "numpy": ("https://numpy.org/doc/stable/", None),
    "PIL": ("https://pillow.readthedocs.io/en/stable/", None),
    "matplotlib": ("https://matplotlib.org/stable/", None),
}
```
Feel free to leave that as follow-up / open an issue.
Outdated
Following up on #1003 (comment)
Do we want to document that we consider passing untransformed frames to TorchVision transforms as our reference? I think we do, because I think that's implied by accepting the TorchVision transforms, and it's an easy way to explain the feature to users.
Agreed, we should document and claim that TV is our ref. I think we have slightly different understandings of what we mean by "TV is our ref"; your definition is slightly stricter than mine (see below).
Is when the transform is applied useful to users? I thought it was, but if it's of little value, we could potentially just not talk about it.
I don't think it adds a lot of value to document, as I don't know if that's a question users are even asking themselves. But I could be wrong, and I don't feel strongly about it. What concerns me slightly more is that the comment reads like a contract, and I suspect we may want to relax that behavior in the future. E.g. for crop, we might want to apply it in YUV space instead of RGB if it's faster and if models can't notice the difference.
To me, when we say "TV is our ref", it means "this transform has the same behavior as the TV transform as far as models are concerned". It's not strictly about bitwise equality (we'll never have that). It's only about whether the models can tell the difference. We know they can tell the difference for resize's interpolation mode. But if they can't tell the difference for (e.g.) crop being applied before or after color-conversion, I think we could allow ourselves to make that change of behavior. That allows us more freedom to potentially enable higher perf gains in the future.
None of my comments above are blocking. We can go ahead as-is. I'm happy that for once, I am not the one insisting on strictness :D
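The interpolation-mode point above is easy to see even in one dimension: different resampling modes produce genuinely different pixel values, not just rounding noise, which is exactly what a model can pick up on. A stdlib-only sketch (illustrative helpers, not torchcodec code):

```python
def nearest(samples, new_len):
    # Nearest-neighbor resampling of a 1-D signal: each output sample
    # copies the closest input sample.
    n = len(samples)
    return [samples[min(n - 1, int(i * n / new_len))] for i in range(new_len)]

def linear(samples, new_len):
    # Linear-interpolation resampling of the same signal: each output
    # sample blends its two neighboring input samples.
    n = len(samples)
    out = []
    for i in range(new_len):
        pos = i * (n - 1) / (new_len - 1) if new_len > 1 else 0
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

signal = [0, 10, 20, 30]
print(nearest(signal, 7))  # → [0, 0, 10, 10, 20, 20, 30]
print(linear(signal, 7))   # → [0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0]
```

The same divergence happens per pixel row and column in a 2-D resize, so a model trained on one mode sees systematically shifted inputs under the other.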
@NicolasHug, that's all fair, and I also think it's fair to err on the side of explaining less about the implementation. If folks start asking about it, we can revisit.
But if they can't tell the difference for (e.g.) crop being applied before or after color-conversion, I think we could allow ourselves to make that change of behavior. That allows us more freedom to potentially enable higher perf gains in the future.
Based on what I did with crop and resize, I actually think that is likely to be the case everywhere: applying the transform in YUV versus RGB will be noticeable to the model. But we can easily punt on that determination by just not saying anything about it. If it becomes something folks ask about, we may need to make it an explicit option, in which case we'll document behavior.
Outdated
I think this fails if `tv_available` is False? Because `v2` wouldn't exist.
EDIT: ah no, that's probably fine because of the `if not tv_available:` check above.
Makes me think we should have a dummy job where we don't install TV, to ensure TC still works fine...
On a job which doesn't have TorchVision installed: I agree we need to do something here, but I'd like to punt on this for now. The current testing file imports TorchVision unconditionally. I think we'll want to separate out the tests that require TorchVision from those that don't so that we can test both behaviors, but that will require different .py files. I'd like to deal with that in its own PR.
I had actually started adding a step to the current Linux wheel test that did not install TorchVision when I realized this.
Yes, we can punt on this. I'm hoping we can do something very simple regarding testing: keep all but one test job using torchvision, and just have one small CI job that doesn't install TV and just runs a few tests, basically just ensuring TV is an optional dependency. I'd like to avoid separating tests into different files just for that; we may have more than one optional dependency, and that quickly becomes intractable.
Outdated
Nit: I think I would have been less surprised by `v2` being actually optional if this were `elif`.
Suggested change:

```diff
-if isinstance(transform, v2.Resize):
+elif isinstance(transform, v2.Resize):
```
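For context, the dispatch pattern under discussion looks roughly like this standalone sketch. The class names, `tv_available` flag, and `classify` function are stand-ins for illustration, not torchcodec's actual code; the point is that `elif` signals the branches are mutually exclusive alternatives, which reads better when one branch depends on an optional import:

```python
class DecoderTransform:
    """Stand-in for torchcodec.transforms.DecoderTransform."""

class FakeV2Resize:
    """Stand-in for torchvision.transforms.v2.Resize."""

# In the real code this would reflect whether torchvision imported successfully.
tv_available = True

def classify(transform):
    # Dispatch on the transform's type. Using `elif` makes it clear the
    # torchvision branch is an alternative, only reached when the first
    # isinstance check fails.
    if isinstance(transform, DecoderTransform):
        return "decoder-native"
    elif tv_available and isinstance(transform, FakeV2Resize):
        return "torchvision"
    else:
        raise TypeError(f"unsupported transform: {transform!r}")

print(classify(DecoderTransform()))  # → decoder-native
print(classify(FakeV2Resize()))      # → torchvision
```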
scotts marked this conversation as resolved.
```diff
@@ -0,0 +1,7 @@
+# Copyright (c) Meta Platforms, Inc. and affiliates.
+# All rights reserved.
+#
+# This source code is licensed under the BSD-style license found in the
+# LICENSE file in the root directory of this source tree.
+
+from ._decoder_transforms import DecoderTransform, Resize  # noqa
```
```diff
@@ -0,0 +1,60 @@
+# Copyright (c) Meta Platforms, Inc. and affiliates.
+# All rights reserved.
+#
+# This source code is licensed under the BSD-style license found in the
+# LICENSE file in the root directory of this source tree.
+
+from abc import ABC, abstractmethod
+from dataclasses import dataclass
+from typing import Sequence
+
+
+@dataclass
+class DecoderTransform(ABC):
+    """Base class for all decoder transforms.
+
+    A *decoder transform* is a transform that is applied by the decoder before
+    returning the decoded frame. Applying decoder transforms to frames
+    should be both faster and more memory efficient than receiving normally
+    decoded frames and applying the same kind of transform.
+
+    Most `DecoderTransform` objects have a complementary transform in TorchVision,
```

Suggested change (double backticks so Sphinx renders inline code rather than italics):

```diff
-    Most `DecoderTransform` objects have a complementary transform in TorchVision,
+    Most ``DecoderTransform`` objects have a complementary transform in TorchVision,
```
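To make the shape of this API concrete, here is a minimal, self-contained sketch of the dataclass-plus-ABC design shown in the diff. The `_spec` method and its string format are hypothetical, purely for illustration; the real class's internal hooks differ (the actual method name is exactly what the review below debates):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Tuple

@dataclass
class DecoderTransform(ABC):
    """Sketch of the base class: declarative parameters, no forward() logic."""

    @abstractmethod
    def _spec(self) -> str:
        """Hypothetical hook: serialize the transform's parameters for the decoder."""

@dataclass
class Resize(DecoderTransform):
    size: Tuple[int, int]  # (height, width)

    def _spec(self) -> str:
        height, width = self.size
        return f"resize, {height}, {width}"

r = Resize(size=(224, 320))
print(r._spec())  # → resize, 224, 320
```

The dataclass gives users a plain, comparable parameter container, while the ABC keeps each concrete transform responsible for describing itself to the decoder.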
I saw them render as italics, and I just thought, "Oh, Sphinx makes code just italics? Okay..." :)
scotts marked this conversation as resolved.
Outdated
Can we call it something other than `_make_params`?
`make_params` exists for the v2 transforms, but it does something quite different.