[API-Compat] Add paddle.compat.min/max and new PHI kernel (min/max_with_index) #74547
Conversation
Your PR has been submitted successfully. Thank you for your contribution to this open-source project!
/re-run all-failed
Codecov Report

❌ Patch coverage is …

Additional details and impacted files:

@@           Coverage Diff            @@
##           develop   #74547   +/- ##
==========================================
  Coverage         ?   78.19%
==========================================
  Files            ?        5
  Lines            ?      188
  Branches         ?        0
==========================================
  Hits             ?      147
  Misses           ?       41
  Partials         ?        0
==========================================

☔ View full report in Codecov by Sentry.
Force-pushed from ff2ded8 to 6414203 (Compare)
/re-run all-failed
Force-pushed from 75b797d to 10eb128 (Compare)
/re-run CI-Build

Commits:
- Attempting to fix integral type gradient computation (rejection)
- …ed implementation
- removed split API for independence.
Force-pushed from b2df5fe to 17f080e (Compare)
/re-run all-failed

2 similar comments:

/re-run all-failed

/re-run all-failed
LGTM

LGTM
0fbbb99
/re-run all-failed

2 similar comments:

/re-run all-failed

/re-run all-failed
LGTM

LGTM

LGTM
- For case 1: a single value Tensor (0-dim)
- For case 2: a named tuple MinMaxRetType(values: Tensor, indices: Tensor), `values` has the same data type as the `input`,
    while indices is always an int64 Tensor, with exactly the same shape as `values`.
    MinMaxRetType can be used (indexed, packed, unpacked) in the same way as a regular tuple
- For case 3: see `paddle.minimum`
Suggested change:

- For case 1. A single value Tensor (0-dim)
- For case 2. A named tuple MinMaxRetType(values: Tensor, indices: Tensor), `values` has the same data type as the `input`,
    while indices is always an int64 Tensor, with exactly the same shape as `values`.
    MinMaxRetType can be used (indexed, packed, unpacked) in the same way as a regular tuple
- For case 3. See `paddle.minimum` (:ref:`api_paddle_minimum`)
- In the Returns section, anything before a colon (`:`) is automatically rendered as the return type, so avoid using colons there.
- Also add a cross-reference link for the case 3 API.
OK, will fix in a follow-up PR.
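For context, here is a minimal sketch of how the case 2 return value described above is meant to be consumed, based on the docstring in this PR (the tensor values are illustrative):

```python
import paddle

x = paddle.to_tensor([[1.0, 3.0], [2.0, 0.5]])

# Case 2: reduce along `dim`; returns MinMaxRetType(values, indices).
out = paddle.compat.min(x, dim=1)
print(out.values)   # same dtype as x -> [1.0, 0.5]
print(out.indices)  # always int64, same shape as values -> [0, 1]

# The named tuple works like a regular tuple: it can be indexed or unpacked.
values, indices = paddle.compat.min(x, dim=1)
assert (values == out[0]).all()
assert (indices == out[1]).all()
```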
) -> Tensor | MinMaxRetType:
    """

    Computes the minimum of tensor elements. There are mainly 3 cases (functionalities):
Suggested change (formatting only; the visible text is unchanged):

Computes the minimum of tensor elements. There are mainly 3 cases (functionalities):
OK, will fix in a follow-up PR.
Special warning: the gradient behavior is NOT well-documented by PyTorch, the actual behavior should be:
1. Case 1: the same as `min`

Suggested change (formatting only; the visible text is unchanged):

Special warning: the gradient behavior is NOT well-documented by PyTorch, the actual behavior should be:
1. Case 1: the same as `min`
OK, will fix in a follow-up PR.
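To make the warning above concrete, here is a small sketch of the expected gradient behavior, assuming the `paddle.compat.min` API added in this PR (values are illustrative):

```python
import paddle

x = paddle.to_tensor([[1.0, 3.0], [2.0, 0.5]], stop_gradient=False)
values, _ = paddle.compat.min(x, dim=1)  # values: [1.0, 0.5]

# As with amin/amax, the gradient flows only to the selected index
# positions, as if take_along_axis were applied to the upstream gradient.
values.sum().backward()
print(x.grad)  # expected: [[1.0, 0.0], [0.0, 1.0]]
```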
) -> Tensor | MinMaxRetType:
    """

    Computes the maximum of tensor elements. There are mainly 3 cases (functionalities):
Suggested change (formatting only; the visible text is unchanged):

Computes the maximum of tensor elements. There are mainly 3 cases (functionalities):
Special warning: the gradient behavior is NOT well-documented by PyTorch, the actual behavior should be:
1. Case 1: the same as `max`

Suggested change (formatting only; the visible text is unchanged):

Special warning: the gradient behavior is NOT well-documented by PyTorch, the actual behavior should be:
1. Case 1: the same as `max`
Returns:
    - For case 1: a single value Tensor (0-dim)
    - For case 2: a named tuple MinMaxRetType(values: Tensor, indices: Tensor), `values` has the same data type as the `input`,
        while indices is always an int64 Tensor, with exactly the same shape as `values`.
        MinMaxRetType can be used (indexed, packed, unpacked) in the same way as a regular tuple
    - For case 3: see `paddle.maximum`

Suggested change:

Returns:
    - For case 1. A single value Tensor (0-dim)
    - For case 2. A named tuple MinMaxRetType(values: Tensor, indices: Tensor), `values` has the same data type as the `input`,
        while indices is always an int64 Tensor, with exactly the same shape as `values`.
        MinMaxRetType can be used (indexed, packed, unpacked) in the same way as a regular tuple
    - For case 3. See `paddle.maximum` (:ref:`api_paddle_maximum`)
Will fix the documentation issues later~
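For completeness, a sketch of case 3, which the docstring defers to `paddle.maximum`; this assumes the second positional argument is the `other` tensor, mirroring `torch.max` (values are illustrative):

```python
import paddle

x = paddle.to_tensor([1.0, 5.0, 3.0])
y = paddle.to_tensor([4.0, 2.0, 3.0])

# Case 3: element-wise maximum of two tensors, with the same
# semantics as paddle.maximum(x, y).
out = paddle.compat.max(x, y)
print(out)  # expected: [4.0, 5.0, 3.0]
```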
/re-run all-failed

1 similar comment:

/re-run all-failed
PR Category: Operator Mechanism

PR Types: New features

Description
This PR is a reopened version of #74495, rebased onto a newer base and with the conflicts with #74506 resolved. #74495 was only used to surface problems in CI; this PR attempts to fix the issue found there, namely that the new ops share the amin/amax backward op while amin/amax do not support certain integer types, using SFINAE plus Python-side checks. Until #74506 is merged, this PR will look larger than it is: it currently includes some changes from that preceding PR, which should resolve automatically once the preceding PR is merged.
Features added by this PR:

- New PHI kernels `(min/max)_with_index` and `(min/max)_with_index_grad`. Note the `amin`/`amax`-style behavior: the gradient is propagated only to the positions of the minimum/maximum indices, as if a take_along_axis were applied to the gradient.
- New APIs `paddle.compat.min` / `paddle.compat.max`, aligned with the behavior of `torch.min` / `torch.max`. The input/output relationship of `torch.min`/`torch.max` is complex (a single API packs too many functionalities); case 3 is consistent with `minimum`/`maximum`.
- Except for case 2 above, which calls `(min/max)_with_index` on the CUDA GPU backend, all cases obtain their results from Python calls to `_C_ops.xxx`. Cases 1/2/3 should all perform well on the CUDA GPU backend (no composition; each is completed by a single operator call). On other backends, case 2 is composed from argmin/argmax and take_along_axis (together with a squeeze_ operation), which is not the optimal-performance scheme but should offer good development cost-effectiveness; a sketch of this composed fallback follows below.
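The composed fallback for case 2 on non-CUDA backends might look roughly like the sketch below. This is an illustration based on the description only; `_min_with_index_fallback` is a hypothetical name, not the helper actually used in this PR:

```python
import paddle

def _min_with_index_fallback(x, dim, keepdim=False):
    # Non-CUDA path as described: argmin + take_along_axis,
    # followed by an in-place squeeze_ when keepdim is False.
    indices = paddle.argmin(x, axis=dim, keepdim=True)    # int64 by default
    values = paddle.take_along_axis(x, indices, axis=dim)
    if not keepdim:
        values = values.squeeze_(axis=dim)
        indices = indices.squeeze_(axis=dim)
    return values, indices
```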
TODO

- test_compat_minmax.py: operator unit tests, to land once they can meet the unit-test coverage requirement; the reported coverage numbers may be inaccurate.

Pcard-89620