
Commit 8282c16

Add the second category of API differences
1 parent 60beaad commit 8282c16

99 files changed: +1986 additions, 0 deletions

Lines changed: 25 additions & 0 deletions
@@ -0,0 +1,25 @@

## [ Only the API invocation differs ] torch.Tensor.sparse_mask

### [torch.Tensor.sparse_mask](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.sparse_mask)

```python
torch.Tensor.sparse_mask(mask)
```

### [paddle.sparse.mask_as](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/sparse/mask_as_cn.html#paddle/sparse/mask_as_cn#cn-api-paddle-sparse-mask_as)

```python
paddle.sparse.mask_as(x, mask, name=None)
```

The two APIs are functionally identical; only the way they are called differs, as shown below:

### Conversion example

```python
# PyTorch version
out = x.sparse_mask(coo)

# Paddle version
out = paddle.sparse.mask_as(x, coo)
```
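
The example above assumes `x` is a dense tensor and `coo` is a sparse COO tensor. A minimal, self-contained sketch of both sides (the concrete values and the `to_sparse` / `to_sparse_coo` construction are illustrative, not part of the original doc):

```python
import torch
import paddle

# PyTorch: keep the values of `dense` at the indices specified by the sparse COO mask
dense = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
coo = torch.tensor([[0.0, 5.0], [6.0, 0.0]]).to_sparse()
out_torch = dense.sparse_mask(coo)  # sparse tensor holding dense's values at coo's indices

# Paddle: the same operation through paddle.sparse.mask_as
dense_pd = paddle.to_tensor([[1.0, 2.0], [3.0, 4.0]])
coo_pd = paddle.to_tensor([[0.0, 5.0], [6.0, 0.0]]).to_sparse_coo(sparse_dim=2)
out_paddle = paddle.sparse.mask_as(dense_pd, coo_pd)
```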
Lines changed: 27 additions & 0 deletions
@@ -0,0 +1,27 @@

## [ Only the API invocation differs ] torch.distributions.distribution.Distribution.log_prob

### [torch.distributions.distribution.Distribution.log_prob](https://pytorch.org/docs/stable/generated/torch.distributions.distribution.Distribution.html#torch.distributions.distribution.Distribution.log_prob)

```python
torch.distributions.distribution.Distribution.log_prob(value)
```

### [paddle.distribution.Distribution.log_prob](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/distribution/Distribution/log_prob_cn.html#paddle/distribution/Distribution/log_prob_cn#cn-api-paddle-distribution-Distribution-log_prob)

```python
paddle.distribution.Distribution.log_prob(value)
```

The two APIs are functionally identical; only the way they are called differs, as shown below:

### Conversion example

```python
# PyTorch version
uniform = torch.distributions.Uniform(0.0, 1.0)
result = uniform.log_prob(torch.tensor(0.3))

# Paddle version
uniform = paddle.distribution.Uniform(0.0, 1.0)
result = uniform.log_prob(paddle.to_tensor(0.3))
```
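
`log_prob` also accepts batched inputs in the same way on both sides. A small hedged extension of the example above (the `values` list is illustrative):

```python
import torch
import paddle

values = [0.1, 0.5, 0.9]

# PyTorch: the log-density of Uniform(0, 1) is log(1) = 0 everywhere inside the support
logp_torch = torch.distributions.Uniform(0.0, 1.0).log_prob(torch.tensor(values))

# Paddle: identical call, only the module path changes
logp_paddle = paddle.distribution.Uniform(0.0, 1.0).log_prob(paddle.to_tensor(values))
```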
Lines changed: 26 additions & 0 deletions
@@ -0,0 +1,26 @@

## [ Only the API invocation differs ] torch.max

### [torch.max](https://pytorch.org/docs/stable/generated/torch.max.html)

```python
torch.max(input, *, out=None)
```

### [paddle.compat.max](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/compat/max_cn.html#paddle/compat/max_cn#cn-api-paddle-compat-max)

```python
paddle.compat.max(input, *, out=None)
```

The two APIs are functionally identical; only the way they are called differs, as shown below:

### Conversion example

```python
# PyTorch version
result = torch.max(x)

# Paddle version
result = paddle.compat.max(x)
```
Lines changed: 26 additions & 0 deletions
@@ -0,0 +1,26 @@

## [ Only the API invocation differs ] torch.median

### [torch.median](https://pytorch.org/docs/stable/generated/torch.median.html)

```python
torch.median(input)
```

### [paddle.compat.median](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/compat/median_cn.html#paddle/compat/median_cn#cn-api-paddle-compat-median)

```python
paddle.compat.median(input, dim=None, keepdim=False, *, out=None)
```

The two APIs are functionally identical; only the way they are called differs, as shown below:

### Conversion example

```python
# PyTorch 写法
result = torch.median(input)

# Paddle 写法
result = paddle.compat.median(input)
```
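
One behavioral detail worth keeping in mind when converting (the concrete input below is illustrative): for an even number of elements, `torch.median` returns the lower of the two middle values rather than their average, and since the two APIs are stated to be functionally identical, `paddle.compat.median` is assumed to follow the same convention.

```python
import torch
import paddle

data = [1.0, 2.0, 3.0, 4.0]

# PyTorch: returns the lower of the two middle values (2.0), not their average (2.5)
m_torch = torch.median(torch.tensor(data))

# Paddle: assumed to match torch's lower-median convention, per the equivalence stated above
m_paddle = paddle.compat.median(paddle.to_tensor(data))
```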
Lines changed: 26 additions & 0 deletions
@@ -0,0 +1,26 @@

## [ Only the API invocation differs ] torch.min

### [torch.min](https://pytorch.org/docs/stable/generated/torch.min.html)

```python
torch.min(input, *, out=None)
```

### [paddle.compat.min](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/compat/min_cn.html#paddle/compat/min_cn#cn-api-paddle-compat-min)

```python
paddle.compat.min(input, *args, out=None, **kwargs)
```

The two APIs are functionally identical; only the way they are called differs, as shown below:

### Conversion example

```python
# PyTorch version
result = torch.min(x)

# Paddle version
result = paddle.compat.min(x)
```
Lines changed: 26 additions & 0 deletions
@@ -0,0 +1,26 @@

## [ Only the API invocation differs ] torch.nanmedian

### [torch.nanmedian](https://pytorch.org/docs/stable/generated/torch.nanmedian.html)

```python
torch.nanmedian(input)
```

### [paddle.compat.nanmedian](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/compat/nanmedian_cn.html#paddle/compat/nanmedian_cn#cn-api-paddle-compat-nanmedian)

```python
paddle.compat.nanmedian(input, dim=None, keepdim=False, *, out=None)
```

The two APIs are functionally identical; only the way they are called differs, as shown below:

### Conversion example

```python
# PyTorch version
result = torch.nanmedian(input)

# Paddle version
result = paddle.compat.nanmedian(input)
```
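
A hedged sketch of the NaN handling both APIs are expected to share (the concrete input is illustrative): NaN entries are ignored and the median is taken over the remaining values.

```python
import torch
import paddle

# PyTorch: NaNs are skipped; the median is taken over [1.0, 3.0],
# and torch returns the lower of the two middle values, i.e. 1.0
x = torch.tensor([1.0, float("nan"), 3.0, float("nan")])
m_torch = torch.nanmedian(x)

# Paddle: assumed to behave identically, per the equivalence stated above
x_pd = paddle.to_tensor([1.0, float("nan"), 3.0, float("nan")])
m_paddle = paddle.compat.nanmedian(x_pd)
```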
Lines changed: 26 additions & 0 deletions
@@ -0,0 +1,26 @@

## [ Only the API invocation differs ] torch.nn.Unfold

### [torch.nn.Unfold](https://pytorch.org/docs/stable/generated/torch.nn.Unfold.html#torch.nn.Unfold)

```python
torch.nn.Unfold(kernel_size, dilation=1, padding=0, stride=1)
```

### [paddle.compat.Unfold](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/compat/Unfold_cn.html#paddle/compat/Unfold_cn#cn-api-paddle-compat-Unfold)

```python
paddle.compat.Unfold(kernel_size, dilation=1, padding=0, stride=1)
```

The two APIs are functionally identical; only the way they are called differs, as shown below:

### Conversion example

```python
# PyTorch version
unfold = torch.nn.Unfold(kernel_size=(2, 2))

# Paddle version
unfold = paddle.compat.Unfold(kernel_size=(2, 2))
```
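
To make the module's effect concrete (the input shape below is illustrative): unfolding an NCHW tensor flattens each kernel-sized patch into a column.

```python
import torch
import paddle

# PyTorch: (N, C, H, W) -> (N, C * kh * kw, L), where L is the number of sliding positions
x = torch.randn(1, 3, 4, 4)
unfold = torch.nn.Unfold(kernel_size=(2, 2))
patches = unfold(x)  # shape: (1, 3 * 2 * 2, 9) = (1, 12, 9)

# Paddle: assumed to produce the same layout, per the equivalence stated above
x_pd = paddle.randn([1, 3, 4, 4])
unfold_pd = paddle.compat.Unfold(kernel_size=(2, 2))
patches_pd = unfold_pd(x_pd)
```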
Lines changed: 26 additions & 0 deletions
@@ -0,0 +1,26 @@

## [ Only the API invocation differs ] torch.nn.functional.pad

### [torch.nn.functional.pad](https://pytorch.org/docs/stable/nn.functional.html#torch.nn.functional.pad)

```python
torch.nn.functional.pad(input, pad, mode="constant", value=None)
```

### [paddle.compat.pad](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/compat/pad_cn.html#paddle/compat/pad_cn#cn-api-paddle-compat-pad)

```python
paddle.compat.pad(input, pad, mode="constant", value=0.0)
```

The two APIs are functionally identical; only the way they are called differs, as shown below:

### Conversion example

```python
# PyTorch version
result = torch.nn.functional.pad(x, [0, 0, 0, 0, 0, 1, 2, 3], value=1)

# Paddle version
result = paddle.compat.pad(x, [0, 0, 0, 0, 0, 1, 2, 3], value=1)
```
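
The `pad` list is read in pairs starting from the last dimension, so the eight-element list above pads the last four dimensions of `x`. A smaller hedged sketch (shapes are illustrative):

```python
import torch
import paddle

# [1, 2] pads only the last dimension: 1 element on the left, 2 on the right
x = torch.ones(2, 3)
y_torch = torch.nn.functional.pad(x, [1, 2], mode="constant", value=0)  # shape: (2, 6)

# Paddle: the compat API is assumed to read the pad list the same way
x_pd = paddle.ones([2, 3])
y_paddle = paddle.compat.pad(x_pd, [1, 2], mode="constant", value=0)
```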
Lines changed: 26 additions & 0 deletions
@@ -0,0 +1,26 @@

## [ Only the API invocation differs ] torch.nn.functional.softmax

### [torch.nn.functional.softmax](https://pytorch.org/docs/stable/nn.functional.html#torch.nn.functional.softmax)

```python
torch.nn.functional.softmax(input, dim=None, _stacklevel=3, dtype=None)
```

### [paddle.compat.softmax](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/compat/softmax_cn.html#paddle/compat/softmax_cn#cn-api-paddle-compat-softmax)

```python
paddle.compat.softmax(input, dim=None, dtype=None, *, out=None)
```

The two APIs are functionally identical; only the way they are called differs, as shown below:

### Conversion example

```python
# PyTorch version
result = torch.nn.functional.softmax(x, -1)

# Paddle version
result = paddle.compat.softmax(x, -1)
```
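
A slightly fuller hedged sketch (the input values are illustrative), showing that the probabilities along the chosen axis sum to one in both frameworks:

```python
import torch
import paddle

x = torch.tensor([[1.0, 2.0, 3.0], [1.0, 1.0, 1.0]])
p_torch = torch.nn.functional.softmax(x, dim=-1)  # each row sums to 1

x_pd = paddle.to_tensor([[1.0, 2.0, 3.0], [1.0, 1.0, 1.0]])
p_paddle = paddle.compat.softmax(x_pd, -1)  # same call, only the module path changes
```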
Lines changed: 36 additions & 0 deletions
@@ -0,0 +1,36 @@

## [ Only the API invocation differs ] torch.optim.Optimizer.add_param_group

### [torch.optim.Optimizer.add_param_group](https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.html#torch.optim.Optimizer.add_param_group)

```python
torch.optim.Optimizer.add_param_group(param_group)
```

### [paddle.optimizer.Optimizer._add_param_group](https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/optimizer/Optimizer/_add_param_group_cn.html#paddle/optimizer/Optimizer/_add_param_group_cn#cn-api-paddle-optimizer-Optimizer-_add_param_group)

```python
paddle.optimizer.Optimizer._add_param_group(param_group)
```

The two APIs are functionally identical; only the way they are called differs, as shown below:

### Conversion example

```python
# PyTorch version
optimizer = torch.optim.SGD(pg1, lr=0.1, momentum=0.9, weight_decay=0.0005)
optimizer.add_param_group({
    'params': pg2,
    'lr': 0.1 * 2,
    'weight_decay': 0.0
})

# Paddle version
optimizer = paddle.optimizer.SGD(learning_rate=0.1, parameters=pg1, weight_decay=0.0005)
optimizer._add_param_group({
    'params': pg2,
    'learning_rate': 0.1 * 2,
    'weight_decay': 0.0
})
```
