**[Documentation](https://pytorch-scatter.readthedocs.io)**

This package consists of a small extension library of highly optimized sparse update (scatter and segment) operations for use in [PyTorch](http://pytorch.org/), which are missing in the main package.
Scatter and segment operations can be roughly described as reduce operations based on a given "group-index" tensor.
The package consists of the following operations:

* [**Scatter**](https://pytorch-scatter.readthedocs.io/en/latest/functions/scatter.html)
* [**Segment COO**](https://pytorch-scatter.readthedocs.io/en/latest/functions/segment_coo.html)
* [**Segment CSR**](https://pytorch-scatter.readthedocs.io/en/latest/functions/segment_csr.html)

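Conceptually, a scatter reduces all `src` entries that share the same `index` value into one output slot, while the CSR segment variant reduces contiguous ranges delimited by a boundary tensor (often called `indptr`). A minimal pure-Python sketch of the sum case, illustrating the semantics only (this is not the library's implementation, and `dim_size`/`indptr` here mirror the library's parameter names only by assumption):

```python
def scatter_sum(src, index, dim_size=None):
    """Reference semantics: out[i] = sum of src[j] for every j with index[j] == i."""
    if dim_size is None:
        dim_size = max(index) + 1
    out = [0] * dim_size
    for value, group in zip(src, index):
        out[group] += value
    return out

def segment_sum_csr(src, indptr):
    """Same reduction, but groups are the contiguous ranges src[indptr[i]:indptr[i + 1]]."""
    return [sum(src[lo:hi]) for lo, hi in zip(indptr, indptr[1:])]

print(scatter_sum([1, 2, 3, 4], [0, 0, 1, 1]))   # [3, 7]
print(segment_sum_csr([1, 2, 3, 4], [0, 2, 4]))  # [3, 7]
```

When the grouping is already sorted and contiguous, the segment form avoids the indirection through `index`, which is what makes the dedicated segment kernels attractive.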
|
In addition, we provide composite functions which make use of `scatter_*` operations under the hood:

* [**Scatter Std**](https://pytorch-scatter.readthedocs.io/en/latest/composite/std.html#torch_scatter.composite.scatter_std)
* [**Scatter LogSumExp**](https://pytorch-scatter.readthedocs.io/en/latest/composite/logsumexp.html#torch_scatter.composite.scatter_logsumexp)
* [**Scatter Softmax**](https://pytorch-scatter.readthedocs.io/en/latest/composite/softmax.html#torch_scatter.composite.scatter_softmax)
* [**Scatter LogSoftmax**](https://pytorch-scatter.readthedocs.io/en/latest/composite/softmax.html#torch_scatter.composite.scatter_log_softmax)

All included operations are broadcastable, work on varying data types, are implemented for both CPU and GPU with corresponding backward implementations, and are fully traceable via `@torch.jit.script`.

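To illustrate how such a composite reduces per group rather than over the whole tensor, here is a pure-Python sketch of the scatter-softmax semantics (a softmax computed independently within each index group, using the usual max-subtraction trick for numerical stability). This is an illustrative reference only, not the library's CUDA/CPU implementation:

```python
import math

def scatter_softmax(src, index):
    """Reference semantics: softmax over the elements of each group, kept in input order."""
    # Per-group maximum, subtracted before exponentiating for numerical stability.
    group_max = {}
    for value, group in zip(src, index):
        group_max[group] = max(group_max.get(group, float("-inf")), value)
    # Per-group normalizer (sum of shifted exponentials).
    group_sum = {}
    for value, group in zip(src, index):
        group_sum[group] = group_sum.get(group, 0.0) + math.exp(value - group_max[group])
    return [math.exp(v - group_max[g]) / group_sum[g] for v, g in zip(src, index)]

probs = scatter_softmax([1.0, 1.0, 2.0], [0, 0, 1])
print(probs)  # [0.5, 0.5, 1.0] -- each group's probabilities sum to 1
```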
## Installation

```python
import torch
from torch_scatter import scatter_max

src = torch.tensor([[2, 0, 1, 4, 3], [0, 2, 1, 3, 4]])
index = torch.tensor([[4, 5, 4, 2, 3], [0, 0, 2, 2, 1]])

out, argmax = scatter_max(src, index, dim=-1)
```

```
print(out)
tensor([[0, 0, 4, 3, 2, 0],
        [2, 4, 3, 0, 0, 0]])

print(argmax)
tensor([[5, 5, 3, 4, 0, 1],
        [1, 4, 3, 5, 5, 5]])
```

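For intuition, the result above can be reproduced row by row with a plain-Python sketch of the scatter-max semantics. Reading the printed output, empty groups keep the fill value `0` and report an out-of-range `argmax` equal to the number of source elements (here `5`); the sketch below encodes that observation and is illustrative only:

```python
def scatter_max(src, index, dim_size=None):
    """Reference semantics: per-group maximum plus the source position it came from.

    Empty groups keep the fill value 0, and their argmax is set to len(src) as an
    out-of-range marker (assumption inferred from the printed output above).
    """
    if dim_size is None:
        dim_size = max(index) + 1
    out = [0] * dim_size
    argmax = [len(src)] * dim_size
    filled = [False] * dim_size
    for pos, (value, group) in enumerate(zip(src, index)):
        if not filled[group] or value > out[group]:
            out[group], argmax[group], filled[group] = value, pos, True
    return out, argmax

# First row of the example above:
out, argmax = scatter_max([2, 0, 1, 4, 3], [4, 5, 4, 2, 3], dim_size=6)
print(out)     # [0, 0, 4, 3, 2, 0]
print(argmax)  # [5, 5, 3, 4, 0, 1]
```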
## Running tests