
Commit a9cd2d8

3020 Enhance what's new for transformer networks (#3019)

* [DLMED] add more doc
* [DLMED] edit change log
* [DLMED] add command to get transform backend
* [DLMED] enhance the doc

Signed-off-by: Nic Ma <[email protected]>

1 parent aa4eb5d, commit a9cd2d8

File tree

3 files changed: +3 −3 lines

CHANGELOG.md (1 addition, 1 deletion)

```diff
@@ -32,7 +32,7 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.
 * Deprecated input argument `dimensions` and `ndims`, in favor of `spatial_dims`
 * Updated the Sphinx-based documentation theme for better readability
 * `NdarrayTensor` type is replaced by `NdarrayOrTensor` for simpler annotations
-* Attention-based network blocks now support both 2D and 3D inputs
+* Self-attention-based network blocks now support both 2D and 3D inputs

 ### Removed
 * The deprecated `TransformInverter`, in favor of `monai.transforms.InvertD`
```

docs/source/highlights.md (1 addition, 1 deletion)

```diff
@@ -58,7 +58,7 @@ transformations. These currently include, for example:

 ### 3. Transforms support both NumPy array and PyTorch Tensor (CPU or GPU accelerated)
-From MONAI v0.7 we introduced PyTorch `Tensor` based computation in transforms, many transforms already support both `numpy array` and `Tensor` data.
+From MONAI v0.7 we introduced PyTorch `Tensor` based computation in transforms, many transforms already support both `NumPy array` and `Tensor` as input types and computational backends. To get the supported backends of every transform, please execute: `python monai/transforms/utils.py`.

 To accelerate the transforms, a common approach is to leverage GPU parallel-computation. Users can first convert input data into GPU Tensor by `ToTensor` or `EnsureType` transform, then the following transforms can execute on GPU based on PyTorch `Tensor` APIs.
 GPU transform tutorial is available at [Spleen fast training tutorial](https://github.com/Project-MONAI/tutorials/blob/master/acceleration/fast_training_tutorial.ipynb).
```
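The acceleration approach described above (transfer the data to a GPU `Tensor` once, then let subsequent Tensor-based transforms run on-device) can be sketched in plain PyTorch; the min-max rescale and `torch.flip` here are stand-ins for MONAI's intensity and spatial transforms, not the library's actual implementations:

```python
import torch

# Choose the GPU when available; all subsequent Tensor ops then run on-device.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in for loaded image data; MONAI's `ToTensor`/`EnsureType` would
# produce a Tensor like this from a NumPy array.
image = torch.rand(1, 64, 64)

# Transfer once to the target device...
image = image.to(device)

# ...then the chained Tensor-based transforms stay on that device.
scaled = (image - image.min()) / (image.max() - image.min())  # intensity rescale to [0, 1]
flipped = torch.flip(scaled, dims=[-1])                       # spatial flip along the last axis

print(flipped.shape, flipped.device)
```

The key design point is that the device transfer happens once, up front; every later operation dispatches to the same device without further copies.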

docs/source/whatsnew_0_7.md (1 addition, 1 deletion)

```diff
@@ -29,7 +29,7 @@ more](https://github.com/Project-MONAI/tutorials/blob/master/acceleration/fast_t

 MONAI starts to roll out major usability enhancements for the
 `monai.transforms` module. Many transforms are now supporting both NumPy and
-PyTorch, as input types and computational backends.
+PyTorch, as input types and computational backends. To get the supported backends of every transform, please execute: `python monai/transforms/utils.py`.

 One benefit of these enhancements is that the users can now better leverage the
 GPUs for preprocessing. By transferring the input data onto GPU using
```
