Commit e10ffc8

Fix docs typos
1 parent e55b2da commit e10ffc8

8 files changed (+18, −18 lines)

README.md
Lines changed: 3 additions & 3 deletions

@@ -46,8 +46,8 @@ import segmentation_models_pytorch as smp
 
 model = smp.Unet(
     encoder_name="resnet34",        # choose encoder, e.g. mobilenet_v2 or efficientnet-b7
-    encoder_weights="imagenet",     # use `imagenet` pretrained weights for encoder initialization
-    in_channels=1,                  # model input channels (1 for grayscale images, 3 for RGB, etc.)
+    encoder_weights="imagenet",     # use `imagenet` pre-trained weights for encoder initialization
+    in_channels=1,                  # model input channels (1 for gray-scale images, 3 for RGB, etc.)
     classes=3,                      # model output channels (number of classes in your dataset)
 )
 ```
@@ -56,7 +56,7 @@ model = smp.Unet(
 
 #### 2. Configure data preprocessing
 
-All encoders have pretrained weights. Preparing your data the same way as during weights pretraining may give your better results (higher metric score and faster convergence). But it is relevant only for 1-2-3-channels images and **not necessary** in case you train the whole model, not only decoder.
+All encoders have pretrained weights. Preparing your data the same way as during weights pre-training may give your better results (higher metric score and faster convergence). But it is relevant only for 1-2-3-channels images and **not necessary** in case you train the whole model, not only decoder.
 
 ```python
 from segmentation_models_pytorch.encoders import get_preprocessing_fn
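The preprocessing the hunk refers to boils down to a per-channel normalization with the encoder's pre-training statistics. A minimal, library-free sketch of what such a function typically computes, assuming the standard `imagenet` mean/std values (the mean appears in the docs/insights.rst hunk of this same commit):

```python
# Sketch of encoder preprocessing: scale pixels to [0, 1], then
# normalize per channel with the pre-training statistics.
# The imagenet values below are assumed, not taken from this diff.

IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]

def preprocess_pixel(rgb, mean=IMAGENET_MEAN, std=IMAGENET_STD):
    """Normalize one RGB pixel given as 0-255 integer values."""
    return [((v / 255.0) - m) / s for v, m, s in zip(rgb, mean, std)]

# A mid-gray pixel lands close to zero after normalization,
# which is the whole point of matching the pre-training statistics.
centered = preprocess_pixel([124, 116, 104])
```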

docs/insights.rst
Lines changed: 5 additions & 5 deletions

@@ -15,8 +15,8 @@ All segmentation models in SMP (this library short name) are made of:
 2. Creating your own encoder
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Encoder is a "classification model" which extarct features from image and pass it to decoder.
-Each encoder should have following attributes and methods and be inherited from `segmetation_models_pytorch.encoders._base.EncoderMixin`
+Encoder is a "classification model" which extract features from image and pass it to decoder.
+Each encoder should have following attributes and methods and be inherited from `segmentation_models_pytorch.encoders._base.EncoderMixin`
 
 .. code-block:: python
 
@@ -29,13 +29,13 @@ Each encoder should have following attributes and methods and be inherited from
         self._out_channels: List[int] = [3, 16, 64, 128, 256, 512]
 
         # A number of stages in decoder (in other words number of downsampling operations), integer
-        # use in in forward pass to reduce number of returning fatures
+        # use in in forward pass to reduce number of returning features
         self._depth: int = 5
 
         # Default number of input channels in first Conv2d layer for encoder (usually 3)
         self._in_channels: int = 3
 
-        # Define enoder modules below
+        # Define encoder modules below
         ...
 
     def forward(self, x: torch.Tensor) -> List[torch.Tensor]:
@@ -60,7 +60,7 @@ When you write your own Encoder class register its build parameters
 
 .. code-block:: python
 
    smp.encoders.encoders["my_awesome_encoder"] = {
-       "encoder": MyEncoder,  # enocoder class here
+       "encoder": MyEncoder,  # encoder class here
        "pretrained_settings": {
           "imagenet": {
              "mean": [0.485, 0.456, 0.406],

docs/quickstart.rst
Lines changed: 3 additions & 3 deletions

@@ -11,8 +11,8 @@ Segmentation model is just a PyTorch nn.Module, which can be created as easy as:
 
    model = smp.Unet(
       encoder_name="resnet34",        # choose encoder, e.g. mobilenet_v2 or efficientnet-b7
-      encoder_weights="imagenet",     # use `imagenet` pretreined weights for encoder initialization
-      in_channels=1,                  # model input channels (1 for grayscale images, 3 for RGB, etc.)
+      encoder_weights="imagenet",     # use `imagenet` pre-trained weights for encoder initialization
+      in_channels=1,                  # model input channels (1 for gray-scale images, 3 for RGB, etc.)
       classes=3,                      # model output channels (number of classes in your dataset)
    )
 
@@ -21,7 +21,7 @@ Segmentation model is just a PyTorch nn.Module, which can be created as easy as:
 
 **2. Configure data preprocessing**
 
-All encoders have pretrained weights. Preparing your data the same way as during weights pretraining may give your better results (higher metric score and faster convergence). But it is relevant only for 1-2-3-channels images and **not necessary** in case you train the whole model, not only decoder.
+All encoders have pretrained weights. Preparing your data the same way as during weights pre-training may give your better results (higher metric score and faster convergence). But it is relevant only for 1-2-3-channels images and **not necessary** in case you train the whole model, not only decoder.
 
 .. code-block:: python
 

segmentation_models_pytorch/losses/constants.py
Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 #: Loss binary mode suppose you are solving binary segmentation task.
 #: That mean yor have only one class which pixels are labled as **1**,
-#: the rest pixels are backgroud and labeled as **0**.
+#: the rest pixels are background and labeled as **0**.
 #: Target mask shape - (N, H, W), model output mask shape (N, 1, H, W).
 BINARY_MODE: str = "binary"
 

segmentation_models_pytorch/losses/dice.py
Lines changed: 2 additions & 2 deletions

@@ -31,8 +31,8 @@ def __init__(
             from_logits: If True, assumes input is raw logits
             smooth: Smoothness constant for dice coefficient (a)
             ignore_index: Label that indicates ignored pixels (does not contribute to loss)
-            eps: A small epsilon for numerical stability to avoid zero divison error
-                (denominator wiil be always greater or equal to eps)
+            eps: A small epsilon for numerical stability to avoid zero division error
+                (denominator will be always greater or equal to eps)
 
         Shape
          - **y_pred** - torch.Tensor of shape (N, C, H, W)
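The corrected docstring describes how `eps` interacts with `smooth`: `eps` clamps the denominator from below so the score never divides by zero, even when both prediction and target are empty. A minimal scalar sketch of that behavior (this is an illustration of the documented invariant, not the library's tensor implementation):

```python
def dice_score(intersection, cardinality, smooth=0.0, eps=1e-7):
    """Soft dice coefficient on precomputed scalar sums.

    eps keeps the denominator >= eps, avoiding a zero-division
    error when both prediction and target are empty and smooth == 0.
    """
    return (2.0 * intersection + smooth) / max(cardinality + smooth, eps)

# Empty prediction and empty target: no crash, score is 0 with smooth=0.
empty = dice_score(0.0, 0.0)
# Perfect overlap: 2 * 10 / (10 + 10) = 1.
perfect = dice_score(10.0, 20.0)
```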

segmentation_models_pytorch/losses/focal.py
Lines changed: 1 addition & 1 deletion

@@ -26,7 +26,7 @@ def __init__(
         Args:
             mode: Loss mode 'binary', 'multiclass' or 'multilabel'
             alpha: Prior probability of having positive value in target.
-            gamma: Power factor for dampening weight (focal strenght).
+            gamma: Power factor for dampening weight (focal strength).
             ignore_index: If not None, targets may contain values to be ignored.
                 Target values equal to ignore_index will be ignored from loss computation.
             normalized: Compute normalized focal loss (https://arxiv.org/pdf/1909.07829.pdf).
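The `gamma` parameter documented above controls how strongly easy examples are dampened: focal loss scales the cross-entropy term by `(1 - p_t) ** gamma`, where `p_t` is the predicted probability of the true class. A minimal sketch of the binary case (a scalar illustration of the standard focal formula, not the library's implementation):

```python
import math

def binary_focal_loss(p, target, gamma=2.0):
    """Focal loss for one prediction p in (0, 1) and target in {0, 1}."""
    p_t = p if target == 1 else 1.0 - p   # probability of the true class
    ce = -math.log(p_t)                   # plain cross-entropy term
    return ((1.0 - p_t) ** gamma) * ce    # gamma dampens easy examples

# A confident correct prediction is dampened far more than a wrong one.
easy = binary_focal_loss(0.9, 1)   # small: weight (1 - 0.9)^2 = 0.01
hard = binary_focal_loss(0.1, 1)   # large: weight (1 - 0.1)^2 = 0.81
```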

segmentation_models_pytorch/losses/jaccard.py
Lines changed: 2 additions & 2 deletions

@@ -30,8 +30,8 @@ def __init__(
             from_logits: If True, assumes input is raw logits
             smooth: Smoothness constant for dice coefficient
             ignore_index: Label that indicates ignored pixels (does not contribute to loss)
-            eps: A small epsilon for numerical stability to avoid zero divison error
-                (denominator wiil be always greater or equal to eps)
+            eps: A small epsilon for numerical stability to avoid zero division error
+                (denominator will be always greater or equal to eps)
 
         Shape
          - **y_pred** - torch.Tensor of shape (N, C, H, W)

segmentation_models_pytorch/losses/lovasz.py
Lines changed: 1 addition & 1 deletion

@@ -55,7 +55,7 @@ def _lovasz_hinge(logits, labels, per_image=True, ignore=None):
 def _lovasz_hinge_flat(logits, labels):
     """Binary Lovasz hinge loss
     Args:
-        logits: [P] Variable, logits at each prediction (between -iinfinity and +iinfinity)
+        logits: [P] Variable, logits at each prediction (between -infinity and +infinity)
         labels: [P] Tensor, binary ground truth labels (0 or 1)
         ignore: label to ignore
     """
