
Commit 19eeebe

Fixup, fix type hint
1 parent 6207310 commit 19eeebe

File tree

2 files changed (+3, -3 lines changed):
  segmentation_models_pytorch/decoders/dpt/decoder.py
  segmentation_models_pytorch/decoders/dpt/model.py


segmentation_models_pytorch/decoders/dpt/decoder.py
Lines changed: 2 additions & 2 deletions

@@ -1,7 +1,7 @@
 import torch
 import torch.nn as nn
 from segmentation_models_pytorch.base.modules import Activation
-from typing import Optional, Sequence
+from typing import Optional, Sequence, Union, Callable


 class ProjectionBlock(nn.Module):

@@ -241,7 +241,7 @@ def __init__(
         self,
         in_channels: int,
         out_channels: int,
-        activation: Optional[str] = None,
+        activation: Optional[Union[str, Callable]] = None,
         kernel_size: int = 3,
         upsampling: float = 2.0,
     ):
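The widened hint reflects that the parameter accepts either an activation name or a callable, not just a string. A minimal sketch of the distinction, assuming the usual dispatch of smp.base.modules.Activation (string names looked up, callables instantiated, None meaning identity); build_head_activation is a hypothetical stand-in, not the library's API:

from typing import Optional, Union, Callable

import torch.nn as nn


def build_head_activation(activation: Optional[Union[str, Callable]] = None) -> nn.Module:
    # Hypothetical stand-in mirroring the assumed Activation dispatch.
    if activation is None:
        return nn.Identity()          # no activation requested
    if isinstance(activation, str):   # name lookup: the pre-commit behavior
        return {"sigmoid": nn.Sigmoid, "tanh": nn.Tanh}[activation]()
    return activation()               # callable: now covered by the type hint


print(build_head_activation("sigmoid"))  # a string name still works
print(build_head_activation(nn.ReLU))    # a callable now type-checks as well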

segmentation_models_pytorch/decoders/dpt/model.py
Lines changed: 1 addition & 1 deletion

@@ -25,7 +25,7 @@ class DPT(SegmentationModel):

     Note:
         Since this model uses a Vision Transformer backbone, it typically requires a fixed input image size.
-        To handle variable input sizes, you can set `dynamic_img_size=True` in the model initialization
+        To handle variable input sizes, you can set `dynamic_img_size=True` in the model initialization
         (if supported by the specific `timm` encoder). You can check if an encoder requires fixed size
         using `model.encoder.is_fixed_input_size`, and get the required input dimensions from
         `model.encoder.input_size`, however it's no guarantee that information is available.
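A short usage sketch of the checks described in this docstring note. The encoder name is illustrative, and since the note itself says the attributes may be absent, the reads are guarded with getattr:

import segmentation_models_pytorch as smp

# Encoder name is an assumption for illustration; DPT expects a
# timm ViT-style encoder (the "tu-" prefix).
model = smp.DPT(encoder_name="tu-vit_base_patch16_224", classes=1)

# Both attributes are optional per the docstring, so read them defensively.
if getattr(model.encoder, "is_fixed_input_size", False):
    print("fixed input size:", getattr(model.encoder, "input_size", "unknown"))

# Per the note, variable input sizes may be enabled at init when the timm
# encoder supports it (kwarg forwarding is assumed here):
# model = smp.DPT(encoder_name="tu-vit_base_patch16_224", dynamic_img_size=True)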
