Commit 7fa1020

laol777 authored and qubvel committed
resnext model (#55)
* added instagram pretrains for resnext encoders
* added resnext info into readme
1 parent 5b105a8 commit 7fa1020

File tree

2 files changed: +135 −14 lines


README.md

Lines changed: 16 additions & 14 deletions
@@ -60,23 +60,25 @@ preprocess_input = get_preprocessing_fn('resnet18', pretrained='imagenet')
 
 #### Encoders <a name="encoders"></a>
 
-| Type       | Encoder names                                                   |
-|------------|-----------------------------------------------------------------|
-| VGG        | vgg11, vgg13, vgg16, vgg19, vgg11bn, vgg13bn, vgg16bn, vgg19bn  |
-| DenseNet   | densenet121, densenet169, densenet201, densenet161              |
-| DPN        | dpn68, dpn68b, dpn92, dpn98, dpn107, dpn131                     |
-| Inception  | inceptionresnetv2                                               |
-| ResNet     | resnet18, resnet34, resnet50, resnet101, resnet152              |
-| SE-ResNet  | se_resnet50, se_resnet101, se_resnet152                         |
-| SE-ResNeXt | se_resnext50_32x4d, se_resnext101_32x4d                         |
-| SENet      | senet154                                                        | |
+| Type       | Encoder names                                                                              |
+|------------|--------------------------------------------------------------------------------------------|
+| VGG        | vgg11, vgg13, vgg16, vgg19, vgg11bn, vgg13bn, vgg16bn, vgg19bn                             |
+| DenseNet   | densenet121, densenet169, densenet201, densenet161                                         |
+| DPN        | dpn68, dpn68b, dpn92, dpn98, dpn107, dpn131                                                |
+| Inception  | inceptionresnetv2                                                                          |
+| ResNet     | resnet18, resnet34, resnet50, resnet101, resnet152                                         |
+| ResNeXt    | resnext50_32x4d, resnext101_32x8d, resnext101_32x16d, resnext101_32x32d, resnext101_32x48d |
+| SE-ResNet  | se_resnet50, se_resnet101, se_resnet152                                                    |
+| SE-ResNeXt | se_resnext50_32x4d, se_resnext101_32x4d                                                    |
+| SENet      | senet154                                                                                   |
 
 #### Weights <a name="weights"></a>
 
-| Weights name | Encoder names         |
-|--------------|-----------------------|
-| imagenet+5k  | dpn68b, dpn92, dpn107 |
-| imagenet     | * all other encoders  |
+| Weights name | Encoder names |
+|--------------|---------------|
+| imagenet+5k  | dpn68b, dpn92, dpn107 |
+| imagenet     | vgg11, vgg13, vgg16, vgg19, vgg11bn, vgg13bn, vgg16bn, vgg19bn, <br> densenet121, densenet169, densenet201, densenet161, dpn68, dpn98, dpn131, <br> inceptionresnetv2, <br> resnet18, resnet34, resnet50, resnet101, resnet152, <br> resnext50_32x4d, resnext101_32x8d, <br> se_resnet50, se_resnet101, se_resnet152, <br> se_resnext50_32x4d, se_resnext101_32x4d, <br> senet154 |
+| [instagram](https://pytorch.org/hub/facebookresearch_WSL-Images_resnext/) | resnext101_32x8d, resnext101_32x16d, resnext101_32x32d, resnext101_32x48d |
 
 ### Models API <a name="api"></a>
 - `model.encoder` - pretrained backbone to extract features of different spatial resolution
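For context, here is a minimal usage sketch of the new encoders listed above, assuming the library's `Unet(encoder_name, encoder_weights=...)` constructor and the `get_preprocessing_fn` helper referenced in the hunk header; everything other than the encoder and weight names from the tables is illustrative, not part of this commit.

```python
# Usage sketch (assumption for illustration, not code from this commit).
import segmentation_models_pytorch as smp
from segmentation_models_pytorch.encoders import get_preprocessing_fn

# 'instagram' weights are listed only for the 32x8d/16d/32d/48d ResNeXt variants;
# resnext50_32x4d and resnext101_32x8d also ship 'imagenet' weights.
model = smp.Unet('resnext101_32x8d', encoder_weights='instagram', classes=1)

# Preprocessing should match the encoder's pretrained settings
# (mean/std defined in resnet.py below).
preprocess_input = get_preprocessing_fn('resnext101_32x8d', pretrained='instagram')
```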

segmentation_models_pytorch/encoders/resnet.py

Lines changed: 119 additions & 0 deletions
@@ -81,4 +81,123 @@ def load_state_dict(self, state_dict, **kwargs):
             'layers': [3, 8, 36, 3],
         },
     },
+
+    'resnext50_32x4d': {
+        'encoder': ResNetEncoder,
+        'pretrained_settings': {
+            'imagenet': {
+                'url': 'https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth',
+                'input_space': 'RGB',
+                'input_size': [3, 224, 224],
+                'input_range': [0, 1],
+                'mean': [0.485, 0.456, 0.406],
+                'std': [0.229, 0.224, 0.225],
+                'num_classes': 1000
+            }
+        },
+        'out_shapes': (2048, 1024, 512, 256, 64),
+        'params': {
+            'block': Bottleneck,
+            'layers': [3, 4, 6, 3],
+            'groups': 32,
+            'width_per_group': 4
+        },
+    },
+
+    'resnext101_32x8d': {
+        'encoder': ResNetEncoder,
+        'pretrained_settings': {
+            'imagenet': {
+                'url': 'https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth',
+                'input_space': 'RGB',
+                'input_size': [3, 224, 224],
+                'input_range': [0, 1],
+                'mean': [0.485, 0.456, 0.406],
+                'std': [0.229, 0.224, 0.225],
+                'num_classes': 1000
+            },
+            'instagram': {
+                'url': 'https://download.pytorch.org/models/ig_resnext101_32x8-c38310e5.pth',
+                'input_space': 'RGB',
+                'input_size': [3, 224, 224],
+                'input_range': [0, 1],
+                'mean': [0.485, 0.456, 0.406],
+                'std': [0.229, 0.224, 0.225],
+                'num_classes': 1000
+            }
+        },
+        'out_shapes': (2048, 1024, 512, 256, 64),
+        'params': {
+            'block': Bottleneck,
+            'layers': [3, 4, 23, 3],
+            'groups': 32,
+            'width_per_group': 8
+        },
+    },
+
+    'resnext101_32x16d': {
+        'encoder': ResNetEncoder,
+        'pretrained_settings': {
+            'instagram': {
+                'url': 'https://download.pytorch.org/models/ig_resnext101_32x16-c6f796b0.pth',
+                'input_space': 'RGB',
+                'input_size': [3, 224, 224],
+                'input_range': [0, 1],
+                'mean': [0.485, 0.456, 0.406],
+                'std': [0.229, 0.224, 0.225],
+                'num_classes': 1000
+            }
+        },
+        'out_shapes': (2048, 1024, 512, 256, 64),
+        'params': {
+            'block': Bottleneck,
+            'layers': [3, 4, 23, 3],
+            'groups': 32,
+            'width_per_group': 16
+        },
+    },
+
+    'resnext101_32x32d': {
+        'encoder': ResNetEncoder,
+        'pretrained_settings': {
+            'instagram': {
+                'url': 'https://download.pytorch.org/models/ig_resnext101_32x32-e4b90b00.pth',
+                'input_space': 'RGB',
+                'input_size': [3, 224, 224],
+                'input_range': [0, 1],
+                'mean': [0.485, 0.456, 0.406],
+                'std': [0.229, 0.224, 0.225],
+                'num_classes': 1000
+            }
+        },
+        'out_shapes': (2048, 1024, 512, 256, 64),
+        'params': {
+            'block': Bottleneck,
+            'layers': [3, 4, 23, 3],
+            'groups': 32,
+            'width_per_group': 32
+        },
+    },
+
+    'resnext101_32x48d': {
+        'encoder': ResNetEncoder,
+        'pretrained_settings': {
+            'instagram': {
+                'url': 'https://download.pytorch.org/models/ig_resnext101_32x48-3e41cc8a.pth',
+                'input_space': 'RGB',
+                'input_size': [3, 224, 224],
+                'input_range': [0, 1],
+                'mean': [0.485, 0.456, 0.406],
+                'std': [0.229, 0.224, 0.225],
+                'num_classes': 1000
+            }
+        },
+        'out_shapes': (2048, 1024, 512, 256, 64),
+        'params': {
+            'block': Bottleneck,
+            'layers': [3, 4, 23, 3],
+            'groups': 32,
+            'width_per_group': 48
+        },
+    },
 }
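The `params` entries above mirror torchvision's ResNeXt configurations: the standard `Bottleneck` block with `groups` and `width_per_group` forwarded to the ResNet constructor. Below is a rough sketch of the equivalent stand-alone construction; it is an assumption for illustration, not part of this commit, and presumes the linked checkpoint is a plain state dict like torchvision's other ResNet weights.

```python
# Sketch: reproduce the resnext101_32x8d 'instagram' configuration directly
# with torchvision (illustrative only, not code from this commit).
import torch
from torchvision.models.resnet import ResNet, Bottleneck

# Same block/layers/groups/width_per_group as the settings entry above.
model = ResNet(block=Bottleneck, layers=[3, 4, 23, 3], groups=32, width_per_group=8)

# Facebook WSL ("Instagram") weights from the URL used in the settings above.
state_dict = torch.hub.load_state_dict_from_url(
    'https://download.pytorch.org/models/ig_resnext101_32x8-c38310e5.pth'
)
model.load_state_dict(state_dict)
```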
