
Commit a9d988a
Remove efficientnet-pytorch dependency
1 parent 196bbe9

File tree: 10 files changed, +23 −247 lines

README.md

Lines changed: 21 additions & 29 deletions
@@ -1,18 +1,18 @@
 <div align="center">
-
-![logo](https://i.ibb.co/dc1XdhT/Segmentation-Models-V2-Side-1-1.png)
-**Python library with Neural Networks for Image
-Segmentation based on [PyTorch](https://pytorch.org/).**
-
-[![Generic badge](https://img.shields.io/badge/License-MIT-<COLOR>.svg?style=for-the-badge)](https://github.com/qubvel/segmentation_models.pytorch/blob/main/LICENSE)
-[![GitHub Workflow Status (branch)](https://img.shields.io/github/actions/workflow/status/qubvel/segmentation_models.pytorch/tests.yml?branch=main&style=for-the-badge)](https://github.com/qubvel/segmentation_models.pytorch/actions/workflows/tests.yml)
-[![Read the Docs](https://img.shields.io/readthedocs/smp?style=for-the-badge&logo=readthedocs&logoColor=white)](https://smp.readthedocs.io/en/latest/)
+
+![logo](https://i.ibb.co/dc1XdhT/Segmentation-Models-V2-Side-1-1.png)
+**Python library with Neural Networks for Image
+Segmentation based on [PyTorch](https://pytorch.org/).**
+
+[![Generic badge](https://img.shields.io/badge/License-MIT-<COLOR>.svg?style=for-the-badge)](https://github.com/qubvel/segmentation_models.pytorch/blob/main/LICENSE)
+[![GitHub Workflow Status (branch)](https://img.shields.io/github/actions/workflow/status/qubvel/segmentation_models.pytorch/tests.yml?branch=main&style=for-the-badge)](https://github.com/qubvel/segmentation_models.pytorch/actions/workflows/tests.yml)
+[![Read the Docs](https://img.shields.io/readthedocs/smp?style=for-the-badge&logo=readthedocs&logoColor=white)](https://smp.readthedocs.io/en/latest/)
 <br>
-[![PyPI](https://img.shields.io/pypi/v/segmentation-models-pytorch?color=blue&style=for-the-badge&logo=pypi&logoColor=white)](https://pypi.org/project/segmentation-models-pytorch/)
-[![PyPI - Downloads](https://img.shields.io/pypi/dm/segmentation-models-pytorch?style=for-the-badge&color=blue)](https://pepy.tech/project/segmentation-models-pytorch)
+[![PyPI](https://img.shields.io/pypi/v/segmentation-models-pytorch?color=blue&style=for-the-badge&logo=pypi&logoColor=white)](https://pypi.org/project/segmentation-models-pytorch/)
+[![PyPI - Downloads](https://img.shields.io/pypi/dm/segmentation-models-pytorch?style=for-the-badge&color=blue)](https://pepy.tech/project/segmentation-models-pytorch)
 <br>
-[![PyTorch - Version](https://img.shields.io/badge/PYTORCH-1.4+-red?style=for-the-badge&logo=pytorch)](https://pepy.tech/project/segmentation-models-pytorch)
-[![Python - Version](https://img.shields.io/badge/PYTHON-3.9+-red?style=for-the-badge&logo=python&logoColor=white)](https://pepy.tech/project/segmentation-models-pytorch)
+[![PyTorch - Version](https://img.shields.io/badge/PYTORCH-1.4+-red?style=for-the-badge&logo=pytorch)](https://pepy.tech/project/segmentation-models-pytorch)
+[![Python - Version](https://img.shields.io/badge/PYTHON-3.9+-red?style=for-the-badge&logo=python&logoColor=white)](https://pepy.tech/project/segmentation-models-pytorch)
 
 </div>
 
@@ -23,7 +23,7 @@ The main features of this library are:
 - 124 available encoders (and 500+ encoders from [timm](https://github.com/rwightman/pytorch-image-models))
 - All encoders have pre-trained weights for faster and better convergence
 - Popular metrics and losses for training routines
-
+
 ### [📚 Project Documentation 📚](http://smp.readthedocs.io/)
 
 Visit [Read The Docs Project Page](https://smp.readthedocs.io/) or read the following README to know more about Segmentation Models Pytorch (SMP for short) library
@@ -55,7 +55,7 @@ The segmentation model is just a PyTorch `torch.nn.Module`, which can be created
 import segmentation_models_pytorch as smp
 
 model = smp.Unet(
-    encoder_name="resnet34",        # choose encoder, e.g. mobilenet_v2 or efficientnet-b7
+    encoder_name="resnet34",        # choose encoder, e.g. mobilenet_v2 or timm-efficientnet-b7
     encoder_weights="imagenet",     # use `imagenet` pre-trained weights for encoder initialization
     in_channels=1,                  # model input channels (1 for gray-scale images, 3 for RGB, etc.)
     classes=3,                      # model output channels (number of classes in your dataset)
@@ -277,14 +277,6 @@ The following is a list of supported encoders in the SMP. Select the appropriate
 
 |Encoder                         |Weights                         |Params, M                       |
 |--------------------------------|:------------------------------:|:------------------------------:|
-|efficientnet-b0                 |imagenet                        |4M                              |
-|efficientnet-b1                 |imagenet                        |6M                              |
-|efficientnet-b2                 |imagenet                        |7M                              |
-|efficientnet-b3                 |imagenet                        |10M                             |
-|efficientnet-b4                 |imagenet                        |17M                             |
-|efficientnet-b5                 |imagenet                        |28M                             |
-|efficientnet-b6                 |imagenet                        |40M                             |
-|efficientnet-b7                 |imagenet                        |63M                             |
 |timm-efficientnet-b0            |imagenet / advprop / noisy-student|4M                            |
 |timm-efficientnet-b1            |imagenet / advprop / noisy-student|6M                            |
 |timm-efficientnet-b2            |imagenet / advprop / noisy-student|7M                            |
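The rows removed above map one-to-one onto the surviving `timm-efficientnet-b*` entries with the same parameter counts, so downstream code that still names a deleted encoder can be migrated mechanically. A hypothetical helper (not part of SMP; the name is illustrative) could be as simple as:

```python
# Hypothetical helper, not part of segmentation_models_pytorch: rewrite the
# removed `efficientnet-b*` encoder names to their timm-backed replacements.
def migrate_encoder_name(name: str) -> str:
    if name.startswith("efficientnet-b"):
        return "timm-" + name
    return name
```

For example, `migrate_encoder_name("efficientnet-b7")` yields `"timm-efficientnet-b7"`, while unrelated names pass through unchanged. Note that checkpoints saved with the old efficientnet-pytorch encoders generally will not load into the timm variants without state-dict key remapping, since the two implementations name their layers differently.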
@@ -361,7 +353,7 @@ The following is a list of supported encoders in the SMP. Select the appropriate
 
 Backbone from SegFormer pretrained on Imagenet! Can be used with other decoders from package, you can combine Mix Vision Transformer with Unet, FPN and others!
 
-Limitations:
+Limitations:
 
 - encoder is **not** supported by Linknet, Unet++
 - encoder is supported by FPN only for encoder **depth = 5**
@@ -423,18 +415,18 @@ Total number of supported encoders: 549
 ##### Input channels
 Input channels parameter allows you to create models, which process tensors with arbitrary number of channels.
 If you use pretrained weights from imagenet - weights of first convolution will be reused. For
-1-channel case it would be a sum of weights of first convolution layer, otherwise channels would be
+1-channel case it would be a sum of weights of first convolution layer, otherwise channels would be
 populated with weights like `new_weight[:, i] = pretrained_weight[:, i % 3]` and than scaled with `new_weight * 3 / new_in_channels`.
 ```python
 model = smp.FPN('resnet34', in_channels=1)
 mask = model(torch.ones([1, 1, 64, 64]))
 ```
 
-##### Auxiliary classification output
-All models support `aux_params` parameters, which is default set to `None`.
+##### Auxiliary classification output
+All models support `aux_params` parameters, which is default set to `None`.
 If `aux_params = None` then classification auxiliary output is not created, else
 model produce not only `mask`, but also `label` output with shape `NC`.
-Classification head consists of GlobalPooling->Dropout(optional)->Linear->Activation(optional) layers, which can be
+Classification head consists of GlobalPooling->Dropout(optional)->Linear->Activation(optional) layers, which can be
 configured by `aux_params` as follows:
 ```python
 aux_params=dict(
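The channel-repopulation rule quoted in the Input channels hunk above can be sketched directly in PyTorch (illustrative names and shapes; a typical imagenet encoder has a 7×7 first convolution, and the 1-channel case instead sums the three kernels, as the text states):

```python
import torch

# Repopulate a 3-channel pretrained first-conv kernel for 5 input channels,
# following `new_weight[:, i] = pretrained_weight[:, i % 3]`, then rescale
# by 3 / new_in_channels to keep the expected activation magnitude.
pretrained_weight = torch.randn(64, 3, 7, 7)  # (out_ch, in_ch, kH, kW)
new_in_channels = 5

new_weight = torch.empty(64, new_in_channels, 7, 7)
for i in range(new_in_channels):
    new_weight[:, i] = pretrained_weight[:, i % 3]  # cycle R, G, B, R, G, ...
new_weight = new_weight * 3 / new_in_channels
```

Channel 3 of the new kernel is channel 0 of the pretrained kernel scaled by 3/5, and so on cyclically.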
@@ -472,7 +464,7 @@ $ pip install git+https://github.com/qubvel/segmentation_models.pytorch
 
 ### 🤝 Contributing
 
-#### Install SMP
+#### Install SMP
 
 ```bash
 make install_dev  # create .venv, install SMP in dev mode
@@ -484,7 +476,7 @@ make install_dev  # create .venv, install SMP in dev mode
 make fixup  # Ruff for formatting and lint checks
 ```
 
-#### Update table with encoders
+#### Update table with encoders
 
 ```bash
 make table  # generate a table with encoders and print to stdout

docs/conf.py

Lines changed: 0 additions & 1 deletion
@@ -102,7 +102,6 @@ def get_version():
     "PIL",
     "pretrainedmodels",
     "torchvision",
-    "efficientnet-pytorch",
     "segmentation_models_pytorch.encoders",
     "segmentation_models_pytorch.utils",
     # 'segmentation_models_pytorch.base',

docs/encoders.rst

Lines changed: 0 additions & 16 deletions
@@ -215,22 +215,6 @@ EfficientNet
 +------------------------+--------------------------------------+-------------+
 | Encoder                | Weights                              | Params, M   |
 +========================+======================================+=============+
-| efficientnet-b0        | imagenet                             | 4M          |
-+------------------------+--------------------------------------+-------------+
-| efficientnet-b1        | imagenet                             | 6M          |
-+------------------------+--------------------------------------+-------------+
-| efficientnet-b2        | imagenet                             | 7M          |
-+------------------------+--------------------------------------+-------------+
-| efficientnet-b3        | imagenet                             | 10M         |
-+------------------------+--------------------------------------+-------------+
-| efficientnet-b4        | imagenet                             | 17M         |
-+------------------------+--------------------------------------+-------------+
-| efficientnet-b5        | imagenet                             | 28M         |
-+------------------------+--------------------------------------+-------------+
-| efficientnet-b6        | imagenet                             | 40M         |
-+------------------------+--------------------------------------+-------------+
-| efficientnet-b7        | imagenet                             | 63M         |
-+------------------------+--------------------------------------+-------------+
 | timm-efficientnet-b0   | imagenet / advprop / noisy-student   | 4M          |
 +------------------------+--------------------------------------+-------------+
 | timm-efficientnet-b1   | imagenet / advprop / noisy-student   | 6M          |

docs/quickstart.rst

Lines changed: 2 additions & 2 deletions
@@ -6,11 +6,11 @@
 Segmentation model is just a PyTorch nn.Module, which can be created as easy as:
 
 .. code-block:: python
-
+
     import segmentation_models_pytorch as smp
 
     model = smp.Unet(
-        encoder_name="resnet34",        # choose encoder, e.g. mobilenet_v2 or efficientnet-b7
+        encoder_name="resnet34",        # choose encoder, e.g. mobilenet_v2 or timm-efficientnet-b7
         encoder_weights="imagenet",     # use `imagenet` pre-trained weights for encoder initialization
         in_channels=1,                  # model input channels (1 for gray-scale images, 3 for RGB, etc.)
         classes=3,                      # model output channels (number of classes in your dataset)

pyproject.toml

Lines changed: 0 additions & 1 deletion
@@ -17,7 +17,6 @@ classifiers = [
     'Programming Language :: Python :: Implementation :: PyPy',
 ]
 dependencies = [
-    'efficientnet-pytorch>=0.6.1',
     'huggingface-hub>=0.24',
     'numpy>=1.19.3',
     'pillow>=8',

requirements/minimum.old

Lines changed: 0 additions & 1 deletion
@@ -1,4 +1,3 @@
-efficientnet-pytorch==0.6.1
 huggingface-hub==0.24.0
 numpy==1.19.3
 pillow==8.0.0

requirements/required.txt

Lines changed: 0 additions & 1 deletion
@@ -1,4 +1,3 @@
-efficientnet-pytorch==0.7.1
 huggingface_hub==0.27.0
 numpy==2.2.1
 pillow==11.0.0

segmentation_models_pytorch/encoders/__init__.py

Lines changed: 0 additions & 2 deletions
@@ -9,7 +9,6 @@
 from .densenet import densenet_encoders
 from .inceptionresnetv2 import inceptionresnetv2_encoders
 from .inceptionv4 import inceptionv4_encoders
-from .efficientnet import efficient_net_encoders
 from .mobilenet import mobilenet_encoders
 from .xception import xception_encoders
 from .timm_efficientnet import timm_efficientnet_encoders
@@ -34,7 +33,6 @@
 encoders.update(densenet_encoders)
 encoders.update(inceptionresnetv2_encoders)
 encoders.update(inceptionv4_encoders)
-encoders.update(efficient_net_encoders)
 encoders.update(mobilenet_encoders)
 encoders.update(xception_encoders)
 encoders.update(timm_efficientnet_encoders)
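The two removed lines above drop `efficient_net_encoders` from the module-level registry, which is plain dict merging: each encoder module exports a dict of configs, and they are folded into one lookup table. A toy sketch of that pattern (illustrative entries and an illustrative lookup helper, not the real configs or the package's actual `get_encoder`):

```python
# Toy version of the registry in encoders/__init__.py: per-module dicts are
# merged into one `encoders` table; dropping a module drops its names.
timm_efficientnet_encoders = {"timm-efficientnet-b0": {"params": {"depth": 5}}}
mobilenet_encoders = {"mobilenet_v2": {"params": {"depth": 5}}}

encoders = {}
encoders.update(timm_efficientnet_encoders)
encoders.update(mobilenet_encoders)
# efficient_net_encoders is no longer merged, so "efficientnet-b0" lookups fail.

def get_encoder_config(name: str) -> dict:
    try:
        return encoders[name]
    except KeyError:
        raise KeyError(
            f"Wrong encoder name `{name}`, supported encoders: {sorted(encoders)}"
        )
```

After this commit, code requesting one of the old `efficientnet-b*` names surfaces as an unsupported-encoder error rather than silently falling back.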

segmentation_models_pytorch/encoders/efficientnet.py

Lines changed: 0 additions & 177 deletions
This file was deleted.

tests/encoders/test_smp_encoders.py

Lines changed: 0 additions & 17 deletions
@@ -22,20 +22,3 @@ class TestMixTransformerEncoder(base.BaseEncoderTester):
         if not RUN_ALL_ENCODERS
         else ["mit_b0", "mit_b1", "mit_b2", "mit_b3", "mit_b4", "mit_b5"]
     )
-
-
-class TestEfficientNetEncoder(base.BaseEncoderTester):
-    encoder_names = (
-        ["efficientnet-b0"]
-        if not RUN_ALL_ENCODERS
-        else [
-            "efficientnet-b0",
-            "efficientnet-b1",
-            "efficientnet-b2",
-            "efficientnet-b3",
-            "efficientnet-b4",
-            "efficientnet-b5",
-            "efficientnet-b6",
-            # "efficientnet-b7", # extra large model
-        ]
-    )

0 commit comments