Manually adding additional backbones? #684
-
Is there a way to add new backbones, for example an EfficientNet? As far as I am aware, only four backbones are supported: cait_m48_448, deit_base_distilled_patch16_384, resnet18, wide_resnet50_2.
-
@PabloFerrerGo, it is possible. In addition to timm, we are also adding a Torch FX feature extractor in PR #675. You can get the list of graph nodes (layer names) of a model with:

>>> import torchvision
>>> from torchvision.models.feature_extraction import get_graph_node_names
>>> model = torchvision.models.resnet18()
>>> train_nodes, eval_nodes = get_graph_node_names(model)

Once you know the names of the layers, you can extract the features with:

>>> import torch
>>> from anomalib.models.components.feature_extractors import get_torchfx_feature_extractor
>>> from torchvision.models.efficientnet import EfficientNet_B5_Weights
>>> feature_extractor = get_torchfx_feature_extractor(
        backbone="efficientnet_b5", return_nodes=["features.6.8"], weights=EfficientNet_B5_Weights.DEFAULT
    )
>>> input = torch.rand((32, 3, 256, 256))
>>> features = feature_extractor(input)
>>> [layer for layer in features.keys()]

Note that this has not been merged yet. We plan on merging it early next week.
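For reference, a minimal sketch of what the node listing looks like for resnet18 (the names shown are illustrative and may vary slightly across torchvision versions):

>>> import torchvision
>>> from torchvision.models.feature_extraction import get_graph_node_names
>>> train_nodes, eval_nodes = get_graph_node_names(torchvision.models.resnet18())
>>> train_nodes[:6]   # the input placeholder plus the first few layers
['x', 'conv1', 'bn1', 'relu', 'maxpool', 'layer1.0.conv1']
>>> train_nodes[-1]   # the final classification head
'fc'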
-
@Tekno-H it depends on whether you want to modify it from the configuration file or from the code. The feature extractor in the cflow model is defined here:
https://github.com/openvinotoolkit/anomalib/blob/6b799ce65c128dd7a1a867a7b87e9290378e7f55/anomalib/models/cflow/torch_model.py#L41
and the FeatureExtractor object creates the backbone here using timm:
https://github.com/openvinotoolkit/anomalib/blob/6b799ce65c128dd7a1a867a7b87e9290378e7f55/anomalib/models/components/feature_extractors/feature_extractor.py#L48-L54
where backbone is the model string used by the timm library, and layers are the graph nodes from which the features are extracted.

From the Code

>>> import torch
>>> from anomalib.models.components.feature_extractors import FeatureExtractor
>>> input = torch.rand((4, 3, 256, 256))
# MobileNetV2 feature extractor
>>> model = FeatureExtractor(backbone="mobilenetv2_050", layers=["blocks"], pre_trained=False)
>>> output = model(input)
>>> output["blocks"].shape
torch.Size([4, 8, 128, 128])
# FBNetV3, which uses MobileNetV3
>>> model = FeatureExtractor(backbone="fbnetv3_b", layers=["blocks"], pre_trained=False)
>>> output = model(input)
>>> output["blocks"].shape
torch.Size([4, 16, 128, 128])

From the Config File

# MobileNetV2
model:
  name: cflow
  backbone: mobilenetv2_050
  pre_trained: true
  layers:
    - blocks
  decoder: freia-cflow
  ...

# FBNetV3, which uses MobileNetV3
model:
  name: cflow
  backbone: fbnetv3_b
  pre_trained: true
  layers:
    - blocks
  decoder: freia-cflow
  ...

You need to have a look at the models supported by timm and at the graph nodes for the layers. Hope this answers your question.
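As a pointer for that last step, a quick sketch of how to browse the backbones timm supports (standard timm calls; the exact lists depend on your installed timm version):

>>> import timm
>>> len(timm.list_models())                   # number of available backbone names
>>> timm.list_models("*efficientnet*")[:5]    # filter with a wildcard pattern
>>> timm.list_models("mobilenetv2*")          # confirms that e.g. "mobilenetv2_050" is available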
-
In addition to what Samet mentioned, now that the TorchFX PR is merged, you can use the Python API as follows.

With torchvision models:

>>> import torch
>>> from torchvision.models.efficientnet import EfficientNet_B5_Weights
>>> from anomalib.models.components.feature_extractors import TorchFXFeatureExtractor
>>> feature_extractor = TorchFXFeatureExtractor(
backbone="efficientnet_b5",
return_nodes=["features.6.8"],
weights=EfficientNet_B5_Weights.DEFAULT
)
>>> input = torch.rand((32, 3, 256, 256))
>>> features = feature_extractor(input)
>>> [layer for layer in features.keys()]
["features.6.8"]
>>> [feature.shape for feature in features.values()]
[torch.Size([32, 304, 8, 8])]

With custom models:

>>> feature_extractor = TorchFXFeatureExtractor(
"path.to.CustomModel", ["linear_relu_stack.3"], weights="path/to/weights.pth"
)
>>> input = torch.randn(1, 1, 28, 28)
>>> features = feature_extractor(input)
>>> [layer for layer in features.keys()]
["linear_relu_stack.3"] Here, you can also directly pass the model class rather than the path to it. Say you have >>> model = TorchFXFeatureExtractor(
backbone=DummyModel,
weights="dummy_model.pt",
return_nodes=["conv3"],
)
>>> features = model(test_input)
>>> features["conv3"].shape
torch.Size([32, 1, 244, 244])
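DummyModel and test_input are not defined in this thread; a hypothetical model consistent with the snippet above (a conv3 node returning a single-channel map with the same spatial size as the input) could look like this:

import torch
from torch import nn


class DummyModel(nn.Module):
    """Hypothetical custom model; only the ``conv3`` node is extracted above."""

    def __init__(self) -> None:
        super().__init__()
        # padding=1 preserves spatial size, so a (32, 1, 244, 244) input
        # gives a (32, 1, 244, 244) feature map at ``conv3``
        self.conv1 = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(8, 8, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(8, 1, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        return self.conv3(x)


test_input = torch.rand((32, 1, 244, 244))  # illustrative input shape
torch.save(DummyModel().state_dict(), "dummy_model.pt")  # checkpoint referenced via ``weights=`` above (exact format may depend on the anomalib version)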
-
@samet-akcay @ashwinvaidya17 Hi, thank you for your valuable comments! I have made several attempts, as described below:
I chose to change resnet18 to mobilenetv2_050. To find the layers of mobilenetv2_050, I followed the example above.
However, I cannot import mobilenetv2_050 from torchvision.models; I can only import mobilenet_v2().
These are the layers I got for train_nodes:
Then I changed my config, starting from the padim config, as below:
However, I got this error:
Could you give me suggestions to overcome the error?
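A note on the import error above: mobilenetv2_050 is a timm model name rather than a torchvision one, so its layer names have to be read off the timm model. A minimal sketch (standard timm/PyTorch calls; the exact module names depend on the timm version):

>>> import timm
>>> model = timm.create_model("mobilenetv2_050", pretrained=False)
>>> [name for name, _ in model.named_children()]   # top-level modules; "blocks" is the layer used in the configs above
>>> [name for name, _ in model.named_modules()][:10]   # deeper module names, if finer-grained layers are needed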