
Commit f8a1e7f

Readme
1 parent ff8779c commit f8a1e7f

1 file changed: +40 -42 lines changed


README.md

Lines changed: 40 additions & 42 deletions
@@ -94,21 +94,6 @@ The aim is also to serve as a benchmark of algorithms and metrics for research o
 <img src="./examples/metrics.png">
 <img src="./examples/road.png">
 
-
-----------
-# Choosing the Target Layer
-You need to choose the target layer to compute the CAM for.
-Some common choices are:
-- FasterRCNN: model.backbone
-- Resnet18 and 50: model.layer4[-1]
-- VGG, densenet161 and mobilenet: model.features[-1]
-- mnasnet1_0: model.layers[-1]
-- ViT: model.blocks[-1].norm1
-- SwinT: model.layers[-1].blocks[-1].norm1
-
-If you pass a list with several layers, the CAM will be averaged across them.
-This can be useful if you're not sure what layer will perform best.
-
 ----------
 
 # Usage examples
@@ -141,36 +126,20 @@ with GradCAM(model=model, target_layers=target_layers) as cam:
 [cam.py](https://github.com/jacobgil/pytorch-grad-cam/blob/master/cam.py) has a more detailed usage example.
 
 ----------
+# Choosing the Target Layer
+You need to choose the target layer to compute the CAM for.
+Some common choices are:
+- FasterRCNN: model.backbone
+- Resnet18 and 50: model.layer4[-1]
+- VGG, densenet161 and mobilenet: model.features[-1]
+- mnasnet1_0: model.layers[-1]
+- ViT: model.blocks[-1].norm1
+- SwinT: model.layers[-1].blocks[-1].norm1
 
-# Metrics and evaluating the explanations
-
-```python
-from pytorch_grad_cam.utils.model_targets import ClassifierOutputSoftmaxTarget
-from pytorch_grad_cam.metrics.cam_mult_image import CamMultImageConfidenceChange
-# Create the metric target, often the confidence drop in a score of some category
-metric_target = ClassifierOutputSoftmaxTarget(281)
-scores, batch_visualizations = CamMultImageConfidenceChange()(input_tensor,
-                                                              inverse_cams, targets, model, return_visualization=True)
-visualization = deprocess_image(batch_visualizations[0, :])
-
-# State of the art metric: Remove and Debias
-from pytorch_grad_cam.metrics.road import ROADMostRelevantFirst, ROADLeastRelevantFirst
-cam_metric = ROADMostRelevantFirst(percentile=75)
-scores, perturbation_visualizations = cam_metric(input_tensor,
-                                                 grayscale_cams, targets, model, return_visualization=True)
-
-# You can also average across different percentiles, and combine
-# (LeastRelevantFirst - MostRelevantFirst) / 2
-from pytorch_grad_cam.metrics.road import (ROADMostRelevantFirstAverage,
-                                           ROADLeastRelevantFirstAverage,
-                                           ROADCombined)
-cam_metric = ROADCombined(percentiles=[20, 40, 60, 80])
-scores = cam_metric(input_tensor, grayscale_cams, targets, model)
-```
-
+If you pass a list with several layers, the CAM will be averaged across them.
+This can be useful if you're not sure what layer will perform best.
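For example, here is a minimal sketch of passing several target layers with a torchvision resnet50; the specific layer pair, the dummy input and the class index are only illustrative:

```python
import torch
from torchvision.models import resnet50
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

model = resnet50(pretrained=True).eval()

# Two candidate layers; the resulting CAM is averaged across them.
target_layers = [model.layer4[-2], model.layer4[-1]]

input_tensor = torch.randn(1, 3, 224, 224)   # replace with a preprocessed image batch
targets = [ClassifierOutputTarget(281)]      # ImageNet class 281 (tabby cat), as in the examples above

with GradCAM(model=model, target_layers=target_layers) as cam:
    grayscale_cam = cam(input_tensor=input_tensor, targets=targets)  # numpy array, one CAM per image
```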
 ----------
 
-
 # Adapting for new architectures and tasks
 
 Methods like GradCAM were designed for and were originally mostly applied on classification models,
@@ -201,6 +170,35 @@ targets = [ClassifierOutputTarget(281)]
 However, in more advanced cases, you might want another behaviour.
 Check [here](https://github.com/jacobgil/pytorch-grad-cam/blob/master/pytorch_grad_cam/utils/model_targets.py) for more examples.
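For instance, a target is just a callable that maps the model output to the scalar that gets backpropagated, so a custom behaviour can be a few lines. The class below is not part of the library, only a sketch of that interface; it sums the scores of two categories:

```python
class SumOfCategoriesTarget:
    """Hypothetical custom target: backpropagate the sum of two class scores."""

    def __init__(self, category_a, category_b):
        self.category_a = category_a
        self.category_b = category_b

    def __call__(self, model_output):
        # model_output is the model's raw output for one example (1D) or a batch (2D);
        # indexing along the last dimension handles both cases.
        return model_output[..., self.category_a] + model_output[..., self.category_b]


targets = [SumOfCategoriesTarget(281, 282)]  # e.g. two similar ImageNet cat classes
```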
+----------
+
+# Metrics and evaluating the explanations
+
+```python
+from pytorch_grad_cam.utils.model_targets import ClassifierOutputSoftmaxTarget
+from pytorch_grad_cam.metrics.cam_mult_image import CamMultImageConfidenceChange
+# Create the metric target, often the confidence drop in a score of some category
+metric_target = ClassifierOutputSoftmaxTarget(281)
+scores, batch_visualizations = CamMultImageConfidenceChange()(input_tensor,
+                                                              inverse_cams, targets, model, return_visualization=True)
+visualization = deprocess_image(batch_visualizations[0, :])
+
+# State of the art metric: Remove and Debias
+from pytorch_grad_cam.metrics.road import ROADMostRelevantFirst, ROADLeastRelevantFirst
+cam_metric = ROADMostRelevantFirst(percentile=75)
+scores, perturbation_visualizations = cam_metric(input_tensor,
+                                                 grayscale_cams, targets, model, return_visualization=True)
+
+# You can also average across different percentiles, and combine
+# (LeastRelevantFirst - MostRelevantFirst) / 2
+from pytorch_grad_cam.metrics.road import (ROADMostRelevantFirstAverage,
+                                           ROADLeastRelevantFirstAverage,
+                                           ROADCombined)
+cam_metric = ROADCombined(percentiles=[20, 40, 60, 80])
+scores = cam_metric(input_tensor, grayscale_cams, targets, model)
+```
+
+----------
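As a rough sketch of how such scores can be used in practice: assuming the metric returns one score per image in the batch, and reusing model, target_layers and input_tensor from the target-layer example above, you could compare two CAM methods from the same package on identical inputs; with the combined ROAD metric a higher score is usually read as a better explanation, though that interpretation is not spelled out here:

```python
import numpy as np
from pytorch_grad_cam import GradCAM, EigenCAM
from pytorch_grad_cam.metrics.road import ROADCombined
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

targets = [ClassifierOutputTarget(281)]
cam_metric = ROADCombined(percentiles=[20, 40, 60, 80])

results = {}
for name, method in [("GradCAM", GradCAM), ("EigenCAM", EigenCAM)]:
    with method(model=model, target_layers=target_layers) as cam:
        grayscale_cams = cam(input_tensor=input_tensor, targets=targets)
    # Average the per-image scores into a single number for the comparison.
    results[name] = float(np.mean(cam_metric(input_tensor, grayscale_cams, targets, model)))

print(results)
```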
 
 # Tutorials
 Here you can find detailed examples of how to use this for various custom use cases like object detection:
