
Commit 094ba73

Merge pull request #476 from aai-institute/feature/ekfac_new_framework
Implement ekfac with new interface
2 parents e3643f1 + 62c69c8 commit 094ba73

File tree: 13 files changed, +954 −94 lines

CHANGELOG.md

Lines changed: 2 additions & 0 deletions
@@ -8,6 +8,8 @@
   for single dimensional arrays [PR #485](https://github.com/aai-institute/pyDVL/pull/485)
 - Fix implementations of `to` methods of `TorchInfluenceFunctionModel` implementations
   [PR #487](https://github.com/aai-institute/pyDVL/pull/487)
+- Implement new method: `EkfacInfluence`
+  [PR #451](https://github.com/aai-institute/pyDVL/issues/451)
 
 ## 0.8.0 - 🆕 New interfaces, scaling computation, bug fixes and improvements 🎁
 

README.md

Lines changed: 2 additions & 1 deletion
@@ -318,7 +318,8 @@ We currently implement the following papers:
 - Schioppa, Andrea, Polina Zablotskaia, David Vilar, and Artem Sokolov.
   [Scaling Up Influence Functions](http://arxiv.org/abs/2112.03052).
   In Proceedings of the AAAI-22. arXiv, 2021.
-
+- James Martens, Roger Grosse, [Optimizing Neural Networks with Kronecker-factored Approximate Curvature](https://arxiv.org/abs/1503.05671), International Conference on Machine Learning (ICML), 2015.
+- George, Thomas, César Laurent, Xavier Bouthillier, Nicolas Ballas, Pascal Vincent, [Fast Approximate Natural Gradient Descent in a Kronecker-factored Eigenbasis](https://arxiv.org/abs/1806.03884), Advances in Neural Information Processing Systems 31, 2018.
 
 # License

docs/assets/pydvl.bib

Lines changed: 17 additions & 0 deletions
@@ -342,4 +342,21 @@ @InProceedings{kwon_data_2023
   pdf = {https://proceedings.mlr.press/v202/kwon23e/kwon23e.pdf},
   url = {https://proceedings.mlr.press/v202/kwon23e.html},
   abstract = {Data valuation is a powerful framework for providing statistical insights into which data are beneficial or detrimental to model training. Many Shapley-based data valuation methods have shown promising results in various downstream tasks, however, they are well known to be computationally challenging as it requires training a large number of models. As a result, it has been recognized as infeasible to apply to large datasets. To address this issue, we propose Data-OOB, a new data valuation method for a bagging model that utilizes the out-of-bag estimate. The proposed method is computationally efficient and can scale to millions of data by reusing trained weak learners. Specifically, Data-OOB takes less than $2.25$ hours on a single CPU processor when there are $10^6$ samples to evaluate and the input dimension is $100$. Furthermore, Data-OOB has solid theoretical interpretations in that it identifies the same important data point as the infinitesimal jackknife influence function when two different points are compared. We conduct comprehensive experiments using 12 classification datasets, each with thousands of sample sizes. We demonstrate that the proposed method significantly outperforms existing state-of-the-art data valuation methods in identifying mislabeled data and finding a set of helpful (or harmful) data points, highlighting the potential for applying data values in real-world applications.}
+}
+
+@article{george2018fast,
+  title={Fast approximate natural gradient descent in a kronecker factored eigenbasis},
+  author={George, Thomas and Laurent, C{\'e}sar and Bouthillier, Xavier and Ballas, Nicolas and Vincent, Pascal},
+  journal={Advances in Neural Information Processing Systems},
+  volume={31},
+  year={2018}
+}
+
+@inproceedings{martens2015optimizing,
+  title={Optimizing neural networks with kronecker-factored approximate curvature},
+  author={Martens, James and Grosse, Roger},
+  booktitle={International conference on machine learning},
+  pages={2408--2417},
+  year={2015},
+  organization={PMLR}
 }

docs/influence/influence_function_model.md

Lines changed: 30 additions & 2 deletions
@@ -87,7 +87,7 @@ the Hessian and \(V\) contains the corresponding eigenvectors. See also
 
 ```python
 from pydvl.influence.torch import ArnoldiInfluence
-if_model = ArnoldiInfluence
+if_model = ArnoldiInfluence(
     model,
     loss,
     hessian_regularization=0.0,
@@ -97,4 +97,32 @@ if_model = ArnoldiInfluence
 ```
 These implementations represent the calculation logic on in memory tensors. To scale up to large collection
 of data, we map these influence function models over these collections. For a detailed discussion see the
-documentation page [Scaling Computation](scaling_computation.md).
+documentation page [Scaling Computation](scaling_computation.md).
+
+### Eigenvalue Corrected K-FAC
+
+K-FAC, short for Kronecker-Factored Approximate Curvature, is a method that approximates the Fisher information matrix ([FIM](https://en.wikipedia.org/wiki/Fisher_information)) of a model. It can be shown that, for classification models with appropriate loss functions, the FIM is equal to the Hessian of the model’s loss over the dataset. In this restricted but nonetheless important context, K-FAC offers an efficient way to approximate the Hessian and hence the influence scores.
+For more details refer to the original paper [@martens2015optimizing].
+
+The K-FAC method is implemented in the class [EkfacInfluence](pydvl/influence/torch/influence_function_model.py). The following code snippet shows how to use it to calculate the influence function of a model. Note that, in contrast to the other methods for influence function calculation, K-FAC does not require the loss function as an input, because the current implementation is only applicable to classification models with a cross-entropy loss function.
+
+```python
+from pydvl.influence.torch import EkfacInfluence
+if_model = EkfacInfluence(
+    model,
+    hessian_regularization=0.0,
+)
+```
+Upon initialization, the K-FAC method parses the model and determines which layers require gradients and which do not. It then calculates influence scores only for the layers that require gradients. The current implementation supports only linear layers, so if the model contains other layers that require gradients, the K-FAC method raises a NotImplementedLayerRepresentationException.
+
+A further improvement of the K-FAC method is the Eigenvalue Corrected K-FAC (EKFAC) method [@george2018fast], which additionally re-fits the eigenvalues of the Hessian, providing a more accurate approximation. EKFAC is built on top of K-FAC and is enabled by setting `update_diagonal=True` when initialising [EkfacInfluence](pydvl/influence/torch/influence_function_model.py). The following code snippet shows how to use the EKFAC method to calculate the influence function of a model.
+
+```python
+from pydvl.influence.torch import EkfacInfluence
+if_model = EkfacInfluence(
+    model,
+    update_diagonal=True,
+    hessian_regularization=0.0,
+)
+if_model.fit(train_loader)
+```
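As an aside, not part of this commit's diff: a minimal usage sketch of the fitted model, assuming the `influences` query method of pyDVL's new influence interface and a toy classification model built only from `Linear` layers. All data, loader, and model names below are placeholders invented for illustration.

```python
# Minimal, hypothetical usage sketch (not from this commit). It assumes the
# influences() query of the new interface; the toy data and model are
# placeholders created here only for illustration.
import torch
from torch.utils.data import DataLoader, TensorDataset
from pydvl.influence.torch import EkfacInfluence

torch.manual_seed(0)
# EkfacInfluence assumes a classification model trained with cross-entropy
# loss and currently supports only Linear layers.
model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.Linear(32, 3))
x_train, y_train = torch.randn(100, 10), torch.randint(0, 3, (100,))
x_test, y_test = torch.randn(20, 10), torch.randint(0, 3, (20,))
train_loader = DataLoader(TensorDataset(x_train, y_train), batch_size=32)

if_model = EkfacInfluence(model, update_diagonal=True, hessian_regularization=0.1)
if_model.fit(train_loader)

# Pairwise influence of each training point on each test point: shape [20, 100]
scores = if_model.influences(x_test, y_test, x_train, y_train)
print(scores.shape)
```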

notebooks/influence_wine.ipynb

Lines changed: 308 additions & 78 deletions
Large diffs are not rendered by default.

notebooks/support/torch.py

Lines changed: 0 additions & 2 deletions
@@ -74,8 +74,6 @@ def __init__(
         layers.append(nn.Tanh())
         layers.pop()
 
-        layers.append(nn.Softmax(dim=-1))
-
         self.layers = nn.Sequential(*layers)
 
     def forward(self, x: torch.Tensor) -> torch.Tensor:
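A hedged aside on why the final `Softmax` is plausibly dropped here (the commit itself gives no rationale): `torch.nn.CrossEntropyLoss` applies log-softmax to raw logits internally, so a network feeding a cross-entropy loss, as the EKFAC implementation assumes, should output unnormalized scores. A small sketch:

```python
# Sketch illustrating why the model should output raw logits rather than
# softmax probabilities: CrossEntropyLoss applies log-softmax internally.
import torch
import torch.nn as nn

logits = torch.randn(8, 3)                 # raw outputs, no Softmax layer
targets = torch.randint(0, 3, (8,))
loss = nn.CrossEntropyLoss()(logits, targets)

# Equivalent manual computation: log-softmax followed by negative log-likelihood
manual = nn.NLLLoss()(torch.log_softmax(logits, dim=-1), targets)
print(torch.allclose(loss, manual))        # True
```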

src/pydvl/influence/base_influence_function_model.py

Lines changed: 6 additions & 0 deletions
@@ -36,6 +36,12 @@ def __init__(self):
         )
 
 
+class NotImplementedLayerRepresentationException(ValueError):
+    def __init__(self, module_id: str):
+        message = f"Only Linear layers are supported, but found module {module_id} requiring grad."
+        super().__init__(message)
+
+
 """Type variable for tensors, i.e. sequences of numbers"""
 TensorType = TypeVar("TensorType", bound=Collection)
 DataLoaderType = TypeVar("DataLoaderType", bound=Iterable)
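A hypothetical sketch of how an implementation might raise this exception while scanning a model's modules. The helper `validate_layers` is invented here for illustration and is not part of this commit; only the exception class and its `module_id` argument come from the diff above.

```python
# Hypothetical helper (not part of this commit): any module with trainable
# parameters that is not nn.Linear is rejected, mirroring the Linear-only
# restriction described in the docs.
import torch.nn as nn
from pydvl.influence.base_influence_function_model import (
    NotImplementedLayerRepresentationException,
)


def validate_layers(model: nn.Module) -> None:
    for name, module in model.named_modules():
        has_trainable_params = any(
            p.requires_grad for p in module.parameters(recurse=False)
        )
        if has_trainable_params and not isinstance(module, nn.Linear):
            raise NotImplementedLayerRepresentationException(module_id=name)


validate_layers(nn.Sequential(nn.Linear(4, 4), nn.Tanh()))   # ok: Tanh has no parameters
# validate_layers(nn.Sequential(nn.Conv2d(1, 1, 3)))         # would raise
```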

src/pydvl/influence/torch/__init__.py

Lines changed: 1 addition & 0 deletions
@@ -2,5 +2,6 @@
     ArnoldiInfluence,
     CgInfluence,
     DirectInfluence,
+    EkfacInfluence,
     LissaInfluence,
 )
