
Commit a9e1117

Merge pull request #96 from deel-ai/feat/keras3_test_uniformization
Feat/keras3 test uniformization

2 parents: add83a8 + d7ef3dc

31 files changed: +6458 additions, -3371 deletions

CONTRIBUTING.md
Lines changed: 1 addition & 1 deletion

@@ -32,7 +32,7 @@ $ make test

 This command will:

 - check your code with black PEP-8 formatter and flake8 linter.
-- run `unittest` on the `tests/` folder with different Python and TensorFlow versions.
+- run `pytest` on the `tests/` folder with different Python and TensorFlow versions.

 ## Submitting your changes

README.md
Lines changed: 19 additions & 22 deletions

@@ -1,8 +1,8 @@
 <div align="center">
     <picture>
-        <source media="(prefers-color-scheme: dark)" srcset="./docs/assets/logo_white.svg">
-        <source media="(prefers-color-scheme: light)" srcset="./docs/assets/logo.svg">
-        <img alt="Library Banner" src="./docs/assets/logo.svg">
+        <source media="(prefers-color-scheme: dark)" srcset="./docs/assets/banner_dark_deellip.png">
+        <source media="(prefers-color-scheme: light)" srcset="./docs/assets/banner_light_deellip.png">
+        <img alt="DEEL-LIP Banner" src="./docs/assets/banner_light_deellip.png">
     </picture>
 </div>
 <br>

@@ -12,10 +12,10 @@
     <img src="https://img.shields.io/pypi/pyversions/deel-lip.svg">
   </a>
   <a href="https://github.com/deel-ai/deel-lip/actions/workflows/python-linters.yml">
-    <img alt="PyLint" src="https://github.com/deel-ai/deel-lip/actions/workflows/python-linters.yml/badge.svg?branch=master">
+    <img alt="PyLint" src="https://github.com/deel-ai/deel-lip/actions/workflows/python-linters.yml/badge.svg?branch=keras3">
   </a>
   <a href="https://github.com/deel-ai/deel-lip/actions/workflows/python-tests.yml">
-    <img alt="Tox" src="https://github.com/deel-ai/deel-lip/actions/workflows/python-linters.yml/badge.svg?branch=master">
+    <img alt="Tox" src="https://github.com/deel-ai/deel-lip/actions/workflows/python-linters.yml/badge.svg?branch=keras3">
   </a>
   <a href="https://pypi.org/project/deel-lip">
     <img alt="Pypi" src="https://img.shields.io/pypi/v/deel-lip.svg">

@@ -38,19 +38,15 @@ has many applications ranging from adversarial robustness to Wasserstein
 distance estimation.

 This library provides an efficient implementation of **k-Lipschitz
-layers for keras**.
+layers for Keras 3**.

 > [!CAUTION]
-> **Incompatibility with TensorFlow >= 2.16 and Keras 3**
+> **This branch is a major update designed for compatibility with TensorFlow 2.16 and Keras 3**
 >
-> Due to significant changes introduced in TensorFlow version 2.16 and Keras 3, this
-> package is currently incompatible with TensorFlow versions 2.16 and above. Users are
-> advised to use TensorFlow versions lower than 2.16 to ensure compatibility and proper
+> Due to significant changes introduced in TensorFlow 2.16 and Keras 3, backward compatibility
+> with older TensorFlow versions can no longer be maintained. Users requiring an older version
+> of TensorFlow are encouraged to clone the master branch to ensure compatibility and proper
 > functionality of this package.
->
-> We are actively working on updating the package to support Keras 3. Please stay tuned
-> for updates. For now, make sure to install an earlier version of TensorFlow by
-> specifying it in your environment.

 ## 📚 Table of contents

@@ -69,7 +65,7 @@ layers for keras**.
 You can install ``deel-lip`` directly from pypi:

 ```python
-pip install deel-lip
+pip install git+https://github.com/deel-ai/deel-lip.git@keras3
 ```

 In order to use ``deel-lip``, you also need a [valid tensorflow

@@ -80,12 +76,12 @@ supports tensorflow versions 2.x.

 | **Tutorial Name** | Notebook |
 | :-------------------------- | :----------------------------: |
-| Getting Started 1 - Creating a 1-Lipschitz neural network | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deel-ai/deel-lip/blob/master/docs/notebooks/Getting_started_1.ipynb) |
-| Getting Started 2 - Training an adversarially robust 1-Lipschitz neural network | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deel-ai/deel-lip/blob/master/docs/notebooks/Getting_started_2.ipynb) |
-| Wasserstein distance estimation on toy example | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deel-ai/deel-lip/blob/master/docs/notebooks/demo1.ipynb) |
-| HKR Classifier on toy dataset | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deel-ai/deel-lip/blob/master/docs/notebooks/demo2.ipynb) |
-| HKR classifier on MNIST dataset | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deel-ai/deel-lip/blob/master/docs/notebooks/demo3.ipynb) |
-| HKR multiclass and fooling | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deel-ai/deel-lip/blob/master/docs/notebooks/demo4.ipynb) |
+| Getting Started 1 - Creating a 1-Lipschitz neural network | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deel-ai/deel-lip/blob/keras3/docs/notebooks/Getting_started_1.ipynb) |
+| Getting Started 2 - Training an adversarially robust 1-Lipschitz neural network | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deel-ai/deel-lip/blob/keras3/docs/notebooks/Getting_started_2.ipynb) |
+| Wasserstein distance estimation on toy example | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deel-ai/deel-lip/blob/keras3/docs/notebooks/demo1.ipynb) |
+| HKR Classifier on toy dataset | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deel-ai/deel-lip/blob/keras3/docs/notebooks/demo2.ipynb) |
+| HKR classifier on MNIST dataset | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deel-ai/deel-lip/blob/keras3/docs/notebooks/demo3.ipynb) |
+| HKR multiclass and fooling | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deel-ai/deel-lip/blob/keras3/docs/notebooks/demo4.ipynb) |

 ## 📦 What's Included

@@ -127,7 +123,8 @@ More from the DEEL project:

 - [Xplique](https://github.com/deel-ai/xplique) a Python library exclusively dedicated to explaining neural networks.
 - [Influenciae](https://github.com/deel-ai/influenciae) Python toolkit dedicated to computing influence values for the discovery of potentially problematic samples in a dataset.
-- [deel-torchlip](https://github.com/deel-ai/deel-torchlip) a Python library for training k-Lipschitz neural networks on PyTorch.
+- [deel-TorchLip](https://github.com/deel-ai/deel-torchlip) a Python library for training k-Lipschitz neural networks on PyTorch.
+- [Oodeel](https://github.com/deel-ai/oodeel) a Python library for post-hoc deep OOD (Out-of-Distribution) detection on already trained neural network image classifiers.
 - [DEEL White paper](https://arxiv.org/abs/2103.10529) a summary of the DEEL team on the challenges of certifiable AI and the role of data quality, representativity and explainability for this purpose.

 ## 🙏 Acknowledgments

deel/lip/layers/convolutional.py
Lines changed: 4 additions & 0 deletions

@@ -441,6 +441,10 @@ def __init__(
             raise ValueError("SpectralConv2DTranspose does not support dilation rate")
         if self.padding != "same":
             raise ValueError("SpectralConv2DTranspose only supports padding='same'")
+        if self.output_padding is not None:
+            raise ValueError(
+                "SpectralConv2DTranspose only supports output_padding=None"
+            )
         self.set_klip_factor(k_coef_lip)
         self.u = None
         self.sig = None

deel/lip/layers/pooling.py
Lines changed: 56 additions & 26 deletions

@@ -22,6 +22,7 @@
 """

 import numpy as np
+from typing import Tuple
 import keras
 import keras.ops as K
 from keras.saving import register_keras_serializable

@@ -91,6 +92,7 @@ def __init__(
             data_format=data_format,
             **kwargs,
         )
+        self.built = False
         self.set_klip_factor(k_coef_lip)
         self._kwargs = kwargs

@@ -181,6 +183,7 @@ def __init__(
             data_format=data_format,
             **kwargs,
         )
+        self.built = False
         self.set_klip_factor(k_coef_lip)
         self.eps_grad_sqrt = eps_grad_sqrt
         self._kwargs = kwargs

@@ -246,6 +249,7 @@ def __init__(self, data_format=None, k_coef_lip=1.0, eps_grad_sqrt=1e-6, **kwargs):
         super(ScaledGlobalL2NormPooling2D, self).__init__(
             data_format=data_format, **kwargs
         )
+        self.built = False
         self.set_klip_factor(k_coef_lip)
         self.eps_grad_sqrt = eps_grad_sqrt
         self._kwargs = kwargs

@@ -308,6 +312,7 @@ def __init__(self, data_format=None, k_coef_lip=1.0, **kwargs):
         super(ScaledGlobalAveragePooling2D, self).__init__(
             data_format=data_format, **kwargs
         )
+        self.built = False
         self.set_klip_factor(k_coef_lip)
         self._kwargs = kwargs

@@ -363,32 +368,44 @@ def __init__(
             **kwargs: params passed to the Layers constructor
         """
         super(InvertibleDownSampling, self).__init__(name=name, dtype=dtype, **kwargs)
-        self.pool_size = pool_size
         self.data_format = data_format

-    def call(self, inputs):
-        if self.data_format == "channels_last":
-            return K.concatenate(
-                [
-                    inputs[
-                        :, i :: self.pool_size[0], j :: self.pool_size[1], :
-                    ]  # for now we handle only channels last
-                    for i in range(self.pool_size[0])
-                    for j in range(self.pool_size[1])
-                ],
-                axis=-1,
-            )
-        else:
-            return K.concatenate(
-                [
-                    inputs[
-                        :, :, i :: self.pool_size[0], j :: self.pool_size[1]
-                    ]
-                    for i in range(self.pool_size[0])
-                    for j in range(self.pool_size[1])
-                ],
-                axis=1,
-            )
+        ndims = 2
+        ks: Tuple[int, ...]
+        if isinstance(pool_size, int):
+            ks = (pool_size,) * ndims
+        else:
+            ks = tuple(pool_size)
+
+        if len(ks) != ndims:
+            raise ValueError(
+                f"Expected {ndims}-dimensional pool_size, but "
+                f"got {len(ks)}-dimensional instead"
+            )
+        self.pool_size = ks
+
+    def call(self, inputs):
+        if self.data_format == "channels_first":
+            # convert to channels_last
+            inputs = K.transpose(inputs, [0, 2, 3, 1])
+        # from shape (bs, wo*pw, ho*ph, c) to (bs, wo, ho, c*pw*ph)
+        input_shape = K.shape(inputs)
+        w, h, c_in = input_shape[1], input_shape[2], input_shape[3]
+        pw, ph = self.pool_size
+        wo = w // pw
+        ho = h // ph
+        inputs = K.reshape(inputs, (-1, wo, pw, ho, ph, c_in))
+        inputs = K.transpose(
+            inputs, [0, 1, 3, 5, 2, 4]
+        )  # (bs, wo, pw, ho, ph, c) -> (bs, wo, ho, c, pw, ph)
+        inputs = K.reshape(inputs, (-1, wo, ho, c_in * pw * ph))
+
+        if self.data_format == "channels_first":
+            inputs = K.transpose(
+                inputs, [0, 3, 1, 2]
+            )  # (bs, wo, ho, c*pw*ph) -> (bs, c*pw*ph, wo, ho)
+        return inputs

     def get_config(self):
         config = {

@@ -427,9 +444,22 @@ def __init__(
             **kwargs: params passed to the Layers constructor
         """
         super(InvertibleUpSampling, self).__init__(name=name, dtype=dtype, **kwargs)
-        self.pool_size = pool_size
         self.data_format = data_format

+        ndims = 2
+        ks: Tuple[int, ...]
+        if isinstance(pool_size, int):
+            ks = (pool_size,) * ndims
+        else:
+            ks = tuple(pool_size)
+
+        if len(ks) != ndims:
+            raise ValueError(
+                f"Expected {ndims}-dimensional pool_size, but "
+                f"got {len(ks)}-dimensional instead"
+            )
+        self.pool_size = ks
+
     def call(self, inputs):
         if self.data_format == "channels_first":
             # convert to channels_last

@@ -439,12 +469,12 @@ def call(self, inputs):
         w, h, c_in = input_shape[1], input_shape[2], input_shape[3]
         pw, ph = self.pool_size
         c = c_in // (pw * ph)
-        inputs = K.reshape(inputs, (-1, w, h, pw, ph, c))
+        inputs = K.reshape(inputs, (-1, w, h, c, pw, ph))
         inputs = K.transpose(
             K.reshape(
                 K.transpose(
-                    inputs, [0, 5, 2, 4, 1, 3]
-                ),  # (bs, w, h, pw, ph, c) -> (bs, c, w, pw, h, ph)
+                    inputs, [0, 3, 2, 5, 1, 4]
+                ),  # (bs, w, h, c, pw, ph) -> (bs, c, w, pw, h, ph)
                 (-1, c, w, pw, h * ph),
             ),  # (bs, c, w, pw, h, ph) -> (bs, c, w, pw, h*ph) merge last axes
             [
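The rewritten `InvertibleDownSampling.call` replaces the strided-slice concatenation with a reshape/transpose pipeline (space-to-depth), and `InvertibleUpSampling` applies the inverse permutation. The scheme can be checked in isolation with NumPy standing in for `keras.ops`; the function names below are illustrative, not deel-lip API:

```python
import numpy as np

def invertible_downsample(x, pool_size):
    """Space-to-depth: (bs, wo*pw, ho*ph, c) -> (bs, wo, ho, c*pw*ph),
    using the same reshape/transpose scheme as the new call()."""
    pw, ph = pool_size
    bs, w, h, c = x.shape
    wo, ho = w // pw, h // ph
    x = x.reshape(bs, wo, pw, ho, ph, c)
    x = x.transpose(0, 1, 3, 5, 2, 4)  # (bs, wo, ho, c, pw, ph)
    return x.reshape(bs, wo, ho, c * pw * ph)

def invertible_upsample(x, pool_size):
    """Depth-to-space: exact inverse of invertible_downsample."""
    pw, ph = pool_size
    bs, wo, ho, cp = x.shape
    c = cp // (pw * ph)
    x = x.reshape(bs, wo, ho, c, pw, ph)
    x = x.transpose(0, 1, 4, 2, 5, 3)  # (bs, wo, pw, ho, ph, c)
    return x.reshape(bs, wo * pw, ho * ph, c)

x = np.arange(2 * 4 * 6 * 3, dtype=np.float32).reshape(2, 4, 6, 3)
y = invertible_downsample(x, (2, 3))
assert y.shape == (2, 2, 2, 18)
# The round trip is lossless, which is the "invertible" property of the layer pair.
assert np.array_equal(invertible_upsample(y, (2, 3)), x)
```

The up-sampling permutation `(0, 1, 4, 2, 5, 3)` is exactly the inverse of the down-sampling permutation `(0, 1, 3, 5, 2, 4)`, which is what makes the round trip exact.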

deel/lip/utils.py
Lines changed: 25 additions & 24 deletions

@@ -61,35 +61,36 @@ def evaluate_lip_const_gen(

 def evaluate_lip_const(model: keras.Model, x, eps=1e-4, seed=None):
     """
-    Evaluate the Lipschitz constant of a model, with the naive method.
-    Please note that the estimation of the lipschitz constant is done locally around
-    input sample. This may not correctly estimate the behaviour in the whole domain.
+    Evaluate the Lipschitz constant of a model using the Jacobian of the model.
+    The estimation is done locally around input samples.

     Args:
-        model: built keras model used to make predictions
-        x: inputs used to compute the lipschitz constant
-        eps (float): magnitude of noise to add to input in order to compute the constant
-        seed (int): seed used when generating the noise (can be set to None)
+        model (Model): A built Keras model used to make predictions.
+        x (np.ndarray): Input samples used to compute the Lipschitz constant.

     Returns:
-        float: the empirically evaluated lipschitz constant. The computation might also
-            be inaccurate in high dimensional space.
+        float: The empirically evaluated Lipschitz constant.
     """
-    y_pred = model.predict(x)
-    # x = np.repeat(x, 100, 0)
-    # y_pred = np.repeat(y_pred, 100, 0)
-    x_var = x + keras.random.uniform(
-        shape=x.shape, minval=eps * 0.25, maxval=eps, seed=seed
-    )
-    y_pred_var = model.predict(x_var)
-    dx = x - x_var
-    dfx = y_pred - y_pred_var
-    ndx = K.sqrt(K.sum(K.square(dx), axis=range(1, len(x.shape))))
-    ndfx = K.sqrt(K.sum(K.square(dfx), axis=range(1, len(y_pred.shape))))
-    lip_cst = K.max(ndfx / ndx)
-    print(f"lip cst: {lip_cst:.3f}")
-    return lip_cst
+    batch_size = x.shape[0]
+    x = keras.ops.convert_to_tensor(x, dtype=model.inputs[0].dtype)
+
+    if keras.config.backend() == "tensorflow":
+        import tensorflow as tf
+
+        with tf.GradientTape() as tape:
+            tape.watch(x)
+            y_pred = model(x, training=False)
+        batch_jacobian = tape.batch_jacobian(y_pred, x)
+    else:
+        raise NotImplementedError("Only the TensorFlow backend is supported for now.")
+
+    # Flatten input/output dimensions for spectral norm computation
+    xdim = keras.ops.prod(keras.ops.shape(x)[1:])
+    ydim = keras.ops.prod(keras.ops.shape(y_pred)[1:])
+    batch_jacobian = keras.ops.reshape(batch_jacobian, (batch_size, ydim, xdim))
+
+    # Compute spectral norm of the Jacobians and return the maximum
+    spectral_norms = keras.ops.linalg.norm(batch_jacobian, ord=2, axis=[-2, -1])
+    return keras.ops.max(spectral_norms).numpy()


 def _padding_circular(x, circular_paddings):
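The spectral-norm step at the heart of the new `evaluate_lip_const` can be sanity-checked on a toy linear model, where the exact Lipschitz constant is the largest singular value of the weight matrix. A NumPy sketch, with `np.linalg.norm` standing in for the `keras.ops.linalg.norm` call in the diff (the helper name is illustrative):

```python
import numpy as np

def lipschitz_from_jacobians(batch_jacobian):
    """Given per-sample Jacobians of shape (bs, ydim, xdim), return the maximum
    spectral norm, mirroring the final step of evaluate_lip_const."""
    # ord=2 over the last two axes computes each matrix's largest singular value
    norms = np.linalg.norm(batch_jacobian, ord=2, axis=(-2, -1))
    return norms.max()

# For a linear map f(x) = x @ W.T, the Jacobian is W at every sample, so the
# estimate equals the largest singular value of W exactly.
rng = np.random.default_rng(0)
W = rng.standard_normal((2, 3))
batch_jacobian = np.broadcast_to(W, (5, 2, 3))
est = lipschitz_from_jacobians(batch_jacobian)
assert np.isclose(est, np.linalg.svd(W, compute_uv=False)[0])
```

For a nonlinear model the per-sample Jacobians differ, and the maximum over the batch is only a local lower bound on the true Lipschitz constant, consistent with the docstring's caveat that the estimation is local around the input samples.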
