# keras3 (development version)

## Added compatibility with Keras v3.7.0. User-facing changes:

### New functions

#### Activations
- `activation_celu()`
- `activation_glu()`
- `activation_hard_shrink()`
- `activation_hard_tanh()`
- `activation_log_sigmoid()`
- `activation_soft_shrink()`
- `activation_squareplus()`
- `activation_tanh_shrink()`
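
These wrap the corresponding Keras activations one-to-one. A minimal usage
sketch (values are illustrative; passing the function object to a layer
assumes the usual keras3 convention for `activation` arguments):

```r
library(keras3)

# Apply a new activation directly to a tensor.
# celu(x) = max(0, x) + min(0, alpha * (exp(x / alpha) - 1))
x <- op_convert_to_tensor(c(-2, -1, 0, 1, 2))
activation_celu(x)

# Or use it as a layer's activation.
layer <- layer_dense(units = 8, activation = activation_celu)
```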

#### Configuration
- `config_disable_flash_attention()`
- `config_enable_flash_attention()`
- `config_is_flash_attention_enabled()`
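
These toggle flash attention globally for backends that support it. A quick
sketch:

```r
library(keras3)

# Opt in to flash attention where the backend supports it.
config_enable_flash_attention()

# Inspect the current global setting.
config_is_flash_attention_enabled()

# Turn it back off, e.g. while debugging numerics.
config_disable_flash_attention()
```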

#### Layers and Initializers
- `initializer_stft()`
- `layer_max_num_bounding_boxes()`
- `layer_stft_spectrogram()`
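
A sketch of the spectrogram layer in an audio front end. The
`frame_length`/`frame_step` arguments mirror the upstream Keras
`STFTSpectrogram` layer and are assumptions here:

```r
library(keras3)

# One second of mono 16 kHz audio in, a spectrogram out.
inputs <- keras_input(shape = c(16000, 1))
outputs <- inputs |>
  layer_stft_spectrogram(frame_length = 256, frame_step = 128)
model <- keras_model(inputs, outputs)
```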

#### Losses and Metrics
- `loss_circle()`
- `metric_concordance_correlation()`
- `metric_pearson_correlation()`
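
The correlation metrics work standalone or in `compile()`; `loss_circle()`
targets deep-metric-learning setups (class labels plus embeddings). A minimal
sketch with illustrative values:

```r
library(keras3)

y_true <- op_convert_to_tensor(c(1, 2, 3, 4), dtype = "float32")
y_pred <- op_convert_to_tensor(c(1.1, 1.9, 3.2, 3.8), dtype = "float32")

# Standalone use: accumulate state, then read the result.
m <- metric_pearson_correlation()
m$update_state(y_true, y_pred)
m$result()

# Or attach to a model:
# model |> compile(optimizer = "adam", loss = "mse",
#                  metrics = metric_concordance_correlation())
```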

#### Operations
- `op_celu()`
- `op_exp2()`
- `op_glu()`
- `op_hard_shrink()`
- `op_hard_tanh()`
- `op_ifft2()`
- `op_inner()`
- `op_soft_shrink()`
- `op_squareplus()`
- `op_tanh_shrink()`
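
These follow their NumPy / PyTorch namesakes. A couple of quick checks
(the hard-shrink threshold default of 0.5 is assumed from the PyTorch
convention):

```r
library(keras3)

op_exp2(op_convert_to_tensor(c(1, 2, 3)))   # 2^x -> 2, 4, 8

a <- op_convert_to_tensor(c(1, 2, 3))
b <- op_convert_to_tensor(c(4, 5, 6))
op_inner(a, b)                              # 1*4 + 2*5 + 3*6 -> 32

# Zeroes values inside (-0.5, 0.5) and passes the rest through.
op_hard_shrink(op_convert_to_tensor(c(-1, -0.2, 0.3, 1)))
```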

### New arguments

* `callback_backup_and_restore()`: Added `double_checkpoint` argument to save a fallback checkpoint
* `callback_tensorboard()`: Added support for the `profile_batch` argument
* `layer_group_query_attention()`: Added `flash_attention` and `seed` arguments
* `layer_multi_head_attention()`: Added `flash_attention` argument (see the sketch below)
* `metric_sparse_top_k_categorical_accuracy()`: Added `from_sorted_ids` argument
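
A sketch of the per-layer flash attention opt-in and the new fallback
checkpoint; the `backup_dir` path is illustrative:

```r
library(keras3)

# Request flash attention for a single attention layer.
attn <- layer_multi_head_attention(
  num_heads = 4,
  key_dim = 64,
  flash_attention = TRUE
)

# Keep a secondary fallback checkpoint while training.
cb <- callback_backup_and_restore(
  backup_dir = "training_backup",  # illustrative path
  double_checkpoint = TRUE
)
```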

### Performance improvements

* Added native Flash Attention support for GPU (via cuDNN) and TPU (via a Pallas kernel) in the JAX backend
* Added opt-in native Flash Attention support for GPU in the PyTorch backend
* Enabled additional kernel fusion via `bias_add` in the TensorFlow backend
* Added support for Intel XPU devices in the PyTorch backend

- `install_keras()` changes: if a GPU is available, the default is now to
  install a CPU build of TensorFlow and a GPU build of JAX. To use a GPU in
  the current session, call `use_backend("jax")`.
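
Under the new default, a typical session on a GPU machine looks like this
(a sketch; select the backend before building any models):

```r
library(keras3)

# One-time setup: installs a CPU build of TensorFlow and, when a GPU
# is detected, a GPU build of JAX.
# install_keras()

# Per session: switch to the JAX backend to use the GPU.
use_backend("jax")
```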