Releases: aimat-lab/gcnn_keras
kgcnn v4.0.2
kgcnn v4.0.1
- Removed unused layers and added manual model building in scripts and training functions, since with keras==3.0.5 the PyTorch trainer tries to rebuild the model even if it is already built, and does so eagerly without proper tensor input, which causes crashes for almost every model in kgcnn.
- Fix error in `ExtensiveMolecularLabelScaler.transform`: missing default value.
- Added further benchmark results for kgcnn version 4.
- Fix error in `kgcnn.layers.geom.PositionEncodingBasisLayer`.
- Fix error in `kgcnn.literature.GCN.make_model_weighted`.
- Fix error in `kgcnn.literature.AttentiveFP.make_model`.
- Had to change serialization for activation functions, since with keras>=3.0.2 custom strings are not allowed and also cause clashes with built-in functions. We catch defaults to stay as backward compatible as possible and changed to serialization dictionaries. Adapted all hyperparameters.
- Renamed `leaky_relu` and `swish` in `kgcnn.ops.activ` to `leaky_relu2` and `swish2`.
- Fix error in jax scatter min/max functions.
- Added `kgcnn.__safe_scatter_max_min_to_zero__` for tensorflow and jax backend scattering, with default `True`.
- Added simple ragged support for loss and metrics.
- Added simple ragged support for `train_force.py`.
- Implemented random equivariant initialization for PAiNN.
- Implemented charge and dipole output for HDNNP2nd.
- Implemented jax backend for force models.
- Fix `GraphBatchNormalization`.
- Fix error in `kgcnn.io.loader` for unused IDs and graph state input.
- Added experimental `DisjointForceMeanAbsoluteError`.
kgcnn v4.0.0
Completely reworked version of kgcnn for Keras 3.0 and multi-backend support. A lot of fundamental changes have been made.
However, we tried to keep as much of the API from kgcnn 3.0 so that models in literature can be used with minimal changes.
Mainly, the `input_tensor_type="ragged"` model parameter has to be added if ragged tensors are used as input in tensorflow.
For very few models also the order of inputs had to be changed.
Also note that the input embedding layer requires integer tensor input and does not cast from float anymore.
The scope of models has been reduced for initial release but will be extended in upcoming versions.
Note that some changes also stem from keras API changes, for example the `learning_rate` parameter or serialization.
Moreover, tensorflow addons had to be dropped for keras 3.0.
The general representation of graphs has been changed from ragged tensors (tensorflow only, not supported by keras 3.0) to
the disjoint graph representation compatible with e.g. PyTorch Geometric.
Input can be padded or (still) ragged, or a direct disjoint representation with a batch loader
(see the models chapter in the docs).
For jax we added a `padded_disjoint` parameter that can enable jit-compilable jax models but requires a data loader,
which is not yet thoroughly implemented in kgcnn. For padded samples it can already be tested,
but padding each sample is a much larger overhead than padding the batch.
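As an illustration of the disjoint representation (not kgcnn's loader code): a batch of graphs is stored as one concatenated node-feature array plus a batch-assignment vector, with edge indices shifted by each graph's node offset, the same convention PyTorch Geometric uses.

```python
import numpy as np

# Two small example graphs with 2 and 3 nodes (made-up features).
node_features = [np.array([[0.], [1.]]), np.array([[2.], [3.], [4.]])]
edge_indices = [np.array([[0, 1]]), np.array([[0, 1], [1, 2]])]

# Disjoint batch: concatenate nodes, shift edge indices per graph.
offsets = np.cumsum([0] + [len(n) for n in node_features[:-1]])
nodes = np.concatenate(node_features, axis=0)            # shape (5, 1)
edges = np.concatenate(
    [idx + off for idx, off in zip(edge_indices, offsets)], axis=0)
batch_id = np.concatenate(
    [np.full(len(n), i) for i, n in enumerate(node_features)])

print(edges)     # [[0 1] [2 3] [3 4]]
print(batch_id)  # [0 0 1 1 1]
```

Because the batch is one big graph of disconnected components, message passing needs no padding, and per-graph pooling reduces over `batch_id`.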
Some other changes:
- Reworked training scripts to have a single `train_graph.py` script. Command line arguments are now optional and just used for verification, all but `category`, which has to select a model/hyperparameter combination from the hyper file, since the hyperparameter file already contains all necessary information.
- Train test indices can now also be set and loaded from the dataset directly.
- Scaler behaviour has changed with regard to `transform_dataset`. Key names of properties to transform have been moved to the constructor! Also be sure to check `StandardLabelScaler` if you want to scale regression targets, since target properties are the default here.
- Literature models have an optional output scaler from the new `kgcnn.layers.scale` layer, controlled by the `output_scaling` model argument.
- Input embedding in literature models is now controlled with separate `input_node_embedding` or `input_edge_embedding` arguments, which can be set to `None` for no embedding. Also, embedding input tokens must be of dtype int now; no auto-casting from float anymore.
- New module `kgcnn.ops` with `kgcnn.backend` to generalize aggregation functions for graph operations.
- Reduced the models in literature. We will keep bringing all models of kgcnn<4.0.0 back in the next versions and run benchmark training again.
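Putting the new model arguments together, a hypothetical hyperparameter fragment for a tensorflow model with ragged input might look like the following. The argument names follow the list above; the concrete values are purely illustrative and not taken from a real kgcnn hyper file:

```python
# Illustrative model-argument fragment; the values are made up.
model_config = {
    "input_tensor_type": "ragged",   # required for ragged tf input
    "input_node_embedding": {"input_dim": 95, "output_dim": 64},
    "input_edge_embedding": None,    # None disables the embedding
    "output_scaling": {"name": "StandardLabelScaler"},
}
```

Remember that with an embedding enabled, the corresponding input tensor must now hold integer tokens; floats are no longer auto-cast.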
kgcnn v3.1.0
- Added flexible charge for `rdkit_xyz_to_mol`, e.g. as a list.
- Added `from_xyz` to `MolecularGraphRDKit`.
- Started additional `kgcnn.molecule.preprocessor` module for graph preprocessors.
- BREAKING CHANGES: Renamed module `kgcnn.layers.pooling` to `kgcnn.layers.aggr` for better compatibility. However, kept the legacy pooling module and all old aliases.
- Repair bug in `RelationalMLP`.
- `HyperParameter` is not verified on initialization anymore; just call `hyper.verify()`.
- Moved losses from `kgcnn.metrics.loss` into the separate module `kgcnn.losses` to be more compatible with keras.
- Reworked training scripts, especially to simplify command line arguments and strengthen hyperparameters.
- Started with potential keras-core port. Not yet tested or supported.
- Removed `get_split_indices` to make the graph indexing more consistent.
- Started with keras-core integration. Any code is WIP and not tested or working yet.
kgcnn v3.0.2
- Added `add_eps` to `PAiNNUpdate` layer as an option.
- Reworked `data.transform.scaler.standard` to hopefully fix all errors with the scalers.
- BREAKING CHANGES: Refactored activation functions `kgcnn.ops.activ` and layers `kgcnn.layers.activ` that have trainable parameters, due to keras changes in 2.13.0. Please check your config, since parameters are ignored in normal functions! If you use, for example, "kgcnn>leaky_relu", you can not change the leak anymore; you must use a `kgcnn.layers.activ` layer for that.
- Reworked `kgcnn.graph.methods.range_neighbour_lattice` to use pymatgen.
- Added `PolynomialDecayScheduler`.
- Added option for the force model to use the normal gradient, added as option `use_batch_jacobian`.
- BREAKING CHANGES: Reworked `kgcnn.layers.gather` to reduce/simplify code and speed up some models. The behaviour of `GatherNodes` has changed a little in that it first splits and then concatenates. The default parameters now have `split_axis` and `concat_axis` set to 2; `concat_indices` has been removed. The default behaviour of the layer, however, stays the same.
- An error in layer `FracToRealCoordinates` has been fixed and its speed improved.
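The `GatherNodes` change can be pictured with plain NumPy (a sketch of the split-then-concatenate semantics on a flat graph, not the layer's implementation): the index tensor is split per index column first, node features are gathered for each column, and the results are concatenated along the feature axis.

```python
import numpy as np

nodes = np.array([[1., 1.], [2., 2.], [3., 3.]])   # (N, F) node features
edge_index = np.array([[0, 1], [1, 2]])            # (M, 2) edge indices

# Split the index tensor first, gather per index column, then
# concatenate along the feature axis.
gathered = [nodes[edge_index[:, i]] for i in range(edge_index.shape[1])]
out = np.concatenate(gathered, axis=-1)            # (M, 2*F)
print(out)  # [[1. 1. 2. 2.] [2. 2. 3. 3.]]
```

With default settings the result is the same as the old concatenate-then-gather order, which is why the default behaviour of the layer is unchanged.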
kgcnn v3.0.1
- Removed deprecated molecules.
- Fix error in `kgcnn.data.transform.scaler.serial`.
- Fix error in `QMDataset` if attributes have been chosen. Now `set_attributes` does not cause an error.
- Fix error in `QMDataset` with labels without SDF file.
- Fix error in `kgcnn.layers.conv.GraphSageNodeLayer`.
- Add `reverse_edge_indices` option to `GraphDict.from_networkx`. Fixed error in connection with `kgcnn.crystal`.
- Started with `kgcnn.io.file`. Experimental; will get more updates.
- Fix error with `StandardLabelScaler` inheritance.
- Added workflow notebook examples.
- Fix import error in `kgcnn.crystal.periodic_table` to now properly include package data.
kgcnn v3.0.0
Major refactoring of kgcnn layers and models.
We try to provide the most important layers for graph convolution as `kgcnn.layers` with ragged tensor representation.
For literature models, only the input and output are matched with kgcnn.
- Moved `kgcnn.layers.conv` to `kgcnn.literature`.
- Refactored all graph methods in `graph.methods`.
- Moved `kgcnn.mol.*` and `kgcnn.moldyn.*` into `kgcnn.molecule`.
- Moved `hyper` into `training`.
- Updated `crystal`.
kgcnn v2.2.4
- Added `ACSFConstNormalization` to literature models as an option.
- Adjusted and reworked `MLP`; now includes more normalization options.
- Removed `is_sorted`, `node_indexing` and `has_unconnected` from `GraphBaseLayer` and added them to the pooling layers directly.
kgcnn v2.2.3
- HOTFIX: Changed `MemoryGraphList.tensor()` so that the correct dtype is given to the tensor output. This is important for model loading etc.
- Added `CENTChargePlusElectrostaticEnergy` to `kgcnn.layers.conv.hdnnp_conv` and `kgcnn.literature.HDNNP4th`.
- Fix bug in the latest `train_force.py` of v2.2.2 that forgot to apply inverse scaling to the dataset, causing subsequent folds to have wrong labels.
- HOTFIX: Updated `MolDynamicsModelPredictor` to call the keras model without very expensive retracing. An alternative mode uses `use_predict=True`.
- Updated training results and data subclasses for matbench datasets.
- Added `GraphInstanceNormalization` and `GraphNormalization` to `kgcnn.layers.norm`.
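Graph instance normalization standardizes node features per graph rather than across the whole batch. Conceptually (a NumPy sketch of the idea, not the `kgcnn.layers.norm` code), this is a segment-wise standardization driven by each node's batch assignment:

```python
import numpy as np

def graph_instance_norm(h, batch_id, eps=1e-5):
    """Standardize node features separately for each graph in the batch."""
    out = np.empty_like(h)
    for g in np.unique(batch_id):
        mask = batch_id == g          # nodes belonging to graph g
        mu = h[mask].mean(axis=0)
        sigma = h[mask].std(axis=0)
        out[mask] = (h[mask] - mu) / (sigma + eps)
    return out

h = np.array([[1.], [3.], [10.], [20.], [30.]])
bid = np.array([0, 0, 1, 1, 1])
print(graph_instance_norm(h, bid))
```

Normalizing per graph keeps statistics independent of how graphs are batched together, unlike plain batch normalization.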
kgcnn v2.2.2
- Reworked all scaler classes to have separate names for using either X or y, for example `StandardScaler` or `StandardLabelScaler`.
- Moved scalers to `kgcnn.data.transform`. We will expand on this in the future.
- IMPORTANT: Renamed and changed behaviour of `EnergyForceExtensiveScaler`. The new name is `EnergyForceExtensiveLabelScaler`. The return is just y now. Added experimental functionality for transforming datasets.
- Adjusted training scripts for the new scalers.
- Reduced requirements for tensorflow to 2.9.
- Renamed `kgcnn.md` to `kgcnn.moldyn` due to naming conflicts with markdown.
- In `MolDynamicsModelPredictor` renamed argument `model_postprocessor` to `graph_postprocessor`.
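The X/y split of the scalers means the label variants fit on regression targets y and leave inputs alone. A minimal sketch of that behaviour (a stand-in class, not kgcnn's implementation; kgcnn's scalers follow a scikit-learn-style fit/transform interface):

```python
import numpy as np

class SimpleLabelScaler:
    """Standardize regression targets y; inputs X pass through untouched."""

    def fit(self, y):
        self.mean_ = np.mean(y, axis=0)
        self.scale_ = np.std(y, axis=0)
        return self

    def transform(self, y):
        return (y - self.mean_) / self.scale_

    def inverse_transform(self, y_scaled):
        return y_scaled * self.scale_ + self.mean_

y = np.array([[1.0], [2.0], [3.0]])
scaler = SimpleLabelScaler().fit(y)
y_s = scaler.transform(y)
print(scaler.inverse_transform(y_s))  # recovers the original y
```

The inverse transform is exactly the step the v2.2.3 `train_force.py` hotfix above restores between cross-validation folds: forgetting it leaves the dataset in scaled units for the next fold.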