Releases · lmnt-com/haste
Haste 0.5.0-rc0
v0.5.0-rc0: Bump version to 0.5.0-rc0 in preparation for the PyPI release.
Haste 0.4.0
Added
- New layer normalized GRU layer (`LayerNormGRU`).
- New IndRNN layer.
- CPU support for all PyTorch layers.
- Support for building PyTorch API on Windows.
- Added `state` argument to PyTorch layers to specify the initial state.
- Added weight transforms to TensorFlow API (see docs for details).
- Added `get_weights` method to extract weights from RNN layers (TensorFlow).
- Added `to_native_weights` and `from_native_weights` to PyTorch API for `LSTM` and `GRU` layers.
- Validation tests to check for correctness.
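The layer-normalized variants (`LayerNormGRU` above, `LayerNormLSTM` from 0.3.0) apply layer normalization inside the recurrence. As a reminder of what that normalization does, here is a minimal NumPy sketch; this is a conceptual illustration only, not Haste's CUDA implementation, and the function name and signature are made up for the example:

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Normalize over the last (feature) axis, then scale and shift."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

# A batch of 4 hidden-state vectors of size 8, as a GRU/LSTM cell might produce.
h = np.random.default_rng(0).standard_normal((4, 8))
y = layer_norm(h, gamma=np.ones(8), beta=np.zeros(8))
```

With `gamma=1` and `beta=0`, each row of `y` has (approximately) zero mean and unit variance regardless of the scale of `h`, which is what stabilizes training in the layer-normalized RNN variants.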
Changed
- Performance improvements to GRU layer.
- BREAKING CHANGE: PyTorch layers default to CPU instead of GPU.
- BREAKING CHANGE: `h` must not be transposed before passing it to `gru::BackwardPass::Iterate`.
Fixed
- Multi-GPU training failures with TensorFlow caused by invalid sharing of a `cublasHandle_t`.
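The bug fixed here follows a general pattern: an opaque per-device resource such as a `cublasHandle_t` must not be shared across GPUs. A toy Python sketch of per-device handle caching, illustrative only (Haste's actual fix lives in its C++/CUDA code, and the names here are invented):

```python
# Cache one opaque handle per device id instead of sharing one globally.
_handles = {}

def get_handle(device_id, create=object):
    # Create the handle lazily, once per device; reuse it thereafter.
    if device_id not in _handles:
        _handles[device_id] = create()
    return _handles[device_id]

h0 = get_handle(0)
h1 = get_handle(1)
```

Distinct devices get distinct handles, while repeated lookups for the same device reuse the cached one.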
Haste 0.3.0
Added
- PyTorch support.
- New layer normalized LSTM layer (`LayerNormLSTM`).
- New fused layer normalization layer.
Fixed
- Occasional uninitialized memory use in TensorFlow LSTM implementation.
Haste 0.2.0
This release focuses on LSTM performance.
Added
- New time-fused API for LSTM (`lstm::ForwardPass::Run`, `lstm::BackwardPass::Run`).
- Benchmarking code to evaluate the performance of an implementation.
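One common source of speedup for a time-fused `Run`-style API over a per-step `Iterate`-style API is batching work across time steps, for example computing the input projections for every time step in a single large matrix multiply. A toy NumPy sketch of that idea (not Haste's actual kernels; the shapes and names are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, C, H = 5, 3, 4, 2          # time steps, batch, input size, hidden size
x = rng.standard_normal((T, N, C))
W = rng.standard_normal((C, H))

# Iterative style: one small matmul per time step.
step = np.stack([x[t] @ W for t in range(T)])

# Time-fused style: a single large matmul covering all time steps at once.
fused = (x.reshape(T * N, C) @ W).reshape(T, N, H)
```

Both forms produce the same result, but the fused form issues one large operation instead of `T` small ones, which tends to use the hardware much more efficiently.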
Changed
- Performance improvements to existing iterative LSTM API.
- BREAKING CHANGE: `h` must not be transposed before passing it to `lstm::BackwardPass::Iterate`.
- BREAKING CHANGE: `dv` does not need to be allocated and `v` must be passed instead to `lstm::BackwardPass::Iterate`.
Haste 0.1.0
Initial release.