v0.4.1 #775
Unanswered
mattdangerw asked this question in General
Replies: 2 comments
-
@mattdangerw We also fixed "Mistake in projection layer dimension of MaskedLMHead" (#733) through #725.
-
@shivance indeed we did! Though actually we are not exposing the new The fix for |
Beta Was this translation helpful? Give feedback.
0 replies
-
The 0.4.1 release is a minor release with new model architectures and compilation defaults for task models. If you encounter any problems or have questions, please open an issue!
Summary
- Added compilation defaults for all task models (e.g. keras_nlp.models.BertClassifier). No existing functionality is changed, but users of task models can now skip calling .compile() and use the default learning rates and optimizer strategies provided by the library (see the sketch after this list).
- Added keras_nlp.models.AlbertBackbone, keras_nlp.models.AlbertClassifier, and preprocessor and tokenizer layers for pre-trained ALBERT models.
- Added keras_nlp.models.FNetBackbone, keras_nlp.models.FNetClassifier, and preprocessor and tokenizer layers for pre-trained FNet models.
- Added keras_nlp.models.DebertaV3Backbone, keras_nlp.models.DebertaV3Classifier, and preprocessor and tokenizer layers for pre-trained DeBERTaV3 models.
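For instance, a task model can now go straight from construction to fit() with no compile step. The snippet below is a minimal sketch, not from the release notes: the bert_base_en_uncased preset name and the raw-string inputs are assumptions (preset names and bundled preprocessing can differ by version).

```python
import keras_nlp

# Build a classifier from a preset. No .compile() call is needed:
# the task now ships with a default loss, optimizer, and learning rate.
classifier = keras_nlp.models.BertClassifier.from_preset(
    "bert_base_en_uncased",  # assumed preset name; check the docs
    num_classes=2,
)

# The compilation defaults kick in here. Call compile() yourself
# only if you want to override the default optimizer or loss.
classifier.fit(
    x=["this release is great!", "this bug is annoying."],
    y=[1, 0],
    batch_size=2,
)
```

The same from_preset pattern applies to the new AlbertClassifier, FNetClassifier, and DebertaV3Classifier tasks.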
What's Changed
- Clean up class name and self in calls to super() by @mbrukman in #628
- Create Backbone base class by @jbischof in #621
- Fix value_dim in TransformerDecoder's cross-attn layer by @abheesht17 in #667
- Expose token_embedding as a Backbone Property by @abheesht17 in #676 (see the sketch after this list)
- Move from_preset to base tokenizer classes by @shivance in #673 (see the sketch after this list)
- Add AlbertClassifier by @shivance in #668
- Add start/end token padding to GPT2Preprocessor by @chenmoneygithub in #704
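To illustrate the token_embedding property (#676) and the tokenizer from_preset move (#673), here is a minimal sketch; the bert_base_en_uncased preset name is again an assumption.

```python
import keras_nlp

# token_embedding is now a property on every Backbone, so downstream
# heads can reuse (tie) the model's input embedding weights.
backbone = keras_nlp.models.BertBackbone.from_preset("bert_base_en_uncased")
embedding = backbone.token_embedding  # an Embedding layer
print(embedding.embeddings.shape)  # (vocabulary_size, hidden_dim)

# from_preset now lives on the base tokenizer classes, so tokenizers
# are constructed the same way as backbones and tasks.
tokenizer = keras_nlp.models.BertTokenizer.from_preset("bert_base_en_uncased")
print(tokenizer(["the quick brown fox"]))  # ragged batch of token ids
```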
New Contributors
- @mbrukman made their first contribution in #628 (Clean up class name and self in calls to super())
- @shivance made their first contribution in #673 (Move from_preset to base tokenizer classes)
to base tokenizer classes #673Full Changelog: v0.4.0...v0.4.1
This discussion was created from the release v0.4.1.