Commit 0e6a41c

Update changelog for r2.2.1 (NVIDIA-NeMo#12818)
* beep boop: Update changelog

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* Add 2.2.1 changelog highlights

Signed-off-by: Charlie Truong <chtruong@nvidia.com>

---------

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Signed-off-by: Charlie Truong <chtruong@nvidia.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Charlie Truong <chtruong@nvidia.com>
1 parent bcd7093 commit 0e6a41c

File tree

1 file changed (+24, -0 lines)


CHANGELOG.md

Lines changed: 24 additions & 0 deletions
@@ -1,6 +1,30 @@

# Changelog

<!-- Next changelog -->

## NVIDIA Neural Modules 2.2.1

### Highlights

- Training
  - Fix training instability in MoE-based models.
- Fix bug in the Llama exporter for Llama 3.2 1B and 3B.
- Fix bug in the LoRA `linear_fc1` adapter when a different TP size is used for saving and loading the adapter checkpoint.

### Detailed Changelogs:

#### Uncategorized:

<details><summary>Changelog</summary>

- Re-add reverted commits after 2.2.0 and set next version to be 2.2.1 by @chtruong814 :: PR: #12587
- Cherry pick `Fix exporter for llama models with shared embed and output layers (12545)` into `r2.2.0` by @ko3n1g :: PR: #12608
- Cherry pick `Fix TP for LoRA adapter on `linear_fc1` (12519)` into `r2.2.0` by @ko3n1g :: PR: #12607
- Bump mcore to use 0.11.1 by @chtruong814 :: PR: #12634

</details>

## NVIDIA Neural Modules 2.2.0

### Highlights
