
Conversation

@ZHAOTING

This PR enables pre-training and continual pre-training of Qwen models.

We used this code to continually pre-train Qwen-14B on 66B tokens of Japanese data, producing Nekomata-14B. This work is part of the AWS LLM development support program in Japan, so we would like to release the code alongside the already released model weights.

Changes include:

  • Code for converting model weights between HF and NeMo checkpoints.
  • A new config option that enables the QKV bias independently of the other bias settings (see the sketch after this list).
  • Minor model code changes that reflect the above option.
  • trust_remote_code=True when loading AutoTokenizer for the Qwen tokenizer (see the example after this list).
  • transformers>=4.32.0 and tiktoken added to requirements.txt.
  • Training config files and scripts.
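
To illustrate why a separate QKV-bias switch is useful: Qwen applies a bias to the fused query/key/value projection while its other linear layers are bias-free, so a single model-wide bias flag cannot describe the architecture. The following is a minimal PyTorch sketch of that layout; the class and argument names are illustrative and are not the ones used in this PR.

```python
import torch.nn as nn

# Minimal sketch (not the PR's actual code): Qwen-style attention puts a bias
# on the fused QKV projection only, so that flag must be configurable
# independently of the model-wide bias option.
class QwenStyleAttentionProjections(nn.Module):
    def __init__(self, hidden_size: int, qkv_bias: bool = True):
        super().__init__()
        # Fused query/key/value projection: bias controlled by its own flag.
        self.qkv = nn.Linear(hidden_size, 3 * hidden_size, bias=qkv_bias)
        # Output projection: no bias, like the other linear layers in the model.
        self.proj = nn.Linear(hidden_size, hidden_size, bias=False)
```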
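
And a short example of the tokenizer change: the Qwen tokenizer is implemented as custom, tiktoken-based code shipped with the model on the Hugging Face Hub, so AutoTokenizer only loads it when remote code is trusted. The model ID below is just an example.

```python
from transformers import AutoTokenizer

# The Qwen tokenizer is custom (tiktoken-based) code hosted alongside the
# model weights, so trust_remote_code=True is required to load it.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-14B", trust_remote_code=True)

print(tokenizer("こんにちは、世界").input_ids)
```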

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
