Status: Open
Labels: community-request, feature (New capabilities, enhancements, or enablement work), needs-follow-up (Issue needs follow-up)
Description
Is your feature request related to a problem? Please describe.
I am looking for a way to manage multiple LoRA adapters on a single base model within NeMo Megatron-Bridge, similar to the multi-adapter functionality provided by Hugging Face PEFT. Currently, it is not clear how to efficiently switch between or combine multiple adapters in a distributed tensor-parallel/pipeline-parallel (TP/PP) environment.
Describe the solution you'd like
I would like to see clear documentation or a standard API for:
- Multi-LoRA Management: the ability to load and mount multiple `.nemo` adapters onto one base model.
- Dynamic Switching: a clear API to switch active adapters during inference (e.g., `model.set_enabled_adapters(["adapter_name"])`).
- Concurrent Activation: support for activating multiple adapters simultaneously (e.g., style LoRA + task LoRA) in a chain.
- Training/Fine-tuning: a workflow for training a new adapter while existing adapters are attached but frozen.
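To make the requested semantics concrete, here is a minimal, framework-agnostic sketch of the behavior described above. All names here (`AdapterRegistry`, `LoRAAdapter`, `load_adapter`, `freeze`) are hypothetical illustrations, not existing Megatron-Bridge APIs; only the `set_enabled_adapters` call mirrors the signature suggested in this request.

```python
# Hypothetical sketch of multi-adapter management semantics.
# None of these classes exist in Megatron-Bridge today; they only
# illustrate the requested behavior (mount, switch, combine, freeze).

class LoRAAdapter:
    """A stand-in for a loaded .nemo LoRA adapter."""
    def __init__(self, name: str):
        self.name = name
        self.frozen = False  # frozen adapters are excluded from training

class AdapterRegistry:
    """Manages multiple adapters mounted on one base model."""
    def __init__(self):
        self._adapters: dict[str, LoRAAdapter] = {}
        self._enabled: list[str] = []

    def load_adapter(self, name: str) -> None:
        # A real implementation would restore adapter weights here,
        # sharded consistently with the base model's TP/PP layout.
        self._adapters[name] = LoRAAdapter(name)

    def set_enabled_adapters(self, names: list[str]) -> None:
        # Dynamic switching and concurrent activation: any subset of
        # mounted adapters can be active at once, applied in order.
        for n in names:
            if n not in self._adapters:
                raise KeyError(f"adapter {n!r} is not loaded")
        self._enabled = list(names)

    def freeze(self, name: str) -> None:
        # Training workflow: freeze existing adapters, train a new one.
        self._adapters[name].frozen = True

    @property
    def trainable(self) -> list[str]:
        return [n for n in self._enabled if not self._adapters[n].frozen]

registry = AdapterRegistry()
registry.load_adapter("style_lora")
registry.load_adapter("task_lora")

# Concurrent activation: chain two adapters during inference.
registry.set_enabled_adapters(["style_lora", "task_lora"])

# Train a new adapter while existing ones stay attached but frozen.
registry.freeze("style_lora")
registry.freeze("task_lora")
registry.load_adapter("new_lora")
registry.set_enabled_adapters(["style_lora", "task_lora", "new_lora"])
print(registry.trainable)  # → ['new_lora']
```

The key design point this sketch tries to capture: enabling and training are orthogonal, so a frozen adapter can stay active in the forward pass while only the new adapter receives gradients.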
Describe alternatives you've considered
Additional context
I'm using verl for LLM RL training.