Support and switching mechanism for multiple LoRA adapters #1986

@xvrong

Description

Is your feature request related to a problem? Please describe.
I am looking for a way to manage multiple LoRA adapters on a single base model within NeMo Megatron-Bridge, similar to the functionality provided by Hugging Face PEFT. Currently, it is not clear how to efficiently switch between or combine multiple adapters in a distributed (TP/PP) environment.

Describe the solution you'd like
I would like to see clear documentation or a standard API for:

  1. Multi-LoRA Management: The ability to load and mount multiple .nemo adapters onto one base model.
  2. Dynamic Switching: A clear API to switch active adapters during inference (e.g., model.set_enabled_adapters(["adapter_name"])).
  3. Concurrent Activation: Support for activating multiple adapters simultaneously (e.g., style LoRA + task LoRA) in a chain.
  4. Training/Fine-tuning: A workflow for training a new adapter while existing adapters are attached but frozen.
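To make the requested behavior concrete, here is a minimal sketch of the adapter-registry mechanism items 1–4 describe, written as a self-contained NumPy toy rather than actual NeMo Megatron-Bridge or PEFT code. The class name `MultiLoRALinear` and the method `set_enabled_adapters` are illustrative (the latter mirrors the API name suggested above); a real implementation would sit inside the distributed model's linear layers and handle TP/PP sharding.

```python
import numpy as np

class MultiLoRALinear:
    """Toy linear layer with named LoRA adapters (illustrative sketch only)."""

    def __init__(self, in_features, out_features, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen base weight; adapters are trained, the base is not.
        self.weight = rng.normal(size=(out_features, in_features))
        self.adapters = {}   # name -> (A, B, scale); supports many adapters on one base
        self.active = []     # names of currently enabled adapters

    def add_adapter(self, name, rank=4, scale=1.0, seed=None):
        rng = np.random.default_rng(seed)
        A = rng.normal(size=(rank, self.weight.shape[1])) * 0.01
        # Standard LoRA zero-init for B: a freshly added adapter is a no-op.
        B = np.zeros((self.weight.shape[0], rank))
        self.adapters[name] = (A, B, scale)

    def set_enabled_adapters(self, names):
        for n in names:
            if n not in self.adapters:
                raise KeyError(f"unknown adapter: {n}")
        self.active = list(names)  # dynamic switching: no weight merging needed

    def forward(self, x):
        y = x @ self.weight.T
        # Concurrent activation: enabled adapters compose additively,
        # so a "style" and a "task" adapter can be chained.
        for n in self.active:
            A, B, scale = self.adapters[n]
            y = y + scale * (x @ A.T @ B.T)
        return y
```

Training a new adapter while others are attached but frozen (item 4) then amounts to enabling the existing adapters in the forward pass while restricting gradient updates to the new adapter's `A`/`B` matrices.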

Describe alternatives you've considered

Additional context
I'm using verl for LLM RL training.
