
Question about the number of fine-tunable parameters #27

@kaicheng001

Hello, and first of all thank you for your work on MoE-PEFT. I ran into a problem while fine-tuning with it. The moe_peft.json configuration I used is as follows:

{
    "cutoff_len": 2048,
    "save_strategy": "epoch",
    "train_lora_candidate_num": 2,
    "train_lora_simultaneously_num": 2,
    "train_strategy": "optim",
    "lora": [
        {
            "name": "expert_0_name",
            "task_name": "expert_0",
            "optim": "adamw",
            "scheduler_type": "constant",
            "warmup_steps": 0,
            "lr": 0.0002,
            "batch_size": 1,
            "micro_batch_size": 1,
            "evaluate_batch_size": 1,
            "num_epochs": 3,
            "r": 4,
            "lora_alpha": 16,
            "lora_dropout": 0.05,
            "target_modules": {
                "q_proj": false,
                "k_proj": false,
                "v_proj": false,
                "o_proj": false,
                "gate_proj": true,
                "down_proj": false,
                "up_proj": false
            },
            "routing_strategy": "mixlora",
            "num_experts": 7,
            "top_k": 2,
            "group_by_length": false
        },
        ... (six more entries, "expert_1_name" through "expert_6_name", identical to the first except for "name" and "task_name") ...
    ]
}

The model I chose is Meta-Llama-3-8B-Instruct. I printed the model's trainable-parameter statistics myself, with the following output:

===== LoRA LAYER TYPE STATISTICS FOR expert_0_name =====
FFN: 17,432,576 parameters (100.00%)
Attention: 0 parameters (0.00%)
Other: 0 parameters (0.00%)
LoRA Total Parameters: 17,432,576
LoRA Rank: 4
LoRA Alpha: 16
Target Modules: ['gate_proj']

... (the same statistics are printed for expert_1_name through expert_6_name) ...
================================
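
These numbers line up with a quick back-of-the-envelope calculation from the config (a minimal sketch in Python; the Llama-3-8B dimensions are standard, but the one-router-per-layer shape of hidden_size x num_experts is my assumption):

hidden_size = 4096         # Meta-Llama-3-8B
intermediate_size = 14336  # Meta-Llama-3-8B FFN dimension
num_layers = 32
r, num_experts = 4, 7      # from moe_peft.json above

# LoRA on gate_proj only: A is (r x hidden_size), B is (intermediate_size x r)
lora_per_expert_per_layer = r * hidden_size + intermediate_size * r
lora_total = lora_per_expert_per_layer * num_experts * num_layers
# assumed MixLoRA router: one (hidden_size x num_experts) matrix per layer
router_total = hidden_size * num_experts * num_layers

print(lora_total)                 # 16515072, matches "except gates" below
print(lora_total + router_total)  # 17432576, matches the totals above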

Your code itself logs the following:

[2025-03-25 00:39:18,638] MoE-PEFT: expert_0_name total trainable params: 17432576
[2025-03-25 00:39:18,639] MoE-PEFT: expert_0_name total trainable params (except gates): 16515072
[2025-03-25 00:39:18,647] MoE-PEFT: expert_1_name total trainable params: 17432576
[2025-03-25 00:39:18,649] MoE-PEFT: expert_1_name total trainable params (except gates): 16515072
[2025-03-25 00:39:18,656] MoE-PEFT: expert_2_name total trainable params: 17432576
[2025-03-25 00:39:18,658] MoE-PEFT: expert_2_name total trainable params (except gates): 16515072
[2025-03-25 00:39:18,663] MoE-PEFT: expert_3_name total trainable params: 17432576
[2025-03-25 00:39:18,665] MoE-PEFT: expert_3_name total trainable params (except gates): 16515072
[2025-03-25 00:39:18,670] MoE-PEFT: expert_4_name total trainable params: 17432576
[2025-03-25 00:39:18,672] MoE-PEFT: expert_4_name total trainable params (except gates): 16515072
[2025-03-25 00:39:18,678] MoE-PEFT: expert_5_name total trainable params: 17432576
[2025-03-25 00:39:18,679] MoE-PEFT: expert_5_name total trainable params (except gates): 16515072
[2025-03-25 00:39:18,686] MoE-PEFT: expert_6_name total trainable params: 17432576
[2025-03-25 00:39:18,688] MoE-PEFT: expert_6_name total trainable params (except gates): 16515072
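
(Incidentally, the gap between the two logged numbers is 17,432,576 - 16,515,072 = 917,504 = 32 x 4096 x 7, i.e. exactly one hidden_size x num_experts router matrix per layer, so "except gates" presumably excludes the MixLoRA router weights.)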

But the logging code in your trainer.py seems to print the total trainable parameters for each expert:

[screenshot of the trainable-parameter logging in trainer.py]

With the configuration above, training still runs out of GPU memory on a single 24 GB RTX 4090, which seems abnormal. Is there a good way to reduce the memory usage?
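
For context, my rough memory budget looks like this (a sketch, assuming the base model is loaded in bf16 and AdamW keeps fp32 gradients and moments only for the trainable parameters; activation memory is not counted):

base_params = 8.03e9        # Meta-Llama-3-8B
trainable = 17_432_576 * 2  # train_lora_simultaneously_num = 2

weights_gib = base_params * 2 / 2**30              # bf16: 2 bytes/param
grads_optim_gib = trainable * (4 + 4 + 4) / 2**30  # fp32 grads + 2 AdamW moments

print(f"base weights:      {weights_gib:.1f} GiB")     # ~15.0 GiB
print(f"grads + optimizer: {grads_optim_gib:.2f} GiB") # ~0.39 GiB

If that is roughly right, only about 8 GiB remains for activations at cutoff_len 2048 with two adapters training simultaneously. Would reducing train_lora_simultaneously_num to 1, or lowering cutoff_len, be the recommended way to fit on 24 GB, or is there a better option?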

Looking forward to your answer and thank you!
