
Commit 027eca1

gyuilLim and sovrasov authored
Add documentation for PEFT (LoRA & DoRA) (#4596)
Co-authored-by: Vladislav Sovrasov <[email protected]>
1 parent a960b19 commit 027eca1

File tree

2 files changed: +108 -1 lines changed


docs/source/guide/tutorials/advanced/index.rst

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ Advanced Tutorials
   configuration
   huggingface_model
   multi_gpu
-  low_rank_adaptation
+  peft
   torch_compile

.. Once we have enough material, we might need to categorize these into `data`, `model learning` sections.
Lines changed: 107 additions & 0 deletions
@@ -0,0 +1,107 @@

PEFT: Parameter-Efficient Fine-Tuning (LoRA & DoRA) for Classification
=======================================================================

.. note::

    PEFT (LoRA, DoRA) is only supported for VisionTransformer models.
    See the implementation in ``otx.backend.native.models.classification.utils.peft``.


Overview
--------

OpenVINO™ Training Extensions supports Parameter-Efficient Fine-Tuning (PEFT) for Transformer classifiers via Low-Rank Adaptation (LoRA) and Weight-Decomposed Low-Rank Adaptation (DoRA).
These methods adapt pre-trained models with a small number of additional parameters instead of fully fine-tuning all weights.
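
The core idea behind LoRA is to keep each pre-trained weight matrix frozen and learn only a low-rank update on top of it; DoRA additionally decomposes the weight into a magnitude and a direction component before applying the low-rank update.
The snippet below is a minimal, illustrative sketch of the LoRA update only, not the OTX implementation (see ``otx.backend.native.models.classification.utils.peft`` for the actual code); the ``LoRALinear`` class name and its rank/scaling defaults are hypothetical choices for illustration.

.. code-block:: python

    import torch
    import torch.nn as nn


    class LoRALinear(nn.Module):
        """Illustrative LoRA wrapper: a frozen base layer plus a trainable low-rank delta."""

        def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
            super().__init__()
            self.base = base
            self.base.weight.requires_grad_(False)  # the pre-trained weight stays frozen
            if self.base.bias is not None:
                self.base.bias.requires_grad_(False)
            # Only these two small matrices are trained: rank * (in + out) extra parameters.
            self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
            self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
            self.scale = alpha / rank

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Equivalent to using W + scale * (B @ A) as the effective weight.
            return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)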

Benefits
--------

- **Efficiency**: Minimal extra parameters and faster adaptation.
- **Performance**: Competitive accuracy compared to full fine-tuning.
- **Flexibility**: Apply LoRA or DoRA selectively to model components.

Supported
---------

- **Backbones**: Vision Transformer family (e.g., DINOv2)
- **Tasks**: Multiclass, Multi-label, Hierarchical Label Classification

How to Use PEFT in OpenVINO™ Training Extensions
--------------------------------------------------

.. tab-set::

    .. tab-item:: API

        .. code-block:: python

            from training_extensions.src.otx.backend.native.models.classification.multiclass_models.vit import VisionTransformerMulticlassCls

            # Choose one: "lora" or "dora"
            model = VisionTransformerMulticlassCls(..., peft="lora")

    .. tab-item:: CLI

        .. code-block:: bash

            (otx) $ otx train ... --model.peft dora

    .. tab-item:: YAML

        .. code-block:: yaml

            task: MULTI_CLASS_CLS
            model:
              class_path: otx.backend.native.models.classification.multiclass_models.vit.VisionTransformerMulticlassCls
              init_args:
                label_info: 1000
                model_name: "dinov2-small"
                peft: "dora"

            optimizer:
              class_path: torch.optim.AdamW
              init_args:
                lr: 0.0001
                weight_decay: 0.05
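
After the model is constructed with ``peft`` enabled, it can be useful to confirm that only a small fraction of the weights will actually be updated.
The check below is a minimal sketch and assumes the returned model behaves like a standard ``torch.nn.Module`` exposing ``named_parameters()``; with LoRA or DoRA enabled, typically only the adapter matrices and the classification head remain trainable.

.. code-block:: python

    # Count trainable vs. total parameters of the constructed model.
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"trainable params: {trainable:,} / {total:,} ({100 * trainable / total:.2f}%)")

    # List the parameter names that will be updated during fine-tuning.
    for name, param in model.named_parameters():
        if param.requires_grad:
            print(name)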

Alternative
-----------

- **Linear Fine-Tuning**: Train only the classification head while keeping the entire backbone frozen.
  This approach works with *all* classification backbones.

How to Use Linear Fine-Tuning
-----------------------------

.. tab-set::

    .. tab-item:: API

        .. code-block:: python

            from training_extensions.src.otx.backend.native.models.classification.multiclass_models.vit import VisionTransformerMulticlassCls

            # Linear fine-tuning = freeze_backbone=True, no PEFT
            model = VisionTransformerMulticlassCls(
                ...,
                freeze_backbone=True,
            )

    .. tab-item:: CLI

        .. code-block:: bash

            (otx) $ otx train ... --model.freeze_backbone true

    .. tab-item:: YAML

        .. code-block:: yaml

            task: MULTI_CLASS_CLS
            model:
              class_path: otx.backend.native.models.classification.multiclass_models.vit.VisionTransformerMulticlassCls
              init_args:
                label_info: 1000
                model_name: "dinov2-small"
                peft: ""
                freeze_backbone: true
