2 changes: 2 additions & 0 deletions docs/source/_toctree.yml
@@ -114,6 +114,8 @@
title: Layernorm tuning
- local: package_reference/vera
title: VeRA
- local: package_reference/pvera
title: PVeRA
- local: package_reference/fourierft
title: FourierFT
- local: package_reference/gralora
40 changes: 40 additions & 0 deletions docs/source/package_reference/pvera.md
Collaborator comment: Also make sure to add this file to the docs/source/_toctree.yml possibly next to VeRA.

@@ -0,0 +1,40 @@
<!--Copyright 2025 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# PVeRA: Probabilistic Vector-Based Random Matrix Adaptation

[PVeRA](https://huggingface.co/papers/2512.07703) is a parameter-efficient fine-tuning technique based on VeRA, in the family of LoRA-based adapters. It keeps VeRA's very low parameter budget but improves performance by learning a distribution over latent adaptations. This also enables models adapted with PVeRA to produce Monte Carlo confidence interval estimates by sampling from the learned distribution at inference.
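
As a quick orientation, here is a minimal sketch of creating a PVeRA adapter. It assumes `PveraConfig` accepts VeRA-style arguments such as `r` and `target_modules`, and uses `facebook/opt-125m` purely for illustration.

```python
# Minimal sketch; `r` and `target_modules` are assumed to mirror VeRA's config arguments.
from peft import PveraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

config = PveraConfig(
    r=16,                                 # hypothetical rank
    target_modules=["q_proj", "v_proj"],  # hypothetical nn.Linear targets in OPT
)
peft_model = get_peft_model(base_model, config)
peft_model.print_trainable_parameters()
```
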
Collaborator comment: Maybe also mention (not necessarily here) how we can set the required `sample_at_inference` config value when loading a pre-trained checkpoint:

# Setting sample_at_inference=True for PVeRA checkpoints during load
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("gpt2")

peft_model_id = "path/to/pvera-adapter"  # hypothetical location of the saved PVeRA adapter

peft_config = PeftConfig.from_pretrained(peft_model_id)
peft_config.sample_at_inference = True

peft_model = PeftModel.from_pretrained(base_model, peft_model_id, config=peft_config)


When saving the adapter parameters, it's possible to eschew storing the low-rank matrices by setting `save_projection=False` on the `PveraConfig`. In that case, these matrices will be restored based on the fixed random seed from the `projection_prng_key` argument. This cuts down on the size of the checkpoint, but we cannot guarantee reproducibility on all devices and for all future versions of PyTorch. If you want to ensure reproducibility, set `save_projection=True` (which is the default).
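
For instance, a configuration sketch along these lines (the `target_modules` value is hypothetical; `save_projection` and `projection_prng_key` are the options described above):

```python
# Sketch: omit the shared low-rank matrices from the checkpoint and
# regenerate them from a fixed seed when the adapter is loaded.
from peft import PveraConfig

config = PveraConfig(
    target_modules=["q_proj", "v_proj"],  # hypothetical target layers
    save_projection=False,                # do not store A and B in the checkpoint
    projection_prng_key=0,                # seed used to regenerate A and B on load
)
```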

To handle different shapes of adapted layers, PVeRA initializes shared A and B matrices with the largest required size for each dimension. During the forward pass, submatrices A and B for a given layer are sliced out from these shared matrices and used as described in the paper. For example, adapting two linear layers of shapes (100, 20) and (80, 50) will create A and B matrices of shapes (rank, 50) and (100, rank) respectively. Then, to adapt a layer of shape (100, 20), submatrices A and B of shapes (rank, 20) and (100, rank) will be extracted.
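
The slicing can be sanity-checked with plain Python; this illustrates only the shape bookkeeping from the example above, not the PEFT API:

```python
# nn.Linear weights have shape (out_features, in_features).
layer_shapes = [(100, 20), (80, 50)]
rank = 4  # hypothetical rank

shared_A = (rank, max(in_f for _, in_f in layer_shapes))    # (4, 50)
shared_B = (max(out_f for out_f, _ in layer_shapes), rank)  # (100, 4)

# For the (100, 20) layer, slices of shapes (4, 20) and (100, 4) are used.
print(shared_A, shared_B)
```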

PVeRA currently has the following constraints:

- Only `nn.Linear` layers are supported.
- The latent representation is not easily accessible, for example for training with a KL divergence term.

The abstract from the paper is:

> Large foundation models have emerged in the last years and are pushing performance boundaries for a variety of tasks. Training or even finetuning such models demands vast datasets and computational resources, which are often scarce and costly. Adaptation methods provide a computationally efficient solution to address these limitations by allowing such models to be finetuned on small amounts of data and computing power. This is achieved by appending new trainable modules to frozen backbones with only a fraction of the trainable parameters and fitting only these modules on novel tasks. Recently, the VeRA adapter was shown to excel in parameter-efficient adaptations by utilizing a pair of frozen random low-rank matrices shared across all layers. In this paper, we propose PVeRA, a probabilistic version of the VeRA adapter, which modifies the low-rank matrices of VeRA in a probabilistic manner. This modification naturally allows handling inherent ambiguities in the input and allows for different sampling configurations during training and testing. A comprehensive evaluation was performed on the VTAB-1k benchmark and seven adapters, with PVeRA outperforming VeRA and other adapters.

## PveraConfig

[[autodoc]] tuners.pvera.config.PveraConfig

## PveraModel

[[autodoc]] tuners.pvera.model.PveraModel
4 changes: 4 additions & 0 deletions src/peft/__init__.py
@@ -99,6 +99,8 @@
PromptEncoderReparameterizationType,
PromptTuningConfig,
PromptTuningInit,
PveraConfig,
PveraModel,
RandLoraConfig,
RandLoraModel,
RoadConfig,
@@ -213,6 +215,8 @@
"PromptLearningConfig",
"PromptTuningConfig",
"PromptTuningInit",
"PveraConfig",
"PveraModel",
"RandLoraConfig",
"RandLoraModel",
"RoadConfig",
3 changes: 3 additions & 0 deletions src/peft/tuners/__init__.py
@@ -47,6 +47,7 @@
from .poly import PolyConfig, PolyModel
from .prefix_tuning import PrefixEncoder, PrefixTuningConfig
from .prompt_tuning import PromptEmbedding, PromptTuningConfig, PromptTuningInit
from .pvera import PveraConfig, PveraModel
from .randlora import RandLoraConfig, RandLoraModel
from .road import RoadConfig, RoadModel
from .shira import ShiraConfig, ShiraModel
@@ -113,6 +114,8 @@
"PromptEncoderReparameterizationType",
"PromptTuningConfig",
"PromptTuningInit",
"PveraConfig",
"PveraModel",
"RandLoraConfig",
"RandLoraModel",
"RoadConfig",
26 changes: 26 additions & 0 deletions src/peft/tuners/pvera/__init__.py
@@ -0,0 +1,26 @@
# Copyright 2025-present the HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


from peft.utils import register_peft_method

from .config import PveraConfig
from .layer import Linear, PveraLayer
from .model import PveraModel


__all__ = ["Linear", "PveraConfig", "PveraLayer", "PveraModel"]


register_peft_method(name="pvera", config_cls=PveraConfig, model_cls=PveraModel, prefix="pvera_lambda_")