This repository was archived by the owner on Nov 13, 2025. It is now read-only.

Commit 1ae0e85

fix image source in rst
1 parent 8637982 commit 1ae0e85

File tree

1 file changed (+2 −2 lines)

README.rst

Lines changed: 2 additions & 2 deletions
@@ -19,7 +19,7 @@ Gall <https://pages.iai.uni-bonn.de/gall_juergen/>`__\ :sup:`1,3`
 **Abstract:** Modern methods for fine-tuning a Vision Transformer (ViT) like Low-Rank Adaptation (LoRA) and its variants demonstrate impressive performance. However, these methods ignore the high-dimensional nature of Multi-Head Attention (MHA) weight tensors. To address this limitation, we propose Canonical Rank Adaptation (CaRA). CaRA leverages tensor mathematics, first by tensorising the transformer into two different tensors; one for projection layers in MHA and the other for feed-forward layers. Second, the tensorised formulation is fine-tuned using the low-rank adaptation in Canonical-Polyadic Decomposition (CPD) form. Employing CaRA efficiently minimizes the number of trainable parameters. Experimentally, CaRA outperforms existing Parameter-Efficient Fine-Tuning (PEFT) methods in visual classification benchmarks such as Visual Task Adaptation Benchmark (VTAB)-1k and Fine-Grained Visual Categorization (FGVC).
 
-.. image:: https://raw.githubusercontent.com/BonnBytes/CaRA/refs/heads/dev/images/tensorisation.jpg
+.. image:: https://raw.githubusercontent.com/BonnBytes/CaRA/refs/heads/main/images/tensorisation.jpg
    :width: 100%
    :alt: Alternative text
 

@@ -88,4 +88,4 @@ The code is built on the implementation of `FacT <https://github.com/JieShibo/PE
    :alt: Project Page
 .. |Arxiv| image:: https://img.shields.io/badge/OpenReview-Paper-blue
    :target: https://openreview.net/pdf?id=vexHifrbJg
-   :alt: Paper
+   :alt: Paper
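The abstract in the diff above describes fine-tuning via low-rank adaptation in Canonical-Polyadic Decomposition (CPD) form. The following is a minimal illustrative sketch of how a CPD-factorised update tensor is reconstructed from small factor matrices, using NumPy and toy dimensions; it is not the authors' implementation, and all names and shapes here are assumptions for illustration.

```python
import numpy as np

def cpd_reconstruct(A, B, C):
    """Rebuild an (I, J, K) tensor from CPD factor matrices A (I, R),
    B (J, R), C (K, R): the sum of R rank-one outer products."""
    return np.einsum("ir,jr,kr->ijk", A, B, C)

rng = np.random.default_rng(0)
I, J, K, R = 8, 8, 4, 2  # toy sizes; real MHA weight tensors are much larger
A = rng.normal(size=(I, R))
B = rng.normal(size=(J, R))
C = rng.normal(size=(K, R))

# In a CPD-style adaptation scheme, only the small factors would be trained,
# and the reconstructed tensor acts as a low-rank update to frozen weights.
delta_W = cpd_reconstruct(A, B, C)
print(delta_W.shape)  # (8, 8, 4)

# Parameter saving: factor entries vs. a full dense tensor.
print((I + J + K) * R, "trainable vs", I * J * K, "full")
```

The saving grows with tensor order and size: the factors cost `(I + J + K) * R` parameters versus `I * J * K` for the full tensor, which is the kind of reduction the abstract attributes to CaRA.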
