Add support for Orthogonal Fine-Tuning (OFT) #79

Open
XiaohonYe wants to merge 2 commits into IrisRainbowNeko:dev from XiaohonYe:oft

Conversation

@XiaohonYe

This PR implements Orthogonal Fine-Tuning (OFT) for parameter-efficient adaptation.
The main changes are:

hcpdiff/models/lora_base_patch.py: code base for OFTBlock
hcpdiff/models/lora_layers_patch.py: core logic for OFTLayer
hcpdiff/models/__init__.py

OFT weights are initialized from the host layers and kept frozen during training.
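Unlike LoRA's additive low-rank update, OFT adapts a frozen host weight by multiplying it with a learned orthogonal matrix, typically parameterized through a Cayley transform of a skew-symmetric generator. A minimal NumPy sketch of that idea follows; the `cayley` helper and `OFTLinear` class are illustrative names, not the PR's actual API:

```python
import numpy as np

def cayley(q):
    # Build an orthogonal matrix from an unconstrained generator q:
    # Q = q - q.T is skew-symmetric, and R = (I + Q)(I - Q)^{-1}
    # is then orthogonal (R @ R.T == I).
    n = q.shape[0]
    Q = q - q.T
    I = np.eye(n)
    return (I + Q) @ np.linalg.inv(I - Q)

class OFTLinear:
    """Illustrative OFT wrapper: the frozen host weight W is rotated by a
    learned orthogonal matrix R, giving the adapted weight R @ W."""

    def __init__(self, host_weight):
        self.W = host_weight                    # frozen host weight, shape (out, in)
        out_dim = host_weight.shape[0]
        # Trainable generator; zeros make R = I, so training
        # starts exactly at the pretrained weight.
        self.q = np.zeros((out_dim, out_dim))

    def weight(self):
        return cayley(self.q) @ self.W

    def __call__(self, x):
        return x @ self.weight().T
```

Because R starts at the identity, the adapted layer initially reproduces the host layer exactly, and orthogonality of R preserves the spectral properties of W throughout training.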

@XiaohonYe XiaohonYe changed the base branch from neko to dev May 25, 2025 07:28
@IrisRainbowNeko
Owner

OFT is different from LoRA. Please put OFTLayer, OFTBlock, and the other related code in a separate file.
