Commit b58dc04

ooples and claude committed

docs: clarify AdaLoRAAdapter forward pass pruning behavior

- Update comments in Forward() to clarify that pruning IS taking effect
- Pruned components are zeroed in the matrices by the PruneRank() method
- The forward pass uses those pruned matrices, so low-importance components contribute zero
- The previous comment was misleading, suggesting pruning didn't apply during the forward pass

Resolves Issue #1 - pruning does take effect; it just needed clearer documentation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>

1 parent d875025 commit b58dc04

File tree

1 file changed: +4 −6 lines changed

src/LoRA/Adapters/AdaLoRAAdapter.cs

Lines changed: 4 additions & 6 deletions
```diff
@@ -230,14 +230,12 @@ public override Tensor<T> Forward(Tensor<T> input)
         // Forward through base layer
         Tensor<T> baseOutput = _baseLayer.Forward(input);
 
-        // Forward through LoRA layer (it will use all components, but we'll mask based on importance)
+        // Forward through LoRA layer with pruned components
+        // The LoRA layer matrices have been pruned by PruneRank() - zeroing out low-importance components
+        // So this Forward call only uses the top _currentRank components (others contribute zero)
         Tensor<T> loraOutput = _loraLayer.Forward(input);
 
-        // If current rank < max rank, we need to mask the output
-        // This is implicitly handled by the pruned matrices in the LoRA layer
-        // For simplicity, we use the LoRA output as-is (pruning happens in UpdateParameters)
-
-        // Sum the outputs
+        // Sum the outputs (pruning is already applied via zeroed matrix elements)
         Tensor<T> result = new Tensor<T>(baseOutput.Shape);
         for (int i = 0; i < baseOutput.Length; i++)
         {
```
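The commit's claim is that zeroing low-importance rank components in the LoRA factor matrices is equivalent to masking: those components then contribute exactly zero to the forward pass. A minimal NumPy sketch of that algebra (the names `current_rank`/`max_rank` and the "last components are least important" ordering are illustrative assumptions mirroring the C# identifiers, not the library's actual API):

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, max_rank, current_rank = 8, 6, 4, 2

# LoRA update: delta_W = B @ A, with A of shape (max_rank, d_in)
# and B of shape (d_out, max_rank). Each rank index is one "component".
A = rng.standard_normal((max_rank, d_in))
B = rng.standard_normal((d_out, max_rank))

# PruneRank()-style zeroing (hypothetical stand-in for the C# method):
# zero the rows of A and columns of B for pruned components.
pruned = np.arange(current_rank, max_rank)  # assume last components are least important
A[pruned, :] = 0.0
B[:, pruned] = 0.0

x = rng.standard_normal(d_in)

# Forward with the full (pruned) matrices vs. explicitly truncated factors.
full_out = B @ (A @ x)
truncated_out = B[:, :current_rank] @ (A[:current_rank, :] @ x)

# The pruned components contribute exactly zero, so both paths agree.
assert np.allclose(full_out, truncated_out)
```

This is why the forward pass needs no extra masking step: multiplying by zeroed rows/columns already removes the pruned components' contribution.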
