
Use float32 for LoRA weights to avoid the risk of underflow and overflow.#22559

Open
james77777778 wants to merge 3 commits into keras-team:master from james77777778:use-float32-for-lora-weights

Conversation

@james77777778
Contributor
@gemini-code-assist bot left a comment


Code Review

This pull request updates the LoRA implementation across several layers (Convolutional, Dense, EinsumDense, and Embedding) so that LoRA weights are initialized as float32, preventing numerical instability. It also adds explicit casts to the appropriate variable or compute dtypes during kernel composition and in the forward pass. One critical issue was identified: in the EinsumDense layer, a trailing comma turns the LoRA update into a one-element tuple, which will raise a TypeError during subsequent tensor operations.
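The review comment above touches two distinct pitfalls. A minimal sketch of both, using NumPy as a stand-in for the backend (this is illustrative only, not the actual Keras code): float16 rounds away small LoRA-scale updates that float32 preserves, and a trailing comma silently produces a tuple instead of a tensor.

```python
import numpy as np

# Sketch only (NumPy stand-in, not the Keras implementation).
# Near 1.0, adjacent float16 values are ~9.8e-4 apart, so a small
# LoRA-style update is rounded away; float32 keeps it.
update = 1e-4
w16 = np.float16(np.float16(1.0) + np.float16(update))
w32 = np.float32(np.float32(1.0) + np.float32(update))
assert w16 == np.float16(1.0)  # update swallowed by float16 rounding
assert w32 > np.float32(1.0)   # update preserved in float32

# The EinsumDense bug class: a trailing comma wraps the expression
# in a one-element tuple, so later tensor ops fail with a TypeError.
lora_a = np.ones((2, 3), dtype=np.float32)
lora_b = np.ones((3, 2), dtype=np.float32)
delta = lora_a @ lora_b,       # note the trailing comma
assert isinstance(delta, tuple)
```

This is why the fix both stores the LoRA factors in float32 and casts only when composing them back into the (possibly lower-precision) kernel.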

@codecov-commenter

codecov-commenter commented Mar 27, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 83.27%. Comparing base (ebb7e78) to head (28a8fe9).
⚠️ Report is 1 commit behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master   #22559      +/-   ##
==========================================
+ Coverage   77.33%   83.27%   +5.94%     
==========================================
  Files         596      596              
  Lines       67828    67835       +7     
  Branches    10562    10562              
==========================================
+ Hits        52452    56487    +4035     
+ Misses      12612     8605    -4007     
+ Partials     2764     2743      -21     
Flag               Coverage Δ
keras              83.09% <100.00%> (+5.89%) ⬆️
keras-jax          59.82% <100.00%> (+<0.01%) ⬆️
keras-numpy        54.43% <46.15%> (-0.01%) ⬇️
keras-openvino     51.70% <46.15%> (-0.01%) ⬇️
keras-tensorflow   61.14% <100.00%> (?)
keras-torch        60.00% <100.00%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

@james77777778 force-pushed the use-float32-for-lora-weights branch from a9a06aa to 7722450 on March 27, 2026 06:09
@james77777778 force-pushed the use-float32-for-lora-weights branch from 7722450 to 28a8fe9 on March 27, 2026 06:20
@keerthanakadiri added the stat:awaiting keras-eng (Awaiting response from Keras engineer) label on Mar 27, 2026
@hertschuh added the keras-team-review-pending (Pending review by a Keras team member) label on Mar 31, 2026

Labels

keras-team-review-pending (Pending review by a Keras team member) · size:M · stat:awaiting keras-eng (Awaiting response from Keras engineer)


5 participants