Mixed precision methods (the `32Mixed` variants) use Float32 internally, so their
results are less accurate than full Float64 precision. Changed the tolerance for
these methods in the allocation tests from 1e-10 to 1e-5 to account for the
expected precision loss.
Also added proper imports for the mixed precision types.
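The scale of the tolerance change can be sanity-checked from IEEE 754 alone: Float32 has a machine epsilon of about 1.2e-7, so even a single value stored at Float32 precision can differ from its Float64 counterpart by more than 1e-10, while 1e-5 leaves headroom for error accumulated across a solve. A minimal illustration (in Python rather than Julia, using a `struct` round-trip to emulate Float32 storage; the helper name is ours, not from the codebase):

```python
import struct

def to_float32(x: float) -> float:
    # Round-trip through IEEE 754 binary32 to emulate storing x as Float32
    return struct.unpack('f', struct.pack('f', x))[0]

x = 1.0 / 3.0            # a value with no exact binary representation
x32 = to_float32(x)      # the same value truncated to Float32 precision
err = abs(x32 - x)

# The representation error alone already exceeds the old 1e-10 tolerance,
# while remaining comfortably inside the new 1e-5 tolerance.
print(err)
```

Running this shows an error of roughly 1e-8, which is why a 1e-10 tolerance cannot pass for methods that carry intermediate values in Float32.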
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>