refactor: unify learning rate schedulers with array API
- Refactor BaseLR in dpmodel to use array_api_compat for a backend-agnostic implementation
- Consolidate learning rate logic from the TF/PT/PD backends into the unified dpmodel layer
- Use array API operations (xp.where, xp.clip, etc.) for JIT compatibility across backends (see the sketch after this list)
- Add warmup support (warmup_steps, warmup_ratio, warmup_start_factor) as part of the refactor
- Add a stop_ratio parameter as an alternative to stop_lr for more flexible configuration
- Validate that stop_lr/stop_ratio and warmup_steps/warmup_ratio are each mutually exclusive
- Update all backends to use the unified BaseLR implementation
- Add comprehensive consistency tests across the NumPy/PyTorch/JAX/array_api_strict backends (a minimal example follows the list)
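
The commit message names the parameters but not the schedule itself, so the following is only a minimal sketch of what the unified scheduler could look like: it assumes an exponential decay toward the stop learning rate (the scheme deepmd-kit has historically used) plus a linear warmup, and the constructor signature and defaults are hypothetical rather than copied from the actual implementation.

```python
import math

import array_api_compat


class BaseLR:
    """Sketch of a backend-agnostic LR schedule (assumed exponential decay)."""

    def __init__(
        self,
        start_lr,
        stop_steps,
        stop_lr=None,
        stop_ratio=None,          # alternative to stop_lr: floor = start_lr * stop_ratio
        warmup_steps=None,
        warmup_ratio=None,        # alternative to warmup_steps: fraction of stop_steps
        warmup_start_factor=0.0,  # LR begins at warmup_start_factor * start_lr
    ):
        # stop_lr and stop_ratio set the same floor, so exactly one must be given
        if (stop_lr is None) == (stop_ratio is None):
            raise ValueError("exactly one of stop_lr / stop_ratio must be set")
        # warmup_steps and warmup_ratio are likewise mutually exclusive
        if warmup_steps is not None and warmup_ratio is not None:
            raise ValueError("warmup_steps and warmup_ratio are mutually exclusive")
        self.start_lr = float(start_lr)
        self.stop_lr = float(stop_lr) if stop_lr is not None else self.start_lr * stop_ratio
        if warmup_ratio is not None:
            warmup_steps = int(stop_steps * warmup_ratio)
        self.warmup_steps = int(warmup_steps or 0)
        self.warmup_start_factor = float(warmup_start_factor)
        # per-step decay rate chosen so the LR reaches stop_lr at stop_steps
        decay_span = max(stop_steps - self.warmup_steps, 1)
        self.decay_rate = (self.stop_lr / self.start_lr) ** (1.0 / decay_span)

    def value(self, step):
        """Learning rate at `step`, for any array API compliant array."""
        xp = array_api_compat.array_namespace(step)
        step = xp.astype(step, xp.float64)
        # linear warmup from warmup_start_factor * start_lr up to start_lr
        frac = step / max(self.warmup_steps, 1)
        warmup_lr = self.start_lr * (
            self.warmup_start_factor + (1.0 - self.warmup_start_factor) * frac
        )
        # exponential decay after warmup, floored at stop_lr; exp/clip/where
        # keep the computation branch-free and hence traceable under JIT
        decay_lr = self.start_lr * xp.exp(
            math.log(self.decay_rate) * (step - self.warmup_steps)
        )
        decay_lr = xp.clip(decay_lr, self.stop_lr, self.start_lr)
        return xp.where(step < self.warmup_steps, warmup_lr, decay_lr)
```

Because `array_api_compat.array_namespace` resolves the namespace from the input array, the same `value` method serves every backend; the per-backend scheduler classes then only need to wrap it.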
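
A cross-backend consistency check in the spirit of the added tests might look like the snippet below, here comparing the NumPy and PyTorch backends against each other; the sample points, tolerance, and parameter values are purely illustrative.

```python
import numpy as np
import torch

# same hypothetical BaseLR as in the sketch above
lr = BaseLR(
    start_lr=1e-3, stop_ratio=1e-3, stop_steps=10_000,
    warmup_steps=100, warmup_start_factor=0.1,
)
steps = np.arange(0, 10_000, 50)
ref = lr.value(steps)                          # NumPy backend
out = lr.value(torch.asarray(steps)).numpy()   # PyTorch backend
np.testing.assert_allclose(out, ref, rtol=1e-10)
```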