
[Bug]: hadamard dtype and inplace transform #1631

@wenhuach21

Description


Problem Description

1. The Hadamard transform is performed in float64 in prior arts, while ours runs in bfloat16.
2. The weight transform could be conducted in place; there is no need to re-run the transform in each iteration of AutoRound tuning.
3. Handle shared layers, e.g. MoE experts and fused QKV projections.
4. Use real randomness for the rotation.
5. Fuse into AutoRound block-wise tuning, otherwise RAM usage is high.
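Points 1 and 2 can be illustrated with a minimal numpy sketch (not the actual AutoRound implementation): build the Hadamard matrix, do the matmul in float64, and write the result back into the weight buffer in place so it happens once rather than per tuning iteration. Here `transform_weight_inplace` is a hypothetical helper, and float16 stands in for bfloat16 since numpy has no native bfloat16.

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    assert n > 0 and (n & (n - 1)) == 0, "n must be a power of two"
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def transform_weight_inplace(w: np.ndarray) -> None:
    """Apply the normalized Hadamard rotation to w once, in place (point 2),
    doing the matmul in float64 (point 1) before casting back to w's dtype."""
    n = w.shape[-1]
    H = hadamard(n) / np.sqrt(n)          # orthonormal rotation
    rotated = w.astype(np.float64) @ H    # high-precision matmul
    w[...] = rotated.astype(w.dtype)      # write back in place, original dtype

# usage: float16 stands in for bfloat16 here
w = np.random.randn(4, 8).astype(np.float16)
w_before = w.copy()
transform_weight_inplace(w)
# the normalized Sylvester Hadamard is symmetric and orthonormal, so
# applying it twice recovers the original weight up to rounding
transform_weight_inplace(w)
assert np.allclose(w, w_before, atol=1e-2)
```

Because the rotation is applied once and the buffer is mutated in place, the tuning loop only ever sees the already-transformed weight, avoiding the repeated per-iteration transform cost noted in point 2.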

Reproduction Steps

~

Environment Information

~

Error Logs

~

Additional Context

No response

Metadata

Assignees

Labels

bug: Something isn't working

Type

No type

Projects

No projects

Milestone

No milestone

Relationships

None yet

Development

No branches or pull requests
