This issue lists the short-term and long-term plans for functional support of floating-point operations in PIMeval.
Common:
- PIMeval defines PIM_FP8, PIM_FP16, and PIM_BF16 data type enums (see the sketch after this list)
- PIMeval provides correct performance and energy results based on the data type enums, regardless of how we simulate the functional computation behavior
- Performance modeling for bit-serial PIM requires non-trivial effort
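
A minimal sketch of what the enum additions might look like; the enum name and the surrounding members are assumptions for illustration, not PIMeval's actual definitions:

```cpp
// Hypothetical sketch of the data type enum additions.
enum PimDataType {
  PIM_INT32,   // existing integer type (placeholder)
  PIM_FP32,    // existing float type (placeholder)
  // New entries covered by this issue:
  PIM_FP8,
  PIM_FP16,
  PIM_BF16,
};
```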
Option 1:
- The user writes PIMeval apps with 4-byte float (optionally, we can typedef fp8/fp16/bf16 to float)
- PIMeval performs float operations for functional simulation
- Optionally, PIMeval can perform precision saturation and range handling to keep float results within the fp8/fp16/bf16 ranges (see the sketch after this list). If we don’t do this, we still get correct performance and energy estimation, although the results are not truly fp8/fp16/bf16
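
A minimal sketch of Option 1, assuming the typedef names, the clamp helper, and the fp8 format choice (E4M3, max finite 448) are all illustrative rather than decided:

```cpp
#include <algorithm>
#include <cmath>

// Option 1 sketch: apps use plain float; the typedefs are only labels.
typedef float pimFp8;
typedef float pimFp16;
typedef float pimBf16;

// Optional, hypothetical range handling: saturate a float result to the
// max finite value of the target format (fp8/E4M3: 448, fp16: 65504;
// bf16 shares float's exponent range, so clamping is rarely needed).
inline float pimClampToRange(float v, float maxFinite)
{
  if (std::isnan(v)) return v;                 // propagate NaN unchanged
  return std::clamp(v, -maxFinite, maxFinite); // saturate out-of-range values
}

// Example: keep a float result within the fp16 representable range.
// float y = pimClampToRange(x * w, 65504.0f);
```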
Option 2:
- The user writes PIMeval apps with custom data types defined by PIMeval, e.g., PIMeval can implement fp8/fp16/bf16 types using 1/2/2 bytes with operator overloading
- PIMeval performs fp8/fp16/bf16 operations for functional simulation. In the overloaded operators, we can convert the 1/2/2-byte representation to float, do the computation, and convert back (see the sketch after this list). Other IEEE-compliant implementations are also possible.
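
A minimal sketch of Option 2 for bf16 only (chosen because its conversion is a simple truncation of the float32 bit pattern); the class name and rounding mode are illustrative choices, not a fixed design:

```cpp
#include <cstdint>
#include <cstring>

// Option 2 sketch: a 2-byte bf16 type with operator overloading.
struct PimBF16 {
  uint16_t bits;  // upper 16 bits of the float32 representation

  PimBF16() : bits(0) {}
  PimBF16(float f) {
    uint32_t u;
    std::memcpy(&u, &f, sizeof(u));
    // Round to nearest even, then keep the upper 16 bits.
    // (NaN handling omitted in this sketch.)
    u += 0x7FFFu + ((u >> 16) & 1u);
    bits = static_cast<uint16_t>(u >> 16);
  }
  operator float() const {
    uint32_t u = static_cast<uint32_t>(bits) << 16;
    float f;
    std::memcpy(&f, &u, sizeof(f));
    return f;
  }
};

// Overloaded operators: widen to float, compute, and narrow back to bf16.
inline PimBF16 operator+(PimBF16 a, PimBF16 b) { return PimBF16(float(a) + float(b)); }
inline PimBF16 operator*(PimBF16 a, PimBF16 b) { return PimBF16(float(a) * float(b)); }
```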
Option 3:
- The user uses 1/2/2-byte fp8/fp16/bf16 data types from a third-party library that provides a portable implementation of these types
- PIMeval performs fp8/fp16/bf16 operations using the same third-party library for functional simulation (see the sketch below)
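
As one possible illustration of Option 3 (the specific library is an assumption; this issue does not name one), the half library (half.hpp) provides a portable 2-byte IEEE half type usable as fp16; fp8 and bf16 would need a library that covers those formats as well:

```cpp
#include <half.hpp>  // third-party header-only library (half.hpp)

// Usage sketch with half_float::half standing in for fp16.
int main()
{
  using half_float::half;
  half a(1.5f), b(0.25f);   // 2-byte storage, constructed from float
  half c = a * b + b;       // arithmetic via the library's overloaded operators
  float cf = c;             // convert back to float when needed
  return cf > 0.0f ? 0 : 1;
}
```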