gpl: modify timing-driven max weight #6862
Signed-off-by: Augusto Berndt <[email protected]>
clang-tidy review says "All clean, LGTM! 👍"
I have made a table with data from the dashboard showing the impact on TNS and wirelength, with the percentage difference between master and this change. Looking at the column for setup TNS after GPL, the improvements appear larger than the degradations. Furthermore, this change should unblock @AcKoucher's mpl-soft-sa-resizing change, which presented a large slack degradation for a private GF12 design. I tested multiple TD max weight values on that design, and 5 seems to resolve the issue without increasing wirelength too much.
This change was shown to reduce the slack degradation observed on bp_single in #6697 (comment).
This PR increases the timing-driven maximum weight for instances that are part of the 10% worst slack nets. The value has been raised from 1.9 to 5.
gpl balances HPWL against overflow (density) during placement. The increased timing-driven weight amplifies movement driven by HPWL rather than overflow; in effect, it prioritizes pulling instances on the worst-slack nets closer together.
Experiments showed that increasing this value too far leads to higher wirelength. I believe this happens because we currently apply the weight to all instances in a bad-slack net; a more precise approach would apply the weight only to the problematic instances (thanks to @povik for identifying this important detail in gpl's timing-driven mode).
The 10% threshold is also adjustable via a Tcl script, but increasing it does not appear to improve slacks.
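To illustrate the idea, here is a minimal C++ sketch of the weighting scheme described above: rank nets by slack and boost the HPWL weight of the worst 10% up to the new maximum of 5. All names (`Net`, `assignTimingWeights`, the fields) are hypothetical and do not reflect gpl's actual code; this is only a sketch of the technique, not the implementation.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical net record: worst slack on the net and its placement weight.
struct Net {
  double slack;   // worst slack seen on the net (negative = violating)
  double weight;  // multiplier applied to this net's HPWL term
};

// Boost the weight of the worst `percentile` fraction of nets by slack,
// up to `max_weight` (the value this PR raises from 1.9 to 5).
void assignTimingWeights(std::vector<Net>& nets,
                         double percentile = 0.10,
                         double max_weight = 5.0)
{
  if (nets.empty()) {
    return;
  }
  // Sort net indices by slack, most negative (most critical) first.
  std::vector<size_t> order(nets.size());
  for (size_t i = 0; i < order.size(); ++i) {
    order[i] = i;
  }
  std::sort(order.begin(), order.end(), [&](size_t a, size_t b) {
    return nets[a].slack < nets[b].slack;
  });
  const size_t n_critical
      = std::max<size_t>(1, std::llround(percentile * nets.size()));
  for (size_t rank = 0; rank < order.size(); ++rank) {
    // Critical nets get the boosted max weight; the rest keep the default.
    nets[order[rank]].weight = (rank < n_critical) ? max_weight : 1.0;
  }
}
```

Note that this sketch still weights every instance on a critical net uniformly; the refinement suggested above would restrict the boost to only the problematic instances within those nets.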