Replies: 2 comments
I believe what bazel is trying to tell me is that with infinite parallelism, the tests could have run about 5x faster (529/112). I don't understand why mock-array isn't showing up on the critical path. Is it because the build server is running on some highly parallel machine, so that the detailed routing in mock-array is sped up enormously and therefore is not on the critical path?
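As a sanity check on that ratio, here is a minimal sketch, assuming 529 s is the summed action time and 112 s is the critical-path time reported by the profile:

```python
# Hedged sketch: the ratio of total action time to critical-path time
# bounds how much faster the build could go with unlimited parallelism.
# The 529/112 figures are the ones quoted above.
total_action_seconds = 529
critical_path_seconds = 112

max_speedup = total_action_seconds / critical_path_seconds
print(f"upper bound on speedup from more parallelism: {max_speedup:.1f}x")  # ~4.7x
```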
When going from a 16-thread to a 48-thread CPU on a workstation (nothing else is running), the total / critical ratio is ca. 1x. Python doesn't work on my machine, so I can't run
In bazel, we now get a trace of the critical path. There are lots of activities running in parallel in a DAG in bazel, so the critical path is what determines the longest running time.
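To illustrate what that means, here is a minimal, hypothetical Python sketch of a critical-path computation over a toy action DAG (the action names and durations are made up, not taken from the actual profile):

```python
from functools import lru_cache

# Hypothetical action graph: action -> (duration in seconds, dependencies).
# The critical path is the dependency chain with the largest summed duration;
# it lower-bounds total wall time no matter how many actions run in parallel.
actions = {
    "synth":     (40, []),
    "floorplan": (20, ["synth"]),
    "place":     (60, ["floorplan"]),
    "route":     (300, ["place"]),
    "tests":     (30, ["synth"]),  # runs in parallel with the P&R chain
}

@lru_cache(maxsize=None)
def finish_time(action: str) -> int:
    duration, deps = actions[action]
    return duration + max((finish_time(d) for d in deps), default=0)

critical_end = max(actions, key=finish_time)
print(critical_end, finish_time(critical_end))  # route 420
```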
A neat testing feature that has been added to OpenROAD, and where Bazel shines, is running per-module power testing using the -hier feature and reading in a .vcd file from a simulation of the gate netlist (after CTS, with estimated parasitics, as well as with final real parasitics). This test is quite involved in the number of tooling dependencies it has, and there is no other automated test that brings this all together. This test case is optimized for maximum test coverage and minimum running time. Getting power for a non-trivial workload on a real design could easily take on the order of 24 hours, so 5 minutes is pretty good for the coverage it gives. mock-array has a long history in ORFS as a productive test case, as it is the culmination of many tweaks to cover various aspects (such as having pins on multiple metal layers, automated macro layout with routing by abutment, and whatnot).
Running-time-wise, it does mean that detailed routing has to run, which is the slowest step for mock-array. mock-array has a good floorplan now (space for the flip-flops and buffers used on the IOs).
It is a bit surprising that detailed routing isn't faster when there is so much routing by abutment going on and the rest of the routing is essentially just horizontal and vertical wires.
These are running times from a local build:
`bazelisk test test/orfs/mock-array/... --profile=build.profile && bazelisk analyze-profile build.profile`
If I run the same command again, I get the running times for the case where everything is cached.
I don't think bazel can tell me what the critical path is for the running times of the cached items...
Looking at a recent build on the server, I get some surprising running times... https://jenkins.openroad.tools/job/OpenROAD-Public/job/PR-8543-merge/1/pipeline-overview/?selected-node=57
Why does the below take the most time???
What is going on here? This takes 0 seconds locally:
I suspect that the short running times here are explained by caching. This is a Python test case that failed(?) in the first iteration of this PR?