feat: add constraint stream profiling #1901
Conversation
I like this in principle! When it comes to the implementation, I have concerns. See specifically the comment regarding profiling mode.
I also see this hooks itself into tuple lifecycle. But I don't see it tracking propagation overhead anywhere. Arguably it should.
The logging aspect is OK, but I am missing any methods to consume this information at runtime. How about Micrometer metrics? Or custom JFR events? Thinking out loud here: if it triggered an event (micrometer / JFR) every time it finished a single measurement, these could be aggregated at the end as well as queried in-flight. More often than not, we will be run in the platform - people will want a way of monitoring their solvers.
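Purely as an illustration of that idea, here is a minimal sketch of a custom JFR event that could be committed once per finished measurement. The event name and fields are hypothetical and not something this PR defines:

```java
import jdk.jfr.Category;
import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

// Hypothetical event type (not part of this PR): emitted once per finished
// measurement, so it can be aggregated after solving with standard JFR
// tooling, or consumed in-flight via jdk.jfr.consumer.RecordingStream.
@Name("ai.timefold.solver.ConstraintTiming")
@Label("Constraint Timing")
@Category("Timefold Solver")
public class ConstraintTimingEvent extends Event {

    @Label("Constraint")
    public String constraintId;

    @Label("Nanoseconds spent")
    public long durationNanos;
}
```

Such events could then be aggregated at the end from the recording, or watched in-flight with `jdk.jfr.consumer.RecordingStream#onEvent`; a Micrometer `Timer` per constraint would serve the same purpose on the metrics side.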
I question the name "profiling"; this doesn't use a profiler, and doesn't work in a way in which profilers work. The name should be changed in order to not invite confusing comparisons with actual profiling.
core/src/main/java/ai/timefold/solver/core/config/score/director/ConstraintProfilingMode.java (outdated, resolved)
...src/test/java/ai/timefold/solver/core/preview/api/move/builtin/ListSwapMoveProviderTest.java (outdated, resolved)
Regarding tracking propagation overhead: a previous implementation did track it (by wrapping all the Propagators). However, that is a form of double counting; the Propagators fundamentally call lifecycles, which are already profiled. Additionally, Propagators correspond to parents of nodes rather than to the nodes themselves, which can cause absurdities such as …
Not sure about that. These lifecycles account for a lot of the time, true. But propagation also deals with the queues and iterates a lot. IMO it's not necessarily a double count.
Which is another good argument that this should not profile per-method or per-line, but per-constraint. It doesn't matter which part takes the time; it's the constraint that matters.
Force-pushed from 491b14c to ad8deff: …ng or without a constraint provider
</xs:simpleType>
<xs:simpleType name="constraintProfilingMode">
Having read your previous comments, I still believe this is needless complexity, but I have a compromise proposal.
We remove this setting - there is no need for people to choose between these options. More importantly, there is no reason why these should be mutually exclusive. No, if I were doing this profiling, I'd want all the information I can get, as opposed to running the thing several times with different configs.
So, here's my proposal:
- The setting goes away. It becomes a boolean - enabled or disabled.
- Under the hood, we keep two ways of profiling; by constraint, and then by node. Both are active at the same time.
- When printing the outputs, we print a breakdown. We start with how much time is taken by the constraint, and then one level down, we break it down per node.
- By line and by method go away. This is an advanced feature and, much like node network visualization, it requires some understanding of the underlying internals. If we deal with nodes only, it will be both correct and consistent with the visualization.
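To illustrate the shape of that two-level breakdown, here is a rough sketch of an accumulator that records per-constraint and per-node time from a single run. All names are hypothetical, not the PR's actual code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical accumulator: every measurement is attributed to a constraint
// and, one level down, to the node that produced it, so both views are
// available from a single profiling run.
final class ConstraintProfile {

    private final Map<String, Map<String, LongAdder>> nanosByConstraintAndNode =
            new ConcurrentHashMap<>();

    void record(String constraintId, String nodeId, long nanos) {
        nanosByConstraintAndNode
                .computeIfAbsent(constraintId, k -> new ConcurrentHashMap<>())
                .computeIfAbsent(nodeId, k -> new LongAdder())
                .add(nanos);
    }

    // Total for the constraint-level line of the printed breakdown.
    long totalNanos(String constraintId) {
        return nanosByConstraintAndNode.getOrDefault(constraintId, Map.of())
                .values().stream()
                .mapToLong(LongAdder::sum)
                .sum();
    }
}
```

Printing would then start from `totalNanos` per constraint and descend into the per-node map for the second level of the breakdown.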
<xs:element minOccurs="0" name="constraintStreamAutomaticNodeSharing" type="xs:boolean"/>
<xs:element minOccurs="0" name="constraintStreamProfilingMode" type="tns:constraintProfilingMode"/>
I think we need a better name than "profiling".
It confuses the terms - there is no profiler here and no profile; IMO this name does more harm than good.
There is a profile though; it is a flat profile: https://sourceware.org/binutils/docs/gprof/Flat-Profile.html
This is what a profile looks like now. The by-location percentages may add up to over 100% due to node sharing; the rest should always add up to 100%.
|







Profiling constraints is notoriously difficult, since each component of a constraint is converted to nodes, some of which are shared. As a result, a JVM method profile is basically unreadable and does not represent how much time is actually spent in each constraint.
To aid in profiling, an optional constraintStreamProfilingMode configuration was added. If set to a value other than NONE, it wraps each tuple lifecycle node inside a ProfilingTupleLifecycle, which measures how long each lifecycle executes. The ProfilingTupleLifecycle finds out which constraint is responsible for creating that lifecycle by taking a snapshot of the stack trace of its constraint stream's creator (when a constraint stream is shared, the stack traces are merged into the same set).
At the end of solving, a profiling summary is produced in the INFO log. The details differ depending on the profiling mode:
- In the BY_METHOD profiling mode, (className, methodName) is used as the key.
- In the BY_LINE profiling mode, (className, methodName, lineNumber) is used as the key.
The methods/lines are printed in descending order of the percentage of time spent. The sum of these percentages may exceed 100%, since methods/lines can share time spent with other methods/lines.
timefold.solver.constraint-stream-profiling-mode was added as a property to Quarkus and Spring Boot to configure profiling (defaults to NONE).
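As a rough illustration of the stack-trace snapshot described above (a simplified stand-in, not the actual ProfilingTupleLifecycle code), the creation site of a constraint stream could be captured with StackWalker and used as the (className, methodName) or (className, methodName, lineNumber) key:

```java
import java.util.Optional;

// Hypothetical helper: capture the first user-code frame at the point where a
// constraint stream is created, skipping solver-internal frames, so measured
// time can later be keyed by (className, methodName) or by
// (className, methodName, lineNumber).
final class ConstraintStreamCreatorLocation {

    record Location(String className, String methodName, int lineNumber) {
    }

    static Optional<Location> capture() {
        return StackWalker.getInstance().walk(frames -> frames
                .skip(1) // skip this capture() frame itself
                // The package filter is an assumption made for this sketch.
                .filter(frame -> !frame.getClassName().startsWith("ai.timefold.solver"))
                .findFirst()
                .map(frame -> new Location(
                        frame.getClassName(),
                        frame.getMethodName(),
                        frame.getLineNumber())));
    }
}
```

In the shared-stream case described above, several such locations captured for the same node would then be merged into one set before the summary is printed.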