
Conversation

alonme
Contributor

@alonme commented Aug 29, 2025

Hey @markshannon!

I am not sure if this is what you meant by "perhaps in the jit itself"; however, it made sense to me to clean up when more executors are created, so that we can keep the number of executors reasonable.

Also, I'm unsure whether this is the best point in the process to perform the check.

I had another idea of moving the check to ENTER_EXECUTOR, which is executed less frequently than MAKE_WARM. Would that be a better point to perform the check?

@savannahostrowski
Member

Hey - thanks for looking into this!

Hmm, I'm unsure about this approach: moving the counter from _MAKE_WARM to uop_optimize() fundamentally changes what we're measuring, from executor executions to executor creations. My worry is that this will result in us losing any memory savings we originally gained.

@markshannon - I understand the motivation from #138050, but could you clarify: is the counter decrement/check measurably expensive, or is this more of a "nice to have" simplification?

Of course, we should benchmark this to understand the implications, though!

@markshannon
Member

@savannahostrowski Yes, it is expensive. Full answer

@markshannon
Member

markshannon commented Oct 10, 2025

Regarding benchmarking: in this case I don't think we need to, at least not for time. The generated machine code is clearly better, and the cost added to the optimiser is effectively zero: each trace is run 100k times on average, according to our stats.
Memory use is much less clear.

I'm inclined to merge this, and come back to managing memory use when the JIT has the new tracing front-end and the optimizer is more complete.

@markshannon
Member

@alonme do you want to make this not a draft?

@alonme marked this pull request as ready for review October 10, 2025 10:45
@alonme requested a review from markshannon as a code owner October 10, 2025 10:45
@alonme
Contributor Author

alonme commented Oct 10, 2025

@savannahostrowski thanks for the review!
I agree with your points; however, I think there is also an upside: by checking executor creations we get a much clearer maximum memory overhead.

@markshannon done!
Regarding what you said about benchmarking this at this point - should we just leave the current value of JIT_CLEANUP_THRESHOLD as is without tuning it?
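To illustrate the point about a clearer maximum memory overhead, here is a minimal, hypothetical Python sketch (the class, method, and field names are invented for illustration and are not CPython internals; only the JIT_CLEANUP_THRESHOLD name comes from the PR): when the cleanup check runs on executor *creation*, the number of executors that can pile up between cleanups is bounded by the threshold, regardless of how often any one executor runs.

```python
# Hypothetical model of creation-counted cleanup; not CPython code.

class JitState:
    JIT_CLEANUP_THRESHOLD = 4  # illustrative value, not the tuned one

    def __init__(self):
        self.executors = []
        self.creations_since_cleanup = 0

    def make_executor(self, trace):
        """Count *creations* (this PR's approach): after every
        JIT_CLEANUP_THRESHOLD new executors, run a cleanup pass."""
        self.executors.append(trace)
        self.creations_since_cleanup += 1
        if self.creations_since_cleanup >= self.JIT_CLEANUP_THRESHOLD:
            self.cleanup()
            self.creations_since_cleanup = 0

    def cleanup(self):
        # Stand-in for freeing cold executors; here we simply drop them all,
        # so the live-executor count never exceeds the threshold.
        self.executors.clear()

state = JitState()
for i in range(10):
    state.make_executor(f"trace-{i}")

# 10 creations with threshold 4: cleanups fire after the 4th and 8th,
# leaving the last 2 executors live.
assert len(state.executors) == 2
```

By contrast, counting *executions* (as _MAKE_WARM did) ties cleanup frequency to how hot the code is, so a workload that creates many executors but runs each one rarely could accumulate executors for a long time before any cleanup triggers.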

@markshannon
Member

If you think you can tune it now, go ahead.

But we will need to retune it once we've made our planned changes to the optimizer, so tuning it right now isn't that important.

@alonme
Contributor Author

alonme commented Oct 10, 2025

I will only be near a computer in a week, so if it's not a must, let's go ahead.

@savannahostrowski
Member

savannahostrowski commented Oct 11, 2025

@markshannon Thanks for elaborating.

I'm inclined to merge this, and come back to managing memory use when the JIT has the new tracing front-end and the optimizer is more complete.

This sounds reasonable to me.

@alonme
Contributor Author

alonme commented Oct 16, 2025

@savannahostrowski @markshannon
Addressed the renaming comment. Is there anything else before we can merge this?

Member

@savannahostrowski left a comment


Based on the earlier conversation, I think this looks good.

3 participants