
Conversation

@original-brownbear (Contributor)

We can release resources earlier by releasing before responding to the listener; everything that needs retaining is already retained via the search response. This also makes the GC's life a little easier and gives obvious correctness, by using a listener that nulls out resources in a thread-safe manner instead of a non-thread-safe, mutable list shared across all kinds of places in the code (currently this pretty much works out by accident, because we only add resources when moving to new phases).

This also fixes a potential bug around adding a releasable resource after the listener has already failed.
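The pattern described above can be sketched in plain Java. This is a minimal sketch, not the actual Elasticsearch implementation: `Releasable` here is a hypothetical stand-in for Elasticsearch's interface of the same name, and a plain lock stands in for the listener machinery. The key idea is that the resource list is nulled out atomically on completion, so a releasable added after the request has finished (or failed) is closed immediately instead of leaking:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for Elasticsearch's Releasable interface.
interface Releasable {
    void close();
}

// Sketch of the technique: track resources in a list that is nulled out
// under a lock when the request completes. A releasable added after
// completion is closed right away rather than leaking into an
// already-finished (or already-failed) request.
class ResourceHolder {
    private List<Releasable> resources = new ArrayList<>(); // null once released

    synchronized void addReleasable(Releasable releasable) {
        if (resources == null) {
            releasable.close(); // too late to track it: release immediately
        } else {
            resources.add(releasable);
        }
    }

    // Called once, before responding to the listener (success or failure).
    void releaseAll() {
        List<Releasable> toClose;
        synchronized (this) {
            toClose = resources;
            resources = null; // let the GC reclaim the list; late adds now close eagerly
        }
        if (toClose != null) {
            for (Releasable r : toClose) {
                r.close();
            }
        }
    }
}
```

Calling `releaseAll()` before (rather than after) sending the response matches the "release earlier" point in the description: anything the response still needs is retained by the response object itself, so the tracked resources can be dropped first.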

@elasticsearchmachine (Collaborator)

Pinging @elastic/es-search-foundations (Team:Search Foundations)

@elasticsearchmachine added the Team:Search Foundations label on Mar 6, 2025
if (doneFuture.isDone()) {
    releasable.close();
} else {
    doneFuture.addListener(ActionListener.releasing((releasable)));
Contributor:

Minor: there is a pair of extra parentheses around releasable.

*/
public void addReleasable(Releasable releasable) {
    releasables.add(releasable);
    var doneFuture = this.doneFuture;
Contributor:

Should a check be added here to verify that the releasable is not null?

@original-brownbear (Author), Mar 13, 2025:

No need at this point, I think; all the spots we're using this for are null-safe. And we really shouldn't be adding more uses of this :) It's quite imprecise; many of the things we add here could probably be released earlier in their lifecycle. => no new nulls incoming either 🤞
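The isDone()-then-addListener pattern in the diff above is safe against a completion that slips in between the two calls only if the future runs late subscribers immediately. Using `java.util.concurrent.CompletableFuture` as a stand-in for the `doneFuture` in the diff (an assumption; the actual Elasticsearch listener type may differ), that is exactly what happens, so the `isDone()` branch is a fast path rather than a correctness requirement:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;

public class DoneFutureDemo {
    public static void main(String[] args) {
        AtomicInteger closed = new AtomicInteger();
        CompletableFuture<Void> doneFuture = new CompletableFuture<>();

        // Future not yet complete: the "release" callback is deferred.
        doneFuture.thenRun(closed::incrementAndGet);
        System.out.println(closed.get()); // 0

        // Completing the future runs the pending callback.
        doneFuture.complete(null);
        System.out.println(closed.get()); // 1

        // Subscribing after completion runs the callback immediately, so
        // even a stale isDone() == false check cannot leak the resource.
        doneFuture.thenRun(closed::incrementAndGet);
        System.out.println(closed.get()); // 2
    }
}
```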

@drempapis (Contributor) left a comment:

LGTM, thanks for sharing this

@original-brownbear (Author)

Thanks Dimitris!

@original-brownbear added the auto-backport label on Mar 13, 2025
@original-brownbear merged commit 9f6042a into elastic:main on Mar 13, 2025
18 checks passed
@original-brownbear deleted the release-before branch on March 13, 2025 at 19:03
@elasticsearchmachine (Collaborator)

💚 Backport successful

Branch: 8.x

elasticsearchmachine pushed a commit that referenced this pull request Mar 13, 2025
…124800)

We can release resources earlier by releasing before responding to the
listener (everything that needs retaining is retained via the search
response already) as well as make the GCs life a little easier and get
obvious correctness by using the listener that nulls out resources in a
thread-safe manner instead of a non-thread-safe and mutable list shared
across all kinds of places in the code.

Labels

auto-backport — Automatically create backport pull requests when merged
>non-issue
:Search Foundations/Search — Catch all for Search Foundations
Team:Search Foundations — Meta label for the Search Foundations team in Elasticsearch
v8.19.0
v9.1.0
