Conversation

@brownbaerchen
Contributor

This is a feature that I added during the large refactor and that can be merged independently. It is not particularly useful as used in this PR, but it is used extensively in the refactored version.

The caching decorator returns a cached result if the function arguments match a previous call and can be used like this:

num_calls = 0

@cache
def increment(x):
    global num_calls  # without this, `num_calls += 1` raises UnboundLocalError
    num_calls += 1
    return x + 1

increment(0)  # returns 1, num_calls = 1
increment(1)  # returns 2, num_calls = 2
increment(0)  # returns 1, num_calls = 2

@tlunet
Member

tlunet commented Jun 17, 2025

Why don't you directly use functools.cache? I imagine it should do exactly the same thing ...

@brownbaerchen
Contributor Author

Why don't you directly use functools.cache? I imagine it should do exactly the same thing ...

Yes, I started out using that, but apparently there are some issues with memory leaks.
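For context (my own illustrative example, not code from the PR): the known issue is that functools.cache on a method includes self in its cache keys, so the cache on the class keeps every instance alive.

```python
import functools
import gc
import weakref


class Expensive:
    @functools.cache  # the cache's keys include `self`
    def compute(self, x):
        return x * 2


obj = Expensive()
obj.compute(1)
ref = weakref.ref(obj)
del obj
gc.collect()
# The cache stored on the class still references the instance,
# so it is never deallocated
assert ref() is not None
```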

@tlunet
Member

tlunet commented Jun 17, 2025 via email

@pancetta
Member

Did those leaks show up for you? Or why did you start implementing this on your own in the first place?

@brownbaerchen
Contributor Author

Did those leaks show up for you? Or why did you start implementing this on your own in the first place?

Ruff complains about this. The link leads to the Ruff documentation of the things it checks against. So then I told ChatGPT to write something, as one does, and it was quickly implemented.

I added a link to the Ruff documentation explaining why I don't just use functools.cache, and added a test for this decorator.

@pancetta pancetta merged commit e351091 into Parallel-in-Time:master Jun 17, 2025
47 checks passed
@tlunet
Member

tlunet commented Jun 17, 2025

But if I understood correctly, the problem comes up when you use the @cache decorator on a method rather than on a function ... which does not seem to be a problem with your implementation.

Although (just to be safe), can you add the following test to ensure that the cache does not keep a reference to the object, preventing it from being deallocated?

track = [0]

class KeepTrack:

    def __init__(self):
        track[0] += 1

    @cache
    def method(self, a, b, c=1, d=2):
        return f"{a},{b},c={c},d={d}"

    def __del__(self):
        track[0] -= 1


def function():
    obj = KeepTrack()
    for i in range(10):
        obj.method(1, 2, d=2)

for i in range(3):
    function()

assert track[0] == 0, "possible memory leak with the @cache"

I've tested it and it works with your decorator, but not with functools.cache.
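One way a caching decorator can pass a test like this (a sketch under my own assumptions, not necessarily what the PR does) is to store the cached results on the instance itself, so the cache is freed together with the object and nothing else holds a reference to it:

```python
import functools


def cache(func):
    """Sketch: memoize a method by storing results on the instance,
    so the cache holds no reference that outlives the object."""
    attr = f"_cache_{func.__name__}"  # hypothetical per-instance attribute name

    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        # The cache dict lives in the instance's __dict__ and dies with it
        store = self.__dict__.setdefault(attr, {})
        key = (args, tuple(sorted(kwargs.items())))
        if key not in store:
            store[key] = func(self, *args, **kwargs)
        return store[key]

    return wrapper


class Demo:
    @cache
    def add(self, a, b=0):
        return a + b


d = Demo()
assert d.add(1, b=2) == 3
assert d.add(1, b=2) == 3  # second call served from the per-instance cache
```

Because the only reference to the cache lives in the instance's `__dict__`, deleting the instance also deletes its cached results, so the `track[0] == 0` assertion above can hold.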

brownbaerchen added a commit to brownbaerchen/pySDC that referenced this pull request Jun 17, 2025
* Implemented caching wrapper for spectral helper

* Added test for caching decorator

3 participants