Add cache_method decorator #895
Conversation
Force-pushed from 08359bf to 3c5fe40
nkanazawa1989 left a comment
Thanks Chris. This suggestion seems like a good direction, but we still need to allow for some flexibility. For example, the current framework is missing the capability to check experiment options (self.experiment_options) and to cache outcomes across instances. This is beyond the requirements of the tomography fitter, but we must provide a flexible API so that #878 can update the mechanism based on its needs.
def cache_method(
    cache: Union[Dict, str] = "_cache", cache_args: bool = True, require_hashable: bool = True
The combination of two booleans is a bit harder to understand. Perhaps string-based behavior makes the interface more intuitive, such as first_time, hash_all, only_hashable. This would also allow more flexibility in the hashing mechanism. For example, another option we may want would be repr(arg) to make everything hashable, e.g. try_repr.
Personally I find the booleans easier to understand than string values. I don't think we should add more flexibility to the hashing mechanism. If anything, maybe require_hashable should be removed, so the option is just to use all args (and they must be hashable) or none.
If you remove that option, perhaps this is only applicable to static methods? Because the self of an experiment instance is not hashable.
self not being hashed is the design of this decorator, and the main reason why you need it instead of lru_cache. So it should only be used with regular methods, not static methods. For static methods you should be able to use a regular lru_cache without issues, since a static method behaves like a regular function.
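To illustrate the point above, here is a minimal sketch (my own example, not code from this PR) of why decorating an instance method with functools.lru_cache keeps self alive: the cache lives on the class-level wrapper and stores self as part of the cache key, so the instance can never be garbage-collected while the cache holds an entry.

```python
import functools
import gc
import weakref


class Leaky:
    @functools.lru_cache(maxsize=None)
    def compute(self, x):
        # The cache key is (self, x), so the shared cache
        # keeps a strong reference to every instance used.
        return x * 2


obj = Leaky()
obj.compute(1)
ref = weakref.ref(obj)

del obj
gc.collect()
# The instance is still alive: the class-level lru_cache references it.
print(ref() is None)  # False
```

This is the well-known reason why a method-aware cache (keyed per instance, with self excluded from the key) is preferable to lru_cache for regular methods.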
PS @nkanazawa1989 I will make a commit to remove the require_hashable kwarg to not overcomplicate this as you suggest.
@nkanazawa1989 I think there is a bit of a misunderstanding: this decorator is intended to be a replacement for lru_cache.
If this is just a bug fix of Python's lru_cache
I'm not familiar with the tomography use case. I'm aware of several use cases where we want to refrain from retranspiling. To this end, what we need is not a caching mechanism. Instead, it is sufficient to:
This supports caching regular methods of class instances, with optional support for including hashable arg values in the cache key.
Define a function for returning the method cache dict outside of the wrapped method so it doesn't need to be checked on every method call.
Force-pushed from 3c5fe40 to a397fcf
def _cache_fn(instance, method):
    # pylint: disable = unused-argument
    name = method.__name__
@nkanazawa1989 I wonder if this should be method.__qualname__ instead? __qualname__ includes the class name in the string, like <cls_name>.<method_name>, rather than just <method_name>.
I think __qualname__ would be useful if we want to support a class-level cache in the future. Also, you can validate that the method is not a plain function.
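For reference, the difference between the two attributes discussed above (a generic example, not code from this PR):

```python
class Foo:
    def bar(self):
        pass


# __name__ is just the method name; __qualname__ prefixes the class name,
# which disambiguates methods of the same name across classes.
print(Foo.bar.__name__)      # "bar"
print(Foo.bar.__qualname__)  # "Foo.bar"
```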
The proposal of #895 (comment) may require more thinking. Still, I'd like to revisit the caching mechanism that you're building here. You're putting effort into it, and we'll have to maintain it. What's the motivation? Is it justified? It seems to be beyond the scope of qiskit-experiments. The reuse of transpiled circuits is a modest goal that can do with a small solution, probably along the lines of #895 (comment), or something else of the same magnitude.
This is a cache mechanism to store some internal state. In QE, some helper functions (methods) might be called multiple times, so it's sometimes better to cache them for performance. For example, dedicated logic that caches the transpiled circuits doesn't speed up the analysis class, and we may need huge memory space if users generate circuits with multiple settings (if we do in-memory caching). As Chris wrote in the PR comment, this is mainly to overcome a memory leak issue in the current Python LRU cache (I don't know the details). I think Chris is currently looking into a different approach.
Closing this for #997 |
Summary
Adds a cache_method decorator that generalizes lru_cache for caching methods of class instances (since apparently lru_cache can have memory leaks when used in this way, if it works at all). This is based on the suggested solution for caching experiment methods in this comment.
Details and comments
By default this decorator requires all method arg and kwarg values to be hashable, and they are included in the cache key for matching. However, setting cache_args=False on the decorator will ignore args and kwargs and match only on the method name. Alternatively, the decorator can be called with require_hashable=False, which will allow non-hashable args while matching on all hashable args and kwargs.
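A rough sketch of the behavior described above, assuming the cache_method and cache_args names from this PR. This is my own approximation, not the PR's actual implementation (which was superseded by #997): it only handles a string cache attribute name (the real signature also accepts a Dict), and it omits require_hashable, which the review discussion above suggests removing.

```python
import functools


def cache_method(cache="_cache", cache_args=True):
    """Sketch: cache results of a regular method on the instance itself.

    The cache dict lives in an instance attribute (default ``_cache``),
    so ``self`` is never hashed and never held by any shared cache.
    """
    def decorator(method):
        name = method.__name__  # the PR discussion suggests __qualname__

        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            store = getattr(self, cache, None)
            if store is None:
                store = {}
                setattr(self, cache, store)
            # Match on the method name only, or on name + arg values.
            if cache_args:
                key = (name, args, tuple(sorted(kwargs.items())))
            else:
                key = name
            if key not in store:
                store[key] = method(self, *args, **kwargs)
            return store[key]

        return wrapper

    return decorator


class Example:
    calls = 0

    @cache_method()
    def double(self, x):
        Example.calls += 1
        return 2 * x
```

With this sketch, repeated calls with the same args hit the per-instance cache: `Example().double(3)` computes once, and a second `double(3)` on the same instance returns the stored value without calling the body again.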