Here’s a version that runs faster by avoiding the overhead of `functools.lru_cache` and the tuple keys it builds for its internal cache.
For small integer arguments, store results in a `list` indexed by the argument and return them directly; for sequential integer keys this is about the fastest cache available.
The `" ".join(map(str, ...))` call is already an efficient way to build the joined string, so it is preserved.
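A minimal sketch of the pattern, assuming a hypothetical function `sequence_string` that returns `"0 1 2 ... number"` as a space-joined string; the function name and the 1000 threshold are illustrative, not taken from the original code:

```python
_CACHE_LIMIT = 1000
# Index == argument: a plain list lookup, no hashing or key tuples.
_cache = [None] * (_CACHE_LIMIT + 1)

def sequence_string(number: int) -> str:
    """Return '0 1 2 ... number' joined by spaces (illustrative body)."""
    if 0 <= number <= _CACHE_LIMIT:
        cached = _cache[number]
        if cached is not None:
            return cached
        result = " ".join(map(str, range(number + 1)))
        _cache[number] = result
        return result
    # Above the threshold: compute directly, no caching, so memory stays bounded.
    return " ".join(map(str, range(number + 1)))
```

The first call for a given small argument computes and stores the string; every later call is a single list index and a `None` check.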
**Notes:**
- For the typical use case (number ≤ 1000), this is much faster than `lru_cache` because it skips dict hashing and cache-key construction entirely; a list index lookup is all that remains.
- No function signature or output is changed.
- For numbers > 1000 there is no caching, so memory growth stays bounded, exactly as before.
- Comments are only adjusted to reflect how caching now works.
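A self-contained check that the two strategies agree, with illustrative function bodies (only the caching mechanics differ; the real function's logic is not shown in this description):

```python
from functools import lru_cache

_cache = [None] * 1001

def with_list_cache(number: int) -> str:
    # List-indexed cache for small arguments, direct computation above 1000.
    if 0 <= number <= 1000:
        if _cache[number] is None:
            _cache[number] = " ".join(map(str, range(number + 1)))
        return _cache[number]
    return " ".join(map(str, range(number + 1)))

@lru_cache(maxsize=None)
def with_lru_cache(number: int) -> str:
    # Same computation, but every call pays for hashing the argument
    # and a dict lookup inside the lru_cache wrapper.
    return " ".join(map(str, range(number + 1)))
```

Both return identical strings for every argument; the list version simply replaces the per-call hash-and-lookup with a direct index.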