Improve prompt generation performance by caching prompt parts (%m, %M) #1127
Conversation
Force-pushed from cd56ed4 to fdea403
In prompt calculation, `main.to_s` was called on every keystroke and every line in multiline input. Cache prompt parts (%m, %M) so that `main.to_s` is only called once per read-eval cycle.
Force-pushed from fdea403 to 35eb9e2
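As a rough illustration of the approach (simplified and partly assumed, not the exact diff): the cache lives for a single read-eval cycle, so repeated prompt rendering reuses the already-computed `%m`/`%M` parts instead of calling `main.to_s` again.

```ruby
# Simplified sketch of the caching idea. @prompt_part_cache is reset once
# per read-eval cycle; %m (main.to_s) and %M (main.inspect) are computed
# the first time the prompt is rendered and reused for every subsequent
# keystroke / continuation line in that cycle. Other format specifiers
# are omitted here for brevity.
def format_prompt(format, ltype, indent, line_no) # :nodoc:
  cache = @prompt_part_cache || {} # may be nil when called directly in tests
  format.gsub(/%([mM])/) do
    cache[$1] ||= ($1 == "m" ? @context.main.to_s : @context.main.inspect)
  end
end
```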
```ruby
  end

  def format_prompt(format, ltype, indent, line_no) # :nodoc:
    part_cache = @prompt_part_cache || {}
```
Since format_prompt is only called in generate_prompt, is it possible for it to be called outside of the readmultiline context (perhaps in the dynamic_prompt callback)?
If it can be, let's also wrap that caller inside the wrapper mentioned above to provide caching?
Currently, generate_prompt without the cache is only used in test code that tests format_prompt.
I'd like to make the cache mechanism optional because:
- Testing format_prompt is easier.
- Always requiring the wrapper is simple, but it also introduces some (small) complexity in the call chain.

(But I'm not really sure; maybe I'm wrong.)
I see. Then let's at least add a comment to explain why it could be nil, like:

```ruby
# @prompt_part_cache could be nil in unit tests
part_cache = @prompt_part_cache || {}
```
lib/irb.rb (Outdated)
```ruby
  end

  def readmultiline
    @prompt_part_cache = {}
```
I think we can create a wrapper to ensure that wherever we generate a prompt, this is set and cleaned up. Like:

```ruby
with_prompt_part_cached do
  ...
end
```
Nice catch! updated
Fixes #1126
In prompt calculation, `main.to_s` was called on every keystroke and every line in multiline input. Cache prompt parts (`%m` and `%M`) so that `main.to_s` is only called once per read-eval cycle. Also fixes this problem.
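For context on why this matters, a purely hypothetical reproduction: any object with an expensive `#to_s` used as the session's main pays that cost on every keystroke when the prompt contains `%m`. The class and timing below are illustrative only.

```ruby
# Illustrative only: an object whose #to_s is costly. With a %m prompt,
# every keystroke re-rendered the prompt and re-ran this method before
# the cache; with the cache it runs once per read-eval cycle.
class SlowMain
  def to_s
    sleep 0.05 # stand-in for an expensive computation
    "slow_main"
  end
end

# In IRB, `irb SlowMain.new` opens a sub-session with this object as main.
```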
%mand%M) so thatmain.to_sis only called once per read-eval cycle.Also fixes this problem.