aj/new loop #563
Conversation
…y flush controller to account for this change. Some small http_client tweaks
…-extension into aj/new-loop
This reverts commit d28cfbe.
```rust
Ok(resp) => {
    if resp.status() != 202 {
        let status = resp.status();
        _ = resp.text().await;
```
Fixes an issue where keepalive can't be used if we don't fully read/exhaust the response buffer
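For context, here is a minimal, self-contained sketch of the pattern (assuming a reqwest-style client; the helper name and signature are illustrative, not the extension's actual code):

```rust
use reqwest::{Client, StatusCode};

/// Hypothetical helper: send a payload and always drain the response body,
/// so the underlying connection can go back to the keep-alive pool.
async fn send_payload(client: &Client, url: &str, payload: Vec<u8>) -> Result<(), reqwest::Error> {
    let resp = client.post(url).body(payload).send().await?;
    let status = resp.status();
    // Reading the body to completion is what frees the connection for
    // reuse, even when we have no use for the bytes themselves.
    let _ = resp.text().await;
    if status != StatusCode::ACCEPTED {
        eprintln!("unexpected status: {status}");
    }
    Ok(())
}
```

With hyper-based clients, a connection is only returned to the pool once its body has been read to the end, so dropping the response early forces a fresh connection on the next request.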
```diff
         should_periodic_flush
     );
-    return should_periodic_flush;
+    return false;
```
Not sure I understand this
Previously, this method both controlled whether to switch over to periodic flushing and determined whether we had already switched and it was time to flush. That's partly why the main loop was so confusing.
Instead, we've lifted that logic into the main loop, so this method can be simplified. It only determines whether the user asked to flush at end, or, on the default strategy, whether we've seen enough invocations to flip over to periodic flushing. If the answer is yes, then we should not flush at end.
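As an illustrative sketch of the simplified shape described here (all names and the threshold are hypothetical, not the extension's actual types):

```rust
/// Hypothetical flush strategies, for illustration only.
enum FlushStrategy {
    EndOfInvocation,
    Periodic,
    Default,
}

struct FlushController {
    strategy: FlushStrategy,
    invocations_seen: usize,
}

impl FlushController {
    /// Returns true when we should flush at the end of the invocation.
    /// Switching to (and ticking) periodic flushing now lives in the main
    /// loop, so this method is a pure decision with no side effects.
    fn should_flush_end(&self) -> bool {
        const PERIODIC_THRESHOLD: usize = 20; // hypothetical cutover point
        match self.strategy {
            FlushStrategy::EndOfInvocation => true,
            FlushStrategy::Periodic => false,
            // Default: flush at end until the function looks busy enough
            // to flip over to periodic flushing.
            FlushStrategy::Default => self.invocations_seen < PERIODIC_THRESHOLD,
        }
    }
}
```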
```rust
);

if let Some(metrics) = metrics {
    return Some(RuntimeDoneMeta {
```
Can't we just send the event back?
I kinda like having the `RuntimeDoneMeta` because right now that's the only thing we care about, and the data we need is buried in an `Option`. If we just returned the event, we'd push this matching/conditional handling into the main loop in an already kinda complex case.
thoughts?
I don't like it 100%, but we can change it in the future. My idea was to still send a clone of the event as an optional; that way we ensure we don't have to push the conditional back into the loop.
It just feels unnecessary when an object that has the same information already exists for this.
Yeah, it's a little weirder with the enum because we strip a bunch of the event info away and just return what we need. It's another alloc, but it should be stack-allocated and fast anyway. We can change it later.
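To illustrate the trade-off being discussed, a rough sketch with hypothetical types (the real telemetry event in the codebase carries more fields):

```rust
/// Hypothetical slimmed-down telemetry event, for illustration only.
struct TelemetryEvent {
    record_type: String,
    metrics: Option<Metrics>,
}

struct Metrics {
    duration_ms: f64,
}

/// The small struct handed back to the main loop: only the fields it
/// cares about, already unwrapped out of the `Option`.
struct RuntimeDoneMeta {
    duration_ms: f64,
}

/// Returning `RuntimeDoneMeta` keeps the match on the event's shape here
/// instead of pushing it into the already-complex main loop. The cost is
/// one extra, stack-allocated struct per event.
fn runtime_done_meta(event: &TelemetryEvent) -> Option<RuntimeDoneMeta> {
    if event.record_type != "platform.runtimeDone" {
        return None;
    }
    if let Some(metrics) = &event.metrics {
        return Some(RuntimeDoneMeta {
            duration_ms: metrics.duration_ms,
        });
    }
    None
}
```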
This removes ~30ms of latency for customers with very busy functions by no longer blocking on the `platform.runtimeDone` event, which is provided to us by Lambda via the Telemetry API. The call has a minimum 25ms buffer time, which is why the delay was present.
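A sketch of the non-blocking shape this implies, assuming the Telemetry API listener pushes events into a `tokio::sync::mpsc` channel; the channel and all names here are assumptions for illustration, not necessarily the extension's actual design:

```rust
use tokio::sync::mpsc;

/// Hypothetical payload produced by the Telemetry API listener task.
struct RuntimeDoneMeta {
    duration_ms: f64,
}

/// Instead of awaiting platform.runtimeDone inline, which costs at least
/// the ~25ms Telemetry API buffer, the main loop can poll the channel and
/// carry on immediately when nothing has arrived yet.
fn try_take_runtime_done(rx: &mut mpsc::Receiver<RuntimeDoneMeta>) -> Option<RuntimeDoneMeta> {
    // try_recv returns immediately instead of parking the task until the
    // telemetry listener delivers the event.
    rx.try_recv().ok()
}
```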