Accept multiple `POST /invoke` requests to allow parallel testing #585
base: main
Conversation
Note that this PR fixes concurrent invocations for non-streaming Lambda functions only. Local testing of streaming Lambda functions is affected by another issue, #588.
@0xTim I think I have a working solution. I updated the PR title and description to accurately describe the changes. I'll let you review when you have time. This is a big one; apologies for that, but I can't think of a simpler solution to address this issue.
Pull Request Overview
Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.
As I understand it, you return a failure response if the pool is already being used.
Thank you @adam for the suggestions.
You're correct. This should never be thrown, by the way; it reflects a programming error in the Pool. The Pool is now designed to accept multiple responses in parallel and to queue them. I'll send a commit in a minute.
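For illustration, here is a minimal sketch of that idea, assuming Swift 6's `Mutex` from the `Synchronization` module; `SimplePool` and its members are hypothetical names, not the actual `Pool` from this PR:

```swift
import Synchronization

// Minimal FIFO pool sketch: pushes never fail; elements and consumers queue up.
final class SimplePool<Element: Sendable>: Sendable {
    private struct State {
        var buffer: [Element] = []                               // pushed, not yet consumed
        var waiters: [CheckedContinuation<Element, Never>] = []  // consumers waiting for a push
    }
    private let state = Mutex(State())

    // Hand the element to a waiting consumer, or buffer it for a later `next()`.
    func push(_ element: Element) {
        let waiter = state.withLock { state -> CheckedContinuation<Element, Never>? in
            guard !state.waiters.isEmpty else {
                state.buffer.append(element)
                return nil
            }
            return state.waiters.removeFirst()
        }
        // Resume outside the lock (see the continuation note later in this PR).
        waiter?.resume(returning: element)
    }

    // Consume a buffered element, or suspend until one is pushed.
    func next() async -> Element {
        await withCheckedContinuation { continuation in
            let element = state.withLock { state -> Element? in
                guard !state.buffer.isEmpty else {
                    state.waiters.append(continuation)
                    return nil
                }
                return state.buffer.removeFirst()
            }
            if let element { continuation.resume(returning: element) }
        }
    }
}
```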
I tried this approach, but …
Actually, looking at the code, you could possibly do something like this. Instead of adding concurrent tasks for each connection to the server, get rid of the discarding task group and just do this. This means the pool will only be accessed by one connection at a time:

```swift
try await channel.executeThenClose { inbound in
    for try await connectionChannel in inbound {
        logger.trace("Handling a new connection")
        await server.handleConnection(channel: connectionChannel, logger: logger)
        logger.trace("Done handling the connection")
    }
}
```
@adam-fowler The local HTTP server was designed to act as a single HTTP server. It accepts both the testing client requests (`POST /invoke`) and the Lambda runtime requests (`GET /next`). I think we need to keep the current task group to handle requests from the testing client and requests from the Lambda runtime separately. Does that make sense? (Or am I missing something bigger here?)
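For context, a sketch of what keeping the task group looks like, reusing the names from the snippet above; `withThrowingDiscardingTaskGroup` is one possible shape, not necessarily the exact code in this PR:

```swift
try await channel.executeThenClose { inbound in
    try await withThrowingDiscardingTaskGroup { group in
        for try await connectionChannel in inbound {
            // Each connection gets its own child task, so the testing client's
            // POST /invoke and the runtime's GET /next can be served concurrently.
            group.addTask {
                logger.trace("Handling a new connection")
                await server.handleConnection(channel: connectionChannel, logger: logger)
                logger.trace("Done handling the connection")
            }
        }
    }
}
```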
Ok, that won't work then.
Closing #584
The LocalServer now queues concurrent `POST /invoke` requests from testing client applications and ensures that the requests are delivered to the Lambda runtime one by one, just like the AWS Lambda runtime environment does.

The `Pool` now has two modes: a pure FIFO mode (each pushed element is consumed by exactly one `next()` call) and a keyed mode where multiple elements can be pushed and multiple `next(for requestId: String)` calls can run concurrently. The two modes are needed because invocations are 1:1 (one `POST /invoke` is always matched by exactly one `GET /next`) but responses are n:n (a response can have multiple chunks, and concurrent invocations can trigger multiple `next(for requestId: String)` calls).
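To make the second mode concrete, here is a hypothetical sketch of a keyed pool; `KeyedPool`, its members, and the `Synchronization.Mutex` usage are illustrative assumptions, not the actual `Pool` API in this PR:

```swift
import Synchronization

// Keyed pool sketch: producers push elements tagged with a request ID, and
// multiple consumers can await `next(for:)` concurrently without interfering.
final class KeyedPool<Element: Sendable>: Sendable {
    private struct State {
        var buffers: [String: [Element]] = [:]
        var waiters: [String: [CheckedContinuation<Element, Never>]] = [:]
    }
    private let state = Mutex(State())

    func push(_ element: Element, for requestId: String) {
        let waiter = state.withLock { state -> CheckedContinuation<Element, Never>? in
            if var queue = state.waiters[requestId], !queue.isEmpty {
                let waiter = queue.removeFirst()
                state.waiters[requestId] = queue
                return waiter
            }
            state.buffers[requestId, default: []].append(element)
            return nil
        }
        // Resume outside the lock.
        waiter?.resume(returning: element)
    }

    func next(for requestId: String) async -> Element {
        await withCheckedContinuation { continuation in
            let element = state.withLock { state -> Element? in
                if var buffer = state.buffers[requestId], !buffer.isEmpty {
                    let element = buffer.removeFirst()
                    state.buffers[requestId] = buffer
                    return element
                }
                state.waiters[requestId, default: []].append(continuation)
                return nil
            }
            if let element { continuation.resume(returning: element) }
        }
    }
}
```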
I made a couple of additional changes while working on this PR:

- I moved the `Pool` code to a separate file for improved readability.
- I removed an instance of `DispatchTime` that was hiding in the code, unnoticed until today.
- I removed the `async` requirement on the `Pool.push(_)` function. It was not required (thank you @t089 for reporting this).
- I removed the `fatalError()` that was in the `Pool` implementation. The pool now throws an error when `next()` is invoked concurrently, making it easier to test.
- I added extensive unit tests to validate the `Pool` behavior.
- I added a test to verify that a rapid succession of client invocations is correctly queued and returns no error.
- I moved a `continuation.resume(...)` call outside of a lock (see the sketch after this list). Generally speaking, it's a bad idea to resume a continuation while holding a lock. I suspect this was causing an error during test execution when we spawn and tear down multiple `Task`s very quickly. On rare occasions, the test was failing with an invalid assertion in NIO: `NIOCore/NIOAsyncWriter.swift:177: Fatal error: Deinited NIOAsyncWriter without calling finish()`.
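To illustrate the last point, here is a before/after sketch of two `push(_:)` variants on the hypothetical `SimplePool` from earlier in this thread; this is illustrative, not the PR's exact code:

```swift
// Risky: the continuation is resumed while the lock is still held, so the
// resumed consumer can interleave badly with whatever else touches the pool.
func pushRisky(_ element: Element) {
    state.withLock { state in
        if state.waiters.isEmpty {
            state.buffer.append(element)
        } else {
            state.waiters.removeFirst().resume(returning: element)  // still inside the lock
        }
    }
}

// Safer: decide under the lock, resume only after releasing it.
func pushSafe(_ element: Element) {
    let waiter = state.withLock { state -> CheckedContinuation<Element, Never>? in
        guard !state.waiters.isEmpty else {
            state.buffer.append(element)
            return nil
        }
        return state.waiters.removeFirst()
    }
    waiter?.resume(returning: element)  // lock already released here
}
```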