-
By default, when you run a Laravel app with Octane, each worker process boots your application once and then handles many requests in a loop. If you don't explicitly close the database connection at the end of each request, the same PDO connection is reused for every request that hits that worker.

Why is this a problem? Imagine a request that begins a transaction but never commits or rolls it back. Every subsequent request served by that worker inherits the still-open transaction, which can cause messy side effects, like unexpected rollbacks or strange isolation behavior.

You can enable the DisconnectFromDatabases listener in Octane's configuration to avoid this. It closes the database connection after each request, ensuring no transactions are accidentally shared between requests. There's a cost, though: every request now has to spin up a brand-new connection, which adds overhead.

Potential solutions:

- Use an external connection pooler (PgBouncer, ProxySQL)
- Implement multiple connections per worker (in-memory pooling)
- Hybrid: persistent connections + concurrency limits
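As a sketch of how the disconnect option is wired up, here is roughly what the relevant part of `config/octane.php` looks like (exact event keys and defaults may vary between Octane versions, so treat this as illustrative):

```php
// config/octane.php (illustrative fragment)
use Laravel\Octane\Events\OperationTerminated;
use Laravel\Octane\Listeners\DisconnectFromDatabases;

return [
    'listeners' => [
        OperationTerminated::class => [
            // Close DB connections after every request/task so an
            // open transaction can never leak into the next request.
            DisconnectFromDatabases::class,
        ],
    ],
];
```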
-
Why is this happening, though? Pooled connections are generally implemented by issuing the RDBMS equivalent of a "switch user" or session-reset command when a connection is returned to the pool, which rolls back any open transaction and resets session state.
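For a concrete example, PgBouncer does exactly this with its reset query: in session-pooling mode it runs a reset command whenever a server connection goes back to the pool, and PostgreSQL's `DISCARD ALL` rolls back any open transaction and clears session state. An illustrative `pgbouncer.ini` fragment (these are the documented defaults, shown here for clarity):

```ini
; pgbouncer.ini (illustrative fragment)
[pgbouncer]
pool_mode = session
; Runs when a server connection is returned to the pool:
; rolls back any open transaction and resets all session state.
server_reset_query = DISCARD ALL
```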
-
Hello everyone,
I'm using Laravel Octane and noticed an interesting behavior. When a worker boots, it establishes a single database connection, and every request handled by that worker reuses it. Skipping the reconnect makes endpoints faster. However, if a transaction is started in one endpoint, all subsequent database operations on that same connection behave as if the transaction were active for them too.
This creates issues. If an error occurs in an endpoint that started a transaction and triggers a rollback, unrelated endpoints using the same connection also experience the rollback, leading to inconsistency in database records.
To prevent this, I enabled DisconnectFromDatabases::class, so now each request establishes its own connection. However, instead of creating a new connection for each request, I would prefer to use a connection pooling strategy. Ideally, the worker could initialize a pool of around 200 connections at boot, then assign an available connection to each request. Once the request is complete, the connection would be marked as available again. This way, I could avoid connection delays while maintaining transaction isolation.
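The per-worker pool you describe could be sketched roughly like this. Everything here is hypothetical (class and method names are invented for illustration, and a real implementation would also need health checks, reconnects, and coordination with Laravel's connection resolver); note too that ~200 connections *per worker* would likely exhaust most database connection limits, so a much smaller pool per worker is more realistic:

```php
<?php
// Hypothetical per-worker PDO pool (sketch, not production code).
final class PdoPool
{
    private \SplQueue $idle;

    public function __construct(
        private string $dsn,
        private string $user,
        private string $password,
        int $size = 10,
    ) {
        $this->idle = new \SplQueue();
        for ($i = 0; $i < $size; $i++) {
            $this->idle->enqueue(new \PDO($dsn, $user, $password));
        }
    }

    public function acquire(): \PDO
    {
        if ($this->idle->isEmpty()) {
            // Pool drained: fall back to a fresh connection.
            return new \PDO($this->dsn, $this->user, $this->password);
        }
        return $this->idle->dequeue();
    }

    public function release(\PDO $pdo): void
    {
        // Make sure no open transaction leaks back into the pool.
        if ($pdo->inTransaction()) {
            $pdo->rollBack();
        }
        $this->idle->enqueue($pdo);
    }
}
```

A request handler would call `acquire()` at the start of the request and `release()` in a `finally` block, giving you transaction isolation without a reconnect per request.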
Could anyone suggest how I might achieve this? Or is there an existing package that implements something like this?
Thank you in advance!