
I think we're leaking sockets #184

@athornton

Description


I was getting quite a lot of

[pid: 7|app: 0|req: 6255/12909] 10.165.4.11 () {50 vars in 850 bytes} [Fri Feb 21 15:05:16 2025] POST /lsst-dm/testdata_image_cutouts/objects/batch => generated 3334 bytes in 832 msecs (HTTP/1.1 200) 2 headers in 85 bytes (1 switches on core 0)
Fri Feb 21 15:05:18 2025 - *** uWSGI listen queue of socket "127.0.0.1:39397" (fd: 3) full !!! (1025/1024) ***
Fri Feb 21 15:05:19 2025 - *** uWSGI listen queue of socket "127.0.0.1:39397" (fd: 3) full !!! (1025/1024) ***
Fri Feb 21 15:05:20 2025 - *** uWSGI listen queue of socket "127.0.0.1:39397" (fd: 3) full !!! (1025/1024) ***
Fri Feb 21 15:05:21 2025 - *** uWSGI listen queue of socket "127.0.0.1:39397" (fd: 3) full !!! (1025/1024) ***
Fri Feb 21 15:05:22 2025 - *** uWSGI listen queue of socket "127.0.0.1:39397" (fd: 3) full !!! (1025/1024) ***
Fri Feb 21 15:05:23 2025 - *** uWSGI listen queue of socket "127.0.0.1:39397" (fd: 3) full !!! (1025/1024) ***

at a listen queue depth of 100. Raising it to 1024 helped for a while, but the queue still filled up (see above). So I suspect sockets aren't being reclaimed when connections time out, or something along those lines.

I realize this is very vague, but I wanted to open the issue so I don't forget about it.
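One quick way to narrow this down would be to watch the TCP state counts on the uWSGI socket while the service is under load. This is only a rough diagnostic sketch: it assumes Linux (reads /proc/net/tcp), and the port 39397 is just the one from the log above, so adjust it for the actual deployment.

#!/usr/bin/env python3
"""Count TCP sockets on the uWSGI port by state.

A steadily growing CLOSE_WAIT count would suggest the worker accepts
connections but never closes them, which would match the full listen
queue. Assumes Linux; PORT is taken from the log above and is only an
example.
"""

from collections import Counter

# TCP state codes as they appear in /proc/net/tcp (hex)
STATES = {
    "01": "ESTABLISHED", "02": "SYN_SENT", "03": "SYN_RECV",
    "04": "FIN_WAIT1", "05": "FIN_WAIT2", "06": "TIME_WAIT",
    "07": "CLOSE", "08": "CLOSE_WAIT", "09": "LAST_ACK",
    "0A": "LISTEN", "0B": "CLOSING",
}

PORT = 39397  # example port, copied from the log lines above

def socket_states(port: int) -> Counter:
    counts: Counter = Counter()
    with open("/proc/net/tcp") as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            local_addr, state = fields[1], fields[3]
            local_port = int(local_addr.split(":")[1], 16)
            if local_port == port:
                counts[STATES.get(state, state)] += 1
    return counts

if __name__ == "__main__":
    for state, n in socket_states(PORT).most_common():
        print(f"{state:12s} {n}")

If that shows CLOSE_WAIT (or ESTABLISHED) climbing without bound between runs, it would point at connections not being closed on our side rather than at the queue depth itself.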
