Conversation

@Reskov Reskov commented Apr 9, 2025

No description provided.


Reskov commented Apr 9, 2025

@Dreamsorcerer please review

@Reskov Reskov marked this pull request as draft April 9, 2025 13:51
@Reskov Reskov marked this pull request as ready for review April 9, 2025 15:49

Reskov commented Apr 9, 2025

@joanhey @Dreamsorcerer Thanks for the review. I've applied the suggested fixes.


Reskov commented Apr 10, 2025

Surprisingly, gunicorn outperforms nginx on my local machine.

./tfb --mode benchmark --test aiohttp --type json --concurrency-levels=32 --duration=30

nginx+aiohttp

 Concurrency: 32 for json
 wrk -H 'Host: tfb-server' -H 'Accept: application/json,text/html;q=0.9,application/xhtml+xml;q=0.9,application/xml;q=0.8,*/*;q=0.7' -H 'Connection: keep-alive' --latency -d 30 -c 32 --timeout 8 -t 6 "http://tfb-server:8080/json"
---------------------------------------------------------
Running 30s test @ http://tfb-server:8080/json
  6 threads and 32 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   363.08us  411.56us  18.82ms   96.48%
    Req/Sec    15.06k     1.69k   29.15k    70.31%
  Latency Distribution
     50%  304.00us
     75%  436.00us
     90%  587.00us
     99%    1.25ms
  2699814 requests in 30.10s, 466.03MB read
Requests/sec:  89695.31
Transfer/sec:     15.48MB

gunicorn

./tfb --mode benchmark --test aiohttp-gunicorn --type json --concurrency-levels=32 --duration=30

 Concurrency: 32 for json
 wrk -H 'Host: tfb-server' -H 'Accept: application/json,text/html;q=0.9,application/xhtml+xml;q=0.9,application/xml;q=0.8,*/*;q=0.7' -H 'Connection: keep-alive' --latency -d 30 -c 32 --timeout 8 -t 6 "http://tfb-server:8080/json"
---------------------------------------------------------
Running 30s test @ http://tfb-server:8080/json
  6 threads and 32 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   523.89us    1.42ms  30.72ms   92.96%
    Req/Sec    30.22k     6.09k   48.30k    69.22%
  Latency Distribution
     50%  126.00us
     75%  208.00us
     90%    1.00ms
     99%    7.69ms
  5420481 requests in 30.09s, 0.87GB read
Requests/sec: 180133.40
Transfer/sec:     29.55MB

@msmith-techempower msmith-techempower merged commit 7d7f223 into TechEmpower:master Apr 10, 2025
3 checks passed
Dreamsorcerer (Contributor) commented:

> Surprisingly, gunicorn outperforms nginx on my local machine.

Something looks wrong there, it appears to be exactly double the performance. I wonder if there's something wrong with the load balancing?

@Reskov
Copy link
Contributor Author

Reskov commented Apr 10, 2025

I'll try to double-check the configuration, but any advice would be extremely useful. Locally I also tried deploying with haproxy; the results were better, but still worse than gunicorn's.

keepalive_requests 10000000;

upstream aiohttp {
least_conn;
Contributor:

Is there any difference in performance if we drop this and use the default round-robin? Guessing this isn't really needed given that requests should be around 1ms.
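
For reference, a minimal sketch of what dropping `least_conn` would look like: with no balancing directive, nginx falls back to its default round-robin policy. The server addresses below are placeholders, not the benchmark's actual upstream list:

```nginx
upstream aiohttp {
    # No balancing directive: nginx defaults to round-robin,
    # which avoids the bookkeeping least_conn does per request.
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
    keepalive 32;
}
```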

done

cat >> /aiohttp/nginx.conf <<EOF
keepalive 32;
@Dreamsorcerer Dreamsorcerer Apr 10, 2025
I'm also wondering if this should be much higher, given that the aiohttp servers can handle many simultaneous connections.

With the above config, it might be that the workers are creating up to 65535 connections, of which only 32 are allowed to be kept alive. So we probably want this to be the same value.
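
A hedged sketch of what aligning the two values might look like, assuming 65535 is the relevant connection limit from the config above (placeholder server address; the real config in this PR is generated via a shell heredoc):

```nginx
upstream aiohttp {
    server 127.0.0.1:8081;
    # Allow as many idle keep-alive connections per worker as the
    # workers may open, so upstream connections are reused instead
    # of being torn down and re-established under load.
    keepalive 65535;
}
```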

Contributor:

65535 connections could also start slowing down the event loop, so also worth testing with lower numbers too, once these are aligned.

Contributor Author (Reskov):

I've tried different options; some helped, but only insignificantly. So I decided to revert nginx as the default proxy and keep it as a separate Dockerfile: #9807
