
Conversation

@cclilshy
Contributor

Update some configurations for compatibility with the v0.6 core; this version addresses some of the exceptions that occurred earlier.

@cclilshy
Contributor Author

In one test I got the following results. The difference was huge, but no unexpected exceptions occurred; the only change was the extra parameters:

-s pipeline.lua -- 16

My conclusion so far: Ripple currently does not support submitting another request on the same keep-alive connection before the previous response has finished (HTTP pipelining).

Looking at this feature alone, is there a clear correlation with the large difference in the test results below? Thank you for your guidance.

wrk -H 'Host: tfb-server' -H 'Accept: text/plain,text/html;q=0.9,application/xhtml+xml;q=0.9,application/xml;q=0.8,*/*;q=0.7' -H 'Connection: keep-alive' --latency -d 15 -c 512 --timeout 8 -t 16 http://tfb-server:8080/plaintext
---------------------------------------------------------
Running 15s test @ http://tfb-server:8080/plaintext
  16 threads and 512 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    76.46ms   23.91ms 824.06ms   78.31%
    Req/Sec   409.19     73.58   633.00     78.35%
  Latency Distribution
     50%   75.06ms
     75%   87.18ms
     90%  100.04ms
     99%  132.38ms
  97821 requests in 15.09s, 19.03MB read
Requests/sec:   6482.30
Transfer/sec:      1.26MB
---------------------------------------------------------
 Concurrency: 256 for plaintext
 wrk -H 'Host: tfb-server' -H 'Accept: text/plain,text/html;q=0.9,application/xhtml+xml;q=0.9,application/xml;q=0.8,*/*;q=0.7' -H 'Connection: keep-alive' --latency -d 15 -c 256 --timeout 8 -t 16 http://tfb-server:8080/plaintext -s pipeline.lua -- 16
---------------------------------------------------------
Running 15s test @ http://tfb-server:8080/plaintext
  16 threads and 256 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     0.00us    0.00us   0.00us    -nan%
    Req/Sec   157.38      3.05   160.00     87.50%
  Latency Distribution
     50%    0.00us
     75%    0.00us
     90%    0.00us
     99%    0.00us
  256 requests in 15.06s, 51.00KB read
Requests/sec:     17.00
Transfer/sec:      3.39KB
STARTTIME 1727616870
ENDTIME 1727616885
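For context, a typical wrk pipelining script (a sketch in the style of the TechEmpower benchmarks' `pipeline.lua`; the exact script used here may differ) concatenates N copies of the request so wrk writes them all before reading any response:

```lua
-- Sketch of a wrk pipelining script (TechEmpower-style pipeline.lua).
-- The trailing "-- 16" on the wrk command line arrives here as args[1].
init = function(args)
  local depth = tonumber(args[1]) or 1   -- pipeline depth, e.g. 16
  local r = {}
  for i = 1, depth do
    r[i] = wrk.format()                  -- one fully formatted HTTP request
  end
  req = table.concat(r)                  -- all requests in a single buffer
end

request = function()
  return req                             -- wrk sends the whole batch at once
end
```

With depth 16, every connection puts 16 requests on the wire before any response comes back. A server that never answers the batched requests would leave each connection stalled after its first write, which would be consistent with the second run's collapse to exactly 256 requests, one per connection.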

@joanhey
Contributor

joanhey commented Sep 29, 2024

In the last test you didn't only add -s pipeline.lua -- 16; you also used 256 concurrency (-c 256), half the concurrency of the first test (-c 512).

So it's normal to have fewer req/s in the last test: less concurrency = less req/s.

Test it again with the same concurrency.

@cclilshy
Contributor Author

Thank you for looking into this. The test is consistent with my inference, but the short-term focus is not on adding pipeline.lua support.

Do you have any other suggestions for this commit?

@cclilshy
Contributor Author

I did that; it currently follows a uniform code style. The ripple:publish config may need to be submitted next time.

@cclilshy
Contributor Author

No, without the '.env' configuration the listening address would be 127.0.0.1 instead of 0.0.0.0, so I'll do better next time. Excuse me, it urgently needs to use the latest approach.

@cclilshy cclilshy closed this Sep 30, 2024