
Write a benchmark to compare against the default scheduler #64

@franz1981

Description

In order to validate the effectiveness of this scheduler, we should write a benchmark testing two scenarios:

  1. Using this custom scheduler
  2. Using the default scheduler

Both scenarios can run the same code, offloading virtual thread creation to different ThreadFactory implementations.
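The two ThreadFactory variants could be sketched as below. Note that `SchedulerFactories`, `customVirtualFactory`, and `EventLoopScheduler` are hypothetical names: the custom scheduler's API is specific to this project, and (as of Java 21) there is no public JDK API for plugging a custom virtual thread scheduler, so only the default-scheduler side is shown runnable:

```java
import java.util.concurrent.ThreadFactory;

public class SchedulerFactories {

    // Scenario 2: virtual threads on the default scheduler
    // (the JDK's shared ForkJoinPool).
    static ThreadFactory defaultVirtualFactory() {
        return Thread.ofVirtual().name("default-vt-", 0).factory();
    }

    // Scenario 1 (placeholder): a factory backed by the custom per-event-loop
    // scheduler. `EventLoopScheduler` is a hypothetical name standing in for
    // whatever API this project exposes.
    // static ThreadFactory customVirtualFactory(EventLoopScheduler scheduler) { ... }

    public static void main(String[] args) throws InterruptedException {
        Thread t = defaultVirtualFactory()
                .newThread(() -> System.out.println("ran on " + Thread.currentThread()));
        t.start();
        t.join();
        System.out.println("virtual=" + t.isVirtual()); // → virtual=true
    }
}
```

Because both scenarios hide behind ThreadFactory, the benchmark code itself never needs to know which scheduler is in use.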
The test should consist of three components:

  • a server using Netty 4.2 and HTTP
  • a load-generator client using Hyperfoil's wrk, run via jbang
  • another server using Netty 4.2 and length-prefixed encoding

The test should exercise this code path:

  • the second server should start accepting requests first
  • the script checks that both the first and the second servers are running
  • the load generator uses N connections to send HTTP requests to a default endpoint on the first server
  • when a connection to the first server is established, the first server creates, on first use, a persistent blocking HTTP connection to the second server (using the Apache HTTP client or the built-in JDK one), bound to that connection
  • the first server decodes the HTTP requests, keeps the HTTP connection alive, and offloads processing to a freshly created virtual thread using either the custom scheduler (local to the event loop) or the default virtual thread factory
  • the virtual thread uses its assigned HTTP client to issue a request to a fixed-length binary endpoint and waits for it to complete
  • the binary endpoint just produces a payload representing N User instances, each containing a single integer property
  • the virtual thread takes care of creating the User instances and uses a configured Jackson ObjectMapper to produce JSON into a Netty ByteBuf treated as a stream
  • it then writes and flushes the buffer as the body of its own HTTP response, as JSON content (with Content-Length), back to the load generator
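The flow above could be sketched with JDK stand-ins: `com.sun.net.httpserver` and `java.net.http.HttpClient` in place of Netty and the Apache client, and hand-rolled JSON in place of Jackson into a ByteBuf. `FlowSketch` and `renderJson` are illustrative names only:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.ByteBuffer;
import java.util.concurrent.Executors;

public class FlowSketch {

    // Decode the fixed-length binary payload (N big-endian ints, one per User)
    // and render it as JSON. Hand-rolled here; the issue calls for a Jackson
    // ObjectMapper writing into a Netty ByteBuf treated as a stream.
    static String renderJson(byte[] body) {
        ByteBuffer buf = ByteBuffer.wrap(body);
        StringBuilder sb = new StringBuilder("[");
        while (buf.hasRemaining()) {
            if (sb.length() > 1) sb.append(',');
            sb.append("{\"id\":").append(buf.getInt()).append('}');
        }
        return sb.append(']').toString();
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the second server: a fixed-length binary endpoint.
        int n = 3;
        HttpServer second = HttpServer.create(new InetSocketAddress(0), 0);
        second.createContext("/binary", ex -> {
            ByteBuffer buf = ByteBuffer.allocate(n * Integer.BYTES);
            for (int i = 0; i < n; i++) buf.putInt(i);
            ex.sendResponseHeaders(200, buf.capacity());
            ex.getResponseBody().write(buf.array());
            ex.close();
        });
        second.start();

        // The persistent blocking client bound to the front connection.
        HttpClient client = HttpClient.newHttpClient();

        // Offload to a freshly created virtual thread, as the first server
        // would do for each decoded request.
        try (var executor = Executors.newThreadPerTaskExecutor(Thread.ofVirtual().factory())) {
            String json = executor.submit(() -> {
                HttpResponse<byte[]> resp = client.send(
                        HttpRequest.newBuilder(URI.create("http://localhost:"
                                + second.getAddress().getPort() + "/binary")).build(),
                        HttpResponse.BodyHandlers.ofByteArray());
                // The blocking send() completed on the virtual thread.
                return renderJson(resp.body());
            }).get();
            System.out.println(json); // → [{"id":0},{"id":1},{"id":2}]
        } finally {
            second.stop(0);
        }
    }
}
```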

The second server should have a configurable think-time option, scheduling the write and flush of the response to the first server after a fixed delay.
It doesn't need more than one event loop and can serve a fixed, cached full response, making it very light.
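A minimal sketch of that second server, using plain blocking sockets instead of Netty 4.2 to keep it self-contained (a single-request handler; the real server would loop). `SecondServerSketch` and its member names are illustrative:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class SecondServerSketch {
    // Fixed, cached response body: the second server never computes anything.
    static final byte[] CACHED = "fixed-response".getBytes(StandardCharsets.UTF_8);

    // Handle one length-prefixed request, waiting thinkTimeMillis before replying.
    static void serveOne(ServerSocket server, long thinkTimeMillis) throws Exception {
        try (Socket socket = server.accept();
             DataInputStream in = new DataInputStream(socket.getInputStream());
             DataOutputStream out = new DataOutputStream(socket.getOutputStream())) {
            int len = in.readInt();        // length prefix
            in.readNBytes(len);            // drain the request payload
            Thread.sleep(thinkTimeMillis); // configurable think time
            out.writeInt(CACHED.length);   // length-prefixed cached response
            out.write(CACHED);
            out.flush();
        }
    }

    public static void main(String[] args) throws Exception {
        long think = args.length > 0 ? Long.parseLong(args[0]) : 10;
        try (ServerSocket server = new ServerSocket(0)) {
            // Demo client playing the role of the first server.
            Thread client = Thread.ofVirtual().start(() -> {
                try (Socket s = new Socket("localhost", server.getLocalPort());
                     DataOutputStream out = new DataOutputStream(s.getOutputStream());
                     DataInputStream in = new DataInputStream(s.getInputStream())) {
                    out.writeInt(4);
                    out.write(new byte[]{1, 2, 3, 4});
                    out.flush();
                    byte[] body = in.readNBytes(in.readInt());
                    System.out.println(new String(body, StandardCharsets.UTF_8));
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            serveOne(server, think);
            client.join(); // prints "fixed-response"
        }
    }
}
```

With Netty, the same framing would typically come from LengthFieldBasedFrameDecoder plus LengthFieldPrepender on a single event loop.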

The first server should just have a configurable number of event loops and a configurable choice of scheduler.

The whole test could be orchestrated similarly to
https://github.com/franz1981/quarkus-profiling-workshop/blob/master/scripts/benchmark.sh

This should include pidstat (mandatory) run against the first server and, optionally, async-profiler.
Ideally the two servers should run in containers (allowing a cpu-set to be assigned to each) using the Shipilev Loom build and sharing the host network, while the load generator can run locally or in a container.
