Labels
area/performance, kind/enhancement, status/triage
Description
What are you trying to achieve?
I have a grpc-swift-2 binary against which I am running load tests with k6. As the VUs (concurrent connections) ramp up, memory usage balloons: at 1000 concurrent connections the program consumes ~1200 MiB (according to htop).
I was able to reproduce this behaviour with the route-guide code example and the following k6 script:
```javascript
import { Client, StatusOK } from "k6/net/grpc";
import { check } from "k6";

export const options = {
  vus: 1000,
  duration: "5m",
};

const client = new Client();
client.load(["Sources/Protos/routeguide"], "route_guide.proto");

export default () => {
  // Each VU opens its own plaintext connection to the server.
  client.connect("127.0.0.1:31415", { reflect: false, plaintext: true });

  const data = { latitude: 407_838_351, longitude: -746_143_763 };
  const response = client.invoke("routeguide.RouteGuide/GetFeature", data);

  check(response, {
    "status is OK": (r) => r && r.status === StatusOK,
  });

  client.close();
};
```
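For what it's worth, a staged ramp makes it easier to see whether memory tracks the number of open connections. Below is a minimal sketch using k6's standard `stages` option in place of the fixed `vus`/`duration` above; the stage durations and targets are arbitrary, and the rest of the script is unchanged:

```javascript
// Hypothetical variant of the options above: ramp VUs up in stages
// instead of holding 1000 for the full run, so memory usage can be
// correlated with the current connection count.
export const options = {
  stages: [
    { duration: "1m", target: 100 },  // ramp to 100 VUs
    { duration: "1m", target: 500 },  // then 500
    { duration: "1m", target: 1000 }, // then 1000
    { duration: "2m", target: 0 },    // ramp down: does memory drop?
  ],
};
```

If the resident memory does not come back down during the ramp-down phase, that would suggest per-connection state is being retained rather than the growth being transient allocation under load.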
What have you tried so far?
Not many ideas on what to try. Mostly I compared against other solutions I had at the ready to sanity-check whether this memory usage could be normal: an equivalent Vapor program to the one I was originally testing consumed only ~70 MiB of memory.
I'm running this on Ubuntu 22 on WSL2. I compiled the program in release mode.
```
Swift version 6.0.2 (swift-6.0.2-RELEASE)
Target: x86_64-unknown-linux-gnu
```