
[Bug]: Memory baseline regression in 2.10.0 vs 2.9.2 (~36% higher RSS, idle workload) #6211

@AKhozya

Description


After upgrading from 2.9.2-fat to 2.10.0-fat, container working-set memory increased by ~36% under an identical idle workload. The container reached ~96% of its memory limit and tripped near-OOM monitoring alerts within ~1.5h of rollout.

Metrics (Kubernetes — container_memory_working_set_bytes)

Version                     Window   Peak working-set   % of 1280Mi limit
2.9.2-fat                   7d       903 MiB            ~71%
2.10.0-fat                  6h       1231 MiB           ~96%
2.10.0-fat idle (30m avg)   30m      844 MiB            ~66%

Same node, same Deployment manifest, only the image tag changed.
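
If it helps triage, the comparison above is easy to reproduce against the cluster's Prometheus. A minimal sketch follows; the Prometheus URL, namespace, and container label are placeholders for this setup, and the metric is the standard cAdvisor one quoted above.

```python
# Sketch: pull peak and 30m-average working-set for the Stirling-PDF container
# from Prometheus. PROM_URL and the label selector are placeholders.
import requests

PROM_URL = "http://prometheus.monitoring.svc:9090"  # placeholder
SELECTOR = 'container_memory_working_set_bytes{namespace="stirling-pdf", container="stirling-pdf"}'
LIMIT_MIB = 1280

def instant(query: str) -> float:
    """Run an instant PromQL query and return the first sample value in MiB."""
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": query}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    if not result:
        raise RuntimeError(f"no samples for: {query}")
    return float(result[0]["value"][1]) / 2**20

peak_6h = instant(f"max_over_time({SELECTOR}[6h])")
avg_30m = instant(f"avg_over_time({SELECTOR}[30m])")
print(f"peak (6h):      {peak_6h:.0f} MiB  ({peak_6h / LIMIT_MIB * 100:.0f}% of limit)")
print(f"idle avg (30m): {avg_30m:.0f} MiB  ({avg_30m / LIMIT_MIB * 100:.0f}% of limit)")
```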

Environment

  • Image: docker.io/frooodle/s-pdf:2.10.0-fat
  • Runtime: containerd / K3s v1.34 on Linux x86_64
  • JVM: default (no custom -Xmx, no JAVA_TOOL_OPTIONS); see the flag check after this list
  • Memory limit: 1280Mi (request 768Mi)
  • Workload: idle homelab instance, no API requests in the last 30m
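
For the JVM defaults, something like the following can confirm what heap ceiling the image's JVM derives from the 1280Mi cgroup limit. It assumes kubectl access and that java is on PATH inside the -fat image; the deployment and namespace names are placeholders.

```python
# Sketch: print the container-aware heap sizing flags the default JVM picks up.
import subprocess

CMD = [
    "kubectl", "exec", "-n", "stirling-pdf", "deploy/stirling-pdf", "--",
    "java", "-XX:+PrintFlagsFinal", "-version",
]

out = subprocess.run(CMD, capture_output=True, text=True, check=True).stdout
for line in out.splitlines():
    # Assumption: with stock container-aware defaults the heap ceiling is
    # MaxRAMPercentage (25%) of the limit unless the image overrides it, so a
    # default-sized heap alone may not explain ~885 MiB idle.
    if any(flag in line for flag in ("MaxHeapSize", "MaxRAMPercentage", "UseContainerSupport")):
        print(line.strip())
```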

Expected

Release notes for v2.10.0 state "Improved memory efficiency of API calls". Expected baseline RSS to be flat or lower than 2.9.2.

Actual

Steady ~885 MiB idle and peaks above 1230 MiB with no traffic — a ~36% higher baseline than on 2.9.2 with the same deployment.

Reproduction

  1. Deploy frooodle/s-pdf:2.9.2-fat with a 1280Mi memory limit and record container_memory_working_set_bytes over 24h
  2. Bump the image tag to 2.10.0-fat with no other changes (see the patch sketch after this list)
  3. Compare baseline RSS over a 30m+ window
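
For step 2, a sketch of patching only the image tag via the Kubernetes Python client; the deployment, container, and namespace names are placeholders for this setup and require a working kubeconfig.

```python
# Sketch of step 2: patch only the image tag so the pod spec stays identical.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

patch = {"spec": {"template": {"spec": {"containers": [{
    "name": "stirling-pdf",                          # container name as in the manifest (placeholder)
    "image": "docker.io/frooodle/s-pdf:2.10.0-fat",  # only change: the tag
}]}}}}

apps.patch_namespaced_deployment(name="stirling-pdf", namespace="stirling-pdf", body=patch)
print("image updated; let it settle, then re-run the Prometheus comparison above")
```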

Notes

  • 0 OOMKills (limit not breached), but alert fired at the 95% near-limit threshold
  • No custom JVM tuning — using whatever defaults the -fat image ships with
  • Happy to provide additional metrics (heap, non-heap, thread count) if useful; a cgroup-level breakdown sketch is below
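
Before heap dumps, the quickest thing I can pull is a split of the container's working set into anonymous memory vs. page cache from the cgroup stats, which should show whether the increase is JVM-side or cache. This sketch assumes cgroup v2 on the node (paths differ under v1) and uses placeholder deployment/namespace names.

```python
# Sketch: break the working set down into anon vs. file (page cache) from the
# container's cgroup v2 memory.stat.
import subprocess

CMD = [
    "kubectl", "exec", "-n", "stirling-pdf", "deploy/stirling-pdf", "--",
    "cat", "/sys/fs/cgroup/memory.stat",
]

raw = subprocess.run(CMD, capture_output=True, text=True, check=True).stdout
stats = {k: int(v) for k, v in (line.split() for line in raw.splitlines())}

# "anon" is mostly JVM heap/metaspace/thread stacks; active "file" (page cache)
# also counts toward container_memory_working_set_bytes.
for key in ("anon", "file", "kernel_stack", "slab"):
    if key in stats:
        print(f"{key:>12}: {stats[key] / 2**20:8.1f} MiB")
```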

Metadata

    Labels

    Back End (Issues related to back-end development), needs investigation (Issues that require further investigation)
