Description
Describe the bug
I noticed that automatic flushing is triggered before the Lambda function receives the shutdown signal. However, this makes the HTTP exporter fail with a request timeout.
Forcing the metric flush or triggering the shutdown sequence during the Lambda invocation works (see the sketch below). In that case, however, I've observed the same metric being emitted to Grafana 5 times, each 1 minute apart. This part of the bug may not be relevant to the project, but I'm unable to find the root cause.
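For illustration, here is a minimal sketch of the in-invocation flush workaround I mean. The handler, meter, and counter names are placeholders, and the forceFlush call assumes the layer registers the SDK MeterProvider globally (the API-level type does not declare forceFlush, so it is called defensively):

```typescript
import { metrics } from '@opentelemetry/api';

const meter = metrics.getMeter('example-meter');
const invocations = meter.createCounter('lambda_invocations');

export const handler = async () => {
  invocations.add(1);

  // The API MeterProvider interface does not expose forceFlush, but the SDK
  // provider installed by the layer does, so call it if it is present.
  const provider = metrics.getMeterProvider() as unknown as {
    forceFlush?: () => Promise<void>;
  };
  await provider.forceFlush?.();

  return { statusCode: 200 };
};
```

With this in place the export succeeds, but it is when flushing this way that the duplicated metrics described below show up.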
Steps to reproduce
I've created this repo that reproduces the behavior. You can find the detailed logs of the test results there as well.
What did you expect to see?
A basic configuration of the OTel SDK with the HTTP exporter successfully exports metrics.
What did you see instead?
ERROR {"stack":"Error: PeriodicExportingMetricReader: metrics export failed (error Error: Request Timeout)\n at d._doRun (/opt/773.wrapper.js:1:3565)\n at processTicksAndRejections (node:internal/process/task_queues:95:5)\n at runNextTicks (node:internal/process/task_queues:64:3)\n at process.processTimers (node:internal/timers:516:9)\n at async d._runOnce (/opt/773.wrapper.js:1:2894)\n at async d.onForceFlush (/opt/773.wrapper.js:1:3784)\n at async d.forceFlush (/opt/773.wrapper.js:1:2000)\n at async ee.forceFlush (/opt/773.wrapper.js:1:21040)\n at async Promise.all (index 0)\n at async ne.forceFlush (/opt/773.wrapper.js:1:24412)","message":"PeriodicExportingMetricReader: metrics export failed (error Error: Request Timeout)","name":"Error"}
What version of collector/language SDK version did you use?
node20
layers:
- opentelemetry-nodejs-0_16_0:1
- opentelemetry-collector-amd64-0_17_0:1
What language layer did you use?
nodejs
Additional context
A reproducible version can be found at https://github.com/kaskavalci/otlp-lambda-example/tree/main
It's odd that when shutdown or forceFlush is triggered, the same metric appears 5 times in Grafana. I'd appreciate your comments on this as well.
