🩺 Problem Statement
The application currently lacks a standardized endpoint for exposing runtime metrics, making it difficult to monitor health, performance, and usage in production environments. This limits observability and proactive troubleshooting.
📝 Description
Integrate a Prometheus-compatible /metrics endpoint using the prometheus_client Python library. The endpoint should expose key application metrics such as request count, latency, active requests, and response size. The metrics server should run on a configurable port and be accessible for Prometheus scraping. Implementation should be robust, well-documented, and follow best practices for production monitoring.
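For reference, a minimal sketch of what this could look like with `prometheus_client`. The metric names, the `METRICS_PORT` variable, and the standalone `start_http_server` approach are illustrative assumptions, not decided details:

```python
# Sketch only: metric definitions plus a standalone metrics server.
# METRICS_PORT and the metric names are placeholders, not agreed-upon choices.
import os

from prometheus_client import Counter, Gauge, Histogram, Summary, start_http_server

REQUEST_COUNT = Counter("app_requests_total", "Total HTTP requests handled")
REQUEST_LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")
ACTIVE_REQUESTS = Gauge("app_active_requests", "Requests currently being processed")
RESPONSE_SIZE = Summary("app_response_size_bytes", "HTTP response size in bytes")

def start_metrics_server() -> None:
    """Serve Prometheus metrics on a port taken from the environment."""
    port = int(os.environ.get("METRICS_PORT", "8000"))
    start_http_server(port)  # runs the exporter in a background thread
```

Using `start_http_server` keeps the metrics endpoint decoupled from the application's own HTTP server; if the app already runs a web framework, mounting the exporter on the existing server (e.g. via `make_wsgi_app` or `make_asgi_app`) is an alternative worth weighing.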
🎯 Goals
- Add `/metrics` endpoint using `prometheus_client`.
- Track and expose (see the instrumentation sketch after this list):
  - Total request count
  - Request latency (histogram)
  - Number of active requests
  - Response size (summary)
- Make metrics server port configurable via environment variable.
- Ensure endpoint is accessible and compatible with Prometheus scraping.
- Document usage, configuration, and integration steps.
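To make the tracking goals concrete, here is a hypothetical wiring of the four metrics around a single request handler. The `metrics` module and the `process` function are placeholders for the sketch above and the real application logic; the actual hook point depends on the framework the app uses:

```python
# Hypothetical instrumentation of one request handler.
# `metrics` refers to the illustrative module sketched in the Description;
# `process` stands in for the real application logic.
from metrics import ACTIVE_REQUESTS, REQUEST_COUNT, REQUEST_LATENCY, RESPONSE_SIZE

def process(request_body: bytes) -> bytes:
    return b"ok"  # placeholder for real handling

def handle_request(request_body: bytes) -> bytes:
    REQUEST_COUNT.inc()                       # total request count
    with ACTIVE_REQUESTS.track_inprogress():  # gauge of in-flight requests
        with REQUEST_LATENCY.time():          # latency histogram
            response = process(request_body)
    RESPONSE_SIZE.observe(len(response))      # response size summary
    return response
```

The same hooks are available as decorators (`@REQUEST_LATENCY.time()`, `@ACTIVE_REQUESTS.track_inprogress()`), which may be cleaner if the framework exposes per-request middleware.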
✅ Expected Result
- `/metrics` endpoint available and serving Prometheus metrics.
- Metrics are updated in real time and reflect application activity.
- Documentation is clear and enables easy integration with Prometheus/Grafana.
- Improved observability and monitoring for the application.