Add OTLPMetricsWriter
#10685
Conversation
Very nice work! I'll have a look at it. This should also resolve #9900
Hi, I spent some time testing the new Writer. Here's some first feedback:
Maybe the name of the writer could be OTLPMetricWriter instead of OTelWriter. Just an idea. OTelWriter is also fine.
When I set
Maybe the resource attribute service.namespace should not be hard coded to "icinga". From the OpenTelemetry Conventions:
I think the namespace is more akin to a "Kubernetes Namespace". I'm not 100% sure if this is something that should be set via the actual Icinga Service Objects or
Currently the Icinga Host/Service information is set as an attribute, maybe these should be resource attributes. From the OpenTelemetry Conventions:
There are Host and Service conventions for this:
I think it's ok if they are "namespaced" like this "icinga2.host.name" and "icinga2.service.name".
Maybe we want a more generic metric name than "icinga2_perfdata".
For example, there are proposals for a …; see also open-telemetry/semantic-conventions#1106.
When
This makes it hard to work with them for example when plotting them. They should be encoded as metrics with the same attributes as the perfdata metric. For example:
When I send the data to the Prometheus OTLP Writer, the Icinga 2 daemon logs some warnings, due to the response headers I think. Here is my test setup:

OTel Collector

prometheus.yml
Also tested it with Grafana Mimir. Works great.
Another small note, I think
Thanks for the feedback and compose file you provided!
I will bring this into our weekly meetings next week for discussion.
Aye, that was an oversight on my part. Though, I've dropped it completely now, so there shouldn't be any daemon crashes anymore :).
We must look into this from the Icinga 2 side and not from an OTel perspective. There is no way we're going to introduce
Ack! Will change that.
That makes sense. I'll update the metric names accordingly, especially since there's a proposal for this.
Good point! I'm going to transform all thresholds into separate metric streams then, i.e., … (a rough sketch follows after this reply).
I didn't know about this, so thanks for testing it out! I'll fix it.
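Regarding the separate metric streams for thresholds: purely as an illustration (the helper functions are invented for this sketch, and the metric names follow the renaming discussed later in this thread rather than the PR's code at this point), each threshold could become its own gauge that carries the same attributes as the perfdata value:

```cpp
// Illustration only: helper names are made up, and "state_check.threshold.critical"
// is assumed by analogy with "state_check.threshold.warning" mentioned later in this
// thread; the PR's actual code may differ.
#include <opentelemetry/proto/common/v1/common.pb.h>
#include <opentelemetry/proto/metrics/v1/metrics.pb.h>

namespace otel_proto = opentelemetry::proto::metrics::v1;

// Append one gauge data point to the metric stream `name`, attaching the perfdata
// label as an attribute so value and thresholds can be joined when plotting.
static void AddGaugePoint(otel_proto::ScopeMetrics& scope, const char* name, const char* label, double value)
{
	auto* metric = scope.add_metrics();
	metric->set_name(name);

	auto* dp = metric->mutable_gauge()->add_data_points();
	dp->set_as_double(value);

	auto* attr = dp->add_attributes();
	attr->set_key("label");
	attr->mutable_value()->set_string_value(label);
}

static void AddPerfdataWithThresholds(otel_proto::ScopeMetrics& scope)
{
	// e.g. load1=0.42 with warn=5 and crit=10
	AddGaugePoint(scope, "state_check.perfdata", "load1", 0.42);
	AddGaugePoint(scope, "state_check.threshold.warning", "load1", 5);
	AddGaugePoint(scope, "state_check.threshold.critical", "load1", 10);
}
```

A real implementation would of course reuse one metric stream per name across all labels; it is written this way here only for brevity.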
I've addressed all your feedback apart from the …
Sorry, I had to fix one issue introduced with my last push (thanks @martialblog for testing!). Apparently, `namespace` is a reserved keyword in the Icinga 2 DSL, so it can't be used as an attribute name.
Hold my beer. 😉
Thanks! I'm aware that DSL users can escape keywords, but there's no point in using a reserved word as an attribute for a built-in config object.
I encountered a strange issue with the new code. The daemon did tell me it was flushing data, but then never did.

@yhabteab and I did some debugging and isolated this area. When replacing the `ASSERT` with `VERIFY` it seems to work again. Yonas has the details.
Thanks for your help! Apparently, this code

```cpp
AsioProtobufOutStream outputS{*m_Stream, m_ConnInfo, yc};
ASSERT(m_Request->SerializeToZeroCopyStream(&outputS));
```

turns into the following C++ code after the C++ preprocessor has run (in non-debug builds, where `ASSERT` expands to a no-op):

```cpp
AsioProtobufOutStream outputS{*m_Stream, m_ConnInfo, yc};
((void)0);
```

On the other hand, when using `VERIFY`, the expression is always evaluated:

```cpp
AsioProtobufOutStream outputS{*m_Stream, m_ConnInfo, yc};
((m_Request->SerializeToZeroCopyStream(&outputS)) ? void(0) : icinga_assert_fail("m_Request->SerializeToZeroCopyStream(&outputS)", "otel.cpp", 340));
```

I don't know what I was thinking when I used `ASSERT` for this.

Lines 8 to 9 in 422f116

I've fixed it now and it should behave normally.
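For readers unfamiliar with the pitfall, here is a small self-contained sketch using the standard `assert()` macro, which behaves analogously (with `NDEBUG` defined it also expands to `((void)0)`, so any side effect placed inside it silently disappears in release builds):

```cpp
#include <cassert>
#include <cstdio>

static bool Serialize()
{
	std::puts("serializing...");
	return true;
}

int main()
{
	// Dangerous: in a build with -DNDEBUG, Serialize() is never called at all.
	assert(Serialize());

	// Safe: perform the call unconditionally and only assert on the result.
	bool ok = Serialize();
	assert(ok);
	(void)ok; // silence the unused-variable warning when assert() is compiled out

	return 0;
}
```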
Did some testing with OpenSearch Data-Prepper; I did manage to send data successfully to OpenSearch like this. However, I did see some "critical" errors in the Icinga 2 logs:

Compose with OpenSearch Data-Prepper

```yaml
# cat metric_pipeline.yaml
metric-pipeline:
  source:
    otlp:
      unframed_requests: true
      health_check_service: true
      authentication:
        unauthenticated:
      ssl: false
  sink:
    - stdout:
    - opensearch:
        hosts: [ "https://opensearch:9200" ]
        insecure: true
        username: admin
        password: Developer@123
        index: otel_metrics

# cat data-prepper-config.yaml
ssl: false
```
I don't know how OpenSearch behaves and whether their OTLP receiver fully conforms to the OTLP specs, but it looks like OpenSearch is closing the connection after some time (no persistent HTTP connection support?). I'll try to go through their docs and see if I can find something about that.
I can't find anything about persistent connections in the Data Prepper docs so far, but the OTel spec clearly says:

However, OpenSearch doesn't seem to honor that and closes the connection after each request; I guess that's because the requirement is phrased as SHOULD and not MUST. Nonetheless, I will try to detect such cases and degrade from a critical to some other log severity instead.

```
$ netstat -ant | grep 21893
tcp4  0  0  127.0.0.1.21893  127.0.0.1.51712  FIN_WAIT_2  # OpenSearch closed the connection and is waiting for the remote peer to close it.
tcp4  0  0  127.0.0.1.51712  127.0.0.1.21893  CLOSE_WAIT  # OpenSearch has closed the conn but Icinga 2 didn't close it.
tcp4  0  0  *.21893          *.*              LISTEN
```
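As a rough idea of what such detection could look like (a sketch only, assuming Boost.ASIO/Beast error codes and Icinga 2's `Log` facility; helper names such as `SendExportRequest()`, `Reconnect()` and `ExportOnce()` are hypothetical, not the PR's actual API):

```cpp
#include <boost/asio/error.hpp>
#include <boost/beast/http/error.hpp>
#include <boost/system/system_error.hpp>

void OTel::ExportOnce(boost::asio::yield_context yc)
{
	try {
		SendExportRequest(yc); // hypothetical: write the OTLP/HTTP request and read the response
	} catch (const boost::system::system_error& ex) {
		auto ec = ex.code();

		if (ec == boost::asio::error::eof || ec == boost::beast::http::error::end_of_stream ||
			ec == boost::asio::error::connection_reset) {
			// The receiver merely dropped the (ideally persistent) connection between requests;
			// reconnect and retry instead of treating this as a critical failure.
			Log(LogWarning, "OTel") << "Collector closed the connection, reconnecting: " << ec.message();
			Reconnect(yc); // hypothetical
			return;
		}

		Log(LogCritical, "OTel") << "Failed to export metrics: " << ex.what();
	}
}
```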
Yeah, that makes sense; if the client only SHOULD keep the connection alive, then a less severe log level is alright.
I've fixed the critical logs shown in #10685 (comment) by degrading that specific …
The newly pushed commits include the following changes:
```cpp
/**
 * A zero-copy output stream that writes directly to an Asio [TLS] stream.
 *
 * This class implements the @c google::protobuf::io::ZeroCopyOutputStream interface, allowing Protobuf
 * serializers to write data directly to an Asio [TLS] stream without unnecessary copying of data. It
 * doesn't buffer data internally, but instead writes it in chunks to the underlying stream using an HTTP
 * request writer (@c HttpRequestWriter) in a Protobuf binary format. It is not safe to be reused across
 * multiple export calls.
 *
 * @ingroup otel
 */
class AsioProtobufOutStream final : public google::protobuf::io::ZeroCopyOutputStream
{
```
I think this should be moved to its own non-Otel-specific header so it's easier for other classes to use it, now that we have the dependency on protobuf. I get that it's still behind a build switch, so we can't just put it in "remote/protobuf.hpp". Maybe we can put it in "lib/protobuf/protobuf.hpp" or something like that?
lib/remote/httpmessage.hpp
```cpp
/**
 * HTTP request serializer with support for efficient streaming of the body.
 *
 * This class is similar to @c HttpResponse but is specifically designed for sending HTTP requests with
 * potentially large bodies that are generated on-the-fly. Just as with HTTP responses, requests can use
 * chunk encoding too if the server on the other end supports it.
 *
 * @ingroup remote
 */
class HttpRequestWriter : public boost::beast::http::request<SerializableBody<boost::beast::flat_buffer>>
{
```
I dislike the naming and asymmetry of this class, and that it essentially needs to copy code because the existing class is not general enough.
I've got an idea how to fix that and make templated (Incoming|Outgoing)HttpMessage classes that work for more general use-cases. I wanted to do this originally in #10516, but couldn't justify it because nothing else was using that code. But with this PR I think there's an argument for that here and it could help with my #10668 as well.
I'll make a refactor PR with no functional changes for master and link that here, then we have a consistent class hierarchy and this PR can get a little smaller. I'll add all the functions you additionally need here (Commit(), Prepare() and the Stream as a variant), too.
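As a very rough sketch of that idea (the names here are hypothetical and not taken from the refactor PR), one class template over Beast's message type could serve both requests and responses:

```cpp
// Hypothetical sketch, not the refactor PR's API: a single template replacing the
// two hand-written writer classes, so streaming helpers only need to exist once.
#include <boost/beast/core/flat_buffer.hpp>
#include <boost/beast/http/message.hpp>

template<bool IsRequest, class Body>
class OutgoingHttpMessage : public boost::beast::http::message<IsRequest, Body>
{
public:
	using boost::beast::http::message<IsRequest, Body>::message;

	// Shared helpers like Prepare() and Commit() would live here instead of being
	// duplicated between the request and response writer classes.
};

// Assuming Icinga 2's SerializableBody type from this PR:
// using HttpRequestWriter = OutgoingHttpMessage<true, SerializableBody<boost::beast::flat_buffer>>;
// using HttpResponseWriter = OutgoingHttpMessage<false, SerializableBody<boost::beast::flat_buffer>>;
```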
Sounds like a good idea! I initially wanted to generalize the existing class as well, but then decided against it as it would have made the PR even larger.
Done.
```cmake
if(NOT ICINGA2_OPENTELEMETRY_PROTOS_DIR STREQUAL "" AND NOT EXISTS "${ICINGA2_OPENTELEMETRY_PROTOS_DIR}")
  message(FATAL_ERROR "The provided ICINGA2_OPENTELEMETRY_PROTOS_DIR '${ICINGA2_OPENTELEMETRY_PROTOS_DIR}' does not exist!")
elseif(ICINGA2_OPENTELEMETRY_PROTOS_DIR STREQUAL "")
  message(STATUS "Fetching OpenTelemetry proto files...")
  include(FetchContent)
  FetchContent_Declare(
    opentelemetry-proto
    GIT_REPOSITORY https://github.com/open-telemetry/opentelemetry-proto.git
    GIT_TAG v1.9.0
  )
  FetchContent_MakeAvailable(opentelemetry-proto)
  set(ICINGA2_OPENTELEMETRY_PROTOS_DIR "${opentelemetry-proto_SOURCE_DIR}")
endif()
```
Just a side note:
I actually didn't realize that we have FetchContent in our minimum CMake version, but I guess we do since recently; our minimum is 3.11. This opens up the possibility of pulling in header-only libraries directly from the source, for example nlohmann_json (I might make a PR for that) or at some point maybe even doctest 🥲.
This module is copied from CMake's official module repository[^1] and
contains only minor changes as outlined below.
```diff
--- a/third-party/cmake/protobuf/FindProtobuf.cmake
+++ b/third-party/cmake/protobuf/FindProtobuf.cmake
@@ -218,9 +218,6 @@ Example:
GENERATE_EXTENSIONS .grpc.pb.h .grpc.pb.cc)
#]=======================================================================]
-cmake_policy(PUSH)
-cmake_policy(SET CMP0159 NEW) # file(STRINGS) with REGEX updates CMAKE_MATCH_<n>
-
function(protobuf_generate)
set(_options APPEND_PATH DESCRIPTORS)
set(_singleargs LANGUAGE OUT_VAR EXPORT_MACRO PROTOC_OUT_DIR PLUGIN PLUGIN_OPTIONS DEPENDENCIES)
@@ -503,7 +500,7 @@ if( Protobuf_USE_STATIC_LIBS )
endif()
endif()
-include(${CMAKE_CURRENT_LIST_DIR}/SelectLibraryConfigurations.cmake)
+include(SelectLibraryConfigurations)
# Internal function: search for normal library as well as a debug one
# if the debug one is specified also include debug/optimized keywords
@@ -768,7 +765,7 @@ if(Protobuf_INCLUDE_DIR)
endif()
endif()
-include(${CMAKE_CURRENT_LIST_DIR}/FindPackageHandleStandardArgs.cmake)
+include(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(Protobuf
REQUIRED_VARS Protobuf_LIBRARIES Protobuf_INCLUDE_DIR
VERSION_VAR Protobuf_VERSION
@@ -805,5 +802,3 @@ foreach(Camel
string(TOUPPER ${Camel} UPPER)
set(${UPPER} ${${Camel}})
endforeach()
-
-cmake_policy(POP)
```
[^1]: https://github.com/Kitware/CMake/blob/v3.31.0/Modules/FindProtobuf.cmake
Since I had to rebase this, a force push was unavoidable, so while force-pushing anyway, I've cleaned up the commits a bit.
@yhabteab As discussed, we probably need some attributes to identify multiple metrics for one check command. The load check, for example, returns load1, load5 and load15 and has different warn/crit thresholds for each:
Al2Klimov left a comment
Before merging this PR, make sure the GHA are green.
> we will probably end up having to provide our own Protobuf packages for these Distros
No. Instead, disable OTLPMetricsWriter on the problematic distros.
If we can require PHP 8.2 for the whole Icinga Web 2 (Icinga/icingaweb2#5444) and *nix for any perfdata writer (#9704), we can surely require existing Protobuf packages for this shiny new feature.
Additional packages are just an additional burden. Even if they're feasible by themselves, not all of our repos are available to the GHA.
For the record:

- Amazon Linux 2 (will be EOL soon, so not a big deal)
  - Standard Support ends in 5 months (30 Jun 2026) – https://endoflife.date/amazon-linux
- Debian 11 (Bullseye) - will be EOL soon as well
  - EOL LTS: 2026-08-31 – https://wiki.debian.org/DebianReleases
- Ubuntu 22.04 LTS has the same issue
  - End of standard support: June 2027 – https://documentation.ubuntu.com/project/release-team/list-of-releases/
- And finally, the big one: RHEL 8
  - Maintenance support ends: May 31, 2029 – https://access.redhat.com/support/policy/updates/errata#Life_Cycle_Dates
This PR introduces the long-awaited `OTelWriter`, a new Icinga 2 component that enables seamless integration with OpenTelemetry. I'm a newbie to OpenTelemetry, so bear with me if you spot any obvious mistakes ;). I would therefore highly appreciate any feedback from OpenTelemetry experts (cc @martialblog and all the other users who reacted to the referenced issue).

First and foremost, this might surprise some of you, but this PR does not make use of the existing OpenTelemetry C++ SDK.
The reason for this is twofold:
Instead, I implemented a tiny OTLP HTTP client based on Boost.Beast that only supports OpenTelemetry metrics. That's right, no traces or logs, just metrics. Of course, it still uses Protocol Buffers for serialization as required by the OTLP specification, but without pulling in the entire OpenTelemetry C++ SDK. Also, since Icinga 2 just transforms the collected performance data into OpenTelemetry metrics (which means there's no way to know ahead of time which metrics with which names/units will be sent), the implementation doesn't provide any advanced aggregation features like the OpenTelemetry SDK does. Instead, it simply creates a single metric stream without any units or aggregation temporality, then appends each produced performance data value, transformed into an OTel Gauge data point, to that stream. Here's what the OpenTelemetry collector debug printout looks like when sending some sample performance data to a local OpenTelemetry collector instance:
Expand Me

```json
{ "resourceMetrics": [ { "resource": { "attributes": [ { "key": "service.name", "value": { "stringValue": "Icinga 2" } }, { "key": "service.instance.id", "value": { "stringValue": "547bc214-5b76-484e-833d-2de90da1bb74" } }, { "key": "service.version", "value": { "stringValue": "v2.15.0-235-gb35b335f2" } }, { "key": "telemetry.sdk.language", "value": { "stringValue": "cpp" } }, { "key": "telemetry.sdk.name", "value": { "stringValue": "Icinga 2 OTel Integration" } }, { "key": "telemetry.sdk.version", "value": { "stringValue": "v2.15.0-235-gb35b335f2" } }, { "key": "service.namespace", "value": { "stringValue": "icinga" } }, { "key": "icinga2.host.name", "value": { "stringValue": "something" } }, { "key": "icinga2.command.name", "value": { "stringValue": "icinga" } } ], "entityRefs": [ { "type": "host", "idKeys": [ "icinga2.host.name" ] } ] }, "scopeMetrics": [ { "scope": { "name": "icinga2", "version": "v2.15.0-235-gb35b335f2" }, "metrics": [ { "name": "state_check.perfdata", "gauge": { "dataPoints": [ { "attributes": [ { "key": "label", "value": { "stringValue": "api_num_conn_endpoints" } } ], "startTimeUnixNano": "1768762502097898752", "timeUnixNano": "1768762502101475072", "asDouble": 0 }, { "attributes": [ { "key": "label", "value": { "stringValue": "api_num_endpoints" } } ], "startTimeUnixNano": "1768762502097898752", "timeUnixNano": "1768762502101475072", "asDouble": 0 } ] } } ], "schemaUrl": "https://opentelemetry.io/schemas/1.39.0" } ], "schemaUrl": "https://opentelemetry.io/schemas/1.39.0" }, { "resource": { "attributes": [ { "key": "service.name", "value": { "stringValue": "Icinga 2" } }, { "key": "service.instance.id", "value": { "stringValue": "2d6c27cd-484d-436d-9542-b70abdaf2f76" } }, { "key": "service.version", "value": { "stringValue": "v2.15.0-235-gb35b335f2" } }, { "key": "telemetry.sdk.language", "value": { "stringValue": "cpp" } }, { "key": "telemetry.sdk.name", "value": { "stringValue": "Icinga 2 OTel Integration" } }, { "key": "telemetry.sdk.version", "value": { "stringValue": "v2.15.0-235-gb35b335f2" } }, { "key": "service.namespace", "value": { "stringValue": "icinga" } }, { "key": "icinga2.host.name", "value": { "stringValue": "something" } }, { "key": "icinga2.service.name", "value": { "stringValue": "something-service" } }, { "key": "icinga2.command.name", "value": { "stringValue": "icinga" } } ], "entityRefs": [ { "type": "service", "idKeys": [ "icinga2.host.name", "icinga2.service.name" ] } ] }, "scopeMetrics": [ { "scope": { "name": "icinga2", "version": "v2.15.0-235-gb35b335f2" }, "metrics": [ { "name": "state_check.perfdata", "gauge": { "dataPoints": [ { "attributes": [ { "key": "label", "value": { "stringValue": "api_num_conn_endpoints" } } ], "startTimeUnixNano": "1768762509990163200", "timeUnixNano": "1768762510002787072", "asDouble": 0 }, { "attributes": [ { "key": "label", "value": { "stringValue": "api_num_endpoints" } } ], "startTimeUnixNano": "1768762509990163200", "timeUnixNano": "1768762510002787072", "asDouble": 0 } ] } } ], "schemaUrl": "https://opentelemetry.io/schemas/1.39.0" } ], "schemaUrl": "https://opentelemetry.io/schemas/1.39.0" } ] }
```

As already mentioned, this is a first implementation and everything is open for discussion, but primarily about the following aspects:

- … the `enable_send_metadata` option is set and include attributes like `icinga2.check.state`, `icinga2.check.latency`, etc.). EDIT: There are no such attributes anymore, and the `enable_send_metadata` option is gone as well.
- … `icinga2.` to avoid collisions) and the overall metric naming (currently just `icinga2.perfdata` for all performance data points). EDIT: The metrics have been renamed to `state_check.perfdata`, `state_check.threshold.warning` etc. as suggested in #10685 (comment).

The high-level class overview is highlighted in the following Mermaid UML diagram:
```mermaid
---
title: OTel Integration
---
classDiagram
    %% The two classes below are just type alias definitions for better readability.
    note for OTelAttrVal "OTelAttrVal is implemented as a type alias not a class."
    note for OTelAttrsMap "OTelAttrsSet is implemented as a type alias not a class."

    class OTelAttrVal {
        <<type alias>>
        +std::variant~bool, int64_t, double, String~
    }
    class OTelAttrsMap {
        <<type alias>>
        +set~pair~String-AttrValue~~
    }
    OTelAttrsMap --o OTelAttrVal : manages

    class Gauge {
        -std::unique_ptr~proto::Gauge~ ProtoGauge
        +Transform(metric: proto::Metric*) void
        +IsEmpty() bool
        +Record(value: double|int64_t, start_time: double, end_time: double, attributes: OTelAttrsMap) std::size_t
    }
    OTelAttrsMap <.. Gauge : uses

    class OTel {
        -proto::ExportMetricsServiceRequest Request
        -std::optional~StreamType~ Stream
        -asio::io_context::strand Strand
        -std::atomic_bool Exporting, Stopped
        +Start() void
        +Stop() void
        +Export(MetricsRequest& request) void
        +IsExporting() bool
        +Stopped() bool
        +ValidateName(name: string_view) bool$
        +IsRetryableExportError(status: beast::http::status) bool$
        +PopulateResourceAttrs(rm: const std::unique_ptr~opentelemetry::proto::metrics::v1::ResourceMetrics~&) void$
        -Connect(yc: boost::asio::yield_context&) void
        -ExportLoop(yc: boost::asio::yield_context&) void
        -Export(yc: boost::asio::yield_context&) void
        -ExportingSet(exporting: bool, notifyAll: bool) void
    }
    AsioProtobufOutputStream <.. OTel : serializes via
    RetryableExportError <.. OTel : uses
    Backoff <.. OTel : uses

    `google::protobuf::io::ZeroCopyOutputStream` <|.. AsioProtobufOutputStream : implements
    class AsioProtobufOutputStream {
        -int64_t Pos
        -int64_t Buffered
        -HttpRequestWriter Writer
        -asio::yield_context& Yield
        +AsioProtobufOutputStream(stream: const StreamType&, info: const OTelConnInfo&, yield: asio::yield_context&)
        +Next(data: void**, size: int**) bool
        +BackUp(count: int) void
        +ByteCount() std::size_t
        -Flush(final: bool) bool
    }

    class RetryableExportError {
        -uint64_t Throttle
        +Throttle() uint64_t
        +what() const char*
    }
    class Backoff {
        +std::chrono::milliseconds MaxBackoff$
        +std::chrono::milliseconds MinBackoff$
        +operator()() std::chrono::milliseconds
    }

    class OTLPMetricsWriter {
        -unordered_set~shared_ptr~Metric~~ Metrics
        -OTel m_Exporter
    }
    Gauge "0...*" o-- "" OTLPMetricsWriter : produces
    OTelAttrsMap <.. OTLPMetricsWriter : creates
    OTel "1" o-- "" OTLPMetricsWriter : exports via
```

The `OTelWriter` by itself is pretty straightforward and doesn't contain any complex logic. The main OTel-related logic is encapsulated in a new library called `otel`, which provides an HTTP client that conforms to the OTLP HTTP protocol specification. The `OTel` class is the one used by the `OTelWriter` to export metrics to the OpenTelemetry collector. The `OTel` class internally uses several helper classes to build the required Protocol Buffers messages as per the OpenTelemetry specification. Unlike the existing metric writers, this client doesn't create separate HTTP connections for each metric export. Instead, it maintains a persistent connection to the OpenTelemetry collector and reuses it for subsequent exports until the connection is closed by either side. The Protobuf message is serialized directly into the HTTP connection without any intermediate buffering of the serialized message. This is only possible because the OpenTelemetry Collector supports HTTP/1.1 chunked transfer encoding, which allows sending the message in chunks without knowing the entire message size beforehand.

That's it. Overall, this implementation is quite minimalistic and only implements the bare minimum required to send metrics to an OpenTelemetry collector.
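To make the zero-copy serialization path more concrete, here is a hedged sketch (not the PR's actual code; it reuses the names from the snippets and diagram above, and error handling is simplified):

```cpp
// Sketch only: the fully built ExportMetricsServiceRequest is serialized straight
// into the [TLS] stream through the ZeroCopyOutputStream adapter, so the whole
// Protobuf message never has to be materialized in memory at once. Each buffer
// handed out by Next() ends up as one chunk of the chunked HTTP request body.
void OTel::Export(boost::asio::yield_context& yc)
{
	AsioProtobufOutStream outputS{*m_Stream, m_ConnInfo, yc};

	if (!m_Request->SerializeToZeroCopyStream(&outputS)) {
		throw std::runtime_error("Failed to serialize the OTLP export request.");
	}

	// Sending the final (zero-sized) chunk that terminates the request body and
	// reading the collector's response would follow here.
}
```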
Known Issues
Well, since the OpenTelemetry proto files require the `proto3` language syntax, it turned out that not all of our supported distros provide a recent enough version of `protoc` that supports `proto3`. Those distros are:

Also, due to the `FindProtobuf` module shipped with CMake versions < 3.31.0 being completely broken, I ended up having to import that very same module from CMake 3.31.0 into our CMake third-party modules directory. This is obviously not ideal, but I didn't find any other way around this issue. Once we bump our minimum required CMake version to 3.31.0, we can remove this workaround again. So, the PR is partly this huge because of that workaround.

Testing

Testing this PR is a non-trivial task, as it requires some knowledge about OpenTelemetry/Prometheus and setting up a local collector instance. Here's a brief guide for anyone interested in testing this PR:
First, you need to set up an OpenTelemetry Collector instance. You can use the official Docker image for this purpose. I've included two exporters in the configuration: a standard output exporter for debugging and a Prometheus exporter to scrape the metrics via Prometheus (choose whatever you're comfortable with).
otel-collector-config.yaml
And then start the collector using the following command:
```
docker network create otel
docker run --network otel -p 4318:4318 --rm -v $(pwd)/otel-collector-config.yml:/etc/otelcol/config.yaml otel/opentelemetry-collector
```

If you chose the debug exporter instead of Prometheus, you will see the received metrics printed in the container logs once Icinga 2 starts sending them. Otherwise, you have to add a Prometheus instance to scrape the metrics from the collector. For this, you can just use the following config and start Prometheus via Docker as well:
And start Prometheus in the background:
```
docker run --network otel --name prometheus -d -p 9090:9090 -v $(pwd)/config.yml:/etc/prometheus/prometheus.yml prom/prometheus
```

Now, the only thing left to do is to build Icinga 2 with this PR applied and configure the `OTelWriter`. If you're an experienced Icinga 2 user who knows how to manually build Icinga 2 Docker images, you can do your own thing and skip the following steps. For everyone else, here's a quick guide: use the `Containerfile` provided in the just cloned repository:

```
docker build --tag icinga/icinga2:otel --file Containerfile .
```

Afterwards, you can use this image `icinga/icinga2:otel` to start an Icinga 2 container with the `OTelWriter` configured, just like any other Icinga 2 component.

Having done all of the above (especially if you chose the Prometheus exporter), how do you verify that everything works as expected? Well, you have to do a few more things again :). Here's what I used to render some beautiful graphs in Icinga Web 2.
First, clone the `icingaweb2-module-perfdatagraphs-prometheus` module developed by @oxzi. Though, in order to make it work with the data sent by the `OTelWriter`, you need to perform some monkey patching.

Monkey Patch
Next, well, you need to install and configure yet another module, the `icingaweb2-module-perfdatagraphs` module, in your Icinga Web 2 instance. Follow the instructions in the repository to get it set up and use the above cloned module as a backend for this module.

If everything is set up correctly, you should start seeing performance data metrics in Prometheus as well as beautiful graphs in Icinga Web 2.
TODO
resolves #10439
resolves #9900