
Commit 889a9c9

Initial commit
1 parent beba51e commit 889a9c9

File tree

7 files changed: +329 -80 lines changed


instrumentation/jmx-metrics/README.md

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ No targets are enabled by default. The supported target environments are listed
 - [kafka-broker](javaagent/kafka-broker.md)
 - [tomcat](library/tomcat.md)
 - [wildfly](library/wildfly.md)
-- [hadoop](javaagent/hadoop.md)
+- [hadoop](library/hadoop.md)

 The [jvm](library/jvm.md) metrics definitions are also included in the [jmx-metrics library](./library)
 to allow reusing them without instrumentation. When using instrumentation, the [runtime-telemetry](../runtime-telemetry)

instrumentation/jmx-metrics/javaagent/hadoop.md

Lines changed: 0 additions & 15 deletions
This file was deleted.

instrumentation/jmx-metrics/javaagent/src/main/resources/jmx/rules/hadoop.yaml

Lines changed: 0 additions & 63 deletions
This file was deleted.

instrumentation/jmx-metrics/javaagent/src/test/java/io/opentelemetry/instrumentation/javaagent/jmx/JmxMetricInsightInstallerTest.java

Lines changed: 1 addition & 1 deletion
@@ -32,7 +32,7 @@ class JmxMetricInsightInstallerTest {
   private static final String PATH_TO_ALL_EXISTING_RULES = "src/main/resources/jmx/rules";
   private static final Set<String> FILES_TO_BE_TESTED =
       new HashSet<>(
-          Arrays.asList("activemq.yaml", "camel.yaml", "hadoop.yaml", "kafka-broker.yaml"));
+          Arrays.asList("activemq.yaml", "camel.yaml", "kafka-broker.yaml"));

   @Test
   void testToVerifyExistingRulesAreValid() throws Exception {
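For context on what this one-line change keeps consistent, below is a hedged sketch (not the actual test body) of the kind of coverage check FILES_TO_BE_TESTED supports: every rule file under PATH_TO_ALL_EXISTING_RULES should appear in the set, so deleting hadoop.yaml from the javaagent rules also requires dropping it from the list. The sketch uses only java.nio; the real test's per-file YAML validation is omitted here.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.Stream;

class RuleFileCoverageSketch {

  // Illustrative check: the rule files on disk should exactly match the set the
  // test declares; otherwise a file was added or removed without updating
  // FILES_TO_BE_TESTED.
  static void assertRuleFilesCovered(Set<String> filesToBeTested) throws Exception {
    Path rulesDir = Paths.get("src/main/resources/jmx/rules");
    try (Stream<Path> files = Files.list(rulesDir)) {
      Set<String> onDisk =
          files.map(p -> p.getFileName().toString()).collect(Collectors.toSet());
      if (!onDisk.equals(filesToBeTested)) {
        throw new AssertionError("Rule files " + onDisk + " do not match " + filesToBeTested);
      }
    }
  }
}
```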
instrumentation/jmx-metrics/library/hadoop.md

Lines changed: 15 additions & 0 deletions

# Hadoop Metrics

Here is the list of metrics based on MBeans exposed by Hadoop.

| Metric Name                 | Type          | Attributes                              | Description                                             |
|-----------------------------|---------------|-----------------------------------------|---------------------------------------------------------|
| hadoop.capacity             | UpDownCounter | hadoop.node.name                        | Current raw capacity of data nodes.                     |
| hadoop.capacity.used        | UpDownCounter | hadoop.node.name                        | Current used capacity across all data nodes.            |
| hadoop.block.count          | UpDownCounter | hadoop.node.name                        | Current number of allocated blocks in the system.       |
| hadoop.block.missing        | UpDownCounter | hadoop.node.name                        | Current number of missing blocks.                       |
| hadoop.block.corrupt        | UpDownCounter | hadoop.node.name                        | Current number of blocks with corrupt replicas.         |
| hadoop.volume.failure.count | Counter       | hadoop.node.name                        | Total number of volume failures across all data nodes.  |
| hadoop.file.count           | UpDownCounter | hadoop.node.name                        | Current number of files and directories.                |
| hadoop.connection.count     | UpDownCounter | hadoop.node.name                        | Current number of connections.                          |
| hadoop.datanode.count       | UpDownCounter | hadoop.node.name, hadoop.datanode.state | The number of data nodes.                               |
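The values in this table come from attributes of the FSNamesystem MBean (see the rules file below). As a quick way to eyeball them outside the agent, here is a minimal sketch using the standard javax.management client API; the JMX service URL and port are illustrative and assume remote JMX is enabled on the NameNode.

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class FsNamesystemProbe {
  public static void main(String[] args) throws Exception {
    // Illustrative endpoint; replace with the NameNode's actual JMX RMI address.
    JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:8004/jmxrmi");
    try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
      MBeanServerConnection connection = connector.getMBeanServerConnection();
      ObjectName fsNamesystem = new ObjectName("Hadoop:service=NameNode,name=FSNamesystem");
      // CapacityTotal, CapacityUsed and NumLiveDataNodes back hadoop.capacity,
      // hadoop.capacity.used and hadoop.datanode.count (state=live) respectively.
      for (String attribute : new String[] {"CapacityTotal", "CapacityUsed", "NumLiveDataNodes"}) {
        System.out.println(attribute + " = " + connection.getAttribute(fsNamesystem, attribute));
      }
    }
  }
}
```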
hadoop.yaml

Lines changed: 73 additions & 0 deletions

---
rules:
  - bean: Hadoop:service=NameNode,name=FSNamesystem
    prefix: hadoop.
    metricAttribute:
      hadoop.node.name: param(tag.Hostname)
    mapping:
      # hadoop.capacity
      CapacityTotal:
        metric: capacity
        type: updowncounter
        unit: By
        desc: Current raw capacity of data nodes.
      # hadoop.capacity.used
      CapacityUsed:
        metric: capacity.used
        type: updowncounter
        unit: By
        desc: Current used capacity across all data nodes.
      # hadoop.block.count
      BlocksTotal:
        metric: block.count
        type: updowncounter
        unit: "{block}"
        desc: Current number of allocated blocks in the system.
      # hadoop.block.missing
      MissingBlocks:
        metric: block.missing
        type: updowncounter
        unit: "{block}"
        desc: Current number of missing blocks.
      # hadoop.block.corrupt
      CorruptBlocks:
        metric: block.corrupt
        type: updowncounter
        unit: "{block}"
        desc: Current number of blocks with corrupt replicas.
      # hadoop.volume.failure.count
      VolumeFailuresTotal:
        metric: volume.failure.count
        type: counter
        unit: "{failure}"
        desc: Total number of volume failures across all data nodes.
      # hadoop.file.count
      FilesTotal:
        metric: file.count
        type: updowncounter
        unit: "{file}"
        desc: Current number of files and directories.
      # hadoop.connection.count
      TotalLoad:
        metric: connection.count
        type: updowncounter
        unit: "{connection}"
        desc: Current number of connections.

      # hadoop.datanode.count
      NumLiveDataNodes:
        metric: &metric datanode.count
        type: &type updowncounter
        unit: &unit "{node}"
        desc: &desc The number of data nodes.
        metricAttribute:
          hadoop.datanode.state: const(live)
      NumDeadDataNodes:
        metric: *metric
        type: *type
        unit: *unit
        desc: *desc
        metricAttribute:
          hadoop.datanode.state: const(dead)
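The &metric/*metric entries at the end of the file are ordinary YAML anchors and aliases: NumLiveDataNodes and NumDeadDataNodes share one metric name, type, unit, and description and differ only in the hadoop.datanode.state attribute, so they surface as two attribute sets on a single hadoop.datanode.count metric. Below is a minimal sketch to confirm that resolution, assuming SnakeYAML is on the classpath and a local copy of the file is saved as hadoop.yaml (the path and class name are illustrative); any YAML parser resolves the aliases the same way.

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.Map;
import org.yaml.snakeyaml.Yaml;

class HadoopRuleAnchorCheck {
  @SuppressWarnings("unchecked")
  public static void main(String[] args) throws Exception {
    // Reads a local copy of the rules file; the path is illustrative.
    try (InputStream in = Files.newInputStream(Paths.get("hadoop.yaml"))) {
      Map<String, Object> doc = new Yaml().load(in);
      List<Map<String, Object>> rules = (List<Map<String, Object>>) doc.get("rules");
      Map<String, Object> mapping = (Map<String, Object>) rules.get(0).get("mapping");
      Map<String, Object> live = (Map<String, Object>) mapping.get("NumLiveDataNodes");
      Map<String, Object> dead = (Map<String, Object>) mapping.get("NumDeadDataNodes");
      // Both print "datanode.count": the *metric alias expands to the anchored value.
      System.out.println(live.get("metric"));
      System.out.println(dead.get("metric"));
    }
  }
}
```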
