
Commit 44ba09c

Add JMX Collector API Specification.
1 parent 3649821 commit 44ba09c


4 files changed (+364, -1 lines)


README.adoc

Lines changed: 3 additions & 0 deletions
@@ -13,6 +13,8 @@ Kafka/ZK REST API is to provide the production-ready endpoints to perform some a
 * Consumer group(old zookeeper based/new kafka based) list/describe
 * Offset check/reset
 * Consumer Group Lag check
+* Collect JMX metrics from applications that expose JMX metrics +
+For more details, refer to https://github.com/gnuhpc/Kafka-zk-restapi/blob/master/docs/JMXCollector.adoc[JMXCollector API Specification]
 // end::base-t[]

 image::https://raw.githubusercontent.com/gnuhpc/Kafka-zk-restapi/master/pics/ShowApi.png[API]
@@ -63,6 +65,7 @@ You can access Swagger-UI by accessing http://127.0.0.1:8121/api

 * kafka-controller : Kafka Api
 * zookeeper-controller : Zookeeper Api
+* collector-controller : JMX Metric Collector Api


 === https://github.com/gnuhpc/Kafka-zk-restapi/blob/master/docs/definitions.adoc[Data Model Definitions for 0.10]

docs/JMXCollector.adoc

Lines changed: 360 additions & 0 deletions
@@ -0,0 +1,360 @@
= JMX Collector REST API

== Overview
The JMX Collector REST API provides two APIs for collecting JMX metrics from applications that expose JMX metrics.

* V1: Collects all the JMX metric data.
* V2: A higher-level API that collects JMX metric data selected by query filters. You can specify which metrics to include or exclude.

== V1 API

=== How to call the V1 API
Call the service with an HTTP GET request and provide a String parameter named "jmxurl", as follows:

[source, html]
----
http://localhost:8121/jmx/v1?jmxurl=127.0.0.1:19999,127.0.0.1:29999
----
*Notice:* The "jmxurl" parameter should be a comma-separated list of {IP:Port} pairs or set to 'default'. The list should match the following regex. If set to 'default', the value of the property "jmx.kafka.jmxurl" defined in the application config file is used.

[source, java]
----
private static final String IP_AND_PORT_LIST_REGEX = "(([0-9]+(?:\\.[0-9]+){3}:[0-9]+,)*([0-9]+(?:\\.[0-9]+){3}:[0-9]+)+)|(default)";
----
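
For illustration, a minimal Java 11+ client sketch of the call above. The JMX endpoints are the same placeholders used in the example, and the local check simply mirrors the regex the service applies to "jmxurl":

[source, java]
----
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Pattern;

public class JmxV1Client {

  // Same pattern the service uses to validate the "jmxurl" parameter.
  private static final String IP_AND_PORT_LIST_REGEX =
      "(([0-9]+(?:\\.[0-9]+){3}:[0-9]+,)*([0-9]+(?:\\.[0-9]+){3}:[0-9]+)+)|(default)";

  public static void main(String[] args) throws Exception {
    String jmxurl = "127.0.0.1:19999,127.0.0.1:29999"; // illustrative endpoints

    // Validate locally before calling, mirroring the server-side check.
    if (!Pattern.matches(IP_AND_PORT_LIST_REGEX, jmxurl)) {
      throw new IllegalArgumentException("jmxurl must be a comma-separated IP:Port list or 'default'");
    }

    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:8121/jmx/v1?jmxurl=" + jmxurl))
        .GET()
        .build();

    HttpResponse<String> response =
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.body()); // JSON list described in the next section
  }
}
----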

=== V1 API JSON Format Response
The response from the service is a list of objects in JSON format. Each JSON object includes the following fields:

* host: The "host" field is composed of the IP and the exposed JMX port.
* timestamp: The time of collection. For easier reading, the "timestamp" field is formatted as "yyyy-MM-dd HH:mm:ss".
* collected: If the collection is successful, the field "collected" is true; otherwise it is false.
* mbeanInfo: The JMX metric data. It is a dictionary whose keys are JMX bean names and whose values are attribute-info dictionaries. "mbeanInfo" is empty when "collected" is false.
* msg: The error message if collection fails.

==== Sample Response for success
[source, json]
----
[
  {
    "host": "127.0.0.1:19999",
    "timestamp": "2018-04-10 00:13:16",
    "collected": true,
    "mbeanInfo": {
      "kafka.network:type=RequestMetrics,name=ResponseQueueTimeMs,request=FetchFollower": {
        "75thPercentile": "0.0",
        "Mean": "0.2777777777777778",
        "StdDev": "0.7911877721292356",
        "98thPercentile": "3.69999999999996",
        "Min": "0.0",
        "99thPercentile": "6.0",
        "95thPercentile": "1.0",
        "Max": "6.0",
        "999thPercentile": "6.0",
        "Count": "72",
        "50thPercentile": "0.0"
      },
      "kafka.server:type=ReplicaFetcherManager,name=MinFetchRate,clientId=Replica": {
        "Value": "1.8566937378852422"
      }
      ...
    }
  },
  {
    "host": "127.0.0.1:29999",
    "timestamp": "2018-04-10 00:14:16",
    "collected": true,
    "mbeanInfo": {
      ...
    }
  }
]
----
==== Sample Response for failure
[source, json]
----
[
  {
    "host": "127.0.0.1:19999",
    "timestamp": "2018-04-10 14:18:28",
    "collected": false,
    "mbeanInfo": {},
    "msg": "org.gnuhpc.bigdata.exception.CollectorException occurred. URL: service:jmx:rmi:///jndi/rmi://127.0.0.1:19999/jmxrmi. Reason: java.rmi.ConnectException: Connection refused to host: 192.168.1.106; nested exception is: \n\tjava.net.ConnectException: Operation timed out"
  },
  {
    "host": "127.0.0.1:29999",
    "timestamp": "2018-04-10 14:21:06",
    "collected": false,
    "mbeanInfo": {},
    "msg": "org.gnuhpc.bigdata.exception.CollectorException occurred. URL: service:jmx:rmi:///jndi/rmi://127.0.0.1:29999/jmxrmi. Reason: java.rmi.ConnectException: Connection refused to host: 192.168.1.106; nested exception is: \n\tjava.net.ConnectException: Operation timed out"
  }
]
----
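
For reference, the V1 response can be mapped to a small client-side value class. A minimal sketch, assuming Jackson is on the classpath; the class name and field mapping follow the field list above and are illustrative, not the server's own JMXMetricDataV1 definition:

[source, java]
----
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.List;
import java.util.Map;

// Field names mirror the spec above: host, timestamp, collected, mbeanInfo, msg.
public class JmxMetricDataV1Dto {
  public String host;                                 // "IP:Port" of the JMX endpoint
  public String timestamp;                            // "yyyy-MM-dd HH:mm:ss"
  public boolean collected;                           // true if collection succeeded
  public Map<String, Map<String, String>> mbeanInfo;  // bean name -> attribute name -> value
  public String msg;                                  // error message on failure, otherwise absent

  public static List<JmxMetricDataV1Dto> parse(String json) throws Exception {
    return new ObjectMapper().readValue(json, new TypeReference<List<JmxMetricDataV1Dto>>() {});
  }
}
----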

== V2 API
If you want to collect only some of the metrics rather than all of them, use the V2 API.

=== How to call the V2 API: /jmx/v2
Call the service with an HTTP POST request. Provide a String parameter named "jmxurl" and put the JSON query filter in the request body, as follows:

[source, html]
----
http://localhost:8121/jmx/v2?jmxurl=127.0.0.1:19999,127.0.0.1:29999
RequestBody:
{
  "filters": [
    {
      "include": {
        "domain": "kafka.server",
        "bean": ["kafka.server:type=BrokerTopicMetrics,name=TotalProduceRequestsPerSec"],
        "attribute": ["OneMinuteRate", "FiveMinuteRate"]
      },
      "exclude": {
      }
    }
  ]
}
----
==== Query Filter Instructions
The query filter defines the query conditions. The field "filters" is a list of parallel query configurations.
Only two keys are allowed in each query configuration:

* include (mandatory): Dictionary of JMX filters. Any attribute that matches these filters will be collected unless it also matches the "exclude" filters (see below).
* exclude (optional): Dictionary of JMX filters. Attributes that match these filters won't be collected.

Each include or exclude dictionary supports the following keys:

* domain: a list of domain names (e.g. java.lang)
* domain_regex: a list of regexes on the domain name (e.g. java\.lang.*)
* bean or bean_name: a list of full bean names (e.g. java.lang:type=Compilation)
* bean_regex: a list of regexes on the full bean names (e.g. java\.lang.*[,:]type=Compilation.*)
* attribute: accepts two types of values: a dictionary whose keys are attribute names, or a list of attribute names

You can freely customize the query conditions, or use a filter template for convenience (see below for details).
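
As an illustration, a minimal Java (15+) client sketch that posts the filter from the example above to /jmx/v2, assuming the service accepts an application/json body; the endpoints, bean, and attribute names are the same placeholders used in the example:

[source, java]
----
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class JmxV2Client {
  public static void main(String[] args) throws Exception {
    // Query filter: include two attributes of one broker bean, exclude nothing.
    String filterBody = """
        {
          "filters": [
            {
              "include": {
                "domain": "kafka.server",
                "bean": ["kafka.server:type=BrokerTopicMetrics,name=TotalProduceRequestsPerSec"],
                "attribute": ["OneMinuteRate", "FiveMinuteRate"]
              },
              "exclude": {}
            }
          ]
        }
        """;

    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:8121/jmx/v2?jmxurl=127.0.0.1:19999,127.0.0.1:29999"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(filterBody))
        .build();

    HttpResponse<String> response =
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.body()); // list of metric objects as described below
  }
}
----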

==== Response of the V2 API /jmx/v2
The response from the service is a list of objects in JSON format. Each JSON object includes the following fields:

* host: The "host" field is composed of the IP and the exposed JMX port.
* timestamp: The time of collection. For easier reading, the "timestamp" field is formatted as "yyyy-MM-dd HH:mm:ss".
* collected: If the collection is successful, the field "collected" is true; otherwise it is false.
* metrics: The JMX metric data. It is a list of dictionaries, each with the following keys:
** domain: the domain name of the metric
** metric_type: the metric type defined in the "attribute" field of the query filter. The default value is "gauge".
** alias: the metric alias defined in the "attribute" field of the query filter
** beanName: the bean name of the metric
** attributeName: the attribute name of the metric
** value: the metric value
* msg: The error message if collection fails.

Sample response is as follows:

[source, json]
----
[
  {
    "host": "127.0.0.1:4444",
    "timestamp": "2018-04-04 22:40:18",
    "collected": true,
    "metrics": [
      {
        "domain": "kafka.consumer",
        "metric_type": "consumer",
        "alias": "owned_partitions_count",
        "beanName": "kafka.consumer:clientId=console-consumer-4251,groupId=console-consumer-4251,name=OwnedPartitionsCount,type=ZookeeperConsumerConnector",
        "attributeName": "Value",
        "value": 3
      },
      {
        "domain": "kafka.consumer",
        "metric_type": "consumer",
        "alias": "messages_per_sec",
        "beanName": "kafka.consumer:clientId=console-consumer-4251,name=MessagesPerSec,type=ConsumerTopicMetrics",
        "attributeName": "Count",
        "value": 0
      },
      {
        "domain": "kafka.consumer",
        "metric_type": "consumer",
        "alias": "min_fetch_rate",
        "beanName": "kafka.consumer:clientId=console-consumer-4251,name=MinFetchRate,type=ConsumerFetcherManager",
        "attributeName": "Value",
        "value": 9.7817371514609
      },
      {
        "domain": "kafka.consumer",
        "metric_type": "consumer",
        "alias": "kafka_commits_per_sec",
        "beanName": "kafka.consumer:clientId=console-consumer-4251,name=KafkaCommitsPerSec,type=ZookeeperConsumerConnector",
        "attributeName": "Count",
        "value": 0
      },
      {
        "domain": "kafka.consumer",
        "metric_type": "consumer",
        "alias": "bytes_per_sec",
        "beanName": "kafka.consumer:clientId=console-consumer-4251,name=BytesPerSec,type=ConsumerTopicMetrics",
        "attributeName": "Count",
        "value": 0
      },
      {
        "domain": "kafka.consumer",
        "metric_type": "consumer",
        "alias": "maxlag",
        "beanName": "kafka.consumer:clientId=console-consumer-4251,name=MaxLag,type=ConsumerFetcherManager",
        "attributeName": "Value",
        "value": 0
      }
    ],
    "msg": null
  }
]
----

=== How to call the V2 API: /jmx/v2/filters
Specific applications expose their own JMX metrics, so we provide some filter templates such as KafkaBrokerFilter, KafkaConsumerFilter, and KafkaProducerFilter.

This API lists the query filter templates matching the "filterKey" parameter (not case sensitive). If "filterKey" is empty, all templates are returned.

[source, html]
----
http://localhost:8121/jmx/v2/filters?filterKey=consumer
----

The response is as follows:

[source, json]
----
{
  "KafkaConsumerFilter": {
    "filters": [
      {
        "include": {
          "domain": "kafka.consumer",
          "bean_regex": "kafka.consumer:type=ConsumerFetcherManager,name=MaxLag,clientId=([-.\\w]+)",
          "attribute": {
            "Value": {
              "metric_type": "KAFKA_CONSUMER_OLD_HIGH",
              "alias": "MaxLag"
            }
          }
        }
      },
      {
        "include": {
          "domain": "kafka.consumer",
          "bean_regex": "kafka.consumer:type=ConsumerFetcherManager,name=MinFetchRate,clientId=([-.\\w]+)",
          "attribute": {
            "Value": {
              "metric_type": "KAFKA_CONSUMER_OLD_HIGH",
              "alias": "MinFetchRate"
            }
          }
        }
      },
      {
        "include": {
          "domain": "kafka.consumer",
          "bean_regex": "kafka.consumer:type=ConsumerTopicMetrics,name=MessagesPerSec,clientId=([-.\\w]+)",
          "attribute": {
            "Count": {
              "metric_type": "KAFKA_CONSUMER_OLD_HIGH",
              "alias": "MessagesPerSec"
            }
          }
        }
      },
      {
        "include": {
          "domain": "kafka.consumer",
          "bean_regex": "kafka.consumer:type=ConsumerTopicMetrics,name=BytesPerSec,clientId=([-.\\w]+)",
          "attribute": {
            "Count": {
              "metric_type": "KAFKA_CONSUMER_OLD_HIGH",
              "alias": "BytesPerSec"
            }
          }
        }
      },
      {
        "include": {
          "domain": "kafka.consumer",
          "bean_regex": "kafka.consumer:type=ZookeeperConsumerConnector,name=KafkaCommitsPerSec,clientId=([-.\\w]+)",
          "attribute": {
            "Count": {
              "metric_type": "KAFKA_CONSUMER_OLD_HIGH",
              "alias": "KafkaCommitsPerSec"
            }
          }
        }
      },
      {
        "include": {
          "domain": "kafka.consumer",
          "bean_regex": "kafka.consumer:type=ZookeeperConsumerConnector,name=OwnedPartitionsCount,clientId=([-.\\w]+),groupId=([-.\\w]+)",
          "attribute": {
            "Value": {
              "metric_type": "KAFKA_CONSUMER_OLD_HIGH",
              "alias": "OwnedPartitionsCount"
            }
          }
        }
      }
    ]
  }
}
----

==== How to add a filter template
You can add a filter template yml file in the resources/JMXFilterTempalte directory. The fields of the file are the same as those of the query filter described above.

A sample filter template is as follows:

[source, yml]
----
filters:
  - include:
      domain: kafka.consumer
      bean_regex: kafka.consumer:type=ConsumerFetcherManager,name=MaxLag,clientId=([-.\w]+)
      attribute:
        Value:
          metric_type: KAFKA_CONSUMER_OLD_HIGH
          alias: MaxLag
  - include:
      domain: kafka.consumer
      bean_regex: kafka.consumer:type=ConsumerFetcherManager,name=MinFetchRate,clientId=([-.\w]+)
      attribute:
        Value:
          metric_type: KAFKA_CONSUMER_OLD_HIGH
          alias: MinFetchRate
  - include:
      domain: kafka.consumer
      bean_regex: kafka.consumer:type=ConsumerTopicMetrics,name=MessagesPerSec,clientId=([-.\w]+)
      attribute:
        Count:
          metric_type: KAFKA_CONSUMER_OLD_HIGH
          alias: MessagesPerSec
  - include:
      domain: kafka.consumer
      bean_regex: kafka.consumer:type=ConsumerTopicMetrics,name=BytesPerSec,clientId=([-.\w]+)
      attribute:
        Count:
          metric_type: KAFKA_CONSUMER_OLD_HIGH
          alias: BytesPerSec
  - include:
      domain: kafka.consumer
      bean_regex: kafka.consumer:type=ZookeeperConsumerConnector,name=KafkaCommitsPerSec,clientId=([-.\w]+)
      attribute:
        Count:
          metric_type: KAFKA_CONSUMER_OLD_HIGH
          alias: KafkaCommitsPerSec
  - include:
      domain: kafka.consumer
      bean_regex: kafka.consumer:type=ZookeeperConsumerConnector,name=OwnedPartitionsCount,clientId=([-.\w]+),groupId=([-.\w]+)
      attribute:
        Value:
          metric_type: KAFKA_CONSUMER_OLD_HIGH
          alias: OwnedPartitionsCount
----

pics/ShowApi.png

7.85 MB

src/main/java/org/gnuhpc/bigdata/controller/CollectorController.java

Lines changed: 1 addition & 1 deletion
@@ -28,7 +28,7 @@ public class CollectorController {
   private String jmxKafkaURL;

   @GetMapping("/jmx/v1")
-  @ApiOperation(value = "Fetch JMX metric data")
+  @ApiOperation(value = "Fetch all JMX metric data")
   public List<JMXMetricDataV1> collectJMXMetric(
       @Pattern(regexp = IP_AND_PORT_LIST_REGEX) @RequestParam @ApiParam(
           value = "Parameter jmxurl should be a comma-separated list of {IP:Port} or set to \'default\'") String jmxurl) {
