Commit df6552b (1 parent: 5a9fc1c)

Include HTML performance tools

Add documentation, add a Java-based web server program to avoid installing anything else, downgrade the Jetty version to be compatible with Java 1.6, and update the assemblies accordingly. Fixes #1

26 files changed: +1091 −539 lines

HTML_PERFORMANCE_TOOLS.md (363 additions, 0 deletions)
# RabbitMQ Performance Tool #

We have created a couple of tools to facilitate benchmarking RabbitMQ in
different usage scenarios. One part of these tools is the `PerfTest` Java
class; the other is a set of HTML/JS tools that let you plot the results
obtained from the benchmarks as nice-looking graphs.

The following blog posts show some examples of what can be done with this
library:

[RabbitMQ Performance Measurements, part 1](http://www.rabbitmq.com/blog/2012/04/17/rabbitmq-performance-measurements-part-1/).
[RabbitMQ Performance Measurements, part 2](http://www.rabbitmq.com/blog/2012/04/25/rabbitmq-performance-measurements-part-2/).
## Running benchmarks ##

Let's see how to run some benchmarks and then display the results in an HTML
page using this tool.

To run a benchmark we need to create a _benchmark specification file_, which
is simply a JSON file like this one:

```javascript
[{'name': 'consume', 'type': 'simple', 'params':
    [{'time-limit': 30, 'producer-count': 4, 'consumer-count': 2}]}]
```

Place this code in a file called `publish-consume-spec.js`, then go to the
root folder of the binary distribution and run the following command to start
the benchmark:

```bash
bin/runjava com.rabbitmq.examples.PerfTestMulti \
    publish-consume-spec.js publish-consume-result.js
```

This command starts a benchmark scenario in which four producers send
messages to RabbitMQ over a period of thirty seconds while, at the same time,
two consumers consume those messages.

The results are stored in the file `publish-consume-result.js`, which we will
now use to display a graph in our HTML page.

## Displaying benchmark results ##

Provided you have included our libraries (refer to the "Boilerplate HTML"
section to see how to do that), the following HTML snippet will display the
graph for the benchmark that we just ran:

```html
<div class="chart"
     data-type="time"
     data-latency="true"
     data-x-axis="time (s)"
     data-y-axis="rate (msg/s)"
     data-y-axis2="latency (μs)"
     data-scenario="consume"></div>
```

Here we use HTML _data_ attributes to tell the performance library how the
graph should be displayed. We are telling it to load the `consume` scenario,
showing time in seconds on the x-axis, the rate of messages per second on the
y-axis, and a second y-axis showing latency in microseconds; all of this
displayed in a _time_ kind of graph:

![Publish Consume Graph](./images/publish-consume-graph.png)

If instead of the CSS class `"chart"` we use the `"small-chart"` CSS class,
we get a graph like the one below:

```html
<div class="small-chart"
     data-type="time"
     data-x-axis="time (s)"
     data-y-axis=""
     data-scenario="no-ack"></div>
```

![Small Chart Example](./images/small_chart.png)

Finally, there's a type of graph called `"summary"` that can show a summary
of the whole benchmark. Here's the _HTML_ for displaying it:

```html
<div class="summary"
     data-scenario="shard"></div>
```

And this is how it looks:

![Summary Graph](./images/summary.png)

## Types of graphs ##

We support several types of graphs, which you can specify using the
`data-type` attribute:

- `time`: plots one or more variables on the y-axis against time on the
  x-axis. For example, you could compare the send and receive rates over a
  period of time. The previous section showed how to display this kind of
  graph using HTML.

- `series`: plots how changing a variable affects the results of the
  benchmark. For example, what's the difference in speed between sending
  small, medium and large messages? This type of graph can show you that.

Here's an HTML example of a `series` graph:

```html
<div class="chart"
     data-type="series"
     data-scenario="message-sizes-and-producers"
     data-x-key="producerCount"
     data-x-axis="producers"
     data-y-axis="rate (msg/s)"
     data-plot-key="send-msg-rate"
     data-series-key="minMsgSize"></div>
```

- `x-y`: we can use this one to compare, for example, how message size
  affects the message rate per second. Refer to the second blog post for an
  example of this kind of graph.

![1 -> 1 sending rate message sizes](./images/1_1_sending_rates_msg_sizes.png)

Here's how to represent an `x-y` graph in HTML:

```html
<div class="chart"
     data-type="x-y"
     data-scenario="message-sizes-large"
     data-x-key="minMsgSize"
     data-plot-keys="send-msg-rate send-bytes-rate"
     data-x-axis="message size (bytes)"
     data-y-axis="rate (msg/s)"
     data-y-axis2="rate (bytes/s)"
     data-legend="ne"></div>
```

- `r-l`: helps us compare the sending rate of messages vs. latency. See the
  scenario "1 -> 1 sending rate attempted vs latency" from the first blog
  post for an example:

![1 -> 1 sending rate attempted vs latency](./images/1_1_sending_rates_latency.png)

Here's how to draw an `r-l` graph with HTML:

```html
<div class="chart"
     data-type="r-l"
     data-x-axis="rate attempted (msg/s)"
     data-y-axis="rate (msg/s)"
     data-scenario="rate-vs-latency"></div>
```

To see how all these benchmark specifications can be put together, take a
look at the `various-spec.js` file in the HTML examples directory. The
`various-result.js` file in the same directory contains the results of the
benchmark process run on a particular computer, and `various.html` shows how
to display those results in an HTML page.

## Supported HTML attributes ##

We can use several HTML attributes to tell the library how to draw the
chart. Here's the list of the ones we support:

- `data-file`: specifies the file from which to load the benchmark results,
  for example `data-file="results-mini-2.7.1.js"`. This file will be loaded
  via AJAX. If you are loading the results on a local machine, you might need
  to serve this file via HTTP, since certain browsers refuse to perform the
  AJAX call otherwise.

- `data-scenario`: a results file can contain several scenarios. This
  attribute specifies which one to display in the graph.

- `data-type`: the type of graph, as explained above in "Types of graphs".

- `data-mode`: tells the library where to get the message rate from. Possible
  values are `send` or `recv`. If no value is specified, the send and receive
  rates are combined.

- `data-latency`: if we are creating a chart to display latency, setting
  `data-latency` to `true` plots the average latency alongside the _send msg
  rate_ and _receive msg rate_.

- `data-x-axis`, `data-y-axis`, `data-y-axis2`: specify the labels of the `x`
  and `y` axes.

- `data-series-key`: specifies which JSON key to pick the series data from.
  For example: `data-series-key="minMsgSize"`.

- `data-x-key`: same as the previous attribute, but for the x-axis.
  Example: `data-x-key="minMsgSize"`.
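Putting several of these attributes together — using only attribute names listed above, with the file and scenario names taken from the earlier examples — a chart that loads its own results file and plots only the receive rate might look like this:

```html
<div class="chart"
     data-file="results-mini-2.7.1.js"
     data-scenario="no-ack-long"
     data-type="time"
     data-mode="recv"
     data-x-axis="time (s)"
     data-y-axis="rate (msg/s)"></div>
```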

## Boilerplate HTML ##

The file `./html/examples/sample.html` shows a full HTML page used to display
some results. You should include the following JavaScript files:

```html
<!--[if lte IE 8]><script language="javascript" type="text/javascript" src="../lib/excanvas.min.js"></script><![endif]-->
<script language="javascript" type="text/javascript" src="../lib/jquery.min.js"></script>
<script language="javascript" type="text/javascript" src="../lib/jquery.flot.min.js"></script>
<script language="javascript" type="text/javascript" src="../perf.js"></script>
```

Our `perf.js` library depends on the _jQuery_ and _jQuery Flot_ libraries for
drawing graphs, and on the _excanvas_ library for supporting older browsers.

Once we have loaded the libraries, we can initialize our page with the
following JavaScript:

```html
<script language="javascript" type="text/javascript">
$(document).ready(function() {
    $.ajax({
        url: 'publish-consume-result.js',
        success: function(data) {
            render_graphs(JSON.parse(data));
        },
        // jQuery's $.ajax failure option is named 'error', not 'fail'
        error: function() { alert('error loading publish-consume-result.js'); }
    });
});
</script>
```

Here we load the file with the benchmark results and pass the parsed data to
our `render_graphs` function, which takes care of the rest, provided we have
defined the various `div`s where our graphs are going to be drawn.
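Putting the pieces from this section together, a minimal complete page might look like the sketch below (paths relative to `html/examples`, as in `sample.html`; the IE conditional include is omitted for brevity):

```html
<html>
  <head>
    <script language="javascript" type="text/javascript" src="../lib/jquery.min.js"></script>
    <script language="javascript" type="text/javascript" src="../lib/jquery.flot.min.js"></script>
    <script language="javascript" type="text/javascript" src="../perf.js"></script>
    <script language="javascript" type="text/javascript">
    $(document).ready(function() {
        $.ajax({
            url: 'publish-consume-result.js',
            success: function(data) { render_graphs(JSON.parse(data)); },
            error: function() { alert('error loading publish-consume-result.js'); }
        });
    });
    </script>
  </head>
  <body>
    <!-- the div where the 'consume' scenario graph will be drawn -->
    <div class="chart"
         data-type="time"
         data-x-axis="time (s)"
         data-y-axis="rate (msg/s)"
         data-scenario="consume"></div>
  </body>
</html>
```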

## Writing benchmark specifications ##

Benchmark specifications are written in JSON format. We can define an array
containing one or more benchmark scenarios to run. For example:

```javascript
[ {'name': 'no-ack-long', 'type': 'simple', 'interval': 10000,
   'params': [{'time-limit': 500}]},

  {'name': 'headline-publish', 'type': 'simple', 'params':
    [{'time-limit': 30, 'producer-count': 10, 'consumer-count': 0}]}]
```

This JSON object specifies two scenarios, `'no-ack-long'` and
`'headline-publish'`, both of type `simple`, and sets parameters such as
`producer-count` for the benchmarks.

There are three kinds of benchmark scenario:

- `simple`: runs a basic benchmark based on the parameters in the spec, as
  seen in the example above.
- `rate-vs-latency`: compares message rate with latency.
- `varying`: varies some variables during the benchmark, for example message
  size, as shown in the following scenario snippet:

```javascript
{'name': 'message-sizes-small', 'type': 'varying',
 'params': [{'time-limit': 30}], 'variables': [{'name':
 'min-msg-size', 'values': [0, 100, 200, 500, 1000, 2000, 5000]}]},
```

Note that `min-msg-size` gets converted to `minMsgSize`.
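That renaming follows the usual dash-to-camelCase convention; an illustrative sketch (not the tool's actual code):

```javascript
// Sketch of the renaming applied to parameter names: each '-x' becomes
// an uppercase 'X', so 'min-msg-size' becomes 'minMsgSize'.
function toCamelCase(name) {
  return name.replace(/-([a-z])/g, (match, letter) => letter.toUpperCase());
}

console.log(toCamelCase('min-msg-size')); // → minMsgSize
```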

You can also set the AMQP URI (see the
[URI spec](https://www.rabbitmq.com/uri-spec.html)); it defaults to
`"amqp://localhost"`. For example:

```javascript
[{'name': 'consume', 'type': 'simple', 'uri': 'amqp://rabbitmq_uri',
  'params': [{'time-limit': 30, 'producer-count': 4, 'consumer-count': 2}]}]
```

### Supported scenario parameters ###

The following parameters can be specified for a scenario:

- `exchange-type`: exchange type to be used during the benchmark. Defaults
  to `'direct'`.
- `exchange-name`: exchange name to be used during the benchmark. Defaults
  to whatever `exchangeType` was set to.
- `queue-name`: queue name to be used during the benchmark. Defaults to an
  empty name, letting RabbitMQ provide a random one.
- `routing-key`: routing key to be used during the benchmark. Defaults to an
  empty routing key.
- `random-routing-key`: allows the publisher to send a different routing key
  per published message. Useful when testing exchanges like the consistent
  hashing one. Defaults to `false`.
- `producer-rate-limit`: limits the number of messages a producer will
  publish per second. Defaults to `0.0f`.
- `consumer-rate-limit`: limits the number of messages a consumer will
  consume per second. Defaults to `0.0f`.
- `producer-count`: number of producers to run for the benchmark. Defaults
  to `1`.
- `consumer-count`: number of consumers to run for the benchmark. Defaults
  to `1`.
- `producer-tx-size`: number of messages to send before committing the
  transaction. Defaults to `0`, i.e. no transactions.
- `consumer-tx-size`: number of messages to consume before committing the
  transaction. Defaults to `0`, i.e. no transactions.
- `confirm`: specifies whether to wait for publisher confirms during the
  benchmark. Defaults to `-1`; any number >= 0 makes the benchmark use
  confirms.
- `auto-ack`: specifies whether the benchmark should auto-ack messages.
  Defaults to `false`.
- `multi-ack-every`: specifies whether to send a multi-ack every X seconds.
  Defaults to `0`.
- `channel-prefetch`: sets the per-channel prefetch. Defaults to `0`.
- `consumer-prefetch`: sets the per-consumer prefetch. Defaults to `0`.
- `min-msg-size`: the size in bytes of the messages to be published.
  Defaults to `0`.
- `time-limit`: specifies how long the benchmark should run. Defaults
  to `0`.
- `producer-msg-count`: number of messages to be published by the producers.
  Defaults to `0`.
- `consumer-msg-count`: number of messages to be consumed by the consumers.
  Defaults to `0`.
- `msg-count`: single flag that sets the previous two counts to the same
  value.
- `flags`: flags to pass to the producer, like `"mandatory"` or
  `"persistent"`. Defaults to an empty list.
- `predeclared`: tells the benchmark tool that the exchange/queue name
  provided already exists in the broker. Defaults to `false`.
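As an illustration, a scenario combining several of these parameters might look like the snippet below. The parameter names all come from the list above; the values are chosen arbitrarily for the example:

```javascript
[{'name': 'confirm-persistent', 'type': 'simple',
  'params': [{'time-limit': 30,
              'producer-count': 2,
              'consumer-count': 2,
              'min-msg-size': 1000,
              'confirm': 100,
              'consumer-prefetch': 50,
              'flags': ['persistent']}]}]
```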

## Starting a web server to display the results ##

Some browsers may need the results to be served over HTTP (loading them from
a `file://` URL won't work).

From the `html` directory, you can start a web server with Python 2:

    $ python -m SimpleHTTPServer

or, on Python 3, with the equivalent module:

    $ python3 -m http.server

As an alternative, from the root directory of the binary distribution, you
can launch a Java-based web server:

    $ bin/runjava com.rabbitmq.examples.WebServer

The latter command starts a web server listening on port 8080, with the
`html` directory as its base directory. You can then see the included sample
at http://localhost:8080/examples/sample.html. To change these defaults:

    $ bin/runjava com.rabbitmq.examples.WebServer ./other-base-dir 9090

Finally, if you want a quick preview of your results (same layout as the
first 'consume' scenario above), make sure the scenario name in the result
file is 'benchmark' and launch the following command:

    $ bin/runjava com.rabbitmq.examples.BenchmarkResults my-result-file.js

This command starts a web server on port 8080 and opens a browser window to
display the results.