values for memory may require disabling the callback-flooding tests
using the --bootargs parameter discussed below.

Sometimes additional debugging is useful, and in such cases the --kconfig
parameter to kvm.sh may be used, for example,
``--kconfig 'CONFIG_RCU_EQS_DEBUG=y'``.
In addition, there are the --gdb, --kasan, and --kcsan parameters.
Note that --gdb limits you to one scenario per kvm.sh run and requires
that you have another window open from which to run ``gdb`` as instructed
by the script.
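
As a concrete sketch (the scenario name and CPU count here are merely
illustrative choices, not anything the script requires), a debugging run
combining these parameters might look like::

        # Run from the top level of a kernel source tree; TREE07 is one
        # of the default rcutorture scenarios.
        tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 16 \
                --configs TREE07 --kconfig 'CONFIG_RCU_EQS_DEBUG=y' --kcsan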

Kernel boot arguments can also be supplied, for example, to control
rcutorture's module parameters. For example, to test a change to RCU's

require disabling rcutorture's callback-flooding tests::

        --bootargs 'rcutorture.fwd_progress=0'

Sometimes all that is needed is a full set of kernel builds. This is
what the --buildonly parameter does.

The --duration parameter can override the default run time of 30 minutes.
For example, ``--duration 2d`` would run for two days, ``--duration 3h``
would run for three hours, ``--duration 5m`` would run for five minutes,
and ``--duration 45s`` would run for 45 seconds. This last can be useful
for tracking down rare boot-time failures.
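
The suffix arithmetic described above can be sketched by a small helper
function; this helper is purely illustrative and is not part of kvm.sh
(in particular, it simply passes a bare number through unchanged)::

        # Illustrative only: convert the d/h/m/s duration suffixes
        # described above into seconds.
        duration_to_seconds () {
                local num=${1%[dhms]}
                case $1 in
                *d) echo $((num * 86400)) ;;
                *h) echo $((num * 3600)) ;;
                *m) echo $((num * 60)) ;;
                *)  echo "$num" ;;
                esac
        }

        duration_to_seconds 2d   # 172800
        duration_to_seconds 45s  # 45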

Finally, the --trust-make parameter allows each kernel build to reuse what
it can from the previous kernel build. Please note that without the
--trust-make parameter, your tags files may be demolished.
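
As a sketch of how these two parameters might be combined in practice
(the workflow itself is illustrative, not prescribed by the scripts)::

        # Build all default scenarios without running them...
        tools/testing/selftests/rcutorture/bin/kvm.sh --buildonly
        # ...then let a later full run reuse as much of that build
        # work as possible.
        tools/testing/selftests/rcutorture/bin/kvm.sh --trust-make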

There are additional more arcane arguments that are documented in the
source code of the kvm.sh script.

the following summary at the end of the run on a 12-CPU system::

        TREE07 ------- 167347 GPs (30.9902/s) [rcu: g1079021 f0x0 ] n_max_cbs: 478732
        CPU count limited from 16 to 12
        TREE09 ------- 752238 GPs (139.303/s) [rcu: g13075057 f0x0 ] n_max_cbs: 99011


Repeated Runs
=============

Suppose that you are chasing down a rare boot-time failure. Although you
could use kvm.sh, doing so will rebuild the kernel on each run. If you
need (say) 1,000 runs to have confidence that you have fixed the bug,
these pointless rebuilds can become extremely annoying.

This is why kvm-again.sh exists.

Suppose that a previous kvm.sh run left its output in this directory::

        tools/testing/selftests/rcutorture/res/2022.11.03-11.26.28

Then this run can be re-run without rebuilding as follows::

        kvm-again.sh tools/testing/selftests/rcutorture/res/2022.11.03-11.26.28

A few of the original run's kvm.sh parameters may be overridden, perhaps
most notably --duration and --bootargs. For example::

        kvm-again.sh tools/testing/selftests/rcutorture/res/2022.11.03-11.26.28 \
                --duration 45s

would re-run the previous test, but for only 45 seconds, thus facilitating
tracking down the aforementioned rare boot-time failure.
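
The 1,000-run hunt mentioned above can then be scripted around
kvm-again.sh. This is only a sketch, and it assumes that kvm-again.sh
returns a nonzero exit status when a run fails::

        # Stop at the first failing run, so that its results directory
        # is the most recent one under the res/ directory.
        for i in $(seq 1000)
        do
                kvm-again.sh \
                        tools/testing/selftests/rcutorture/res/2022.11.03-11.26.28 \
                        --duration 45s || break
        done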


Distributed Runs
================

Although kvm.sh is quite useful, its testing is confined to a single
system. It is not all that hard to use your favorite framework to cause
(say) 5 instances of kvm.sh to run on your 5 systems, but this will very
likely unnecessarily rebuild kernels. In addition, manually distributing
the desired rcutorture scenarios across the available systems can be
painstaking and error-prone.

And this is why the kvm-remote.sh script exists.

If the following command works::

        ssh system0 date

and if it also works for system1, system2, system3, system4, and system5,
and all of these systems have 64 CPUs, you can type::

        kvm-remote.sh "system0 system1 system2 system3 system4 system5" \
                --cpus 64 --duration 8h --configs "5*CFLIST"

This will build each default scenario's kernel on the local system, then
spread each of five instances of each scenario over the systems listed,
running each scenario for eight hours. At the end of the runs, the
results will be gathered, recorded, and printed. Most of the parameters
that kvm.sh will accept can be passed to kvm-remote.sh, but the list of
systems must come first.

The kvm.sh ``--dryrun scenarios`` argument is useful for working out
how many scenarios may be run in one batch across a group of systems.
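
For example, the following prints the planned scenario batches rather
than running anything (CPU count and scenario list are illustrative)::

        tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 64 \
                --configs "5*CFLIST" --dryrun scenarios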

You can also re-run a previous remote run in a manner similar to kvm.sh::

        kvm-remote.sh "system0 system1 system2 system3 system4 system5" \
                tools/testing/selftests/rcutorture/res/2022.11.03-11.26.28-remote \
                --duration 24h

In this case, most of the kvm-again.sh parameters may be supplied following
the pathname of the old run-results directory.