When you take a new mechanical disk fresh out of the package and slot it into your NAS, do you ask yourself whether the data on it will be safe? I personally enjoy knowing about problems with my storage hardware before it contains any important data, and so do a bunch of folks who run machines with lots of drives in them!
There's a pretty decent procedure that I'd been using to burn in my HDDs, taken from this forum thread; it was fine in 2014, but these days HDDs in excess of 18TiB exist, and on those, badblocks runs into a limitation of its block-offset representation. (Besides, badblocks is really slow.)
Hence, this tool: disk-spinner.
It destructively writes blocks of random data to an entire disk device (or, optionally, just a partition; but you'll probably want the whole drive), then verifies that the data matches what's been written.
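In outline, that write-then-verify pass looks something like this. This is a minimal Python sketch, not the tool's actual Rust code: it assumes a block-aligned device size, and it uses Python's seeded `random` generator where the real tool uses an AES-based keystream. The point is that seeding identically for both passes means nothing written has to be kept in memory for verification.

```python
import random

BLOCK_SIZE = 1 << 20  # 1 MiB per I/O; the real tool's sizing may differ


def burn_in(path: str, total_bytes: int, seed: int) -> list[int]:
    """Destructively write pseudo-random blocks to `path`, then verify.

    Returns the byte offsets of blocks that did not read back as written.
    """
    # Write pass: fill the device with deterministic pseudo-random data.
    rng = random.Random(seed)
    with open(path, "r+b") as dev:
        for _ in range(0, total_bytes, BLOCK_SIZE):
            dev.write(rng.randbytes(BLOCK_SIZE))

    # Verify pass: re-seed and regenerate the exact same stream to compare.
    rng = random.Random(seed)
    bad = []
    with open(path, "rb") as dev:
        for offset in range(0, total_bytes, BLOCK_SIZE):
            if dev.read(BLOCK_SIZE) != rng.randbytes(BLOCK_SIZE):
                bad.append(offset)
    return bad
```

On a healthy device (or, here, an ordinary file), the returned list is empty; any offsets it does contain are blocks where the disk silently mangled your data.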
Using Rayon, it creates as many testing threads as your system has cores. If you'd like to override this for whatever reason, set the RAYON_NUM_THREADS environment variable, which takes precedence over that check.
If any data could not be read exactly as written, it informs you in big letters. That means your disk is bad & you should make use of your vendor's RMA policy. Doesn't it feel great to not run into problems?
This tool is made mainly for Linux, but should work on many POSIX-ish systems; I've tested it on macOS, where it does the thing too.
The Linux platform is privileged a bit, not only because of my own platform usage (my NAS runs zfs on linux and all that), but also in safety checking (we use udev to determine various things about the devices under test and error out if they look used or non-mechanical) and in the terminal UI (I am confident only in Linux's ability to return accurate sizes for the block device being tested).
This tool is for spinning disks; the name is also a play on the German word "Spinner" (a goofball), referring to me, a person goofy about disks.
First of all, the 18TiB issue. I'd been using zfs (write random data, zpool scrub) to validate the data in the meantime; but unfortunately, that reserves a good chunk of space on the volume which then remains untested. So, here we are.
I also wanted something that does a bunch of error-checking before it'll let me super-destructively overwrite data on a drive: this tool checks that it's running against a bare disk drive with a rotational medium.
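On Linux, the rotational check ultimately comes down to a flag the kernel exposes in sysfs. The tool itself goes through udev, but the underlying data is the same; in this sketch, `sysfs_root` is parameterized purely to make the example testable without a real block device:

```python
from pathlib import Path


def is_rotational(device: str, sysfs_root: str = "/sys/block") -> bool:
    """True if the kernel flags `device` (e.g. "sda") as a spinning disk.

    The kernel writes "1" into queue/rotational for rotational media
    and "0" for SSDs and other non-spinning devices.
    """
    flag = Path(sysfs_root) / device / "queue" / "rotational"
    return flag.read_text().strip() == "1"
```

A tool like this would refuse to proceed when the flag reads "0", since there's no point burn-in testing an SSD this way (and plenty of point in not shredding one by accident).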
Also, performance is not bad at all: the random data being written is generated by encrypting zeroes with AES-128-CTR, using a key derived from a seed (--seed, or autogenerated). That achieves a speed of about 660MiB/s when writing to my SSD.
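The nice property of CTR mode here is that the keystream is seekable: block i of the output is just the cipher applied to counter i, so the verify pass can regenerate the data at any offset from the seed alone. Here's a sketch of that counter construction with SHA-256 standing in for AES-128 — an assumption made purely to keep the example dependency-free; the real tool uses actual AES-128-CTR:

```python
import hashlib

DIGEST = 32  # bytes per SHA-256 output in this stand-in construction


def keystream(seed: bytes, offset: int, n: int) -> bytes:
    """Return `n` keystream bytes starting at `offset`.

    `offset` must be a multiple of DIGEST. As in CTR mode, any region
    can be regenerated independently: the counter is derived from the
    offset, not from preceding output.
    """
    assert offset % DIGEST == 0
    out = bytearray()
    counter = offset // DIGEST
    while len(out) < n:
        # Stand-in for AES-128 encrypting the counter block under the key.
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])
```

Because the stream is addressable by offset, the same seed reproduces the same bytes anywhere on the disk, which is exactly what a verify pass needs.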
Plus, it can be run in parallel on multiple disks (spawning one thread per drive) and presents compact output. I like compact output.
A similar technique (piping zeroes into a crypto routine) is suggested by the folks on the ArchWiki, but the concrete technique (writing zeroes encrypted via AES-128-CTR with a deterministic key) comes from my pal @rwg, who has done & discarded more super cool hacks than many of us can dream of.