
USB-to-SATA Drive Adapter Performance

The discussion about scrubbing hard drives suggested I really should be using larger block sizes to wring better performance from the hardware.

So I ran variations on this theme:

time sudo dd if=/dev/urandom of=/dev/sdc bs=4K count=32K

For the bs (“block size”) parameter, 1K = 1024 bytes and 1KB = 1000 bytes; similarly for 1M vs. 1MB.
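A sketch of the sweep, keeping the total transfer at 128 MiB so every timed run moves the same data. Writing to /dev/null makes it safe to run as-is; point of= at the actual drive (the /dev/sdX name is a placeholder) for a real test:

```shell
#!/bin/sh
# Sweep dd block sizes while holding the total transfer constant.
# Substitute of=/dev/sdX (real target drive) for an actual benchmark.
total=$((128 * 1024 * 1024))             # 128 MiB = 134217728 bytes
for bs in 4096 65536 1048576; do         # 4K, 64K, 1M
    count=$((total / bs))                # same total bytes per run
    echo "bs=${bs} count=${count}"
    dd if=/dev/zero of=/dev/null bs=${bs} count=${count} 2>&1 | tail -n 1
done
```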

The results, viewed as a picture because WordPress seems unable to import a formatted spreadsheet from LibreOffice like it used to:

[Figure: USB-SATA Adapter – Barracuda 7200.10 drive]

Each operation transfers 128 MiB (128 × 2^20 = 134 × 10^6 bytes). The variations probably come from other stuff going on, most notably the USB-to-serial adapter driving the plotter while I’m testing a tweak to the Superformula demo code.

Reads go ever so much faster than writes, so the USB adapter definitely isn’t getting in the way; I assume the drive accepts the commands & data as fast as its little heads can carry them away. The data, being relentlessly pseudo-random, won’t get compressed along the way.

So, in round numbers, the block size makes absolutely no difference.

Update: Based on an early comment from Edward Berner to a previous post, I was looking in the wrong place:

dd if=/dev/urandom of=/dev/zero bs=4K count=32K
32768+0 records in
32768+0 records out
134217728 bytes (134 MB) copied, 9.63064 s, 13.9 MB/s
dd if=/dev/urandom of=test.bin bs=4K count=32K
32768+0 records in
32768+0 records out
134217728 bytes (134 MB) copied, 10.018 s, 13.4 MB/s
dd if=test.bin of=/dev/zero bs=4K count=32K
32768+0 records in
32768+0 records out
134217728 bytes (134 MB) copied, 0.0385358 s, 3.5 GB/s
dd if=test.bin of=test2.bin bs=4K count=32K
32768+0 records in
32768+0 records out
134217728 bytes (134 MB) copied, 0.45044 s, 298 MB/s

I installed an SSD on this box a while ago, so the 3.5 GB/s disk-to-discard speed represents the SSD’s read rate. The 298 MB/s disk-to-disk speed would be its write speed, probably with some clever buffering going on.

So the real bandwidth limitation in wiping a disk comes from the pseudo-random generator behind /dev/urandom, not the disk or USB interface. It would probably be faster to fill a 1 GB (or larger) file with noise at 14 MB/s, then copy it enough times to fill the drive at whatever speed the drive can handle.
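That workaround might look something like this. Everything here is scaled down and pointed at an ordinary file so the sketch is safe to run; a real wipe would use the drive itself (e.g. of=/dev/sdX) and a noise file of 1 GiB or more (bs=1M count=1024):

```shell
#!/bin/sh
# Pay /dev/urandom's ~14 MB/s cost once to build a noise file, then
# replay that file across the target at whatever speed it can take.
NOISE=noise.bin
TARGET=wipe-demo.bin
CHUNK_KB=64                              # noise-file size in KiB (demo scale)

dd if=/dev/urandom of=$NOISE bs=1024 count=$CHUNK_KB 2>/dev/null

# Replay the noise file end to end; seek= advances one chunk per pass so
# each copy lands after the previous one. Against a block device, dd would
# fail at end-of-disk and a while loop could stop there; a fixed pass
# count stands in for that here.
for pass in 0 1 2 3; do
    dd if=$NOISE of=$TARGET bs=1024 seek=$((pass * CHUNK_KB)) conv=notrunc 2>/dev/null
done
wc -c $TARGET                            # 4 passes x 64 KiB = 262144 bytes
```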

Thanks, Edward, for figuring that out!

  1. #1 by przemek klosowski on 2015-11-02 - 13:55

    I really don’t think it matters how many times the disk is overwritten, or what it is overwritten with. If you could read data from around the tracks, the disk people would have used that to increase the capacity of the disk!

    • #2 by Ed on 2015-11-02 - 20:17

      I have no delusions that anything on those disks is important enough to warrant serious measures; blowing away the data once, then rebuilding the file system will suffice.

      The only real way to destroy the data is to shred the entire disk drive into little pieces. I may use the platters for art projects / wind chimes, which should produce the same result.