The stack of drives-to-be-scrubbed disgorged a pair of SATA drives, so I plugged one of them into an internal SATA port and unleashed dd
on it:
time sudo dd if=/dev/urandom of=/dev/sdb bs=4096 count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 4.19793 s, 9.8 MB/s

real    0m4.208s
user    0m0.004s
sys     0m2.880s

time sudo dd if=/dev/urandom of=/dev/sdb bs=1024 count=40000
40000+0 records in
40000+0 records out
40960000 bytes (41 MB) copied, 7.38392 s, 5.5 MB/s

real    0m7.394s
user    0m0.004s
sys     0m3.505s

time sudo dd if=/dev/urandom of=/dev/sdb bs=16384 count=2500
2500+0 records in
2500+0 records out
40960000 bytes (41 MB) copied, 4.2042 s, 9.7 MB/s

real    0m4.214s
user    0m0.000s
sys     0m2.880s
The timing over a few (tens of) thousand blocks comes out somewhat below the long-term average, which settled at the same 12 to 14 MB/s the USB 2.0 adapter produced. It just doesn’t get any better than that, mostly due to rotational delays.
In round numbers, that’s 40 to 50 GB/h, so it’s best to start scrubbing those terabyte drives early in the day…
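A quick sanity check on that estimate, assuming the 12 MB/s sustained rate holds over the whole drive:

echo $(( 1000000000000 / 12000000 / 3600 )) hours
23 hours

Call it a day per terabyte, give or take.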
I started to say that you should be getting more than that for sequential writes, but then noticed the small block size. Is there some reason not to use 64K or even 1M blocks? With decent-size blocks even my long-abandoned 300 GB Seagate Barracuda 7200.8 SATA drives could sustain reads from about 65 MB/s down to 30 MB/s, and writes were probably only slightly slower. Four of those puppies in a HW RAID-0 setup were my workhorse for some years… reckless, I know, but back then Seagate made drives that didn’t consistently fail. Nowadays I wouldn’t trust any drive with less than one parity drive, and even that’s probably pushing it.
I remember the days when the maximum block size was 63k (64512 bytes).
I vaguely recall trying huge block sizes a while back and getting very little improvement above a few kB, but I should try that again, just to see what happens. It may be the difference between read and write: the spec sheets seem to cleverly hide the sustained write speed behind the maximum read speed.
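If I do rerun that test, the sweep would look something like this sketch, with sdX standing in for whatever drive lands on the bench and the transfer rounded to 40 MiB so it divides evenly:

for bs in 4096 65536 1048576 ; do
  echo "== bs=$bs =="
  sudo dd if=/dev/urandom of=/dev/sdX bs=$bs count=$(( 41943040 / bs ))
done

Each pass writes the same amount of data, so dd’s own rate report tells the story.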
Older drives, like the WD 80 GB Caviar drives, had 63 sectors (32,256 bytes) per track. Other brands/models of that vintage had similar track capacities; I seem to recall Hitachi used 45 KB per track. If you write anything less than a full track at a time, you probably incur multiple rotational delays per track, and that lowers the performance.
On some disk measurements I was involved in with newer, higher capacity drives, block sizes of 64KB to 1MB were necessary to max out the performance.
Oh yeah, that does ring a bell. Originally hard discs had the same number of sectors per track for every track. You’d have to tell the operating system the number of heads, cylinders, and sectors it had, and there was a limit of 63 sectors per track (6-bit field, maybe?). Later drives had more sectors in the (larger) outer cylinders, but computers still believed in the H/C/S model, so you’d tell it the drive had 255 heads and other creative things to claim enough capacity.
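For the record, the arithmetic behind those creative geometry numbers: the old CHS interface topped out at 1024 cylinders × 255 heads × 63 sectors of 512 bytes apiece:

echo $(( 1024 * 255 * 63 * 512 )) bytes
8422686720 bytes

Which is where the infamous 8.4 GB limit came from.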
You might be bottlenecking on urandom.
Testing on a handy Linux system, “dd if=/dev/urandom of=/dev/zero bs=4096 count=1000” gets about 5.9 MB/s and “dd if=/dev/sda of=/dev/zero bs=4096 count=1000” (with a cold cache) gets about 58 MB/s. (The system is a Dell Inspiron 531s with a 2300 MHz AMD Sempron, running CentOS 6.7.)
As another datapoint, some time ago I benchmarked a 40 GB Seagate SATA drive with “iozone -r 4k -s 10g” and got about 40 MB/s sequential write speeds.
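To separate the two effects on your drive, something along these lines would do, with sdX standing in for the drive under test:

dd if=/dev/urandom of=/dev/null bs=65536 count=1000
sudo dd if=/dev/zero of=/dev/sdX bs=65536 count=10000

The first line shows how fast urandom can go with no disk in the loop; the second shows how fast the drive accepts data from a source that costs nothing to read (zeros are fine for timing, though not for actual scrubbing).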
Bingo!
I’ll add some measurements to the most recent post to make your observation more conspicuous.
Guess I don’t have to learn anything else that’s new today: got that done early, thanks to you!