Terabyte Backup for the Backup

Now that terabyte drives sell for under 100 bucks, there’s no longer any reason to worry about backup space. Just do it, OK?

My file server runs a daily backup (using rsnapshot, about which more later) just around midnight, copying all the changed files to an external 500 GB USB drive. On the first of each month, it sets aside the current daily backup as a monthly set, so I have a month of days and a year of months.
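
For reference, the bones of that setup look something like the sketch below. The paths and retention counts are illustrative guesses rather than my actual configuration, and remember that rsnapshot.conf fields must be separated by tabs, not spaces.

# excerpt from /etc/rsnapshot.conf (tab-separated fields)
snapshot_root	/mnt/backup/snapshots/
interval	daily	31
interval	monthly	12
backup	/home/	localhost/

# /etc/cron.d entries: promote a daily into a monthly on the first, then take the nightly daily
50 23 1 * *	root	/usr/bin/rsnapshot monthly
55 23 * * *	root	/usr/bin/rsnapshot daily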

Roughly once a quarter, I copy the contents of that drive to another drive, empty the file system, and start over again.

Right now there’s about 380 GB of files on the server, but rsnapshot maintains only one copy of each changed file on the backup drive and we typically change only a few tens of megabytes each day, sooo a 500 GB drive doesn’t fill up nearly as fast as you might think.
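
If you're skeptical, du makes the sharing easy to see. The snapshot names below are rsnapshot's defaults and the mount point is my setup, so adjust to taste.

# a single snapshot looks like a full copy of everything...
du -sh /mnt/backup/snapshots/daily.0
# ...but one du pass over the whole tree counts each hard-linked file
# only once, so the grand total is barely bigger
du -sh /mnt/backup/snapshots/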

Our daughter’s doing a science fair project involving ballistics and video recording, so she recently accumulated 36 GB of video files in short order… and re-captured the entire set several times. Of course the external drive filled up, so it’s time for the swap.

Recently I picked up a 1 TB SATA drive and it’s also time to document that process.

You will, of course, have already set up SSH and added your public key to that box, so you can do this from your Comfy Chair rather than huddling in the basement. You’ll also use screen so you can disconnect from the box while letting the session run overnight.
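
If you haven't done that yet, it amounts to something like this; substitute your own user and server names, which here are pure invention.

# push your public key to the server, then log in and start a named screen session
ssh-copy-id ed@fileserver
ssh ed@fileserver
screen -S backup
# ... kick off the long-running commands, then detach with Ctrl-a d and log out ...
# later, from the Comfy Chair, reattach to check on things
ssh ed@fileserver
screen -r backup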

Plug the drive into a SATA-to-USB converter, which will most likely pop up a cheerful dialog asking what you want to do with the FAT/NTFS formatted empty drive. Dismiss that; you’re going to blow it away and use ext2.

Find out which drive it is

dmesg | tail
[25041.488000] sd 8:0:0:0: [sdc] 1953525168 512-byte hardware sectors (1000205 MB)
[25041.488000] sd 8:0:0:0: [sdc] Write Protect is off
[25041.488000] sd 8:0:0:0: [sdc] Mode Sense: 00 38 00 00
[25041.488000] sd 8:0:0:0: [sdc] Assuming drive cache: write through
[25041.488000]  sdc: sdc1

Unmount the drive

sudo umount /dev/sdc1

Create an empty ext2 filesystem

sudo mke2fs -v -m 0 -L Backup1T /dev/sdc1

The -v lets you watch the lengthy proceedings. The -m 0 eliminates the normal 5% of the capacity reserved for root; you won’t be running this as a normal system drive and won’t need emergency capacity for logs and suchlike. The -L gives it a useful name.
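
If you want to double-check the result before trusting a terabyte to it, tune2fs will read the new superblock back; this is just a sanity check, not part of the recipe.

sudo tune2fs -l /dev/sdc1 | grep -E 'volume name|Reserved block count'

The label should come back as Backup1T and the reserved block count as zero.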

Create a mount point and mount the new filesystem

sudo mkdir /mnt/part
sudo mount /dev/sdc1 /mnt/part

You may have to fight off another automounter intervention in there somewhere.

The existing backup drive is in /etc/fstab for easy mounting by the rsync script. Because it’s a USB drive, I used its UUID rather than a /dev name that depends on what else happens to be plugged in at the time. To find that out:

ll /dev/disk/by-uuid/
 ... snippage ...
lrwxrwxrwx 1 root root 10 2009-03-17 09:54 fedcdb1c-ec6e-4edc-be35-22915c82e46a -> ../../sdd1

So then this gibberish belongs in /etc/fstab

UUID=fedcdb1c-ec6e-4edc-be35-22915c82e46a /mnt/backup ext3 defaults,noatime,noauto,rw,nodev,noexec,nosuid 0 0
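
If you’d rather not pick the right drive out of that directory listing, blkid reports the UUID, label, and filesystem type for a single partition; it’s just an alternative way to get the same gibberish.

sudo blkid /dev/sdc1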

Mount the existing backup drive and dump its contents to the new one:

sudo mount /mnt/backup
sudo rsync -aHu --progress --bwlimit=10000 --exclude=".Trash-1**" /mnt/backup/snapshots /mnt/part

You need sudo to get access to files owned by other users.

Rsync is the right hammer for this job; don’t use cp.

The -a preserves all the timestamps & attributes & owners, which is obviously a Good Thing.

The -H preserves hard links, which is what rsnapshot uses to maintain one physical copy in multiple snapshot directories; if you forget this, you’ll get one physical copy for every snapshot directory and run out of space on that 1 TB drive almost immediately.
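
A quick way to convince yourself the links survived the trip is to check the link count on a file that should appear in several snapshots; the path below is made up, and the count should match the number of snapshot directories holding that file.

# %h is the hard-link count; it should be well above 1 for an unchanged file
stat -c '%h %n' /mnt/part/snapshots/daily.0/localhost/home/somefile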

The -u does an update, which is really helpful if you must interrupt the process in midstream. Trust me on this, you will interrupt it from time to time.

You will also want to watch the proceedings, which justifies --progress. There’s a torrent of screen output, but it’s mostly just comfort noise.

The key parameter is --bwlimit=10000, which throttles the transfer down to a reasonable level and leaves some CPU available for normal use. Unthrottled USB-to-USB transfer ticks along at about 15 MB/s on my system, which tends to make normal file serving and printing rather sluggish. Your mileage will vary; I use --bwlimit=5000 during the day.

You probably want to exclude the desktop trash files, which is what the --exclude=".Trash-1**" accomplishes. If your user IDs aren’t in the 1000 range, adjust that accordingly.

How long will it take? Figure 500 GB / 10 MB/s = 50 k seconds = 14 hours.

That’s why you use screen: so you can shut down your Comfy Chair system overnight and get some shut-eye while rsync continues to tick along.

When that’s all done, compare the results to see how many errors happen while shuffling the data around:

sudo diff -q -r --speed-large-files /mnt/backup/snapshots/ /mnt/part/snapshots/ | tee diff.log

The sudo gives diff access to all those files, but the tee just records what it gets into a file owned by me.

You’ll want to do that overnight, as it really hammers the server for CPU and I/O bandwidth. Batch is back!

That was the theory, anyway, but it turned out the new drive consistently disconnected during the diff and then emerged with no filesystem. A definite disappointment after a half-day copy operation and something of a surprise given that diff is a read-only operation.

A considerable amount of fiddling showed that USB-to-USB copies simply didn’t work, even with the drives on different inside-the-PC USB controllers; the failures cropped up after about a third of a terabyte had been copied. So, rather than debugging that mess, I wound up copying directly from the file server’s internal drives, which ran perfectly but also ignored the deep history on the backup drive.
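
For the record, the direct copy is just the same rsync pointed at the live filesystem instead of the old backup drive, something along these lines; the source directory and destination name are placeholders for whatever your server actually holds.

sudo rsync -aHu --progress --bwlimit=10000 /home /mnt/part/current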

But, eh, I’ve been stashing a drive in the safe deposit box for the last few years, so there should be enough history to go around…
