The best and traditional way, which has been around for decades, is an rsync followed by reinstallation of the boot loader. It always works if you know how it's done.
If you need detailed instructions, I can send you that!
Yes, please! Could you either post here to the list, or to me personally?
What is running on the server? You might be able to get away with a dd, to build a duplicate disk. This disk can be directly attached or on another server tunneled through ssh.
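As a rough sketch of the dd approach (device names /dev/sda, /dev/sdb and the host newserver are placeholders, and dd will overwrite the target, so double-check before running anything like this):

```shell
# Clone /dev/sda onto a directly attached disk /dev/sdb.
dd if=/dev/sda of=/dev/sdb bs=4M conv=noerror,sync

# Or stream the image to a disk on another server, tunneled through ssh.
dd if=/dev/sda bs=4M | ssh root@newserver 'dd of=/dev/sdb bs=4M'
```

The bs=4M just keeps the transfer in large blocks; conv=noerror,sync lets dd continue past read errors, padding bad blocks.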
or set up a drbd replica, wait for it to replicate, then stop the replication.
CentOS 5.8 and CentOS 6.2 servers. A duplicate disk is not what I am after as I cannot always replace with exact drives, i.e., same make, model, size, etc.
But thank you anyway…
note that there are a lot of cases where file-by-file, or even sector-by-sector, duplicates are NOT valid if made while the system is online.
for instance, with relational databases such as PostgreSQL or Oracle, you can't just copy their files while the database server is running, as they rely on writes being made in a specific order.
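To illustrate with PostgreSQL: the safe way to copy a running database is to let the database itself produce a consistent dump rather than copying its files (the database name mydb and the paths here are placeholders, not from the thread):

```shell
# pg_dump reads a consistent snapshot even while the server is handling
# writes; a file-level copy of the live data directory would not be.
pg_dump -U postgres -Fc mydb -f /backup/mydb.dump

# On the new server, recreate it from the custom-format dump:
# pg_restore -U postgres -d mydb /backup/mydb.dump
```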
The problem I found with rsync is that it is very slow when there are a lot of small files. Any idea how this can be improved on or is that a fundamental limit?
That requires drbd to be setup in advance doesn’t it? I was trying this approach then ran into that wall. And given the amount of work required to get drbd working on a new setup, it seemed easier to use mdraid to do the same thing.
On 9/7/2012 1:48 a.m., Micky wrote:

> The best and traditional way that has been there for decades is an rsync
> and then reinstallation of boot-loader.

We are using mondorescue (mondoarchive and mondorestore). Works fine and supports many ways of archiving/restoring, LVM etc.

I recommend it. Good both for backups and cloning.

Nick
_______________________________________________
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
Phil Savoie wrote:
The list, if you please. (A link will do.)
One thing that helps is to break it up into separate runs, at least per-filesystem and perhaps some of the larger subdirectories. Depending on the circumstances, you might be able to do an initial run ahead of time when speed doesn’t matter so much, then just before the cutover shut down the services that will be changing files and databases and do a final rsync which will go much faster.
Also, have you looked at clonezilla and ReaR?
I do dump/restores for this sort of thing.
It depends on whether you want everything, and of course, if there's a hardware difference, you need to chroot (assuming you rsync'd to special directories, like /new and /boot/new), do some mounts, chroot in, and rebuild the initrd.img.
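A hedged sketch of those mounts-plus-chroot steps, assuming the new system was rsync'd into /new and the boot disk is /dev/sda (both placeholders):

```shell
# Bind-mount the pseudo-filesystems so tools inside the chroot work.
mount --bind /dev  /new/dev
mount --bind /proc /new/proc
mount --bind /sys  /new/sys
chroot /new /bin/bash

# Inside the chroot, rebuild the initrd for the running kernel:
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)      # CentOS 5
# dracut -f /boot/initramfs-$(uname -r).img $(uname -r)   # CentOS 6

# And reinstall the boot loader on the new disk.
grub-install /dev/sda
```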
I did try this, but the time taken is pretty similar; the main delay is the part where rsync goes through all the files and spends a few hours trying to figure out what needs to be updated on the second run, after I shut down the services. In hindsight, I might have been able to speed things up considerably if I had generated a file list based on last-modified time and passed it to rsync via the exclude/include parameters.
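That mtime-based idea could be sketched like this, using a timestamp file as the marker and rsync's --files-from rather than exclude/include (the paths and host are hypothetical):

```shell
# Right after the first bulk pass, drop a timestamp marker.
touch /tmp/first-pass-done

# ... later, at cutover: list only files changed since the marker ...
cd /home
find . -newer /tmp/first-pass-done -type f > /tmp/changed.list

# ... and hand rsync just that list instead of letting it rescan everything.
rsync -a --files-from=/tmp/changed.list /home/ root@newserver:/home/
```

Note this catches modified and new files but not deletions, so a periodic full rsync --delete pass is still worthwhile.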
Yes, but due to time constraints, I figured it was safer to go with something simpler that I didn't have to learn as I went, that could be done live without needing extra hardware on site, and that would work at any site without extra software, too.
Thanks for this, I didn’t know there was such a command until now!
But it looks like it should work for me, since the bulk of the data is usually in /home, which is usually a separate fs/mount. I can always resize the fs after the transfer, so I'll give this a try the next time I need to do a dup/migrate.
dump should not be used on mounted file systems, except / in single-user mode.
restore can restore to any size file system of the same type (ext3, ext4) that's large enough to hold the files dumped.
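A minimal sketch of that dump/restore cycle (device names and mount points are placeholders; per the caveat above, the source filesystem should be unmounted, or a snapshot, when dumped):

```shell
# Level-0 (full) dump of the unmounted filesystem on /dev/sda3 to a file.
dump -0f /backup/home.dump /dev/sda3

# On the target: make a filesystem of any size that fits the data,
# mount it, and restore into it; it need not match the source disk.
mkfs -t ext4 /dev/sdb1
mount /dev/sdb1 /mnt/newhome
cd /mnt/newhome && restore -rf /backup/home.dump
```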
Aha, thanks for the warning!
If you're using LVM, however, you can take a file system snapshot and dump the snapshot, as this is a point-in-time replica of the file system.
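A sketch of the snapshot route, assuming /home lives on an LV named home in a VG named vg0 (all names are placeholders):

```shell
# Create a snapshot with 5G of copy-on-write room for changes.
lvcreate -L 5G -s -n home_snap /dev/vg0/home

# Dump the frozen point-in-time copy while the real fs stays mounted.
dump -0f /backup/home.dump /dev/vg0/home_snap

# Drop the snapshot when done so it stops consuming space.
lvremove -f /dev/vg0/home_snap
```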
Unfortunately I wasn’t.
It does seem that essentially all the better methods that minimize downtime require the system to be prepped when first installed, be it with LVM, MD, or DRBD.
So going forward, I'm basically making it a point to use an MD mirror on all new installs, including VMs that would otherwise not run RAID 1 virtually, since the physical storage is already RAIDed.
The assumption is that I should be able to just add an iSCSI target as a member of the degraded RAID mirror, wait for it to sync, then shut down and start the new server within minutes, as opposed to waiting a couple of hours for rsync or other forms of imaging/dump to back up the current state.
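Sketched out, under the assumption that the iSCSI target shows up as /dev/sdc and the mirror is /dev/md0 (the IQN, portal address, and device names are all placeholders):

```shell
# Log in to the iSCSI target; it appears as a new block device.
iscsiadm -m node -T iqn.2012-07.com.example:md-member -p 192.168.1.50 --login

# Add it as a member of the degraded mirror and watch the resync.
mdadm --manage /dev/md0 --add /dev/sdc1
cat /proc/mdstat

# Once synced, fail and remove the member so the new server can
# assemble its array from that disk.
mdadm --manage /dev/md0 --fail /dev/sdc1
mdadm --manage /dev/md0 --remove /dev/sdc1
```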
The added benefit, it would seem, is that I could use the same approach to back up the entire fs.
Hours? This should happen in the time it takes to transfer a directory listing and read through it, unless you used --ignore-times in the arguments. If you have many millions of files, or not enough RAM to hold the list, I suppose it could take hours.
ReaR 'might' be quick and easy. It is intended to be almost unattended and do everything for you. As for extra software, it is a 'yum install' from EPEL. The downside is that if it doesn't work, it isn't very well documented to help figure out how to fix it. I'd still recommend looking at it as a backup/restore solution with an option to clone. With a minimum amount of fiddling you can get it to generate a boot iso image that will re-create the source filesystem layout and bring up the network. Then, if you didn't want to let it handle the backup/restore part, you could manually rsync to it from the live system.
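The "couple of lines of config" amounts to something like this minimal sketch of /etc/rear/local.conf (the NFS server and export path are placeholders):

```shell
# /etc/rear/local.conf
OUTPUT=ISO                                  # produce a bootable rescue ISO
BACKUP=NETFS                                # built-in tar backup method
BACKUP_URL=nfs://backupserver/export/rear   # where the backup lands
```

With that in place, `rear mkbackup` builds the rescue ISO and writes the tar backup to the share, while `rear mkrescue` builds just the ISO if you intend to rsync the data over yourself.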
Definitely not that many files; more in the range of tens of thousands. But it definitely took more than an hour or two, with small bursts of network traffic.
I’ll look into it when I need to do this again. It just isn’t something I expect to do with any regularity and unfortunately server admin isn’t what directly goes into my salary so it has to take a second priority.
Perhaps you have some very large files with small changes then (mailboxes, logfiles, db's, etc.). In that case the receiving rsync spends a lot of time copying the previous version of the file in addition to merging the changed bits.
ReaR's (Relax-and-Recover) real purpose is to be a fully automatic restore to the existing hardware after replacing disks, etc., something that is relatively hard to do with complex filesystem layouts (lvm, raid, etc.) and something armchair sysadmins are likely to need when they least expect it. It does that function pretty well with a couple of lines of config setup (point to an NFS share to hold the backup) for anything where live tar backups are likely to work. The whole point of the tool is that you don't need to know what it is doing, and pretty much anyone could do the restore on bare metal.

Using it to clone or to move to a modified layout is sort of an afterthought at this point, but it is still not unreasonable: it is just a bunch of shell scripts wrapping the native tools from the system, but you have to figure out the content of the files where it stores the layout to build.