How to clone a disk

Newsgroups: comp.os.linux.networking, comp.os.linux.misc
Date: 2001-08-26 03:09:36 PST
> >  [. . .] How can I copy the disc 1:1 to the ohter computer. they
>if all the harddisks are the same kind (i.e. 6.5gb Seagate), attach
>three disks beside the original (let's say that original is /dev/hda
>and others are hdb, hdc and hdd. Now copy with device dump (dd).
>dd if=/dev/hda of=/dev/hdb bs=10000 &
>dd if=/dev/hda of=/dev/hdc bs=10000 &
>dd if=/dev/hda of=/dev/hdd bs=10000 &

This sounds nice, but it will not route around any bad sectors that might be on the new drives. (If you are sure that the new drives have no bad sectors, you might be okay.) Also, for efficiency, the "bs=" parameter should be set to a power of 2, or at least to a multiple of 512 (the sector size). Note that this approach also requires you to move all of the hard disks into one computer (which might be a mess) and then move them back, instead of doing the copy over the network like you probably want.
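For reference, a dd copy with a saner block size looks like the following. This is a self-contained sketch: source.img and dest.img are stand-ins for /dev/hda and /dev/hdb (against the real devices you would run just the middle command, as root), and the first dd merely fabricates a small dummy image so the sketch runs as-is.

```shell
# Create a small dummy "disk" so this sketch is runnable without real devices.
dd if=/dev/zero of=source.img bs=65536 count=4 2>/dev/null

# The actual copy, with a power-of-two block size (64 KiB = 128 * 512).
dd if=source.img of=dest.img bs=65536 2>/dev/null

# Byte-for-byte check that the copy matches the original.
cmp source.img dest.img && echo "copy verified"
```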

First, read the hard disk upgrade instructions at http://www.linuxdoc.org/HOWTO/mini/Hard-Disk-Upgrade/index.html

However, don't start quite yet. You have a few things to keep in mind:

  1. That document recommends using "cp -x" as one of the ways to back up a single file system. However, in some versions of GNU fileutils, the "-x" option to stay within one file system is broken: it just keeps rolling along across file system boundaries. If you are not sure that you have a properly working version of the "cp" command, do not depend upon this option. My workaround was to use "cp -a" on each top-level directory and "cp -dp" on the top-level files that I needed to copy (basically the third method in the above-referenced document), back when I was doing a disk-to-disk copy (different from the tarball copy method that I have been slogging through recently and finally figured out today). At the time (last fall), the latest alpha version of fileutils had this fixed; the fix has presumably been released by now, but many Linux installations are likely to still have the broken version. (For instance, I was using MontaVista Hard Hat Linux 1.1, which is a close derivative of Red Hat 6.1: not the latest, but not exactly ancient either.)
  2. Caution: when readjusting Lilo for the new hard disk, proofread /etc/lilo.conf CAREFULLY. Referencing a non-existent partition or misspelling something in this file can cause failures that are quite an unwanted challenge to diagnose.
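On point 2, for reference, a minimal /etc/lilo.conf looks something like the following. The device names and kernel file here are illustrative only; yours will differ.

```
boot=/dev/hda              # where the boot loader goes (here, the MBR)
map=/boot/map
install=/boot/boot.b
prompt
timeout=50
image=/boot/vmlinuz-2.2.19 # must name a kernel image that actually exists
        label=linux
        root=/dev/hda1     # must name a partition that actually exists
        read-only
```

A typo in "root=" or "image=" is exactly the kind of thing that produces the hard-to-diagnose boot failures I mean. Also remember to re-run /sbin/lilo after editing this file, or the change won't take effect.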

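The "cp -a" workaround from point 1 can be sketched end to end on a toy tree. Here src/ stands in for the root of the original disk and dst/ for the new disk's root; on the real disks you would loop over the top-level directories (skipping /mnt and /proc) instead.

```shell
# Build a toy tree standing in for the original disk.
mkdir -p src/etc src/home dst
echo hello > src/etc/motd
echo top > src/topfile

# Copy each top-level directory with -a (preserves modes, owners, symlinks).
for d in src/*/; do
    cp -a "$d" dst/
done

# Copy the top-level plain files with -dp (preserve links and attributes).
find src -maxdepth 1 -type f -exec cp -dp {} dst/ \;

ls dst
```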
If you deviate from that document and instead use a tarball on intermediate media (for instance, a CD-R or a network file server), as I did (and as it sounds like you want to), it wouldn't hurt to compare the backup tarball against your original disks after creating it, just to make sure that it didn't get corrupted. The same applies to a disk-to-disk copy (especially if you are not yet sure of the quality of your IDE connection; I have found that a bad connection can cause corruption before it causes total failure). My backup command (the version that worked), executed in the root directory of the original disk, was:

tar -zcvpf /mnt/backup_volume/backup_file.tgz -T backuplist.txt

where backuplist.txt was a list of the top-level files and directories to back up (the easiest way to get this is "ls -1 > backuplist.txt", then edit the file to get rid of /mnt and /proc).

The compare command would be:

tar -zdvpf /mnt/backup_volume/backup_file.tgz

I didn't do this on the original drive, because I was trying to save time in my experimentation, but I did do it on the destination drive after unpacking the tarball (see below), because I suspected hard disk corruption on the destination.
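The backup and compare steps can be exercised end to end on a toy tree. A self-contained sketch, where orig/ stands in for the root of the original disk (I drop the -v flag here just to keep the output quiet):

```shell
# Toy tree standing in for the original disk; proc will be excluded.
mkdir -p orig/etc orig/home orig/proc backup
echo hello > orig/etc/motd

# Build the backup list and strip out what we don't want archived.
( cd orig && ls -1 > backuplist.txt &&
  grep -v '^proc$' backuplist.txt > list.tmp && mv list.tmp backuplist.txt )

# Backup, then verify with tar's compare mode (-d): silence means a match.
( cd orig && tar -zcpf ../backup/backup_file.tgz -T backuplist.txt )
( cd orig && tar -zdf ../backup/backup_file.tgz ) && echo "verify ok"
```

Note that the listing picks up backuplist.txt itself, so the list file ends up in the tarball too; that is harmless.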

The restore command was (executed in the root directory of the destination disk):

tar -zxvpf /mnt/backup_volume/backup_file.tgz

On the destination disk, you need to create the /mnt directory tree and /proc manually after doing either a disk-to-disk copy or a restore from a tarball. (The version of the "tar" command from Red Hat 6.1 or later has no problem with the /dev nodes.) After I did all this, everything seemed to work properly, including having the right permissions on the restored /tmp directory, which I noticed came out wrong if the "-p" option was left out of the "tar" commands.
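The restore plus the manual fixups can be sketched the same way. Self-contained again: newdisk/ stands in for the destination disk's root, and the tarball is fabricated on the spot rather than read from a backup volume.

```shell
# Fabricate a small tarball, including a /tmp with its usual 1777 mode.
mkdir -p orig/etc orig/tmp newdisk
chmod 1777 orig/tmp                  # sticky, world-writable, like /tmp
echo hello > orig/etc/motd
( cd orig && tar -zcpf ../backup_file.tgz etc tmp )

# Restore in the destination root; -p preserves tmp's 1777 mode.
( cd newdisk && tar -zxpf ../backup_file.tgz )

# Recreate the directories that were deliberately left out of the backup.
mkdir -p newdisk/proc newdisk/mnt/backup_volume

ls -ld newdisk/tmp                   # mode should still show drwxrwxrwt
```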

One pain in the behind has been having to use 2+1/2 different boot disks for this. Tom's Root Boot Disk is able to mount the remote NFS volume containing the tarball, but has a cheap plastic imitation of the "tar" command that can't handle a tarball, so I had to compile the "tar" and "gzip" commands statically linked (otherwise they wouldn't work with the libraries on Tom's Root Boot Disk) and put them on the remote NFS volume. Tom's Root Boot Disk also does something that makes Lilo abort with an error message like "System map too big", even if I run a statically linked recent version of Lilo from the remote NFS volume.

Therefore, I have to boot off a Red Hat 6.2 or 7.0 installation CD in rescue mode, which is really brain-dead (the 7.0 version being even worse than 6.2): it doesn't create most of the hard disk nodes in /dev, so I have to make them manually, and it seems unable to get proper network access (which is why I have to use Tom's Root Boot Disk for the first part of the restore). I haven't yet had a chance to mess with other distributions' rescue modes (where I work, we are stuck with Red Hat by management decision; be glad Linux is even allowed into the product at all) or DemoLinux, but these might alleviate the problem.

Lucius Chiaraviglio