Most of us don't take the time to plan for disaster recovery. One excuse is not wanting to figure out what to do. One excuse down--this article gives you the step-by-step.
by Charles Curley
Imagine your disk drive has just become a very expensive hockey puck. Imagine you have had a fire, and your computer case now looks like something Salvador Dalí would like to paint. Now what?
Bare metal recovery is the process of rebuilding a computer after a catastrophic failure. This article is a step-by-step tutorial on how to back up a Linux computer to be able to make a bare metal recovery, and how to make that bare metal recovery.
The normal bare metal restoration process is: install the operating system from the product disks, install the backup software (so you can restore your data), and then restore your data. Then, you get to restore functionality by verifying your configuration files, permissions, etc.
The process here will save you installing the operating system from the product disks. It will also restore only the files that were backed up from the production computer, so your configuration will be intact when you restore the system. This should save you hours of verifying configurations and data.
The target computer for this article is a Pentium computer with a Red Hat 5.2 Linux server installation on one IDE hard drive. It does not have vast amounts of data because the computer was set up as a ``sacrificial'' test bed. That is, I did not want to test this process with a production computer and production data. Also, I did a fresh ``server'' install before I started the testing so that I could always reinstall if I needed to revert to a known configuration.
The target computer does not have any other operating systems on it. While this simplifies the exercise at hand, it also means that if you have a dual-boot system, you will have to experiment to restore the non-Linux operating system.
The process shown below is not easy. Practice it before you need it! Do as I did, and practice on a sacrificial computer.
Nota Bene: The sample commands will show, in most cases, what I had to type to recover the target system. You may have to use similar commands, but with different parameters. For example, below we show how to make a swap device on /dev/hda9. It is up to you to be sure you duplicate your setup, and not the test computer's setup.
The basic procedure is set out by W. Curtis Preston in Unix Backup & Recovery ( http://www.ora.com/, http://www.oreilly.com/catalog/unixbr/), which I favorably reviewed in Linux Journal, October 2000. However, the book is a bit thin on specifics. For example, exactly which files do you back up? What metadata do you need to preserve, and how?
We will start with the assumption that you have backed up your system with a typical backup tool such as Amanda, Bru, tar, Arkeia or cpio. The question, then, is how to get from toasted hardware to the point where you can run the restoration tool that will restore your data.
Users of Red Hat Package Manager (RPM)-based Linux distributions should also save RPM metadata as part of their normal backups. Something like:
rpm -Va > /etc/rpmVa.txt
in your backup script will give you a basis for comparison after a bare metal restoration.
To get to this point, you need three things: your normal backups, a rescue floppy such as tomsrtbt and a stage one backup of your system's metadata and key files on a Zip disk. Each of these is covered below.
You will restore in stages as well. In stage one, we build partitions, file systems, etc., and restore a minimal file system from the Zip disk. The goal of stage one is to be able to boot a running computer with a network connection, tape drives, restoration software or whatever we need for stage two.
The second stage, if it is necessary, consists of restoring backup software and any relevant databases. For example, suppose you use Arkeia and build a bare metal recovery Zip disk for your backup server. Arkeia keeps a huge database on the server's hard drives. You can recover the database from the tapes, if you want. Instead, why not tar and gzip the whole Arkeia directory (at /usr/knox), and save that to another computer over nfs? Stage one, as we have defined it, does not include X, so you will have some experimenting to do to back up X as well as your backup program.
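As a rough sketch of that idea, assuming an NFS server named backuphost exporting a directory called /backups (both names are hypothetical; substitute your own), the extra backup step might look something like this:

# backuphost and /backups are stand-ins for your own NFS server and export
mkdir -p /mnt/nfs
mount backuphost:/backups /mnt/nfs
cd /
tar -czf /mnt/nfs/arkeia.tar.gz usr/knox
umount /mnt/nfs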
Of course, if you are using some other backup program, you may have some work to do. You will have to find out which directories and files it needs to run. If you use tar, gzip, cpio, mt or dd for your backup and recovery tools, they will be saved to and restored from our Zip disk as part of the stage one process described below.
The last stage is a total restoration from tape or other media. After you have done that last stage, you should be able to boot to a fully restored and operational system.
First, do your normal backups on their regular schedule. This article is useless if you don't do that.
Next, build yourself a rescue disk. I use tomsrtbt, available at http://www.toms.net/~toehser/rb/. It is well documented and packs a lot of useful tools onto one floppy diskette. There is an active list for it, and the one question I had was quickly and accurately answered. I like that in a product my shop may depend on one day.
Next, figure out how to do the operating system backup you will need so that you can restore your normal backup. I followed Preston's advice and used an Iomega parallel port Zip drive. They get approximately 90MB of useful storage to a disk. Since the scripts I developed for the stage one backup save about 30MB of data, one Zip disk should be plenty for the job at hand.
Much of this is covered in the Zip Drive HOWTO (http://www.linuxdoc.org/HOWTO/mini/ZIP-Drive.html), so I'll show you exactly what I did; your mileage may vary. You may have already done a lot of this, in which case your setup will differ. For me, the procedure was simplified by not having a printer share the parallel port with the Zip drive.
First, install the driver for the Zip drive, and make a mount point for it:
[root@tester /etc]# modprobe ppa
[root@tester /etc]# mkdir /mnt/zip
Insert a suitable line into your fstab:
/dev/sda4 /mnt/zip vfat noauto 0 0
Save fstab. Put a Zip disk in the drive. Then you should be able to mount the Zip drive:
[root@tester /etc]# mount /mnt/zip
[root@tester /etc]# ls -l /mnt/zip
total 277
drwxr-xr-x   2 root   root    16384 Dec 31  1969 .
drwxr-xr-x   7 root   root     1024 May 11 08:34 ..
-rwxr-xr-x   1 root   root   265728 Jan 30  1998 50ways.exe
Zip drives come formatted for MS-DOS or Windows (FAT) or for the Mac. The FAT format is somewhat inefficient for what we are doing, although not fatally so. Our test computer setup put about 26MB onto the Zip disk, so you can skip putting an ext2 file system on your Zip disk if you want.
Here is how to replace the FAT file system on the Zip disk with an ext2fs. First, unmount the Zip drive:
[root@tester /etc]# umount /mnt/zip
Then run fdisk and see what you have:
[root@tester /etc]# fdisk /dev/sda

Command (m for help): p

Disk /dev/sda: 64 heads, 32 sectors, 96 cylinders
Units = cylinders of 2048 * 512 bytes

   Device Boot   Start      End   Blocks   Id  System
/dev/sda4   *        1       96    98288    6  DOS 16-bit >=32M

Command (m for help):

For reasons known only to Murphy and Iomega, FAT Iomega Zip disks have only one partition, partition four. Delete the offending partition:
Command (m for help): d
Partition number (1-4): 4

Command (m for help): p

Disk /dev/sda: 64 heads, 32 sectors, 96 cylinders
Units = cylinders of 2048 * 512 bytes

   Device Boot   Start      End   Blocks   Id  System

Per the Zip Drive HOWTO, we will make a new partition as partition 1:
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-96): 1
Last cylinder or +size or +sizeM or +sizeK ([1]-96): 96

Displaying the partition table indicates that it was marked as a Linux ext2fs partition for us, so we don't have to change the file system id:
Command (m for help): p

Disk /dev/sda: 64 heads, 32 sectors, 96 cylinders
Units = cylinders of 2048 * 512 bytes

   Device Boot   Start      End   Blocks   Id  System
/dev/sda1            1       96    98288   83  Linux native

Use the w command to write the partition table and exit. We now have to make a file system on the freshly minted partition:
[root@tester /etc]# mke2fs /dev/sda1
mke2fs 1.12, 9-Jul-98 for EXT2 FS 0.5b, 95/08/09
Linux ext2 file system format
File system label=
24576 inodes, 98288 blocks
4914 blocks (5.00%) reserved for the super user
First data block=1
Block size=1024 (log=0)
Fragment size=1024 (log=0)
12 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
        8193, 16385, 24577, 32769, 40961, 49153, 57345,
        65537, 73729, 81921, 90113

Writing inode tables: done
Writing superblocks and file system accounting information: done

Now we go back to fstab and add a new entry:
/dev/sda1 /mnt/zip ext2 noauto 0 0
This lets us specify which file system we have on the Zip disk by mounting the device file rather than the mount point. For example:
[root@tester /etc]# mount /dev/sda1
The order of these two lines in /etc/fstab is important. The first entry for /mnt/zip is the one mount will use by default if you specify only the mount point, so put the ext2 line above the vfat entry you made earlier. We will use this in the script that saves the stage one data to the Zip disk.
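With both entries in place, the relevant portion of /etc/fstab looks like this, with the ext2 line first:

/dev/sda1 /mnt/zip ext2 noauto 0 0
/dev/sda4 /mnt/zip vfat noauto 0 0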
Having made your production backups, what additional information do you need in order to rebuild your system? You need to preserve your partition information so that you can rebuild the partitions. This and other metadata are preserved by the script save.metadata (see Listing 1).
Unfortunately, fdisk does not yet export partition information in a form that can be re-imported from a file. Since you will have to rebuild your partitions by hand with fdisk, we save the partition information as a human-readable text file.
Fortunately, the price of hard drives is plummeting almost as fast as the public's trust in politicians after an election. So it is good that the hand-editing process allows the flexibility necessary to use a larger replacement drive.
The script saves the partition information in the file fdisk.hda in the root of the Zip disk. It is a good idea to print this file and your /etc/fstab. Then, you can work from hard copy while you restore the partition data. You can save a tree by toggling between two virtual consoles, running fdisk in one and catting /etc/fstab or /fdisk.hda as needed, but this strikes me as error prone.
You will also want to preserve files relevant to your restoration method. For example, if you use nfs to save your data, you will need to preserve hosts.allow, hosts.deny, exports, etc. Also, if you are using any network-based restoration process, such as Amanda or Quick Restore, you will need to preserve networking files like HOSTNAME, hosts, etc., and the relevant software tree.
The simplest way to handle these and similar questions is to preserve the entire /etc directory.
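Listing 1 is the authoritative version, but as a sketch of the kind of commands save.metadata boils down to (assuming the Zip disk is mounted on /mnt/zip; the archive name is only illustrative):

# a sketch only; see Listing 1 for the real script
fdisk -l /dev/hda > /mnt/zip/fdisk.hda   # human-readable partition table
ls -l / > /mnt/zip/ls.root.txt           # modes of the top-level directories
tar -czf /mnt/zip/etc.tar.gz /etc        # the whole /etc directory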
Preston cheats when he backs up his system. There is no way a 100MB Zip drive is going to hold a server installation of Red Hat 5.2. A 250MB Zip disk will hold a fresh server installation, but probably won't hold a production server. We have to be much more selective than simply preserving the whole kazoo. What files do we need?
A good first answer is the set of directories on the path used in the stage one scripts:

PATH=/bin:/sbin:/usr/bin:/usr/sbin
export PATH

Trial and error indicated that we needed some other directories as well, such as /dev. In retrospect, of course we need /dev. In Linux, you can't do much without device files.
In reading the script save.metadata (see Listing 1), note again that we aren't necessarily saving files that are called with absolute paths. It may take several iterations (back up, test the bare metal restore, reinstall from CD and try again) before you have a working backup script. While I worked on this article, I made five such iterations before I had a successful restoration. That is one reason why it is essential to use scripts whenever possible. Test thoroughly!
The first thing to do before starting the restoration process is to verify that the hardware time is set correctly. Use the BIOS setup for this. How close to exact you have to set the time depends on your applications. For restoration, within a few minutes of exact time should be accurate enough. This will allow time-critical events to pick up where they left off when you finally launch the restored system.
Before booting tomsrtbt, make sure your Zip drive is attached to a parallel port, either /dev/lp0 or /dev/lp1. The startup software will load the parallel port Zip drive driver for you.
I have one of those ne2000 clone Ethernet cards in my test system. This, it turns out, gives the 3c59x driver in the tomsrtbt kernel fits. The workaround is to tell the kernel to ignore its address range. At the LILO prompt:
zImage reserve=0x300,32
The next step is to set the video mode. I usually like to see as much on the screen as I can, so when the option to select a video mode comes up, I use mode 6, 80 columns by 60 lines. Your hardware may or may not be able to handle resolutions that high, so experiment with it.
Once tomsrtbt has booted and you have a console, mount the Zip drive. It is probably a good idea to mount it read only:
# mount /dev/sda1 /mnt -o ro
Check to be sure it is there:
# ls -l /mnt
Then clean out the first two sectors of the hard drive:
# dd if=/dev/zero of=/dev/hda bs=512 count=2
This sets the master boot record (MBR) to all zeros. It wipes out all record of the partitions and any boot code, such as LILO.
Then, using the hard copy of fdisk.hda you made earlier, use fdisk to partition the hard drive. Make the first cylinder a primary partition. After that, you will build one or more extended partitions with one to four logical partitions. Primary and extended partitions are numbered one to four. Partitions five and up are logical partitions. Notice also that the extended partitions overlap the logical partitions they contain. The partition number of a partition is the last one or two numbers in the device name, so /dev/hda5 is partition number five on hda.
You can have more than one primary partition, but this is usually unwise: each primary partition uses up one of only four slots in the partition table, while a single extended partition can be subdivided into many logical partitions. Extended partitions are far more flexible:
# fdisk

Command (m for help): p

Disk /dev/hda: 64 heads, 63 sectors, 1023 cylinders
Units = cylinders of 4032 * 512 bytes

   Device Boot   Start      End   Blocks   Id  System

Command (m for help):

We will make a new partition as partition 1:
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1023): 1
Last cylinder or +size or +sizeM or +sizeK (1-1023): 9

Now we will make the extended partition:
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Partition number (1-4): 2
First cylinder (1-1023): 10
Last cylinder or +size or +sizeM or +sizeK (10-1023): 1022

Note that the hard drive has 1,023 cylinders, and we only made the extended partition go to 1,022, wasting a cylinder. The reason is that we are duplicating the setup Red Hat gave us on the original installation.
Now for a logical partition:
Command (m for help): n
Command action
   l   Logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (10-1022): 10
Last cylinder or +size or +sizeM or +sizeK (1-1022): 368

And so on for each partition that your fdisk.hda file indicates you should have.
Set the file system ID on your swap partition(s) to Linux Swap (82) using the t command.
Don't forget to use the a command to set a partition active, or bootable. In a Linux-only installation, this is usually the first partition, but can be another one. It will be the partition that mounts as /boot (see your printout of fstab):
Command (m for help): a
Partition number (1-9): 1

Then verify your work:
Command (m for help): v
372 unallocated sectors

Command (m for help): p

Disk /dev/hda: 64 heads, 63 sectors, 1022 cylinders
Units = cylinders of 4032 * 512 bytes

   Device Boot   Start      End   Blocks   Id  System
/dev/hda1   *        1        9    18112+  83  Linux native
/dev/hda2           10     1022  2042208    5  Extended
/dev/hda5           10      368   723712+  83  Linux native
/dev/hda6          369      727   723712+  83  Linux native
/dev/hda7          728      858   264064+  83  Linux native
/dev/hda8          859      989   264064+  83  Linux native
/dev/hda9          990     1022    66496+  82  Linux swap

Command (m for help):

Check this listing against your printed copy. Note that the one extended partition overlaps the five Linux partitions.
Finally, we use the w command to write the partition table and exit:
Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
hda: hda1 hda2 < hda5 hda6 hda7 hda8 hda9 >
hda: hda1 hda2 < hda5 hda6 hda7 hda8 hda9 >
Syncing disks.

WARNING: If you have created or modified any DOS 6.x partitions, please see the fdisk manual page for additional information.

Then make ext2 file systems on each partition you will be using as an ext2 partition. These will be the primary and logical partitions that you did not change to swap partitions. Don't do this to your extended partitions!
mke2fs /dev/hda1
mke2fs /dev/hda5
...

For example,
# mke2fs /dev/hda1
mke2fs 1.10, 24-April-97 for EXT2 FS 0.5b, 95/08/09
Linux ext2 file system format
File-system label=
4536 inodes, 18112 blocks
905 blocks (5.00%) reserved for the super user
First data block=1
Block size=1024 (log=0)
Fragment size=1024 (log=0)
3 block groups
8192 blocks per group, 8192 fragments per group
1512 inodes per group
Superblock backups stored on blocks:
        8193, 16385

Writing inode tables: done
Writing superblocks and file system accounting information: done

And so on for each primary or logical partition that is not a swap partition.
Now set up the swap partition.
# mkswap /dev/hda9
Setting up swap space, size = 68087808 bytes

Then we have to build a mount point for each partition by hand and mount the appropriate partition to it. Since what is now /target will eventually become /, we mount what will become / on /target:
# mkdir /target
# mount /dev/hda8 /target

Next, we build the directories we need under /target and mount the corresponding partitions to them, like so for each directory:
# mkdir /target/boot
# mount /dev/hda1 /target/boot

You can determine which directories and mount points to build from /mnt/etc/fstab. Fortunately, umask is already set correctly for almost all the directories we need to build.
And so on for each partition. Referring to /mnt/ls.root.txt, make sure you also set the proper modes for the directories you are building.
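For the test machine's layout (compare the mount listing below), the remaining directories and mounts look like this; your own fstab printout governs your list:

# device-to-directory mapping from the test machine; yours will differ
mkdir /target/usr
mount /dev/hda5 /target/usr
mkdir /target/home
mount /dev/hda6 /target/home
mkdir /target/var
mount /dev/hda7 /target/var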
Do not try to mount the extended or swap partitions, though. You don't need to, and it won't do you any good, anyway.
To check your progress, use the command mount with no parameters.
# mount
/dev/ram0 on / type minix (rw)
none on /proc type proc (rw)
/dev/ram1 on /usr type minix (rw)
/dev/ram3 on /tmp type minix (rw)
/dev/sda1 on /mnt type ext2 (rw)
/dev/hda8 on /target type ext2 (rw)
/dev/hda1 on /target/boot type ext2 (rw)
/dev/hda6 on /target/home type ext2 (rw)
/dev/hda5 on /target/usr type ext2 (rw)
/dev/hda7 on /target/var type ext2 (rw)

Once you have created all your directories and mounted partitions to them, you can run the script /mnt/root.bin/restore.metadata (see Listing 2). This will restore the contents of the Zip drive to the hard drive.
Listing 2. Restore Script: Zip to HD
You should see a listing of the Zip disk's root directory, then a list of the archive files as they are restored. tar on tomsrtbt will tell you that tar's block size is 20; that's fine, and you can ignore it. Be sure that LILO prints out its results:
Added linux *
That will be followed by the output from a df -m command.
If you normally boot directly to X, that could cause problems. To be safe, change your boot run level temporarily. In /etc/inittab, find the line that looks like this:
id:5:initdefault:
and change it to this:
id:3:initdefault:
Now, you can gracefully reboot. Remove the tomsrtbt floppy from your floppy drive if you haven't already done so, and give the computer the three-fingered salute, or its equivalent, ``shutdown -r now''. The computer will shut down and reboot.
As the computer reboots, go back to the BIOS and verify that the clock is more or less correct.
Once you have verified the clock, exit the BIOS and reboot, this time to the hard drive. You will see a lot of error messages, mostly along the lines of ``I can't find blah! Waahhh!'' Well, if you have done your homework correctly up until now, those error messages won't matter. You don't need linuxconf or apache to do what you need to do.
You should be able to log into a root console (no X, no users, sorry). You should now be able to use the network, for example, to NFS mount the backup of your system.
If you did the two stage backup I suggested for Arkeia, you can restore Arkeia's database and executables. Now, you should be able to run /etc/rc.d/init.d/arkeia start and start the server. If you have the GUI installed on another computer with X installed, you should be able to log in to Arkeia on your tape server, and prepare your restoration.
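Continuing the hypothetical NFS example from the backup discussion above, stage two on the test setup might look something like this:

# backuphost and /backups are the same stand-in names used earlier
mkdir -p /mnt/nfs
mount backuphost:/backups /mnt/nfs
cd /
tar -xzpf /mnt/nfs/arkeia.tar.gz
/etc/rc.d/init.d/arkeia start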
When you restore, read the documentation for your restoration programs carefully. For example, tar does not normally restore certain characteristics of files, like suid bits. File permissions are set by the user's umask. To restore your files exactly as you saved them, use tar's p option. Similarly, make sure your restoration software will restore everything exactly as you saved it.
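For instance, if your production backup is a tar archive on a SCSI tape drive at /dev/st0 (an assumed device name; use your own), a restore that honors both points might look like this:

# /dev/st0 is an assumption; -p preserves permissions and suid bits,
# -k keeps the files already restored from the Zip disk
cd /
mt -f /dev/st0 rewind
tar -xpkf /dev/st0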
To restore the test computer:
[root@tester ~]# restore.all
If you used tar for your backup and restoration, and used the -k (keep old files, don't overwrite) option, you will see a lot of this:
tar: usr/sbin/rpcinfo: Could not create file: File exists
tar: usr/sbin/zdump: Could not create file: File exists
tar: usr/sbin/zic: Could not create file: File exists
tar: usr/sbin/ab: Could not create file: File exists

This is normal, as tar is refusing to overwrite files you restored during the first stage of restoration.
Just to be paranoid, run LILO after you perform your restoration. I doubt it is necessary, but if it is necessary, it's a lot easier than the alternative. You will notice I have it in my script, restore.all (see Listing 3).
Now reboot. On the way down, you will see a lot of error messages, such as ``no such pid.'' This is a normal part of the process. The shutdown code is using the pid files from dæmons that were running when the backup was made to shut down dæmons that were not started on the last boot. Of course there's no such pid.
Your system should come up normally, with a lot fewer errors than it had before. The acid test of how well your restore works on an RPM based system is to verify all packages:
rpm -Va
Some files, such as configuration and log files, will have changed in the normal course of things, and you should be able to mentally filter those out of the report.
If you took my advice earlier and save RPM metadata as a normal part of your backup process, you should be able to diff the two reports, which speeds up this step considerably.
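For example (the file names here are only illustrative):

rpm -Va > /tmp/rpmVa.after.txt
diff /etc/rpmVa.txt /tmp/rpmVa.after.txt | less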
You should be up and running. It is time to test your applications, especially those that run as dæmons. The more sophisticated the application, the more testing you may need to do. If you have remote users, keep them off the system, or make it ``read only'', while you test it. This is especially important for databases, to prevent making any corruption or data loss worse than it already might be.
If you normally boot to X, and disabled it above, test X before you re-enable it. Re-enable it by changing that one line in /etc/inittab back to:
id:5:initdefault:
You should now be ready to rock and roll--and for some Aspirin and a couch.
You should take your Zip disk for each computer and the printouts you made, and place them in a secure location in your shop. You should also store copies of these in your off-site storage location. The major purpose of off-site backup storage is to enable disaster recovery, and restoring each host onto replacement hardware is a part of disaster recovery.
You should also have several tomsrtbt floppies and possibly some Zip drives in your off-site storage as well. Have copies of the tomsrtbt distribution on several of your computers so that they back each other up. In addition, you should probably keep copies of this article, with your site-specific annotations on it, with your backups and in your off-site backup storage.
This article is the result of experiments on one computer. No doubt you will find some other directories or files you need to back up in your first stage backup. I have not dealt with saving and restoring X on the first stage. Nor have I dealt with other operating systems in a dual boot system, or with processors other than Intel.
I would appreciate your feedback as you test and improve these scripts on your own computers. I also encourage vendors of backup software to document how to do a minimal backup of their products. I'd like to see the whole Linux community sleep just a little better at night.
Charles Curley (ccurley@trib.com) lives in Wyoming, where he rides horses and herds cattle, cats and electrons. Only the last of those pays well, so he also writes documentation for a small software company headquartered in Redmond, Washington.