tdev=hda1; nice badblocks -p 99 -c 90000 -nv /dev/$tdev 2>&1 | tee -a /tmp/badblocks.$tdev.log &
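The same pattern extends to checking several devices at once, each with its own log. A minimal sketch, assuming the device names (here hda1 and hdb1 are only illustrative; substitute your own partitions):

```shell
# Background one badblocks run per device, logging each separately
# (sketch; the device list below is illustrative, not from the original).
for tdev in hda1 hdb1; do
    nice badblocks -p 99 -c 90000 -nv "/dev/$tdev" 2>&1 \
        | tee -a "/tmp/badblocks.$tdev.log" &
done
wait    # block until every background check has finished
```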
$ jot - 900000 100000 -50000 | xargs -t -I{} badblocks -c {} -nv /dev/hda
[...]
badblocks -c 150000 -nv /dev/hda
badblocks: Cannot allocate memory while allocating buffers
badblocks -c 100000 -nv /dev/hda
Initializing random test data
Checking for bad blocks in non-destructive read-write mode
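The jot pipeline above simply tries ever-smaller -c values until one fits in memory. If you don't have BSD jot installed, the same search can be sketched with a plain shell loop using seq (device name illustrative; badblocks needs root and a real device):

```shell
# Step -c down by 50000 until badblocks can allocate its buffers;
# the first value that fits runs the full test, then we stop.
# (Sketch: seq stands in for BSD jot; /dev/hda is illustrative.)
for c in $(seq 900000 -50000 100000); do
    if badblocks -c "$c" -nv /dev/hda; then
        break    # this -c fit in memory and the test completed
    fi
done
```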
[...]
By default only a non-destructive read-only test is done.
-n Use non-destructive read-write mode.
-w Use write-mode test. With this option, badblocks scans for bad
blocks by writing some patterns (0xaa, 0x55, 0xff, 0x00) on
every block of the device, reading every block and comparing the
contents. This option may not be combined with the -n option,
as they are mutually exclusive.

Badblocks needs memory proportional to the number of blocks tested at once in read-only mode, proportional to twice that number in read-write mode (NB: this might not be true; I noticed that the memory requirement is constant, as of e2fsprogs-1.27-9. Tong), and proportional to three times that number in non-destructive read-write mode.
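Taking the man page's figures at face value, you can estimate the buffer memory a given -c value implies. A quick worked example for the -c 100000 run above (badblocks' default block size is 1024 bytes):

```shell
# Rough buffer-memory estimate for non-destructive read-write mode:
# roughly 3 buffers of (blocks * blocksize) bytes each.
blocks=100000                 # the -c value
blocksize=1024                # badblocks' default block size, in bytes
echo "$(( 3 * blocks * blocksize / 1024 / 1024 )) MiB"   # → 292 MiB
```

This is consistent with the failure at -c 150000: roughly 439 MiB of buffers was evidently more than that machine could allocate.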
If you set the number-of-blocks parameter (-c) too high, badblocks in non-destructive read-write mode will exit almost immediately with an out-of-memory error "while allocating buffers".
If you set it too low for a non-destructive read-write test, however, it is possible for questionable blocks on an unreliable hard drive to be hidden by the effects of the hard disk track buffer.
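One way to reason about "too low" is to keep -c large enough that each pass moves more data than the drive's on-board cache can hold. A hedged sketch of the arithmetic (the 8 MiB cache size is an assumed figure; on old IDE drives `hdparm -i /dev/hda` reports it as BuffSize, though the field name varies by drive and hdparm version):

```shell
# Smallest -c that exceeds the drive's cache, so reads cannot be
# satisfied entirely from the track buffer.
buff_kb=8192                  # assumed drive cache in KiB, NOT measured
blocksize=1024                # badblocks' default block size, in bytes
min_c=$(( buff_kb * 1024 / blocksize + 1 ))
echo "$min_c"                 # → 8193
```

In practice you would want -c well above this lower bound anyway, subject to the memory ceiling discussed above.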