du & df 

>The problem is also that "du" and "df" are terribly unreliable, at least
>in my experience. "du" seems to round up to the nearest 512 bytes (or
>1024 depending on the -k flag), and I wouldn't trust "df" any further
>than I can..um..throw it.

The output of "df" gives a highly reliable representation of the amount of disk space available on a filesystem. It gets its numbers straight from the horse's mouth (i.e., it directly queries the filesystem). If these numbers seem wrong to you then I suspect that you aren't understanding what is (and is not) being counted in the report.

What "du" is reporting is the actual amount of disk used by the files --- not the sum of the data contained in each file (i.e., the amount shown by a "ls -l"), but the sum of the disk blocks used by the files. This number may be higher (due to indirect blocks and blocks for tail fragments of files) or lower (due to holes) than a direct conversion of the file's size might suggest. Indeed, du does round up each file's size to the nearest block, as each file is stored in an integer number of blocks on the disk (glossing over the issue of the use of "fragments" on several filesystems).

If what you're after is the sum of the byte counts of the files in a directory tree, without any adjustment for the actual disk space used, then you'll need a tool other than "du" or "df". Perhaps:

# Prepend a byte counter and an END block to the script that
# find2perl generates, then feed the combined script to perl.
(echo 'my $c=0; END {print $c, "\n"}';
exec find2perl . -eval '$c += -s $_') | perl
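
The same tally can also be had without find2perl. Here is a rough equivalent of the pipeline above (a sketch, assuming a find that supports -print0, as GNU and modern BSD versions do); -type f restricts the sum to regular files so that directory sizes are not mixed in:

find . -type f -print0 |
perl -0ne 'chomp; $c += -s $_; END { print "$c\n" }'

The -0 switch makes perl read the NUL-separated names that -print0 emits, so filenames containing whitespace or newlines are handled safely.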

Ken Pizzini

documented on: 08-03-99 16:37:05