system report program 

> > What are the programs that I can use to know the system hardware and
> > software info? E.g. the # of cpus, its speed, bus speed, memory size,
> > cache size, HD size... all sorts of info for hardware and
> Which OS?
>
> Many systems log the klog buffer to a file during startup.  You'll
> find everything there, but how the file is named depends on your OS.
>
> If the system was booted only recently, you may try "dmesg".
>
> BSD 4.4 based systems have a "sysctl" command that'll let you find
> out, too.

For Sun Solaris. I knew there was a command and tried every "valid guess" but failed. Thanks to your help I found out it is "sysinfo" — for H/W info. tks.

Tong
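
On Linux there is no single equivalent command, but much of the same hardware information can be read straight out of /proc; a sketch (standard /proc paths, GNU grep assumed):

```shell
# CPU count, CPU model/speed, and memory size, straight from /proc
grep -c '^processor' /proc/cpuinfo        # number of CPU entries
grep -m 1 'model name' /proc/cpuinfo      # CPU model and clock speed
grep '^MemTotal' /proc/meminfo            # total memory in kB
```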

Running program under root ID 

Newsgroups: comp.os.linux.misc
>    I have written a PERL script to prompt for a username and password. It
> then adds the user to the system. The problem I am having is that the user
> running the script is not root. Is there a way to get the PERL script to run
> as root, by someone other than root ?

(It is not spelled "PERL".) If you have suidperl installed, you can make the script run with root's permissions by making it owned by root, then turning on the setuid bit with "chmod u+s /path/to/script". You may want to take away the permission of any user to run it ("chmod o-x /path/to/script") and put the authorized users and the script in a special group.
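
The permission setup described above might look like this (demonstrated on a scratch file; a real script would be owned by root and live somewhere like /usr/local/sbin, and the group name would be whatever special group you chose):

```shell
# Demonstration of the mode: setuid bit plus group-only execution
f=$(mktemp)
chmod 4750 "$f"        # setuid + rwxr-x---: owner and group run it, others cannot
ls -l "$f"             # the mode column shows -rwsr-x---
rm -f "$f"
```

Mode 4750 combines "chmod u+s" with removing all access for "other" in a single step.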


Paul Kimoto

Running program under root ID 

>  It is too hard to write secure setuid shell scripts.  Accordingly, Linux
>  ignores the setuid bit on scripts.

Not true.

Linux ignores the setuid bit, because the current method of invoking a script allows for a race condition. This has been solved on other Unix systems (like, say, Solaris) by invoking the interpreter and passing it an open file handle to the script instead of the name of it, breaking the race condition.

From the perlsec man page:

hint: symbolic links and an suid script make it trivial to run any program as the owner of the suid script on such systems, of which Linux is one. Set up a symlink like foo->/sbin/rootly, where rootly is an suid script. Then run 'foo'… if you're quick and can point foo at myrootshell between the time the kernel decides to run perl (or sh or any other #!'ist script) and the time the interpreter re-opens the script, myrootshell will run as root… even though it's not suid.

Brian Moore

file:Boot log 

Boot log messages are stored in the file

/var/log/messages
Note: it only keeps the messages that the bootup programs want to log, not everything you see on the bootup console.

In Linux, checking /var/log/dmesg can verify that your DOS drive is being correctly detected.

full boot-up messages 

Newsgroups:  gmane.linux.debian.user
Date:        Thu, 16 Nov 2006 18:44:19 +0100
> Where are the boot-up messages logged, please?
>
> dmesg doesn't give me the "Starting... done." messages -- I'd like to
> have exactly what is written on the screen during boot-up time.

You need to activate bootlogd for this: edit the file /etc/default/bootlogd, change BOOTLOGD_ENABLE to Yes. After a reboot you will find these messages in /var/log/boot.
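
The edit can also be done non-interactively; a sketch (Debian path, assuming the BOOTLOGD_ENABLE line already exists in the file):

```shell
# Flip BOOTLOGD_ENABLE to Yes in place (skipped if the file is absent)
f=/etc/default/bootlogd
if [ -f "$f" ]; then
    sed -i 's/^BOOTLOGD_ENABLE=.*/BOOTLOGD_ENABLE=Yes/' "$f"
    grep BOOTLOGD_ENABLE "$f"
fi
```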

Mertens Bram "M8ram"

full boot-up messages 

> I found that very helpful - thanks - even though I didn't ask the original
> question.  Is there anything similar which will give the messages on
> shutdown?

You could probably cobble something together using bootlogd, but you'd want the log saved to something other than /var/log/boot since it will get overwritten on the next boot. Try /var/log/shutdown. You will also want it to have logged everything before the script that unmounts local filesystems (and hence /var) runs.

Douglas Tutty

Ways of reducing log files sizes automatically 

> Is there a way to remove parts of the file that are older than
> 30 days for example.

Ways of reducing log files sizes automatically 

> You sound like you may want to try logrotate. This is a little utility
> which can be set to rotate nominated logs every day/week/whatever, and
> have old records emailed to you. The system's log files will not exceed
> a certain pre-set size limit using logrotate.

Alternatively, if you want to control this, and many other things, using an integrated tool that Does Lots Of Cleanup, you might try cfengine. <http://www.iu.hioslo.no/cfengine/>

I use the following set of rules that run each day to clean up a number of files _I_ found got pretty sticky. [The usual log rotaters are not aware of the log files produced by Postfix; that caused me problems on one system…]

disable:

/var/log/auth.log rotate=4 type=plain size=>200k
/var/log/mail.info rotate=4 type=plain size=>200k
/var/log/mail.warn rotate=4 type=plain size=>200k
/var/log/mail.log rotate=4 type=plain size=>200k
/var/log/mail.err rotate=4 type=plain size=>200k
/var/log/kern.log rotate=4 type=plain size=>200k
/var/log/debug rotate=4 type=plain size=>200k
/var/log/daemon.log rotate=4 type=plain size=>200k
/var/log/messages rotate=8 type=plain size=>200k

cbbrowne

Why, ext2 don't need defrag 

> the artist formerly known as dan said:
> >I understand that the ext2 filesystem is a little "smarter" then the fat
> >fs, and it does not need to be defrag.  But can someone explain why, I
> >mean the physical architecture of how the ext2 fs works, or if it's too
> >much to explain does anyone know of a site that can thoroughtly
> >breakdown how the ext2 fs works.
>
> See:
> <http://step.polymtl.ca/~ldd/ext2fs/ext2fs_toc.html>[]
>    Analysis of the Ext2fs structure
>
> As well as the references by Remy Card and Theodore Ts'o that are
> referenced therein.
>
> It is _NOT_ a "comparative analysis of ext2fs _as compared to DOS FAT_"
> and thus will not provide a detailed answer as to _why_ ext2 is better.
>
> That is left as an exercise to the gentle reader; if you're not up
> to looking at the sources and assessing it yourself, you would likely
> not be able to find actual value in anything more specific than the
> rather blank claim that "ext2 allocates files more intelligently than
> FAT."

Right. What would be more helpful would be an explanation as to why the FAT file system (the MS-DOS/MS-Windows file system) both does fragment (which is not in *itself* bad) and why a fragmented FAT file system is bad, *particularly* in the context of MS-DOS & MS-Windows.

The simple / short explanation has to do with the fact that the FAT file system uses a *simple* linked list. Linked lists are notorious for fragmentation — which is a major problem with LISP systems under memory-starved conditions — they tend to page fault all over the place after things are running for a while. This is *exactly* what happens with the FAT file system. Note: fragmentation itself is not really bad, if the operating system is designed to cope with the fragmentation *intelligently*, which usually means things like smart caching and various sorts of file system 'look ahead' / 'read ahead' features, which MS-DOS & MS-Windows generally lack (MS-DOS lacks these features, as does MS-Windows 95, 98, and ME — NT and 2K are a little better). Note ext2 does fragment some, but both the ext2 structure and Linux itself are *designed* to cope with the fragmentation in an intelligent fashion and not suffer performance degradation because of it.

The FAT file system was originally invented for *floppy disks*, which are relatively low capacity and where there is not really massive file creation and deletion. Also floppies are slow (in a relative sense) and don't have the spare sectors for a more complex file system. (Linux people tend to use the FAT filesystem for floppies, since Ext2 has too much overhead for floppies). Fragmentation on a floppy is easy to cure: copy everything off the floppy, reformat it and copy everything back. Actually floppies tend to wear out before fragmentation becomes serious. Or else the files on a floppy are 'static' (they are backup copies).

Robert Heller @cs.umass.edu

Why, ext2 don't need defrag 

> EXT2-fs warning: checktime reached, running e2fsck is recommended
>
> Doesn't e2fsck also "defrag" a bit as well as checking the file-system?
>

No. It just checks the file system. The "EXT2-fs warning: checktime reached, running e2fsck is recommended" is not specific to SuSE. It is a normal part of the EXT2-fs code.

Robert Heller @cs.umass.edu

logs to monitor 

> OS Sun OS 5.5.1. Can some one guide as to what are the system logs (for
> security, resource utilization etc) to
>  monitor on a daily basis, so as to take any preventive measures.

You can find the locations of the system log files in /etc/syslog.conf. You can configure the logging for different services using that configuration file.
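
A quick way to list those destinations is to pull the last field out of each rule; a sketch (assumes the usual two-column syslog.conf layout, stripping the leading '-' no-sync marker used by Linux syslogd):

```shell
# Print the unique log destinations named in a syslog.conf-style file
list_logs() {
    awk '!/^#/ && NF >= 2 { sub(/^-/, "", $NF); print $NF }' "$1" | sort -u
}

if [ -f /etc/syslog.conf ]; then
    list_logs /etc/syslog.conf
fi
```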

cmd:df 

$ df -HT
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/hdc2     ext3    2.1G  1.3G  691M  65% /
/dev/hdc5     ext3    8.3G  6.8G  1.1G  86% /export
none         tmpfs     15M     0   15M   0% /dev/shm
$ df
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/hdc2              2016044   1238468    675164  65% /
/dev/hdc5              8119744   6565832   1141448  86% /export
none                     15060         0     15060   0% /dev/shm
$ df -ke
df: invalid option -- e
Try `df --help' for more information.
$ gcc -dumpmachine
i386-redhat-linux

disk free for cwd 

Newsgroups:  comp.unix.shell
Date:        Sat, 19 Nov 2005 18:43:45 +0000
> Is there a command that would tell me how much disk space is still left
> for the current directory?
df -k .

Stephane CHAZELAS

Differences between df and du -s 

Newsgroups: comp.os.linux.misc
Date: 29 Jan 2003 09:55:12 +0100
> I've had this strange problem for which I haven't been able to find an
> explanation:
>
> With "df", I saw my partition in /dev/hda7, mounted on /var, which had all
> its capacity (250 MB) used. I have a MySQL database installed (it uses the
> /var directory), and it had stopped working.
>
> Hoewever, changing to this directory, and doing du -s, showed that only 23
> or 24 MB were allocated to the filesystem. [...]
>
> Where could the problem be?

Not all the files existing are accessible from the directories!

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int i;
    char buffer[1024];
    /* create the file, then immediately remove its directory entry */
    int fd = open("/tmp/test", O_CREAT | O_RDWR, 0600);
    unlink("/tmp/test");
    /* keep writing: 250 MB gets allocated, but du won't see any of it */
    for (i = 0; i < 250 * 1024; i++) {
        write(fd, buffer, sizeof(buffer));
    }
    for (;;) {
        sleep(3600);
    }
    return 0;
}
> What tools could I use to find the problem if it happens again?
lsof, in that case ------------------------------++++
                                                 ||||
[pascal@thalassa essais]$ lsof|head              VVVV
COMMAND     PID   USER   FD   TYPE     DEVICE    SIZE      NODE NAME
init          1   root  mem    REG        9,0  440536    608790 /sbin/init
do-fetchm   151   root  mem    REG        9,0  442760        28 /bin/bash

The grossest way: reboot. Otherwise, if you identify the culprit: kill $pid

Pascal_Bourguignon

Differences between df and du -s 

> In any unix-like system, when a file is deleted but it is still
> opened by a process, the space is not released (and actually the process
> can continue using the file as normal).   When the process exits,
> the space is released.

Right. And the difference in du -s and df output is because df asks the filesystem (which knows about the files even though they're not listed in any directory), whereas du -s just walks the directories, so the deleted but not released files don't show up.

Ed

Differences between df and du -s 

> A process can create a file and immediately unlink it. If the process

You or someone or some program could also have rm'ed a file which is in use. The file name is removed from the directory structure, but the inode is not released and the process can continue using the file. E.g., you might have decided that /var/log/syslog is too big and removed it, thinking that syslogd would recreate the file. Instead syslogd was still connected to the old file and filling it up. This is why, if you look at the various logrotate scripts, they always restart the daemon after saving the log file. Killing the old process releases the inode of the removed file and the space is freed; when the daemon is restarted, it starts with the new file.

Do you recall trying to "clean up" /var/log of some large file and simply rm it?

Bill Unruh

> The utilities 'lsof' and 'fuser' may also help. I can't recall which comes
> with linux distributions - possibly both.
>
> Frank Ranner
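
The behaviour described in this thread is easy to reproduce in a shell; a sketch, with fd 3 standing in for the daemon's open descriptor:

```shell
f=$(mktemp)
echo "log data" > "$f"
exec 3< "$f"      # hold the file open, as a daemon would
rm "$f"           # directory entry gone; du no longer counts it
read line <&3     # ...but the data is still readable through the fd
echo "$line"      # prints: log data
exec 3<&-         # only now is the disk space actually released
```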

shutdown permission 

Newsgroups: comp.os.linux.misc

So there is an internal check to see if the process is running with root privileges, and in your case it isn't. If you REALLY need to run this as yourself and not as root, you would need to set the suid bit on the file: chmod u+s shutdown. Then it will assume the identity of the file owner (root in this case) when it runs. An ls -l would then reveal -rwsr-xr-x …

Mike.

Linux disk blocksize 

Newsgroups: alt.os.linux,comp.os.linux.misc,comp.os.linux.setup,comp.os.linux.networking

> On Solaris, I can find the blocksize used on the disks with:
> df -g
>
> What is the command to show the same thing in Linux?
dumpe2fs -h /dev/hdxx

Manfred

How to make incredible size file as soon as faster? 

> > man mkfile
>
>First of all, I still don't even understand his question.

I assumed he meant "How do I make a very big file as quickly as possible?"

> Secondly, I don't have an mkfile on my system...

I guess it's a Solarisism. If you don't have it, then:

dd if=/dev/zero of=newfile bs=1 oseek=<size-1> count=1

The above command will make a sparse file, with disk space only allocated for the last block. To allocate disk space for the entire file, use:

dd if=/dev/zero of=newfile bs=<size> count=1
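
The difference between the two forms is easy to observe; a sketch (GNU dd spells the output offset seek= rather than oseek=):

```shell
# Both files report 1 MiB with ls, but du shows that the sparse one
# occupies almost no disk blocks
dd if=/dev/zero of=sparse.bin bs=1 seek=$((1024*1024 - 1)) count=1 2>/dev/null
dd if=/dev/zero of=full.bin bs=$((1024*1024)) count=1 2>/dev/null
ls -l sparse.bin full.bin     # identical sizes: 1048576 bytes each
du -k sparse.bin full.bin     # sparse.bin uses far fewer blocks
```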

Barry Margolin

cmd:size 

> dir /opt/bin/bash # org version
>-rwxr-xr-x    1 root     root      1520199 Dec 27 16:16 /opt/bin/bash*
>
> dir /opt/bin/bash # stripped version
>-rwxr-xr-x    1 root     root       482496 Jan 16 19:43 /opt/bin/bash*
>
>Now it is better. :-)

For the true size of an executable, may I suggest using the size command.

$ size /bin/bash
   text    data     bss     dec     hex filename
 279384   18664    6196  304244   4a474 /bin/bash

Villy

documented on: 2001.01.18 Thu 16:40:18

mouse 

x The gpm program allows you to cut and paste text on    x
x the virtual consoles using a mouse.  If you choose to  x
x run it at boot time, this line will be added to your   x
x /etc/rc.d/rc.gpm:                                      x
x                                                        x
x     gpm -m /dev/mouse -t imps2                         x
x                                                        x
x Running gpm with a bus mouse can cause problems with   x
x XFree86.  If XFree86 refuses to start and complains    x
x that it can not open the mouse, then comment the line  x
x out of /etc/rc.d/rc.gpm, or add '-R' to gpm and set    x
x up X to use /dev/gpmdata as your mouse device.         x

documented on: 2001.02.03 Sat 22:40:21

cmd:xosview 

xosview +net -int &
chmod u+s /usr/X11R6/bin/xosview
xosview +net -int -xrm "xosview*serial0:True" &

who accessed the file 

*Tags*: cmd:stat

Newsgroups: comp.unix.questions
> How do we know who accessed the file, is there a way? We know when it was
> accessed last by looking at `ls -alu file`

You cannot tell who accessed a file. Running "stat filename" will give you just about all the information that is available for a file.

eg:

stat typescript
typescript:
        inode 149959; dev 284; links 1; size 404
        regular; mode is rw-r--r--; uid 35621 (nlong); gid 20 (user)
        projid 0        st_fstype: xfs
        change time - Fri Mar 30 23:36:57 2001 (986013417)
        access time - Fri Mar 30 23:37:03 2001 (986013423)
        modify time - Fri Mar 30 23:36:57 2001 (986013417)

Nick Long
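
On Linux the same information comes from GNU coreutils stat, whose -c option lets you pick out individual fields:

```shell
# Selected fields: size in bytes, octal mode, owner uid, last access time
stat -c 'size=%s mode=%a uid=%u atime=%x' /etc/passwd
```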

Real size of a file in a filesystem … 

Newsgroups: comp.os.linux.misc

My task is to divide myriads of files into subsets to burn them on a bunch of cdroms. For this I use mkisofs (to create a joliet-iso-filesystem) and I want to find a way to calculate the space a certain file or directory will take in the isofs later.

I guess the size reported by the system is not the size the file actually needs on the media, and then it will need some space in the directory table and more.

What I want now is a fast method to estimate this 'real' size of a file as well as possible, and on the other hand I would be interested in an in-depth background on all this stuff. I read the filesystems-howto but didn't find the information I am looking for.

thanx for any hints, links and informations,

peter

Real size of a file in a filesystem … 

> Would the du (man du) command do what you need?

Unfortunately not. du calculates the needed disk space on the current filesystem (ReiserFS or ext2), but not its space on the target filesystem (Joliet/ISO-9660). On the other hand du is very useful for calculating the space for a folder and its subfolders, but no use for calculating the space for hundreds of thousands of files separately, which is what I need.

thanx a lot, peter

Real size of a file in a filesystem … 

> In my experience of making ISO images, I never have more than 500k of
> overhead.  I normally use the output of du and add 500k to it to do
> the estimate.  It comes very close.  If I have to add more, I just
> recreate the ISO file.  It does not take long to regenerate an ISO
> image.  I use a slow machine (Sparc5 85MHz cpu).  It takes about 12-14
> minutes to make an 650MB ISO image.  Don't forget 650MB is 681574400
> bytes and 700MB is 734003200 bytes.  If your cpu is faster than mine,
> you can afford to recreate as often as you need.

I have 24 gigs of files, separated into about 80,000 files from 1k to 500M. I want to create 38 ISO images that hold all these files to burn onto CD. If I just copied them in the order the files come, I would need many more CDs, so I need to find out a perfect compilation.

For this I want to take a file, calculate the exact size it will have in the iso-image and 'throw' it in the iso-image it fits best. (Actually I create a symlink and run mkisofs later.)

So du is no help for my task. And when adding 12,000 files to one iso-image the overhead is much greater.

thnx, peter

Real size of a file in a filesystem … 

The space occupied by each file is the file size rounded up to a multiple of 2048 bytes.

It is much harder to estimate the space taken by the directory entries - although it is unlikely to be more than 1 or 2% of the whole CD (even with 12000 files). The only way to get the exact size is to run mkisofs with the -print-size option.
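
The per-file rounding rule is a single integer expression; a sketch:

```shell
# Data size of a file inside an ISO-9660 image: round up to 2048-byte
# sectors (directory-entry overhead not included)
iso_size() {
    echo $(( ($1 + 2047) / 2048 * 2048 ))
}

iso_size 1        # -> 2048
iso_size 2048     # -> 2048
iso_size 2049     # -> 4096
```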

There is something called "cdburn" which is a wrapper to mkisofs/cdrecord that is designed to store data on multiple CDs. I've never used it, but it may help.

James Pearson

>> 2001.04.12 Thu 15:59:26

cmd:last 

Usage 

last -30 -a

Help 

-num   This is a count telling last how many lines to show.
-a     Display the hostname in the last column. Useful in combination
       with the next flag.

Disabling reboot fsck forever 

Newsgroups: comp.os.linux.setup
 >  I really like the way my machine works and I have no problem with what fsck
 >does to it.  But I'm an OpenGL developer and I always program with my video
 >card memory unprotected (3dfx).  Sometimes, it crashes real hard.

Well … if it crashed, a file system check may not be such a bad idea. A JFS is helpful for sure, but a JFS is not going to prevent any loss of real data.

Still, and depending on how you're using your system, of course you can skip the fsck. I've a self-built system, so your /etc/bcheckrc is going to look different or reside in a different path, but an extract …

echo "*** RUNNING FSCK - PLEASE WAIT ***"
FSSTATUS=0
while read FSDEV FSMOUNT FSTYPE FSFLAGS FSDUMPFREQ FSFSCKPASS
do
      if [ $FSTYPE = ext2 -a $FSFSCKPASS -ne 0 ]
      then
         if [ $FSMOUNT = "/" ]
         then
            FSROOT=$FSDEV
         fi

         /sbin/e2fsck $FSFORCEFSCK -p $FSDEV; FSSTATUS=$(( FSSTATUS | $? ))
      fi
done < /etc/fstab

if [ $FSSTATUS -ge 2 ]
then
   echo "*** FSCK EXITED WITH $FSSTATUS - STARTING MAINTENANCE SHELL ***"
   /sbin/sulogin /dev/console
fi

... say e2fsck is run via some script. Means you can disable a fsck
easily. E.g. wrap the check in something like ...

if [ ! -f /nofsck ]
then
   :   # the e2fsck loop from above goes here
fi
...

Question is when to create /nofsck though … you could boot into different runlevels, say 2 for "normal" operation and 4 for development work, perhaps mounting some file systems ro and so on.

You could even disable the "standard" fsck completely, run a cronjob which creates a tag file /dofsck once a week and only check your filesystems if this file exists, removing it afterwards.
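
The tag-file idea is only a few lines of the same boot-script shell; a sketch (/dofsck and the e2fsck invocation are as described above, $FSDEV coming from the per-filesystem loop):

```shell
# In the boot script: check only when the weekly cron job left a tag file
if [ -f /dofsck ]
then
    /sbin/e2fsck -p "$FSDEV"    # run the per-filesystem loop here
    rm -f /dofsck               # one check per tag file
fi
```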

>I will then have to run fsck manually some times, or I would put it in my cron
>deamon weekly...

You had better not run an e2fsck on a busy system. Mind, a live system keeps changing the file system information, and a fs should be mounted ro before checking it.

In short since the file system check is not a kernel thingy you're pretty flexible.

Juergen

>> 2001.05.28 Mon 14:19:36

Another superuser 

Newsgroups: comp.unix.admin
> >I am trying to give a group of users 'not quite' superuser (extraordinary I
> >suppose!) access e.g their access will allow them to create other users.
>
> adding a 'new user' on stock unix requires 2 things:
> 1) Write access to passwd files (master.passwd, or passwd & shadow, etc).
> 2) the ability to create a home directory and probably chown it to the
>    user in question (create is fairly easy, chown is hard unless you allow
>    file giveaway, which has its own problems)
>
> However, once you've given someone write access to passwd, they can change
> their uid to 0, so you've given them root anyway.
>
> Sudo should be portable to most systems.  I run it on solaris, which
> is SVR4 derived.  Use a wrapper script, obviously, or you could end up
> giving away root anyway.

Some systems have a `useradd' command. You could use sudo and restrict the semi-admins to only running this command. (Assuming, of course, that you trust its security…)

Nate Eldredge

Tips from experienced admins wanted 

Newsgroups: comp.unix.admin
> I currently administer 3 unix boxes.  We've recently merged and I find I
> now have 10-12 boxes plus other network devices to
> administer/monitor/manage.  I'm curious to know what other administrators
> do to easily manage large numbers of boxes?
> For example:
> How do you distribute patches between boxes

This really is something that I'm not comfortable doing automatically. I will push the patch clusters out using ssh or rdist, but I always run the installs directly (preferably from console).

> Monitor logs

Centralized logging to a secured server that is only accessible via console and accepts its logs via SSH tunnels (or better yet a line printer).

> Monitor resource usage (RAM, CPU, Disk space etc.)

Big Brother and MRTG

> Audit the actions of assistant administrators and support staff

Sudo is probably the best way to handle this, though you need to get very paranoid about the sudoers file and you will need to get creative about the logging.

Fredrich P. Maney

syscall tracer 

Unix:truss 

truss -f -o log -t open,stat,access  <program>

Linux:strace 

Usage 

1<<, >>
strace -f -o /tmp/strace.out info zoog >/dev/null
grep -E 'open|fstat' /tmp/strace.out
!! | grep -v ENO
grep '^[0-9]* open(.* = [^-]' /tmp/strace.out
2<<, >>
strace -f -eopen -o /tmp/mplayer mplayer -vo xvidix test.mpg

Help 

-e expr     A qualifying expression which modifies  which  events  to
            trace or how to trace them.  The format of the expression
            is:
            [qualifier=][!]value1[,value2]...
            where qualifier is one of trace, abbrev, verbose, raw,
            signal, read, or write and value is a qualifier-dependent
            symbol or number.  The default qualifier is trace.  Using
            an exclamation mark negates the set of values.  For
            example, -eopen means literally -e trace=open which in
            turn means trace only the open system call.  By contrast,
            -etrace=!open means to trace every system call except
            open.  In addition, the special values all and none have
            the obvious meanings.
            Note that some shells use the exclamation point for
            history expansion even inside quoted arguments.  If so,
            you must escape the exclamation point with a backslash.
-e trace=set
            Trace only the specified set of  system  calls.   The  -c
            option is useful for determining which system calls might
            be     useful      to      trace.       For      example,
            trace=open,close,read,write  means  to  only  trace those
            four system calls.  Be  careful  when  making  inferences
            about the user/kernel boundary if only a subset of system
            calls are being monitored.  The default is trace=all.
-e trace=process
            Trace all system calls which involve process  management.
            This  is  useful  for  watching  the fork, wait, and exec
            steps of a process.
-e trace=network
            Trace all the network related system calls.
-e trace=signal
            Trace all signal related system calls.
-e trace=ipc
            Trace all IPC related system calls.

strace log clean up for the comparison 

If you want to compare between different strace logs, filter them with the following first.

awk '{$1=""; gsub(/0x[0-9a-f][0-9a-f][0-9a-f]+/,"0x..."); print}'
Note:
awk '{$1=""; gsub(/0x[0-9a-f]*/,"0x..."); print}'

would hide too much info. Eg., 'iopl(0x3)'.

Best way would be to use the following, but it failed to work (gawk 3.x honors interval expressions like {6,} only when given --re-interval or --posix):

awk '{$1=""; gsub(/0x[0-9a-f]{6,}/,"0x..."); print}'
$ awk --version
GNU Awk 3.1.5
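
What the filter does to a single strace line can be seen directly (a sketch; the line itself is made up):

```shell
# Blank the PID field and hide the hex address, as the filter above does
echo '1234 old_mmap(NULL, 4096, ...) = 0x40017000' |
  awk '{$1=""; gsub(/0x[0-9a-f][0-9a-f][0-9a-f]+/,"0x..."); print}'
# prints: " old_mmap(NULL, 4096, ...) = 0x..."
```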

documented on: 2006.10.08

cmd:info path setup 

Newsgroups: comp.unix.admin
>>I found that /usr/share/info is not in my info path. How can I add
>>it to my default info path? thanks

AFAIK, the GNU info program uses the $INFOPATH environment variable to define the path it'll traverse looking for whatever you're seeking info on, similar to the way man uses $MANPATH. The way to set this variable is to set it in one of your shell rc files - you could define it systemwide in /etc/profile or /etc/csh.cshrc or whatever shell users generally use on your box, or you could simply add it to your own user's rc files…

> Depends on which shell you are using.   If korn shell,
> add    export INFOPATH=$INFOPATH:/usr/share/info  in .profile

Bash'll source .profile, so'll the bourne shell…

Brian Scanlan

cmd:info path setup 

> > AFAIK, the GNU info program uses the $INFOPATH environment variable to define
>
> When I said "default info path", I want to know where is the conf
> file for the default paths.

There is no conf file for the default paths. The default INFOPATH is a compile time setting.

> That's why I want to find the default ones and add my own to it.
>
> Currently, my RH6.2 don't have any INFOPATH setting in system
> setting or my private setting, yet I am able to info most of the
> programs (of cause except those from /usr/share/info).

There are two ways to get the default $INFOPATH…

From the texinfo source ($SRC_ROOT/info/filesys.h)

#define DEFAULT_INFOPATH \
    "/usr/local/info:/usr/info:/usr/local/lib/info:/usr/lib/info:" \
    "/usr/local/gnu/info:/usr/local/gnu/lib/info:/usr/gnu/info:" \
    "/usr/gnu/lib/info:/opt/gnu/info:/usr/share/info:" \
    "/usr/share/lib/info:/usr/local/share/info:/usr/local/share/lib/info:" \
    "/usr/gnu/lib/emacs/info:/usr/local/gnu/lib/emacs/info:" \
    "/usr/local/lib/emacs/info:/usr/local/emacs/info:."

and by using strace…

$ strace -f -o /tmp/info.out info zoog >/dev/null
Process 21838 attached
Process 21838 detached
info: No menu item `zoog' in node `(dir)Top'.

Do a grep of /tmp/info.out and you can see info looking for the zoog infopage (which of course doesn't exist).

Here's a snippet of the strace output…

21837 stat("/usr/lib/info/dir", 0xbffff9e4) = -1 ENOENT (No such file or
directory)
21837 stat("/usr/lib/info/localdir", 0xbffff9e4) = -1 ENOENT (No such file or
directory)
21837 stat("/usr/local/gnu/info/dir", 0xbffff9e4) = -1 ENOENT (No such file or
directory)
21837 stat("/usr/local/gnu/info/localdir", 0xbffff9e4) = -1 ENOENT (No such
file or directory)

In these cases, /usr/lib/info and /usr/local/gnu/info are present in the default INFOPATH, the dir and localdir files are what info can use to build up a db of what info files are on the system…

Brian Scanlan

cmd:dmesg 

Usage 

dmesg > boot.messages
Useful: print out the bootup messages.
Actual: print or control the kernel ring buffer

Help 

-n level
set the level at which logging of messages is done to the
console.  For example, -n 1 prevents all messages, except
panic messages, from appearing on the console.  All levels of
messages are still written to /proc/kmsg, so syslogd(8) can
still be used to control exactly where kernel messages appear.
When the -n option is used, dmesg will not print or clear the
kernel ring buffer.

Quota Setup 

Help Sources 

Contents are copied directly from the following Linux HOWTO, but better arranged. The arrangement of the Linux HOWTO is the worst I have ever seen.

The Configuration Steps 

Modify /etc/fstab 

To enable user quota support on a file system, add "usrquota" to the fourth field, which contains the word "defaults" (man fstab for details).

/dev/hda1       /        ext2    defaults                1       1
/dev/hda2       /usr     ext2    defaults,usrquota       1       1
/dev/hdb5       /export  ext2    defaults,usrquota       1       2

Replace "usrquota" with "grpquota", should you need group quota support on a file system.

Create quota record "quota.user" and "quota.group" 

Both quota record files, quota.user and quota.group, should be owned by root, with read-write permission for root and none for anybody else.

Login as root. Go to the root of the partition you wish to enable quota, then create quota.user and quota.group.

touch /export/quota.user
touch /export/quota.group
chmod 600 /export/quota.user
chmod 600 /export/quota.group
If not, quotaon will fail:
% quotaon -avug
quotaon: using /export/quota.user on /dev/hdb5: No such file or directory

Turning on quota 

% quotacheck -avug
Scanning /dev/hdb5 [/export] done
Checked 6982 directories and 88226 files
Using quotafile /export/quota.user
% /sbin/quotaon -avug
/dev/hdb5: user quotas turned on
-a     All  file  systems in /etc/fstab marked read-write with quotas
       will have their quotas turned on.  This is  normally  used  at
       boot time to enable quotas.
-v     Display a message for each file system where quotas are turned
       on.
-u     Manipulate user quotas. This is the default.
-g     Manipulate group quotas.

Make it into the system init script (see the howto).

reboot!

Handling Quota with linuxconf 

Under "File systems" -> "Set quota defaults"

Check 

repquota -a

Tip !!
% repquota -a
jacky    --    2380    5000    6500             96  1000  1500
% quota -v jacky
Disk quotas for user jacky (uid 10102):
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
      /dev/hdb5    2380    5000    6500              96    1000    1500

quota 

quota: 013263m (uid 7431): permission denied
host:~/bin>quota -v 012953b
quota: 012953b (uid 7117): permission denied
Note: only the super-user may use the optional username argument to view the limits of other users.

documented on: 02-04-99

How to create /dev/null 

Newsgroups: comp.os.linux.setup
Date: 1999/01/21
:>> Does anyone know how to create /dev/null
: When all else fails:
: "man null"
: - man pages give you explicit instructions on creating a proper NULL device.
/bin/rm /dev/null
mknod -m 666 /dev/null c 1 3
chown root:mem /dev/null

Jayasuthan

How to create /dev/null 

null, zero - data sink

DESCRIPTION
Data written on a null or zero special file is discarded.

Reads  from  the null special file always return end of file, whereas
reads from zero always return \0 characters.

How to create /dev/null 

Linux 

$ dir /dev/null /dev/zero
crw-rw-rw-    1 root     root       1,   3 Oct 22 20:04 /dev/null
crw-rw-rw-    1 root     root       1,   5 May  5  1998 /dev/zero

Solaris 

$ dir /dev/null /dev/zero
lrwxrwxrwx   1 root     root   1998 /dev/null -> ../devices/pseudo/mm@0:null
lrwxrwxrwx   1 root     root   1998 /dev/zero -> ../devices/pseudo/mm@0:zero
$ dir /devices/pseudo/mm@0:null /devices/pseudo/mm@0:zero
crw-rw-rw-   1 root     sys   17:39 /devices/pseudo/mm@0:null
crw-rw-rw-   1 root     sys    1998 /devices/pseudo/mm@0:zero

cmd:realpath - return the canonicalized absolute pathname 

realpath expands all symbolic links and resolves references to '/./', '/../' and extra '/' characters in the null terminated string named by path and stores the canonicalized absolute pathname in the buffer of size PATH_MAX named by resolved_path. The resulting path will have no symbolic link, '/./' or '/../' components.

First appeared in BSD 4.4; on Linux it appears in libc 4.5.21.

The BSD 4.4, Linux and SUSv2 versions always return an absolute path name. Solaris may return a relative path name when the path argument is relative.
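A quick way to see the same resolution from the shell is the coreutils realpath(1) command, which wraps this library call (the directory and link names below are arbitrary):

```shell
# build a small tree with a symlink and resolve a path through it
d=$(mktemp -d)
cd "$d"
mkdir -p a/b
ln -s a/b link
realpath ./link/../b    # expands the symlink and the '..', printing .../a/b
```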

tmpfs filesystem 

mkdir MOUNTPOINT
mount tmpfs MOUNTPOINT -t tmpfs

Creating a tmpfs filesystem with a maximum size is easy. To create a new tmpfs filesystem with a maximum filesystem size of 32 MB, type:

mount tmpfs /dev/shm -t tmpfs -o size=32m

This time, instead of mounting our new tmpfs filesystem at /mnt/tmpfs, we created it at /dev/shm, which is a directory that happens to be the "official" mountpoint for a tmpfs filesystem. If you happen to be using devfs, you'll find that this directory has already been created for you.

Also, if we want to limit the filesystem size to 512 KB or 1 GB, we can specify size=512k and size=1g, respectively. In addition to limiting size, we can also limit the number of inodes (filesystem objects) by specifying the nr_inodes=x parameter. When using nr_inodes, x can be a simple integer, and can also be followed with a k, m, or g to specify thousands, millions, or billions (!) of inodes.

Also, if you'd like to add the equivalent of the above mount tmpfs command to your /etc/fstab, it'd look like this:

tmpfs   /dev/shm        tmpfs   size=32m        0       0
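On most modern Linux systems /dev/shm is already mounted as tmpfs, so you can inspect its size limit without remounting anything:

```shell
# show the size limit and current usage of the tmpfs at /dev/shm
df -h /dev/shm
# the mount options (including any size= limit) are visible in /proc/mounts
grep /dev/shm /proc/mounts
```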

welcome file 

>Which file do i need to modify to change the initial login message  -
>thats the message that appears before the login prompt.
/etc/motd

documented on: 1999.08.27 Fri

welcome message when login 

> i added some welcome message in the /etc/motd file, but everytime when i
> login, i can't see that message.
> which file control this function?
'man login' gives information on this.
  When you login, for Bourne shell and Korn shell logins, the shell executes
/etc/profile and $HOME/.profile, if it exists.
  For C shell logins, the shell executes /etc/.login, $HOME/.cshrc, and
$HOME/.login.
  The default /etc/profile and /etc/.login files check quotas, print
/etc/motd, and check for mail.
None of the messages are printed if the file $HOME/.hushlogin exists.
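The .hushlogin switch is just an empty file; a sketch, using a throwaway HOME so the real account is untouched:

```shell
# an empty ~/.hushlogin suppresses the motd/mail-check messages at login
HOME=$(mktemp -d)            # throwaway HOME for this demonstration
touch "$HOME/.hushlogin"     # quiet logins from now on
rm -f "$HOME/.hushlogin"     # back to verbose logins
```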

find user name with shadow passwd 

$ grep szabo /etc/passwd
+kszabo:::::/home/users/kszabo:/bin/csh
$ ypmatch kszabo passwd
kszabo:Q4QuCgWLhW/Xo:18226:2323:Kevin Szabo:/users/guest:/bin/csh
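On systems without ypmatch, getent(1) does the same job: it consults every configured name service (local files, NIS, LDAP), so it shows the resolved entry even when /etc/passwd only carries a "+" stub. "root" is used below only because it exists everywhere:

```shell
# look up a user through the name service switch
getent passwd root
```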

documented on: 2000.04.25 Tue 18:03:13

Sun Computer Administration FAQ 

Sun Computer Administration Frequently Asked Questions http://www.faqs.org/faqs/comp-sys-sun-faq/ Newsgroups: comp.sys.sun.admin,comp.sys.sun.misc,comp.unix.solaris,comp.answers,news.answers

csh interactive sessions 

My rdump is failing with a "Protocol botched" message. What do I do?

This occurs when something in .cshrc on the remote machine prints
something to stdout or stderr (eg. stty, echo). The rdump command
doesn't expect this, and chokes. Other commands which use the rsh
protocol (eg. rdist, rtar) may also be affected.
The way to get around this is to add the following line near the
beginning of .cshrc, before any command that might send something
to stdout or stderr:
if ( ! $?prompt ) exit
This causes .cshrc to exit when prompt isn't set, which
distinguishes between remote commands (eg. rdump, rsh) where these
variables are not set, and interactive sessions (eg. rlogin) where
they are.

How do I synchronize time on my Network? 

You should use xntp version 3 to synchronize your time. Xntp
synchronizes to "atomic" and/or Radio Frequency clocks. Using
xntp time should always be within a few "milliseconds" of the
actual time. Xntp does not require an "atomic" clock; any
stable UNIX host clock will do.
You can get XNTP version 3 from
ftp://ftp.udel.edu/pub/ntp/
XNTP works with all versions of SunOS(4.x and 5.x).
Also, a web page for XNTP is available at
http://www.eecis.udel.edu/~ntp
Finally, Solaris 2.6 now comes with XNTP version 3.5Y.
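A minimal /etc/ntp.conf sketch for a client; the server names are placeholders, so substitute servers appropriate for your network:

```
# /etc/ntp.conf -- minimal client configuration (hostnames are examples)
server ntp1.example.com
server ntp2.example.com
driftfile /etc/ntp.drift
```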

documented on: 2000.05.10 Wed 09:37:40

Security Products for Solaris 

> I've always managed security by myself, e.g. monitoring logs, disabling
> remote root access, etc.  My employer wants to know about security
> products for solaris.  I only knew of Satan.  Can anyone recommend any
> they prefer?

Lots of great info here:

http://www.alw.nih.gov/Security/prog-full.html

documented on: 2000.05.19 Fri 12:14:07

hosts.deny fills up redundantly 

Newsgroups: comp.os.linux.misc
> I am running portsentry on my system.  I find that every day entries to
> my hosts.deny increases, which would be fine if the new entries WERE
> always new.  Instead, I get a couple new/unique entries added to
> hosts.deny but, by far, the majority of entries are redundant.  I end
> up with a file loaded with repeated entries of the same IP address.

Why use the hosts.deny feature? All you need in /etc/hosts.deny is

ALL: ALL

or

ALL: ALL : spawn (echo Attempt from %h %a to %d at `date` | tee -a /var/log/tcp.deny.log | mail root )

Then just allow who you want in hosts.allow
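The matching whitelist goes in /etc/hosts.allow; a sketch (the service name and addresses are examples for a private LAN):

```
# /etc/hosts.allow -- list only the services and hosts you trust
sshd:   192.168.1.0/255.255.255.0
ALL:    127.0.0.1
```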

David Turley