cmd:lndir 

SYNOPSIS

lndir [ -silent ] [ -ignorelinks ] [ -withrevinfo ] fromdir [ todir ]

Caution: unlike cp, lndir does not create a directory for you, ie, all dirs/files from the 'fromdir' will be created under the current directory unless the 'todir' is specified.

Also, lndir will not create the 'todir'; you have to create it first if it does not exist.
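Where lndir is not installed, its core behavior can be approximated with find and ln -s. This is a rough sketch only (it assumes GNU/BSD find's -mindepth, and skips what the real lndir also handles: -ignorelinks, RCS dirs, etc.); the demo tree paths are throwaway:

```shell
#!/bin/sh
# Rough emulation of 'lndir fromdir todir': recreate the directory
# tree under todir and symlink every regular file back to fromdir.
fromdir=$(mktemp -d)              # stand-in source tree for the demo
todir=$(mktemp -d)                # todir must already exist, as with lndir
mkdir -p "$fromdir/a/b"
echo hello > "$fromdir/a/b/f.txt"

# Recreate the directory tree first...
find "$fromdir" -mindepth 1 -type d | while read -r d; do
    mkdir -p "$todir/${d#"$fromdir"/}"
done
# ...then symlink every regular file back to the source tree
find "$fromdir" -type f | while read -r f; do
    ln -sf "$f" "$todir/${f#"$fromdir"/}"
done
ls -l "$todir/a/b/f.txt"
```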

cmd:ln 

Usage 

file 
ln -s index.htm index.html
dir 
ln -s ~/ndl/libwww ~/install/libwww

— works fine for this case, but not necessarily for www; it depends on how apache is configured. <<:2000.08.03 Thu:>>

iitrc:~/ndl/libwww$ ls *me | xargs1 ln {} ~/www/tools/libwww

— has to be done this way for files outside of www: hard link them. *N*: use hard links IFF it is for www!

ls ~+1/* | xargs -n 1 ln -s
ls ~+1/* | doeach ln -s @_
Note 'doeach.pl ln -s ./@_ ~+1/' doesn't work! ln just takes whatever is on the command line and creates the link as is. No hidden processing.
cd /opt/bin
ln -s ../pkg1/bin/* .
Obj: for all subdirs that contain the file ++index.html,
    create a relative symlink to it named index.html
ff . ++index.html | sed 's|/++|/|'  | xargs1 ln -sf ++index.html
$ dir ./news/e21/usCommentary/2001/0329/index.html
 ./news/e21/usCommentary/2001/0329/index.html -> ++index.html
Note file ++index.html need not exist in current dir!
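The same objective can be reached with plain find, avoiding the local wrappers (ff, xargs1) used above. A sketch with a hypothetical throwaway tree:

```shell
#!/bin/sh
# For every directory containing a file '++index.html', create a
# relative symlink 'index.html' in that directory pointing to it.
top=$(mktemp -d)                       # hypothetical demo tree
mkdir -p "$top/news/0329"
echo page > "$top/news/0329/++index.html"

find "$top" -name '++index.html' | while read -r f; do
    # -sf: overwrite a stale index.html if one already exists;
    # the target is relative, so it resolves within the link's own dir
    ln -sf '++index.html' "$(dirname "$f")/index.html"
done
```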
ln -s /usr/local/teTeX/bin/sparc-solaris/* /usr/local/bin
ln -s /usr/local/teTeX/man/man1/* /usr/local/man/man1
ln -s /usr/local/teTeX/man/man5/* /usr/local/man/man5
ln -s /usr/local/teTeX/info/*info* /usr/local/info
ln -s some_dir/* .
Note link from somewhere else to current dir:
/usr/shared# ln -s teTeX/bin/tie bin
/usr/shared# dir bin/tie
lrwxrwxrwx   1 root     other          13 Aug  3 18:12 bin/tie -> teTeX/bin/tie

— nono: the relative target is resolved against bin/, so bin/tie points at the nonexistent /usr/shared/bin/teTeX/bin/tie.

/usr/shared/bin# ln -s ../teTeX/bin/tie .
/usr/shared/bin# dir tie
lrwxrwxrwx   1 root     other          16 Aug  3 18:14 tie -> ../teTeX/bin/tie*
/usr/shared/bin# ln -s ../teTeX/bin/* .
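The difference between the two attempts above can be checked mechanically: a relative link target is resolved against the directory holding the link, not the directory ln was run from. A small reproduction with throwaway paths:

```shell
#!/bin/sh
# A relative symlink target resolves against the link's own directory,
# so '../teTeX/bin/tie' only works for a link living in a sibling of teTeX.
root=$(mktemp -d)
mkdir -p "$root/teTeX/bin" "$root/bin"
echo '#!/bin/sh' > "$root/teTeX/bin/tie"

# wrong: creates bin/tie -> teTeX/bin/tie, which resolves to
# $root/bin/teTeX/bin/tie -- nonexistent (the "nono" case above)
( cd "$root" && ln -s teTeX/bin/tie bin )

# right: create the link from inside bin, pointing one level up
rm "$root/bin/tie"
( cd "$root/bin" && ln -s ../teTeX/bin/tie . )
cat "$root/bin/tie"
```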

Test 

Test 1
host:~/www>ln index.htm index.html
host:~/www>dir
-rw-r--r--   2 034897s cstudent      174 Feb 17 14:56 index.htm
-rw-r--r--   2 034897s cstudent      174 Feb 17 14:56 index.html

— two "same" files, with "link count" of 2

Test 2
host:~/www>ln -s index.htm index.html
ln: index.html: File exists
host:~/www>rm index.html
host:~/www>dir
-rw-r--r--   1 034897s cstudent      174 Feb 17 14:56 index.htm

— index.htm remains after rm index.html; "link count" is down to 1

Test 3
host:~/www>ln -s index.htm index.html
host:~/www>dir
-rw-r--r--   1 034897s cstudent      174 Feb 17 14:56 index.htm
lrwxrwxrwx   1 034897s cstudent        9 Feb 17 15:03 index.html -> index.htm

— two files, one a symbolic link; "link counts" are still 1

Test 4
host:~/ndl>ln get-1.5.3.tar.gz ~/www/tools
host:~/ndl>dir ~/www/tools
total 463
-rw-r--r--   2 034897s cstudent   446966 Feb 17 20:53 get-1.5.3.tar.gz

*N*: can't use a symbolic link this time; the web server refuses to serve files outside the www dir. For links from web to home files, hard links have to be used!

help 

NAME

ln - make links between files

SYNOPSIS

ln [options] source [dest]
ln [options] source... directory
It  is  an error  if the last argument is not a directory and more than
two files are given.  It makes hard links  by  default.   By default, it
does not remove existing files.
-s, --symbolic
     Make symbolic links instead of hard links.  This option
     produces  an  error message on systems that do not sup-
     port symbolic links.
Newsgroups: comp.unix.shell
> > > Does anyone know if it's possible to do a search for broken sym.links and
> > > a remove of them after that?
> >
> > Try
> >
> > find . -type l ! -exec test -r {} \; -print
>
>Does it really work? I tried it and find that it lists all my links, not
>only the dead links...

It works for me. Leaving out '! -exec test -r {} \;' causes it to print all links.

What it actually tests for is not having read permission to the file that the link points to. So if you have links to non-readable files, they'll be listed as well. You could try using -f instead of -r, but that will cause links to directories to be listed.

Barry Margolin

Tapani had said:

in some shells you need to escape '!'

Has to be escaped under bash; no need in sh.
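With GNU find, broken symlinks can be matched directly, sidestepping the read-permission caveat above: -xtype l matches symlinks whose target does not exist (equivalently, find -L . -type l). A throwaway demo:

```shell
#!/bin/sh
# GNU find's -xtype tests the type of what a symlink points TO;
# a dangling link still shows as type 'l', a resolvable one does not.
d=$(mktemp -d)
cd "$d"
touch real
ln -s real good       # resolvable link: -xtype sees a regular file
ln -s no-such dead    # dangling link: -xtype still sees 'l'
find . -xtype l       # prints only ./dead
```

To delete the broken links instead of listing them, append `-exec rm {} +`.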

iitrc:~/ndl/libwww/libwww-perl-5.41/bin$ dir
total 78
-rwxr-xr-x   2 tong    other       6349 Nov 19 17:44 lwp-download.PL*
-rwxr-xr-x   2 tong    other       2817 Dec  3  1997 lwp-mirror.PL*
-rwxr-xr-x   2 tong    other      13979 Nov 19 17:44 lwp-request.PL*
-rwxr-xr-x   2 tong    other      15107 Sep 11 05:41 lwp-rget.PL*
iitrc:~/ndl/libwww/libwww-perl-5.41/bin$ ln * ~/sbin
iitrc:~/sbin$ ls | xargs1 mv {} '`fname {}`'
>mv lwp-download.PL `fname lwp-download.PL`
>mv lwp-mirror.PL `fname ...
iitrc:~/sbin$ dir
total 78
-rwxr-xr-x   2 tong    other       6349 Nov 19 17:44 lwp-download*
-rwxr-xr-x   2 tong    other       2817 Dec  3  1997 lwp-mirror*
-rwxr-xr-x   2 tong    other      13979 Nov 19 17:44 lwp-request*
-rwxr-xr-x   2 tong    other      15107 Sep 11 05:41 lwp-rget*
iitrc:~/sbin$ dir ~/ndl/libwww/libwww-perl-5.41/bin
total 78
-rwxr-xr-x   2 tong    other       6349 Nov 19 17:44 lwp-download.PL*
-rwxr-xr-x   2 tong    other       2817 Dec  3  1997 lwp-mirror.PL*
-rwxr-xr-x   2 tong    other      13979 Nov 19 17:44 lwp-request.PL*
-rwxr-xr-x   2 tong    other      15107 Sep 11 05:41 lwp-rget.PL*

documented on: 02-24-99 10:46:39

*Tags*: jbuilder

Symptom 

can't put a symlink in /opt/bin for jbuilder.

Reason 

jbuilder is a simple shell script. It does not handle the case where it is invoked through multiple levels of symlinks.

Solution 

Handle the case that there might be several links before reaching the final shell script.

$ diff -wU 1 jbuilder.org jbuilder > /tmp/mll.jb
@@ -6,3 +6,15 @@

-SDIR=`dirname $0`
+PRG=$0
+# Resolve symlinks
+while [ -L "$PRG" ]; do
+    ls=`ls -ld "$PRG"`
+    link=`expr "$ls" : '.*-> \(.*\)$'`
+    if expr "$link" : '/' > /dev/null; then
+    PRG="$link"
+    else
+    PRG="`dirname $PRG`/$link"
+    fi
+done
+
+SDIR=`dirname $PRG`
 cd $SDIR
 SDIR=`pwd`
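On systems with GNU coreutils, the same resolve-symlinks loop can be replaced by a single call: readlink -f follows every level of symlink and prints the canonical path (realpath does the same). A sketch with a throwaway multi-level link chain:

```shell
#!/bin/sh
# readlink -f follows every level of symlink, so a multi-level
# linked launcher can find its real directory in one line.
d=$(mktemp -d)
mkdir -p "$d/opt/jb" "$d/bin"
echo script > "$d/opt/jb/jbuilder"
ln -s "$d/opt/jb/jbuilder" "$d/bin/jb1"   # level 1
ln -s "$d/bin/jb1" "$d/bin/jb2"           # level 2

# fully resolved, whatever the link depth:
SDIR=$(dirname "$(readlink -f "$d/bin/jb2")")
echo "$SDIR"
```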

documented on: 2000.11.13 Mon 21:57:37

Newsgroups: comp.os.linux.misc
>I'm trying to make a hard link from a directory in my home directory to
>/mnt/robbo.

Others have already identified the issue and fixed it (use ln -s), but I will merely comment on a more technical aspect of this problem.

There are actually two representations of a file or directory; it is also worth noting that a directory is a file — a special type of file which can only be manipulated by the kernel, specifically the file system responsible for the volume upon which the directory resides. In this respect, Linux ext2fs isn't that much different from DOS FAT/FAT32, except for the fixed-size root directory — or probably from NTFS (NT's default file system), although I don't know the details.

The first representation is the familiar path representation, which starts at the root or the current directory — one can call this the "naming tree" (and it actually was in at least one OS), even though this is not strictly accurate; see below. This is the one that all users and most programmers see. Note that there's no drive specifier in Unix; all paths start from the root, which is the root of the first mounted volume. Subsequent volumes are mounted lower down in the naming tree; for example, an installation may associate / with /dev/hda1, swap with /dev/hda5, /usr with /dev/hda6, and /var with /dev/hda7 (this association is managed either manually, using the mount(1m) command, or by editing the file /etc/fstab, which is read upon bootup and by 'mount -a'). The name resolver knows what to do if fed a pathname such as /usr/local/bin/netscape; it will get the file from /dev/hda6's filesystem with the path '/local/bin/netscape'. This makes things more flexible than the "Map Network drive" command in Windows, or even the '\\server\sharename' path form, as it is controlled by the node requesting the mount, not the node sharing the information.

I will call the shortest possible pathname to access a file or directory (without symbolic links) the "canonical pathname" in the following.

The second representation is using an internal number, called an inode number. This representation is only of interest to hardcore developer types who like to muck around in the file system — or do something unusual to repair a damaged one. However, understanding this concept helps to explain what a "hard link" is. (By convention, the root inode of an ext2-formatted volume is always 2, for some reason; this predates Linux. Note that this isn't a given for other volume types; a FAT volume has a root inode of 1 in Linux, for example. (It's somewhat arbitrary as a FAT volume has that weird root directory contiguous area, anyway.) )

A soft link is just a text representation, relative to the link's parent. If /a/b/c/d is a soft link containing the text "../../e/f/g", attempting to open /a/b/c/d results in the resolving of the text /a/b/c/../../e/f/g, which (usually!) leads to the path /a/e/f/g, which may or may not exist. I am not sure of the details of this at this time, but if you're really interested, you can peruse the kernel source code, starting at /usr/src/linux/fs/namei.c.

This can get rather involved if multiple soft links are encountered. Most systems, Linux included, will stop at 10 or so links, and return the error ELOOP ("Too many levels of symbolic links").

A hard link, by contrast, is actually *another* file entry. As you may have already noticed, there is no requirement that the mapping between canonical root path and inode be one-to-one; one can easily envision /a/b/c/d and /a/e/f/g pointing to the same inode. This is precisely what 'ln' (sans -s option) does:

ln /a/e/f/g /a/b/c/d

will create a file or directory /a/b/c/d, while taking the inode number from /a/e/f/g, whatever it is. (One can list the inode number by using ls -i.)

That's all a hard link is.
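This is easy to verify: ls -i shows the inode number, and the link count climbs with each extra name. A throwaway demo (stat -c is the GNU coreutils form):

```shell
#!/bin/sh
# Two names, one inode: hard links share the inode number, and the
# link count (stat %h) reflects how many names refer to it.
d=$(mktemp -d); cd "$d"
echo data > g
ln g h                      # hard link: a second name for the same inode
ls -i g h                   # same inode number printed for both names
ino_g=$(stat -c '%i' g)
ino_h=$(stat -c '%i' h)
nlink=$(stat -c '%h' g)     # 2: two names refer to this inode
```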

Once created, a hard link is indistinguishable from the file it is linking to, except perhaps for the modification time of its containing directory.

A couple more things.

First, there's the concept of links (stat.st_nlink; see /usr/include/bits/stat.h) — just to confuse things even further. When a directory refers to a file or another directory, st_nlink is incremented for the file referred to; this means that files normally have a link count of 1, and directories have a link count of 2 + subdirectories (remember that . and .., which are present in every directory, have to have their counts adjusted, too — but . is itself, and .. is the parent). find(1) uses this information to attempt to optimize its directory scanning; the -noleaf option disables this optimization if necessary.

In other words, stat.st_nlink is a link *count*.

(The link count is the first number in an ls -l output, just after the permissions and before the owner.)

However, if an object has been hard-linked, the link count increases; if a file has 3 names, its link count will be 3. This means that the "naming tree" is in fact a directed-acyclic-graph, if not worse.

If one deletes a name, the link count decreases; the space on the volume used by that file or directory is actually reclaimed only when the count goes to 0.

If one renames a hard-linked file, the inode doesn't change, just the name. All other names referring to that file still refer to the same file.

Note that soft links don't increment the link count and can become "broken" if the object being referred to (or one of its ancestor directories) is removed or moved away. One can also create a file through a soft link (but all the directories above it must exist), but a hard-linked file must exist at the time of the hard link. Also, soft links can refer to another object; check out /lib, for example, for a straightforward application of this technique:

$ ls -l /lib/libc*
-rwxr-xr-x   1 root     root      4101324 Feb 29  2000 /lib/libc-2.1.3.so
lrwxrwxrwx   1 root     root           13 May 15 18:59 /lib/libc.so.6 -> libc-2.1.3.so
...

Second, as another poster has pointed out, one can do very bad things with hard links, if one is careless; the typical problem is a directory reference loop. Also, it's not clear to me that find will function quite right, as the link count for the directory will be slightly off. (It's probably not a big problem, as I suspect it's used to allocate an internal array; it would only be a problem if the count is too *low*.)

Hope this helps. :-)

ewill @ aimnet.com

documented on: 2000.10.25 Wed 00:05:57

how to find a users *true* home directory with a shell script? 

Newsgroups: comp.unix.shell,comp.unix.admin
>>Here's my suggestion:
>>
>>UsersHomedir=`cd ~$UserID;pwd`
>>
>
>Unfortunately, depending on your implementation of pwd, this may
>return the symlinked path.  (It certainly does for me on Linux, HP-UX,
>AIX, IRIX, and Solaris).

Investigating further - this works fine if the pwd concerned is /bin/pwd and not /sbin/pwd.

Julian

how to find a users *true* home directory with a shell script? 

$ dir /home
lrwxrwxrwx    1 root     root           12 Dec 16 01:09 /home -> /export/home/
tong@sunny:~$ pwd
/home/tong
tong@sunny:~$ type pwd
pwd is a shell builtin
tong@sunny:~$ sh -c pwd
/home/tong
tong@sunny:~$ /bin/pwd
/export/home/tong
tong@sunny:~$ /sbin/pwd
bash: /sbin/pwd: No such file or directory
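In modern shells the builtin accepts -P ("physical"), which resolves symlinks the same way /bin/pwd does, so no external binary is needed. A sketch with a throwaway symlinked directory:

```shell
#!/bin/sh
# Plain pwd may keep the logical, symlinked path the shell cd'ed
# through; pwd -P resolves the symlinks to the true directory.
d=$(mktemp -d)
mkdir "$d/real"
ln -s real "$d/home"
cd "$d/home"
pwd               # may print .../home (logical)
pwd -P            # prints .../real (physical)
```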

documented on: 2001.05.16 Wed 17:25:51

Newsgroups: comp.unix.shell,comp.unix.solaris,comp.unix.programmer

Remove the symlink and then recreate it:

lchmod()
{
  if [ ! -h "$1" ]; then
     echo "$1 is not a symlink." >&2
     exit 1
  fi
  symlink=`ls -ld $1 | awk '{print $9}'`
  target=`ls -ld $1 | awk '{print $11}'`
  rm $1
  ln -s $target $symlink
}

This won't work if either of the names contains whitespace.

You can do a little better by splitting on the ' -> ', but even that isn't reliable (consider "ln -s 'a -> b' c"). To make it robust you would have to resort to a C program or perl script (or anything else that can do a proper readlink() call).

Geoff Clare

documented on: 2001.03.21 Wed 16:37:52
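GNU coreutils ships a readlink(1) command that wraps the readlink() call directly, so the awk parsing above is unnecessary and names containing whitespace or ' -> ' are handled correctly. A sketch reusing the pathological example from the post:

```shell
#!/bin/sh
# readlink(1) prints the stored link target verbatim, so even a
# target containing ' -> ' comes back intact.
d=$(mktemp -d); cd "$d"
ln -s 'a -> b' c                 # the pathological case from the post
target=$(readlink c)
echo "$target"

# whitespace-safe remove-and-recreate, as in the lchmod() idea above
rm c
ln -s "$target" c
```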