>echo * | xargs echo
asciiart.tar.gz assn.tar.gz bin.tgz doc.tar.gz
reads arguments from the standard input, delimited by blanks (which can be protected with double or single quotes or a backslash) or newlines, and executes the command (default is /bin/echo) one or more times with any initial-arguments followed by arguments read from standard input. Blank lines on the standard input are ignored.
>echo * | xargs echo
asciiart.tar.gz assn.tar.gz bin.tgz doc.tar.gz
>echo * | xargs -n 1 echo
asciiart.tar.gz
assn.tar.gz
bin.tgz
doc.tar.gz
--replace[=replace-str], -i[replace-str] Replace occurrences of replace-str in the initial arguments with names read from standard input. Also, unquoted blanks do not terminate arguments. If replace-str is omitted, it defaults to "{}" (like for `find -exec'). Implies -x and -l 1.
--max-lines[=max-lines], -l[max-lines] Use at most max-lines nonblank input lines per command line; max-lines defaults to 1 if omitted. Trailing blanks cause an input line to be logically continued on the next input line. Implies -x.
--max-args=max-args, -n max-args Use at most max-args arguments per command line. Fewer than max-args arguments will be used if the size (see the -s option) is exceeded, unless the -x option is given, in which case xargs will exit.
-t Enable trace mode. Each generated command line will be written to standard error just prior to invocation.
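The -n splitting is easy to see with a quick sketch (the input words here are made up):

```shell
# Six input words, at most two arguments per echo invocation,
# so xargs runs echo three times:
printf '%s\n' a b c d e f | xargs -n 2 echo
# → a b
#   c d
#   e f
```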
$ echo `jot 3` | xargs -n 1 -i echo {} {}
1 2 3 1 2 3
$ jot 3 | xargs -i -n 1 echo {} {}
{} {} 1
{} {} 2
{} {} 3
$ jot 3 | xargs -i echo {} {}
1 1
2 2
3 3
— Use the above form with 'ls -1 | xargs -i'!!
$ ls ind* | xargs -i echo mv {} _{}
mv index.html _index.html
mv index001.html _index001.html
— no -1 will also do!!! <<:Thu 06-24-99:>>
*N*: If you have to use echo, use this (Sun 03-14-99):
echo * | xargs -n 1 | xargs -i echo {} {}
alias xargsi="tr '\n' '\0' | xargs -0 -i"
— the same as alias xargsi="xargs -i"
alias xargs1="tr '\n' '\0' | xargs -0 -n 1"
— the same as alias xargs1="xargs -l 1"
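A sketch of what the tr-to-NUL trick buys: a name containing a space ('a b', made up here) survives as a single argument instead of being split:

```shell
# Without -0, 'a b' would be split into two arguments; with NUL
# delimiters each line stays one argument per invocation:
printf 'a b\nc\n' | tr '\n' '\0' | xargs -0 -n 1 echo
# → a b
#   c
```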
$ ls
I mean "no"  a  b'c  d

$ find . -print0 | xargs -0 -l1 rm
rm: cannot remove `.' or `..'

$ dir
total 0
documented on: 02-20-99
find . -group 500
find ~/www/ai -name lget-log.txt -print -exec cat {} \;
— Should use parameters at full length
find /etc/ -newer /root/anaconda-ks.cfg
-size N[bckw]
True if the file uses N units of space, rounding up. The units are 512-byte blocks by default, but they can be changed by adding a one-character suffix to N:
`b'  512-byte blocks
`c'  bytes
`k'  kilobytes (1024 bytes)
`w'  2-byte words
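A minimal sketch of -size with the k suffix (assumes GNU find, mktemp, and dd; file names are made up). Because sizes round up to whole units, the 1-byte file counts as 1k and only the 5k file matches +2k:

```shell
d=$(mktemp -d)                                   # scratch dir, out of the way
printf x > "$d/small"                            # 1 byte → rounds up to 1k
dd if=/dev/zero of="$d/big" bs=1024 count=5 2>/dev/null   # 5k
find "$d" -type f -size +2k                      # matches only .../big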
find /java/expresso/webapps/expresso/expresso/doc/edg/ /www/html/docs/expresso/javadoc/ -name '*.html'
host:~/ndl>find . -name *.tar -exec echo mv {} `fname {}`.bar \;
mv ./st.tar .bar
host:~/ndl>find . -name *.gz -exec echo mv {} `fname {}`.bar \;
find: paths must precede expression
Usage: find [path...] [expression]
— the same syntax as the previous one; the only difference is that this time the glob matches many files, so the unquoted *.gz expands in the shell and find complains. Note that the previous one is not working as supposed to either: the `fname {}` substitution is expanded by the shell before find ever runs.
find /var/ -path '*/spool' -prune -o -path '*/cpan' -prune -o -path '*/www' -prune -o -type d
host:~/ndl>find . -name *.tar -exec 'echo mv {} `fname {}`.bar \;'
find: missing argument to `-exec'
host:~/ndl>find . -name *.tar '-exec echo mv {} `fname {}`.bar \;'
find: invalid predicate `-exec echo mv {} `fname {}`.bar \;'
host:~/ndl>find . -name *.tar -'exec echo mv {} `fname {}`.bar \;'
find: invalid predicate `-exec echo mv {} `fname {}`.bar \;'
find ~/www/ai -name lget-log.txt -print '-exec echo ; cat {} \;'
find: bad option -exec echo ; cat {} \;
find: path-list predicate-list
iitrc:~/ndl/libwww/libwww-perl-5.41/bin$ find . -name '*.PL' -exec echo mv {} {}.bar \;
mv ./lwp-rget.PL {}.bar
mv ./lwp-download.PL {}.bar
mv ./lwp-mirror.PL {}.bar
mv ./lwp-request.PL {}.bar
Should use '' when using * or ?.
Can only use {} once!
Give up trying to use more than one command, or more than one {}.
1999.09.02 Thu 16:56:06
>How can I use ` in find -exec?
>E.g.:
>find -name '*.tgz' -exec echo mv {} `basename {}` \;
The simplest solution is to write a script that does what you want, and invoke that with the -exec option. E.g. create a script named moveit that contains:
#!/bin/sh
mv "$1" "`basename $1`"
and then do:
find -name '*.tgz' -exec moveit {} \;
find ~/temp -name 'b*.tgz' -exec fileh fnh mv basename {} \;
Barry Margolin
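The helper script can also be inlined with sh -c, avoiding the extra file (a sketch of the same idea; the trailing "sh" is a dummy argument that fills in $0):

```shell
# Inline equivalent of the moveit script: the file name travels as $1,
# so backquotes/"$(...)" are evaluated per file, inside the child shell.
find . -name '*.tgz' -exec sh -c 'mv "$1" "$(basename "$1")"' sh {} \;
```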
> Q1: What is the easiest way to delete all the (say) .o files under all
> the sub-directories?
find . -name "*.o" -print | xargs rm -f
> Q2: What is the easiest way to delete all the files under all the
> sub-directories that are not write protected?
rm -r directoryname 2>/dev/null
documented on: 07-31-99
$ find | xargs echo
xargs: unmatched single quote
ls | grep "'" | doeach.pl mv "@'@_@'" "@'@~echo \\@'@_\\@' @b sed \\@'s/'/@@/g\\@'@~@'"
find . -ls | cut -c68- | sed "s/'/\\\\'/g" | xargs touch -c -t 200104150000 # -maxdepth 1
find -print0 | xargs -0 echo
ddate=200104150000
find -print0 | xargs -0 touch -c -t $ddate
find -type d -print0 | xargs -0 touch -c -t $ddate
documented on: 2001.04.13 Fri 03:43:11
>How can I get access mode of a file?
[pizza:/export/home/root] /usr/local/bin/find /etc/passwd -printf %m
444
documented on: 04-20-99 15:37:52
whereis is not very helpful.
echo $MANPATH | tr ":" "\n" | doeach.pl find @_ ' -name string*'
documented on: Fri 07-09-99 10:22:27
> So, what should I do, to find files that are modified within 3 days?
>
It's obscurely noted in the manual, but for numbers:
+n  means greater than n
 n  means exactly n
-n  means less than n
So you would want:
find path -mtime -3
Dan Mercer
find -mtime -5 -printf "%TY-%Tm-%Td %TT %p\n" | sort
— Great!
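A quick sketch of the -n convention for -mtime (assumes GNU touch's -d option; file names are made up). old.txt is backdated 10 days, so only new.txt is younger than 3 days:

```shell
d=$(mktemp -d)
touch -d '10 days ago' "$d/old.txt"   # GNU touch: set mtime 10 days back
touch "$d/new.txt"                    # mtime = now
find "$d" -type f -mtime -3           # prints only .../new.txt
```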
>I can use -perm parameter of find to find writable files but how can I
>find read-only files? thanks!
Read-only files are files that are not writable, so you first write an expression that matches writable files and then negate it:
find ! \( -perm -200 -o -perm -020 -o -perm -002 \)
Barry Margolin
actually, you just need one -perm:
find . ! -perm +0222
Carlos Duarte
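A minimal check of the idea (assumes GNU find, which now spells the older +0222 as /222, meaning "any write bit set"; file names are made up):

```shell
d=$(mktemp -d)
touch "$d/ro" "$d/rw"
chmod 444 "$d/ro"                 # no write bit anywhere
chmod 644 "$d/rw"                 # owner-writable
find "$d" -type f ! -perm /222    # prints only .../ro
```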
documented on: 2000.02.22 Tue 12:23:59
Newsgroups: comp.os.linux.misc Date: 2002-11-24 15:42:12 PST
> I need to know a way that I can list all the files starting from /
> that have sizes above e.g. 100Mb.
find / -size +102400k
CrayzeeWulf
> find / -size +102400k
Yeah, that'd work, but for more general use, the -s option to ls might work better. Less to type, anyway. "ls -s | sort -n" will give you a list of all the files in the current directory with size in K prepended, sorted by size with largest last. Add -R to the ls, you'll get a recursive listing.
Matt G
You can check whether two files are the same file with two hard links by:
$ ls -li file1 file2
documented on: 2007.08.03
Note that the rsync '-a' (archive mode) does not preserve hard links:
$ ls -li inc*
107256 -rw------- 3 tong tong 10 03-23 10:33 inc.txt
107544 lrwxrwxrwx 1 tong tong  7 03-23 12:13 inc2.txt -> inc.txt
107256 -rw------- 3 tong tong 10 03-23 10:33 inc31.txt
107256 -rw------- 3 tong tong 10 03-23 10:33 inc32.txt
rsync -vua . /tmp/lnk_tst
cd /tmp/lnk_tst
$ ls -li inc*
217084 -rw------- 1 tong tong 10 03-23 10:33 inc.txt
217061 lrwxrwxrwx 1 tong tong  7 06-02 11:47 inc2.txt -> inc.txt
217085 -rw------- 1 tong tong 10 03-23 10:33 inc31.txt
217086 -rw------- 1 tong tong 10 03-23 10:33 inc32.txt
For the rsync command to preserve hard links, use an extra -H:
-H, --hard-links preserve hard links
rm -rf /tmp/lnk_tst
rsync -vuaH . /tmp/lnk_tst
cd /tmp/lnk_tst
$ ls -li inc*
217067 -rw------- 3 tong tong 10 03-23 10:33 inc.txt
216974 lrwxrwxrwx 1 tong tong  7 06-28 16:44 inc2.txt -> inc.txt
217067 -rw------- 3 tong tong 10 03-23 10:33 inc31.txt
217067 -rw------- 3 tong tong 10 03-23 10:33 inc32.txt
$ rsync -v
rsync  version 2.6.9  protocol version 29
To copy files while preserving hard links, use tar:
mkdir /tmp/lnk_tst2
tar -cSf - . | tar -xvSpf - -C /tmp/lnk_tst2
cd /tmp/lnk_tst2
$ ls -li inc*
217117 -rw------- 3 tong tong 10 03-23 10:33 inc.txt
217118 lrwxrwxrwx 1 tong tong  7 06-02 11:55 inc2.txt -> inc.txt
217117 -rw------- 3 tong tong 10 03-23 10:33 inc31.txt
217117 -rw------- 3 tong tong 10 03-23 10:33 inc32.txt
$ tar --version
tar (GNU tar) 1.16
Well, actually 'cp -a' preserves hard links as well:
cp -a . /tmp/lnk_tst
$ ls -li /tmp/lnk_tst/inc*
217074 -rw------- 3 tong tong 10 03-23 10:33 /tmp/lnk_tst/inc.txt
217075 lrwxrwxrwx 1 tong tong  7 06-14 15:37 /tmp/lnk_tst/inc2.txt -> inc.txt
217074 -rw------- 3 tong tong 10 03-23 10:33 /tmp/lnk_tst/inc31.txt
217074 -rw------- 3 tong tong 10 03-23 10:33 /tmp/lnk_tst/inc32.txt
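The preservation can also be verified mechanically by comparing inode numbers (a sketch assuming GNU stat's -c %i; the paths are made up):

```shell
# Two hard links to the same data; after cp -a the copies must still
# share one inode if hard links were preserved.
src=$(mktemp -d); dst=$(mktemp -d)
echo data > "$src/a.txt"
ln "$src/a.txt" "$src/b.txt"
cp -a "$src/." "$dst/"
[ "$(stat -c %i "$dst/a.txt")" = "$(stat -c %i "$dst/b.txt")" ] && echo preserved
# → preserved
```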
documented on: 2008-06-02
Newsgroups: gmane.linux.debian.user Date: Tue, 17 Oct 2006 22:52:25 +0100
On Tue, 17 Oct 2006 15:46:46 -0700, Bob McGowan wrote:
>> Is it possible to find the hard links of the same file? Ie, group the
>> above finding into same file groups?
>
> find . -type f -links +1 -ls | sort -n -k 1
>
> This command line will [...]
Bingo! thanks a lot.
> How can I find hard linked files?
All "regular" files are hard links. See http://en.wikipedia.org/wiki/Hard_link
The stat(1) command tells you how many files point at a given inode (so, if "Links:" > 1, you have two file names and one file):
$ touch a
$ ln a b
$ stat a
  File: `a'
  Size: 0          Blocks: 0          IO Block: 4096   regular empty file
Device: fe01h/65025d    Inode: 1287087    Links: 2
Access: (0644/-rw-r--r--)  Uid: ( 1000/ jon)   Gid: ( 1000/ jon)
Access: 2006-10-17 22:47:53.000000000 +0100
Modify: 2006-10-17 22:47:53.000000000 +0100
Change: 2006-10-17 22:47:54.000000000 +0100
> Is it possible to find the hard links of the same file? > Ie, group the above finding into same file groups?
Parse the above (if you are writing a shellscript) or use the equivalent system call in C: stat(2)
Jon Dowland
> How can I find hard linked files?
using for example:
[ "`stat -c %h filename`" -gt 1 ] && echo hard linked
> Is it possible to find the hard links of the same file? Ie, group the > above finding into same file groups?
AFAIK it's not possible using a general purpose tool. Maybe some filesystems offer a specific interface to get the file name(s) given the inode number (like ncheck on AIX), but I haven't made any deep investigation.
Usually general purpose tools (tar, rsync) use a hash table and repeat the check for every file.
You could make your own simple map using this expensive command
find /somepath -xdev -type f -printf '%n %i %p\n' | grep -v '^1 ' | sort -k2
> Is it possible to find the hard links of the same file? Ie, group the > above finding into same file groups?
find . -type f -links +1 -ls | sort -n -k 1
This command line will find all regular files (-type f) that have 2 or more hard links (-links +1) and list them (-ls, format similar to ls -l, except that it includes the inode number in column one). The result is piped to a numeric sort on column one.
Bob McGowan
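A minimal reproduction of the grouping (assumes GNU find's -printf; inode numbers will differ per system). Two hard links show up as two lines sharing an inode number, while a singly-linked file is excluded:

```shell
d=$(mktemp -d); cd "$d"
echo hi > a
ln a b            # a and b now share one inode
touch single      # link count 1, excluded by -links +1
find . -type f -links +1 -printf '%i %p\n' | sort -n
# → two lines, both starting with the same inode number
```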
Newsgroups: comp.unix.shell Date: Sat, 19 Nov 2005 18:50:48 +0000
> I wish there is a '-n' switch for rm (to default answer no) so as
> not to remove write-protected file. How can I do that? Why "yes no | rm *"
> won't work?
>
> touch a b c d
> touch aa bb cc dd
> chmod a-w ??
>
> $ rm *
> rm: remove write-protected regular empty file `aa'? ^C
>
> $ yes no | rm *
>
> $ ls | wc -l
> 0
>
> The yes command provide "no" to rm, but why it still deleted all my files? [...]
Because rm detects that it is not speaking to a human (because its standard input is not a terminal), so doesn't ask any question.
With zsh,
rm *(.Uw)
will only remove regular (.) files owned by yourself (U) and writable by yourself (w).
Stephane CHAZELAS
use find, like:
find -perm 644 -print0 | xargs -0r rm
You can have more options on setting up the permission bits with "find", for example:
find -perm +w -type f -print0 | xargs -0r rm
this deletes all files having the "write" bit set at 'u', 'g', or 'o'
find -perm +w -type f -prune -o -print0 | xargs -0r rm
this selects all files skipped by the first command and then deletes them.
XC - xicheng
*Tags*: cmd:find
Newsgroups: comp.unix.shell Date: 26 Nov 2006 21:51:03 -0800
> Is there any way to find which dir (under cwd) is empty? I.e., has no
> file in it but may have sub dirs?
Here is one way. Note that this breaks if there are paths with spaces in them:
find . -type d -exec sh -c '[ -z "$(find '{}' -maxdepth 1 ! -type d -print)" ]' \; -print
Kaz Kylheku
> Note that this breaks if there are paths with
> spaces in them.
$ find . -type d -exec sh -c '[ -z "$(find '{}' -maxdepth 1 ! -type d -print)" ]' \; -print
find: ./af/empty: No such file or directory
find: dir: No such file or directory
./af/empty dir
$ find . -type d -exec sh -c '[ -z "$(find '\''{}'\'' -maxdepth 1 ! -type d -print)" ]' \; -print
./af/empty dir
./af/empty dir/subd
$ find . -type d -exec sh -c '[ -z "$(find "'{}'" -maxdepth 1 ! -type d -print)" ]' \; -print
./af/empty dir
./af/empty dir/subd
T
> > Here is one way. Note that this breaks if there are paths with
> > spaces in them:
> >
> > find . -type d -exec sh -c '[ -z "$(find '{}' -maxdepth 1 ! -type d
> > -print)" ]' \; -print
>
> Thanks, that works.
>
> May I know why it will break when there are spaces in path name?
>
> find: ./af/empty: No such file or directory
> find: dir: No such file or directory
> ./af/empty dir
>
> I saw that the {} has been quoted with ''. Why is the quote ignored?
That's a mistake. Note that the entire argument to sh -c is in single quotes. So these inner quotes break that up into two single-quoted pieces with {} in the middle. It doesn't matter if you remove these; either way, it breaks.
To fix the problem, you have to get {} to be quoted. Here is how. Switch to double quotes, and then escape them within. Command expansion takes place now so you have to escape that too, you want the embedded sh -c to evaluate the $(find …), not the outer command:
find . -type d -exec sh -c "[ -z \"\$(find \"{}\" -maxdepth 1 ! -type d -print)\" ]" \; -print
The real challenge would be protecting this against expansions of {} which look like a predicate, valid or otherwise.
Kaz Kylheku
> To fix the problem, you have to get {} to be quoted. Here is how.
> Switch to double quotes, and then escape them within. Command expansion
> takes place now so you have to escape that too, you want the embedded
> sh -c to evaluate the $(find ...), not the outer command:
>
> find . -type d -exec sh -c "[ -z \"\$(find \"{}\" -maxdepth 1 ! -type d
> -print)\" ]" \; -print
Hi, thank you Kaz for the detailed explanation.
FYI, I found the following sufficient under Linux, where sh is actually bash:
$ find . -type d -exec sh -c '[ -z "$(find "'{}'" -maxdepth 1 ! -type d -print)" ]' \; -print
./af/empty dir
./af/empty dir/subd
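A variant that sidesteps the {} quoting problem entirely: let find hand the directory names to sh as positional arguments, so names with spaces never pass through {} at all (a sketch; assumes -maxdepth, i.e. GNU find):

```shell
# Print directories whose only entries, if any, are subdirectories.
# Each name travels as "$d", so embedded spaces are safe.
find . -type d -exec sh -c '
  for d do
    if [ -z "$(find "$d" -maxdepth 1 ! -type d -print)" ]; then
      printf "%s\n" "$d"
    fi
  done' sh {} +
```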
*Tags*: cmd:find
Newsgroups: comp.unix.shell Date: Tue, 28 Nov 2006 07:16:07 +0000
> I'm trying to rename names of all files under the current dir to lower
> case, but keep dir names as-is.
>
> Is there any way to tweak the following to suit the above goal?
>
> find . -print0 | xargs -t0 rename 'y/A-Z/a-z/'
I'd do
zmv -Qv '(**/)(*)(.D)' '$1${(L)2}'
(using zsh and its autoloadable zmv function).
Or, as you seem to have GNU find and the perl based rename command:
find . -type f -name '*[[:upper:]]' -print0 -exec rename -v 's,(.*/)(.*),$1\L$2,' {} +
If you don't want to recurse into subdirectories, you can use the non-standard -maxdepth or do:
find . ! -name . -prune -type f -name '*[[:upper:]]*' -exec \
  rename '$_=lc' {} +
Stephane CHAZELAS
find . -type f ...
Bill Marcum
>> I'm trying to rename names of all files under the current dir to lower
>> case, but keep dir names as-is.
>>
> find . -type f -name '*[[:upper:]]' -print0 -exec rename -v
> 's,(.*/)(.*),$1\L$2,' {} +
Thanks, that works, although it didn't work at my first trial — maybe because of the missing ending '*' after [[:upper:]].
Anyway, this is what I used:
find . -type f -exec rename -v 's/[_ ]//g; s,(.*/)(.*),$1\L$2,' {} \;
BTW, just for the information, both Bill's and Adam's suggestions won't work because rename will try to lower-case the dir names as well.
T
> find . -type f -exec rename -v 's/[_ ]//g; s,(.*/)(.*),$1\L$2,' {} \;
>
> BTW, Just for the information, both Bill and Adam's suggestion won't work
> because rename will try to lower case the dir names as well. [...]
And your solution above won't work if the dirs have spaces or underscores in their names, for the same reason. You could do:
find . -type f -exec rename -v 's,[^/]*$,$_=lc$&;s/[_ ]//g;$_,e' {} +
Note the "+" instead of ";" to avoid having to call rename for every file. It should work with any POSIX or Unix conformant find.
Stephane CHAZELAS
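If the perl-based rename is not available, the same "lower-case the basename only" logic can be sketched in plain sh with tr (the variable names here are mine):

```shell
# Lowercase only the last path component, leaving directory names alone.
find . -type f -name '*[[:upper:]]*' -exec sh -c '
  for f do
    dir=${f%/*}                  # everything before the last /
    base=${f##*/}                # the file name itself
    lower=$(printf "%s" "$base" | tr "[:upper:]" "[:lower:]")
    [ "$base" = "$lower" ] || mv "$f" "$dir/$lower"
  done' sh {} +
```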
>>> Note the "+" instead of ";" to avoid having to call rename for
>>> every file. It should work with any POSIX or Unix conformant find.
>>
>> thx, learned another trick here as well. Now I don't need to do
>>
>> find . -print0 | xargs -t0 ...
>>
>> to avoid exec invocation on every file any more...
>
> If the found result is extremely long, longer than a command can take as
> parameter, can find -exec + do the same thing as xargs so as to break down
> the parameters in chunks so that the command can handle?
Yes, that's what -exec … {} + is meant to do.
Stephane CHAZELAS
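The chunking is easy to observe: with +, all the names arrive in a single echo invocation (one output line); with \;, there is one invocation per file (a sketch, using made-up file names):

```shell
d=$(mktemp -d); cd "$d"
touch a b c
find . -type f -exec echo BATCH {} +   # one line: all three names together
find . -type f -exec echo ONE {} \;    # three lines: one name per call
```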
Newsgroups: comp.unix.shell Date: Tue, 05 Dec 2006 21:10:38 +0100
> I get "find: missing argument to `-exec'" for the following command but I
> don't know how to fix:
>
> find /other/path ! -group grp -exec ln -s {} dest +
Most implementations of `+' require it to immediately follow the {}. Otherwise, use `\;' rather than `+'.
Michael Tosch
find /other/path ! -group grp -exec sh -c '
  exec ln -s "$@" dest' arg0 {} +
Stephane CHAZELAS
> > find /other/path ! -group grp -exec sh -c '
> > exec ln -s "$@" dest' arg0 {} +
>
> what the arg0 here for?
When using the -c option to the shell, the first argument after the command string is used as the shell's $0, and the remaining ones fill in $@. So you need a dummy argument to fill in $0.
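The role of the dummy argument is visible directly (a sketch):

```shell
# The first argument after the command string becomes $0; the rest
# become the positional parameters.
sh -c 'echo "0=$0 rest=$*"' dummy one two
# → 0=dummy rest=one two
```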
A better question would be what the exec is for. Most shells automatically exec the last (in this case only) command, kind of like tail-call elimination in Scheme. Notice:
$ sh -c 'sleep 10; echo ok' &
[17] 32501
$ ps -p 32501
  PID TTY          TIME CMD
32501 pts/0    00:00:00 sh
barmar $ sh -c 'sleep 100' &
[1] 6370
barmar $ ps -p 6370
 PID  TT  STAT      TIME COMMAND
6370  p1  S      0:00.02 sleep 100
Barry Margolin
> A better question would be what the exec is for. Most shells
> automatically exec the last (in this case only) command, kind of like
> tail-call elimination in Scheme. Notice:
ash and pdksh based shells don't, AT&T ksh, bash and zsh based ones do.
If you've got traps set up, that optimisation becomes a bug, such as in AT&T ksh. bash and zsh know not to do that optimisation when there are traps.
Stephane CHAZELAS