cmd 2> file
echo "Message..." >&2
There is no way to pipe only stderr with a plain `|`. If you want to pipe stderr, stdout has to go through too:
prog 2>&1 | prog2
documented on: 2005.05.25
As explained before, if you want to pipe stderr the standard way, stdout has to go through too.
However, if you really want to pipe only stderr through, without stdout messing up the content, this is how:
$ ls -d . no
ls: no: No such file or directory
.
$ ls -d . no 2>&1 > log1 | cat > log2
$ cat log1
.
$ cat log2
ls: no: No such file or directory
I.e., the file log2 contains output only from the stderr of the first command. The order of the redirections is what makes this work: `2>&1` first duplicates stderr onto the current stdout (the pipe), and only then does `> log1` point stdout at the file, so the pipe ends up carrying stderr alone.
documented on: 2007.04.03
to redirect both stdout & stderr
> /dev/null 2>&1
E.g.:
host$ rm nooo >/dev/null 2>&1
host$ echo rm nooo >/dev/null 2>&1
host$
iitrc:~/www/ai$ ls no
no: No such file or directory
iitrc:~/www/ai$ ls no >& /dev/null
'>&' works fine for bash and csh (and 4dos), but not sh:
iitrc:~/www/ai$ sh
$ ls no >& /dev/null
/dev/null: bad number
documented on: Thu 02-18-99 16:47:08
Newsgroups: comp.unix.shell Date: 1997/02/06
> Is it possible to redirect stderr and not stdout?
Yes. To send stderr to file log and not stdout do
foo 2> log
where foo is your command producing output & error. Naturally this will not work for csh-type shells.
Pete
*Tags*: redirect to stderr
> Simple question, how can I echo to stderr?
echo "aaa" >&2
if you have a Bourne-based shell. If you have C-shell, well, I think you're probably kinda stuck (redirecting anything other than stdout from C-shell is notoriously hard).
Chris Mattern
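A common workaround from csh is to hand the whole command to a Bourne shell, which does support the redirection (a minimal sketch):

# from csh: let sh do the stderr redirection
sh -c 'echo "aaa" 1>&2'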
Newsgroups: comp.unix.shell Date: Fri, 24 Nov 2006 00:31:33 -0500
> Is it possible to cat from script and file using a single cat?
> I.e.,
>
> cat << EOF | cat - file.app
> from script
> EOF
cat - file.app << EOF
You can only redirect standard input once per command, but cat can read from a list of arguments, with the argument '-' representing standard input.
Bill Marcum
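The same '-' trick works when stdin comes from a pipe, for example:

echo "from a pipe" | cat - file.app

which prints the piped line followed by the contents of file.app.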
Newsgroups: comp.unix.shell Date: Tue, 24 Oct 2006 17:48:11 +0200
> Is it possible to diff $variable1 $variable2?
>
> The content of variable1 and variable2 is the result of a few commands
> on the content of files. So rather than putting the output of
> processing those files into temporary files just to perform the diff I
> figured I'd ask if the above was possible.
$ echo "$var1" pippo pluto
$ echo "$var2" pippo1 pluto
$ diff <(echo "$var1") <(echo "$var2") 1c1 < pippo --- > pippo1
Radoulov, Dimitre
Newsgroups: comp.unix.shell Date: Fri, 17 Jun 2005 20:26:13 -0400
> I'm wondering, if I have a process with a long list of commands piping
> from one to another, how can I 'nice' the whole piping process?
>
> I presume that
>
> nice p1 | p2 | p3 | ...
>
> will only nice p1, right?
Right. You can either nice each command:
nice p1 | nice p2 | nice p3 | ...
or you can run an extra shell:
nice sh -c "p1 | p2 | p3 | ..."
Barry Margolin
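Since child processes inherit the shell's niceness, the extra-shell form nices every stage. For example (an illustrative pipeline, not from the thread):

nice -n 10 sh -c "sort bigfile | uniq -c | sort -rn"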
Newsgroups: gmane.linux.debian.user Date: Thu, 02 Mar 2006 10:23:20 -0500
Could someone tell me why the following works in zsh but not in bash/posh/dash?
benjo[3]:~% echo foo bar baz | read a b c
benjo[4]:~% echo $a $b $c
foo bar baz
If I try the same with bash (or other sh-compatible shells), the variables $a $b and $c are unset. From the bash man page:
> read [-ers] [-u fd] [-t timeout] [-a aname] [-p prompt] [-n nchars]
> [-d delim] [name ...]
> One line is read from the standard input, or from the file descriptor
> fd supplied as an argument to the -u option, and the first word is
> assigned to the first name, the second word to the second name, and so
> on, with leftover words and their intervening separators assigned to
> the last name.
So "read" claims to read from the standard input, but it doesn't actually seem to happen when a pipe is involved.
Posh and dash behave like bash in this respect, so I guess that this is not a bug, and that what zsh does is actually an extension. So, what is the correct POSIX-compatible way to get "read" to work as I want?
Kevin B. McCarty
> So "read" claims to read from the standard input, but it doesn't > actually seem to happen when a pipe is involved. [...]
Each command in the pipeline gets executed in its own subshell, so the variables are set there and not passed back to the parent process. This is clear, for example, when you read about the compound command (list):
(list) list is executed in a subshell. Variable assignments and builtin commands that affect the shell's environment do not remain in effect after the command completes. [...]
So this would work:
$ echo foo bar baz | ( read a b c; echo $a $b $c )
foo bar baz
$
Alternatively, you can use a 'here document' construct to feed some standard input to "read" in the same environment as the shell doing the "read", for example:
$ read a b c <<END
> foo bar baz
> END
$ echo $a $b $c
foo bar baz
$
alfredo
> If I try the same with bash (or other sh-compatible shells), the
> variables $a $b and $c are unset. From the bash man page: ...
> So "read" claims to read from the standard input, but it doesn't
> actually seem to happen when a pipe is involved.
What's happening here is that the pipe is causing a subshell to be spawned, which is then parsing the command "read a b c".
I'm not sure of the POSIX way to use read in this manner, but I found this on Google/A9: http://linuxgazette.net/issue57/tag/1.html
The example he gives, with the < <() syntax, worked in bash, but not in Debian or FreeBSD's /bin/sh.
David Kirchner
> The example he gives, with the < <() syntax, worked in bash, but not
> in Debian or FreeBSD's /bin/sh.
In more recent bashes, the following should work as well
#!/bin/bash
read a b c <<<`echo foo bar baz`
echo $a $b $c
The <<< ("here strings") are an extension of the "here document" syntax, IOW, the string given after <<< is supplied as stdin to the command.
Then, there's another variant, which is about as ugly as it can get… It should, however, work with most bourne shell compatible shells:
#!/bin/sh
eval `echo foo bar baz | (read a b c; echo "a='$a';b='$b';c='$c'" )`
echo $a $b $c
To get the variable's values from the subshell back to the main shell, a shell code fragment is written on stdout, captured with backticks, and then eval'ed in the main shell… (this is the moment when I usually switch to some other scripting language — if not before :)
Almut
I think the POSIX way would be
echo foo bar baz | { read a b c; echo $a $b $c; }
Bill Marcum
I wrote:
> Could someone tell me why the following works in zsh but not in
> bash/posh/dash?
>
> benjo[3]:~% echo foo bar baz | read a b c
> benjo[4]:~% echo $a $b $c
> foo bar baz
Thanks everyone for the enlightening answers! So just to summarize, the problem is that the pipeline is treated as a subshell, and so the variables $a $b and $c are defined within the subshell but not the "main" shell.
These seem like the best solutions to my problem:
Bash-specific (i.e. not POSIX-compliant) :
David Kirchner wrote:
> I'm not sure of the POSIX way to use read in this manner, but I found
> this on Google/A9:
>
> http://linuxgazette.net/issue57/tag/1.html
>
> The example he gives, with the < <() syntax, worked in bash, but not
> in Debian or FreeBSD's /bin/sh.
Almut Behrens wrote:
> In more recent bashes, the following should work as well
>
> #!/bin/bash
> read a b c <<<`echo foo bar baz`
> echo $a $b $c
>
> The <<< operator ("here strings") is an extension of the "here document"
> syntax; IOW, the string given after <<< is supplied as stdin to the
> command.
POSIX-compliant:
Bill Marcum wrote:
> I think the POSIX way would be
>
> echo foo bar baz | { read a b c; echo $a $b $c; }
Not too bad if what you want to do inside the { } braces is pretty short.
Almut Behrens wrote:
> #!/bin/sh
> eval `echo foo bar baz | (read a b c; echo "a='$a';b='$b';c='$c'" )`
> echo $a $b $c
>
> To get the variable's values from the subshell back to the main shell, a
> shell code fragment is written on stdout, captured with backticks, and
> then eval'ed in the main shell... (this is the moment when I usually
> switch to some other scripting language -- if not before :)
Ugh. Does get the job done though. I guess one has to be a little careful about escaping special characters in this case? Here's the safest version I've found so far — single quotes in the input have to be special cased with the sed command, and the -r flag to "read" keeps it from eating backslashes.
# set some variables to nightmarish values for testing purposes
d='"ab\""q"'   # literal value is "ab\""q"
e='$d'         # literal value is $d
f="'ba'r'"     # literal value is 'ba'r'

# here's the meat of the code
result="`echo "$d $e $f" | sed "s/'/\'\\\\\\\'\'/g" | \
  ( read -r a b c; echo "a='$a' ; b='$b' ; c='$c'" )`"
eval "$result"

# test that $a $b $c have the right values
echo "$a $b $c"
Tested on Sarge with zsh, bash, dash and posh :-)
Of course, replace this:
echo "$d $e $f"
with whatever is producing the output that needs to be put into $a $b $c
Personally, I'd rather constrain my script to work only with bash and use <<< or < <() operators than to write something like the above! (N.B. every method above still needs the -r flag to read if the input might contain backslashes.)
Kevin B. McCarty
> # set some variables to nightmarish values for testing purposes
> d='"ab\""q"'   # literal value is "ab\""q"
> e='$d'         # literal value is $d
> f="'ba'r'"     # literal value is 'ba'r'
>
> # here's the meat of the code
> result="`echo "$d $e $f" | sed "s/'/\'\\\\\\\'\'/g" | \
>   ( read -r a b c; echo "a='$a' ; b='$b' ; c='$c'" )`"
> eval "$result"
>
> # test that $a $b $c have the right values
> echo "$a $b $c"
Another, more sane way to do what you want (I think):
set `echo "$d $e $f" | sed "s/'/\'\\\\\\\'\'/g"` a=$1; b=$2; c=$3
which will work if your data have no embedded spaces. Otherwise, pick a character you know is not in the data (+, for example) and:
OIFS=$IFS; IFS="+"
set `echo "$d+$e+$f" | sed "s/'/\'\\\\\\\'\'/g"`
a=$1; b=$2; c=$3
IFS=$OIFS
I've been writing shell scripts for many years, and I *still* trip over this now and again, curse a time or three, then remember that some things are done in sub-shells or sub-processes, and finally do the IFS and set trick. :)
Neal Murphy
Newsgroups: comp.unix.shell
> Please make a suggestion on how to interact with users in a pipe command.
> I am writing an sh shell script that reads data from a pipe while having
> the capability to interact with the user.
>
> Previously I used
>
> read ans < /dev/tty
>
> to get user input while preserving the stdio contents. Then I wanted to
> add timeout capability for the read. So I added
>
> stty -icanon min 0 time $timeout
>
> before the read. But I get the error message:
>
> stty: standard input: Invalid argument
>
> What's your suggestion?
>
> The "-F device" in Linux might help, but Solaris doesn't have that...
Try:
stty -icanon min 0 time $timeout </dev/tty
laura fairhead
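Putting it together, a sketch that also saves and restores the terminal settings (note that stty's time argument is in tenths of a second):

old=`stty -g </dev/tty`                      # save terminal settings
stty -icanon min 0 time $timeout </dev/tty   # timeout in tenths of a second
read ans </dev/tty                           # comes back empty if nothing arrives in time
stty "$old" </dev/tty                        # restore terminal settings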
If you are using (or can use) bash (since 2.03 or 2.04), the read command accepts a time-out value:
read -t $timeout ans < /dev/tty
Chris F.A. Johnson
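Since read -t returns a non-zero status on timeout, supplying a default answer is straightforward (a small sketch, bash only):

if read -t $timeout ans </dev/tty
then echo "you entered: $ans"
else ans=default    # timed out: fall back to a default
fi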
If you've finished reading the file, then you could redirect stdin back to the controlling terminal with the command
exec </dev/tty
The script I tried this with looked like:
#!/bin/sh
# This is temp.sh
cat -
# the above "eats" all of stdin
exec </dev/tty
# the above sets stdin back to /dev/tty - the controlling terminal
cat -
# echo whatever's typed in (Ctrl-D ends the script)
I then invoked it as:
./temp.sh <temp.sh
This printed each line of the script and then started echoing whatever I type in.
Date: 2000.08.24 Newsgroups: comp.unix.questions
> Does anyone have some genius ideas about how to show program results
> in another xterm window?
>
> The most convenient format for me is something like:
>
> ls -l | popup
>
> in which popup will show the result of "ls -l" in a newly launched
> xterm window. The question has been haunting me for quite a while
> and I still can't come up with any idea yet.
>
> PS. I managed to come up with a script used like this:
>
> popup ls -l
>
> But I don't like it, 'cause all my alias & function definitions
> (bash) can not be used in this format. That's why I think getting
> results from a pipe is a better idea.
well, may not be the best approach:
redirect the pipe input to a temp file (cat > $tmpf), then show the temp file in a newly launched xterm…
T
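A minimal popup along those lines might look like this (a sketch only; it assumes mktemp and xterm are available and uses less as the pager):

#!/bin/sh
# popup: display whatever arrives on stdin in a new xterm
tmpf=`mktemp` || exit 1
cat > "$tmpf"           # drain the pipe into a temp file
xterm -e sh -c "less '$tmpf'; rm -f '$tmpf'" &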
Newsgroups: comp.unix.shell
> How can I detect the execution mode of my shell script (whether being
> piped or not)...
Use tty(1):
$ sh foo.sh
0
$ echo foo | sh foo.sh
1
$ cat foo.sh
tty -s
echo $?
Derek
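So a script can branch on it directly (a sketch):

#!/bin/sh
if tty -s
then echo "stdin is a terminal"       # running interactively
else echo "stdin is a pipe or file"   # being piped into
fi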
documented on: 2000.08.15 Tue 12:10:03
File descriptor 3 is not open by default, but it can be pointed anywhere first:

$ echo asdb >&1
asdb
$ echo asdb >&2
asdb
$ echo asdb >&3
bash: 3: Bad file descriptor
$ echo asdb 3>/dev/tty >&3
asdb
$ echo asdb 3>test >&3
$ cat test
asdb
Newsgroups: comp.unix.shell Date: Tue, 05 Apr 2005 15:04:01 -0400
> I'm wondering if this is possible, i.e., to obtain the return code from
> before the pipe command. E.g., I want to do something like this
>
> real_command | grep -v garbage || error_handling command
>
> I want the error_handling to kick in only if the real_command fails, but
> apparently the above won't work because it takes the return code of grep,
> not the real_command.
If you use bash, you can use the PIPESTATUS array variable.
real_command | grep -v garbage
if [[ ${PIPESTATUS[0]} -ne 0 ]]
then
    error_handling command
fi
Barry Margolin
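As an aside, bash records the status of every stage, so the whole array can be inspected:

$ true | false | true
$ echo "${PIPESTATUS[@]}"
0 1 0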
> real_command | grep -v garbage || error_handling command
Would this suit you…?
if real_command
then :
else error_handling command 1>&2
fi | grep -v garbage
Janis Papanagnou
> I'm wondering if this is possible, i.e., to obtain the return code from
> before the pipe command. E.g., I want to do something like this
See question 11 in the FAQ.
Ed Morton
Newsgroups: comp.unix.shell
> Is there a way to control, by a variable in an sh script, whether
> the output gets piped or not?
I thought about this problem, too, and as far as I can see there is no really elegant solution.
What I came up with now uses Bash's process substitution:
if [ ...some conditions... ]
then
    exec 3> >( tee log )   # Here goes your pipe!
else
    exec 3>&1              # To stdout otherwise.
fi
{
    date;   # The producer: Do your stuff here.
} >&3
exec 3>&-   # Close file descriptor 3 (and end the pipe).
Martin Ramsch
Yeah, Martin, you hit the nail right on the head. Seems an elegant solution to me. :-) I'll play more with it. Thanks
bash$ exec 3> >( tee log )
bash$ date >&3
Mon May 29 10:06:19 ADT 2000
bash$ exec 3>&-
bash$ cat log
Mon May 29 10:06:19 ADT 2000
The same trick can be applied directly to stdout:

exec > >( tee log | awk '{print $1}' ) ; ls -l ; exec >&-
A redirection target can be held in a variable, but a redirection operator cannot, unless the command line is re-parsed with eval:

sh$ dev_dest=/dev/null
sh$ echo aaa > $dev_dest
sh$ dev_dest=/dev/tty
sh$ echo aaa > $dev_dest
aaa
sh$

sh$ dev_dest='> /dev/tty'
sh$ echo aaa $dev_dest
aaa > /dev/tty
sh$ eval echo aaa $dev_dest
aaa
sh$
> > What I came up with now uses Bash's process substitution:
> >
> > exec 3> >( tee log )   # Here goes your pipe!
> > ...
> > exec 3>&-              # Close file descriptor (and end pipe).
>
> is there a way to do the above bash trick in an sh script? thanks
In plain sh the effect of process substitution can be done with explicitly created named pipes:
if [ ...some conditions... ]
then
    NPIPE=/tmp/pipe.$$   # filename for the auxiliary named pipe
    mknod $NPIPE p       # create the named pipe

    # Now start in the background all commands
    # which are to read from the named pipe:
    { sort | tee log.txt; echo "Done."; } <$NPIPE &

    exec 3>$NPIPE        # file descriptor 3 writes into the named pipe
else
    exec 3>&1            # in this case file descriptor 3 simply _is_ stdout
fi

{
    date; ...            # Do your stuff here.
} >&3                    # Every output is written to file descriptor 3.

exec 3>&-                # Close file descriptor 3.
                         # This causes EOF for the background commands.
if [ "$NPIPE" != "" ]
then
    rm $NPIPE            # Remove the auxiliary named pipe again.
    unset NPIPE
fi
Martin
Since no particular problem was mentioned let me try to create one.
PROBLEM: Create a command which does the following: if $1 is '/' pipe date through tr and convert all : to / else just echo date.
The "straight forward" way of doing this is to use a script like this
if [[ $1 = "/" ]] then date | tr : / else date fi
There are many alternatives.
Use a variable for the command.
CMD="cat" [[ $1 = "/" ]] && CMD="tr : / " date | ${CMD}
will do the job.
Or a simpler version of this (without the cat) would be
CMD="" [[ $1 = "/" ]] && CMD="| tr : /" eval date $CMD
or a clearer version would be to just say
CMD="date" [[ $1 = "/" ]] && CMD="date | tr : /" eval $CMD
All the above used variables that have been set up prior to the command so the conditional is not checked in the actual statement.
Try these if you want the conditional to be checked inside the pipe itself:
date | ( ( [[ $1 = "/" ]] && tr : / ) || cat ) # please ensure there is a space between two ((s
or more explicitly as
date | if [[ $1 = "/" ]]
then tr : /
else cat
fi
If you don't like redundant cats (not many do), then try this:
eval date $( [[ $1 = "/" ]] && print "| tr : /" )
It is interesting to see how much you can play around to accomplish the same thing. I am sure that the gurus here can think of more ways.
Play around :-)
Raja
$ a=
$ date | ( ( [[ $a = "/" ]] && tr : / ) || cat )
Fri Mar 22 13:18:44 CST 2002
$ a=/
$ date | ( ( [[ $a = "/" ]] && tr : / ) || cat )
Fri Mar 22 13/20/02 CST 2002
$ a=
$ eval date $( [[ $a = "/" ]] && printf "| tr : /" )
Fri Mar 22 13:21:00 CST 2002
$ a=/
$ eval date $( [[ $a = "/" ]] && printf "| tr : /" )
Fri Mar 22 13/21/14 CST 2002
$ eval date ` [[ $a = "/" ]] && printf "| tr : /" `
Fri Mar 22 13/21/40 CST 2002
> I tried to extend this solution for my "simple" task, which involves
> more than one pipe, but failed. So the above is good for one-command
> piping.(?)
> CMD="cat | tr : / " # test a more than one piping case > > and all the following failed: > > date | ${CMD} > eval date | ${CMD} > date | ( ${CMD} )
Try this as an example of a multiple pipeline command:
[[ $1 = / ]] && CMD="| tr : / | wc -l " eval date $CMD
would succeed. This illustrates two things:
eval is necessary for commands with multiple pipes to succeed (because you want the shell to interpret the pipe characters, not just pass them off to the command as command-line parameters).
The command line itself should not contain a literal pipe character. For instance, the second line in the above cannot be written as "eval date | $CMD" (this would fail).
If the pipeline contains nested quotes or "special" characters (; | { } ( ) $ [ ] etc.), then it is better to make a function out of the whole complex pipeline and call that function conditionally.
For instance, the above illustration can be rewritten as:
function f1 {
    tr : / | wc -l
}
[[ $1 = / ]] && CMD="| f1" eval date $CMD
Multiple pipes can also be handled by the other versions of the script that I mentioned. For instance, it is possible to write:
eval date $( [[ $1 = / ]] && print "| tr : / | wc -l" )
Raja
>and all the following failed:
>
>date | ${CMD}
>eval date | ${CMD}
>date | ( ${CMD} )
That's because you want either:
date | eval ${CMD}
or:
eval date \| ${CMD}
The $CMD needs to be eval'd in order to get its internal pipeline interpreted correctly, and the one example you showed that has an eval statement is just doing an "eval date" and piping that to $CMD, which isn't right (though you could, if you were feeling silly:).
eval date | eval ${CMD}
Ken Pizzini
> grep xxx myfile > myfile.tmp
> [ $? -eq 0 ] && cat myfile.tmp | mail someone
>
> Of course the following doesn't work, but I hope there is a trick to
> write something like
>
> grep xxx myfile &&| mail someone
>
> "mail" is just an example. So I don't need special switches for the
> mail-command, it is a shell question: How to activate a pipe only if
> the exit status of the former command has been OK.
I think the nearest is something like…
grep xxx myfile > myfile.tmp && cat myfile.tmp | mail someone
grep xxx myfile && grep xxx myfile | mail someone
Adrian
One way to do it would be the following:
oIFS=$IFS; IFS=''
RESULT=$(grep xxx myfile)
[[ ! -z "$RESULT" ]] && echo $RESULT | mail someone
IFS=$oIFS
That only works because a command like grep produces no output when it fails. For a command that might fail and still produce output, you could change that to:
oIFS=$IFS; IFS=''
RESULT=$(grep xxx myfile)
[[ "$?" == 0 ]] && echo $RESULT | mail someone
IFS=$oIFS
This might work also:
grep xxx myfile | ( [[ $? == 0 ]] && mail someone )
Be aware that that will run the mail command in a subshell (but you may not notice any difference if you use bash - bash runs the last command in a pipeline in a subshell anyway). What difference does that make? Well, consider the following:
grep xxx myfile | ( [[ $? == 0 ]] && mail someone || echo "Nothing found" )
You might reasonably expect the command to print out "Nothing found" if grep fails to find a match. It in fact produces no output since the echo prints to the subshell. The best way to achieve conditional piping (of a sort) is to use redirections as a kind of 'pipe emulator'. Search http://deja.com/ in comp.unix.shell for the subject "conditional piping" for previous discussions.
Dave. :-)
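A plain POSIX-sh variation on the $RESULT approach above, avoiding the IFS juggling by quoting instead (a sketch):

if out=$(grep xxx myfile)
then printf '%s\n' "$out" | mail someone   # grep succeeded: pipe its output on
else echo "Nothing found"
fi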
documented on: 2000.08.31 Thu 00:53:39
> I am trying to figure out what is the difference between executing a
> shell script foo as
>
> sh foo
>
> compared to sh < foo.
Not much:
the former maintains the stdin of its invoking environment; the latter forces stdin to come from the script, making non-redirected "read"s inside the script behave oddly (and also affecting any program invoked by "foo" which reads stdin);
$0 in the former is set to "foo"; in the latter it is "sh".
Ken Pizzini
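The $0 difference is easy to check (my example; the exact output varies with the shell):

$ printf 'echo "$0"\n' > foo
$ sh foo
foo
$ sh < foo
sh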
Newsgroups: comp.unix.shell
> > echo done retrieval. >&2
>                        ^^^ That's the culprit.
>
> ">&" is a csh-ism.
An addendum: *Tong*'s OP claimed that bash was being used; in that case >&2 is perfectly legitimate: bash treats it as duplicating stdout onto file descriptor 2, so the message goes to stderr. But he also said that the script started with #!/bin/sh, which is _not_ appropriate for invoking bash, even if /bin/sh is a symlink to bash on the system. If you expect the script to be interpreted by bash, then say so in the shebang line: #!/bin/bash.
Ken Pizzini