unalias *
*Tags*: alias clear, alias clean
unalias *
unalias -a
unalias [-a] [name ...]
Remove names from the list of defined aliases. If -a is supplied, all alias definitions are removed. The return value is true unless a supplied name is not a defined alias.
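For example (the alias name is arbitrary):

$ alias ll='ls -l'
$ unalias ll        # remove a single alias
$ unalias -a        # remove all alias definitions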
documented on: 2000.01.22 Sat 14:47:19
okinawa:~$ chsh
Password:
Changing the login shell for suntong
Enter the new value, or press return for the default
Login Shell [/bin/bash]: /bin/tcsh
chsh
documented on: Thu 11-19-98 09:03:06
For example, if your login shell is csh or tcsh, and you have installed bash in /usr/gnu/bin/bash, add the following line to ~/.login:
if ( -f /usr/gnu/bin/bash ) exec /usr/gnu/bin/bash --login
(the `--login' tells bash that it is a login shell).
It's not a good idea to put this command into ~/.cshrc, because every csh you run without the `-f' option, even ones started to run csh scripts, reads that file. If you must put the command in ~/.cshrc, use something like
if ( $?prompt ) exec /usr/gnu/bin/bash --login
to ensure that bash is exec'd only when the csh is interactive.
Because we sometimes have to run tcsh (from bash), we need to guard the exec:
if ( ! $?BASH_VERSION ) exec ~/sbin/bash --login
documented on: Sat 05-29-99 11:12:56
Date: Sat, Apr 30 2005 10:43 am Groups: comp.unix.shell
SHELLdorado Newsletter 1/2005 - April 30th, 2005
The "SHELLdorado Newsletter" covers UNIX shell script related topics. To subscribe to this newsletter, leave your e-mail address at the SHELLdorado home page:
http://www.shelldorado.com/
View previous issues at the following location:
http://www.shelldorado.com/newsletter/
"Heiner's SHELLdorado" is a place for UNIX shell script programmers providing
Many shell script examples, shell scripting tips & tricks, a large collection of shell-related links & more...
Contents
o Shell Tip: How to read a file line-by-line
o Shell Tip: Print a line from a file given its line number
o Shell Tip: How to convert upper-case file names to lower-case
o Shell Tip: Speeding up scripts using "xargs"
o Shell Tip: How to avoid "Argument list too long" errors
The essential part of writing fast scripts is avoiding external processes.
for file in *.txt
do
    gzip "$file"
done
is much slower than just
gzip *.txt
because the former code may need many "gzip" processes for a task the latter command accomplishes with only one external process. But how could we build a command line like the one above when the input files come from a file, or even standard input? A naive approach could be
gzip `cat textfiles.list archivefiles.list`
but this command can easily run into an "Argument list too long" error, and doesn't work with file names containing embedded whitespace characters. A better solution is using "xargs":
cat textfiles.list archivefiles.list | xargs gzip
The "xargs" command reads its input line by line, and build a command line by appending each line to its arguments (here: "gzip"). Therefore the input
a.txt b.txt c.txt
would result in "xargs" executing the command
gzip a.txt b.txt c.txt
"xargs" also takes care that the resulting command line does not get too long, and therefore avoids "Argument list too long" errors.
Oh no, there it is again: the system's spool directory is almost full (4018 files); old files need to be removed, and all useful commands only print the dreaded "Argument list too long":
$ cd /var/spool/data
$ ls *
ls: Argument list too long
$ rm *
rm: Argument list too long
So what exactly in the character '*' is too long? Well, the current shell does the useful work of converting '*' to a (large) list of files matching that pattern. This is not the problem. Afterwards, it tries to execute the command (e.g. "/bin/ls") with the file list using the system call execve(2) (or a similar one). This system call has a limitation for the maximum number of bytes that can be used for arguments and environment variables(*), and fails.
It's important to note that the limitation is on the side of the system call, not the shell's internal lists.
To work around this problem, we'll use shell-internal functions, or ways to limit the number of files directly specified as arguments to a command.
Examples:
Don't specify arguments, to get the (hopefully) useful default:
$ ls
Use shell-internal functionality ("echo" and "for" are shell-internal commands):
$ echo *
file1 file2 [...]
$ for file in *; do rm "$file"; done # be careful!
Use "xargs"
$ ls | xargs rm # careful!
$ find . -type f -size +100000 -print | xargs ...
Limit the number of arguments for a command:
$ ls [a-l]*
$ ls [m-z]*
Using these techniques should help get around the problem.
(*) Parameter ARG_MAX, often 128K (Linux) or 1 or 2 MB (Solaris).
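You can query the limit on your own system with getconf (the value shown is the 128K Linux case; yours may differ):

$ getconf ARG_MAX
131072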
Avoid 'ls /long/path/etc/files'; cd into the directory first and run ls there, so each expanded name doesn't carry the long path prefix.
If that won't help, use cmd:xargs to solve the "Argument list too long" problem:
$ ls | xargs cmd...
Or, use cmd:split to solve the "Argument list too long" problem: if the path info is necessary, or there are just too many files in a single directory for 'ls <criteria>', you can simply use 'split' to break the file list into chunks before using them.
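A sketch of that approach (the list file names and the 500-name chunk size are arbitrary):

$ find /long/path -name '*.log' > /tmp/all.list
$ split -l 500 /tmp/all.list /tmp/chunk.
$ for c in /tmp/chunk.*; do gzip `cat $c`; done   # careful with blanks in names!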
xpt
documented on: 2007-09-09
Date: Wed, Mar 1 2006 6:58 am Groups: comp.unix.shell
$ for i in `ls /media/cdrecorder5/subdir/*.zip`; do unzip $i; done
generated the following error message:
bash: /bin/ls: Argument list too long
According to ls | wc -l, this directory contains 3307 (!) files. I tried narrowing the selection by piping it through grep '^[a-f]' to no avail. I appear to be coming up against an internal limitation of bash and/or ls. Any suggestions on how to get around this?
> avail. I appear to be coming up against an internal limitation of bash
> and/or ls. Any suggestions on how to get around this?
It's not a limitation in bash OR ls, but rather in the kernel. The kernel imposes a maximum size that any process can use for its environment: its environment includes not just the normal environment variables, etc. but also the command line arguments.
You can get away from this by avoiding the wildcard; remember that in UNIX wildcards are expanded by the shell first, then the results of the expansion are placed in the command line of the subcommand (hence your very long list on the command line).
Try something like:
ls -1 /media/cdrecorder5/subdir \
| while read file; do
    case $file in
    *.zip) unzip "$file" ;;
    esac
done
Paul D. Smith <psm…@nortel.com>
> ls -1 /media/cdrecorder5/subdir \
> | while read file; do
>     case $file in
>     *.zip) unzip "$file" ;;
>     esac
> done
That assumes that the filenames in subdir don't have any newline characters.
Also note that, by default, read splits its input to put it into the various variables whose names it is provided with. In that splitting, the blank characters contained in $IFS will be stripped from the beginning and end of each line, and the line will then be split into as many words as there are variable names provided (the backslash acts as an escape character for the separators and the newline).
To disable the backslash escape processing, use the "-r" option of read. To disable the stripping of leading blank characters, remove those blank characters from IFS; so the loop is better written "while IFS= read -r".
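A quick illustration of the difference (hypothetical input):

$ printf '  two leading blanks\n' | while read line; do echo "[$line]"; done
[two leading blanks]
$ printf '  two leading blanks\n' | while IFS= read -r line; do echo "[$line]"; done
[  two leading blanks]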
Funnily enough, I just had to do exactly the same thing yesterday, I wrote it:
for f (/media/sda1/**/*.(#i)zip(.D)) unzip $f
but that's because I use zsh as my interactive shell.
That's zsh's short form of the for loop. **/* is to recurse into subdirectories, (.) is to only select regular files (omitting symlinks, directories…), (D) is to also include dotfiles (.foo.zip), (#i) is to toggle case-insensitive matching because I also wanted to extract files named like "FOO.ZIP".
Stephane
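For comparison, the same zsh loop in its long form (a sketch; the '(#i)' glob flag needs 'setopt extendedglob'):

for f in /media/sda1/**/*.(#i)zip(.D); do
    unzip $f
done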
> blank characters from IFS, so that's more "while IFS= read -r"
This is all very true, but on the other hand read -r is not as portable (Solaris /bin/sh doesn't support it, as one example).
It's a trade-off between correctness, simplicity, and portability. For a one-time situation like this it didn't seem worthwhile to make it ironclad. But, you're definitely correct that I should have made the shortcomings of that solution clear in my post.
Paul D. Smith <psm…@nortel.com>
Try This:
cd /media/cdrecorder5/subdir/
ls | grep -i '\.zip$' | xargs -n 1 unzip    # one archive per unzip invocation
K
First there is the basic for loop, which is able to handle files with spaces in the filename; e.g., it works for files like "Sample File.txt".
for file in *; do echo "Updating timestamp for $file"; touch "$file"; done
Next, using find with xargs: this only seems to be able to handle one command at a time (unless you bother to write a script for it). It puts all the file names on one command line, rather than executing the command once per file, so echoing the message for each individual file seems impossible, and it won't work with commands unless they accept multiple filenames.
find . -print0 | xargs -0 touch
The next one is to use find with the -exec argument and some odd parameters (a small 'sh -c' script receives each file name as $0); this allows for multiple commands and spaces :):
find . -exec sh -c 'echo "Updating timestamp for $0"; touch "$0"' {} \;
Finally, if you want something that's a bit nicer for use in scripts, you can use read to stop the splitting of filenames:
find . | while read FILE
do
    echo "Updating timestamp for $FILE"
    touch "$FILE"
done
> What is the minimum mode requirement for my script file for other users to use?
444 is the minimum if you can invoke the interpreter directly. 555 is the minimum if you want to run it by shebang line magic. 755 is used most often though.
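For example (script name arbitrary):

$ chmod 444 myscript.sh
$ sh myscript.sh        # works: the interpreter only needs read access
$ ./myscript.sh         # fails: execute bit missing
$ chmod 555 myscript.sh
$ ./myscript.sh         # works: shebang magic needs read + execute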
> Why 711 not ok? Isn't "-x" means be able to run?
"scripts" are not executable programs.
They are the _input_ to an executable program (the interpreter).
The interpreter must be able to read the script if it is to interpret it…
Tad McClellan
No, that only works for binary executables. When you run a script, it's equivalent to running the interpreter with the script as a filename parameter, and the interpreter has to be able to read the file.
Barry Margolin
documented on: 1999.09.02 Thu 15:26:49
From the Expect manpage:
It is often useful to store passwords (or other private information) in Expect scripts. This is not recommended since anything that is stored on a computer is susceptible to being accessed by anyone. Thus, interactively prompting for passwords from a script is a smarter idea than embedding them literally. Nonetheless, sometimes such embedding is the only possibility.
Unfortunately, the UNIX file system has no direct way of creating scripts which are executable but unreadable. Systems which support setgid shell scripts may indirectly simulate this as follows:
Create the Expect script (that contains the secret data) as usual. Make its permissions be 750 (-rwxr-x---) and owned by a trusted group, i.e., a group which is allowed to read it. If necessary, create a new group for this purpose. Next, create a /bin/sh script with permissions 2751 (-rwxr-s--x) owned by the same group as before.
The result is a script which may be executed (and read) by anyone. When invoked, it runs the Expect script.
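A minimal sketch of such a wrapper (path, group, and script name are hypothetical):

#!/bin/sh
# wrapper: mode 2751 (-rwxr-s--x), same trusted group as the Expect script;
# the setgid bit lets it read the 750 script on behalf of any user
exec /usr/local/lib/secret.exp "$@"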
documented on: 2003.12.18 Thu
Newsgroups: comp.unix.shell Date: Fri, 12 Dec 2003 04:54:47 GMT
> Is there a way (in shell) to set the stdout as unbuffered?
>
> I want to catch my connection speed from the wvdial output, while
> still being able to see it.
>
> wvdial | tee /tmp/wvdial.log
>
> would normally do. However, the problem is that the
> /tmp/wvdial.log is empty, even though the wvdial has already made
> some output.
>
> I've been looking around in my local man pages, but didn't find the
> solution. I tried stty raw, but that didn't help.
The `problem' is the standard I/O library. When it is writing to a terminal it is line-buffered, but if it is writing to a pipe then it sets up full buffering.
So you have 2 choices.
1) As you are talking about a program that you have the source code to, you can tell the stdio library not to buffer the output.
Add a line like
setvbuf(stdout, NULL, _IOLBF, BUFSIZ);
near the start of the program. Or,
2) an answer closer to the spirit of your question: make the stdio library think it is writing to a terminal. There are a number of programs around which can do this; the most famous is 'expect', and it even comes with an example program which does this. On my system it is installed as
/usr/share/doc/expect/examples/unbuffer
and its contents are
#!/bin/sh
# \
exec expect -- "$0" ${1+"$@"}
# Description: unbuffer stdout of a program
# Author: Don Libes, NIST

set stty_init "-opost"
eval spawn -noecho $argv
set timeout -1
expect
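Applied to the original question (assuming "unbuffer" is installed somewhere in the PATH):

$ unbuffer wvdial | tee /tmp/wvdial.log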
There is also Dan Bernstein's 'pty' program.
Icarus Sparry
> > Is there a way (in shell) to set the stdout as unbuffered?
>
> 2) an answer closer to the spirit of your question: make the stdio library
> think it is writing to a terminal.
>
> There is also Dan Bernstein's 'pty' program.
A pty is described in Richard Stevens' book "Advanced Programming in the Unix Environment".
The source code is available at http://www.kohala.com/start/apue.html
Janis
> If I have a shell script which has a command that runs in the background, is it
> possible to get its return code?
Use wait:
$ sh -c 'false & wait $!; echo $?'
1
$ sh -c 'true & wait $!; echo $?'
0
$ echo aa bb cc dd & wait $! ; echo $?
aa bb cc dd
[1] 13529
[1]+  Done                    echo aa bb cc dd
0
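In a script, the same pattern looks like this (the background command is a placeholder):

#!/bin/sh
some_long_task &        # placeholder for the real background command
pid=$!
# ... do other work here ...
wait $pid               # wait for that specific job
echo "exit status: $?"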
Ref:
$ echo aa bb cc dd & ; wait $! ; echo $?
bash: syntax error near unexpected token `;'
documented on: 22:16:19