$ cat /proc/version
Linux version 2.4.7-10 (bhcompile@stripples.devel.redhat.com) (gcc version 2.96 20000731 (Red Hat Linux 7.1 2.96-98)) #1 Thu Sep 6 17:27:27 EDT 2001
$ gcc -dumpmachine
i386-redhat-linux
uname -rsm
but better just:
cat /etc/issue
$ uname -rsm
SunOS 5.6 sun4u
Linux 2.2.16 i686
echo " OS: `uname -s`, version `uname -r`"
[ "`uname -s`" = SunOS ] && echo " (CPU : `uname -p`)"
$ uname -a
SunOS zkpks001 5.6 Generic_105181-12 sun4u sparc
Linux sunshine 2.0.36 #2 Sun Feb 21 15:55:27 EST 1999 i586 unknown
# -v is not useful!:
tong@iitrc:~$ uname -v
Generic_105181-20
tong@sunny:~$ uname -v
#1 Tue Mar 7 21:07:39 EST 2000
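The checks above can be wrapped into one small detection snippet. This is a sketch in portable sh, mirroring the uname calls shown earlier:

```sh
#!/bin/sh
# Print the OS name and release; add the CPU type only on SunOS,
# mirroring the uname checks above.
os=$(uname -s)
rel=$(uname -r)
echo " OS: $os, version $rel"
if [ "$os" = SunOS ]; then
    echo " (CPU : $(uname -p))"
fi
```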
There was a post on linuxquestions.org that asked for GUI software to post a cd image. The concluding reply put it down to a lack of "huge demand for posting software". This really got me thinking…
> Well, it seems there isn't a huge demand for posting software, your only
> option may be the CLI app newspost, here are some examples. . .
Yeah, that's true. But to me, the reason is not merely a lack of huge demand, but the fundamental difference between the *niX and Window$ philosophies.
Window$' philosophy is everything GUI. For example, you can find hundreds of renaming tools, from mp3-specific renaming tools to general-purpose ones. So if you need to batch rename something, you first need to think, "Oh boy, of the hundred-plus renaming tools that I have, which one can help me this time?" For the hard requests, you may have to dig through the help file of each specific renaming tool, only to finally realize that none of them can help you this time.
For me, under *niX, the *single* CLI command 'rename' has never failed me, no matter how bizarre the renaming request might be.
$ rename
Usage: rename [-v] [-n] [-f] perlexpr [filenames]
OK, enough OT talk and back to newposting software. Again, to me the CLI makes more sense. Using the very same example OP uses — posting a cd/dvd image, a simple command,
newspost -q -y -n alt.binaries.test -s 'big image' the.package.iso.*.rar
will post the over-a-hundred rar files for you automatically. This is *much more* straightforward than launching a GUI and then click, click, click, click, click, click, click…
Moreover, you can use at or cron to schedule the posting, e.g., while you are sleeping. This is trivial with CLI tools, but you are at the mercy of the GUI tools to give you that feature.
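As a sketch, the posting can be queued overnight with at(1); the newspost flags are the ones from the example above, and at(1) may not be installed, hence the fallback message:

```sh
#!/bin/sh
# Sketch: schedule the posting for 3am with at(1).  The newspost command
# and flags are taken from the example above.
cmd="newspost -q -y -n alt.binaries.test -s 'big image' the.package.iso.*.rar"
echo "$cmd" | at 03:00 2>/dev/null || echo "would run at 03:00: $cmd"
```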
CLI tools have greater flexibility than any GUI tool could possibly have, yet they demand the smallest memory footprint. The GUI counterpart is normally ten times bigger, and more than ten times less flexible.
Still, all Window$ GUI tools proudly boast that they are standalone, all-in-one products that can solve all your needs. E.g., if you have installed three newsposting GUI tools, each of them will have its own NNTP protocol handling, news posting, par checking and repairing, or even NZB parsing and generating capabilities, instead of relying on common tools to handle the common tasks. I rarely see Window$ tools that build on other tools, because normal Window$ users never expect that to happen, and will freak out on seeing a tool that depends on more than three other tools.
The *niX tools, however, are well known for utilizing existing tools as much as possible. It is very common to see a Debian package that depends on more than ten other packages. The CLI tools are fundamental building blocks for achieving versatile goals. It is very common in this note collection for 3 to 5 (and even more) CLI tools to work together to achieve one goal. (One very common Window$-oriented question was, "how come grep doesn't have the capability to recursively descend into sub-directories and search within a specific type of files?" The answer is, "welcome to the *niX philosophy — a program should do only one thing but do it well".)
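The recursive-grep question above has a stock answer in this philosophy: combine the tools. A minimal sketch, with illustrative directory and file names:

```sh
#!/bin/sh
# "Recursive grep" by combining three tools, the *niX way:
# find selects the files, xargs batches them, grep searches.
mkdir -p demo/sub
echo 'int main(void) { return 0; }' > demo/sub/prog.c
echo 'main is mentioned here too'   > demo/sub/notes.txt
find demo -name '*.c' -print0 | xargs -0 grep -l 'main'
```

Only prog.c is reported: find restricts the search to .c files before grep ever runs.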
Let me explain using a vivid example. Each *niX CLI tool is one power tool in your tool collection. You use them to make whatever you'd like to make, be it a chair, desk, shelf, etc. There is one Window$ software company that sells a standalone all-in-one GUI tool, the chair-maker. The chair-maker is really convenient: all you need is to throw in the wood, nails and paint. It will then spit out a chair for you. Easy!!!
But wait, what about sizes and styles? Now the trouble begins: all kinds of chair-maker producers now produce all kinds of chair-makers that have slightly different controls over how to custom-build chairs of different sizes and styles. Some advanced ones will boast that their products are capable of building stools as well (since chairs and stools are really not that different). Some really advanced ones will boast that they can build all kinds of chairs, stools, and even bar-stools. The problem is, they need panels and panels of control buttons in order to make "all kinds" of them that you want. Now everybody is happy. But is it really so? What if you want to do something a little bit different? Also, can such a fancy chair-maker produce all the styles of chairs that exist in the world? Imagine how complicated just its control panels and buttons would have to be in order to produce a chair like this:
A normal Window$ user will typically end up with several chair-makers in their tool collection to make various chairs. Further, this is only about chairs; they also need makers for desks, shelves, etc. as well. I.e., for each thing they make, they end up with several "easy" but actually fancy, expensive and cumbersome tools. Now, looking back at the "simple" *niX power tools in your tool collection, which philosophy makes more sense? Which tool set is harder to learn in the long run? Which tools are you more likely to forget how to use after a long period of time? Before I entered the *niX world, I was put off by the common saying "*niX does not have as many tools as Window$ has". Now I just laugh at such stupid propaganda.
documented on: 2008-01-12
While the rest of the world points and clicks in a scary little world of icons, all alike, we in the world of Unix get to use a good old-fashioned CLI, or Command Line Interface. One reason why the command line has remained so pervasive in Unix environments is that the implementation, the Unix shell in its various incarnations, is actually pretty damn good. It allows the user to use the tools provided to build new tools. This, by any other name, is programming. And programming is the essential activity of computing. Without it, a computer, however expensive the materials of which it is made, is no more than an expensive heap of junk. At all levels beyond the bare transistors, it is programs that make it what it is.
The unfortunate legions of office workers today saddled with Windows are obliged to worship their computer as an all-knowing god that can do no wrong, that is always finding fault with them; and consequently develop a fierce hatred for it. This is inevitable. One cannot effectively use any tool without some understanding of its workings.
Almost as soon as one begins to use Unix, one is programming the shell. The first pipeline one builds,
ls -l | less
for instance, is a small shell program in itself. Shell programming proper begins when such combinations of commands are put in a file where they can be run repeatedly. Unix makes no distinction between executable files of one stripe or another. A text file with execute permission containing our little pipeline above is no different to it, in principle, than GNU Chess. This is a great advantage, in that it allows us to "cut our coat to suit our cloth", so to speak, in choosing the most appropriate programming tool for the task in hand, secure in the knowledge that whatever we choose to build in, our finished product will be treated by the system as just another program.
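Concretely, saving the pipeline in an executable file turns it into a tool like any other. A sketch ("ll" is an illustrative name, created in the current directory):

```sh
#!/bin/sh
# Put the pipeline in an executable file; from then on the system treats
# it like any other program.
cat > ll <<'EOF'
#!/bin/sh
ls -l "$@" | less
EOF
chmod +x ll
ls -l ll
```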
"There are many people who use UNIX or Linux who IMHO do not understand UNIX. UNIX is not just an operating system, it is a way of doing things, and the shell plays a key role by providing the glue that makes it work. The UNIX methodology relies heavily on reuse of a set of tools rather than on building monolithic applications. Even perl programmers often miss the point, writing the heart and soul of the application as perl script without making use of the UNIX toolkit."
"This is the Unix philosophy. Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface."
Most computer application programs can be thought of as software tools. If the only tool you have is a hammer, everything looks like a nail. The guy who writes letters in his spreadsheet program is a good example of this. Unix programs too are software tools. And Unix is a toolbox stuffed full of these tools. The more tools you have, the more you can do.
Two concepts in particular stand out that make the "toolbox" much more useful. The first is the idea of the filter.
The concept of a filter is a key idea for anyone who wishes to use Unix effectively, but especially for the programmer. It is one that migrants from other operating systems may find new and unusual.
So what is a filter? At the most basic level, a filter is a program that accepts input, transforms it, and outputs the transformed data. The idea of the filter is closely associated with several ideas that are central features of Unix: standard input and output, input/output redirection, and pipes.
Standard input and output refer to default places from which a program will take input and to which it will write output respectively. The standard input (STDIN) for a program running interactively at the command line is the keyboard; the standard output (STDOUT), the terminal screen.
With input/output redirection, a program can take input or send output someplace other than standard input or output — to a file, for instance. Redirection of STDIN is accomplished using the < symbol, redirection of STDOUT by >.
The second idea is the pipe. The pipe (|) is a junction that allows us to connect the standard output of one program with the standard input of another. A moment's thought should make the usefulness of this when combined with filters quite obvious. We can build quite complex programs, on the command line or in a shell script, simply by stringing filters together.
The combination of filters and pipes is very powerful, because it allows you a) to break down tasks and b) to pick the best tool for tackling each task. Many jobs that would have to be handled in a programming language (Perl, for example) in another computing environment, can be done under Unix by stringing together a few simple filters on the command line. Even when a programming language must be used for a particularly complicated filter, you are still saving a lot of development effort through doing as much as possible using existing tools.
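For instance, a word-frequency counter can be built entirely from filters and pipes, a job that would call for a program in many other environments (the input text here is illustrative):

```sh
#!/bin/sh
# Word-frequency count built purely from standard filters.
printf 'the cat sat on the mat\n' |
tr ' ' '\n' |   # one word per line
sort |          # bring identical words together
uniq -c |       # count each run of identical lines
sort -rn        # most frequent first
```

Each stage does one small transformation; the pipeline as a whole is the "program".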
Tools doing one thing well.
Filters using standard input and output.
Tools piped together to make new tools — no programming required!
I could go on, but the Unix Philosophy has been expounded in detail already by some of its most notable proponents. What I've written here should hopefully provide a concise summary. For further reading, you could do worse than start with two books by some Bell Labs luminaries: The Unix Programming Environment and Software Tools. I should also mention Mike Gancarz's The Unix Philosophy. This provides a useful foil to the other two books: shorter on examples and code, longer on the ideas behind them, and very much worth reading.
Here are some web resources:
The Elements of Style: Unix as Literature
In The Beginning Was The Command Line
Semiotics and GUI Design
GUIs Considered Harmful
Slashdot: David Korn Tells All
The Real World and Unix Philosophy: a rather more jaundiced view?
Clueless users are bad for debian
Copyright (c) 1995-2007 Paul Dunne,
documented on: 2008-01-12
September 1st, 1995 by Belinda Frazie
From the book "The Unix Philosophy"
Author: Mike Gancarz
Publisher: Digital Press
ISBN: 1-55558-123-4
The main tenets (each of which have sub-tenets) of the Unix philosophy are as follows:
Small is beautiful.
Make each program do one thing well.
Build a prototype as soon as possible.
Choose portability over efficiency.
Store numerical data in flat ASCII files.
Use software leverage to your advantage.
Use shell scripts to increase leverage and portability.
Avoid captive user interfaces.
Make every program a filter.
The author introduces each tenet with a simple, real-world example (or "case study"), then further explains why the tenet is important by including non-technical computer-world examples.
Tenet 1. Small is beautiful. The book offers an example of how Volkswagen ran an ad campaign with the phrase "small is beautiful" in the US to promote the VW bug, but the idea was generally ignored in the US until the price of oil went up and Americans learned the advantages of small cars. The author draws an analogy from these nouveau small-car appreciators to programmers at AT&T Bell Labs discovering that small programs were also easier to handle, maintain, and adapt than large programs.
In a non-Unix environment, a program to copy one file to another file might include, as in an example given in the book, twelve steps which do more than perform a file copy. The twelve steps perform extra tasks, some of which are considered "safety features" by some. The steps might include checking to see if the file exists, if the output files are empty, and prompting users to see if they know what they're doing (for example, "Are you really really sure you want to do this, and does your mother know you're doing this?"), etc. Just one step of the sequence might be the actual copy command. A Unix program (or command) would only include the one copy command step. Other small programs would each do the other 11 steps and could be used together if the Unix user wanted to use these extra steps. Although the author purposefully steers away from giving Unix examples until near the end of the book, I would have liked to see several Unix commands strung together to accomplish all the tasks described by the twelve steps.
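In that spirit, here is a hedged sketch of stringing small commands together to recover some of those "12-step" safety checks around the one-step copy (filenames are illustrative):

```sh
#!/bin/sh
# Compose the "safety features" from small tools instead of one big copy
# program; the copy itself stays a single command.
src=source.txt
dst=dest.txt
printf 'hello\n' > "$src"               # create a sample input file
[ -f "$src" ] || { echo "no such file: $src"; exit 1; }
[ ! -e "$dst" ] || echo "warning: $dst exists and will be overwritten"
cp "$src" "$dst"                        # the actual copy: one command
cmp -s "$src" "$dst" && echo "copy verified"
```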
Tenet 4. Choose portability over efficiency. The example given here is of the Atari 2600, which was the first successful home video game. Most of the code for the game cartridges was very efficient but nonportable. With the advent of new hardware (the "5200"), the code had to be rewritten to run on the 5200, which took time and money. The author proposes that Atari would have been the largest supplier of software in the world if its code had been portable.
There is a three-page analogy of selling Tupperware to the "use software leverage to your advantage" tenet. Who would have realized a multilevel marketing scheme is a good way to write software?
A sub-tenet of the leverage tenet is allow other people to use your code to leverage their own work. Many programmers hoard their source code. The author states that "Unix owes much of its success to the fact that its developers saw no particular need to retain strong control of its source code." Unix source code was originally fairly inexpensive compared to the cost of developing a new operating system, and companies started choosing Unix as the platform to build their software on. Companies who chose Unix spent their effort and money on developing their applications, rather than on maintaining and developing an operating system.
documented on: 2008-01-12
Newsgroups: comp.os.linux.misc
> I'm a new linux user, and I'm confused on one aspect of it. Where is
> the appropriate place to put installed software packages? I see
> some want to go to /opt, and others prefer /usr/local. What is the
> current standard for this?
I suggest using /opt for packages that insist on being kept together under a directory. This is usually the case for commercial packages such as WordPerfect. Use /usr/local for packages that distribute themselves into standard sub-directories such as bin, lib, man, etc, src, include, and so on. But "whole" packages can go under /usr/local as well.
Bob T.
Read the Filesystem Hierarchy Standard ( http://www.pathname.com/fhs/ )
Markus Kossmann
Newsgroups: comp.os.linux.setup

> Are there any rules where to install software to - /usr and
> /usr/local? Any guidance rules?
For the most part, /usr is software installed by a distribution or pre-packaged binaries. /usr/local is usually where 'locally' compiled packages go; stuff you build from source. If /usr and /usr/local are separate partitions, it allows you to easily upgrade or re-install the OS without mucking with stuff you've already compiled.
(__) Doug Alcorn
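The two conventions described above can be sketched side by side; this uses a temp root so it is safe to run, and the package names are illustrative:

```sh
#!/bin/sh
# A self-contained package keeps everything under one /opt directory,
# while a source-built package spreads into /usr/local subdirectories.
root=$(mktemp -d)
mkdir -p "$root/opt/wordperfect"                      # all in one place
mkdir -p "$root/usr/local/bin" "$root/usr/local/lib" \
         "$root/usr/local/man" "$root/usr/local/include"
ls "$root/usr/local"
```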
mod 744: can list files but can't execute
mod 711: can't list files but can execute designated file
so, strong protection should be 711
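A quick demonstration of mode 711 on a directory (run as a non-root user; root bypasses permission checks):

```sh
#!/bin/sh
# With mode 711 you can enter the directory and read a file whose name
# you know, but you cannot list its contents.
mkdir -p private
echo secret > private/known.txt
chmod 644 private/known.txt
chmod 711 private
cat private/known.txt                   # works: the name is known
ls private 2>/dev/null || echo "listing denied"
```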
Newsgroups: comp.unix.shell
> I used to use $LOGNAME or $USER to avoid one layer of system call of
> `whoami`. Later I found out that in some systems or shells, they
> don't exist, so I backed off to use `whoami`.
>
> Now I'm thinking to re-use the variable approach again...
>
> I want to pick a generally-available-to-all-shell variable and
> stick to it, across all shells
IIRC, login sets LOGNAME. Which shell you use won't affect whether or not login sets LOGNAME.
Derek M. Flynn
from man login:
The basic environment is initialized to:
HOME=your-login-directory
LOGNAME=your-login-name
PATH=/usr/bin:
SHELL=last-field-of-passwd-entry
MAIL=/var/mail/your-login-name
TZ=timezone-specification
> Which shell you use won't affect whether or not
> login sets LOGNAME.
However, programs may be invoked without going through login. For instance, login isn't involved if you run a program via rsh or cron.
Barry Margolin
> I used to use $LOGNAME or $USER to avoid one layer of system call of
> `whoami`. Later I found out that in some systems or shells, they
> don't exist, so I backed off to use `whoami`.
>
> Now I'm thinking to re-use the variable approach again. Can somebody
> give me some general ideas like "originally where they are defined",
> "which shell have them", etc.
$LOGNAME is the POSIX-blessed spelling of this variable; its origins are in the AT&T System V (and even some of its predecessors) world. $USER, however, has at least as many adherents; its origins are in the BSD 4.0 (and perhaps even older) releases.
> I want to pick a generally-available-to-all-shell variable and
> stick to it, across all shells, providing the default value if the
> shell itself doesn't define it.
Despite its flaws, I tend to favor following POSIX whenever making an arbitrary choice between a set of actions, so I'd suggest using $LOGNAME as your convention. You can initialize it with:
: ${LOGNAME=${USER=`whoami`}}
("If $LOGNAME is not defined, set it to the value of $USER, which, in its turn, is set to `whoami` if it is also not already defined.")
Ken Pizzini
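The idiom can be exercised end to end; this sketch unsets both variables first so the fallback actually fires (whoami must be on PATH):

```sh
#!/bin/sh
# Exercise the fallback idiom above with both variables unset.
unset LOGNAME USER
: ${LOGNAME=${USER=`whoami`}}
echo "LOGNAME=$LOGNAME USER=$USER"
```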
documented on: 2000.08.07 Mon 22:30:32
> The /bin/echo I used to use in Solaris understands \n and \t stuffs
> by default. Is there any trick I can play so that I don't need to
> specify the -e parameter for echo?
Use printf instead. Printing \t \n type things with echo is completely non-portable any more. Some echos use `echo 'foo\c'` to suppress the trailing newline, some need -e, some don't.
> The reason I'm asking is that debian is the only un*x I've used whose
> /bin/echo doesn't interpret \n... by default. I've already written
> tons of scripts using /bin/echo. Please help.
hmm Digital UNIX (aka OSF/4) needs -e, OpenBSD /bin/echo needs -e, GNU echo needs -e, bash builtin echo needs -e … (OpenBSD's /bin/sh has a builtin echo which does not need -e but /bin/echo does)
You really can't depend on any echo doing anything consistently, unfortunately. printf, on the other hand, behaves perfectly consistently on GNU/Linux, OpenBSD, SunOS 5.7, Digital UNIX (OSF/4), and I presume just about everything else (those are all I have access to).
if you don't want to use printf (or can't) you could replace your echo commands with $echo and do a test like so:
if [ `echo "foo\n"` = "foo\n" ] ; then
    echo="echo -e"
else
    echo="echo"
fi
which may or may not be reliable…
My advice is use printf unless your script must work on stripped-down systems like rescue floppies or / without /usr mounted (or anything else abnormal).
Ethan Benson
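A few printf idioms that cover the common echo uses portably:

```sh
#!/bin/sh
# printf behaves the same everywhere; these cover the usual echo cases.
printf 'tab:\there\n'             # \t and \n in the format are always interpreted
printf '%s\n' 'literal \n here'   # %s never interprets backslashes in arguments
printf 'no trailing newline'      # like echo -n, but portable
printf '\n'
```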
documented on: 2000.06.02 Fri 03:40:13
from Intro.1.html:
Pages of special interest are categorized as follows:
1B Commands found only in the SunOS/BSD Compatibility Package. Refer to the Source Compatibility Guide for more information.
1C Commands for communicating with other systems.
1F Commands associated with Form and Menu Language Interpreter (FMLI).
1S Commands specific to the SunOS system.
OTHER SECTIONS See these sections of the man Pages(1M): System Administration Commands for more information.
o Section 1M in this manual for system maintenance commands.
o Section 4 of this manual for information on file formats.
o Section 5 of this manual for descriptions of publicly available files and miscellaneous information pages.
o Section 6 in this manual for computer demonstrations.
choose section
$ man printf -- show man for PRINTF(1)
$ man -al printf
printf (1)   -M /opt/gnu/man
printf (1)   -M /usr/man
printf (3s)  -M /usr/man
printf (3b)  -M /usr/man
printf (1)   -M /usr/share/man
printf (3s)  -M /usr/share/man
printf (3b)  -M /usr/share/man
$ man -s 3s printf -- show man for Standard C I/O Functions
choose path
$ man -al test
test (1)   -M /opt/gnu/man
test (1)   -M /usr/man
test (1f)  -M /usr/man
test (1b)  -M /usr/man
test (1)   -M /usr/share/man
test (1f)  -M /usr/share/man
test (1b)  -M /usr/share/man
$ man -M /usr/man test -- show man other than default gnu version
-a Show all manual pages matching name within the MANPATH search path. Manual pages are displayed in the order found.
-l List all manual pages found matching name within the search path.
-s section ... Specify sections of the manual for man to search. The directories searched for name is limited to those specified by section. section can be a digit (perhaps followed by one or more letters), a word (for example: local, new, old, public), or a letter. To specify multiple sections, separate each section with a comma. This option overrides the MANPATH environment variable and the man.cf file. See Search Path below for an explanation of how man conducts its search.
-M path Specify an alternate search path for manual pages. This option overrides the MANPATH environment variable.
-d Debug. Displays what a section-specifier evaluates to, the method used for searching, and the paths searched by man.
In Linux, when man looks for a man page for a particular command, it consults the current PATH value. Searching from the first path in PATH to the last, man adds the following suffixes to each path and probes for the existence of the resulting man directories.
/man /MAN /../man /../man1 /../man8
If you have different versions of a command in different paths, this approach guarantees that the man page you get matches the executable that you'll execute.
Note that in RH, there are no man directories associated with the sbin directories (e.g., /sbin, /usr/sbin, etc). The man pages for sbin tools are in man8 directories. For example, the man page for mkswap, which is under /sbin, is at /usr/share/man/man8/mkswap.8.gz.
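The probing described above can be sketched in sh; a demo tree is created so the loop is guaranteed at least one hit:

```sh
#!/bin/sh
# Derive candidate man directories from each $PATH component, the way
# the text above describes man's search.
demo=$(mktemp -d)
mkdir -p "$demo/bin" "$demo/man"
PATH="$demo/bin:$PATH"
echo "$PATH" | tr ':' '\n' | while read -r dir; do
    for sub in /man /MAN /../man /../man1 /../man8; do
        if [ -d "$dir$sub" ]; then
            echo "candidate: $dir$sub"
        fi
    done
done
```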
The default list is in /etc/man.config. But it is generated automatically from man.conf.in by the configure script, so it is better to change the profile instead.
An empty substring of MANPATH denotes the default list!
So set it like this:
MANPATH=::/usr/share/man:/opt/man
The MANPATH setting overrides the default. So if you set 'MANPATH=/opt/man', none of the default man pages can be found.
The default is in /etc/man.config. Adding 'MANPATH /opt/man' to the proper section will do; no need to reset/restart anything.
documented on: 1999.10.28 Thu 17:07:41
RET     Follow a node reference near point, like mouse-2.
n       Move to the "next" node of this node.
p       Move to the "previous" node of this node.
u       Move "up" from this node.
m       Pick menu item specified by name (or abbreviation).
        Picking a menu item causes another node to be selected.
d       Go to the Info directory node.
f       Follow a cross reference. Reads name of reference.
l       Move to the last node you were at.
i       Look up a topic in this file's Index and move to that node.
,       (comma) Move to the next match from a previous `i' command.
t       Go to the Top node of this file.
>       Go to the final node in this file.
[       Go backward one node, considering all nodes as forming one sequence.
]       Go forward one node, considering all nodes as forming one sequence.
Moving within a node:
SPC     Normally, scroll forward a full screen. Once you scroll far
        enough in a node that its menu appears on the screen but after
        point, the next scroll moves into its first subnode. When after
        all menu items (or if there is no menu), move up to the parent node.
DEL     Normally, scroll backward. If the beginning of the buffer is
        already visible, try to go to the previous menu entry, or up if
        there is none.
b       Go to beginning of node.
Advanced commands:
q       Quit Info: reselect previously selected buffer.
e       Edit contents of selected node.
1       Pick first item in node's menu.
2, 3, 4, 5   Pick second ... fifth item in node's menu.
g       Move to node specified by name. You may include a filename as
        well, as (FILENAME)NODENAME.
s       Search through this Info file for specified regexp, and select
        the node in which the next occurrence is found.
TAB     Move cursor to next cross-reference or menu item.
M-TAB   Move cursor to previous cross-reference or menu item.
Info has powerful searching facilities that let you find things quickly. You can search either the manual indices or its text.
Since most subjects related to what the manual describes should be indexed, you should try the index search first. The `i' command looks up a subject in the indices. The `i' command finds all index entries which include the string you typed _as a substring_. Type `,' one or more times to go through additional index entries which match your subject.
The `s' command allows you to search a whole file for a string. To search for the same string again, just `s' followed by <RET> will do. The file's nodes are scanned in the order they appear in the file, which has no necessary relationship to the order that they may be in in the tree structure of menus and `next' pointers.
apropos is actually just the -k option to the man(1) command.
DIAGNOSTICS
    /usr/share/man/windex: No such file or directory
        This database does not exist. catman(1M) must be run to create it.
$ catman -wp
/shared/local/man: search the sections lexicographically
/shared/share/man: from man.cf, MANSECTS=1,1m,1c,1f,1s,1b,2,3,3c,3s,3x,3xc,3xn,3r,3t,3n,3m,3k,3g,3e,3b,9f,9s,9e,9,4,5,7,7d,7i,7m,7p,7fs,4b,6,l,n
/shared/opt/SUNWrtvc/man: search the sections lexicographically
...
mandir path = /shared/local/man
/usr/bin/sh /usr/lib/makewhatis /shared/local/man

mandir path = /shared/share/man
/usr/bin/sh /usr/lib/makewhatis /shared/share/man
...
$ catman -w
DESCRIPTION
    catman creates the preformatted versions of the on-line manual from
    the nroff(1) input files.

    catman also creates the windex database file in the directories
    specified by the MANPATH or the -M option.
-p Print what would be done instead of doing it.
-w Only create the windex database that is used by whatis(1) and the man(1) -f and -k options. No manual reformatting is done.
echo aaaa
echo aaaa > `tty`
echo aaaa > /dev/pts/25
Same result
tty
gives /dev/pts/25
host:~/tr>echo "Time to logoff" >/dev/pts/46
/dev/pts/46: Permission denied.
documented on: 02-01-99 18:42:13
> Does anybody have an idea where I might find the
> escape sequences to do...
They're all documented in ctlseqs.ms (some, such as raise and deiconify, are implemented in dtterm and XFree86 xterm, but not in 'standard' xterm).
The XFree86 3.3.3.1 xterm supports ANSI color and VT220 emulation. There's an FAQ at
> Set header to 'string'
> set Icon header to 'string'
> Open window (when closed)
> raise window to top
my PS1 reads:
PS1='[\h] \W: \[\033]2;[\h] ${PWD}\007\033]1;\W\007\]'
\033]2;[\h] ${PWD}\007 sets the window title and \033]1;\W\007 sets the icon name. I use $PWD instead of \w to avoid getting the directory relative to ~; I prefer to have the whole thing on the title bar.
I can't help you with de-iconifying and raising, sorry. I would love to have a comprehensive list if anybody knows of one.
stasinos
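The same escapes can be wrapped in two tiny helpers; this is a sketch (OSC 2 sets the window title, OSC 1 the icon name, as in the PS1 above; xterm and most emulators honour these):

```sh
#!/bin/sh
# Minimal helpers for the title/icon escape sequences discussed above.
set_title() { printf '\033]2;%s\007' "$1"; }
set_icon()  { printf '\033]1;%s\007' "$1"; }
set_title "new window title"
set_icon  "new icon name"
```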
*Tags*: escape code
Newsgroups: comp.unix.shell

> Can someone tell me how to use the escape sequence to push
> characters into the input buffer?
This can't be done in general.
> 2nd, is there a web page that explain all the escape sequences?
See
Video Terminal Information (Richard S. Shuford) <URL: http://www.cs.utk.edu/~shuford/terminal_index.html >
If you're talking about MS-DOS ANSI.SYS, see http://www.syc.k12.pa.us/doshelp/
Martin
> PS: is there a website with the color codes listed
That page has color code info.
>PS: is there a website with the color codes listed
Here are two sites I reference for ANSI/VT-100 escape sequences:
http://www.graphcomp.com/info/specs/ansi_col.html http://myhome.elim.net/~hwlee/works/jprj/term_tech/
HTH, Jonesy
> > How can I fix the "unknown terminal type" problem?:
> >
> > $ clear
> > 'xterm': unknown terminal type.
> In any case, xterm should be in your termcap/terminfo database,
> so having clear complain that it's an unknown terminal
> type indicates to me that the database has not been
> properly installed. What system are you using?
I fixed it by pointing the env var to
TERMINFO=/usr/share/lib/terminfo
Newsgroups: gmane.linux.debian.user Date: Thu, 12 Jun 2008
Hi,
One of the applications that I use uses the perl Term::Screen module. It was running fine before, but recently when I launch it, I get the following error:
Can't find a valid termcap file at /usr/local/share/perl/5.8.8/Term/Screen.pm line 95
My term is xterm:
$ env | grep TERM=
TERM=xterm
How can I fix that?
While trying to find the answer myself, I found the following line in the Linux Frequently Asked Questions:
"Ensure that there is a termcap entry for xterm in /etc/termcap"
But my current Debian doesn't have such /etc/termcap file.
Please help
thanks
A
gunzip < /usr/share/doc/minicom/term/termcap.short.gz > /etc/termcap
solves the problem.
There is also a '/usr/share/doc/minicom/term/termcap.long.gz' file.
The 'termcap(5)' can be accessed by:
man 5 termcap
documented on: 2008-07-02, xpt.
> Sometimes if I cat a binary file by mistake, the fonts are gone, normal
> characters will not show up any more. Instead, they are weird or simply
> blank. Sometimes when I break in the middle of less my screen goes
> entirely reverse video. How can I reset everything back?
The binary file has sent your terminal some control codes which have placed your terminal in some non-standard mode (i.e., it is probably not a pts/tty thing). The first thing to try is a "tput reset" and/or a "tput init". If those don't help and you are working with a terminal which is emulating some descendant of the VT100 terminal, you can also try a "printf \\033c". (And if you are using an Xterm, you can also pull up the menu with Control-LeftMouseButton and choose the "Do Full Reset" option.)
Ken Pizzini
Thanks a lot Ken. Just FYI, "Do Full Reset" is ctrl-midMouse on my system (Sun OS).
> i had this problem a few minutes ago.
> tput reset/init didn't work..
> however printf \\033c did the trick.
>
> could you tell me what does printf \\033c mean exactly?
printf \\033c sends the sequence <ESC><c> to the standard output, which for non-redirected interactive sessions means your screen. <ESC><c> tells vt100-like terminals to do a full reset, thus (among other things) placing them in normal-text mode.
Ken Pizzini
documented on: 08-16-99
Newsgroups: comp.os.linux.misc
> Whilst working in a vga bash shell I accidentally "more"d a binary file
> (I think it was boot.map). For some reason this caused the screen font
> for that shell to become corrupted. I could still type, and the
> commands were recognised, but the font just showed non-ascii characters.
That is one way to corrupt a terminal, but there are others too! And there are several ways to correct it, as the multiple responses to your post all indicate. However, there are some things you can set up to make recovery easier.
Put this into your ~/.bashrc file:
alias sane='echo -e "\\033c";tput is2;stty sane line 1 rows $LINES columns $COLUMNS'
Then you have two options. You can type "reset", which sometimes works, or you can type "sane" which will always work.
Floyd L. Davidson
> Then you have two options. You can type "reset", which
> sometimes works,
Sometimes you have to do it twice. No idea why.
> alias sane='echo -e "\\033c";tput is2;stty sane line 1 rows $LINES columns $COLUMNS'
What has happened is "you" told the terminal to switch to the ANSI "alternate character set", which is for line drawing and other special things. The control-N character (^N, 14, octal 016, SO = shift out) switches to the alternate character set, and control-O (^O, 15, octal 017, SI = shift in) switches back to normal, on an ANSI-compatible terminal.
It is the "echo <escape>c" in the above that resets the text because <escape>c is the code for "reset to initial state". This is a powerful restoration, but it may do too much, like erasing the scrollback. Instead you can get back to an ordinary character set with 'echo -e "\\017"'.
Before doing anything, you should press control-U to erase any garbage in your input buffer. Then you can directly type
echo ^V^O <return>
or use a defined alias
alias earthling='echo -e "\\017"'
alias martian='echo -e "\\016"'
and type "earthling". Finally, you can probably just press
^V^O<return>
The shell will helpfully echo the separate characters ^ and O; you will get an error message about an undefined command, but a real control-O will have been written to the terminal.
(By the way, in the shell, ^V (control-V) means to take the next character "verbatim".)
Donald Arseneau
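The SO/SI codes mentioned above can likewise be inspected safely, a sketch: piping them to od shows the bytes rather than switching the terminal's character set.

```shell
# ^N (SO, shift out) is octal 016; ^O (SI, shift in) is octal 017.
printf '\016\017' | od -An -b
```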
Any output is laddered:
cd /
ls -1
initrd
      initrd.img
                install
                       java
                           lfs
                              lib
                                 linux
Use the stty command:
stty onlcr
[-]onlcr translate newline to carriage return-newline
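The effect can be seen in the raw bytes, a sketch: a bare \n moves the cursor down without returning it to column 1, which is exactly what produces the laddering; onlcr inserts the missing \r.

```shell
printf 'a\nb\n' | od -An -c      # bare newlines, as the program wrote them
printf 'a\r\nb\r\n' | od -An -c  # what onlcr puts on the wire
```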
Your rubout key generates ^H but your erase character is set to something else, most likely ^?, thus the ^H is just an unhandled control character.
Possible solutions are setting the erase char to ^H (stty erase ^H) or figuring out why your rubout key generates ^H and fixing that.
> Backspacing works fine in the shell, though.
You don't say what shell you're using, but most likely it's bash or tcsh which treat both ^H and ^? as rubout characters by default.
documented on: 04-19-99 22:07:46
To: debian-user@lists.debian.org
Date: Fri, 13 Oct 2006 11:16:18 -0400
> In my script, I always use 'stty sane' to set my tty to a sane state, after
> temporarily changing any tty attributes.
>
> But for the past month or two, my BS key has often stopped working; only today did I
> finally track it down to the 'stty sane' statement:
>
> $ stty sane
> $ asdsf^H^H^H
>
> I.e., my BS key is producing ^H instead of erasing the previous letter afterward.
>
> So what should I do? I still like to use 'stty sane' to reset my tty, and I
> still want my BS key to be configured as ^H, instead of something else.
>
> Can I have both?
SANE=$(stty -g)
stty "$SANE"
Bill Marcum
-g, --save print all current settings in a stty-readable form
stty erase ^H, perhaps?
Lubos Vrbka
>$ echo -e "a\tb"
>a b
>
>the 'b' will show up at the last column of my terminal window, no matter
>how wide I set it. It has been like that for years, and affects only
>terminal, not xterm.
Two possible answers:
Use the "tabs" command, for example "tabs -8". This will not work on all terminals, however.
Tell the tty driver to fake it: "stty tab3". This will always work, but will slow down already slow (e.g., 110 baud ;-) connections.
Ken Pizzini
thanks Ken and Eric. Problem solved!
I found out that it was tset that screwed up the tab settings. And it cannot be fixed with tset or reset.
tabs -8 can fix it.
But, if I change the TERM from dtterm to vt100 and issue a tset, the tab settings will be fine.
I still don't know what's going on, but thanks to your help, the problem is solved.
solved by 'stty -tabs'
documented on: 07-06-99 11:12:35
I noticed a long time ago that the display of the tab char was wrong on the first line of the output, and only on the first line. E.g.,
$ jot 3 | cat -n
     1  1
     2  2
     3  3
Today I noticed it *only* happens in my login shell. So I did a diff on the 'stty -a' output, and found the reason is tab3
So I changed it to tab0, which is the normal value, and the problem was solved.
stty tab0
Looking into .profile, I found that this was caused by 'stty -tabs'. Changing it to 'stty tab0' completely solved the problem.
documented on: 2001.04.14 Sat 21:58:59
>Among all the unix servers I can log on, most of them I can use alt-b /
>alt-f to jump between words in bash. But one of them which I must work
>on with shows \342\346 or just plain bf when I hit alt-b, alt-f.
Okay, so it looks like the remote machine is seeing either Meta-b/Meta-f or a high-order-bit stripped b/f.
> I want to know where the problem is and how I can fix it.
Well if you're seeing \342 and \346 the problem must be with the software which you're attempting to send this character to, not with the client, the connection, nor the telnet/rlogin/ssh/whatever server that you're connecting to.
>I think it is a system level problem rather that a bash >problem.
I think that it is more likely a bash configuration problem, because a system level problem would not pass a \342 or \346 along far enough to be displayed that way.
> Because:
>
>- The problem only occurs in 1 workstation that I log in; for the rest of the
>workstations, alt-b works fine. My local setting is the same for all of
>them.
This does tend to absolve your local client of responsibility, but otherwise doesn't narrow it down too much.
>- The problem is not related only with bash, but tcsh also. The alt-b is
>not working in tcsh either.
Hmm. That is suspicious. Perhaps it is a termcap/terminfo issue, which is sorta half-way between what I was thinking of when I was discounting the likelihood of a "system" problem and an application problem.
>- The <ESC> b works as expected.
Okay, so the applications themselves allow editing in the expected manner.
>So, maybe it is a term setting problem? Can I fix it locally?
Sounds like it could be a bogus termcap/terminfo entry on the remote host.
>I.e., how can I know whether the problem comes from the mis-configuration
>of bash, of xterm or termcap... or something that I don't know. How
>can I separate them?
Try this in bash:
cat >tmp.$$ <<=
set input-meta on
=
bind -f tmp.$$
rm tmp.$$
and then see if the meta key works for you. If so, this means that the termcap/terminfo entry for your $TERM on that system is failing to set the "km"/"has_meta_key" flag.
Ken Pizzini
Newsgroups: comp.unix.shell
*Tags*: avoid ESC sequence characters, control characters
> When using the bash interactively, my directory listing shows up
> nicely in colors. I know it is done using ESC sequences. But when I
> use typescript to record the history, I don't want those ESC
> sequences show up in the script file.
> I tried to set TERM=vt100 and clear TERMINFO but it doesn't help.
That's a bug-by-design in color ls. It's hardcoded to use the output of dircolors, which makes a table (usually the environment variable $LS_COLORS) telling which terminal types do ANSI color. The simplest way to disable it is to unsetenv LS_COLORS.
Thomas E. Dickey
unalias ls
Cyrille.
This works:
TERM= TERMINFO=
Tong
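Another option, not from the thread: record as usual, then strip the ANSI color sequences from the recorded file afterwards. A sketch; the sample file created here stands in for a real script(1) recording:

```shell
# Make a sample "typescript" containing color codes (stand-in for a
# real script(1) recording).
printf 'a\033[31mred\033[0mb\n' > typescript

# Strip ANSI SGR (color) sequences of the form ESC [ ... m
ESC=`printf '\033'`
sed "s/${ESC}\[[0-9;]*m//g" typescript > typescript.clean
cat typescript.clean   # prints: aredb
```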
The script will record some weird ESC sequence characters around the bash prompt:
tong@sunny:~$ ESC[?1lESC>ESC[?1hESC=^M
tong@sunny:~$ phd ESC[?1lESC>ESC[?1hESC=^M
$ ESC[?1lESC>ESC[?1hESC=^M
$ psh ESC[?1lESC>ESC[?1hESC=^M
tong@sunny:~$ ESC[?1lESC>ESC[?1hESC=^M
bash is trying to be smart
set the TERM to an invalid entry:
TERM=vt444
to test, use clear:
$ clear
'vt444': unknown terminal type.
!! |
OK!
*Tags*: LD_LIBRARY_PATH
/etc/ld.so.conf
# find where it is
ldconfig -p | grep -i libxfce.so
# include/update new dir?
ldconfig -nv /usr/lib/gtk/themes/engines
# show all
$ ldconfig -p
432 libs found in cache `/etc/ld.so.cache' (version 1.7.0)
        libz.so.1 (libc6) => /usr/lib/libz.so.1
        libz.so.1 (ELF) => /usr/i486-linux-libc5/lib/libz.so.1
ldconfig - determine run-time link bindings
SYNOPSIS
       ldconfig [-DvnNX] [-f conf] [-C cache] [-r root] directory …
       ldconfig -l [-Dv] library …
       ldconfig -p
DESCRIPTION
       ldconfig creates the necessary links and cache (for use by the run-time linker, ld.so) to the most recent shared libraries found in the directories specified on the command line, in the file /etc/ld.so.conf, and in the trusted directories (/usr/lib and /lib). ldconfig checks the header and file names of the libraries it encounters when determining which versions should have their links updated. ldconfig ignores symbolic links when scanning for libraries.
       ldconfig should normally be run by the super-user as it may require write permission on some root owned directories and files. It is normally run automatically at bootup, from /etc/rc, or manually whenever new DLL's are installed.
OPTIONS -D Debug mode. Implies -N and -X.
-v Verbose mode. Print current version number, the name of each directory as it is scanned and any links that are created.
-n Only process directories specified on the command line. Don't process the trusted directories (/usr/lib and /lib) nor those specified in /etc/ld.so.conf. Implies -N.
-p Print the lists of directories and candidate libraries stored in the current cache.
EXAMPLES
# /sbin/ldconfig -n /lib
as root after the installation of a new DLL, will properly update the shared library symbolic links in /lib.
FILES
       /lib/ld.so       execution time linker/loader
       /etc/ld.so.conf  File containing a list of colon, space, tab, newline, or comma separated directories in which to search for libraries.
documented on: 2000.11.20 Mon 21:23:45
> I would like to check for a prior instance of a script, and if > running, exit.
I would use a lockfile, somewhat along the lines of:
#!/bin/sh
myLOCKFILE="$HOME/.`basename $0`..LCK"
if [ -f $myLOCKFILE ]; then
    echo "Killroy woz 'ere"
    exit 1
fi
trap "rm -f $myLOCKFILE" 0 1 2 3 15
touch $myLOCKFILE >/dev/null 2>&1
echo "Just me and my shadow..."
exit 0
# eof
Klaus
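A variation on the same idea, not from the thread: mkdir either creates the directory or fails, atomically, which closes the small window between the -f test and the touch above. A sketch, packaged as a function for illustration (a real script would keep a trap for cleanup, as in the version above):

```shell
# Lock via a directory: creation is atomic, so two instances cannot
# both believe they acquired the lock.
run_once() {
    lockdir="${TMPDIR:-/tmp}/$1..LCK"
    if ! mkdir "$lockdir" 2>/dev/null; then
        echo "Killroy woz 'ere"
        return 1
    fi
    # ... the real work would happen here ...
    echo "Just me and my shadow..."
    rmdir "$lockdir"
}
```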
> I have a perl script to change environment variables,
> [...]
> Now, what I am trying to figure out is how I can force my Perl script to
> run in the current shell, instead of forking a subshell.
One thing you could do is have the Perl script output shell commands that set the variables, and eval it from the shell. So have your Perl script output something like this (assuming it is to be run from a Bourne-like shell such as sh, ksh or bash):
PATH=.:/bin:/usr/bin; export PATH
and then from your shell script do:
eval `perlscript.pl`
Diego
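The round trip can be sketched end to end; emit_env below is a hypothetical stand-in for the Perl script, emitting Bourne-shell assignments on stdout:

```shell
# Stand-in for perlscript.pl: prints shell assignments on stdout.
emit_env() {
    printf 'FOO=bar; export FOO\n'
    printf 'GREETING=hello; export GREETING\n'
}

# eval runs that output in the *current* shell, so the variables stick.
eval "`emit_env`"
echo "$FOO $GREETING"   # prints: bar hello
```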
documented on: 06-09-99 19:17:57
>worst case I can only use the bourne shell
>TRIM_PATH=`echo $1 | tr '\\' '/'`
>...
>Echo is the only command I can
>think of to pipe the data into something like sed or tr. But I don't want to
>assume that the system the script runs on will have /usr/ucb/echo.
Well, there's always here documents:
TRIM_PATH=`tr '\\\\' '/' <<END
$1
END`
here doc can have substitution, as "" does!
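A runnable version of the trick above, with a sample value standing in for $1 (the a\b\c input is made up for illustration):

```shell
# Simulate the script's $1 argument with a backslashed path.
set -- 'a\b\c'

# Inside backquotes, \\\\ collapses to \\, which tr reads as one
# literal backslash; the here document feeds $1 to tr without echo.
TRIM_PATH=`tr '\\\\' '/' <<END
$1
END`
echo "$TRIM_PATH"   # prints: a/b/c
```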
documented on: 02-26-99 01:00:58
Newsgroups: comp.os.linux.x
>>Is there a way to start up an application without leaving an xterm
>>floating around?
>>
> Either you start it with the "Run" command, if present in your window
> manager / desktop environment, or type an & after the command.
> Or, if you realise you've forgotten to background it with the &, press
> Ctrl+Z, (this will suspend the app and put it in the background). Then issue
> the bg command to make it continue running in the background. You can then
> close your xterm.
Starting an app with an additional & works in most cases, but not all apps will respond as desired if you close the xterm it was started from. If an app still terminates when you close the xterm, even though you added the &, try starting the app with nohup instead.
Syntax is simple:
nohup appname
R.K.Aa
> > how can i set up the $PATH for all user at initial boot up time ?
None of these solutions will address the problem for all the existing users. The PATH variable is set by the user's shell. If the default shell is csh, there's no standard way to ensure that they get a standard known environment at login. Users can edit their own shell's init files and change the environment.
ksh and sh will source /etc/profile, so that will solve your problem if you force users to only use those shells.
Or you can design and implement a "standard user environment" which maintains a standard set of shell scripts in a known place, which users' default .cshrc, .login, and .logout source prior to executing their own stuff. Failure to do so means they're unsupported. This way you edit one set of files and change the environment for everyone.
Michael Vilain
The exact mechanism you use would depend on what flavor of UNIX you are using. Since you posted to the Solaris newsgroup, I am going to assume that you are using that.
This also applies to any Sys-V-based UNIX, including AIX, and to Linux & Xenix, and to some BSD-based systems:
The system-wide ".profile" for the sh, ksh and bash shells is /etc/profile. Here is where you setup the global environment common to all users, and perform actions for all users just prior to their own $HOME/.profile scripts running. In fact, you should have a default environment that works for the user even if their $HOME/.profile file is missing. The $HOME/.profile is meant for them to edit and create their own customizations that no other user would use.
For BSD-based UNIX systems, and for csh, tcsh and related shells, you'll have to do something different. At the top of every user's $HOME/.cshrc file, you have a line something like "source /etc/cshrc_global" or "source /etc/cshrc_local" or simply "source /etc/cshrc". Then you have only one file to edit to add any more aliases or common environment changes. Keep in mind that this is sourced at the start of every new subshell — even from a C-Language system() call. For actions for the user to be performed only once upon login, you have a line like "source /etc/login" at the top of every user's $HOME/.login
Notice that this latter method would work for sh-type shells too — you have a single line at the top of each user's $HOME/.profile like ". /etc/common". The weakness in this approach is that if the users can edit their own $HOME/.login, $HOME/.cshrc or $HOME/.profile, then you have to assume that they'll be friendly and nice and remember to call your script at the top every time. At least the sh-type /etc/profile method *HAS* to be run by the user when they login.
An approach I have taken in the past is to maintain a list of users, separate from /etc/passwd, that contains each user's desired startup shell, and then set all users' startup shell to /bin/sh. This will force all users to execute /etc/profile no matter what. Then at the bottom of the /etc/profile I exec a program that parses the new file, and then exec's the user's desired shell as a login shell (with a '-' character in front of the first argument, as in "-ksh", "-csh" and so on). The same /etc/profile only has action for the universal "sh" and "-sh" as I specify in the /etc/passwd file, and only exec's the new shell program if the user wants something other than "sh" as their startup shell; this avoids an infinite loop condition with /etc/profile calling /etc/profile over and over again.
Scott G. Hall
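A sketch of that tail-of-/etc/profile dispatch. The file name /etc/shells.local and its "user:shell" format are made up for illustration, and exec -a (used here to get the leading '-' into the argument) is a bash extension, not plain sh:

```shell
# At the bottom of /etc/profile.  Everyone's passwd shell is /bin/sh,
# so this runs at every login; look up the user's *real* shell.
case "$0" in
sh|-sh)
    shell=`awk -F: -v u="$LOGNAME" '$1 == u {print $2}' /etc/shells.local`
    # Only exec when the user wants something other than sh; this is
    # what avoids the /etc/profile-calls-/etc/profile loop.
    if [ -n "$shell" ] && [ "$shell" != /bin/sh ]; then
        exec -a "-`basename $shell`" "$shell"
    fi
    ;;
esac
```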
/etc/profile and /etc/.login
From man login:
documented on: 2000.05.14 Sun 21:53:07
Newsgroups: comp.os.linux.misc
Date: Mon, 02 Dec 2002 17:19:09 GMT
> I am new to Linux and I use the RedHat Linux Networking and System
> Administration book to find most of the answers I need. But it says
> that I should use .bashrc to add paths to my PATH environment variable.
>
> Well, I did this and my path keeps getting bigger with duplicate entries
> of the same paths that I added.
Put it in ~/.bash_profile
Global for everyone
/etc/profile - environment variables (PATH, USER, LOGNAME, ...)
/etc/bashrc  - contains functions & aliases, not environment vars
It is instructive to read the files in /etc/profile.d, if you have one.
I would place site/custom global environment variables in zz_local.sh. That way you can pop zz_local.sh in on new installs.
If you have an /etc/profile.d directory, do a

cd /etc/profile.d
touch zz_local.sh
chmod 755 zz_local.sh
Then add your changes. Example:

export PATH=$PATH:new_path:another_path
The zz_local.sh name was picked to force it to be executed last. /etc/profile runs the scripts in /etc/profile.d; do an

ls -1 /etc/profile.d

to see the order of file execution.
User only

~userid_here/.bash_profile - for environment variables
~userid_here/.bashrc       - for functions & aliases, not env vars
ALWAYS do a su -l user_id to test your changes before logging out.
Profiles usually run once; bashrc runs every time you spin up a non-login interactive session.
Sessions inherit env vars from the parent process.
Setting BASH_ENV=~/.bashrc will cause it to be executed during non-interactive sessions.
Bit Twister
> How can I reset my path so that there are no duplicates and still have > it set all the necessary paths each time I bring up a new shell?
To clean your path of duplicates, and make sure that all directories it contains are valid you can use this function:
checkpath () {
    newPATH=
    local IFS=":"
    for p in ${PATH//\/\//\/}
    do
        if [ ! -d "$p" ]; then
            echo "checkpath: $p is not a directory; removing it from the path" >&2
        else
            case :$newPATH: in
                *:$p:*) ;;
                *) [ -d "$p" ] && newPATH=${newPATH:+$newPATH:}$p ;;
            esac
        fi
    done
    PATH=$newPATH
    unset newPATH
    return
}
When adding directories to your path, check that they are not already there:
PATHLIST="$ANT_HOME/bin:/usr/local/Acrobat5/bin
/usr/java/j2sdk1.4.1_01/bin
/add/other/directories/here
"
for dir in $PATHLIST
do
    case :$PATH: in
        *:$dir:*) ;;
        *) [ -d $dir ] && PATH=${PATH:+$PATH:}$dir ;;
    esac
done
Chris F.A. Johnson
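The dedup test at the heart of checkpath can be seen in isolation. A sketch: dedup_path is a made-up helper, and unlike checkpath it does not also check that the entries are directories.

```shell
dedup_path() {
    new=
    oldIFS=$IFS; IFS=:
    for p in $1; do
        # the :...: wrapping makes each entry match only as a whole component
        case :$new: in
            *:$p:*) ;;                  # already present: skip
            *) new=${new:+$new:}$p ;;   # first sighting: append
        esac
    done
    IFS=$oldIFS
    printf '%s\n' "$new"
}

dedup_path /bin:/usr/bin:/bin:/usr/bin   # prints: /bin:/usr/bin
```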
The $PATH environment in tcsh is strange:
set will yield:

path    (. /home/cstudent/034897s/bin /usr/local/bin /usr/local/sbin /bin /usr/sbin /usr/ud/bin /usr/openwin/bin /usr/ccs/bin /usr/ucb /usr/local/lib/xlisp /usr/local/agenttcl/bin /usr/bin/nsr /usr/sbin/nsr /usr/opt/SUNWmd/sbin)

env will yield:

PATH=.:/home/cstudent/034897s/bin:/usr/local/bin:/usr/local/sbin:/bin:/usr/sbin:/usr/ud/bin:/usr/openwin/bin:/usr/ccs/bin:/usr/ucb:/usr/local/lib/xlisp:/usr/local/agenttcl/bin:/usr/bin/nsr:/usr/sbin/nsr:/usr/opt/SUNWmd/sbin
To set:

set path = ( . ~/bin $path )
set cdpath = ( $cdpath ~/ass )
To set permanently, add the above 2 lines to .login
If you try setenv PATH=$PATH":"~/ttt to set the real $PATH, you'll find unix creates a new PATH in env for you, with the name the same as PATH and its content the same as you specified. The original PATH is not touched!
The usage of $PATH environment var in tcsh is strange:
Commands in path are searched one time only, at login. I.e., if you add some program to ~/bin, unix cannot find it unless you quit and login again. — *N*: use rehash. <<:Fri 05-28-99:>>
"set path" without "setenv PATH" will only set the local "set" variable for tcsh, not the environment variable that can be sensed by sub-tasks.
It should be working ok. The previous test has syntax error. The correct form is:
setenv PATH $PATH":"~/ttt
or
setenv PATH ${PATH}:~/ttt
documented on: Sat 11-07-98 10:57:58
for tty in `ps -ef | grep ${USER-$LOGNAME} | cut -c34-42 | grep -v '^?' | sort | uniq`; do
    echo $tty > /dev/$tty
    echo asdd > /dev/$tty
done
*N*:, strange, I could use $USER quite well before!?
the uniq is required, see trying history.
ps -ef | grep ${USER-$LOGNAME} | cut -c34-42 | grep -v '^?'
1st time 2 pts, then 1 for all the tries
for tty in `ps -ef | grep ${USER-$LOGNAME} | cut -c34-42 | grep -v '^?'`; do
    echo $tty > /dev/$tty
    echo asdd > /dev/$tty
done
2 pts!
ps -ef | grep ${USER-$LOGNAME} | cut -c34-42 | grep -v '^?'
1 pts, why for is 2?
em, then suspend, then
ps -ef | grep ${USER-$LOGNAME} | cut -c34-42 | grep -v '^?'
2 pts!
What if no user is currently logged in?
iitrc:~/bin$ for t in ''; do echo aa-$t-aa; done
aa--aa
iitrc:~/bin$ for t in ; do echo aa-$t-aa; done
bash: syntax error near unexpected token `;'
we're in trouble. — Put the list into a variable first and test grep's return.
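A sketch of that workaround: capture the pipeline output in a variable and loop only when it is non-empty. get_list is a hypothetical stand-in for the real ps|grep|cut pipeline:

```shell
# Hypothetical data source standing in for `ps -ef | grep ... | cut ...`.
get_list() { printf '%s\n' "$@"; }

list=`get_list pts/0 pts/1`   # call get_list with no args for the empty case
if [ -n "$list" ]; then
    for t in $list; do
        echo "aa-$t-aa"
    done
else
    echo "empty list, loop skipped"
fi
```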
documented on: 04-11-99 11:17:12
Newsgroups: comp.unix.admin
> i edited /etc/password to set the default shell for root to bash, which
> i'd downloaded in binary format from the internet.
>
> Here's the problem... I forgot to set the shell to executable, so now
> when I try to login, I get a "no shell" error and the connection is
> refused. Similarly, when i login as a regular user and try to 'su' to
> root, I'm rejected with the same error.
Generally you have to boot from the install media, mount the hard drive and edit the appropriate files. On a general note, it's usually a bad idea to change root's shell. The better way to go is to create another user with UID 0 (e.g. "root2") and give that user the shell you prefer.
> : Generally you have to boot from the install media, mount the hard
> : drive and edit the appropriate files. On a general note, it's
> : usually a bad idea to change root's shell. The better way to go is
> : to create another user with UID 0 (e.g. "root2") and give that user
> : the shell you prefer.
> :
> I would have to say I wouldn't agree totally with this. There should only
> be one UID 0 user. I suggest that users leave a standard shell on their
> root account (i.e. csh, ksh, sh); when you login, execute another shell and
> make it act as the login shell (exec bash --login [on freebsd]). Some other
> systems such as linux provide the opportunity to specify the shell to use
> when su'ing. For example on my RedHat6.2 box (su - --shell=/bin/zsh).
> Generally the root shell shouldn't be changed, but if you have to, avoid doing
> it with any editor such as vipw, because that leads to human errors (i.e. a
> trailing '/' character in the home directory field).
>
> Basically, leave the root shell as it is, only have one UID 0 user,
> and when you wish to use another shell as root, make it act as a login
> shell as shown above.
Agreed. Creating a second user with UID 0 just adds another root password.
Definitely it is preferable to just run the shell you prefer after you log in as root. It's such a tiny effort to do so that it mystifies me why anyone would mess with the root shell.
Jefferson Ogata