
Principle of Script

Defining the Shell Type

To make a ksh script (which is a ksh program) create a new file with a starting line like:
#!/usr/bin/ksh
It is important that the path to ksh is correct and that the line does not have more
than 32 characters. The shell from which you start the script will find this line and
hand the whole script over to ksh. Without this line the script would be interpreted
by the same type of shell as the one from which it was started. But since the syntax is
different for each shell, it is necessary to define the shell with that line.

Four Types of Lines

A script has four types of lines: the shell-defining line at the top, empty lines,
comment lines starting with a "#", and command lines. See the following top of a script
as an example of these types of lines:

#!/usr/bin/ksh

# Commentary......

file=/path/file
if [[ $file = $1 ]];then
command
fi

Start and End of Script

The script starts at the first line and ends either when it encounters an "exit" or at the
last line. All "#" lines are ignored.

Start and End of Command

A command starts with the first word on a line or, if it is the second command on a line,
with the first word after a ";".
A command ends either at the end of the line or with a ";". So one can put several
commands onto one line:

print -n "Name: "; read name; print ""

One can continue a command over more than one line with a "\" immediately followed by
a newline (produced by the return key):

grep filename | sort -u | awk '{print $4}' | \
uniq -c >> /longpath/file

Name and Permissions of Script File

The script must not have a name that is identical to a unix command: so the script must
NOT be called "test"!
After saving the file, give it execute permission with: chmod 700 filename.

Variables

Filling in

When assigning a value to a variable one uses just its name, without blanks: state="US".
There is no difference between strings and numbers: price=50.

Using

When using a variable one needs to put a $ sign in front of it: print $state $price.

Arrays

Set and use an array like:

arrname[1]=4          Fill in an element
print ${arrname[1]}   Print one element
${arrname[*]}         Get all elements
${#arrname[*]}        Get the number of elements
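
As a complete, minimal sketch (the array name and values are made up for illustration):

#!/usr/bin/ksh
set -A fruits apple banana cherry    # set -A fills several elements at once
fruits[3]=date                       # indices start at 0, so this is the 4th element
print ${fruits[1]}                   # banana
print ${fruits[*]}                   # apple banana cherry date
print ${#fruits[*]}                  # 4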

Declaration

Happily, no variable declarations are needed in ksh. However, one cannot have decimals,
only integers.

Branching

if then fi
if [[ $value -eq 7 ]];then
print "$value is 7"
fi
or:

if [[ $value -eq 7 ]]
then
print "$value is 7"
fi
or:
if [[ $value -eq 7 ]];then print "$value is 7";fi

if then else fi
if [[ $name = "John" ]];then
print "Your welcome, ${name}."
else
print "Good bye, ${name}!"
fi

if then elif then else fi


if [[ $name = "John" ]];then
print "Your welcome, ${name}."
elif [[ $name = "Hanna" ]];then
print "Hello, ${name}, who are you?"
else
print "Good bye, ${name}!"
fi

case esac
case $var in
john|fred) print $invitation;;
martin) print $declination;;
*) print "Wrong name...";;
esac

Looping
while do done
while [[ $count -gt 0 ]];do
print "\$count is $count"
(( count -= 1 ))
done

until do done
until [[ $answer = "yes" ]];do
print -n "Please enter \"yes\": "
read answer
print ""
done

for var in list do done


for foo in $(ls);do
if [[ -d $foo ]];then
print "$foo is a directory"
else
print "$foo is not a directory"
fi
done

continue...break
One can skip the rest of a loop and directly go to the next iteration with: "continue".

while read line;do
if [[ $line = *.gz ]];then
continue
else
print $line
fi
done

One can also prematurely leave a loop with: "break".

while read line;do
if [[ $line = !(*.c) ]];then
break
else
print $line
fi
done

Command Line Arguments


(Officially they are called "positional parameters")

The number of command line arguments is stored in $#, so one can check
for arguments with:

if [[ $# -eq 0 ]];then
print "No Arguments"
exit
fi

The individual arguments are stored in $1, ... $n, and all of them in $* as one string.
The arguments cannot be modified directly, but one can reset the whole command line
for another part of the program.
If we need a first argument $first for the rest of the program, we do:

if [[ $1 != $first ]];then
set $first $*
fi
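
A small illustration of how "set" rewrites the positional parameters (the words are
made up):

set alpha beta gamma     # the command line is now: alpha beta gamma
print $#                 # 3
set new $*               # prepend "new" to the current arguments
print $1 $#              # new 4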

One can iterate over the command line arguments with the help of the shift command.
Shift discards the first argument and moves the remaining ones down by one.

until [[ $# -eq 0 ]];do
# commands ....
shift
done

One can also iterate with the for loop; without an "in" list, for defaults to the
command line arguments:

for arg;do
print $arg
done

The program name is stored in $0, but note that it also contains the path!

Comparisons
To compare strings one uses "=" for equal and "!=" for not equal.
To compare numbers one uses "-eq" for equal and "-ne" for not equal, as well as "-gt"
for greater than and "-lt" for less than.

if [[ $name = "John" ]];then
# commands....
fi
if [[ $size -eq 1000 ]];then
# commands....
fi

With "&&" for "AND" and "||" for "OR" one can combine statements:

if [[ $price -lt 1000 || $name = "Hanna" ]];then
# commands....
fi
if [[ $name = "Fred" && $city = "Denver" ]];then
# commands....
fi

Variable Manipulations
Removing something from a variable
Variables that contain a path can very easily be stripped of it: ${name##*/} gives you just
the filename.
Or if one wants the path: ${name%/*}. The "#" operators remove a pattern from the front
of the value, the "%" operators from the end. %% and ## take the longest possible match,
while % and # take the shortest one.
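
A worked example with a made-up path shows all four operators side by side:

name=/home/user/src/prog.c
print ${name##*/}    # prog.c  (longest match of "*/" removed from the front)
print ${name#*/}     # home/user/src/prog.c  (shortest match removed)
print ${name%/*}     # /home/user/src  (shortest match of "/*" removed from the end)
print ${name%%/*}    # empty: the longest match of "/*" is the whole string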

Replacing a variable if it is not yet set

If we want $foo, or 4 if it is not set, then: ${foo:-4}; but $foo still remains unset. To
change that we use:
${foo:=4}

Exiting and printing a message if a variable is not set

This is very important if our program relies on a certain variable: ${foo:?"foo not set!"}

Just check for the variable


${foo:+1} yields 1 if $foo is set, otherwise nothing.
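
All four forms at a glance (the variable names are illustrative):

unset foo
print ${foo:-4}            # prints 4; $foo is still unset
print ${foo:=4}            # prints 4 and sets foo=4
print ${foo:+1}            # prints 1, because foo is now set
: ${bar:?"bar not set!"}   # aborts the script with this message if bar is unset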

Ksh Regular Expressions


Ksh has its own regular expressions.
Use an * for any string. So to get all the files ending in .c use *.c.
A single character is represented by a ?. So all the files starting with any single
character followed by 44.f can be fetched by: ?44.f.

Especially in ksh there are quantifiers for whole patterns:

?(pattern) matches the pattern zero or one time.
*(pattern) matches the pattern zero or more times.
+(pattern) matches the pattern one or more times.
@(pattern) matches the pattern exactly once.
!(pattern) matches any string not matching the pattern.

So one can test a string in a variable like: if [[ $var = fo@(?4*67).c ]];then ...
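
For instance, a sketch of that last test with a made-up value:

var=fox4abc67.c
if [[ $var = fo@(?4*67).c ]];then
print "$var matches"    # ? matches "x", * matches "abc"
fi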

Functions
Description
A function (= procedure) must be defined before it is called, because ksh is interpreted
at run time.
It knows all the variables from the calling shell except the command line arguments. But
it has its own command line arguments, so one can call it with different values from
different places in the script. It has an exit status but cannot return a value like a C
function can.

Making a Function
One can make one in either of the following two ways:

function foo {
# commands...
}

foo(){
# commands...
}

Calling the Function


To call it just put its name in the script: foo. To give it arguments do: foo arg1 arg2 ...
The arguments are available as $1...$n, and $* holds all of them at once, as in the main
code. And the main $1 is not influenced by the $1 of a particular function.

Return
The return statement exits the function immediately, with the specified return value as
its exit status.
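
The following sketch puts definition, call, arguments and return status together (the
function name and test file are made up):

#!/usr/bin/ksh
function is_readable {
# return 0 (success) if the file in $1 is readable, 1 otherwise
[[ -r $1 ]] && return 0
return 1
}

if is_readable /etc/passwd;then
print "readable"
else
print "not readable"
fi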

Data Redirection
General
Data redirection is done with the following signs: "> >> < <<". Every program has at
least a standard input, a standard output and a standard error output. All of these can
be redirected.

Command Output to File


For writing into a new file or for overwriting a file do: command > file

For appending to a file do: command >> file


Standard Error Redirection
To redirect the error output of a command do: command 2> file

To discard the error output altogether do: command 2>/dev/null

To put the error to the same location as the normal output do: command 2>&1

File into Command


If a program needs a file for input over standard input do: command < file

Combine Input and Output Redirection


command < infile > outfile
command < infile > outfile 2>/dev/null

Commands into Program ( Here Document )


Every unix command can take its input from an inline text listing with:

command <<EOF
input1
input2
input3
EOF

Everything between the two EOF markers is fed into the command given above.

Pipes
For a serial processing of data from one command to the next do:
command1 | command2 | command3 ...
e.g. last | awk '{print $1}' | sort -u.

Coprocesses
One can have a background process with which one can communicate using read -p and
print -p. It is started with: command |&. If one uses ksh |&, then this shell in the
background will do everything for us, even telnet and so on: print -p "telnet hostname".
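
A minimal coprocess sketch, assuming bc is installed:

bc |&              # start bc as a coprocess
print -p "3 * 7"   # write a line to the coprocess
read -p result     # read its answer back
print $result      # 21
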
Read Input from User and from Files
Read in a Variable
From a user we read with: read var. The user can then type something in. One should
first print a prompt, like: print -n "Enter your favorite hair color: ";read var; print "".
The -n suppresses the newline.

Read a File Line by Line


To get each line of a file into a variable iteratively do:

{ while read myline;do
# process $myline
done } < filename

To catch the output of a pipeline each line at a time in a variable use:

last | sort | {
while read myline;do
# commands
done }

Special Variables
$# Number of arguments on the command line.
$? Exit status of the last command.
$$ Process id of the current program.
$! Process id of the last background job or background function.
$0 Program name, including the path if started from another directory.
$1..$n Command line arguments, one at a time.
$* All command line arguments in one string.

Action on Success or on Failure of a Command

If one wants to do a thing only if a command succeeded, then: command1 && command2.
If the second command has to be performed only if the first one failed, then:
command1 || command2.
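
For example (the paths are made up):

mkdir /tmp/work.$$ && print "directory created"
grep root /etc/passwd >/dev/null || print "no root entry found"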

Trivial Calculations
Simple calculations are done with either a "let" in front or within (( ... )). One can
increment a variable within the (( )) without a "$": (( a += 1 )) or let a+=1.
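
For example:

a=5
(( a += 1 ))       # a is now 6; no "$" is needed inside (( ))
let "b = a * 2"    # the same mechanism with "let"
print $a $b        # 6 12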

Numerical Calculations using "bc"

For bigger calculations one uses "bc", like:
result=$(print "n=1;for(i=1;i<8;i++)n=i*n;n"|bc)

"grep"
Search for the occurrence of a pattern in a file: grep 'pattern' file. If one just wants to
know how often something occurs in a file, then: grep -c 'pattern' file. This can be used
in a script like:
if [[ $(grep -c 'pattern' file) != 0 ]];then ......;fi. The condition is fulfilled if the
pattern was found.

"sed"
Sed means stream editor. It searches like grep, but is then able to replace the found
pattern. If you want to change all occurrences of "poor" to "rich", do: sed -e
's/poor/rich/g' filename. Or, what is often seen in software packages that have to be
compiled after getting a proper configuration, is a whole file stuffed with replacement
patterns like: /@foo@/s;;king;g. Such a file, with innumerable lines like that, is given
to sed with: sed -f sedscript filename. It then processes each line of the file with all
the sed commands in the sedscript. (Of course sed can do much more :-))

"awk"
Awk can find and process a found line with several tools: It can branch, loop, read from
files and also print out to files or to the screen, and it can do arithmetics.
For example: We have a file with lines like: Fred 300 45 70 but hundreds of them. But
some lines have a "#" as the first sign of them and we have to omit these ones for both,
processing and output. And we want to have lines as output like: 415 Fred where 415 is
the sum of 300, 45 and 70. Then we call on awk:

awk '$1 !~ /^#/ && $0 ~ /[^ ]/ {print $2+$3+$4,"\t",$1}' filename.

This ignores lines with a "#" at the beginning of the first field and also blank lines. It
then prints the desired sum, and $1 is only printed after a tab. This is only the most
trivial use of awk.
"perl"
Perl is a much richer programming language than ksh, but one can still run perl commands
from within a ksh script. This might touch Randal, but it's true. Let's say you want to
remove all ^M from a file; then use perl for one line in your ksh script:

perl -i -pe 's/\015//g' filename

Perl can do an infinite amount of things in many different ways. For anything bigger use
perl instead of a shell script.
1. Handling Command Line Arguments
Why is it necessary to write something about command line arguments? The concept is
very easy and clear: if you enter the following command

$ ls -l *.txt

the command "ls" is executed with the command line flag "-l" and all files in the
current directory ending with ".txt" as arguments.

Still many shell scripts do not accept command line arguments the way we are used to
(and came to like) from other standard commands. Some shell programmers do not even
bother implementing command line argument parsing, often aggravating the script's users
with other strange calling conventions.

For examples on how to name command line flags to be consistent with existing UNIX
commands see the table Frequent option names.

Here are some examples of bad coding practices.

• Setting environment variables for script input that could be specified on the
command line.

One example:

:
# AUTORUN must be specified by the user
if [ "$AUTORUN" != yes ]
then
echo "Do you really want to run this script?"
echo "Enter ^D to quit:"
if read answer
then
echo "o.k, starting up memhog daemon"
else
echo "terminating"
exit 0
fi
fi
# start of script...

Consider the script's user who might ponder "What was the name of this variable?
FORCERUN? AUTOSTART? AUTO_RUN? or AUTORUN?"
Don't get me wrong, environment variables do have their place and can make life
easier for the user. A much better way to solve the autorun option would be to
implement a command line flag, i.e. "-f" for "force non-interactive execution".

• Positional parameters.

Example:

:
# process - process input file

ConfigFile="$1"
InputFile="$2"
OutputFile="$3"

# Read config file
get_defaults "$ConfigFile"
# Do the processing
process_input < "$InputFile" > "$OutputFile"

This script expects exactly three parameters in exactly this order: the name of a
configuration file with default settings, the name of an input file, and the name of
an output file. The script could be called with the following parameters:

$ process defaults.cf important.dat output.dat

It then reads the configuration file "defaults.cf", processes the input file
"important.dat" and then writes (possibly overwriting) the output file
"output.dat". Now see what happens if you call it like this:

$ process output.dat defaults.cf important.dat

Now the script tries to read the output file "output.dat" as the configuration file. If
the user is lucky the script will terminate at this point, before it overwrites
the data file "important.dat", which it would be using as the output file!

This script would have been better with the following usage:

$ process -c default.cf -o output.dat file.dat

The command line option "-c" precedes the default file, the output file is
specified with the "-o" option, and every other argument is taken to be the input
file name.

Our goal is shell scripts that use "standard" command line flags and options. We will
develop a shell script code fragment that handles command line options well. You may
then use this template in your shell scripts and modify it to fit your needs.
Consider the following command line:

$ fgrep -v -i -f excludes.list *.c *.h

This command line consists of a command ("fgrep") with three flags "-v", "-i" and "-f".
One flag takes an argument ("excludes.list"). After the command line flags multiple file
names ("*.c", "*.h") may follow. At this point we do not know how many file names that
may be; the shell will expand the file name patterns (or "wildcards") to a list of actual file
names before calling the command "fgrep". The command itself does not have to deal
with wildcards.

What happens if there is no file matching the pattern "*.c" in the current directory? In this
case the shell will pass the parameter unchanged to the program.

If we want to handle command lines like the above, we must be prepared to handle

• command line flags (i.e. "-v", "-i")
• command line flags with arguments (i.e. "-f file")
• multiple file names following the flags

The shell sets some environment variables according to the command line arguments
specified:

$0       The name the script was invoked with. This may be a base name without a
         directory component, or a path name. This variable is not changed by
         subsequent shift commands.

$1, $2, $3, ...
         The first, second, third, ... command line argument, respectively. An
         argument may contain whitespace if it was quoted, i.e. "two words".

$#       Number of command line arguments, not counting the invocation name $0.

$@       "$@" is replaced with all command line arguments, each individually
         quoted, i.e. "one", "two three", "four". Whitespace within an argument
         is preserved.

$*       $* is replaced with all command line arguments. Whitespace is not
         preserved, i.e. "one", "two three", "four" would be changed to "one",
         "two", "three", "four". This variable is not used very often; "$@" is
         the normal case, because it leaves the arguments unchanged.

The following code segment loops through all command line arguments, and prints them:

:
# cmdtest - print command line arguments

while [ $# -gt 0 ]
do
echo "$1"
shift
done

The environment variable $# is automatically set to the number of command line
arguments. If the script was called with the following command line:

$ cmdtest one "two three" four


$# would have the value "3" for the arguments: "one", "two three", and "four". "two
three" counts as one argument, because it is enclosed within quotes.

The shift command "shifts" all command line arguments one position to the left. The
leftmost argument is lost. The following table lists the values of $# and the command line
arguments during the iterations of the while loop:

$#   remaining arguments                          comments
3    $1 = "one", $2 = "two three", $3 = "four"    start of the command
2    $1 = "two three", $2 = "four"                after the first shift
1    $1 = "four"                                  after the second shift
0    (none)                                       end of the while loop

Now that we can loop through the argument list, we can set script variables depending on
command line flags:

vflag=off
while [ $# -gt 0 ]
do
case "$1" in
-v) vflag=on;;
esac
shift
done

The command line option -v will now result in the variable vflag being set to "on". We
can then use this variable throughout the script.

Now let's improve this code fragment to handle file names. It would be nice if the script
would handle all command line flags, but leave the file names alone. This way we could
use the shell variable $@ with the remaining command line arguments later on, i.e.

# ...
grep $searchstring "$@"

and be sure that it only contains file names. But how do we distinguish file names from
command line switches? That's easy: file names do not start with a dash "-" (at least not yet...):

vflag=off
while [ $# -gt 0 ]
do
case "$1" in
-v) vflag=on;;
-*)
echo >&2 "usage: $0 [-v] [file ...]"
exit 1;;
*) break;; # terminate while loop
esac
shift
done

This example prints a short usage message and terminates if an unknown command line
flag starting with a dash was specified. If the current argument does not start with a dash
(and therefore probably is a file name), the while loop is terminated with the break
statement, leaving the file name in the variable "$1".

Now we just need a switch for command line flags with arguments, i.e. "-f filename".
This is also pretty straightforward:

vflag=off
filename=
while [ $# -gt 0 ]
do
case "$1" in
-v) vflag=on;;
-f) filename="$2"; shift;;
-*) echo >&2 \
"usage: $0 [-v] [-f file] [file ...]"
exit 1;;
*) break;; # terminate while loop
esac
shift
done

If the argument $1 is "-f", the next argument ($2) should be the file name. We now
handled two arguments ("-f" and the file name), but the shift after the case construct
will only "consume" one argument. This is why we execute an extra shift after saving
the file name in the variable filename: that shift removes the "-f" flag, while the
second one (after the case construct) removes the file name argument.
We still have a problem handling file names starting with a dash ("-"), but that's a
problem every standard unix command interpreting command line switches has. It is
commonly solved by inventing a special command line option named "--" meaning "end
of the option list".

If you for example had a file named "-f", it could not be removed using the command "rm
-f", because "-f" is a valid command line option. Instead you can use "rm -- -f". The
double dash "--" means "end of command line flags", and the following "-f" is then
interpreted as a file name.

Note:
You can also remove a file named "-f" using the command "rm ./-f"

The following (recommended) command line handling code is a good way to solve this
problem:

vflag=off
filename=
while [ $# -gt 0 ]
do
case "$1" in
-v) vflag=on;;
-f) filename="$2"; shift;;
--) shift; break;;
-*)
echo >&2 \
"usage: $0 [-v] [-f file] [file ...]"
exit 1;;
*) break;; # terminate while loop
esac
shift
done
# all command line switches are processed,
# "$@" contains all file names

The drawback of this command line handling is that it needs whitespace between the
option character and an argument, ("-f file" works, but "-ffile" fails), and that multiple
option characters cannot be written behind one switch character, ("-v -l" works, but "-vl"
does not).

Portability:
This method works with all shells derived from the Bourne Shell, i.e. sh, ksh,
ksh93, bash, pdksh, zsh.

Using "getopt"
Now this script processes its command line arguments like any standard UNIX
command, with one exception. Multiple command line flags may be combined with
standard commands, i.e. "ls -l -a -i" may be written as "ls -lai". This is not that easy to
handle from inside of our shell script, but fortunately there is a command that does the
work for us: getopt(1).

The following test shows how getopt rewrites the command line arguments
"-vl -ffile one two three":

$ getopt f:vl -vl -ffile one two three

produces the output

-v -l -f file -- one two three

These are the command line flags we would have liked to get! The flags "-vl" are
separated into two flags "-v" and "-l". The command line options are separated from the
file names by a "--" argument.

How did getopt know that "-f" needed a second argument, but "-v" and "-l" did not? The
first argument to getopt describes which options are acceptable, and whether they have
arguments. An option character followed by a colon (":") means that the option expects
an argument.

Now we are ready to let getopt rewrite the command line arguments for us. Since getopt
writes the rewritten arguments to standard output, we use

set -- `getopt f:vl "$@"`

to set the arguments. `getopt ...` means "the output of the command getopt", and "set --"
sets the command line arguments to the result of this output. In our example

set -- `getopt f:vl -vl -ffile one two three`

is replaced with

set -- -v -l -f file -- one two three

which results in the command line arguments

-v -l -f file -- one two three

These arguments can easily be processed by the script we developed above.

Now we include getopt within our script:

vflag=off
filename=
set -- `getopt vf: "$@"`
[ $# -lt 1 ] && exit 1 # getopt failed
while [ $# -gt 0 ]
do
case "$1" in
-v) vflag=on;;
-f) filename="$2"; shift;;
--) shift; break;;
-*)
echo >&2 \
"usage: $0 [-v] [-f file] file ..."
exit 1;;
*) break;; # terminate while loop
esac
shift
done
# all command line switches are processed,
# "$@" contains all file names

The first version of this document contained the line

set -- `getopt vf: "$@"` || exit 1

This command does not work with all shells, because the set command doesn't always
return an error code if getopt fails. The line assumes that getopt sets its return value if
the command line arguments are wrong (which is almost certainly the case), and that set
returns an error code if the command substitution (that executes getopt) fails. This is not
always true.

Why didn't we use getopt in the first place? There is one drawback with the use of getopt:
it removes whitespace within arguments. The command line

one "two three" four


(three command line arguments) is rewritten as
one two three four
(four arguments). Don't use the getopt command if the arguments may contain
whitespace characters.

Newer shells (Korn Shell, BASH) have the built-in getopts command, which does not
have this problem. This command is described in the following section.

Portability:
The getopt command is part of almost any UNIX system.

Using "getopts"
On newer shells, the getopts command is built-in. Do not confuse it with the older getopt
(without the trailing "s") command. getopts strongly resembles the C library function
getopt(3).

Below is a typical example of how getopts is used:


vflag=off
filename=
while getopts vf: opt
do
case "$opt" in
v) vflag=on;;
f) filename="$OPTARG";;
\?) # unknown flag
echo >&2 \
"usage: $0 [-v] [-f filename] [file ...]"
exit 1;;
esac
done
shift `expr $OPTIND - 1`

Portability:
The getopts command is an internal command of newer shells. As a rule of thumb
all systems that have the KSH have shells (including the Bourne Shell sh) that
include a built-in getopts command.

Frequent option names

The following table should help you find good names for your command line flags. Look
at the second column (Meaning) and see if you find a rough description of your
command line option there. If, for example, you are searching for the name of an option
to append to a file, you could use the "-a" flag.

Flag  Meaning                                                  UNIX examples

-a    • append, i.e. output to a file                          tee -a
      • show/process all files, ...                            ls -a

-c    • count something                                        grep -c
      • command string                                         sh -c command

-d    • directory                                              cpio -d
      • specify a delimiter                                    cut -ddelimiter

-e    • expand something, i.e. tabs to spaces                  pr -e
      • execute command                                        xterm -e /bin/ksh

-f    • read input from a file                                 fgrep -f file
      • force some condition (i.e. no prompts,                 rm -f
        non-interactive execution)
      • specify field number                                   cut -ffieldnumber

-h    • print a help message
      • print a header                                         pr -hheader
        Note: -t for title may be more appropriate.

-i    • ignore the case of characters                          grep -i
      • turn on interactive mode                               rm -i
      • specify input option

-l    • long output format                                     ls -l, ps -l, who -l
      • list file names                                        grep -l
      • line count                                             wc -l
      • login name                                             rlogin -lname

-L    • follow symbolic links                                  cpio -L, ls -L

-n    • non-interactive mode                                   rsh -n
      • numeric processing                                     sort -n

-o    • output option, i.e. output file name                   cc -o, sort -o

-p    • process id                                             ps -p pid
      • process path                                           mkdir -p

-q    • quick mode                                             finger -q, who -q
      • quiet mode

-r    • process directories recursively                        rm -r
        Note: the flag -R would be better for this purpose.
      • process something in the reverse order                 sort -r, ls -r
      • specify root directory

-R    • process directories recursively                        chmod -R, ls -R

-s    • be silent about errors                                 cat -s, lp -s
        Note: such an option is unnecessary, because the user
        can make the program silent by redirecting standard
        output and standard error to /dev/null.

-t    • specify tab character                                  sort -ttabchar

-u    • produce unique output                                  sort -u
      • process data unbuffered                                cat -u

-v    • print verbose output, the opposite of -q               cpio -v, tar -v
      • reverse the functionality                              grep -v

-w    • specify width
      • wide output format                                     pr -w, sdiff -w, ps -w
      • work with words                                        wc -w

-x    • exclude something

-y    • answer yes to all questions (effectively making the    fsck -y, shutdown -y
        command non-interactive)
        Note: the flag -f may be better for this purpose.

Now you know the standard option names, on to "standard" UNIX commands that do not
use them.

dd - disk dump

dd if=infile of=outfile bs=10k

The syntax of this command probably is older than UNIX itself. One major
disadvantage is that argument names and file names are written together without
whitespace, i.e. if=mydoc*.txt. The shell will take "if=" as part of the file name,
and cannot expand the wildcards "mydoc*.txt".

find - find files

find / -name '*.txt' -print

With this command, option names have more than one character. This makes them
more memorable and more readable. If only all commands were like this!
And if only -print were a default option!

By the way, did you know that the command line

$ ls -bart -simpson -is -cool

is a valid usage for the SOLARIS ls command?
2. Temporary files and signal handling
Temporary files are frequently used in shell scripts. In a typical shell script often some
data is processed, and the results are written to a scratch file, the new data is processed in
another way, and eventually the scratch file is removed.

So why write an article about this topic?

Often shell script programmers use temporary files in their scripts, and remove them at
the end of the program. This simple and straightforward approach works well as long as
a user does not interrupt the script using a signal (i.e. by pressing ^C or DEL). In that
case the script doesn't get a chance to remove its temporary files before terminating.

This article shows how to intercept interrupts from shell scripts.

One example:

:
# viman - start "vi" on a manual page

Tmp=/tmp/viman

man "$@" | col -b | uniq > $Tmp


vi $Tmp
rm -f $Tmp

This script passes its command line arguments on to the man command, and writes the
result to a temporary file /tmp/viman. Before starting vi on the file, all control
characters are removed ("col -b"), and duplicate or empty lines are removed ("uniq").
After vi terminates, the file is removed.

This simple script has two drawbacks.

Consider what happens if two people call this script, one after the other. The first one has
his manual page written to /tmp/viman. Shortly after that the second one has his manual
page written to the same file, overwriting the contents of the first manual page. Now the
first user gets the wrong manual page in the vi editor, and terminates. His instance of the
script removes the file /tmp/viman, and with a little bad luck the second user at the same
time now has an empty file within vi.

The solution to this problem is clear: each user needs to have a unique temporary file, but
how to do it? We could try to create the temporary file in the directory $HOME. Each user
is (normally) guaranteed to have a unique HOME directory. But even then the user may
overwrite the file if he has a windowing system (like OpenWindows or the Common
Desktop Environment (CDE)) and is logged in more than once with the same HOME
directory.

Steve Bourne (the creator of the Bourne Shell) suggests in The UNIX system to use the
unique process identifier (PID) of the shell script as part of the file name. Since the
process id of the script is always available via the environment variable $$, we could
rewrite the script as follows:

:
# viman - start "vi" with a manual page

Tmp=/tmp/vm$$

man "$@" | col -b | uniq > $Tmp


vi $Tmp
rm -f $Tmp

This small change solves the problem.

But one problem remains: what happens to the temporary file if the script is terminated
by a signal? In that case the temporary file may not be removed, because the last line of
the script is never reached!

You may think: "Who cares about files clogging up the /tmp directory? The directory
gets cleaned up automatically anyway!" On the other hand you are reading this text to
become a better shell programmer, and could be excited to come to know there is an easy
way to "trap" signals from a shell script.

The general syntax for the trap command is:

trap [ command ] signal [ signal ... ]

Signals may be specified using numbers (0 to 31), "0" being a pseudo-signal meaning
"program termination". The Korn shell also understands names for the signals, i.e. HUP
for the HANGUP signal, TERM for the SIGTERM signal, etc. Newer kill commands
display a list of signal names if called with the flag -l. The following table lists the most
common signals along with their KSH names:

Number  KSH name  Comments
0       EXIT      This number does not correspond to a real signal, but the
                  corresponding trap is executed before script termination.
1       HUP       hangup
2       INT       The interrupt signal is typically generated using the DEL
                  or the ^C key.
3       QUIT      The quit signal is typically generated using the ^\ key.
                  It is used like the INT signal but explicitly requests a
                  core dump.
9       KILL      cannot be caught or ignored
10      BUS       bus error
11      SEGV      segmentation violation
13      PIPE      generated if there is a pipeline without a reader, to
                  terminate the writing process(es)
15      TERM      generated to terminate the process gracefully
16      USR1      user defined signal 1
17      USR2      user defined signal 2
-       DEBUG     KSH only: this is no signal, but the corresponding trap
                  code is executed before each statement of the script.

A simple example would be:

trap "rm -f $Tmp" 0 1 2 3 15

This means: execute the command "rm -f $Tmp" if the script terminates ("signal" 0), or
after receiving any of the signals 1 (HUP), 2 (INT), 3 (QUIT), or 15 (TERM).
Actually, a good shell script should handle all these signals.

Only one refinement has to be made before we can present The Canonical Way To
Handle Temporary Files ©. Suppose we use the following line in our script:

trap "rm -f $Tmp" 0 1 2 3 15

If somebody sends the SIGTERM signal to our script (i.e. by entering "kill -15
scriptpid"), the following would happen:

1. The script would trap the signal 15, and execute the command "rm -f $Tmp",
thus removing the temporary file.
2. Then it would continue with the next script command. This could cause strange
results, because the (probably needed) temporary file $Tmp is gone. Another point
is that somebody explicitly tried to terminate the script, a fact it deliberately
ignores.
3. Just before the script exits the trap for signal "0" is always performed, resulting in
a second attempt to remove $Tmp. This will result in unwanted error messages
(although in this case it will do no harm).
A better (and the recommended) way to handle the signals is as follows:

trap 'rm -f "$Tmp" >/dev/null 2>&1' 0


trap "exit 2" 1 2 3 15

The first trap ensures that the temporary file $Tmp is removed at the end of the script
execution. Possible error messages are simply discarded.

The second trap causes our script to terminate after receiving one of the specified signals.
Before the script terminates, the trap for "signal" 0 is executed, effectively removing the
temporary file.

Our original script, now rewritten to handle signals and use unique temporary files looks
as follows:

:
# viman - start "vi" with a manual page

Tmp="${TMPDIR:=/tmp}/vm$$"

# Assure the file is removed at program termination
# or after we received a signal:
trap 'rm -f $Tmp >/dev/null 2>&1' 0
trap "exit 2" 1 2 3 13 15

EXINIT="set ignorecase nowrapscan readonly"


export EXINIT

man "$@" | col -b | uniq > "$Tmp" || exit

[ -s "$Tmp" ] || exit 0 # file is empty


head -1 < "$Tmp" |
grep 'No.*entry' && exit 0 # no manual page

${EDITOR:-vi} "$Tmp"

Handling signals requires a bit more overhead; perhaps overkill for simple scripts like
this one but definitely worthwhile for complex scripts.

Stop simultaneous execution of the same script


Level: Advanced Submitted by: dmcvey@swbell.net URL: none
My_pid=$$
print $My_pid >> temp.file
read input_pid < temp.file
if [[ "$input_pid" == "$My_pid" ]] then
(allow the script to continue)
else
(exit the script, My_pid was not 1st)
fi
This guarantees that the first instance of the script submitted will run,
and any other occurrences of the same script will exit. Remove temp.file
before the script ends; I suggest using a trap. Better than trying to
deal with ps/grep, and it makes a nice little function.
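
A fuller sketch along those lines, with the suggested trap cleanup (the lock file name
is made up):

LOCK=/tmp/myscript.lock
print $$ >> "$LOCK"          # append our PID
read first_pid < "$LOCK"     # the first line belongs to the first instance
if [[ "$first_pid" != "$$" ]];then
exit 1                       # another instance was first
fi
trap 'rm -f "$LOCK"' 0       # the winner removes the lock at termination
# ... rest of the script ...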

Backup Log files without renaming and interrupting service


Level: Advanced Submitted by: dmcvey11@home.com URL: none
Problem: Periodically your log files become too large and need to be backed
up and compressed. Renaming the file with mv will mess up any links to
the file and could disrupt your servers using the logs. If your servers
are expected to be running 24x7, renaming is not an option.

Solution:

logfile=YourLogName.log
suffix=`date +%Y%m%d%H%M%S`.bak
newFileName=${logfile}.${suffix}
cp -p $logfile $newFileName
cp /dev/null $logfile
compress $newFileName

A copy of your log is made and compressed, and the log file is initialized
(emptied), ready for more messages. Processing can continue without
interruption. I suppose there is a possibility of a few messages being
dropped (what we don't see won't be missed), so do this during times of
slow usage with a crontab entry.

saving stdout, stderr and both into 3 separate files


Level: Advanced Submitted by: ben_altman@hotmail.com URL: none
# Sometimes it is useful to not only know what has gone to stdout and stderr,
# but also where they occurred with respect to each other:
# Allow stderr to go to err.txt, stdout to out.txt and both to mix.txt
#
((./program 2>&1 1>&3 | tee ~/err.txt) 3>&1 1>&2 | tee ~/out.txt) > ~/mix.txt 2>&1

"comment out" code blocks


Level: Script Programmer Submitted by: ??? URL: none
One line of shell code can be "commented out" using the "#" character. Sometimes,
however, it would be nice to "comment out" more than one line of code, like the C
"/* */" comments.

One way to comment out multiple lines is this:

: '
,,,,,,
'

After the ":" command (which returns "true"), the rest of the code
is a large string constant enclosed within 'single quotes'.

Of course this works only if the code "commented out" does
not contain single quotes.

cleaning up tmp files


Level: Script Programmer Submitted by: ericj@monkey.org URL: none
I've seen too many scripts using a massive number of tmp files, which is
wrong in itself. But not only that, people tend to clean them up one at a
time in a fashion such as

if [ -a ${tmpFile} ]; then
rm ${tmpFile};
fi

This gets nasty when you use up to 5 or even 10 tmp files. A quicker
way of cleaning up such tmp files is to use a simple loop; I even prefer
to use arrays, which are available in the Korn shell. Here is an example.

clean()
{
tmpfiles[0]=${temp1}
tmpfiles[1]=${temp2}

for file in ${tmpfiles[*]}
do
if [ -a ${file} ]; then
rm ${file}
fi
done
}

This way, as you accumulate more and more tmp files, you only need to add
one line to get them cleaned up.

cleaning up tmp files (2)


Level: Script Programmer Submitted by: ??? URL: none
Another way to clean up multiple temporary
files is to create them within a subdirectory, i.e.

TmpBase=${TMPDIR:=/tmp}/myscript.$$
mkdir "$TmpBase" || exit 1 # create directory
chmod 700 "$TmpBase" || exit 1 # restrict access

# Remove all temporary files after program termination
# or at receipt of a signal:
trap 'rm -rf "$TmpBase" >/dev/null 2>&1' 0
trap "exit 2" 1 2 3 15

# The following files will be removed automatically:


input=$TmpBase/input
output=$TmpBase/output
#...

Convert "relative" in "absolute" path name


Level: Script Programmer Submitted by: ??? URL: none
In shell scripts it is often necessary to convert a
relative path name, i.e. "../usr/../lib/somefile" to
an absolute path name starting with a slash "/", i.e.
"/lib/somefile". The following code fragment does exactly this:

D=`dirname "$relpath"`
B=`basename "$relpath"`
abspath="`cd \"$D\" 2>/dev/null && pwd ||
echo \"$D\"`/$B"

Positioning the cursor from within shell scripts


Level: Script Programmer Submitted by: ??? URL: none
[This tip was first published within the SHELLdorado Newsletter 1/99]

For some shell scripts it would be desirable if the script
could position the cursor at arbitrary (row, column) pairs
(i.e. to display a status line, ...).

The following shell function uses the "tput" command to
move the cursor to the specified (row, column) position:

# move cursor to row $2, col $3 (both starting with zero)
# usage: writeyx message rowno colno
writeyx () {
tput cup $2 $3
echo "$1"
}

Example usage:

clear # clear the screen
writeyx "This is a centered message" 11 26
writeyx "press any key to continue..." 22 0
read dummy

The "tput" comm!


and looks up the escape command sequence for
a feature needed for the current terminal. You can use it
for other terminal related things, too:

tput smso # "start mode shift out": usually


# reverse
echo "This is printed reverse"
tput rmso # "reset mode shift out"

All available capability names are listed on the terminfo(5) manual page.

Portability:
The "tput" command is available with the "terminfo"
terminal information database

Setting default values for variables


Level: Script Programmer Submitted by: ??? URL: none
In shell scripts it's often useful to
provide default values for script variables, i.e.

if [ -z "$Host" ]
then
Host=`uname -n`
fi

For this kind of assignment the shell
has a shorthand:

: ${Host:=`uname -n`}

This means: if the variable "Host" is
not already set, execute the command
"uname -n" and set the variable to
the returned value.

Getting a file into "memory"


Level: Script Programmer Submitted by: glong@openwave.com URL: none
Sometimes it's convenient to have a file
read into memory to work on it. The
form that you use to accomplish this is
an array data structure. In ksh88 the
maximum is 1024 elements; however, on
some of the more modern versions you can
go much higher.

To do this the following can be done:

#!/usr/bin/ksh
typeset -i cnt=0

while read line
do
myarray[$cnt]=$line
((cnt = cnt + 1))
done < myfile
# end of file---------

Now, if I want to access any line of that file, I simply use:

${<arrayname>[<subscript>]}

echo ${myarray[4]}

This is useful for parsing, or for interactive use of the file's contents. I have
all of the lines of the file available in the array, and I can move around in them
and select the ones I want.

Look at the following example:

#!/usr/bin/ksh

typeset -i cnt=0

while read line
do
myarray[$cnt]=$line
((cnt = cnt + 1))
done < myfile

PS3="Select a number: "


select linefromfile in ${myarray[@]}
do
echo $linefromfile
done
# end of file------------

There are many other uses for this technique:

dynamic menus
numeric error message reference
getting multiple specific lines of a file in a single pass

Have fun.

Find user's name


Level: Script Programmer Submitted by: Acrosser@pcez.com URL: none
The full name of each user is available in the /etc/passwd file. If you
would like to use the full name in your script instead of $LOGNAME,
which simply returns the user's login name, you can declare the following
variable in your script:

fullname=`grep $LOGNAME /etc/passwd | cut -f 5 -d :`

If you only want the first name, you would declare this variable:

firstname=`grep $LOGNAME /etc/passwd | cut -f 5 -d : | cut -f 1 -d " "`

Find user's name (2)


Level: Script Programmer Submitted by: ??? URL: none
Since the full name is the 5th column
of the file /etc/passwd, it's easy
to look up the full name for a
login name like "joe":

awk -F: '$1 == name {print $5}' name=joe /etc/passwd

The option "-F" tells awk to use ":" als field


separator (instead of whitespace).

Using "here-documents" instead of multiple "echo"


Level: Script Programmer Submitted by: ??? URL: none
Multiple "echo" commands may be replaced by a
"here-document".
This makes the script faster and easier to read.

Example:

echo "Please enter your choice:"


echo "1 - list current directory"
echo "2 - list current users"
echo "3 - log off"

may be replaced with

cat <<!
Please enter your choice
1 - list current directory
2 - list current users
3 - log off
!

Using "here-documents" instead of multiple "echo" (2)


Level: Script Programmer Submitted by: bijoytg@usa.net URL: http://www.geocities.com/bijoytg_
# you can also turn multiple echos into a single echo

echo "
Welcome to Foo.Bar v0.8
=======================
Press enter to continue...
";

To find idle users


Level: Script Programmer Submitted by: ??? URL: none
w | gawk '
BEGIN { FIELDWIDTHS = "9 11 13 10 8 7 7 14" }
NR > 2 {
idle = $5
sub(/^ */, "", idle)
if ( idle == "" )
idle = 0
if (idle ~ /:/) {
split(idle, t, ":")
idle = t[1] * 60 + t[2]   # convert hours:minutes into minutes
}
if (idle ~ /days/)
idle *= 24*60             # convert days into minutes
print $1, $2, idle

}'

Using ksh builtins instead of external commands


Level: Script Programmer Submitted by: unix-guy@pacbell.net URL: http://www.community.net/~atomik
Many times, scripters will use external commands like basename, dirname and
tr because they don't realize they can instead use ksh builtins.

An added bonus is that the builtins are faster and require fewer system
resources, because no sub-process is spawned.

basename replacement:
---------------------

$ fullfile="/some/dir/file.txt"
# replaced: file=$(basename $fullfile)
$ file=${fullfile##*/}
$ echo $file
file.txt

dirname replacement:
--------------------
$ fullfile="/some/dir/file.txt"
# replaced: dir=$(dirname $fullfile)
$ dir=${fullfile%/*}
$ echo $dir
/some/dir

tr replacements:
----------------

$ word="MiXeD"
# replaced: word=$(echo $word | tr [A-Z] [a-z])
$ typeset -l word
$ echo $word
mixed

# replaced: word=$(echo $word | tr [a-z] [A-Z])
$ typeset -u word
$ echo $word
MIXED

KSH built-in networking functions

Level: Script Programmer Submitted by: ? URL: none
[Note: the following examples will work only with standard
ksh implementations. They will not work with the Linux Korn
Shell pdksh.]

Most Korn Shells (/bin/ksh) have sparsely documented, built-in
networking functions.

Example:

$ date=
$ read date < /dev/tcp/127.0.0.1/13
$ echo $date
Wed Feb 10 00:45:39 MET 1999

This command opens a TCP connection to the IP address 127.0.0.1
(the local loopback IP address), and connects to the port "13"
(daytime, see /etc/services). The current date and time is
returned, and assigned to the variable "date".

Note that the "/dev/tcp/*" directories do not have to exist;
the file names are special to the Korn Shell and are interpreted
by the shell internally. Only numerical IP addresses and port
numbers are supported; "read date < /dev/tcp/localhost/daytime"
does not work.

"Normalize" input field separators


Level: Script Programmer Submitted by: ??? URL: none
Script programmers sometimes have to process input that consists
of fields separated by whitespace, i.e.

field1 field2 <TAB> <TAB> field3

This input has the disadvantage that it uses different combinations
of blank and TAB characters as separators, and is hard to process
using "cut" and "sort", because these commands expect exactly one
field separator character.

The following "sed" line "normalizes" this input, replacing each
sequence of one or more whitespace characters with exactly one
<TAB> character:

sed 's/[ <TAB>][ <TAB>]*/<TAB>/g'

Substitute the five characters "<TAB>" with a "real" TAB character
(ASCII 9).

Further processing can be done using this <TAB> character
as field separator.

To Reverse a File

Level: Script Programmer Submitted by: radhakrishnan_m@hotmail.com URL: none
######## TO PRINT FILE IN REVERSE ORDER BY LINE #############

if [ $# -ne 1 ]
then
echo "Usage: reverse_file <filename>"
exit 1;
fi

######## Using a for loop in awk #############

awk '{ line[NR] = $0 } END { for (i=NR; i>0; i=i-1)
print line[i] }' $1

Script debugging settings


Level: Script Programmer Submitted by: ??? URL: none
Most shell script programmers know the command

set -vx

to print each script command before execution. Sometimes
the following flags are useful, too:

set -e # terminate the script at first error
set -u # unset variables are fatal errors
Swapping stdout and stderr

Level: Script Programmer Submitted by: dia@unix.swx.ch URL: none
In scripts it's often useful not
to filter standard output (i.e. cmd | filter),
but stderr. This may be done using
the following command:

cmd 3>&1 1>&2 2>&3

It's even possible to filter both
standard output and standard error:

( ( cmd | ... process stdout ) 3>&1 1>&2 2>&3 ) | \
... process stderr 3>&1 1>&2 2>&3

The last file descriptor operations restore
the normal meaning of stdout and stderr.
