Working with files and directories is a fundamental skill that we cannot skip while learning Solaris 10. Let's
check a few very basic commands.
To display the current working directory:
pwd command: It displays the current working directory.
example:
#pwd
/export/home/ravi
To display the contents of a directory:
ls command (listing command): It displays all files and directories under the specified directory.
Syntax: ls -options <DirName>|<FileName>
The commonly used options are discussed below (the option letters are the standard ls options matching each description):

Option   Description
-p       Lists all files and directories; directory names are followed by the symbol '/'
-F       Lists all files along with their type. The symbols '/', '*', (none) and '@' at the end of a file
         name represent a directory, an executable, a plain text (ASCII) file and a symbolic link respectively
-a       Lists all file and directory names, including hidden files
-t       Lists all files and directories in descending order of their modification time
-R       Lists all files, directories and sub-directories recursively
-tr      Lists all files and directories in ascending order of their last modification time
The long listing (ls -l) displays one entry per line with the following fields:

Entry                        Description
FileType                     '-' for a file and 'd' for a directory
Permissions                  Read (r), write (w) and execute (x) permissions for owner, group and others
LinkCount                    Number of links to the file/directory
UID                          Owner's user ID
GID                          Group's ID
Size                         Size of the file in bytes
Last Modified Date & Time    Last modified date and time of the file/directory
<File/Directory Name>        File/directory name
Example:
# ls -l
total 6
-rw-r--r--   1 root     root
-rw-r--r--   1 root     root
-rw-r--r--   1 root     root
Understanding permissions:
The following table explains the permission entries:

Entry   Description
-       No permission/denied
r       Read permission
w       Write permission
x       Execute permission
file command: It is used to determine the file type. The output of the file command can be "text", "data" or "binary".
Syntax: file <file name>
Example:
# file data
data: English text
Changing directories: The 'cd' command is used to change directories.
Syntax: cd <dir name>
If the cd command is used without any argument, it changes from the current working directory to the user's home
directory.
Example: Let the user be 'ravi' and the current working directory be /var/adm
#pwd
/var/adm
#cd
#pwd
/export/home/ravi
There is also another way to navigate to a user's home directory:
#pwd
/var/adm
#cd ~ravi
#pwd
/export/home/ravi
#cd ~raju
#pwd
/export/home/raju
#cd ~ravi/dir1
#pwd
/export/home/ravi/dir1
In the above examples, the '~' character is an abbreviation that represents the absolute path of the user's home
directory. However, this functionality is not available in all shells.
There are a few other path name abbreviations we can use as well. These are listed below:
. current working directory
.. Parent directory or directory above the current working directory.
So if we want to go to the parent directory of the current working directory, the following command is used:
#cd ..
We can also navigate multiple levels up the directory tree by combining cd, '..' and '/'.
Example: If we want to move two levels up from the current working directory, we use the command:
#cd ../..
#pwd
/export/home/ravi
#cd ../..
#pwd
/export
#cd ..
#pwd
/
Viewing the files:
cat command: It displays the entire content of the file without pausing.
Syntax: cat <file name>
Example:
#file data
data: English text
#cat data
This is an example for demonstrating the cat command.
#
Warning: The cat command should not be used to open a binary file, as it may freeze the terminal window, which then
has to be closed. So check the file type using the 'file' command if you are not sure about it.
more command: It is used to view the content of a long text file one screen at a time.

Scrolling Key   Action
Space Bar       Moves forward one screen
Return          Scrolls forward one line
/string         Searches forward for the string
head command: It displays the first 10 lines of a file by default. The number of lines to be displayed can be changed
using the option -n. The syntax for the head command is as follows:
Syntax: head -n <file name>
This displays the first n lines of the file.
tail command: It displays the last 10 lines of a file by default. The number of lines to be displayed can be changed
using the options -n or +n.
Syntax:
#tail -n <file name>
#tail +n <file name>
The -n option displays the n lines from the end of the file.
The +n option displays the file from line n to the end of the file.
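The head and tail behavior described above can be tried with a small sample file; a self-contained sketch (the file path /tmp/headtail_demo.txt is just an example, and the POSIX form tail -n +10 is used where Solaris also accepts tail +10):

```shell
# build a 12-line sample file (path is an arbitrary example)
i=1
while [ $i -le 12 ]; do
  echo "line $i"
  i=$((i+1))
done > /tmp/headtail_demo.txt

head -3 /tmp/headtail_demo.txt      # first 3 lines: line 1 .. line 3
tail -2 /tmp/headtail_demo.txt      # last 2 lines: line 11 and line 12
tail -n +10 /tmp/headtail_demo.txt  # from line 10 to the end (Solaris: tail +10)
```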
Displaying line, word and character counts:
wc command: It is used to display the number of lines, words and characters in a given file.
Syntax: wc -options <file name>
The following options can be used with the wc command:

Option   Description
-l       Counts the number of lines
-w       Counts the number of words
-c       Counts the number of characters

Example:
#cat data
This is an example for demonstrating the cat command.
#wc -w data
9
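All three options can be seen on one small file; a minimal sketch (the sample file path and contents are examples):

```shell
# two lines, three words, 14 characters in total
printf 'one two\nthree\n' > /tmp/wcdemo.txt
wc -l /tmp/wcdemo.txt   # line count: 2
wc -w /tmp/wcdemo.txt   # word count: 3
wc -c /tmp/wcdemo.txt   # character (byte) count: 14
```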
Copying files:
cp command: It is used to copy files and directories.
Syntax: cp -options <source> <destination>

Option   Description
-r       Includes the contents of a directory, including the contents of all subdirectories, when you copy a directory

Example:
#cp file1 file2 dir1
In the above example, file1 and file2 are copied to dir1.
Moving & renaming files and directories:
mv command: It can be used to
1. Move files and directories within the directory hierarchy :
Example: We want to move file1 and file2 under the directory /export/home/ravi to /var
#pwd
/export/home/ravi
#mv file1 file2 /var
2. Rename existing files and directories.
Example: We want to rename file1 under /export/home/ravi to file2.
#pwd
/export/home/ravi
#mv file1 file2
The mv command does not affect the contents of the files or directories being moved or renamed.
We can use the -i option with the mv command to prevent accidental overwriting of files.
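Both uses of mv can be sketched in a scratch directory (all paths and file names below are examples):

```shell
mkdir -p /tmp/mvdemo/dir1 && cd /tmp/mvdemo
touch file1
mv file1 file2        # rename file1 to file2
mv file2 dir1         # move file2 into dir1
ls dir1               # shows: file2
```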
Creating files and directories :
touch command: It is used to create an empty file. We can create multiple files with a single command.
Syntax: touch <file names>
Example: #touch file1 file2 file3
mkdir command: It is used to create directories.
Syntax: mkdir -option <dir name>
When <dir name> includes a path name, the -p option is used to create all non-existing parent directories.
Example:
#mkdir -p /export/home/ravi/test/test1
It searches for the expression as a complete word, ignoring those matches that are substrings of larger words.
Metacharacter   Purpose                                      Example      Result
^               Matches at the beginning of a line           '^test'      Lines beginning with 'test'
$               Matches at the end of a line                 'test$'      Lines ending with 'test'
.               Matches any single character                 't..t'       Matches 'test', 'tart', etc.
*               Matches zero or more occurrences of the      '[a-s]*'     Zero or more of the lowercase
                preceding character                                       letters a-s
[]              Matches one character from the set or        '[a-s]*'     Any of the lowercase letters a-s
                range, e.g. [0-9], [a-z], [A-Z]
[^]             Matches any character not in the set         '[^as]est'   Matches 'test', 'best', etc., but
                                                                          not 'aest' or 'sest'
The following metacharacters require extended expressions (egrep):

Metacharacter   Purpose                 Example                     Result
+               Matches one or more     '[a-z]+est'                 Matches one or more occurrences
                of the preceding
                characters
x|y             Matches either x or y   'printer|scanner'           Matches either expression
(|)             Groups characters       '(1|2)+' or 'test(s|ing)'   Matches one or more occurrences
                                                                    of the group
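The anchors and alternation above can be tried on a small sample file; a sketch (the path and contents are examples, and grep -E is the modern spelling of egrep):

```shell
printf 'test\nattest\ntests\nprinter\nscanner\n' > /tmp/grepdemo.txt
grep '^test' /tmp/grepdemo.txt               # lines starting with test: test, tests
grep 'test$' /tmp/grepdemo.txt               # lines ending with test: test, attest
grep -E 'printer|scanner' /tmp/grepdemo.txt  # lines matching either word (egrep on older systems)
```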
Expressions: The search criteria are specified here. The common criteria are discussed below:

Expression       Definition
-name filename   Finds files matching the given filename
-size [+|-]n     Finds files that are larger than +n, smaller than -n, or exactly n
-atime [+|-]n    Finds files that were accessed more than +n days ago, less than -n, or exactly n days ago
-mtime [+|-]n    Finds files that were modified more than +n days ago, less than -n, or exactly n days ago
-user loginID    Finds files owned by the given user
-type            Finds files of the given type, e.g. f for a file or d for a directory
-perm            Finds files with the given permission settings
Action: The action applied after the files have been found. By default, find displays all matching pathnames.

Action                Definition
-exec command {} \;   Runs the specified command on each file located
-ok command {} \;     Requires confirmation before the find command applies the command to each file located
-print                Prints the matching pathnames (the default action)
-ls                   Displays a long listing of each file located
Examples:
#touch findtest
#cat >> findtest
This is for test.
#find ~ -name findtest -exec cat {} \;
This is for test.
#
The above example searches for the file 'findtest' and displays its content. We can also use the -ok action instead of
-exec; this prompts for confirmation before displaying the contents of findtest.
If we want to find files larger than 10 blocks (1 block = 512 bytes) starting from the /ravi directory, the following
command is used:
#find /ravi -size +10
If we want to see all files that have not been modified in the last two days in the directory /ravi, we use :
#find /ravi -mtime +2
Printing files:
lp command: This command is located in the /usr/bin directory. It is used to submit a print request to the printer.
Syntax:
/usr/bin/lp <file name>
/usr/bin/lp -d <printer name> <file name>
Commonly used options for the lp command include:

Option   Description
-d       Specifies the destination printer
-n       Specifies the number of copies to print

lpstat command: It displays the status of the printer queue. The syntax for this command is as follows:
lpstat -option <printer name>
Commonly used options for the lpstat command include:

Option   Description
-d       Displays the system's default printer
-p       Displays the status of the named printer
-o       Displays the status of output requests
VI Editor (Visual Editor)
It is an editor, like Notepad in Windows, that is used to edit files in Solaris. Unlike Notepad, it takes real effort to
learn. I wish the vi editor had been developed by Bill Gates rather than Bill Joy. Anyway, we have no option other
than becoming familiar with its commands so that we become proficient in working with the vi editor. Here are a few
commands that can be used while working with vi.
There are three modes in the vi editor, and we will look at the commands based on the modes.
Command Mode:
This is the default mode of the vi editor. In this mode we can delete, change, copy and move text.
Navigation:

Key                 Use
k (or up arrow)     Moves the cursor up one line
j (or down arrow)   Moves the cursor down one line
h (or left arrow)   Moves the cursor left one character
l (or right arrow)  Moves the cursor right one character
^                   Go to the beginning of the line
$                   Go to the end of the line
CTRL+F              Moves forward one screen
CTRL+B              Moves backward one screen
CTRL+D              Scrolls down half a screen
CTRL+U              Scrolls up half a screen
Copy and paste:

Key            Use
y+w            Copies a word
n+y+w          Copies n words
y+y            Copies a line
n+y+y          Copies n lines
p (lowercase)  Pastes the copied words/lines after the current position of the cursor
P (uppercase)  Pastes the copied words/lines before the current position of the cursor
Deletion:

Key     Use
x       Deletes the character under the cursor
n+x     Deletes n characters
d+w     Deletes a word
n+d+w   Deletes n words
d$      Deletes from the cursor to the end of the line
d+d     Deletes a line
n+d+d   Deletes n lines

Saving and quitting:

Key     Use
ZZ      Saves the file and quits vi
Input or Insert Mode: In this mode we can insert text into the file. We can enter insert mode by pressing one of the
following keys while in command mode:

Key     Use
i       Inserts text before the cursor
a       Appends text after the cursor
o       Opens a new line below the cursor
O       Opens a new line above the cursor
Esc     Returns to command mode
Last Line Mode (Colon Mode): This is used for advanced editing commands. To access last line mode, enter ":"
while in command mode.

Key                        Use
:set nu                    Displays line numbers
:set nonu                  Hides line numbers
:n                         Goes to line n
:/keyword                  Searches forward for the keyword
:nd                        Deletes line n
:5,10d                     Deletes lines 5 through 10
:7 co 32                   Copies line 7 and places it after line 32
:10,20 co 35               Copies lines 10 through 20 and places them after line 35
:%s/old_text/new_text/g    Replaces every occurrence of old_text with new_text
:q!                        Quits without saving
:w                         Saves the file
:wq                        Saves the file and quits
:wq!                       Saves and quits, overriding write protection
:1,$s/$/text               Appends the given text to the end of every line
Using the vi command:
vi -options <file name>
The options are discussed below:
-r : Recovers a file after a system crash while editing.
-R : Opens a file in read-only mode.
Viewing files in read-only mode:
view <file name>
This also opens the file in read-only mode. To exit, type the ':q' command.
Automatic customization of a vi session:
1. Create a file named .exrc in the user's home directory.
2. Enter the set variables without the preceding colon.
The shell prompt for a regular user is hostname%, and for the root user it is hostname#.
Korn Shell:
It is a superset of the Bourne shell with C-shell-like enhancements and additional features such as command history,
command-line editing, aliasing and job control.
Alternative shells:
Bash (Bourne Again Shell): It is a Bourne-compatible shell that incorporates useful features from the Korn and C
shells, such as command-line history, editing and aliasing.
Z Shell: It resembles the Korn shell and includes several enhancements.
TC Shell: It is a completely compatible version of the C shell with additional enhancements.
Shell Metacharacters:
Let's understand shell metacharacters before we proceed any further. These are special characters, generally
symbols, that have a specific meaning to the shell. There are three types of metacharacters:
1. Pathname metacharacter
2. File name substitution metacharacter
3. Redirection metacharacter
Path Name Metacharacters:
Tilde (~) character: The '~' represents the home directory of the currently logged-in user. It can be used instead of
the user's absolute home path. Example: Let's consider that ravi is the currently logged-in user.
#pwd
/
#cd ~
#pwd
/export/home/ravi
#cd ~/dir1
#pwd
/export/home/ravi/dir1
#cd ~raju
#pwd
/export/home/raju
Note: '~' is available in all shells except Bourne shell.
Dash (-) character: The '-' character represents the previous working directory. It can be used to switch between the
previous and current working directories.
Example:
#pwd
/
#cd ~
#pwd
/export/home/ravi
#cd -
#pwd
/
#cd -
#pwd
/export/home/ravi
File Name Substitution Metacharacters :
Asterisk (*) character: It is called a wild card character and represents zero or more characters, except for the
leading period '.' of a hidden file.
#pwd
/export/home/ravi
#ls dir*
dir1 dir2 directory1 directory2
#
Question mark (?) metacharacter: It is also a wild card character and represents any single character, except the
leading period (.) of a hidden file.
#pwd
/export/home/ravi
#ls dir?
dir1 dir2
#
Compare the examples of the asterisk and question mark metacharacters to see the difference.
Square Bracket Metacharacters: It represents a set or range of characters for a single character position.
The range list can be anything like : [0-9], [a-z], [A-Z].
#ls [a-d]*
apple boy cat dog
#
The above example will list all the files/directories starting with either 'a' or 'b' or 'c' or 'd'.
#ls [di]*
dir1 dir2 india ice
#
The above example will list all the files starting with either 'd' or 'i'.
A few shell metacharacters are listed below:

Metacharacter   Description
~               Represents the home directory of the currently logged-in user
<               Redirects standard input from a file
>               Redirects standard output to a file
>>              Appends standard output to a file
&               Runs a command in the background
Korn shell variables: A variable is a temporary storage area in memory that enables us to store a value. These
variables are of two types:
1. Variables that are exported to subprocesses.
2. Variables that are not exported to subprocesses.
Let's look at a few commands for working with these variables:
To set a variable :
#VAR=value
#export VAR
Note: There is no space on either side of the '=' sign.
To unset a variable:
#unset VAR
To display all variables:
We can use 'set' or 'env' or 'export' command.
To display value of a variable:
echo $VAR or print $VAR
Note: When the $ sign precedes a shell variable name, the shell substitutes the variable with its value.
Default Korn Shell Variables :
EDITOR : The default editor for the shell.
FCEDIT : It defines the editor for the fc command.
HOME : Sets the directory to which cd command switches.
LOGNAME : Sets the login name of the user.
PATH : It specifies the paths where shell searches for a command to be executed.
PS1 : It specifies the primary Korn shell prompt ($).
PS2 : It specifies the secondary command prompt (>)
SHELL : It specifies the name of the shell.
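The difference between exported and non-exported variables can be sketched directly at the prompt (MYVAR is an arbitrary example name):

```shell
MYVAR=hello                              # no spaces around the '=' sign
echo $MYVAR                              # the current shell sees the value: hello
sh -c 'echo ${MYVAR:-unset}'             # a subprocess does not: prints "unset"
export MYVAR
sh -c 'echo $MYVAR'                      # after export the subprocess sees it: hello
unset MYVAR
```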
Using quoting characters:
Quoting is the process that instructs the shell to mask/ignore the special meaning of metacharacters. The following
are a few uses of the quoting characters:
Single quotation mark (''): It instructs the shell to ignore all enclosed metacharacters.
Example:
#echo $SHELL
/bin/ksh
#echo '$SHELL'
$SHELL
#
Double quotation marks (""): They instruct the shell to ignore all enclosed shell metacharacters, except for the following:
1. The single backward quotation mark (`): This executes the Solaris command inside the backquotes.
Example:
# echo "Your current working directory is `pwd`"
Your current working directory is /export/home/ravi
In the above example the '`' is used to execute the 'pwd' command inside the quotation mark.
2. The backslash (\) in front of a metacharacter: This ignores the meaning of the metacharacter. Example:
#echo "$SHELL"
/bin/ksh
#echo "\$SHELL"
$SHELL
In the above example, the inclusion of '\' ignores the meaning of metacharacter '$'
3. The '$' sign followed by a command inside parentheses: This executes the command inside the
parentheses. Example:
# echo "Your current working directory is $(pwd)"
Your current working directory is /export/home/ravi
In the above example, enclosing the pwd command inside parentheses preceded by the $ sign executes the pwd
command.
Displaying the command history:
The shell keeps a history of all the commands entered, and we can re-use these commands. For a given user, this
list is shared among all Korn shells.
Syntax: history [options]
The output will look somewhat like the following:
...
125 pwd
126 date
127 uname -a
128 cd
The numbers displayed to the left of the commands are command numbers and can be used to re-execute the
corresponding command. To view the history without command numbers, the -n option is used: #history -n
To display the last 5 commands used along with the current command :
#history -5
To display the list in reverse order:
#history -r
To display most recent pwd command to the most recent uptime command, enter the following:
#history pwd uptime
Note: The Korn shell stores the command history in the file specified by the HISTFILE variable. The default is the
~/.sh_history file. By default the shell stores the most recent 128 commands.
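As a sketch, both settings can be customized in the user's initialization file, assuming a Korn shell (the file name and values below are examples):

```shell
# in $HOME/.profile or $HOME/.kshrc
HISTFILE=$HOME/.my_sh_history   # where the shell saves command history
HISTSIZE=500                    # keep the 500 most recent commands
export HISTFILE HISTSIZE
```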
Note: The history command is alias for the command "fc -l".
The 'r' command :
The r command is a Korn shell alias that enables us to repeat a command.
Example:
#pwd
/export/home/ravi
#r
/export/home/ravi
Type ls d and then press Esc followed by the \ (backslash) key. The shell completes the file name and displays:
#ls directoryforlisting/
We can also ask the shell to display all file names beginning with 'd' by pressing Esc and then the = key sequentially.
Two points to note here:
1. The key sequences presented above work only in the vi mode of command-line editing.
2. The sequence in which the keys are pressed is important.
Command Redirection:
There are two redirection metacharacters:
1. The greater-than (>) metacharacter
2. The less-than (<) metacharacter
Both are complemented by the pipe (|) character.
The File Descriptors:
Each process works with file descriptors. A file descriptor determines where the input to a command originates and
where the output and error messages are sent.
Descriptor   Definition
0 (stdin)    Standard input; the keyboard by default
1 (stdout)   Standard output; the terminal window by default
2 (stderr)   Standard error; the terminal window by default

All commands that process file content read from standard input and write to standard output.
Redirecting the standard Input:
command < filename or command 0<filename
In the above form, "command" takes its input from "filename" instead of the keyboard.
Redirecting the standard Output:
command > filename or command 1>filename
#ls -l ~/dir1 > dirlist
The above command redirects the output to a file 'dirlist' instead of displaying it over the terminal.
command >> filename
#ls -l ~/dir1 >> dirlist
The above example appends the output to the file 'dirlist'.
Redirecting the standard error:
command > filename 2> <error file>
command > filename 2>&1
The first form redirects error messages to the error file specified after 2>.
The second form redirects standard error to the same file as standard output.
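Both forms can be sketched with one command that produces output and an error at the same time (the file paths are examples; /etc/passwd exists while /no/such/file does not):

```shell
# stdout and stderr go to different files
ls /etc/passwd /no/such/file > /tmp/out.txt 2> /tmp/err.txt
cat /tmp/out.txt   # contains /etc/passwd
cat /tmp/err.txt   # contains the error message for /no/such/file

# both streams into the same file
ls /etc/passwd /no/such/file > /tmp/both.txt 2>&1
```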
The pipe character:
The pipe character is used to redirect the output of one command as input to another command.
Syntax: command | command
Example:
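A couple of pipe sketches (the process names in the output will vary by system):

```shell
ps -ef | grep init         # list processes, keep only the lines containing "init"
who | wc -l                # count the lines that who prints (logged-in sessions)
printf 'c\na\nb\n' | sort  # pipes work with any filter: prints a, b, c
```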
Initialization files:
Each shell reads a system-wide initialization file and a user initialization file at login:

Shell    System-wide Initialization File   Primary User Initialization File (Read at Login)
Bourne   /etc/profile                      $HOME/.profile
Korn     /etc/profile                      $HOME/.profile
C        /etc/.login                       $HOME/.cshrc and $HOME/.login

User initialization files read each time a new shell is started:

Shell Pathname   File Read When a New Shell Is Started
/bin/sh          (none)
/bin/ksh         $HOME/.kshrc
/bin/csh         $HOME/.cshrc
1. The ~/.cshrc file: The C shell reads it each time a new shell starts and when the user logs in. It can be used to
a) customize the terminal settings and environment variables, and b) instruct the system to initiate an application.
The following settings can be configured in the .cshrc file:
Shell prompt definitions (PS1 & PS2)
Alias Definitions
Shell functions
History Variables
Shell option ( set -o option)
2. The ~/.login file: It has the same functionality as the .cshrc file and has been retained for legacy reasons.
Note: The /etc/.login file is a separate system-wide file that the system administrator maintains to set up tasks for
every user who logs in.
The changes made in these files are applicable only when the user logs in again. To make the changes effective
immediately, source the ~/.cshrc and ~/.login file using the source command:
#source ~/.cshrc
#source ~/.login
The ~/.dtprofile file: It resides in the user's home directory and determines generic and customized settings for the
desktop environment. The variable settings in this file can override the default desktop settings. This file is created
when the user logs into the desktop environment for the first time.
Important: When a user logs in to the desktop environment, the shell reads the .dtprofile, .profile and .kshrc files
sequentially. If the DTSOURCEPROFILE variable in .dtprofile is not true or does not exist, the .profile file is not
read by the shell.
The shell reads the .profile and .kshrc files when the user opens a console window.
The shell reads the .kshrc file when the user opens a terminal window.
Configuring the $HOME/.profile file:
It can be configured to instruct the login process to execute the initialization file referenced by ENV variable.
To configure that we need to add the following into the $HOME/.profile file:
ENV=$HOME/.kshrc
export ENV
Configuring the $HOME/.kshrc file:
This file contains Korn-shell-specific settings. To configure the PS1 variable, we add the following to the
$HOME/.kshrc file:
PS1="`hostname` $ "
export PS1
Advanced Shell Functionality:
In this module we will learn four important aspects of the Korn shell.
Job control commands:

Command        Description
jobs           Lists all jobs that are currently running or stopped in the background
bg %<jobID>    Resumes the stopped job in the background
fg %<jobID>    Brings the job into the foreground
Ctrl+Z         Stops the foreground job and places it in the background as a stopped job
stop %<jobID>  Stops a background job

Note: When a job is placed in either the foreground or the background, the job restarts.
Alias Utility in the Korn Shell:
Aliases in the Korn shell can be used to abbreviate commands for ease of use.
Example:
Suppose we frequently use the listing command ls -ltr. We can create an alias for it as follows:
#alias list='ls -ltr'
Now, when we type 'list' at the shell prompt and press Return, the shell replaces 'list' with the command 'ls -ltr' and
executes it.
Syntax : alias <alias name>='command string'
Note:
1. There should not be any space on either side of the '=' sign.
2. The command string must be quoted if it includes any options, metacharacters, or spaces.
3. Each command in a single alias must be separated with a semicolon, e.g.: #alias info='uname -a; date'
The Korn shell has predefined aliases as well, which can be listed with the 'alias' command:
#alias
..
stop='kill -STOP'
suspend='kill -STOP $$'
..
Removing Aliases:
Syntax: unalias <alias name>
Example:
#unalias list
Korn shell functions:
A function is a group of commands organized together as a separate routine. Using a function involves two steps:
1. Define the function:
function <function name> { command; ... command; }
A space must appear after the opening brace and before the closing brace.
Example:
#function HighFS { du -ak | sort -n | tail -10; }
The above example defines a function to list the top 10 files using the most space under the current working
directory.
2. Invoke the function :
If we want to run the above defined function, we just need to call it by its name.
Example:
#HighFS
6264 ./VRTSvcs/conf/config
6411 ./VRTSvcs/conf
6510 ./VRTSvcs
11312 ./gconf/schemas
14079 ./gconf/gconf.xml.defaults/schemas/apps
16740 ./gconf/gconf.xml.defaults/schemas
17534 ./gconf/gconf.xml.defaults
28851 ./gconf
40224 ./svc
87835 .
Note: If a function and an alias are defined with the same name, the alias takes precedence.
To view the list of all functions:
#typeset -f -> Displays the functions along with their definitions.
#typeset +f -> Displays the function names only.
Configuring shell environment variables:
The shell's secondary prompt string is stored in the PS2 shell variable, and it can be customized as follows:
#PS2="Secondary Shell Prompt"
#echo $PS2
Secondary Shell Prompt
#
To display this secondary shell prompt in every shell, it must be set in the user's Korn shell initialization
file (.kshrc).
Setting Korn Shell options :
Korn Shell options are boolean (on or off). Following is the Syntax:
To turn on an option:
#set -o option_name
To turn off an option:
#set +o option_name
To display current options:
# set -o
Example:
#set -o noclobber
#set -o | grep noclobber
noclobber        on
The above example sets the noclobber option. When this option is set, the shell refuses to redirect standard output
to an existing file and displays an error message instead.
#df -h > DiskUsage
#vmstat > DiskUsage
ksh: DiskUsage: file already exists
#
To deactivate the noclobber option :
#set +o noclobber
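While noclobber is active, the >| operator can override it for a single redirection; a sketch (the file path is an example):

```shell
rm -f /tmp/clobber_demo.txt
set -o noclobber
echo one > /tmp/clobber_demo.txt
echo two > /tmp/clobber_demo.txt 2>/dev/null || echo "redirection refused"
echo two >| /tmp/clobber_demo.txt     # >| forces the overwrite
cat /tmp/clobber_demo.txt             # two
set +o noclobber
```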
Shell Scripts:
A shell script is a text file containing a series of commands executed one by one. There are different shells available
in Solaris. To ensure that the correct shell is used to run the script, it should begin with the characters #! followed
immediately by the absolute pathname of the shell.
#!/full_Pathname_of_Shell
Example:
#!/bin/sh
#!/bin/ksh
Comments: They provide information about the script/commands. The text inside a comment is not executed.
A comment starts with the character '#'.
Let's write our first shell script:
#cat MyFirstScript
#!/bin/sh
ls -ltr #This is used to list the files/directories
Running a shell script:
The shell executes the script line by line; it does not compile the script into binary form. In order to run a script, a
user must have read and execute permission.
Example:
#./MyFirstScript
The above example runs the script in a sub-shell. If we want to run the script as if its commands were run in the
current shell, the dot (.) command is used as follows:
#. ./MyFirstScript
Passing Value to the shell script:
We can pass values to a shell script using the pre-defined variables $1, $2 and so on. These variables are called
positional parameters. When the user runs the shell script, the first word after the script name is stored in $1, the
second in $2, and so on.
Example:
#cat welcome
#!/bin/sh
echo $1 $2
#welcome ravi ranjan
ravi ranjan
In the above example, when we ran the script welcome, the two words after it, ravi and ranjan, were stored in $1
and $2 respectively.
Note: There is a limitation in the Bourne shell: it accepts only a single digit after the $ sign. So if we try to access
the 10th argument as $10, the result is the value of $1 followed by a literal 0.
To overcome this problem, the shift command is used.
Shift Command:
It shifts the positional parameter values back by one position, i.e. the value of $2 is assigned to $1, $3 to $2, and
so on.
Checking the exit status:
All commands under Solaris return an exit status. The value 0 indicates success, and a non-zero value in the range
1-255 indicates failure. The exit status of the last foreground command is held in the special shell variable $?.
# ps -ef | grep nfsd
root  6525 22601  0 05:55:01 pts/11   0:00 grep nfsd
# echo $?
1
#
In the above example there is no nfsd process running, hence 1 is returned.
Comparing numbers: The following options can be used with the test command to compare numeric values:

Option   Description
-eq      Equal to
-ne      Not equal to
-lt      Less than
-le      Less than or equal to
-gt      Greater than
-ge      Greater than or equal to
We can compare strings for equality, inequality, etc. The following table lists the operators that can be used to
compare strings:

Option   Description
=        Equal to, e.g. #test "string1" = "string2"
!=       Not equal to
<        Less than, e.g. #test "ab" \< "cd"
>        Greater than, e.g. #test "ab" \> "cd"
-z       True if the string has zero length
-n       True if the string has non-zero length

Note: The < and > operators are also used by the shell for redirection, so we must escape them using \< or \>.
Example:
Let's test whether the value of the variable LOGNAME is ravi.
#echo $LOGNAME
ravi
# test "$LOGNAME" = "ravi"
#echo $?
0
#[ "$LOGNAME" = "ravi" ]
#echo $?
0
Let's test whether we have read permission on /ravi:
#ls -l /ravi
-rw-r--r-- 1 root sys 290 Jan 10 01:10 /ravi
#test -r /ravi
#echo $?
0
#[ -r /ravi ]
#echo $?
0
Let's test whether /var is a directory:
#test -d /var
#echo $?
0
#[ -d /var ]
#echo $?
0
The case statement: It compares a value against a set of patterns and runs the commands for the first pattern
that matches. Syntax:

case value in
pattern1) command
          ...
          command
          ;;
pattern2) command
          ...
          command
          ;;
esac
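A small runnable sketch of a case statement (the variable name and patterns are examples):

```shell
ANSWER=yes
case $ANSWER in
  yes|y) echo "proceeding" ;;   # matches "yes" or "y"
  no|n)  echo "stopping"   ;;
  *)     echo "unknown"    ;;   # default pattern
esac
# prints: proceeding
```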
The special shell variables:
$#   The number of positional parameters
$*   All the positional parameters
$?   The exit status of the last command
$$   The PID of the current shell
$!   The PID of the last background job
Process Management
Process: Every program in Solaris runs as a process, and a unique PID is attached to each process. A process
started by the OS that runs in the background and provides a service is called a daemon.
Each process has a PID, UID and GID associated with it. The UID indicates the user who owns the process, and the
GID denotes the group to which the owner belongs.
When a process creates another process, the new process is called the child process and the original one is called
the parent process.
Viewing Process:
ps command: It is used to view processes.
Syntax: ps [options]
A few options are discussed below:

Option   Description
-e       Prints information about every process on the system, including the PID, TTY (terminal identifier),
         TIME and CMD
-f       Generates a full verbose listing, which also includes the UID, parent PID and process start time (STIME)
Example:
#ps -ef | more
     UID   PID  PPID  C    STIME TTY      TIME CMD
    root     0     0  0   Jun 02 ?        2:18 sched
    root     1     0  0   Jun 02 ?        1:47 /sbin/init
    root     2     0  0   Jun 02 ?        0:13 pageout
    root     3     0  0   Jun 02 ?      110:25 fsflush
  daemon   140     1  0   Jun 02 ?        0:15 /usr/lib/crypto/kcfd
    root     7     1  0   Jun 02 ?        0:28 /lib/svc/bin/svc.startd
--More--
Column   Description
UID      The user ID of the process owner
PID      Process ID
PPID     Parent process ID
STIME    The start time of the process
TTY      The controlling terminal of the process
TIME     The cumulative CPU time used by the process
CMD      The command that started the process
We can also search for a specific process by combining ps and grep. For example, to search for the nfsd process,
we use the following command:
-sh-3.00$ ps -ef | grep nfsd
  daemon  2127     1  0   Jul 06 ?        0:00 /usr/lib/nfs/nfsd
    ravi 26073 23159  0 03:05:49 pts/175  0:00 grep nfsd
-sh-3.00$
pgrep command: It is used to search for processes by name and displays the PID of each matching process.
Syntax: pgrep [options] pattern
The options are described below:

Option   Description
-x       Displays only the PIDs whose name exactly matches the pattern
-n       Displays only the most recently created PID that matches the pattern
-U uid   Displays only the PIDs that belong to the specified user. This option accepts either a user name
         or a UID
-l       Displays the process name along with the PID
-t term  Displays only those processes that are associated with a terminal in the term list
Examples:
-sh-3.00$ pgrep j
3440
1398
-sh-3.00$ pgrep -l j
3440 java
1398 java
-sh-3.00$ pgrep -x java
3440
1398
-sh-3.00$ pgrep -n java
1398
-sh-3.00$ pgrep -U ravi
28691
28688
Commonly used signals:

Signal No.   Signal Name   Event        Default Response
1            SIGHUP        Hang up      Exit
2            SIGINT        Interrupt    Exit
9            SIGKILL       Kill         Forced exit
15           SIGTERM       Terminate    Exit
Using the kill command: It is used to send a signal to one or more processes. A regular user can terminate only the
processes they own; the root user can kill any process. By default this command sends signal 15 to the process.
Syntax: kill [-signals] PIDs
Examples:
# pgrep -l java
2441 java
#kill 2441
If the process does not terminate, issue signal 9 to forcefully terminate it as below:
#kill -9 2441
Using the pkill command: It is used to terminate processes with signal 15 by default. We can specify the process
names (to be terminated) directly in this command.
Syntax: pkill [-options] pattern
The options are same as that of pgrep command.
Example:
#pkill java
We can force the process to terminate by using signal 9:
#pkill -9 -x java
The different flavors of UNIX have different default file systems. A few of them are listed below:
SOLARIS - UFS (Unix File System)
AIX - JFS (Journaled File System)
HP-UX - HFS (High Performance File System)
LINUX - ext2 & ext3
Before getting into the UFS file system, let's discuss the architecture of the file system in Solaris and the other
file systems used in Solaris.
Solaris uses the VFS (Virtual File System) architecture. It provides a standard interface for different file system
types. The VFS architecture enables the kernel to perform basic file operations such as reading, writing and listing.
It is called virtual because the user can issue the same commands regardless of the underlying file system. Solaris
uses both memory-based and disk-based file systems.
Let's discuss the memory-based file systems first:
Memory-based File Systems:
They use physical memory rather than disk, and hence are also called virtual or pseudo file systems. The following
memory-based file systems are supported by Solaris:
1. Cache File System (CacheFS): It uses the local disk to cache data from slow file systems such as CD-ROM.
2. Loopback File System (LOFS): If we want to make a file system, e.g. /example, appear as /ex, we can do that by
creating a new virtual file system known as a loopback file system.
3. Process File System (PROCFS): It contains the list of active processes in Solaris, by process ID, in the /proc
directory. It is used by the ps command.
4. Temporary File System (TMPFS): It is the temporary file system used by Solaris for file system operations. It is
the default file system for the /tmp directory in Solaris.
5. FIFOFS: The first-in, first-out file system contains named pipes that give processes access to data.
6. MNTFS: It contains information about all the mounted file systems in Solaris.
7. SWAPFS: This file system is used by the kernel for swapping.
Disk Based File Systems:
The disk based file systems reside on disks such as hard disks, CD-ROMs etc. The following are the disk based file systems
supported by SOLARIS:
1. High Sierra File System (HSFS): It is the file system for CD-ROMs. It is a read-only file system.
2. PC File System (PCFS): It is used to gain read/write access to disks formatted for DOS.
3. Universal Disk Format (UDF): It is used to store information on DVDs.
4. Unix File System (UFS): It is the default file system used in SOLARIS. We will discuss it in detail below.
Device File System (devfs)
The device file system (devfs) manages devices in Solaris 10 and is mounted at the mount point /devices.
The files in the /dev directory are symbolic links to the files in the /devices directory.
Features of UFS File System:
1. Extended Fundamental Types (EFTs). Provides a 32-bit user ID (UID), a group ID (GID), and device numbers.
2. Large file systems. This file system can be up to 1 terabyte in size, and the largest file size on a 32-bit system
can be about 2 gigabytes.
3. Logging. Offers logging that is enabled by default in Solaris 10. This feature can be very useful for auditing,
troubleshooting, and security purposes.
4. Multiterabyte file systems. Solaris 10 provides support for multiterabyte file systems on machines that run a
64-bit Solaris kernel. In previous versions, support was limited to approximately 1 terabyte for both 32-bit and
64-bit kernels. You can create a UFS up to 16 terabytes in size with an individual file size of up to 1 terabyte.
5. State flags. Indicate the state of the file system such as active, clean, or stable.
6. Directory contents: table
7. Max file size: 2^73 bytes (8 ZB)
8. Max filename length: 255 bytes
9. Max volume size: 2^73 bytes (8 ZB)
10. Supported operating systems: AIX, DragonFlyBSD, FreeBSD, FreeNAS, HP-UX, NetBSD, Linux, OpenBSD,
Solaris, SunOS, Tru64 UNIX, UNIX System V, and others
Now that we have some basic idea of the SOLARIS file system, let's explore some important file systems
in SOLARIS.
Windows users will be familiar with important directories in Windows like system32, Program Files etc.; likewise, below we
will discuss some important file systems in Solaris:
/ root directory
/usr system programs, libraries and man pages
/opt 3rd party packages
/etc system configuration files
/dev logical device info
/devices physical device info
/home default user home directory
/kernel info about the kernel (genunix for Solaris)
/lost+found unsaved (recovered) data info
/proc all active PIDs running
/tmp temporary file system
/lib library files (debuggers, compilers)
/var contains logs for troubleshooting
/bin symbolic link to the /usr/bin directory (a symbolic link is similar to a shortcut in Windows)
/export commonly holds users' home directories but can be customized according to requirements
/mnt default mount point used to temporarily mount file systems
/sbin contains system administration commands and utilities; used during booting when /usr/bin is not
mounted
Important: / is the root directory and, as the name suggests, all other directories branch out from it.
File Handling
Let us now get started with managing files, i.e. creating, editing and deleting them. I have mentioned a few commands
below and their usage in managing/handling files & directories.
pwd Displays current working directory
touch filename Creates a file
touch file1 file2 file3 Creates multiple files(space is used as separator)
file filename Displays the type of a file/directory
cat filename Displays the content of the file
cat > filename Writes/over-writes the file(ctrl + D save and exit)
cat >> filename Used to append the content to the file(ctrl + D save and exit)
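The two cat redirections above can be exercised non-interactively as a quick sketch; printf stands in for the text you would type before Ctrl+D, and demo.txt is just a scratch filename:

```shell
# '>' creates or overwrites; '>>' appends (printf replaces interactive typing)
printf 'first line\n'  > demo.txt    # like 'cat > demo.txt' then Ctrl+D
printf 'second line\n' >> demo.txt   # like 'cat >> demo.txt' then Ctrl+D
cat demo.txt                         # shows both lines
printf 'replaced\n' > demo.txt       # '>' truncates the previous content
cat demo.txt                         # shows only the new line
rm demo.txt                          # clean up
```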
rm <linkName> Removes a link
Important: Deleting a file leaves any symbolic links to it dangling, so the links should be removed as well. The file's
content, however, is not freed until all the hard links pointing to it are deleted.
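A small sketch of the hard-link/symbolic-link behavior described above (the file names are arbitrary):

```shell
printf 'payload\n' > original
ln original hardcopy      # hard link: a second name for the same inode
ln -s original softcopy   # symbolic link: a pointer to the name "original"
rm original               # removes one name; the data itself survives
cat hardcopy              # still prints the content - a hard link remains
cat softcopy 2>/dev/null  # fails: the symbolic link now dangles
rm -f hardcopy softcopy   # clean up
```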
Few commands to check disk and file system usage
df command (Disk free command)
df -h It is used to display the file system information in human readable format
ls -lt It displays all the files and directories in descending order of their last modified date (newest first)
ls -ltr It displays all the files and directories in ascending order of their last modified date (oldest first)
ls -R It displays all the files, directories and sub-directories recursively
ls -r It displays all the files and directories in reverse alphabetical order
ls -i <FileName> Displays the inode number of the file
Field Description
Owner Permission used by the assigned owner of the file or directory
Group Permission used by the members of the group that owns the file or directory
Other Permission used by all users other than the owner and members of the group that owns the file or directory
Each of these user classes has three permissions, called a permission set. Each permission set contains read, write and
execute permissions.
Each file or directory has three permission sets for the three types of users. The first permission set is for the owner,
the second permission set is for the group, and the third and last is for other users.
For Example:
#ls -l
-rw-r--r-- 2 root root 10 Jan 31 06:37 file1
In the above example the first permission set is rw-, meaning read and write. The first permission set is for the owner, so
the owner has read and write permissions.
The second permission set, for the group, is r--, i.e. read only.
The third permission set, for other users, is r--, i.e. read only.
The '-' symbol denotes a denied permission.
Permission characters and sets:
Permission (Character, Octal Value) - Access for a file
Read (r, 4) - User can display the file content & copy the file
Write (w, 2) - User can modify the file content
Execute (x, 1) - User can run the file if it is executable
Note: For a directory to be in general use it must have read and execute permissions.
When we create a new file or directory in Solaris, the OS assigns initial permissions automatically. The initial permissions of
a file or a directory are modified based on the default umask value.
UMASK (User Mask Value)
It is used to provide security to files and directories. It is a three-digit octal value that is associated with the read, write,
and execute permissions. The default UMASK value is [022]. It is set in /etc/profile.
The various permissions and their values are listed below:
r (read only) = 4
w (write) = 2
x (execute) = 1
rwx (read+write+execute) 4+2+1 = 7
rw (read + write) 4+2 =6
Computation of the default permissions for a directory:
A directory has a maximum permission base of [777]. When a user creates a directory, the user's umask value is
subtracted from this base.
Permissions of a created directory [755] (rwxr-xr-x) = [777] (directory base) - [022] (default user's
UMASK value)
Computation of the default permissions for a file:
A file has a maximum permission base of [666]. When a user creates a file, the user's umask value is subtracted from
this base.
Permissions of a created file [644] (rw-r--r--) = [666] (file base) - [022] (default user's UMASK value)
#umask Displays the user's UMASK Value
#umask 000 Changes the user's UMASK Value to 000
Note: Changing the UMASK value is generally not recommended.
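The subtraction rule above is, strictly speaking, a bit-clear of the umask from the base permissions (777 for directories, 666 for files). A quick sanity check in any POSIX shell (umask-demo is just a scratch filename):

```shell
# Base permissions with the umask bits cleared
printf '%o\n' $(( 0777 & ~0022 ))   # directories: prints 755
printf '%o\n' $(( 0666 & ~0022 ))   # files: prints 644
# Verify against the live shell: a file created under umask 022
umask 022
touch umask-demo
ls -l umask-demo                    # permission column shows -rw-r--r--
rm umask-demo
```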
chmod (Change Mode):
This command is used to change a file's or directory's permissions. There are two ways of doing it.
1. Absolute or Octal Mode:
e.g. chmod 464 <FileName>/<DirectoryName>
The above command sets the permissions to r--rw-r-- (read for owner, read+write for group, read for others).
2. Symbolic Mode:
First we need to understand the below mentioned symbols:
'+' It is used to add a permission
'-' It is used to remove a permission
'u' It is used to assign/remove the permissions of the user (owner)
'g' It is used to assign/remove the permissions of the group
'o' It is used to assign/remove the permissions of other users
'a' Permission for all.
e.g. chmod u-wx,g-x,g+w,o-x <FileName>
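Both modes can reach the same permissions; a small sketch (perm-demo is just a scratch filename):

```shell
touch perm-demo
chmod 464 perm-demo         # octal: owner r--, group rw-, other r--
ls -l perm-demo             # shows -r--rw-r--
chmod 644 perm-demo         # reset to the usual rw-r--r--
chmod u-w,g+w perm-demo     # symbolic: same end state as chmod 464
ls -l perm-demo             # again -r--rw-r--
rm -f perm-demo             # clean up
```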
ACL entry syntax:
u[ser]::perm
g[roup]::perm
o[ther]:perm
m[ask]:perm
u[ser]:UID:perm or u[ser]:username:perm
g[roup]:GID:perm or g[roup]:groupname:perm
Determining if a file has an ACL: Files with additional ACL entries are said to have non-trivial ACL entries; if a file has no
ACL entries beyond the default ones, it has trivial ACL entries. In ls -l output, a file with non-trivial ACL entries has a
+ sign at the end of its permissions. For example:
#ls -l ravi
-rw-r--r--+ 1 root root 0 April 07 09:00 acltest
#getfacl acltest
# file: acltest
# owner: root
# group: root
user::rw-
user:acluser:rwx
group::r--
mask::r--
other:r--
The + sign at the end indicates the presence of non-trivial ACL entries.
2. setfacl : It is used to configure ACL entries on files.
Configuring or modifying an ACL :
Syntax : setfacl -m acl_entry filename
-m : Modifies the existing ACL entry.
acl_entry : It is a list of modifications to apply to ACLs for one or more files/directories.
Example:
#getfacl acltest
# file: acltest
# owner: root
# group: root
user::rw-
group::r--
mask::r--
other:r--
#setfacl -m u:acluser:7 acltest
#getfacl acltest
# file: acltest
# owner: root
# group: root
user::rw-
user:acluser:rwx    #effective:r--
group::r--          #effective:r--
mask::r--
other:r--
In the above example, we saw how we assigned rwx permission to the user acluser; however, the effective permission
remains r-- because the mask value is r--, which caps the effective permission for every entry except the owner and others.
Recalculating an ACL Mask:
In the above example, we saw that even after making an acl entry of rwx for the user acluser, the effective permission
remains r--. In order to overcome that we use -r option to recalculate the ACL mask to provide the full set of
requested permissions for that entry. The below example shows the same :
#setfacl -r -m u:acluser:7 acltest
#getfacl acltest
# file: acltest
# owner: root
# group: root
user::rw-
user:acluser:rwx    #effective:rwx
group::r--          #effective:r--
mask::rwx
other:r--
We have seen above how chmod can be used to change permissions too. However, we should be careful while using this
command if ACL entries exist for the file/directory, as it recalculates the mask and changes the effective permissions.
Let's proceed with the above example. We have changed the effective permission of user acluser to rwx. Now, let's
change the group permission to rw- using the chmod command:
#chmod 664 acltest
#getfacl acltest
# file: acltest
# owner: root
# group: root
user::rw-
user:acluser:rwx    #effective:rw-
group::rw-          #effective:rw-
mask::rw-
other:r--
So we see that the effective permission for the user acluser changes from rwx to rw-.
Substituting an ACL:
This is used to replace the entire set of ACL entries with the specified one. So, we should not miss the basic set of
ACL entries: user, group, other and ACL mask permissions.
Syntax: setfacl -s u::perm,g::perm,o:perm,[u:UID:perm],[g:GID:perm] filename
-s : substitutes the ACL entries
Deleting an ACL:
It is used to delete an ACL entry.
Syntax: setfacl -d acl_entry filename
Let's go with the last example of file acltest. Now we want to remove the entry for the user acluser. This is done as
follows:
#setfacl -d u:acluser acltest
#getfacl acltest
# file: acltest
# owner: root
# group: root
user::rw-
group::rw-    #effective:rw-
mask::rw-
other:r--
File System
A file system is a structure of directories that you can use to organize and store files.
A file system refers to each of the following:
- A particular type of file system : disk based, network based or virtual file system
- An entire file tree, beginning with the / directory
- The data structure of a disk slice or other media storage device
- A portion of a file tree structure that is attached to a mount point on the main file tree so that files are accessible.
Solaris uses VFS(Virtual File system) architecture which provides a standard interface for different file system types
and enables basic operations such as reading, writing and listing files.
UFS (Unix File System) is the default file system for Solaris. It starts with the root directory. The Solaris OS also includes
ZFS (Zettabyte File System), which can be used alongside UFS or as the primary file system.
/bin
Symbolic link to /usr/bin & location for binary files of standard system
commands
/dev
/etc
/export
the default directory for commonly shared file system such as user's home
directory, application software or other shared file system
/home
/kernel
/lib
/mnt
/opt
/platform The directory of platform-dependent loadable kernel modules
/sbin
The single-user bin directory that contains essential executables used during the
booting process and in manual system-failure recovery
/usr
The directory that contains programs, scripts & libraries that are used by all
system users
/var
/dev/fd
/devices
/etc/mnttab
/etc/svc/volatile
/proc
Stores current process-related information. Every process has its own set
of subdirectories below the /proc directory
/tmp
/var/run
It contains lock files, special files & reference files for a variety of
system processes & services.
/dev/dsk
/dev/fd File descriptors
/dev/md
/dev/pts
/dev/rdsk
/dev/rmt
/dev/term Serial devices
/etc/acct
/etc/cron.d
/etc/init.d
/etc/lib
/etc/lp
/etc/mail
/etc/nfs
/etc/opt
/etc/rc.d
/etc/security Control files for role-based access and security privileges
/etc/skel
/etc/svc
/usr/bin
/usr/ccs
/usr/demo
/usr/dt
/usr/include Header files (for C programs)
/usr/jdk
/usr/kernel
Platform-independent loadable kernel modules that are not required during the
boot process
/usr/sbin
/usr/lib
/usr/opt
/usr/spool
/var/adm Log files
/var/crash
/var/spool Spooled files
/var/svc
/var/tmp
Note: In-memory directories are created & maintained by Kernel & system services. A user should never create or
alter these directories.
Disk Terms Description
Track A concentric ring on the disk surface along which data is stored
Cylinder The set of tracks with the same nominal distance from the axis about which the disk rotates
Sector A fixed-size subdivision of a track; the smallest addressable unit on the disk
Block A unit of data transfer, made up of one or more sectors
Disk controller A chip and its associated circuitry that controls the disk drive
Disk label Part of the disk, usually starting from the first sector, that contains disk geometry and partition information
Device driver A kernel module that controls a physical (hardware) or virtual device
Disk slices are groups of cylinders that are commonly used to organize data by function. A starting cylinder
and an ending cylinder define each slice and determine its size.
To label a disk means to write the slice information onto the disk. The disk is labeled after changes have been made
to the slices.
Slice Name Function
0 / Root file system
1 swap Swap area
2 - Entire disk
3 - Optional
4 - Optional
5 /opt Optional software
6 /usr System executables & programs
7 /export/home Users' home directories
The EFI (Extensible Firmware Interface) disk label includes a partition table in which you can define up to 10 (0-9)
disk partitions (slices). Provision is made for up to 16 slices but only 10 of these are used (8, plus 2 used for platform-specific
purposes). The Solaris OS currently does not boot from disks containing EFI labels.
Slice Name Function
0 / Root file system
1 swap Swap area
2 - Entire disk
3 - Optional
4 - Optional
5 /opt Optional software
6 /usr System executables & programs
7 /export/home Users' home directories
8 boot Boot information
9 alternates Alternative disk blocks
In the Solaris OS each device is represented by three different names: physical, logical and instance name.
Logical Device Name:
It is a symbolic link to the physical device name.
It is kept under the /dev directory.
Every disk device has entries in /dev/dsk & /dev/rdsk.
It contains the controller number, target number (if required), disk number and slice number.
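As an illustration, the four fields of a logical device name such as c0t0d0s3 (controller 0, target 0, disk 0, slice 3) can be pulled apart with POSIX parameter expansion; the name here is illustrative, not a real device:

```shell
dev=c0t0d0s3                       # cC tT dD sS: controller, target, disk, slice
ctrl=${dev%%t*}; ctrl=${ctrl#c}    # text between 'c' and 't' -> controller number
tgt=${dev#*t};   tgt=${tgt%%d*}    # text between 't' and 'd' -> target number
dsk=${dev#*d};   dsk=${dsk%%s*}    # text between 'd' and 's' -> disk number
slc=${dev##*s}                     # text after 's'           -> slice number
echo "controller=$ctrl target=$tgt disk=$dsk slice=$slc"
```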
Physical Device Name:
It uniquely defines the physical location of the hardware device on the system; physical device names are maintained in
the /devices directory.
It contains the hardware information represented as a series of node names (separated by slashes) that indicate the path
through the system's physical device tree to the device.
Instance Names:
It is the abbreviated name assigned by the kernel for each device on the system. It is a shortened name for the physical
device name:
sdn: SCSI Disk
cmdkn: Common Disk Driver, the disk name for SATA disks
dadn: Direct Access Device, the name for IDE disk devices on SPARC systems
atan: Advanced Technology Attachment, the disk name for IDE disk devices on x86 systems
The instance names are recorded in file /etc/path_to_inst.
A few commands for viewing/managing devices:
prtconf command:
It displays system configuration information, including total memory. It lists all possible instances of a device. To list the
instance names of devices attached to the system:
prtconf | grep -v not
format utility:
It displays the physical and logical device names of all the disks.
prtdiag command:
It displays system configuration and diagnostic information.
Performing device reconfiguration:
If a new device is added to the system, device reconfiguration needs to be done in order for the system to recognize it. This
can be done in two ways:
First way:
1. Create a /reconfigure file.
2. Shut down the system using init 5 command.
3. Install the peripheral device.
4. Power on & boot the system.
5. Use format and prtconf command to verify the peripheral device.
Second Way:
Go to the OBP and give the command:
ok> boot -r
which reboots the system and rebuilds the device tree.
devfsadm:
It performs the device reconfiguration process & updates the /etc/path_to_inst file and the /dev & /devices directories.
This command does not require a system reboot, hence it's convenient to use.
To restrict devfsadm to a specific device class, use the following command:
#devfsadm -c device_class
Examples:
#devfsadm
#devfsadm -c disk
#devfsadm -c disk -c tape
To remove the symbolic link and device files for devices that are no longer attached to the system use following
command:
#devfsadm -C
This is also called running in cleanup mode. It prompts devfsadm to invoke cleanup routines that are not normally invoked, to
remove dangling logical links. If -c is also used, devfsadm only cleans up for the listed device classes.
The partition table printed by the format utility shows, for each slice: Part, Tag, Flag, Cylinders, Size and Blocks
(tag 15 = private region).
Defining a Slice on SPARC systems:
1. Run the format utility and select a disk: Type format and select a disk.
2. Display the partition menu: Type partition at the format prompt.
3. Print the partition table: Type print at the partition prompt to display the VTOC
4. Select a slice: Select a slice by entering the slice number.
5. Set tag & flag values:
When prompted for the ID tag, type a question mark (?) and press Enter to list the available choices. Enter the tag name
and press Return.
When prompted for permission flags, type a question mark (?) and press Enter to list the available choices.
wm = write & mountable
wu = write & un-mountable
rm = read-only & mountable
ru = read-only & unmountable
The default flag is wm, press return to accept it.
6. Set the partition size: Enter the starting cylinder and size of the partition.
7. Label the disk: Type label at the partition prompt.
8. Enter q or quit to exit the partition or format utility.
Creating an fdisk partition using the format utility (only for x86/64 systems):
1. Run the format utility and select a disk: Type format and select a disk.
2. Enter the fdisk command at the format menu: If there is no fdisk partition defined, fdisk presents the option to
create a single fdisk partition that uses the entire disk.
Type n to edit the fdisk partition table.
3. To create an fdisk partition, select option 1.
4. Enter the number that selects the type of partition. Select option 1 to create a SOLARIS2 fdisk partition.
5. Enter the percentage of the disk which you want to use.
6. The fdisk menu then prompts whether this should be the active fdisk partition. Only the fdisk partition used to boot the
system should be marked as the active fdisk partition. Because this one is going to be non-bootable, enter no.
Defining a Slice on x86/64 systems:
1. Run the format utility and select a disk: Type format and select a disk.
2. Display the partition menu: Type partition at the format prompt.
3. Print the partition table: Type print at the partition prompt to display the VTOC.
4. Select a slice: Select a slice by entering the slice number.
5. Set tag & flag values:
When prompted for the ID tag, type a question mark (?) and press Enter to list the available choices. Enter the tag name
and press Return.
When prompted for permission flags, type a question mark (?) and press Enter to list the available choices.
wm = write & mountable
wu = write & un-mountable
rm = read-only & mountable
ru = read-only & unmountable
The default flag is wm; press Return to accept it.
6. Set the partition size: Enter the starting cylinder and size of the partition.
7. Label the disk: Type label at the partition prompt.
8. Enter q or quit to exit the partition or format utility.
Note: For removing a slice, the steps are the same as for creating one. The only difference is that you specify the size of
the partition as 0MB.
Viewing the disk VTOC:
There are two methods to view a SPARC or x86/x64 VTOC on a disk:
1. Use the verify command in the format utility:
#format
#format> verify
2. Run prtvtoc command from the command line
#prtvtoc /dev/rdsk/c0t0d0s3
Raw Device: A device which is not formatted and not mounted is called a raw device. It is similar to an unformatted
drive in Windows. It is accessed through /dev/rdsk/<sliceName> (e.g. c0t0d0s3).
Block Device: A device which is formatted and mounted is called a block device.
Working with a raw device: In the previous section we saw how to create a slice or partition. In order to use that
partition, it needs to be formatted using newfs and mounted on a mount point. Going forward we are going to discuss
these concepts.
1. Formatting the raw device using the newfs command:
The newfs command should always be applied to the raw device. It creates the file system and also creates a new
lost+found directory, where recovered (unsaved) data is later placed.
Lets consider we have a raw device c0t0d0s3, which we want to mount.
#newfs /dev/rdsk/c0t0d0s3
To verify the created file system, the following command is used:
# fsck /dev/rdsk/<deviceName>
Once the file system is created, mount the file system.
2. Mounting the device:
It is the process of attaching the file system to a directory under root. The main reason for mounting is
to make the file system available to the user for storing data. If we don't mount the file system it cannot be
accessed. Mounting always uses the block device.
Let's consider we want to mount the raw device /dev/rdsk/c0t0d0s3 on the file system /oracle. The following steps show how
to mount it:
#newfs /dev/rdsk/c0t0d0s3
#mkdir /oracle
#mount /dev/dsk/c0t0d0s3 /oracle
Note: This is temporary and the file system /oracle is un-mounted upon the end of the session. To make it
permanent, we need to update information to /etc/vfstab.
The vfstab is also called the Virtual File System Table:
The /etc/vfstab (Virtual File System table) lists all the file systems to be mounted at system boot time, with the
exception of /etc/mnttab & /var/run. The vfstab contains the following seven fields:
1. device to mount: This is the block device that needs to be mounted. E.g: /dev/dsk/c0t0d0s3
2. device to fsck: This is the raw device that fsck checks during boot. E.g: /dev/rdsk/c0t0d0s3
3. mount point: The directory on which the block device is to be mounted. E.g: /oracle
4. FS type: ufs by default
5. fsck pass:
1- for serial fsck scanning and
2- for parallel scanning of the device during boot process.
6. mount at boot: 'yes' to auto-mount the device on system boot
7. mount options: A dash (-) means the default options are used. Two notable options control large-file support:
'largefiles': The default from Solaris 7 onward. Files are created 'rw' by default and can
be larger than 2 GB.
'nolargefiles': Mimics the behavior of SOLARIS versions earlier than 7. Files cannot
be larger than 2 GB.
A tab or white space is used as the field separator. The dash (-) character is used as a placeholder for fields where a text
argument is not appropriate.
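Putting the seven fields together, an entry for the /oracle example above might look like this (the device names follow the running example; this is a sketch, not a copy of any real vfstab):

```
#device to mount    device to fsck      mount point  FS type  fsck pass  mount at boot  mount options
/dev/dsk/c0t0d0s3   /dev/rdsk/c0t0d0s3  /oracle      ufs      2          yes            -
```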
Note: When we are trying to create, modify or delete a slice, the complete information about the slice is updated
under /etc/format.dat.
/etc/mnttab:
It is an mntfs file system that provides read-only information directly from the kernel about the mounted file systems on the
local host. The mount command creates entries in this file. The fields in /etc/mnttab are as follows:
Device Name: This is the block device where the file system is mounted.
Mount Point: The mount point or directory name where the file system is attached.
File System Type: The type of file system e.g UFS.
Mount options(includes a dev=number): The list of mount option.
Time & date mounted: The time at which the file system was mounted.
Whenever a file system is mounted an entry is created in this table, and whenever a file system is unmounted its entry is
removed from the table. When the mount command is used without any arguments, it lists all the mounted file systems from
/etc/mnttab.
newfs (Explore more!!!):
When we create a file system using the newfs command on a raw device, it sets up many data structures and parameters,
such as the logical block size, fragmentation size and minimum disk free space.
1. Logical Block Size:
- SOLARIS supports logical block sizes between 4096 and 8192 bytes.
- It is recommended to create a UFS file system with a larger logical block size, because a larger block size stores more
data per block.
- Customizing the block size:
#newfs -b 8192 <raw device>
2. Fragmentation Size
- Its main purpose is to increase the performance of the hard disk by organizing data contiguously,
which helps in serving fast read/write requests.
- The default fragmentation size is 1kb.
- By default, fragmentation is enabled in the SOLARIS OS.
3. Minimum Disk Free Space
- It is the percentage of file system space kept in reserve; ordinary users cannot allocate into this reserve.
- The default minimum disk free space before SOLARIS 7 is 8%, whereas from SOLARIS 7 onwards it is auto-calculated
between 6% and 10%.
File system state flags:
FSACTIVE The mounted file system is active; data may be lost if the system is interrupted
FSBAD The file system contains inconsistencies
FSCLEAN The file system was unmounted properly and doesn't need to be checked for
inconsistency
FSLOG The file system has logging enabled
FSSTABLE The file system does not have any inconsistencies, so there is no need to
run the fsck command before mounting it
fsck is a multipass file system check program that performs successive passes over each file system, checking
blocks and sizes, pathnames, connectivity, reference counts, and the map of free blocks (possibly rebuilding it). fsck
also performs cleanup. fsck command fixes the file system in multiple passes as listed below :
Phase 1 : Checks blocks and sizes.
Phase 2 : Checks path names.
Phase 3 : Checks connectivity.
Phase 4 : Checks reference counts.
Phase 5 : Checks cylinder groups.
Note: The File System to be repaired must be inactive before it can be fixed. So it is always advisable to un-mount
the file system before running the fsck command on that file system.
Identifying issues on file systems using fsck:
Type fsck -m /dev/rdsk/c0t0d0s7 and press Enter. The state flag in the superblock of the file system specified is
checked to see whether the file system is clean or requires checking. If we omit the device argument, all the UFS file
systems listed in /etc/vfstab with an fsck pass value of greater than 0 are checked.
In the following example, the first file system needs checking, but the second file system does not:
#fsck -m /dev/rdsk/c0t0d0s7
** /dev/rdsk/c0t0d0s7
ufs fsck: sanity check: /dev/rdsk/c0t0d0s7 needs checking
#fsck -m /dev/rdsk/c0t0d0s8
** /dev/rdsk/c0t0d0s8
ufs fsck: sanity check: /dev/rdsk/c0t0d0s8 okay
Recovering the superblock (when fsck fails to fix it):
1. #newfs -N /dev/rdsk/c0t0d0s7 (the -N option prints the file system parameters, including the backup superblock locations, without creating anything)
2. #fsck -F ufs -o b=32 /dev/rdsk/c0t0d0s7 (repairs using the alternate superblock at block 32)
The syntax for the fsck command is as follows:
#fsck [<options>] [<rawDevice>]
The <rawDevice> is the device interface in /dev/rdsk. If no <rawDevice> is specified, fsck consults the /etc/vfstab file.
The file systems checked are those whose /etc/vfstab entries have:
1. A character-special device in the fsckdev field.
2. A non-zero numeral in the fsckpass field.
The options for the fsck command are as follows:
-F <FSType>: Limit the check to the file systems specified by <FSType>.
-m: Check but do not repair; useful for checking whether the file system is suitable for mounting.
-n | -N: Assume a "no" response to all questions asked during the fsck run.
-y | -Y: Assume a "yes" response to all questions asked during the fsck run.
Steps to run the fsck command:
1. Become superuser.
2. Unmount the file system that needs to be checked for inconsistency.
3. Use the fsck command, specifying the mount point directory or the /dev/rdsk/<deviceName> as an argument to
the command.
4. Any inconsistency messages will be displayed.
5. The fsck command will not necessarily fix all the errors in one run. You may have to run it two or three times until
messages such as the following no longer appear:
"FILE SYSTEM STATE NOT SET TO OKAY" or "FILE SYSTEM MODIFIED"
6. Mount the repaired file system.
7. Move the files and directories in the lost+found directory to their corresponding locations. If you are unable to identify
the files/directories in lost+found, remove them.
Repairing files if boot fails on a SPARC system:
1. Insert the Solaris DVD
Swap Management:
The anonymous memory pages used by processes are placed in the swap area, but unchanged file system pages are not
placed in the swap area. In Solaris 10, the default location for the primary swap is slice 1 of the boot disk,
which, by default, starts at cylinder 0.
Swap files:
They are used to provide additional swap space. This is useful when re-slicing a disk is difficult. Swap files reside on file
systems and are created using the mkfile command.
swapfs file system:
The swapfs file system consists of Swap Slice, Swap files & physical memory(RAM).
Paging:
The transfer of selected memory pages between RAM & swap areas is termed paging. The default page size on
Solaris 10 SPARC machines is 8192 bytes and on x86 machines is 4096 bytes.
Command to display size of a memory page in bytes:
# pagesize
Command to display all supported page sizes:
# pagesize -a
Swapping is the movement of all modified data memory pages associated with a process, between RAM and a disk.
The available swap space must satisfy two criteria:
1. Swap space must be sufficient to supplement physical RAM to meet the needs of concurrently running processes.
2. Swap space must be sufficient to hold crash dump(in a single slice), unless dumpadm(1m) has been used to
specify a dump device outside of swap space.
Configuring Swap space:
Swap area changes made at the command line are not permanent and are lost after a reboot. To permanently add swap
space, create an entry in the /etc/vfstab file. The entry in the /etc/vfstab file is added to the swap space at each reboot.
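A permanent swap entry in /etc/vfstab uses a dash for the fields that do not apply (the slice name c0t0d0s1 here is illustrative, matching the default primary-swap slice mentioned above):

```
#device to mount    device to fsck  mount point  FS type  fsck pass  mount at boot  mount options
/dev/dsk/c0t0d0s1   -               -            swap     -          no             -
```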
Displaying the current swap configuration:
#swap -s
The swap -s output does not take into account the preallocated swap space that has not yet been used by a process.
It displays the output in Kbytes.
Displaying the details of the system's physical swap areas:
#swap -l
It reports the values in 512-byte blocks.
It enables to load new boot program data into PROM using software.
System Configuration Information:
Each Sun system has another important element known as system configuration information.
This information includes the Ethernet (MAC) address, the system host identification number (host ID), and the user-configurable
parameters.
The user-configurable parameters in the system configuration information are called NVRAM (Non-Volatile Random Access Memory)
variables or EEPROM (Electronically Erasable PROM) parameters.
Using these parameters we can:
1. Control POST (power-on self-test)
2. Specify the default boot device
3. Perform other configuration settings
Note: Depending on the system, this configuration information is stored in an NVRAM chip, a
SEEPROM (Serially Electronically Erasable PROM) or a System Configuration Card (SCC).
Older systems used an NVRAM chip, which is located on the main system board and is removable. It contains a
lithium battery to provide battery backup for the configuration information. The battery also provides the system's
time-of-day (TOD) function.
Newer systems use a non-removable SEEPROM chip to store the system configuration information. The chip is
located on the main board and doesn't require a battery.
In addition to the NVRAM and SEEPROM chips, some systems use a removable SCC (System Configuration Card) to
store system configuration information. The SCC is inserted into an SCC reader.
Working of Boot PROM Firmware:
The Boot PROM firmware booting proceeds in following stages:
1. When a system is turned on, it initiates the low-level POST. The low-level POST code is stored in the system's boot
PROM. The POST code tests the most elementary functions of the system.
2. After the low-level POST completes successfully, the boot PROM firmware takes control. It probes memory and
the CPU.
3. Next, the boot PROM probes bus devices and interprets their drivers to build a device tree.
4. After the device tree is built, the boot PROM firmware installs the console.
5. The Boot PROM displays the banner once the system initialization is complete.
Note: The system determines how to boot the OS by checking the parameters stored in the boot PROM and
NVRAM.
Stop key sequences:
They can be used to enable various diagnostic modes. The Stop key sequences affect the OpenBoot PROM and help
define how POST runs when the system is powered on.
Using Stop Key Sequences:
When the system is powered on, use:
1. STOP+D to switch the boot PROM to diagnostic mode. In this mode the variable "diag-switch?" is set to true.
2. STOP+N to reset the NVRAM parameters to their default values. You can release the keys when the LED on the
keyboard starts flashing.
Abort Sequences:
STOP+A puts the system into command-entry mode for the OpenBoot PROM and interrupts any running program.
When the ok prompt is displayed, the system is ready to accept OpenBoot PROM commands.
Disabling the Abort Sequences:
1. Edit /etc/default/kbd and set "KEYBOARD_ABORT=disable" (uncomment the line if it is commented out).
2. Run the command: #kbd -i
Once the abort sequence is disabled, it can only be used during the boot process.
3. Confirm that all the OBP registers are set to zero using the .registers command.
The system is now ready to run any probe command without any problem.
ok> .speed: It displays the speed of the processor.
ok> .enet-addr: It displays the MAC address of the NIC.
ok> .version: It displays the release and version information of the PROM chip.
ok> show-disks: It displays all the connected disks/CD-ROMs.
ok> page: It clears the screen.
ok> watch-net: It displays the NIC status.
ok> test-all: It performs a POST, i.e. a self-test of all the connected devices.
ok> sync: It manually attempts to flush memory and synchronize the file systems.
ok> test <device>: It performs a self-test on the specified device.
Device Tree:
It organizes the devices attached to the system.
It is built by the OpenBoot firmware using the information collected at POST.
Node of the device tree:
1. The topmost node of the device tree is the root device node.
2. Bus nexus nodes follow the root device node.
3. A leaf node (which acts as a controller for an attached device) is connected to a bus nexus node.
Examples:
1. The disk device path of an Ultra workstation with a PCI IDE Bus:
/pci@1f,0/pci@,1/ide@3/dad@0,0
/ -> Root device
pci@1f,0/pci@,1/ide@3 -> Bus devices & controllers
dad@ -> Device type(IDE disk)
0 -> IDE Target address
0 -> Disk number (LUN logical Unit Number)
2. The disk device path of an Ultra workstation with a PCI SCSI Bus:
/pci@1f,0/pci@,1/SUNW,isptwo@4/sd@3,0
/ -> Root device
pci@1f,0/pci@,1/SUNW,isptwo@4 -> Bus devices & controllers
sd -> Device type(SCSI Device)
3 -> SCSI Target address
0 -> Disk number (LUN logical Unit Number)
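As a sketch, the leaf node of a device path like the ones above can be pulled apart with POSIX parameter expansion; the path below is the SCSI example from the text:

```shell
# Split an OpenBoot device path into its parts with parameter expansion.
path='/pci@1f,0/pci@,1/SUNW,isptwo@4/sd@3,0'
leaf=${path##*/}          # last path component: sd@3,0
driver=${leaf%%@*}        # device type: sd
target=${leaf#*@}         # target,lun pair: 3,0
echo "driver=$driver target=$target"
```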
ok> show-devs: Displays the list of all the devices in the OpenBoot device tree.
ok> devalias: It displays the list of defined device aliases on a system.
Device aliases provide short names for longer physical device paths. The alias names are stored in
NVRAMRC (a register area used to store these parameters), which is part of NVRAM.
Creating an alias name for device in Solaris
1. Use the show-disks command to list all the connected disks. Select and copy the location of the disk for which the
alias needs to be created. The partial path provided by the show-disks command is completed by entering the right
target and disk values.
2. Use the following command to create the alias :
nvalias <alias name> <physical path>
The physical path is the location copied in step 1. The alias name can be anything of the user's choice.
ok> devalias boot-device: It displays the current boot-device alias for the system.
ok> nvunalias <alias name>: It removes a device alias.
The /usr/sbin/eeprom command:
It is used to display and change the NVRAM parameters while the Solaris OS is running.
Note: It can only be used by the root user.
e.g. #eeprom -> Lists all the NVRAM parameters.
e.g. #eeprom boot-device -> Lists the value of the boot-device parameter.
e.g. #eeprom boot-device=disk2 -> Changes the boot-device parameter.
e.g. #eeprom auto-boot?=true -> Sets the auto-boot? parameter to true.
e.g. #eeprom auto-boot? -> Lists the value of the auto-boot? parameter.
Interrupting an Unresponsive System:
1. Kill the unresponsive process and then try to reboot the unresponsive system gracefully.
2. If the above step fails, press STOP+A.
3. Use the sync command at the OpenBoot prompt. This command creates a panic in the system and synchronizes
the file systems. Additionally, it creates a crash dump of memory and reboots the system.
Solaris 10 uses SMF (Service Management Facility), which starts services in parallel based on their dependencies.
This allows faster system boot and minimizes dependency conflicts.
SMF contains:
A service configuration repository
A process restarter
Administrative command-line interface (CLI) utilities
Supporting kernel functionality
These features enable Solaris services to:
1. specify requirements for prerequisite services and system facilities.
2. specify identity and privilege requirements for tasks.
3. specify the configuration settings for each service instance.
Phases of the boot process:
The very first boot phase of any system is the hardware and memory test done by the POST (power-on self test)
instructions.
On SPARC machines this is done by the PROM monitor; on x86/x64 machines it is done by the BIOS.
On SPARC machines, if no errors are found during POST and the auto-boot? parameter is set to true, the system
automatically starts the boot process.
On x86/x64 machines, if no errors are found during POST and the timeout value in the /boot/grub/menu.lst file is set
to a positive value, the system automatically starts the boot process.
The boot process is divided into five phases:
Boot PROM Phase
Boot programs Phase
Kernel initialization phase
init phase
svc.startd phase
Note: The first two phases, boot PROM and boot programs, differ between SPARC and x86/x64 systems.
SPARC Boot PROM Phase:
The boot PROM phase on a SPARC system involves following steps:
1. PROM firmware runs POST
2. PROM displays the system identification banner which includes:
Model Type
Keyboard status
PROM revision number
Processor type & speed
Ethernet address
Host ID
Available RAM
NVRAM Serial Number
3. The boot PROM identifies the boot-device PROM parameter.
4. The PROM reads the disk label located at sector 0 of the default boot device.
5. The PROM locates the boot program on the default boot device.
6. The PROM loads the bootblk program into memory.
x86/x64 Boot PROM Phase:
The boot PROM phase on a x86/x64 system involves following steps:
1. The BIOS ROM runs POST and any BIOS extensions in ROMs, and invokes the software interrupt INT 19h,
bootstrap.
2. The handler for the interrupt begins the boot sequence.
3. The processor loads the first sector image into memory. The first sector on a hard disk contains the
master boot block. This block contains the master boot (mboot) program and the FDISK table.
SPARC Boot Program Phase:
The boot Program phase involves following steps:
1. The bootblk program loads the secondary boot program, ufsboot, from the boot device into memory.
2. The ufsboot program locates and loads the kernel.
Run levels and descriptions:
0 - This run level ensures that the system is running the PROM monitor.
s or S - This run level runs in single-user mode with critical file systems mounted and
accessible.
1 - This run level ensures that the system is running in a single-user administrative state,
and it has access to all available file systems.
2 - In this run level the system supports multiuser operations. At this run level, all
system daemons, except the Network File System (NFS) server and some other
network-resource-server-related daemons, are running.
3 - At this run level, the system supports multiuser operations. All system
daemons are running, including NFS resource sharing and other network resource servers.
6 - This is a transitional run level, used when the OS shuts down and the system reboots
to the default run level.
The run control scripts /sbin/rc0, /sbin/rc1, /sbin/rc2, /sbin/rc3, /sbin/rc5, /sbin/rc6, and /sbin/rcS perform functions
such as stopping system services and daemons and starting scripts that perform fast system cleanup. For example,
/sbin/rc0 runs the /etc/rc0.d/K* scripts first and then the /etc/rc0.d/S* scripts.
SMF service states:
degraded - The service instance is enabled, but is running at a limited capacity.
disabled - The service instance is not enabled and is not running.
legacy_run - The legacy service is not managed by SMF, but the service can be
observed. This state is only used by legacy services.
maintenance - The service instance has encountered an error that must be resolved by
the administrator.
offline - The service instance is enabled, but the service is not yet running or
available to run.
online - The service instance is enabled and running or available to run.
uninitialized - This state is the initial state for all services before their configuration has
been read.
With milestones you can group certain services. Thus you don't have to list each service when configuring the
dependencies; you can use a matching milestone containing all the needed services.
Furthermore, you can force the system to boot to a certain milestone. For example, booting a system into single-user
mode is implemented by defining a single-user milestone. When booting into single-user mode, the system just
starts the services of this milestone.
The milestone itself is implemented as a special kind of service. It is an anchor point for dependencies and a
simplification for the administrator.
Types of the milestones:
single-user
multi-user
multi-user-server
network
name-services
sysconfig
devices
SMF Dependencies:Dependencies define the relationships between services. These relationships provide precise
fault containment by restarting only those services that are directly affected by a fault, rather than restarting all of the
services. The dependencies can be services or file systems.
The SMF dependencies refer to the milestones & requirements needed to reach various levels.
The svc.startd daemon:
1. It maintains system services & ensures that the system boots to the milestone specified at boot time.
2. It chooses the built-in milestone "all" if no milestone is specified at boot time. At present, five milestones can be
used at boot time:
none
single-user
multi-user
multi-user-server
all
To boot the system to a specific milestone use following command at OBP:
ok> boot -m milestone=single-user
3. It ensures the proper running, starting & restarting of system services.
4. It retrieves information about services from the repository.
5. It starts the processes for the run level attained.
6. It identifies the required milestone and processes the manifests in the /var/svc/manifest directory.
Service Configuration Repository:
The service configuration repository :
1. stores persistent configuration information as well as SMF runtime data for services.
2. is distributed among local memory and local files.
3. can only be manipulated or queried by using SMF interfaces.
The svccfg command offers a raw view of properties, and is precise about whether the properties are set on the
service or the instance. If you view a service by using the svccfg command, you cannot see instance properties. If
you view the instance instead, you cannot see service properties.
The svcprop command offers a composed view of the instance, where both instance properties and service
properties are combined into a single property namespace. When service instances are started, the composed view
of their properties is used.
All SMF configuration changes can be logged by using the Oracle Solaris auditing framework.
SMF Snapshots:
The data in the service configuration repository includes snapshots, as well as a configuration that can be edited.
Data about each service instance is stored in the snapshots. The standard snapshots are as follows:
initial Taken on the first import of the manifest
running Taken when svcadm refresh is run.
start Taken at the last successful start
The SMF service always executes with the running snapshot. This snapshot is automatically created if it does not
exist.
The svccfg command is used to change current property values. Those values become visible to the service when
the svcadm refresh command is run to integrate them into the running snapshot. The svccfg command can also be
used to view or revert to instance configurations in another snapshot.
svcs command:
1. Listing service:
#svcs <service name>/<Service FMRI>
2. Listing service dependencies:
a. svcs -d <service name>/<Service FMRI>: Displays services on which named service depends.
b. svcs -D <service name>/<Service FMRI>: Displays services that depend on the named service.
3. svcs -x FMRI: Determining why services are not running.
svcadm command:
The svcadm command can be used to change the state of a service (disable/enable/clear).
Example:
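A minimal sketch of typical svcadm invocations, using the standard network/ssh service as the example FMRI. The state-changing commands themselves must be run on a Solaris host; the runnable part below only assembles the FMRI string:

```shell
# Typical svcadm state changes (run these on a Solaris 10 host):
#   svcadm disable svc:/network/ssh:default   # stop the service
#   svcadm enable  svc:/network/ssh:default   # start the service
#   svcadm clear   svc:/network/ssh:default   # clear the maintenance state
# The portable part below only builds the FMRI string.
service='network/ssh'
instance='default'
fmri="svc:/${service}:${instance}"
echo "$fmri"                                  # svc:/network/ssh:default
```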
Understanding GRUB
GRUB (GRand Unified Bootloader, for x86 systems only)
It loads the boot archive (which contains kernel modules and configuration files) into the system's memory.
It has been implemented on x86 systems that are running the Solaris OS.
Some Important Terms before we proceed ahead:
Boot Archive: A collection of important system files required to boot the Solaris OS. The system maintains two boot
archives:
1. Primary boot archive: It is used to boot the Solaris OS on a system.
2. Secondary (failsafe) boot archive: The failsafe archive is used for system recovery if the primary boot archive fails.
It is referred to as Solaris failsafe in the GRUB menu.
Boot loader: The first software program executed after the system is powered on. GRUB requires modification of the
active GRUB menu.lst file for any change in its menu options.
Locating the GRUB Menu:
#bootadm list-menu
The location of the active GRUB menu is /boot/grub/menu.lst.
Edit the menu.lst file to add new OS entries and GRUB console redirection information.
Edit the menu.lst file to modify system behaviour.
GRUB Main Menu Entries:
On installing the Solaris OS, two GRUB menu entries are installed on the system by default:
1. Solaris OS entry: It is used to boot the Solaris OS on a system.
2. miniroot (failsafe) archive: The failsafe archive is used for system recovery if the primary boot archive fails. It is
referred to as Solaris failsafe in the GRUB menu.
Modifying menu.lst:
When the system boots, the GRUB menu is displayed for a specific period of time. If the user does not make a
selection during this period, the system boots automatically using the default boot entry.
The timeout value in the menu.lst file:
1. determines whether the system will boot automatically.
2. prevents the system from booting automatically if the value is specified as -1.
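A hypothetical menu.lst fragment illustrating the entries discussed above; the values shown are examples only, not taken from a real system:

```
# Hypothetical /boot/grub/menu.lst fragment -- values are examples only
default 0        # entry to boot when the user makes no selection
timeout 10       # seconds to wait; -1 prevents automatic booting
title Solaris 10
  kernel /platform/i86pc/multiboot
  module /platform/i86pc/boot_archive
```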
Modifying X86 System Boot Behavior
1. eeprom command: It assigns a different value to a standard set of properties. These values are equivalent to the
SPARC OpenBoot PROM NVRAM variables and are saved in /boot/solaris/bootenv.rc.
2. kernel command: It is used to modify the boot behavior of a system.
3. GRUB menu.lst:
Note:
1. The kernel command settings override the changes made using the eeprom command. However, these changes
are only effective until you boot the system again.
2. The GRUB menu.lst file is not the preferred option because entries in the menu.lst file can be modified during a
software upgrade, and changes made there are lost.
Verifying the kernel in use:
After specifying the kernel to boot using the eeprom or kernel commands, verify the kernel in use with the following
command:
#prtconf -v | grep /platform/i86pc/kernel
GRUB Boot Archives
The GRUB menu in the Solaris OS uses two boot archives:
1. Primary boot archive: It shadows the root (/) file system. It contains all the kernel modules, driver.conf files, and
some configuration files, which are placed in the /etc directory. Before mounting the root file system, the kernel reads
the files from the boot archive. After the root file system is mounted, the kernel removes the boot archive from
memory.
2. Failsafe boot archive: It is self-sufficient and can boot without user intervention. It does not require any
maintenance. By default, the failsafe boot archive is created during installation and stored in /boot/x86.miniroot-safe.
Default Location of primary boot archive: /platform/i86pc/boot_archive
Managing the primary boot archive:
The boot archive :
1. needs to be rebuilt whenever any file in the boot archive is modified.
2. should be rebuilt before the system reboots.
3. can be built using the bootadm command:
#bootadm update-archive -f -R /a
Options of the bootadm command:
-f: forces the boot archive to be updated
-R: enables to provide an alternative root where the boot archive is located.
-n: enables to check the archive content in an update-archive operation, without updating the content.
The boot archive can be rebuilt by booting the system using the failsafe archive.
Booting a system in GRUB-Based boot environment
Booting a System to Run Level 3(Multiuser Level):
To boot a system functioning at run level 0 to run level 3:
1. Reboot the system.
2. Press the Enter key when the GRUB menu appears.
3. Log in as root and verify that the system is running at run level 3 using:
#who -r
Booting a system to run level S (Single-User level):
1. reboot the system
2. type e at the GRUB menu prompt.
3. From the command list, select the "kernel /platform/i86pc/multiboot" boot entry and type e to edit it.
4. Add a space and the -s option at the end of the line, so it reads "kernel /platform/i86pc/multiboot -s", to boot at run
level S.
5. Press Enter to return control to the GRUB main menu.
6. Type b to boot the system to single user level.
7. Verify the system is running at run level S:
#who -r
8. Bring the system back to the multiuser state by using the Ctrl+D key combination.
Booting a system interactively:
1. reboot the system
2. type e at the GRUB menu prompt.
3. From the command list, select the "kernel /platform/i86pc/multiboot" boot entry and type e to edit it.
4. Add a space and the -a option at the end of the line, so it reads "kernel /platform/i86pc/multiboot -a".
5. Press Enter to return control to the GRUB main menu.
6. Type b to boot the system interactively.
Stopping an X86 system:
1. init 0
2. init 6
3. Use reset button or power button.
Booting the failsafe archive for recovery purpose:
1. reboot the system.
2. Press the space bar while the GRUB menu is displayed.
3. Select Solaris failsafe entry and press b.
4. Type y to automatically update an out-of-date boot archive.
5. Select the OS instance on which the read write mount can happen.
6. Type y to mount the selected OS instance on /a.
7. Update the primary archive using following command:
#bootadm update-archive -f -R /a
8. Change directory to root(/): #cd /
9. Reboot the system.
Interrupting an unresponsive system
1. Kill the offending process.
2. Try rebooting system gracefully.
3. Reboot the system by holding down the ctrl+alt+del key sequence on the keyboard.
4. Press the reset button.
5. Power off the system & then power it back on.
Minimum system requirements (Item - Requirement):
Platform -
Memory - Minimum: 64 MB; Recommended: 256 MB; For GUI installation: 384 MB or higher
Swap area - Default: 512 MB
Processor -
Disk space - Minimum: 12 GB
Types of Installation:
1. Interactive Installation
2. Network Installation (boot the client over the network from an install server):
1. Feed the following information into the server where the image of the Solaris installation disk is saved:
Host name
Client machine IP address
Client machine MAC address
2. STOP+A (go to the OBP)
3. ok> boot net - install (It boots from the network and takes the image from the server where the client machine
information was added in step 1.) We will discuss this method of installation in detail in a later section.
3. Flash Archive Installation (replicate the same software and configuration on multiple systems):
1. Copy the image of the machine which needs to be installed. Save the image on a server.
2. Boot the client machine with the Solaris disk and follow the normal interactive installation process.
3. At the stage of installation where it asks you to specify the media, select NFS (Network File System).
4. Mention the server name and the image name in the format below:
200:100:0:1 :/imagename
4. Live Upgrade (upgrade a system while it is running)
5. WAN boot (install multiple systems over a wide area network or the Internet)
6. Solaris 10 Zones (create isolated application environments on the same machine after the original Solaris 10 OS
installation)
Modes of Installation of Solaris 10
1. Text Installer Mode
The Solaris text installer enables you to install interactively by typing information in a terminal
or a console window.
2. Graphical User Interface (GUI) mode
The Solaris GUI installer enables you to interact with the installation program by using graphic elements such as
windows, pull-down menus, buttons, scrollbars, and icons.
Different display options:
Memory - Display Option
64-127 MB - Console-based text installer only
128-383 MB - Windows-based text installer (the GUI installer requires 384 MB or more)
Table : Disk space requirements for installing different Solaris 10 software groups
Software Group - Description - Required Disk Space
2.0GB
2.0GB
6.0GB
Package Naming Convention: The name of a Sun package always begins with the prefix SUNW, as
in SUNWaccr, SUNWadmap, and SUNWcsu. However, the name of a third-party package usually begins with a prefix
that identifies the company in some way, such as the company's stock symbol.
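The prefix convention can be sketched as a small shell check; the classifier function and the non-Sun package name below are illustrative, not real tools or packages:

```shell
# Classify a package name by its prefix, per the naming convention above.
# origin() is a hypothetical helper; ABCfoo is a made-up third-party name.
origin() {
  case "$1" in
    SUNW*) echo "Sun package" ;;
    *)     echo "third-party package" ;;
  esac
}
origin SUNWcsu    # prints: Sun package
origin ABCfoo     # prints: third-party package
```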
When you install Solaris, you install a Solaris software group that contains packages and clusters.
Few take away points:
If you want to use the Solaris 10 installation GUI, boot from the local CD or DVD by issuing the following command at
the ok prompt:
ok boot cdrom
If you want to use the text installer in a desktop session, boot from the local CD or DVD by issuing the following
command at the ok prompt:
ok boot cdrom -text
The -text option is used to override the default GUI installer with the text installer in a desktop session.
If you want to use the text installer in a console session, boot from the local CD or DVD by issuing the following
command at the ok prompt:
ok boot cdrom -nowin
Review the contents of the /a/var/sadm/system/data/upgrade_cleanup file to determine whether you need to make
any corrections to the local modifications that the Solaris installation program could not preserve. This file is used in
upgrade scenarios and has to be checked before the system reboots.
Installation logs are saved in the /var/sadm/system/logs and /var/sadm/install/logs directories.
You can upgrade a Solaris 7 (or higher version) system to Solaris 10.
Installing and Managing Packages in Solaris 10
In Solaris 10 packages are available in two different formats:
File system format: The package is a directory tree containing subdirectories and files.
Data stream format: The package is a single compressed file.
Most packages downloaded from the Internet are in data stream format. We can convert a
package from one format to the other using the pkgtrans command.
To display the installed software distribution group, use the following command:
#cat /var/sadm/system/admin/CLUSTER
CLUSTER=SUNWCall (Entire Distribution, without OEM) or CLUSTER=SUNWCXall (Entire Distribution, with OEM)
To display information about all the installed packages in the OS: #pkginfo
To display information about a specific package: #pkginfo SUNWzsh (SUNWzsh is the package name)
To display the complete information about a specific package: #pkginfo -l SUNWzsh
To Install a package:#pkgadd -d /cdrom/cdrom0/SOLARIS10/product SUNWzsh
-d option specifies the absolute path to the software package.
Spooling a package: Copying the package to the local hard drive instead of installing it.
The default spool location is /var/spool/pkg.
Command for Spooling a package to our customized locations
#pkgadd -d /cdrom/cdrom0/solaris10/product -s <spool dir> <Package Name>
-s option specifies the name of the spool directory where the software package will be spooled
Command for Installing the package from the default spool location
#pkgadd <Package Name>
Command for Installing package from customized spool location
#pkgadd -d <spool dir> <Package Name>
Command for Deleting the package from spool location
#pkgrm -s <spool dir> <Package Name>
Displaying the dependent files used for installing a package in OS
Patch Administration
A patch is a collection of files and directories that may replace or update existing files and
directories of a software product. A patch is identified by its unique patch ID, an alphanumeric
string that consists of a patch base code and a number that represents the patch revision number,
separated by a hyphen (e.g., 107512-10).
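The base-code/revision split can be sketched with shell parameter expansion, using the patch ID from the example above:

```shell
# Split a patch ID into its base code and revision number.
patch='107512-10'
base=${patch%-*}          # part before the hyphen: 107512
rev=${patch#*-}           # part after the hyphen: 10
echo "base=$base rev=$rev"
```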
If the patches you downloaded are in a compressed format, you will need to use the unzip or the tar
command to uncompress them before installing them.
Installing Patches : patchadd command is used to install patches and to find out which patches are
already installed on system.
patchadd [-d] [-G] [-u] [-B <backoutDir>] <source> [<destination>]
-d. Do not back up the files to be patched (changed or removed due to patch installation). When this option
is used, the patch cannot be removed once it has been added. The default is to save (back up) the copy of
all files being updated as a result of patch installation so that the patch can be removed if necessary.
-G. Adds patches to the packages in the current zone only
-u. Turns off file validation. That means that the patch is installed even if some of the files to be patched have
been modified since their original installation.
<source>. Specifies the source from which to retrieve the patch, such as a directory and a patch id.
<destination>. Specifies the destination to which the patch is to be applied. The default destination is the
current system.
The log for the patchadd command is saved into the file : /var/sadm/patch/<patch-ID>/log
The showrev command is meant for displaying the machine, software revision, and patch revision
information. e.g : #showrev -p
Removing Patches : patchrm command can be used to remove (uninstall) a patch and restore the
previously saved files. The command has the following syntax:
patchrm [-f] [-G] -B <backoutDir>] <patchID>
The operand <patchID> specifies the patch ID such as 105754-03. The options are described here:
-f. Forces the patch removal even if the patch was superseded by another patch.
-G. Removes the patch from the packages in the current zone only.
-B <backoutDir>. Specifies the backout directory for a patch to be removed so that the saved files can be restored.
This option is needed only if the backout data has been moved from the directory where it was saved during the
execution of the patchadd command.
For example, the following command removes a patch with patch ID 107512-10 from a standalone system:
#patchrm 107512-10
Function - Definition
f - Specifies the archive file or tape device. The default tape device is /dev/rmt/0. If
the name of the archive file is "-", the tar command reads from standard input when
reading from a tar archive, or writes to standard output when creating a tar
archive.
Example :
#tar cvf files.tar file1 file2
The above example archives file1 & file2 into files.tar.
To create an archive which bundles all the files in the current directory that end with .doc into the alldocs.tar file:
tar cvf alldocs.tar *.doc
Third example, to create a tar file named ravi.tar containing all the files from the /ravi directory (and any of its
subdirectories):
tar cvf ravi.tar ravi/
You can also create tar files on tape drives or floppy disks, like this:
tar cvfM /dev/fd0 panda Archive the files in the panda directory to floppy disk(s).
tar cvf /dev/rmt0 panda Archive the files in the panda directory to the tape drive.
In these examples, the c, v, and f flags mean create a new archive, be verbose (list files being archived), and write
the archive to a file.
To view an archive from a Tape:
#tar tf /dev/rmt/0
To view an archive from a Archive File:
#tar tf ravi.tar
To retrieve archive from a Tape :
#tar xvf /dev/rmt/0
To retrieve archive from a Flash Drive:
#volrmmount -i rmdisk0 #mounts the flash drive
#cd /rmdisk/rmdisk0
#ls
ravi.tar
#cp ravi.tar ~ravi #copies the tar file to user ravi's home dir
#cd ~ravi
#tar xvf ravi.tar #retrieving the archived files
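The create/list/extract cycle above can be tried end to end in a throwaway directory; this sketch works anywhere tar is available:

```shell
# Round-trip: create, list, and extract a tar archive in a temp dir.
dir=$(mktemp -d)
cd "$dir" || exit 1
echo hello > file1
echo world > file2
tar cvf files.tar file1 file2   # c=create, v=verbose, f=archive file
tar tf files.tar                # t=list the archive contents
mkdir restore
cd restore || exit 1
tar xvf ../files.tar            # x=extract into the current directory
cat file1 file2                 # the restored copies
```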
Excluding a particular file from the restore:
Create a file and add the files to be excluded.
#vi excludelist
/moon/a
/moon/b
:wq!
#tar xvfX ravi.tar excludelist
X - excludes the files listed in the exclude file from the extraction.
Disadvantage:
Using tar, we cannot back up a file larger than 2 GB.
The jar command: The jar command is used to combine multiple files into a single archive file and compress it.
Syntax: jar options destination <file names>
Function - Definition
f - Specifies the jar file to process. The jar command sends data to the screen if this
option is not specified.
For decompressing:
gunzip file1.gz #uncompresses file1.gz
Note: It performs the same kind of compression as the compress command but generally produces smaller files.
The gzcat command:
It is used to view files compressed with the gzip or compress commands:
gzcat <file name>
gzcat file.gz
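A runnable sketch of the compress/view/restore cycle; zcat is used for viewing here because, on non-Solaris systems, gzcat is often installed under that name:

```shell
# Compress, view, and restore a file with the gzip family of tools.
f=$(mktemp)
echo "sample data" > "$f"
gzip "$f"            # produces $f.gz and removes the original
zcat "$f.gz"         # view without uncompressing (gzcat on Solaris)
gunzip "$f.gz"       # restores $f and removes $f.gz
cat "$f"             # back to "sample data"
rm -f "$f"
```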
Using zip command: To compress multiple files into a single archive file.
For compressing:
zip target_filename source_filenames
zip file.zip file1 file2 file3
For decompressing :
unzip <zipfile> # unzip the file
unzip -l <zipfile> #list the files in the zip archive.
It adds a .zip extension if no name/extension is given for the zipped file.
Note: The jar command and zip command create files that are compatible with each other. The unzip command can
uncompress a jar file and the jar command can uncompress a zip file.
The following table summarizes the various compression/archiving utilities:
Utility - Compress - View - Uncompress
tar - tar -cvf archivedfile.tar <files> - tar -tvf archivedfile.tar - tar -xvf archivedfile.tar
jar - jar -cvf archivedfile.jar <files> - jar -tvf archivedfile.jar - jar -xvf archivedfile.jar
compress - compress <filename> - zcat filename.Z - uncompress filename.Z (or uncompress -c filename.Z to
standard output)
gzip - gzip <filename> - gzcat filename.gz - gunzip filename.gz
zip - zip file.zip <files> - unzip -l file.zip - unzip file.zip
The ~/.rhosts file:
It provides another authentication procedure to determine whether a remote user can access the local host with the
identity of a local user. This procedure bypasses the password authentication mechanism. Here the .rhosts file
referred to is the remote user's .rhosts file.
If a user's .rhosts file contains a plus (+) character, the user can log in from any known system without
providing a password.
Using the rlogin command: To establish a remote login session.
rlogin <Host Name>
rlogin -l <user name> <host name>
rlogin starts a terminal session on the remote host specified as host. The remote host must be running
a rlogind service (or daemon) for rlogin to connect to. rlogin uses the standard rhosts authorization mechanism.
When no user name is specified either with the -l option or as part of username@hostname, rlogin connects as the
user you are currently logged in as (including either your domain name if you are a domain user or your machine
name if you are not a domain user).
Note: If the remote host contains ~/.rhosts file for the user, the password is not prompted.
Running a program on a remote system:
rsh <host name> command
The rsh command works only if a .rhosts file exists for the user because the rsh command does not prompt for a
password to authenticate new users. We can also provide the IP address instead of host name.
Example: #rsh host1 ls -l /var
Terminating a Process Remotely by Logging on to a another system:
rlogin <host name>
pkill shell
Using Secure Shell (SSH) remote login:
Syntax: ssh [-l <login name>] <host name> | username@hostname [command]
If the system that the user logs in from is listed in /etc/hosts.equiv or /etc/shosts.equiv on the remote system and the
user name is the same on both systems, the user is immediately permitted to log in.
If .rhosts or .shosts exists in the user's home directory on the remote system and contains an entry for the client
system and the user name on that system, the user is permitted to log in.
Note: The above two types of authentication are normally not allowed, as they are not secure.
Using a telnet Command: To log on to a remote system and work in that environment.
telnet <Host Name>
Note: The telnet command always prompts for a password and does not use the ~/.rhosts file.
Using Virtual Network Computing (VNC):
It provides a remote desktop session over the Remote Frame Buffer (RFB) protocol. VNC consists of two components:
1. X VNC server
2. VNC Client for X
Xvnc is an X VNC server that allows sharing a Solaris 10 X Windows session with another Solaris, Linux or Windows
system. Use the vncserver command to start or stop an Xvnc server:
vncserver options
vncviewer is an X VNC client that allows viewing an X Windows session from another Solaris, Linux, or Windows
system on a Solaris 10 system. Use the vncviewer command to establish a connection to an Xvnc server:
vncviewer options host:display#
Summary of the remote commands:
rlogin : To log in to a remote system. Uses the ~/.rhosts file for authentication. Syntax: rlogin <host name>
rsh : To run commands remotely. Requires a ~/.rhosts file, as it never prompts for a password. Syntax: rsh <host name> <command>
telnet : To log in to a remote system. Always prompts for a password. Syntax: telnet <host name>
ssh : To log in and run commands securely. Syntax: ssh [-l <login name>] <host name>
rcp : To copy files from one host to another. It checks for the ~/.rhosts file for access permissions. Syntax: rcp <host name>:<source file> <destination file>, or rcp <host name>:<source file> <host name>:<destination file>
ftp : Remote file transfer. Syntax: ftp <host name>
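A sketch of the two rcp forms. The host and file names are illustrative assumptions, and the commands are only echoed, since rcp needs a valid ~/.rhosts on the remote side:

```shell
# Hypothetical remote source; rcp never prompts for a password, so the remote
# ~/.rhosts file must already grant access.
src="host1:/export/home/ravi/data.txt"

# Copy a remote file to the local machine:
echo "rcp $src /tmp/data.txt"
# Copy directly between two remote hosts:
echo "rcp $src host2:/tmp/data.txt"
```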
User Administration
In Solaris each user requires following details:
1. A unique user name
2. A user ID
3. home directory
4. login shell
5. Group to which the user belongs.
System files used for storing user account information are:
The /etc/passwd file:
It contains login information for authorized system users. Each entry has the following seven fields:
loginID : The user's login name.
x : A placeholder; the encrypted password itself is stored in /etc/shadow.
UID : Unique user ID. The system reserves the values 0 to 99 for system accounts. The UID 60001 is reserved for the nobody account & 60002 is reserved for the noaccess account. UIDs above 60000 should be avoided.
GID : Group ID. The system reserves the values 0 to 99 for system accounts. The GID numbers for users range from 100 to 60000.
comment : Typically the user's full name.
home directory : The full path of the user's home directory.
login shell : The user's default login shell. It can be any one from the list: Bourne shell, Korn shell, C shell, Z shell, BASH shell, TC shell.
Some default system accounts in /etc/passwd:
root (UID 0) : The superuser account, with access to the entire system.
daemon (UID 1) : Umbrella account for routine system daemons.
bin (UID 2) : Administrative account that owns many executables.
sys (UID 3) : Owns many system files.
adm (UID 4) : Owns certain administrative files.
lp (UID 71) : Owns the printing (line printer) files.
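The seven colon-separated fields can be pulled apart with awk. The entry below is a made-up example line, not taken from a real system:

```shell
# A hypothetical /etc/passwd entry: login:x:UID:GID:comment:home:shell
entry='ravi:x:101:10:Ravi Ranjan:/export/home/ravi:/bin/ksh'

login=$(echo "$entry" | awk -F: '{print $1}')
uid=$(echo "$entry"   | awk -F: '{print $3}')
gid=$(echo "$entry"   | awk -F: '{print $4}')
shell=$(echo "$entry" | awk -F: '{print $7}')

echo "login=$login uid=$uid gid=$gid shell=$shell"
```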
The /etc/shadow file:
It stores the encrypted passwords and password-aging information. Its fields are:
loginID : The user's login name.
password : The 13-character encrypted password.
lastchg : Number of days between 1st January 1970 & the last password modification date.
min : Minimum number of days to pass before you can change the password.
max : Maximum number of days the password remains valid before it must be changed.
warn : The number of days prior to password expiry that the user is warned.
inactive : The number of inactive days allowed for the user before the user account is locked.
expire : The number of days after which the user account expires. The number of days is counted since 1st Jan 1970.
flag : Reserved field (used in part to record failed login attempts).
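As a sketch, the aging fields of a shadow entry can be extracted the same way. The entry below uses made-up values:

```shell
# A hypothetical /etc/shadow entry:
# login:password:lastchg:min:max:warn:inactive:expire:flag
entry='ravi:aBcDeFgHiJkLm:14532:7:84:14:30::'

lastchg=$(echo "$entry" | awk -F: '{print $3}')
min=$(echo "$entry"     | awk -F: '{print $4}')
max=$(echo "$entry"     | awk -F: '{print $5}')
warn=$(echo "$entry"    | awk -F: '{print $6}')

echo "lastchg=$lastchg min=$min max=$max warn=$warn"
```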
The /etc/group file:
It stores group information. Each entry has four fields:
groupname : The name of the group.
grouppassword : Usually empty or a placeholder; rarely used.
GID : The group's numerical ID.
usernamelist : A comma-separated list of the group's members.
The /etc/default/passwd file:
It sets system-wide password-aging defaults through variables such as:
MAXWEEKS : Maximum number of weeks a password remains valid.
MINWEEKS : Minimum number of weeks before a password may be changed.
PASSLENGTH : Minimum password length.
WARNWEEKS : Number of weeks before expiry that the user is warned.
NAMECHECK=NO : Disables checking the password against the login name.
DICTIONLIST= : Comma-separated list of dictionary files used for password checking.
Password Management:
The pam_unix_auth module is responsible for password authentication in Solaris. To configure locking of a user account
after a specified number of failed attempts, the following parameters are modified:
1. the LOCK_AFTER_RETRIES tunable parameter in the /etc/security/policy.conf file, &
2. the lock_after-retries key in the /etc/user_attr file.
Note: The LOCK_AFTER_RETRIES parameter specifies whether the user account is locked after too many failed login
attempts. The number of allowed attempts is defined by the RETRIES parameter in the /etc/default/login file.
passwd command:
The passwd command is used to set the password for the user account.
syntax:
#passwd <options> <user name>
Various options used with the passwd command are described below:
-s : Shows password attributes for a particular user. When used with the -a option, attributes for all user accounts are displayed.
-d : Deletes the password for name and unlocks the account. The login is no longer prompted for a password.
-f : Forces the user to change the password at the next login by expiring the password.
-N : Makes the password entry for <name> a value that cannot be used for login but does not lock the account. It is used to create a password entry for a non-login account (e.g. accounts for running cron jobs).
-u : Unlocks a locked account.
useradd command:
The useradd command creates a new user account.
Syntax:
#useradd <options> <user name>
-c <comment> : A short description of the login, typically the user's name and phone extension. This string can be up to 256 characters.
-d <directory> : Specifies the home directory of the new user. This string is limited to 1,024 characters.
-g <group> : Specifies the user's primary group.
-G <group> : Specifies the user's secondary group(s).
-n <login> : The login name.
-s <shell> : Specifies the user's login shell.
-u <uid> : Specifies the user ID of the user you want to add. If you do not specify this option, the system assigns the next available unique UID greater than 100.
-m : Creates the home directory if it does not already exist.
Note: When a user account is created with the useradd command it is locked; it must be unlocked and its password
set using the passwd command.
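A minimal sketch of the create-then-set-password sequence. The login name, UID, group and paths are illustrative assumptions, and the commands are shown as strings and comments rather than run, since they require root on a Solaris system:

```shell
# Hypothetical account details for illustration only.
cmd="useradd -u 120 -g staff -d /export/home/raj -m -s /bin/ksh raj"
echo "$cmd"
# Followed by:
#   passwd raj    # setting the password also unlocks the new account
```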
Modifying a user account:
Modifying a user id: # usermod -u <New User ID> <User Name>
Modifying a primary group: #usermod -g <New Primary Group> <User Name>
Modifying a secondary group: #usermod -G <New Secondary Group> <User Name>
In similar manner we can modify other user related information.
Deleting a user account:
#userdel <user name> user's home directory is not deleted
#userdel -r <user name> user's home directory is deleted
Locking a User Account:
# passwd -l <user name>
Unlock a User Account:
#passwd -u <user name>
Note: uid=0 identifies the superuser (the administrator having all privileges). By default root has uid = 0. This is the
only user ID that is normally duplicated, and it can be duplicated only with the -o option.
For example:
1. #useradd -u 0 -o <user name>
2. #usermod -u 0 -o <user name>
Here option -o is used to duplicate the user id 0.
smuser command:
This command is used for remote management of user accounts.
Example: If you want to add a user raviranjan in nis domain office.com on system MainPC use smuser command as
follows:
# /usr/sadm/bin/smuser add -D nis:/MainPC/office.com -- -u 111 -n raviranjan
The subcommands used with the smuser command:
add : To add a user account.
modify : To modify a user account.
delete : To delete a user account.
list : To list user accounts.
-c <comment>
A short description of the login, typically the user's name and phone
extension. This string can be up to 256 characters.
-d <directory>
Specifies the home directory of the new user. This string is limited to
1,024 characters.
-g <group>
-G <group>
-n <login>
-s <shell>
-u <uid>
Specifies the user ID of the user you want to add. If you do not specify
this option, the system assigns the next available unique UID greater
than 100.
-x autohome=Y|N : Sets the home directory to automount if set to Y.
smgroup command:
This command is used for remote management of groups.
Example: If you want to add a group admin in nis domain office.com on system MainPC use smgroup command as
follows:
#/usr/sadm/bin/smgroup add -D nis:/MainPC/office.com -- -g 101 -n admin
The subcommands used with smgroup command:
add : To add a group.
modify : To modify a group.
delete : To delete a group.
list : To list groups.
Note: The use of these subcommands requires authorization with the Solaris Management Console server. The Solaris
Management Console also needs to be initialized.
Managing Groups:
There are two groups related to a user account:
1. Primary Group: Each user has exactly one primary group.
2. Secondary Group: A user can be a member of up to 15 secondary groups.
Adding a group
#groupadd <groupname>
#groupadd -g <groupid> <groupname>
The group id is updated under /etc/group.
#vi /etc/group
ss2::645
Note: Here ss2 is group name and 645 is group id.
Modifying a group
By group ID: #groupmod -g <New Group ID> <Old Group Name>
By group Name: #groupmod -n <New Group Name> <Old Group Name>
Note:
Every group has a name and an ID (the ID is what the kernel uses). Group IDs 0-99 are system defined by
default.
The complete information about the group is stored under /etc/group file.
Deleting a group
# groupdel <group name>
Variables for customizing a user session:
LOGNAME (set by login) : The user's login name.
HOME (set by login) : The path of the user's home directory.
SHELL (set by login) : The path of the user's default shell.
PATH (set by login) : The default command search path.
TERM (set by login) : The terminal type.
PWD (set by shell) : The current working directory.
PS1 (set by shell) : The Bourne/Korn shell prompt string.
prompt (set by shell) : The C shell prompt string.
To set a variable in the Bourne/Korn shell:
VARIABLE=value;export VARIABLE
eg: #PS1="$HOSTNAME";export PS1
Finger Command:
By default, the finger command displays in multi-column format the following information about each logged-in user:
user name
user's full name
terminal name(prepended with a '*' (asterisk) if write-permission is denied)
idle time
login time
host name, if logged in remotely
Syntax:
finger [ -bfhilmpqsw ] [ username... ]
finger [ -l ] [ username@hostname1[@hostname2...@hostnamen] ... ]
finger [ -l ] [ @hostname1[@hostname2...@hostnamen] ... ]
Options:
-b Suppress printing the user's home directory and shell in a long format printout.
-f Suppress printing the header that is normally printed in a non-long format printout.
-h Suppress printing of the .project file in a long format printout.
-i Force "idle" output format, which is similar to short format except that only the login name, terminal, login time, and
idle time are printed.
-l Force long output format.
-m Match arguments only on user name (not first or last name).
-p Suppress printing of the .plan file in a long format printout.
-q Force quick output format, which is similar to short format except that only the login name, terminal, and login
time are printed.
-s Force short output format.
-w Suppress printing the full name in a short format printout.
Note: The username@hostname form supports only the -l option.
last command:
The output of this command is very long and contains information about all the users. We can use the last command
in the following ways:
1. To display only the first n lines of the last command's output:
#last -n 10
2. Login information specific to a user:
#last <user name>
SULOG=/var/adm/sulog
The sulog file lists all uses of the su command, not only the su attempts that are used to switch from user to
superuser. The entries show the date and time the command was entered, whether or not the attempt was successful
(+ or -), the port from which the command was issued, and finally, the name of the user and the switched identity.
The CONSOLE parameter in the /etc/default/su file names the device to which all attempts to switch user are
logged:
CONSOLE=/dev/console
By default this option is commented.
Controlling system Access:
1. /etc/default/login: CONSOLE variable: This parameter can be used to restrict root logins. Setting
CONSOLE=/dev/console allows the root user to log in from the system console only; remote login as root is then not
possible. However, if the CONSOLE parameter is commented out or not defined, the root user can log in to the device
from any other system on the network.
PASSREQ: If set to YES, forces user to enter the password when they login for first time. This is applicable for the
user account with no password.
2. /etc/default/passwd:
It is the centralized password-aging file for all normal users. Any change made to this file automatically applies to all
users.
3. /etc/nologin:
This file restricts all normal users from accessing the server. By default this file does not
exist.
To restrict all normal users from login:
#touch /etc/nologin
#vi /etc/nologin
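A safe sketch of the procedure above, writing the nologin file into a scratch root rather than the real /etc; the message text is an assumption:

```shell
# Create a scratch root so this example never touches the real system.
root=$(mktemp -d)
mkdir -p "$root/etc"

# Any message placed in /etc/nologin is shown to users whose login is refused.
printf 'System down for maintenance until 18:00.\n' > "$root/etc/nologin"

cat "$root/etc/nologin"
```

On a real system the same two commands would target /etc/nologin directly, and removing the file re-enables normal logins.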
NOTE: Using setuid permissions with the reserved UIDs (0-99) from a program may not set the effective UID
correctly. Instead, use a shell script to avoid using the reserved UIDs with setuid permissions.
You set setuid permissions by using the chmod command to assign the octal value 4 as the first number in a series of
four octal values. Use the following steps to set setuid permissions:
1. If you are not the owner of the file or directory, become superuser.
2. Type chmod <4nnn> <filename> and press Return.
3. Type ls -l <filename> and press Return to verify that the permissions of the file have changed.
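Steps 2 and 3 can be tried safely on a scratch file; the 's' in the owner-execute position of the ls -l output confirms that the setuid bit took effect:

```shell
# Work on a throwaway file instead of a real program.
f=$(mktemp)

# Step 2: octal 4 in the first position sets the setuid bit (here 4755).
chmod 4755 "$f"

# Step 3: verify; the owner triplet now reads rws instead of rwx.
perms=$(ls -l "$f" | awk '{print $1}')
echo "$perms"

rm -f "$f"
```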
When setgid permission is applied to a directory, files subsequently created in the directory belong to the group the
directory belongs to, not to the group the creating process belongs to. Any user who has write permission in the
directory can create a file there; however, the file does not belong to the group of the user, but instead belongs to the
group of the directory.
You can set setgid permissions by using the chmod command to assign the octal value 2 as the first number in a
series of four octal values. Use the following steps to set setgid permissions:
1. If you are not the owner of the file or directory, become superuser.
2. Type chmod <2nnn> <filename> and press Return.
3. Type ls -l <filename> and press Return to verify that the permissions of the file have changed.
The following example sets setgid permission on myfile:
#chmod 2551 myfile
#ls -l myfile
-r-xr-s--x 1 ravi admin 26876 Jul 15 21:23 myfile
#
Sticky Bit
The sticky bit on a directory is a permission bit that protects files within that directory. If the directory has the sticky bit
set, only the owner of the file, the owner of the directory, or root can delete the file. The sticky bit prevents a user from
deleting other users' files from public directories, such as uucppublic:
# ls -l /var/spool/uucppublic
drwxrwxrwt 2 uucp uucp
When you set up a public directory on a TMPFS temporary file system, make sure that you set the sticky bit manually.
You can set sticky bit permissions by using the chmod command to assign the octal value 1 as the first number in a
series of four octal values. Use the following steps to set the sticky bit on a directory:
1. If you are not the owner of the file or directory, become superuser.
2. Type chmod <1nnn> <filename> and press Return.
3. Type ls -l <filename> and press Return to verify that the permissions of the file have changed.
The following example sets the sticky bit permission on the pubdir directory:
# chmod 1777 pubdir
# ls -l pubdir
drwxrwxrwt 2 winsor staff
system. The following table shows an example of file entries for Ethernet interfaces commonly found in Solaris
systems:
/etc/hostname.e1000g0 : First e1000g (Intel PRO/1000 Gigabit family device driver) Ethernet interface in the system
/etc/hostname.bge0 : First bge (Broadcom Gigabit Ethernet device driver) Ethernet interface in the system
/etc/hostname.bge1 : Second bge Ethernet interface in the system
/etc/hostname.ce0 : First ce (Cassini Gigabit Ethernet device driver) Ethernet interface in the system
/etc/hostname.qfe0 : First qfe (Quad Fast-Ethernet device driver) Ethernet interface in the system
/etc/hostname.hme0 : First hme (Fast-Ethernet device driver) Ethernet interface in the system
/etc/hostname.eri0 : First eri (eri Fast-Ethernet device driver) Ethernet interface in the system
/etc/hostname.nge0 : First nge (Nvidia Gigabit Ethernet device driver) Ethernet interface in the system
The /etc/hostname.xxn files contain either the host name or the IP address of the system that contains the xxn
interface.
The host name must be present in the /etc/inet/hosts file so that it can be resolved to an IP address at system boot.
Example:
# cat /etc/hostname.ce0
Computer1 netmask + broadcast + up
/etc/inet/hosts file:
This file associates the IP addresses of hosts with their names. It can be used with, or instead of, other
hosts databases, including DNS, the NIS hosts map & the NIS+ hosts table.
The /etc/inet/hosts file contains at least the loopback & host information. It has one entry for each IP address of each
host. The entries in the files are in following format:
<IP address> <Host name> [aliases]
127.0.0.1 localhost
/etc/inet/ipnodes file:
It is a local database or file that associates the names of nodes with their IP addresses. It is a symbolic link to the
/etc/inet/hosts file. The ipnodes file can be used in conjunction with, or instead of, other ipnodes databases, including
the DNS, the NIS ipnodes map, and LDAP.
The format of each line is:
<IP address> <Host Name> [alias]
# internet host table
::1 localhost
127.0.0.1 localhost
10.21.108.254 system1
Changing the System Host Name:
The system host name appears in four system files; we must modify all of them and perform a reboot to change the
system host name:
/etc/nodename
/etc/hostname.xxn
/etc/inet/hosts
/etc/inet/ipnodes
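A sketch of editing the four files, written into a scratch root so it is safe to run anywhere. The new host name `newname`, the ce0 interface, and the IP address are assumptions:

```shell
# Scratch root stands in for / so the real system files stay untouched.
root=$(mktemp -d)
mkdir -p "$root/etc/inet"

echo newname                 >  "$root/etc/nodename"
echo newname                 >  "$root/etc/hostname.ce0"
echo '10.21.108.254 newname' >> "$root/etc/inet/hosts"
echo '10.21.108.254 newname' >> "$root/etc/inet/ipnodes"

cat "$root/etc/nodename"
```

On a real system the same entries go into /etc directly, followed by a reboot to apply the change.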
sys-unconfig Command:
The /usr/sbin/sys-unconfig command is used to restore a system configuration to an unconfigured state. This
command does the following:
1. It saves the current /etc/inet/hosts files information in the /etc/inet/hosts.saved file.
2. It saves the /etc/vfstab files to the /etc/vfstab.orig file if the current /etc/vfstab file contains NFS mount entries.
3. It restores the default /etc/inet/hosts file.
NETSTAT:
It lists the connections for all protocols and address families to and from the machine.
The address families (AF) include:
INET : IPv4
INET6 : IPv6
UNIX : Unix domain sockets (Solaris/FreeBSD/Linux etc.)
Protocols supported in INET/INET6 are:
TCP, IP, ICMP(PING), IGMP, RAWIP, UDP(DHCP, TFTP)
NETSTAT also lists:
1. routing tables,
2. any multicast entry for a NIC,
3. DHCP status for various interfaces,
4. the net-to-media (MAC) table.
Usage:
# netstat
UDP: Ipv4
Local Address Remote Address State
-------------------- -------------------- ----------
System1.bge0.54844 10.95.8.202.domain Connected
System1.bge0.54845 10.95.8.213.domain Connected
TCP: Ipv4
Local Address Remote Address Swind Send-Q Rwind Recv-Q State
-------------------- -------------------- ----- ------ ----- ------ -----------
localhost.41771 localhost.3306 49152 0 49152 0 ESTABLISHED
localhost.3306 localhost.41771 49152 0 49152 0 ESTABLISHED
localhost.50230 localhost.3306 49152 0 49152 0 CLOSE_WAIT
localhost.50231 localhost.3306 49152 0 49152 0 CLOSE_WAIT
Note: NETSTAT returns sockets by protocol using /etc/services lookup. Below example gives detailed information
about the /etc/services files.
# ls -ltr /etc/services
lrwxrwxrwx 1 root root 15 Apr 8 2009 /etc/services -> ./inet/services (a soft link to /etc/inet/services)
The example below shows the content of the /etc/services file. Its columns represent the network service, port number,
and protocol.
# less /etc/services
#
# Copyright 2008 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
#ident "@(#)services 1.34 08/11/19 SMI"
#
# Network services, Internet style
#
tcpmux 1/tcp
echo 7/tcp
echo 7/udp
discard 9/tcp sink null
discard 9/udp sink null
systat 11/tcp users
daytime 13/tcp
daytime 13/udp
netstat 15/tcp
Note: The NETSTAT command resolves host names with the help of the local /etc/hosts file or a DNS server. The
important file /etc/resolv.conf tells the resolver which lookup facilities (such as LDAP, DNS, or files) to use.
/etc/nsswitch.conf is consulted by netstat to resolve names for IPs.
/etc/resolv.conf:
# cat /etc/resolv.conf
domain WorkDomain
nameserver 10.95.8.202
nameserver 10.95.8.213
/etc/hosts file:
# cat /etc/hosts
127.0.0.1 localhost
172.30.228.58 mysystem.bge0 bge0
172.30.228.58 mysystem loghost
The command netstat -a dumps all connections, including name lookups from /etc/services directly. It returns all
protocols for all address families (TCP/UDP/UNIX).
#netstat -a
UDP: Ipv4
Local Address Remote Address State
-------------------- -------------------- ----------
*.snmpd Idle
*.55466 Idle
System1.bge0.55381 10.95.8.202.domain Connected
System1-prod.bge0.55382 10.95.8.213.domain Connected
*.32859 Idle
#netstat -an :
The -n option disables name resolution of hosts and ports and speeds up the output.
#netstat -i:
returns state of configured interfaces.
# netstat -i
Name Mtu Net/Dest Address Ipkts Ierrs Opkts Oerrs Collis Queue
lo0 8232 loopback localhost 1498672734 0 1498672734 0 0 0
nge0 1500 System1.bge0 System1.bge0 1081897064 0 1114394170 6 0 0
#netstat -m :
It returns STREAMS statistics:
streams allocation:
cumulative allocation
current maximum total failures
streams 408 4350 28881897 0
queues 841 4764 43912097 0
mblk 7062 40068 780613980 0
dblk 7062 45999 4815973363 0
linkblk 5 84 6 0
syncq 17 75 58511 0
qband 0 0 0 0
2469 Kbytes allocated for streams data
#netstat -p :
It returns net to media information(MAC/layer-2 information).
Net to Media Table: Ipv4
Device IP Address Mask Flags Phys Addr
------ -------------------- --------------- -------- ---------------
nge0 defaultrouter 255.255.255.255 00:50:5a:1e:e4:01
nge0 172.30.228.54 255.255.255.255 00:14:4f:6f:39:13
nge0 172.30.228.52 255.255.255.255 o 00:14:4f:7e:97:53
nge0 172.30.228.53 255.255.255.255 o 00:14:4f:6f:4f:75
nge0 172.30.228.49 255.255.255.255 00:1e:68:86:84:16
nge0 System1.bge0 255.255.255.255 SPLA 00:21:28:70:19:36
nge0 System2 255.255.255.255 o 00:21:28:6b:c6:7a
nge0 172.30.228.57 255.255.255.255 SPLA 00:21:28:70:19:36
nge0 224.0.0.0 240.0.0.0 SM 01:00:5e:00:00:00
#netstat -P <protocol> (ip|ipv6|icmp|icmpv6|tcp|udp|rawip|raw|igmp): returns active sockets for selected protocol.
Network Configuration
There are two main configuration modes:
1. Local files : the configuration is defined statically via key files
2. Network configuration : DHCP is used to auto-configure interfaces
dladm command: It is used to determine the physical interfaces:
dladm show-dev or dladm show-link
Another command for the same purpose is ifconfig -a; however, the outputs differ:
dladm shows layer-1 related information, whereas the ifconfig command returns layer-2 & 3 related information.
# dladm show-dev
ce0
link: unknown speed: 1000 Mbps
duplex: full
ce1
link: unknown speed: 1000 Mbps
duplex: full
ge0
link: unknown speed: 1000 Mbps
duplex: unknown
eri0
link: unknown speed: 100 Mbps
duplex: full
# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
ce0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 6
inet 10.22.213.80 netmask ffffff00 broadcast 10.22.213.255
ether 0:14:4f:67:90:c1
ce1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 10.22.217.35 netmask ffffff00 broadcast 10.22.217.255
ether 0:14:4f:44:4:50
eri0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 10.22.224.147 netmask ffffff00 broadcast 10.22.224.255
ether 0:14:4f:47:92:5e
ge0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 5
inet 10.22.240.108 netmask ffffff00 broadcast 10.22.240.255
ether 0:14:4f:47:92:5f
Key network configuration files:
svcs -a | grep physical : This command can be used to see the service responsible for running/starting the physical
interfaces.
svcs -a | grep loopback : This command can be used to see the service responsible for running/starting the local
loopback interface.
Configuring Network
1. IP Address( /etc/hostname.interface): We need to configure /etc/hostname.interface(e.g /etc/hostname.e1000g0,
/etc/hostname.iprb01) for each physical and virtual interface listed by the dladm command. The IP address must be
listed in this file. However this is not a requirement in DHCP or network configuration mode.
2. Domain name( /etc/defaultdomain): We need to configure /etc/defaultdomain. This is not a requirement in case
of DHCP mode of network configuration. This contains domain name information for the host.
3.Netmask(/etc/inet/netmasks): We need to create a files /etc/inet/netmasks if not there. This is also managed by
DHCP. The netmasks file associates Internet Protocol (IP) address masks with IP network numbers.
network-number netmask
The term network-number refers to a number obtained from the Internet Network Information Center. Both the
network-number and the netmasks are specified in "decimal dot" notation, e.g: 128.32.0.0 255.255.255.0
4. Hosts database(/etc/hosts): It is symbolically linked with /etc/inet/hosts, contains the entry for the loopback
adapter and for each IP address linked with the network adapter for name resolution. It gets auto configured by
DHCP.
5. Client DNS resolver file(/etc/resolv.conf): It reveals dns resolver related information. It gets auto configured by
DHCP.
6. Default gateway(/etc/defaultrouter): It is required for communicating with outside network. It is also managed by
DHCP under network configuration mode.
7. Node name(/etc/nodename): This file contains the host name and is not mandatory as the host name is resolved
by the /etc/hosts file. This is taken care by DHCP in network configuration.
Name service configuration file(/etc/nsswitch.conf): It will reveal resolution of various objects.
For manually configuring the network from DHCP to local files (static) mode, the above mentioned files need to be
configured as stated. Once that is done, move/rename/delete the file dhcp.<interfacename>, so that the DHCP agent
is not invoked.
Plumb (enable) the iprb0 100 Mbps interface (plumbing an interface is analogous to enabling it):
1. ifconfig iprb0 plumb up : enables the iprb0 interface.
2. ifconfig iprb0 172.16.20.10 netmask 255.255.255.0 : assigns the layer-3 IPv4 address.
3. Ensure that the newly plumbed interface persists across reboots:
1. Creating a file /etc/hostname.interface: echo 172.16.20.10 > /etc/hostname.<interfacename>
2. Create an entry in /etc/hosts file:
echo 172.16.20.10 NewHostName >> /etc/hosts
3. Create an entry in file /etc/inet/netmasks
echo 172.16.20.0 255.255.255.0 >> /etc/inet/netmasks
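The three persistence steps above can be sketched as follows, using a scratch root instead of the real /etc; the interface name iprb0 and the addresses come from the example above:

```shell
# Scratch root stands in for / so the real configuration stays untouched.
root=$(mktemp -d)
mkdir -p "$root/etc/inet"

# 1. Interface address file, 2. hosts entry, 3. netmask entry.
echo '172.16.20.10'              >  "$root/etc/hostname.iprb0"
echo '172.16.20.10 NewHostName'  >> "$root/etc/hosts"
echo '172.16.20.0 255.255.255.0' >> "$root/etc/inet/netmasks"

cat "$root/etc/hostname.iprb0"
```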
Note: If you want the interface to be managed DHCP, create a file dhcp.<interfacename> under /etc directory.
Logical (Sub-interface) Network Interfaces: Many logical interfaces can be created on each physical interface
connected to a switch port. This means adding additional IP addresses to a physical interface.
1. Use ifconfig <interfacename> addif <ip address> <net mask>:
ifconfig e1000g0 addif 192.168.1.51 (RFC-1918 defaults /24)
This automatically creates the e1000g0:1 logical interface.
2. Bring the interface up: ifconfig e1000g0:1 up
Note:
1. This automatically creates an e1000g0:1 logical interface.
2. Solaris places a new logical interface in down mode by default.
3. Logical/sub-interfaces are contingent upon the physical interface: if the physical interface is down, the logical
interface is also down.
4. Connections are sourced using the IP address of the physical interface.
To make a logical/sub-interface persist across reboots:
1. Create the file /etc/hostname.<interfacename> containing the interface's IP address.
2. Optionally update the /etc/hosts file.
3. Optionally update the /etc/inet/netmasks file when subnetting.
NSSWITCH.CONF (/etc/nsswitch.conf): It stores primarily name-service configuration information.
It functions as a policy/rules file for various resolutions, namely: DNS, passwd (/etc/passwd, /etc/shadow),
group (/etc/group), protocols (/etc/inet/protocols), ethers or MAC-to-IP mappings, and where to look for host resolution.
The figure below shows a sample nsswitch.conf file.
In the above nsswitch.conf file, the password and group resolution is set to files, which means the system checks the
local files /etc/shadow and /etc/passwd. For host name resolution, which is set to files, the hosts file (/etc/hosts) is
checked first, and if that fails the query is sent to the appropriate DNS server.
The ntpq command, run without options, enters interactive mode; typing help in that mode lists the various operations that can be performed.
The command ntptrace: Traces path to the time source. If we run it without any option it will default to local system.
The command ntptrace <ServerName> gives the path and stratum details from the server mentioned to the local
system.
NTP Server configuration:
1. We need to find an NTP pool site such as http://www.ntp.org/ and derive a list of public NTP servers from it.
2. Once the list is derived, we need to make its entries in the file /etc/inet/ntp.conf as shown below:
server 0.asia.pool.ntp.org
server 1.asia.pool.ntp.org
server 2.asia.pool.ntp.org
server 3.asia.pool.ntp.org
3. Restart the NTP service: svcadm restart ntp.
4. Making our NTP client machine an NTP server:
1. Go to /etc/inet: cd /etc/inet
2. Disable the NTP service: svcadm disable ntp
3. Copy the file ntp.server to ntp.conf: cp ntp.server ntp.conf
4. Edit ntp.conf file: Make an entry into the file with the servers list obtained from the NTP pool site and local server.
5. Comment the crontab entry for the ntpdate command.
1. crontab -e
2. Comment the line where ntpdate command is run.
6. Enable the NTP service: svcadm enable ntp
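Step 2 of the server configuration can be sketched by generating the server list into a scratch file; the pool host names are the ones quoted above, and the scratch file stands in for /etc/inet/ntp.conf:

```shell
# Scratch file stands in for /etc/inet/ntp.conf.
conf=$(mktemp)

# One "server" line per pool host, as in the list above.
for n in 0 1 2 3; do
    echo "server $n.asia.pool.ntp.org" >> "$conf"
done

cat "$conf"
```

On a real system the same lines go into /etc/inet/ntp.conf, followed by svcadm restart ntp.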
NFS Benefits:
1. It enables file system sharing on network across different systems.
2. It can be implemented across different OS.
3. Working with an NFS file system is as easy as working with a locally mounted file system.
NFS component include:
1. NFS Client: It mounts the file resource shared across the network by the NFS server.
2. NFS Server: It contains the file system that has to be shared across the network.
3. Auto FS
Managing NFS Server:
We use NFS server files, NFS server daemons & NFS server commands to configure and manage an NFS server.
To support NFS server activities we need following files:
/etc/dfs/dfstab : Lists the local resources to share at boot time. This file contains the commands that share local directories. Each line of the dfstab file consists of a share command, e.g: share [-F fstype] [-o options] [-d "text"] <file system to be shared>
/etc/dfs/sharetab : Lists the local resources currently being shared by the NFS server. Do not edit this file.
/etc/dfs/fstypes : Lists the default file system types for remote file systems.
/etc/rmtab : Lists the file systems remotely mounted by NFS clients. Do not edit this file. E.g: system1:/export/sharedir1
/etc/nfs/nfslog.conf : Lists the information defining the local configuration logs used for NFS server logging.
/etc/default/nfslogd : Lists the configuration information describing the behavior of the nfslogd daemon for NFSv2/3.
/etc/default/nfs : Contains parameters for the NFS daemons, such as the minimum and maximum NFS protocol versions.
Note: If the svc:/network/nfs/server service does not find any share command in the /etc/dfs/dfstab file, it does not
start the NFS server daemons.
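A sketch of adding a dfstab entry, using a scratch file instead of the real /etc/dfs/dfstab; the share options and path are illustrative assumptions:

```shell
# Scratch file stands in for /etc/dfs/dfstab.
dfstab=$(mktemp)

# Each line is a complete share command, executed at boot (or by shareall).
echo 'share -F nfs -o ro -d "docs" /export/share1' >> "$dfstab"

cat "$dfstab"
```

On a real server the line goes into /etc/dfs/dfstab; with at least one uncommented share line present, enabling svc:/network/nfs/server starts the NFS daemons.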
NFS server Daemons:
To start NFS server daemon enable the daemon svc:/network/nfs/server :
#svcadm enable nfs/server
Note: The nfsd and mountd daemons are started if there is an uncommented share statement in the system's
/etc/dfs/dfstab file.
Following are the NFS server daemon required to provide NFS server service:
mountd:
- Handles file system mount request from remote systems & provide access control.
- It determines whether a particular directory is being shared and if the requesting client has permission to access it.
- It is only required for NFSv2 & 3.
nfsd:
Handles client requests to access the remote file system.
statd:
Works with lockd daemon to provide crash recovery function for lock manager.
lockd:
Supports record locking function for NFS files.
nfslogd:
Provides operational logging for NFSv2 & 3.
nfsmapid:
- It is implemented in NFSv4.
- The nfsmapid daemon maps owner & group identification that both the NFSv4 client and server use.
- It is started by: svc:/network/nfs/mapid service.
Note: The features provided by mountd & lockd daemons are integrated in NFSv4 protocol.
NFS Server Commands:
share:
Makes a local directory on an NFS server available for mounting. It also displays the contents of the /etc/dfs/sharetab
file. It writes information for all shared resource into /etc/dfs/sharetab file.
Syntax:
share [-F fstype] [-o options] [-d "text"] [Path Name]
-o options: Controls a client's access to an NFS shared resource.
The options lists are as follows:
ro: read only request
rw: read & write request
root=access-list: Informs the client that the root user on the specified client systems can perform superuser-privileged
requests on the shared resource.
ro=access-list: Allows read requests from the specified access list.
rw=access-list: Allows read & write requests from the specified access list.
anon=n: Sets n to be the effective user ID for anonymous users. By default it is 60001 (the nobody account). If it is set
to -1, access is denied.
access-list=client:client : Allows access based on a colon-separated list of one or more clients.
access-list=@network : Allows access based on a network name. The network name must be defined in the
/etc/networks file.
access-list=.domain : Allows access based on DNS domain. The (.) dot identifies the value as a DNS domain.
access-list=netgroup_name: Allows access based on a configured net group(NIS or NIS+ only)
-d description: Describes the shared file resource.
Path name: Absolute path of the resource for sharing.
Example:
#share -o ro /export/share1
The above command provides read only permission to /export/share1.
#share -F nfs -o ro,rw=client1 directory
This command restricts access to read only, but accepts read and write requests from client1.
Note: If no argument is specified share command displays list of all shared file resource.
unshare:
Makes a previously available directory unavailable for the client side mount operations.
#unshare [ -F nfs ] pathname
#unshare <resource name>
shareall:
Reads and executes share statements in the /etc/dfs/dfstab file.
This shares all resources listed in the /etc/dfs/dfstab file.
shareall [-F nfs]
unshareall:
Makes previously shared resources, as listed in /etc/dfs/sharetab, unavailable.
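The persistence mechanism behind shareall can be sketched against a scratch dfstab-style file. The resource paths below are illustrative, and the scratch file stands in for the real /etc/dfs/dfstab:

```shell
# Sketch: shareall reads and executes the share lines kept in
# /etc/dfs/dfstab. A scratch copy is used here; the resource
# paths are illustrative, not real shares.
DFSTAB=$(mktemp)
cat > "$DFSTAB" <<'EOF'
share -F nfs -o ro /export/share1
share -F nfs -o ro,rw=client1 /export/projects
EOF

# List the resources shareall would share (last field of each
# share line) -- essentially what share with no arguments
# reports afterwards.
awk '/^share/ {print $NF}' "$DFSTAB"
rm -f "$DFSTAB"
```

Running the sketch prints the two resource paths, which mirrors the list the share command reports once shareall has executed the file.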
Step2:
If needed, make the following entry:
NFS_SERVER_DELEGATION=off
By default this variable is commented out and the server grants delegations to clients; this entry turns delegation off.
Step3:
If needed, make the following entry:
NFSMAPID_DOMAIN=<domain name>
By default nfsmapid daemon uses DNS domain of the system.
Determine if NFS server is running:
#svcs network/nfs/server
To enable the service:
#svcadm enable network/nfs/server
Configuring an NFS Client:
Step1 :
Make the following entries in the /etc/default/nfs file on the client machine:
NFS_CLIENT_VERSMAX=n
NFS_CLIENT_VERSMIN=n
Here n is the NFS version and takes the values 2, 3 & 4. By default these values are unspecified; on a client
machine the default minimum is version 2 and the default maximum is version 4.
Step2:
Mount a file system:
#mount server_name:share_resource local_directory
server_name: Name of NFS server
share_resource: Path of the shared remote directory
local_directory: Path of local mount point
Enable the nfs service:
#svcadm enable network/nfs/client
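The version pinning in Step 1 can be sketched on a scratch copy of the defaults file. On a real client you would edit /etc/default/nfs itself; NFS_CLIENT_VERSMIN and NFS_CLIENT_VERSMAX are the client-side variable names, and pinning to version 3 is just an example:

```shell
# Sketch: forcing an NFS client to a single protocol version by
# editing a scratch copy of /etc/default/nfs (illustrative only).
NFSDEF=$(mktemp)
cat > "$NFSDEF" <<'EOF'
#NFS_CLIENT_VERSMIN=2
#NFS_CLIENT_VERSMAX=4
EOF

# Uncomment both variables and pin the client to NFSv3.
sed -e 's/^#NFS_CLIENT_VERSMIN=.*/NFS_CLIENT_VERSMIN=3/' \
    -e 's/^#NFS_CLIENT_VERSMAX=.*/NFS_CLIENT_VERSMAX=3/' \
    "$NFSDEF" > "$NFSDEF.new" && mv "$NFSDEF.new" "$NFSDEF"

grep '^NFS_CLIENT' "$NFSDEF"
rm -f "$NFSDEF"
```

After editing the real file, the change takes effect for subsequent mounts; existing mounts keep the version they negotiated.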
NFS File Sharing:
At server side:
1. Create following entry in /etc/dfs/dfstab :
#share -F nfs <resource path name>
2. Share the file system:
#exportfs -a
-a: Exports all directories listed in the dfstab file.
3. List all shared file system:
#showmount -e
4. Export the shared file system to kernel:
To share all file system: #shareall
To share specific file system: #share <resource path name>
5. Start the nfs server daemon:
#svcadm enable nfs/server
At Client side:
1. Create a directory to mount the file system.
2. Mount the file system:
#mount -F nfs <Server Name/IP>:<Path name> <Local mount point>
3. Start the nfs client daemon:
#svcadm enable nfs/client
AutoFS:
AutoFS is a file system mechanism that provides automatic mounting using the NFS protocol. It is a client-side
service. The AutoFS service mounts and unmounts file systems as required without any user intervention.
AutoMount service: svc:/system/filesystem/autofs:default
Whenever a client machine running automountd daemon tries to access a remote file or directory, the daemon
mounts the remote file system to which that file or directory belongs. If the remote file system is not accessed for a
defined period of time, it is unmounted by automountd daemon.
If automount starts up and has nothing to mount or unmount, the following is reported when we use automount
command:
# automount
automount: no mounts
automount: no unmounts
The automount facility contains three components:
The AutoFS file system:
An AutoFS file system's mount points are defined in the automount maps on the client system.
The automountd daemon:
The /lib/svc/method/svc-autofs script starts the automountd daemon. It mounts file systems on demand and
unmounts idle mount points.
The automount command:
This command is called at system startup and reads the master map to create the initial set of AutoFS mounts. These
AutoFS mounts are not automatically mounted at startup time; they are mounted on demand.
Automount Maps:
The behavior of the automount is determined by a set of files called automount maps. There are four types of maps:
Master Map: It contains the list of other maps that are used to establish AutoFS system.
Direct map: It is used to mount file systems where each mount point does not share a common prefix with other
mount points in the map.
A /- entry in the master map(/etc/auto_master) defines a mount point for a direct map.
Sample entry: /- auto_direct -ro
The /etc/auto_direct file contains the absolute path name of the mount point, mount options & shared resource to
mount.
Sample entry:
/usr/share/man -ro,soft server1,server2:/usr/share/man
Here server1 and server2 are multiple locations from which the resource can be mounted, depending upon proximity
and administrator-defined weights.
Indirect map: It is useful when we are mounting several file systems that will share a common pathname prefix.
Let us see how an indirect map can be used to manage the directory tree in /home.
We have already seen the following entry in /etc/auto_master:
/home  auto_home  -nobrowse
The /etc/auto_home file lists only relative path names. Indirect maps obtain the initial path of the mount point from the
master map (/etc/auto_master).
In our example, /home is the initial path of the mount point.
Let's look at a few sample entries in the /etc/auto_home file:
user1 server1:/export/home/user1
user2 server2:/export/home/user2
Here the mount points are /home/user1 & /home/user2. The server1 & server2 are the servers sharing
resource /export/home/user1 & /export/home/user2 respectively.
Reducing the auto_home map to a single line:
Let's take a scenario where, for every login ID, the client remotely mounts the /export/home/loginID directory
from the NFS server server1 onto the local mount point /home/loginID. A single wildcard entry covers all of them:
* server1:/export/home/&
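The wildcard substitution that automountd performs for this entry can be sketched with ordinary text tools. The server1 host and the ravi key are illustrative; the real maps live under /etc:

```shell
# Sketch: how the wildcard auto_home entry expands for one login ID.
MAPDIR=$(mktemp -d)

# Master map: /home is handled by the indirect map auto_home.
printf '/home  auto_home  -nobrowse\n' > "$MAPDIR/auto_master"

# Indirect map: '&' is replaced by whatever key matched '*'.
printf '*  server1:/export/home/&\n' > "$MAPDIR/auto_home"

# Simulate the lookup automountd would do for /home/ravi.
key=ravi
entry=$(awk '$1 == "*" {print $2}' "$MAPDIR/auto_home")
echo "$entry" | sed "s/&/$key/"    # -> server1:/export/home/ravi
rm -rf "$MAPDIR"
```

The same substitution happens for every login ID, which is exactly why one wildcard line can replace a long list of per-user entries.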
Special: It provides access to NFS server by using their host names. The two special maps listed in example for
/etc/auto_master file are:
The -hosts map: This provides access to all the resources shared by NFS server. The shared resources are
mounted below the /net/server_name or /net/server_ip_address directory.
The auto_home map: This provides mechanism to allow users to access their centrally located $HOME directories.
The /net directory:
The shared resources associated with the hosts map entry are mounted below the /net/server_name or
/net/server_ip_address directory. Let's say we have a shared resource Shared_Dir1 on Server1. This shared
resource can be found under the /net/Server1/Shared_Dir1 directory. When we cd into this directory, the
resource is auto-mounted.
Updating Automount Maps:
After making changes to the master map or creating a direct map, execute the automount command to make the
changes effective.
#automount [-t duration] [-v]
-t : Specifies time in seconds for which file system remains mounted when not in use. The default is 600s.
-v: Verbose mode
Note:
1. There is no need to restart the automountd daemon after making changes to existing entries in a direct map. The
new information is used when the automountd daemon next accesses the map entry to perform a mount.
2. If mount point(first field) of the direct map is changed, automountd should be restarted.
Refer to the following table to decide when to rerun the automount command:
Automount Map    Rerun automount? (entries added or deleted)    Rerun automount? (existing entries modified)
Master map       Yes                                            Yes
Direct map       Yes                                            No
Indirect map     No                                             No
Note: The mounted AutoFS file systems can also be verified from /etc/mnttab.
Enabling the automount system:
#svcadm enable svc:/system/filesystem/autofs
Disabling the automount system:
#svcadm disable svc:/system/filesystem/autofs
RAID-0 Volumes:
It consists of slices or soft partitions. These volumes let us expand disk storage capacity. There are three kinds of
RAID-0 volumes:
1. Stripe volumes
2. Concatenation volumes
3. Concatenated stripe volumes
Note: A component refers to any device, from slices to soft partitions, used in another logical volume.
Advantage: Allows us to quickly and simply expand disk storage capacity.
Disadvantages: They do not provide any data redundancy (unlike RAID-1 or RAID-5 volumes). If a single
component fails on a RAID-0 volume, data is lost.
We can use a RAID-0 volume that contains:
1. a single slice for any file system.
2. multiple components for any file system except for root (/), /usr, swap, /var, /opt, any file system that is accessed
during an operating system upgrade or installation
Note: While mirroring root (/), /usr, swap, /var, or /opt, we put the file system into a one-way concatenation or stripe
(a concatenation of a single slice) that acts as a submirror. This one-way concatenation is mirrored by another
submirror, which must also be a concatenation.
RAID-0 (Stripe) Volume:
It is a volume that arranges data across one or more components. Striping alternates equally-sized segments of data
across two or more components, forming one logical storage unit. These segments are interleaved round-robin so
that the combined space is made alternately from each component, in effect, shuffled like a deck of cards.
Striping enables multiple controllers to access data at the same time, which is also called parallel access. Parallel
access can increase I/O throughput because all disks in the volume are busy most of the time servicing I/O requests.
An existing file system cannot be converted directly to a stripe. To place an existing file system on a stripe volume,
you must back up the file system, create the volume, then restore the file system to the stripe volume.
Note: Use a concatenation volume to encapsulate root (/), swap, /usr, /opt, or /var when mirroring these file systems.
RAID-0 (Concatenation) Volume:
The data blocks are written sequentially across the components, beginning with Slice A. Let us consider Slice A
containing logical data blocks 1 through 4. Slice B would then contain logical data blocks 5 through 8, and Slice C
logical data blocks 9 through 12. The total capacity of the volume is the combined capacity of the three
slices. If each slice were 10 Gbytes, the volume would have an overall capacity of 30 Gbytes.
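The sequential block layout described above can be checked with a little shell arithmetic. The slice sizes and block numbers come from the example in the text, and the 4-blocks-per-slice figure is just the illustration's scale:

```shell
# Sketch: concatenation capacity and sequential block placement.
slices=3
gb_per_slice=10
echo "total: $(( slices * gb_per_slice )) GB"    # -> total: 30 GB

# With 4 logical blocks per slice, block N lives on slice (N-1)/4
# (0 = A, 1 = B, 2 = C), filled in order A, then B, then C.
for blk in 1 4 5 8 9 12; do
  case $(( (blk - 1) / 4 )) in
    0) s=A ;; 1) s=B ;; 2) s=C ;;
  esac
  echo "logical block $blk -> Slice $s"
done
```

A stripe would instead alternate interlace-sized chunks round-robin across the slices; only the placement rule changes, the total capacity is the same.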
RAID-1 (Mirror) Volumes:
It is a volume that maintains identical copies of the data in RAID-0 (stripe or concatenation) volumes.
We need at least twice as much disk space as the amount of data you have to mirror. Because Solaris Volume
Manager must write to all submirrors, mirroring can also increase the amount of time it takes for write requests to be
written to disk.
We can mirror any file system, including existing file systems such as root (/), swap, and /usr. We can also
use a mirror for any application, such as a database.
A mirror is composed of one or more RAID-0 volumes (stripes or concatenations) called submirrors.
A mirror can consist of up to four submirrors. However, two-way mirrors usually provide sufficient data redundancy for
most applications and are less expensive in terms of disk drive costs. A third submirror enables you to make online
backups without losing data redundancy while one submirror is offline for the backup.
If you take a submirror "offline", the mirror stops reading and writing to the submirror. At this point, you could access
the submirror itself, for example, to perform a backup. However, the submirror is in a read-only state. While a
submirror is offline, Solaris Volume Manager keeps track of all writes to the mirror. When the submirror is brought
back online, only the portions of the mirror that were written while the submirror was offline (the resynchronization
regions) are resynchronized. Submirrors can also be taken offline to troubleshoot or repair physical devices that have
errors.
Submirrors can be attached to or detached from a mirror at any time, though at least one submirror must remain
attached at all times.
Normally, you create a mirror with only a single submirror. Then, you attach a second submirror after you create the
mirror.
The figure shows RAID-1 (Mirror) :
Diagram shows how two RAID-0 volumes are used together as a RAID-1 (mirror) volume to provide redundant
storage. It shows a mirror, d20. The mirror is made of two volumes (submirrors) d21 and d22.
Solaris Volume Manager makes duplicate copies of the data on multiple physical disks, and presents one virtual disk
to the application, d20 in the example. All disk writes are duplicated. Disk reads come from one of the underlying
submirrors. The total capacity of mirror d20 is the size of the smallest of the submirrors (if they are not of equal size).
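The capacity rule in the last sentence can be sketched with a quick calculation. The submirror sizes below are illustrative:

```shell
# Sketch: the capacity of mirror d20 is the smallest submirror.
d21=10   # GB, illustrative submirror size
d22=8    # GB, illustrative submirror size
cap=$d21
if [ "$d22" -lt "$cap" ]; then cap=$d22; fi
echo "mirror d20 capacity: ${cap} GB"    # -> mirror d20 capacity: 8 GB
```

The 2 GB difference on the larger submirror is simply unused, which is why equally sized submirrors waste no space.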
Providing RAID-1+0 and RAID-0+1:
Solaris Volume Manager supports both RAID-1+0 and RAID-0+1 redundancy.
RAID-1+0 redundancy constitutes a configuration of mirrors that are then striped.
RAID-0+1 redundancy constitutes a configuration of stripes that are then mirrored.
Note: Solaris Volume Manager cannot always provide RAID-1+0 functionality. However, where both submirrors are
identical to each other and are composed of disk slices (and not soft partitions), RAID-1+0 is possible.
Let us consider a RAID-0+1 implementation with a two-way mirror that consists of three striped slices:
Without Solaris Volume Manager, a single slice failure could fail one side of the mirror. Assuming that no hot spares
are in use, a second slice failure would fail the mirror. Using Solaris Volume Manager, up to three slices could
potentially fail without failing the mirror. The mirror does not fail because each of the three striped slices are
individually mirrored to their counterparts on the other half of the mirror.
The diagram shows how three of six total slices in a RAID-1 volume can potentially fail without data loss because of
the RAID-1+0 implementation.
The RAID-1 volume consists of two submirrors. Each of the submirrors consists of three identical physical disks that
have the same interlace value. A failure of three disks, A, B, and F, is tolerated. The entire logical block range of the
mirror is still contained on at least one good disk. All of the volume's data is available.
However, if disks A and D fail, a portion of the mirror's data is no longer available on any disk. Access to these logical
blocks fails. However, access to portions of the mirror where data is available still succeeds. Under this situation, the
mirror acts like a single disk that has developed bad blocks. The damaged portions are unavailable, but the remaining
portions are available.
Mirror resynchronization:
It ensures proper mirror operation by maintaining all submirrors with identical data, with the exception of writes in
progress.
Note: A mirror resynchronization should not be bypassed. You do not need to manually initiate a mirror
resynchronization. This process occurs automatically.
Full Resynchronization:
When a new submirror is attached (added) to a mirror, all the data from another submirror in the mirror is
automatically written to the newly attached submirror. Once the mirror resynchronization is done, the new submirror is
readable. A submirror remains attached to a mirror until it is detached.
If the system crashes while a resynchronization is in progress, the resynchronization is restarted when the system
finishes rebooting.
Optimized Resynchronization:
During a reboot following a system failure, or when a submirror that was offline is brought back online, Solaris Volume
Manager performs an optimized mirror resynchronization. The metadisk driver tracks submirror regions. This
functionality enables the metadisk driver to know which submirror regions might be out-of-sync after a failure. An
optimized mirror resynchronization is performed only on the out-of-sync regions. You can specify the order in which
mirrors are resynchronized during reboot. You can omit a mirror resynchronization by setting submirror pass numbers
to zero. For tasks associated with changing a pass number, see Example 11-16.
Caution: A pass number of zero should only be used on mirrors that are mounted as read-only.
Partial Resynchronization:
After the replacement of a slice within a submirror, SVM performs a partial mirror resynchronization of data. SVM
copies the data from the remaining good slices of another submirror to the replaced slice.
RAID-5 Volumes:
RAID level 5 is similar to striping, but with parity data distributed across all components (disk or logical volume). If a
component fails, the data on the failed component can be rebuilt from the distributed data and parity information on
the other components.
A RAID-5 volume uses storage capacity equivalent to one component in the volume to store redundant information
(parity). This parity information contains information about user data stored on the remainder of the RAID-5 volume's
components. The parity information is distributed across all components in the volume.
Similar to a mirror, a RAID-5 volume increases data availability, but with a minimum of cost in terms of hardware and
only a moderate penalty for write operations.
Note: We cannot use a RAID-5 volume for the root (/), /usr, and swap file systems, or for other existing file systems.
SVM automatically resynchronizes a RAID-5 volume when you replace an existing component. SVM also
resynchronizes RAID-5 volumes during rebooting if a system failure or panic took place.
Example:
Following figure shows a RAID-5 volume that consists of four disks (components):
The first three data segments are written to Component A (interlace 1), Component B (interlace 2), and Component C
(interlace 3). The next data segment that is written is a parity segment. This parity segment is written to Component D
(P 13). This segment consists of an exclusive OR of the first three segments of data. The next three data segments
are written to Component A (interlace 4), Component B (interlace 5), and Component D (interlace 6). Then, another
parity segment is written to Component C (P 46).
This pattern of writing data and parity segments results in both data and parity being spread across all disks in the
RAID-5 volume. Each drive can be read independently. The parity protects against a single disk failure. If each disk in
this example were 10 Gbytes, the total capacity of the RAID-5 volume would be 30 Gbytes. One drive's worth of
space (10 GB) is allocated to parity.
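The parity arithmetic above can be sketched directly, since a parity segment is just the exclusive OR of the data segments it covers. The segment values below are made-up small integers:

```shell
# Sketch: P13 is the XOR of interlaces 1-3; any one lost segment
# can be rebuilt by XOR-ing the parity with the survivors.
i1=7 ; i2=13 ; i3=42                 # illustrative data segments
p13=$(( i1 ^ i2 ^ i3 ))              # parity written to Component D

rebuilt=$(( p13 ^ i1 ^ i2 ))         # Component C failed: rebuild i3
echo "rebuilt interlace 3 = $rebuilt"    # -> rebuilt interlace 3 = 42

# Usable capacity: one component's worth of space holds parity.
components=4 ; gb=10
echo "usable: $(( (components - 1) * gb )) GB of $(( components * gb )) GB raw"
```

This is also why RAID-5 survives only a single component failure: with two segments of a stripe missing, the XOR no longer has enough survivors to reconstruct either one.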
State Database:
It stores information on disk about the state of Solaris Volume Manager software.
Multiple copies of the database, called replicas, provide redundancy and should be distributed across
multiple disks.
SVM uses a majority consensus algorithm to determine which state database replicas contain valid data.
The algorithm requires that a majority (half + 1) of the state database replicas be available before any of them are
considered valid.
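The half + 1 rule can be sketched numerically; this is pure arithmetic, with no SVM commands involved:

```shell
# Sketch: majority consensus -- how many replicas must be
# available for the configuration to be considered valid.
for replicas in 3 4 5 6 7; do
  quorum=$(( replicas / 2 + 1 ))
  echo "$replicas replicas -> need $quorum available"
done
```

Note that 4 replicas need 3 available and so tolerate only one loss, the same as 3 replicas, which is why adding an even-numbered replica buys no extra failure tolerance.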
Creating a state database:
#metadb -a -c n -l nnnn -f ctds-of-slice
-a specifies to add a state database replica.
-f specifies to force the operation, even if no replicas exist.
-c n specifies the number of replicas to add to the specified slice.
-l nnnn specifies the size of the new replicas, in blocks.
ctds-of-slice specifies the name of the component that will hold the replica.
Use the -f flag to force the addition of the initial replicas.
Example: Creating the First State Database Replica
# metadb -a -f c0t0d0s0 c0t0d0s1 c0t0d0s4 c0t0d0s5
# metadb
        flags           first blk       block count
     a        u         16              8192            /dev/dsk/c0t0d0s0
     a        u         16              8192            /dev/dsk/c0t0d0s1
     a        u         16              8192            /dev/dsk/c0t0d0s4
     a        u         16              8192            /dev/dsk/c0t0d0s5
The -a option adds the additional state database replica to the system, and the -f option forces the creation of the first
replica (and may be omitted when you add supplemental replicas to the system).
#metadb -a -f -c 2 c1t1d0s1 c1t1d0s2
The above command creates two replicas on each of the slices c1t1d0s1 & c1t1d0s2.
Deleting a State Database Replica:
# metadb -d c2t4d0s7
The -d deletes all replicas that are located on the specified slice. The /etc/system file is automatically updated with
the new information and the /etc/lvm/mddb.cf file is updated.
Metainit command:
This command is used to create metadevices. The syntax is as follows:
#metainit -f concat/stripe numstripes width component....
-f: Forces the metainit command to continue, even if one of the slices contains a mounted file system or is in use.
concat/stripe: Volume name of the concatenation/stripe being defined.
numstripes: Number of individual stripes in the metadevice. For a simple stripe, numstripes is always 1. For a
concatenation, numstripes is equal to the number of slices.
width: Number of slices that make up a stripe. When width is greater than 1, the slices are striped.
component: logical name for the physical slice(partition) on a disk drive.
Example:
# metainit d30 3 1 c0t0d0s7 1 c0t2d0s7 1 c0t3d0s7
d30: Concat/Stripe is setup
Some basic commands, with descriptions and sample output:

date: Displays the current system date & time.

who: Displays the users currently logged in.
khushi pts/307  2015-02-05 13:35 (152.69.36.25)
rani   pts/313  2015-02-03 10:49 (152.69.36.25)
raju   pts/311  2015-02-06 15:24 (10.159.106.213)
ravi   pts/324  2015-02-06 01:45 (144.20.169.171)

who am i: Displays the current login session.
root pts/52  2015-02-06 22:13 (10.191.202.3)

echo: Displays a line of text.

ls: Lists files & directories.
system.html

cat <file name>: Displays the contents of a file.
$ cat test.txt
This is a test file.

wc: Counts the number of lines, characters, words in a file.
$ wc -l test.txt
6 test.txt
$ wc -c test.txt
122 test.txt
$ wc -w test.txt
29 test.txt

rm: Removes a file or directory.
rm -r    # to remove a directory
rm -f    # force remove

pwd: Displays the current working directory.
$ pwd
/home/raviranjan

cd: Changes directory.
cd /var

mkdir: Creates a directory.
mkdir test
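The wc figures above depend on the actual contents of test.txt, so here is a self-contained version against a throwaway file; the sample text (and therefore the counts) is made up:

```shell
# Sketch: reproducing the wc examples with a known throwaway file.
f=$(mktemp)
printf 'This is a test file.\nIt has two lines.\n' > "$f"

wc -l < "$f"    # lines -> 2
wc -w < "$f"    # words -> 9
wc -c < "$f"    # bytes -> 39
rm -f "$f"
```

Reading from stdin (`< "$f"`) suppresses the trailing filename, so only the bare count is printed.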