
18 Examples of using the Solaris boot command

by Sandeep Patil


The Solaris boot command, when used with various optional parameters, changes the booting
behavior.
Syntax
The general syntax of the boot command on a Solaris SPARC system is:
ok> boot [device-specifier] [arguments]
The common boot [device-specifier]s are:
1. disk
2. cdrom
3. net (network boot image)
4. url (JumpStart)
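The device aliases that are actually defined on a machine can be listed at the OpenBoot prompt with the devalias command. The output below is only illustrative; the disk path is reused from the boot -L example later in this article, while the cdrom and net paths are placeholders:
ok> devalias
disk    /pci@1f,700000/scsi@2/disk@0,0
cdrom   /pci@1e,600000/ide@d/cdrom@0,0:f
net     /pci@1f,700000/network@0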
Example 1 : Normal Boot
The boot command without any arguments boots the system into multi-user mode by
default.
ok> boot
Example 2 : Interactive boot
The -a option makes the boot interactive: it asks for configuration information such as where
to find the system file, where to mount root, and even the name of the kernel itself. This is
very useful in case of a corrupt /etc/system file or any other file used during the boot process.
Simply enter /dev/null when asked for the system file. The default responses are shown in
square brackets []; press Enter to accept a default.
ok> boot -a
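An abbreviated, illustrative prompt sequence is shown below. The exact prompts and default values vary with the Solaris release and hardware; the /dev/null answer for the system file is the workaround described above, and the root device path is reused from the boot -L example later in this article:
Enter filename [kernel/sparcv9/unix]: <Enter>
Name of system file [etc/system]: /dev/null
root filesystem type [ufs]: <Enter>
Enter physical name of root device [/pci@1f,700000/scsi@2/disk@0,0:a]: <Enter>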
Example 3 : Verbose mode
To boot the system in verbose mode :
ok> boot -v
Example 4 : Single user mode
To boot the system into single-user mode (init level s), use the -s argument. In this mode
all local file systems are mounted and only a small set of essential kernel processes are left
running. This mode is usually used when patching the system. No users can log in to
the system over the network.
ok> boot -s
Example 5 : Non-cluster mode
The -x option is used only on Sun Cluster nodes, to boot the node into non-cluster mode.
ok> boot -x
Example 6 : Reconfiguration boot
When booted in reconfiguration mode, the system probes all the hardware devices and
updates the logical and physical device namespaces in /dev and /devices respectively.
ok> boot -r
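A reconfiguration boot can also be requested from a running system, either by creating the /reconfigure file before rebooting or by passing -r to reboot; both are standard Solaris practice:
# touch /reconfigure ; init 6
# reboot -- -r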
Example 7 : AutoClient cache flush
The -f argument causes AutoClient systems to flush and reinitialize the client system's local
cache and read all files over the network from the client's file server. This flag is ignored on
all non-AutoClient systems.
ok> boot -f
Example 8 : Default file
The -D argument explicitly specifies the default file. Without this option, the system
chooses a dynamic default file.
ok> boot -D [default_file]
Example 9 : Mount root read-write
The -w argument asks for the root file system to be mounted read-write while booting, but this
option is not implemented: the ufs root file system is mounted read-only to avoid problems
during fsck, and is remounted read-write after fsck has run.
ok> boot -w
Example 10 : SMF options
The -m argument can be used to specify SMF options when booting the system.
ok> boot -m [options]
The various options that can be specified with -m are :
verbose - Print a line for each service as it is started
quiet - Very quiet boot; suppresses standard per-service output and error
messages requiring administrative intervention.
debug - Boot in serial mode, with status logging of service success or failure
output to the console. The stdout and stderr streams of each method invoked
will be connected to the console, as well as the standard logging facilities
smf(5) provides.
milestone=[milestone-level] - Boot to the subgraph of services defined by the given
milestone (an example follows the milestone list below).
The various milestone levels are :
none - disable all services.
single-user - roughly the equivalent of run level 1 or S
multi-user - roughly the equivalent of run level 2
multi-user-server - roughly the equivalent of run level 3
all - all enabled services
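For instance, the system could be brought up with only the single-user milestone's services and later moved to the full service set with svcadm, the standard SMF tool for changing milestones:
ok> boot -m milestone=single-user
(after logging in and finishing maintenance)
# svcadm milestone all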
Example 11 : Failsafe mode
Starting with Solaris 10 update 6, a system with a ZFS root file system can be booted in
failsafe mode for troubleshooting when the OS on the primary boot environment fails to boot.
ok> boot -F failsafe
Example 12 : Boot from a bootable ZFS dataset
The -L argument allows booting from a specific bootable ZFS dataset on the disk. The
bootable datasets are listed in the /rpool/boot/menu.lst file, which is common to all datasets
under the root pool. Additional entries are added to menu.lst by the Live Upgrade process
for Boot Environments (BEs); the file is updated during the shutdown process (init 0 or
init 6) after an luactivate has been performed.
ok> boot -L
Rebooting with command: boot -L
Boot device: /pci@1f,700000/scsi@2/disk@0,0:a File and args: -L
1. zfsroot
2. zfsroot-with-patch
Select environment to boot: [ 1 - 2 ]: 1
To boot the selected entry, invoke: boot [ root-device ] -Z
rootpool/ROOT/zfsroot
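As an illustration, a menu.lst matching the two boot environments above might contain entries along these lines; the exact format can vary between Solaris releases, and the names are simply taken from the example output:
# cat /rpool/boot/menu.lst
title zfsroot
bootfs rootpool/ROOT/zfsroot
title zfsroot-with-patch
bootfs rootpool/ROOT/zfsroot-with-patch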
Example 13 : Boot a specific ZFS dataset
With the -Z argument, you can directly specify the bootable ZFS dataset to boot from.
ok> boot -Z rootpool/ROOT/zfsroot
Network Booting
SPARC systems can be booted from the network using either RARP/bootparams or
DHCP.
Example 14 : using RARP
When using the RARP option to boot over the network, the PROM issues a reverse ARP request.
On receiving a reply, the PROM broadcasts a TFTP request to fetch inetboot over the
network from any server that responds, and executes it.
ok> boot net:rarp
Example 15 : using DHCP
When using the DHCP option to boot over the network, the PROM broadcasts the MAC address
and kernel architecture of the system and requests an IP address, boot parameters, and
network configuration information. On receiving the information from the DHCP server, the
PROM downloads inetboot, loads it into memory, and executes it. inetboot then invokes the
kernel, which loads the files it needs and releases inetboot.
ok> boot net:dhcp
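When the network boot is used to kick off a JumpStart installation (the url/JumpStart case mentioned in the device-specifier list) rather than simply to boot the system, the install argument is typically appended:
ok> boot net - install
ok> boot net:dhcp - install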
Troubleshooting
The various boot arguments discussed above can be combined to troubleshoot a
booting issue. The most commonly used combinations of arguments are:
Example 16 : interactive, verbose, single-user using the local disk
ok> boot -avs
Example 17 : interactive, verbose, single-user using cdrom
ok> boot cdrom -avs
Example 18 : interactive, verbose, single-user using the network
ok> boot net -avs

Solaris 10 boot process : SPARC
by Sandeep Patil
The boot process on the SPARC platform involves five phases: the boot PROM phase, the boot
program phase, kernel initialization, the init phase, and the svc.startd phase. There is a
slight difference between the boot process of a SPARC-based and an x86/x64-based Solaris
operating system.

Boot PROM phase
1. The boot PROM runs the power on self test (POST) to test the hardware.
2. The boot PROM displays a banner with the following information:
- Model type
- Processor type
- Memory
- Ethernet address and host ID
3. The boot PROM reads the PROM variable boot-device to determine the boot device.
4. The boot PROM reads the primary boot program (bootblk) from sectors 1 to 15 of the boot
device and executes it.
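The boot-device variable mentioned in step 4 can be inspected and changed at the OpenBoot prompt; the alias value set here is just an example, since the list of aliases is system-specific:
ok> printenv boot-device
ok> setenv boot-device disk net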
Boot program phase
1. bootblk loads the secondary boot program, ufsboot, into memory.
2. ufsboot reads and loads the kernel. The kernel is composed of two parts:
- unix (platform-specific kernel)
- genunix (platform-independent kernel)
3. ufsboot combines these two parts into one complete kernel and loads it into memory.
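As an illustration only, on a sun4u system the two parts typically live at paths similar to the following; the exact paths depend on the platform and Solaris release:
# ls -l /platform/sun4u/kernel/sparcv9/unix /kernel/sparcv9/genunix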
Kernel initialization phase
1. The kernel reads the configuration file /etc/system.
2. The kernel initializes itself and loads the kernel modules. The modules usually reside in
the /kernel and /usr/kernel directories; platform-specific drivers live under
/platform/`uname -i`/kernel and /platform/`uname -m`/kernel.
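Once the system is up, the kernel modules that are currently loaded can be listed with the standard modinfo command, for example:
# modinfo | head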
Init phase
1. The kernel starts the /etc/init daemon (with PID 1).
2. The /etc/init daemon starts the svc.startd process, which is responsible for starting and
stopping the services.
3. The /etc/init daemon uses the file /etc/inittab to bring the system up to the
appropriate run level mentioned in this file.
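The hand-off from init to svc.startd can be seen in /etc/inittab; on Solaris 10 the command below typically shows an smf::sysinit entry that runs /lib/svc/bin/svc.startd:
# grep svc.startd /etc/inittab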
Legacy Run Levels
A run level specifies the state of the system and which services and resources are available
to users.
0 - System is at the PROM monitor (ok> prompt).
s or S - Single-user mode with critical file systems mounted (only a single
user can access the OS).
1 - Single-user administrative mode with access to all file systems (only a
single user can access the OS).
2 - Multi-user mode. Multiple users can access the system; NFS and some
other network-related daemons do not run.
3 - Multi-user-server mode. Multi-user mode with NFS and all other
network resources available.
4 - Not implemented.
5 - Transitional run level. The OS is shut down and the system is powered off.
6 - Transitional run level. The OS is shut down and the system is rebooted to
the default run level.
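The current run level of a running system can be checked with who -r; the output below is only illustrative:
# who -r
   .       run-level 3  Sep 20 10:15     3      0  S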
svc.startd phase
1. After the kernel starts the svc.startd daemon, svc.startd executes the rc scripts in
the /sbin directory based upon the run level.
rc scripts
Each run level has an associated rc script in the /sbin directory.
# ls -l /sbin/rc?
-rwxr--r-- 3 root sys 1678 Sep 20 2012 /sbin/rc0
-rwxr--r-- 1 root sys 2031 Sep 20 2012 /sbin/rc1
-rwxr--r-- 1 root sys 2046 Sep 20 2012 /sbin/rc2
-rwxr--r-- 1 root sys 1969 Sep 20 2012 /sbin/rc3
-rwxr--r-- 3 root sys 1678 Sep 20 2012 /sbin/rc5
-rwxr--r-- 3 root sys 1678 Sep 20 2012 /sbin/rc6
-rwxr--r-- 1 root sys 4069 Sep 20 2012 /sbin/rcS
Each rc script runs the corresponding /etc/rc?.d/K* and /etc/rc?.d/S* scripts. For example,
for run level 3 the following scripts will be executed by /sbin/rc3:
/etc/rc3.d/K*
/etc/rc3.d/S*
The naming convention for the start and stop run control scripts is:
S##name_of_script - start run control script
K##name_of_script - stop (kill) run control script
Note that the S and K are uppercase; scripts starting with a lowercase s or k are ignored,
which can be used to disable a script for a particular run level (see the example below).
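As a sketch, a legacy service could be prevented from starting at run level 2 by renaming its start script to begin with a lowercase letter; the script name S99myservice is hypothetical:
# mv /etc/rc2.d/S99myservice /etc/rc2.d/s99myservice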