Solaris 7, 8, 9
if not, run yes | pkgadd -d . [list of SUNWxxx pkgs] from the jumpstart server OS/.../Products dir to add them.
who -r : show current run level (useful like when doing boot -s)
who -b : show system boot time
shutdown etc. commands do not seem to reboot automatically either, unless you specify a reboot
init level (eg -i 6)
Storage
Filesystem
newfs /dev/rdsk/c... create a new fs for the space on a "raw" slice
(also applicable to metadevices from DiskSuite (and Veritas?) in both the
stripe+concat and raid5 drives. Mirrors would need a sync cmd.
see cmd.disksuite.ref
-v verbose
-b [bsize] specify block size, def should be 8192 (req by dba)
-N print the mkfs cmd that will be used, w/o actually doing any work.
mkfs -m /dev/dsk/c0t0d0s0 show the mkfs used to create the existing fs.
mkfs -m /dev/md/d0 for SDS disks; looking at a subcomponent will give bogus data.
Volume Management
Solaris by default does not use a volume manager; the file system is by default created right on top of a
partition. Sun does have a volume manager that is tightly tied to Solaris: the Solaris Volume Manager (SVM), formerly
Solstice DiskSuite (SDS).
Alternatively, a lot of places use Veritas Volume Manager (VxVM). IMHO, the OS boot disk is best left in the control of
SVM. This is a hotly contested topic. I will just say that starting with VxVM 4.0, the word from Veritas tech
support is: "We no longer require you to use VxVM for the boot disk, why don't you just use Veritas for your
data disks". They told me this after I ran into some bugs and they needed me to update from 4.0 to 4.01.
Needless to say, I changed my school of thought then and used SVM for the boot disk from then on.
SVM/SDS Commands
metastat
show DiskSuite config, status and some minor stats
metadb
show info about the metadb (state database) used by DiskSuite to maintain meta/state
info.
metareplace -e d0 c0t0d0s0
This performs a resync on the mirror drive d0; component c0t0d0s0 is the
one that will be wiped out and rebuilt. (Used when rebuilding the root partition:
disk0 was yanked out, so data from c0t1d0s0 was needed to rebuild
the mirror).
metastat | awk '/State:/ { if ($2 != "Okay") if (prev ~ /^d/) print prev, $0} {prev =
$0}'
Quickly list drives that are not in okay mode. eg, error, sync, etc.
#quickly list drives that are not in okay mode (eg, error, sync, etc.):
PATH=/usr/bin:/usr/sbin:/usr/local/bin:/usr/opt/SUNWmd/sbin/
RCPT=tin@taos.com
HOST=`hostname`
MSG="Solaris DiskSuit alert for $HOST"
Sys Admin Pocket Survival Guide - Solaris
OUTPUT1=`metastat | \
awk '/State:/ { if ($2 != "Okay") if (prev ~ /^d/) print prev, $0} {prev = $0}'`
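The fragment above computes OUTPUT1 but never sends it anywhere. A hedged sketch of the missing tail (the mailx invocation is an assumption), with metastat faked via a here-doc so the logic can be exercised on any box:

```shell
# Fake metastat output via a here-doc; on a real host, replace the here-doc
# with `metastat` itself.  Same awk filter as the one-liner above.
OUTPUT1=$(awk '/State:/ { if ($2 != "Okay") if (prev ~ /^d/) print prev, $0} {prev = $0}' <<'EOF'
d10: Submirror of d0
    State: Okay
d20: Submirror of d0
    State: Needs maintenance
EOF
)
if [ -n "$OUTPUT1" ]; then
    # real script would do:  echo "$OUTPUT1" | mailx -s "$MSG" $RCPT
    echo "ALERT: $OUTPUT1"
fi
```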
#quickly see if there are any problems with metadb replicas (state db)
#(work cuz metadb use caps only when they have errors in them.
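A minimal sketch of that caps check; the metadb output in the here-doc is an assumed sample (on a real host, pipe metadb itself into the awk):

```shell
# Healthy replica flags are all lowercase; a capital letter (eg W = write
# errors) marks a bad replica, so just look for caps and print the device.
BAD=$(awk '/[A-Z]/ {print $NF}' <<'EOF'
     a m  p  luo        16              1034            /dev/dsk/c0t0d0s7
     a    p  luo        1050            1034            /dev/dsk/c0t0d0s7
     W    p  l          16              1034            /dev/dsk/c0t1d0s7
EOF
)
echo "replicas with errors: $BAD"
```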
The way SVM/SDS does mirroring is that it creates a fs (mkfs or newfs) of the exact same size on each submirror. This
is independent of the slice sizes on the different disks. As long as the starting fs size is small enough to fit in all
slices of the different disks, it will work. This is where the lowest common denominator comes from.
Note that due to this approach, once the disk is mirrored, even if a slice has more space, it can never be used. On
the other hand, this approach allows disks of dissimilar size to work as a mirror pair, leaving some extra
partition space for other "scratch" use.
E.g., when copying files from a 9 GB drive to an 18 GB drive, the partition size was increased via format, but after mirroring, all disk
slices showed matching size for the mirrors, even after the smaller submirror had been removed.
The final solution for the migration is to use ufsdump | ufsrestore; see backup.ref for the exact command.
Create the metadb partition on slice 7, with 4 cyl (really just need 1 cyl).
If there isn't any free cylinder on your disk, then you will need
to shrink SWAP to make more room.
eg:
format> verify
Volume name =
ascii name =
pcyl = 4926
ncyl = 4924
acyl = 2
nhead = 27
nsect = 133
Part Tag Flag Cylinders Size Blocks
0 root wm 580 - 1109 929.31MB (530/0/0) 1903230
1 swap wu 2 - 579 1013.48MB (578/0/0) 2075598
2 backup wm 0 - 4923 8.43GB (4924/0/0) 17682084
3 unassigned wm 0 0 (0/0/0) 0
4 usr wm 1170 - 2039 1.49GB (870/0/0) 3124170
5 var wm 2040 - 2329 508.49MB (290/0/0) 1041390
6 unassigned wm 2330 - 4919 4.43GB (2590/0/0) 9300690
7 unassigned wm 4920 - 4923 7.01MB (4/0/0) 14364
format>
Copy the partition table to the 2nd disk that will hold the mirror.
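The usual way to do that copy is prtvtoc piped into fmthard; a hedged sketch (device names assumed, and only echoed here since fmthard rewrites the label):

```shell
# slice 2 (backup) covers the whole disk, so its VTOC describes all slices
CMD='prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2'
echo "$CMD"   # on the real host: eval "$CMD"
```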
output of metadb:
flags first blk block count
a m p luo 16 1034 /dev/dsk/c0t0d0s7
a p luo 1050 1034 /dev/dsk/c0t0d0s7
a p luo 16 1034 /dev/dsk/c0t1d0s7
a p luo 1050 1034 /dev/dsk/c0t1d0s7
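The four replicas above (two per slice) would come from a metadb create along these lines; a hedged sketch, echoed rather than executed:

```shell
# -a add, -f force creation of the very first replicas, -c 2 puts two
# copies on each listed slice
MDCMD='metadb -a -f -c 2 c0t0d0s7 c0t1d0s7'
echo "$MDCMD"
```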
This is what the mirroring setup will be. You can place this
content in /etc/vfstab (as comments) for easy future reference.
###
### metadevice mapping to physical devices
### disk in tag 0 and 1 (9 gigs) pair
###
### orig new mirror
### root d0 submirrors: d10 d20 : c0t0d0s0 c0t1d0s0
### swap d1 submirrors: d11 d21 : c0t0d0s1 c0t1d0s1
# create the additional submirror components for all slices, using the disk at c0t1
metainit -f d20 1 1 c0t1d0s0 # additional mirror for /
metainit -f d21 1 1 c0t1d0s1 # additional mirror for swap
metainit -f d24 1 1 c0t1d0s4 # additional mirror for /usr
metainit -f d25 1 1 c0t1d0s5 # additional mirror for /var
metainit -f d26 1 1 c0t1d0s6 # additional mirror for /u01
# the above cmds return right away; use metastat to monitor the sync process
# or metatool for a gui monitor/admin tool.
# review /etc/lvm/md.tab
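The commands above only build the c0t1 side. A hedged sketch of the rest of the usual root-mirror sequence for d0 (one-way mirror first, metaroot, reboot, then attach); only printed here, since every step is destructive on a real host:

```shell
# build the command list in a variable so it is easy to review before running
SEQ='metainit -f d10 1 1 c0t0d0s0
metainit d0 -m d10
metaroot d0
lockfs -fa
init 6
metattach d0 d20'
echo "$SEQ"
```

metaroot edits /etc/vfstab and /etc/system for the root device; the other filesystems need their vfstab entries changed by hand before the reboot.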
If these errors are annoying, update /etc/system and comment out the forceload of the
unnecessary components. The problem with such mods is that should a raid 5 device be
needed down the road and you forget to re-enable these, there may be some hair pulling
in finding out the error :)
----
Optional update to the OBP to allow easier booting should one of the boot disks fail; this
allows one to do:
boot rootmirror
--------
A sample test for failure scenario: replacing one submirror.
Sometimes metastat will report "maintenance needed, issue metareplace...";
this can also be used to fix the error if the disk error was transient or relocatable.
metadetach d5 d15 # detaches submirror d15 from the host mountable drive
# d5 (/var)
# real failure req metareplace will need -f
http://www.sun.com/bigadmin/content/submitted/expand_ufs_svm.html
describes a way of expanding a disk using an SDS trick.
mkfs -G -M ...
will expand ufs w/o lvm, but it is "undocumented"
eg of clean up:
Another method is to use metareplace to "replace a drive with itself". This method can
also be used if the replacement drive does not have the same geometry (size) as the
original drive or that of the rest of the RAID group. For example, one can replace a Sun
18 GB hard drive with a COMPAQ/HP 18 GB drive that has fewer cylinders than Sun (but each
cylinder holds more bytes). In such cases, one needs to first manually create the
partition table using the format command, ensuring that the SDS and metadb slices are
larger than the original size (in terms of megabytes).
format (select the right disk carefully, create slices 0 and 7).
eg
striping setup: 1 final volume, composed of 3 subdisks. Use an interleave factor of 64k
(def 16k; this number should match or be an exact multiple of the oracle read/write
block size).
For raid 5, SDS simply calls it raid. Here are examples for an MD device with 3 or 8
constituent disks/partitions:
metainit d45 -r c2t3d0s2 c3t0d0s2 c4t0d0s2
or
metainit d0 -r c1t0d0s7 c1t1d0s7 c1t2d0s7 c1t3d0s7 c1t8d0s7 c1t9d0s7 c1t10d0s7 c1t11d0s7
-i 32b
if you somehow need to reimport the raid 5 volume, use the -k option in metainit. Not sure
how to use it yet though.
Note that the pool name still remains when metastat is issued, but no disk attached to it.
Sun Volume Manager likes to use slice 7. The book says it only needs 1 cyl, but it allocates
8, and in my past experience 15 cyl were needed on a 36 GB drive w/ 24620 cyl! Oracle1 got
30 cyl for this.
A 72 GB drive actually has only 14087 cyl, so each cyl is bigger. Hopefully 7 cyl is enough.
If there are not enough cylinders available, the metadb -l [LENGTH] option may help remedy
the situation.
In contrast, Veritas Volume Manager usually needs 2 free available partitions (except for
the boot/root disk, which can do swap relocation, but that is not recommended anyway).
Slice 4 would be the private region for additional VxVM managed partitions.
However, for a root disk needing encapsulation, slice 4 is 1 cylinder at the beginning or end
of the disk.
So, if you want to be safe in terms of a future upgrade (or downgrade) to Veritas, SVM meta
data info should be stored in slice 3, and leave slice 4 unused.
Save your disk VTOCs and do metastat -p > /etc/lvm/md.tab, and save both somewhere safe. It
will save you lots of time if you need to redo it.
Also recommended: put two copies of your metadb on each disk, in a separate partition on
each disk.
#BKDIR=/export/cfbk
BKDIR=/var/adm/cfbk
mkdir $BKDIR
cp -p /etc/vfstab $BKDIR
cp -p /etc/system $BKDIR
cp -p /kernel/drv/md.conf $BKDIR
cp -p /etc/lvm/md.cf $BKDIR
cp -p /etc/lvm/mddb.cf $BKDIR
cp -p /etc/lvm/md.tab $BKDIR # really manual file, metastat -p
DISKPATH=/dev/rdsk
DISKSET="c0t0d0s2 c0t8d0s2 c0t9d0s2 c0t10d0s2"
#DISKSET="c0t0d0s2 c0t8d0s2 c0t9d0s2 c0t10d0s2 c0t11d0s2 c0t12d0s2"
for DISK in $DISKSET; do
prtvtoc $DISKPATH/$DISK > $BKDIR/`date +%Y%m%d`.vtoc."$DISK"
done
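The restore counterpart of the loop above feeds a saved VTOC back with fmthard; a hedged sketch (the date in the filename is a made-up example), echoed only since fmthard is destructive:

```shell
BKDIR=/var/adm/cfbk
DISK=c0t0d0s2
RESTORE="fmthard -s $BKDIR/20240101.vtoc.$DISK /dev/rdsk/$DISK"
echo "$RESTORE"   # on the real host, run the command itself
```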
----
sol 8:
/etc/system
* Begin MDD root info (do not edit)
forceload: misc/md_stripe
forceload: misc/md_mirror
forceload: misc/md_trans
forceload: misc/md_raid
forceload: misc/md_hotspares
forceload: misc/md_sp
forceload: drv/pcipsy
forceload: drv/glm
forceload: drv/sd
rootdev:/pseudo/md@0:0,0,blk
* End MDD root info (do not edit)
ls -lL /dev/dsk/c*sX
where X is the slice number of the metadb slice (typically 7)
References:
Sun SVM admin guide, w/ instructions to create diff devices and some troubleshooting
cases.
Doc 817-2530
sol 10 svm has the latest commands, with latest feature and changes.
Connectivity (Network)
NIC
ndd -get /dev/hme status_link # query nic speed, see ndd ref in email
ndd -get /dev/hme \? # list all possible params
ndd -get /dev/hme \? | fgrep -v '?' | awk '{print "echo " $1 "; ndd -get /dev/hme " $1}' | sh
# display all NIC parameters, must run as root
ndd -get /dev/ip \? | fgrep -v '?' | awk '{ print $1 }' | awk -F\( '{print "echo; echo ---- " $1 " ----; ndd -get /dev/ip " $1 " ; echo"}' | sh
# display lots of IP info. May want to pipe it to less...
ndd -get /dev/tcp \? | egrep -v '\?|obsolete' | awk '{print "echo; echo ---- " $1 " ----; ndd -get /dev/tcp " $1 " ; echo"}' | sh
# display lots of TCP info.
kstat -p hme:0::'/collisions|framing|crc|code_violations|tx_late_collisions/'
kstat -p dmfe:0::'/collisions|framing|crc|code_violations|tx_late_collisions/'
# get NIC collision stat from kernel stat. Runnable as user.
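All the ndd one-liners above use the same trick: awk prints shell commands and pipes them to sh. A portable illustration of just that pattern, with assumed parameter names so it runs without ndd:

```shell
# awk emits one `echo ---- name ----` command per input line; sh executes them
OUT=$(printf 'link_speed\nlink_mode\n' |
  awk '{print "echo ---- " $1 " ----"}' | sh)
echo "$OUT"
```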
Network Config
ifconfig -a
ifconfig hme0 plumb
ifconfig hme0 10.10.0.101 broadcast 10.10.0.255 netmask 255.255.255.0 up
hostname
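The ifconfig settings above do not survive a reboot; pre-Solaris-10 persistence lives in a handful of flat files. A sketch that writes them under a scratch root so it can run anywhere (addresses and interface name are the sample values above; on a real host the files go under /etc):

```shell
ROOT=/tmp/netcfg_demo            # scratch root; the real files live under /etc
mkdir -p $ROOT/etc/inet
echo 'myhost' > $ROOT/etc/hostname.hme0          # plumbs hme0 at boot
echo '10.10.0.101 myhost' >> $ROOT/etc/inet/hosts
echo '10.10.0.1' > $ROOT/etc/defaultrouter       # assumed gateway
echo '10.10.0.0 255.255.255.0' > $ROOT/etc/netmasks
```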
IPMP
Solaris IP Multi Path. Ethernet/IP layer redundancy w/o support from switch side.
Can run in an active/standby config (more compatible, only a single IP presented to the outside world),
or active/active (outbound traffic can go over both NICs using 2 IPs;
inbound depends on the IP the client uses to send data back, so typically only 1 NIC).
/etc/inet/hosts ::
172.27.3.71 oaprod1
172.27.3.72 oaprod1-ce0
172.27.3.73 oaprod1-ce2
172.27.3.74 oaprod2-nic2
NFS
/etc/dfs/dfstab
(add sample)
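A hedged sample dfstab entry (hostnames and path are made up); each line in dfstab is a share command run by shareall at boot:

```
share -F nfs -o rw=client1:client2,root=client1 -d "home dirs" /export/home
```

Run shareall after editing; share with no args lists what is currently exported.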
/etc/default/nfs # solaris 10, need to change the NFS client (and server) default max version to 3
# NFS 4 has nasty problems of ignoring NFS v3 security settings!!
/etc/default/autofs # all automount options are to be specified here,
# no more args for cli/init script such as -D ARCH=SOL10
# eg: AUTOMOUNTD_ENV=ARCH=SOL10
System Config
Software Management
To see which package installed a given file, grep through the
/var/sadm/install/contents file.
eg, find who installed the cc (shell script!):
grep /usr/ucb/cc /var/sadm/install/contents
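The owning package is the last field of the matching contents line; a sketch against an assumed sample line (real contents lines end with the package name; a file shared by several packages lists them all, and $NF then only shows the last):

```shell
LINE='/usr/ucb/cc f none 0755 root bin 4523 12345 846456245 SUNWscpu'
PKG=$(echo "$LINE" | awk '{print $NF}')
echo "$PKG"
```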
---
admintool : gui for various tasks, add user, etc. runnable by users in gid 14
Hardware commands
format = slice/partition disk, surface scan, etc. Linux/DOS calls this fdisk.
Note that under part submenu, use "label" to save changes to the partition
table to disk.
Use "volname" to add a name to the disk volume (shown in format disk list)
prtvtoc : print the volume table of content (vtoc, ie the partition table + disk geometry
data)
drvconfig; tapes; devlinks : tell the system to reconfigure for a new tape drive,
eg /dev/rmt/0cbn etc
cfgadm -c configure [c3] # configure controller 3 (HBA), scan san for LUN
# run devfsadm if needed, then see new "disks" in format
luxadm fcode_download -p
displays HBA firmware version and driver/path info.
luxadm is probably only for the 880 w/ sse dev, and some sun array products.
luxadm probe
display WWN of fc dev
Display resolution
Commands to change the VGA resolution in Solaris 9 and 10, sparc. Don't remember if they also
worked for x86.
fbconfig -help
fbconfig -res \? = list supported resolution for given frame buffer card
It seems to poke the monitor to see what it supports also.
fbconfig -res VESA_STD_1600x1200x85 try = test out the desired resolution; the test doesn't
display anything, but it does set the monitor to that resolution, and the monitor OSD
can be used to see the resolution/refresh or whether it blanks out.
At the end, it prompts whether to save the config or not.
fbconfig -res VESA_STD_1600x1200x85 now = set up for this session only, but not permanent?
fbconfig -res VESA_STD_1600x1200x85 = no subcommand, seems to just set it.
fbconfig -res VESA_STD_1856x1392x75 now = used on a sunblade2500; actual monitor
res=1920x1440, which the fb doesn't support.
Drivers
For the odd occasion of needing to add drivers, here are the things to lookup:
add_drv
rm_drv
FILES
/kernel/drv
OBP
stop-a : abort
stop-d : enter diag mode
stop-f : forth in ttya
stop-n : reset nvram to default values
boot cdrom - install : install a new os (upgrade is done by software after boot).
probe-scsi-all
test-all
test /memory
test net
[var]
output-device def: screen alt: ttya ttyb
input-device def: keyboard alt: ttya ttyb
(some jerk has it set to console, which, with a frame buffer
card present, won't use ttya for output, weird...)
ttya-mode def: 9600,8,n,1,-
screen-#rows def: 34
auto-boot? def: true
device alias are set via nvalias [var] [val] and nvunalias [var]
---
Inside Solaris, from the shell prompt one can issue the eeprom command to view and set eeprom
variables, including nvramrc; see the SDS/SVM root disk mirror section for the procedure.
For nvramrc modification, it is easiest if done from within Solaris rather than
at the actual OK prompt.
For the x86 platform, the eeprom command from the shell must be used, as it doesn't have
a real OBP proper.
eeprom | grep serial # show system board serial, but not serial of machine
# for sun support case.
# eeprom local-mac-address?=true
(use the qfe internal local mac instead of the same mac for all interfaces).
Seems to require a reboot; unplumb and plumb did not get it changed.
ifconfig has another option to program the desired mac on an interface.
---
Note that IDE disks have diff device path than scsi and fc devices:
IDE disks device on x86 has name of the form: c0d0s1 (ie, no d-number)
----
eeprom tty-ignore-cd=true
eeprom input-device=ttya
eeprom output-device=ttya
---
redirecting the serial console to the serial port of the RSC card (Remote Server Control).
Note that it is not like the LOM on the SunFire V100.
RSC requires an OS software counterpart to work.
So, before setting this OBP param, install the RSC software first!!
diag-console rsc
setenv input-device rsc-console
setenv output-device rsc-console
Procedure to restore the console to ttya. It works for the V880 and V480; for the E250, just remove
the RSC card.
After turning on the power to your system, watch the front panel wrench LED for
rapid flashing during the boot process. Press the front panel Power button twice (with a
short, one-second delay in between presses).
(It is not the immediate boot flashing; wait about 1 minute, until the service
light flashes longer and the front panel yellow arrow does not come on.)
Notes:
The above procedure sets all nvram parameters to their default settings.
These changes are temporary and the original values will be restored after the next
hardware or software reset.
Ref:
http://www.sunshack.org/data/sh/2.1/infoserver.central/data/syshbk/General/OBP.html
poweron
poweroff
there are options for the LOM to automatically power cycle the machine if it does not receive LOM
events after a threshold.
This solves mysterious hang problems.
---
(The new V490 claims to have ALOM; while the card looks like an ALOM card,
all the docs point to it being an RSC card (sans the modem connection of the old
RSC card). Couldn't login to tell more :( But it requires
serial redirection like RSC, so not worth the headache.
It is probably a bit more integrated with the OS, in the sense that the OS
can issue commands to configure/interact with ALOM, via the scadm command:
/usr/platform/SUNW,Sun-Fire-V240/sbin/scadm ALOM-cmd
ALOM cmd:
usershow
...
A large number of Sun machines have an RSC PCI card in the back (eg E220R, E420R, V480).
The PCI card has a built-in battery pack and thus allows one to use it even when the
machine is powered off. It allows the admin to remotely power on the machine, and,
if the serial console is redirected, to gain access to it also.
The biggest flaw is that the console has to be redirected via OBP, and it is a redirect,
not a mirroring of the console as done by HP-UX or AIX. The RSC card also needs special
software installed on the machine first, so forget about using it as the console for
setting up the OS on a new box. Again, I like LOM, nothing else from Sun is better than LOM
:)
I do wish they made LOM the standard for ALL machines, but with the new AMD-based
machines, I think Sun is going even more backward, using VGA, PS/2 keyboard and mouse. Yikes!
RSC has both serial console and NIC for telnet/http login to the RSC service.
Main ref:
Sun Remote System Control (RSC) 2.2 User's Guide
It refers to E-250, but okay in 280R, V480
pkgadd -d .
system SUNWrsc Remote System Control
system SUNWrscd Remote System Control User Guide
system SUNWrscj Remote System Control GUI
/usr/platform/.../rsc/rsc-config
Choose to give a static ip, configure a user (username rsc), default mode cuar; the password is
prompted for after it uploads settings to the rsc firmware, which takes several minutes.
Password is 6-8 chars. C.0..Ma.
ok diag-console rsc
ok setenv input-device rsc-console
ok setenv output-device rsc-console
p34:
If RSC is not designated as the system console, you cannot use RSC to access the console.
You can temporarily redirect the console to RSC using the RSC bootmode -u command, or by
choosing Set Boot Mode using the RSC GUI and checking the box labeled "Force the host to
direct the console to RSC." These methods affect the next boot only.
---
Saving config and user account info:
---
Security assessment:
Ports open on the RSC card IP address as per an nmap scan:
filtered ports are not actually connectable using a telnet test,
so really just ports 23 and 7598 are open.
(per snoop, port 5838 was in use, probably a random port for comm)
RSC commands
version Displays version number for RSC firmware and components
showsc Same as version without the -v option
flashftp Updates the RSC Flash ROM image
display-fru Displays information stored in the RSC serial EEPROM
logout Ends your current RSC shell session
setlocator Turns the system locator LED on or off (Sun Fire V480 servers only).
showlocator Shows the state of the system locator LED (Sun Fire V480 servers only).
Diagnostic tool
--
SunVTS, Sun Validation and Test suite for hardware verification and stress test.
http://www.sun.com/oem/products/vts/index.html
ver 5.1 (ps9) works for sol 9 and 8 (maybe 7).
[ver 6.0 works exclusively for sol 10; pkg install slightly diff]
pkgadd -d . SUNWlxml SUNWlxmlx # for sol 8 w/o xml pkg
pkgadd -d . SUNWvts SUNWvtsx SUNWvtsmn
# ask to enable kerberos, answer no.
Can copy /opt/SUNWvts/bin to an NFS dir and run it from there.
Sol 8 still needs SUNWlxml and SUNWlxmlx installed for lib dependencies.
Sol 9 seems to have some warnings but runs ok.
cd /opt/SUNWvts/bin
./sunvts -t -l logdir # -t = TUI, easy to just start default test and let it run
# -l /path/to/logdir so that it does not log to /tmp by default
not sure what the system's view of CPU numbering is; guess it would be:
http://docs.sun.com/db?p=/doc/806-3992-10/6jd3qmd5l&a=view
no special procedure other than unmounting the drive and/or stopping the volume mgmt software
at the os level.
then just plug in the drive and reprobe with drvconfig...
actually, the 450 probed the disk automatically and onlined it (LED on, see new disk in
format).
NIC name
hme0 most machines circa 2000, eg Ultra 10, E220R, E250, E450, etc. 100 Mbps.
aka Happy Meal Ethernet
qfe0 PCI quad card 100 Mbps each, circa 2000
qfe4
iprb0 intel-based NIC (x86, eg Dell desktop, IBM laptop, PCI card)
elxl0 3Com NIC (x86, eg PCI card for desktop)
kernel parameter
kbd -a disable : disable break mode when the keyboard is pulled (safe to pull the keyboard).
kbd -a enable : enable break mode; when the keyboard is pulled, the system drops to the OK prompt.
# also make changes to /etc/default/kbd for the boot time default.
ls /platform/sun4u/kernel/
isalist
(ref)
System Tuning
Virtual Adrian
SAR
Multi boot
reboot -- disk2
Jumpstart
Run
add_install_server
from the Solaris CD #1, inside the Tools directory.
It will copy over all the necessary files to host the jumpstart server.
Files to modify after the jumpstart server is set up, when you just need to add a client:
rules
Profiles/
Sysidcfg/
/etc/ethers
/etc/hosts
cd /jumpstart/OS.local/sol_10_305_sparc/Solaris_10/Tools/
./add_install_client -p 172.27.38.15:/jumpstart/Sysidcfg/sol-client10 -c
172.27.38.15:/jumpstart sol-client10 sun4u
cd /jumpstart/OS.local/sol_8_1001_sparc/Solaris_8/Tools/
./add_install_client -p 172.27.13.15:/jumpstart/Sysidcfg/sol-client8 -c
172.27.13.15:/jumpstart sol-client8 sun4u
edit /etc/bootparams, and ensure all entries for the server use IP addresses, not hostnames.
If wanting to use another NFS server for the main file repository,
you would need to edit the bootparams file carefully.
# net1 would be the second NIC, though the sysidcfg file would need to be updated
# to assign IP on this interface instead of default/primary NIC at net0
Caveats:
network_interface=primary
network_interface=default
Virtual interfaces:
If the jumpstart machine has a single nic that would be plugged into different vlans,
it is okay to have an /etc/rc2.d/S98setVlan script that sets up a bunch of virtual
interfaces:
ifconfig iprb0:8 plumb
ifconfig iprb0:8 172.27.8.15 netmask + broadcast + up
ifconfig iprb0:13 plumb
ifconfig iprb0:13 172.27.13.15 netmask + broadcast + up
ifconfig iprb0:38 plumb
ifconfig iprb0:38 172.27.38.15 netmask + broadcast + up
Ensure that /etc/netmasks has all the vlans defined; a mistake may cause a
jumpstart client boot-time hang problem.
This way, just plug the cable into the right vlan and no software changes are needed.
The downside of this config is that routing to a different vlan defined by a
virtual interface won't work (unless the switch configures all the vlans on the
port the jumpstart server NIC is connected to).
/etc/init.d/boot.server stop
/etc/init.d/boot.server start
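The /etc/netmasks entries implied by the virtual interfaces above would look like this (assuming /24s, since the ifconfig lines use netmask +, which reads this file):

```
172.27.8.0    255.255.255.0
172.27.13.0   255.255.255.0
172.27.38.0   255.255.255.0
```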
Monitor task:
--------------------------
set ip
set gateway 10.215.2.2
set netmask 255.255.255.0
Target:
disk 1-6, stripe + mirror (raid 1 in T3+ of 2n disks, n>1, will automatically be stripe + mirror)
disk 7-8, mirror
disk 9, hot spare
files:
/etc/
syslog
---
Sun StorEdge Component Manager is software that can be installed on a host to manage the
T3/T3+ array.
But I didn't install it, and configured the array via telnet/serial login cli.
Raid Manager (RM6) is used to control the A1000 (array) and D1000 (JBOD) boxen.
These are pretty old by now, popular during the dot-bomb days circa Y2k.
As old as the D1000 is, it will take drives up to 144 GB in size.
RM6 commands
raidutil -c c2t5d0 -i : get info about the raid device, such as firmware version, etc.
drivutil
fwutil
healthck
lad
logutil
nvutil
parityck
raidutil
rdacutil
rm6
storutil
Basic Information
rm6 Gives an overview of the software's graphical user interface (GUI), command-line
programs, background process programs and driver modules, and customizable elements.
rdac Describes the software's support for RDAC (Redundant Disk Array Controller),
including details on any applicable drivers and daemons.
rmevent The RAID Event File Format. This is the file format used by the applications to
dispatch an event to the rmscript notification script. It also is the format for Message
Log's log file (the default is rmlog.log).
raidcode.txt A text file containing information about the various RAID events and error
codes.
Command-Line Utilities
drivutil The drive/LUN utility. This program manages drives/LUNs. It allows you to obtain
drive/LUN information, revive a LUN, fail/revive a drive, and obtain LUN reconstruction
progress.
fwutil The controller firmware download utility. This program downloads appware,
bootware, or an NVSRAM file to a specified controller.
healthck The health check utility. This program performs a health check on the indicated
RAID module and displays a report to standard output.
lad The list array devices utility. This program identifies the RAID controllers and
logical units that are connected to the system.
logutil The log format utility. This program formats the error log file and displays a
formatted version to the standard output.
nvutil The NVSRAM display/modification utility. This program views and changes RAID
controller non-volatile RAM settings, allowing for some customization of controller
behavior. It verifies and fixes any NVSRAM settings that are not compatible with the
storage management software.
parityck The parity check/repair utility. This program checks and, if necessary, repairs
the parity information stored on the array.
raidutil The RAID configuration utility. This program is the command-line counterpart to
the graphical Configuration application. It allows you to create and delete RAID logical
units and hot spares from a command line or script. It also allows certain battery
management functions to be performed on one controller at a time.
rdacutil The redundant disk array controller management utility. This program permits
certain redundant controller operations such as LUN load balancing and controller
failover and restoration to be performed from a command line or script.
storutil The host store utility. This program performs certain operations on a region of
the controller called host store. You can use this utility to set an independent
controller configuration, change RAID module names, and clear information in the host
store region.
arraymon The array monitor background process. The array monitor watches for the
rdaemon
(UNIX only)
The redundant I/O path error resolution daemon. The rdaemon receives and reacts to
redundant controller exception events and participates in the applicationtransparent
recovery of those events through error analysis and, if necessary, controller failover.
rdriver
(Solaris only)
The redundant I/O path routing driver. The rdriver module works in cooperation with
rdaemon in handling the transparent recovery of I/O path failures. It routes I/Os down
the proper path and communicates with the rdaemon about errors and their resolution.
Customizable Elements
rmparams The storage management software's parameter file. This ASCII file has a number
of parameter settings, such as the array monitor poll interval, what time to perform the
daily array parity check, and so on. The storage management applications read this file
at startup or at other selected times during their execution. A subset of the parameters
in the rmparams file are changeable under the graphical user interface.
For more information about the rmparams file, see the Sun StorEdge RAID Manager
Installation and Support Guide.
rmscript The notification script. This script is called by the array monitor and other
programs whenever an important event is reported. The file has certain standard actions,
including posting the event to the message log (rmlog.log), sending email to the
superuser/administrator and, in some cases, sending an SNMP trap.
Although you can edit the rmscript file, be sure that you do not disturb any of the
standard actions.
----
The a1000 (at least the one attached to sonata, then moved to perseus) scsi controller is
DIFF; SE doesn't work. Per An, DIFF is high voltage differential; SE is single-ended.
Thus, the A1000 controller is high voltage diff. If connected to SE, the scsi bus light blinks
on the A1000, and no disk/array will be seen by the host.
IMHO, this is quite a nightmarish exercise. Lots of steps and if-conditions of what to do,
listed in about 3 huge HTML pages.
Cluster patch for Solaris will not cover this at all.
install RM6 (old software, circa 2002; version 6.22.1 was the last one).
get patches for the OS; most are in the cluster patch now.
patchadd -M . 112126-06
# patchadd -M . 113277-04 113033-03 # these 2 seems to be added by cluster patch
# 113033-03 is only for sbus hba
init S; patchadd 112233-04; touch /reconfigure; reboot
#112233 seems to have later version in latest cluster patch.
run rm6, select the controller on the array, go to firmware, and after all the warnings, it will
provide a list of firmwares that came with RM6, ready for download to the array controller.
Upgrade them in sequence to avoid unsupported firmware-jump problems.
It is possible to change a group from RAID 10 to RAID 5 while the disk is online w/ the file system
active.
Extra space gained can be used to create an extra LUN.
But RM6 (on the A1000) does not support LUN expansion, so if you want to create a single LUN
with all the disk space of the RAID 5, you will still need to remove the LUN and then
recreate it.
This of course means taking the fs offline.
RM6 warns that the OS communicates with the array expecting to see a LUN 0, and problems can
arise when there is no LUN 0, so recreate it right away.
So far, no problem. Maybe one should avoid using format and other disk-poking tools when
there is no LUN 0.
---
raid storage array
StorEdge 3510
IP config
software control via fc port: Configuration Service Console
/opt/SUNWsscs/sscsconsole/sscs (GUI)
An LD/LV (Logical Drive/Logical Volume) is created; then, inside the LD, partitions are
created.
The partitions are shown to the host as LUNs.
The SE3510 allows a global standby/hotspare disk that can serve multiple LD/LVs.
---
---
Use the custom config button to see all the tasks that can be done on an LV, such as
partition/lun creation, channel/port binding (for the host to see), etc.
TBD