
Veritas Volume Manager - Volume Manager FAQ - Classic commands (wikis.sun.com)
by odo, last edited by odo on Feb 06, 2008
---Note that the first 28 questions have answers related to using the command line.
Sun does not normally support this bottom-up method of creating volumes. An
explanation of why can be seen in Q97. The questions are designed to give you an
idea of what goes on in the background. However, if at all possible, use the
GUI.
---1. How do I create a disk group ?
2. How do I create a subdisk ?
3. How do I split a subdisk into 2 or more subdisks ?
4. How do I create a concat plex from two subdisks ?
5. How do I create a volume from a plex ?
6. What does this volume look like from a vxprint -Ath output ?
7. How do I start this volume ?
8. What does this volume look like from a vxprint -Ath output ?
9. How do I make this a mirrored volume ?
10. What does this volume look like from a vxprint -Ath output during mirror sync ?
11. What does this volume look like from a vxprint -Ath output after mirror ?
12. How do I blow away a mirrored volume ?
13. How do I blow away the subdisks ?
14. How do I attach a plex to another plex in a volume ?
15. What does this volume look like from a vxprint -Ath output during attach ?
16. What does this volume look like from a vxprint -Ath output after attach ?
17. How do I detach a plex from a volume ?
18. What does this volume look like from a vxprint -Ath output after detach ?
19. How do I disassociate a plex from a volume ?
20. How do I add a disk into my disk group ?
21. How do I create a RAID5 volume ?
22. How does the vxprint -Ath for a RAID 5 volume look during the build ?
23. How does the vxprint -Ath for a RAID 5 volume look after the build ?
24. How do I get information about a RAID5 volume ?
25. How do I build a hot-spare ?
26. How does a hot spare look in vxprint -Ath
27. How do I make this volume a filesystem and mount it ?
28. How does a RAID 5 volume look if we lose a sub-disk ?
---29. How can I tell if a disk has been hot spared ?
30. How do I redirect mail to be sent to another user instead of root in the
event of a failure & hot spare replacement ?
31. How do I force a disk back into its original configuration after it has
been hot-spared ?
32. How do I make Volume Manager see newly added disks ?
33. How do I determine unused space in a disk group configuration ?
34. How do I list all the subdisks in my volume ?
35. How can I find out which physical disks are in a disk group ?
36. How can I change the name of a VM disk ?
37. How do I move all the subdisks from one physical disk to another disk ?
38. Is it safe to use vxevac on a live system ?
39. Can I use a hot-spare across multiple disk groups ?
40. How does Dirty Region Logging (DRL) work ?

41. Early Notifier Alert RAID-5


42. Hot relocation vs. hot sparing
43. How can I manually failover a diskgroup in an SSA ?
44. How can I grow the private region of a disk group ?
45. Where can I get the latest SSA Matrix - (Patches & Compatibility) ?
46. Where can I get Demo License for VxVM (2.6) ?
47. How can I create a small rootdg without encapsulating a disk ?
48. What should I consider when configuring swap ?
49. How can I stop volume creation ?
50. How can I recover a failing disk - FAILING ?
51. How do I move disks from one disk group to another ?
52. How do I return to underlying devices ?
53. How could I backup and restore private regions ?
54. How can I add more swap from another disk under Veritas ?
55. How can I setup an alternate dump device (rcS.d/S35newdumpdevice) ?
56. Official SSA/A5000 Software/Firmware Configuration Matrix
57. Veritas On-Line Storage Manager (www.veritas.com)
58. Download Veritas Software
59. Veritas Utilities and Scripts (Unsupported)
60. What can the vxprivutil command be used for ?
61. How can I clear the message 'volume ? is locked by another utility' ?
62. How can I clear subdisks which are in IOFAIL state ?
63. How do I upgrade to Solaris 2.5 & VM 2.1.1 ?
64. How does a hotspare know when to kick in ?
65. How do I encapsulate the boot disk ?
66. How can I clone a Sparc Storage Array which is under Veritas control ?
67. How to import rootdg from another host ?
68. How do I increase the size of a striped mirrored volume ?
69. How do I boot from a SSA ?
70. How can I set the ownership of a volume ?
71. How do I change the WWN of a Sparc Storage Array ?
72. When I try to encapsulate the boot disk I get the error "not enough free
partitions". Why ?
73. I want to move a Volume out of one dg into another, all the disks in the
original dg are part of the Volume. How do I do it ?
74. How do I import disk groups on dual-ported SSAs ?
75. How do I Start/Stop a disk tray on a SSA ?
76. How do I get a plex out of being in a stale state ?
77. How do I change the hostname info on the private regions ?
78. Can I encapsulate a disk that has no free space for a private region ?
79. How can I get iostat to report the disks as c#t#d# instead of sd# ?
80. How can I get round the error "volume locked by another utility" when
trying to detach a plex ?
81. Veritas Volume Manager Disaster Recovery Guide
82. Can I use prestoserve with the SSA ?
83. Can I grow the root filesystem online ?
84. Can I change the hostname of a machine without affecting Volume Manager ?
85. Can I move some volumes from one dg to another ?
86. Can I test the SOC from the ok prompt ?
87. Can I restore the rootdisk from backup ?
88. Can I make my SparcStation wait for the SSA to spin up ?
89. Can I move an SSA to a new system and save the data on it ?
90. Can I have a rootdg without encapsulating a disk ?
91. Can I increase the size of the private region ? When I try to add a volume
to a dg I get "vxvm:vxassist: ERROR: No more space in disk group
configuration".
92. Cannot import Diskgroup - "No configuration copies"
93. My volume is ENABLED/SYNC but there doesn't appear to be a syncing process
(i.e. vxrecover) running. What do I do ?


94. How can I view the contents of my Private Region ?
95. I need to change the hostname in my Private Regions and my /etc/vx/volboot
file. How do I do it ?
96. Veritas has changed my VTOC layout when it encapsulated my rootdisk. How do
I recover my original VTOC?
97. My volume won't start. I think it may be because my volume does not start
on a cylinder boundary. How can I determine if this is true ?
98. Why can't I simply mirror my secondary rootdisk mirror back to my primary
rootdisk if my primary disk fails ?
99. How can I make hot-relocation use ONLY spare disks ?
100. How do I enable debug mode on vxconfigd ?
101. How do I disable DMP, dynamic multi-pathing ?
102. How do I take a disk out from under Veritas control ?
103. Volume Manager has relayed out my VTOC after encapsulation. I now need to
get back my original VTOC but there is no original in
/etc/vx/reconfig.d/disk.d/vtoc. How can I get back my /opt filesystem ?
104. What are the different status codes in a 'vxdisk list' output ?
105. How do I delete a disk from a disk group ?
106. How can I fix "vxvm: WARNING: No suitable partition from swapvol to set as
the dump device." ?
107. After an A5000 disk failure, I lost my rootdg and now I get "vxvm:vxdisk:
ERROR: Cannot get records from vxconfigd: Record not in disk group" when I run
vxdisk list, and "vxvm:vxdctl: ERROR: enable failed: System error occurred in
the client" when I run vxdctl enable. How can I fix it ?
108. After some A5000/Photon loop offline/onlines, my volume now has one plex in
STALE state and the other in IOFAIL. How can I fix this ?
109. Is there any VxVM limitation on the number of volumes which can be created
on a system ?
110. Configuration copies explained
----1.
How do I create a disk group ?
/usr/sbin/vxdg init <dg> disk01=c2t0d0s2
2.
How do I create a subdisk ?
/usr/sbin/vxmake -g <dg> sd disk01-01 dm_name=disk01 dm_offset=0 len=4152640
3.
How do I split a subdisk into 2 or more subdisks ?
/usr/sbin/vxsd -g <dg> -s 152640 split disk02-01 disk02-02
4.
How do I create a concat plex from two subdisks ?
/usr/sbin/vxmake -g <dg> plex pl-01 sd=disk01-01,disk02-01
5.
How do I create a volume from a plex ?
/usr/sbin/vxmake -U fsgen -g <dg> vol vol01 read_pol=SELECT user=root
group=other mode=0644 log_type=NONE len=4305279 plex=pl-01
6.

What does this volume look like from a vxprint -Ath output ?


V NAME USETYPE KSTATE STATE LENGTH READPOL PREFPLEX
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
v vol01 fsgen DISABLED CLEAN 4305279 SELECT -
pl pl-01 vol01 DISABLED CLEAN 4305279 CONCAT - RW
sd disk01-01 pl-01 disk01 0 4152639 0 c1t5d0 ENA
sd disk02-01 pl-01 disk02 0 152640 4152639 c1t3d0 ENA

7.
How do I start this volume ?
/usr/sbin/vxvol -g <dg> -f start vol01
8.

What does this volume look like from a vxprint -Ath output ?


V NAME USETYPE KSTATE STATE LENGTH READPOL PREFPLEX
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
v vol01 fsgen ENABLED ACTIVE 4305279 SELECT -
pl pl-01 vol01 ENABLED ACTIVE 4305279 CONCAT - RW
sd disk01-01 pl-01 disk01 0 4152639 0 c1t5d0 ENA
sd disk02-01 pl-01 disk02 0 152640 4152639 c1t3d0 ENA

9.
How do I make this a mirrored volume ?
/usr/sbin/vxassist -g <dg> mirror vol01 layout=nostripe
10.
What does this volume look like from a vxprint -Ath output during mirror sync ?
V NAME USETYPE KSTATE STATE LENGTH READPOL PREFPLEX
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
v vol01 fsgen ENABLED ACTIVE 4305279 SELECT -
pl vol01-01 vol01 ENABLED TEMPRMSD 4306160 CONCAT - WO
sd disk04-01 vol01-01 disk04 0 153520 0 c1t1d0 ENA
sd disk03-01 vol01-01 disk03 0 4152640 153520 c1t4d0 ENA
pl pl-01 vol01 ENABLED ACTIVE 4305279 CONCAT - RW
sd disk01-01 pl-01 disk01 0 4152639 0 c1t5d0 ENA
sd disk02-01 pl-01 disk02 0 152640 4152639 c1t3d0 ENA

11.

What does this volume look like from a vxprint -Ath output after mirror ?
V NAME USETYPE KSTATE STATE LENGTH READPOL PREFPLEX
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
v vol01 fsgen ENABLED ACTIVE 4305279 SELECT -
pl vol01-01 vol01 ENABLED ACTIVE 4306160 CONCAT - RW
sd disk04-01 vol01-01 disk04 0 153520 0 c1t1d0 ENA
sd disk03-01 vol01-01 disk03 0 4152640 153520 c1t4d0 ENA
pl pl-01 vol01 ENABLED ACTIVE 4305279 CONCAT - RW
sd disk01-01 pl-01 disk01 0 4152639 0 c1t5d0 ENA
sd disk02-01 pl-01 disk02 0 152640 4152639 c1t3d0 ENA

12.
How do I blow away a mirrored volume ?
/usr/sbin/vxedit -g <dg> -fr rm vol01
13.
How do I blow away the subdisks ?
/usr/sbin/vxedit -g <dg> rm disk02-02 disk01-02

14.
How do I attach a plex to another plex in a volume ?
/usr/sbin/vxplex -g <dg> att vol01 Cor-Plex-2
15.
What does this volume look like from a vxprint -Ath output during attach ?
V NAME USETYPE KSTATE STATE LENGTH READPOL PREFPLEX
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
v vol01 fsgen ENABLED ACTIVE 2000000 SELECT -
pl Cor-Plex vol01 ENABLED ACTIVE 2000000 CONCAT - RW
sd disk-01 Cor-Plex disk01 0 1000000 0 c1t5d0 ENA
sd disk-02 Cor-Plex disk02 0 1000000 1000000 c1t3d0 ENA
pl Cor-Plex-2 vol01 ENABLED TEMP 2000000 CONCAT - WO
sd disk-01-2 Cor-Plex-2 disk01 1000000 1000000 0 c1t5d0 ENA
sd disk-02-2 Cor-Plex-2 disk02 1000000 1000000 1000000 c1t3d0 ENA

16.
What does this volume look like from a vxprint -Ath output after attach ?
V NAME USETYPE KSTATE STATE LENGTH READPOL PREFPLEX
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
v vol01 fsgen ENABLED ACTIVE 2000000 SELECT -
pl Cor-Plex vol01 ENABLED ACTIVE 2000000 CONCAT - RW
sd disk-01 Cor-Plex disk01 0 1000000 0 c1t5d0 ENA
sd disk-02 Cor-Plex disk02 0 1000000 1000000 c1t3d0 ENA
pl Cor-Plex-2 vol01 ENABLED ACTIVE 2000000 CONCAT - RW
sd disk-01-2 Cor-Plex-2 disk01 1000000 1000000 0 c1t5d0 ENA
sd disk-02-2 Cor-Plex-2 disk02 1000000 1000000 1000000 c1t3d0 ENA

17.
How do I detach a plex from a volume ?
/usr/sbin/vxplex -g <dg> det Cor-Plex-2
18.
What does this volume look like from a vxprint -Ath output after detach ?
V NAME USETYPE KSTATE STATE LENGTH READPOL PREFPLEX
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
v vol01 fsgen ENABLED ACTIVE 2000000 SELECT -
pl Cor-Plex vol01 ENABLED ACTIVE 2000000 CONCAT - RW
sd disk-01 Cor-Plex disk01 0 1000000 0 c1t5d0 ENA
sd disk-02 Cor-Plex disk02 0 1000000 1000000 c1t3d0 ENA
pl Cor-Plex-2 vol01 DETACHED STALE 2000000 CONCAT - RW
sd disk-01-2 Cor-Plex-2 disk01 1000000 1000000 0 c1t5d0 ENA
sd disk-02-2 Cor-Plex-2 disk02 1000000 1000000 1000000 c1t3d0 ENA

19.
How do I disassociate a plex from a volume ?
/usr/sbin/vxplex -g <dg> dis Cor-Plex-2
pl Cor-Plex-2 - DISABLED - 2000000 CONCAT - RW
sd disk-01-2 Cor-Plex-2 disk01 1000000 1000000 0 c1t5d0 ENA
sd disk-02-2 Cor-Plex-2 disk02 1000000 1000000 1000000 c1t3d0 ENA

20.
How do I add a disk into my disk group ?
/usr/sbin/vxdg -g <dg> adddisk disk05=c1t5d2s2
21.
How do I create a RAID5 volume ?
/usr/sbin/vxassist -g <dg> -p maxsize layout=raid5,log nstripe=3 stripeunit=32
disk01 disk02 disk03 disk04
This returns the maximum size (8304640) of the volume which can be used in the
following command.
/usr/sbin/vxassist -g <dg> make vol01 8304640 layout=raid5,log nstripe=3
stripeunit=32 disk01 disk02 disk03 disk04
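As a sanity check on that number (illustrative arithmetic, not VxVM output): a
RAID-5 volume's usable capacity is (columns - 1) * column length, since one
column's worth of space is consumed by parity. The column length of 4152320
sectors below is inferred from the maxsize value reported above, not read from
vxprint.

```shell
# RAID-5 usable capacity = (ncols - 1) * column_length (parity costs one column).
# NCOLS matches nstripe=3 above; COLLEN is an inferred value, not vxprint output.
NCOLS=3
COLLEN=4152320                      # sectors per column (assumed)
USABLE=$(( (NCOLS - 1) * COLLEN ))
echo "$USABLE"                      # matches the maxsize reported by vxassist
```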
22.

How does the vxprint -Ath for a RAID 5 volume look during the build ?
v vol-r5 raid5 DETACHED EMPTY 12456192 RAID -
pl vol-r5-01 vol-r5 ENABLED EMPTY 12457728 RAID 4/128 RW
sd disk04-01 vol-r5-01 disk04 0 4152640 0/0 c1t1d0 ENA
sd disk02-01 vol-r5-01 disk02 0 4152640 1/0 c1t3d0 ENA
sd disk03-01 vol-r5-01 disk03 0 4152640 2/0 c1t4d0 ENA
sd disk01-01 vol-r5-01 disk01 0 4152640 3/0 c1t5d0 ENA
pl vol-r5-02 vol-r5 ENABLED EMPTY 5600 CONCAT - RW
sd disk05-01 vol-r5-02 disk05 0 5600 0 c1t2d2 ENA

23.

How does the vxprint -Ath for a RAID 5 volume look after the build ?
v vol-r5 raid5 ENABLED ACTIVE 12456192 RAID -
pl vol-r5-01 vol-r5 ENABLED ACTIVE 12457728 RAID 4/128 RW
sd disk04-01 vol-r5-01 disk04 0 4152640 0/0 c1t1d0 ENA
sd disk02-01 vol-r5-01 disk02 0 4152640 1/0 c1t3d0 ENA
sd disk03-01 vol-r5-01 disk03 0 4152640 2/0 c1t4d0 ENA
sd disk01-01 vol-r5-01 disk01 0 4152640 3/0 c1t5d0 ENA
pl vol-r5-02 vol-r5 ENABLED LOG 5600 CONCAT - RW
sd disk05-01 vol-r5-02 disk05 0 5600 0 c1t2d2 ENA

24.
How do I get information about a RAID5 volume ?
/usr/sbin/vxinfo -U raid5 -g <dg> vol-r5
25.
How do I build a hot-spare ?
/usr/sbin/vxedit -g <dg> set spare=on disk06
26.
How does a hot spare look in vxprint -Ath
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
dm disk06 c1t5d1s2 sliced 66079 978880 SPARE

27.
How do I make this volume a filesystem and mount it ?
/usr/sbin/mkfs /dev/vx/rdsk/<dg>/vol-r5 12456192
/usr/sbin/mount /dev/vx/dsk/<dg>/vol-r5 /mnt
28.

How does a RAID 5 volume look if we lose a sub-disk ?


v vol-r5 raid5 ENABLED ACTIVE 12456192 RAID -
pl vol-r5-01 vol-r5 ENABLED ACTIVE 12457728 RAID 4/128 RW
sd disk04-01 vol-r5-01 disk04 0 4152640 0/0 c1t1d0 ENA
sd disk02-01 vol-r5-01 disk02 0 4152640 1/0 - NDEV
sd disk03-01 vol-r5-01 disk03 0 4152640 2/0 c1t4d0 ENA
sd disk01-01 vol-r5-01 disk01 0 4152640 3/0 c1t5d0 ENA
pl vol-r5-02 vol-r5 ENABLED LOG 5600 CONCAT - RW
sd disk05-01 vol-r5-02 disk05 0 5600 0 c1t2d2 ENA

29.

How can I tell if a disk has been hot spared ?


A mail message will have been sent to root (by default) indicating
that a hot relocation has taken place.
The message subjects should be :
Subject: Volume Manager failures on host xxx
Subject: Attempting VxVM relocation on host xxx

30.

How do I redirect mail to be sent to another user instead of root in the
event of a failure & hot spare replacement ?


Edit /etc/rc2.d/S95vxvmrecover and enter 'vxsparecheck user-name &'
where user-name is the user you wish to have mailed.
Note this is hot sparing and not hot relocation.
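For example, the relevant line in /etc/rc2.d/S95vxvmrecover might end up
looking like this (a sketch only; 'admin' is a placeholder user name, and the
exact contents of the script vary by VxVM release):

```shell
# Excerpt from /etc/rc2.d/S95vxvmrecover (sketch): mail hot-spare
# notifications to the user 'admin' instead of root.
vxsparecheck admin &
```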
31.

How do I force a disk back into its original configuration after it has
been hot-spared ?
/usr/sbin/vxassist -g <dg> move <vol> !disk05 disk02
where disk05 is the 'relocated to' disk and disk02 is the 'relocated
from' disk
Of course, before moving any relocated subdisks back, ensure the disk that
experienced the failure has been fixed.
An alternative would be to use vxevac. See question 37 below.
32.
How do I make Volume Manager see newly added disks ?
/usr/sbin/vxdctl enable
33.
How do I determine unused space in a disk group configuration ?
/usr/sbin/vxassist -g <dg> maxsize layout=nolog
34.
How do I list all the subdisks in my volume ?
/usr/sbin/vxedit -g <dg> list
35.
How can I find out which physical disks are in a disk group ?
/usr/sbin/vxdisk (-s) list
36.
How can I change the name of a VM disk ?
/usr/sbin/vxedit -g <dg> rename <old name> <new name>
An alternative would be to use vxassist. See question 31 above.
37.
How do I move all the subdisks from one physical disk to another disk ?
/usr/sbin/vxevac -g <dg> disk03 disk02

38.
Is it safe to use vxevac on a live system ?
Yes. Volume Manager will build a mirror and do a full sync from the original
disk to the new disk.
39.
Can I use a hot-spare across multiple disk groups ?
No. Hot-spares only act as replacements for other disks in the same disk group.

The rest of this section is designed to give you an idea of various disk
management tips and hints for Volume Manager.
----40.
How does Dirty Region Logging (DRL) work ?
What is DRL ?
Dirty Region Logging is a bitmap mechanism used to speed up mirror
resynchronisation after a system failure.
Where DRLs can be used
A DRL is ONLY used when a volume has 2 or more plexes (i.e. a mirror) and the
system panics (or someone yanks out the power cord).
NB. One mistake people make is assuming that the DRL is the same thing as the
log used on a RAID-5 volume. They function completely differently.
The DRL is not added automatically, because it is an "option" and not always
necessary.
If one side of a mirror goes down and is marked as bad and needs to be
restarted, a full resync will be done and the DRLs have no value. The main use
for DRLs is in system failure. In most other scenarios they are not used.
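Since the DRL is not added automatically, it has to be attached by hand. A
hedged sketch using the vxassist log options, with <dg> and vol01 as
placeholders from the examples above (check your release's vxassist man page
before relying on the exact syntax):

```shell
# Add a dirty region log plex to an existing mirrored volume (sketch):
/usr/sbin/vxassist -g <dg> addlog vol01 logtype=drl
# ...and remove it again if it is no longer wanted:
/usr/sbin/vxassist -g <dg> remove log vol01
```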
What Veritas DRLs will NOT do
In SDS you can do a metaoffline, and the SDS DRL keeps track of which regions
are changed; when doing a metaonline some time later, SDS only resyncs the
changed regions. This functionality is not part of the design of the Veritas
software.
In the case where a plex is disassociated, the entire plex goes stale, so when
you reattach the plex to the mirror, a full resync is required.
DRLs are not used for on-line reconfiguration operations.
Again, the DRL is ONLY used if the system crashes and is subsequently rebooted.
How DRLs work
When the system is rebooted after a non-graceful shutdown, Volume Manager needs
to synchronize the plexes to guarantee that the mirrors are identical. Without
a DRL it does this by reading each and every block and copying the data to the
"other" plex, which obviously takes a long time. If there is a DRL attached to
the volume, Volume Manager only has to sync the regions which are marked dirty.
Since it works on a region basis, resync times are greatly reduced in most,
though not necessarily all, cases.
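To put a number on the saving (illustrative arithmetic only; the region size
and dirty count below are assumed values, not VxVM defaults): with the
4305279-sector volume from the earlier examples, a 1024-sector region size,
and 12 regions dirty at crash time, the resync touches only 12 * 1024 sectors
instead of the whole volume.

```shell
# Illustrative only: REGION and DIRTY are assumed values, not VxVM defaults.
VOLLEN=4305279     # volume length in sectors (from the examples above)
REGION=1024        # assumed DRL region size in sectors
DIRTY=12           # assumed number of regions dirty at crash time
RESYNC=$(( DIRTY * REGION ))
echo "resync $RESYNC of $VOLLEN sectors"   # 12288 instead of 4305279
```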
Performance
The performance difference you may notice during normal operation may be due to
the updates in the log. There is some amount of overhead involved, though
normally it is minimal.
Fast writes with DRLs
Fast_write has nothing to do with DRL; Veritas has no knowledge of fast_write.
The SSA fast_write feature is safe with DRLs in all configurations. There were
some problems in the past, but that is ancient history.
41. EARLYNOTIFIER_ALERT_RAID-5
The EARLYNOTIFIER_ALERT_RAID-5 says:
For non-typical RAID-5 configurations which have the following characteristics:
- a RAID-5 column which is split into multiple subdisks where the subdisks do
NOT end and begin on a stripe-unit aligned boundary
- and a RAID-5 reconstruction operation was performed.
Data corruption can be present in the region of the RAID-5 column where the
split subdisks align.

So as a preventive action I want to verify the customer's RAID-5 configurations.


( using explorer results )
This can be done as follows.
Here's an example:
A. Good case
Case-1:
# vxprint -g kuro -hqvt
v vol01 raid5 ENABLED ACTIVE 640000 RAID -
pl vol01-01 vol01 ENABLED ACTIVE 640000 RAID 3/32 RW
sd kuro01-01 vol01-01 kuro01 0 224000 0/0 c2t1d0 ENA
sd kuro02-01 vol01-01 kuro02 0 224000 1/0 c2t1d1 ENA
sd kuro03-01 vol01-01 kuro03 0 224000 2/0 c2t1d2 ENA
pl vol01-02 vol01 ENABLED LOG 1008 CONCAT - RW
sd kuro04-01 vol01-02 kuro04 0 1008 0 c2t1d3 ENA
The subdisk size is a multiple of the stripe width, so you don't need to worry
about this case.
Case-2:
# vxprint -g kuro -hqvt
v vol01 raid5 ENABLED ACTIVE 640000 RAID -
pl vol01-01 vol01 ENABLED ACTIVE 640000 RAID 3/32 RW
sd kuro01-01 vol01-01 kuro01 0 224000 0/0 c2t1d0 ENA
sd kuro05-01 vol01-01 kuro05 0 96000 0/224000 c2t4d0 ENA
sd kuro02-01 vol01-01 kuro02 0 224000 1/0 c2t1d1 ENA
sd kuro06-01 vol01-01 kuro06 0 96000 1/224000 c2t4d1 ENA
sd kuro03-01 vol01-01 kuro03 0 224000 2/0 c2t1d2 ENA
sd kuro07-01 vol01-01 kuro07 0 96000 2/224000 c2t4d2 ENA
pl vol01-02 vol01 ENABLED LOG 1008 CONCAT - RW
sd kuro04-01 vol01-02 kuro04 0 1008 0 c2t1d3 ENA
The subdisks are concatenated, but they are still aligned on stripe-unit
boundaries, so you don't need to worry about this case either.
B. Bad case --- you need to pay attention to these configurations.
Case-1:
# vxprint -g kuro -hqvt
v vol01 raid5 ENABLED ACTIVE 614400 RAID -
pl vol01-01 vol01 ENABLED ACTIVE 617728 RAID 3/32 RW
sd kuro01-01 vol01-01 kuro01 0 205200 0/0 c2t1d0 ENA
sd kuro05-01 vol01-01 kuro05 0 103680 0/205200 c2t4d0 ENA
sd kuro02-01 vol01-01 kuro02 0 205200 1/0 c2t1d1 ENA
sd kuro06-01 vol01-01 kuro06 0 102816 1/205200 c2t4d1 ENA
sd kuro03-01 vol01-01 kuro03 0 205200 2/0 c2t1d2 ENA
sd kuro07-01 vol01-01 kuro07 0 102816 2/205200 c2t4d2 ENA
pl vol01-02 vol01 ENABLED LOG 1008 CONCAT - RW
sd kuro04-01 vol01-02 kuro04 0 1008 0 c2t1d3 ENA
This means the last stripe unit of "kuro01-01" contains only 16 blocks
(205200 mod 32 = 16), so the end of the subdisk is not aligned on a
stripe-unit boundary. As a result, the first 16 blocks of "kuro05-01" are used
to complete that stripe. You need to pay attention to this configuration; this
is what the Early Notifier warns about.
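The alignment test can be reproduced by hand: a subdisk split point is safe
only if the subdisk length is an exact multiple of the stripe unit. A sketch
using the lengths from the good and bad cases above (32-sector stripe unit):

```shell
# A subdisk split point is stripe-unit aligned iff len % stripeunit == 0.
SU=32
for LEN in 224000 205200; do
  if [ $(( LEN % SU )) -eq 0 ]; then
    echo "$LEN: aligned (safe)"
  else
    echo "$LEN: NOT aligned, last stripe unit has $(( LEN % SU )) blocks"
  fi
done
```

Running this prints "aligned" for the 224000-sector subdisks and flags the
205200-sector subdisks with a 16-block remainder, matching the cases above.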
Case-2:
# vxprint -g kuro -hqvt

v vol01 raid5 ENABLED ACTIVE 614400 RAID -
pl vol01-01 vol01 ENABLED ACTIVE 617728 RAID 3/32 RW
sd kuro01-01 vol01-01 kuro01 0 205200 0/0 c2t1d0 ENA
sd kuro02-01 vol01-01 kuro02 0 205200 1/0 c2t1d1 ENA
sd kuro03-01 vol01-01 kuro03 0 205200 2/0 c2t1d2 ENA
pl vol01-02 vol01 ENABLED LOG 1008 CONCAT - RW
sd kuro04-01 vol01-02 kuro04 0 1008 0 c2t1d3 ENA
Currently there are no concatenated subdisks in this RAID-5 volume, but each
subdisk size is not aligned with the stripe width; the last stripe has only 16
blocks. At the moment VxVM does not use this last partial stripe. But if you
later grow this volume onto 3 new drives with "vxva" or "vxassist growto", the
configuration will change to Case-1 above, so you need to pay attention to
this configuration too.
42. Hot relocation vs. hot sparing
Presently, if there is a media (read) failure of redundant data, Volume
Manager will read from the other plex and then write the data back to the plex
that encountered the error. The data is then re-read from the newly written
blocks to ensure it is good. Should this fail again, Volume Manager performs a
diagnostic in its private region for that disc. This diagnostic (for lack of a
better word) consists basically of writing and reading in the private region.
Should it fail, Volume Manager will replace the disc with another disc (if the
Hot Sparing option is selected and a disc is available), and then copy good
data from the good plex to the newly replaced disc. As you can see, this is a
disc-for-disc replacement of redundant data.
In the new Hot Relocation scheme, pretty much the same is carried out up to
the point of the diagnostic in the private region. Instead of performing the
diagnostic in the private region and then replacing the disc with another,
Volume Manager now replaces the errored sub-disc with a matching-sized area
out of the disc(s) allocated for the Hot Relocation pool, and copies data from
the good plex's sub-disc to the newly acquired sub-disc. As you can see, this
is a sub-disc-for-sub-disc replacement of redundant data, without the
requirement that the complete disc fail.
In both cases, non-redundant data is not moved to the newly acquired disc
space, be it a complete-disc or sub-disc failure. This operation becomes the
function of the system's operator.
Obviously there is much more to the operation; this is meant to be nothing
more than a thumbnail sketch of the differences.
A dead-disc type failure is handled pretty much the same way, with all the
operations failing on the bad disc.
43. How can I manually failover a diskgroup in an SSA ?
If the SSA is linked to two systems and one system fails:
To list groups out there
# vxdiskadm
8 Enable access to (import) a disk group
list
To force the import of a disk group
# vxdg -fC import disk_group
To start the volumes
# vxvol -g disk_group -f start vol01 vol02 vol03 etc..
To mount volumes
# mount /dev/vx/dsk/disk_group/vol01 /data01
To bring rootdg disks on the SSA into a different group
# vxdg -tC -n "newgroup" import "diskgroup"
44. How can I grow the private region of a disk group ?
The following information is from "Ultra Enterprise PDB 1.2 Release Notes -
December 1996", Large System Configurations:


Systems that have very large disk group configurations can eventually exhaust
the private region size in the disk group, which results in not being able to
add more configuration objects. At that point, the configuration either has to
be split into multiple disk groups, or the private regions have to be
enlarged. This task involves re-initializing each disk in the disk group;
there is no method to dynamically enlarge the private region. The private
region size of a disk is specified using the privlen option to the vxdisksetup
command. For example,
# vxdisksetup -i c1t0d0 privlen=2048
Reasons why this is not a good idea
Increasing the size of the private region is tricky because you have to
increase it on ALL the disks in the dg before the database itself will
"expand"; running vxdisksetup on only one disk is not going to work.
Unfortunately, it's not that easy. If you increase the size of the private
area, that means less space for data on the disk, so you're PROBABLY looking
at a complete backup, a tear-down of the dg, remaking the dg with the newly
formatted/initialized disks, and then remaking the volumes and restoring the
data. Obviously, this is very time consuming.
Once enlarged, if ANY failed disk is replaced by normal means, the default
private region size will be used on the new device, forcing the entire group
back to the default size.
Lastly, if the customer ever runs 'vxdiskadm' in the future to initialize a
new disk, it will put the default (small) private area on the disk. This won't
work, because all the disks in the dg HAVE to have the larger private area. So
from then on, every new disk must be set up with vxdisksetup and the
appropriate parameters to give it the larger private area.
Options
Look into putting any new volumes and disks into a new disk group. Split the
disk group that has the problem, moving some of the volumes out to a different
disk group, so that you do not run into problems under live running
conditions. The Veritas Volume Manager requires space in the private region
for operations such as hot sparing/relocation. Only use enlarged private
regions where necessary.
45. Where can I get the latest SSA Matrix - (Patches & Compatibility) ?
SSA Matrix - (Patches & Compatibility)
46. Where can I get Demo License for VxVM (2.6) ?
Demo License for VxVM (2.6)
47. How can I create a small rootdg without encapsulating a disk ?
Create Small Rootdg Group
There MUST be a rootdg group for each node when using the SPARCcluster.
It is the default disk group and cannot be omitted.
It is more difficult to upgrade to a new version or recover from certain
errors when the root disk is encapsulated.
Prerequisites:
Install Solaris 2.x plus patches.
Install Volume Manager and patches.
Reboot.
Create a 2-cylinder raw partition on a disk.
NB: DO NOT RUN VXINSTALL.
Set the initial operating mode to disabled. This disables transactions and
creates the rendezvous file install-db, for utilities to perform diagnostic
and initialization operations.
# vxconfigd -m disable
# ps -ef | grep vxconfigd
root 608 236 6 16:20:20 pts/0 0:00 grep vxconfigd
root 605 1 18 16:17:44 ? 0:00 vxconfigd -m disable

NB. If the above command fails, it's probably because vxconfigd is already
running. Just kill off vxconfigd and re-run the above command, or
alternatively try vxconfigd -k, which kills the daemon and restarts it.
Initialise the database:
# vxdctl init
Make the new rootdg group:
# vxdg init rootdg
Add a simple slice:
# vxdctl add disk c0t3d0s7 type=simple
vxvm:vxdctl: WARNING: Device c0t3d0s7: Not currently in the configuration
Add the disk record:
# vxdisk -f init c0t3d0s7 type=simple
Add the disk to the rootdg disk group:
# vxdg adddisk c0t3d0s7
Enable transactions:
# vxdctl enable
Remove the install-db file:
# rm /etc/vx/reconfig.d/state.d/install-db
48. What should I consider when configuring swap ?
Dump device
Primary swap needs to be a physical partition. This is not to say it cannot be
under Veritas control. This is because, in the event of a panic, VM doesn't
support volumes as dump devices. The dump device is taken from a physical
slice initially, which is then changed to a Veritas volume.
During startup, /etc/rcS.d/S35vxvm-startup1 executes the following:
"vxvm: NOTE: Setting partition $dumpdev as the dump device."
swap -a $dumpdev
swap -d $dumpdev
More than one swap
There is only one place for the primary swap. You can have other swap areas,
which could be raw partitions or a swap file.
How much swap must be non-striped
We do not support a striped primary swap device. It could be made smaller than
memory.
How big must the swap be
We advise swap to be at least the same size as the system's memory, so that
crash dumps can be taken. (Not really relevant to Enterprise-class systems.)
Maximum limit
Up to a max of 2GB.
All core dumps will be truncated to 2GB, and in fact a partition over 2GB is a
problem for 'savecore'.
The core dump is copied to the _end_ of the primary swap partition,
and savecore can't read it because it can't seek over 2GB.
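In 512-byte disk blocks, that 2 GB savecore ceiling works out as follows
(simple arithmetic, shown only to make the limit concrete):

```shell
# 2 GB expressed in 512-byte blocks -- the largest dump area savecore can
# seek across.
LIMIT_BYTES=$(( 2 * 1024 * 1024 * 1024 ))
LIMIT_BLOCKS=$(( LIMIT_BYTES / 512 ))
echo "$LIMIT_BLOCKS"    # 4194304 blocks
```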
Direct mapping between devices
Veritas finds the 'real' partition that the swapvol maps to and uses this for
the 'dumpdevice' ( for core dumps ).
If there is no direct mapping, then this breaks and the system will have no
dump device.
How swap works
There is no point in striping swap.
Swap is allocated on a round robin basis in 1MB pieces from all the partitions
or files that are assigned as swap space.
Just take all the pieces of disk that you were planning on striping, and assign
them as swap space in their own right.
"Striping" will essentially occur automatically.
Creating primary swap on a different disk
The swap partition should still be on slice 1, but must not start at cylinder
0 (start at cylinder 1), and its tag must be swap. This stops the Veritas
Volume Manager startup script S35vxvm-config from mapping the underlying
partition as a dump device after the disk has been encapsulated. Although this
was done on a raid5 volume, the same could be done for any type of volume.
49.

How can I stop volume creation ?

We created a raid5 volume that was immediately seen to be the wrong size.
Running the following commands meant we could have another go at creating it
the correct size, without having to wait for the raid5 setup to finish.
// Find the id of the process creating the raid5 volume.
# ps -ef
// Kill off the process.
# kill <pid>
// Clear the tutil and putil fields (we want dashes in the fields).
// These fields can be seen by running vxprint.
// May only have to do it for the plex.
# vxmend clear tutil0 vol01-01
# vxmend clear tutil0 vol01
// Remove the volume.
// You will not be able to do this from the GUI.
# vxedit -fr rm vol01
50. How can I recover a failing disk - FAILING ?
In VxVM 2.3, when a subdisk gets an I/O error, hot relocation kicks in and the
VM disk that contained the failed VM subdisk is set to "failing=on". This prevents
vxassist from using the disk to create new subdisks unless explicitly directed
to do so by the user.
You can use the following command to turn the flag off if you think it got
turned on due to a transient error. The VxVM 2.3 GUI also has a checkbox for this
functionality.
# vxedit set failing=off disk05
51. How do I move disks from one disk group to another ?
NB. If these disks are going into an existing disk group then you do not need
to initialise the disk group, and the disk names must be different;
e.g. if you have c1t0d0s2=disk01 then the disk group you are moving the disk
into must not already have a disk01.
Moving Populated Volume Manager Disks between Disk Groups
i) Assume I intend to move volumes vol02 and vol04 from the disk
group olddg to a new group, newdg.
ii) Get a list of disks in the disk group you intend to split.
# vxdisk list | grep olddg
c1t0d0s2  sliced  olddg01  olddg  online
c1t0d1s2  sliced  olddg06  olddg  online
c1t1d0s2  sliced  olddg04  olddg  online
c1t1d1s2  sliced  olddg09  olddg  online
c1t2d0s2  sliced  olddg02  olddg  online
c1t2d1s2  sliced  olddg07  olddg  online
c1t4d0s2  sliced  olddg03  olddg  online
c1t4d1s2  sliced  olddg08  olddg  online
c1t5d0s2  sliced  olddg05  olddg  online
c1t5d1s2  sliced  olddg10  olddg  online
iii) Get the configuration.
# vxprint -ht -g olddg
iv) Determine which disks contain the volumes to be moved. Ensure that all
volume allocations are self-contained in the set of disks to be moved.
In this case, my volumes are contained on disks olddg04 through olddg09,
with no unassociated plexes or subdisks, and no allocations which cross
out of this set of disks.
v) Save the configuration, in a format that can be plugged back into
the vxmake utility. Specify all volumes on the disks in question (plus
any unassociated plexes and their child subdisks, plus any unassociated
subdisks).
# vxprint -hmQqvps -g olddg vol02 vol04 > movers
vi) Unmount the appropriate file systems, and/or stop the processes
which hold the volumes open.
vii) Stop the volumes.
# vxvol -g olddg stop vol02 vol04
viii) Remove from the configuration database the definitions of the
structures (volumes, plexes, subdisks) to be moved. (NOTE that this
does not affect your data.)
# vxedit -g olddg -r rm vol02 vol04
ix) Remove the disks from the original diskgroup.
# vxdg -g olddg rmdisk olddg04 olddg05 olddg06 olddg07 olddg08 olddg09
x) Initialize the new diskgroup using one of your disks. DO NOT
reinitialize the disk itself (i.e. do not run vxdisk init). (If you are moving
the disks to a disk group that already exists, skip this step.) It is simplest
to keep their old names until a later step.
# vxdg init newdg olddg04=c1t1d0s2
xi) Add the rest of the moving disks to the new disk group.
# vxdg -g newdg adddisk olddg05=c1t5d0s2
# vxdg -g newdg adddisk olddg06=c1t0d1s2
# vxdg -g newdg adddisk olddg07=c1t2d1s2 olddg08=c1t4d1s2 olddg09=c1t1d1s2
xii) See the disks in the new disk group.
# vxdisk list | grep newdg
c1t0d1s2  sliced  olddg06  newdg  online
c1t1d0s2  sliced  olddg04  newdg  online
c1t1d1s2  sliced  olddg09  newdg  online
c1t2d1s2  sliced  olddg07  newdg  online
c1t4d1s2  sliced  olddg08  newdg  online
c1t5d0s2  sliced  olddg05  newdg  online
xiii) Reload the object configuration into the new disk group.
# vxmake -g newdg -d movers

xiv) Bring the volumes back on-line.


# vxvol -g newdg init active vol02
# vxvol -g newdg init active vol04
xv) Observe the configuration of the new disk group.
# vxprint -ht -g newdg
xvi) Test the data. Remember that the device names have changed to refer
to newdg instead of olddg; you'll need to modify /etc/vfstab and/or your
database configurations to reflect this. Then you'd mount your file
systems, start your database engines, etc.

xvii) Note that the original database is intact, though the disk naming is
a bit odd. You *can* rename your disks and their subdisks to reflect the
change. This is optional.
# vxprint -ht -g olddg
# vxedit -g olddg rename olddg10 olddg04
# vxprint -g olddg -s -e "name~/olddg10/"
# vxedit -g olddg rename olddg10-01 olddg04-01
# vxprint -g olddg -ht
xviii) Do the same for the disks in newdg and their subdisks.
# vxprint -g newdg -s
# vxprint -g newdg -e "name~/olddg/"
# vxedit -g newdg rename olddg04 newdg01
# vxedit -g newdg rename olddg05 newdg02
# vxedit -g newdg rename olddg06 newdg03
# vxedit -g newdg rename olddg07 newdg04
# vxedit -g newdg rename olddg08 newdg05
# vxedit -g newdg rename olddg09 newdg06
# vxedit -g newdg rename olddg04-01 newdg01-01
# vxedit -g newdg rename olddg05-01 newdg02-01
# vxedit -g newdg rename olddg06-01 newdg03-01
# vxedit -g newdg rename olddg07-01 newdg04-01
# vxedit -g newdg rename olddg08-01 newdg05-01
# vxedit -g newdg rename olddg09-01 newdg06-01
# vxedit -g newdg rename olddg08-02 newdg05-02
# vxedit -g newdg rename olddg09-02 newdg06-02
# vxprint -g newdg -ht
52. How do I return to underlying devices ?


Notes:
- Use the name of your system disk rather than c0t3d0, which is
used in this guide only as an example.
- When rebooting, you might have to specify a boot disk (instead of using
init 6) if the disk is not the default boot disk.

Sequence:
Boot from CD-ROM to convert back to underlying devices and stop Volume
Manager.
Boot from cdrom
ok boot cdrom -sw
# fsck /dev/rdsk/c0t3d0s0
FILE SYSTEM STATE IN SUPERBLOCK IS WRONG; FIX? y
# mount /dev/dsk/c0t3d0s0 /a
Take a copy of the vfstab and system files
# cp /a/etc/vfstab /a/etc/vfstab.vx
# cp /a/etc/system /a/etc/system.vx
Edit the vfstab file to replace any of the system partition entries with
entries for the underlying partitions.
Also remove/convert/hash out any other entries for vx devices.
# vi /a/etc/vfstab
/proc - /proc proc - no
fd - /dev/fd fd - no
swap - /tmp tmpfs - yes
/dev/dsk/c0t3d0s0 /dev/rdsk/c0t3d0s0 / ufs 1 no
/dev/dsk/c0t3d0s6 /dev/rdsk/c0t3d0s6 /usr ufs 1 no
/dev/dsk/c0t3d0s3 /dev/rdsk/c0t3d0s3 /var ufs 1 no
/dev/dsk/c0t3d0s5 /dev/rdsk/c0t3d0s5 /opt ufs 2 yes
/dev/dsk/c0t3d0s1 - - swap - no
Check the rest of the system filesystems.
# fsck /dev/rdsk/c0t3d0s6
# fsck /dev/rdsk/c0t3d0s3
# fsck /dev/rdsk/c0t3d0s5
Edit system file to remove the following entries for Volume Manager.
# vi /a/etc/system
rootdev:/pseudo/vxio@0:0
set vxio:vol_rootdev_is_volume=1
Stop Volume Manager.
# touch /a/etc/vx/reconfig.d/state.d/install-db
install-db should be the only file in this directory, remove any other
files like root-done.
# cd /
# umount /a
# fsck /dev/rdsk/c0t3d0s0
Reboot on the underlying disk
# init 6
53. How could I backup and restore private regions ?
The following procedure is not a substitute for backing up your valuable
data. Please ensure you have reliable backups of your data.
Take a snapshot of the current configuration
When you have volume manager up and running, you need to run the following
command for each disk group.
# vxprint -hmQqsvp -g dg > /directory-path/filename
We also need a list of the names which equate to the physical disks;
this can be obtained by keeping the output from
# vxdisk list
Create the new disk group
NB. Disk names MUST correspond to exactly the same disks as they did
originally (see your backup copy of the vxdisk list output).
Initialise the disk group, using one of the disks from the lost disk group.
# vxdg init dg medianame=accessname
NB. Substitute the disk group, disk access name and media name
from the saved vxdisk list output, e.g.
# vxdg init datadg disk01=c2t0d0s2
Add in the rest of the disks.
# vxdg -g dg adddisk medianame=accessname [other disks]
Recreate the volume(s) configuration from the configuration file.
# vxmake -g dg -d /directory-path/filename
NB. If this fails saying the input file is too large, then split the
file up and run the above for each piece.
In most cases it works; it's only very large configurations, and even
then we have only had to split it into two pieces.
You can use the above command with the -V option, which will
go through the motions but not actually do anything.
You may have to bring the volumes back online (each volume must be done
one at a time).
# vxvol -g dg init active volname
54. How can I add more swap from another disk under Veritas ?
Create more swap in a different disk group
NB. The name must be no longer than "swapvol", i.e. swapvl2 or swap02 is OK.
To add extra swap manually:
# swap -l
# swap -a /dev/vx/dsk/swapdg/swap02
# swap -l
To pick up the extra swap at boot time:
# vi /etc/vfstab
/dev/vx/dsk/swapdg/swap02 - - swap - no
# init 6
55. How can I setup an alternate dump device (rcS.d/S35newdumpdevice) ?
#!/bin/sh
#
# /etc/rcS.d/S35newdumpdevice
#
# Author: Bede Seymour (bede.seymour@Eng.Sun.COM)
#
# Created: Wed Apr 30 14:41:09 EST 1997
#
# This script *MUST* come before /etc/rcS.d/S40standardmounts.sh !!!
#
# This only works because the swapadd/swap -a command only edits the
# dumpvp vnode ptr in the kernel if not already set.
#
# Set DUMPDEV to the device/slice on which you would like to dump.
#
# set defaults
PATH=/sbin:/etc:/bin:/usr/sbin:/usr/bin:/usr/ucb ; export PATH
PROG=`basename $0` ; export PROG
DUMPDEV="/dev/dsk/c0t0d0s1" ; export DUMPDEV
echo "${PROG}: configuring ${DUMPDEV} as dump device ..."
swap -a ${DUMPDEV}
swap -d ${DUMPDEV}

echo "${PROG}: done."
56. Official SSA/A5000 Software/Firmware Configuration Matrix
57. Veritas On-Line Storage Manager (www.veritas.com)
58. Download Veritas Software
59. Veritas Utilities and Scripts (Unsupported) - Australian 'Spider' server
60. What can the vxprivutil command be used for ?
To check the consistency of the information in the private region, try the
following:
# /etc/vx/diag.d/vxprivutil dumpconfig /dev/rdsk/c?t?d?s2 > /tmp/vx.output
# cat /tmp/vx.output | vxprint -Ath -D -or
# cat /tmp/vx.output | vxprint -vpshm -D - > /tmp/vxprint.vpshm
# vxmake -g diskgroup -d /tmp/vxprint.vpshm
You should see a normal vxprint output.
61. How can I clear the message 'volume ? is locked by another utility' ?
If you get the following error message :
vxvm:vxplex: ERROR: Plex vol01-02 in volume vol01 is locked by another utility
This can be fixed by running the following commands:
$ vxmend clear tutil0 plex
$ vxmend clear tutil0 vol
$ vxplex -g diskgroup det plex
$ vxrecover -s
If you look at the TUTIL0 field in a vxprint you will see flags called ATT
set; all you need to do is clear them.
62. How can I clear subdisks which are in IOFAIL state ?
How to get rid of a subdisk in an IOFAIL state, if the volume is mirrored.
This is a known bug (Bug ID: 4050048), the workaround is as follows
NOTE:Before you start make sure you have an output of
$ vxprint -Ath
In this example it is the rootdisk which has had this problem, hence
sd rootdiskPriv - ENABLED 2015 - - - PRIVATE
v  rootvol     root       ENABLED 1919232 - ACTIVE -
pl rootvol-03  rootvol    ENABLED 1919232 - ACTIVE -
sd disk01-01   rootvol-03 ENABLED 1919232 0 - -
pl rootvol-01  rootvol    ENABLED 1919232 - ACTIVE -
sd rootdisk-B0 rootvol-01 ENABLED 1       0 - - Block0
sd rootdisk-02 rootvol-01 ENABLED 1919231 1 IOFAIL - -
Both sides of the plex are working, but the sd rootdisk-02 is marked as
IOFAIL. The only way to clear this state is to remove the subdisk and
recreate it with the same subdisk offsets and length. This can be done
as follows:
# vxplex -g rootdg dis rootvol-01
# vxsd -g rootdg dis rootdisk-02
# vxedit -g rootdg rm rootdisk-02
# vxmake -g rootdg sd rootdisk-02 rootdisk,0,1919231
where DISK is rootdisk, DISKOFFS is 0 and LENGTH is 1919231. The DISK,
DISKOFFS and LENGTH can be found from the vxprint -th output:
V NAME USETYPE KSTATE STATE LENGTH READPOL PREFPLEX
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
v rootvol root ENABLED ACTIVE 1919232 ROUND -
pl rootvol-03 rootvol ENABLED ACTIVE 1919232 CONCAT - RW
sd disk01-01 rootvol-03 disk01 0 1919232 0 c1t0d0 ENA
pl rootvol-01 rootvol ENABLED ACTIVE 1919232 CONCAT - RW
sd rootdisk-B0 rootvol-01 rootdisk 1919231 1 0 c0t3d0 ENA
sd rootdisk-02 rootvol-01 rootdisk 0 1919231 1 c0t3d0 ENA
NOTE: depending on where the subdisk is used, i.e. within a stripe, you may
need to use the -l option to specify where the subdisk is located in the
stripe.
# vxsd -g rootdg assoc [-l n] rootvol-01 rootdisk-02
where n is the position in the stripe; this again can be determined from the
vxprint -th output.
For subdisk datadg08-01 the -l number would be 2 (STRIPE 2/0);
for subdisk datadg06-01 the -l number would be 0 (STRIPE 0/0).
V NAME USETYPE KSTATE STATE LENGTH READPOL PREFPLEX
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
pl vol01-02 - DISABLED - 104672 STRIPE 4/128 RW
sd datadg06-01 vol01-02 datadg06 0 26208 0/0 c1t1d0 ENA
sd datadg07-01 vol01-02 datadg07 0 26208 1/0 c1t3d0 ENA
sd datadg08-01 vol01-02 datadg08 0 25760 2/0 c1t3d1 ENA
sd datadg09-01 vol01-02 datadg09 0 25760 3/0 c1t5d1 ENA
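The vxmake arguments can be pulled straight out of a saved vxprint -th line rather than copied by eye; a sketch using the example's own rootdisk-02 record:

```shell
# Derive the vxmake arguments (DISK,DISKOFFS,LENGTH) for a lost subdisk
# from a "vxprint -th" sd line: fields 4, 5 and 6 of the sd record.
echo 'sd rootdisk-02 rootvol-01 rootdisk 0 1919231 1 c0t3d0 ENA' |
awk '$1 == "sd" { printf "vxmake -g rootdg sd %s %s,%s,%s\n", $2, $4, $5, $6 }'
```

This prints the same `vxmake -g rootdg sd rootdisk-02 rootdisk,0,1919231` command shown above.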

# vxplex -g rootdg att rootvol rootvol-01


Once the re-sync has finished, everything should be back to normal.

This section deals more specifically with issues related to Veritas Volume
Manager and the Sparc Storage Array

63. How do I upgrade to Solaris 2.5 & VM 2.1.1 ?
We had a customer who installed Solaris 2.5 on top of VM 2.1. This is not a
supported configuration, because it does not work. This left the customer
unable to get his system back up. After jumping through some hoops the
customer was back online. The following points should save others from the
same or similar problems:
o The Vm 2.1.1 binaries for Solaris 2.3 and 2.4 are NOT the same as the
binaries for Solaris 2.5.
o This is a departure from the past, due to some other problems (to be solved
in the next release).

o This also means that the Vm 2.1 binaries will NOT work with Solaris 2.5...
o If you are going to use Solaris 2.5 and Vm, it MUST be Vm 2.1.1(release)....
o If a customer upgrades Vm to 2.1.1 on either Solaris 2.3 or 2.4 and then
decides to go to Solaris 2.5, caution should be taken. As mentioned above the
Vm 2.1.1 binaries for Solaris 2.3/2.4 and 2.5 are not the same, so both Vm and
Solaris need to be upgraded in this case.
o The upgrade procedures for VM 2.1.1 provided with the CD do not cover this
situation; a procedure needs to be developed and tested to handle it.
64. How does a hotspare know when to kick in ?
This is how a hotspare works with Veritas Volume Manager....
Specifically, here's the procedure Vol Mgr goes through to determine
whether a disk has failed:
If read error on non-protected (non-mirrored and non-RAID5) volume:
do nothing, other than report the error
(this is the same thing Solaris does)
If read error on protected volume:
read the data from alternate drives
(from alternate mirror, or by reconstructing the
data from the remaining disks in RAID5)
return data to app
write data to drive which originally reported read error
(Reason: if the original read error was due to
a bad spot on the disk, then this write may
cause the SCSI drive to re-vector the block to an
alternate track)
If write error:
Try to read & write the private region on the disk.
If private region accessible:
Must be just a bad spot on the disk.
Do not consider disk failed, and do not hot spare.
Detach plex(es) associated with this disk and email root
Else:
The whole disk is almost surely bad.
Consider disk failed, and initiate hot spare.
Note that it takes a little while to go through all this, doing extra
I/O's to make sure the disk has really failed, waiting for commands to
time out, etc. This is why you don't see hot spares kick in instantaneously.
Volume Manager is trying to make sure that the disk really is bad. This
is Good. You don't want to start swapping in hot spares unless the disk
is really bad.
So, if Vol Mgr decides to kick in a hot spare, it proceeds to build the
appropriate subdisk(s), plex(es), and volume(s) that were on the failed
disk onto the hot spare. It then populates the subdisks with the data
that was on the failed disk.
If all you had on the failed disk was 100MB of volume stuff, then that's
all that gets written to the hot spare. It doesn't write the whole disk
unless it has to.
Look at this example
Failures have been detected by the VERITAS Volume
Manager:

failed plexes:
ukdb00-01
ukdb02-01
ukdb03-01
ukdb04-01
No data appears to have been lost.
The line "No data appears to have been lost." indicates this, and informs
root that he needs to decide what to do:
1. try to re-attach the disk and see if there is a genuine h/w error, or
2. replace the faulty disk and rebuild the mirror.
In a nutshell, a hotspare will only be initiated by Volume Manager
if/when Volume Manager cannot read/write to the private region of the disk.
In all other circumstances Volume Manager will email root (to configure
Volume Manager to email more users, edit /etc/rc2.d/S95vxvm-recover
and modify the line that says "vxsparecheck root &" to
"vxsparecheck root other-users &") and inform him/her that it has detached
a plex (mirror) as it encountered problems and needs attention. (This is
better than ODS, which just sits there until you issue a metastat command
to see what's happening.)
65. How do I encapsulate the boot disk ?
SRDB ID: 10923
SYNOPSIS: How to encapsulate root disk for mirroring
DETAIL DESCRIPTION:
Bring the internal bootable disk under the control of the
Veritas Volume Manager software so it can be mirrored.
SOLUTION SUMMARY:
The disk must have the following file systems:
/ /usr /tmp /var
There must be some swap space allocated on the bootable
disk, or the Veritas software will not encapsulate the disk.
Set up for encapsulation as follows:
* Select two unused slices on the disk and verify that these are
unassigned, 0 (zero) length slices. If there aren't any unused
slices, then they must be made; i.e., if all seven allocated slices
of the disk have been used, the disk cannot be encapsulated.
These two unused slices will become the Volume Manager's private
and public regions.
* Next, try to "steal" two cylinders (at least 2MB are needed) from
the last used slice. It is best to use the two cylinders (or
the amount needed) at the end of the disk. These are not set up
as a slice, they are just not part of the last allocated slice.
This means if slice "g" is the last used slice and extends all the way
to the last cylinder, for example, cylinder number 1254, make
this slice stop at cylinder 1252 instead. This area will be used
by the Veritas software to hold its private region data.
If this space is not allocated for the software to use, it will
take it from the swap area of the disk. So, if swap is at a premium
on this system, allocate the space.
* Encapsulate the disk by running the following from the command line:
# vxdiskadm
* Select "encapsulate disk" from the menu to complete the
encapsulation operation.
Once the disk has been encapsulated, it is ready to be mirrored.

PRODUCT AREA: SunOS Unbundled


PRODUCT: Install
SUNOS RELEASE: Solaris 2.3
HARDWARE: any
66. How can I clone a Sparc Storage Array which is under Veritas control ?
These scripts might prove useful to some of you, to use as-is or
to modify as needed. This is used for cloning purposes.
Here is a script to replicate an SSA plex/subdisk configuration
from the command line instead of each machine doing the GUI work.
This may be of interest to you to take a look at; I thought it
might be useful for, say, disaster recovery purposes.

To make an identical disk storage array set up to the one you have at
another site do the following:
1) at the original site do the command:
# vxprint -t > /tmp/vxprint.out
2) copy the vxprint.out file and the vxawk script to the new site.
3) install the SSA according to the instructions including running the
vxinstall script.
4) make an install script with the commands:
# awk -f vxawk vxprint.out > vxscript
# chmod 755 vxscript
5) run the vxscript:
# ./vxscript
6) start the vxva program:
# vxva &
7) select the Volumes button.
8) use the select followed by multiple adjust mouse buttons to select all
the volumes.
9) under the Advanced-Ops/Volume/Initialize Volumes menu choose Active.
10) if needed, under Basic-Ops/File-System-Operations choose Make File System.
11) The system should now be ready for operations.

#
# VXAWK SCRIPT TO CREATE A DUPLICATE DISK STORAGE ARRAY SET-UP.
#
# THIS CAN BE USED AS A PART OF A RESTORATION PROCEDURE OF
# A DISASTER RECOVERY PLAN, WHERE THE ORIGINAL STORAGE ARRAY
# MUST BE REPLACED IN TOTAL. TO USE IN THIS MANNER A vxprint -t
# OUTPUT FILE MUST BE SAVED IN OFFLINE MEDIA TO ASSIST IN THE
# RECONSTRUCTION OF THE OLD STORAGE ARRAY. SEE ACCOMPANYING
# HOWTO FILE FOR MORE INFORMATION.
#
# AUTHOR: N. Derek Arnold (derek.arnold@cbis.com)
# COMPANY: Cincinnati Bell Information Systems.
# COPYRIGHT: 1995
# EDIT HISTORY: 05-28-95 nda (creation)
#
# THIS PROGRAM CONTAINS UNPUBLISHED PROPRIETARY MATERIAL OF CBIS
# (CINCINNATI BELL INFORMATION SYSTEMS INC.). IT MAY NOT BE
# REPRODUCED OR INCLUDED IN ANY OTHER PACKAGES WITHOUT WRITTEN
# CONSENT FROM CBIS. THE INCLUSION OF THIS COPYRIGHT DOES NOT
# IMPLY INTENT TO PUBLISH. ALL RIGHTS, BOTH FOREIGN AND DOMESTIC,
# ARE RESERVED.
#
BEGIN {
print "PATH=${PATH}:/usr/sbin:/usr/lib/vxvm/bin"
print "export PATH"
print "set -x"
print ""
print "vxconfigd -k"
print ""
}
# DO NOTHING FOR DGs - this should be done by the vxinstall script.
# IF there are disk groups other than root, this could be a problem in that
# we assume that the dg is always root.
# $1 == "dg" {
#   print "vxdg init " $2 " nconfig=" $3 " nlog=" $4 " minor=" $5
#   print ""
# }
# SAME COMMENT HERE AS FOR DGs
# $1 == "dm" {
#   access = substr( $3, 1, 6 )
#   print "vxdisksetup " access " privlen=" $5 " publen=" $6
#   print "vxdisk init " $3 " type=" $4
#   print "vxdg adddisk " $2 "=" $3
#   print ""
# }
# NEED TO DO REMAINING.
$1 == "sd" {
split( $7, col, "/" )
print "vxmake sd " $2 " disk=" $4 " offset=" $5 " len=" $6 " column=" col[1]
print ""
if( plex[$3] == "" )
plex[$3] = $2
else
plex[$3] = plex[$3] "," $2
}
$1 == "pl" {
if( $7 == "STRIPE" )
{
split( $8, f, "/" )
tag = " st_width=" f[2] " sd_num=" f[1]
}
else
{
tag=""
}
output = "vxmake plex " $2 " kstate=" $4 " state=" $5
output = output " len=" $6 " iomode=" $9 tag " sd=" plex[$2]
print output
print ""
if( vol[$3] == "" )
vol[$3] = $2
else
vol[$3] = vol[$3] "," $2
}
$1 == "v" {

output = "vxmake vol " $2 " use_type=" $3 " kstate=" $4


output = output " state=" $5 " len=" $6 " read_pol=" $7
output = output " plex=" vol[$2]
print output
print ""
}
{ }
67. How to import rootdg from another host ?
Note: you *can* import the rootdg from system 1 on system 2 with a click
or two from the VxVM GUI after you move the array; however, you will have
to do some serious handwaving to move the array back to its original
configuration.
From the Veritas System Administrator's Guide, Release 2.0/Jan 95,
section 3.7.4.1 "Renaming Disk Groups":
The following set of steps can be used to temporarily move the rootdg disk group
from one host to another
(for repair work on the root volume, for instance) and then move it back:
1) On the original host, identify the disk group ID of the rootdg disk group
to be imported to the other host:
# vxdisk -s list
This command results in output that includes disk group information similar to
the following.
dgname: rootdg
dgid: 774226267.1025.tweety
2) On the importing host, import and rename the rootdg disk group as
follows:
# vxdg -tC -n "newdg_name" import "diskgroup"
(quotes not needed)
where -t indicates a temporary import name; -C clears import locks; -n specifies
a temporary name for the rootdg to be imported (so that it does not conflict
with the existing rootdg); and "diskgroup" is the disk group ID of the disk
group being imported (774226267.1025.tweety,for example).
If a reboot or crash occurs at this point, the temporarily-imported disk group
will become unimported and will require a reimport.
3) After the necessary work has been done on the imported rootdg, deport it back
to its original host as follows:
# vxdg -h newhost_id deport diskgroup
where newhost_id is the hostid of the system whose rootdg is being returned
(this is the system's name, which can be confirmed with the command uname -n).
This command removes the imported rootdg from the importing host and returns
locks to its original host. The original host will then autoimport its rootdg
on its next reboot.
68. How do I increase the size of a striped mirrored volume ?
This is how I did it, but note that I used the GUI. I found that the
auto-resizing of the volume did NOT occur when I used the command line
interface.
At this point I have my two volumes; vol02 is my larger volume whose plex is
going to hold the data eventually:
1) Stop vol02:
# vxvol -g Daviddg stop vol02
2) Disassociate the plex from the volume:
# vxplex -g Daviddg dis vol02-01 (this also sets the volume size to 0 by
itself)
# vxvol -g Daviddg set len=0 vol02 (on the command line you need to do this
manually)
3) I then remove that volume as I no longer need it
# vxedit -g Daviddg rm vol02
4) time to attach the larger plex to my original volume:
# vxplex -g Daviddg att vol01 vol02-01
Note: there is an error at this point when the GUI tries to do a vxvol
set len command. (I don't get the error when I use the command line
interface, though.)
5) Wait for the sync up, remove the smaller plex :
# vxplex -g Daviddg dis vol01-01
(A vxvol command then runs and changes the size of the volume to that of
the larger plex. I had to do this by hand using the command line interface.)
# vxvol -g Daviddg set len=205632 vol01
vxprint shows the volume's new size:

6) I remove the old plex/subdisks


This is what the GUI ran :
# vxedit -g Daviddg rm vol01-01 (plex)
# vxedit -g Daviddg rm David13-01 David14-01 (subdisks)
On the command line I did this which deleted the subdisks too.
# vxedit -g Daviddg -rf rm vol01-01
7) Increase the size of the filesystem with mkfs
Before:
/dev/vx/dsk/Daviddg/vol01    47855       9   43066     0%    /foobar
# /usr/lib/fs/ufs/mkfs -F ufs -M /foobar /dev/vx/rdsk/Daviddg/vol01 205632
Warning: 192 sector(s) in last cylinder unallocated
/dev/vx/rdsk/Daviddg/vol02: 205632 sectors in 402 cylinders of 16 tracks, 32 sectors
        100.4MB in 26 cyl groups (16 c/g, 4.00MB/g, 1920 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 8256, 16480, 24704, 32928, 41152, 49376, 57600, 65824,
 74048, 82272, 90496, 98720, 106944, 115168, 123392, 131104, 139328,
 147552, 155776, 164000, 172224, 180448, 188672, 196896, 205120,
# df -k /foobar
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/vx/dsk/Daviddg/vol01
                       96143       9   86524     0%    /foobar
Another plex can be added to the volume to provide protection.
I have done this with files on a volume, and fsck'd the disk afterwards,
and it all seems fine. YOU MUST take backups in case things go wrong;
although this works in the lab, I am sure you understand I cannot say this
will work for you, although someone from Veritas has looked over this.
69. How do I boot from a SSA ?
You need the following:
Fcode 1.33 on the sbus card.
Firmware 1.9 in the array.
Optical Modules rev -03.
2.4 Hardware 3/95.
These are all minimum requirements because:
lower rev fcode: scrambles the world address of the array in
the boot prom, and never gets to open the disk device.
lower rev firmware: loads of bugs; not recommended anyway.
lower rev optical modules: give SOC-OFFLINE messages, then time
out at the boot prom.
lower rev OS: does not have the soc or pln drivers bundled, so it can't
find the array at boot.
The new firmware is supplied with both the SSA 2.0 and 2.1 CDs; the fcode
and the program to check/update it come only with the 2.1 CD.
To update / check the fcode, use the fc_update program as follows:
fc_update [return] will check for SOC cards and bring them
all to the current fcode revision, asking for confirmation
on each one.
fc_update -v [return] will go through the system looking for
SOC cards and reporting the fcode revision levels it finds.
No changes are made to the system.
In order to set up a system to boot from an array without any other
disks during the install, you will need either two cd drives to hold
the OS and SSA cd's, or a copy of the fc_update directory on tape that
you can use.
During the install you will be offered the chance to put the OS onto
any of the array disks, along with any other disks on the system
under the "c0t0d0" naming scheme.
To get the boot device, do not reboot after the install has finished,
instead cd into /dev/dsk, and use ls -l on the boot device. This will
show the full symlink to the /devices entry for the array disk. Remove
the initial "/devices" and the trailing ":a" (for slice 0) and this is
your boot device.
To store the boot device into the eeprom, shutdown the system and use
nvedit, e.g. for an example SS1000, booting from an array connected to
the SOC in sbus slot 3 on the first system board, with the world address
"a0000000,740d10":
nvedit [return]
0 devalias ssa /io-unit@f,e0200000/sbi@0,0/SUNW,soc@3,0/
SUNW,pln@a0000000,740d10/SUNW,ssd@0,0 [return]
1 [ctrl-c]
nvstore [return]
reset
then you can "boot ssa" to start the system.
70. How can I set the ownership of a volume ?
Say, for example, you had a volume you wanted to use with Sybase;
then you'd issue a command something like this:
# vxedit -g diskgroup set user=sybase group=sybase mode=600 volumename
Volume Manager will set this every time upon bootup.
71. How do I change the WWN of a Sparc Storage Array ?
It has become evident that there are times when it is advantageous to
change the World-Wide Number (WWN) for a particular SPARCstorage Array
(SSA), such as after replacing the array controller. Up until the

release of Solaris 2.4 HW 3/95 this entailed a delicate sequence of


steps to modify the device tree enough to recognize the new array
controller, download the old address, and then restore the original
devices and links to the array disks.
Now that the SSA drivers are bundled in the O/S (Solaris 2.4 3/95 and
higher), this operation can be completed much more easily using the cdrom
shell. The only difficulties that present themselves are obtaining the
old WWN and the fact that the cdrom shell does not include the
ssaadm(1M) or ssacli(1M) utilities. Not to worry, we can do this.
The Procedure:
--------------
1) Boot cdrom and mount O/S
ok boot cdrom -sw
; boot the single-user cdrom shell
# mount -o ro /dev/dsk/c0t0d0s0 /a ; provides RO access to "root"
NOTE: In the case of a single large / (no separate /usr, /opt, etc)
this is all you need to mount. Otherwise, also mount 'usr' and 'opt'
on /a/usr and /a/opt. Use the /a/etc/vfstab for reference.
NOTE2: Use of the "-o ro" mount will prevent superblock consistency
problems in the case of mirrored system disks.
2) Obtain old WWN value
# ls -l /a/dev/dsk/cNt0d0s2
; where N is old controller #
/dev/dsk/c3t0d0s2 ->../../devices/io-unit@f,e1200000/sbi@0,0/
SUNW,soc@1,0/SUNW,pln@b00008000,201831c9/sd@0,0:c
^^^^^^^^^^^^^
The WWN is 12 digits long made up of the first four digits LEFT
of the comma followed by the next eight digits RIGHT after the
comma (zero fill if necessary). In this case "8000201831c9".
If you happen to have the output of "ssaadm display cN" or
"ssacli display cN" handy the "Serial Num:" field shows the
correct WWN value.
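The digit-shuffling above is easy to get wrong by hand; this sketch extracts the WWN from the symlink target using only shell parameter expansion (the path is the example from step 2):

```shell
# Extract the 12-digit WWN from a /devices symlink target: the last four
# hex digits left of the comma in the pln@ component, followed by the
# eight digits right of the comma.
path='/devices/io-unit@f,e1200000/sbi@0,0/SUNW,soc@1,0/SUNW,pln@b00008000,201831c9/sd@0,0:c'
seg=${path##*pln@}              # b00008000,201831c9/sd@0,0:c
seg=${seg%%/*}                  # b00008000,201831c9
left=${seg%%,*}                 # b00008000
right=${seg#*,}                 # 201831c9
echo "${left#${left%????}}${right}"
```

For this path it prints 8000201831c9, matching the worked example.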
3) Locate the new array controller
# ls -l /dev/dsk/c*t0d0s2 | grep NWWN ; where NWWN is the four digits
; appearing in the SSA display
a match will come from controller N
4) Download the old address to the new controller
# /a/opt/SUNWssa/bin/ssacli -s -w 8000201831c9 download cN
OR
# /a/usr/sbin/ssaadm download -w 8000201831c9 cN
5) Reset the system
# umount <anything mounted under /a>
# halt
Press "Reset" on the back of the SSA.
ok boot
This should NOT require a reconfiguration, and should have you right back
where you started before the SSA controller was replaced.
As you might imagine, this position in the cdrom shell, with working
SSA devices and drivers loaded allows you to accomplish many other
tasks as well.
Don Finch
SunService
History:
10/31/95 - Original document.
11/01/95 - Clarified WWN info, per Manuel Cisnero's input (Thanks!)
03/05/96 - Specified read-only mount for /a to avoid problems with
mirrored system disk configurations. Thanks to Bob Deguc
of the NTC for catching this potential gotcha.
- Removed Power-Cycle of SSA, reset is easier and less apt to cause
trouble.
72.
When I try to encapsulate the boot disk I get the error "not enough free

partitions". Why ?
Basically you need 2 free partitions (1 for the public
region, and 1 for the private region). You can't use slice 2 since this
is the whole disk. Effectively this means you need 3 free partitions
to encapsulate the disk. You'd have to repartition the disk, making sure
there are 2 free partitions, and then try again...
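One quick way to check is to look for slices that are absent from the partition map, since prtvtoc only lists slices with a non-zero size. A sketch (the here-doc is a made-up sample standing in for real output):

```shell
#!/bin/sh
# Sketch: from a prtvtoc-style map, list which slices (other than slice 2,
# the whole disk) are unused and therefore candidates for the public and
# private regions. The here-doc is a hypothetical sample standing in for
# real `prtvtoc /dev/rdsk/cXtYdZs2` output.
free=$(awk '
  /^[^*]/ { used[$1] = 1 }            # data lines: first field = slice number
  END {
    for (s = 0; s <= 7; s++)
      if (s != 2 && !(s in used)) list = list " " s
    print "free slices:" list
  }' <<'EOF'
*       Partition  Tag  Flags    First Sector  Sector Count  Last Sector
0            2     00              0    204800    204799
1            3     01         204800    204800    409599
2            5     00              0   2052288   2052287
EOF
)
echo "$free"
```

With the sample map this reports slices 3 through 7 as free; at least two of them must really be free (and the disk laid out to back them) before encapsulation will succeed.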
73.
I want to move a Volume out of one dg into another, all the disks in the
original dg are part of the Volume. How do I do it ?
In this example we will be moving a volume called "data_vol01"
from a disk group called "old_dg" to a disk group called "new_dg"
and all the disks in disk group "old_dg" are part of this volume
we're also assuming that the diskgroup "new_dg" already exists...the volume is a
stripe consisting of 2 disks......
1. List the group that you want to take the volume from..
# vxprint | grep old_dg
c2t0d0s2 sliced martin01 old_dg online
c2t1d0s2 sliced martin02 old_dg online
2. umount the volume...
# umount /martin1
3. Display the Configuration
# vxprint -ht -g old_dg
dg old_dg          default  default  57000  818174277.1121.p4m-ods
dm martin01        c2t0d0s2   sliced   1519  4152640  -
dm martin02        c2t1d0s2   sliced   1519  4152640  -
v  data_vol01      fsgen      ENABLED  ACTIVE  1024000  SELECT  data_vol01-01
pl data_vol01-01   data_vol01 ENABLED  ACTIVE  1024496  STRIPE  2/128  RW
sd martin01-01     data_vol01-01  martin01  0  512240  0/0  c2t0d0  ENA
sd martin02-01     data_vol01-01  martin02  0  512240  1/0  c2t1d0  ENA
4. Get the volume config and store it in the file /tmp/movers
# vxprint -hmQq -g old_dg data_vol01 > /tmp/movers
5. Stop the Volume
# vxvol -g old_dg stop data_vol01
6. Remove the volume from the configuration database in that dg
# vxedit -g old_dg -r rm data_vol01
7. Remove the 1st disk from the disk group
# vxdg -g old_dg rmdisk martin01
8. To remove the last disk you have to deport it....
# vxdg deport old_dg
9. As the disk group new_dg already exists we don't have to bother
issuing the "vxdg init new_dg martin01=c2t0d0s2" command
10. We just add the two disks.....
# vxdg -g new_dg adddisk martin01=c2t0d0s2
# vxdg -g new_dg adddisk martin02=c2t1d0s2
11. Reload the volume configuration stored in /tmp/movers
# vxmake -g new_dg -d /tmp/movers
12. Re-start the volume
# vxvol -g new_dg init active data_vol01
13. Mount the volume & check it's ok.....
# mount /dev/vx/dsk/new_dg/data_vol01 /martin1
# cd /martin1
# ls -l
total 414592
-rw-r--r--  1 root other   2420736 Dec 5 14:03 etc.dir
-rw------T  1 root other 209715200 Dec 5 14:02 hello
drwx------  2 root root       8192 Dec 5 14:01 lost+found
14. You now need to edit /etc/vfstab so the mount point is correct....
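Step 14's edit can be scripted with sed. A sketch, using the example group names above; in real use the input would be /etc/vfstab and you would inspect the result before copying it into place (the sample vfstab line and its fields are hypothetical):

```shell
#!/bin/sh
# Sketch: rewrite old_dg device paths to new_dg, per step 14. A sample
# vfstab line stands in for /etc/vfstab so nothing on the system is touched.
line='/dev/vx/dsk/old_dg/data_vol01 /dev/vx/rdsk/old_dg/data_vol01 /martin1 ufs 1 yes -'
new=$(echo "$line" | sed -e 's|/dev/vx/dsk/old_dg/|/dev/vx/dsk/new_dg/|g' \
                         -e 's|/dev/vx/rdsk/old_dg/|/dev/vx/rdsk/new_dg/|g')
echo "$new"
```

Both the block and raw device paths are rewritten; only the disk group component changes, so mount points and options survive untouched.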
74.
How do I import diskgroup's on dual ported SSA's ?
If SSA linked to two systems and one system fails
To list groups out there
# vxdiskadm

8
Enable access to (import) a disk group
list
To force the import of a disk group
# vxdg -fC import disk_group
To start the volumes
# vxvol -g disk_group -f start vol01 vol02 vol03 etc..
To mount volumes
# mount /dev/vx/dsk/disk_group/vol01 /data01
To bring in rootdg disks on SSA into a different group
# vxdg -tC -n "<newgroup>" import "<diskgroup>"
75.
How do I Start/Stop a disk tray on a SSA ?
Here's how to spin down and start up a disk tray...
# ssaadm stop -t <tray_number> c<controller_number>
for example "ssaadm stop -t 2 c3", will stop the 2nd tray on a ssa on
controller 3
to restart the tray you use
# ssaadm start -t <tray_number> c<controller_number>
for example "ssaadm start -t 2 c3", will start the 2nd tray on a ssa on
controller 3
76.

How do I get a plex out of being in a stale state ?

If you have a situation where one of the plexes has gone stale
in a Volume, the following *should* re-enable the plex and get you going
again....
# umount /filesystem
- umount filesystem
# vxvol -g <dg name> stop <vol name>
- Stop the Volume
# vxplex -g <dg name> dis <plex name>
- Dissociate plex from volume
# vxvol -g <dg name> -f start <vol name> - Re-start volume
# vxplex -g <dg name> att <vol name> <plex name> - re-attach plex to volume
77.
How do I change the hostname info on the private regions ?
If you want it permanent, do this (see man vxdctl):
# vxdctl hostid <new-hostname>
# vxdctl init <new-hostname>
this will change the information on the private regions of all disks under VxVm
control.
78.
Can I encapsulate a disk that has no free space for a private region ?
You can encapsulate a partition with no private slice as follows :
c1t5d0
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 - 20 10.34MB (21/0/0) 21168
1 unassigned wu 0 0 (0/0/0) 0
2 backup wm 0 - 2035 1002.09MB (2036/0/0) 2052288
3 unassigned wu 0 0 (0/0/0) 0
4 unassigned wu 21 - 41 10.34MB (21/0/0) 21168
5 unassigned wu 0 0 (0/0/0) 0
6 unassigned wu 0 0 (0/0/0) 0
7 unassigned wm 2032 - 2035 1.97MB (4/0/0) 4032
# prtvtoc /dev/rdsk/c1t5d0s2
*                           First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      0    00          0     21168     21167
       2      5    00          0   2052288   2052287
       4      0    01      21168     21168     42335
       7      0    00    2048256      4032   2052287

The following 4 commands add disk c1t5d0s0 under VM control with no private
region; c1t5d0s0 is added as disk datadg04 in the datadg diskgroup.
The vxassist and mount commands just prove everything works ok.
# vxdisk -g datadg define c1t5d0s0 type=nopriv
# vxdg -g datadg adddisk datadg04=c1t5d0s0
# /usr/sbin/vxassist -g datadg -U fsgen make vol01 10584k layout=nolog datadg04
# /usr/sbin/mount /dev/vx/dsk/datadg/vol01 /d4
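As a sanity check, the 10584k passed to vxassist is just slice 0's 21168 blocks from the prtvtoc output converted to kilobytes (512-byte blocks):

```shell
#!/bin/sh
# Sanity check: the vxassist size 10584k equals slice 0's 21168 blocks
# (512 bytes each) expressed in KB.
blocks=21168
kb=$((blocks * 512 / 1024))
echo "${kb}k"
```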
79.

How can I get iostat to report the disks as c#t#d# instead of sd# ??

In Solaris 2.6 you will have the command iostat -xn, till then....
p4m-ods% iostat -x 30
                          extended disk statistics
disk      r/s  w/s   Kr/s  Kw/s  wait actv  svc_t  %w  %b
fd0       0.0  0.0    0.0   0.0   0.0  0.0    0.0   0   0
sd1       0.0  0.0    0.0   0.1   0.0  0.0   27.8   0   0
sd12      0.0  0.0    0.0   0.0   0.0  0.0   16.7   0   0
sd24      0.0  0.0    0.0   0.0   0.0  0.0    9.4   0   0
sd29      0.0  0.0    0.0   0.0   0.0  0.0    4.2   0   0
sd3       0.0  0.0    0.0   0.3   0.0  0.0   47.9   0   0
sd32      0.0  0.0    0.0   0.0   0.0  0.0   10.6   0   0
sd34      0.0  0.0    0.0   0.0   0.0  0.0   10.6   0   0
sd39      0.0  0.0    0.0   0.0   0.0  0.0   15.6   0   0
sd7       0.0  0.0    0.0   0.0   0.0  0.0   10.5   0   0
p4m-ods%
Well you have to look at /etc/path_to_inst and work it out this way....
or you can use these 2 scripts written by Bede.Seymour@corp
--------------------------- iostatctbl.sh ------------------------
#!/bin/sh
# iostatctbl.sh
#
# ("IOSTAT Create TaBLe")
# Script that creates a mapping between BSD/UCB names and SVR4 names
# of disk devices
rm -f /tmp/iostatctbl.tmp /var/tmp/iostatctbl.out
ls -l /dev/rdsk/*s0 > /tmp/iostatctbl.tmp
iostat -x | tail +3 | nawk '{
	dnameucb=$1
	ucbprefix=substr(dnameucb,1,match(dnameucb,"[0-9]")-1)
	inst=substr(dnameucb,match(dnameucb,"[0-9]"))
	grepcmd=sprintf("grep \"/%s@\" /etc/path_to_inst | grep \" %s$\"\n",ucbprefix,inst)
	printf("%s ",dnameucb)
	system(grepcmd)
}' | nawk 'BEGIN {FS="\""}
{
	printf("%s ",$1)
	newgrepcmd=sprintf("grep %s /tmp/iostatctbl.tmp\n",$2)
	system(newgrepcmd)
}' | nawk '{
	split($10,carray,"/")
	print $1,carray[4]
}' > /var/tmp/iostatctbl.out
echo "Table created - see /var/tmp/iostatctbl.out"
rm -f /tmp/iostatctbl.tmp
exit 0
--------------------------------- iostatx -------------------------------
#!/bin/sh
#
# iostatx - do iostat -x but substitute ucb names for svr4 names
#
./iostatctbl.sh
mv /var/tmp/iostatctbl.out /var/tmp/iostatctbl.out.old
cat /var/tmp/iostatctbl.out.old | sed s/fd0//g > /var/tmp/iostatctbl.out
iostat -x 10 | nawk 'BEGIN {
	ucblist=""
	while (getline Entry <"/var/tmp/iostatctbl.out") {
		split(Entry,earray," ")
		ucbname=earray[1]
		svr4name[ucbname]=substr(earray[2],1,6)
		ucblist=sprintf("%s %s",ucblist,ucbname)
	}
}
{
	if (match(ucblist,$1) != 0)
		printf("%-7s%s\n",svr4name[$1],substr($0,8))
	else
		print
}'
exit 0
Save the 2 files and issue the command ./iostatx and you'll get something looking
like this.....
p4m-ods% ./iostatx
Table created - see /var/tmp/iostatctbl.out
                          extended disk statistics
disk      r/s  w/s   Kr/s  Kw/s  wait actv  svc_t  %w  %b
fd0       0.0  0.0    0.0   0.0   0.0  0.0    0.0   0   0
c0t1d0    0.0  0.0    0.0   0.1   0.0  0.0   27.8   0   0
c1t1d0    0.0  0.0    0.0   0.0   0.0  0.0   16.7   0   0
c1t2d0    0.0  0.0    0.0   0.0   0.0  0.0    9.4   0   0
c1t3d0    0.0  0.0    0.0   0.0   0.0  0.0    4.2   0   0
c0t3d0    0.0  0.0    0.0   0.3   0.0  0.0   48.0   0   0
c1t3d3    0.0  0.0    0.0   0.0   0.0  0.0   10.6   0   0
c1t4d0    0.0  0.0    0.0   0.0   0.0  0.0   10.6   0   0
c1t5d0    0.0  0.0    0.0   0.0   0.0  0.0   15.6   0   0
c1t0d0    0.0  0.0    0.0   0.0   0.0  0.0   10.5   0   0
80.
How can I get round the error volume locked by another utility when
trying to detach a plex ?
I have found a way to clear the following error message WITHOUT
rebooting the machine or stopping access to any veritas volumes.
vxvm:vxplex: ERROR: Plex vol01-02 in volume vol01 is locked by another utility
$ vxprint vol01
Disk group: datadg
TY NAME         ASSOC     KSTATE   LENGTH   PLOFFS  STATE   TUTIL0  PUTIL0
v  vol01        fsgen     ENABLED  3072384  -       ACTIVE  ATT1    -
pl vol01-01     vol01     ENABLED  3072384  -       ACTIVE  -       -
sd trash02      vol01-01  ENABLED  1024128  0       -       -       -
sd trash01      vol01-01  ENABLED  1024128  0       -       -       -
sd trash03      vol01-01  ENABLED  1024128  0       -       -       -
pl vol01-02     vol01     ENABLED  3072448  -       TEMP    ATT     -
sd datadg05-01  vol01-02  ENABLED  1536192  0       -       -       -
sd datadg04-01  vol01-02  ENABLED  1536192  0       -       -       -
If you look at the TUTIL0
field you can see some flags set, what you need to do is clear these
flags,
Hence if we try to detach the plex now we get the error
$ vxplex -g datadg det vol01-02
vxvm:vxplex: ERROR: Plex vol01-02 in volume vol01 is locked by another
utility
the procedure to get this volume back up and running would be as
follows
$ vxmend clear tutil vol01-02
$ vxmend clear tutil vol01
$ vxplex -g datadg det vol01-02
$ vxrecover -s
Hey presto everything is back as it should be.
81. Veritas Volume Manager Disaster Recovery
When to use this guide :
This guide will be useful when a customer has experienced file system corruption
and is no longer

able to boot the system. If this system was under Volume Manager control (i.e
encapsulated) and the
customer had data volumes in the root disk group, then this procedure is
designed to help you recover
the configuration.
Step 1 : booting from underlying devices
o boot from cdrom to get a memory resident Solaris kernel
ok boot cdrom -sw
o the following steps need to be followed to return to underlying devices
# mount /dev/dsk/c0t0d0s0 /a    (where c0t0d0 is the system disk)
# cd /a/etc
# cp /etc/prevm.vfstab vfstab
- modify /etc/system and comment out the following lines
rootdev:/pseudo/vxio@0:0
set vxio:vol_rootdev_is_volume=1
# cd /a/etc/vx/reconfig.d/state.d
# rm root-done
# touch install-db
- note that install-db should be the only file in this directory
# cd /
# umount /a
<STOP> A
ok boot
o At this point the customer could simply restore his filesystems from
backups. If this is possible, then restoring the data may be all that is
required. This will put back the modifications which we have made and
all should be well. All that will be required after the data restore is
to reboot the machine. No modifications will have been made to the
Volume Manager configuration.
Step 2 : Re-enabling the Volume Manager configuration
o If the customer needs to re-install the Solaris OS, then we get a little
more involved.
The customer will also have to install the Volume Manager software.
This is still OK
since we will not be affecting his other data in rootdg.
install the Solaris OS
install the Volume Manager software
install the relevant OS/Volume Manager patches (see the patch matrix)
reboot
o If we are lucky, the following set of commands may bring back our
Volume Manager configuration. However it is not guaranteed to work
every time.
# rm /etc/vx/reconfig.d/state.d/install-db
# vxiod set 10
# vxconfigd -d
# vxdctl init <hostname>
# vxdctl enable
# init 6
o If this has brought back the Volume Manager configuration, then we
should be able to list the disks/configuration.
# vxprint -Ath
# vxdisk list
You may have to start the volumes before being able to fsck/mount them.
# vxvol -g <datadg> start vol01
# fsck -n /dev/vx/rdsk/<datadg>/vol01
# mount -r /dev/vx/dsk/<datadg>/vol01 /mnt
o If it hasn't brought back the configuration, then we will need to
create a new rootdg, temporarily import the old rootdg and then export
the volumes to a new disk group. This is quite an involved procedure and
you should inform the customer that this may take some time.
Step 3 : Re-encapsulating a new rootdg
o Firstly we need to do an encapsulation of the root disk. To do this, we
must first return to underlying devices as described in Step 1 above.
Once this has completed, run the following :
# vxinstall
o Please use with caution. Use the "custom" installation; if your root
disk is on controller 0 (c0), when 'c0' disks are listed, tell vxinstall
to do them individually; select 'encapsulation' for the bootable disk
only; select 'leave these disks alone' for all the rest on 'c0'. For all
other controllers that are listed, tell vxinstall to 'leave these disks
alone'. DO NOT BREAK OUT OF THIS UTILITY (i.e.: control-C).
This will place the bootable disk as a rootdg disk and when the
vxinstall is finished it will tell you to
reboot the system.
o When the system is rebooted, we will now have a new rootdg which simply
contains our system disk. Our next step is to import the old rootdg into
our new rootdg and recover our volumes.
Step 4 : Importing the old rootdg
o Now we must first get the disk group ID. This is done with the command :
# vxdisk -s list
o This should give us something similar to :
dgname: rootdg
dgid:   793279366.1025.npgts59
o This information is then used to import the original rootdg into our
new rootdg :
# vxdg -tC -n <new-dg-name> import 793279366.1025.npgts59
where -t indicates a temporary import name
where -C clears import locks
where -n specifies a temporary name for the rootdg to be imported so that
it does not conflict with the existing rootdg
Sometimes this may fail with the error : disk c0t0d0s2 names rootdg but
groupid differs.
We may also see the error on reboot regarding : insufficient config
copies. This is because Volume Manager can still only see the system disk
(and perhaps mirror) in rootdg but no other disks.
This can be resolved by using the -f flag (force) with the import.
# vxdg -tfC -n <new-dg-name> import 793279366.1025.npgts59
o The rootdg should now hold the new rootdg and the old rootdg using a
different name.
Step 5 : Recovering the data in old rootdg permanently
o At this point, if we rebooted the machine, we would lose our temporarily
imported old rootdg. We need a way of keeping this diskgroup permanent.
At this point we would suggest to our customer to keep these volumes
separate from the rootdg, i.e. keep them in a datadg. Why ? Well, from a
support point of view, it makes recovery much easier.
o To do this, we first of all get a listing of the configuration as it is
at the moment :
# vxprint -vpshmQq -g <new-dg-name> vol01 vol02 .... > /tmp/vxprint.list
# vxdisk list > /tmp/vxdisk.list
o Now that we have this configuration information, we deport the old
rootdg and create a brand new diskgroup,
# vxdg deport <new-dg-name>
o Now that the disk group is deported again, we must place the disks
which were originally in the rootdg into this new disk group. Referring
to our /tmp/vxdisk.list file, ensuring the disks are using their
original accessname :
# vxdg init <datadg> disk1=c1t0d0s2
o Add each of the other disks back into the diskgroup as follows :
# vxdg -g <datadg> adddisk disk?=c?t?d?s2
Repeat this for every disk in the disk group. Now, we remake our volume
configuration.
# vxmake -g <datadg> -d /tmp/vxprint.list
o Next step is to restart the volumes. This step may or may not be
necessary.
# vxvol -g <datadg> start vol01
o Repeat this step for every volume. The 'startall' option sometimes
doesn't appear to work at this point.
o Finally, simply fsck the volumes and mount them.
# fsck -n /dev/vx/rdsk/<datadg>/vol01
# mount -r /dev/vx/dsk/<datadg>/vol01 /mnt
o If this is successful, then you simply configure the /etc/vfstab to
reflect the changes and that completes the recovery.
o Remember to remove the /etc/vx/reconfig.d/state.d/install-db file as on
reboot you may run into problems such as "illegal vminor encountered".

Veritas Volume Manager - Miscellaneous
--------------------------------------
82. Can I use prestoserve with the SSA ?
Yes you can. Follow this procedure :
1. edit /etc/system
add
exclude: drv/pr
2. edit /etc/init.d/vxvm-startup2
add
modload /kernel/drv/pr
presto -p > /dev/null
3. edit /etc/init.d/prestoserve
replace presto -u
with presto -u filesystems.....
NOTE: don't prestoserve / , /usr, /var; acceleration of these filesystems
could lead to data corruption since the filesystem checks for /, /usr,
/usr/kvm, /var and /var/adm will precede the flushing of prestoserve buffers.
If you're using ODS see bug 1219713.
83. Can I grow my root filesystem online ?
Yes you can. Here's what I did. I only had a root filesystem. On machines with
/usr, /opt, /var you will have to
repeat certain stages but it should work the same.
Total disk cylinders available: 1151 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 root wm 0 - 1058 372.30MB (1059/0/0)
1 swap wu 1061 - 1150 31.64MB (90/0/0)
2 backup wm 0 - 1150 404.65MB (1151/0/0)
3 - wu 0 - 1150 404.65MB (1151/0/0)
4 - wu 1059 - 1060 0.70MB (2/0/0)
5 unassigned wm 0 0 (0/0/0)
6 unassigned wm 0 0 (0/0/0)
7 unassigned wm 0 0 (0/0/0)
Total disk cylinders available: 2733 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 root wm 0 - 1347 1000.47MB (1348/0/0)
1 swap wu 1348 - 1477 96.48MB (130/0/0)
2 backup wu 0 - 2732 1.98GB (2733/0/0)
3 - wu 0 0 (0/0/0)
4 - wu 0 0 (0/0/0)
5 unassigned wm 0 0 (0/0/0)
6 unassigned wm 0 0 (0/0/0)
7 unassigned wm 1478 - 2732 931.45MB (1255/0/0)
newfs the root filesystem.
install a boot block on the disk.

# /usr/sbin/installboot /usr/lib/fs/ufs/bootblk /dev/rdsk/c?t?d?s0


nvalias ssa /iommu/sbus/SUNW,soc@3,0/SUNW,pln@a0000000,777ec6/SUNW,ssd@0,0
set the boot-device from the ok prompt
setenv boot-device ssa
Remove the old rootvol and swapvol which are there but not mounted
choose: Encapsulate one or more disks
follow instructions
# df -k
Filesystem kbytes used avail capacity Mounted on
/dev/vx/dsk/rootvol 962582 202482 663850 23% /
/proc 0 0 0 0% /proc
fd 0 0 0 0% /dev/fd
swap 90752 16 90736 0% /tmp
To remove these disks
(select the disk)
ADVANCED OPS=>DISK GROUP=>REMOVE DISKS
(/usr/sbin/vxdg -g rootdg rmdisk diskname)
84. Can I change the hostname of a machine without affecting Volume Manager ?
Here is what I have done to successfully reassociate a rootdg with a machine....
1) Find a disk in the rootdg.......
map flag 15 to the appropriate slice on disk
and plug that slice into the vxprivutil
2) Run the vxprivutil on the private region of the disk in rootdg.....
diskid: 794106338.2119.maggie
group: name=unix2dg id=795205483.1201.schwing
flags: private autoimport
hostid: schwing
version: 2.1
iosize: 512
public: slice=4 offset=0 len=2050272
private: slice=3 offset=1 len=2015
update: time: 795465445 seqno: 0.45
headers: 0 248
configs: count=1 len=1456
logs:
count=1 len=220
3) Notice the hostid in the private region, I want it to belong to "maggie" not
"schwing".
4) So I change the hostid.......
5) Run the procedure in Appendix E................
85. Can I move Volumes from one diskgroup to another ?
Moving Populated Volume Manager Disks between Disk Groups
=========================================================
Roger B.A. Klorese
10/20/94
This is Totally UNSUPPORTED, but it is possible !!!
below is an example of how to do it.....
1) Assume I intend to move volumes vol02 and vol04 from the disk group olddg to
a new group, newdg.
# dd of=/dev/vx/rdsk/olddg/vol02 conv=sync
This is the content of vol02.

^D
0+1 records in
1+0 records out
# dd of=/dev/vx/rdsk/olddg/vol04 conv=sync
This is the content of vol04.
^D
0+1 records in
1+0 records out
# dd if=/dev/vx/rdsk/olddg/vol02 count=1
This is the content of vol02.
1+0 records in
1+0 records out
# dd if=/dev/vx/rdsk/olddg/vol04 count=1
This is the content of vol04.
1+0 records in
1+0 records out
2) Get a list of disks in the disk group you intend to split.
3) Get the configuration.
dg olddg        782676710.2488.dustbuster
dm olddg01      c1t0d0s2  sliced  2015  2050272  /dev/rdsk/c1t0d0s4
dm olddg02      c1t2d0s2  sliced  2015  2050272  /dev/rdsk/c1t2d0s4
dm olddg03      c1t4d0s2  sliced  2015  2050272  /dev/rdsk/c1t4d0s4
dm olddg04      c1t1d0s2  sliced  2015  2050272  /dev/rdsk/c1t1d0s4
dm olddg05      c1t5d0s2  sliced  2015  2050272  /dev/rdsk/c1t5d0s4
dm olddg06      c1t0d1s2  sliced  2015  2050272  /dev/rdsk/c1t0d1s4
dm olddg07      c1t2d1s2  sliced  2015  2050272  /dev/rdsk/c1t2d1s4
dm olddg08      c1t4d1s2  sliced  2015  2050272  /dev/rdsk/c1t4d1s4
dm olddg09      c1t1d1s2  sliced  2015  2050272  /dev/rdsk/c1t1d1s4
dm olddg10      c1t5d1s2  sliced  2015  2050272  /dev/rdsk/c1t5d1s4
v  vol01        gen       ENABLED  ACTIVE  4194304  SELECT    -
pl vol01-01     vol01     ENABLED  ACTIVE  4194304  CONCAT    -        RW
sd olddg01-01   vol01-01  olddg01  0       2050272  0         -
sd olddg02-01   vol01-01  olddg02  0       2050272  2050272   -
sd olddg03-01   vol01-01  olddg03  0       93760    4100544   -
v  vol02        fsgen     ENABLED  ACTIVE  2050268  SELECT    -
pl vol02-01     vol02     ENABLED  ACTIVE  2050268  CONCAT    -        RW
sd olddg08-01   vol02-01  olddg08  0       4        LOG       -
sd olddg08-02   vol02-01  olddg08  4       2050268  0         -
pl vol02-02     vol02     ENABLED  ACTIVE  2050268  CONCAT    -        RW
sd olddg09-01   vol02-02  olddg09  0       4        LOG       -
sd olddg09-02   vol02-02  olddg09  4       2050268  0         -
v  vol03        fsgen     ENABLED  ACTIVE  1024000  SELECT    -
pl vol03-01     vol03     ENABLED  ACTIVE  1024000  CONCAT    -        RW
sd olddg10-01   vol03-01  olddg10  0       1024000  0         -
v  vol04        fsgen     ENABLED  ACTIVE  4194304  SELECT    vol04-01
pl vol04-01     vol04     ENABLED  ACTIVE  4194304  STRIPE    4/128    RW
sd olddg04-01   vol04-01  olddg04  0       1048576  0/0       -
sd olddg05-01   vol04-01  olddg05  0       1048576  1/0       -
sd olddg06-01   vol04-01  olddg06  0       1048576  2/0       -
sd olddg07-01   vol04-01  olddg07  0       1048576  3/0       -
4) Determine which disks contain the volumes to be moved. Insure that all
volume allocations are self-contained in the set of disks to be moved.
In this case, my volumes are contained on disks olddg04 through olddg09,
with no unassociated plexes or subdisks, and no allocations which cross
out of this set of disks.
5) Save the configuration, in a format that can be plugged back into
the vxmake utility. Specify all volumes on the disks in question (plus
any unassociated plexes and their child subdisks, plus any unassociated
subdisks).
6) Unmount the appropriate file systems, and/or stop the processes
which hold the volumes open.
7) Stop the volumes.
8) Remove from the configuration database the definitions of the
structures (volumes, plexes, subdisks) to be moved. (NOTE that this
does not affect your data.)
9) Remove the disks from the original diskgroup.
10) Initialize the new diskgroup using one of your disks. DO NOT
reinitialize the disk itself. (vxdisk init). (If you are moving the disks
to a disk group that already exists, skip this step.) It is simplest to
keep their old names until a later step.
11) Add the rest of the moving disks to the new disk group.
12) See the disks in the new disk group.
13) Reload the object configuration into the new disk group.
14) Bring the volumes back on-line.
15) Observe the configuration of the new disk group.
dg newdg        782682285.2491.dustbuster
dm olddg04      c1t1d0s2  sliced  2015  2050272  /dev/rdsk/c1t1d0s4
dm olddg05      c1t5d0s2  sliced  2015  2050272  /dev/rdsk/c1t5d0s4
dm olddg06      c1t0d1s2  sliced  2015  2050272  /dev/rdsk/c1t0d1s4
dm olddg07      c1t2d1s2  sliced  2015  2050272  /dev/rdsk/c1t2d1s4
dm olddg08      c1t4d1s2  sliced  2015  2050272  /dev/rdsk/c1t4d1s4
dm olddg09      c1t1d1s2  sliced  2015  2050272  /dev/rdsk/c1t1d1s4
v  vol02        fsgen     ENABLED  ACTIVE  2050268  SELECT    -
pl vol02-01     vol02     ENABLED  ACTIVE  2050268  CONCAT    -        RW
sd olddg08-01   vol02-01  olddg08  0       4        LOG       -
sd olddg08-02   vol02-01  olddg08  4       2050268  0         -
pl vol02-02     vol02     ENABLED  ACTIVE  2050268  CONCAT    -        RW
sd olddg09-01   vol02-02  olddg09  0       4        LOG       -
sd olddg09-02   vol02-02  olddg09  4       2050268  0         -
v  vol04        fsgen     ENABLED  ACTIVE  4194304  SELECT    vol04-01
pl vol04-01     vol04     ENABLED  ACTIVE  4194304  STRIPE    4/128    RW
sd olddg04-01   vol04-01  olddg04  0       1048576  0/0       -
sd olddg05-01   vol04-01  olddg05  0       1048576  1/0       -
sd olddg06-01   vol04-01  olddg06  0       1048576  2/0       -
sd olddg07-01   vol04-01  olddg07  0       1048576  3/0       -
16) Test the data. Remember that the device names have changed to refer
to newdg instead of olddg; you'll need to modify /etc/vfstab and/or your
database configurations to reflect this. Then you'd mount your file
systems, start your database engines, etc.

17) Note that the original database is intact, though the disk naming is
a bit odd. You *can* rename your disks and their subdisks to reflect the
change. This is optional.
dg olddg        782676710.2488.dustbuster
dm olddg01      c1t0d0s2  sliced  2015  2050272  /dev/rdsk/c1t0d0s4
dm olddg02      c1t2d0s2  sliced  2015  2050272  /dev/rdsk/c1t2d0s4
dm olddg03      c1t4d0s2  sliced  2015  2050272  /dev/rdsk/c1t4d0s4
dm olddg10      c1t5d1s2  sliced  2015  2050272  /dev/rdsk/c1t5d1s4
v  vol01        gen       ENABLED  ACTIVE  4194304  SELECT    -
pl vol01-01     vol01     ENABLED  ACTIVE  4194304  CONCAT    -
sd olddg01-01   vol01-01  olddg01  0       2050272  0         -
sd olddg02-01   vol01-01  olddg02  0       2050272  2050272   -
sd olddg03-01   vol01-01  olddg03  0       93760    4100544   -
v  vol03        fsgen     ENABLED  ACTIVE  1024000  SELECT    -
pl vol03-01     vol03     ENABLED  ACTIVE  1024000  CONCAT    -
sd olddg10-01   vol03-01  olddg10  0       1024000  0         -
# vxedit rename olddg10 olddg04
# vxprint -g olddg -s -e "name~/olddg10/"
TYPE NAME        ASSOC     KSTATE  LENGTH   COMMENT
sd   olddg10-01  vol03-01  -       1024000
# vxedit rename olddg10-01 olddg04-01
# vxprint -g olddg -ht
DG NAME         GROUP-ID
DM NAME         DEVICE    TYPE     PRIVLEN  PUBLEN   PUBPATH
V  NAME         USETYPE   KSTATE   STATE    LENGTH   READPOL   PREFPLEX
PL NAME         VOLUME    KSTATE   STATE    LENGTH   LAYOUT    NCOL/WDTH MODE
SD NAME         PLEX      DISK     DISKOFFS LENGTH   [COL/]OFF FLAGS
dg olddg        782676710.2488.dustbuster
dm olddg01      c1t0d0s2  sliced   2015     2050272  /dev/rdsk/c1t0d0s4
dm olddg02      c1t2d0s2  sliced   2015     2050272  /dev/rdsk/c1t2d0s4
dm olddg03      c1t4d0s2  sliced   2015     2050272  /dev/rdsk/c1t4d0s4
dm olddg04      c1t5d1s2  sliced   2015     2050272  /dev/rdsk/c1t5d1s4
v  vol01        gen       ENABLED  ACTIVE   4194304  SELECT    -
pl vol01-01     vol01     ENABLED  ACTIVE   4194304  CONCAT    -         RW
sd olddg01-01   vol01-01  olddg01  0        2050272  0         -
sd olddg02-01   vol01-01  olddg02  0        2050272  2050272   -
sd olddg03-01   vol01-01  olddg03  0        93760    4100544   -
v  vol03        fsgen     ENABLED  ACTIVE   1024000  SELECT    -
pl vol03-01     vol03     ENABLED  ACTIVE   1024000  CONCAT    -         RW
sd olddg04-01   vol03-01  olddg04  0        1024000  0         -
18) Do the same for the disks in newdg and their subdisks.
# vxprint -g newdg -e "name~/olddg/"
TYPE NAME        ASSOC     KSTATE   LENGTH   COMMENT
sd   olddg04-01  vol04-01  -        1048576
sd   olddg05-01  vol04-01  -        1048576
sd   olddg06-01  vol04-01  -        1048576
sd   olddg07-01  vol04-01  -        1048576
sd   olddg08-01  vol02-01  -        4
sd   olddg08-02  vol02-01  -        2050268
sd   olddg09-01  vol02-02  -        4
sd   olddg09-02  vol02-02  -        2050268
# vxedit rename olddg04 newdg01
# vxedit rename olddg05 newdg02
# vxedit rename olddg06 newdg03
# vxedit rename olddg07 newdg04
# vxedit rename olddg08 newdg05
# vxedit rename olddg09 newdg06
# vxedit rename olddg04-01 newdg01-01
# vxedit rename olddg05-01 newdg02-01
# vxedit rename olddg06-01 newdg03-01
# vxedit rename olddg07-01 newdg04-01
# vxedit rename olddg08-01 newdg05-01
# vxedit rename olddg09-01 newdg06-01
# vxedit rename olddg08-02 newdg05-02
# vxedit rename olddg09-02 newdg06-02
# vxprint -g newdg -ht
DG NAME         GROUP-ID
DM NAME         DEVICE    TYPE     PRIVLEN  PUBLEN   PUBPATH
V  NAME         USETYPE   KSTATE   STATE    LENGTH   READPOL   PREFPLEX
PL NAME         VOLUME    KSTATE   STATE    LENGTH   LAYOUT    NCOL/WDTH MODE
SD NAME         PLEX      DISK     DISKOFFS LENGTH   [COL/]OFF FLAGS
dg newdg        782682285.2491.dustbuster
dm newdg01      c1t1d0s2  sliced   2015     2050272  /dev/rdsk/c1t1d0s4
dm newdg02      c1t5d0s2  sliced   2015     2050272  /dev/rdsk/c1t5d0s4
dm newdg03      c1t0d1s2  sliced   2015     2050272  /dev/rdsk/c1t0d1s4
dm newdg04      c1t2d1s2  sliced   2015     2050272  /dev/rdsk/c1t2d1s4
dm newdg05      c1t4d1s2  sliced   2015     2050272  /dev/rdsk/c1t4d1s4
dm newdg06      c1t1d1s2  sliced   2015     2050272  /dev/rdsk/c1t1d1s4
v  vol02        fsgen     ENABLED  ACTIVE   2050268  SELECT    -
pl vol02-01     vol02     ENABLED  ACTIVE   2050268  CONCAT    -         RW
sd newdg05-01   vol02-01  newdg05  0        4        LOG       -
sd newdg05-02   vol02-01  newdg05  4        2050268  0         -
pl vol02-02     vol02     ENABLED  ACTIVE   2050268  CONCAT    -         RW
sd newdg06-01   vol02-02  newdg06  0        4        LOG       -
sd newdg06-02   vol02-02  newdg06  4        2050268  0         -
v  vol04        fsgen     ENABLED  ACTIVE   4194304  SELECT    vol04-01
pl vol04-01     vol04     ENABLED  ACTIVE   4194304  STRIPE    4/128     RW
sd newdg01-01   vol04-01  newdg01  0        1048576  0/0       -
sd newdg02-01   vol04-01  newdg02  0        1048576  1/0       -
sd newdg03-01   vol04-01  newdg03  0        1048576  2/0       -
sd newdg04-01   vol04-01  newdg04  0        1048576  3/0       -
#
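The long run of step-18 renames follows a simple pattern, so it can be generated rather than typed. A sketch using the example disk and subdisk names above (review the output, then feed it to sh):

```shell
#!/bin/sh
# Sketch: generate the 14 vxedit rename commands of step 18 from the
# old->new disk mapping. Names are the example disks above; this only
# prints the commands, it does not run them.
cmds=$(
  i=0
  for old in olddg04 olddg05 olddg06 olddg07 olddg08 olddg09; do
    i=$((i + 1))
    echo "vxedit rename $old newdg0$i"
    echo "vxedit rename ${old}-01 newdg0$i-01"
  done
  # vol02's log plexes carry a second subdisk on these two disks:
  echo "vxedit rename olddg08-02 newdg05-02"
  echo "vxedit rename olddg09-02 newdg06-02"
)
echo "$cmds"
```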
86. Can I test the SOC card from the OK prompt ?
From the open boot prom, select the SOC device and run its selftest:
ok " <soc-device-path>" select-dev
ok soc-selftest
(note: there is a space between the leading " and the device path)
Example output from the command:
<#0> ok " /io-unit@f,e0200000/sbi@0,0/SUNW,soc@3,0" select-dev
<#0> ok soc-selftest
Selftest in progress...
checking forward pattern...
checking reverse pattern...
SOC External memory Test -- Passed
SOC POST Test -- Passed
test communications bits
test communications parameters
test communications parameters sram
test idle bit
test slave error bit
SOC Register Test -- Passed
SOC Port A Internal Transmit/Receive Test -- Passed
SOC Port B Internal Transmit/Receive Test -- Passed
SOC Port A OLC Transmit/Receive Test -- Passed
SOC Port B OLC Transmit/Receive Test -- Timeout reached waiting for SOC -- Failed
<#0> ok
87. Can I restore the rootdisk from backup ?
Providing only the boot disk is in the rootdg you can do the following...
1. restore data and installboot block
2. copy /etc/vfstab, and in the current vfstab, comment out all veritas mount
points..
3. arrange vfstab so it mounts the /dev/dsk and /dev/rdsk dev's only
4. comment out the volume manager stuff in /etc/system, see below
5. touch /etc/vx/reconfig.d/state.d/install-db
6. reboot
7. run vxinstall, encapsulated the boot disk only !!
8. reboot
9. copy back the original /etc/vfstab
10. mountall
11. re-mirror rootdisk if required.
88. Can I make my SparcStation wait for the SSA to spin up ?
This program will be stored in the Boot PROM of the Sparc Station's
(non-volatile memory - NVRAM ), and will delay the boot sequence for 80 seconds.
This delay will give sufficient time for the power-on confidence test of the
SPARC Storage Array to be completed before the boot is attempted.
The program is written in the Forth programming language, as used by the Boot
PROM, and is entered using the "nvedit" editor.
at the "ok" boot prompt :
The non-volatile memory has probably already been programmed with a device
alias for the SSA (so that a boot command of "boot ssa" can be used, instead
of a complex full device name ).
To check this, enter
ok devalias
<<< this will give a list of the standard aliases, and there probably
will be an additional one for the SSA
ok printenv use-nvramrc?
<<< this should say "true"
IF NOT, enter
ok setenv use-nvramrc? true
Then start the nvedit editor :
ok nvedit
<<< this will display line 0 of the NVRAM memory, which will probably
contain the original command used to set up the SSA device alias,
such as :
0: devalias ssa /io-unit......etc.....etc
To leave this line undisturbed, enter
Ctrl N      ( = move to next line in Editor )
You will then get line number 1 , which should be a blank line.
Enter the commands for the delay program ( 8 lines ), exactly as follows :
( " ^ " equals "spacebar" ; " C/R " equals "return" )
ok nvstore
<< to store Editor buffer into NVRAM chip
ok nvramrc eval
<< this will execute the stored program, i.e. the delay will start,
and count up to 79 on the screen, to test that the program was
keyed in correctly. It will not boot the SSA at this point.
Then the system can be rebooted, from the SSA, including the delay, by entering
:
ok reset
( In case of any errors whilst keying-in the program, here are some useful
command for the nvedit editor :
Ctrl B
Back one character

Ctrl
Del
Ctrl
Ctrl
Ctrl
Ctrl
Ctrl
89.

F
K
N
P
L
C

Forward one character


Delete the previous character
Delete the next line
Move to next line
Move to previous line
List all lines
Exit the editor.
)

Can I move a SSA to a new system and preserve the data ?

OK. It's easy to do and it's even documented (incompletely, but...) in the
SSA Volume Manager System Administrator's Guide (802-1242-10), Appendix B,
Topic 7, Reinstallation Recovery.
In essence, it's easiest if you already have a rootdg on the receiving system
and the volumes in question are in some other group. Nevertheless, I didn't
have that luxury so here is what to do. This also assumes I'm not going
to need to use the existing system after the SSA is moved or I'm going to
reinstall it. I also am not going to have an encapsulated root on the receiving
system, at least not initially.
Target System:
Receiving System (with same host name):
You should now issue "vxdisk list", "vxprint -SA", and "vxprint -g groupname"
to ensure that all your disks are seen and that the configuration is
recognized.
The next parts are undocumented but necessary in order to automate startup
and update vfstab entries.
10. Change to multi-user status. (CTRL-D)
11. Bring up openwin and start vxva (volume manager)
12. Select world or group view.
13. Select volume to mount (refer to vxprint -SA)
14. Select Advanced->Volume-Start or START-all
15. Select Basic->Filesystem->Mount
16. Fill in pop-up menu for volume's mountpoint and select mount at startup
17. Do the same for all other volumes necessary.
18. Your mountpoints have now been created and the entries recreated in
/etc/vfstab.
90. Can I have a rootdg without encapsulating the root disk ?
SRDB ID: 11136
SYNOPSIS: How to setup a rootdg without encapsulating a disk
DETAIL DESCRIPTION:
In many cases it is not desirable to encapsulate a disk (usually the
bootdisk) to set up a rootdg disk group. The rootdg disk group is
mandatory with current releases of VXVM, this procedure bypasses
the need to encapsulate a disk but still allows a rootdg.
The procedure is as follows:
1. Set the initial operating mode to disable. This disables
transactions and creates the rendezvous file install-db,
for utilities to perform diagnostic and initialization
operations.
vxconfigd -m disable
Verify with ps afterward:
(ps -ef | grep vxconfigd)
2. Initialize the database:
vxdctl init
3. Make the rootdg disk group:
vxdg init rootdg
4. Add a simple slice (usually about two cylinders in size):
vxdctl add disk cWtXdYsZ type=simple
Note: A warning message will appear here.
5. Add disk records:
vxdisk -f init cWtXdYsZ type=simple
6. Add slice to rootdg:
vxdg adddisk cWtXdYsZ
7. Enable transactions:
vxdctl enable
8. Remove this file; the volume manager will not activate if this
file exists.
rm /etc/vx/reconfig.d/state.d/install-db
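Put together, the eight steps above can be sketched as a single script. This
is a hedged dry run: the disk name is a placeholder, and RUN=echo prints each
command instead of executing it; remove it only on a system where you really
want a slice-only rootdg.

```shell
#!/bin/sh
# Dry-run sketch of the rootdg-without-encapsulation procedure above.
# DISK is a placeholder -- substitute your own small (~2 cylinder) slice.
DISK=c0t1d0s7
RUN=echo

$RUN vxconfigd -m disable                       # 1. disable transactions
$RUN vxdctl init                                # 2. initialize the database
$RUN vxdg init rootdg                           # 3. make the rootdg disk group
$RUN vxdctl add disk $DISK type=simple          # 4. add a simple slice
$RUN vxdisk -f init $DISK type=simple           # 5. add disk records
$RUN vxdg adddisk $DISK                         # 6. add slice to rootdg
$RUN vxdctl enable                              # 7. enable transactions
$RUN rm /etc/vx/reconfig.d/state.d/install-db   # 8. remove the rendezvous file
```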
91. Can I increase the size of the private region ?
The private region of a disk is specified by using the privlen option to the
vxdisksetup command. For example,
# vxdisksetup -i c1t0d0 privlen=2048
92. Cannot import Diskgroup - "No configuration copies"
This is a very rare occurrence, but is one of the most frustrating.
Why did this happen ?
The private data regions are accessed frequently by the Veritas kernel;
it is always updating status information in the private regions.
We cannot say exactly what causes this, neither can we definitively state
that all the private regions were totally bad.
What we can say is that this is very rare.
We have not been able to reproduce it, so it isn't easy to pinpoint for a
complete fix that would prevent it from ever happening.
The only way we can reproduce it is to physically overwrite all the private
regions on all the disks.
If even one disk is not overwritten, it can and will pull in the diskgroup.
How do we overcome the problem
------------------------------
The following procedure needs an existing rootdg and volume manager running,
so it is not applicable for a lost rootdg.
To reconstruct the configuration
--------------------------------
To get the configuration back we would like to use a copy of the configuration
that was taken when the system was running OK, with the commands:
# vxprint -hmQqspv -D - > /<directory path>/<filename>
# vxdisk list
This last command gives you a list of access name and media name pairs.
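Since this recovery depends on having a saved copy of the configuration, it is
worth capturing one regularly, e.g. from cron. A minimal sketch, with
assumptions flagged in comments (the output directory is invented, and
RUN=echo keeps it a dry run):

```shell
#!/bin/sh
# Sketch: periodically save the VxVM configuration and the access/media
# name pairs so a "No configuration copies" recovery has something to
# work from. OUTDIR is an assumption; remove RUN=echo on a real VxVM host.
OUTDIR=/tmp/vxvm-backup
DATE=$(date +%Y%m%d)
RUN=echo

mkdir -p "$OUTDIR"
$RUN sh -c "vxprint -hmQqspv -D - > $OUTDIR/vxconfig.$DATE"
$RUN sh -c "vxdisk list > $OUTDIR/vxdisklist.$DATE"
```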
If this happens and the customer does not have copies of the configuration
from when the system was good, we have in the past run the following command.
If this method is used the customer has to be able to check the configuration
and data, as there is no guarantee that this is the latest configuration.
# /etc/vx/diag.d/vxprivutil dumpconfig /dev/rdsk/cXtXdXs2 | \
vxprint -hmQqspv -D - > /<directory path>/<filename>
NB. You may well have to grep out the access name and media name pairs from
the output file.
Once you have the configuration from above you can create the group as
follows.
Create the new disk group
-------------------------
NB. Disk names MUST correspond to the same disks exactly as they were
originally (see your backup copy of the vxdisk list output).
We also need a list of the names which equate to the physical disks;
this can be obtained by keeping the output from 'vxdisk list' or
it will have to be grep'ed out of the temporary file.
Initialise the diskgroup, using one of the disks from the lost disk group:
# vxdg init <dg> <access name>=<medianame>
NB. Substitute the disk group, disk access name and media name
from the saved vxdisk list output.
Example:
# vxdg init datadg disk01=c2t0d0s2
Add in the rest of the disks:
# vxdg -g <dg> adddisk <access name>=<medianame> [other disks]

Recreate the volume(s) configuration from the configuration file:
# vxmake -g <dg> -d /<directory path>/<filename>
NB. If this fails saying the input file is too large, then split the
file up and run the above command for each piece.
In most cases it works; it is only very large configurations,
and even then we have only had to split the file into two pieces.
You can use the above command with the -V option, which will
go through the motions but not actually do anything.
Now bring the volumes back online:
# vxvol -g <dg> startall
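The "split the file up" advice can be sketched as below. Note that vxprint -m
records are blank-line separated, so the file should be split on record
boundaries rather than with a naive line split; the file name, disk group,
record counts, and dummy data here are all invented for illustration.

```shell
#!/bin/sh
# Sketch: when vxmake says the input file is too large, split the saved
# configuration on record boundaries and feed the pieces to vxmake one at
# a time. RUN=echo keeps the vxmake calls a dry run.
CFG=/tmp/vxconfig.saved
RUN=echo
rm -f /tmp/vxconfig.part.*

# Dummy saved configuration: six small records separated by blank lines.
awk 'BEGIN { for (i = 1; i <= 6; i++) printf "vol vol%02d\n  len=1024\n\n", i }' > "$CFG"

# Split into pieces of at most 3 records each, never mid-record
# (RS="" makes awk read blank-line separated records).
awk -v max=3 -v pre=/tmp/vxconfig.part. '
    BEGIN { RS = ""; ORS = "\n\n" }
    { file = pre int((NR - 1) / max); print > file }
' "$CFG"

# Run vxmake once per piece (echoed here).
for piece in /tmp/vxconfig.part.*; do
    $RUN vxmake -g datadg -d "$piece"
done
```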
93. My volume is ENABLED/SYNC but doesn't appear to be syncing. What do I do ?
# vxvol -f -g [dg] resync [vol]
This will force the volume to resync.
94. How can I view the contents of my Private Region ?
95. I need to change the hostname in my Private Regions and my
/etc/vx/volboot file. How do I do it ?
96. Veritas has changed my VTOC layout when it encapsulated my rootdisk. How
do I recover my original VTOC?
97. My volume won't start. I think it may be due to my volume not on a cylinder
boundary. How can I determine if this is true ?
This value should then divide evenly into the length and offset values which
you get for a volume in a vxprint output from Veritas. If the offset value for
the volume is not divisible by the number of blocks per cylinder found above,
then the volume does not start on a cylinder boundary. If the length is not
divisible by the number of blocks per cylinder, then it doesn't end on a
cylinder boundary.
Note that on occasions, you will be left with an extra block. This is normal
and is believed to be used for disk label preservation.
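As a concrete example, the divisibility check can be done with shell
arithmetic. The geometry below (16 heads, 135 sectors per track) and the
volume offset/length are hypothetical values, not taken from the text; on a
real disk the nhead and nsect figures come from prtvtoc or the format utility.

```shell
#!/bin/sh
# Sketch of the cylinder-boundary check described above.
NHEAD=16
NSECT=135
BLKS_PER_CYL=$((NHEAD * NSECT))   # 2160 blocks per cylinder

OFFSET=432000    # volume offset from vxprint (hypothetical)
LENGTH=819280    # volume length from vxprint (hypothetical)

if [ $((OFFSET % BLKS_PER_CYL)) -eq 0 ]; then
    echo "volume starts on a cylinder boundary"
else
    echo "volume does NOT start on a cylinder boundary"
fi
if [ $((LENGTH % BLKS_PER_CYL)) -eq 0 ]; then
    echo "volume ends on a cylinder boundary"
else
    echo "volume does NOT end on a cylinder boundary"
fi
```

With these numbers the volume starts on a boundary (432000 / 2160 = 200
exactly) but does not end on one.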


98. Why can't I simply mirror back my secondary rootdisk mirror to my primary
rootdisk mirror if my primary disk fails ?
So, I think you can now guess what will happen when you resync your secondary
mirror back to the primary. The private region gets written to the beginning
of your disk, offsetting all your filesystems and rendering your saved VTOC
useless. Now you can never go back to the underlying devices. Because of this,
upgrades and future recoveries may not be possible.
There is an infodoc which details an alternate method, using vxmksdpart, which
can be used for putting back the VTOC on the mirrored root disk. The infodoc
is 14820.
99. How can I make hot-relocation use ONLY spare disks ?
Beginning with SEVM 2.4, if the following value is set in the file
'/etc/default/vxassist':
100. How do I enable debug mode on vxconfigd ?
101. How do I disable DMP, dynamic multi-pathing ?
INFODOC ID: 18314
STATUS: Issued
SYNOPSIS: How to disable dmp
DETAIL DESCRIPTION:
This infodoc explains the 7 steps to disable DMP.
*****NOTE*******
Before these steps are done:
1. umount all file systems created on volume manager volumes
2. Stop the Volume Manager (vxdctl stop)
Then continue:
3. Remove the vxdmp driver
forceload: drv/vxdmp
102. How do I take a disk out from under Veritas control ?
The vxdiskunsetup command undoes the configuration setup by vxdisksetup and
makes the specified disks unusable by the Volume Manager. It can be applied only
to disks that are not in active use within an imported disk group.
Note that the disk must not be a member of any disk group when this is done. To
remove a disk from a diskgroup, see the answer to question 105.
103. Volume Manager has relayed out my VTOC after encapsulation. I now need to
get back my original VTOC but there is no original in
/etc/vx/reconfig.d/disk.d/vtoc. How can I get back my /opt filesystem ?
Partition  Tag  Flags  First Sector  Sector Count  Last Sector  Mount Directory
0          2    00     0             410400        410399
1          14   01     0             4154160       4154159
2          5    00     0             4154160       4154159
4          7    00     1240320       819280        2059599
5          4    00     2059600       1024480       3084079
7          15   01     4152640       1520          4154159
Because of the sizes, we can see that slice 7 is the private region and that
slice 1 is the public region. The private region has a tag of 15, the public
region has a tag of 14. Now check the /etc/vfstab to see where /opt was
originally mounted :

Now check the rootvol volume in the vxprint -Ath output. We know that
root starts at 0 and ends at 410399. Adding 1 to this gives us 410400. This
takes into consideration the boot block cylinder. No other offset is required
as the Private Region is at the end of the disk in this case.
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
sd rootdisk-01 rootvol-01 rootdisk 0 410399 1 c1t12d0 ENA
So we now know the Veritas Private Region is not at the start of the disk. Now
we check vxprint -Ath for optvol as we need to know where it starts.
V NAME USETYPE KSTATE STATE LENGTH READPOL PREFPLEX
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
v opt fsgen ENABLED ACTIVE 819280 ROUND -
pl opt-02 opt ENABLED ACTIVE 819280 CONCAT - RW
sd rootmirror-02 opt-02 rootmirror 1024480 819280 0 c3t14d0 ENA
pl opt-01 opt ENABLED ACTIVE 819280 CONCAT - RW
sd rootdisk-05 opt-01 rootdisk 421039 819280 0 c1t12d0 ENA
From the above we can tell that opt-01 starts at disk offset 421039 and has a
length of 819280. Adding 1 to 421039 gives us 421040, so this is the start
sector for /opt. Now to find the end: 421040 + 819280 = 1240320. So 1240320 is
the end of /opt (matching the first sector of slice 4 in the VTOC above).
Now label slice 3 as 421040 to 1240320.
Once labeled, fsck -n /dev/rdsk/c1t12d0s3 says "last mounted as /opt".
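The offset arithmetic above can be reproduced in the shell, using the numbers
from this example's vxprint output:

```shell
#!/bin/sh
# The /opt offset arithmetic from the walkthrough above.
DISKOFFS=421039   # opt-01 subdisk disk offset from vxprint
LENGTH=819280     # opt-01 subdisk length

START=$((DISKOFFS + 1))   # add 1, as in the walkthrough
END=$((START + LENGTH))

echo "label slice 3 as $START to $END"
```

This prints the same 421040 to 1240320 range derived by hand above.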
104. What are the different status codes in a 'vxdisk list' output ?
105. How do I remove a disk from a diskgroup ?
106. How can I fix "vxvm: WARNING: No suitable partition from swapvol to set
as the dump device." ?
See Infodoc 16646.
107. After an A5000 disk failure, I lost my rootdg and now I get vxvm:vxdisk:
ERROR: Cannot get records from vxconfigd: Record not in disk group when I run
vxdisk list and vxvm:vxdctl: ERROR: enable failed: System error occurred in the
client when I run vxdctl enable. How can I fix it ?
108. After some A5000/Photon loop offline/onlines, my volume now has one plex
in STALE state and the other in IOFAIL. How can I fix this ?
109. Is there any VxVM limitation on the number of volumes which can be created
on a system ?
This value can be changed by inserting the following line in /etc/system :
On 2.6 systems the max minor number is 0x3ffff (decimal 262143). Please
check the 2.5.1 max minor number info. I think they are the same.
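The hex-to-decimal correspondence quoted above is easy to verify with shell
arithmetic:

```shell
#!/bin/sh
# Quick check of the maximum minor number quoted above:
# 0x3ffff is 262143 decimal.
echo $((0x3ffff))
```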
110. Configuration copies explained
You can set the number by using :
when creating a new diskgroup or by vxedit on an existing diskgroup. e.g.