
Veritas Storage Foundation 6.0 for UNIX:
Manage and Administer
(Appendices)
100-002687-C
CONFIDENTIAL - NOT FOR DISTRIBUTION
COURSE/LAB DEVELOPERS
Bilge Gerrits
James Kenney
Carly Storrie
LEAD SUBJECT MATTER
EXPERT
Brad Willer
TECHNICAL
CONTRIBUTORS AND
REVIEWERS
Steve Evans
Joe Gallagher
Freddie Gilyard
Graeme Gofton
Tony Griffiths
Gene Henriksen
Bob Lucas
Robert Owen
Kleber Saldanha
Kim Sanchez-Paul
Kalyan Subramaniyam
Stephen Williams
Randal Williams
Copyright 2012 Symantec Corporation. All rights reserved.
Symantec, the Symantec Logo, and VERITAS are trademarks or
registered trademarks of Symantec Corporation or its affiliates in
the U.S. and other countries. Other names may be trademarks of
their respective owners.
THIS PUBLICATION IS PROVIDED AS IS AND ALL
EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS
AND WARRANTIES, INCLUDING ANY IMPLIED
WARRANTY OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE
DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH
DISCLAIMERS ARE HELD TO BE LEGALLY INVALID.
SYMANTEC CORPORATION SHALL NOT BE LIABLE FOR
INCIDENTAL OR CONSEQUENTIAL DAMAGES IN
CONNECTION WITH THE FURNISHING, PERFORMANCE,
OR USE OF THIS PUBLICATION. THE INFORMATION
CONTAINED HEREIN IS SUBJECT TO CHANGE WITHOUT
NOTICE.
No part of the contents of this book may be reproduced or
transmitted in any form or by any means without the written
permission of the publisher.
Veritas Storage Foundation 6.0 for UNIX: Manage and Administer
Symantec Corporation
World Headquarters
350 Ellis Street
Mountain View, CA 94043
United States
http://www.symantec.com
Table of Contents
Appendix A: Labs
Lab 1: Administering Volume Manager ....................................................... A-3
Exercise 1: Preparing the environment for the performance labs.......... A-5
Exercise 2: Exploring the vxstat utility................................................. A-6
Exercise 3: Exploring the vxtrace utility............................................... A-8
Exercise 4: Changing the volume layout............................................... A-10
Exercise 5: Optional lab: Monitoring tasks............................................ A-11
Lab 2: Managing Devices Within the VxVM Architecture........................... A-13
Exercise 1: Administering the Device Discovery Layer......................... A-15
Exercise 2: Displaying DMP information............................................... A-16
Exercise 3: Displaying DMP statistics................................................... A-18
Exercise 4: Enabling and disabling DMP paths .................................... A-19
Exercise 5: Managing array policies ..................................................... A-20
Exercise 6: Optional lab: DMP performance tuning templates ............. A-22
Lab 3: Resolving Hardware Problems........................................................ A-25
Exercise 1: Recovering a temporarily disabled disk group ................... A-27
Exercise 2: Preparing for disk failure labs............................................. A-29
Exercise 3: Recovering from temporary disk failure ............................. A-30
Exercise 4: Recovering from permanent disk failure ............................ A-32
Exercise 5: Optional lab: Recovering from temporary disk failure -
Layered volume .................................................................................... A-34
Exercise 6: Optional lab: Recovering from permanent disk failure -
Layered volume .................................................................................... A-37
Exercise 7: Optional lab: Replacing physical drives
(without hot relocation).......................................................................... A-39
Exercise 8: Optional lab: Replacing physical drives
(with hot relocation)............................................................................... A-41
Exercise 9: Optional lab: Recovering from temporary disk failure
with vxattachd daemon...................................................................... A-43
Exercise 10: Optional lab: Exploring spare disk behavior..................... A-44
Exercise 11: Optional lab: Using the Support Web Site........................ A-47
Lab 4: Using Full-Copy Volume Snapshots................................................ A-49
Exercise 1: Full-sized instant snapshots............................................... A-50
Exercise 2: Off-host processing using split-mirror volume snapshots .. A-51
Exercise 3: Optional lab: Performing snapshots using VOM................ A-55
Exercise 4: Optional lab: Traditional volume snapshots ....................... A-56
Lab 5: Using Copy-on-Write SF Snapshots ............................................... A-59
Exercise 1: Using space-optimized instant volume snapshots ............. A-61
Exercise 2: Restoring a file system using storage checkpoints ............ A-64
Exercise 3: Optional lab: Storage checkpoint behavior ........................ A-66
Exercise 4: Optional lab: Using checkpoints in VOM............................ A-68
Lab 6: Using Advanced VxFS Features ..................................................... A-69
Exercise 1: Compressing files and directories with VxFS..................... A-70
Exercise 2: Deduplicating VxFS data.................................................... A-72
Exercise 3: Using the FileSnap feature................................................ A-74
Lab 7: Using Site Awareness with Mirroring.............................................. A-75
Exercise 1: Configuring site awareness ............................................... A-77
Exercise 2: Analyzing the volume read policy...................................... A-79
Exercise 3: Optional lab: Analyzing the impact of disk failure in a
site-consistent environment.................................................................. A-81
Exercise 4: Optional lab: A manual fire drill operation with remote
mirroring ............................................................................................... A-83
Lab 8: Implementing SmartTier ................................................................. A-85
Exercise 1: Configuring a multi-volume file system and SmartTier ...... A-86
Exercise 2: Testing SmartTier .............................................................. A-88
Lab 9: Replicating a Veritas File System................................................... A-91
Exercise 1: Setting up and performing replication for a VxFS file
system.................................................................................................. A-93
Exercise 2: Restoring the source file system using the replication
target .................................................................................................... A-95
Lab App C: Optional Lab: Importing LUN Snapshots ................................ A-99
Exercise 1: LUN snapshots setup ...................................................... A-100
Exercise 2: Importing clone disk groups............................................. A-102
Lab App D: Optional Lab: Managing the Boot Disk with SF .................... A-103
Exercise 1: Optional lab: Encapsulation and boot disk mirroring ....... A-105
Exercise 2: Optional lab: Testing the boot disk mirror........................ A-106
Exercise 3: Optional lab: Removing the boot disk mirror ................... A-107
Exercise 4: Optional lab: Creating the boot disk snapshot ................. A-108
Exercise 5: Optional lab: Testing and removing the boot disk
snapshot ............................................................................................. A-109
Exercise 6: Optional lab: Unencapsulating.......................................... A-110
Appendix B: Lab Solutions
Lab 1: Administering Volume Manager........................................................ B-3
Exercise 1: Preparing the environment for the performance labs .......... B-5
Exercise 2: Exploring the vxstat utility................................................. B-6
Exercise 3: Exploring the vxtrace utility............................................... B-8
Exercise 4: Changing the volume layout .............................................. B-11
Exercise 5: Optional lab: Monitoring tasks ........................................... B-13
Lab 2: Managing Devices Within the VxVM Architecture .......................... B-15
Exercise 1: Administering the Device Discovery Layer ........................ B-17
Exercise 2: Displaying DMP information .............................................. B-19
Exercise 3: Displaying DMP statistics .................................................. B-22
Exercise 4: Enabling and disabling DMP paths.................................... B-25
Exercise 5: Managing array policies..................................................... B-27
Exercise 6: Optional lab: DMP performance tuning templates............. B-31
Lab 3: Resolving Hardware Problems ....................................................... B-35
Exercise 1: Recovering a temporarily disabled disk group................... B-37
Exercise 2: Preparing for disk failure labs............................................ B-41
Exercise 3: Recovering from temporary disk failure............................. B-42
Exercise 4: Recovering from permanent disk failure............................ B-47
Exercise 5: Optional lab: Recovering from temporary disk failure -
Layered volume .................................................................................... B-51
Exercise 6: Optional lab: Recovering from permanent disk failure -
Layered volume .................................................................................... B-56
Exercise 7: Optional lab: Replacing physical drives
(without hot relocation).......................................................................... B-60
Exercise 8: Optional lab: Replacing physical drives
(with hot relocation)............................................................................... B-64
Exercise 9: Optional lab: Recovering from temporary disk failure
with vxattachd daemon...................................................................... B-67
Exercise 10: Optional lab: Exploring spare disk behavior..................... B-69
Exercise 11: Optional lab: Using the Support Web Site........................ B-75
Lab 4: Using Full-Copy Volume Snapshots................................................ B-77
Exercise 1: Full-sized instant snapshots............................................... B-78
Exercise 2: Off-host processing using split-mirror volume snapshots .. B-81
Exercise 3: Optional lab: Performing snapshots using VOM................ B-89
Exercise 4: Optional lab: Traditional volume snapshots ....................... B-93
Lab 5: Using Copy-on-Write SF Snapshots ............................................... B-99
Exercise 1: Using space-optimized instant volume snapshots ........... B-101
Exercise 2: Restoring a file system using storage checkpoints .......... B-109
Exercise 3: Optional lab: Storage checkpoint behavior ...................... B-115
Exercise 4: Optional lab: Using checkpoints in VOM.......................... B-121
Lab 6: Using Advanced VxFS Features ................................................... B-123
Exercise 1: Compressing files and directories with VxFS................... B-124
Exercise 2: Deduplicating VxFS data.................................................. B-128
Exercise 3: Using the FileSnap feature............................................... B-132
Lab 7: Using Site Awareness with Mirroring............................................. B-135
Exercise 1: Configuring site awareness.............................................. B-137
Exercise 2: Analyzing the volume read policy..................................... B-142
Exercise 3: Optional lab: Analyzing the impact of disk failure in a
site-consistent environment ................................................................ B-146
Exercise 4: Optional lab: A manual fire drill operation with remote
mirroring.............................................................................................. B-151
Lab 8: Implementing SmartTier ................................................................ B-153
Exercise 1: Configuring a multi-volume file system and SmartTier..... B-154
Exercise 2: Testing SmartTier............................................................. B-160
Lab 9: Replicating a Veritas File System.................................................. B-167
Exercise 1: Setting up and performing replication for a VxFS file
system................................................................................................. B-169
Exercise 2: Restoring the source file system using the replication
target ................................................................................................... B-174
Lab App C: Optional Lab: Importing LUN Snapshots............................... B-181
Exercise 1: LUN snapshots setup....................................................... B-182
Exercise 2: Importing clone disk groups ............................................. B-185
Lab App D: Optional Lab: Managing the Boot Disk with SF..................... B-189
Exercise 1: Optional lab: Encapsulation and boot disk mirroring........ B-191
Exercise 2: Optional lab: Testing the boot disk mirror ........................ B-193
Exercise 3: Optional lab: Removing the boot disk mirror ................... B-195
Exercise 4: Optional lab: Creating the boot disk snapshot ................. B-196
Exercise 5: Optional lab: Testing and removing the boot disk
snapshot ............................................................................................. B-198
Exercise 6: Optional lab: Unencapsulating......................................... B-201
Appendix C: Importing LUN Snapshots
How Volume Manager detects hardware snapshots ................................... C-3
Managing clone disks .................................................................................. C-8
Using disk tags .......................................................................................... C-16
Appendix D: Managing the Boot Disk with SF
Placing the boot disk under VxVM control ................................................... D-3
Creating an alternate boot disk.................................................................. D-18
Administering the boot disk........................................................................ D-22
Removing the boot disk from VxVM control............................................... D-28
Appendix E: Using the VEA for Administrative Operations
Administering Volume Manager with VEA................................................... E-3
Administering point-in-time copies with VEA ............................................... E-5
Administering SmartTier and multi-volume file systems .............................. E-7
Disk encapsulation with VEA..................................................................... E-12
Index
Appendix A
Labs
Lab 1: Administering Volume Manager
In this lab, you will analyze Volume Manager I/O operations using the vxstat
and vxtrace utilities. You will also practice changing volume layouts.
Additional exercises give you practice managing VxVM tasks.
This lab contains the following exercises:
Preparing the environment for the performance labs
Exploring the vxstat utility
Exploring the vxtrace utility
Changing the volume layout
Optional lab: Monitoring tasks
Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured, and licensed. You also need a minimum of six external disks to use
during the labs.
Lab information
In preparation for this lab, you need the following information about your lab
environment.
root password: veritas
Host name of the lab system: sym1
Boot disk: sda
2nd internal disk: sdb
Shared data disks: emc0_dd7 - emc0_d12, 3pardata0_49 - 3pardata0_60
Location of lab scripts (if any): /student/labs/sf/sf60
Exercise 1: Preparing the environment for the performance labs

It is possible that your system contains existing disk groups and volumes. Use the
following steps to free up the system in preparation for performance testing.
1 Determine whether any file systems are mounted on VxVM volumes using the
df -k command. If there are, unmount them using the umount command.
2 Use the vxdg command to destroy any existing disk groups.
3 If the SmartMove feature is enabled for all volumes, change it to thin
provisioning devices only.
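The steps above can be sketched as the following command sequence. This is a
sketch only: /mountpoint and diskgroup are placeholders, and the vxdefault
tunable name and value shown for SmartMove are assumptions about the SF 6.0
interface that you should verify with vxdefault list on your system.

df -k                                  # identify file systems mounted on VxVM volumes
umount /mountpoint                     # repeat for each mounted VxVM volume
vxdg list                              # list the imported disk groups
vxdg destroy diskgroup                 # repeat for each data disk group
vxdefault list                         # check the current SmartMove setting
vxdefault set usefssmartmove thinonly  # restrict SmartMove to thin devices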
Exercise 2: Exploring the vxstat utility

In this exercise, you analyze the performance of a disk in the testdg disk group
for 32K random writes. You use the vxbench program, which is included as part of
the VRTSspt package, to generate an I/O load.
1 Create a non-CDS disk group named testdg that contains one disk. Use the
second internal disk. Name the disk testdg01.
2 Determine the maximum volume size that can be created using the single
drive. Create a volume named testvol in the testdg disk group that is the
maximum size on the single drive.
3 Create a VxFS file system on the testvol volume. Create a /test mount
point and mount the file system on it.
4 Invoke the vxstat command to begin drive analysis on the testvol volume.
Set the vxstat interval to display statistics every second. Statistics begin
printing every second, and all statistics are displayed as 0 until you begin
sending I/O to the volume.
Note: To be able to analyze the output later, you can direct it to a file, for
example /tmp/vxstat.out.
5 Use the vxbench utility to start several invocations of the following
vxbench command. Note that the vxbench location
(/opt/VRTSspt/FS/VxBench) should be in your PATH variable.
vxbench_rhel5_x86_64 -w rand_write -i \
iosize=32,iocount=16384,maxfilesize=3072000 \
/test/test1 &
Note: The vxbench program creates and writes to the file test1 in the
/test directory.
6 When you execute the vxbench command, the vxstat output in the other
terminal window begins to display data. Wait for all the vxbench commands
to finish executing, then stop the vxstat output by typing CTRL-C in the
terminal where vxstat is running. Analyze the vxstat output to determine
the peak performance of the drive.
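Steps 1 through 6 can be sketched as follows, assuming a Linux lab host where
sdb is the second internal disk. The mkfs/mount syntax shown is Linux-specific
(Solaris uses mkfs -F vxfs), and the awk column position is an assumption about
the vxstat output format, so check a few lines of the captured file first.

vxdisksetup -i sdb format=sliced           # initialize as a non-CDS disk
vxdg init testdg testdg01=sdb cds=off
vxassist -g testdg maxsize                 # report the maximum volume size
vxassist -g testdg make testvol <maxsize>
mkfs -t vxfs /dev/vx/rdsk/testdg/testvol
mkdir /test
mount -t vxfs /dev/vx/dsk/testdg/testvol /test
vxstat -g testdg -i 1 testvol | tee /tmp/vxstat.out
# after the vxbench runs finish and vxstat is stopped with CTRL-C:
awk '$2 == "testvol" { if ($3 > max) max = $3 } END { print max }' /tmp/vxstat.out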
Exercise 3: Exploring the vxtrace utility

The purpose of this exercise is to provide an introduction to the usage of the
vxtrace utility.
1 Use the vxbench utility to create some random writes to the /test/test1
file using the following vxbench command:
vxbench_rhel5_x86_64 -w rand_write -i \
iosize=32,iocount=16384,maxfilesize=3072000 \
/test/test1 &
2 Dump trace data on virtual and physical disk I/Os for the volume named
testvol to the file named /tmp/vxtrace.out.
3 When the vxbench command completes, stop filling the file.
4 Read the trace records from the file named /tmp/vxtrace.out. Notice
that the vxbench program is writing to the testvol volume (vdev
testvol) on the disk that you created for the testdg disk group.
5 Dump error trace data on all disks to the screen using the following vxtrace
command. Note that the output displays error messages only as they occur.
vxtrace -e disk
Note: When this vxtrace command is started, the prompt is not returned. Do
not end the command. Leave that window active and run the next
commands in a different terminal window.
6 Use the vxdisk list command to determine the path(s) to the disk used in
the testdg disk group. Then use the vxdmpadm -f disable path=sdb
command to disable all paths to the disk.
7 Use the vxbench utility to create some random writes to the /test/test1
file as shown in step 1. Note that the terminal running the vxtrace -e
disk command now shows write errors.
8 Use the vxdmpadm enable path=sdb command to enable all paths to
the disk.
9 Unmount the /test file system and destroy the testdg disk group.
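One possible command sequence for the whole exercise is sketched below. The
vxtrace option letters (-o for event types, -d to dump to a file, -f to read
one back) are recalled from the manual page rather than taken from this course,
so verify them with man vxtrace before relying on them.

vxtrace -g testdg -o dev,disk -d /tmp/vxtrace.out testvol &   # step 2
kill %1                                    # step 3: stop filling the file
vxtrace -g testdg -f /tmp/vxtrace.out      # step 4: read the trace records
vxtrace -e disk                            # step 5: leave running in this terminal
# in a second terminal:
vxdisk list                                # step 6: identify the paths
vxdmpadm -f disable path=sdb
# step 7: rerun the vxbench command; write errors appear in the vxtrace terminal
vxdmpadm enable path=sdb                   # step 8
umount /test                               # step 9
vxdg destroy testdg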
Exercise 4: Changing the volume layout

1 Create a disk group called appdg with four disks (emc0_dd7 - emc0_d10).
2 Create a 20-MB concatenated mirrored volume called appvol. Create a Veritas
file system on the volume and mount it on /app.
3 Add data to the volume and verify that the file has been added.
4 Change the volume layout from its current layout (mirrored) to a non-layered
mirror-stripe with two columns and a stripe unit size of 128 sectors (64K).
Monitor the progress of the relayout operation, and display the volume layout
after each command that you run.
5 Verify that the file is still accessible.
6 Unmount the file system on the volume and remove the volume.
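A possible command sequence for this exercise is sketched below. <disk1>
through <disk4> stand for the four emc0 devices, and the Linux mkfs/mount
syntax is an assumption about the lab platform.

vxdg init appdg <disk1> <disk2> <disk3> <disk4>
vxassist -g appdg make appvol 20m layout=mirror
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
echo "test data" > /app/testfile; ls -l /app
vxassist -g appdg relayout appvol layout=mirror-stripe ncol=2 stripeunit=128
vxtask list                                # monitor the relayout progress
vxprint -g appdg -htr appvol               # display the layout after each command
cat /app/testfile                          # verify the file is still accessible
umount /app
vxassist -g appdg remove volume appvol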
Exercise 5: Optional lab: Monitoring tasks

Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.

In this optional lab, you track volume relayout processes using the vxtask
command and recover from a vxrelayout crash using the command line.
To begin, you should have at least four disks in the disk group that you are using.
1 Create a concatenated volume called appvol in the appdg disk group, with a
size of 1 GB, using the vxassist command. When the volume is created,
mirror the volume. Assign a task tag (appvol_monitor) to the task and run the
vxassist mirror command in the background.
2 View the progress of the task.
3 When the task is complete, use vxassist to relayout the volume to
stripe-mirror. Use a stripe unit size of 256K, use two columns, and assign the
process to the task tag used above.
4 In another terminal window, abort the task to simulate a crash during relayout.
5 Reverse the relayout operation. View the layout of the volume after the
reversal of the relayout operation completes. Notice that the volume layout is
back to its original state, but it is now layered. Change the layout to
non-layered.
6 Destroy the appdg disk group.
End of lab
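The optional task-monitoring steps can be sketched as follows. The -o bg option
to vxrelayout and the mirror-concat convert target are assumptions to be
verified against the vxrelayout and vxassist manual pages.

vxassist -g appdg make appvol 1g
vxassist -g appdg -t appvol_monitor -b mirror appvol
vxtask monitor appvol_monitor              # view the progress of the task
vxassist -g appdg -t appvol_monitor -b relayout appvol \
layout=stripe-mirror ncol=2 stripeunit=256k
vxtask abort appvol_monitor                # in another terminal: simulate a crash
vxrelayout -g appdg -o bg reverse appvol   # reverse the relayout
vxprint -g appdg -htr appvol               # layout is back, but layered
vxassist -g appdg convert appvol layout=mirror-concat
vxdg destroy appdg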
Lab 2: Managing Devices Within the VxVM Architecture
In this lab, you explore the VxVM tools used to manage the device discovery layer
(DDL) and dynamic multipathing (DMP). The objective is to familiarize you with
the commands used to administer multipathed disks.
This lab contains the following exercises:
Administering the Device Discovery Layer
Displaying DMP information
Displaying DMP statistics
Enabling and disabling DMP paths
Managing array policies
Optional lab: DMP performance tuning templates
Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured, and licensed. You also need a minimum of three external disks to use
during the labs.
Before you begin this lab, destroy any data disk groups that are left from previous
labs:
vxdg destroy diskgroup
Lab information
In preparation for this lab, you need the following information about your lab
environment.
root password: veritas
Host name of the lab system: sym1
Shared data disks: emc0_dd7 - emc0_d12, 3pardata0_49 - 3pardata0_60
Location of lab scripts: /student/labs/sf/sf60
Location of the vxbench program: /opt/VRTSspt/FS/VxBench
Exercise 1: Administering the Device Discovery Layer

1 List all currently supported disk arrays.
2 List all the enclosures connected to your system using the vxdmpadm
listenclosure all command. Does Volume Manager recognize the disk
array you are using in your lab environment? What is the name of the
enclosure? Note the enclosure name here.
Original enclosure name:__________________________________________
3 Use the vxddladm command to determine whether enclosure-based naming is
set on your system.
4 If enclosure-based naming (EBN) is not set on your system, set it using the
vxdiskadm command.
5 Display the disks attached to your system and note the changes.
6 Rename the emc0 enclosure to emc_disk using the vxdmpadm setattr
command. To find the exact command syntax, check the manual pages for the
vxdmpadm command.
Note: The original name of the enclosure is displayed by the vxdmpadm
listenclosure all command that you used in step 2.
7 Display the disks attached to your system and note the changes.
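The steps above map to commands along these lines. vxdiskadm is menu-driven,
so the equivalent vxddladm naming-scheme command is shown as an alternative;
treat the exact option names as assumptions to confirm against the manual pages.

vxddladm listsupport all                   # step 1
vxdmpadm listenclosure all                 # step 2
vxddladm get namingscheme                  # step 3
vxddladm set namingscheme=ebn              # step 4 (or the vxdiskadm menu option)
vxdisk list                                # step 5
vxdmpadm setattr enclosure emc0 name=emc_disk   # step 6
vxdisk list                                # step 7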
Exercise 2: Displaying DMP information

1 List all controllers on your system using the vxdmpadm listctlr
command. How many controllers are listed for the disk array your system is
connected to?
2 Using one of the controller names discovered in the previous step, display all
paths connected to the controller using the vxdmpadm getsubpaths
ctlr=controller command. Compare the NAME and the
DMPNODENAME columns in the output.
3 In the displayed list of paths, use the DMP node name of one of the paths to
display information about paths that lead to the particular LUN. How many
paths can you see?
4 View DDL extended attributes for the dmpnodename used in the previous step
using the vxdisk -p list command.
5 Determine the Port ID (PID) for all devices attached to the system using the
vxdisk -p list command and the -x option.
6 Determine the DDL_DEVICE_ATTR for all disks attached to the system using
the vxdisk -p list command and the -x option. If no attributes are set,
the attribute displays NULL.
7 Choose an attribute from the following list and view the attribute for all disks
using vxdisk -x attribute -p list. Not all attributes will have
values set.
The supported attributes:
DGID VID
PID ANAME
ATYPE TPD_SUPPRESSED
NR_DEVICE CAB_SERIAL_NO
LUN_SERIAL_NO PORT_SERIAL_NO
CUR_OWNER LIBNAME
LUN_OWNER LUN_TYPE
SCSI_VERSION REVISION
TPD_META_DEVNO TPD_META_NAME
TPD_LOGI_CTLR TPD_PHY_CTLR
TPD_SUBPATH TPD_DEVICES
ASL_CACHE ASL_VERSION
UDID ECOPY_DISK
ECOPY_TARGET_ID ECOPY_OPER_PARM
DEVICE_TYPE DYNAMIC
TPD_HIDDEN_DEVS LOG_CTLR_NAME
PHYS_CTLR_NAME DISK_GEOMETRY
MT_SAFE FC_PORT_WWN
FC_LUN_NO HARDWARE_MIRROR
TPD_CONTROLLED TPD_PARTITION_MAP
DMP_SINGLE_PATH DMP_VMDISK_IOPOLICY
DDL_DEVICE_ATTR DDL_THIN_DISK
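The steps in this exercise can be sketched as follows. c2 and emc0_dd7 are
assumed example names for a controller and a DMP node; substitute the names
reported on your own system.

vxdmpadm listctlr all                      # step 1
vxdmpadm getsubpaths ctlr=c2               # step 2
vxdmpadm getsubpaths dmpnodename=emc0_dd7  # step 3
vxdisk -p list emc0_dd7                    # step 4
vxdisk -x PID -p list                      # step 5
vxdisk -x DDL_DEVICE_ATTR -p list          # step 6
vxdisk -x LUN_SERIAL_NO -p list            # step 7: any attribute from the list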
1 Create a disk group called appdg that contains two disks (emc_disk_dd7 and
emc_disk_dd8).
2 Create a 1-GB volume called appvol in the appdg disk group.
3 Determine the device used for the appvol volume. This device name will be
used as the dmpnodename in step 10.
4 Create a VxFS file system on the appvol volume using the mkfs command.
5 Create a mount point for the appvol volume called /app and mount the file
system created in the previous step to the mount point.
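Steps 1-5 above might look like the following sketch. The mkfs and mount syntax shown is Linux-style; on Solaris, use -F vxfs instead of -t vxfs.

```shell
# Create the disk group, volume, file system, and mount point
vxdg init appdg appdg01=emc_disk_dd7 appdg02=emc_disk_dd8
vxassist -g appdg make appvol 1g
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir -p /app
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
```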
6 Enable the gathering of I/O statistics for DMP.
7 Reset the DMP I/O statistics counters to zero.
8 Next, use the dmpiotest script to generate I/O on the disk used by the appdg
disk group. The dmpiotest script uses the vxbench utility, which is a part
of the VRTSspt package and is installed as a part of the SF installation. Change
to the directory containing lab scripts and execute the script:
./dmpiotest
9 Display I/O statistics for all controllers.
10 Display I/O statistics for the DMP node that corresponds to the device used by
appvol. Display statistics every two seconds, eight times.
Note: You can use the vxprint -g appdg -htr appvol command to
identify the dmp node name of the device used by appvol.
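Steps 6-10 above can be sketched as follows; emc0_dd7 stands in for the dmpnodename you identified in step 3.

```shell
vxdmpadm iostat start       # enable gathering of DMP I/O statistics
vxdmpadm iostat reset       # zero the statistics counters
./dmpiotest                 # generate I/O (run from the lab scripts directory)
vxdmpadm iostat show all    # statistics for all controllers

# Statistics for one DMP node, every 2 seconds, 8 samples
vxdmpadm iostat show dmpnodename=emc0_dd7 interval=2 count=8
```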
Exercise 3: Displaying DMP statistics (sym1)
Lab 2: Managing Devices Within the VxVM Architecture
1 Use the dmpiotest script to generate I/O on the disk used by the appdg disk
group. The dmpiotest script uses the vxbench utility, which is a part of
the VRTSspt package and is installed as a part of the SF installation. Change to
the directory containing lab scripts and execute the script:
./dmpiotest
2 Display I/O statistics for the DMP node that corresponds to the device used by
appvol. Display statistics every two seconds, 1000 times, so that the output
continues as you enable and disable paths. I/O should be present on both
paths to the device.
Note: You can use the vxprint -g appdg -htr appvol command to
identify the dmp node name of the device used by appvol.
3 Use the vxdmpadm disable command to disable one of the paths shown in
the vxdmpadm iostat output. Note that I/O for that path stops.
4 Use the vxdmpadm enable command to enable the path that was disabled
in the previous step. Note that I/O for that path resumes.
5 Use the vxdmpadm disable command to disable the other path shown in
the vxdmpadm iostat output. Note that I/O for that path stops.
6 Use the vxdmpadm enable command to enable the path that was disabled
in the previous step. Note that I/O for that path resumes.
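Steps 3-6 above amount to the following; the path name sdc is a placeholder for a path shown in your vxdmpadm iostat output.

```shell
vxdmpadm disable path=sdc   # I/O on this path stops
vxdmpadm enable path=sdc    # I/O on this path resumes
```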
Exercise 4: Enabling and disabling DMP paths (sym1)
1 Display the current I/O policy for the enclosure you are using.
2 Change the current I/O policy for the enclosure to stop load-balancing and only
use multipathing for high availability.
3 Display the new I/O policy attribute.
4 Reset the DMP I/O statistics counters to zero.
5 Next, use the dmpiotest script to generate I/O on the disk used by the appdg
disk group. The dmpiotest script uses the vxbench utility, which is a part
of the VRTSspt package and is installed as a part of the SF installation. Change
to the directory containing lab scripts and execute the script:
./dmpiotest
6 Display I/O statistics for the DMP node that corresponds to the device used by
appvol. Display statistics every two seconds, eight times. Compare the
output to the output you observed before changing the DMP policy to
singleactive. Note that a single path is now used.
Note: You can use the vxprint -g appdg -htr appvol command to
identify the dmp node name of the device used by appvol.
7 Change the DMP I/O policy back to its default value (MinimumQ).
8 Next, use the dmpiotest script to generate I/O on the disk used by the appdg
disk group. The dmpiotest script uses the vxbench utility, which is a part
of the VRTSspt package and is installed as a part of the SF installation. Change
to the directory containing lab scripts and execute the script:
./dmpiotest
Exercise 5: Managing array policies (sym1)
9 Display I/O statistics for the DMP node again. Compare the output to the
output you observed when changing the DMP policy to singleactive. Note that
both paths are now used again.
10 Unmount /app.
Note: If the unmount of /app fails because the device is busy, it is because
the vxbench commands started by the dmpiotest script are still
running. Either let them complete, or kill each running command
(ps -ef | grep vxbench).
11 Rename the enclosure back to its original name (emc0) using the vxdmpadm
setattr command.
Note: The original name of the enclosure was displayed by the vxdmpadm
listenclosure all command that you used in step 2 of
Exercise 1.
12 Destroy the appdg disk group.
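The policy changes in this exercise can be sketched as follows; emc0 is the enclosure name used in this lab environment.

```shell
# Display the current I/O policy for the enclosure
vxdmpadm getattr enclosure emc0 iopolicy

# Failover-only multipathing: no load balancing, a single active path
vxdmpadm setattr enclosure emc0 iopolicy=singleactive

# Restore the default policy
vxdmpadm setattr enclosure emc0 iopolicy=minimumq
```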
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
1 Display the current I/O policy for the 3PARDATA array.
2 Use the vxdmpadm config command to dump the DMP tunable parameters
and attributes that are currently set on your system. Dump the values to
/tmp/dmp_perf_tunables.out.
3 Use the more command to view the DMP performance tunables and attributes
that were captured in the file.
4 Use the vi text editor to change the attribute for the iopolicy on just the
3PARDATA array.
5 Use the vxdmpadm config command to load the DMP tunable parameters
and attributes in the /tmp/dmp_perf_tunables.out file. Use the -d
option so that all errors are reported during loading.
6 Use the vxdmpadm config show command to verify that the DMP
tunable parameters and attributes in the /tmp/dmp_perf_tunables.out
file have been loaded.
7 Display the current I/O policy for the 3PARDATA array. The value should now
show singleactive.
8 Use the vxdmpadm config reset command to reset the DMP tunable
parameters and attributes to the default settings.
Exercise 6: Optional lab: DMP performance tuning templates (sym1)
9 Use the vxdmpadm config show command to verify that the DMP
tunable parameters and attributes are now set to the default values.
10 Display the current I/O policy for the 3PARDATA array. The value should now
show MinimumQ (the default value).
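The template workflow in this exercise might look like the following sketch; check exact option placement against the vxdmpadm(1M) manual page.

```shell
vxdmpadm config dump file=/tmp/dmp_perf_tunables.out   # capture current tunables
vi /tmp/dmp_perf_tunables.out                          # edit the iopolicy attribute
vxdmpadm config load -d file=/tmp/dmp_perf_tunables.out  # load; -d reports all errors
vxdmpadm config show                                   # verify the loaded template
vxdmpadm config reset                                  # return to default settings
```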
End of lab
Lab 3: Resolving Hardware Problems
In this lab, you practice recovering from a variety of hardware failure scenarios,
resulting in disabled disk groups and failed disks. First you recover a temporarily
disabled disk group and then you use a set of interactive lab scripts to investigate
and practice recovery techniques. Each interactive lab script:
Sets up the required volumes
Simulates and describes a failure scenario
Prompts you to fix the problem
This lab contains the following exercises:
Recovering a temporarily disabled disk group
Preparing for disk failure labs
Recovering from temporary disk failure
Recovering from permanent disk failure
Optional lab: Recovering from temporary disk failure - Layered volume
Optional lab: Recovering from permanent disk failure - Layered volume
Optional lab: Replacing physical drives (without hot relocation)
Optional lab: Replacing physical drives (with hot relocation)
Optional lab: Recovering from temporary disk failure with vxattachd
daemon
Optional lab: Exploring spare disk behavior
Optional lab: Using the Support Web Site
Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured and licensed. In addition to this, you also need four external disks to be
used during the labs.
Lab information
In preparation for this lab, you need the following information about your lab
environment.
Shared data disks: emc0_dd7 - emc0_dd12,
3pardata0_49 - 3pardata0_54
Location of lab scripts: /student/labs/sf/sf60
This lab section requires two terminal windows to be open.
1 Create a disk group called appdg that contains one disk (emc0_dd7).
2 Create a 1-GB concatenated volume called appvol in the appdg disk group.
3 Create a Veritas file system on appvol and mount it to /app.
4 Copy the contents of the /etc/default directory to /app and display the
contents of the file system.
5 If you want to observe the error messages displayed in the system log while the
failure is being created, open a second terminal window and use the tail -f
command to view the system log. Exit the output using CTRL-C when you are
satisfied.
6 Change into the directory containing the faildg_temp.pl script and
execute the script to create a failure in the appdg disk group.
Notes:
The faildg_temp.pl script disables the paths to the disk in the disk
group to simulate a hardware failure. This is just a simulation and not a real
failure; therefore, the operating system will still be able to see the disk after
the failure. The script will prompt you for the disk group name and then it
will create the failure by disabling the paths to the disk, performing some
I/O and then re-enabling the paths.
All lab scripts are located in the /student/labs/sf/sf60 directory.
7 Use the vxdisk -o alldgs list and vxdg list commands to
determine the status of the disk group and the disk.
8 What happened to the file system?
Exercise 1: Recovering a temporarily disabled disk group (sym1)
9 Assuming that the failure was due to a temporary fiber disconnection and that
the data is still intact, recover the disk group and start the volume using the first
terminal window. Verify the disk and disk group status using the vxdisk -o
alldgs list and vxdg list commands.
10 Remount the file system and verify that the contents are still there.
11 Unmount the file system.
12 Destroy the appdg disk group.
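Assuming the data is intact, one possible recovery sequence for steps 9-10 is shown below. The mount syntax is Linux-style; adjust for your OS.

```shell
vxdisk scandisks                              # make VxVM rescan the devices
vxdg deport appdg                             # deport the disabled disk group
vxdg import appdg                             # re-import it
vxrecover -g appdg -s                         # start and recover the volumes
mount -t vxfs /dev/vx/dsk/appdg/appvol /app   # remount and verify the contents
```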
Overview
The following sections use an interactive script to simulate a variety of disk failure
scenarios. Your goal is to recover from the problem as described in each scenario.
Use your knowledge of VxVM administration, in addition to the VxVM recovery
tools and concepts described in the lesson, to determine which steps to take to
ensure recovery. After you recover the test volumes, the script verifies your
solution and provides you with the result. You succeed when you recover the
volumes without corrupting the data.
For most of the recovery problems, you can use any of the VxVM interfaces: the
command line interface, the Veritas Operations Manager (VOM) Web console, or
the vxdiskadm menu interface. Lab solutions are provided for only one method.
If you have questions about recovery using interfaces not covered in the solutions,
see your instructor.
Setup
Due to the way in which the lab scripts work, it is important to set up your
environment as described in this setup section:
1 Create a disk group named testdg and add three disks (emc_disk_dd7,
emc_disk_dd8 and emc_disk_dd9) to the disk group. Assign the following disk
media names to the disks: testdg01, testdg02, and testdg03.
2 In the first terminal window, navigate to the directory that contains the lab
scripts. Note that the lab scripts are located in the
/student/labs/sf/sf60 directory.
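The setup in step 1 can be done in one command, for example:

```shell
# Initialize the disk group with three disks and explicit disk media names
vxdg init testdg testdg01=emc_disk_dd7 testdg02=emc_disk_dd8 testdg03=emc_disk_dd9
```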
Exercise 2: Preparing for disk failure labs (sym1)
In this lab exercise, a temporary disk failure is simulated. Your goal is to recover
all of the redundant and nonredundant volumes that were on the failed drive. The
lab script disk_failures.pl sets up the test volume configuration and
simulates a disk failure. You must then recover and validate the volumes.
Note: The lab scripts are located in the /student/labs/sf/sf60 directory.
1 From the first terminal window (from the directory that contains the lab
scripts), run the script disk_failures.pl, answer the initial configuration
questions and then select option 1, Exercise 3 - Recovering from temporary
disk failure. Note that the initial configuration questions will only be asked the
first time you run the script. Use test as the prefix for disk group and volume
names.
This script sets up two volumes (test1 and test2), creates a vxfs file system on
each, and then copies duplicate files to each file system. Both file systems are
then mounted.
test1 with a mirrored layout
test2 with a concatenated layout
2 In a second terminal window, view the failure using the vxdisk -o
alldgs list and vxprint -g testdg -htr commands.
3 Attempt to view the files that were copied to mount points /test1 and
/test2. What did you see?
4 Attempt to recover the volumes.
Note: When performing recovery procedures, run vxprint and vxdisk
list often to see what is changing after issuing recovery commands.
Exercise 3: Recovering from temporary disk failure (sym1)
To recover from the temporary failure:
a If you are using enclosure based naming, identify the OS native name of
the disk that has temporarily failed. You will use this OS disk name while
verifying that the operating system recognizes the device.
b Ensure that the operating system recognizes the device using the
appropriate OS commands.
c Verify that the operating system recognizes the device using the
appropriate OS commands.
d Force the VxVM configuration daemon to reread all of the drives in the
system.
e Reattach the device to the disk media record using the vxreattach
command.
f Recover the volumes using the vxrecover command.
g Use the vxvol command to start the nonredundant volume.
5 Because this is a temporary failure, the files in the test2 volume (and file
system) are still available. Recover the mount point by performing the
following:
a Unmount the /test2 mount point.
b Perform an fsck on the file system.
c Mount the test2 volume to /test2.
6 Compare the two mount points using the diff command.
7 Unmount the file systems and delete the test1 and test2 volumes.
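One possible sequence for recovery steps 4-5 above is sketched below, using this lab's device names and Linux-style fsck/mount syntax (on Solaris, use -F vxfs).

```shell
vxdctl enable                       # force vxconfigd to reread all drives
vxreattach                          # reattach devices to their disk media records
vxrecover -g testdg                 # resynchronize the redundant volume (test1)
vxvol -g testdg start test2         # start the nonredundant volume

umount /test2                       # clean and remount the surviving file system
fsck -t vxfs /dev/vx/rdsk/testdg/test2
mount -t vxfs /dev/vx/dsk/testdg/test2 /test2
```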
In this lab exercise, a permanent disk failure is simulated. Your goal is to replace
the failed drive and recover the volumes as needed. The lab script
disk_failures.pl sets up the test volume configuration and simulates a disk
failure. You must recover the failure and validate the volumes.
Note: The lab scripts are located in the /student/labs/sf/sf60 directory.
1 In the first terminal window (from the directory that contains the lab scripts),
run the script disk_failures.pl, and select option 2, Exercise 4 -
Recovering from permanent disk failure.
This script sets up two volumes (test1 and test2), creates a vxfs file system on
each, and then copies duplicate files to each file system. Both file systems are
then mounted.
test1 with a mirrored layout
test2 with a concatenated layout
2 In a second terminal window, view the failure using the vxdisk -o
alldgs list and vxprint -g testdg -htr commands.
3 Attempt to view the files that were copied to mount points /test1 and
/test2. What did you see?
4 Replace the permanently failed drive with a new disk at another SCSI location.
Then, recover the volumes.
Note: When performing recovery procedures, run vxprint and vxdisk
list often to see what is changing after issuing recovery commands.
To recover from the permanent failure:
a In the second terminal window, initialize a new drive (emc0_d10).
b Attach the failed disk media name (testdg02) to the new drive.
Exercise 4: Recovering from permanent disk failure (sym1)
c Recover the volumes using the vxrecover command.
d Use the vxvol command to start the nonredundant volume.
Note: You can also use the vxdiskadm menu interface to correct the failure.
Select the Replace a failed or removed disk option and select the desired
drive when prompted.
5 Because this is a permanent failure, the files in the test2 volume (and file
system) are no longer available. Recover the mount point by performing the
following:
a Unmount the /test2 mount point.
b Attempt to mount the test2 volume to /test2. You will see an error
because the file system has been lost during the recovery.
c Create a new file system on the test2 volume and then mount the test2
volume to /test2.
6 List the contents of /test2. In a real failure scenario, the files in this file
system would need to be restored from a backup.
7 Unmount the file systems and delete the test1 and test2 volumes.
8 When you have completed this exercise, the disk device that was originally
used during the disk failure simulation is left in an online invalid state.
Reinitialize the disk to prepare for later labs.
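Steps 4a-4d above might look like the following sketch; vxdisksetup is typically found in /etc/vx/bin, and the device and media names are this lab's examples.

```shell
/etc/vx/bin/vxdisksetup -i emc0_d10           # initialize the replacement disk
vxdg -g testdg -k adddisk testdg02=emc0_d10   # attach the failed media name to it
vxrecover -g testdg                           # resynchronize the mirrored volume
vxvol -g testdg start test2                   # start the nonredundant volume
```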
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
In this optional lab exercise, a temporary disk failure is simulated. Your goal is to
recover all of the volumes that were on the failed drive. The lab script
disk_failures.pl sets up the test volume configuration and simulates a disk
failure. You must recover the failure and validate the volumes.
Note: The lab scripts are located in the /student/labs/sf/sf60 directory.
1 Use the vxdg command with the adddisk option to add a fourth disk
(emc0_d11) called testdg04 to the testdg disk group. If necessary,
initialize a new disk before adding it to the disk group.
2 From the directory that contains the lab scripts, run the script
disk_failures.pl, and select option 3, Exercise 5 - Optional Lab:
Recovering from temporary disk failure - Layered volume.
This script sets up two volumes (test1 and test2), creates a vxfs file system on
each, and then copies duplicate files to each file system. Both file systems are
then mounted.
test1 with a stripe-mirror layout on 4 disks
test2 with a concatenated layout
3 In a second terminal window, view the failure using the vxdisk -o
alldgs list and vxprint -g testdg -htr commands. Notice that
there are two disks that have failed.
4 Attempt to view the files that were copied to mount points /test1 and
/test2. What did you see?
Exercise 5: Optional lab: Recovering from temporary disk failure - Layered volume (sym1)
5 Assume that the failure was temporary. In a second terminal window, attempt
to recover the volumes.
Note: When performing recovery procedures, run vxprint and vxdisk
list often to see what is changing after issuing recovery commands.
To recover from the temporary failure:
a If you are using enclosure based naming, identify the OS native names of
the disks that have temporarily failed. You will use these OS disk names
while verifying that the operating system recognizes the devices.
b Ensure that the operating system recognizes the devices using the
appropriate OS commands.
c Verify that the operating system recognizes the devices using the
appropriate OS commands.
d Force the VxVM configuration daemon to reread all of the drives in the
system.
e Reattach the devices to the disk media records using the vxreattach
command.
f Recover the volumes using the vxrecover command.
g Use the vxvol command to start the nonredundant volume.
6 Because this is a temporary failure, the files in the test2 volume (and file
system) are still available. Recover the mount point by performing the
following:
a Unmount the /test2 mount point.
b Perform an fsck on the file system.
c Mount the test2 volume to /test2.
7 Compare the two mount points using the diff command.
Note: There is a potential for file system corruption in the test2 volume since
it has no redundancy.
8 Unmount the file systems and delete the test1 and test2 volumes.
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
In this optional lab exercise, a permanent disk failure is simulated. Your goal is to
replace the failed drive and recover the volumes as needed. The lab script
disk_failures.pl sets up the test volume configuration and simulates a disk
failure. You must recover the failure and validate the volumes.
Note: The lab scripts are located in the /student/labs/sf/sf60 directory.
1 From the directory that contains the lab scripts, run the script
disk_failures.pl, and select option 4, Exercise 6 - Optional Lab:
Recovering from permanent disk failure - Layered volume:
This script sets up two volumes (test1 and test2), creates a vxfs file system on
each, and then copies duplicate files to each file system. Both file systems are
then mounted.
test1 with a stripe-mirror layout
test2 with a concatenated layout
2 In a second terminal window, view the failure using the vxdisk -o
alldgs list and vxprint -g testdg -htr commands. Notice that
there are two disks that have failed.
3 Attempt to view the files that were copied to mount points /test1 and
/test2. What did you see?
4 Replace the permanently failed drive with either a new disk at the same SCSI
location or by another disk at another SCSI location. Then, recover the
volumes.
Exercise 6: Optional lab: Recovering from permanent disk failure - Layered volume (sym1)
Note: When performing recovery procedures, run vxprint and vxdisk
list often to see what is changing after issuing recovery commands.
To recover from the permanent failure:
a In the second terminal window, initialize the drive that failed. In a real
failure scenario this drive would have been replaced with a new drive.
b Attach the failed disk media name to the new drive.
c Recover the volumes using the vxrecover command.
d Use the vxvol command to start the nonredundant volume.
Note: You can also use the vxdiskadm menu interface to correct the failure.
Select the Replace a failed or removed disk option and select the desired
drive when prompted.
5 Because this is a permanent failure, the files in the test2 volume (and file
system) are no longer available. Recover the mount point and file system by
performing the following:
a Unmount the /test2 mount point.
b Create a new file system.
c Mount the test2 volume to /test2 and list the contents. The mount point
should only contain a lost+found directory.
6 Unmount the file systems and delete the test1 and test2 volumes.
7 Destroy the testdg disk group.
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
Note: If you have not already done so, destroy the testdg disk group before you
start this section.
1 Create a disk group called appdg that contains four disks (emc0_dd7 -
emc0_d10).
2 Create a 100-MB mirrored volume called appvol in the appdg disk group, add
a VxFS file system to the volume, and mount the file system at the mount point
/app.
3 If the vxrelocd daemon is running, stop it using ps and kill, in order to
stop hot relocation from taking place. Verify that the vxrelocd processes are
killed before you continue.
Note: If you have executed the disk_failures.pl script in the previous
lab sections, the vxrelocd daemon may already be killed.
4 Next, simulate disk failure by writing over the private region using the
overwritepr.pl script.
Note: The lab scripts are located in the /student/labs/sf/sf60
directory.
While using the script, substitute the appropriate disk device name for one of
the disks in use by appvol, for example enter emc0_dd7.
Exercise 7: Optional lab: Replacing physical drives (without hot relocation) (sym1)
5 When the error occurs, view the status of the disks from the command line.
6 View the status of the volume from the command line.
7 Recover the disk by replacing the private and public regions on the disk. In the
command, substitute the appropriate disk device name, for example use
emc0_dd7.
8 Bring the disk back under VxVM control.
9 Check the status of the disks and the volume. The disk should now be a part of
the disk group, but the volume still has a failure.
10 From the command line, recover the volume.
11 Check the status of the disks and the volume to ensure that the disk and volume
are fully recovered.
12 Unmount the /app file system and remove the appvol volume.
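Steps 7-10 above can be sketched as follows. The device name emc0_dd7 follows the lab example, and the media name appdg01 is an assumption for the disk's original media name; substitute your own.

```shell
/etc/vx/bin/vxdisksetup -i emc0_dd7        # rewrite the private and public regions
vxdg -g appdg -k adddisk appdg01=emc0_dd7  # bring the disk back into the disk group
vxrecover -g appdg                         # recover the mirrored volume
```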
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
1 Verify that the relocation daemon (vxrelocd) is running. If not, start it.
2 Create a 100-MB mirrored volume called appvol in the appdg disk group, add
a VxFS file system to the volume, and mount the file system at the mount point
/app.
3 Next, simulate disk failure by writing over the private region using the
overwritepr.pl script.
Note: The lab scripts are located in the /student/labs/sf/sf60
directory.
While using the script, substitute the appropriate disk device name for one of
the disks in use by appvol, for example enter emc0_dd7.
4 When the error occurs, view the status of the disks and volume from the
command line using the vxdisk list and vxprint commands. Allow
sufficient time for the vxrelocd daemon to relocate the failed device.
5 Recover the disk by replacing the private and public regions on the disk. In the
command, substitute the appropriate disk device name, for example use
emc0_dd7.
6 Bring the disk back under VxVM control.
Exercise 8: Optional lab: Replacing physical drives (with hot relocation) (sym1)
7 Check the status of the disks and the volume. The failed disk should now be a
part of the disk group, but the plex that used to be on the failed disk is now
relocated to another disk in the disk group.
8 Use the vxunreloc command to return the plex back to the original device.
9 Unmount the /app file system and remove the appvol volume.
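Step 8 might look like this; appdg01 is an assumed media name for the original device, so substitute the name shown in your vxprint output.

```shell
# Move hot-relocated subdisks back to their original disk
vxunreloc -g appdg appdg01
```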
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
1 Enable the vxattachd daemon if it is not already running.
2 Create a 100-MB mirrored volume called appvol in the appdg disk group, add
a VxFS file system to the volume, and mount the file system at the mount point
/app.
3 Determine the first device use by the appvol volume. This device should be
emc0_dd7. Use the vxdisk list command to determine all paths to the
device.
4 Use the vxdmpadm -f disable command to disable all paths to the
device.
5 Use the dd command to write to the /app directory to produce a failure.
6 Use the vxdmpadm enable command to enable all paths to the failed
device. Monitor the vxdisk list and vxprint outputs until the
vxattachd daemon senses that the device is back online and reattaches the
device and recovers the failed plexes.
7 Unmount the /app file system and remove the appvol volume.
Exercise 9: Optional lab: Recovering from temporary disk failure
with vxattachd daemon
sym1
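The path operations in this exercise can be sketched as follows; the DMP node name emc0_dd7 and the path name sdb are assumptions for your environment:

```
# List all paths to the device:
vxdmpadm getsubpaths dmpnodename=emc0_dd7

# Forcibly disable each path (repeat for every path listed):
vxdmpadm -f disable path=sdb

# Later, re-enable each path:
vxdmpadm enable path=sdb

# Watch vxattachd reattach the device and recover the plexes:
vxdisk list emc0_dd7
vxprint -g appdg -ht appvol
```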
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
1 You should have four disks (appdg01 through appdg04) in the disk group
appdg. Set all disks to have the spare flag on.
2 Create a 100-MB mirrored volume called sparevol.
Is the volume successfully created? Why or why not?
3 Attempt to create the same volume again, but this time specify two disks to
use. Do not clear any spare flags on the disks.
4 Remove the sparevol volume.
5 Verify that the relocation daemon (vxrelocd) is running. If not, start it.
6 Remove the spare flags from three of the four disks.
7 Create a 100-MB concatenated mirrored volume called sparevol.
8 Save the output of vxprint -g appdg -htr to a file.
9 Display the properties of the sparevol volume. In the table, record the device
and disk media name of the disks used in this volume. You are going to
simulate disk failure on one of the disks. Decide which disk you are going to
fail.
Exercise 10: Optional lab: Exploring spare disk behavior
sym1
For example, the volume sparevol might use appdg01 and appdg02.
10 Next, simulate disk failure by writing over the private region using the
overwritepr.pl script. In the standard virtual lab environment, this script
is located in the /student/labs/sf/sf60 directory.
While using the script, substitute the appropriate disk device name for one of
the disks in use by sparevol, for example enter emc0_dd7.
11 Run vxprint -g appdg -htr and compare the output to the vxprint
output that you saved earlier. What has occurred?
Note: You may need to wait a minute or two for the hot relocation to
complete.
12 Run vxdisk -o alldgs list. What do you notice?
13 In VOM, view the status of the disks and the volume.
14 Recover the disk by replacing the private and public regions on the disk. In the
command, substitute the appropriate disk device name, for example use
emc0_dd7.
15 Bring the disk back under VxVM control and into the disk group to replace the
failed disk media name.
            Device Name         Disk Media Name
Disk 1      ___________________ ___________________
Disk 2      ___________________ ___________________
16 Undo hot relocation for the disk.
17 Wait until the volume is fully recovered before continuing. Check to ensure
that the disk and the volume are fully recovered.
18 Rename the unrelocated subdisk to its original name.
19 Remove the sparevol volume.
20 Remove the spare flag from the last disk.
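As a sketch, the spare-flag and unrelocation commands used in this exercise have this shape; the disk media names appdg01 and appdg02 are assumptions:

```
# Turn the spare flag on or off for a disk:
vxedit -g appdg set spare=on appdg01
vxedit -g appdg set spare=off appdg01

# Move hot-relocated subdisks back to their original disk:
vxunreloc -g appdg appdg02
```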
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
Note: If you do not have access to the Internet from the classroom systems, skip
this optional lab section.
1 Access the latest information on Veritas Storage Foundation from the
Symantec Support Web site.
2 Which Linux platform is supported for Storage Foundation 6.0?
3 Where would you locate the latest patch for Veritas Storage Foundation and
High Availability?
End of lab
Exercise 11: Optional lab: Using the Support Web Site
sym1
Lab 4: Using Full-Copy Volume Snapshots
In this lab, you practice creating and managing full-copy volume snapshots. This
includes enabling FastResync and using full-copy volume snapshots for off-host
processing.
This lab contains the following exercises:
Full-sized instant snapshots
Off-host processing using split-mirror volume snapshots
Optional lab: Performing snapshots using VOM
Optional lab: Traditional volume snapshots
Prerequisite setup
To perform this lab, you need two lab systems with Storage Foundation pre-
installed, configured and licensed. In addition to this, you also need four external
disks to be used during the labs.
Lab information
In preparation for this lab, you need the following information about your lab
environment.
The exercises for this lab start on the next page.
Object                      Value
root password               veritas
Host names of lab systems   sym1, sym2, vom
Shared data disks           emc0_dd7 - emc0_d12
                            3pardata0_49 - 3pardata0_60
Exercise 1: Full-sized instant snapshots
sym1
1 If it does not already exist, create a disk group called appdg with four disks
(emc0_dd7 - emc0_d10).
2 Create a 1-GB concatenated volume called appvol. Create a Veritas file system
on the volume and mount it to /app.
3 Add some data to the volume by copying some files from /etc and verify that
the files have been added.
4 Enable FastResync on the appvol volume.
5 View the appvol volume using the vxprint command. You should now see a
DCO associated with the appvol volume.
6 Create a full-sized instant snapshot of the appvol volume. Check the original
volume size using vxprint and create the snapshot volume (called snapvol)
using exactly the same size.
7 Prepare the snapshot volume and create the snapshot.
8 Create a mount point (/snap) and mount the snapshot volume. View the files
in the snapshot volume. They should be the same as the files in the appvol
volume.
9 Unmount /snap and /app.
10 Delete the full-sized instant snapshot of the appvol volume.
11 Delete the appvol volume.
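A minimal command sketch for the snapshot portion of this exercise, assuming the names used above:

```
# Enable FastResync by attaching a version 20 DCO to the volume:
vxsnap -g appdg prepare appvol

# Create an empty snapshot volume of exactly the same size, and prepare it:
vxassist -g appdg make snapvol 1g
vxsnap -g appdg prepare snapvol

# Take the full-sized instant snapshot:
vxsnap -g appdg make source=appvol/snapvol=snapvol
```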
Exercise 2: Off-host processing using split-mirror volume snapshots
sym1
Phase 1: Create, Split, and Deport
Note: In this lab section, the sym2 server will be used for off-host processing. If it
is not already powered on, power it on at this time.
Note: You will be using the appdg disk group that was created at the start of this
lab. If for any reason it was destroyed, recreate it before continuing.
1 Create a 500-MB concatenated volume, appvol, using a single disk. Create a
Veritas file system on the volume and mount the file system on the mount point
/app.
2 Add data to the file system using the following command:
echo Pre-snapshot for app > /app/presnap_on_app
and verify that the data has been added.
3 Enable FastResync for the volume appvol. Can you identify what has changed?
4 Add a mirror to the volume for use as the snapshot. Observe the volume layout.
What is the state of the newly added mirror after synchronization completes?
5 Create a third-mirror break-off snapshot named snapvol using the new mirror
you just added. Use the vxsnap -g appdg list command to observe the
snapshots in the disk group.
6 Split the snapshot volume into a separate disk group, called snapdg, apart from
the original disk group.
7 Verify that the snapdg disk group exists and contains snapvol. First, display the
disk groups on the system. You should see the new snapdg disk group
displayed. Then, view the volume information for the snapdg disk group.
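Phase 1 can be sketched as follows; the plex name appvol-02 for the added mirror is an assumption (check vxprint for the actual name):

```
# Add a mirror to serve as the snapshot plex and wait for it to sync:
vxsnap -g appdg addmir appvol

# Break off the new plex as a snapshot volume:
vxsnap -g appdg make source=appvol/newvol=snapvol/plex=appvol-02

# Split the snapshot into its own disk group and deport it:
vxdg split appdg snapdg snapvol
vxdg deport snapdg
```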
8 Deport the disk group that contains the snapshot volume.
9 View the disk groups on the system, and then run the vxdisk command to
view the status of the disks on the system.
10 Add additional data to the original volume using the following command:
echo Post-snapshot for app > /app/postsnap_on_app
and verify that the data has been added.
Phase 2: Import, Process, and Deport
11 On the off-host processing (OHP) host (sym2) where the backup or processing
is to be performed, import the disk group that contains the snapshot volume.
View the status of the volume in the snapdg disk group.
Note: You may need to rescan the disks using the vxdctl enable
command on the OHP host so that the host detects the changes.
12 To perform off-host processing, you must first mount the file system on the off-
host processing host. Use the mount point /snap.
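On sym2, the import and mount steps might look like this sketch; starting the imported volumes with vxrecover is an assumption about how your volumes come up after import:

```
# Rescan devices so the host sees the deported disk group:
vxdctl enable

# Import the disk group and start its volumes:
vxdg import snapdg
vxrecover -g snapdg -sb

# Mount the snapshot file system (Linux mount syntax shown):
mkdir -p /snap
mount -t vxfs /dev/vx/dsk/snapdg/snapvol /snap
```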
13 View and compare the contents of both file systems.
14 Check if you can write to the original file system during off-host processing by
creating a new file in the file system as follows:
echo Data in original of app > /app/data_on_app
ls -l /app
15 After completing off-host processing, you are ready to reattach the snapshot
volume with the original volume. Unmount the snapshot volume on the off-
host processing host.
16 On the OHP host, deport the disk group that contains the snapshot volume.
Phase 3: Import, Join, and Resynchronize
17 Reimport the disk group that contains the snapshot volume.
18 Rejoin the disk group that contains the snapshot volume to the disk group that
contains the original volume.
19 At this point you should have the original volume and its snapshot in the same
disk group but as separate volumes. There is still no synchronization between
the original volume and the snapshot volume. To observe this, the snapshot
volume will be mounted again to observe its contents. You would not need to
perform this step during a normal off-host processing procedure.
a Mount snapvol back on the /snap mount point. Create the mount point if
necessary.
b View and compare the contents of both file systems.
c Unmount the /snap file system.
20 Reattach the plexes of the snapshot volume to the original volume and
resynchronize their contents.
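Phase 3 can be sketched as:

```
# Back on sym1, import the snapshot disk group and join it to appdg:
vxdg import snapdg
vxdg join snapdg appdg

# Reattach the snapshot plexes to the original volume and resynchronize:
vxsnap -g appdg reattach snapvol source=appvol
```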
21 Remove the snapshot mirror.
22 Remove the DCO log and disable FastResync on appvol using the vxsnap
unprepare command.
23 If you have enough time to perform the optional lab exercises, skip this step.
Otherwise, unmount /app and destroy the appdg disk group.
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
1 If the Web console is already open skip this step; otherwise, open a Web
browser and enter the URL for the VOM MS Web console on the address line.
You may receive an error about the Web sites certificate, click the link to
continue to this Web site. Log in using the root username and password for
your server.
2 View the appvol volume in the appdg disk group on the
sym1.example.com system.
3 Create a snapshot of the appvol volume. Name the new disk snapshot snapvol.
4 Mount the snapshot to /snap directory.
Note: The mounted snapshot could be verified using the ls command line. For
this lab we will continue by removing the snapshot.
5 Unmount the snapshot.
6 Delete the snapshot.
7 Remove the DCO log and disable FastResync on appvol.
8 If you have enough time to perform the optional lab exercises, skip this step.
Otherwise, unmount /app and destroy the appdg disk group.
Exercise 3: Optional lab: Performing snapshots using VOM
vom
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
1 If you already have an appdg disk group with an appvol volume, skip this step.
Otherwise, prepare your environment by performing the following substeps:
a Create a disk group called appdg with four disks (emc0_dd7 - emc0_d10).
b Create a 500m concatenated volume called appvol. Create a Veritas file
system on the volume and mount it to /app.
2 Add some data to the volume by copying some files from /etc and verify that
the files have been added.
3 Enable FastResync on the appvol volume for a traditional volume snapshot.
4 View the appvol volume using the vxprint command. You should now see a
DCO associated with the appvol volume.
5 Start the creation of a traditional snapshot using the vxassist command.
Creating a traditional snapshot is a two step process, this is the first of two
steps. The -b option will cause the command to work in the background.
6 Use the vxtask command to determine when the snapstart operation is
complete.
7 View the appvol volume using the vxprint command. You should now see a
DCO associated with the appvol volume. You should also see that the plex and
DCO have been mirrored.
Exercise 4: Optional lab: Traditional volume snapshots
sym1
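The traditional (vxassist-based) snapshot cycle in this exercise follows this general shape, assuming the volume names used above:

```
# Step 1 of 2: silently build the snapshot mirror in the background:
vxassist -g appdg -b snapstart appvol

# Watch for the snapstart task to finish:
vxtask list

# Step 2 of 2: break the mirror off as a snapshot volume:
vxassist -g appdg snapshot appvol snapvol

# Later: resynchronize, either original-to-snapshot or, with
# -o resyncfromreplica, snapshot-to-original:
vxassist -g appdg snapback snapvol
```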
8 Break the traditional snapshot away using the vxassist command. Provide
the snapshot volume name as snapvol.
9 View the appvol and snapvol volumes using the vxprint command. You
should now see a plex and DCO for each volume.
10 Create a mount point (/snap) and mount the snapvol volume. View the files
in the snapshot volume. They should be the same as the files in the appvol
volume.
11 Overwrite the nsswitch.conf file in the appvol volume using the
following command:
echo snapshot test > /app/nsswitch.conf
and verify that the file is now different in the appvol volume and the snapshot.
12 Unmount the snapshot volume.
13 Unmount the appvol volume.
14 Reassociate the snapshot volume back into the appvol volume but resync it
from the replica so that the original nsswitch.conf file gets restored.
15 Mount the appvol volume and verify that the nsswitch.conf file was
restored to the original.
16 Break the traditional snapshot away again using the vxassist command.
Provide the snapshot volume name as snapvol.
17 Disassociate the snapshot volume from the appvol volume.
18 Attempt to reattach the snapshot volume. What is the error?
19 Remove the DCO log that is associated with the snapshot volume.
20 Delete the snapshot volume.
21 Remove the DCO log that is associated with the appvol volume and set
FastResync to off.
22 Unmount /app and destroy the appdg disk group.
End of lab
Lab 5: Using Copy-on-Write SF Snapshots
In this lab, you practice creating and managing space-optimized snapshots and
storage checkpoints. First you create a volume with a file system and mount the
file system. The volume is then used to practice the space-optimized snapshot. The
file system is used to create storage checkpoints. Optional lab exercises are also
included to view storage checkpoint behavior.
This lab contains the following exercises:
Using space-optimized instant volume snapshots
Restoring a file system using storage checkpoints
Optional lab: Storage checkpoint behavior
Optional lab: Using checkpoints in VOM
Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured and licensed. In addition to this, you also need at least four disks to be
used in a disk group.
Lab information
In preparation for this lab, you need the following information about your lab
environment.
The exercises for this lab start on the next page.
Object                      Value
root password               veritas
Host names of lab systems   sym1, vom
Shared data disks           emc0_dd7 - emc0_d12
                            3pardata0_49 - 3pardata0_60
Exercise 1: Using space-optimized instant volume snapshots
sym1
1 Create a disk group called appdg with four disks (emc0_dd7 - emc0_d10).
2 Create a 1-GB concatenated volume called appvol. Create a VxFS file system
on the volume and mount it to /app. Enable FastResync on the appvol volume.
3 Select a disk in the appdg disk group that is not used by the original volume
appvol, and create a 50-MB volume on this disk to be used as the cache
volume. Name the cache volume cachevol. Create a cache object called
mycache on the cache volume. Ensure that the cache object is started.
4 Observe how the cache object and the cache volume are displayed in the disk
group.
5 Add data to the /app file system using the following command:
echo New data before snap am for app > \
/app/presnapam_on_appvol
and verify that the data is written.
6 Create a space-optimized instant snapshot of the appvol volume, named
snapvolam, using the cache object mycache.
7 Display information about the snapshot volumes using the vxprint,
vxsnap list, and vxsnap print commands from the command line.
8 Verify which snapshots are associated with the cache object.
9 Mount the space-optimized snapshot volume snapvolam to the /snapam
directory.
10 Observe the contents of the /snapam directory and compare it to the contents
of the /app directory.
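The cache-object setup and the first snapshot can be sketched as follows; the disk media name appdg04 is an assumption for a disk not used by appvol:

```
# Create the cache volume on a disk not used by appvol:
vxassist -g appdg make cachevol 50m appdg04

# Create and start a cache object on it:
vxmake -g appdg cache mycache cachevolname=cachevol
vxcache -g appdg start mycache

# Take a space-optimized instant snapshot backed by the cache:
vxsnap -g appdg make source=appvol/newvol=snapvolam/cache=mycache
```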
11 Add data to the /app file system using the following command:
echo New data before snap pm for app > \
/app/presnappm_on_appvol
and verify that the data is written.
12 Create a second space-optimized instant snapshot of the appvol volume, named
snapvolpm, using the same cache object mycache.
13 Verify which snapshots are associated to the cache object.
14 Mount the space optimized snapshot volume snapvolpm to the /snappm
directory.
15 Observe the contents of the original file system and the two space optimized
snapshots.
16 Make the following changes on the file systems:
a Remove the data you had on the original file system prior to starting
Exercise 1: Using space-optimized instant volume snapshots. If you have
followed the lab steps, you need to remove the
presnapam_on_appvol and presnappm_on_appvol files from
the /app file system.
b Add new data to the space optimized snapshot volumes using the following
commands:
echo New data on snapam > /snapam/data_on_snapam
echo New data on snappm > /snappm/data_on_snappm
17 Observe the contents of the original file system and the two space optimized
snapshots.
18 Assume that you have decided to use the contents of the second space
optimized snapshot (snapvolpm) as the final version of the original file system.
Restore the original file system using the second space optimized snapshot.
Note that you will have to unmount the original file system to make this
change. Mount the original file system back to /app directory when the
restore operation completes.
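The restore step maps to a command sequence like this sketch (Linux mount syntax assumed):

```
# Unmount the original file system, restore it from the snapshot,
# then mount it again:
umount /app
vxsnap -g appdg restore appvol source=snapvolpm
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
```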
19 Observe the contents of the original file system and the two space optimized
snapshots.
20 Refresh the first space optimized snapshot, snapvolam. Note that you will need
to unmount the first space optimized snapshot to make this change. Mount the
snapvolam volume again after its contents are refreshed.
21 Observe the contents of the original file system and the two space optimized
snapshots.
22 Unmount the two space optimized snapshots and dissociate them from the
original volume.
23 Remove the space optimized snapshot volumes.
Note: If you want to use the vxassist remove volume command to
delete the volume from the command line, you first need to delete the
DCO log. Alternatively you can use the vxedit -g appdg -rf
rm volume_name command to remove the volume together with the
associated DCO log.
24 Remove the cache object with its associated cache volume.
25 Unmount the /app file system and remove the original volume, appvol.
Note: If you want to use the vxassist remove volume command to
delete the volume from the command line, you first need to delete the
DCO log. Alternatively you can use the vxedit -g diskgroup
-rf rm volume_name command to remove the volume together
with the associated DCO log.
Exercise 2: Restoring a file system using storage checkpoints
sym1
At the beginning of this exercise, you should have an appdg disk group with four
unused disks in it.
1 Create a concatenated 1500-MB volume called appvol in the appdg disk group.
2 Create a VxFS file system on the volume.
3 Make three mount points: /app, /ckptam, and /ckptpm.
4 Mount the file system on /app.
5 Write a file of size 1M named 8am in the original /app file system.
6 Create a storage checkpoint named thu_9am on /app. Note the output.
7 Mount the thu_9am storage checkpoint on the mount point /ckptam.
8 Write some more files in the original file system on /app, and synchronize the
file system using the following commands:
dd if=/dev/zero of=/app/2pm bs=1024k count=5
dd if=/dev/zero of=/app/3pm bs=1024k count=5
sync;sync
9 Create a second storage checkpoint, called thu_4pm, on /app. Note the
output.
10 Mount the second storage checkpoint on the mount point /ckptpm.
11 Write some more files in the original file system on /app, and synchronize the
file system using the following commands:
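Creating and mounting a checkpoint can be sketched as follows (Linux mount syntax; the pseudo-device is the volume path with the checkpoint name appended):

```
# Create the checkpoint:
fsckptadm create thu_9am /app

# Mount it through its pseudo-device:
mount -t vxfs -o ckpt=thu_9am /dev/vx/dsk/appdg/appvol:thu_9am /ckptam
```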
dd if=/dev/zero of=/app/5pm bs=1024k count=6
dd if=/dev/zero of=/app/6pm bs=1024k count=6
sync;sync
12 View the checkpoints and the original file system.
13 To prepare to restore from a checkpoint, unmount the original file system and
both storage checkpoints.
14 Restore the file system to the thu_4pm storage checkpoint.
15 Run the fsckpt_restore command again. Note the output. Press
Ctrl+D to exit the fsckpt_restore command.
16 Mount the appvol volume on /app.
17 Use the fsckptadm command to list all checkpoints still on /app.
18 Use the fsckptadm command to remove all checkpoints still on /app.
19 Use the fsckptadm command to list all checkpoints still on /app.
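The listing and removal steps use fsckptadm subcommands of this shape:

```
# List all checkpoints on the file system:
fsckptadm list /app

# Remove a checkpoint by name (repeat for each one listed):
fsckptadm remove thu_4pm /app
```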
20 Unmount /app and then remount with the checkpoint visibility option read-
write (-o ckptautomnt=rw).
21 Create a storage checkpoint named fri_8am on /app. Note the output.
22 List the contents of the fri_8am storage checkpoint.
23 Use the fsckptadm command to list all checkpoints on /app.
24 Use the fsckptadm command to remove the fri_8am checkpoint on /app.
25 Unmount /app and destroy the appdg disk group.
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
In this exercise, you perform and analyze four types of file system operations:
A file to be deleted (1k.to_delete)
A file to be replaced by (new) content (1k.to_replace)
A file to be enlarged (1k5.to_append)
A file to be written by databases (10m.db_io; the file remains at the same
position with the same size, but some blocks within it are replaced)
1 Create a disk group named appdg with four disks (emc0_dd7 - emc0_d10).
2 Create a 128-MB mirrored volume with a log. Name the volume appvol.
Mount the volume at /app.
3 Add these four new files to the volume and view the files:
1K named /app/1k.to_delete
1K named /app/1k.to_replace
1.5K=1536bytes named /app/1k5.to_append
10M named /app/10m.db_io
4 Remount /app and run ncheck.
5 Create a storage checkpoint for /app named ckpt.
6 Delete the file 1k.to_delete.
7 Create a new 1K file named 1k.to_replace.
Exercise 3: Optional lab: Storage checkpoint behavior
sym1
8 Copy the 1k5.to_append file to /tmp.
9 Append the 1k5.to_append file in /tmp to the original
1k5.to_append file in /app.
10 Use the following Perl command to generate database-like I/O (modifying a
block within a database file).
The second line opens the file for read/write access without truncating it
or simply appending new data.
The third line creates a variable containing 8K of "x" characters.
The next line positions the file pointer at an 8K offset from the beginning
of the file.
The following line writes the new 8K block at this position.
perl -e '
> open(FH,"+< /app/10m.db_io") || die;
> $Block="x" x 8192;
> sysseek(FH,8192,0);
> syswrite(FH,$Block,8192,0);
> close(FH);'
11 Remount /app and run ncheck. Mount the checkpoint to /ckpt and
compare the contents with the original file system contents.
12 If you have enough time to perform the optional lab exercises, skip this step.
Otherwise, unmount the checkpoint and the original file system and destroy the
appdg disk group.
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
Note: The following steps assume that you are already logged into the VOM
console on the vom server. If not, log into the VOM console as the root
user before proceeding with this optional lab.
1 If you completed the previous optional lab, you should have an appvol volume
with a VxFS file system mounted to /app. The file system has one checkpoint
(ckpt) mounted to /ckpt. Verify that the checkpoint is visible on the /app
file system on the VOM console.
2 Create a new storage checkpoint (sat_12pm) for the /app file system.
3 Verify that the checkpoint is visible on the /app file system.
4 Delete all checkpoints, unmount /app and destroy the appdg disk group.
End of lab
Exercise 4: Optional lab: Using checkpoints in VOM
vom
Lab 6: Using Advanced VxFS Features
In this lab, you perform tasks to manage VxFS file systems. First you create a
volume with a file system and mount the file system. The file system is then used
to practice file and directory compression, deduplication and FileSnap.
This lab contains the following exercises:
Compressing files and directories with VxFS
Deduplicating VxFS data
Using the FileSnap feature
Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured and licensed. In addition to this, you also need at least four disks to be
used in a disk group.
Lab information
In preparation for this lab, you need the following information about your lab
environment.
The exercises for this lab start on the next page.
Object                      Value
root password               veritas
Host name of the lab system sym1
Shared data disks           emc0_dd7 - emc0_d12
                            3pardata0_49 - 3pardata0_60
Exercise 1: Compressing files and directories with VxFS

System: sym1

1 Create a disk group called appdg with four disks (emc0_dd7 - emc0_d10).

2 Create a 1-GB concatenated volume called appvol. Create a VxFS file system
on the volume and mount it to /app.

3 Make a file called file1 in the /app directory using the /etc/termcap file.
Use the cat command to copy multiple instances of the termcap file into the
new file. Then create a copy of the file for use in checking the contents
after compression.

4 View the new file before compression using the ls, du -sk, and fsmap -p
commands.

5 Use the vxcompress command to compress the file1 file. Then view the
degree of compression using the -L option.

6 Use the fsmap -p command to view the compressed file. Note that all the
data extents now show that they are compressed (-C).

7 Check the compression of the entire file system using the fsadm -S
compressed command.

8 Compare the compressed file to the saved file using the cmp and ls
commands. The cmp command should show no difference, but the ls
command will show a size difference.

9 Use the dd command to write to the middle of the compressed file so that it
becomes only partially compressed. Again, use the /etc/termcap file as
the input.

10 View the degree of compression using the vxcompress command.
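Steps 3, 8, 9, and 12 combine standard tools; the sketch below shows the mechanics on ordinary files. A generated text file stands in for /etc/termcap and a temporary directory for /app, so the paths here are placeholders. The key detail is dd's seek= with conv=notrunc, which overwrites the middle of the file without truncating it:

```shell
WORK=$(mktemp -d)                     # stand-in for /app
seq 1 500 > "$WORK/termcap.sub"       # stand-in for /etc/termcap

# Step 3: build file1 from several concatenated copies, then save a copy.
for i in 1 2 3 4 5; do cat "$WORK/termcap.sub"; done > "$WORK/file1"
cp "$WORK/file1" "$WORK/file1.save"

# Step 8: cmp is silent while the two files still match.
cmp "$WORK/file1" "$WORK/file1.save" && echo identical

# Step 9: overwrite the middle of file1; conv=notrunc keeps the tail,
# so the file size does not change.
HALF=$(( $(stat -c %s "$WORK/file1") / 2 ))
dd if="$WORK/termcap.sub" of="$WORK/file1" bs=1 \
   seek="$HALF" conv=notrunc status=none

# Step 12: the files now differ.
cmp -s "$WORK/file1" "$WORK/file1.save" || echo different
```

On VxFS, the write in step 9 causes the affected extents to be stored uncompressed, which is what fsmap -p then shows.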
11 Use the fsmap command to view the compressed file again. Note that one of
the extents no longer shows that it is compressed.
12 Compare the compressed file to the saved file. The two files should now be
different.
13 Return the file to normal by uncompressing the file using the vxcompress
-u command. Use the ls, du -sk and fsmap -p commands to verify that
the file is no longer compressed.
14 Unmount /app and then remove the appvol volume from the appdg disk
group.
Exercise 2: Deduplicating VxFS data

System: sym1

1 Create a 2-GB concatenated volume called appvol. Create a VxFS file system
on the volume with a block size of 4096 and mount it to /app.

2 Create a directory in /app called testdata and copy ~500 MB of data into
the directory (copy the /opt directory and its contents).

3 Make a second copy of the same data in a new directory called copy.

Note: You now have two exact copies of the same data in two different
directories.

4 View the shared files information in the /app directory using the fsadm -S
shared command.

5 Enable deduplication on /app using the fsdedupadm command. Use the
-c chunk_size option to set the deduplication chunk size to the file system
block size set when creating the file system. Note that the chunk size is
specified in bytes.

6 Use the fsdedupadm command with the list option to view the
deduplication settings.

7 Use the fsdedupadm command to start deduplication on /app.

8 Check the status of the deduplication on /app. Repeat checking the status
until the deduplication is complete. The first change in status may take a few
minutes to appear in the output. Note the space savings during the
deduplication process.
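The duplicate data set in steps 2 and 3 can be staged with ordinary commands; the sketch below uses a temporary directory and generated data as stand-ins for /app and /opt. Two byte-identical trees are exactly the workload fsdedupadm can collapse:

```shell
WORK=$(mktemp -d)                          # stand-in for /app
mkdir -p "$WORK/testdata"
seq 1 10000 > "$WORK/testdata/data.txt"    # stand-in for the /opt copy

# Step 3: a second, byte-identical copy of the same tree.
cp -r "$WORK/testdata" "$WORK/copy"

# Identical content, so twice the space is consumed before deduplication.
diff -r "$WORK/testdata" "$WORK/copy" && echo "trees identical"
du -sk "$WORK/testdata" "$WORK/copy"
```

After fsdedupadm runs in the lab, fsadm -S shared reports the duplicated blocks as shared rather than stored twice.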
9 View shared files information using the fsadm command. Note that the USED
(KB) space has decreased by the amount shown in the Space_Saved
(KB) field.
10 Set a deduplication schedule using the fsdedupadm command so that it runs
the deduplication at midnight every other day.
Note: For more information on setting deduplication schedules refer to the
fsdedupadm man page.
11 View the newly created schedule using the fsdedupadm list command.
12 Unmount /app and then remove the appvol volume from the appdg disk
group.

Note: If the file system unmount fails with a "device is busy" warning, monitor
the deduplication process until it completes, and then try again.
Exercise 3: Using the FileSnap feature

System: sym1

1 Create a 1-GB concatenated volume called appvol. Create a VxFS file system
on the volume and mount it to /app.

2 Enable lazy copy-on-write for the newly created file system.

3 Make a file called file1 in the /app directory by copying the
/etc/termcap file.

4 Observe the size of shared and private data in the file, as well as the space
savings in the file system, using the fsmap and fsadm commands.

5 Use the vxfilesnap command to make a copy of the file1 file to the
file2 file in the same directory.

6 Observe the size of shared and private data in the file, as well as the space
savings in the file system, using the fsmap and fsadm commands.

Note: For more information about FileSnap, refer to the Veritas Storage
Foundation Administrator's Guide.

7 Unmount /app, remove the appvol volume, and destroy the appdg disk
group.

End of lab
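vxfilesnap requires VxFS, so it cannot be demonstrated outside the lab environment. On file systems that support reflinks, GNU cp --reflink makes a comparable space-efficient copy-on-write clone; =auto falls back to a plain copy elsewhere, so this sketch (with placeholder paths) runs anywhere with GNU coreutils:

```shell
WORK=$(mktemp -d)                     # stand-in for /app
seq 1 1000 > "$WORK/file1"

# Clone file1 to file2; on reflink-capable filesystems no data
# blocks are copied until one of the files is modified.
cp --reflink=auto "$WORK/file1" "$WORK/file2"
cmp "$WORK/file1" "$WORK/file2" && echo "clone matches source"

# Copy-on-write: modifying the clone leaves the source untouched.
echo extra >> "$WORK/file2"
cmp -s "$WORK/file1" "$WORK/file2" || echo "source unchanged"
```

In the lab, fsmap shows the clone's extents as shared with file1 until they are written.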
Lab 7: Using Site Awareness with Mirroring
In this lab, you configure your system for site awareness. During the lab, you work
on two systems sharing disks.
This lab contains the following exercises:
Configuring site awareness
Analyzing the volume read policy
Optional lab: Analyzing the impact of disk failure in a site-consistent
environment
Optional lab: A manual fire drill operation with remote mirroring
Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured and licensed. In addition, you need at least four shared disks free to
be used in a disk group, and you must ensure that the Storage Foundation
Enterprise license is installed.
Lab information
In preparation for this lab, you need the following information about your lab
environment.
The exercises for this lab start on the next page.
Object                                  Value
root password                           veritas
Host name of the main lab system        sym1
Host name of the system sharing disks   sym2
Shared data disks                       emc0_dd7 - emc0_d12,
                                        3pardata0_49 - 3pardata0_60
Site 1 name                             siteA
Site 2 name                             siteB
Exercise 1: Configuring site awareness

System: sym1

1 Verify that the site awareness feature has been enabled. If not, use the
vxkeyless set SFENT command to add the SF Enterprise license.

2 Assign the site name siteA to the host.

System: sym2

3 Verify that the site awareness feature has been enabled. If not, use the
vxkeyless set SFENT command to add the SF Enterprise license.

4 Assign the site name siteB to the host.

System: sym1

5 Initialize the four disks that you will use for this lab. Assign two disks to siteA
and two disks to siteB. Display disk tags to observe the assignments.

6 Create a disk group called appdg with four disks. Add the sites to the disk
group configuration.

7 Display disk group information using the vxdg list appdg and
vxprint -g appdg -htr commands. What do you observe? Is the disk
group site-consistent?
8 Create a 100-MB volume called appvol in the appdg disk group. Use the
vxprint command to display the volume layout. What do you observe?
9 Delete the appvol volume in the appdg disk group.
10 Turn on site consistency feature in the appdg disk group. Verify that the flag is
set using the vxdg list appdg command.
11 Create a 500-MB volume called appvol in the appdg disk group without
specifying any additional attributes. Use the vxprint command to display
the volume layout. What do you observe? To what value are the allsites and
siteconsistent attributes of the volume set? What is the volume read policy?
12 Create a 500-MB volume called webvol in the appdg disk group. This time
turn site consistency off while creating the volume. Display volume layout and
attributes. How does this volume differ from appvol?
13 Delete the webvol volume.
Exercise 2: Analyzing the volume read policy

System: sym1

1 Create a VxFS file system on appvol. Create the /app directory and mount the
file system to the /app directory. Create a 100-MB file on the file system
using the dd if=/dev/zero of=/app/testfile bs=1024k
count=100 command.

2 Reset the I/O statistics using the vxstat -g appdg -r command.

3 Read data from the file system using the vxbench command.

4 Display the I/O statistics using the vxstat -g appdg -d command.
Which disk has been used during the read operation? To which site is this disk
assigned?

5 Reset the I/O statistics using the vxstat -g appdg -r command.

6 Unmount the file system and deport the disk group from the system on which
you are working.

System: sym2

7 Import the appdg disk group on the system at the second site (sym2).

Note: All volumes are started by default when importing a disk group in
Storage Foundation 6.0. With versions before 5.1 SP1, the volumes had
to be started manually after the import was complete.

8 Create the /app directory (if it doesn't exist) and mount the appvol volume
to it.

9 Reset the I/O statistics using the vxstat -g appdg -r command.
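vxbench ships with Storage Foundation and is not generally available elsewhere; when it is missing, a plain dd read of the test file drives the same kind of sequential read load that vxstat then attributes to one site's disk. The path below assumes the 100-MB /app/testfile created in step 1 and falls back to a stand-in file:

```shell
FILE=/app/testfile                    # created in step 1
if [ ! -r "$FILE" ]; then             # fall back to a temporary stand-in
    FILE=$(mktemp)
    dd if=/dev/zero of="$FILE" bs=1024k count=10 status=none
fi

# Sequential read of the whole file, discarding the data.
dd if="$FILE" of=/dev/null bs=1024k status=none && echo "read complete"

# In the lab, follow with:  vxstat -g appdg -d
```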
10 Read data from the file system using the vxbench command.

11 Display the I/O statistics using the vxstat -g appdg -d command.
Which disk has been used during the read operation? To which site is this disk
assigned?

12 Unmount the file system and deport the disk group.

System: sym1

13 Import the appdg disk group.

14 If you have extra time and you will be performing the optional labs, do not
perform this step. Otherwise, destroy the appdg disk group to prepare for the
next lab.
Exercise 3: Optional lab: Analyzing the impact of disk failure in a
site-consistent environment

System: sym1

Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.

Note: If you are not using the enclosure-based naming scheme, replace the
enclosure-based names in the following steps with the disk access name,
which is the DMP node name.

1 Identify the disk used for the second plex in the appvol volume. Note the disk
media name and the disk access name (or the enclosure-based name, if
enclosure-based naming is used) here:
Disk media name: _______________________________________________
Disk access name (Enclosure-based name): ____________________________
Note which site this disk belongs to.

2 Mount the appvol volume to /app.

3 If you are using enclosure-based naming, identify the pathname of the disk you
noted in step 1 using the vxdisk -g appdg list dm_name command.
To simulate a disk failure, use the vxdmpadm -f disable
path=pathname command. Note that you need to disable all paths if you
have a disk with multiple paths.

4 Create a 10-MB testfile2 file in /app using the dd command.

5 Display the disk status using the vxdisk -o alldgs list command and
the disk group information using the vxprint -g appdg -htr
command. What do you observe?
6 Attempt to access the testfile file in /app using the dd command. Can
you still access the file system?
7 Recover from the failure by reenabling the path to the failed disk and by
reattaching the site if necessary.
Note: If the vxattachd daemon is running, the detached site is
automatically reattached and the volumes are automatically recovered
after you enable the paths to the failed disk. If you do not wish to wait
for the daemon to reattach the disks, use the vxreattach command.
8 Turn site consistency off on the disk group.
9 Repeat steps 1 through 4 to simulate the disk failure. How does this differ from
the previous response?
10 Ensure that the vxrelocd daemon is running. Fail the other disk at the same
site. This is the disk to which the second plex had been relocated after the
failure in step 9. Observe what happens. Can you still access the file system? Is
the subdisk on the failed disk relocated again?
11 Recover from the failure by reenabling paths to both of the failed disks and by
reattaching the disks using the vxreattach -br command. Use the
vxtask list command to observe the synchronization.
12 Turn site consistency back on for the disk group.
13 If you do not have time to perform the next optional exercise, unmount /app
and destroy the appdg disk group to prepare for the next lab. Otherwise, skip
this step and continue with the next optional lab section.
Exercise 4: Optional lab: A manual fire drill operation with remote
mirroring

System: sym1

Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.

1 Ensure that there are no failures in the appdg disk group and that the file
system is mounted on the /app directory.

2 Use the vxdg detachsite command to detach siteA manually while the
file system is still mounted. Use the vxdisk list and vxprint
commands to observe what happens. Can you still access the file system?

3 Reattach the detached site and recover the volume.

4 When the volume is completely recovered, detach siteB manually while the
file system is still mounted. Use the vxdisk list and vxprint
commands to observe what happens. Can you still access the file system?

5 Reattach the detached site and recover the volume.

6 Unmount /app and destroy the appdg disk group to prepare for the next lab.

End of lab
Lab 8: Implementing SmartTier
In this lab, you configure your system to use SmartTier to manage the placement
of files in a volume set.
This lab contains the following exercises:
Configuring a multi-volume file system and SmartTier
Testing SmartTier
Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured and licensed. In addition to this, you also need at least four disks to be
used in a disk group.
Before starting this lab you should have all the external disks assigned to you
already initialized but free to be used in a disk group.
Lab information
In preparation for this lab, you need the following information about your lab
environment.
The exercises for this lab start on the next page.
Object                        Value
root password                 veritas
Host name of the lab system   sym1
Shared data disks             emc0_dd7 - emc0_d12,
                              3pardata0_49 - 3pardata0_60
Exercise 1: Configuring a multi-volume file system and SmartTier

System: sym1

In this lab you set up a SmartTier environment for storing user files. The
assumption in this lab is that the majority of the files will be accessed and
modified less frequently than once a month, so the majority of the available
space (two volumes) will reside in the tier that is configured for files not
accessed for 30 - 60 days.

1 Create a disk group called appdg with four disks (emc0_dd7 - emc0_d10).

2 Create a new volume set called appvset containing four sub-volumes.
a Create a 1-GB concatenated volume called subvol1.
b Create a 1-GB concatenated volume called subvol2.
c Create a 1-GB concatenated volume called subvol3.
d Create a 1-GB concatenated volume called subvol4.
e Make a new volume set called appvset using the subvol1, subvol2,
subvol3, and subvol4 volumes.
f Use the vxprint command to view the newly created volume set.

3 Set the file placement classes for each volume in the volume set so that there is
a hightier on the first volume, a midtier on the second and third volumes, and a
lowtier on the fourth volume.
a Set the file placement class for the subvol1 volume to hightier.
b Set the file placement class for the subvol2 and subvol3 volumes to midtier.
c Set the file placement class for the subvol4 volume to lowtier.
d Verify that the placement classes were properly created. The placement
classes can be viewed using the vxassist command with the listtag
option. List the placement classes for each volume to ensure they are set
correctly.
4 Create a VxFS file system on appvset and mount it to the /app mount point.

5 View /app using the fsvoladm list command. You should see the four
volumes that were added to appvset.

6 List the volume availability flags using the fsvoladm queryflags
command.

System: vom

7 If you are not already logged into VOM, start a Web browser and connect to
VOM. Navigate to the sym1.example.com link.

8 Create a new update age-based placement policy for the /app file system and
assign it to the multi-volume file system. The policy should be called
data_agebased_policy. The policy should include all defined tiers, with
hightier being the highest placement class and lowtier the lowest. In the
policy, configure the following relocation policies:
From hightier to lower placement classes if the last modification date is
greater than or equal to 30 days
From midtier to lowtier if the last modification date is greater than or equal
to 60 days
From lowtier to higher placement classes if the last modification date is
less than 60 days
From midtier to hightier if the last modification date is less than 30 days
Do not configure any exceptions to these policies.

System: sym1

9 Verify that the policy has been assigned to the /app mount point using the
fsppadm list command.
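VOM stores the placement policy as an XML document that fsppadm assigns and enforces. The fragment below is an illustrative sketch of a single relocation rule (hightier down to midtier after 30 days of no modification); the tag names follow the placement-policy format documented with fsppadm, but the exact elements and attributes in your VxFS release may differ, so treat the names and values here as assumptions rather than a drop-in policy:

```xml
<?xml version="1.0"?>
<!-- Sketch only: one RELOCATE rule moving files not modified for
     30 or more days from hightier down to midtier. -->
<PLACEMENT_POLICY Version="5.0" Name="data_agebased_policy">
  <RULE Flags="data" Name="age_rule">
    <SELECT Flags="Data">
      <PATTERN> * </PATTERN>
    </SELECT>
    <RELOCATE>
      <FROM>
        <SOURCE><CLASS> hightier </CLASS></SOURCE>
      </FROM>
      <TO>
        <DESTINATION><CLASS> midtier </CLASS></DESTINATION>
      </TO>
      <WHEN>
        <MODAGE Units="days">
          <MIN Flags="gteq"> 30 </MIN>
        </MODAGE>
      </WHEN>
    </RELOCATE>
  </RULE>
</PLACEMENT_POLICY>
```

Whether built by hand or generated by VOM, the policy file is attached with fsppadm assign and applied with fsppadm enforce.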
Exercise 2: Testing SmartTier

System: sym1

1 Create several files in /app as follows:
echo "testfile1 for DST test" > /app/testfile1
echo "testfile2 for DST test" > /app/testfile2

2 Determine on which volume the files are located using the fsmap command.

3 Determine the current date using the date command and record the value
here. This will be needed in later steps.
Current Date __________________________________

4 Change the date of testfile1 using the touch command so that the file is
more than 30 days old, but less than 60 days. You will need to format the date
as YYMMDDhhmm.

5 Determine on which volume the files are located using the fsmap command.

6 Enforce the file placements on /app using either VOM or the CLI.

7 Determine on which volume the testfile1 file is located using the fsmap
command.

8 Change the date of the testfile2 file using the touch command so that
the file is more than 60 days old.

9 Enforce the file placements on /app using either VOM or the CLI.

10 Determine on which volume the testfile2 file is located using the fsmap
command.

11 Change the date of testfile2 using the touch command so that the file is
now more than 30 days old, but less than 60 days.
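The timestamps for the touch steps can be computed rather than hand-edited: GNU date accepts relative expressions such as "45 days ago", and touch -t also accepts the full-century YYYYMMDDhhmm form. A temporary file stands in for /app/testfile1 in this sketch:

```shell
WORK=$(mktemp -d)
F="$WORK/testfile1"                   # stand-in for /app/testfile1
: > "$F"

# A stamp between 30 and 60 days old, in a format touch -t accepts.
STAMP=$(date -d "45 days ago" +%Y%m%d%H%M)
touch -t "$STAMP" "$F"

# Confirm the modification time that the placement policy will evaluate.
date -r "$F" +%Y%m%d%H%M
```

Substituting "90 days ago" or omitting the touch entirely covers the over-60-day and current-date cases in the later steps.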
12 Enforce the file placements on /app using either VOM or the CLI.
13 Determine on which volume the testfile2 file is located using the fsmap
command.
14 Touch both files so they have the current date.
15 Enforce the file placements on /app using either VOM or the CLI.
16 Determine on which volume the files are located using the fsmap command.
17 Unmount /app and destroy the appdg disk group.
End of lab
Lab 9: Replicating a Veritas File System
In this lab, you configure two systems for file replication using the Veritas File
Replicator (VFR). After replication has completed you will then restore the source
file system using the replication target system.
This lab contains the following exercises:
Setting up and performing replication for a VxFS file system
Restoring the source file system using the replication target
Prerequisite setup
To perform this lab, you need two lab systems with Storage Foundation
pre-installed, configured and licensed. In addition, you also need at least four
disks to be used in a disk group on each lab system. Before starting this lab, all
the external disks assigned to you should already be initialized but free to be
used in a disk group.
Lab information
In preparation for this lab, you need the following information about your lab
environment.
The exercises for this lab start on the next page.
Object                                Value
root password                         veritas
Host name of the source system        sym1
Host name of the replication target   sym2
Shared data disks                     emc0_dd7 - emc0_d12,
                                      3pardata0_49 - 3pardata0_60
Exercise 1: Setting up and performing replication for a VxFS file
system

In this lab, you set up two systems for file replication using VFR. The assumption
in this lab is that the two systems do not share disks, even though they do in your
test environment. All steps are based on this assumption. The source system
(sym1) uses the appdg disk group and the appvol volume; the target system
(sym2) uses the repdg disk group and the repvol volume. The mount point names
on both systems are the same.

System: sym1 - Source

1 Verify that the license required to perform file replication is installed. If it is
not, add it.

2 Create a disk group called appdg with four disks (emc0_dd7 - emc0_d10).

3 Create a 1-GB concatenated volume called appvol. Create a VxFS file system
on the volume and mount it to /app.

4 Copy the files in /etc/default into /app.

5 Start the Veritas File Replicator scheduler.

System: sym2 - Target

6 Verify that the license required to perform file replication is installed. If it is
not, add it.

7 Create a disk group called repdg with four disks (3pardata0_49 -
3pardata0_52).

8 Create a 1-GB concatenated volume called repvol. Create a VxFS file system
on the volume and mount it to /app.
9 Start the target replication daemon.

System: sym1 - Source

10 Create the replication job called apprepjob for the /app file system,
specifying the -s option (source system). Set the frequency to 15 minutes.

System: sym2 - Target

11 Create the replication job called apprepjob for the /app file system,
specifying the -t option (target system). Set the frequency to 15 minutes.

System: sym1 - Source

12 Start the replication job for the /app file system.

13 List the replication job for the /app file system. Use the verbose (-v) option.

14 Display the replication job status for the /app file system.

15 Display the replication job statistics for the /app file system using the verbose
option and human-friendly units.

16 Display the replication job storage checkpoint.

System: sym2 - Target

17 List the files that have been replicated in the /app directory.
Exercise 2: Restoring the source file system using the replication
target

In this exercise, you simulate a loss of data by unmounting the appvol volume
and creating a new file system on the volume. The volume is remounted, the
target system (sym2) temporarily becomes the source, and the original source
system (sym1) temporarily becomes the target. After the file system is recovered
on the original source system, you change the replication direction to resume the
replication as before.

System: sym1 - Original Source

1 Unmount /app and recreate a VxFS file system on the appvol volume.
Remount the file system to /app.

2 List the replication job for the /app file system. Use the verbose (-v) option.
Does the replication job exist?

3 Create the replication job for the /app file system, specifying the -t option
(to make sym1 the target system). Set the frequency to 15 minutes. List the
replication job to view the job.

4 Stop the scheduler on the old source system.

5 Start the target replication daemon. Note that the original source system is now
configured to function as the target system.

System: sym2 - Temporary New Source

6 Start the scheduler.

7 Determine if the original replication job exists. If it does, destroy it and
recreate the job specifying sym2 as the source system and sym1 as the target
system. List the replication job to view the job.
8 Display the replication job status for the /app file system. What do you
observe?

9 Copy the /etc/group file to /app to simulate updates to the file system
while the original source was down. Verify that the file has been copied.

10 Perform one iteration of the replication using the vfradmin syncjob
command to restore the original file system.

11 After the replication has completed, determine the result of the replication
using the vfradmin getjobstats -v -fh command.

System: sym1 - Original Source

12 On the original source system, verify that the contents of the /app file system
are restored, including the /etc/group file that was copied on sym2 in
step 9.

System: sym2 - Temporary New Source

13 After the original file system is restored, change sym2 back to a target system
by first stopping the replication scheduler and then changing the replication
mode for apprepjob to target. Ensure that the target replication daemon
(the vxfsrepld daemon) is still running on the system.

System: sym1 - Original Source

14 To configure sym1 as the source system again, change the replication job mode
for apprepjob to source on sym1. List the replication job and confirm that the
source system is now sym1.
15 Stop the replication target daemon on sym1. Note that this step is not necessary
to change sym1 to be the source system for the /app file system replication.
However, since there are no other file systems for which sym1 is the target, the
target replication daemon can be stopped.

16 Start the replication scheduler on the original source system. Verify that the
scheduler is running.

17 Copy new data to the /app file system by executing the following command:
cp /etc/*.conf /app. Confirm that the files have been copied.

18 Start the apprepjob replication job and display its status using the vfradmin
getjobstatus command.

19 Display the job statistics using the vfradmin getjobstats -v -fh
command.

System: sym2 - Target

20 Verify that the contents of the /app file system are replicated to the target file
system on sym2.

System: sym1 - Source

21 Stop the replication scheduler on sym1. Unmount /app and destroy the appdg
disk group.
22 Stop the target replication daemon on sym2.
23 Unmount /app and destroy the repdg disk group.
End of lab
sym2 - Target
Lab App C: Optional Lab: Importing LUN Snapshots
The purpose of this lab is to perform tests that give you a better understanding of
how Volume Manager deals with LUN Snapshots.
This lab contains the following exercises:
LUN snapshots setup
Importing clone disk groups
Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured and licensed. In addition to this, you also need four disks to be used in
a disk group and its clone.
Lab information
In preparation for this lab, you need the following information about your lab
environment.
The exercises for this lab start on the next page.
Object Value
root password veritas
Host name of the lab system sym1
Shared data disks: emc0_dd7 - emc0_d12
Exercise 1: LUN snapshots setup
sym1
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
In this exercise, you simulate a hardware-based LUN snapshot by cloning two
disks using the dd command. Once the disks are cloned, you perform testing
on the cloned disks using Volume Manager commands.
1 Create a disk group called appdg with two disks.
2 Create a 1-GB concatenated mirrored volume called appvol. Create a VxFS
file system on the volume and mount it to /app.
3 Copy some files into /app. Verify that the files have been copied.
4 Determine the names of two unused and uninitialized disks that can be used to
simulate LUN snapshots. Record the names for later use. If necessary, use the
vxdiskunsetup command to uninitialize the disks.
Clone Device 1 _____________________ (emc0_dd9)
Clone Device 2 _____________________ (emc0_d10)
5 Use the dd command to clone each device in the appdg disk group (emc0_dd7
and emc0_dd8) to the disks recorded in step 4 (emc0_dd9 and emc0_d10). Use a
1024k block size during the copy operation. Be patient; the copy for this step
will take a while.
6 Use the vxdisk scandisks command to have Volume Manager rescan the
disks. The two disks that have the cloned data should now show as
udid_mismatch when viewed using vxdisk list.
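Before running step 5 against real devices, the dd invocation can be rehearsed on ordinary files. A sketch with a small hypothetical LUN image; on the lab system, the file paths would be replaced by the raw device paths of the disks recorded in step 4:

```shell
workdir=$(mktemp -d)

# Create a 4-MB file standing in for the source LUN.
dd if=/dev/urandom of="$workdir/lun0" bs=1024k count=4 2>/dev/null

# Clone it with a 1024k block size, as in step 5.
dd if="$workdir/lun0" of="$workdir/clone0" bs=1024k 2>/dev/null

# Verify the clone is byte-for-byte identical to the source.
cmp "$workdir/lun0" "$workdir/clone0" && echo "clone verified"
```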
7 Use the vxdg listclone command to list all clone disks. You will see two
clone disks in the output. Notice that the device names are the same as you
recorded in step 4.
8 Unmount the /app file system.
Exercise 2: Importing clone disk groups
sym1
In this section, you will import the cloned disk group on the same host using two
different methods.
1 Deport the appdg disk group.
2 Import the cloned disk group using the vxdg command with the
-o useclonedev=on option. Do not update the IDs on the disks.
3 Verify that the volume has been started using the vxprint command.
4 Use the vxdisk list command to verify that the cloned disk group was
imported.
5 Mount the appvol volume and verify the files are the same as they were in the
original volume.
6 Unmount /app and deport the disk group.
7 Import the original appdg disk group. Verify that the original disk group was
imported using the vxdisk list command.
8 Import the cloned disk group using a new name. Update the disk IDs. Verify
that there are now two disk groups imported.
9 Turn off the clone flag on both disks using the vxdisk command. You can use
the vxdg listclone command to determine the device names and also to
verify that the flags have been cleared.
10 Destroy both disk groups.
End of lab
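Step 5 above asks you to verify that the files on the imported clone match the originals. One generic way to script that comparison, shown here with ordinary directories standing in for the original and cloned mount points:

```shell
# Stand-in directories for the original mount and the imported clone.
orig=$(mktemp -d); clone=$(mktemp -d)
echo data1 > "$orig/f1"; echo data2 > "$orig/f2"
cp "$orig"/* "$clone"

# diff -r reports any file that differs between the two trees;
# an empty result means the contents match.
diffs=$(diff -r "$orig" "$clone")
[ -z "$diffs" ] && echo "contents match"
```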
Lab App D: Optional Lab: Managing the Boot Disk with SF
In this practice, you create a boot disk mirror, disable the boot disk, and boot up
from the mirror and recover the boot disk. Then you boot up again from the boot
disk, remove the mirror, and remove the alternate disk from the boot disk group.
Optionally you can also create a boot disk snapshot from which you can boot.
Finally, you unencapsulate the boot disk. These tasks are performed using a
combination of the VOM interface, the vxdiskadm utility, and CLI commands.
This lab contains the following exercises:
Optional lab: Encapsulation and boot disk mirroring
Optional lab: Testing the boot disk mirror
Optional lab: Removing the boot disk mirror
Optional lab: Creating the boot disk snapshot
Optional lab: Testing and removing the boot disk snapshot
Optional lab: Unencapsulating
Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured and licensed. In addition to this, you also need a second internal disk to
be able to mirror the system disk.
Lab information
In preparation for this lab, you need the following information about your lab
environment.
Encapsulation and boot disk mirroring
You will be performing two different types of boot disk mirroring. In the first
section, you mirror the boot disk and keep all volumes mirrored. This would be
used to make sure the data on the boot disk is mirrored and up to date in case of a
disk failure. In the second section, you create a boot disk snapshot which can be
used in case the data on the boot disk gets corrupted or deleted. After the snapshot
is created, the data is no longer kept in sync with the boot disk.
The exercises for this lab start on the next page.
Object Value
root password veritas
Host name sym1
Boot disk: sda
2nd internal disk: sdb
Shared data disks: emc0_dd7 - emc0_dd12
Exercise 1: Optional lab: Encapsulation and boot disk mirroring
sym1
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time.
1 Use vxdiskadm to encapsulate the boot disk. Use systemdg as the name of
your boot disk group and use rootdisk as the name of your boot disk.
2 Use the vxrootadm command to administer mirrors of an encapsulated
system disk. Use the command to mirror rootdisk, and use the second internal
disk on your system for the mirror.
Note: When using the vxrootadm command, the target disk must be in an
uninitialized state with the disk type set to auto:none. If the disk is
already initialized, the command will fail and leave the disk in an error
state. Use the vxdiskunsetup command to uninitialize the disk if
necessary.
3 After the mirroring operation is complete, verify that you now have two disks
in systemdg: rootdisk and rootdisk-s0, and that all volumes are mirrored. Also,
check to determine if rootvol is enabled and active.
Hint: Use vxprint and examine the STATE fields.
Note: In a production environment, you would now place the names of your
alternate boot disks in persistent storage. This, however, is not possible in
the classroom VMware environment.
Exercise 2: Optional lab: Testing the boot disk mirror
sym1
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time.
1 Fail the boot disk by turning off the rootvol plex on the boot disk using the
vxmend -g systemdg off rootvol-01 command. The system will
continue to run because you have a mirror of the disk.
a Fail the system disk.
b Verify that rootvol-01 is now disabled and offline.
c To change the plex to a STALE state, run the vxmend on command on
rootvol-01. Verify that rootvol-01 is now in the DISABLED and STALE
state.
2 Now that you have simulated the failure of the original boot disk, reboot the
system and boot up using the mirror. Reboot the system using shutdown -r
now.
3 After the system comes back up, check the status of the root volume. What is
the state of the volume?
4 Use the vxtask command to monitor when the synchronization is complete,
then verify the status of rootvol. Verify that rootvol-01 is now in the ENABLED
and ACTIVE state.
Note: You may need to wait a few minutes for the state to change from
STALE to ACTIVE.
Exercise 3: Optional lab: Removing the boot disk mirror
sym1
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time.
1 Remove the mirror of the volumes on the encapsulated boot disk. Remove all
but one plex of rootvol and swapvol (that is, remove the newer plex from each
volume in systemdg).
For each volume in systemdg, remove all of the newly created mirrors. More
specifically, for each volume, two plexes are displayed, and you should remove
the newer (mir) plexes from each volume.
2 Remove the second disk from the systemdg disk group.
Exercise 4: Optional lab: Creating the boot disk snapshot
sym1
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time.
1 Use the vxdiskunsetup command to remove all partitions from the second
internal drive (sdb).
2 Next, use vxrootadm to create a bootable snapshot from your system disk,
rootdisk. Use bksystemdg as the name of the disk group for the snapshot.
3 Open a separate window and monitor the mirroring progress using the
vxtask monitor command for each of the volumes being mirrored on the
boot disk. Use Ctrl+c to exit the command.
4 After the snapshot operation is complete, verify that you now have a new
bksystemdg disk group with the altboot disk in it.
5 Verify that the correct volumes have been created by comparing the volumes in
systemdg with the newly created volumes in bksystemdg using the vxprint
-htr command.
Exercise 5: Optional lab: Testing and removing the boot disk
snapshot
sym1
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time.
1 Test that the snapshot of the system disk is bootable. To perform this step,
reboot the virtual machine using the shutdown -r now command. When
the system reaches the initial boot screen, press the F2 key to enter the setup
menu. Under the Boot menu, move the VMware Virtual SCSI Hard Drive
(0:1) so that it is before the VMware Virtual SCSI Hard Drive (0:0) on the
list. Exit and save the changes.
2 Use the df -k command to verify that you are booted from the snapshot disk.
3 Your system is currently booted up from the boot disk snapshot. Boot up from
the original boot disk. To perform this step, reboot the virtual machine using
the shutdown -r now command. When the system reaches the initial boot
screen, press the F2 key to enter the setup menu. Under the Boot menu, move
the VMware Virtual SCSI Hard Drive (0:0) so that it is before the VMware
Virtual SCSI Hard Drive (0:1) on the list. Exit and save the changes.
4 Use the df -k command to verify that you are booted from the original disk.
5 Remove the snapshot of the boot disk by destroying the bksystemdg disk
group.
Exercise 6: Optional lab: Unencapsulating
sym1
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time.
1 Run the command to convert the root volumes back to disk partitions. When
prompted to shut down the system, answer n. Use the shutdown -r now
command to reboot the system.
2 Verify that the mount points are now slices rather than volumes.
End of lab
Appendix B
Lab Solutions
Lab 1: Administering Volume Manager
In this lab, you will analyze Volume Manager I/O operations using the vxstat
and vxtrace utilities. You will also practice changing volume layouts. More
exercises are included to practice managing VxVM tasks.
This lab contains the following exercises:
Preparing the environment for the performance labs
Exploring the vxstat utility
Exploring the vxtrace utility
Changing the volume layout
Optional lab: Monitoring tasks
Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured and licensed. In addition to this, you also need a minimum of six
external disks to be used during the labs.
Lab information
In preparation for this lab, you need the following information about your lab
environment.
The exercises for this lab start on the next page.
Object Value
root password veritas
Host name of the lab system sym1
Boot disk: sda
2nd internal disk: sdb
Shared data disks: emc0_dd7 - emc0_d12
3pardata0_49 - 3pardata0_60
Location of lab scripts
(if any):
/student/labs/sf/sf60
Exercise 1: Preparing the environment for the performance labs
sym1
It is possible that your system contains existing disk groups and volumes. Use the
following steps to free up the system to prepare for performance testing.
1 Determine if there are any mounted file systems on VxVM volumes using the
df -k command. Unmount them using the umount command if there are.
Solution
df -k
umount /mount_point (if necessary)
End of Solution
2 Use the vxdg command to destroy any existing disk groups.
Solution
vxdg destroy diskgroup
End of Solution
3 If the SmartMove feature is enabled for all volumes, change it to thin
provisioning devices only.
Solution
vxdefault list
vxdefault set usefssmartmove thinonly
vxdefault list
End of Solution
Exercise 2: Exploring the vxstat utility
sym1
In this exercise, you analyze the performance of a disk in the testdg disk group
for 32K random reads. You use the vxbench program that is included as part of
the VRTSspt package to generate an I/O load.
1 Create a non-CDS disk group named testdg that contains one disk. Use the
second internal disk. Name the disk testdg01.
Solution
vxdisksetup -i sdb format=sliced (if necessary)
vxdg init testdg testdg01=sdb cds=off
End of Solution
2 Determine the maximum volume size that can be created using the single
drive. Create a volume named testvol in the testdg disk group that is the
maximum size on the single drive.
Solution
vxassist -g testdg make testvol maxsize
End of Solution
3 Create a VxFS file system on the testvol volume. Create a /test mount
point and mount the file system to it.
Solution
mkfs -t vxfs /dev/vx/rdsk/testdg/testvol
mkdir /test
mount -t vxfs /dev/vx/dsk/testdg/testvol /test
End of Solution
4 Invoke the vxstat command to begin drive analysis on the testvol volume.
Set the vxstat interval to display statistics every 1 second. Statistics will
begin printing every second, and all statistics are displayed as 0 until you begin
sending I/O to the volume.
Note: To be able to analyze the output later, you can direct it to a file, for
example /tmp/vxstat.out.
Solution
vxstat -g testdg -i 1 -d testvol | tee -a \
/tmp/vxstat.out
End of Solution
5 Use the vxbench utility to start several invocations of the following
vxbench command. Note that the vxbench location
(/opt/VRTSspt/FS/VxBench) should be in your PATH variable.
vxbench_rhel5_x86_64 -w rand_write -i \
iosize=32,iocount=16384,maxfilesize=3072000 \
/test/test1 &
Note: The vxbench program will create and write to the file test1 in the
/test directory.
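If vxbench is unavailable, a rough stand-in can generate a similar pattern of 32K writes at scattered offsets using dd against an ordinary file. Sizes and counts are scaled down from the lab values, and the offsets here are a simple deterministic scatter rather than truly random:

```shell
# Ordinary file standing in for /test/test1.
testfile=$(mktemp)

# Issue 64 scattered 32K writes within the first 1000 blocks of the file.
for i in $(seq 1 64); do
    offset=$(( (i * 37) % 1000 ))
    dd if=/dev/zero of="$testfile" bs=32k count=1 \
       seek="$offset" conv=notrunc 2>/dev/null
done
echo "writes complete: $(wc -c < "$testfile") bytes"
```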
6 When you execute the vxbench command, the vxstat output in the other
terminal window begins to display data. Wait for all the vxbench commands
to finish executing, then stop the vxstat output by typing CTRL-C on the
terminal where you are running vxstat, and analyze the vxstat output and
determine the peak performance of the drive.
Solution
Peak drive performance is reached when the number of I/O operations and the
I/O count stop increasing on the vxstat output.
End of Solution
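Picking the peak out of the captured output can be automated. A sketch that extracts the largest per-interval operation count with awk; the three sample lines are fabricated to illustrate the idea and assume a column layout of object name, operations, and blocks, which may differ from real vxstat output:

```shell
# Fabricated vxstat-style samples: object, operations, blocks.
cat > /tmp/vxstat.sample <<'EOF'
testvol 118 7552
testvol 452 28928
testvol 447 28608
EOF

# Track the largest per-interval operation count seen.
peak=$(awk '{ if ($2 > max) max = $2 } END { print max }' /tmp/vxstat.sample)
echo "peak ops/interval: $peak"
```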
Exercise 3: Exploring the vxtrace utility
sym1
The purpose of this section is to provide an introduction to the usage of the
vxtrace utility.
1 Use the vxbench utility to create some random writes to the /test/test1
file using the following vxbench command:
vxbench_rhel5_x86_64 -w rand_write -i \
iosize=32,iocount=16384,maxfilesize=3072000 \
/test/test1 &
2 Dump trace data on virtual and physical disk I/Os for the volume named
testvol to the file named /tmp/vxtrace.out.
Solution
vxtrace -g testdg -d /tmp/vxtrace.out -o dev,disk \
testvol
End of Solution
3 When the vxbench command completes, stop filling the file.
Solution
Press Ctrl+C.
End of Solution
4 Read the trace records from the file named /tmp/vxtrace.out. Notice
that the vxbench program is writing to the testvol volume (vdev
testvol) on the disk that you created for the testdg disk group.
Solution
vxtrace -g testdg -f /tmp/vxtrace.out -o dev,disk | more
11441 START write vdev testvol block 318848 len 64
concurrency 132 pid 12244
11442 START write disk sdb op 11441 block 318859 len 64
11443 START write vdev testvol block 2636032 len 64
concurrency 433 pid 12244
11444 START write disk sdb op 11443 block 2636043 len
64
You will see a listing of the virtual and physical disk I/Os to the one disk in the
testdg disk group.
End of Solution
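The trace records read back in step 4 can also be summarized rather than paged through. This sketch counts START records per object using sample lines in the same format as the output quoted above:

```shell
# Sample trace records in the vxtrace output format shown above.
cat > /tmp/vxtrace.sample <<'EOF'
11441 START write vdev testvol block 318848 len 64 concurrency 132 pid 12244
11442 START write disk sdb op 11441 block 318859 len 64
11443 START write vdev testvol block 2636032 len 64 concurrency 433 pid 12244
11444 START write disk sdb op 11443 block 2636043 len 64
EOF

# Count START records per object (virtual device or physical disk).
summary=$(awk '$2 == "START" { count[$4 " " $5]++ }
               END { for (obj in count) print obj, count[obj] }' \
          /tmp/vxtrace.sample)
echo "$summary"
```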
5 Dump error trace data on all disks to the screen using the following vxtrace
command. Note that the output will only display error messages as they occur.
vxtrace -e disk
Note: When this vxtrace command is started, the prompt will not be
returned. Do not end the command. Leave that window active and run
the next commands in a different terminal window.
6 Use the vxdisk list command to determine the path(s) to the disk used in
the testdg disk group. Then use the vxdmpadm -f disable path=sdb
command to disable all paths to the disk.
Solution
vxdisk list sdb
vxdmpadm -f disable path=sdb
End of Solution
7 Use the vxbench utility to create some random writes to the /test/test1
file as shown in step 1. Note that the terminal running the vxtrace -e
disk command now shows write errors.
Solution
./vxbench_rhel5_x86_64 -w rand_write -i \
iosize=32,iocount=16384,maxfilesize=3072000 \
/test/test1 &
13547 ERROR write sdb op 0 block 240295 len 127 error
5 (Input/output error)
13548 ERROR write sd testdg01-01 op 0 block 174071
len 127 error 5 (Input/output error)
End of Solution
8 Use the vxdmpadm enable path=sdb command to enable all paths to
the disk.
Solution
vxdmpadm enable path=sdb
End of Solution
9 Unmount the /test file system and destroy the testdg disk group.
Solution
umount /test
vxdg destroy testdg
End of Solution
Exercise 4: Changing the volume layout
sym1
1 Create a disk group called appdg with four disks (emc0_dd7 - emc0_d10).
Solution
vxdisksetup -i emc0_dd7 (if necessary)
vxdg init appdg appdg01=emc0_dd7 appdg02=emc0_dd8 \
appdg03=emc0_dd9 appdg04=emc0_d10
End of Solution
2 Create a 20-MB concatenated mirrored volume called appvol. Create a Veritas
file system on the volume and mount it to /app.
Solution
vxassist -g appdg make appvol 20m layout=mirror
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app (if it doesn't already exist)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
3 Add data to the volume and verify that the file has been added.
Solution
echo hello > /app/hello
cat /app/hello
End of Solution
4 Change the volume layout from its current layout (mirrored) to a nonlayered
mirror-stripe with two columns and a stripe unit size of 128 sectors (64K).
Monitor the progress of the relayout operation, and display the volume layout
after each command that you run.
Solution
a To begin the relayout operation:
vxassist -g appdg relayout appvol \
layout=mirror-stripe ncol=2 stripeunit=128
b To monitor the progress of the task, in a separate terminal window run:
vxtask monitor
c Run vxprint to display the volume layout. Notice that a layered layout is
created:
vxprint -g appdg -htr
d Recall that when you relayout a volume to a striped layout, a layered layout
is created first, then you must use vxassist convert to complete the
conversion to a nonlayered mirror-stripe layout:
vxassist -g appdg convert appvol \
layout=mirror-stripe
e Run vxprint to confirm the resulting layout. Notice that the volume is
now a nonlayered volume:
vxprint -g appdg -htr
End of Solution
5 Verify that the file is still accessible.
Solution
cat /app/hello
End of Solution
6 Unmount the file system on the volume and remove the volume.
Solution
umount /app
vxassist -g appdg remove volume appvol
End of Solution
Exercise 5: Optional lab: Monitoring tasks
sym1
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
In this optional lab, you track volume relayout processes using the vxtask
command and recover from a vxrelayout crash using the command line.
To begin, you should have at least four disks in the disk group that you are using.
1 Create a concatenated volume called appvol in the appdg disk group, with a
size of 1 GB using the vxassist command. When the volume is created,
mirror the volume. Assign a task tag (appvol_monitor) to the task and run the
vxassist mirror command in the background.
Solution
vxassist -g appdg make appvol 1g
vxassist -g appdg -b -t appvol_monitor mirror appvol
End of Solution
2 View the progress of the task.
Solution
vxtask list appvol_monitor
or
vxtask monitor
End of Solution
3 When the task is complete, use vxassist to relayout the volume to
stripe-mirror. Use a stripe unit size of 256K, use two columns, and assign the
process to the above task tag.
Solution
vxassist -g appdg -t appvol_monitor relayout appvol \
layout=stripe-mirror stripeunit=256k ncol=2
End of Solution
4 In another terminal window, abort the task to simulate a crash during relayout.
Solution
vxtask abort appvol_monitor
End of Solution
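vxtask list, monitor, and abort manage tagged VxVM kernel tasks. As a rough shell analogy for the same start / monitor / abort cycle, this sketch uses an ordinary background process in place of a relayout task:

```shell
# Start a long-running "task" in the background and record its PID,
# much as -t tags a VxVM task for later reference.
sleep 30 &
taskpid=$!

# "Monitor": check whether the task is still running.
kill -0 "$taskpid" 2>/dev/null && echo "task $taskpid running"

# "Abort": stop the task, as vxtask abort would.
kill "$taskpid"
wait "$taskpid" 2>/dev/null || true
kill -0 "$taskpid" 2>/dev/null || echo "task aborted"
```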
5 Reverse the relayout operation. View the layout of the volume after the
reversal of the relayout operation completes. Notice that the volume layout is
back to its original state but it is now layered. Change the layout to
non-layered.
Solution
vxrelayout -g appdg reverse appvol
vxprint -g appdg -htr
vxassist -g appdg convert appvol layout=mirror-concat
End of Solution
6 Destroy the appdg disk group.
Solution
vxdg destroy appdg
End of Solution
End of lab
Lab 2: Managing Devices Within the VxVM Architecture
In this lab, you explore the VxVM tools used to manage the device discovery layer
(DDL) and dynamic multipathing (DMP). The objective of this exercise is to make
you familiar with the commands used to administer multipathed disks.
This lab contains the following exercises:
Administering the Device Discovery Layer
Displaying DMP information
Displaying DMP statistics
Enabling and disabling DMP paths
Managing array policies
Optional lab: DMP performance tuning templates
Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured and licensed. In addition to this, you also need a minimum of three
external disks to be used during the labs.
Before you begin this lab, destroy any data disk groups that are left from previous
labs:
vxdg destroy diskgroup
Lab information
In preparation for this lab, you need the following information about your lab
environment.
The exercises for this lab start on the next page.
Object Value
root password veritas
Host name of the lab system sym1
Shared data disks: emc0_dd7 - emc0_d12
3pardata0_49 - 3pardata0_60
Location of lab scripts: /student/labs/sf/sf60
Location of the vxbench program: /opt/VRTSspt/FS/VxBench
1 List all currently supported disk arrays.
Solution
vxddladm listsupport
End of Solution
2 List all the enclosures connected to your system using the vxdmpadm
listenclosure all command. Does Volume Manager recognize the disk
array you are using in your lab environment? What is the name of the
enclosure? Note the enclosure name here.
Original enclosure name:__________________________________________
Solution
vxdmpadm listenclosure all
Volume Manager recognizes the disk array if it is among the supported disk
arrays you listed in step 1. Any internal disks will show with an enclosure
name of OTHER_DISKS or DISK.
Note: Volume Manager 6.0 does not require that the all option be used. If it
is left out of the command, all is assumed.
End of Solution
3 Use the vxddladm command to determine if enclosure based naming is set on
your system.
Solution
vxddladm get namingscheme
NAMING_SCHEME PERSISTENCE LOWERCASE USE_AVID
======================================================
Enclosure Based Yes Yes Yes
End of Solution
Exercise 1: Administering the Device Discovery Layer
sym1
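If you want to check the naming scheme non-interactively before deciding whether to run vxdiskadm, the command output can be parsed with awk. A sketch against a captured sample of the vxddladm get namingscheme output (on a live system, pipe the command output in directly):

```shell
# Captured sample of 'vxddladm get namingscheme' for illustration.
sample='NAMING_SCHEME       PERSISTENCE    LOWERCASE      USE_AVID
======================================================
Enclosure Based     Yes            Yes            Yes'

# The scheme name is the first two fields of the third line.
scheme=$(printf '%s\n' "$sample" | awk 'NR==3 {print $1, $2}')
if [ "$scheme" = "Enclosure Based" ]; then
    echo "EBN already set"
else
    echo "EBN not set - run vxdiskadm to change the naming scheme"
fi
```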
4 If enclosure based naming (EBN) is not set on your system, set it using the
vxdiskadm command.
Solution
vxdiskadm (only if EBN is not set)
Select the option, Change the disk-naming scheme and complete the
prompts to select enclosure-based naming.
End of Solution
5 Display the disks attached to your system and note the changes.
Solution
vxdisk -o alldgs list
End of Solution
6 Rename the emc0 enclosure to emc_disk using the vxdmpadm setattr
command. To find the exact command syntax, check the manual pages for the
vxdmpadm command.
Note: The original name of the enclosure is displayed by the vxdmpadm
listenclosure all command that you used in step 2.
Solution
vxdmpadm setattr enclosure emc0 name=emc_disk
End of Solution
7 Display the disks attached to your system and note the changes.
Solution
vxdisk -o alldgs list
The disks should now contain the new name that you entered in the previous
step, for example emc_disk_dd1.
End of Solution
Exercise 2: Displaying DMP information

1 List all controllers on your system using the vxdmpadm listctlr
command. How many controllers are listed for the disk array your system is
connected to?
Solution
vxdmpadm listctlr
In the virtual lab environment, you will observe two controllers listed for the
enclosure you renamed to emc_disk.
End of Solution
2 Using one of the controller names discovered in the previous step, display all
paths connected to the controller using the vxdmpadm getsubpaths
ctlr=controller command. Compare the NAME and the
DMPNODENAME columns in the output.
Solution
vxdmpadm getsubpaths ctlr=controller
The NAME column lists all of the disk devices that the operating system sees,
whereas the DMPNODENAME column provides the corresponding DMP node
name used for that disk device. If you have not switched to enclosure based
naming, these names will be the same. Note that the DMP node names are the
ones displayed by the vxdisk -o alldgs list command.
Example Output
vxdmpadm getsubpaths ctlr=c3
NAME STATE[A]   PATH-TYPE[M] DMPNODENAME  ENCLR-TYPE ENCLR-NAME ATTRS
=====================================================================
sdc  ENABLED(A) -            emc_disk_dd1 EMC        emc_disk   -
sde  ENABLED(A) -            emc_disk_dd2 EMC        emc_disk   -
sdg  ENABLED(A) -            emc_disk_dd3 EMC        emc_disk   -
sdi  ENABLED(A) -            emc_disk_dd4 EMC        emc_disk   -
. . . (rest of output omitted)
End of Solution
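To summarize how many OS paths lead to each DMP node, the getsubpaths output can be tallied with awk. A sketch against a captured sample of the output (the device names are illustrative):

```shell
# Captured sample of 'vxdmpadm getsubpaths ctlr=c3' (illustrative names,
# two paths per DMP node).
sample='NAME STATE[A] PATH-TYPE[M] DMPNODENAME ENCLR-TYPE ENCLR-NAME ATTRS
================================================================
sdc ENABLED(A) - emc_disk_dd1 EMC emc_disk -
sde ENABLED(A) - emc_disk_dd1 EMC emc_disk -
sdg ENABLED(A) - emc_disk_dd2 EMC emc_disk -
sdi ENABLED(A) - emc_disk_dd2 EMC emc_disk -'

# Count OS paths per DMP node name (column 4), skipping the two
# header lines.
printf '%s\n' "$sample" | awk 'NR>2 {count[$4]++}
END {for (n in count) print n, count[n]}' | sort
```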
3 In the displayed list of paths, use the DMP node name of one of the paths to
display information about paths that lead to the particular LUN. How many
paths can you see?
Solution
vxdmpadm getsubpaths dmpnodename=emc_disk_dd7
End of Solution
4 View DDL extended attributes for the dmpnodename used in the previous step
using the vxdisk -p list command.
Solution
vxdisk -p list emc_disk_dd7
You should see extended attributes such as cabinet serial number, array type,
transport, and so on.
End of Solution
5 Determine the Port ID (PID) for all devices attached to the system using the
vxdisk -p list command and the -x option.
Solution
vxdisk -x PID -p list
Selecting a specific attribute is useful when you wish to see that attribute for all
devices attached to a system.
End of Solution
6 Determine the DDL_DEVICE_ATTR for all disks attached to the system using
the vxdisk -p list command and the -x option. If no attributes are set
the attribute displays a NULL.
Solution
vxdisk -x DDL_DEVICE_ATTR -p list
End of Solution
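When scanning an attribute across many disks, it is often useful to filter out the NULL entries. A sketch using awk against a captured, illustrative sample of the vxdisk -x DDL_DEVICE_ATTR -p list output:

```shell
# Captured sample of 'vxdisk -x DDL_DEVICE_ATTR -p list'
# (device names and attribute values are illustrative).
sample='DEVICE          DDL_DEVICE_ATTR
emc0_dd7        lun
emc0_dd8        NULL
emc0_dd9        std fc'

# Show only disks that actually have the attribute set.
printf '%s\n' "$sample" | awk 'NR>1 && $2 != "NULL"'
```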
7 Choose an attribute from the following list and view the attribute for all disks
using vxdisk -x attribute -p list. Not all attributes will have
values set.
The supported attributes:
DGID VID
PID ANAME
ATYPE TPD_SUPPRESSED
NR_DEVICE CAB_SERIAL_NO
LUN_SERIAL_NO PORT_SERIAL_NO
CUR_OWNER LIBNAME
LUN_OWNER LUN_TYPE
SCSI_VERSION REVISION
TPD_META_DEVNO TPD_META_NAME
TPD_LOGI_CTLR TPD_PHY_CTLR
TPD_SUBPATH TPD_DEVICES
ASL_CACHE ASL_VERSION
UDID ECOPY_DISK
ECOPY_TARGET_ID ECOPY_OPER_PARM
DEVICE_TYPE DYNAMIC
TPD_HIDDEN_DEVS LOG_CTLR_NAME
PHYS_CTLR_NAME DISK_GEOMETRY
MT_SAFE FC_PORT_WWN
FC_LUN_NO HARDWARE_MIRROR
TPD_CONTROLLED TPD_PARTITION_MAP
DMP_SINGLE_PATH DMP_VMDISK_IOPOLICY
DDL_DEVICE_ATTR DDL_THIN_DISK
Solution
vxdisk -x attribute -p list
End of Solution
Exercise 3: Displaying DMP statistics

1 Create a disk group called appdg that contains two disks (emc_disk_dd7 and
emc_disk_dd8).
Solution
vxdisksetup -i emc_disk_dd7 (if necessary)
vxdisksetup -i emc_disk_dd8 (if necessary)
vxdg init appdg appdg01=emc_disk_dd7 \
appdg02=emc_disk_dd8
End of Solution
2 Create a 1-GB volume called appvol in the appdg disk group.
Solution
vxassist -g appdg make appvol 1g
End of Solution
3 Determine the device used for the appvol volume. This device name will be
used as the dmpnodename in step 10.
Solution
vxprint -g appdg -htr
v  appvol     -          ENABLED  ACTIVE  2097152  SELECT  -             fsgen
pl appvol-01  appvol     ENABLED  ACTIVE  2097152  CONCAT  -             RW
sd appdg01-01 appvol-01  appdg01  0       2097152  0       emc_disk_dd7  ENA
In the above example, the emc_disk_dd7 device is used for the appvol volume.
End of Solution
4 Create a VxFS file system on the appvol volume using the mkfs command.
Solution
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
End of Solution
5 Create a mount point for the appvol volume called /app and mount the file
system created in the previous step to the mount point.
Solution
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
6 Enable the gathering of I/O statistics for DMP.
Solution
vxdmpadm iostat start
End of Solution
7 Reset the DMP I/O statistics counters to zero.
Solution
vxdmpadm iostat reset
End of Solution
8 Next, use the dmpiotest script to generate I/O on the disk used by the appdg
disk group. The dmpiotest script uses the vxbench utility, which is part
of the VRTSspt package and is installed as part of the SF installation. Change
to the directory containing the lab scripts and execute the script:
./dmpiotest /app
Solution
cd /student/labs/sf/sf60
./dmpiotest /app
This script creates some test files in the /app directory (if they do not exist)
and starts several invocations of the vxbench program as follows:
/opt/VRTSspt/FS/VxBench/vxbench_rhel5_x86_64 \
-w rand_mixed -i \
iosize=8,iocount=65536,maxfilesize=102400,nreps=100 \
/app/test1 /app/test2 /app/test3 /app/test4 \
/app/test5 &
Note: The script uses a version of the vxbench program specific to
your platform.
End of Solution
9 Display I/O statistics for all controllers.
Solution
vxdmpadm iostat show all
End of Solution
10 Display I/O statistics for the DMP node that corresponds to the device used by
appvol. Display statistics every two seconds for eight times.
Note: You can use the vxprint -g appdg -htr appvol command to
identify the dmp node name of the device used by appvol.
Solution
vxdmpadm iostat show dmpnodename=emc_disk_dd7 \
interval=2 count=8
End of Solution
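To compare the load on each path numerically, the per-path counters can be summed with awk. A sketch against a simplified, illustrative sample of the counters (the real vxdmpadm iostat show output has more columns):

```shell
# Simplified, illustrative per-path counters (columns: path, reads,
# writes) reduced from 'vxdmpadm iostat show' output.
sample='sdc 1024 980
sde 1010 1002'

# Total I/O operations per path, plus an overall total.
printf '%s\n' "$sample" | awk '{total += $2 + $3; print $1, $2 + $3}
END {print "TOTAL", total}'
```

Roughly equal per-path totals indicate that DMP is balancing the load across both paths.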
Exercise 4: Enabling and disabling DMP paths

1 Use the dmpiotest script to generate I/O on the disk used by the appdg disk
group. The dmpiotest script uses the vxbench utility, which is part of
the VRTSspt package and is installed as part of the SF installation. Change to
the directory containing the lab scripts and execute the script:
./dmpiotest /app
Solution
cd /student/labs/sf/sf60
./dmpiotest /app
This script creates some test files in the /app directory (if they do not exist)
and starts several invocations of the vxbench program as follows:
/opt/VRTSspt/FS/VxBench/vxbench_rhel5_x86_64 \
-w rand_mixed -i \
iosize=8,iocount=65536,maxfilesize=102400,nreps=100 \
/app/test1 /app/test2 /app/test3 /app/test4 \
/app/test5 &
Note: The script uses a version of the vxbench program specific to
your platform.
End of Solution
2 Display I/O statistics for the DMP node that corresponds to the device used by
appvol. Display statistics every two seconds for one thousand times. This will
ensure that the output continues as you enable and disable paths. I/O should be
present for both paths to the device.
Note: You can use the vxprint -g appdg -htr appvol command to
identify the dmp node name of the device used by appvol.
Solution
vxdmpadm iostat show dmpnodename=emc_disk_dd7 \
interval=2 count=1000
End of Solution
3 Use the vxdmpadm disable command to disable one of the paths shown in
the vxdmpadm iostat output. Note that I/O for that path stops.
Solution
vxdmpadm disable path=path_name
End of Solution
4 Use the vxdmpadm enable command to enable the path that was disabled
in the previous step. Note that I/O for that path resumes.
Solution
vxdmpadm enable path=path_name
End of Solution
5 Use the vxdmpadm disable command to disable the other path shown in
the vxdmpadm iostat output. Note that I/O for that path stops.
Solution
vxdmpadm disable path=path_name
End of Solution
6 Use the vxdmpadm enable command to enable the path that was disabled
in the previous step. Note that I/O for that path resumes.
Solution
vxdmpadm enable path=path_name
End of Solution
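The disable/enable cycle in steps 3 through 6 can be scripted. A dry-run sketch that only prints the commands it would run; the path names are illustrative, and on a live system you would derive them from vxdmpadm getsubpaths and remove the echo prefixes:

```shell
# Illustrative path names; on a live system derive them from
# 'vxdmpadm getsubpaths dmpnodename=...'.
paths="sdc sdo"

# For each path: disable it, pause so you can watch the iostat output,
# then re-enable it (dry run - each command is only printed).
for p in $paths; do
    echo "vxdmpadm disable path=$p"
    echo "sleep 10"
    echo "vxdmpadm enable path=$p"
done
```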
Exercise 5: Managing array policies

1 Display the current I/O policy for the enclosure you are using.
Solution
vxdmpadm getattr enclosure emc_disk iopolicy
The default I/O policy is MinimumQ for the array used in the virtual lab
environment.
End of Solution
2 Change the current I/O policy for the enclosure to stop load-balancing and only
use multipathing for high availability.
Solution
vxdmpadm setattr enclosure emc_disk \
iopolicy=singleactive
End of Solution
3 Display the new I/O policy attribute.
Solution
vxdmpadm getattr enclosure emc_disk iopolicy
End of Solution
4 Reset the DMP I/O statistics counters to zero.
Solution
vxdmpadm iostat reset
End of Solution
5 Next, use the dmpiotest script to generate I/O on the disk used by the appdg
disk group. The dmpiotest script uses the vxbench utility, which is a part
of the VRTSspt package and is installed as a part of the SF installation. Change
to the directory containing the lab scripts and execute the script:
./dmpiotest /app
Solution
cd /student/labs/sf/sf60
./dmpiotest /app
This script creates some test files in the /app directory (if they do not exist)
and starts several invocations of the vxbench program as follows:
/opt/VRTSspt/FS/VxBench/vxbench_rhel5_x86_64 \
-w rand_mixed -i \
iosize=8,iocount=65536,maxfilesize=102400,nreps=100 \
/app/test1 /app/test2 /app/test3 /app/test4 \
/app/test5 &
Note: The script uses a version of the vxbench program specific to
your platform.
End of Solution
6 Display I/O statistics for the DMP node that corresponds to the device used by
appvol. Display statistics every two seconds for eight times. Compare the
output to the output you observed before changing the DMP policy to
singleactive. Note that a single path is now used.
Note: You can use the vxprint -g appdg -htr appvol command to
identify the dmp node name of the device used by appvol.
Solution
vxdmpadm iostat show dmpnodename=emc_disk_dd7 \
interval=2 count=8
End of Solution
7 Change the DMP I/O policy back to its default value (MinimumQ).
Solution
vxdmpadm setattr enclosure emc_disk iopolicy=minimumq
End of Solution
8 Next, use the dmpiotest script to generate I/O on the disk used by the appdg
disk group. The dmpiotest script uses the vxbench utility, which is a part
of the VRTSspt package and is installed as a part of the SF installation. Change
to the directory containing the lab scripts and execute the script:
./dmpiotest /app
Solution
cd /student/labs/sf/sf60
./dmpiotest /app
This script creates some test files in the /app directory (if they do not exist)
and starts several invocations of the vxbench program as follows:
/opt/VRTSspt/FS/VxBench/vxbench_rhel5_x86_64 \
-w rand_mixed -i \
iosize=8,iocount=65536,maxfilesize=102400,nreps=100 \
/app/test1 /app/test2 /app/test3 /app/test4 \
/app/test5 &
Note: The script uses a version of the vxbench program specific to
your platform.
End of Solution
9 Display I/O statistics for the DMP node again. Compare the output to the
output you observed when changing the DMP policy to singleactive. Note that
both paths are now used again.
Solution
vxdmpadm iostat show dmpnodename=emc_disk_dd7 \
interval=2 count=8
End of Solution
10 Unmount /app.
Solution
umount /app
End of Solution
Note: If the unmount of /app fails because the device is busy, it is because
the vxbench commands started by the dmpiotest script are still
running. Either let them complete, or kill each running command
(ps -ef | grep vxbench).
11 Rename the enclosure back to its original name (emc0) using the vxdmpadm
setattr command.
Note: The original name of the enclosure was displayed by the vxdmpadm
listenclosure all command that you used in step 2 of
Exercise 1.
Solution
vxdmpadm setattr enclosure emc_disk name=emc0
End of Solution
12 Destroy the appdg disk group.
Solution
vxdg destroy appdg
End of Solution
Exercise 6: Optional lab: DMP performance tuning templates

Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
1 Display the current I/O policy for the 3PARDATA array.
Solution
vxdmpadm getattr arrayname 3PARDATA iopolicy
The default I/O policy is MinimumQ.
End of Solution
2 Use the vxdmpadm config command to dump the DMP tunable parameters
and attributes that are currently set on your system. Dump the values to
/tmp/dmp_perf_tunables.out.
Solution
vxdmpadm config dump file=/tmp/dmp_perf_tunables.out
End of Solution
3 Use the more command to view the DMP performance tunables and attributes
that were captured in the file.
Solution
more /tmp/dmp_perf_tunables.out
End of Solution
4 Use the vi text editor to change the attribute for the iopolicy on just the
3PARDATA array.
Solution
vi /tmp/dmp_perf_tunables.out
...
Enclosure
serial=24341
arrayname=3PARDATA
arraytype=A/A
iopolicy=singleactive
partitionsize=512
recoveryoption=nothrottle
recoveryoption=timebound iotimeout=300
redundancy=0
dmp_lun_retry_timeout=0
...
End of Solution
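Instead of editing the dump file by hand, the iopolicy line inside the 3PARDATA enclosure block can be changed with a sed address range. A sketch against a trimmed, illustrative sample of the dump format (the real file contains more fields per enclosure):

```shell
# Trimmed, illustrative sample of the dumped DMP configuration.
cat > /tmp/dmp_sample.out <<'EOF'
Enclosure
serial=24341
arrayname=3PARDATA
arraytype=A/A
iopolicy=minimumq
Enclosure
serial=99999
arrayname=EMC
arraytype=A/A
iopolicy=minimumq
EOF

# Rewrite iopolicy only between the 3PARDATA line and the next
# Enclosure header, leaving other arrays untouched.
sed '/arrayname=3PARDATA/,/^Enclosure$/ s/^iopolicy=.*/iopolicy=singleactive/' \
    /tmp/dmp_sample.out
```

The same sed expression, with the -i option, would edit the real dump file in place before it is loaded with vxdmpadm config load.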
5 Use the vxdmpadm config command to load the DMP tunable parameters
and attributes in the /tmp/dmp_perf_tunables.out file. Use the -d
option so that all errors are reported during loading.
Solution
vxdmpadm -d config load file=/tmp/dmp_perf_tunables.out
End of Solution
6 Use the vxdmpadm config show command to verify that the DMP
tunable parameters and attributes in the /tmp/dmp_perf_tunables.out
file have been loaded.
Solution
vxdmpadm config show
End of Solution
7 Display the current I/O policy for the 3PARDATA array. The value should now
show singleactive.
Solution
vxdmpadm getattr arrayname 3PARDATA iopolicy
End of Solution
8 Use the vxdmpadm config reset command to reset the DMP tunable
parameters and attributes to the default settings.
Solution
vxdmpadm config reset
End of Solution
9 Use the vxdmpadm config show command to verify that the DMP
tunable parameters and attributes are now set to the default values.
Solution
vxdmpadm config show
End of Solution
10 Display the current I/O policy for the 3PARDATA array. The value should now
show MinimumQ (the default value).
Solution
vxdmpadm getattr arrayname 3PARDATA iopolicy
End of Solution
End of lab
Lab 3: Resolving Hardware Problems
In this lab, you practice recovering from a variety of hardware failure scenarios,
resulting in disabled disk groups and failed disks. First you recover a temporarily
disabled disk group and then you use a set of interactive lab scripts to investigate
and practice recovery techniques. Each interactive lab script:
Sets up the required volumes
Simulates and describes a failure scenario
Prompts you to fix the problem
This lab contains the following exercises:
Recovering a temporarily disabled disk group
Preparing for disk failure labs
Recovering from temporary disk failure
Recovering from permanent disk failure
Optional lab: Recovering from temporary disk failure - Layered volume
Optional lab: Recovering from permanent disk failure - Layered volume
Optional lab: Replacing physical drives (without hot relocation)
Optional lab: Replacing physical drives (with hot relocation)
Optional lab: Recovering from temporary disk failure with vxattachd
daemon
Optional lab: Exploring spare disk behavior
Optional lab: Using the Support Web Site
Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured and licensed. In addition to this, you also need four external disks to be
used during the labs.
Lab information
In preparation for this lab, you need the following information about your lab
environment.

Shared data disks: emc0_dd7 - emc0_dd12, 3pardata0_49 - 3pardata0_54
Location of lab scripts: /student/labs/sf/sf60

The exercises for this lab start on the next page.
Exercise 1: Recovering a temporarily disabled disk group

This lab section requires two terminal windows to be open.
1 Create a disk group called appdg that contains one disk (emc0_dd7).
Solution
vxdisksetup -i emc0_dd7 (if necessary)
vxdg init appdg appdg01=emc0_dd7
End of Solution
2 Create a 1-GB concatenated volume called appvol in the appdg disk group.
Solution
vxassist -g appdg make appvol 1g
End of Solution
3 Create a Veritas file system on appvol and mount it to /app.
Solution
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app (if required)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
4 Copy the contents of the /etc/default directory to /app and display the
contents of the file system.
Solution
cp -r /etc/default /app
ls -lR /app
End of Solution
5 If you want to observe the error messages displayed in the system log while the
failure is being created, open a second terminal window and use the tail -f
command to view the system log. Exit the output using CTRL-C when you are
satisfied.
Solution
tail -f /var/log/messages
End of Solution
6 Change into the directory containing the faildg_temp.pl script and
execute the script to create a failure in the appdg disk group.
Notes:
The faildg_temp.pl script disables the paths to the disk in the disk
group to simulate a hardware failure. This is just a simulation and not a real
failure; therefore, the operating system will still be able to see the disk after
the failure. The script will prompt you for the disk group name and then it
will create the failure by disabling the paths to the disk, performing some
I/O and then re-enabling the paths.
All lab scripts are located in the /student/labs/sf/sf60 directory.
Solution
/student/labs/sf/sf60/faildg_temp.pl
What is the name of the disk group would you like to
temporarily disable? [appdg]:
Checking to make sure appdg is enabled . . . done.
Creating failure, please be patient
dd: opening `/app/testfile': Input/output error
Finished creating failure!
Note: You will see a dd error because I/O will be stopped as soon as the
failure is recognized.
End of Solution
7 Use the vxdisk -o alldgs list and vxdg list commands to
determine the status of the disk group and the disk.
Solution
vxdisk -o alldgs list
vxdg list
The disk group should show as disabled and the disk status should change to
online dgdisabled.
End of Solution
8 What happened to the file system?
Solution
df -k /app
The file system is also disabled.
End of Solution
9 Assuming that the failure was due to a temporary fiber disconnection and that
the data is still intact, recover the disk group and start the volume using the first
terminal window. Verify the disk and disk group status using the vxdisk -o
alldgs list and vxdg list commands.
Solution
umount /app
vxdg deport appdg
vxdg import appdg
vxdisk -o alldgs list
vxdg list
The disk group should now be enabled and the disk status should change back
to online.
End of Solution
10 Remount the file system and verify that the contents are still there.
Solution
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
ls -lR /app
It is not necessary to run an fsck on the file system.
End of Solution
11 Unmount the file system.
Solution
umount /app
End of Solution
12 Destroy the appdg disk group.
Solution
vxdg destroy appdg
End of Solution
Exercise 2: Preparing for disk failure labs

Overview
The following sections use an interactive script to simulate a variety of disk failure
scenarios. Your goal is to recover from the problem as described in each scenario.
Use your knowledge of VxVM administration, in addition to the VxVM recovery
tools and concepts described in the lesson, to determine which steps to take to
ensure recovery. After you recover the test volumes, the script verifies your
solution and provides you with the result. You succeed when you recover the
volumes without corrupting the data.
For most of the recovery problems, you can use any of the VxVM interfaces: the
command line interface, the Veritas Operations Manager (VOM) Web console, or
the vxdiskadm menu interface. Lab solutions are provided for only one method.
If you have questions about recovery using interfaces not covered in the solutions,
see your instructor.
Setup
Due to the way in which the lab scripts work, it is important to set up your
environment as described in this setup section:
1 Create a disk group named testdg and add three disks (emc0_dd7,
emc0_dd8, and emc0_dd9) to the disk group. Assign the following disk
media names to the disks: testdg01, testdg02, and testdg03.
Solution
vxdg init testdg testdg01=emc0_dd7 testdg02=emc0_dd8 \
testdg03=emc0_dd9
End of Solution
2 In the first terminal window, navigate to the directory that contains the lab
scripts. Note that the lab scripts are located in the
/student/labs/sf/sf60 directory.
Solution
cd /student/labs/sf/sf60
End of Solution
Exercise 3: Recovering from temporary disk failure

In this lab exercise, a temporary disk failure is simulated. Your goal is to recover
all of the redundant and nonredundant volumes that were on the failed drive. The
lab script disk_failures.pl sets up the test volume configuration and
simulates a disk failure. You must then recover and validate the volumes.
Note: The lab scripts are located in the /student/labs/sf/sf60 directory.
1 From the first terminal window (from the directory that contains the lab
scripts), run the script disk_failures.pl, answer the initial configuration
questions, and then select option 1, Exercise 3 - Recovering from temporary
disk failure. Note that the initial configuration questions are only asked the
first time you run the script. Use test as the prefix for disk group and volume
names.
Solution
./disk_failures.pl
Initial Configuration File Check
What prefix should be used for the disk group name and
volume names? [app]: test <CR>
What is the path to the SF 6.0 software? [/student/
software/sf/sf60]: <CR>
What is the path to the SF 6.0 lab scripts? [/student/
labs/sf/sf60]: <CR>
This script can be used to test all or any specific
exercises for the SF 6.0 Disk Failure Labs.
Choose the desired lab from the list below.
1. Exercise 3 - Recovering from temporary disk failure
2. Exercise 4 - Recovering from permanent disk failure
3. Exercise 5 - Optional Lab: Recovering from temporary
disk failure - Layered volume
4. Exercise 6 - Optional Lab: Recovering from permanent
disk failure - Layered volume
Which setup do you wish to run? Enter 1 - 4: 1
End of Solution
This script sets up two volumes (test1 and test2), creates a vxfs file system on
each, and then copies duplicate files to each file system. Both file systems are
then mounted.
test1 with a mirrored layout
test2 with a concatenated layout
2 In a second terminal window, view the failure using the vxdisk -o
alldgs list and vxprint -g testdg -htr commands.
Solution
vxprint -g testdg -htr
vxdisk -o alldgs list
End of Solution
3 Attempt to view the files that were copied to mount points /test1 and
/test2. What did you see?
Solution
ls /test1
ls /test2
Because the test1 volume is mirrored, the files in the /test1 mount point are
still accessible. When trying to view the files in /test2, you should see the
following error:
/test2: I/O error
End of Solution
4 Attempt to recover the volumes.
Note: When performing recovery procedures, run vxprint and vxdisk
list often to see what is changing after issuing recovery commands:
Solution
vxprint -g testdg -htr
vxdisk -o alldgs list
End of Solution
To recover from the temporary failure:
a If you are using enclosure based naming, identify the OS native name of
the disk that has temporarily failed. You will use this OS disk name while
verifying that the operating system recognizes the device.
Solution
vxdisk -e list ebn_of_failed_disk
DEVICE TYPE DISK GROUP STATUS
OS_NATIVE_NAME ATTR
ebn_of_failed_disk auto:cdsdisk testdg online
osn_of_failed_disk lun
End of Solution
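The OS native name can be pulled out of the vxdisk -e list output with standard text tools. Below is a minimal sketch using awk on a captured sample of the output format shown above; the device and LUN names in the sample are made-up placeholders, not real devices.

```shell
# Sample "vxdisk -e list" output captured into a variable; the names
# here are illustrative placeholders only.
sample='DEVICE       TYPE          DISK      GROUP   STATUS  OS_NATIVE_NAME ATTR
emc0_dd7     auto:cdsdisk  testdg01  testdg  online  sdk            lun'

# OS_NATIVE_NAME is the sixth whitespace-separated column; NR > 1 skips
# the header line.
os_name=$(printf '%s\n' "$sample" | awk 'NR > 1 { print $6 }')
echo "$os_name"
```

With a real device you would feed the output of vxdisk -e list for the failed disk through the same awk filter instead of the captured sample.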
b Ensure that the operating system recognizes the device using the
appropriate OS commands.
Solution
partprobe /dev/osn_of_failed_disk
End of Solution
c Verify that the operating system recognizes the device using the
appropriate OS commands.
Solution
fdisk -l /dev/osn_of_failed_disk
End of Solution
d Force the VxVM configuration daemon to reread all of the drives in the
system.
Solution
vxdctl enable
End of Solution
e Reattach the device to the disk media record using the vxreattach
command.
Solution
vxreattach
End of Solution
f Recover the volumes using the vxrecover command.
Solution
vxrecover
End of Solution
g Use the vxvol command to start the nonredundant volume.
Solution
vxvol -g testdg -f start test2
End of Solution
5 Because this is a temporary failure, the files in the test2 volume (and file
system) are still available. Recover the mount point by performing the
following:
a Unmount the /test2 mount point.
Solution
umount /test2
End of Solution
b Perform an fsck on the file system.
Solution
fsck -t vxfs /dev/vx/rdsk/testdg/test2
End of Solution
c Mount the test2 volume to /test2.
Solution
mount -t vxfs /dev/vx/dsk/testdg/test2 /test2
End of Solution
6 Compare the two mount points using the diff command.
Solution
diff /test1 /test2
If the files are identical, diff produces no output for them; only differences
are displayed. You should, however, see a line for the common subdirectories:
Common subdirectories: /test1/lost+found and /test2/lost+found
Note: There is a potential for file system corruption in the test2 volume
since it has no redundancy.
End of Solution
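As a quick illustration of this diff behavior, independent of the lab volumes, the sketch below compares two throwaway directories containing identical files; the directory names are invented for the example.

```shell
# Build two scratch directories with identical files plus a common
# subdirectory, mimicking /test1 and /test2 after recovery.
base=$(mktemp -d)
mkdir -p "$base/t1/lost+found" "$base/t2/lost+found"
echo "same data" > "$base/t1/file1"
echo "same data" > "$base/t2/file1"

# Identical files produce no diff lines; common subdirectories are noted.
result=$(diff "$base/t1" "$base/t2")
echo "$result"
rm -rf "$base"
```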
7 Unmount the file systems and delete the test1 and test2 volumes.
Solution
umount /test1
umount /test2
vxassist -g testdg remove volume test1
vxassist -g testdg remove volume test2
End of Solution
Exercise 4: Recovering from permanent disk failure
sym1
In this lab exercise, a permanent disk failure is simulated. Your goal is to replace
the failed drive and recover the volumes as needed. The lab script
disk_failures.pl sets up the test volume configuration and simulates a disk
failure. You must recover the failure and validate the volumes.
Note: The lab scripts are located at the /student/labs/sf/sf60 directory.
1 In the first terminal window (from the directory that contains the lab scripts),
run the script disk_failures.pl, and select option 2, Exercise 4 -
Recovering from permanent disk failure.
Solution
./disk_failures.pl
This script can be used to test any specific exercise
for the SF 6.0 Disk Failure Labs.
Choose the desired lab from the list below.
1. Exercise 3 - Recovering from temporary disk failure
2. Exercise 4 - Recovering from permanent disk failure
3. Exercise 5 - Optional Lab: Recovering from temporary
disk failure - Layered volume
4. Exercise 6 - Optional Lab: Recovering from permanent
disk failure - Layered volume
Which setup do you wish to run? Enter 1 - 4: 2
End of Solution
This script sets up two volumes (test1 and test2), creates a vxfs file system on
each, and then copies duplicate files to each file system. Both file systems are
then mounted.
test1 with a mirrored layout
test2 with a concatenated layout
2 In a second terminal window, view the failure using the vxdisk -o
alldgs list and vxprint -g testdg -htr commands.
Solution
vxprint -g testdg -htr
vxdisk -o alldgs list
End of Solution
3 Attempt to view the files that were copied to mount points /test1 and
/test2. What did you see?
Solution
ls /test1
ls /test2
Because the test1 volume is mirrored, the files in the /test1 mount point are
still accessible. When trying to view the files in /test2, you should see the
following error:
/test2: I/O error
End of Solution
4 Replace the permanently failed drive with a new disk at another SCSI location.
Then, recover the volumes.
Note: When performing recovery procedures, run vxprint and vxdisk
list often to see what is changing after issuing recovery commands.
To recover from the permanent failure:
a In the second terminal window, initialize a new drive (emc0_d10).
Solution
vxdisksetup -i emc0_d10 (if necessary)
End of Solution
b Attach the failed disk media name (testdg02) to the new drive.
Solution
vxdg -g testdg -k adddisk testdg02=emc0_d10
End of Solution
c Recover the volumes using the vxrecover command.
Solution
vxrecover
End of Solution
d Use the vxvol command to start the nonredundant volume.
Solution
vxvol -g testdg -f start test2
End of Solution
Note: You can also use the vxdiskadm menu interface to correct the failure.
Select Replace a failed or removed disk option and select the desired
drive when prompted.
5 Because this is a permanent failure, the files in the test2 volume (and file
system) are no longer available. Recover the mount point by performing the
following:
a Unmount the /test2 mount point.
Solution
umount /test2
End of Solution
b Attempt to mount the test2 volume to /test2. You will see an error
because the file system has been lost during the recovery.
Solution
mount -t vxfs /dev/vx/dsk/testdg/test2 /test2
UX:vxfs mount: ERROR: V-3-20012: not a valid vxfs file
system
UX:vxfs mount: ERROR: V-3-24996: Unable to get disk
layout version
End of Solution
c Create a new file system on the test2 volume and then mount the test2
volume to /test2.
Solution
mkfs -t vxfs /dev/vx/rdsk/testdg/test2
mount -t vxfs /dev/vx/dsk/testdg/test2 /test2
End of Solution
6 List the contents of /test2. In a real failure scenario, the files in this file
system would need to be restored from a backup.
Solution
ls /test2
lost+found
End of Solution
7 Unmount the file systems and delete the test1 and test2 volumes.
Solution
umount /test1
umount /test2
vxassist -g testdg remove volume test1
vxassist -g testdg remove volume test2
End of Solution
8 When you have completed this exercise, the disk device that was originally
used during the disk failure simulation is left in an online invalid state.
Reinitialize the disk to prepare for later labs.
Solution
vxdisksetup -i accessname
End of Solution
Exercise 5: Optional lab: Recovering from temporary disk failure -
Layered volume
sym1
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
In this optional lab exercise, a temporary disk failure is simulated. Your goal is to
recover all of the volumes that were on the failed drive. The lab script
disk_failures.pl sets up the test volume configuration and simulates a disk
failure. You must recover the failure and validate the volumes.
Note: The lab scripts are located at the /student/labs/sf/sf60 directory.
1 Use the vxdg command with the adddisk option to add a fourth disk
(emc0_d11) called testdg04 to the testdg disk group. If necessary,
initialize a new disk before adding it to the disk group.
Solution
vxdisksetup -i emc0_d11 (if necessary)
vxdg -g testdg adddisk testdg04=emc0_d11
End of Solution
2 From the directory that contains the lab scripts, run the script
disk_failures.pl, and select option 3, Exercise 5 - Optional Lab:
Recovering from temporary disk failure - Layered volume.
Solution
./disk_failures.pl
This script can be used to test any specific exercise
for the SF 6.0 Disk Failure Labs.
Choose the desired lab from the list below.
1. Exercise 3 - Recovering from temporary disk failure
2. Exercise 4 - Recovering from permanent disk failure
3. Exercise 5 - Optional Lab: Recovering from temporary
disk failure - Layered volume
4. Exercise 6 - Optional Lab: Recovering from permanent
disk failure - Layered volume
Which setup do you wish to run? Enter 1 - 4: 3
End of Solution
This script sets up two volumes (test1 and test2), creates a vxfs file system on
each, and then copies duplicate files to each file system. Both file systems are
then mounted.
test1 with a stripe-mirror layout on 4 disks
test2 with a concatenated layout
3 In a second terminal window, view the failure using the vxdisk -o
alldgs list and vxprint -g testdg -htr commands. Notice that
there are two disks that have failed.
Solution
vxprint -g testdg -htr
vxdisk -o alldgs list
End of Solution
4 Attempt to view the files that were copied to mount points /test1 and
/test2. What did you see?
Solution
ls /test1
ls /test2
Because the test1 volume is layered and mirrored, the files in the /test1
mount point are still accessible even though two disks have failed. When trying
to view the files in /test2, you should see the following error:
/test2: I/O error
End of Solution
5 Assume that the failure was temporary. In a second terminal window, attempt
to recover the volumes.
Note: When performing recovery procedures, run vxprint and vxdisk
list often to see what is changing after issuing recovery commands.
Solution
vxprint -g testdg -htr
vxdisk -o alldgs list
End of Solution
To recover from the temporary failure:
a If you are using enclosure based naming, identify the OS native names of
the disks that have temporarily failed. You will use these OS disk names
while verifying that the operating system recognizes the devices.
Solution
vxdisk -e list
DEVICE TYPE DISK GROUP STATUS
OS_NATIVE_NAME ATTR
ebn_of_failed_disk1 auto:cdsdisk testdg online
osn_of_failed_disk1 lun
ebn_of_failed_disk2 auto:cdsdisk testdg online
osn_of_failed_disk2 lun
End of Solution
b Ensure that the operating system recognizes the devices using the
appropriate OS commands.
Solution
partprobe /dev/osn_of_first_failed_disk
partprobe /dev/osn_of_second_failed_disk
End of Solution
c Verify that the operating system recognizes the devices using the
appropriate OS commands.
Solution
fdisk -l /dev/osn_of_first_failed_disk
fdisk -l /dev/osn_of_second_failed_disk
End of Solution
d Force the VxVM configuration daemon to reread all of the drives in the
system.
Solution
vxdctl enable
End of Solution
e Reattach the devices to the disk media records using the vxreattach
command.
Solution
vxreattach
End of Solution
f Recover the volumes using the vxrecover command.
Solution
vxrecover
End of Solution
g Use the vxvol command to start the nonredundant volume.
Solution
vxvol -g testdg -f start test2
End of Solution
6 Because this is a temporary failure, the files in the test2 volume (and file
system) are still available. Recover the mount point by performing the
following:
a Unmount the /test2 mount point.
Solution
umount /test2
End of Solution
b Perform an fsck on the file system.
Solution
fsck -t vxfs /dev/vx/rdsk/testdg/test2
End of Solution
c Mount the test2 volume to /test2.
Solution
mount -t vxfs /dev/vx/dsk/testdg/test2 /test2
End of Solution
7 Compare the two mount points using the diff command.
Solution
diff /test1 /test2
If the files are identical, diff produces no output for them; only differences
are displayed. You should, however, see a line for the common subdirectories:
Common subdirectories: /test1/lost+found and /test2/lost+found
End of Solution
Note: There is a potential for file system corruption in the test2 volume since
it has no redundancy.
8 Unmount the file systems and delete the test1 and test2 volumes.
Solution
umount /test1
umount /test2
vxassist -g testdg remove volume test1
vxassist -g testdg remove volume test2
End of Solution
Exercise 6: Optional lab: Recovering from permanent disk failure -
Layered volume
sym1
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
In this optional lab exercise, a permanent disk failure is simulated. Your goal is to
replace the failed drive and recover the volumes as needed. The lab script
disk_failures.pl sets up the test volume configuration and simulates a disk
failure. You must recover the failure and validate the volumes.
Note: The lab scripts are located at the /student/labs/sf/sf60 directory.
1 From the directory that contains the lab scripts, run the script
disk_failures.pl, and select option 4, Exercise 6 - Optional Lab:
Recovering from permanent disk failure - Layered volume:
Solution
./disk_failures.pl
This script can be used to test any specific exercise
for the SF 6.0 Disk Failure Labs.
Choose the desired lab from the list below.
1. Exercise 3 - Recovering from temporary disk failure
2. Exercise 4 - Recovering from permanent disk failure
3. Exercise 5 - Optional Lab: Recovering from temporary
disk failure - Layered volume
4. Exercise 6 - Optional Lab: Recovering from permanent
disk failure - Layered volume
Which setup do you wish to run? Enter 1 - 4: 4
End of Solution
This script sets up two volumes (test1 and test2), creates a vxfs file system on
each, and then copies duplicate files to each file system. Both file systems are
then mounted.
test1 with a stripe-mirror layout
test2 with a concatenated layout
2 In a second terminal window, view the failure using the vxdisk -o
alldgs list and vxprint -g testdg -htr commands. Notice that
there are two disks that have failed.
Solution
vxprint -g testdg -htr
vxdisk -o alldgs list
End of Solution
3 Attempt to view the files that were copied to mount points /test1 and
/test2. What did you see?
Solution
ls /test1
ls /test2
Because the test1 volume is layered and mirrored, the files in the /test1
mount point are still accessible even though two disks have failed. When trying
to view the files in /test2, you should see the following error:
/test2: I/O error
End of Solution
4 Replace the permanently failed drive with either a new disk at the same SCSI
location or by another disk at another SCSI location. Then, recover the
volumes.
Note: When performing recovery procedures, run vxprint and vxdisk
list often to see what is changing after issuing recovery commands.
To recover from the permanent failure:
a In the second terminal window, initialize the drive that failed. In a real
failure scenario this drive would have been replaced with a new drive.
Solution
vxdisksetup -i accessname
End of Solution
b Attach the failed disk media name to the new drive.
Solution
vxdg -g testdg -k adddisk testdg02=accessname
End of Solution
c Recover the volumes using the vxrecover command.
Solution
vxrecover
End of Solution
d Use the vxvol command to start the nonredundant volume.
Solution
vxvol -g testdg -f start test2
End of Solution
Note: You can also use the vxdiskadm menu interface to correct the failure.
Select Replace a failed or removed disk option and select the desired
drive when prompted.
5 Because this is a permanent failure, the files in the test2 volume (and file
system) are no longer available. Recover the mount point and file system by
performing the following:
a Unmount the /test2 mount point.
Solution
umount /test2
End of Solution
b Create a new file system.
Solution
mkfs -t vxfs /dev/vx/rdsk/testdg/test2
End of Solution
c Mount the test2 volume to /test2 and list the contents. The mount point
should only contain a lost+found directory.
Solution
mount -t vxfs /dev/vx/dsk/testdg/test2 /test2
ls /test2
End of Solution
6 Unmount the file systems and delete the test1 and test2 volumes.
Solution
umount /test1
umount /test2
vxassist -g testdg remove volume test1
vxassist -g testdg remove volume test2
End of Solution
7 Destroy the testdg disk group.
Solution
vxdg destroy testdg
End of Solution
Exercise 7: Optional lab: Replacing physical drives (without hot
relocation)
sym1
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
Note: If you have not already done so, destroy the testdg disk group before you
start this section.
1 Create a disk group called appdg that contains four disks (emc0_dd7 -
emc0_d10).
Solution
vxdg init appdg appdg01=emc0_dd7 appdg02=emc0_dd8 \
appdg03=emc0_dd9 appdg04=emc0_d10
End of Solution
2 Create a 100-MB mirrored volume called appvol in the appdg disk group, add
a VxFS file system to the volume, and mount the file system at the mount point
/app.
Solution
vxassist -g appdg make appvol 100m layout=mirror
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
3 If the vxrelocd daemon is running, stop it using ps and kill, in order to
stop hot relocation from taking place. Verify that the vxrelocd processes are
killed before you continue.
Note: If you have executed the disk_failures.pl script in the previous
lab sections, the vxrelocd daemon may already be killed.
Solution
ps -ef | grep vxrelocd
kill -9 pid (if necessary)
ps -ef | grep vxrelocd
End of Solution
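The find-the-PID, kill, then verify pattern above is generic shell. Here it is sketched against a harmless background sleep instead of the real vxrelocd daemon, so the effect is easy to observe.

```shell
# Start a throwaway background process standing in for vxrelocd.
sleep 300 &
pid=$!

# Kill it, then reap it so it does not linger as a zombie.
kill -9 "$pid"
wait "$pid" 2>/dev/null || true

# kill -0 probes for process existence without sending a signal.
if kill -0 "$pid" 2>/dev/null; then
  status=running
else
  status=stopped
fi
echo "$status"
```

Note that ps -ef | grep vxrelocd also matches the grep process itself, which is why the lab has you rerun the ps check after the kill to confirm the daemon is really gone.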
4 Next, simulate disk failure by writing over the private region using the
overwritepr.pl script.
Note: The lab scripts are located at the /student/labs/sf/sf60
directory.
While using the script, substitute the appropriate disk device name for one of
the disks in use by appvol, for example enter emc0_dd7.
Solution
cd /student/labs/sf/sf60
./overwritepr.pl
Enter a device used in appvol when prompted.
End of Solution
5 When the error occurs, view the status of the disks from the command line.
Solution
vxdisk -o alldgs list
The physical device is no longer associated with the disk media name and the
disk group.
End of Solution
6 View the status of the volume from the command line.
Solution
vxprint -g appdg -htr
The plex displays a status of DISABLED NODEVICE.
End of Solution
7 Recover the disk by replacing the private and public regions on the disk. In the
command, substitute the appropriate disk device name, for example use
emc0_dd7.
Solution
vxdisksetup -i accessname
Note: This step is only necessary when you replace the failed disk with a
brand new one. If it were a temporary failure, this step would not be
necessary.
End of Solution
8 Bring the disk back under VxVM control.
Solution
vxdg -g appdg -k adddisk dm_name=accessname
where dm_name is the disk media name of the failed disk and accessname
is the enclosure-based name of the disk device used to replace the failed one.
End of Solution
9 Check the status of the disks and the volume. The disk should now be a part of
the disk group, but the volume still has a failure.
Solution
vxdisk -o alldgs list
vxprint -g appdg -htr
End of Solution
10 From the command line, recover the volume.
Solution
vxrecover
End of Solution
11 Check the status of the disks and the volume to ensure that the disk and volume
are fully recovered.
Solution
vxdisk -o alldgs list
vxprint -g appdg -htr
End of Solution
12 Unmount the /app file system and remove the appvol volume.
Solution
umount /app
vxassist -g appdg remove volume appvol
End of Solution
Exercise 8: Optional lab: Replacing physical drives (with hot
relocation)
sym1
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
1 Verify that the relocation daemon (vxrelocd) is running. If not, start it as
follows:
Solution
ps -ef |grep vxrelocd
vxrelocd root & (if necessary)
ps -ef |grep vxrelocd
End of Solution
2 Create a 100-MB mirrored volume called appvol in the appdg disk group, add
a VxFS file system to the volume, and mount the file system at the mount point
/app.
Solution
vxassist -g appdg make appvol 100m layout=mirror
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
3 Next, simulate disk failure by writing over the private region using the
overwritepr.pl script.
Note: The lab scripts are located at the /student/labs/sf/sf60
directory.
While using the script, substitute the appropriate disk device name for one of
the disks in use by appvol, for example enter emc0_dd7.
Solution
cd /student/labs/sf/sf60
./overwritepr.pl
Enter a device used in appvol when prompted.
End of Solution
4 When the error occurs, view the status of the disks and volume from the
command line using the vxdisk list and vxprint commands. Allow
sufficient time for the vxrelocd daemon to relocate the failed device.
Solution
vxdisk -o alldgs list
vxprint -g appdg -htr
The physical device is no longer associated with the disk media name and the
disk group. The failed device in the volume should be relocated to a different
device within the disk group.
End of Solution
5 Recover the disk by replacing the private and public regions on the disk. In the
command, substitute the appropriate disk device name, for example use
emc0_dd7.
Solution
vxdisksetup -i accessname
Note: This step is only necessary when you replace the failed disk with a
brand new one. If it were a temporary failure, this step would not be
necessary.
End of Solution
6 Bring the disk back under VxVM control.
Solution
vxdg -g appdg -k adddisk dm_name=accessname
where dm_name is the disk media name of the failed disk and accessname
is the enclosure-based name of the disk device used to replace the failed one.
End of Solution
7 Check the status of the disks and the volume. The failed disk should now be a
part of the disk group, but the plex that used to be on the failed disk is now
relocated to another disk in the disk group.
Solution
vxdisk -o alldgs list
vxprint -g appdg -htr
End of Solution
8 Use the vxunreloc command to return the plex back to the original device.
Solution
vxunreloc -g appdg appdg01
vxprint -g appdg -htr
Note: This solution assumes that the failed and then recovered disk was
appdg01. Depending on which disk you failed in step 3, you may need
to use a different disk media name with the vxunreloc command.
End of Solution
9 Unmount the /app file system and remove the appvol volume.
Solution
umount /app
vxassist -g appdg remove volume appvol
End of Solution
Exercise 9: Optional lab: Recovering from temporary disk failure
with vxattachd daemon
sym1
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
1 Enable the vxattachd daemon if it is not already running.
Solution
ps -ef | grep vxattachd
vxattachd & (if necessary)
ps -ef | grep vxattachd
End of Solution
2 Create a 100-MB mirrored volume called appvol in the appdg disk group, add
a VxFS file system to the volume, and mount the file system at the mount point
/app.
Solution
vxassist -g appdg make appvol 100m layout=mirror
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
3 Determine the first device used by the appvol volume. This device should be
emc0_dd7. Use the vxdisk list command to determine all paths to the
device.
Solution
vxprint -g appdg -htr
vxdisk list emc0_dd7
End of Solution
4 Use the vxdmpadm -f disable command to disable all paths to the
device.
Solution
vxdmpadm -f disable path=path1,path2
vxdisk list emc0_dd7
End of Solution
5 Use the dd command to write to the /app directory to produce a failure.
Solution
dd if=/dev/zero of=/app/test1 bs=1 count=10
End of Solution
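The same dd invocation can be tried against a scratch file to see exactly what it does: it writes ten single-byte blocks of zeroes. The scratch path below is created with mktemp and is not part of the lab setup.

```shell
# Write ten 1-byte blocks of zeroes, as in the lab's dd command, but
# into a scratch file instead of the /app file system.
scratch=$(mktemp)
dd if=/dev/zero of="$scratch" bs=1 count=10 2>/dev/null
size=$(wc -c < "$scratch")
echo "$size bytes written"
rm -f "$scratch"
```

In the lab, the point of the write is not its size but that it forces I/O through the disabled paths, which surfaces the failure.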
6 Use the vxdmpadm enable command to enable all paths to the failed
device. Monitor the vxdisk list and vxprint outputs until the
vxattachd daemon senses that the device is back online and reattaches the
device and recovers the failed plexes.
Solution
vxdmpadm enable path=path1,path2
vxdisk -o alldgs list
vxprint -g appdg -htr
End of Solution
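Waiting for vxattachd amounts to polling a status command until the recovery shows up or you give up. A minimal sketch of that loop, using a stand-in flag file rather than real VxVM state:

```shell
# A background job "recovers" after one second by creating a flag file;
# in the lab, the condition would instead be vxdisk/vxprint output
# showing the device reattached and the plexes recovered.
flag="$(mktemp -d)/recovered"
( sleep 1; : > "$flag" ) &

tries=0
until [ -e "$flag" ] || [ "$tries" -ge 10 ]; do
  sleep 1              # real loop: rerun vxdisk -o alldgs list here
  tries=$((tries + 1))
done

if [ -e "$flag" ]; then
  result=recovered
else
  result=timeout
fi
echo "$result"
```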
7 Unmount the /app file system and remove the appvol volume.
Solution
umount /app
vxassist -g appdg remove volume appvol
End of Solution
Exercise 10: Optional lab: Exploring spare disk behavior
sym1
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
1 You should have four disks (appdg01 through appdg04) in the disk group
appdg. Set all disks to have the spare flag on.
Solution
vxedit -g appdg set spare=on appdg01
vxedit -g appdg set spare=on appdg02
vxedit -g appdg set spare=on appdg03
vxedit -g appdg set spare=on appdg04
End of Solution
2 Create a 100-MB mirrored volume called sparevol.
Is the volume successfully created? Why or why not?
Solution
vxassist -g appdg make sparevol 100m layout=mirror
No, the volume is not created, and you receive the error:
...Cannot allocate space for size block volume ...
The volume is not created because all disks are set as spares, and vxassist
does not find enough free space to create the volume.
End of Solution
3 Attempt to create the same volume again, but this time specify two disks to
use. Do not clear any spare flags on the disks.
Solution
vxassist -g appdg make sparevol 100m layout=mirror \
appdg03 appdg04
Notice that VxVM overrides its default and applies the two spare disks to the
volume because the two disks were specified by the administrator.
End of Solution
4 Remove the sparevol volume.
Solution
vxassist -g appdg remove volume sparevol
End of Solution
5 Verify that the relocation daemon (vxrelocd) is running. If not, start it.
Solution
ps -ef |grep vxrelocd
vxrelocd root & (if necessary)
ps -ef |grep vxrelocd
End of Solution
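A scripted form of this check might look like the following sketch; the bracketed `[v]` in the pattern prevents grep from matching its own entry in the process listing:

```shell
# Report whether the relocation daemon is running; the '[v]xrelocd' pattern
# stops grep from matching the grep process itself
if ps -ef | grep '[v]xrelocd' > /dev/null; then
    echo "vxrelocd is running"
else
    echo "vxrelocd is not running"
fi
```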
6 Remove the spare flags from three of the four disks.
Solution
vxedit -g appdg set spare=off appdg01
vxedit -g appdg set spare=off appdg02
vxedit -g appdg set spare=off appdg03
End of Solution
7 Create a 100-MB concatenated mirrored volume called sparevol.
Solution
vxassist -g appdg make sparevol 100m layout=mirror
End of Solution
8 Save the output of vxprint -g appdg -htr to a file.
Solution
vxprint -g appdg -htr > /tmp/savedvxprint
End of Solution
9 Display the properties of the sparevol volume. In the table, record the device
and disk media name of the disks used in this volume. You are going to
simulate disk failure on one of the disks. Decide which disk you are going to
fail.
For example, the volume sparevol uses appdg01 and appdg02:
         Device Name   Disk Media Name
Disk 1   emc0_dd7      appdg01
Disk 2   emc0_dd8      appdg02
10 Next, simulate disk failure by writing over the private region using the
overwritepr.pl script. In the standard virtual lab environment, this script
is located in the /student/labs/sf/sf60 directory.
While using the script, substitute the appropriate disk device name for one of
the disks in use by sparevol, for example enter emc0_dd7.
Solution
cd /student/labs/sf/sf60
./overwritepr.pl
Enter a device used in sparevol when prompted.
End of Solution
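The effect of the script can be sketched safely against a scratch file instead of a real disk device; the actual overwritepr.pl zeroes the VxVM private region at the start of the device (the file name and sizes here are illustrative only):

```shell
# Create a 10-MB scratch "disk", then zero its first 1 MB in place
# (2048 x 512-byte blocks), roughly what overwriting a private region
# does to a real device
truncate -s 10M /tmp/fakedisk
dd if=/dev/zero of=/tmp/fakedisk bs=512 count=2048 conv=notrunc 2>/dev/null
ls -l /tmp/fakedisk
```

The conv=notrunc option is what makes dd overwrite in place rather than truncate the target.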
11 Run vxprint -g appdg -htr and compare the output to the vxprint
output that you saved earlier. What has occurred?
Note: You may need to wait a minute or two for the hot relocation to
complete.
Solution
Hot relocation has taken place. The failed disk has a status of NODEVICE.
VxVM has relocated the mirror of the failed disk onto the designated spare
disk.
End of Solution
12 Run vxdisk -o alldgs list. What do you notice?
Solution
This disk is displayed as a failed disk.
End of Solution
13 In VOM, view the status of the disks and the volume.
Solution
https://vom.example.com:14161/
From Manage > Servers > Hosts > sym1.example.com > Administer > All
Disks, view the status of the disks.
In VOM, the disk is shown as disconnected.
From Manage > Servers > Hosts > sym1.example.com > Administer > All
Volumes > sparevol, view the devices used.
VxVM has relocated the mirror of the failed disk onto the designated spare
disk.
End of Solution
14 Recover the disk by replacing the private and public regions on the disk. In the
command, substitute the appropriate disk device name, for example use
emc0_dd7.
Solution
vxdisksetup -i accessname
End of Solution
15 Bring the disk back under VxVM control and into the disk group to replace the
failed disk media name.
Solution
vxdg -g appdg -k adddisk dm_name=accessname
End of Solution
16 Undo hot relocation for the disk.
Solution
vxunreloc -g appdg dm_name
where dm_name is the disk media name of the failed and replaced disk.
End of Solution
17 Wait until the volume is fully recovered before continuing. Check to ensure
that the disk and the volume are fully recovered.
Solution
vxdisk -o alldgs list
vxprint -g appdg -htr
Note: The vxprint command shows the subdisk with the UR tag.
End of Solution
18 Rename the unrelocated subdisk to its original name.
Solution
vxedit -g appdg rename appdg01-UR-001 appdg01-01
End of Solution
19 Remove the sparevol volume.
Solution
vxassist -g appdg remove volume sparevol
End of Solution
20 Remove the spare flag from the last disk.
Solution
vxedit -g appdg set spare=off appdg04
End of Solution
Exercise 11: Optional lab: Using the Support Web Site
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
Note: If you do not have access to the Internet from the classroom systems, skip
this optional lab section.
1 Access the latest information on Veritas Storage Foundation from the
Symantec Support Web site.
Solution
Go to the Symantec Technical Support Web site at
http://support.symantec.com.
Under the Business Support section select Technical Product Support.
From the list of Symantec products select Storage Foundation for UNIX/
Linux.
On the next window, click the Late Breaking News (LBNs) link. Click the
Product guides link under the appropriate version to find the LBN in the
Release Notes section. Click back to the previous screen when finished.
This will show you any information about the Storage Foundation product
that has changed since the product release.
End of Solution
2 Which Linux platform is supported for Storage Foundation 6.0?
Solution
Select Documentation, Man Pages, and Hardware Compatibility Lists
(HCL)
In the Linux column, find the 6.0 row and then select the Compatibility
lists link.
Select the relevant document.
Download and open the PDF file.
Under the Servers section you will see the minimum OS requirements for
the different server types.
End of Solution
3 Where would you locate the latest patch for Veritas Storage Foundation and
High Availability?
Solution
Go to the Symantec Operations Readiness Tools (SORT) Web site at
http://sort.symantec.com/ or click the patches link under the
Business Support section from http://support.symantec.com.
Select the Downloads tab.
Click the Patches link.
Select the Product, Version and Platform.
Available patches are displayed for download.
End of Solution
End of lab
Lab 4: Using Full-Copy Volume Snapshots
In this lab, you practice creating and managing full-copy volume snapshots. This
includes enabling FastResync and using full-copy volume snapshots for off-host processing.
This lab contains the following exercises:
Full-sized instant snapshots
Off-host processing using split-mirror volume snapshots
Optional lab: Performing snapshots using VOM
Optional lab: Traditional volume snapshots
Prerequisite setup
To perform this lab, you need two lab systems with Storage Foundation pre-
installed, configured and licensed. In addition to this, you also need four external
disks to be used during the labs.
Lab information
In preparation for this lab, you need the following information about your lab
environment.
The exercises for this lab start on the next page.
Object                      Value
root password               veritas
Host names of lab systems   sym1, sym2, vom
Shared data disks           emc0_dd7 - emc0_d12,
                            3pardata0_49 - 3pardata0_60
Exercise 1: Full-sized instant snapshots
1 If it does not already exist, create a disk group called appdg with four disks
(emc0_dd7 - emc0_d10).
Solution
vxdg init appdg appdg01=emc0_dd7 appdg02=emc0_dd8 \
appdg03=emc0_dd9 appdg04=emc0_d10
End of Solution
2 Create a 1GB concatenated volume called appvol. Create a Veritas file system
on the volume and mount it to /app.
Solution
vxassist -g appdg make appvol 1g
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
3 Add some data to the volume by copying some files from /etc and verify that
the files have been added.
Solution
cp /etc/*.conf /app
ls -al /app
End of Solution
4 Enable FastResync on the appvol volume.
Solution
vxsnap -g appdg prepare appvol
End of Solution
5 View the appvol volume using the vxprint command. You should now see a
DCO associated with the appvol volume.
Solution
vxprint -g appdg -htr
End of Solution
6 Create a full sized instant snapshot of the appvol volume. Check the original
volume size using vxprint and create the snapshot volume (called snapvol)
using exactly the same size.
Solution
vxprint -g appdg appvol
vxassist -g appdg make snapvol 2097152
End of Solution
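The length 2097152 passed to vxassist above is expressed in 512-byte sectors, the default unit for vxprint lengths; a quick sanity check confirms it is exactly the 1 GB size of appvol:

```shell
# 2097152 sectors x 512 bytes = 1 GB
echo $(( 2097152 * 512 / 1024 / 1024 / 1024 ))   # prints 1
```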
7 Prepare the snapshot volume and create the snapshot.
Solution
vxsnap -g appdg prepare snapvol
vxsnap -g appdg make source=appvol/snapvol=snapvol
End of Solution
8 Create a mount point (/snap) and mount the snapshot volume. View the files
in the snapshot volume. They should be the same as the files in the appvol
volume.
Solution
mkdir /snap
mount -t vxfs /dev/vx/dsk/appdg/snapvol /snap
ls -l /app /snap
End of Solution
9 Unmount /snap and /app.
Solution
umount /snap
umount /app
End of Solution
10 Delete the full sized instant snapshot of the appvol volume.
Solution
vxedit -g appdg -rf rm snapvol
End of Solution
11 Delete the appvol volume.
Solution
vxedit -g appdg -rf rm appvol
End of Solution
Exercise 2: Off-host processing using split-mirror volume snapshots
Phase 1: Create, Split, and Deport
Note: In this lab section, the sym2 server will be used for off-host processing. If it
is not already powered on, do so at this time.
Note: You will be using the appdg disk group that was created at the start of this
lab. If for any reason it was destroyed, recreate it before continuing.
1 Create a 500-MB concatenated volume, appvol, using a single disk. Create a
Veritas file system on the volume and mount the file system on the mount point
/app.
Solution
vxassist -g appdg make appvol 500m
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
2 Add data to the file system using the following command:
echo Pre-snapshot for app > /app/presnap_on_app
and verify that the data has been added.
Solution
echo Pre-snapshot for app > /app/presnap_on_app
ls /app
End of Solution
3 Enable FastResync for the volume appvol. Can you identify what has changed?
Solution
vxsnap -g appdg prepare appvol
vxprint -g appdg -htr
A DCO log has been added to the volume.
End of Solution
4 Add a mirror to the volume for use as the snapshot. Observe the volume layout.
What is the state of the newly added mirror after synchronization completes?
Solution
vxsnap -g appdg addmir appvol
vxprint -g appdg -htr
The status of the newly added mirror should change from ENABLED/
SNAPATT to ENABLED/SNAPDONE after the synchronization is complete.
End of Solution
5 Create a third mirror break-off snapshot named snapvol using the new mirror
you just added. Use the vxsnap -g appdg list command to observe the
snapshots in the disk group.
Solution
vxsnap -g appdg make \
source=appvol/newvol=snapvol/nmirrors=1
vxsnap -g appdg list
The snapvol volume is listed as a break-off snapshot of appvol.
End of Solution
6 Split the snapshot volume into a separate disk group from the original disk
group called snapdg.
Solution
vxdg split appdg snapdg snapvol
End of Solution
7 Verify that the snapdg disk group exists and contains snapvol. First, display the
disk groups on the system. You should see the new snapdg disk group
displayed. Then, view the volume information for the snapdg disk group.
Solution
vxdg list
vxprint -g snapdg -htr
End of Solution
8 Deport the disk group that contains the snapshot volume.
Solution
vxdg deport snapdg
End of Solution
9 View the disk groups on the system, and then run the vxdisk command to
view the status of the disks on the system.
Solution
vxdg list
The snapdg disk group is not displayed because it is deported.
vxdisk -o alldgs list
This shows that the disk used for snapvol is deported. (snapdg) in
parentheses shows that the snapdg disk group is deported.
End of Solution
10 Add additional data to the original volume using the following command:
echo Post-snapshot for app > /app/postsnap_on_app
and verify that the data has been added.
Solution
echo Post-snapshot for app > /app/postsnap_on_app
ls /app
End of Solution
Phase 2: Import, Process, and Deport
11 On the off-host processing (OHP) host (sym2) where the backup or processing
is to be performed, import the disk group that contains the snapshot volume.
View the status of the volume in the snapdg disk group.
Note: You may need to rescan the disks using the vxdctl enable
command on the OHP host so that the host detects the changes.
Solution
vxdg import snapdg
vxprint -g snapdg -htr
End of Solution
12 To perform off-host processing, you must first mount the file system on the off-
host processing host. Use the mount point /snap.
Solution
mkdir /snap
mount -t vxfs /dev/vx/dsk/snapdg/snapvol /snap
End of Solution
13 View and compare the contents of both file systems.
Solution
ls -l /app
ls -l /snap
There is one more file in /app than in /snap. The file that was written
after the snapshot operation (postsnap_on_app) exists in /app but not
in /snap. The file that was written before the snapshot operation
(presnap_on_app) exists in both file systems.
End of Solution
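The same comparison can be reproduced with scratch directories (paths hypothetical, standing in for /app and /snap); only the post-snapshot file shows up in the diff:

```shell
# Build two directories mimicking the original and snapshot file systems,
# then diff their listings; postsnap_on_app exists only on the original side
rm -rf /tmp/appdemo /tmp/snapdemo
mkdir -p /tmp/appdemo /tmp/snapdemo
touch /tmp/appdemo/presnap_on_app /tmp/snapdemo/presnap_on_app
touch /tmp/appdemo/postsnap_on_app
diff <(ls /tmp/appdemo) <(ls /tmp/snapdemo) || true
```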
14 Check if you can write to the original file system during off-host processing by
creating a new file in the file system as follows:
echo Data in original of app > /app/data_on_app
ls -l /app
15 After completing off-host processing, you are ready to reattach the snapshot
volume with the original volume. Unmount the snapshot volume on the off-
host processing host.
Solution
umount /snap
End of Solution
16 On the OHP host, deport the disk group that contains the snapshot volume.
Solution
vxdg deport snapdg
End of Solution
Phase 3: Import, Join, and Resynchronize
17 Reimport the disk group that contains the snapshot volume:
Solution
vxdg import snapdg
vxdg list
End of Solution
18 Rejoin the disk group that contains the snapshot volume to the disk group that
contains the original volume.
Solution
vxdg join snapdg appdg
vxprint -g appdg -htr
End of Solution
19 At this point you should have the original volume and its snapshot in the same
disk group but as separate volumes. There is still no synchronization between
the original volume and the snapshot volume. To observe this, mount the
snapshot volume again and check its contents. You would not need to
perform this step during a normal off-host processing procedure.
a Mount snapvol back on the /snap mount point. Create the mount point if
necessary.
Solution
mkdir /snap (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/snapvol /snap
End of Solution
b View and compare the contents of both file systems.
Solution
ls -l /app /snap
End of Solution
c Unmount the /snap file system.
Solution
umount /snap
End of Solution
20 Reattach the plexes of the snapshot volume to the original volume and
resynchronize their contents.
Solution
vxsnap -g appdg reattach snapvol source=appvol
End of Solution
21 Remove the snapshot mirror.
Solution
vxsnap -g appdg rmmir appvol
End of Solution
22 Remove the DCO log and disable FastResync on appvol using the vxsnap
unprepare command.
Solution
vxsnap -g appdg unprepare -f appvol
End of Solution
23 If you have enough time to perform the optional lab exercises, skip this step.
Otherwise, unmount /app and destroy the appdg disk group.
Solution
umount /app
vxdg destroy appdg
End of Solution
Exercise 3: Optional lab: Performing snapshots using VOM
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
1 If the Web console is already open skip this step; otherwise, open a Web
browser and enter the URL for the VOM MS Web console on the address line.
You may receive an error about the Web site's certificate; click the link to
continue to this Web site. Log in using the root username and password for
your server.
Solution
https://vom.example.com:14161/
End of Solution
2 View the appvol volume in the appdg disk group on the
sym1.example.com system.
Solution
a Manage>Servers>Hosts>sym1.example.com>Administer>All
Volumes.
b View the volume in the table.
Normally the disks should be in Free (Uninitialized) state unless they have
already been initialized for Volume manager use in which case the state
would be Free (Initialized).
End of Solution
3 Create a snapshot of the appvol volume. Name the new snapshot volume snapvol.
Solution
a Select Snapshot Manager from the Tasks list.
b In the Snapshot manager screen, select Create a new Volume Snapshot
and click Next.
c Select the appdg disk group and appvol if they are not already selected.
Click Next.
d Let volume manager decide what disks to use. Click Next.
e Keep the default settings for FastResync (DCO) Mirrors, Region Size
(KB) and Enable DRL. Click Next.
f Select Full sized for the Snapshot Type. Click Next.
g Select Create a new Volume for Snapshot and check the Synchronize
checkbox. Click Next.
h Let volume manager decide what disks to use. Click Next.
i Enter snapvol for the volume name keep all other default values. Click
Next.
j Do not enter any Volume Attributes. Click Next, Finish and Ok.
The snapvol volume should now be visible in the table.
End of Solution
4 Mount the snapshot to /snap directory.
Solution
a Select the snapvol volume.
b Select Actions>Mount File System.
c Enter /snap for the Mount point name and uncheck the Add to file
system table checkbox. Click Next, Finish and OK.
End of Solution
Note: The mounted snapshot could be verified using the ls command. For
this lab, continue by removing the snapshot.
5 Unmount the snapshot.
Solution
a Select the snapvol volume.
b Select Actions>Unmount File System.
c Select Yes then click Next, Finish and OK.
End of Solution
6 Delete the snapshot.
Solution
a Select the snapvol volume.
b Select Actions>Delete, click Next, Finish and OK.
End of Solution
7 Remove the DCO log and disable FastResync on appvol.
Solution
a Select Snapshot Manager from the Tasks list.
b In the Snapshot manager screen, select Other operations and click Next.
c Select Disable FastResync on a Volume. Click Next.
d Select the appdg disk group and appvol if they are not already selected.
Click Next.
e Click Finish and Ok.
End of Solution
8 If you have enough time to perform the optional lab exercises, skip this step.
Otherwise, unmount /app and destroy the appdg disk group.
Solution
a Select the appvol volume.
b Select Actions>Unmount File System.
c Select Yes then click Next, Finish and OK.
d Select the appdg link from the All Volumes table.
e Select Destroy from the Disk Group Tasks.
f From the Destroy Disk Group window, click Next, Finish and OK.
End of Solution
Exercise 4: Optional lab: Traditional volume snapshots
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
1 If you already have an appdg disk group with an appvol volume, skip this step.
Otherwise, prepare your environment by performing the following substeps:
a Create a disk group called appdg with four disks (emc0_dd7 - emc0_d10).
Solution
vxdg init appdg appdg01=emc0_dd7 appdg02=emc0_dd8\
appdg03=emc0_dd9 appdg04=emc0_d10
End of Solution
b Create a 500m concatenated volume called appvol. Create a Veritas file
system on the volume and mount it to /app.
Solution
vxassist -g appdg make appvol 500m
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
2 Add some data to the volume by copying some files from /etc and verify that
the files have been added.
Solution
cp /etc/*.conf /app
ls -al /app
End of Solution
3 Enable FastResync on the appvol volume for a traditional volume snapshot.
Solution
vxassist -g appdg addlog appvol logtype=dco
vxvol -g appdg set fastresync=on appvol
End of Solution
4 View the appvol volume using the vxprint command. You should now see a
DCO associated with the appvol volume.
Solution
vxprint -g appdg -htr
End of Solution
5 Start the creation of a traditional snapshot using the vxassist command.
Creating a traditional snapshot is a two-step process; this is the first of the
two steps. The -b option causes the command to run in the background.
Solution
vxassist -g appdg -b snapstart appvol
End of Solution
6 Use the vxtask command to determine when the snapstart operation is
complete.
Solution
vxtask list
vxtask monitor
End of Solution
7 View the appvol volume using the vxprint command. You should now see a
DCO associated with the appvol volume. You should also see that the plex and
DCO have been mirrored.
Solution
vxprint -g appdg -htr
End of Solution
8 Break the traditional snapshot away using the vxassist command. Provide
the snapshot volume name as snapvol.
Solution
vxassist -g appdg snapshot appvol snapvol
End of Solution
9 View the appvol and snapvol volumes using the vxprint command. You
should now see a plex and DCO for each volume.
Solution
vxprint -g appdg -htr
End of Solution
10 Create a mount point (/snap) and mount the snapvol volume. View the files
in the snapshot volume. They should be the same as the files in the appvol
volume.
Solution
mkdir /snap
mount -t vxfs /dev/vx/dsk/appdg/snapvol /snap
ls -l /app /snap
End of Solution
11 Overwrite the nsswitch.conf file in the appvol volume using the
following command:
echo snapshot test > /app/nsswitch.conf
and verify that the file is now different in the appvol volume and the snapshot.
Solution
echo "snapshot test" > /app/nsswitch.conf
cat /snap/nsswitch.conf
cat /app/nsswitch.conf
End of Solution
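The reason the two copies now differ is that `>` truncates and rewrites the target file; a scratch-file sketch of the same effect (file name hypothetical):

```shell
# '>' replaces the previous contents entirely, so the live volume loses the
# original file while the snapshot keeps the old copy
echo "original contents" > /tmp/nsswitch.demo
echo "snapshot test" > /tmp/nsswitch.demo
cat /tmp/nsswitch.demo   # prints: snapshot test
```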
12 Unmount the snapshot volume.
Solution
umount /snap
End of Solution
13 Unmount the appvol volume.
Solution
umount /app
End of Solution
14 Reassociate the snapshot volume back into the appvol volume but resync it
from the replica so that the original nsswitch.conf file gets restored.
Solution
Thick devices:
vxassist -g appdg -o resyncfromreplica snapback snapvol
Thin devices:
vxassist -g appdg -o force,resyncfromreplica snapback \
snapvol
Note: The thin devices example is provided here just in case you did not
follow the lab steps as written, or if all that you have available is a thin
provisioning capable array.
End of Solution
15 Mount the appvol volume and verify that the nsswitch.conf file was
restored to the original.
Solution
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
cat /app/nsswitch.conf
End of Solution
16 Break the traditional snapshot away again using the vxassist command.
Provide the snapshot volume name as snapvol.
Solution
vxassist -g appdg snapshot appvol snapvol
End of Solution
17 Disassociate the snapshot volume from the appvol volume.
Solution
vxassist -g appdg snapclear snapvol
End of Solution
18 Attempt to reattach the snapshot volume. What is the error?
Solution
vxassist -g appdg snapback snapvol
VxVM vxassist ERROR V-5-1-4521 Volume snapvol is not a
snapshot volume.
You are not able to reattach a snapshot once it has been cleared from the
original volume.
End of Solution
19 Remove the DCO log that is associated with the snapshot volume.
Solution
vxassist -g appdg remove log snapvol logtype=dco
End of Solution
20 Delete the snapshot volume.
Solution
vxassist -g appdg remove volume snapvol
End of Solution
21 Remove the DCO log that is associated with the appvol volume and set
FastResync to off.
Solution
vxassist -g appdg remove log appvol logtype=dco
vxvol -g appdg set fastresync=off appvol
End of Solution
22 Unmount /app and destroy the appdg disk group.
Solution
umount /app
vxdg destroy appdg
End of Solution
End of lab
Lab 5: Using Copy-on-Write SF Snapshots
In this lab, you practice creating and managing space-optimized snapshots and
storage checkpoints. First you create a volume with a file system and mount the
file system. The volume is then used to practice the space-optimized snapshot. The
file system is used to create storage checkpoints. Optional lab exercises are also
included to view storage checkpoint behavior.
This lab contains the following exercises:
Using space-optimized instant volume snapshots
Restoring a file system using storage checkpoints
Optional lab: Storage checkpoint behavior
Optional lab: Using checkpoints in VOM
Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured and licensed. In addition to this, you also need at least four disks to be
used in a disk group.
Lab information
In preparation for this lab, you need the following information about your lab
environment.
The exercises for this lab start on the next page.
Object Value
root password veritas
Host names of lab systems sym1, vom
Shared data disks: emc0_dd7 - emc0_d12
3pardata0_49 - 3pardata0_60
Exercise 1: Using space-optimized instant volume snapshots
sym1
1 Create a disk group called appdg with four disks (emc0_dd7 - emc0_d10).
Solution
vxdg init appdg appdg01=emc0_dd7 appdg02=emc0_dd8 \
appdg03=emc0_dd9 appdg04=emc0_d10
End of Solution
2 Create a 1G concatenated volume called appvol. Create a VxFS file system on
the volume and mount it to /app. Enable FastResync on the appvol volume.
Solution
vxassist -g appdg make appvol 1g
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
vxsnap -g appdg prepare appvol
End of Solution
3 Select a disk in the appdg disk group that is not used by the original volume
appvol, and create a 50-MB volume on this disk to be used as the cache
volume. Name the cache volume cachevol. Create a cache object called
mycache on the cache volume. Ensure that the cache object is started.
Solution
vxprint -g appdg -htr
vxassist -g appdg make cachevol 50m appdg##
where appdg## is the disk media name of the disk that is unused in the appdg
disk group.
vxmake -g appdg cache mycache cachevolname=cachevol \
autogrow=on
vxcache -g appdg start mycache
End of Solution
4 Observe how the cache object and the cache volume are displayed in the disk
group.
Solution
vxprint -g appdg -htr
End of Solution
5 Add data to the /app file system using the following command:
echo New data before snap am for app > \
/app/presnapam_on_appvol
and verify that the data is written.
Solution
echo New data before snap am for app > \
/app/presnapam_on_appvol
ls -l /app
End of Solution
6 Create a space-optimized instant snapshot of the appvol volume, named
snapvolam, using the cache object mycache.
Solution
vxsnap -g appdg make \
source=appvol/newvol=snapvolam/cache=mycache
End of Solution
7 Display information about the snapshot volumes using the vxprint,
vxsnap list and vxsnap print commands from the command line.
Solution
vxprint -g appdg -htr
vxsnap -g appdg print
vxsnap -g appdg list
End of Solution
8 Verify which snapshots are associated to the cache object.
Solution
vxcache -g appdg listvol mycache
End of Solution
9 Mount the space optimized snapshot volume snapvolam to the /snapam
directory.
Solution
mkdir /snapam
mount -t vxfs /dev/vx/dsk/appdg/snapvolam /snapam
End of Solution
10 Observe the contents of the /snapam directory and compare it to the contents
of the /app directory.
Solution
ls -l /snapam /app
The contents of both file systems should be exactly the same at this point.
End of Solution
11 Add data to the /app file system using the following command:
echo New data before snap pm for app > \
/app/presnappm_on_appvol
and verify that the data is written.
Solution
echo New data before snap pm for app > \
/app/presnappm_on_appvol
ls -l /app
End of Solution
12 Create a second space-optimized instant snapshot of the appvol volume, named
snapvolpm, using the same cache object mycache.
Solution
vxsnap -g appdg make \
source=appvol/newvol=snapvolpm/cache=mycache
End of Solution
13 Verify which snapshots are associated to the cache object.
Solution
vxcache -g appdg listvol mycache
End of Solution
14 Mount the space optimized snapshot volume snapvolpm to the /snappm
directory.
Solution
mkdir /snappm
mount -t vxfs /dev/vx/dsk/appdg/snapvolpm /snappm
End of Solution
15 Observe the contents of the original file system and the two space optimized
snapshots.
Solution
ls -l /app /snapam /snappm
The second space-optimized snapshot, snapvolpm, contains all of the data from
the original file system.
End of Solution
16 Make the following changes on the file systems:
a Remove the data you had on the original file system prior to starting
Exercise 1: Using space-optimized instant volume snapshots. If you have
followed the lab steps, you need to remove the
presnapam_on_appvol and presnappm_on_appvol files from
the /app file system.
Solution
rm /app/presnapam_on_appvol
rm /app/presnappm_on_appvol
End of Solution
b Add new data to the space optimized snapshot volumes using the following
commands:
echo New data on snapam > /snapam/data_on_snapam
echo New data on snappm > /snappm/data_on_snappm
17 Observe the contents of the original file system and the two space optimized
snapshots.
Solution
ls -l /app /snapam /snappm
End of Solution
18 Assume that you have decided to use the contents of the second space
optimized snapshot (snapvolpm) as the final version of the original file system.
Restore the original file system using the second space optimized snapshot.
Note that you will have to unmount the original file system to make this
change. Mount the original file system back to /app directory when the
restore operation completes.
Solution
umount /app
vxsnap -g appdg restore appvol source=snapvolpm
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
19 Observe the contents of the original file system and the two space optimized
snapshots.
Solution
ls -l /app /snapam /snappm
Note that the contents of the second space optimized snapshot and the original
file system are now the same, whereas the contents of the first space optimized
snapshot remain unchanged.
End of Solution
20 Refresh the first space optimized snapshot, snapvolam. Note that you will need
to unmount the first space optimized snapshot to make this change. Mount the
snapvolam volume again after its contents are refreshed.
Solution
umount /snapam
vxsnap -g appdg refresh snapvolam source=appvol
mount -t vxfs /dev/vx/dsk/appdg/snapvolam /snapam
End of Solution
21 Observe the contents of the original file system and the two space optimized
snapshots.
Solution
ls -l /app /snapam /snappm
Note that the contents should all be the same now.
End of Solution
22 Unmount the two space optimized snapshots and dissociate them from the
original volume.
Solution
umount /snapam
umount /snappm
vxsnap -g appdg dis snapvolam
vxsnap -g appdg dis snapvolpm
End of Solution
23 Remove the space optimized snapshot volumes.
Note: If you want to use the vxassist remove volume command to
delete the volume from the command line, you first need to delete the
DCO log. Alternatively you can use the vxedit -g appdg -rf
rm volume_name command to remove the volume together with the
associated DCO log.
Solution
vxassist -g appdg remove log snapvolam logtype=dco
vxassist -g appdg remove volume snapvolam
vxassist -g appdg remove log snapvolpm logtype=dco
vxassist -g appdg remove volume snapvolpm
Alternatively,
vxedit -g appdg -rf rm snapvolam
vxedit -g appdg -rf rm snapvolpm
End of Solution
24 Remove the cache object with its associated cache volume.
Solution
vxedit -g appdg -rf rm mycache
End of Solution
25 Unmount the /app file system and remove the original volume, appvol.
Note: If you want to use the vxassist remove volume command to
delete the volume from the command line, you first need to delete the
DCO log. Alternatively you can use the vxedit -g diskgroup
-rf rm volume_name command to remove the volume together
with the associated DCO log.
Solution
umount /app
vxassist -g appdg -f remove log appvol logtype=dco
vxassist -g appdg remove volume appvol
Alternatively,
vxedit -g appdg -rf rm appvol
End of Solution
Exercise 2: Restoring a file system using storage checkpoints
sym1
In the beginning of this section you should have an appdg disk group with four
unused disks in it.
1 Create a concatenated 1500m volume called appvol in the appdg disk group.
Solution
vxassist -g appdg make appvol 1500m
End of Solution
2 Create a VxFS file system on the volume.
Solution
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
End of Solution
3 Make three mount points: /app, /ckptam, and /ckptpm.
Solution
mkdir /app (if necessary)
mkdir /ckptam
mkdir /ckptpm
End of Solution
4 Mount the file system on /app.
Solution
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
5 Write a file of size 1M named 8am in the original /app file system.
Solution
dd if=/dev/zero of=/app/8am bs=1024k count=1
End of Solution
6 Create a storage checkpoint named thu_9am on /app. Note the output.
Solution
fsckptadm -v create thu_9am /app
End of Solution
7 Mount the thu_9am storage checkpoint on the mount point /ckptam.
Solution
mount -t vxfs -o ckpt=thu_9am \
/dev/vx/dsk/appdg/appvol:thu_9am /ckptam
End of Solution
8 Write some more files in the original file system on /app, and synchronize the
file system using the following commands:
dd if=/dev/zero of=/app/2pm bs=1024k count=5
dd if=/dev/zero of=/app/3pm bs=1024k count=5
sync;sync
9 Create a second storage checkpoint, called thu_4pm, on /app. Note the
output.
Solution
fsckptadm -v create thu_4pm /app
End of Solution
10 Mount the second storage checkpoint on the mount point /ckptpm.
Solution
mount -t vxfs -o ckpt=thu_4pm \
/dev/vx/dsk/appdg/appvol:thu_4pm /ckptpm
End of Solution
11 Write some more files in the original file system on /app, and synchronize the
file system using the following commands:
dd if=/dev/zero of=/app/5pm bs=1024k count=6
dd if=/dev/zero of=/app/6pm bs=1024k count=6
sync;sync
12 View the checkpoints and the original file system.
Solution
ls -l /app /ckptam /ckptpm
End of Solution
13 To prepare to restore from a checkpoint, unmount the original file system and
both storage checkpoints.
Solution
umount /ckptam
umount /ckptpm
umount /app
End of Solution
14 Restore the file system to the thu_4pm storage checkpoint.
Solution
fsckpt_restore -l /dev/vx/dsk/appdg/appvol thu_4pm
Restore the filesystem from storage checkpoint thu_4pm?
(ynq) y <CR>
End of Solution
15 Run the fsckpt_restore command again. Note the output.
Solution
The output shows that the former UNNAMED root fileset was removed, and
that the second checkpoint (thu_4pm) is now the primary fileset. The thu_4pm
fileset is now the fileset that will be mounted by default. The first checkpoint,
thu_9am, still exists, because it was taken earlier than the second checkpoint.
When you roll back to a checkpoint, earlier checkpoints still exist, while any
checkpoints taken later than thu_4pm would have been lost.
End of Solution
Press Ctrl+D to exit the fsckpt_restore command.
16 Mount the appvol volume on /app.
Solution
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
17 Use the fsckptadm command to list all checkpoints still on /app.
Solution
fsckptadm list /app
The output shows one checkpoint still exists (thu_9am).
End of Solution
18 Use the fsckptadm command to remove all checkpoints still on /app.
Solution
fsckptadm remove thu_9am /app
End of Solution
19 Use the fsckptadm command to list all checkpoints still on /app.
Solution
fsckptadm list /app
The output shows no checkpoints exist.
End of Solution
20 Unmount /app and then remount with the checkpoint visibility option
read-write (-o ckptautomnt=rw).
Solution
umount /app
mount -t vxfs -o ckptautomnt=rw \
/dev/vx/dsk/appdg/appvol /app
End of Solution
21 Create a storage checkpoint named fri_8am on /app. Note the output.
Solution
fsckptadm -v create fri_8am /app
End of Solution
22 List the contents of the fri_8am storage checkpoint.
Solution
ls -al /app/.checkpoint/fri_8am
The fri_8am checkpoint was automatically mounted.
End of Solution
23 Use the fsckptadm command to list all checkpoints on /app.
Solution
fsckptadm list /app
End of Solution
24 Use the fsckptadm command to remove the fri_8am checkpoint on /app.
Solution
fsckptadm remove fri_8am /app
End of Solution
25 Unmount /app and destroy the appdg disk group.
Solution
umount /app
vxdg destroy appdg
End of Solution
Exercise 3: Optional lab: Storage checkpoint behavior
sym1
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
In this exercise, you perform and analyze four types of file system operations:
A file to be deleted (1k.to_delete)
A file to be replaced by (new) content (1k.to_replace)
A file to be enlarged (1k5.to_append)
A file to be written by databases (10m.db_io; the file remains at the same
position with the same size, but some blocks within it are replaced)
1 Create a disk group named appdg with four disks (emc0_dd7 - emc0_d10).
Solution
vxdg init appdg appdg01=emc0_dd7 appdg02=emc0_dd8 \
appdg03=emc0_dd9 appdg04=emc0_d10
End of Solution
2 Create a 128-MB mirrored volume with a log. Name the volume appvol.
Mount the volume at /app.
Solution
vxassist -g appdg make appvol 128m layout=mirror,log
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
3 Add these four new files to the volume and view the files:
1K named /app/1k.to_delete
1K named /app/1k.to_replace
1.5K (1536 bytes) named /app/1k5.to_append
10M named /app/10m.db_io
Solution
dd if=/dev/zero of=/app/1k.to_delete bs=1024 count=1
dd if=/dev/zero of=/app/1k.to_replace bs=1024 count=1
dd if=/dev/zero of=/app/1k5.to_append bs=1536 count=1
dd if=/dev/zero of=/app/10m.db_io bs=1024k count=10
ls -l /app
End of Solution
4 Remount /app and run ncheck.
Solution
mount -t vxfs -o remount /dev/vx/dsk/appdg/appvol /app
ncheck -t vxfs -o sector= /dev/vx/rdsk/appdg/appvol
End of Solution
5 Create a storage checkpoint for /app named ckpt.
Solution
fsckptadm create ckpt /app
End of Solution
6 Delete the file 1k.to_delete.
Solution
rm /app/1k.to_delete
End of Solution
7 Create a new 1K file named 1k.to_replace.
Solution
dd if=/dev/zero of=/app/1k.to_replace bs=1024 count=1
End of Solution
8 Copy the 1k5.to_append file to /tmp.
Solution
cp /app/1k5.to_append /tmp
End of Solution
9 Append the 1k5.to_append file in /tmp to the original
1k5.to_append file in /app.
Solution
cat /tmp/1k5.to_append >> /app/1k5.to_append
End of Solution
10 Use the following Perl command to generate database-like I/O (modifying a
block within a database file).
The second line opens the file for read/write access without recreating it
or simply appending new data.
The third line creates a variable containing 8 KB of "x" characters.
The next line positions the file pointer at an 8 KB offset from the
beginning of the file.
The following line writes the new 8 KB block at this position.
perl -e '
> open(FH,"+< /app/10m.db_io") || die;
> $Block="x" x 8192;
> sysseek(FH,8192,0);
> syswrite(FH,$Block,8192,0);
> close(FH);'
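The same in-place write can also be sketched with plain dd. This is an illustration against a throwaway file created with mktemp, not a lab step: conv=notrunc preserves the file size, and bs=8192 with seek=1 places the write at byte offset 8192, matching the sysseek call in the Perl script.

```shell
# Database-style in-place I/O with dd (illustration only; the lab uses
# the Perl script above). Works on any file system, no VxFS required.
demo=$(mktemp)
dd if=/dev/zero of="$demo" bs=1024k count=10 2>/dev/null    # 10-MB file
head -c 8192 /dev/zero | tr '\0' 'x' > /tmp/block8k         # 8 KB of "x"
# Overwrite the second 8-KB block in place; conv=notrunc keeps the size.
dd if=/tmp/block8k of="$demo" bs=8192 seek=1 conv=notrunc 2>/dev/null
wc -c < "$demo"    # → 10485760 (size unchanged)
```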
11 Remount /app and run ncheck. Mount the checkpoint to /ckpt and
compare the contents with the original file system contents.
Solution
mount -t vxfs -o remount /dev/vx/dsk/appdg/appvol /app
ncheck -t vxfs -o sector= /dev/vx/rdsk/appdg/appvol
mkdir /ckpt
mount -t vxfs -o ckpt=ckpt \
/dev/vx/dsk/appdg/appvol:ckpt /ckpt
ls -l /ckpt
...
STRUCTURAL 1 4 1 - 0/22-0/29 <inode_alloc_unit>
...
STRUCTURAL 1 6 - - 0/30-0/31 <current_usage_tbl>
STRUCTURAL 1 7 - 39 0/64-0/65 <object_loc_tbl>
STRUCTURAL 1 8 - 40 0/80-0/1103 <device_config>
STRUCTURAL 1 9 - 41 0/1104-0/3151 <intent_log>
STRUCTURAL 1 11 - - 0/66-0/67 <fs_allocation_policy>
STRUCTURAL 1 32 - - 0/68-0/69 <history_log>
STRUCTURAL 1 33 - - 0/3152-0/5199 <blockref_count_queue>
STRUCTURAL 1 34 - - 0/6662-0/6663 <device_label>
STRUCTURAL 1 34 - - 0/0-0/17 <device_label>
STRUCTURAL 1 35 - 3 0/7728-0/7743 <fileset_header>
STRUCTURAL 1 35 - 3 0/6656-0/6659 <fileset_header>
...
STRUCTURAL 1 39 - 7 0/6660-0/6661 <object_loc_tbl>
STRUCTURAL 1 40 - 8 0/6704-0/7727 <device_config>
STRUCTURAL 1 41 - 9 0/1104-0/3151 <intent_log>
STRUCTURAL 1 64 999 - 0/70-0/77 <inode_alloc_unit>
STRUCTURAL 1 65 999 97 0/5200-0/5215 <inode_list>
STRUCTURAL 1 69 999 - 0/5216-0/5231 <bsd_quota>
STRUCTURAL 1 70 999 - 0/5232-0/5247 <bsd_quota>
STRUCTURAL 1 71 - - 0/78-0/79 <state_alloc_bitmap>
STRUCTURAL 1 72 - - 0/5248-0/5249 <extent_au_summary>
STRUCTURAL 1 73 - 105 0/5280-0/5327 <extent_map>
STRUCTURAL 1 73 - 105 0/5264-0/5279 <extent_map>
STRUCTURAL 1 75 1000 - 0/5352-0/5359 <inode_alloc_unit>
STRUCTURAL 1 76 1000 77 0/7744-0/7759 <inode_list>
STRUCTURAL 1 77 1000 76 0/7744-0/7759 <inode_list>
STRUCTURAL 1 82 1000 - 0/7760-0/7775 <bsd_quota>
STRUCTURAL 1 83 1000 - 0/7776-0/7791 <bsd_quota>
STRUCTURAL 1 97 999 65 0/5200-0/5215 <inode_list>
STRUCTURAL 1 105 - 73 0/5280-0/5327 <extent_map>
STRUCTURAL 1 105 - 73 0/5264-0/5279 <extent_map>
UNNAMED 999 5 - - 0/5262-0/5263 /1k.to_replace
UNNAMED 999 6 - - 0/7776-0/7777 /1k5.to_append
UNNAMED 999 6 - - 0/5256-0/5259 /1k5.to_append
UNNAMED 999 7 - - 0/8192-0/28671 /10m.db_io
ckpt 1000 4 - - 0/5252-0/5253 /1k.to_delete
ckpt 1000 5 - - 0/5254-0/5255 /1k.to_replace
ckpt 1000 6 - - 0/7760-0/7761 /1k5.to_append
ckpt 1000 7 - - 0/7792-0/7807 /10m.db_io
...
ls -l /app
Examination of storage checkpoint behavior
The following information is an analysis of the previous output from ncheck:
1k.to_delete
Same data blocks (5252-5253) mapped to CKPT.
1k.to_replace
Old data blocks (5254-5255) mapped, not copied, to CKPT. New data blocks
(5262-5263) were written to a new location.
1k5.to_append
Block mapping for 1k5.to_append:
Before checkpointing
UNNAMED: 5256 5257 5258 5259
After checkpointing and appending data
UNNAMED: 5256 5257 5258 5259 7776 7777
CKPT: 7760 7761
Contents of the checkpoint (ls -l /ckpt):
-rw-r--r-- 1 root root 10485760 Nov 16 08:22 10m.db_io
-rw-r--r-- 1 root root     1536 Nov 16 08:21 1k5.to_append
-rw-r--r-- 1 root root     1024 Nov 16 08:21 1k.to_delete
-rw-r--r-- 1 root root     1024 Nov 16 08:21 1k.to_replace
drwxr-xr-x 2 root root       96 Nov 16 08:19 lost+found
Contents of the original file system (ls -l /app):
-rw-r--r-- 1 root root 10485760 Nov 16 08:22 10m.db_io
-rw-r--r-- 1 root root     3072 Nov 16 08:26 1k5.to_append
-rw-r--r-- 1 root root     1024 Nov 16 08:25 1k.to_replace
drwxr-xr-x 2 root root       96 Nov 16 08:19 lost+found
To keep the UNNAMED fileset, which is normally the active one, as contiguous
as possible, a copy-before-write of the middle block is performed. Otherwise,
simple address mapping would have sufficed and no copy would have been
necessary.
Note: Blocks 5256-5257 are mapped to both UNNAMED and CKPT. This is not
shown in the output of ncheck.
10m.db_io
The data file for UNNAMED remains at the same position (...............................).
Note: These files are fragmented because the required space was not
preallocated in one extent.
The new blocks are written to UNNAMED, and therefore the old data must be
copied to ...................... (8K) before the new blocks are written. Otherwise
copy-before-write would be unnecessary in favor of simple address mapping.
Note: In all cases, inode and directory information is copied before the write.
End of Solution
12 If you have enough time to perform the optional lab exercises, skip this step.
Otherwise, unmount the checkpoint and the original file system and destroy the
appdg disk group.
Solution
umount /ckpt
umount /app
vxdg destroy appdg
End of Solution
Exercise 4: Optional lab: Using checkpoints in VOM
vom
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
Note: The following steps assume that you are already logged into the VOM
console on the vom server. If not, log into the VOM console as the root
user before proceeding with this optional lab.
1 If you completed the previous optional lab, you should have an appvol volume
with a VxFS file system mounted to /app. The file system has one checkpoint
(ckpt) mounted to /ckpt. Verify that the checkpoint is visible on the /app
file system on the VOM console.
Solution
a Manage>Servers>Hosts>sym1.example.com>Administer>File
Systems.
b Click the /app link in the table.
c Click the Checkpoints tab; the checkpoint is visible and mounted to
/ckpt.
d Click the Details tab.
End of Solution
2 Create a new storage checkpoint (sat_12pm) for the /app file system.
Solution
a Select the /app file system.
b Select Actions>Create Storage Checkpoint.
c Enter sat_12pm for the Checkpoint name and check the Removable and
Mount checkboxes. Click Next.
d Enter /sat_12pm for the Mount point name and leave the other
checkboxes as the default values. Click Next, Finish and OK.
End of Solution
3 Verify that the checkpoint is visible on the /app file system.
Solution
Click the Checkpoints tab; the checkpoint is visible and mounted to
/sat_12pm.
End of Solution
4 Delete all checkpoints, unmount /app and destroy the appdg disk group.
Solution
a Select all checkpoints.
b Select Actions>Unmount Storage Checkpoint. Select Yes, click Next,
Finish and OK.
c Click the Details tab.
d Select the /app file system.
e Select Actions>Unmount File System. Select Yes, click Next, Finish and
OK.
f Select sym1.example.com Summary>All Disk Groups.
g Select appdg, then select Actions>Destroy. Click Next, Finish and OK.
End of Solution
End of lab
Lab 6: Using Advanced VxFS Features
In this lab, you perform tasks to manage VxFS file systems. First you create a
volume with a file system and mount the file system. The file system is then used
to practice file and directory compression, deduplication and FileSnap.
This lab contains the following exercises:
Compressing files and directories with VxFS
Deduplicating VxFS data
Using the FileSnap feature
Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured and licensed. In addition to this, you also need at least four disks to be
used in a disk group.
Lab information
In preparation for this lab, you need the following information about your lab
environment.
The exercises for this lab start on the next page.
Object Value
root password veritas
Host name of the lab system sym1
Shared data disks: emc0_dd7 - emc0_d12
3pardata0_49 - 3pardata0_60
Exercise 1: Compressing files and directories with VxFS
sym1
1 Create a disk group called appdg with four disks (emc0_dd7 - emc0_d10).
Solution
vxdg init appdg appdg01=emc0_dd7 appdg02=emc0_dd8 \
appdg03=emc0_dd9 appdg04=emc0_d10
End of Solution
2 Create a 1-GB concatenated volume called appvol. Create a VxFS file system
on the volume and mount it to /app.
Solution
vxassist -g appdg make appvol 1g
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
3 Make a file in the /app directory using the /etc/termcap file. Use the
cat command to copy multiple instances of the termcap file into the new
file. Then create a copy of the file for use in checking the contents after
compression.
Solution
cp /etc/termcap /app
cd /app
cat termcap termcap termcap termcap termcap termcap \
termcap > file1
cp file1 file1.save
End of Solution
4 View the new file before compression using the ls, du -sk and fsmap -p
commands.
Solution
Ensure that you are in the /app directory when you run the following
commands:
ls -ls
du -sk *
fsmap -p file1
End of Solution
5 Use the vxcompress command to compress the file1 file. Then view the
degree of compression using the -L option.
Solution
vxcompress file1
vxcompress -L file1
End of Solution
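For intuition about why file1 compresses so well — it is seven concatenated copies of the same termcap data — the effect of repetition can be shown with ordinary gzip. This is only an analogy for the ratio reported by vxcompress -L, run here against a throwaway file, not a lab step:

```shell
# Repetition makes data highly compressible: seven copies of the same
# 4-KB block shrink to a tiny fraction of their size under gzip.
src=$(mktemp)
head -c 4096 /dev/zero | tr '\0' 'a' > "$src"
cat "$src" "$src" "$src" "$src" "$src" "$src" "$src" > /tmp/demo_file1
wc -c < /tmp/demo_file1                # → 28672
gzip -c /tmp/demo_file1 | wc -c        # only a few dozen bytes
```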
6 Use the fsmap -p command to view the compressed file. Note that all the
data extents now show that they are compressed (-C).
Solution
fsmap -p file1
End of Solution
7 Check the compression of the entire file system using the fsadm -S
compressed command.
Solution
/opt/VRTS/bin/fsadm -S compressed /app
End of Solution
8 Compare the compressed file to the saved file using the cmp and ls
commands. The cmp command should show no difference, but the ls
command will show a size difference.
Solution
cmp file1 file1.save
ls -ls
End of Solution
9 Use the dd command to write to the middle of the compressed file so that it
will be partially compressed. Again, use the /etc/termcap file as the input.
Solution
dd if=/etc/termcap of=/app/file1 bs=64k count=1 \
seek=50 conv=notrunc
End of Solution
10 View the degree of compression using the vxcompress command.
Solution
vxcompress -L file1
Note that the degree of compression has been reduced.
End of Solution
11 Use the fsmap command to view the compressed file again. Note that one of
the extents no longer shows that it is compressed.
Solution
fsmap -p file1
End of Solution
12 Compare the compressed file to the saved file. The two files should now be
different.
Solution
cmp file1 file1.save
End of Solution
13 Return the file to normal by uncompressing the file using the vxcompress
-u command. Use the ls, du -sk and fsmap -p commands to verify that
the file is no longer compressed.
Solution
vxcompress -u file1
ls -ls
du -sk *
fsmap -p file1
End of Solution
14 Unmount /app and then remove the appvol volume from the appdg disk
group.
Solution
cd /
umount /app
vxassist -g appdg remove volume appvol
End of Solution
Exercise 2: Deduplicating VxFS data
sym1
1 Create a 2-GB concatenated volume called appvol. Create a VxFS file system
on the volume with the block size of 4096 and mount it to /app.
Solution
vxassist -g appdg make appvol 2g
mkfs -t vxfs -o bsize=4096 /dev/vx/rdsk/appdg/appvol
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
2 Create a directory in /app called testdata and copy ~500MB (copy the
/opt directory and its contents) of data into the directory.
Solution
mkdir /app/testdata
cp -r /opt/* /app/testdata
df -k /app
End of Solution
3 Make a second copy of the same data in a new directory called copy.
Solution
mkdir /app/copy
cp -r /opt/* /app/copy
df -k /app
End of Solution
Note: You now have two exact copies of the same data in two different
directories.
4 View the shared files information in the /app directory using the fsadm -S
shared command.
Solution
/opt/VRTS/bin/fsadm -S shared /app
The Used and Logical_Size fields are the same, and since deduplication has
not yet been performed, the Space_Saved value is zero.
End of Solution
5 Enable deduplication on /app using the fsdedupadm command. Use the
-c chunk_size option to set the deduplication chunk size to the file system
block size set when creating the file system. Note that the chunk size is
specified in bytes.
Solution
fsdedupadm enable -c 4096 /app
End of Solution
6 Use the fsdedupadm command with the list option to view the
deduplication settings.
Solution
fsdedupadm list /app
End of Solution
7 Use the fsdedupadm command to start deduplication on /app.
Solution
fsdedupadm start /app
End of Solution
8 Check the status of the deduplication on /app. Repeat checking the status
until the deduplication is complete. The first change in status may take a few
minutes to appear in the output. Note the space savings during the
deduplication process.
Solution
fsdedupadm status /app
End of Solution
9 View shared files information using the fsadm command. Note that the USED
(KB) space has decreased by the amount shown in the Space_Saved
(KB) field.
Solution
/opt/VRTS/bin/fsadm -S shared /app
End of Solution
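The observation in step 9 can be checked with simple arithmetic: the fields reported by fsadm -S shared satisfy Used = Logical_Size - Space_Saved. The figures below are hypothetical, chosen only to mirror this lab's two identical 500-MB copies; they are not actual command output:

```shell
# Hypothetical figures: two identical 500 MB copies, fully deduplicated.
logical_kb=1024000   # Logical_Size: what the files claim to occupy
saved_kb=512000      # Space_Saved: blocks reclaimed by sharing duplicates
used_kb=$((logical_kb - saved_kb))
echo "$used_kb"      # prints 512000: the physical space actually consumed
```

In other words, with every duplicate block shared, the physical usage drops to roughly the size of a single copy of the data.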
10 Set a deduplication schedule using the fsdedupadm command so that it runs
the deduplication at midnight every other day.
Solution
fsdedupadm setschedule "0 */2" /app
End of Solution
Note: For more information on setting deduplication schedules refer to the
fsdedupadm man page.
11 View the newly created schedule using the fsdedupadm list command.
Solution
fsdedupadm list /app
End of Solution
12 Unmount /app and then remove the appvol volume from the appdg disk
group.
Note: If the file system unmount fails with a "device is busy" warning,
monitor the deduplication process until it completes and try again.
Solution
cd /
umount /app
vxassist -g appdg remove volume appvol
End of Solution
Exercise 3: Using the FileSnap feature (performed on sym1)
1 Create a 1-GB concatenated volume called appvol. Create a VxFS file system
on the volume and mount it to /app.
Solution
vxassist -g appdg make appvol 1g
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
2 Enable lazy copy-on-write for the newly created file system.
Solution
vxtunefs -s -o lazy_copyonwrite=1 /app
End of Solution
3 Make a file called file1 in the /app directory by copying the
/etc/termcap file.
Solution
cp /etc/termcap /app/file1
End of Solution
4 Observe the size of shared and private data in the file, as well as the
space savings in the file system, using the fsmap and fsadm commands.
Solution
fsmap -cH /app/file1
/opt/VRTS/bin/fsadm -H -S shared /app
End of Solution
5 Use the vxfilesnap command to make a copy of the file1 file to the
file2 file in the same directory.
Solution
vxfilesnap -i -p /app/file1 /app/file2
End of Solution
6 Observe size of shared and private data in the file as well as the space savings
in the file system using the fsmap and fsadm commands.
Solution
fsmap -cH /app/file1
/opt/VRTS/bin/fsadm -H -S shared /app
End of Solution
Note: For more information about FileSnap refer to the Veritas Storage
Foundation Administrator's Guide.
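To see the copy-on-write behavior in action, you can overwrite part of the snapshot and re-check the extent map while the file system is still mounted (that is, before step 7). This sketch reuses only commands already introduced in this lab; treat it as an optional exploration rather than a required step:

```
# Overwrite the first 64 KB of file2; only that region needs private blocks.
dd if=/dev/zero of=/app/file2 bs=64k count=1 conv=notrunc
fsmap -cH /app/file2
/opt/VRTS/bin/fsadm -H -S shared /app
```

The overwritten extent should now be reported as private rather than shared, and the file-system space savings should drop accordingly.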
7 Unmount /app and then remove the appvol volume from the appdg disk
group and destroy the appdg disk group.
Solution
umount /app
vxassist -g appdg remove volume appvol
vxdg destroy appdg
End of Solution
End of lab
Lab 7: Using Site Awareness with Mirroring
In this lab, you configure your system for site awareness. During the lab, you work
on two systems sharing disks.
This lab contains the following exercises:
- Configuring site awareness
- Analyzing the volume read policy
- Optional lab: Analyzing the impact of disk failure in a site-consistent
  environment
- Optional lab: A manual fire drill operation with remote mirroring
Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation
pre-installed, configured, and licensed. Before starting this lab, you should
have at least four shared disks free to be used in a disk group, and the
Storage Foundation Enterprise
Lab information
In preparation for this lab, you need the following information about your lab
environment.
Object                                   Value
root password                            veritas
Host name of the main lab system         sym1
Host name of the system sharing disks    sym2
Shared data disks                        emc0_dd7 - emc0_d12,
                                         3pardata0_49 - 3pardata0_60
Site1 name                               siteA
Site2 name                               siteB
Exercise 1: Configuring site awareness
Perform steps 1 and 2 on sym1 and steps 3 and 4 on sym2. The remaining steps
of this exercise are performed on sym1.
1 Verify that the site awareness feature has been enabled. If not, use the
vxkeyless set SFENT command to add the SF Enterprise license.
Solution
vxlicrep | grep -i site
Site Awareness = Enabled
End of Solution
2 Assign the site name siteA to the host.
Solution
vxdctl list | grep siteid
vxdctl set site=siteA
vxdctl list | grep siteid
End of Solution
3 On sym2, verify that the site awareness feature has been enabled. If not,
use the vxkeyless set SFENT command to add the SF Enterprise license.
Solution
vxlicrep | grep -i site
Site Awareness = Enabled
End of Solution
4 Assign the site name siteB to the host.
Solution
vxdctl list | grep siteid
vxdctl set site=siteB
vxdctl list | grep siteid
End of Solution
5 Initialize the four disks that you will use for this lab. Assign two disks to siteA
and two disks to siteB. Display disk tags to observe the assignments.
Solution
vxdisksetup -i emc0_dd7 (if necessary)
vxdisksetup -i emc0_dd8 (if necessary)
vxdisksetup -i emc0_dd9 (if necessary)
vxdisksetup -i emc0_d10 (if necessary)
vxdisk settag site=siteA emc0_dd7
vxdisk settag site=siteA emc0_dd8
vxdisk settag site=siteB emc0_dd9
vxdisk settag site=siteB emc0_d10
vxdisk listtag
End of Solution
6 Create a disk group called appdg with four disks. Add the sites to the disk
group configuration.
Solution
vxdg init appdg appdg01=emc0_dd7 appdg02=emc0_dd8 \
appdg03=emc0_dd9 appdg04=emc0_d10
vxdg -g appdg addsite siteA
vxdg -g appdg addsite siteB
End of Solution
7 Display disk group information using the vxdg list appdg and
vxprint -g appdg -htr commands. What do you observe? Is the disk
group site-consistent?
Solution
vxdg list appdg
vxprint -g appdg -htr
The siteconsistent flag is not set. Therefore, the disk group is not site-
consistent. However, the disk group includes site records and is site-aware.
End of Solution
8 Create a 100-MB volume called appvol in the appdg disk group. Use the
vxprint command to display the volume layout. What do you observe?
Solution
vxassist -g appdg make appvol 100m
vxprint -g appdg -htr
vxprint -g appdg -m appvol | grep site
siteconsistent = off
allsites = on
The volume is automatically mirrored across all sites, the allsites attribute is set
by default but the siteconsistent attribute is turned off.
End of Solution
9 Delete the appvol volume in the appdg disk group.
Solution
vxedit -g appdg -rf rm appvol
End of Solution
10 Turn on site consistency feature in the appdg disk group. Verify that the flag is
set using the vxdg list appdg command.
Solution
vxdg -g appdg set siteconsistent=on
vxdg list appdg
End of Solution
11 Create a 500-MB volume called appvol in the appdg disk group without
specifying any additional attributes. Use the vxprint command to display
the volume layout. What do you observe? To what value are the allsites and
siteconsistent attributes of the volume set? What is the volume read policy?
Solution
vxassist -g appdg make appvol 500m
vxprint -g appdg -htr
vxprint -g appdg -m appvol | grep site
siteconsistent = on
allsites = on
The volume is automatically mirrored across all sites, and the allsites and
siteconsistent attributes are set by default. A DCO volume is also created
automatically. The volume read policy is set to siteread.
End of Solution
12 Create a 500-MB volume called webvol in the appdg disk group. This time
turn site consistency off while creating the volume. Display volume layout and
attributes. How does this volume differ from appvol?
Solution
vxassist -g appdg make webvol 500m siteconsistent=off
vxprint -g appdg -htr
vxprint -g appdg -m webvol | grep site
siteconsistent = off
allsites = on
The volume is automatically mirrored across all sites, but this time only the
allsites attribute is set. There are no DCO or DRL logs associated with the
volume. The volume read policy is set to siteread.
End of Solution
13 Delete the webvol volume.
Solution
vxedit -g appdg -rf rm webvol
End of Solution
Exercise 2: Analyzing the volume read policy (starting on sym1)
1 Create a VxFS file system on appvol. Create the /app directory and mount
the file system to the /app directory. Create a 100-MB file on the file
system using the dd if=/dev/zero of=/app/testfile bs=1024k
count=100 command.
Solution
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
dd if=/dev/zero of=/app/testfile bs=1024k count=100
End of Solution
2 Reset the I/O statistics using the vxstat -g appdg -r command.
Solution
vxstat -g appdg -r
End of Solution
3 Read data from the file system using the vxbench command.
Solution
/opt/VRTSspt/FS/VxBench/vxbench_rhel5_x86_64 \
-w read -i iosize=8k,iocount=1000 /app/testfile
End of Solution
4 Display the I/O statistics using the vxstat -g appdg -d command.
Which disk has been used during the read operation? To which site is this
disk assigned?
Solution
vxstat -g appdg -d
vxdisk -g appdg listtag dm_name
where dm_name is the disk media name of the disk device used for the read
operations.
You should observe that the read operation has used the disk device that is at
the same site as the host on which the read operation is performed.
End of Solution
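The read policy driving this behavior can also be inspected directly. This is a sketch, not part of the original lab; the exact field label in the vxprint long listing varies between releases, so check the output on your own system:

```
# Long listing of the volume; look for the read-policy field,
# which on a site-aware mirrored volume is expected to report siteread.
vxprint -g appdg -l appvol | grep -i read
```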
5 Reset the I/O statistics using the vxstat -g appdg -r command.
Solution
vxstat -g appdg -r
End of Solution
6 Unmount the file system and deport the disk group from the system on which
you are working.
Solution
umount /app
vxdg deport appdg
End of Solution
7 Import the appdg disk group on the system at the second site (sym2).
Solution
vxdg import appdg
End of Solution
Note: All volumes are started by default when importing a disk group in
Storage Foundation 6.0. With versions before 5.1 SP1, the volumes had
to be started manually after the import was complete.
8 Create the /app directory (if it doesn't exist) and mount appvol on it.
Solution
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
9 Reset the I/O statistics using the vxstat -g appdg -r command.
Solution
vxstat -g appdg -r
End of Solution
10 Read data from the file system using the vxbench command.
Solution
/opt/VRTSspt/FS/VxBench/vxbench_rhel5_x86_64 \
-w read -i iosize=8k,iocount=1000 /app/testfile
End of Solution
11 Display the I/O statistics using the vxstat -g appdg -d command.
Which disk has been used during the read operation? To which site is this disk
assigned?
Solution
vxstat -g appdg -d
vxdisk -g appdg listtag dm_name
where dm_name is the disk media name of the disk device used for the read
operations.
You should observe that the read operation has used the disk device that is at
the same site as the host on which the read operation is performed.
End of Solution
12 Unmount the file system and deport the disk group.
Solution
umount /app
vxdg deport appdg
End of Solution
13 Import the appdg disk group on sym1.
Solution
vxdg import appdg
End of Solution
14 If you have extra time and you will be performing the optional labs, do not
perform this step. Otherwise, destroy the appdg disk group to prepare for the
next lab.
Solution
vxdg destroy appdg
End of Solution
Exercise 3: Optional lab: Analyzing the impact of disk failure in a
site-consistent environment (performed on sym1)
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
Note: If you are not using the enclosure-based naming scheme, replace the
enclosure-based names in the following steps with the disk access name,
which is the DMP node name.
1 Identify the disk used for the second plex in the appvol volume. Note the
disk media name and the disk access name (or the enclosure-based name, if
enclosure-based naming is used) here:
Disk media name: _______________________________________________
Disk access name (Enclosure-based name):____________________________
Note which site this disk belongs to.
Solution
vxprint -g appdg -htr appvol | grep appvol-02
vxdisk -g appdg listtag dm_name
End of Solution
2 Mount the appvol volume to /app.
Solution
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
3 If you are using enclosure-based naming, identify the pathname of the disk
you noted in step 1 using the vxdisk -g appdg list dm_name command.
To simulate a disk failure, use the vxdmpadm -f disable
path=pathname command. Note that you need to disable all paths if you
have a disk with multiple paths.
Solution
vxdisk -g appdg list dm_name
The paths to the disk are displayed at the end of the list.
vxdmpadm -f disable path=pathname1
vxdmpadm -f disable path=pathname2 (if necessary)
End of Solution
4 Create a 10-MB testfile2 file in /app using the dd command.
Solution
dd if=/dev/zero of=/app/testfile2 bs=1024k count=10
End of Solution
5 Display the disk status using the vxdisk -o alldgs list command and
the disk group information using the vxprint -g appdg -htr
command. What do you observe?
Solution
vxdisk -o alldgs list
vxprint -g appdg -htr
The disk that you disabled is displayed as failed. The other disk at the same site
as the failed disk is detached. The disk group shows that the site where the disk
has failed is detached.
End of Solution
6 Attempt to access the testfile file in /app using the dd command. Can
you still access the file system?
Solution
dd if=/app/testfile of=/dev/null bs=1024k count=100
The file system is still accessible because there is still a complete set of data on
siteA.
End of Solution
7 Recover from the failure by reenabling the path to the failed disk and by
reattaching the site if necessary.
Note: If the vxattachd daemon is running, the detached site is
automatically reattached and the volumes are automatically recovered
after you enable the paths to the failed disk. If you do not wish to wait
for the daemon to reattach the disks, use the vxreattach command.
Solution
vxdmpadm enable path=pathname1
vxdmpadm enable path=pathname2 (if necessary)
Ensure that the disk no longer displays error status.
vxdisk -o alldgs list
vxprint -g appdg -htr
End of Solution
8 Turn site consistency off on the disk group.
Solution
vxdg -g appdg set siteconsistent=off
End of Solution
9 Repeat steps 1 through 4 to simulate the disk failure. How does this differ from
the previous response?
Solution
The disk that you disabled is displayed as failed. However, the site is not
detached this time. If the hot relocation daemon is running, the subdisk on the
failed disk is relocated to another disk at the same site.
End of Solution
10 Ensure that the vxrelocd daemon is running. Fail the other disk at the same
site. This is the disk to which the second plex had been relocated after the
failure in step 9. Observe what happens. Can you still access the file system? Is
the subdisk on the failed disk relocated again?
Solution
ps -ef | grep vxrelocd
vxprint -g appdg -htr appvol | grep appvol-02
vxdisk -g appdg listtag dm_name
vxdisk -g appdg list dm_name
The paths to the disk are displayed at the end of the list.
vxdmpadm -f disable path=pathname1
vxdmpadm -f disable path=pathname2 (if necessary)
dd if=/dev/zero of=/app/testfile2 bs=1024k count=10
You should observe both disks as failed.
vxdisk -o alldgs list
vxprint -g appdg -htr
Note that the site is still not detached although all the disks at the site have
failed.
dd if=/app/testfile of=/dev/null bs=1024k count=100
The volume is still accessible because the disks at the other site are intact.
Hot relocation cannot relocate the subdisk because there are no more disks left
at this site.
End of Solution
11 Recover from the failure by reenabling paths to both of the failed disks and by
reattaching the disks using the vxreattach -br command. Use the
vxtask list command to observe the synchronization.
Solution
vxdmpadm enable path=disk1_pathname1
vxdmpadm enable path=disk1_pathname2 (if necessary)
vxdmpadm enable path=disk2_pathname1
vxdmpadm enable path=disk2_pathname2 (if necessary)
vxreattach -br
vxtask list
vxdisk -o alldgs list
vxprint -g appdg -htr
End of Solution
12 Turn site consistency back on for the disk group.
Solution
vxdg -g appdg set siteconsistent=on
End of Solution
13 If you do not have time to perform the next optional exercise, unmount /app
and destroy the appdg disk group to prepare for the next lab. Otherwise, skip
this step and continue with the next optional lab section.
Solution
umount /app
vxdg destroy appdg
End of Solution
Exercise 4: Optional lab: A manual fire drill operation with remote
mirroring (performed on sym1)
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
1 Ensure that there are no failures in the appdg disk group and that the file
system is mounted on the /app directory.
Solution
vxprint -g appdg -htr
df -k /app
End of Solution
2 Use the vxdg detachsite command to detach siteA manually while the
file system is still mounted. Use the vxdisk list and vxprint
commands to observe what happens. Can you still access the file system?
Solution
vxdg -g appdg detachsite siteA
vxdisk -o alldgs list
Note that all the disks at siteA are detached.
vxprint -g appdg -htr
Note that the site status is displayed as OFFLINE. You can still access the
file system.
End of Solution
3 Reattach the detached site and recover the volume.
Solution
vxdg -g appdg reattachsite siteA
vxrecover -g appdg
End of Solution
4 When the volume is completely recovered, this time detach siteB manually
while the file system is still mounted. Use the vxdisk list and vxprint
commands to observe what happens. Can you still access the file system?
Solution
vxdg -g appdg detachsite siteB
vxdisk -o alldgs list
Note that all the disks at siteB are detached.
vxprint -g appdg -htr
Note that the site status is displayed as OFFLINE. You can still access the file
system.
End of Solution
5 Reattach the detached site and recover the volume.
Solution
vxdg -g appdg reattachsite siteB
vxrecover -g appdg
End of Solution
6 Unmount /app and destroy the appdg disk group to prepare for the next lab.
Solution
umount /app
vxdg destroy appdg
End of Solution
End of lab
Lab 8: Implementing SmartTier
In this lab, you configure your system to use SmartTier to manage the placement
of files in a volume set.
This lab contains the following exercises:
- Configuring a multi-volume file system and SmartTier
- Testing SmartTier
Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured and licensed. In addition to this, you also need at least four disks to be
used in a disk group.
Before starting this lab you should have all the external disks assigned to you
already initialized but free to be used in a disk group.
Lab information
In preparation for this lab, you need the following information about your lab
environment.
Object                        Value
root password                 veritas
Host name of the lab system   sym1
Shared data disks             emc0_dd7 - emc0_d12,
                              3pardata0_49 - 3pardata0_60
Exercise 1: Configuring a multi-volume file system and SmartTier (performed
on sym1)
In this lab you set up a SmartTier environment for storing user files. The
assumption in this lab is that the majority of the files will be accessed and
modified less frequently than once a month, so the majority of the available
space (two volumes) resides in the tier configured for files not accessed for
30 - 60 days.
1 Create a disk group called appdg with four disks (emc0_dd7 - emc0_d10).
Solution
vxdg init appdg appdg01=emc0_dd7 appdg02=emc0_dd8 \
appdg03=emc0_dd9 appdg04=emc0_d10
End of Solution
2 Create a new volume set called appvset containing four sub-volumes.
a Create a 1-GB concatenated volume called subvol1.
Solution
vxassist -g appdg make subvol1 1g
End of Solution
b Create a 1-GB concatenated volume called subvol2.
Solution
vxassist -g appdg make subvol2 1g
End of Solution
c Create a 1-GB concatenated volume called subvol3.
Solution
vxassist -g appdg make subvol3 1g
End of Solution
d Create a 1-GB concatenated volume called subvol4.
Solution
vxassist -g appdg make subvol4 1g
End of Solution
e Make a new volume set called appvset using the subvol1, subvol2,
subvol3, and subvol4 volumes.
Solution
vxvset -g appdg make appvset subvol1
vxvset -g appdg addvol appvset subvol2
vxvset -g appdg addvol appvset subvol3
vxvset -g appdg addvol appvset subvol4
End of Solution
f Use the vxprint command to view the newly created volume set.
Solution
vxprint -g appdg -htr
End of Solution
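Besides vxprint, the volume set itself can be listed with the vxvset utility already used above. A quick sketch of the listing form:

```
# Lists each sub-volume in appvset along with its index and length.
vxvset -g appdg list appvset
```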
3 Set the file placement classes for each volume in the volume set so that there is
a hightier on the first volume, a midtier on the second and third volumes and a
lowtier on the fourth volume.
a Set the file placement class for the subvol1 volume to hightier.
Solution
vxassist -g appdg settag subvol1 \
vxfs.placement_class.hightier
End of Solution
b Set the file placement class for the subvol2 and subvol3 volumes to midtier.
Solution
vxassist -g appdg settag subvol2 \
vxfs.placement_class.midtier
vxassist -g appdg settag subvol3 \
vxfs.placement_class.midtier
End of Solution
c Set the file placement class for the subvol4 volume to lowtier.
Solution
vxassist -g appdg settag subvol4 \
vxfs.placement_class.lowtier
End of Solution
d Verify that the placement classes were properly created. The placement
classes can be viewed using the vxassist command with the listtag
option. List the placement classes for each volume to ensure they are set
correctly.
Solution
vxassist -g appdg listtag subvol1 subvol2 \
subvol3 subvol4
End of Solution
4 Create a VxFS file system on appvset and mount it to the /app mount point.
Solution
mkfs -t vxfs /dev/vx/rdsk/appdg/appvset
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvset /app
End of Solution
5 View /app using the fsvoladm list command. You should see the four
volumes that were added to appvset.
Solution
fsvoladm list /app
End of Solution
6 List the volume availability flags using the fsvoladm queryflags
command.
Solution
fsvoladm queryflags /app
End of Solution
7 If you are not already logged into VOM, start a Web browser and connect to
VOM. Navigate to the sym1.example.com link.
Solution
https://vom.example.com:14161/
Log on to VOM
Navigate to Manage > Servers > Hosts >sym1.example.com.
End of Solution
8 Create a new Update age-based placement policy for the /app file system and
assign it to the MVFS. The policy should be called data_agebased_policy. The
policy should include all defined tiers with hightier being the highest
placement class and lowtier the lowest placement class. In the policy, configure
the following relocation policies:
- From hightier to lower placement classes if the last modification date is greater than or equal to 30 days
- From midtier to lowtier if the last modification date is greater than or equal to 60 days
- From lowtier to higher placement classes if the last modification date is less than 60 days
- From midtier to hightier if the last modification date is less than 30 days
Do not configure any exceptions to these policies.
Solution
a Click Administer > File Systems.
b Select the check box for the multivolume file system (/app).
c Select Actions > Create Canned File Placement Policy.
d Select Update age-based. Click Next.
e Move all three tiers from Available placement classes to Selected
placement classes. Make sure that the order of the tiers from top to bottom
is hightier, midtier, lowtier. Click Next.
f In the To class below when days since last modification >= section set
the value for hightier to 30 and for midtier to 60. In the To class above
when days since last modification <= section set the value for midtier to
30 and for lowtier to 60. Click Next.
g Do not set any values in the Specify relocation exceptions: highest
placement class screen. Click Next.
h Do not set any values in the Specify relocation exceptions: lowest
placement class screen. Click Next.
i Review the policy to make sure it is correct. Click Next.
j Set the Policy Name to data_agebased_policy. Click Finish to
complete the wizard, and then Ok.
End of Solution
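Behind the scenes, VOM stores the rules above as an XML placement policy and assigns it to the file system with fsppadm. As a rough, hypothetical sketch only (the element and attribute names are assumptions based on the VxFS placement-policy DTD, not taken from this lab; verify against the sample policies shipped with VxFS before relying on them), the hightier-to-midtier rule might look like:

```xml
<?xml version="1.0"?>
<!-- Hypothetical sketch of one relocation rule from data_agebased_policy.
     Element names are assumptions; check the sample policies shipped
     with VxFS before relying on this. -->
<PLACEMENT_POLICY Version="5.0" Name="data_agebased_policy">
  <RULE Name="age_based_rule" Flags="data">
    <SELECT>
      <PATTERN> * </PATTERN>
    </SELECT>
    <RELOCATE>
      <FROM><SOURCE><CLASS>hightier</CLASS></SOURCE></FROM>
      <TO><DESTINATION><CLASS>midtier</CLASS></DESTINATION></TO>
      <WHEN><MODAGE Units="days"><MIN Flags="gteq">30</MIN></MODAGE></WHEN>
    </RELOCATE>
  </RULE>
</PLACEMENT_POLICY>
```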
9 Verify that the policy has been assigned to the /app mount point using the
fsppadm list command.
Solution
fsppadm list /app
End of Solution
Exercise 2: Testing SmartTier
sym1
1 Create several files in /app as follows:
echo "testfile1 for DST test" > /app/testfile1
echo "testfile2 for DST test" > /app/testfile2
2 Determine on which volume the files are located using the fsmap command.
Solution
fsmap -q /app/*
The files should be located in subvol1.
End of Solution
3 Determine the current date using the date command and record the value
here. It is needed in later steps.
Current Date __________________________________
Solution
date
Mon Jan 16 21:26:41 PST 2012
End of Solution
4 Change the date of testfile1 using the touch command so that the file is
more than 30 days old, but less than 60 days old. Format the date as
YYMMDDhhmm.
Solution
The following example uses a date more than 30 days, but less than 60 days,
before the date example of the previous step.
touch -amt 1112100800 /app/testfile1
End of Solution
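Working out the YYMMDDhhmm value by hand is error-prone. A small hypothetical helper (not part of the lab; assumes GNU date with the -d option, so adjust for other platforms) that computes the stamp for a date N days in the past:

```shell
# Hypothetical helper: compute the YYMMDDhhmm timestamp expected by
# `touch -amt` for a date N days in the past. Assumes GNU date (-d).
days_ago_stamp() {
  days="$1"
  date -d "${days} days ago" +%y%m%d%H%M
}

stamp="$(days_ago_stamp 31)"
echo "touch -amt ${stamp} /app/testfile1"
```

On systems with BSD date, the equivalent would be `date -v-31d +%y%m%d%H%M`.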
5 Determine on which volume the files are located using the fsmap command.
Solution
fsmap -q /app/*
The files should still be located in subvol1.
End of Solution
6 Enforce the file placements on /app using either VOM or the CLI.
Solution
VOM
a Mark the appvset file system in the table. Select Actions > Enforce
Active File Placement Policy.
b Answer Yes when asked if you are sure you want to enforce the file
placement policy on /app and click Next.
c Click Finish and Ok to complete the wizard.
CLI
fsppadm enforce /app
End of Solution
7 Determine on which volume the testfile1 file is located using the fsmap
command.
Solution
fsmap -q /app/*
The testfile1 file should be located in subvol2 or subvol3.
End of Solution
8 Change the date of the testfile2 using the touch command so that the
file is more than 60 days old.
Solution
The following example uses a date more than 90 days before the current date
shown in step 3.
touch -amt 1110100800 /app/testfile2
End of Solution
9 Enforce the file placements on /app using either VOM or the CLI.
Solution
VOM
a Mark the appvset file system in the table. Select Actions > Enforce
Active File Placement Policy.
b Answer Yes when asked if you are sure you want to enforce the file
placement policy on /app and click Next.
c Click Finish and Ok to complete the wizard.
CLI
fsppadm enforce /app
End of Solution
10 Determine on which volume the testfile2 file is located using the fsmap
command.
Solution
fsmap -q /app/*
The testfile2 file should be located in subvol4.
End of Solution
11 Change the date of testfile2 using the touch command so that the file is
now more than 30 days old, but less than 60 days.
Solution
The following example uses a date 31 days before the current date example of
step 3.
touch -amt 1112150800 /app/testfile2
End of Solution
12 Enforce the file placements on /app using either VOM or the CLI.
Solution
VOM
a Mark the appvset file system in the table. Select Actions > Enforce
Active File Placement Policy.
b Answer Yes when asked if you are sure you want to enforce the file
placement policy on /app and click Next.
c Click Finish and Ok to complete the wizard.
CLI
fsppadm enforce /app
End of Solution
13 Determine on which volume the testfile2 file is located using the fsmap
command.
Solution
fsmap -q /app/*
The testfile2 file should be located in subvol2 or subvol3. This shows that
when a file's modification date becomes more recent, the file moves back up
the tier structure.
End of Solution
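The relocation behavior being tested can be summarized as a simple decision on file age. A toy model (plain shell, not a Veritas command) of where the policy configured earlier should leave a file of a given age:

```shell
# Toy model of the age-based relocation rules from this lab's policy:
# given a file's age in days since last modification, print the tier
# the policy should leave it on.
tier_for_age() {
  age="$1"
  if [ "$age" -ge 60 ]; then
    echo lowtier          # >= 60 days: relocated down to lowtier
  elif [ "$age" -ge 30 ]; then
    echo midtier          # 30-59 days: relocated to midtier
  else
    echo hightier         # < 30 days: kept on (or moved up to) hightier
  fi
}

tier_for_age 10    # hightier
tier_for_age 45    # midtier
tier_for_age 90    # lowtier
```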
14 Touch both files so they have the current date.
Solution
touch /app/testfile1
touch /app/testfile2
End of Solution
15 Enforce the file placements on /app using either VOM or the CLI.
Solution
VOM
a Mark the appvset file system in the table. Select Actions > Enforce
Active File Placement Policy.
b Answer Yes when asked if you are sure you want to enforce the file
placement policy on /app and click Next.
c Click Finish and Ok to complete the wizard.
CLI
fsppadm enforce /app
End of Solution
16 Determine on which volume the files are located using the fsmap command.
Solution
fsmap -q /app/*
The files should all be located in subvol1. This shows that when the files'
modification dates are reset to the current date, they are moved back to hightier.
End of Solution
17 Unmount /app and destroy the appdg disk group.
Solution
umount /app
vxdg destroy appdg
End of Solution
End of lab
Lab 9: Replicating a Veritas File System
In this lab, you configure two systems for file replication using the Veritas File
Replicator (VFR). After replication has completed you will then restore the source
file system using the replication target system.
This lab contains the following exercises:
- Setting up and performing replication for a VxFS file system
- Restoring the source file system using the replication target
Prerequisite setup
To perform this lab, you need two lab systems with Storage Foundation pre-
installed, configured and licensed. In addition to this, you also need at least four
disks to be used in a disk group on each lab system.
Before starting this lab you should have all the external disks assigned to you
already initialized but free to be used in a disk group.
Lab information
In preparation for this lab, you need the following information about your lab
environment.
Object                                 Value
root password                          veritas
Host name of the source system         sym1
Host name of the replication target    sym2
Shared data disks                      emc0_dd7 - emc0_d12, 3pardata0_49 - 3pardata0_60
The exercises for this lab start on the next page.
Exercise 1: Setting up and performing replication for a VxFS file
system
sym1 - Source
In this lab, you set up two systems for file replication using VFR. The assumption
in this lab is that the two systems do not share disks, even though they do in your
test environment. All steps are based on this assumption. The source system
(sym1) uses the appdg disk group and the appvol volume; the target system (sym2)
uses the repdg disk group and the repvol volume. The mount point names on
both systems are the same.
1 Verify that the license required to perform file replication is installed. If it is
not, add it.
Solution
vxlicrep -s | grep Replicator
If it is not present, add it using:
vxkeyless set SFENT_VR
End of Solution
2 Create a disk group called appdg with four disks (emc0_dd7 - emc0_d10).
Solution
vxdg init appdg appdg01=emc0_dd7 appdg02=emc0_dd8 \
appdg03=emc0_dd9 appdg04=emc0_d10
End of Solution
3 Create a 1-GB concatenated volume called appvol. Create a VxFS file system
on the volume and mount it to /app.
Solution
vxassist -g appdg make appvol 1g
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
4 Copy the files in /etc/default into /app.
Solution
cp /etc/default/* /app
ls -al /app
End of Solution
5 Start the Veritas File Replicator scheduler.
Solution
vfradmin startsched
End of Solution
sym2 - Target
6 Verify that the license required to perform file replication is installed. If it is
not, add it.
Solution
vxlicrep -s | grep Replicator
If it is not present, add it using:
vxkeyless set SFENT_VR
End of Solution
7 Create a disk group called repdg with four disks (3pardata0_49 -
3pardata0_52).
Solution
vxdisksetup -i accessname (if necessary)
vxdg init repdg repdg01=3pardata0_49 \
repdg02=3pardata0_50 repdg03=3pardata0_51 \
repdg04=3pardata0_52
End of Solution
8 Create a 1-GB concatenated volume called repvol. Create a VxFS file system
on the volume and mount it to /app.
Solution
vxassist -g repdg make repvol 1g
mkfs -t vxfs /dev/vx/rdsk/repdg/repvol
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/repdg/repvol /app
End of Solution
9 Start the target replication daemon.
Solution
vfradmin startvxfsrepld
End of Solution
sym1 - Source
10 Create the replication job called apprepjob for the /app file system, specify
the -s option (source system). Set the frequency to 15 minutes.
Solution
vfradmin createjob -s apprepjob sym1 /app sym2 /app 15
End of Solution
sym2 - Target
11 Create the replication job called apprepjob for the /app file system, specify
the -t option (target system). Set the frequency to 15 minutes.
Solution
vfradmin createjob -t apprepjob sym1 /app sym2 /app 15
End of Solution
sym1 - Source
12 Start the replication job for the /app file system.
Solution
vfradmin startjob apprepjob /app
End of Solution
13 List the replication job for the /app file system. Use the verbose (-v) option.
Solution
vfradmin listjob -v /app
End of Solution
14 Display the replication job status for the /app file system.
Solution
vfradmin getjobstatus apprepjob /app
End of Solution
15 Display the replication job statistics for the /app file system using the verbose
option and human friendly units.
Solution
vfradmin getjobstats -v -fh apprepjob /app
End of Solution
16 Display the replication job storage checkpoint.
Solution
vfradmin getjobckpt apprepjob /app
End of Solution
sym2 - Target
17 List the files that have been replicated in the /app directory.
Solution
ls -al /app
End of Solution
Exercise 2: Restoring the source file system using the replication
target
sym1 - Original Source
In this exercise, you simulate a loss of data by unmounting the appvol volume
and creating a new file system on the volume. The volume is then remounted;
the target system (sym2) temporarily becomes the source, and the original
source system (sym1) temporarily becomes the target. After the file system is
recovered on the original source system, you change the replication direction
to resume the replication as before.
1 Unmount /app and recreate a VxFS file system on the appvol volume.
Remount the file system to /app.
Solution
umount /app
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
ls -al /app
End of Solution
2 List the replication job for the /app file system. Use the verbose (-v) option.
Does the replication job exist?
Solution
vfradmin listjob -v /app
You receive an error message indicating that no jobs are configured. When the
file system is lost during a disaster, the job definition is also lost on the original
source system.
End of Solution
3 Create the replication job for the /app file system, specify the -t option (to
make sym1 the target system). Set the frequency to 15 minutes. List the
replication job to view the job.
Solution
vfradmin createjob -t apprepjob sym2 /app sym1 /app 15
vfradmin listjob -v /app
End of Solution
4 Stop the scheduler on the old source system.
Solution
vfradmin stopsched
End of Solution
5 Start the target replication daemon. Note that the original source system is now
configured to function as the target system.
Solution
vfradmin startvxfsrepld
End of Solution
sym2 - Temporary New Source
6 Start the scheduler.
Solution
vfradmin startsched
End of Solution
7 Determine if the original replication job exists. If it does, destroy it and
recreate the job specifying sym2 as the source system and sym1 as the target
system. List the replication job to view the job.
Solution
vfradmin listjob -v /app
vfradmin destroyjob apprepjob /app
vfradmin createjob -s apprepjob sym2 /app sym1 /app 15
vfradmin listjob -v /app
End of Solution
8 Display the replication job status for the /app file system. What do you
observe?
Solution
vfradmin getjobstatus apprepjob /app
The replication job is configured but is not running.
End of Solution
9 Copy the /etc/group file to /app to simulate updates to the file system
while the original source was down. Verify that the file has been copied.
Solution
cp /etc/group /app
ls -l /app
End of Solution
10 Perform one iteration of the replication using the vfradmin syncjob
command to restore the original file system.
Solution
vfradmin syncjob apprepjob /app
End of Solution
11 After the replication has completed, determine the result of the replication
using the vfradmin getjobstats -v -fh command.
Solution
vfradmin getjobstats -v -fh apprepjob /app
vfradmin stopsched
End of Solution
sym1 - Original Source
12 On the original source system, verify that the contents of the /app file system
are restored, including the /etc/group file that was copied on sym2 in
Step 9.
Solution
ls -l /app
End of Solution
sym2 - Temporary New Source
13 After the original file system is restored, change sym2 back to a target system
by first stopping the replication scheduler and then changing the replication
mode for apprepjob to be the target. Ensure that the target replication daemon
(vxfsrepld daemon) is still running on the system.
Solution
vfradmin stopsched
vfradmin setjobmode -t apprepjob /app
Enter y to continue when you receive the warning about losing replication
statistics on this system.
ps -ef | grep vxfsrepld
The target replication daemon should still be running and listening on the
default replication port.
End of Solution
14 To configure sym1 as the source system again, change the replication job mode
for apprepjob to source on sym1. List the replication job and confirm that the
source system is now sym1.
Solution
vfradmin setjobmode -s apprepjob /app
vfradmin listjob -v /app
End of Solution
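For reference, the role-reversal commands used across this exercise, collected into one sequence. This is only a summary of the steps above (hosts are noted in the comments), not an additional procedure:

```
## On sym1 (original source, temporarily the target)
vfradmin createjob -t apprepjob sym2 /app sym1 /app 15
vfradmin stopsched
vfradmin startvxfsrepld

## On sym2 (temporary source)
vfradmin startsched
vfradmin destroyjob apprepjob /app     # if an old job definition remains
vfradmin createjob -s apprepjob sym2 /app sym1 /app 15
vfradmin syncjob apprepjob /app        # one restore iteration back to sym1

## Swapping roles back after the restore
vfradmin stopsched                     # on sym2
vfradmin setjobmode -t apprepjob /app  # on sym2
vfradmin setjobmode -s apprepjob /app  # on sym1
vfradmin stopvxfsrepld                 # on sym1
vfradmin startsched                    # on sym1
```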
15 Stop the replication target daemon on sym1. Note that this step is not necessary
to change sym1 to be the source system for the /app file system replication.
However, since there are no other file systems for which sym1 is the target, the
target replication daemon can be stopped.
Solution
vfradmin stopvxfsrepld
End of Solution
16 Start the replication scheduler on the original source system. Verify that the
scheduler is running.
Solution
vfradmin startsched
ps -ef | grep vxfstaskd
End of Solution
sym1 - Original Source
17 Copy new data to the /app file system by executing the following command:
cp /etc/*.conf /app. Confirm that the files have been copied.
Solution
cp /etc/*.conf /app
ls -l /app
End of Solution
18 Start the apprepjob replication job and display its status using the vfradmin
getjobstatus command.
Solution
vfradmin startjob apprepjob /app
vfradmin getjobstatus apprepjob /app
The status should indicate that the job is currently scheduled. The job state will
show idle if the last replication job has completed.
End of Solution
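When scripting around these status checks, a small wait loop is often handy. A minimal sketch, with `get_status` stubbed in place of the real `vfradmin getjobstatus apprepjob /app` call so the example is self-contained and runnable (the function name, the "idle" match, and the timeout are assumptions for illustration, not part of VFR):

```shell
# Illustrative wait loop: poll a job-status command until it reports idle.
# `get_status` is a stand-in for `vfradmin getjobstatus apprepjob /app`,
# stubbed so this sketch runs anywhere.
get_status() { echo "Job State: idle"; }

wait_for_idle() {
  tries=0
  until get_status | grep -q "idle"; do
    tries=$((tries + 1))
    [ "$tries" -ge 60 ] && return 1   # give up after ~60 polls
    sleep 1
  done
  return 0
}

wait_for_idle && echo "replication iteration complete"
```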
19 Display the job statistics using the vfradmin getjobstats -v -fh
command.
Solution
vfradmin getjobstats -v -fh apprepjob /app
The output indicates the files and directories that have been modified and
synced with the target system during the last replication interval.
End of Solution
sym2 - Target
20 Verify that the contents of the /app file system are replicated to the target file
system on sym2.
Solution
ls -l /app
All the *.conf files that were copied to the file system on sym1 in Step 17
should also exist on the target file system on sym2, indicating that the
replication is running successfully.
End of Solution
sym1 - Source
21 Stop the replication scheduler on sym1. Unmount /app and destroy the appdg
disk group.
Solution
vfradmin stopsched
umount /app
vxdg destroy appdg
End of Solution
sym2 - Target
22 Stop the target replication daemon on sym2.
Solution
vfradmin stopvxfsrepld
End of Solution
23 Unmount /app and destroy the repdg disk group.
Solution
umount /app
vxdg destroy repdg
End of Solution
End of lab
Lab App C: Optional Lab: Importing LUN Snapshots
The purpose of this lab is to perform tests that give you a better understanding of
how Volume Manager deals with LUN Snapshots.
This lab contains the following exercises:
- LUN snapshots setup
- Importing clone disk groups
Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured and licensed. In addition to this, you also need four disks to be used in
a disk group and its clone.
Lab information
In preparation for this lab, you need the following information about your lab
environment.
Object                        Value
root password                 veritas
Host name of the lab system   sym1
Shared data disks             emc0_dd7 - emc0_d12
The exercises for this lab start on the next page.
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time. The optional exercises do not have
any impact on further labs.
Exercise 1: LUN snapshots setup
sym1
In this exercise, you simulate a hardware-based LUN snapshot by cloning two
disks using the dd command. Once the disks are cloned, you perform testing
on the cloned disks using Volume Manager commands.
1 Create a disk group called appdg with two disks.
Solution
vxdisksetup -i emc0_dd7 (if necessary)
vxdisksetup -i emc0_dd8 (if necessary)
vxdg init appdg appdg01=emc0_dd7 appdg02=emc0_dd8
End of Solution
2 Create a 1-GB concatenated mirrored volume called appvol. Create a VxFS
file system on the volume and mount it to /app.
Solution
vxassist -g appdg make appvol 1g layout=mirror
mkfs -t vxfs /dev/vx/rdsk/appdg/appvol
mkdir /app (if necessary)
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
End of Solution
3 Copy some files into /app. Verify that the files have been copied.
Solution
cp /etc/*.conf /app
ls -al /app
End of Solution
4 Determine the names of two unused and uninitialized disks that can be used to
simulate LUN Snapshots. Record the names for later use. If necessary, use the
vxdiskunsetup command to uninitialize the disks.
Clone Device 1 _____________________ (emc0_dd9)
Clone Device 2 _____________________ (emc0_d10)
Solution
vxdisk -o alldgs list
vxdiskunsetup emc0_dd9
vxdiskunsetup emc0_d10
End of Solution
5 Use the dd command to clone each device in the appdg disk group (emc0_dd7
and emc0_dd8) to the disks recorded in step 4 (emc0_dd9 and emc0_d10). Use
a 1024k block size during the copy operation. Be patient; the copy in this step
takes a while.
Solution
dd if=/dev/vx/rdmp/emc0_dd7 \
of=/dev/vx/rdmp/emc0_dd9 bs=1024k
dd if=/dev/vx/rdmp/emc0_dd8 \
of=/dev/vx/rdmp/emc0_d10 bs=1024k
End of Solution
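The same copy-and-compare idea can be exercised with regular files standing in for the /dev/vx/rdmp devices, which makes the mechanics easy to try anywhere. A minimal, self-contained sketch (the temporary file paths are hypothetical; on real LUNs you would use the device paths shown above):

```shell
# File-based illustration of the clone step: copy a source "device" to a
# destination with dd, then verify the clone is byte-identical.
src=$(mktemp); dst=$(mktemp)
dd if=/dev/urandom of="$src" bs=1024 count=4 2>/dev/null   # fake source data
dd if="$src" of="$dst" bs=1024k 2>/dev/null                # the clone copy
if cmp -s "$src" "$dst"; then identical=yes; else identical=no; fi
rm -f "$src" "$dst"
echo "clone byte-identical: $identical"
```

Because dd copies every block, the clone carries the same VxVM private region, which is why Volume Manager later flags the copies as udid mismatch.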
6 Use the vxdisk scandisks command to have Volume Manager rescan the
disks. The two disks that have the cloned data should now show as udid
mismatch when viewed using vxdisk list.
Solution
vxdisk scandisks
vxdisk -o alldgs list
End of Solution
7 Use the vxdg listclone command to list all clone disks. You will see two
clone disks in the output. Notice that the device names are the same as you
recorded in step 4.
Solution
vxdg listclone
End of Solution
8 Unmount the /app file system.
Solution
umount /app
End of Solution
Exercise 2: Importing clone disk groups
sym1
In this section, you will import the cloned disk group on the same host using two
different methods.
1 Deport the appdg disk group.
Solution
vxdg deport appdg
End of Solution
2 Import the cloned disk group using the vxdg command with the
-o useclonedev=on option. Do not update the IDs on the disks.
Solution
vxdg -o useclonedev=on import appdg
End of Solution
3 Verify that the volume has been started using the vxprint command.
Solution
vxprint -g appdg -htr
End of Solution
4 Use the vxdisk list command to verify that the cloned disk group was
imported.
Solution
vxdisk -o alldgs list
Notice that the disks in the cloned disk group are now flagged as
clone_disk.
End of Solution
5 Mount the appvol volume and verify the files are the same as they were in the
original volume.
Solution
mount -t vxfs /dev/vx/dsk/appdg/appvol /app
ls -al /app
End of Solution
6 Unmount /app and deport the disk group.
Solution
umount /app
vxdg deport appdg
End of Solution
7 Import the original appdg disk group. Verify that the original disk group was
imported using the vxdisk list command.
Solution
vxdg import appdg
vxdisk -o alldgs list
End of Solution
8 Import the cloned disk group using a new name. Update the disk IDs. Verify
that there are now two disk groups imported.
Solution
vxdg -n appdgclone -o useclonedev=on -o updateid \
import appdg
vxdisk -o alldgs list
End of Solution
9 Turn off the clone flag on both disks using the vxdisk command. You can use
the vxdg listclone command to determine the device names and also to
verify that the flags have been cleared.
Solution
vxdg listclone
For each disk shown, run the following vxdisk command (once per disk,
twice in total).
vxdisk -g appdgclone set emc0_dd9 clone=off
vxdisk -g appdgclone set emc0_d10 clone=off
After this command is executed for each device, the vxdg listclone
command should not show any disks in its output.
vxdg listclone
End of Solution
10 Destroy both disk groups.
Solution
vxdg destroy appdgclone
vxdg destroy appdg
End of Solution
End of lab
Lab App D: Optional Lab: Managing the Boot Disk with SF
In this practice, you create a boot disk mirror, disable the boot disk, boot from
the mirror, and recover the boot disk. You then boot again from the boot disk,
remove the mirror, and remove the alternate disk from the boot disk group.
Optionally, you can also create a boot disk snapshot from which you can boot.
Finally, you unencapsulate the boot disk. These tasks are performed using a
combination of the VOM interface, the vxdiskadm utility, and CLI commands.
This lab contains the following exercises:
- Optional lab: Encapsulation and boot disk mirroring
- Optional lab: Testing the boot disk mirror
- Optional lab: Removing the boot disk mirror
- Optional lab: Creating the boot disk snapshot
- Optional lab: Testing and removing the boot disk snapshot
- Optional lab: Unencapsulating
Prerequisite setup
To perform this lab, you need a lab system with Storage Foundation pre-installed,
configured and licensed. In addition to this, you also need a second internal disk to
be able to mirror the system disk.
Lab information
In preparation for this lab, you need the following information about your lab
environment.
Object               Value
root password        veritas
Host name            sym1
Boot disk            sda
2nd internal disk    sdb
Shared data disks    emc0_dd7 - emc0_dd12
Encapsulation and boot disk mirroring
You will be performing two different types of boot disk mirroring. In the first
section, you mirror the boot disk and keep all volumes mirrored. This would be
used to make sure the data on the boot disk is mirrored and up to date in case of a
disk failure. In the second section, you create a boot disk snapshot which can be
used in case the data on the boot disk gets corrupted or deleted. After the snapshot
is created, the data is no longer kept in sync with the boot disk.
The exercises for this lab start on the next page.
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time.
1 Use vxdiskadm to encapsulate the boot disk. Use systemdg as the name of
your boot disk group and use rootdisk as the name of your boot disk.
Solution
a Select the vxdiskadm option, Encapsulate one or more disks, and
follow the steps to encapsulate your system disk (sda).
b Select the system disk as the disk to encapsulate.
c Add the system disk to a disk group named systemdg.
d Specify the name of the disk as rootdisk (do not use default naming).
e Shut down and reboot after exiting vxdiskadm using the shutdown -r
now command.
End of Solution
2 Use the vxrootadm command to administer mirrors of an encapsulated
system disk. Use the command to mirror rootdisk, and use the second internal
disk on your system for the mirror.
Note: When using the vxrootadm command, the target disk must be in an
uninitialized state with the disk type set to auto: none. If the disk is
already initialized, the command will fail and leave the disk in an error
state. Use the vxdiskunsetup command to uninitialize the disk if
necessary.
Solution
vxdiskunsetup sdb (if necessary)
Exercise 1: Optional lab: Encapsulation and boot disk mirroring
vxrootadm -v addmirror sdb
Open a separate window and monitor the mirroring progress using the
vxtask monitor command for each of the volumes being mirrored on the
boot disk. Use Ctrl+c to exit the command.
vxtask monitor
This command shows the percentage complete of the volume being mirrored.
End of Solution
3 After the mirroring operation is complete, verify that you now have two disks
in systemdg, rootdisk and rootdisk-s0, and that all volumes are mirrored. Also,
check whether rootvol is enabled and active.
Hint: Use vxprint and examine the STATE fields.
Solution
vxprint -g systemdg -htr
The rootvol should be in the ENABLED and ACTIVE state, and you should
also see two plexes for each of the volumes in systemdg.
End of Solution
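The plex check can also be scripted. The fragment below works on a hypothetical excerpt of vxprint -htr output (the exact field widths vary by platform, but the record type, record name, and parent-volume columns hold); the awk filter counts the plex records attached to rootvol, which should be two once the mirror is attached.

```shell
# Hypothetical excerpt of `vxprint -g systemdg -htr` output, saved to a file;
# on a live system you would capture the real command output instead.
cat > /tmp/vxprint_systemdg.out <<'EOF'
v  rootvol      -            ENABLED  ACTIVE   41943040 ROUND    -    root
pl rootvol-01   rootvol      ENABLED  ACTIVE   41943040 CONCAT   -    RW
pl rootvol-02   rootvol      ENABLED  ACTIVE   41943040 CONCAT   -    RW
EOF

# Count plex ("pl") records whose parent volume is rootvol;
# a count of 2 means the volume is mirrored.
plexes=$(awk '$1 == "pl" && $3 == "rootvol"' /tmp/vxprint_systemdg.out | wc -l)
echo "rootvol plexes: $plexes"
```

The same filter can be run for swapvol and the other volumes in systemdg by changing the volume name in the awk expression.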
Note: In a production environment, you would now place the names of your
alternate boot disks in persistent storage. This however is not possible in
the classroom VMware environment.
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time.
1 Fail the boot disk by turning off the rootvol plex on the boot disk using the
vxmend -g systemdg off rootvol-01 command. The system will
continue to run because you have a mirror of the disk.
a Fail the system disk.
Solution
To disable the boot disk and make rootvol-01 disabled and offline, use the
vxmend off command. This command is used to make changes to
configuration database records. Here, you are using the command to place
the plex in an offline state. For more information about this command, see
the vxmend (1m) manual page.
vxmend -g systemdg off rootvol-01
End of Solution
b Verify that rootvol-01 is now disabled and offline.
Solution
vxprint -g systemdg -htr
End of Solution
c To change the plex to a STALE state, run the vxmend on command on
rootvol-01. Verify that rootvol-01 is now in the DISABLED and STALE
state.
Solution
vxmend -g systemdg on rootvol-01
vxprint -g systemdg -htr
End of Solution
Exercise 2: Optional lab: Testing the boot disk mirror
2 Now that you have simulated the failure of the original boot disk, reboot the
system and boot up using the mirror. Reboot the system using shutdown -r
now.
Solution
shutdown -r now
End of Solution
3 After the system comes back up, check the status of the root volume. What is
the state of the volume?
Solution
vxprint -g systemdg -htr
The root volume is in the ENABLED/ACTIVE state. The first plex on the
original boot disk is in STALE state and the other plex on the alternate boot
disk is in ACTIVE state. When the synchronization completes, the original
plex changes state to ACTIVE.
End of Solution
4 Use the vxtask command to monitor when the synchronization is complete,
verify the status of rootvol. Verify that rootvol-01 is now in the ENABLED and
ACTIVE state.
Note: You may need to wait a few minutes for the state to change from
STALE to ACTIVE.
Solution
vxtask monitor
vxprint -g systemdg -htr
You have successfully booted up from the mirror, and the volumes have been
resynchronized.
End of Solution
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time.
1 Remove the mirror of the volumes on the encapsulated boot disk. Remove all
but one plex of rootvol and swapvol (that is, remove the newer plex from each
volume in systemdg).
For each volume in systemdg, remove all of the newly created mirrors. More
specifically, for each volume, two plexes are displayed, and you should remove
the newer (mir) plexes from each volume.
Solution
vxplex -g systemdg -o rm dis mirrootvol-01
vxplex -g systemdg -o rm dis mirswapvol-01
vxprint -g systemdg -htr
End of Solution
2 Remove the second disk from the systemdg disk group.
Solution
vxdg -g systemdg rmdisk rootdisk-s0
End of Solution
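Removing one mirror plex per volume can also be generated mechanically. The sketch below is a dry run over a hypothetical volume list: it only prints the vxplex commands, following the mir<volume>-01 plex naming used in the solution, so nothing is changed until the output is reviewed and executed.

```shell
# Hypothetical volume list for systemdg; on a real system, derive it from
# vxprint output for the disk group instead of hard-coding it.
volumes="rootvol swapvol"

# Print (do not run) one plex-removal command per volume, targeting the
# newer mirror plex named mir<volume>-01.
for vol in $volumes; do
    echo "vxplex -g systemdg -o rm dis mir${vol}-01"
done
```

Piping the reviewed output to sh would then execute the removals in one pass.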
Exercise 3: Optional lab: Removing the boot disk mirror
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time.
1 Use the vxdiskunsetup command to remove all partitions from the second
internal drive (sdb).
Solution
vxdiskunsetup sdb
End of Solution
2 Next, use vxrootadm to create a bootable snapshot from your system disk,
rootdisk. Use bksystemdg as the name of the disk group for the snapshot.
Solution
vxrootadm -g systemdg mksnap sdb bksystemdg
If a source disk is not specified, as in this solution, the current boot disk is used.
End of Solution
3 Open a separate window and monitor the mirroring progress using the
vxtask monitor command for each of the volumes being mirrored on the
boot disk. Use Ctrl+c to exit the command.
Solution
vxtask monitor
This command shows the percentage complete of the volume being mirrored.
End of Solution
Exercise 4: Optional lab: Creating the boot disk snapshot
4 After the snapshot operation is complete, verify that you now have a new
bksystemdg disk group with the altboot disk in it.
Solution
vxdisk -o alldgs list
End of Solution
5 Verify that the correct volumes have been created by comparing the volumes in
systemdg with the newly created volumes in bksystemdg using the vxprint
-htr command.
Solution
vxprint -htr
End of Solution
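One way to compare the two disk groups is to diff their volume name lists. The fragment below works on hypothetical captured lists rather than live vxprint output, so the volume names are assumptions; on a real system you would extract them by filtering the volume records out of the vxprint output for each disk group.

```shell
# Hypothetical volume name lists for the original and snapshot disk groups.
printf '%s\n' rootvol swapvol > /tmp/systemdg.vols
printf '%s\n' rootvol swapvol > /tmp/bksystemdg.vols

# An empty diff means the snapshot disk group carries the same volume set
# as the original.
if diff -q /tmp/systemdg.vols /tmp/bksystemdg.vols >/dev/null; then
    echo "volume sets match"
else
    echo "volume sets differ"
fi
```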
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time.
1 Test that the snapshot of the system disk is bootable. To perform this step,
reboot the virtual machine using the shutdown -r now command. When
the system reaches the initial boot screen, press the F2 key to enter the setup
menu. Under the Boot menu, move the VMware Virtual SCSI Hard Drive
(0:1) so that it is before the VMware Virtual SCSI Hard Drive (0:0) on the
list. Exit and save the changes.
Solution
shutdown -r now
End of Solution
Exercise 5: Optional lab: Testing and removing the boot disk
snapshot
2 Use the df -k command to verify that you are booted from the snapshot disk.
Solution
df -k
Filesystem 1K-blocks Used Available Use%
Mounted on
/dev/vx/dsk/bksystemdg/rootvol
21225712 5600900 14529184 28% /
You will see that the disk group is bksystemdg and the disk is altboot.
End of Solution
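The disk group name can also be pulled straight out of the root file system's device path. A minimal sketch, assuming the standard /dev/vx/dsk/<diskgroup>/<volume> layout shown in the df output above:

```shell
# Device path as reported by `df -k` for / on the example system.
rootdev="/dev/vx/dsk/bksystemdg/rootvol"

# In /dev/vx/dsk/<diskgroup>/<volume>, the disk group is the 5th
# '/'-separated field and the volume is the 6th.
dg=$(echo "$rootdev" | cut -d/ -f5)
vol=$(echo "$rootdev" | cut -d/ -f6)
echo "booted from disk group: $dg (volume: $vol)"
```

Run against the original boot disk, the same parsing would report systemdg instead of bksystemdg.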
3 Your system is currently booted up from the boot disk snapshot. Boot up from
the original boot disk. To perform this step, on the virtual machine, reboot the
virtual machine using the shutdown -r now command. When the system
reaches the initial boot screen, select the F2 key to enter the setup menu. Under
the Boot menu, move the VMware Virtual SCSI Hard Drive (0:0) so that it
is before the VMware Virtual SCSI Hard Drive (0:1) on the list. Exit and
save the changes.
Solution
shutdown -r now
End of Solution
4 Use the df -k command to verify that you are booted from the original disk.
Solution
df -k
Filesystem 1K-blocks Used Available Use%
Mounted on
/dev/vx/dsk/systemdg/rootvol
21225712 5600900 14529184 28% /
You will see that the disk group is systemdg and the disk is rootdisk.
End of Solution
5 Remove the snapshot of the boot disk by destroying the bksystemdg disk
group.
Solution
vxdg destroy bksystemdg
End of Solution
Note: Check with your instructor to see if you have more time to complete the
optional lab exercises. You do not need to perform the optional lab
exercises unless you have extra time.
1 Run the command to convert the root volumes back to disk partitions. When
prompted to shut down the system, answer n. Then use the shutdown -r
now command to reboot the system.
Solution
vxunroot
shutdown -r now
End of Solution
2 Verify that the mount points are now slices rather than volumes.
Solution
df -k
End of Solution
End of lab
Exercise 6: Optional lab: Unencapsulating
Appendix C
Importing LUN Snapshots
How Volume Manager detects hardware snapshots
A snapshot is a point-in-time copy (PITC) which enables you to capture an image
of data at a selected instant.
A hardware snapshot happens when the copy operations occur at the device level
within the hardware provider's disk array, outside the control of Volume Manager.
Although the way the hardware snapshot is implemented differs between
vendors, from Volume Manager's point of view a hardware snapshot is
equivalent to a block-by-block copy of the individual disks.
Note that the whole LUN is copied in the process, not only the parts containing the
data. Because the disk array does not recognize Volume Manager and is unaware
of the private and public regions on the disk, the private region is copied as well as
the public region. The target disk on which the data is copied is sometimes referred
to as a clone disk.
How is a hardware snapshot used?
The way the hardware snapshot is used depends on the environment in the data
center.
In some cases the clone disks are visible to the same server as the original disks
and the snapshot disk group is imported on the same host as the original disks.
In other cases, the original host has no visibility of the clone disks and the snapshot
disk group is imported on a completely different server which has no visibility of
the original disks.
Importing the snapshot disk group on a server other than the original application
server is called off-host processing. Off-host processing is mostly used to reduce
the burden on the application server and to improve performance. However, it can
be used for other purposes, such as security or even disaster recovery.
Note that a snapshot is a point-in-time copy and there may be multiple snapshots
of the same data taken at different times that are visible to the off-host server at the
same time.
Identification of clone disks before SF 5.0
Volume Manager uses the information in the private region of the disk to identify
which disk it is. When you initialize a disk, this private region is created and then
when you add the disk to a disk group and start using it for volumes, the disk
media name assigned to that disk is written in the header part of the private region
together with the name and ID of the disk group to which this disk belongs.
When you create a clone disk by copying everything on the original disk to another
disk, you also copy this information in the private region.
So, when the system boots up and Volume Manager tries to import the disk group,
it finds two disks claiming to be the same disk. Because the information in the
private regions is identical, Volume Manager has no way of knowing which one is
the original disk by looking only at the private region information.
Unique disk identifier
To overcome the problem of multiple disks with the same information in their
private regions, Volume Manager uses information from the Device Discovery
Layer (DDL) to identify each disk uniquely.
Starting with Storage Foundation 5.0 and later, a new attribute which is called a
unique disk identifier (or UDID) is introduced.
The unique disk identifier is initially defined by the Device Discovery Layer based
on the information received at the device level. Most of the time this identifier can
include information, such as the disk serial numbers, vendor names and so on.
Note that the unique disk identifier is different for each disk, and can be used to
identify each disk individually. When you initialize a disk to be used with Volume
Manager, the UDID attribute is written to the private region of the disk.
Using UDID to detect clone disks
If you use the dd command at the device level to copy one LUN to another from
beginning to end, you would be simulating the impact of taking a hardware
snapshot.
In the example displayed on the slide, disk_1 is copied to disk_7 using the dd
command. During the copy operation the private region of disk_1 is copied
together with the public region to disk_7.
This means that the UDID of disk_1 is also copied to the private region on disk_7.
However, this UDID attribute is different from the original UDID attribute defined
by the Device Discovery Layer for disk_7.
When Volume Manager detects that the UDID written in the private region does
not match the UDID originally assigned to the disk by the DDL, it sets the
udid_mismatch flag for that disk and decides that the disk is a clone disk.
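The detection rule itself is simple to express. The sketch below models it with shell variables; the UDID strings are invented for illustration (real values come from the DDL and the disk private region), and the comparison sets a udid_mismatch marker exactly when the private-region copy disagrees with what the DDL reports for the device.

```shell
# Invented UDIDs for illustration only.
ddl_udid="VENDOR%5FMODEL%5Fserial_disk_7"    # what the DDL derives for disk_7
priv_udid="VENDOR%5FMODEL%5Fserial_disk_1"   # copied from disk_1 by the dd

flags=""
# The rule: a disagreement between the two values marks the disk as a clone.
if [ "$ddl_udid" != "$priv_udid" ]; then
    flags="udid_mismatch"
fi
echo "disk_7 flags: $flags"
```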
Managing clone disks
How you manage the clone disks may be slightly different depending on whether
or not you want to import the clone disks to the same server as the original disks.
The first scenario described on this slide is where the clone disks are used on the
same host as the original disks.
Note that with some hardware snapshot technologies the target devices that are
used as the clone disks are not visible to the server until the snapshot is taken. In
this case you may see the status of the clone devices showing as error while they
are invisible to the host.
After the devices are made visible to the application server, Volume Manager
detects the UDID mismatch problem and immediately decides which disks are the
clone disks. It then changes the status of the clone disks from error to online with
udid_mismatch attribute set.
An example vxdisk list output is shown on the slide. In this example, disk_7
is used as the clone disk for disk_0, and disk_8 is used as the clone disk for disk_1.
Both disks are in the appdg disk group.
If you want to use the original disks to import the disk group, you do not need to
do anything differently. A normal disk group import, for example using the vxdg
import appdg command, always uses the original disks even when the clone
disks are visible.
You must explicitly specify that you want to use the clone disks during the
import operation in order to import the snapshot disk group.
Displaying clone disk information for imported disk groups
If there are many disks and disk groups on the system it may be difficult to display
the clone disk information using the vxdisk list output.
In this case, the vxdg listclone command can be used to display which
imported disk groups have clone disks associated with them. The example on the
slide shows two disk groups:
The appdg disk group with two clone disks
The oradg disk group with three clone disks
Note that the vxdg listclone command can only be used with an already
imported disk group. If neither the original nor the snapshot disk group is imported
on the server, the vxdisk list command must be used to display the clone disk
information.
Importing the cloned disk group on the same host: Scenario 1
The snapshot disk group can be imported on the same server as the original disk
group:
While the original disk group is already imported on the host
After deporting the original disk group
If the original disk group is deported, specify the -o useclonedev=on option
to the vxdg import command to import the disk group using the clone disks.
This option limits the import operation to disks that have either the udid_mismatch
flag or the clone_disk flag set. This way you ensure that the original disks are not
used during the import operation even if they are visible to the host and available
for import. Note that the first time you import the snapshot disk group using the
clone devices, the clone_disk flag is automatically set on each disk with the
udid_mismatch flag on.
While importing the snapshot disk group, the -o updateid option can be used
to update the IDs written in the private region of the clone disks. This option
enables the update of the mismatched UDIDs as well as other IDs automatically
assigned by Volume Manager, such as the disk ID, the disk group ID, and so on.
When you use this option the udid_mismatch flag is automatically turned off. Note
that after the group ID is updated, the snapshot group is no longer detected as the
same group as the original disk group by Volume Manager even if they use the
same name.
The example vxdisk list output on the slide displays how the clone disks
would be observed on the host after the vxdg import command is executed
using the two options introduced on this slide.
Note that subsequent imports of the same snapshot disk group need to be
performed using the -o useclonedev=on option but without the
-o updateid option.
Here is some additional information describing different possibilities:
If you execute vxdg -o useclonedev=on import appdg without
specifying the -o updateid option, the snapshot disk group is imported and
the clone_disk flag is set, but the IDs are not updated. So the udid_mismatch
flag is still on.
It is recommended to update the IDs during the snapshot disk group import.
Otherwise, the original disk group cannot be imported while the snapshot disk
group is imported on the host, even if a different name is assigned during the
import operation. Volume Manager does not allow the import of two disk
groups with identical group IDs even if the disk groups have different names.
The -o updateid option is ignored if it is used without the
-o useclonedev=on option.
Importing the cloned disk group on the same host: Scenario 2
If the original disk group is already imported on the application server, use the
following command to import the snapshot disk group:
vxdg -n newdgname -o useclonedev=on -o updateid import \
diskgroup
Note that the -o updateid switch is no longer optional if the original disk
group is already imported on the system.
A different disk group name must also be assigned to the snapshot disk group
using the -n option as displayed on the slide. Note that you can add the -t switch
to the vxdg import command to make this name assignment temporary. In this
case, the snapshot disk group is renamed back to the original disk group name
when it is deported.
The example vxdisk list output on the slide displays both the original disk
group and its snapshot imported at the same time under different names.
Using clone disks for off-host processing
If the off-host server has no visibility of the original disk group, not even as a
deported disk group, then there are no two disks that have the same UDID written
in their private region. Although Volume Manager on the off-host server detects
the UDID mismatch on the clone disks, it does not have any conflicts because the
original disks are not visible.
In this case, a standard vxdg import command on the off-host server would use
the clone devices to import the disk group because there are no other devices
available for the import operation.
If you import the disk group using the standard vxdg import command without
any additional options, the flags on the disks are not changed. This means that the
udid_mismatch flag is not turned off and the clone_disk flag is not set on the clone
disks.
The example vxdisk list output on the slide displays the status of the clone
devices after the snapshot disk group is imported on the off-host server.
Turning off the clone disk flag
To unset the udid_mismatch flags, use the -o useclonedev=on and
-o updateid options with the vxdg import command on the off-host server
to update all the IDs and to set the clone_disk flag during the snapshot disk group
import.
The example vxdisk list output is displayed on the slide.
In some environments, the snapshot disk group is used to take a copy of the
application data. After the snapshot is taken, the disks are used as a normal
disk group and not as clone devices. In this case, turn off the clone_disk flags that
were automatically set when the snapshot disk group was imported:
vxdisk -g diskgroup set disk_media_name clone=off
After you turn off the clone_disk flags, Volume Manager no longer detects the
disks as clone devices.
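Clearing the flag for every disk in the group can be scripted the same dry-run way. The disk media names below are hypothetical; the loop only prints the vxdisk commands rather than executing them, so the output can be reviewed first.

```shell
dg="appdg"
# Hypothetical disk media names; on a live system, take them from the DISK
# column of the vxdisk listing for the disk group.
disks="appdg01 appdg02"

# Print (do not run) one clone=off command per disk in the group.
for dm in $disks; do
    echo "vxdisk -g ${dg} set ${dm} clone=off"
done
```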
Managing multiple clones of the same disk group
If multiple sets of clones are visible to the same off-host server, there are multiple
disks with the same UDID written in their private region even when the original
disks are not visible to the off-host server. In this case, Volume Manager is able to
detect that there are multiple clones in the environment, but it is not able to decide
which clone belongs to which snapshot taken at what time, because the hardware
snapshot operation takes place outside the control of Volume Manager. In this case,
you need to use a method called disk tagging, which was introduced with Storage
Foundation 5.0 and which enables users to assign arbitrary names or values to disks.
Disk tagging is discussed in detail in the next topic.
The following is an example process that describes how to manage multiple
snapshots of the same disk group:
1 In this example, the first snapshot is taken at 8:00 a.m. The array software
provides a list of devices used as target disks during this snapshot operation.
2 Tag all the clone disks used in the first snapshot with the same arbitrary name,
such as cloneat8am.
3 Then take another snapshot at a different point in time, for example at 1:00
p.m. Business requirements dictate when these snapshot operations need to be
performed.
4 Tag all the clone disks used in the 1pm snapshot with the same tag name, such
as cloneat1pm.
5 When you need to use the clone disks on the off-host server, import the
snapshot disk group that includes all the clone disks that have the same tag
name, for example either the cloneat8am tag or the cloneat1pm tag.
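The steps above reduce to a per-snapshot tagging loop. A dry-run sketch with invented device names (the echo keeps it side-effect free until the printed commands are reviewed and run):

```shell
# Devices reported by the array software for the 8:00 a.m. snapshot
# (invented names for illustration).
snap_disks="disk_7 disk_8"
tag="cloneat8am"

# Print one settag command per clone disk in this snapshot set.
for d in $snap_disks; do
    echo "vxdisk settag ${d} ${tag}"
done

# Later, import only the disks carrying that tag:
echo "vxdg -o useclonedev=on -o tag=${tag} import appdg"
```

The 1:00 p.m. snapshot set would be handled the same way with tag="cloneat1pm" and its own device list.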
Using disk tags
What are disk tags?
A disk tag is an arbitrary identification string specifically assigned to the disk by
the user; for example, appdg_8am.
Disk tags can be used for a number of purposes. For example:
To assign location information purely for information purposes
To assign site information in a disk group that spans multiple sites, such as in
remote mirroring
To identify a set of clone disks that should be used during a disk group import
operation as shown in the example on the slide
The -o tag=tagname option ensures that only the disks that have that specific
tag name assigned to them are used during the disk group import operation on the
host.
Note that disk tags are persistent, because the tags are saved in the private region
of the disk.
You can assign multiple tags to the same disk. For example, a clone disk may have
a tag identifying the snapshot time as well as another tag identifying its location or
site.
Assigning tags to disks
Tags can be assigned:
On an individual disk
On all disks in the same disk group
To assign a disk tag on a single disk, execute the vxdisk settag command as
shown on the slide for disk_7 and disk_8. Note that the disk access name is used in
this command to identify each device. The name of the tag in this example is
clone1.
If you want to tag all the disks in a disk group, you first need to import the disk
group. The second example on the slide shows the snapshot of appdg being
imported using clone devices.
After the disk group is imported, you can assign tags to all the disks in the disk
group using the vxdg -g diskgroup settag tagname command as
shown on the slide for appdg. When you assign a tag using this method, the tag is
written to the private region header on each disk as well as to the disk group
configuration database.
Note that if you tag the disk group instead of individual disks, any disks added to
the disk group are automatically tagged with the same tag name.
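Both tagging methods can be sketched together: tag each clone device individually, then tag an imported disk group so that future members inherit the tag. The disk and disk group names are hypothetical; DRYRUN defaults to echo so the commands are previewed, not executed.

```shell
# Hedged sketch: per-disk tagging with vxdisk settag, then group-level
# tagging with vxdg settag. Names are hypothetical; DRYRUN=echo previews.
DRYRUN=${DRYRUN:-echo}

tag_clone_disks() {
    for d in disk_7 disk_8; do
        $DRYRUN vxdisk settag clone1 "$d"   # tag one device at a time
    done
    $DRYRUN vxdg -g appdg settag clone1     # tag the whole (imported) group
}
tag_clone_disks
```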
Reserved tag names and tag values
Certain names cannot be used as tag names because they are used by Volume
Manager for other purposes. These reserved names are udid, site and vdid.
Tag names can be assigned optional values. These name-value pairs are used in
other areas within Volume Manager such as the remote mirroring capability which
uses the site tag with a value assigned to it as shown on the slide. The command
used to set the tag is exactly the same; the only difference is the value assignment
to the tag using the tagname=value pair instead of the tagname on its own.
Displaying tag information
There are several commands you can use on the system to display disk tagging
information.
If you want to display all the disks on the system that have a tag assigned, use the
vxdisk listtag command.
The example on the slide displays disk_1 and disk_2 tagged with the tag name of
original, and disk_7 and disk_8 tagged with the tag name of clone1. Note that none
of the tags have any values assigned to them in this example.
Alternatively, you can use the -v option with the vxdisk list accessname
command to display all the tags assigned to an individual disk. Note that tags are
sometimes referred to as annotations.
Finally, to verify whether or not the tags are assigned at the disk group level, you
can use the vxdg listtag command on all imported disk groups or on a
specific disk group as shown on the slide.
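When you need to act on every disk carrying a given tag, it can help to filter captured vxdisk listtag output in a script. The sample below mirrors the example described on the slide; the exact column layout of the real output may differ by release, so treat this as a sketch over a hypothetical capture.

```shell
# Hedged sketch: list the devices carrying a given tag from a saved
# `vxdisk listtag` capture. Sample data is hypothetical and shaped like
# the slide's example; adjust the awk fields to your release's output.
sample='DEVICE   NAME      VALUE
disk_1   original
disk_2   original
disk_7   clone1
disk_8   clone1'

disks_with_tag() {
    printf '%s\n' "$sample" | awk -v t="$1" 'NR > 1 && $2 == t { print $1 }'
}
disks_with_tag clone1
```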
Removing disk tags
When you take a hardware snapshot, the whole disk is copied onto the target LUN
including the private region. Because the disk tags are written to the private region
of each disk, if the original disks had any disk tags, they would be copied to the
target LUNs together with the rest of the private region information.
The copied disk tags may be misleading as shown in the example on the slide. In
this example, the original disks are tagged with a tag name of original.
When the copies are created, the clone disks continue to display the original tag
which is misleading and should be removed.
To remove a disk tag, you can use the vxdisk rmtag tagname
accessname command.
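Cleaning the misleading tag off a set of clone devices is typically a small loop. The disk names and the tag to remove are hypothetical; DRYRUN defaults to echo so the commands are previewed rather than executed.

```shell
# Hedged sketch: strip the copied "original" tag from each clone device,
# as recommended above. Names are hypothetical; DRYRUN=echo previews.
DRYRUN=${DRYRUN:-echo}

remove_stale_tags() {
    for d in disk_7 disk_8; do
        $DRYRUN vxdisk rmtag original "$d"
    done
}
remove_stale_tags
```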
Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions.
Lab App C: Optional Lab: Importing LUN Snapshots, page A-99
Appendix B provides complete lab instructions and solutions.
Lab App C: Optional Lab: Importing LUN Snapshots, page B-181
Appendix D
Managing the Boot Disk with SF
Placing the boot disk under VxVM control
What is encapsulation or conversion?
On Solaris, encapsulation is the process of converting partitions into volumes to
bring those partitions under VxVM control. On HP-UX, conversion is the process
of enabling LVM physical volumes to be used by VxVM. After a disk has been
encapsulated or converted, the disk is handled like an initialized disk.
Placing disks with data under VxVM control: Solaris
Encapsulation is the process of converting partitions into volumes to bring those
partitions under VxVM control. For example, if a system has three partitions on
the disk drive and you encapsulate the disk to bring it under VxVM control, there
will be three volumes in the disk group.
Solaris encapsulation requirements
Disk encapsulation cannot occur unless these requirements are met:
Partition table entries must be available on the disk for the public and private
regions. During encapsulation, you are prompted to select the disk layout. If
you choose a CDS disk layout, then only one partition is needed. However, if
encapsulation as a CDS disk fails, you can specify that a sliced layout be used
instead, in which case you will need two free partitions.
The disk must contain an s2 slice that represents the full disk (the s2 slice
cannot contain a file system).
65536 sectors of unpartitioned free space, rounded up to the nearest cylinder
boundary, must be available, either at the beginning or at the end of the disk.
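Because the 65536 sectors are rounded up to a cylinder boundary, the space actually consumed depends on the disk geometry. The following arithmetic sketch shows the rounding; the sectors-per-cylinder value is hypothetical, and on a real system you would read it from the disk geometry reported by prtvtoc.

```shell
# Hedged sketch: round the 65536-sector private-region requirement up to
# the next cylinder boundary. 4864 sectors/cylinder is a hypothetical
# geometry; take the real value from prtvtoc output for your disk.
round_to_cylinder() {
    sectors=$1
    spc=$2
    echo $(( ( (sectors + spc - 1) / spc ) * spc ))
}
round_to_cylinder 65536 4864
```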
Solaris: Encapsulating a data disk
vxdiskadm:
Encapsulate one or more disks
Follow the prompts by specifying:
Name of the device to add, and the name of the disk group to use
The disk format (cds or sliced) and the private region size
vxencap:
1 /etc/vx/bin/vxencap [-c] -g diskgroup accessname
Use -c if the specified disk group does not exist.
2 Run the following script:
Solaris 8 or 9: /etc/init.d/vxvm-reconfig accessname
Solaris 10: /lib/svc/method/vxvm-reconfig accessname
Solaris: Reversing the encapsulation process
Limitations:
There are no commands available to help with unencapsulation. (The
system disk is an exception to this.)
Unencapsulation should not be attempted if:
Volume layouts have been altered in any way (for example, hot
relocation).
Volumes have mirrors.
The disk has been used for parts of other volumes.
The partition table before encapsulation is stored in:
/etc/vx/reconfig.d/disk.d/device/vtoc
Follow this procedure to unencapsulate:
a Stop applications.
b Remove volumes on the disk and take the disk out of VxVM control.
c Re-create the partition table as provided in the stored vtoc file.
d Manually modify /etc/vfstab (if necessary).
e Reboot or manually mount the partitions and start applications.
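Step c of the procedure above, re-creating the partition table from the stored vtoc file, can be done on Solaris with fmthard. The device name is hypothetical, and DRYRUN defaults to echo so the command is previewed rather than executed against a real disk.

```shell
# Hedged sketch: restore the saved pre-encapsulation partition table.
# fmthard -s writes the VTOC described in the saved file back to the s2
# slice. Device name is hypothetical; DRYRUN=echo previews the command.
DRYRUN=${DRYRUN:-echo}

restore_vtoc() {
    dev=$1
    $DRYRUN fmthard -s /etc/vx/reconfig.d/disk.d/$dev/vtoc /dev/rdsk/${dev}s2
}
restore_vtoc c0t1d0
```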
Placing disks with data under VxVM control: HP-UX
Conversion is the process of enabling LVM physical volumes to be used by
VxVM. You can convert:
Unused physical volumes
Physical volumes in volume groups
HP-UX: Limitations of LVM conversion
LVM configurations that you cannot convert to VxVM include:
A volume group with insufficient space for metadata
A volume group containing the root volume
A volume group containing the /usr file system
A volume group with any dump or primary swap volumes
A volume group disk used in ServiceGuard clusters
A volume group with any disks that have bad blocks
HP-UX: Converting unused physical volumes
1 View group membership information to ensure that there is no data on the
LVM disk:
pvdisplay device_name
pvdisplay /dev/dsk/c4t1d0
2 Remove LVM disk information:
pvremove device_name
pvremove /dev/dsk/c4t1d0
3 Initialize the disk for VxVM use.
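The three steps above can be collected into one sketch. The device name is hypothetical, and step 3 is shown here with vxdisksetup -i as one way to initialize the disk for VxVM; DRYRUN defaults to echo so nothing is executed until you clear it on a real system.

```shell
# Hedged sketch of the unused-PV conversion procedure on HP-UX.
# Device name is hypothetical; DRYRUN=echo previews the commands.
DRYRUN=${DRYRUN:-echo}

convert_unused_pv() {
    $DRYRUN pvdisplay /dev/dsk/c4t1d0   # 1: confirm there is no data/group membership
    $DRYRUN pvremove /dev/dsk/c4t1d0    # 2: remove the LVM disk information
    $DRYRUN vxdisksetup -i c4t1d0       # 3: initialize the disk for VxVM use
}
convert_unused_pv
```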
HP-UX: Converting volume groups
vxvmconvert
Volume Manager Support Operations
Menu: Volume Manager/LVM_Conversion
1 Analyze LVM Volume Groups for Conversion
2 Convert LVM Volume Groups to VxVM
3 Roll back from VxVM to LVM
list List disk information
listvg List LVM Volume Group information
? Display help about menu
?? Display help about the menuing system
q Exit from menus
Select an operation to perform:
HP-UX: Conversion process (LVM to VxVM)
1 Identify volume groups.
2 Analyze volume groups.
3 Back up LVM data.
4 Plan for new names.
5 Stop applications.
6 Unmount the file system.
7 Convert volume groups.
8 Make name changes.
9 Restart applications.
10 Customize the configuration.
HP-UX: Restoring LVM volume group configuration
Roll Back Using vxvmconvert
Full LVM Restoration for the Volume Group
HP-UX: Roll back using vxvmconvert
Option 3, roll back from VxVM to LVM:
Select Volume Group(s) to rollback:
[<pattern-list>,all,list,listvg,q,?] vg08
Roll back this Volume Group? [y,n,q,?] (default: y)
Rolling back LVM configuration records for Volume Group
vg08
Selected Volume Groups have been restored.
Hit RETURN to continue.
Rollback other LVM Volume Groups? [y,n,q,?] (default: n)
HP-UX: Full LVM restoration for the volume group
To restore LVM internal data:
mkdir /dev/vol_group_name
mknod /dev/vol_group_name/group c 64 0x080000
vxdg destroy diskgroup
For each disk in the volume group:
vxdiskunsetup accessname
vgcfgrestore -F -f pathname/filename raw_device_name
vgimport -s -m pathname/mapfilename vol_grp_name \
raw_device_name
vgchange -a y vol_group_name
To restore user or application data:
mount -F fstype /dev/vol_group_name/lvname /mount_point
frecover -r -f /dev/rmt/c0t0d0BEST
What is rootability?
Rootability is the ability to place the root file system, swap device, and other file
systems on the boot disk under VxVM control.
Solaris
On Solaris, VxVM converts existing partitions of the boot disk into VxVM
volumes. The system can then mount the standard boot disk file systems (that is,
/, /usr, and so on) from volumes instead of disk partitions.
Boot disk encapsulation has the same requirements as data disk encapsulation, but
requires two free partitions (for the public and private regions). When
encapsulating the boot disk, you can create the private region from the swap area,
which reduces the swap area by the size of the private region, if there is no other
free space available at the beginning or the end of the boot disk. In this case, the
private region is created at the beginning of the swap area, and the swap partition
begins one cylinder from its original location. If there is enough space available at
the end of the boot disk, the private region is placed at the end.
HP-UX
On HP-UX, rootability is carried out by creating a copy of the system disk on
another VxVM disk.
Why place the boot disk under VxVM control?
It is highly recommended that you encapsulate and mirror the boot disk. Some of
the benefits of encapsulating and mirroring root include:
High availability
Encapsulating and mirroring root sets up a high availability environment for
the boot disk. If the boot disk is lost, the system continues to operate on the
mirror disk.
Bad block revectoring
If the boot disk has bad blocks, then VxVM reads the block from the other disk
and copies it back to the bad block to fix it. SCSI drives automatically fix bad
blocks on writes, which is called bad block revectoring.
Improved performance
By adding additional mirrors with different volume layouts, you can achieve
better performance. Mirroring alone can also improve performance if the root
volumes are performing more reads than writes, which is the case on many
systems.
When not to encapsulate the boot disk
If you do not plan to mirror the boot disk, then you should not encapsulate it.
Encapsulation adds a level of complexity to system administration, which
increases the complexity of upgrading the operating system.
Limitations of the VxVM boot disk
A system cannot boot from a boot disk that spans multiple devices.
You should never change the layout of boot volumes. No volume associated with
an encapsulated boot disk (rootvol, usr, var, opt, swapvol, and so on) should be
expanded or shrunk using vxresize, because these volumes map to a physical
underlying partition on the disk and must be contiguous.
If you attempt to expand these volumes using vxresize, the system can become
unbootable if it becomes necessary to revert to slices in order to boot the
system. Expanding these volumes can also prevent a successful OS upgrade, and a
fresh install may be required.
Note that Storage Foundation 5.0 MP3 and later introduce a new command to
grow volumes on a boot disk. The vxrootadm command is discussed later in this
lesson.
Note: You can add a mirror of a different layout, but the mirror is not bootable.
File system requirements for root volumes: Solaris only
To boot from volumes, follow these requirements and recommendations for the
file systems on root volumes:
For the root, usr, var, and opt volumes:
Use UFS file systems: You must use UFS file systems for these volumes,
because the Veritas File System (VxFS) package is not available until later in
the boot process when the scripts for multiuser mode are executed.
Use contiguous disk space: These volumes must be located in a contiguous
area on disk, as required by the OS. For this reason, these volumes cannot use
striped, RAID-5, concatenated mirrored, or striped mirrored layouts.
Do not use dirty region logging for root or usr: You cannot use dirty region
logging (DRL) on the root and usr volumes. If you attempt to add a dirty region
log to the root and usr volumes, you receive an error.
Note: The opt and var volumes can use dirty region logging.
Swap space considerations: If you have swap defined, then it needs to be
contiguous disk space. The first swap volume (as listed in the /etc/vfstab
file) must be contiguous and, therefore, cannot use striped or layered layouts.
Additional swap volumes can be noncontiguous and can use any layout.
Note: You can add noncontiguous swap space through VxVM. However,
Solaris automatically uses swap devices in a round-robin method,
which may reduce expected performance benefits of adding striped
swap volumes.
Usable space per swap device is up to 2^63 - 1 bytes.
Volume requirements: HP-UX
All volumes on the root disk must be in the disk group that you choose to be the
bootdg disk group.
The names of the volumes with entries in the LIF LABEL record must be standvol,
rootvol, swapvol, and dumpvol (if present). The names of the volumes for other
file systems on the root disk are generated by appending vol to the name of their
mount point under /.
Any volume with an entry in the LIF LABEL record must be contiguous. The
volume can have only one subdisk, and it cannot span to another disk.
The rootvol and swapvol volumes must have the special volume usage types, root
and swap respectively.
Only the disk access types auto with hpdisk and simple formats are suitable for use
as VxVM root disks, root disk mirrors, or as hot-relocation spares for such disks.
The volumes on the root disk cannot use dirty region logging (DRL).
Before placing the boot disk under VxVM control
Plan your rootability configuration. bootdg is a system-wide reserved disk
group name that is an alias for the disk group which contains the volumes that are
used to boot the system. When you place the boot disk under VxVM control,
VxVM sets bootdg to the appropriate disk group. You should never attempt to
change the assigned value of bootdg; doing so may render your system unbootable.
An example configuration is to place the boot disk into a disk group named sysdg,
and add at least two more disks to the disk group: one for a boot disk mirror and
one as a spare disk. VxVM then sets bootdg to sysdg.
Solaris
Enable boot disk aliases. On Solaris, before encapsulating your boot disk, set the
EEPROM variable use-nvramrc? to true. This enables VxVM to take advantage of
boot disk aliases to identify the mirror of the boot disk if a replacement is needed.
If this variable is set to false, you must determine which disks are bootable
yourself.
On Solaris, set this variable to true as follows:
eeprom use-nvramrc?=true
Save the layout of partitions before you encapsulate the boot disk. For
example, on Solaris, you can use the prtvtoc command to record the layout of
the partitions on the unencapsulated boot disk (/dev/rdsk/c0t0d0s2 in this
example):
prtvtoc /dev/rdsk/c0t0d0s2
Record the output from this command for future reference.
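The two preparation steps above can be sketched together. The device name is hypothetical; DRYRUN defaults to echo so the commands are previewed, and on a real system you would redirect the prtvtoc output to a safe file for future reference.

```shell
# Hedged sketch: pre-encapsulation preparation on Solaris.
# Device name is hypothetical; DRYRUN=echo previews the commands.
DRYRUN=${DRYRUN:-echo}

prep_for_encapsulation() {
    $DRYRUN eeprom 'use-nvramrc?=true'      # enable boot disk aliases
    $DRYRUN prtvtoc /dev/rdsk/c0t0d0s2      # record the partition layout
}
prep_for_encapsulation
```

On a real system, capture the layout somewhere durable, for example: prtvtoc /dev/rdsk/c0t0d0s2 > /var/tmp/vtoc.c0t0d0.before-encap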
Placing the boot disk under VxVM control
Encapsulating the boot disk: Solaris
vxdiskadm on Solaris
You can use vxdiskadm for encapsulating data disks as well as the boot disk. To
encapsulate the boot disk:
1 From the vxdiskadm main menu, select the Encapsulate one or more disks
option.
2 When prompted, specify the disk device name of the boot disk. If you do not
know the device name of the disk to be encapsulated, type list at the prompt
for a complete listing of available disks.
3 When prompted, specify the name of the disk group to which the boot disk will
be added. The disk group does not need to already exist.
4 When prompted, accept the default disk name and confirm that you want to
encapsulate the disk.
5 If you are prompted to choose whether the disk is to be formatted as a CDS
disk that is portable between different operating systems, or as a nonportable
sliced disk, then you must select sliced. Only the sliced format is suitable for
use with root, boot, or swap disks.
6 When prompted, select the default private region size. vxdiskadm then
proceeds to encapsulate the disk.
7 A message confirms that the disk is encapsulated and states that you should
reboot your system at the earliest possible opportunity.
vxencap on Solaris
The vxencap script identifies any partitions on the specified disk that could be
used for file systems or special areas such as swap devices, and then generates
volumes to cover those areas on the disk.
If the file system is unmounted, no reboot is necessary. If the file system is
mounted, the system needs to be rebooted for the encapsulation to complete.
For more specific information on using the vxencap command, see the manual
pages.
Creating a VxVM boot disk from an LVM boot disk: HP-UX
The vxcp_lvmroot command sets up a VxVM root disk.
The command should be executed at init level 1 (single user mode).
A bootable mirror can also be created at the same time.
A user-specified disk is initialized as a VxVM root disk with the rootdisk##
disk name.
The following example shows how to set up a VxVM root disk on c0t1d0 and its
mirror on c1t4d0:
/etc/vx/bin/vxcp_lvmroot -g dg -m c1t4d0 -v -b c0t1d0
This process can be accomplished in two steps, as follows:
/etc/vx/bin/vxcp_lvmroot -g dg -v -b c0t1d0
/etc/vx/bin/vxrootmir -g dg -v -b c1t4d0
-v: verbose
-b: set primary and alternate boot paths to the given devices
After placing the boot disk under VxVM control
Viewing encapsulated disks
To better understand encapsulation of the boot disk, you can examine operating
system files for the changes made by the VxVM root encapsulation process.
Solaris
After encapsulating the boot disk, if you view the VTOC, you notice that Tag 14 is
used for the public region, and Tag 15 is used for the private region. The partitions
for the root, swap, usr, and var partitions are still on the disk, unlike on data disks
where all partitions are removed. The boot disk is a special case, so the partitions
are kept. The following is an example VTOC of an encapsulated system disk:
As part of the root encapsulation process, the /etc/system file is updated to
include information that tells VxVM to boot up on the encapsulated volumes:
rootdev:/pseudo/vxio@0:0
set vxio:vol_rootdev_is_volume=1
VxVM also updates the /etc/vfstab file to mount volumes instead of
partitions.
Linux
After you encapsulate the boot disk, you can view the changes in the
/etc/fstab file.
Creating an alternate boot disk
Creating an alternate boot disk: vxdiskadm
1 Select a disk that is at least as large as the boot disk, and add the disk to the
boot disk group.
2 In the vxdiskadm main menu, select the Mirror volumes on a disk option.
3 When prompted, specify the name of the disk containing the volumes to be
mirrored (that is, the name of the boot disk).
4 When prompted, specify the name of the disk to which the boot disk will be
mirrored.
5 A summary of the action is displayed, and you are prompted to confirm the
operation.
6 After the root mirror is created, verify that the root mirror is bootable.
Creating an alternate boot disk: CLI for Solaris
1 Select a disk that is at least as large as the boot disk, initialize the alternate boot
disk using the sliced format and add it to the boot disk group.
vxdisksetup -i alternate_accessname format=sliced \
[privlen=size][noreserve]
If the alternate boot disk is identical to the original boot disk and the original
boot disk has been encapsulated using a private length size smaller than the
default, you may need to use the privlen option to specify a smaller size for
the private region and the noreserve option to prevent space being reserved
at the beginning of the disk for CDS conversion.
vxdg -g bootdg adddisk \
alternate_dm_name=alternate_accessname
2 To mirror the primary boot disk to the alternate boot disk, type:
vxmirror boot_dm_name alternate_dm_name
3 After the root mirror is created, verify that the root mirror is bootable.
To create a mirror for the root volume only, you can use the vxrootmir
command:
vxrootmir alternate_dm_name
where alternate_dm_name is the disk media name assigned to the other disk.
vxrootmir invokes vxbootsetup (which invokes installboot), so that the
disk is partitioned and made bootable. (The process is similar to using vxmirror
and vxdiskadm.)
Other volumes on the boot disk can be mirrored separately using vxassist. For
example, if you have a /home file system on a volume homevol, you can mirror it
to alternate_dm_name using the command:
vxassist mirror homevol alternate_dm_name
If you do not have space for a copy of some of these file systems on your alternate
boot disk, you can mirror them to other disks. You can also span or stripe these
other volumes across other disks attached to your system.
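Mirroring the remaining boot-disk volumes one by one, as described above, is naturally a loop. The volume names and the alternate disk media name (sysdg02) are hypothetical; DRYRUN defaults to echo so the commands are previewed rather than executed.

```shell
# Hedged sketch: mirror each remaining boot-disk volume to the alternate
# disk, per the vxassist form shown above. Volume and disk media names
# are hypothetical; DRYRUN=echo previews the commands.
DRYRUN=${DRYRUN:-echo}

mirror_remaining_volumes() {
    for vol in homevol optvol; do
        $DRYRUN vxassist mirror "$vol" sysdg02
    done
}
mirror_remaining_volumes
```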
Creating an alternate boot disk: CLI for HP-UX
To mirror the system disk:
vxrootmir -v -b alternate_accessname
Alternatively, to set up the system boot information and to mirror individual
volumes manually:
vxdisksetup -iB alternate_accessname format=hpdisk
vxdg -g bootdg adddisk rootdisk##=alternate_accessname
vxassist -g bootdg mirror standvol dm:rootdisk##
...
vxvmboot -v /dev/rdsk/alternate_accessname
Possible boot disk errors
Root plex is stale or unusable
vxvm:vxconfigd: Warning: Plex rootvol-01 for root
volume is stale or unusable
System startup failed
vxvm:vxconfigd: ERROR: System startup failed
System boot disk does not have a valid root plex
vxvm:vxconfigd: ERROR: System boot disk does not have a
valid root plex
Please boot from one of the following disks:
Disk: dm_name Device: device ...
In the third message, alternate boot disks containing valid root mirrors are listed as
part of the error message. Try to boot from one of the disks named in the error
message.
You may be able to boot using a device alias for one of the named disks. For
example, use this command on the Solaris platform:
ok> boot vx-dm_name
Booting from an alternate mirror: Solaris
If the boot disk is encapsulated and mirrored, you can use one of its mirrors to boot
the system if the primary boot disk fails.
Booting from an alternate mirror: HP-UX
To boot the system using an alternate boot disk after failure of the primary boot
disk:
1 Interrupt the automatic boot process:
To discontinue, press any key within 10 seconds.
2 Check the alternate boot disk path:
Main Menu: Enter command or menu > co alt pa
Verify that the alternate disk path is the one you want to boot from. If not, set it
using the co alt pa path command.
3 Boot using the alternate disk path:
Main Menu: Enter command or menu > bo alt
Interact with IPL (Y, N, or Cancel)? n
Administering the boot disk
Storage Foundation 5.0 MP3 and later introduce the capability of performing the
following administrative tasks on an encapsulated boot disk:
Creating bootable snapshots of the boot disk (Solaris and Linux only)
Growing boot disk volumes (Solaris only)
Creating boot disk snapshots
It is recommended to protect against system disk failures using an alternate boot
disk which contains the mirror copies of all the volumes on the boot disk.
However, the alternate boot disk is an exact copy of the boot disk volumes at all
times and can be corrupted together with the original boot disk due to application
or human errors. For example, if a system administrator with root privileges typed
rm -r * under the root file system, both the original boot disk and its mirror
would be corrupted.
A boot disk snapshot is a point-in-time copy of the boot disk created using the
vxrootadm mksnap command as shown on the slide. After it is created, it is no
longer modified with any further changes that may happen on the original boot
disk. The boot disk snapshot is also useful as a boot disk backup before
implementing any major upgrades on the boot disk.
Note: If the target disk is already added to the source disk group, you can use the
disk media name as the targetdisk parameter when you execute the
command. Otherwise, use the device name for the target disk (including the
s2 slice on the Solaris platform).
It is important to ensure that the disk that will be used for the snapshot is as large
or larger than the encapsulated boot disk. If for any reason the available space on
the disk is less, the vxrootadm command will fail with the following error:
VxVM vxrootadm ERROR V-5-2-4856 Not enough capacity on
disk c0t2d0s2
The following extracts from an example output illustrate what kind of operations
the vxrootadm mksnap command performs:
vxrootadm -v -g sysdg mksnap c0t2d0s2 bksysdg
VxVM vxrootadm INFO V-5-2-4828 Adding target disk to group
...
VxVM vxrootadm INFO V-5-2-4837 Creating mirrors ...
# vxassist -g sysdg snapstart swapvol
layout=contig,diskalign sysdg01-s0
...
VxVM vxrootadm INFO V-5-2-4829 Bootsetup sysdg sysdg01-s0
...
VxVM vxrootadm INFO V-5-2-4830 Breaking off mirrors ...
# vxassist -g sysdg snapshot rootvol swapvol usr var
VxVM vxrootadm INFO V-5-2-4865 Splitting disk group
...
VxVM vxrootadm INFO V-5-2-4866 Starting new boot volumes
...
VxVM vxrootadm INFO V-5-2-4853 Modifying volumes utypes
VxVM vxrootadm INFO V-5-2-4853 Modifying volumes utypes
VxVM vxrootadm INFO V-5-2-4854 Mounting root volume
...
VxVM vxrootadm INFO V-5-2-4848 Modifying dump device
VxVM vxrootadm INFO V-5-2-4851 Modifying vfstab
VxVM vxrootadm INFO V-5-2-4852 Modifying volboot
VxVM vxrootadm INFO V-5-2-4846 Making dev nodes
VxVM vxrootadm INFO V-5-2-4845 Making dev links
VxVM vxrootadm INFO V-5-2-4872 Unmounting root volume
...
VxVM vxrootadm INFO V-5-2-4894 New boot disk created successfully.
VxVM vxrootadm INFO V-5-2-4893 Devalias for the new boot disk: vx-sysdg01-s0
Administering boot disk mirrors
With SF 5.1 SP1 and later, you can use the vxrootadm command to administer
mirrors of an encapsulated system disk as shown on the slide. This command
automates system disk mirror operations, such as adding or removing a mirror and
splitting an existing mirror as a backup. Note that the target disk must be large
enough to hold the system disk mirror.
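These operations can be sketched with commands like the following; the device name, disk group name, and exact subcommand syntax are illustrative assumptions and should be verified against the vxrootadm(1M) manual page for your platform and release:

```sh
# Add a mirror of the encapsulated system disk on a target disk
# (c0t2d0s2 is a hypothetical device name)
vxrootadm -v addmirror c0t2d0s2

# Remove a previously added system disk mirror
vxrootadm -v rmmirror c0t2d0s2

# Split off an existing mirror into its own disk group as a backup,
# and later rejoin it (bksysdg is a hypothetical disk group name)
vxrootadm -v split c0t2d0s2 bksysdg
vxrootadm -v join bksysdg
```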
Growing the boot disk volumes
With SF 5.0 MP3 and later on the Solaris platform, you can use the vxrootadm
grow command to grow the sizes of the volumes on an encapsulated boot disk as
shown on the slide. The target disk specified needs to be large enough to
accommodate all the volumes on the boot disk and the additional sizes for growth.
Before growing the volumes on a boot disk, determine which volumes exist on the
boot disk and what their desired new sizes are. Keep the following key points in
mind:
- The current boot disk or an alternate boot disk can be specified as the source disk.
- If the current boot disk is specified, multiple reboots are required to complete the grow operation. You need to type vxrootadm grow continue after each reboot.
- The source and target disks cannot be the same disk.
- It is recommended to create a backup (snapshot) of the boot disk before attempting the grow operation.
- If the target disk is already a part of the same disk group as the boot disk, it must be a sliced disk. CDS disks are not supported; if the target is a CDS disk, the command fails, and the target disk needs to be removed from the disk group and reinitialized.
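A grow operation might look like the following sketch; the target device, volume names, and sizes are hypothetical, and the syntax should be confirmed against the vxrootadm(1M) manual page:

```sh
# Grow rootvol and var while relocating the boot volumes to c0t3d0s2
# (device name and sizes are examples only)
vxrootadm -v grow c0t3d0s2 rootvol=60g var=20g

# If the current boot disk is the source, the operation spans several
# reboots; resume it after each reboot with:
vxrootadm grow continue
```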
The vxrootadm integration in CPI
With the 6.0 version of the CPI installation utilities, the installer uses the
vxrootadm utility to create a bootable backup of the system disk, containing the
old software versions, before upgrading the SF software on it.
Prior to 6.0, users who wanted a copy of the old system disk as a back-out path
from the upgrade had to split the system disk mirror manually. The split and
join functions of the vxrootadm utility were introduced in SF 5.1 SP1. With 6.0,
these functions have been integrated into the installation scripts.
Note that this functionality is available only on the Solaris and Linux
platforms and only for the upgrade paths displayed on the slide.
Removing the boot disk from VxVM control
The vxunroot command: Solaris
To convert the root file systems back to being accessible directly through disk
partitions instead of through volume devices, use the vxunroot utility. The
other changes that were made to enable booting from the root volume are also
removed, so that the system boots with no dependency on VxVM.
For vxunroot to work properly, all but one plex of rootvol, swapvol, usr, var,
opt, and home must be removed (using vxedit or vxplex). If this condition is not
met, the vxunroot operation fails, and the volumes are not converted back to
disk partitions.
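A minimal preparation sequence might look like the following; the plex names (such as rootvol-02) are hypothetical and must be taken from the vxprint output on your system:

```sh
# Identify the mirror plexes of the boot volumes
vxprint -g bootdg -ht rootvol swapvol

# Dissociate and remove the extra plexes, leaving one plex per volume
vxplex -g bootdg -o rm dis rootvol-02 swapvol-02

# Convert the remaining boot volumes back to disk partitions
/etc/vx/bin/vxunroot
```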
When to use vxunroot
Use vxunroot when you need to:
- Boot from physical system partitions.
- Change the size or location of the private region on the boot disk.
Unencapsulating the boot disk: Solaris
The vxunroot command changes the volume entries in /etc/vfstab to the
underlying disk partitions for the rootvol, swapvol, usr, and var volumes. The
command also modifies /etc/system and prompts for a reboot so that disk
partitions are mounted instead of volumes for the root, swap, usr, and var volumes.
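As an illustration, the root entry in /etc/vfstab might change as follows; the device names are hypothetical:

```
# Before vxunroot: root mounted on the VxVM volume
/dev/vx/dsk/bootdg/rootvol /dev/vx/rdsk/bootdg/rootvol / ufs 1 no -

# After vxunroot: root mounted on the underlying disk partition
/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0 / ufs 1 no -
```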
Creating an LVM boot disk from a VxVM boot disk: HP-UX
After booting to single user mode using a VxVM boot disk, you can completely
remove the LVM boot disk if desired:
/etc/vx/bin/vxdestroy_lvmroot -v c0t1d0
If you choose to keep the LVM disk, ensure that you update it with any changes to
the system disk.
This example shows how to create an LVM root disk on the c0t1d0 physical disk
after removing the existing LVM root disk configuration from that disk.
/etc/vx/bin/vxdestroy_lvmroot -v c0t1d0
/etc/vx/bin/vxres_lvmroot -v c0t1d0
If you want to take the boot disk completely out of VxVM control, you then need
to boot from the LVM root disk and use Volume Manager commands to remove
the old boot disk from VxVM control.
Labs and solutions for this lesson are located on the following pages:
Appendix A provides complete lab instructions.
Lab App D: Optional Lab: Managing the Boot Disk with SF, page A-103
Appendix B provides complete lab instructions and solutions.
Lab App D: Optional Lab: Managing the Boot Disk with SF, page B-189
Appendix E
Using the VEA for Administrative Operations
Administering Volume Manager with VEA
Administering point-in-time copies with VEA
Administering SmartTier and multi-volume file systems
Disk encapsulation with VEA
Index
Files and Directories
/etc/system D-29
/etc/vfstab D-29
A
alternate boot disk
creating D-18
creating in CLI D-18, D-19
creating in vxdiskadm D-18
alternate mirror
booting D-21
B
bad block revectoring D-9
boot disk D-8
creating a snapshot D-22
creating an alternate D-18
creating an alternate in CLI D-18, D-19
creating an alternate in vxdiskadm D-18
encapsulating in vxdiskadm D-14
growing D-26
mirroring in vxdiskadm D-18
unencapsulating D-28
boot disk encapsulation D-8
effect on /etc/system D-16
effect on /etc/vfstab D-17
file system requirements D-11
planning D-13
using vxdiskadm D-14, D-15
viewing D-16
boot disk errors D-20
boot mirror verification D-21
bootdg D-13
booting from alternate mirror D-21
C
clone_disk flag C-10
turning off C-14
creating an alternate boot disk D-18
D
dirty region logging D-11
disk tag C-16
assigning C-17
listing C-19
removing C-20
reserved tag names C-18
E
eeprom D-13
encapsulating root
benefits D-9
limitations D-9
encapsulation
effect on /etc/system D-16
unencapsulating a boot disk D-28
H
hardware snapshot C-3
high availability D-9
I
installboot D-19
L
LUN snapshot C-3
clone disk identification C-5, C-7
displaying clone disk information C-9
importing C-10, C-12
multiple clone sets C-15
off-host processing C-13
M
mirror
booting from alternate D-21
mirroring the boot disk
errors D-20
vxdiskadm D-18
O
opt D-10, D-11
P
partitions
after encapsulation D-16
prtvtoc D-13
R
revectoring D-9
root D-11, D-16
root encapsulation
benefits D-9
limitations D-10
root plex
errors D-20
rootability D-8
rootvol D-10
S
swap D-8, D-16
swapvol D-10
T
tag 14 D-16
tag 15 D-16
U
UDID C-6
udid_mismatch C-7, C-10
UFS D-11
unique disk identifier C-6
usr D-10, D-11, D-16
V
var D-10, D-11, D-16
VTOC D-16, D-17
vxassist mirror D-19
vxbootsetup D-19
vxdisk listtag C-19
vxdiskadm
creating an alternate boot disk D-18
encapsulating the boot disk D-14
vxencap
encapsulating the boot disk D-15
VxFS D-11
vxmirror D-19
vxrootadm grow D-26
vxrootadm mksnap D-22
vxrootmir D-19
vxunroot D-28