and 11i v3
July 2010
Table of Contents
Introduction
Basic rehosting
Installing required software
Creating the DRD clone
Creating the system information file
General layout of the system information file
Using the SYSINFO_INTERACTIVE parameter
Setting the SYSINFO_HOSTNAME parameter
Setting other parameters for the entire system
The SYSINFO_PROCESSED parameter
Identifying a network interface
Managing a network interface with DHCP
Using static parameters to configure a network interface
Additional static parameters for a network interface
Sample system information files
Using drd rehost to copy the system information file to the EFI partition
Using drd status to check for EFI/HPUX/SYSINFO.TXT
Using drd unrehost to review contents of EFI/HPUX/SYSINFO.TXT
Booting the image with SYSINFO.TXT on the target system
Limitations and recommendations for the initial release of drd rehost
Using drd rehost to provision a new BL870C Blade with HP-UX 11i v3
Assumptions
Steps for provisioning the new Blade
Using drd rehost to provision Integrity VMs
Assumptions
Special considerations for HP-UX 11i v3 prior to Update 3 (March 2008 or earlier)
HP-UX 11i v2 - Special considerations
Special considerations for avoiding reboots of Integrity VMs
Steps for provisioning the new VM
Glossary
For more information
Call to action
Introduction
With the introduction of Dynamic Root Disk (DRD) revisions B.1131.A.3.2.1949 for HP-UX 11i v3
(11.31), and the subsequent revision B.1123.A.3.3.221 for HP-UX 11i v2 (11.23), system
administrators can boot a DRD clone on a system other than the one where it was created. This
capability is referred to as rehosting and can be utilized on systems with an LVM-managed,
Itanium®-based copy of HP-UX 11i v2 or 11i v3. Rehosting enables a number of new uses for DRD
clones. In this paper, we focus on the use of DRD clones to provision new systems, specifically
Itanium blades running HP-UX 11i v3 and Integrity Virtual Machines running HP-UX 11i v2 or 11i v3.
The basis for both provisioning scenarios is DRD rehosting. Before addressing each of them, we give
an overview of the basic steps used in rehosting a system image.
For details of commonly used terms in this white paper, see the Glossary.
Note:
The path to DRD commands is: /opt/drd/bin
Basic rehosting
The steps common to all rehosting scenarios are described in the subsections that follow.
Installing required software
To perform these steps, minimal revisions of Dynamic Root Disk and auto_parms(1M), delivered
in the SystemAdmin.FIRST-BOOT fileset, are required.
1. HP recommends that customers install the latest web release or latest media release of DRD
(HP software product DynRootDisk) that is available. The most recent version of DRD is
available (along with any dependencies) from the DRD downloads webpage, where
download instructions are provided.
Support for the drd rehost command was introduced for HP-UX 11i v3 (11.31) in version
B.1131.A.3.2.1949 of DRD and for HP-UX 11i v2 in version B.1123.A.3.3.221 of DRD.
Version B.1131.A.3.2.1949 is available on media with all the Operating Environments (OEs)
released in September 2008 or later. Version B.1123.A.3.3.221 is available on the
Application Releases media for March 2009 or later.
2. Support for auto_parms processing of system information was introduced for HP-UX 11i v3
(11.31) in the PHCO_36525 patch, and for HP-UX 11i v2 (11.23) in the PHCO_38232
patch. The appropriate patch that enhances auto_parms is automatically downloaded and
installed when DRD is downloaded and installed.
3. The format of the system information file is described in sysinfo(4), which is delivered
for HP-UX 11i v3 (11.31) in the PHCO_39064 patch (or a superseding patch) to
SystemAdmin.FIRST-BOOT, and for HP-UX 11i v2 (11.23) in the PHCO_39215 patch (or
a superseding patch). For instructions on downloading these patches, please see the DRD
downloads webpage.
Creating the DRD clone
Perform the following steps to create the DRD clone:
1. Choose an LVM-managed, Itanium-based system that will be used to create the clone. This is the
source system and it must have the version of HP-UX that you want to use for the target system.
2. Identify a disk on the source system that can be moved to the target system—where the image will
be booted. This disk is the target disk. Typically, the disk is a SAN LUN that can be unpresented
from the source system (being cloned) and presented to the target system (to be booted) by
changing the port or World Wide Port Name (WWPN) zoning. Ensure that the disk is
available—that is, it is not currently in use by the source system, and the disk is large enough to
hold a copy of the root volume group of the source system. For further guidance on choosing a
target disk, see drd-clone (1M), Dynamic Root Disk System Administrator's Guide (~1 MB
PDF), or Dynamic Root Disk: Quick Start & Best Practices (~300 KB PDF.)
3. Issue the “drd clone” command. If the target disk was previously used for an LVM volume
group, VxVM disk group, or as a boot disk, you will need to set the overwrite option to true.
In the following example, the target disk is /dev/disk/disk10:
# drd clone -v -x overwrite=true -t /dev/disk/disk10
Caution:
The “-x overwrite=true” option should only be used if
you are sure that the disk is not currently in use.
4. (Optional) Install additional software to the clone using “drd runcmd swinstall …” or
modify kernel tunables on the clone using “drd runcmd kctune …”.
Creating the system information file
After the appropriate DynRootDisk software and supporting patches have been installed, a sample
system information file is available at /etc/opt/drd/default_sysinfo_file. As delivered,
the file contains a single variable assignment that will trigger interactive use of auto_parms(1M)
when a rehosted disk is booted. The file can be modified by a system administrator as desired. In
addition, the file may be renamed or copied to another location and modified because the file path is
supplied as a parameter to the "drd rehost" command. For security reasons, HP recommends that
you restrict write access to system information files.
General layout of the system information file
Parameters may be entered in any order in the file. If a parameter is listed multiple times, the last
occurrence is used. Each parameter is specified on its own line, in the form:
<NAME>=<VALUE>
The parameters’ names, listed below, all begin with the text string “SYSINFO_”.
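As a hypothetical illustration (not part of DRD or auto_parms), the following POSIX shell fragment shows how last-occurrence resolution behaves for a file in this format; the file path and contents are invented for the example:

```shell
# Build a throwaway sysinfo-style file in which SYSINFO_HOSTNAME is
# assigned twice; per the rule above, the second assignment wins.
cat > /tmp/demo_sysinfo <<'EOF'
SYSINFO_HOSTNAME=oldhost
SYSINFO_TIMEZONE=MST7MDT
SYSINFO_HOSTNAME=myhost
EOF

# Resolve duplicates: later assignments overwrite earlier ones.
awk -F= '/^SYSINFO_/ { v[$1] = $2 } END { print v["SYSINFO_HOSTNAME"] }' /tmp/demo_sysinfo
```

The final line prints myhost, because the later assignment replaced the earlier one.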
Some parameters are set once for the entire system. Array parameters (those ending in “[n]” for a
non-negative integer n) are set for their corresponding network interfaces.
In general, the parameters listed below that are set for the entire system are not required, and they
default to the value already present on the system. The exception is SYSINFO_HOSTNAME, which
must be specified whenever SYSINFO_INTERACTIVE is not set to ALWAYS. (See below for
acceptable values of SYSINFO_HOSTNAME.)
In contrast, the network interface parameters must be specified for each network interface that will be
used on the target system. Pre-existing network interface information on the system will be removed
before the system information file is processed by auto_parms(1M). For example, the contents of
/etc/rc.config.d/netconf and /etc/hosts are reverted to the content as originally
delivered (that is, to the “newconfig” template for each of them), without any information
specific to the original source system.
Using the SYSINFO_INTERACTIVE parameter
The single setting SYSINFO_INTERACTIVE=ALWAYS can be used to trigger the interactive FIRST-
BOOT interface to be displayed by auto_parms(1M) when the image is booted. This is the only
value that is defined in the delivered sample system information file.
If the interactive interface is not desired, this variable must be removed from the file or set to
“ON_ERROR”, its default value.
Setting the SYSINFO_HOSTNAME parameter
The hostname must be set in the system information file if SYSINFO_INTERACTIVE is not set to
ALWAYS. The syntax for the hostname is
SYSINFO_HOSTNAME=<hostname>
The value of the hostname must be a valid domain name. It must not end in a period.
The initial period-delimited segment (or the entire name if no periods are used) must conform to all of
the following rules:
Setting other parameters for the entire system
The remaining parameters for the entire system are optional. If omitted, the value on the system will
be used. The following is a list of all the system-wide parameters:
SYSINFO_LANGUAGE=<language>
SYSINFO_TIMEZONE=<timezone>
Note:
Due to a defect in the initial implementation of the drd rehost
command, a timezone containing the colon character (“:”) cannot
be specified.
The SYSINFO_PROCESSED parameter
The SYSINFO_PROCESSED parameter must not be included in the system information file. It is added
by auto_parms(1M) to prevent processing of the file in subsequent reboots.
If you have acquired the system information file with the drd unrehost command (described later)
on a system that has already been rehosted, you must remove the parameter before re-using the file.
Identifying a network interface
You must identify each network interface that you will use on the target system by MAC (media access
control) address or the hardware path of the interface. If both SYSINFO_MAC_ADDRESS and
SYSINFO_LAN_HW_PATH are specified, SYSINFO_MAC_ADDRESS takes precedence and
SYSINFO_LAN_HW_PATH is ignored. The number “n” used as the array index has a minimum value
of 0 and a maximum value of 1023. All the parameters for a given interface must use the same
index.
SYSINFO_MAC_ADDRESS[n]
Default - None
Status - This parameter is required for an interface if SYSINFO_LAN_HW_PATH is not specified for
the interface.
Value - The prefix "0x" followed by a value of the MAC address expressed as 12 hex digits
(for example, 0x0017A451E718). In the initial rehosting support, the alphabetic
characters must be in uppercase.
or
SYSINFO_LAN_HW_PATH[n]
Default - None
Status - The SYSINFO_LAN_HW_PATH must be specified for any interface for which
SYSINFO_MAC_ADDRESS is not supplied.
Value - A hardware path, specified by non-negative integers, separated by forward slashes ("/").
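The format rules above can be checked mechanically. The following is a hypothetical helper (not part of DRD): it accepts a MAC value only if it is "0x" followed by exactly 12 hex digits with alphabetic digits in uppercase, and it checks that an array index is within 0 to 1023:

```shell
# Validate a SYSINFO_MAC_ADDRESS value: "0x" plus 12 uppercase hex digits.
is_valid_mac() {
  printf '%s\n' "$1" | grep -Eq '^0x[0-9A-F]{12}$'
}

# Validate an array index n: an integer from 0 to 1023.
is_valid_index() {
  printf '%s\n' "$1" | grep -Eq '^[0-9]+$' && [ "$1" -le 1023 ]
}

is_valid_mac 0x0017A451E718 && echo "MAC accepted"
is_valid_mac 0x0017a451e718 || echo "lowercase MAC rejected"
is_valid_index 1024 || echo "index 1024 rejected"
```

All three messages print, matching the constraints stated above.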
Managing a network interface with DHCP
If the interface is to be managed by DHCP, the only other parameter required for the interface is
SYSINFO_DHCP_ENABLE[n]=1:
SYSINFO_DHCP_ENABLE[n]
Default - 0
Status - For a given network interface, either SYSINFO_DHCP_ENABLE
must be set to 1 or static network parameters must be
specified.
Value - 0 - DHCP client functionality is not enabled for this
interface.
1 - DHCP client functionality is enabled for this interface.
Using static parameters to configure a network interface
If the network interface will not be managed by DHCP, then the SYSINFO_IP_ADDRESS[n] and
SYSINFO_SUBNET_MASK[n] must be specified for the interface:
SYSINFO_IP_ADDRESS[n]=<decimal-dot IP address>
Default - None
Status - An IP address must be specified for any network interface
for which SYSINFO_DHCP_ENABLE is not set to 1.
Value - An IP address in decimal-dot notation (for example, 192.1.2.3)
SYSINFO_SUBNET_MASK[n]=<subnet mask>
Default - None
Status - SYSINFO_SUBNET_MASK must be set for any interface for which
SYSINFO_DHCP_ENABLE is not set to 1.
Value - Subnet mask in hexadecimal or decimal-dot notation (for example, 255.255.255.0)
Additional static parameters for a network interface
SYSINFO_ROUTE_COUNT[n]=<0 or 1>
Default - None
Status - SYSINFO_ROUTE_COUNT is optional for a network interface.
Value - 0 - SYSINFO_ROUTE_GATEWAY is a local or loopback interface.
1 - SYSINFO_ROUTE_GATEWAY is a remote interface.
SYSINFO_ROUTE_DESTINATION[n]
Default - None
Status - SYSINFO_ROUTE_DESTINATION is optional for a network interface.
Value - The value must be set to "default" for any interface for which DHCP is not enabled.
SYSINFO_ROUTE_GATEWAY[n]
Default - None
Status - The SYSINFO_ROUTE_GATEWAY is optional for a network interface.
Value - Gateway hostname or IP address in decimal-dot notation (e.g., 192.1.2.3)
If the loopback interface, 127.0.0.1, is specified,
SYSINFO_ROUTE_COUNT must be set to 0 for this interface.
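As a sketch of the loopback rule above (hypothetical, not part of DRD), a pre-check might reject values that name the loopback address as a gateway while setting a nonzero route count:

```shell
# Hypothetical values taken from a sysinfo file for one interface.
gw=127.0.0.1
count=1

# Per the rule above, a loopback gateway requires SYSINFO_ROUTE_COUNT=0.
if [ "$gw" = "127.0.0.1" ] && [ "$count" -ne 0 ]; then
  echo "error: loopback gateway requires SYSINFO_ROUTE_COUNT[n]=0"
else
  echo "route settings consistent"
fi
```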
Sample system information files
A system information (sysinfo) file that causes the FIRST-BOOT interactive interface to be displayed
consists of the following single line:
SYSINFO_INTERACTIVE=ALWAYS
A sysinfo file that specifies the hostname and causes a DHCP server to be contacted for configuring a
system with the single network interface with MAC address 0x0017A451E718 consists of the
following lines:
SYSINFO_HOSTNAME=myhost
SYSINFO_MAC_ADDRESS[0]=0x0017A451E718
SYSINFO_DHCP_ENABLE[0]=1
A sysinfo file that does NOT use DHCP for configuring a system with the single network interface with
MAC address 0x0017A451E718, with additional network parameters specified, consists of the
following:
SYSINFO_HOSTNAME=myhost
SYSINFO_MAC_ADDRESS[0]=0x0017A451E718
SYSINFO_DHCP_ENABLE[0]=0
SYSINFO_IP_ADDRESS[0]=192.2.3.4
SYSINFO_SUBNET_MASK[0]=255.255.255.0
SYSINFO_ROUTE_GATEWAY[0]=192.2.3.75
SYSINFO_ROUTE_DESTINATION[0]=default
SYSINFO_ROUTE_COUNT[0]=1
A sysinfo file that specifies the hostname, causes a DHCP server to be contacted for configuring a
system with the single network interface (with MAC address 0x0017A451E718), and sets
TIMEZONE to MST7MDT consists of the following lines:
SYSINFO_HOSTNAME=myhost
SYSINFO_MAC_ADDRESS[0]=0x0017A451E718
SYSINFO_DHCP_ENABLE[0]=1
SYSINFO_TIMEZONE=MST7MDT
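The rules behind these samples can be checked before rehosting. The following is a hypothetical pre-check (not part of DRD): for each interface index found in a sysinfo file, it verifies that either DHCP is enabled or a static IP address plus subnet mask is supplied. The file path and contents are invented for the example:

```shell
# Write a sample static-configuration sysinfo file to check.
cat > /tmp/check_sysinfo <<'EOF'
SYSINFO_HOSTNAME=myhost
SYSINFO_MAC_ADDRESS[0]=0x0017A451E718
SYSINFO_DHCP_ENABLE[0]=0
SYSINFO_IP_ADDRESS[0]=192.2.3.4
SYSINFO_SUBNET_MASK[0]=255.255.255.0
EOF

# For each interface index, require DHCP or a full static configuration.
for n in $(awk -F'[][]' '/^SYSINFO_MAC_ADDRESS\[/ { print $2 }' /tmp/check_sysinfo); do
  if grep -q "^SYSINFO_DHCP_ENABLE\[$n]=1" /tmp/check_sysinfo; then
    echo "interface $n: DHCP"
  elif grep -q "^SYSINFO_IP_ADDRESS\[$n]=" /tmp/check_sysinfo &&
       grep -q "^SYSINFO_SUBNET_MASK\[$n]=" /tmp/check_sysinfo; then
    echo "interface $n: static configuration"
  else
    echo "interface $n: incomplete" >&2
  fi
done
```

For the file above, the loop reports interface 0 as statically configured.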
Using drd rehost to copy the system information file to the EFI
partition
After the clone and system information file have been created, the drd rehost command can be
used to check the syntax of the system information file and copy it to /EFI/HPUX/SYSINFO.TXT in
preparation for processing by auto_parms(1M) during the boot of the image. The following
example uses the /var/opt/drd/tmp/info_for_newhost system information file:
# drd rehost -f /var/opt/drd/tmp/info_for_newhost
Note that in this example, the default target, which is the inactive image, is used.
If you want to only check the syntax of the system information file, without copying it to the
/EFI/HPUX/SYSINFO.TXT file, use the preview option of the drd rehost command.
Using drd status to check for EFI/HPUX/SYSINFO.TXT
# drd status
======= 07/28/09 15:08:55 MDT BEGIN Displaying DRD Clone Image
Information (user=root) (jobid=srcsys)
Using drd unrehost to review contents of EFI/HPUX/SYSINFO.TXT
When you are satisfied with the file, you need to rerun the drd rehost command to return the file
to the EFI partition of the inactive image.
Booting the image with SYSINFO.TXT on the target system
You need to use your SAN software to unpresent the SAN LUN from the source system and present it
to the target system. The techniques for doing this are specific to the SAN software you are using.
If the target system is already booted on another disk or SAN LUN, you can use setboot to set the
primary bootpath to the new LUN. If not, you can interrupt the boot process to choose the new LUN,
identifying it by the presence of SYSINFO.TXT in the /EFI/HPUX directory. Alternatively, you can
use one of the techniques mentioned below for Integrity VMs or Blades.
Limitations and recommendations for the initial release of drd rehost
Preliminary testing shows that simple (single root volume group), standalone, LVM-managed,
Itanium-based systems running a September 2008 or later Operating Environment can be
rehosted to another system with the exact same hardware. The benefit of the September 2008 (or
later) Operating Environment is the availability of “Self healing of boot disk configuration”, provided
by LVM and described in the September 2008 release notes. See the section on Boot Resiliency in
HP-UX 11i v3 Version 3 Release Notes: HP 9000 and HP Integrity Server and the Summary of
Changes for Logical Volume Manager in HP-UX 11i Version 3 September 2008 Release Notes:
Operating Environments Update Release (both documents are in the Getting Started documents) for
more information on this LVM feature.
Using drd rehost to provision a new BL870C Blade with HP-UX 11i v3
Note:
If the blade is configured to use Virtual Connect, a virtual WWPN
can be allocated before the hardware arrives, and a SAN LUN
can be presented to that port name. Aside from this efficiency,
rehosting a DRD clone from one blade to another does not depend
on Virtual Connect.
Assumptions
1. To simplify the discussion, the new blade will be installed in a pre-existing enclosure.
2. A pre-existing blade with an LVM root volume group is installed with the desired software. This
blade will be cloned to provide a boot disk for the new blade, so it will be known as the source.
To make use of the LVM “Boot Disk Configuration Self-Healing” feature introduced in HP-UX 11i
v3 Update 3 (September 2008), the Operating Environment installed on the source is HP-UX 11i
v3 Update 3 or later.
3. A SAN LUN is used for the disk image that will be rehosted, and the SAN management software
is used to make the LUN available to the pre-existing and new blades.
4. The required version of DRD and required patches to FIRST-BOOT, as described in Installing
Required Software, are installed on the source system.
Steps for provisioning the new Blade
1. From the initial Virtual Connect Enterprise Manager screen, select Define a Profile from
Profile Management.
The Create Profile dialog then appears at the bottom of the screen:
Enter a name for the profile, the VC Domain Group, Network Names, FC San Name, SAN
Boot Option, and VC Domain. These parameters have probably already been defined for other
blades in the enclosure. More information about all the parameters can be found in the HP Virtual
Connect Enterprise Manager User Guide. Next, assign the Profile to the bay where you intend to
locate the new blade, and click the OK button.
To see the WWPN and MAC address that were assigned to the new profile, check the box next to
the new profile, and click the Edit button.
The MAC addresses and WWPNs are displayed. You will need these items in subsequent steps.
2. Communicate the new MAC address to the network administrator. If the network to which the
new blade is connected is not managed by DHCP, obtain the IP address corresponding to the
MAC address from the network administrator.
3. Use the source system to determine the size of SAN LUN needed as a boot disk for the new
blade. Request that the storage administrator:
a) Create a host entry for the new blade, using the WWPN determined in Step 1.
b) Assign a new LUN big enough for the boot disk, and present it with read/write access to
both the source system and the new blade.
4. When the new SAN LUN is available, identify its device special file (DSF) on the source system.
One way to do this is to obtain the WWID (World Wide ID) of the LUN from the storage
administrator, then use the scsimgr command to identify the device file:
<snip>
name = wwid
current = 0x600508b40001123c0000d00001f90000
default =
saved =
5. Issue the drd clone command on the source system with the new LUN as the target disk:
Caution:
Only use the -x overwrite=true option if you are sure
that the target disk is not currently in use and can be
overwritten.
6. Create a sysinfo file with information needed to boot the clone disk on the new blade.
It is convenient to start with a copy of the template /etc/opt/drd/default_sysinfo_file
delivered with DRD. This file contains comments indicating the syntax of each variable.
a. # cp /etc/opt/drd/default_sysinfo_file \
/var/opt/drd/tmp/drdblade.sysinfo
b. # vi /var/opt/drd/tmp/drdblade.sysinfo
c. If you want to supply all information to interactive screens displayed when the new blade
is booted, no changes to the file are needed.
If you would prefer that the boot proceed without need for human interaction, comment
out the line SYSINFO_INTERACTIVE=ALWAYS, and add the needed information to the
file. In this case, you must specify at least the hostname and the network information for
the interfaces defined in the profile. For a given network interface specified as index “n”,
you must specify either SYSINFO_MAC_ADDRESS[n] or SYSINFO_LAN_HW_PATH[n]
(with SYSINFO_MAC_ADDRESS[n] taking precedence) AND either
SYSINFO_DHCP_ENABLE[n]=1 or SYSINFO_IP_ADDRESS[n] and
SYSINFO_SUBNET_MASK[n]. In the latter case, it is usually helpful to also specify
SYSINFO_ROUTE_GATEWAY[n].
For the first interface you want to configure, specify:
SYSINFO_MAC_ADDRESS[0]=< MAC from Virtual Connect profile in hex format>
and either
SYSINFO_DHCP_ENABLE[0]=1
or
SYSINFO_IP_ADDRESS[0]=<IP address from network administrator>
SYSINFO_SUBNET_MASK[0]=<subnet mask from network administrator>
Note:
The initial support for the sysinfo file requires that the
letters A-F in the sysinfo file’s MAC address must be
entered in uppercase letters. This restriction is
removed on HP-UX 11i v3 by PHCO_38608, which
supersedes PHCO_36525.
For example:
SYSINFO_HOSTNAME=drdbl1
SYSINFO_MAC_ADDRESS[0]=0x0017A477000C
SYSINFO_IP_ADDRESS[0]=15.1.50.70
SYSINFO_SUBNET_MASK[0]=255.255.248.0
SYSINFO_ROUTE_GATEWAY[0]=15.1.48.1
SYSINFO_ROUTE_DESTINATION[0]=default
SYSINFO_ROUTE_COUNT[0]=1
7. Issue the drd rehost command, using the sysinfo file just created. The inactive system image
(the clone that was just created) is the default target. This command copies the sysinfo file to the
EFI partition of the clone.
* The value 1 for SYSINFO_ROUTE_COUNT[0] passes the syntax
check.
* The value 255.255.248.0 for SYSINFO_SUBNET_MASK[0] passes the
syntax check.
8. When the new blade arrives, it can be booted from the rehosted disk. If DHCP is used to
manage the network, the Integrated Lights-Out Management Processor IP address will be
obtained from DHCP and displayed on the Front Display Panel of the new blade.
Otherwise, a serial console can be connected. See Accessing the Integrated Lights-Out
Management Processor for further information.
From the EFI Boot Manager Menu, select the EFI shell. You will see a screen similar to the
following.
The “fs<n>” entries indicate EFI file systems. The goal is to boot from the SAN file system
containing the /EFI/HPUX/SYSINFO.TXT file. In the display, the first “fs<n>” entry
representing a Fibre LUN is “fs3”. We see the SYSINFO.TXT file in the /EFI/HPUX directory of
“fs3”:
Entering hpux.efi starts the HP-UX bootloader on the rehosted disk.
Note:
By default, the EFI boot utilities do not scan for SAN LUNs. To
extend the scan to the SAN, use the following steps:
a) From the EFI Boot Manager Menu, exit to the EFI shell.
b) To determine the driver number (in the DRV column) for the FC driver, issue the command:
drivers -b
c) To determine the controller number, issue the command:
drvcfg <driver_number>
d) To start the Fibre Channel Driver Configuration Utility, issue the command:
drvcfg -s <driver_number> <controller_number>
e) Select Option 4: Edit boot Settings
f) Select Option 6: EFI Variable EFIFCScanLevel
g) Enter y to create the variable.
h) Enter 1 to set the value of the EFIFCScanLevel.
i) Enter 0 to go to Previous Menu.
j) Enter 12 to quit.
k) To rescan the devices, issue the command:
reconnect -r
l) Issue the command:
map -r
The last map -r command will show “fs<n>” entries for SAN LUNs. Proceed as above to
identify the disk containing /EFI/HPUX/SYSINFO.TXT.
The type command, which is available in the EFI shell, can also be used to display contents of
the SYSINFO.TXT file.
More information on EFI commands can be found in Appendix D of HP Integrity BL860C Server
Blade HP Service Guide.
9. You can use the EFI shell command, bcfg, described in Chapter 4 of HP Integrity BL860C Server
Blade HP Service Guide to set the primary boot path, but you may find it easier to use the
setboot command after HP-UX is booted to set the booted disk as the primary boot path. The
device file of the boot disk can be determined from vgdisplay of the root group.
10. After the new blade is booted on the clone LUN, the
/var/opt/drd/registry/registry.xml file must be removed. (The requirement that this
file must be removed will be eliminated in the future.)
11. After booting up the new blade, you might want to remove the /EFI/HPUX/SYSINFO.TXT file
from the EFI partition. To do so, enter the command:
Note:
A special variable has been set in
/EFI/HPUX/SYSINFO.TXT to prevent processing of the file
on subsequent events. Removal of the file clarifies to system
administrators that the disk is no longer subject to rehosting.
12. If the release of HP-UX on the source system was earlier than September 2008, error messages
might be issued when vgdisplay or lvlnboot are run. In this case, run the commands:
# vgscan -k -f /dev/vg00
# lvlnboot -R /dev/vg00
See the section on Boot Resiliency in LVM New Features in HP-UX 11i v3 (in the White Papers
documents) for more details.
13. Only the root disk is presented to the new blade. However, if other disks were in use for other
volume groups on the source system, they will still appear in /etc/lvmtab and /etc/fstab
and might need to be removed. For example, the command, vgdisplay -v, might report
errors such as the following:
The following commands can be used to remove the non-root volume groups that were in use on
the source system:
# mv -f /etc/lvmtab /etc/lvmtab.save3
# mv -f /etc/lvmtab_p /etc/lvmtab_p.save3
# vgscan -a
You might also want to import other volume groups from disks that have been presented to the
new system.
The /etc/fstab file can be edited to remove entries not available on the new blade.
14. The contents of /stand/bootconf must be checked to ensure that the current device for the
boot disk is recorded. The format of a line representing an LVM-managed disk is an “l” (ell) in
column one, followed by a space, followed by the block device file of the HP-UX (second)
partition of the boot disk. The boot disk can be determined from vgdisplay of the root group
(usually vg00).
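The entry format described above can be illustrated with a hypothetical example (the device file name is invented, not taken from any particular system):

```shell
# An LVM boot disk entry in /stand/bootconf: an "l" (ell) in column one,
# a space, then the block device file of the HP-UX (second) partition of
# the boot disk.
boot_dsf=/dev/disk/disk10_p2        # hypothetical boot-disk partition DSF
printf 'l %s\n' "$boot_dsf" > /tmp/bootconf.demo

# Verify the line has the expected shape before trusting it at boot time.
grep -Eq '^l /dev/' /tmp/bootconf.demo && echo "bootconf entry ok"
```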
15. You might need to contact your network administrator to arrange for additional configuration of
the new blade on your DNS, NIS, or DHCP servers.
16. If additional applications use configuration or licensing information specific to a particular host
(such as hostname or IP address), they might need to be updated.
Using drd rehost to provision Integrity VMs
Assumptions
• There is an existing VM on the VM host where the new VM will be created. The existing VM has
an LVM root volume group and the desired software and patches. The root group of the pre-
existing VM will be cloned to provide a boot disk for the new VM, so the pre-existing VM will be
known as the source.
• On the VM host, a virtual switch is already defined, sufficient memory is available to boot a new
VM, and sufficient disk space is available to provision the boot disk for the new VM. The disk
space may be a raw disk, or may be an LVM logical volume.
• The required version of DRD, as described in Installing required software, is installed on the
source VM. Ideally, the source VM is running HP-UX 11i v3 Update 3 (September 2008)
or later.
Special considerations for HP-UX 11i v3 prior to Update 3 (March 2008 or earlier)
Enhancements to LVM in HP-UX 11i v3 allow a system to boot from a disk or SAN LUN whose
device special file does not match the device special file listed first in /etc/lvmtab on the disk.
Further enhancements were made to this feature in the September 2008 release of LVM to remove
the need to run any LVM “cleanup” commands after the boot completed.
More information on this feature is available in the section on Boot Resiliency in HP-UX 11i v3
Version 3 Release Notes: HP 9000 and HP Integrity Servers (in the Getting Started documents)
and in the Summary of Changes for Logical Volume Manager in LVM New Features in HP-UX 11i
v3 (in the White Papers documents).
Additional cleanup commands to be run after the target VM boots are noted below.
HP-UX 11i v2 - Special considerations
The Boot Resiliency feature for LVM is not available on HP-UX 11i v2 (11.23).
Two approaches can be used to address the lack of Boot Resiliency in HP-UX 11i v2:
(a) Simple Generating VM: Use a source VM with a very simple I/O configuration when
provisioning the target VM. This simple source VM may be one that is not actually used to
run applications; rather, it is used as a generator of new VMs.
The simple source VM for HP-UX 11i v2 should be configured with an avio_stor disk that is
the boot disk for the source, and an avio_stor disk that is the clone target, and no additional
disks. The two device files should be specified (using the hpvmcreate or hpvmmodify
command) with the (bus, device, target) triples (0,1,0) for the root disk and (0,1,1) for the
clone target. Details for moving the clone target to the target VM are provided below.
(b) Single-user-mode Repair: In this case, no restrictions are made on the source VM. The target
VM will initially be booted into single user mode to adjust the LVM metadata for the boot
disk. Details on this approach are provided below.
Special considerations for avoiding reboots of Integrity VMs
In general, storage can be added to and deleted from Integrity VMs without a reboot. However,
the addition of a virtual storage controller does require that the guest be restarted. Virtual storage
controllers are created automatically when a storage device needing that controller is added to the
guest. Thus, reboots can be avoided by creating at least one disk of each storage type that will be
needed before the VM is deployed in production. For example, if both SCSI disks and avio_stor disks
will be used, create one of each before the VM is deployed in production.
Note also that explicitly specifying the triple “bus, device, target” in the addition of a storage device
will result in creation of a new controller if one with that “bus, device” pair does not already exist.
See hpvmresources(5) for further information on resource specification in the hpvmmodify(1M)
command.
In addition, if all devices using a given controller have been deleted, the controller itself will be
deleted upon the next restart of the virtual machine.
1. On the VM host, create a new VM with a network interface only. This will provide the MAC
address assigned by the VM host to the virtual Network Interface Card. The MAC address will
be needed later in setting up the boot disk for the new VM.
The following command creates the VM "drdivm2", with one CPU, 2 GB of memory, and a virtual
network interface to the switch "myvswtch":
# hpvmcreate -P drdivm2 -c 1 -r 2G \
-a network:avio_lan::vswitch:myvswtch
The hpvmstatus command can be used to verify that the new VM was successfully created and
to determine the virtual MAC address of the new VM:
# hpvmstatus -d -P drdivm2
[Virtual Machine Devices]
2. On the VM host, add a disk to the source VM, "drdivm1", that is large enough to contain all the
logical volumes in the root volume group. The backing store can be a raw disk or an LVM or
VxVM volume, and must be available—not in use on the host or on any other VM.
# hpvmmodify -P drdivm1 \
-a disk:avio_stor:0,1,1:disk:/dev/dsk/c5t0d0
If the target disk has already been added to the system, the triple in use can be checked by
issuing:
# hpvmstatus -P drdivm1 -d
3. On the source VM, run the drd clone command to create a boot disk for the new VM.
Before running drd clone, you need to determine the device file of the newly added disk.
On HP-UX 11i v3, run ioscan -fNC disk on the source VM. (Do not use the -k option
because the VM needs to discover the newly added disk.) The instance number displayed is the
number to be appended to “disk” in the device file. For example, if the disk has instance number
3, the device file is /dev/disk/disk3.
On HP-UX 11i v2, run ioscan -fnC disk on the source VM. (Do not use the -k option
because the VM needs to discover the newly added disk.) The newly displayed device file can
then be used for the clone target.
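As an illustration, the mapping from ioscan instance number to agile-view device file on HP-UX 11i v3 can be scripted; the sample ioscan output line below is illustrative only, not output captured from a real system:

```shell
# Sketch: derive the agile-view device file from one line of captured
# "ioscan -fNC disk" output. In the agile view, the second field of the
# disk line is the instance number, which is appended to "disk" to form
# the device file name.
line='disk      3  64000/0xfa00/0x2  esdisk  CLAIMED  DEVICE  HP Virtual Disk'
instance=$(printf '%s\n' "$line" | awk '{print $2}')
devfile="/dev/disk/disk${instance}"
echo "$devfile"
```

With the sample line above, this prints /dev/disk/disk3, matching the example used in this paper.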
The following command on HP-UX 11i v3 creates a clone of the source VM to the disk
/dev/disk/disk3, which is the device file of the disk with backing store /dev/disk/disk18
on the host. The -x overwrite=true option is used to ignore any LVM, VxVM, or boot
records that might exist on the new disk.
# drd clone -x overwrite=true -t /dev/disk/disk3
Caution:
Only use the -x overwrite=true option when you are
sure the target disk is correct, the target disk is not currently in
use, and the target disk can be overwritten.
4. On the source VM, create a system information file for the new VM. You need at least the
following information:
• The hostname of the new VM
• The MAC address of the new VM's virtual network interface, as reported by hpvmstatus
• The IP address and subnet mask for the network interface, unless it is managed by DHCP
In addition, you will probably want to supply a gateway interface for the network interface if
it is not managed by DHCP, as well as information about NIS and/or DNS servers.
# cp /etc/opt/drd/default_sysinfo_file \
/var/opt/drd/tmp/drdivm2.sysinfo
Using the comments in the file, the information supplied in the Creating the system information
file section, or sysinfo(4), edit the copied file, commenting out the
“SYSINFO_INTERACTIVE=ALWAYS” line and adding lines for the information in the bulleted
list above.
Note:
The initial support for the sysinfo file requires that the letters A-F
in the sysinfo file's MAC address be entered in uppercase.
This restriction is removed on HP-UX 11i v3
by PHCO_38608, which supersedes PHCO_36525.
After you have finished editing the file, the non-comment lines will be similar to those
displayed below:
SYSINFO_HOSTNAME=drdivm2
SYSINFO_MAC_ADDRESS[0]=0x22431DF569E3
SYSINFO_IP_ADDRESS[0]=15.1.52.164
SYSINFO_SUBNET_MASK[0]=255.255.248.0
SYSINFO_ROUTE_GATEWAY[0]=15.1.48.1
SYSINFO_ROUTE_COUNT[0]=1
SYSINFO_ROUTE_DESTINATION[0]=default
Note that the MAC address in the sysinfo file must be specified in the format documented in
sysinfo(4), which differs slightly from the format of the MAC address in the hpvmstatus
output.
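As an illustration, the conversion can be scripted; this sketch assumes hpvmstatus prints the MAC address as colon- or dash-separated hex pairs, and produces the 0x-prefixed form with the hex letters A-F forced to uppercase, as the initial sysinfo support requires:

```shell
# Sketch: convert a colon- or dash-separated MAC address (assumed
# hpvmstatus format) into the 0x-prefixed form required by sysinfo(4).
# Strips separators and uppercases the hex digits a-f.
mac_to_sysinfo() {
  printf '0x%s\n' "$(printf '%s' "$1" | tr -d ':-' | tr 'abcdef' 'ABCDEF')"
}
mac_to_sysinfo '22:43:1d:f5:69:e3'   # → 0x22431DF569E3
```

The result matches the SYSINFO_MAC_ADDRESS[0] value shown in the sample sysinfo file above.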
5. On the source VM, run the drd rehost command to copy the system information file created
above to the EFI partition of the clone disk. This provides information that will be processed by
the auto_parms utility when the new VM is booted.
# drd rehost -f /var/opt/drd/tmp/drdivm2.sysinfo -t /dev/disk/disk3
* The file "/var/opt/drd/tmp/drdivm2.sysinfo" passes the
syntax check.
* Copying New System Personality
* The sysinfo file "/var/opt/drd/tmp/drdivm2.sysinfo" has been
successfully copied to the target "/dev/disk/disk3".
6. You can run the drd status command on the source VM to verify that the system information
file has been copied to SYSINFO.TXT on the clone.
# drd status
Special consideration for HP-UX 11i v2: If you are using the Single-user-mode repair
method, copy the mapfile created for the clone to the root file system of the clone itself:
# drd mount
# cp -p /var/opt/drd/mapfiles/drd00mapfile \
/var/opt/drd/mnts/sysimage_001/
# drd umount
7. On the VM host, run the hpvmmodify command to move the clone disk from the source VM to
the new VM:
For HP-UX 11i v3, or if the Single-user-mode repair approach is used for HP-UX 11i v2, the clone
can be moved to the target VM with a simple hpvmmodify command. After deleting the disk from
the source VM (hpvmmodify -d), add it to the target; the hardware address (bus, device, target)
need not be specified:
# hpvmmodify -P drdivm2 \
-a disk:avio_stor::disk:/dev/rdisk/disk18
Special consideration for HP-UX 11i v2: The clone must be moved to the target VM,
taking care to preserve the device file that identifies the disk.
If the Simple Generating VM approach has been used, the device file of the clone disk on the
source was /dev/dsk/c0t0d1. Because no disks were defined on the target VM when it was
created, there will be no conflict in using the triple (0,1,1) for the disk on the target VM. The
following command adds the disk to the target VM:
# hpvmmodify -P drdivm2 \
-a disk:avio_stor:0,1,1:disk:/dev/rdisk/disk18
8. On the VM host, start the new VM with the hpvmstart command, then run hpvmconsole,
followed by CO, to connect to the EFI Boot Manager interface:
# hpvmstart -P drdivm2
# hpvmconsole -P drdivm2
# boot -lm
This boots the target VM into single user mode. You can then use the following steps, which are
a continuation of step 8 above, to repair the LVM metadata. (These steps are adapted from the
Changing the LVM Boot Device Hardware Path for a Virtual Partition section of the HP-UX Virtual
Partitions Administrator's Guide, in the User Guide documents.)
8.1. Run insf and ioscan to get the device file name of the boot device:
# insf -e
# ioscan -fnC disk
8.2. Run vgscan to get the device filenames of the boot device:
# vgscan
# vgexport /dev/vg00
# mkdir /dev/vg00
# vgimport -m /drd00mapfile \
/dev/vg00/<block_device_file_of_HPUX_partition>
where the block device file name is obtained from the ioscan and vgscan commands
above. Then activate the volume group:
# vgchange -a y /dev/vg00
You might also have to clean up and prepare the LVM logical volumes to be the root, boot,
primary swap, or dump volumes, as follows:
# lvrmboot -r /dev/vg00
# lvlnboot -b /dev/vg00/lvol1
# lvlnboot -r /dev/vg00/lvol3
# lvlnboot -s /dev/vg00/lvol2
# lvlnboot -d /dev/vg00/lvol2
# mount
8.7. Verify that the hardware path for the boot device matches the primary boot path:
# lvlnboot -v /dev/vg00
If the hardware path has not changed to the primary boot path, change it by running
lvlnboot with the recovery (-R) option. This step is normally not necessary:
# lvlnboot -R /dev/vg00
9. After the new VM boots, you can log in and set the new disk as the primary boot path.
Commands similar to the following can be used. First, identify the boot disk from the root
volume group:
# vgdisplay -v /dev/vg00 | grep "PV Name"
PV Name /dev/disk/disk3_p2
On HP-UX 11i v3 (11.31), the device special file can be used to set the primary boot path:
# setboot -p /dev/disk/disk3
Special consideration for HP-UX 11i v2: On HP-UX 11i v2, the hardware path must be
supplied to the setboot command. The hardware path of the boot disk can be determined
from ioscan -fnC disk. For example:
# setboot -p 0/0/1/0.1.0
11. After the new VM is booted, the /EFI/HPUX/SYSIDENT.TXT file can safely be left in the EFI
partition of the rehosted disk. However, if you wish to remove it, you can use the drd
unrehost command, specifying the boot disk as the target of the command:
# drd unrehost -t /dev/disk/disk3
12. If the release of HP-UX 11i v3 on the source system is earlier than September 2008, error
messages might be issued when vgdisplay or lvlnboot are run. In this case, run the
following commands:
# vgscan -k -f /dev/vg00
# lvlnboot -R /dev/vg00
See the section on Boot Resiliency in LVM New Features in HP-UX 11i v3 (in the White Paper
documents) for more details.
13. After the new VM is booted, you might want to remove non-root volume groups that were present
on the source VM but not on the target. Because the root disk is the only disk currently assigned
to the target VM, the simplest way to do this is to rename /etc/lvmtab (and /etc/lvmtab_p,
if it exists) and re-create /etc/lvmtab and /etc/lvmtab_p with the vgscan command:
# mv /etc/lvmtab /etc/lvmtab.save
# mv /etc/lvmtab_p /etc/lvmtab_p.save
# vgscan -a
Creating "/etc/lvmtab".
*** #1. vgchange -a y
*** #2. lvlnboot -R
# vgchange -a y
# lvlnboot -R
The /etc/fstab file can be edited to remove entries not available on the new VM.
14. The contents of /stand/bootconf must be checked to ensure that the current device for the
boot disk is recorded. The format of a line representing an LVM-managed disk is an “l” (ell) in
column one, followed by a space, followed by the block device file of the HP-UX (second)
partition of the boot disk. The boot disk can be determined from vgdisplay of the root group
(usually vg00).
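The check described above can be sketched in shell; the file and device paths below are illustrative (a scratch file stands in for /stand/bootconf), and the boot disk name is taken from the earlier example:

```shell
# Sketch: verify that the bootconf file records the boot disk. For an
# LVM-managed disk the expected line is "l" in column one, a space, then
# the block device file of the HP-UX (second) partition of the boot disk.
bootconf=./bootconf.example          # normally /stand/bootconf
bootdisk=/dev/disk/disk3_p2          # from vgdisplay -v of the root group
printf 'l %s\n' "$bootdisk" > "$bootconf"   # create example file contents
if grep -q '^l /dev/disk/disk3_p2$' "$bootconf"; then
  echo "bootconf records the boot disk"
fi
```

If the grep finds no matching line, append one in the format shown before rebooting.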
15. You might need to contact your network administrator to arrange for additional configuration of
the new VM on your DNS, NIS, or DHCP servers.
16. If additional applications use configuration or licensing information specific to a particular host
(such as hostname or IP address), they might need to be updated.
Glossary
booted system environment: The system environment that is currently running, also known as the
current, active, or running system environment.
LVM: Logical Volume Manager. The logical volume manager (LVM), a subsystem that manages disk
space, is supplied at no charge with HP-UX.
OE: Operating Environment.
original system environment: A booted system environment whose system image is cloned to create
another system image. Each system image has exactly one original system environment (that is, the
booted system environment at the time the drd clone command was issued).
root file system: The file system that is mounted at /.
system environment: The combination of the system image and the system activities that comprise a
running installation of HP-UX.
system image: The file systems and their contents that comprise an installation of HP-UX, residing
on disk, and therefore persisting across reboots.
For more information
To read more about Dynamic Root Disk, go to www.hp.com/go/drd.
Call to action
HP welcomes your input. Please give us comments about this white paper, or suggestions for LVM or
related documentation, through our technical documentation feedback website:
http://docs.hp.com/en/feedback.html
© 2010 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The
only warranties for HP products and services are set forth in the express warranty statements accompanying such products and
services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or
editorial errors or omissions contained herein.