Adriano de Almeida, Rafael Antonioli, Urban Biel, Sylvain Delabarre, Bartłomiej Grabowski, Kristian Milos, Fray L Rodríguez
ibm.com/redbooks
International Technical Support Organization IBM PowerVM Best Practices October 2012
SG24-8062-00
Note: Before using this information and the product it supports, read the information in Notices on page xiii.
First Edition (October 2012) This edition applies to: PowerVM Enterprise Edition Virtual I/O Server Version 2.2.1.4 (product number 5765-G34) AIX Version 7.1 (product number 5765-G99) IBM i Version 7.1 (product number 5770-SS1) HMC Version 7.7.4.0 SP02 POWER7 System Firmware Version AL730_87
Copyright International Business Machines Corporation 2012. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Figures  vii
Tables  ix
Examples  xi
Notices  xiii
Trademarks  xiv
Preface  xv
The team who wrote this book  xv
Now you can become a published author, too!  xvii
Comments welcome  xvii
Stay connected to IBM Redbooks  xviii

Chapter 1. Introduction and planning  1
1.1 Keeping track of PowerVM features  3
1.2 Virtual I/O Server specifications  4
1.2.1 Virtual I/O Server minimum requirements  4
1.2.2 Configuration considerations of the Virtual I/O Server  5
1.2.3 Logical Volume Manager limits in a Virtual I/O Server  6
1.3 Planning your Virtual I/O Server environment  6
1.3.1 System Planning Tool  7
1.3.2 Document as you go  7
1.3.3 Hardware planning  7
1.3.4 Sizing your Virtual I/O Server  8
1.3.5 IBM Systems Workload Estimator  10
1.3.6 Shared or dedicated resources  10
1.3.7 Single, dual, or multiple Virtual I/O Servers  11
1.3.8 Network and storage components  11
1.3.9 Slot numbering and naming conventions  12
1.3.10 Operating systems on virtual client partitions  13

Chapter 2. Installation, migration, and configuration  15
2.1 Creating a Virtual I/O Server profile  16
2.1.1 Processing mode - shared or dedicated  16
2.1.2 Processing settings  17
2.1.3 Memory settings  20
2.1.4 Physical I/O adapters  21
2.1.5 Virtual I/O adapters  21
2.1.6 Deploying a Virtual I/O Server with the System Planning Tool  24
2.2 Virtual I/O Server installation  24
2.3 Updating fix packs, service packs, and interim fixes  25
2.3.1 Virtual I/O Server service strategy  25
2.3.2 Approved third-party applications in the Virtual I/O Server  26
2.3.3 Applying fix packs, service packs, and interim fixes  27
2.4 Virtual I/O Server migration  28
2.4.1 Options to migrate the Virtual I/O Server  28
2.4.2 Virtual I/O Server migration considerations  28
2.4.3 Multipathing software  30

Chapter 3. Administration and maintenance  31
3.1 Backing up and restoring the Virtual I/O Server  32
3.1.1 When to back up the Virtual I/O Server  32
3.1.2 Virtual I/O Server backup strategy  33
3.1.3 Backing up user-defined virtual devices  35
3.1.4 Restoring the Virtual I/O Server  38
3.1.5 NIM server resilience  41
3.2 Dynamic logical partition operations  42
3.2.1 Dynamically adding virtual Fibre Channel adapters  43
3.3 Virtual media repository  44
3.4 Power Systems server shutdown and startup  45

Chapter 4. Networking  49
4.1 General networking considerations  50
4.1.1 Shared Ethernet Adapter considerations  51
4.1.2 Maximum transmission unit best practices  52
4.1.3 Network bandwidth tuning  54
4.2 Single Virtual I/O Server  55
4.3 Virtual network redundancy and failover technology  55
4.3.1 Dual Virtual I/O Server with VLAN tagging  56
4.3.2 Shared Ethernet Adapter failover with load sharing  58

Chapter 5. Storage  61
5.1 Storage Considerations  62
5.1.1 Virtual I/O Server rootvg storage  62
5.1.2 Multipathing  64
5.1.3 Mixing virtual SCSI and NPIV  67
5.1.4 Fibre Channel adapter configuration  67
5.2 Virtual Small Computer System Interface  70
5.2.1 When to use a virtual Small Computer System Interface  70
5.2.2 Configuring the Virtual I/O Server with a virtual SCSI  71
5.2.3 Exporting virtual Small Computer System Interface storage  76
5.2.4 Configuring the Virtual I/O client with virtual SCSI  79
5.3 Shared Storage Pools  82
5.3.1 Requirements per Shared Storage Pool node  82
5.3.2 Shared Storage Pools specifications  83
5.3.3 When to use Shared Storage Pools  83
5.3.4 Creating the Shared Storage Pools  84
5.3.5 SAN storage considerations  85
5.3.6 Monitoring storage pool capacity  87
5.3.7 Network considerations  87
5.4 N-Port ID Virtualization  88
5.4.1 When to use N-Port ID Virtualization  88
5.4.2 Configuring the Virtual I/O Server with N-Port ID Virtualization  89
5.4.3 Configuring the virtual I/O client with N-Port ID Virtualization  91

Chapter 6. Performance monitoring  93
6.1 Measuring Virtual I/O Server performance  94
6.1.1 Measuring short-term performance  94
6.1.2 Network and Shared Ethernet Adapter monitoring  97
6.1.3 Measuring long-term performance  100

Chapter 7. Security and advanced IBM PowerVM features  105
7.1 Virtual I/O Server security  106
7.1.1 IBM PowerVM Hypervisor security  106
7.1.2 Virtual I/O Server network services  106
7.1.3 Viosecure command  107
7.2 IBM PowerSC  108
7.3 Live Partition Mobility  109
7.3.1 General considerations  110
7.3.2 Implementing Live Partition Mobility  110
7.3.3 Storage considerations  112
7.3.4 Network considerations  113
7.4 Active Memory Sharing  113
7.4.1 When to use Active Memory Sharing  114
7.4.2 Implementing Active Memory Sharing  116
7.4.3 Active Memory Deduplication  118

Abbreviations and acronyms  119
Related publications  123
IBM Redbooks  123
Other publications  123
Online resources  124
Help from IBM  125
Index  127
Figures
2-1 Select processor type  17
2-2 Processing Settings best practices  20
2-3 Setting the maximum number of virtual adapters  22
2-4 Partition profile properties for source and target virtual adapters  23
3-1 NIM resilience solutions  41
3-2 Partition context menu  43
3-3 Save the running partition profile to a new profile  43
3-4 Overwrite the existing partition profile  44
3-5 The Power off system option is turned off  45
3-6 Automatically start when the managed system is powered on  46
3-7 Partition start policy  47
4-1 Dual Virtual I/O Server configuration with two virtual switches  57
4-2 Dual Virtual I/O Server configuration with SEA and load balancing  59
5-1 Virtual SCSI client with multipathing from dual Virtual I/O Servers  65
5-2 Changing the default Maximum virtual adapters number  73
5-3 Select the appropriate virtual I/O client  74
5-4 NPIV configuration with dual Virtual I/O Servers  91
7-1 Live Partition Mobility migration validation function  111
7-2 Partitions with dedicated memory  114
7-3 Logical partitions with shared memory that run different regions  115
7-4 Partitions that support day and night workloads  115
7-5 Sporadically used logical partitions  116
7-6 AMS memory weight on a logical partition  117
Tables
1-1 PowerVM features  3
1-2 Minimum resources that are required for a Virtual I/O Server  4
1-3 Limitations for storage management  6
1-4 Network and storage components  12
2-1 Processor weighting example  19
4-1 Terminology that is used in this chapter  50
4-2 Typical maximum transmission units (MTUs)  53
5-1 Booting from a virtual SCSI with NPIV for data  67
5-2 Recommended settings for virtual I/O client virtual SCSI disks  81
5-3 Recommended settings for virtual I/O client virtual SCSI adapters  82
5-4 Shared Storage Pools minimum requirements per node  82
5-5 Shared Storage Pools minimums and maximums  83
7-1 Default open ports on the Virtual I/O Server  106
7-2 IBM PowerSC security standards  108
Examples
2-1 Using the alt_root_vg command  27
3-1 Virtual I/O Server backup to a remote file system  34
3-2 Daily viosbr schedule  35
3-3 List device mappings from a viosbr backup  36
3-4 The installios command from the HMC  39
3-5 Preparing NIM resources for Virtual I/O Server restore  40
3-6 HMC CLI startup commands  48
5-1 Manage the DS4000 and DS5000 disk driver  66
5-2 The fscsi attribute modification  68
5-3 Monitoring fcs adapter statistics  69
5-4 fcs attributes modification  69
5-5 Check num_cmd_elems using the SDDPCM pcmpath command  69
5-6 HMC virtual slots listing with a single Virtual I/O Server  72
5-7 Example of a virtual storage mapping convention  75
5-8 DS4800 mpio_get_config command  75
5-9 Change the reserve_policy disk attribute  78
5-10 Listing the hdisk0 and vscsi0 path attributes  80
5-11 Changing the vscsi0 priority for hdisk0  80
5-12 Creating the cluster, adding additional nodes, and assigning storage to the virtual I/O client  84
5-13 The prepdev command to prepare the disk  85
5-14 Highlighted disks which are in the storage pool  86
5-15 Using the alert command to configure threshold warnings  87
5-16 Changing attributes on a Fibre Channel adapter on a virtual I/O client  92
6-1 Display of nmon interactive mode commands  95
6-2 Exporting environment variable in ksh shell to nmon  95
6-3 Extended disk output of hdisk4 and vhost1 using viostat  97
6-4 Output from the netstat command  97
6-5 Output of entstat on SEA  98
6-6 Enabling accounting on the Shared Ethernet Adapter  98
6-7 seastat statistics that are filtered by IP address  98
6-8 Example use of the fcstat command  99
6-9 Example of a recording in the nmon format  101
6-10 Listing of the available agents  102
6-11 The lslparutil command  103
6-12 How to calculate processor utilization  104
7-1 Exporting viosecure high-level rules to XML  107
7-2 Applying customized viosecure rules  108
Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A. The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. 
All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: Active Memory, AIX, BladeCenter, DS4000, DS8000, Enterprise Storage Server, Focal Point, GDPS, Geographically Dispersed Parallel Sysplex, GPFS, HACMP, IBM, iSeries, Micro-Partitioning, Parallel Sysplex, PartnerWorld, Power Systems, POWER7, PowerHA, PowerVM, POWER, PureFlex, PureSystems, Redbooks, Redpaper, Redbooks (logo), System p, SystemMirror, Tivoli
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
Preface
This IBM Redbooks publication provides best practices for planning, installing, maintaining, and monitoring the IBM PowerVM Enterprise Edition virtualization features on IBM POWER7 processor technology-based servers. PowerVM is a combination of hardware, PowerVM Hypervisor, and software, which includes other virtualization features, such as the Virtual I/O Server.

This publication is intended for experienced IT specialists and IT architects who want to learn about PowerVM best practices, and focuses on the following topics:

Planning and general best practices
Installation, migration, and configuration
Administration and maintenance
Storage and networking
Performance monitoring
Security
PowerVM advanced features

This publication is written by a group of seven PowerVM experts from different countries around the world. These experts came together to bring their broad IT skills, depth of knowledge, and experiences from thousands of installations and configurations in different IBM client sites.
Rafael Antonioli is a System Analyst at Banco do Brasil in Brazil. He has 12 years of experience with Linux and five years of experience in the AIX and PowerVM field. He holds a Master of Computer Science degree in Parallel and Distributed Computer Systems from Pontifical Catholic University of Rio Grande do Sul (PUCRS). His areas of expertise include implementation, support, and performance analysis of IBM PowerVM, IBM AIX, and IBM PowerHA.

Urban Biel is an IT Specialist in IBM Slovakia. He has been with IBM for six years. He holds a Master degree in Information Systems and Networking from the Technical University of Košice, Slovakia. His areas of expertise include Linux, AIX, PowerVM, PowerHA, IBM GPFS, and also IBM enterprise disk storage systems. He has participated in several Redbooks publications.

Sylvain Delabarre is a certified IT Specialist at the Product and Solutions Support Center in Montpellier, France. He works as an IBM Power Systems Benchmark Manager. He has been with IBM France since 1988. He has 20 years of AIX System Administration and Power Systems experience working in service delivery, AIX, Virtual I/O Server, and HMC support for EMEA.

Bartłomiej Grabowski is an IBM iSeries Senior Technical Specialist in DHL IT Services in the Czech Republic. He has seven years of experience with IBM i. He holds a Bachelor degree in Computer Science from the Academy of Computer Science and Management in Bielsko-Biała. His areas of expertise include IBM i administration, PowerHA solutions that are based on hardware and software replication, Power Systems hardware, and IBM i virtualization that is based on PowerVM. He is an IBM Certified System Administrator. He was a coauthor of the IBM Active Memory Sharing Redbooks publication.

Kristian Milos is an IT Specialist at IBM Australia. Before working at IBM, he spent seven years working at the largest telecommunications organization in Australia. He has 10 years of experience working in enterprise environments, with the past six directly involved with implementing and maintaining AIX, PowerVM, PowerHA, and Power Systems environments.

Fray L Rodríguez is an IBM Consulting IT Specialist working in Power Systems Competitive Sales in the United States. He has 12 years of experience in IT and 17 years of experience in customer service. He holds a Bachelor degree in Software Engineering from the University of Texas at Dallas. Fray also holds 19 professional IBM Certifications, including IBM Expert Certified IT Specialist, IBM Technical Sales Expert on Power Systems, and IBM Advanced Technical Expert on Power Systems.

The project team that created this publication was managed by: Scott Vetter, PMP, IBM Austin
Thanks to the following people for their contributions to this project: Aaron Bolding, Thomas Bosworth, Ben Castillo, Shaival J Chokshi, Gareth Coates, Pedro Alves Coelho, Julie Craft, Rosa Davidson, Ingo Dimmer, Michael Felt, Rafael Camarda Silva Folco, Chris Gibson, Chris Angel Gonzalez, Paranthaman Gopikaramanan, Randy Greenberg, Hemantha Gunasinghe, Margarita Hammond, Jimi Inge, Narutsugu Itoh, Chandrakant Jadhav, Robert C. Jennings, Anil Kalavakolanu, Bob Kovac, Kiet H Lam, Dominic Lancaster, Luciano Martins, Augie Mena, Walter Montes, Guérin Nicolas, Anderson Ferreira Nobre, Rajendra Patel, Viraf Patel, Thomas Prokop, Xiaohan Qin, Vani Ramagiri, Paisarn Ritthikidjaroenchai, Björn Rodén, Humberto Roque, Morgan J Rosas, Stephane Saleur, Susan Schreitmueller, Jorge M Silvestre, Luiz Eduardo Simeone, Renato Stoffalette, Naoya Takizawa, Humberto Tadashi Tsubamoto, Morten Vaagmo, Toma Vincek, Richard Wale, Tom Watts, Evelyn Yeung, and Ken Yu.
Comments welcome
Your comments are important to us! We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways: Use the online Contact us review Redbooks form found at: ibm.com/redbooks
Send your comments in an email to: redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Chapter 1. Introduction and planning
This publication provides recommendations for the architecture, implementation, and configuration of PowerVM virtual environments. In areas where there are multiple configuration options, this publication provides recommendations that are based on the environment and the experiences of the authors and contributors. New best practices might emerge as new hardware technologies become available.

Although this publication can be read from start to finish, it is written to enable the selection of individual topics of interest, and to go directly to them. The content is organized into seven major headings:

Chapter 1, Introduction and planning on page 1, provides an introduction to PowerVM and best practices to plan and size your virtual environment.
Chapter 2, Installation, migration, and configuration on page 15, describes best practices for the creation and installation of a Virtual I/O Server, and migration to new versions.
Chapter 3, Administration and maintenance on page 31, covers best practices for daily maintenance tasks, backup, recovery, and troubleshooting of the Virtual I/O Server.
Chapter 4, Networking on page 49, describes best practices for network architecture and configuration within the virtual environment.
Chapter 5, Storage on page 61, covers best practices for storage architecture and configuration.
Chapter 6, Performance monitoring on page 93, describes best practices for monitoring the Virtual I/O Server performance.
Chapter 7, Security and advanced IBM PowerVM features on page 105, covers best practices for some of the more advanced PowerVM features.
Shared Storage Pools
Integrated Virtualization Manager
Live Partition Mobility
Active Memory Sharing
Active Memory Deduplication
NPIV
Note: Table 1-1 on page 3 shows all the PowerVM features, but these features are grouped in three edition packages: Express, Standard, and Enterprise, to best meet your virtualization needs. The following website shows the features that ship with each PowerVM edition: http://www.ibm.com/systems/power/software/virtualization/editions.html
Furthermore, server firmware might require more system memory to manage the virtual adapters. The Virtual I/O Server supports client logical partitions that run IBM AIX 5.3 or later, IBM i 6.1 or later, and Linux. The minimum version of Linux that is supported might be different depending on the hardware model. Check the hardware manual for the model you need to confirm the minimum version you can use.
requirement might be that you are able to move your virtual clients from one physical server to another without any downtime. This feature is called Live Partition Mobility, and is included in the IBM PowerVM Enterprise Edition. Therefore, you need to include this requirement to ensure that Enterprise Edition is included in the frame, as opposed to the Standard or Express Edition. Power Systems servers offer outstanding flexibility as to the number of adapters, cores, memory, and other components and features that you can configure in each server. Therefore, discuss your requirements with your IBM technical sales team, and they can configure a server that meets your needs.
As a best practice, have at least two Ethernet adapters and two Fibre Channel adapters in your Virtual I/O Server. For small Power Systems servers, with eight cores or less, you can start with 0.5 of entitled processor capacity and 2 GB of memory.

For more powerful Power Systems servers, we use the example of a Virtual I/O Server that uses two 1 Gb Ethernet adapters and two 4 Gb Fibre Channel adapters, supporting around 20 virtual clients. For this scenario, use one core for the entitled processor capacity and 4 GB of entitled memory when you run AIX or Linux. If the virtual clients are running IBM i and are using virtual SCSI, use 1.5 for entitled processor capacity and 4 GB of entitled memory. If the IBM i virtual client is using NPIV, one core of entitled capacity is a good starting point.

High speed adapters, such as 10 Gb Ethernet and 8 Gb Fibre Channel, require more memory for buffering. For those environments, use 6 GB of memory in each scenario, regardless of the operating system that is running on the virtual I/O client. This is especially important to consider if you plan to use NPIV, because the Virtual I/O Server conducts work that is similar to a virtual storage area network (SAN) switch, passing packets back and forth between your SAN and the virtual I/O clients. Therefore, the Virtual I/O Server requires more memory for each virtual Fibre Channel adapter that is created in a virtual client.

Estimating an exact amount of memory that is needed per adapter is difficult without using tools to look at the specific workloads, because the requirements of the adapters vary depending on their technologies, Peripheral Component Interconnect-X (PCI-X) and PCI Express (PCIe), and their configurations. A rule of thumb for estimating the amount of memory that is needed by each adapter is to add 512 MB for each physical high speed Ethernet adapter in the Virtual I/O Server. This number also includes the memory that is needed for virtual Ethernet adapters in the virtual clients. Also, add 140 MB for each virtual Fibre Channel adapter in the client.

Based on this example, for a Virtual I/O Server that has two 10 Gb Ethernet adapters and two 8 Gb Fibre Channel adapters, supports 20 virtual clients, and assumes that each client has one virtual Fibre Channel adapter, we have the following scenario: 2 GB for base workload + 1 GB (512 MB x 2) for Ethernet adapters + 2.8 GB (20 x 140 MB) for virtual Fibre Channel adapters = 5.8 GB. Therefore, our general recommendation is to have 6 GB of memory.
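As a quick sanity check of this rule of thumb, the same arithmetic can be run through the bc calculator from any AIX or Linux shell. The adapter counts below are the ones assumed in the preceding example, not fixed values.

$ echo "scale=2; 2 + (2*512/1024) + (20*140/1024)" | bc
5.73

The result, roughly 5.7 GB, is then rounded up to the recommended 6 GB.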
Core speeds vary: Remember that core speeds vary from system to system, and IBM is constantly improving the speed and efficiency of the IBM POWER processor cores and memory. Therefore, the referenced guidelines are, again, just a starting point. When you create your Virtual I/O Server environment, including all the virtual clients, be sure to test and monitor it to ensure that the assigned resources are appropriate to handle the load on the Virtual I/O Server.
For planning purposes, whichever option you choose, it is a good practice to understand how these options affect your environment. Also consider the amount of resources that you need to satisfy your workloads.
You need to think the same way about your storage component needs. Will NPIV or virtual SCSI be used? If you use virtual SCSI, will Shared Storage Pools be used? Will the boot occur from internal disk drives or SAN? Think about all these components in advance to determine the best way to configure and implement them. Remember to document your choices and the reasons behind them. Table 1-4 contains some of the network and storage components you might consider.
Table 1-4 Network and storage components

Network components:
Physical Ethernet adapters
Virtual Ethernet adapters
Shared Ethernet adapters
Backup virtual Ethernet adapters
Virtual LANs (VLANs)
Etherchannel or Link Aggregation devices

Storage components:
Physical Fibre Channel adapters
Virtual Fibre Channel adapters
Virtual SCSI adapters
SAN volumes
Multi-path I/O
Mirroring internal drives
Storage pools (volume groups)
Shared Storage Pools
If you also want to identify the virtual client through the slot number without having to look at the mapping, you can add its partition ID to the odd or even number. The following example shows this process: <virtual_client_ID><(virtual_server_odd or virtual_server_even)>

For example, in a system with dual Virtual I/O Servers with IDs 1 and 2, and a virtual client with ID 3, we use odd numbers for Virtual I/O Server 1, as in 31, 33, 35, and so on. The second Virtual I/O Server uses even numbers, as in 32, 34, 36, and so on. This system might not be appropriate, though, if you have many clients, because the slot numbers become too high. Also, consider that when you use odd and even numbers, it is difficult to maintain the system if you are frequently using Live Partition Mobility (LPM) to move virtual clients from one physical system to another.

Use low slot numbers: It is a best practice to use low slot numbers and define only the number of virtual adapters that you use. This practice is preferred because the Power Hypervisor uses a small amount of memory for each virtual device that you define.

A different method for slot numbering is to use adapter ID = XY, where the following values apply:
X = adapter type, using 1 for Ethernet, 2 for Fibre Channel, and 3 for SCSI
Y = next available slot number
With this system, your virtual Ethernet adapters are 11 - 19, your virtual Fibre Channel adapters are 20 - 29, and your virtual SCSI adapters are 30 - 39.

LPM: When you use LPM, if a target system does not have the slot available that you want to use, it asks if it can use a new slot number. If slot numbers change, it is important that you keep current documentation of your environment.

As a naming convention for the virtual adapters in a Virtual I/O Server, use a name that is descriptive of the virtual client that uses this virtual adapter.
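Before you commit to a numbering convention, it can be useful to list the virtual adapter slots that are already defined on the managed system from the HMC command line. The following is a minimal sketch; the managed system name 750_1 is a placeholder for your own system name.

hscroot@hmc:~> lshwres -r virtualio --rsubtype slot -m 750_1 --level slot

The output lists each virtual slot number together with the partition that owns it and the adapter type configured in it, which makes gaps and conflicts easy to spot.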
virtual client when you plan your Virtual I/O Server configuration. For instance, some tuning options might be more beneficial on one virtual client operating system than another. Throughout the book, we mention such differences as they arise. You can also use the Fix Level Recommendation Tool (FLRT) to ensure that the version of the Virtual I/O Server is compatible with other components. The version needs to be compatible with the firmware on the server, the version of the operating system on the virtual client, the HMC or IVM, and other IBM software that might be in your environment. To use the FLRT, see this website: http://www14.software.ibm.com/webapp/set2/flrt/home
Chapter 2. Installation, migration, and configuration
For wanted processing units, follow the sizing considerations that are described in 1.3.4, Sizing your Virtual I/O Server on page 8. Set this value to a number that meets the needs of the estimated workload. For maximum processing units, round up the wanted processing units value to the next whole number, and add 50%. For example, if the wanted value is 1.2, the maximum value is 3. It is important to allow room between the wanted and the maximum processing units. This suggestion is because you can only increase the wanted value dynamically, through a dynamic LPAR operation, up to the maximum value. At the same time, it is important not to set the maximum value too high because the Power Hypervisor uses more memory the higher it is.
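If the wanted value does need to grow later, the increase up to the configured maximum can be made dynamically from the HMC command line. The following sketch assumes a managed system named 750_1 and a Virtual I/O Server partition named vios1, and adds 0.5 processing units to the running partition.

hscroot@hmc:~> chhwres -r proc -m 750_1 -o a -p vios1 --procunits 0.5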
Capped or uncapped
The sharing mode of processor cores can be set to capped or uncapped. Capped partitions have a preset amount of maximum processing unit entitlement. However, partitions that are configured with uncapped processor resources are able to use all of their allocation, plus any unused processing units in a shared processor pool.
The load on the Virtual I/O Server varies depending on the demands of the virtual clients. Therefore, you might see spikes in processor or memory usage throughout the day. To address these changes in workload, and to achieve better utilization, use shared and uncapped processors. Therefore, uncapping can provide a significant benefit to partitions that have spikes in utilization.
Weight
When you choose the uncapped partition option, you also need to choose a weight value. For the Virtual I/O Server, best practice is to configure a weight value higher than the virtual clients. The maximum configured value is 255. The Virtual I/O Server must have priority to the processor resources in the frame. Table 2-1 shows an example of distributing weight among different types of environments, depending on their importance.
Table 2-1 Processor weighting example
Value  Usage
255    Virtual I/O Server
200    Production
100    Pre-production
50     Development
25     Test
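One possible way to apply the weights in Table 2-1 is to update the partition profile from the HMC command line, as in the following sketch. The managed system, partition, and profile names are examples only, and the change takes effect the next time the profile is activated.

hscroot@hmc:~> chsyscfg -r prof -m 750_1 -i "name=default,lpar_name=vios1,uncap_weight=255"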
Figure 2-2 on page 20 shows an example of a best practice configuration of Processing Settings.
For maximum memory, add 50% to the wanted memory value. For example, if the wanted value is 4 GB, the maximum value is 6 GB. Similarly to processing units, it is important to allow room between the wanted and the maximum memory values. This consideration is because you can only increase the wanted value dynamically, through a dynamic LPAR operation, up to the maximum value. The maximum memory value is also the number that is used when you calculate the amount of memory that is needed for the page tables to support this partition. For this reason, it is not advantageous to set this maximum setting to an unreasonably high amount. This recommendation is because it would waste memory by setting memory aside for page tables that the partition does not need. Amount of memory: The SPT can help you estimate the amount of memory that is required by the Power Hypervisor depending on your IBM Power Systems server and its configuration. Also, the HMC can show you the amount of memory that is available for partition use under the frame properties, and the amount that is reserved for the Hypervisor.
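To see how much memory the frame has available for partitions and how much is currently held by the Power Hypervisor (system firmware), the system-level memory attributes can also be listed from the HMC command line. The managed system name 750_1 is a placeholder, and the exact attribute names in the output can vary slightly between HMC levels.

hscroot@hmc:~> lshwres -r mem -m 750_1 --level sys

Among other attributes, the output reports the configurable memory, the memory currently available for partition use, and the memory that is reserved for system firmware.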
Figure 2-4 Partition profile properties for source and target virtual adapters
Live Partition Mobility: If you are planning to implement Live Partition Mobility, set all virtual adapters to wanted. Required virtual I/O adapters prevent Live Partition Mobility operations.
2.1.6 Deploying a Virtual I/O Server with the System Planning Tool
After you configure your logical partitions in the System Planning Tool, you can save your system plan file (.sysplan) and import it into an HMC or IVM. You can then deploy the system plan to one or more frames. System plan deployment delivers the following benefits to you:

You can use a system plan to partition a frame and deploy partitions without having to re-create the partition profiles. This plan saves time and reduces the possibility of errors.
You can easily review the partition configurations within the system plan, as necessary.
You can deploy multiple identical systems, almost as easily as a single system.
You can archive the system plan as a permanent electronic record of the systems that you create.
If you do not have a NIM server in your environment, install the Virtual I/O Server from DVD, and then apply the basic installation procedures. You can use the alt_root_vg command to deploy other Virtual I/O Servers.

alt_root_vg command: If you boot from a cloned disk that is made by the alt_root_vg command, we suggest you remove obsolete devices that are in the defined state. Also, you might need to reconfigure the Reliable Scalable Cluster Technology (RSCT) subsystem by using the recfgct command. For more information about the recfgct command, see this website: http://www.ibm.com/support/docview.wss?uid=swg21295662

If you boot the Virtual I/O Server from local Small Computer System Interface (SCSI) disks, remember to mirror the rootvg by using the mirrorios command, and set the boot list to include both SCSI disks by using the bootlist command.
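A minimal sketch of that mirroring step on a Virtual I/O Server that boots from two local disks follows; hdisk0 is assumed to hold rootvg and hdisk1 is assumed to be the free mirror target.

$ mirrorios hdisk1
$ bootlist -mode normal hdisk0 hdisk1
$ bootlist -mode normal -ls

Note that mirrorios can prompt for confirmation and restart the Virtual I/O Server when the mirroring completes, so plan it inside a maintenance window.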
Updating strategy
The following suggestions assist you in determining the best update strategy for your enterprise:

Set a date, every six months, to review your current firmware and software patch levels.

Verify the suggested code levels by using the Fix Level Recommendation Tool (FLRT) on the IBM Support Site: http://www.software.ibm.com/webapp/set2/flrt/home
Check the Virtual I/O Server release lifecycles to plan your next upgrade. See this website: http://www.ibm.com/software/support/lifecycleapp/PLCSearch.wss?q=%28Virtual+I%2FO+Server%29+or++%28PowerVM%29&scope=&ibm-view-btn.x=3&ibm-view-btn.y=9&sort=S

There are several options for downloading and installing a Virtual I/O Server update, such as downloading ISO images, packages, or installing from optical media. To check the latest release and instructions for Virtual I/O Server fix updates, see IBM Fix Central at this website: http://www.ibm.com/support/fixcentral/

ISO images: Do not use utilities to extract ISO images of Virtual I/O Server fix packs or service packs to a local directory or NFS mount point. Burn these images to media. If you need fix packs or service packs on a local directory, download them as a package.

Ensure that a regular maintenance window is available to conduct firmware updates and patching. Once a year is the suggested time frame to conduct the updates.

When you do system firmware updates from one major release to another, always update the HMC to the latest available version first, along with any mandatory HMC patches. Then, do the firmware updates. If the operating system is being updated as well, update the operating system first, then the HMC code, and lastly the system firmware.

In a dual HMC configuration, always update both HMCs in a single maintenance window, or disconnect one HMC until it is updated to the same level as the other HMC.
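When a fix pack or service pack has been downloaded as a package to a directory on the Virtual I/O Server, one common way to apply it is with the updateios command. The following is a sketch only; /home/padmin/update is an example location, and committing previously applied updates first is the usual practice.

$ ioslevel
$ updateios -commit
$ updateios -dev /home/padmin/update -install -accept
$ ioslevel

Running ioslevel before and after the update confirms that the Virtual I/O Server is at the expected level.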
$ alt_root_vg -target hdisk1 -bundle update_all -location /mnt

Before you shut down the Virtual I/O Server, follow these steps:

On a single Virtual I/O Server environment, shut down the virtual I/O clients that are connected to the Virtual I/O Server, or disable any virtual resource that is in use.

In a dual Virtual I/O Server environment, check that the alternate Virtual I/O Server is up and running and is serving I/O to the client (Shared Ethernet Adapter (SEA) failover, virtual SCSI mapping, virtual Fibre Channel mapping, and so on).

If there is Logical Volume Manager mirroring on clients, check that both disks of any mirrored volume group are available in the system and the mirroring is properly synchronized.
If there is a SEA failover, check the configuration of the priority, and that the backup is active on the second Virtual I/O Server. If there is a Network Interface Backup (NIB), check that the Etherchannel is configured properly on the virtual I/O clients.
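One way to confirm the SEA failover priority and which Virtual I/O Server is currently active is to query the Shared Ethernet Adapter itself, as in the following sketch; ent5 is a placeholder for your SEA device name.

$ lsdev -dev ent5 -attr ha_mode
$ entstat -all ent5 | grep -i priority
$ entstat -all ent5 | grep -i state

The entstat output reports the configured priority and whether the adapter is currently in the PRIMARY or BACKUP state.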
Note the location of the rootvg disks. Confirm that you have the Migration DVD for the Virtual I/O Server instead of the Installation DVD for the Virtual I/O Server. Installation media: The media for the migration and for the installation are different. Using the installation media overwrites your current Virtual I/O Server configuration. Before you shut down the Virtual I/O Server that you want to migrate, we suggest you check the following scenarios: In a single Virtual I/O Server configuration, during the migration, the client partition needs to be shut down. When the migration is complete, and the Virtual I/O Server is restarted, the client partition can be brought up without any further configuration. In a dual Virtual I/O Server environment, you can migrate one Virtual I/O Server at a time to avoid any interruption of service to the clients: If there is Logical Volume Manager mirroring on clients, check that both disks of any mirrored volume group are available in the system and the mirroring is properly synchronized. If there is Shared Ethernet Adapter (SEA) failover, check the configuration of the priority, and that the backup is active on the second Virtual I/O Server. If there is a Network Interface Backup (NIB), check that the Etherchannel is configured properly on the virtual I/O clients. Folding: Processor folding currently is not supported for Virtual I/O Server partitions. If folding is enabled on your Virtual I/O Server and migration media is used to move from Virtual I/O Server 1.5 to 2.1.0.13 FP 23, or later, processor folding remains enabled. Upgrading by using migration media does not change the processor folding state. If you installed Virtual I/O Server 2.1.3.0, or later, and did not change the folding policy, then folding is disabled.
Chapter 3. Administration and maintenance
backupios command
The backupios command performs a backup of the Virtual I/O Server to a tape device, an optical device, or a file system (local file system or a remotely mounted Network File System (NFS)). Backing up to a remote file system satisfies having the Virtual I/O Server backup at a remote location. It also allows the backup to be restored from either a Network Installation Management (NIM) server or the HMC. In Example 3-1, a Virtual I/O Server backup is done to an NFS mount which resides on a NIM server.
Example 3-1 Virtual I/O Server backup to a remote file system
$ mount nim:/export/vios_backup /mnt $ backupios -file /mnt -nomedialib Backup in progress. This command can take a considerable amount of time to complete, please be patient... -nomedialib flag: The -nomedialib flag excludes the contents of the virtual media repository from the backup. Unless explicitly required, excluding the repository significantly reduces the size of the backup.
The backupios command that is used in Example 3-1 on page 34, creates a full backup tar file package named nim_resources.tar. This package includes all of the resources that are needed to restore a Virtual I/O Server (mksysb image, bosinst.data, network boot image, and the Shared Product Object Tree (SPOT)) from a NIM or HMC by using the installios command. Section 3.1.4, Restoring the Virtual I/O Server on page 38, describes the restoration methods. Best practice dictates that a full backup is taken before you make any configuration changes to the Virtual I/O Server. In addition to the full backup, a scheduled weekly backup is also a good practice. You can schedule this job by using the crontab command.
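A weekly schedule might look like the following crontab entry for the padmin user, added with crontab -e, which runs a full backup every Sunday at 2 a.m. The target file system and the full path to the ioscli wrapper are assumptions that should be verified on your own Virtual I/O Server.

0 2 * * 0 /usr/ios/cli/ioscli backupios -file /mnt -nomedialib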
viosbr command
The viosbr command performs a backup of the Virtual I/O Server virtual and logical configuration. In Example 3-2, a scheduled backup of the virtual and logical configuration is set up, and existing backups are listed. The backup frequency is daily, and the number of backup files to keep is seven.
Example 3-2 Daily viosbr schedule
$ viosbr -backup -file vios22viosbr -frequency daily -numfiles 7 Backup of this node (vios22) successful $ viosbr -view -list vios22viosbr.01.tar.gz At a minimum, backing up the virtual and logical configuration data before you make changes to the Virtual I/O Server, can help in recovering from configuration errors.
In a DR situation where these disk structures do not exist and network cards are at different location codes, you need to ensure that you back up the following devices: Any user-defined disk structures such as storage pools or volume groups and logical volumes. The linking of the virtual device through to the physical devices. These devices are mostly created at the Virtual I/O Server build and deploy time, but change depending on when new clients are added or changes are made.
$ viosbr -view -file vios22viosbr.01.tar.gz -mapping Details in: vios22viosbr.01 SVSA Physloc Client Partition ID ------------------- ---------------------------------- -------------------vhost0 U8233.E8B.061AB2P-V2-C30 0x00000003 VTD Status rootvg_lpar01 Available
SVEA Physloc ------- --------------------------------------ent6 U8233.E8B.061AB2P-V2-C111-T1 VTD Status Backing Device Physloc ent11 Available ent10 U78A0.001.DNWHZS4-P1-C6-T2
Slot numbers: It is also vitally important to use the slot numbers as a reference for the virtual SCSI, virtual Fibre Channel, and virtual Ethernet devices. Do not use the vhost/vfchost number or ent number as a reference. The vhost/vfchost and ent devices are assigned by the Virtual I/O Server as they are found at boot time or when the cfgdev command is run. If you add in more devices after subsequent boots or with the cfgdev command, these devices are sequentially numbered.

The important information in Example 3-3 on page 36 is not vhost0, but that the virtual SCSI server in slot 30 (the C30 value in the location code) is mapped to physical volume hdisk3.

In addition to the information that is stored in the viosbr backup file, a full system configuration backup can be captured by using the snap command. This information enables the Virtual I/O Server to be rebuilt from the installation media if necessary. The crucial information in the snap is the output from the following commands:

Network settings:
netstat -state
netstat -routinfo
netstat -routtable
lsdev -dev entX -attr
cfgnamesrv -ls
hostmap -ls
optimizenet -list
entstat -all entX
Physical and logical volume devices:
lspv
lsvg
lsvg -lv VolumeGroup

Physical and logical adapters:
lsdev -type adapter

Code levels, users, and security:
ioslevel
motd
loginmsg
lsuser
viosecure -firewall view
viosecure -view -nonint
We suggest that you gather this information in the same time frame as the previous information. The /home/padmin directory (which contains the snap output data) is backed up using the backupios command. Therefore, it is a good location to collect configuration information before a backup. snap output: Keep in mind that if a system memory dump exists on the Virtual I/O Server, it is also captured in the snap output.
installios command: The trailing slash in the NFS location nim:/export/vios_backup/ must be included in the command as shown. The configure client network interface setting must be disabled, as shown by the -n option. This step is necessary because the physical adapter in which we are installing the backup, might already be used by a SEA. If so, the IP configuration fails. Log in and configure the IP if necessary after the installation by using a console session. A definition of each command option is available in the installios man page.
Example 3-4 The installios command from the HMC
hscroot@hmc9:~> installios -p vios22 -i 172.16.22.33 -S 255.255.252.0 -g 172.16.20.1 -d 172.16.20.41:/export/vios_backup/ -s POWER7_2-SN061AB2P -m 00:21:5E:AA:81:21 -r default -n -P auto -D auto ... ...Output truncated ... # Connecting to vios22 # Connected # Checking for power off. # Power off complete. # Power on vios22 to Open Firmware. # Power on complete. # Client IP address is 172.16.22.33. # Server IP address is 172.16.20.111. # Gateway IP address is 172.16.20.1. # Subnetmask IP address is 255.255.252.0. # Getting adapter location codes. # /lhea@200000000000000/ethernet@200000000000002 ping successful. # Network booting install adapter. # bootp sent over network. # Network boot proceeding, lpar_netboot is exiting. # Finished. Now open a terminal console on the server to which you are restoring, in case user input is required. Tip: If the installios command seems to be taking a long time to restore, this lag is most commonly caused by a speed or duplex misconfiguration in the network.
# nim -o define -t mksysb \
  -a server=master \
  -a location=/export/vios_backup/vios22.mksysb vios22_mksysb

# nim -o define -t spot \
  -a server=master \
  -a location=/export/vios_backup/spot \
  -a source=vios22_mksysb vios22_spot

# nim -o bos_inst \
  -a source=mksysb \
  -a mksysb=vios22_mksysb \
  -a spot=vios22_spot \
  -a installp_flags=-agX \
  -a no_nim_client=yes \
  -a boot_client=no \
  -a accept_licenses=yes vios22

Important: The no_nim_client=yes option instructs the NIM server not to register the Virtual I/O Server as a NIM client. When this option is set to no, the NIM server configures an IP address on the physical adapter that was used during installation. If this adapter is part of the SEA, it causes errors during boot.

With all the NIM resources ready for installation, you can now proceed to start the Virtual I/O Server logical partition in SMS mode and perform a network boot. Further information about this process can be found in IBM PowerVM Virtualization Managing and Monitoring, SG24-7590.
Figure 3-1 NIM server placement scenarios (1, 2, and 3) across Frame A and Frame B, each frame with dual Virtual I/O Servers
The following guide explains the numbers that are shown at the top of Figure 3-1 on page 41:
1. The NIM is a stand-alone server, independent of the virtualized environment.
2. The NIM that is responsible for managing the logical partitions on Frame A is a logical partition on Frame B, and vice versa.
3. The NIM uses dedicated resources (that is, the network and storage are not managed by the Virtual I/O Server).
In all three scenarios, the NIM continues to function if one or all of the Virtual I/O Servers on a single frame are unavailable.
If you have multiple workloads that require multiple partition profiles, do not clean up or rename the partition profiles. When you activate the profiles, do not use a date-based naming scheme; use one that is meaningful to the workload, such as DB2_high.
When a virtual Fibre Channel adapter is created for a virtual I/O client, a pair of unique WWPNs is assigned to this adapter by the Power Hypervisor. An attempt to add the same adapter at a later stage results in the creation of another pair of unique WWPNs. When you add virtual Fibre Channel adapters into a virtual I/O client with a dynamic LPAR operation, use the Overwrite existing profile option to save the permanent partition profile. This option is shown in Figure 3-4. This choice results in the same pair of WWPNs in both the active and saved partition profiles.
IBM i tip: On IBM i, you can use the virtual media repository as an alternate restart device for Licensed Internal Code installation.
If you use Live Partition Mobility (LPM) to optimize your power consumption, set partition auto startup for the Virtual I/O Servers. This setting ensures that after the server is started, it is ready for LPM as soon as possible. For more information, see 7.3, Live Partition Mobility on page 109.

Schedule startup and shutdown: You can schedule the Power Systems server startup and shutdown via the HMC: Servers -> ServerName -> Operations -> Schedule Operations.

Select Automatically start when the managed system is powered on for the Virtual I/O Servers, and for any logical partitions that do not depend on the Virtual I/O Servers. We also suggest enabling this function on logical partitions for which you want to achieve the best memory and processor affinity. To set this option in the HMC, select the logical partition and the profile that you want to start automatically. Choose Settings and select the check box for an automatic start, as shown in Figure 3-6.
You can specify an order in which LPARs are started automatically. The automatic startup of all the logical partitions might work in your environment; however, there are some things to consider: After the Power Systems server starts up, for every LPAR that is being activated, new memory and processor affinity will be defined.
If you start all the partitions at the same time, the client logical partitions with boot devices that depend on the Virtual I/O Server (virtual SCSI or NPIV) wait and retry the boot. The partitions retry the boot until at least one Virtual I/O Server finishes its activation. In dual Virtual I/O Server setups, the following conditions can occur:
- Your logical partition boots after the first Virtual I/O Server comes up. This situation might result in stale physical partitions (PPs) in volume groups that are mirrored through both Virtual I/O Servers.
- Some storage paths might be in an inactive state.
- All logical partitions might be using only a single Virtual I/O Server for network bridging.
- There might be application dependencies, for example, a Domain Name System (DNS), a Lightweight Directory Access Protocol (LDAP), a Network File System (NFS), or a database that is running on the same Power Systems server. If there are dependencies, check that these partitions activate correctly.
Consider the following options when you are defining your startup order list:
- Start the Virtual I/O Servers first or set them to start automatically.
- Set Partition start policy to Auto-Start Always or Auto-Start for Auto-Recovery, as shown in Figure 3-7. In the HMC, choose Server -> ServerName -> Properties, Power-On Parameters.
The LPARs with the highest memory and the highest number of processors are started first to achieve the best processor and memory affinity. For critical production systems, you might want to activate them before the Virtual I/O Servers.
Start the LPARs that provide services for other partitions. For example, first start with the DNS and LDAP servers. Also, start the NFS server first, especially if there are NFS clients with NFS hard mounts on startup. Prepare the HMC commands to simplify startup. A spreadsheet might help you create those commands. To start an LPAR and open its virtual terminal on a specific system, use the commands shown in Example 3-6.
Example 3-6 HMC CLI startup commands
chsysstate -m <servername> -o on -r lpar -n vios1a -f <profilename>
mkvterm -m <servername> -p vios1a

The startup sequence does not need to be serial; many LPARs can be started at the same time. The following scenario provides a server startup example (a sketch of this staged sequence as HMC CLI commands follows):
1. Activate the critical systems with a high amount of memory and processors first.
2. Start all Virtual I/O Servers at the same time.
3. After the Virtual I/O Servers are up and running, activate all your DNS, LDAP, or NFS servers at the same time.
4. Activate all database LPARs at the same time. There might be a cluster that needs to be started.
5. After the database LPARs are up and running, activate all the application LPARs at the same time.

Grouping partitions: You can use system profiles to group the partitions. For more information, see this website:
http://www.ibm.com/developerworks/aix/library/au-systemprofiles/
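The following sketch strings those stages together as HMC CLI commands. The server, partition, and profile names are placeholders; run each stage only after the previous one reports a Running state, which you can check with lssyscfg -r lpar -F name,state.

# Stage 1: critical LPARs with large memory and processor allocations, then both Virtual I/O Servers
chsysstate -m POWER7_1 -o on -r lpar -n bigdb01 -f normal
chsysstate -m POWER7_1 -o on -r lpar -n vios1a -f default
chsysstate -m POWER7_1 -o on -r lpar -n vios1b -f default

# Stage 2: infrastructure partitions (DNS, LDAP, NFS)
chsysstate -m POWER7_1 -o on -r lpar -n dns01 -f default
chsysstate -m POWER7_1 -o on -r lpar -n nfs01 -f default

# Stage 3: database partitions, then application partitions
chsysstate -m POWER7_1 -o on -r lpar -n db2prod -f default
chsysstate -m POWER7_1 -o on -r lpar -n app01 -f default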
Chapter 4.
Networking
Virtual network infrastructure that is used in IBM Power Systems is extremely flexible and allows for a number of configuration options. This chapter describes the most common networking setups that use a dual Virtual I/O Server configuration as a best practice.
The following list contains best practices for your virtual environment:
- Before you start planning your virtual network infrastructure, speak with your network administrators and specialists to synchronize your terminology.
- Keep things simple. Document them as you go.
- Use VLAN tagging. Use the same VLAN IDs in the virtual environment as they exist in the physical networking environment.
- For virtual adapters on client logical partitions (LPARs), use Port VLAN IDs. For simplification of the installation, do not configure multiple VLANs on one adapter and do not use AIX VLAN tagged interfaces on it.
- Use hot-pluggable network adapters for the Virtual I/O Server instead of the built-in integrated network adapters. They are easier to service.
- Use two Virtual I/O Servers to allow concurrent online software updates to the Virtual I/O Server.
- Spread physical Virtual I/O Server resources across multiple enclosures and I/O drawers.
- Configure an IP address on a Shared Ethernet Adapter (SEA) to allow the ping feature of a SEA failover.
- Use separate virtual Ethernet adapters for Virtual I/O Server management rather than putting the IP address on the SEA. You can lose network connectivity if you are changing SEA settings.
- Use a vterm when you configure the network on a Virtual I/O Server.
- Find a balance between the number of virtual Ethernet adapters and the number of virtual local area networks (VLANs) for each virtual Ethernet adapter in a SEA.
- Try to group common VLANs on the same virtual Ethernet adapter. For example, group production VLANs on one adapter and non-production VLANs on another.
- Use multiple virtual switches when you work in multi-tenant environments.

VLAN ID 3358: Avoid the use of VLAN ID 3358 in your infrastructure. It is reserved to enable the tme attribute for the virtual Fibre Channel and SAS adapters.
Remember the following technical constraints during your configuration:
- The maximum number of VLANs per virtual adapter is 21 (20 VLAN IDs (VIDs) and 1 Port VLAN ID).
- The maximum number of virtual adapters for each SEA is 16.
- The maximum number of physical Ethernet adapters in a link aggregated adapter is eight for primary and one for backup.
- The maximum virtual Ethernet frame size on the Power Systems server is 65,408 bytes.
VLAN bridging
If you use the suggested VLAN tagging option for the virtual network infrastructure, there are three ways to define VLANs for one SEA:
- All VLANs are on one virtual adapter, with one adapter under a SEA.
- One VLAN per virtual adapter, with many adapters under a SEA.
- A combination of the two configurations.
Changing the list of virtual adapters that are bridged by a SEA can be done dynamically. One SEA can have a maximum of 16 virtual Ethernet adapters. If you need to bridge more than 16 VLANs for each SEA, define more VLANs for each virtual Ethernet adapter. To change a set of VLANs on a virtual Ethernet adapter, check whether your environment supports dynamic change. For more information, see the following website:
http://pic.dhe.ibm.com/infocenter/powersys/v3r1m5/index.jsp?topic=/p7hb1/iphb1_vios_managing_vlans.htm
If your environment does not support dynamic changes, remove the adapter from the SEA and then dynamically remove the adapter from the LPAR. Alter the VLAN definitions on the virtual adapter, add it dynamically back to the system, and alter the SEA configuration. During this operation, all access to the VLANs defined on the virtual adapter is unavailable through that Virtual I/O Server. However, all client LPARs automatically fail over and use the other Virtual I/O Server. Remember to apply the changes to the partition profile.
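As a minimal sketch (ent9 is assumed to be the SEA and ent4/ent5 its trunk virtual Ethernet adapters), the set of bridged adapters is altered through the virt_adapters attribute of the SEA:

$ chdev -dev ent9 -attr virt_adapters=ent4,ent5
ent9 changed

On levels that do not support the dynamic change, add the -perm flag and apply the new adapter set as part of the remove and re-add sequence described above.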
When data over 1500 bytes per packet is sent over a network, consider switching to a larger MTU value. For example, the size of a DNS query is small, and a large MTU value has no effect here. However, backups over the network are very demanding on the bandwidth and might benefit from a larger MTU value. Ensure that path MTU discovery is on. Use the pmtu command on AIX, the CFGTCP command on IBM i, and the tracepath command on Linux.

To change the MTU on a specific adapter, follow these steps:
- On AIX, enter chdev -a mtu=<new mtu> -l enX
- On Linux, see the distribution documentation. The generic rule is to use the ifconfig command. Also, use the appropriate parameters under /proc/sys/net/
- On IBM i, use the following procedure:
  a. In the command line, type CFGTCP and press Enter.
  b. Select option 3 (Change TCP/IP attributes).
  c. Type 2 (Change) next to the route that you want to change.
  d. Change the MTU size in the Maximum transmission unit field. Press Enter to apply the change.
If you send data through a network and the data is larger than the MTU of the network, it becomes fragmented, which has a negative effect on performance. If you experience network issues after an MTU change, check the MTU for a specific remote host or network by using ping -s <size> on AIX or Linux. Or, on IBM i, use PING RMTSYS(<remote address>) PKTLEN(<size>).
It is important to set flow control on both the switch and the Virtual I/O Server to avoid packet resending when the receiver is busy. Unless you have IBM i or Linux in your environment, enable the large_send and large_receive attributes. For optimum performance, especially on a 10-Gb network, tuning needs to be performed. The most important settings are provided in the following list (a sketch of applying them follows the list):
- Physical adapters on the Virtual I/O Server: jumbo_frames=yes, large_send=yes, large_receive=yes, flow_ctrl=yes
- Shared Ethernet Adapters on the Virtual I/O Server: jumbo_frames=yes, largesend=yes, large_receive=yes
- Virtual Ethernet adapters on the Virtual I/O Server under the SEA: dcbflush_local=yes
- Client partitions running AIX:
  - NIB on the client, if applicable: jumbo_frames=yes
  - Network interface (enX): mtu set to the largest possible value for your network, mtu_bypass=yes
  - Network options: tcp_pmtu_discover=1, udp_pmtu_discover=1
  - To specify the MTU for a specific host or network, add a static route with the -mtu parameter. Put it in /etc/rc.net: route add <destination> <GW> -mtu <MTU>
- Client partitions running IBM i: set the MTU as described in 4.1.2, Maximum transmission unit best practices on page 52
- Client partitions running Linux: see the documentation for your distribution. The generic rule is to use the ifconfig command. Also, set the appropriate parameters under /proc/sys/net/
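A minimal sketch of applying these attributes on the Virtual I/O Server. The device names are assumptions (ent0 as the physical 10-Gb port and ent9 as the SEA); -perm defers the physical adapter change until the adapter is reconfigured, because it is normally in use by the SEA:

$ chdev -dev ent0 -attr jumbo_frames=yes large_send=yes large_receive=yes flow_ctrl=yes -perm
ent0 changed
$ chdev -dev ent9 -attr jumbo_frames=yes largesend=yes large_receive=yes
ent9 changed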
you must either have two Virtual I/O Servers or use the Live Partition Mobility (LPM):
- Use LA wherever possible. You achieve high availability and an increase in network performance.
- If you cannot use LA, use NIB. Single Virtual I/O Server setups are common on hardware that has a limited number of ports.
- If possible, use VLAN tagging.
- Use ports from different adapters that are in different enclosures and I/O drawers.
One clear advantage that the client-side failover solution had over the server-side failover solution was that the client-side option allows load sharing. With the load sharing feature that was added to the SEA high availability configuration, this advantage is reduced.
All the virtual Ethernet adapters of each Virtual I/O Server connect to one virtual switch. Try to avoid defining virtual Ethernet adapters from different Virtual I/O Servers in one virtual switch. This way, you eliminate the chance of networking issues because of a network misconfiguration. Port VLAN ID: The PVID that is used on the virtual Ethernet adapter, which makes up the SEA, can be a PVID which is not used in your network.
Figure 4-1 Dual Virtual I/O Server configuration with two virtual switches
Each client partition has a NIB that is made up of two virtual Ethernet adapters for every VLAN to which it connects. For details about a NIB setup that uses virtual Ethernet adapters, see 4.1.1, Shared Ethernet Adapter considerations on page 51. For more information and to obtain a setup guide, see this website: http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101752
Figure 4-2 Dual Virtual I/O Server configuration with SEA and load balancing
The SEA is connected to two virtual Ethernet adapters; each adapter has a different set of VLAN IDs. Virtual Ethernet adapters on different Virtual I/O Servers must have the same set of VLANs and different trunk priorities. In addition, the SEA is connected to another virtual Ethernet adapter with a PVID=100 that is used as a SEA control channel. The control channel is used for SEA heartbeating and exchanging information between the two SEA adapters on the set of VLAN IDs that each SEA bridges. For more information and to obtain a setup guide, see this website: http://www.ibm.com/support/docview.wss?uid=isg3T7000527
Chapter 5.
Storage
The Virtual I/O Server has different means to provide storage access to virtual I/O clients. This chapter addresses the best practices on the different methods for presenting and managing storage on the Virtual I/O Server and virtual I/O clients.
The best practice for booting a Virtual I/O Server is to use internal disks rather than external SAN storage, if the server architecture and available hardware allow it. The following list provides reasons for booting from internal disks:
- The Virtual I/O Server does not require specific multipathing software to support the internal booting disks. This configuration helps when you perform maintenance, migration, and update tasks.
- The Virtual I/O Server does not need to share Fibre Channel adapters with virtual I/O clients, which helps if a Fibre Channel adapter replacement is required.
- The virtual I/O clients might have issues with the virtual SCSI disks presented by the Virtual I/O Server that are backed by SAN storage. If so, the troubleshooting can be performed from the Virtual I/O Server. We do not recommend the allocation of logical volume backing devices for virtual I/O clients in the rootvg of the Virtual I/O Server.
- The SAN design and management is less critical.
- Access to the dump device is simplified.

A SAN boot provides the following advantages:
- SAN hardware can accelerate booting through its cache subsystems.
- Redundant Array of Independent Disks (RAID) and mirroring options might be improved.
- A SAN boot is able to take advantage of advanced features that are available with SAN storage.
- It is easy to increase capacity.
- It provides smaller server hardware footprints.
- Zones and cables can be set up and tested before client deployment.

In general, avoid single points of failure. The following list provides best practices for maintaining Virtual I/O Server availability (a sketch of the mirroring commands follows this list):
- Mirror rootvg by using the mirrorios -defer hdiskX command, and then reboot the Virtual I/O Server at your earliest convenience.
- If rootvg is mirrored, use disks from two different disk controllers. Ideally, those disk controllers are on different Peripheral Component Interconnect (PCI) busses.

Booting from external storage: If you are booting from external storage, use multipathing software, with at least two Fibre Channel adapters that are connected to different switches.
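A minimal sketch of the rootvg mirroring practice. The disk name hdisk1 is an assumption for a second internal disk on a different controller; the -defer flag postpones the change that requires a reboot:

$ extendvg rootvg hdisk1
$ mirrorios -defer hdisk1
$ shutdown -restart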
For more information about installing the Virtual I/O Server on a SAN, see:
- IBM Flex System p260 and p460 Planning and Implementation Guide, SG24-7989
  http://www.redbooks.ibm.com/redbooks/pdfs/sg247989.pdf
- IBM System Storage DS8000 Host Attachment and Interoperability, SG24-8887
  http://www.redbooks.ibm.com/redbooks/pdfs/sg248887.pdf
5.1.2 Multipathing
Multipathing is a best practice in terms of performance and redundancy; it can be implemented from both the virtual I/O client and the Virtual I/O Server. Multipathing provides load balancing and failover capabilities if an adapter becomes unavailable. This process also helps if there is a SAN problem on the path to the external storage. A virtual SCSI configuration requires more management from the Virtual I/O Server than NPIV. Figure 5-1 on page 65 shows an example of dual Virtual I/O Servers that provide a multipath SAN disk to a virtual I/O client. In this case, multipathing is managed from both the virtual I/O client and Virtual I/O Server.
Figure 5-1 Virtual SCSI client with multipathing from dual Virtual I/O Servers
For more information about virtual I/O client configuration, see section 5.2.2, Configuring the Virtual I/O Server with a virtual SCSI on page 71.
$ oem_setup_env
# manage_disk_drivers -l
Device          Present Driver   Driver Options
2810XIV         AIX_AAPCM        AIX_AAPCM,AIX_non_MPIO
DS4100          AIX_APPCM        AIX_APPCM,AIX_fcparray,AIX_SDDAPPCM
DS4200          AIX_APPCM        AIX_APPCM,AIX_fcparray,AIX_SDDAPPCM
DS4300          AIX_APPCM        AIX_APPCM,AIX_fcparray,AIX_SDDAPPCM
DS4500          AIX_APPCM        AIX_APPCM,AIX_fcparray,AIX_SDDAPPCM
DS4700          AIX_APPCM        AIX_APPCM,AIX_fcparray,AIX_SDDAPPCM
DS4800          AIX_APPCM        AIX_APPCM,AIX_fcparray,AIX_SDDAPPCM
DS3950          AIX_APPCM        AIX_APPCM,AIX_SDDAPPCM
DS5020          AIX_APPCM        AIX_APPCM,AIX_SDDAPPCM
DCS3700         AIX_APPCM        AIX_APPCM
DS5100/DS5300   AIX_APPCM        AIX_APPCM,AIX_SDDAPPCM
DS3500          AIX_APPCM        AIX_APPCM
XIVCTRL         MPIO_XIVCTRL     MPIO_XIVCTRL,nonMPIO_XIVCTRL
SDDPCM: If you need to have SDDPCM installed or upgraded, it toggles the DS4000 and DS5000 device driver management to SDDPCM (AIX_SDDAPPCM). Check and change the parameter before you reboot the Virtual I/O Server. For more information, see the SDDPCM for AIX and Virtual I/O Server support matrix at this website: http://www.ibm.com/support/docview.wss?rs=540&uid=ssg1S7001350#AIXSDDPCM
Fibre Channel protocol driver attributes

The fscsi devices include specific attributes that must be changed on Virtual I/O Servers. It is a best practice to change the fc_err_recov and the dyntrk attributes of the fscsi device. Both attributes can be changed by using the chdev command, as shown in Example 5-2 on page 68.
$ chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes
fscsi0 changed

Changing the fc_err_recov attribute to fast_fail fails any I/Os immediately if the adapter detects a link event, such as a lost link between a storage device and a switch. The fast_fail setting is only recommended for dual Virtual I/O Server configurations. Setting the dyntrk attribute to yes allows the Virtual I/O Server to tolerate cable changes in the SAN. It is a best practice to change the fscsi attributes before you map the external storage so that you do not need to reboot the Virtual I/O Server.
Fibre Channel device driver attributes

The fcs devices include specific attributes that can be changed on Virtual I/O Servers. These attributes are num_cmd_elems and max_xfer_size. As a best practice, complete a performance analysis on the adapters and change these values, if needed. This analysis can be done with tools such as the fcstat command, as shown in Example 5-3 on page 69.

num_cmd_elems   Modifies the number of commands that can be queued to the adapter. Increasing num_cmd_elems decreases the No Command Resource Count. Increasing num_cmd_elems also makes it more likely to see the No Adapter Elements Count or No DMA Resource Count increasing.
max_xfer_size   Has an effect on the direct memory access (DMA) region size that is used by the adapter. The default max_xfer_size (0x100000) gives a small DMA region size. Tuning up max_xfer_size to 0x200000 provides a medium or large DMA region size, depending on the adapter.
Fscsi and fcs changes: Any change to the fscsi and fcs attributes needs to be first checked with your storage vendor. Example 5-3 on page 69 shows that there is no need to change the max_xfer_size, because the No DMA Resource Count did not increase. In the same example, consider increasing num_cmd_elems because the No Command Resource Count increased. These values are measured since the last boot or the last reset of the adapter statistics.
$ fcstat fcs0|grep -p 'Driver Information'
IP over FC Adapter Driver Information
  No DMA Resource Count: 0
  No Adapter Elements Count: 395
FC SCSI Adapter Driver Information
  No DMA Resource Count: 0
  No Adapter Elements Count: 395
  No Command Resource Count: 2415

Both attributes can be changed by using the chdev command, as shown in Example 5-4.
Example 5-4 fcs attributes modification
$ lsdev -dev fcs0 -attr max_xfer_size,num_cmd_elems
value
0x100000
500
$ chdev -dev fcs0 -attr num_cmd_elems=1024 -perm
fcs0 changed
$ chdev -dev fcs0 -attr max_xfer_size=0x200000 -perm
fcs0 changed

In summary, consider the following best practices for the Fibre Channel device driver values:
- Do not increment these values without an appropriate analysis.
- Increment these values gradually.
- Reboot the Virtual I/O Servers for these changes to take effect.

Example 5-5 demonstrates how to check whether there is a need to increase num_cmd_elems by using the SDDPCM commands. We can verify that the value of 869 did not reach the num_cmd_elems limit of 1024.
Example 5-5 Check num_cmd_elems using the SDDPCM pcmpath command
$ oem_setup_env
# pcmpath query adapter

Total Dual Active and Active/Asymmetric Adapters : 6

Adpt#  Name     State    Mode    Select  Errors  Paths  Active
    0  fscsi16  NORMAL   ACTIVE    3032       0     16      16
    1  fscsi17  NORMAL   ACTIVE
    2  fscsi0   NORMAL   ACTIVE
    3  fscsi4   NORMAL   ACTIVE
    4  fscsi8   NORMAL   ACTIVE
>   5  fscsi12  NORMAL   ACTIVE

{kfw:root}/ # pcmpath query adaptstats 5

Adapter #: 5
=============
            Total Read    Total Write   Active Read   Active Write
I/O:        106925273     33596805      0             0
SECTOR:     3771066843    1107693569    0             0

{kfw:root}/ # lsattr -El fcs12 -a num_cmd_elems
num_cmd_elems 1024 Maximum number of COMMANDS to queue to the adapter True
The best practice for selecting between these virtual SCSI options is to export disk devices that are backed by SAN storage as physical volumes. With internal devices, if you have enough disks for all the virtual I/O clients, assign the entire disk as a backing device. However, if you have a limited number of disks, create storage pools so that storage can be managed from the Virtual I/O Server.
IBM i
Virtual SCSI allows a connection to external storage that does not natively support the IBM i block size. IBM i, for historical reasons, works with 520 bytes per sector of formatted storage, whereas the Virtual I/O Server uses the industry-standard 512 bytes per sector. In this case, the Hypervisor manages the block size conversion for the IBM i virtual I/O clients.

Note: With virtual SCSI, mirroring of client volume groups must be implemented at the client level. The logical volume backing device cannot be mirrored on the Virtual I/O Server. Ensure that the logical volume backing devices sit on different physical volumes and are served from dual Virtual I/O Servers.
In simple environments, the management can be simplified by keeping virtual adapter slot numbers consistent between the client and the server. Start the virtual SCSI slots at 20, then add the lpar_id of the client to this base. Example 5-6 shows the HMC command output.
Example 5-6 HMC virtual slots listing with a single Virtual I/O Server
hscroot@hmc9:~>lshwres -r virtualio --rsubtype scsi -m POWER7_1-SN061AA6P --level lpar -F lpar_name,lpar_id,slot_num,adapter_type,remote_lpar_name,remote_lpar_id,remote_slot_num | grep AIX_02
vios1a,1,25,server,AIX_02,5,25
AIX_02,5,25,client,vios1a,1,25

In Example 5-6, vios1a has slot 25, which maps to AIX_02 slot 25. With dual Virtual I/O Servers, the adapter slot numbers can be alternated from one Virtual I/O Server to the other. The first Virtual I/O Server can use odd-numbered slots, and the second can use even-numbered slots. In a dual server scenario, allocate slots in pairs, with each client using two adjacent slots, such as 21 and 22, or 33 and 34.

Slot numbering scheme: The slot numbering scheme can be altered when Live Partition Mobility (LPM) migrates a logical partition.

Increase the Maximum virtual adapters value above the default value of 10 when you create a logical partition (LPAR) profile. The appropriate number for your environment depends on the virtual adapter slot scheme that you adopt. The allocation needs to be balanced because each unused virtual adapter slot uses a small amount of memory. Figure 5-2 on page 73 shows an example of the Maximum virtual adapters value in a logical partition profile.
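A sketch of creating the server-side adapter at a chosen slot with a dynamic LPAR operation from the HMC. The names and slot 25 follow the base-20-plus-lpar_id scheme above and are assumptions; remember to save the partition profile afterward:

hscroot@hmc9:~> chhwres -r virtualio --rsubtype scsi -m POWER7_1-SN061AA6P -o a -p vios1a \
-s 25 -a "adapter_type=server,remote_lpar_name=AIX_02,remote_slot_num=25"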
Partition profile: The maximum number of virtual adapter slots that are available on a partition is set by the partition profile. The LPAR must be reactivated for a profile change to take effect.

It is a best practice to use separate virtual adapter pairs for different types of backing devices:
- For UNIX client logical partitions, do not map boot and data disks on the same virtual adapter. As an example, for AIX, have rootvg and data volume groups on separate virtual SCSI server adapters.
- Do not map shared storage pool logical units, logical volumes, and physical volumes on the same virtual SCSI server adapter.
For optimum performance and availability, do not share a vhost to map different types of physical volumes. The max_transfer_size for storage devices might be different. The max_transfer_size is negotiated when the virtual client SCSI adapter is first configured. If a new disk with a higher max_transfer_size is mapped to the related vhost, it cannot be configured at the client side, and a reboot might be required. In a SAN environment, a LUN is assigned to the Virtual I/O Server Fibre Channel adapter. The Virtual I/O Server maps the LUN to the vhost that is associated with a virtual SCSI client adapter. From a security perspective, ensure that the specific client is connected to the virtual SCSI server adapter by explicitly selecting a partition, as shown in Figure 5-3.
Naming conventions
The choice of a naming convention is essential for tracking the virtual to physical relationship, especially in virtual SCSI configurations. If you want to uniquely identify the virtual client target devices, use a convention similar to the following definition:

<client_lpar_name><bd_type><client_vg><hdisk_number>

client_lpar_name   This field needs to be a representative subset, not too long.
bd_type            The type of backing device that is used:
                   <L> for Logical Volume
                   <D> for physical disk or LUN mapping
                   <S> for Shared Storage Pool mapping
                   <V> for Virtual optical devices
                   <T> for Tape devices
client_vg          For an AIX client, a subset of the VG name.
hdisk_number       You can start with hd0, and then increment. The disk number can change at the client partition, so choosing the same number is not the best idea. It is probably better to increment it.
Example 5-7 shows how you can use this naming convention.
Example 5-7 Example of a virtual storage mapping convention
$ mkvdev -vdev hdisk10 -vadapter vhost35 -dev Lpar1DrvgHd0
Lpar1DrvgHd0 Available
$ mkvdev -vdev hdisk11 -vadapter vhost36 -dev Lpar1DsapHd1
Lpar1DsapHd1 Available
$ oem_setup_env
# mpio_get_config -Av
Frame id 0:
    Storage Subsystem worldwide name: 60ab800114b1c00004fad119f
    Controller count: 2
    Partition count: 1
    Partition 0:
    Storage Subsystem Name = 'DS4800POK-3'
    hdisk#   LUN #   Ownership       User Label
    hdisk6   3       A (preferred)   aix02DrvgHd0
    hdisk7   4       B (preferred)   aix02DdvgHd1
# exit
$ mkvdev -vdev hdisk6 -vadapter vhost33 -dev aix02DrvgHd0
aix02DrvgHd0 Available
$ lsmap -vadapter vhost33
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost33         U8233.E8B.061AA6P-V1-C51                     0x00000005

VTD                   aix02DrvgHd0
Status                Available
LUN                   0x8100000000000000
Backing device        hdisk6
Physloc               U5802.001.0086848-P1-C2-T1-W201600A0B829AC12-L3000000000000
Mirrored              false
oem_setup_env: Using oem_setup_env is not a best practice unless directed by IBM support, but is permitted for some storage configuration.
Logical volumes
The Virtual I/O Server can export logical volumes to virtual I/O clients. This method does have some advantages over physical volumes:
- Logical volumes can subdivide physical disk devices between different clients.
- System administrators with AIX experience are already familiar with the Logical Volume Manager (LVM).
In this section, the term volume group refers to both volume groups and storage pools. Also, the term logical volume refers to both logical volumes and storage pool backing devices.
Logical volumes cannot be accessed by multiple Virtual I/O Servers concurrently. Therefore, they cannot be used with Multipath I/O (MPIO) on the virtual I/O client. Multipathing needs to be managed at the Virtual I/O Server, as explained in 5.1.2, Multipathing on page 64. The following list provides best practices for logical volume mappings:
- Avoid the use of rootvg on the Virtual I/O Server to host exported logical volumes. Certain types of software upgrades and system restores might alter the logical volume to target device mapping for logical volumes within rootvg, requiring manual intervention. Also, it is easier to manage Virtual I/O Server rootvg disk replacement when it does not have virtual I/O clients that use logical volumes as backing devices.
- The default storage pool in the Integrated Virtualization Manager (IVM) is the root volume group of the Virtual I/O Server. Ensure that you create and choose different storage pools to host client backing devices.
- Although logical volumes that span multiple physical volumes are supported, it is best if a logical volume fully resides on a single physical volume for optimum performance and for maintenance.
- Mirror the disks in the virtual I/O client. Ensure that the logical volume backing devices are on different physical disks. This configuration also helps with physical disk replacements at the Virtual I/O Server.
- In dual Virtual I/O Server configurations, if one server is rebooted, ensure that the mirroring is synced on the virtual I/O clients.
Exporting physical volumes makes administration easier because it does not require that you manage sizes on the Virtual I/O Server. The Virtual I/O Server does not allow partitioning of a single internal physical disk among multiple clients.

Rootvg disk: Moving a rootvg disk from one physical managed system to another is only supported by using LPM.

In the Virtual I/O Server, there is no need to subdivide SAN-attached disks, because storage allocation can be managed at the storage server. In the SAN environment, provision and allocate LUNs to the Virtual I/O Server. Then, export them to the virtual I/O clients as physical volumes.

Cloning services: Usage of cloning services on the rootvg disk is only supported for offline backup, restore, and recovery.
reserve_policy
Whenever you need to map the same external volume to a virtual I/O client from a dual or multiple Virtual I/O Server configuration, change the reserve_policy, as shown in Example 5-9. The attribute name reserve_policy might be called differently by your storage vendor.
Example 5-9 Change the reserve_policy disk attribute
$ lsdev -dev hdisk12 -attr reserve_policy
value
single_path
$ chdev -dev hdisk12 -attr reserve_policy=no_reserve
hdisk12 changed
$ lsdev -dev hdisk12 -attr reserve_policy
value
no_reserve
Small Computer System Interface queue_depth

Increasing the value of the queue_depth attribute improves the throughput of the disk in some configurations. However, there are several other factors that must be considered. These factors include the value of queue_depth for all of the physical storage devices on the Virtual I/O Server that are used as virtual target devices by the disk instance on the client partition. They also include the maximum transfer size for the virtual Small Computer System Interface (SCSI) client adapter instance that is the parent device for the disk instance.
For more information about tuning the SCSI queue_depth parameter, see IBM PowerVM Virtualization Managing and Monitoring, SG24-7590.
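A minimal sketch of checking and raising queue_depth on a Virtual I/O Server physical volume. The device name and values are assumptions; follow your storage vendor guidance, and note that -perm defers the change:

$ lsdev -dev hdisk6 -attr queue_depth
value
20
$ chdev -dev hdisk6 -attr queue_depth=32 -perm
hdisk6 changed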
# lspath -l hdisk0
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1
# lspath -AHE -l hdisk0 -p vscsi0
attribute  value  description  user_settable
priority   1      Priority     True
Example 5-11 shows how to set the path priority. This example sets vscsi0 to the lowest priority path. When the setting of the path priority is completed, all new I/O requests use vscsi1, which in this case is represented by Virtual I/O Server 2. The chpath command does not require a reboot and the changes take effect immediately.
Example 5-11 Changing the vscsi0 priority for hdisk0
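A minimal sketch of such a change on an AIX client. The priority value of 2 is an assumption; a higher number means a lower priority, so vscsi1, which keeps priority 1, becomes the preferred path:

# chpath -l hdisk0 -p vscsi0 -a priority=2
path Changed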
IBM i
For IBM i, the best practice is to use dual or multiple Virtual I/O Servers. Multipathing capabilities are available as of IBM i 6.1.1, which provides redundancy across multiple Virtual I/O Servers. With IBM i 7.1 TR2 or later, the IBM i multipathing algorithm is enhanced from round-robin to dynamic load balancing. With this enhancement, IBM i is able to use paths to optimize resource utilization and performance.
Table 5-2 Recommended settings for virtual I/O client virtual SCSI disks

algorithm         Recommended value: fail_over
                  Sets how the I/Os are balanced over multiple paths to the SAN storage. A virtual SCSI disk only supports fail_over mode.
hcheck_cmd        Recommended value: test_unit_rdy or inquiry
                  Used to determine whether a device is ready to transfer data. Change to inquiry only if you have reservation locks on your disks. In other cases, use the default value test_unit_rdy.
hcheck_interval   Recommended value: 60
                  Interval in seconds between health check polls to the disk. By default, the hcheck_interval is disabled. The minimum recommended value is 60 and must be configured on both the Virtual I/O Server and the virtual I/O client. The hcheck_interval value should not be lower than the rw_timeout that is configured on the Virtual I/O Server physical volume. If there is a problem communicating with the storage, the disk driver on the Virtual I/O Server does not notice until the I/O request times out (rw_timeout value). Therefore, you do not want to send a path health check before this time frame.
hcheck_mode       Recommended value: nonactive
                  Determines which paths are checked when the health check capability is used. The recommendation is to only check paths with no active I/O.
max_transfer      Recommended value: Same value as in the Virtual I/O Server
                  The maximum amount of data that can be transferred to the disk in a single I/O operation. See the documentation from your storage vendor before you set this parameter.
queue_depth       Recommended value: Same value as in the Virtual I/O Server
                  The number of concurrent outstanding I/O requests that can be queued on the disk. See the documentation from your storage vendor before you set this parameter.
reserve_policy    Recommended value: no_reserve
                  Provides support for applications that are able to use the SCSI-2 reserve functions.
Table 5-3 shows the recommended settings for virtual SCSI adapters in a virtual I/O client.
Table 5-3 Recommended settings for virtual I/O client virtual SCSI adapters

vscsi_err_recov   Recommended value: delayed_fail or fast_fail
                  Fast I/O failure might be desirable in situations where multipathing software is being used. The suggested value for this attribute is fast_fail when you are using a dual or multiple Virtual I/O Server configuration. With fast_fail, the virtual I/O client adapter might decrease the I/O fail times because of link loss between the storage device and the switch. This value allows for a faster failover to alternate paths. In a single path configuration, especially configurations with a single path to a paging device, the default delayed_fail setting is suggested.
vscsi_path_to     Recommended value: 30
                  The virtual SCSI client adapter path timeout feature allows the client adapter to detect whether the Virtual I/O Server is not responding to I/O requests.
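A minimal sketch of applying the values from Table 5-2 and Table 5-3 on an AIX client. The device names are assumptions, and the -P flag defers the changes to the next reboot because the devices are in use:

# chdev -l vscsi0 -a vscsi_err_recov=fast_fail -a vscsi_path_to=30 -P
vscsi0 changed
# chdev -l hdisk0 -a hcheck_interval=60 -a hcheck_mode=nonactive -a reserve_policy=no_reserve -P
hdisk0 changed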
Requirements per SSP node:
- One Fibre Channel-attached disk of 10 GB in size.
- All storage needs to be allocated on hardware RAID storage for redundancy.
It is important to remember the limitations of SSPs, though. For example, the Virtual I/O Server software updates must be done while all clients that use storage in the SSP are shut down. If you have service level agreements (SLAs) where you cannot schedule downtime for your clients, consider using NPIV or regular virtual SCSI to provide storage to your virtual clients.
$ cluster -create -clustername ClusterA -repopvs hdisk1 -spname \
> StorageA -sppvs hdisk2 hdisk3 hdisk4 -hostname vios1.itso.ibm.com
Cluster ClusterA has been created successfully.
$ cluster -addnode -clustername ClusterA -hostname vios2.itso.ibm.com
Partition vios2.itso.ibm.com has been added to the ClusterA cluster.
$ mkbdsp -clustername ClusterA -sp StorageA 10G \
> -bd aix01_datavg -vadapter vhost0
Lu Name:aix01_datavg
Lu Udid:84061064c7c3b25e2b4404568c2fcbf0
Assigning file "aix01_datavg" as a backing device.
VTD:vtscsi0
$ prepdev -dev hdisk10
WARNING!
The VIOS has detected that this physical volume is currently in use. Data will be lost and cannot be undone when destructive actions are taken. These actions should only be done after confirming that the current physical volume usage and data are no longer needed.
The VIOS detected that this device is a cluster disk.
Destructive action: Remove the physical volume from the cluster by running the following command:
    cleandisk -s hdisk#
$ lspv
NAME      PVID               VG              STATUS
hdisk0    00f61ab26fed4d32   rootvg          active
hdisk1    00f61aa6c31760cc   caavg_private   active
hdisk2    00f61aa6c31771da   None
hdisk3    00f61aa6c3178248   None
hdisk4    00f61aa6c31794bf   None
$ lscluster -c | grep disk
Number of disks in cluster = 3
   for disk hdisk4 UUID = 4aeefba9-22bc-cc8a-c821-fe11d01a5db1 cluster_major = 0 cluster_minor = 3
   for disk hdisk2 UUID = c1d55698-b7c4-b7b6-6489-c6f5c203fdad cluster_major = 0 cluster_minor = 2
   for disk hdisk3 UUID = 3b9e4678-45a2-a615-b8eb-853fd0edd715 cluster_major = 0 cluster_minor = 1

The lspv commands: Using the lspv -free or lspv -avail commands does not display the storage pool disks.
Using SCSI reservations (SCSI Reserve/Release and SCSI-3 Reserve) for fencing physical disks in the SSP, is not supported. High availability SAN solutions can be used to mitigate outages.
$ alert -list -clustername ClusterA -spname StorageA
PoolName:          StorageA
PoolID:            FFFFFFFFAC10161E000000004FCFA454
ThresholdPercent:  35
$ alert -set -clustername ClusterA -spname StorageA -type threshold -value 25
$ alert -list -clustername ClusterA -spname StorageA
PoolName:          StorageA
PoolID:            FFFFFFFFAC10161E000000004FCFA454
ThresholdPercent:  25
$ errlog
IDENTIFIER TIMESTAMP  T C RESOURCE_NAME
0FD4CF1A   0606145212 I O VIOD_POOL
Available space: The ThresholdPercent value is the percentage of free space that is available in the storage pool.
Other considerations
Remember the following additional networking considerations:
- Uninterrupted network connectivity is required for operation. That is, the network interface that is used for the SSP configuration must be on a highly reliable network that is not congested.
- Changing the hostname or IP address of a system is not supported when it is configured in an SSP.
- The SSP is only compliant with Internet Protocol Version 4 (IPv4).
- When a Virtual I/O Server is configured for an SSP environment, VLAN tagging is not supported. For a workaround, see section 2.6.6, Managing VLAN tagging on page 57, in the IBM Redbooks publication, IBM PowerVM Virtualization Managing and Monitoring, SG24-7590.
- An SSP configuration configures the TCP/IP resolver routine for name resolution to resolve host names locally first, and then to use the Domain Name System (DNS). For step-by-step instructions, see the TCP/IP name resolution documentation in the AIX Information Center:
  http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp
- When you restore the Virtual I/O Server LPAR configuration from a viosbr backup, all network devices and configurations must be restored before SSP configurations are restored.
- Ensure that the forward and reverse lookup resolves to the IP address and hostname that are used for the SSP configuration.
- It is suggested that the Virtual I/O Servers that are part of the SSP configuration keep their clocks synchronized.
The main advantage of selecting NPIV, compared to a virtual SCSI, is that the Virtual I/O Server is only used as a pass-through to the virtual I/O client virtual Fibre Channel adapters. Therefore, the storage is mapped directly to the virtual I/O client, with the storage allocation managed in the SAN. This strategy simplifies storage mapping at the Virtual I/O Server. Consider the following additional benefits of NPIV:
- Provides storage resources to the virtual I/O client, just as they would be provided with physical Fibre Channel adapters.
- Multipathing takes place at the virtual I/O client.
- Allows virtual I/O clients to handle persistent reservations, which are useful in high availability cluster solutions, such as PowerHA SystemMirror.
- Makes Virtual I/O Server maintenance easier because there is no need for multipathing software.
- Is a preferred method for LPM operations.
Using NPIV on virtual clients that run IBM i is a best practice, primarily for performance reasons. The amount of work that the Virtual I/O Server needs to do with NPIV is less involved than with virtual SCSI.
Follow these best practices for better availability, performance, and redundancy (a sketch of a typical NPIV mapping follows this list):
- Use NPIV in dual or multiple Virtual I/O Server configurations.
- Have the Virtual I/O Server physical Fibre Channel adapters split across separate system PCI busses, when possible.
- Operate at the highest possible speed on the SAN switches.
- Have the physical Fibre Channel adapter ports connected to different switches in the fabric, even in dual Virtual I/O Server configurations.
- Although it is supported to connect storage and tape libraries through NPIV by using the same Fibre Channel adapter, it is a best practice to separate them among different adapters. Separation is suggested because the disk and tape traffic have different performance characteristics and error recovery scenarios.
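A minimal sketch of a typical NPIV mapping on the Virtual I/O Server. The names vfchost0 and fcs0 are assumptions for the virtual Fibre Channel server adapter and the physical port:

$ vfcmap -vadapter vfchost0 -fcp fcs0
$ lsmap -npiv -vadapter vfchost0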
[Figure: NPIV configuration with two clients (Client 1 and Client 2), dual Virtual I/O Servers (VIO A and VIO B), redundant SAN switches, and shared storage]
reconfiguration events in the SAN. These events are likely to occur during LPM operations, and it is a good idea for dyntrk to always be set to yes. Example 5-16 shows how to change values on a Fibre Channel adapter. NPIV support: Most of the storage vendors support NPIV. Check what their requirements are to support the operating system of your virtual I/O client.
Example 5-16 Changing attributes on a Fibre Channel adapter on a virtual I/O client
# lsattr -El fcs0
intr_priority 3         Interrupt priority                      False
lg_term_dma   0x800000  Long term DMA                           True
max_xfer_size 0x100000  Maximum Transfer Size                   True
num_cmd_elems 200       Maximum Number of COMMAND Elements      True
sw_fc_class   2         FC Class for Fabric                     True
# lsattr -El fscsi0
attach        none      How this adapter is CONNECTED           False
dyntrk        yes       Dynamic Tracking of FC Devices          True
fc_err_recov  fast_fail FC Fabric Event Error RECOVERY Policy   True
scsi_id                 Adapter SCSI ID                         False
sw_fc_class   3         FC Class for Fabric                     True
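A sketch of how such values could be set on an AIX virtual I/O client. The adapter name is an assumption, and the -P flag defers the change to the next reboot because the adapter has child devices in use:

# chdev -l fscsi0 -a dyntrk=yes -a fc_err_recov=fast_fail -P
fscsi0 changed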
Adopt the following best practices for virtual I/O clients that use NPIV:
- Do not change the NPIV num_cmd_elems and max_xfer_size values to be higher than the Virtual I/O Server physical adapter values. The virtual I/O client might not be able to configure new devices and might fail to boot.
- Configure a reasonable number of paths, such as 4 - 8. An excessive number of paths can increase error recovery, boot time, and the time it takes to run cfgmgr. This configuration can be done by zoning on SAN switches and by using LUN masking at the storage arrays.
- Balance virtual I/O client workloads across multiple physical adapters in the Virtual I/O Server.
- To avoid boot issues, balance the paths to the booting devices with bosboot, as documented at the following website:
  http://www.ibm.com/support/docview.wss?uid=isg3T1012688

Adding a virtual Fibre Channel adapter: If you need to add a virtual Fibre Channel (FC) adapter with a dynamic logical partition operation, save the current logical partition configuration afterward. For more information, see section 3.2, Dynamic logical partition operations on page 42.
Chapter 6.
Performance monitoring
Monitoring the virtual I/O environment is essential. This process includes monitoring the critical parts of the operating system that are running on the virtual I/O clients, such as memory, processors, network, and storage. This chapter highlights best practices for monitoring the virtual environment.
If you start measuring short-term performance on the Virtual I/O Server and you do not have a specific target, such as network degradation, start with the topas command. The topas command displays local system statistics, such as system resources and Virtual I/O Server SEA statistics. The viostat, netstat, vmstat, svmon, seastat, and fcstat commands provide more detailed output information than topas. Document the output of these commands and the time that they were run because it is valuable information if you need to research a performance bottleneck. In Virtual I/O Server 2.1, the nmon functionality is integrated within the topas command. You can start the topas command and switch between the two modes by typing ~ (tilde).
Example 6-1 shows the main functions that are used in the topas_nmon command. You can export the environment variable nmon to start topas_nmon the same way every time.
Example 6-1 Display of nmon interactive mode commands
h = Help information          q = Quit nmon              0 = reset peak counts
+ = double refresh time       - = half refresh           r = Resources CPU/HW/MHz/AIX
c = CPU by processor          C = upto 128 CPUs          p = LPAR Stats (if LPAR)
l = CPU avg longer term       k = Kernel Internal        # = PhysicalCPU if SPLPAR
m = Memory & Paging           M = Multiple Page Sizes    P = Paging Space
d = DiskI/O Graphs            D = DiskIO +Service times  o = Disks %Busy Map
a = Disk Adapter              e = ESS vpath stats        V = Volume Group stats
^ = FC Adapter (fcstat)       O = VIOS SEA (entstat)     v = Verbose=OK/Warn/Danger
n = Network stats             N = NFS stats (NN for v4)  j = JFS Usage stats
A = Async I/O Servers         w = see AIX wait procs     "=" = Net/Disk KB<-->MB
b = black&white mode          g = User-Defined-Disk-Groups (see cmdline -g)
t = Top-Process ---> 1=basic 2=CPU-Use 3=CPU(default) 4=Size 5=Disk-I/O
u = Top+cmd arguments         U = Top+WLM Classes        . = only busy disks & procs
W = WLM Section               S = WLM SubClasses
[ = Start ODR                 ] = Stop ODR
~ = Switch to topas screen
Example 6-2 shows the nmon variable that is defined to start topas_nmon with options to monitor processor, memory, paging, disks, and service times.
Example 6-2 Exporting environment variable in ksh shell to nmon
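A sketch of what such an export could look like, assuming the NMON variable honors the key letters from Example 6-1 (c for processor, m for memory and paging, d for disk I/O, D for disk service times):

$ export NMON=cmdD
$ topas_nmon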
After the activation of the performance information collection, you can use the topas -fullscreen lpar command in the Virtual I/O Server to determine whether the processor resources are optimally used. Two important fields in the command output, Psize and app, are described here:
Psize   Shows the number of online physical processors in the shared pool.
app     Shows the available physical processors in the shared pool.
Using the same command, you can see statistics for all logical processors (LCPU) in the system, such as Power Hypervisor calls, context switching, and processes waiting to run in the queue. On an AIX virtual I/O client, you can use the lparstat command to report logical partition-related information, statistics, and Power Hypervisor information. On an IBM i virtual I/O client, data is collected by the collection services in the QAPMLPARH table. The data can be displayed either as text through SQL, or graphically through IBM System Director Navigator for i. For more information, see IBM PowerVM Virtualization Managing and Monitoring, SG24-7590.
Disk monitoring
A good start to monitor the disk activity in the Virtual I/O Server is to use the viostat command. You can get more specialized output from the viostat command. Example 6-3 on page 97 shows monitoring of vhost1 and hdisk4. It is not recommended to use this output for long-term measuring because it requires a significant amount of disk space to store the data. Example 6-3 on page 97 shows the extended disk output by using the viostat command.
Example 6-3 Extended disk output of hdisk4 and vhost1 using viostat
$ viostat -adapter 1 1 | grep -p vhost1 | head -2 ; viostat -extdisk hdisk4 1 1
Vadapter:                Kbps      tps    bkread   bkwrtn
vhost1                88064.0    688.0     344.0    344.0

System configuration: lcpu=16 drives=12 paths=17 vdisks=41

hdisk4      xfer:  %tm_act      bps      tps      bread      bwrtn
                     100.0    90.2M    344.0        0.0      90.2M
            read:      rps  avgserv  minserv    maxserv   timeouts
                       0.0      0.0      0.0        0.0          0
           write:      wps  avgserv  minserv    maxserv   timeouts
                     344.0      7.1      5.3       13.4          0
           queue:  avgtime  mintime  maxtime    avgwqsz    avgsqsz
                       0.0      0.0      0.0        0.0        2.0
netstat command
The Virtual I/O Server netstat command provides performance data and also provides network information, such as the routing table and network data. To show performance-related data, use the command with an interval, as shown in Example 6-4. Note that you must stop the command with Ctrl+C. This behavior makes it more difficult to use the netstat command in a long-term measuring script because you specify only an interval, and not a count.
Example 6-4 Output from the netstat command
$ netstat 1
    input    (en11)    output             input   (Total)    output
 packets  errs  packets  errs  colls   packets  errs  packets  errs  colls
43424006     0    66342     0      0  43594413     0   236749     0      0
     142     0        3     0      0       142     0        3     0      0
     131     0        1     0      0       131     0        1     0      0
     145     0        1     0      0       145     0        1     0      0
     143     0        1     0      0       143     0        1     0      0
     139     0        1     0      0       139     0        1     0      0
     137     0        1     0      0       137     0        1     0      0
entstat command
The entstat command displays the statistics that are gathered by the specified Ethernet device driver. Use the -all flag to display all the statistics, including the device-specific statistics. On the Virtual I/O Server, use the entstat command to check the status and priority of the SEA. Example 6-5 shows which SEA has the highest priority. This example uses a dual Virtual I/O Server configuration with SEAs that use the failover mode.
Example 6-5 Output of entstat on SEA
$ entstat -all ent9 | grep -i priority
    Priority: 1
    Priority: 1  Active: True
    Priority: 2  Active: True
seastat command
By using the seastat command, you can generate a report for each client to view the SEA statistics. Before you use the seastat command, enable accounting on the SEA. Example 6-6 demonstrates how to enable this option.
Example 6-6 Enabling accounting on the Shared Ethernet Adapter
$ lsdev -dev ent11 -attr | grep accounting
accounting  disabled  Enable per-client accounting of network statistics  True
$ chdev -dev ent11 -attr accounting=enabled
ent11 changed
With accounting enabled, the SEA tracks the MAC addresses of all the packets it receives from the virtual I/O clients, and increments packet and byte counts for each virtual I/O client independently. Example 6-7 shows an output of the packets received and transmitted in a SEA, filtered by IP address.
Example 6-7 seastat statistics that are filtered by IP address
$ seastat -d ent9 -s ip=172.16.22.34
=======================================================================
Advanced Statistics for SEA
Device Name: ent9
=======================================================================
MAC: 22:5C:2B:95:67:04
----------------------
VLAN: None
VLAN Priority: None
IP: 172.16.22.34

Transmit Statistics:          Receive Statistics:
--------------------          -------------------
Packets: 29125                Packets: 2946870
Bytes: 3745941                Bytes: 184772445
=======================================================================
$ fcstat fcs0 | grep -E "Count|Information"
LIP Count: 0
NOS Count: 0
Link Failure Count: 3
Loss of Sync Count: 261
Primitive Seq Protocol Error Count: 0
Invalid Tx Word Count: 190
Invalid CRC Count: 0
IP over FC Adapter Driver Information
  No DMA Resource Count: 0
  No Adapter Elements Count: 0
FC SCSI Adapter Driver Information
  No DMA Resource Count: 0
  No Adapter Elements Count: 0
  No Command Resource Count: 0

Example 6-8 shows an example of an adapter that has sufficient values for max_xfer_size and num_cmd_elems. Nonzero values indicate that I/Os are being queued at the adapter because of a lack of resources. Fibre Channel device driver attributes on page 68 has some considerations and tuning recommendations about the max_xfer_size and num_cmd_elems parameters.
Memory monitoring
A best practice to monitor the memory consumption in a Virtual I/O Server is to use the vmstat and svmon commands.
The topasrec command generates binary reports of local recordings, Central Electronic Complex (CEC) recordings, and cluster recordings. In the Virtual I/O Server, persistent local recordings are stored in the /home/ios/perf/topas directory by default. You can verify whether the topasrec command is running by using the ps command. The output data is collected in a binary file, in the format hostname.yymmdd, for example, localhost_120524.topas. On an AIX server, this file can be converted to an nmon analyzer report by using the topasout command.

Another way to generate records is by using the nmon spreadsheet output format. You can start collecting performance data in the nmon format from the cfgassist menu by selecting Performance -> Topas -> Start New Recording -> Start Persistent local recording -> nmon. The reports can be customized according to your needs. By default, the files are stored in the /home/ios/perf/topas directory with the file extension nmon.

You must decide which approach is better in your environment. The best practice is to use only one format, binary or nmon, to avoid increasing processor utilization. You can use the cfgassist menu by selecting Performance -> Topas -> Stop Persistent Recording to stop an unwanted recording and to remove entries in the /etc/inittab file. Example 6-9 shows an example of a recording in the nmon format.
Example 6-9 Example of a recording in the nmon format
.
. (Lines omitted for clarity)
.
ZZZZ,T0002,00:12:39,23-May-2012,,
CPU_ALL,T0002,0.12,0.43,0.00,99.45,4.00,
CPU03,T0002,0.00,7.56,0.00,92.44,
CPU02,T0002,0.00,7.14,0.00,92.86,
CPU01,T0002,0.16,10.17,0.00,89.67,
CPU00,T0002,19.81,63.11,0.01,17.08,
DISKBUSY,T0002,0.00,0.00,0.00,0.00,
DISKREAD,T0002,0.00,0.00,0.00,0.00,
DISKWRITE,T0002,0.00,0.00,4.93,0.00,
DISKXFER,T0002,0.00,0.00,0.59,0.00,
DISKSERV,T0002,0.00,0.00,0.00,0.00,
DISKWAIT,T0002,0.00,0.00,0.00,0.00,
MEMREAL,T0002,524288.00,226570.59,26.08,46.18,10.60,10.60,
MEMVIRT,T0002,393216.00,99.48,0.52,0.00,0.06,0.00,0.00,0.06,209.38,0.00,
PROC,T0002,0.00,1.00,454.05,375.93,37.79,3.49,1.14,1.14,0.40,0.00,99.00,
LAN,T0002,0.00,0.00,0.00,0.00,0.00,,0.00,0.00,0.00,0.00,0.00,0.00,,,0.00,0.00,0.00,0.00,0.00,,0.00,0.00,0.00,0.00,0.00,0.00,,,0.00,0.00,0.00,0.00,0.00,,0.00,0.00,0.00,0.00,0.00,0.00,,,0.00,0.00,0.00,0.00,0.00,,0.00,0.00,0.00,0.00,0.00,0.00,,,0.00,0.00,0.00,0.00,0.00,,0.00,0.00,0.00,0.00,0.00,0.00,,,
IP,T0002,0.54,0.54,0.03,0.03,
TCPUDP,T0002,0.50,0.50,0.09,0.17,
FILE,T0002,0.00,23.94,0.00,112817.25,1125.65,
JFSFILE,T0002,36.61,0.08,81.23,39.24,0.27,80.93,
JFSINODE,T0002,11.46,0.00,26.58,6.43,0.01,21.06,
LPAR,T0002,0.01,1.00,4.00,16.00,1.00,128.00,0.00,0.07,0.07,1.00,
ZZZZ,T0003,00:17:39,23-May-2012,,
.
. (Lines omitted for clarity)
.
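As noted earlier, you can confirm that the recorder is active and see where its output accumulates. The following minimal sketch assumes the default recording directory:

$ ps -ef | grep topasrec     # confirm that the topasrec process is running
$ ls /home/ios/perf/topas    # list the binary and nmon recording files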
You can also use commands such as viostat, netstat, vmstat, svmon, fcstat, and seastat as long-term performance tools on the Virtual I/O Server. They can be run from a script, or scheduled by using the crontab command, to produce customized reports.
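As an illustration of the crontab approach, the following sketch is hypothetical: the schedule, the log file names, and the /usr/ios/cli/ioscli path are assumptions to adapt to your environment.

# Entries added with crontab -e as padmin: collect disk and Fibre Channel
# statistics every hour and append them to log files for later review.
0 * * * * /usr/ios/cli/ioscli viostat 60 5 >> /home/padmin/viostat.log 2>&1
5 * * * * /usr/ios/cli/ioscli fcstat fcs0 >> /home/padmin/fcstat.log 2>&1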
$ lssvc
ITM_premium
ITM_cec
TSM_base
ITUAM_base
TPC_data
TPC_fabric
DIRECTOR_agent
perfmgr
ipsec_tunnel
ILMT

For more information about the monitoring tools, see this website:
http://www.ibm.com/developerworks/wikis/display/WikiPtype/VIOS_Monitoring
hscroot@hmc9:~> lslparutil -m POWER7_1-SN061AA6P -r lpar --filter \
> "lpar_names=vios1a" -n 2 -F time,lpar_id,capped_cycles,\
> uncapped_cycles,entitled_cycles,time_cycles
06/08/2012 14:21:13,1,2426637226308,354316260938,58339456586721,58461063017452
06/08/2012 14:20:13,1,2425557392352,354159306792,58308640647690,58430247078431

Example 6-12 on page 104 shows how to calculate the processor utilization for the shared processor partition from these two samples, which were collected 1 minute apart in Example 6-11.
Processor utilization % = ((capped_cycles + uncapped_cycles) / entitled_cycles) * 100
Processor utilization % = (((2426637226308 - 2425557392352) + (354316260938 - 354159306792)) /
                           (58339456586721 - 58308640647690)) * 100
Processor utilization % = 4.01%

Processor units utilized = (capped_cycles + uncapped_cycles) / time_cycles
Processor units utilized = ((2426637226308 - 2425557392352) + (354316260938 - 354159306792)) /
                           (58461063017452 - 58430247078431)
Processor units utilized = 0.04
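The same calculation can be scripted. The following sketch is hypothetical: it uses a variation of the lslparutil invocation shown above and assumes that awk is available where the output is processed, for example, after copying the two samples to an AIX or Linux system.

lslparutil -m POWER7_1-SN061AA6P -r lpar --filter "lpar_names=vios1a" -n 2 \
  -F capped_cycles,uncapped_cycles,entitled_cycles,time_cycles | \
awk -F, 'NR==1 {c=$1; u=$2; e=$3; t=$4}
         NR==2 {printf "Processor utilization %%: %.2f\n", ((c-$1)+(u-$2))/(e-$3)*100;
                printf "Processor units utilized: %.2f\n", ((c-$1)+(u-$2))/(t-$4)}'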
Chapter 7.
In most cases, the Secure Shell (SSH) service for remote login and the Secure Copy Protocol (SCP) for copying files are sufficient for login and file transfer. Telnet
and File Transfer Protocol (FTP) do not use encrypted communication and can be disabled. Port 657 for Resource Monitoring and Control (RMC) must be left open if you are considering the use of dynamic logical partition operations. This port is used for communication between the logical partition and the Hardware Management Console (HMC). You can stop these services by using the stopnetsvc command.
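For example, the following minimal sketch disables both services. The telnet and ftp service names are the ones commonly accepted by stopnetsvc, but verify the names that are valid on your Virtual I/O Server level before you disable anything:

$ stopnetsvc telnet
$ stopnetsvc ftp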
$ viosecure -level high -outfile viosecure.high.xml

viosecure command rules: The full list of viosecure rules is large. To make editing easier, we suggest copying the file to your desktop and using a text editor with XML support. The viosecure command uses the same rule set definitions as aixpert on IBM AIX. For a full list of rules and applicable values, see this website:
http://www.ibm.com/developerworks/wikis/display/WikiPtype/aixpert

When you finish customizing the XML file, copy it back to the Virtual I/O Server and apply it by using the command that is shown in Example 7-2 on page 108.
$ viosecure -file viosecure.high.xml

viosecure command: We suggest running the viosecure command from a console session because some rules might result in the loss of access through the network connections.

padmin password: The maxage and maxexpired stanzas, when applied, might result in the padmin account being disabled because of the aging of the password. We suggest changing the padmin password before you run the viosecure command to prevent the account from becoming disabled.
Benefits:
Reduces the risk of compromised security by guaranteeing that an AIX operating system image is not inadvertently or maliciously altered.
Ensures high levels of trust by displaying the status of all AIX systems that participate in a trusted system configuration.
Prevents tampering with, or covering up of, security issues by storing AIX virtual machine system logs securely on a central PowerVM Virtual I/O Server.
Reduces backup and archive time by storing audit logs in a central location.
Ensures that site patch-level policies are adhered to in virtual workloads.
Provides notification of noncompliance when back-level systems are activated.
Improves performance and reduces network resource consumption by providing firewall services locally within the virtualization layer.
After you perform an LPM operation, document all changes that were made during the operation. When you move a partition to a different frame, there might be changes to the partition ID and to the adapter slot numbers that were originally configured.

When possible, perform LPM operations outside of peak hours. LPM is more efficient when the load on the network is low.

LPM: It is important to remember that LPM is not a replacement for PowerHA SystemMirror or for a disaster recovery solution. LPM can move a powered-off partition, but not a partition whose kernel has crashed. Logical partitions cannot be migrated from failed frames.
VTD name, before LPM, on the source frame:

$ lsmap -vadapter vhost1
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- -------------------
vhost1          U8233.E8B.061AA6P-V2-C302                    0x00000003

VTD                   client1_hd0
Status                Available
LUN                   0x8100000000000000
Backing device        hdisk4
$ chdev -dev client1_hd0 -attr mig_name=lpm_client1_hd0

VTD name, after LPM, on the target frame:

$ lsmap -vadapter vhost1
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- -------------------
vhost1          U8233.E8B.061AA6P-V2-C302                    0x00000003

VTD                   lpm_client1_hd0
Status                Available
LUN                   0x8100000000000000
Backing device        hdisk4
amount of memory that is needed by the virtual client, depending on the workload. With dedicated memory, the allocation of memory to each partition is static, as shown in Figure 7-2. The amount of memory, regardless of demand, does not change.
Figure 7-2 Dedicated memory: memory usage (GB) over time for the Americas, Europe, and Asia partitions
With AMS, the amount of memory that is made available to a partition changes over the run time of the workload, based on the memory demands of the workload. AMS is supported by AIX, IBM i, and Linux. For more information about the setup and requirements, see the IBM Redpaper publication IBM PowerVM Virtualization Active Memory Sharing, REDP-4470, available at this website:
http://www.redbooks.ibm.com/abstracts/redp4470.html
geographically separated workloads where logical partition memory demands differ between day and night, or consist of many logical partitions with sporadic use.

Figure 7-3 shows three partitions with similar memory requirements. In this scenario, it is likely that the workloads reach their peak memory demands at different times throughout a 24-hour period. High memory requirements in the Americas during business hours might mean that memory requirements in Asia are low.
Figure 7-3 Logical partitions with shared memory that run different regions
Memory weight defines a factor that is used by the Power Hypervisor in determining the allocation of physical system memory. It determines which pages must be copied to the paging devices in case of physical memory over-commitment. The best practice is to set a higher weight for production systems and a lower weight for development systems. Figure 7-6 shows the weight configuration on a logical partition.
Paging devices
The response time of the AMS paging devices has a significant effect on the clients when there is a physical memory over-commitment. You want these paging operations to complete as fast as possible. The best practice is to set the size of the paging device equal to the maximum logical memory, with an exception for IBM i. In IBM i, the paging device must be larger than the maximum memory that is defined in
the partition profile. IBM i requires 1 extra bit for every 16 bytes that it allocates. For instance, a partition with 10 GB of memory needs a paging device with a minimum size of 10.08 GB defined in the partition profile. Remember that some storage vendors do not support dynamically increasing the size of a logical unit number (LUN). In this case, the paging device needs to be removed and re-created.

The following list outlines best practices to consider when you configure paging devices:
Use physical volumes, where possible, over logical volumes. If you use logical volumes, use a small stripe size.
Use thin provisioned LUNs.
Spread the I/O load across as much of the disk subsystem as possible.
Use a write cache, whether it is on the adapter or on the storage subsystem.
Size your storage hardware according to your performance needs.
Ensure that the PVIDs of the physical volumes that are set up by the HMC as paging devices are cleared before use.
If you plan to use a dual Virtual I/O Server configuration, paging devices must be provided through a SAN and be accessible from both Virtual I/O Servers.
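The following minimal sketch shows one way to check for and clear a stale PVID before a volume is used as a paging device. The hdisk6 device name is hypothetical, and the pv=clear attribute is assumed to behave as it does on AIX:

$ lspv                                 # check whether the candidate device still carries a PVID
$ chdev -dev hdisk6 -attr pv=clear     # clear the PVID before assigning the device for AMS paging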
Abbreviations and acronyms

FLOP  Floating Point Operation
FRU  field-replaceable unit
FTP  File Transfer Protocol
GDPS  IBM Geographically Dispersed Parallel Sysplex
GID  group ID
GPFS  General Parallel File System
GUI  graphical user interface
HACMP  High Availability Cluster Multiprocessing
HBA  host bus adapter
HMC  Hardware Management Console
HTML  Hypertext Markup Language
HTTP  Hypertext Transfer Protocol
Hz  hertz
I/O  input/output
IBM  International Business Machines
ID  identifier
IDE  Integrated Device Electronics
IEEE  Institute of Electrical and Electronics Engineers
IP  Internet Protocol
IPAT  IP address takeover
IPL  initial program load
IPMP  IP Multipathing
ISV  independent software vendor
ITSO  International Technical Support Organization
IVM  Integrated Virtualization Manager
JFS  journaled file system
L1  level 1
L2  level 2
L3  level 3
LA  Link Aggregation
LACP  Link Aggregation Control Protocol
LAN  local area network
LDAP  Lightweight Directory Access Protocol
LED  light-emitting diode
LMB  Logical Memory Block
LPAR  logical partition
LPP  licensed program product
LUN  logical unit number
LV  logical volume
LVCB  Logical Volume Control Block
LVM  Logical Volume Manager
MAC  Media Access Control
Mbps  megabits per second
MBps  megabytes per second
MCM  multiple chip module
ML  Maintenance Level
MP  Multiprocessor
MPIO  Multipath I/O
MTU  maximum transmission unit
NFS  Network File System
NIB  Network Interface Backup
NIM  Network Installation Management
NIMOL  NIM on Linux
N_PORT  Node Port
NPIV  N_Port Identifier Virtualization
NVRAM  nonvolatile random access memory
ODM  Object Data Manager
OS  operating system
OSPF  Open Shortest Path First
PCI  Peripheral Component Interconnect
PCI-e  Peripheral Component Interconnect Express
PIC  Pool Idle Count
PID  process ID
PKI  public key infrastructure
PLM  Partition Load Manager
POST  power-on self-test
POWER  Performance Optimization with Enhanced RISC (Architecture)
PPC  Physical Processor Consumption
PPFC  Physical Processor Fraction Consumed
PTF  program temporary fix
PTX  Performance Toolbox
PURR  Processor Utilization Resource Register
PV  physical volume
PVID  Port Virtual LAN Identifier
QoS  quality of service
RAID  Redundant Array of Independent Disks
RAM  random access memory
RAS  reliability, availability, and serviceability
RBAC  role-based access control
RCP  Remote Copy
RDAC  Redundant Disk Array Controller
RIO  remote input/output
RIP  Routing Information Protocol
RISC  reduced instruction-set computer
RMC  Resource Monitoring and Control
RPC  Remote Procedure Call
RPL  Remote Program Loader
RPM  Red Hat Package Manager
RSA  Rivest-Shamir-Adleman algorithm
RSCT  Reliable Scalable Cluster Technology
RSH  Remote Shell
SAN  storage area network
SCSI  Small Computer System Interface
SDD  Subsystem Device Driver
SDDPCM  Subsystem Device Driver Path Control Module
SMIT  System Management Interface Tool
SMP  symmetric multiprocessor
SMS  system management services
SMT  simultaneous multithreading
SP  Service Processor
SPOT  Shared Product Object Tree
SRC  System Resource Controller
SRN  service request number
SSA  Serial Storage Architecture
SSH  Secure Shell
SSL  Secure Sockets Layer
SUID  Set User ID
SVC  SAN Volume Controller
TCP/IP  Transmission Control Protocol/Internet Protocol
TL  Technology Level
TSA  Tivoli System Automation
UDF  Universal Disk Format
UDID  Universal Disk Identification
VIPA  virtual IP address
VG  volume group
VGDA  Volume Group Descriptor Area
VGSA  Volume Group Status Area
VLAN  virtual local area network
VP  Virtual Processor
VPD  vital product data
VPN  virtual private network
VRRP  Virtual Router Redundancy Protocol
VSD  Virtual Shared Disk
WLM  Workload Manager
WWN  worldwide name
WWPN  worldwide port name
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide additional information about the topics in this document. Note that some publications referenced in this list might be available in softcopy only.

Hardware Management Console V7 Handbook, SG24-7491
IBM PowerVM Virtualization Active Memory Sharing, REDP-4470
IBM PowerVM Virtualization Introduction and Configuration, SG24-7940
IBM PowerVM Virtualization Managing and Monitoring, SG24-7590
Integrated Virtualization Manager on IBM System p5, REDP-4061
Power Systems Memory Deduplication, REDP-4827
PowerVM Migration from Physical to Virtual Storage, SG24-7825
IBM System Storage DS8000 Host Attachment and Interoperability, SG24-8887
IBM Flex System p260 and p460 Planning and Implementation Guide, SG24-7989

You can search for, view, download, or order these documents and other Redbooks, Redpapers, Web Docs, draft and additional materials, at the following website:
ibm.com/redbooks
Other publications
These publications are also relevant as further information sources. The following types of documentation are located through the Internet at the following URL: http://publib.boulder.ibm.com/infocenter/powersys/v3r1m5/index.jsp
User guides
System management guides
Application programmer guides
All commands reference volumes
Files reference
Technical reference volumes used by application programmers

Detailed documentation about the PowerVM feature and the Virtual I/O Server:
https://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/home.html
Online resources
These Web sites and URLs are also relevant as further information sources:

AIX and Linux on POWER community
http://www-03.ibm.com/systems/p/community/

Capacity on Demand
http://www.ibm.com/systems/p/cod/

IBM PowerVM
http://www.ibm.com/systems/power/software/virtualization/index.html

AIX 7.1 Information Center
http://publib.boulder.ibm.com/infocenter/aix/v7r1/index.jsp

IBM System Planning Tool
http://www.ibm.com/servers/eserver/support/tools/systemplanningtool/

IBM Systems Hardware Information Center
http://publib.boulder.ibm.com/infocenter/systems/scope/hw/index.jsp

IBM Systems Workload Estimator
http://www-304.ibm.com/jct01004c/systems/support/tools/estimator/index.html

Latest Multipath Subsystem Device Driver home page
http://www-1.ibm.com/support/docview.wss?uid=ssg1S4000201
Novell SUSE LINUX Enterprise Server information
http://www.novell.com/products/server/index.html

SCSI T10 Technical Committee
http://www.t10.org

SDDPCM software download page
http://www.ibm.com/support/docview.wss?uid=ssg1S4000201

Service and productivity tools for Linux on POWER
https://www14.software.ibm.com/webapp/set2/sas/f/lopdiags/home.html

Virtual I/O Server home page
http://www14.software.ibm.com/webapp/set2/sas/f/vios/home.html

Virtual I/O Server fix pack download page
http://www14.software.ibm.com/webapp/set2/sas/f/vios/download/home.html
Index
Numerics
3rd part list application 26 viosecure 107 commands 91 alt_root_vg command 25 backupios command 3435 bootlist command 25 bosboot command 92 cfgmgr command 92 cfgnamesrv command 37 chdev command 67, 69 chpath command 79 crontab command 35 entstat 98 entstat command 37 fcstat command 69 HMC chsysstate 48 mkvterm 48 hostmap command 37 installios command 38, 40 ioslevel command 38 loginmsg command 38 lparstat 96 lsattr command 70, 92 lsdev command 37, 78 lshwres HMC command 72 lslparutil 103 lspath command 79 lspv command 38 lsuser command 38 lsvg command 38 manage_disk_drivers command 66 mirrorios command 25, 63 mkvdev command 7576 motd command 38 mpio_get_config command 75 netstat command 37 nim command 40 oem_setup_env command 76 optimizenet command 37 pcmpath command 69 recfgct command 25 restorevgstruct command 41 savevgstruct command 36 seastat 98
A
Active Memory Deduplication 3, 118 deduplication table ratio 118 Active Memory Expansion 118 page loaning 118 Active Memory Sharing 3, 113 dedicated memory 114 desired memory 116 examples 114 implementing 116 maximum memory 116 memory weight 117 minimum memory 116 page tables 116 paging devices 117 AIX_APPCM 66 alt_root_vg command 25, 27
B
Backing up VIOS 32 commands backupios command 34 savevgstruct command 36 viosbr command 35 Disaster recovery 32 External device configuration 33 HMC 33 IVM 33 backupios command 3435 booting issues 92 bootlist command 25 bosboot command 92
C
cfgmgr command 92 cfgnamesrv command 37 chdev command 67, 69, 91, 112 chsysstate 48 command
svcinfo command 75 vios_advisor 104 viosbr command 3536 viosecure command 38 crontab command 35
D
deduplication table ratio 118 desired memory 20 desired processing units 18 disk attributes queue_depth 78 reserve_policy 78 Disk Monitoring Commands viostat command 96 dlpar 79, 92 DLPAR operations 42 Overwriting partition profiles 42 Renaming partition profiles 42 Virtual FC adapters 43 DS3950 66 DS4000 66, 75 DS4800 75 DS5000 66, 75 DS8000 66 dual Virtual I/O Server 29 dual Virtual I/O Server environment 27 Dual virtual switch 56 DVD 28 Dynamic logical partion operations 92 Dynamic logical partitioning 79 dynamic logical partitioning 3 dyntrk attribute 68, 92
fast_fail 68 fc_err_recov 68, 92 max_transfer_size 74 max_xfer_size 68, 92 num_cmd_elem 92 num_cmd_elems 68 Fibre Channel adapter Attributes 67 Fibre Channel adapter statistics 68 Fibre Channel Monitoring Commands fcstat command 99 max_xfer_size 99 num_cmd_elems 99 File backed devices 70 firmware updates 26 Fix Level Recommendation 25 Fix Packs 25 flow_ctrl 54 fragmentation 53 fscsi AIX device 67
H
hardware features 8 Active Memory Expansion 8 Active Memory Mirroring 8 RAS 8 Hardware Management Console 24 Hardware Manager Console 28 High availability 64 High Availabilty 77 HMC 100 Allow performance information collection 95 Enable Connection Monitoring 100 HMC commands lshwres 72 hostmap command 37
E
eadme instructions 27 entstat command 37
I
IBM Fix Central 26 IBM i 7071, 89 iFixes 25 installation of VIOS 24 installios command 38, 40 ioslevel command 38 ISO images 26 IVM 77
F
fast_fail attribute 68 fc_err_recov attribute 68, 92 fcs AIX device 67 fcstat command 69 Fibre Channel adapter Attributes dyntrk 68, 92
J
jumbo_frames 54
L
large_recieve 54 large_send 54 Live Partition Mobililty 67 Live Partition Mobility 3, 46, 55, 72, 89, 109 Logical Volume 74 Logical Volume mappings 77 Logical volumes 70 loginmsg command 38 Long-term performance Commands cfgassist command 101 cfgsvc command 102 lssvc command 102 postprocesssvc command 102 startsvc command 102 stopsvc command 102 topas_nmon command 100 topasout command 101 topasrec command 100101 LPM 46, 55, 67, 89 lsattr command 70, 92 lsdev command 37, 78 lshwres commands 72 lspv command 38 lsuser command 38 lsvg command 38
minimum memory 20 minimum virtual processors 18 Mirroring 71, 77 mirroring 29, 63 mirrorios command 25, 63 mkvdev command 7576 mkvterm 48 motd command 38 MPIO 77 mpio_get_config 75 MTU 52, 54 Multipathing 66, 77, 89 Multipathing for clients 66 SDDPCM 66 Storage 89
N
Naming conventions 74 netstat command 37 Network bandwidth tuning 54 flow_ctrl 54 jumbo_frames 54 large_recieve 54 large_send 54 tcp_pmtu_discovery 54 udp_pmtu_discovery 54 Network Installation Manager 24, 28 Network Interface Backup 29 Network Monitoring Commands entstat command 97 netstat command 97 seastat command 97 Network protocols 802.1Q 50 802.3ad 50 EtherChannel 50 Link aggregation 50 Network interface backup 50, 55, 5758 NIB 50, 55, 5758 trunking 50 VLAN tagging 50, 56 Networking 49 NIM 24 nim command 40 NIM server resilience 41 NPIV 3, 62, 67, 88 num_cmd_elem attribute 92
M
manage_disk_drivers command 66 max_transfer_size attribute 74 max_xfer_size attribute 68, 92 maximum memory 21 maximum processing units 18 maximum transfer unit 52, 54 maximum virtual adapter limit 21 maximum virtual adapters 72 maximum virtual processors 18 Memory Monitoring Commands svmon command 100 vmstat command 100 Micro-partitioning 3 migration 28 migration verification tool 110
num_cmd_elems attribute 68
O
oem_setup_env command 76 Optical devices 79 optimizenet command 37
P
packet fragmentation 53 page loaning 118 page tables 116 paging devices 117 Partition startup policy 47 pcmpath command 69 physical I/O adapters 21 Physical volumes 70 Physical volumes mapping 7677 Power Hypervisor 16 PowerHA 77, 89, 111 PowerSC 108 PowerVM hardware planning 7 PowerVM editions 4 PowerVM features Active Memory Deduplication 3 Active Memory Sharing 3 dynamic logical partitioning 3 Integrated Virtualization Manager 3 Live Partition Mobility 3 Micro-partitioning 3 NPIV 3 PowerVM Hypervisor 3 Shared Processor Pools 3 Shared Storage Pools 3 PowerVM Hypervisor 3 PowerVM Hypervisor Security 106 priority to processor 19 processing units 17
reserve_policy attribute 78 restorevgstruct command 41 Restoring VIOS commands installios command 38, 40 nim command 40 restorevgstruct command 41 NFS 38 rootvg 62, 78 rootvg on virtual SCSI clients 67
S
SAN switch 90 savevgstruct command 36 SDDPCM 66 AIX_APPCM 66 DS3950 66 DS4000 66 DS5000 66 SEA 51, 56, 58 SEA failover 58 SEA threading 52 server shutdown 45 server startup 45 Service Packs 25 Shared Ethernet 113 Shared Ethernet Adapter 51, 56, 58 Shared Ethernet Adapter failover 58 shared or dedicated resources 10 shared processing mode 16 Shared Processor Pools 3 Shared Storage Pool 75, 82 creating 84 requirements 82 SAN storage considerations 85 specifications 83 thin or thick provisioning 86 verifying 85 when to use 83 Shared Storage Pools 3, 70 Short-term performance Commands fcstat command 94 netstat command 94 seastat command 94 svmon command 94 topas command 94 topas_nmon command 95
Q
queue_depth disk attribute 78
R
recfgct command 25 Redbooks website 123 Contact us xvii Redundancy 90
viostat command 94 vmstat command 94 single Virtual I/O Server 29 single Virtual I/O Server environment 27 SPT 33 startup sequence 48 Storage DS3950 66 DS4000 66, 75 DS4800 75 DS5000 66, 75 DS8000 66 Dynamic Logical partition 92 Dynamic logical partitioning 79 fcs AIX device 67 Fibre Channel adapter attributes 67 Fibre Channel adapter statistics 68 File Backed devices 70 fscsi AIX device 67 IBM i 7071, 89 IVM 77 Live Partition Mobility 72 Logical Volume 74 Logical Volume mappings 77 Logical Volumes 70 manage_disk_drivers command 66 maximum virtual adapters 72 Mirroring 71, 77 mkvdev command 76 MPIO 77 mpio_get_config command 75 Multipathing 66, 77 Naming conventions 74 NPIV 62, 88 Optical devices 79 Physical devices 70 Physical volumes mapping 7677 PowerHA 77 Redundancy 90 rootvg 62, 78 SAN Switch 90 Shared Storage Pool 75 Shared Storage Pools 70 Supported Solutions 66 Tape devices 75, 90 Virtual Fibre Channel 62 virtual Fibre Channel adapter 90 Virtual Fibre Channel server adapter 74 Virtual optical devices 70
Virtual SCSI 70 virtual SCSI 62 Virtual SCSI server adapter 7374 Virtual slots 71 Virtual tape devices 70, 79 WWPN 89 Support Storage 66 svcinfo command 75 System Planing Tool, 24 System Planning Too 21 System Planning Tool 7, 33 Systems Workload Estimator 10
T
Tape devices 75, 79, 90 tcp_pmtu_discovery 54 thin or thick provisioning 86 tracing your storage configuration 22
U
udp_pmtu_discovery 54 Uncapped partitions 18
V
VIOS installation 24 viosbr command 35-36 viosecure command 38 Virtual adapters slots considerations 71 Virtual client SCSI adapter 74 virtual FC adapters 22 Virtual Fibre Channel 62 Virtual Fibre Channel adapter 90 Virtual Fibre Channel server adapter 74 Virtual I/O Server configuration limits 5 ethernet adapter 45 Hardware Management Console 4 Integrated Virtualization Manager 4 IP forwarding 5 logical volume manager 6 memory 4 network and storage 11 network services 106 physical disk 4 planning 6 processor 4 requirements 4
SCSI protocol 5 security 106 single, dual, or multiple 11 sizing 8 slot numbering 12 storage adapter 4 virtual SCSI 5 Virtual I/O Server profile 16 Virtual media repository 44 Virtual optical devices 70 Virtual optical devicesStorage Virtual optical devices 75 Virtual SCSI 70 virtual SCSI 22, 62, 112 Virtual SCSI server adapter 73 Virtual tape devices 70, 79 virtual target device 112 VLAN VLAN bridging 51 VLAN tagging 56
W
weight value 19 WWPN 89
Back cover
BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.