Front cover

PowerCare: Performance for Power Systems AIX
(Course code AP25)

Instructor Exercises Guide with hints
ERC 4.2

Trademarks
IBM is a registered trademark of International Business Machines Corporation.
The following are trademarks of International Business Machines Corporation in the United
States, or other countries, or both:
Active Memory, AIX 5L, AIX 6, AIX, BladeCenter, DB2, developerWorks, EnergyScale, Express, i5/OS, Power Architecture, POWER Hypervisor, Power Systems, Power, PowerPC, PowerVM, POWER6, POWER6+, POWER7, POWER7+, POWER7 Systems, Redbooks, System p, System p5, System z, Systems Director, Tivoli, VMControl, Workload Partitions Manager, z/VM, z9, 400
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or
both.
Microsoft, Windows and Windows NT are trademarks of Microsoft Corporation in the
United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Java and all Java-based trademarks and logos are trademarks or registered trademarks
of Oracle and/or its affiliates.
Other product and service names might be trademarks of IBM or other companies.

March 2013 edition


The information contained in this document has not been submitted to any formal IBM test and is distributed on an "as is" basis without any warranty either express or implied. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will result elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk.

© Copyright International Business Machines Corporation 2010, 2013.


This document may not be reproduced in whole or in part without the prior written permission of IBM.
Note to U.S. Government Users: Documentation related to restricted rights. Use, duplication, or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.

Contents
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v

Exercises description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii

Exercise 1. Introduction to the lab environment . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1


Exercise instructions with hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
Part 1: Access relevant documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
Part 2: Gather system information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4
Part 3: Partition configuration information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-7
Exercise review/wrap-up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-11

Exercise 2. Shared processors and virtual processor tuning . . . . . . . . . . . . . . . . . 2-1


Exercise instructions with hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
Part 1: Micro-partitioning and listing the configuration . . . . . . . . . . . . . . . . . . . . . . 2-3
Part 2: Controlling SMT and viewing CPU utilization statistics . . . . . . . . . . . . . . . . 2-7
Part 3: Exploring AIX CPU performance analysis commands . . . . . . . . . . . . . . . 2-17
Part 4: Physical shared processor pool and micro-partitions . . . . . . . . . . . . . . . . 2-24
Exercise review/wrap-up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-31

Exercise 3. Configuring multiple shared processor pools and donating dedicated


processors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-1
Exercise instructions with hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-3
Part 1: Configuring multiple shared processor pools . . . . . . . . . . . . . . . . . . . . . . . 3-3
Part 2: Monitoring user-defined shared processor pool . . . . . . . . . . . . . . . . . . . . 3-10
Part 3: Dedicated partitions running in donating mode . . . . . . . . . . . . . . . . . . . . 3-15
Exercise review/wrap-up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-26

Exercise 4. Active Memory Sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1


Exercise instructions with hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-3
Part 1: View shared memory pool configuration and configure paging devices . . . 4-3
Part 2: Configure your LPAR to use the shared memory pool. . . . . . . . . . . . . . . . 4-8
Part 3: Monitoring logical memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-17
Exercise review/wrapup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-23

Exercise 5. Active Memory Expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-1


Exercise instructions with hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
Part 1: Observe non-AME memory behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
Part 2: Configure your LPAR for AME . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-5
Part 3: Observe AME memory behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-6

Exercise 6. I/O device virtualization performance and tuning . . . . . . . . . . . . . . . . 6-1


Exercise instructions with hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-4
Part 1: Virtual SCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-4
Part 2: Virtual Ethernet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-11


Part 3: Shared Ethernet adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-22


Exercise review/wrap-up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-35

Exercise 7. Live Partition Mobility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-1


Exercise instructions with hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-3
Part 1: Partition migration environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-3
Part 2: Pre-migration checks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-4
Part 3: Partition mobility steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-8
Part 4: Migrate back to the initial managed server . . . . . . . . . . . . . . . . . . . . . . . 7-16
Exercise review/wrap-up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-17

Exercise 8. Suspend and resume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8-1


Exercise instructions with hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-3
Part 1: View the reserved storage pool configuration . . . . . . . . . . . . . . . . . . . . . . 8-3
Part 2: Suspend the partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-5
Part 3: Resume the suspended partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-8
Exercise review/wrapup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-11

Exercise A. Using the Virtual I/O Server Performance Analysis Reporting Tool . A-1
Exercise instructions with hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-3
Exercise review/wrapup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-8


Trademarks
The reader should recognize that the following terms, which appear in the content of this
training document, are official trademarks of IBM or other companies:
IBM is a registered trademark of International Business Machines Corporation.
The following are trademarks of International Business Machines Corporation in the United
States, or other countries, or both:
Active Memory, AIX 5L, AIX 6, AIX, BladeCenter, DB2, developerWorks, EnergyScale, Express, i5/OS, Power Architecture, POWER Hypervisor, Power Systems, Power, PowerPC, PowerVM, POWER6, POWER6+, POWER7, POWER7+, POWER7 Systems, Redbooks, System p, System p5, System z, Systems Director, Tivoli, VMControl, Workload Partitions Manager, z/VM, z9, 400
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or
both.
Microsoft, Windows and Windows NT are trademarks of Microsoft Corporation in the
United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Java and all Java-based trademarks and logos are trademarks or registered trademarks
of Oracle and/or its affiliates.
Other product and service names might be trademarks of IBM or other companies.


Exercises description


In the exercise instructions, you will see each step prefixed by a checkoff line
(such as __ 1.). You may wish to check off each step as you complete it to keep track
of your progress.
Most exercises include required sections which should always be
completed. These may be required before performing later exercises.
Some exercises may also include optional sections that you may wish
to perform if you have sufficient time and want an additional challenge.
This course includes two versions of the course exercises, with hints
and without hints.
The standard Exercise instructions section provides high-level
instructions for the tasks you should perform. You need to apply the
knowledge you gained in the unit presentation to perform the exercise.
The Exercise instructions with hints provide more detailed
instructions and hints to help you perform the exercise steps.


Exercise 1. Introduction to the lab environment


(with hints)

Estimated time
00:45

What this exercise is about


This exercise introduces and familiarizes you with the class lab
environment. It includes the key documentation that supports the
objectives of this course.

What you should be able to do


After completing this exercise, you should be able to:
Access the HMC
Review the LPAR configuration
Check AIX filesets
Access the documentation for AIX and IBM Power Systems

Introduction
This is an exploratory lab. You will explore some commands, reference
materials, and the HMC applications to review prerequisite skills and
to try out commands that you will use later in the course.

Requirements
This workbook
A computer with a network connection to the lab environment.
An HMC that is configured and supporting a POWER7 system.
A POWER7 processor-based system with at least one partition
running AIX 7 and one partition running the Virtual I/O Server code
per student.


Instructor exercise overview


Lab systems configuration
For this exercise, the managed systems and HMCs should be
powered on. Each managed system should have four running Virtual
I/O Server partitions (vios1, vios2, vios3, vios4) and four running AIX
client partitions (lpar1, lpar2, lpar3, lpar4). Also, there should be two
other Virtual I/O Servers available and running named vios_ams and
vios_ssp, which will be used for the active memory sharing exercise
and the shared storage pool exercise respectively. This configuration
supports four students, where each student has one AIX LPAR and
one VIOS LPAR.
This is the system configuration you should see after following the instructions in
the lab setup guide. The profile that the VIOS and client partitions should be using is
named Normal.
Exercise description
As this is the first exercise of the class, assign the students to a lab
team and provide all necessary network and login information. Most of
the exercise steps can be performed alone, but some require the
students to work as a team of two.
The main objective of the exercise is to get students to examine the
configuration of the machine, paying particular attention to:
The virtual adapters configuration on the VIOS and the AIX client
partition
The hostnames and IP addresses of the partitions
How to log in to the partitions
How to check the virtual adapter configuration of Virtual I/O
Servers with the lsmap command
How to check the configuration of a partition with the
lparstat -i command


Exercise instructions with hints

Preface
All exercises of this chapter depend on the availability of specific equipment in your
classroom. You need a computer system configured with a network connection to an
HMC.
The hints provided for locating documentation on particular web pages were correct
when this course was written. However, web pages tend to change over time. Ask your
instructor if you have trouble navigating the websites.
All hints are marked by a hint icon.
The output shown in the hints is an example. Your output and answers based on the
output might be different.

General notes, applicable to all exercises:


Your instructor will provide you with instructions for accessing the remote environment. For
example, this might involve using a web browser or Virtual Private Network (VPN). Your
instructor will further provide you with all the details and login IDs required.
Unless otherwise stated, log in to systems (HMC/LPAR) using a terminal window (for
example through PuTTY or a Linux command line).
On some terminal emulations, the function keys are not operative and you might need to
substitute escape sequences. For example, instead of pressing F3, you might need to
press <Esc+3> for the same function.

Part 1: Access relevant documentation


In this exercise, you will discover online documentation used to support the POWER7
environment and you will also gather system information. You can refer to this
documentation as you work through the remainder of the exercises in this course.
This exercise requires Internet access.
__ 1. Go to http://www.ibm.com. Click Products then click Power Systems and explore
this page. A large number of links, useful documents, and detailed information are
available. Navigate to any links of interest.
__ 2. Go back to http://www.ibm.com. Click the Support & downloads button. Investigate
the Technical support and Downloads options.
__ 3. Use the following web address to access the IBM Systems Information Centers:
http://publib.boulder.ibm.com/eserver. This page is the entry point for hardware as
well as software information. Click the IBM Systems Hardware Information Center
link.

Copyright IBM Corp. 2010, 2013 Exercise 1. Introduction to the lab environment 1-3
Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
Instructor Exercises Guide with hints

At the IBM Systems Hardware Information Center, use the Search option on the left
side of the screen to find topics related to the following keywords:
Installing AIX in a partition
Partitioning your server
Managing server using HMC
__ 4. From http://publib.boulder.ibm.com/eserver, select AIX Information Center. Select
the AIX 7.1 Information Center. At the resulting IBM Systems Information Center
page, look for the following topics and see what information is available:
Click the AIX PDFs link to get a list of PDFs of the AIX documentation.
Notice the Performance management and Performance Tools Guide and
Reference PDFs.
__ 5. Go to the PowerVM virtualization website and see what is available:
http://www.ibm.com/systems/power/software. After looking at the available links,
follow the link to PowerVM Virtualization without limits. Take a moment to see
what is available from this page.

Part 2: Gather system information


In this section, you will be directed to use commands to identify system resources. These
are standard AIX commands with which you might already be familiar.
__ 6. Log in to your assigned HMC using the user name and password provided by your
instructor. Don't forget to use the https protocol when you type in the address.
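For example, if your assigned HMC were named sys11hmc (a placeholder; use the
hostname or IP address your instructor provides), you would point your browser at:
https://sys11hmc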
__ 7. From the HMC GUI, open a terminal window to your assigned Virtual I/O Server
partition on your managed system.
From the navigation area on the left, select Systems Management > Servers to view
the managed system table. Click your managed system name. The LPAR table should
appear. Select your assigned Virtual I/O Server partition, then run the Console
Window > Open Terminal Window task.
__ 8. Log in to the partition with the user ID padmin and the password abc123. The
Virtual I/O Server partition provides a restricted command line interface (CLI) for the
padmin user.
__ 9. Use the lsmap -all command to list the virtual SCSI disk mapping. Use the
command output to complete Table 1 (below) for your assigned logical partition.
Here is an example on VIOS1. The vhost0 adapter is providing a virtual disk to lpar1,
and vhost1 is providing a virtual disk to lpar2. Each client logical partition has a virtual
SCSI disk provided by two different VIO servers. The client LPAR has MPIO setup.

1-4 PowerCare: Performance for Power Systems AIX Copyright IBM Corp. 2010, 2013
Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
V8.0
Instructor Exercises Guide with hints

Note: You might have multiple virtual target devices (VTDs) in your list. Fill out the chart
for the first VTD. The PVID is retrieved from the lspv command listed in the next step.
$ lsmap -all
VSA Physloc Client Partition ID
--------------- --------------------------------------------
vhost0 U8204.E8A.652ACF2-V1-C13 0x00000002
VTD lpar1_rootvg
Status Available
LUN 0x8100000000000000
Backing device hdisk2
Physloc
U78A0.001.DNWGGSH-P1-C1-T1-W500507680140581E-L3000000000000
VSA Physloc Client Partition ID
--------------- --------------------------------------------
vhost1 U8204.E8A.652ACF2-V1-C14 0x00000003
VTD lpar2_rootvg
Status Available
LUN 0x8100000000000000
Backing device hdisk3
Physloc
U78A0.001.DNWGGSH-P1-C1-T1-W500507680140581E-L4000000000000

Table 1: VIOS virtual SCSI configuration


Virtual SCSI server adapter slot
Virtual target device
Backing device
hdisk PVID
Given the example lsmap command output above, for lpar1, the virtual SCSI server
adapter slot is 13, the VTD is lpar1_rootvg, and the backing device is hdisk2.
__ 10. Use the lspv command to identify the PVID of the hdisk used as the backing device
for your AIX client partition. Record this PVID in Table 1 above. If the PVID does not
show in the output of the command, run the following command and replace hdisk2
with your assigned LPAR's backing device. Then run lspv again.
$ chdev -dev hdisk2 -attr pv=yes
Example output of lspv command showing the PVID for hdisk2:
$ lspv | grep hdisk2
hdisk2 00f6bcc9a30e4952 None
__ 11. Use the lsmap -all -net command to list the virtual Ethernet and Shared
Ethernet adapter configuration. Record the output of the command in Table 2 below.


Example command and its output:


$ lsmap -all -net
SVEA Physloc
------ --------------------------------------------
ent2 U8204.E8A.652ACF2-V1-C11-T1
SEA ent4
Backing device ent0
Status Available
Physloc U78A0.001.DNWGGSH-P1-C4-T1
__ 12. Use the lsdev command to list the ctl_chan and ha_mode attributes of the Shared
Ethernet adapter. Record the values for the control channel adapter for SEA failover
and ha_mode in Table 2 below.
Example command and its output:
$ lsdev -dev ent4 -attr
attribute value description user_settable
accounting disabled Enable per-client accounting of network statistics
True
ctl_chan ent3 Control Channel adapter for SEA failover True
gvrp no Enable GARP VLAN Registration Protocol(GVRP) True
ha_mode enabled High Availability Mode True
jumbo_frames no Enable Gigabit Ethernet Jumbo Frames True
large_receive no Enable receive TCP segment aggregation True
largesend 0 Enable Hardware Transmit TCP Resegmentation True
netaddr 0 Address to ping True
pvid 1 PVID to use for the SEA device True
pvid_adapter ent2 Default virtual adapter to use for non-VLAN-tagged
packets True
qos_mode disabled N/A True
real_adapter ent0 Physical adapter associated with the SEA True
thread 1 Thread mode enabled (1) or disabled (0) True
virt_adapters ent2 List of virtual adapters associated with the SEA
(comma separated) True
__ 13. Use the entstat -all entX | grep Priority command to check the priority
and status (Active set to True or False) of your shared Ethernet adapter. You
already observed that Shared Ethernet adapter failover is configured. Record the
values in Table 2 below.
The Priority value defines which Shared Ethernet adapter is the primary and which is
the backup. In this example, the Shared Ethernet adapter has a priority of one and is
Active. We can say that this Virtual I/O Server is the primary for bridging the VLAN.
$ entstat -all ent4 | grep Priority
Priority: 1 Active: True

Table 2: Virtual Ethernet configuration
Virtual Ethernet adapter slot
Shared Ethernet adapter
Ethernet backing device
Control channel adapter for
SEA failover
ha_mode
Priority and status
Given the above examples in the hints, you would document:
Virtual Ethernet adapter slot = virtual slot 11
Shared Ethernet adapter = ent4
Ethernet backing device = ent0
Control channel adapter for SEA failover = ent3
ha_mode = enabled
Priority and status = 1 and Active is true (primary)

Part 3: Partition configuration information


__ 14. Log in to your assigned AIX LPAR using the user and password that the instructor
assigned.
__ 15. Use the lparstat -i AIX command to answer the following questions regarding
your partition:
__ a. Is the partition using dedicated or shared processors?
__ b. Is the partition capped or uncapped?
__ c. Is simultaneous multithreading on or off?
__ d. How much memory is allocated to this partition?
__ e. What is the partition's entitled capacity (in processing units)?
__ f. How many online virtual CPUs are there?
__ g. Your partition is a member of what shared processor pool?
__ h. What is the memory mode?


The output from the lparstat -i command should be similar to the following:
# lparstat -i
Node Name : sys114_lpar1
Partition Name : sys114_lpar1
Partition Number : 2
Type : Shared-SMT
Mode : Capped
Entitled Capacity : 0.35
Partition Group-ID : 32770
Shared Pool ID : 0
Online Virtual CPUs : 1
Maximum Virtual CPUs : 10
Minimum Virtual CPUs : 1
Online Memory : 1024 MB
Maximum Memory : 2048 MB
Minimum Memory : 768 MB
Variable Capacity Weight : 0
Minimum Capacity : 0.10
Maximum Capacity : 2.00
Capacity Increment : 0.01
Maximum Physical CPUs in system : 8
Active Physical CPUs in system : 8
Active CPUs in Pool : 8
Shared Physical CPUs in system : 8
Maximum Capacity of Pool : 800
Entitled Capacity of Pool : 280
Unallocated Capacity : 0.00
Physical CPU Percentage : 35.00%
Unallocated Weight : 0
Memory Mode : Dedicated
Total I/O Memory Entitlement : -
Variable Memory Capacity Weight : -
Memory Pool ID : -
Physical Memory in the Pool : -
Hypervisor Page Size : -
Unallocated Variable Memory Capacity Weight: -
Unallocated I/O Memory entitlement : -
Memory Group ID of LPAR : -
Given the above example, here are the answers:
__ a. Is the partition using dedicated or shared processors? Shared
__ b. Is the partition capped or uncapped? Capped
__ c. Is simultaneous multithreading on or off? On


__ d. How much memory is allocated to this partition? 1024 MB


__ e. What is the partition's entitled capacity (in processing units)? 0.35
__ f. How many online virtual CPUs are there? One
__ g. Your partition is a member of what shared processor pool? ID 0
__ h. What is the memory mode? Dedicated
__ 16. Use the lsdev -c adapter command to list the virtual Ethernet adapter
configured on your AIX client partition. Record the virtual Ethernet adapter name in
Table 3 below. The slot will be determined in the next step.

Table 3: Client partition virtual Ethernet configuration


Virtual Ethernet adapter name
Virtual Ethernet adapter slot
Example command and its output showing that the virtual Ethernet adapter name is
ent0:
# lsdev -c adapter
ent0 Available Virtual I/O Ethernet Adapter (l-lan)
vsa0 Available LPAR Virtual Serial Adapter
vscsi0 Available Virtual SCSI Client Adapter
vscsi1 Available Virtual SCSI Client Adapter
__ 17. Use the lscfg -vpl command to find out the virtual Ethernet adapter slot ID.
Record the information in the Table 3 above.
Example command and its output showing that the virtual Ethernet adapter slot ID is 11:
# lscfg -vpl ent0
ent0 U8204.E8A.652ACF2-V2-C11-T1 Virtual I/O Ethernet Adapter (l-lan)

Network Address.............2E5C5B02E80B
Displayable Message.........Virtual I/O Ethernet Adapter
(l-lan)
Hardware Location Code......U8204.E8A.652ACF2-V2-C11-T1

PLATFORM SPECIFIC

Name: l-lan
Node: l-lan@3000000b
Device Type: network
Physical Location: U8204.E8A.652ACF2-V2-C11-T1


__ 18. Determine the operating system level in your partition using the oslevel -s
command. You should find that the LPAR is running AIX 7.1.
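The exact output depends on the installed technology level and service pack; an
illustrative result for an AIX 7.1 system (the specific level string below is only an
example) would look like:
# oslevel -s
7100-02-01-1245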
__ 19. Use the lspath command to verify that your hdisk0 has two different access paths.
MPIO is set up at your client logical partition.
You should see the following for hdisk0:
# lspath -l hdisk0
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1
__ 20. Using the PuTTY program on your desktop, log in to the HMC command line.
Determine the HMC's version with the lshmc -V command. The following is an
example command and output that shows an HMC at Version 7, Release 7.4.0
Service Pack 1. It also has two fix packs installed.
hscroot@sys11hmc:~> lshmc -V
"version= Version: 7
Release: 7.4.0
Service Pack: 1
HMC Build level 20120207.1
MH01302: Fix for HMC V7R7.4.0 SP1 (02-07-2012)
MH01306: Fix for HMC V7R7.4.0 (02-29-2012)
","base_version=V7R7.4.0
"
Your assigned HMC may be at a different level. In your workplace, you might notice
differences in the output of some commands if your HMC is at a later version or
release.

End of exercise


Exercise review/wrap-up


In this exercise, the students examined key documentation and were shown where to find
information on the Internet. The students also looked at some commands that provide
information about the partition's configured resources and the lab environment.


Exercise 2. Shared processors and virtual processor tuning

(with hints)

Estimated time
02:00

What this exercise is about


In this exercise, students examine advanced processor options such
as SMT, shared processors, virtual processors, and capped and
uncapped processor partitions.
AIX performance analysis tools will be used to view how performance
is affected when using various advanced processor configurations.
This exercise guides students through the concept of shared
processors, also known as micro-partitioning. The student will use
CPU-intensive workloads to see the impact of various processor
configurations. By the end of this exercise, students should have a
general understanding of how to optimize a micro-partition for
performance.

What you should be able to do


After completing this exercise, you should be able to:
Use AIX performance tools such as lparstat, mpstat, topas,
sar, vmstat, and iostat to monitor and analyze CPU activity in
a shared processor environment
View the effect of simultaneous multithreading (SMT) on workloads
and AIX analysis tools

Introduction
This exercise is divided into four parts. Throughout this exercise, all of
the partitions have simultaneous multi-threading enabled.
In the first part of the exercise, students gain experience with viewing
micro-partitioning specific configuration options of a logical partition.

Copyright IBM Corp. 2010, 2013 Exercise 2. Shared processors and virtual processor tuning 2-1
Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
Instructor Exercises Guide with hints

The second part provides details regarding statistics that are relevant
to SMT and a micro-partitioning environment.
In the third part of the exercise, by dynamically changing the capacity
entitlement (CE), the capped/uncapped setting, and the number of
virtual processors (VPs), you will see the impacts of these
configuration options on the logical partition's processing capacity and
performance.
In the fourth part, you will run a CPU stress load on the partitions using
an executable named spload, and monitor the effect. The spload tool
can be found in the /home/an31/ex2 directory.

Requirements
This workbook
A computer with a network connection to the lab environment.
An HMC that is configured and supporting a POWER7 system.
A POWER7 processor-based system with at least one partition
running AIX 7 and one partition running the Virtual I/O Server code
per student.

Instructor exercise overview


The first part of this exercise provides an opportunity for students to
display the partition's configuration options. A review of different types
of processors is also provided. The first part should not last long, as
the students should already know this material.
The second part is more monitoring-oriented and most of the key AIX
monitoring tools will be used.
The third part is a scenario in which the students must change the
partition's configuration options. The goal here is to help the students
understand how micro-partitioning impacts not only the partition being
tuned, but how configuration changes impact the other partitions on
the same system.
The fourth part will continue to explore different options and their effect
on AIX analysis tools. Students will compare capped versus uncapped
and will view the app statistic.
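A reminder that may help when students reach that step (this note is an addition to
the original text): the app column is only reported by lparstat when the partition is
allowed to see pool-wide utilization data, that is, when Allow performance information
collection is enabled in the partition's properties on the HMC. A quick way to check is
to run lparstat and look for the app column in the report header:
# lparstat 1 1
If the app column is missing from the output, enable performance information
collection for the partition and check again.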


Exercise instructions with hints

Preface
All exercises of this chapter depend on the availability of specific equipment in your
classroom.
All hints are marked by a hint icon.

Part 1: Micro-partitioning and listing the configuration

Tools and lab environment


In this lab, you will be using your assigned AIX 7 LPAR. You will use a tool named spload
during this exercise to generate a CPU load in the partitions. A transaction rate is displayed
every five seconds by default and can be changed using the -t option. The help can be
viewed using the -h option.
Command    Type           Role
spload     C executable   Program used to generate CPU load on partitions. A
                          transaction rate is displayed every 5 seconds.
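For reference, here is how the tool is typically invoked (the option behavior is as
described above; the output format will vary):
# cd /home/an31/ex2
# ./spload -h        # display the usage help
# ./spload -t 10     # report the transaction rate every 10 seconds instead of every 5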

__ 1. Using the HMC, check the properties of your running partition. Verify that your
assigned client partition has the following configuration values. Use the HMC to look
at the partition properties.
Processing mode: Shared
Processing units: Min=0.1, Current=0.35, Max=2
Virtual processors: Min=1, Current=1, Max=20
Sharing mode: Capped
Shared processor pool: DefaultPool
Log in to your assigned HMC. In the HMC Server Management application, select your
server to display the LPAR table. Select your assigned LPAR and choose Properties.
Click the Hardware tab, then the Processors tab. Verify the processor information is
correct.
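If you prefer the HMC command line over the GUI, the same values can be verified with
lshwres (the managed system and partition names below are placeholders, and attribute
names can vary slightly between HMC levels):
hscroot@hmc> lshwres -r proc -m <managed_system> --level lpar --filter "lpar_names=<lpar_name>"
Look for curr_proc_units=0.35, curr_procs=1, curr_sharing_mode=cap, and
curr_shared_proc_pool_name=DefaultPool in the output.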


__ 2. Open a terminal window or a Telnet session to your partition, and log in as the root
user. Your instructor should have provided the root password.
__ 3. Check the partition configuration using the commands lparstat and lsattr
using the following commands.
# lparstat -i
# lsattr -El sys0


The lparstat command provides a convenient output for checking many
configuration values.
# lparstat -i
Node Name : sys114_lpar1
Partition Name : sys114_lpar1
Partition Number : 2
Type : Shared-SMT-4
Mode : Capped
Entitled Capacity : 0.35
Partition Group-ID : 32771
Shared Pool ID : 0
Online Virtual CPUs : 1
Maximum Virtual CPUs : 10
Minimum Virtual CPUs : 1
Online Memory : 1024 MB
Maximum Memory : 2048 MB
Minimum Memory : 512 MB
Variable Capacity Weight : 0
Minimum Capacity : 0.10
Maximum Capacity : 2.00
Capacity Increment : 0.01
Maximum Physical CPUs in system : 8
Active Physical CPUs in system : 8
Active CPUs in Pool : 8
Shared Physical CPUs in system : 8
Maximum Capacity of Pool : 800
Entitled Capacity of Pool : 280
Unallocated Capacity : 0.00
Physical CPU Percentage : 35.00%
Unallocated Weight : 0
Memory Mode : Dedicated
Total I/O Memory Entitlement : -
Variable Memory Capacity Weight : -
Memory Pool ID : -
Physical Memory in the Pool : -
Hypervisor Page Size : -
Unallocated Variable Memory Capacity Weight : -
Unallocated I/O Memory entitlement : -
Memory Group ID of LPAR : -
Desired Virtual CPUs : 1
Desired Memory : 1024 MB
Desired Variable Capacity Weight : 0
Desired Capacity : 0.35
Target Memory Expansion Factor : -


Target Memory Expansion Size : -


Power Saving Mode : Disabled
Here is an example of the lsattr command:
# lsattr -El sys0
SW_dist_intr false Enable SW distribution of interrupts True
autorestart true Automatically REBOOT OS after a crash True
boottype disk N/A False
capacity_inc 0.01 Processor capacity increment False
capped true Partition is capped False
chown_restrict true Chown Restriction Mode True
conslogin enable System Console Login False
cpuguard enable CPU Guard True
dedicated false Partition is dedicated False
enhanced_RBAC true Enhanced RBAC Mode True
ent_capacity 0.35 Entitled processor capacity False
frequency 6400000000 System Bus Frequency False
fullcore false Enable full CORE dump True
fwversion IBM,EL340_075 Firmware version and revision levels False
ghostdev 0 Recreate devices in ODM on system change True
id_to_partition 0X040183743A9F5E02 Partition ID False
id_to_system 0X040183743A9F5E00 System ID False
iostat false Continuously maintain DISK I/O history True
keylock normal State of system keylock at boot time False
log_pg_dealloc true Log predictive memory page deallocation events True
max_capacity 2.00 Maximum potential processor capacity False
max_logname 9 Maximum login name length at boot time True
maxbuf 20 Maximum number of pages in block I/O BUFFER CACHE True
maxmbuf 0 Maximum Kbytes of real memory allowed for MBUFS True
maxpout 8193 HIGH water mark for pending write I/Os per file True
maxuproc 128 Maximum number of PROCESSES allowed per user True
min_capacity 0.10 Minimum potential processor capacity False
minpout 4096 LOW water mark for pending write I/Os per file True
modelname IBM,8204-E8A Machine name False
ncargs 256 ARG/ENV list size in 4K byte blocks True
nfs4_acl_compat secure NFS4 ACL Compatibility Mode True
ngroups_allowed 128 Number of Groups Allowed True
os_uuid 2501c52f-73c6-4581-96fd-059aa177fe60 N/A True
pre430core False Use pre-430 style CORE dump True
pre520tune disable Pre-520 tuning compatibility mode True
realmem 1048576 Amount of usable physical memory in Kbytes False
rtasversion 1 Open Firmware RTAS version False
sed_config select Stack Execution Disable (SED) Mode True
systemid IBM,03652ACF2 Hardware system identifier False
variable_weight 0 Variable processor capacity weight False
__ 4. Looking at the output of the lsattr command, what is the value of the
variable_weight attribute?
Can you explain the meaning of this attribute and its current value?
The variable processor capacity weight is set to 0. A partition that is capped and a
partition that is uncapped with a weight of 0 are functionally identical. An uncapped
partition with a weight of 0 is considered soft-capped. In terms of performance, you get


exactly the same result: utilization can only go up to the partition's entitled capacity, not
higher.
It is possible to dynamically change a partition from capped to uncapped, and change
the weight by using the HMC and the dynamic logical partitioning menu (or the chhwres
HMC command).
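For example, to set only the uncapped weight from the HMC command line (run on the
HMC, not in AIX; the managed system and partition names are placeholders), a
command of this form can be used:
hscroot@hmc> chhwres -r proc -m <managed_system> -o s -p <lpar_name> -a "uncap_weight=128"
The -o s flag requests a set (dynamic change) operation on the running partition.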
__ 5. Using the lsdev command, list the available processors in your partition. What type
of processors are listed with the lsdev command?
Example command and its output:
# lsdev -c processor
proc0 Available 00-00 Processor
We have one processor available. This means that we have one virtual processor
configured because your assigned LPAR is configured as a shared processor partition.
If you see any processors listed as Defined, it means that the processor was previously
used by the partition, but is not currently available. This might happen if the partition has
been shut down and then reactivated with a smaller number of processors.
__ 6. Using the bindprocessor command, display the processors available in your
partition. What processor type does the bindprocessor command list?
Example command and its output:
# bindprocessor -q
The available processors are: 0 1 2 3
The bindprocessor command lists the available logical processors. In this example,
we have one virtual processor available (revealed by the lsdev command), and four
logical processors. This means that simultaneous multi-threading is enabled and is
using the SMT4 mode.

Part 2: Controlling SMT and viewing CPU utilization statistics


__ 7. Log in to your assigned LPAR as the root user.
__ 8. Display the number of processors (cores) that are available in your LPAR using the
lsdev command.
The suggested command and example output are:
# lsdev | grep proc
proc0 Available 00-00 Processor
__ 9. Disable SMT in your LPAR and then display the current SMT configuration.


The suggested command and example output are:


# smtctl -m off
smtctl: SMT is now disabled. It will persist across reboots if you run
the bosboot command before the next reboot.

# smtctl
This system is SMT capable.
This system supports up to 4 SMT threads per processor.
SMT is currently disabled.
SMT boot mode is set to disabled.
SMT threads are bound to the same virtual processor.

proc0 has 1 SMT threads.


Bind processor 0 is bound with proc0
__ 10. Display the number of logical CPUs that are available in your system.
There are many ways to get this information. Some suggestions are below. All should
show one logical processor.
# bindprocessor -q
The available processors are: 0

# lparstat

System configuration: type=Shared mode=Capped smt=Off lcpu=1


mem=1024MB psize=4 ent=0.35

%user %sys %wait %idle physc %entc lbusy vcsw phint


----- ----- ------ ------ ----- ----- ------ ----- -----
0.0 0.0 0.5 99.5 0.00 0.0 6.5 349404 1608
__ 11. Enable SMT and display the SMT configuration. What is the default number of
logical CPUs per processor?


The suggested command and example output are:


# smtctl -m on
smtctl: SMT is now enabled. It will persist across reboots if you run
the bosboot command before the next reboot.

# smtctl
This system is SMT capable.
This system supports up to 4 SMT threads per processor.
SMT is currently enabled.
SMT boot mode is set to enabled.
SMT threads are bound to the same virtual processor.

proc8 has 4 SMT threads.


Bind processor 0 is bound with proc8
Bind processor 1 is bound with proc8
Bind processor 2 is bound with proc8
Bind processor 3 is bound with proc8
__ 12. Set the number of hardware threads per core to two and then display the SMT
configuration.
The suggested command and example output are:
# smtctl -t 2
smtctl: SMT is now enabled. It will persist across reboots if you run
the bosboot command before the next reboot.

# smtctl
This system is SMT capable.
This system supports up to 4 SMT threads per processor.
SMT is currently enabled.
SMT boot mode is set to enabled.
SMT threads are bound to the same virtual processor.

proc8 has 2 SMT threads.


Bind processor 0 is bound with proc8
Bind processor 1 is bound with proc8
__ 13. Change the number of threads per core back to four.
The suggested command and example output are:
# smtctl -t 4
smtctl: SMT is now enabled. It will persist across reboots if you run
the bosboot command before the next reboot.
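As the smtctl message indicates, the SMT setting persists across reboots only if the
boot image is rebuilt. If you wanted the change to survive a reboot (not required for this
exercise), the standard command is:
# bosboot -a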
__ 14. Use vmstat and mpstat to display the number of logical CPUs on the system.
How many logical CPUs are on your system?


The suggested command and example output are:


# vmstat
System configuration: lcpu=4 mem=1024MB ent=0.35
kthr memory page faults cpu
----- ----------- ------------------ ------------ --------------
r b avm fre re pi po fr sr cy in sy cs us sy id wa pc ec
1 1 163470 64229 0 0 0 0 0 0 3 222 161 0 0 99 0 0.00 0.1

# mpstat -s 1 1
System configuration: lcpu=4 ent=0.3 mode=Capped
Proc0
1.35%
cpu0 cpu1 cpu2 cpu3
0.69% 0.22% 0.22% 0.22%
The example output for both commands has a header field (lcpu=) that reports four
logical CPUs. The mpstat report additionally shows that all four logical CPUs belong to
the single virtual processor (Proc0).
Note: New intelligent threads technology enables workload optimization by dynamically
selecting the most suitable threading mode - single thread per core, SMT (simultaneous
multi-thread) with two threads per core or SMT with four threads per core. As a result,
applications can run at their peak performance and server workload capacity is
increased. In this section we will see how POWER7 intelligent threads behave with an
increase in the workload.
__ 15. Run the yes command to generate load on the CPU. Execute one instance of the
yes command in the background.
The suggested command is:
# yes > /dev/null &
__ 16. Monitor the system wide processor utilization by running lparstat for four
intervals of one second each.
The suggested command and example output are:
# lparstat 1 4
System configuration: type=Shared mode=Capped smt=4 lcpu=4 mem=1024MB psize=16
ent=0.35
%user %sys %wait %idle physc %entc lbusy vcsw phint
----- ----- ------ ------ ----- ----- ------ ----- -----
61.6 2.2 0.0 36.2 0.36 102.5 26.9 274 0
61.3 1.5 0.0 37.2 0.35 99.6 28.5 390 1
61.7 2.0 0.0 36.3 0.35 100.4 28.5 374 1
60.7 2.0 0.0 37.3 0.35 98.9 23.2 379 2
__ 17. Examine the lparstat report. How many processing units are being consumed
(physc)?


In the example output, the single CPU-intensive job is utilizing all of the entitled
capacity (0.35).
__ 18. Monitor the utilization for all logical CPUs, using the sar command. Specify two
intervals of one second each. You might wish to maximize your terminal emulation
window in order to see all of the output without scrolling. What do you notice in the
command output?
The suggested command and example output are:
# sar -P ALL 1 2

AIX lpar1 1 7 00F6BCC94C00 11/13/12

System configuration: lcpu=4 ent=0.35 mode=Capped

14:42:30 cpu %usr %sys %wio %idle physc %entc


14:42:31 0 50 6 0 44 0.08 22.6
1 94 1 0 4 0.19 52.9
2 0 0 0 100 0.04 12.1
3 0 0 0 100 0.04 12.2
- 61 2 0 37 0.35 99.8
14:42:32 0 89 4 0 8 0.17 48.0
1 67 1 0 32 0.10 27.4
2 0 0 0 100 0.04 12.3
3 0 0 0 100 0.04 12.3
- 61 2 0 37 0.35 99.9

Average 0 76 4 0 19 0.12 35.3


1 85 1 0 14 0.14 40.1
2 0 0 0 100 0.04 12.2
3 0 0 0 100 0.04 12.2
- 61 2 0 37 0.35 99.8
In the output above, notice that two logical processors are busy and two are idle.
Together, the two logical processors are consuming all of the entitled capacity. We can
assume one of the logical processors is running the yes process thread. The other
logical processor is running operating system tasks.
__ 19. Execute one more instance of the yes command in the background. This will result
in two CPU intensive jobs running on the system.
The suggested command and example output are:
# yes > /dev/null &


You can check that there are two yes jobs running:
# jobs
[2] + Running yes > /dev/null &
[1] - Running yes > /dev/null &
__ 20. Monitor the system wide processor utilization by running lparstat for four intervals
of one second each. What are the current values of physc and %entc?
____________________________________________________________
The suggested command and example output are:
# lparstat 1 4
System configuration: type=Shared mode=Capped smt=4 lcpu=4 mem=1024MB
psize=4 ent=0.35

%user %sys %wait %idle physc %entc lbusy vcsw phint


----- ----- ------ ------ ----- ----- ------ ----- -----
85.7 2.9 0.0 11.4 0.35 100.0 52.8 499 1
87.0 1.3 0.0 11.7 0.35 99.9 51.0 499 0
87.1 1.3 0.0 11.5 0.35 99.9 49.2 513 1
86.9 1.4 0.0 11.8 0.35 99.9 53.5 520 1

The output should show that we are still using up all of the entitled capacity. The physc
value matches the ent value and the entitled capacity percentage is 100 or nearly so.
__ 21. Monitor the utilization for all logical CPUs using the sar command. Specify two
intervals of two seconds each. What do you notice about the sar output?


The suggested command and example output are:


# sar -P ALL 2 2
AIX lpar1 1 7 00F6BCC94C00 11/13/12

System configuration: lcpu=4 ent=0.35 mode=Capped

14:56:24 cpu %usr %sys %wio %idle physc %entc


14:56:26 0 61 9 0 30 0.05 14.4
1 95 1 0 5 0.11 32.8
2 97 1 0 3 0.13 37.2
3 68 2 0 30 0.05 15.5
- 86 2 0 12 0.35 99.9
14:56:28 0 68 5 0 26 0.06 16.3
1 99 1 0 0 0.15 42.9
2 95 1 0 4 0.11 31.5
3 47 1 0 52 0.03 9.2
- 88 1 0 11 0.35 99.9

Average 0 65 7 0 28 0.05 15.3


1 97 1 0 2 0.13 37.9
2 96 1 0 3 0.12 34.3
3 60 2 0 38 0.04 12.3
- 87 2 0 11 0.35 99.9
The output shows there are still two busy logical processors and two mostly idle logical
processors.
__ 22. Execute two more instances of the yes command in the background. This will result
in four computational intensive jobs running on the system.
The suggested command and example output are:
# yes > /dev/null &
# yes > /dev/null &

# jobs
[4] + Running yes > /dev/null &
[2] - Running yes > /dev/null &
[3] Running yes > /dev/null &
[1] Running yes > /dev/null &
There should be four instances of the yes command running on your LPAR now.
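As an alternative (illustrative) check that does not rely on the shell's job table, you can
count the processes directly; the [y]es pattern keeps the grep process itself out of the
count:
# ps -ef | grep -c "[y]es"
4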
__ 23. Monitor the utilization for all logical CPUs using the sar command. Specify two
intervals of two seconds each.


The suggested command and example output are:


# sar -P ALL 2 2

AIX lpar1 1 7 00F6BCC94C00 11/13/12

System configuration: lcpu=4 ent=0.35 mode=Capped

15:02:43 cpu %usr %sys %wio %idle physc %entc


15:02:45 0 97 3 0 0 0.09 24.9
1 99 1 0 0 0.09 25.0
2 99 1 0 0 0.09 25.1
3 99 1 0 0 0.09 25.0
- 99 1 0 0 0.35 100.0
15:02:47 0 97 3 0 0 0.09 24.9
1 99 1 0 0 0.09 25.0
2 99 1 0 0 0.09 25.0
3 99 1 0 0 0.09 25.0
- 99 1 0 0 0.35 99.9

Average 0 97 3 0 0 0.09 24.9


1 99 1 0 0 0.09 25.0
2 99 1 0 0 0.09 25.1
3 99 1 0 0 0.09 25.0
- 99 1 0 0 0.35 100.0

__ 24. Examine the sar report. How many logical CPUs have a very high % utilization
(%user + %sys) along with a significant physical processor consumption (physc)?
What can you say about the distribution of the workload?
The example output shows four logical CPUs in each interval, each with a high
utilization totaling 100% and a significant physical processor consumption that is split
evenly between all four logical processors. Recall that this is a capped processor
partition, so it cannot use more than its entitled capacity.
__ 25. Run the lparstat command with a count and interval of two. Note the physc and
%entc values here: _____________________________________________


Example lparstat command and its output, which shows that physc is the same as the
entitled capacity (0.35) and %entc is about 100%.
# lparstat 2 2

System configuration: type=Shared mode=Capped smt=4 lcpu=4 mem=1024MB


psize=4 ent=0.35

%user %sys %wait %idle physc %entc lbusy vcsw phint


----- ----- ------ ------ ----- ----- ------ ----- -----
98.7 1.2 0.0 0.1 0.35 99.9 100.0 400 1
98.6 1.2 0.0 0.2 0.35 99.8 100.0 402 2
__ 26. In the HMC GUI, select your assigned LPAR and use the Dynamic Logical
Partitioning menu to dynamically change your LPAR to an uncapped partition. Set
the uncapped weight to 128.
In the HMC GUI, select your LPAR. Run the Dynamic Logical Partitioning >
Processor > Add or Remove task. In the window that opens, select the Uncapped
checkbox and enter an uncapped weight value of 128. Click OK to complete the task.


__ 27. In your LPAR login session, run lparstat with a count and interval of two. What
do you notice now about the physc and %entc statistics? How do they compare with
the last lparstat command that you ran?
Here is an example lparstat command and its output showing that physc is now 1.0
and the %entc is 285.8%. Now that the partition is uncapped, it is only limited by its one
virtual processor (and the amount of excess cycles in the shared processor pool).
# lparstat 2 2

System configuration: type=Shared mode=Uncapped smt=4 lcpu=4


mem=1024MB psize=4 ent=0.35

%user %sys %wait %idle physc %entc lbusy vcsw phint


----- ----- ------ ------ ----- ----- ------ ----- -----
99.3 0.7 0.0 0.0 1.00 285.8 100.0 400 9
99.3 0.7 0.0 0.0 1.00 285.8 100.0 400 6
__ 28. Leave the four yes processes running on your system for the next part of this
exercise.


Part 3: Exploring AIX CPU performance analysis commands


AIX monitoring commands, such as lparstat and mpstat, can be used to examine
information related to the partitioning capabilities of POWER5, POWER6, and POWER7
systems. Monitoring commands also have output columns for reporting micro-partitioning
statistics.
__ 29. Previously in this exercise, you explored using lparstat and sar to view CPU
utilization, particularly the physc and %entc fields. Next youll look at the distribution
of workload across logical processors and the lbusy statistic. Recall in the last part
of this exercise, you started four yes processes. You also looked at the sar output,
which shows utilization by logical processors, and noticed that the workload was
distributed evenly across the logical processors. Recall also that your LPAR is now
running as an uncapped partition.
Check that the yes jobs are still running with the jobs command. If there are not
four, then start new yes processes so that there are exactly four. Then run the
sar -P ALL 2 2 command and notice the distribution of the workload across the
logical processors.

Here are example commands and their example outputs. There should be four yes
processes. The sar output should show even distribution of the workload.
# jobs
[4] + Running yes > /dev/null &
[2] - Running yes > /dev/null &
[3] Running yes > /dev/null &
[1] Running yes > /dev/null &

# sar -P ALL 2 2

AIX lpar1 1 7 00F6BCC94C00 11/13/12

System configuration: lcpu=4 ent=0.35 mode=Uncapped

15:33:31 cpu %usr %sys %wio %idle physc %entc
15:33:33 0 99 1 0 0 0.25 71.5
1 100 0 0 0 0.25 71.5
2 99 1 0 0 0.25 71.5
3 99 1 0 0 0.25 71.3
- 99 1 0 0 1.00 285.7
15:33:35 0 99 1 0 0 0.25 71.5
1 100 0 0 0 0.25 71.5
2 100 0 0 0 0.25 71.4
3 99 1 0 0 0.25 71.3
- 99 1 0 0 1.00 285.6

Average 0 99 1 0 0 0.25 71.5
1 100 0 0 0 0.25 71.5
2 99 1 0 0 0.25 71.4
3 99 1 0 0 0.25 71.3
- 99 1 0 0 1.00 285.6
__ 30. Run the mpstat -s command with an interval and count of two. Does the
distribution of the workload match the most recent sar output?

Here is the mpstat command and example output:


# mpstat -s 2 2

System configuration: lcpu=4 ent=0.3 mode=Uncapped

Proc0
99.95%
cpu0 cpu1 cpu2 cpu3
25.00% 25.01% 24.99% 24.95%
------------------------------------------------
Proc0
99.95%
cpu0 cpu1 cpu2 cpu3
25.00% 25.01% 24.99% 24.95%
You should find that the distribution is even, just like the sar output.
__ 31. Run the lparstat command with an interval and count of two. What do you notice
about the lbusy statistic? Does this value make sense?
Here is the lparstat command and example output, which shows a logical processor
utilization (lbusy) of 100%. This makes sense because you are using all four logical
processors.
# lparstat 2 2

System configuration: type=Shared mode=Uncapped smt=4 lcpu=4 mem=1024MB psize=4 ent=0.35

%user %sys %wait %idle physc %entc lbusy vcsw phint
----- ----- ------ ------ ----- ----- ------ ----- -----
99.3 0.7 0.0 0.0 1.00 285.8 100.0 400 7
99.1 0.9 0.0 0.0 1.00 285.6 100.0 400 6
__ 32. Now kill two of the yes jobs. You can use the kill %1 and kill %2 commands.
Run the jobs command to verify that two jobs remain.

Example commands and their outputs.


# kill %1
# kill %2
[1] Terminated yes > /dev/null &
# jobs
[4] + Running yes > /dev/null &
[2] - Terminated yes > /dev/null &
[3] - Running yes > /dev/null &
# jobs
[4] + Running yes > /dev/null &
[3] - Running yes > /dev/null &
The second jobs command shows the remaining two yes processes.
__ 33. Now look at the sar -P ALL and the mpstat -s command outputs once again
using an interval and count of two in both cases.

Example commands and their outputs, which show two busy logical processors and two
relatively idle logical processors:
# sar -P ALL 2 2

AIX lpar1 1 7 00F6BCC94C00 11/13/12

System configuration: lcpu=4 ent=0.35 mode=Uncapped

15:46:39 cpu %usr %sys %wio %idle physc %entc
0 99 1 0 0 0.44 125.9
1 0 0 0 100 0.06 17.2
2 99 1 0 0 0.44 125.0
3 0 4 0 96 0.06 17.6
- 87 1 0 12 1.00 285.8
15:46:43 0 99 1 0 0 0.44 125.8
1 0 0 0 100 0.06 17.2
2 99 1 0 0 0.44 125.0
3 0 4 0 96 0.06 17.5
- 87 1 0 12 1.00 285.6

Average 0 99 1 0 0 0.44 125.9
1 0 0 0 100 0.06 17.2
2 99 1 0 0 0.44 125.0
3 0 4 0 96 0.06 17.5
- 87 1 0 12 1.00 285.7
# mpstat -s 2 2

System configuration: lcpu=4 ent=0.3 mode=Uncapped

Proc0
100.02%
cpu0 cpu1 cpu2 cpu3
37.96% 5.90% 43.92% 12.24%
-----------------------------------------
Proc0
99.98%
cpu0 cpu1 cpu2 cpu3
44.04% 6.06% 43.74% 6.14%
__ 34. Once again run the lparstat command with an interval and count of two. What do
you notice about the lbusy statistic now? Does this value make sense?

Here is the lparstat command and example output, which shows a logical processor
utilization (lbusy) of about 50%. This makes sense because you are using only two of
the four logical processors.
# lparstat 2 2

System configuration: type=Shared mode=Uncapped smt=4 lcpu=4 mem=1024MB psize=4 ent=0.35

%user %sys %wait %idle physc %entc lbusy vcsw phint
----- ----- ------ ------ ----- ----- ------ ----- -----
87.2 0.8 0.0 12.0 1.00 286.1 50.2 359 0
87.0 1.1 0.0 11.9 1.00 285.6 50.5 365 4
__ 35. Another command you can use to monitor CPU utilization is vmstat. Run vmstat
with an interval and count of two. If the output wraps and makes analysis difficult,
widen your session window. Notice the two-letter column headings: physc is pc
and %entc is ec in the vmstat output.
Example vmstat output that only shows the cpu fields on the right of the output:
# vmstat 2 2

System configuration: lcpu=4 mem=1024MB ent=0.35

cpu
-----------------------
us sy id wa pc ec
87 1 12 0 1.00 285.5
93 1 6 0 1.00 286.0
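If widening the window is not practical, recent AIX levels also offer a wide-format
report (an assumption worth verifying against the vmstat man page on your level):
# vmstat -w 2 2
The wide format prints the same counters with extra spacing, which keeps the cpu
columns from wrapping.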
__ 36. Run the iostat command from your partition and check the CPU utilization
statistics. Use an interval and count of two. What columns display information about
the actual CPU consumption of the partition?

Example command and its output:


# iostat 2 2

System configuration: lcpu=4 drives=1 ent=0.35 paths=1 vdisks=1

tty: tin tout avg-cpu: % user % sys % idle % iowait physc % entc
0.0 31.2 88.0 0.9 11.2 0.0 1.0 285.6

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0

tty: tin tout avg-cpu: % user % sys % idle % iowait physc % entc
0.0 147.0 88.6 0.7 10.6 0.0 1.0 285.9

Disks: % tm_act Kbps tps Kb_read Kb_wrtn
hdisk0 0.0 0.0 0.0 0 0

Like lparstat, the physc column displays the number of physical processors consumed,
and the %entc column displays the percentage of entitlement consumed.
__ 37. Next, start the topas program on your partition. Press the L key. Look for the physc
and %entc values. Look also for the lbusy field.
The following example shows a partial screen. Notice the physc, %entc, and the
%lbusy fields. There is also a PHYSC field for each logical processor.

__ 38. In the topas window, press a capital C to see cross partition data. You might have to
wait a moment for information to appear. It should show all the active partitions in
your managed system. If it doesn't work on your system, just move on to the next
step. When you've finished with topas, press the q key to quit the program.
__ 39. Kill all yes processes that are still running. Use the kill %# command where # is the
number of the job. Use the jobs command to verify that there are no running jobs.
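A minimal sketch of this cleanup, assuming jobs 3 and 4 are the two left over from
the previous step (your job numbers may differ; list them first with jobs):
# jobs
[4] + Running yes > /dev/null &
[3] - Running yes > /dev/null &
# kill %3 %4
# jobs
When the final jobs command prints nothing, all the yes processes are gone.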

Part 4: Physical shared processor pool and micro-partitions


In the following steps, you are going to use CPU loads on the partitions to help show the
impact of some of the micro-partitioning options.
__ 40. Configure your assigned LPAR to show the Available Physical Processors (app)
statistic.
Using the HMC GUI, open the Partition Properties window for your assigned LPAR.
Access the Hardware tab, then the Processor tab. Click the Allow performance
information collection checkbox so that a check appears.
Example Properties sheet with the checkbox checked:

__ 41. In a session with your partition, change directory to /home/an31/ex2, and invoke the
following command to generate a workload on the CPU:
# ./spload -t 2

Important: Keep this executable running until instructed to stop it. You will have to
monitor the transaction rate output as you make changes to the system
configuration.
The values in the output are the transaction rates.
The following is an example of running spload with the transaction rate displayed
every two seconds.
# cd /home/an31/ex2

# ./spload -t 2
140
143
144
142
144
...
__ 42. Open another window to your partition. Log in as the root user and check the CPU
consumption using the lparstat command.
What is the physical processor consumed value on your partition?
What is the percent entitlement consumed value of the partition?
What is the app value? What does it mean?
# lparstat 2 2

System configuration: type=Shared mode=Uncapped smt=4 lcpu=4 mem=1024MB psize=4 ent=0.35

%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
99.6 0.4 0.0 0.0 1.00 285.8 100.0 2.99 400 8
99.7 0.3 0.0 0.0 1.00 285.8 100.0 2.99 400 9

The physc and %entc show a busy partition. The partition is using the maximum amount
of entitled capacity for a partition with one virtual processor.
The available processing units in the shared pool can be seen with the app statistic. In
the example above, the value is 2.99, representing the equivalent of approximately three
idle processors in the shared processor pool. You will likely see a different amount on
your training system.
__ 43. Using the HMC, dynamically change the capacity entitlement of your partition to 0.8
and mark it as capped. Leave the virtual processor value at 1.
Using the HMC GUI, select your partition, then select Dynamic Logical Partitioning >
Processor Resources > Add or Remove. Enter 0.8 in the Assigned Processing
units field, uncheck the uncapped checkbox, then click OK. The Add / Remove
Processor Resources window is shown below.

__ 44. Check the processing capacity of your partition using the lparstat command with
an interval and count of two. What is the physical processor capacity consumed
compared to the entitled capacity?
The processing capacity consumed is equal to the new entitled capacity of 0.8.
# lparstat 2 2
System configuration: type=Shared mode=Capped smt=4 lcpu=4 mem=1024MB
psize=4 ent=0.80

%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
99.0 0.8 0.0 0.1 0.80 99.9 100.0 3.18 402 4
99.6 0.4 0.0 0.0 0.80 100.0 100.0 3.19 400 8
__ 45. Now, make two dynamic changes to the LPAR. Use DLPAR to change your partition
back to an uncapped partition. Set the weight value to 128. Also, add one more
virtual processor for a total of two.
Using the HMC GUI, select your partition, then select Dynamic Logical Partitioning >
Processor Resources > Add or Remove. Check the uncapped box, and set the
weight value to 128. Then click the OK button to make the change.
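Both changes can also be made from the HMC command line. A possible sketch, again
assuming managed system george and partition lpar1:
chhwres -m george -r proc -o s -p lpar1 \
-a "sharing_mode=uncap,uncap_weight=128"
chhwres -m george -r proc -o a -p lpar1 --procs 1
The first command switches the sharing mode; the second adds one virtual processor
to a shared processor partition (-o a is the add operation).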

Note that the available system processing values on your training system will likely be
different from those shown on the example screen above.
__ 46. Using the lparstat command, discover the current value of the consumed
processing capacity (physc) on your partition.
The processing capacity consumed value is now 2.0, as shown in the output of the
lparstat command below. Note that your LPAR might show less than 2.0 physc. If
other students are adjusting their entitled capacity and running
processes that take up a lot of available processing resources in the pool, then your
LPAR may have less than 2.0 processing units for physc.
# lparstat 2 2

System configuration: type=Shared mode=Uncapped smt=4 lcpu=8 mem=1024MB psize=4 ent=0.80

%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
99.8 0.2 0.0 0.0 2.00 249.9 100.0 1.99 800 16
99.8 0.2 0.0 0.0 2.00 249.9 100.0 1.99 800 10
__ 47. Did the transaction rate provided by the spload executable increase? Why or why
not?
The transaction rate increased when the number of virtual processors was increased to
two. The physical capacity consumed by the logical partition can grow up to 2.0. The
physical CPU capacity consumed by your partition depends also on the activity of the
other partitions that consume extra CPU cycles from the shared processor pool. The
transaction rate can be fluctuating or can be lower than the value shown in the example.
# ./spload -t 2
284
289
285
286
287
286
285
285
__ 48. On your partition, use the mpstat command to check the number of context
switches occurring on the partition. Check the columns ilcs and vlcs (involuntary and
voluntary context switches respectively).
# mpstat -d 2

From the mpstat output, we can see that we have only involuntary logical context
switches (ilcs). This means the partition is not ceding any idle cycles.
# mpstat -d 2
System configuration: lcpu=8 ent=0.8 mode=Uncapped
cpu cs ics bound rq push S3pull S3grd S0rd S1rd S2rd S3rd S4rd S5rd ilcs vlcs S3hrd S4hrd S5hrd
0 104 53 0 0 0 0 0 100 0 0 0 0 0 150 0 100 0 0
1 118 57 0 0 0 0 0 100 0 0 0 0 0 150 0 100 0 0
2 1 1 0 0 0 0 0 100 0 0 0 0 0 150 0 100 0 0
3 0 0 0 0 0 0 0 100 0 0 0 0 0 150 0 100 0 0
4 0 0 0 0 0 0 0 100 0 0 0 0 0 150 0 100 0 0
5 0 0 0 0 0 0 0 100 0 0 0 0 0 150 0 100 0 0
6 150 75 0 0 0 0 0 100 0 0 0 0 0 150 0 100 0 0
7 0 0 0 0 0 0 0 100 0 0 0 0 0 150 0 100 0 0
ALL 373 186 0 0 0 0 0 100 0 0 0 0 0 1200 0 100 0 0
If simultaneous multithreading is enabled on a shared processor partition, the mpstat
values for ilcs and vlcs reported for a logical processor are actually the number of
context switches encountered by the virtual processor being used to run the logical
processor. Since each virtual processor has four logical processors, this means the four
logical processors from a single virtual processor will report the same ilcs and vlcs
values. The ALL line of the mpstat output shows the total number of ilcs and vlcs for
all of the virtual processors of the partition.
__ 49. Optional step: Use the nmon command to monitor CPU utilization. Run nmon then
press the h key to view the nmon shortcut keys. Try out the p, c, C, and lowercase L
(l) shortcut keys. See how many statistics you recognize. Use q to quit nmon when
you're finished.
__ 50. Kill the spload process.
Example commands and their outputs:
# ps -ef | grep spload
root 4849716 7077978 493 16:22:20 pts/0 48:24 ./spload -t 2
# kill 4849716
# ps -ef | grep spload
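Note that ps -ef | grep spload would normally also match the grep process itself. A
common shell trick is to bracket one character of the pattern:
# ps -ef | grep "[s]pload"
The pattern still matches spload but no longer matches the grep command line, so an
empty result reliably means the process is gone.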

End of exercise

Exercise review/wrap-up


This lab allowed the students to experiment with micro-partitioning configuration options
and to see the impact these have on the ability of a partition to consume CPU resources
from the shared processing pool.

Exercise 3. Configuring multiple shared processor pools and donating dedicated processors (with hints)

Estimated time
01:30

What this exercise is about


This exercise has three parts.
Part 1 provides the students with an opportunity to configure the
multiple shared processor pools feature of the POWER6 or above
systems. In this exercise, the students work with the pool ID# 0 (the
default pool) and a user-defined pool.
In part 2, the students must work in teams of two. They will assign their
logical partitions to the same shared processor pool. Then they will
start CPU loads and examine the shared processor pool behavior.
Part 3 provides the students with an opportunity to work with dedicated
logical partitions running in donating mode.

What you should be able to do


After completing this exercise, you should be able to:
Configure a user-defined shared processor pool and dynamically
assign a partition to it
Use commands to identify the LPARs assigned to the available
shared processor pools
Monitor the user-defined shared processor pool
For a dedicated partition, activate the donating mode using the
HMC or CLI
View the changes in various AIX processor analysis tools when
donating mode is enabled
View the changes in the shared processor pool statistics when a
dedicated partition is donating processor cycles

Requirements
One shared processor LPAR per student on a POWER7 system.

Instructor exercise overview


In the second and third parts of the exercise, students must work
together with another student. Assign these teams before the exercise
starts.

Exercise instructions with hints

Preface
Use your assigned LPAR for this exercise.
All hints are marked by a sign.

Used tools and lab environment


In this lab, you will be using one of your assigned logical partitions. The tool that will be
used during this exercise is an executable which generates a CPU load on the partitions. A
transaction rate is displayed every five seconds by default and can be changed using the
-t option. The help can be viewed using the -h option.

Command   Type           Role
spload    C executable   Program used to generate CPU load on partitions. A
                         transaction rate is displayed every five seconds.

Part 1: Configuring multiple shared processor pools

Introduction
The multiple shared processor pool is a POWER6 or above system firmware feature. The
system can have up to 63 additional shared processor pools configured. In this exercise,
you configure the use of one user-defined shared processor pool. The topas command is
used to verify the pool utilization.
__ 1. Before starting the exercise, verify your assigned partition has the following CPU
configuration. Use the HMC to look at the partition properties. Use DLPAR
operations to alter any of the dynamic attributes, or perform a full shut down of the
LPAR, alter the Normal profile, and activate the LPAR.
If you completed all of Exercise 2, your assigned LPAR may be currently set to 0.8
processing units, 2 virtual processors, and uncapped. Use DLPAR to alter these
three settings to the values shown below. The other two settings, shared mode and
assignment to the default shared processor pool, should already be configured.
Processing mode: Shared
Processing units: 0.35
Virtual processors: 1
Sharing mode: Capped
Shared processor pool: DefaultPool

Select your LPAR and run the Dynamic Logical Partitioning > Processors >
Add or Remove task. Make the necessary changes and click OK. This example
shows the proper values.
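You can also verify the settings from the HMC command line. A possible check (a
sketch; the field names follow the lshwres processor listing and are worth verifying
on your HMC level):
lshwres -m <managed_system> -r proc --level lpar \
--filter "lpar_names=<your_lpar>" \
-F curr_proc_mode,curr_proc_units,curr_procs,curr_sharing_mode
For the configuration above, this should print shared,0.35,1,cap.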

__ 2. Access your primary managed system's HMC (GUI or command line) and verify there
are at least 0.5 processors available. You can view the value in the server table's
Available Processing Units column or open the managed system properties and go
to the Processors tab. A sample server table is shown below, which shows 10.6
processing units available. Your system may have a different number.
Record the number of available processing units on your system: _____________
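The same value can be read from the HMC command line (a sketch; verify the field
name on your HMC level):
lshwres -m <managed_system> -r proc --level sys -F curr_avail_sys_proc_units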

__ 3. At this time, your partition is assigned to the default shared processor pool named
DefaultPool with an ID of 0. In the next few steps you'll configure a custom pool and
assign your partition to this pool.
Navigate to the Shared Processor Pool Management screen by selecting your
server and running the Configuration > Virtual Resources > Shared Processor
Pool Management task.

__ 4. In the panel that opens, configure the Reserved Processing Units and Maximum
Processing Units for SharedPool0x, where x is your LPAR number. For example,
an LPAR named sys036_lpar1 will select pool 1; an LPAR named sys036_lpar2 will
select pool 2, and so on. Select the Pool Name with a left click on the name, and
then change the value for Reserved Processing Units to 0.5 and the Maximum
Processing Units to 2. Click OK to close the Modify Pool Attributes panel, then close
the Shared Processor Pool panel.
An example Shared Processor Pool panel is shown below. In this example, the
first user-defined pool is being modified.

__ 5. On the HMC, look at the server table once again. Notice the change in the
Available Processing Units column.
Record the current amount of available processing units: __________________
The number should be reduced by your Reserved Processing Units value (0.5). Note
that multiple students may all be trying this at the same time, so the value may
change by more than the reserved processing units value that you set. If no one else
is using your server, then the adjustment should be the same as the reserved
processing units value.

__ 6. Now assign your shared processor partition to the shared processor pool that you
configured. The pool assignment can be done in the LPAR profile, or it can be done
dynamically. You will configure it dynamically.
Once again run the Configuration > Virtual Resources > Shared Processor Pool
Management task.
In the Shared Processor Pool window, select the Partitions tab. Then left-click
your assigned partition's name. Use the pull-down for the Pool Name(ID) field and
look for your configured shared processor pool name. The default pool and any
other pool with the maximum processor unit attribute set to a whole number will
display in the pull-down list. Select OK to exit.

__ 7. Using an SSH shell, log in to the HMC and run the following command to display the
shared processor pool attributes and the LPAR pool assignments:
lshwres -r procpool -m <managed_system>
Here is an example of a lshwres command and its output that shows that the
sys504_lpar3 partition is assigned to the SharedPool03 pool. The rest of the
partitions are assigned to the DefaultPool.

hscroot@hmc142:~> lshwres -r procpool -m sys504


name=DefaultPool,shared_proc_pool_id=0,"lpar_names=sys504_amsvios,sys
504_vios1,sys504_vios2,sys504_vios3,sys504_vios4,sys504_sspvios,sys50
4_lpar1,sys504_lpar2,sys504_lpar4","lpar_ids=9,1,2,3,4,10,5,6,8"
name=SharedPool03,shared_proc_pool_id=1,max_pool_proc_units=2.0,curr_
reserved_pool_proc_units=0.5,pend_reserved_pool_proc_units=0.5,lpar_n
ames=sys504_lpar3,lpar_ids=7
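To trim the output to just the pool names and their member partitions, the standard
-F option of lshwres can be used (a sketch):
lshwres -r procpool -m <managed_system> -F name,lpar_names
Each output line then holds one pool name followed by the comma-separated list of
the LPARs assigned to it.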

__ 8. Use a Telnet or SSH session to connect to your assigned partition. Use the following
lparstat command to verify your partition is assigned to your shared processor
pool: lparstat -i | grep "Shared Pool"
Here is example output showing the shared pool ID 3.
Shared Pool ID : 3
__ 9. In the session to your LPAR, start lparstat 2 5. Does the app column appear?
This was configured in Exercise 2. If there is no app column, enable it by opening the
properties of your partition on the HMC. Go to the Hardware tab, then the
Processor tab, and click the Allow performance information collection checkbox
as shown below.

If you just configured the app column, run the lparstat 2 5 command again.
Can you explain the app column value? Record the value here: ______________
The app column value reflects the amount of idle CPU cycles available in your
shared processor pool. As your LPAR is not CPU constrained, the app value
should be close to the maximum capacity value of the user-defined shared pool
(2.00).
__ 10. Start topas -C. Wait for topas to initialize. You are looking at host metrics. Press
the p key to go to the shared processor pool metrics.
__ a. In the pool metrics, you should see your shared pool ID. Observe how the psize
is the same value that you set for the maximum processor units for the pool. This
value is also seen in the maxc column (multiplied by 100).
__ b. Observe the entc column value for your shared pool ID. Can you explain the
value?
In the following example, the entc column shows a value of 85.0. This
corresponds to the sum of the logical partition capacity entitlement (Desired
CE=0.35) plus the Reserved Capacity value of the shared processor pool
(Reserved value=0.5) multiplied by 100. The reserved value of the shared
processor pool is part of the entitlement for that shared pool.

__ 11. To focus on your pool ID, use the up and down arrow keys to move the cursor to
your pool ID number in the pool column at the left side. When the highlighting moves
to that number, press the f key to toggle the focus. This changes what is displayed
in the lower portion of the output. Note that when PhysB (busy physical processors)
appears in topas, it is the same as physc or pc (physical processors consumed) in
other places and other tools.
An example topas screen is shown below. To get there, run topas -C, then
press the p key to get to the pool statistics. Use the down arrow to get to your
shared processor pool ID, then press the f key.

__ 12. Observe the CPU resource consumption by watching the PhysB and %EntC for your
partition. Keep this topas running for the next few steps.

__ 13. Dynamically change your logical partition to be an Uncapped partition. Set the
weight to 128.
Using the HMC GUI, select your LPAR, and then select Dynamic Logical
Partitioning > Processor > Add or Remove. Check the uncapped box, and
use the weight value of 128.
__ 14. Start another Telnet session to your partition and change directory to
/home/an31/ex3, and invoke the following command to generate a workload on the
CPU:
# ./spload -t 2
__ 15. Observe the PhysB and %EntC for your partition in the topas screen. Can you explain
the PhysB value?
The PhysB value should be 1.00 and the %EntC value should be 285. The
partition consumes 285% of its entitlement (0.35), which is one CPU (1.0). The
CPU consumption is limited by the number of virtual processors in the LPAR.
__ 16. Dynamically change the number of virtual processors for your partition from 1 to 3.
On the HMC, select your partition and run the Dynamic Logical Partitioning >
Processor > Add or Remove task. Change the assigned virtual processors to
3.
__ 17. Look at the topas screen and notice the PhysB and %EntC values for your LPAR.
Can you explain the PhysB value?
The PhysB value is about 2.00. We could have expected the logical partition to
consume up to three physical processors as we have three VPs configured. But
the maximum capacity of the shared processor pool is 2 and the LPAR cannot
consume more than this maximum capacity. The %EntC is about 571.2%, which
also means that it is using the equivalent of 2.0 processing units.
__ 18. Dynamically reconfigure your LPAR to Capped mode and set the number of desired
virtual processors to 1. Then stop spload and topas.
On the HMC, select your partition and run the Dynamic Logical Partitioning >
Processor > Add or Remove task. Change the assigned virtual processors to
1 and uncheck the Uncapped checkbox.
In the session where spload is running, press Ctrl <C>. In the session where
topas is running, type a q to quit.

Part 2: Monitoring user-defined shared processor pool


In part two, you must work with a teammate. Both of your team's logical partitions will be
assigned to the same shared processor pool.
__ 19. Before starting, your team's partitions must be Capped and have one desired virtual
processor configured. Check both LPARs' configurations.

__ 20. Assign both of your team's partitions to the same shared processor pool. If your
team is using the lpar1 and lpar2 partitions, assign them both to SharedPool1. If
your team is using the lpar3 and lpar4 partitions, assign them both to SharedPool3.
In this example, lpar3 and lpar4 are assigned to SharedPool3:

__ 21. Start a CPU load on one of your teams partitions. Change the directory to
/home/an31/ex3, and invoke the following command to generate the workload:
# ./spload -t 2

Important

Keep this executable running until instructed to stop it. You will have to monitor the
transaction rate output given as you make changes to the system configuration.

The following is an example of running spload with the transaction rate displayed
every two seconds:
# cd /home/an31/ex3
# ./spload -t 2
53
54
53
54
54
__ 22. Open another window on this same partition, log in as the root user, and check the
CPU consumption using the lparstat command with an interval of 2 seconds.

What is the value of the available processing capacity (app) in this virtual shared
pool?
What can you say about processing capacity of your second partition using the
same shared processor pool?
The processing capacity available in the virtual shared pool can be seen using
the app column value of the lparstat command output. In this case, the value
is approximately 1.65. Recall that the LPARs are configured as capped with 0.35
processing units.
Since the user-defined shared pool has 1.65 physical processors available, and
one of your LPARs is using 0.35 processing units, the other LPAR in your shared
processor pool is not using any processor capacity at all; it cedes all of its
processor cycles back to the physical shared processor pool.
__ 23. Use DLPAR to change the partition running the CPU load to be an Uncapped
partition. Set the Weight value to 128. Also set the number of virtual processors to 2.
Using the HMC GUI, select your LPAR, and then select Dynamic Logical
Partitioning > Processor > Add or Remove. Check the Uncapped box, and
use the Weight value of 128. Type 2 in the Assigned virtual processors input
box, then click the OK button to make the change.
__ 24. Go to the session for the LPAR that is running the lparstat 2 command. Notice
the physc value and the app value. The app statistics may bounce around a bit.
What do you notice?
With two virtual processors configured, the logical partition is able to consume
nearly all the processor cycles (physc=1.99) from the shared processor pool in
this example. The amount of idle CPU cycles available in the shared processor
pool is mostly zero. Your app numbers may bounce around a bit, but you should
notice at least half the time they are close to zero.
%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
0.2 2.4 0.0 97.5 0.02 4.9 0.2 1.97 718 0
99.6 0.4 0.0 0.0 1.99 568.4 100.0 0.03 1868 4
99.7 0.3 0.0 0.0 1.99 568.8 100.0 0.04 1858 0
99.7 0.3 0.0 0.0 1.99 568.8 100.0 0.02 1846 4
99.7 0.3 0.0 0.0 1.99 568.6 100.0 2.00 1844 0
99.7 0.3 0.0 0.0 1.99 568.6 100.0 0.03 1872 0

__ 25. While looking at the physc value, start a CPU load on your second partition. Change
the directory to /home/an31/ex3, start the spload executable, and keep it
running until instructed to stop it.
# ./spload -t 2

__ 26. Using the lparstat command with an interval of 2 on the first (uncapped) partition,
do you notice anything different? What happened to the processing capacity
(physc)? Run lparstat in the second (capped) partition and notice the physc
statistic.
The first LPAR is uncapped and the number of virtual processors is 2, so the
physical capacity can grow up to 2.0 physical CPUs. But it has to contend with
the capped LPAR, which is now running a CPU load. The capped LPAR has a
capacity entitlement of 0.35 and cannot grow beyond this value. The
uncapped LPAR manages to consume 1.65, and the capped one is at 0.35.
On the uncapped logical partition:
# lparstat 2
System configuration: type=Shared mode=Uncapped smt=4 lcpu=8
mem=1024MB psize=2 ent=0.35

%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
99.7 0.3 0.0 0.0 1.65 471.1 100.0 0.10 2400 4
99.7 0.3 0.0 0.0 1.65 471.1 100.0 2.00 2400 0
99.7 0.3 0.0 0.0 1.65 471.1 100.0 0.09 2400 0
99.6 0.4 0.0 0.0 1.65 471.1 100.0 2.00 2400 2
On the capped logical partition:
# lparstat 2
System configuration: type=Shared mode=Capped smt=4 lcpu=4 mem=1024MB
psize=2 ent=0.35

%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
99.1 0.8 0.0 0.1 0.35 99.9 100.0 2.00 400 0
99.0 0.8 0.0 0.1 0.35 99.9 100.0 0.07 400 0
99.1 0.8 0.0 0.1 0.35 99.9 100.0 2.00 400 0
99.1 0.8 0.0 0.1 0.35 99.9 100.0 0.08 400 1
__ 27. Now change the configuration of the capped partition to a sharing mode of
Uncapped with a weight of 128, and set the assigned virtual processors to 2. This
configuration is now the same as the other partition.
Using the HMC GUI, select your LPAR, and then select Dynamic Logical
Partitioning > Processor > Add or Remove. Check the Uncapped box, and
use the default weight value of 128. Enter 2 in the Assigned virtual processors
input box, then click the OK button to make the change.
__ 28. What do you notice about the transaction rate reported by spload and the processor
consumed on both logical partitions? Why is this?
The transaction rate on both partitions is now about the same. This is because
the capacity entitlement and the uncapped weight value of both partitions are the
same. This means both the guaranteed and excess shared pool cycles are
allocated equally to both partitions.
If you encounter strange behavior, such as a difference in the CPU physical
capacity consumed between LPARs, you should deactivate the virtual processor
folding feature on your logical partitions by executing
schedo -o vpm_xvcpus=-1. At the time of writing, in some situations the virtual
processor folding feature can prevent dispatching the logical threads to all the
virtual processors in the partition.
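A short sketch of working with this tunable; note that a schedo change made with
-o is dynamic and does not persist across a reboot unless -p is added:
# schedo -a | grep vpm_xvcpus (display the current setting)
# schedo -o vpm_xvcpus=-1 (disable virtual processor folding)
# schedo -d vpm_xvcpus (restore the default when finished)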
On the first LPAR:
# lparstat 2
System configuration: type=Shared mode=Uncapped smt=4 lcpu=8
mem=1024MB psize=2 ent=0.35

%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
99.4 0.6 0.0 0.0 1.00 285.3 100.0 0.12 2406 0
99.4 0.6 0.0 0.0 1.00 285.3 100.0 2.00 2400 1
99.4 0.6 0.0 0.0 1.00 285.3 100.0 0.11 2400 7
99.4 0.6 0.0 0.0 1.00 285.4 100.0 2.00 2400 1
99.6 0.4 0.0 0.0 1.00 285.2 100.0 0.12 2400 5
On the second LPAR:
# lparstat 2
System configuration: type=Shared mode=Uncapped smt=4 lcpu=8
mem=1024MB psize=2 ent=0.35

%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
99.7 0.3 0.0 0.0 1.00 285.4 100.0 0.00 1600 0
99.7 0.3 0.0 0.0 1.00 285.2 100.0 0.01 1604 0
99.7 0.3 0.0 0.0 1.00 285.4 100.0 2.00 1600 2
99.7 0.3 0.0 0.0 1.00 285.4 100.0 0.00 1600 0
99.6 0.4 0.0 0.0 1.00 285.4 100.0 2.00 1600 0
__ 29. Now use the HMC GUI to change the uncapped weight of one of your logical
partitions to 254. By setting a higher weight value, we want the partition to be able to
get more of the available excess shared pool cycles than the second partition.
What do you notice? Is the partition with the higher weight value getting more CPU
cycles than the other one?
Using the HMC GUI, select your LPAR, and then select Dynamic Logical
Partitioning > Processor > Add or Remove. Change the weight value from
128 to 254, then click OK to make the change.
For our example system configuration used in the hint examples so far, there are
16 physical CPUs and the two partitions are using a custom shared processor
pool. You will not see any difference in the physc on both logical partitions
because the partitions do not compete to get extra CPU cycles. Uncapped
weights have no influence because the partitions never reach the point where
they are in competition for the cycles. This is because there are idle cycles still
available in the overall physical shared pool (rather than the user-defined shared
pool). Uncapped capacity is distributed among the uncapped shared processor
partitions that are configured in the entire server, not just on the uncapped
partitions within a specific shared processor pool. Therefore, with our
configuration where there are still idle processing units in the physical shared
processor pool, there is no CPU contention and the uncapped weights are not
used.
__ 30. Kill the spload processes in both partitions with Ctrl <C>.
__ 31. Reassign both partitions to the default shared processor pool ID 0. Unconfigure the
two custom pools that you and your partner configured by setting all values back to
zero.
Select the server and run the Configuration > Virtual Resources > Shared
Processor Pool Management task.
Click the Partitions tab. Click each partition in turn and reassign to the
DefaultPool.
Go to the Pools tab.
For each custom pool, click on the name and enter zeros in the Reserved
processing units and the Maximum processing units fields as shown below.

Close the pool windows.

Part 3: Dedicated partitions running in donating mode


In this part of the exercise, you'll explore the donating mode feature for dedicated
processor partitions.

__ 32. Access the HMC of your managed system using your browser.
__ 33. Select your managed system name and run the Properties task. Go to the
Capabilities tab and verify the managed system has the Active Partition
Processor Sharing Capable capability set to True. You may need to scroll toward
the bottom of the list of capabilities.
Here is an example of the Capabilities tab:

__ 34. Shut down your assigned partition.


If you're already logged in to a session, issue the shutdown -F command;
otherwise use the HMC Operations > Shut Down task.
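A possible HMC command-line equivalent for the shutdown (a sketch, assuming
managed system george and partition lpar1; substitute your own names):
chsysstate -m george -r lpar -o shutdown --immed -n lpar1
You can then poll for the Not Activated state with:
lssyscfg -r lpar -m george --filter lpar_names=lpar1 -F state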
__ 35. In this step, you will copy the Normal partition profile for your assigned partition to a
new profile named Dedicated. Then you will modify the Dedicated profile to
configure your partition to use dedicated processors.
__ a. Copy the Normal profile to a new profile named Dedicated.
Select your assigned partition.
Run the Configuration > Manage Profiles task. In the pop-up window, check
the profile name (Normal), choose Copy from the Actions menu. Name the new
profile Dedicated.
__ b. Edit the Dedicated profile for your partition. Change the profile to use the
Dedicated Processing Mode. Set the Minimum, Desired, and Maximum
processors values to 1. Verify that the option Allow when partition is active
under Processor Sharing, is not checked. Later, you will execute the lparstat
command, which will reflect this mode as Capped.
You should still have the Managed Profiles window open. Select the Dedicated
profile and run Edit from the Actions menu.
Click the Processor tab in the Logical Partition Profile Properties window that
pops up.
In the Processing mode box, select the Dedicated radio button.
Set the Minimum, Desired and Maximum values to 1.


Make sure both of the Processor Sharing options are not checked. Click OK.
Here is the example screen. Click OK when you have finished making the edits.

__ 36. Activate your partition using the Dedicated profile. It should have successfully shut
down by now.
If you still have the Managed Profiles window open, you can use the Actions >
Activate menu option. Otherwise, select the LPAR and run the Operations >
Activate > Profile task. Select the Dedicated profile, then click OK.
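The activation can also be done from the HMC command line (a sketch with the same
assumed names as before):
chsysstate -m george -r lpar -o on -n lpar1 -f Dedicated
The -f option names the profile to use for the activation.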
__ 37. Open a virtual terminal or a Telnet session to your partition. Use the lparstat
command to verify the partition type is Dedicated and the mode is Capped. (That is,
the option to share the dedicated processors when active is not enabled.)

Example command and expected output:

# lparstat

System configuration: type=Dedicated mode=Capped smt=4 lcpu=4 mem=1024MB

%user %sys %wait %idle
----- ----- ------ ------
2.2 1.5 0.7 95.6
__ 38. Start the command lparstat -d 5 on the Dedicated partition. Notice the mode
value at the top of the output. Then notice there are six columns of output. The
columns %istol and %bstol are the percentages of idle and busy times stolen by the
hypervisor for administrative tasks. Let the lparstat command continue to run.
You should see an output like the following that shows the mode is Capped:
# lparstat -d 5

System configuration: type=Dedicated mode=Capped smt=4 lcpu=4 mem=1024MB

%user %sys %wait %idle %istol %bstol
----- ----- ------ ------ ------ ------
0.1 0.6 0.0 99.3 0.0 0.0
0.0 0.4 0.0 99.6 0.0 0.0
0.1 0.6 0.0 99.2 0.0 0.0
0.0 0.4 0.0 99.6 0.0 0.0
0.1 0.6 0.0 99.3 0.0 0.0
__ 39. Open a new SSH connection to the HMC using your PuTTY application. Log in as
hscroot.
__ 40. In the HMC session, use the chhwres command to change the sharing mode value
of the dedicated processor partition from share_idle_procs to
share_idle_procs_active. See the syntax information below. Then, immediately
return to the LPAR session and observe how the mode has changed from Capped to
Donating.
Syntax of command:
chhwres -m <managed system name> -r proc -o s \
-a sharing_mode=share_idle_procs_active -p <your partition name>
Example command where the managed system name is george and the name
of the LPAR is lpar1:
chhwres -m george -r proc -o s -a \
"sharing_mode=share_idle_procs_active" -p lpar1

To check the current setting, you can use the following command (but do this after
checking the lparstat command output):
lshwres -m george -r proc --level lpar -F curr_sharing_mode \
--filter lpar_names=lpar1
share_idle_procs_active
See the changed message in the LPAR window. Notice there are more columns
now in the lparstat output. Also notice the mode is now Donating.
System configuration changed. The current iteration values may be
inaccurate.
0.1 0.6 0.0 99.3 0.95 5.2 0.0 0.0 0.0

System configuration: type=Dedicated mode=Donating smt=4 lcpu=4 mem=1024MB

%user %sys %wait %idle physc %idon %bdon %istol %bstol
----- ----- ------ ------ ----- ------ ------ ------ ------
0.0 0.5 0.0 99.5 0.01 98.8 0.0 0.0 0.0
0.1 0.6 0.0 99.3 0.21 79.3 0.0 0.0 0.0
0.0 0.4 0.0 99.5 0.21 79.1 0.0 0.0 0.0
Note that the other way to enable donating mode is to use the HMC GUI and
check the Allow when partition is active checkbox in the Partition Properties
as shown below:

__ 41. In your dedicated partition login session, you should have observed the following:
The System configuration changed message.
The mode is now Donating.
There are more columns of output: physc (physical capacity consumed), plus
%idon and %bdon which are the donated cycles to the shared processor pool.
You should see high values for %idle cycles and idle donated (%idon).

You should see output similar to the following:


System configuration changed. The current iteration values may be
inaccurate.
0.1 0.6 0.0 99.3 0.95 5.2 0.0 0.0 0.0

System configuration: type=Dedicated mode=Donating smt=4 lcpu=4 mem=1024MB

%user %sys %wait %idle physc %idon %bdon %istol %bstol
----- ----- ------ ------ ----- ------ ------ ------ ------
0.0 0.5 0.0 99.5 0.01 98.8 0.0 0.0 0.0
0.1 0.6 0.0 99.3 0.21 79.3 0.0 0.0 0.0
0.0 0.4 0.0 99.5 0.21 79.1 0.0 0.0 0.0
__ 42. Stop the lparstat -d command using Ctrl <C>.
__ 43. In your dedicated partition session, create a load in the background by executing the
yes > /dev/null & command. Then execute the lparstat -d 2 command and
notice that %idle has been reduced significantly and %idon is near zero (not
donating).
Example commands:
# yes > /dev/null &
# lparstat -d 2
End lparstat after seeing the desired output with <Ctrl> C. Example
output:
System configuration: type=Dedicated mode=Donating smt=4 lcpu=4 mem=1024MB

%user %sys %wait %idle physc %idon %bdon %istol %bstol
----- ----- ------ ------ ----- ------ ------ ------ ------
61.8 1.3 0.0 36.9 1.00 0.0 0.0 0.0 0.0
61.7 1.2 0.0 37.1 1.00 0.0 0.0 0.0 0.0
61.8 1.2 0.1 36.9 1.00 0.0 0.0 0.0 0.0
61.6 1.7 0.0 36.8 1.00 0.0 0.0 0.0 0.0
__ 44. Use the mpstat command with the -h flag to view the percentage of donated
cycles per logical processor. You should notice the donated value is near or equal to
zero at this time.

Example command and its expected output:


# mpstat -h 5

System configuration: lcpu=4 mode=Donating

cpu pc ilcs vlcs idon bdon istol bstol
0 0.35 0 129 0.0 0.0 0.0 0.0
1 0.41 0 45 0.0 0.0 0.0 0.0
2 0.12 0 11 0.0 0.0 0.0 0.0
3 0.13 0 19 0.0 0.0 0.0 0.0
ALL 1.00 0 204 0.0 0.0 0.0 0.0
Use <Ctrl> C to stop the mpstat output.
__ 45. Kill the yes job.
Example commands:
# jobs
[1] + Running yes > /dev/null &
# kill %1
__ 46. For the next set of steps, you must work as a team of two students to perform the
actions. Keep one of your team's partitions as dedicated and activate the second one
as shared.
It is likely that you and your partner have both of your LPARs configured as
dedicated.
- Decide which one will be reconfigured with shared processors.
- Shut down that partition.
- Check that the partition's Normal profile is set to the shared processor mode
with one virtual processor and is set to Uncapped. The weight value should
be 128.
- Activate it with its Normal profile. Continue to the next step once the LPAR
has fully booted.
__ 47. Log in to the shared processor partition and run the lparstat command to
check the mode. It should say Uncapped.

Command and expected output showing the LPAR is uncapped:


# lparstat

System configuration: type=Shared mode=Uncapped smt=4 lcpu=4 mem=1024MB psize=4 ent=0.35

%user %sys %wait %idle physc %entc lbusy vcsw phint
----- ----- ------ ------ ----- ----- ------ ----- -----
0.0 0.0 0.6 99.4 0.00 0.0 6.7 348870 2257
__ 48. In your HMC browser session, click the name of the shared LPAR partition to open
its Properties Panel. Click the Hardware tab. On the Processors tab check the
checkbox for Allow performance information collection.
Example panel with the checkbox checked:

__ 49. In your shared processor partition, run the lparstat 4 command. Check the app
column to see the amount of idle cycles in the shared processor pool.
Examine the app column in the lparstat output to see the amount of idle CPU
cycles in the shared processor pool. Your training server may have a different
amount than the example below, which shows about 3.7 processing units. Your
results will depend on the size of your system and the workloads in the other
LPARs:
# lparstat 4

System configuration: type=Shared mode=Uncapped smt=4 lcpu=4 mem=1024MB psize=4 ent=0.35

%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
0.4 2.0 0.0 97.6 0.01 4.2 0.9 3.70 247 0
0.0 1.4 0.0 98.6 0.01 2.7 0.1 3.74 230 0
0.0 1.2 0.0 98.7 0.01 2.3 0.0 3.73 176 0
__ 50. In your shared processor partition, start a CPU load in the background using the
following command:
# yes > /dev/null &
Remember that only uncapped partitions can use excess cycles in the shared
processor pool.
__ 51. Start the lparstat 4 command again and notice the drop in the app values. There
are fewer idle cycles now. Let the command continue to run. Since the LPAR is
configured with one virtual processor, you should see the app value drop by
approximately 1.0 processing units, although your results may vary if another lab
team is performing this same procedure on your server. The yes command can
keep one processor 100% busy.
The lparstat command and its expected output that shows the app value is
about 1.0 processing units less than it was before you started the yes program.
Note that if your results are not the same, it could be that the other group of
students is affecting the results. You should see the app value be reduced by at
least 1.0 processing units.
# lparstat 4

System configuration: type=Shared mode=Uncapped smt=4 lcpu=4 mem=1024MB psize=4 ent=0.35

%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
61.6 1.4 0.0 36.9 1.00 285.6 24.7 2.73 298 1
61.6 1.3 0.0 37.1 1.00 285.7 25.7 2.73 309 2
61.5 1.3 0.0 37.2 1.00 285.4 24.4 2.98 290 1
__ 52. While the lparstat 4 command is running, go to the dedicated processor
partition and start a load in the background using the command:
# yes > /dev/null &

__ 53. On the shared processor LPAR, what do you observe about the app value?
The app value is reduced as donated cycles are taken back by the dedicated
processor LPAR.
Example lparstat command and its expected output:
# lparstat 4

System configuration: type=Shared mode=Uncapped smt=4 lcpu=4 mem=1024MB psize=4 ent=0.35

%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
61.8 1.1 0.0 37.1 1.00 285.9 25.4 1.99 300 2
61.6 1.3 0.0 37.1 1.00 285.3 24.7 1.99 308 1
61.8 1.1 0.0 37.1 1.00 285.9 25.4 1.99 303 2
You may see different results based on the activity of the other lab team;
however, you should see an app value that is even lower than before.
__ 54. On the dedicated processor LPAR, run lparstat -d 2. Notice the idle and donate
values are very low (perhaps even zero).
Example command and its expected output:
# lparstat -d 2

System configuration: type=Dedicated mode=Donating smt=4 lcpu=4 mem=1024MB

%user %sys %wait %idle physc %idon %bdon %istol %bstol
----- ----- ------ ------ ----- ------ ------ ------ ------
61.8 1.1 0.0 37.1 1.00 0.0 0.0 0.0 0.0
62.0 1.4 0.0 36.6 1.00 0.0 0.0 0.0 0.0
61.6 1.1 0.0 37.3 1.00 0.0 0.0 0.0 0.0
62.0 1.1 0.0 36.9 1.00 0.0 0.0 0.0 0.0
61.7 1.2 0.0 37.1 1.00 0.0 0.0 0.0 0.0
The physc value is 1.0 which is the entire allocated amount of physical
processors. This LPAR is using all of its processing resources (physc) and is not
donating any (%idon).
__ 55. Kill the yes job on the dedicated partition.
# jobs
[1] + Running yes > /dev/null &
# kill %1
__ 56. Run the lparstat 2 command in the shared processor partition (if it's not already
running) and notice the increase in the app value.

__ 57. Stop the lparstat command and kill the yes job on the shared processor partition.
# jobs
[1] + Running yes > /dev/null &
# kill %1
__ 58. Shut down the dedicated processor partition and activate it with its Normal profile.
You do not need to wait until it is fully booted.
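If you prefer the HMC command line to the GUI for this step, a sketch using the
chsysstate command follows; <managed_system> and <lpar_name> are placeholders for
your own environment:
$ chsysstate -m <managed_system> -r lpar -o shutdown -n <lpar_name> --immed
$ chsysstate -m <managed_system> -r lpar -o on -n <lpar_name> -f Normal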

End of exercise
Exercise review/wrap-up
This exercise showed how multiple shared processor pools can be dynamically
configured. Students monitored the user-defined shared pool. The last part of the exercise
showed how to configure and monitor dedicated partitions running in donating mode.
Exercise 4. Active Memory Sharing
(with hints)

Estimated time
02:00

What this exercise is about


This lab covers configuring shared memory partitions and creating a
paging space device. Students will use performance analysis
commands to see the memory configuration and statistics specific to
shared memory partitions.

What you should be able to do


After completing this exercise, you should be able to:
Configure a shared memory pool with paging space devices
Configure a shared memory partition
Examine the paging virtual I/O server's virtual adapters involved in
the shared memory mechanisms
Monitor the logical memory statistics using the lparstat,
vmstat, and topas commands
View shared memory pool related statistics using the HMC
utilization events

Introduction
In this exercise, you will first create a logical volume or format an hdisk
device to use as a paging device for your assigned LPAR. Then, you will
configure your assigned LPAR profile to use the shared memory pool.
Each lab team will start a memory load on one of your assigned
LPARs and monitor the logical memory over-commit. AMS statistics
will be monitored using lparstat, vmstat, and topas.
Students will work alone to start a memory load to over-commit the
physical memory in the shared pool. The AMS behavior in this
resource demanding environment will be observed using lparstat,
vmstat, and topas.
Requirements
This workbook
A computer with a web browser and a network connection to an
HMC running version 7.7.4 or above to support a POWER7
processor-based system
Utility for running Telnet or SSH

Instructor exercise overview


This exercise has the students alter their partition configurations to test
shared memory partition functionality and observe memory usage
activity with standard operating system commands. Step 4f will likely
need your help with coordination. Only one student can add their
paging device to the Shared Memory Pool at a time. At Step 11 in Part
3, students need to work in pairs. The AMS analysis tools assume that
all four of the student AIX LPARs will be in use running a load.
If there will be fewer than four students working on a system, be sure
to reduce the shared memory pool. For example, if two LPARs will be
used, make the memory pool 2 GB instead of the default 4 GB. This
will ensure that the students will see some paging activity. In addition,
you'll need to monitor team progress. In the example analysis
command outputs in the hints, the materials assume that students will
start the amsload programs at about the same time. If one lab team
takes a break, you may wish to intervene and either change the size of
the shared memory pool, or start an amsload program in each LPAR
until the other team gets back.
Exercise instructions with hints

Preface
The procedures in this exercise depend on the availability of specific equipment. You
will need a computer system connected to the Internet, a web browser, a Telnet
program, and a utility for running SSH. You will also need a managed system capable of
running shared processor partitions. All lab systems must be accessible to each other
on a network.
All hints are marked by a >> sign.
The hints in this exercise reflect results obtained on a System p750 with a 4 GB shared
memory pool and a partition running AIX V7.1. Your system's specific results might differ,
but the overall conclusions should be the same.

Used tools and lab environment


In this lab, you will be using your assigned logical partition. The tool used during this
exercise is an executable that generates a memory load on the partitions. Every second,
the tool's output reports the memory allocated versus the total memory to be allocated.
The maximum memory to allocate is specified using the -M option, and the time to reach
the allocation is specified using the -d option. The syntax help can be viewed using the -h
option.

Command    Type           Role
amsload    C executable   Program used to generate memory load on partitions. The
                          memory to allocate and the rampup period must be specified.
                          The memory allocation is displayed every second by default.
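For reference, a typical invocation based on the options described above ramps up to an
800 MB allocation over 60 seconds (run the tool with -h to display the full syntax):
# ./amsload -M 800m -d 60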
Part 1: View shared memory pool configuration and configure paging devices
You will examine the shared memory pool configuration that is already created on your
managed system. The configuration does not include paging devices. You will assign a
physical volume to use as a paging space device for your partition.

__ 1. Use your web browser to connect to your assigned HMC. Log in with your HMC ID.
Verify your managed system is AMS capable by looking in the Server properties.
Go to the Systems Management application on the HMC. Expand the Servers
information, then select your managed system.
Choose Properties from the Tasks menu or from the tasks pad. The Active Memory
Sharing Capable capability value should be True. This value is set to True only if the
PowerVM Enterprise Edition is activated. The following example has been reduced to
showing only one capability for clarity:

__ 2. Check the shared memory pool configuration from the HMC GUI. This can be done
by accessing the Shared Memory Pool Management wizard. What is the size of
the current shared memory pool? ______________ GB
Go to the Systems Management application on the HMC. Expand the Servers
information, then select your managed system.
Select Configuration > Virtual Resources > Shared Memory Pool Management
from the Tasks menu or from the tasks pad.
In the pop-up window, verify the Pool size is set to 4 GB.
Note that on your system the available system memory and available pool memory may
be different from the example above.
__ 3. Open a Telnet or SSH session to the virtual I/O server partition that will be used as
paging space partition (amsvios). Refer to your worksheet, or ask your instructor if
you need the connection information.
__ 4. In the next set of steps, you will configure paging space devices for your logical
partition. Perform this step to use an hdisk as paging device for your partition.
__ a. Identify the hdisk number that you will use as paging space device for your
logical partition. Log in to the AMS virtual I/O server on your managed system
and list the hdisk devices using the lspv command. You should see rootvg
assigned to hdisk0, and other hdisks should be available and not part of any
volume group.
Log in to the AMS virtual I/O server and list the hdisk devices.
For the hdisk selection, the highest student number selects the highest hdisk
number. The lowest student number selects the lowest hdisk number. Record
your hdisk number here ____________.
Log in to the AMS VIOS and run the lspv command. You should see hdisk devices
available. Record your hdisk number.
$ lspv
hdisk0 0002acf2859db562 rootvg
hdisk1 0002acf29d805873 None
hdisk2 0002acf2ad17106b None
hdisk3 0002acf242d3b83d None
hdisk4 0002ace20f96431e None
__ b. To perform this next step you must synchronize with the other students sharing
your managed system. Only one student at a time can perform modifications to
the shared memory pool (adding a paging space device to the pool). If all of the
students perform this step simultaneously (at least if they select Yes to make
changes to the pool), then only the last modifications will be taken into account,
and all the other modifications performed by the other students will be lost.

One student at a time. From the Shared Memory Pool Management wizard,
select the Paging Space Device(s) tab, then modify the shared memory pool to
add your hdisk device as a paging space device. If you do not see your hdisk, go
to the next step. Specify the AMS virtual I/O server (amsvios) as paging VIOS 1.
Do not specify a second paging VIOS.
Go to the Paging Space Device(s) tab. Then click the Add/Remove Paging Space
Device(s) button. The Modify Shared Memory Pool wizard should appear. Click Next
twice to get to the Paging VIOS screen.
The amsvios partition should already be listed as the Paging VIOS 1 as shown below.
If it is not already configured, then select it. Do not specify any VIO server in the paging
VIOS 2 option. Click Next.
Answer Yes to the question Do you wish to make paging space device changes to the
pool? Click Next.
In the Paging Space Device(s) menu, click Select Device(s).
Click the Refresh button.
A device list of available physical drives should appear. Here is an example output with
available physical disk drives. The list's content depends on the available hdisk devices
attached to the virtual I/O server.
Select your hdisk device name; then click OK. If your device name does not appear,
go to the next step.
The Paging Space Device(s) menu should appear showing your hdisk device name as
Paging space device.

Click Next, then check the Summary of all the selections that you made.
Click Finish.
__ c. When you have configured your paging device, you should see it in the Pool
Properties window.

When done, click OK to close the window. Tell the other students sharing the
same shared memory pool that you are done with the paging device
configuration. Then go to Part 2: Configure your LPAR to use the shared memory
pool.
__ d. If your disk device did not appear in the Paging Space Device(s) panel, then you
need to clear any lingering disk information from previous classes. To clear the
disk information, assign it to a new volume group then remove the volume group.
Here is an example using hdisk2:
$ mkvg -f -vg myvg1 hdisk2
$ reducevg myvg1 hdisk2
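You can then verify that the hdisk is no longer associated with any volume group (a
sketch; your PVID will differ):
$ lspv | grep hdisk2
hdisk2 0002acf2ad17106b None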
Now, return to the previous step 4b above to assign your paging device to the
shared memory pool.
Part 2: Configure your LPAR to use the shared memory pool
In this part, you will copy your partition's Normal profile to a new Normal_AMS profile.
Then you will modify the Normal_AMS profile to use the shared memory pool. Finally, you
will activate your shared memory logical partition.
__ 5. Shut down your assigned logical partition. You do not have to wait for it to shut down
to continue to the next step.
In the Telnet window to your partition, issue the shutdown -F command.
__ 6. Using your web browser, connect to your assigned HMC. Log in as hscroot.
__ 7. In this step, you will copy the Normal partition profile to a new profile named
Normal_AMS for your partition. Then you will modify the Normal_AMS profile to
configure your LPAR to use the shared memory pool.
__ a. Copy the Normal profile to a new profile named Normal_AMS.
Go to the Systems Management application on the HMC. Expand the Servers
information, then show the partition table view for your server.
Select the partition by clicking the checkbox in the Select column.
Choose Configuration > Manage Profiles from the Tasks menu or from the tasks pad.
In the pop-up window, check the profile name (Normal), and choose Copy from the
Actions menu.

__ b. Open the Normal_AMS profile properties window for your partition.


Go to the Systems Management application on the HMC. Expand the Servers
information, then show the partition table view for your server.
Select the partition by checking the checkbox in the Select column.
Choose Configuration > Manage Profiles from the Tasks menu or from the tasks pad.
In the pop-up window, check the profile name (Normal_AMS), and choose Edit from
the Actions menu.
__ c. Change the profile to use the Shared memory mode. Configure the partition's
shared memory to be: 512 MB minimum, 1 GB 512 MB desired, and 2 GB
maximum. Leave the properties window open until you have finished configuring
the profile.
Click the Memory tab in the Logical Partition Profile Properties window that pops up.
In the Memory mode box, select the Shared radio button.
In the Logical Memory box, enter 0 GB 512 MB for the minimum, 1 GB 512 MB for the
desired, and 2 GB for the maximum parameters.
Do not click OK yet.
__ d. Change the memory weight to 128. Select the xxx_amsvios as the Primary
Paging VIOS. Leave the Secondary Paging VIOS to None. Do not select the
Custom I/O entitled memory box. Click the OK button when done.
On the same Memory tab you used in the last step, enter 128 in the Memory weight
field, and select amsvios virtual I/O server as the Primary Paging VIOS.
The Memory tab should now look like the example below. Click OK to make the
changes.
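Optionally, you can verify the resulting profile memory settings from the HMC command
line (a sketch; <managed_system> and <lpar_name> are placeholders):
$ lssyscfg -r prof -m <managed_system> --filter "lpar_names=<lpar_name>,profile_names=Normal_AMS" -F mem_mode,min_mem,desired_mem,max_mem
shared,512,1536,2048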
__ 8. Activate your partition using the Normal_AMS profile.


Your logical partition might still be selected from the last operation. If it is not, click the
checkbox in the Select column to select it.
To activate the partition, access the Operations menu and choose the Activate task.
On the screen that pops up, be sure to select the Normal_AMS profile. Click the Open
a terminal window or console session checkbox, then click OK.
__ 9. As there is no relationship between a paging device and a logical partition before the
first LPAR activation, and because four students use the same shared memory pool,
you cannot be sure of which paging device will be used by your partition when it
activates. Depending on which partition activates first in your managed system, your
partition might not use the device that you created.
__ a. When your partition has finished booting, identify the paging device used by your
partition. Use the HMC Shared Memory Pool Management wizard to check the
paging device configuration. Record the hdisk name or logical volume name of
the paging device assigned to your partition. Your partition is identified by its
partition ID number. Paging device name: _____________________
Go to the Systems Management application on the HMC. Expand the Servers
information, then select your managed system.
Choose Configuration > Virtual Resources > Shared Memory Pool Management
from the Tasks menu or from the tasks pad. In the pop-up window, select the Paging
Space Device(s) tab.
Record the Device name associated with your logical partition ID. Here is an example
showing the device name (hdisk devices in this example) and the partition ID.

__ 10. Once you see that the LPAR has booted in the console, log in as the root user.
__ a. Use the lparstat AIX command with the -i option and view the available
memory information. Check for the Memory Mode, the I/O Memory Entitlement,
the Variable Memory Capacity Weight value, and the physical memory size in the
shared memory pool. Use the man page for lparstat if you have questions
about the output of this command.
Log in to your partition and run lparstat -i. The output shows the memory settings
for your partition.
Identify the Memory Mode, the I/O Memory Entitlement, the variable Memory Capacity
Weight value, and the physical memory size in the shared memory pool.
Example of lparstat -i output:
# lparstat -i
Node Name : lpar1
Partition Name : lpar1
Partition Number : 1
Type : Shared-SMT-4
Mode : Uncapped
Entitled Capacity : 0.35
Partition Group-ID : 32771
Shared Pool ID : 0
Online Virtual CPUs : 1
Maximum Virtual CPUs : 10
Minimum Virtual CPUs : 1
Online Memory : 1536 MB
Maximum Memory : 2048 MB
Minimum Memory : 512 MB
Variable Capacity Weight : 128
Minimum Capacity : 0.10
Maximum Capacity : 2.00
Capacity Increment : 0.01
Maximum Physical CPUs in system : 8
Active Physical CPUs in system : 8
Active CPUs in Pool : 8
Shared Physical CPUs in system : 8
Maximum Capacity of Pool : 800
Entitled Capacity of Pool : 240
Unallocated Capacity : 0.00
Physical CPU Percentage : 35.00%
Unallocated Weight : 0
Memory Mode : Shared
Total I/O Memory Entitlement : 77.000 MB
Variable Memory Capacity Weight : 128
Memory Pool ID : 0
Physical Memory in the Pool : 4.000 GB
Hypervisor Page Size : 4K
Unallocated Variable Memory Capacity Weight: 0
Unallocated I/O Memory entitlement : 0.000 MB
Memory Group ID of LPAR : 32773
Desired Virtual CPUs : 1
Desired Memory : 1536 MB
Desired Variable Capacity Weight : 0
Desired Capacity : 0.35
Target Memory Expansion Factor : -
Target Memory Expansion Size : -
Power Saving Mode : Disabled

__ b. Perform the lparstat command to get statistics about the I/O memory
entitlement for your shared memory partition. What is the value of the I/O entitled
memory for your LPAR? ________MB.
I/O memory entitlement can be seen using the lparstat -m command. Here is an
lparstat command example and its output, showing 77 MB of I/O entitled memory:
# lparstat -m
System configuration: lcpu=4 mem=1536MB mpsz=4.00GB iome=77.00MB iomp=9
ent=0.35

physb hpi hpit pmem iomin iomu iomf iohwm iomaf %entc vcsw
----- ----- ----- ----- ------ ------ ------ ------ ----- ----- -----
0.00 3318 2171 1.11 23.7 12.0 53.3 12.7 0 0.0 147943

__ c. How would you get detailed statistics about I/O memory pools?
I/O memory pool statistics can be seen using the lparstat -me command. Here is an
lparstat command output example:
# lparstat -me

System configuration: lcpu=4 mem=1536MB mpsz=4.00GB iome=77.00MB iomp=9 ent=0.35

physb hpi hpit pmem iomin iomu iomf iohwm iomaf %entc vcsw
----- ----- ----- ----- ------ ------ ------ ------ ----- ----- -----
0.00 3318 2171 1.11 23.7 12.0 53.3 12.7 0 0.0 180276

iompn: iomin iodes iomu iores iohwm iomaf
ent0.txpool 2.12 16.00 2.00 2.12 2.00 0
ent0.rxpool__4 4.00 16.00 3.50 4.00 3.50 0
ent0.rxpool__3 4.00 16.00 2.00 4.00 2.00 0
ent0.rxpool__2 2.50 5.00 2.00 2.50 2.00 0
ent0.rxpool__1 0.84 2.25 0.75 0.84 0.75 0
ent0.rxpool__0 1.59 4.25 1.50 1.59 1.50 0
ent0.phypmem 0.10 0.10 0.09 0.10 0.09 0
vscsi0 8.50 8.50 0.13 8.50 0.89 0
sys0 0.00 0.00 0.00 0.00 0.00 0

__ d. Run the vmstat -h command without any interval count. Look at the memory
mode, the shared memory pool size, and the pmem and loan values displayed
under the hypv-page section. If we consider your partition idle at that time, what
can you say about the sum of the pmem and loan column values?
When your partition is idle, the sum of the pmem and loan values is equal to the logical
memory size of the partition.
# vmstat -h
System configuration: lcpu=4 mem=1536MB ent=0.35 mmode=shared mpsz=4.00GB
kthr memory page faults cpu hypv-page
----- ----------- ------------ ---------- ----------- -----------------
r b avm fre re pi po fr sr cy in sy cs us sy id wa pc ec hpi hpit pmem loan
1 1 174070 125243 0 0 0 2 5 0 6 617 371 0 0 99 0 0.00 0.1 3 1 1.11 0.00

__ e. Using the HMC, dynamically add 512 MB of memory to your LPAR. Then run the
vmstat -h command with no interval count again. What do you observe about
the pmem and loan column values?
The loan value increases as the overall logical memory size increases. Without any
memory load on your LPAR, your logical partition working set is completely backed by
physical memory in the shared pool, so your LPAR operating system can loan logical
memory to the PHYP. That is why the loan column value increased. Depending on the
memory activity on the other LPARs, the pmem value can fluctuate on your LPAR.
Here is an example vmstat -h command output:
# vmstat -h
System configuration: lcpu=4 mem=2048MB ent=0.35 mmode=shared mpsz=4.00GB
kthr memory page faults cpu hypv-page
----- ----------- ------------ ---------- ----------- ------------------
r b avm fre re pi po fr sr cy in sy cs us sy id wa pc ec hpi hpit pmem loan
1 1 177647 106784 0 0 0 1 5 0 6 608 370 0 0 99 0 0.00 0.1 3 1 1.12 0.63
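As an aside, the same dynamic memory add can be performed from the HMC command
line with the chhwres command (a sketch; names are placeholders, and -q is given in
megabytes):
$ chhwres -m <managed_system> -r mem -o a -p <lpar_name> -q 512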

__ f. The topas cross-partition view (topas -C) can be used to get statistics on the
shared memory pool and shared memory partitions. On your shared memory
partition, issue topas -C. The cross-partition panel should appear showing the
partitions.
__ g. Wait for topas to find the other LPARs. Then, to display the Memory Pool panel
from the CEC Panel, press the m key. This panel displays the statistics of all of
the memory pools in the system (at this time we have only one memory pool).
Here is example topas command output:

Topas CEC Monitor                 Interval: 10       Mon Dec 10 18:39:00 2012
Partitions     Memory (GB)            Memory Pool(GB)         I/O Memory(GB)
Mshr: 1   Mon: 5.5 InUse: 3.9    MPSz: 4.0 MPUse: 1.1    Entl: 77.0 Use: 12.0
Mded: 3   Avl: 1.6               Pools: 1

mpid   mpsz   mpus   mem    memu   iome   iomu   hpi   hpit
------------------------------------------------------------------------
0      4.00   1.13   2.00   1.12   77.0   12.0   0     0

__ h. Display the partitions associated with the shared memory pool by selecting the
shared memory pool (move the cursor to highlight it) and pressing the f key. You
can look at the logical memory that is used by each partition (memu column) and
the logical memory loaned to the hypervisor by each LPAR (meml column). Consult
the topas man page for more information about this panel and its columns.
Keep this topas panel running while performing the next step.
Here is an example topas output showing the partitions using the shared memory pool:

Topas CEC Monitor                 Interval: 10       Wed May 20 17:53:04 2009
Partitions     Memory (GB)            Memory Pool(GB)         I/O Memory(GB)
Mshr: 4   Mon: 6.0 InUse: 4.4    MPSz: 4.0 MPUse: 4.0    Entl: 308.0 Use: 47.9
Mded: 0   Avl: 1.6               Pools: 1

mpid   mpsz   mpus   mem    memu   iome   iomu   hpi   hpit
-----------------------------------------------------------
0      4.00   3.99   6.00   4.39   308.0  47.9   2     162288

Host mem memu pmem meml iome iomu hpi hpit vcsw hysb %entc
-------------------------------------------------------------
lpar1 2.00 1.63 1.14 0.36 77.0 12.0 0 0 714 0.25 248.86
lpar4 2.00 1.68 1.00 0.50 77.0 12.0 0 0 280 0.01 5.49
lpar3 2.00 1.68 0.90 0.60 77.0 12.0 2 1 462 0.18 183.03
lpar2 2.00 1.42 0.96 0.54 77.0 12.0 0 0 622 0.01 6.10

__ i. Using the HMC GUI, perform a dynamic operation to remove 512 MB of memory
from your LPAR for a total of 1.5 GB. Monitor the mem, memu, and meml values
for your LPAR from the topas output. You should notice the values change
according to the logical memory size.
The mem, memu, and meml values change according to the logical memory size in the
partition.

Part 3: Monitoring logical memory
In this section, you will be monitoring the AMS behavior. The results of the monitoring tools
will be affected by the number of teams performing the same steps. Some of the hints
reflect results based on loads generated on a varying number of LPARs sharing the
memory pool.
For the following steps, you should work in teams of two. Your team will work with only one
LPAR. (Choose either yours or your teammate's.)
__ 11. Be sure the AIX LPAR that you will use for this part of the exercise has a desired
memory value of 1 GB 512 MB.
You can use lparstat to check the assigned memory or look at the HMC interface.
Look for the mem field in the configuration line at the top of the output.
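For example (output trimmed to the configuration line; the values shown here are
illustrative, and yours may differ):
# lparstat
System configuration: type=Shared mode=Uncapped smt=4 lcpu=4 mem=1536MB psize=4 ent=0.35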
If the memory amount is not 1.5 GB, then use a dynamic LPAR change to change it.
Select the LPAR and run the Dynamic Logical Partitioning > Memory > Add or
Remove menu option. Enter 1 in the Gigabytes input box and 512 in the Megabytes
input box as shown below. Then click the OK button.
__ 12. In your team's LPAR, issue topas -C to get the cross-partition panels. Notice the
pmem value.
Keep the topas -C command running while performing the following steps.
Here is a topas -C command output example:
Topas CEC Monitor Interval: 10 Mon Dec 10 18:03:03 2012
Partitions Memory (GB) Processors
Shr: 4 Mon: 6.0 InUse: 4.6 Shr:0.4 PSz: 4 Don: 0.0 Shr_PhysB 0.03
Ded: 0 Avl: - Ded: 0 APP: 4.0 Stl: 0.0 Ded_PhysB 0.00

Host OS Mod Mem InU Lp Us Sy Wa Id PhysB Vcsw Ent %EntC PhI pmem
-------------------------------------shared--------------------------------------
lpar1 A71 U-d 1.5 1.2 2 1 3 0 95 0.01 1272 0.35 7.5 0 0.97
lpar2 A71 U-d 1.5 1.1 2 1 3 0 95 0.01 408 0.35 7.4 0 1.08
lpar3 A71 U-d 1.5 1.1 2 0 2 0 96 0.01 308 0.35 6.0 0 1.03
lpar4 A71 U-d 1.5 1.2 2 0 2 0 96 0.01 322 0.35 5.9 0 0.93
__ 13. Open another Telnet or SSH window on your team's partition. In a session on your
teams LPAR, set the ulimit data to unlimited. There should be no output.
# ulimit -d unlimited
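You can confirm that the new limit took effect:
# ulimit -d
unlimited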
__ 14. In the second session, change the directory to /home/an31/ex4 and then run the
following command:
./amsload -M 800m -d 60
Note: This amsload tool generates a memory intensive load. It will allocate 800 MB
of memory on your LPAR. The memory allocation will be reached at the end of the
60 seconds (specified by the -d option). The output gives you the amount of
allocated memory versus the target memory to allocate.
# ./amsload -M 800m -d 60
minsize = 10485760 bytes (10.000000 MB)
maxsize = 838860800 bytes (800.000000 MB)
rate = 13806250 bytes/s (13.166666 MB/s)
rampup=60 sec (1.000000 min)
Todo loop=-1
Delay=1
Verbose=0
36 / 800 (MB)
49 / 800 (MB)
59 / 800 (MB)
63 / 800 (MB)
__ 15. Leave this command running for about 10 minutes while checking the pmem values
in the topas -C command output. What can you conclude?
In this topas output example, we have two logical partitions running an intensive
memory workload and the other two partitions are without any memory load. The
LPARs without the load will have pmem values that are low, while the LPARs with the
load will have pmem values that are high. The hypervisor allocated memory pages to
the highly demanding logical partitions. The partitions that are not memory constrained
have probably loaned a lot of free memory to the PHYP. The amount of their logical
memory backed by physical memory in the shared pool is low.
Topas CEC Monitor Interval: 10 Mon Dec 10 18:24:42 2012
Partitions Memory (GB) Processors
Shr: 4 Mon: 6.0 InUse: 5.9 Shr:0.4 PSz: 4 Don: 0.0 Shr_PhysB 2.02
Ded: 0 Avl: - Ded: 0 APP: 2.0 Stl: 0.0 Ded_PhysB 0.00

Host OS Mod Mem InU Lp Us Sy Wa Id PhysB Vcsw Ent %EntC PhI pmem
-------------------------------------shared-----------------------------------
lpar1 A71 U-d 1.5 1.5 2 97 1 0 0 1.00 394 0.10 1002.7 0 1.37
lpar2 A71 U-d 1.5 1.4 2 97 2 0 0 1.00 412 0.10 998.1 2 1.37
lpar3 A71 U-d 1.5 1.5 2 1 3 0 95 0.01 937 0.10 8.2 0 0.63
lpar4 A71 U-d 1.5 1.5 2 0 3 0 95 0.01 372 0.10 7.1 0 0.63

__ 16. Stop the memory workload by typing Ctrl-C. Open a Telnet or SSH session to your
team's other LPAR. Run the ulimit -d unlimited command.
__ 17. Start the memory workload by running the following commands:
# cd /home/an31/ex4
# ./amsload -M 800m -d 60
Leave this command running for about 10 minutes while checking the pmem values
in the topas -C command output. The pmem value for the LPAR running the
memory-intensive workload should increase, while the pmem value for your team's
other LPAR decreases.
This is the scenario where the LPAR that was running a memory-demanding workload
stops, and then another LPAR starts a memory-intensive workload. The PHYP
allocates memory pages to the demanding partitions.
Topas CEC Monitor Interval: 10 Mon Dec 10 18:34:42 2012
Partitions Memory (GB) Processors
Shr: 4 Mon: 6.0 InUse: 5.9 Shr:0.4 PSz: 4 Don: 0.0 Shr_PhysB 2.02
Ded: 0 Avl: - Ded: 0 APP: 2.0 Stl: 0.0 Ded_PhysB 0.00

Host OS Mod Mem InU Lp Us Sy Wa Id PhysB Vcsw Ent %EntC PhI pmem
-------------------------------shared------------------------------------
lpar1 A71 U-d 1.5 1.5 2 97 1 0 0 1.00 394 0.10 1002.7 0 1.57
lpar2 A71 U-d 1.5 1.4 2 97 2 0 0 1.00 412 0.10 998.1 2 1.63
lpar3 A71 U-d 1.5 1.5 2 1 3 0 95 0.01 937 0.10 8.2 0 0.40
lpar4 A71 U-d 1.5 1.5 2 0 3 0 95 0.01 372 0.10 7.1 0 0.40

Your statistics will be different from the example above because of variable activity in
the other partitions.
__ 18. Stop any amsload processes and stop the topas program with CTRL-C.
In the next test, you will simultaneously run memory-intensive workloads on both of your
team's partitions. You will then check the partitions' loaning and hypervisor paging
activities. The observed AMS statistical values will be impacted by the activity of the
partitions that share the memory pool.
__ 19. Restart your and your teammate's partitions' operating systems to reset the
statistics.
Run the shutdown -Fr command in each partition.
__ 20. Once the partitions have rebooted, open a wide Telnet window on each partition and
issue the vmstat -h 2 command. Leave these commands running during the next
few steps.
Example vmstat -h output:
System configuration: lcpu=2 mem=1536MB ent=0.10 mmode=shared mpsz=4.00GB
kthr memory page faults cpu hypv-page
----- ----------- ------------ ---------- ----------- -----------------
r b avm fre re pi po fr sr cy in sy cs us sy id wa pc ec hpi hpit pmem loan
1 1 174070 125243 0 0 0 2 5 0 6 617 371 0 0 99 0 0.00 0.1 3 1 1.05 0.45

__ 21. Open a Telnet window on the VIO server used as the paging partition, then check
the paging device activity. Start topas, then press the ~ key to switch to the nmon
screen. From the nmon screen, type d to get the disk I/O graph.
You should see this type of I/O graph:
+-topas_nmon--S=WLMsubclasses----Host=george4--------Refresh=2 secs---18:53.04-+
Disk-KBytes/second-(K=1024,M=1024*1024) -------------------------------------
Disk Busy Read Write 0----------25-----------50------------75--------100
Name KB/s KB/s | | | | |
hdisk3 0% 0 0| |
hdisk2 0% 0 0| |
hdisk4 0% 0 0| |
hdisk0 0% 0 0| |
hdisk5 0% 0 0| |
hdisk1 0% 0 0| |
Totals 0 0+-----------|------------|-------------|----------+
------------------------------------------------------------------------------

__ 22. Start a memory load on both partitions. Run the following command in both LPARs:
./amsload -M 1000m -d 60
This will consume 1 GB of memory in each LPAR. If you get a realloc: Not
enough space. message then run the ulimit -d unlimited command.
Example steps:
# ulimit -d unlimited
# cd /home/an31/ex4
# ./amsload -M 1000m -d 60
__ 23. While the memory is consumed by the amsload tool, use the vmstat command
output to see the fre memory (amount of free memory) and the loan values
decreasing. Also, verify that the hypervisor page-in activity (hpi column) increased.
Here is a vmstat output example. After a while, the loan value goes to zero. The hpi
column shows hypervisor paging space activity. This activity depends on the activity of the other
partitions sharing the memory pool. AIX operating system paging space activity
might also occur.
# vmstat -h 2
kthr memory page faults cpu hypv-page
----- ----------- ------------------------ ------------ ----------------------- -------------------------
r b avm fre re pi po fr sr cy in sy cs us sy id wa pc ec hpi hpit pmem loan
2 1 438809 1091 0 88 2070 2244 2320 0 224 187 939 70 7 19 5 0.35 99.8 0 0 1.50 0.00
2 0 443041 1165 0 31 2085 2244 2310 0 176 178 846 62 6 30 2 0.35 98.8 0 0 1.50 0.00
1 0 447296 1409 0 0 2166 2249 2295 0 122 171 703 60 8 32 0 0.34 98.4 0 0 1.50 0.00
1 0 451509 1021 0 2 1966 1987 2048 0 112 1082 615 62 6 31 1 0.35 100.6 0 0 1.50 0.00
1 0 457765 34 0 0 2434 3209 3886 0 130 97 631 60 7 32 0 0.35 99.0 0 0 1.50 0.00
1 0 464204 1164 0 0 3541 3209 3343 0 138 103 704 60 8 32 0 0.35 99.5 0 0 1.50 0.00

__ 24. On the VIOS paging partition, check the paging device activity in the nmon output.
You should see your paging device is busy. Here is an example of I/O graph activity
that shows two disks are busy.

In this example nmon output, we can see read and write activity for hdisk5 and hdisk6.
These hdisks are the paging space devices of the two partitions running the workload
generated by the amsload tool.
__ 25. Keep watching the activity until you are no longer curious. Stop all amsload
processes with CTRL C, and stop any analysis tools that are still running.
__ 26. Shut down your logical partition. (Your teammate should do the same with his or her
partition.) Reactivate your partition using the Normal profile to use dedicated
memory.
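As in the earlier steps, the suggested command in each LPAR is shutdown -F. Once a
partition shows Not Activated on the HMC, activate it with its Normal profile.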

End of exercise
Exercise review/wrap-up


This exercise covered configuring the shared memory options and viewing system
information related to these options. Students used the lparstat, vmstat,
and topas commands to view shared memory utilization and the differences between
dedicated and shared memory partitions when there is excess capacity in the shared
memory pool.
Exercise 5. Active Memory Expansion
(with hints)

Estimated time
00:35

What this exercise is about


POWER7 servers support Active Memory Expansion (AME) in
order to improve the utilization of memory. This exercise provides an
opportunity to configure this capability for an LPAR and observe its
effect.

What you should be able to do


After completing this exercise, you should be able to:
Configure an LPAR to enable Active Memory Expansion
Recognize the effect of AME in memory statistics displays

Introduction
This exercise is designed to give you experience working with Active
Memory Expansion.

Known problems
There are no known problems.

Exercise instructions with hints

Preface
Two versions of these instructions are available: one with hints and one without. You can
use either version to complete this exercise. Also, don't hesitate to ask the instructor if you
have questions.
The output shown in the answers is an example. Your output and answers based on the
output might be different.
All hints are marked with a >> sign.

Part 1: Observe non-AME memory behavior
__ 1. Start a login session to your assigned LPAR. Log in as the root user.
__ 2. Run lparstat, requesting details of the LPAR configuration.
What is the Online Memory? _______________________________________
What is the Memory Mode? _______________________________________
What is the Target Memory Expansion Factor? _________________________
What is the Target Memory Expansion Size? ___________________________
The suggested command and sample output are:
# lparstat -i | grep -i memory
Online Memory : 1024 MB
Maximum Memory : 2048 MB
Minimum Memory : 512 MB
Memory Mode : Dedicated
Total I/O Memory Entitlement : -
Variable Memory Capacity Weight : -
Memory Pool ID : -
Physical Memory in the Pool : -
Unallocated Variable Memory Capacity Weight: -
Unallocated I/O Memory entitlement : -
Memory Group ID of LPAR : -
Desired Memory : 1024 MB
Target Memory Expansion Factor : -
Target Memory Expansion Size : -

In the example output:


Online Memory: 1024 MB
Memory Mode: Dedicated
Target Memory Expansion Factor: No value
Target Memory Expansion Size: No value
__ 3. Open a second login session to your LPAR. Log in as root. We will refer to this as
the monitoring window.
__ 4. In the monitoring window, run vmstat with an interval of five seconds and no limit
on iterations.
The suggested command and sample output are:
# vmstat 5
System configuration: lcpu=4 mem=1024MB ent=0.35
kthr memory page faults cpu
----- ----------- ------------------------ ------------ -----------------------
r b avm fre re pi po fr sr cy in sy cs us sy id wa pc ec
0 0 182845 27343 0 0 0 0 0 0 5 519 247 0 2 98 0 0.01 3.9
0 0 182845 27343 0 0 0 0 0 0 7 29 232 0 1 99 0 0.01 2.4
1 0 182829 27359 0 0 0 0 0 0 2 407 227 0 2 98 0 0.01 3.5
2 0 182829 27359 0 0 0 0 0 0 6 37 230 0 1 99 0 0.01 2.5
2 0 182964 27156 0 0 0 0 0 0 11 926 250 0 2 97 0 0.02 4.6
__ 5. Return to your first window, change directory to /home/an31/ex5, and list the files.
The suggested commands and sample output are:
# cd /home/an31/ex5
# ls
memory-eater
__ 6. Execute the memory-eater program in the background and then execute the
lparstat command for an interval of five seconds and for four intervals.
What is the average physical processor consumption (physc)?
The suggested commands and sample output are:
# ./memory-eater &
Allocating memory
[1] 5374038
# lparstat 5 4
System configuration: type=Shared mode=Capped smt=4 lcpu=4 mem=1024MB
psize=4 ent=0.35

%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
0.2 1.3 0.0 98.5 0.01 2.7 0.0 2.98 183 0
0.4 1.7 0.0 97.9 0.01 3.8 0.8 2.97 247 0
0.2 1.3 0.0 98.5 0.01 2.8 0.0 2.98 199 0
0.3 1.7 0.0 98.1 0.01 3.4 0.0 2.98 181 0
In the example output, the average physical processor consumption was about 0.01
processing units. Note: You may or may not see an app column in the output.
__ 7. In your monitoring window, observe the effect of the memory-eater workload on the
paging space I/O.
Did the extra memory workload cause any thrashing on the paging space?
(Thrashing is persistent page-ins and page-outs in each interval).
The example output is:
kthr memory page faults cpu
----- ----------- ------------------------ ------------ -----------------------
r b avm fre re pi po fr sr cy in sy cs us sy id wa pc ec
1 0 182207 34177 0 0 0 0 0 0 6 76 240 0 1 99 0 0.01 2.7
1 0 182207 34177 0 0 0 0 0 0 6 428 222 0 2 98 0 0.01 3.4
1 0 212962 3422 0 0 0 0 0 0 5 65 232 6 1 92 0 0.04 12.3
1 0 212969 3415 0 0 0 0 0 0 6 426 238 0 2 98 0 0.01 4.0
In the example output, there was no paging space paging activity (pi and po columns).
You might see transitory sporadic instances of paging to and from paging space.
__ 8. Return to your first window and execute three instances of the memory-eater
program in the background and then execute the lparstat command for an
interval of five seconds and for four intervals. You should now see paging activity in
the vmstat output. Look at the lparstat output and answer these questions:
What is the average physical processor consumption (physc)?
How did this compare with the previous lparstat display?
The suggested commands and sample output are:
# ./memory-eater &
[2] 5242948
Allocating memory
# ./memory-eater &
[3] 5373992
Allocating memory
# ./memory-eater &
[4] 6357032
Allocating memory

# lparstat 5 4
System configuration: type=Shared mode=Capped smt=4 lcpu=4 mem=1024MB
psize=4 ent=0.35

%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
3.9 4.4 6.7 85.0 0.06 16.7 1.4 2.92 1913 0
12.1 11.3 16.2 60.4 0.16 46.8 3.7 2.82 5172 3
10.6 13.4 14.5 61.5 0.16 46.9 6.3 2.81 4723 3
12.5 12.0 16.2 59.4 0.17 49.3 5.4 2.80 5487 3

In the example output, the average physical processor consumption was between 0.06
and 0.17 processing units.
This is a significant increase over the previous situation. Part of this is the activity of the
memory-eater threads, but part of it is the memory management overhead related to the
paging space activity.
__ 9. In your monitoring window, observe the effect of the memory-eater workload on the
paging space I/O.
Did the extra memory workload cause any thrashing on the paging space?
(Thrashing is persistent page-ins and page-outs in each interval).
The example output is:
kthr memory page faults cpu
----- ----------- ------------------------ ------------ -----------------------
r b avm fre re pi po fr sr cy in sy cs us sy id wa pc ec
1 1 305555 2400 0 1147 1121 1129 10201 0 1206 35 2517 4 6 81 8 0.07 20.6
1 2 305556 2454 0 2449 2473 2474 2967 0 2546 472 5360 10 9 63 17 0.14 38.7
2 2 305689 2507 0 2459 2467 2498 15911 0 2563 122 5300 17 11 58 15 0.18 51.0
1 1 305673 2520 0 2558 2487 2580 27004 0 2702 472 5754 10 14 61 15 0.16 44.8

In the example output, there is persistent paging space paging activity (pi and po
columns).
__ 10. List and then terminate all of your background jobs.
The suggested commands and sample output are:
# jobs
[4] + Running ./memory-eater &
[3] - Running ./memory-eater &
[2] Running ./memory-eater &
[1] Running ./memory-eater &

# kill %1 %2 %3 %4
[4] + Terminated ./memory-eater &
[3] + Terminated ./memory-eater &
[2] + Terminated ./memory-eater &
[1] + Terminated ./memory-eater &
__ 11. Shut down your AIX operating system with no delays. Do not wait for the shutdown
to complete, but instead go directly to the next step in this exercise.
The suggested command is:
# shutdown -F

Part 2: Configure your LPAR for AME
__ 12. Log in to the HMC GUI and locate your LPAR in the LPAR table.
__ 13. Manage your profiles and make a copy of your Normal profile. Name the new profile
AME.
Select your LPAR.


Run the Configuration > Manage Profiles task.
Select the Normal profile and, from the Actions pull-down menu, click Copy.
When prompted, provide a new profile name of AME.
Click OK.
__ 14. Edit your new AME profile. Modify your memory resource by enabling AME and
setting the expansion factor to 1.5.
In the list of profile names, in the Manage Profiles window, click your new profile to
open its properties.
Click the Memory tab.
At the bottom of the memory panel, click the box that enables Active Memory
Expansion, and fill in the expansion factor box with a value of 1.5.
Click OK and close the Manage Profiles panel.
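Note: with the 1024 MB of desired memory inherited from the Normal profile, an
expansion factor of 1.5 gives a target expanded memory size of 1024 MB x 1.5 =
1536 MB. You will verify this with lparstat and svmon in Part 3.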
__ 15. Monitor the state of your LPAR. Once your LPAR is Not Activated, activate it using
the new AME profile. Request that a virtual terminal be started in order to detect
when it is fully operational with a login prompt.
When the partition state is Not Activated, proceed to activate the partition.
Select the partition (if not already selected).
When the small menu icon appears, click it to show the menu and move your mouse
over the Operations task.
When the submenu appears, move your mouse pointer over the Activate subtask.
When the next submenu appears, click the Profile subtask.
In the pop-up window labeled Activate Logical Partition: <your lpar name>:
i. Click the AME profile.
ii. Click the small box next to Open a terminal window or console session (unless
you already have a virtual console window open).
iii. Click OK.
iv. You might see security pop-ups. Accept any certificate warnings, and do not
block execution.
You should eventually see a login prompt appear in the virtual console window.

Part 3: Observe AME memory behavior
__ 16. Once your LPAR is fully active, start a login session to your LPAR. Log in to your
assigned LPAR as the root user.
__ 17. Run lparstat, requesting details of the LPAR configuration.
What is the Online Memory? _______________________________________
What is the Memory Mode? _______________________________________
What is the Target Memory Expansion Factor? _________________________
What is the Target Memory Expansion Size? ___________________________
The suggested command and sample output are:
# lparstat -i | grep -i memory
Online Memory : 1024 MB
Maximum Memory : 2048 MB
Minimum Memory : 512 MB
Memory Mode : Dedicated-Expanded
Total I/O Memory Entitlement : -
Variable Memory Capacity Weight : -
Memory Pool ID : -
Physical Memory in the Pool : -
Unallocated Variable Memory Capacity Weight: -
Unallocated I/O Memory entitlement : -
Memory Group ID of LPAR : -
Desired Memory : 1024 MB
Target Memory Expansion Factor : 1.50
Target Memory Expansion Size : 1536 MB
In the example output:
Online memory: 1024 MB
Memory Mode: Dedicated-Expanded
Target Memory Expansion Factor: 1.50
Target Memory Expansion Size: 1536 MB
__ 18. Run an svmon global report and request an AME summary with units of 1 MB
(rather than units of 4 KB).
What is the reported size of logical memory? __________________________
What is the memory mode (mmode)? ________________________________
What is the target expansion factor (txf)? ______________________________
What is the expansion deficit (dxm) __________________________________
The suggested command and sample output are:
# svmon -G -O summary=ame,unit=MB
Unit: MB
--------------------------------------------------------------------------------------
size inuse free pin virtual available mmode
memory 1536.00 805.32 730.68 324.74 624.75 865.77 Ded-E
ucomprsd - 805.32 -
comprsd - 0 -
pg space 512.00 5.55

work pers clnt other
pin 274.86 0 0 49.9
in use 624.75 0 180.57
ucomprsd 624.75
comprsd 0
--------------------------------------------------------------------------------------
True Memory: 1024.00

CurSz %Cur TgtSz %Tgt MaxSz %Max CRatio
ucomprsd 1008.00 98.44 853.34 83.33 - - -
comprsd 16.0 1.56 170.66 16.67 671.75 65.60 0.00

txf cxf dxf dxm
AME 1.50 1.50 0.00 0
In the example output:
Size of logical memory: 1536 MB
Memory mode (mmode): Ded-E (dedicated-expansion)
Target expansion factor (txf): 1.50
Expansion deficit (dxm): 0
__ 19. Open a second login session to your LPAR. Log in as root. We will refer to this as
the monitoring window.
__ 20. In the monitoring window, run vmstat with an interval of five seconds, no limit on
iterations, and requesting memory compression statistics.
What is the reported value for logical memory (mem)?____________________
What is the reported value for true memory (tmem)? _____________________
Did the extra memory workload cause any thrashing on the paging space?
Did the extra memory workload cause any thrashing on the compressed memory
pools (ci and co columns)?
The suggested command and sample output are:
# vmstat -c 5
System Configuration: lcpu=4 mem=1536MB tmem=1024MB ent=0.35 mmode=dedicated-E
kthr memory page
------- ------------------------------------------------------ -------------------
r b avm fre csz cfr dxm ci co pi po
0 0 160386 186496 4096 4096 0 0 0 0 0
0 0 160386 186496 4096 4096 0 0 0 0 0
0 0 160386 186496 4096 4096 0 0 0 0 0
0 0 160386 186496 4096 4096 0 0 0 0 0
The columns to the right of the po column are not shown to improve readability.
In the example output:
The reported logical memory was 1536 MB.
The reported true memory was 1024 MB.
There was no paging space paging activity (pi and po columns).
There was no compression pool paging activity (ci and co columns).
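Notice also that the reported logical memory is exactly the true memory multiplied by
the configured expansion factor: 1024 MB x 1.5 = 1536 MB.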
__ 21. Return to your first window, change directory to /home/an31/ex5, and list the files.
The suggested commands and sample output are:
# cd /home/an31/ex5
# ls
memory-eater
__ 22. Execute four instances of the memory-eater program in the background and then
execute the lparstat command for an interval of five seconds and for four
intervals.
What is the average physical processor consumption (physc)?
How much physical processor consumption is due to AME compression and
decompression?
The suggested commands and sample output are:
# ./memory-eater &
[1] 3211300
Allocating memory
# ./memory-eater &
[2] 3407908
Allocating memory
# ./memory-eater &
[3] 6488190
Allocating memory
# ./memory-eater &
[4] 6491574
Allocating memory
# lparstat -c 5 4
System configuration: type=Shared mode=Capped mmode=Ded-E smt=4 lcpu=4 mem=1536MB
tmem=1024MB psize=4 ent=0.35

%user %sys %wait %idle physc %entc lbusy app vcsw phint %xcpu xphysc dxm
----- ----- ------ ------ ----- ----- ------ --- ----- ----- ------ ------ ------
0.8 1.6 0.8 96.8 0.02 4.9 1.1 2.96 589 0 9.9 0.0017 0
1.2 2.5 0.8 95.4 0.02 6.7 0.7 2.96 422 0 8.9 0.0021 0
0.3 1.5 0.3 97.9 0.01 3.5 0.3 2.97 334 0 10.1 0.0012 0
0.8 2.1 0.1 97.0 0.02 5.1 0.7 2.97 326 0 5.5 0.0010 0

In the example output, the average physical processor consumption was between 0.01
and 0.02 processing units.
In the example output, the average physical processor consumption due to memory
expansion and compression was between 0.0010 and 0.0021 processing units.
__ 23. In your monitoring window, observe the effect of the memory-eater workload on the
paging space I/O.
Did the extra memory workload cause any thrashing on the paging space?
Did the extra memory workload cause any thrashing on the compressed memory
pools?
The example vmstat -c 5 output is:
kthr memory page
------- ------------------------------------------------------ -------------------
r b avm fre csz cfr dxm ci co pi po
0 0 283219 114287 18490 2523 0 2 77 0 0
0 0 283219 114287 18490 2523 0 0 0 0 0
0 0 283221 114273 18490 2534 0 5 0 0 0
0 0 283221 114269 18490 2534 0 0 0 0 0

The columns to the right of the po column are not shown to improve readability.
In the example output:
There was no paging space paging activity (pi and po columns).
There was some transitory compression pool paging activity (ci and co columns).
__ 24. Return to your first window and execute two more memory-eater programs in the
background and then execute the lparstat command for an interval of five
seconds and for four intervals.
What is the average physical processor consumption (physc)?
How much physical processor consumption is due to AME compression and
decompression?
The suggested commands and sample output are:
# ./memory-eater &
[3] 7012464
Allocating memory

# ./memory-eater &
[4] 6684812
Allocating memory

# lparstat -c 5 4
System configuration: type=Shared mode=Capped mmode=Ded-E smt=4 lcpu=4 mem=1536MB
tmem=1024MB psize=4 ent=0.35

%user %sys %wait %idle physc %entc lbusy app vcsw phint %xcpu xphysc dxm
----- ----- ------ ------ ----- ----- ------ --- ----- ----- ------ ------ ------
15.6 23.4 2.2 58.7 0.17 47.2 27.2 2.82 883 1 73.7 0.1217 0
29.0 49.1 8.8 13.0 0.30 84.9 72.2 2.69 593 7 86.7 0.2576 0
7.9 11.5 2.3 78.4 0.09 25.0 12.5 2.90 737 0 63.3 0.0553 0
20.9 36.5 7.8 34.8 0.23 64.3 50.0 2.76 737 13 81.8 0.1841 0

In the example output, the average physical processor consumption was between 0.09
and 0.30 processing units.
This is a significant increase over the previous situation, although it is not that much
more overhead than the non-AME, two memory-eater case. (You can refer back to the
hints in step 8 to see the non-AME lparstat output with four memory-eater processes.)
Part of this consumption is the CPU activity of the memory-eater threads themselves,
but part of it is the memory management overhead related to the compressed memory
pool paging activity.
__ 25. In your monitoring window, observe the effect of the memory-eater workload on the
paging space I/O.
Did the extra memory workload cause any thrashing on the paging space?
Did the extra memory workload cause any thrashing on the compressed memory
pools?
The example output is:
kthr memory page
------- ------------------------------------------------------
r b avm fre csz cfr dxm ci co pi po
4 2 345281 56985 39052 3640 0 9483 9666 0 0
5 4 345281 57525 39052 3445 0 29908 29775 0 0
3 0 345281 57230 39052 3553 0 11162 11216 0 0
4 2 345281 58590 39052 3523 0 29454 29179 0 0
In the example output:
- There was no paging space paging activity (pi and po columns).
- There was significant and persistent compressed memory pool paging activity (ci
and co columns).
__ 26. Return to your first window, execute two more memory-eater programs in the
background, and then execute the lparstat command with an interval of five
seconds and a count of four intervals.
What is the average physical processor consumption (physc)?
How much physical processor consumption is due to AME compression and
decompression?
The suggested commands and sample output are:
# ./memory-eater &
[5] 6946916
Allocating memory

# ./memory-eater &
[6] 4915302
Allocating memory

# lparstat -c 5 4
System configuration: type=Shared mode=Capped mmode=Ded-E smt=4 lcpu=4 mem=1536MB
tmem=1024MB psize=4 ent=0.35

%user %sys %wait %idle physc %entc lbusy app vcsw phint %xcpu xphysc dxm
----- ----- ------ ------ ----- ----- ------ --- ----- ----- ------ ------ ------
31.4 48.7 11.4 8.5 0.33 95.1 66.0 2.65 2061 117 71.5 0.2382 0
29.3 46.2 11.9 12.6 0.31 88.6 62.7 2.67 1794 114 71.9 0.2230 0
26.4 34.2 6.5 32.9 0.28 79.1 36.2 2.71 2245 32 63.7 0.1762 0
29.3 49.5 13.6 7.6 0.33 93.0 61.4 2.66 1646 91 73.3 0.2386 0

In the example output, the average physical processor consumption was between 0.28
and 0.33 processing units.
Part of this consumption is the CPU activity of the memory-eater threads themselves,
but part of it is the memory management overhead related to the compressed memory
pool paging activity.
In the example output, the processor consumption due to AME compression and
decompression was reported to be between 0.17 and 0.24 processing units. This is an
increase over the previous case.
__ 27. In your monitoring window, observe the effect of the memory-eater workload on the
paging space I/O.
Did the extra memory workload cause any thrashing on the paging space?
Did the extra memory workload cause any thrashing on the compressed memory
pools?
The example vmstat -c 5 output is:
kthr memory page
------- ---------------------------------------------------------------------------
r b avm fre csz cfr dxm ci co pi po
1 4 407039 1161 43983 1664 0 8504 8518 399 531
2 4 407039 1106 43983 990 0 18390 18111 1082 882
6 6 407296 1003 43983 970 0 25487 25565 1053 982
1 4 407042 1087 43983 862 0 5152 5262 249 187

In the example output:
- There was significant and persistent compressed memory pool paging activity (ci
and co columns).
- There was also significant and persistent paging space paging activity (pi and po
columns). This paging space activity is still less than in the non-AME, two
memory-eater example from earlier in the exercise.
__ 28. If you did not see any page thrashing, continue to run additional instances of
memory-eater in the background, and continue monitoring, until you do see
thrashing on the paging space.
__ 29. How many instances of memory-eater were required to create paging-space page
thrashing?
In the examples, it only took eight instances of memory-eater to trigger paging space
thrashing. You can type the jobs command to see how many you have.
__ 30. Run the amepat tool for one minute.
Example command: # amepat 1
__ 31. Scroll to the top of the output and view the System Configuration section. It should
list the target Expanded Memory size of 1.50 GB and an expansion factor of 1.5.
Example output:
System Configuration:
---------------------
Partition Name : george1
Processor Implementation Mode : POWER7
Number Of Logical CPUs : 4
Processor Entitled Capacity : 0.35
Processor Max. Capacity : 1.00
True Memory : 1.00 GB
SMT Threads : 4
Shared Processor Mode : Enabled-Capped
Active Memory Sharing : Disabled
Active Memory Expansion : Enabled
Target Expanded Memory Size : 1.50 GB
Target Memory Expansion factor : 1.50
__ 32. Scroll down and view the System Resource Statistics and the AME Statistics. The
AME statistics show how much CPU resource was needed to compress a certain
amount of memory. Notice the actual compression ratio.
Example output, which shows that 0.18 processing units were needed to compress
680 MB of memory:
System Resource Statistics: Current
--------------------------- ----------------
CPU Util (Phys. Processors) 0.22 [ 22%]
Virtual Memory Size (MB) 1590 [104%]
True Memory In-Use (MB) 1019 [100%]
Pinned Memory (MB) 330 [ 32%]
File Cache Size (MB) 2 [ 0%]
Available Memory (MB) 0 [ 0%]
AME Statistics: Current
--------------- ----------------
AME CPU Usage (Phy. Proc Units) 0.18 [ 18%]
Compressed Memory (MB) 680 [ 44%]
Compression Ratio 4.04

This example output shows an actual compression ratio of 4.04.
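As a rough sanity check of that number: at a 4.04 compression ratio, the 680 MB of
compressed data occupies only about 680 / 4.04 ≈ 168 MB of real memory, a saving of
roughly 512 MB. That is consistent with the 0.5 GB difference between the 1.00 GB of
true memory and the 1.50 GB expanded memory size.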
__ 33. Now look at the Active Memory Expansion Modeled Statistics. Notice the
Achievable Compression ratio value. Read the recommendations for this LPAR.
Example output showing that the achievable compression ratio (4.04) is much higher
than the current expansion factor:
Active Memory Expansion Modeled Statistics :
-------------------------------------------
Modeled Expanded Memory Size : 1.50 GB
Achievable Compression ratio : 4.04

Expansion   Modeled True   Modeled             CPU Usage
Factor      Memory Size    Memory Gain         Estimate
---------   ------------   ------------------  -----------
1.00        1.50 GB        0.00 KB   [  0%]    0.00 [  0%]
1.20        1.25 GB        256.00 MB [ 20%]    0.08 [  8%]
1.50        1.00 GB        512.00 MB [ 50%]    0.17 [ 17%]   << CURRENT CONFIG
2.00        768.00 MB      768.00 MB [100%]    0.26 [ 26%]
Active Memory Expansion Recommendation:
---------------------------------------
The recommended AME configuration for this workload is to configure
the LPAR with a memory size of 1.25 GB and to configure a memory
expansion factor of 1.20. This will result in a memory gain of 20%.
With this configuration, the estimated CPU usage due to AME is
approximately 0.08 physical processors, and the estimated overall peak
CPU resource required for the LPAR is 0.12 physical processors.

NOTE: amepat's recommendations are based on the workload's utilization
level during the monitored period. If there is a change in the
workload's utilization level or a change in workload itself, amepat
should be run again.

The modeled Active Memory Expansion CPU usage reported by amepat is
just an estimate. The actual CPU usage used for Active Memory
Expansion may be lower or higher depending on the workload.
The recommendations state that this LPAR's memory could be reduced to 1.25 GB
(from 1.5 GB), which would allow you to allocate the extra 0.25 GB to other workloads.
They also recommend setting the expansion factor to 1.20, which would save some
CPU resources.
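If you wanted to apply this recommendation from the HMC command line instead of the
GUI, a sketch (the managed system name sys1 is hypothetical; the partition and profile
names are the ones used in this example, and your HMC level must support the
mem_expansion profile attribute) would be:
hscroot@hmc:~> chsyscfg -r prof -m sys1 -i \
"name=Normal,lpar_name=george1,desired_mem=1280,mem_expansion=1.20"
The partition would pick up the new values the next time it is shut down and activated
with that profile.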
__ 34. List and then terminate all of your background jobs.
The suggested commands and sample output are:
# jobs
[8] + Running ./memory-eater &
[7] - Running ./memory-eater &
[6] Running ./memory-eater &
[5] Running ./memory-eater &
[4] Running ./memory-eater &
[3] Running ./memory-eater &
[2] Running ./memory-eater &
[1] Running ./memory-eater &
# kill %1 %2 %3 %4 %5 %6 %7 %8
#
[8] + Terminated ./memory-eater &
[7] + Terminated ./memory-eater &
[6] + Terminated ./memory-eater &
[5] + Terminated ./memory-eater &
[4] + Terminated ./memory-eater &
[3] + Terminated ./memory-eater &
[2] + Terminated ./memory-eater &
[1] + Terminated ./memory-eater &

__ 35. In your monitoring window, terminate your vmstat execution with CTRL C.
__ 36. Shut down your LPAR with shutdown -F. When the state on the HMC is Not
Activated, activate it with the Normal profile.
End of exercise
Exercise 6. I/O device virtualization performance and tuning
(with hints)
Estimated time
01:00

What this exercise is about
This exercise has three parts.
In part 1, you will investigate the tuning of the virtual SCSI disks by
changing the capacity entitlement of the VIOS partition.
Part 2 provides an opportunity to manage and tune virtual Ethernet
devices and connections.
Part 3 provides an opportunity to monitor and tune the shared
Ethernet adapter device. You will see the impact the shared Ethernet
adapter configuration has on a virtual I/O server's performance.

What you should be able to do
After completing this exercise, you should be able to:
Monitor and tune a virtual SCSI configuration
Use the AIX performance tools to monitor the virtual Ethernet
device
Use the netperf tool to load the virtual Ethernet network, and test
how the MTU size, the network buffers, and the partition capacity
entitlement affect the virtual Ethernet performance
Use the AIX performance tools iostat, netstat, entstat,
lparstat, and mpstat to monitor the VIO server partition
Monitor the shared Ethernet adapter activity and tune the device

Introduction
The purpose of this exercise is to monitor and measure a virtualized
environment.
Requirements
This workbook
A computer with a web browser and a network connection to an
HMC running Version 7.7.4.0 or later, configured to support a
POWER7 processor-based system
A Virtual I/O Server version 2.2.2.1 and a client logical partition
running AIX 7
Utility for running Telnet or SSH

Instructor exercise overview
Lab description scenario:
Part 1 Virtual SCSI: The students will measure and monitor their
assigned virtual I/O server activity when their client logical partition is
generating I/O load on its virtual disks. They will investigate the tuning
of the virtual disk configuration by modifying the CPU entitlement on
the virtual I/O server. They will also investigate the memory
configuration considerations on the virtual I/O server.
To test the workload, the students will use the commands located in
/home/an31/ex6. These are all shell scripts which call the diskio
tool.
Each client logical partition has two virtual SCSI disks backed by an
external LUN. hdisk0 hosts rootvg and hdisk1 will be used to create an
additional volume group, ex6_vg1. In this volume group, three JFS2
file systems will be created: ex6_fs1, ex6_fs2, and ex6_fs3. This
volume group and file systems are created by a script called
ex6setup.sh that is invoked automatically by the VIOload script. The
ex6setup.sh script checks to make sure it is being run on a client
logical partition, and only creates the volume group and file systems if
they do not already exist.
Note: When the capacity entitlement of the virtual I/O server is
dynamically changed, topas gives incorrect values for %Entc, since it
still uses the original CE value in the calculation. You will need to stop
and restart topas to obtain correct values. The students are warned in
the exercise instructions.
Part 2 Virtual Ethernet: The script used for generating a network load
is based on netperf. The first step is to set up the virtual Ethernet
connection between the two logical partitions. Then, using the
tcp_stream.sh script, the students will generate a network load
between the two logical partitions, change the MTU size, and observe
how the virtual Ethernet bandwidth increases. The second test shows
how the throughput scales at different processor capacity entitlement
values. A scalability factor will be used to see if the performance and
throughput scales linearly.
Part 3 Shared Ethernet: In this part, the students will monitor the
shared Ethernet adapter, then tune it by adjusting the capacity
entitlement of the virtual I/O server. The students must use the virtual
I/O server command line interface when checking the device
attributes or displaying the activity and statistics.
The TCP/IP and device tuning are not seen here and are not part of
the exercise objectives. Checking and reconfiguring the real and
virtual devices is also part of a tuning activity, but in this exercise, the
monitoring and tuning options are limited to the CPU entitlement of the
virtual I/O server.
In some cases, the systems assigned to the class might contain
10/100/1000 Ethernet adapters in both the AIX partitions. If these
adapters are connected to a gigabit Ethernet switch, then students will
observe performance numbers greater than those shown in the hints.
Exercise instructions with hints

Preface
All procedures of this exercise depend on the availability of specific equipment. You will
need a computer system connected to the Internet, a web browser, a Telnet program,
and a utility for running SSH. You will also need a managed system with Fibre Channel
adapters. All lab systems need to be accessible to each other on a network.
All hints are marked by a sign.

Part 1: Virtual SCSI
In this section, you will monitor the virtual I/O server while generating a load on the virtual
SCSI disk configuration. You will investigate the tuning of the virtual SCSI disks
configuration by changing the capacity entitlement of the VIOS partition.
You will use your assigned virtual I/O server and client logical partition. To generate a
workload on the virtual SCSI disk configuration, you will use the script VIOload available
in the /home/an31/ex6 directory.

Command   Type           Workload
diskio    C executable   Disk I/O workload
doIO      Shell script   Shell script to invoke the diskio program.
                         Started from the partition.
VIOload   Shell script   Generates I/O load on virtual SCSI disks by
                         invoking the doIO script. Can be started from
                         any client partition.

__ 1. Before starting the exercise, use the HMC GUI to check the VIOS processor
configuration. The desired configuration is:
Assigned Processing units: 0.10
Assigned Virtual Processors: 1
Partition is Capped
Dynamically change the processor configuration if necessary.
To check the configuration, open the VIOS partition properties. Here's an example in
which the partition is configured correctly:
If your VIOS partition needs its processor configuration changed, select the partition
and run the Dynamic Logical Partitioning > Processor > Add or Remove task. Fill
out the attributes as shown below. The Uncapped Weight value will not be used, so the
value doesn't matter. Click OK to execute the change.
__ 2. On your client logical partition, log in as root and use the lspv command to check
your virtual SCSI disk configuration. You should have a free virtual SCSI disk
available.
Command and expected output:
# lspv
hdisk0 000bf8411dc9dee1 rootvg active
hdisk1 000bf8417b8194fd None
__ 3. On your client logical partition, change directory to /home/an31/ex6 and start the
VIOload script with no options. You will need to confirm by entering YES (all capital
letters) when asked to start the load generation. The VIOload script generates I/O
load for about five minutes. Continue with the exercise as soon as it starts running
because you want to monitor this workload.
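If you want the exact keystrokes for this step, a minimal sequence (using the script
location given in the table above) is:
# cd /home/an31/ex6
# ./VIOload
Type YES (all capital letters) when prompted to confirm starting the load generation.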
__ 4. In a session on your assigned VIOS partition, start the topas command. Press the
d key twice to obtain disk summary statistics as shown below.
While the VIOload script is running on the AIX partitions, look at the Physc, %Entc,
and disk total KBPS values displayed by topas. Record these values in Table 4,
VIOS capacity entitlement tuning, on page 8, in the row for capacity entitlement
value 0.1.
Here is an example of topas output:
$ topas
Topas Monitor for host: george5 EVENTS/QUEUES FILE/TTY
Tue Dec 11 18:43:35 2012 Interval: 2 Cswitch 244 Readch 288
Syscall 910 Writech 3330.5K
CPU User% Kern% Wait% Idle% Physc Entc Reads 3 Rawin 0
ALL 74.7 13.4 0.0 11.9 0.10 99.7 Writes 835 Ttyout 210
Forks 0 Igets 0
Network KBPS I-Pack O-Pack KB-In KB-Out Execs 0 Namei 1
Total 0.5 2.0 2.0 0.2 0.4 Runqueue 1.0 Dirblk 0
Waitqueue 0.0
Disk Busy% KBPS TPS KB-Read KB-Writ MEMORY
Total 7.6 46.0K 580.0 46.0K 0.0 PAGING Real,MB 1024
Faults 0 % Comp 96
FileSystem KBPS TPS KB-Read KB-Writ Steals 0 % Noncomp 1
Total 0.2 1.5 0.2 0.0 PgspIn 0 % Client 1
PgspOut 0
Name PID CPU% PgSp Owner PageIn 0 PAGING SPACE
yes 8192090 38.6 0.2 root PageOut 0 Size,MB 1536
yes 8978558 38.6 0.2 root Sios 0 % Used 6
topas 8781970 0.4 1.6 padmin % Free 94
sec_rcv 3342438 0.2 0.4 root NFS (calls/sec)
getty 7209196 0.1 0.6 root SerV2 0 WPAR Activ
sched 196614 0.1 0.4 root CliV2 0 WPAR Total
sshd 6946876 0.0 0.9 padmin SerV3 0 Press: "h"-help
java 9109708 0.0 86.0 root CliV3 0 "q"-quit

In this case, you would record the value 0.10 in the Physc column, 99.7 in the %EntC
column, and 46.0K in the KBPS column of Table 4. Once the KBPS numbers grow
beyond 99999 KBPS, the topas command displays the numbers with a K multiplier, for
example: 62.2K. This means the throughput is about 62200 KBPS.
__ 5. Stop the topas execution in the VIOS partition by pressing the q key.
__ 6. Use the HMC GUI to dynamically change the capacity entitlement of the VIOS
partition to the next value shown in the Capacity Entitlement column of Table 4.
Select your VIOS partition, then select Dynamic Logical Partitioning > Processor >
Add or Remove. A window similar to the following will be displayed.
Enter the appropriate number of processing units value to change the partition to have
the next capacity entitlement value listed in Table 4, and then click the OK button to
implement the change.
__ 7. Make sure that the VIOload script is still running on your logical partition. If it has
stopped, then start it again.
__ 8. On the VIOS partition, start the topas command.
Note: It is important to stop and restart topas after each dynamic change to the
processor configuration. If topas is not restarted, it will show incorrect values for
%EntC.
__ 9. Record the values for Physc, %EntC, and disk total KBPS displayed by topas in the
appropriate fields of Table 4 for the current capacity entitlement value.
__ 10. Repeat the steps from step 5 to step 9 to complete the fields of Table 4 for the other
capacity entitlement values.

Table 4: VIOS capacity entitlement tuning

Capacity entitlement   Physc   %EntC   Overall throughput (KBPS)
0.1
0.12
0.14
0.25
0.50
Here are the values obtained by the authors.
Capacity entitlement   Physc   %EntC   Overall throughput (KBPS)
0.1                    0.06    59.0%   65.5K
0.12                   0.09    73.5%   105.3K
0.14                   0.10    72.4%   133.1K
0.25                   0.20    79.6%   270.0K
0.50                   0.25    49.7%   339.1K
__ 11. Looking at the throughput values, the capacity entitlement, and the CPU
consumption on the virtual I/O server partition, what do you notice?
Notice that when the virtual I/O server is CPU constrained (%EntC over 70%), the
throughput is strongly impacted. If there are perceived performance issues on disks or
storage adapters, be sure to check the CPU utilization on the VIOS partition. Be sure to
monitor processor utilization on the VIOS on a regular basis.
When running with shared processors, the virtual I/O server should be configured as
uncapped. That way, if the capacity entitlement of the partition is undersized, there is
opportunity to get more processor resources (assuming there is some available in the
pool) to service I/O.
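As an alternative to the HMC GUI for the dynamic entitlement changes in this part, the
HMC command line can make the same change. Here is a sketch (the managed system
name sys1 and the VIOS partition name vios1 are hypothetical; substitute your own):
hscroot@hmc:~> chhwres -r proc -m sys1 -o a -p vios1 --procunits 0.02
The -o a operation adds the specified number of processing units to the running
partition; use -o r to remove processing units instead.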
__ 12. The next few steps will investigate the memory configuration considerations on the
virtual I/O server. This test is going to show you the memory consumption when a
virtual SCSI client partition is writing to a file system.
Stop topas on the VIOS partition if it is still running. Wait for the VIOload script
running on your logical partition to finish before continuing with the next step.
__ 13. On the VIOS partition, invoke the following command:
$ vmstat 2
Look at the fre column. This is the amount of free memory.
The following is an example of the output from vmstat on the VIOS partition:
$ vmstat 2

System configuration: lcpu=4 mem=1024MB ent=0.50

kthr memory page faults cpu
----- ----------- ------------------------ ------------ -----------------------
r b avm fre re pi po fr sr cy in sy cs us sy id wa pc ec
1 0 216149 40173 0 0 0 0 0 0 29 94 233 0 0 99 0 0.01 1.7
1 0 216149 40173 0 0 0 0 0 0 14 30 224 0 0 99 0 0.00 0.9
1 0 216149 40173 0 0 0 0 0 0 32 46 247 0 0 99 0 0.00 1.0
1 0 216149 40173 0 0 0 0 0 0 6 30 210 0 0 99 0 0.00 0.7
1 0 216149 40166 0 3 0 0 0 0 26 31 221 0 0 99 0 0.00 0.8
__ 14. If not still logged in, log in to your assigned AIX partition as the root user. Execute
the commands listed below to clear the related file cache.
# umount /ex6_fs2
# mount /ex6_fs2
__ 15. Start the vmstat 2 command in your AIX LPAR.
__ 16. In a second session to your AIX LPAR, run the following dd command. Immediately,
notice what happens in the vmstat output in both the VIOS and the AIX LPAR.
# dd if=/dev/zero of=/ex6_fs2/850M bs=1m count=850
The following demonstrates the expected vmstat result on the client LPAR:
System configuration: lcpu=4 mem=1024MB ent=0.35

kthr memory page faults cpu
----- ---------- ----------------------- ------------ -----------------------
r b avm fre re pi po fr sr cy in sy cs us sy id wa pc ec
1 0 181306 76289 0 0 0 0 0 0 5 32 214 0 1 99 0 0.01 2.4
1 0 181306 76289 0 0 0 0 0 0 7 118 214 0 1 99 0 0.01 2.6
1 0 182357 55397 0 0 0 0 0 0 73 596 414 0 20 80 0 0.10 27.8
1 1 183334 2505 0 0 0 2141 2310 0 312 1672 1066 1 60 36 3 0.29 82.7
3 0 183334 2493 0 0 0 22609 24540 0 161 569 866 0 86 14 0 0.35 100.6
2 0 183122 2619 0 0 0 21676 21675 0 189 581 906 0 83 17 0 0.34 96.8
2 0 183122 2595 0 0 0 22583 24513 0 168 549 833 0 86 14 0 0.35 100.0
0 1 182298 3590 0 0 0 3937 3937 0 74 192 406 0 17 83 0 0.08 21.9
0 0 182426 3300 0 0 0 0 0 0 18 1004 249 1 3 96 1 0.02 7.0
0 0 182426 3300 0 0 0 0 0 0 5 48 197 0 1 99 0 0.01 3.3
The following shows the output of the vmstat command on the VIOS partition while the
dd command was running on the AIX LPAR:
System configuration: lcpu=4 mem=1024MB ent=0.50

kthr memory page faults cpu
----- ---------- ----------------------- ------------ -----------------------
r b avm fre re pi po fr sr cy in sy cs us sy id wa pc ec
3 0 216152 40162 0 0 0 0 0 0 34 93 247 0 0 99 0 0.01 1.1
3 0 216152 40162 0 0 0 0 0 0 30 39 234 0 0 99 0 0.00 0.8
3 0 216152 40162 0 0 0 0 0 0 104 37 228 0 1 99 0 0.01 1.5
3 0 216152 40162 0 0 0 0 0 0 935 35 195 0 5 95 0 0.05 10.2
3 0 216152 40162 0 0 0 0 0 0 702 1326 210 0 4 95 0 0.05 9.2
3 0 216152 40162 0 0 0 0 0 0 723 43 223 0 4 96 0 0.04 8.2
3 0 216152 40162 0 0 0 0 0 0 697 35 213 0 4 96 0 0.04 8.0
3 0 216152 40162 0 0 0 0 0 0 415 30 221 0 2 98 0 0.02 5.0
3 0 216152 40162 0 0 0 0 0 0 49 33 220 0 0 99 0 0.00 0.9
3 0 216152 40162 0 0 0 0 0 0 7 40 205 0 0 99 0 0.00 0.7
3 0 216152 40162 0 0 0 0 0 0 18 167 279 3 0 96 0 0.03 6.1

__ 17. What can you say about the memory requirement of a virtual I/O server partition
serving virtual SCSI disks to client partitions?
Notice that during the dd command, the number of free memory pages decreases on
the client partition. This indicates that the virtual memory cache is used while writing.
However, there is not any memory activity on the virtual I/O server.
There is not any data caching in memory on the server partition. All I/Os which it
services are essentially synchronous disk I/Os. Therefore, the virtual I/O server's
memory requirements should be modest.
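One hedged way to confirm this on the virtual I/O server is to check the percentage of
real memory holding file pages (numperm) before and after the dd test; it should stay
very low on the VIOS:
$ oem_setup_env
# vmstat -v | grep numperm
The oem_setup_env command drops from the padmin restricted shell to a root shell so
that the full set of AIX vmstat options is available.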

Part 2: Virtual Ethernet
In this part, you will create a virtual Ethernet connection between two partitions. You will
then generate a network load between the two partitions through the virtual Ethernet
adapters. Then, using different MTU sizes on the network interface and using different
interface specific network options, you will test how these parameters affect the virtual
Ethernet performance.
In this section, you will be using two logical partitions. To generate a workload in the
partitions, you will use the tcp_stream.sh script available in the directory /home/an31/ex6.

Command         Type           Workload
tcp_stream.sh   Shell script   Bulk data transfer (streaming) load
netperf         Executable     Based on the netperf public domain tool
__ 18. During this section, you should work as a team of two. Each team has two virtual I/O
servers and two logical partitions. Check the processor configuration for both AIX
client partitions. Both logical partitions should be uncapped. Dynamically change
this setting if necessary.
Use the lparstat command in the LPARs to see the current processor configuration. In
this example, the mode is listed as capped, which means it needs to be changed to
uncapped:
# lparstat

System configuration: type=Shared mode=Capped smt=4 lcpu=4 mem=1024MB
psize=4 ent=0.35

%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
0.0 0.0 3.0 97.0 0.00 0.0 5.9 3.27 1590588 16622
To change the LPARs to the uncapped mode, select the partition and run the Dynamic
Logical Partitioning > Processors > Add or Remove task.
In the pop-up window, check the Uncapped checkbox and enter 128 for the weight
value (if it's not already there). Here is an example of this window:

Be sure to change both AIX partitions.
__ 19. Use the HMC GUI to perform a dynamic logical partitioning operation to add a virtual
Ethernet adapter on each of the AIX partitions. Specify 88 as the virtual adapter ID,
and 88 as the Port Virtual Ethernet ID for the adapter. Do not select the Access
external network or IEEE 802.1Q compatible adapter options.
To dynamically add a virtual Ethernet adapter to your partitions, perform the following
actions.
In the HMC GUI Server Management application, select your logical partition, then
select Dynamic Logical Partitioning > Virtual Adapters from the pop-up menu.
Use the Actions menu to create a virtual Ethernet adapter as shown below:

The following window appears:

Enter 88 as the Adapter ID value, and enter 88 in the Port Virtual Ethernet value. Do
not select the Access external network or IEEE 802.1Q compatible adapter options.
Keep the VSwitch set to ETHERNET0(Default). After entering the adapter ID and port
virtual Ethernet ID value, click OK. This will return you to the previous window. Click OK
to add the adapter to the partition.
Be sure to add a virtual Ethernet adapter to both of the AIX partitions.
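The same adapter can also be added from the HMC command line; a sketch (the
managed system name sys1 is hypothetical; substitute your own partition name):
hscroot@hmc:~> chhwres -r virtualio --rsubtype eth -m sys1 -o a \
-p <lpar_name> -s 88 -a "port_vlan_id=88,ieee_virtual_eth=0"
The -s flag sets the virtual slot (adapter) ID, and ieee_virtual_eth=0 leaves the IEEE
802.1Q option disabled, matching the GUI settings described above.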
__ 20. Run the cfgmgr command in each partition to detect the newly added virtual
Ethernet device.
__ 21. Verify the additional virtual Ethernet adapter is marked as Available in each partition.
Perform the following command sequence to check the existence of the newly added
virtual Ethernet adapter:
# lsdev -c adapter
ent0 Available Virtual I/O Ethernet Adapter (l-lan)
ent1 Available Virtual I/O Ethernet Adapter (l-lan)
vsa0 Available LPAR Virtual Serial Adapter
vscsi0 Available Virtual SCSI Client Adapter
You should see ent1 marked as Available.
__ 22. Configure the newly added interfaces using smitty chinet. Use subnet mask
255.255.255.0. The name of the interface is based on the name of the adapter
instance. For example, if the virtual Ethernet adapter is ent1, then the interface to
use is en1.
If you followed the instructions in previous lab exercises, you should be using en1
for both logical partitions.
Use the following IP addresses depending on your team number.

Table 5: IP addresses

Team number   IP@ first logical partition   IP@ second logical partition
Team1         10.10.10.1                    10.10.10.2
Team2         10.10.20.1                    10.10.20.2
Team3         10.10.30.1                    10.10.30.2
Team4         10.10.40.1                    10.10.40.2
Team5         10.10.50.1                    10.10.50.2
Team6         10.10.60.1                    10.10.60.2

For the other ISNO network parameters, just use the default values (that is, leave
the fields blank).
Change the current STATE of the adapter to up in the SMIT panel.
Invoke smitty chinet, and then select the desired interface from the list presented.
The following SMIT panel will be shown:
Change / Show a Standard Ethernet Interface

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[Entry Fields]
Network Interface Name en1
INTERNET ADDRESS (dotted decimal) []
Network MASK (hexadecimal or dotted decimal) []
Current STATE up +
Use Address Resolution Protocol (ARP)? yes +
BROADCAST ADDRESS (dotted decimal) []
Interface Specific Network Options
('NULL' will unset the option)
rfc1323 []
tcp_mssdflt []
tcp_nodelay []
tcp_recvspace []
tcp_sendspace []
Apply change to DATABASE only no +

F1=Help F2=Refresh F3=Cancel F4=List
F5=Reset F6=Command F7=Edit F8=Image
F9=Shell F10=Exit Enter=Do
For example, if you are part of team1, enter the address 10.10.10.1 for the first logical
partition, and 10.10.10.2 for the second. Change the value of the Current STATE field
to up, and then press the Enter key to configure the interface.
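If you prefer the command line to SMIT, the same interface configuration can be done
with chdev (a sketch for the first partition of team 1, assuming en1 is the new virtual
Ethernet interface):
# chdev -l en1 -a netaddr=10.10.10.1 -a netmask=255.255.255.0 -a state=up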
__ 23. From each partition, ping the address of the other partition to verify the virtual
Ethernet connection is working.
__ 24. Simultaneous multi-threading should be enabled on both client logical partitions.
Enable simultaneous multi-threading if it is currently disabled.
Use the smtctl command with no arguments to check the status of simultaneous
multi-threading. Verify the processor has four SMT threads. Use the following command to
enable simultaneous multi-threading if required:
# smtctl -m on

Creating a network load
The rest of this exercise depends on having a network load. You will use the netperf
benchmark tool to generate this load. The netperf tool depends on the availability of the I/O
Completion Ports pseudo device.
__ 25. Check the status of the I/O Completion Ports pseudo device on both logical
partitions with the following command:
# lsdev -l iocp0
iocp0 Defined I/O Completion Ports
If the device is marked as Defined (rather than Available) as shown above, activate
the device permanently with the following command sequence:
# chdev -l iocp0 -a autoconfig=available
# mkdev -l iocp0
__ 26. Start the netperf server program on both logical partitions, by issuing the following
command on each partition:
# /home/an31/ex6/start_netserver
To determine if the netperf server is running, use the netstat command to verify
that the default port 12865 has LISTEN for its status.
Example command and expected output:
# netstat -an | grep 12865
tcp4 0 0 *.12865 *.* LISTEN
__ 27. On each partition, use the no command to verify the use_isno network attribute is
enabled. Enable it with the no command if necessary. The use_isno attribute is a
restricted parameter.
Example command and expected output:
# no -a -F | grep use_isno
If use_isno is set to a value of 0, use the following command to enable it:
# no -o -F use_isno=1

TCP_STREAM/bandwidth testing a virtual Ethernet
__ 28. The virtual Ethernet adapter on both partitions must have the MTU value set to
1500. Change the MTU for the adapters on both partitions, if necessary.
On both partitions, identify the virtual interface. They should be configured with either
10.10.X0.1 or 10.10.X0.2 as the IP address. (X is your team number)
# ifconfig -a
Next, check the MTU size. Use enX where X represents the number of the actual virtual
device.
# lsattr -El enX -a mtu
If necessary, set the MTU to 1500:
# chdev -l enX -a mtu=1500
__ 29. On the first logical partition, use the ifconfig command to determine the values of
tcp_sendspace and tcp_recvspace for the interface associated with the virtual
Ethernet adapter. For example, in the output below, tcp_sendspace is 262144 and
tcp_recvspace is 262144.
# ifconfig en1
en1: flags=1e080863,1<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE)>
        inet 10.10.X0.1 netmask 0xff000000 broadcast 10.255.255.255
        tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

Note

The virtual Ethernet devices have predefined Interface Specific Network Options (ISNO)
attributes that are automatically set based on the MTU size. So, it is not necessary to
explicitly specify the tcp_sendspace and tcp_recvspace attributes for the virtual Ethernet
devices.

__ 30. Record the values for tcp_sendspace and tcp_recvspace from your logical
partition in the tcp_sendspace (local) and tcp_recvspace (local) fields for the
MTU size 1500 column of Table 6, Virtual Ethernet throughput, on page 19.
__ 31. On your second logical partition, use the ifconfig command to determine the
values of tcp_sendspace and tcp_recvspace for the interface associated with the
virtual Ethernet adapter. Record the values for tcp_sendspace and tcp_recvspace
in the tcp_sendspace (remote) and tcp_recvspace (remote) fields for the
MTU size 1500 column of Table 6.
__ 32. Open a new terminal session to your first partition and run the netstat -r 2
command. Leave it running until you're instructed to stop it. You'll be monitoring the
total number of packets column.
Example command and expected output:
# netstat -r 2
input (en0) output input (Total) output
packets errs packets errs colls packets errs packets errs colls
26446 0 5398 0 0 29897 0 8854 0 0
13 0 2 0 0 13 0 2 0 0
__ 33. On a different session to your first logical partition, change directory to
/home/an31/ex6.
__ 34. The next part of the exercise uses the tcp_stream.sh script located in the
/home/an31/ex6 directory. This script needs four arguments as follows:
# ./tcp_stream.sh <remote host IP> <msg size> <mode> <duration>
The mode can be specified as simplex or duplex, and the duration is in seconds. On
your first logical partition, start the tcp_stream.sh network load generator using
10.10.X0.2 as the remote host IP address (X in the IP address is your team number;
this is the address of your second logical partition on the virtual Ethernet.). Use
1000000 (1 Megabyte) as the message size, simplex as the mode, and 20 seconds
for the duration.
# ./tcp_stream.sh 10.10.X0.2 1000000 simplex 20
(Replace X in the IP address with your team number.)
__ 35. Record the megabits/second (listed as 10^6bits/s) throughput result given by the
tcp_stream.sh script in the MTU 1500 column in Table 6. Below is example output
from the tcp_stream.sh script. The network interface maximum throughput is listed
at the end of the output in megabits per second (10^6bits/s) and kilobytes per
second (KBytes/s). In this example, you would log 1151 in the Simplex mode
throughput (Megabits/s) row of the MTU 1500 column of Table 6. Be sure to do both
the Simplex test and the Duplex test before proceeding to the next step.
# ./tcp_stream.sh 10.10.X0.2 1000000 simplex 20
TCP STREAM TEST: 10.10.X0.2:4
(+/-5.0% with 99% confidence) - Version: 5.4.0.1 Nov 18 2004 14:18:06
Recv Send Send ---------------------
Socket Socket Message Elapsed Throughput
Size Size Size Time (iter) ---------------------
bytes bytes bytes secs. 10^6bits/s KBytes/s

262088 262088 1000000 20.60(03) 1151.91 141200.09
__ 36. Repeat the test with the different MTU sizes specified in Table 6 below, running each
test once in simplex mode and then once in duplex mode. Fill in the table with the
throughput numbers that you find, and with the values of the tcp_sendspace and
tcp_recvspace network parameters for the local partition and for the remote
partition. Remember to check the values of tcp_sendspace and tcp_recvspace
after each time you set the MTU size.
Change the MTU size on both partitions by issuing the following command on each
partition. Replace X with the appropriate number for the virtual Ethernet interface. This
example shows setting an MTU size of 9000.
# chdev -l enX -a mtu=9000
Check the ISNO network attributes for the virtual Ethernet interface.
# ifconfig enX
__ 37. For the different MTU sizes, check the total number of packets in the netstat output
when the tcp_stream.sh test is running. The number of packets should decrease
as the MTU size is increased.
Table 6: Virtual Ethernet throughput

                          MTU size
                          1500     9000     32768    65390
tcp_sendspace (local)
tcp_sendspace (remote)
tcp_recvspace (local)
tcp_recvspace (remote)
Simplex mode throughput
(Megabits/s)
Duplex mode throughput
(Megabits/s)
Here are the throughput results measured by the authors. You can compare these
results with the results you measure.
                          MTU size
                          1500     9000     32768    65390
tcp_sendspace (local)     262144   262144   262144   262144
tcp_sendspace (remote)    262144   262144   262144   262144
tcp_recvspace (local)     262144   262144   262144   262144
tcp_recvspace (remote)    262144   262144   262144   262144
Simplex mode throughput
(Megabits/s)              1151     3723     5075     6056
Duplex mode throughput
(Megabits/s)              1540     5755     9700     12133
__ 38. What conclusions can you draw from that test?
You should have observed the virtual Ethernet throughput increased when increasing
the MTU size. Choose a large MTU of 65390 or 32768 if you expect large amounts of
data to be transferred inside your virtual Ethernet network.
With a large MTU such as 32768 or 65390, a bulk transfer over the virtual Ethernet
needs fewer packets to move the data. Fewer packets mean fewer trips up and down
the protocol stack, which means less CPU consumption.
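As a rough illustration of the packet counts involved: with an MTU of 1500, the TCP
maximum segment size is about 1460 bytes (the MTU minus roughly 40 bytes of IP and
TCP headers), so a 1000000-byte message needs about 1000000 / 1460 ≈ 685
packets. With an MTU of 65390, the same message fits in about 1000000 / 65350 ≈ 16
packets.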
__ 39. Stop the netstat execution.

Bandwidth scaling with different processor entitlements
In this part of the exercise, you will test the virtual Ethernet throughput with different
processor capacity entitlements. Only two MTU sizes will be used, 1500 and 9000. The
sharing mode of your first logical partition needs to be changed to capped, and the test will
be repeated at different capacity entitlement values.
__ 40. Using the HMC GUI, dynamically change the desired capacity entitlement of your
first logical partition (this is the partition that will be used to generate the network
load) to 0.2, and change the sharing mode from uncapped to capped. Do not
change the number of virtual processors (leave it at 1).
Using the HMC GUI, select your logical partition, then select Dynamic Logical
Partitioning > Processor Resources > Add or Remove. A window similar to the
following will appear:
Uncheck the Uncapped checkbox, and enter 0.2 in the Assigned Processing units
field, then click OK to make the changes.
__ 41. Verify the changes to the configuration of your logical partition by invoking the
lparstat command with no arguments.
Verify the output of lparstat shows that the partition is capped with an entitlement of
0.2 processing units.
# lparstat
System configuration: type=Shared mode=Capped smt=4 lcpu=4 mem=1024MB
psize=4 ent=0.20

%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
0.0 0.0 1.3 98.7 0.00 0.0 2.8 3.27 2176660 16712
__ 42. In the next part of the exercise, you will alter the MTU size of the original virtual
Ethernet interface that is connected to the outside network. You will do this in both of
your team's AIX LPARs. Use the ifconfig -a command if you need to look up which
interface is on the external network. Change the MTU size of the virtual Ethernet
interface on both logical partitions back to 1500.
Run the following command on both partitions to change the MTU size.
# chdev -l enX -a mtu=1500
__ 43. From your first logical partition, start the network load generator script
tcp_stream.sh located in the /home/an31/ex6 directory. This script requires four
arguments. You must specify the remote host's IP address for its virtual Ethernet
adapter, the message size, the mode (simplex or duplex), and the test duration. In
this sequence of tests, we will only be using the duplex mode.
Use 10000 (10 KBytes) as the message size, 10 seconds for the duration, and
perform the test in duplex mode.
# cd /home/an31/ex6
# ./tcp_stream.sh 10.10.X0.2 10000 duplex 10
(Replace X in the IP address with your team number.)
__ 44. Record the megabits per second throughput value (10^6bits/s) in Table 7 below, in
the entry for Capacity entitlement 0.2 and MTU size 1500.
__ 45. Change the MTU size of the virtual Ethernet interface on both logical partitions to
9000.
Change the MTU on both partitions.
# chdev -l enX -a mtu=9000
__ 46. Restart the network load generator script tcp_stream.sh located in the
/home/an31/ex6 directory. Use 10000 (10 KBytes) as the message size, 10 seconds
for the duration, and perform the test in duplex mode. Record the megabits per
second throughput value (10^6bits/s) in Table 7 below, in the entry for Capacity
entitlement 0.2 and MTU size 9000.
# ./tcp_stream.sh 10.10.X0.2 10000 duplex 10
(Replace X in the IP address with your team number.)
__ 47. Change the MTU size of the virtual Ethernet interface on both logical partitions back
to 1500.
Change the MTU on both partitions.
# chdev -l enX -a mtu=1500
__ 48. Using the HMC GUI, dynamically change the desired capacity entitlement of your
partition to 0.4 processing units.
Using the HMC GUI, select your logical partition, then select Dynamic Logical
Partitioning > Processor Resources > Add or Remove. Enter 0.4 in the Assigned
Processing units field; then click OK to make the changes.
__ 49. Repeat the tasks from step 40 to step 47 for the different capacity entitlement values
listed in Table 7 below.
__ 50. Once you have recorded the megabits/s throughput results in the table, calculate the
scalability factor for each column by dividing the throughput in megabits/s by the CE
value.
Table 7: Duplex throughput

MTU size                          1500                     9000
LPAR Capacity Entitlement (CE)    0.2   0.4   0.6   0.8    0.2   0.4   0.6   0.8
Throughput (Megabits/s)
Throughput / CE

Here are example throughput results with different capacity entitlement values.

MTU size                          1500                     9000
LPAR Capacity Entitlement (CE)    0.2   0.4   0.6   0.8    0.2   0.4   0.6   0.8
Throughput (Megabits/s)           413   893   1262  1588   1253  2377  3272  4759
Throughput / CE                   2065  2232  2103  1985   6265  5942  5453  5948
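For example, using the authors' MTU 1500 numbers: 413 / 0.2 ≈ 2065 and
1588 / 0.8 ≈ 1985, so the throughput per processing unit stays roughly constant across
the entitlement values.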

__ 51. What do you notice about the scalability value for each MTU size as the capacity
entitlement of the partition is increased?
For each MTU size, the scalability value is almost the same for each different CE value.
This indicates that the virtual Ethernet throughput scales linearly with processor
capacity entitlement, so there is no need to specifically dedicate processors to partitions
for performance. The throughput performance is dependent on the processor
entitlement of the partitions.

Part 3: Shared Ethernet adapter
In this section, you will monitor and check statistics associated with the shared Ethernet
adapter. You will reconfigure one of your team's logical partitions to use a physical Ethernet
adapter (HEA). Then you will create a network load between this logical partition and the
other logical partition that is still configured to use a virtual Ethernet adapter. The VIOS
partition's shared Ethernet adapter will allow the partition using the virtual Ethernet adapter
to connect to the outside network. You will monitor the load using tools available to AIX and
the virtual I/O server command line interfaces.
To perform this section, you must work as a team of two.
You will be using three partitions: One virtual I/O server partition, one partition with a logical
host Ethernet adapter, and a third partition using a virtual Ethernet adapter to connect to
the physical network through the shared Ethernet adapter function of the virtual I/O server
partition. To generate a workload in the partitions, you will use the scripts ftpload.sh and
tcp_rr.sh available in the directory /home/an31/ex6.

Command      Type           Workload
ftpload.sh   Shell script   ftp 500 MB to /dev/null on the remote host
tcp_rr.sh    Shell script   Request/response load
netperf      Executable     Based on the netperf public domain tool

__ 52. Identify which VIOS partition on your assigned server is the primary SEA. There
should be a failover configuration between the vios1 and the vios2 partitions. There
should be another failover configuration between the vios3 and the vios4 partitions.
For your teams set of VIOS partitions, you need to find the one configured as the
primary SEA. Use the entstat -all ent4 | grep -i active command on both
of your assigned VIOS partitions. This example output shows the primary SEA:
Priority 1 Active: True
If the output is the following, this is the secondary SEA in the failover configuration.
Priority 2 Active: False
Document the name of the VIOS partition which has the primary SEA configuration:
___________________
__ 53. On the VIOS that you documented above, verify the MTU size of the virtual Ethernet
interface and the physical Ethernet adapter configured in the SEA are set to 1500.
Use the lsmap -net -all command to see the names of the Ethernet adapters.
On the VIOS partition, check the MTU size using the lsdev command.
$ lsdev -dev en0 -attr mtu
value

1500
On the VIOS partition, if necessary, set the MTU size:
$ chdev -dev en0 -attr mtu=1500
__ 54. Use the first LPAR (based on its number) assigned to your team for the AIX LPAR
that will use a virtual Ethernet adapter. In that partition, make sure that the interface
connected to the external network is using an MTU size of 1500.
On the logical partition, check the MTU size using the lsattr or netstat commands.
# lsattr -El en1 -a mtu
mtu 1500 Maximum IP Packet Size for This Device True
# netstat -i
Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll
en1 1500 link#2 0.9.6b.6b.d.f6 10802 0 1779 3 0
en1 1500 9.47.88 max92 10802 0 1779 3 0
lo0 16896 link#1 889 0 892 0 0
lo0 16896 127 loopback 889 0 892 0 0
lo0 16896 localhost 889 0 892 0 0
On the client logical partition, if necessary, set the MTU size:
# chdev -l enX -a mtu=1500
__ 55. In the following steps, you will add a logical Host Ethernet adapter to your team's
second logical partition. You will also assign the IP address of the logical partition to
the logical host Ethernet adapter port interface.
__ a. On your second logical partition, perform a dynamic operation to add a logical
host Ethernet adapter port. The first team on each managed system should use
physical port ID 0. The second team should use physical port ID 1.
To get to the above screen, run the Dynamic Logical Partitioning > Host Ethernet >
Add command.
__ b. Select an available logical port. For team 1, use logical port 1. For team 2, use
logical port 2. The ports that do not have LPAR names after them are available.

__ c. Open a virtual terminal on your logical partition, then run the cfgmgr command.
Check for a logical host Ethernet port entX Available.
Here is the output of the command:
# lsdev -Cc adapter | grep hea
ent1 Available Logical Host Ethernet Port (lp-hea)
lhea0 Available Logical Host Ethernet Adapter (l-hea)
__ 56. Record the hostname, IP address, netmask, and gateway of the interface configured
for the external network in your second logical partition. You can use a combination
of hostname, netstat -rn, and ifconfig -a to get this information.
Hostname: ___________________________________
IP Address: ___________________________________
Netmask: _____________________________________
Gateway Address: _____________________________
__ 57. Detach the current IP configuration using the chdev -l en0 -a state=detach
command. The IP address should be configured on the first virtual Ethernet adapter
at this time.
__ 58. Assign the same IP configuration that you recorded to the new logical host Ethernet
port.
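One way to do this from the command line is with mktcpip (a sketch; substitute the
values you recorded, and use the interface name of the new logical host Ethernet port):
# mktcpip -h <hostname> -a <IP address> -m <netmask> -i enX -g <gateway>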
__ 59. Try to ping the default gateway to check the HEA configuration.
__ 60. In the following steps, you will start a network load and monitor the statistics of the
VIOS partition's shared Ethernet adapter.
__ 61. Log in to the partition with the HEA port configured. As the root user, check the
status of the I/O Completion Ports pseudo device with the following command:
# lsdev -l iocp0
If the device is marked as Defined, then activate the device permanently with the
following command sequence:
# chdev -l iocp0 -a autoconfig=available
# mkdev -l iocp0
__ 62. As the root user, verify the netperf server is running. To determine if the netperf
server is running, use the netstat command to verify that the default port 12865
has LISTEN for its status as shown below.
# netstat -an | grep 12865
tcp4 0 0 *.12865 *.* LISTEN
If netperf is not running, start it by issuing the following command:
# /home/an31/ex6/start_netserver
__ 63. Repeat the previous two steps on your first LPAR with the virtual Ethernet adapter
configured.
__ 64. As the root user on your second partition (the one with the HEA port configured),
use the tcp_rr.sh script located in the directory /home/an31/ex6 to generate a
network load to the other logical partition.
Syntax of the tcp_rr.sh script:
# ./tcp_rr.sh <remote host IP> <sessions> <duration>
The tcp_rr.sh script requires three arguments: the IP address of the remote
partition, the number of sessions, and the duration of the test. Start the command
with 10 sessions and a duration of 300.
Here is an example command. Substitute 9.47.88.153 with the actual IP address of
your remote logical partition:
# ./tcp_rr.sh 9.47.88.153 10 300
__ 65. On your logical partition configured with the virtual Ethernet adapter, use the
netstat command to list the network packets going through the virtual Ethernet
adapter interface. Remember the first line of the netstat report is the number of
packets since system boot.
You should see some input and output packets, since the shared Ethernet adapter
on the VIOS partition is forwarding packets being generated by the tcp_rr.sh script
running on the other partition.
EXempty There are a few ways of using the netstat command to see packet statistics. Here is
one example, followed by a sample output. Substitute en0 with the device name for the
virtual Ethernet adapter on your logical partition.
# lsdev -Cc adapter | grep ^e
ent0 Available Virtual I/O Ethernet Adapter (l-lan)
# netstat -I en0 2
input (en0) output input (Total) output
packets errs packets errs colls packets errs packets errs colls
1262778 0 697553 0 0 1273553 0 708365 0 0
51701 0 51693 0 0 51701 0 51693 0 0
51692 0 51668 0 0 51693 0 51669 0 0
51759 0 51735 0 0 51759 0 51735 0 0
51633 0 51618 0 0 51633 0 51618 0 0
51705 0 51682 0 0 51705 0 51682 0 0
51593 0 51594 0 0 51593 0 51594 0 0
__ 66. From the padmin CLI on the VIOS partition, use the netstat command to list the
packet flow going through the shared Ethernet adapter device. Do you see any IP
packets? Why or why not?
$ netstat -stats 2
Here is the output of the command netstat -stats 2:
$ netstat -stats 2
input (en3) output input (Total) output
packets errs packets errs colls packets errs packets errs colls
133562 0 5218 0 0 136478 0 8144 0 0
4 0 0 0 0 4 0 0 0 0
5 0 0 0 0 5 0 0 0 0
6 0 0 0 0 6 0 0 0 0
4 0 0 0 0 4 0 0 0 0
16 0 0 0 0 16 0 0 0 0
4 0 1 0 0 4 0 1 0 0
Notice the statistics on the left side are related to the interface en3. The statistics on the
right side of the output are the total statistics on the partition. There might be a small
number of packets received and transmitted; however, they are likely the packets used
to transmit output to the terminal you have used to log in (assuming you have not used
the console window from the HMC GUI application).
You might have expected the total number of packets received and transmitted
(displayed on the right side of the output) to be at least the same as the statistics being
shown on your logical partition (around 52000 packets). However, remember that the
shared Ethernet adapter is a layer 2 bridge device. It is functioning at the Ethernet
frame level, not at the IP packet level, which is where netstat reports statistics. The
way to see the number of packets going through the shared Ethernet adapter is to use
the entstat entX command on the shared Ethernet adapter logical device name.
__ 67. On the VIOS partition, try to list the packet flow going through the physical and the
virtual adapters associated with the shared Ethernet adapter, using the netstat or
entstat command. What are the results?
$ entstat entX
$ netstat -stats 2
Use lsdev to list all of the Ethernet adapters on the VIOS partition.
$ lsdev | grep ent
ent0 Available 10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent1 Available Virtual I/O Ethernet Adapter (l-lan)
ent2 Available Virtual I/O Ethernet Adapter (l-lan)
ent3 Available Shared Ethernet Adapter
If you are following along with the examples in this exercise, the real physical adapter is
ent0 and the associated virtual adapter is ent2. You can see this on your system from
the output of the lsdev -dev ent3 -attr command. The ent2 adapter is virtual, and it
is configured for access to the VIOS partition.
It is not possible using entstat or netstat to directly display the statistics of the virtual
and physical devices associated with the shared Ethernet adapter. Here is an example
of the error messages when trying entstat:
$ entstat -all ent2
entstat: 0909-003 Unable to connect to device ent2, errno = 19
$ entstat -all ent0
entstat: 0909-003 Unable to connect to device ent0, errno = 19
__ 68. A way to see the shared Ethernet adapter statistics is to list the Ethernet device
driver and devices statistics of the shared Ethernet adapter itself. Examine the
statistics for the adapters in the Shared Ethernet Adapter section of the output.
Execute this command multiple times to see the statistics updating.
From the padmin CLI on the VIOS partition:
$ entstat -all entX | more
Replace entX with the name of the shared Ethernet adapter device. The output of
the command can be quite large. If you only want to view the real and virtual side
statistics for packet counts, try passing the output to grep for the paragraph titled
Statistics for adapters in the shared Ethernet adapter entX.
For example:
$ entstat -all entX | grep -p "SEA"
Here is an output example of the command:
$ entstat -all ent3 | grep -p SEA
--------------------------------------------------------------
Statistics for adapters in the Shared Ethernet Adapter ent3
--------------------------------------------------------------
Number of adapters: 2
SEA Flags: 00000001
<THREAD >
VLAN Ids :
ent1: 0 1
Real Side Statistics:
Packets received: 38039595
Packets bridged: 38039595
Packets consumed: 2
Packets fragmented: 0
Packets transmitted: 37448355
Packets dropped: 2
Virtual Side Statistics:
Packets received: 37448355
Packets bridged: 37448355
Packets consumed: 0
Packets fragmented: 0
Packets transmitted: 38039593
Packets dropped: 0
Other Statistics:
Output packets generated: 0
Output packets dropped: 0
Device output failures: 0
Memory allocation failures: 0
ICMP error packets sent: 0
Non IP packets larger than MTU: 0
Thread queue overflow packets: 0
--------------------------------------------------------------
Real Adapter: ent0
__ 69. Another way to check the statistics of the shared Ethernet adapter is using the
seastat command. The seastat command generates a per client view of the
shared Ethernet adapter statistics. To gather network statistics at this level of detail,
advanced accounting should be enabled on the shared Ethernet adapter to provide
additional information about its network traffic.
__ a. Use the chdev command to enable the advanced accounting on the SEA.
$ chdev -dev entX -attr accounting=enabled
Replace entX with your shared Ethernet adapter device name.
__ b. Invoke the seastat command to display the shared Ethernet adapter statistics
per client logical partition. Check that the transmit and receive packet counts
are increasing.
$ seastat -d entX
Here is an output example of the command:
$ seastat -d ent3
===============================================================
Advanced Statistics for SEA
Device Name: ent3
================================================================
MAC: 56:ED:E7:C9:03:0B
----------------------
VLAN: None
VLAN Priority: None
IP: 10.6.112.42

Transmit Statistics:                Receive Statistics:
--------------------                -------------------
Packets: 8199931                    Packets: 8199380
Bytes: 483824638                    Bytes: 491974792
__ 70. Stop any monitoring commands that are running in any of your partitions. Stop the
tcp_rr.sh script with Ctrl-C.
Tuning the shared Ethernet adapter
There are two things to consider when tuning a configuration that uses a shared Ethernet
adapter.
The first relates to the configuration of the TCP/IP attributes on the partitions that are using
the shared Ethernet adapter. These attributes might be system wide on the partition, or
they might be ISNO settings for the particular virtual Ethernet interface. The normal TCP/IP
tuning must be performed to maximize the throughput available in a virtual Ethernet or
shared Ethernet adapter configuration, and this is beyond the scope of this course.
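Although that tuning is beyond the scope of this course, here is a minimal
illustrative sketch of interface-specific (ISNO) tuning; the interface name and
buffer sizes are example values only, not recommendations:
# no -o use_isno
use_isno = 1
# chdev -l en0 -a tcp_sendspace=262144 -a tcp_recvspace=262144
en0 changed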
The second thing to consider is the shared Ethernet adapter device itself. For the shared
Ethernet adapter, the only thing that can really be tuned is the CPU capacity entitlement of
the virtual I/O server partition. If the virtual I/O server partition is CPU constrained, then the
throughput of the shared Ethernet adapter might be reduced. This is what you will explore
in this part of the exercise.
You must work as a team to perform this part. You will be using your first logical partition
with the virtual Ethernet adapter, your second logical partition with the HEA port configured,
and the virtual I/O server bridging the virtual Ethernet adapter to the external network.
__ 71. Use the HMC GUI to dynamically configure both of your AIX logical partitions as
uncapped with an uncapped weight value of 128. You can leave the assigned
processing units at the current setting.
Select a partition and run the Dynamic Logical Partitioning > Processor > Add or
Remove task. Make sure the uncapped checkbox is checked and the weight value is
128. Repeat for the other AIX partition.
__ 72. Use the HMC GUI to check that your VIOS partition has the following CPU
configuration:
Assigned Processing units: 0.10
Assigned Virtual Processors: 1
Partition is Capped
If needed, dynamically change your VIOS partition configuration using the HMC
GUI.
Using the HMC GUI, from the Navigation area on the left, select your server, then your
assigned VIOS partition. From the Tasks menu, select Dynamic Logical Partitioning
> Processor > Add or Remove.
Reconfigure the partition, and then click OK to implement the change.
__ 73. On the VIOS partition that you documented previously with the primary SEA
configuration, monitor the CPU consumed (physc value) using the viostat
command.
$ viostat -tty 2
__ 74. As the root user on your second AIX partition (the partition with the HEA port
configured), use the ftpload.sh script located in the /home/an31/ex6 directory to
generate a network load between your logical partitions.
Syntax of the ftpload.sh script:
# ./ftpload.sh <remote host IP> <user> <password>
Here is an example command. Substitute abcxyz with the actual password for the
root account, and substitute 9.47.88.153 with the actual IP address of your second
logical partition:
# ./ftpload.sh 9.47.88.153 root abcxyz
Here is an example of the output from the script when the CE of the VIOS partition is
0.10:
# ./ftpload.sh 9.47.88.153 root abcxyz
Verbose mode on.
200 PORT command successful.
150 Opening data connection for /dev/null.
300+0 records in.
300+0 records out.
226 Transfer complete.
314572800 bytes sent in 47.82 seconds (6424 Kbytes/s)
local: | dd if=/dev/zero bs=1m count=300 remote: /dev/null
221 Goodbye.
__ 75. Convert the throughput value reported by the ftpload.sh script from Kbytes/s into
Megabits/s, and record the value in the ftp throughput column for Capacity
Entitlement value 0.1 in Table 8, Throughput scalability, on page 33.
To convert from Kbytes/s to Megabits/s, multiply the value by 8, and then divide by
1000.
If the throughput value reported by the ftpload.sh script is in scientific e-notation,
convert this to Kbytes/s and then convert to Megabits/s. For example, if the result
shows 1.216e+04 KB, then multiply the base number (1.216) by 10000 to get 12160
Kbytes/s. Convert this into 97.28 Megabits/s (97.28 Mb/s).
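For example, using the 6424 Kbytes/s value from the sample ftpload.sh output
above, you can check the conversion with bc:
# echo "6424 * 8 / 1000" | bc -l
51.39200000000000000000
This is approximately 51.4 Mb/s.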
__ 76. Monitor the %entc value from the viostat output on the VIOS partition while the
ftpload.sh script is running on your logical partition. Record the result in the %entc
column of the row for Capacity Entitlement value 0.1 in Table 8, Throughput
scalability, on page 33.
Here is an example of the output when the CE is 0.10:
$ viostat -tty 2
System configuration: lcpu=2 ent=0.10

tty: tin tout avg-cpu: % user % sys % idle % iowait physc % entc
0.0 41.0 0.0 1.0 99.0 0.0 0.0 2.8
0.0 41.0 0.0 0.9 99.0 0.0 0.0 2.7
0.0 41.0 0.0 0.9 99.1 0.0 0.0 2.7
0.0 41.0 0.0 1.1 98.9 0.0 0.0 3.0
0.0 39.7 0.0 30.0 70.0 0.0 0.1 53.2
0.0 40.4 0.0 45.2 54.8 0.0 0.1 79.0
0.0 40.6 0.0 45.3 54.7 0.0 0.1 80.2
0.0 40.5 0.1 45.6 54.2 0.0 0.1 79.6
0.0 39.8 0.0 45.2 54.8 0.0 0.1 79.4
0.0 39.4 0.0 45.5 54.5 0.0 0.1 79.8
0.0 39.6 0.0 44.9 55.1 0.0 0.1 78.5
0.0 40.6 0.0 44.7 55.2 0.0 0.1 79.5
0.0 39.0 0.0 44.8 55.2 0.0 0.1 78.8
0.0 41.3 0.0 44.7 55.3 0.0 0.1 78.8
0.0 38.8 0.0 45.3 54.6 0.0 0.1 79.6
In the above viostat output, notice the physc value is equal to the entitled capacity of
the VIOS partition (0.1), yet %entc reports that only 80% of the entitled capacity is being
used. This is because the physc value is rounded up. If you used the lparstat
command, you would see physc is 0.08 as shown below:
# lparstat 2
System configuration: type=Shared mode=Capped smt=On lcpu=2 mem=512
psize=2 ent=0.10
%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ---- ----- ----- ----- ----- ------ --- ---- -----
0.0 45.7 0.0 54.2 0.08 79.4 64.1 1.80 4728 87
0.0 45.5 0.0 54.5 0.08 79.2 65.4 1.79 4692 79
0.0 45.7 0.0 54.3 0.08 79.4 61.5 1.79 4622 90
0.0 45.9 0.0 54.1 0.08 79.7 61.1 1.79 4752 88
__ 77. Use the HMC GUI application to change the capacity entitlement of the VIOS
partition to the next value in Table 8, then repeat the actions from Step 63 to
Step 76, recording the results in the appropriate row of Table 8. Continue repeating
the steps until you have recorded values for all the different capacity entitlement
values listed in the table.
Use the following sequence of steps to change the capacity entitlement of the VIOS
partition.
In the LPAR table view, select the VIOS partition, then right-click and select Dynamic
Logical Partitioning > Processor > Add or Remove. Enter the new desired amount
of processing units in the Assigned field, then click the OK button to make the change.
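If you prefer the HMC command line, a hedged alternative is the chhwres
command; the names in brackets are placeholders, and -o a adds (while -o r
removes) processing units relative to the current value:
chhwres -r proc -m <managed system> -o a -p <VIOS partition> --procunits 0.1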
Table 8: Throughput scalability

Virtual I/O Server
Capacity Entitlement    ftp throughput (Mb/s)    %entc
0.1
0.2
0.4
0.6
0.8
The actual values you obtain here are not important. The point is to understand that the
throughput of a shared Ethernet adapter will be restricted if the virtual I/O server
partition is CPU constrained.
Here are the results obtained by the authors:
Virtual I/O Server
Capacity Entitlement    ftp throughput (Mb/s)    %entc
0.1                     47.84                    79.7
0.2                     189.04                   88.9
0.4                     395.52                   92.4
0.6                     560.16                   94.3
0.8                     910.23                   94.5
__ 78. What can you say about the capacity entitlement needed on the virtual I/O server to
get the full throughput of a 1 gigabit Ethernet network (around 910 Mb/s of
application data)?
The actual values you obtain here are not important. The point is to understand that the
throughput of a shared Ethernet adapter will be restricted if the virtual I/O server
partition is CPU constrained. The objective is to determine the capacity entitlement
value at which the shared Ethernet adapter throughput is no longer constrained by the
virtual I/O server CPU consumption.
__ 79. If you do not still have it, make sure you record the IP configuration associated with
the second partition. Shut down your second partition (the one with the HEA port).
Using the HMC GUI, re-activate the partition. Using the virtual terminal session for
this partition, log in and assign the IP configuration to your first virtual Ethernet
adapter.
You can use the smit mktcpip fastpath to configure the virtual Ethernet adapter. Be
sure to choose the adapter that you originally used to access the external network and
not the one with adapter ID 88. If necessary, you can use the lsdev command to view
the location code; the slot number appears after the C in the location code.
End of exercise

Exercise review/wrap-up
In part 1, the students examined the performance of the virtual SCSI configuration of their
assigned logical partition.
In part 2, the students examined the impact that the MTU size and CPU capacity entitlement
have on the bandwidth throughput of virtual Ethernet connections.
In part 3, the students examined the impact that the CPU capacity entitlement of the virtual
I/O server has on the shared Ethernet adapter performance.
Exercise 7. Live Partition Mobility
(with hints)
Estimated time
01:00
What this exercise is about
This exercise provides the students with an opportunity to perform a
partition migration between two managed systems using Live Partition
Mobility.
What you should be able to do
After completing this exercise, you should be able to:
Configure the environment to support Live Partition Mobility
Migrate an LPAR between two POWER7 systems
Introduction
In this exercise, you will configure the assigned lab environment to
support a Live Partition Mobility operation. The source and destination
systems may be managed by different HMCs. You will verify the virtual
I/O servers are configured as mover service partitions, configure the
remote HMC SSH key authentication, and start and monitor the
migration process.
Requirements
Each student must have access to two POWER7 systems and the
associated HMCs. The systems must have the PowerVM Enterprise
Edition feature enabled in the system firmware.
The exercise depends on the following:
Four VIO servers per managed system with one VIO per student
and four students per managed system. This allows the students to
perform the Live Partition Mobility in a dual VIO environment.
Each managed system should have a dedicated HMC. If the
managed systems are managed by one HMC, the remote Live
Partition Mobility (RLPM) steps cannot be performed. Instead, the
students should be directed to perform a normal Live Partition
Mobility. Steps that are unique to the RLPM will be identified.

Instructor exercise overview
You might have to describe the environment to the students. They will
migrate their partition away from and back to the original system to
restore the partition's initial configuration. At the time of writing,
the IBM training lab (CLP) had one HMC for every two managed systems.
If four systems are assigned, then you will have two pairs of systems
and each pair would be managed by a different HMC. As you assign the
destination system, make sure you identify a system that is managed
by an HMC other than the one assigned to the source system. If all of
the systems are managed by the same HMC, then a local Live
Partition Mobility will be performed instead.
Exercise instructions with hints
Preface
Use your assigned LPAR for this exercise.
All hints are marked by a sign.
Part 1: Partition migration environment
The following figure shows the environment commonly used for the partition mobility lab
exercise. Only the environment for student1 through student4 is represented. Students 5 to 8
have similar partition configurations on the same managed systems: System 1 and System 2.
You will perform the steps independently from the other students. You will use a VIO server
as mover service partition on the destination managed system. Check that it is up and
running before proceeding. If all assigned systems are managed by the same HMC, the
remote partition mobility cannot be performed and you'll do a local mobility operation
instead.
__ 1. In the following tables, write down the source and destination system names
assigned to your class.
Table: Migration information - for students 1 to 8

Student      LPAR    Source    Source VIO server      Destination    Destination VIO server
number       Name    system    (Source mover          system         (Destination mover
                               service partition)                    service partition)
student 1    lpar1             VIOS1                                 VIOS1
student 2    lpar2             VIOS2                                 VIOS2
student 3    lpar3             VIOS3                                 VIOS3
student 4    lpar4             VIOS4                                 VIOS4
student 5    lpar1             VIOS1                                 VIOS1
student 6    lpar2             VIOS2                                 VIOS2
student 7    lpar3             VIOS3                                 VIOS3
student 8    lpar4             VIOS4                                 VIOS4
If there are more than eight students attending the class, there will be additional
systems with similar configurations for student9 through student16.
Table: Migration information - for students 9 to 16

Student      LPAR    Source    Source VIO server      Destination    Destination VIO server
number       Name    system    (Source mover          system         (Destination mover
                               service partition)                    service partition)
student 9    lpar1             VIOS1                                 VIOS1
student 10   lpar2             VIOS2                                 VIOS2
student 11   lpar3             VIOS3                                 VIOS3
student 12   lpar4             VIOS4                                 VIOS4
student 13   lpar1             VIOS1                                 VIOS1
student 14   lpar2             VIOS2                                 VIOS2
student 15   lpar3             VIOS3                                 VIOS3
student 16   lpar4             VIOS4                                 VIOS4
Part 2: Pre-migration checks
The current environment should be ready for the partition migration. In the following steps,
you will look at the migration requirements and make changes only when necessary.
__ 2. Verify there are enough CPU and memory resources on the destination system to
host your logical partition and potentially the three others from the other lab groups.
Access the HMC GUI, go to the Systems Management menu, click Servers in
the left navigation pane. In the Table view, locate the Available Processing Units
and the Available Memory values associated with your destination system (In the
upper right side, make sure the View: is Table and not Tree. You will not see
these values in the Tree view.)
You can also list the current available memory and processor values using the
HMC command line. Type the following commands and look at the
curr_avail_sys_proc_units and curr_avail_sys_mem attributes:
lshwres -r proc -m <managed_system> --level sys
configurable_sys_proc_units=8.0,curr_avail_sys_proc_units=5.0,pend_
avail_sys_proc_units=5.0,installed_sys_proc_units=8.0,max_capacity_
sys_proc_units=deprecated,deconfig_sys_proc_units=0,min_proc_units_
per_virtual_proc=0.1,max_virtual_procs_per_lpar=64,max_procs_per_lp
ar=64,max_shared_proc_pools=64
lshwres -r mem -m <managed_system> --level sys
configurable_sys_mem=16384,curr_avail_sys_mem=12672,pend_avail_sys_
mem=12672,installed_sys_mem=16384,max_capacity_sys_mem=deprecated,d
econfig_sys_mem=0,sys_firmware_mem=640,mem_region_size=64,configura
ble_num_sys_huge_pages=0,curr_avail_num_sys_huge_pages=0,pend_avail
_num_sys_huge_pages=0,max_num_sys_huge_pages=0,requested_num_sys_hu
ge_pages=0,huge_page_size=16384,total_sys_bsr_arrays=16,bsr_array_s
ize=8,curr_avail_sys_bsr_arrays=16
__ 3. Log in to your assigned VIO server on your source system and use the lsmap
command to check the hdisk numbers used as the backing devices for your
assigned client LPAR. There are likely two disks.
Command and example output showing two disks:
$ lsmap -all
SVSA Physloc Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0 U8204.E8A.06BCC9P-V3-C32 0x00000009

VTD vtscsi0
Status Available
LUN 0x8100000000000000
Backing device hdisk1
Physloc U78AA.001.WZSGHWS-P2-D5
Mirrored false

VTD vtscsi2
Status Available
LUN 0x8200000000000000
Backing device hdisk2
Physloc U78AA.001.WZSGHWS-P2-D1
Mirrored false
__ 4. In your assigned client LPAR, document the PVIDs for the two disks using the lspv
command.
This example shows the PVIDs in the second column:
# lspv
hdisk0 00f6bcc9f7c38cc0 rootvg active
hdisk1 00f6bcc9a30e4bdc None
__ 5. Verify that both backing devices (hdisk<#>) have the reserve_policy attribute set
to no_reserve on both the source and destination VIO servers. If needed, use the
chdev command to change the attribute value.
Note: The hdisk numbers might not be the same on the source and destination VIO
servers. Use the PVIDs to determine the correct disks.
Use lspv on the source VIO server to view the PVIDs of the hdisk backing
devices and verify the same PVID is also visible on the destination VIO server.
lsdev -dev hdisk<#> -attr reserve_policy
If necessary, run the following command to change the reserve_policy
attribute value:
chdev -dev hdisk# -attr reserve_policy=no_reserve
Note: For the source VIOS partition, if you need to change the
reserve_policy attribute for the client LPAR, you will need to shut down the
client LPAR, remove the virtual target devices, run the chdev command to
change the attribute, remake the virtual target devices with mkvdev, then activate
the LPAR. Because the disks are not yet in use on the destination, this procedure
is not necessary on the destination server. You can simply run the chdev
command if necessary.
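As an illustration of that procedure, here is a hedged sketch of the command
sequence on the source VIOS; the device names vtscsi0, hdisk1, and vhost0 are
examples from this exercise and must match your own configuration:
$ rmdev -dev vtscsi0
$ chdev -dev hdisk1 -attr reserve_policy=no_reserve
$ mkvdev -vdev hdisk1 -vadapter vhost0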
__ 6. Determine the VLAN ID that the client LPAR is using. In a terminal session to your
AIX LPAR, run the following command:
# entstat -d ent0 | grep ID
Example output that shows only one VLAN ID (10) in use:
# entstat -d ent0 | grep ID
Invalid VLAN ID Packets: 0
Port VLAN ID: 10
VLAN Tag IDs: None
__ 7. Verify that the destination VIO server is also bridging the same VLAN associated
with your client partition's virtual network adapter. Log in to your destination VIO
server, and execute lsmap -all -net and check for the SEA adapter name.
When done, execute entstat -all ent# | grep ID to verify the PVID is the
same as on your AIX client partition.
Output example of lsmap command:
$ lsmap -net -all
SVEA Physloc
------ --------------------------------------------
ent1 U8204.E8A.06BCC9P-V3-C31-T1
SEA ent3
Backing device ent0
Status Available
Physloc U7311.D20.WZSGHWS-P1-C06-T1
Output example of entstat command:
$ entstat -all ent3 | grep ID
Control Channel PVID: 19
Invalid VLAN ID Packets: 0
Port VLAN ID: 10
VLAN Tag IDs: None
Switch ID: ETHERNET0
Invalid VLAN ID Packets: 0
Port VLAN ID: 19
VLAN Tag IDs: None
Switch ID: ETHERNET0
__ 8. Verify each HMC is capable of performing remote migrations. In a CLI session to
each HMC, run the lslparmigr command.
Example command:
$ lslparmigr -r manager
If the HMC is capable, the attribute remote_lpar_mobility_capable displays a
value of 1. Otherwise, it shows a value of 0.
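Here is an illustrative output fragment; the full attribute list varies with the
HMC version, so only the relevant attribute is shown:
$ lslparmigr -r manager
...,remote_lpar_mobility_capable=1,...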
__ 9. Use the ping command in an SSH session or the HMC GUI to test the network
communications between the two HMCs. Skip this step if you only have one HMC.
If using the HMC GUI, go to the navigation area, and select HMC Management.
In the Operations section of the contents area, select Test Network
Connectivity. In the Network Diagnostic Information window, select the Ping
tab. In the text box, enter the IP address or host name of the remote HMC, and
select Ping.
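For example, from an SSH session on one HMC (the IP address below is a
placeholder for the other HMC's address):
hscroot@hmc1:~> ping 10.31.204.22
Press Ctrl-C to stop the ping if it runs continuously.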
__ 10. Start a command line session to the destination system's HMC (use ssh/putty), log
in as hscroot and use the mkauthkeys command to retrieve authentication keys
from the source system's HMC (the one currently managing the mobile partition).
Skip this step if you only have one HMC.
In the following example, replace the IP address with your assigned HMC's IP
address:
mkauthkeys --ip 10.31.204.21 -u hscroot -t rsa -g
Enter the password for user hscroot on the remote HMC when prompted.
Note: This step could have been done on the local HMC (where the LPAR is),
in which case the --ip flag would specify the remote HMC's address.
Part 3: Partition mobility steps
In this section, you will perform the remote live partition migration if you have two HMCs
(one for each system). Otherwise, you will perform a local Live Partition Mobility operation.
__ 11. On your (source) HMC browser session, open the Properties menu of your VIOS
partition (refer to the Migration Information table). Look at the General tab and verify
the Mover service partition attribute is set. If the checkbox is not already checked,
check it.
__ 12. Start an HMC command line session to your source HMC (use ssh/putty) and
execute the lslparmigr command to list the source and destination system's
mover service partitions (managed system names are displayed in the HMC
browser session).
Example command for systems with only one HMC for both the source and
target systems:
hscroot@<hmc_hostname> lslparmigr -r msp -m <source managed system name> \
-t <destination managed system name> \
--filter lpar_names=<mobile partition name>
Example command for remote LPM (two HMCs):
hscroot@<hmc_hostname> lslparmigr -r msp -m <source managed system name> \
-t <destination managed system name> --ip <remote HMC IP> -u hscroot \
--filter lpar_names=<mobile partition name>
__ 13. Dynamically change the MSP partitions (VIOS partitions) so that the processors are
uncapped. The migration process consumes about 0.85 CPU on the MSPs so the
partitions need to be able to use more processing resources.
Select the VIOS partition. Run the Dynamic Logical Partitioning > Processor
> Add or Remove task. Check the Uncapped checkbox and make the weight
value 192. The processing units can stay at whatever value is currently
configured. Change both the source and target MSPs.
__ 14. Using the HMC browser session, start the migration process. View the Migration
Information panel and click Next.
Select your mobile LPAR by clicking the check box on the same row as your
LPAR. Then from the LPAR's context menu, click Operations > Mobility >
Migrate. The Migration wizard displays. Click Next.
__ 15. For the New destination profile name panel, type default, and click Next.
__ 16. For remote LPM operations, select the Remote Migration box. Then, type the IP
address of the destination system's HMC in the Remote HMC field. Type hscroot
for the Remote User and click Next. If your source and destination systems are
being managed by the same HMC, skip this step and just click Next.
__ 17. Select the destination managed system from the pull-down menu and click Next.
__ 18. If you see Paging VIOS Redundancy, make sure the Paging VIOS redundancy value
is none. This panel will only appear if a partition profile contains an AMS
configuration.
__ 19. The validation task will run at this point and will take a moment. At Validation
Errors/Warnings, you might see a window containing a message ID. The type of
message should be Errors. You should see an error message that is similar to the
following:
__ 20. When an error occurs during the live partition mobility validation process, you can
find additional details on the virtual I/O server by using the alog command.
__ 21. As the padmin user, log in to your source VIO server, and then run the following alog
command:
echo "alog -o -t cfg" | oem_setup_env | more
Search for the physical device location specified in the error message. This should
be associated with information such as ERROR: cannot migrate reserve
type single_path.
Looking at the alog output, can you explain why we have this error during the
validation? What should you do to fix the problem?
The error message identifies the vscsi adapter that has an attached disk with a
single_path reserve policy. Use the lsmap command on your VIO server to find
out which hdisk devices are attached to the VSCSI adapter specified in the error
message. Then use the lsdev command to find out which hdisk device has its
reserve_policy attribute set to single_path. You must remove the virtual target
device (vtscsi0) that maps this hdisk device to your client logical partition.
Also, at the client you can use the lspath command to identify the hdisk that
has the single path.
__ a. On the virtual I/O server, use the lsmap -all command to find out which virtual
SCSI adapter has the physical location code mentioned in the error message.
Then identify the hdisk devices that are attached to the virtual SCSI adapter.
Here is an example of the lsmap command output. The virtual SCSI server
adapter vhost0 is the adapter with the physical location code mentioned in the
error message.
$ lsmap -all
SVSA Physloc Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0 U8204.E8A.65BF7D1-V1-C13 0x00000005
VTD sys046l1rvg
Status Available
LUN 0x8100000000000000
Backing device hdisk6
Physloc U7311.D20.650443C-P1-C07-T1-W500507680140581E-L6000000000000
VTD vtscsi0
Status Available
LUN 0x8200000000000000
Backing device hdisk1
Physloc U7311.D20.650443C-P1-C07-T1-W500507680140581E-L1000000000000
SVSA Physloc Client Partition ID
--------------- -------------------------------------------- ------------------
vhost1 U8204.E8A.65BF7D1-V1-C14 0x00000006
VTD sys046l2rvg
Status Available
LUN 0x8100000000000000
Backing device hdisk7
Physloc U7311.D20.650443C-P1-C07-T1-W500507680140581E-L7000000000000
In the example output above, hdisk6 and hdisk1 are the hdisk devices attached
to the virtual SCSI server adapter vhost0.
__ b. Use the lsdev command to identify the hdisk device that has a reserve_policy
attribute set to single_path.
hdisk1 is a non-shared LUN and has the reserve_policy set to single_path.
$ lsdev -dev hdisk1 -attr | grep policy
reserve_policy single_path Reserve Policy True
__ c. In order to proceed with the live partition migration, you must remove the virtual
target device that maps this hdisk device to your client logical partition. Log in to
your VIO server and run the rmdev command to remove the VTD.
Use this command if the VTD device name is vtscsi0:
$ rmdev -dev vtscsi0
__ 22. Re-run the validation (click Back to get the Destination panel, and then click Next).
At Validation Errors/Warnings, verify the type of message is a warning and not
an error. If error messages do not appear, the validation process was successful and
you can proceed with the next step. You should still read the warning messages.
You can ignore any warning messages related to VIOS partitions not involved in the
migration.
The following warning messages might be associated with a successful outcome:
HSCLA295 As part of the migration process, the HMC creates a
new migration profile containing the partition's current
state. The default is to use the current profile, which
replaces the existing definition of this profile. While
this works for most scenarios, other options are possible.
You may specify a different existing profile, which would be
replaced with the current partition definition, or you may
specify a new profile to save the current partition state.
HSCLA291 The selected partition may have an open virtual
terminal session. The HMC forces termination of the
partition's open virtual terminal session when the
migration has completed.
Click Next.
__ 23. You must select a pair of MSPs that will be used during the migration process. Verify
the automatically selected MSP pair corresponds to your source and destination
MSPs. If not, select them (refer to the migration information table for your source
and destination MSPs) and click Next.
__ 24. View the VLAN configuration, and click Next.
__ 25. Check the virtual SCSI assignment. The automatically selected pair might not be
correct. Recall that the client partition has two paths for its hdisk0 and you want to
recreate that configuration on the destination. Select the appropriate VIOS partitions
and click Next.
The following commands and example outputs show how to determine what
paths currently exist and the location codes for the paths:
# lspath -l hdisk0
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1
# lscfg | grep vscsi
* vscsi1 U8203.E4A.06F9EC1-V4-C4-T1 Virtual SCSI Client Adapter
* vscsi0 U8203.E4A.06F9EC1-V4-C5-T1 Virtual SCSI Client Adapter
The location codes contain the virtual adapter ID after the "C". You can use the
HMC to determine which VIOS is associated with the client adapter IDs.
__ 26. Verify the Pool ID 0 is selected for the Destination Shared Processor Pool.
__ 27. Keep the Wait time default value and click Next.
__ 28. When the Partition Migration Summary menu displays, look at the Destination
partition ID value. The ID is assigned automatically and the first available value is
chosen (in our example, ID=3). Do not click Finish yet.
__ 29. Establish two login sessions, one to each of your source and destination MSP VIO
servers. In these sessions, start topas to monitor the CPU and network resource
utilization during the migration process.
__ 30. Click Finish on the Summary panel to start the partition migration.
__ 31. Look at the topas command outputs and observe the kernel, CPU, and network
activity.
__ 32. Observe the pop-up window and wait until the migration status shows success. Click
Close to exit.
__ 33. Using the lspartition HMC command, identify the LPAR's new partition ID and
managed system.
Example command: lspartition -dlpar
A service processor lock might occur when doing simultaneous moves that use the
same VIO servers. In case of a lock, you can use the Recover option in the Mobility
task; the HMC will perform the necessary operations to complete the migration.
Part 4: Migrate back to the initial managed server
__ 34. Perform the migration of your partition back to your initial managed server.
Select your mobile LPAR by clicking the checkbox on the same row as your
LPAR. Then from the LPAR's context menu, click Operations > Mobility >
Migrate. The Migration wizard displays. Click Next. Enter the appropriate
choices to migrate your partition back to its original managed system. Verify that
the partition migrates successfully.
End of exercise

Exercise review/wrap-up
This exercise provided hands-on experience with the Live Partition Mobility feature on POWER7 systems.
Exercise 8. Suspend and resume
(with hints)
Estimated time
01:00
What this exercise is about
This lab provides the students with an opportunity to perform suspend
and resume operations on a partition.
What you should be able to do
After completing this exercise, you should be able to:
View the reserved storage device pool and device configuration
Suspend a partition
Resume a suspended partition

Introduction
Using the Suspend/Resume feature, clients can suspend a partition for a
long period: the partition state (memory, NVRAM, and VSP state) is saved
on persistent storage, and the server resources that were in use by that
partition are freed. Resuming restores the partition state to server
resources and resumes operation of the partition and its applications,
either on the same server or on a different server.
Requirements
This workbook
HMC V7.7.2 or later
System Firmware v7.2.0 SP1 or higher
AIX v7.1 TL0 SP2 or higher or v6.1 TL6 SP or higher
Instructor exercise overview
In this exercise, the partition will be suspended and resumed. The
system should already have a shared memory pool, so this will
be the reserved storage pool needed in this exercise.
Exercise instructions with hints
Preface
All procedures in this exercise depend on the availability of specific equipment in
your classroom.
All hints are marked by a sign.
Used tools and lab environment
In this lab, you will be using your assigned logical partition. The tool used during this
exercise is a shell script which will update a log file with output from the date command.
This tool is located in the /home/an31/ex8 directory.
Command    Type            Role
S-R.sh     Shell script    Program used to update the log file with date command output
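The classroom script itself is not reproduced in this guide, but here is a
minimal sketch of what S-R.sh likely does (the actual script may differ):
#!/bin/sh
# Append a timestamp to /tmp/test.log once per second
# until the process is killed.
while true
do
    date >> /tmp/test.log
    sleep 1
done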
Part 1: View the reserved storage pool configuration
The partition state is stored on a persistent storage device during the suspend operation.
The device must be assigned to a reserved storage device pool on the HMC. The systems
used in this class already have a shared memory pool configured and this pool can be used
for the reserved storage pool for Suspend-Resume operations. You will not need to create
a separate reserved storage pool.
__ 1. View the reserved storage pool configuration.
Select your managed system and run the Configuration > Virtual resources >
Reserved Storage Device Pool Management task. The following graphic is an
example of the window that pops up.
It should display the list of devices in the reserved storage device pool as shown
above. These are the devices (hdisk1 to hdisk5) that were used earlier as
paging devices in the AMS exercise.
__ 2. List volumes in the reserved storage pool from the HMC command line.
From the HMC command line interface, run lshwres with the --rsubtype
rsdev flag to view the reserved storage device used to save suspension data for
your partition.
hscroot@hmc45:~> lshwres -r rspool -m sys426 --rsubtype rsdev
device_name=hdisk1,vios_name=sys426_amsvios,vios_id=9,size=5120,type=phys,state=Inactive,phys_loc=U5877.001.M09H12R-P1-C10-T1-W500507680140B855-L1000000000000,is_redundant=0,lpar_id=none,device_selection_type=auto

device_name=hdisk2,vios_name=sys426_amsvios,vios_id=9,size=5120,type=phys,state=Inactive,phys_loc=U5877.001.M09H12R-P1-C10-T1-W500507680140B855-L2000000000000,is_redundant=0,lpar_id=none,device_selection_type=auto

device_name=hdisk3,vios_name=sys426_amsvios,vios_id=9,size=5120,type=phys,state=Inactive,phys_loc=U5877.001.M09H12R-P1-C10-T1-W500507680140B855-L3000000000000,is_redundant=0,lpar_id=none,device_selection_type=auto

device_name=hdisk4,vios_name=sys426_amsvios,vios_id=9,size=5120,type=phys,state=Inactive,phys_loc=U5877.001.M09H12R-P1-C10-T1-W500507680140B855-L4000000000000,is_redundant=0,lpar_id=none,device_selection_type=auto

device_name=hdisk5,vios_name=sys426_amsvios,vios_id=9,size=5120,type=phys,state=Inactive,phys_loc=U5877.001.M09H12R-P1-C10-T1-W500507680140B855-L5000000000000,is_redundant=0,lpar_id=none,device_selection_type=auto
Note: The size of the volume must be at least the same size as the maximum
memory specified in the profile for the suspending partition. In our lab environment,
we are using 5 GB LUNs for the reserved storage devices, which is more than the
maximum memory configured for the LPARs.
Part 2: Suspend the partition
__ 3. Configure the partition to allow suspend operations.
__ a. Select your LPAR and open the Properties tab and check the option Allow this
partition to be suspended.
Example panel that shows the checkbox is checked.
__ b. Now we will execute a script on the suspend-capable partition and verify the
functionality of the resume operation once the partition is resumed. The
script is located in the /home/an31/ex8 directory.
Note: The script must be executed in the background with nohup.
# nohup ./S-R.sh &
[1] 4128902
# Sending nohup output to nohup.out.
# ps -ef | grep S-R.sh
root 4128902 5832904 0 07:49:29 vty0 0:00 /bin/sh ./S-R.sh
The script S-R.sh logs the output of the date command to the /tmp/test.log file
at one-second intervals.
# tail -f /tmp/test.log
Fri Jan 13 07:52:44 CET 2012
Fri Jan 13 07:52:45 CET 2012
Fri Jan 13 07:52:46 CET 2012
Fri Jan 13 07:52:47 CET 2012
Fri Jan 13 07:52:48 CET 2012
Fri Jan 13 07:52:49 CET 2012
Fri Jan 13 07:52:50 CET 2012
Fri Jan 13 07:52:51 CET 2012
__ 4. Suspend the client partition.
Select your LPAR and run the Operations > Suspend Operations > Suspend
task. Accept the defaults and click the Suspend button. This is the window that
pops up:
__ 5. After clicking the Suspend button, the status for each activity will be shown in a
separate window. Watch the status window as each activity completes. When all
steps have completed, the status window will look like this and you can click the
Close button:
__ 6. Notice the state in the LPAR table on the HMC GUI.
Example view showing the status of Suspended:
__ 7. View the reserved storage pool state once the suspend operation is initiated from
the HMC command line.
Command and example output:
lshwres -r rspool -m sys426 --rsubtype rsdev --filter "lpar_names=lpar1"
device_name=hdisk5,vios_name=sys426_amsvios,vios_id=9,size=5120,type=phys,state=Active,phys_loc=U5877.001.M09H12R-P1-C10-T1-W500507680140B855-L5000000000000,is_redundant=0,lpar_name=sys426_lpar1,lpar_id=5,device_selection_type=null
Observe that hdisk5 was assigned to the suspended partition from the reserved
storage device pool when the suspend operation was initiated.
Note: Any volume that is part of the reserved storage device pool can be chosen when
the suspend operation is initiated. The size of the device should be 110% of the max
memory value defined for the partition.
__ 8. View the state of the partition.
Command and example output:
hscroot@hmc45:~> lssyscfg -r lpar -m sys426 -F name,state --filter \
"lpar_names=sys426_lpar1"
lpar1,Suspended
Part 3: Resume the suspended partition
__ 9. The resume operation returns the partition to the state it was in when suspended.
Now resume the partition that you suspended.
Select your LPAR and run the Operations > Suspend Operations > Resume
task. The following window opens. Click Resume.
The resume operation starts immediately and a window, as shown below,
provides information regarding the status and shows the progress. Once the
resume operation has completed, the status window looks like this:
__ 10. In the main Hardware Management Console window, check the status of the
partition. It should now be Running.
__ 11. Log in to your LPAR and check to see if the script functionality resumed after the
completion of the Resume operation.
Example command and expected output:
# ps -ef | grep S-R.sh
root 4128902 1 0 07:49:29 - 0:00 /bin/sh ./S-R.sh
# tail -f /tmp/test.log
Fri Jan 13 07:54:04 CET 2012
Fri Jan 13 07:54:05 CET 2012
Fri Jan 13 07:54:06 CET 2012
Observe that the script resumed logging to the /tmp/test.log file once
the partition was resumed.
__ 12. Kill the S-R.sh process.
Example commands:
# ps -ef | grep S-R.sh
(Discover the PID.)
# kill -9 4128902
__ 13. Log in to the HMC and check the status of the volume used by the suspended partition.
Example command:
hscroot@hmc:~> lshwres -r rspool -m sys426 --rsubtype rsdev --filter \
"lpar_names=sys426_lpar1"
No results were found.
In this step, you observed that the partition picked hdisk5 from the reserved
storage device pool to save the partition information on persistent storage
while the partition was suspended, and that it released the volume once the
partition was resumed.
Note: If the partition is not a shared memory partition, all devices used to store
partition suspend data will be released.
__ 14. Look for a log message for the suspend and resume operations by using the
following command on the HMC:
lssvcevents -t console | head
Example command and example output showing successful suspend and
resume operations:
hscroot@hmc:~> lssvcevents -t console | head
time=12/13/2012 01:40:57,"text=[EVENT 1831]: A TIMER TASK HAS BEEN
SCHEDULED ON A NAMED TIMER <br/>
Named Timer Thread: PM-TaskPortal Timer <br/>
Timer Task ID: <b>1893298393</b> <br/>
Date: <br/>
Delay (s): 0 <br/>
Period (s): "
time=12/13/2012 01:38:57,text=HSCE2418 UserName hscroot: Resume
operation on Partition with id 3 of Managed System 8202-E4B*06BCC9P
succeeded.
time=12/13/2012 01:35:15,text=HSCE2416 UserName hscroot: Suspend
operation on Partition with id 3 of Managed System 8202-E4B*06BCC9P
succeeded.
time=12/13/2012 01:31:21,"text=[EVENT 1831]: A TIMER TASK HAS BEEN
SCHEDULED ON A NAMED TIMER <br/>
Named Timer Thread: PM-TaskPortal Timer <br/>
End of exercise

Exercise review/wrap-up
This exercise provided hands-on use of the POWER7 Suspend-Resume feature and
introduced the students to the concept of reserved storage pools.
Exercise A. Using the Virtual I/O Server Performance Analysis Reporting Tool
(with hints)
Estimated time
00:30
What this exercise is about
This lab provides instructions on running and analyzing the Virtual I/O
Server Performance Analysis Reporting Tool (PART).
What you should be able to do
After completing this exercise, you should be able to:
Collect Virtual I/O Server configuration and performance data using
the Performance Analysis Reporting Tool
Analyze the results in a web browser
Introduction
The PART tool is now included in the Virtual I/O Server (VIOS) in
version 2.2.2.0. Students will use this tool to collect configuration and
performance data on a VIOS partition. They will view the resulting
XML file in a browser.
Requirements
This workbook
One Virtual I/O Server partition at Version 2.2.2.0 or higher
One AIX 7 logical partition
A system with a web browser
The ability to copy files from the assigned lab partition to a system
with a web browser. This capability is not available in all training
labs.
Instructor exercise overview
In this exercise, students will generate some activity on a VIOS
partition. They will run the part tool to collect data for 10 minutes. Then
they will use a web browser to view the output file, which is in XML
format. They will see what configuration information is provided and
review the recommended tuning changes.
Exercise instructions with hints

Preface
All procedures in this exercise depend on the availability of specific equipment in your
classroom.
All hints are marked by a sign.
Using the VIOS Performance Advisor tool
__ 1. Log in to your assigned Virtual I/O Server partition and check the software version. It
must be at least 2.2.2.1 to continue with this exercise.
Example command and its expected output:
$ ioslevel
2.2.2.1

__ 2. Use the whence command to find the part tool.


Example command and its expected output:
$ whence part
/usr/ios/utils/part

__ 3. Now that you have verified that the part tool is available, generate a CPU load in
your VIOS partition. Run oem_setup_env and then run two yes commands in the
background. Redirect the command output to /dev/null.
Example commands and expected output with PID:
# yes > /dev/null &
[1] 8978558
# yes > /dev/null &
[2] 8192090
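Optionally, confirm that the load generators are running before starting the
collection. A minimal check; the bracketed pattern keeps grep from matching its
own process, and your PIDs will differ:
# ps -ef | grep [y]es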

__ 4. Type exit to return to the padmin shell, then run the part tool for 10 minutes using
the detailed logging level.
Example command and its expected output:
$ part -i 10 -t 2 &
[1] 10551342
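For reference, the -i flag sets the collection interval in minutes and -t sets the
level of detail collected (2 is the detailed level). The part utility can also
post-process an existing nmon recording instead of monitoring live; a hedged
sketch, where the nmon file name is only an example:
$ part -f lpar1_121210_1246.nmon -t 2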

__ 5. Open a second login session to your assigned VIOS partition and run the
oem_setup_env command. Run lparstat with a 2-second interval and a count
value of 5. You should see some CPU activity.
Below is an example command and its expected output showing CPU activity. The
VIOS is consuming 1.00 physc. The system configuration line shows that SMT is
set to four threads and that there are four logical processors, which means the
LPAR is configured with one virtual processor. A physc of 1.00 therefore means
the partition is at its maximum processing capacity.
# lparstat 2 5

System configuration: type=Shared mode=Uncapped smt=4 lcpu=4 mem=1024MB psize=4 ent=0.50

%user  %sys  %wait  %idle physc %entc  lbusy  vcsw phint
----- ----- ------ ------ ----- ----- ------ ----- -----
 91.6   0.8    0.0    7.7  1.00 200.0   50.5   358     2
 92.9   0.8    0.0    6.4  1.00 199.9   50.2   425     2
 88.8   0.7    0.0   10.5  1.00 199.8   48.5   489     1
 88.2   0.7    0.0   11.0  1.00 200.1   51.1   527     2
 88.6   0.8    0.0   10.6  1.00 199.9   50.1   497     5
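The %entc column follows directly from the configuration line: with ent=0.50 and
the partition consuming 1.00 physical processor,
%entc = (physc / ent) x 100 = (1.00 / 0.50) x 100 = 200%
An uncapped partition may exceed 100% of its entitlement, but only up to its
virtual processor count, which is why physc levels off at 1.00 here.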

__ 6. Type exit to return to the VIOS command line.

__ 7. While you are waiting for the part command to finish, explore the topas and nmon
panels that were covered in the lecture materials, as well as the other commands
available from the VIOS command line. Suggested commands include: vmstat,
svmon, fcstat, and entstat (example invocations follow this step).

__ 8. When you see the following message in the first login session to the VIOS, the part
utility is finished. The filename on your system will be different as it is the hostname
followed by the date and time.
part: Reports are successfully generated in lpar1_121210_12_46_16.tar

__ 9. When the part utility is finished, look for the filename that was printed to the screen
in the /home/padmin directory.
Example command and expected output:
$ ls /home/padmin/lpar1*
/home/padmin/lpar1_121210_12_46_16.tar

__ 10. Extract the tar file using the following command, then list the contents. Be sure to
replace the example tar file name with the name of your tar file.
$ tar xvf lpar1_121210_12_46_16.tar

__ 11. This will have created a directory with the same name as the tar file, minus the
.tar suffix. List the new directory with ls and explore the files it contains.


Example files and subdirectories:


$ ls lpar1*/*
lpar1_121210_12_46_16/george5_121210_1246.nmon
lpar1_121210_12_46_16/popup.js
lpar1_121210_12_46_16/style.css
lpar1_121210_12_46_16/vios_advisor.xsl
lpar1_121210_12_46_16/vios_advisor_report.xml

lpar1_121210_12_46_16/images:
Warning_icon.png close.jpg headerLogo.png readonly.png
bg.png correct.png investigate.png red-error.png

lpar1_121210_12_46_16/logs:
ionfile logfile
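To see why the directory must be kept together, look at the top of the report
file: the XML references the XSL stylesheet, which in turn pulls in the CSS,
JavaScript, and images. The two header lines below are a sketch of what to
expect, not verbatim output from your system:
$ head -2 lpar1_121210_12_46_16/vios_advisor_report.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="vios_advisor.xsl"?>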

__ 12. Use the ftp command to copy the entire contents of the new directory to your
local PC. If you are using Microsoft Windows, use the ftp tool in a Command
Prompt window. If you are on a Linux system, use ftp. Copy the files to a
folder/directory. Follow this procedure to perform these actions (a condensed
example session appears after these steps):
__ a. Open a session on your local system. This could be a Windows PC that is in your
classroom or a personal computer running Linux. When you open the Windows
Command Prompt window, it will likely show that the current directory is
C:\Documents and Settings\Administrator or something similar. This is fine.
__ b. Create a directory named vadvisor.
Example Windows command: mkdir vadvisor
__ c. Change your current directory to the new directory.
Example Windows command: cd vadvisor
__ d. Run ftp to your assigned VIOS partition.
Example Windows command which lists the VIOS IP address as 10.10.10.10:
ftp 10.10.10.10
__ e. Log in as the padmin user and enter the password when prompted.
__ f. Run the FTP subcommand binary and press Enter.
__ g. Run the FTP subcommand prompt and press Enter.
__ h. Run the FTP subcommand mget followed by the name of the directory, followed
by a slash (/) and an asterisk. For example:
mget lpar1_121210_12_46_16/*
__ i. The command will take a moment to run. When it's finished, type the FTP bye
subcommand to close the FTP session.


__ j. List the items in the directory that you created with the dir Windows command.
You should see a filename with the .xml suffix.
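For reference, a condensed transcript of the whole transfer might look like the
following, assuming the example VIOS address 10.10.10.10 and the example
directory name used earlier; your prompts and banners will differ:
C:\vadvisor> ftp 10.10.10.10
(log in as padmin when prompted)
ftp> binary
ftp> prompt
ftp> mget lpar1_121210_12_46_16/*
ftp> bye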

__ 13. Use the GUI to navigate to your vadvisor folder/directory. Create a folder in the
vadvisor folder/directory called images, and put all of the files that end in .png
into that folder/directory. This restores the images subdirectory shown in step 11,
which the FTP transfer flattened and where the report looks for its icons.
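On a Linux system, the same rearrangement can be done from a shell. A minimal
sketch, assuming the current directory is the vadvisor directory holding the
transferred files:
$ mkdir images
$ mv *.png images/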

__ 14. On the Windows or Linux local system, open a browser, then open the XML file in
your directory in the browser. An easy way to do this is to use the file explorer and
double-click the XML file name.

__ 15. Look at the data available and pay particular attention to the VIOS - CPU panel.
What do you observe?
Example CPU panel:

One would expect the tool to suggest a higher entitled capacity and more virtual
processors, since the CPU utilization was over 100% of entitlement. At least the
tool highlighted this important area for you to investigate.

__ 16. Look at the VIOS - Memory panel. Note that the tool always complains if there is
less than 2.5 GB of memory configured in a VIOS partition.
Example memory panel:



__ 17. Left-click some of the cells in the chart to see help information.

__ 18. Close the browser.

__ 19. Remove the vadvisor folder (or directory) that you created.
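
Note: The yes processes started in step 3 are still consuming CPU on the VIOS. A
minimal cleanup sketch from the root shell (oem_setup_env); the PIDs shown are
examples, so kill the PIDs reported on your own system:
# ps -ef | grep [y]es
# kill 8978558 8192090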

__ 20. Let your instructor know that you have completed this exercise.

End of exercise


Exercise review/wrap-up
This exercise provided hands-on use of the VIOS Performance Advisor tool. Students used
the part utility to capture 10 minutes of data after starting CPU-intensive processes with
the yes command. They then copied the resulting files to a local system and viewed the
report, which is in XML format, in a web browser.
