
Front cover

IBM PureFlex System and
IBM Flex System
Products and Technology

Describes the IBM Flex System Enterprise
Chassis and compute node technology

Provides details about available I/O
modules and expansion options

Explains networking and
storage configurations

David Watts
Randall Davis
Dave Ridley

ibm.com/redbooks
International Technical Support Organization

IBM PureFlex System and IBM Flex System Products
and Technology

October 2013

SG24-7984-03
Note: Before using this information and the product it supports, read the information in “Notices” on
page xi.

Fourth Edition (October 2013)

This edition applies to:

IBM PureFlex System
IBM Flex System Enterprise Chassis
IBM Flex System Manager
IBM Flex System x220 Compute Node
IBM Flex System x222 Compute Node
IBM Flex System x240 Compute Node
IBM Flex System x440 Compute Node
IBM Flex System p260 Compute Node
IBM Flex System p270 Compute Node
IBM Flex System p24L Compute Node
IBM Flex System p460 Compute Node
IBM Flex System V7000 Storage Node
IBM 42U 1100mm Enterprise V2 Dynamic Rack
IBM PureFlex System 42U Rack and 42U Expansion Rack

© Copyright International Business Machines Corporation 2012, 2013. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
Contents

Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi

Summary of changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
October 2013, Fourth Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
August 2013, Third Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
February 2013, Second Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii

Chapter 1. Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 IBM PureFlex System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 IBM Flex System overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.1 IBM Flex System Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.2 IBM Flex System Enterprise Chassis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.3 Compute nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2.4 Expansion nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2.5 Storage nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2.6 I/O modules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.3 This book. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

Chapter 2. IBM PureFlex System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2 Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2.1 Configurator for IBM PureFlex System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3 PureFlex solutions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3.1 PureFlex Solution for IBM i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3.2 PureFlex Solution for SmartCloud Desktop Infrastructure . . . . . . . . . . . . . . . . . . 16
2.4 IBM PureFlex System Express . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4.1 Available Express configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4.2 Chassis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.4.3 Compute nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.4.4 IBM Flex System Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.4.5 PureFlex Express storage requirements and options . . . . . . . . . . . . . . . . . . . . . . 21
2.4.6 Video, keyboard, mouse option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.4.7 Rack cabinet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.4.8 Available software for Power Systems compute nodes . . . . . . . . . . . . . . . . . . . . 25
2.4.9 Available software for x86-based compute nodes . . . . . . . . . . . . . . . . . . . . . . . . 26
2.5 IBM PureFlex System Enterprise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.5.1 Enterprise configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.5.2 Chassis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.5.3 Top-of-rack switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.5.4 Compute nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.5.5 IBM Flex System Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.5.6 PureFlex Enterprise storage options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.5.7 Video, keyboard, and mouse option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.5.8 Rack cabinet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.5.9 Available software for Power Systems compute node . . . . . . . . . . . . . . . . . . . . . 35
2.5.10 Available software for x86-based compute nodes . . . . . . . . . . . . . . . . . . . . . . . 35
2.6 Services for IBM PureFlex System Express and Enterprise . . . . . . . . . . . . . . . . . . . . . 36
2.6.1 PureFlex FCoE Customization Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.6.2 PureFlex Services for IBM i. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.6.3 Software and hardware maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.7 IBM SmartCloud Entry for Flex System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

Chapter 3. Systems management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.1 Management network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.2 Chassis Management Module. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.2.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.2.2 Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.3 Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.4 Compute node management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.4.1 Integrated Management Module II . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.4.2 Flexible service processor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.4.3 I/O modules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.5 IBM Flex System Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.5.1 IBM Flex System Manager functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.5.2 Hardware overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.5.3 Software features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.5.4 User interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.5.5 Mobile System Management application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.5.6 Flex System Manager CLI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

Chapter 4. Chassis and infrastructure configuration . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.1.1 Front of the chassis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.1.2 Midplane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.1.3 Rear of the chassis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
4.1.4 Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
4.1.5 Air filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.1.6 Compute node shelves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.1.7 Hot plug and hot swap components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.2 Power supplies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
4.3 Fan modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.4 Fan logic module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.5 Front information panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.6 Cooling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.7 Power supply selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
4.7.1 Power policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
4.7.2 Number of power supplies required for N+N and N+1 . . . . . . . . . . . . . . . . . . . . . 95
4.8 Fan module population . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
4.9 Chassis Management Module. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
4.10 I/O architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
4.11 I/O modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
4.11.1 I/O module LEDs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
4.11.2 Serial access cable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
4.11.3 I/O module naming scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
4.11.4 Switch to adapter compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.11.5 IBM Flex System EN6131 40Gb Ethernet Switch . . . . . . . . . . . . . . . . . . . . . . . 117
4.11.6 IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch . . . . . . . . 121
4.11.7 IBM Flex System Fabric EN4093 and EN4093R 10Gb Scalable Switch . . . . . 129
4.11.8 IBM Flex System Fabric SI4093 System Interconnect Module . . . . . . . . . . . . . 136
4.11.9 IBM Flex System EN4091 10Gb Ethernet Pass-thru Module . . . . . . . . . . . . . . 142
4.11.10 IBM Flex System EN2092 1Gb Ethernet Scalable Switch . . . . . . . . . . . . . . . 144
4.11.11 IBM Flex System FC5022 16Gb SAN Scalable Switch. . . . . . . . . . . . . . . . . . 148
4.11.12 IBM Flex System FC3171 8Gb SAN Switch . . . . . . . . . . . . . . . . . . . . . . . . . . 155
4.11.13 IBM Flex System FC3171 8Gb SAN Pass-thru. . . . . . . . . . . . . . . . . . . . . . . . 158
4.11.14 IBM Flex System IB6131 InfiniBand Switch . . . . . . . . . . . . . . . . . . . . . . . . . . 160
4.12 Infrastructure planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
4.12.1 Supported power cords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
4.12.2 Supported PDUs and UPS units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
4.12.3 Power planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
4.12.4 UPS planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
4.12.5 Console planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
4.12.6 Cooling planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
4.12.7 Chassis-rack cabinet compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
4.13 IBM 42U 1100mm Enterprise V2 Dynamic Rack . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
4.14 IBM PureFlex System 42U Rack and 42U Expansion Rack . . . . . . . . . . . . . . . . . . . 178
4.15 IBM Rear Door Heat eXchanger V2 Type 1756 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180

Chapter 5. Compute nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
5.1 IBM Flex System Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
5.2 IBM Flex System x220 Compute Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
5.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
5.2.2 Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.2.3 Chassis support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5.2.4 System architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
5.2.5 Processor options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
5.2.6 Memory options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
5.2.7 Internal disk storage controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
5.2.8 Supported internal drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
5.2.9 Embedded 1 Gb Ethernet controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
5.2.10 I/O expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
5.2.11 Integrated virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
5.2.12 Systems management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
5.2.13 Operating system support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
5.3 IBM Flex System x222 Compute Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
5.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
5.3.2 Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
5.3.3 Chassis support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
5.3.4 System architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
5.3.5 Processor options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
5.3.6 Memory options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
5.3.7 Supported internal drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
5.3.8 Expansion Node support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
5.3.9 Embedded 10Gb Virtual Fabric adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
5.3.10 Mid-mezzanine I/O adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
5.3.11 Integrated virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
5.3.12 Systems management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
5.3.13 Operating system support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
5.4 IBM Flex System x240 Compute Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
5.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
5.4.2 Features and specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
5.4.3 Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
5.4.4 Chassis support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
5.4.5 System architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
5.4.6 Processor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
5.4.7 Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
5.4.8 Standard onboard features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
5.4.9 Local storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
5.4.10 Integrated virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
5.4.11 Embedded 10 Gb Virtual Fabric adapter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
5.4.12 I/O expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
5.4.13 Systems management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
5.4.14 Operating system support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
5.5 IBM Flex System x440 Compute Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
5.5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
5.5.2 Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
5.5.3 Chassis support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
5.5.4 System architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
5.5.5 Processor options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
5.5.6 Memory options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
5.5.7 Internal disk storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
5.5.8 Embedded 10Gb Virtual Fabric. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
5.5.9 I/O expansion options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
5.5.10 Network adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
5.5.11 Storage host bus adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
5.5.12 Integrated virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
5.5.13 Light path diagnostics panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
5.5.14 Operating systems support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
5.6 IBM Flex System p260 and p24L Compute Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
5.6.1 Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
5.6.2 System board layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
5.6.3 IBM Flex System p24L Compute Node. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
5.6.4 Front panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
5.6.5 Chassis support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
5.6.6 System architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
5.6.7 Processor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
5.6.8 Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
5.6.9 Active Memory Expansion. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
5.6.10 Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
5.6.11 I/O expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
5.6.12 System management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
5.6.13 Operating system support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
5.7 IBM Flex System p270 Compute Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
5.7.1 Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
5.7.2 System board layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
5.7.3 Comparing the p260 and p270 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
5.7.4 Front panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
5.7.5 Chassis support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
5.7.6 System architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
5.7.7 IBM POWER7+ processor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
5.7.8 Memory subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
5.7.9 Active Memory Expansion feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
5.7.10 Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
5.7.11 I/O expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
5.7.12 System management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
5.7.13 Operating system support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
5.8 IBM Flex System p460 Compute Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
5.8.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
5.8.2 System board layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
5.8.3 Front panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
5.8.4 Chassis support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
5.8.5 System architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
5.8.6 Processor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
5.8.7 Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
5.8.8 Active Memory Expansion feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
5.8.9 Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
5.8.10 Local storage and cover options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
5.8.11 Hardware RAID capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
5.8.12 I/O expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
5.8.13 System management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
5.8.14 Integrated features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
5.8.15 Operating system support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
5.9 IBM Flex System PCIe Expansion Node. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
5.9.1 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
5.9.2 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
5.9.3 Supported PCIe adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
5.9.4 Supported I/O expansion cards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
5.10 IBM Flex System Storage Expansion Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
5.10.1 Supported nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
5.10.2 Features on Demand upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
5.10.3 Cache upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
5.10.4 Supported HDD and SSD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
5.11 I/O adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
5.11.1 Form factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
5.11.2 Naming structure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
5.11.3 Supported compute nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
5.11.4 Supported switches. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
5.11.5 IBM Flex System EN2024 4-port 1Gb Ethernet Adapter. . . . . . . . . . . . . . . . . . 376
5.11.6 IBM Flex System EN4132 2-port 10Gb Ethernet Adapter. . . . . . . . . . . . . . . . . 377
5.11.7 IBM Flex System EN4054 4-port 10Gb Ethernet Adapter. . . . . . . . . . . . . . . . . 378
5.11.8 IBM Flex System EN6132 2-port 40Gb Ethernet Adapter. . . . . . . . . . . . . . . . . 380
5.11.9 IBM Flex System CN4054 10Gb Virtual Fabric Adapter . . . . . . . . . . . . . . . . . . 381
5.11.10 IBM Flex System CN4058 8-port 10Gb Converged Adapter . . . . . . . . . . . . . 384
5.11.11 IBM Flex System EN4132 2-port 10Gb RoCE Adapter. . . . . . . . . . . . . . . . . . 387
5.11.12 IBM Flex System FC3172 2-port 8Gb FC Adapter . . . . . . . . . . . . . . . . . . . . . 389
5.11.13 IBM Flex System FC3052 2-port 8Gb FC Adapter . . . . . . . . . . . . . . . . . . . . . 391
5.11.14 IBM Flex System FC5022 2-port 16Gb FC Adapter . . . . . . . . . . . . . . . . . . . . 393
5.11.15 IBM Flex System FC5024D 4-port 16Gb FC Adapter . . . . . . . . . . . . . . . . . . . 394
5.11.16 IBM Flex System FC5052 2-port and FC5054 4-port 16Gb FC Adapters. . . . 396
5.11.17 IBM Flex System FC5172 2-port 16Gb FC Adapter . . . . . . . . . . . . . . . . . . . . 398
5.11.18 IBM Flex System IB6132 2-port FDR InfiniBand Adapter . . . . . . . . . . . . . . . . 400
5.11.19 IBM Flex System IB6132 2-port QDR InfiniBand Adapter. . . . . . . . . . . . . . . . 401
5.11.20 IBM Flex System IB6132D 2-port FDR InfiniBand Adapter. . . . . . . . . . . . . . . 403

Chapter 6. Network integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
6.1 Choosing the Ethernet switch I/O module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
6.2 Virtual local area networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
6.3 Scalability and performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
6.4 High Availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
6.4.1 Highly available topologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
6.4.2 Spanning Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
6.4.3 Link aggregation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
6.4.4 NIC teaming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
6.4.5 Trunk failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
6.4.6 Virtual Router Redundancy Protocol. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
6.5 FCoE capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
6.6 Virtual Fabric vNIC solution capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
6.6.1 Virtual Fabric mode vNIC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
6.6.2 Switch-independent mode vNIC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
6.7 Unified Fabric Port feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
6.8 Easy Connect concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
6.9 Stacking feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
6.10 Openflow support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
6.11 802.1Qbg Edge Virtual Bridge support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
6.12 SPAR feature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
6.13 Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
6.13.1 Management tools and their capabilities. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
6.14 Summary and conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437

Chapter 7. Storage integration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439


7.1 IBM Flex System V7000 Storage Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
7.1.1 V7000 Storage Node types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
7.1.2 Controller Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
7.1.3 Expansion Modules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
7.1.4 SAS cabling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
7.1.5 Host interface cards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
7.1.6 Fibre Channel over Ethernet with a V7000 Storage Node . . . . . . . . . . . . . . . . . 454
7.1.7 V7000 Storage Node drive options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
7.1.8 Features and functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
7.1.9 Licenses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
7.1.10 Configuration restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
7.2 External storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
7.2.1 IBM Storwize V7000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
7.2.2 IBM XIV Storage System series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
7.2.3 IBM System Storage DS8000 series. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
7.2.4 IBM System Storage DS5000 series. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
7.2.5 IBM System Storage V3700 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
7.2.6 IBM System Storage DS3500 series. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
7.2.7 IBM network-attached storage products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
7.2.8 IBM FlashSystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465
7.2.9 IBM System Storage TS3500 Tape Library . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
7.2.10 IBM System Storage TS3310 series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
7.2.11 IBM System Storage TS3200 Tape Library . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
7.2.12 IBM System Storage TS3100 Tape Library . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
7.3 Fibre Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
7.3.1 FC requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
7.3.2 FC switch selection and fabric interoperability rules . . . . . . . . . . . . . . . . . . . . . . 469
7.4 FCoE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473

7.5 iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
7.6 HA and redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
7.7 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
7.8 Backup solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
7.8.1 Dedicated server for centralized LAN backup. . . . . . . . . . . . . . . . . . . . . . . . . . . 479
7.8.2 LAN-free backup for nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
7.9 Boot from SAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
7.9.1 Implementing Boot from SAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
7.9.2 iSCSI SAN Boot specific considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481

Abbreviations and acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483

Related publications and education. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485


IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
IBM education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487

Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.

Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.

© Copyright IBM Corp. 2012, 2013. All rights reserved. xi


Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US
registered or common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
Active Cloud Engine™, Active Memory™, AIX®, AIX 5L™, AS/400®, BladeCenter®, DB2®,
DS4000®, DS8000®, Easy Tier®, EnergyScale™, eServer™, FICON®, FlashCopy®,
FlashSystem™, IBM®, IBM FlashSystem™, IBM Flex System™, IBM Flex System Manager™,
IBM SmartCloud®, iDataPlex®, Linear Tape File System™, Netfinity®, POWER®,
Power Systems™, POWER6®, POWER6+™, POWER7®, POWER7+™, PowerPC®, PowerVM®,
PureApplication™, PureData™, PureFlex™, PureSystems™, Real-time Compression™,
Redbooks®, Redbooks (logo)®, ServerProven®, ServicePac®, Storwize®, System Storage®,
System Storage DS®, System x®, Tivoli®, Tivoli Storage Manager FastBack®, VMready®,
X-Architecture®, XIV®

The following terms are trademarks of other companies:

Intel, Intel Xeon, Pentium, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered
trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Linear Tape-Open, LTO, Ultrium, the LTO Logo and the Ultrium logo are trademarks of HP, IBM Corp. and
Quantum in the U.S. and other countries.

Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.

Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Other company, product, or service names may be trademarks or service marks of others.

Preface

To meet today’s complex and ever-changing business demands, you need a solid foundation
of compute, storage, networking, and software resources. This system must be simple to
deploy, and be able to quickly and automatically adapt to changing conditions. You also need
to be able to take advantage of broad expertise and proven guidelines in systems
management, applications, hardware maintenance, and more.

The IBM® PureFlex™ System combines no-compromise system designs along with built-in
expertise and integrates them into complete, optimized solutions. At the heart of PureFlex
System is the IBM Flex System™ Enterprise Chassis. This fully integrated infrastructure
platform supports a mix of compute, storage, and networking resources to meet the demands
of your applications.

The solution is easily scaled by adding another chassis with the required nodes. With the
IBM Flex System Manager™, multiple chassis can be monitored from a single panel.
The 14-node, 10U chassis delivers high-speed performance complete with integrated
servers, storage, and networking. This flexible chassis is simple to deploy now, and to scale
to meet your needs in the future.

This IBM Redbooks® publication describes IBM PureFlex System and IBM Flex System. It
highlights the technology and features of the chassis, compute nodes, management features,
and connectivity options. Guidance is provided about every major component, and about
networking and storage connectivity.

This book is intended for customers, Business Partners, and IBM employees who want to
know the details about the new family of products. It assumes that you have a basic
understanding of blade server concepts and general IT knowledge.



Authors
This book was produced by a team of specialists from around the world working at the
International Technical Support Organization, Raleigh Center.

David Watts is a Consulting IT Specialist at the IBM ITSO Center in
Raleigh. He manages residencies and produces IBM Redbooks
publications on hardware and software topics that are related to
IBM Flex System, IBM System x®, and BladeCenter® servers and
associated client platforms. He has authored over 200 books,
papers, and Product Guides. He holds a Bachelor of Engineering
degree from the University of Queensland (Australia), and has
worked for IBM in the United States and Australia since 1989. David
is an IBM Certified IT Specialist and a member of the IT Specialist
Certification Review Board.

Randall Davis is a Senior IT Specialist working in the System x
pre-sales team for IBM Australia as a Field Technical Sales Support
(FTSS) specialist. He regularly performs System x, BladeCenter,
and Storage demonstrations for customers at the IBM
Demonstration Centre in Melbourne, Australia. He also helps
instruct Business Partners and customers on how to configure and
install the BladeCenter. His areas of expertise are the IBM
BladeCenter, System x servers, VMware, and Linux. Randall
started at IBM as a System 36 and AS/400® Engineer in 1989.

Dave Ridley is the PureFlex and Flex System Technical Product
Manager for IBM in the United Kingdom and Ireland. His role
includes product transition planning, supporting marketing events,
press briefings, managing the UK loan pool, running early ship
programs, and supporting the local sales and technical teams. He is
based in Horsham in the United Kingdom, and has worked for IBM
since 1998. In addition, he has been involved with IBM x86 products
for 27 years.

Thanks to the authors of the previous editions of this book.


• Authors of the second edition, IBM PureFlex System and IBM Flex System Products and
  Technology, published in February 2013, were:
  David Watts
  Dave Ridley
• Authors of the first edition, IBM PureFlex System and IBM Flex System Products and
  Technology, published in July 2012, were:
  David Watts
  Randall Davis
  Richard French
  Lu Han
  Dave Ridley
  Cristian Rojas

Thanks to the following people for their contributions to this project:

From IBM marketing:


• TJ Aspden
• Michael Bacon
• John Biebelhausen
• Mark Cadiz
• Bruce Corregan
• Mary Beth Daughtry
• Meleata Pinto
• Mike Easterly
• Diana Cunniffe
• Kyle Hampton
• Botond Kiss
• Shekhar Mishra
• Sander Kim
• Dean Parker
• Hector Sanchez
• David Tareen
• David Walker
• Randi Wood
• Bob Zuber

From IBM development:


• Mike Anderson
• Sumanta Bahali
• Wayne Banks
• Barry Barnett
• Keith Cramer
• Mustafa Dahnoun
• Dean Duff
• Royce Espey
• Kaena Freitas
• Jim Gallagher
• Dottie Gardner
• Sam Gaver
• Phil Godbolt
• Mike Goodman
• John Gossett
• Tim Hiteshew
• Andy Huryn
• Bill Ilas
• Don Keener
• Caroline Metry
• Meg McColgan
• Mark McCool
• Rob Ord
• Greg Pruett
• Mike Solheim
• Fang Su
• Vic Stankevich
• Tan Trinh
• Rochelle White
• Dale Weiler
• Mark Welch
• Al Willard

From the International Technical Support Organization:


• Kevin Barnes
• Tamikia Barrow
• Mary Comianos
• Deana Coble
• Shari Deiana
• Cheryl Gera
• Ilya Krutov
• Karen Lawrence

Others from IBM around the world:


• Kerry Anders
• Simon Casey
• Bill Champion
• Jonathan A Tyrrell
• Michael L. Nelson
• Kiron Rakkar
• Matt Slavin
• Fabien Willmann

Others from other companies:


• Tom Boucher, Emulex
• Brad Buland, Intel
• Jeff Lin, Emulex
• Chris Mojica, QLogic
• Brent Mosbrook, Emulex
• Jimmy Myers, Brocade
• Haithuy Nguyen, Mellanox
• Brian Sparks, Mellanox
• Matt Wineberg, Brocade

Now you can become a published author, too!
Here’s an opportunity to spotlight your skills, grow your career, and become a published
author—all at the same time! Join an ITSO residency project and help write a book in your
area of expertise, while honing your experience using leading-edge technologies. Your efforts
will help to increase product acceptance and customer satisfaction, as you expand your
network of technical contacts and relationships. Residencies run from two to six weeks in
length, and you can participate either in person or as a remote resident working from your
home base.

Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us!

We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
• Use the online Contact us review Redbooks form found at:
  ibm.com/redbooks
• Send your comments in an email to:
  redbooks@us.ibm.com
• Mail your comments to:
  IBM Corporation, International Technical Support Organization
  Dept. HYTD Mail Station P099
  2455 South Road
  Poughkeepsie, NY 12601-5400

Stay connected to IBM Redbooks


• Find us on Facebook:
  http://www.facebook.com/IBMRedbooks
• Follow us on Twitter:
  http://twitter.com/ibmredbooks
• Look for us on LinkedIn:
  http://www.linkedin.com/groups?home=&gid=2130806
• Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
  weekly newsletter:
  https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
• Stay current on recent Redbooks publications with RSS Feeds:
  http://www.redbooks.ibm.com/rss.html

Summary of changes

This section describes the technical changes that were made in this edition of the book and in
previous editions. This edition might also include minor corrections and editorial changes that
are not identified.

Summary of Changes
for SG24-7984-03
for IBM PureFlex System and IBM Flex System Products and Technology
as created or updated on October 23, 2013 10:17 pm.

October 2013, Fourth Edition


This revision reflects the addition, deletion, or modification of new and changed information
that is described here.

New information
The following new products were added to the book:
• IBM PureFlex System Express
• IBM PureFlex System Enterprise
• IBM SmartCloud® Entry 3.2

These products are described in Chapter 2, “IBM PureFlex System” on page 11.

Important: The Flex System components that were announced in October 2013 will be
covered in the next edition of this book.

August 2013, Third Edition


This revision reflects the addition, deletion, or modification of new and changed information
that is described below.

New information
The following new products and options were added to the book:
• IBM Flex System x222 Compute Node
• IBM Flex System p260 Compute Node (POWER7+™ SCM)
• IBM Flex System p270 Compute Node (POWER7+ DCM)
• IBM Flex System p460 Compute Node (POWER7+ SCM)
• IBM Flex System EN6132 2-port 40Gb Ethernet Adapter
• IBM Flex System FC5052 2-port 16Gb FC Adapter
• IBM Flex System FC5054 4-port 16Gb FC Adapter
• IBM Flex System FC5172 2-port 16Gb FC Adapter
• IBM Flex System FC5024D 4-port 16Gb FC Adapter
• IBM Flex System IB6132D 2-port FDR InfiniBand Adapter
• IBM Flex System Fabric SI4093 System Interconnect Module
• IBM Flex System EN6131 40Gb Ethernet Switch



February 2013, Second Edition
This revision reflects the addition, deletion, or modification of new and changed information
that is described below.

New information
The following new products and options were added to the book:
• IBM SmartCloud Entry V2.4
• IBM Flex System Manager V1.2
• IBM Flex System Fabric EN4093R 10Gb Scalable Switch
• IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch
• FoD license upgrades for the IBM Flex System FC5022 16Gb SAN Scalable Switch
• IBM PureFlex System 42U Rack
• 2100-W power supply option for the Enterprise Chassis
• New options and models of the IBM Flex System x220 Compute Node
• IBM Flex System x440 Compute Node
• Additional solid-state drive options for all x86 compute nodes
• IBM Flex System p260 Compute Node, model 23X with IBM POWER7+ processors
• New memory options for the IBM Power Systems™ compute nodes
• IBM Flex System Storage Expansion Node
• IBM Flex System PCIe Expansion Node
• IBM Flex System CN4058 8-port 10Gb Converged Adapter
• IBM Flex System EN4132 2-port 10Gb RoCE Adapter
• IBM Flex System V7000 Storage Node

Changed information
The following updates were made to existing product information:
• Updated the configurations of IBM PureFlex System Express, Standard, and Enterprise
• Switch stacking feature of Ethernet switches
• FCoE and iSCSI support

1

Chapter 1. Introduction
During the last 100 years, information technology moved from a specialized tool to a
pervasive influence on nearly every aspect of life. From tabulating machines that counted with
mechanical switches or vacuum tubes to the first programmable computers, IBM has been a
part of this growth. The goal has always been to help customers to solve problems. IT is a
constant part of business and of general life. The expertise of IBM in delivering IT solutions
has helped the planet become more efficient. As organizational leaders seek to extract more
real value from their data, business processes, and other key investments, IT is moving to the
strategic center of business.

To meet these business demands, IBM has introduced a new category of systems. These
systems combine the flexibility of general-purpose systems, the elasticity of cloud computing,
and the simplicity of an appliance that is tuned to the workload. Expert integrated systems are
essentially the building blocks of capability. This new category of systems represents the
collective knowledge of thousands of deployments, established guidelines, innovative
thinking, IT leadership, and distilled expertise.

The offerings are designed to deliver value in the following ways:


• Built-in expertise helps you to address complex business and operational tasks
  automatically.
• Integration by design helps you to tune systems for optimal performance and efficiency.
• Simplified experience, from design to purchase to maintenance, creates efficiencies
  quickly.

These offerings are optimized for performance and virtualized for efficiency. These systems
offer a no-compromise design with system-level upgradeability. The capability is built for
cloud, containing “built-in” flexibility and simplicity.



IBM PureFlex System is an expert integrated system. It is an infrastructure system with
built-in expertise that deeply integrates with the complex IT elements of an infrastructure.

This chapter describes the IBM PureFlex System and the components that make up this
compelling offering and includes the following topics:
• 1.1, “IBM PureFlex System” on page 3
• 1.2, “IBM Flex System overview” on page 6
• 1.3, “This book” on page 10

1.1 IBM PureFlex System
To meet today’s complex and ever-changing business demands, you need a solid foundation
of server, storage, networking, and software resources. Furthermore, it must be simple to
deploy, and able to quickly and automatically adapt to changing conditions. You also need
access to, and the ability to take advantage of, broad expertise and proven guidelines in
systems management, applications, hardware maintenance, and more.

IBM PureFlex System is a comprehensive infrastructure system that provides an expert
integrated computing system. It combines servers, enterprise storage, networking,
virtualization, and management into a single structure. Its built-in expertise enables
organizations to manage and flexibly deploy integrated patterns of virtual and hardware
resources through unified management. These systems are ideally suited for customers who
want a system that delivers the simplicity of an integrated solution while still being able to
tune middleware and the runtime environment.

IBM PureFlex System uses workload placement that is based on virtual machine compatibility
and resource availability. By using built-in virtualization across servers, storage, and
networking, the infrastructure system enables automated scaling of resources and true
workload mobility.

IBM PureFlex System has undergone significant testing and experimentation so that it can
mitigate IT complexity without compromising the flexibility to tune systems to the tasks that
businesses demand. By providing flexibility and simplicity, IBM PureFlex System can provide
extraordinary levels of IT control, efficiency, and operating agility. This combination enables
businesses to rapidly deploy IT services at a reduced cost. Moreover, the system is built on
decades of expertise. This expertise enables deep integration and central management of the
comprehensive, open-choice infrastructure system. It also dramatically cuts down on the
skills and training that are required for managing and deploying the system.

IBM PureFlex System combines advanced IBM hardware and software along with patterns of
expertise. It integrates them into three optimized configurations that are simple to acquire and
deploy so you get fast time to value.

IBM PureFlex System is built and integrated before shipment so it can be quickly deployed
into the data center. PureFlex System is shipped complete, integrated within a rack that
incorporates all the required power, networking, and SAN cabling, together with all the
associated switches, compute nodes, and storage.

Figure 1-1 on page 4 shows an IBM PureFlex System 42U rack, complete with its distinctive
PureFlex door.

Figure 1-1 IBM PureFlex System

The PureFlex System includes the following configurations:


• IBM PureFlex System Express, which is designed for small and medium businesses and
  is the most affordable entry point for PureFlex System.
• IBM PureFlex System Standard, which is optimized for application servers with supporting
  storage and networking, and is designed to support your key ISV solutions.
• IBM PureFlex System Enterprise, which is optimized for transactional and database
  systems. It has built-in redundancy for highly reliable and resilient operation to support
  your most critical workloads.

These configurations are summarized in Table 1-1.

Table 1-1 IBM PureFlex System configurations


• IBM PureFlex System 42U Rack: Express 1, Standard 1, Enterprise 1
• IBM Flex System Enterprise Chassis: Express 1, Standard 1, Enterprise 1
• IBM Flex System Fabric EN4093 10Gb Scalable Switch: Express 1, Standard 1,
  Enterprise 2 (with both port-count upgrades)
• IBM Flex System FC3171 8Gb SAN Switch (a): Express 1, Standard 2, Enterprise 2
• IBM Flex System FC5022 24-port 16Gb ESB SAN Scalable Switch (a): Express 1,
  Standard 2, Enterprise 2
• IBM Flex System Manager Node: Express 1, Standard 1, Enterprise 1
• IBM Flex System Manager software license: Express: IBM Flex System Manager with
  1-year service and support; Standard and Enterprise: IBM Flex System Manager
  Advanced with 3-year service and support
• Chassis Management Module: Express 2, Standard 2, Enterprise 2
• Chassis power supplies (std/max): Express 2/6, Standard 4/6, Enterprise 6/6
• Chassis 80 mm fan modules (std/max): Express 4/8, Standard 6/8, Enterprise 8/8
• IBM Flex System V7000 Storage Node (b): Yes (redundant controller) in all three
  configurations
• IBM Storwize V7000 Disk System (b): Yes (redundant controller) in all three
  configurations
• IBM Storwize V7000 Software: Express: Base with 1-year software maintenance
  agreement, optional Real Time Compression; Standard and Enterprise: Base with 3-year
  software maintenance agreement, Real Time Compression

a. Select the IBM Flex System FC3171 8Gb SAN Switch or IBM Flex System FC5022 24-port
   16Gb ESB SAN Scalable Switch module.
b. Select the IBM Flex System V7000 Storage Node that is installed inside the Enterprise
   chassis or the external IBM Storwize® V7000 Disk System.

The fundamental building blocks of the three IBM PureFlex System solutions are the compute
nodes, storage nodes, and networking of the IBM Flex System Enterprise Chassis.

1.2 IBM Flex System overview
IBM Flex System is a full system of hardware that forms the underlying strategic basis of IBM PureFlex System, IBM PureApplication™ System, and other IBM PureSystems™ offerings. IBM Flex System optionally includes a management appliance, known as Flex System Manager.

IBM Flex System is the next-generation blade chassis offering from IBM, which features the latest innovations and advanced technologies.

The major components of the IBM Flex System are described next.

1.2.1 IBM Flex System Manager


IBM Flex System Manager (FSM) is a high performance scalable systems management
appliance with a preinstalled software stack. It is designed to optimize the physical and virtual
resources of the Flex System infrastructure while simplifying and automating repetitive tasks.
Flex System Manager provides easy system setup procedures with wizards and built-in expertise, and consolidated monitoring for all of your resources, including compute, storage, networking, and virtualization.

It is an ideal solution that allows you to reduce administrative expense and focus your efforts
on business innovation.

A single user interface controls the following features:

- Intelligent automation
- Resource pooling
- Improved resource usage
- Complete management integration
- Simplified setup

As an appliance, Flex System Manager is delivered preinstalled on a dedicated compute node platform that is designed for this specific purpose. It is intended to configure, monitor, and manage IBM Flex System resources in up to 16 IBM Flex System Enterprise Chassis, which optimizes time-to-value. FSM provides an instant resource-oriented view of the Enterprise Chassis and its components, which provides vital information for real-time monitoring.
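The scale of a single management domain follows directly from these numbers. The following Python sketch is illustrative arithmetic only (not an IBM tool); it combines the 16-chassis limit stated above with the 14 standard node bays per Enterprise Chassis:

```python
# Illustrative scale of a single Flex System Manager (FSM) management domain.
MAX_MANAGED_CHASSIS = 16     # one FSM manages up to 16 Enterprise Chassis
NODE_BAYS_PER_CHASSIS = 14   # standard (half-width) bays per chassis

max_node_bays = MAX_MANAGED_CHASSIS * NODE_BAYS_PER_CHASSIS
print(max_node_bays)  # 224 standard node bays under one appliance
```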

An increased focus on optimizing time-to-value is evident in the following features:

- Setup wizards, including initial setup wizards, provide intuitive and quick setup of the Flex System Manager.
- The Chassis Map provides multiple view overlays to track health, firmware inventory, and environmental metrics.
- Configuration management provides repeatable setup of compute, network, and storage devices.
- The remote presence application provides remote access to compute nodes with single sign-on.
- Quick search provides results as you type.

Beyond the physical world of inventory, configuration, and monitoring, IBM Flex System Manager enables virtualization and workload optimization for a new class of computing:

- Resource usage: Detects congestion, applies notification policies, and relocates physical and virtual machines, including storage and network configurations, within the network fabric.
- Resource pooling: Pooled network switching, with placement advisors that consider virtual machine (VM) compatibility, processor, availability, and energy.
- Intelligent automation: Automated and dynamic VM placement that is based on usage, hardware predictive failure alerts, and host failures.

Figure 1-2 shows the IBM Flex System Manager appliance.

Figure 1-2 IBM Flex System Manager

1.2.2 IBM Flex System Enterprise Chassis


The IBM Flex System Enterprise Chassis is the foundation of the Flex System offering, which
features 14 standard (half-width) Flex System form factor compute node bays in a 10U
chassis that delivers high-performance connectivity for your integrated compute, storage,
networking, and management resources.

Up to a total of 28 independent servers can be accommodated in each Enterprise Chassis if double-dense x222 compute nodes are deployed.

The chassis is designed to support multiple generations of technology, and offers independently scalable resource pools for higher usage and lower cost per workload.

With the ability to house up to 14 nodes, supporting the intermixing of IBM Power Systems and Intel x86 architectures, the Enterprise Chassis provides flexibility and tremendous compute capacity in a 10U package. Additionally, the rear of the chassis accommodates four high-speed I/O bays that support up to 40 Gb Ethernet, 16 Gb Fibre Channel, or 56 Gb InfiniBand connectivity. By interconnecting compute nodes, networking, and storage through a high-performance and scalable midplane, the Enterprise Chassis can support the latest high-speed networking technologies.
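As a back-of-the-envelope illustration of this density, the following Python sketch (illustrative arithmetic only; it ignores rack space that is used by switches, PDUs, and storage) computes the maximum server counts per chassis and per 42U rack:

```python
# Back-of-the-envelope density math for the Enterprise Chassis (illustrative only).
CHASSIS_HEIGHT_U = 10        # Enterprise Chassis height in rack units
BAYS_PER_CHASSIS = 14        # standard (half-width) node bays per chassis
SERVERS_PER_X222_BAY = 2     # a double-dense x222 packs two servers per bay

def max_servers(chassis_count, double_dense=False):
    """Maximum independent servers across chassis_count chassis."""
    servers_per_bay = SERVERS_PER_X222_BAY if double_dense else 1
    return chassis_count * BAYS_PER_CHASSIS * servers_per_bay

# Four 10U chassis fit in a 42U rack, ignoring space for other equipment.
chassis_per_42u_rack = 42 // CHASSIS_HEIGHT_U

print(max_servers(1))                                        # 14 standard nodes
print(max_servers(1, double_dense=True))                     # 28 servers with x222
print(max_servers(chassis_per_42u_rack, double_dense=True))  # 112 per 42U rack
```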

The ground-up design of the Enterprise Chassis reaches new levels of energy efficiency
through innovations in power, cooling, and air flow. Simpler controls and futuristic designs
allow the Enterprise Chassis to break free of “one size fits all” energy schemes.

The ability to support the demands of tomorrow's workloads is built in with a new I/O architecture, which provides choice and flexibility in fabric and speed. With the ability to use Ethernet, InfiniBand, Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), and iSCSI, the Enterprise Chassis is uniquely positioned to meet the growing and future I/O needs of large and small businesses.

Figure 1-3 shows the IBM Flex System Enterprise Chassis.

Figure 1-3 The IBM Flex System Enterprise Chassis

1.2.3 Compute nodes


IBM Flex System offers compute nodes that vary in architecture, dimension, and capabilities.

Optimized for efficiency, density, performance, reliability, and security, the portfolio includes a range of IBM POWER® and Intel Xeon based nodes that are designed to make full use of the capabilities of these processors and that can be mixed within the same Enterprise Chassis.

Power Systems nodes are available in two-socket and four-socket varieties that use IBM POWER7® and IBM POWER7+ processors. Also available is a POWER7 node that is optimized for cost-effective deployment of Linux.

Compute nodes that use Intel processors are available, ranging from the two-socket Intel Xeon E5-2400 and E5-2600 product families to the four-socket Intel Xeon E5-4600 product family.

Up to 28 two-socket Intel Xeon E5-2400 servers can be deployed in a single Enterprise Chassis where high-density cloud, virtual desktop, or server virtualization is desired.

Figure 1-4 shows a four socket IBM POWER7 compute node, the p460.

Figure 1-4 IBM Flex System p460 Compute Node

The nodes are complemented with leadership I/O capabilities of up to 16 channels of high-speed I/O lanes per standard-wide node bay and 32 lanes per full-wide node bay. Various I/O adapters and matching I/O modules are available.

1.2.4 Expansion nodes


Expansion nodes can be attached to certain standard form factor (half-width) Flex System
compute nodes, which allows the expansion of the nodes’ capabilities with locally attached
storage or PCIe adapters.

The IBM Flex System Storage Expansion Node provides locally attached disk expansion to the x240 and x220. SAS and SATA disks are supported.

With the attachment of the IBM Flex System PCIe Expansion Node, an x220 or x240 can have up to four PCIe adapters attached. High-performance GPUs from companies such as Intel and NVIDIA can also be installed in the PCIe Expansion Node.

1.2.5 Storage nodes


The storage capabilities of IBM Flex System give you advanced functionality with storage
nodes in your system and make full use of your existing storage infrastructure through
advanced virtualization.

Storage is available within the chassis by using the IBM Flex System V7000 Storage Node
that integrates with the Flex System Chassis or externally with the IBM Storwize V7000.

IBM Flex System simplifies storage administration with a single user interface for all your
storage. The management console is integrated with the comprehensive management
system. These management and storage capabilities allow you to virtualize third-party
storage with nondisruptive migration of your current storage infrastructure. You can also make
use of intelligent tiering so you can balance performance and cost for your storage needs.
The solution also supports local and remote replication and snapshots for flexible business
continuity and disaster recovery capabilities.

Flex System can also be connected to various external storage systems.

1.2.6 I/O modules
The range of available modules and switches that support key network protocols allows you to configure IBM Flex System to fit your infrastructure without sacrificing readiness for the future. The networking resources in IBM Flex System
are standards-based, flexible, and fully integrated into the system. This combination gives you
no-compromise networking for your solution. Network resources are virtualized and managed
by workload. These capabilities are automated and optimized to make your network more
reliable and simpler to manage.

IBM Flex System gives you the following key networking capabilities:

- Supports the networking infrastructure that you have today, including Ethernet, FC, FCoE, and InfiniBand.
- Offers industry-leading performance with 1 Gb, 10 Gb, and 40 Gb Ethernet; 8 Gb and 16 Gb Fibre Channel; and QDR and FDR InfiniBand.
- Provides pay-as-you-grow scalability so you can add ports and bandwidth when needed.

Networking in data centers is undergoing a transition from a discrete traditional model to a more flexible, optimized model. The network architecture in IBM Flex System was designed to address the key challenges customers are facing today in their data centers. The key focus areas of the network architecture on this platform are unified network management, optimized and automated network virtualization, and simplified network infrastructure.

Providing innovation, leadership, and choice in the I/O module portfolio uniquely positions
IBM Flex System to provide meaningful solutions to address customer needs.

Figure 1-5 shows the IBM Flex System Fabric EN4093R 10Gb Scalable Switch.

Figure 1-5 IBM Flex System Fabric EN4093R 10Gb Scalable Switch

1.3 This book


This book describes the IBM Flex System components in detail. It addresses the technology and features of the chassis, compute nodes, management features, and connectivity and storage options. It starts with a description of the systems management features of the product portfolio.


Chapter 2. IBM PureFlex System


IBM PureFlex System is one member of the IBM PureSystems range of expert integrated
systems. PureSystems deliver Application as a Service (AaaS) such as the PureApplication
System and PureData™ System, and Infrastructure as a Service (IaaS), which can be
enabled with IBM PureFlex System.

This chapter includes the following topics:

- 2.1, "Introduction" on page 12
- 2.2, "Components" on page 13
- 2.3, "PureFlex solutions" on page 15
- 2.4, "IBM PureFlex System Express" on page 17
- 2.5, "IBM PureFlex System Enterprise" on page 27
- 2.6, "Services for IBM PureFlex System Express and Enterprise" on page 36
- 2.7, "IBM SmartCloud Entry for Flex system" on page 39
© Copyright IBM Corp. 2012, 2013. All rights reserved. 11


2.1 Introduction
IBM PureFlex System provides an integrated computing system that combines servers,
enterprise storage, networking, virtualization, and management into a single structure. You
can use its built-in expertise to manage and flexibly deploy integrated patterns of virtual and
hardware resources through unified management.

PureFlex System includes the following features:

- Configurations that ease the acquisition experience and match your needs.
- Optimized to align with targeted workloads and environments.
- Designed for cloud with the SmartCloud Entry option.
- Choice of architecture, operating system, and virtualization engine.
- Designed for simplicity with integrated, single-system management across physical and virtual resources.
- Ships as a single integrated entity directly to you.
- Includes factory integration and lab services optimization.

Revised in the fourth quarter of 2013, IBM PureFlex System now consolidates the three
previous offerings (Express, Standard, and Enterprise) into two simplified pre-integrated
offerings (Express and Enterprise) that support the latest compute, storage, and networking
requirements. Clients can select from either of these offerings that help simplify ordering and
configuration. As a result, PureFlex System helps cut the cost, time, and complexity of
system deployments, which reduces the time to gain real value.

Latest enhancements include support for the latest compute nodes, I/O modules, and I/O
adapters with the latest release of software, such as IBM SmartCloud Entry with the latest
Flex System Manager release.

PureFlex 4Q 2013 includes the following enhancements:

- New PureFlex Express
- New PureFlex Enterprise
- New rack offerings for Express: 25U, 42U, or none
- New compute nodes: x222, p270, p460
- New networking support: 10 GbE converged
- New SmartCloud Entry v3.2 offering
The IBM PureFlex System includes the following two offerings:

- Express: An infrastructure system for small-sized and midsized businesses; the most cost-effective entry point with choice and flexibility to upgrade to higher function. For more information, see 2.4, "IBM PureFlex System Express" on page 17.
- Enterprise: An infrastructure system that is optimized for scalable cloud deployments with built-in redundancy for highly reliable and resilient operation to support critical applications and cloud services. For more information, see 2.5, "IBM PureFlex System Enterprise" on page 27.

2.2 Components
A PureFlex System configuration features the following main components:

- A preinstalled and configured IBM Flex System Enterprise Chassis.
- Choice of compute nodes with IBM POWER7, POWER7+, or Intel Xeon E5-2400 and E5-2600 processors.
- IBM Flex System Manager, preinstalled with management software and licenses for software activation.
- IBM Flex System V7000 Storage Node or IBM Storwize V7000 external storage system.
- Hardware components preinstalled in the IBM PureFlex System rack:
  – Express: 25U rack, 42U rack, or no rack configured
  – Enterprise: 42U rack only
- Choice of software:
  – Operating system: IBM AIX®, IBM i, Microsoft Windows, Red Hat Enterprise Linux, or SUSE Linux Enterprise Server
  – Virtualization software: IBM PowerVM®, KVM, VMware vSphere, or Microsoft Hyper-V
  – SmartCloud Entry 3.2 (for more information, see 2.7, "IBM SmartCloud Entry for Flex system" on page 39)
- Complete pre-integrated software and hardware.
- Optional onsite services to get you up and running and provide skill transfer.

The hardware differences between Express and Enterprise are summarized in Table 2-1, which shows the base configuration of each offering; these configurations can be further customized with the IBM configuration tools.

Table 2-1   PureFlex System hardware overview configurations

Components                          PureFlex Express                  PureFlex Enterprise
---------------------------------   -------------------------------   -------------------------------
PureFlex Rack                       Optional: 42U, 25U, or no rack    Required: 42U rack
Flex System Enterprise Chassis      Required: single chassis only     Required: 1, 2, or 3 chassis
Chassis power supplies (min/max)    2/6                               2/6
Chassis fans (min/max)              4/8                               4/8
Flex System Manager                 Required                          Required
Compute nodes, POWER or x86         p260, p270, p460, x220, x222,     p260, p270, p460, x220, x222,
based (one minimum)                 x240, x440                        x240, x440
VMware ESXi USB key                 Selectable on x86 nodes           Selectable on x86 nodes
Top-of-rack switches                Optional: integrated by client    Integrated by IBM
Integrated 1GbE switch              Selectable (redundant)            Selectable (redundant)
Integrated 10GbE switch             Selectable (redundant)            Selectable (redundant)
Integrated 16Gb Fibre Channel       Selectable (redundant)            Selectable (redundant)
Converged 10GbE switch (FCoE)       Selectable (redundant or          Selectable (redundant)
                                    non-redundant)
IBM Storwize V7000 or V7000         Required and selectable           Required and selectable
Storage Node
Media enclosure                     Selectable: DVD, or DVD and       Selectable: DVD, or DVD and
                                    tape                              tape

PureFlex System software can also be customized in a similar manner to the hardware components of the two offerings. Enterprise has a slightly different composition of software defaults than Express, as summarized in Table 2-2.

Table 2-2   PureFlex software defaults overview

Software                       Express                          Enterprise
----------------------------   ------------------------------   ------------------------------
Storage                        Storwize V7000 or Flex System V7000: Base;
                               Real Time Compression (optional)
Flex System Manager (FSM)      FSM Standard,                    FSM Advanced,
                               upgradeable to Advanced          selectable to Standard (a)
IBM Virtualization             PowerVM Standard,                PowerVM Enterprise,
                               upgradeable to Enterprise        selectable to Standard
Virtualization, customer       VMware, Microsoft Hyper-V, KVM, Red Hat, and SUSE Linux
installed
Operating systems              AIX Standard (V6 and V7), IBM i (7.1, 6.1), RHEL (6),
                               SUSE (SLES 11); customer installed: Windows Server, RHEL, SLES
Security                       Power SC Standard (AIX only); Tivoli Provisioning Manager
                               (x86 only)
Cloud                          SmartCloud Entry (optional)
Software maintenance           Standard 1 year, upgradeable to three years

a. Advanced is required for Power Systems.

2.2.1 Configurator for IBM PureFlex System


For the latest Express and Enterprise PureFlex System offerings, the e-config configuration tool must be used. Configurations can include x86 compute nodes, Power Systems compute nodes, or both. The e-config configurator is available at this website:
http://ibm.com/services/econfig/announce/

2.3 PureFlex solutions
To enhance the integrated offerings that are available from IBM, two new PureFlex-based solutions are available: one focused on IBM i and the other on Virtual Desktop.

These solutions, which can be selected within the IBM configurators for ease of ordering, are
integrated at the IBM factory before they are delivered to the client.

Services are also available to complement these PureFlex Solutions offerings.

2.3.1 PureFlex Solution for IBM i


The IBM PureFlex System Solution for IBM i is a combination of IBM i and an IBM PureFlex
System with POWER and x86 processor-based compute nodes that provides a completely
integrated business system.

By consolidating their IBM i and x86 based applications onto a single platform, the solution
offers an attractive alternative for small and midsized clients who want to reduce IT costs and
complexity in a mixed environment.

The PureFlex Solution for IBM i is based on the PureFlex Express offering and includes the following features:

- Complete integrated hardware and software solution:
  – Simple, one-button ordering fully enabled in the configurator
  – All hardware is pre-configured, integrated, and cabled
  – Software preinstallation of IBM i OS, PowerVM, Flex System Manager, and V7000 Storage software
- Reliability and redundancy that IBM i clients demand:
  – Redundant switches and I/O
  – Pre-configured dual VIOS servers
  – Internal storage with pre-configured drives, RAID, and mirroring
- Optimally sized to get started quickly:
  – p260 compute node configured for IBM i
  – x86 compute node configured for x86 workloads
  – Ideal for infrastructure consolidation of multiple workloads
- Management integration across all resources:
  Flex System Manager simplifies management of all resources within PureFlex.
- IBM Lab Services (optional) to accelerate deployment:
  Skilled PureFlex and IBM i experts perform integration, deployment, and migration services onsite; these services can be delivered by IBM or by a Business Partner.



2.3.2 PureFlex Solution for SmartCloud Desktop Infrastructure
The IBM PureFlex System Solution for SmartCloud Desktop Infrastructure (SDI) lowers the cost and complexity of existing desktop environments while securely managing a growing mobile workforce.

This integrated infrastructure solution is made available for clients who want to deploy
desktop virtualization. It is optimized to deliver performance, fast time to value, and security
for Virtual Desktop Infrastructure (VDI) environments.

The solution uses IBM’s breadth of hardware offerings, software, and services to complete
successful VDI deployments. It contains predefined configurations that are highlighted in the
reference architectures that include integrated Systems Management and VDI management
nodes.

PureFlex Solution for SDI provides performance and flexibility for VDI and includes the following features:

- Choice of compute nodes for specific client requirements, including the high-density x222 node.
- Windows Storage Servers and the Flex System V7000 Storage Node provide block and file storage for non-persistent and persistent VDI deployments.
- Flex System Manager and Virtual Desktop Management Servers easily and efficiently manage virtual desktops and VDI infrastructure.
- Converged FCoE offers clients superior networking performance.
- Windows 2012 and VMware View are available.
- New reference architectures for Citrix XenDesktop and VMware View are available.

For more information about these and other VDI offerings, see the IBM SmartCloud Desktop
Infrastructure page at this website:
http://ibm.com/systems/virtualization/desktop-virtualization/

2.4 IBM PureFlex System Express
The tables in this section represent the hardware, software, and services that make up an IBM PureFlex System Express offering. The following items are described:

- 2.4.1, "Available Express configurations"
- 2.4.2, "Chassis" on page 20
- 2.4.3, "Compute nodes" on page 20
- 2.4.4, "IBM Flex System Manager" on page 21
- 2.4.5, "PureFlex Express storage requirements and options" on page 21
- 2.4.6, "Video, keyboard, mouse option" on page 24
- 2.4.7, "Rack cabinet" on page 25
- 2.4.8, "Available software for Power Systems compute nodes" on page 25
- 2.4.9, "Available software for x86-based compute nodes" on page 26

To specify IBM PureFlex System Express in the IBM ordering system, specify the indicator
feature code that is listed in Table 2-3 for each machine type.

Table 2-3   Express indicator feature code

AAS feature code   XCC feature code   Description
----------------   ----------------   -----------------------------------------------------------
EFDA               Not applicable     IBM PureFlex System Express Indicator Feature Code
EBM1               Not applicable     IBM PureFlex System Express with PureFlex Solution for
                                      IBM i Indicator Feature Code

2.4.1 Available Express configurations


The PureFlex Express configuration is available in a single chassis as a traditional Ethernet and Fibre Channel combination, or in converged networking configurations that use Fibre Channel over Ethernet (FCoE) or Internet Small Computer System Interface (iSCSI). The required storage in these configurations can be an IBM Storwize V7000 or an IBM Flex System V7000 Storage Node. Compute nodes can be Power or x86 based, or a combination of both.

The IBM Flex System Manager provides the system management for the PureFlex environment.

Ethernet and Fibre Channel combinations have the following characteristics:

- Power, x86, or hybrid combinations of compute nodes
- 1Gb or 10Gb Ethernet adapters, or LAN on Motherboard (LOM, x86 only)
- 1Gb or 10Gb Ethernet switches
- 16Gb (or 8Gb for x86 only) Fibre Channel adapters
- 16Gb (or 8Gb for x86 only) Fibre Channel switches

FCoE configurations have the following characteristics:

- Power, x86, or hybrid combinations of compute nodes
- 10Gb Converged Network Adapters (CNA) or LOM (x86 only)
- 10Gb converged network switch or switches

Configurations
There are seven orderable configurations within the PureFlex Express offering. These configurations cover various redundant and non-redundant setups with different protocols and storage controllers.



Table 2-4 summarizes the PureFlex Express offerings.

Table 2-4   PureFlex Express offerings

Configuration              1A        2A        2B        3A        3B        4A        4B
------------------------   -------   -------   -------   -------   -------   -------   -------
Ethernet networking        10 GbE    10 GbE    10 GbE    1 GbE     1 GbE     10 GbE    10 GbE
Fibre Channel networking   FCoE      FCoE      FCoE      16 Gb     16 Gb     16 Gb     16 Gb
Switches (up to 16) (a)    1         2         2         4         4         4         4
Storage controller         V7000     V7000     Storwize  V7000     Storwize  V7000     Storwize
                           Storage   Storage   V7000     Storage   V7000     Storage   V7000
                           Node      Node                Node                Node

The following characteristics apply across the configurations:

- Chassis: One chassis with two Chassis Management Modules, fans, and power supply units (PSUs).
- Rack: None, 42U, or 25U (with PDUs).
- TF3 KVM tray: Optional.
- Media enclosure: DVD only, or DVD and tape.
- V7000 options: Storage options of 24 HDD; 22 HDD + 2 SSD; 20 HDD + 4 SSD; or custom. Storwize expansion is limited to a single rack in Express (overflow storage rack in Enterprise), nine units per controller. Up to two Storwize V7000 controllers and up to nine IBM Flex System V7000 Storage Nodes.
- V7000 content: VIOS, AIX, IBM i, and SCE on the first controller.
- Nodes: p260, p270, p460, x222, x240, x220, x440.
- POWER node Ethernet I/O adapters: CN4058 8-port 10Gb Converged Adapter (FCoE configurations); EN2024 4-port 1Gb Ethernet Adapter (1 GbE configurations); EN4054 4-port 10Gb Ethernet Adapter (10 GbE with Fibre Channel configurations).
- POWER node Fibre Channel I/O adapters: Not applicable to FCoE configurations; FC5054 4-port 16Gb FC Adapter otherwise.
- x86 node Ethernet adapters: CN4054 10Gb Virtual Fabric Adapter or LAN on Motherboard (2-port 10GbE) for FCoE configurations; EN2024 4-port 1Gb Ethernet Adapter for 1 GbE configurations; EN4054 4-port 10Gb Ethernet Adapter or LAN on Motherboard (2-port 10GbE) for 10 GbE with Fibre Channel configurations.
- x86 node Fibre Channel I/O adapters: Not applicable to FCoE configurations; FC5022 16Gb 2-port, FC3052 8Gb 2-port, or FC5024D 4-port Fibre Channel adapters otherwise.
- ESXi USB key: Optional with x86 nodes.
- Port FoD activations: Ports are computed during configuration based on the chassis switch, node type, and I/O adapter selection.
- IBM i PureFlex Solution: Available with selected configurations only; not configurable with the others.
- VDI PureFlex Solution: Not configurable.

a. The switches that are required are determined by the total number of racks.

Example configuration
There are seven configurations for PureFlex Express, as described in Table 2-4 on page 18. Configuration 2B features a single chassis with an external IBM Storwize V7000 controller. This solution uses FCoE and includes the CN4093 converged switch module, which provides a Fibre Channel Forwarder (FCF). This means that only converged adapters must be installed on the nodes; the CN4093 breaks out Ethernet and Fibre Channel externally from the chassis.

Figure 2-1 shows the connections, including the Fibre Channel and Ethernet data networks
and the management network that is presented to the Access Points within the PureFlex
Rack. The green box signifies the chassis and its components with the inter-switch link
between the two switches.

Because this is an Express solution, it is an entry configuration.

Figure 2-1 PureFlex Express with FCoE and external Storwize V7000



2.4.2 Chassis
The IBM Flex System Enterprise Chassis contains all the components of the PureFlex
Express configuration with the exception of the IBM Storwize V7000 and any expansion
enclosure. The chassis is installed in a 25U or 42U rack. The compute nodes, storage nodes,
switch modules and IBM Flex System Manager are installed in the chassis. When the V7000
Storage Node is chosen as the storage type, a “no rack” option is also available.

Table 2-5 lists the major components of the Enterprise Chassis, including the switches and
options.

Feature codes: For brevity, the tables in this section do not list all feature codes.

Table 2-5   Components of the chassis and switches

AAS feature code   XCC feature code   Description
----------------   ----------------   ------------------------------------------------------------
7893-92X           8721-HC1           IBM Flex System Enterprise Chassis
7955-01M           8731-AC1           IBM Flex System Manager
A0TF               3598               IBM Flex System EN2092 1Gb Ethernet Scalable Switch
ESW7               A3J6               IBM Flex System Fabric EN4093R 10Gb Scalable Switch
ESW2               A3HH               IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch
EB28               5053               IBM SFP+ SR Transceiver
EB29               3268               IBM SFP RJ45 Transceiver
3286               5075               IBM 8Gb SFP+ SW Optical Transceiver
3771               A2RQ               IBM Flex System FC5022 24-port 16Gb ESB SAN Scalable Switch
5370               5084               Brocade 8Gb SFP+ SW Optical Transceiver
9039               A0TM               Base Chassis Management Module
3592               A0UE               Additional Chassis Management Module

2.4.3 Compute nodes


The PureFlex System Express requires at least one of the following compute nodes:

- IBM Flex System p24L, p260, p270, or p460 Compute Nodes, IBM POWER7 or POWER7+ based (see Table 2-6)
- IBM Flex System x220, x222, x240, or x440 Compute Nodes, x86 based (see Table 2-7 on page 21)

Table 2-6   Power based compute nodes

AAS feature code   MTM        Description
----------------   --------   -----------------------------------------------------------
0497               1457-7FL   IBM Flex System p24L Compute Node
0437               7895-22X   IBM Flex System p260 Compute Node
ECSD               7895-23A   IBM Flex System p260 Compute Node (POWER7+, 4 cores only)
ECS3               7895-23X   IBM Flex System p260 Compute Node (POWER7+)
0438               7895-42X   IBM Flex System p460 Compute Node
ECS9               7895-43X   IBM Flex System p460 Compute Node (POWER7+)
ECS4               7954-24X   IBM Flex System p270 Compute Node (POWER7+)

Table 2-7   x86 based compute nodes

AAS feature code   MTM        Description
----------------   --------   -----------------------------------------
ECS7               7906-25X   IBM Flex System x220 Compute Node
ECSB               7916-27X   IBM Flex System x222 Compute Node
0457               7863-10X   IBM Flex System x240 Compute Node
ECSB               7917-45X   IBM Flex System x440 Compute Node

2.4.4 IBM Flex System Manager


The IBM Flex System Manager (FSM) is a high-performance, scalable system management
appliance. It is based on the IBM Flex System x240 Compute Node. The FSM hardware
comes preinstalled with Systems Management software that you can use to configure,
monitor, and manage IBM PureFlex Systems.

The IBM Flex System Manager 7955-01M includes the following features:

- Intel Xeon E5-2650 8-core 2.0 GHz processor (20 MB cache, 1600 MHz, 95 W)
- 32 GB of 1333 MHz RDIMM memory
- Two 200 GB 1.8-inch SATA MLC SSDs in a RAID 1 configuration
- 1 TB 2.5-inch SATA 7.2K RPM hot-swap 6 Gbps HDD
- IBM Open Fabric Manager
- Optional FSM Advanced, which adds a VMControl Enterprise license

2.4.5 PureFlex Express storage requirements and options


The PureFlex Express configuration requires a SAN-attached storage system.

The following storage options are available:

- IBM Storwize V7000
- IBM Flex System V7000 Storage Node

The required number of drives depends on the drive size and compute node type. All storage is configured as RAID-5 with a single hot spare, which is included in the total number of drives. The following configurations are available:

- Power Systems compute nodes only: 16x 300 GB or 8x 600 GB drives
- Hybrid (Power and x86): 16x 300 GB or 8x 600 GB drives
- Multi-chassis configurations: 24x 300 GB drives
- SmartCloud Entry is optional with Express; if it is selected, the following drives are available:
  – x86 based nodes only with SmartCloud Entry: 8x 300 GB or 8x 600 GB drives
  – Hybrid (both Power and x86) with SmartCloud Entry: 16x 300 GB or 600 GB drives
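As a rough sizing illustration, the following Python sketch computes the usable capacity of these drive options. It is a simplification: it treats each option as a single RAID-5 array with one drive's worth of parity and one hot spare, whereas a real V7000 configuration might split the drives across multiple arrays:

```python
# Rough usable-capacity math for the Express drive options (illustrative only).
def raid5_usable_gb(total_drives, drive_size_gb, hot_spares=1):
    """Usable GB of one RAID-5 array after parity and hot spares."""
    data_drives = total_drives - hot_spares - 1  # minus spare(s) and parity
    if data_drives < 2:
        raise ValueError("RAID-5 needs at least three members plus any spares")
    return data_drives * drive_size_gb

print(raid5_usable_gb(16, 300))  # 4200 GB from 16x 300 GB
print(raid5_usable_gb(8, 600))   # 3600 GB from 8x 600 GB
print(raid5_usable_gb(24, 300))  # 6600 GB from 24x 300 GB
```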



Solid-state drives (SSDs) are optional. However, if they are added to the configuration, they
are normally used for the V7000 Easy Tier® function, which improves system performance.

IBM Storwize V7000


The IBM Storwize V7000 that is shown in Figure 2-2 is one of the two storage options that is
available in a PureFlex Express configuration. This option is installed in the same rack as the
chassis. Other expansion units can be added in the same rack or an adjoining rack,
depending on the quantity that is ordered.

Figure 2-2 IBM Storwize V7000

The IBM Storwize V7000 consists of the following controller, disk, and software options:
򐂰 IBM Storwize V7000 Controller (2076-124)
򐂰 SSDs:
– 200 GB 2.5-inch
– 400 GB 2.5-inch
򐂰 Hard disk drives (HDDs):
– 300 GB 2.5-inch 10K
– 300 GB 2.5-inch 15K
– 600 GB 2.5-inch 10K
– 800 GB 2.5-inch 10K
– 900 GB 2.5-inch 10K
– 1 TB 2.5-inch 7.2K
– 1.2 TB 2.5-inch 10K
򐂰 Expansion Unit (2076-224): up to 9 per V7000 Controller
IBM Storwize V7000 Expansion Enclosure (24 disk slots)
򐂰 Optional software:
– IBM Storwize V7000 Remote Mirroring
– IBM Storwize V7000 External Virtualization
– IBM Storwize V7000 Real-time Compression™

IBM Flex System V7000 Storage Node


IBM Flex System V7000 Storage Node (as shown in Figure 2-3 on page 23) is one of the two
storage options that is available in a PureFlex Express configuration. This option uses four
compute node bays (2 wide x 2 high) in the Flex chassis. Up to two expansion units can also
be in the Flex chassis, each using four compute node bays. External expansion units are also
supported.

Figure 2-3 IBM Flex System V7000 Storage Node

The IBM Flex System V7000 Storage Node consists of the following controller, disk, and
software options:
򐂰 IBM Storwize V7000 Controller (4939-A49)
򐂰 SSDs:
– 200GB 2.5-inch
– 400GB 2.5-inch
– 800 GB 2.5-inch
򐂰 HDDs:
– 300GB 2.5-inch 10K
– 300GB 2.5-inch 15K
– 600GB 2.5-inch 10K
– 800GB 2.5-inch 10K
– 900GB 2.5-inch 10K
– 1 TB 2.5-inch 7.2K
– 1.2TB 2.5-inch 10K
򐂰 Expansion Unit (4939-A29)
IBM Storwize V7000 Expansion Enclosure (24 disk slots)
򐂰 Optional software:
– IBM Storwize V7000 Remote Mirroring
– IBM Storwize V7000 External Virtualization
– IBM Storwize V7000 Real-time Compression

7226 Multi-Media Enclosure


The 7226 system (as shown in Figure 2-4) is a rack-mounted enclosure that can be added to
any PureFlex Express configuration and features two drive bays that can hold one or two tape
drives, and up to four slim-design DVD-RAM drives. These drives can be mixed in any
combination of any available drive technology or electronic interface in a single 7226
Multimedia Storage Enclosure.

Figure 2-4 7226 Multi-Media Enclosure



The 7226 enclosure media devices offer support for SAS, USB, and Fibre Channel
connectivity, depending on the drive. Support in a PureFlex configuration includes the
external USB and Fibre Channel connections.

Table 2-8 shows the Multi-Media Enclosure and available PureFlex options.

Table 2-8 Multi-Media Enclosure and options


Machine type Feature Code Description

7226 Model 1U3 Multi-Media Enclosure

7226-1U3 5763 DVD Sled with DVD-RAM USB Drive

7226-1U3 8248 Half-high LTO Ultrium 5 FC Tape Drive

7226-1U3 8348 Half-high LTO Ultrium 6 FC Tape Drive

2.4.6 Video, keyboard, mouse option


The IBM 7316 Flat Panel Console Kit that is shown in Figure 2-5 is an option to any PureFlex
Express configuration that can provide local console support for the FSM and x86 based
compute nodes.

Figure 2-5 IBM 7316 Flat Panel Console

The console is a 19-inch, rack-mounted 1U unit that includes a language-specific IBM Travel
Keyboard. The console kit is used with the Console Breakout cable that is shown in
Figure 2-6. This cable provides serial, video, and two USB ports. The Console Breakout cable
can be attached to the keyboard, video, and mouse (KVM) connector on the front panel of x86
based compute nodes, including the FSM.

Figure 2-6 Console Breakout cable

The CMM in the chassis also allows direct connection to nodes via the internal chassis
management network, which communicates with the FSP or IMM2 on the node to allow
remote out-of-band management.

2.4.7 Rack cabinet


The Express configuration can be shipped with or without a rack. Rack options include 25U
and 42U sizes.

Table 2-9 lists the major components of the rack and options.

Table 2-9 Components of the rack


AAS feature code XCC feature code Description

42U

7953-94X 93634AX IBM 42U 1100mm Enterprise V2 Dynamic Rack

EU21 None PureFlex door

EC01 None Gray Door

EC03 None Side Cover Kit (Black)

EC02 None Rear Door (Black/flat)

25U

7014-S25 93072RX IBM S2 25U Standard Rack

ERGA None PureFlex door

None Gray Door

No Rack

4650 None No Rack specify

2.4.8 Available software for Power Systems compute nodes


In this section, we describe the software that is available for Power Systems compute nodes.

VIOS, AIX, and IBM i


VIOS is preinstalled on each Power Systems compute node, with a primary operating
system on the primary node of the PureFlex Express configuration. The primary OS can be
one of the following options:
򐂰 AIX v6.1
򐂰 AIX v7.1
򐂰 IBM i v7.1



RHEL and SUSE Linux on Power
VIOS is preinstalled on each selected Linux on Power compute node for the virtualization
layer. Client operating systems, such as Red Hat Enterprise Linux (RHEL) and SUSE Linux
Enterprise Server (SLES), can be ordered with the PureFlex Express configuration, but they
are not preinstalled. The following Linux on Power versions are available:
򐂰 RHEL v5U9 POWER7
򐂰 RHEL v6U4 POWER7 or POWER7+
򐂰 SLES v11SP2

2.4.9 Available software for x86-based compute nodes


x86-based compute nodes can be ordered with VMware ESXi 5.1 hypervisor preinstalled to
an internal USB key. Operating systems that are ordered with x86 based nodes are not
preinstalled. The following operating systems are available for x86 based nodes:
򐂰 Microsoft Windows Server 2008 Release 2
򐂰 Microsoft Windows Server Standard 2012
򐂰 Microsoft Windows Server Datacenter 2012
򐂰 Microsoft Windows Server Storage 2012
򐂰 RHEL
򐂰 SLES

2.5 IBM PureFlex System Enterprise
The tables in this section represent the hardware, software, and services that make up IBM
PureFlex System Enterprise. We describe the following items:
򐂰 2.5.1, “Enterprise configurations”
򐂰 2.5.2, “Chassis” on page 30
򐂰 2.5.3, “Top-of-rack switches” on page 30
򐂰 2.5.4, “Compute nodes” on page 31
򐂰 2.5.5, “IBM Flex System Manager” on page 31
򐂰 2.5.6, “PureFlex Enterprise storage options” on page 32
򐂰 2.5.7, “Video, keyboard, and mouse option” on page 34
򐂰 2.5.8, “Rack cabinet” on page 35
򐂰 2.5.9, “Available software for Power Systems compute node” on page 35
򐂰 2.5.10, “Available software for x86-based compute nodes” on page 35

To specify IBM PureFlex System Enterprise in the IBM ordering system, specify the indicator
feature code that is listed in Table 2-10 for each machine type.

Table 2-10 Enterprise indicator feature code


AAS feature code XCC feature code Description

EFDC Not applicable IBM PureFlex System Enterprise Indicator Feature Code

EVD1 Not applicable IBM PureFlex System Enterprise with PureFlex Solution for SmartCloud
Desktop Infrastructure

2.5.1 Enterprise configurations


PureFlex Enterprise is available in a single or multiple chassis (up to three chassis per rack)
configuration as a traditional Ethernet and Fibre Channel combination or a converged solution
that uses Converged Network Adapters and FCoE. All chassis in the configuration must use
the same connection technology. The required storage in these configurations can be an IBM
Storwize V7000 or an IBM Flex System V7000 Storage Node. Compute nodes can be Power
or x86 based, or a hybrid combination that includes both. The IBM Flex System Manager
provides the system management.

Ethernet and Fibre Channel Combinations have the following characteristics:


򐂰 Power, x86 or hybrid combinations of compute nodes
򐂰 1Gb or 10Gb Ethernet adapters or LAN on Motherboard (LOM, x86 only)
򐂰 10Gb Ethernet switches
򐂰 16Gb (or 8Gb for x86 only) Fibre Channel adapters
򐂰 16Gb (or 8Gb for x86 only) Fibre Channel switches

CNA configurations have the following characteristics:


򐂰 Power, x86 or hybrid combinations of compute nodes
򐂰 10Gb Converged Network Adapters (CNA) or LOM (x86 only)
򐂰 10Gb Converged Network switch or switches

Configurations
There are eight different orderable configurations within the enterprise PureFlex offerings.
These offerings cover various redundant and non-redundant configurations along with the
different types of protocol and storage controllers.



Table 2-11 summarizes the PureFlex Enterprise offerings that are fully configurable within the
IBM configuration tools.

Table 2-11 PureFlex Enterprise Offerings

Configuration 5A 5B 6A 6B 7A 7B 8A 8B

Networking Ethernet: 10 GbE (all configurations)

Networking Fibre Channel: FCoE (5A, 5B, 6A, 6B); 16 Gb (7A, 7B, 8A, 8B)

Number of switches (up to 18 maximum)a: 5A, 5B: 2; 6A, 6B: 1x: 2/8, 2x: 10, 3x: 12;
7A, 7B, 8A, 8B: 1x: 4/10, 2x: 14, 3x: 18

V7000 Storage Node or Storwize V7000: V7000 Storage Node (5A, 6A, 7A, 8A);
Storwize V7000 (5B, 6B, 7B, 8B)

Chassis 1, 2, or 3x Chassis with two Chassis management modules, fans, and PSUs

Rack 42U Rack mandatory

TF3 KVM Tray Optional

Media enclosure DVD only DVD and tape

V7000 Storage Options: (24 HDD, 22 HDD + 2 SSD, 20 HDD + 4 SSD or Custom)
Options Storwize expansion (limit to single rack in Express, overflow storage rack in Enterprise): nine units per
controller
Up to two Storwize V7000 controllers, up to nine IBM Flex System V7000 Storage Nodes

V7000 VIOS, AIX, IBM i, and SCE on first Controller


Content

Nodes p260, p270, p460, x222, x240, x220, x440

POWER Nodes Ethernet I/O Adapters: CN4058 8-port 10Gb Converged Adapter (FCoE
configurations); EN4054 4-port 10Gb Ethernet Adapter (16 Gb FC configurations)

POWER Nodes Fibre Channel I/O Adapters: Not applicable (FCoE configurations); FC5054
4-port 16Gb FC Adapter (16 Gb FC configurations)

x86 Nodes Ethernet I/O Adapters: CN4054 10Gb Virtual Fabric Adapter or LAN on
Motherboard (2-port 10GbE) + FCoE (FCoE configurations); EN4054 4-port 10Gb Ethernet
Adapter or LAN on Motherboard (2-port 10GbE) (16 Gb FC configurations)

x86 Nodes Fibre Channel I/O Adapters: Not applicable (FCoE configurations); FC5022 2-port
16Gb FC Adapter or FC3052 2-port 8Gb FC Adapter (16 Gb FC configurations)

ESXi USB Key Optional; for x86 compute nodes only

Port FoD Activations Ports are computed during configuration based upon chassis switch, node type, and the I/O adapter selection.

IBM i PureFlex Solution: Not configurable

VDI PureFlex Solution: Supported; Not configurable
a. 1x = 1 chassis, 2x = 2 chassis, and 3x = 3 chassis

Example configuration
There are eight different configuration starting points for PureFlex Enterprise, as described in
Table 2-11 on page 28. These configurations can be enhanced further with multi-chassis and
other storage configurations.

Figure 2-7 shows an example of the wiring for base configuration 6B, which is an Enterprise
PureFlex that uses an external Storwize V7000 enclosure and CN4093 10Gb Converged
Scalable Switch converged infrastructure switches. Also included are external SAN B24
switches and Top-of-Rack (TOR) G8264 Ethernet switches. The TOR switches aggregate the
data networks, which allows other chassis to be added to this solution (not shown).

Figure 2-7 PureFlex Enterprise with External V7000 and FCoE



This configuration also includes a management network that is composed of a 1GbE G8052
network switch.

The access points within the PureFlex chassis provide connections from the client's network
into the internal networking infrastructure of the PureFlex system and into the management
network.

2.5.2 Chassis
Table 2-12 lists the major components of the IBM Flex System Enterprise Chassis, including
the switches.

Feature codes: The tables in this section do not list all feature codes. Some
features are not listed here for brevity.

Table 2-12 Components of the chassis and switches


AAS feature code XCC feature code Description

7893-92X 8721-HC1 IBM Flex System Enterprise Chassis

7955-01M 8731-AC1 IBM Flex System Manager

A0TF 3598 IBM Flex System EN2092 1Gb Ethernet Scalable Switch

ESW2 A3HH IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch

ESW7 A3J6 IBM Flex System Fabric EN4093R 10Gb Scalable Switch

EB28 5053 IBM SFP+ SR Transceiver

EB29 3268 IBM SFP RJ45 Transceiver

3286 5075 IBM 8Gb SFP+ Software Optical Transceiver

3771 A2RQ IBM Flex System FC5022 24-port 16Gb ESB SAN Scalable Switch

5370 5084 Brocade 8Gb SFP+ Software Optical Transceiver

9039 A0TM Base Chassis Management Module

3592 A0UE Other Chassis Management Module

2.5.3 Top-of-rack switches


The PureFlex Enterprise configuration can include a complement of six TOR switches: two
IBM System Networking RackSwitch G8052 switches, two IBM System Networking
RackSwitch G8264 switches, and two IBM System Storage SAN24B-4 Express switches.
These switches are required in a multi-chassis configuration and are optional in a single
chassis configuration.

The TOR switch infrastructure is in place for aggregation purposes, which consolidate the
integration point of a multi-chassis system to core networks.

Table 2-13 lists the switch components.

Table 2-13 Components of the Top-of-Rack Ethernet switches


AAS feature code XCC feature code Description

1455-48E 7309-G52 IBM System Networking RackSwitch G8052R

1455-64C 7309-HC3 IBM System Networking RackSwitch G8264R

2498-B24 2498-24E IBM System Storage SAN24B-4 Express

2.5.4 Compute nodes


The PureFlex System Enterprise requires one or more of the following compute nodes:
򐂰 IBM Flex System p24L, p260, p270, or p460 Compute Nodes, IBM POWER7, or
POWER7+ based (see Table 2-14)
򐂰 IBM Flex System x220, x222, x240 or x440 Compute Nodes, x86 based (see Table 2-15)

Table 2-14 Power Systems compute nodes


AAS feature code MTM Description

0497 1457-7FL IBM Flex System p24L Compute Node

0437 7895-22x IBM Flex System p260 Compute Node

ECSD 7895-23A IBM Flex System p260 Compute Node (POWER7+ 4 core only)

ECS3 7895-23X IBM Flex System p260 Compute Node (POWER7+)

0438 7895-42X IBM Flex System p460 Compute Node

ECS9 7895-43X IBM Flex System p460 Compute Node (POWER7+)

ECS4 7954-24X IBM Flex System p270 Compute Node (POWER7+)

Table 2-15 x86 based compute nodes


AAS feature code MTM Description

ECS7 7906-25X IBM Flex System x220 Compute Node

ECSB 7916-27X IBM Flex System x222 Compute Node

0457 7863-10X IBM Flex System x240 Compute Node

ECS8 7917-45X IBM Flex System x440 Compute Node

2.5.5 IBM Flex System Manager


The IBM Flex System Manager (FSM) is a high-performance, scalable system management
appliance. It is based on the IBM Flex System x240 Compute Node. The FSM hardware
comes preinstalled with Systems Management software that you can use to configure,
monitor, and manage IBM PureFlex Systems.

FSM is based on the following components:


򐂰 Intel Xeon E5-2650 8C 2.0GHz 20MB 1600MHz 95W
򐂰 32 GB of 1333 MHz RDIMM memory
򐂰 Two 200 GB, 1.8-inch, SATA MLC SSDs in a RAID 1 configuration



򐂰 1 TB 2.5-inch SATA 7.2K RPM hot-swap 6 Gbps HDD
򐂰 IBM Open Fabric Manager
򐂰 Optional FSM Advanced, which adds a VMControl Enterprise license

2.5.6 PureFlex Enterprise storage options


Any PureFlex Enterprise configuration requires a SAN-attached storage system. Two storage
options are available, the integrated storage node or the external Storwize unit:
򐂰 IBM Storwize V7000
򐂰 IBM Flex System V7000 Storage Node

The required number of drives depends on drive size and compute node type. All storage is
configured with RAID-5 with a single hot spare that is included in the total number of drives.
The following configurations are available:
򐂰 Power based nodes only: 16 x 300 GB or 8 x 600 GB drives
򐂰 Hybrid (both Power and x86): 16 x 300 GB or 8 x 600 GB drives
򐂰 x86 based nodes only, including SmartCloud Entry: 8 x 300 GB or 8 x 600 GB drives
򐂰 Hybrid (both Power and x86) with SmartCloud Entry: 16 x 300 GB or 600 GB drives

SSDs are optional; however, if they are added to the configuration, they are normally used for
the V7000 Easy Tier function, which improves system performance.

IBM Storwize V7000


The IBM Storwize V7000 is one of the two storage options that is available in a PureFlex
Enterprise configuration. This option can be rack mounted in the same rack as the Enterprise
chassis. Other expansion units can be added in the same rack or a second rack, depending
on the quantity ordered.

The IBM Storwize V7000 consists of the following controller, disk, and software options:
򐂰 IBM Storwize V7000 Controller (2076-124)
򐂰 SSDs:
– 200 GB 2.5-inch
– 400 GB 2.5-inch
򐂰 HDDs:
– 300 GB 2.5-inch 10K
– 300 GB 2.5-inch 15K
– 600 GB 2.5-inch 10K
– 800 GB 2.5-inch 10K
– 900 GB 2.5-inch 10K
– 1 TB 2.5-inch 7.2K
– 1.2 TB 2.5-inch 10K
򐂰 Expansion Unit (2076-224): Up to nine per V7000 Controller
IBM Storwize V7000 Expansion Enclosure (24 disk slots)
򐂰 Optional software:
– IBM Storwize V7000 Remote Mirroring
– IBM Storwize V7000 External Virtualization
– IBM Storwize V7000 Real-time Compression

IBM Flex System V7000 Storage Node
IBM Flex System V7000 Storage Node is one of the two storage options that is available in a
PureFlex Enterprise configuration. This option uses four compute node bays (2 wide x 2 high)
in the Flex chassis. Up to two expansion units also can be in the Flex chassis, each using four
compute node bays. External expansion units are also supported.

The IBM Flex System V7000 Storage Node consists of the following controller, disk, and
software options:
򐂰 IBM Storwize V7000 Controller (4939-A49)
򐂰 SSDs:
– 200GB 2.5-inch
– 400GB 2.5-inch
– 800 GB 2.5-inch
򐂰 HDDs:
– 300GB 2.5-inch 10K
– 300GB 2.5-inch 15K
– 600GB 2.5-inch 10K
– 800GB 2.5-inch 10K
– 900GB 2.5-inch 10K
– 1 TB 2.5-inch 7.2K
– 1.2TB 2.5-inch 10K
򐂰 Expansion Unit (4939-A29)
IBM Storwize V7000 Expansion Enclosure (24 disk slots)
򐂰 Optional software:
– IBM Storwize V7000 Remote Mirroring
– IBM Storwize V7000 External Virtualization
– IBM Storwize V7000 Real-time Compression

7226 Multi-Media Enclosure


The 7226 system that is shown in Figure 2-8 is a rack-mounted enclosure that can be added
to any PureFlex Enterprise configuration and features two drive bays that can hold one or two
tape drives, one or two RDX removable disk drives, and up to four slim-design DVD-RAM
drives. These drives can be mixed in any combination of any available drive technology or
electronic interface in a single 7226 Multimedia Storage Enclosure.

Figure 2-8 7226 Multi-Media Enclosure

The 7226 enclosure media devices offer support for SAS, USB, and Fibre Channel
connectivity, depending on the drive. Support in a PureFlex configuration includes the
external USB and Fibre Channel connections.

Table 2-16 shows the Multi-Media Enclosure and available PureFlex options.



Table 2-16 Multi-Media Enclosure and options
Machine type Feature Code Description

7226 Model 1U3 Multi-Media Enclosure

7226-1U3 5763 DVD Sled with DVD-RAM USB Drive

7226-1U3 8248 Half-high LTO Ultrium 5 FC Tape Drive

7226-1U3 8348 Half-high LTO Ultrium 6 FC Tape Drive

2.5.7 Video, keyboard, and mouse option


The IBM 7316 Flat Panel Console Kit that is shown in Figure 2-9 is an option to any PureFlex
Enterprise configuration that can provide local console support for the FSM and x86 based
compute nodes.

Figure 2-9 IBM 7316 Flat Panel Console

The console is a 19-inch, rack-mounted 1U unit that includes a language-specific IBM Travel
Keyboard. The console kit is used with the Console Breakout cable that is shown in
Figure 2-10. This cable provides serial, video, and two USB ports. The Console Breakout
cable can be attached to the KVM connector on the front panel of x86 based compute nodes,
including the FSM.

Figure 2-10 Console Breakout cable

The CMM in the chassis also allows direct connection to nodes via the internal chassis
management network that communicates to the FSP or iMM2 on the node, which allows
remote out-of-band management.

2.5.8 Rack cabinet
The Enterprise configuration includes an IBM PureFlex System 42U Rack. Table 2-17 lists the
major components of the rack and options.

Table 2-17 Components of the rack


AAS feature code XCC feature code Description

7953-94X 93634AX IBM 42U 1100mm Enterprise V2 Dynamic Rack

EU21 None PureFlex Door

EC01 None Gray Door (selectable in place of EU21)

EC03 None Side Cover Kit (Black)

EC02 None Rear Door (Black/flat)

2.5.9 Available software for Power Systems compute node


In this section, we describe the software that is available for the Power Systems compute
node.

Virtual I/O Server, AIX, and IBM i


VIOS is preinstalled on each Power Systems compute node with a primary operating system
on the primary node of the PureFlex Enterprise configuration. The primary OS can be one of
the following options:
򐂰 AIX v6.1
򐂰 AIX v7.1
򐂰 IBM i v7.1

RHEL and SUSE Linux on Power


VIOS is preinstalled on each Linux on Power compute node for the virtualization layer. Client
operating systems (such as RHEL and SLES) can be ordered with the PureFlex Enterprise
configuration, but they are not preinstalled. The following Linux on Power versions are
available:
򐂰 RHEL v5U9 POWER7
򐂰 RHEL v6U4 POWER7 or POWER7+
򐂰 SLES v11SP2

2.5.10 Available software for x86-based compute nodes


x86 based compute nodes can be ordered with VMware ESXi 5.1 hypervisor preinstalled to
an internal USB key. Operating systems that are ordered with x86 based nodes are not
preinstalled. The following operating systems are available for x86 based nodes:
򐂰 Microsoft Windows Server 2008 Release 2
򐂰 Microsoft Windows Server Standard 2012
򐂰 Microsoft Windows Server Datacenter 2012
򐂰 Microsoft Windows Server Storage 2012
򐂰 RHEL
򐂰 SLES



2.6 Services for IBM PureFlex System Express and Enterprise
Services are recommended, but can be decoupled from a PureFlex configuration. The
following offerings are available and can be added to either PureFlex offering:
򐂰 PureFlex Introduction
This three-day offering provides IBM Flex System Manager and storage functions but
does not include external integration, virtualization, or cloud. It covers the setup of one
node.
򐂰 PureFlex Virtualized
This five-day Standard services offering includes all tasks of the PureFlex
Introduction and expands the scope to include virtualization, another FC switch, and up to
four nodes in total.
򐂰 PureFlex Enterprise
This offering provides advanced virtualization (including VMware clustering) but does not
include external integration or cloud. It covers up to four nodes in total.
򐂰 PureFlex Cloud
In addition to all of the tasks that are included in the PureFlex Virtualized offering, this
pre-packaged offering adds the configuration of the SmartCloud Entry environment, basic
network integration, and implementation of up to 13 nodes in the first chassis.
򐂰 PureFlex Extra Chassis Add-on
This services offering extends the implementation to another chassis (up to 14 nodes) and
up to two virtualization engines (for example, VMware ESXi, KVM, or PowerVM VIOS).

As shown in Table 2-18, the four main offerings are cumulative; for example, Enterprise takes
seven days in total and includes the scope of the Virtualized and Introduction services
offerings. PureFlex Extra Chassis is per chassis.

Table 2-18 PureFlex Service offerings


Function delivered PureFlex PureFlex PureFlex PureFlex PureFlex Extra
Intro Virtualized Enterprise Cloud Chassis Add-on
3 days 5 days 7 days 10 days 5 days

򐂰 One node Included Included Included Included No add-on


򐂰 FSM Configuration
򐂰 Discovery, Inventory
򐂰 Review Internal Storage
configuration
򐂰 Basic Network Integration using
pre-configured switches
(factory default)
򐂰 No external SAN integration
򐂰 No FCoE changes
򐂰 No Virtualization
򐂰 No Cloud
򐂰 Skills Transfer

Function delivered PureFlex PureFlex PureFlex PureFlex PureFlex Extra
Intro Virtualized Enterprise Cloud Chassis Add-on
3 days 5 days 7 days 10 days 5 days

򐂰 Basic virtualization (VMware, Not Included Included Included 򐂰 Configure up to


KVM, and VMControl) included 14 nodes within
򐂰 No external SAN Integration one chassis
򐂰 No Cloud 򐂰 Up to two
򐂰 Up to four nodes virtualization
engines (ESXi,
KVM, or
PowerVM)

򐂰 Advanced virtualization Not Not included Included Included 򐂰 Configure up to


򐂰 Server pools or VMware cluster included 14 nodes within
configured (VMware or one chassis
VMControl) 򐂰 Up to two
򐂰 No external SAN integration virtualization
򐂰 No FCoE Config Changes engines (ESXi,
򐂰 No Cloud KVM, or
PowerVM)

򐂰 Configure SmartCloud Entry Not Not included Not included Included 򐂰 Configure up to
򐂰 Basic External network included 14 nodes within
integration one chassis
򐂰 No FCoE Config changes 򐂰 Up to two
򐂰 No external SAN integration virtualization
򐂰 First chassis is configured with engines (ESXi,
13 nodes KVM, or
PowerVM)

In addition to the offerings that are listed in Table 2-18 on page 36, two other services
offerings are now available for PureFlex System and PureFlex IBM i Solution: PureFlex FCoE
Customization Service and PureFlex Services for IBM i.

2.6.1 PureFlex FCoE Customization Service


This new one-day services customization provides the following features:
򐂰 Design a new FCoE solution to meet customer requirements
򐂰 Change FCoE VLAN from default
򐂰 Modify internal FCoE Ports
򐂰 Change FCoE modes and Zoning

The prerequisites for the FCoE customization service are a PureFlex Intro, Virtualized, or
Cloud Service and FCoE on the system.

The service is limited to two pre-configured switches in a single chassis; no external SAN
configurations, other chassis, or switches are included.



2.6.2 PureFlex Services for IBM i
This package offers five days of support for the IBM i PureFlex Solution. IBM performs the
following PureFlex Virtualized services for a single Power node:
򐂰 Provisioning of a virtual server through VMControl basic provisioning for the Power node:
– Prepare, capture, and deploy an IBM i virtual server.
– Perform System Health and Monitoring with basic Automation Plans.
– Review Security and roles-based access.
򐂰 Services on a single x86 node:
– Verify VMware ESXi installation, create a virtual machine (VM), and install a Windows
Server operating system on the VM.
– Install and configure vCenter on the VM.

This service includes the following prerequisites:


򐂰 One p460 Power compute node
򐂰 Two IBM Flex System Fabric EN2092 10Gb Scalable Ethernet switch modules
򐂰 Two IBM Flex System 16Gb FC5022 chassis SAN scalable switches
򐂰 One IBM Flex System V7000 Storage node

This service does not include the following features:


򐂰 External SAN integration
򐂰 FCoE configuration changes
򐂰 Other chassis or switches

Services descriptions: The services descriptions that are given in this section, including
the number of service days, do not form a contracted deliverable. They are shown for
guidance only. In all cases, engage IBM Lab Services (or your chosen Business Partner) to
define a formal statement of work.

2.6.3 Software and hardware maintenance


The following service and support offerings can be selected to enhance the standard support
that is available with IBM PureFlex System:
򐂰 Service and Support:
– Software maintenance: 1-year 9x5 (9 hours per day, 5 days per week).
– Hardware maintenance: 3-year 9x5 Next Business Day service.
– 24x7 Warranty Service Upgrade
򐂰 Maintenance and Technical Support (MTS): three years with one microcode analysis per
year.

2.7 IBM SmartCloud Entry for Flex System
IBM SmartCloud Entry is an easy to deploy, simple to use software offering that features a
self-service portal for workload provisioning, virtualized image management, and monitoring.
It is an innovative, cost-effective approach that also includes security, automation, basic
metering, and integrated platform management.

IBM SmartCloud Entry is the first tier in a three-tier family of cloud offerings that is based on
the Common Cloud Stack (CCS) foundation. The following offerings form the CCS:
򐂰 SmartCloud Entry
򐂰 SmartCloud Provisioning
򐂰 SmartCloud Orchestrator

IBM SmartCloud Entry is an ideal choice to get started with a private cloud solution that can
scale and expand the number of cloud users and workloads. More importantly, SmartCloud
Entry delivers a single, consistent cloud experience that spans multiple hardware platforms
and virtualization technologies, which makes it a unique solution for enterprises with
heterogeneous IT infrastructure and a diverse range of applications.

SmartCloud Entry provides clients with comprehensive infrastructure as a service (IaaS)
capabilities.

For enterprise clients who are seeking advanced cloud benefits, such as deployment of
multi-workload patterns and Platform as a Service (PaaS) capabilities, IBM offers various
advanced cloud solutions. Because IBM’s cloud portfolio is built on a common foundation,
clients can purchase SmartCloud Entry initially and migrate to an advanced cloud solution in
the future. This standardized architecture facilitates client migrations to the advanced
SmartCloud portfolio solutions.

SmartCloud Entry offers simplified cloud administration with an intuitive interface that lowers
administrative overhead and improves operations productivity with an easy self-service user
interface. It is open and extensible for easy customization to help tailor to unique business
environments. The ability to standardize virtual machines and images reduces management
costs and accelerates responsiveness to changing business needs.

Extensive virtualization engine support includes the following hypervisors:


򐂰 PowerVM
򐂰 VMware vSphere 5
򐂰 KVM
򐂰 Microsoft Hyper-V

The latest release of PureFlex (announced October 2013) allows the selection of SmartCloud
Entry 3.2, which now supports Microsoft Hyper-V and Linux KVM by using OpenStack. The
product also allows the use of OpenStack APIs.
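Because SmartCloud Entry 3.2 exposes OpenStack APIs, clients can script against the standard OpenStack interfaces. The following minimal sketch builds the JSON body of an OpenStack Keystone v2.0 token request; the user name, password, and tenant are hypothetical placeholders, and the exact endpoint and supported API versions depend on the SmartCloud Entry installation.

```python
import json

def build_auth_request(username, password, tenant):
    """Build the JSON body for an OpenStack Keystone v2.0
    POST /v2.0/tokens authentication call."""
    return {
        "auth": {
            "tenantName": tenant,
            "passwordCredentials": {
                "username": username,
                "password": password,
            },
        }
    }

# Hypothetical credentials for illustration only
body = build_auth_request("cloudadmin", "secret", "demo")
print(json.dumps(body, indent=2))
```

The returned token would then be passed in the X-Auth-Token header on later compute (Nova) API calls.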

Also included is IBM Image Construction and Composition Tool (ICCT). ICCT on SmartCloud
is a web-based application that simplifies and automates virtual machine image creation.
ICCT is provided as an image that can be provisioned on SmartCloud.

You can simplify the creation and management of system images with the following
capabilities:
򐂰 Create “golden master” images and software appliances by using corporate-standard
operating systems.
򐂰 Convert images from physical systems or between various x86 hypervisors.
򐂰 Reliably track images to ensure compliance and minimize security risks.



򐂰 Optimize resources, which reduces the number of virtualized images and the storage
required for them.

Reduce time to value for new workloads with the following simple VM management options:
򐂰 Deploy application images across compute and storage resources.
򐂰 Offer users self-service for improved responsiveness.
򐂰 Enable security through VM isolation and project-level user access controls.
򐂰 Simplify deployment; there is no need to know all the details of the infrastructure.
򐂰 Protect your investment with support for existing virtualized environments.
򐂰 Optimize performance on IBM systems with dynamic scaling, expansive capacity and
continuous operation.

Improve efficiency with a private cloud that includes the following capabilities:
򐂰 Delegate provisioning to authorized users to improve productivity.
򐂰 Implement pay-per-use with built-in workload metering.
򐂰 Standardize deployment to improve compliance and reduce errors with policies and
templates.
򐂰 Simplify management of projects, billing, approvals and metering with an intuitive user
interface.
򐂰 Ease maintenance and problem diagnosis with integrated views of both physical and
virtual resources.

For more information about IBM SmartCloud Entry on Flex System, see this website:
http://www.ibm.com/systems/flex/smartcloud/bto/entry/

40 IBM PureFlex System and IBM Flex System Products and Technology

Chapter 3. Systems management


IBM Flex System Manager (the management component of IBM Flex System Enterprise
Chassis) and compute nodes are designed to help you get the most out of your IBM Flex
System installation. They also allow you to automate repetitive tasks. These management
interfaces can significantly reduce the number of manual navigational steps for typical
management tasks. They offer simplified system setup procedures by using wizards and
built-in expertise to consolidate monitoring for physical and virtual resources.

Note that from August 2013, Power Systems compute nodes that are installed in a Flex
System chassis can alternatively be managed by the Hardware Management Console (HMC)
and Integrated Virtualization Manager (IVM). This approach allows clients with existing
rack-based Power Systems servers to use a single management tool for both their rack and
Flex System nodes. However, systems management that is implemented in this way provides
none of the cross-element management functions that are available with the FSM (such as
management of x86 nodes, storage, networking, system pooling, or advanced virtualization
functions).

For the most complete and sophisticated broad management of a Flex System environment,
the FSM is recommended.

This chapter includes the following topics:


򐂰 3.1, “Management network” on page 42
򐂰 3.2, “Chassis Management Module” on page 43
򐂰 3.3, “Security” on page 46
򐂰 3.4, “Compute node management” on page 47
򐂰 3.5, “IBM Flex System Manager” on page 50

© Copyright IBM Corp. 2012, 2013. All rights reserved. 41


3.1 Management network
In an IBM Flex System Enterprise Chassis, you can configure separate management and
data networks.

The management network is a private and secure Gigabit Ethernet network. It is used to
complete management-related functions throughout the chassis, including management
tasks that are related to the compute nodes, switches, storage, and the chassis.

The management network is shown in Figure 3-1 as the blue line. It connects the Chassis
Management Module (CMM) to the compute nodes (and storage node, which is not shown),
the switches in the I/O bays, and the Flex System Manager (FSM). The FSM connection to
the management network is through a special Broadcom 5718-based management network
adapter (Eth0). The management networks in multiple chassis can be connected through the
external ports of the CMMs in each chassis through a GbE top-of-rack switch.

The yellow line in Figure 3-1 shows the production data network. The FSM also connects
to the production network (Eth1) so that it can access the Internet for product updates and
other related information.

Figure 3-1 Separate management and production data networks. (The diagram shows the CMM
connected over the private management network to the IMM or FSP of each compute node, to I/O
bays 1 and 2, and to the FSM Eth0 management network adapter; the FSM Eth1 port, an embedded
2-port 10 GbE controller with Virtual Fabric Connector, attaches to the data network, and the CMMs
of other chassis connect through a GbE top-of-rack switch.)

Tip: The management node console can be connected to the data network for convenient
access.

One of the key functions that the data network supports is the discovery of operating systems
on the various network endpoints. Discovery of operating systems by the FSM is required to
support software updates on an endpoint, such as a compute node. The FSM Checking and
Updating Compute Nodes wizard assists you in discovering operating systems as part of the
initial setup.

3.2 Chassis Management Module


The CMM provides single-chassis management and is used to communicate with the
management controller in each compute node. It provides system monitoring, event
recording, and alerts. It also manages the chassis, its devices, and the compute nodes. The
chassis supports up to two Chassis Management Modules. If one CMM fails, the second CMM
can detect its inactivity, activate itself, and take control of the system without any disruption.
The CMM is central to the management of the chassis, and is required in the Enterprise
Chassis.

The following section describes the usage models of the CMM and its features.

For more information, see 4.9, “Chassis Management Module” on page 101.

3.2.1 Overview
The CMM is a hot-swap module that provides basic system management functions for all
devices that are installed in the Enterprise Chassis. An Enterprise Chassis comes with at
least one CMM and supports CMM redundancy.

The CMM is shown in Figure 3-2.

Figure 3-2 Chassis Management Module



Through an embedded firmware stack, the CMM implements functions to monitor, control,
and provide external user interfaces to manage all chassis resources. You can use the CMM
to perform the following functions, among others:
򐂰 Define login IDs and passwords.
򐂰 Configure security settings such as data encryption and user account security. The CMM
contains an LDAP client that can be configured to provide user authentication through one
or more LDAP servers. The LDAP server (or servers) to be used for authentication can be
discovered dynamically or manually pre-configured.
򐂰 Select recipients for alert notification of specific events.
򐂰 Monitor the status of the compute nodes and other components.
򐂰 Find chassis component information.
򐂰 Discover other chassis in the network and enable access to them.
򐂰 Control the chassis, compute nodes, and other components.
򐂰 Access the I/O modules to configure them.
򐂰 Change the startup sequence in a compute node.
򐂰 Set the date and time.
򐂰 Use a remote console for the compute nodes.
򐂰 Enable multi-chassis monitoring.
򐂰 Set power policies and view power consumption history for chassis components.

3.2.2 Interfaces
The CMM supports a web-based graphical user interface that provides a way to perform
chassis management functions within a supported web browser. You can also perform
management functions through the CMM command-line interface (CLI). Both the web-based
and CLI interfaces are accessible through the single RJ45 Ethernet connector on the CMM,
or from any system that is connected to the same network.

The CMM has the following default IPv4 settings:


򐂰 IP address: 192.168.70.100
򐂰 Subnet: 255.255.255.0
򐂰 User ID: USERID (all capital letters)
򐂰 Password: PASSW0RD (all capital letters, with a zero instead of the letter O)

The CMM does not have a fixed static IPv6 IP address by default. Initial access to the CMM in
an IPv6 environment can be done by using the IPv4 IP address or the IPv6 link-local address.
The IPv6 link-local address is automatically generated based on the MAC address of the
CMM. By default, the CMM is configured to respond to DHCP first before it uses its static IPv4
address. If you do not want this operation to take place, connect locally to the CMM and
change the default IP settings. For example, you can connect locally by using
a notebook.
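The modified EUI-64 derivation of the link-local address can be reproduced in a few lines of Python, which is handy for predicting the address of a CMM from its labeled MAC before first connection. This is a sketch of the standard algorithm; the MAC value in the example is hypothetical.

```python
import ipaddress

def link_local_from_mac(mac: str) -> str:
    """Derive an IPv6 link-local address from a MAC by using modified EUI-64."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit of the first octet
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert FF:FE in the middle
    groups = ["{:02x}{:02x}".format(eui64[i], eui64[i + 1]) for i in range(0, 8, 2)]
    return str(ipaddress.IPv6Address("fe80::" + ":".join(groups)))

# Hypothetical CMM MAC address:
print(link_local_from_mac("00:1a:64:12:34:56"))  # fe80::21a:64ff:fe12:3456
```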

The web-based GUI brings together all the functionality that is needed to manage the chassis
elements in an easy-to-use fashion consistently across all System x IMM2-based platforms.

Figure 3-3 shows the Chassis Management Module login window.

Figure 3-3 CMM login window

Figure 3-4 shows an example of the Chassis Management Module front page after login.

Figure 3-4 Initial view of CMM after login



3.3 Security
The focus of IBM on smarter computing is evident in the improved security measures that are
implemented in IBM Flex System Enterprise Chassis. Today’s world of computing demands
tighter security standards and native integration with computing platforms. For example, the
push towards virtualization increased the need for more security. This increase comes as
more mission-critical workloads are consolidated on to fewer and more powerful servers. The
IBM Flex System Enterprise Chassis takes a new approach to security with a ground-up
chassis management design to meet new security standards.

The following security enhancements and features are provided in the chassis:
򐂰 Single sign-on (central user management)
򐂰 End-to-end audit logs
򐂰 Secure boot: IBM Tivoli® Provisioning Manager and CRTM
򐂰 Intel TXT technology (Intel Xeon -based compute nodes)
򐂰 Signed firmware updates to ensure authenticity
򐂰 Secure communications
򐂰 Certificate authority and management
򐂰 Chassis and compute node detection and provisioning
򐂰 Role-based access control
򐂰 Security policy management
򐂰 Same management protocols that are supported on BladeCenter AMM for compatibility
with earlier versions
򐂰 Insecure protocols are disabled by default in the CMM, with lock settings to prevent users
from inadvertently or maliciously enabling them
򐂰 Supports up to 84 local CMM user accounts
򐂰 Supports up to 32 simultaneous sessions
򐂰 Planned support for DRTM
򐂰 CMM supports LDAP authentication

The Enterprise Chassis ships with the Secure policy enabled, and supports the following
security policy settings:
򐂰 Secure: Default setting to ensure a secure chassis infrastructure and includes the
following features:
– Strong password policies with automatic validation and verification checks
– Updated passwords that replace the manufacturing default passwords after the initial
setup
– Only secure communication protocols such as Secure Shell (SSH) and Secure
Sockets Layer (SSL)
– Certificates to establish secure, trusted connections for applications that run on the
management processors
򐂰 Legacy: Flexibility in chassis security, which includes the following features:
– Weak password policies with minimal controls
– Manufacturing default passwords that do not have to be changed

– Unencrypted communication protocols, such as Telnet, SNMPv1, TCP Command
Mode, FTP Server, and TFTP Server

The centralized security policy makes Enterprise Chassis easy to configure. In essence, all
components run with the same security policy that is provided by the CMM. This consistency
ensures that all I/O modules run with a hardened attack surface.
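The effect of the two policy settings on the available protocols can be illustrated with a simple lookup (the protocol lists are taken from the Secure and Legacy descriptions above; this is only a sketch, not the CMM's actual enforcement logic):

```python
# Illustrative mapping of chassis security policy to permitted protocols.
POLICY_PROTOCOLS = {
    "secure": {"ssh", "ssl"},                       # secure protocols only
    "legacy": {"ssh", "ssl", "telnet", "snmpv1",
               "tcp-command-mode", "ftp", "tftp"},  # insecure protocols also allowed
}

def protocol_allowed(policy: str, protocol: str) -> bool:
    """Report whether a protocol is permitted under the given security policy."""
    return protocol.lower() in POLICY_PROTOCOLS[policy.lower()]

print(protocol_allowed("Secure", "telnet"))  # False
print(protocol_allowed("Legacy", "telnet"))  # True
```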

The CMM and the IBM Flex System Manager management node each have their own
independent security policies that control, audit, and enforce the security settings. The
security settings include the network settings and protocols, password and firmware update
controls, and trusted computing properties such as secure boot. The security policy is
distributed to the chassis devices during the provisioning process.

3.4 Compute node management


Each node in the Enterprise Chassis has a management controller that communicates
upstream with the CMM through the private 1 GbE management network. Different chassis
components that are supported in the Enterprise Chassis can implement different
management controllers. Table 3-1 shows the different management controllers that are
implemented in the chassis components.

Table 3-1 Chassis components and their respective management controllers


Chassis components Management controller

Intel Xeon processor-based compute nodes Integrated Management Module II (IMM2)

Power Systems compute nodes Flexible service processor (FSP)

Chassis Management Module Integrated Management Module II (IMM2)

The management controllers for the various Enterprise Chassis components have the
following default IPv4 addresses:
򐂰 CMM: 192.168.70.100
򐂰 Compute nodes: 192.168.70.101-114 (corresponding to the slots 1-14 in the chassis)
򐂰 I/O Modules: 192.168.70.120-123 (sequentially corresponding to chassis bay numbering)

In addition to the IPv4 address, all I/O modules support link-local IPv6 addresses and
configurable external IPv6 addresses.
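When scripting initial discovery, this default addressing scheme can be expressed as a small helper (a sketch that encodes the list above; the function name is ours):

```python
def default_ip(component: str, bay: int = 1) -> str:
    """Return the factory-default IPv4 address for a chassis component."""
    if component == "cmm":
        return "192.168.70.100"
    if component == "node":            # node slots 1-14 map to .101-.114
        assert 1 <= bay <= 14
        return "192.168.70.%d" % (100 + bay)
    if component == "io":              # I/O bays 1-4 map to .120-.123
        assert 1 <= bay <= 4
        return "192.168.70.%d" % (119 + bay)
    raise ValueError("unknown component: %s" % component)

print(default_ip("node", 1))   # 192.168.70.101
print(default_ip("io", 4))     # 192.168.70.123
```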

3.4.1 Integrated Management Module II


The Integrated Management Module II (IMM2) is the next generation of the IMMv1 (first
released in the Intel Xeon “Nehalem-EP”-based servers). It is present on all IBM systems with
Intel Xeon “Romley” and “Sandy Bridge” processors, and features a complete rework of
hardware and firmware. The IMM2 enhancements include a more responsive user interface,
faster power on, and increased remote presence performance.

The IMM2 incorporates a new web-based user interface that provides a common “look and
feel” across all IBM System x software products. In addition to the new interface, the following
other major enhancements from IMMv1 are included:
򐂰 Faster processor and more memory
򐂰 IMM2 manageable “northbound” from outside the chassis, which enables consistent
management and scripting with System x rack servers



򐂰 Remote presence:
– Increased color depth and resolution for more detailed server video
– ActiveX client in addition to the Java client
– Increased memory capacity (~50 MB) provides convenience for remote software
installations
򐂰 No IMM2 reset is required on configuration changes because they become effective
immediately without reboot
򐂰 Hardware management of non-volatile storage
򐂰 Faster Ethernet over USB
򐂰 1 Gb Ethernet management capability
򐂰 Improved system power-on and boot time
򐂰 More detailed information for UEFI detected events enables easier problem determination
and fault isolation
򐂰 User interface meets accessibility standards (CI-162 compliant)
򐂰 Separate audit and event logs
򐂰 “Trusted” IMM with significant security enhancements (CRTM/TPM, signed updates,
authentication policies, and so on)
򐂰 Simplified update and flashing mechanism
򐂰 Addition of Syslog alerting mechanism provides you with an alternative to email and
SNMP traps
򐂰 Support for Features on Demand (FoD) enablement of server functions, option card
features, and System x solutions and applications
򐂰 First Failure Data Capture: One button web press starts data collection and download
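The IMM2 also supports the standard IPMI 2.0 over LAN interface, so generic utilities such as ipmitool can script out-of-band queries against it. The helper below only assembles the command line; the node address and the factory-default credentials are assumptions for illustration.

```python
def ipmi_power_status(host: str, user: str = "USERID",
                      password: str = "PASSW0RD") -> list:
    """Build an ipmitool invocation that queries node power state out of band."""
    return ["ipmitool", "-I", "lanplus", "-H", host,
            "-U", user, "-P", password, "chassis", "power", "status"]

print(" ".join(ipmi_power_status("192.168.70.101")))
# ipmitool -I lanplus -H 192.168.70.101 -U USERID -P PASSW0RD chassis power status
```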

For more information about IMM2, see Chapter 5, “Compute nodes” on page 185. For more
information, see the following publications:
򐂰 Integrated Management Module II User’s Guide:
http://ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5086346
򐂰 IMM and IMM2 Support on IBM System x and BladeCenter Servers, TIPS0849:
http://www.redbooks.ibm.com/abstracts/tips0849.html

3.4.2 Flexible service processor


Several advanced system management capabilities are built into Power Systems compute
nodes (p24L, p260, p460, p270). An FSP handles most of the server-level system
management. The FSP that is used in Power Systems compute nodes is the same service
processor that is used on POWER rack servers. It has system alerts and Serial over LAN
(SOL) capability.

The FSP provides out-of-band system management capabilities, such as system control,
runtime error detection, configuration, and diagnostic procedures. Generally, you do not
interact with the FSP directly. Rather, you interact by using tools such as IBM Flex System
Manager and Chassis Management Module.

The Power Systems compute nodes all have one FSP each.

The FSP provides an SOL interface, which is available by using the CMM and the console
command. The Power Systems compute nodes do not have an on-board video chip, and do
not support keyboard, video, and mouse (KVM) connections. Server console access is
obtained by a SOL connection only.

SOL provides a means to manage servers remotely by using a CLI over a Telnet or SSH
connection. SOL is required to manage servers that do not have KVM support or that are
attached to the FSM. SOL provides console redirection for Software Management Services
(SMS) and the server operating system.

The SOL feature redirects server serial-connection data over a LAN without requiring special
cabling by routing the data through the CMM network interface. The SOL connection enables
Power Systems compute nodes to be managed from any remote location with network
access to the CMM.

SOL offers the following functions:


򐂰 Remote administration without KVM
򐂰 Reduced cabling and no requirement for a serial concentrator
򐂰 Standard Telnet/SSH interface, which eliminates the requirement for special client
software

The CMM CLI provides access to the text-console command prompt on each server through
an SOL connection. This configuration allows the Power Systems compute nodes to be
managed from a remote location.
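Because SOL rides on a standard Telnet or SSH connection to the CMM, opening a console can be scripted. The sketch below assembles an ssh invocation that runs the CMM console command; the blade[] target syntax is an assumption that should be verified against the CMM CLI reference, and the address and user are the factory defaults from 3.2.2.

```python
def sol_command(cmm_host: str, node_bay: int, user: str = "USERID") -> list:
    """Build an ssh invocation that opens a SOL console to a compute node
    through the CMM CLI console command (target syntax is an assumption)."""
    assert 1 <= node_bay <= 14
    return ["ssh", "-t", "%s@%s" % (user, cmm_host),
            "console -T blade[%d]" % node_bay]

print(" ".join(sol_command("192.168.70.100", 3)))
# ssh -t USERID@192.168.70.100 console -T blade[3]
```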

3.4.3 I/O modules


The I/O modules have the following base functions:
򐂰 Initialization
򐂰 Configuration
򐂰 Diagnostic tests (both power-on and concurrent)
򐂰 Status reporting

In addition, the following set of protocols and software features are supported on the I/O
modules:
򐂰 A configuration method over the Ethernet management port.
򐂰 A scriptable SSH CLI, a web server with SSL support, Simple Network Management
Protocol v3 (SNMPv3) agent with alerts, and an sFTP client.
򐂰 Server ports that are used for Telnet, HTTP, SNMPv1 agents, TFTP, FTP, and other
insecure protocols are DISABLED by default.
򐂰 LDAP authentication protocol support for user authentication.
򐂰 For Ethernet I/O modules, 802.1x enabled with policy enforcement point (PEP) capability
to allow support of TNC (Trusted Network Connect).
򐂰 The ability to capture and apply a switch configuration file and the ability to capture a first
failure data capture (FFDC) data file.
򐂰 Ability to transfer files by using URL update methods (HTTP, HTTPS, FTP, TFTP, sFTP).
򐂰 Various methods for firmware updates, including FTP, sFTP, and TFTP. In addition,
firmware updates by using a URL that includes protocol support for HTTP, HTTPs, FTP,
sFTP, and TFTP.
򐂰 SLP discovery and SNMPv3.



򐂰 Ability to detect firmware/hardware hangs, and ability to pull a “crash-failure memory
dump” file to an FTP (sFTP) server.
򐂰 Selectable primary and backup firmware banks as the current operational firmware.
򐂰 Ability to send events, SNMP traps, and event logs to the CMM, including security audit
logs.
򐂰 IPv4 and IPv6 on by default.
򐂰 The CMM management port supports IPv4 and IPv6 (IPv6 support includes the use of
link-local addresses).
򐂰 Port mirroring capabilities:
– Port mirroring of CMM ports to internal and external ports.
– For security reasons, the ability to mirror the CMM traffic is hidden and is available only
to development and service personnel.
򐂰 Management virtual local area network (VLAN) for Ethernet switches: A configurable
management 802.1q tagged VLAN in the standard VLAN range of 1 - 4094. It includes the
CMM’s internal management ports and the I/O modules internal ports that are connected
to the nodes.

3.5 IBM Flex System Manager


The FSM is a high-performance, scalable system management appliance. It is based on the
IBM Flex System x240 Compute Node. For more information about the x240, see 5.4, “IBM
Flex System x240 Compute Node” on page 234. The FSM hardware comes preinstalled with
systems management software that you can use to configure, monitor, and manage IBM Flex
System resources in up to sixteen chassis.

3.5.1 IBM Flex System Manager functions


The IBM Flex System Manager includes the following high-level features and functions:
򐂰 Supports a comprehensive, pre-integrated system that is configured to optimize
performance and efficiency
򐂰 Automated processes that are triggered by events simplify management and reduce
manual administrative tasks
򐂰 Centralized management reduces the skills and the number of steps it takes to manage
and deploy a system
򐂰 Enables comprehensive management and control of energy usage and costs
򐂰 Automated event responses reduce the need for manual tasks, such as creating custom
actions and filters, configuring, editing, and relocating resources, and running
automation plans
򐂰 Storage device discovery and coverage in integrated physical and logical topology views
򐂰 Full integration with server views, including virtual server views, enables efficient
management of resources

The preinstalled stack contains a set of software components that are responsible for running
management functions. These components are activated by using the available IBM Features
on Demand (FoD) software entitlement licenses. They are licensed on a per-chassis basis, so
you need one license for each chassis you plan to manage. The management node comes
without any entitlement licenses, so you must purchase a license to enable the required FSM
functions. The part numbers are listed later in this section.

The preinstalled IBM Flex System Manager base feature set offers the following functions:
򐂰 Support for up to 16 managed chassis
򐂰 Support for up to 224 nodes
򐂰 Support for up to 5,000 managed elements
򐂰 Auto-discovery of managed elements
򐂰 Overall health status
򐂰 Monitoring and availability
򐂰 Hardware management
򐂰 Security management
򐂰 Administration
򐂰 Network management (Network Control)
򐂰 Storage management (Storage Control)
򐂰 Virtual machine lifecycle management (VMControl Express)

The IBM Flex System Manager Advanced feature set upgrade offers the following advanced
features:
򐂰 Image management (VMControl Standard)
򐂰 Pool management (VMControl Enterprise)
򐂰 Advanced network monitoring and quality of service (QoS) configuration (Service Fabric
Provisioning)

The Fabric Provisioning upgrade offers advanced network monitoring and quality of service
(QoS) configuration (Service Fabric Provisioning). Fabric provisioning functionality is included
in the advanced feature set. It is also available as a separate Fabric Provisioning feature
upgrade for the base feature set, which can be ordered for the Flex System Manager node
through the HVEC order route.

Upgrade licenses: The Advanced Upgrade and the Fabric Provisioning feature upgrade
are mutually exclusive. Either the Advanced Upgrade or the Fabric Provisioning feature can
be applied on top of the base feature set license, but not both. The Service Fabric
Provisioning upgrade is not selectable in AAS.

The part number to order the management node is shown in Table 3-2.

Table 3-2 Ordering information for IBM Flex System Manager node
HVEC AAS Description

8731-A1xa 7955-01Mb IBM Flex System Manager node


a. x in the Part number represents a country-specific letter (for example, the EMEA part number
is 8731A1G, and the US part number is 8731A1U). Ask your local IBM representative for
specifics.
b. This part number is ordered as part of the IBM PureFlex System

The part numbers to order FoD software entitlement licenses are shown in the following
tables. The part numbers for the same features are different in different countries. Ask your
local IBM representative for specifics.

Table 3-3 on page 52 shows the following sets of part numbers:


򐂰 Column 1: for Latin America and Europe/Middle East/Africa
򐂰 Column 2: For US, Canada, Asia Pacific, and Japan



Table 3-3 HVEC Ordering information for FoD licenses
Part number

LA & EMEA US, CAN, AP, JPN Description

Base feature set

95Y1174 90Y4217 IBM Flex System Manager Per Managed Chassis with 1-Year Software Support
and Subscription (software S&S)

95Y1179 90Y4222 IBM Flex System Manager Per Managed Chassis with 3-Year software S&S

Advanced feature set upgradea

94Y9219 90Y4249 IBM Flex System Manager, Advanced Upgrade, Per Managed Chassis with
1-Year software S&S

94Y9220 00D7554 IBM Flex System Manager, Advanced Upgrade, Per Managed Chassis with
3-Year software S&S

Fabric Provisioning feature upgradea

95Y1178 90Y4221 IBM Flex System Manager Service Fabric Provisioning with 1-Year S&S

95Y1183 90Y4226 IBM Flex System Manager Service Fabric Provisioning with 3-Year S&S
a. The Advanced Upgrade and Fabric Provisioning licenses are applied on top of the IBM FSM base license

Table 3-4 shows the indicator codes that are selected when configuring Flex System Manager
in AAS by using e-config. This also selects the relevant options for one or three years of S&S
that is included in the configurator output.

Table 3-4 7955-01M Flex System Manager feature codes


Feature code Description

Advanced feature set upgradea

EB31 FSM Platform Software Bundle Pre-load Indicator

EB32 FSM Platform Virtualization Software Bundle Pre-load Indicator


a. The FSM Platform Virtualization Software Bundle Pre-load Indicator is applied on top of the
FSM Platform Software Bundle Pre-load Indicator

Flex System Manager licensing examples


To help explain the required part numbers, in this section we describe two examples of Flex
System Manager licensing. Included in each example are the part numbers that are required
for Latin America, Europe/Middle East/Africa and then for US, Canada, Asia Pacific, and
Japan.

Example 1
A client wants to manage four Flex System chassis with one FSM, no advanced license
function, with three years of support and subscription (S&S).

The client purchases the following products:


򐂰 One Flex System Manager node
򐂰 Four IBM Flex System Manager per managed chassis with 3-Year SW S&S

Table 3-5 shows the part numbers and quantity that is required. The following sets of part
numbers are shown:
򐂰 Column 1: For Latin America and Europe/Middle East/Africa
򐂰 Column 2: For US, Canada, Asia Pacific, and Japan

Table 3-5 Example 1 part numbers


Part number

Qty LA & EMEA US, CAN, AP, JPN Description

1 8731-A1xa 8731-A1xa IBM Flex System Manager node

4 95Y1179 90Y4222 IBM Flex System Manager Per Managed Chassis with three-year
software S&S
a. x in the Part number represents a country-specific letter (for example, the EMEA part number is 8731A1G, and
the US part number is 8731A1U). Ask your local IBM representative for specifics.

Example 2
The client wants to manage four Flex System chassis in total: two chassis are located at one
site and two at another, with a local FSM installed in a chassis at each of these sites. They
require advanced functionality with three-year S&S.

The client purchases the following products:


򐂰 Two Flex System Manager Nodes
򐂰 Four IBM Flex System Manager per managed chassis with three-year software S&S
򐂰 Four IBM Flex System Manager, Advanced Upgrade, Per Managed Chassis with
three-year software S&S.

Table 3-6 shows the part numbers and quantity that are required. The following sets of part
numbers are shown:
򐂰 Column 1: For Latin America and Europe/Middle East/Africa
򐂰 Column 2: For US, Canada, Asia Pacific, and Japan

Table 3-6 Example 2 part numbers


Part number

Qty LA & EMEA US, CAN, AP, JPN Description

2 8731-A1xa 8731-A1xa IBM Flex System Manager node

4 95Y1179 90Y4222 IBM Flex System Manager Per Managed Chassis with three-year
software S&S

4 94Y9220 00D7554 IBM Flex System Manager, Advanced Upgrade, Per Managed Chassis
with three-year software S&S
a. x in the Part number represents a country-specific letter (for example, the EMEA part number is 8731A1G, and
the US part number is 8731A1U). Ask your local IBM representative for specifics.
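The per-chassis licensing rules in these examples can be summarized in a small helper (a sketch of the ordering logic described above, using the LA/EMEA three-year S&S part numbers; it is not an official configurator and the function name is ours):

```python
def fsm_order(chassis: int, fsm_nodes: int = 1, advanced: bool = False,
              fabric_provisioning: bool = False) -> dict:
    """Return order quantities: one base license per managed chassis, plus
    either the Advanced or the Fabric Provisioning upgrade (never both)."""
    if advanced and fabric_provisioning:
        raise ValueError("Advanced Upgrade and Fabric Provisioning "
                         "are mutually exclusive")
    order = {"8731-A1x (FSM node)": fsm_nodes,
             "95Y1179 (base, per chassis, 3-yr S&S)": chassis}
    if advanced:
        order["94Y9220 (Advanced Upgrade, per chassis, 3-yr S&S)"] = chassis
    if fabric_provisioning:
        order["95Y1183 (Fabric Provisioning, per chassis, 3-yr S&S)"] = chassis
    return order

# Example 2 above: four chassis across two sites, two FSMs, advanced function:
print(fsm_order(chassis=4, fsm_nodes=2, advanced=True))
```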



3.5.2 Hardware overview
Fundamentally, the FSM from a hardware point of view is a locked-down compute node with a
specific hardware configuration. This configuration is designed for optimal performance of the
preinstalled software stack. The FSM looks similar to the Intel-based x240. However, there
are slight differences between the system board designs, so these two hardware nodes are
not interchangeable.

Figure 3-5 shows a front view of the FSM.

Figure 3-5 IBM Flex System Manager

Figure 3-6 shows the internal layout and major components of the FSM.

Figure 3-6 Exploded view of the IBM Flex System Manager node, showing major components: cover,
air baffles, heat sink, microprocessor, microprocessor heat sink filler, DIMMs and DIMM fillers, SSD
and HDD backplane, I/O expansion adapter, ETE adapter, hot-swap storage cage, SSD interposer,
SSD drives and mounting insert, and hot-swap storage drive with filler



The FSM comes preconfigured with the components that are described in Table 3-7.

Table 3-7 Features of the IBM Flex System Manager node


Feature             Description

Model numbers       8731-A1x (XCC, x-config); 7955-01M (AAS, e-config)

Processor           1x Intel Xeon processor E5-2650 8C 2.0 GHz 20 MB Cache 1600 MHz 95 W

Memory              8 x 4 GB (1x4 GB, 1Rx4, 1.35 V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP
                    RDIMM

SAS controller      One LSI 2004 SAS controller

Disk                1 x IBM 1TB 7.2K 6 Gbps NL SATA 2.5" SFF HS HDD;
                    2 x IBM 200GB SATA 1.8" MLC SSD (configured as a RAID-1 pair)

Integrated NIC      Embedded dual-port 10 Gb Virtual Fabric Ethernet controller (Emulex BE3);
                    dual-port 1 GbE Ethernet controller on a management adapter (Broadcom 5718)

Systems management  Integrated Management Module II (IMM2); management network adapter

Figure 3-7 shows the internal layout of the FSM.

Figure 3-7 Internal view that shows the major components of IBM Flex System Manager (the labeled
components are the processor, the processor filler slot, the drive bays, and the management network
adapter)

Front controls
The FSM has controls and LEDs similar to those of the IBM Flex System x240 Compute Node.
Figure 3-8 shows the front of an FSM with the locations of the controls and LEDs highlighted.

Figure 3-8 FSM front panel showing controls and LEDs (USB connector, KVM connector, solid-state
drive LEDs, hard disk drive activity and status LEDs, power button/LED, identify LED, check log LED,
and fault LED)

Storage
The FSM ships with two IBM 200 GB SATA 1.8-inch MLC SSDs and one IBM 1 TB 7.2K 6 Gbps NL SATA 2.5-inch SFF HS HDD. The 200 GB SSDs are configured as a RAID-1 pair, which provides approximately 200 GB of usable space. The 1 TB SATA drive is not part of a RAID group.

Chapter 3. Systems management 57


The partitioning of the disks is listed in Table 3-8.

Table 3-8 Detailed SSD and HDD disk partitioning

Physical disk   Virtual disk size   Description
SSD             50 MB               Boot disk
SSD             60 GB               OS/Application disk
SSD             80 GB               Database disk
HDD             40 GB               Update repository
HDD             40 GB               Dump space
HDD             60 GB               Spare disk for OS/Application
HDD             80 GB               Spare disk for database
HDD             30 GB               Service partition
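As a quick illustration of this layout, the following Python sketch (not part of the FSM software) verifies that the virtual disks in Table 3-8 fit within the physical capacity of each drive. The capacity figures are taken from the shipped configuration: roughly 200 GB usable on the RAID-1 SSD pair and roughly 1000 GB on the 1 TB HDD.

```python
# Partition sizes from Table 3-8, in GB (the 50 MB boot partition is 0.05 GB).
partitions = {
    "SSD": [0.05, 60, 80],        # boot, OS/application, database
    "HDD": [40, 40, 60, 80, 30],  # update repo, dump, spares, service
}
# Approximate usable capacity of each physical disk, in GB.
capacity = {"SSD": 200, "HDD": 1000}

for disk, sizes in partitions.items():
    used = sum(sizes)
    # Every virtual disk must fit within the physical capacity.
    assert used <= capacity[disk]
    print(f"{disk}: {used:g} GB allocated of {capacity[disk]} GB")
```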

Management network adapter


The management network adapter is a standard feature of the FSM that provides a physical connection into the private management network of the chassis. The adapter is shown in Figure 3-6 on page 55 as the everything-to-everything (ETE) adapter.

The management network adapter contains a Broadcom 5718 dual 1 GbE controller and a Broadcom 5389 8-port L2 switch. This card is one of the features that makes the FSM unique among the nodes that are supported by the Enterprise Chassis. The connection allows the software stack to have visibility into both the data and management networks. The L2 switch on this card is set up automatically by the IMM2 and connects the FSM and the onboard IMM2 to the same internal private network.

3.5.3 Software features


The IBM Flex System Manager management software includes the following main features:
򐂰 Monitoring and problem determination:
– A real-time multichassis view of hardware components with overlays for more
information
– Automatic detection of issues in your environment through event setup that triggers
alerts and actions
– Identification of changes that might affect availability
– Server resource usage by virtual machine or across a rack of systems
򐂰 Hardware management
– Automated discovery of physical and virtual servers and interconnections, applications,
and supported third-party networking
– Configuration profiles that integrate device configuration and update steps into a single
interface, which dramatically improves the initial configuration experience
– Inventory of hardware components
– Chassis and hardware component views:
• Hardware properties

• Component names and hardware identification numbers
• Firmware levels
• Usage rates
򐂰 Network management:
– Management of network switches from various vendors
– Discovery, inventory, and status monitoring of switches
– Graphical network topology views
– Support for KVM, pHyp, VMware virtual switches, and physical switches
– VLAN configuration of switches
– Integration with server management
– Per-virtual machine network usage and performance statistics that are provided to
VMControl
– Logical views of servers and network devices that are grouped by subnet and VLAN
򐂰 Network management (advanced feature set or fabric provisioning feature):
– Defines QoS settings for logical networks
– Configures QoS parameters on network devices
– Provides advanced network monitors for network system pools, logical networks, and
virtual systems
򐂰 Storage management:
– Discovery of physical and virtual storage devices
– Physical and logical topology views
– Support for virtual images on local storage across multiple chassis
– Inventory of physical storage configuration
– Health status and alerts
– Storage pool configuration
– Disk sparing and redundancy management
– Virtual volume management
– Support for virtual volume discovery, inventory, creation, modification, and deletion
򐂰 Virtualization management (base feature set)
– Support for VMware, Hyper-V, KVM, and IBM PowerVM
– Create virtual servers
– Edit virtual servers
– Manage virtual servers
– Relocate virtual servers
– Discover virtual server, storage, and network resources, and visualize the
physical-to-virtual relationships
򐂰 Virtualization management (advanced feature set)
– Create new image repositories for storing virtual appliances and discover existing
image repositories in your environment
– Import external, standards-based virtual appliance packages into your image
repositories as virtual appliances
– Capture a running virtual server that is configured the way you want, complete with
guest operating system, running applications, and virtual server definition



– Import virtual appliance packages that exist in the Open Virtual Machine Format (OVF)
from the Internet or other external sources
– Deploy virtual appliances quickly to create virtual servers that meet the demands of
your ever-changing business needs
– Create, capture, and manage workloads
– Create server system pools, which enable you to consolidate your resources and
workloads into distinct and manageable groups
– Deploy virtual appliances into server system pools
– Manage server system pools, including adding hosts or more storage space, and
monitoring the health of the resources and the status of the workloads in them
– Group storage systems together by using storage system pools to increase resource
usage and automation
– Manage storage system pools by adding storage, editing the storage system pool
policy, and monitoring the health of the storage resources
򐂰 I/O address management:
– Manages assignments of Ethernet MAC and Fibre Channel WWN addresses.
– Monitors the health of compute nodes, and automatically, without user intervention,
replaces a failed compute node from a designated pool of spare compute nodes by
reassigning MAC and WWN addresses.
– Preassigns MAC addresses, WWN addresses, and storage boot targets for the
compute nodes.
– Creates addresses for compute nodes, saves the address profiles, and deploys the
addresses to the slots in the same or different chassis.
򐂰 Other features:
– Resource-oriented chassis map provides instant graphical view of chassis resource
that includes nodes and I/O modules:
• Fly-over provides instant view of individual server (node) status and inventory
• Chassis map provides inventory view of chassis components, a view of active
statuses that require administrative attention, and a compliance view of server
(node) firmware
• Actions can be taken on nodes such as working with server-related resources,
showing and installing updates, submitting service requests, and starting the
remote access tools
• Resources can be monitored remotely from mobile devices, including Apple iOS-based, Google Android-based, and RIM BlackBerry-based devices. Flex System Manager Mobile applications are separately available under their own terms and conditions, as outlined by the respective mobile markets.
– Remote console:
• Ability to open video sessions and mount media such as DVDs with software
updates to their servers from their local workstation
• Remote KVM connections
• Remote Virtual Media connections (mount CD/DVD/ISO/USB media)
• Power operations against servers (Power On/Off/Restart)
– Hardware detection and inventory creation
– Firmware compliance and updates

– Health status (such as processor usage) on all hardware devices from a single chassis
view
– Automatic detection of hardware failures:
• Provides alerts
• Takes corrective action
• Notifies IBM of problems to escalate problem determination
– Administrative capabilities, such as setting up users within profile groups, assigning
security levels, and security governance
– Bare metal deployment of hypervisors (VMware ESXi, KVM) through centralized
images

New function for Flex System Manager release 1.3


Announced on 6 August 2013, FSM V1.3 includes the following enhancements, which support new hardware and incorporate more function based on client feedback:
򐂰 Support for newly announced hardware
Support for the p270 and x222 compute nodes, new I/O modules, and new options.
򐂰 Enhanced chassis support
Support for managing up to 16 chassis, 224 nodes, and 5,000 endpoints from one FSM.
򐂰 PowerVM Management enhancements:
– Remote restart for Power Systems, which provides the capability to activate a partition
on any appropriately-configured running server in the unlikely event that the partition’s
original server and any associated service partitions or management entities become
unavailable.
– Create and manage shared storage pools
– Resize disk during deploy
– Relocate to specific target and “Pin” VM to specific host
– Set priority for relocate order
򐂰 FSM capacity usage
This tool within FSM allows the system administrator to monitor overall FSM resource usage. It also provides recommendations on how to manage capacity, with configurable thresholds that are presented in different colors (green, yellow, or red) in the window.
The default view shows a quick view of FSM capacity and indicates the following metrics:
– Average active users
– Current number of managed endpoints
– Average CPU usage
– Average disk I/O usage
– Average memory usage
– Current disk space usage
Warnings are generated if metrics exceed the predefined thresholds. Each warning includes full details of the specific issue and recommendations to help rectify the situation.
Further, a capacity usage report can be generated that shows overall usage, the current
status of key parameters, and a list of historical data.
򐂰 Deploy compute node image enhancements:
– Increased supported OS Images in repository from 2 to 5



– Improved MAC address support: pNIC, vNIC, and virtual addresses
– Improved OS support: ESXi image 5.1.1, RHEL 6.4, RHEL KVM platform agent for VMControl, and ESXi 5000V agent V1.1.0
– Bare metal deployment patterns included for the new x222 nodes.
򐂰 Configuration Patterns enhancements:
– Pattern stored in LDAP
– New path options for initial setup
– New I/O adapter options
– Unique x222 configuration pattern support and independent node failover support
– New “Keep settings” option for boot order configuration
– Improved guidance for deployment of a pattern
– Increased supported OS images in repository from 2 to 5
– Improved MAC address support
– Improved dialog
– Ability to edit and deploy patterns during initial FSM setup
– Multiple improvements in usability of patterns
򐂰 Enhancements to console:
– View scheduled Jobs in mouse flyover
– Compliance issues show in console scoreboard and on chassis map compliance
overlay
– Flex firmware views and Compliance issue views; compliance automatically marked
when new updates are imported
– More backup and restore context-sensitive help
򐂰 IEEE 802.1Qbg support added for Power Nodes
򐂰 Performance enhancement for Inventory export (much faster export)
򐂰 Compare installed fixes for IBM i between one installed system and another
򐂰 Smart Zoning enhancements:
– Simplified interactions between storage and server, no need to pre-zone
– Create storage volume enhancements; can automatically zone host and storage when
zoning was not previously configured.
– Only needed zoning operations are performed to ensure host and storage can
communicate with each other:
• If zoning is not enabled, it is enabled
• If a zone set is not created, it is created
• If a zone does not exist for host and storage, one is created
򐂰 Management extended to support System x3950 SAP HANA appliance:
– Manual discovery and inventory
– Power Control
– Remote Access
– System Configuration
– System Health and Status
– Release Management (firmware, software installation and update)
– Service and Support
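The Smart Zoning decision flow described above (perform only the zoning operations that are actually missing) can be sketched as follows. This is an illustrative Python model of the logic, not FSM code; the fabric representation and operation names are assumptions made for the example.

```python
# Model a fabric's zoning state as a small dictionary (illustrative only).
def ensure_zoning(fabric, host, storage):
    """Return only the zoning operations needed for host <-> storage."""
    ops = []
    if not fabric["zoning_enabled"]:
        ops.append("enable-zoning")        # if zoning is not enabled, enable it
    if not fabric["zone_set_exists"]:
        ops.append("create-zone-set")      # if no zone set exists, create one
    if (host, storage) not in fabric["zones"]:
        ops.append("create-zone")          # if no zone exists for the pair, create one
    return ops

fabric = {"zoning_enabled": True, "zone_set_exists": False, "zones": set()}
print(ensure_zoning(fabric, "host1", "v7000"))  # ['create-zone-set', 'create-zone']
```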

Supported agents, hardware, operating systems, and tasks
IBM Flex System Manager provides four tiers of agents for managed systems. For each
managed system, you must choose the tier that provides the amount and level of capabilities
that you need for that system. Select the level of agent capabilities that best fits the type of
managed system and the management tasks you must perform.

IBM Flex System Manager features the following agent tiers:


򐂰 Agentless in-band
Managed systems without any FSM client software installed. FSM communicates with the
managed system through the operating system.
򐂰 Agentless out-of-band
Managed systems without any FSM client software installed. FSM communicates with the
managed system through something other than the operating system, such as a service
processor or an HMC.
򐂰 Platform Agent
Managed systems with Platform Agent installed. FSM communicates with the managed
system through the Platform Agent.
򐂰 Common Agent
Managed systems with Common Agent installed. FSM communicates with the managed
system through the Common Agent.

Table 3-9 lists the agent tier support for the IBM Flex System managed compute nodes. Managed nodes include x86 compute nodes that support Windows, Linux, and VMware, and Power Systems compute nodes that support IBM AIX, IBM i, and Linux.

Table 3-9 Agent tier support by managed system type

Managed system type                       Agentless  Agentless    Platform  Common
                                          in-band    out-of-band  Agent     Agent
Compute nodes that run AIX                Yes        Yes          No        Yes
Compute nodes that run IBM i              Yes        Yes          Yes       Yes
Compute nodes that run Linux              No         Yes          Yes       Yes
Compute nodes that run Linux and
  support SSH                             Yes        Yes          Yes       Yes
Compute nodes that run Windows            No         Yes          Yes       Yes
Compute nodes that run Windows and
  support SSH or distributed component
  object model (DCOM)                     Yes        Yes          Yes       Yes
Compute nodes that run VMware             Yes        Yes          Yes       Yes
Other managed resources that support
  SSH or SNMP                             Yes        Yes          No        No

Table 3-10 on page 64 summarizes the management tasks that are supported by the
compute nodes that depend on the agent tier.



Table 3-10 Compute node management tasks that are supported by the agent tier

Management task               Agentless  Agentless    Platform  Common
                              in-band    out-of-band  Agent     Agent
Command automation            No         No           No        Yes
Hardware alerts               No         Yes          Yes       Yes
Platform alerts               No         No           Yes       Yes
Health and status monitoring  No         No           Yes       Yes
File transfer                 No         No           No        Yes
Inventory (hardware)          No         Yes          Yes       Yes
Inventory (software)          Yes        No           Yes       Yes
Problems (hardware status)    No         Yes          Yes       Yes
Process management            No         No           No        Yes
Power management              No         Yes          No        Yes
Remote control                No         Yes          No        No
Remote command line           Yes        No           Yes       Yes
Resource monitors             No         No           Yes       Yes
Update manager                No         No           Yes       Yes
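To show how the chosen tier determines the available function, the following Python sketch models a subset of Table 3-10 as a simple lookup. The tier and task labels are informal names chosen for this example, not FSM identifiers.

```python
# Which management tasks each agent tier supports (subset of Table 3-10).
TIER_TASKS = {
    "agentless-in-band": {"inventory-software", "remote-command-line"},
    "agentless-out-of-band": {"hardware-alerts", "inventory-hardware",
                              "problems", "power-management", "remote-control"},
    "platform-agent": {"hardware-alerts", "platform-alerts", "health-monitoring",
                       "inventory-hardware", "inventory-software", "problems",
                       "remote-command-line", "resource-monitors", "update-manager"},
    "common-agent": {"command-automation", "hardware-alerts", "platform-alerts",
                     "health-monitoring", "file-transfer", "inventory-hardware",
                     "inventory-software", "problems", "process-management",
                     "power-management", "remote-command-line",
                     "resource-monitors", "update-manager"},
}

def supports(tier: str, task: str) -> bool:
    """Return True if the given agent tier supports the management task."""
    return task in TIER_TASKS[tier]

print(supports("common-agent", "file-transfer"))       # True
print(supports("platform-agent", "power-management"))  # False
```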

Table 3-11 shows the supported virtualization environments and their management tasks.

Table 3-11 Supported virtualization environments and management tasks

Management task                    AIX and    IBM i  VMware   Microsoft  Linux
                                   Linux (a)         vSphere  Hyper-V    KVM
Deploy virtual servers             Yes        Yes    Yes      Yes        Yes
Deploy virtual farms               No         No     Yes      No         Yes
Relocate virtual servers           Yes        No     Yes      No         Yes
Import virtual appliance packages  Yes        Yes    No       No         Yes
Capture virtual servers            Yes        Yes    No       No         Yes
Capture workloads                  Yes        Yes    No       No         Yes
Deploy virtual appliances          Yes        Yes    No       No         Yes
Deploy workloads                   Yes        Yes    No       No         Yes
Deploy server system pools         Yes        No     No       No         Yes
Deploy storage system pools        Yes        No     No       No         No

a. Linux on Power Systems compute nodes

Table 3-12 shows the supported I/O switches and their management tasks.

Table 3-12 Supported I/O switches and management tasks

Management task           EN2092    EN4093 and  CN4093     FC3171   FC5022
                          1 Gb      EN4093R     10 Gb      8 Gb FC  16 Gb FC
                          Ethernet  10 Gb       Converged
                                    Ethernet
Discovery                 Yes       Yes         Yes        Yes      Yes
Inventory                 Yes       Yes         Yes        Yes      Yes
Monitoring                Yes       Yes         Yes        Yes      Yes
Alerts                    Yes       Yes         Yes        Yes      Yes
Configuration management  Yes       Yes         Yes        Yes      No
Automated logical network
  provisioning (ALNP)     Yes       Yes         Yes        Yes      No
Stacked switch            No        Yes         No         No       No

Table 3-13 shows the supported virtual switches and their management tasks.

Table 3-13 Supported virtual switches and management tasks

Virtualization environment  Linux KVM       VMware vSphere       PowerVM  Hyper-V
Virtual switch              Platform Agent  VMware   IBM 5000V   PowerVM  Hyper-V

Management task
Discovery                   Yes             Yes      Yes         Yes      No
Inventory                   Yes             Yes      Yes         Yes      No
Configuration management    Yes             Yes      Yes         Yes      No
Automated logical network
  provisioning (ALNP)       Yes             Yes      Yes         Yes      No

Table 3-14 shows the supported storage systems and their management tasks.

Table 3-14 Supported storage systems and management tasks

Management task                                          V7000 Storage  IBM Storwize
                                                         Node           V7000
Storage device discovery                                 Yes            Yes
Inventory collection                                     Yes            Yes
Monitoring (alerts and status)                           Yes            Yes
Integrated physical and logical topology views           Yes            No
Show relationships between storage and server resources  Yes            Yes
Perform logical and physical configuration               Yes            Yes
View and manage attached devices                         Yes            No
VMControl provisioning                                   Yes            Yes



3.5.4 User interfaces
IBM Flex System Manager supports the following management interfaces:
򐂰 Web interface
򐂰 IBM FSM Explorer console
򐂰 Mobile System Management application
򐂰 Command-line interface

Web interface
The following browsers are supported by the management software web interface:
򐂰 Mozilla Firefox versions 3.5.x, 3.6.x, 7.0, and Extended Support Release (ESR) 10.0.x
򐂰 Microsoft Internet Explorer versions 7.0, 8.0, and 9.0

IBM FSM Explorer console


The IBM FSM Explorer console provides an alternative resource-based view of your
resources and helps you manage your Flex System environment with intuitive navigation of
those resources.

You can perform the following tasks in IBM FSM Explorer:


򐂰 Configure local storage, network adapters, boot order, and Integrated Management
Module (IMM) and Unified Extensible Firmware Interface (UEFI) settings for one or more
compute nodes before you deploy operating-system or virtual images to them.
򐂰 Install operating system images on IBM X-Architecture® compute nodes.
򐂰 Browse resources, view the properties of resources, and perform some basic
management tasks, such as power on and off, collect inventory, and working with LEDs.
򐂰 Use the Chassis Map to edit compute node details, view server properties, and manage
compute node actions.
򐂰 Work with resource views, such as All Systems, Chassis and Members, Hosts, Virtual
Servers, Network, Storage, and Favorites.
򐂰 Perform visual monitoring of status and events.
򐂰 View event history and active status.
򐂰 View inventory.
򐂰 Perform visual monitoring of job status.

For other tasks, IBM FSM Explorer starts IBM Flex System Manager in a separate browser
window or tab. You can return to the IBM FSM Explorer tab when you complete those tasks.

3.5.5 Mobile System Management application


The Mobile System Management application is a simple, no-cost tool that you can download for a mobile device that runs the Android, Apple iOS, or BlackBerry operating system. You can use the Mobile System Management application to monitor your IBM Flex System hardware remotely.

The Mobile System Management application provides access to the following types of IBM
Flex System information:
򐂰 Health and Status: Monitor health problems and check the status of managed resources.
򐂰 Event Log: View the event history for chassis, compute nodes, and network devices.

򐂰 Chassis Map (hardware view): Check the front and rear graphical hardware views of
a chassis.
򐂰 Chassis List (components view): View a list of the hardware components that are installed
in a chassis.
򐂰 Inventory Management: See the Vital Product Data (VPD) for a managed resource (for
example, serial number or IP address).
򐂰 Multiple chassis management: Manage multiple chassis and multiple management nodes
from a single application.
򐂰 Authentication and security: Secure all connections by using encrypted protocols (for
example, SSL), and secure persistent credentials on your mobile device.

You can download the Mobile System Management application for your mobile device from
one of the following app stores:
򐂰 Google Play for the Android operating system
򐂰 iTunes for the Apple iOS
򐂰 BlackBerry App World

New in Flex System Manager Mobile 1.2.0


With the latest release of Mobile Manager, the following enhancements were added:
򐂰 Power Actions:
– Perform the following actions on Compute Nodes:
• Power on
• Power off
• Restart
• Shut down and power off
• LED flash
• LED on
• LED off
– Perform actions on CMM, such as Virtual Reseat and Restart Primary CMM
򐂰 Recent Jobs
View a list of the recent jobs (last 24 hours) that were run from mobile or desktop.
򐂰 Event Log
Easier to toggle between event log and status.
򐂰 Chassis Map (hardware view):
– Check the front and rear graphical hardware views for a chassis
– Overlay the graphical views with Power and Error LEDs
򐂰 Inventory Management
See the Vital Product Data (VPD) and Firmware Levels for managed resources
򐂰 Authentication and Security:
– Simpler connection menu
– Accept unsigned certificates

For more information about the application, see the Mobile System Management application
page at this website:
http://www.ibm.com/systems/flex/fsm/mobile/



3.5.6 Flex System Manager CLI
The CLI is an important interface for the IBM Flex System Manager management software.
You can use it to accomplish simple tasks directly or as a scriptable framework for automating
functions that are not easily accomplished from a GUI. The IBM Flex System Manager
management software includes a library of commands that you can use to configure the
management software or perform many of the systems management operations that can be
accomplished from the management software web-based interface.
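As an illustrative sketch of such a session, a script might list managed systems and then collect inventory for a node. The management software is derived from IBM Systems Director, whose CLI uses the smcli command; however, the exact subcommands, options, and the node name used here are assumptions for illustration only, and the Commands Reference Guide should be consulted for the real syntax:

```
USERID@FSM> smcli lssys -t Server        (list managed server systems)
USERID@FSM> smcli collectinv -n node01   (collect inventory for a hypothetical node)
```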

For more information, see the IBM Flex System Manager product publications available from
the IBM Flex System Information Center at this website:
http://publib.boulder.ibm.com/infocenter/flexsys/information/index.jsp

At the Information Center, search for the following publications:


򐂰 Installation and User's Guide
򐂰 Systems Management Guide
򐂰 Commands Reference Guide
򐂰 Management Software Troubleshooting Guide

4

Chapter 4. Chassis and infrastructure


configuration
The IBM Flex System Enterprise Chassis (machine type 8721) is a 10U next-generation
server platform with integrated chassis management. It is a compact, high-density,
high-performance, rack-mount, and scalable platform system. It supports up to 14 standard
(half-wide) compute nodes that share common resources, such as power, cooling,
management, and I/O resources within a single Enterprise Chassis. It can also support up to seven full-wide nodes or three 4-bay (full-wide, double-high) nodes when the shelves are removed. You can mix and match standard, 2-bay, and 4-bay nodes to meet your specific hardware needs.
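Because a full-wide node occupies two standard bays and a 4-bay node occupies four, a mixed configuration is valid when the bays it consumes do not exceed 14. The following Python sketch (illustrative only, not an IBM tool) checks this arithmetic:

```python
# Standard bays consumed by each node form factor in the Enterprise Chassis.
BAYS_PER_NODE = {"half-wide": 1, "full-wide": 2, "double-high": 4}
TOTAL_BAYS = 14

def fits(config: dict) -> bool:
    """Return True if the node mix fits in the 14 standard bays."""
    used = sum(BAYS_PER_NODE[kind] * count for kind, count in config.items())
    return used <= TOTAL_BAYS

print(fits({"half-wide": 14}))                   # True  (maximum standard nodes)
print(fits({"full-wide": 7}))                    # True  (maximum full-wide nodes)
print(fits({"double-high": 3, "half-wide": 2}))  # True  (3*4 + 2 = 14 bays)
print(fits({"full-wide": 8}))                    # False (would need 16 bays)
```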

This chapter includes the following topics:


򐂰 4.1, “Overview” on page 70
򐂰 4.2, “Power supplies” on page 79
򐂰 4.3, “Fan modules” on page 82
򐂰 4.4, “Fan logic module” on page 85
򐂰 4.5, “Front information panel” on page 86
򐂰 4.6, “Cooling” on page 87
򐂰 4.7, “Power supply selection” on page 92
򐂰 4.8, “Fan module population” on page 99
򐂰 4.9, “Chassis Management Module” on page 101
򐂰 4.10, “I/O architecture” on page 104
򐂰 4.11, “I/O modules” on page 112
򐂰 4.12, “Infrastructure planning” on page 161
򐂰 4.13, “IBM 42U 1100mm Enterprise V2 Dynamic Rack” on page 172
򐂰 4.14, “IBM PureFlex System 42U Rack and 42U Expansion Rack” on page 178
򐂰 4.15, “IBM Rear Door Heat eXchanger V2 Type 1756” on page 180

© Copyright IBM Corp. 2012, 2013. All rights reserved. 69


4.1 Overview
Figure 4-1 shows the Enterprise Chassis as seen from the front. The front of the chassis
includes 14 horizontal bays with robust removable dividers that allow nodes and future
elements to be installed within the chassis. Nodes can be Compute, Storage, or Expansion
type. The nodes can be installed when the chassis is powered.

The chassis uses a die-cast mechanical bezel for rigidity to allow shipment of the chassis with
nodes installed. This chassis construction allows for tight tolerances between nodes, shelves,
and the chassis bezel. These tolerances ensure accurate location and mating of connectors
to the midplane.

Figure 4-1 IBM Flex System Enterprise Chassis

The Enterprise Chassis includes the following major components:


򐂰 Fourteen standard (half-wide) node bays. The chassis can also support seven two-bay or three four-bay nodes with the shelves removed.
򐂰 8721-A1x: Up to six 2500W power modules that provide N+N or N+1 redundant power.
򐂰 8721-LRx: Up to six 2100W power modules that provide N+N or N+1 redundant power.
򐂰 Ten fan modules (eight 80 mm fan modules and two 40 mm fan modules).
򐂰 Four physical I/O modules.
򐂰 An I/O architectural design capable of providing the following features:
– Up to eight lanes of I/O to an I/O adapter, with each lane capable of up to 16 Gbps.
– A maximum of 16 lanes of I/O to a half-wide node with two adapters.
– Various networking solutions that include Ethernet, Fibre Channel, FCoE, and
InfiniBand.
򐂰 Two IBM Chassis Management Modules (CMMs). The CMM provides single-chassis management support.

Table 4-1 lists the quantity of components that comprise the 8721 machine type:

Table 4-1 8721 Enterprise Chassis configuration

8721-A1x  8721-LRx  Description
1         1         IBM Flex System Enterprise Chassis
1         1         Chassis Management Module
2         0         2500W power supply unit
0         2         2100W power supply unit (a)
4         4         80 mm fan modules
2         2         40 mm fan modules
1         1         Console breakout cable
2         2         C19 to C20 2M power cables
1         1         Rack mount kit

a. 2100W power supply units are also available through the CTO process

More console breakout cables can be ordered if required. The console breakout cable connects to the front of an x86 node and allows keyboard, video, USB, and serial connections to be attached locally to that node. For more information about alternative methods, see 4.12.5, “Console planning” on page 169. The Chassis Management Module (CMM) includes built-in console redirection via the CMM Ethernet port.

Table 4-2 Ordering part number and feature code


Part number Feature code Description

81Y5286 A1NF IBM Flex System Console Breakout Cable

Figure 4-2 on page 72 shows the component parts of the chassis with the shuttle removed. The shuttle forms the rear of the chassis, where the I/O modules, power supplies, fan modules, and CMMs are installed. The shuttle is removed only to gain access to the midplane or fan distribution cards in the rare event of a service action.

Chapter 4. Chassis and infrastructure configuration 71


Figure 4-2 Enterprise Chassis component parts (Chassis Management Module, 40 mm fan module, fan logic module, CMM filler, power supply filler, I/O module, 80 mm fan module and filler, fan distribution cards, midplane, power supply, rear LED card, and shuttle)

Within the chassis, a personality card holds vital product data (VPD) and other information
that is relevant to the particular chassis. This card can be replaced only under service action,
and is not normally accessible. The personality card is attached to the midplane, as shown in
Figure 4-4 on page 74.

4.1.1 Front of the chassis
Figure 4-3 shows the bay numbers and air apertures on the front of the Enterprise Chassis.

Figure 4-3 Enterprise Chassis front view: upper airflow inlets at the top, with the 14 node bays numbered bottom to top (odd-numbered bays on the left, even-numbered bays on the right)
aaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaaaaaaaaaaa a a a a a a a a a a
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a aaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaa a a a a a a a a a a
aaaaaaaaaaaaaaaaaaaaaaaaaaaa
Bay 1 aaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaa a a a a a a a a a a Bay 2
aaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaa
a a a a a a a a a a a a a a a a a a a a a
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa

Information Panel Lower airflow Inlets

Figure 4-3 Front view of the Enterprise Chassis

The chassis includes the following features on the front:


򐂰 The front information panel on the lower left of the chassis
򐂰 Bays 1 - 14 that support nodes and FSM
򐂰 Lower airflow inlet apertures that provide air cooling for switches, CMMs, and power
supplies
򐂰 Upper airflow inlet apertures that provide cooling for power supplies

For efficient cooling, each bay in the front or rear of the chassis must contain a device or a filler.

The Enterprise Chassis provides several LEDs on the front information panel that can be
used to obtain the status of the chassis. The Identify, Check log, and Fault LEDs are also
present on the rear of the chassis for ease of use.

Chapter 4. Chassis and infrastructure configuration 73


4.1.2 Midplane
The midplane is the circuit board that connects to the compute nodes at the front of the
chassis and to the I/O modules, fan modules, and power supplies at the rear. The midplane is
located within the chassis and can be accessed by removing the shuttle assembly. Removing
the midplane is rarely necessary and is done only as a service action.

The midplane is passive: there are no active electronic components on it. The midplane has
apertures that allow air to pass through. When no node is installed in a standard node bay,
the air damper for that bay is completely closed, which keeps cooling highly efficient as the
chassis is scaled up.

The midplane has reliable industry-standard connectors on both sides for the power supplies,
fan distribution cards, switches, I/O modules, and nodes. The chassis design allows for highly
accurate placement and mating of the connectors from the nodes, I/O modules, and power
supplies to the midplane, as shown in Figure 4-4.

(Figure labels, front view: node power connectors, management connectors, I/O adapter connectors; rear view: I/O module connectors, power supply connectors, CMM connectors, fan power and signal connectors, personality card connector)
Figure 4-4 Connectors on the midplane

The midplane uses a single power domain, which is a cost-effective solution overall and
optimizes the design for the preferred 10U height.

Within the midplane, there are five separate power and ground planes that distribute the
main 12.2 V power domain through the chassis.

74 IBM PureFlex System and IBM Flex System Products and Technology
The midplane also distributes I2C management signals and 3.3 V power for the management
circuits. The power supplies source their fan power from the midplane.

Figure 4-4 on page 74 shows the connectors on both sides of the midplane.

4.1.3 Rear of the chassis


Figure 4-5 shows the rear view of the chassis.

Figure 4-5 Rear view of Enterprise Chassis

The following components can be installed into the rear of the chassis:
򐂰 Up to two CMMs.
򐂰 Up to six 2500W or 2100W power supply modules.
򐂰 Up to 10 fan modules: six are standard (four 80 mm and two 40 mm fan modules), and up
to four more 80 mm fan modules can be installed.
򐂰 Up to four I/O modules.

4.1.4 Specifications
Table 4-3 shows the specifications of the Enterprise Chassis 8721-A1x.

Table 4-3 Enterprise Chassis specifications


Feature Specifications

Machine type-model System x ordering sales channel: 8721-A1x or 8721-LRx; Power Systems sales channel: 7893-92Xa

Form factor 10U rack mounted unit

Maximum number of compute 14 half-wide (single bay), 7 full-wide (two bays), or 3 double-height full-wide (four bays).
nodes supported Mixing is supported.


Chassis per 42U rack 4

Nodes per 42U rack 56 half-wide, or 28 full-wide

Management One or two Chassis Management Modules for basic chassis management. Two CMMs
form a redundant pair. One CMM is standard in 8721-A1x and 8721-LRx. The CMM
interfaces with the Integrated Management Module II (IMM2) or flexible service
processor (FSP) integrated in each compute node in the chassis and also to the
integrated storage node. An optional IBM Flex System Managera management
appliance provides comprehensive management that includes virtualization,
networking, and storage management.

I/O architecture Up to eight lanes of I/O to an I/O adapter, with each lane capable of up to 16 Gbps
bandwidth. Up to 16 lanes of I/O to a half-wide node with two adapters. Various
networking solutions include Ethernet, Fibre Channel, FCoE, and InfiniBand.

Power supplies 8721-A1x: Six 2500W power modules that can provide N+N or N+1 redundant power.
Two are standard in this model.
8721-LRx: Six 2100W power modules that can provide N+N or N+1 redundant power.
Two are standard in this model.
Power supplies are 80 PLUS Platinum certified and provide over 94% efficiency at 50%
load and 20% load. Power capacity of 2500 watts output rated at 200 VAC. Each power
supply contains two independently powered 40 mm cooling fan modules.

Fan modules Ten fan modules (eight 80 mm fan modules and two 40 mm fan modules). Four 80 mm
and two 40 mm fan modules are standard in model 8721-A1x and 8721-LRx.

Dimensions 򐂰 Height: 440 mm (17.3 in.)
򐂰 Width: 447 mm (17.6 in.)
򐂰 Depth, measured from front bezel to rear of chassis: 800 mm (31.5 in.)
򐂰 Depth, measured from node latch handle to the power supply handle: 840 mm
(33.1 in.)

Weight 򐂰 Minimum configuration: 96.62 kg (213 lb)
򐂰 Maximum configuration: 220.45 kg (486 lb)

Declared sound level 6.3 to 6.8 bels

Temperature Operating air temperature 5°C to 40°C

Electrical power Input power: 200 - 240 VAC (nominal), 50 or 60 Hz


Minimum configuration: 0.51 kVA (two power supplies)
Maximum configuration: 13 kVA (six 2500W power supplies)

Power consumption 12,900 watts maximum


a. When you order the IBM Flex System Enterprise Chassis through the Power Systems sales channel, the IBM Flex
System Manager is required if PowerVM software is selected on a power node.

For data center planning, the chassis is rated to a maximum operating temperature of 40°C.
For comparison, the IBM BladeCenter H chassis is rated to 35°C. The AC operating
range is 200 - 240 VAC; 110 V operation is not supported.
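For quick planning arithmetic, the rack-level figures above (four chassis per 42U rack, a 12,900 W maximum per chassis) can be combined in a short sketch. This is an illustrative calculation only: the helper name, the unity-power-factor assumption, and the watts-to-BTU conversion are ours, not from the Redbook.

```python
# Hedged planning sketch using numbers from Table 4-3: four chassis per 42U
# rack, each rated at 12,900 W maximum power consumption.

CHASSIS_PER_42U_RACK = 4
MAX_WATTS_PER_CHASSIS = 12_900

def rack_power_budget(chassis_count: int = CHASSIS_PER_42U_RACK) -> dict:
    """Worst-case electrical load and heat output for a rack of loaded chassis."""
    watts = chassis_count * MAX_WATTS_PER_CHASSIS
    return {
        "watts": watts,
        "kva_at_pf_1": watts / 1000,           # kVA equals kW at unity power factor
        "btu_per_hour": round(watts * 3.412),  # all input power ends up as heat
    }
```

A fully loaded 42U rack therefore needs provisioning for roughly 51.6 kW of power and the matching cooling capacity; real sizing should use the IBM power configurator rather than this sketch.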

4.1.5 Air filter
An optional airborne contaminant filter can be fitted to the front of the chassis, as listed in
Table 4-4.

Table 4-4 IBM Flex System Enterprise Chassis airborne contaminant filter ordering information
Part number Description

43W9055 IBM Flex System Enterprise Chassis airborne contaminant filter

43W9057 IBM Flex System Enterprise Chassis airborne contaminant filter replacement pack

The filter is attached to and removed from the chassis, as shown in Figure 4-6.

Figure 4-6 Dust filter

4.1.6 Compute node shelves


A shelf is required for standard (half-wide) bays. The chassis ships with these shelves in
place. To allow for the installation of full-wide or larger nodes, the shelves must be removed
from the chassis. Remove a shelf by sliding the two blue latches on the shelf towards the
center and then sliding the shelf out of the chassis.



Figure 4-7 shows the removal of a shelf from the Enterprise Chassis.

Shelf

Tabs

Figure 4-7 Shelf removal

4.1.7 Hot plug and hot swap components


The chassis follows the standard color coding scheme that is used by IBM for touch points
and hot swap components.

Touch points are blue and are found in the following locations:
򐂰 Fillers that cover empty fan and power supply bays
򐂰 Handle of nodes
򐂰 Other removable items that cannot be hot-swapped

Hot-swap components have orange touch points. Orange tabs are found on fan modules, fan
logic modules, power supplies, and I/O module handles. The orange color designates that
these items are hot-swappable: they can be removed and replaced while the chassis is
powered. Table 4-5 shows which components are hot plug and which are hot swap.

Table 4-5 Hot plug and hot swap components


Component Hot plug Hot swap

Node Yes Noa

I/O Module Yes Yesb

40 mm Fan Pack Yes Yes

80 mm Fan Pack Yes Yes

Power Supply Yes Yes

Fan logic module Yes Yes


a. The node must be powered off (in standby) before removal.
b. I/O Module might require reconfiguration, and removal is disruptive to any communications that
are taking place.

Nodes can be plugged into the chassis while the chassis is powered and can then be
powered on. Power a node off before you remove it.
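The rules in Table 4-5 can be captured in a small sketch. The component keys and the helper function are hypothetical names chosen for illustration; only the hot-swap behavior itself comes from the table.

```python
# Sketch encoding Table 4-5: all listed components are hot plug, and all but
# nodes are hot swap. A node has the extra precondition that it must be
# powered off (standby) before it is pulled from a powered chassis.

HOT_SWAP = {"node": False, "io_module": True, "fan_40mm": True,
            "fan_80mm": True, "power_supply": True, "fan_logic": True}

def safe_to_remove(component: str, component_powered_on: bool) -> bool:
    """True if the part can be pulled from a powered chassis right now."""
    if component == "node":
        return not component_powered_on   # node must be in standby first
    return HOT_SWAP[component]
```

Note that even for hot-swap parts the text adds operational caveats (I/O module removal disrupts traffic; remaining power supplies must cover the load), which a check like this cannot capture.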

4.2 Power supplies
Power supplies (or power modules) are available with a 2500W or 2100W rating. Power
supplies are hot pluggable and are installed at the rear of the chassis.

The standard chassis models ship with two 2500W power supplies or two 2100W power
supplies, depending on the model. For more information, see Table 4-1 on page 71.

The 2100W power supplies provide a more cost-effective solution for deployments with lower
power demands. The 2100W power supplies also have the advantage of drawing a
maximum of 11.8 A, as opposed to the 13.8 A of the 2500W power supply. This means that
on a 30 A supply, which is UL-derated to 24 A when a PDU is used, two
2100W supplies can be connected to the same PDU with 0.4 A remaining. Thus, for the 30 A
UL-derated PDU deployments that are common in North America, the 2100W power supply
can be advantageous. For more information, see 4.12.3, “Power planning” on page 162.
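The PDU arithmetic above can be sketched as follows. This is a minimal illustration, not IBM planning guidance: the function name and the 80% UL derating factor are our assumptions made explicit, while the per-supply current draws are the maximums quoted in the text.

```python
# Hedged sketch: how many chassis power supplies of a given type can share
# one PDU fed from a UL-derated branch circuit. Draws are the maximums from
# the text: 11.8 A (2100 W) and 13.8 A (2500 W); a 30 A circuit is
# UL-derated to 80%, leaving 24 A usable.

PSU_MAX_AMPS = {"2100W": 11.8, "2500W": 13.8}

def supplies_per_pdu(psu_type: str, breaker_amps: float = 30.0,
                     derating: float = 0.80) -> tuple:
    """Return (max supplies on one PDU, amps remaining)."""
    usable = breaker_amps * derating           # e.g. 30 A -> 24 A usable
    draw = PSU_MAX_AMPS[psu_type]
    count = int(usable // draw)                # whole supplies that fit
    return count, round(usable - count * draw, 1)
```

As the text notes, two 2100 W supplies fit a 24 A feed with 0.4 A to spare, whereas only one 2500 W supply fits the same feed.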

Population information for the 2100W and 2500W power supplies can be found in 4.7, “Power
supply selection” on page 92, which describes planning information for the nodes that are
being installed.

A maximum of six power supplies can be installed within the Enterprise Chassis.

Support of power supplies: Mixing of 2100W and 2500W power supplies is not
supported in the same chassis.

The 2500W supplies are rated at 2500 watts output at 200 - 208 VAC (nominal) and 2750 W at
220 - 240 VAC (nominal). The power supply has an oversubscription rating of up to 3538
watts output at 200 VAC, and its operating range is 200 - 240 VAC. Each power supply
also contains two independently powered 40 mm cooling fans that are powered not from the
power supply itself, but from the chassis midplane. The fans are variable speed and are
controlled by the chassis fan logic.

The 2100W power supplies are rated at 2100 watts output at 200 - 240 VAC.
Similar to the 2500W unit, this power supply also supports oversubscription; the 2100W unit
can run at up to 2895 W for a short duration. As with the 2500W units, the 2100W supplies
include two independently powered 40 mm cooling fans that draw their power from the
midplane.

Table 4-6 shows the ordering information for the Enterprise Chassis power supplies.

Table 4-6 Power supply module option part numbers


Part number    Feature codesa    Description    Chassis models where standard

43W9049 A0UC / 3590 IBM Flex System Enterprise Chassis 2500W Power Module 8721-A1x (x-config)
7893-92X (e-config)

47C7633 A3JH / 3666 IBM Flex System Enterprise Chassis 2100W Power Module 8721-LRx
a. The first feature code listed is for configurations that are ordered through System x sales channels (HVEC) that
use x-config. The second feature code is for configurations that are ordered through the IBM Power Systems
channel (AAS) that use e-config.



Table 4-7 shows the Feature Codes that are used when you are ordering through the Power
Systems channel route (AAS) via e-config.

Table 4-7 Power Supply feature codes AAS (Power Brand)


Description    Feature code for base power supplies (quantity must be 2)    Feature code for additional power supplies (quantity must be 0, 2, or 4)

2100 Wa 9036 3666

2500 W 9059 3590


a. IBM Flex Systems only, not supported in PureFlex configurations

For power supply population, Table 4-11 on page 93 lists the compute nodes that are
supported based on the type and number of power supplies that are installed in the
chassis and the power policy that is enabled (N+N or N+1).

Both the 2500W and 2100W power supplies are 80 PLUS Platinum certified. The 80 PLUS
certification is a performance specification for power supplies that are used within servers and
computers. The standard has several ratings: Bronze, Silver, Gold, and Platinum. To meet
the 80 PLUS Platinum standard, the power supply must have a power factor (PF) of 0.95 or
greater at 50% rated load and efficiency equal to or greater than the following values:
򐂰 90% at 20% of rated load
򐂰 94% at 50% of rated load
򐂰 91% at 100% of rated load

For more information about 80 PLUS certification, see this website:


http://www.plugloadsolutions.com
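The Platinum thresholds quoted above can be encoded as a simple check. This is an illustrative sketch only: the function name is ours, and it covers just the three load points listed here.

```python
# Minimal sketch of the 80 PLUS Platinum criteria quoted in the text:
# power factor >= 0.95 at 50% rated load, and efficiency of at least
# 90% / 94% / 91% at 20% / 50% / 100% of rated load.

PLATINUM_MIN_EFFICIENCY = {0.20: 0.90, 0.50: 0.94, 1.00: 0.91}

def meets_platinum(efficiency_by_load: dict, pf_at_50_load: float) -> bool:
    """efficiency_by_load maps load fraction (0.20, 0.50, 1.00) to efficiency."""
    if pf_at_50_load < 0.95:
        return False
    return all(efficiency_by_load.get(load, 0.0) >= floor
               for load, floor in PLATINUM_MIN_EFFICIENCY.items())
```

For example, the 2100W supply's measured curve from Table 4-9 (94.1%, 94.2%, and 91.8% at 20%, 50%, and 100% load) clears every Platinum floor.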

Table 4-8 lists the efficiency of the 2500W Enterprise Chassis power supplies at various
percentage loads at different input voltages.

Table 4-8 2500W power supply efficiency at different loads for 200 - 208 VAC and 220 - 240 VAC
Load            10% load             20% load             50% load             100% load
Input voltage   200-208V  220-240V   200-208V  220-240V   200-208V  220-240V   200-208V  220-240V
Output power    250 W     275 W      500 W     550 W      1250 W    1375 W     2500 W    2750 W
Efficiency      93.2%     93.5%      94.2%     94.4%      94.5%     92.2%      91.8%     91.4%

Table 4-9 lists the efficiency of the 2100W Enterprise Chassis power supplies at various
percentage loads at 230 VAC nominal voltage.

Table 4-9 2100W power supply efficiency at different loads for 230 VAC
Load @ 230 VAC 10% load 20% load 50% load 100% load

Output Power 210 W 420 W 1050 W 2100 W

Efficiency 92.8% 94.1% 94.2% 91.8%
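The efficiency figures in Tables 4-8 and 4-9 translate directly into wall-power draw: input power is output power divided by efficiency, and the difference is dissipated as heat inside the supply. A hedged sketch (the helper names are ours, not IBM's):

```python
# Converting the efficiency tables above into input (wall) power. For example,
# at 50% load on a 2500 W supply at 200-208 V, the output is 1250 W at 94.5%
# efficiency, so the wall draw is about 1323 W.

def input_power(output_watts: float, efficiency: float) -> float:
    """Wall power drawn for a given DC output at a given efficiency."""
    return output_watts / efficiency

def dissipated_heat(output_watts: float, efficiency: float) -> float:
    """Power lost as heat in the supply itself."""
    return input_power(output_watts, efficiency) - output_watts
```

The same arithmetic applies to the 2100W unit: at full load (2100 W output, 91.8% efficiency) it draws roughly 2288 W from the wall.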

Figure 4-8 on page 81 shows the location of the power supplies within the Enterprise Chassis,
where two power supplies are installed in bay 1 and bay 4. Four power supply bays are
shown with fillers, which must be removed before power supplies can be installed in those
bays. Similar to the fan bay fillers, there are blue touch points and circular finger-hold
apertures below them, which make the filler removal process easy and intuitive.

Population information for the 2100W and 2500W power supplies can be found in Table 4-11
on page 93, which describes the number of power supplies that are required depending on
the nodes that are deployed.

(Figure labels: power supply bays 6, 5, 4 down the left side; bays 3, 2, 1 down the right side)
Figure 4-8 Power supply locations

With 2500W power supplies, the chassis supports N+N redundant power configurations with
most node types. Table 4-11 on page 93 shows the support matrix. Alternatively, a
chassis can operate in N+1, where N can equal 3, 4, or 5.

All power supplies are combined into a single 12.2 V DC power domain within the
chassis, which distributes power to each of the compute nodes, I/O modules, and
ancillary components through the Enterprise Chassis midplane. The midplane is a highly
reliable design with no active components. Each power supply is designed to provide fault
isolation and is hot swappable.

Power monitoring of the DC and AC signals allows the CMM to accurately monitor the power
supplies.

The integral power supply fans do not depend on the power supply being functional; they
operate, and are powered, independently of the power supply, directly from the chassis midplane.

Power supplies are added as required to meet the load requirements of the Enterprise
Chassis configuration. There is no need to overprovision a chassis; power supplies can
be added as the nodes are installed. For more information about power-supply unit planning,
see Table 4-11 on page 93.
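The budgeting behind the N+N and N+1 policies can be sketched as follows. This is a general illustration of redundancy arithmetic, not the chassis firmware's algorithm: the real power management also accounts for oversubscription and per-node throttling rules (see Table 4-11).

```python
# General redundancy sketch: under N+N, half the installed supplies must be
# able to carry the load (one feed can fail); under N+1, one spare covers a
# single failed supply. Oversubscription headroom is deliberately ignored.

def usable_capacity(installed: int, watts_per_supply: int, policy: str) -> int:
    """Capacity the chassis can commit while honoring the redundancy policy."""
    if policy == "N+N":
        surviving = installed // 2
    elif policy == "N+1":
        surviving = installed - 1
    else:
        raise ValueError("policy must be 'N+N' or 'N+1'")
    return surviving * watts_per_supply
```

For example, six 2500 W supplies yield 7500 W of committed capacity under N+N but 12500 W under N+1, which is why N+1 supports denser node configurations.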

Figure 4-9 on page 82 shows the power supply rear view and highlights the LEDs. There is a
handle for the removal and insertion of the power supply, and a thumb-operated removal
latch, so the PSU can easily be unlatched and removed with one hand.



(Figure labels: removal latch; pull handle; LEDs, left to right: AC power, DC power, Fault)

Figure 4-9 2500W power supply

The rear of the power supply has a C20 inlet socket for connection to power cables. You can
use a C19-C20 power cable, which can connect to a suitable IBM DPI rack power distribution
unit (PDU).

The Power Supply options that are shown in Table 4-6 on page 79 ship with a 2.5m intra-rack
power cable (C19 to C20).

The rear LEDs indicate the following conditions:


򐂰 AC Power: When lit green, the AC power is being supplied to the PSU inlet.
򐂰 DC Power: When lit green, the DC power is being supplied to the chassis midplane.
򐂰 Fault: When lit amber, there is a fault with the PSU.

Before you remove any power supplies, ensure that the remaining power supplies have
sufficient capacity to power the Enterprise Chassis. Power usage information can be found in
the CMM web interface.

4.3 Fan modules


The Enterprise Chassis supports up to 10 hot pluggable fan modules that consist of two
40 mm fan modules and eight 80 mm fan modules.

A chassis can operate with a minimum of six hot-swap fan modules installed, which consist of
four 80 mm fan modules and two 40 mm fan modules.

The fan modules plug into the chassis and connect to the fan distribution cards. More 80 mm
fan modules can be added as required to support chassis cooling requirements.

Figure 4-10 shows the fan bays in the back of the Enterprise Chassis.

(Figure labels: fan bays 6 - 10 down the left side, fan bays 1 - 5 down the right side)
Figure 4-10 Fan bays in the Enterprise Chassis

For more information about how to populate the fan modules, see 4.6, “Cooling” on page 87.

Figure 4-11 shows a 40 mm fan module.

(Figure labels: removal latch, pull handle, power-on LED, fault LED)
Figure 4-11 40 mm fan module

The two 40 mm fan modules in fan bays 5 and 10 distribute airflow to the I/O modules and
chassis management modules. These modules ship preinstalled in the chassis.

Each 40 mm fan module contains two 40 mm counter rotating fan pairs, side-by-side.



The 80 mm fan modules distribute airflow to the compute nodes through the chassis from
front to rear. Each 80 mm fan module contains two counter-rotating 80 mm fans, back-to-back
within the module.

Both fan modules have an electromagnetic compatibility (EMC) mesh screen on the rear
internal face of the module. This design also provides a laminar flow through the screen.
Laminar flow is a smooth flow of air, sometimes called streamline flow. This flow reduces
turbulence of the exhaust air and improves the efficiency of the overall fan assembly.

The following factors combine to form a highly efficient fan design that provides the best
cooling for lowest energy input:
򐂰 Design of the entire fan assembly
򐂰 Fan blade design
򐂰 Distance between and size of the fan modules
򐂰 EMC mesh screen

Figure 4-12 shows an 80 mm fan module.

(Figure labels: removal latch, pull handle, power-on LED, fault LED)
Figure 4-12 80 mm fan module

The minimum number of 80 mm fan modules is four. The maximum number of individual
80 mm fan modules that can be installed is eight.

Both fan module types have two LED indicators: a green power-on indicator and an
amber fault indicator. The power indicator is lit when the fan module has power, and flashes
when the module is in the power-save state.

Table 4-10 lists the specifications of the 80 mm Fan Module Pair option.

Pairs and singles: When the modules are ordered as an option, they are supplied as a
pair. When the modules are configured by using feature codes, they are single fans.

Table 4-10 80 mm Fan Module Pair option part number


Part number Feature codea Description

43W9078 (two fans)    A0UA / 7805 (one fan)    IBM Flex System Enterprise Chassis 80 mm Fan Module
a. The first feature code listed is for configurations that are ordered through System x sales
channels (HVEC) by using x-config. The second feature code is for configurations that are
ordered through the IBM Power Systems channel (AAS) by using e-config.

For more information about airflow and cooling, see 4.6, “Cooling” on page 87.

4.4 Fan logic module


There are two fan logic modules included within the chassis, as shown in Figure 4-13.

(Figure labels: fan logic bay 2 on the left, fan logic bay 1 on the right)

Figure 4-13 Fan logic modules on the rear of the chassis

Fan logic modules are multiplexers for the internal I2C bus, which is used for communication
between hardware components within the chassis. Each fan pack is accessed through a
dedicated I2C bus, switched by the Fan Mux card, from each CMM. The fan logic module
switches the I2C bus to each individual fan pack. This module can be used by the Chassis
Management Module to determine multiple parameters, such as fan RPM.

There is a fan logic module for each side of the chassis. The left fan logic module
accesses the left fan modules, and the right fan logic module accesses the right fan modules.

Fan presence indication for each fan pack is read by the fan logic module. Power and fault
LEDs are also controlled by the fan logic module.
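A toy model of the routing that the text describes, one fan logic module per side, each multiplexing a dedicated I2C channel per fan pack, might look like the following. The bay split follows the layout of Figure 4-10, but the channel numbering and all names are hypothetical, not from IBM documentation.

```python
# Illustrative model only: channel numbers and names are hypothetical.
# Per the text, each fan logic module is an I2C multiplexer; the left module
# reaches the left-side fan packs and the right module the right-side packs.

LEFT_FAN_BAYS = (6, 7, 8, 9, 10)     # left column in Figure 4-10
RIGHT_FAN_BAYS = (1, 2, 3, 4, 5)     # right column in Figure 4-10

def mux_route(fan_bay: int) -> tuple:
    """Return (fan logic module side, hypothetical mux channel) for a fan bay."""
    if fan_bay in LEFT_FAN_BAYS:
        return ("left", LEFT_FAN_BAYS.index(fan_bay))
    if fan_bay in RIGHT_FAN_BAYS:
        return ("right", RIGHT_FAN_BAYS.index(fan_bay))
    raise ValueError(f"no fan bay {fan_bay}")
```

A CMM query for a fan pack's RPM would first select the route to that pack (as sketched here) and then read over the selected bus; the real register-level protocol is not documented in this book.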



Figure 4-14 shows a fan logic module and its LEDs.

Figure 4-14 Fan logic module

As shown in Figure 4-14, there are two LEDs on the fan logic module. The power-on LED is
green when the fan logic module is powered. The amber fault LED flashes to indicate a faulty
fan logic module. Fan logic modules are hot swappable.

For more information about airflow and cooling, see 4.6, “Cooling” on page 87.

4.5 Front information panel


Figure 4-15 shows the front information panel.

(Figure labels, left to right: white backlit IBM logo, Identify LED, Check log LED, Fault LED)

Figure 4-15 Front information panel

The following items are shown on the front information panel:


򐂰 White backlit IBM logo: When lit, this logo indicates that the chassis is powered.
򐂰 Identify LED: When lit solid (blue), this LED indicates the location of the chassis. When
the LED is flashing, a condition occurred that caused the CMM to indicate that the
chassis needs attention.
򐂰 Check log LED: When lit (amber), this LED indicates that a noncritical event
occurred. This event might be an incorrect I/O module that is inserted into a bay, or a
power requirement that exceeds the capacity of the installed power modules.
򐂰 Fault LED: When lit (amber), this LED indicates that a critical system error occurred. This
error can be an error in a power module or a system error in a node.

Figure 4-16 shows the LEDs that are on the rear of the chassis.

(Figure labels, left to right: Identify LED, Check log LED, Fault LED)

Figure 4-16 Chassis LEDs on the rear of the unit (lower right)

4.6 Cooling
This section describes the Enterprise Chassis cooling system. The flow of air within the
Enterprise Chassis follows a front-to-back cooling path. Cool air is drawn in at the front of the
chassis and warm air is exhausted to the rear. Air is drawn in through the front node bays and
the front airflow inlet apertures at the top and bottom of the chassis. There are two cooling
zones for the nodes: a left zone and a right zone.

The cooling process can be scaled up as required, based on which node bays are populated.
For more information about the number of fan modules that are required for nodes, see 4.8,
“Fan module population” on page 99.

When a node is removed from a bay, an airflow damper closes in the midplane. Therefore, no
air is drawn in through an unpopulated bay. When a node is inserted into a bay, the damper is
opened by the node insertion, which allows for cooling of the node in that bay.



Figure 4-17 shows the upper and lower cooling apertures.

(Figure: front view of the chassis, showing the upper and lower cooling apertures)
a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a a
aaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa aaaaaaaaaaaaaaaaaaa
a a a a a a a a a a a a a a a a a a a a a
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa

Lower cooling apertures


Figure 4-17 Enterprise Chassis lower and upper cooling apertures

The chassis contains hot-pluggable fan modules of two types, 40 mm and 80 mm, which provide efficient cooling. The power supplies also have two integrated, independently powered 40 mm fans.

The cooling path for the nodes begins when air is drawn in from the front of the chassis. The
airflow intensity is controlled by the 80 mm fan modules in the rear. Air passes from the front
of the chassis, through the node, through openings in the Midplane, and then into a plenum
chamber. Each plenum is isolated from the other, providing separate left and right cooling
zones. The 80 mm fan packs on each zone then move the warm air from the plenum to the
rear of the chassis.

In a two-bay wide node, the air flow within the node is not segregated because it spans both
airflow zones.

88 IBM PureFlex System and IBM Flex System Products and Technology
Figure 4-18 shows a chassis with the outer casing removed to show the airflow path through the chassis. There is no airflow through the chassis midplane where a node is not installed. The air damper for a bay is opened only when a node is inserted in that bay.

Figure 4-18 Airflow into the chassis through the nodes and exhaust through the 80 mm fan packs

Chapter 4. Chassis and infrastructure configuration 89


Figure 4-19 shows the path of air from the upper and lower airflow inlet apertures to the power supplies.

Figure 4-19 Airflow path to the power supplies

Figure 4-20 shows the airflow from the lower inlet aperture to the 40 mm fan modules. This
airflow provides cooling for the switch modules and CMM installed in the rear of the chassis.

Figure 4-20 40 mm fan module airflow

The right-side 40 mm fan module cools the right pair of switches, while the left-side 40 mm fan module cools the left pair of switches. Each 40 mm fan module has a pair of counter-rotating fans for redundancy.

Cool air flows in from the lower inlet aperture at the front of the chassis. It is drawn into the
lower openings in the CMM and I/O Modules where it provides cooling for these components.
It passes through and is drawn out the top of the CMM and I/O modules. The warm air is
expelled to the rear of the chassis by the 40 mm fan assembly. This expulsion is shown by the
red airflow arrows in Figure 4-20.

Removing a 40 mm fan pack exposes an opening in the bay to the 80 mm fan packs that are located below, and a backflow damper within the fan bay closes. The backflow damper prevents hot air from reentering the system from the rear of the chassis. The 80 mm fan packs cool the switch modules and the CMM while the fan pack is being replaced.

Chassis cooling is implemented as a function of the following components:


򐂰 Node configurations
򐂰 Power Monitor circuits
򐂰 Component temperatures
򐂰 Ambient temperature



This design results in a lower airflow volume (measured in cubic feet per minute, or CFM) and lower cooling energy spent at the chassis level. It also maximizes the temperature difference across the chassis (generally known as the Delta T) for more efficient room integration. Monitored chassis-level airflow usage is displayed to enable airflow planning and monitoring for hot air recirculation.

Five Acoustic Optimization states can be selected. Use the one that best balances
performance requirements with the noise level of the fans.

Chassis level CFM usage is available to you for planning purposes. In addition, ambient
health awareness can detect potential hot air recirculation to the chassis.

4.7 Power supply selection


The chassis power supplies that are needed to power the installed compute nodes and other chassis components depend on a number of power-related selections, including the wattage of the power supplies: 2100W or 2500W.

The 2100W power supplies provide a more cost-effective solution for deployments with lower power demands, where the nodes can be deployed within the 2100W power envelope. The 2100W power supplies also draw a maximum of 11.8A, as opposed to the 13.8A of the 2500W power supply. A 30A supply is UL derated to 24A when you are using a PDU, so two 2100W supplies can be connected to the same PDU with 0.4A remaining. Thus, for the 30A UL derated PDU deployments that are common in North America, the 2100W power supply might be advantageous. For more information, see 4.12.3, “Power planning” on page 162.
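The current budget described in the preceding paragraph can be checked with a short calculation. This is an illustrative sketch only: the 24 A derated limit and the per-supply maximum draws are taken from the text above, and the function name is ours.

```python
# PDU current budget check for the chassis power supplies.
# Figures from the text: a 30 A feed is UL derated to 24 A; a 2100 W
# supply draws at most 11.8 A and a 2500 W supply at most 13.8 A.
DERATED_LIMIT_A = 24.0
MAX_DRAW_A = {2100: 11.8, 2500: 13.8}

def pdu_headroom(psu_watts: int, psu_count: int) -> float:
    """Amps remaining on a 24 A (UL derated) PDU feed; a negative
    result means the configuration overloads the feed."""
    return round(DERATED_LIMIT_A - MAX_DRAW_A[psu_watts] * psu_count, 1)

print(pdu_headroom(2100, 2))  # 0.4 A remaining, as noted in the text
print(pdu_headroom(2500, 2))  # -3.6: two 2500 W supplies exceed the feed
```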

Support of power supplies: Mixing of 2100W and 2500W power supplies is not
supported in the same chassis.


As the number of nodes in a chassis grows, more power supplies can be added as required. This chassis design allows cost-effective scaling of power configurations. If there is not enough DC power available to meet the load demand, the Chassis Management Module automatically powers down devices to reduce the load demand.

Table 4-11 on page 93 shows the number of compute nodes that can be installed based on
the following factors:
򐂰 The model of compute node that is installed
򐂰 The capacity of the power supply that is installed (2100W or 2500W)
򐂰 The power policy enabled (N+N or N+1)
򐂰 The number of power supplies that are installed (4, 5 or 6)
򐂰 For x86 compute nodes, the thermal design power (TDP) rating of the processors

For power policies, N+N means a fully redundant configuration where there are duplicate
power supplies for each supply that is needed for full operation. N+1 means there is only one
redundant power supply and all other supplies are needed for full operation.

In Table 4-11, the value in each cell has the following meaning:
򐂰 A value equal to the full complement for that node type (14 half-wide nodes or 7 full-wide nodes) indicates support with no limitations on the number of compute nodes that can be installed.
򐂰 A lower value indicates that the configuration is supported, but with a limit on the number of compute nodes that can be installed.

As the table shows, a full complement of any compute node at all TDP ratings is supported if all six power supplies are installed and an N+1 power policy is selected.

Table 4-11 Number of compute nodes supported based on installed power supplies

| Compute node | CPU TDP rating | 2100W: N+1, N=5 (6 total) | 2100W: N+1, N=4 (5 total) | 2100W: N+1, N=3 (4 total) | 2100W: N+N, N=3 (6 total) | 2500W: N+1, N=5 (6 total) | 2500W: N+1, N=4 (5 total) | 2500W: N+1, N=3 (4 total) | 2500W: N+N, N=3 (6 total) |
|---|---|---|---|---|---|---|---|---|---|
| x220 | 50 W | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14 |
| x220 | 60 W | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14 |
| x220 | 70 W | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14 |
| x220 | 80 W | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14 |
| x220 | 95 W | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14 |
| x222 | 50 W | 14 | 14 | 13 | 14 | 14 | 14 | 14 | 14 |
| x222 | 60 W | 14 | 14 | 12 | 13 | 14 | 14 | 14 | 14 |
| x222 | 70 W | 14 | 14 | 11 | 12 | 14 | 14 | 14 | 14 |
| x222 | 80 W | 14 | 14 | 10 | 11 | 14 | 14 | 13 | 14 |
| x222 | 95 W | 14 | 13 | 9 | 10 | 14 | 14 | 12 | 13 |
| x240 | 60 W | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14 |
| x240 | 70 W | 14 | 14 | 13 | 14 | 14 | 14 | 14 | 14 |
| x240 | 80 W | 14 | 14 | 13 | 13 | 14 | 14 | 14 | 14 |
| x240 | 95 W | 14 | 14 | 12 | 12 | 14 | 14 | 14 | 14 |
| x240 | 115 W | 14 | 14 | 11 | 12 | 14 | 14 | 14 | 14 |
| x240 | 130 W | 14 | 14 | 11 | 11 | 14 | 14 | 13 | 14 |
| x240 | 135 W | 14 | 14 | 10 | 11 | 14 | 14 | 13 | 14 |
| x440 | 95 W | 7 | 7 | 6 | 6 | 7 | 7 | 7 | 7 |
| x440 | 115 W | 7 | 7 | 5 | 6 | 7 | 7 | 7 | 7 |
| x440 | 130 W | 7 | 7 | 5 | 5 | 7 | 7 | 6 | 7 |
| p24L | All | 14 | 12 | 9 | 10 | 14 | 14 | 12 | 13 |
| p260 | All | 14 | 12 | 9 | 10 | 14 | 14 | 12 | 13 |
| p270 | All | 14 | 12 | 9 | 9 | 14 | 14 | 12 | 12 |
| p460 | All | 7 | 6 | 4 | 5 | 7 | 7 | 6 | 6 |
| FSM | 95 W | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| V7000 | N/A | 3 | 3 | 3 | 3 | 3 | 3 | 3 | 3 |



The following assumptions are made:
򐂰 All Compute Nodes are fully configured.
򐂰 Throttling and oversubscription are enabled.

Tip: For more information about exact configuration support, see the Power configurator at
this website:
http://ibm.com/systems/bladecenter/resources/powerconfig.html

4.7.1 Power policies


The following power management policies can be selected to dictate how the chassis is
protected in the case of potential power module or supply failures. These policies are
configured by using the Chassis Management Module graphical interface:
򐂰 AC Power source redundancy
Power is allocated under the assumption that no throttling of the nodes is allowed if a
power supply fault occurs. This is an N+N configuration.
򐂰 AC Power source redundancy with compute node throttling allowed
Power is allocated under the assumption that throttling of the nodes is allowed if a power
supply fault occurs. This is an N+N configuration.
򐂰 Power Module Redundancy
Maximum input power is limited to one less than the number of power modules when more
than one power module is present. One power module can fail without affecting compute
node operation. Multiple power module failures can cause the chassis to power off. Some
compute nodes might not be able to power on if doing so exceeds the power policy limit.
򐂰 Power Module Redundancy with compute node throttling allowed
This mode can be described as oversubscription mode. Operation in this mode assumes
that a node’s load can be reduced (or throttled) to the continuous load rating within a
specified time. This process occurs following the loss of one or more power supplies. The
power supplies can exceed their continuous rating of 2500W for short periods. This is an
N+1 configuration.
򐂰 Basic Power Management
This allows the total output power of all power supplies to be used. When operating in this
mode, there is no power redundancy. If a power supply fails or an AC feed to one or more
supplies is lost, the entire chassis might shut down. There is no power throttling.

The chassis is run by using one of these power capping policies:


򐂰 No Power Capping
Maximum input power is determined by the active power redundancy policy.
򐂰 Static Capping
This sets an overall chassis limit on the maximum input power. In a situation where
powering on a component can cause the limit to be exceeded, the component is prevented
from powering on.
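As a rough illustration of how the redundancy policies differ, the following sketch computes usable chassis power under each policy. This is our simplification (the policy names are shortened, and throttling, oversubscription, and capping are not modeled):

```python
# Simplified view of usable chassis power under each redundancy policy.
# N+N duplicates every needed supply, so only half the installed
# supplies carry load; N+1 holds one supply in reserve; basic power
# management uses the total output of all supplies.
def usable_power_watts(policy: str, supplies: int, watts_each: int) -> int:
    if policy == "N+N":
        return (supplies // 2) * watts_each
    if policy == "N+1":
        return max(supplies - 1, 0) * watts_each
    if policy == "basic":
        return supplies * watts_each
    raise ValueError(f"unknown policy: {policy}")

# Six 2500 W supplies installed:
print(usable_power_watts("N+N", 6, 2500))    # 7500
print(usable_power_watts("N+1", 6, 2500))    # 12500
print(usable_power_watts("basic", 6, 2500))  # 15000
```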

4.7.2 Number of power supplies required for N+N and N+1
A total of six power supplies can be installed. Therefore, in an N+N configuration, the options
available are two, four, or six power supplies. For N+1, the total number can be anywhere
between two and six.
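This constraint can be expressed as a small helper. A sketch only (the function name is ours):

```python
# Valid power-supply counts in the six-bay chassis for each policy:
# N+N needs matched pairs (2, 4, or 6); N+1 allows any count from 2 to 6.
def valid_supply_counts(policy: str) -> list:
    if policy == "N+N":
        return [n for n in range(2, 7) if n % 2 == 0]
    if policy == "N+1":
        return list(range(2, 7))
    raise ValueError(f"unknown policy: {policy}")

print(valid_supply_counts("N+N"))  # [2, 4, 6]
print(valid_supply_counts("N+1"))  # [2, 3, 4, 5, 6]
```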

Depending on the node type, see Table 4-12 if 2500W power supplies are used, or Table 4-13 on page 96 if 2100W power supplies are used.

For example, if eight x222 nodes are to be installed with N+1 redundancy by using 2500W power supplies, Table 4-12 shows that a minimum of four power supplies is required.

Table 4-12 and Table 4-13 on page 96 show the highest TDP rating of processors for each
node type. In some configurations, the power supplies cannot power the quantity of nodes,
which is highlighted in the tables as “NS” (not sufficient).

It is impossible to physically install more than seven full-wide compute nodes in a chassis, as
shown in Figure 4-12 on page 84.

Table 4-12 and Table 4-13 on page 96 assume that the same type of node is being
configured. Refer to the power configurator for mixed configurations of different node types
within a chassis.

Table 4-12 Number of 2500W power supplies required for each node type (each cell shows N+N / N+1)

| Nodes | x220 at 95W (a) | x222 at 95W (a) | x240 at 135W (a) | x440 at 130W (a) | p260 | p270 | p460 |
|---|---|---|---|---|---|---|---|
| 14 | 6 / 4 | NS (b) / 5 | 6 / 5 | N/A (c) | NS (b) / 5 | NS (b) / 5 | N/A (c) |
| 13 | 6 / 4 | 6 / 5 | 6 / 4 | N/A (c) | 6 / 5 | NS (b) / 5 | N/A (c) |
| 12 | 4 / 3 | 6 / 4 | 6 / 4 | N/A (c) | 6 / 4 | 6 / 4 | N/A (c) |
| 11 | 4 / 3 | 6 / 4 | 6 / 4 | N/A (c) | 6 / 4 | 6 / 4 | N/A (c) |
| 10 | 4 / 3 | 6 / 4 | 6 / 4 | N/A (c) | 6 / 4 | 6 / 4 | N/A (c) |
| 9 | 4 / 3 | 6 / 4 | 6 / 4 | N/A (c) | 6 / 4 | 6 / 4 | N/A (c) |
| 8 | 4 / 3 | 6 / 4 | 4 / 3 | N/A (c) | 6 / 4 | 6 / 4 | N/A (c) |
| 7 | 4 / 3 | 4 / 3 | 4 / 3 | 6 / 5 | 4 / 3 | 4 / 3 | NS (b) / 5 |
| 6 | 4 / 3 | 4 / 3 | 4 / 3 | 6 / 4 | 4 / 3 | 4 / 3 | 6 / 4 |
| 5 | 4 / 3 | 4 / 3 | 4 / 3 | 6 / 4 | 4 / 3 | 4 / 3 | 6 / 4 |
| 4 | 2 / 2 | 4 / 3 | 4 / 3 | 4 / 3 | 4 / 3 | 4 / 3 | 6 / 4 |
| 3 | 2 / 2 | 4 / 3 | 4 / 3 | 4 / 3 | 4 / 3 | 4 / 3 | 4 / 3 |
| 2 | 2 / 2 | 2 / 2 | 2 / 2 | 4 / 3 | 2 / 2 | 2 / 2 | 4 / 3 |
| 1 | 2 / 2 | 2 / 2 | 2 / 2 | 2 / 2 | 2 / 2 | 2 / 2 | 2 / 2 |

a. Number of power supplies is based on x86 compute nodes with processors of the highest TDP rating.
b. Not supported. The number of nodes exceeds the capacity of the power supplies.
c. Not applicable. It is not physically possible to install more than seven x440 or p460 compute nodes in a chassis.



Table 4-13 Number of 2100W power supplies required for each node type (each cell shows N+N / N+1)

| Nodes | x220 at 95W (a) | x222 at 95W (a) | x240 at 135W (a) | x440 at 130W (a) | p260 | p270 | p460 |
|---|---|---|---|---|---|---|---|
| 14 | 6 / 4 | NS (b) / 6 | NS (b) / 5 | N/A (c) | NS (b) / 6 | NS (b) / 6 | N/A (c) |
| 13 | 6 / 4 | NS (b) / 5 | NS (b) / 5 | N/A (c) | NS (b) / 6 | NS (b) / 6 | N/A (c) |
| 12 | 6 / 4 | NS (b) / 5 | NS (b) / 5 | N/A (c) | NS (b) / 5 | NS (b) / 5 | N/A (c) |
| 11 | 6 / 4 | NS (b) / 5 | 6 / 5 | N/A (c) | NS (b) / 5 | NS (b) / 5 | N/A (c) |
| 10 | 6 / 4 | 6 / 5 | 6 / 4 | N/A (c) | 6 / 5 | NS (b) / 5 | N/A (c) |
| 9 | 4 / 3 | 6 / 4 | 6 / 4 | N/A (c) | 6 / 4 | 6 / 4 | N/A (c) |
| 8 | 4 / 3 | 6 / 4 | 6 / 4 | N/A (c) | 6 / 4 | 6 / 4 | N/A (c) |
| 7 | 4 / 3 | 6 / 4 | 6 / 4 | NS (b) / 5 | 6 / 4 | 6 / 4 | NS (b) / 6 |
| 6 | 4 / 3 | 6 / 4 | 4 / 3 | NS (b) / 5 | 6 / 4 | 6 / 4 | NS (b) / 5 |
| 5 | 4 / 3 | 4 / 3 | 4 / 3 | 6 / 4 | 4 / 3 | 4 / 3 | 6 / 5 |
| 4 | 4 / 3 | 4 / 3 | 4 / 3 | 6 / 4 | 4 / 3 | 4 / 3 | 6 / 4 |
| 3 | 4 / 3 | 4 / 3 | 4 / 3 | 4 / 3 | 4 / 3 | 4 / 3 | 6 / 4 |
| 2 | 2 / 2 | 4 / 3 | 4 / 3 | 4 / 3 | 4 / 3 | 4 / 3 | 4 / 3 |
| 1 | 2 / 2 | 2 / 2 | 2 / 2 | 4 / 3 | 2 / 2 | 2 / 2 | 4 / 3 |

a. Number of power supplies is based on x86 compute nodes with processors of the highest TDP rating.
b. Not supported. The number of nodes exceeds the capacity of the power supplies.
c. Not applicable. It is not physically possible to install more than seven x440 or p460 compute nodes in a chassis.

Tip: For more information about the exact configuration, see the Power configurator at this
website:
http://ibm.com/systems/bladecenter/resources/powerconfig.html

The chassis ships with power supplies preinstalled in bays 1 and 4, as shown in Figure 4-21. In an N+N configuration with 2500W power supplies, these two supplies can power four x220 nodes, according to Table 4-12 on page 95.

Figure 4-21 Two power supplies installed with four x220 nodes in N+N

Eight x220 nodes in a 2500W N+N configuration are shown in Figure 4-22, where another pair of power supplies is installed in bays 2 and 5 of the Enterprise Chassis.

Figure 4-22 Four power supplies installed with eight x220 nodes in N+N



Figure 4-23 shows the full six power supplies installed with 14 x220 nodes that use 2500W
power supplies. This is a supported N+N configuration according to Table 4-12 on page 95.

Figure 4-23 Six power supply configuration with fourteen x220 nodes in N+N

Power supplies selected for an N+1 configuration


The chassis ships with two power supplies installed. As shown in Table 4-12 on page 95,
2500W power supplies allow up to four x220 nodes to be installed with N+1 redundancy.

Figure 4-24 Two power supplies installed with four x220 nodes in N+1

When eight x220 nodes are installed and N+1 with 2500W power supplies is required,
checking Table 4-12 on page 95 shows support with three power supplies, as shown in
Figure 4-25.

Figure 4-25 Eight x220 nodes with three 2500W power supplies in an N+1 configuration

When 14 x220 nodes are required with N+1 redundancy that uses 2500W power supplies, four power supplies are required according to Table 4-12 on page 95. Figure 4-26 shows this redundancy configuration of N+1, where in this case N=3.

Figure 4-26 Fourteen x220 nodes with four 2500W power supplies in an N+1 configuration

4.8 Fan module population


The fan modules are populated depending on the nodes that are installed. To support the
base configuration and up to four nodes, a chassis ships with four 80 mm fan modules and
two 40 mm fan modules preinstalled.

When you install more nodes, install the nodes, fan modules, and power supplies from the
bottom upwards.



The minimum configuration of 80 mm fan modules is four, which provides cooling for a
maximum of four nodes. This configuration is shown in Figure 4-27 and is the base
configuration.

Figure 4-27 Four 80 mm fan modules allow a maximum of four nodes installed

Installing six 80 mm fan modules allows another four nodes to be supported within the
chassis. Therefore, the maximum is eight, as shown in Figure 4-28.

Figure 4-28 Six 80 mm fan modules allow for a maximum of eight nodes

To cool more than eight nodes, all fan modules must be installed as shown in Figure 4-29.

Figure 4-29 Eight 80 mm fan modules support 9 - 14 nodes

If there are insufficient fan modules for the number of nodes that are installed, the nodes
might be throttled.
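The 80 mm fan module population rules in this section reduce to a simple lookup. A sketch (the function name is ours; the thresholds come from Figures 4-27 through 4-29):

```python
# Minimum 80 mm fan modules for a given number of installed nodes:
# four modules cool up to 4 nodes, six cool up to 8, eight cool 9 - 14.
def fan_modules_required(nodes: int) -> int:
    if not 1 <= nodes <= 14:
        raise ValueError("the chassis holds 1 - 14 half-wide nodes")
    if nodes <= 4:
        return 4
    if nodes <= 8:
        return 6
    return 8

print(fan_modules_required(4))   # 4
print(fan_modules_required(5))   # 6
print(fan_modules_required(9))   # 8
```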

4.9 Chassis Management Module


The CMM provides single chassis management and the networking path for remote
keyboard, video, mouse (KVM) capability for compute nodes within the chassis.

The chassis can accommodate one or two CMMs. The first is installed into CMM bay 1, and the second into CMM bay 2. Installing two CMMs provides redundancy.

Table 4-14 lists the ordering information for the second CMM.

Table 4-14 Chassis Management Module ordering information


Part number Feature codea Description

68Y7030 A0UE / 3592 IBM Flex System Chassis Management Module


a. The first feature code listed is for configurations that are ordered through System x sales
channels (HVEC) using x-config. The second feature code is for configurations that are ordered
through the IBM Power Systems channel (AAS) by using e-config.



Figure 4-30 shows the location of the CMM bays on the back of the Enterprise Chassis.

Figure 4-30 CMM Bay 1 and Bay 2

The CMM provides the following functions:


򐂰 Power control
򐂰 Fan management
򐂰 Chassis and compute node initialization
򐂰 Switch management
򐂰 Diagnostics
򐂰 Resource discovery and inventory management
򐂰 Resource alerts and monitoring management
򐂰 Chassis and compute node power management
򐂰 Network management

The CMM includes the following connectors:


򐂰 USB connection: Can be used for insertion of a USB media key for tasks such as firmware
updates.
򐂰 10/100/1000 Mbps RJ45 Ethernet connection: For connection to a management network.
The CMM can be managed through this Ethernet port.
򐂰 Serial port (mini-USB): For local serial (CLI) access to the CMM. Use the cable kit that is
listed in Table 4-15 for connectivity.

Table 4-15 Serial cable specifications


Part number Feature codea Description

90Y9338 A2RR IBM Flex System Management Serial Access Cable


Contains two cables:
򐂰 Mini-USB-to-RJ45 serial cable
򐂰 Mini-USB-to-DB9 serial cable
a. The feature code listed is for both the System x sales channel (HVEC) using x-config and the
Power Systems sales channel (AAS) using e-config.

The CMM includes the following LEDs that provide status information:
򐂰 Power-on LED
򐂰 Activity LED
򐂰 Error LED
򐂰 Ethernet port link and port activity LEDs

Figure 4-31 shows the CMM connectors and LEDs.

Figure 4-31 Chassis Management Module

The CMM also incorporates a reset button, which features the following functions (depending
upon how long the button is held in):
򐂰 When pressed for less than 5 seconds, the CMM restarts.
򐂰 When pressed for more than 5 seconds (for example, 10 - 15 seconds), the CMM configuration is reset to manufacturing defaults, and the CMM then restarts.

For more information about how the CMM integrates into the Systems Management
architecture, see 3.2, “Chassis Management Module” on page 43.



4.10 I/O architecture
The Enterprise Chassis can accommodate four I/O modules that are installed in vertical
orientation into the rear of the chassis, as shown in Figure 4-32.

Figure 4-32 Rear view that shows the I/O Module bays 1 - 4

If a node has a two-port integrated LAN on Motherboard (LOM) as standard, I/O modules 1 and 2 are connected to this LOM. If an I/O adapter is installed in the node’s I/O expansion slot 1, modules 1 and 2 are connected to this adapter.

Modules 3 and 4 connect to the I/O adapter that is installed within I/O expansion bay 2 on the
node.

These I/O modules provide external connectivity, and connect internally to each of the nodes
within the chassis. They can be Switch or Pass-thru modules, with a potential to support other
types in the future.

Figure 4-33 shows the connections from the nodes to the switch modules.

Figure 4-33 LOM, I/O adapter, and switch module connections

The node in bay 1 in Figure 4-33 shows that when shipped with an LOM, the LOM connector
provides the link from the node system board to the midplane. Some nodes do not ship with
LOM.

If required, this LOM connector can be removed and an I/O expansion adapter can be installed in its place. This configuration is shown on the node in bay 2 in Figure 4-33.



Figure 4-34 shows the electrical connections from the LOM and I/O adapters to the I/O
modules, which all takes place across the chassis midplane.

Figure 4-34 Logical layout of node-to-switch interconnects (each line between an I/O adapter and a switch is four links)

A total of two I/O expansion adapters (designated M1 and M2 in Figure 4-34) can be plugged
into a half-wide node. Up to four I/O adapters can be plugged into a full-wide node.

Each I/O adapter has two connectors. One connects to the compute node’s system board
(PCI Express connection). The second connector is a high-speed interface to the midplane
that mates to the midplane when the node is installed into a bay within the chassis.

As shown in Figure 4-34, each of the links from an I/O adapter to the midplane is four links wide. Exactly how many links are used on each I/O adapter depends on the design of the adapter and the number of ports that are wired. Therefore, a half-wide node can have a maximum of 16 I/O links and a full-wide node can have 32 links.

Figure 4-35 shows an I/O expansion adapter.

Figure 4-35 I/O expansion adapter (adapters share a common size of 100 mm x 80 mm; a guide block ensures correct installation)

Each of these individual I/O links, or lanes, can be wired for 1 Gb or 10 Gb Ethernet, or for 8 Gbps or 16 Gbps Fibre Channel. The application-specific integrated circuit (ASIC) type on the I/O expansion adapter dictates the number of links that are enabled. Some ASICs are two-port, some are four-port, and some I/O expansion adapters contain two ASICs. For a two-port ASIC, one port can go to one switch and one port to the other. This configuration is shown in Figure 4-36 on page 108. In the future, other combinations can be implemented.

In an Ethernet I/O adapter, the wiring of the links is to the IEEE 802.3ap standard, which is
also known as the Backplane Ethernet standard. The Backplane Ethernet standard has
different implementations at 10 Gbps, being 10GBASE-KX4 and 10GBASE-KR. The I/O
architecture of the Enterprise Chassis supports the KX4 and KR.

The 10GBASE-KX4 uses the same physical layer coding (IEEE 802.3 clause 48) as
10GBASE-CX4, where each individual lane (SERDES = Serializer/DeSerializer) carries
3.125 Gbaud of signaling bandwidth.

The 10GBASE-KR uses the same coding (IEEE 802.3 clause 49) as 10GBASE-LR/ER/SR,
where the SERDES lane operates at 10.3125 Gbps.

Each of the links between the I/O expansion adapter and the I/O module can be four 3.125 Gbaud lanes per port (KX4) or four 10 Gbps lanes (KR). The choice depends on the expansion adapter and I/O module implementation.
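Both variants deliver 10 Gbps of data per port; the difference is how the lanes get there. The following arithmetic checks the line rates quoted above, using the physical-layer coding overheads defined by the cited IEEE clauses (8b/10b for clause 48, 64b/66b for clause 49):

```python
# Data rate of a KX4 port: four lanes at 3.125 Gbaud with 8b/10b coding.
kx4_bps = 4 * 3.125e9 * (8 / 10)
# Data rate of a KR port: one lane at 10.3125 Gbaud with 64b/66b coding.
kr_bps = 1 * 10.3125e9 * (64 / 66)

print(kx4_bps / 1e9)  # 10.0 Gbps
print(kr_bps / 1e9)   # 10.0 Gbps
```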



Figure 4-36 shows how the integrated two-port 10 Gb LOM connects through a LOM
connector to switch 1. This implementation provides a pair of 10 Gb lanes. Each lane
connects to a 10 Gb switch or 10 Gb pass-through module that is installed in I/O module bays
in the rear of the chassis. The LOM connector is sometimes referred to as a periscope
connector because of its shape.

Figure 4-36 LOM implementation: Emulex 10 Gb Virtual Fabric onboard LOM to I/O Module

A half-wide compute node with two standard I/O adapter sockets and an I/O adapter with two
ports is shown in Figure 4-37. Port 1 connects to one switch in the chassis and Port 2
connects to another switch in the chassis. With 14 compute nodes of this configuration
installed in the chassis, each switch requires 14 internal ports for connectivity to the compute
nodes.

Figure 4-37 I/O adapter with a two-port ASIC

Another possible implementation of the I/O adapter is the four-port. Figure 4-38 shows the
interconnection to the I/O module bays for such I/O adapters that uses a single four-port
ASIC.

Figure 4-38 I/O adapter with a four-port single ASIC

In this case, with each node having a four-port I/O adapter in I/O adapter slot 1, each I/O
module requires 28 internal ports enabled. This configuration highlights another key feature of
the I/O architecture: scalable on-demand port enablement. Sets of ports are enabled by using
IBM Features on Demand (FoD) activation licenses to allow a greater number of connections
between nodes and a switch. With two lanes per node to each switch and 14 nodes requiring
four ports that are connected, each switch must have 28 internal ports enabled. You also
need sufficient uplink ports enabled to support the wanted bandwidth. FoD feature upgrades
enable these ports.
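The internal-port arithmetic above generalizes: when an adapter's ports are split evenly across the installed switches, each switch needs one internal port enabled (through FoD licensing) per connected adapter port per node. A sketch under that assumption (the function name is ours):

```python
# Internal switch ports that must be enabled per switch, assuming an
# adapter's ports are split evenly across the installed switches.
def internal_ports_per_switch(nodes: int, adapter_ports: int,
                              switches: int = 2) -> int:
    return nodes * (adapter_ports // switches)

print(internal_ports_per_switch(14, 2))  # 14: two-port adapter (Figure 4-37)
print(internal_ports_per_switch(14, 4))  # 28: four-port adapter, as above
```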

Finally, Figure 4-39 on page 110 shows an eight-port I/O adapter that is using two, four-port
ASICs.



Figure 4-39 I/O adapter with an eight-port, dual-ASIC implementation

Six ports active: In the case of the CN4058 8-port 10Gb Converged Adapter, although this is an eight-port adapter, the currently available switches support only up to six of those ports (three ports to each of two installed switches). With these switches, three of the four lanes per module can be enabled.

The architecture allows for a total of eight lanes per I/O adapter, as shown in Figure 4-40. Therefore, a total of 16 I/O lanes per half-wide node is possible. Each I/O module requires the matching number of internal ports to be enabled.

Figure 4-40 Full chassis connectivity: Eight ports per adapter

For more information about port enablement by using FoD, see 4.11, “I/O modules” on
page 112. For more information about I/O expansion adapters that install on the nodes, see
5.8.1, “Overview” on page 335.



4.11 I/O modules
I/O modules are inserted into the rear of the Enterprise Chassis to provide interconnectivity
within the chassis and external to the chassis. This section describes the I/O and Switch
module naming scheme.

There are four I/O Module bays at the rear of the chassis. To insert an I/O module into a bay,
first remove the I/O filler. Figure 4-41 shows how to remove an I/O filler and insert an I/O
module into the chassis by using the two handles.

Figure 4-41 Removing an I/O filler and installing an I/O module

4.11.1 I/O module LEDs


I/O Module Status LEDs are at the bottom of the module when inserted into the chassis. All
modules share three status LEDs, as shown in Figure 4-42.

Figure 4-42 Example of I/O module status LEDs (callouts: serial port for local management; OK, Identify, and Switch error LEDs)

The LEDs indicate the following conditions:
򐂰 OK (power)
When this LED is lit, it indicates that the switch is on. When it is not lit and the amber
switch error LED is lit, it indicates a critical alert. If the amber LED is also not lit, it indicates
that the switch is off.
򐂰 Identify
You can physically identify a switch by lighting this blue LED through the
management software.
򐂰 Switch Error
When this LED is lit, it indicates a POST failure or critical alert. When this LED is lit, the
system-error LED on the chassis is also lit.
When this LED is not lit and the green LED is lit, it indicates that the switch is working
correctly. If the green LED is also not lit, it indicates that the switch is off.

4.11.2 Serial access cable


The switches (and CMM) support local command-line interface (CLI) access through a USB
serial cable. The mini-USB port on the switch is near the LEDs, as shown in Figure 4-42 on
page 112. A cable kit with supported serial cables can be ordered as listed in Table 4-16.

Table 4-16 Serial cable


Part number Feature codea Description

90Y9338 A2RR IBM Flex System Management Serial Access Cable


a. The feature code listed is for both the System x sales channel (HVEC) using x-config and the
Power Systems sales channel (AAS) using e-config.

The part number 90Y9338 includes the following cables:


򐂰 Mini-USB-to-RJ45 serial cable
򐂰 Mini-USB-to-DB9 serial cable



4.11.3 I/O module naming scheme
The I/O module naming scheme follows a logical structure, similar to that of the I/O adapters.

Figure 4-43 shows the I/O module naming scheme. This scheme might be expanded to
support future technology.

Example: IBM Flex System EN2092 1 Gb Ethernet Scalable Switch

The module name EN2092 decodes as follows:
򐂰 Fabric type (first two letters): EN = Ethernet, FC = Fibre Channel, CN = Converged Network, IB = InfiniBand, SI = System Interconnect
򐂰 Series (first digit): 2 for 1 Gb, 3 for 8 Gb, 4 for 10 Gb, 5 for 16 Gb, 6 for 56 Gb and 40 Gb
򐂰 Vendor name, where A=01 (middle two digits): 02 = Brocade, 09 = IBM, 13 = Mellanox, 17 = QLogic
򐂰 Maximum number of ports available to each node (last digit): 1 = One, 2 = Two, 3 = Three

Figure 4-43 IBM Flex System I/O Module naming scheme
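Because the scheme is positional, a module name can be decoded mechanically. The following Python helper is purely illustrative (the function name and lookup tables are ours, and the tables contain only the values shown in Figure 4-43):

```python
# Decode an IBM Flex System I/O module name (for example, "EN2092")
# according to the scheme in Figure 4-43. Illustrative sketch only;
# the lookup tables hold just the values listed in the figure.

FABRIC = {
    "EN": "Ethernet",
    "FC": "Fibre Channel",
    "CN": "Converged Network",
    "IB": "InfiniBand",
    "SI": "System Interconnect",
}
SERIES = {"2": "1 Gb", "3": "8 Gb", "4": "10 Gb", "5": "16 Gb", "6": "56 Gb & 40 Gb"}
VENDOR = {"02": "Brocade", "09": "IBM", "13": "Mellanox", "17": "QLogic"}

def decode_module(name):
    """Split a six-character module name into its naming-scheme fields."""
    fabric, series, vendor, ports = name[:2], name[2], name[3:5], name[5]
    return {
        "fabric": FABRIC[fabric],      # for example, EN -> Ethernet
        "series": SERIES[series],      # for example, 2 -> 1 Gb
        "vendor": VENDOR[vendor],      # for example, 09 -> IBM
        "ports_per_node": int(ports),  # maximum ports to each node
    }

print(decode_module("EN2092"))
# {'fabric': 'Ethernet', 'series': '1 Gb', 'vendor': 'IBM', 'ports_per_node': 2}
```

Applied to the CN4093 described later in this chapter, the same helper yields Converged Network, 10 Gb, IBM, and three ports per node, which matches the three internal ports per node bay that the fully upgraded switch provides.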

4.11.4 Switch to adapter compatibility
This section lists switch to adapter interoperability.

Ethernet switches and adapters


Table 4-17 lists Ethernet switch-to-card compatibility.

Switch upgrades: To maximize the usable port count on the adapters, the switches might
need more license upgrades.

Table 4-17 Ethernet switch to card compatibility


                        EN2092    CN4093    EN4093R   EN4093    EN4091     SI4093    EN6131
                        1Gb       10Gb      10Gb      10Gb      10Gb       10Gb      40Gb
                        Switch    Switch    Switch    Switch    Pass-thru  SIM       Switch

Switch part number      49Y4294   00D5823   95Y3309   49Y4270   88Y6043    95Y3313   90Y9346
Switch feature codes    A0TF /    A3HH /    A3J6 /    A0TB /    A1QV /     A45T /    A3HJ /
(XCC / AAS)a            3598      ESW2      ESW7      3593      3700       ESWA      ESW6

(Each adapter row below lists the adapter part number, feature codes, and description, followed by Yes/No compatibility per switch in the column order above.)

None x220 Onboard 1Gb Yes Yesb Yes Yes Yes Yes No

None x222 Onboard 10Gb Yesc Yesc Yesc Yesc No Yesc No

None x240 Onboard 10Gb Yes Yes Yes Yes Yes Yes Yes

None x440 Onboard 10Gb Yes Yes Yes Yes Yes Yes Yes

49Y7900 EN2024 4-port 1Gb Yes Yes Yes Yes Yesd Yes No
A10Y / 1763 Ethernet Adapter

90Y3466 EN4132 2-port 10 Gb No No Yes Yes Yes Yes Yes


A1QY / EC2D Ethernet Adapter

None EN4054 4-port 10Gb Yes Yes Yes Yes Yesd Yes Yes
None / 1762 Ethernet Adapter

90Y3554 CN4054 10Gb Virtual Yes Yes Yes Yes Yesd Yes Yes
A1R1 / 1759 Fabric Adapter

None CN4058 8-port 10Gb Yese Yesf Yesf Yesf Yesd Yes No
None / EC24 Converged Adapter

None EN4132 2-port 10Gb No No Yes Yes Yes Yes Yes


None / EC26 RoCE Adapter

90Y3482 EN6132 2-port 40Gb No No No No No No Yes


A3HK / A3HK Ethernet Adapter
a. The first feature code that is listed is for configurations that are ordered through System x sales channels (XCC by using
x-config). The second feature code is for configurations that are ordered through the IBM Power Systems channel (AAS
by using e-config)
b. 1 Gb is supported on the CN4093’s two external 10 Gb SFP+ ports only. The 12 external Omni Ports do not support
1 GbE speeds.
c. Either Upgrade 1 or Upgrade 2 is required to enable enough internal switch ports to connect to both servers in the
x222.
d. Only two of the ports of this adapter are connected when used with the EN4091 10Gb Pass-thru.
e. Only four of the eight ports of CN4058 adapter are connected with the EN2092 switch.
f. Only six of the eight ports of the CN4058 adapter are connected with the CN4093, EN4093, EN4093R switches.



Fibre Channel switches and adapters
Table 4-18 lists Fibre Channel switch-to-card compatibility.

Table 4-18 Fibre Channel switch to card compatibility


                        FC5022    FC5022    FC5022     FC3171    FC3171
                        16Gb      16Gb      16Gb ESB   8Gb       8Gb
                        12-port   24-port   24-port    switch    Pass-thru

Switch part number      88Y6374   00Y3324   90Y9356    69Y1930   69Y1934
Switch feature codes    A1EH /    A3DP /    A2RQ /     A0TD /    A0TJ /
(XCC / AAS)a            3770      ESW5      3771       3595      3591

69Y1938 A1BM / 1764 FC3172 2-port 8Gb FC Yes Yes Yes Yes Yes
Adapter

95Y2375 A2N5 / EC25 FC3052 2-port 8Gb FC Yes Yes Yes Yes Yes
Adapter

88Y6370 A1BP / EC2B FC5022 2-port 16Gb FC Yes Yes Yes No No


Adapter

95Y2386 A45R / EC23 FC5052 2-port 16Gb FC Yes Yes Yes No No


Adapter

95Y2391 A45S / EC2E FC5054 4-port 16Gb FC Yes Yes Yes No No


Adapter

69Y1942 A1BQ / A1BQ FC5172 2-port 16Gb FC Yes Yes Yes Yes Yes
Adapter

95Y2379 A3HU / A3HU FC5024D 4-port 16Gb FC Yes Yes Yes No No


Adapter
a. The first feature code that is listed is for configurations that are ordered through System x sales channels (XCC by
using x-config). The second feature code is for configurations that are ordered through the IBM Power Systems
channel (AAS by using e-config)

InfiniBand switches and adapters


Table 4-19 lists InfiniBand switch to card compatibility.

Table 4-19 InfiniBand switch to card compatibility


                                                     IB6131 InfiniBand Switch

Switch part number                                   90Y3450
Switch feature code (XCC / AAS)a                     A1EK / 3699

90Y3454 A1QZ / EC2C IB6132 2-port FDR InfiniBand Adapter Yesb

None None / 1761 IB6132 2-port QDR InfiniBand Adapter Yes

90Y3486 A365 / A365 IB6132D 2-port FDR InfiniBand Adapter Yesb


a. The first feature code that is listed is for configurations that are ordered through System x sales channels (XCC by
using x-config). The second feature code is for configurations that are ordered through the IBM Power Systems
channel (AAS by using e-config)
b. To operate at FDR speeds, the IB6131 switch needs the FDR upgrade. For more information, see Table 4-44 on
page 160.

4.11.5 IBM Flex System EN6131 40Gb Ethernet Switch
The IBM Flex System EN6131 40Gb Ethernet Switch, together with the EN6132 40Gb
Ethernet Adapter, offers the performance that you need to support clustered databases,
parallel processing, transactional services, and high-performance embedded I/O applications,
which reduces task completion time and lowers the cost per operation. This switch offers 14 internal
and 18 external 40 Gb Ethernet ports that enable a non-blocking network design. It supports
all Layer 2 functions so servers can communicate within the chassis without going to a
top-of-rack (ToR) switch, which helps improve performance and latency.

Figure 4-44 IBM Flex System EN6131 40Gb Ethernet Switch

With this 40 Gb Ethernet solution, you can deploy more workloads per server without running into I/O
bottlenecks. During failures or server maintenance, clients can also move their virtual
machines much faster by using 40 Gb interconnects within the chassis.

The 40 GbE switch and adapter are designed for low latency, high bandwidth, and computing
efficiency for performance-driven server and storage clustering applications. They provide
extreme scalability for low-latency clustered solutions with reduced packet hops.

The IBM Flex System 40 GbE solution offers the highest bandwidth without adding any
significant power impact to the chassis. It can also help increase the system usage and
decrease the number of network ports for further cost savings.

Figure 4-45 External ports of the IBM Flex System EN6131 40Gb Ethernet Switch (callouts: 18x QSFP+ ports (up to 40 Gbps), switch release handles (one each side), RS-232 serial port, Gigabit Ethernet management port, switch LEDs)

The front panel contains the following components:


򐂰 LEDs that show the following statuses of the module and the network:
– Green power LED indicates that the module passed the power-on self-test (POST) with
no critical faults and is operational.
– Identify LED: This blue LED can be used to identify the module physically by
illuminating it through the management software.
– The fault LED (switch error) indicates that the module failed the POST or detected an
operational fault.
򐂰 Eighteen external QSFP+ ports for 10 Gbps, 20 Gbps, or 40 Gbps connections to the
external network devices.
򐂰 An Ethernet physical link LED and an Ethernet Tx/Rx LED for each external port on the
module.



򐂰 One mini-USB RS-232 console port that provides another means to configure the switch
module. This mini-USB-style connector enables the connection of a special serial cable
(the cable is optional and it is not included with the switch). For more information, see
Table 4-21.

Table 4-20 shows the part number and feature codes that are used to order the EN6131 40Gb
Ethernet Switch.

Table 4-20 Part number and feature code for ordering


Description Part number Feature code
(x-config / e-config)

IBM Flex System EN6131 40Gb Ethernet Switch 90Y9346 A3HJ / ESW6

QSFP+ Transceivers ordering: No QSFP+ (quad small form-factor pluggable plus)


transceivers or cables are included with the switch. They must be ordered separately.

The switch does not include a serial management cable. However, IBM Flex System
Management Serial Access Cable 90Y9338 is supported and contains two cables, a
mini-USB-to-RJ45 serial cable and a mini-USB-to-DB9 serial cable, either of which can be
used to connect to the switch module locally for configuration tasks and firmware updates.

Table 4-21 lists the supported cables and transceivers.

Table 4-21 Supported transceivers and direct attach cables


Description Part number Feature code
(x-config / e-config)

Serial console cables

IBM Flex System Management Serial Access Cable Kit 90Y9338 A2RR / A2RR

QSFP+ transceiver and optical cables - 40 GbE

IBM QSFP+ 40GBASE-SR Transceiver 49Y7884 A1DR / EB27


(Requires either cable 90Y3519 or cable 90Y3521)

10m IBM MTP Fiber Optical Cable (requires transceiver 49Y7884) 90Y3519 A1MM / EB2J

30m IBM MTP Fiber Optical Cable (requires transceiver 49Y7884) 90Y3521 A1MN / EB2K

QSFP+ direct-attach cables - 40 GbE

3m FDR InfiniBand Cable 90Y3470 A227 / None

3m IBM QSFP+ to QSFP+ Cable 49Y7891 A1DQ / EB2H

5m IBM QSFP+ to QSFP+ Cable 00D5810 A2X8 / ECBN

7m IBM QSFP+ to QSFP+ Cable 00D5813 A2X9 / ECBP

The EN6131 40Gb Ethernet Switch has the following features and specifications:
򐂰 MLNX-OS operating system
򐂰 Internal ports:
– A total of 14 internal full-duplex 40 Gigabit ports (10, 20, or 40 Gbps auto-negotiation).
– One internal full-duplex 1 GbE port that is connected to the chassis management
module.

򐂰 External ports:
– A total of 18 ports for 40 Gb Ethernet QSFP+ transceivers or QSFP+ DACs (10, 20, or
40 Gbps auto-negotiation). QSFP+ modules and DACs are not included and must be
purchased separately.
– One external 1 GbE port with RJ-45 connector for switch configuration and
management.
– One RS-232 serial port (mini-USB connector) that provides another means to
configure the switch module.
򐂰 Scalability and performance:
– 40 Gb Ethernet ports for extreme bandwidth and performance.
– Non-blocking architecture with wire-speed forwarding of traffic and an aggregated
throughput of 1.44 Tbps.
– Support for up to 48,000 unicast and up to 16,000 multicast media access control
(MAC) addresses per subnet.
– Static and LACP (IEEE 802.3ad) link aggregation, up to 720 Gb of total uplink
bandwidth per switch, up to 36 link aggregation groups (LAGs), and up to 16 ports per
LAG.
– Support for jumbo frames (up to 9,216 bytes).
– Broadcast/multicast storm control.
– IGMP snooping to limit flooding of IP multicast traffic.
– Fast port forwarding and fast uplink convergence for rapid STP convergence.
򐂰 Availability and redundancy:
– IEEE 802.1D STP for providing L2 redundancy.
– IEEE 802.1w Rapid STP (RSTP) provides rapid STP convergence for critical
delay-sensitive traffic such as voice or video.
򐂰 VLAN support:
– Up to 4094 VLANs are supported per switch, with VLAN numbers 1 - 4094.
– 802.1Q VLAN tagging support on all ports.
򐂰 Security:
– Up to 24,000 rules with VLAN-based, MAC-based, protocol-based, and IP-based
access control lists (ACLs).
– User access control (multiple user IDs and passwords).
– RADIUS, TACACS+, and LDAP authentication and authorization.
򐂰 Quality of service (QoS):
– Support for IEEE 802.1p traffic processing.
– Traffic shaping that is based on defined policies.
– Four Weighted Round Robin (WRR) priority queues per port for processing qualified
traffic.
– Priority-Based Flow Control (PFC) (IEEE 802.1Qbb) extends 802.3x standard flow
control to allow the switch to pause traffic based on the 802.1p priority value in each
packet’s VLAN tag.
– Enhanced Transmission Selection (ETS) (IEEE 802.1Qaz) provides a method for
allocating link bandwidth based on the 802.1p priority value in each packet’s VLAN tag.



򐂰 Manageability:
– IPv4 and IPv6 host management.
– Simple Network Management Protocol (SNMP V1, V2, and V3).
– Web-based GUI.
– Industry standard CLI (IS-CLI) through Telnet, SSH, and serial port.
– Link Layer Discovery Protocol (LLDP) to advertise the device's identity, capabilities,
and neighbors.
– Firmware image update (TFTP, FTP, and SCP).
– Network Time Protocol (NTP) for clock synchronization.
򐂰 Monitoring:
– Switch LEDs for external port status and switch module status indication.
– Port mirroring for analyzing network traffic passing through the switch.
– Change tracking and remote logging with the syslog feature.
– Support for sFLOW agent for monitoring traffic in data networks (separate sFLOW
collector/analyzer is required elsewhere).
– POST diagnostic tests.

The switch supports the following Ethernet standards:


򐂰 IEEE 802.1AB Link Layer Discovery Protocol
򐂰 IEEE 802.1D Spanning Tree Protocol (STP)
򐂰 IEEE 802.1p Class of Service (CoS) prioritization
򐂰 IEEE 802.1Q Tagged VLAN (frame tagging on all ports when VLANs are enabled)
򐂰 IEEE 802.1Qbb Priority-Based Flow Control (PFC)
򐂰 IEEE 802.1Qaz Enhanced Transmission Selection (ETS)
򐂰 IEEE 802.1w Rapid STP (RSTP)
򐂰 IEEE 802.3ab 1000BASE-T copper twisted pair Gigabit Ethernet
򐂰 IEEE 802.3ad Link Aggregation Control Protocol
򐂰 IEEE 802.3ba 40GBASE-SR4 short range fiber optics 40 Gb Ethernet
򐂰 IEEE 802.3ba 40GBASE-CR4 copper 40 Gb Ethernet
򐂰 IEEE 802.3u 100BASE-TX Fast Ethernet
򐂰 IEEE 802.3x Full-duplex Flow Control

The EN6131 40Gb Ethernet Switch can be installed in bays 1, 2, 3, and 4 of the Enterprise
Chassis. A supported Ethernet adapter must be installed in the corresponding slot of the
compute node (slot A1 when I/O modules are installed in bays 1 and 2 or slot A2 when I/O
modules are installed in bays 3 and 4).

If a four-port 10 GbE adapter is used, only up to two adapter ports can be used with the
EN6131 40Gb Ethernet Switch (one port per switch).

For more information including example configurations, see the IBM Redbooks Product
Guide IBM Flex System EN6131 40Gb Ethernet Switch, TIPS0911, which is available at this
website:
http://www.redbooks.ibm.com/abstracts/tips0911.html?Open

4.11.6 IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch
The IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch provides unmatched
scalability, performance, convergence, and network virtualization. It also delivers innovations
to help address a number of networking concerns and provides capabilities that help you
prepare for the future.

The switch offers full Layer 2/3 switching and FCoE Full Fabric and Fibre Channel NPV
Gateway operations to deliver a converged and integrated solution. It is installed within the
I/O module bays of the IBM Flex System Enterprise Chassis. The switch can help you migrate
to a 10 Gb or 40 Gb converged Ethernet infrastructure and offers virtualization features such
as Virtual Fabric and IBM VMready®, and the ability to work with IBM Distributed Virtual
Switch 5000V.

Figure 4-46 shows the IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch.

Figure 4-46 IBM Flex System Fabric CN4093 10 Gb Converged Scalable Switch

The CN4093 switch is initially licensed with 14 internal 10 GbE ports, two external 10 GbE
SFP+ ports, and six external Omni Ports enabled.

The following ports can be enabled:


򐂰 A total of 14 more internal ports and two external 40 GbE QSFP+ uplink ports with
Upgrade 1.
򐂰 A total of 14 more internal ports and six more external Omni Ports with the Upgrade 2
license option.
򐂰 Upgrade 1 and Upgrade 2 can be applied on the switch independently from each other or
in combination for full feature capability.

Table 4-22 shows the part numbers for ordering the switches and the upgrades.

Table 4-22 Part numbers and feature codes for ordering


Description Part number Feature code
(x-config / e-config)

Switch module

IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch 00D5823 A3HH / ESW2

Features on Demand upgrades

IBM Flex System Fabric CN4093 Converged Scalable Switch (Upgrade 1) 00D5845 A3HL / ESU1

IBM Flex System Fabric CN4093 Converged Scalable Switch (Upgrade 2) 00D5847 A3HM / ESU2



Description Part number Feature code
(x-config / e-config)

Management cable

IBM Flex System Management Serial Access Cable 90Y9338 A2RR / A2RR

No QSFP+ or SFP+ transceivers or cables are included with the switch. They must be
ordered separately (see Table 4-24 on page 124).

The switch does not include a serial management cable. However, IBM Flex System
Management Serial Access Cable 90Y9338 is supported and contains two cables, a
mini-USB-to-RJ45 serial cable and a mini-USB-to-DB9 serial cable, either of which can be
used to connect to the switch locally for configuration tasks and firmware updates.

The following base switch and upgrades are available:


򐂰 Part number 00D5823 is for the physical device, which comes with 14 internal 10 GbE
ports enabled (one to each node bay), two external 10 GbE SFP+ ports that are enabled
to connect to a Top of Rack switch or other devices identified as EXT1 and EXT2, and six
Omni Ports that are enabled to connect to Ethernet or Fibre Channel networking
infrastructure, depending on the SFP+ cable or transceiver that is used. The six Omni
ports are from the 12 labeled on the switch as EXT11 through EXT22.
򐂰 Part number 00D5845 (Upgrade 1) can be applied on the base switch when you need
more uplink bandwidth with two 40 GbE QSFP+ ports that can be converted into 4x
10 GbE SFP+ DAC links with the optional break-out cables. These are labeled EXT3,
EXT7 or EXT3-EXT6, EXT7-EXT10 if converted. This upgrade also enables 14 more
internal ports, for a total of 28 ports, to provide more bandwidth to the compute nodes
using 4-port expansion cards.
򐂰 Part number 00D5847 (Upgrade 2) can be applied on the base switch when you need
more external Omni Ports on the switch or if you want more internal bandwidth to the node
bays. The upgrade enables the remaining six external Omni Ports from range EXT11
through EXT22, plus 14 internal 10 Gb ports, for a total of 28 internal ports, to provide
more bandwidth to the compute nodes using four-port expansion cards.
򐂰 Both 00D5845 (Upgrade 1) and 00D5847 (Upgrade 2) can be applied on the switch at the
same time so that you can use six ports on an eight-port expansion card, and use all the
external ports on the switch.

Table 4-23 shows the switch upgrades and the ports they enable.

Table 4-23 CN4093 10 Gb Converged Scalable Switch part numbers and port upgrades
Part number   Feature codea   Description                 Total ports that are enabled:
                                                          Internal 10Gb | External 10Gb SFP+ | External 10Gb Omni | External 40Gb QSFP+

00D5823 A3HH / ESW2 Base switch (no upgrades) 14 2 6 0

00D5845 A3HL / ESU1 Add Upgrade 1 28 2 6 2

00D5847 A3HM / ESU2 Add Upgrade 2 28 2 12 0

00D5845 and 00D5847   A3HL / ESU1 and A3HM / ESU2   Add both Upgrade 1 and Upgrade 2   42   2   12   2
a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config.
The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using
e-config.

Each upgrade license enables more internal ports. To make full use of those ports, each
compute node needs the appropriate I/O adapter installed:
򐂰 The base switch requires a two-port Ethernet adapter (one port of the adapter goes to
each of two switches).
򐂰 Adding Upgrade 1 or Upgrade 2 requires a four-port Ethernet adapter (two ports of the
adapter to each switch) to use all the internal ports.
򐂰 Adding both Upgrade 1 and Upgrade 2 requires a six-port Ethernet adapter (three ports to
each switch) to use all the internal ports.
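The licensing arithmetic in Table 4-23, together with the adapter requirements above, can be expressed in a short sketch. The function below is illustrative only (it is not an IBM tool, and the names are ours); the port counts come directly from the table:

```python
# Port counts enabled on a CN4093 for a given set of FoD upgrades,
# per Table 4-23. Illustrative sketch; names are not IBM's.

def cn4093_ports(upgrade1=False, upgrade2=False):
    """Return the enabled port counts for the chosen FoD upgrades."""
    ports = {
        "internal_10gb": 14,      # base: one port to each of the 14 node bays
        "external_sfp_plus": 2,   # EXT1 and EXT2, always enabled
        "external_omni": 6,       # 6 of the 12 Omni Ports in the base switch
        "external_qsfp_plus": 0,  # 40 GbE uplinks require Upgrade 1
    }
    if upgrade1:
        ports["internal_10gb"] += 14
        ports["external_qsfp_plus"] += 2
    if upgrade2:
        ports["internal_10gb"] += 14
        ports["external_omni"] += 6
    # Internal ports divided across the 14 node bays gives the number of
    # adapter ports each compute node needs to connect to this one switch.
    ports["adapter_ports_per_switch"] = ports["internal_10gb"] // 14
    return ports

print(cn4093_ports(upgrade1=True, upgrade2=True))
# {'internal_10gb': 42, 'external_sfp_plus': 2, 'external_omni': 12,
#  'external_qsfp_plus': 2, 'adapter_ports_per_switch': 3}
```

With both upgrades applied, the result shows three adapter ports per switch, which is why a six-port adapter (three ports to each of two switches) is needed to drive all 42 internal ports.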

Front panel
Figure 4-47 shows the main components of the CN4093 switch.

Figure 4-47 IBM Flex System Fabric CN4093 10 Gb Converged Scalable Switch (callouts: 2x 10 Gb SFP+ ports (standard), 2x 40 Gb QSFP+ uplink ports (enabled with Upgrade 1), 12x Omni SFP+ ports (6 standard, 6 with Upgrade 2), switch release handles (one each side), management ports, switch LEDs)

The front panel contains the following components:


򐂰 LEDs that show the status of the switch module and the network:
– The OK LED indicates that the switch module passed the power-on self-test (POST)
with no critical faults and is operational.
– Identify: You can use this blue LED to identify the switch physically by illuminating it
through the management software.
– The error LED (switch module error) indicates that the switch module failed the POST
or detected an operational fault.
򐂰 One mini-USB RS-232 console port that provides another means to configure the switch
module. This mini-USB-style connector enables connection of a special serial cable. (The
cable is optional and it is not included with the switch. For more information, see
Table 4-24 on page 124.)
򐂰 Two external SFP+ ports for 1 Gb or 10 Gb connections to external Ethernet devices.
򐂰 Twelve external SFP+ Omni Ports for 10 Gb connections to the external Ethernet devices
or 4/8 Gb FC connections to the external SAN devices.

Omni Ports support: 1 Gb is not supported on Omni Ports.

򐂰 Two external QSFP+ port connectors to attach QSFP+ modules or cables for a single
40 Gb uplink per port or splitting of a single port into 4x 10 Gb connections to external
Ethernet devices.
򐂰 A link OK LED and a Tx/Rx LED for each external port on the switch module.
򐂰 A mode LED for each pair of Omni Ports indicating the operating mode. (OFF indicates
that the port pair is configured for Ethernet operation, and ON indicates that the port pair is
configured for Fibre Channel operation.)



Cables and transceivers
Table 4-24 lists the supported cables and transceivers.

Table 4-24 Supported transceivers and direct-attach cables


Description Part Feature code
number (x-config / e-config)

Serial console cables

IBM Flex System Management Serial Access Cable Kit 90Y9338 A2RR / A2RR

SFP transceivers - 1 GbE (supported on two dedicated SFP+ ports)

IBM SFP RJ-45 Transceiver (does not support 10/100 Mbps) 81Y1618 3268 / EB29

IBM SFP SX Transceiver 81Y1622 3269 / EB2A

IBM SFP LX Transceiver 90Y9424 A1PN / ECB8

SFP+ transceivers - 10 GbE (supported on SFP+ ports and Omni Ports)

IBM SFP+ SR Transceiver 46C3447 5053 / EB28

IBM SFP+ LR Transceiver 90Y9412 A1PM / ECB9

10GBase-SR SFP+ (MMFiber) transceiver 44W4408 4942 / 3282

SFP+ direct-attach cables - 10 GbE (supported on SFP+ ports and Omni Ports)

1m IBM Passive DAC SFP+ 90Y9427 A1PH / ECB4

3m IBM Passive DAC SFP+ 90Y9430 A1PJ / ECB5

5m IBM Passive DAC SFP+ 90Y9433 A1PK / ECB6

QSFP+ transceiver and cables - 40 GbE (supported on QSFP+ ports)

IBM QSFP+ 40GBASE-SR Transceiver (requires either cable 90Y3519 or cable 49Y7884 A1DR / EB27
90Y3521)

10m IBM MTP Fiber Optical Cable (requires transceiver 49Y7884) 90Y3519 A1MM / EB2J

30m IBM MTP Fiber Optical Cable (requires transceiver 49Y7884) 90Y3521 A1MN / EB2K

QSFP+ breakout cables - 40 GbE to 4 x 10 GbE (supported on QSFP+ ports)

1m 40Gb QSFP+ to 4 x 10Gb SFP+ Cable 49Y7886 A1DL / EB24

3m 40Gb QSFP+ to 4 x 10Gb SFP+ Cable 49Y7887 A1DM / EB25

5m 40Gb QSFP+ to 4 x 10Gb SFP+ Cable 49Y7888 A1DN / EB26

QSFP+ direct-attach cables - 40 GbE (supported on QSFP+ ports)

1m QSFP+ to QSFP+ DAC 49Y7890 A1DP / EB2B

3m QSFP+ to QSFP+ DAC 49Y7891 A1DQ / EB2H

SFP+ transceivers - 8 Gb FC (supported on Omni Ports)

IBM 8Gb SFP+ SW Optical Transceiver 44X1964 5075 / 3286

Features and specifications
The IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch has the following
features and specifications:
򐂰 Internal ports:
– A total of 42 internal full-duplex 10 Gigabit ports. (A total of 14 ports are enabled by
default. Optional FoD licenses are required to activate the remaining 28 ports.)
– Two internal full-duplex 1 GbE ports that are connected to the CMM.
򐂰 External ports:
– Two ports for 1 Gb or 10 Gb Ethernet SFP+ transceivers (support for 1000BASE-SX,
1000BASE-LX, 1000BASE-T, 10GBASE-SR, 10GBASE-LR, or SFP+ copper
direct-attach cables (DACs)). These two ports are enabled by default. SFP+ modules
and DACs are not included and must be purchased separately.
– A total of 12 IBM Omni Ports. Each of them can operate as 10 Gb Ethernet (support for
10GBASE-SR, 10GBASE-LR, or 10 GbE SFP+ DACs), or auto-negotiating as 4/8 Gb
Fibre Channel, depending on the SFP+ transceiver that is installed in the port. The first
six ports are enabled by default. An optional FoD license is required to activate the
remaining six ports. SFP+ modules and DACs are not included and must be purchased
separately.

Omni Ports support: Omni Ports do not support 1 Gb Ethernet operations.

– Two ports for 40 Gb Ethernet QSFP+ transceivers or QSFP+ DACs. (Ports are disabled
by default. An optional FoD license is required to activate them.) Also, you can use
break-out cables to break out each 40 GbE port into four 10 GbE SFP+ connections.
QSFP+ modules and DACs are not included and must be purchased separately.
– One RS-232 serial port (mini-USB connector) that provides another means to
configure the switch module.
򐂰 Scalability and performance:
– 40 Gb Ethernet ports for extreme uplink bandwidth and performance.
– Fixed-speed external 10 Gb Ethernet ports to use the 10 Gb core infrastructure.
– Non-blocking architecture with wire-speed forwarding of traffic and aggregated
throughput of 1.28 Tbps on Ethernet ports.
– Media access control (MAC) address learning: Automatic update, and support for up to
128,000 MAC addresses.
– Up to 128 IP interfaces per switch.
– Static and LACP (IEEE 802.3ad) link aggregation, up to 220 Gb of total uplink
bandwidth per switch, up to 64 trunk groups, and up to 16 ports per group.
– Support for jumbo frames (up to 9,216 bytes).
– Broadcast/multicast storm control.
– IGMP snooping to limit flooding of IP multicast traffic.
– IGMP filtering to control multicast traffic for hosts that participate in multicast groups.
– Configurable traffic distribution schemes over trunk links that are based on
source/destination IP or MAC addresses, or both.
– Fast port forwarding and fast uplink convergence for rapid STP convergence.



򐂰 Availability and redundancy:
– Virtual Router Redundancy Protocol (VRRP) for Layer 3 router redundancy.
– IEEE 802.1D STP for providing L2 redundancy.
– IEEE 802.1s Multiple STP (MSTP) for topology optimization. Up to 32 STP instances
are supported by a single switch.
– IEEE 802.1w Rapid STP (RSTP) provides rapid STP convergence for critical
delay-sensitive traffic, such as voice or video.
– Per-VLAN Rapid STP (PVRST) enhancements.
– Layer 2 Trunk Failover to support active/standby configurations of network adapter
teaming on compute nodes.
– Hot Links provides basic link redundancy with fast recovery for network topologies that
require Spanning Tree to be turned off.
򐂰 VLAN support:
– Up to 1024 VLANs supported per switch, with VLAN numbers from 1 - 4095 (4095 is
used for management module’s connection only).
– 802.1Q VLAN tagging support on all ports.
– Private VLANs.
򐂰 Security:
– VLAN-based, MAC-based, and IP-based access control lists (ACLs).
– 802.1x port-based authentication.
– Multiple user IDs and passwords.
– User access control.
– Radius, TACACS+, and LDAP authentication and authorization.
򐂰 Quality of service (QoS):
– Support for IEEE 802.1p, IP ToS/DSCP, and ACL-based (MAC/IP source and
destination addresses, VLANs) traffic classification and processing.
– Traffic shaping and re-marking based on defined policies.
– Eight Weighted Round Robin (WRR) priority queues per port for processing qualified
traffic.
򐂰 IP v4 Layer 3 functions:
– Host management.
– IP forwarding.
– IP filtering with ACLs, with up to 896 ACLs supported.
– VRRP for router redundancy.
– Support for up to 128 static routes.
– Routing protocol support (RIP v1, RIP v2, OSPF v2, and BGP-4), for up to 2048 entries
in a routing table.
– Support for DHCP Relay.
– Support for IGMP snooping and IGMP relay.
– Support for Protocol Independent Multicast (PIM) in Sparse Mode (PIM-SM) and
Dense Mode (PIM-DM).

򐂰 IP v6 Layer 3 functions:
– IPv6 host management (except for a default switch management IP address).
– IPv6 forwarding.
– Up to 128 static routes.
– Support for OSPF v3 routing protocol.
– IPv6 filtering with ACLs.
򐂰 Virtualization:
– Virtual NICs (vNICs): Ethernet, iSCSI, or FCoE traffic is supported on vNICs.
– Unified fabric ports (UFPs): Ethernet or FCoE traffic is supported on UFPs.
– 802.1Qbg Edge Virtual Bridging (EVB) is an emerging IEEE standard for allowing
networks to become virtual machine (VM)-aware:
• Virtual Ethernet Bridging (VEB) and Virtual Ethernet Port Aggregator (VEPA) are
mechanisms for switching between VMs on the same hypervisor.
• Edge Control Protocol (ECP) is a transport protocol that operates between two
peers over an IEEE 802 LAN providing reliable and in-order delivery of upper layer
protocol data units.
• Virtual Station Interface (VSI) Discovery and Configuration Protocol (VDP) allows
centralized configuration of network policies that persists with the VM, independent
of its location.
• EVB Type-Length-Value (TLV) is used to discover and configure VEPA, ECP, and
VDP.
– VMready.
򐂰 Converged Enhanced Ethernet:
– Priority-Based Flow Control (PFC) (IEEE 802.1Qbb) extends 802.3x standard flow
control to allow the switch to pause traffic that is based on the 802.1p priority value in
each packet’s VLAN tag.
– Enhanced Transmission Selection (ETS) (IEEE 802.1Qaz) provides a method for
allocating link bandwidth that is based on the 802.1p priority value in each packet’s
VLAN tag.
– Data Center Bridging Capability Exchange Protocol (DCBX) (IEEE 802.1AB) allows
neighboring network devices to exchange information about their capabilities.
򐂰 Fibre Channel over Ethernet (FCoE):
– FC-BB5 FCoE specification compliant.
– Native FC Forwarder switch operations.
– End-to-end FCoE support (initiator to target).
– FCoE Initialization Protocol (FIP) support.
򐂰 Fibre Channel:
– Omni Ports support 4/8 Gb FC when FC SFP+ transceivers are installed in these ports.
– Full Fabric mode for end-to-end FCoE or NPV Gateway mode for external FC SAN
attachments (support for IBM B-type, Brocade, and Cisco MDS external SANs).
– Fabric services in Full Fabric mode:
• Name Server
• Registered State Change Notification (RSCN)
• Login services
• Zoning

Chapter 4. Chassis and infrastructure configuration 127
򐂰 Stacking:
– Hybrid stacking support (from two to six EN4093/EN4093R switches with two CN4093
switches)
– FCoE support
– vNIC support
– 802.1Qbg support
򐂰 Manageability
– Simple Network Management Protocol (SNMP V1, V2, and V3)
– HTTP browser GUI
– Telnet interface for CLI
– SSH
– Secure FTP (sFTP)
– Service Location Protocol (SLP)
– Serial interface for CLI
– Scriptable CLI
– Firmware image update (TFTP and FTP)
– Network Time Protocol (NTP) for switch clock synchronization
򐂰 Monitoring
– Switch LEDs for external port status and switch module status indication.
– Remote Monitoring (RMON) agent to collect statistics and proactively monitor switch
performance.
– Port mirroring for analyzing network traffic that passes through a switch.
– Change tracking and remote logging with syslog feature.
– Support for sFLOW agent for monitoring traffic in data networks (separate sFLOW
analyzer is required elsewhere).
– POST diagnostic tests.

The following features are not supported by IPv6:


򐂰 Default switch management IP address
򐂰 SNMP trap host destination IP address
򐂰 Bootstrap Protocol (BOOTP) and DHCP
򐂰 RADIUS, TACACS+, and LDAP
򐂰 QoS metering and re-marking ACLs for out-profile traffic
򐂰 VMware Virtual Center (vCenter) for VMready
򐂰 Routing Information Protocol (RIP)
򐂰 Internet Group Management Protocol (IGMP)
򐂰 Border Gateway Protocol (BGP)
򐂰 Virtual Router Redundancy Protocol (VRRP)
򐂰 sFLOW

Standards supported
The switches support the following standards:
򐂰 IEEE 802.1AB Data Center Bridging Capability Exchange Protocol (DCBX)
򐂰 IEEE 802.1D Spanning Tree Protocol (STP)
򐂰 IEEE 802.1p Class of Service (CoS) prioritization
򐂰 IEEE 802.1s Multiple STP (MSTP)
򐂰 IEEE 802.1Q Tagged VLAN (frame tagging on all ports when VLANs are enabled)
򐂰 IEEE 802.1Qbg Edge Virtual Bridging
򐂰 IEEE 802.1Qbb Priority-Based Flow Control (PFC)
򐂰 IEEE 802.1Qaz Enhanced Transmission Selection (ETS)
򐂰 IEEE 802.1x port-based authentication
򐂰 IEEE 802.1w Rapid STP (RSTP)
򐂰 IEEE 802.2 Logical Link Control
򐂰 IEEE 802.3 10BASE-T Ethernet
򐂰 IEEE 802.3ab 1000BASE-T copper twisted pair Gigabit Ethernet
򐂰 IEEE 802.3ad Link Aggregation Control Protocol
򐂰 IEEE 802.3ae 10GBASE-SR short range fiber optics 10 Gb Ethernet
򐂰 IEEE 802.3ae 10GBASE-LR long range fiber optics 10 Gb Ethernet
򐂰 IEEE 802.3ba 40GBASE-SR4 short range fiber optics 40 Gb Ethernet
򐂰 IEEE 802.3ba 40GBASE-CR4 copper 40 Gb Ethernet
򐂰 IEEE 802.3u 100BASE-TX Fast Ethernet
򐂰 IEEE 802.3x Full-duplex Flow Control
򐂰 IEEE 802.3z 1000BASE-SX short range fiber optics Gigabit Ethernet
򐂰 IEEE 802.3z 1000BASE-LX long range fiber optics Gigabit Ethernet
򐂰 SFF-8431 10GSFP+Cu SFP+ Direct Attach Cable
򐂰 FC-BB-5 FCoE

For more information, see the IBM Redbooks Product Guide IBM Flex System Fabric
CN4093 10Gb Converged Scalable Switch, TIPS0910, which is available at this website:
http://www.redbooks.ibm.com/abstracts/tips0910.html?Open

4.11.7 IBM Flex System Fabric EN4093 and EN4093R 10Gb Scalable Switch
The IBM Flex System EN4093 and IBM Flex System EN4093R 10Gb Scalable Switches are
10 Gb, 64-port, upgradeable midrange to high-end switch modules. They offer Layer 2/3
switching and are designed for installation within the I/O module bays of the Enterprise Chassis.

The latest EN4093R switch adds more capabilities to the EN4093: Virtual NIC (stacking),
Unified Fabric Port (stacking), Edge Virtual Bridging (stacking), and CEE/FCoE (stacking).
It is therefore ideal for clients that are looking to implement a converged infrastructure
with NAS, iSCSI, or FCoE.

For FCoE implementations, the EN4093R acts as a transit switch that forwards FCoE traffic
upstream to other devices, such as the Brocade VDX or Cisco Nexus 5548/5596, where
the FC traffic is broken out. For a detailed function comparison, see Table 4-27 on page 135.

Each switch contains the following ports:


򐂰 Up to 42 internal 10 Gb ports
򐂰 Up to 14 external 10 Gb uplink ports (enhanced small form-factor pluggable (SFP+)
connectors)
򐂰 Up to 2 external 40 Gb uplink ports (quad small form-factor pluggable (QSFP+)
connectors)

These switches are considered suitable for clients with the following requirements:
򐂰 Building a 10 Gb infrastructure
򐂰 Implementing a virtualized environment
򐂰 Requiring investment protection for 40 Gb uplinks
򐂰 Wanting to reduce total cost of ownership (TCO) and improve performance while
maintaining high levels of availability and security
򐂰 Wanting to avoid oversubscription (traffic from multiple internal ports that attempt to pass
through a lower quantity of external ports, leading to congestion and performance impact)
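The oversubscription definition in parentheses above can be made concrete with a short sketch (illustrative Python; the helper function is invented here, and the port counts are those of a fully enabled EN4093 as listed earlier in this section):

```python
def oversubscription_ratio(internal_ports, internal_gbps, uplink_gbps_list):
    """Ratio of aggregate internal (server-facing) bandwidth to aggregate uplink bandwidth."""
    internal = internal_ports * internal_gbps
    uplink = sum(uplink_gbps_list)
    return internal / uplink

# Fully enabled EN4093: 42x 10 Gb internal ports; 14x 10 Gb + 2x 40 Gb uplinks
ratio = oversubscription_ratio(42, 10, [10] * 14 + [40] * 2)
print(round(ratio, 2))  # 420 Gb / 220 Gb, which is approximately 1.91
```

A ratio above 1.0 means internal ports can offer more traffic than the uplinks can carry; trunking all 14 SFP+ ports and both QSFP+ ports keeps the ratio under 2:1.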
The EN4093 and EN4093R 10Gb Scalable Switches are shown in Figure 4-48.

Figure 4-48 IBM Flex System EN4093/4093R 10 Gb Scalable Switch

As listed in Table 4-25, the switch is initially licensed with 14 internal 10 Gb ports and
10 external 10 Gb uplink ports enabled. Further ports can be enabled: Upgrade 1 adds 14
internal ports and the two external 40 Gb uplink ports, and Upgrade 2 adds the remaining 14
internal ports and four more external SFP+ 10 Gb ports. Upgrade 1 must be applied before
Upgrade 2 can be applied.

Table 4-25 IBM Flex System Fabric EN4093 10Gb Scalable Switch part numbers and port upgrades

Part number  Feature codea  Product description                            Total ports that are enabled
                                                                           (Internal / 10 Gb uplink / 40 Gb uplink)

49Y4270      A0TB / 3593    IBM Flex System Fabric EN4093 10Gb             14 / 10 / 0
                            Scalable Switch:
                            򐂰 10x external 10 Gb uplinks
                            򐂰 14x internal 10 Gb ports

05Y3309      A3J6 / ESW7    IBM Flex System Fabric EN4093R 10Gb            14 / 10 / 0
                            Scalable Switch:
                            򐂰 10x external 10 Gb uplinks
                            򐂰 14x internal 10 Gb ports

49Y4798      A1EL / 3596    IBM Flex System Fabric EN4093 10Gb             28 / 10 / 2
                            Scalable Switch (Upgrade 1):
                            򐂰 Adds 2x external 40 Gb uplinks
                            򐂰 Adds 14x internal 10 Gb ports

88Y6037      A1EM / 3597    IBM Flex System Fabric EN4093 10Gb             42 / 14 / 2
                            Scalable Switch (Upgrade 2) (requires
                            Upgrade 1):
                            򐂰 Adds 4x external 10 Gb uplinks
                            򐂰 Adds 14x internal 10 Gb ports

a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config.
The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using
e-config.
The key components on the front of the switch are shown in Figure 4-49.

(Figure labels: 14x 10 Gb uplink ports, 10 standard and 4 enabled with Upgrade 2; 2x 40 Gb
uplink ports, enabled with Upgrade 1; switch release handles, one on either side; SFP+ ports;
QSFP+ ports; management ports; switch LEDs)
Figure 4-49 IBM Flex System EN4093/4093R 10 Gb Scalable Switch

Each upgrade license enables more internal ports. To make full use of those ports, each
compute node needs an appropriate I/O adapter installed:
򐂰 The base switch requires a two-port Ethernet adapter (one port of the adapter goes to
each of two switches)
򐂰 Upgrade 1 requires a four-port Ethernet adapter (two ports of the adapter to each switch)
򐂰 Upgrade 2 requires a six-port Ethernet adapter (three ports to each switch)

Consideration: Adding Upgrade 2 enables another 14 internal ports, for a total of 42
internal ports, with three ports that are connected to each of the 14 compute nodes in the
chassis. To make full use of all 42 internal ports, a six-port adapter is required, such as the
CN4058 Adapter.

Upgrade 2 still provides a benefit with a four-port adapter because this upgrade also
enables an extra four external 10 Gb uplinks.
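The adapter guidance above reduces to a small lookup: each upgrade level connects one more internal port to each node per switch, and a chassis pairs two switches. A minimal sketch (illustrative Python; the names are invented, not from any IBM tool):

```python
# Internal switch ports enabled per compute node, per switch, at each
# EN4093 upgrade level (base = 14 ports, one per node bay; each upgrade
# adds another 14, that is, one more per node bay).
UPGRADE_PORTS = {"base": 1, "upgrade1": 2, "upgrade2": 3}

def adapter_ports_needed(level, switches=2):
    """Minimum adapter ports per node to use all enabled internal ports
    across a pair of switches in bays 1 and 2."""
    return UPGRADE_PORTS[level] * switches

print(adapter_ports_needed("base"))      # 2: a two-port adapter suffices
print(adapter_ports_needed("upgrade2"))  # 6: needs a six-port adapter such as the CN4058
```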

The rear of the switch has 14 SFP+ module ports and two QSFP+ module ports. The QSFP+
ports can be used to provide two 40 Gb uplinks or eight 10 Gb ports. Use one of the
supported QSFP+ to 4x 10 Gb SFP+ cables that are listed in Table 4-26. This cable splits a
single 40 Gb QSFP+ port into four SFP+ 10 Gb ports.
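As a quick check of the breakout arithmetic (a sketch using the port counts of a fully upgraded switch; the variable names are illustrative), the aggregate uplink bandwidth is the same whether the QSFP+ ports run at native 40 Gb or are broken out into 4x 10 Gb each:

```python
# Fully upgraded EN4093/EN4093R uplinks: 14 SFP+ ports and 2 QSFP+ ports.
SFP_PLUS_PORTS, QSFP_PORTS = 14, 2

native = SFP_PLUS_PORTS * 10 + QSFP_PORTS * 40      # 140 + 80 Gb
breakout = (SFP_PLUS_PORTS + QSFP_PORTS * 4) * 10   # 22 ports x 10 Gb
print(native, breakout)  # both 220 Gb
```

The 220 Gb result matches the total uplink bandwidth quoted in the link-aggregation specifications later in this section.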

The switch is designed to function with nodes that contain a 1Gb LOM, such as the IBM Flex
System x220 Compute Node.

To manage the switch, a mini USB port and an Ethernet management port are provided.

The supported SFP+ and QSFP+ modules and cables for the switch are listed in Table 4-26.

Table 4-26 Supported SFP+ modules and cables


Part number Feature codea Description

Serial console cables

90Y9338 A2RR / A2RR IBM Flex System Management Serial Access Cable Kit

Small form-factor pluggable (SFP) transceivers - 1 GbE

81Y1618 3268 / EB29 IBM SFP RJ-45 Transceiver (does not support 10/100 Mbps)

81Y1622 3269 / EB2A IBM SFP SX Transceiver

90Y9424 A1PN / ECB8 IBM SFP LX Transceiver

SFP+ transceivers - 10 GbE

46C3447 5053 / None IBM SFP+ SR Transceiver

90Y9412 A1PM / ECB9 IBM SFP+ LR Transceiver

44W4408 4942 / 3282 10GBase-SR SFP+ (MMFiber) transceiver

SFP+ Direct Attach Copper (DAC) cables - 10 GbE

90Y9427 A1PH / ECB4 1m IBM Passive DAC SFP+

90Y9430 A1PJ / ECB5 3m IBM Passive DAC SFP+

90Y9433 A1PK / ECB6 5m IBM Passive DAC SFP+

QSFP+ transceiver and cables - 40 GbE

49Y7884 A1DR / EB27 IBM QSFP+ 40GBASE-SR Transceiver


(Requires either cable 90Y3519 or cable 90Y3521)

90Y3519 A1MM / EB2J 10m IBM MTP Fiberoptic Cable (requires transceiver 49Y7884)

90Y3521 A1MN / EB2K 30m IBM MTP Fiberoptic Cable (requires transceiver 49Y7884)

QSFP+ breakout cables - 40 GbE to 4x10 GbE

49Y7886 A1DL / EB24 1m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable

49Y7887 A1DM / EB25 3m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable

49Y7888 A1DN / EB26 5m 40 Gb QSFP+ to 4 x 10 Gb SFP+ Cable

QSFP+ Direct Attach Copper (DAC) cables - 40 GbE

49Y7890 A1DP / EB2B 1m QSFP+ to QSFP+ DAC

49Y7891 A1DQ / EB2H 3m QSFP+ to QSFP+ DAC


a. The first feature code listed is for configurations ordered through System x sales channels
(HVEC) using x-config. The second feature code is for configurations ordered through the IBM
Power Systems channel (AAS) using e-config.

The EN4093/4093R 10Gb Scalable Switch has the following features and specifications:
򐂰 Internal ports:
– A total of 42 internal full-duplex 10 Gigabit ports (14 ports are enabled by default).
Optional FoD licenses are required to activate the remaining 28 ports.
– Two internal full-duplex 1 GbE ports that are connected to the chassis management
module.
򐂰 External ports:
– A total of 14 ports for 1 Gb or 10 Gb Ethernet SFP+ transceivers (support for
1000BASE-SX, 1000BASE-LX, 1000BASE-T, 10GBASE-SR, or 10GBASE-LR) or
SFP+ DAC cables. A total of 10 ports are enabled by default. An optional FoD license is
required to activate the remaining four ports. SFP+ modules and DAC cables are not
included and must be purchased separately.
– Two ports for 40 Gb Ethernet QSFP+ transceivers or QSFP+ DACs (ports are disabled
by default; an optional FoD license is required to activate them). QSFP+ modules and
DAC cables are not included and must be purchased separately.
– One RS-232 serial port (mini-USB connector) that provides another means to
configure the switch module.
򐂰 Scalability and performance:
– 40 Gb Ethernet ports for extreme uplink bandwidth and performance.
– Fixed-speed external 10 Gb Ethernet ports to take advantage of 10 Gb core
infrastructure.
– Autosensing 10/100/1000 external Gigabit Ethernet ports for bandwidth optimization.
– Non-blocking architecture with wire-speed forwarding of traffic and aggregated
throughput of 1.28 Tbps.
– Media Access Control (MAC) address learning: Automatic update, support of up to
128,000 MAC addresses.
– Up to 128 IP interfaces per switch.
– Static and Link Aggregation Control Protocol (LACP) (IEEE 802.3ad) link aggregation:
Up to 220 Gb of total uplink bandwidth per switch, up to 64 trunk groups, up to 16 ports
per group.
– Support for jumbo frames (up to 9,216 bytes).
– Broadcast/multicast storm control.
– Internet Group Management Protocol (IGMP) snooping to limit flooding of IP multicast
traffic.
– IGMP filtering to control multicast traffic for hosts that participate in multicast groups.
– Configurable traffic distribution schemes over trunk links that are based on
source/destination IP or MAC addresses, or both.
– Fast port forwarding and fast uplink convergence for rapid STP convergence.
򐂰 Availability and redundancy:
– Virtual Router Redundancy Protocol (VRRP) for Layer 3 router redundancy.
– IEEE 802.1D Spanning Tree Protocol (STP) for providing L2 redundancy.
– IEEE 802.1s Multiple STP (MSTP) for topology optimization, up to 32 STP instances
are supported by single switch.
– IEEE 802.1w Rapid STP (RSTP) provides rapid STP convergence for critical
delay-sensitive traffic like voice or video.
– Rapid Per-VLAN STP (RPVST) enhancements.
– Layer 2 Trunk Failover to support active/standby configurations of network adapter
teaming on compute nodes.
– Hot Links provides basic link redundancy with fast recovery for network topologies that
require Spanning Tree to be turned off.
򐂰 Virtual local area network (VLAN) support:
– Up to 1024 VLANs supported per switch, with VLAN numbers that range from 1 to
4095 (4095 is used for the management module’s connection only).
– 802.1Q VLAN tagging support on all ports.
– Private VLANs.
򐂰 Security:
– VLAN-based, MAC-based, and IP-based access control lists (ACLs)
– 802.1x port-based authentication
– Multiple user IDs and passwords

– User access control
– RADIUS, TACACS+, and LDAP authentication and authorization
򐂰 Quality of service (QoS):
– Support for IEEE 802.1p, IP ToS/DSCP, and ACL-based (MAC/IP source and
destination addresses, VLANs) traffic classification and processing.
– Traffic shaping and remarking based on defined policies.
– Eight weighted round robin (WRR) priority queues per port for processing qualified
traffic.
򐂰 IP v4 Layer 3 functions:
– Host management
– IP forwarding
– IP filtering with ACLs, up to 896 ACLs supported
– VRRP for router redundancy
– Support for up to 128 static routes
– Routing protocol support (RIP v1, RIP v2, OSPF v2, BGP-4), up to 2048 entries in a
routing table
– Support for Dynamic Host Configuration Protocol (DHCP) Relay
– Support for IGMP snooping and IGMP relay
– Support for Protocol Independent Multicast (PIM) in Sparse Mode (PIM-SM) and
Dense Mode (PIM-DM)
– 802.1Qbg support
򐂰 IP v6 Layer 3 functions:
– IPv6 host management (except default switch management IP address)
– IPv6 forwarding
– Up to 128 static routes
– Support for OSPF v3 routing protocol
– IPv6 filtering with ACLs
򐂰 Virtualization:
– Virtual Fabric with virtual network interface card (vNIC)
– 802.1Qbg Edge Virtual Bridging (EVB)
– IBM VMready
򐂰 Converged Enhanced Ethernet:
– Priority-based Flow Control (PFC) (IEEE 802.1Qbb) extends 802.3x standard flow
control to allow the switch to pause traffic. This function is based on the 802.1p priority
value in each packet’s VLAN tag.
– Enhanced Transmission Selection (ETS) (IEEE 802.1Qaz) provides a method for
allocating link bandwidth that is based on the 802.1p priority value in each packet’s
VLAN tag.
– Data center Bridging Capability Exchange Protocol (DCBX) (IEEE 802.1AB) allows
neighboring network devices to exchange information about their capabilities.
򐂰 Manageability:
– Simple Network Management Protocol (SNMP V1, V2, and V3)
– HTTP browser GUI
– Telnet interface for CLI
– Secure Shell (SSH)
– Serial interface for CLI
– Scriptable CLI
– Firmware image update: Trivial File Transfer Protocol (TFTP) and File Transfer
Protocol (FTP)
– Network Time Protocol (NTP) for switch clock synchronization
򐂰 Monitoring:
– Switch LEDs for external port status and switch module status indication.
– Remote monitoring (RMON) agent to collect statistics and proactively monitor switch
performance.
– Port mirroring for analyzing network traffic that passes through the switch.
– Change tracking and remote logging with syslog feature.
– Support for sFLOW agent for monitoring traffic in data networks (separate sFLOW
analyzer is required elsewhere).
– POST diagnostic procedures.
򐂰 Stacking:
– Up to eight switches in a stack
– FCoE support (EN4093R only)
– vNIC support (support for FCoE on vNICs)

Table 4-27 compares the EN4093 to the EN4093R.

Table 4-27 EN4093 and EN4093R supported features


Feature EN4093 EN4093R

Layer 2 switching Yes Yes

Layer 3 switching Yes Yes

Switch Stacking Yes Yes

Virtual NIC (stand-alone) Yes Yes

Virtual NIC (stacking) Yes Yes

Unified Fabric Port (stand-alone) Yes Yes

Unified Fabric Port (stacking) No No

Edge virtual bridging (stand-alone) Yes Yes

Edge virtual bridging (stacking) Yes Yes

CEE/FCoE (stand-alone) Yes Yes

CEE/FCoE (stacking) No Yes

Both the EN4093 and EN4093R support vNIC + FCoE and 802.1Qbg + FCoE stand-alone
(without stacking). The EN4093R also supports vNIC + FCoE with stacking or 802.1Qbg + FCoE
with stacking.
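For scripted configuration checks, Table 4-27 can be encoded as a simple lookup (an illustrative sketch; the feature keys are invented here, and the values are transcribed from the table, not from any official API):

```python
# Feature support per switch model, transcribed from Table 4-27.
SUPPORT = {
    "layer2_switching":    {"EN4093": True,  "EN4093R": True},
    "layer3_switching":    {"EN4093": True,  "EN4093R": True},
    "stacking":            {"EN4093": True,  "EN4093R": True},
    "vnic_standalone":     {"EN4093": True,  "EN4093R": True},
    "vnic_stacking":       {"EN4093": True,  "EN4093R": True},
    "ufp_standalone":      {"EN4093": True,  "EN4093R": True},
    "ufp_stacking":        {"EN4093": False, "EN4093R": False},
    "evb_standalone":      {"EN4093": True,  "EN4093R": True},
    "evb_stacking":        {"EN4093": True,  "EN4093R": True},
    "cee_fcoe_standalone": {"EN4093": True,  "EN4093R": True},
    "cee_fcoe_stacking":   {"EN4093": False, "EN4093R": True},
}

def supports(model, feature):
    """True if the model supports the feature per Table 4-27."""
    return SUPPORT[feature][model]

print(supports("EN4093R", "cee_fcoe_stacking"))  # True: only the R model stacks FCoE
```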

For more information, see the IBM Redbooks Product Guide IBM Flex System Fabric EN4093
and EN4093R 10Gb Scalable Switches, TIPS0864, which is available at this website:
http://www.redbooks.ibm.com/abstracts/tips0864.html?Open

4.11.8 IBM Flex System Fabric SI4093 System Interconnect Module
The IBM Flex System Fabric SI4093 System Interconnect Module enables simplified
integration of IBM Flex System into your existing networking infrastructure.

The SI4093 System Interconnect Module requires no management for most data center
environments, which eliminates the need to configure each networking device or individual
ports, thus reducing the number of management points. It provides a low latency, loop-free
interface that does not rely upon spanning tree protocols, thus removing one of the greatest
deployment and management complexities of a traditional switch.

The SI4093 System Interconnect Module offers administrators a simplified deployment
experience while maintaining the performance of intra-chassis connectivity.

The SI4093 System Interconnect Module is shown in Figure 4-50.

Figure 4-50 IBM Flex System Fabric SI4093 System Interconnect Module

The SI4093 System Interconnect Module is initially licensed with 14 internal 10 Gb ports
and 10 external 10 Gb uplink ports enabled. Further ports can be enabled: Upgrade 1 adds
14 internal ports and the two external 40 Gb uplink ports, and Upgrade 2 adds 14 more
internal ports and four external SFP+ 10 Gb ports. Upgrade 1 must be applied before
Upgrade 2 can be applied.

The key components on the front of the interconnect module are shown in Figure 4-51.

(Figure labels: 14x 10 Gb uplink ports, 10 standard and 4 enabled with Upgrade 2; 2x 40 Gb
uplink ports, enabled with Upgrade 1; switch release handles, one on either side; SFP+ ports;
QSFP+ ports; management ports; switch LEDs)

Figure 4-51 IBM Flex System Fabric SI4093 System Interconnect Module
Table 4-28 shows the part numbers for ordering the switches and the upgrades.

Table 4-28 Ordering information


Description Part Feature code
number (x-config / e-config)

Interconnect module

IBM Flex System Fabric SI4093 System Interconnect Module 95Y3313 A45T / ESWA

Features on Demand upgrades

SI4093 System Interconnect Module (Upgrade 1) 95Y3318 A45U / ESW8

SI4093 System Interconnect Module (Upgrade 2) 95Y3320 A45V / ESW9

Important: SFP (small form-factor pluggable) and SFP+ transceivers or cables are
not included with the switch. They must be ordered separately. For more information, see
Table 4-30 on page 138.

The following base switch and upgrades are available:


򐂰 Part number 95Y3313 is for the physical device, and it includes 14 internal 10 Gb
ports enabled (one to each node bay) and 10 external 10 Gb ports enabled for
connectivity to an upstream network, plus external servers and storage. All external 10 Gb
ports are SFP+ based connections.
򐂰 Part number 95Y3318 (Upgrade 1) can be applied on the base interconnect module to
make full use of four-port adapters that are installed in each compute node. This upgrade
enables 14 additional internal ports, for a total of 28 ports. The upgrade also enables two
40 Gb uplinks with QSFP+ connectors. These QSFP+ ports can also be converted to four
10 Gb SFP+ DAC connections by using the appropriate fan-out cable. This upgrade
requires the base interconnect module.
򐂰 Part number 95Y3320 (Upgrade 2) can be applied on top of Upgrade 1 when you want
more uplink bandwidth on the interconnect module or more internal bandwidth
to the compute nodes with adapters capable of supporting six ports (such as the
CN4058). The upgrade enables the remaining four external 10 Gb uplinks with SFP+
connectors, plus 14 additional internal 10 Gb ports, for a total of 42 ports (three to each
compute node).

Table 4-29 lists the supported port combinations on the interconnect module and the required
upgrades.

Table 4-29 Supported port combinations

                                              Quantity required
Supported port combinations                   Base switch,   Upgrade 1,   Upgrade 2,
                                              95Y3313        95Y3318      95Y3320

14x internal 10 GbE                           1              0            0
10x external 10 GbE

28x internal 10 GbE                           1              1            0
10x external 10 GbE
2x external 40 GbE

42x internal 10 GbEa                          1              1            1
14x external 10 GbE
2x external 40 GbE

a. This configuration uses six of the eight ports on the CN4058 adapter that are available for IBM Power Systems
compute nodes.
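The port combinations in Table 4-29 follow a simple accumulation rule, sketched here (illustrative Python; the function and parameter names are invented for this example):

```python
def enabled_ports(upgrade1=False, upgrade2=False):
    """Return (internal, external 10 GbE, external 40 GbE) port counts
    for an SI4093 with the given FoD upgrades installed, per Table 4-29."""
    if upgrade2 and not upgrade1:
        raise ValueError("Upgrade 2 requires Upgrade 1")
    internal = 14 + (14 if upgrade1 else 0) + (14 if upgrade2 else 0)
    external_10g = 10 + (4 if upgrade2 else 0)
    external_40g = 2 if upgrade1 else 0
    return internal, external_10g, external_40g

print(enabled_ports())            # (14, 10, 0)
print(enabled_ports(True))        # (28, 10, 2)
print(enabled_ports(True, True))  # (42, 14, 2)
```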

Supported cables and transceivers


Table 4-30 lists the supported cables and transceivers.

Table 4-30 Supported transceivers and direct-attach cables


Description Part Feature code
number (x-config / e-config)

Serial console cables

IBM Flex System Management Serial Access Cable Kit 90Y9338 A2RR / None

SFP transceivers - 1 GbE

IBM SFP RJ-45 Transceiver (does not support 10/100 Mbps) 81Y1618 3268 / EB29

IBM SFP SX Transceiver 81Y1622 3269 / EB2A

IBM SFP LX Transceiver 90Y9424 A1PN / ECB8

SFP+ transceivers - 10 GbE

IBM SFP+ SR Transceiver 46C3447 5053 / EB28

IBM SFP+ LR Transceiver 90Y9412 A1PM / ECB9

10GBase-SR SFP+ (MMFiber) transceiver 44W4408 4942 / 3282

SFP+ direct-attach cables - 10 GbE

1m IBM Passive DAC SFP+ 90Y9427 A1PH / ECB4

3m IBM Passive DAC SFP+ 90Y9430 A1PJ / ECB5

5m IBM Passive DAC SFP+ 90Y9433 A1PK / ECB6

QSFP+ transceiver and cables - 40 GbE

IBM QSFP+ 40GBASE-SR Transceiver 49Y7884 A1DR / EB27


(Requires either cable 90Y3519 or cable 90Y3521)

10m IBM MTP Fiber Optical Cable (requires transceiver 49Y7884) 90Y3519 A1MM / EB2J

30m IBM MTP Fiber Optical Cable (requires transceiver 49Y7884) 90Y3521 A1MN / EB2K

QSFP+ breakout cables - 40 GbE to 4x10 GbE

1m 40Gb QSFP+ to 4 x 10Gb SFP+ Cable 49Y7886 A1DL / EB24

3m 40Gb QSFP+ to 4 x 10Gb SFP+ Cable 49Y7887 A1DM / EB25

5m 40Gb QSFP+ to 4 x 10Gb SFP+ Cable 49Y7888 A1DN / EB26

QSFP+ direct-attach cables - 40 GbE

1m QSFP+ to QSFP+ DAC 49Y7890 A1DP / EB2B

3m QSFP+ to QSFP+ DAC 49Y7891 A1DQ / EB2H

With the flexibility of the interconnect module, you can make full use of the technologies that
are required for the following environments:
򐂰 For 1 GbE links, you can use SFP transceivers plus RJ-45 cables or LC-to-LC fiber cables,
depending on the transceiver.
򐂰 For 10 GbE, you can use direct-attached cables (DAC, also known as Twinax), which
come in lengths 1 - 5 m. These DACs are a cost-effective and low-power alternative to
transceivers, and are ideal for all 10 Gb Ethernet connectivity within the rack, or even
connecting to an adjacent rack. For longer distances, there is a choice of SFP+
transceivers (SR or LR) plus LC-to-LC fiber optic cables.
򐂰 For 40 Gb links, you can use QSFP+ to QSFP+ cables up to 3 m, or QSFP+ transceivers
and MTP cables for longer distances. You also can break out the 40 Gb ports into four 10
GbE SFP+ DAC connections by using break-out cables.
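The 10 GbE media guidance above can be expressed as a small chooser (an illustrative sketch; the 300 m cutoff is a typical OM3 multimode reach for SR optics and is an assumption, not a value from this document):

```python
def pick_10gbe_media(distance_m):
    """Suggest 10 GbE cabling for a given link length, following the
    in-rack DAC / longer-reach transceiver guidance above."""
    if distance_m <= 5:
        # Passive DACs ship in 1 - 5 m lengths: cheapest, lowest power.
        return "SFP+ passive DAC"
    if distance_m <= 300:  # assumed OM3 multimode reach for SR optics
        return "SFP+ SR transceiver + LC-to-LC fiber"
    return "SFP+ LR transceiver + LC-to-LC fiber"

print(pick_10gbe_media(3))    # in-rack or adjacent-rack link
print(pick_10gbe_media(100))  # data-center run on multimode fiber
```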

Features and specifications


The SI4093 System Interconnect Module includes the following features and specifications:
򐂰 Modes of operations:
– Transparent (or VLAN-agnostic) mode
In VLAN-agnostic mode (default configuration), the SI4093 transparently forwards
VLAN tagged frames without filtering on the customer VLAN tag, which provides an
end host view to the upstream network. The interconnect module provides traffic
consolidation in the chassis to minimize TOR port usage, and it enables
server-to-server communication for optimum performance (for example, vMotion). It
can be connected to the FCoE transit switch or FCoE gateway (FC Forwarder) device.
– Local Domain (or VLAN-aware) mode
In VLAN-aware mode (optional configuration), the SI4093 provides more security for
multi-tenant environments by extending client VLAN traffic isolation to the interconnect
module and its uplinks. VLAN-based access control lists (ACLs) can be configured on
the SI4093. When FCoE is used, the SI4093 operates as an FCoE transit switch, and it
must be connected to the FCF device.

򐂰 Internal ports:
– A total of 42 internal full-duplex 10 Gigabit ports (14 ports are enabled by default;
optional FoD licenses are required to activate the remaining 28 ports).
– Two internal full-duplex 1 GbE ports are connected to the chassis management
module.
򐂰 External ports:
– A total of 14 ports for 1 Gb or 10 Gb Ethernet SFP+ transceivers (support for
1000BASE-SX, 1000BASE-LX, 1000BASE-T, 10GBASE-SR, or 10GBASE-LR) or
SFP+ copper direct-attach cables (DAC). A total of 10 ports are enabled by default. An
optional FoD license is required to activate the remaining four ports. SFP+ modules
and DACs are not included and must be purchased separately.
– Two ports for 40 Gb Ethernet QSFP+ transceivers or QSFP+ DACs. (Ports are disabled
by default. An optional FoD license is required to activate them.) QSFP+ modules and
DACs are not included and must be purchased separately.
– One RS-232 serial port (mini-USB connector) that provides an additional means to
configure the switch module.
򐂰 Scalability and performance:
– 40 Gb Ethernet ports for extreme uplink bandwidth and performance.
– External 10 Gb Ethernet ports to use 10 Gb upstream infrastructure.
– Non-blocking architecture with wire-speed forwarding of traffic and aggregated
throughput of 1.28 Tbps.
– Media access control (MAC) address learning: automatic update, support for up to
128,000 MAC addresses.
– Static and LACP (IEEE 802.3ad) link aggregation, up to 220 Gb of total uplink
bandwidth per interconnect module.
– Support for jumbo frames (up to 9,216 bytes).
򐂰 Availability and redundancy:
– Layer 2 Trunk Failover to support active/standby configurations of network adapter
teaming on compute nodes.
– Built in link redundancy with loop prevention without a need for Spanning Tree protocol.
򐂰 VLAN support
– Up to 32 VLANs supported per interconnect module SPAR partition, with VLAN
numbers 1 - 4095 (4095 is used for management module’s connection only).
– 802.1Q VLAN tagging support on all ports.
򐂰 Security:
– VLAN-based access control lists (ACLs) (VLAN-aware mode).
– Multiple user IDs and passwords.
– User access control.
– RADIUS, TACACS+, and LDAP authentication and authorization.
򐂰 Quality of service (QoS)
Support for IEEE 802.1p traffic classification and processing.
򐂰 Virtualization
– Switch Independent Virtual NIC (vNIC2)
Ethernet, iSCSI, or FCoE traffic is supported on vNICs
– Switch partitioning (SPAR):
• SPAR forms separate virtual switching contexts by segmenting the data plane of
the switch. Data plane traffic is not shared between SPARs on the same switch.
• SPAR operates as a Layer 2 broadcast network. Hosts on the same VLAN attached
to a SPAR can communicate with each other and with the upstream switch. Hosts
on the same VLAN but attached to different SPARs communicate through the
upstream switch.
• SPAR is implemented as a dedicated VLAN with a set of internal server ports and a
single uplink port or link aggregation (LAG). Multiple uplink ports or LAGs are not
allowed in SPAR. A port can be a member of only one SPAR.
򐂰 Converged Enhanced Ethernet:
– Priority-Based Flow Control (PFC) (IEEE 802.1Qbb) extends 802.3x standard flow
control to allow the switch to pause traffic based on the 802.1p priority value in each
packet’s VLAN tag.
– Enhanced Transmission Selection (ETS) (IEEE 802.1Qaz) provides a method for
allocating link bandwidth based on the 802.1p priority value in each packet’s VLAN tag.
– Data Center Bridging Capability Exchange Protocol (DCBX) (IEEE 802.1AB) allows
neighboring network devices to exchange information about their capabilities.
򐂰 Fibre Channel over Ethernet (FCoE):
– FC-BB5 FCoE specification compliant.
– FCoE transit switch operations.
– FCoE Initialization Protocol (FIP) support.
򐂰 Manageability:
– IPv4 and IPv6 host management.
– Simple Network Management Protocol (SNMP V1, V2, and V3).
– Industry standard command-line interface (IS-CLI) through Telnet, SSH, and serial
port.
– Secure FTP (sFTP).
– Service Location Protocol (SLP).
– Firmware image update (TFTP and FTP/sFTP).
– Network Time Protocol (NTP) for clock synchronization.
– IBM System Networking Switch Center (SNSC) support.
򐂰 Monitoring:
– Switch LEDs for external port status and switch module status indication.
– Change tracking and remote logging with syslog feature.
– POST diagnostic tests.

Supported standards
The switches support the following standards:
򐂰 IEEE 802.1AB Data Center Bridging Capability Exchange Protocol (DCBX)
򐂰 IEEE 802.1p Class of Service (CoS) prioritization
򐂰 IEEE 802.1Q Tagged VLAN (frame tagging on all ports when VLANs are enabled)
򐂰 IEEE 802.1Qbb Priority-Based Flow Control (PFC)
򐂰 IEEE 802.1Qaz Enhanced Transmission Selection (ETS)
򐂰 IEEE 802.3 10BASE-T Ethernet
򐂰 IEEE 802.3ab 1000BASE-T copper twisted pair Gigabit Ethernet

򐂰 IEEE 802.3ad Link Aggregation Control Protocol
򐂰 IEEE 802.3ae 10GBASE-SR short range fiber optics 10 Gb Ethernet
򐂰 IEEE 802.3ae 10GBASE-LR long range fiber optics 10 Gb Ethernet
򐂰 IEEE 802.3ba 40GBASE-SR4 short range fiber optics 40 Gb Ethernet
򐂰 IEEE 802.3ba 40GBASE-CR4 copper 40 Gb Ethernet
򐂰 IEEE 802.3u 100BASE-TX Fast Ethernet
򐂰 IEEE 802.3x Full-duplex Flow Control
򐂰 IEEE 802.3z 1000BASE-SX short range fiber optics Gigabit Ethernet
򐂰 IEEE 802.3z 1000BASE-LX long range fiber optics Gigabit Ethernet
򐂰 SFF-8431 10GSFP+Cu SFP+ Direct Attach Cable

For more information, see the IBM Redbooks Product Guide IBM Flex System Fabric SI4093
System Interconnect Module, which is available at this website:
http://www.redbooks.ibm.com/abstracts/tips0864.html?Open

4.11.9 IBM Flex System EN4091 10Gb Ethernet Pass-thru Module


The EN4091 10Gb Ethernet Pass-thru Module offers a one-for-one connection between a
single node bay and an I/O module uplink. It has no management interface and can support
both 1 Gb and 10 Gb dual-port adapters that are installed in the compute nodes. If quad-port
adapters are installed in the compute nodes, only the first two ports have access to the
pass-through module’s ports.

The necessary 1 GbE or 10 GbE transceiver or cable (SFP, SFP+, or DAC) must also be
installed in the external ports of the pass-through module. The chosen transceiver or cable
must match the speed (1 Gb or 10 Gb) and medium (fiber optic or copper) of the adapter
ports on the compute nodes.
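Because the pass-through module does no switching, its wiring model reduces to a fixed one-for-one mapping from node bays to external ports. The following sketch (Python; the port numbering is illustrative only, not taken from the product documentation) makes that model explicit:

```python
# Sketch of the EN4091 pass-thru wiring model: each of the 14 node bays
# maps one-for-one to an external port; there is no switching between bays.
# Port numbering here is illustrative, not from the product manual.
NUM_BAYS = 14

def passthru_map():
    """Return the fixed bay -> external-port mapping of a pass-thru module."""
    return {bay: bay for bay in range(1, NUM_BAYS + 1)}

mapping = passthru_map()
assert mapping[3] == 3                          # bay 3 always exits on port 3
assert len(set(mapping.values())) == NUM_BAYS   # no two bays share an uplink
```

This is why a chassis needs one pass-through module per adapter port: the module itself cannot fan traffic out or aggregate it.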

The IBM Flex System EN4091 10Gb Ethernet Pass-thru Module is shown in Figure 4-52.

Figure 4-52 IBM Flex System EN4091 10Gb Ethernet Pass-thru Module

The ordering part number and feature codes are listed in Table 4-31.

Table 4-31 EN4091 10Gb Ethernet Pass-thru Module part number and feature codes
Part number Feature codea Product Name

88Y6043 A1QV / 3700 IBM Flex System EN4091 10Gb Ethernet Pass-thru
a. The first feature code listed is for configurations ordered through System x sales channels
(HVEC) using x-config. The second feature code is for configurations ordered through the IBM
Power Systems channel (AAS) using e-config.

142 IBM PureFlex System and IBM Flex System Products and Technology
The EN4091 10Gb Ethernet Pass-thru Module includes the following specifications:
򐂰 Internal ports
14 internal full-duplex Ethernet ports that can operate at 1 Gb or 10 Gb speeds
򐂰 External ports
Fourteen ports for 1 Gb or 10 Gb Ethernet SFP+ transceivers (support for 1000BASE-SX,
1000BASE-LX, 1000BASE-T, 10GBASE-SR, or 10GBASE-LR) or SFP+ DAC. SFP+
modules and DAC cables are not included, and must be purchased separately.
򐂰 Unmanaged device that has no internal Ethernet management port. However, it can
provide its VPD to the secure management network in the CMM.
򐂰 Supports 10 Gb Ethernet signaling for CEE, FCoE, and other Ethernet-based transport
protocols.
򐂰 Allows direct connection from the 10 Gb Ethernet adapters that are installed in compute
nodes in a chassis to an externally located Top of Rack switch or other external device.

Consideration: The EN4091 10Gb Ethernet Pass-thru Module has only 14 internal ports.
As a result, only two ports on each compute node are enabled, one for each of two
pass-through modules that are installed in the chassis. If four-port adapters are installed in
the compute nodes, ports 3 and 4 on those adapters are not enabled.

There are three standard I/O module status LEDs, as shown in Figure 4-42 on page 112.
Each port has link and activity LEDs.

Table 4-32 lists the supported transceivers and DAC cables.

Table 4-32 Supported transceivers and DAC cables for the EN4091 10Gb Ethernet Pass-thru Module
Part number Feature codesa Description

SFP+ transceivers - 10 GbE

44W4408 4942 / 3282 10 GbE 850 nm Fibre Channel SFP+ Transceiver (SR)

46C3447 5053 / None IBM SFP+ SR Transceiver

90Y9412 A1PM / None IBM SFP+ LR Transceiver

SFP transceivers - 1 GbE

81Y1622 3269 / EB2A IBM SFP SX Transceiver

81Y1618 3268 / EB29 IBM SFP RJ45 Transceiver

90Y9424 A1PN / None IBM SFP LX Transceiver

Direct-attach copper (DAC) cables

81Y8295 A18M / EN01 1m 10GE Twinax Act Copper SFP+ DAC (active)

81Y8296 A18N / EN02 3m 10GE Twinax Act Copper SFP+ DAC (active)

81Y8297 A18P / EN03 5m 10GE Twinax Act Copper SFP+ DAC (active)

95Y0323 A25A / None 1m IBM Active DAC SFP+ Cable

95Y0326 A25B / None 3m IBM Active DAC SFP+ Cable

95Y0329 A25C / None 5m IBM Active DAC SFP+ Cable

a. The first feature code listed is for configurations ordered through System x sales channels
(HVEC) using x-config. The second feature code is for configurations ordered through the IBM
Power Systems channel (AAS) using e-config.

For more information, see the IBM Redbooks Product Guide IBM Flex System EN4091 10Gb
Ethernet Pass-thru Module, TIPS0865, which is available at this website:
http://www.redbooks.ibm.com/abstracts/tips0865.html?Open

4.11.10 IBM Flex System EN2092 1Gb Ethernet Scalable Switch


The EN2092 1Gb Ethernet Switch provides support for L2/L3 switching and routing. The
switch includes the following ports:
򐂰 Up to 28 internal 1 Gb ports
򐂰 Up to 20 external 1 Gb ports (RJ45 connectors)
򐂰 Up to 4 external 10 Gb uplink ports (SFP+ connectors)

The switch is shown in Figure 4-53.

Figure 4-53 IBM Flex System EN2092 1Gb Ethernet Scalable Switch

As listed in Table 4-33, the switch comes standard with 14 internal and 10 external Gigabit
Ethernet ports enabled. Further ports can be enabled, including the four external 10 Gb
uplink ports. Upgrade 1 and the 10 Gb Uplinks upgrade can be applied in either order.

Table 4-33 IBM Flex System EN2092 1Gb Ethernet Scalable Switch part numbers and port upgrades
Part number Feature codea Product description

49Y4294 A0TF / 3598 IBM Flex System EN2092 1Gb Ethernet Scalable Switch:
򐂰 14 internal 1 Gb ports
򐂰 10 external 1 Gb ports

90Y3562 A1QW / 3594 IBM Flex System EN2092 1Gb Ethernet Scalable Switch
(Upgrade 1):
򐂰 Adds 14 internal 1 Gb ports
򐂰 Adds 10 external 1 Gb ports

49Y4298 A1EN / 3599 IBM Flex System EN2092 1Gb Ethernet Scalable Switch (10 Gb
Uplinks), which adds 4 external 10 Gb uplinks
a. The first feature code listed is for configurations ordered through System x sales channels
(HVEC) using x-config. The second feature code is for configurations ordered through the IBM
Power Systems channel (AAS) using e-config.

The key components on the front of the switch are shown in Figure 4-54.

Figure 4-54 IBM Flex System EN2092 1Gb Ethernet Scalable Switch. Callouts: 20x external
1 Gb RJ45 ports (10 standard, 10 with Upgrade 1); 4x 10 Gb SFP+ uplink ports (enabled with
the Uplinks upgrade); management port; switch LEDs

The standard switch has 14 internal ports, and the Upgrade 1 license enables 14 more
internal ports. To make full use of those ports, each compute node needs the following
appropriate I/O adapter installed:
򐂰 The base switch requires a two-port Ethernet adapter that is installed in each compute
node (one port of the adapter goes to each of two switches).
򐂰 Upgrade 1 requires a four-port Ethernet adapter that is installed in each compute node
(two ports of the adapter to each switch).

The standard switch has 10 external ports enabled. More external ports are enabled with the
following license upgrades:
򐂰 Upgrade 1 enables 10 more ports for a total of 20 ports.
򐂰 The Uplinks upgrade enables the four 10 Gb SFP+ ports.

These upgrades can be installed in either order.
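The licensing scheme above is simple enough to express as a short sketch (Python; the counts come from Table 4-33, and the function name is our own, not an IBM tool):

```python
# Sketch of EN2092 port licensing: which ports are enabled for a given set
# of Features on Demand upgrades (counts from Table 4-33). The upgrades
# are independent and can be applied in either order.
BASE_PORTS = {"internal": 14, "external_1g": 10, "uplink_10g": 0}

def enabled_ports(upgrade1=False, uplinks=False):
    """Return the enabled port counts for a given set of license upgrades."""
    ports = dict(BASE_PORTS)
    if upgrade1:        # 90Y3562 adds 14 internal and 10 external 1 Gb ports
        ports["internal"] += 14
        ports["external_1g"] += 10
    if uplinks:         # 49Y4298 enables the four external 10 Gb SFP+ ports
        ports["uplink_10g"] = 4
    return ports

assert enabled_ports() == {"internal": 14, "external_1g": 10, "uplink_10g": 0}
assert enabled_ports(upgrade1=True, uplinks=True)["internal"] == 28
```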

This switch is considered ideal for clients with the following characteristics:
򐂰 Still use 1 Gb as their networking infrastructure.
򐂰 Are deploying virtualization and require multiple 1 Gb ports.
򐂰 Want investment protection through 10 Gb uplinks.
򐂰 Want to reduce TCO and improve performance, while maintaining high levels of
availability and security.
򐂰 Want to avoid oversubscription (multiple internal ports that attempt to pass through a
lower quantity of external ports, which leads to congestion and performance impact).
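The oversubscription point can be made concrete with a quick calculation (a sketch using the port counts of a fully upgraded EN2092; the function is illustrative, not an IBM sizing tool):

```python
# Oversubscription check: compare aggregate internal bandwidth against
# aggregate external (uplink) bandwidth. A ratio above 1 means internal
# ports can offer more traffic than the uplinks can carry.
def oversubscription_ratio(internal_gbps_total, external_gbps_total):
    """Return internal-to-external bandwidth ratio."""
    return internal_gbps_total / external_gbps_total

# Fully upgraded EN2092: 28 x 1 Gb internal vs 20 x 1 Gb + 4 x 10 Gb external
ratio = oversubscription_ratio(28 * 1, 20 * 1 + 4 * 10)
assert ratio < 1  # 28 Gb in, 60 Gb out: the switch is not oversubscribed
```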

The switch has three switch status LEDs (see Figure 4-42 on page 112) and one mini-USB
serial port connector for console management.

External ports 1 - 20 are RJ45, and the 4 x 10 Gb uplink ports are SFP+. The switch supports
either SFP+ modules or DAC cables. The supported SFP+ modules and DAC cables for the
switch are listed in Table 4-34.

Table 4-34 IBM Flex System EN2092 1Gb Ethernet Scalable Switch SFP+ and DAC cables
Part number Feature codea Description

SFP transceivers

81Y1622 3269 / EB2A IBM SFP SX Transceiver

81Y1618 3268 / EB29 IBM SFP RJ45 Transceiver

90Y9424 A1PN / None IBM SFP LX Transceiver

SFP+ transceivers

44W4408 4942 / 3282 10 GbE 850 nm Fibre Channel SFP+ Transceiver (SR)

46C3447 5053 / None IBM SFP+ SR Transceiver

90Y9412 A1PM / None IBM SFP+ LR Transceiver

DAC cables

90Y9427 A1PH / None 1m IBM Passive DAC SFP+

90Y9430 A1PJ / ECB5 3m IBM Passive DAC SFP+

90Y9433 A1PK / None 5m IBM Passive DAC SFP+


a. The first feature code listed is for configurations ordered through System x sales channels
(HVEC) using x-config. The second feature code is for configurations ordered through the IBM
Power Systems channel (AAS) using e-config.

The EN2092 1 Gb Ethernet Scalable Switch includes the following features and
specifications:
򐂰 Internal ports:
– A total of 28 internal full-duplex Gigabit ports; 14 ports are enabled by default. An
optional Features on Demand (FoD) license is required to activate another 14 ports.
– Two internal full-duplex 1 GbE ports that are connected to the chassis management
module.
򐂰 External ports:
– Four ports for 1 Gb or 10 Gb Ethernet SFP+ transceivers (support for 1000BASE-SX,
1000BASE-LX, 1000BASE-T, 10GBASE-SR, or 10GBASE-LR) or SFP+ DAC. These
ports are disabled by default. An optional FoD license is required to activate them.
SFP+ modules are not included and must be purchased separately.
– A total of 20 external 10/100/1000 Mb (1000BASE-T) Gigabit Ethernet ports with RJ-45
connectors; 10 ports are enabled by default. An optional FoD license is required to
activate another 10 ports.
– One RS-232 serial port (mini-USB connector) that provides another means to
configure the switch module.
򐂰 Scalability and performance:
– Fixed-speed external 10 Gb Ethernet ports for maximum uplink bandwidth
– Autosensing 10/100/1000 external Gigabit Ethernet ports for bandwidth optimization
– Non-blocking architecture with wire-speed forwarding of traffic
– MAC address learning: Automatic update, support of up to 32,000 MAC addresses
– Up to 128 IP interfaces per switch
– Static and LACP (IEEE 802.3ad) link aggregation, up to 60 Gb of total uplink bandwidth
per switch, up to 64 trunk groups, up to 16 ports per group
– Support for jumbo frames (up to 9,216 bytes)

– Broadcast/multicast storm control
– IGMP snooping to limit flooding of IP multicast traffic
– IGMP filtering to control multicast traffic for hosts that participate in multicast groups
– Configurable traffic distribution schemes over trunk links that are based on
source/destination IP or MAC addresses, or both
– Fast port forwarding and fast uplink convergence for rapid STP convergence
򐂰 Availability and redundancy:
– VRRP for Layer 3 router redundancy
– IEEE 802.1D STP for providing L2 redundancy
– IEEE 802.1s MSTP for topology optimization, up to 32 STP instances that are
supported by a single switch
– IEEE 802.1w RSTP (provides rapid STP convergence for critical delay-sensitive traffic
like voice or video)
– RPVST enhancements
– Layer 2 Trunk Failover to support active/standby configurations of network adapter
teaming on compute nodes
– Hot Links provides basic link redundancy with fast recovery for network topologies that
require Spanning Tree to be turned off
򐂰 VLAN support:
– Up to 1024 VLANs supported per switch, with VLAN numbers that range from 1 to
4095 (4095 is used for the management module’s connection only)
– 802.1Q VLAN tagging support on all ports
– Private VLANs
򐂰 Security:
– VLAN-based, MAC-based, and IP-based ACLs
– 802.1x port-based authentication
– Multiple user IDs and passwords
– User access control
– Radius, TACACS+, and Lightweight Directory Access Protocol (LDAP) authentication
and authorization
򐂰 QoS:
– Support for IEEE 802.1p, IP ToS/DSCP, and ACL-based (MAC/IP source and
destination addresses, VLANs) traffic classification and processing
– Traffic shaping and remarking based on defined policies
– Eight WRR priority queues per port for processing qualified traffic
򐂰 IP v4 Layer 3 functions:
– Host management
– IP forwarding
– IP filtering with ACLs, up to 896 ACLs supported
– VRRP for router redundancy
– Support for up to 128 static routes

– Routing protocol support (RIP v1, RIP v2, OSPF v2, BGP-4), up to 2048 entries in a
routing table
– Support for DHCP Relay
– Support for IGMP snooping and IGMP relay
– Support for Protocol Independent Multicast (PIM) in Sparse Mode (PIM-SM) and
Dense Mode (PIM-DM).
򐂰 IP v6 Layer 3 functions:
– IPv6 host management (except default switch management IP address)
– IPv6 forwarding
– Up to 128 static routes
– Support for OSPF v3 routing protocol
– IPv6 filtering with ACLs
򐂰 Virtualization: VMready
򐂰 Manageability:
– Simple Network Management Protocol (SNMP V1, V2, and V3)
– HTTP browser GUI
– Telnet interface for CLI
– SSH
– Serial interface for CLI
– Scriptable CLI
– Firmware image update (TFTP and FTP)
– NTP for switch clock synchronization
򐂰 Monitoring:
– Switch LEDs for external port status and switch module status indication
– RMON agent to collect statistics and proactively monitor switch performance
– Port mirroring for analyzing network traffic that passes through the switch
– Change tracking and remote logging with the syslog feature
– Support for the sFLOW agent for monitoring traffic in data networks (separate sFLOW
analyzer is required elsewhere)
– POST diagnostic functions

For more information, see the IBM Redbooks Product Guide IBM Flex System EN2092 1Gb
Ethernet Scalable Switch, TIPS0861, which is available at this website:
http://www.redbooks.ibm.com/abstracts/tips0861.html?Open

4.11.11 IBM Flex System FC5022 16Gb SAN Scalable Switch


The IBM Flex System FC5022 16Gb SAN Scalable Switch is a high-density, 48-port 16 Gbps
Fibre Channel switch that is used in the Enterprise Chassis. The switch provides 28 internal
ports to compute nodes by way of the midplane, and 20 external SFP+ ports. These storage
area network (SAN) switch modules deliver an embedded option for IBM Flex System users
who deploy storage area networks in their enterprise. They offer end-to-end 16 Gb and 8 Gb
connectivity.

The N_Port Virtualization mode streamlines the infrastructure by reducing the number of
domains to manage. It allows you to add or move servers without impact to the SAN.
Monitoring is simplified by using an integrated management appliance. Clients who use an
end-to-end Brocade SAN can make use of the Brocade management tools.

Figure 4-55 shows the IBM Flex System FC5022 16Gb SAN Scalable Switch.

Figure 4-55 IBM Flex System FC5022 16Gb SAN Scalable Switch

Three versions are available, as listed in Table 4-35: 12-port and 24-port switch modules and
a 24-port switch with the Enterprise Switch Bundle (ESB) software. The port count can be
applied to internal or external ports by using a feature that is called Dynamic Ports on
Demand (DPOD). Port counts can be increased with license upgrades, as described in “Port
and feature upgrades” on page 150.

Table 4-35 IBM Flex System FC5022 16Gb SAN Scalable Switch part numbers

| Part number | Feature codesa | Description | Ports enabled by default |
|---|---|---|---|
| 88Y6374 | A1EH / 3770 | IBM Flex System FC5022 16Gb SAN Scalable Switch | 12 |
| 00Y3324 | A3DP / ESW5 | IBM Flex System FC5022 24-port 16Gb SAN Scalable Switch | 24 |
| 90Y9356 | A1EJ / 3771 | IBM Flex System FC5022 24-port 16Gb ESB SAN Scalable Switch | 24 |
a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config.
The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using
e-config.

Table 4-36 provides a feature comparison between the FC5022 switch models.

Table 4-36 Feature comparison by model

| Feature | FC5022 16Gb 24-port ESB Switch (90Y9356) | FC5022 24-port 16Gb SAN Scalable Switch (00Y3324) | FC5022 16Gb SAN Scalable Switch (88Y6374) |
|---|---|---|---|
| Number of active ports | 24 | 24 | 12 |
| Number of SFP+ included | None | 2x 16 Gb SFP+ | None |
| Full fabric | Included | Included | Included |
| Access Gateway | Included | Included | Included |
| Advanced zoning | Included | Included | Included |
| Enhanced Group Management | Included | Included | Included |
| ISL Trunking | Included | Optional | Not available |
| Adaptive Networking | Included | Not available | Not available |
| Advanced Performance Monitoring | Included | Not available | Not available |
| Fabric Watch | Included | Optional | Not available |
| Extended Fabrics | Included | Not available | Not available |
| Server Application Optimization | Included | Not available | Not available |

The part number for the switch includes the following items:
򐂰 One IBM Flex System FC5022 16Gb SAN Scalable Switch or IBM Flex System FC5022
24-port 16Gb ESB SAN Scalable Switch
򐂰 Important Notices Flyer
򐂰 Warranty Flyer
򐂰 Documentation CD-ROM

The switch does not include a serial management cable. However, IBM Flex System
Management Serial Access Cable 90Y9338 is supported and contains two cables: a
mini-USB-to-RJ45 serial cable and a mini-USB-to-DB9 serial cable. Either cable can be used
to connect to the switch locally for configuration tasks and firmware updates.

Port and feature upgrades


Table 4-37 lists the available port and feature upgrades. These are all IBM Features on
Demand license upgrades.

Table 4-37 FC5022 switch upgrades

| Part number | Feature codesa | Description | 24-port 16 Gb ESB switch (90Y9356) | 24-port 16 Gb SAN switch (00Y3324) | 16 Gb SAN switch (88Y6374) |
|---|---|---|---|---|---|
| 88Y6382 | A1EP / 3772 | FC5022 16Gb SAN Switch (Upgrade 1) | No | No | Yes |
| 88Y6386 | A1EQ / 3773 | FC5022 16Gb SAN Switch (Upgrade 2) | Yes | Yes | Yes |
| 00Y3320 | A3HN / ESW3 | FC5022 16Gb Fabric Watch Upgrade | No | Yes | Yes |
| 00Y3322 | A3HP / ESW4 | FC5022 16Gb ISL/Trunking Upgrade | No | Yes | Yes |

a. The first feature code listed is for configurations ordered through System x sales channels (HVEC) using x-config.
The second feature code is for configurations ordered through the IBM Power Systems channel (AAS) using
e-config.

With DPOD, ports are licensed as they come online. With the FC5022 16Gb SAN Scalable
Switch, the first 12 ports that report (on a first-come, first-served basis) on boot are assigned
licenses. These 12 ports can be any combination of external or internal Fibre Channel ports.
After all the licenses are assigned, you can manually move those licenses from one port to
another port. Because this process is dynamic, no defined ports are reserved except ports 0
and 29. The FC5022 16Gb ESB Switch has the same behavior. The only difference is the
number of ports.
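The first-come, first-served license assignment described above can be sketched in a few lines (Python; whether the two fixed ports count against the 12 licenses is our assumption, as the text does not say):

```python
# Sketch of FC5022 Dynamic Ports on Demand (DPOD): licenses go to the
# first ports that come online; ports 0 and 29 are always assigned.
# Assumption: the two fixed ports count toward the license total.
def assign_dpod(online_order, license_count=12, reserved=(0, 29)):
    """Return the licensed ports, given the order in which ports reported."""
    licensed = list(reserved)
    for port in online_order:
        if len(licensed) >= license_count:
            break                       # all licenses are already assigned
        if port not in licensed:
            licensed.append(port)
    return licensed

ports = assign_dpod([5, 17, 3, 5, 42, 8, 1, 9, 11, 30, 2, 6, 7])
assert len(ports) == 12 and 0 in ports and 29 in ports
assert 5 in ports       # among the first ports to report, so it is licensed
assert 7 not in ports   # reported after all licenses were taken
```

After boot, an administrator can still move licenses between ports manually; the sketch only models the initial assignment.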

Table 4-38 shows the total number of active ports on the switch after you apply compatible
port upgrades.

Table 4-38 Total port counts after you apply upgrades

| Ports on Demand upgrade | 24-port 16 Gb ESB SAN switch (90Y9356) | 24-port 16 Gb SAN switch (00Y3324) | 16 Gb SAN switch (88Y6374) |
|---|---|---|---|
| Included with base switch | 24 | 24 | 12 |
| Upgrade 1, 88Y6382 (adds 12 ports) | Not supported | Not supported | 24 |
| Upgrade 2, 88Y6386 (adds 24 ports) | 48 | 48 | 48 |
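The upgrade arithmetic in Table 4-38 can be captured in a short sketch (Python; the function is illustrative and simply encodes the table, not an IBM configurator):

```python
# Sketch of FC5022 Ports on Demand totals (values from Table 4-38).
# Upgrade 1 applies only to the base 12-port model; Upgrade 2 to all models.
def total_active_ports(base_ports, upgrade1=False, upgrade2=False):
    """Return total active ports for a switch model after upgrades."""
    if upgrade1 and base_ports != 12:
        raise ValueError("Upgrade 1 (88Y6382) fits only the 12-port switch")
    return base_ports + (12 if upgrade1 else 0) + (24 if upgrade2 else 0)

assert total_active_ports(12, upgrade1=True) == 24
assert total_active_ports(12, upgrade1=True, upgrade2=True) == 48
assert total_active_ports(24, upgrade2=True) == 48
```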

Transceivers
The FC5022 12-port and 24-port ESB SAN switches come without SFP+, which must be
ordered separately to provide outside connectivity. The FC5022 24-port SAN switch comes
standard with two Brocade 16 Gb SFP+ transceivers; more SFP+ can be ordered if required.
Table 4-39 lists the supported SFP+ options.

Table 4-39 Supported SFP+ transceivers


Part number Feature codea Description

88Y6416 5084 / 5370 Brocade 8 Gb SFP+ SW Optical Transceiver

88Y6393 A22R / 5371 Brocade 16 Gb SFP+ Optical Transceiver


a. The first feature code listed is for configurations ordered through System x sales channels
(HVEC) using x-config. The second feature code is for configurations ordered through the IBM
Power Systems channel (AAS) using e-config.

Benefits
The switches offer the following key benefits:
򐂰 Exceptional price and performance for growing SAN workloads
The FC5022 16Gb SAN Scalable Switch delivers exceptional price and performance for
growing SAN workloads. It achieves this through a combination of market-leading
1,600 MBps throughput per port and an affordable high-density form factor. The 48 FC
ports produce an aggregate 768 Gbps of full-duplex throughput, and any eight external ports
can be trunked into a 128 Gbps inter-switch link (ISL). Because 16 Gbps port technology
dramatically reduces the number of ports and associated optics and cabling required
through 8/4 Gbps consolidation, the cost savings and simplification benefits are
substantial.
򐂰 Accelerating fabric deployment and serviceability with diagnostic ports
Diagnostic Ports (D_Ports) are a new port type that is supported by the FC5022 16Gb
SAN Scalable Switch. They enable administrators to quickly identify and isolate 16 Gbps
optics, port, and cable problems, which reduces fabric deployment and diagnostic times. If
the optical media is found to be the source of the problem, it can be transparently replaced
because 16 Gbps optics are hot-pluggable.

򐂰 A building block for virtualized, private cloud storage
The FC5022 16Gb SAN Scalable Switch supports multi-tenancy in cloud environments
through VM-aware end-to-end visibility and monitoring, QoS, and fabric-based advanced
zoning features. The FC5022 16Gb SAN Scalable Switch enables secure distance
extension to virtual private or hybrid clouds with dark fiber support. They also enable
in-flight encryption and data compression. Internal fault-tolerant and enterprise-class
reliability, availability, and serviceability (RAS) features help minimize downtime to support
mission-critical cloud environments.
򐂰 Simplified and optimized interconnect with Brocade Access Gateway
The FC5022 16Gb SAN Scalable Switch can be deployed as a full-fabric switch or as a
Brocade Access Gateway. It simplifies fabric topologies and heterogeneous fabric
connectivity. Access Gateway mode uses N_Port ID Virtualization (NPIV) switch
standards to present physical and virtual servers directly to the core of SAN fabrics. This
configuration makes the module transparent to the SAN fabric, which greatly reduces
management of the network edge.
򐂰 Maximizing investments
To help optimize technology investments, IBM offers a single point of serviceability that is
backed by industry-renowned education, support, and training. In addition, the IBM
16/8 Gbps SAN Scalable Switch is in the IBM ServerProven® program, which enables
compatibility among various IBM and partner products. IBM recognizes that customers
deserve the most innovative, expert integrated systems solutions.

Features and specifications


FC5022 16Gb SAN Scalable Switches have the following features and specifications:
򐂰 Internal ports:
– 28 internal full-duplex 16 Gb FC ports (up to 14 internal ports can be activated with the
Port-on-Demand feature; the remaining ports are reserved for future use)
– Internal ports operate as F_ports (fabric ports) in native mode or in access gateway
mode
– Two internal full-duplex 1 GbE ports connect to the chassis management module
򐂰 External ports:
– Twenty external ports for 16 Gb SFP+ or 8 Gb SFP+ transceivers that support 4 Gb,
8 Gb, and 16 Gb port speeds. SFP+ modules are not included and must be purchased
separately. Ports are activated with the Port-on-Demand feature.
– External ports can operate as F_ports, FL_ports (fabric loop ports), or E_ports
(expansion ports) in native mode. They can operate as N_ports (node ports) in access
gateway mode.
– One external 1 GbE port (1000BASE-T) with RJ-45 connector for switch configuration
and management.
– One RS-232 serial port (mini-USB connector) that provides another means to
configure the switch module.
򐂰 Access gateway mode (N_Port ID Virtualization - NPIV) support.
򐂰 Power-on self-test diagnostics and status reporting.
򐂰 ISL Trunking (licensable) allows up to eight ports (at 16, 8, or 4 Gbps speeds) to combine.
These ports form a single, logical ISL with a speed of up to 128 Gbps (256 Gbps full
duplex). This configuration allows for optimal bandwidth usage, automatic path failover, and
load balancing.

򐂰 Brocade Fabric OS delivers distributed intelligence throughout the network and enables a
wide range of value-added applications. These applications include Brocade Advanced
Web Tools and Brocade Advanced Fabric Services (on certain models).
򐂰 Supports up to 768 Gbps I/O bandwidth.
򐂰 Switches up to 420 million frames per second, with 0.7 microseconds latency.
򐂰 8,192 buffers for up to 3,750 km extended distance at 4 Gbps FC (Extended Fabrics
license required).
򐂰 In-flight 64 Gbps Fibre Channel compression and decompression support on up to two
external ports (no license required).
򐂰 In-flight 32 Gbps encryption and decryption on up to two external ports (no license
required).
򐂰 A total of 48 Virtual Channels per port.
򐂰 Port mirroring to monitor ingress or egress traffic from any port within the switch.
򐂰 Two I2C connections able to interface with redundant management modules.
򐂰 Hot pluggable, up to four hot pluggable switches per chassis.
򐂰 Single fuse circuit.
򐂰 Four temperature sensors.
򐂰 Managed with Brocade Web Tools.
򐂰 Supports a minimum of 128 domains in Native mode and Interoperability mode.
򐂰 Nondisruptive code load in Native mode and Access Gateway mode.
򐂰 255 N_port logins per physical port.
򐂰 D_port support on external ports.
򐂰 Class 2 and Class 3 frames.
򐂰 SNMP v1 and v3 support.
򐂰 SSH v2 support.
򐂰 Secure Sockets Layer (SSL) support.
򐂰 NTP client support (NTP V3).
򐂰 FTP support for firmware upgrades.
򐂰 SNMP/Management Information Base (MIB) monitoring functionality that is contained
within the Ethernet Control MIB-II (RFC1213-MIB).
򐂰 End-to-end optics and link validation.
򐂰 Sends switch events and syslogs to the CMM.
򐂰 Traps identify cold start, warm start, link up/link down and authentication failure events.
򐂰 Support for IPv4 and IPv6 on the management ports.
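The ISL Trunking figures quoted above follow directly from the port speeds, as this small sketch shows (Python; the function name and bounds check are our own illustration):

```python
# Sketch of the ISL Trunking arithmetic: up to eight ports combine into one
# logical ISL of up to 128 Gbps (256 Gbps counted full duplex), per the
# feature description above.
def trunk_bandwidth_gbps(ports, port_speed_gbps, full_duplex=False):
    """Aggregate bandwidth of a trunk of `ports` links at `port_speed_gbps`."""
    if not 1 <= ports <= 8:
        raise ValueError("ISL Trunking combines at most eight ports")
    bandwidth = ports * port_speed_gbps
    return bandwidth * 2 if full_duplex else bandwidth

assert trunk_bandwidth_gbps(8, 16) == 128                    # 8 x 16 Gbps
assert trunk_bandwidth_gbps(8, 16, full_duplex=True) == 256  # both directions
```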

The FC5022 16Gb SAN Scalable Switches come standard with the following software
features:
򐂰 Brocade Full Fabric mode: Enables high performance 16 Gb or 8 Gb fabric switching.
򐂰 Brocade Access Gateway mode: Uses NPIV to connect to any fabric without adding
switch domains to reduce management complexity.
򐂰 Dynamic Path Selection: Enables exchange-based load balancing across multiple
Inter-Switch Links for superior performance.

򐂰 Brocade Advanced Zoning: Segments a SAN into virtual private SANs to increase security
and availability.
򐂰 Brocade Enhanced Group Management: Enables centralized and simplified management
of Brocade fabrics through IBM Network Advisor.

Enterprise Switch Bundle software licenses


The IBM Flex System FC5022 24-port 16Gb ESB SAN Scalable Switch includes a complete
set of licensed features. These features maximize performance, ensure availability, and
simplify management for the most demanding applications and expanding virtualization
environments.

This switch comes with 24 port licenses that can be applied to internal or external links on this
switch.

This switch also includes the following ESB software licenses:


򐂰 Brocade Extended Fabrics
Provides up to 1000 km of switched fabric connectivity over long distances.
򐂰 Brocade ISL Trunking
Allows you to aggregate multiple physical links into one logical link for enhanced network
performance and fault tolerance.
򐂰 Brocade Advanced Performance Monitoring
Enables performance monitoring of networked storage resources. This license includes
the TopTalkers feature.
򐂰 Brocade Fabric Watch
Monitors mission-critical switch operations. Fabric Watch now includes the new Port
Fencing capabilities.
򐂰 Adaptive Networking
Adaptive Networking provides a rich set of capabilities to the data center or virtual server
environments. It ensures high priority connections to obtain the bandwidth necessary for
optimum performance, even in congested environments. It optimizes data traffic
movement within the fabric by using Ingress Rate Limiting, QoS, and Traffic Isolation
Zones.
򐂰 Server Application Optimization (SAO)
This license optimizes overall application performance for physical servers and virtual
machines. When it is deployed with Brocade Fibre Channel host bus adapters (HBAs),
SAO extends Brocade Virtual Channel technology from fabric to the server infrastructure.
This license delivers application-level, fine-grain QoS management to the HBAs and
related server applications.

Supported Fibre Channel standards


The switches support the following Fibre Channel standards:
򐂰 FC-AL-2 INCITS 332: 1999
򐂰 FC-GS-5 ANSI INCITS 427 (includes FC-GS-4 ANSI INCITS 387: 2004)
򐂰 FC-IFR INCITS 1745-D, revision 1.03 (under development)
򐂰 FC-SW-4 INCITS 418:2006
򐂰 FC-SW-3 INCITS 384: 2004
򐂰 FC-VI INCITS 357: 2002

򐂰 FC-TAPE INCITS TR-24: 1999
򐂰 FC-DA INCITS TR-36: 2004, includes the following standards:
– FC-FLA INCITS TR-20: 1998
– FC-PLDA INCIT S TR-19: 1998
򐂰 FC-MI-2 ANSI/INCITS TR-39-2005
򐂰 FC-PI INCITS 352: 2002
򐂰 FC-PI-2 INCITS 404: 2005
򐂰 FC-PI-4 INCITS 1647-D, revision 7.1 (under development)
򐂰 FC-PI-5 INCITS 479: 2011
򐂰 FC-FS-2 ANSI/INCITS 424:2006 (includes FC-FS INCITS 373: 2003)
򐂰 FC-LS INCITS 433: 2007
򐂰 FC-BB-3 INCITS 414: 2006
򐂰 FC-BB-2 INCITS 372: 2003
򐂰 FC-SB-3 INCITS 374: 2003 (replaces FC-SB ANSI X3.271: 1996 and FC-SB-2 INCITS
374: 2001)
򐂰 RFC 2625 IP and ARP Over FC
򐂰 RFC 2837 Fabric Element MIB
򐂰 MIB-FA INCITS TR-32: 2003
򐂰 FCP-2 INCITS 350: 2003 (replaces FCP ANSI X3.269: 1996)
򐂰 SNIA Storage Management Initiative Specification (SMI-S) Version 1.2 and includes the
following standards:
– SNIA Storage Management Initiative Specification (SMI-S) Version 1.03 ISO standard
IS24775-2006. (replaces ANSI INCITS 388: 2004)
– SNIA Storage Management Initiative Specification (SMI-S) Version 1.1.0
– SNIA Storage Management Initiative Specification (SMI-S) Version 1.2.0

For more information, see the IBM Redbooks Product Guide IBM Flex System FC5022 16Gb
SAN Scalable Switches, TIPS0870, which is available at this website:
http://www.redbooks.ibm.com/abstracts/tips0870.html?Open

4.11.12 IBM Flex System FC3171 8Gb SAN Switch


The IBM Flex System FC3171 8Gb SAN Switch is a full-fabric Fibre Channel switch module. It
can be converted to a pass-through module when configured in transparent mode.
Figure 4-56 shows the IBM Flex System FC3171 8Gb SAN Switch.

Figure 4-56 IBM Flex System FC3171 8Gb SAN Switch

The I/O module has 14 internal ports and 6 external ports. All ports are enabled; there are no
port licensing requirements on this switch. Ordering information is listed in Table 4-40.

Table 4-40 FC3171 8Gb SAN Switch


Part number Feature codea Product Name

69Y1930 A0TD / 3595 IBM Flex System FC3171 8Gb SAN Switch
a. The first feature code listed is for configurations ordered through System x sales channels
(HVEC) using x-config. The second feature code is for configurations ordered through the IBM
Power Systems channel (AAS) using e-config.

No SFP modules and cables are supplied as standard. The ones that are listed in Table 4-41
are supported.

Table 4-41 FC3171 8Gb SAN Switch supported SFP modules and cables
Part number Feature codesa Description

44X1964 5075 / 3286 IBM 8 Gb SFP+ SW Optical Transceiver

39R6475 4804 / 3238 4 Gb SFP Transceiver Option


a. The first feature code listed is for configurations ordered through System x sales channels
(HVEC) using x-config. The second feature code is for configurations ordered through the IBM
Power Systems channel (AAS) using e-config.

You can reconfigure the FC3171 8Gb SAN Switch to become a pass-through module by
using the switch GUI or CLI. The module can then be converted back to a full function SAN
switch at some future date. The switch requires a reset when you turn transparent mode on
or off.

The switch can be configured by using the following methods:


򐂰 Command Line
Access the switch by using the console port through the CMM or through the Ethernet
port. This method requires a basic understanding of the CLI commands.
򐂰 QuickTools
QuickTools requires a current version of the Java runtime environment on your
workstation. Point a web browser to the switch’s IP address, which must be configured
first. QuickTools does not require a license; the code is included with the switch.

When the switch is in Full Fabric mode, access to all of the Fibre Channel security features
is provided, including the additional services of SSL and SSH. In addition, RADIUS servers
can be used for device and user authentication. After SSL or SSH is enabled, the security
features can be configured, which allows the SAN administrator to control which devices are
allowed to log on to the Full Fabric Switch module. This process is done by creating security
sets with security groups, and these sets are configured on a per-switch basis. The security
features are not available in pass-through mode.

The FC3171 8Gb SAN Switch includes the following specifications and standards:
򐂰 Fibre Channel standards:
– FC-PH version 4.3
– FC-PH-2
– FC-PH-3
– FC-AL version 4.5

156 IBM PureFlex System and IBM Flex System Products and Technology
– FC-AL-2 Rev 7.0
– FC-FLA
– FC-GS-3
– FC-FG
– FC-PLDA
– FC-Tape
– FC-VI
– FC-SW-2
– Fibre Channel Element MIB RFC 2837
– Fibre Alliance MIB version 4.0
򐂰 Fibre Channel protocols:
– Fibre Channel service classes: Class 2 and class 3
– Operation modes: Fibre Channel class 2 and class 3, connectionless
򐂰 External port type:
– Full fabric mode: Generic loop port
– Transparent mode: Transparent fabric port
򐂰 Internal port type:
– Full fabric mode: F_port
– Transparent mode: Transparent host port/NPIV mode
– Support for up to 44 host NPIV logins
򐂰 Port characteristics:
– External ports are automatically detected and self-configuring
– Port LEDs illuminate at startup
– Number of Fibre Channel ports: 6 external ports and 14 internal ports
– Scalability: Up to 239 switches maximum depending on your configuration
– Buffer credits: 16 buffer credits per port
– Maximum frame size: 2148 bytes (2112 byte payload)
– Standards-based FC, FC-SW2 Interoperability
– Support for up to a 255 to 1 port-mapping ratio
– Media type: SFP+ module
򐂰 2 Gb specifications:
– 2 Gb fabric port speed: 1.0625 or 2.125 Gbps (gigabits per second)
– 2 Gb fabric latency: Less than 0.4 μsec
– 2 Gb fabric aggregate bandwidth: 80 Gbps at full duplex
򐂰 4 Gb specifications:
– 4 Gb switch speed: 4.250 Gbps
– 4 Gb switch fabric point-to-point: 4 Gbps at full duplex
– 4 Gb switch fabric aggregate bandwidth: 160 Gbps at full duplex
򐂰 8 Gb specifications:
– 8 Gb switch speed: 8.5 Gbps
– 8 Gb switch fabric point-to-point: 8 Gbps at full duplex
– 8 Gb switch fabric aggregate bandwidth: 320 Gbps at full duplex
򐂰 Nonblocking architecture to minimize latency
򐂰 System processor: IBM PowerPC®
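The aggregate bandwidth figures in the specifications above follow directly from the port count: 20 Fibre Channel ports (14 internal plus 6 external), each counted at full duplex. The following Python snippet is purely illustrative (it is not part of any IBM tool) and reproduces the arithmetic:

```python
# Sanity check of the aggregate-bandwidth figures for the FC3171:
# 20 Fibre Channel ports (14 internal + 6 external), each running
# full duplex, so each port contributes twice its line rate.

PORTS = 14 + 6


def aggregate_gbps(port_speed_gbps, ports=PORTS):
    """Aggregate fabric bandwidth in Gbps at full duplex."""
    return ports * port_speed_gbps * 2  # x2 for full duplex


print(aggregate_gbps(2))  # 80 Gbps, matching the 2 Gb specification
print(aggregate_gbps(4))  # 160 Gbps, matching the 4 Gb specification
print(aggregate_gbps(8))  # 320 Gbps, matching the 8 Gb specification
```

The same arithmetic applies to the FC3171 8Gb SAN Pass-thru, which has the same port count.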

For more information, see the IBM Redbooks Product Guide IBM Flex System FC3171 8Gb
SAN Switch and Pass-thru, TIPS0866, which is available at:
http://www.redbooks.ibm.com/abstracts/tips0866.html?Open



4.11.13 IBM Flex System FC3171 8Gb SAN Pass-thru
The IBM Flex System FC3171 8Gb SAN Pass-thru I/O module is an 8 Gbps Fibre Channel
pass-through SAN module. It has 14 internal ports and 6 external ports, and it ships with all
ports enabled.

Figure 4-57 shows the IBM Flex System FC3171 8 Gb SAN Pass-thru module.

Figure 4-57 IBM Flex System FC3171 8Gb SAN Pass-thru

Ordering information is listed in Table 4-42.

Table 4-42 FC3171 8Gb SAN Pass-thru part number


Part number Feature codea Description

69Y1934 A0TJ / 3591 IBM Flex System FC3171 8Gb SAN Pass-thru
a. The first feature code listed is for configurations ordered through System x sales channels
(HVEC) using x-config. The second feature code is for configurations ordered through the IBM
Power Systems channel (AAS) using e-config.

Exception: If you might need to enable full fabric capability later, do not purchase this
module. Instead, purchase the FC3171 8Gb SAN Switch.

No SFPs are supplied with the module; they must be ordered separately. Supported
transceivers and fiber optic cables are listed in Table 4-43.

Table 4-43 FC3171 8Gb SAN Pass-thru supported modules and cables
Part number Feature code Description

44X1964 5075 / 3286 IBM 8 Gb SFP+ SW Optical Transceiver

39R6475 4804 / 3238 4 Gb SFP Transceiver Option

The FC3171 8Gb SAN Pass-thru can be configured by using the following methods:
򐂰 Command Line
Access the module by using the console port through the Chassis Management Module or
through the Ethernet port. This method requires a basic understanding of the CLI
commands.
򐂰 QuickTools
QuickTools requires a current version of the JRE on your workstation. Point a web browser
to the module’s IP address, which must be configured first. QuickTools does not require a
license; the code is included with the module.

The pass-through module supports the following standards:
򐂰 Fibre Channel standards:
– FC-PH version 4.3
– FC-PH-2
– FC-PH-3
– FC-AL version 4.5
– FC-AL-2 Rev 7.0
– FC-FLA
– FC-GS-3
– FC-FG
– FC-PLDA
– FC-Tape
– FC-VI
– FC-SW-2
– Fibre Channel Element MIB RFC 2837
– Fibre Alliance MIB version 4.0
򐂰 Fibre Channel protocols:
– Fibre Channel service classes: Class 2 and class 3
– Operation modes: Fibre Channel class 2 and class 3, connectionless
򐂰 External port type: Transparent fabric port
򐂰 Internal port type: Transparent host port/NPIV mode
Support for up to 44 host NPIV logins
򐂰 Port characteristics:
– External ports are automatically detected and self-configuring
– Port LEDs illuminate at startup
– Number of Fibre Channel ports: 6 external ports and 14 internal ports
– Scalability: Up to 239 switches maximum depending on your configuration
– Buffer credits: 16 buffer credits per port
– Maximum frame size: 2148 bytes (2112 byte payload)
– Standards-based FC, FC-SW2 Interoperability
– Support for up to a 255 to 1 port-mapping ratio
– Media type: SFP+ module
򐂰 Fabric point-to-point bandwidth: 2 Gbps or 8 Gbps at full duplex
򐂰 2 Gb Specifications:
– 2 Gb fabric port speed: 1.0625 or 2.125 Gbps (gigabits per second)
– 2 Gb fabric latency: Less than 0.4 μsec
– 2 Gb fabric aggregate bandwidth: 80 Gbps at full duplex
򐂰 4 Gb Specifications:
– 4 Gb switch speed: 4.250 Gbps
– 4 Gb switch fabric point-to-point: 4 Gbps at full duplex
– 4 Gb switch fabric aggregate bandwidth: 160 Gbps at full duplex
򐂰 8 Gb Specifications:
– 8 Gb switch speed: 8.5 Gbps
– 8 Gb switch fabric point-to-point: 8 Gbps at full duplex
– 8 Gb switch fabric aggregate bandwidth: 320 Gbps at full duplex
򐂰 System processor: PowerPC



򐂰 Nonblocking architecture to minimize latency

For more information, see the IBM Redbooks Product Guide IBM Flex System FC3171 8Gb
SAN Switch and Pass-thru, TIPS0866, which is available at this website:
http://www.redbooks.ibm.com/abstracts/tips0866.html?Open

4.11.14 IBM Flex System IB6131 InfiniBand Switch


The IBM Flex System IB6131 InfiniBand Switch is a 32-port InfiniBand switch. It has 18
FDR/QDR (56/40 Gbps) external ports and 14 FDR/QDR (56/40 Gbps) internal ports for
connections to nodes. The switch ships standard with quad data rate (QDR) support and can
be upgraded to fourteen data rate (FDR). Figure 4-58 shows the IBM Flex System IB6131
InfiniBand Switch.

Figure 4-58 IBM Flex System IB6131 InfiniBand Switch

Ordering information is listed in Table 4-44.

Table 4-44 IBM Flex System IB6131 InfiniBand Switch Part number and upgrade option
Part number Feature codesa Product Name

90Y3450 A1EK / 3699 IBM Flex System IB6131 InfiniBand Switch:


򐂰 18 external QDR ports
򐂰 14 QDR internal ports

90Y3462 A1QX / ESW1 IBM Flex System IB6131 InfiniBand Switch (FDR Upgrade):
upgrades all ports to FDR speeds
a. The first feature code listed is for configurations ordered through System x sales channels
(HVEC) using x-config. The second feature code is for configurations ordered through the IBM
Power Systems channel (AAS) using e-config.

The switch runs MLNX-OS and has one external 1 Gb management port and a mini-USB
serial port for updating software and for debug use. These ports are in addition to the internal
and external InfiniBand ports.

The switch has 14 internal QDR links and 18 external QSFP uplink ports. All ports are
enabled. The switch can be upgraded to FDR speed (56 Gbps) by using the Features on
Demand (FoD) process with part number 90Y3462, as listed in Table 4-44.

No InfiniBand cables are shipped as standard with this switch; they must be purchased
separately. Supported cables are listed in Table 4-45.

Table 4-45 IB6131 InfiniBand Switch InfiniBand supported cables


Part number Feature codesa Description

49Y9980 3866 / 3249 IB QDR 3m QSFP Cable Option (passive)

90Y3470 A227 / ECB1 3m FDR InfiniBand Cable (passive)


a. The first feature code listed is for configurations ordered through System x sales channels
(HVEC) using x-config. The second feature code is for configurations ordered through the IBM
Power Systems channel (AAS) using e-config.

The switch includes the following specifications:
򐂰 IBTA 1.3 and 1.2.1 compliance
򐂰 Congestion control
򐂰 Adaptive routing
򐂰 Port mirroring
򐂰 Auto-Negotiation of 10 Gbps, 20 Gbps, 40 Gbps, or 56 Gbps
򐂰 Measured node-to-node latency of less than 170 nanoseconds
򐂰 Mellanox QoS: 9 InfiniBand virtual lanes for all ports, eight data transport lanes, and one
management lane
򐂰 High switching performance: Simultaneous wire-speed any port to any port
򐂰 Addressing: 48K Unicast Addresses maximum per Subnet, 16K Multicast Addresses per
Subnet
򐂰 Switch throughput capability of 1.8 Tb/s
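The quoted 1.8 Tb/s switching capability is consistent with all 32 ports (18 external plus 14 internal) running at the 56 Gbps FDR rate. The following Python sketch is illustrative only and shows the arithmetic:

```python
# Quick check of the 1.8 Tb/s throughput figure for the IB6131:
# 32 ports (18 external + 14 internal), each at FDR speed (56 Gbps).

ports = 18 + 14      # external + internal InfiniBand ports
fdr_gbps = 56        # FDR line rate per port

throughput_tbps = ports * fdr_gbps / 1000
print(throughput_tbps)  # 1.792, quoted in the specifications as 1.8 Tb/s
```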

For more information, see the IBM Redbooks Product Guide IBM Flex System IB6131
InfiniBand Switch, TIPS0871, which is available at this website:

http://www.redbooks.ibm.com/abstracts/tips0871.html?Open

4.12 Infrastructure planning


This section addresses the key infrastructure planning areas of power, uninterruptible power
supply (UPS), cooling, and console management that must be considered when you deploy
the IBM Flex System Enterprise Chassis.

For more information about planning your IBM Flex System power infrastructure, see IBM
Flex System Enterprise Chassis Power Guide, WP102111, which is available at this website:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102111

4.12.1 Supported power cords


The Enterprise Chassis supports the power cords that are listed in Table 4-46. One power
cord (feature 6292) is shipped with each power supply option, and one per standard power
supply ships with the chassis.

Table 4-46 Supported power cords


Part number Feature code Description

40K9772 6275 4.3m, 16A/208V, C19 to NEMA L6-20P (US) power cord

39Y7916 6252 2.5 m, 16A/100-240V, C19 to IEC 320-C20 Rack Power Cable

None 6292 2 m, 16A/100-250V, C19 to IEC 320-C20 Rack Power Cable

00D7192 A2Y3 4.3 m, US/CAN, NEMA L15-30P - (3P+Gnd) to 3X IEC 320 C19

00D7193 A2Y4 4.3 m, EMEA/AP, IEC 309 32A (3P+N+Gnd) to 3X IEC 320 C19

00D7194 A2Y5 4.3 m, A/NZ, (PDL/Clipsal) 32A (3P+N+Gnd) to 3X IEC 320 C19



4.12.2 Supported PDUs and UPS units
Table 4-47 lists the supported PDUs.

Table 4-47 Supported power distribution units


Part number Description

39Y8923 DPI 60A 3-Phase C19 Enterprise PDU w/ IEC309 3P+G (208V) fixed power cords

39Y8938 30amp/125V Front-end PDU with NEMA L5-30P connector

39Y8939 30amp/250V Front-end PDU with NEMA L6-30P connector

39Y8940 60amp/250V Front-end PDU with IEC 309 60A 2P+N+Gnd connector

39Y8948 DPI Single Phase C19 Enterprise PDU w/o power cords

46M4002 IBM 1U 9 C19/3 C13 Active Energy Manager DPI PDU

46M4003 IBM 1U 9 C19/3 C13 Active Energy Manager 60A 3-Phase PDU

46M4140 IBM 0U 12 C19/12 C13 50A 3-Phase PDU

46M4134 IBM 0U 12 C19/12 C13 Switched and Monitored 50A 3-Phase PDU

46M4167 IBM 1U 9 C19/3 C13 Switched and Monitored 30A 3-Phase PDU

71762MX IBM Ultra Density Enterprise PDU C19 PDU+ (WW)

71762NX IBM Ultra Density Enterprise PDU C19 PDU (WW)

71763MU IBM Ultra Density Enterprise PDU C19 3-Phase 60A PDU+ (NA)

71763NU IBM Ultra Density Enterprise PDU C19 3-Phase 60A PDU (NA)

Table 4-48 lists the supported UPS units.

Table 4-48 Supported uninterruptible power supply units


Part number Description

21303RX IBM UPS 7500XHV

21304RX IBM UPS 10000XHV

53956AX IBM 6000VA LCD 4U Rack UPS (200V/208V)

53959KX IBM 11000VA LCD 5U Rack UPS (230V)

4.12.3 Power planning


The Enterprise Chassis can have a maximum of six power supplies installed, so consider
how best to provide an optimized power source. Both N+N and N+1 configurations are
supported for maximum flexibility in power redundancy, which allows balanced 3-phase
power input into a single chassis or a group of chassis. Consider the nodes that are being
installed within the chassis to ensure that sufficient power supplies are installed to deliver
the required redundancy. For more information, see 4.7, “Power supply
selection” on page 92.
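The two redundancy schemes reduce to simple arithmetic: N+N doubles the supplies needed to carry the load, and N+1 adds one spare. The following Python sketch is illustrative only; the authoritative supply counts per node configuration are in Table 4-11.

```python
# Illustrative helper for the two power-redundancy schemes described
# above. "n_for_load" is the number of supplies needed to carry the
# load with no redundancy (configuration-dependent; see Table 4-11).

def supplies_required(n_for_load, scheme):
    """Return the number of power supplies needed for a redundancy scheme."""
    if scheme == "N+N":
        return 2 * n_for_load      # every supply has a redundant partner
    if scheme == "N+1":
        return n_for_load + 1      # one spare covers any single failure
    raise ValueError("scheme must be 'N+N' or 'N+1'")


print(supplies_required(3, "N+N"))  # 6, the chassis maximum
print(supplies_required(3, "N+1"))  # 4
```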

Each power supply in the chassis has a 16A C20 three-pin socket, and can be fed by a C19
power cable from a suitable supply.

The chassis power system is designed for efficiency by using data center power that consists
of 3-phase, 60A Delta 200 VAC (North America), or 3-phase 32A wye 380-415 VAC
(international). The chassis can also be fed from single phase 200-240VAC supplies if
required.

The power is scaled as required, so as more nodes are added, the power and cooling
increases accordingly. For power planning, Table 4-11 on page 93 shows the number of
power supplies needed for N+N or N+1, node dependent.

This section explains single phase and 3-phase example configurations for North America
and worldwide, starting with 3-phase and assumes that you have power budget in your
configuration to deliver N+N or N+1, given your particular node configuration.

The 2100W power modules have the advantage in North America that they draw a maximum
of 11.8A, as opposed to the 13.8A of the 2500W power modules. This means that when you
are using a 30A supply, which a PDU derates to 24A (80% of the label rating), up to two
2100W power modules can be connected to the same PDU with 0.4A remaining. With 2500W
power modules, only one power module can be connected to a 30A PDU at the maximum
(label) rating. Thus, for North America, the 2100W power module is advantageous for 30A
supply PDU deployments.
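The derating arithmetic above can be sketched as follows. This is illustrative Python only; the figures are taken from the text (80% derating on a North American 30A supply, 11.8A per 2100W module, 13.8A per 2500W module).

```python
# Illustrative arithmetic for the PDU sizing described above. A North
# American 30 A supply is derated to 80% (24 A) for continuous load; a
# 2100 W power module draws up to 11.8 A and a 2500 W module up to
# 13.8 A at the label rating.

def modules_per_pdu(supply_amps, module_amps, derating=0.8):
    """Return (max modules on one PDU, remaining amps)."""
    usable = supply_amps * derating
    count = int(usable // module_amps)
    return count, round(usable - count * module_amps, 2)


print(modules_per_pdu(30, 11.8))  # 2100 W modules: (2, 0.4)
print(modules_per_pdu(30, 13.8))  # 2500 W modules: (1, 10.2)
```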

Figure 4-59 shows two chassis that were populated with six 2100W power supplies. Six 30A
PDUs were configured to supply power to the two chassis.

(The figure shows two chassis, each with six 2100W power supplies drawing 11.8A each, fed
by six 71762NX Ultra Density Enterprise PDUs with 40K9614 IBM DPI 30A single-phase
cords with NEMA L6-30P connectors; 71762NX plus 40K9614 is feature code 6500. Each 30A
PDU can provide up to 24A after derating; two 2100W supplies draw 23.6A at the label rating,
leaving 0.4A capacity.)

Figure 4-59 2100W power supplies optimized for use with 30A UL Derated PDU



Power cabling: 32 A at 380 - 415 V 3-phase (International)
Figure 4-60 shows one 3-phase, 32A wye PDU (worldwide, WW) that provides power feeds
for two chassis. In this case, an appropriate 3-phase power cable is selected for the
Ultra-Dense Enterprise PDU+. This cable then splits the phases and supplies one phase to
each of the three power supplies within each chassis. One 3-phase 32A wye PDU can power
two fully populated chassis within a rack. A second PDU can be added for power redundancy
from an alternative power source, if the chassis is configured for N+N and meets the
requirements for this as shown in Table 4-11 on page 93.

Figure 4-60 shows a typical configuration given a 32A 3-phase wye supply at 380-415VAC
(often termed “WW” or “International”) for N+N. Ensure the node deployment meets the
requirements that are shown in Table 4-11 on page 93.

(The figure shows a 46M4002 1U 9 C19/3 C13 switched and monitored DPI PDU with a
40K9611 IBM DPI 32A cord (IEC 309 3P+N+G), feeding the power supplies of both chassis
through IEC320 16A C19-C20 3m power cables, one phase to each power supply.)

Figure 4-60 Example power cabling 32A at 380-415V 3-phase: international

The maximum number of Enterprise Chassis that can be installed in a 42U rack is four.
Therefore, a full rack requires a total of four 32A 3-phase wye feeds to provide a redundant
N+N configuration.

Power cabling: 60 A at 208 V 3-phase (North America)
In North America, the chassis requires four 60A 3-phase delta supplies at 200 - 208 VAC. A
configuration that is optimized for 3-phase configuration is shown in Figure 4-61.

(The figure shows two 46M4003 1U 9 C19/3 C13 switched and monitored DPI PDUs, each
with its fixed IEC60309 3P+G 60A line cord, feeding the chassis power supplies through
IEC320 16A C19-C20 3m power cables.)

Figure 4-61 Example of power cabling 60 A at 208 V 3-phase



Power cabling: Single Phase 63 A (International)
Figure 4-62 shows an international 63A single-phase supply feed example. This example
uses the switched and monitored PDU+ with an appropriate power cord. Each 2500W PSU
can draw up to 13.85A from its supply. Therefore, a single chassis can easily be fed from a
63A single-phase supply, leaving 18.45A of available capacity. This capacity can feed a
single power supply on a second chassis (13.85A), or it can be used by the PDU to supply
further items in the rack, such as servers or storage devices.

(The figure shows a 46M4002 1U 9 C19/3 C13 switched and monitored DPI PDU with a
40K9613 IBM DPI 63A cord (IEC 309 P+N+G), feeding the chassis power supplies.)

Figure 4-62 Single phase 63 A supply

Power cabling: 60 A 200 VAC single phase supply (North America)
In North America, UL derating means that a 60 A PDU supplies only 48 Amps. At 200 VAC,
the 2500W power supplies in the Enterprise Chassis draw a maximum of 13.85 Amps.
Therefore, a single-phase 60A supply can power a fully configured chassis. A further 6.8 A is
available from the PDU to power other items within the rack, such as servers or storage,
as shown in Figure 4-63.

(The figure shows a 46M4002 1U 9 C19/3 C13 switched and monitored DPI PDU with a
40K9615 IBM DPI 60A cord (IEC 309 2P+G). Building power is 200 VAC, 60 Amp, single
phase, with 48A supplied by the PDU after UL derating.)

Figure 4-63 60 A 200 VAC single phase supply

For more information about planning your IBM Flex System power infrastructure, see IBM
Flex System Enterprise Chassis Power Requirements Guide, WP102111, which is available
at this website:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102111



4.12.4 UPS planning
It is possible to power the Enterprise Chassis with a UPS, which provides protection in case
of power failure or interruption. IBM does not offer a 3-phase UPS; however, the single-phase
UPS units available from IBM can be used to supply power to a chassis at 200VAC and
220VAC. An alternative is to use a third-party UPS product if 3-phase power is required.

At international voltages, the 11000VA UPS is ideal for powering a fully loaded chassis.
Figure 4-64 shows how each power feed can be connected to one of the four 20A outlets on
the rear of the UPS. This UPS requires hard wiring to a suitable supply by a qualified
electrician.


Figure 4-64 Two UPS11000 international single-phase (208 - 230 VAC)

In North America, the available UPS at 200-208VAC is the UPS6000. This UPS has two
outlets that can be used to power two of the power supplies within the chassis. In a fully
loaded chassis, the third pair of power supplies must be connected to another UPS.
Figure 4-65 shows this UPS configuration.


Figure 4-65 Two UPS 6000 North American (200 - 208 VAC)

For more information, see IBM 11000VA LCD 5U Rack Uninterruptible Power Supply,
TIPS0814, which is available at this website:

http://www.redbooks.ibm.com/abstracts/tips0814.html?Open

4.12.5 Console planning
The Enterprise Chassis is a “lights out” system and can be managed remotely with ease.
However, the following methods can be used to access an individual node’s console:
򐂰 Each x86 node can be connected to individually by plugging a console breakout cable
into the front of the node. (One console breakout cable is supplied with each chassis.)
This cable presents a 15-pin video connector, two USB sockets, and a serial port at the
front. Connecting a portable screen and a USB keyboard and mouse near the front of the
chassis enables quick connection to the console breakout cable and direct access into
the node. This configuration is often called crash cart management capability.
򐂰 Connect an SCO, VCO2, or UCO (Conversion Option) to the front of each x86 node via a
local console cable, and then to a Global or Local Console Switch. Although supported,
this method is not particularly elegant because a significant number of cables must be
routed from the front of the chassis in the case of 28 servers (14 x222 Compute Nodes).
򐂰 Connection to the FSM management interface by browser allows remote presence to each
node within the chassis.
򐂰 Connection remotely into the Ethernet management port of the CMM by using the browser
allows remote presence to each node within the chassis.
򐂰 Connect directly to each IMM2 on a node and start a remote console session to that node
through the IMM.

Local KVM, which was possible with the BladeCenter Advanced Management Module, is
not possible with Flex System. The CMM does not present a KVM port externally.

The ordering part number and feature code are shown in Table 4-49.

Table 4-49 Ordering part number and feature code


Part number Feature Description
codea

81Y5286 A1NF IBM Flex System Console Breakout Cable


a. The feature code listed is for both the System x sales channel (HVEC) using x-config and the
Power Systems sales channel (AAS) using e-config.

4.12.6 Cooling planning


The chassis is designed to operate in ASHRAE class A3 operating environments, which
means temperatures up to 40°C (104°F) or altitudes up to 10,000 ft (3,000 m).

The airflow requirements for the Enterprise Chassis are from 270 CFM (cubic feet per minute)
to a maximum of 1020 CFM.

The Enterprise Chassis includes the following environmental specifications:


򐂰 Humidity, non-condensing: -12°C dew point (10.4°F) and 8% - 85% relative humidity
򐂰 Maximum dew point: 24°C (75°F)
򐂰 Maximum elevation: 3050 m (10,006 ft)
򐂰 Maximum rate of temperature change: 5°C/hr (9°F/hr)
򐂰 Heat Output (approximate): Maximum configuration: potentially 12.9kW



The 12.9kW figure is only a potential maximum, where the most power-hungry configuration
is chosen and all power envelopes are at maximum. For a more realistic figure, use the IBM
Power Configurator tool to establish the specific power requirements for a configuration,
which is available at this website:
http://www.ibm.com/systems/x/hardware/configtools.html
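When sizing cooling, facilities teams often work in BTU/hr rather than kW. The conversion below is a minimal sketch using the standard constant 1 kW = 3412.14 BTU/hr, applied to the worst-case 12.9kW figure above; for real planning, use the Power Configurator output for your specific configuration.

```python
# Rough conversion of chassis heat output from kW to BTU/hr, the unit
# facilities teams often plan cooling in. 1 kW = 3412.14 BTU/hr.

BTU_PER_KW = 3412.14


def kw_to_btu_hr(kw):
    """Convert heat output in kilowatts to BTU per hour."""
    return kw * BTU_PER_KW


# Worst-case chassis configuration from the text (12.9 kW),
# which works out to roughly 44,000 BTU/hr.
print(round(kw_to_btu_hr(12.9)))
```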

Data centers that operate at environmental temperatures above 35°C can generally use free
air cooling, where outside air is filtered and then used to ventilate the data center. This is the
definition of ASHRAE class A3 (and also the A4 class, which raises the upper limit to 45°C).
A conventional data center does not normally run with computer room air conditioning
(CRAC) units up to 40°C because a failure of a CRAC unit, or of the power to the CRAC
units, leaves limited time for shutdowns before over-temperature events occur. The IBM Flex
System Enterprise Chassis is suitable for installation and operation in an ASHRAE class A3
environment, in both operating and non-operating modes.

Information about ASHRAE 2011 thermal guidelines, data center classes, and white papers
can be found at the American Society of Heating, Refrigerating, and Air-Conditioning
Engineers (ASHRAE) website:
http://www.ashrae.org

The chassis can be installed within IBM or non-IBM racks. However, the IBM 42U 1100mm
Enterprise V2 Dynamic Rack has a footprint that, in North America, is a single floor tile wide
and two tiles deep. For more information about this sizing, see 4.13, “IBM 42U 1100mm
Enterprise V2 Dynamic Rack” on page 172.

If the chassis is installed within a non-IBM rack, the vertical rails must have clearances to
EIA-310-D. There must be sufficient room in front of the vertical front rack-mounted rail to
provide a minimum bezel clearance of 70 mm (2.76 inches) depth. The rack must be strong
enough to support the weight of the chassis, cables, power supplies, and other items that are
installed within it. There must be sufficient room behind the rear rack rails to provide for cable
management and routing. Ensure that any non-IBM rack uses stabilization feet or baying kits
so that it does not become unstable when it is fully populated.

Finally, ensure that sufficient airflow is available to the Enterprise Chassis. Racks with glass
fronts do not normally allow sufficient airflow into the chassis, unless they are specialized
racks that are specifically designed for forced air cooling. Airflow information in CFM is
available from the IBM Power Configurator tool.

4.12.7 Chassis-rack cabinet compatibility


IBM offers an extensive range of industry-standard, EIA-compatible rack enclosures and
expansion units. The flexible rack solutions help you consolidate servers and save space,
while allowing easy access to crucial components and cable management.

Table 4-50 lists the IBM Flex System Enterprise Chassis supported in each rack cabinet.

Table 4-50 The chassis that is supported in each rack cabinet


Part number Feature code Rack cabinet Enterprise Chassis

93634PX A1RC IBM 42U 1100 mm Enterprise V2 Deep Dynamic Rack Yesa

93634EX A1RD IBM 42U 1100 mm Dynamic Enterprise V2 Expansion Rack Yesa

93634CX A3GR IBM PureFlex System 42U Rack Yesb

93634DX A3GS IBM PureFlex System 42U Expansion Rack Yesb

93634AX A31F IBM PureFlex System 42U Rack Yesc

93634BX A31G IBM PureFlex System 42U Expansion Rack Yesc

201886X 2731 IBM 11U Office Enablement Kit Yesd

93072PX 6690 IBM S2 25U Static Standard Rack Yes

93072RX 1042 IBM S2 25U Dynamic Standard Rack Yes

93074RX 1043 IBM S2 42U Standard Rack Yes

99564RX 5629 IBM S2 42U Dynamic Standard Rack Yes

99564XX 5631 IBM S2 42U Dynamic Standard Expansion Rack Yes

93084PX 5621 IBM 42U Enterprise Rack Yes

93084EX 5622 IBM 42U Enterprise Expansion Rack Yes

93604PX 7649 IBM 42U 1200 mm Deep Dynamic Rack Yes

93604EX 7650 IBM 42U 1200 mm Deep Dynamic Expansion Rack Yes

93614PX 7651 IBM 42U 1200 mm Deep Static Rack Yes

93614EX 7652 IBM 42U 1200 mm Deep Static Expansion Rack Yes

93624PX 7653 IBM 47U 1200 mm Deep Static Rack Yes

93624EX 7654 IBM 47U 1200 mm Deep Static Expansion Rack Yes

14102RX 1047 IBM eServer™ Cluster 25U Rack Yes

14104RX 1048 IBM Linux Cluster 42U Rack Yes

9306-900 None IBM Netfinity® Rack No

9306-910 None IBM Netfinity Rack No

9306-42P None IBM Netfinity Enterprise Rack No

9306-42X None IBM Netfinity Enterprise Rack Expansion Cabinet No

9306-200 None IBM Netfinity NetBAY 22 No


a. This rack cabinet is optimized for IBM Flex System Enterprise Chassis, including dedicated
front to back cable raceways. For more information, including images, see 4.13, “IBM 42U
1100mm Enterprise V2 Dynamic Rack” on page 172.



b. This rack cabinet is optimized for IBM Flex System Enterprise Chassis, including dedicated
front to back cable raceways, and includes a unique PureFlex door. For more information,
including images of the door, see 4.14, “IBM PureFlex System 42U Rack and 42U Expansion
Rack” on page 178.
c. This rack cabinet is optimized for IBM Flex System Enterprise Chassis, including dedicated
front to back cable raceways, and includes the original square blue design of unique PureFlex
Logo’d Door, shipped between Q2 and Q4 2012.
d. This Office Enablement kit is specifically designed for the IBM BladeCenter S Chassis. The
Flex System Enterprise Chassis can be installed within the 11U office enablement kit with 1U
of space remaining; however, the acoustic footprint of a configuration might not be acceptable
for office use. We recommend that an evaluation be performed before deployment in an office
environment.

Racks that have glass-fronted doors, such as the Netfinity racks that are shown in Table 4-50
on page 171, do not allow sufficient airflow for the Enterprise Chassis. In some cases with the
older Netfinity racks, the chassis depth is such that the Enterprise Chassis cannot be
accommodated within the dimensions of the rack.

4.13 IBM 42U 1100mm Enterprise V2 Dynamic Rack


The IBM 42U 1100mm Enterprise V2 Dynamic Rack is an industry-standard 24-inch rack that
supports the Enterprise Chassis, BladeCenter, System x servers, and options. It is available
in Primary or Expansion form. The expansion rack is designed for baying and has no side
panels. It ships with a baying kit. After it is attached to the side of a primary rack, the side
panel that is removed from the primary rack is attached to the side of the expansion rack.

The available configurations are shown in Table 4-51.

Table 4-51 Rack options and part numbers


Model Description Details

9363-4PX IBM 42U 1100mm Enterprise V2 Dynamic Rack Ships with side panels and is
stand-alone.

9363-4EX IBM 42U 1100mm Enterprise V2 Dynamic Expansion Rack Ships with no side
panels, and is designed to attach to a primary rack.

This 42U rack conforms to the EIA-310-D industry standard for a 24-inch, type A rack cabinet.
The dimensions are listed in Table 4-52.

Table 4-52 Dimensions of IBM 42U 1100mm Enterprise V2 Dynamic Rack, 9363-4PX
Dimension Value

Height 2009 mm (79.1 in.)

Width 600 mm (23.6 in.)

Depth 1100 mm (43.3 in.)

Weight 174 kg (384 lb), including outriggers

The rack features outriggers (stabilizers) allowing for movement while populated.

Figure 4-66 shows the 9363-4PX rack.

Figure 4-66 9363-4PX Rack (note tile width relative to rack)

The IBM 42U 1100mm Enterprise V2 Dynamic Rack includes the following features:
򐂰 A perforated front door allows for improved air flow.
򐂰 Square EIA Rail mount points.
򐂰 Six side-wall compartments support 1U-high PDUs and switches without taking up
valuable rack space.
򐂰 Cable management rings are included to help cable management.
򐂰 Easy to install and remove side panels are a standard feature.
򐂰 The front door can be hinged on either side, which provides flexibility to open in either
direction.
򐂰 Front and rear doors and side panels include locks and keys to help secure servers.
򐂰 Heavy-duty casters with the use of outriggers (stabilizers) come with the 42U Dynamic
racks for added stability, which allows movement of the rack while loaded.
򐂰 Tool-less 0U PDU rear channel mounting reduces installation time and increases
accessibility.
򐂰 1U PDU can be mounted to present power outlets to the rear of the chassis in side pocket
openings.
򐂰 Removable top and bottom cable access panels in both front and rear.

IBM is a leading vendor of racks with ship-loadable designs. These kinds of racks are called
dynamic racks. The IBM 42U 1100mm Enterprise V2 Dynamic Rack and IBM 42U 1100mm
Enterprise V2 Dynamic Expansion Rack are dynamic racks.



Dynamic racks have extra heavy-duty construction and sturdy packaging that can be reused
for shipping a fully loaded rack. They also have outrigger casters for secure movement and tilt
stability, and include a heavy-duty shipping pallet with a ramp for easy on-and-off
maneuvering. Dynamic racks undergo more shock and vibration testing, and all IBM racks are
of welded rather than the less sturdy bolted construction.

Figure 4-67 shows the rear view of the 42U 1100mm Flex System Dynamic Rack.

Figure 4-67 42U 1100mm Flex System Dynamic Rack rear view, with doors and side panels removed (callouts: mountings for IBM 0U PDU, cable raceway, and outriggers)

The IBM 42U 1100mm Enterprise V2 Dynamic Rack also provides more space than previous
rack designs for front cable management of SAS cables that exit the V7000 Storage Node
and the PCIe Expansion Node.

There are four cable raceways on each rack, with two on each side. The raceways allow
cables to be routed from the front of the rack, through the raceway, and out to the rear of the
rack, which is required when connecting an externally mounted Storwize Expansion unit to an
integrated V7000 Storage Node.

Figure 4-68 shows the cable raceways.

Figure 4-68 Cable raceway (as viewed from rear of rack)

Figure 4-69 shows a cable raceway when viewed inside the rack looking down. Cables can
enter the side bays of the rack from the raceway, or pass from one side bay to the other,
passing vertically through the raceway. These openings are at the front and rear of each
raceway.

Figure 4-69 Cable raceway at front of rack, viewed from above (callouts: cable raceway, cable raceway vertical apertures, and front vertical mounting rail)

The 1U rack PDUs can also be accommodated in the side bays. In these bays, the PDU is
mounted vertically in the rear of the side bay and presents its outlets to the rear of the rack.
Four 0U PDUs can also be vertically mounted in the rear of the rack.

Rear vertical aperture blocked by a PDU: When a PDU is installed in a rear side pocket
bay, it is not possible to use the cable raceway vertical apertures at the rear.

The rack width is 600 mm (which is a standard width of a floor tile in many locations) to
complement current raised floor data center designs. Dimensions of the rack base are shown
in Figure 4-70.

Figure 4-70 Rack dimensions (overall footprint: 600 mm wide x 1100 mm deep)

The rack has square mounting holes that are common in the industry, onto which the
Enterprise Chassis and other server and storage products can be mounted.

For implementations where the front anti-tip plate is not required, an air baffle/air recirculation
prevention plate is supplied with the rack. You might not want to use the plate when an airflow
tile must be positioned directly in front of the rack.

This air baffle, which is shown in Figure 4-71, can be installed to the lower front of the rack. It
helps prevent warm air from the rear of the rack from circulating underneath the rack to the
front, which improves the cooling efficiency of the entire rack solution.

Figure 4-71 Recirculation prevention plate

4.14 IBM PureFlex System 42U Rack and 42U Expansion Rack
The IBM PureFlex System 42U Rack and IBM PureFlex System 42U Expansion Rack are
optimized for use with IBM Flex System components, IBM System x servers, and
BladeCenter systems. Their robust design allows them to be shipped with equipment already
installed. The rack footprint is 600 mm x 1100 mm.

The IBM PureFlex System 42U Rack is shown in Figure 4-72.

Figure 4-72 IBM PureFlex System 42U Rack

These racks usually ship as standard with a PureFlex System, but they are also available for
ordering by clients who want to deploy rack solutions with a similar design across their data
center. The door design can also be fitted to existing deployed PureFlex System racks that
have the original solid blue door design, which shipped from Q2 2012 onwards.

Table 4-53 shows the available options and associated part numbers for the two PureFlex
racks and the PureFlex door.

Table 4-53 PureFlex system racks and rack door


Model              Description                              Details

9363-4CX / A3GR    IBM PureFlex System 42U Rack             Primary rack. Ships with side panels.

9363-4DX / A3GS    IBM PureFlex System 42U Expansion Rack   Ships with no side panels, but with a
                                                            baying kit to join onto a primary rack.

44X3132 / EU21     IBM PureFlex System Rack Door            Front door for rack that is embellished
                                                            with the PureFlex design.

These racks share the rack frame design of the IBM 42U 1100mm Enterprise V2 Dynamic
Rack, but ship with a PureFlex branded door. The door can be ordered separately.

These IBM PureFlex System 42U racks are industry-standard 19-inch racks that support IBM
PureFlex System and Flex System chassis, IBM System x servers, and BladeCenter chassis.

The racks conform to the EIA-310-D industry standard for 19-inch, type A rack cabinets, and
have outriggers (stabilizers), which allows for movement of large loads.

The optional IBM Rear Door Heat eXchanger can be installed into this rack to provide a
superior cooling solution, and the entire cabinet will still fit on a standard data center floor tile
(width). For more information, see 4.15, “IBM Rear Door Heat eXchanger V2 Type 1756” on
page 180.

The front door is hinged on one side only. The rear door can be hinged on either side and can
be removed for ease of access when cabling or servicing systems within the rack. The front
door is a unique PureFlex-branded front door that allows for excellent airflow into the rack.

The rack includes the following features:


򐂰 Six side-wall compartments support 1U-high power distribution units (PDUs) and switches
without taking up valuable rack space.
򐂰 Cable management slots are provided to route hook-and-loop fasteners around cables.
򐂰 Side panels are a standard feature and are easy to install and remove.
򐂰 Front and rear doors and side panels include locks and keys to help secure servers.
򐂰 Horizontal and vertical cable channels are built into the frame.
򐂰 Heavy-duty casters with outriggers (stabilizers) come with the 42U rack for added stability,
which allows for safe movement of large loads.
򐂰 Tool-less 0U PDU rear channel mounting and easy installation of 1U PDUs.
򐂰 A 600 mm standard width to complement current raised-floor data center designs.
򐂰 An increase in depth from 1,000 mm to 1,100 mm to improve cable management.
򐂰 Increased door perforation to maximize airflow.
򐂰 Front-to-back cable raceways for easy routing of cables such as Fibre Channel or SAS.
򐂰 Support for shipping of fully integrated solutions.
򐂰 A front stabilizer plate.

The door can be ordered as a separate part number for attaching to existing PureFlex racks.

Rack specifications for the two IBM PureFlex System Racks and the PureFlex Rack door are
shown in Table 4-54.

Table 4-54 IBM PureFlex System Rack specifications


Model Description Dimension Value

9363-4CX PureFlex System 42U Rack Height 2009 mm (79.1 in.)

Width 604 mm (23.8 in.)

Depth 1100 mm (43.3 in.)

Weight 179 kg (394 lb), including outriggers


9363-4DX PureFlex System 42U Expansion Rack Height 2009 mm (79.1 in.)

Width 604 mm (23.8 in.)

Depth 1100 mm (43.3 in.)

Weight 142 kg (314 lb), including outriggers

44X3132 IBM PureFlex System Rack Door kit Height 1924 mm (75.8 in.)

Width 597 mm (23.5 in.)

Depth 90 mm (3.6 in.)

Weight 19.5 kg (43 lb)

4.15 IBM Rear Door Heat eXchanger V2 Type 1756


The IBM Rear Door Heat eXchanger V2 is designed to attach to the rear of the following
racks:
򐂰 IBM 42U 1100mm Enterprise V2 Dynamic Rack
򐂰 IBM 42U 1100mm Enterprise V2 Dynamic Expansion Rack

It provides effective cooling for the warm air exhausts of equipment that is mounted within the
rack. The heat exchanger has no moving parts to fail and no power is required.

The rear door heat exchanger can be used to improve cooling and reduce cooling costs in a
high-density HPC Enterprise Chassis environment.

The physical design of the door is slightly different from that of the existing Rear Door Heat
Exchanger (32R0712) that is marketed by IBM System x. This door has a wider rear aperture,
as shown in Figure 4-73. It is designed for attachment specifically to the rear of an IBM 42U
1100mm Enterprise V2 Dynamic Rack or IBM 42U 1100mm Enterprise V2 Dynamic
Expansion Rack.

Figure 4-73 Rear Door Heat Exchanger

Attaching a rear door heat exchanger to the rear of a rack allows up to 100,000 BTU/hr
(approximately 30 kW) of heat to be removed at the rack level.
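The BTU/hr and kW figures are two expressions of the same heat load, related by a factor of roughly 3412 BTU/hr per kW. The following conversion is a minimal illustrative sketch, not IBM-supplied tooling:

```python
# Convert between BTU/hr and kW to sanity-check rack heat-load figures.
# 1 kW is approximately 3412.14 BTU/hr.
BTU_PER_HR_PER_KW = 3412.14

def btu_per_hr_to_kw(btu_per_hr):
    return btu_per_hr / BTU_PER_HR_PER_KW

def kw_to_btu_per_hr(kw):
    return kw * BTU_PER_HR_PER_KW

# The door's rated capacity of 100,000 BTU/hr is roughly 29.3 kW,
# which the text rounds to 30 kW.
print(round(btu_per_hr_to_kw(100_000), 1))  # 29.3
```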

As the warm air passes through the heat exchanger, it is cooled with water and exits the rear
of the rack cabinet into the data center. The door is designed to provide an overall air
temperature drop of up to 25°C measured between air that enters the exchanger and exits the
rear.

Figure 4-74 shows the internal workings of the IBM Rear Door Heat eXchanger V2.

Figure 4-74 IBM Rear Door Heat eXchanger V2

The supply inlet hose provides an inlet for chilled, conditioned water. A return hose delivers
warmed water back to the water pump or chiller in the cooling loop. The supplied water must
meet the water supply requirements for secondary loops. Table 4-55 shows the ordering
information for the heat exchanger.

Table 4-55 Rear door heat exchanger


Model      Description                                Details

1756-42X   IBM Rear Door Heat eXchanger V2            Rear door heat exchanger that can be
           for 9363 Racks                             installed on the rear of the 9363 rack

Figure 4-75 shows the percentage of heat that is removed from a 30 kW heat load as a
function of water temperature and water flow rate. With 18°C water at 10 gallons per minute
(gpm), 90% of the 30 kW heat load is removed by the door.

Figure 4-75 Heat removal by Rear Door Heat eXchanger V2 at 30 kW of heat (percentage of heat removed as a function of water flow rate, 4 - 14 gpm, plotted for water temperatures of 12°C - 24°C; rack power 30,000 W, rack air inlet temperature 27°C, rack airflow 2,500 cfm)

For efficient cooling, water pressure and water temperature must be delivered in accordance
with the specifications listed in Table 4-56. The temperature must be maintained above the
dew point to prevent condensation from forming.

Table 4-56 1756 RDHX specifications


Rear Door Heat eXchanger V2 Specifications

Depth 129 mm (5.0 in.)

Width 600 mm (23.6 in.)

Height 1950 mm (76.8 in.)

Empty weight 39 kg (85 lb)

Filled weight 48 kg (105 lb)

Temperature drop Up to 25°C (45°F) between air exiting and entering the RDHX

Water temperature Above dew point:
18°C ±1°C (64.4°F ±1.8°F) for ASHRAE Class 1 environment
22°C ±1°C (71.6°F ±1.8°F) for ASHRAE Class 2 environment

Required water flow rate (as measured at the Minimum: 22.7 liters (6 gallons) per minute;
supply entrance to the heat exchanger) maximum: 56.8 liters (15 gallons) per minute
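Whether a given water temperature is safe depends on the dew point of the data center air. The following sketch estimates the dew point with the widely used Magnus approximation; the constants and the `is_condensation_safe` helper are illustrative assumptions, not part of the IBM specification:

```python
import math

# Magnus approximation constants (valid for roughly 0 - 60 degrees C).
MAGNUS_A, MAGNUS_B = 17.62, 243.12

def dew_point_c(air_temp_c, relative_humidity_pct):
    """Estimate the dew point (deg C) from air temperature and relative humidity."""
    gamma = (MAGNUS_A * air_temp_c) / (MAGNUS_B + air_temp_c) \
            + math.log(relative_humidity_pct / 100.0)
    return (MAGNUS_B * gamma) / (MAGNUS_A - gamma)

def is_condensation_safe(water_temp_c, air_temp_c, relative_humidity_pct):
    # Condensation forms on the coil if the supply water is at or below the dew point.
    return water_temp_c > dew_point_c(air_temp_c, relative_humidity_pct)

# Example: 27 degree C air at 50% relative humidity has a dew point near 15.7 C,
# so an 18 degree C ASHRAE Class 1 water supply stays condensation-free.
print(round(dew_point_c(27, 50), 1))  # 15.7
```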

The installation and planning guide provides lists of suppliers that can provide coolant
distribution unit solutions, flexible hose assemblies, and water treatment that meet the
suggested water quality requirements.

Three people are required to install the rear door heat exchanger, and a non-conductive step
ladder must be used to attach the upper hinge assembly. Consult the planning and
implementation guides before proceeding.

The installation and planning guides can be found at this website:


http://www.ibm.com/support/entry/portal/

Chapter 5. Compute nodes


This chapter describes the IBM Flex System servers or compute nodes. The applications that
are installed on the compute nodes can run natively on a dedicated physical server or they
can be virtualized in a virtual machine that is managed by a hypervisor layer.

The IBM Flex System portfolio of compute nodes includes Intel Xeon processors and IBM
POWER7 processors. Depending on the compute node design, nodes can come in one of
the following form factors:
򐂰 Half-wide node: Occupies one chassis bay, half the width of the chassis (approximately
215 mm or 8.5 in.). An example is the IBM Flex System x240 Compute Node.
򐂰 Full-wide node: Occupies two chassis bays side-by-side, the full width of the chassis
(approximately 435 mm or 17 in.). An example is the IBM Flex System p460 Compute
Node.

This chapter includes the following topics:


򐂰 5.1, “IBM Flex System Manager” on page 186
򐂰 5.2, “IBM Flex System x220 Compute Node” on page 186
򐂰 5.3, “IBM Flex System x222 Compute Node” on page 216
򐂰 5.4, “IBM Flex System x240 Compute Node” on page 234
򐂰 5.5, “IBM Flex System x440 Compute Node” on page 275
򐂰 5.6, “IBM Flex System p260 and p24L Compute Nodes” on page 298
򐂰 5.7, “IBM Flex System p270 Compute Node” on page 318
򐂰 5.8, “IBM Flex System p460 Compute Node” on page 335
򐂰 5.9, “IBM Flex System PCIe Expansion Node” on page 356
򐂰 5.10, “IBM Flex System Storage Expansion Node” on page 363
򐂰 5.11, “I/O adapters” on page 370

© Copyright IBM Corp. 2012, 2013. All rights reserved. 185


5.1 IBM Flex System Manager
The IBM Flex System Manager (FSM) is a high-performance, scalable system management
appliance that is based on the IBM Flex System x240 Compute Node. The FSM hardware
comes preinstalled with systems management software that you can use to configure,
monitor, and manage IBM Flex System resources in up to four chassis.

For more information about the hardware and software of the FSM, see 3.5, “IBM Flex
System Manager” on page 50.

5.2 IBM Flex System x220 Compute Node


The IBM Flex System x220 Compute Node, machine type 7906, is the next generation
cost-optimized compute node that is designed for less demanding workloads and low-density
virtualization. The x220 is efficient and equipped with flexible configuration options and
advanced management to run a broad range of workloads.

This section includes the following topics:


򐂰 5.2.1, “Introduction” on page 186
򐂰 5.2.2, “Models” on page 190
򐂰 5.2.3, “Chassis support” on page 190
򐂰 5.2.4, “System architecture” on page 191
򐂰 5.2.5, “Processor options” on page 193
򐂰 5.2.6, “Memory options” on page 193
򐂰 5.2.7, “Internal disk storage controllers” on page 201
򐂰 5.2.8, “Supported internal drives” on page 206
򐂰 5.2.9, “Embedded 1 Gb Ethernet controller” on page 209
򐂰 5.2.10, “I/O expansion” on page 209
򐂰 5.2.11, “Integrated virtualization” on page 211
򐂰 5.2.12, “Systems management” on page 211
򐂰 5.2.13, “Operating system support” on page 215

5.2.1 Introduction
The IBM Flex System x220 Compute Node is a high-availability, scalable compute node that
is optimized to support the next-generation microprocessor technology. With a balance of
cost and system features, the x220 is an ideal platform for general business workloads. This
section describes the key features of the server.

Figure 5-1 shows the front of the compute node and highlights the location of the controls,
LEDs, and connectors.

Figure 5-1 IBM Flex System x220 Compute Node (callouts: two 2.5-inch hot-swap drive bays, light path diagnostics panel, USB port, console breakout cable port, and power LED panel)

Figure 5-2 shows the internal layout and major components of the x220.

Figure 5-2 Exploded view of the x220, showing the major components (cover, heat sink, microprocessor heat sink filler, I/O expansion adapter, air baffles, microprocessor, hard disk drive backplane, hard disk drive cage, hot-swap hard disk drive, DIMMs, and hard disk drive bay filler)

Table 5-1 lists the features of the x220.

Table 5-1 IBM Flex System x220 Compute Node specifications


Components Specification

Form factor Half-wide compute node.

Chassis support IBM Flex System Enterprise Chassis.

Processor Up to two Intel Xeon Processor E5-2400 product family processors. These processors can be
eight-core (up to 2.3 GHz), six-core (up to 2.4 GHz), or quad-core (up to 2.2 GHz). There is one
QPI link that runs at 8.0 GTps, L3 cache up to 20 MB, and memory speeds up to 1600 MHz.
The server also supports one Intel Pentium Processor 1400 product family processor with two
cores, up to 2.8 GHz, 5 MB L3 cache, and 1066 MHz memory speeds.

Chipset Intel C600 series.

Memory Up to 12 DIMM sockets (six DIMMs per processor) using LP DDR3 DIMMs. LRDIMMs, RDIMMs,
and UDIMMs are supported. 1.5 V and low-voltage 1.35 V DIMMs are supported. Support for up to
1600 MHz memory speed, depending on the processor. Three memory channels per processor
(two DIMMs per channel). Supports two DIMMs per channel operating at 1600 MHz (2 DPC @
1600 MHz) with single and dual rank RDIMMs.

Memory maximums 򐂰 With LRDIMMs: Up to 384 GB with 12x 32 GB LRDIMMs and two E5-2400 processors.
򐂰 With RDIMMs: Up to 192 GB with 12x 16 GB RDIMMs and two E5-2400 processors.
򐂰 With UDIMMs: Up to 48 GB with 12x 4 GB UDIMMs and two E5-2400 processors.
Half of these maximums and DIMMs count with one processor installed.

Memory protection ECC, Chipkill (for x4-based memory DIMMs). Optional memory mirroring and memory rank
sparing.

Disk drive bays Two 2.5-inch hot-swap serial-attached SCSI (SAS)/Serial Advanced Technology Attachment
(SATA) drive bays that support SAS, SATA, and SSD drives. Optional support for up to eight
1.8-inch SSDs. Onboard ServeRAID C105 supports SATA drives only.

Maximum internal With two 2.5-inch hot-swap drives:


storage (Raw) 򐂰 Up to 2 TB with 1 TB 2.5-inch NL SAS HDDs
򐂰 Up to 2.4 TB with 1.2 TB 2.5-inch SAS HDDs
򐂰 Up to 2 TB with 1 TB 2.5-inch NL SATA HDDs
򐂰 Up to 1 TB with 512 GB 2.5-inch SATA SSDs.
An intermix of SAS and SATA HDDs and SSDs is supported. With 1.8-inch SSDs and
ServeRAID M5115 RAID adapter, you can have up to 4 TB with eight 512 GB 1.8-inch SSDs.

RAID support 򐂰 Software RAID 0 and 1 with integrated LSI-based 3 Gbps ServeRAID C105 controller;
supports SATA drives only. Non-RAID is not supported.
򐂰 Optional ServeRAID H1135 RAID adapter with LSI SAS2004 controller, supports
SAS/SATA drives with hardware-based RAID 0 and 1. An H1135 adapter is installed in a
dedicated PCIe 2.0 x4 connector and does not use either I/O adapter slot (see Figure 5-3
on page 189).
򐂰 Optional ServeRAID M5115 RAID adapter with RAID 0, 1, 10, 5, 50 support and 1 GB
cache. M5115 uses the I/O adapter slot 1. Can be installed in all models, including models
with an embedded 1 GbE Fabric Connector. Supports up to eight 1.8-inch SSD with
expansion kits. Optional flash-backup for cache, RAID 6/60, and SSD performance
enabler.

Network interfaces Some models (see Table 5-2 on page 190): Embedded dual-port Broadcom BCM5718 Ethernet
Controller that supports Wake on LAN and Serial over LAN, IPv6. TCP/IP offload Engine (TOE)
not supported. Routes to chassis I/O module bays 1 and 2 through a Fabric Connector to the
chassis midplane. The Fabric Connector precludes the use of I/O adapter slot 1, with the
exception that the M5115 can be installed in slot 1 while the Fabric Connector is installed.
Remaining models: No network interface standard; optional 1 Gb or 10 Gb Ethernet adapters.


PCI Expansion slots Two connectors for I/O adapters; each connector has PCIe x8+x4 interfaces.
Includes an Expansion Connector (PCIe 3.0 x16) for future use to connect a compute node
expansion unit. Dedicated PCIe 2.0 x4 interface for ServeRAID H1135 adapter only.

Ports USB ports: One external and two internal ports for an embedded hypervisor. A console
breakout cable port on the front of the server provides local KVM and serial ports (cable
standard with chassis; additional cables are optional).

Systems UEFI, IBM IMM2 with Renesas SH7757 controller, Predictive Failure Analysis, light path
management diagnostics panel, automatic server restart, and remote presence. Support for IBM Flex System
Manager, IBM Systems Director, and IBM ServerGuide.

Security features Power-on password, administrator's password, and Trusted Platform Module V1.2.

Video Matrox G200eR2 video core with 16 MB video memory that is integrated into the IMM2.
Maximum resolution is 1600x1200 at 75 Hz with 16 M colors.

Limited warranty Three-year customer-replaceable unit and onsite limited warranty with 9x5/NBD.

Operating systems Microsoft Windows Server 2008 R2, Red Hat Enterprise Linux 5 and 6, SUSE Linux Enterprise
supported Server 10 and 11, VMware vSphere. For more information, see 5.2.13, “Operating system
support” on page 215.

Service and support Optional service upgrades are available through IBM ServicePac® offerings: 4-hour or 2-hour
response time, 8-hour fix time, 1-year or 2-year warranty extension, and remote technical
support for IBM hardware and selected IBM and OEM software.

Dimensions Width: 217 mm (8.6 in.), height: 56 mm (2.2 in.), depth: 492 mm (19.4 in.)

Weight Maximum configuration: 6.4 kg (14.11 lb).

Figure 5-3 shows the components on the system board of the x220.

Figure 5-3 Layout of the IBM Flex System x220 Compute Node system board (callouts: hot-swap drive bay backplane, light path diagnostics, optional ServeRAID H1135, USB ports 1 and 2, processor 1 and processor 2 each with six memory DIMMs, Broadcom Ethernet, Fabric Connector, I/O connectors 1 and 2, and Expansion Connector)

5.2.2 Models
The current x220 models are shown in Table 5-2. All models include 4 GB of memory (one
4 GB DIMM) running at either 1333 MHz or 1066 MHz (depending on model).

Table 5-2 Models of the IBM Flex System x220 Compute Node, type 7906
Model      Processor (Intel Xeon E5-2400: two maximum; Intel Pentium 1400: one maximum)   Memory   RAID adapter   Disk baysa   Disks   Embedded 1 GbEb   I/O slots (used/max)

7906-A2x   1x Intel Pentium 1403 2C 2.6 GHz 5 MB 1066 MHz 80 W   1x 4 GB UDIMM (1066 MHz)c   ServeRAID C105   2x 2.5-inch hot-swap   Open   Standard   1 / 2b

7906-B2x   1x Intel Xeon E5-2430L 6C 2.0 GHz 15 MB 1333 MHz 60 W   1x 4 GB UDIMM 1333 MHz   ServeRAID C105   2x 2.5-inch hot-swap   Open   Standard   1 / 2b

7906-C2x   1x Intel Xeon E5-2403 4C 1.8 GHz 10 MB 1066 MHz 80 W   1x 4 GB RDIMM (1066 MHz)c   ServeRAID C105   2x 2.5-inch hot-swap   Open   Standard   1 / 2b

7906-D2x   1x Intel Xeon E5-2420 6C 1.9 GHz 15 MB 1333 MHz 95 W   1x 4 GB RDIMM 1333 MHz   ServeRAID C105   2x 2.5-inch hot-swap   Open   Standard   1 / 2b

7906-F2x   1x Intel Xeon E5-2418L 4C 2.0 GHz 10 MB 1333 MHz 50 W   1x 4 GB RDIMM 1333 MHz   ServeRAID C105   2x 2.5-inch hot-swap   Open   Standard   1 / 2b

7906-G2x   1x Intel Xeon E5-2430 6C 2.2 GHz 15 MB 1333 MHz 95 W   1x 4 GB RDIMM 1333 MHz   ServeRAID C105   2x 2.5-inch hot-swap   Open   No   0 / 2

7906-G4x   1x Intel Xeon E5-2430 6C 2.2 GHz 15 MB 1333 MHz 95 W   1x 4 GB RDIMM 1333 MHz   ServeRAID C105   2x 2.5-inch hot-swap   Open   Standard   1 / 2b

7906-H2x   1x Intel Xeon E5-2440 6C 2.4 GHz 15 MB 1333 MHz 95 W   1x 4 GB RDIMM 1333 MHz   ServeRAID C105   2x 2.5-inch hot-swap   Open   Standard   1 / 2b

7906-J2x   1x Intel Xeon E5-2450 8C 2.1 GHz 20 MB 1600 MHz 95 W   1x 4 GB RDIMM 1333 MHzc   ServeRAID C105   2x 2.5-inch hot-swap   Open   No   0 / 2

7906-L2x   1x Intel Xeon E5-2470 8C 2.3 GHz 20 MB 1600 MHz 95 W   1x 4 GB RDIMM 1333 MHzc   ServeRAID C105   2x 2.5-inch hot-swap   Open   No   0 / 2
a. The 2.5-inch drive bays can be replaced and expanded with 1.8-inch bays and a ServeRAID M5115 RAID
controller. This configuration supports up to eight 1.8-inch SSDs.
b. These models include an embedded 1 Gb Ethernet controller. Connections are routed to the chassis midplane by
using a Fabric Connector. Precludes the use of I/O connector 1 (except the ServeRAID M5115).
c. For A2x and C2x, the memory operates at 1066 MHz, the memory speed of the processor. For J2x and L2x,
memory operates at 1333 MHz to match the installed DIMM, rather than 1600 MHz.

5.2.3 Chassis support


The x220 type 7906 is supported in the IBM Flex System Enterprise Chassis as listed in
Table 5-3.

Table 5-3 x220 chassis support


Server BladeCenter chassis (all) IBM Flex System Enterprise
Chassis

x220 No Yes

Up to 14 x220 Compute Nodes can be installed in the chassis in 10U of rack space. The
actual number of x220 systems that can be powered on in a chassis depends on the following
factors:
򐂰 The TDP power rating for the processors that are installed in the x220
򐂰 The number of power supplies installed in the chassis
򐂰 The capacity of the power supplies installed in the chassis (2100 W or 2500 W)
򐂰 The power redundancy policy used in the chassis (N+1 or N+N)

Table 4-11 on page 93 provides guidelines about the number of x220 systems that can be
powered on in the IBM Flex System Enterprise Chassis, based on the type and number of
power supplies installed.
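To illustrate how the supply count, capacity, and redundancy policy interact, the usable chassis power budget can be estimated as in the following sketch. The formula and the `usable_power_w` helper are simplifications for illustration only; use Table 4-11 and IBM power planning tools for actual configurations:

```python
# Rough estimate of usable chassis power under a redundancy policy.
# N+1 keeps one power supply in reserve; N+N keeps half of them in reserve.
def usable_power_w(num_supplies, supply_capacity_w, policy="N+1"):
    if policy == "N+1":
        reserved = 1
    elif policy == "N+N":
        reserved = num_supplies // 2
    else:
        raise ValueError("unknown redundancy policy: " + policy)
    return (num_supplies - reserved) * supply_capacity_w

# Example: six 2500 W supplies under N+N redundancy leave 7500 W for the
# chassis components; the same supplies under N+1 leave 12500 W.
print(usable_power_w(6, 2500, "N+N"), usable_power_w(6, 2500, "N+1"))  # 7500 12500
```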

The x220 is a half-wide compute node and requires that the chassis shelf is installed in the
IBM Flex System Enterprise Chassis. Figure 5-4 shows the chassis shelf in the chassis.

Figure 5-4 The IBM Flex System Enterprise Chassis showing the chassis shelf

The shelf is required for half-wide compute nodes. To allow for the installation of full-wide or
larger compute nodes, the shelves must be removed from within the chassis. Remove a shelf
by sliding its two latches toward the center, and then sliding the shelf out of the chassis.

5.2.4 System architecture


The IBM Flex System x220 Compute Node features the Intel Xeon E5-2400 series
processors. The Xeon E5-2400 series processor has models with either four, six, or eight
cores per processor with up to 16 threads per socket. The processors have the following
features:
򐂰 Up to 20 MB of shared L3 cache
򐂰 Hyper-Threading
򐂰 Turbo Boost Technology 2.0 (depending on processor model)
򐂰 One QPI link that runs at up to 8 GT/s
򐂰 One integrated memory controller
򐂰 Three memory channels that support up to two DIMMs each

The x220 also supports an Intel Pentium 1403 or 1407 dual-core processor for entry-level
server applications. Only one Pentium processor is supported in the x220. CPU socket 2
must be left unused, and only six DIMM sockets are available.

Figure 5-5 shows the system architecture of the x220 system.

Figure 5-5 IBM Flex System x220 Compute Node system board block diagram (showing processors 1 and 2 linked by QPI at up to 8 GT/s, each with three DDR3 memory channels of up to two DIMMs per channel; the Intel C600 PCH; the optional ServeRAID H1135 on a dedicated PCIe 2.0 x4 link; the 1 GbE LOM; the IMM2 with video, serial, USB, and management connections to the midplane; PCIe 3.0 links to I/O connectors 1 and 2; and a PCIe 3.0 x16 link to the sidecar connector)

The IBM Flex System x220 Compute Node has the following system architecture features as
standard:
򐂰 Two 1356-pin, Socket B2 (LGA-1356) processor sockets
򐂰 An Intel C600 PCH
򐂰 Three memory channels per socket
򐂰 Up to two DIMMs per memory channel
򐂰 12 DDR3 DIMM sockets
򐂰 Support for UDIMMs and RDIMMs
򐂰 One integrated 1 Gb Ethernet controller (1 GbE LOM in diagram)
򐂰 One LSI 2004 SAS controller
򐂰 Integrated software RAID 0 and 1 with support for the H1135 LSI-based RAID controller
򐂰 One IMM2
򐂰 Two PCIe 3.0 I/O adapter connectors with one x8 and one x4 host connection each (12
lanes total).
򐂰 One internal and one external USB connector

5.2.5 Processor options
The x220 supports the processor options that are listed in Table 5-4. The server supports one
or two Intel Xeon E5-2400 processors, but supports only one Intel Pentium 1403 or 1407
processor. The table also shows which server models have each processor standard. If no
corresponding model for a particular processor is listed, the processor is available only
through the configure-to-order (CTO) process.

Table 5-4 Supported processors for the x220


Part number   Feature codea   Processor description   Models where used

Intel Pentium processors

None A1VZ / None Intel Pentium 1403 2C 2.6 GHz 5 MB 1066 MHz 80 W A2x

Noneb A1W0 / None Intel Pentium 1407 2C 2.8 GHz 5 MB 1066 MHz 80 W -

Intel Xeon processors

Noneb A3C4 / None Intel Xeon E5-1410 4C 2.8 GHz 10 MB 1333 MHz 80 W -

90Y4801 A1VY / A1WC Intel Xeon E5-2403 4C 1.8 GHz 10 MB 1066 MHz 80 W C2x

90Y4800 A1VX / A1WB Intel Xeon E5-2407 4C 2.2 GHz 10 MB 1066 MHz 80 W -

90Y4799 A1VW / A1WA Intel Xeon E5-2420 6C 1.9 GHz 15 MB 1333 MHz 95 W D2x

90Y4797 A1VU / A1W8 Intel Xeon E5-2430 6C 2.2 GHz 15 MB 1333 MHz 95 W G2x, G4x

90Y4796 A1VT / A1W7 Intel Xeon E5-2440 6C 2.4 GHz 15 MB 1333 MHz 95 W H2x

90Y4795 A1VS / A1W6 Intel Xeon E5-2450 8C 2.1 GHz 20 MB 1600 MHz 95 W J2x

90Y4793 A1VQ / A1W4 Intel Xeon E5-2470 8C 2.3 GHz 20 MB 1600 MHz 95 W L2x

Intel Xeon processors - Low power

00D9528 A3C7 / A3CA Intel Xeon E5-2418L 4C 2.0 GHz 10 MB 1333 MHz 50 W F2x

00D9527 A3C6 / A3C9 Intel Xeon E5-2428L 6C 1.8 GHz 15 MB 1333 MHz 60 W -

90Y4805 A1W2 / A1WE Intel Xeon E5-2430L 6C 2.0 GHz 15 MB 1333 MHz 60 W B2x

00D9526 A3C5 / A3C8 Intel Xeon E5-2448L 8C 1.8 GHz 20 MB 1600 MHz 70 W -

90Y4804 A1W1 / A1WD Intel Xeon E5-2450L 8C 1.8 GHz 20 MB 1600 MHz 70 W -
a. The first feature code is for processor 1 and second feature code is for processor 2.
b. The Intel Pentium 1407 and Intel Xeon E5-1410 are available through CTO or special bid only.

5.2.6 Memory options


IBM DDR3 memory is compatibility tested and tuned for optimal performance and throughput.
IBM memory specifications are integrated into the light path diagnostic procedures for
immediate system performance feedback and optimum system uptime. From a service and
support standpoint, IBM memory automatically assumes the IBM system warranty, and IBM
provides service and support worldwide.

The x220 supports LP DDR3 memory LRDIMMs, RDIMMs, and UDIMMs. The server
supports up to six DIMMs when one processor is installed, and up to 12 DIMMs when two
processors are installed. Each processor has three memory channels, with two DIMMs per
channel.

The following rules apply when you select the memory configuration:
򐂰 Mixing 1.5 V and 1.35 V DIMMs in the same server is supported. In such a case, all
DIMMs operate at 1.5 V.
򐂰 The maximum number of ranks that are supported per channel is eight.
򐂰 The maximum quantity of DIMMs that can be installed in the server depends on the
number of processors. For more information, see the “Maximum quantity” row in Table 5-5
and Table 5-6 on page 195.
򐂰 All DIMMs in all processor memory channels operate at the same speed, which is
determined as the lowest of the following values:
– Memory speed that is supported by a specific processor.
– Lowest maximum operating speed for the selected memory configuration that depends
on rated speed. For more information, see the “Maximum operating speed” section in
Table 5-5 and Table 5-6 on page 195. The shaded cells indicate that the speed that is
indicated is the maximum that the DIMM allows.

Cells that are highlighted with a gray background indicate when the specific combination of
DIMM voltage and number of DIMMs per channel still allows the DIMMs to operate at rated
speed.

Table 5-5 Maximum memory speeds (Part 1 - UDIMMs and LRDIMMs)

Spec                       UDIMMs, single rank   UDIMMs, dual rank   LRDIMMs, quad rank
Part numbers               49Y1403 (2 GB)        49Y1404 (4 GB)      90Y3105 (32 GB)
Rated speed                1333 MHz              1333 MHz            1333 MHz
Rated voltage              1.35 V                1.35 V              1.35 V
Operating voltage          1.35 V or 1.5 V       1.35 V or 1.5 V     1.35 V or 1.5 V
Maximum quantity^a         12                    12                  12
Largest DIMM               2 GB                  4 GB                32 GB
Max memory capacity        24 GB                 48 GB               384 GB
Max memory at rated speed  12 GB                 24 GB               N/A (1.35 V); 192 GB (1.5 V)
Maximum operating speed:
  1 DIMM per channel       1333 MHz              1333 MHz            1066 MHz (1.35 V); 1333 MHz (1.5 V)
  2 DIMMs per channel      1066 MHz              1066 MHz            1066 MHz
a. The maximum quantity that is supported is shown for two processors installed. When one processor is installed,
the maximum quantity that is supported is half of that shown.

194 IBM PureFlex System and IBM Flex System Products and Technology
Table 5-6 Maximum memory speeds (Part 2 - RDIMMs)

Spec                       Single rank, 1333 MHz  Dual rank, 1333 MHz             Dual rank, 1600 MHz  Quad rank, 1066 MHz
Part numbers               49Y1406 (4 GB)         49Y1407 (4 GB), 49Y1397 (8 GB)  90Y3109 (4 GB)       49Y1400 (16 GB)
Rated voltage              1.35 V                 1.35 V                          1.5 V                1.35 V
Operating voltage          1.35 V or 1.5 V        1.35 V or 1.5 V                 1.5 V                1.35 V or 1.5 V
Max quantity^a             12                     12                              12                   12
Largest DIMM               4 GB                   8 GB                            4 GB                 16 GB
Max memory capacity        48 GB                  96 GB                           48 GB                192 GB
Max memory at rated speed  48 GB                  96 GB                           48 GB                N/A
Maximum operating speed:
  1 DIMM per channel       1333 MHz               1333 MHz                        1600 MHz             800 MHz
  2 DIMMs per channel      1333 MHz               1333 MHz                        1600 MHz             800 MHz
a. The maximum quantity that is supported is shown for two processors installed. When one processor is installed,
the maximum quantity that is supported is half of that shown.

The following memory protection technologies are supported:


򐂰 ECC
򐂰 Chipkill (for x4-based memory DIMMs; look for “x4” in the DIMM description)
򐂰 Memory mirroring
򐂰 Memory sparing

If memory mirroring is used, DIMMs must be installed in pairs (minimum of one pair per
processor). Both DIMMs in a pair must be identical in type and size.

If memory rank sparing is used, a minimum of one quad-rank DIMM or two single-rank or
dual-rank DIMMs must be installed per populated channel. These DIMMs do not need to be
identical. In rank sparing mode, one rank of a DIMM in each populated channel is reserved as
spare memory. The size of a rank varies depending on the DIMMs installed.

Table 5-7 lists the memory options available for the x220 server. DIMMs can be installed one
at a time, but for performance reasons, install them in sets of three (one for each of the three
memory channels) if possible.

Table 5-7 Supported memory DIMMs


Part number  Feature code^a  Description

Unbuffered DIMM (UDIMM) modules

49Y1403 A0QS 2GB (1x2GB, 1Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP UDIMM

49Y1404 8648 4GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP UDIMM


Registered DIMMs (RDIMMs) - 1333 MHz and 1066 MHz

49Y1406 8941 4GB (1x4GB, 1Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM

49Y1407 8942 4GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM

49Y1397 8923 8GB (1x8GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM

49Y1563 A1QT 16GB (1x16GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM

49Y1400 8939 16GB (1x16GB, 4Rx4, 1.35V) PC3L-8500 CL7 ECC DDR3 1066MHz LP RDIMM

Registered DIMMs (RDIMMs) - 1600 MHz

49Y1559 A28Z 4GB (1x4GB, 1Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM

90Y3178 A24L 4GB (1x4GB, 2Rx8, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM

90Y3109 A292 8GB (1x8GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM

00D4968 A2U5 16GB (1x16GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM

Load-reduced DIMMs (LRDIMMs)

90Y3105 A291 32GB (1x32GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP LRDIMM
a. The feature code listed is for both the System x sales channel (HVEC) using x-config and the Power Systems
sales channel (AAS) using e-config.

DIMM installation order


This section describes the recommended order in which DIMMs should be installed, based on
the memory mode that is used.

The x220 boots with just one memory DIMM installed per processor. However, the suggested
memory configuration is to balance the memory across all the memory channels on each
processor to use the available memory bandwidth. Use one of the following suggested
memory configurations where possible:
򐂰 Three or six memory DIMMs in a single processor x220 server
򐂰 Six or 12 memory DIMMs in a dual processor x220 server

This sequence spreads the DIMMs across as many memory channels as possible. For best
performance and to ensure a working memory configuration, install the DIMMs in the sockets
as shown in the following sections for the following supported modes:
򐂰 Independent channel mode
򐂰 Rank sparing mode
򐂰 Mirrored channel mode

Memory DIMM installation: Independent channel mode


The following guidelines are only for when the processors are operating in Independent
channel mode.

Independent channel mode provides a maximum of 96 GB of usable memory with one
installed microprocessor, and 192 GB of usable memory with two installed microprocessors
(using 16 GB DIMMs).

Table 5-8 shows DIMM installation if you have one processor installed.

Table 5-8 Suggested DIMM installation with one processor installed (independent channel mode)

Number of processors  Number of DIMMs  Optimal memory config^a
1                     1
1                     2
1                     3                Yes
1                     4
1                     5
1                     6                Yes
a. For optimal memory performance, populate all memory channels equally.

Table 5-9 shows DIMM installation if you have two processors installed.

Table 5-9 Suggested DIMM installation with two processors installed (independent channel mode)

Number of processors  Number of DIMMs  Optimal memory config^a
2                     2
2                     3
2                     4
2                     5
2                     6                Yes
2                     7
2                     8
2                     9
2                     10
2                     11
2                     12               Yes
a. For optimal memory performance, populate all memory channels equally.



Memory DIMM installation: Rank-sparing mode
The following guidelines are only for when the processors are operating in rank-sparing
mode.

In rank-sparing mode, one rank is held in reserve as a spare for the other ranks in the same
channel. If the error threshold is passed in an active rank, the contents of that rank are copied
to the spare rank in the same channel. The failed rank is taken offline and the spare rank
becomes active. Rank sparing in one channel is independent of rank sparing in other
channels.

If a channel contains only one DIMM and the DIMM is single or dual ranked, do not use rank
sparing.
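The rank-sparing arithmetic described above can be sketched as follows. This is an illustration of the stated rule only, not IBM tooling; in particular, which rank the firmware reserves is an assumption here (the sketch spares the largest rank in each channel).

```python
# Illustrative sketch of rank sparing: one rank per populated channel is held
# in reserve, so usable capacity is the installed capacity minus one rank's
# worth of memory in each channel.
def usable_memory_rank_sparing(channels):
    """channels: one list per populated channel of (dimm_gb, rank_count) tuples."""
    usable = 0.0
    for dimms in channels:
        installed = sum(gb for gb, _ in dimms)
        # Assumption for this sketch: the largest rank in the channel is spared.
        spare_rank = max(gb / ranks for gb, ranks in dimms)
        usable += installed - spare_rank
    return usable

# Two channels, each with two dual-rank 8 GB RDIMMs:
# 32 GB installed, one 4 GB rank spared per channel -> 24 GB usable.
print(usable_memory_rank_sparing([[(8, 2), (8, 2)], [(8, 2), (8, 2)]]))  # 24.0
```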

The x220 boots with one memory DIMM installed per processor. However, in rank-sparing
mode with all quad-rank DIMMs, use the tables for independent channel mode for a single
processor (see Table 5-8 on page 197) or for two processors (see Table 5-9 on page 197).

At least one DIMM pair must be installed for each processor.

This sequence spreads the DIMMs across as many memory channels as possible. For best
performance and to ensure a working memory configuration in rank sparing mode with single
or dual ranked DIMMs, install the DIMMs in the sockets as shown in the following tables.

Table 5-10 shows DIMM installation if you have one processor that is installed with rank
sparing mode enabled by using single or dual ranked DIMMs.

Table 5-10 Suggested DIMM installation with one processor in rank-sparing mode

Number of processors  Number of DIMMs  Optimal memory config^a
1                     2
1                     4
1                     6                Yes
a. For optimal memory performance, populate all memory channels equally.

Table 5-11 shows DIMM installation if you have two processors that are installed with rank
sparing, by using dual or single ranked DIMMs.

Table 5-11 Suggested DIMM installation with 2 processors, rank-sparing mode, single or dual ranked

Number of processors  Number of DIMMs  Optimal memory config^a
2                     4
2                     6
2                     8                Yes
2                     10
2                     12               Yes
a. For optimal memory performance, populate all memory channels equally.

Memory DIMM installation: Mirrored-channel mode


Table 5-12 lists the memory DIMM installation order for the x220, with one or two processors
that are installed when mirrored-channel mode is used.

In mirrored-channel mode, the channels are paired and both channels in a pair store the
same data.

For each microprocessor, DIMM channels 2 and 3 form one redundant pair, and channel 1 is
unused. Because of the redundancy, the effective memory capacity of the compute node is
half the installed memory capacity.

The maximum memory is limited because one channel remains unused.
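As a quick illustration of this capacity arithmetic (a sketch, not IBM tooling):

```python
# Mirrored-channel capacity sketch for the x220: channel 1 of each processor
# is unused (four usable DIMM sockets per processor), and each mirrored pair
# stores the same data, so effective capacity is half the installed capacity.
def mirrored_mode_capacity(dimm_gb, processors):
    installed = 4 * processors * dimm_gb  # usable sockets fully populated
    return installed // 2                 # mirroring halves effective capacity

# Two processors fully populated with 16 GB DIMMs in channels 2 and 3:
print(mirrored_mode_capacity(16, 2))  # 64 (GB effective from 128 GB installed)
```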

Table 5-12 The DIMM installation order for mirrored-channel mode

DIMM pair^a  One processor that is installed  Two processors that are installed
1st          3 and 5                          3 and 5, 8 and 10
2nd          4 and 6                          4 and 6
3rd          -                                7 and 9
a. The pair of DIMMs must be identical in capacity, type, and rank count.



Table 5-13 and Table 5-14 show the suggested DIMM installation in mirrored channel mode
for one or two processors.

Table 5-13 Suggested DIMM installation with one processor - mirrored channel mode

Number of processors  Number of DIMMs^b  Optimal memory config^a
1                     4
1                     6                  Yes
a. For optimal memory performance, populate all memory channels equally.
b. The pair of DIMMs must be identical in capacity, type, and rank count.

Table 5-14 Suggested DIMM installation with two processors - mirrored channel mode

Number of processors  Number of DIMMs^b  Optimal memory config^a
2                     4                  Yes
2                     6
2                     8                  Yes
a. For optimal memory performance, populate all memory channels equally.
b. The pair of DIMMs must be identical in capacity, type, and rank count.

Memory installation considerations for IBM Flex System x220 Compute Node
Use the following general guidelines when you determine the memory configuration of your
IBM Flex System x220 Compute Node:
򐂰 All memory installation considerations apply equally to one- and two-processor systems.
򐂰 All DIMMs must be DDR3 DIMMs.
򐂰 Memory of different types (RDIMMs and UDIMMs) cannot be mixed in the system.
򐂰 If you mix DIMMs with 1.35 V and 1.5 V, the system runs all of them at 1.5 V and you lose
the energy advantage.
򐂰 If you mix DIMMs with different memory speeds, all DIMMs in the system run at the lowest
speed.
򐂰 You cannot mix non-mirrored channel and mirrored channel modes.

򐂰 Install memory DIMMs in order of their size, with the largest DIMM first. The correct
installation order is the DIMM slot farthest from the processor first (DIMM slots 5, 8, 3, 10,
1, and 12).
򐂰 Install memory DIMMs in order of their rank, with the largest DIMM in the DIMM slot
farthest from the processor. Start with DIMM slots 5 and 8 and work inwards.
򐂰 Memory DIMMs can be installed one DIMM at a time. However, avoid this configuration
because it can affect performance.
򐂰 For maximum memory bandwidth, install one DIMM in each of the three memory channels
(three DIMMs at a time).
򐂰 Populate equivalent ranks per channel.
򐂰 Physically, DIMM slots 2, 4, 6, 7, 9, and 11 must be populated (actual DIMM or DIMM
filler). DIMM slots 1, 3, 5, 8, 10, and 12 do not require a DIMM filler.
򐂰 Different memory modes require a different population order (see Table 5-12 on page 199,
Table 5-13 on page 200, and Table 5-14 on page 200).
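The size-ordering guideline above can be sketched as a small helper. This is an illustration only: it assumes that DIMM sockets 1-6 belong to processor 1 and sockets 7-12 to processor 2, and it covers only the six farthest-from-processor sockets that the guideline lists (5, 8, 3, 10, 1, 12).

```python
# Sketch of the "largest DIMM in the farthest socket" ordering guideline.
FARTHEST_FIRST = [5, 8, 3, 10, 1, 12]  # socket order given in the text

def assign_dimms(dimm_sizes_gb, processors=2):
    """Pair the largest DIMMs with the sockets farthest from the processors."""
    # Assumption: sockets 1-6 belong to processor 1, 7-12 to processor 2.
    slots = FARTHEST_FIRST if processors == 2 else [s for s in FARTHEST_FIRST if s <= 6]
    ordered = sorted(dimm_sizes_gb, reverse=True)  # largest DIMM first
    return list(zip(slots, ordered))

print(assign_dimms([8, 16, 8, 16]))  # [(5, 16), (8, 16), (3, 8), (10, 8)]
```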

5.2.7 Internal disk storage controllers


The x220 server has two 2.5-inch hot-swap drive bays that are accessible from the front of the
blade server, as shown in Figure 5-1 on page 187. The server optionally supports 1.8-inch
solid-state drives (SSDs), as described in “ServeRAID M5115 configurations and options” on
page 203.

The x220 supports the following disk controllers:


򐂰 ServeRAID C105: An onboard SATA controller with software RAID capabilities
򐂰 ServeRAID H1135: An entry level hardware RAID controller
򐂰 ServeRAID M5115: An advanced RAID controller with cache, back up, and RAID options

These three controllers are mutually exclusive. Table 5-15 lists the ordering information.

Table 5-15 Internal storage controller ordering information


Part number  Feature code  Description  Maximum quantity

Integrated  None  ServeRAID C105  1
90Y4750  A1XJ  ServeRAID H1135 Controller for IBM Flex System and IBM BladeCenter  1
90Y4390  A2XW  ServeRAID M5115 SAS/SATA Controller  1

ServeRAID C105 controller


On standard models, the two 2.5-inch drive bays are connected to a ServeRAID C105
onboard SATA controller with software RAID capabilities. The C105 function is embedded in
the Intel C600 chipset.

The C105 has the following features:


򐂰 Support for SATA drives (SAS is not supported)
򐂰 Support for RAID 0 and RAID 1 (non-RAID is not supported)
򐂰 6 Gbps throughput per port
򐂰 Support for up to two volumes
򐂰 Support for virtual drive sizes greater than 2 TB



򐂰 Fixed stripe unit size of 64 KB
򐂰 Support for MegaRAID Storage Manager management software

Consideration: There is no native (in-box) driver for Windows and Linux. The drivers must
be downloaded separately. In addition, there is no support for VMware, Hyper-V, Xen, or
SSDs.

ServeRAID H1135
The x220 also supports an entry level hardware RAID solution with the addition of the
ServeRAID H1135 Controller for IBM Flex System and BladeCenter. The H1135 is installed in
a dedicated slot, as shown in Figure 5-3 on page 189. When the H1135 adapter is installed,
the C105 controller is disabled.

The H1135 has the following features:


򐂰 Based on the LSI SAS2004 6 Gbps SAS 4-port controller
򐂰 PCIe 2.0 x4 host interface
򐂰 CIOv form factor (supported in the x220 and BladeCenter HS23E)
򐂰 Support for SAS, SATA, and SSD drives
򐂰 Support for RAID 0, RAID 1, and non-RAID
򐂰 6 Gbps throughput per port
򐂰 Support for up to two volumes
򐂰 Fixed stripe size of 64 KB
򐂰 Native driver support in Windows, Linux, and VMware
򐂰 S.M.A.R.T. support
򐂰 Support for MegaRAID Storage Manager management software

ServeRAID M5115
The ServeRAID M5115 SAS/SATA Controller (90Y4390) is an advanced RAID controller that
supports RAID 0, 1, 10, 5, 50, and optional 6 and 60. It includes 1 GB of cache, which can be
backed up to flash memory when it is attached to an optional supercapacitor. The M5115
attaches to the I/O adapter 1 connector. It can be attached even if the Fabric Connector is
installed (used to route the embedded Gb Ethernet to chassis bays 1 and 2). The ServeRAID
M5115 cannot be installed if an adapter is installed in I/O adapter slot 1. When the M5115
adapter is installed, the C105 controller is disabled.

The ServeRAID M5115 supports the following combinations of 2.5-inch drives and 1.8-inch
SSDs:
򐂰 Up to two 2.5-inch drives only
򐂰 Up to four 1.8-inch drives only
򐂰 Up to two 2.5-inch drives, plus up to four 1.8-inch SSDs
򐂰 Up to eight 1.8-inch SSDs

For more information about these configurations, see “ServeRAID M5115 configurations and
options” on page 203.

The ServeRAID M5115 controller has the following specifications:


򐂰 Eight internal 6 Gbps SAS/SATA ports.
򐂰 PCI Express 3.0 x8 host interface.
򐂰 6 Gbps throughput per port.
򐂰 800 MHz dual-core IBM PowerPC processor with an LSI SAS2208 6 Gbps ROC controller.
򐂰 Support for RAID levels 0, 1, 10, 5, 50 standard; support for RAID 6 and 60 with optional
upgrade using 90Y4411.

򐂰 Optional onboard 1 GB data cache (DDR3 running at 1333 MHz) with optional flash
backup (MegaRAID CacheVault technology) as part of the Enablement Kit 90Y4342.
򐂰 Support for SAS and SATA HDDs and SSDs.
򐂰 Support for intermixing SAS and SATA HDDs and SSDs. Mixing different types of drives in
the same array (drive group) is not recommended.
򐂰 Support for SEDs with MegaRAID SafeStore.
򐂰 Optional support for SSD performance acceleration with MegaRAID FastPath and SSD
caching with MegaRAID CacheCade Pro 2.0 (90Y4447).
򐂰 Support for up to 64 virtual drives, up to 128 drive groups, and up to 16 virtual drives per
drive group. Also supports up to 32 physical drives per drive group.
򐂰 Support for LUN sizes up to 64 TB.
򐂰 Configurable stripe size up to 1 MB.
򐂰 Compliant with the Disk Data Format (DDF) configuration on disk (CoD) specification.
򐂰 S.M.A.R.T. support.
򐂰 MegaRAID Storage Manager management software.

ServeRAID M5115 configurations and options


With the addition of the M5115 controller, the x220 supports 2.5-inch drives, 1.8-inch SSDs,
or combinations of the two.

Table 5-16 lists the ServeRAID M5115 and associated hardware kits.

Table 5-16 ServeRAID M5115 and supported hardware kits for the x220
Part number  Feature code  Description  Maximum supported

90Y4390 A2XW ServeRAID M5115 SAS/SATA Controller 1

90Y4424 A35L ServeRAID M5100 Series Enablement Kit for IBM Flex System x220 1

90Y4425 A35M ServeRAID M5100 Series IBM Flex System Flash Kit for x220 1

90Y4426 A35N ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x220 1

At least one hardware kit is required with the ServeRAID M5115 controller. The following
hardware kits enable specific drive support:
򐂰 ServeRAID M5100 Series Enablement Kit for IBM Flex System x220 (90Y4424) enables
support for up to two 2.5-inch HDDs or SSDs in the hot-swap bays in the front of the
server. It includes a CacheVault unit, which enables MegaRAID CacheVault flash cache
protection.
This enablement kit replaces the standard two-bay backplane that is attached through the
system board to an onboard controller. The new backplane attaches with an included flex
cable to the M5115 controller. It also includes an air baffle, which also serves as an
attachment for the CacheVault unit.
MegaRAID CacheVault flash cache protection uses NAND flash memory that is powered
by a supercapacitor to protect data that is stored in the controller cache. This module
eliminates the need for the lithium-ion battery that is commonly used to protect DRAM
cache memory on PCI RAID controllers.



To avoid data loss or corruption during a power or server failure, CacheVault technology
transfers the contents of the DRAM cache to NAND flash. This process uses power from
the supercapacitor. After power is restored to the RAID controller, the saved data is
transferred from the NAND flash back to the DRAM cache. The DRAM cache can then be
flushed to disk.

Tip: The Enablement Kit is only required if 2.5-inch drives are to be used. This kit is not
required if you plan to install four or eight 1.8-inch SSDs only.

򐂰 ServeRAID M5100 Series IBM Flex System Flash Kit for x220 (90Y4425) enables support
for up to four 1.8-inch SSDs. This kit replaces the standard two-bay backplane with a
four-bay SSD backplane that attaches with an included flex cable to the M5115 controller.
Because only SSDs are supported, a CacheVault unit is not required, so this kit does not
have a supercapacitor.
򐂰 ServeRAID M5100 Series SSD Expansion Kit for IBM Flex System x220 (90Y4426)
enables support for up to four internal 1.8-inch SSDs. This kit includes two air baffles (left
and right) that can attach two 1.8-inch SSD attachment locations. It also contains flex
cables for attachment to up to four 1.8-inch SSDs.

Table 5-17 shows the kits that are required for each combination of drives. For example, if you
plan to install eight 1.8-inch SSDs, you need the M5115 controller, the Flash kit, and the SSD
Expansion kit.

Table 5-17 ServeRAID M5115 hardware kits

Maximum number      Maximum number    Components required
of 2.5-inch drives  of 1.8-inch SSDs
2                   0                 M5115 (90Y4390) and Enablement Kit (90Y4424)
0                   4 (front)         M5115 (90Y4390) and Flash Kit (90Y4425)
2                   4 (internal)      M5115 (90Y4390), Enablement Kit (90Y4424), and SSD Expansion Kit (90Y4426)
0                   8 (both)          M5115 (90Y4390), Flash Kit (90Y4425), and SSD Expansion Kit (90Y4426)
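The kit rules in Table 5-17 reduce to a simple selection function. The following is an illustrative sketch only (the kit names are abbreviated from the table, and the bay-sharing assumption between 2.5-inch drives and front SSDs is inferred from the configurations listed):

```python
# Hedged sketch of Table 5-17: the M5115 is always required; 2.5-inch drives
# add the Enablement Kit, front 1.8-inch SSDs add the Flash Kit, and internal
# 1.8-inch SSDs add the SSD Expansion Kit.
def m5115_kits(n_25in, n_18in_front, n_18in_internal):
    assert n_25in <= 2 and n_18in_front <= 4 and n_18in_internal <= 4
    # Assumption: the front bays hold either 2.5-inch drives or 1.8-inch SSDs.
    assert not (n_25in and n_18in_front), "front bays cannot mix drive formats"
    kits = ["ServeRAID M5115 (90Y4390)"]
    if n_25in:
        kits.append("Enablement Kit (90Y4424)")
    if n_18in_front:
        kits.append("Flash Kit (90Y4425)")
    if n_18in_internal:
        kits.append("SSD Expansion Kit (90Y4426)")
    return kits

# Eight 1.8-inch SSDs (four front, four internal), as in the text's example:
print(m5115_kits(0, 4, 4))  # M5115 + Flash Kit + SSD Expansion Kit
```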

Figure 5-6 shows how the ServeRAID M5115 and the Enablement Kit are installed in the
server to support two 2.5-inch drives with MegaRAID CacheVault flash cache protection (see
row 1 of Table 5-17 on page 204).

Figure 5-6 The ServeRAID M5115 and the Enablement Kit installed: the ServeRAID M5115
controller (90Y4390) with the ServeRAID M5100 Series Enablement Kit for x220 (90Y4424),
the MegaRAID CacheVault flash cache protection module, and the replacement 2-drive
backplane

Figure 5-7 shows how the ServeRAID M5115 and Flash and SSD Expansion Kits are
installed in the server to support eight 1.8-inch solid-state drives (see row 4 of Table 5-17 on
page 204).

Figure 5-7 ServeRAID M5115 with Flash and SSD Expansion Kits installed: the ServeRAID
M5115 controller (90Y4390) with the ServeRAID M5100 Series Flash Kit for x220 (90Y4425)
and SSD Expansion Kit for x220 (90Y4426). Eight drives are supported: four front-accessible
drives on the replacement 4-drive SSD backplane (no CacheVault flash protection) and four
internal SSDs on special air baffles above the DIMMs

The eight SSDs are installed in the following locations:


򐂰 Four in the front of the system in place of the two 2.5-inch drive bays
򐂰 Two in a tray above the memory banks for processor 1
򐂰 Two in a tray above the memory banks for processor 2



Optional add-ons to the ServeRAID M5115 controller are RAID 6 support, SSD performance
accelerator, and SSD caching enabler. The FoD license upgrades are listed in Table 5-18.

Table 5-18 Supported upgrade features


Part number  Feature code  Description  Maximum supported

90Y4410  A2Y1  ServeRAID M5100 Series RAID 6 Upgrade for IBM Flex System  1
90Y4412  A2Y2  ServeRAID M5100 Series Performance Accelerator for IBM Flex System (MegaRAID FastPath)  1
90Y4447  A36G  ServeRAID M5100 Series SSD Caching Enabler for IBM Flex System (MegaRAID CacheCade Pro 2.0)  1

The following features are included:


򐂰 RAID 6 Upgrade (90Y4410)
Adds support for RAID 6 and RAID 60. This license is an FoD license.
򐂰 Performance Accelerator (90Y4412)
The Performance Accelerator for IBM Flex System is implemented by using the LSI
MegaRAID FastPath software and provides high-performance I/O acceleration for
SSD-based virtual drives. It uses a low-latency I/O path to increase the maximum IOPS
capability of the controller. This feature boosts the performance of applications with a
highly random data storage access pattern, such as transactional databases. Part number
90Y4412 is an FoD license.
򐂰 SSD Caching Enabler for traditional hard disk drives (90Y4447)
The SSD Caching Enabler for IBM Flex System is implemented by using the LSI
MegaRAID CacheCade Pro 2.0 software and is designed to accelerate the performance of
HDD arrays with only an incremental investment in SSD technology. The feature enables
the SSDs to be configured as a dedicated cache to help maximize the I/O performance for
transaction-intensive applications, such as databases and web serving. The feature tracks
data storage access patterns and identifies the most frequently accessed data. The hot
data is then automatically stored on the SSDs that are assigned as a dedicated cache
pool on the ServeRAID controller. Part number 90Y4447 is an FoD license. This feature
requires that at least one SSD drive is installed.
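Conceptually, this kind of SSD caching promotes frequently accessed blocks to a flash tier. The following is a toy illustration of that idea only; the actual MegaRAID CacheCade Pro 2.0 policy is proprietary and considerably more sophisticated, and the threshold used here is invented for the example:

```python
# Toy illustration (not the CacheCade algorithm): track per-block access
# frequency and promote blocks that cross a threshold to the SSD cache tier.
from collections import Counter

class HotDataTracker:
    def __init__(self, promote_threshold=3):
        self.hits = Counter()
        self.cached = set()            # block IDs currently on the SSD cache
        self.threshold = promote_threshold

    def access(self, block_id):
        self.hits[block_id] += 1
        if self.hits[block_id] >= self.threshold:
            self.cached.add(block_id)  # promote frequently accessed block
        return block_id in self.cached # True if served from the SSD tier

t = HotDataTracker()
for _ in range(3):
    hit = t.access("lba-42")
print(hit)  # True: the third access promoted the block to the SSD cache
```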

5.2.8 Supported internal drives


The x220 supports 1.8-inch and 2.5-inch drives.

Supported 1.8-inch drives


The 1.8-inch solid-state drives that are supported by the ServeRAID M5115 are listed in
Table 5-19.

Table 5-19 Supported 1.8-inch solid-state drives


Part number  Feature code  Description  Maximum supported

49Y6124 A3AP IBM 400GB SATA 1.8" MLC Enterprise SSD 8

43W7746 5420 IBM 200 GB SATA 1.8-inch MLC SSD 8


49Y6119 A3AN IBM 200GB SATA 1.8" MLC Enterprise SSD 8

00W1120 A3HQ IBM 100GB SATA 1.8" MLC Enterprise SSD 8

43W7726 5428 IBM 50 GB SATA 1.8-inch MLC SSD 8

49Y5993 A3AR IBM 512 GB SATA 1.8-inch MLC Enterprise Value SSD 8

49Y5834 A3AQ IBM 64 GB SATA 1.8-inch MLC Enterprise Value SSD 8

00W1222 A3TG IBM 128GB SATA 1.8" MLC Enterprise Value SSD 8

00W1227 A3TH IBM 256GB SATA 1.8" MLC Enterprise Value SSD 8

Supported 2.5-inch drives


The 2.5-inch drive bays support SAS or SATA HDDs or SATA SSDs. Table 5-20 lists the
supported 2.5-inch drive options. The maximum quantity that is supported is two.

Table 5-20 2.5-inch drive options for internal disk storage


Part number  Feature code  Description  Supported by ServeRAID controller: C105 / H1135 / M5115

10K SAS hard disk drives

42D0637 5599 IBM 300 GB 10K 6 Gbps SAS 2.5-inch SFF Slim-HS HDD No Supported Supported

49Y2003 5433 IBM 600 GB 10K 6 Gbps SAS 2.5-inch SFF Slim-HS HDD No Supported Supported

81Y9650 A282 IBM 900 GB 10K 6 Gbps SAS 2.5-inch SFF HS HDD No Supported Supported

00AD075 A48S IBM 1.2TB 10K 6Gbps SAS 2.5'' G2HS HDD No Supported Supported

15K SAS hard disk drives

42D0677 5536 IBM 146 GB 15K 6 Gbps SAS 2.5-inch SFF Slim-HS HDD No Supported Supported

90Y8926 A2XB IBM 146GB 15K 6Gbps SAS 2.5" SFF G2HS HDD No Supported Supported

81Y9670 A283 IBM 300 GB 15K 6 Gbps SAS 2.5-inch SFF HS HDD No Supported Supported

10K and 15K Self-encrypting drives (SED)

90Y8944 A2ZK IBM 146GB 15K 6Gbps SAS 2.5" SFF G2HS SED No Supported Supported

90Y8913 A2XF IBM 300GB 10K 6Gbps SAS 2.5" SFF G2HS SED No Supported Supported

90Y8908 A3EF IBM 600GB 10K 6Gbps SAS 2.5" SFF G2HS SED No Supported Supported

81Y9662 A3EG IBM 900GB 10K 6Gbps SAS 2.5" SFF G2HS SED No Supported Supported

00AD085 A48T IBM 1.2TB 10K 6Gbps SAS 2.5'' G2HS SED No Supported Supported

SAS-SSD Hybrid drive

00AD102 A4G7 IBM 600GB 10K 6Gbps SAS 2.5'' G2HS Hybrid No Supported Supported


NL SATA

81Y9722 A1NX IBM 250 GB 7.2 K 6 Gbps NL SATA 2.5-inch SFF HS HDD Supported Supported Supported

81Y9726 A1NZ IBM 500 GB 7.2 K 6 Gbps NL SATA 2.5-inch SFF HS HDD Supported Supported Supported

81Y9730 A1AV IBM 1 TB 7.2 K 6 Gbps NL SATA 2.5-inch SFF HS HDD Supported Supported Supported

NL SAS

42D0707 5409 IBM 500 GB 7200 6 Gbps NL SAS 2.5-inch SFF Slim-HS HDD No Supported Supported

90Y8953 A2XE IBM 500GB 7.2K 6Gbps NL SAS 2.5" SFF G2HS HDD No Supported Supported

81Y9690 A1P3 IBM 1 TB 7.2 K 6 Gbps NL SAS 2.5-inch SFF HS HDD No Supported Supported

Solid-state drives - Enterprise

41Y8331 A4FL S3700 200GB SATA 2.5" MLC HS Enterprise SSD No Supported Supported

41Y8336 A4FN S3700 400GB SATA 2.5" MLC HS Enterprise SSD No Supported Supported

41Y8341 A4FQ S3700 800GB SATA 2.5" MLC HS Enterprise SSD No Supported Supported

00W1125 A3HR IBM 100GB SATA 2.5" MLC HS Enterprise SSD No Supported Supported

43W7718 A2FN IBM 200 GB SATA 2.5-inch MLC HS SSD^a No Supported Supported

49Y6129 A3EW IBM 200GB SAS 2.5" MLC HS Enterprise SSD No Supported Supported

49Y6134 A3EY IBM 400GB SAS 2.5" MLC HS Enterprise SSD No Supported Supported

49Y6139 A3F0 IBM 800GB SAS 2.5" MLC HS Enterprise SSD No Supported Supported

49Y6195 A4GH IBM 1.6TB SAS 2.5" MLC HS Enterprise SSD No Supported Supported

Solid-state drives - Enterprise Value

49Y5839 A3AS IBM 64 GB SATA 2.5-inch MLC HS Enterprise Value SSD No Supported Supported

90Y8648 A2U4 IBM 128GB SATA 2.5" MLC HS Enterprise Value SSD No Supported Supported

90Y8643 A2U3 IBM 256GB SATA 2.5" MLC HS Enterprise Value SSD No Supported Supported

49Y5844 A3AU IBM 512 GB SATA 2.5-inch MLC HS Enterprise Value SSD No Supported Supported
a. Withdrawn from marketing.

IBM Flex System Storage Expansion Node


The x220 also supports the IBM Flex System Storage Expansion Node, which provides
another 12 drive bays. For more information, see 5.10, “IBM Flex System Storage Expansion
Node” on page 363.

5.2.9 Embedded 1 Gb Ethernet controller
Some models of the x220 include an Embedded 1 Gb Ethernet controller (also known as
LOM) built into the system board. Table 5-2 on page 190 lists which models of the x220
include the controller. Each x220 model that includes the controller also has the Compute
Node Fabric Connector that is installed in I/O connector 1 and physically screwed onto the
system board. The Compute Node Fabric Connector provides connectivity to the Enterprise
Chassis midplane. Figure 5-3 on page 189 shows the location of the Fabric Connector.

The Fabric Connector enables port 1 on the controller to be routed to I/O module bay 1.
Similarly, port 2 is routed to I/O module bay 2. The Fabric Connector can be unscrewed and
removed, if required, to allow the installation of an I/O adapter on I/O connector 1.

The Embedded 1 Gb Ethernet controller has the following features:


򐂰 Broadcom BCM5718 based
򐂰 Dual-port Gigabit Ethernet controller
򐂰 PCIe 2.0 x2 host bus interface
򐂰 Supports Wake on LAN
򐂰 Supports Serial over LAN
򐂰 Supports IPv6

Consideration: TCP/IP offload engine (TOE) is not supported.

5.2.10 I/O expansion


Like other IBM Flex System compute nodes, the x220 has two PCIe 3.0 I/O expansion
connectors for attaching I/O adapters. On the x220, each of these connectors has 12 PCIe
lanes. These lanes are implemented as one x8 link (connected to the first application-specific
integrated circuit (ASIC) on the installed adapter) and one x4 link (connected to the second
ASIC on the installed adapter).
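For context on what those lane counts mean, PCIe 3.0 signals at 8 GT/s with 128b/130b encoding, roughly 0.985 GB/s per lane in each direction. A quick back-of-the-envelope calculation (theoretical link rates, not IBM-published throughput figures):

```python
# Rough per-direction bandwidth of the x8 and x4 PCIe 3.0 links described
# above. PCIe 3.0: 8 GT/s per lane with 128b/130b encoding.
GT_PER_S = 8
GBYTES_PER_LANE = GT_PER_S * (128 / 130) / 8   # ~0.985 GB/s per lane

for lanes in (8, 4):
    print(f"x{lanes}: {lanes * GBYTES_PER_LANE:.1f} GB/s")
# x8: 7.9 GB/s and x4: 3.9 GB/s per direction (theoretical maximums)
```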

The I/O expansion connectors are high-density 216-pin PCIe connectors. Installing I/O
adapters allows the x220 to connect to switch modules in the IBM Flex System Enterprise
Chassis.

The x220 also supports the IBM Flex System PCIe Expansion Node, which provides up to
another six adapter slots: two Flex System I/O adapter slots and up to four standard PCIe
slots. For more information, see 5.9, “IBM Flex System PCIe Expansion Node” on page 356.



Figure 5-8 shows the rear of the x220 compute node and the locations of the I/O connectors.

Figure 5-8 Rear of the x220 compute node showing the locations of the I/O connectors (I/O
connector 1 and I/O connector 2)

Table 5-21 lists the I/O adapters that are supported in the x220.

Table 5-21 Supported I/O adapters for the x220 compute node
Part number Feature code Ports Description

Ethernet adapters

49Y7900 A10Y 4 IBM Flex System EN2024 4-port 1Gb Ethernet Adapter

90Y3466 A1QY 2 IBM Flex System EN4132 2-port 10Gb Ethernet Adapter

90Y3554 A1R1 4 IBM Flex System CN4054 10Gb Virtual Fabric Adapter

90Y3482 A3HK 2 IBM Flex System EN6132 2-port 40Gb Ethernet Adapter

Fibre Channel adapters

69Y1938 A1BM 2 IBM Flex System FC3172 2-port 8Gb FC Adapter

95Y2375 A2N5 2 IBM Flex System FC3052 2-port 8Gb FC Adapter

88Y6370 A1BP 2 IBM Flex System FC5022 2-port 16Gb FC Adapter

95Y2386 A45R 2 IBM Flex System FC5052 2-port 16Gb FC Adapter

95Y2391 A45S 4 IBM Flex System FC5054 4-port 16Gb FC Adapter

69Y1942 A1BQ 2 IBM Flex System FC5172 2-port 16Gb FC Adapter

InfiniBand adapters

90Y3454 A1QZ 2 IBM Flex System IB6132 2-port FDR InfiniBand Adapter

Consideration: Any supported I/O adapter can be installed in either I/O connector.
However, the adapter must match the I/O modules that are installed in the corresponding
chassis switch bays, so adapter placement must be consistent across all compute nodes in
the chassis.

210 IBM PureFlex System and IBM Flex System Products and Technology
5.2.11 Integrated virtualization
The x220 offers USB flash drive options that are preinstalled with versions of VMware ESXi.
This software is an embedded version of VMware ESXi and is fully contained on the flash
drive without requiring any disk space. The USB memory key plugs into one of the two
internal USB ports on the x220 system board, as shown in Figure 5-3 on page 189. If you
install USB keys in both USB ports, both devices are listed in the boot menu. You can use this
configuration to boot from either device, or set one as a backup in case the first gets
corrupted.

The supported USB memory keys are listed in Table 5-22.

Table 5-22 Virtualization options


Part number  Feature code  Description  Maximum supported

41Y8300 A2VC IBM USB Memory Key for VMware ESXi 5.0 1

41Y8307 A383 IBM USB Memory Key for VMware ESXi 5.0 Update1 1

41Y8311 A2R3 IBM USB Memory Key for VMware ESXi 5.1 1

41Y8298 A2G0 IBM Blank USB Memory Key for VMware ESXi Downloadsa 2
a. The Blank USB Memory Key requires the download of the VMware vSphere (ESXi) Hypervisor
with IBM Customization image, which is available at this website:
http://ibm.com/systems/x/os/vmware/

There are two types of USB keys: preloaded keys and blank keys. Blank keys allow you to
download an IBM customized version of ESXi and load it onto the key. The x220 supports one
or two installed keys, but only in the following combinations:


򐂰 One preload key
򐂰 One blank key
򐂰 One preload key and one blank key
򐂰 Two blank keys

Two preloaded keys is not a supported combination; installing two preloaded keys prevents
ESXi from booting, as described at this website:

http://kb.vmware.com/kb/1035107

Having two keys installed provides a backup boot device. Both devices are listed in the boot
menu, which allows you to boot from either device or to set one as a backup in case the first
one gets corrupted.

5.2.12 Systems management


The following section describes some of the systems management features that are available
with the x220.

Front panel LEDs and controls


The front of the x220 includes several LEDs and controls that help with systems
management. They include a hard disk drive (HDD) activity LED, status LEDs, and power,
identify, check log, fault, and light path diagnostic LEDs.



Figure 5-9 shows the location of the LEDs and controls on the front of the x220.

Figure 5-9 The front of the x220 with the front panel LEDs and controls shown (USB port, console breakout cable port, NMI control, hard disk drive activity and status LEDs, power button/LED, identify, check log, and fault LEDs)

Table 5-23 describes the front panel LEDs.

Table 5-23 x220 front panel LED information


LED Color Description

Power Green This LED lights solid when system is powered up. When the compute node is initially
plugged into a chassis, this LED is off. If the power-on button is pressed, the IMM flashes
this LED until it determines that the compute node can power up. If the compute node can
power up, the IMM powers the compute node on and turns on this LED solid. If the compute
node cannot power up, the IMM turns off this LED and turns on the information LED. When
this button is pressed with the server out of the chassis, the light path LEDs are lit.

Location Blue A user can use this LED to locate the compute node in the chassis by requesting it to flash
from the chassis management module console. The IMM flashes this LED when instructed
to by the Chassis Management Module. This LED functions only when the server is
powered on.

Check error log Yellow The IMM turns on this LED when a condition occurs that prompts the user to check the
system error log in the Chassis Management Module.

Fault Yellow This LED lights solid when a fault is detected somewhere on the compute node. If this
indicator is on, the general fault indicator on the chassis front panel should also be on.

Hard disk drive Green Each hot-swap hard disk drive has an activity LED, and when this LED is flashing, it
activity LED indicates that the drive is in use.

Hard disk drive Yellow When this LED is lit, it indicates that the drive failed. If an optional IBM ServeRAID
status LED controller is installed in the server, when this LED is flashing slowly (one flash per second),
it indicates that the drive is being rebuilt. When the LED is flashing rapidly (three flashes
per second), it indicates that the controller is identifying the drive.

Table 5-24 describes the x220 front panel controls.

Table 5-24 x220 front panel control information


Control Characteristic Description

Power on / off Recessed with Power If the server is off, pressing this button causes the server to power up and
button LED start loading. When the server is on, pressing this button causes a graceful
shutdown of the individual server so it is safe to remove. This process
includes shutting down the operating system (if possible) and removing
power from the server. If an operating system is running, you might have to
hold the button for approximately 4 seconds to initiate the shutdown. This
button must be protected from accidental activation. Group it with the Power
LED.

NMI  Recessed; it can be accessed only by using a small pointed object.  Causes an NMI for debugging purposes.

Power LED
The status of the power LED of the x220 shows the power status of the compute node. It also
indicates the discovery status of the node by the Chassis Management Module. The power
LED states are listed in Table 5-25.

Table 5-25 The power LED states of the x220 compute node
Power LED state Status of compute node

Off No power to compute node

On; fast flash mode  Compute node has power; Chassis Management Module is in discovery mode (handshake)

On; slow flash mode  Compute node has power; power is in standby mode

On; solid  Compute node has power; compute node is operational

Exception: The power button does not operate when the power LED is in fast flash mode.

Light path diagnostic procedures


For quick problem determination when you are physically located at the server, the x220 offers
the following three-step guided path:
1. The Fault LED on the front panel.
2. The light path diagnostics panel, which is shown in Figure 5-10 on page 214.
3. LEDs next to key components on the system board.



The x220 light path diagnostics panel is visible when you remove the server from the chassis.
The panel is on the upper right of the compute node as shown in Figure 5-10.

Figure 5-10 Location of x220 light path diagnostics panel

To illuminate the light path diagnostics LEDs, power off the compute node, slide it out of the
chassis, and press the power button. The power button doubles as the light path diagnostics
remind button when the server is removed from the chassis.

The meaning of each LED in the light path diagnostics panel is listed in Table 5-26.

Table 5-26 Light path panel LED definitions


LED Color Meaning

LP Green The light path diagnostics panel is operational

S BRD Yellow System board error is detected

MIS Yellow A mismatch has occurred between the processors, DIMMs, or HDDs within the
configuration as reported by POST

NMI Yellow An NMI has occurred

TEMP Yellow An over-temperature condition has occurred that was critical enough to shut
down the server

MEM Yellow A memory fault has occurred. The corresponding DIMM error LEDs on the
system board should also be lit.

ADJ Yellow A fault is detected in the adjacent expansion unit (if installed)

Integrated Management Module II


Each x220 compute node has an onboard IMM2 and uses UEFI in place of the older BIOS
interface.

The IMM2 provides the following major features as standard:


򐂰 IPMI v2.0-compliance
򐂰 Remote configuration of IMM2 and UEFI settings without the need to power on the server

򐂰 Remote access to system fan, voltage, and temperature values
򐂰 Remote IMM and UEFI update
򐂰 UEFI update when the server is powered off
򐂰 Remote console by way of a serial over LAN
򐂰 Remote access to the system event log
򐂰 Predictive failure analysis and integrated alerting features; for example, by using Simple
Network Management Protocol (SNMP)
򐂰 Remote presence, including remote control of the server by using a Java or ActiveX client
򐂰 Operating system failure window (blue screen) capture and display through the web
interface
򐂰 Virtual media that allows the attachment of a diskette drive, CD/DVD drive, USB flash
drive, or disk image to a server

Remember: Unlike IBM BladeCenter, the assigned TCP/IP address of the IMM is available
on the local network. You can use this address to remotely manage the x220 by connecting
directly to the IMM independent of the IBM Flex System Manager or Chassis Management
Module.

For more information about the IMM, see 3.4.1, “Integrated Management Module II” on
page 47.

5.2.13 Operating system support


The following operating systems are supported by the x220:
򐂰 Microsoft Windows Server 2008 HPC Edition
򐂰 Microsoft Windows Server 2008 R2
򐂰 Microsoft Windows Server 2008, Datacenter x64 Edition
򐂰 Microsoft Windows Server 2008, Enterprise x64 Edition
򐂰 Microsoft Windows Server 2008, Standard x64 Edition
򐂰 Microsoft Windows Server 2008, Web x64 Edition
򐂰 Microsoft Windows Server 2012
򐂰 Red Hat Enterprise Linux 5 Server with Xen x64 Edition
򐂰 Red Hat Enterprise Linux 5 Server x64 Edition
򐂰 Red Hat Enterprise Linux 6 Server x64 Edition
򐂰 SUSE Linux Enterprise Server 10 for AMD64/EM64T
򐂰 SUSE Linux Enterprise Server 11 for AMD64/EM64T
򐂰 SUSE Linux Enterprise Server 11 with Xen for AMD64/EM64T
򐂰 VMware ESX 4.1
򐂰 VMware ESXi 4.1
򐂰 VMware vSphere 5
򐂰 VMware vSphere 5.1

ServeRAID C105: There is no native (in-box) driver for the ServeRAID C105 controller for
Windows and Linux; the drivers must be downloaded separately. The ServeRAID C105
controller does not support VMware, Hyper-V, Xen, or solid-state drives (SSDs).

For more information about the latest list of supported operating systems, see the IBM
ServerProven page at this website:
http://ibm.com/systems/info/x86servers/serverproven/compat/us/nos/flexmatrix.shtml



5.3 IBM Flex System x222 Compute Node
The IBM Flex System x222 Compute Node is a high-density dual-server offering that is
designed for virtualization, dense cloud deployments, and hosted clients. The x222 contains
two independent servers in one mechanical package, which allows up to 28 servers to be
housed in a single 10U Flex System Enterprise Chassis.

Compute Node versus Server: In this section, the term Compute Node refers to the entire
x222. The term server refers to each independent half of the x222.

This section includes the following topics:


򐂰 5.3.1, “Introduction” on page 216
򐂰 5.3.2, “Models” on page 219
򐂰 5.3.3, “Chassis support” on page 219
򐂰 5.3.4, “System architecture” on page 220
򐂰 5.3.5, “Processor options” on page 222
򐂰 5.3.6, “Memory options” on page 223
򐂰 5.3.7, “Supported internal drives” on page 225
򐂰 5.3.8, “Expansion Node support” on page 226
򐂰 5.3.9, “Embedded 10Gb Virtual Fabric adapter” on page 226
򐂰 5.3.10, “Mid-mezzanine I/O adapters” on page 228
򐂰 5.3.11, “Integrated virtualization” on page 231
򐂰 5.3.12, “Systems management” on page 232
򐂰 5.3.13, “Operating system support” on page 234

5.3.1 Introduction
The IBM Flex System x222 Compute Node is a high-density offering that is designed to
maximize the computing power that is available in the data center. With a balance between
cost and system features, the x222 is an ideal platform for dense workloads, such as
virtualization. This section describes the key features of the server.

Figure 5-11 shows the front of the x222 Compute Node showing the location of the controls,
LEDs, and connectors.

Figure 5-11 The IBM Flex System x222 Compute Node (front view, showing the upper and lower servers, USB port, console breakout cable port, 2.5” simple-swap HDD bay (or 2x 1.8” hot-swap bays), light path diagnostics panel, and power button/LED)

Figure 5-12 shows the internal layout and major components of the x222.

Figure 5-12 Exploded view of the x222, showing the major components (upper and lower system-board assemblies, air baffles, Fabric Connector, DIMMs, I/O expansion adapter, microprocessors, heat sinks, simple-swap hard disk drive, drive bay filler, and solid-state drive mounting sleeve)

Table 5-27 lists the features of the x222.

Table 5-27 IBM Flex System x222 Compute Node specifications


Components Specification

Form factor Standard Flex System form factor with two independent servers.

Chassis support IBM Flex System Enterprise Chassis.

Processor Up to four processors in a standard (half-width) Flex System form factor.


Each separate server: Up to two Intel Xeon Processor E5-2400 product family CPUs with
eight-core (up to 2.3 GHz), six-core (up to 2.4 GHz), or quad-core (up to 2.2 GHz), one QPI link
that runs at 8.0 GTps, L3 cache up to 20 MB, and memory speeds up to 1600 MHz.

The two separate servers are independent and cannot be combined to form a single,
four-socket system.

Chipset Intel C600 series


Memory Up to 24 DIMM sockets in a standard (half-width) Flex System form factor.


Each separate server: Up to 12 DIMM sockets (six DIMMs per processor) by using Low Profile
(LP) DDR3 DIMMs. RDIMMs and LRDIMMs are supported. 1.5 V and low-voltage 1.35 V
DIMMs are supported. There is support for up to 1600 MHz memory speed, depending on the
processor. There are three memory channels per processor (two DIMMs per channel).
Supports two DIMMs per channel operating at 1600 MHz (two DPC @ 1600MHz) with single
and dual rank RDIMMs.

Memory maximums  Each separate server:
With LRDIMMs: Up to 384 GB with 12x 32 GB LRDIMMs and two processors
With RDIMMs: Up to 192 GB with 12x 16 GB RDIMMs and two processors

Memory protection ECC, Chipkill, optional memory mirroring, and memory rank sparing.

Disk drive bays Each separate server: One 2.5" simple-swap SATA drive bay supporting SATA and SSD drives.
Optional SSD mounting kit to convert a 2.5” simple-swap bay into two 1.8” hot-swap SSD bays.

Maximum internal storage (raw)  Each separate server: Up to 1 TB using a 2.5” SATA simple-swap drive, or up to 512 GB using two 1.8” SSDs and the SSD Expansion Kit.

RAID support None

Network interfaces Each separate server: Two 10 Gb Ethernet ports with Embedded 10Gb Virtual Fabric Ethernet
LAN on motherboard (LOM) controller; Emulex BE3 based. Routes to chassis bays 1 and 2
through a Fabric Connector to the midplane. Features on Demand upgrade to FCoE and iSCSI.
Usage of both ports on both servers requires two scalable Ethernet switches in the chassis,
each upgraded to enable 28 internal switch ports.

PCI Expansion slots Each separate server: One connector for an I/O adapter; PCI Express 3.0 x16 interface.
Supports special mid-mezzanine I/O cards that are shared by both servers. Only one card is
needed to connect both servers.

Ports Each separate server: One external, two internal USB ports for an embedded hypervisor. A
console breakout cable port on the front of the server provides local KVM and serial ports (one
cable is provided as standard with chassis; more cables optional).

Systems Each separate server: UEFI, IBM Integrated Management Module II (IMM2) with Renesas
management SH7757 controller, Predictive Failure Analysis, light path diagnostics panel, automatic server
restart, and remote presence. Support for IBM Flex System Manager, IBM Systems Director,
and IBM ServerGuide.

Security features Power-on password and admin password, Trusted Platform Module (TPM) 1.2.

Video Each separate server: Matrox G200eR2 video core with 16 MB video memory that is integrated
into IMM2. The maximum resolution is 1600x1200 at 75 Hz with 16 M colors.

Limited warranty Three-year, customer-replaceable unit and onsite limited warranty with 9x5/NBD.

Operating systems supported  Microsoft Windows Server 2008 R2 and 2012, Red Hat Enterprise Linux 5 and 6, SUSE Linux Enterprise Server 10 and 11, and VMware ESXi 4.1, 5.0, and 5.1. For more information, see 5.2.13, “Operating system support” on page 215.

Service and support Optional country-specific service upgrades are available through IBM ServicePacs: 6, 4, or
2-hour response time, 8-hour fix time, 1-year or 2-year warranty extension, remote technical
support for IBM hardware and selected IBM and OEM software.

Dimensions Width: 217 mm (8.6 in.), height: 56 mm (2.2 in.), depth: 492 mm (19.4 in.)

Weight Maximum configuration: 8.2 kg (18 lb)

5.3.2 Models
The current x222 models are shown in Table 5-28. All models include 2x 8 GB of memory
(one 8 GB DIMM per server).

Table 5-28 Standard models


Model  Intel Xeon processors (2 max per server, 4 total)  Disk adapter (1 per server)  Disk bays (1 per server)  Disks  Networking (2 per server)  I/O slots (used/max)

7916-A2x  2x E5-2418L 4C 2.0GHz 10MB 1333MHz 50W  SATA (non-RAID)  2x 2.5” SS  Open  4x 10 GbE  0/1

7916-B2x  2x E5-2430L 6C 2.0GHz 15MB 1333MHz 60W  SATA (non-RAID)  2x 2.5” SS  Open  4x 10 GbE  0/1

7916-C2x  2x E5-2450L 8C 1.8GHz 20MB 1600MHz 70W  SATA (non-RAID)  2x 2.5” SS  Open  4x 10 GbE  0/1

7916-D2x  2x E5-2403 4C 1.8GHz 10MB 1066MHz 80W  SATA (non-RAID)  2x 2.5” SS  Open  4x 10 GbE  0/1

7916-F2x  2x E5-2407 4C 2.2GHz 10MB 1066MHz 80W  SATA (non-RAID)  2x 2.5” SS  Open  4x 10 GbE  0/1

7916-G2x  2x E5-2420 6C 1.9GHz 15MB 1333MHz 95W  SATA (non-RAID)  2x 2.5” SS  Open  4x 10 GbE  0/1

7916-H2x  2x E5-2430 6C 2.2GHz 15MB 1333MHz 95W  SATA (non-RAID)  2x 2.5” SS  Open  4x 10 GbE  0/1

7916-H6x  2x E5-2430 6C 2.2GHz 15MB 1333MHz 95W  SATA (non-RAID)  2x 2.5” SS  Open  4x 10 GbE + 2x InfiniBand^a  1/1

7916-J2x  2x E5-2440 6C 2.4GHz 15MB 1333MHz 95W  SATA (non-RAID)  2x 2.5” SS  Open  4x 10 GbE  0/1

7916-M2x  2x E5-2450 8C 2.1GHz 20MB 1600MHz 95W  SATA (non-RAID)  2x 2.5” SS  Open  4x 10 GbE  0/1

7916-N2x  2x E5-2470 8C 2.3GHz 20MB 1600MHz 95W  SATA (non-RAID)  2x 2.5” SS  Open  4x 10 GbE  0/1
a. Model H6x includes the IBM Flex System IB6132D 2-port FDR InfiniBand Adapter.

5.3.3 Chassis support


The x222 type 7916 is supported in the IBM Flex System Enterprise Chassis as shown in
Table 5-29.

Table 5-29 x222 chassis support


Server  BladeCenter chassis (all)  IBM Flex System Enterprise Chassis

x222  No  Yes



Up to 14 x222 Compute Nodes (up to 28x separate servers) can be installed in the chassis in
10U of rack space. The actual number of x222 systems that can be powered on in a chassis
depends on the following factors:
򐂰 The TDP power rating for the processors that are installed in the x222.
򐂰 The number of power supplies installed in the chassis.
򐂰 The capacity of the power supplies installed in the chassis (2100 W or 2500 W).
򐂰 The power redundancy policy used in the chassis (N+1 or N+N).

Table 4-11 on page 93 provides guidelines about the number of x222 systems that can be
powered on in the IBM Flex System Enterprise Chassis, based on the type and number of
power supplies that are installed.

5.3.4 System architecture


The x222 Compute Node contains two individual and independent servers. The servers share
power and network connections to the IBM Flex System Enterprise Chassis, but they operate
as two separate servers. It is not possible to combine the servers to form a single four-socket
server.

Figure 5-13 shows the x222 open and the two separate servers, upper and lower.

Figure 5-13 The x222 open, showing the two servers on the top and bottom halves of the node, the power and signal interconnect, the I/O adapter connector for the upper server, I/O connector 2 on the shared adapter (InfiniBand or FC) with connections to each server, and I/O connector 1, which is shared by both servers for 10 GbE (two ports for each server)

Each server within the IBM Flex System x222 Compute Node has the following system
architecture features as standard:
򐂰 Two 1356-pin, Socket B2 (LGA-1356) processor sockets
򐂰 An Intel C600 series Platform Controller Hub
򐂰 Three memory channels per socket

򐂰 Up to two DIMMs per memory channel
򐂰 12 DDR3 DIMM sockets
򐂰 Support for RDIMMs and LRDIMMs
򐂰 One integrated 10 Gb Ethernet controller (10 GbE LOM in Figure 5-14)
򐂰 One IMM2
򐂰 One connector for attaching to a mid-mezzanine I/O adapter
򐂰 One SATA connector for one 2.5” simple-swap SATA HDD or SSD (or two 1.8” SSDs with
the optional 1.8” enablement kit)
򐂰 Two internal and one external USB connector

Figure 5-14 shows the system architecture of the x222 system.

Figure 5-14 IBM Flex System x222 Compute Node block diagram (each server has two Xeon processors linked by QPI at up to 8 GT/s, three memory channels per processor with two DIMMs per channel, an Intel C600 PCH, an IMM2, a 10 GbE LOM, and a PCIe 3.0 x16 connection to the shared mid-mezzanine I/O adapter; both servers share the Fabric Connector and a management switch to the chassis midplane)



5.3.5 Processor options
Each server within the IBM Flex System x222 Compute Node features the Intel Xeon
E5-2400 series processors. The Xeon E5-2400 series processor has models with either four,
six, or eight cores per processor with up to 16 threads per socket.

The processors include the following features:


򐂰 Up to 20 MB of shared L3 cache
򐂰 Hyper-Threading
򐂰 Turbo Boost Technology 2.0 (depending on processor model)
򐂰 One QPI link that runs at up to 8 GT/s
򐂰 One integrated memory controller
򐂰 Three memory channels that support up to two DIMMs each

The x222 supports the processor options that are listed in Table 5-30. The x222 supports up
to four Intel Xeon E5-2400 processors, one or two in each independent server. All four
processors that are used in an x222 must be identical. The table also shows which server
models have each processor as standard. If no corresponding model for a particular
processor is listed, the processor is available only through the configure-to-order (CTO)
process.

Important: It is not possible to combine the servers to form a single four-socket server.
Each of the two-socket servers are independent from each other with the exception of
shared power, a shared dual-ASIC I/O adapter, and a shared fabric connector to the
midplane.

Table 5-30 Supported processors for the x222


Part number Feature codea Intel Xeon processor description Models
where used

Intel Xeon processors

00D1266 A35X / A370 Intel Xeon E5-2403 4C 1.8GHz 10MB 1066MHz 80W D2x

00D1265 A35W / A36Z Intel Xeon E5-2407 4C 2.2GHz 10MB 1066MHz 80W F2x

00D1264 A35V / A36Y Intel Xeon E5-2420 6C 1.9GHz 15MB 1333MHz 95W G2x

00D1263 A35U / A36X Intel Xeon E5-2430 6C 2.2GHz 15MB 1333MHz 95W H2x, H6x

00D1262 A35T / A36W Intel Xeon E5-2440 6C 2.4GHz 15MB 1333MHz 95W J2x

00D1261 A35S / A36V Intel Xeon E5-2450 8C 2.1GHz 20MB 1600MHz 95W M2x

00D1260 A35R / A36U Intel Xeon E5-2470 8C 2.3GHz 20MB 1600MHz 95W N2x

Intel Xeon processors - Low power

00D1269 A360 / A373 Intel Xeon E5-2418L 4C 2.0GHz 10MB 1333MHz 50W A2x

00D1271 A362 / A375 Intel Xeon E5-2428L 6C 1.8GHz 15MB 1333MHz 60W -

00D1268 A35Z / A372 Intel Xeon E5-2430L 6C 2.0GHz 15MB 1333MHz 60W B2x

00D1270 A361 / A374 Intel Xeon E5-2448L 8C 1.8GHz 20MB 1333MHz 70W -

00D1267 A35Y / A371 Intel Xeon E5-2450L 8C 1.8GHz 20MB 1600MHz 70W C2x
a. The first feature code is for processor 1 and the second feature code is for processor 2.

5.3.6 Memory options
IBM DDR3 memory is compatibility tested and tuned for optimal performance and throughput.
IBM memory specifications are integrated into the light path diagnostics panel for immediate
system performance feedback and optimum system uptime. From a service and support
standpoint, IBM memory automatically assumes the IBM system warranty, and IBM provides
service and support worldwide.

The servers in the x222 support Low Profile (LP) DDR3 memory RDIMMs and LRDIMMs.
UDIMMs are not supported. Each of the two servers in the x222 has 12 DIMM sockets. Each
server supports up to six DIMMs when one processor is installed and up to 12 DIMMs when
two processors are installed. Each processor has three memory channels, and there are two
DIMMs per channel.

The following rules apply when you select the memory configuration:
򐂰 Mixing 1.5 V and 1.35 V DIMMs in the same server is supported. In such a case, all
DIMMs operate at 1.5 V.
򐂰 The maximum number of ranks that are supported per channel is eight.
򐂰 The maximum quantity of DIMMs that can be installed in each server in the x222 depends
on the number of processors, as shown in the “Max. qty supported” row in Table 5-31 on
page 224 and Table 5-32 on page 224.
򐂰 All DIMMs in all processor memory channels operate at the same speed, which is
determined as the lowest value of the following situations:
– The memory speed that is supported by a specific processor.
– The lowest maximum operating speed for the selected memory configuration that
depends on the rated speed, as shown under the “Max. operating speed” section in
Table 5-31 on page 224.
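The "lowest value" rule can be expressed as a short sketch. The configuration maximums below are taken from the RDIMM columns of Table 5-31; the function itself is an illustration, not IBM-supplied logic:

```python
# Sketch of the memory speed rule described above: all DIMMs run at the
# lower of the processor's supported memory speed and the maximum
# operating speed for the installed DIMM configuration (Table 5-31).

# (rated_speed_mhz, operating_voltage) -> max operating speed in MHz,
# for RDIMMs at either one or two DIMMs per channel.
RDIMM_MAX_OPERATING_MHZ = {
    (1333, "1.35V"): 1333,
    (1333, "1.5V"): 1333,
    (1600, "1.5V"): 1600,
}

def effective_memory_speed(cpu_max_mhz: int, rated_mhz: int, voltage: str) -> int:
    """All DIMMs operate at the lowest applicable speed."""
    config_max = RDIMM_MAX_OPERATING_MHZ[(rated_mhz, voltage)]
    return min(cpu_max_mhz, config_max)

# An E5-2430 (1333 MHz memory) with 1600 MHz RDIMMs runs them at 1333 MHz:
print(effective_memory_speed(1333, 1600, "1.5V"))  # 1333
# An E5-2450 (1600 MHz memory) with the same DIMMs runs them at 1600 MHz:
print(effective_memory_speed(1600, 1600, "1.5V"))  # 1600
```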

Table 5-31 on page 224 and Table 5-32 on page 224 show the maximum memory speeds
that are achievable based on the installed DIMMs and the number of DIMMs per channel.
Table 5-31 on page 224 and Table 5-32 on page 224 also show the maximum memory
capacity at any speed that is supported by the DIMM and the maximum memory capacity at
the rated DIMM speed. In Table 5-31 on page 224 and Table 5-32 on page 224, cells that are
highlighted with a gray background indicate when the specific combination of DIMM voltage
and number of DIMMs per channel still allows the DIMMs to operate at rated speed.

Important: The quantities and capacities are for one server within the x222 (that is, half of
the x222). The maximums for the entire x222 (both servers) is twice these numbers.



Table 5-31 Maximum memory speeds: RDIMMs
Spec RDIMMs

Rank Single rank Dual rank

Part numbers 49Y1406 (4 GB) 49Y1559 (4 GB) 49Y1407 (4 GB) 90Y3178 (4 GB)
49Y1397 (8 GB) 90Y3109 (8 GB)
49Y1563 (16 GB) 00D4968 (16 GB)

Rated speed 1333 MHz 1600 MHz 1333 MHz 1600 MHz

Rated voltage 1.35 V 1.5 V 1.35 V 1.5 V

Operating voltage 1.35 V 1.5 V 1.5 V 1.35 V 1.5 V 1.5 V

Max quantitya 12 12 12 12 12 12

Largest DIMM 4 GB 4 GB 4 GB 16 GB 16 GB 16 GB

Max memory capacity  48 GB  48 GB  48 GB  192 GB  192 GB  192 GB

Max memory at rated speed  48 GB  48 GB  48 GB  192 GB  192 GB  192 GB

Maximum operating speed (MHz)

1 DIMM per channel 1333 MHz 1333 MHz 1600 MHz 1333 MHz 1333 MHz 1600 MHz

2 DIMMs per channel 1333 MHz 1333 MHz 1600 MHz 1333 MHz 1333 MHz 1600 MHz
a. The maximum quantity that is supported is shown for two installed processors. When one processor is installed,
the maximum quantity that is supported is half of that shown.

Table 5-32 Maximum memory speeds - LRDIMMs


Spec LRDIMMs

Rank Quad rank

Part numbers 90Y3105 (32 GB)

Rated speed 1333 MHz

Rated voltage 1.35 V

Operating voltage 1.35 V 1.5 V

Max quantitya 12 12

Largest DIMM 32 GB 32 GB

Max memory capacity 384 GB 384 GB

Max memory at rated speed N/A 192 GB

Maximum operating speed (MHz)

1 DIMM per channel 1066 MHz 1333 MHz

2 DIMMs per channel 1066 MHz 1066 MHz


a. The maximum quantity that is supported is shown for two installed processors. When one
processor is installed, the maximum quantity that is supported is half of that shown.

The following memory protection technologies are supported:
򐂰 ECC
򐂰 Chipkill (for x4-based memory DIMMs; look for “x4” in the DIMM description)
򐂰 Memory mirroring
򐂰 Memory sparing

If memory mirroring is used, the DIMMs must be installed in pairs (minimum of one pair per
processor) and both DIMMs in a pair must be identical in type and size.

If memory rank sparing is used, a minimum of one quad-rank DIMM or two single-rank or
dual-rank DIMMs must be installed per populated channel (the DIMMs do not need to be
identical). In rank sparing mode, one rank of a DIMM in each populated channel is reserved
as spare memory. The size of a rank varies depending on the DIMMs that are installed.
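As a worked example of the rank sparing arithmetic (illustrative, with assumed DIMM sizes): with dual-rank 8 GB DIMMs, each rank is 4 GB, and one rank per populated channel is held in reserve:

```python
# Illustrative arithmetic for memory rank sparing: one rank per
# populated channel is reserved as spare, so usable capacity equals
# the installed capacity minus one rank's worth per channel.

def usable_with_rank_sparing(dimm_gb: int, ranks_per_dimm: int,
                             dimms_per_channel: int, channels: int) -> int:
    rank_gb = dimm_gb // ranks_per_dimm
    installed = dimm_gb * dimms_per_channel * channels
    return installed - rank_gb * channels  # one spare rank per channel

# Two dual-rank 8 GB RDIMMs on each of three channels (one processor):
# 48 GB installed, one 4 GB rank spared per channel -> 36 GB usable.
print(usable_with_rank_sparing(8, 2, 2, 3))  # 36
```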

Table 5-33 lists the memory options that are available for the x222. DIMMs can be installed
one at a time in each server, but for performance reasons, install them in sets of three (one for
each of the three memory channels).

Table 5-33 Memory options for the x222


Part number  Feature code  Description  Models where used

Registered DIMMs (RDIMMs) - 1333 MHz

49Y1406 8941 4GB (1x4GB, 1Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM -

49Y1407 8942 4GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM -

49Y1397 8923 8GB (1x8GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM -

49Y1563 A1QT 16GB (1x16GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP RDIMM -

Registered DIMMs (RDIMMs) - 1600 MHz

49Y1559 A28Z 4GB (1x4GB, 1Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM -

90Y3178 A24L 4GB (1x4GB, 2Rx8, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM -

90Y3109 A292 8GB (1x8GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM All

00D4968 A2U5 16GB (1x16GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600MHz LP RDIMM -

Load-reduced DIMMs (LRDIMMs)

90Y3105 A291 32GB (1x32GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333MHz LP LRDIMM -

5.3.7 Supported internal drives


Each of the two servers in the x222 has one 2.5-inch simple-swap drive bay that is accessible
from the front of the unit (as shown in Figure 5-11 on page 216). Each server offers a 6 Gbps
SATA controller that is implemented by the Intel C600 series chipset.

Each 2.5-inch drive bay supports a SATA HDD or SATA SSD. The 2.5-inch drive bay can be
replaced with two 1.8-inch hot-swap bays for SSDs by first installing the Flex System SSD
Expansion Kit in to the 2.5-inch bay.

RAID functionality is not provided by the chipset and, if required, must be implemented by the
operating system.



Table 5-34 lists the supported drives in the x222.

Table 5-34 Supported drives


Part number  Feature code  Description  Maximum supported per server^a

1.8-inch drives and expansion kit

00W0366  A3HV  IBM Flex System SSD Expansion Kit (used to convert the 2.5-inch bay into two 1.8-inch bays)  1

00W1120 A3HQ IBM 100GB SATA 1.8" MLC Enterprise SSD 2

49Y6119 A3AN IBM 200GB SATA 1.8" MLC Enterprise SSD 2

2.5-inch drives

90Y8974 A369 IBM 500GB 7.2K 6Gbps SATA 2.5'' G2 SS HDD 1

90Y8979 A36A IBM 1TB 7.2K 6Gbps SATA 2.5'' G2 SS HDD 1

90Y8984 A36B IBM 128GB SATA 2.5” MLC Enterprise Value SSD for Flex System x222 1

90Y8989 A36C IBM 256GB SATA 2.5” MLC Enterprise Value SSD for Flex System x222 1

90Y8994 A36D IBM 100GB SATA 2.5” MLC Enterprise SSD for Flex System x222 1
a. The quantities that are listed here are for each of the separate servers within the x222 node.

5.3.8 Expansion Node support


The x222 does not support the IBM Flex System Storage® Expansion Node or the IBM Flex
System PCIe Expansion Node.

5.3.9 Embedded 10Gb Virtual Fabric adapter


Each server in the x222 Compute Node includes an Embedded 10Gb Virtual Fabric adapter
(also known as LAN on Motherboard or LOM) built into the system board. The x222 has one
Fabric Connector (which is physically on the lower server), and the Ethernet connections
from both Embedded 10 Gb VFAs are routed through it.

Figure 5-15 on page 227 shows the internal connections between the Embedded 10Gb VFAs
and the switches in chassis bays 1 and 2.

226 IBM PureFlex System and IBM Flex System Products and Technology
Figure 5-15 Embedded 10 Gb VFA connectivity to the switches (both Embedded 10 GbE
controllers route through the Fabric Connector to the Ethernet switches in chassis bays 1
and 2; upper server traffic routes to the Upgrade 1 switch ports, and lower server traffic
routes to the base switch ports)

The following components are shown in Figure 5-15:


򐂰 The blue lines show that the two Ethernet ports in the upper server route to the switches in
bay 1 and bay 2. These connections require that the switches have Upgrade 1 enabled to
activate the second bank of internal ports, ports 15 - 28.
򐂰 The red lines show that the two Ethernet ports in the lower server also route to the switches
in bay 1 and bay 2. These connections both go to the base ports of the switches, ports 1 - 14.

Switch upgrade 1 required: You must have Upgrade 1 enabled in the two switches.
Without this feature upgrade, the upper server does not have any Ethernet connectivity.

For more information about supported Ethernet switches, see 4.11.4, “Switch to adapter
compatibility” on page 115.

The Embedded 10Gb VFA is based on the Emulex BladeEngine 3 (BE3), which is a
single-chip, dual-port 10 Gigabit Ethernet (10GbE) controller. The Embedded 10Gb
VFA includes the following features:
򐂰 PCI-Express Gen2 x8 host bus interface
򐂰 Supports multiple virtual NIC (vNIC) functions
򐂰 TCP/IP Offload Engine (TOE enabled)
򐂰 SR-IOV capable
򐂰 RDMA over TCP/IP capable
򐂰 iSCSI and FCoE upgrade offering through FoD

Table 5-35 on page 228 lists the ordering information for the IBM Flex System Embedded
10Gb Virtual Fabric Upgrade, which enables the iSCSI and FCoE support on the Embedded
10Gb Virtual Fabric adapter.

Two licenses required: To enable the FCoE/iSCSI upgrade for both servers in the x222
Compute Node, two licenses are required.

Chapter 5. Compute nodes 227


Table 5-35 Feature on Demand upgrade for FCoE and iSCSI support

Part number   Feature code   Description                                          Maximum supporteda
90Y9310       A2TD           IBM Virtual Fabric Advanced Software Upgrade (LOM)   1 per server
                                                                                  (2 per x222 Compute Node)
a. To enable the FCoE/iSCSI upgrade for both servers in the x222 Compute Node, two licenses are required.

5.3.10 Mid-mezzanine I/O adapters


In addition to the Embedded 10GbE VFAs on each server, the x222 supports one I/O adapter
that is shared between the two servers and is routed to the I/O Modules that are installed in
bays 3 and 4 of the chassis.

The shared I/O adapter is mounted in the lower server, as shown in Figure 5-16. The adapter
has two host interfaces, one on either side, for connecting to the servers. Each host interface
is PCI Express 3.0 x16.

Figure 5-16 Location of the I/O adapter (the I/O expansion adapter is installed in the lower
server; one connector on the top of the adapter attaches to the upper server, one connector
on the underside attaches to the lower server, and the adapter connects to the midplane
interface)

Table 5-36 lists the supported adapters. Adapters are shared between the two servers with
half of the ports routing to each server.

Table 5-36 Network adapters

Part number   Feature code   Description                                             Ports   Maximum supporteda
90Y3486       A365           IBM Flex System IB6132D 2-port FDR InfiniBand adapter   2       1
95Y2379       A3HU           IBM Flex System FC5024D 4-port 16Gb FC adapter          4       1

a. One adapter is supported per x222 Compute Node. The adapter is shared between the two servers within the
   x222.

228 IBM PureFlex System and IBM Flex System Products and Technology
A compatible I/O module must be installed in the corresponding I/O bays in the chassis, as
shown in Table 5-37.

Table 5-37 Adapter to I/O bay correspondence

Upper or       Adapter and port                With FC5024D   With IB6132D   Corresponding I/O module
lower server                                   4-port         2-port         bay in the chassis

Upper server   Embedded 10 GbE VFA, Port 1                                   Module bay 1 (Upgrade 1a)
Upper server   Embedded 10 GbE VFA, Port 2                                   Module bay 2 (Upgrade 1a)
Lower server   Embedded 10 GbE VFA, Port 1                                   Module bay 1 (Base)
Lower server   Embedded 10 GbE VFA, Port 2                                   Module bay 2 (Base)
Upper server   I/O expansion adapter           Port 1         Not used       Module bay 3
Upper server   I/O expansion adapter           Port 2         Port 1         Module bay 4
Lower server   I/O expansion adapter           Port 1         Port 1         Module bay 3
Lower server   I/O expansion adapter           Port 2         Not used       Module bay 4

a. Requires a scalable switch with 28 or more internal ports enabled. For the EN2092, EN4093, EN4093R, and
   SI4093 switches, this means Upgrade 1 is required. For the CN4093, Upgrade 1 or Upgrade 2 is required.

For more information about the supported switches, see 4.11.4, “Switch to adapter
compatibility” on page 115.

The FC5024D is a four-port adapter where two ports are routed to each server. Port 1 of each
server is connected to the switch in bay 3 and Port 2 of each server is connected to the switch
in bay 4. To make full use of all four ports, you must install a supported Fibre Channel switch
in both switch bays.
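The port-to-bay routing described above can be captured as a simple lookup table. This is an illustrative sketch only; the dictionary layout and function name are my own, not part of any IBM tooling:

```python
# Illustrative mapping of FC5024D adapter ports to chassis switch bays for
# the x222: two of the four adapter ports route to each server, with port 1
# of each server going to bay 3 and port 2 going to bay 4.
FC5024D_PORT_TO_BAY = {
    ("upper", 1): 3,
    ("upper", 2): 4,
    ("lower", 1): 3,
    ("lower", 2): 4,
}

def switch_bay(server: str, port: int) -> int:
    """Return the chassis I/O module bay that a server's FC port reaches."""
    return FC5024D_PORT_TO_BAY[(server, port)]
```

Because every server port fans out across both bays, installing a supported Fibre Channel switch in only one of the two bays leaves half the adapter ports unused.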



Figure 5-17 shows how the FC5024D 4-port 16 Gb FC adapter and the Embedded 10Gb
VFAs are connected to the Ethernet and Fibre Channel switches installed in the chassis.

Figure 5-17 Logical layout of the interconnects: Ethernet and Fibre Channel (Ethernet traffic
from the Embedded 10 GbE VFAs of both servers routes through the Fabric Connector to the
Ethernet switches in bays 1 and 2; Fibre Channel traffic from the FC5024D 4-port 16 Gb FC
adapter routes to the Fibre Channel switches in bays 3 and 4)

The FC5024D 4-port 16Gb FC Adapter is supported by the following switches:


򐂰 IBM Flex System FC5022 16Gb SAN Scalable Switch
򐂰 IBM Flex System FC5022 24-port 16Gb SAN Scalable Switch
򐂰 IBM Flex System FC5022 24-port 16Gb ESB SAN Scalable Switch

Fibre Channel switch ports: The Fibre Channel switches in bays 3 and 4 use Ports on
Demand to enable both internal and external ports. You should ensure that enough ports
are licensed to activate all internal ports and all needed external ports. For more
information, see 4.11.11, “IBM Flex System FC5022 16Gb SAN Scalable Switch” on
page 148.

For more information about this adapter, see 5.11.15, “IBM Flex System FC5024D 4-port
16Gb FC Adapter” on page 394.

The IB6132D is a two-port adapter and has one port that is routed to each server. One port of
the adapter connects to the InfiniBand switch in switch bay 3 and the other adapter port
connects to the InfiniBand switch in switch bay 4 in the chassis. The IB6132D requires that
two InfiniBand switches be installed in the chassis.

Figure 5-18 shows how the IB6132D 2-port FDR InfiniBand adapter and the four ports of the
two Embedded 10 GbE VFAs are connected to the Ethernet and InfiniBand switches that are
installed in the chassis.

Figure 5-18 Logical layout of the interconnects: Ethernet and InfiniBand (Ethernet traffic from
the Embedded 10 GbE VFAs of both servers routes through the Fabric Connector to the
Ethernet switches in bays 1 and 2; InfiniBand traffic from the IB6132D 2-port FDR InfiniBand
adapter routes to the InfiniBand switches in bays 3 and 4)

The IB6132D 2-port FDR InfiniBand Adapter is supported by the IBM Flex System IB6131
InfiniBand Switch. To use the adapter at FDR speeds, the switch needs the FDR upgrade.
For more information, see 4.11.14, “IBM Flex System IB6131 InfiniBand Switch” on page 160.

For more information about this adapter, see 5.11.20, “IBM Flex System IB6132D 2-port FDR
InfiniBand Adapter” on page 403.

5.3.11 Integrated virtualization


Each server in the x222 supports the ESXi hypervisor on a USB memory key through two
internal USB ports. The supported USB memory keys are listed in Table 5-38.

Table 5-38 Virtualization options

Part       Feature   Description                                            Maximum
number     code                                                             supported

41Y8298    A2G0      IBM Blank USB Memory Key for VMware ESXi Downloadsa    2
41Y8307    A383      IBM USB Memory Key for VMware ESXi 5.0 Update 1        1
41Y8311    A2R3      IBM USB Memory Key for VMware ESXi 5.1                 1



a. The Blank USB Memory Key requires the download of the VMware vSphere (ESXi) Hypervisor
with IBM Customization image, which is available at this website:
http://ibm.com/systems/x/os/vmware/

There are two types of USB keys: preload keys and blank keys. Blank keys allow you to
download an IBM customized version of ESXi and load it onto the key. Each server supports
the installation of one or two keys, but only in the following combinations:
򐂰 One preload key (a key that is preloaded at the factory)
򐂰 One blank key (a key to which you download the customized image)
򐂰 One preload key and one blank key
򐂰 Two blank keys

Two preload keys is an unsupported combination; installing two preload keys prevents ESXi
from booting. This behavior is similar to the error that is described at this website:

http://kb.vmware.com/kb/1035107

Installing two keys provides a backup boot device. Both devices are listed in the boot menu,
which allows you to boot from either device or to set one as a backup in case the first one
becomes corrupted.
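The supported combinations reduce to a small rule: one or two keys per server, of which at most one may be a preload key. The helper below is a hypothetical illustration of that rule (the function name and string representation are my own assumptions, not IBM tooling):

```python
def is_supported_combo(keys):
    """Check whether a set of installed ESXi USB keys is a supported combination.

    `keys` is a list with one entry per installed key, each either "preload"
    or "blank". Per the rules above: one or two keys per server, and at most
    one preload key, because two preload keys prevents ESXi from booting.
    """
    if not 1 <= len(keys) <= 2:
        return False
    return keys.count("preload") <= 1
```

All four supported combinations pass this check; the two-preload-key configuration does not.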

5.3.12 Systems management


Each server in the x222 Compute Node contains an IBM Integrated Management Module II
(IMM2), which interfaces with the advanced management module in the chassis. The
combination of these features provides advanced service-processor control, monitoring, and
an alerting function. If an environmental condition exceeds a threshold or if a system
component fails, LEDs on the system board are lit to help you diagnose the problem, the error
is recorded in the event log, and you are alerted to the problem.

Remote management
A virtual presence capability comes standard for remote server management. Remote server
management is provided through the following industry-standard interfaces:
򐂰 Intelligent Platform Management Interface (IPMI) Version 2.0
򐂰 SNMP Version 3
򐂰 Common Information Model (CIM)
򐂰 Web browser

The server supports virtual media and remote control features, which provide the following
functions:
򐂰 Remotely viewing video with graphics resolutions up to 1600 x 1200 at 75 Hz with up to 23
bits per pixel, regardless of the system state.
򐂰 Remotely accessing the server by using the keyboard and mouse from a remote client.
򐂰 Mapping the CD or DVD drive, diskette drive, and USB flash drive on a remote client, and
mapping ISO and diskette image files as virtual drives that are available for use by the
server.
򐂰 Uploading a diskette image to the IMM2 memory and mapping it to the server as a virtual
drive.
򐂰 Capturing blue-screen errors.

Light path diagnostics
For quick problem determination when you are physically at the server, the x222 offers the
following three-step guided path:
1. The Fault LED on the front panel.
2. The light path diagnostics panel, as shown in the following figure.
3. LEDs that are next to key components on the system board.

The light path diagnostics panel is visible when you remove the x222 Compute Node from the
chassis. The panel for each server is on the right side, as shown in Figure 5-19.

Figure 5-19 Location of the light path diagnostics panel on each server in the x222 Compute Node
(one panel on the upper server and one on the lower server)

To illuminate the light path diagnostics LEDs, power off the compute node, slide it out of the
chassis, and press the power button on the specific server showing the error. The power
button on each server doubles as the light path diagnostics remind button when the server is
removed from the chassis.



The meanings of the LEDs in the light path diagnostics panel are listed in Table 5-39.

Table 5-39 Light path diagnostic panel LEDs

LED      Meaning

LP       The light path diagnostics panel is operational.
S BRD    A system board error is detected.
MIS      A mismatch has occurred between the processors, DIMMs, or HDDs within the
         configuration (as reported by POST).
NMI      A non-maskable interrupt (NMI) has occurred.
TEMP     An over-temperature condition occurred that was critical enough to shut down the
         server.
MEM      A memory fault has occurred. The corresponding DIMM error LEDs on the system
         board are also lit.

5.3.13 Operating system support


Each server in the x222 Compute Node supports the following operating systems:
򐂰 Microsoft Windows Server 2008 R2 with Service Pack 1
򐂰 Microsoft Windows Server 2008, Datacenter x64 Edition with Service Pack 2
򐂰 Microsoft Windows Server 2008, Enterprise x64 Edition with Service Pack 2
򐂰 Microsoft Windows Server 2008, Standard x64 Edition with Service Pack 2
򐂰 Microsoft Windows Server 2008, Web x64 Edition with Service Pack 2
򐂰 Microsoft Windows Server 2012
򐂰 Novell SUSE Linux Enterprise Server 11 for AMD64/EM64T, Service Pack 2
򐂰 Novell SUSE Linux Enterprise Server 11 with Xen for AMD64/EM64T, Service Pack 2
򐂰 Red Hat Enterprise Linux 5 Server x64 Edition, U9
򐂰 Red Hat Enterprise Linux 5 Server with Xen x64 Edition, U9
򐂰 Red Hat Enterprise Linux 6 Server x64 Edition, U4
򐂰 VMware ESX 4.1, U3
򐂰 VMware ESXi 4.1, U3
򐂰 VMware vSphere 5, U2
򐂰 VMware vSphere 5.1, U1

For more information about the latest list of supported operating systems, see this website:

http://ibm.com/systems/info/x86servers/serverproven/compat/us/nos/flexmatrix.shtml

5.4 IBM Flex System x240 Compute Node


The IBM Flex System x240 Compute Node, available as machine type 8737 with a three-year
warranty, is a half-wide, two-socket server. It runs the latest Intel Xeon processor E5-2600
family (formerly code named Sandy Bridge-EP) processors. It is ideal for infrastructure,
virtualization, and enterprise business applications, and is compatible with the IBM Flex
System Enterprise Chassis.

This section includes the following topics:


򐂰 5.4.1, “Introduction” on page 235
򐂰 5.4.2, “Features and specifications” on page 237
򐂰 5.4.3, “Models” on page 239

򐂰 5.4.4, “Chassis support” on page 239
򐂰 5.4.5, “System architecture” on page 240
򐂰 5.4.6, “Processor” on page 242
򐂰 5.4.7, “Memory” on page 245
򐂰 5.4.8, “Standard onboard features” on page 258
򐂰 5.4.9, “Local storage” on page 259
򐂰 5.4.10, “Integrated virtualization” on page 266
򐂰 5.4.11, “Embedded 10 Gb Virtual Fabric adapter” on page 268
򐂰 5.4.12, “I/O expansion” on page 269
򐂰 5.4.13, “Systems management” on page 271
򐂰 5.4.14, “Operating system support” on page 274

5.4.1 Introduction
The x240 supports the following equipment:
򐂰 Up to two Intel Xeon E5-2600 series multi-core processors
򐂰 Twenty-four memory DIMMs
򐂰 Two hot-swap drives
򐂰 Two PCI Express I/O adapters
򐂰 Two optional internal USB connectors

Figure 5-20 shows the x240.

Figure 5-20 The x240



Figure 5-21 shows the location of the controls, LEDs, and connectors on the front of the x240.

Figure 5-21 The front of the x240 showing the location of the controls, LEDs, and connectors
(USB port, console breakout cable port, power button/LED, NMI control, hard disk drive
activity and status LEDs, and LED panel)

Figure 5-22 shows the internal layout and major components of the x240.

Figure 5-22 Exploded view of the x240 showing the major components (cover, air baffles,
heat sink and microprocessor heat sink filler, microprocessors, DIMMs, I/O expansion
adapter, hot-swap storage backplane, hot-swap storage cage, hot-swap storage drives, and
storage drive filler)

5.4.2 Features and specifications
Table 5-40 lists the features of the x240.

Table 5-40 Features of the x240

Component            Specification

Machine types        8737 (x-config); 8737-15X and 7863-10X (e-config)
Form factor          Half-wide compute node
Chassis support      IBM Flex System Enterprise Chassis
Processor            Up to two Intel Xeon Processor E5-2600 product family processors. These
                     processors can be eight-core (up to 2.9 GHz), six-core (up to 2.9 GHz),
                     quad-core (up to 3.3 GHz), or dual-core (up to 3.0 GHz). Two QPI links up
                     to 8.0 GT/s each. Up to 1600 MHz memory speed. Up to 20 MB L3 cache.
Chipset              Intel C600 series.
Memory               Up to 24 DIMM sockets (12 DIMMs per processor) using Low Profile (LP)
                     DDR3 DIMMs. RDIMMs, UDIMMs, and LRDIMMs supported. 1.5V and low-voltage
                     1.35V DIMMs supported. Support for up to 1600 MHz memory speed, depending
                     on the processor. Four memory channels per processor, with three DIMMs
                     per channel.
Memory maximums      With LRDIMMs: Up to 768 GB with 24x 32 GB LRDIMMs and two processors.
                     With RDIMMs: Up to 512 GB with 16x 32 GB RDIMMs and two processors.
                     With UDIMMs: Up to 64 GB with 16x 4 GB UDIMMs and two processors.
Memory protection    ECC, optional memory mirroring, and memory rank sparing.
Disk drive bays      Two 2.5" hot-swap SAS/SATA drive bays that support SAS, SATA, and SSD
                     drives. Optional support for up to eight 1.8" SSDs.
Maximum internal     With two 2.5" hot-swap drives:
storage              򐂰 Up to 2 TB with 1 TB 2.5" NL SAS HDDs
                     򐂰 Up to 2.4 TB with 1.2 TB 2.5" SAS HDDs
                     򐂰 Up to 2 TB with 1 TB 2.5" SATA HDDs
                     򐂰 Up to 3.2 TB with 1.6 TB 2.5" SATA SSDs
                     An intermix of SAS and SATA HDDs and SSDs is supported. Alternatively,
                     with 1.8" SSDs and the ServeRAID M5115 RAID adapter, up to 1.6 TB with
                     eight 200 GB 1.8" SSDs. Additional storage is available with an attached
                     Flex System Storage Expansion Node.
RAID support         RAID 0, 1, 1E, and 10 with the integrated LSI SAS2004 controller.
                     Optional ServeRAID M5115 RAID controller with RAID 0, 1, 10, 5, or 50
                     support and 1 GB cache. Supports up to eight 1.8" SSDs with expansion
                     kits. Optional flash backup for cache, RAID 6/60, and SSD performance
                     enabler.
Network interfaces   x2x models: Two 10 Gb Ethernet ports with Embedded 10 Gb Virtual Fabric
                     Ethernet LAN on motherboard (LOM) controller; Emulex BladeEngine 3
                     based.
                     x1x models: None standard; optional 1 Gb or 10 Gb Ethernet adapters.
PCI Expansion slots  Two I/O connectors for adapters. PCI Express 3.0 x16 interface.
Ports                USB ports: one external; two internal for embedded hypervisor with
                     optional USB Enablement Kit. Console breakout cable port that provides
                     local keyboard, video, mouse (KVM) and serial ports (cable standard with
                     chassis; additional cables are optional).
Systems management   UEFI, IBM Integrated Management Module II (IMM2) with Renesas SH7757
                     controller, Predictive Failure Analysis, light path diagnostics panel,
                     automatic server restart, remote presence. Support for IBM Flex System
                     Manager, IBM Systems Director, and IBM ServerGuide.
Security features    Power-on password, administrator's password, Trusted Platform Module 1.2.
Video                Matrox G200eR2 video core with 16 MB video memory that is integrated
                     into the IMM2. Maximum resolution is 1600x1200 at 75 Hz with 16 M colors.
Limited warranty     3-year customer-replaceable unit and onsite limited warranty with
                     9x5/NBD.
Operating systems    Microsoft Windows Server 2008 R2, Red Hat Enterprise Linux 5 and 6,
supported            SUSE Linux Enterprise Server 10 and 11, VMware vSphere. For more
                     information, see 5.4.14, “Operating system support” on page 274.
Service and support  Optional service upgrades are available through IBM ServicePacs: 4-hour
                     or 2-hour response time, 8-hour fix time, 1-year or 2-year warranty
                     extension, and remote technical support for IBM hardware and selected
                     IBM and OEM software.
Dimensions           Width 215 mm (8.5"), height 51 mm (2.0"), depth 493 mm (19.4")
Weight               Maximum configuration: 6.98 kg (15.4 lb)
Figure 5-23 shows the components on the system board of the x240.

Figure 5-23 Layout of the x240 system board (hot-swap drive bay backplane; processor 1
and processor 2, each with 12 memory DIMMs; I/O connectors 1 and 2; Fabric Connector;
expansion connector; light path diagnostics)

5.4.3 Models
The current x240 models are shown in Table 5-41. All models include 8 GB of memory (2x
4 GB DIMMs) running at either 1600 MHz or 1333 MHz (depending on the model).

Table 5-41 Models of the x240 type 8737

Modela     Intel Xeon processor (model, cores, core speed,   Standard  Available   Available   10 GbE
           L3 cache, memory speed, TDP power); two maximum   memoryb   drive bays  I/O slotsc  embeddedd

8737-A1x   1x Xeon E5-2630L 6C 2.0 GHz 15 MB 1333 MHz 60 W   2x 4 GB   Two (open)  2           No
8737-D2x   1x Xeon E5-2609 4C 2.40 GHz 10 MB 1066 MHz 80 W   2x 4 GB   Two (open)  1           Yes
8737-F2x   1x Xeon E5-2620 6C 2.0 GHz 15 MB 1333 MHz 95 W    2x 4 GB   Two (open)  1           Yes
8737-G2x   1x Xeon E5-2630 6C 2.3 GHz 15 MB 1333 MHz 95 W    2x 4 GB   Two (open)  1           Yes
8737-H1x   1x Xeon E5-2640 6C 2.5 GHz 15 MB 1333 MHz 95 W    2x 4 GB   Two (open)  2           No
8737-H2x   1x Xeon E5-2640 6C 2.5 GHz 15 MB 1333 MHz 95 W    2x 4 GB   Two (open)  1           Yes
8737-J1x   1x Xeon E5-2670 8C 2.6 GHz 20 MB 1600 MHz 115 W   2x 4 GB   Two (open)  2           No
8737-L2x   1x Xeon E5-2660 8C 2.2 GHz 20 MB 1600 MHz 95 W    2x 4 GB   Two (open)  1           Yes
8737-M1x   1x Xeon E5-2680 8C 2.7 GHz 20 MB 1600 MHz 130 W   2x 4 GB   Two (open)  2           No
8737-M2x   1x Xeon E5-2680 8C 2.7 GHz 20 MB 1600 MHz 130 W   2x 4 GB   Two (open)  1           Yes
8737-N2x   1x Xeon E5-2643 4C 3.3 GHz 10 MB 1600 MHz 130 W   2x 4 GB   Two (open)  1           Yes
8737-Q2x   1x Xeon E5-2667 6C 2.9 GHz 15 MB 1600 MHz 130 W   2x 4 GB   Two (open)  1           Yes
8737-R2x   1x Xeon E5-2690 8C 2.9 GHz 20 MB 1600 MHz 135 W   2x 4 GB   Two (open)  1           Yes

a. The model numbers that are provided are worldwide generally available variant (GAV) model numbers that
   are not orderable as listed. They must be modified by country. The US GAV model numbers use the following
   nomenclature: xxU. For example, the US orderable part number for 8737-A2x is 8737-A2U. See the
   product-specific official IBM announcement letter for other country-specific GAV model numbers.
b. The maximum system memory capacity is 768 GB when you use 24x 32 GB DIMMs.
c. Some models include an Embedded 10 Gb Virtual Fabric Ethernet LOM controller as standard. This embedded
   controller precludes the use of an I/O adapter in I/O connector 1, as shown in Figure 5-23 on page 238.
   For more information, see 5.4.11, “Embedded 10 Gb Virtual Fabric adapter” on page 268.
d. Model numbers in the form x2x (for example, 8737-L2x) include an Embedded 10 Gb Virtual Fabric Ethernet
   LOM controller as standard. Model numbers in the form x1x (for example, 8737-A1x) do not include this
   embedded controller.
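The memory maximums quoted for the x240 are straightforward arithmetic over the DIMM socket counts and the largest DIMM of each type. A quick check, using the socket counts and DIMM sizes given in the specifications earlier in this section:

```python
DIMM_SOCKETS = 24  # 12 sockets per processor, two processors installed

# Maximum memory configurations listed for the x240 (capacities in GB):
lrdimm_max = DIMM_SOCKETS * 32   # 24x 32 GB LRDIMMs
rdimm_max = 16 * 32              # 16x 32 GB RDIMMs
udimm_max = 16 * 4               # 16x 4 GB UDIMMs

print(lrdimm_max, rdimm_max, udimm_max)  # 768 512 64
```

The RDIMM and UDIMM maximums populate only 16 of the 24 sockets (two DIMMs per channel); only LRDIMMs reach three DIMMs per channel at the largest capacity, which is why the 768 GB figure applies to LRDIMMs.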

5.4.4 Chassis support


The x240 type 8737 is supported in the IBM Flex System Enterprise Chassis as listed in
Table 5-42.

Table 5-42 x240 chassis support

Server   BladeCenter chassis (all)   IBM Flex System Enterprise Chassis

x240     No                          Yes

Chapter 5. Compute nodes 239


Up to 14 x240 Compute Nodes can be installed in the chassis in 10U of rack space. The
actual number of x240 systems that can be powered on in a chassis depends on the following
factors:
򐂰 The TDP power rating for the processors that are installed in the x240.
򐂰 The number of power supplies installed in the chassis.
򐂰 The capacity of the power supplies installed in the chassis (2100 W or 2500 W).
򐂰 The power redundancy policy used in the chassis (N+1 or N+N).

Table 4-11 on page 93 provides guidelines about the number of x240 systems that can be
powered on in the IBM Flex System Enterprise Chassis, based on the type and number of
power supplies installed.
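These factors can be combined into a rough sizing sketch. The function below is an illustrative upper-bound estimate only — it ignores chassis overhead such as fans, switches, and the management module, and all of the example numbers are placeholders; use Table 4-11 and IBM's power planning guidance for real configurations:

```python
def max_nodes_powered(n_supplies, supply_watts, node_watts, policy="N+1"):
    """Rough upper bound on how many compute nodes a chassis can power on.

    With N+1 redundancy one supply is held in reserve; with N+N half are.
    Chassis overhead (fans, I/O modules, CMM) is ignored, so treat the
    result as an optimistic ceiling, not a planning number.
    """
    reserve = 1 if policy == "N+1" else n_supplies // 2
    usable_watts = (n_supplies - reserve) * supply_watts
    return usable_watts // node_watts

# Example: six 2500 W supplies with N+1 redundancy and nodes drawing ~400 W.
```

The sketch makes the trade-off visible: moving the same chassis from N+1 to N+N redundancy roughly halves the usable power budget.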

The x240 is a half-wide compute node. The chassis shelf must be installed in the IBM Flex
System Enterprise Chassis. Figure 5-24 shows the chassis shelf in the chassis.

Figure 5-24 The IBM Flex System Enterprise Chassis showing the chassis shelf

The shelf is required for half-wide compute nodes. To install full-wide or larger compute
nodes, the shelves must be removed from the chassis. Slide the two latches on the shelf
towards the center, and then slide the shelf out of the chassis.

5.4.5 System architecture


The IBM Flex System x240 Compute Node features the Intel Xeon E5-2600 series
processors. The Xeon E5-2600 series processor has models with two, four, six, and eight
cores per processor with up to 16 threads per socket. The processors have the following
features:
򐂰 Up to 20 MB of shared L3 cache
򐂰 Hyper-Threading
򐂰 Turbo Boost Technology 2.0 (depending on processor model)
򐂰 Two QuickPath Interconnect (QPI) links that run at up to 8 GTps
򐂰 One integrated memory controller
򐂰 Four memory channels that support up to three DIMMs each

The Xeon E5-2600 series processor implements the second generation of Intel Core
microarchitecture (Sandy Bridge) by using a 32 nm manufacturing process. It requires a new
socket type, the LGA-2011, which has 2011 pins that touch contact points on the underside of
the processor. The architecture also includes the Intel C600 (Patsburg B) Platform Controller
Hub (PCH).

Figure 5-25 shows the system architecture of the x240 system.

Figure 5-25 IBM Flex System x240 Compute Node system board block diagram (two Xeon
processors connected by QPI links at 8 GT/s, each with four DDR3 memory channels of
three DIMMs; processor 1 connects through a x4 ESI link to the Intel C600 PCH, which
attaches the LSI2004 SAS controller for the internal HDDs or SSDs and the internal and
front USB ports; the IMM2 provides video, serial, the front KVM port, and management links
to the midplane; the 10GbE LOM attaches through PCIe x8 G2; I/O connectors 1 and 2 and
the sidecar connector attach through PCIe x16 G3)

The IBM Flex System x240 Compute Node has the following system architecture features as
standard:
򐂰 Two 2011-pin type R (LGA-2011) processor sockets
򐂰 An Intel C600 PCH
򐂰 Four memory channels per socket
򐂰 Up to three DIMMs per memory channel
򐂰 Twenty-four DDR3 DIMM sockets
򐂰 Support for UDIMMs, RDIMMs, and new LRDIMMs
򐂰 One integrated 10 Gb Virtual Fabric Ethernet controller (10 GbE LOM in diagram)
򐂰 One LSI 2004 SAS controller
򐂰 Integrated HW RAID 0 and 1
򐂰 One Integrated Management Module II
򐂰 Two PCIe x16 Gen3 I/O adapter connectors
򐂰 Two Trusted Platform Module (TPM) 1.2 controllers
򐂰 One internal USB connector



The new architecture allows the sharing of data on-chip through a high-speed ring
interconnect between all processor cores, the last level cache (LLC), and the system agent.
The system agent houses the memory controller and a PCI Express root complex that
provides 40 PCIe 3.0 lanes. This ring interconnect and LLC architecture are shown in
Figure 5-26.

Figure 5-26 Intel Xeon E5-2600 basic architecture (the processor cores, each with L1/L2
caches and a slice of the last level cache (LLC), are connected by a ring interconnect; the
system agent contains the memory controller, with four channels of three DIMMs each, and
a PCIe 3.0 root complex with 40 lanes; a QPI link connects to the chipset)

The two Xeon E5-2600 series processors in the x240 are connected through two QuickPath
Interconnect (QPI) links. Each QPI link is capable of up to eight giga-transfers per second
(GTps) depending on the processor model installed. Table 5-43 shows the QPI bandwidth of
the Intel Xeon E5-2600 series processors.

Table 5-43 QuickPath Interconnect bandwidth

Intel Xeon E5-2600   QuickPath Interconnect   QuickPath Interconnect
series processor     speed (GTps)             bandwidth (GBps)

Advanced             8.0                      32.0
Standard             7.25                     29.0
Basic                6.4                      25.6
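The bandwidth column follows directly from the transfer rate: a QPI link carries 2 bytes per transfer in each of its two directions, so the GBps figures in Table 5-43 equal GTps × 4. A quick arithmetic check:

```python
def qpi_bandwidth_gbps(gtps):
    """QPI link bandwidth in GBps: 2 bytes per transfer over both directions."""
    bytes_per_transfer = 2  # 16-bit payload per direction
    directions = 2
    return gtps * bytes_per_transfer * directions

# Reproduces the three rows of Table 5-43 from their GTps values.
for speed in (8.0, 7.25, 6.4):
    print(speed, qpi_bandwidth_gbps(speed))
```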

5.4.6 Processor
The Intel Xeon E5-2600 series is available with up to eight cores and 20 MB of last-level
cache. It features an enhanced instruction set called Intel Advanced Vector Extension (AVX).
This set doubles the operand size for vector instructions (such as floating-point) to 256 bits
and boosts selected applications by up to a factor of two.

The new architecture also introduces Intel Turbo Boost Technology 2.0 and improved power
management capabilities. Turbo Boost automatically turns off unused processor cores and
increases the clock speed of the cores in use if thermal requirements are still met. Turbo
Boost Technology 2.0 makes use of the new integrated design. It also implements a more
granular overclocking in 100 MHz steps instead of 133 MHz steps on former Nehalem-based
and Westmere-based microprocessors.

As listed in Table 5-41 on page 239, standard models come with one processor that is
installed in processor socket 1.

In a two-processor system, both processors communicate with each other through two QPI
links. I/O is served through 40 PCIe Gen3 lanes per processor and through a x4 Direct
Media Interface (DMI) link to the Intel C600 PCH.

Processor 1 has direct access to 12 DIMM slots. By adding the second processor, you enable
access to the remaining 12 DIMM slots. The second processor also enables access to the
sidecar connector, which enables the use of mezzanine expansion units.

Table 5-44 shows a comparison between the features of the Intel Xeon 5600 series
processor and the Intel Xeon E5-2600 series processor that is installed in the x240.

Table 5-44 Comparison of Xeon 5600 series and Xeon E5-2600 series processor features

Specification                Xeon 5600                        Xeon E5-2600

Cores                        Up to six cores / 12 threads     Up to eight cores / 16 threads
Physical addressing          40-bit (Uncorea limited)         46-bit (Core and Uncorea)
Cache size                   12 MB                            Up to 20 MB
Memory channels per socket   3                                4
Max memory speed             1333 MHz                         1600 MHz
Virtualization technology    Real Mode support and            Adds Large VT pages
                             transition latency reduction
New instructions             AES-NI                           Adds AVX
QPI frequency                6.4 GTps                         8.0 GTps
Inter-socket QPI links       1                                2
PCI Express                  36 lanes PCIe on chipset         40 lanes/socket integrated PCIe

a. Uncore is a term that is used by Intel to describe the parts of a processor that are not
   the core.

Table 5-45 lists the features for the different Intel Xeon E5-2600 series processor types.

Table 5-45 Intel Xeon E5-2600 series processor features

Processor       Processor   Turbo   HT    L3 cache   Cores   Power   QPI link   Max DDR3
model           frequency                                    TDP     speeda     speed

Advanced
Xeon E5-2650    2.0 GHz     Yes     Yes   20 MB      8       95 W    8 GT/s     1600 MHz
Xeon E5-2658    2.1 GHz     Yes     Yes   20 MB      8       95 W    8 GT/s     1600 MHz
Xeon E5-2660    2.2 GHz     Yes     Yes   20 MB      8       95 W    8 GT/s     1600 MHz
Xeon E5-2665    2.4 GHz     Yes     Yes   20 MB      8       115 W   8 GT/s     1600 MHz
Xeon E5-2670    2.6 GHz     Yes     Yes   20 MB      8       115 W   8 GT/s     1600 MHz
Xeon E5-2680    2.7 GHz     Yes     Yes   20 MB      8       130 W   8 GT/s     1600 MHz
Xeon E5-2690    2.9 GHz     Yes     Yes   20 MB      8       135 W   8 GT/s     1600 MHz

Standard
Xeon E5-2620    2.0 GHz     Yes     Yes   15 MB      6       95 W    7.2 GT/s   1333 MHz
Xeon E5-2630    2.3 GHz     Yes     Yes   15 MB      6       95 W    7.2 GT/s   1333 MHz
Xeon E5-2640    2.5 GHz     Yes     Yes   15 MB      6       95 W    7.2 GT/s   1333 MHz

Basic
Xeon E5-2603    1.8 GHz     No      No    10 MB      4       80 W    6.4 GT/s   1066 MHz
Xeon E5-2609    2.4 GHz     No      No    10 MB      4       80 W    6.4 GT/s   1066 MHz

Low power
Xeon E5-2650L   1.8 GHz     Yes     Yes   20 MB      8       70 W    8 GT/s     1600 MHz
Xeon E5-2648L   1.8 GHz     Yes     Yes   20 MB      8       70 W    8 GT/s     1600 MHz
Xeon E5-2630L   2.0 GHz     Yes     Yes   15 MB      6       60 W    7.2 GT/s   1333 MHz

Special purpose
Xeon E5-2667    2.9 GHz     Yes     Yes   15 MB      6       130 W   8 GT/s     1600 MHz
Xeon E5-2643    3.3 GHz     No      No    10 MB      4       130 W   6.4 GT/s   1600 MHz
Xeon E5-2637    3.0 GHz     No      No    5 MB       2       80 W    8 GT/s     1600 MHz

a. GTps = giga-transfers per second.

Table 5-46 lists the processor options for the x240.

Table 5-46 Processors for the x240 type 8737


Part number Feature Description Where used

81Y5180 A1CQ Intel Xeon Processor E5-2603 4C 1.8 GHz 10 MB Cache 1066 MHz 80 W

81Y5182 A1CS Intel Xeon Processor E5-2609 4C 2.40 GHz 10 MB Cache 1066 MHz 80 W D2x

81Y5183 A1CT Intel Xeon Processor E5-2620 6C 2.0 GHz 15 MB Cache 1333 MHz 95 W F2x

81Y5184 A1CU Intel Xeon Processor E5-2630 6C 2.3 GHz 15 MB Cache 1333 MHz 95 W G2x

81Y5206 A1ER Intel Xeon Processor E5-2630L 6C 2.0 GHz 15 MB Cache 1333 MHz 60 W A1x

49Y8125 A2EP Intel Xeon Processor E5-2637 2C 3.0 GHz 5 MB Cache 1600 MHz 80 W

81Y5185 A1CV Intel Xeon Processor E5-2640 6C 2.5 GHz 15 MB Cache 1333 MHz 95 W H1x, H2x

81Y5190 A1CY Intel Xeon Processor E5-2643 4C 3.3 GHz 10 MB Cache 1600 MHz 130 W N2x

95Y4670 A31A Intel Xeon Processor E5-2648L 8C 1.8 GHz 20 MB Cache 1600 MHz 70 W

81Y5186 A1CW Intel Xeon Processor E5-2650 8C 2.0 GHz 20 MB Cache 1600 MHz 95 W

81Y5179 A1ES Intel Xeon Processor E5-2650L 8C 1.8 GHz 20 MB Cache 1600 MHz 70 W

95Y4675 A319 Intel Xeon Processor E5-2658 8C 2.1 GHz 20 MB Cache 1600 MHz 95 W

81Y5187 A1CX Intel Xeon Processor E5-2660 8C 2.2 GHz 20 MB Cache 1600 MHz 95 W L2x

49Y8144 A2ET Intel Xeon Processor E5-2665 8C 2.4 GHz 20 MB Cache 1600 MHz 115 W


81Y5189 A1CZ Intel Xeon Processor E5-2667 6C 2.9 GHz 15 MB Cache 1600 MHz 130 W Q2x

81Y9418 A1SX Intel Xeon Processor E5-2670 8C 2.6 GHz 20 MB Cache 1600 MHz 115 W J1x

81Y5188 A1D9 Intel Xeon Processor E5-2680 8C 2.7 GHz 20 MB Cache 1600 MHz 130 W M1x, M2x

49Y8116 A2ER Intel Xeon Processor E5-2690 8C 2.9 GHz 20 MB Cache 1600 MHz 135 W R2x

For more information about the Intel Xeon E5-2600 series processors, see this website:
http://intel.com/content/www/us/en/processors/xeon/xeon-processor-5000-sequence.html

5.4.7 Memory
The x240 has 12 DIMM sockets per processor (24 DIMMs in total) running at 800, 1066,
1333, or 1600 MHz. It supports 2 GB, 4 GB, 8 GB, 16 GB, and 32 GB memory modules, as
shown in Table 5-49 on page 250.

The x240 with the Intel Xeon E5-2600 series processors can support up to 768 GB of
memory in total when you use 32 GB LRDIMMs with both processors installed. The x240
uses double data rate type 3 (DDR3) LP DIMMs. You can use registered DIMMs (RDIMMs),
unbuffered DIMMs (UDIMMs), or load-reduced DIMMs (LRDIMMs). However, the mixing of
the different memory DIMM types is not supported.

The E5-2600 series processor has four memory channels, and each memory channel can
have up to three DIMMs. Figure 5-27 shows the E5-2600 series and the four memory
channels.
Figure 5-27 The Intel Xeon E5-2600 series processor and the four memory channels

Memory subsystem overview


Table 5-47 summarizes some of the characteristics of the x240 memory subsystem. All of
these characteristics are described in the following sections.

Table 5-47 Memory subsystem characteristics of the x240


Memory subsystem characteristic            IBM Flex System x240 Compute Node

Number of memory channels per processor    4
Supported DIMM voltages                    Low voltage (1.35 V); standard voltage (1.5 V)
Maximum DIMMs per channel (DPC)            3 (1.5 V DIMMs); 2 (1.35 V DIMMs)
DIMM slot maximum                          One processor: 12; two processors: 24
Mixing of memory types                     Not supported in any configuration
(RDIMMs, UDIMMs, LRDIMMs)
Mixing of memory speeds                    Supported; lowest common speed for all installed DIMMs
Mixing of DIMM voltage ratings             Supported; all 1.35 V DIMMs run at 1.5 V

Registered DIMM (RDIMM) modules
Supported memory sizes                     32, 16, 8, 4, and 2 GB
Supported memory speeds                    1600, 1333, 1066, and 800 MHz
Maximum system capacity                    512 GB (16 x 32 GB)
Maximum memory speed                       1.35 V @ 2DPC: 1333 MHz; 1.5 V @ 2DPC: 1600 MHz;
                                           1.5 V @ 3DPC: 1066 MHz
Maximum ranks per channel                  8 (any memory voltage)
Maximum number of DIMMs                    One processor: 12; two processors: 24

Unbuffered DIMM (UDIMM) modules
Supported memory sizes                     4 GB
Supported memory speeds                    1333 MHz
Maximum system capacity                    64 GB (16 x 4 GB)
Maximum memory speed                       1.35 V @ 2DPC: 1333 MHz; 1.5 V @ 2DPC: 1333 MHz;
                                           1.35 V or 1.5 V @ 3DPC: not supported
Maximum ranks per channel                  8 (any memory voltage)
Maximum number of DIMMs                    One processor: 8; two processors: 16

Load-reduced (LRDIMM) modules
Supported sizes                            32 and 16 GB
Supported speeds                           1333 and 1066 MHz
Maximum capacity                           768 GB (24 x 32 GB)
Maximum memory speed                       1.35 V @ 2DPC: 1066 MHz; 1.5 V @ 2DPC: 1333 MHz;
                                           1.35 V or 1.5 V @ 3DPC: 1066 MHz
Maximum ranks per channel                  8 (a) (any memory voltage)
Maximum number of DIMMs                    One processor: 12; two processors: 24

a. Because of reduced electrical loading, a 4R (four-rank) LRDIMM has the equivalent load of a
two-rank RDIMM. This reduced load allows the x240 to support three 4R LRDIMMs per channel
(instead of two as with UDIMMs and RDIMMs). For more information, see “” on page 247.

Figure 5-28 shows the location of the 24 memory DIMM sockets on the x240 system board
and other components.

(Figure callouts: DIMMs 1-6, 7-12, 13-18, and 19-24; microprocessors 1 and 2; LOM connector,
present on some models only; I/O expansion connectors 1 and 2.)

Figure 5-28 DIMM layout on the x240 system board

Table 5-48 lists which DIMM connectors belong to which processor memory channel.

Table 5-48 The DIMM connectors for each processor memory channel

Processor      Memory channel    DIMM connector
Processor 1    Channel 0         4, 5, and 6
               Channel 1         1, 2, and 3
               Channel 2         7, 8, and 9
               Channel 3         10, 11, and 12
Processor 2    Channel 0         22, 23, and 24
               Channel 1         19, 20, and 21
               Channel 2         13, 14, and 15
               Channel 3         16, 17, and 18
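The connector-to-channel assignments in Table 5-48 are straightforward to encode for inventory or configuration-checking scripts. The following Python sketch is illustrative only (it is not an IBM tool, and the function name is ours); it answers lookups such as which processor and channel own a given DIMM connector.

```python
# Encode Table 5-48: DIMM connectors per (processor, memory channel).
# Illustrative sketch only; socket numbers are taken from the table above.
DIMM_CHANNELS = {
    (1, 0): [4, 5, 6],
    (1, 1): [1, 2, 3],
    (1, 2): [7, 8, 9],
    (1, 3): [10, 11, 12],
    (2, 0): [22, 23, 24],
    (2, 1): [19, 20, 21],
    (2, 2): [13, 14, 15],
    (2, 3): [16, 17, 18],
}

def locate_dimm(connector):
    """Return the (processor, channel) that owns a DIMM connector (1-24)."""
    for (proc, chan), connectors in DIMM_CHANNELS.items():
        if connector in connectors:
            return proc, chan
    raise ValueError("connector must be 1-24")
```

For example, `locate_dimm(20)` reports that DIMM connector 20 belongs to channel 1 of processor 2.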



Memory types
The x240 supports the following types of DIMM memory:
򐂰 RDIMM modules
Registered DIMMs are the mainstream module solution for servers or any applications
that demand heavy data throughput, high density, and high reliability. RDIMMs use
registers to isolate the memory controller address, command, and clock signals from the
dynamic random-access memory (DRAM). This process results in a lighter electrical load.
Therefore, more DIMMs can be interconnected and larger memory capacity is possible.
However, the register often does impose a clock or more of delay, meaning that registered
DIMMs often have slightly longer access times than their unbuffered counterparts.
In general, RDIMMs have the best balance of capacity, reliability, and workload
performance with a maximum performance of 1600 MHz (at 2 DPC).
For more information about supported x240 RDIMM memory options, see Table 5-49 on
page 250.
򐂰 UDIMM modules
In contrast to RDIMMs that use registers to isolate the memory controller from the
DRAMs, UDIMMs attach directly to the memory controller. Therefore, they do not
introduce a delay, which yields better performance. The disadvantage is limited drive
capability: the number of DIMMs that can be connected on the same memory channel
remains small because of electrical loading. This leads to fewer DIMMs per channel (DPC)
and lower total system memory capacity than RDIMM systems.
UDIMMs have the lowest latency and lowest power usage. They also have the lowest
overall capacity.
For more information about supported x240 UDIMM memory options, see Table 5-49 on
page 250.
򐂰 LRDIMM modules
Load-reduced DIMMs are similar to RDIMMs. They also use memory buffers to isolate the
memory controller address, command, and clock signals from the individual DRAMS on
the DIMM. Load-reduced DIMMs take the buffering a step further by buffering the memory
controller data lines from the DRAMs as well.

Figure 5-29 shows a comparison of RDIMM and LRDIMM memory types.

(Figure content: in a registered DIMM, the register buffers only the command, address, and
clock signals; the data lines run directly between the memory controller and the DRAMs. In a
load-reduced DIMM, the memory buffer carries the data lines as well.)

Figure 5-29 Comparing RDIMM buffering and LRDIMM buffering

In essence, all signaling between the memory controller and the LRDIMM is now
intercepted by the memory buffers on the LRDIMM module. This system allows more
ranks to be added to each LRDIMM module without sacrificing signal integrity. It also
means that fewer actual ranks are “seen” by the memory controller (for example, a 4R
LRDIMM has the same “look” as a 2R RDIMM).
The added buffering that the LRDIMMs support greatly reduces the electrical load on the
system. This reduction allows the system to operate at a higher overall memory speed for
a certain capacity. Conversely, it can operate at a higher overall memory capacity at a
certain memory speed.
LRDIMMs allow maximum system memory capacity and the highest performance for
system memory capacities above 384 GB. They are suited for system workloads that
require maximum memory such as virtualization and databases.
For more information about supported x240 LRDIMM memory options, see Table 5-49 on
page 250.
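A little arithmetic makes the rank-loading benefit concrete. The sketch below is an illustration under the assumption stated above (a 4R LRDIMM presents the electrical load of a 2R RDIMM), not vendor code; it shows why three quad-rank LRDIMMs fit within the 8-rank channel limit while three quad-rank RDIMMs do not.

```python
# Channel rank-load sketch. Assumption from the text: a quad-rank LRDIMM
# presents the electrical load of a dual-rank RDIMM because its memory
# buffer isolates the DRAMs from the channel.
RANKS_PER_CHANNEL_LIMIT = 8

def effective_rank_load(dimms):
    """dimms: list of (physical_ranks, is_lrdimm) tuples on one channel."""
    load = 0
    for ranks, is_lrdimm in dimms:
        # Assumed 2:1 load reduction for quad-rank LRDIMMs, per the text.
        load += ranks // 2 if (is_lrdimm and ranks == 4) else ranks
    return load

# Three 4R LRDIMMs: 12 physical ranks, but only a 6-rank load -- supported.
assert effective_rank_load([(4, True)] * 3) <= RANKS_PER_CHANNEL_LIMIT
# Three 4R RDIMMs: a 12-rank load, which exceeds the channel limit.
assert effective_rank_load([(4, False)] * 3) > RANKS_PER_CHANNEL_LIMIT
```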

The memory type that is installed in the x240 combines with other factors to determine the
ultimate performance of the x240 memory subsystem. For a list of rules when populating the
memory subsystem, see “Memory installation considerations” on page 257.



Memory options
Table 5-49 lists the memory DIMM options for the x240.

Table 5-49 Memory DIMMs for the x240


Part number   FC     Description                                                        Where used

Registered DIMM (RDIMM) modules - 1066 MHz and 1333 MHz

49Y1405 8940 2 GB (1x2GB, 1Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM

49Y1406 8941 4 GB (1x4GB, 1Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM  H1x, H2x, G2x, F2x, D2x, A1x

49Y1407 8942 4 GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM

49Y1397 8923 8 GB (1x8GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM

49Y1563 A1QT 16 GB (1x16GB, 2Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP RDIMM

49Y1400 8939 16 GB (1x16GB, 4Rx4, 1.35V) PC3L-8500 CL7 ECC DDR3 1066 MHz LP RDIMM

Registered DIMM (RDIMM) modules - 1600 MHz

49Y1559 A28Z 4 GB (1x4GB, 1Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM  R2x, Q2x, N2x, M2x, M1x, L2x, J1x

90Y3178 A24L 4 GB (1x4GB, 2Rx8, 1.5V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM

90Y3109 A292 8 GB (1x8GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM

00D4968 A2U5 16 GB (1x16GB, 2Rx4, 1.5V) PC3-12800 CL11 ECC DDR3 1600 MHz LP RDIMM

Unbuffered DIMM (UDIMM) modules

49Y1404 8648 4 GB (1x4GB, 2Rx8, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP UDIMM

Load-reduced (LRDIMM) modules

49Y1567 A290 16 GB (1x16GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP LRDIMM

90Y3105 A291 32 GB (1x32GB, 4Rx4, 1.35V) PC3L-10600 CL9 ECC DDR3 1333 MHz LP LRDIMM

Memory channel performance considerations


The memory that is installed in the x240 can be clocked at 1600 MHz, 1333 MHz, 1066 MHz,
or 800 MHz. You select the speed based on the type of memory, population of memory,
processor model, and several other factors. Use the following items to determine the ultimate
performance of the x240 memory subsystem:
򐂰 Model of Intel Xeon E5-2600 series processor installed
As described in 5.4.5, “System architecture” on page 240, each Intel Xeon E5-2600 series
processor includes an integrated memory controller. The model of processor that is
installed determines the maximum speed that the integrated memory controller clocks the
installed memory. Table 5-45 on page 243 lists the maximum DDR3 speed that the
processor model supports. This maximum speed might not be the ultimate speed of the
memory subsystem.

򐂰 Speed of DDR3 DIMMs installed
For maximum performance, the speed rating of each DIMM module must match the
maximum memory clock speed of the Xeon E5-2600 processor. Remember the following
rules when you match processors and DIMM modules:
– The processor never over-clocks the memory in any configuration.
– The processor clocks all the installed memory at either the rated speed of the
processor or the speed of the slowest DIMM installed in the system.
For example, an Intel Xeon E5-2640 series processor clocks all installed memory at a
maximum speed of 1333 MHz. If any 1600 MHz DIMM modules are installed, they are
clocked at 1333 MHz. However, if any 1066 MHz or 800 MHz DIMM modules are installed,
all installed DIMM modules are clocked at the slowest speed (800 MHz).
򐂰 Number of DIMMs per channel (DPC)
Generally, the Xeon E5-2600 processor series clocks up to 2DPC at the maximum rated
speed of the processor. However, if any channel is fully populated (3DPC), the processor
slows all the installed memory down.
For example, an Intel Xeon E5-2690 series processor clocks all installed memory at a
maximum speed of 1600 MHz up to 2DPC. However, if any one channel is populated with
3DPC, all memory channels are clocked at 1066 MHz.
򐂰 DIMM voltage rating
The Xeon E5-2600 processor series supports both low voltage (1.35 V) and standard
voltage (1.5 V) DIMMs. Table 5-49 on page 250 shows that the maximum clock speed for
supported low voltage DIMMs is 1333 MHz. The maximum clock speed for supported
standard voltage DIMMs is 1600 MHz.
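Taken together, the rules above amount to taking the minimum of several ceilings. The following sketch is a simplification for illustration only (real UEFI firmware also applies the voltage and population rules in Table 5-50 and Table 5-51, which are not modeled here); it derives the operating clock from the processor's rated speed, the slowest installed DIMM, and the 3DPC penalty.

```python
def memory_clock(cpu_max_mhz, dimm_speeds_mhz, max_dpc, three_dpc_mhz=1066):
    """Effective DDR3 clock for the x240, per the rules above (simplified).

    cpu_max_mhz:     max DDR3 speed of the installed E5-2600 model
    dimm_speeds_mhz: rated speeds of every installed DIMM
    max_dpc:         highest DIMMs-per-channel count in the system
    three_dpc_mhz:   clock applied when any channel runs 3DPC (assumed 1066)
    """
    # The processor never over-clocks memory, and all DIMMs run at the
    # speed of the slowest installed module.
    clock = min([cpu_max_mhz] + list(dimm_speeds_mhz))
    if max_dpc >= 3:
        # Any fully populated channel slows all installed memory.
        clock = min(clock, three_dpc_mhz)
    return clock

# E5-2640 (1333 MHz) with 1600 MHz DIMMs: memory runs at 1333 MHz.
# E5-2690 (1600 MHz) with any channel at 3DPC: everything drops to 1066 MHz.
```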

Table 5-50 and Table 5-51 on page 252 list the memory DIMM types that are available for the
x240 and show the maximum memory speed, which is based on the number of DIMMs per
channel, ranks per DIMM, and DIMM voltage rating.

Table 5-50 Maximum memory speeds (Part 1 - UDIMMs, LRDIMMs and Quad rank RDIMMs)
Spec UDIMMs LRDIMMs RDIMMs

Rank Dual rank Quad rank Quad rank

Part numbers 49Y1404 (4 GB) 49Y1567 (16 GB) 49Y1400 (16 GB)
90Y3105 (32 GB) 90Y3102 (32 GB)

Rated speed 1333 MHz 1333 MHz 1066 MHz

Rated voltage 1.35 V 1.35 V 1.35 V

Operating voltage 1.35 V 1.5 V 1.35 V 1.5 V 1.35 V 1.5 V

Maximum quantitya 16 16 24 24 8 16

Largest DIMM 4 GB 4 GB 32 GB 32 GB 32 GB 32 GB

Max memory capacity 48 GB 48 GB 768 GB 768 GB 256 GB 512 GB

Max memory at rated speed 48 GB 48 GB N/A 512 GB N/A 256 GB

Maximum operating speed

1 DIMM per channel 1333 MHz 1333 MHz 1066 MHz 1333 MHz 800 MHz 1066 MHz

2 DIMMs per channel 1333 MHz 1333 MHz 1066 MHz 1333 MHz NSb 800 MHz

3 DIMMs per channel NSc NSc 1066 MHz 1066 MHz NSd NSd



a. The maximum quantity that is supported is shown for two processors installed. When one processor is installed,
the maximum quantity that is supported is half of that shown.
b. NS = Not supported at 1.35 V. Will operate at 1.5 V instead
c. NS = Not supported. UDIMMs only support up to 2 DIMMs per channel.
d. NS = Not supported. RDIMMs support up to 8 ranks per channel.

Table 5-51 Maximum memory speeds (Part 2 - Single and Dual rank RDIMMs)
Spec RDIMMs

Rank Single rank Dual rank

Part numbers 49Y1405 (2GB) 49Y1559 (4GB) 49Y1407 (4GB) 90Y3178 (4GB)
49Y1406 (4GB) 49Y1397 (8GB) 90Y3109 (8GB)
49Y1563 (16GB) 00D4968 (16GB)

Rated speed 1333 MHz 1600 MHz 1333 MHz 1600 MHz

Rated voltage 1.35 V 1.5 V 1.35 V 1.5 V

Operating voltage 1.35 V 1.5 V 1.5 V 1.35 V 1.5 V 1.5 V

Max quantitya 16 24 24 16 24 24

Largest DIMM 4 GB 4 GB 4 GB 16 GB 16 GB 16 GB

Max memory capacity 64 GB 96 GB 96 GB 256 GB 384 GB 384 GB

Max memory at rated speed N/A 64 GB 64 GB N/A 256 GB 256 GB

Maximum operating speed (MHz)

1 DIMM per channel 1333 MHz 1333 MHz 1600 MHz 1333 MHz 1333 MHz 1600 MHz

2 DIMMs per channel 1333 MHz 1333 MHz 1600 MHz 1333 MHz 1333 MHz 1600 MHz

3 DIMMs per channel NSb 1066 MHz 1066 MHz NSb 1066 MHz 1066 MHz
a. The maximum quantity that is supported is shown for two processors installed. When one processor is installed,
the maximum quantity that is supported is half of that shown.
b. NS = Not supported at 1.35 V. Will operate at 1.5 V instead
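For scripted capacity planning, the speed cells of these tables can be transcribed directly into a lookup table. This sketch is illustrative only and covers just the dual-rank RDIMM columns of Table 5-51 at 1.5 V operating voltage; the function name is ours.

```python
# Operating speed (MHz) for dual-rank RDIMMs at 1.5 V, keyed by
# (rated_speed_mhz, dimms_per_channel); values transcribed from Table 5-51.
DUAL_RANK_RDIMM_SPEED = {
    (1600, 1): 1600, (1600, 2): 1600, (1600, 3): 1066,
    (1333, 1): 1333, (1333, 2): 1333, (1333, 3): 1066,
}

def rdimm_operating_speed(rated_mhz, dpc):
    """Look up the dual-rank RDIMM operating speed at 1.5 V (sketch)."""
    try:
        return DUAL_RANK_RDIMM_SPEED[(rated_mhz, dpc)]
    except KeyError:
        raise ValueError("combination not listed in Table 5-51") from None
```

For example, 1600 MHz dual-rank RDIMMs at 3DPC operate at 1066 MHz, matching the 3DPC row of the table.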

Tip: When an unsupported memory configuration is detected, the IMM illuminates the
“DIMM mismatch” light path error LED and the system does not boot. A DIMM mismatch
error includes the following examples:
򐂰 Mixing of RDIMMs, UDIMMs, or LRDIMMs in the system
򐂰 Not adhering to the DIMM population rules

In some cases, the error log points to the DIMM slots that are mismatched.

Memory modes
The x240 type 8737 supports the following memory modes:
򐂰 Independent channel mode
򐂰 Rank-sparing mode
򐂰 Mirrored-channel mode

These modes can be selected in the Unified Extensible Firmware Interface (UEFI) setup. For
more information, see 5.4.13, “Systems management” on page 271.

Independent channel mode
This mode is the default mode for DIMM population. DIMMs are populated in the last DIMM
connector on the channel first, then installed one DIMM per channel, equally distributed
between channels and processors. In this memory mode, the operating system uses the full
amount of memory that is installed and no redundancy is provided.

An IBM Flex System x240 Compute Node that is configured in independent channel mode with
16 GB DIMMs yields a maximum of 192 GB of usable memory with one processor installed, or
384 GB of usable memory with two processors installed. Memory DIMMs must be installed in
the correct order, starting with the last physical DIMM socket of each channel. The DIMMs can
be installed without matching sizes, but avoid this configuration because it might reduce
memory performance.

For more information about the memory DIMM installation sequence when you use
independent channel mode, see “Memory DIMM installation: Independent channel and
rank-sparing modes” on page 254.

Rank-sparing mode
In rank-sparing mode, one memory DIMM rank serves as a spare of the other ranks on the
same channel. The spare rank is held in reserve and is not used as active memory. The spare
rank must have an identical or larger memory capacity than all the other active memory ranks
on the same channel. After an error threshold is surpassed, the contents of that rank are
copied to the spare rank. The failed rank of memory is taken offline, and the spare rank is put
online and used as active memory in place of the failed rank.

The memory DIMM installation sequence when using rank-sparing mode is identical to
independent channel mode, as described in “Memory DIMM installation: Independent
channel and rank-sparing modes” on page 254.

Mirrored-channel mode
In mirrored-channel mode, memory is installed in pairs. Each DIMM in a pair must be identical
in capacity, type, and rank count. The channels are grouped in pairs. Each channel in the
group receives the same data. One channel is used as a backup of the other, which provides
redundancy. The memory contents on channel 0 are duplicated in channel 1, and the memory
contents of channel 2 are duplicated in channel 3. The DIMMs in channel 0 and channel 1
must be the same size and type. The DIMMs in channel 2 and channel 3 must be the same
size and type. The effective memory that is available to the system is only half of what is
installed.

Because memory mirroring is handled in hardware, it is operating system-independent.

Consideration: In a two processor configuration, memory must be identical across the two
processors to enable the memory mirroring feature.
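The capacity trade-off between the three modes can be sketched as simple arithmetic. This is an illustration only; for rank-sparing, the exact loss depends on the size of the reserved rank, so a uniform-rank simplification is assumed here.

```python
def usable_memory_gb(installed_gb, mode, ranks_per_channel=2):
    """Approximate usable capacity for the x240 memory modes (sketch).

    independent:  all installed memory is available to the operating system
    mirrored:     channels are paired, so half the installed memory is usable
    rank_sparing: one rank per channel is reserved (uniform ranks assumed)
    """
    if mode == "independent":
        return installed_gb
    if mode == "mirrored":
        return installed_gb / 2
    if mode == "rank_sparing":
        return installed_gb * (ranks_per_channel - 1) / ranks_per_channel
    raise ValueError("unknown memory mode")
```

For example, 384 GB installed in mirrored-channel mode leaves 192 GB visible to the operating system.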



Figure 5-30 shows the E5-2600 series processor with the four memory channels and which
channels are mirrored when operating in mirrored-channel mode.

(Figure content: channel 0 is mirrored with channel 1, and channel 2 is mirrored with
channel 3; each DIMM forms a mirrored pair with the corresponding DIMM in the partner
channel.)

Figure 5-30 Showing the mirrored channels and DIMM pairs when in mirrored-channel mode

For more information about the memory DIMM installation sequence when mirrored channel
mode is used, see “Memory DIMM installation: Mirrored-channel” on page 257.

DIMM installation order


This section describes the preferred order in which DIMMs should be installed, based on the
memory mode that is used.

Memory DIMM installation: Independent channel and rank-sparing modes


The following guidelines apply only when the processors are operating in independent
channel mode or rank-sparing mode.

The x240 boots with one memory DIMM installed per processor. However, the suggested
memory configuration balances the memory across all the memory channels on each
processor to use the available memory bandwidth. Use one of the following suggested
memory configurations:
򐂰 Four, eight, or 12 memory DIMMs in a single processor x240 server
򐂰 Eight, 16, or 24 memory DIMMs in a dual processor x240 server

This sequence spreads the DIMMs across as many memory channels as possible. For best
performance and to ensure a working memory configuration, install the DIMMs in the sockets
as shown in Table 5-52 on page 255 and Table 5-53 on page 256.
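The balancing rule — one DIMM per channel, spread across as many channels as possible, filling the last connector of each channel first — can be expressed as a round-robin over the channel map from Table 5-48. This sketch is illustrative only; the tables that follow give the officially suggested socket order.

```python
# Connector lists per (processor, channel), from Table 5-48. Within each
# channel, the *last* connector is populated first, per the text above.
CHANNELS = [
    (1, [4, 5, 6]), (1, [1, 2, 3]), (1, [7, 8, 9]), (1, [10, 11, 12]),
    (2, [22, 23, 24]), (2, [19, 20, 21]), (2, [13, 14, 15]), (2, [16, 17, 18]),
]

def install_order(num_dimms, processors=2):
    """Suggested connector order: one DIMM per channel, round-robin (sketch)."""
    # Reverse each channel's list so its last connector is filled first.
    active = [conns[::-1] for proc, conns in CHANNELS if proc <= processors]
    order = []
    for slot in range(3):           # up to 3 DIMMs per channel
        for conns in active:
            order.append(conns[slot])
    return order[:num_dimms]
```

For example, with one processor the first four DIMMs land in connectors 6, 3, 9, and 12 — one per channel, last socket first.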

Table 5-52 shows DIMM installation if you have one processor installed.

Table 5-52 Suggested DIMM installation for the x240 with one processor installed
(Table columns: number of processors, number of DIMMs, an optimal-memory-configuration
marker (a), and DIMM sockets 1-24 grouped by processor and memory channel; each row marks
which sockets to populate for a given DIMM count.)
1 1