www.ibm.com/redbooks
ibm.com
yourdotcom
Trademarks
The following are trademarks of the International Business Machines Corporation in the United States, other countries, or both.
Not all common law marks used by IBM are listed on this page. Failure of a mark to appear does not mean that IBM does not use the mark, nor does it mean that the product is not actively marketed or is not significant within its relevant market. Those trademarks followed by ® are registered trademarks of IBM in the United States; all others are trademarks or common law marks of IBM in the United States.
Announced 2/08 Server w/ up to 77 PU cores 5 models Up to 64-way Granular Offerings for up to 12 CPs PU (Engine) Characterization
CP, SAP, IFL, ICF, zAAP, zIIP
Announced 10/08 Server w/ 12 cores Single model Up to 5-way CPs High levels of Granularity available
130 Capacity Indicators
PU (Engine) Characterization
CP, SAP, IFL, ICF, zAAP, zIIP
On Demand Capabilities
CoD, CIU, CBU, On/Off CoD, CPE
On Demand Capabilities
CoD, CIU, CBU, On/Off CoD, CPE
Channels
Four LCSSs 2 Subchannel Sets MIDAW facility 63.75K subchannels Up to 1024 ESCON channels Up to 336 FICON channels FICON Express2, 4 and 8 zHPF OSA 10 GbE, GbE, 1000BASE-T InfiniBand Coupling Links
Channels
Two LCSSs 2 Subchannel Sets MIDAW facility 63.75K subchannels Up to 480 ESCON channels Up to 128 FICON channels FICON Express2, 4 and 8 zHPF OSA 10 GbE, GbE, 1000BASE-T InfiniBand Coupling Links
Configurable Crypto Express3 Parallel Sysplex clustering HiperSockets up to 16 Up to 60 logical partitions Enhanced Availability Operating Systems
z/OS, z/VM, z/VSE, TPF, z/TPF, Linux on System z
Configurable Crypto Express3 Parallel Sysplex clustering HiperSockets up to 16 Up to 30 logical partitions Enhanced Availability Operating Systems
z/OS, z/OS.e, z/VM, z/VSE, TPF, z/TPF, Linux on System z
Announced 7/10 Server w/ up to 96 PU cores 5 models Up to 80-way Granular Offerings for up to 15 CPs PU (Engine) Characterization
CP, SAP, IFL, ICF, zAAP, zIIP
Announced 7/10 Model 002 for z196 or z114 zBX Racks with:
BladeCenter Chassis N + 1 components Blades Top of Rack Switches 8 Gb FC Switches Power Units Advanced Management Modules
On Demand Capabilities
CoD, CIU, CBU, On/Off CoD, CPE
PU (Engine) Characterization
CP, SAP, IFL, ICF, zAAP, zIIP
On Demand Capabilities
CoD, CIU, CBU, On/Off CoD, CPE
Channels
Four LCSSs 3 Subchannel Sets MIDAW facility Up to 240 ESCON channels Up to 288 FICON channels FICON Express8 and 8S zHPF OSA 10 GbE, GbE, 1000BASE-T InfiniBand Coupling Links
Up to 112 Blades
IBM Smart Analytics Optimizer Solution POWER7 Blades IBM System x Blades IBM WebSphere DataPower Integration Appliance XI50 for zEnterprise (M/T 2462-4BX) Operating Systems AIX 5.3 and higher Linux for x Blades Microsoft Windows for x Blades Hypervisors PowerVM Enterprise Edition Integrated Hypervisor for System x
Channels
Two LCSSs 2 Subchannel Sets MIDAW facility Up to 240 ESCON channels Up to 128 FICON channels FICON Express8 and 8S zHPF OSA 10 GbE, GbE, 1000BASE-T InfiniBand Coupling Links
Configurable Crypto Express3 Parallel Sysplex clustering HiperSockets up to 32 Up to 60 logical partitions Enhanced Availability Unified Resource Manager Operating Systems
z/OS, z/VM, z/VSE, z/TPF, Linux on System z
Configurable Crypto Express3 Parallel Sysplex clustering HiperSockets up to 32 Up to 30 logical partitions Unified Resource Manager Operating Systems
z/OS, z/VM, z/VSE, TPF, z/TPF, Linux on System z
[Figure: CPU frequency (MHz) of mid-range System z servers by year, rising to 3.8 GHz with the z114]
- 1997: Multiprise 2000 - 1st full-custom mid-range CMOS S/390
- 1999: Multiprise 3000 - internal disk; IFL introduced on midrange
- 2002: z800 - full 64-bit z/Architecture
- 2004: z890 - superscalar CISC pipeline
- 2006: z9 BC - system-level scaling
- 2008: z10 BC - architectural extensions
- 2011: z114 - higher frequency CPU, additional architectural extensions and new cache structure
z114 Overview
- Machine Type: 2818
- 2 models: M05 and M10
- Single frame, air cooled
- Non-raised floor option available
- Overhead cabling and DC power options
- Processor Units (PUs):
  - 7 PU cores per processor drawer (one drawer for M05, two for M10)
  - Up to 2 SAPs per system, standard
  - 2 spares designated for Model M10
  - Dependent on the H/W model, up to 5 or 10 PU cores available for characterization: Central Processors (CPs), Integrated Facility for Linux (IFLs), Internal Coupling Facility (ICFs), System z Application Assist Processors (zAAPs), System z Integrated Information Processors (zIIPs), and optional additional System Assist Processors (SAPs)
  - 130 capacity settings
- Memory:
  - Up to 256 GB for the system, including HSA
  - System minimum: 8 GB (Model M05), 16 GB (Model M10)
  - 8 GB HSA, separately managed
  - RAIM standard
  - Maximum for customer use: 248 GB (Model M10)
  - Increments of 8 or 32 GB
- I/O:
  - Support for non-PCIe channel cards
  - Introduction of the PCIe channel subsystem
  - Up to 64 PCIe channel cards
  - Up to 2 Logical Channel Subsystems (LCSSs)
- STP optional (no ETR)
When is Model M10 (which requires the 2nd processor drawer) needed?
- More than 5 customer PUs
- More than 120 GB memory
- More than 4 fanouts for additional I/O connectivity, especially PSIFB links
It depends: the numbers vary with drawers, I/O features and PSIFB links.
2011 IBM Corporation
z114
- Improved PSIFB coupling links; physical coupling links increased to 72 (Model M10)
- New 32-slot PCIe-based I/O drawer
- Increased granularity of I/O adapters
- New form factor I/O adapters, e.g. FICON Express8S and OSA-Express4S
- Humidity and altitude smart sensors
- Improved processor cache design
- New and additional instructions
- On Demand enhancements
- CFCC Level 17 enhancements
- Cryptographic enhancements
- 6 and 8 GBps interconnects
- 2 new OSA CHPID types: OSX and OSM
- Optional high-voltage DC power
- Optional overhead I/O cable exit
- zBX-002 with ISAOPT, POWER7, DataPower XI50z and IBM System x blades
- NRF support with either top-exit or bottom-exit I/O and power
- Reclassification from general business environment to data center
zBX
- One hardware model; a zBX is controlled by one specific z196 or z114
- Up to 4 racks (B, C, D and E), with 2 BladeCenter chassis per rack
- Non-acoustic doors standard; optional rear door heat exchanger; optional rear acoustic door
- Redundant power, cooling and management modules
- 10 GbE and 1000BASE-T network modules; 8 Gb FC modules
- Up to 112 blades: IBM Smart Analytics Optimizer, POWER7 blades, IBM System x blades, and the IBM WebSphere DataPower Integration Appliance XI50 for zEnterprise
- HMCs required for Unified Resource Manager
- An additional zBX-owned HMC is required if the system is maintained by a third party
[Figure: z114 front and rear views, showing the 2 x Support Elements]
System resources are split between 2 drawers (Model M10). The second processor drawer (Model M10) provides:
- Increased specialty engine capability
- Increased memory capability
- Increased I/O capability
- More coupling links than z10 BC
- More I/O features than z10 BC
Note: Unlike the z196 books, add/remove/repair of a processor drawer is disruptive.
[Figure: z114 processor drawer layout: 2 PU SCMs, 1 SC SCM, 2 x OSC, 2 x FSP, HCA and 2 x I/O fanouts, 2 x DCA, and the inter-processor-drawer connector]
[Figure: Model M10 rear view: processor drawers #1 and #2 with HCA2 or HCA3 fanouts, OSC and FSP cards]
The Oscillator / Fabric flex cable connects Drawer 0 with Drawer 1. It is used to transport oscillator signals (from Drawer 0 to Drawer 1) and to interconnect the fabric buses.
- 2 Flexible Support Processor (FSP) card slots, providing support for the Service Network subsystem
- 4 fanout card slots, providing support for the I/O subsystem and/or coupling
- 2 x oscillators with Pulse Per Second (PPS)
- 2 Flexible Support Processor (FSP) card slots, providing support for the Service Network subsystem
- 4 fanout card slots, providing support for the I/O subsystem and/or coupling
- 2 x oscillators, no PPS
Note: If the NTP LAN goes through the HMC, PPS cannot be used.
[Table: PU counts by model, M05 / M10: 0-5 / 0-5; 0-2 / 0-5; 0-2 / 0-5; 0-5 / 0-10; dedicated spares 0 / 2 (engine-type row labels were lost in extraction)]
- Model structure is based on the number of drawers
- M05 sparing is based on prior Business Class (BC) offerings: no dedicated spares
- M10 sparing is based on Enterprise Class (EC) offerings: dedicated spares
- SAP and PU allocation/sparing in the M10:
  - Default assignment is one SAP per drawer and one spare per drawer
  - Spill and fill CPs low to high; spill and fill specialty engines high to low
  - Two defective PUs may cause the default assignment to spill and fill into the second processor drawer
  - LPAR has the capability to request PUs of a specified type to be grouped together in a book/drawer (i.e. LPAR may change the default assignment)
- Disruptive upgrade from M05 to M10
- No model downgrades
- Upgrades from z9 BC and z10 BC into either model M05 or M10
- Only the M10 will upgrade to z196 Model M15 (air cooled only)
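The "spill and fill" placement rule above can be sketched in a few lines. This is a hypothetical illustration, not IBM's microcode: CPs fill drawers low to high, specialty engines fill high to low, and the drawer capacities and labels here are assumptions for the example.

```python
def assign_pus(n_cps, n_specialty, drawer_capacity=(5, 5)):
    """Toy 'spill and fill' placement for a two-drawer M10.

    CPs fill drawer 0 first (low to high); specialty engines fill
    drawer 1 first (high to low). Returns per-drawer engine lists.
    """
    drawers = [[] for _ in drawer_capacity]
    # CPs: spill and fill low to high
    for _ in range(n_cps):
        for i in range(len(drawers)):
            if len(drawers[i]) < drawer_capacity[i]:
                drawers[i].append("CP")
                break
    # Specialty engines: spill and fill high to low
    for _ in range(n_specialty):
        for i in reversed(range(len(drawers))):
            if len(drawers[i]) < drawer_capacity[i]:
                drawers[i].append("SPEC")
                break
    return drawers

layout = assign_pus(n_cps=3, n_specialty=4)
```

With 3 CPs and 4 specialty engines, the CPs land in drawer 0 and the specialty engines in drawer 1; only when one drawer fills does the allocation spill into the other.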
z114 PU Conversions
Specialty engines are converted "any to any", including additional SAPs and unassigned IFLs.

  From \ To        IFL  Unassigned IFL  ICF  zAAP  zIIP  Additional SAP
  IFL               x        Yes        Yes   Yes   Yes       Yes
  Unassigned IFL   Yes        x         Yes   Yes   Yes       Yes
  ICF              Yes       Yes         x    Yes   Yes       Yes
  zAAP             Yes       Yes        Yes    x    Yes       Yes
  zIIP             Yes       Yes        Yes   Yes    x        Yes
  Additional SAP   Yes       Yes        Yes   Yes   Yes        x
CP capacity conversions are handled separately from the specialty engines under the following rules:
CP capacities are converted "any to any", allowing MIPS upgrades, downgrades and horizontal movement. CP capacities are handled separately from specialty engines, meaning that when the CP capacity on the target machine is constructed using fewer CPs, the remaining CPs are not available to be converted to specialty engines.
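The conversion rules can be captured as a simple predicate. This is an illustrative sketch of the table above (the function name is my own): specialty engines convert any-to-any, the diagonal is disallowed, and CP capacity changes are handled by a separate process.

```python
# Specialty engine types that participate in "any to any" conversion.
SPECIALTY = {"IFL", "Unassigned IFL", "ICF", "zAAP", "zIIP", "Additional SAP"}

def conversion_allowed(src, dst):
    """True if a characterized PU may be converted from src to dst.

    Mirrors the conversion matrix: any specialty-to-specialty pair is
    allowed except converting a type to itself (the 'x' diagonal).
    CP capacity conversions are governed by separate rules.
    """
    if src == dst:
        return False
    return src in SPECIALTY and dst in SPECIALTY
```

For example, `conversion_allowed("zAAP", "zIIP")` is true, while `conversion_allowed("CP", "IFL")` is false because CP capacity is not part of the specialty-engine matrix.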
Examples:
  1-way: A01 B01 C01 D01 E01 F01
  2-way: A02 B02 C02 D02 E02 F02
  3-way: A03 B03 C03 D03 E03 F03
  4-way: A04 B04 C04 D04 E04 F04
  5-way: A05 B05 C05 D05 E05 F05
2-way to 1-way capacity downgrade: the PU in the green column must maintain its high-water mark and cannot be purchased as a specialty engine. 1-way to 2-way capacity upgrade: the high-water mark is now in the red column, and the PU in the green column can be used during the purchase of a specialty engine.
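The high-water-mark behavior can be modeled with a tiny record. This is a deliberately simplified sketch: the class and the lexicographic comparison of capacity-setting identifiers are stand-ins of my own (real capacity comparisons use MSU/MIPS tables, not string order), but the bookkeeping idea matches the example above.

```python
class CapacityRecord:
    """Toy model of a machine's purchased-capacity high-water mark."""

    def __init__(self, setting):
        self.setting = setting        # e.g. "A02"
        self.high_water = setting     # highest capacity ever purchased

    def change_capacity(self, new_setting):
        """Apply an upgrade or downgrade; the mark only moves upward."""
        self.setting = new_setting
        # Lexicographic order is only a stand-in for a real capacity table.
        if new_setting > self.high_water:
            self.high_water = new_setting

rec = CapacityRecord("A02")
rec.change_capacity("A01")   # downgrade: setting drops, mark stays at A02
```

After the downgrade the machine runs at A01 but the mark stays at A02, which is why the freed PU cannot simply be bought as a specialty engine.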
[Table: z9 BC S07 / z10 BC E10 / z114 M05 / z114 M10 comparison. Recoverable values: z10 BC E10: 673 MIPS uni, 26-2760 MIPS range, 256 GB, 10 characterizable PUs, 130 capacity settings, upgrade to z114. z114: 782 MIPS uni, 26-3139 MIPS range, 128 GB (M05) / 256 GB (M10), 5 or 10 characterizable PUs, 130 capacity settings, up to 240 ESCON and 128 FICON channels, 6.0 / 8.0 GB/sec interconnect]
- 8 card slots per I/O drawer, 32 per PCIe I/O drawer
- FICON count is based on 2 PCIe I/O drawers or 4 I/O drawers
- The limit of 240 ESCON channels is consistent with the Statement of Direction
- The limit of 0 ICB-4 links is consistent with Statements of Direction
- 8 ports of 12x and 16 ports of 1x PSIFB links available on model M05, based on 4 HCA capability
- 16 ports of 12x and 32 ports of 1x PSIFB links available on model M10, based on 8 HCA capability
z114 M10
[Figure: processor drawer (LGA) with 2 PU SCMs, 1 SC SCM, memory DIMMs and DCA power]
z114 SCM vs z196 MCM Comparison (same PU and SC chips)

z114 SCMs:
- PU SCM: 50 mm x 50 mm, fully assembled
- Quad-core chip with 3 or 4 active cores
- 2 PU SCMs for M05 and 4 PU SCMs for M10
- PU chip size: 23.498 mm x 21.797 mm

[Figure: z196 MCM chip layout: PU 0-5, SC 0 and SC 1, S00/S01/S10/S11]
[Figure: z114 PU chip floorplan: 4 cores, 1.5 MB L2 per core, 12 MB shared L3 with two L3 controllers and L3B, MCU, CoP, memory controller (MC) I/Os and GX I/Os]
SC chip:
- 12S0 45 nm SOI technology, 13 layers of metal
- Chip area 478.8 mm^2 (24.4 mm x 19.6 mm)
- 7100 power C4s, 1819 signal C4s
- 1.5 billion transistors; 1 billion cells for eDRAM
- eDRAM shared L4 cache: 96 MB per SC chip
- 6 CP chip interfaces
- 3 fabric interfaces (1 used)
- 2 clock domains
- 5 unique chip voltage supplies (2 used)
[Figure: M10 drawer memory and fanout topology: CP and SC chips, GX fanout slots, and groups of 5 DIMMs per memory channel set in each drawer. Color legend: same as in z196 models / different.]

- Up to 20 DIMM slots per drawer
- 5-channel memory RAIM protection
- 4-level cache hierarchy:
  - Split 64 KB I-L1 and 128 KB D-L1 per core
  - 1.5 MB L2 per core
  - 12 MB on-chip L3, half that of z196
  - 96 MB node-level L4, half that of z196
- eDRAM (embedded) caches: L3, L4 and SP key cache
Topology

[Figure: z196 multi-book fabric: PU and SC chips per book, FBC fabric bus connections between books, GX buses and PSI]

- z114 is built from a subset of the z196 design and chipset
- Half the L3 and L4 cache sizes
- Half the chip-to-chip bandwidth
z196 (6 L3s, 24 L1/L2s):
- L1: 64 KB I + 128 KB D per core; 8-way D-L1, 4-way I-L1; 256 B line size
- L2: private 1.5 MB; inclusive of L1s; 12-way set associative; 256 B cache line size
- L3: shared 24 MB; inclusive of L2s; 12-way set associative; 256 B cache line size
- L4: 192 MB; inclusive; 24-way set associative; 256 B cache line size

z114 (2 L3s, 7 L1/L2s):
- L1: 64 KB I + 128 KB D per core; 8-way D-L1, 4-way I-L1; 256 B line size
- L2: private 1.5 MB; inclusive of L1s; 12-way set associative; 256 B cache line size
- L3: shared 12 MB; inclusive of L2s; 12-way set associative; 256 B cache line size
- L4: 96 MB; inclusive; 24-way set associative; 256 B cache line size
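The cache figures above can be encoded as data to make the z196/z114 relationship explicit: L1 and L2 are identical per core, while the shared L3 and L4 on the z114 are exactly half the z196 sizes. The dictionaries below simply restate the slide's numbers.

```python
# Per-level cache sizes from the comparison (KB for L1, MB otherwise).
z196 = {"L1I_KB": 64, "L1D_KB": 128, "L2_MB": 1.5, "L3_MB": 24, "L4_MB": 192}
z114 = {"L1I_KB": 64, "L1D_KB": 128, "L2_MB": 1.5, "L3_MB": 12, "L4_MB": 96}

def halved_levels(big, small):
    """Return the cache levels where the smaller machine has half the size."""
    return sorted(k for k in big if big[k] == 2 * small[k])

print(halved_levels(z196, z114))
```

Running this confirms that only the shared levels (L3, L4) differ, which matches the "half that of z196" notes elsewhere in the deck.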
L3 Cache
- 12 MB eDRAM
- Gear ratio: 2:1 dataflow, 4:1 control flow
- 2 address slices, address bit 55 selection
- 16 octword cache interleaves
- Address and data bit stacking
- Store-in cache; store dual-piped per L3 slice
- 16 B 2:1 store/fetch buffers planned between L2 and L3
- Extension of the z10 cache coherency scheme
- CP/SC merged transaction bussing (addr/control + data)
- I/O fetch reserved cache slots
[Figure: pipeline timing diagrams for instructions 1-5, showing stalls on L1 misses over time]
[Figure: core co-processor layout: TLB, input/output buffers (IB/OB), two 16K compression/expansion (Cmpr Exp) units, crypto cipher and crypto hash engines]
z114
- Bi-nodal: 2 processor drawers, up to 14 PU cores (2 PU SCMs and 1 SC SCM per drawer)
- 64 KB I and 128 KB D L1 caches per core
- 1.5 MB private L2 per core
- 12 MB shared L3 per 4-core PU chip
- 96 MB shared L4 per node (SC)
- Up to 256 GB on 20 DIMMs
The memory technology introduced on the z196 is also used on the z114: Redundant Array of Independent Memory (RAIM), which in the disk industry is known as RAID. It protects against Unscheduled Incident Repair Actions (UIRAs) caused by a DIMM failure. DIMM failures include failures of any component on the DIMM, as well as failures of portions of the memory controller or card that are isolated to one memory channel.
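The core idea behind RAIM can be sketched with a parity scheme, much like RAID: stripe data across four memory channels and keep an XOR check word on a fifth, so losing any single channel is recoverable. This is a deliberately simplified illustration of the principle; the real z114 RAIM design is considerably more sophisticated.

```python
def raim_write(data4):
    """Store 4 data words plus an XOR check word on a 5th channel."""
    parity = 0
    for word in data4:
        parity ^= word
    return list(data4) + [parity]

def raim_recover(channels, failed):
    """Rebuild the word lost on one failed channel from the other four."""
    rebuilt = 0
    for i, word in enumerate(channels):
        if i != failed:
            rebuilt ^= word
    return rebuilt

stored = raim_write([0x11, 0x22, 0x33, 0x44])
# Losing channel 2 (a DIMM failure) still lets us rebuild 0x33.
recovered = raim_recover(stored, failed=2)
```

The same XOR trick works whichever channel fails, including the parity channel itself, which is why a single-DIMM failure does not force an unscheduled repair action.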
[Figure: RAIM dataflow: two MCUs with key caches, 2 B channel interfaces and data check logic]
The flexible memory option is not available on the z114; on the z196 it is used for Enhanced Book Availability.
10 DIMMs per drawer; no cascading.
FC 1903 - Memory Capacity Increment: used to determine memory billing for new memory being added to server configurations. Indicates 8 GB (or, in larger configurations, 32 GB) LICCC increments of memory capacity. Charged by increments when Plan Ahead memory is enabled (conversion). Subsequent memory upgrade orders will use up the Plan Ahead memory first.

FC 1904 - 16 GB Feature Converted Memory: used to determine billing for memory on MES upgrades coming from System z9 and z10 servers.

FC 1993 - Preplanned Memory: tracks the quantity of 8 GB physical increments. Charged (at half price) when physical memory is installed.
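A small helper makes the increment rule concrete: capacity grows in 8 GB LICCC steps at smaller sizes and 32 GB steps in larger configurations. The function and the 120 GB boundary below are illustrative assumptions of mine (the exact crossover comes from the plugging charts), chosen only to show how the two increment sizes interact.

```python
def next_memory_size(current_gb, small_step=8, large_step=32, boundary=120):
    """Next orderable memory size, assuming 8 GB increments below an
    illustrative boundary and 32 GB increments at or above it."""
    step = small_step if current_gb < boundary else large_step
    return current_gb + step

# 48 GB grows by the small 8 GB increment; 152 GB by the large 32 GB one.
steps = [next_memory_size(48), next_memory_size(152)]
```

Under these assumed parameters, an upgrade from 48 GB lands at 56 GB, while an upgrade from 152 GB lands at 184 GB, matching the pattern of capacities listed in the plugging tables.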
[Figure: 10 x 8 GB DIMMs]

Plan Ahead Memory
- Memory is pre-plugged based on a target capacity (Feature Size, e.g. 8, 16, 24, 32, 40, 48 or 56 GB) specified by the customer
- Enabled by LICCC, concurrently
- FC 1993 tracks the quantity of 8 GB physical increments; charged (at half price) when physical memory is installed
- FC 1903 generally indicates 8 GB (or, in larger configurations, 32 GB) LICCC increments of memory capacity; charged by increments when Plan Ahead memory is enabled
- Subsequent memory upgrade orders will use up the Plan Ahead memory first
[Table: M05 memory plugging. DIMM pair sizes of 4/4, 4/8 (or 8/4), 8/8, 4/16 (or 16/4), 8/16 (or 16/8) and 16/16 GB map to feature sizes of 16, 24, 32, 40, 48 and 56 GB, then 64, 72, 80 and 88 GB, with larger configurations at 152, 184, 216 and 248 GB in 32 GB increments]
Memory upgrades within the same color (except white) are concurrent, without the need for Memory Plan Ahead.

[Table: M10 (2 CPC drawers) memory plugging. 10 DIMMs plugged per drawer (10/10); DIMM sizes range from 4/4 GB through 4/8, 8/8, 4/16 and 8/16 up to 16/16 GB; dial max values include 56, 88, 120, 152, 184 and 248 GB]
Terminology

- I/O drawer: introduced with the z10 BC and also supported on z196 and z114; has 8 I/O card slots.
- I/O cage: available since the z900 (not supported on z10 BC or z114); has 28 I/O card slots.
- PCIe switch: industry-standard PCIe switch; an application-specific integrated circuit (ASIC) used to fan out (multiplex) the PCI bus to the I/O cards within the PCIe I/O drawer.
- PCIe I/O drawer: new I/O drawer that supports the PCIe bus I/O infrastructure; has 32 I/O card slots.
- PCIe interconnect: card in the PCIe I/O drawer that contains the PCIe switch ASIC (the z10 uses IFB-MP; the z9 uses STI-MP).
- PCIe fanout: card on the front of the processor book that supports the PCIe Gen2 bus; used exclusively to connect to the PCIe I/O drawer; supports FICON Express8S and OSA-Express4S.
- HCA2-C fanout: used instead of a PCIe fanout for I/O; continues to support the cards in the I/O cage and I/O drawer.
- HCA3-O LR fanout: for 1x InfiniBand at unrepeated distances up to 10 km; 5 Gbps link data rate; 4 ports per fanout; may operate at 2.5 Gbps or 5 Gbps, based on the capability of the DWDM; exclusive to z196 and z114; can communicate with an HCA2-O LR fanout; third-generation Host Channel Adapter.
- HCA3-O fanout: for 12x InfiniBand at 150 meters; supports the 12x IFB and 12x IFB3 protocols; improved service times when using the 12x IFB3 protocol; 6 GBps link data rate; two ports per fanout; can communicate with an HCA2-O fanout on z196 or z10; cannot communicate with an HCA1-O fanout on z9; third-generation Host Channel Adapter.
Fanout and link terminology:
- HCA: Host Channel Adapter; an HCA fanout has an AID (adapter ID) instead of a PCHID
- PSIFB: Parallel Sysplex using InfiniBand; CHPID type CIB; path for communication on z196, z114, z10 and System z9
- 12x InfiniBand: 12 lanes of fiber in each direction
- 1x InfiniBand: Long Reach; one pair of fibers
- 12x InfiniBand3 (IFB3): improved service times of 12x IFB on HCA3-O
- HCA2-C: copper; connects to the I/O cage (z10 EC or z196 only) or I/O drawer
- HCA2-O / HCA3-O: optical; 12x InfiniBand coupling
- HCA2-O LR / HCA3-O LR: optical; 1x InfiniBand coupling
- PCIe fanout: copper; connects to the PCIe drawer (z196/z114)
- ICB-4: copper coupling (z9/z10; not available on z196/z114)
[Figure: internal link technology by generation. z9: eSTI (IBM) extended links from processor to I/O cage, mSTI (IBM) within the cage, ISC (IBM) coupling links. z10 and z196: industry-standard InfiniBand DDR to the I/O cage backplane, mSTI (IBM) within the cage.]
[Figure: STI connector and cable generations: z900/z800, z990/z890, z9]
z114 and z196 at GA2 support two different internal I/O infrastructures:

- The InfiniBand I/O infrastructure first made available on z10:
  - InfiniBand fanouts supporting the current 6 GBps InfiniBand I/O interconnect
  - InfiniBand I/O card domain multiplexers with Redundant I/O interconnect in:
    - the 14U, 28-slot, 7-domain I/O cage (z196 only)
    - the 5U, 8-slot, 2-domain I/O drawer (z114 and z196)
  - Selected legacy I/O feature cards (carry forward and new build)
- A new PCI Express 2 I/O infrastructure:
  - PCIe fanouts supporting a new 8 GBps PCIe I/O interconnect
  - PCIe switches with Redundant I/O interconnect for I/O domains in a new 7U, 32-slot, 4-domain I/O drawer (z114 and z196 GA2)
  - New FICON Express8S and OSA-Express4S I/O feature cards
  - Based on selected industry-standard PCIe I/O
  - Designed to reduce purchase granularity (fewer ports per card), improve performance, increase I/O port density, and save energy
I/O drawer (5U):
- Supports all z10 BC and z196 GA1 I/O and Crypto Express3 cards
- Supports 8 I/O cards, 4 front and 4 back in horizontal orientation, in two 4-card domains (A and B)
- Requires two IFB-MP daughter cards, each connected to a 6 GBps InfiniBand interconnect, to activate both domains
- To support Redundant I/O Interconnect (RII) between the two domains, the two interconnects must come from two different InfiniBand fanouts (two fanouts can support two of these drawers)
- Concurrent add and repair
- Requires 5 EIA units of space (8.75 inches / 222 mm)

[Figure: I/O drawer front and rear: DCAs, I/O cards in domains A and B, IFB-MP A/B with RII]
[Figure: 7U PCIe I/O drawer: domains 0 and 2 at the front, domains 3 and 1 at the rear]
[Figure: processor-to-I/O packaging. Processor GX I/O buses connect through HCA fanouts and extended links to the cage or drawer midplane, with failover between buses. Approximate dimensions: I/O cage 30 in deep in a 22.75 in frame; I/O drawer 8.75 in high.]
Top views: I/O cage, I/O drawer, PCIe I/O drawer domains

[Figure: slot-to-domain maps. I/O cage: 28 slots in 7 I/O domains (A-G) with IFB-MP cards and RII, front 16 / rear 12. I/O drawer: 8 slots in 2 domains (A, B) with IFB-MP A/B and RII. PCIe I/O drawer: 32 slots in 4 domains (0-3) with PCIe interconnects, RII and FSP cards, front 16 / rear 16.]
I/O Drawer
- No changes from its use on the 2098 (z10 BC) and 2817 (z196)
- The addition or removal of a drawer is a concurrent MES
- IFB-MP cards as used in the I/O cage
- DCAs are unchanged from the 2098 (z10 BC)
- Each drawer houses 8 I/O cards over 2 domains
- Min/max is 0/4 for new build and 0/4 for MES
- All card assemblies in the drawer can be added concurrently
z114 Fanout Usage for I/O (Mixed PCIe and I/O drawers)

PCIe fanout support:
- 1 PCIe drawer = 2 PCIe fanouts
- 2 PCIe drawers = 4 PCIe fanouts

[Figure: processor drawer 0, with memory and PUs driving PCIe (8x) fanouts to the PCIe I/O drawer channels (FICON Express8S, OSA-Express4S 10 GbE) and 2 GBps mSTI links to FPGA-based ESCON cards in I/O drawer 1]
Different PCIe fanouts support the domain pairs 0 and 1, and 2 and 3.
- Normal operation: each PCIe interconnect in a pair supports the eight I/O cards in its domain.
- Backup operation: one PCIe interconnect supports all 16 I/O cards in the domain pair.

[Figure: PCIe I/O drawer front and rear domain pairing]
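The Redundant I/O Interconnect behavior described above can be sketched as a routing rule: each domain normally uses its own PCIe interconnect, and on a failure its paired partner carries both domains. The function and data structure are illustrative stand-ins, not firmware logic.

```python
# Domain pairs in the PCIe I/O drawer: 0 with 1, and 2 with 3.
PAIRS = {0: 1, 1: 0, 2: 3, 3: 2}

def serving_interconnect(domain, failed):
    """Which PCIe interconnect currently serves this domain?

    Normal operation: a domain's own interconnect. Backup operation:
    the partner interconnect carries all 16 cards of the domain pair.
    """
    if domain not in failed:
        return domain
    partner = PAIRS[domain]
    if partner not in failed:
        return partner
    raise RuntimeError("both interconnects in the pair failed")

# Domain 0's interconnect fails: its partner (domain 1's) takes over.
backup = serving_interconnect(0, failed={0})
```

Note that pairing is strictly within 0/1 and 2/3: a failure of both interconnects in one pair is not covered by the other pair, which is why the two interconnects of a pair must come from different fanouts.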
Notes:
- Each HCA2-C and PCIe fanout has 2 ports
- Each I/O drawer has 2 I/O domains (4 I/O features per domain); all 4 domains are plugged to the HCA2-C fanouts
- Each PCIe drawer has 4 I/O domains (8 I/O features per domain); 2 domains at a time are plugged to each PCIe fanout
- The 4th I/O drawer uses one of the processor drawer locations
*The 1x links support a distance of up to 10 km without repeaters; RPQ 8P2263 or 8P2340 is required for 20 km support. Extended distances of up to 100 km are also possible using Dense Wavelength Division Multiplexers (DWDMs); going beyond 100 km requires RPQ 8P2263 or 8P2340.

HCA3-O can communicate with the z10 via HCA2-O. HCA3-O cannot communicate with the z9. HCA2 on zEnterprise can communicate with z9 HCA1.
[Figure: fanout card ports. HCA2-O, HCA2-O LR, HCA3-O and HCA3-O LR provide IFB ports; the PCIe fanout provides PCIe ports.]
Fanout cards (two-port host channel adapters dedicated to function):
- HCA2-C fanout: I/O interconnect; supports FICON, ESCON, OSA, ISC-3 and Crypto Express3 cards in I/O drawer and I/O cage domains; always plugged in pairs.
- HCA2-O fanout: 12x InfiniBand coupling links; CHPID type CIB for coupling; fiber optic external coupling link, 150 m.
- HCA2-O LR fanout: 1x InfiniBand coupling links, Long Reach (carry forward only for z114); CHPID type CIB for coupling; fiber optic external coupling link, 10 km unrepeated, 100 km repeated.
- PCIe fanout: I/O interconnect; supports FICON Express8S and OSA-Express4S in the PCIe drawer; always plugged in pairs.
- HCA3-O fanout: 12x InfiniBand coupling links; CHPID type CIB for coupling; fiber optic external coupling link, 150 m.
- HCA3-O LR fanout: 1x InfiniBand coupling links, Long Reach; CHPID type CIB for coupling; fiber optic external coupling link, 10 km unrepeated, 100 km repeated.
* Performance considerations may reduce the number of CHPIDs per port
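The fanout roles listed above lend themselves to a small lookup table. The dictionary below restates the deck's facts (role and external link) in code; the structure and helper function are my own illustrative choices.

```python
# Fanout type -> role and external link, per the descriptions above.
FANOUTS = {
    "HCA2-C":    {"role": "I/O",      "link": "InfiniBand copper to I/O cage/drawer"},
    "HCA2-O":    {"role": "coupling", "link": "12x InfiniBand, 150 m"},
    "HCA2-O LR": {"role": "coupling", "link": "1x InfiniBand, 10 km unrepeated"},
    "PCIe":      {"role": "I/O",      "link": "PCIe copper to PCIe I/O drawer"},
    "HCA3-O":    {"role": "coupling", "link": "12x InfiniBand, 150 m"},
    "HCA3-O LR": {"role": "coupling", "link": "1x InfiniBand, 10 km unrepeated"},
}

def coupling_fanouts():
    """Names of the fanouts that carry Parallel Sysplex coupling links."""
    return sorted(name for name, f in FANOUTS.items() if f["role"] == "coupling")
```

Querying the table makes the split obvious: HCA2-C and PCIe fanouts drive the I/O subsystem, while the four -O variants carry CHPID type CIB coupling links.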
Note: PSIFB links are direct from the HCAs and not from the I/O cage or drawers
FICON Express8S:
- Increased performance compared to FICON Express8
- 2, 4, 8 Gbps link rates
- 10KM LX: 9 micron single mode fiber; unrepeated distance 10 kilometers (6.2 miles); receiving device must also be LX
- SX: 50 or 62.5 micron multimode fiber; distance varies with link data rate and fiber type; receiving device must also be SX
- 2 channels of LX or SX (no mix)
- Small form factor pluggable (SFP) optics; concurrent repair/replace action for each SFP

[Figure: card layout: flash, SFP+ optics (LX or SX) and HBA ASIC]
[Figure: FICON Express8 10KM LX (FC 3325): four ports at 2, 4 or 8 Gbps, LC Duplex connectors]
FICON Express8 SX (FC 3326, LC Duplex):
- 2, 4, 8 Gbps link rates
- Small Form Factor pluggable optics for concurrent repair/replace
- Personalize as:
  - FC: native FICON and Channel-To-Channel (CTC); z/OS, z/VM, z/VSE, z/TPF, TPF, Linux on System z
  - FCP (Fibre Channel Protocol): support of SCSI devices; z/VM, z/VSE, Linux on System z
[Chart: I/O driver benchmark, I/Os per second, 4k block size, channel 100% utilized. ESCON: 1200. FICON Express4 and FICON Express2: 14000 native, 31000 with zHPF. FICON Express8: 20000 native, 52000 with zHPF (z10, z196). FICON Express8S: 23000 native, 92000 with zHPF (z196, z114).]

[Chart: I/O driver benchmark, MegaBytes per second, full-duplex, large sequential read/write mix. Up to 1600 MBps with FICON Express8S on z196/z114, a 108% increase.]
[Chart: FICON generations. zHPF I/Os per second: FE4 (4 Gbps) 31500 on z9/z10; FE8 (8 Gbps) on z10/z196; FE8S (8 Gbps) 92000 on z196/z114, a 10% increase. Throughput: FE8S 1600 MBps on z196/z114, a 108% increase over z10.]
The receiver side of the port should contain sufficient buffer credits to ensure the channel is fully utilized: at 8 Gbps, the receiver must contain at least 4 buffer credits per km of fiber distance.

In the short-block case the concern is response time, not data bandwidth. Buffer-to-buffer credits (BtoB credits) become an issue when large blocks are being transferred per I/O, so link bandwidth becomes the factor of interest; in this case the frames will almost always be full (2K).
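The rule of thumb above is a simple multiplication, shown here as a small calculator. The helper name and default are mine; the 4-credits-per-km figure at 8 Gbps comes from the text.

```python
def min_buffer_credits(distance_km, credits_per_km=4):
    """Minimum receiver buffer credits to keep an 8 Gbps FICON link
    fully utilized over the given unrepeated fiber distance."""
    return credits_per_km * distance_km

# A 10 km link at 8 Gbps needs at least 40 receiver buffer credits.
needed = min_buffer_credits(10)
```

If the receiver advertises fewer credits than this, the sender stalls waiting for credit returns and the link never reaches full bandwidth on large-block transfers.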
Integrated into the existing System z host configuration tools (HCD and HCM). Requirements:
- z114 or z196 server with FICON Express8S, FICON Express8 or FICON Express4
- Switch/director-attached fabric (no direct attachment)
- z/OS V1.12 (at least 1 LPAR for Dynamic I/O capability)
- First exploiter is the IBM System Storage DS8700: DS8000 licensed machine code level 6.5.15.xx (bundle version 75.15.xx.xx) or later; see the IBM PSP buckets
- Suggested: FICON DCM (z/OS dynamic channel management) to help manage performance
zDAC Value

Without I/O discovery and auto configuration, using HCD:
- Defining the fibre channel network: control unit addresses and ports are specified on paper; requires many people and transcribing from paper to HCD. Steps: multiple iterative steps; HCD: several panels per control unit; time: hours.
- Defining control units: control unit addresses and ports are specified manually, one at a time, in HCD. Steps: multiple iterative steps; HCD: 10 panels per control unit; time: hours.
- Error checking: manual; errors are not found until the IODF is activated or the device is brought online, requiring rework.

With zDAC:
- From the discovered control units, HCD retrieves and proposes control unit and device types and numbers, channel path assignments, partition access and OS device parameters.
- Definitions are automatically written into a specified target work IODF, which is created as a copy of the active or accessed IODF.
- Addresses and ports are discovered and defined electronically, using best practices and existing configurations. Steps: 1; HCD: 5 panels for the whole fibre channel network.
- Errors are discovered in the working IODF and can be corrected before activation.
The maximum number of FICON features varies with the mix of drawer types and the model of the system. All use LC Duplex connectors.
OM1: 62.5 micron at 200 MHz-km; OM2: 50 micron at 500 MHz-km; OM3: 50 micron at 2000 MHz-km. An Inter-Switch Link (ISL) is the link between two FICON directors; FICON features do not operate at 10 Gbps. * The link loss budget is the channel insertion loss as defined by the standard. # This distance and dB budget applies to FICON Express4 4KM LX features.
Designed to reduce the minimum round-trip networking time between z196/z114 systems (reduced latency) and to improve round-trip time at the TCP/IP application layer:
- OSA-Express3 and 4S 10 GbE: 45% improvement compared to OSA-Express2 10 GbE
- OSA-Express3 and 4S GbE: 45% improvement compared to OSA-Express2 GbE

Designed to improve throughput (mixed inbound/outbound):
- OSA-Express3 and 4S 10 GbE: 1.0 GBps @ 1492 MTU, 1.1 GBps @ 8992 MTU
- 3-4 times the throughput of OSA-Express2 10 GbE
- 0.90 of Ethernet line speed sending outbound 1506-byte frames
- 1.25 of Ethernet line speed sending outbound 4048-byte frames
The above statements are based on OSA-Express3 performance measurements performed in a test environment on a System z10 EC and do not represent actual field measurements. Results may vary.
OSA-Express4S GbE and 10 GbE (fiber) for the PCIe I/O drawer

10 Gigabit Ethernet (10 GbE):
- CHPID types: OSD, OSX
- Single mode (LR) or multimode (SR) fiber
- One port of LR or one port of SR; 1 PCHID/CHPID

Gigabit Ethernet (GbE):
- CHPID type: OSD (CHPID type OSN not supported)
- Single mode (LX) or multimode (SX) fiber
- Two ports of LX or two ports of SX; 1 PCHID/CHPID
- Small form factor optics, LC Duplex

[Figure: card layout: PCIe interface, FPGA and IBM ASIC]
[Chart: OSA-Express3 vs OSA-Express4S throughput, MBps: 615 to 1120 (80% increase); 1180 to 1680 (40% increase); 680 to 1180 (70% increase); 1240 to 2080 (70% increase)]

Notes: AWM on z/OS; z/OS is doing checksum; 1 megabyte per second (MBps) is 1,048,576 bytes per second; MBps represents payload throughput (it does not count packet and frame headers).
- CHPID type OSD: all OSA features; z196, z114, z10, System z9, zSeries. Queued Direct Input/Output (QDIO) architecture; TCP/IP traffic when Layer 3 (uses IP address); protocol-independent when Layer 2 (uses MAC address).
- CHPID type OSX: 10 GbE; z196, z114. OSA-Express for zBX; connectivity and access control to the intraensemble data network (IEDN) from z196 or z114 to zBX. Supported by z/OS and z/VM.

*IBM is working with its Linux distribution partners to include support in future Linux on System z distribution releases.
FC 3370 10 GbE LR, FC 3371 10 GbE SR (PCIe)
CHPID shared by two ports
4 ports - FC 3367 1000BASE-T
Description:
  OSC: TN3270E, non-SNA DFT, IPL CPCs and LPARs, OS system console operations
  OSD: TCP/IP traffic when Layer 3; protocol-independent when Layer 2
  OSE: TCP/IP and/or SNA/APPN/HPR traffic
  OSN: NCPs running under IBM Communication Controller for Linux (CDLC)
  OSM: Connectivity to intranode management network (INMN) from z196/z114 to zBX
Open Systems Adapter CHPID types: OSM (Express3 1000BASE-T) and OSX (Express3 and 4S 10 GbE)
Two new OSA CHPID types to support new types of z196/z114 networks A z196/z114 System can have up to 6 types of OSA CHPIDs
External (customer managed) networks
  Defined as CHPID types OSC, OSD, OSE, and OSN
  Existing customer provided and managed OSA ports used for access to the current customer external networks - no changes
Intranode management network (INMN)
  Defined as CHPID type OSM, OSA-Express for Unified Resource Manager
  When the PCIe adapter on 1000BASE-T is defined as CHPID type OSM, the second port cannot be used for anything else
  OSA-Express3 1000BASE-T configured as CHPID type OSM for connectivity to INMN from z196/z114 to Unified Resource Manager functions
  OSA connection via the Bulk Power Hub (BPH) on the z196/z114 to the Top of Rack (TOR) switches on zBX
Intraensemble data network (IEDN)
  Defined as CHPID type OSX, OSA-Express for zBX
  OSA-Express3 or 4S 10 GbE configured as CHPID type OSX for connectivity and access control to IEDN from z196/z114 to zBX
Functions Supported:
Dynamic I/O support HCD CP Query capabilities Ensemble Management for these new channel paths and their related subchannels.
z/VM 5.4 CHPID types OSX and OSM cannot be varied online
OSM (INMN)
OSA-Express3 1GbE 2 CHPIDs 2 PORTS/CHPID FC3367
OSX (IEDN)
OSA-Express3 10 GbE 2 CHPIDs 1 PORT/CHPID
IEDN Distances
MM (Short Range Optics):
  50 micron at 2000 MHz-km: 300 meters (984 feet)
  50 micron at 500 MHz-km: 82 meters (269 feet)
  62.5 micron at 200 MHz-km: 33 meters (108 feet)
SM (Long Range Optics): 10 km (6.2 miles)
Supports IOCP CHPID types: OSC, OSD, OSE, OSN, and OSM (ONLY 1000BASE-T). PCHID = xx0 & xx1
OSM IOCDS EXAMPLE:
CHPID PCHID=191,PATH=(CSS(0,1,2,3),23),TYPE=OSM,SHARED
CNTLUNIT CUNUMBR=0910,PATH=((CSS(0),23)),UNIT=OSM
IODEVICE ADDRESS=(0910,15),CUNUMBR=(0910),UNIT=OSA,UNITADD=00,MODEL=M,DYNAMIC=YES,LOCANY=YES
Supports IOCP CHPID types: OSD and OSX (ONLY 10 GbE). PCHID = xx0 & xx1
OSX IOCDS EXAMPLE:
CHPID PCHID=5E1,PATH=(CSS(0,1,2,3),2F),TYPE=OSX,SHARED
CNTLUNIT CUNUMBR=09F0,PATH=((CSS(0),2F)),UNIT=OSX
IODEVICE ADDRESS=(09F0,15),CUNUMBR=(09F0),UNIT=OSA,UNITADD=00,MODEL=X,DYNAMIC=YES,LOCANY=YES
HiperSockets
z/VSE LP3
[Diagram: CPUs - OSA-Express - WAN]
IPv6 Support
Checksum offload for IPv6 packets is now available for z/OS environments:
  When the checksum function is offloaded from the host, CPU cycles are reduced, improving performance.
  With the introduction of OSA-Express4S, the checksum offload function is now performed for IPv6 packets as well as IPv4 packets, whether the traffic goes out to the local area network (LAN), comes in from the LAN, or flows logical partition to logical partition through OSA-Express4S.
  Checksum offload provides the capability of calculating the Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and Internet Protocol (IP) header checksums for IPv4 packets and, now, IPv6 packets. The checksum verifies the integrity of the transmitted data.
  When checksum offload was introduced in May of 2003, it was limited to IPv4 packets.
  Checksum offload for IPv6 packets is exclusive to OSA-Express4S features (CHPID types OSD and OSX) on the z196 and z114. It is supported by z/OS; refer to the software requirements section.
  Checksum offload for IPv4 packets is currently available for all in-service releases of z/OS and Linux on System z.
  Checksum offload for LPAR-to-LPAR traffic in the z/OS environment is included in the OSA-Express4S design for both IPv4 and IPv6 packets.
Large send for IPv6 packets is now available for z/OS environments:
  Large send (also referred to as TCP segmentation offload) is designed to improve performance by offloading outbound TCP segmentation processing from the host to an OSA-Express4S feature, employing a more efficient memory transfer into OSA-Express4S.
  Note: Large send for IPv6 packets is not supported for LPAR-to-LPAR packets.
  Large send support for IPv6 packets applies to the OSA-Express4S features (CHPID types OSD and OSX) and is exclusive to the z196 and z114. It is supported by z/OS, and by z/VM for guest exploitation.
  Large send for IPv4 packets is currently available for all in-service releases of z/OS and Linux on System z.
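The header checksum being offloaded here is the standard Internet ones-complement checksum (RFC 1071). As a rough sketch of the per-packet arithmetic OSA-Express4S takes off the host CPU (plain Python for illustration, obviously not IBM's adapter microcode):

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones-complement checksum over 16-bit words -- the
    per-packet arithmetic that checksum offload moves from host CPU
    cycles into the OSA adapter."""
    if len(data) % 2:
        data += b"\x00"            # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return ~total & 0xFFFF
```

Verification on receive is the same loop: summing the data together with its checksum field yields 0 when the packet is intact.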
Ports 2 2 1 1
F/C
23xx/13xx 3364 3365 3366 3368 3362 3363 3367 3369 3370 3371 3373
Ports
2 2 2 2 1 41 41 41 21 2 2 21
Available
NO Carry Forward Carry Forward Carry Forward NO Carry Forward Carry Forward New/Carry Forward New/Carry Forward Carry Forward Carry Forward Carry Forward
CHPID
OSD, OSN OSD, OSN OSD, OSE, OSC, OSN, OSM OSD, OSE, OSC, OSN OSD, OSX OSD, OSX OSD, OSN
OSA-Express3 cards will be shown as selections via RPQ 8P2534 only if FC4000 slots are open.
Purpose / Traffic
Supports Queued Direct Input/Output (QDIO) architecture - TCP/IP traffic when Layer 3 (uses IP address); protocol-independent when Layer 2 (uses MAC address)
Non-QDIO - for SNA/APPN/HPR traffic and TCP/IP passthru traffic
OSA-Integrated Console Controller (OSA-ICC) - supports TN3270E, non-SNA DFT to IPL CPCs & LPs
OSA-Express for Unified Resource Manager - connectivity to intranode management network (INMN) from z196 or z114 to Unified Resource Manager functions
OSA-Express for NCP - appears to OS as a device supporting CDLC protocol; enables Network Control Program (NCP) channel-related functions; provides LP-to-LP connectivity OS to IBM Communication Controller for Linux (CCL)
OSA-Express for zBX - connectivity and access control to intraensemble data network (IEDN) from z196 or z114 to zBX
Operating Systems
z/OS, z/VM z/VSE, z/TPF Linux on System z z/OS, z/VM z/VSE z/OS, z/VM z/VSE z/OS, z/VM Linux on System z z/OS, z/VM z/VSE, z/TPF Linux on System z
ISC-M (hot-plug)
ISC-D (hot-plug)
Distances: ISC-3
  Maximum distance is 10 km; RPQ 8P2197 is required for 20 km support
  Extended distances of up to 100 km are possible using Dense Wavelength Division Multiplexers (DWDMs)
  Over 100 km requires RPQ 8P2263 or 8P2340
The z114 and z196 will be the last servers to offer ordering of ISC-3. Customers should start migrating to PSIFB coupling links.
EP
ISC link
LC Duplex connector
Parallel Sysplex using InfiniBand (PSIFB) ready for even the most demanding data sharing workloads
Simplify Parallel Sysplex connectivity - do more with less
  Can share physical links by defining multiple logical links (CHPIDs)
  Can consolidate multiple legacy links (ISC and/or ICB)
  Can more easily address link constraints - define another CHPID to increase available subchannels instead of having to add physical links
  More flexible placement of systems in a data center
12x InfiniBand coupling links (FC 0171 HCA3-O and FC 0163 HCA2-O)
  Support optical cables up to 150 meters; no longer restricted to 7 meters between System z CPCs
1x InfiniBand coupling links (FC 0170 HCA3-O LR and FC 0168 HCA2-O LR)
  Use the same single mode fiber optic cables as ISC-3 and FICON/FCP for unrepeated distances of up to 10 km, and metropolitan distances with qualified DWDM solutions
Note: The InfiniBand link data rates of 6 GBps, 3 GBps, 2.5 Gbps, or 5 Gbps do not represent the performance of the link. The actual performance is dependent upon many factors including latency through the adapters, cable lengths, and the type of workload.
IFB
IFB3
HCA2-O
HCA3-O
System z10
z196
HCA3-O LR 1x Details
New 1x InfiniBand fanout cards, exclusive to z196 and z114
HCA3-O LR fanout for 1x InfiniBand coupling links
Four ports per feature
Fiber optic cabling
Link sharing between multiple Sysplexes
Distance of up to 10 km without repeaters; RPQ 8P2263 or 8P2340 is required for 20 km support
Extended distances of up to 100 km are also possible using Dense Wavelength Division Multiplexers (DWDMs); going over 100 km requires RPQ 8P2263 or 8P2340
Supports connectivity to HCA2-O LR
Link data rate server-to-server: 5 Gbps
Link data rate with DWDM: 2.5 or 5 Gbps
DWDM 5 Gbps
Server 5 Gbps
HCA2-O LR
HCA3-O LR
HCA3-O LR
32 subchannels per CHPID (default) Option to define 32 or 7 subchannels z114 or z196 GA2 to z114 or z196 GA2
System z10
System z9
(not supported)
z196
z196
1x InfiniBand
5 or 2.5 Gbps
10 km
9 SM fiber
IFB
HCA2-O
IFB
Ports exit from the front of a book/processor drawer with HCAs. Unlike ISC-3 links, does not use I/O card slots.
12x InfiniBand - z196, z114, z10, z9
  DDR at 6 GBps: z196/z114 and z10
  SDR at 3 GBps: z196/z114 & z10 to z9
  z9 to z9 connection not supported
1x InfiniBand - z196/z114 and z10 (not available for z9)
  DDR at 5 Gbps (server to server and with DWDM)
  SDR at 2.5 Gbps (with DWDM)
DDR = double data rate, SDR = single data rate * For z114 carry forward only ** Performance considerations may reduce the number of CHPIDs per port
IFB
HCA2-O LR*
IFB
HCA3-O
IFB
IFB
HCA3-O LR
IFB
IFB
HCA3-O
More subchannels per physical link
  CF Receiver CHPIDs can share links
  NOT more subchannels per CHPID - MIF uses the same address, 7 subchannels per CHPID
7 subchannels per CHPID
  Up to 16 channel paths across two ports
  ___________________
  112 subchannels across two ports
Example:
z114 and z196 GA2 1x InfiniBand Coupling Links Multiple CHPIDs per link, 32 or 7 subchannels per CHPID (HCA2-O LR and HCA3-O LR)
Up to 16 CHPIDs using same physical links More subchannels per physical link Link sharing by different Sysplexes
HCA3-O LR
Now more subchannels per CHPID
  32 subchannels per CHPID
  Option to define 32* or 7 subchannels
  z114 or z196 GA2 to z114 or z196 GA2
32* subchannels per CHPID (default)
Up to 16 CHPIDs per HCA3-O LR
___________________
512 subchannels per HCA3-O LR
For Example:
CHPID FF 32 subchannels
Carry forward only for z114
CHPID FE 32 subchannels
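The subchannel counts on this and the previous slide are simple products of CHPIDs per fanout and subchannels per CHPID; a quick sanity check of the arithmetic:

```python
def total_subchannels(chpids: int, subchannels_per_chpid: int) -> int:
    """Total device subchannels available across a set of coupling CHPIDs."""
    return chpids * subchannels_per_chpid

# HCA3-O 12x IFB: up to 16 CHPIDs x 7 subchannels  = 112 across two ports
# HCA3-O LR 1x IFB: up to 16 CHPIDs x 32 subchannels = 512 per fanout
```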
HCA3-O
HCA3-O LR
Up to 10/100KM
Up to 150 meters
z9 EC and z9 BC
HCA2-O**
ISC-3
Fanouts
IFB-MP
Up to 10/100 km
HCA2-C
z196/z114
1X InfiniBand-DDR LR (5 Gbps)
z196/z114 to z196/z114; z196/z114 to z10
Uses one lane for a total link rate of 5 Gbps and supports an unrepeated distance of 10 km
Repeated distances of up to 100 km when attached to a Dense Wavelength Division Multiplexer (DWDM) qualified by System z
Speed may be auto-negotiated if the attached DWDM is capable of operating at SDR or DDR
  Supports SDR at 2.5 Gbps when connected to a DWDM capable of SDR speed
  Supports DDR at 5 Gbps when connected to a DWDM capable of DDR speed
9 micron single mode fiber optic cables with LC Duplex connectors (same cable as ISC-3)
Note: The InfiniBand link data rates of 6 GBps, 3 GBps, 2.5 Gbps, or 5 Gbps do not represent the performance of the link. The actual performance is dependent upon many factors including latency through the adapters, cable lengths, and the type of workload. With InfiniBand coupling links, while the link data rate may be higher than that of ICB (12x InfiniBand (SDR or DDR) or ISC-3 (1x InfiniBand (SDR or DDR), the service times of coupling operations are greater, and the actual throughput may be less than with ICB links or ISC-3 links.
[Table: single mode fiber @ 1310 nm - unrepeated distance 10 km (6.2 miles), link loss budget 5.66 dB]
1x InfiniBand operating at 2.5 Gbps or 5 Gbps is exclusive to System z10 and z196/z114 servers All attachments to an outside cable plant (including public "dark fiber") are supported only through a patch panel or Wavelength Division Multiplexer (WDM) product. 10 to 100 km repeated with an InfiniBand DDR or SDR qualified DWDM. (STP qualification is also required if STP is in use.)
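The 10 km unrepeated figure is consistent with dividing the link loss budget by typical single mode fiber attenuation. A back-of-the-envelope sketch (the 0.5 dB/km attenuation value is an assumption for illustration, not a number from this table):

```python
def max_unrepeated_km(link_budget_db: float, fiber_loss_db_per_km: float,
                      connector_loss_db: float = 0.0) -> float:
    """Rough optical reach: loss budget remaining after connector losses,
    divided by per-km fiber attenuation."""
    return (link_budget_db - connector_loss_db) / fiber_loss_db_per_km

# 5.66 dB budget at an assumed 0.5 dB/km -> roughly 11 km, in line with
# the supported 10 km unrepeated distance
```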
Note: the fiber optic cabling is the same as used with ISC-3, FICON LX, 10 GbE LR, and GbE LX
LC Duplex harness
2.06 dB
1. 12x InfiniBand SDR links operating at 3 GBps (2.5 Gbps per lane) are used to connect a z196/z114 or a z10 to a z9
2. 12x InfiniBand DDR links operating at 6 GBps (5.0 Gbps per lane) are used to connect z196/z114 servers or z196/z114 to z10
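These headline rates follow from lanes x per-lane signalling rate x the 0.8 efficiency of 8b/10b encoding; a quick check of the arithmetic:

```python
def ifb_data_rate_gbytes(lanes: int, gbps_per_lane: float) -> float:
    """InfiniBand payload data rate: raw signalling rate times the 0.8
    efficiency of 8b/10b encoding, converted from bits to bytes."""
    return lanes * gbps_per_lane * 0.8 / 8

# 12x DDR: 12 lanes x 5.0 Gbps -> 6.0 GBps
# 12x SDR: 12 lanes x 2.5 Gbps -> 3.0 GBps
```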
Supported 12x InfiniBand DDR cable lengths OM3 50/125 micrometer multimode fiber optic cabling
Cables available from:
  IBM Global Technology Services (GTS)
  Anixter www.anixter.com/
  Computer Crafts Inc. www.computer-crafts.com/
  Tyco www.tycoelectronics.com/
  Fujikura www.fujikura.com/
Fiber core: 50u multimode
Light source: SX laser
Fiber bandwidth @ wavelength: 2000 MHz-km @ 850 nm
IBM cable part numbers highly recommended
24-fiber cable assemblies (all MPO - MPO connectors):
  P/N 41V2466 - 10.0 m (32.8 ft)
  P/N 15R8844 - 13.0 m (42.7 ft)
  P/N 15R8845 - 15.0 m (49.2 ft)
  P/N 41V2467 - 20.0 m (65.6 ft)
  P/N 41V2468 - 40.0 m (131.2 ft)
  P/N 41V2469 - 80.0 m (262.4 ft)
  P/N 41V2470 - 120.0 m (393.7 ft)
  P/N 41V2471 - 150.0 m (492.1 ft)
  P/N 42V2083 - custom length
The z114 is the last server to offer ordering of new ISC-3 features. ISC-3 requires an I/O Drawer.
Server Time Protocol (STP) Required
z196/z114 1x InfiniBand
N/A
Theoretical maximum rates shown
9672 / z900 / z800 / z990 / z890 connectivity to z196/z114 not supported
z196 and z114 do not support ICB connectivity
1x InfiniBand supports single data rate (SDR) at 2.5 Gbps when connected to a DWDM capable of SDR speed, and double data rate (DDR) at 5 Gbps when connected to a DWDM capable of DDR speed or when point-to-point with another z196/z114 or z10
System z9 does NOT support 1x InfiniBand (DDR or SDR)
InfiniBand Coupling Links
Note: The InfiniBand link data rates of 6 GBps, 3 GBps, 2.5 Gbps, or 5 Gbps do not represent the performance of the link. The actual performance is dependent upon many factors including latency through the adapters, cable lengths, and the type of workload. With InfiniBand coupling links, while the link data rate may be higher than that of ICB (12x InfiniBand (SDR or DDR) or ISC-3 (1x InfiniBand (SDR or DDR), the service times of coupling operations are greater, and the actual throughput may be less than with ICB links or ISC-3 links.
z196
32
N/A
N/A
48
32 32 32 32 32
N/A 16 (32/RPQ) 12 16 16
48 48 48 48 48
128 64 64 64 64
1. A z196 M49, M66 or M80 supports a maximum of 104 extended distance links (48 1x IFB and 48 ISC-3) plus 8 12x IFB links. A z196 M32 supports a maximum of 96 extended distance links (48 1x IFB and 48 ISC-3) plus 4 12x IFB links*. A z196 M15 supports a maximum of 72 extended distance links (24 1x IFB and 48 ISC-3) with no 12x IFB links*.
2. A z114 M10 supports a maximum of 72 extended distance links (24 1x IFB and 48 ISC-3) with no 12x IFB links*.
3. A z114 M05 supports a maximum of 56 extended distance links (8 1x IFB and 48 ISC-3) with no 12x IFB links*.
* Can be carried forward or ordered on MES using RPQ 8P2534 ** OSA-Express2 10 GbE LR is not supported as a carry forward *** Two features initially, one thereafter
Features
ISC-3 HCA2-O LR* (1x) HCA2-O (12x) HCA3-O LR (1x) HCA3-O (12x) * Carry forward only
Minimum # of features
0 0 0 0 0
Maximum # of features
12 4 M05** 6 M10** 4 M05** 8 M10** 4 M05** 8 M10** 4 M05** 8 M10**
Maximum connections
48 links 8 links M05 12 links M10 8 links M05 16 links M10 16 links M05 32 links M10 8 links M05 16 links M10
Purchase increments
1 link 2 links 2 links 4 links 2 links
** Uses ALL the available fanout slots for PSIFB. Allows no other I/O or coupling.
Full name / Comments:
  Arbiter - Server assigned by the customer to provide additional means for the Backup Time Server to determine whether it should take over as the Current Time Server.
  Backup Time Server (BTS) - Server assigned by the customer to take over as the Current Time Server (stratum 1 server) because of a planned or unplanned reconfiguration.
  Coordinated Server Time (CST) - Represents the time for the CTN. CST is determined at each server in the CTN.
  Coordinated Timing Network (CTN) - A network that contains a collection of servers that are time synchronized to CST.
  Coordinated Timing Network Identifier (CTN ID) - Identifier that is used to indicate whether the server has been configured to be part of a CTN and, if so, identifies that CTN.
  Current Time Server (CTS) - A server that is currently the clock source for an STP-only CTN.
  Going Away Signal - A reliable unambiguous signal to indicate that the CPC is about to enter a check-stopped state.
  Preferred Time Server (PTS) - The server assigned by the customer to be the preferred stratum 1 server in an STP-only CTN.
How it works
Going Away Signal sent by the Current Time Server (CTS) in an STP-only Coordinated Timing Network (CTN)
Received by the Backup Time Server (BTS)
BTS can safely take over as the CTS without relying on the Offline Signal (OLS) in a 2-server CTN, or the Arbiter in a CTN with 3 or more servers
Current recovery design is still used when the going away signal is not received by the BTS, and for other failures
Pre-reqs
This enhancement is only available if you have an HCA3-O on the CTS communicating with an HCA3-O on the BTS.
The STP recovery design is still available for the cases when a Going Away Signal is not received, or for other failures besides a server failure.
InfiniBand (IFB) links using HCA3-O to HCA3-O (12x IFB to 12x IFB) or HCA3-O LR to HCA3-O LR (1x IFB)
Dependencies
Dependencies on OLS and CAR removed in a 2-server CTN
  Going Away Signal has priority over OLS in a 2-server CTN
Dependencies on BTS-->Arbiter communication removed in CTNs with 3 or more servers
  BTS can also use the going away signal to take over as CTS for CTNs with 3 or more servers without communicating with the Arbiter
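The recovery rules described on the last few slides condense into one decision, sketched here with illustrative parameter names (not an IBM interface):

```python
def bts_should_take_over(going_away_signal: bool, servers_in_ctn: int,
                         offline_signal: bool = False,
                         arbiter_confirms: bool = False) -> bool:
    """Backup Time Server takeover decision: the Going Away Signal is
    sufficient on its own and has priority; without it, a 2-server CTN
    falls back on the Offline Signal (OLS), and a CTN of 3 or more
    servers on Arbiter agreement."""
    if going_away_signal:
        return True                # GAS alone allows a safe takeover
    if servers_in_ctn == 2:
        return offline_signal      # legacy 2-server recovery path
    return arbiter_confirms        # legacy Arbiter-based recovery path
```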
SE Enhancement
An enhancement has been made to improve the time accuracy of the SE's Battery Operated Clock (BOC) by synchronizing the SE's BOC to the server's Time of Day (TOD) clock every hour, instead of the previous synchronization, which took place every 24 hours. This enhancement allows the SE's clock to maintain a time accuracy of 100 milliseconds relative to an NTP server configured as the External Time Source in an STP-only CTN.
SE BOC Update
At IML, the SE BOC is used to set the CPC TOD
Prior to Driver 93, the Primary SE synchronized with the CPC TOD every 24 hours, since the SE BOC may have a significantly larger accumulated drift in time compared to the CPC TOD
  Could result in repeated timestamps in logs
To ensure that a new Primary SE is running with accurate CEC time:
  Primary SE BOC syncs with the CPC TOD
  Alternate SE BOC syncs with the Primary SE BOC
  Alternate SE switches to Primary SE
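The benefit of the hourly sync is easy to bound: worst-case BOC error between syncs is drift rate times interval. The 25 ppm drift rate below is an assumed figure for illustration, not a value from the slide:

```python
def worst_case_error_ms(drift_ppm: float, sync_interval_s: float) -> float:
    """Worst-case clock error accumulated between synchronizations:
    drift rate (parts per million) x interval, reported in ms."""
    return drift_ppm * 1e-6 * sync_interval_s * 1000.0

# at an assumed 25 ppm drift:
#   hourly sync -> 90 ms   (within the 100 ms accuracy target)
#   daily sync  -> 2160 ms
```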
NTP thresholds
Allows you to specify threshold settings to suppress generation of hardware and operating system messages related to changes in the NTP server stratum level or source ID.
  Select the stratum level threshold value to indicate the NTP server stratum level that must be reached before generating messages.
  Select the source ID time threshold value to indicate the amount of time that must pass before a change in the source ID generates messages. Messages are issued if the source ID does not return to the original value within the specified time period.
Example:
  If the NTP server stratum level changes from 2 to 3 and back from 3 to 2, no messages will be generated
  If the NTP server stratum level changes from 2 to 4, messages will be generated
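The two suppression rules can be sketched as follows (function and parameter names are illustrative, not the SE's actual interface):

```python
def stratum_alert(old_stratum: int, new_stratum: int, threshold: int) -> bool:
    """Generate messages only when the stratum level crosses the
    configured threshold; e.g. with threshold 4, a 2 -> 3 change stays
    quiet while 2 -> 4 alerts."""
    return old_stratum < threshold <= new_stratum

def source_id_alert(seconds_since_change: float, returned_to_original: bool,
                    time_threshold_s: float) -> bool:
    """Generate messages only if the source ID has not returned to its
    original value within the configured time threshold."""
    return (not returned_to_original) and seconds_since_change >= time_threshold_s
```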
[Timeline: Cryptographic Coprocessor Facility - PCICC - PCI Cryptographic Accelerator (PCICA, z800/z900, z990) - PCIX Cryptographic Coprocessor (PCIXCC, z990) - CP Assist for Cryptographic Functions (z990) - Crypto Express2 - Crypto Express3]
Cryptographic Coprocessor Facility - supports secure key cryptographic processing
PCICC Feature - supports secure key cryptographic processing
PCICA Feature - supports clear key SSL acceleration
PCIXCC Feature - supports secure key cryptographic processing
CP Assist for Cryptographic Function - allows limited clear key crypto functions from any CP/IFL; NOT equivalent to CCF on older machines in function or capability
Crypto Express2 - combines function and performance of PCICA and PCICC
Crypto Express3 - PCI-e interface, additional processing capacity with improved RAS
Crypto Express3 and Crypto Express3-1P (z114 only)
  AES Key Encrypting Keys (AES KEKs) and AES typed keys in general
  TR-31 key wrapping method for interoperable secure key exchange
  Elliptic Curve Cryptography (ECDH key agreement protocol)
  PIN decimalization table protection
  RSA (Rivest-Shamir-Adleman) Optimal Asymmetric Encryption Padding (OAEP) with SHA-256
Standards/ Compliance PCI HSM Standards Compliance design Configuration Profile Support
Engines
N/A 2 1 N/A N/A N/A N/A N/A N/A
Available
New/Carry Forward New/Carry Forward New z114 /Carry Forward New/Carry Forward New New/Carry Forward New/Carry Forward (7.0) Yes New/Carry Forward
Comments
No Charge DES/TDES
z10 BC and z114 only Upgrade available to TKE 7.1 Required for z196/z114 Required for z196/z114 and ships with 7.1 LIC EC to change from NXP JCOP 41 to NXP JCOP 31
Note: Crypto Express3 requires an I/O Cage (z196 only) or I/O Drawer (FC4000)
z/VM
Common Criteria z/VM 5.3 EAL 4+ for CAPP/LSPP System Integrity Statement z/VM 6.1 Under evaluation
Linux on System z
Common Criteria: SUSE SLES10 certified at EAL4+ with CAPP; Red Hat EL5 EAL4+ with CAPP and LSPP
OpenSSL - FIPS 140-2 Level 1 validated
CP Assist - SHA-1 validated for FIPS 180-1; DES & TDES validated for FIPS 46-3
z/OS
Common Criteria EAL4+ with CAPP and LSPP: z/OS 1.7 + RACF, z/OS 1.8 + RACF, z/OS 1.9 + RACF, z/OS 1.10 + RACF (OSPP), z/OS 1.11 + RACF (OSPP)
z/OS 1.10 IPv6 Certification by JITC
IdenTrust certification for z/OS PKI Services
FIPS 140-2: z/OS System SSL 1.10, z/OS System SSL 1.11, z/OS ICSF PKCS#11 Services z/OS 1.11
Statement of Integrity
Common Criteria EAL5 with specific target of evaluation -- LPAR: logical partitions
System z196: Common Criteria EAL5+ with specific target of evaluation -- LPAR; z114 pending
Crypto Express2 & Crypto Express3 Coprocessors - FIPS 140-2 Level 4 hardware evaluation; approved by German ZKA
CP Assist - FIPS 197 (AES), FIPS 46-3 (TDES), FIPS 180-3 (Secure Hash)
Sources of Outages (Pre-z9) - Hrs/Year/System:
  Unscheduled Outages
  Scheduled Outages
  Planned Outages
  Preplanning requirements
  Power & Thermal Management
Impact of Outage:
  Scheduled (CIE + Disruptive Patches + ECs)
  Planned (MES + Driver Upgrades)
  Unscheduled (UIRA)
Prior Servers
IBM System z9
zEnterprise
Increased focus over time
Temperature = Silicon Reliability Worst Enemy Wearout = Mechanical Components Reliability Worst Enemy.
Improving Reliability
Power and Thermal Optimization and Management Static Power Save Mode under z/OS control FICON dynamic sub-chip level mini power down Smart blower management by sensing altitude and humidity
[Diagram: memory channel RAS - differential clocks, CRC-protected interfaces between the memory controller ASIC and DIMM DRAMs]
[Diagrams: memory control unit (MCU) failover - redundant MCUs with key caches and data check]
Clocked from primary drawer redundant oscillators via cable and pass-thru card
Concurrent drawer add & repair not supported
Limited degrade capabilities
2 dedicated IBM spares; PU assignment and sparing across drawers
Failover
I/O Hub
Failover
I/O Mux I/O Mux I/O Mux I/O Mux
Alternate Path
Network adapter, Crypto Accelerator, Crypto Accelerator, Network adapter, I/O adapter, I/O adapter
I/O Switch
I/O Switch
Network Switch
Network Switch
SAP / CP sparing
SAP Reassignment
I/O Reset and Failover
I/O Mux Reset / Failover
Redundant I/O Adapter
Redundant Network Adapters
Redundant Crypto Accelerators
I/O Switched Fabric
Network Switched/Router Fabric
High Availability Plugging Rules
Channel Initiated Retry
High Data Integrity Infrastructure
Type 2 Recovery
I/O Alternate Path
Network Alternate Path
ISC Links
Control Unit
Control Unit
End User
End User
ISC Links
AMD 1
FSP-1 card
AMD 2
Drawer Technology
Hub cards on processor drawer - failover is front to back (Domain 0 and 1 in front, Domain 2 and 3 in back)
Concurrent Adapter Repair Light strip for adapter service and upgrades Concurrent Switch card Repair Concurrent DCA repair Concurrent FSP repair Concurrent Drawer Add
New Technology
Concurrent Drawer Repair Concurrent upgrade from legacy drawer to new PCIe drawer
zEnterprise HMC Integration System z brings Mainframe Systems Management and Service to the Blade Environment
HMC
Familiar IBM BladeCenter attached through Ethernet on z196/z114
System z HMC style operation and control of blades
Redundant HMC with mirroring of critical ensemble data and switchover
Dynamic provisioning of blade resources
Trained IBM Service personnel for service and installation
Directed repair via Primary HMC
Call home on error
Dynamic failure trend monitoring
Firmware management and upgrade
Ethernet
z114 RAS
The z114 server continues to reduce customer down time by attacking all sources of outages: unscheduled outages, scheduled outages and planned outages.
Power and cooling requirements were reduced while still maintaining reliability.
Major new memory design for preventing outages
Introducing new I/O drawers and I/O technologies with full concurrent service
Introducing System z management to the mixed computing environment
Delivering green functions and RAS together
CoD Offerings
On-line Permanent Upgrade
Permanent upgrade performed by the customer (previously referred to as Customer Initiated Upgrade, or CIU)
Basics of CoD
Capacity on Demand
Permanent Upgrade Temporary Upgrade
Replacement Capacity
CBU
CPE
Using pre-paid unassigned capacity up to the limit of the HWM No expiration Capacity - MSU % - # Engines
On/Off CoD with tokens No expiration Capacity - MSU % - # Engines Tokens - MSU days - Engine days
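As a hedged sketch of how an MSU-day token pool drains (illustrative arithmetic only, not IBM's actual billing logic):

```python
def days_sustained(msu_day_tokens: int, active_temp_msu: int) -> int:
    """Whole days a token pool can sustain a given temporary capacity:
    each day consumes one MSU-day token per active temporary MSU."""
    return msu_day_tokens // active_temp_msu

# e.g. a 100 MSU-day token record sustains 20 extra MSUs for 5 days
```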
API Activation
Enforce Terms and Conditions and physical model limitations
R7* R8*
Authorization Layer
R2* R3* R4* R5* R6*
Up to 8 records installed and active on the System and up to 200 records staged on the SE Change permanent capacity via CIU or MES order
Base Model
z10:
  Separate orders for purchase of unassigned engines
  On/Off CoD records must be replenished manually
  CoD records staged on machine delivery
  No On/Off CoD administrative test
z196/z114:
  Unassigned engine purchase via CIU
  Auto replenishment of On/Off CoD records
  Manufacturing install of up to 4 CoD records with system ship (new z196s/z114s only)
  On/Off CoD administrative tests
* Not supported for some memory upgrades Note: Upgrades are non-disruptive only where there is sufficient hardware resource available and provided pre-planning has been done
z114
Granularity levels similar to z10 BC to facilitate upgrades and incremental growth
Nomenclature: XYY, where X = capacity level and YY = number of processors; A00 = ICF or IFL only
Any-to-any capacity upgrade/downgrade capability within the model
CBU capability from smallest to largest capacities within the model
On/Off CoD within the model
Linux-only and ICF-only servers
Model M10 provides specialty engine scale-out capabilities
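The XYY nomenclature can be read mechanically; a small illustrative parser (function name assumed, not an IBM tool):

```python
def parse_capacity_setting(setting: str):
    """Split a z114 capacity setting 'XYY' into capacity level X and
    CP count YY; 'A00' denotes an ICF- or IFL-only configuration."""
    if setting == "A00":
        return ("ICF/IFL only", 0)
    return (setting[0], int(setting[1:]))

# parse_capacity_setting("Z05") yields capacity level "Z" with 5 CPs
```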
[Chart: z114 capacity settings vs. z10 BC capacity levels Z01-Z05 - capacity level by number of engines, down to a 1-way sub-capacity setting of 26 PCIs]
GHz
MIPs tables
HiperDispatch
MSUs ratings
Don't use one-number capacity comparisons! Work with IBM technical support for capacity planning! Customers can now use zPCR.
The IBM Processor Capacity Reference (zPCR) is a free tool available for download that can be used to size your System z processors. http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS1381
LSPR workloads
Three hardware-characteristic-based workload categories Low, Average, and High categories, based on Relative Nest Intensity Previous workload primitives and Mixes were replaced
* Note: Whenever considering a z/OS partition with more than 48 CPs in a single partition, send a note to zpcr@us.ibm.com for guidance.
LSPR Tables
Multi-image (MI) Processor Capacity Ratio table
  Average complex LPAR configuration for each model based on customer profiles
  Most representative for the vast majority of customers
  Same workload assumed in every partition; z/OS only
  Use for high level sizing; used to develop the MSU rating
Single-image (SI) Processor Capacity Ratio table*
  One z/OS partition equal in size to N-way of model (up to 80-way)
  This table is ONLY relevant for truly single image z/OS cases
  Includes z/VM and Linux workloads on both GP and IFL CPs
  *Note: Likely to be dropped in a future release of zPCR
The Single-Image table is used as the basis for IBM capacity planning tools such as zPCR's LPAR Configuration Capacity Planning function. The Multi-Image table is a generalization of relative capacity, and is not useful for capacity planning purposes. Capacity for z/OS on z10 and later processors is represented with HiperDispatch turned ON.
*Note: Measured using HiperDispatch; applies to z10, z196 and z114 only
z/VM 5.4
Supports up to a maximum of 32 CPs LSPR measured up to 32 CPs
Linux
zPCR allows up to a maximum of 32 CPs; LSPR measured up to 16 CPs
z/VSE
zPCR allows up to a maximum of 4 CPs
CFCC
Supports up to a maximum of 16 CPs
*Note: CP3KEXTR is a Load and Go program written in Assembler. Runs under z/OS. Available to customers since September 2010
OR
Engage a Techline specialist via Deal Hub Connect to help you collect the data and do the sizing. Sizing questionnaires are located here:
http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4034
We are working with the WLE (Workload Estimator) team on support, targeted to be available at announce. Techline will deliver worldwide education when the support is available.
IBM Business Partners will be able to obtain the tools directly from PartnerWorld. We are working with the WLE (Workload Estimator) team on support, targeted to be available at announce. Techline will deliver worldwide education when the support is available.
z114 upgrade paths
- Upgrade paths from z9 BC and z10 BC to z114
- z114 M10: upgrade path to z196 Model M15 (air-cooled only)
- z114 M05: upgrade path to M10 only (no direct path to z196 M15)
- Upgrades from M05 to M10, and from M10 to z196 M15, are disruptive
Compared with its predecessors (z890, z9 BC, z10 BC): RAIM fault-tolerant memory; federated capacity and hybrid computing capabilities via integration with zBX, application server blades, optimizers, and the Unified Resource Manager.
Summary
Summary: Deploy multi-tier workloads on a single integrated system
- Create an integrated infrastructure that aligns to end-to-end business processes across technology boundaries
- Scale without adding complexity to meet the growing demands on the infrastructure
- Reduce cost through centralized data center management and automation
- Fit-for-purpose application deployment
- Unify and optimize multiple architectures to work as a single system
- Improve performance with co-location of data and applications
- Improve resiliency through a private network
- Enable innovation through flexibility and by breaking down silos
Extend the reach and strategic role of the mainframe across the enterprise; simplify and reduce the range of skills necessary for managing the data center; break down cultural divides.
Questions?
Parwez Hamid parwez_hamid@uk.ibm.com Luiz Fadel fadel@br.ibm.com
Thank You
Dank u (Dutch) · Merci (French) · Спасибо (Russian) · Gracias (Spanish) · شكرا (Arabic) · 감사합니다 (Korean) · Tack så mycket (Swedish) · धन्यवाद (Hindi) · תודה (Hebrew) · Obrigado (Brazilian Portuguese) · 谢谢 (Chinese) · Dankon (Esperanto) · Trugarez (Breton) · ありがとうございます (Japanese) · Tak (Danish) · Danke (German) · Grazie (Italian) · நன்றி (Tamil) · Děkuji (Czech) · ขอบคุณ (Thai)
End of Presentation
FC 4003
I/O drawer
FC 4000
- 7 I/O domains
- 28 I/O slots (legacy I/O cards only)
- Up to 4 fanouts for all 7 domains
- 14 EIA units
and ISC-3 coupling links. Once the empty slots (in the I/O cage and I/O drawer) are filled, eConfig will drive out a PCIe I/O drawer with the associated features.
Customers will not be allowed to carry forward the oldest generation of I/O cards in future servers.* z114 will not support (that is, will not allow the carry forward of) FICON Express4, OSA-Express2, and ESCON features. Future servers will not support FICON Express8, OSA-Express3, ISC-3, or PSC features.
Partnership
Old: http://www-935.ibm.com/services/us/index.wss/itservice/igs/a1026000?cm_re=masthead-_-itservices-_-site — the order process for Prizm was the same as for IBM cabling systems.
Current: Prizm is available from IBM GTS Site & Facilities as part of the EFM Service (ESCON to FICON Migration, offering #6948-97D).
http://www.opticatech.com/
To help reduce or eliminate ESCON channels on the host while maintaining ESCON and Bus/Tag-based devices and applications.
What is PRIZM?
- A purpose-built appliance designed exclusively for IBM System z to enable a smooth transition from ESCON channels on the host to FICON channels on the host
- Allows ESCON devices to connect to FICON channels and FICON fabrics/networks
- Also designed to support attachment of parallel (bus/tag) devices to FICON channels via the ESBT module
- Converts 1 or 2 FICON channels into 4, 8, or 12 ESCON channels
- Replace aging ESCON Directors with PRIZM to achieve a streamlined infrastructure and reduced total cost of ownership
- Qualified by the IBM Poughkeepsie, NY Vendor Solutions Lab in July 2009 for all ESCON devices. Refer to: http://www-03.ibm.com/systems/z/hardware/connectivity/index.html (Products -> FICON / FCP Connectivity -> Other supported devices)
- PRIZM is available via IBM Global Technology Services: ESCON to FICON Migration offering (#6948-97D)
Front: FICON Ports (LC Duplex); Back
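The fan-out arithmetic implied by "converts 1 or 2 FICON channels into 4, 8, or 12 ESCON channels" can be sketched as follows. The helper function and constants are hypothetical illustrations of that ratio, not part of any PRIZM tooling.

```python
import math

# Per the slide's figures, a fully configured PRIZM unit presents up to
# 12 ESCON channels from 2 FICON channels (assumed maximum configuration).
ESCON_PER_UNIT = 12
FICON_PER_UNIT = 2

def prizm_units_needed(escon_channels: int) -> tuple:
    """Return (PRIZM units, FICON channels) needed to replace a given
    number of ESCON channels, assuming maximally configured units."""
    units = math.ceil(escon_channels / ESCON_PER_UNIT)
    return units, units * FICON_PER_UNIT

units, ficon = prizm_units_needed(48)
print(units, ficon)   # 4 units driving 8 FICON channels replace 48 ESCON channels
```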
[Diagram: FICON channels connecting to FICON disk and, via the PRIZM ESBT module, to ESCON tape and bus/tag (B/T) devices]
Additional Material
PCHID Ranges
[Tables, slides 214-217: PCHID range assignments by I/O card slot (LG01-LG38) and drawer type (I/O drawer vs. PCIe I/O drawer) for drawer positions A02B, A09B, A16B, and A26B (A26B on M05 only); the original table layout did not survive extraction]