Agenda
- Cisco Nexus 1K
- x86: Evolution of Compute
- Next Steps: The Data Center Focus for 2011
BRKDCT-2023
Cisco Public
Virtualized: multiple App/OS stacks run on a hypervisor on one server, versus one application per one server in the non-virtualized model.

[Chart: worldwide server installed base, 0 to 20,000,000 units, 2005 through 2014, split into virtualized and non-virtualized servers; the virtualization "tipping point" falls around 2009, with the transition continuing through 2014]

BRKDCT-2023 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public
[Timeline: evolution of compute platforms, 1960 through 2010, from mainframe to minicomputer and onward]
[Diagram: data center network with a Layer 3 aggregation layer above a Layer 2 access layer]

VM mobility can be restricted because current Layer 2 architectures can't scale:
- FabricPath scales L2 for faster, simpler, flatter Layer 2 networks
- OTV provides Layer 2 extension over Layer 3 within a data center
- OTV for L2 extension over Layer 3 across data centers
- LISP for mobility across the Internet or WAN
[Graphic: Cisco data center innovation themes since FY08: Convergence, Scale, and Intelligence, spanning vPC, VDC, VN-Link, DCB/FCoE, OTV, and FEX-link]
Virtualized Access Switch Supporting a Compute Domain
vPC vs. non-vPC: Physical and Logical Topology
- Provides increased bandwidth: all links are actively forwarding, unlike a non-vPC design where Spanning Tree blocks redundant links
- vPC maintains independent control planes on the two peer switches
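The vPC behavior above can be sketched in NX-OS configuration. This is a minimal sketch; the domain ID, keepalive addresses, and port-channel numbers are illustrative assumptions, not values from the session.

```
! Illustrative vPC sketch (domain ID, addresses, and port-channel numbers assumed)
feature vpc

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1

! Peer link between the two vPC peer switches
interface port-channel 10
  switchport mode trunk
  vpc peer-link

! Downstream port channel presented to the attached device as one logical link
interface port-channel 20
  switchport mode trunk
  vpc 20
```

Both peers carry an equivalent configuration; the attached switch or server simply sees one port channel.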
[Diagram: today each rack is tied to its own VLAN (VLAN 1 in Rack 1, VLAN 2 in Rack 2, VLAN 3 in Rack 3); with FabricPath, VLANs 1, 2, and 3 are available in every rack]

Adding server capacity while maintaining Layer 2 adjacency in the same VLAN is disruptive today: it requires a physical move to free adjacent rack space.

With FabricPath:
- VLAN extensibility: any VLAN, anywhere
- Location independence for workloads
- Consistent, predictable bandwidth and latency
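On Nexus platforms that support it, making a VLAN available anywhere over FabricPath takes only a few commands. A minimal sketch; the VLAN and interface numbers are assumed for illustration:

```
! Illustrative FabricPath sketch (VLAN and interface numbers assumed)
install feature-set fabricpath
feature-set fabricpath

! VLANs carried across the FabricPath fabric
vlan 1-3
  mode fabricpath

! Fabric-facing links run FabricPath instead of classic Ethernet
interface ethernet 1/1-2
  switchport mode fabricpath
```

Classic Ethernet edge ports stay unchanged; only fabric-facing links and the extended VLANs are converted.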
Cisco Nexus 5000 + FEX is a virtual modular system:
- FEX acts as a virtual line card for the Nexus 5000
- The Nexus 5000 maintains all management and configuration
- Supports rack or blade servers, or UCS
- Supports ToR, MoR, and EoR deployments
- 100M, GE, 10GE, and FCoE server access
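Associating a Nexus 2000 FEX with its Nexus 5000 parent is a short configuration. A minimal sketch; the FEX number and interface choices are assumptions for illustration:

```
! Illustrative FEX association sketch (FEX ID and interfaces assumed)
feature fex

fex 100
  description Rack-1-FEX

! Uplink from the parent switch to the FEX
interface ethernet 1/1
  switchport mode fex-fabric
  fex associate 100

! FEX host ports then appear as local interfaces on the parent
interface ethernet 100/1/1
  switchport access vlan 10
```

All configuration is done on the parent; the FEX itself has no local management plane.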
[Diagram: Nexus 5000 at the access layer with Nexus 2000 Fabric Extenders distributed across Rack-1 through Rack-N, connecting the servers in each rack]

9/22/2011 BRKDCT-2023 Cisco Public
[Diagram: a Nexus 5000 parent switch uplinked to the LAN and to the SAN (MDS), with N2232 fabric extenders 1 through 12 below it = a distributed modular system]
[Diagram: virtual Ethernet (vETH) interfaces connect VMs on servers to the network device, so each VM appears to attach to its own switch port]
[Diagram: the Nexus 1000V VSM integrates with VMware vCenter]
n1000v(config)# port-profile WebServers
n1000v(config-port-prof)# vmware port-group
n1000v(config-port-prof)# switchport mode access
n1000v(config-port-prof)# switchport access vlan 100
n1000v(config-port-prof)# no shut
n1000v(config-port-prof)# state enabled
[Diagram: the port profile is applied to VM #2, VM #3, and VM #4 through vCenter]
P81E Overview
PCIe standards-compliant adapter:
- 2x10Gb, Unified Fabric
- x8 PCIe Gen 2 adapter
- Full-height, half-length adapter
- Two SFP+ 10Gbps external ports (supports SFP+ copper)
Adapter-FEX

[Diagram: Nexus 5000 ports 1 through 8 connect to a FEX; adapter ports 0 and 1 present vNIC 1 through vNIC 5 to the server]

- Enables the external switch to forward frames that belong to the same physical port by using VNTag
- Under standardization as IEEE 802.1Qbh
- Functionally consolidates I/O devices (Ethernet/FC)
- Multiple interfaces for a single-OS server
- Interfaces to virtualized servers
One Network: Parent Switch to Application
Single point of management; manage the network all the way to the OS interface, physical and virtual (IEEE 802.1Qbh*).

- FEX architecture: consolidates network management; the FEX is managed as a line card of the parent switch
- Adapter-FEX: consolidates multiple 1Gb interfaces into a single 10Gb interface; extends the network into the server
- VM-FEX: consolidates the virtual and physical network; each VM gets a dedicated port on the switch

*IEEE 802.1Qbh pre-standard
Unified Ports
Dynamic port allocation: Lossless Ethernet or Fibre Channel
- Convert protocol support on the same port dynamically
- All ports on the UCS 6200 Series
- 16-port Expansion Module for the 6248UP and 6296UP

Benefits:
- Simplify switch purchase: remove port-ratio guesswork
- Increase design flexibility
- Remove bandwidth bottlenecks
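Converting unified ports between Ethernet and Fibre Channel is a per-slot, per-port-range operation. A minimal sketch; the port range is an assumption, and the module typically needs a reload for the conversion to take effect:

```
! Illustrative unified port conversion sketch (port range assumed)
slot 1
  port 31-32 type fc
```

FC ports must occupy a contiguous range at one end of the module, which is why the high-numbered ports are shown here.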
- Reduce overall data center power consumption by up to 8%; extend the lifecycle of the current data center
- Wire hosts once to connect to any network: SAN, LAN, HPC; faster rollout of new apps and services; rack, row, and cross-data-center VM portability become possible
- Every host will be able to mount any storage target; increase SAN attach rate; drive storage consolidation and improve utilization
IEEE DCB
- Priority Flow Control (IEEE 802.1Qbb) creates lossless Ethernet with classes of service
- Bandwidth Management / Enhanced Transmission Selection (IEEE 802.1Qaz) allows flexible bandwidth sharing for LAN and SAN
- Data Center Bridging Exchange protocol (DCBX, also part of IEEE 802.1Qaz) provides device-to-device communication on resources
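The 802.1Qaz bandwidth sharing described above can be expressed as an NX-OS queuing policy. A minimal sketch, assuming the system-defined class-fcoe class and an illustrative 50/50 split:

```
! Illustrative ETS bandwidth-sharing sketch (policy name and percentages assumed)
policy-map type queuing ets-policy
  class type queuing class-fcoe
    bandwidth percent 50
  class type queuing class-default
    bandwidth percent 50

system qos
  service-policy type queuing output ets-policy
```

Under congestion each class is guaranteed its share; an idle class's bandwidth remains available to the other.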
FCoE
- Mapping of FC frames over Ethernet
- Enables FC to run on a lossless Ethernet network

[Frame layout, byte 0 to byte 2229: Ethernet header, FCoE header, FC header, FC payload, CRC, EOF, FCS; the Fibre Channel portion rides unchanged inside the Ethernet frame]
I/O Consolidation

[Diagram: Traditional: separate Ethernet adapters to the LAN and FC adapters to SAN A and SAN B. I/O consolidation with FCoE: a single Enhanced Ethernet + FCoE link carries both, splitting into LAN, SAN A, and SAN B upstream]
The first phase of the Unified Fabric evolution focused on the fabric edge:
- Unified the LAN access and the SAN edge by using FCoE
- Consolidated adapters, cabling, and switching at the first hop in the fabrics
- The unified edge supports multiple LAN and SAN topology options

[Diagram: Generation 2 CNAs carrying FCoE and FC into virtualized data center LAN designs, with separate SAN fabrics (Fabric B shown)]
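Bringing up FCoE at the unified edge maps an FCoE VLAN to a VSAN and binds a virtual Fibre Channel interface to the CNA-facing Ethernet port. A minimal sketch; the VLAN, VSAN, and interface numbers are assumptions for illustration:

```
! Illustrative FCoE edge sketch (VLAN, VSAN, and interface numbers assumed)
feature fcoe

! Map a dedicated FCoE VLAN to the VSAN
vlan 100
  fcoe vsan 100

! Server-facing port carries the data VLAN plus the FCoE VLAN
interface ethernet 1/5
  switchport mode trunk
  switchport trunk allowed vlan 10,100

! Virtual FC interface bound to the Ethernet port
interface vfc 5
  bind interface ethernet 1/5
  no shutdown

vsan database
  vsan 100 interface vfc 5
```

From the SAN's point of view, vfc 5 behaves like any other FC F-port on the switch.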
[Diagram: multihop FCoE topology using VE, VF, and VNP port types, with edge switches in FCF switch mode connecting servers to FC-attached storage]

Fully compatible with the virtualized access switch; will co-exist with FabricPath and/or Layer 3 designs.
[Graphic: scaling the physical and virtual domains across Ethernet and Fibre Channel]
[Diagram: separate management points today: chassis management, Ethernet blade switch management, Fibre Channel blade switch management, and virtual switch management]
[Diagram: chassis management]
[Diagram: per-chassis bandwidth options: 20Gb/s, 40Gb/s, 80Gb/s]
Stateless client computing is where every compute node has no inherent state pertaining to the services it may host. In this respect, a compute node is just an execution engine for any application (CPU, memory, and disk, whether flash or hard drive). The core concept of a stateless computing environment is to separate the state of a server that is built to host an application from the hardware it can reside on. Servers can then easily be deployed, cloned, grown, shrunk, de-activated, archived, and re-activated.
Service profile policies (example values):
- Host Firmware Policy = VIC-EMC-vSphere4
- Management Firmware Policy = 1-3-mgmt-fw
- IPMI Profile = standard-IPMI
- Serial-over-LAN Policy = VMware-SOL
- Monitoring Threshold Policy = VMware-Thresholds
- Local Storage Profile = no-local-storage
- Scrub Policy = scrub local disks only
[Diagram: a service profile can move between Chassis-1/Blade-5 and Chassis-9/Blade-2 while keeping its LAN and SAN identities]
Server Availability

Today's deployment: provisioned for peak capacity, with a spare node per workload.

[Diagram: Oracle, Web, and VMware workloads, each provisioned today across five dedicated blades including per-workload spares, compared with a smaller set of blades per workload plus shared spares]
[Graphic: required component counts compared: 16 needed vs. 2 needed vs. 4 needed]
The OS sees administratively definable (MAC, QoS, VLAN, VSAN, WWPN, etc.) Ethernet and Fibre Channel interfaces, which connect to a Cisco virtual interface (VIF).

The hypervisor sees unconfigured (no MAC, VLAN, etc.) Ethernet interfaces, which are configured by the external VMM and connect to a Cisco virtual interface (VIF).
[Diagram: with a hypervisor vSwitch, App/OS virtual machines attach to a software logical switch inside the hypervisor; with VM-FEX, the logical switch is extended through the FEX so each VM attaches to the physical switch]
UCS VM-FEX: Extend FEX Architecture to the Virtual Access Layer

[Diagram: a UCS 6100 Fabric Interconnect as the parent switch, uplinked to the LAN and to the SAN (MDS); UCS IOMs act as FEXes and UCS VICs present per-VM interfaces to the App/OS instances = a distributed modular system]
[Chart: VM transmit bandwidth (Gbps, 0 to 12) over time (0 to 70 seconds). Test setup: an 8GB VM sending a UDP stream using pktgen (1500 MTU), UCS B200 blades with the UCS VIC card, vSphere technology preview]
Benchmark records (results as of Jan 24, 2011):
- #1 Two-Socket 2-Node Record: SPECjAppServer*2004, 11,283.80 JOPS@Standard
- #1 Two-Socket Record: SPECjbb*2005, 1,015,802 BOPS
- #1 Two-Socket x86 Record: SPECompM*base2001, 52,314 base score
- #1 Two-Socket Record: SPECompL*base2001, 278,603 base score
- #1 Two-Socket Record, 1st ever to publish on new cloud benchmark: VMmark* 2.0, 6.51 @ 6 tiles
- #1 Four-Socket x86 Blade Record: SPECint*_rate_base2006, 720 base score
- #1 Four-Socket Record: SPECompM*2001, 100,258 base score
- #1 Single-Node Record: 4S LS-Dyna* Crash Simulation, 41,727 seconds car2car

Two-socket comparison based on x86 volume servers: Intel Xeon 5600 series and AMD Opteron 6100 series. Four-socket comparison based on x86 servers: Intel Xeon 7500 series and AMD Opteron 6100 series.
Data Center/Cloud