
Tony Hsieh (thsieh@cisco.com)

Technologies Transforming the Data Center


Agenda
The Evolving Data Centre
Evolution of the Fabric
Evolution of Compute
Next Steps: The Data Center Focus for 2011

BRKDCT-2023 - © 2011 Cisco and/or its affiliates. All rights reserved. Cisco Public

The Evolving Data Centre Architecture


Technology Disruptor - Virtualization
Traditional: one application per operating system per physical server. Virtualized: a hypervisor lets many applications, each in its own OS (a VM), share one server (host).

[Chart: IDC server installed base, 2005-2014 (Source: IDC, Nov 2010) - virtualized servers overtake non-virtualized servers, with the tipping point of the transition around 2009.]



The Evolving Data Centre Architecture


Compute & Fabric Technology Transitions
[Timeline, 1960-2010: Mainframe -> Minicomputer -> Client/Server (TCP/IP & SCSI) -> Internet/Web (massively scaled TCP/IP networks and fabrics) -> Cloud/Virtualization, where the virtualized fabric is scaled with FabricPath, VN-Link, and Unified Fabric.]

The Evolving Data Centre Architecture


Challenges for the Classical Network Design
Hypervisor-based server virtualization and its associated capabilities (vMotion, etc.) are changing multiple aspects of Data Center design:
Where is the server now? Where is the access port? Where does the VLAN exist - any VLAN anywhere? How large do we need to scale Layer 2? What are the capacity planning requirements for flexible workloads? Where are the policy boundaries (security, QoS, WAN acceleration, etc.) with flexible workloads?

Layer 2 and Layer 3 Data Center Tiers

[Diagram: Core (Layer 3), Aggregation (Layer 3/Layer 2 boundary), and Access (Layer 2) tiers spanning Data Center Rows 1 and 2.]

VM mobility can be restricted because current Layer 2 architectures can't scale. FabricPath scales Layer 2 for faster, simpler, flatter Layer 2 networks; OTV provides Layer 2 extension over Layer 3 within a data center.

Data Center Interconnect


OTV provides Layer 2 extension over Layer 3 across data centers; LISP provides IP mobility across the Internet or WAN.
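To make the OTV piece concrete, here is a minimal configuration sketch in the style of NX-OS on a Nexus 7000 OTV edge device; the interface, VLAN range, site VLAN, and multicast groups below are illustrative assumptions, not values from this session:

  ! Hypothetical OTV edge-device sketch (multicast control plane); all names and addresses are assumptions
  feature otv
  otv site-vlan 99
  interface Overlay1
    otv join-interface Ethernet1/1      ! routed uplink toward the DCI transport
    otv control-group 239.1.1.1         ! control-plane multicast group
    otv data-group 232.1.1.0/28         ! range used for multicast data traffic
    otv extend-vlan 100-110             ! VLANs stretched between data centers
    no shutdown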

The Evolving Data Centre Fabric


The Pillars of Change
[Figure: pillars of change, FY08 to FY10 - Convergence, Scale, and Intelligence - built on vPC, VDC, VN-Link, DCB/FCoE, OTV, FEX-link, FabricPath, and Unified Ports. Benefits called out: deployment flexibility, architectural flexibility and scale, workload mobility, simplified management with scale, VM-aware networking, consolidated I/O, active-active uplinks, and virtualizing the switch.]

Evolution of the Fabric

Virtualized Access Switch


Mapping the Compute Pod to the Physical Network
De-coupling the Layer 1 and Layer 2 topologies allows more efficient alignment of compute resources to network devices.
Define the Layer 2 boundary at the switch boundary if the compute workload maps to the scale of the virtualized switch (up to 2 x 1536 ports today).

[Diagram: a virtualized access switch supporting a compute domain.]

Data Center Switching Architecture


vPC - Multi-Chassis EtherChannel
vPC is a port-channeling concept that extends link aggregation across two separate physical switches, allowing the creation of resilient Layer 2 topologies based on link aggregation.

[Diagram: physical vs. logical topology - two aggregation switches presented as a single Virtual Port Channel peer to the access layer; non-vPC vs. vPC, showing increased bandwidth with vPC.]

Enables loop-free Layer 2 topologies with physical network redundancy.
Provides increased bandwidth - all links are actively forwarding.
The vPC peers maintain independent control planes.
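A minimal NX-OS-style vPC sketch of the idea above; the domain number, keepalive addresses, and port-channel numbers are assumptions:

  ! Hypothetical vPC sketch, applied on each of the two peer switches; numbers and addresses are assumptions
  feature vpc
  feature lacp
  vpc domain 10
    peer-keepalive destination 10.0.0.2 source 10.0.0.1
  interface port-channel 1
    switchport mode trunk
    vpc peer-link                        ! inter-switch peer link
  interface port-channel 20
    switchport mode trunk
    vpc 20                               ! member port-channel toward the downstream switch or server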



Scaling the Fabric (Pod)


FabricPath
Scenario: an application grows beyond current compute capacity and allocated rack space, causing network disruptions and physical changes.

[Diagram: Racks 1-3, each restricted to its own VLAN (VLAN 1, 2, 3) in the classical design, versus VLANs 1, 2, and 3 available in any rack with FabricPath.]

Adding server capacity while maintaining Layer 2 adjacency in the same VLAN is disruptive today - it requires a physical move to free adjacent rack space.

With FabricPath: VLAN extensibility - any VLAN anywhere; location independence for workloads; consistent, predictable bandwidth and latency.
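As a sketch of what enabling FabricPath looks like in NX-OS; the switch-ID, VLAN range, and interfaces are assumptions:

  ! Hypothetical FabricPath sketch; switch-id, VLANs, and interfaces are assumptions
  install feature-set fabricpath
  feature-set fabricpath
  fabricpath switch-id 11
  vlan 100-110
    mode fabricpath                      ! VLANs carried across the FabricPath core
  interface ethernet 1/1-4
    switchport mode fabricpath           ! core-facing links use FabricPath IS-IS instead of Spanning Tree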

FEX Provides a Unified Server Access Architecture


Cisco Nexus 5000 + FEX is a virtual modular system: the FEX is a virtual line card for the Nexus 5000, and the Nexus 5000 maintains all management and configuration. It supports rack or blade servers and UCS; ToR, MoR, and EoR deployments; and 100M, GE, and 10GE/FCoE server access.
[Diagram: aggregation layer (VSS/vPC, L3/L2 boundary), Nexus 5000 at the access layer, Nexus 2000 Fabric Extenders at the top of Racks 1 through N, servers below.]


Distributed Modular System to the ToR

[Diagram: Nexus 5000 parent switch with Nexus 2000 (N2232) FEX top-of-rack modules, uplinked to the LAN (N7000/C6500 aggregation) and the SAN (MDS) - together forming one distributed modular system.]


The Nexus 2000 FEX is a virtual line card to the Nexus 5000: the Nexus 5000 maintains all management and configuration, and there is no Spanning Tree between the FEX and the Nexus 5000.

Over 5 million Nexus 2000 ports deployed!
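A minimal NX-OS sketch of attaching a Nexus 2000 FEX to its Nexus 5000 parent; the FEX number, uplink interfaces, and host-port VLAN are assumptions:

  ! Hypothetical FEX attachment sketch; FEX number, interfaces, and VLAN are assumptions
  feature fex
  fex 100
    description Rack-1-ToR
  interface ethernet 1/1-2
    switchport mode fex-fabric           ! fabric uplinks toward the FEX
    fex associate 100
  interface ethernet 100/1/1
    switchport access vlan 10            ! FEX host port appears as a port of the parent switch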



Data Center Architecture Evolution


VN-Link - a Virtual Machine-aware fabric. VN-Link extends the network to the virtualization layer and leverages innovation within networking equipment:
Virtual Ethernet (vEth) interfaces, port profiles, virtual interface mobility, and a consistent operations model.
The network and compute control planes are actively synchronized.


Data Center Architecture Evolution


VN-Link - Virtual Machine-Aware Fabric: two implementations

Switch in the server: Nexus 1000V, an IEEE 802.1Q standards-based software switch in the hypervisor, providing a vEth port per VM.
Switch in the network device: Network Interface Virtualization (VNTag technology, IEEE 802.1Qbh pre-standard).


Data Center Architecture Evolution


VN-Link Today - Nexus 1000V

[Diagram: VMs on multiple vSphere hosts, each host running a Nexus 1000V Virtual Ethernet Module (VEM); a Virtual Supervisor Module (VSM) manages the VEMs and integrates with VMware vCenter.]

Virtual Supervisor Module (VSM): a virtual or physical appliance running Cisco NX-OS (supports HA); performs management, monitoring, and configuration; tight integration with VMware vCenter.
Virtual Ethernet Module (VEM): enables advanced networking capability on the hypervisor; provides each VM with a dedicated switch port; a collection of VEMs forms one vNetwork Distributed Switch.
Cisco Nexus 1000V installation: ESX and ESXi; VUM or manual installation; the VEM is installed and upgraded like an ESX patch.
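For context, a sketch of the VSM-side connection to vCenter; the connection name, datacenter name, and IP address are assumptions:

  ! Hypothetical VSM-to-vCenter registration sketch; names and the IP address are assumptions
  n1000v(config)# svs connection vcenter
  n1000v(config-svs-conn)# protocol vmware-vim
  n1000v(config-svs-conn)# remote ip address 10.1.1.10
  n1000v(config-svs-conn)# vmware dvs datacenter-name DC1
  n1000v(config-svs-conn)# connect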

Data Center Architecture Evolution


VN-Link - Virtual Machine-Aware Fabric
Coordinated management state between network and compute; coordinated control-plane state between network and compute; a transition to real-time coordination between fabric and compute.


n1000v(config)# port-profile WebServers
n1000v(config-port-prof)# switchport mode access
n1000v(config-port-prof)# switchport access vlan 100
n1000v(config-port-prof)# no shut

[Diagram: the port profile defined on the VSM is pushed to vCenter and applied to VMs #2-#4 running on an ESX host with a VEM.]



P81E Overview
PCIe standards-compliant adapter:
- 2x10Gb Unified Fabric, x8 PCIe Gen 2 adapter
- Full-height, half-length adapter
- Two SFP+ 10Gbps external ports (supports SFP+ copper)

Connectivity to any 10G switch is possible; unified I/O requires an FCoE switch (Nexus 5K supported).

Ideal PCIe adapter for both single-OS and hypervisor environments:
- Network Interface Virtualization capable
- Consolidates up to 128 PCIe devices
- VN-Link capability
- Adapter-FEX
- Hypervisor bypass with vMotion

Adapter-FEX
Any network interface can be virtualized (FEX, adapter). A virtualized adapter enables the external switch to forward frames belonging to the same physical port by using VNTag (under standardization as IEEE 802.1Qbh).

Why virtualized adapters?
- Functionally consolidated I/O devices (Ethernet/FC)
- Multiple interfaces for a single-OS server
- Interfaces for virtualized servers

[Diagram: Nexus 5000 ports extended through a FEX to an NIV-capable adapter presenting vNIC 1 through vNIC 5 over two physical ports.]
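A rough sketch of how Adapter-FEX could be enabled on a Nexus 5500 and how a vEthernet port profile is defined for the adapter's vNICs; the commands are quoted from memory and the names and VLANs are assumptions, so treat this as illustrative only:

  ! Hypothetical Adapter-FEX sketch on a Nexus 5500; names and VLANs are assumptions
  install feature-set virtualization
  feature-set virtualization
  vethernet auto-create                  ! auto-create vEth interfaces for adapter vNICs
  interface ethernet 1/5
    switchport mode vntag                ! server-facing port carrying VNTag traffic
  port-profile type vethernet user-data
    switchport mode access
    switchport access vlan 100
    state enabled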


One Network
Parent switch to application: a single point of management. The network manager manages the network all the way to the OS interface - physical and virtual (IEEE 802.1Qbh*).

FEX architecture: consolidates network management; the FEX is managed as a line card of the parent switch.
Adapter-FEX: consolidates multiple 1Gb interfaces into a single 10Gb interface; extends the network into the server.
VM-FEX: consolidates the virtual and physical network; each VM gets a dedicated port on the switch.
*IEEE 802.1Qbh pre-standard

[Diagram: legacy, Adapter-FEX, and VM-FEX deployment models, each extending IEEE 802.1Qbh*-based FEX links - down to the hypervisor in the VM-FEX case.]

Unified Ports
Dynamic port allocation: lossless Ethernet or Fibre Channel
Protocol support can be converted on the same port dynamically. Applies to all ports on the UCS 6200 Series and to the 16-port expansion module for the 6248UP and 6296UP.

[Diagram: a unified port can run either native Fibre Channel or lossless Ethernet (1/10GbE, FCoE, iSCSI, NAS).]


Use cases: flexible LAN and storage convergence based on business needs; service can be adjusted based on the demand for specific traffic.

Benefits: simplifies switch purchasing by removing port-ratio guesswork; increases design flexibility; removes bandwidth bottlenecks.
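On a Nexus 5500UP, for example, converting a block of unified ports from Ethernet to native Fibre Channel is a small configuration step (UCS 6200 unified ports are handled through UCS Manager instead). The slot and port range below are assumptions, and a reload is required for the change to take effect:

  ! Hypothetical unified-port conversion sketch on a Nexus 5548UP; the port range is an assumption
  configure terminal
  slot 1
    port 29-32 type fc                   ! highest-numbered ports of the slot become native FC after a reload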


The Goals of Unified Fabric


What we already know

Reduce overall Data Center power consumption by up to 8% and extend the lifecycle of the current data center.
Wire hosts once to connect to any network - SAN, LAN, HPC - for faster rollout of new apps and services; rack, row, and cross-data-center VM portability become possible.
Every host can mount any storage target, increasing the SAN attach rate and driving storage consolidation and improved utilization.


Unified Fabric over Ethernet


Technologies & Standards

IEEE DCB:
- Priority Flow Control (IEEE 802.1Qbb) creates lossless Ethernet with classes of service.
- Bandwidth Management / Enhanced Transmission Selection (IEEE 802.1Qaz) allows flexible bandwidth sharing between LAN and SAN.
- Data Center Bridging Exchange protocol (DCBX, IEEE 802.1Qaz) provides device-to-device communication about DCB resources.

FCoE:
- Maps FC frames over Ethernet, enabling Fibre Channel to run on lossless Ethernet.
- [Frame layout, byte 0 to byte 2229: Ethernet header | FCoE header | FC header | FC payload | CRC | EOF | FCS.]
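A minimal NX-OS sketch of carrying a VSAN over FCoE on a converged server-facing port; the VLAN/VSAN numbers and the interface are assumptions:

  ! Hypothetical FCoE access sketch on a Nexus 5000; VLAN/VSAN numbers and interfaces are assumptions
  feature fcoe
  vsan database
    vsan 100
  vlan 1000
    fcoe vsan 100                        ! map the FCoE VLAN to the VSAN
  interface ethernet 1/5
    switchport mode trunk
    switchport trunk allowed vlan 10,1000
  interface vfc 5
    bind interface ethernet 1/5          ! virtual Fibre Channel interface bound to the CNA-facing port
    no shutdown
  vsan database
    vsan 100 interface vfc 5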

I/O Consolidation
[Diagram: traditional I/O - each server has separate Ethernet adapters and cabling to the LAN plus FC adapters and cabling to SAN A and SAN B. With FCoE - a single pair of Enhanced Ethernet links per server carries LAN and both SAN fabrics to the first-hop switch, which connects onward to the LAN, SAN A, and SAN B.]


Unified Fabric Design Today


What is really being deployed
The first phase of the Unified Fabric evolution focuses on the fabric edge:
- Unifies the LAN access and the SAN edge by using FCoE.
- Consolidates adapters, cabling, and switching at the first hop in the fabrics (Nexus 5000/5500 with Nexus 2232 10GE FEX and Generation 2 CNAs).
- The unified edge supports multiple LAN and SAN topology options: virtualized Data Center LAN designs; Nexus 5000 as FCF or as an NPV device; Fibre Channel edge with directly attached initiators and targets; Fibre Channel edge-core and edge-core-edge designs; Fibre Channel NPV edge designs.

[Diagram: dual SAN fabrics A and B (FC) and the LAN, converged over FCoE at the server edge.]

Unified Fabric Design CY 2011


Larger Fabric Multi-Hop Topologies
Multi-hop edge/core/edge topology (supported by the Nexus 5000 & 5500 since Q4CY10):
- Core SAN switches supporting FCoE: Nexus 7000 with F1 line cards, or MDS with FCoE line cards.
- Edge switches supporting either: Nexus 5000 in FCoE-NPV mode with FCoE uplinks to the FCoE-enabled core (VNP to VF), or Nexus 5000/7000 in FC switch mode with FCoE ISL uplinks (VE to VE).
- Fully compatible with the virtualized access switch, and co-exists with FabricPath and/or Layer 3 designs.

[Diagram: servers and FCoE-attached storage at the FCoE edge; FC-attached storage behind the FC/FCoE core.]
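As an illustration of the VE-to-VE option, an FCoE ISL between two FCF-mode switches can be sketched roughly as below; the exact keywords vary by platform and release, and the interface numbers, VLAN/VSAN mapping, and mode keyword are assumptions:

  ! Hypothetical FCoE ISL (VE_Port) sketch between two FCoE switches; numbers and keywords are assumptions
  vlan 1000
    fcoe vsan 100
  interface ethernet 1/10
    switchport mode trunk
    switchport trunk allowed vlan 1000
  interface vfc 110
    bind interface ethernet 1/10
    switchport mode E                    ! VE_Port toward the neighboring FCoE switch
    no shutdown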


VM-aware Path Analytics


Path analytics span VM > physical server > embedded switch > ISLs > network switch(es) > target port.
- VM-aware discovery and SAN/LAN correlation.
- VMPath: path analytics from the VM to the storage ports.
- VM performance statistics on CPU, memory, latency, traffic, and errors.
- Host-to-storage path troubleshooting and performance trending.



Health & Performance Dashboards

At-a-glance dashboards for SAN health and performance:
- Top and bottom fabric talkers over a span of time.
- Hosts dashboard with VM-host and VM performance statistics.
- Drill down for detailed performance analytics and trending.



Multi-Hop FCoE Management

- FCoE provisioning and activation across the data center.
- FC-to-Ethernet mapping; ISL performance and health monitoring.
- FCoE overlays on Data Center topology views.
- An FCoE management paradigm consistent with FC management.

Evolution of Compute

Unified Computing Building Blocks


Unified Fabric introduced with the Cisco Nexus series:
- Physical: wire-once infrastructure (Nexus 5500) - fewer switches, adapters, and cables for Ethernet and Fibre Channel.
- Virtual: VN-Link (Nexus 1000V) - manage the virtual network the same as the physical.
- Scale: Fabric Extender (Nexus 2000) - scale without increasing points of management.


Building Blocks of Cisco UCS


An Integrated System Optimizes Data Center Efficiency
- UCS Manager: service profiles and virtualization integration.
- UCS Fabric Interconnect: 10GE unified fabric switch; one per up to 320 blades.
- UCS Fabric Extender: a remote line card, one per chassis.
- UCS Blade Server Chassis: flexible bay configurations.
- UCS Blade and Rack Servers: x86 industry standard, with patented extended memory.
- UCS I/O Adapters: a choice of multiple adapters.

Unify and Simplify


The Fabric Extender and VN-Link simplify server access management.

[Diagram: classical blade deployment with separate management points - Ethernet switch, FC switch, chassis, Ethernet blade switch, Fibre Channel blade switch, and virtual switch - connecting to the LAN and SAN A/B.]


Unify and Simplify


Unified Fabric simplifies I/O infrastructure and management while maintaining enterprise-class high availability.

[Diagram: management points reduced to the Ethernet switch, FC switch, chassis, and Fibre Channel blade switch.]


Unify and Simplify


Cisco UCS consolidates server infrastructure into a single point of management.

[Diagram: unified network management plus chassis management, connecting to the LAN and SAN A/B.]


Wire for Bandwidth, Not Connectivity


Uplinks can be provisioned at 20Gb/s, 40Gb/s, or 80Gb/s in a wire-once architecture:
- All links can be active all the time.
- Policy-driven bandwidth allocation.
- Virtual interface granularity.



Cisco Unified Computing System (UCS)


A New Approach to Server Infrastructure
What does your Data Center organization look like? From ad hoc and inconsistent, to structured but siloed, complicated, and costly, to simple, optimized, and automated.

What is stateless computing architecture?

Stateless computing means that every compute node has no inherent state pertaining to the services it may host; a compute node is just an execution engine for any application (CPU, memory, and disk - flash or hard drive). The core concept of a stateless computing environment is to separate the state of a server that is built to host an application from the hardware it resides on. Servers can then easily be deployed, cloned, grown, shrunk, de-activated, archived, and re-activated.

UCS Technologies for Elasticity


Service Profiles: A UCS Server (not a blade, but an XML object)
Profile Name = vmhost-cluster1-1
UUID = 12345678-ABCD-9876-5432ABCDEF123456
Description = ESX4u1 1st Host in Cluster 1
LAN Config:
  vNIC0: Switch = Switch A; Pin Group = SwitchA-pingroupA; VLAN Trunking = Enabled; Native VLAN = VLAN 100; MAC Address = 00:25:B5:00:01:01; Hardware Failover Enabled = No; QoS policy = VMware-QoS-policy
  vNIC1: Switch = Switch B; Pin Group = SwitchB-pingroupA; VLAN Trunking = Enabled; Native VLAN = VLAN 100; MAC Address = 00:25:B5:00:01:02; Hardware Failover Enabled = No; QoS policy = VMware-QoS-policy
SAN Config:
  Node ID = 20:00:00:25:B5:2b:3c:01:0f
  vHBA0: Switch = Switch A; VSAN = VSAN1-FabricA; WWPN = 20:00:00:25:B5:2b:3c:01:01
  vHBA1: Switch = Switch B; VSAN = VSAN1-FabricB; WWPN = 20:00:00:25:B5:2b:3c:01:02
Boot Policy = boot-from-ProdVMax
Boot Order:
  1. Virtual CD-ROM
  2. vHBA0, 50:00:16:aa:bb:cc:0a:01, LUN 00, primary
  3. vHBA1, 50:00:16:aa:bb:cc:0b:01, LUN 00, secondary
  4. vNIC0
Host Firmware Policy = VIC-EMC-vSphere4
Management Firmware Policy = 1-3-mgmt-fw
IPMI Profile = standard-IPMI
Serial-over-LAN Policy = VMware-SOL
Monitoring Threshold Policy = VMware-Thresholds
Local Storage Profile = no-local-storage
Scrub Policy = Scrub local disks only


Network has Complete Visibility to Servers


Service profiles capture more than MAC and WWN: MAC, WWN, boot order, firmware, and network and storage policy.
Stateless compute where the network and storage see all movement - better diagnosability, and QoS from the network to the blade; policy follows the server.

[Diagram: service profile SP-A (UUID 56 4d cd 3f 59 5b 61, MAC 08:00:69:02:01:FC, WWN 5080020000075740, boot order SAN then LAN) moves from Chassis-1/Blade-5 to Chassis-9/Blade-2 while keeping its LAN and SAN identity.]

Service profiles deliver service agility regardless of physical or virtual machine.

Server Availability
Today's deployment: provisioned for peak capacity, with burst capacity and an HA spare node per workload (e.g., Oracle, Web, and VMware clusters of 5 blades each).

With service profiles: resources are provisioned as needed, giving the same availability with fewer spares (e.g., 3 blades each for Oracle and Web, 2 for VMware, plus a small shared spare pool).


UCS Technologies for Elasticity


Infrastructure Automation from Profiles or Templates
- Supports fast-turnaround business requirements: no need to maintain a cache of servers for each service type (e.g., Exchange, ESX-DRS, and Oracle RAC node profiles, each carrying UUID, MAC, WWN, boot info, firmware, and LAN/SAN configuration, with 16, 4, or 2 nodes needed per service).
- Business priority can be assigned to each server: lower-priority services can be temporarily disassociated when compute is needed elsewhere.
- A project length can be assigned to each server or group of servers: servers can be disassociated after the sign-up period (with appropriate governance), the reclaimed compute made available for other projects, and boot/data images (disk/LUN) preserved in case the project must be restored later.
- The boundary for services is not a chassis or rack: server pools and qualifications allow a more intelligent infrastructure.


UCS Technologies for Elasticity


Interface Virtualization - VM-FEX

The OS sees administratively definable (MAC, QoS, VLAN, VSAN, WWPN, etc.) Ethernet and Fibre Channel interfaces, each of which connects to a Cisco virtual interface (VIF).
The hypervisor sees unconfigured (no MAC, VLAN, etc.) Ethernet interfaces, which are configured by the external VMM and connect to a Cisco virtual interface (VIF).

Extending FEX architecture to Virtual Machines


Cascading port extenders: in the baseline architecture, the LAN switch connects to a vSwitch in the hypervisor; with cascaded Fabric Extenders the switch port is extended toward the server, and with VM-FEX it is extended all the way to the virtual machine.

[Diagram: baseline LAN switch plus hypervisor vSwitch, compared with the switch port extended over cascaded Fabric Extenders (FEX and VM-FEX) so the logical switch reaches each VM directly.]

Collapse the virtual and physical networking tiers!



UCS VM-FEX
Extend FEX Architecture to the Virtual Access Layer
[Diagram: UCS Fabric Interconnect (UCS 6100) as the parent switch, UCS IOM-FEX modules in each chassis, and the Cisco UCS VIC in each server, uplinked to the LAN (N7000/C6500) and SAN (MDS) - one distributed modular system reaching up to the VMs.]

VM-FEX: a single virtual-physical access layer.
- Collapses virtual and physical switching into a single access layer.
- The VIC is a virtual line card to the UCS Fabric Interconnect.
- The Fabric Interconnect maintains all management and configuration.
- Virtual and physical traffic are treated the same.

UCS Technologies Impacting the DC


What does VM-FEX change?
- Each virtual machine vNIC is now connected at the data network edge, with a 1:1 mapping between a virtual adapter and the upstream network port.
- Helps with requirements (e.g., Payment Card Industry) for VMs to have a separate adapter with no soft switches.
- As virtual machines move around the infrastructure, the network edge port moves along with the virtual adapter.


VMdirectPath with VMotion:


Native Performance, Network-Awareness, and Mobility
A temporary transition from VMDirectPath to standard I/O occurs during the move.

[Chart: bandwidth (Gbps, 0-12) versus time (0-70 sec) during a vMotion to a secondary host, showing throughput across the temporary transition from VMDirectPath to standard I/O and back.]

Test setup: 8GB VM sending a UDP stream using pktgen (1500 MTU); UCS B200 blades with the UCS VIC card; vSphere technology preview.

Current Cisco UCS Performance Records


Server World Records
Continuing the trend of record-setting performance (results as of Jan 24, 2011):
- #1 Two-Socket 2-Node Record, SPECjAppServer*2004: 11,283.80 JOPS@Standard
- #1 Two-Socket Record, SPECjbb*2005: 1,015,802 BOPS
- #1 Two-Socket x86 Record, SPECompM*base2001: 52,314 base score
- #1 Two-Socket Record, SPECompL*base2001: 278,603 base score
- #1 Two-Socket Record, 1st ever to publish on the new cloud benchmark, VMmark* 2.0: 6.51 @ 6 tiles
- #1 Four-Socket x86 Blade Record, SPECint*_rate_base2006: 720 base score
- #1 Four-Socket x86 Record, SPECjbb*2005: 2,021,525 BOPS
- #1 Four-Socket Record, SPECompM*2001: 100,258 base score
- #1 Single-Node Record, 4S LS-Dyna* Crash Simulation: 41,727 seconds (car2car)
- #1 Oracle eBiz Suite Payroll Batch: 581,846 Employees/Hour (Large); 368,098 Employees/Hour (Medium)

Two-socket comparisons based on x86 volume servers (Intel Xeon 5600 series and AMD Opteron 6100 series); four-socket comparisons based on x86 servers (Intel Xeon 7500 series and AMD Opteron 6100 series).


Next Steps: The Data Center Focus for 2011

Summary: The Data Center


At the Heart of Business Innovation
IT initiatives - virtualization, consolidation, application integration, compliance, and cloud services - drive Data Center/cloud transformation, delivering business value: cost reduction and revenue generation, new service creation and new business models, and governance and risk management. (Source: IT initiatives from Goldman Sachs CIO Study.)
