Você está na página 1de 41

Unified Data Center Architecture:
Integrating Unified Compute System Technology

BRKCOM-2986
marregoc@cisco.com

Agenda
Overview
Design Considerations
A Unified DC Architecture
Deployment Scenarios


UCS Building Blocks

UCS Manager
  Embedded; manages the entire system
UCS 6100 Series Fabric Interconnect
  20-port 10Gb FCoE: UCS-6120
  40-port 10Gb FCoE: UCS-6140
UCS 2100 Series Fabric Extender
  Remote line card
UCS 5100 Series Blade Server Chassis
  Flexible bay configurations
UCS B-Series Blade Servers
  Industry-standard architecture
UCS Virtual Adapters
  Choice of multiple adapters

Legend
  Catalyst 6500 Multilayer Switch
  Catalyst 6500 L2 Switch
  Virtual Switching System
  Generic Cisco Multilayer Switch
  Generic Virtual Switch
  Nexus 1000V (VSM)
  Nexus 1000V (VEM)
  Embedded VEM with VMs
  Nexus 2000 (Fabric Extender)
  Nexus 5K with VSM
  Nexus Multilayer Switch
  Nexus L2 Switch
  Nexus Virtual DC Switch (Multilayer)
  Nexus Virtual DC Switch (L2)
  UCS Fabric Interconnect
  UCS Blade Chassis
  Virtual Blade Switch
  ACE Service Module
  ASA
  MDS 9500 Director Switch
  MDS Fabric Switch
  IP Storage

Agenda
Overview
Design Considerations
Blade Chassis
Rack Capacity & Server Density

System Capacity & Density


Chassis External Connectivity
Chassis Internal Connectivity

A Unified DC Architecture
Deployment Scenarios


Blade Chassis: UCS 5108 Details

[Figure: functional view and actual front/rear views of the UCS 5108 chassis]
  Front view: 8 blade slots (slot 1 through slot 8)
  Rear view: two Fabric Extenders
  Dimensions: 10.5 in (26.7 cm) high x 17.5 in (44.5 cm) wide x 32 in (81.2 cm) deep
  Airflow: front to back

Blade Chassis: UCS 5108 Details

UCS 5108 Chassis Characteristics
  8 blade slots:
    8 half-width servers, or
    4 full-width servers
  Up to two Fabric Extenders, both concurrently active
  Redundant and hot-swappable power supplies and fan modules

Additional Chassis Details
  Size: 10.5 in (6U) x 17.5 in x 32 in
  Total power consumption, nominal estimates:
    Half-width servers: 1.5 - 3.5 kW
    Full-width servers: 1.5 - 3.5 kW
  Cabling: SFP+ connectors for FEX to Fabric Interconnect
    CX1: 3 and 5 meters
    USR: 100 meters (late summer)
    SR: 300 meters
  Airflow: front to back

Blades, Slots, and Mezz Cards

B-Series Blades
  Half-width Blade B200-M1
    Each blade uses one slot
    One mezz card, two ports per server
    Each port connects to a different FEX
  Full-width Blade B250-M1
    Each blade uses two slots: always 1,2 or 3,4 or 5,6 or 7,8
    Two mezz cards, four ports per server
    Two ports to each FEX (one from each mezz card)

Slots and Mezz Cards
  Each slot takes a single mezz card
  Each mezz card has 2 ports
  Each port connects to one FEX
  Total of 8 mezz cards per chassis

UCS 2100 Series Fabric Extender Details

UCS 2104XP Fabric Extender Port Usage
  Two Fabric Extenders per chassis
  4 x 10GE uplinks per Fabric Extender
  Fabric Extender connectivity
    1, 2, or 4 uplinks per Fabric Extender
    All uplinks connect to a single Fabric Interconnect
  Port combinations across Fabric Extenders
    FEX uplink count must match within an enclosure
    Any port on the FEX can be used

Fabric Extender Capabilities
  Managed as part of the Unified Compute System
  802.1Q trunking and FCoE capabilities
  Uplink traffic distribution (see the sketch below)
    Uplink is selected when blades are inserted
    Slot assignment occurs at power up
    All traffic from a slot is assigned to a single uplink
  Other logic
    Monitoring and control of environmentals
    Blade insertion/removal events
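The pinning behavior described above can be illustrated with a minimal sketch (illustrative only; the actual UCS Manager pinning algorithm is not detailed in this deck). Each populated slot is assigned, in sequence, to one of the FEX's active uplinks:

    # Illustrative sketch of slot-to-uplink pinning on a UCS 2104XP FEX.
    # Assumption: slots are pinned to active uplinks in simple sequential order.

    def pin_slots_to_uplinks(populated_slots, active_uplinks):
        """Return {slot: uplink}; all traffic from a slot rides a single uplink."""
        if active_uplinks not in (1, 2, 4):
            raise ValueError("A 2104XP FEX runs with 1, 2, or 4 active uplinks")
        pinning = {}
        for i, slot in enumerate(sorted(populated_slots)):
            pinning[slot] = i % active_uplinks + 1   # uplinks numbered 1..N
        return pinning

    # Example: 8 half-width blades, 2 active uplinks per FEX
    print(pin_slots_to_uplinks(range(1, 9), 2))
    # {1: 1, 2: 2, 3: 1, 4: 2, 5: 1, 6: 2, 7: 1, 8: 2}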

UCS 6100 Series Fabric Interconnect Details

Fabric Interconnect Overview
  Management of the Unified Compute System
  Network connectivity
    to/from compute nodes
    to/from LAN/SAN environments

Types of Fabric Interconnect
  UCS-6120XP (1U): 20 fixed 10GE/FCoE ports and 1 expansion slot
  UCS-6140XP (2U): 40 fixed 10GE/FCoE ports and 2 expansion slots

Fabric Interconnect Details
  Expansion modules
    Fibre Channel: 8 x 1/2/4G FC
    Ethernet: 6 x 10GE SFP+
    Fibre Channel & Ethernet: 4 x 1/2/4G FC and 4 x 10GE SFP+
  Fixed ports: FEX or uplink connectivity
  Expansion slot ports: uplink connectivity only

Rack Capacity and Cabling Density

[Figure: 6-foot and 7-foot racks, 24 in wide cabinets with 19 in mounting rails]

Rack space - what is possible
  1 RU = 1.75 in = 44.45 mm
  6-foot rack, 42U usable: up to 7 chassis
  7-foot rack, 44U usable: up to 7 chassis

Power - what is realistic (see the worked example below)
  2 chassis = 3 - 6 kW
  3 chassis = 4.5 - 9 kW
  4 chassis = 6 - 12 kW
  5 chassis = 7.5 - 15 kW
  6 chassis = 9 - 18 kW
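A worked version of the per-rack range above (a sketch only; the per-chassis figures are the nominal estimates from the chassis slide, and actual draw depends on blade configuration and workload):

    # Rough per-rack power estimate from the nominal per-chassis range.
    # Assumption: each loaded UCS 5108 chassis draws ~1.5 kW (low) to ~3 kW (high).

    CHASSIS_KW_MIN, CHASSIS_KW_MAX = 1.5, 3.0

    for chassis in range(2, 7):
        lo = chassis * CHASSIS_KW_MIN
        hi = chassis * CHASSIS_KW_MAX
        print(f"{chassis} chassis: {lo:.1f} - {hi:.1f} kW per rack")
    # 2 chassis: 3.0 - 6.0 kW ... 6 chassis: 9.0 - 18.0 kW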


UCS Power - Nominal Estimates

UCS Power Specifics
  Fabric Interconnect
    UCS-6120XP: 350W - 450W
    UCS-6140XP: 480W - 750W
  Processors (max consumption)
    Xeon L5520: 60W
    Xeon E5520: 80W
    Xeon E5540: 80W
    Xeon X5570: 95W
  Mezz cards: 11W to 21W
  Chassis: 210W - 710W
  Operational blade: ~60W to ~395W

The nominal power range depends on:
  Processor
  Memory
  Mezz cards
  Hard drives
  Turbo Mode
  Workload
  OS and OS power-saving configuration
  C-state usage
  Hyper-Threading, Enhanced SpeedStep

[Figure: enclosure front and rear views highlighting the blades, mezz cards, Fabric Extenders and power supplies]

Server Density & Uplinks/Bandwidth per Rack

Blades per rack:

  Blade type  | 2 Chassis | 3 Chassis | 4 Chassis | 5 Chassis | 6 Chassis
  Half-width  |    16     |    24     |    32     |    40     |    48
  Full-width  |     8     |    12     |    16     |    20     |    24

Uplinks and bandwidth per rack (see the sketch after the tables):

  Uplinks per FEX | 2 Chassis | 3 Chassis | 4 Chassis | 5 Chassis | 6 Chassis
  1 FEX uplink    | 4 - 40G   | 6 - 60G   | 8 - 80G   | 10 - 100G | 12 - 120G
  2 FEX uplinks   | 8 - 80G   | 12 - 120G | 16 - 160G | 20 - 200G | 24 - 240G
  4 FEX uplinks   | 16 - 160G | 24 - 240G | 32 - 320G | 40 - 400G | 48 - 480G
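A minimal sketch of the arithmetic behind these tables (illustrative only): blades per rack and aggregate FEX uplink bandwidth follow directly from the chassis count and the number of 10GE uplinks per Fabric Extender.

    # Sketch of the per-rack density/bandwidth arithmetic used in the tables above.
    # Assumptions: 8 half-width or 4 full-width blades per UCS 5108 chassis,
    # two FEXes per chassis, 10GE per FEX uplink.

    def rack_summary(chassis, uplinks_per_fex, blade="half"):
        blades_per_chassis = 8 if blade == "half" else 4
        uplinks = chassis * 2 * uplinks_per_fex          # 2 FEXes per chassis
        return {
            "blades": chassis * blades_per_chassis,
            "uplinks": uplinks,
            "uplink_bw_gbps": uplinks * 10,
        }

    print(rack_summary(4, 2))          # 32 half-width blades, 16 uplinks, 160G
    print(rack_summary(6, 4, "full"))  # 24 full-width blades, 48 uplinks, 480G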


Overall rack density depends on the power available per rack and the power configuration per enclosure.


UCS Compute Node Density

[Figure: Fabric Interconnects, Fabric Extenders and blade chassis]

Unified Compute System density factors:
  Blade server type
  Chassis server density
  Fabric Interconnect density
  Uplinks from the Fabric Extenders
  Bandwidth per compute node
  Network oversubscription

Bandwidth vs. oversubscription (see the worked example below)
  Bandwidth:
    The traffic load a server needs to support
    Specified by the server and/or application engineer
  Oversubscription:
    A measure of network capacity
    Designed by the network engineer
  Multi-homed servers:
    More IO ports may not mean more bandwidth
    Depends on active-active vs. active-standby
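As a worked example of the distinction (a sketch under stated assumptions, not a sizing rule), oversubscription is the ratio of the bandwidth the servers could offer to the uplink capacity actually provisioned:

    # Illustrative oversubscription calculation for one UCS 5108 chassis.
    # Assumptions: 8 half-width blades, 10G of usable bandwidth each
    # (dual-homed, active-standby), and 2 FEXes with 2 x 10GE uplinks each.

    server_bw_gbps = 8 * 10          # bandwidth the blades could offer
    uplink_bw_gbps = 2 * 2 * 10      # 2 FEXes, 2 x 10GE uplinks each

    ratio = server_bw_gbps / uplink_bw_gbps
    print(f"Chassis oversubscription: {ratio:.0f}:1")   # 2:1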


UCS Compute Node Density

Total number of blades per Unified Compute System

[Figure: a Fabric Interconnect pair connected to multiple Unified Compute chassis through their Fabric Extenders]

Chassis uplink capacity
  Influenced by blade bandwidth requirements
  Influenced by bandwidth per blade type
    Half-width: 10GE per blade, 4 uplinks per FEX
    Full-width: 20GE per blade, 4 uplinks per FEX
  Influenced by oversubscription
    East-west traffic subscription is 1:1
    North-south traffic subscription needs to be engineered on the Fabric Interconnect uplinks

UCS Compute Density and Bandwidth Capacity

The number of servers is determined by Fabric Interconnect port density and the number of uplinks per chassis
  Server density range: 20 - 320
    Half-width: 40 - 320
    Full-width: 20 - 160
  Bandwidth range per blade: 2.5 - 20 Gbps
    Half-width: 2.5 - 10 Gbps
    Full-width: 5 - 20 Gbps
  Chassis per UCS: 5 - 40
  Maximum bandwidth per blade: half-width 20 Gbps, full-width 40 Gbps

Density and bandwidth by uplinks per FEX (per-blade arithmetic sketched below):

  Uplinks per FEX                     | 1 x 10GE | 2 x 10GE | 4 x 10GE
  UCS-6120: # of chassis              |    20    |    10    |     5
            B/W per chassis           |   20G    |   40G    |    80G
            Half-width blades         |   160    |    80    |    40
            B/W per half-width blade  | 2.5 Gbps |  5 Gbps  | 10 Gbps
            Full-width blades         |    80    |    40    |    20
            B/W per full-width blade  |  5 Gbps  | 10 Gbps  | 20 Gbps
  UCS-6140: # of chassis              |    40    |    20    |    10
            B/W per chassis           |   20G    |   40G    |    80G
            Half-width blades         |   320    |   160    |    80
            B/W per half-width blade  | 2.5 Gbps |  5 Gbps  | 10 Gbps
            Full-width blades         |   160    |    80    |    40
            B/W per full-width blade  |  5 Gbps  | 10 Gbps  | 20 Gbps
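The per-blade figures above follow from a simple division (illustrative sketch; assumes traffic is evenly spread and both FEXes are active):

    # Sketch of the per-blade bandwidth arithmetic in the table above.
    # Assumptions: 2 FEXes per chassis, 10GE per FEX uplink, and 8 half-width
    # or 4 full-width blades per chassis with traffic evenly distributed.

    def bw_per_blade_gbps(uplinks_per_fex, blades_per_chassis):
        chassis_bw = 2 * uplinks_per_fex * 10        # both FEXes active
        return chassis_bw / blades_per_chassis

    for uplinks in (1, 2, 4):
        print(uplinks, "uplink(s) per FEX:",
              bw_per_blade_gbps(uplinks, 8), "Gbps half-width,",
              bw_per_blade_gbps(uplinks, 4), "Gbps full-width")
    # 1 -> 2.5 / 5.0, 2 -> 5.0 / 10.0, 4 -> 10.0 / 20.0 Gbps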

UCS Fabric Interconnect Location

Horizontal cabling: alternatives for Fabric Interconnect placement
  Location depends on cabling type and on density (Fabric Interconnect and enclosure)
  CX1: 1 - 5 meters: the Fabric Interconnect may be placed in a centralized location mid-row*
  USR: 100 meters: the Fabric Interconnect may be placed at the end of the row*, near the cross connect
  SR: 300 meters: the Fabric Interconnect may be placed near a multi-row* interconnect
  * Row length calculations are based on 24 in wide racks/cabinets

CX1 cabling (5 meters ~ 16 feet)
  2 uplinks per Fabric Extender, 3 chassis per rack
  4 Fabric Interconnects (UCS-6120)
    2 x 20 10GE ports for chassis connectivity
    4 x 10GE and 4 x 4G FC north facing
  Server count: 20 chassis x 8 = 160 half-width servers

USR fiber (100 meters ~ 330 feet)
  2 uplinks per Fabric Extender, 3 chassis per rack
  4 Fabric Interconnects (UCS-6120)
    2 x 20 10GE ports for chassis connectivity
    4 x 10GE and 4 x 4G FC north facing
  Server count: 20 chassis x 8 = 160 half-width servers

Unified Compute System External Connectivity

[Figure: Unified Compute System attached to the LAN fabric (core/aggregation/access) and to SAN fabrics A and B, with storage arrays behind the SAN core]

Ethernet fabric
  Single fabric; the Fabric Interconnect is 10GE attached
  Fabric Interconnect modes: switch mode or end-host mode
  Interconnect connectivity point: the L3/L2 boundary in all cases (Nexus 7000 and Catalyst 6500)

SAN fabric
  Two physically identical fabrics (Fabric A and Fabric B)
  The Fabric Interconnect is 4G FC attached, in NPV mode
  Interconnect connectivity point: SAN core or SAN edge, based on core-edge vs. edge-core-edge models

Internal Connectivity: Mezz Card Ports and Blade Connectivity

[Figure: blade slots and mezz card ports (s1-s8) and their connections to the two Fabric Extenders]

Fabric Extender connectivity
  Fabric Extender uplinks connect to a single Fabric Interconnect
  Each uplink is used independently
    Uplinks do not form a port channel
    Each uplink is an 802.1Q trunk
  Traffic distribution is based on the slot at bring-up time
    A slot's mezz port maps to the connected Fabric Extender
    At FCS: sequential assignment (port pinning)
    Post FCS: a port could have a dedicated uplink

Mezz card port usage (see the sketch below)
  Each slot connects to each Fabric Extender
  Each slot supports a dual-port mezz card
  Slots and blades:
    One slot, one mezz card: one half-width blade
    Two slots, two mezz cards: one full-width blade
  A slot's mezz port is assigned to a Fabric Extender uplink
    Based on the available uplinks, in sequence
    Port redundancy is based on the uplink with the fewest physical connections
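A minimal sketch of that assignment logic (illustrative only; the exact UCS Manager algorithm is not spelled out in this deck): each slot's mezz port on a given side goes to that side's FEX, and within the FEX the slot is placed on the least-loaded active uplink.

    # Illustrative sketch: assign each slot's mezz ports to FEX uplinks,
    # preferring the uplink with the fewest slots already pinned to it.
    # Assumption: 2 FEXes (A and B), each with the same number of active uplinks.

    from collections import defaultdict

    def assign_slots(slots, uplinks_per_fex):
        load = {fex: defaultdict(int) for fex in ("A", "B")}
        table = {}
        for slot in sorted(slots):
            entry = {}
            for fex in ("A", "B"):                        # one mezz port per FEX
                uplink = min(range(1, uplinks_per_fex + 1),
                             key=lambda u: load[fex][u])  # least-loaded uplink
                load[fex][uplink] += 1
                entry[fex] = uplink
            table[slot] = entry
        return table

    print(assign_slots(range(1, 9), 2))
    # slot 1 -> {'A': 1, 'B': 1}, slot 2 -> {'A': 2, 'B': 2}, slot 3 -> {'A': 1, ...}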

Internal Connectivity: Mezz Cards and Virtual Interface Connectivity

[Figure: vEth and vHBA interfaces on the Fabric Interconnects mapped to mezz card ports (eth0/eth1) and to vNICs/vHBAs on half-width and full-width blades]

General connectivity
  Port channels
    Are not formed between mezz ports
    Are not formed across mezz cards
  Backup interfaces
    Mezz port backup is within the mezz card
    Redundancy depends on the mezz card
  Interface redundancy
    vNIC redundancy is done across mezz card ports

Blade connectivity (see the sketch below)
  Full-width: 2 mezz cards, 4 ports
    vNICs can be mapped to any port
    vHBAs are round-robin mapped to the fabrics*
  Half-width: 1 mezz card, 2 ports
    vNICs can be mapped to any port
    vHBAs are mapped to one port each, not redundant*; vHBA mapping is round robin
  * Host multipathing software is required for redundancy
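A small sketch of the round-robin vHBA placement described above (illustrative; actual placement is handled by UCS Manager and depends on the adapter):

    # Illustrative sketch: round-robin placement of vHBAs across the available
    # mezz card ports (and therefore fabrics A/B). Host multipathing software
    # is still required for storage path redundancy.

    from itertools import cycle

    def place_vhbas(vhba_names, mezz_ports):
        ports = cycle(mezz_ports)
        return {vhba: next(ports) for vhba in vhba_names}

    # Half-width blade: one dual-port mezz card
    print(place_vhbas(["vhba0", "vhba1"], ["mezz1-port0(A)", "mezz1-port1(B)"]))
    # Full-width blade: two dual-port mezz cards, four ports
    print(place_vhbas(["vhba0", "vhba1"],
                      ["mezz1-port0(A)", "mezz1-port1(B)",
                       "mezz2-port0(A)", "mezz2-port1(B)"]))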

Agenda
Overview
Design Considerations
A Unified DC Architecture
The Unified DC Architecture
The Virtual Access Layer
Unified Compute Pods
Distributed Access Fabric Model

Deployment Scenarios


The Unified DC Architecture

[Figure: end-to-end POD topology - Nexus 7000 core (L3), Nexus 7000 vPC aggregation with Catalyst 6500 service modules and service appliances, Nexus 5000/Nexus 2000 access, and UCS with Nexus 1000V virtual access (VMs) across racks]

Core: the L3 boundary to the DC network. The functional point for route summarization, the injection of default routes and the termination of segmented virtual transport networks.

Aggregation: the typical L3/L2 boundary. The DC aggregation point for uplinks and DC services, offering key features: vPC, VDC, 10GE density, and the first point of migration to 40GE and 100GE.

Access: the classic network layer providing non-blocking paths to servers and IP storage devices through vPC. It leverages the Distributed Access Fabric (DAF) model to centralize configuration and management and to ease the horizontal cabling demands of 1GE and 10GE server environments.

Virtual Access: a virtual layer of network intelligence offering access-layer-like controls to extend traditional visibility, flexibility and management into virtual server environments. Virtual network switches bring access-layer switching capabilities to virtual servers without the burden of topology control-plane protocols. Virtual adapters provide granular control over virtual and physical server IO resources.

A Unified Compute Pod

A modular, predictable, virtualized compute environment
POD: a modular, repeatable compute environment with predictable scalability and deterministic functions

[Figure: PODs hanging off the LAN core/aggregation and the SAN core (Fabric A/B with storage arrays); each POD contains LAN access, SAN edge A/B, Fabric Interconnects, Fabric Extenders and UCS down to the virtual access layer]

General purpose POD
  Typical enterprise application environment
  Classic client/server applications
  Multi-tier applications: web, app, DB
  Low- to high-density compute environments
  Includes stateful services

Unified Compute Pod
  A collection of Unified Compute Systems
  IO consolidation
  Workload mobility
  Application flexibility
  Virtualization

The POD concept applies to distinct application environments through a modular approach to building the physical, network and compute infrastructure in a predictable and repeatable manner. It allows organizations to plan the rollout of distinct compute environments as needed in a shared physical data center, using a pay-as-you-go model.

Physical Infrastructure and Network Topology

Mapping the physical to the logical

[Figure: data center floor plan with hot and cold aisles; a zone of 4 PODs, each POD built from network, storage and server racks]
  EoR access POD
  ToR POD (DAF rack)
  DAF POD
  DAF UCS POD (UCS blade racks)

Access Layer Network Model

End of Row, Top of Rack, and Blade Switches

What it used to be (GE access, per row)
  End of Row
  Top of Rack
  Blade Switches

What is emerging
  EoR & blades
  ToR & blades
  DAF & 1U servers
  DAF & blades

What Cisco has done
  DAF & 1U servers: Nexus 2K + Nexus 5K (future 7K & 6K)
  DAF & blades: UCS Fabric Extender to Fabric Interconnect

What influences the physical layout
  Primarily: power, cooling, cabling
  Secondarily: access model, port density

Distributed Access Fabric in a Blade Environment

Distributed Access Fabric (DAF)

[Figure: DAF blade rack with a fabric instance per chassis, vertical cabling within the rack, and horizontal cabling to the Fabric Interconnect in the network rack via the cross-connect rack]

Why a Distributed Access Fabric?
  All chassis are managed by the Fabric Interconnect: single configuration point, single monitoring point
  Fabric instances per chassis are present at the rack level: fewer management points
  Fabric instances are extensions of the Fabric Interconnect: they are Fabric Extenders
  Simplifies the cabling infrastructure
    Horizontal cabling choice: whatever is available, fiber or copper
      CX1 cabling for brownfield installations: Fabric Interconnect centrally located
      USR for greenfield installations: Fabric Interconnect at the end of the row, near the cross connect
    Vertical cabling: just an in-rack patch cable

Network Equipment Distribution

EoR, ToR, Blade Switches, and Distributed Access Fabric

EoR (End of Row)
  Network fabric location: modular switch at the end of a row of server racks
  Cabling: copper from server to access; fiber from access to aggregation
  Port density: 240 - 336 ports (C6500); 288 - 672 ports (N7000)
  Rack server density: 6 - 12 multi-RU servers; 8 - 30 1-RU servers
  Tiers in the POD: typically 2 (access and aggregation)

ToR (Top of Rack)
  Network fabric location: low-RU, lower port density switch per server rack
  Cabling: copper from server to ToR switch; fiber from ToR to aggregation
  Port density: 40 - 48 ports (C49xx GE)
  Rack server density: 14 - 16 servers (dual-homed)
  Tiers in the POD: typically 2 (access and aggregation)

Blade Switches
  Network fabric location: switches integrated into the blade enclosures in each server rack
  Cabling: copper or fiber from access to aggregation
  Rack server density: 3 blade enclosures per rack, 12 - 48 blade servers
  Tiers in the POD: typically 2 (access and aggregation)

DAF (Distributed Access Fabric)
  Network fabric location: access fabric on top of the rack with the access switch at the end of the row; ranges from classic EoR to ToR designs
  Cabling: copper or fiber in-rack patch; fiber from the access fabric to the fabric switch
  Port density: 20 - 4000 ports (N2K/N5K GE)
  Rack server density: applicable to low- and high-density server racks; most flexible
  Tiers in the POD: one or two (collapsed access/aggregation, or classic aggregation and access)

Agenda
Overview
Design Considerations
A Unified DC Architecture
Deployment Scenarios
Overall Architecture
Ethernet Environments
Switch mode

EHM
UIO Environments


UCS and Network Environments

[Figure: Unified Compute System attached to the LAN core and to SAN fabrics A/B with storage arrays, through the Fabric Interconnects and Fabric Extenders]

LAN & SAN deployment considerations
  Mezz cards: CNAs
    Up to 10G of FC (FCoE); Ethernet traffic can take the full capacity if needed
  Uplinks per Fabric Extender
    Fan-out, oversubscription and bandwidth
  Uplinks per Fabric Interconnect
    2 or 4 FC uplinks per fabric
    2 or 3 10GE uplinks to each upstream switch
  Fabric Interconnect FC uplinks
    Flavor of expansion module
    4G FC: 8 or 4 ports, and N-Port channels
  Fabric Interconnect connectivity point
    The Fabric Interconnect is the access layer
    Should connect to the L2/L3 boundary switch

UCS Fabric Interconnect Operation Modes

Switch Mode and End-Host Mode (upstream PODs built on VSS or vPC)

Switch Mode
  The Fabric Interconnect behaves like any other Ethernet switch
    Participates in the STP topology
    Follow STP design best practices
  Provides L2 switching functionality
  Uplink capacity is based on port channels
    6140 (2 expansion slots): 8-port port channels
    6120 (1 expansion slot): 6-port port channels

End-Host Mode (see the sketch below)
  The switch looks like a host to the upstream network
  The switch still performs L2 switching
    MAC addresses are active on one uplink at a time
    MAC addresses are pinned to uplinks
    Local MAC learning, but not on the uplinks
    Forms a loop-free topology
  UCS Manager syncs the Fabric Interconnects
  Uplink capacity
    6140: 12-way (12 uplinks)
    6120: 6-way (6 uplinks)
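A minimal sketch of the end-host-mode behavior described above (illustrative only, not the NX-OS implementation): each server MAC is pinned to exactly one uplink, MAC learning happens only on server-facing ports, and frames arriving on an uplink are never switched back out another uplink, which keeps the topology loop-free without STP.

    # Illustrative sketch of end-host-mode uplink pinning and forwarding rules.
    # Assumption: static round-robin pinning of server MACs to uplinks; the real
    # pinning/repinning logic in UCS Manager is more involved.

    class EndHostMode:
        def __init__(self, uplinks):
            self.uplinks = list(uplinks)
            self.pinning = {}                 # server MAC -> uplink
            self._next = 0

        def learn_server_mac(self, mac):
            """MACs are learned (and pinned) only on server-facing ports."""
            uplink = self.uplinks[self._next % len(self.uplinks)]
            self._next += 1
            self.pinning[mac] = uplink
            return uplink

        def forward_from_uplink(self, dst_mac):
            """Frames from an uplink go only to known local servers and are
            never re-forwarded out another uplink (loop-free without STP)."""
            return "to-server" if dst_mac in self.pinning else "drop"

    fi = EndHostMode(uplinks=["Eth1/1", "Eth1/2", "Eth1/3"])
    print(fi.learn_server_mac("00:25:b5:00:00:01"))    # pinned to Eth1/1
    print(fi.forward_from_uplink("00:25:b5:00:00:01")) # to-server
    print(fi.forward_from_uplink("00:aa:bb:cc:dd:ee")) # drop (unknown unicast)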


UCS on Ethernet Environments

Fabric Interconnect and Network Topologies

Classic STP topologies (switch mode)
  The Fabric Interconnect runs STP
    Participates in the STP topology
    Follow STP design best practices
  Upstream devices: any L3/L2 boundary switch (Catalyst 6500, Nexus 7000)
  No special upstream feature needed

Simplified topologies (end-host mode)
  Upstream switches provide the non-blocking path mechanism: VSS or vPC
  Interaction with VSS or vPC
  Topology is loop-free; less reliance on STP
  Increased overall bandwidth capacity: 8-way multi-pathing

A combination of vPC/VSS and end-host mode provides an optimized network environment by reducing MAC scaling constraints, increasing the available bandwidth and lowering the processing load on the aggregation switches.

UCS on Ethernet Environments: Connectivity Point

[Figure: two PODs compared - Fabric Interconnects (UCS-6140) attaching to a ToR access tier (112 wire-speed 10GE ports per access pair) versus directly to the aggregation layer (256 wire-speed 10GE ports per aggregation pair), with per-tier oversubscription ratios annotated (3.3:1, 4.5:1, 5:1, 6.5:1, 14.5:1)]

Fabric Interconnect to ToR access switches (end-host mode, no STP)
  Leverages total uplink capacity: 60G or 120G
  L2 topology remains 2-tier
  Scalability
    6140 pair: 10 chassis = 80 compute nodes at 3.3:1 subscription
    Nexus 5000 access pair: 6 UCS = 480 compute nodes at 15:1 subscription
    Nexus 7018 pair: 13 Nexus 5000 pairs = 6,240 compute nodes
  Enclosure: 80GE; compute nodes: 10GE attached

Fabric Interconnect to aggregation switches (switch mode)
  Leverages total uplink capacity: 60G or 80G
  L2 topology remains 2-tier
  Scalability
    6140 pair: 10 chassis = 80 compute nodes at 5:1 subscription
    Nexus 7018 pair: 14 6140 pairs = 1,120 compute nodes
  Enclosure: 80GE; compute nodes: 10GE attached

UCS on Ethernet Environments: Connectivity Point

[Figure: the same two POD connectivity options - Fabric Interconnects to a ToR access tier versus directly to the aggregation layer]

What other factors need to be considered
  East-west traffic profiles
  Size of the L2 domain: VLANs, MAC addresses, number of ports, servers, switches, etc.
  Tiers in the L2 topology: STP domain, diameter
  Oversubscription
  Latency
  Server virtualization load: MAC addresses, VLANs, etc.
  Switch mode vs. end-host mode

Unified Compute System POD Example

Ethernet Only - Switch Mode

[Figure: two PODs under a vPC aggregation pair - one with UCS-6120-based access, one with UCS-6140-based access; each Fabric Interconnect has 16 x 10GE downlinks per rack of chassis and 6 or 8 uplinks to the aggregation]

Access layer: UCS-6120 x 2
  52 wire-speed (1:1) 10GE ports per pair: 40 downlinks = 5 chassis, 6 uplinks per Fabric Interconnect
  5 chassis = 40 half-width servers
  Chassis subscription: 16:8 = 2:1; blade bandwidth: 10G
  Access-to-aggregation oversubscription: 20:6 ~ 3.3:1 (worked example below)
  Aggregation layer: Nexus 7010 + vPC
    128 wire-speed 10GE ports = 8 UCS = 320 servers

Access layer: UCS-6140 x 2
  104 wire-speed (1:1) 10GE ports per pair: 80 downlinks = 10 chassis, 8 uplinks per Fabric Interconnect
  10 chassis = 80 half-width servers
  Chassis subscription: 16:8 = 2:1; blade bandwidth: 10G
  Access-to-aggregation oversubscription: 40:8 ~ 5:1
  Aggregation layer: Nexus 7018 + vPC
    256 wire-speed 10GE ports = 14 UCS = 1,120 servers
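A worked version of the ratios quoted above (illustrative sketch; counts are per Fabric Interconnect and assume 4 uplinks per FEX, so each chassis consumes 4 downlinks on each Fabric Interconnect):

    # Sketch of the POD example arithmetic above, per Fabric Interconnect.
    # Assumptions: 4 uplinks per FEX, 8 half-width blades and 16 mezz ports
    # per chassis.

    def pod_numbers(fi_downlinks, fi_uplinks, uplinks_per_fex=4):
        chassis = fi_downlinks // uplinks_per_fex
        return {
            "chassis": chassis,
            "half_width_blades": chassis * 8,
            "chassis_subscription": f"{16 // (2 * uplinks_per_fex)}:1",
            "access_to_agg_oversub": f"{fi_downlinks / fi_uplinks:.1f}:1",
        }

    print("UCS-6120:", pod_numbers(fi_downlinks=20, fi_uplinks=6))
    # 5 chassis, 40 blades, 2:1 chassis subscription, 3.3:1 to aggregation
    print("UCS-6140:", pod_numbers(fi_downlinks=40, fi_uplinks=8))
    # 10 chassis, 80 blades, 2:1 chassis subscription, 5.0:1 to aggregation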

UCS POD Density: Ethernet Only

[Figure: two PODs - a Nexus 7010 aggregation pair with UCS-6120/6140 access (320 - 1,280 blades) and a Nexus 7018 aggregation pair with UCS-6120/6140 access (1,120 - 4,480 blades)]

Chassis and blades per Fabric Interconnect pair, by FEX uplink count:

  FEX uplinks    | 1 x 10GE (2.5G per slot) | 2 x 10GE (5G per slot) | 4 x 10GE (10G per slot)
  UCS-6120 pair  | 20 chassis, 160 blades   | 10 chassis, 80 blades  | 5 chassis, 40 blades
  UCS-6140 pair  | 40 chassis, 320 blades   | 20 chassis, 160 blades | 10 chassis, 80 blades

Aggregation capacity:
  Nexus 7010: 128 wire-speed (1:1) 10GE ports per pair
    UCS-6120 (6 uplinks, 20 downlinks each): ~8 6120 pairs = 1,280, 640 or 320 blades
    UCS-6140 (8 uplinks, 40 downlinks each): ~6 6140 pairs = 1,920, 960 or 480 blades
  Nexus 7018: 256 wire-speed (1:1) 10GE ports per pair
    UCS-6120 (6 uplinks, 20 downlinks each): ~18 6120 pairs = 2,880, 1,440 or 720 blades
    UCS-6140 (8 uplinks, 40 downlinks each): ~14 6140 pairs = 4,480, 2,240 or 1,120 blades

SAN Environments: NPV Mode

[Figure: core-edge and edge-core-edge SAN fabric models, with the Fabric Interconnect running NPV attached to an NPIV-capable core or edge switch in the POD]

Fabric Interconnect in NPV mode:
  Treats the UCS as a host on the SAN
  Helps addressing/domain scalability (the Fabric Interconnect does not consume a domain ID)
  SAN fabric agnostic
  Two models to consider: core-edge and edge-core-edge

Connecting point:
  Based on scalability and port density
  NPIV could add significant load on the upstream switch: FLOGI, zoning and other services
  Core layer: tends to be less scalable due to limited port count
  Edge layer: more scalable because the load is spread across multiple switches

UCS on a Storage Network

FC/FCoE Only - NPV Mode

[Figure: POD with an MDS 9509 SAN core (fabric A) and UCS-6120/6140 Fabric Interconnects at the access/edge, each with 4G FC uplinks into the core]

Edge: UCS-6120 x 2 (core: MDS 9509)
  40 wire-speed 10GE ports and 8 x 4G FC uplinks per pair = 10 chassis
  10 chassis = 80 half-width servers
  Chassis bandwidth: 4 x 10GE per chassis = 5G per blade
  6120 subscription: 8 x 4G FC (32G) against roughly 200G of FCoE = 6.25:1 (worked example below)
  MDS 9509 core: 12 x 4G FC per module x 7 modules = 84 4G FC ports = 21 UCS = 1,680 servers

Edge: UCS-6140 x 2 (core: MDS 9509)
  80 wire-speed 10GE ports and 16 x 4G FC uplinks per pair = 20 chassis
  20 chassis = 160 half-width servers
  Chassis bandwidth: 4 x 10GE per chassis = 5G per blade
  6140 subscription: 16 x 4G FC (64G) against roughly 400G of FCoE = 6.25:1
  MDS 9509 core: 84 4G FC ports = 10 UCS = 1,600 servers
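The 6.25:1 figure can be reproduced with a small sketch (illustrative; the per-chassis FCoE share is an assumption chosen to match the ratio quoted in the deck):

    # Sketch of the FC oversubscription arithmetic above (per edge pair).
    # Assumption: 20G of the 40G chassis bandwidth is treated as FCoE headroom;
    # this reproduces the 6.25:1 figure quoted on the slide.

    def fc_oversub(chassis, fcoe_per_chassis_g, fc_uplinks, fc_uplink_g=4):
        offered = chassis * fcoe_per_chassis_g       # e.g. 10 x 20G = 200G
        uplink  = fc_uplinks * fc_uplink_g           # e.g. 8 x 4G  = 32G
        return offered / uplink

    print(f"UCS-6120 edge pair: {fc_oversub(10, 20, 8):.2f}:1")   # 6.25:1
    print(f"UCS-6140 edge pair: {fc_oversub(20, 20, 16):.2f}:1")  # 6.25:1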


UCS Density on LAN/SAN Environments

[Figure: Unified Compute Systems attached to the LAN aggregation and to the SAN with storage arrays; per Fabric Interconnect the figure annotates 40 x 10GE : 8 x 10GE = 5:1 on the LAN side and 40 x 10GE : 8 x 4G FC = 12.5:1 on the SAN side]

LAN environment: Nexus 7018 pair at the aggregation layer
  28 UCS based on 6120s; using 6120s in the access: 5 chassis (4 ports per FEX) = 40 blades per UCS
  14 UCS based on 6140s; using 6140s in the access: 10 chassis (4 ports per FEX) = 80 blades per UCS

SAN environment: MDS 9509 at the SAN core (see the sketch below)
  21 UCS based on 6120s; using 6120s in the access: 5 chassis (4 ports per FEX) = 40 blades per UCS
  10 UCS based on 6140s; using 6140s in the access: 10 chassis (4 ports per FEX) = 80 blades per UCS
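A small sketch of how the UCS-per-core counts scale (illustrative only; assumes the port counts quoted above):

    # Sketch of the "UCS per core pair" arithmetic (illustrative only).
    # Assumptions: per SAN fabric, a 6120-based UCS presents 4 x 4G FC uplinks
    # and a 6140-based UCS presents 8; the MDS 9509 offers 84 x 4G FC ports.

    def ucs_per_core(core_ports, uplinks_per_ucs):
        return core_ports // uplinks_per_ucs

    print("6120-based UCS per MDS 9509:", ucs_per_core(84, 4))   # 21
    print("6140-based UCS per MDS 9509:", ucs_per_core(84, 8))   # 10
    # The LAN-side counts (28 and 14 per Nexus 7018 pair) follow the same idea,
    # with some of the 256 10GE ports assumed reserved for core uplinks/services.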



Conclusion

End-host mode vs. switch mode
  EHM to an access switch is possible
  Switch mode to the L2/L3 boundary switch is appropriate
  If aggregating to a ToR switch is desired, use EHM

Fan-out, oversubscription and bandwidth
  Understand all three:
    Network folks: oversubscription
    Storage folks: fan-out
    Server folks: bandwidth

Get trained early on
  Lots of new and useful technology
  The architecture is rapidly evolving

Complete Your Online Session Evaluation

Give us your feedback and you could win fabulous prizes. Winners are announced daily.
Receive 20 Passport points for each session evaluation you complete.
Complete your session evaluation online now (open a browser through our wireless network to access our portal) or visit one of the Internet stations throughout the Convention Center.

Don't forget to activate your Cisco Live Virtual account for access to all session material, communities, and on-demand and live activities throughout the year. Activate your account at the Cisco booth in the World of Solutions or visit www.ciscolive.com.