
UCS Technology Labs - UCS B-Series Components and Management

Chassis and Blade Discovery


Last updated: April 11, 2013

Task
Discover the chassis with 2 links from each FI.
Do not aggregate links together from the FI to the chassis.
Explore all of the components of the chassis discovered.

Configuration
In the left navigation pane, on the Equipment tab, click Equipment, click the Policies tab in
the right pane, and then click the Global Policies tab in the tabset below. In the Chassis
Discovery Policy section, click the Action drop-down list and choose 2 Link. Select None for
the Link Grouping Preference.
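
The same policy can also be set from the UCSM CLI; a minimal sketch (the UCS-A prompt is a placeholder for your system name):

UCS-A# scope org /
UCS-A /org # scope chassis-disc-policy
UCS-A /org/chassis-disc-policy # set action 2-link
UCS-A /org/chassis-disc-policy # set link-aggregation-pref none
UCS-A /org/chassis-disc-policy # commit-buffer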


In the left navigation pane, navigate to Fabric Interconnects >> Fabric Interconnect A
(primary) and note the picture of the Fabric Interconnect with ports that are populated with
SFPs, displayed in light yellow to indicate Link Down.

Navigate to Fabric Interconnect A >> Fixed Module >> Unconfigured Ethernet Ports.

Click Port 1, and in the right pane click Configure as Server Port; when prompted if you are
sure, click Yes.

Do the same for Port 2.

Note that these ports have been automatically moved under the Server Ports section.

Note that we already begin seeing Chassis 1 appear along with some IO Modules (FEX).

Navigate to Fabric Interconnect B >> Fixed Module >> Unconfigured Ethernet Ports and
do the same thing for Port 1.

And for Port 2.

They have moved to the Server Ports section now as well.
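
The same server ports can also be created from the UCSM CLI; a minimal sketch for Fabric A (repeat under fabric b for the other side):

UCS-A# scope eth-server
UCS-A /eth-server # scope fabric a
UCS-A /eth-server/fabric # create interface 1 1
UCS-A /eth-server/fabric # create interface 1 2
UCS-A /eth-server/fabric # commit-buffer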

Verification

We begin seeing all of the components that are installed in the one chassis present in our
system. (Note that, of course, more chassis - up to 20 - are supported by a single pair of FIs, but
we happen to have only one installed.)
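
Discovery can also be confirmed from the UCSM CLI; for example (a sketch - exact output varies by UCSM version):

UCS-A# show chassis inventory
UCS-A# show server inventory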

Explore Server 1 to discover things such as the amount and speed of memory, the number of CPUs
and cores, and the number of DCE Interfaces. Here we have a Cisco B22 M3 blade with 48GB
of RAM, 2 CPUs each with 6 cores, and 4 DCE interfaces - this is because of the VIC 1240
card, which has two 10Gbps 10GBASE-KR backplane Ethernet traces that route to each IOM/FEX
(because there are two IOMs in the chassis, we have a total of 4x 10Gbps traces for an available
aggregate forwarding bandwidth of 40Gbps).

Note the VIF (Virtual Interface) Paths, and that they have automatically formed port channels
(PCs) - one to each IOM - because this is the only way to use all 20Gbps of bandwidth to
each IOM.
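
These automatically formed port channels can also be seen from NX-OS on the FI (we connect to it later in this section); a quick sketch, assuming Fabric A:

UCS-A# connect nxos a
UCS-A(nxos)# show port-channel summary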

If we click DCE Interfaces, we can see where each trace goes and the PC that it belongs to in
a summary view. Notice that the traces are numbered oddly - 1, 3, 5, 7. This is because
there is the possibility that we could add a Port Expander (a daughter card that can be added to
the mezzanine adapter) that would bring the total number of traces up to 8 (4 to each IOM), and
the remaining trace numbers - 2, 4, 6, 8 - are reserved for the expander ports. Although we have DCE
interfaces, we have no HBAs or NICs. This is because everything in a "Palo" mezzanine card
(a family that includes the 1st-gen M81KR and the 2nd-gen VIC 1240 and VIC 1280) is completely virtual and
must be instantiated using Adapter-FEX. We will do this later by creating vNICs and vHBAs and
assigning them to these blades by means of an abstraction called a Service Profile.

The same information in a detailed view for trace 1.

The same information in a detailed view for trace 3.

The same information in a detailed view for trace 5.

The same information in a detailed view for trace 7.

Note the specifics of Server 2, which is exactly the same as Server 1 - B22 M3 with 48GB RAM,
2 CPUs with 6 cores each and a VIC 1240 mezzanine adapter.

Note the VIF Paths.

Note the specifics of the DCE Interfaces (pictured here for later reference when we explore
specific vNIC paths).

Note the specifics of Server 3, which is significantly different from Servers 1 and 2. Here we have
an older second-generation Cisco B200 M2 with 1 CPU with 4 cores and 16GB RAM. There are also
only 2 DCE interfaces, and there are pre-defined HBAs and NICs. That is because this blade
has an Emulex mezzanine adapter, which is a CNA (Converged Network Adapter) with a 2-port
Ethernet NIC and a 2-port FCoE HBA.

Here we see the specifics of the DCE interfaces, with one 10Gbps path leading up to each IOM
in the chassis and then up to each FI independently.

Note the specifics of the VIF Paths swinging up to each FI.

Note the specifics of both HBA interfaces.

Note the specifics of HBA Port 1, with its burned-in pWWN and nWWN addresses.

Note the specifics of both NIC interfaces.

Note the specifics of NIC Port 1, with its burned-in MAC address.

On the FIs, after we connect nxos, we can see the results of configuring our ports as server ports.
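
A sketch of typical commands used to pull this view (output omitted here; the running config is shown per fabric below):

UCS-A# connect nxos a
UCS-A(nxos)# show fex
UCS-A(nxos)# show interface fex-fabric
UCS-A(nxos)# show running-config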

FI-A:

Here is our FEX definition.

fex 1
pinning max-links 1
description "FEX0001"

Here are our two port channels, defined below, going to our two blade servers that have
vntag-capable VIC 1240 adapters.

interface port-channel1280
switchport mode vntag
switchport vntag
no pinning server sticky
speed 10000
interface port-channel1281
switchport mode vntag
switchport vntag
no pinning server sticky
speed 10000

These next two interfaces are on our FI and are the two FEX-facing interfaces.

interface Ethernet1/1
description S: Server
no pinning server sticky
switchport mode fex-fabric
fex associate 1 chassis-serial FOX1630GZB9 module-serial FCH16297JG2 module-slot right
no shutdown
interface Ethernet1/2
description S: Server
no pinning server sticky
switchport mode fex-fabric
fex associate 1 chassis-serial FOX1630GZB9 module-serial FCH16297JG2 module-slot right
no shutdown

The next set of interfaces begins our remote linecard or FEX (aka IOM), and we can see that
they match up to the DCE interfaces discovered earlier on the blades and their mezzanine
adapters. We can see that the first four populated interfaces (1, 3, 5, 7) are all VNTag interfaces
- this is because they are running Adapter-FEX technology. The next interface (9) is a normal
trunk interface to a host - this being the blade with the non-vntag-capable Emulex M72KR-E adapter.

interface Ethernet1/1/1
switchport vntag max-vifs 118
no pinning server sticky
switchport mode vntag
fabric-interface Eth1/1
channel-group 1280
no shutdown

interface Ethernet1/1/2
no pinning server sticky
interface Ethernet1/1/3
switchport vntag max-vifs 118
no pinning server sticky
switchport mode vntag
fabric-interface Eth1/1
channel-group 1280
no shutdown
interface Ethernet1/1/4
no pinning server sticky
interface Ethernet1/1/5
switchport vntag max-vifs 118
no pinning server sticky
switchport mode vntag
fabric-interface Eth1/2
channel-group 1281
no shutdown
interface Ethernet1/1/6
no pinning server sticky
interface Ethernet1/1/7
switchport vntag max-vifs 118
no pinning server sticky
switchport mode vntag
fabric-interface Eth1/2
channel-group 1281
no shutdown
interface Ethernet1/1/8
no pinning server sticky
interface Ethernet1/1/9
pinning server pinning-failure link-down
switchport mode trunk
fabric-interface Eth1/1
no shutdown
interface Ethernet1/1/10
no pinning server sticky
interface Ethernet1/1/11
no pinning server sticky
interface Ethernet1/1/12
no pinning server sticky

interface Ethernet1/1/13
no pinning server sticky
interface Ethernet1/1/14
no pinning server sticky
interface Ethernet1/1/15
no pinning server sticky
interface Ethernet1/1/16
no pinning server sticky
interface Ethernet1/1/17
no pinning server sticky
interface Ethernet1/1/18
no pinning server sticky
interface Ethernet1/1/19
no pinning server sticky
interface Ethernet1/1/20
no pinning server sticky
interface Ethernet1/1/21
no pinning server sticky
interface Ethernet1/1/22
no pinning server sticky
interface Ethernet1/1/23
no pinning server sticky
interface Ethernet1/1/24
no pinning server sticky
interface Ethernet1/1/25
no pinning server sticky
interface Ethernet1/1/26
no pinning server sticky
interface Ethernet1/1/27
no pinning server sticky
interface Ethernet1/1/28
no pinning server sticky
interface Ethernet1/1/29
no pinning server sticky
interface Ethernet1/1/30
no pinning server sticky
interface Ethernet1/1/31
no pinning server sticky
interface Ethernet1/1/32
no pinning server sticky

Even though the IOM is advertised as having 32 backplane ports, this extra (33rd) Ethernet interface
is used for control of the IOM and chassis, and it appears on both FEXes. Technically, this interface
controls the CMC (Chassis Management Controller) via the CMS (Chassis Management
Switch).

interface Ethernet1/1/33
no pinning server sticky
switchport mode trunk
switchport trunk native vlan 4044
switchport trunk allowed vlan 4044
no shutdown

FI-B:
fex 1
pinning max-links 1
description "FEX0001"
interface port-channel1282
switchport mode vntag
switchport vntag
no pinning server sticky
speed 10000
interface port-channel1283
switchport mode vntag
switchport vntag
no pinning server sticky
speed 10000
interface Ethernet1/1
description S: Server
no pinning server sticky
switchport mode fex-fabric
fex associate 1 chassis-serial FOX1630GZB9 module-serial FCH16297JG2 module-slot right
no shutdown
interface Ethernet1/2
description S: Server
no pinning server sticky
switchport mode fex-fabric
fex associate 1 chassis-serial FOX1630GZB9 module-serial FCH16297JG2 module-slot right
no shutdown

interface Ethernet1/1/1
switchport vntag max-vifs 118
no pinning server sticky
switchport mode vntag
fabric-interface Eth1/1
channel-group 1283
no shutdown
interface Ethernet1/1/2
no pinning server sticky
interface Ethernet1/1/3
switchport vntag max-vifs 118
no pinning server sticky
switchport mode vntag
fabric-interface Eth1/1
channel-group 1283
no shutdown
interface Ethernet1/1/4
no pinning server sticky
interface Ethernet1/1/5
switchport vntag max-vifs 118
no pinning server sticky
switchport mode vntag
fabric-interface Eth1/2
channel-group 1282
no shutdown
interface Ethernet1/1/6
no pinning server sticky
interface Ethernet1/1/7
switchport vntag max-vifs 118
no pinning server sticky
switchport mode vntag
fabric-interface Eth1/2
channel-group 1282
no shutdown
interface Ethernet1/1/8
no pinning server sticky
interface Ethernet1/1/9
no pinning server sticky
pinning server pinning-failure link-down
switchport mode trunk
fabric-interface Eth1/1
no shutdown
interface Ethernet1/1/10
no pinning server sticky
interface Ethernet1/1/11
no pinning server sticky
interface Ethernet1/1/12
no pinning server sticky
interface Ethernet1/1/13
no pinning server sticky
interface Ethernet1/1/14
no pinning server sticky
interface Ethernet1/1/15
no pinning server sticky
interface Ethernet1/1/16
no pinning server sticky
interface Ethernet1/1/17
no pinning server sticky
interface Ethernet1/1/18
no pinning server sticky
interface Ethernet1/1/19
no pinning server sticky
interface Ethernet1/1/20
no pinning server sticky
interface Ethernet1/1/21
no pinning server sticky

interface Ethernet1/1/22
no pinning server sticky
interface Ethernet1/1/23
no pinning server sticky
interface Ethernet1/1/24
no pinning server sticky
interface Ethernet1/1/25
no pinning server sticky
interface Ethernet1/1/26
no pinning server sticky
interface Ethernet1/1/27
no pinning server sticky
interface Ethernet1/1/28
no pinning server sticky
interface Ethernet1/1/29
no pinning server sticky
interface Ethernet1/1/30
no pinning server sticky
interface Ethernet1/1/31
no pinning server sticky
interface Ethernet1/1/32
no pinning server sticky
interface Ethernet1/1/33
no pinning server sticky
switchport mode trunk
switchport trunk native vlan 4044
switchport trunk allowed vlan 4044
no shutdown
