
Welcome to Connectrix Foundations.

Click the Notes tab to view text that corresponds to the audio recording.
Click the Supporting Materials tab to download a PDF version of this eLearning.
Copyright 1996, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012 EMC Corporation. All Rights Reserved.
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY
KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
EMC2, EMC, Data Domain, RSA, EMC Centera, EMC ControlCenter, EMC LifeLine, EMC OnCourse, EMC Proven, EMC Snap, EMC SourceOne,
EMC Storage Administrator, Acartus, Access Logix, AdvantEdge, AlphaStor, ApplicationXtender, ArchiveXtender, Atmos, Authentica,
Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, Avamar, Captiva, Catalog Solution, C-Clip, Celerra,
Celerra Replicator, Centera, CenterStage, CentraStar, ClaimPack, ClaimsEditor, CLARiiON, ClientPak, Codebook Correlation Technology,
Common Information Model, Configuration Intelligence, Configuresoft, Connectrix, CopyCross, CopyPoint, Dantz, DatabaseXtender, Direct
Matrix Architecture, DiskXtender, DiskXtender 2000, Document Sciences, Documentum, elnput, E-Lab, EmailXaminer, EmailXtender,
Enginuity, eRoom, Event Explorer, FarPoint, FirstPass, FLARE, FormWare, Geosynchrony, Global File Virtualization, Graphic Visualization,
Greenplum, HighRoad, HomeBase, InfoMover, Infoscape, Infra, InputAccel, InputAccel Express, Invista, Ionix, ISIS, Max Retriever,
MediaStor, MirrorView, Navisphere, NetWorker, nLayers, OnAlert, OpenScale, PixTools, Powerlink, PowerPath, PowerSnap, QuickScan,
Rainfinity, RepliCare, RepliStor, ResourcePak, Retrospect, RSA, the RSA logo, SafeLine, SAN Advisor, SAN Copy, SAN Manager, Smarts,
SnapImage, SnapSure, SnapView, SRDF, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix, Symmetrix DMX, Symmetrix
VMAX, TimeFinder, UltraFlex, UltraPoint, UltraScale, Unisphere, VMAX, Vblock, Viewlets, Virtual Matrix, Virtual Matrix Architecture, Virtual
Provisioning, VisualSAN, VisualSRM, Voyence, VPLEX, VSAM-Assist, WebXtender, xPression, xPresso, YottaYotta, the EMC logo, and where
information lives, are registered trademarks or trademarks of EMC Corporation in the United States and other countries.

All other trademarks used herein are the property of their respective owners.

Copyright 2012 EMC Corporation. All rights reserved. Published in the USA.

Revision Date: <Published DATE /MM/YYYY>


Revision Number: MR-1WP-CONFD_8.0

The objectives for this course are shown here. Please take a moment to read them.

The objectives for this module are shown here. Please take a moment to read them.

A SAN provides two primary capabilities: block-level storage connectivity from a host to a
storage frame or array, and block-level storage connectivity between storage frames or
arrays.
For a storage array such as Symmetrix or CLARiiON, the LUN is the fundamental unit of block
storage that can be provisioned. The host's disk driver treats the array LUN identically to a
direct-attached disk spindle, presenting it to the operating system as a raw device or
character device. This is the fundamental difference between SAN and NAS. A NAS appliance
presents storage in the form of a filesystem that the host can mount and use via network
protocols, such as NFS or CIFS.
A fabric is a logically defined space in which Fibre Channel nodes can communicate with each
other. A fabric can be created by using just a single switch or a group of switches connected
together.
The primary function of the fabric is to receive Fibre Channel data frames from a source port
and route them to the destination port whose address identifier is specified in the Fibre
Channel frames. Each port is physically attached through a link to the fabric.

Traditional DAS solutions, such as parallel SCSI, were not designed to scale to the
requirements of modern enterprise-class storage. Scalability issues with DAS include distance
limitations dictated by the underlying electrical signal technologies.
With static configuration, the bus needs to be quiesced for every device reconfiguration.
Every connected host loses access to all storage on the bus during the process. In parallel
SCSI, devices on the bus must be set to a unique ID in the range of 0 to 15. The addition of
new devices and/or initiators with parallel SCSI requires careful planning.
DAS requires an actual physical connection via cable for every logical connection from a host
to a storage device or port. The only way to deploy new storage or redeploy storage across
hosts is to modify the physical cabling. In theory, multiple host initiators can be
accommodated on a single bus. In practice, cabling issues rapidly become a challenge as the
configuration grows.
In contrast, switched networked architectures can service multiple logical connections to
each device via a single physical connection from that device to the infrastructure. This slide
shows that the storage array can provide storage to both hosts.

The basic interconnectivity options that are supported with Fibre Channel architecture are
Point-to-Point, Fibre Channel Arbitrated Loop, and Fabric connect.
FC-AL is a loop topology that does not require the expense of a Fibre Channel switch. In fact,
even the hub is optional. It is possible to run FC-AL with direct cable connections between
participating devices.
For most typical storage area network installations, Fabric connection via switches is the
appropriate Fibre Channel topology choice. Unlike a loop configuration, a switched fabric
provides scalability and dedicated bandwidth between any given pair of inter-connected
devices. FC-SW uses a 24-bit address to route traffic and can accommodate as many as 15
million devices in a single fabric. Adding or removing devices in a switched fabric does not
affect ongoing traffic between other unrelated devices.

SANs combine the benefits of channel technologies with the benefits of a networked
architecture. This results in a more robust, flexible, and sophisticated approach to connect
hosts to storage resources. SANs overcome the limitations of DAS, while using the same
logical interface, SCSI, to access storage.
Host to storage communication in a SAN is block I/O, just as with DAS implementations. With
parallel SCSI, the host SCSI adapter handles block I/O requests. In a Fibre Channel SAN, block
requests are handled by a Fibre Channel Host Bus Adapter. A Fibre Channel HBA is a standard
peripheral card on the host computer.

Fibre Channel is a serial data transfer interface that operates over copper wire and/or optical
fiber at data rates up to 8Gbps and up to 10Gbps when used as ISL on supported switches.
Networking and I/O protocols, such as SCSI commands, are mapped to Fibre Channel
constructs and then encapsulated and transported within Fibre Channel frames. This process
allows high-speed transfer of multiple protocols over the same physical interface.

The Fibre Channel standards define a layered protocol. Please refer to the Student Resource
Guide for further explanation.

Frames are the basic building blocks of a Fibre Channel connection. The frames contain
header information, data, CRC, and frame delineation markers. All information in Fibre
Channel is passed in frames. The maximum amount of data carried in a frame is 2112 bytes
with the total frame size at 2148 bytes.
The header contains the Source and Destination addresses, which allow the frame to be
routed to the correct port. The Type field interpretation is dependent on whether the frame
is a link control or a Fibre Channel data frame.
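As a quick arithmetic check on those figures, here is a minimal sketch, assuming the standard field sizes (4-byte SOF and EOF delimiters, 24-byte header, 4-byte CRC):

```python
# Fibre Channel frame size accounting -- a sketch assuming the standard
# field sizes (4-byte SOF/EOF delimiters, 24-byte header, 4-byte CRC).
SOF = 4             # start-of-frame delimiter
HEADER = 24         # addresses, type, and control fields
PAYLOAD_MAX = 2112  # maximum data field
CRC = 4             # cyclic redundancy check
EOF = 4             # end-of-frame delimiter

print(SOF + HEADER + PAYLOAD_MAX + CRC + EOF)  # 2148, the total frame size
```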

Fibre Channel addresses are used to designate the source and destination of frames in the
Fibre Channel network. The Fibre Channel address field is 24 bits in length. Unlike Ethernet,
these addresses are not burned in, but are assigned when the node enters the loop or is
connected to the switch.
The Fibre Channel Address identifiers are three bytes in length. The Frame Header contains
two three-byte fields for address identifiers. These are listed in the Student Resource Guide.
Each N_Port has a fabric-unique identifier, the N_Port Identifier by which it is known. The
source and destination N_Port Identifiers and alias-address identifiers are used to route
frames within the fabric.
The Physical Address is switch-specific and dynamically generated during the Fabric Login.
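A short sketch may make the address structure concrete; it assumes the conventional Domain/Area/Port split of the 24-bit identifier used in switched fabrics, and the example address is hypothetical:

```python
def decode_fcid(fcid: int) -> tuple[int, int, int]:
    """Split a 24-bit FC address into Domain, Area, and Port bytes."""
    return (fcid >> 16) & 0xFF, (fcid >> 8) & 0xFF, fcid & 0xFF

# Hypothetical address assigned at fabric login:
domain, area, port = decode_fcid(0x6511E2)
print(hex(domain), hex(area), hex(port))  # 0x65 0x11 0xe2
print(2 ** 24)  # 16,777,216 raw addresses in the 24-bit space
```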

A World Wide Name or WWN is a 64-bit address used in Fibre Channel networks to uniquely
identify each element in the network. The name is assigned to a host bus adapter or switch
port by the vendor at the time of manufacture. It is similar to the MAC address in an
Ethernet network.
There are two designations of WWN: World Wide Port Name and World Wide Node Name.
Both are globally unique 64-bit identifiers. The difference lies in where each value is
physically assigned. For example, a server may have dual HBAs installed, thus having
multiple ports or connections to a SAN. A World Wide Port Name is assigned to each physical
port.
The World Wide Node Name represents the entire server, which can be referred to as the
node or node process, and is derived from one of the World Wide Port Names.
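The sketch below formats a 64-bit WWN in the familiar colon-separated notation; the example value is hypothetical, and the comment about the OUI assumes the common IEEE registered name format:

```python
def format_wwn(wwn: int) -> str:
    """Render a 64-bit World Wide Name as colon-separated hex bytes."""
    return ":".join(f"{b:02x}" for b in wwn.to_bytes(8, "big"))

# Hypothetical WWPN. In the common IEEE registered format, the leading
# nibble identifies the name type and bytes 2-4 hold the vendor's OUI.
print(format_wwn(0x10000000C9A1B2C3))  # 10:00:00:00:c9:a1:b2:c3
```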

A Fabric is a virtual space in which all storage nodes communicate with each other over
distances. It can be created with a single switch or a group of switches connected together.
The primary function of the fabric is to receive data frames from a source N_Port and route
them to the destination N_Port, whose address identifier is specified in the frames.
Each N_Port is physically attached through a link to the fabric. Many switches such as the
Cisco MDS support multiple fabrics within the same switch and also within the same physical
switch topology. When a device logs into a fabric, its information is maintained in a database.
Information required for it to access other devices or changes to the topology is provided by
another database.

Fibre Channel services are entities within the switch construct. They service the attached
nodes.
The Login Service is used by all nodes when they perform a Fabric Login. The Name Service is
used to store information about all devices attached to the fabric. The Fabric Controller
provides state change notification to all registered nodes in the fabric. For details on these
services, please refer to the Student Resource Guide.
The role of the Management Server is to provide a single access point for all three services
based on virtual containers called zones. A zone is a collection of nodes defined to reside in
a closed space. Nodes inside a zone are aware of nodes in the zone they belong to but not
outside of it. The three services that this Server provides access to are the Name Server, the
Fabric Configuration Server, and the Fabric Zone Server.

Switches can be connected in different ways to create a fabric. The type of topology used
depends on requirements such as availability, scalability, cost, and performance. Typically,
there is no single answer to the question as to which topology is best suited for an
environment.
Fan-out ratio is a measure of the number of hosts that can access a storage port at any given
time. Storage consolidation enables customers to achieve the full benefits of using enterprise
storage. This topology allows customers to map multiple host HBA ports onto a single
storage port, for example, a Symmetrix FA port.
Fan-in ratio is a measure of how many storage systems can be accessed by a single host at
any given time. This allows a customer to expand connectivity by a single host across
multiple storage units. There are situations when a host requires additional storage capacity.
Additional space is carved from a new or an existing storage unit. This topology then allows a
host to see more storage devices.

A full mesh topology has all switches connected to each other. In a partial mesh topology,
some switches are not interconnected. For example, on this slide, the full mesh graphic
without the diagonal ISLs would be a partial mesh.
Full mesh topology provides maximum availability. However, this comes at the expense of
connectivity ports consumed by ISLs, which can become prohibitively expensive as the
number of switches increases.
Compound Core-Edge topology is a combination of the full mesh and core-edge three-tier
topologies. In this configuration, all host to storage traffic must traverse the connectivity tier.
The connectivity or core tier is used for ISLs only. This type of topology is found in situations
where several smaller SAN islands are consolidated into a single large fabric. It can also be
found where SAN-NAS integration requires everything to be plugged together for ease of
management or for backups.

Simple Core-Edge topology can have two variations: two-tier or three-tier. In a two-tier
topology, all hosts are connected to the edge tier and all storage is connected to the core
tier. With three-tier, all hosts are connected to one edge, all storage is connected to the
other edge, and the core tier is only used for ISLs.
The Edge tier, usually departmental switches, offers an inexpensive approach to add more
hosts into the fabric.
The Core or backbone tier, usually enterprise directors, ensures the highest availability since
all traffic has to either traverse through or terminate at this tier. Usually two directors and/or
switches are used to provide redundancy.

Switches are connected to each other in a fabric using ISLs. This is accomplished by
connecting them to each other through an expansion port on the switch. ISLs are used to
transfer node-to-node data traffic as well as fabric management traffic from one switch to
another. ISLs can critically affect the performance and availability characteristics of the SAN.
In a poorly designed fabric, a single ISL failure can cause the entire fabric to fail. An
overloaded link can cause an I/O bottleneck. It is imperative to have a sufficient number of
ISLs to ensure adequate availability and accessibility.
If possible, avoid using ISLs for host-to-storage connectivity whenever performance
requirements are stringent. If ISLs are unavoidable, the performance implications should be
carefully considered at the design stage.
When adding ISLs in a fabric, there are some basic best practices. Always connect each
switch to at least two other switches in the fabric. This prevents a single link failure from
causing total loss of connectivity to nodes on that switch. Also, for host-to-storage
connectivity across ISLs, use a mix of equal-cost primary paths.

Distance is a consideration while implementing ISLs. This is especially important when a
fabric spans campus distances. For example, two datacenters a few miles apart would use
longwave lasers instead of shortwave.
The three media options that are available while implementing an ISL are Multimode ISL,
Single-mode ISL, and DWDM ISL.
Some variables that affect supportable distance are propagation and dispersion losses,
buffer-to-buffer credit, and optical power.
For best possible long distance results with a conventional Fibre Channel link, use longwave
laser over single-mode 9-micron cable. This is least susceptible to modal dispersion, thereby
enabling distances up to 35 km.
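As a rough, back-of-the-envelope illustration of how distance and buffer-to-buffer credit interact (assumed values: ~5 microseconds/km propagation in fiber, full-size frames, a nominal 2 Gb/s link; this is not an EMC sizing rule):

```python
import math

distance_km = 35     # the long-distance link from the example above
link_gbps = 2.0      # assumed nominal link rate
frame_bytes = 2148   # full-size Fibre Channel frame

frame_time_us = frame_bytes * 8 / (link_gbps * 1000)  # ~8.6 us to serialize
round_trip_us = 2 * distance_km * 5.0                 # ~5 us/km in fiber
# Credits must cover a full round trip so the sender never stalls:
print(math.ceil(round_trip_us / frame_time_us))       # ~41 BB credits
```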

The iSCSI protocol provides a means of transporting SCSI packets over TCP/IP. iSCSI works by
wrapping SCSI commands into TCP and transporting them over an IP network. Since iSCSI is
IP-based traffic, it can be routed or switched on standard Ethernet equipment.

Traditional Ethernet adapters or NICs are designed to transfer packetized file-level data
among PCs, servers, and storage devices. However, they do not traditionally transfer block-
level data, which is handled by a storage host bus adapter. In order to process block-level
data, the data must be placed in a TCP/IP packet before being sent over the IP network.
Through the use of iSCSI drivers on the host or server, a NIC can transmit packets of block-
level data over an IP network. When using a NIC, the server handles the packet creation of
block-level data and performs all of the TCP/IP processing. This is extremely CPU intensive
and lowers overall server performance.
The TCP/IP processing performance bottleneck has been the driving force behind the
development of TCP/IP offload engines or TOEs on adapter cards. A TOE moves the TCP/IP
processing from the host CPU to the TOE card. Thus, a TCP/IP offload storage NIC operates
more like a storage HBA than a standard NIC.
An iSCSI HBA is similar to a Fibre Channel HBA where all operations occur on the card. The
iSCSI HBA is required for an EMC Boot from SAN environment.

Because a message is divided into a number of packets, each packet can, if necessary, be
sent by a different route across the network. Packets can arrive in a different order than the
order in which they were sent. The Internet Protocol simply delivers them. It is up to the
Transmission Control Protocol to put them back in the right order.
An iSCSI packet contains SCSI data and the iSCSI Header, which is created by the iSCSI
Initiator and is wrapped in other protocol layers to facilitate its transport.
The Ethernet Header is used to provide addressing for the physical layer while the IP Header
provides packet routing information used for moving the information across the network.
The TCP Header contains the information needed to guarantee delivery to the target
destination. The iSCSI Header defines how to extract SCSI commands and data.
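The nesting described above can be sketched as successive header prepends. The sizes are the nominal header lengths (14-byte Ethernet, 20-byte IPv4, 20-byte TCP, 48-byte iSCSI basic header segment), and the bytes are placeholders, not a real packet:

```python
def encapsulate(payload: bytes, layers: list[tuple[bytes, int]]) -> bytes:
    """Prepend placeholder headers, outermost layer first in the list."""
    for name, size in reversed(layers):
        payload = name.ljust(size, b"\x00") + payload
    return payload

layers = [(b"ETH", 14), (b"IP", 20), (b"TCP", 20), (b"iSCSI", 48)]
packet = encapsulate(b"SCSI command and data", layers)
print(len(packet))  # 102 bytes of headers plus the SCSI payload
```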

Native iSCSI allows for all communications using Ethernet. Initiators may be directly attached
to iSCSI Targets or may be connected using standard Ethernet routers and switches. Bridging
architectures allow for the Initiators to exist in an Ethernet environment while the storage
remains in a Fibre Channel SAN.

Names enable iSCSI storage resources to be managed, regardless of address. An iSCSI node
name is also the SCSI device name of an iSCSI device. The iSCSI name of a SCSI device is the
principal object used in authentication of targets to initiators and initiators to targets. It is
also used to identify and manage iSCSI storage resources. Since iSCSI names are associated
with iSCSI nodes, the replacement of network adapter cards does not require reconfiguration
of all SCSI and iSCSI resource allocation information.
iSCSI names must be unique within the operational domain of the end user. However,
because the operational domain of an IP network is potentially worldwide, the iSCSI name
formats are also worldwide unique. To assist naming authorities in the construction of
worldwide unique names, iSCSI provides two name formats for different types of naming
authorities: iqn or iSCSI qualified name and eui or extended unique identifier.
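A loose sketch of the two formats follows; the patterns are simplified (the governing RFCs are stricter about characters and length) and the sample names are hypothetical:

```python
import re

IQN = re.compile(r"^iqn\.\d{4}-\d{2}(\.[a-z0-9-]+)+(:.*)?$")
EUI = re.compile(r"^eui\.[0-9A-Fa-f]{16}$")

for name in ("iqn.1992-05.com.example:storage.array1",
             "eui.02004567A425678D"):
    kind = "iqn" if IQN.match(name) else "eui" if EUI.match(name) else "?"
    print(kind, name)
```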

To ensure that data reaches all users who need it, organizations are looking for ways to
transport data throughout the enterprise locally over the SAN as well as over much longer
distances. One of the best ways to achieve this goal is to interconnect geographically
dispersed SANs through reliable, high-speed links. This approach involves transporting Fibre
Channel block data over existing IP infrastructures, currently used throughout the enterprise.
The FCIP protocol standard has rapidly gained acceptance as a manageable, cost-effective
way to blend the best of both worlds: Fibre Channel block data storage and widely-deployed
IP infrastructure. As a result, organizations now have an excellent way to protect, store, and
move their data while leveraging existing technology investments.

The top layer of the FCIP protocol stack has SCSI applications, which include the SCSI driver
program that executes read and write commands. These block, stream, and other command
types are grouped into a layer that also contains SCSI data and status information.
Below the SCSI layer is the FCP layer, which is simply a Fibre Channel frame whose payload is
SCSI. The FCP layer rides on top of the Fibre Channel transport layer. This layer can run
natively within a SAN fabric environment or be encapsulated into IP at the FCIP layer. TCP and
IP are then used to transport the encapsulated information across wired or wireless Ethernet
or another transport that supports TCP/IP traffic.
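The stack can be summarized top to bottom as follows, a schematic restatement of the layering just described:

```python
fcip_stack = [
    "SCSI application (read/write commands, data, status)",
    "FCP (Fibre Channel frame carrying a SCSI payload)",
    "FC transport (native within the SAN fabric)",
    "FCIP (FC frame encapsulated for IP transport)",
    "TCP (reliable, ordered delivery)",
    "IP (routing across wired or wireless networks)",
]
for depth, layer in enumerate(fcip_stack):
    print("  " * depth + layer)
```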

FCIP entities are the switches or other network devices that terminate an FCIP link. The
primary purpose of an FCIP entity is to forward FC frames. Primitive signals, primitive
sequences, and Class 1 FC frames are not transmitted through FCIP because they cannot be
encoded using FC frame encapsulation.
An IP network sees the FCIP entities as peers, so they communicate using TCP/IP. Each FCIP
entity contains one or more TCP endpoints in an IP-based network.
From a Fibre Channel perspective, the pairs of FCIP entities and their FC entities forward FC
frames between FC fabric elements. The end nodes do not know that an IP link exists.
Therefore, the path taken by the FC frames follows the normal routing procedure established
by the IP network. FCIP does not participate in the FC frame routing.

FCIP can transport existing Fibre Channel services across the IP network so that two or more
interconnected SANs can appear as a single large SAN and can be managed by traditional
SAN management applications. In addition, FCIP enables SAN applications to support
additional protocols without modification. These applications can include disk mirroring
between buildings in a campus network or remote replication over the WAN. The types of
applications utilized are based on the distance the data must travel, the network bandwidth,
and the QoS requirements and/or abilities of the network connection.
While some implementations of FCIP are point-to-point tunnels, the protocol does not
require the gateways to support only point-to-point tunnelling. The FCIP standard supports
all Fibre Channel services including FSPF routing algorithms so that multiple logical links
created from a single gateway can route Fibre Channel packets over the IP infrastructure. Not
only is FCIP routable, but IP networks do not need to know anything about the packets being
routed.
The Fibre Channel services handle all routing between logical links, while the TCP protocol
handles the delivery of packets to the specific gateway device.

A Fibre Channel switch, acting as an FCIP Gateway, encapsulates FC frames into IP packets.
Once encapsulated into IP packets, they can be sent across the IP network through the FCIP
link. The IP network can be made up of Gigabit Ethernet, Fast Ethernet, or IP switches and
routers. At the other end of the IP network, the original Fibre Channel frames are recovered
and sent through the SAN by the receiving Fibre Channel switch.
The FCIP connected FC SANs essentially form a new unified fabric. The IP network is
transparent to the Fibre Channel fabric. Only edge devices need to be aware of FCIP
encapsulation. This solution can take advantage of existing Fibre Channel networks and tie
them together using existing IP networks.

Fibre Channel over Ethernet or FCoE is a new technology protocol in the process of being
defined by the T11 Standards Committee. It expands Fibre Channel into the Ethernet
environment. As a physical interface, it uses CNAs. FCoE allows Fibre Channel frames to be
encapsulated within Ethernet frames, providing a transport protocol more efficient than
TCP/IP.

Today, each application class has its own interface: Ethernet for networking, Fibre Channel
for storage, and Infiniband for clustering. The result is three different networks, each with an
adapter for each system or server, three cables and switches, three skill sets and tools, and
three different management facilities.
FCoE uses Converged Enhanced Ethernet. The result of a converged network is fewer
adapters, cables, and switches, which means lower costs and better utilization of resources.

The Converged Network Adapter or CNA appears to the host as two PCI devices: a network
adapter and a Fibre Channel adapter. If a request is a network transaction, it is delivered
directly to the lossless MAC. In the case of Fibre Channel, the frames are first encapsulated as
FCoE by the FCoE encapsulation engine and then sent to the lossless MAC for delivery.
Received traffic is processed by the lossless MAC, which filters FCoE frames and delivers
traffic either to the Ethernet NIC, if it is a network transaction, or to the FCoE engine for
decapsulation. The FCoE engine then forwards the decapsulated frames to the Fibre Channel
HBA device.

FCoE Encapsulation takes place directly in the Ethernet frame. This differs from iSCSI and FCIP
because TCP/IP overhead is not used by FCoE. The simple encapsulation of Fibre Channel into
Ethernet frames results in great benefits, such as greater efficiency and performance, due to
less overhead.
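A rough comparison of nominal per-frame header overhead illustrates the point; the FCoE header size is an assumption based on the FC-BB-5 encapsulation, and none of these figures are benchmarks:

```python
eth, ip, tcp, iscsi_bhs = 14, 20, 20, 48   # nominal header sizes, bytes
fcoe_hdr = 14                              # assumed FCoE encapsulation header

print("iSCSI headers:", eth + ip + tcp + iscsi_bhs, "bytes")  # 102
print("FCoE headers: ", eth + fcoe_hdr, "bytes")              # 28
```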

The most common FCoE implementation is the integration of an FCoE switch connected
either by ISL as an E_Port or used as a tunnel to a Fibre Channel fabric containing storage
arrays. The FCoE switch connects to a network switch as well. The hosts have dual path
connection through CNAs to the FCoE switch.
The current EMC FCoE implementation recommendations include having EMC storage
connected through Fibre Channel and hosts connected through FCoE. The hosts must have
two CNAs, each to process FCoE frames. There is no support for Virtual Ports at this time.
EMC only supports Ethernet connectivity directly to the FCoE switch device. Additional
switches and routers are not supported at this time.
Similar to native iSCSI, native FCoE implementations will increase once storage arrays roll out
FCoE ports.

These are the key points covered in this module. Please take a moment to review them.

The objectives for this module are shown here. Please take a moment to read them.

EMC offers a complete range of SAN connectivity products under the Connectrix brand.
Connectrix products are supplied by two different vendors: Brocade and Cisco.
Enterprise Directors are deployed in High Availability and/or large scale environments.
Connectrix Directors can have more than a hundred ports per device. When necessary, the
SAN can be scaled further using ISLs. The disadvantages of directors are higher cost and a
larger footprint.
Departmental Switches are used in smaller environments. SANs using switches can be
designed to tolerate the failure of any one switch. Switches are ideal for workgroup or mid-
tier environments. Large SANs built entirely with switches and ISLs require more connectivity
components because of the relatively low port count per switch, which adds complexity to
the SAN. The disadvantages of departmental switches are a lower port count and limited
scalability.
To support mixed iSCSI and Fibre Channel environments, multi-protocol routers are available.
These routers have the capability of bridging Fibre Channel SANs and IP SANs. Thus, they can
provide connectivity between iSCSI host initiators and Fibre Channel storage targets. In
addition, multi-protocol routers are required for extending Fibre Channel SANs over long
distances via IP networks.

The Connectrix family represents the industry's most extensive selection of networked
storage connectivity products. Connectrix integrates high-speed Fibre Channel connectivity,
highly resilient switching technology, and options for intelligent IP storage networking. This
wide range of connectivity options allows you to configure Connectrix directors, switches,
and routers to meet any business requirement. Combined with EMC's industry-leading
design, implementation, and support services, you have everything in one complete package.
Connectrix products provide more than just network connectivity. They offer simple,
centralized, automated SAN management; proven interoperability across your networked
storage solution; and the highest availability to meet escalating business continuity and
service level requirements.

This slide summarizes the salient features and benefits of the Connectrix B-series.

Connectrix MDS for intelligent SANs is an integral part of enterprise data center architecture
and provides a better way to access, manage, and protect growing information resources
across a consolidated Fibre Channel, Fibre Channel over IP, Small Computer System Interface
over IP, Gigabit Ethernet, and optical network.
The MDS-9000 Series serves as a platform for EMC Invista and RecoverPoint. In addition,
MDS provides Storage Media Encryption for tape and virtual-tape environments.

This slide depicts a summary of features and benefits of the Connectrix MDS-9000 Series.

This slide depicts a summary of the available configuration options of the MDS-9500 Series
Directors.

The NEX-5020 switch supports a base of forty 10 Gigabit Ethernet ports and has two slots
available for two switching modules. For server connectivity into an existing Connectrix MDS
SAN, each switching module supports four 4Gb/s Fibre Channel ports and an additional four
10 Gigabit Ethernet ports.
Major features and benefits of the Nexus 5020 include a line-rate 10 Gigabit Ethernet with a
switching architecture that supports full-bandwidth, low-latency, and predictable
performance on all ports. It also supports Fibre Channel over Ethernet, which can provide
server I/O consolidation because LAN and SAN traffic travels on a single 10 Gigabit Ethernet
link. This can simplify cabling, reduce the number of adapters, lower costs, and cut power
consumption.

This slide summarizes the available features of the MDS-9000 Series fabric switches. The
Connectrix MDS-9124 supports VSANs, but not Inter-VSAN routing.

There are several ways to monitor and manage Fibre Channel switches in a fabric. If the
switches in the fabric are contained in a cabinet with a Service Processor, console software
loaded on the SP can be used to manage them. Some switches also offer a console port,
which is used for serial connection to the switch for initial configuration using the CLI. This is
typically used to set the management IP address on the switch. Subsequently, all
configuration and monitoring can be done via IP. Telnet or SSH may be used to log into the
switch over IP and issue CLI commands to it. The primary purpose of the CLI is to automate
management of a large number of switches and/or directors with the use of scripts, although
the CLI may be used interactively too. In addition, almost all models of switches support a
browser-based graphical interface for management.
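As an example of the scripted approach, here is a minimal sketch using the third-party paramiko SSH library. The host names and credentials are placeholders, and "switchshow" is B-Series syntax; other vendors use different commands:

```python
import paramiko

SWITCHES = ["switch01.example.com", "switch02.example.com"]  # placeholders

def run_cli(host: str, command: str) -> str:
    """Log in over SSH, run one CLI command, and return its output."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username="admin", password="changeme")  # placeholder
    try:
        _stdin, stdout, _stderr = client.exec_command(command)
        return stdout.read().decode()
    finally:
        client.close()

for switch in SWITCHES:
    print(switch, run_cli(switch, "switchshow"))
```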
There are vendor-specific tools and management suites that can be used to configure and
monitor the entire fabric. The B-Series has Web Tools and Connectrix Manager. The MDS-
Series has Cisco Fabric Manager and Device Manager.
SAN Manager, an integral part of EMC ControlCenter, provides some management and
monitoring capabilities for devices from both vendors. A final option is to deploy a third-
party management framework such as Tivoli. Such frameworks can use SNMP to monitor all
fabric elements.

Web Tools is an easy-to-use, browser-based application for switch management and is
included with all Connectrix B-Series products. Web Tools simplifies switch management by
enabling administrators to configure, monitor, and manage switch and fabric parameters
from a single online access point. Web Tools supports the use of aliases for easy identification
of zone members.

Connectrix Manager is a licensed software product widely used for the management of
Connectrix B-Series switches and directors. It can be run locally on the Connectrix Service
Processor or remotely on any network-attached workstation. Since this application is Java-
based, IT administrators can run it from virtually any type of client device.

Fabric Manager and Device Manager must be installed on a server and can support several
clients. This Java-based tool simplifies management of the MDS Series through an integrated
approach to fabric administration, device discovery, topology mapping, and configuration
functions for the switch, fabric, and port.
Features of Fabric Manager include fabric visualization with automatic discovery, zone and
path highlighting, comprehensive configuration across multiple switches, powerful
configuration analysis, network diagnostics, and comprehensive security.

SAN Manager automatically discovers, maps, and displays the entire SAN topology at a high
level or in detail. Users can choose to display specific physical and logical information about each
object in the topology. This information can help Storage Administrators manage their
storage infrastructure better by correlating all SAN elements in context. With SAN Manager,
Administrators can view physical devices such as HBAs, servers, and storage arrays. Logical
information such as zoning and storage-device masking definitions can also be viewed,
allowing Administrators to fully analyze SAN performance and health.
SAN Advisor helps validate constantly changing SAN environments. SAN Advisor's import and
validation capabilities extend beyond mere physical devices and connectivity to include
zoning, VSANs, and user-defined group information.

A Command Line Interface is available for any Connectrix switch. Each switch vendor
implements its own syntax for common switch and fabric functions.

Port type security refers to the ability to restrict a switch port to a particular function. For
example, a switch port can be restricted to function only as an F_port or an E_port. CHAP
authentication applies to iSCSI interfaces. Persistent port disable means that a port remains
disabled across reboots.

Zoning is a switch function that allows devices within the fabric to be logically segmented
into groups that can communicate with each other. When a device logs into a fabric, it is
registered by the name server. When a port logs into the fabric, it goes through a device
discovery process with other devices registered as SCSI FCP in the name server. The zoning
function controls this process by allowing only ports in the same zone to establish these
link-level services.
A collection of zones is called a zone set. The zone set can be active or inactive. An active
zone set is the collection of zones currently being used by the switched fabric to manage
data traffic.
Single HBA zoning consists of a single HBA port and one or more storage ports. A port can
reside in multiple zones. This provides the ability to map a single storage port to multiple
host ports. For example, a Symmetrix FA port or a CLARiiON SP port can be mapped to
multiple single HBA zones. This allows multiple hosts to share a single storage port.
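The enforcement rule can be modeled in a few lines. This is a toy illustration of single-HBA zoning, not vendor code, and the port names are made up:

```python
# Active zone set: two single-HBA zones sharing one Symmetrix FA port.
active_zone_set = {
    "hostA_zone": {"hostA_hba0", "symm_fa1"},
    "hostB_zone": {"hostB_hba0", "symm_fa1"},
}

def can_communicate(a: str, b: str) -> bool:
    """Ports may establish link-level services only via a shared zone."""
    return any(a in zone and b in zone for zone in active_zone_set.values())

print(can_communicate("hostA_hba0", "symm_fa1"))    # True
print(can_communicate("hostA_hba0", "hostB_hba0"))  # False
```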

The design of the Fibre Channel-switched fabric environment allows users to add and remove
nodes dynamically. When users add or remove a node, hosts are notified of the change to
the fabric environment. Most hosts query the fabric name server to receive an update.
Nodes that do not query the name server may not be aware that their target is no longer
available and continue to send frames to the same destination port.
With hardware-enforced WWN zoning, only zone members in the same zone can
communicate when they are logged into the switch. No license is required to enable this
feature. It is enabled by default and requires no customer configuration.
Use the persistent port disable command to prevent user-specified ports from being enabled
after a reboot. This feature prevents physically connected nodes from logging in to the
switch.

Virtual storage area networks are user-defined logical SANs. VSANs enhance overall fabric
security by creating multiple logical SANs over a single hardware infrastructure. A VSAN can
exist within a single switch chassis or span multiple chassis. Nodes connected to one VSAN
cannot communicate with nodes in another VSAN. Each VSAN has its own active zone set,
name server, and routing protocol. Fabric events in one VSAN do not impact the stability of
another VSAN. EMC currently supports 20 VSANs.
With hardware-enforced WWN zoning, the active zoning configuration is pushed to the port
ASIC where the ingress and egress ports are located. Only members in the same zone may
communicate when they are logged in to the switch.
Port security, which can be accessed via the port security database, ensures that
unauthorized end nodes and other Fibre Channel switches cannot access the fabric. Each
VSAN has its own database. Port security is activated on a per-VSAN basis.
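The isolation property can be sketched the same way. This toy model (not NX-OS code, with made-up members) simply gives each VSAN its own name server membership:

```python
# Each VSAN maintains its own name server; lookups never cross VSANs.
vsan_name_servers = {
    10: {"hostA_hba0", "array_port0"},   # assumed VSAN 10 members
    20: {"hostB_hba0", "array_port1"},   # assumed VSAN 20 members
}

def fabric_visible(a: str, b: str) -> bool:
    return any(a in ns and b in ns for ns in vsan_name_servers.values())

print(fabric_visible("hostA_hba0", "array_port0"))  # True: same VSAN
print(fabric_visible("hostA_hba0", "array_port1"))  # False: isolated
```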

Device masking ensures that volume access to servers is controlled appropriately. This
prevents unauthorized or accidental use in a distributed environment.
A zone set can have multiple host HBAs and a common storage port. LUN masking prevents
multiple hosts from trying to access the same volume presented on the common storage
port. LUN masking is a feature offered by EMC Symmetrix and CLARiiON arrays.
When servers log into the switched fabric, the WWNs of their HBAs are passed to the storage
fibre adapter ports that are in their respective zones. The storage system records the
connection and builds a filter listing the storage devices available to that WWN, through the
storage fibre adapter port. The HBA port then sends I/O requests directed at a particular LUN
to the storage fibre adapter. Each request includes the identity of the requesting HBA and
the identity of the requested storage device, with its storage fibre adapter and logical unit
number. The storage array processes each request to verify that the HBA is allowed to access
that LUN on the specified port. A request from an HBA for a LUN to which it does not have
access generates an error.
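The filter the array builds can be pictured as a lookup table. A toy sketch follows (not Symmetrix or CLARiiON code; the WWNs and LUN numbers are made up):

```python
# (storage port, initiator WWPN) -> set of LUNs that initiator may access
masking_db = {
    ("fa_1a", "10:00:00:00:c9:a1:b2:c3"): {0, 1, 2},
    ("fa_1a", "10:00:00:00:c9:d4:e5:f6"): {3},
}

def allowed(port: str, wwpn: str, lun: int) -> bool:
    return lun in masking_db.get((port, wwpn), set())

print(allowed("fa_1a", "10:00:00:00:c9:a1:b2:c3", 1))  # True
print(allowed("fa_1a", "10:00:00:00:c9:d4:e5:f6", 1))  # False -> error
```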

Storage groups are meaningful only in shared environments where multiple hosts have
exclusive or shared access to LUNs in a storage system. Specify host storage group access
using the WWN of each HBA and LUN. Generally, you may find it easier to use Navisphere
Manager than the CLI to create and manipulate storage groups.

These are the key points covered in this module. Please take a moment to review them.

The objectives for this module are shown here. Please take a moment to read them.

You should deploy a SAN for business-critical functions. This slide describes the common SAN
business requirements.

Storage area networks can handle large amounts of block-level I/O and are suited to meet
the demands of high performance applications that need access to data in real time. In many
environments, these applications have to share access to storage resources. Implementing
them in a SAN allows efficient use of these resources. When data volatility is high, a host's
need for capacity and performance can grow or shrink significantly over time. SAN
architecture is flexible, so existing storage can be rapidly redeployed across hosts with
minimal disruption.
SANs are also used to consolidate storage within an enterprise. Consolidation can be at a
physical or logical level. Physical consolidation involves the physical relocation of resources to
a centralized location. Once resources are consolidated, facilities can make more efficient
use of resources such as heating, ventilation and air conditioning, power protection,
personnel, and physical security. Physical consolidations have a drawback in that they do not
offer resilience against a site failure.
Logical consolidation is the process of bringing components under a unified management
infrastructure and creating a shared resource pool. Since SANs can be extended to span vast
distances physically, they do not strictly require the logically related entities to be physically
close to each other. Logical consolidation does not take full advantage of the benefits of site
consolidation, but it does offer some protection against site failure, especially if well
planned.

Direct-attached storage architecture has distinct limitations. Only one server can use the
storage attached to it. Storage that goes unused becomes a stranded or underused resource.
If an application is saturating one server, you cannot deploy another server to share the
burden because the application data is only available on the direct-attached server. By
consolidating data files with SAN, you can reduce your total cost of ownership and eliminate
needless duplication and resynchronization of files.

This slide describes the benefits of larger directors. At 528 ports, the MDS-9513 model is the
industry's most scalable director.

The promise of these standards, protocols, or approaches is really about reducing TCO for
storage networking, particularly when positioned or compared to Fibre Channel-based
networks.
In this case, the TCO value proposition can be broken into three areas: reduced acquisition
costs, reduced management and operation costs, and vendor independence.
Interoperability has been a major challenge for Fibre Channel, both within the network and
at the end-points. While Fibre Channel network hardware often supports SNMP, full element
management still requires a completely separate management console. The storage over IP
promise is that this will be combined into a single management environment.

Virtualization technology is causing significant changes across storage networks. Most
significantly, server virtualization technology is driving demand for networked storage
solutions due to the overall increase in storage capacity requirements brought about by
virtualization initiatives. To fully realize the benefits of server virtualization, businesses need
to deploy SAN infrastructures to consolidate and share storage resources.
Users are therefore purchasing additional storage capacity and networked storage
technologies to satisfy the storage requirements associated with server virtualization
deployments. Increasing use of server virtualization is also leading users to rethink data
protection and disaster recovery strategies, which is expanding the market for remote data
replication solutions.

The example displayed on this slide shows how the IT service arm of a leading North
American energy company started a consolidation project, using VMware Infrastructure as
the primary vehicle.
The customer had more than 1,000 x86 industry-standard servers, more than 100 SQL
servers, Lotus Notes mail servers, Citrix servers, and many Line-of-Business applications.
Using VMware as the primary system infrastructure strategy, the company ended
up with 50 IBM x440 servers, a 20:1 reduction.
What is interesting is how consolidation has affected the company's infrastructure beyond
the reduced server count. Virtualization-based consolidation not only affects the server
count, but raises the quality and service levels of everything around it, including storage,
network, and facilities.
Specifically, in this project, the storage consolidation moved from siloed, direct-attached
storage to highly available tiered storage and SAN. Network ports were consolidated by a
ratio of 10:1. Dramatic power and cost savings were realized on the facilities and hardware
infrastructure.

These are the key points covered in this module. Please take a moment to review them.

These are the key points covered in this course. Please take a moment to review them.
This concludes the training. Proceed to the course assessment on the next slide.

