Celerra ICON
Celerra Training for Engineering
Course Introduction
Course Introduction - 1
Revision History
Revision Number   Course Date     Revisions
1.0               February 2006   Complete
Course Introduction - 2
Prerequisites
y Successful completion of the following EMC courses:
EMC Technology Foundations (ETF) or the NAS Foundations self-study module from that course
Celerra Features and Functionality (Knowledgelink)
y Completion of the following NAS hardware platform self-studies, chosen based on relevance, is
recommended (Knowledgelink Self-Study):
CNS Architectural Overview self-study
NS Series Architectural Overview self-study
NSX Architectural Overview self-study
Course Introduction - 3
Course Objectives
y Describe the functional components and operations of the major building
blocks that make up a Celerra NAS solution
y Install the operating system and NAS software on a Control Station and the
DART operating environment on a Data Mover
y Configure Network Interfaces
y Configure a Celerra Data Mover for high availability
Back-end
Data Mover failover
Network high availability
y Describe the storage configuration requirements for both a CLARiiON and Symmetrix
back-end
Course Introduction - 4
Agenda Day 1
y Class Introduction
y Celerra Overview
y Hardware Overview
y Software Installation Concepts
y Planning, Installing, and Configuring a Gateway System
y Installation Lab
Course Introduction - 5
Agenda Day 2
y Celerra Management & Support
Command Line Interface
Celerra Manager
Course Introduction - 6
Agenda Day 3
y Back-end Storage Configuration
Review CLARiiON storage concepts
Symmetrix IMPL.bin file requirements
Course Introduction - 7
Agenda Day 4
y User Mapping in a CIFS Environment
y Configuring CIFS Servers on the Data Mover
y File System Permissions
y Virtual Data Mover
y Lab:
Usermapper
CIFS Configuration
Windows Integration
VDMs
Course Introduction - 8
Agenda Day 5
y SnapSure Concepts and Configuration
y Celerra Replicator Overview
y iSCSI Concepts and Implementation
y Lab:
SnapSure Implementation
Local Celerra Replication
iSCSI Implementation with Windows Host
Course Introduction - 9
Closing Slide
Course Introduction - 10
Celerra ICON
Celerra Training for Engineering
Celerra Overview
Revision History
Revision Number   Course Date     Revisions
1.0               February 2006   Complete
1.2               May 2006        Updates
Celerra Overview - 2
Module Objectives
y Describe the current Celerra NAS product offering
y Locate resources used in setting up and maintaining a
Celerra
Documentation CD
Support Matrix
NAS Engineering Websites
Celerra Overview - 3
Delivering on ILM
Global Accessibility
Unified Name Space
Wide Area Filesystems
Centralized Management
Information Security
Unified Management
Celerra Overview - 4
The Celerra NAS platform family (all run DART):

NS700 - Integrated CLARiiON
One or two Data Movers
16 or 32 TB usable Fibre Channel, ATA capacity
8 or 16 Gigabit Ethernet network ports (Copper / Optical)
Two Fibre Channel HBAs per Data Mover

NS704 - Integrated CLARiiON, Advanced Clustering
Four Data Movers
48 TB usable Fibre Channel, ATA capacity
32 Gigabit Ethernet network ports (24 Copper, 8 Optical)

NS500G / NS700G - NAS gateway to SAN (CLARiiON, Symmetrix), High Availability
One or two Data Movers

NS704G - NAS gateway to SAN (CLARiiON, Symmetrix), Advanced Clustering
Four Data Movers

Celerra NSX - NAS gateway to SAN (CLARiiON, Symmetrix), Advanced Clustering
Four to eight X-Blades
Celerra Overview - 5
EMC offers the broadest range of NAS platforms. In addition to the platforms above, a legacy 14 Data
Mover CNS/CFS configuration was available in the past. While the hardware was considerably
different, it ran the same DART operating system as the current offerings. For a short time, we also
offered the NetWin 110/200, a WSS 2003-based low-end configuration. Note: the NS600 is no longer
available.
Documentation
Celerra Overview - 6
http://powerlink.emc.com/km/appmanager/km/secureDesktop?_nfpb=true&_pageLabel=servicesDocLibPg&internalId=0b01406680024e3f&_irrt=true
Celerra Overview - 7
http://www.emc.com/interoperability/matrices/nas_interoperability_matrix.pdf
Celerra Overview - 8
http://naseng/default.html
NAS Support
Celerra Overview - 9
Celerra Overview - 10
As you proceed through this course, you will find it useful to understand how the Celerra lab is
configured. In the lab, you will work for a fictitious company, Hurricane Marine, LTD, a manufacturer
of yachts.
UNIX Environment
Domain: hmarine.com
NIS Domain: hmarine.com
NIS Server: nis-master (10.127.*.163)
UNIX Clients:
sun1   10.127.*.11
sun2   10.127.*.12
sun3   10.127.*.13
sun4   10.127.*.14
sun5   10.127.*.15
sun6   10.127.*.16
Celerra Overview - 11
Windows Environment
Domain: hmarine.com
Domain Controller: hm-1.hmarine.com
Sub Domain: corp.hmarine.com
Domain Controller: hm-dc2.hmarine.com
Computer Accounts: w2k1, w2k2, w2k3, w2k4, Data Movers
All user accounts
Celerra Overview - 12
Network Configuration
y UNIX and W2000 Clients
y DNS and NIS
Subnet A 10.127.*.0
UNIX Clients
Subnet C 10.127.*.64
Router
Subnet D 10.127.*.96
EMC Celerra/Symmetrix
Subnet E 10.127.*.128
EMC Celerra/Symmetrix
Subnet F 10.127.*.160
NIS
Celerra Overview - 13
Closing Slide
Celerra Overview - 14
Celerra ICON
Celerra Training for Engineering
Hardware Review
Hardware Review - 1
Revision History
Rev Number   Course Date     Revisions
1.0          February 2006   Complete
Hardware Review - 2
Hardware Review - 3
Diagram: a Celerra Data Mover on the TCP/IP network presents CIFS shares, NFS exports, FTP/TFTP, and iSCSI
targets; Windows clients use mapped drives and share access, UNIX clients use NFS mounts, FTP clients connect
directly, and iSCSI initiators access the iSCSI target.
Hardware Review - 4
y Control Station
Linux operating system
Dedicated management host
Configuration and monitoring
y Data Movers
Connected to the production network and, via Fibre Channel, to the storage subsystem
y Storage Subsystem
Completely separate
CLARiiON or Symmetrix
May be dedicated or shared
A Celerra system can contain one or more individual file servers running EMC's proprietary DART operating
system. Each of these file servers is called a Data Mover. One or more Data Movers in a Celerra can act as a hot
spare, or standby, for other production Data Movers, providing high availability.
Control Station
The Celerra also provides one management host, the Control Station, which runs the Linux operating system, and
Network Attached Storage (NAS) management services (e.g., Data Mover configuration and monitoring
software). A second Control Station may also be present for redundancy.
Separate Storage Subsystem
All production data and the complete configuration database of the Celerra is stored on a separate storage
subsystem. Data Movers contain no hard drive.
Hardware Review - 5
y Gateway Storage
Storage subsystem can also provide storage to other hosts
Supports Symmetrix and/or CLARiiON (with AccessLogix)
Hardware Review - 6
CLARiiON-only storage subsystem, dedicated to the Celerra
y Factory-installed: Celerra software pre-loaded at the factory
y Field-installed: procedure furnished by Celerra Technical Support
When the Celerra NS Integrated arrives from the factory, the Celerra software is pre-loaded. When the system is
powered on, a simple initialization wizard is run, providing the opportunity to enter site-specific network
configuration information for the Linux Control Station.
Field-installed
The Celerra NS Integrated can sometimes require that you manually perform the installation. When the manual
method of installation is required (e.g. the factory setup is flawed, or the system is ordered without a cabinet), the
original factory image, if present, must be overwritten. This involves CLARiiON clean-up procedures that will
be furnished by Celerra Technical Support when needed.
Hardware Review - 7
CLARiiON-only
Storage Subsystem
Data Mover
Control Station
Direct-connected Celerra NS Gateway configurations use a direct Fibre Channel connection to the CLARiiON
storage subsystem.
The CLARiiON may also be used to provide storage to other hosts.
Hardware Review - 8
CLARiiON or Symmetrix
Storage Subsystem
Data Mover
Control Station
Fabric-connected Celerra NS Gateway configurations use a SAN Fibre Channel connection to the CLARiiON
and/or Symmetrix storage subsystem.
The fabric-connected gateway is the only Celerra NS configuration that supports using a Symmetrix storage
array.
Using a fabric-connected configuration allows wider utilization of the CLARiiON Storage Processors' FE (Front
End) ports.
The storage array may also be used to provide storage to other hosts.
Hardware Review - 9
Integrated only
2 Data Movers
Upgradeable
Integrated or Gateway
S Models
The S stands for single Data Mover Model.
These systems can typically be upgraded with an additional Data Mover. If you upgraded a single Data Mover NS Series
device you would no longer refer to it as an S.
Having only one Data Mover does not restrict the installation type or deployment method. An S series device can be
deployed either as an Integrated system or as a Gateway system.
0 Models
The 0 denotes an integrated system.
These systems contain two Data Movers.
These systems are deployed as Integrated systems.
G Models
The G stands for Gateway.
The Gateway can be either Direct or Fabric attached.
GS Models
This represents a combination of the G and S.
I Models
The I stands for Integrated.
NSX Prefix
The NSX prefix represents the bladed series.
Hardware Review - 10
Diagram: front-end versus back-end - the NAS clients on the IP network sit on the Celerra front-end, while the
SAN-attached Symmetrix and/or CLARiiON storage system (storage subsystem) sits on the Celerra back-end.
It is important to understand that the terms back-end and front-end are in reference to the component being
discussed.
Storage System
For the CLARiiON SPs, the back-end refers to the disk array enclosures (DAEs) to which it is connected via
Fibre Channel Arbitrated Loop, while the front-end refers to the Fibre Channel connection to the hosts
(possibly via a Fibre Channel switch). With a Symmetrix, the front-end is the FA Director and port that connect
to host systems and the back-end is the DA (Disk Adapter) director that connects to the physical drive modules.
Celerra Data Movers and Control Station
For the components of the Celerra Network Server the back-end refers to the storage subsystem (i.e. the
CLARiiON and/or Symmetrix), while the front-end refers to the NAS clients in the production TCP/IP
network.
Hardware Review - 11
y Failover policies
Automatic
Retry
Manual
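For reference, the failover policy is assigned when a standby Data Mover is configured from the Control Station
CLI. A minimal sketch, assuming server_2 is a primary Data Mover and server_3 its standby (names and options are
examples; verify the exact syntax in the server_standby man page):
# Assign server_3 as the standby for server_2 with the automatic failover policy
server_standby server_2 -create mover=server_3 -policy auto
# Manually fail server_2 over to its standby, then restore it after repair
server_standby server_2 -activate mover
server_standby server_2 -restore mover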
Hardware Review - 12
Diagram: failover sequence managed by the Control Station - during normal operations the Control Station monitors;
when a failure is detected ("FAILED"), the Control Station activates the standby ("Go!"); after repair, the original
configuration is restored and monitoring resumes.
Hardware Review - 13
Standby
Data Mover
Primary
Data Mover
Primary
Data Mover
Primary
Data Mover
CS0: Primary
Control Station
CS1: Standby
Control Station
Since data flow is separated from control flow, you can lose the Control Station and still access data
through the Data Movers. But you cannot manage the system until Control Station function is reestablished. EMC provides Control Station failover as an option.
Celerra supports up to two Control Stations per Celerra cabinet. When running a configuration with
redundant Control Stations, the standby Control Station monitors the primary Control Station's
heartbeat over the redundant internal network. If a failure is detected, the standby Control Station
takes control of the Celerra and mounts the /nas file system.
If a Control Station fails, individual Data Movers continue to respond to user requests and users'
access to data is uninterrupted. Under normal circumstances, after the primary Control Station has
failed over, you continue to use the secondary Control Station as the primary. When the Control
Stations are next rebooted, either directly or as a result of a power down and restart cycle, the first
Control Station to start is restored as the primary.
A Control Station failover will initiate a call home.
Hardware Review - 14
To storage
Fibre Channel
Data Mover
Hardware Review - 15
Production
Network
cge0
cge1
cge2
fge0
Each Data Mover provides several physical connections to the production Ethernet data network.
Ethernet port types
The exact number of these connections depends on the Data Mover model. Typically, there are two types of
Ethernet ports that may be found on a Data Mover, copper 10/100/1000 Mbps and optical Gigabit Ethernet. The
copper ports have hardware names beginning with cge, followed by the ordinal number of the port. (e.g.
cge0, cge1, cge2, etc.) The optical, or fiber, Ethernet ports have hardware names beginning with fge,
followed by the ordinal number of the port. (e.g., fge0, fge1, etc.)
Making the Connections
These production Ethernet ports require manual connection to the production Ethernet switch. The Ethernet
cables for these connections are NOT included with the Celerra Network Server.
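As a point of reference, these hardware device names are what you see and use from the Control Station CLI. A
minimal sketch, assuming a Data Mover named server_2 and example address values (verify the options against the
server_sysconfig and server_ifconfig man pages):
# List the network devices the Data Mover reports (cge0, cge1, fge0, ...)
server_sysconfig server_2 -pci
# Create an IP interface on copper gigabit port cge0 (address, mask, and broadcast are examples)
server_ifconfig server_2 -create -Device cge0 -name cge0_1 -protocol IP 10.127.57.20 255.255.255.224 10.127.57.31
# Verify the configured interfaces
server_ifconfig server_2 -all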
Hardware Review - 16
Ethernet
Ethernet
Data Mover
Hardware Review - 17
Network
Control Station
Hardware Review - 18
Data Mover
Ethernet switch
Control Station
Hardware Review - 19
Storage Processor A
Storage Processor B
All Celerra NS Integrated systems, and some Gateway systems use CLARiiON disk arrays for their storage
subsystem. A CLARiiON Storage Processor requires two connections to the backend disk array.
For more information on EMC CLARiiON, please refer to CLARiiON documentation and training.
Hardware Review - 20
Diagram: Data Mover 2 and Data Mover 3 back-end ports (BE0, BE1) connect to SPA and SPB ports (FE0, FE1).
y Minimum of two connections from each SP to each DM
Direct cabling
Fabric connections with switch zoning
Hardware Review - 21
y NS Gateways
To Ethernet switch
Each CLARiiON Storage Processor has an Ethernet port to facilitate management of the array.
Celerra NS Integrated Systems
When the CLARiiON is connected to the Celerra NS Integrated system, both SPA and SPB should come from the
factory with these Ethernet ports connected to the Celerra's private LAN Ethernet switch.*
Celerra NS Gateway Systems
When the CLARiiON is being used by a Celerra NS Gateway system, the SPs must be connected to the
production/administrative Ethernet switch so that the administrator can connect.
*Note: In some instances, the Celerra NS Integrated model may also be shipped for mounting in an existing rack.
In these cases, you would be required to make the necessary connections.
Hardware Review - 22
Control Station
serial cable
Modem
Phone line
Hardware Review - 23
The illustration above is a NS500. The NS500(S) is very specific in its combinations of Data Movers, Control
Stations, and Storage Processors depending on what was ordered.
Remember: With an Integrated system, the storage (and SPs) are included. You cannot connect an Integrated
system to an existing SAN environment.
While it is possible to place these individual components in a different order it is recommended that you follow
the format listed above. If you do change the location of components please be aware of cable length issues.
Hardware Review - 24
The NS500G(S) is very specific in its combinations of Data Movers and Control Stations depending on what was
ordered. However, the possible combination of storage that a Gateway can connect to is not illustrated in this
slide. The illustration above pertains directly to a NS500G Only.
While it is possible to place these individual components in a different order it is recommended that you follow
the format listed above. If you do change the location of components please be aware of cable length issues.
The customer may have ordered a new CLARiiON array with the Celerra Gateway system, or the customer may
already have the array. If your customer ordered the optional cabinet, the components are installed in the cabinet
at the EMC factory.
Because the NS500 shares the same physical enclosure as a CX500, when you look at the front it appears that
there should be drive modules in the front slots. That is not the case. The storage is provided by a separate
CLARiiON enclosure.
Hardware Review - 25
This illustration highlights the general physical differences between a single Data Mover model and a dual Data
Mover model shown on the next slide.
Hardware Review - 26
MIA
Media Interface Adaptor: used to convert an HSSDC cable to an LC connection.
Serial to CS
This is an RJ-45 to DB-9m cable that connects to the appropriate Control Station serial connection
(discussed later).
Private LAN:
This is an RJ-45 cable that connects to the Control Station's Ethernet switch.
Hardware Review - 27
NS500-AUX SPs look very similar to CLARiiON CX500 SPs. The NS500-AUX has two small form-factor
pluggable (SFP) sockets in place of the CX500 optical ports.
Hardware Review - 28
Hardware Review - 29
The Celerra may include one of two different Control Stations: NS-600-CS or the NS-CS. The two Control
Stations function in the same manner, but the buttons, lights, and ports are in different locations. The setup
procedure is essentially the same for either Control Station.
Hardware Review - 30
This front view of a NS-CS is only viewable after you have removed the front bezel. The front view of this
model Control Station presents a floppy drive, CD-ROM drive and a serial port connection.
The floppy and CD-ROM are used for installations and upgrades of EMC NAS code.
The serial port is used to connect directly to a computer that has been configured with the proper settings as
described in the setup guide. Commonly available terminal programs allow the user to interact with the Control
Station. These serial ports will allow you to access the system in the event of a loss of LAN connectivity.
Hardware Review - 31
The rear view of this Control Station is obstructed. Access to these ports can be difficult because the
Ethernet switch blocks the middle portion of this device, as illustrated above.
The Public LAN connection is typically connected to the customer network. This allows the Celerra to be
accessed and managed via the GUI and/or CLI.
The Private LAN connection is attached to the Ethernet Switch directly behind the Control Station.
While this device comes with 4 serial connections, only 1 is required per Data Mover.
It is not common to hook up a mouse and/or keyboard. Management of this device is done via the serial
connection as explained earlier.
Hardware Review - 32
The Control Station contains two individual pieces of hardware that are attached.
This NS600-series Celerra has a model NS-600-CS Control Station.
Hardware Review - 33
The illustration above pertains directly to a NS700. This model is also available with 4 Data Movers (the NS704
Integrated).
Typically these devices come pre-cabled and pre-wired. While it is possible to place these individual components
in a different order it is recommended that you follow the format listed above. If you do change the location of
components please be aware of cable length issues.
Remember: With an Integrated system the storage (and SPs) are included. You will not connect an Integrated
system to an existing SAN environment.
Like the NS600, the NS700 is also available with a single Data Mover (NS700(G)S). If that is the case, the Data
Mover Enclosure will only include DM2, the bottom mover.
Hardware Review - 34
The NS700G can be connected to various storage options including a Symmetrix depending on your
configuration option.
Hardware Review - 35
Regardless of model type designation (NS600 or NS600G), there are no hardware differences between Data
Movers. However, if a G model is deployed, you will be required to install MIAs in order to connect to the
array.
MIA
Media Interface Adaptor: used to convert an HSSDC cable to an SFP connection.
Serial to CS
This is a DB-9m cable that connects to the appropriate Control Station serial connection (discussed later).
Public LAN:
The public LAN refers to the customer's network that will be used to access files stored on the Celerra / Storage. These are
RJ-45 ports that support 10/100/1000 Mbps speeds.
CGE:
Copper Gigabit Ethernet
Private LAN:
This is an RJ-45 cable that connects to the Control Station's Ethernet switch.
SFP
Small Form-factor Pluggable
Copper FC:
This is an HSSDC cable that can be converted via MIA (as required) to connect to the array.
Hardware Review - 36
Regardless of model type designation (NS700 or NS700G), there are no hardware differences between Data
Movers. However, if a G model is deployed, you will be required to install MIAs in order to connect to the
array.
MIA
Media Interface Adaptor: used to convert an HSSDC cable to an SFP connection.
Serial to CS
This is a DB-9m cable that connects to the appropriate Control Station serial connection (discussed later).
Public LAN:
The public LAN refers to the customer's network that will be used to access files stored on the Celerra / Storage. These are
RJ-45 ports that support 10/100/1000 Mbps speeds.
CGE:
Copper Gigabit Ethernet
Private LAN:
This is an RJ-45 cable that connects to the Control Station's Ethernet switch.
SFP
Small Form-factor Pluggable
Copper FC:
This is an HSSDC cable that can be converted via MIA (as required) to connect to the array.
In the illustration above, you will notice that the 8-port NS700 Data Mover also includes serial and Ethernet
connections for a second Control Station.
Hardware Review - 37
Important: The NS704G is a fabric-attached Gateway system only. There is no direct connect option for this
device.
With the exception of the NSX series (discussed later) this is the only NS-series device that can have 2 Control
Stations.
While it is possible to place these individual components in a different order it is recommended that you follow
the format listed above. If you do change the location of components please be aware of cable length issues.
The NS704G can be connected to various storage options including a Symmetrix depending on your
configuration option.
Hardware Review - 38
DM3/SPB
DM2/SPA
Hardware Review - 39
The CX700 AUX Storage Processor is sold only with an integrated NS700 or NS700S Celerra. The lack of a
SAN personality card prohibits any SAN connection to SPA and SPB.
Hardware Review - 40
The EMC Celerra NSX network server is a network-attached storage (NAS) gateway system that connects to
EMC Symmetrix, CLARiiON arrays, or both. The NSX system has between four and eight X-Blade 60 blades and two
Control Stations. The EMC NAS software automatically configures at least one blade as a standby for high
availability.
Hardware Review - 41
NSX Blade
Please note the location and names of the equipment listed above. You will learn more about each piece of
equipment later in this module.
Important: The terms blade and Data Mover refer to the same device.
Hardware Review - 42
The Celerra NSX is always configured as a fabric-connected gateway system. A Fabric-Connected Celerra
Gateway system is cabled to a Fibre Channel switch using fibre-optic cables and small form-factor pluggable
(SFP) optical modules. It then connects through the Fibre Channel fabric to one or more arrays.
Other servers may also connect to the arrays through the fabric. You can use a single switch, or for added
redundancy you can use two switches. The Celerra system and the array or arrays must connect to the same
switches.
If you are connecting the Celerra system to more than one array, one array must be configured for booting the
blades. This array should be the highest-performance system and must be set up first. The other arrays cannot be
used to boot the blades and must be configured after the other setup steps are complete.
Hardware Review - 43
The external network cables connect clients of the Celerra system to the blades. Another external network cable
connects the CS to the customer's network for remote management of the system. The external network cables
are provided by the customer. The category and connector type of the cable must be appropriate for use in the
customer network. The six copper Ethernet network ports on the blades are labeled cge0 through cge5. These
ports support 10, 100, or 1000 megabit connections and have standard RJ-45 connectors. The two optical Gigabit
Ethernet network ports are labeled fge0 and fge1. They have LC optical connectors and support 50 or 62.5
micron multimode optical cables. Ports fge0 and fge1 use optical SFP modules installed at the factory.
Hardware Review - 44
The front view of this model Control Station presents a floppy drive, CD-ROM drive and a serial port
connection.
The floppy and CD-ROM are used for installations and upgrades of EMC NAS code.
The serial port is used to connect directly to a computer that has been configured with the proper settings as
described in the setup guide. Commonly available terminal programs allow the user to interact with the Control
Station. These serial ports will allow you to access the system in the event of a loss of LAN connectivity.
Hardware Review - 45
The NSX Control Station is designed for use with NSX systems only. While it still serves all the roles and
responsibilities of a traditional Control Station, please be aware that there is a different back-end port selection on
this model.
Hardware Review - 46
The private (internal) LAN cables connect the CS to the blades through the blade enclosure's system
management switches. These cables and switches make up a private network that does not connect to any
external network. Each blade enclosure has two system management switches, one on each side of the enclosure.
Hardware Review - 47
With the removal of the Ethernet switch, the diagram above illustrates how the Control Stations and Data
Movers communicate with each other over private, redundant connections.
Table: private LAN cable paths (From/To) for CS0 (left and right) and CS1 (left and right).
Hardware Review - 48
The Celerra NSX always ships in its own EMC cabinet. The cabinet may include two uninterruptible power
supplies (UPSs) to sustain system operation for a short AC power loss. All components in the cabinet, except for
the CallHome modems, are connected to the UPS to maintain high availability despite a power outage. The two
Control Stations have automatic transfer switches (ATSs) for short AC power loss in addition to the two UPSs.
The NSX Cabinet only includes the Control Station(s) and Data Movers. Storage is always in a separate cabinet.
Hardware Review - 49
Module Summary
In this lesson you learned about:
y Be careful with the terms back-end and front-end, as they mean different things depending on whether you are
looking from the Celerra or from the storage system perspective
y While physically different, all models share similar
components and interconnections
NS500/350
NS600/NS700
NSX
Hardware Review - 50
Closing Slide
Hardware Review - 51
Celerra ICON
Celerra Training for Engineering
Revision History
Rev Number   Course Date     Revisions
1.0          February 2006   Complete
The objectives for this module are shown here. Please take a moment to read them.
Diagram: the Control Station internal drive holds Linux and the NAS management services; 6 System LUNs on the
storage array hold DART, etc., for the Data Mover.
Installation, and Configuration Overview - 4
A Celerra system uses two storage locations for installation of its software: the Control Station's
internal disk drive and 6 System LUNs (also known as Control LUNs) on the storage array.
Control Station internal disk drive
The Celerra Control Station contains an internal disk drive upon which the Linux operating system is
installed, as well as the NAS management services that are used to configure and manage Data Movers
and the file systems on the storage subsystem. The Control Station also holds an auxiliary boot image
which can be used by Data Movers whenever their OS cannot be located on the storage array.
6 Control LUNs on storage array
Celerra Data Movers have no local disk drives. Data Movers require 6 Control (or System) LUNs on
the storage subsystem. These System LUNs contain the DART operating system, configuration files,
log files, the Celerra configuration database ("NASDB"), NASDB backups, dump files, etc. (The
exact contents of each LUN are discussed later in this course.)
NOTE: This module discusses the installation storage requirements. Storage LUNs for data are not
present at this time.
Diagram: the Data Mover connects over Fibre Channel (connect cables, perform fabric zoning) to the 6 System LUNs
(DART, etc.) on the storage subsystem; the Control Station internal drive holds Linux and the NAS management
services.
Installation, and Configuration Overview - 5
Diagram: initial state of a new install - the Control Station internal drive and the 6 System LUNs on the storage
subsystem are empty.
The Data Mover operating system (DART), NAS software, and configuration files will be stored on the internal IDE
drive in the Control Station and on the System LUNs on the storage subsystem (CLARiiON or
Symmetrix). At the beginning of a new install there are no files in any of those locations. (Actually,
there may be a factory image of Linux on the Control Station; this will be overwritten during
installation.)
It is assumed the floppy and CD are loaded.
Diagram: after Linux is installed and configured, the Control Station PXE-boots the Data Movers over the private
LAN; the 6 System LUNs on the storage subsystem are still empty.
After the files are written to the CS drive, the CS reboots, asks all the configuration questions and
restarts the network. A PXE image, with a bootable configuration for the DMs, is created on the CS
internal drive.
The DMs are now automatically rebooted from that PXE image by the installation script.
Diagram: "Where is DART?" - the Data Mover can look to (1) the still-empty System LUNs over Fibre Channel or
(2) the Control Station over the private LAN.
Diagram - next step: perform FC zoning between the Data Mover and the storage subsystem holding the (still empty)
6 System LUNs.
For manual installations, once the DMs boot up, the back-end Fibre Channel ports become active and
the WWNs of the DMs are displayed on the HyperTerminal screen.
The manual install requires that you do all the back-end configuration (LUNs, registration, storage
groups, etc.) before continuing beyond this step.
Diagram: the Data Mover can now see the (still empty) System LUNs over Fibre Channel, but DART is not yet loaded
there, so it continues to boot from the Control Station over the private LAN.
Once the Data Movers are given access to the storage array they still cannot boot from the System
LUNs because DART (nas.exe) has not been loaded there at this time. However, the Data Mover can
see the System LUNs.
The Data Movers still access DART from the Control Station via PXE. Now they can provide access to
the Control LUNs for the Control Station via the Network Block Service (NBS).
Diagram: the Data Mover acts as the NBS Server and the Control Station as the NBS Client; the Control Station
reaches the System LUNs on the storage subsystem through the Data Mover's Fibre Channel connection.
Using NBS (Network Block Service, see below) over the internal network the CS can access the
System LUNs via the Data Mover(s).
Network Block Devices
NBS uses iSCSI with CLARiiON proprietary changes. Below is a generic description of Network
Block Devices.
Linux can use a remote server as one of its block devices. Every time the client computer wants to read
/dev/nd0, it will send a request to the NBS server via TCP, which will reply with the data requested.
This can be used for stations with low disk space (or even diskless - if you boot from floppy) to borrow
disk space from other computers. Unlike NFS, it is possible to put any file system on it.
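As a generic illustration of the network block device idea (not the CLARiiON-proprietary NBS used by the Celerra),
a Linux client might attach a remote block device and put a file system on it roughly like this; the server name,
port, and device names are examples only:
# Load the network block device driver on the client
modprobe nbd
# Attach the block device exported by a remote NBD server (host and port are examples)
nbd-client nbd-server.example.com 2000 /dev/nbd0
# Because the client sees a raw block device, any file system can be created on it (unlike NFS)
mkfs.ext2 /dev/nbd0
mount /dev/nbd0 /mnt/remote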
Using NBS over the internal TCP/IP network, the CS partitions, formats and installs all the required
NAS (DART) code on the System LUNs.
Diagram: DART, etc. now reside on the 6 System LUNs; the Data Mover boots from them over Fibre Channel, and
further configuration follows as required: network interfaces, file systems, exports and shares, etc.
Once DART, etc. has been loaded onto the System LUNs, the Data Mover can now successfully boot
over Fibre Channel from the System LUNs, and the remainder of the automated installation tasks can
complete.
Module Summary
In this module you learned about:
y Celerra NS software will be installed to the Control
Station local drive and the System LUNs on the storage
array
y The major installation tasks include:
Load Linux and the DART image to the Control Station
PXE boot the Data Movers to provide Control Station access to the array via
NBS
Load DART, etc., to the array
Closing Slide
Celerra ICON
Celerra Training for Engineering
Revision History
Rev Number   Course Date     Revisions
1.0          February 2006   Complete
1.2          May 2006        Update and reorganization
Regardless of the specific configuration, all Celerra installations are performed using the same general
process and phases. In this module we will be discussing the installation and configuration of a Fabric-connected
Gateway system configuration. Much of the back-end configuration and fabric zoning can
be performed automatically using Auto-configure scripts; however, we will be discussing the manual
configuration as this represents a worst-case complexity.
The following portions of this course are designed to focus on the technical publication, Celerra
NS500G/NS600G/NS700G Gateway Configuration Setup Guide.
Please locate your copy of this document and follow the discussions closely with the document.
If you cannot locate your copy, please notify your instructor immediately.
Installation and configuration of a Celerra gateway system is typically done in three phases.
y Phase 1: The installation is planned and configuration information is collected from the customer.
y Phase 2: The hardware is physically installed and cabled, the software is installed, and the Control
Station is configured. At this point the system is functional, but cannot yet be used by clients to store and retrieve
files.
y Phase 3: The system is configured with client network connections, file systems, shares, exports,
and so on. When this phase is complete, the system is fully usable by clients.
In the field, two or more individuals from different EMC or Authorized Service Provider organizations
typically work together to complete the different phases of the installation. Close coordination is
required to ensure the requirements are communicated.
y Site preparations
Physical space considerations, power, network connectivity, etc
The first phase starts when the customer agrees to the installation and ends when all of the required
information has been collected. Missing information, such as IP addresses, can cause significant delays
later in the installation process.
1. Use the EMC Change Control Authority (CCA) process to get the initial setup information and to
verify you have all needed software before going to the customer's site.
2. Verify that the customer has completed all site preparation steps, including providing appropriate
power and network connections.
3. If the Celerra system is being connected to a new array, verify that the array has been installed and
configured before starting to install the Celerra system. Verify that the required revision of the array
software is installed and committed.
4. Fill out the configuration worksheets with the customer.
5. Give the phase 2 configuration information to the installer who will complete the next phase.
The version of EMC NAS that ships with the Celerra Network Server is not likely to be appropriate for the installation.
When the Change Control Authorization is consulted, the correct version will be identified and placed on the EMC FTP site
as an ISO image for download.
After downloading the approved version, you will want to create a CD from the ISO image.
Installation Boot Floppy
Typically, you should be able to use the boot floppy that shipped with the Celerra Network Server. If you need to create a
new boot floppy, you can do so from either a Linux or Windows host. The procedure to do this from Windows is included below.
y Extract the rawrite.exe file from the Global Services Service Pack CD. You can also obtain rawrite.exe for free from
many internet sites.
y Copy rawrite.exe to C:\temp
y Put the EMC NAS code CD into the CD-ROM drive of the Windows computer being used to create the boot floppy
y Place a blank, formatted floppy into the floppy drive of the same computer
y Change directory to C:\temp
y Type rawrite.exe and press [Enter], and provide the following information when prompted:
Disk image name: D:\images\boot.img
Target diskette drive: A:
y When the command prompt returns the boot floppy creation is complete.
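Since the boot floppy can also be created from a Linux host, a minimal sketch of that alternative follows; the
mount point and image path are assumptions, so adjust them to match the actual layout of the EMC NAS code CD:
# Mount the EMC NAS code CD (device and mount point are examples)
mount /dev/cdrom /mnt/cdrom
# Write the boot image directly to the floppy device
dd if=/mnt/cdrom/images/boot.img of=/dev/fd0 bs=1440k
# Flush buffers before removing the diskette
sync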
The second phase includes physically installing the system, cabling it to the customer's network, and
configuring the Control Stations and CallHome. The second phase is complete when the system
successfully calls home and you have filled out the Phase 2 Completion Hand-Off Worksheet from
Appendix G.
You should always install and configure the system according to the instructions in the Gateway setup
guide. Be sure to follow the steps in the order given.
The basic steps for installing a Celerra gateway system are as follows.
y Verify you have received the required phase 2 configuration information from the individual who
completed phase 1.
y Verify that all required components are onsite.
y Assemble the system and make required connections. This part of the procedure varies greatly
from a system that shipped with an EMC cabinet to one which did not.
y Power on the system and install or upgrade software as needed. You will need your service laptop
computer.
y Configure Control Station 0.
y For dual Control Station configurations, configure Control Station 1.
y Configure and test CallHome.
Chapters 5, 7, 9, and 10 each discuss the Fabric-connected Gateway installation, with each chapter focusing
on a different model. Please follow the chapter that is appropriate for your lab setup.
Chapter 11: Configure the Boot Array discusses how to verify the software version, write cache and
user account settings of the CLARiiON array. Please note that complete configuration of the storage
subsystem is beyond the scope of this course. However, when appropriate we will focus on the storage
requirements for the Celerra installation.
Chapter 12: Install and Configure the EMC NAS Software
Once the correct NAS software version is installed on the Control Station, you will power on the Data
Movers for the first time. The Data Movers cannot boot from the array because the array does not have
the EMC NAS software installed. Instead, the Data Movers boot from the Control Station over the
private LAN connections. This is called PXE (preboot execution environment) or network booting.
In addition to the two Celerra private network configurations, you will also be provided the
opportunity to configure the Control Station's IP connection to the production/administrative network.
Remember the terminal parameters: 19200, 8bit, no parity, 1 stop and none for Flow Control.
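For example, from a Linux service laptop the serial console might be opened with settings matching those
parameters (the device name and terminal program are examples; HyperTerminal on Windows works equally well):
# 19200 baud, 8 data bits, no parity, 1 stop bit, no flow control
screen /dev/ttyS0 19200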
The first part of the installation process includes the configuration of the Control Station. We are
going to do a standard installation, so we will select Serialinstall. Alternatively, the Control
Station configuration can be described in advance in the file ksnas.cfg. Then, during the beginning of
the installation, the kickstart installation option can be chosen by typing serialkickstart when
prompted at Steps 5-6 above. The kickstart facility will then extract this configuration information
from ksnas.cfg.
CS installation
y Linux will be installed on the Control Station
y When complete, you will be prompted to remove the boot media
(floppy)
y Control Station will reboot using Linux that was just installed
y You will be prompted for the following:
Is this a Dual CS configuration?
IP address for Primary Internal Network
Default 192.168.1.100
Best Practice is to use the default addresses for the Internal Network; however, if these default addresses cannot be
used because of potential conflicts in the environment, it is easier to change the defaults now
rather than after the installation completes.
The option to use the Celerra Gateway Auto-Config function is a critical step in the install process.
Select Yes and the Auto-config scripts will configure the CLARiiON array and the zoning on the switch.
In this case we are going to select No and review the manual steps required to configure the fabric and
the back-end storage.
When performing the fabric-connected Gateway installation, use Appendix E of the Gateway Setup
Guide for the procedure to zone the Fibre Channel switch and configure the LUNs on the storage array.
Data Mover 3 WWN Node/Port Names:
FCP HBA 0:  N_PORT  S_ID 010c00  Node 50060160b0602f3b  Port 5006016830602f3b
FCP HBA 1:  N_PORT  S_ID 010f00  Node 50060160b0602f3b  Port 5006016930602f3b

Data Mover 4 WWN Node/Port Names:
FCP HBA 0:  N_PORT  S_ID 010900  Node 50060160b0602ed2  Port 5006016030602ed2
FCP HBA 1:  N_PORT  S_ID 010b00  Node 50060160b0602ed2  Port 5006016130602ed2

Data Mover 5 WWN Node/Port Names:
FCP HBA 0:  N_PORT  S_ID 010a00  Node 50060160b0602ed2  Port 5006016830602ed2
FCP HBA 1:  N_PORT  S_ID 010800  Node 50060160b0602ed2  Port 5006016930602ed2

Type 'C' to continue:
When you get to the prompt where you were asked if you want to Auto-config and you select No, the
Data Movers will reboot and report the WWNs of the HBAs. Record this information as you will need
the WWNs later when zoning the switch.
You might find it easier to cut-and-paste this information into a notepad document.
Diagram: each Data Mover connects through two Fibre Channel switches to the two CLARiiON Storage Processors.
The physical connections are typically made using multimode fiber optic cables. Each Fibre Channel
port on each data mover connects to an available port on the switch as does each port on the storage
array.
An ideal configuration is designed and implemented with no single points of failure. That is, any
one component can fail and access to the storage is still maintained. This requires the following:
y Two Fibre Channel HBAs per Data Mover (standard configuration)
y Two Fibre Channel Switches
y Two Storage Processors with two available ports each
While the ideal configuration includes two Fibre Channel switches with independent fabric
configurations, SANs are often implemented with a single switch because of the high availability features
that are built into the switch.
switch118:admin> switchshow
Port  Media  Speed  State
=========================
 0    id     N2     Online    F-Port  50:06:01:61:00:60:02:42
 1    id     N2     Online    F-Port  50:06:01:60:00:60:02:42
 2    id     N2     Online    F-Port  50:06:01:68:00:60:02:42
 3    id     N2     Online    F-Port  50:06:01:69:00:60:02:42
 4    id     N2     No_Light
 5    id     N2     No_Light
 6    id     N2     No_Light
 7    id     N2     No_Light
 8    id     N2     Online    F-Port  50:06:01:69:30:60:2e:d2
 9    id     N2     Online    F-Port  50:06:01:60:30:60:2e:d2
10    id     N2     Online    F-Port  50:06:01:68:30:60:2e:d2
11    id     N2     Online    F-Port  50:06:01:61:30:60:2e:d2
12    id     N2     Online    F-Port  50:06:01:68:30:60:2f:3b
13    id     N2     Online    F-Port  50:06:01:60:30:60:2f:3b
14    id     N2     Online    F-Port  50:06:01:61:30:60:2f:3b
15    id     N2     Online    F-Port  50:06:01:69:30:60:2f:3b
Preparing, Installing, and Configuring a Fabric-Connected Gateway - 18
Zoning Requirements
y Fibre Channel SANs provide flexible connectivity where any port in
the fabric is capable of seeing any other port
y Zoning is configured on the Switch for performance, security, and
availability reasons to restrict which ports in a fabric see each
other
Switch 1 (FC-SW1)
Zone1 - DM2-0 to SPA-0
Zone2 - DM3-0 to SPB-1
Switch 2 (FC-SW2)
Zone1 - DM2-1 to SPB-0
Zone2 - DM3-1 to SPA-1
Diagram: Control Station, DM2 and DM3 (back-end ports 0 and 1), FC-SW1, FC-SW2, and CLARiiON SP-A and SP-B.
By design, Fibre Channel switches provide flexible connectivity where any port in the fabric is capable
of seeing any other port. This can lead to performance, security, and availability issues. Zoning is a
feature of most switches that restricts which ports in the fabric see each other. This eliminates any
unnecessary interactions between ports.
In the example above, each switch is a separate fabric and is thus configured separately.
An alternate zoning configuration might look like this:
Switch 1
Zone1 - DM2-0 to SPA-0
Zone2 - DM2-0 to SPB-1
Zone3 - DM3-0 to SPA-0
Zone4 - DM2-0 to SPB-1
Switch 2
Zone1 - DM2-1 to SPA-0
Zone2 - DM2-1 to SPB-1
Zone3 - DM3-1 to SPA-0
Zone4 - DM2-1 to SPB-1
Configure Zoning
y Zoning is performed on the Fibre Channel Switch
Single HBA zoning
Zoning by WWPN
zonecreate "DM_2_BE_0_SPA_Port0","50:06:01:60:30:60:2f:3b;50:06:01:60:00:60:02:42"
zonecreate "DM_2_BE_0_SPB_Port0","50:06:01:60:30:60:2f:3b;50:06:01:68:00:60:02:42"
zonecreate "DM_2_BE_1_SPA_Port1","50:06:01:61:30:60:2f:3b;50:06:01:61:00:60:02:42"
zonecreate "DM_2_BE_1_SPB_Port1","50:06:01:61:30:60:2f:3b;50:06:01:69:00:60:02:42"
zonecreate "DM_3_BE_0_SPA_Port0","50:06:01:68:30:60:2f:3b;50:06:01:60:00:60:02:42"
zonecreate "DM_3_BE_0_SPB_Port0","50:06:01:68:30:60:2f:3b;50:06:01:68:00:60:02:42"
zonecreate "DM_3_BE_1_SPA_Port1","50:06:01:69:30:60:2f:3b;50:06:01:61:00:60:02:42"
zonecreate "DM_3_BE_1_SPB_Port1","50:06:01:69:30:60:2f:3b;50:06:01:69:00:60:02:42"
zonecreate "DM_4_BE_0_SPA_Port0","50:06:01:60:30:60:2e:d2;50:06:01:60:00:60:02:42"
zonecreate "DM_4_BE_0_SPB_Port0","50:06:01:60:30:60:2e:d2;50:06:01:68:00:60:02:42"
zonecreate "DM_4_BE_1_SPA_Port1","50:06:01:61:30:60:2e:d2;50:06:01:61:00:60:02:42"
zonecreate "DM_4_BE_1_SPB_Port1","50:06:01:61:30:60:2e:d2;50:06:01:69:00:60:02:42"
zonecreate "DM_5_BE_0_SPA_Port0","50:06:01:68:30:60:2e:d2;50:06:01:60:00:60:02:42"
zonecreate "DM_5_BE_0_SPB_Port0","50:06:01:68:30:60:2e:d2;50:06:01:68:00:60:02:42"
zonecreate "DM_5_BE_1_SPA_Port1","50:06:01:69:30:60:2e:d2;50:06:01:61:00:60:02:42"
zonecreate "DM_5_BE_1_SPB_Port1","50:06:01:69:30:60:2e:d2;50:06:01:69:00:60:02:42
cfgcreate "Celerra_cfg", "DM_2_BE_0_SPA_Port0; DM_2_BE_0_SPB_Port0; DM_2_BE_1_SPA_Port1;
DM_2_BE_1_SPB_Port1; DM_3_BE_0_SPA_Port0; DM_3_BE_0_SPB_Port0; DM_3_BE_1_SPA_Port1;
DM_3_BE_1_SPB_Port1; DM_4_BE_0_SPA_Port0; DM_4_BE_0_SPB_Port0; DM_4_BE_1_SPA_Port1;
DM_4_BE_1_SPB_Port1; DM_5_BE_0_SPA_Port0; DM_5_BE_0_SPB_Port0; DM_5_BE_1_SPA_Port1;
DM_5_BE_1_SPB_Port1"
cfgenable "Celerra_cfg"
cfgsave
Preparing, Installing, and Configuring a Fabric-Connected Gateway - 20
Brocade Zoning
y Output from the zoneshow command
y A zone configuration is a set of zones

Switch118:admin> zoneshow
...
Effective configuration:
 cfg:   Celerra_cfg
 zone:  DM_2_BE_0_SPA_Port0
                50:06:01:60:30:60:2f:3b
                50:06:01:60:00:60:02:42
 zone:  DM_2_BE_0_SPB_Port0
                50:06:01:60:30:60:2f:3b
                50:06:01:68:00:60:02:42
 zone:  DM_2_BE_1_SPA_Port1
                50:06:01:61:30:60:2f:3b
                50:06:01:61:00:60:02:42
 zone:  DM_2_BE_1_SPB_Port1
                50:06:01:61:30:60:2f:3b
                50:06:01:69:00:60:02:42
 zone:  DM_3_BE_0_SPA_Port0
                50:06:01:68:30:60:2f:3b
...
Above is an example of the zoning configuration that was auto-generated during an NS704G
installation. Note that the members of a zone are defined by the World Wide Port Names (WWPNs)
of the Data Mover HBAs and the SP ports. Also, each zone only includes one initiator device (HBA).
The output above was the result of the Brocade zoneshow command. The output was abbreviated to
only show the effective zone configuration for one Data Mover.
y Symmetrix Back-end
Configure Logical Volumes
Assign Channel Addresses and present volumes on FA port
Configure Volume Logix
A CLARiiON array can be managed using either a Command Line Interface or the graphical
Navisphere Manager. Navisphere Manager is browser based and is invoked by simply specifying the
IP address of either SPA or SPB. When prompted, enter the userid and password.
Note: Integrated systems may not include the Navisphere Manager User Interface and all configuration
and monitoring is done through the Celerra Control Station using the CLI.
Registration typically associates a hostname and IP address of a host with the WWPN of the Fibre
Channel HBA and also sets other attributes of the connection.
In a typical open-systems host environment, all HBAs for a host are registered together and assigned
the same name. With Celerra, the auto-generate script that runs during install registers each HBA
separately. For proper operation, it is important that the Initiator Information is set as shown in
the example above:
Initiator Type = CLARiiON Open
Failover Mode = 0
Array CommPath = Disabled
Unit Serial Number = Array
LUN   Size   Contents
00    11GB   DART, individual DM configuration files
01    11GB   Data Mover log files
02    2GB    Reserved (not used on NS-series): Linux on Control Station CS0 with no local HDD
03    2GB    Reserved (not used on NS-series): Linux on Control Station CS1 with no local HDD
04    2GB    NAS configuration database (NASDB)
05    2GB    NASDB backups, dump files, log files, etc.
When a Celerra is first installed, a minimum of six LUNs are created either manually, or automatically
through the install scripts. The table above displays all of the Celerra System LUNs, along with their
size and contents. Please note that LUNs 02 and 03 are not currently used for the Celerra NS series.
Earlier Celerra models, in which the Control Station had no internal hard drive, would use these LUNs
to hold the Linux installation. Additional LUNs must be configured for user file system data.
A RAID Group is a collection of related physical disks. From 1 to as many as 128 LUNs may be created
from a RAID Group. This screen shows the dialog for configuring a RAID Group.
The user needs to specify how many disks are to be reserved; the display will change to indicate
which RAID types are supported by that quantity of disks. In addition, the user may choose a decimal
ID for the RAID Group. If none is selected, the storage system will choose the lowest available
number.
The user must either allow the storage system to select the physical disks to be used, or may choose to
select them manually. Note that the storage system will not automatically select disks 0,0,0 through
0,0,4; they may be selected manually by the user. These disks contain the CLARiiON reserved areas,
so they have less capacity than other disks of the same size.
Other parameters that may be set include:
y Expansion/defragmentation priority - Determines how fast expansion and defragmentation occur.
Values are Low, Medium (default), or High.
y Automatically destroy - Enables or disables (default) the automatic destruction of the RAID Group
when the last LUN in that RAID Group is unbound.
Maximum number of RAID Groups per array = 240
Number of disks per RAID Group: RAID 5 = 3-16 disks, RAID 3 = 5 or 9 disks, RAID 1 = 2 disks,
RAID 10 = 2, 4, 6, 8, 10, 12, 14, or 16 disks. Remember, Celerra Best Practices specify the number of
disks per RAID Group.
Binding LUNs
When binding LUNs, the user must select the RAID Group to be used, and, if this is the first LUN
being bound on that RAID Group, the RAID type. If a LUN already exists on the RAID Group, the
RAID type has already been selected, and cannot be changed.
The size of a LUN can be specified in Blocks, MB, GB, or TB. The maximum LUN size is 2 TB. The
maximum number of LUNs in a RAID Group is 128.
In the example above, we specified creating two LUNs using all available capacity in the RAID Group
and distributing the LUNs across both Storage Processors for load-balancing purposes.
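The same RAID Group and LUN setup can also be scripted from the CLARiiON CLI; a minimal sketch using navicli,
assuming SP A is reachable at 10.127.57.30 and that the flags are checked against the navicli documentation for
the array's FLARE release:
# Create RAID Group 0 from five disks (bus_enclosure_disk notation; disk positions are examples)
navicli -h 10.127.57.30 createrg 0 0_0_5 0_0_6 0_0_7 0_0_8 0_0_9
# Bind two 11 GB RAID 5 LUNs on that RAID Group, one owned by each Storage Processor
navicli -h 10.127.57.30 bind r5 0 -rg 0 -cap 11 -sq gb -sp a
navicli -h 10.127.57.30 bind r5 1 -rg 0 -cap 11 -sq gb -sp b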
The configuration object used for assigning LUNs to hosts is called a Storage Group. Basically you
create a Storage Group, add LUNs and connect hosts. When a host is connected to a Storage Group,
it will have full read/write access to all LUNs in the Storage Group.
When creating a Storage Group, the software requires only a name for the Storage Group. All other
configuration is performed after the Storage Group is created.
A name supplied for a Storage Group is 1-64 characters in length. It may contain spaces and special
characters, but this is discouraged. After clicking OK or Apply, an empty Storage Group, with the
chosen name, is created on the storage system.
To assign LUNs, right click on the Storage Group, select properties and the LUNs tab. The LUNs tab
is used to add or remove LUNs from a Storage Group, or verify which are members. The Show LUNs
option allows the user to choose whether to only show LUNs which are not yet members of any
Storage Group, or to show all LUNs.
When a LUN is added to a Storage Group, it is automatically assigned the next available SCSI address,
starting with address 00. Use caution here, as the address that is assigned automatically is not apparent
unless you scroll over to the right in the Selected LUNs pane.
The Celerra Network Server requires specific LUN addresses for system LUNs. At the time a LUN is
added to a Storage Group, the address can be set by highlighting the LUN, clicking the Host ID field, and
choosing the host ID from the dropdown list. If a LUN was previously assigned to a Storage Group and the
address must be changed, it first must be removed from the Storage Group and re-added.
If LUN addressing is not set up in accordance with the defined rules, it is very likely that the
installation will fail. If, after the system has been in production, the LUN addressing is modified (i.e.
when adding storage to the array for increased capacity) in a way that does not comply with these
rules, the Data Movers will likely fail upon the subsequent reboot.
The Hosts tab allows hosts to be connected to, or disconnected from a Storage Group. Connecting a
host provides that host with full read/write access to the LUNs in the Storage Group.
The procedure here is similar to that used on the LUNs tab: select a host, then move it by using the
appropriate arrow. In most stand-alone host environments, only a single host is added to the Storage
Group, but because a Celerra Network Server is actually a cluster, all HBA connections for all Data
Movers are connected.
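Those Storage Group steps have CLI equivalents as well; a minimal sketch with navicli, again assuming SP A at
10.127.57.30 and example names (note that -hlu is the host LUN address the Celerra expects, while -alu is the
array LUN number):
# Create a Storage Group for the Celerra
navicli -h 10.127.57.30 storagegroup -create -gname Celerra_SG
# Add array LUN 0 as host ID (HLU) 0 and array LUN 1 as HLU 1
navicli -h 10.127.57.30 storagegroup -addhlu -gname Celerra_SG -hlu 0 -alu 0
navicli -h 10.127.57.30 storagegroup -addhlu -gname Celerra_SG -hlu 1 -alu 1
# Connect a registered Data Mover HBA host record (name is an example) to the Storage Group
navicli -h 10.127.57.30 storagegroup -connecthost -host dm2_hba0 -gname Celerra_SG -o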
Data Mover 3 WWN Node/Port Names:
FCP HBA 0:  N_PORT  S_ID 010c00  Node 50060160b0602f3b  Port 5006016830602f3b
FCP HBA 1:  N_PORT  S_ID 010f00  Node 50060160b0602f3b  Port 5006016930602f3b
y ...
y Type 'C' to continue:
The process from the Gateway Setup Guide can now resume at Chapter 12, Step 8
Steps 9-10, Create NAS Administrator account
Step 11, Enable UNICODE
Installation completes
The third and final phase includes all of the configuration required to make the Celerra system
available to clients. The specific steps depend on which services the customer purchased. For example,
a customer may elect to have only one initial file system created, or may choose an advanced
configuration with multiple file systems, advanced networking configurations, and so on.
The scope of this phase depends on the service offering that the customer has purchased. Many of the
advanced implementations are outside the scope of this class and would typically be performed by a
Technical Solutions specialist. However, in many implementations the installer may be expected to
perform a simple implementation.
The possible steps include:
y Configure the Data Movers, including failover policies, DNS and NIS domains, and virtual Data
Movers.
y Configure the Data Mover network connections, including fail-safe networks, link aggregations,
and Ethernet channels.
y Create users and groups.
y Optionally configure additional arrays, for fabric-connected systems.
y Create or configure volumes, shares, and exports, including Usermapper, CIFS servers, and quotas.
The third phase is complete when all planned configuration steps are complete and the customer has
signed off on the installation.
Module Summary
Planning is Critical!
y Installing a Celerra Network Server includes:
Physically connecting components
Installing Linux and the NAS code on the Control Station
Configuring and zoning the back-end
Installing and configuring DART on the Data Movers
Closing Slide
Celerra ICON
Celerra Training for Engineering
Revision History
Rev Number   Course Date     Revisions
1.0          February 2006   Complete
1.1          March 2006
1.2          May 2006
y Provides
NAS services for user/production data
NFS, CIFS, FTP, TFTP, iSCSI
All Celerra NS system Control Stations include a local hard drive where the Linux operating system is
installed (an EMC-modified Red Hat distribution).
In addition to the Linux operating system, the Control Station also runs EMC NAS software for
managing and monitoring the Data Movers and the backend.
Administrators can log on to the Control Station for management tasks using the CLI (via SSH) or the
Celerra Manager GUI via HTTPS and a supported web browser.
The Control Station is not in the data path and provides no services for user/production data. All file
serving functions are provided by the Data Mover. If the Control Station is powered down, although
there would be no management functionality, monitoring, or DM failover, there should be no
interruption in data availability.
The directory structure on the Celerra NS Control Station includes the following:
1. Linux software that is installed to the Control Station's internal hard drive
2. EMC NAS software that is installed to the Control Station's internal hard drive
3. EMC NAS software that is installed to the Celerra System LUNs on the storage subsystem,
accessed by the Control Station via NBS.
Filesystem    1k-blocks    Used     Available   Use%   Mounted on
/dev/hda3     2063536      716344   1242368     37%    /
/dev/hda1     31079        2674     26801       10%    /boot
none          256692       0        256692      0%     /dev/shm
/dev/nde1     1818352      548636   1177344     32%    /nbsnas
/dev/ndf1     1818352      60584    1665396     4%     /nas/var
/dev/nda1     136368       32108    104260      24%    /nas/dos
/dev/hda5     2063504      468752   1489932     24%    /nas
During the installation process, the directory structure on the Control Station is set up. Use the standard
Linux df command to view the Control Station software directory structure.
The file systems prefaced by /dev/nd* are accessed on the back-end storage subsystem via the Control
Station's IP connection to the Data Movers. The Data Movers run the Network Block Service (NBS).
/dev/hda1
/boot
/home
The file systems on devices beginning with hd are on the Control Station's local hard drive. Of these,
hda3 (mounted on /) and hda1 (mounted on /boot) are part of the Linux installation.
Although this is generally a typical Linux installation, there are some locations that hold important
Celerra-specific data.
/etc holds several Linux configuration files (such as passwd, group, hosts and so on) that hold data
entries that are key to the function of the Celerra software. For example the /etc/hosts file holds host
name resolution information that is used by the Control Station to connect with Data Movers and
CLARiiON SPs.
/home holds user profiles on a Linux system. The profile for the NAS administrator account, typically
nasadmin, is stored in /home/nasadmin. This location also holds some backup copies of the NAS
database.
/nas
The df command will also report the hda5 file system, which is mounted to /nas. Although this file
system is also on the Control Station's internal hard drive, it is not part of the Linux installation.
Rather, it is part of the Celerra EMC NAS software installation.
/nas contains items that are very important to the functionality and maintenance of the Celerra. These
items include various commands, scripts, and utilities, as well as important symbolic links to locations
stored on the storage subsystem. These links are accessible through a Data Mover via NBS.
/nas will be discussed more deeply later in this module.
/nas/bin
Common Celerra commands, etc
/nas/sbin
Advanced Celerra commands, etc
/nas/tools
Various config. and support tools
/nas/http
Celerra Manager
/nas/jserver
Celerra Manager
/nas/log
Data Mover & Celerra system logs
/nas/server/slot# Mount point for DM root file systems
The /nas directory is the primary access point for the Control Station to the Celerra system data.
Although /nas is mounted on a partition on the Control Station hard drive, the majority of objects found in
/nas are actually symbolic links to locations within /nbsnas, which is located on the backend.
This slide displays key locations within /nas that are physically located on the NS Control Station's local
hard drive.
The tools above are essential to supporting and troubleshooting the Celerra NS. Since these tools are
physically located on the Control Station hard drive, they are still accessible even if no Data Movers
are online and the Control Station has no connection to the backend.
Fibre Channel
iSCSI, TCP/IP
The network block service (NBS) enables the Celerra NS Control Station to access LUNs on an array.
For example, the Control Station uses NBS to read and write its database information stored in the
control LUNs, and to install NAS software on the array.
NBS data is sent from the Control Station to a Data Mover over the private LAN connection. The DM
then sends the data to the array over the Fibre Channel connection.
An NBS client daemon on the Control Station communicates with the NBS server on the Data Movers.
The Control Station must have at least one Data Mover running normally and accessible over the local
network to run any administrative commands. Celerra administration is not possible without the NBS
connection.
/nbsnas
/nas/var
/nas/dos
y /nbsnas
Located on the backend, LUN 04
Contains EMC NAS database, NASDB
Also contains mountpoints for other partitions
y /nas/var
Located on the backend, LUN 05
Contains:
NASDB backups
Dump files
Log files
Virtually all critical software is stored on the backend in the six Celerra Control volumes (or System
LUNs). The NS Control Station accesses all of this data via NBS.
If NBS is not functioning, then the Control Station cannot perform EMC NAS operations.
The /nbsnas directory is mounted to a physical location on the backend, LUN 04. This is the location
of the EMC NAS Configuration Database (NASDB).
/nas/dos on the Control Station is a symbolic link to /nbsnas/dos, which is physically located on the
backend storage array at LUN 00. This is where the Data Movers' operating system, DART, is located,
as well as configuration files for individual Data Movers.
LUN   Size   Paths / Links / Contents
00    11GB   Paths: /nbsnas/dos   Links: /nas/dos   Contents: DART, individual DM configuration files
01    11GB
02    2GB
03    2GB
04    2GB    Paths: /nbsnas   Contents: NAS configuration database (NASDB)
05    2GB    Paths: /nbsnas/var   Links: /nas/var   Contents: NASDB backups, dump file, log files, etc.
The table above displays all of the Celerra System LUNs, along with their size and contents. Please note
that LUNs 02 and 03 are not currently used for the Celerra NS series. Earlier Celerra models, in which
the Control Station had no internal hard drive, would use these LUNs to hold the Linux installation.
server_ commands
y Commands for managing Data Mover configurations and
status
y Issued to a single Data Mover or all Data Movers
server_date server_2
server_date ALL
y Examples:
server_version displays EMC NAS version on Data Mover
server_sysconfig manages Data Mover hardware components,
e.g. physical network devices
server_ifconfig manages Data Mover logical network interfaces
server_date manages Data Mover date and time
The server_ commands are issued at the Control Station to manage Data Movers. The server_
prefix is generally followed by a common UNIX/Linux command. Examples of this are in the slide
above.
The server_ command is always followed by the name of the Data Mover to which you wish to
direct the command (such as server_date server_2), or ALL to issue the command for all
Data Movers in the system (server_date ALL).
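As a quick illustration of this addressing convention (the date values shown are hypothetical), the same command can be directed at one Data Mover or at all of them:

$ server_date server_2
server_2 : Thu Jan 5 10:15:32 EST 2006
$ server_date ALL
server_2 : Thu Jan 5 10:15:32 EST 2006
server_3 : Thu Jan 5 10:15:33 EST 2006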
nas_ commands
y Commands for managing global configurations and status
i.e. not specific to any Data Mover
y Examples:
nas_version displays EMC NAS version on Control Station
nas_fs manages Celerra file systems
nas_storage manages backend storage
nas_server displays and manages Data Mover server table
Like the server_ commands, the nas_ command prefix is always followed by the command itself.
The slide above shows a few examples of nas_ commands. Notice that these command functions are
not related to a specific Data Mover (with some exceptions), but rather apply to the system globally.
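For instance, a hedged sketch of listing the Data Mover server table with nas_server (the id, type, slot, and state values shown are illustrative; the acl values and names follow the sample output later in this module):

$ nas_server -list
id   type  acl   slot  groupID  state  name
1    1     1000  2              0      server_2
2    1     1000  3              0      server_3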
fs_ commands
y Commands for managing file system features
y Examples:
fs_ckpt manages SnapSure checkpoints
fs_timefinder manages TimeFinder/FS
fs_copy manages file system copies
fs_replicate manages file system replication
Manual Pages
y Most Celerra management commands have Unix-like
manual pages
Command synopsis
Description
Usage examples
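For example, to read a command's manual page on the Control Station:

$ man server_ifconfig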
(Slide output: a Data Mover listing with columns type, acl, slot groupID, state, and name, showing server_2 and server_3 with acl 1000; and a disk listing with columns inuse, sizeMB, storageID-devID, type, name, and servers, showing two 11263 MB disks and four 2047 MB CLSTD disks, WRE00022100904-000A through -000D, named d3 through d6, all in use by servers 2,1.)
To verify the software version on the Control Station and the Data Movers, use the
nas_version and server_version commands.
The recommended version of software changes constantly. The CCA process will dictate which
version should be installed.
In most cases, the version of the software will be the same for both the Control Station and the Data
Movers; however, during upgrades it is possible to defer the Data Mover reboot, and thus the Data
Mover and the Control Station may be running different versions.
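A hedged sketch of checking both (the version strings are placeholders and the output format is approximate):

$ nas_version
5.4.xx.x
$ server_version ALL
server_2 : Product: EMC Celerra File Server Version: T5.4.xx.x
server_3 : Product: EMC Celerra File Server Version: T5.4.xx.x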
server_2 :
Filesystem        kbytes    used   avail    capacity  Mounted on
root_fs_common    13624     288    13336    2%        /.etc_common
root_fs_2         114592    624    113968   1%        /

server_3 :
Filesystem        kbytes    used   avail    capacity  Mounted on
root_fs_common    13624     288    13336    2%        /.etc_common
root_fs_3         114592    624    113968   1%        /
The server_df command displays the mounted file systems and their utilization. A newly installed
Data Mover should have two file systems, its own root file system, and a shared file system.
y Examples:
/nas/sbin/getreason displays Data Mover and Control Station
boot status
/nas/sbin/navicli manages CLARiiON
/nas/sbin/setup_slot manages NAS type, version, etc
Celerra reason codes identify the current status of a Data Mover or Control Station. Possible reason
codes are listed below.
0, 10, 11, 13, 14
There is no way to edit a file directly on the Data Mover. To edit a file, you must copy it to the Control
Station, edit it, and copy it back to the Data Mover using the server_file command.
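A sketch of that copy-edit-copy sequence, assuming the server_file -get/-put options and a hypothetical file name:

$ server_file server_2 -get passwd passwd    # copy the file from the Data Mover to the Control Station
$ vi passwd                                  # edit the local copy
$ server_file server_2 -put passwd passwd    # copy the edited file back to the Data Mover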
An alternative command for rebooting a Data Mover is the /nas/sbin/t2reset command. This
command allows you to power off, power on, or reboot specific slots in the Celerra. t2reset is an
internal command and thus not documented for external use, but it often works where the
server_cpu command might fail.
Parameter File
y Specific system attributes are set by default on the Celerra
y Can establish or override attributes by editing parameter file
System wide: /nas/site/slot_parm
Specific Data Mover: /nas/server/slot_x/param
In addition to the Celerra EMC NAS commands, the Control Station also runs most Linux commands.
A few Linux commands have been removed from the Control Station's EMC-modified
Red Hat distribution to save space.
Verifying Connectivity
y Pinging Data Movers over private networks
$ ping -c 1 server_2
PING server_2 (192.168.1.2) from 192.168.1.100 : 56(84) bytes of data.
64 bytes from server_2 (192.168.1.2): icmp_seq=0 ttl=255 time=117 usec
$ ping -c 1 server_2b
PING server_2b (192.168.2.2) from 192.168.2.100 : 56(84) bytes of data.
64 bytes from server_2b (192.168.2.2): icmp_seq=0 ttl=255 time=125 usec
The standard UNIX ping command can be used to verify the Control Station's management path to the
Data Movers over the private LAN. Using hostnames rather than IP addresses verifies the validity of
the /etc/hosts file by pinging the Data Movers by name (e.g. server_2). If you ping the hostname
server_2b, the Control Station will ping Data Mover 2 using the backup private network.
If you have a CLARiiON array, you can also verify that the Control Station has network connectivity
to SPA and SPB. Their IP addresses should be recorded in the /etc/hosts file as well and should be
reachable by the names spa and spb.
Link encap:Ethernet  HWaddr 00:02:B3:AF:3D:12
inet addr:192.168.1.100  Bcast:192.168.1.255  Mask:255.255.255.0
MTU:1500  Metric:1
Output is abbreviated
The EMC Celerra NAS database is also known as the NASDB. The NASDB represents all critical
Celerra-specific data and configurations. This is important to remember when supporting, maintaining,
or recovering the Celerra system. The Linux configuration is not part of the NASDB.
The NASDB is backed up automatically every hour, at one minute past the hour. The NASDB backups
can be used in supporting, maintaining, or recovering the Celerra to restore Celerra-specific
configurations. General Linux configuration information is not a part of NASDB backup.
sys_log
cmd_log
cmd_log.err
nas_log.al
nas_log.al.err
Osmlog
If the EMC NAS upgrade operation fails, record the step of the operation that failed. There are logs
present on the system that can help in isolating and resolving the upgrade problem. The directory
/nas/log holds most of the system logs. If problem escalation is required, these logs will be required.
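For example, after a failed upgrade step, the logs listed above can be reviewed directly on the Control Station:

$ ls /nas/log
$ tail -100 /nas/log/sys_log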
Celerra Manager
Celerra Manager is a web-based GUI used to manage a Celerra remotely. This slide shows a typical
Celerra Manager screen.
The list on the left part of the window shows the Celerra features that can be managed using Celerra
Manager. Some of these features will be described later in this module. The navigation pane on the
left is used to select the file server and feature to manage; the task pane on the right is used to manage
the feature.
Celerra Manager consolidates the functionality of the three previous Celerra Management products
including Web Manager, Celerra Native Manager, and Celerra Monitor.
You can purchase Celerra Manager in either basic or advanced editions. The differences between the
two will be addressed on the following slide.
To run Celerra Manager on a client machine, EMC recommends the use of Java Runtime environment
(JRE) v1.4.2 or higher. JRE v1.4.0 or 1.4.1 can run Celerra Manager, but you may see known
performance issues with these earlier revisions. Either Internet Explorer or Netscape browser can be
used. EMC recommends Internet Explorer 6.0 or higher, or Netscape 6.2.3 or higher. Netscape 6.2.2
can run Celerra Manager, but you may experience problems with this revision because of Netscape's
handling of some of the Java code used in the Manager.
Now, with version 5.4, when Celerra Manager is started it will detect whether you have the right version of
Java; if not, it will direct you to the download section of the Java home website from Sun Microsystems.
Basic Edition
Advanced Edition
y Checkpoint scheduling
y Volumes feature
y Tools feature
Launch SSH shell
Launch Celerra Monitor
Launch Navisphere
y Wizards
CIFS setup
File system
Celerra setup
Network
Celerra Manager comes in a Basic edition (the default version that comes with the Celerra) and an Advanced edition that can be
purchased separately. This chart shows the features that are included in both the Basic and Advanced editions.
Basic Edition: (Note: Advanced Edition detail shown on a subsequent slide)
Basic file system configuration features
SnapSure Checkpoints - The following SnapSure Checkpoint tasks are now supported:
Delete or refresh any checkpoint, not just the oldest
Delete an individual scheduled checkpoint instead of only the entire schedule
Delete a schedule by modifying a scheduled checkpoint to Never recur
Usermapper - Usermapper can now be managed in Celerra Manager. The Usermapper list screen shows an overview of settings for the system. CIFS Usermapper properties can also be displayed for each Data Mover.
Tree Quota Management - This feature allows you to access tree quota configuration and status information.
Wizards - The following wizards are included in both the basic and advanced editions of Celerra Manager.
Network Wizards
y Set up services
y Create an interface
y Create a device
y Create a route
Create a file system
Set up Celerra
CIFS Wizards
y Set up CIFS services
y Create a CIFS server
y Create a CIFS share
Tree quotas
VTLUs
Status
Utilization
y Wizards
y Integrated help
Celerra Manager uses a dual-frame approach. The left-hand frame contains an expandable tree view of
administration. The right-hand frame contains the system health, links to on-line help, and the data
output and form inputs for the selected administration including:
y Network - Configuration of network settings including DNS, NIS, WINS, link aggregations, and
network identity (IP addresses, subnet masks, VLAN ID).
y Hardware - Tools required to manage and inventory the physical hardware in the system. This
includes operations to configure shelves of disks when the back-end storage array is CLARiiON,
managing global spares, and upgrades (disk, bios, firmware, software).
y Data Mover - Management of CIFS shares, NFS exports, and User Mapping. Other functions
include reboot, shutdown, number of reboots, date/time and NTP configuration, DM name, DM
type, and character encoding.
y File Systems and Shares - The tools required to list, create, modify, expand, check, and delete file
systems and their related shares.
y Checkpoints - Includes screens to list, create, modify, refresh, and delete SnapSure checkpoints. It
also provides a way to restore file system to one of its checkpoints.
y Status - Monitors the status of the Celerra, including uptime, software versions, release notice link,
network statistics, event logs, and hardware status (any hardware components that are in a
degraded state).
y Tree quotas - Screens to set and report status on both hard and soft storage quotas.
y VTLUs - Tools to create, configure, and manage virtual tape library units.
y Utilization - Monitors the CPU and memory utilization for the Data Movers.
This slide shows an example of Celerra Manager, Advanced Edition. This interface is used to create
an iSCSI Target.
To enable Advanced Edition:
1. From the Celerra Home screen, select the License tab
2. Select the Celerra Manager Advanced Edition License checkbox apply
Status Monitor
Status Monitor:
Small, standalone version of navigation tree. This is launched by right clicking Celerra in the large
Celerra Manager window. It is the same as the navigation tree, except Celerra nodes can not be
expanded. It is used to monitor Celerra(s) without keeping the large Celerra Manager window open.
Using Status Monitor:
y A Celerra's status starts blinking, meaning a new alert or hardware issue has occurred.
y By left-clicking on the Celerra node, Celerra Manager is launched to that Celerras status page in a
new browser window.
y The admin can now inspect the status page and take action on any new items.
The File Systems page in Celerra Manager displays all file systems individually with small live graphs
showing space usage. You can use these live graphs to compare file systems to help determine if data
should be reallocated. This example shows two file systems that have nearly reached their storage
capacity and many with no utilized storage. These graphs capture utilization at the moment the page is
initially opened and will update according to the polling options you have set.
With Celerra Monitor, you can estimate future file system space usage and predict when the space on a
file system might become exhausted. This is done based on historical usage that displays on the File
System Space Usage window.
You can also chart an approximation of future space usage. The approximation and the prediction are
based on the data currently displaying on the graph; therefore, you should not base your decisions
about future storage planning on the graphs with a short time interval, such as a two-hour window.
Estimates will be more accurate if you display a graph representing a full week or more of usage
before making an approximation or prediction. It is advisable to make a prediction or approximation
based on one week of usage, then expand the graph to display the entire recorded history and make
another approximation and prediction. If both approximations and both predictions are close to each
other, you can probably assume the approximation is reasonable.
Occasionally, a prediction cannot be made due to a lack of data. In this case, a warning message
appears and no prediction is made.
Tool tip: Place your cursor directly over a field to view its help text
Online help: Click the help button to go to the Celerra Manager Online Help Guide
Celerra Manager has a comprehensive online help system to guide you through all of your management
tasks.
One of the most useful help tools is field help. Field help enables you to get information about a
specific field in Celerra Manager while you are working in the application. To view the help
information for a particular field, move your cursor directly over it. If help is available for that field,
help text will appear in a box near the field label.
You can also open a comprehensive online help guide by clicking the help button in the upper right
hand corner of the application. To open online help, click the help icon at the top of the task page. The
help page for the page you are currently on in Celerra Manager will open.
The Celerra Manager online help guide includes comprehensive instructions for administering your
Celerra Network Server. Topics including procedures are addressed.
Online Help Guide Tips
y To see a list of help topics in the help navigation pane, click the Contents tab. The Contents tab
includes step by-step instructions for performing procedures in Celerra Manager.
y To view the system index, in the navigation pane, click the Index tab.
y To search for a word or phrase in the online help, go to the Search tab.
Certain tasks on Celerra can only be performed from the Command Line Interface (CLI). The Tools
folder in Celerra Manager's navigation pane includes a Java-based Secure Shell (SSH) applet.
Other useful tools include Celerra Monitor and Navisphere. For a description of each tool hover your
cursor over the button and read the Tool Description in the text box to the right.
y Always log on
as the NAS
administrator
(typically
nasadmin).
y If necessary,
you can su to
root
The Java-based SSH tool provides easy access to the Celerra CLI. Some tasks are only performed from
the CLI. This includes modification of some of Celerra's CallHome settings.
After clicking the SSH Shell button on the Tools page, you will be required to present a username and
password. You should always log on as the NAS administrator (typically nasadmin) rather than root. If
you require root access, log on as nasadmin and then enter the su command (do not use su -).
Following these steps will provide you with the necessary profile for running Celerra commands.
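A minimal sketch of that login sequence (the Control Station hostname is an example), following the recommendation above to use plain su rather than su -:

$ ssh nasadmin@cs0
$ su
Password: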
The Set Up Celerra Wizard allows you to do the initial configuration of the Control Station and Data
Movers. After the successful completion of this wizard, you should be able to share data.
Create interface
Create/select network device, IP configuration, MTU, VLAN ID
Module Summary
In this module you learned about:
y The main software components of the Celerra system are:
Closing Slide
Celerra ICON
Celerra Training for Engineering
Revision History
Rev Number   Course Date   Revisions
1.0          May 2006      Complete
y Upgrade environment
Data present and being accessed
Celerra features are interoperating with network features
e.g. DNS, NDMP, administrative scripts, etc
Software Components
y Software Upgrade typically includes:
Linux on the Control Station
NAS software components on the Control Station
DART on the Data Movers
Health Checks
y Before attempting an upgrade, the system must be
operating normally
y The field uses health check scripts to verify the system
prior to getting CCA approval to perform upgrade
http://www.celerra.isus.emc.com/top_level/top_tool.htm
The following portions of this course are designed to focus on the technical publication, Celerra
Network Server Customer Service Universal Upgrade Procedure.
Reboot Required?
y
In-family upgrades
Out-of-family
Module Summary
y Upgrading NAS Software involves more risk than new
install
y Review Release Notes and understand potential impacts
y Upgrade should not be attempted if the system is not
fully operational
Installation may fail if the system is degraded
Could result in data loss or system unavailability
Closing Slide
Celerra ICON
Celerra Training for Engineering
Network Configuration
-1
Revision History
Rev Number   Course Date     Revisions
1.0          February 2006   Initial
1.1          March 2006
1.2          May 2006
Network Configuration - 2
Network Configuration
-2
Network Configuration - 3
Network Configuration
-3
Network
Symmetrix
-and/orCLARiiON
Celerra
Clients
Network Configuration - 4
NAS, or network attached storage, is all about the network. In this module we will be exploring how to
configure the Celerra to integrate into an IP network environment.
Network Configuration
-4
Data Mover
To Ethernet Switch & Client Systems
Devices
Interfaces
10.127.50.12
cge0
cge1
cge2
10.127.60.12
cge3
10.127.70.12
cge4
cge5
10.127.80.12
fge1
fge0
One-to-one relationship
One-to-many relationship
Many-to-one relationship
Network Configuration - 5
Network Configuration
-5
Devices
Interfaces
10.127.50.12
cge0
cge1
cge2
10.127.60.12
cge3
10.127.70.12
cge4
cge5
10.127.80.12
fge1
fge0
Network Configuration - 6
Network Configuration
-6
Data Mover
To Ethernet Switch & Client Systems
y One interface is
configured to two or
more Network Devices
Devices
Interfaces
10.127.50.12
cge0
cge1
cge2
cge3
cge4
cge5
fge1
fge0
Network Configuration - 7
Network Configuration
-7
Network Configuration - 8
In the first section of this module, we will cover the basic steps for configuring devices and interfaces,
verifying connectivity, and configuring routes. The above referenced document is an excellent
resource for more information on configuring and managing Celerra networking.
Network Configuration
-8
Configure Network
Devices
Configure IP Interfaces
Verify Data Mover
Connectivity
Configure Routes
y Additional specifications
-Platform
-pci device
-o options
speed={ 10 | 100 | 1000 | auto }
duplex={ full | half | auto }
linkneg={ enable | disable }
Network Configuration - 9
Command syntax
To modify and display the hardware configuration for Data Movers, type the following command:
server_sysconfig <mover_name> or ALL
Note: Type ALL to execute the command for all of the Data Movers.
Additional specifications
-Platform: Displays the system configuration of the Data Mover, including processor type, processor and bus
speed in MHz, main memory in MB, and the motherboard type.
-pci device: Displays information for the specified network adapter card installed in the Data Mover (for
example, ana0 or fpa1).
-o options: Options must be separated by commas with no additional spaces in the command line.
speed={ 10 | 100 | 1000 | auto }: The speed is automatically detected from the Ethernet line. The auto option
turns autonegotiation back on if you have previously specified a speed setting in the command line. If you set
speed=auto, the duplex option is automatically set to auto as well.
duplex={ full | half | auto }: Auto turns autonegotiation back on if you have previously specified a duplex
setting in the command line. The default duplex setting is half for Fast Ethernet if the duplex is not set to auto.
linkneg={ enable | disable }: Enables you to disable autonegotiation on the NIC if it is not supported by the
network Gigabit switch. The default is enable.
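For instance, the speed and duplex options described above can be combined in a single comma-separated -o argument (the device name and values are examples):

server_sysconfig server_2 -pci cge1 -o speed=100,duplex=full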
Network Configuration
-9
Configure Network
Devices
Configure IP Interfaces
y Command:
Configure Routes
y Example:
$ server_sysconfig server_2 -pci cge0
server_2 :
On Board:
Broadcom Gigabit Ethernet Controller
0: cge0 IRQ: 24
speed=100 duplex=full txflowctl=disable rxflowctl=disable
Network Configuration - 10
Network Configuration
- 10
Configure Network
Devices
Configure IP Interfaces
Verify Data Mover
Connectivity
Configure Routes
y Command:
server_sysconfig <mover_name>
-pci <device_name> -o speed={10|100|1000|auto}
y Example:
server_sysconfig server_5 -pci cge0 -o speed=100
Network Configuration - 11
Network Configuration
- 11
Configure Network
Devices
Configure IP Interfaces
Verify Data Mover
Connectivity
Configure Routes
y Command syntax:
server_sysconfig <mover_name> -pci
<device_name> -option duplex={full|auto|half}
y Example:
server_sysconfig server_5 -pci cge0 -o duplex=full
Network Configuration - 12
Network Configuration
- 12
Configure Network
Devices
Configure IP Interfaces
Verify Data Mover
Connectivity
Configure Routes
Network Configuration - 13
This slide shows how to list network devices using Celerra Manager.
Network Configuration
- 13
Network > Devices tab > right click Device > Properties >
set Speed/Duplex
Configure Network
Devices
Configure IP Interfaces
Verify Data Mover
Connectivity
Configure Routes
Network Configuration - 14
This slide shows how to set speed and duplex on the device using Celerra Manager.
Network Configuration
- 14
Configure IP Interfaces
Configure IP Interfaces
y Interfaces are the logical configuration
An interface defines an IP address and other
parameters
Configure Network
Devices
Configure IP Interfaces
Network Configuration - 15
The IP address, subnet mask, and broadcast address are all required when configuring the interface.
Network Configuration
- 15
IP Address Configuration
y Use the server_ifconfig command to:
Create a network interface from a network device
Assign an address to a network interface
Configure Network
Devices
Configure IP Interfaces
Verify Data Mover
Connectivity
Configure Routes
y Command:
server_ifconfig <mover_name> -create -Device <device_name>
-name <if_name> -protocol IP <ipaddress> <netmask> <broadcast>
y Example:
server_ifconfig server_3 -c -D cge0 -n cge0-1 -p IP
192.168.101.20 255.255.255.0 192.168.101.255
Network Configuration - 16
Additional specifications
-a: Displays parameters for all configured interfaces.
-d if_name: Deletes an interface configuration.
-c -D device_name -n if_name -p { ... }: Creates an interface on the specified device and assigns the specified protocol and associated
parameters to the interface. Also reassigns the default name to be the new name specified.
-p IP ipaddr ipmask ipbroadcast: Assigns IP protocol with specified IP address mask and broadcast address.
if_name up: Marks the interface up. This happens automatically when setting the first address on an interface. The up option enables an
interface that has been marked down, reinitializing the hardware.
if_name down: Marks the interface down. The system does not attempt to transmit messages through that interface. If possible, the
interface is reset to disable reception as well. This action does not automatically disable routes using the interface.
if_name mtu=MTU: Sets the Maximum Transmission Unit (MTU) size in bytes for the user-specified interface.
Example: To create the IP interface cge0-1 for cge0 (device), you would type:
server_ifconfig server_3 -c -D cge0 -n cge0-1 -p IP 192.168.101.20 255.255.255.0 192.168.101.255
server_3: done
Disabling an interface: To disable cge0-1 (interface), you would type:
server_ifconfig server_3 cge0-1 down (or up to enable)
server_3: done
The IP address specifies the IP address of a machine. You may have multiple interfaces per device, each identified by a different IP
address.
Deleting an interface: To delete cge0-1 (interface), you would type:
server_ifconfig server_3 -d cge0-1
server_3: done
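Building on the specifications above, a hedged example of setting the MTU on an existing interface (the value 9000 assumes the switch and clients support jumbo frames):

server_ifconfig server_3 cge0-1 mtu=9000
server_3 : done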
Network Configuration
- 16
Displaying IP Configuration
$ server_ifconfig server_2 -all
server_2 :
cge0_2 protocol=IP device=cge0
inet=128.222.93.165 netmask=255.255.255.0 broadcast=128.222.92.1
UP, ethernet, mtu=1500, vlan=0, macaddr=0:60:16:4:47:9b
cge0_1 protocol=IP device=cge0
inet=128.222.92.165 netmask=255.255.255.0 broadcast=128.222.92.1
UP, ethernet, mtu=1500, vlan=0, macaddr=0:60:16:4:47:9b
loop protocol=IP device=loop
inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255
UP, loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0
netname=localhost
el31 protocol=IP device=fxp0
inet=192.168.2.2 netmask=255.255.255.0 broadcast=192.168.2.255
UP, ethernet, mtu=1500, vlan=0, macaddr=8:0:1b:43:89:16
netname=localhost
el30 protocol=IP device=fxp0
inet=192.168.1.2 netmask=255.255.255.0 broadcast=192.168.1.255
UP, ethernet, mtu=1500, vlan=0, macaddr=8:0:1b:43:89:16
netname=localhost
Network Configuration - 17
Example
To display parameters of all interfaces for server_2, type the following command:
server_ifconfig server_2 -a
server_2 :
loop protocol=IP device=loop
inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255
UP, loopback, mtu=32704, macaddr=0:0:d1:1e:a6:44 netname=localhost
cge0-1 protocol=IP device=ana0
inet=192.168.101.20 netmask=255.255.192.0 broadcast=192.168.101.255
UP, ethernet, mtu=1500, macaddr=0:0:d1:1e:a6:44
el31 protocol=IP device=el31
inet=192.168.2.2 netmask=255.255.255.0 broadcast=192.168.2.255
UP, ethernet, mtu=1500, macaddr=0:50:4:e3:6d:7d netname=localhost
el30 protocol=IP device=el30
inet=192.168.1.2 netmask=255.255.255.0 broadcast=192.168.1.255
UP, ethernet, mtu=1500, macaddr=0:50:4:e3:6d:7c netname=localhost
Note:
When deleting or modifying an IP configuration for an interface, remember to update the appropriate CIFS servers that may
be using that interface and any NFS exports that may depend on the changed interface.
Network Configuration
- 17
Displaying IP Configuration
y
Configure Network
Devices
Configure IP Interfaces
Verify Data Mover
Connectivity
Configure Routes
Network Configuration - 18
This slide shows how to display the IP configuration using Celerra Manager.
Network Configuration
- 18
IP Address Configuration
y
Configure Network
Devices
Configure IP Interfaces
Verify Data Mover
Connectivity
Configure Routes
Network Configuration - 19
Network Configuration
- 19
Configure Network
Devices
Configure IP Interfaces
y Command:
server_ping <mover_name> -interface <interface> <ipaddress>
Example:
server_ping server_3
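A fuller, hypothetical example following the syntax above (the interface name and target address are illustrative):

server_ping server_3 -interface cge0-1 192.168.101.1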
Network Configuration - 20
Network Configuration
- 20
Adding Routes
y Data movers support Dynamic and Static Routing
y Three types of static routes:
Host
Network
Default
Configure Network
Devices
Configure IP Interfaces
Verify Data Mover
Connectivity
Configure Routes
Network Configuration - 21
Network Configuration
- 21
Routing Examples
y Default gateways
server_route server_2 -a default 192.168.64.1
server_route ALL -a default 192.168.64.1
Network Configuration - 22
Routing options
You can configure the routing table to route to a particular host or network. Therefore, an
administrator can:
y Specify that packets sent to a particular host, such as 192.168.64.10, be transmitted using a
particular interface, such as 192.168.101.22
y Configure the Data Mover so that TCP/IP packets destined for network 192.168.144.0 go through
interface 192.168.101.21
To use the default gateway for all unspecified destinations, you can use the add default option
followed by the gateway address (192.168.64.1). The ALL parameter defines that particular gateway
for all Data Movers in the Celerra cabinet.
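Following that description, and using the same flag style as the delete examples later in this module, the host and network routes might be added as follows:

server_route server_2 -a host 192.168.64.10 192.168.101.22
server_route server_2 -a net 192.168.144.0 192.168.101.21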
Network Configuration
- 22
Configure Network
Devices
Configure IP Interfaces
Verify Data Mover
Connectivity
Configure Routes
Network Configuration - 23
This slide shows how to configure a default gateway route using Celerra Manager.
Network Configuration
- 23
net
host
default
Network Configuration - 24
The server_route command is used to list the routing table entries for a Data Mover. For example, to
list the routing table for Data Mover 2, use the following command:
server_route server_2 list
Network Configuration
- 24
Network Configuration - 25
This slide shows how to view configured routes using Celerra Manager.
Network Configuration
- 25
Deleting Routes
y Delete a particular route
server_route server_2 -d net 192.168.144.0
192.168.101.21
server_route server_2 -d host 192.168.64.10
192.168.101.22
server_route server_2 -d default 192.168.64.1
Network Configuration - 26
Command options
Use the following command options to delete routes:
flush (-f) : Removes all routes from the Data Mover's routing table until the Data Mover is rebooted.
DeleteAll: Permanently clears all routes from the routing table.
Network Configuration
- 26
Deleting Routes
y Network > Routing tab > Select the route to delete > Delete
Network Configuration - 27
Network Configuration
- 27
Naming Services
y Resolve hostname into the corresponding IP address
And IP addresses to hostnames
Network Configuration - 28
Network Configuration
- 28
y Examples:
server_dns server_2 hmarine.com 192.168.64.15
server_dns server_2 -p tcp corp.hmarine.com 192.168.64.15
Network Configuration - 29
Network Configuration
- 29
y Command:
server_dns <mover_name>
y Example:
server_dns server_2
server_2 :
DNS is running.
hmarine.com
proto:udp server(s):10.127.50.162
Network Configuration - 30
Network Configuration
- 30
y Command:
server_dns <mover_name> -option {start|stop|flush}
y Example:
server_dns server_2 -o stop
server_dns server_2 -o flush
server_dns server_2 -o start
Network Configuration - 31
Network Configuration
- 31
DNS Settings
y Network > DNS Settings
Network Configuration - 32
In Celerra Manager, the Services Tab has been replaced with a NIS Settings Tab and a DNS
Settings Tab. This slide shows the DNS Domains listings from the DNS Settings Tab.
Network Configuration
- 32
Network Configuration - 33
Network Configuration
- 33
Network Configuration - 34
Network Configuration
- 34
Windows 2000
UNIX
Network Configuration - 35
This slide shows the steps necessary in implementing time services on the Celerra.
Network Configuration
- 35
y Example:
server_date server_2 timesvc start ntp -i 02:00 10.127.50.162
server_date server_2 timesvc start ntp -i 02:00 10.127.50.161
Network Configuration - 36
Command
server_date <mover_name> timesvc start ntp -interval <hh:mm> <NTPserver_IP | FQDN>
Note: The interval value is expressed as hours and minutes (hh:mm). The default time interval is 60
minutes.
Example
server_date server_2 timesvc start ntp -i 02:00 10.127.50.162
server_date server_2 timesvc start ntp -i 02:00 10.127.50.161
These servers will be polled every two hours. The server at 10.127.50.161 will only be polled if
10.127.50.162 does not respond.
Data Mover preparation for NTP/SNTP
y Ensure that the Data mover is bound to a Windows domain prior to starting the time service.
Note: this is not required in a non-Windows environment.
y Ensure that the time service is started before configuring the Data Mover for NTP/SNTP.
y The Data Mover time must be maintained to within the "Maximum tolerance for computer clock
synchronization" as defined for the Kerberos Policy in Active Directory (discussed later in this
course).
Network Configuration
- 36
y Verify synchronization
server_date server_2 timesvc update ntp
server_date server_2 timesvc stats ntp
Time synchronization statistics since start:
hits=1,misses=0,first poll hit=1,miss=0
Last offset:0 secs,-3000usecs
Time sync hosts:
0 1 10.127.50.162
0 2 10.127.50.161
Command succeeded: timesync action=stats
Network Configuration - 37
Network Configuration
- 37
Network Configuration - 38
This slide shows how to configure an NTP server using Celerra Manager.
Network Configuration
- 38
Module Summary
y Devices are the physical hardware; interfaces are the logical configuration
that defines the IP address and other configuration information
y Devices and interfaces can be configured in a one:one, many:one, or
one:many relationship
y To list network interfaces, use Celerra Manager or the
server_sysconfig command
y The speed and duplex must be set on each Data Mover interface to match
the current network environment
y The Data Mover interface IP address, mask, and broadcast address are
configured using Celerra Manager or server_ifconfig
y Default, network, and host routes can be configured for the Data Mover
y DNS and NTP are two services that are typically configured on each Data
Mover
Don't forget to verify NTP!
Network Configuration - 39
Network Configuration
- 39
Closing Slide
Network Configuration - 40
Network Configuration
- 40
Celerra ICON
Celerra Training for Engineering
-1
Revision History
Rev Number   Course Date     Revisions
1.0          February 2006   Complete
1.2          May 2006        Updates and enhancements
-2
The objectives for this module are shown here. Please take a moment to read them.
-3
Failover Process
y Primary Data Mover fails
Triggers
Failure of both internal networks
Power failure within Data Mover
Software panic
Exception on the Data Mover
Data Mover hang
Memory error on Data Mover
Non-triggers
Removal of Data Mover from slot
Manual reboot
Standby
Primary
Primary
Primary
Failover process
The failover process is when a primary Data Mover fails and a designated standby Data Mover is
activated as a spare.
The Control Station's heartbeat monitoring will detect the following:
y Failure of both internal networks
y Power failure within the Data Mover
y Software panic
y Exception on the Data Mover
y Data Mover hang
y Memory error on Data Mover
Failover will NOT occur as a result of the following:
y Removal of Data Mover from slot
y Manual reboot
If a primary Data Mover becomes unavailable, the standby assumes the MAC and IP addresses of the
primary Data Mover and provides seamless, uninterrupted access to its file systems.
-4
Description
The standby Data Mover is a hot-spare for the primary Data Movers, and can act as a spare for any
Data Mover in the system.
Data Mover Ratio
The recommended ratio is one standby for every three Data Movers. A two Data Mover NSxxx is preconfigured with server_3 as a standby for server_2. The NS704G can be configured as 3 primaries and
one standby, or 2 primaries and 2 standbys.
-5
Failover Policies
y Predetermined action
y Invoked by the Control Station when failure is detected
No communications between Control Station and Data Mover
If CS is down, cannot detect a DM failure and cannot invoke a failover
Policy   Action
Auto     The standby Data Mover immediately takes over the function of its primary.
Retry    The Control Station first tries to recover the primary Data Mover. If the recovery fails, the
         Control Station automatically activates the standby.
Manual   The Control Station shuts down the primary Data Mover and takes no other action. The
         standby must be activated manually. Default policy when using the CLI for configuration.
Failover policy
A failover policy is a predetermined action that the Control Station invokes when it detects a failover
condition. The failover policy type determines the action that occurs in the event of a Data Mover
failover.
Types of failover policies
The following are the three failover policies from which you can choose when you configure Data
Mover failover:
y Auto: The standby Data Mover immediately takes over the function of its primary. (default policy)
y Retry: The Celerra File Server first tries to recover the primary Data Mover. If the recovery fails,
the Celerra File Server automatically activates the standby.
y Manual: The Celerra File Server issues a shutdown for the primary Data Mover. The system takes
no other action. The standby must be activated manually.
-6
-7
VLAN 40
Network
Clients
Also prior to configuration, check the Ethernet switch to verify that the switch ports of the primary
Data Movers are assigned to the same VLANs as the standby Data Mover, unless VLAN tagging
(discussed later in the course) will be employed. In addition, verify that any EtherChannel
configuration related to the ports for the primary Data Movers is in place for the standby.
-8
After verifying that the standby Data Mover is free from any configurations and that the hardware
matches that of the primary, follow the steps below to configure Data Mover failover.
y Configure the initial Data Mover to standby.
Result:
The new standby Data Mover will reboot and assume the standby role.
y When the reboot is complete, configure additional primary Data Movers to use the same standby
(optional).
-9
Failover Configuration
To configure Data Mover failover
y Command syntax:
server_standby <primary_DM> -create mover=<standby_DM>
-policy {auto|manual|retry}
y Result:
server_2: server_3 is rebooting as standby
Command
server_standby <primary_DM> -create mover=<standby_DM> -policy
{auto|manual|retry}
Example
To assign server_3 as the standby for server_2, use the following command:
server_standby server_2 -create mover=server_3 -policy auto
Result
server_2: server_3 is rebooting as standby
- 10
Failover Configuration
To configure server_3 as the standby for server_2:
Select Data Movers > server_2 > Role= primary > Standby Movers= server_3
Failover Policy= auto > apply
- 11
It is important to test Data Mover failover to ensure that if needed, the Data Movers could properly
failover and clients can properly access the standby Data Mover. If any network or Operating Systems
environmental changes are made, ensure that the standby Data Mover(s) still provide client access to
their file systems.
- 12
y Example:
server_standby server_2 -activate mover
- 13
y Command:
server_standby <primary_DM> -restore mover
y Example:
server_standby server_2 -restore mover
- 14
Data Movers > highlight the server to restore > click Restore
This slide shows how to use Celerra Manager to restore a Data Mover after a failover.
- 15
You can delete a failover relationship at any time. To delete a failover relationship:
Remove relationship
server_standby server_2 -delete mover
Note: If the Data Mover is a standby for more than one primary, you must remove the relationship for
each Data Mover.
Set up the standby Data Mover as primary
server_setup server_3 -type nas
- 16
The interface on the left shows how to delete a standby relationship. The interface on the right shows
how to change the role of the standby Data Mover to primary.
- 17
Module Summary
Key points covered in this module are:
Failover is when a primary Data Mover fails and a
designated standby is activated in its place
The standby Data Mover assumes the identity of the
failed Data Mover
Network identity including MAC, IP addresses, routing and other
network configuration
Storage Identity: File systems
Service Identity: Shares and Exports
The key points covered in this module are shown here. Please take a moment to read them.
- 18
Closing Slide
- 19
Celerra ICON
Celerra Training for Engineering
Network Availability - 1
Revision History
Rev Number   Course Date     Revisions
1.0          February 2006   Complete
1.1          March 2006      Updates and enhancements
1.2          May 2006        Updated diagrams
Network Availability - 2
Network Availability - 2
Network Availability - 3
Network Availability - 3
Network Availability - 4
The objectives for this lesson are shown here. Please take a moment to read them.
Network Availability - 4
Virtual Device
10.127.50.12
Trunk
cge0
cge1
cge2
cge3
cge4
cge5
fge1
fge0
Data Mover
y Two or more network devices can be configured into a Virtual Device called a Trunk
EtherChannel
802.3ad Link Aggregation Control Protocol (LACP)
Network Availability - 5
Network Availability - 5
EtherChannel
y In addition to configuring the Data Mover, specific ports
on the Ethernet switch must be configured into an
EtherChannel
Combines physical ports into one logical port
Provides network availability - Failed connections redirected to
other ports
Does not provide increased client bandwidth
EtherChannel
Trunk
1
2
3
4
5
6
7
8
EtherChannel
9
10
11
12
13
14
15
16
Network
Clients
Network Availability - 6
EtherChannel
EtherChannel combines multiple physical ports (two, four, or eight) into a single logical port for the
purpose of providing fault tolerance for Ethernet ports and cabling. EtherChannel is not designed for
load balancing or to increase bandwidth.
For example, four physical ports can be combined into a single logical interface. If one of those
interfaces should fail, the traffic from the affected node can then be redirected through one of the other
physical interfaces within the EtherChannel.
Bandwidth
EtherChannel does not provide increased bandwidth from the client's perspective. Because each client
is connected only to a single port, the client does not receive any added performance. Any increased
bandwidth on the side of the channeled host (the Data Mover) is incidental. However, this is not an
issue because the objective of EtherChannel is to provide fault tolerance, not to increase aggregate
bandwidth.
Notes: For a complete discussion of the algorithms used, consult www.cisco.com and perform a search
on the term statistical load balancing.
IMPORTANT: Goal is fault tolerance, NOT increased bandwidth.
Network Availability - 6
Data
Mover
EtherChannel
Trunk
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
Network Availability - 7
Once an EtherChannel (or LACP aggregation) is configured, the Ethernet switch must make a
determination as to which physical port to use for a connection. Three statistical load distribution
methodologies are available on the Celerra; distribution by MAC address, by IP address, or by a
combination of IP address and TCP port.
MAC Address
The Ethernet switch will hash enough bits (1 bit for 2 ports, 2 bits for 4 ports, and 3 bits for 8 ports) of
the source and/or destination MAC addresses of the incoming packet through an algorithm. The result
of the hashing will be used to decide through which physical port to make the connection.
Keep in mind that traffic coming from a remote network will contain the source MAC address of the
router interface nearest the switch. This could mean that all traffic from the remote network will be
directed through the same interface in the channel.
IP address
The source and destination IP addresses are observed while determining the output port. IP is the default.
TCP
The source and destination IP addresses and TCP ports are observed while determining the output port.
Configuration
Statistical load distribution can be configured for the whole system by setting the LoadBalance=
parameter in the global or local parameters file. It can also be configured per trunk by using the
server_sysconfig command. Configuring load distribution on a per trunk basis overrides the entry in
the parameters file.
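A sketch of per-trunk configuration at creation time; note that the lb= option name used here is an assumption and should be verified against the server_sysconfig man page:

server_sysconfig server_2 -virtual -name trk0 -create trk -option "device=cge0,cge1 lb=tcp"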
Network Availability - 7
EtherChannel Setup
Step   Action                                                     Where
1      Configure the selected switch ports into an EtherChannel   Switch
2      Create the virtual (trunk) device                          Data Mover
3      Assign an IP interface to the virtual device               Data Mover
Network Availability - 8
Supported configurations
EMC Celerra supports the following EtherChannel configurations:
y A Data Mover can channel 2, 4, or 8 FE interface ports together
y Two Gigabit Ethernet ports may be channeled
Network Availability - 8
cge3
Data Mover
11
13
15
17
19
10
12
14
16
18
20
Network Availability - 9
Network Availability - 9
Network Availability - 10
Note: Since no load distribution method has been specified in this example, the default is IP.
Network Availability - 10
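The creation command itself is not reproduced on this slide; a hedged reconstruction, following the same server_sysconfig form used for the LACP example later in this module but without protocol=lacp (device names are examples):

server_sysconfig server_2 -virtual -name trk0 -create trk -option "device=cge0,cge1"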
Assigning an IP Address
To assign an IP address to a virtual device
y Command:
server_ifconfig server_x -create -Device
<virtual_device_name> -name <interface_name>
-protocol IP <ipaddr> <ipmask> <ipbroadcast>
y Example:
server_ifconfig server_2 -c -D trk0 -n trk0
-p IP 192.168.101.20 255.255.192.0
192.168.127.255
Network Availability - 11
Assigning an IP address
Once the virtual device has been created, use the server_ifconfig command to assign an IP address to
the virtual device. Be sure to use the name designated for the name parameter in the
server_sysconfig command as the device parameter in the server_ifconfig statement.
Example
In the command below, the -D trk0 parameter refers to the virtual device that was created (on the
previous page) using the -name trk0 parameter.
server_ifconfig server_2 -c -D trk0 -n trk0 -p IP 192.168.101.20 255.255.192.0 192.168.127.255
Network Availability - 11
Network Availability - 12
This slide shows how to configure an EtherChannel device using Celerra Manager.
Network Availability - 12
Assigning an IP Address
y Network > Interfaces > New
Network Availability - 13
This slide shows how to configure an IP address for the EtherChannel device using Celerra Manager.
Network Availability - 13
Network Availability - 14
Network Availability - 14
Comparison criteria: availability, misconfiguration protection, switch support, link speeds, duplex, and number of ports.
Duplex: Ethernet Channel - full or half; Link Aggregation - full only
Number of ports: Ethernet Channel - 2, 4, or 8; Link Aggregation - any number greater than 1 (see note below)
Network Availability - 15
Note: Although LACP supports any number of ports greater than 1, the Celerra supports a maximum
of 12.
Network Availability - 15
y Example:
server_sysconfig server_2 -v -n trk0 -c trk -o
"device=cge0,cge1,cge4,cge5 protocol=lacp"
Network Availability - 16
LACP considerations:
y An LACP link can be created with any number of Ethernet devices, up to the maximum of 12 per
virtual device.
y Only Full Duplex Ethernet ports can be used to create the link. ATM, FDDI or any other types of
ports are not supported.
y All Data Mover ports used must be the same speed. If a mixture of port speeds is given, the Data
Mover will choose the greatest number of ports at the same speed. In case of a tie, the fastest ports
will be chosen. For example, if you used the following device=cge0,cge1,fge0 when setting up
the virtual device ports, cge0 and cge1 will be used despite the fact that the fge0 port would be
faster. The primary goal is higher availability.
y Only physical ports on the Data Mover can be used to create the link.
y Although multiple links are joined, no one client will gain an advantage from this configuration
with regards to network speed or throughput.
Verifying that ports are up and running
One way to verify all of the ports are up and running would be to run show port lacp-channel statistic
(on a Cisco Systems switch). Each time the command is run you can see that the LACPDU packet
reports have changed for active ports.
Monitoring the number of Ibytes and Obytes
Use the server_netstat -i command to monitor the number of Ibytes and Obytes for each port.
Network Availability - 16
Network Availability - 17
This slide shows how to configure an LACP device using Celerra Manager.
Network Availability - 17
Network Availability - 18
The objectives for this lesson are shown here. Please take a moment to read them.
Network Availability - 18
Virtual Device
10.127.50.12
FailSafe
cge0
cge1
cge2
cge3
cge4
cge5
fge1
fge0
Data Mover
Network Availability - 19
FSN
Fail Safe Network (FSN) is a Celerra virtual network interface feature. Like EtherChannel, FSN
provides fault tolerance beyond the physical Data Mover, providing redundancy for cabling and
switch ports.
Unlike EtherChannel, FSN can also provide fault tolerance in the case of switch failure. While
EtherChannel provides redundancy across active ports (all ports in the channel carry traffic), FSN is
comprised of an active and a standby interface. The standby interface does not send or respond to any
network traffic.
Switch independence
FSN operation is independent of the switch. Recall that EtherChannel requires an Ethernet switch that
supports EtherChannel. This is not the case with FSN because it is simply a combination of an active
and standby interface with failover being orchestrated by the Data Mover itself. Additionally, the two
members of the FSN device can also be connected to separate Ethernet switches.
Network Availability - 19
Virtual Device
10.127.50.12
FailSafe
Trunk
10.127.60.12
10.127.70.12
cge0
cge1
cge2
cge3
cge4
cge5
fge1
10.127.80.12
fge0
Data Mover
Network Availability - 20
Network Availability - 20
fsn0
Data Mover
cge0
cge1
cge2
cge3
Switch
Network
Network Availability - 21
This slide shows an FSN device that consists of two NIC ports (cge0 and cge1) on the same Data
Mover, connected to the network through the same switch.
The operation is as follows:
1. If NIC cge0 is the active connection, then all traffic through the FSN device flows through that
port and to the network.
2. If the link signal fails (for example, because of a physical hardware disconnection), the link
automatically fails over to the next NIC port in the FSN device (in this example, cge1), using the
same IP and MAC address combination. All traffic then flows through cge1.
Network Availability - 21
Switch
Data Mover
fsn0
trk0
cge0
cge1
ISL
trk1
cge2
cge3
trk1=cge2, cge3
(standby)
Switch
Network
Network Availability - 22
This slide shows an FSN device that consists of an EtherChannel called trk0 (comprised of cge0 and
cge1) and another EtherChannel called trk1 (comprised of cge2 and cge3). The two EtherChannels
connect to different switches. In this case, the active device, trk0, will be used for all
network traffic unless both paths in that EtherChannel fail, or the switch fails. If that occurred,
trk1 with its associated switch would take over network traffic for the Data Mover.
Network Availability - 22
Creating an FSN
y When creating an FSN, if you do not specify a primary device
Both devices are considered equal
No fail back
y Command:
server_sysconfig server_x -virtual -name <fsn_name> -create fsn -option device=<dev>,<dev>
y Example:
server_sysconfig server_2 -v -n fsn0 -c fsn -o device=trk0,trk1
Network Availability - 23
Creating a FailSafe Network device without a primary device defined is the recommended approach.
For example, to create a FailSafe Network device called fsn0 using two EtherChannels called trk0 and
trk1:
server_sysconfig server_2 -v -n fsn0 -c fsn -o device=trk0,trk1
Network Availability - 23
y Command:
server_sysconfig server_x -virtual -name <fsn_name> -create fsn -option primary=<primary_dev> device=<standby_dev>
y Example:
server_sysconfig server_2 -v -n fsn0 -c fsn -o primary=cge0 device=cge4
Network Availability - 24
For example, to create an FSN for server_2 named fsn0 with the primary device defined as cge0 and
the standby device as cge4:
server_sysconfig server_2 -v -n fsn0 -c fsn -o primary=cge0 device=cge4
Note: The Celerra Manager screen which can be used to configure Fail Safe Network with an optional
primary device is the same as the previous slide showing the configuration without a primary device.
Network Availability - 24
Network Availability - 25
This slide shows how to configure an FSN device without a primary device defined using Celerra
Manager.
Network Availability - 25
Network Availability - 26
Network Availability - 26
Network Availability - 27
This slide shows how to list all virtual devices using Celerra Manager.
Network Availability - 27
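Virtual devices can also be listed from the CLI; a minimal sketch (assuming the Data Mover is server_2):
server_sysconfig server_2 -virtual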
y Example:
server_sysconfig server_2 -v -d fsn0
Network Availability - 28
Network Availability - 28
Network > Devices > highlight the virtual device(s) to delete > click Delete
Network Availability - 29
This slide shows how to delete virtual devices using Celerra Manager.
Network Availability - 29
Network Availability - 30
The objectives for this lesson are shown here. Please take a moment to read them.
Network Availability - 30
y Ethernet switch
Layer 2
Sends traffic to specific port
100Mbps/Full duplex support
Network Availability - 31
Network Availability - 31
y Example:
00-D0-59-23-E3-AE   Port 3/32

Ethernet frame layout:
Preamble | Dest MAC | Source MAC | Type | Data (46-1500 bytes) | FCS
Network Availability - 32
Example:
For example, Node A (Port 3/32, address 00-D0-59-23-E3-AE) sends a packet to Node Z (Port 2/11,
address 00-D0-59-23-D0-3C). The switch records that 00-D0-59-23-E3-AE can be accessed via port
3/32, and, since it does not yet know where Node Z is located, it sends the packet to all ports much as a
hub would. When Node Z replies to Node A, the switch reads the source address from Node Z's
packet and records that 00-D0-59-23-D0-3C can be accessed via port 2/11. This time, however, the
switch knows that Node A (00-D0-59-23-E3-AE) is located off of port 3/32, so Node Z's reply is sent
only to Node A's port. It is only a short time before the switch table has records for all of the nodes
connected to the switch.
Network Availability - 32
VLANs
y Virtual Local Area Network (VLAN)
y Managed Switches typically have the capability of being
configured to support VLANs
Groupings of switch ports
Divides large number of ports
Confines broadcasts
Contributes to security
Network Availability - 33
VLANs
VLANs (Virtual Local Area Networks) are a method of grouping switch ports together into a virtual
LAN as the name would indicate. Switch ports, as well as router interfaces, can be assigned to VLANs.
Using VLANs
VLANs can be used to:
y Break up a very large LAN into smaller virtual LANs. This may be useful to control network
traffic such as broadcasts. (Another name often used for a VLAN is a Broadcast Domain.)
Although VLANs are not a security vehicle unto themselves, they can be used as part of an overall
security scheme in the network.
y Combine separate physical LANs into one virtual LAN. For example, the sales staff of an
organization is physically dispersed along both the east and west coasts of the United States
(WAN), yet all of the network clients have similar network needs and should rightly be in the same
logical unit. By employing VLANs, all of the sales staff can be in the same logical network, the
same virtual LAN.
Assigning VLANs
Typically, each IP network would be assigned to a separate VLAN. Additionally, in order to transmit
from one VLAN to another, the traffic would need to go through a router. In line with this thought,
each router interface or sub-interface can be assigned to a VLAN.
Network Availability - 33
Example of VLANs
[Slide graphic: an Ethernet switch with ports 10-20; VLAN 10 (Admin Net, 192.168.64.0), VLAN 20 (MS clients, 192.168.144.0), and VLAN 30 (UNIX clients, 192.168.160.0)]
Network Availability - 34
Example of VLANs
In the example shown on the slide, there are four VLANs. One is the default VLAN, VLAN 1. Any
port not assigned to a VLAN is automatically a member of this VLAN. The other three VLANs are
unique. VLAN 10 is the administrative VLAN. This VLAN contains all of the servers, such as NT
servers and the NIS server, as well as the Control Stations and Data Movers of the Celerra File Servers.
The hosts in this VLAN are in the 192.168.64.0/18* network. All of the Microsoft Windows clients are
in the 192.168.144.0/20 network and are assigned to VLAN 20. VLAN 30 is the 192.168.160.0/20
network; this is the location of all of the UNIX workstations.
* The IP addressing displayed above is a different method of expressing an address and mask
combination. The number following the slash describes the number of network bits in the IP address.
Thus, 192.168.64.0/18 states that the network portion of 192.168.64.0 is the first 18 bits. This
correlates to a subnet mask of 255.255.192.0.
Network Availability - 34
[Slide graphic: two Ethernet switches, each with ports 10-20; VLAN 20 (MS clients, 192.168.144.0) and VLAN 30 (UNIX clients, 192.168.160.0) span both switches]
Network Availability - 35
To transfer frames between switch ports in the same VLAN on different switches, separate InterSwitch
Links per VLAN could be used; however, trunking provides a more efficient use of ports.
Network Availability - 35
VLAN Trunking
y Connecting InterSwitch VLANs wastes ports
y VLAN Trunking allows
multiple VLANs to share one path
Assign one port as Trunk port
Allow VLANs on trunk
Network Availability - 36
Connecting VLANs
As previously mentioned, VLANs often span multiple switches. In the simplest form, this is set up by
connecting each VLAN on each switch to a port assigned to the same VLAN on every other switch.
While this will function, costly switch ports are used, making it impractical. For example, to connect
four VLANs across only two switches would require the use of four ports from each switch (or 8
ports). Connecting the same four VLANs across four switches would require 12 ports on each switch
(or 48 ports).
VLAN trunking
An alternative to this simple method is VLAN Trunking. With VLAN Trunking, a single port or multi-port channel is set up to allow traffic from multiple VLANs onto the port. These packets are
encapsulated using a trunking encapsulation protocol such as ISL (InterSwitch Link) or DOT1Q (a.k.a.
802.1Q) before being transmitted. When the packet reaches the destination switch, the encapsulation is
stripped and the packet can be forwarded into the appropriate VLAN. One important requirement is
that the same encapsulation protocol must be employed on each end of the trunk (whether the device
on the other end is another switch or a router interface).
Notes: Connecting VLANs without trunking wastes ports.
Network Availability - 36
Ethernet frame layout with VLAN tag:
Preamble | Dest MAC | Source MAC | VLAN ID | Type | Data (46-1500 bytes) | FCS
Network Availability - 37
VLAN ports
The following three port types on the Ethernet switch relate to VLANs:
y The typical port to which a node would be connected is referred to as an access port, which can be
assigned to one, and only one, VLAN.
y Trunk ports are used primarily for interswitch connections or connections to a router.
y The hybrid port can act as either an access or trunk port.
VLAN tag
How does a switch or router know to which VLAN a packet belongs? Assuming that the administrator
has assigned access ports to various VLANs, the switch will then modify packets which enter through
access ports. A new field is inserted into the frame beside the source MAC address field; this new field
identifies the VLAN to which the packet belongs. This is referred to as the VLAN tag.
Packets arriving on trunk ports do not have VLAN tags added to them because, as you will see later, these packets will
already have the tag. The packets from the hybrid port are tagged only when a tag is not already
present.
Network Availability - 37
[Slide graphic: two Ethernet switches, each with ports 10-20, connected to each other by an 802.1q trunk port on each switch]
Network Availability - 38
Network Availability - 38
[Slide graphic: a standby Data Mover connected to an Ethernet switch through 802.1q trunk ports, serving network clients on multiple VLANs]
Network Availability - 39
VLAN tagging
As discussed earlier in this module, an Ethernet network can implement VLANs to help manage
network traffic. Ethernet switches help to do this by adding a VLAN tag frame to network packets. A
VLAN-tagged frame carries an explicit identification of the VLAN to which it belongs. It carries this
non-null VLAN ID within the frame header. The tagging mechanism implies a frame modification. For
IEEE 802.1Q-compliant switches, the frame is modified according to the port type used (access port,
trunk port, or hybrid port).
Implementing VLAN tagging
Celerra supports the ability to add the VLAN tag itself. This feature is for use with Gigabit Ethernet
NICs only. Implementing VLAN tagging effectively allows the physical port to which the Data
Mover is connected to belong to several VLANs at the same time. VLAN tagging
support enables a single Data Mover with Gigabit Ethernet ports to be the standby for multiple
primary Data Movers on different VLANs with Gigabit Ethernet ports.
In addition to Data Mover configuration, the physical port that the Data Mover port is connected to
must be configured as a trunk port using the 802.1q protocol by the switch administrator.
Network Availability - 39
Network Availability - 40
Network Availability - 40
y Examples:
server_ifconfig server_2 admin vlan=10
server_ifconfig server_2 sales vlan=20
Network Availability - 41
Shown here are the commands necessary to assign a VLAN tag to an interface, and to remove the tag.
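To remove the tag, a minimal sketch (assuming the interface is named admin and that setting the VLAN ID back to 0 clears the tag; confirm against the server_ifconfig man page for your code level):
server_ifconfig server_2 admin vlan=0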
Network Availability - 41
Network Availability - 42
This slide shows how to assign a VLAN tag while creating an interface.
Note: By default, the name of the interface will be the IP address with underscores. For example, the
interface shown on this slide will be named 10_127_56_109. Please note the new field Name: which
allows the name of the interface to be configured.
Network Availability - 42
Module Summary
y EtherChannel combines multiple physical ports (2, 4 or 8) into a
single logical port for the purpose of providing fault tolerance
There are three methods of statistical load distribution available on the
Celerra: MAC address, IP address (default), and a combination of TCP
port and IP address
Network Availability - 43
Network Availability - 43
Closing Slide
Network Availability - 44
Network Availability - 44
Celerra ICON
Celerra Training for Engineering
Revision History
Rev Number
Course Date
Revisions
1.0
February 2006
Complete
1.2
May 2006
The back-end of a Celerra Network Server consists of one or more CLARiiON and/or Symmetrix
storage systems. It is important that you understand, and are able to communicate, the configuration
requirements when setting up and supporting a Celerra Network Server. This module provides an
overview. For more detailed training on CLARiiON and Symmetrix, refer to Knowledgelink.
[Slide graphic: back-end positioning by customer requirements (capacity, availability, scalability, advanced features) versus customer investment, from the CLARiiON CX series at the mid-tier up through DMX/DMX-2 to DMX-3]
The Celerra Network Server supports both CLARiiON and Symmetrix back-ends. At the high end we
have the Symmetrix DMX-3. The DMX-2 continues to be manufactured and sold for customers who don't
require the scalability that the DMX-3 offers. At the mid-tier, EMC continues to offer and expand the
CLARiiON family of products. Today, the CLARiiON is the most common back-end configuration.
Celerra Volumes
y When a Celerra is first installed, a minimum of six LUNs are either
manually or automatically configured for the Control Volumes
y Additional LUNs must be configured in the Symmetrix or CLARiiON
back-end storage so that user-defined file systems may be
configured
LUN         Size    Contents
00          11GB    DART, individual Data Mover configuration files
01          11GB    Data Mover log files
02          2GB     Reserved (not used on NS-series); Linux on Control Station (CS0) with no local HDD
03          2GB     Reserved (not used on NS-series); Linux on Control Station (CS1) with no local HDD
04          2GB     NAS configuration database (NASDB)
05          2GB     NASDB backups, dump file, log files, etc.
16 (10hex)  varies  User file systems
Back-end Storage Requirements - 5
When a Celerra is first installed, a minimum of six LUNs are created either manually, or automatically
through the install scripts. The table above displays all of the Celerra system LUNs, along with their
size and contents. Please note that LUNs 02 and 03 are not currently used for the Celerra NS series.
Earlier Celerra models, in which the Control Station had no internal hard drive, would use these LUNs
to hold the Linux installation. Additional LUNs must be configured for user file system data.
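Once the LUNs are visible to the Celerra, the resulting disk volumes can be reviewed from the Control Station; a minimal sketch:
nas_disk -list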
[Slide graphic: CLARiiON back-end with two Storage Processors and Link Control Cards (LCCs) in each disk enclosure]
Each storage processor includes one or two CPUs and a large amount of memory. Most of the
memory is used for read and write caching. Read and write caching improve performance in two
ways:
y For a read request - if a read request seeks information that's already in the read or write
cache, the storage system can deliver it immediately, much faster than a disk access can.
y For a write request - the storage system writes updated information to SP write-cache
memory instead of to disk, allowing the server to continue as if the write had actually
completed. The write to disk from cache occurs later, at the most expedient time. If the request
modifies information that's in the cache waiting to be written to disk, the storage system
updates the information in the cache before writing it to disk; this requires just one disk access
instead of two. Write caching particularly helps write performance, an inherent problem
for RAID types that require writing to multiple disks.
CLARiiON's modular architecture allows the customer to add drives as needed to meet capacity
requirements. When more capacity is required, additional disk enclosures containing disk modules
can be easily added.
LCCs, or Link Control Cards, are used to connect disk modules. In addition, the LCC monitors the
FRUs within the shelf and reports status information to the storage processor. The LCCs contain
bypass circuitry that allows continued operation of the loop in the event of a port failure.
[Slide graphic: integrated back-end components, DAE2, SPE, and SPS]
y Storage Processors used in Integrated systems do not have optical FC connections
y AUX ports are used
Copper connections
The slide above outlines the general steps that are performed when connecting a CLARiiON to a Celerra. Many
of these steps may be performed automatically during the installation. However, in some cases, such
as integrating into an existing SAN environment, these steps must be performed manually.
[Slide graphic: integrated configuration, two Data Movers and the Control Station cabled directly to the two CLARiiON Storage Processors]
One of the simplest environments is the integrated configuration where the Fibre Channel ports on the
Data Mover connect directly into the Fibre Channel ports on the Storage Processors. Depending on the
environment, the array may be dedicated to the Celerra, or available storage processor ports may be
used to connect host systems, either directly or through a Fibre Channel fabric.
[Slide graphic: Celerra and other clients attached to storage through a SAN]
Storage Area Networks provide much greater flexibility than direct attached storage. They allow
greater distance between hosts and the array, and allow the sharing of storage ports by multiple hosts.
An enterprise-level SAN allows the consolidation of block-level storage and file-level storage on the
same set of arrays. The benefit is efficiency and flexibility in storage allocation.
SANs are typically implemented using Fibre Channel switches. One or more interconnected switches
are called a fabric. Fibre Channel switches work similarly to Ethernet switches; however, the
protocols employed are completely different.
In a Celerra environment, the SAN is what allows multiple Data Movers access to the same file systems
and provides the flexibility to move a file system from one Data Mover to another for load balancing
and availability.
[Slide graphic: Data Movers connected through two Fibre Channel switches to the two CLARiiON Storage Processors]
The physical connections are typically made using multimode fiber optic cables. Each Fibre Channel
port on each Data Mover connects to an available port on the switch, as does each port on the storage
array.
An ideal configuration is designed and implemented with no single points of failure. That is, any
one component can fail and the Data Movers still have access to the storage. This requires the following:
y Two Fibre Channel HBAs per Data Mover (standard configuration)
y Two Fibre Channel switches
y Two Storage Processors with two available ports each
While the ideal configuration includes two Fibre Channel switches with independent fabric
configurations, SANs are often implemented with a single switch because of the high availability features
that are built in.
Zoning Requirements
y Fibre Channel SANs provide flexible connectivity where any port in
the fabric is capable of seeing any other port
y Zoning is configured on the Switch for performance, security, and
availability reasons to restrict which ports in a fabric see each
other
Switch 1
Zone1 - DM2-0 to SPA-0
Zone2 - DM3-0 to SPB-1
Switch 2
Zone1 - DM2-1 to SPB-0
Zone2 - DM3-1 to SPA-1
[Slide graphic: Data Movers DM2 and DM3, each with ports 0 and 1, connected through switches FC-SW1 and FC-SW2 to SP-A and SP-B; the Control Station is also shown]
By design, Fibre Channel switches provide flexible connectivity where any port in the fabric is capable
of seeing any other port. This can lead to performance, security, and availability issues. Zoning is a
feature of most switches that restricts which ports in the fabric see each other. This eliminates any
unnecessary interactions between ports.
In the example above, each switch is a separate fabric and is thus configured separately.
An alternate zoning configuration might look like this:
Switch 1
Zone1 - DM2-0 to SPA-0
Zone2 - DM2-0 to SPB-1
Zone3 - DM3-0 to SPA-0
Zone4 - DM2-0 to SPB-1
Switch 2
Zone1 - DM2-1 to SPA-0
Zone2 - DM2-1 to SPB-1
Zone3 - DM3-1 to SPA-0
Zone4 - DM2-1 to SPB-1
Brocade Zoning
y Output from the ZoneShow command
y Zone Configuration is a set of zones
y Best Practice is single initiator zoning with ports defined by WWPN
y Example shown is of the zoning configuration created by the auto-config script

Switch118:admin> zoneshow
...
Effective configuration:
 cfg:   Celerra_Gateway_Config
 zone:  jwcs_DM2_P0_SPA_P0_WRE00022000774
                50:06:01:60:30:60:2f:3b
                50:06:01:60:00:60:02:42
 zone:  jwcs_DM2_P0_SPB_P0_WRE00022000774
                50:06:01:60:30:60:2f:3b
                50:06:01:68:00:60:02:42
 zone:  jwcs_DM2_P1_SPA_P1_WRE00022000774
                50:06:01:61:30:60:2f:3b
                50:06:01:61:00:60:02:42
 zone:  jwcs_DM2_P1_SPB_P1_WRE00022000774
                50:06:01:61:30:60:2f:3b
                50:06:01:69:00:60:02:42
...
Above is an example of the zoning configuration that was auto-generated during a CX704G
installation. Note that the members of a zone are defined by the World Wide Port Names (WWPN)
of the Data Mover HBA and the SP ports. Also, each zone only includes one initiator device (HBA).
The output above was the result of the Brocade ZoneShow command. The output was abbreviated to
only show the effective zone configuration for one Data Mover.
Supported RAID Types   AVM Pool
RAID5 4+1              clar_r5_performance
RAID5 8+1              clar_r5_economy
RAID1                  clar_r1
RAID3 4+1              clarata_r3
RAID3 8+1              clarata_r3
RAID5 6+1              clarata_archive
Available HLU: 16+
Back-end Storage Requirements - 14
Back-end Storage Requirements - 14
Celerra systems with integrated CLARiiON storage support pre-defined, shelf-by-shelf configuration
templates. These templates, along with the setup_clariion command, can build user/data LUNs
on the existing RAID groups. For example, a 4+1 or 8+1 RAID5 group will have two user/data LUNs
created on it.
Although the Gateway systems do not support the scripted pre-defined template configuration, the
supported configurations used manually, in any order and mixed throughout the CLARiiON.
Supported CLARiiON RAID configurations for data LUNs
The table above displays supported CLARiiON RAID configurations supported by Celerra Network
Server. When you add a supported RAID group to the Celerra configuration, the storage will be added
to a Celerra AVM Storage Pool. Storage is allocated to Celerra file systems from these storage pools.
Automatic Volume Manager
The Automatic Volume Manager (AVM) feature of the Celerra Network Server automates volume
creation and management. By using AVM you can automatically create and expand file systems.
Storage Pools
A storage pool is a container, or pool, of disk volumes. AVM storage pools configure and allocate
contained storage to file systems.
The Celerra Automatic Volume Manager creates file systems for user data based upon defined storage
pools. Each storage pool is designed for a particular performance-to-cost requirement for data storage.
These storage pools are defined by storage profiles, or sets of rules, related to the type of RAID array
used.
This table maps a disk group type and shelf-by-shelf templates to a storage profile, associating the
RAID type and the storage space that results in the Automatic Volume Management (AVM) pool. The
storage profile name is a set of rules used by AVM to determine what type of disk volumes to use to
provide storage for the pool.
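For reference, the configured pools and their available capacity can be checked from the Control Station; a minimal sketch (the pool name is taken from the table above):
nas_pool -list
nas_pool -size clar_r5_performance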
A RAID Group is a collection of related physical disks. One, or as many as 128, LUNs may be created
from a RAID Group. This screen shows the dialog for configuring a RAID Group.
The user needs to specify how many disks are to be reserved; the display will change to indicate
which RAID types are supported by that quantity of disks. In addition, the user may choose a decimal
ID for the RAID Group. If none is selected, the storage system will choose the lowest available
number.
The user must either allow the storage system to select the physical disks to be used, or may choose to
select them manually. Note that the storage system will not automatically select disks 0,0,0 through
0,0,4; they may be selected manually by the user. These disks contain the CLARiiON reserved areas,
so they have less capacity than other disks of the same size.
Other parameters that may be set include:
y Expansion/defragmentation priority - Determines how fast expansion and defragmentation occur.
Values are Low, Medium (default), or High.
y Automatically destroy - Enables or disables (default) the automatic destruction of the RAID Group
when the last LUN in that RAID Group is unbound.
Maximum number of RAID Groups per array = 240
Number of disks per RAID Group: RAID 5 = 3-16 disks, RAID 3 = 5 or 9 disks, RAID 1 = 2 disks,
RAID 10 = 2, 4, 6, 8, 10, 12, 14, or 16 disks. Remember, Celerra Best Practices specify the number of
disks per RAID Group.
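For reference, a RAID Group can also be created from the Navisphere CLI; a hedged sketch (the SP address, RAID Group ID, and disk positions in bus_enclosure_disk notation are illustrative assumptions, not values from this course):
naviseccli -h 10.1.1.10 createrg 10 0_0_5 0_0_6 0_0_7 0_0_8 0_0_9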
Binding a LUN
y Best Practice is to configure a few large
LUNs rather than many small LUNs
setup_clariion script creates two LUNs
per RAID group for FC disks
Spread across SPs
When binding LUNs, the user must select the RAID Group to be used and, if this is the first LUN
being bound on that RAID Group, the RAID type. If a LUN already exists on the RAID Group, the
RAID type has already been selected and cannot be changed.
The size of a LUN can be specified in Blocks, MB, GB, or TB. The maximum LUN size is 2 TB. The
maximum number of LUNs in a RAID Group is 128.
In the example above, we specified that two LUNs be created using all available capacity in the RAID Group
and that the LUNs be distributed across both Storage Processors for load balancing purposes.
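The equivalent bind operation from the Navisphere CLI might look like the following hedged sketch (the SP address, LUN number, RAID Group ID, and capacity are illustrative assumptions):
naviseccli -h 10.1.1.10 bind r5 16 -rg 10 -cap 500 -sq gb -sp a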
After connecting the cables to the system and configuring zoning, verify the connections between the
Data Movers and the array. As part of the normal Fibre Channel Port Login (PLOGI) process, the
CLARiiON creates Initiator Records defining the connections. The Initiator Name is the WWPN of the
Fibre Channel HBA on the Data Mover. An important field is Logged In. This indicates the current
state of the connection. If the entry is missing or Logged In is No, this indicates a cable or zoning
problem.
Registration is normally performed by the Navisphere Host Agent in a typical open systems server
environment; however, the Celerra Data Mover does not run the host agent. During the install, the
Celerra auto-generate script registers the HBAs. However, in some environments, it may
be necessary to do this manually. To manually register an HBA connection, select Group Edit and the
dialog on the following page appears.
On a NS system, the WWPN can be used to identify the Data Mover and port. The 24th and 25th digit
can be interpreted as follows:
y 60 = Data Mover 2 Port 0
y 61 = Data Mover 2 Port 1
y 68 = Data Mover 3 Port 0
y 69 = Data Mover 3 Port 1
The example above is an NS704G with four Data Movers.
Register HBA
y The Data Mover does not run the Navi Host Agent so it is
therefore necessary to manually register HBAs
Associates a name with the WWN of the DM Fibre Channel HBA
Defines other attributes of connection
Registration typically associates a hostname and IP address of a host with the WWPN of the Fibre
Channel HBA and also sets other attributes of the connection.
In a typical open systems host environment, all HBAs for a host are registered together and assigned
the same name. With Celerra, the auto-generate script that runs during install registers each HBA
separately. For proper operation, it is important that the Initiator Information is set as shown above in
the example:
Initiator Type = CLARiiON Open
Failover Mode = 0
Array CommPath = Disabled
Unit Serial Number = Array
The configuration object used for assigning LUNs to hosts is called a Storage Group. Basically you
create a Storage Group, add LUNs and connect hosts. When a host is connected to a Storage Group,
it will have full read/write access to all LUNs in the Storage Group.
When creating a Storage Group, the software requires only a name for the Storage Group. All other
configuration is performed after the Storage Group is created.
A name supplied for a Storage Group is 1-64 characters in length. It may contain spaces and special
characters, but this is discouraged. After clicking OK or Apply, an empty Storage Group, with the
chosen name, is created on the storage system.
To assign LUNs, right click on the Storage Group, select properties and the LUNs tab. The LUNs tab
is used to add or remove LUNs from a Storage Group, or verify which are members. The Show LUNs
option allows the user to choose whether to only show LUNs which are not yet members of any
Storage Group, or to show all LUNs.
When a LUN is added to a Storage Group, it is automatically assigned the next available SCSI address,
starting with address 00. Use caution here, as the address that is assigned automatically is not apparent
unless you scroll to the right in the Selected LUNs pane.
The Celerra Network Server requires specific LUN addresses for system LUNs. At the time a LUN is
added to a Storage Group, set its address by highlighting the LUN, clicking the Host ID field, and choosing the host ID
from the dropdown list. If a LUN was previously assigned to a Storage Group and the address must be
changed, it first must be removed from the Storage Group and re-added.
If LUN addressing is not set up in accordance with the defined rules, it is very likely that the
installation will fail. If, after the system has been in production, the LUN addressing is modified (e.g.,
when adding storage to the array for increased capacity) in a way that does not comply with these
rules, the Data Movers will likely fail upon the subsequent reboot.
The Hosts tab allows hosts to be connected to, or disconnected from, a Storage Group. Connecting a
host provides that host with full read/write access to the LUNs in the Storage Group.
The procedure here is similar to that used on the LUNs tab: select a host, then move it by using the
appropriate arrow. In most stand-alone host environments, only a single host is added to the Storage
Group, but because a Celerra Network Server is actually a cluster, all HBA connections for all Data
Movers are connected.
After the LUNs have been bound, they must be added to the Celerra database before they can be used
for a file system. LUNs are added to the Celerra database from the CLI or GUI.
To add LUNs to the database from CLI:
server_devconfig ALL -create -scsi -all
To add LUNs to the database from Celerra Manager, navigate to Storage > Systems and click the
Rescan button.
Note: the undocumented command nas_diskmark is called by both the server_devconfig
command and the Celerra Manager Rescan. This command scans for new devices and marks
newly discovered disks by physically writing a unique disk ID as well as the Celerra ID on the physical
media and records the information in the configuration database.
Next we will look at the Celerra connectivity requirements for a Symmetrix back-end. While the steps
and requirements are very similar, the configuration process of a Symmetrix is very different. We will
start the discussion by reviewing the Symmetrix Architecture at a high level.
[Slide graphic: Symmetrix architecture, front-end Channel Adapters, Shared Global Memory (cache), and back-end Disk Adapters]
All members of the Symmetrix family share the same fundamental architecture. The modular hardware
framework allows rapid integration of new storage technology, while supporting existing
configurations.
There are three functional areas:
y Shared Global Memory - provides cache memory
y Front-end - the Symmetrix connects to the host systems using Channel Adapters, a.k.a. Channel
Directors. Each director includes multiple independent processors on the same circuit board, and
an interface-specific adapter board. Celerra Data Movers connect to the storage through the front-end.
y Back-end - how the Symmetrix controls and manages its physical disk drives, through Disk
Adapters, a.k.a. Disk Directors. Like front-end directors, each director includes multiple independent
processors on the same circuit board.
What differentiates the different generations and models is the number, type, and speed of the various
processors, and the technology used to interconnect the front-end and back-end with cache.
[Slide graphic: Direct Matrix Architecture with eight 64 GB global memory boards, each with point-to-point connections to every director]
Today, the Symmetrix employs a Direct Matrix Architecture. The real advantage of the Direct Matrix
Architecture cannot be appreciated until you visualize it as in the picture above. The Global Memory
technology supports multiple regions and 16 connections on each global memory director. In a fully
configured Symmetrix system, each of the sixteen directors connects to one of the sixteen memory
ports on each of the eight global memory directors. These 128 individual point-to-point connections
facilitate up to 128 concurrent global memory operations in the system.
Each memory board has sixteen ports with one connection to each director. Each region on a board can
sustain a data rate of 500 MB/s read and 500 MB/s write. Therefore, a full configuration with 8 memory
boards would have a maximum internal system throughput of 128 GB/s.
Each front-end and back-end director has direct connections to memory, allowing each director to
connect to each memory board. Each of the four processors on a director can connect concurrently to
different memory boards.
Internally, the communications protocol between the directors and memory is Fibre Channel over
copper-based physical differential data connections.
As in all Celerra configurations, the first 16 host LUN addresses are reserved. Therefore, the first
available data LUN host address is 0x010 (16 in decimal).
All LUNs, both system and data, require redundant paths for high availability. Each LUN must be
mapped through redundant FA ports and accessed from the Celerra via redundant Fibre Channel
Fabrics.
The user LUN requirements for a Celerra with a Symmetrix storage subsystem are provided by
Symmetrix Service Readiness (SSR, formerly C-4). Follow these rules when configuring user data
LUNs for the Celerra Network Server on a Symmetrix.
Before being implemented, the storage configuration must be CCA-approved.
The specific configuration requirements are based on the NAS code levels and the Enginuity levels on
the Symmetrix. Always reference the latest requirements on the SSR website.
[Slide graphic: the IMPL.BIN configuration file is edited in PC memory on the service processor, stored on the PC hard disk, and pulled from or loaded to the system directors]
The Symmetrix is configured using a static configuration file called the IMPL.bin. The file is created
initially using SymmWin and loaded into each director in the Symmetrix. When modifying a
configuration, the current IMPL.bin file is pulled from the Symmetrix and edited using SymmWin.
SymmWin
y Graphical-based tool for
configuring and monitoring a
Symmetrix System
Runs locally on the service processor
May also run on stand-alone PC
Logging in to SymmWin
y Click on the
green unlock
ICON
y User type
determines
level of access
Access level
may be
changed after
login
Most
operations can
be performed
as CE
Password =
SADE
Click on the Green unlock and enter the user type, user name, and password and press ENTER or click
on the green check mark.
There are a number of different login levels allowing varying levels of access.
y Symmetrix
y Software Engineer (SE)
y Software Assistance Center
y Customer Engineer (CE) CE Password = sade
y OEM
y TS
y RTS
y Product Support Engineer (PSE)
y Engineering
y Production
y Configuration group
y QA group
y PC Group
Access level may be changed after initial login. From the main SymmWin menu, select File and
Access level to change access rights. You must have a valid password. Advanced access password =
zehirut.
After Login
y Title bar
changes to
reflect User
Name and
group affiliation
y The code level
is the SymmWin
level and may
not be the same
as what is
running on the
system
The default install directory is O:\EMC\<Serial Number>\symmwin. To start SymmWin, click on the
symmwin.exe file.
After successful login, the title bar will reflect the user name and group affiliation. Depending on the
group you will have more or less capabilities and the icons will vary.
The code level is what is loaded on the service processor and may not be the same as what is running
on the system.
Configuration information is stored in the IMPL.bin file. This is loaded into the directors during the
IMPL process and is also stored locally on the service processors. When viewing the configuration, it
is important that you select IMPL from System in order to get the current view of the configuration.
Viewing Configuration
y Select
Configuration
y Choose
IMPL Initialization
y Verify that FBA is
enabled
After loading the IMPL.bin file, SymmWin can be used to graphically display the system hardware
and logical configuration.
One of the first requirements for configuring a Symmetrix for Celerra is that FBA is enabled.
Director Map
y Selecting DirMap from the Configuration dropdown
displays the locations and types of directors
Back-end
DF Disk
Adapter Fibre
Front-end
FA Fibre
Channel
EA ESCON
EF FICON
SE iSCSI
The DMX-3 card cage has 24 slots. Normally the DAs occupy the outside slots and the host directors
occupy the inside slot positions. Reference the diagram below to relate the director diagram reported
in SymmWin to the physical card cage. When looking at the director map, remember the director
number and slot numbers are not the same. Director 1 is in slot 0, Director 2 is in slot 1, etc. Slot
numbers are in hex, director numbers are in decimal.
[Slide graphic: DMX-3 card cage layout. Directors 1-8 occupy slots 0-7 on the left, memory boards M0-M7 occupy the center slots, and Directors 9-16 occupy slots 8-F on the right; the outermost slots hold back-end (BE) directors, the inner slots hold front-end (FE) or BE/FE directors]
Edit Directors
y Front-end directors can be configured to support various
protocol parameters
y SCSI and Fibre
Channel
parameters
Gold and upper
case = enabled
Blue and lower
case = disabled
Space key to
toggle
y Reference the
Support Matrix
for Celerra-specific settings
SCSI is a standards-based protocol that has been around for over twenty years and the command set
and nexus is flexible enough to support many different types of storage devices and host operating
systems. Nearly every server vendor supports SCSI, unfortunately not every vendor implements SCSI
in exactly the same way. For example, while both HP-UX and IBM AIX support the SCSI protocol,
they support a different subset of the operational parameters. Fibre Channel is the transport protocol
used with the SCSI protocol and it too has a number of configurable protocol and link parameters.
The emulation used by front-end ports is implemented in software, which provides the flexibility to
configure the front-end port to support a diversity of host configurations. Celerra also has specific
SCSI requirements, and the ports must be set appropriately.
The Edit Director window is used to verify or change flag settings for each director. Some flags simply
display information about the director while others are used to control various functions.
In the example above, we selected the FA tab; the ID field lists all the Fibre Channel directors and the
data fields contain various flags used by specific host systems. When a particular flag is active or set,
it is colored in gold and is displayed in upper-case letters. If a flag is inactive, it is blue and has lower
case letters.
It is important to understand what the individual flags do before changing them. An incorrect setting
can cause errors or performance problems. Reference the EMC Support matrix or the e-Lab Navigator
for specific configuration requirements for each operating system type and configuration.
Volume Requests
y Define the requirements for the creation of specific logical
volumes using VolReq
y Control
Volumes
y User Data
Volumes
The Volume Request window is used to request that specified logical volumes be configured. Multiple
sizes and types are specified as separate requests.
Count: Number of volumes to be configured
Emulation: FBA
Type/host: Server/Celerra
Size is specified in either Cylinders or Blocks
Mirror type:
y RAID (Parity Raid Non DMX-3)
y NORMAL (non-mirrored)
y 2-MIR (RAID-1)
y 3- MIR (3 way mirror)
y 4-MIR (4 way Mirror)
y 3RAID-5 ( 3+1 Raid 5)
y 7RAID-5 (7+1 RAID-5)
y CDEV (Cache Device Used for Virtual Devices with SNAP)
On the left side of the window is a list of volume requests. Note the Volumes column shows the
Symmetrix Logical Volume numbers.
The example above shows the requests for the six control volumes and 100 user data volumes.
Unlike a CLARiiON, a Symmetrix does not present all LUNs to all ports. To make a LUN available to
a specific FA port, a channel address must be assigned. Celerra Data Movers discover and access
Symmetrix Logical Volumes using these Channel Addresses. The Channel Address is the SCSI ID.
Note: the CLARiiON specifies the SCSI address as a decimal number, while the Symmetrix specifies
the address as a hex number. Either way, control volumes start with address 00 and user data volumes
start with address 16 (0x10).
In this example, we are presenting the Celerra Control volumes and User data volumes on four
different FA ports. Remember Celerra requires specific addresses be assigned to control LUNs:
y (2) 12275 cylinder volumes as target and LUN address 00 and 01.
y (4) 2215 cylinder volumes as target and LUN address 02 through 05.
y (1) 3 cylinder volume as address 0F; this is the gatekeeper device.
y If using VCM assign (1) 16 cylinder volume as target 0E
Data volumes must be mapped to the Celerra starting at target and LUN address 10.
As an alternative to using SymmWin, the channel address assignments can also be performed using the
Solutions Enabler symconfigure command.
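A hedged sketch of such a mapping (the Symmetrix ID, device number, director, and port are illustrative assumptions; verify the map dev syntax in the Solutions Enabler documentation for your Enginuity level):
symconfigure -sid 172 -cmd "map dev 00C0 to dir 7A:0 target=0, lun=010;" commit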
The default volume attributes are appropriate for all volumes except the gatekeeper device which must
have AS400Gate enabled.
During the IMPL, the bin file, which contains all the mapping and configuration information, is sent to
all the directors. It is during this sequence that cache is initialized and all the tables are built.
After the IMPL.bin is created, it is loaded to the system. If this is a Configure and Install New
Symmetrix, then the physical disk drives are given a VTOC (Volume Table Of Contents). VTOC
comes from the mainframe world. Mainframe file systems use the VTOC to allocate and remember where
files are on disk.
On the Symmetrix, when we VTOC a drive, we perform a high-level format which clears any existing
data and defines default tables for each logical track in each logical volume. This also generates the
correct CRC information for each logical track in each logical volume we are formatting.
Configure and Load New Symmetrix reconfigures each disk and destroys all previous data. If you
only need to make a change to an existing configuration, you would perform an On-line Configuration
Change procedure.
If you are making changes to an existing configuration, these changes can be done on-line. When
executing the procedure, extensive checking is performed to validate the change before it is
implemented.
Device Masking
[Slide graphic: four Data Movers and three other hosts, each with two HBAs, connected through FC switches to Symmetrix FA ports; the VCMDB in the Symmetrix controls which HBAs see which volumes]
Depending on the environment, the Symmetrix may be configured with Volume Logix. Volume Logix
is used to mask which HBAs see which LUNs on an FA port.
Storage Area Networks provide a fan-out capability where it is likely that more than one host is
connected to the same Fibre Channel port. The actual number of HBAs that can be configured to a
single port is operating system and configuration dependent, but fan-out ratios as high as 64:1 are
currently supported. Reference the support matrix for specific configuration limitations.
Each port may have as many as 4096 addressable volumes presented. When several hosts connect to a
single Symmetrix port, an access control conflict can occur because all hosts have the potential to
discover and use the same storage devices. However, by creating entries in the Symmetrix's device
masking database (VCMDB), you can control which host sees which volume.
Device Masking is independent from zoning, but they are typically used together in an environment.
Zoning provides access control at the port level and restricts which host bus adapter sees which port
on the storage system, while device masking restricts which host sees which specific volumes presented
on a port.
Device Masking uses the UWWN (Unique Worldwide Name) of Host Bus Adapters and a VCM
database device. The device-masking database (VCMDB) on each Symmetrix unit specifies the
devices that a particular WWN can access through a specific Fibre port.
Device Masking
Volume Logix is the software in the Symmetrix that performs the device masking function. The
capability is built into Enginuity, but its use is optional. To set this up you must create a database
volume (VCMDB) and set flags on the Fibre Channel or iSCSI ports to enable its use. Once the database
is set up and enabled, the Solutions Enabler symmask command can be used to configure entries
granting specific hosts access to specific volumes.
A VCMDB entry specifies a host's HBA identity (using an HBA port WWN), its associated FA port,
and a range of devices mapped to the FA port that should be visible only to the corresponding HBA.
Once you make this VCMDB entry and activate the configuration, the Symmetrix makes visible to a
host those devices that the VCMDB identifies as available to that host's initiator WWN through that
FA port.
Device masking also allows you to configure heterogeneous hosts to share access to the same FA port,
which is useful in an environment with different host types.
Reference:
y EMC Solutions Enabler Symmetrix Device Masking CLI Product Guide
y Using the SYMCLI Configuration Manager Engineering WhitePaper
y Using SYMCLI to Perform Device Masking Engineering WhitePaper
C:\> symmask refresh

Refresh Symmetrix FA directors with contents of SymMask database
000190100172 (y/[n]) ? y

Symmetrix FA directors updated with contents of SymMask Database
000190100172

C:\>
To make an entry for the HBA-to-FA connection in the VCMDB, specifying the devices that the HBA
can access, use the symmask command. The first entry specifies that
volumes 00c0, 00c1, 00c2, 00c3, 00c4, and 00c5 are accessible to the first Celerra HBA through FA 7A
port 0. The second entry enables access to the same volumes through the other HBA and the other
Symmetrix port.
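The first masking entry described above might look like the following hedged sketch (the device range and director/port are taken from the example; verify the symmask syntax against the Device Masking guide referenced earlier):
symmask -sid 172 -wwn 5006016030602f3b -dir 7a -p 0 add devs 00C0:00C5
A second, analogous entry would be made for the other HBA WWPN and the other FA port, followed by a symmask refresh.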
In a Celerra environment, typically all volumes are presented to all Data Movers so similar entries
would need to be added for every Celerra FC HBA.
After making changes to the VCM database, you must tell the Symmetrix to refresh the access control
tables in the director. This is done using the symmask refresh command.
Symmetrix ID            : 000190100172
Database Type           : Type6
Last updated at         : 01:40:29 PM on Thu Sep 01,2005

Director Identification : FA-7A
Director Port           : 00

                          User-generated
Identifier        Type    Node Name         Port Name         Devices
----------------  ------  ----------------  ----------------  ---------
5006016030602f3b  Fibre   5006016030602f3b  5006016030602f3b  00c0:00c5
You can display the entire contents of the VCMDB or use options to restrict the display to your area of
interest. In the example above, we are displaying the access control records for the entries we previously
added. Note: the entire output is not displayed.
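The listing above would typically be produced with something like the following hedged sketch (assuming the symmask list database form of the command; the options vary by Solutions Enabler version):
symmask -sid 172 list database -dir 7a -p 0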
Module Summary
y LUN addressing is critical:
Control volumes use addresses 00 - 05
User data volumes use addresses 16+ (0x10)
Closing Slide
Celerra ICON
Celerra Training for Engineering
-1
Revision History
Rev Number
Course Date
1.0
February 2006
1.2
May, 2006
Revisions
Complete
5.5 Updates and enhancements
-2
Module Objectives
Upon completion of this module, you will be able to:
y Describe the logical storage terms and concepts
including disks, slice, stripe, metavolumes, and file
systems
y Manage storage using AVM (Automatic Volume
Management) on the Celerra
y Describe the concept of Storage Pools
y Configure and manage volumes and a file system using
CLI and Celerra Manager
To meet customer needs, the Celerra provides considerable flexibility when creating file systems. The
Celerra offers manual as well as automatic file system creation to allow customers to tailor file systems
to meet specific needs. File systems can be accessed and shared by NFS and CIFS users, as well as by
other file system access protocols. This module illustrates the necessary steps to create file systems
manually or by utilizing the automatic capabilities of the Celerra.
-3
Create Metavolumes
Create a File System
-4
AVM greatly simplifies the creation of logical storage on a Celerra; however, for a few customers,
AVM is not practical because of dispersed volumes, multiple and/or mixed back-ends, storage
limitations, or the implementation of other storage-based features such as BCVs, NearCopy/FarCopy,
etc.
In this module we will be discussing the manual creation of volumes and file systems, but keep in mind
that often AVM will be used.
-5
[Slide graphic: how back-end LUNs map to Celerra disk volumes (d3-d8). RAID 1 hypers are typical with Symmetrix; CLARiiON LUNs are typically configured much larger than Symmetrix hypers]
-6
storageID-devID        type   name        servers
CK200051400304-0000    CLSTD  root_disk   1,2,3,4
CK200051400304-0001    CLSTD  root_ldisk  1,2,3,4
CK200051400304-0002    CLSTD  d3          1,2,3,4
CK200051400304-0005    CLSTD  d4          1,2,3,4
CK200051400304-0004    CLSTD  d5          1,2,3,4
CK200051400304-0003    CLSTD  d6          1,2,3,4
CK200051400304-0006    CLSTD  d7          1,2,3,4
CK200051400304-0007    CLSTD  d8          1,2,3,4
CK200051400304-000F    CLSTD  d16         1,2,3,4
CK200051400304-000E    CLSTD  d17         1,2,3,4
CK200051400304-0008    CLSTD  d11         1,2,3,4
CK200051400304-000A    CLSTD  d12         1,2,3,4
CK200051400304-000B    CLSTD  d13         1,2,3,4
CK200051400304-000D    CLSTD  d14         1,2,3,4
CK200051400304-000C    CLSTD  d15         1,2,3,4
Column definitions:
id - ID of the disk (assigned automatically)
inuse - Whether or not the disk is in use by a file system; y indicates yes, n indicates no
sizeMB - Size of the disk in megabytes
storageID-devID - ID of the storage system and device associated with the disk
type - Type of the disk
name - Name of the disk
servers - Data Movers that have access to the disk
Note: When adding new volumes to the configuration it is necessary to run the command:
server_devconfig ALL -create -scsi -all
in order to define the new LUNs to the system.
-7
Celerra Manager can also be used to view Celerra disks. This slide shows d7 is not in use.
-8
Slice Volumes
[Slide graphic: four 10 GB slice volumes (Slice1-Slice4) cut from disk volumes d10-d13, following the Verify Disk Volumes step]
Slice volumes
Slice volumes are cut out of disks or other volume configurations to make smaller volumes that are
better suited for a particular purpose, such as SnapSure. Slice volumes are not always necessary or
even recommended. However, if a smaller size volume is needed (as you will see with SnapSure), it
will then be critical to understand slice volumes and be able to implement them.
Offsets
When you create a slice volume, you can indicate an offset, which is the distance (in megabytes) from
the end of one slice to the start of the next. Unless a value is specified for the offset (the point on the
container volume where the slice volume begins), the system places the slice using a first-fit algorithm
(default), that is, in the next available volume space.
-9
y Command syntax:
nas_slice -name <slice_name> -create <volume_name>
<size_in_MB>
y Example:
nas_slice -n sl1 -c d16 500
Configuring Celerra Volumes & File Systems - 10
Before you can create a slice from a Celerra disk volume, you must identify the volume from which the slice
volume will be created. The root slice volumes created during installation appear when you list your volume
configurations. However, you do not have access privileges to them, and therefore, cannot execute any
commands against them.
To create a slice from a disk volume, use the nas_slice command:
nas_slice -name <slice_name> -create <volume_name> <size_in_MB>
Example:
nas_slice -n sl1 -c d16 500
Note:
Slice volumes should not be employed if TimeFinder/FS is planned.
- 10
This slide shows how to create a slice volume using Celerra Manager.
- 11
Stripe Volumes
Verify Disk Volumes
Create slice Volumes
Create Stripe Volumes
Create Metavolumes
Create File system
A stripe volume is a logical arrangement of participating disk, slice, or metavolumes that are
organized, as equally as possible, into a set of interlaced stripes.
Creating a stripe volume
Creating a stripe volume allows you to achieve a higher aggregate throughput from a volume set since
stripe units contained on volumes in the volume set can be active concurrently. Stripe volumes can
also improve system performance by balancing the load across the participating volumes.
Recommended stripe size
The size of the stripe (also referred to as the stripe depth) refers to the amount of data written to a
member of the stripe volume before moving to the next member. The use of different stripe sizes
depends on the applications you are using. The recommended stripe size is 32K for Symmetrix used
predominantly for NFS clients, 8K for Symmetrix used predominantly for CIFS, and 8K for
CLARiiON.
Naming stripe volumes
If you do not select a name for the stripe volume, a default name is assigned.
Carefully consider the size of the stripe volume you want. After the stripe volume is created, its size
remains fixed. However, you can extend a file system built on top of a stripe volume by combining or
concatenating it with additional stripe volumes.
- 12
Stripe volumes can be created manually or automatically. The difference is, Celerra chooses which
volumes to include in the stripe volume when you choose automatic, versus you choosing the volumes
in the manual method. The two options are presented on subsequent slides.
You should configure stripes to use the maximum amount of disk space. The size of the participating
volumes within the stripe should be uniform and evenly divisible by the size of the stripe. Each
participating volume should contain the same number of stripes. Space is wasted if the volumes are
evenly divisible by the stripe size but are unequal in capacity. The residual space is not included in the
configuration and is unavailable for data storage.
If creating the stripe volume manually, no two members of the stripe volume should reside on
the same physical spindle in the Symmetrix (or CLARiiON).
To identify the physical location of a Celerra Symmetrix disk volume:
y Run nas_disk -list and identify the storageID-devID of the disk volume
y Run $ /nas/symcli/bin/symdev -sid <storageID> list | grep <devID>
y Identify the DA, Interface, and Target (format: DA:IT, ex: 01A:C2) (this represents the physical
location of the disk volume)
y To view other devices on the same spindle, type the following command:
$ /nas/symcli/bin/symdisk -sid <storageID> sho <DA:IT>
- 13
y Examples:
Creating a stripe volume from slice volumes
nas_volume -n str1 -c -S 8192 sl1,sl2,sl3,sl4
Example
To create a stripe volume with a depth of 32768 bytes out of slice volumes sl1 through sl4, use the
following command:
nas_volume -n str1 -c -S 32768 sl1,sl2,sl3,sl4
id = 316
name = str1
acl = 0
in_use = false
type = stripe
stripe_size = 32768
volume_set = sl1,sl2,sl3,sl4
disks = d3,d4,d5,d6
- 14
y Example:
nas_volume -n teststr -c -S 32768 size=50
You can also choose to allow Celerra to select which disk volumes to include in a stripe volume.
When the size= option is employed, Celerra will automatically select the correct number of disk
volumes. This method will also reduce the risk of having the same physical spindle in the Symmetrix
or CLARiiON used more than once in a stripe volume. Care should still be used to get the best usage
of disk space. For example, if all disk volumes are 9 GB, then the total capacity of the stripe volume
specified in the size= option should be divisible by 9 GB (for example, 36 or 72 GB).
Example
nas_volume -n teststr -c -S size=10
id          = 116
name        = teststr
acl         = 0
in_use      = False
type        = stripe
stripe_size = 32768
volume_set  = d3,d4,d5,d6,d7,d8,d9,d10,d11,d12,d13
disks       = d3,d4,d5,d6,d7,d8,d9,d10,d11,d12,d13
- 15
This slide shows how to create a stripe volume manually using Celerra Manager.
Two slices were created; you must select at least two slices to create a stripe.
- 16
y Symmetrix considerations
Use Celerra stripe volumes, not Symmetrix metavolumes and stripes
For multiple client NFS loads and MPFS sequential workloads, use
16 volumes in each stripe set when possible
CLARiiON
For optimal performance, stripe across different volumes. While striping across a single volume is possible, it
will not improve performance.
On an NSxxx with a single DAE, do not stripe a file system across two LUNs from the same RAID group.
Instead, concatenate LUNs from a single RAID group together using Celerra, then create a stripe volume across
that concatenated metavolume.
With a single DAE system, stripe file systems over as many spindles as possible, even if this means crossing
RAID types or configurations.
With multiple DAE systems, avoid striping a file system across LUNs of different RAID types and
configurations. Do not mix RAID1, 4+1 RAID5, and 8+1 RAID5 LUNs in a single file system. Do not mix
LUNs composed of different sized spindles.
Symmetrix
Symmetrix metavolumes should only be used for architectural reasons. If there is no feasible method to provide
more FA Ports to increase the number of paths available for the Data Mover, Symmetrix metavolumes should be
considered as a method of reducing target/LUN counts. Aside from that, Symmetrix hypervolumes do not
provide features that are not otherwise provided by DART based volume management. Additionally, using
Symmetrix metavolumes can make it harder for the Celerra Admin to determine which spindles are being used
for each file system.
For multiple client NFS loads and HighRoad (more on HighRoad later) sequential workloads, use 16 volumes in
each stripe set, when possible.
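As a sketch of the single-DAE recommendation above (the volume names d7-d10, the metavolume names, and the 32768-byte depth are assumptions for illustration), the LUNs from each RAID group could first be concatenated and the resulting metavolumes then striped:
nas_volume -n mtv_rg1 -c -M d7,d8 (concatenate the two LUNs from the first RAID group)
nas_volume -n mtv_rg2 -c -M d9,d10 (concatenate the two LUNs from the second RAID group)
nas_volume -n str_fs -c -S 32768 mtv_rg1,mtv_rg2 (stripe across the concatenated metavolumes)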
- 17
Metavolume
Verify Disk Volumes
Create Slice volumes
Create Stripe Volumes
Create Metavolumes
40 GB Metavolume
10 GB
10 GB
10 GB
10 GB
disk d10
disk d11
disk d12
disk d13
Celerra metavolume
A metavolume is an end-to-end concatenation of one or more disk volumes, slice volumes, stripe
volumes, or metavolumes. A metavolume is required to create a file system because metavolumes
provide the expandable storage capacity that might be needed to dynamically expand file systems. A
metavolume also provides a way to form a logical volume that is larger than a single disk.
Metavolume size
The size of the metavolume must be at least 2 MB to accommodate a file system.
Naming a metavolume
If you do not enter a metavolume name, a default name is assigned.
- 18
Creating a Metavolume
y Create the required metavolume from:
Disk Volumes
Slice Volumes
Stripe Volumes
y Command:
nas_volume -name <name> -create -Meta <volume_name>
y Example:
nas_volume -n mtv1 -c -M str1
Example
To create a metavolume named mtv1 from stripe volume str1, use the following command:
nas_volume -n mtv1 -c -M str1
id = 312
name = mtv1
acl = 0
in_use = false
type = meta
volume_set = str1
disks = d3,d4,d5,d6
- 19
Creating a Metavolume
- 20
File System
Once you have configured the metavolume, you are now ready to create a file system. A file system is
a method of cataloging and managing the files and directories on a storage system. The default, and
most common, Celerra file system type is uxfs. Some other types of file systems are ckpt (Checkpoint
file system), rawfs (Raw file system), and mgfs (Migration file system).
- 21
- 22
y Example:
nas_fs -n fs1 -c mtv1
id = 17
name = fs1
acl = 0
in_use = false
type = uxfs
volume = mtv1
rw_servers =
ro_server =
symm_devs = 014,015,016
disks = d3,d4,d5,d6
Example
To create a file system named fs1 from mtv1, type the following command:
nas_fs -n fs1 -c mtv1
id = 17
name = fs1
acl = 0
in_use = false
type = uxfs
volume = mtv1
rw_servers =
ro_server =
symm_devs = 014,015,016
disks = d3,d4,d5,d6
- 23
This slide shows how to create a file system using Celerra Manager.
Note: The Create from Meta Volume option was selected here. Creating a file system from a
Storage Pool is shown on a subsequent slide.
- 24
Storage Pool
- 25
symm_std
Designed for highest performance and availability at medium cost. This AVM profile uses Symmetrix STD disk volumes
(typically RAID 1).
symm_std_rdf_src
Designed for highest performance and availability at medium cost, specifically for storage that will be mirrored to a remote
Celerra File Server using SRDF. For information about SRDF, refer to the Using SRDF With Celerra technical
module.
clar_r1
Designed for high performance and availability at low cost. This AVM profile uses CLARiiON CLSTD disk volumes
created from RAID 1 mirrored-pair disk groups.
clar_r5_performance
Designed for medium performance and availability at low cost. This AVM profile uses CLARiiON CLSTD disk volumes
created from 4+1 RAID 5 disk groups.
clar_r5_economy
Designed for medium performance and availability at lowest cost. This AVM profile uses CLARiiON CLSTD disk
volumes created from 8+1 RAID 5 disk groups.
clarata_archive
Designed for archival performance and availability at lowest cost. This storage pool uses CLARiiON Advanced
Technology Attachment (ATA) disk drives in a RAID 5 configuration.
clarata_r3
Designed for archival performance and availability at lowest cost. This AVM storage pool uses CLARiiON ATA disk
drives in a RAID 3 configuration.
- 26
y Example:
nas_pool -create -name marketing -acl 0 -volumes
d126,d127,d128,d129 -description "pool for marketing" -default_slice_flag y
If your environment requires more flexibility than the system-defined AVM storage pools allow, use
this command to create user-defined storage pools and define their attributes.
Example
nas_pool -list (displays the naming conventions used for the AVM system-defined storage pools shown on slide 26)
nas_pool -create -name marketing -acl 0 -volumes d126,d127,d128,d129
-description "pool for marketing" -default_slice_flag y
Using nas_storage will set the name for a storage system, assign an access control value, display
attributes, synchronize the storage system with the Control Station, and perform a failback for
CLARiiON systems.
The output from this command is determined by the type of storage system attached to the Celerra
Network Server.
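As a minimal sketch of how this might be inspected from the Control Station (the exact output depends on the attached array; the list and info forms shown are the common usages):
nas_storage -list (display the storage systems attached to this Celerra)
nas_storage -info <name> (display the attributes of a specific storage system)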
- 27
This slide shows how to create a user-defined storage pool using Celerra Manager.
Note: By checking Slice Pool Volumes by Default?, the Celerra will slice existing volumes in the
pool to satisfy the user request for storage space. Otherwise, the Celerra would attempt to acquire a
new volume to satisfy the user request.
- 28
Command:
nas_fs -n <fs_name> -create size=<size_in_GB> pool=<storage pool>
y Examples:
To create a striped 300GB FS from Symmetrix STD disk
nas_fs -n fs01 -c size=300 pool=symm_std -o slice=y
To employ AVM from the command line, use the following syntax:
nas_fs -n <fs_name> -create size=<size_in_GB> pool=<storage pool>
Examples:
To create a 300GB file system named fs01 from Symmetrix STD disk volumes, type:
nas_fs -n fs01 -c size=300 pool=symm_std
- 29
This slide shows a sample Celerra Manager screen used to create a file system using the Automatic
Volume Management feature, which allocates storage from storage pools on an as-needed basis. You can
create a file system by specifying the amount of space to allocate to the file system from a system-defined or a user-defined storage pool.
- 30
Beginning with NAS 5.4, Data Movers can support up to 16 TB of Fibre Channel storage. The
minimum Data Mover model for CNS cabinets is the 510. All Data Movers in the NS family
support up to 16 TB of Fibre Channel storage.
This feature is available on new and existing file systems. The file system must be made up of
metavolumes no larger than 2 TB each, concatenated together.
- 31
Example:
# nas_volume -n mtv1 -c -M str1,str2
# nas_volume -n mtv2 -c -M str3,str4
...
# nas_volume -n supermeta -c -M
mtv1,mtv2,mtv3,mtv4,mtv5,mtv6,mtv7,mtv8
# nas_fs -n fs1 -c supermeta
Example
To create a metavolume named mtv1 from stripe volume str1, use the following command:
nas_volume -n mtv1 -c -M str1
Creating a metavolume from multiple volumes
To create a metavolume from multiple volumes, use the following command:
nas_volume -name <meta_vol_name> -create -Meta <vol_name>,<vol_name>
After creating the 2 TB metavolumes, you must create a larger metavolume (up to 16 TB) from the smaller
2 TB metavolumes.
- 32
File system limitations depend on Data Mover hardware and NAS software version. Check the release
notes and product documentation for supported units.
Other issues should be considered in planning, such as file system backups and restores and
consistency checks because of the time it takes to accomplish these with large file systems. As a basic
rule, smaller file systems perform better than larger ones.
- 33
Sample nas_fs -list output (columns: id, inuse, type, acl, volume, name, server), showing the root file
systems (root_fs_1 through root_fs_15, root_fs_common, root_fs_ufslog, root_panic_reserve) and a
user file system named Diamond (id 23).
Column Definitions
id ID of the file system
inuse whether or not the file system is registered in the mount table of a Data Mover
type type of file system
y 1=uxfs (default)
y 2-4=not used
y 5=rawfs (unformatted file system)
y 6=mirrorfs (mirrored file system)
y 7=ckpt (checkpoint file system)
y 8=mgfs (migration file system)
y 100=group file system
y 102=nmfs (nested mount file system)
acl access control value for the file system
volume volume on which the file system resides
name name assigned to the file system
server ID of the Data Mover accessing the file system
- 34
id        = 23
name      = Diamond
acl       = 0
in_use    = False
type      = uxfs
worm      = off
volume    = v99
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
stor_devs = CK200051400304-000C,CK200051400304-000B,CK200051400304-0008,CK200051400304-0007
disks     = d15,d13,d11,d8
- 35
y Example:
server_df server_2 fs1
server_2 :
Filesystem       kbytes       used      avail  capacity  Mounted on
fs1            69630208     379520   69250688        1%  /mp1
Checking utilization
To view the amount of used/free space in a file system, use this command.
server_df <mover_name> <file_system_name>
- 36
This slide shows how to view file system information including utilization using Celerra Manager.
- 37
y Example:
nas_fs -x fs1 str2
Result: After extending the fs1 file system to include str2, mtv1 will also include str2.
- 38
This slide shows how to extend a file system by volume using Celerra Manager.
- 39
This slide shows how to extend the size of a file system by size.
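A hedged CLI equivalent of extending by size (the file system name, the 10 GB increment, and the pool name are assumptions for illustration; AVM chooses the underlying volumes):
nas_fs -xtend fs1 size=10G pool=clar_r5_performance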
- 40
To delete a file system, use the nas_fs -delete command. To delete a file system and all the
underlying meta, stripe, and slice volumes, use nas_fs -d <filesystem_name> -o volume.
- 41
This slide shows how to delete a file system by using Celerra Manager.
Note: When deleting a file system using Celerra Manager, the underlying volume structure is also
deleted if the file system was originally created using AVM.
- 42
Max
Size
HWM
File systems created with AVM can be enabled with auto extension capability. You can enable auto
extension on a new or existing file system. When you enable auto extension, you can also choose to
adjust the high water mark (HWM) value, set a maximum size to which the file system can grow,
and enable virtual provisioning.
Auto extension causes the file system to automatically extend when it reaches the high water mark and
permits you to grow the file system gradually on an as-needed basis. The virtual provisioning option,
which can only be used in conjunction with auto extension, allows you to allocate storage based on
your longer-term projections while you dedicate only the file system resources you currently need. It
also allows you to show the user or application the maximum size of the file system, of which only a
portion is actually allocated, while allowing the file system to slowly grow on demand as the data is
written.
- 43
Max Size
y Command:
nas_fs -n ufs3 -create size=256M
pool=clar_r5_performance -auto_extend yes
-max_size 1T vp=Yes
The virtual provisioning option lets you present the maximum size of the file system to the user or
application, of which only a portion is actually allocated; virtual provisioning permits the file system to
slowly grow on demand as the data is written.
Enabling virtual provisioning with Automatic File System Extension does not automatically reserve
the space from the storage pool for that file system. Administrators must ensure adequate storage space
exists so the automatic extension operation can succeed. If the available storage is less than the
maximum size setting, then automatic extension fails. Users receive an error message when the file
system becomes full, even though it appears there is free space in the file system.
- 44
This slide shows how to create a file system with Auto Extend and Virtual Provisioning enabled by
using Celerra Manager. Refer to the Celerra man pages for more detailed information and options on
creating Celerra file systems with CLI.
- 45
References
y Managing Celerra Volumes and File Systems with
Automatic Volume Management
P/N 300-002-689 Rev A01 Version 5.5 March 2006
y Managing Celerra Volumes and File Systems Manually
P/N 300-002-705 Rev A01 Version 5.5 March 2006
* Above are available on the User Information CD
- 46
Module Summary
Key points covered in this module are:
File systems can be created manually or with Automatic
Volume Manager
The types of Celerra volumes that can be created are
slice, stripe, and metavolumes
Storage Pools are containers that hold storage ready
for use by file systems, checkpoints, or other Celerra
objects that use storage
System-defined Storage Pools
User-defined Storage Pools
The key points for this module are shown here. Please take a moment to review them.
- 47
Closing Slide
- 48
Celerra ICON
Celerra Training for Engineering
-1
Revision History
Rev Number
Course Date
1.0
February 2006
1.2
May, 2006
Revisions
Complete
Updates and enhancements
-2
Module Objectives
Upon completion of this module, you should be able to:
In the prior module we created a file system. Before a file system can be accessed by a client, it first
must be mounted on a Data Mover and exported. This module covers these steps. While the process is
nearly the same for both NFS and CIFS environments, this module focuses on NFS clients in a UNIX
environment only.
-3
Introduction to NFS
y NFS (Network File System) is a client/server distributed file service
that provides file sharing in a network environment
Developed in the early 80s by Sun Microsystems
Standard for network file access in UNIX environments
-4
Process Overview
Existing File System
Create Mountpoint
Mount FS to Mountpoint
Export Mounted FS
NFS Mount from Clients
Before a file system can be accessed by clients, the file system must be mounted and exported. After
creating a file system on the Symmetrix or CLARiiON using the nas_fs command:
The Celerra administrator must:
y Create a mountpoint on a Data Mover.
y Mount the file system to the mountpoint.
y Export the mounted file system.
The clients must remotely mount the exported file system.
-5
Mountpoints
y Command: server_mountpoint
Create Mountpoint
Mount FS to Mountpoint
Export Mounted FS
NFS Mount from Clients
Creating a mountpoint
A mountpoint can be created on a Data Mover before you mount a file system, but this is not required; the Celerra will create the
mountpoint when a file system is mounted. With the CLI you must delete the mountpoint manually (the GUI
deletes the mountpoint when the file system is deleted). Each file system can be mounted (rw) on only one
mountpoint, and each mountpoint can provide access to one file system at a time. Celerra supports having
multiple Data Movers mounting the same file system concurrently only if all mounts are read only. The read
only (ro) and read write (rw) mount options are discussed in relation to the server_mount command.
Naming mountpoints
Mountpoint names must begin with a "/" followed by alphanumeric characters (for example, /new).
Mounting a file system
You can mount a file system, rooted on a subdirectory of an already exported file system, as long as the file
system has not previously been mounted above or below that mount point. For example:
y File system fs1 is mounted to /mp1
y Directory /dir1 is created in /mp1
y File system fs2 is then mounted to /mp1/dir1
y However, file system fs1 cannot be mounted to /mp1/dir1
Maximum number of nested mountpoints
The maximum number of nested mountpoints that you can create under a directory is eight. However, you can
only mount a file system up to the seventh level.
-6
Creating a Mountpoint
y Creating a mountpoint simply creates a directory on the
Data Mover
y Command:
server_mountpoint mover_name -create mountpoint
y Example:
server_mountpoint server_2 -c /mp1
server_2: done
(Diagram: mountpoint /mp1 created on the Data Mover)
To create a mountpoint:
server_mountpoint mover_name -create mountpoint
Note:
It is not necessary to create a mountpoint prior to mounting the file system. The mount command will
create the mountpoint and mount the file system. The name of the mountpoint will be the file system
name.
-7
Mount FS to Mountpoint
Export Mounted FS
NFS Mount from Clients
(Diagram: file system fs1 mounted to mountpoint /mp1 on the Data Mover)
Once you create a mountpoint, you must mount your file system to the mountpoint in order to provide
user access. File systems are mounted permanently by default, so the mount entry is retained in the mount
table; after a system reboot the mount table is read and the file system is automatically mounted again. A
temporary unmount does not remove this permanent entry.
-8
y Example:
server_mount server_2 fs1 /mp1
y File system write operations are cached and read data is prefetched
by default, but these behaviors can be disabled by mount options
Mount options vary for NFS and CIFS. When performing a mount, you can institute the following
options to define the mount:
Read-write: When a file system is mounted read-write (default) on a Data Mover, only that Data
Mover is allowed access to the file system. No other Data Mover is allowed read or read-write access
to that file system.
Read-only: When a file system is mounted read-only on a Data Mover, clients cannot write to the file
system, regardless of the export permissions. A file system can be mounted read-only on several Data
Movers concurrently, as long as no Data Mover has mounted the file system as read-write.
Additional options for CIFS mount include:
File locking
Opportunistic locks
Notify
Access checking policies
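For illustration, a read-only mount from the CLI might look like the following sketch (the file system and mountpoint names are assumptions):
server_mount server_2 -option ro fs1 /mp1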
-9
File Systems
With Celerra Manager, the file system is mounted, by default, at the time you create it. A file system
called marketing is shown here. When marketing was created, it was automatically mounted to a
mountpoint called marketing.
- 10
Note:
- 11
- 12
y Example:
server_export server_2 /mp1
Command
Paths are exported from Data Movers using the server_export command. This adds an entry to
the export table. Entries to the table are permanent and are automatically re-exported if the Data
Mover reboots.
Export options
Options used when exporting the file system play an integral part of managing security to the file
system. You can ignore existing options in an export entry by including the -ignore option. This
forces the system to ignore the options in the export table and follow the specific guidelines of that
export.
It is not necessary to export the root of a file system.
It is sometimes advantageous to export a directory on the file system rather than the file system itself.
- 13
Export options: ro, ro=, rw=, access=, root=
When using the server_export command you can specify the level of access for each NFS export.
Client lists for ro=, rw=, access=, and root= can be a hostname, netgroup, subnet, or IP address and
must be colon-separated, without spaces. You can also exclude access by using the dash (-) prior to an
entry for ro=, rw=, and access=, for example rw=-host1.
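As a hedged sketch of these options in use (the hostnames and path are assumptions for illustration):
server_export server_2 -Protocol nfs -option root=adminhost,rw=host1:host2,access=host1:host2 /mp1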
- 14
- 15
This slide shows how to export an NFS file system using Celerra Manager.
- 16
- 17
This slide shows how to permanently unexport an NFS file system using Celerra Manager.
- 18
Mount FS to Mountpoint
# mkdir /hmarine
y NFS mount the exported file system to the local
directory
# mount 192.168.101.20:/mp1 /hmarine
Once the file system has been exported from the Celerra, NFS clients will need to NFS mount the file
system. The typical procedure involves the use of a local directory as a mountpoint, whether pre-existing or created specifically for this purpose.
In the example below, the Celerra file system mounted to /mp1 on a Data Mover with the IP address
192.168.101.20 is NFS mounted to a directory named /hmarine on a Sun Solaris workstation.
Similar syntax can be used for other clients supporting NFS.
As root, create a new directory.
# mkdir /hmarine
At this point /hmarine is a directory. Performing an ls command on /hmarine should yield no results
because the directory is empty.
NFS mount /hmarine to the Data Mover's exported /mp1 file system.
# mount 192.168.101.20:/mp1 /hmarine
If a host name resolution solution (such as DNS) has been employed, the command could be as
follows:
# mount cel1dm2:/mp1 /hmarine
After mounting /hmarine to the Data Mover's exported /mp1, /hmarine is now a file system, not a
directory. An ls command on /hmarine should now show lost+found (which is at the root
of all file systems).
- 19
On NFS Clients
server_commands
Create Mountpoint
Mount FS to Mountpoint
Export Mounted FS
This slide summarizes what needs to occur when creating a file system and making it available to NFS
clients on the network.
1. A meta volume is created using either a stripe, slice, or disk volume
2. A file system is created on the meta volume
3. Mountpoint is created
4. The file system is mounted to the mountpoint
5. The mountpoint is exported for NFS
6. The NFS client creates a local directory and mounts the remote Celerra file system
- 20
(Diagram: NMFS /nmfs_fs1 with component file systems /nestfs01, /nestfs2, /nestfs3, /nestfs4; space is reported with server_df and the Properties page)
A Nested Mount File System is a collection of individual file systems that can be exported as a single
share or single mount point. Normally, the collection of file systems remains together after creation,
although it is possible to remove an individual file system or to break up the collection entirely.
The space for each Nested Mount File System and each of the component file systems can be
examined using server_df.
The space reported for the NMFS will be the aggregation of the space within each of the component
file systems mounted in it.
The space reported for each component will be the actual space within the component file system.
In some cases, the access control associated with a NMFS root may not be sufficient for the entire
collection of file systems. Thus, NMFS will allow different export controls on each of the component
file systems. Access to each of the file systems may be individually set via the server_export for
the component file systems.
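A minimal sketch of how an NMFS might be built from the CLI (the names are assumptions; component file systems are mounted to paths beneath the NMFS mountpoint):
nas_fs -n nmfs_fs1 -type nmfs -create (create the nested mount file system)
server_mount server_2 nmfs_fs1 /nmfs_fs1 (mount the NMFS on the Data Mover)
server_mount server_2 nestfs01 /nmfs_fs1/nestfs01 (mount a component file system inside it)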
- 21
(Diagram: NMFS /nmfs_fs1 with components /nestfs01, /nestfs2, /nestfs3, /nestfs4; one component is exported for RO)
A component (nested) file system will get its permissions one of two ways:
y The user can export the component file system separate from the NMFS file system and give it
permissions at that time.
y The user can export just the NMFS file system. The component file systems then inherit the
permissions from the parent (NMFS) file system.
Example:
Set export permission to Nested_1 = r/w
y fs002=r/w (inherited)
y fs003=r/w (inherited)
y fs004=r/w (inherited)
Set export permission to fs002 = r/o
y fs002=r/o (component export)
y fs003=r/w (inherited)
y fs004=r/w (inherited)
Set export permission to fs004 = root=10.0.0.1
y fs002=r/o (component export)
y fs003=r/w (inherited)
y fs004=root=10.0.0.1 (component export)
Exporting File Systems to UNIX
- 22
The sum of the size of the four component (nested) file systems is equal to the size of the NMFS file
system (nmfs_fs1).
- 23
(Diagram: an NFS user's UID and GID are used to authorize access to file system objects)
Celerra Data Movers resolve users to UIDs and groups to GIDs using traditional passwd and group
files or by querying NIS.
Data Movers check their local /.etc/passwd and /.etc/group files first, and then check with NIS if
the Data Mover has been configured for NIS.
If the Active Directory schema has been extended to include UNIX attributes for Windows users and
groups, you can configure a Data Mover to query the Active Directory to determine if a user and the
group of which the user is a member have UNIX attributes assigned. If so, information stored in these
attributes is used for file access authorization.
The Data Mover first checks its local cache. It then queries all the configured naming services in a
predetermined order until the requested entity is found or until all naming services have been queried.
The search order is determined by the name service switch (nsswitch), which is configured using the
nsswitch.conf file.
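As a hypothetical illustration of such an nsswitch.conf fragment (the service order shown is an assumption, not a recommended configuration):
passwd: files nis
group: files nis
hosts: files dns nis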
- 24
y Command:
# /nas/sbin/server_user <mover_name> -add
passwd <UID or user name>
y Example:
# /nas/sbin/server_user server_2 -add -passwd
itechi
Adding users
Users can be added to /.etc/passwd on a Data Mover with the server_user command. This command launches an interactive script
that allows you to create or modify a user account. The server_user command also allows you to add or delete an
optional password for a user account. This command must be run from the /nas/sbin directory as root.
# /nas/sbin/server_user server_2 -add -passwd itechi
Creating new user itechi
User ID: 1007
Group ID: 105
Comment: Ira Techi, IS admin
Home Directory:
Shell:
Changing password for new user itechi
New passwd:
Retype new passwd:
server_2: done
Password and group files
In addition to server_user, passwd and group files can be created manually, or copied from another system, and then
placed into /.etc using the server_file command.
- 25
y Example:
server_nis server_2 hmarine.com 192.168.64.10,192.168.64.11
NIS
Data Mover
NIS (Network Information Service) is a network service that resolves hostnames to IP addresses and IP
addresses to hostnames. NIS can also be used to store user and group names used in authentication.
Command syntax
server_nis server_2 <nis_domain_name> <IP_Addr_of_NIS_server1>,
<IP_Addr_of_NIS_server2>,
Example
server_nis server_2 hmarine.com 192.168.64.10,192.168.64.11
Note: EMC recommends that two NIS servers be configured for each Data Mover for redundancy.
- 26
This slide shows how to define an NIS server using Celerra Manager.
- 27
(Diagram: passwd and group files copied from an NIS client to the Control Station and then to the Data Mover)
Examples
To copy files from an NIS client, type the following command:
# ypcat passwd >passwd
# ypcat group >group
To copy passwd and group files to Control Station and then FTP these files to the Data Mover, type
the following command:
server_file server_2 -put passwd passwd
server_file server_2 -put group group
- 28
Module Summary
y Before a file system can be accessed by clients, it must be mounted
and exported from the Celerra
y The server_mount command is used to mount a file system
When a file system is mounted read/write on a Data Mover (default), only
that Data Mover is allowed access to the file system
When a file system is mounted read-only on a Data Mover, clients cannot
write to the file system regardless of the export permissions
In this module we discussed making a file system available to NFS clients. While there are third party
NFS client software packages available for Windows, Windows users typically use the CIFS protocol
that we will be discussing in subsequent modules.
- 29
Closing Slide
- 30
Celerra ICON
Celerra Training for Engineering
Intro to CIFS
-1
Revision History
Rev Number
Course Date
1.0
February 2006
1.1
March 2006
1.2
May 2006
Revisions
Complete
Intro to CIFS
Intro to CIFS
-2
Introduction to CIFS
y Define terminology used in a CIFS environment
y Describe how CIFS users are authenticated and how
their credentials are mapped in the Celerra Network File
Server
y Describe the purpose of Usermapper and how it works
y Configure Internal Usermapper in a multi Celerra
environment
y Describe the options for mapping user credentials in a
multiprotocol environment
Intro to CIFS
In this module we will be discussing Security issues; basically identifying who a user is. How we
address these issues is different if we are in a UNIX-only environment, a Windows CIFS environment,
or a mixed environment. In this section we are going to discuss a tool called Usermapper that is used
to map Windows credentials (SIDs) to the UNIX-like User ID (UID) and Group ID (GID) conventions
that are used by DART on the Celerra.
An alternative to using Usermapper is to manually create entries in a passwd and groups file or use the
tool NTmigrate that will extract Windows user credentials and convert them into UIDs and GIDs used
by the Celerra.
Intro to CIFS
-3
Intro to CIFS
The default file serving protocol for the Celerra is NFS. While there are NFS clients available for
the Microsoft environment, CIFS is the most widely used protocol for file sharing in a Windows
environment.
When configured for CIFS, the Celerra Data Mover emulates a Windows server, providing high
performance, the standard features of a Windows file server, and a robust set of additional features.
Intro to CIFS
-4
Windows NT 4.0
Windows 2000 Mixed
Windows 2000 Native
Windows Server 2003 Family Interim
Windows Server 2003
Intro to CIFS
CIFS support has been available with Celerra for a number of years. Each release of NAS adds more
features and supported platforms.
Intro to CIFS
-5
(Diagram: the CIFS service on a Data Mover hosting multiple CIFS servers)
Intro to CIFS
The file sharing protocol that is used by default is NFS. If Windows clients will be accessing CIFS
shares, then the CIFS service must be started. This service is implemented at the kernel level for
maximum performance.
CIFS servers are logical instances. A single Data Mover may host multiple CIFS servers; each is
configured as a separate entity when it joins the domain.
Intro to CIFS
-6
Intro to CIFS
At the highest level, the way you make a CIFS share available to clients is very similar to how you
would do it for NFS clients: create the file system, mount it on the Data Mover, and export it.
However, there are many other considerations and prerequisites that must be met, specifically the
integration into the Windows environment.
Intro to CIFS
-7
DNS (Domain Name Service) is used to locate computers and services on the
network
Maintains a database of domain names, host names, IP addresses and services
DNS provides name resolution
y Domain users join (log in to) the domain by presenting their credentials
Authentication is performed using Kerberos
Secret-key encryption mechanism
Users log in to domain once, not necessary to log into each computer in the domain
Intro to CIFS
While it is possible to configure standalone CIFS servers, most environments integrate into the
Windows domain. When properly configured, the CIFS server on the Data Mover is visible on the
Microsoft network, either via the browse list or via a UNC path.
EMC strongly recommends that you enable Unicode on the Celerra Network Server. If you do not
enable Unicode, ASCII filtering is automatically enabled when you create the first Windows 2000 or
Windows Server 2003-compatible CIFS server on the Data Mover. If neither Unicode nor ASCII
filtering are enabled, you cannot create a Windows 2000 or Windows Server 2003-compatible CIFS
server.
Unicode can be enabled using the uc_config command and through the Set Up Celerra Wizard on the
Celerra Manager. However, depending on your environment, you might also need to customize the
settings of the translation configuration files before enabling Unicode.
Intro to CIFS
-8
Authentication
y Three authentication options:
Single username/password - SHARE Security
At the Data Mover - UNIX Security
When the user enters NT/2000 network - NTSecurity
Intro to CIFS
One of the configuration options is to specify where the user validation will take place. For example,
the users could be required to present the username and password that matches what is stored in the
passwd file on the Data Mover, or NIS (UNIX security). Or, it could be assumed that, since the user
is coming from the Windows network, he/she has already provided a username/password and was
validated by a Microsoft domain controller, and would thus have a Security Access Token, therefore we
will trust that they are who they say they are (NT security). A simpler but less flexible option is to
have a single username/password for all users who would like to access the system. This is referred to as
SHARE security.
Intro to CIFS
-9
Security mode overview (UNIX / SHARE / NT):
UNIX - uses no passwords or uses plain-text passwords; not recommended.
SHARE - asks for a read-only or read/write password; ACLs not checked; not recommended.
NT - default user authentication method; recommended security mode.
Intro to CIFS
- 10
How it works (UNIX / SHARE / NT):
y Access-checking is against user and group security IDs (SIDs)
Intro to CIFS
- 11
When to use (UNIX / SHARE / NT):
UNIX - not recommended.
SHARE - not recommended.
NT - recommended security mode.
Intro to CIFS
- 12
Setting Authentication
$ server_cifs <movername> -add security=<security_mode>
Where:
<movername> = name of the specified Data Mover
<security_mode> =
NT (default) - The Windows NT password database on the PDC (which uses
encrypted passwords with NETLOGON) is used. The passwd file, NIS, or
Usermapper is required to convert Windows NT usernames to UNIX UIDs.
UNIX - The client supplies a username and a plain-text password to the server. The
server uses the passwd database or NIS to authenticate the user.
SHARE - Clients only supply the read-only or read/write password you configure
when creating a share. Unicode must not be enabled.
y Example:
To set the user authentication method to UNIX for server_2, type:
$ server_cifs server_2 -add security=UNIX
Intro to CIFS
- 13
Verifying Authentication
y Checking the user authentication method set on a Data Mover
y Example: To check the user authentication method for
server_2:
$ server_cifs server_2
server_2 :
96 Cifs threads started
Security mode = NT
Max protocol = NT1
I18N mode = ASCII
Home Directory Shares DISABLED
Usermapper auto broadcast enabled
Usermapper[0] = [127.0.0.1] state:active
port:14640
...
Intro to CIFS
- 14
(Diagram: CIFS services identify a user by SID and NFS services by UID/GID; both are resolved to the UID/GID that controls access to file system objects)
y Every file system object (such as a file, directory, link, shortcut) has an associated
owner and owner group identified with UID and GID
y UIDs and GIDs are used by the Celerra to control access to the file system objects
y Windows and UNIX users present credentials in different formats
UNIX users: numeric User Identifiers (UIDs) and Group Identifiers (GIDs)
Windows users: Security Identifier strings (SIDs)
Intro to CIFS
The Celerra Data Mover runs the EMC proprietary DART operating system, which is based on UNIX.
Similar to other UNIX systems, users are authorized to enter the system based on a username and
password that is stored in the passwd file (located in /.etc on the Data Mover), or on an NIS server.
Users and the groups they belong to are also associated with UIDs and GIDs, which are stored in the
passwd and group files.
Microsoft does not employ UIDs and GIDs to identify users. Rather, users are identified by a SID
(Security Identifier). Therefore, each user must have a username/UID created for them in order to
access the UNIX-like Data Mover. (The same is true for groups and GIDs.) Again, this username/UID
will be located on the Data Mover (/.etc/passwd) or on a NIS server.
Intro to CIFS
- 15
Intro to CIFS
There are a couple of methods for generating the username/UID for a CIFS user. One is to simply use a
process called Usermapper that runs on one of the Data Movers and sequentially assigns UIDs and
GIDs to Windows users. This is appropriate in environments that have only Windows users. If the
environment includes both Windows and UNIX users, it would be appropriate to use the same user IDs
for both. In these cases, migrating Windows SIDs to UIDs and merging those credentials with the
UNIX credentials is appropriate. A set of utilities called NTMigrate is useful in these environments.
Intro to CIFS
- 16
Usermapper Overview
y Celerra Usermapper is a process that runs on the Data
Mover as a daemon
Automatically assigns UIDs and GIDs to Windows users and groups
Persistently maintains mapping information
Intro to CIFS
Prior to NAS v5.2, the Usermapper service ran on Linux (including the Celerra Control Station), or
UNIX. Starting with v5.2, Usermapper now runs on the Data Mover. The Usermapper Service
automatically generates and maintains a database that maps SIDs to UIDs and GIDs for users or groups
accessing file systems from a Windows domain.
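As a minimal sketch of how the service can be checked from the Control Station (the exact output varies by configuration):
server_usermapper server_2 (display the Usermapper status and role for the Data Mover)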
Intro to CIFS
- 17
User Mapping
y User Logs into domain
Username maps to SID
Sends SID when requesting access
to Data Mover file object
(Diagram: the user's request, carrying the SID, reaches a Data Mover over the external network; each Data Mover maintains its own passwd/group files and mapping cache)
Intro to CIFS
When a request is received, the Data Mover first checks to see if a mapping of the UID/GID exists in
the local Usermapper cache that each Data Mover maintains. If no mapping is found, it next checks
for a mapping in the local passwd/group files, NIS directory, or Active Directory. The order of
query is determined by the nsswitch.conf file. If no mapping is found, a mapping request is
sent to the local Usermapper service (either primary or secondary).
If the primary Usermapper service is unavailable or, if for some reason, it cannot map the user or
group, an error is logged in the server log.
On the Data Mover, the persistent SID-to-UID/GID cache, introduced with DART 5.2, is stored in the file
/.etc/secmap.
Intro to CIFS
- 18
(Diagram: over the internal network, Data Movers forward unmapped SIDs to the Usermapper service, which maintains SID-to-UID and SID-to-GID tables for users and groups)
Intro to CIFS
The Data Mover first determines if it has a mapping for the SID in its local Usermapper cache. If there
is no such mapping, the Data Mover sends a mapping request to the primary Usermapper service
The Data Mover checks its local user and group files and then, if configured, checks the NIS and
Active Directory.
The primary Usermapper service checks its database to determine if this user or group has already
been assigned a UID/GID. If not, the primary Usermapper generates a new UID and GID and adds the
new user or group to its database, along with the mapping. It then returns the mapping to the Data
Mover and the Data Mover permanently caches the mapping.
Note: If the primary Usermapper service is unavailable or, if for some reason, it cannot map the user or
group, an error is logged in the server log.
Intro to CIFS
- 19
Usermapper Implementation
y When a Celerra Server is booted for the first time after
installation, server_2 is automatically configured with a
Primary Usermapper Service
No installation or configuration required
Service is made highly available by configuring standby Data Mover
Intro to CIFS
When a Celerra v5.2+ is booted for the first time, it is automatically configured with the default
Usermapper configuration. In this situation, Usermapper is fully operational and no additional
installation or configuration is required.
By default, all the Data Movers in the cabinet use the internal IP address of the Data Mover in slot 2 as
the location of the primary Usermapper service.
This automatic implementation is not always appropriate. If your environment has more than one
Celerra that share the same domain space, the default configuration should be modified. One Celerra
(server_2) should remain as the primary Usermapper service, and the other cabinets should be
configured with server_2 as secondary. The Data Movers in these cabinets send mapping requests to
their local secondary Usermapper and each secondary Usermapper forwards these requests to the
single primary Usermapper service.
When multiple Data Movers do not share the same Windows domain, each domain should be
configured with its own primary Usermapper service.
Note: As in a standard Celerra configuration, you can configure another Data Mover to serve as a
failover, providing backup for the primary Usermapper service.
Intro to CIFS
- 20
Intro to CIFS
In a multi-Celerra environment only one Data Mover will be configured as the primary. The Data
Mover in slot 2 of each other Celerra in the environment is configured as a secondary. Within a Celerra
system, all mapping requests are directed to the local primary or secondary Usermapper service.
Secondary Usermappers forward mapping requests to the primary, and only the primary can create new
mappings.
Intro to CIFS
- 21
(Diagram: multiple Celerra cabinets connected over the public network; Data Movers on each internal network keep passwd/group files and a mapping cache, and the Usermapper services maintain SID-to-UID and SID-to-GID tables for users and groups)
Intro to CIFS
One instance of the Usermapper service serves as the primary Usermapper service, meaning it assigns
UIDs and GIDs to Windows users and groups. By default, this instance is configured on the Data
Mover in slot 2 (server_2). The other Data Movers in a single cabinet are configured as clients of the
primary Usermapper service, meaning they send mapping requests to the primary service when they do
not find a mapping for a user or group in their local cache.
Other instances of the Usermapper service can serve as secondary Usermapper services, meaning they
collect requests for mappings and forward them to the primary Usermapper service. Typically, you
would only configure a secondary Usermapper service in a multi-cabinet environment.
You should have only one primary Usermapper in a single cabinet. In the situation where the Celerra
is configured to support multiple Windows domains, a primary Usermapper service for each domain
may be configured.
EMC recommends one Usermapper instance (primary or secondary) per cabinet. If it is a large cabinet
populated with 14 Data Movers, it may be beneficial to configure a secondary service to reduce traffic
to the primary.
Intro to CIFS
- 22
Intro to CIFS
- 23
Intro to CIFS
- 24
Intro to CIFS
- 25
Usermapper Considerations
y Configure only one primary Usermapper per Celerra
environment
Otherwise there could be duplicate mappings
Intro to CIFS
- 26
Usermapper Database
y The Usermapper database is backed up as part of the hourly
NASDB backup
y The Usermapper database can be exported
server_usermapper server_2 -Export user usermapper_sample
Creates a file on the control station
Sample format:
S-1-5-15-139d2e78-56b1775d-5475b975-323d:*:11894:903:user:diamond.jim from domain dir:/user/S-1-5-15139d2e78-56b1775d-5475b975-3323d:/bin/ksh
Intro to CIFS
EMC recommends that you do not change the Usermapper database. Changes made to the database
are not reflected by a client Data Mover if the client Data Mover has already cached the existing
Usermapper entry in its local cache.
Intro to CIFS
- 27
(Diagram: Windows user accounts in Active Directory and UNIX users mapped through passwd/group files)
Intro to CIFS
- 28
Windows
Users
y To configure:
Copy passwd and group files from Data Mover to Control Station
Edit files adding Windows user names to passwd file and Domain
name as a group name in the group file
Copy the passwd and group files back to the Data Mover and/or
update NIS master
Intro to CIFS
- 29
Intro to CIFS
- 30
NTMigrate
y Creates UNIX accounts for Microsoft users
Format consistent with passwd/group files or NIS database
Makes sure each user has a unique UID
Assigns new UIDs
Same UID if user exists in both Windows and Unix environments
ntmiunix.pl
Perl script that runs on Unix/Linux hosts and merges passwd/group files
Intro to CIFS
NTMigrate is used in all situations where CIFS and NFS users will be using the same Data Mover
(multi-protocol).
There are two utilities that make up NTMigrate. The first utility is ntmigrat.exe, which is run on a
Windows domain controller for each Windows domain. The second utility is ntmiunix.pl, which is run on
a UNIX system that has Perl installed.
If a Data Mover is servicing both CIFS and NFS clients, it is important that users have the same ID
whether they access the Data Mover from Windows or from UNIX. For example, if Etta Place creates
a file from her UNIX workstation and then accesses the same file from a Windows workstation, she
would rightly expect to still be the owner of that file. Additionally, administering permissions would be
very complex if each user had multiple IDs. Moreover, it is crucial that no two users are assigned the
same ID.
When Usermapper assigns IDs to users, there is little control over what IDs the users will receive. This
is not acceptable in a CIFS/NFS mixed environment. For this reason NTMigrate is the utility of choice
in such mixed settings.
NTMigrate is a static solution. If the user/group account database changes, NTMigrate must be re-run
to be updated.
NTMigrate extracts a list of users and global groups from each Windows domain and combines the
results with existing passwd and group file information from an existing UNIX host or NIS server. The
resulting files are used by the Data Mover to provide all users and groups with a single, unique ID as
they access the Data Mover's file systems.
Intro to CIFS
- 31
(Diagram: Windows user accounts and UNIX passwd/group files are merged on the Control Station, and the resulting passwd/group files are placed on the Data Mover)
Intro to CIFS
- 32
(Diagram: Active Directory users assigned UID/GID attributes)
y To configure:
Install Celerra UNIX User Management component of Celerra CIFS MMC
snap-in
Manually assign UIDs and GIDs to Windows Users
Set cifs.useADMap Parameter to 1 for Data Movers
Intro to CIFS
- 33
Intro to CIFS
The Celerra UNIX User Management is a Microsoft Management Console (MMC) snap-in to the
Celerra Management Console that can be used to assign, remove, or modify UNIX attributes for a
Windows user or group on the local domain and on remote domains. The location of the attribute
database can be either in a local or a remote domain.
You would choose to store the attribute database in the Active Directory of a local domain if:
y You have only one domain
y Trusts are not allowed, or
y You have no need to centralize your UNIX user management information
You would choose a remote domain if:
y You have multiple domains,
y Bi-directional trusts between domains that need to access the attribute database already exist, and
y You want to centralize your UNIX user management
Intro to CIFS
- 34
Extension Snap-in to AD
y Property page to User and
Groups
y Allows administrator to
browse an NIS server or
passwd/group file to match
existing UNIX user/group
Intro to CIFS
Celerra UNIX users and groups property pages are extensions to Active Directory Users and
Computers (ADUC). You can use these property pages to assign, remove, or modify UNIX attributes
for a single Windows user or group on the local domain. You cannot use this feature to manage users
or groups on a remote domain.
Intro to CIFS
- 35
Intro to CIFS
The CIFS Migration tool scans either an NIS server or local UNIX passwd and group files, looking for
names that match existing Windows users/groups in the local and trusted domains. A hierarchy of
discovered users and groups in domains is displayed, and an administrator can select UNIX users and
groups to migrate to Active Directory.
Intro to CIFS
- 36
References
y Configuring Celerra User Mapping Technical Module
P/N 300-002-715 Rev A01 Version 5.5 March 2006
y NTMigrate with Celerra Technical Module
P/N 300-002-719 Rev A01 Version 5.5 March 2006
y Celerra UNIX Attributes Migration Tool online help
* Above are available on the User Information CD
Intro to CIFS
Notes: For instructions on using Celerra UNIX User Management Snap-In or Celerra UNIX Users and
Groups Property Page Extension, refer to the online help. For installation instructions, refer to
Installing Celerra Management Applications technical module. For instructions on using local files,
refer to Configuring CIFS on Celerra for a Multiprotocol Environment.
Intro to CIFS
- 37
Module Summary
y Security Mode = NT is the default and what is recommended for CIFS
environments
y Usermapper is used in a CIFS-only environment
y Usermapper is configured automatically on install
The primary Usermapper service runs on server_2 by default
EMC recommends one primary Usermapper in a Celerra environment and one
Usermapper instance per Celerra cabinet
The Usermapper service is automatic; no installation or configuration is required
y NTMigrate is used when CIFS and NFS users will be accessing the same
Data Mover
Creates UNIX accounts for Microsoft users
An administrator consolidates all of the user/group information into one passwd and
one group file
Static - every time a new user is added to the Microsoft network, the NTMigrate
process must be performed again
Intro to CIFS
The key points for this module are shown here. Please take a moment to read them.
Intro to CIFS
- 38
Closing Slide
Intro to CIFS
- 39
Celerra ICON
Celerra Training for Engineering
Configuring CIFS
-1
Revision History
Rev Number
Course Date
1.0
February 2006
1.2
May 2006
Revisions
Complete
Configuring CIFS - 2
Configuring CIFS
-2
Configuring CIFS
Objectives:
y Describe the interactions between the Data Mover and
services within a Microsoft network environment
Active Directory
DNS
Kerberos
Time Services
Configuring CIFS - 3
In this module we will be discussing making file systems available in a Windows environment. We
will see that the Data Mover is dependent on a number of services provided by the Microsoft network.
A simple CIFS server configuration may not include a domain infrastructure. This type of CIFS server,
called a stand-alone server, does not require external components such as a domain controller, NIS
server, or Usermapper. Users log in to the stand-alone CIFS server through local user accounts. For
this class we are going to focus on a more typical configuration with full integration into a Windows
network environment.
Configuring CIFS
-3
(Diagram: a single Data Mover hosting multiple CIFS servers)
Configuring CIFS - 4
While a Celerra Data Mover is a physical server, a CIFS server is a logical server that emulates the
functionality of a Windows file server. Each Data Mover can be configured with one or more CIFS
servers. Each CIFS server can have its own shares* and can belong to a different Windows domain.
You must configure at least one network interface for each CIFS server.
*In order for a share to be associated with a single CIFS server, that server must be specified in the export statement.
Configuring CIFS
-4
(Diagram: a Virtual Data Mover (VDM-1) on a Data Mover containing multiple CIFS servers)
Configuring CIFS - 5
A Virtual Data Mover (VDM) is a Celerra Network Server software feature that enables you to
administratively separate CIFS servers and their associated resources, like file systems, into virtual
containers. These virtual containers allow administrative separation between groups of CIFS servers,
enable replication of CIFS environments, and allow the movement of CIFS servers from Data Mover
to Data Mover.
VDMs support CIFS servers, allowing you to place one or multiple CIFS servers into a VDM along
with their file systems. The servers residing in a VDM store their dynamic configuration information
(such as local groups, shares, security credentials, and audit logs, etc.) in a configuration file system. A
VDM can then be loaded and unloaded, moved from Data Mover to Data Mover, or even replicated to
a remote Data Mover, as an autonomous unit. The servers, their file systems, and all of the
configuration data that allows clients to access the file systems, are available in one virtual container.
The CIFS server information included in a VDM (and thus portable) includes:
Local group database for the servers in the VDM
Share database for the servers in the VDM
CIFS server configuration (compnames, interface names, etc.)
Home directory information for the servers in the VDM
Auditing and Event log information
Kerberos information for the servers in the VDM
Configuring CIFS
-5
Configuring CIFS - 6
In the context of Celerra Management, availability refers to bringing the Celerra CIFS Server and its
file systems resources into the Microsoft network.
When configuring a Celerra Data Mover for CIFS, there are three aspects under what we refer to as
availability:
y The Data Mover requires certain prerequisites in order to successfully join the domain
y The actual act of joining the domain, causing the CIFS server to appear in the Active Directory
y The file systems on the Data Mover must be exported specifying the CIFS protocol and a share
name
The CIFS protocol must also be manually started. NFS is the default file sharing protocol for EMC
Celerra and is started automatically.
The steps taken to make a CIFS server and its file systems appear on the network are the same
regardless of whether the file systems are accessed in a CIFS-only environment or also support NFS clients.
Configuring CIFS
-6
Configuring CIFS - 7
Active Directory lists resources and services available in the Microsoft network.
One of the primary goals in configuring CIFS is for Active Directory to hold the account for the Data
Mover's CIFS server(s). A successful configuration will result in an EMC Celerra container holding
the CIFS server account being created inside Active Directory Users and Computers. Alternatively,
the Data Mover's CIFS server account can be placed in another AD container, if so desired.
In Windows 2000 Native Mode, there are no longer Primary and Backup Domain Controllers; there are
just Domain Controllers. All DCs hold a writeable version of the directory. Windows 2000 uses LDAP
(Lightweight Directory Access Protocol) to access Active Directory, and Active Directory
information is replicated among all of the Domain Controllers in the domain.
Configuring CIFS
-7
Configuring CIFS - 8
Within a network environment there is a need to map hostnames to IP addresses and IP addresses to hostnames. This
could be done using local hosts files on each Data Mover, but that is much more difficult to manage.
DNS is a network service that provides this address resolution. When a Data Mover is joining a W2K/3
domain, it must be configured to use DNS.
Windows 2000 features Dynamic DNS (DDNS), which permits hosts to register their own DNS records. In the
case of the Celerra File Server, when correctly configured, the Data Mover should automatically register
its entries in both the forward and reverse lookup zones in the DDNS database.
Windows 2000 is completely dependent on the functionality of DDNS. It is by means of DDNS that all
servers and services locate one another across the Windows 2000 enterprise. DDNS is said to live in
Active Directory, yet Active Directory cannot function without DDNS.
Windows 2000 Native Mode no longer requires WINS for any name resolution. However, it is possible
that it could be present to provide backward compatibility to some older clients.
Note:
WINS provides NetBios to IP address (and IP address to NetBios) name resolution.
DNS provides host name to IP address (and IP address to host name) name resolution.
Configuring CIFS
-8
Configuring CIFS - 9
In order for the Data Mover to be able to update DNS with its IP information, Allow Dynamic
Updates must be set to Yes on both the Forward and Reverse Lookup Zones.
Part of the preparation required to configure CIFS for the Windows 2000 environment is to verify that
Dynamic DNS is configured.
If DDNS is not supported in the environment, you must manually update the DNS server.
Configuring CIFS
-9
y Example:
server_dns server_2 corp.hmarine.com 192.168.64.15
Configuring CIFS - 10
Command:
server_dns server_x <DM_DNS_suffix> <IP_of_DNS_server>
Example:
server_dns server_2 corp.hmarine.com 192.168.64.15
Remember that the domain name entered is not necessarily the root domain name, but rather the
domain or sub-domain in which the Data Mover will be located (the Data Movers DNS suffix).
Configuring CIFS
- 10
hmarine.com
hmarine.com
Sub Domain:
corp.hmarine.com
Data Mover
Configuring CIFS - 11
When entering the server_dns command, specify the DNS suffix of the Data Mover, not the root
domain name or the domain where the DNS server resides. For example, Hurricane Marine has two
domains: the root domain is hmarine.com and the sub-domain is corp.hmarine.com.
You will be placing the Data Mover in the corp Windows 2000 Active Directory domain. Therefore,
your DNS configuration should coincide, and the domain indicated in your server_dns command
will be corp.hmarine.com.
Configuring CIFS
- 11
Authentication
y Data Mover must authenticate user requests
y In Windows 2000/2003 environment, Kerberos V5 is used
Data Mover uses Kerberos Keys to confirm client credentials within
Kerberos realm
DM configured for Windows Kerberos realm during the Join process.
Configuring CIFS - 12
Authentication is proving you are who you say you are. Windows NT uses NTLM (NT LAN
Manager) for user authentication, whereby a user would logon to the NT domain providing a username
and password. NT would then issue the user an Access Token identifying the user to any necessary
resources. The Access Token would remain until the user logged off of the system.
Windows 2000 employs Kerberos V5 technology providing a more secure, more complex logon
process. Unlike the NTLM token, the Kerberos ticket is time sensitive. Therefore, it is key to a
successful configuration that the date and time of the Data Mover be synchronized with the Windows
2000 domain.
Kerberos is a network authentication protocol that was designed at the Massachusetts Institute of
Technology (MIT) in the 1980s to provide proof of identity on a network. The Kerberos protocol uses
strong cryptography so that a client can prove its identity to a server (and vice versa) across an insecure
network connection. After a client and server have used Kerberos to prove their identity, they can also
encrypt all of their communications to assure privacy and data integrity as they go about their business.
Rather than Kerberos' usual password-hash based secret key, Microsoft chose to add its own
extensions, which makes its implementation of Kerberos slightly nonstandard, but still allows for
authentication with other networks that use Kerberos 5.
DART uses the UDP protocol by default, but will switch to TCP when triggered by the proper status
message from the KDC (Key Distribution Center). The use of TCP allows for a larger ticket size.
This process is completely transparent to the user.
Configuring CIFS
- 12
Time Synchronization
y The DM may be configured to use an NTP (Network Time Protocol) or
SNTP (Simple Network Time Protocol) client to synchronize its clock
with that of an NTP server.
SNTP/NTP enables all DMs to synchronize from a single clock source,
keeping timestamps on files and directories accurate
Data Mover time must be maintained to within 5 minutes of the Kerberos
server
SNTP/NTP must be configured when joining the DM to a W2K/3 domain
Configuring CIFS - 13
Since Kerberos is time sensitive, it is necessary for the Data Mover to be synchronized with the date
and time of the KDC (Key Distribution Center). To do this, first set the date and time of the Data
Mover as close as possible to the appropriate values using the server_date command.
In order to maintain this synchronized state, start the Network Time Protocol on the Data Mover to
synchronize with an appropriate time source.
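As a hedged illustration (the date value is arbitrary, assuming the yymmddhhmm format, and the time source reuses the 10.127.50.161 address shown in the statistics below), the date could first be set and the NTP time service started as follows:
server_date server_2 0602151430
server_date server_2 timesvc start ntp 10.127.50.161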
To verify NTP functionality (look for hits and poll hits):
server_date server_2 timesvc stats ntp
Time synchronization statistics since start:
hits=1,misses=0,first poll hits=1,miss=0
Last offset:0 secs,-3000usecs
Time sync hosts:
0 1 10.127.50.162
0 2 10.127.50.161
Command succeeded: timesync action=stats
Configuring CIFS
- 13
Time Synchronization
y Data Movers > server_2
Configuring CIFS - 14
This slide shows how to configure an NTP server using Celerra Manager.
Configuring CIFS
- 14
y Command syntax:
server_cifs server_x -add
compname=<DM_name>,domain=<domain_FQDN>,
interface=<IF_name>
y Example:
server_cifs server_x -add
compname=cel1dm2,domain=corp.hmarine.com,
interface=cge0-1
Configuring CIFS - 15
Once you have configured your Data Mover to interoperate in a Windows 2000 environment, you must
declare a computer name for the CIFS server being added, as well as the fully qualified domain name
of the Windows domain that you will join. You do this using the server_cifs command.
server_cifs server_x -a compname=<computer_name>,
domain=<fqdn_domain_name>,netbios=<netbios_name>,
interface=<if_name>
where:
compname=<computer_name> is the computer name of the CIFS server you wish to add.
domain=<fqdn_domain_name> is the Fully Qualified Domain Name of the Windows domain
you will join.
netbios=<netbios_name> specifies a NetBIOS name if different from the compname.
interface=<if_name> specifies an interface to be used
The maximum number of CIFS servers that can be defined per Data Mover is 512.
Configuring CIFS
- 15
Configuring CIFS - 16
If a CIFS server is to use multiple network interfaces, repeat the previous command for each interface
to be added.
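For example, a sketch of adding a second interface to the same CIFS server (the interface name cge0-2 is assumed; only the interface= value changes from the earlier example):
server_cifs server_2 -add compname=cel1dm2,domain=corp.hmarine.com,interface=cge0-2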
If the interface= flag is omitted entirely, then the CIFS server takes all unused interfaces. This is
referred to as the default CIFS server.
Note: Since CIFS servers are linked to network interfaces, the deletion (or modification) of a network
interface will require updating the appropriate CIFS server
Configuring CIFS
- 16
y To add aliases to a CIFS server repeat the server_cifs add command for each alias
$ server_cifs server_2 -add
compname=cel1dm2,domain=corp.hmarine.com,
alias=diamond
Configuring CIFS - 17
Data Mover's CIFS server has a compname (for Windows 2000/2003) and a NetBIOS name (for
backward compatibility). The compname and NetBIOS name are the same by default, though they can
be configured to be different. Aliases provide multiple, alternate identities for a given NetBIOS name.
Because NetBIOS aliases act as secondary NetBIOS names, the aliases share the same set of local
groups and shares as the primary NetBIOS name. Aliases can be added to an existing server or created
when creating a new server. For aliases, you do not need to create accounts in the domain. Aliases
must be unique both:
y Across a Windows domain for WINS registration and broadcast announcements
y On the same Data Mover to avoid WINS name conflicts
Adding NetBIOS names
In contrast, one could add extra NetBIOS names to a CIFS server. These machine names will be
associated with different local groups. For example, if one were to consolidate two old servers with
same or different NetBIOS names from two different domains with different local groups, additional
NetBIOS machine names could be used with NetBIOS aliases. Each NetBIOS name should have an
account in the domain, and that join process would have to be accomplished as well.
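As a hedged sketch (the NetBIOS name, NetBIOS domain name, and interface are hypothetical), an additional NetBIOS machine name could be added with the same server_cifs -add syntax, after which that name would also have to be joined to its domain:
server_cifs server_2 -add netbios=oldsrv1,domain=corp,interface=cge0-2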
Configuring CIFS
- 17
y Example:
server_cifs server_2 -J
compname=cel1dm2,domain=corp.hmarine.com,
admin=administrator
Server_2: Enter Password *******
Configuring CIFS - 18
Configuring CIFS
- 18
Configuring CIFS - 19
The Organizational Unit identifies the hierarchy where the Data Mover lives in the Active Directory. The
default organizational unit (OU) for a Data Mover's CIFS server is ou=Computers,ou=EMC Celerra.
To specify a different organizational unit (new or existing), use the ou= option when joining the
domain.
Example:
To configure server_2 to join the compname cel1dm2 to the domain corp.hmarine.com in the File
Servers Organizational Unit:
server_cifs server_2 -J
compname=cel1dm2,domain=corp.hmarine.com,admin=administrator,
ou="ou=File Servers"
Configuring CIFS
- 19
Configuring CIFS - 20
Click on CIFS in the tree hierarchy > click on the CIFS servers tab > click new.
Join the Domain and specify the Organizational Units.
Configuring CIFS
- 20
y Example:
$ server_setup server_2 -P cifs -o start
y Stop and restart after changes
Usermapper
WINS
Security mode
Configuring CIFS - 21
After completing configuration of CIFS, the protocol must be started using the server_setup command
to activate the CIFS protocol for each Data Mover.
CIFS must be stopped and restarted for any changes in the configuration to take effect. Such changes
include, but are not limited to:
y Adding/changing the External Usermapper address
y Adding/changing the address of the WINS server
y Changing the security mode
The server_setup command would also be used to remove all CIFS configurations, which is often
useful in troubleshooting CIFS.
Command syntax:
To start the CIFS protocol, type the following command:
server_setup server_2 -P cifs -option start
To stop the CIFS protocol, type the following command:
server_setup server_2 -P cifs -option stop
To delete CIFS configurations, type the following command:
server_setup server_2 -P cifs -option delete
Configuring CIFS
- 21
Starting CIFS
Configuring CIFS - 22
Select CIFS from the tree hierarchy > click on the Configuration tab > click in the CIFS Service
Started box
Configuring CIFS
- 22
Same file system may be exported once for NFS and again for CIFS
y Command syntax:
server_export server_x -Protocol cifs -name
<share_name> <path_name>
y Example:
server_export server_2 -P cifs -n data /mp2
Configuring CIFS - 23
The share name is the name that the file system will be displayed as on the network. It does not have
to be the same name as the mountpoint and it can be hidden.
File systems can be mounted as either:
Read/Write: When a file system is mounted read/write (default) on a Data Mover, only that Data
Mover can access the file system. Other Data Movers cannot mount the file system.
Read-Only: When a file system is mounted read-only on a Data Mover, clients cannot write to the file
system regardless of the export permissions. A file system can be mounted read-only on several Data
Movers concurrently, as long as no Data Mover has mounted the file system as read/write.
You can export the path as a global share (accessible from all CIFS servers on the Data Mover), or as a
local share (accessible from a single CIFS server). To create a local share, you must use the netbios=
option of the server_export command to specify from which CIFS server the share is accessible. If you
do not use the netbios= option, shares created with server_export are globally accessible by all CIFS
servers on the Data Mover.
Configuring CIFS
- 23
Configuring CIFS - 24
This slide shows how to export a file system for CIFS using Celerra Manager.
Configuring CIFS
- 24
y For a local share you must use the netbios= option of the
server_export command
y Command:
server_export server_x -P cifs -n <sharename>
-o netbios=<netbios_name> <path_name>
Configuring CIFS - 25
When you create a share, you can export the path as a global share (accessible from all CIFS servers on
the Data Mover), or as a local share (accessible from a single CIFS server and all of its aliases). To
create a local share, you must use the netbios= option of the server_export command to specify from
which CIFS server the share is accessible. If you do not use the netbios= option, shares created with
server_export are globally accessible by all CIFS servers on the Data Mover.
Normally, shares created through Windows administrative tools are local shares and only accessible
from the CIFS server used by the Windows client. However, the cifs srvmgr.globalShares parameter
lets you change this behavior so that shares created through Server Manager or MMC are global
shares.
Configuring CIFS
- 25
y Example:
server_export server_2 -P cifs -n sharedata
/mp2/subdir
Configuring CIFS - 26
It is also possible to export a directory in a file system rather than the file system itself. The UxFS file
system contains .etc and lost+found at its root. These are key to the function and integrity of the file
system. Few Microsoft users will know to stay away from these objects. Exporting a directory
instead of the file system effectively hides these two objects from the Microsoft network.
Configuring CIFS
- 26
y Example:
server_export server_2 -P cifs -n data$ /mp2
Configuring CIFS - 27
It is possible to hide a CIFS-exported file system. Hidden shares can be directly accessed using the
correct UNC path, but they are not displayed in Network Neighborhood (usually for security reasons).
Note: If using Celerra Manager, append a $ to the CIFS share name field when exporting the file
system.
Configuring CIFS
- 27
y Command:
$ server_cifs mover_name -Unjoin compname=name,
domain=domain_FQDN,admin=name
y Example:
$ server_cifs server_2 -U compname=cel1dm2,
domain=corp.hmarine.com,admin=administrator
server_2 : Enter password:********
Configuring CIFS - 28
Before deleting a CIFS server, or changing the domain to which it is joined, the CIFS server should be
unjoined from the Windows domain. Unjoining the domain removes the server account from Active
Directory and removes the associated entries from DNS (if Dynamic DNS is being employed). As in
joining a domain, you will be required to provide an administrator name and password to perform this
task.
If you delete a CIFS server without first unjoining the domain, a ghost of the CIFS server account will
remain in Active Directory.
Configuring CIFS
- 28
y Example:
$ server_cifs server_2 -delete compname=hugo
Configuring CIFS - 29
Deleting a CIFS server from a Data Mover removes all associated aliases as well. All network
interfaces used by the CIFS server are freed.
Configuring CIFS
- 29
y Example:
y $ server_setup server_2 -P cifs -o stop
y $ server_setup server_2 -P cifs -o delete
y WARNING:
Deleting an entire CIFS configuration also clears the
CIFS configurations for all VDMs on that physical DM
Configuring CIFS - 30
Configuring CIFS
- 30
Stand-alone Servers
y In an enterprise environment, the CIFS servers will most
likely be integrated into a Windows 2000/2003 Domain
y Stand-alone CIFS servers are a low-overhead alternative
for small environments
Do not require external components, such as a domain controller,
NIS server, or Usermapper
Allow users to log in through local user accounts stored on the Data
Mover
May use the security=NT option to provide full NT authentication for
users logging into the servers and ACL checking to authorize user
access to storage objects
Configuring CIFS - 31
A stand-alone CIFS server on a Data Mover is the equivalent of a workgroup server in a Microsoft
environment.
A stand-alone CIFS server provides these advantages over a CIFS server with SHARE authentication:
y Unicode support.
y Full NT authentication for users logging onto the server. In addition, if you desire to provide very
simple user access, you can enable the default Guest account and assign limited rights and
privileges to the Guest account.
y ACL checking to restrict user access.
y Support for NT commands.
y Access to files larger than 2 GB.
y Improved performance for Write operations.
Configuring CIFS
- 31
Example:
server_cifs server_2 -add standalone=dm2cge01,
workgroup=EngLab,interface=cge0-1,local_users
Configuring CIFS - 32
Configuring CIFS
- 32
Configuring CIFS - 33
Configuring CIFS
- 33
Configuring CIFS - 34
Configuring CIFS
- 34
10.127.56.144
Configuring CIFS - 35
Configuring CIFS
- 35
\\10.127.56.144\sashare
Configuring CIFS - 36
Configuring CIFS
- 36
DNS updated
Configuring CIFS - 37
Configuring CIFS
- 37
.
.
.
2004-11-05 15:52:18: SMB: 4: DomainJoin::addAdminToLocalGroup: User
administrator added to Local Group
2004-11-05 15:52:18: ADMIN: 4: Command succeeded: domjoin
compname=marketing.dmain1.com domain=DMAIN1.COM admin=administrator
password=**************** ou="ou=Computers,ou=EMC Celerra" init
Configuring CIFS - 38
Configuring CIFS
- 38
Configuring CIFS - 39
Configuring CIFS
- 39
$ server_log server_2
--------- skipped ------------
2002-06-11 08:22:35: SMB: 3:
filtering
Configuring CIFS - 40
Configuring CIFS
- 40
Configuring CIFS - 41
Configuring CIFS
- 41
y Wrong Password
Password is case sensitive
Configuring CIFS - 42
Configuring CIFS
- 42
$ server_log server_2
--------- skipped ------------
2002-06-11 08:37:10: ADMIN: 4: Command succeeded: cifs add
compname=DM2-ANA12 domain=NATIVE.WIN2K interface=ana12
2002-06-11 08:37:25: SMB: 3: DomainJoin::doDomjoin: Computer account
'dm2-ana12' already exists.
2002-06-11 08:37:25: ADMIN: 3: Command failed: domjoin compname=dm2-ana12 domain=native.win2k admin=admin2 password=#=7%- init
Configuring CIFS - 43
Configuring CIFS
- 43
Configuring CIFS - 44
Configuring CIFS
- 44
CIFS Threads
y Threads are considered a pool of compute resources
Each request from the client:
Obtains a thread from the pool
executes,
and then returns the used thread back to the pool.
Configuring CIFS - 45
Configuring CIFS
- 45
Module Summary
Key points covered in this module are:
y Windows 2000 uses Kerberos, by default, to
authenticate Windows users
y Kerberos is time-sensitive and requires that the Data
Mover and Windows Domain Controller be in sync for a
successful Data Mover join to occur
y Dynamic DNS permits hosts to register their own IP
address with the DNS server
y When joining a Data Mover to the domain, the default
organizational unit is ou=Computers,ou=EMC Celerra
y A share name is used when exporting a file system for
CIFS users to access
y server_log is the first point in troubleshooting!
Configuring CIFS - 46
The key points for this module are shown here. Please take a moment to read them.
Configuring CIFS
- 46
Closing Slide
Configuring CIFS - 47
Configuring CIFS
- 47
Celerra ICON
Celerra Training for Engineering
-1
Revision History
Rev Number
Course Date
1.0
February 2006
1.2
May 2006
Revisions
Complete
Updates and enhancements
-2
Permissions are also known as authorization: who has access to what. In this module we will discuss
how to manage permissions in a Windows-only environment. Basically, we handle it the way
we would if the Data Mover were a Windows server. Later we will discuss how we handle permissions
in a mixed environment where we have NFS and CIFS users sharing access to the same file systems.
-3
Permissions are different from security. In this course, security relates to proving that users are who
they say they are, generally by providing a username and password. Permission deals with the actions a
given user can or cannot perform with the object associated with those permissions. While security
usually revolves around accessing a system, permissions are associated with a specific object, such as a
file or directory.
-4
y Managing permissions:
Create global and/or local groups
Add users to groups
Assign permissions to Groups (or Users)
Access Control List (ACL)
User: Single account in a Microsoft Windows environment. For example, a user named Etta Place.
Global Group: Groups of users within a particular Windows domain. Global Groups are usually
modeled after users' job functions (e.g., Propulsion Engineers, Eastcoast Sales, and Managers). Global
Groups can contain users from that domain.
Local Group: Local Groups are created/managed on individual servers. Local Groups bring together
those that will need similar permissions to certain objects. Local Groups are modeled after different
kinds of activity (for example, Printer Operators, Backup Operators, Server Operators) and can contain
Users and/or Global Groups from the same Windows domain and/or trusted domains.
Domain Local Groups: With W2K, Domain Local Groups serve the same purpose as Local Groups but
are stored in Active Directory and are accessible across the domain.
Managing permissions for CIFS involves the following steps:
y Creating Local and or Global Groups
y Adding users to the Groups
y Configure Access Control List for file system objects to assign various levels of permissions to
groups or users
y Careful planning is required
Managing permissions on file system objects on a Celerra is no different from the way you would do it
on a Microsoft server.
-5
File System
Objects
Local
Groups
There are many ways to organize users and groups, one way is to use a hierarchical model for
managing users, groups, and permissions:
y Users in a domain are placed in Global Groups of that domain.
y Domain Global Groups are placed in Local Groups on the local servers.
y Permissions (ACLs) to objects on the server are assigned to the Local Groups.
Note: Permission can also be assigned to a user, but this is not the recommended practice.
-6
Windows
NT domain
domain
Data Mover
To manage users and groups for CIFS on a Data Mover, you will need to use the Computer
Management Console (from Administrative Tools) and the Connect to another computer function.
The Windows domain(s) will contain users, Global Groups, and Domain Local Groups. Local Groups,
however, exist on the Celerra's Data Mover. These Data Mover Local Groups must be created from
Windows using the Computer Management console.
-7
In order to create a Local Group on the Data Mover, you will need to connect the Computer
Management console to the Data Mover using the Connect to another computer function. The following
pages will illustrate both connecting Computer Management to the Data Mover, and the creation of
Local Groups on the Data Mover (e.g. \\cel1dm2).
To connect the Computer Management console to a Data Mover:
y Log on to Windows as a domain administrator
y Click Start > Programs > Administrative Tools > Computer Management
y Right-click on Computer Management (Local) > Connect to another computer
y In the Select Computer dialog box, locate and select the name of the Data Mover's CIFS server.
y Click OK.
This should connect the Computer Management console to the Data Mover. To verify, check the top
level of the Tree window pane for the computer name of the Data Mover's CIFS server (e.g.
Computer Management (CEL1DM2.CORP.HMARINE.COM))
-8
-9
Assigning permissions
Once you have created Local Groups on the Data Mover, you can assign permissions to CIFS exports
using Microsoft Windows.
File level permissions
File level permissions are set and effective at the object level (e.g. directory or file). The options for
file level permissions are more extensive than share level permissions. Some examples of file
permission options are Read, Write, List folder contents, Traverse folder, Read permissions,
Change permissions, Take ownership, etc.
Share level permissions
Share level permissions are set only at the path that was specified when creating the share. The set
permissions have effect on the entire contents of the share. The options for share permissions are
limited to Full Control, Change, and Read.
File level permissions can be managed from Windows Explorer and from the Computer Management
console's Share properties page. From either of these locations, choose the Security tab to access file
level permissions.
- 10
Basic permissions
Advanced permissions
Managing Permissions in a CIFS-only Environment - 11
To view the permissions set on a particular folder/file, right click the folder/file > properties > select
the Security tab. The Basic permissions will be displayed as shown on the left of this slide. You can
add/delete the appropriate user and define specific permissions for that user. If these permissions are
not granular enough, click Advanced and you will be presented with the Advanced permissions as
shown on the right side of this slide. More granular permissions can be set here.
- 11
To use the Computer Management Console in Windows to manage the share permissions on a Data
Mover's CIFS server:
y Connect the Computer Management console to the Data Mover's CIFS server
y Click on System Tools > Shared folders > Shares to view the shares on the CIFS server
y Locate the share for which you want to manage permissions
y Use the Share Permissions tab to set the required permissions
- 12
y Example:
lrwxrwxrwx 1 kcb eng
(callouts on the slide identify the Group and Other portions of the mode bits)
- 13
y Problems:
Cannot securely map security information from NFS to CIFS and vice versa.
Direct mapping between NFS permissions and CIFS ACLs opens huge
security vulnerabilities.
y Solution:
Each file and directory must have a set of NFS and CIFS permissions
NFS for owner / group / others (rwx)
CIFS ACLs made up from ACEs
It is semantically impossible to map CIFS ACLs to NFS permissions and vice versa and maintain
security (NFS rwx vs. CIFS rwxpdo). The best solution is to maintain two sets of permissions: one
for UNIX and one for CIFS. In environments where UNIX and Windows users need access to the
same set of objects, the actual permissions are determined by the access policy specified when the file
system was mounted using the server_mount command.
- 14
Access Policies
y If we maintain two sets of permissions, which permissions must
users satisfy?
Only their native method
Both CIFS and NFS
y Options:
NATIVE (default)
NT
UNIX
SECURE
MIXED
MIXED_COMPAT
Note:
- 15
[Diagram: for each access policy (NATIVE, NT, UNIX, SECURE), arrows show whether CIFS users and NFS users must pass the CIFS permissions, the NFS permissions, or both, to reach the file system]
Managing Permissions in a CIFS-only Environment - 16
NATIVE option
This is the default access checking policy. Under this policy CIFS users will be checked by the
permission defined in the CIFS ACL for a given object. The CIFS users ignore any permission setting
defined for UNIX. Similarly, NFS users must satisfy the UNIX permissions, ignoring any CIFS
permissions.
NT option
When the NT access checking policy is in place, CIFS users will only be required to pass the CIFS
permissions. The NFS users, however, will be required to pass both NFS and CIFS permissions.
UNIX option
The UNIX access policy is the opposite of the NT policy. Access for NFS users will be controlled by
the UNIX permissions only, whereas CIFS user access will be checked by both CIFS and UNIX
permissions.
SECURE option
The SECURE policy requires both CIFS and NFS users to have their access checked by both CIFS and
UNIX permissions.
Note: The red arrows on the slide illustrate which permissions the different users must pass through
for each Access Policy. The UNIX option is highlighted because this is the Access Policy that will be
employed in the lab exercises.
- 16
y MIXED
UNIX Mode bits -> Windows Owner, Group, & Everyone
Windows ACL -> Unix Owner, Group, & Other
y MIXED_COMPAT
UNIX Mode bits -> Owner & Everybody (Everyone = Unix Group)
Windows ACL -> Unix Group & Other
The system synchronizes permissions on existing files and directories when changing from a non-MIXED access policy to MIXED or MIXED_COMPAT.
Reference the Managing Celerra for a Multiprotocol Environment technical module for examples of
mapping.
- 17
[Table: mapping of Windows ACL file and directory permissions (Read Data, Read Attribute, Write Data, Append Data, Write Attribute, Delete, Read Permissions, List Folders, Create Files, Create Folders) to the UNIX mode bits R, W, and X]
The table above shows how the MIXED/MIXED_COMPAT access policy maps Windows ACL file
and directory permissions into UNIX Mode bits.
- 18
[Table: mapping of the UNIX mode bits R, W, and X to Windows ACL permissions such as List Folders, Read Data, Read Attribute, Write Attribute, Delete, Read Permissions, Change Permissions, and Take Ownership]
The table above shows how the MIXED/MIXED_COMPAT access policy maps UNIX Mode bits file
and directory permissions into Windows ACLs.
- 19
[Table: how MIXED and MIXED_COMPAT map the UNIX group and other mode bits to the Windows user's Primary Group and the Everyone group (other is ignored under MIXED_COMPAT)]
MIXED and MIXED_COMPAT are very similar. The MIXED_COMPAT policy was designed for
compatibility with the methods used by other vendors. The key difference between these two access
checking policies involves how each one maps the UNIX Mode group and other entities.
The MIXED policy maps the group Mode bit to the Windows user's Primary group. In Windows the
Primary group assignment is not mandatory. Therefore, if this policy is used, it is important that the
Primary group for Windows users is being assigned. The UNIX Mode bit other is mapped to the
Windows Everyone group.
The MIXED_COMPAT policy maps the group Mode bit to the Windows Everyone group. The
UNIX Mode bit other is ignored.
- 20
Module Summary
y Permission: the actions a user can or cannot perform on an
object
CIFS-only environments handle permissions as a Windows server does
NFS users handle permissions as is native to UNIX
- 21
Closing Slide
- 22
Celerra ICON
Celerra Training for Engineering
-1
Revision History
Rev Number
Course Date
1.0
February 2006
1.2
March 2006
Revisions
Complete
Update and enhance
diagrams
-2
Objectives
y Describe some of the considerations when Windows and
UNIX clients access the same objects
y Describe some of the CIFS features supported on the
Celerra and how to set them up and manage them
Celerra Antivirus Agent (CAVA)
Home Directory support
Distributed File System (DFS)
Group Policy Objects
-3
Other considerations:
File Locking options
File naming conventions
Handling of Unix Symbolic Links
File attributes
-4
y Solution:
Data Mover maps CIFS deny modes into NFS locks and NFS locks into CIFS Deny
Modes
A Deny Read-Write mode request translates to an NFS exclusive read-write lock
An NFS shared read lock will be translated into a CIFS Deny Write mode
-5
wlock
NFS reads will still be allowed, but writes will not
rwlock
NFS reads and writes will be denied if a CIFS lock applies
y Example:
server_mount server_2 -o rwlock fs01 /mp1
Lock policies
Some applications support the ability to lock files that are in use. CIFS enforces strict file locking and sharing
semantics to mediate access from multiple clients to the same file. While NFS locks are advisory, CIFS locks are
mandatory. When a user locks a file, CIFS does not allow any other user access to the file. NFS locking rules are
cooperative, so that a client is allowed to access a file locked by another client if it does not use the lock
procedure. Each file system can have its own lock policy.
File locking options for the server_mount command
There are three types of file locking options:
No locking (Default, least secure): server_mount server_2 -o nolock fs01 /mp1
Even if a file is locked by CIFS, NFS client reads and writes are allowed. NFS clients can read and write files
locked by CIFS unless the NFS client attempts to lock the file. NFS clients see behavior that is consistent with
NFS semantics. Since NFS locks are advisory, even if an NFS client locks a file, reads and writes from CIFS
clients are allowed to that file regardless of the locking policy. However, if a CIFS client requests a lock on a
file already locked by NFS, the request is denied regardless of the lock policy.
Read only locking (Write lock): server_mount server_2 -o wlock fs01 /mp1
If a file is locked by CIFS, all NFS writes are denied. NFS clients can read files.
Read/write locking: server_mount server_2 -o rwlock fs01 /mp1
If a file is locked by CIFS, all NFS reads and writes are denied. When a file system is used by NFS clients, this
policy guarantees data coherency from both clients.
-6
Opportunistic locks
Opportunistic locks, also known as Oplocks, are locks placed by a client on a file residing on a
server. In most cases, a client requests an oplock so it can cache data locally, thus reducing network
traffic and improving apparent response time. Reference: MSDN.Microsoft.com
Opportunistic locks are configured per file system and are on by default. Unless your organization is
using a database application that recommends that oplocks be turned off, or if you are handling critical
data and cannot afford the slightest data loss, you can leave oplocks on.
Turning on oplocks
To turn on oplocks, specify oplock in the mount options. For example, type the following command:
server_mount server_2 -o oplock fs01 /mp1
Turning off oplocks
To turn off oplocks, specify nooplock in the mount options. For example, type the following
command:
server_mount server_2 -o nooplock fs01 /mp1
Opportunistic Locks can only be configured using the CLI.
-7
Symbolic links
Symbolic links are files created by UNIX users that point to another file or directory. CIFS clients are able to
follow symbolic links because they behave in a similar fashion as a Microsoft Windows shortcut.
DOS attributes
The DOS attributes apply to the symbolic link, not the target file or directory.
Deleting links
If a symbolic link refers to a directory, and a CIFS user attempts to delete the link, the link and all of the files and
directories in the directory to which the link referred are deleted. Microsoft clients are unaware of symbolic links
and interpret a delete operation as an attempt to delete the directory and all of its contents.
When a lock is set on a symbolic link, the lock is held by the target file.
Supporting up links
Additionally, the Celerra Network Server does not by default support either symbolic links that contain full path
names (/dir1/dir3/foo) or symbolic links that refer up from the directory in which you encounter the symbolic
link (../foo). However, a symbolic link such as dir2/dir3/foo is supported.
File System Linking
NAS v5.3 supports CIFS file system linking. File system linking allows CIFS client access to several file
systems from a single share.
-8
NFS creates:    CIFS sees:
Filename        Filename
filename        filename~1
Filename~1      Filename~1~1
DOS Attributes
y Object attributes for UNIX and Windows objects are not
identical
For example: Creation Dates apply to Windows objects
- 10
CIFS Features
y One of the design goals of the Celerra is to seamlessly
integrate into a Windows environment
Emulates Windows Server functionality while providing high
performance and availability
Again, one of our goals with the Celerra file system is to provide all the functionality of a Windows
Server while providing high availability and performance. To do this, Celerra must support similar
features. Above is a list of some of these features that we support. We will only be covering a subset
of these.
- 11
- 12
y Prerequisites
CIFS must be configured and started
User/Group mapping must be functioning properly (e.g. Usermapper)
y Restrictions
NT security only
Share name HOME is reserved and cannot be used in whole or in
part in any other share name, directory, mountpoint, or file system
- 13
y Scalability
Accommodates a single user or up to thousands of users
Can be spread over multiple file systems
In v5.4, the Home directory feature provides support for extended regular expressions (ERE) for the
/.etc/homedir configuration file. This allows users to dramatically decrease the configuration file size
in each server. It also provides a more flexible mapping of the users. A user or a group of users having
common characteristics in their names and domain names can be mapped with a single line. During
parsing, more than one line may be matched, but only the latest line matched is used for mapping. The
following special characters found anywhere outside bracket expressions are supported: ^ . [ $ ( ) | * +
?{ \.
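As a hedged illustration of an ERE-based entry (the domain, pattern, and path are hypothetical), a single homedir line could map every corp user whose name is eng followed by digits to a directory under /userdata1:
corp:eng[0-9]+:/userdata1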
The benefits to the customer from these enhancements include:
Ease of administration
y Has a single share name for all users' home directories on a given Data Mover or Virtual Data Mover
y Multiple users can be mapped with just a single database entry
y Regular expressions allow extremely powerful and flexible mapping
Scalability
y Accommodates a single user or up to thousands of users
y Users within a single domain can be spread over multiple file systems
Integration with Microsoft Windows
y Management is performed through snap-in to Microsoft Management Console (MMC)
y CIFS login information used to search map database
- 14
Create user dirs
Edit user profiles
To enable the home directory feature for a Data Mover, you must have created the CIFS service and
then complete the following steps:
1. Create the map file (/.etc/homedir on the Data Mover)
2. Enable home directories on the Data Mover
Note: The home directory feature is disabled by default
3. Export an administrative share for creation of users' directories
4. Create the users' home directories
5. Add home directories to users' profiles
- 15
Create user dirs
Edit user profiles
To configure EMC Celerra for home directory support, the administrator must create each user's directory, and create a
map file that contains a mapping of each domain user to the home directory location on the Data Mover. The map file is
/.etc/homedir (the file does not exist by default) and is a series of text lines in the following format:
domain:username:/path
The following examples are methods of configuring the homedir file.
Example 1: Specify all variables
y corp:eplace:/userdata1
y corp:semm:/userdata1
y hmarine:administrator:/userdata2
y Result: each user is mapped to the path specified.
Example 2: Specify domain and path, and use a wildcard to define users
y corp:*:/userdata1
y hmarine:*:/userdata2
y Result: All users from corp will be mapped to their own directory in /userdata1. All users from hmarine will be
mapped to their own directory in /userdata2.
Example 3: Specify the paths only, and use a wildcard to define domains and users
y *:*:/userdata
y Result: All users from all domains will be mapped to their own directory in /userdata.
After the homedir file is created, transfer it to the Data Mover's /.etc directory using the server_file command.
Example:
server_file server_2 put homedir homedir
NOTE: Optionally, the homedir file can be created/edited from Windows 2000 using the Celerra Management MMC snap-in.
- 16
Enable Homedir
Exp. Adm. Share
Create user dirs
Edit user profiles
server_cifs server_2
server_2 :
32 Cifs threads started
Security mode = NT
Max protocol = NT1
I18N mode = UNICODE
Home Directory Shares ENABLED,
map=/.etc/homedir
- 17
Edit user profiles
- 18
- 19
Enable Homedir
Users properties
Profile tab
Exp. Adm. Share
Create user dirs
Edit user profiles
y Example:
net user esele /domain /homedir:\\cel1dm2\HOME
Each user must have the path to their home directory added to their profile.
There are two methods to do this:
1. Log onto a Windows Domain Controller as the domain Administrator and use Active Directory
Users and Computers. Open each users properties page and select the Profile tab to enter the
path for the home directory.
2. Log on to any Windows client as the domain Administrator and use the net user command as
follows:
net user username /domain /homedir:path
Example:
To edit Ellen Sele's profile by adding the path to the HOME share on Data Mover cel1dm2, type:
net user esele /domain /homedir:\\cel1dm2\HOME
- 20
Home Directory Feature
Edit user profiles
- 21
With Windows 2000/2003, you can enable and manage home directories through the Celerra Home
Directory Management snap-in for MMC. The required pre-conditions for this are listed on the
following slide.
Additionally, the snap-in can also manage the homedir file and the directory structure for the home
directories. To add a home directory entry, right-click on HomeDir and select Home directory entry.
Enter the name of the domain, the user name (or an * for all users from that domain), and the Path.
Alternatively, you can use the Browse button to select an existing directory, or create new directories.
Before the Celerra Management MMC Snap-in for Home Directories feature can be employed
successfully, certain preconditions must exist.
y The mounted file system for each user's home directory configuration must exist on the Data
Mover
y Sufficient permission must be in place for administration and for user access
y The homedir file must exist in /.etc of the Data Mover
y Active Directory must have a UID mapped for the user (or some other user ID option can be in
place, such as Usermapper)
- 22
The Browse button opens a file navigation dialog at the root of the Data Mover. The Browse button
below the path already exists in the current MMC snap-in. When it is clicked, a file browser is displayed,
rooted at the path \\mover\c$, that allows the user to navigate to the desired directory and select it using
the GUI. The text box is then populated with the path that was selected. By clicking on the Modify
button, you get the Modify Umask dialog box as shown on the next slide.
- 23
Umask is used to set the default permissions for newly created files and directories; a umask takes away
permissions rather than sets them. In the example above, the write and execute boxes are checked, which
effectively takes away these rights for Group and Other.
- 24
- 25
y Three components
AntiVirus Client software that runs on the Data Mover
CAVA Software on Windows AntiVirus server
3rd party AntiVirus engine on Windows server
CAVA
EMC's Celerra AntiVirus Agent (CAVA) provides an AntiVirus solution to clients of an EMC Celerra
Network Server using the industry-standard CIFS (Common Internet File System) protocol, in a Microsoft
Windows 2000/2003 or Windows NT domain. CAVA uses third-party AntiVirus software (an AntiVirus
engine) to identify and eliminate known viruses before they infect file(s) on the back-end storage.
The Celerra File Server setup is resistant to the invasion of viruses because of its architecture. Each
Data Mover runs DART software, a real-time, embedded operating system. The Data Mover is
resistant to viruses because its APIs are not published; third parties are unable to run programs
containing a virus on a Data Mover. Although the Data Mover is resistant to viruses, if a Windows
client attempts to store an infected file on the storage system, the Windows client must be protected
against the effects of the virus should the infected file be opened.
The AntiVirus solution
The Celerra AntiVirus solution uses a combination of the Celerra File Server Data Mover, CAVA, and
a third-party AntiVirus engine. The CAVA and a third-party AV engine must be installed on a
Windows 2000/2003/NT server(s) in the domain.
- 26
Client
1
2
3
Storage
4
Each time the Celerra receives a file, it locks it for read access and then sends a request to the antivirus scanning server, or servers, to examine the file. The Celerra will send the UNC path name to the
Windows server to determine whether appropriate action needs to take place. The Celerra may have to
wait for verification that the file is not infected before making the file available for user access. The
Celerra anti-virus solution is made possible through the use of the EMC Celerra Anti-virus Agent
(CAVA) in a Windows NT or Windows 2000/2003 domain with CIFS access. Both the AV Engine
from an EMC partner and the Celerra Anti-virus Agent (CAVA) run on the anti-virus scanning server.
Specific triggers were set up at the DART level to signal CAVA whenever the Celerra receives a
file, so that the UNC path name is sent to the AV scanning server.
- 27
Triggering a Scan
In general, CAVA scans files:
y On first read of a file since
CAVA install
Update of virus definitions
CAVA maintains a table of events that trigger a scan of a file for a virus. For a complete, up-to-date list of these
events, see Using Celerra AntiVirus Agent. In general, CAVA scans in the following instances:
1. Scan on first read. CAVA will scan files for viruses the first time that a file is read subsequent to:
a) the implementation of CAVA
b) an update to virus definitions (This feature has certain configurable aspects.)
2. Creating, modifying, or moving a file.
3. When restoring files from a backup
4. Renaming a file from a non-triggerable file name to a triggerable file name, based on the masks= and excl=
settings in viruschecker.conf
5. An administrator can perform a full scan of a file system using the server_viruschk fsscan
command (see the sketch after this list). The administrator can query the state of the scan while it is running, and can stop the scan if
necessary.
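A hedged sketch of such a manual scan (the option names shown are assumptions and should be verified against the server_viruschk man page; fs01 is an illustrative file system name):
server_viruschk server_2 -fsscan fs01 -create
server_viruschk server_2 -fsscan fs01 -list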
When a new virus becomes known, AV vendors add it to their virus definitions, and then have the new definitions
implemented on actual systems. This causes a window of vulnerability in which an infected file could be
scanned and found to be clean, when, in fact, it is not. If the AV software does not scan on any reads, then a
client could later read the infected file and infect their system. CAVA allows the Celerra administrator to
address this issue. When the updated virus definition is made available by the AV vendor, the administrator
can set a particular date in the viruschecker.conf file as the access time. When a user reads a given file, this
access time is compared to the time that the file was last opened. If the access time specified in
viruschecker.conf is more recent, then the file will be scanned for known viruses.
- 28
CAVA Features
y Automatic Virus Definition Update
- CAVA is aware when 3rd Party engine has been updated
y CAVA Calculator
- Sizing tool to aid in estimating the number of CAVAs
- 29
The configuration file, /.etc/viruschecker.conf, defines virus checking settings and options that must be
in place for each Data Mover that will utilize virus checking. A sample of this file resides in /nas/sys
and can be copied and modified to suit particular needs. The viruschecker.conf file is created and/or
modified from the Celerra Control Station using the vi editor. Once the viruschecker.conf is completed,
it can be copied to/from the Data Mover's /.etc directory using the server_file command.
Examples
server_file server_2 get viruschecker.conf viruschecker.conf
server_file server_2 put viruschecker.conf viruschecker.conf
Mandatory settings
masks= sets the list of file masks that need to be checked.
masks=*.EXE:*.COM:*.DOC:*.DOT:*.XL?:*.MD?
excl= sets the lists of filenames or file masks that do not need to be checked.
excl=*.TMP
addr= sets the IP addresses of the VC Servers that you wish to connect to.
addr=10.127.50.161:10.127.23.162
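Combining the mandatory settings above, a minimal viruschecker.conf could look like the following sketch (the values are the sample values from this page):
masks=*.EXE:*.COM:*.DOC:*.DOT:*.XL?:*.MD?
excl=*.TMP
addr=10.127.50.161:10.127.23.162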
- 30
You can use the Celerra AntiVirus Management snap-in to manage the virus-checking parameters
(viruschecker.conf file) used with Celerra AntiVirus Agent (CAVA) and third-party AntiVirus
programs. The Celerra AntiVirus Agent and a third-party AntiVirus program must be installed on the
Windows NT/2000/2003 server.
- 31
- 32
Microsoft's DFS (Distributed File System) allows administrators to group shared folders located on
different servers into a logical DFS namespace. A DFS namespace is a virtual view of these shared
folders shown in a directory tree structure. By using DFS, administrators can select which shared
folders to view in the namespace, assign names to these folders, and design the tree hierarchy in which
the folders appear. Users can navigate through the namespace without needing to know the server
names or the actual shared folders hosting the data.
Each DFS tree structure has a root target that is the host server running the DFS service and hosting the
namespace. A DFS root contains DFS links pointing to the shared folders (a share itself and any
directory below it) on the network. These folders are called DFS targets.
- 33
[Diagram: a client on the public network accesses a DFS root hosted on one Data Mover, with DFS links pointing to exports on other Data Movers]
The figure above illustrates the DFS concept. Note: while the Celerra may host the root of the DFS file system,
each leaf could be on a different Data Mover or any other file server in the environment. This provides
a high degree of scalability and presents a single name space to the clients.
- 34
Microsoft offers two types of DFS root servers, the domain DFS root server and the standalone DFS
root server. The domain DFS server stores the DFS hierarchy in the Active Directory. The standalone
DFS root server stores the DFS hierarchy locally and can have only one root target.
Prior to 5.4 you could not do this as root.
For a detailed description of DFS, visit the Microsoft website at http://www.microsoft.com.
Review the following before configuring a DFS root.
y You create a DFS root on a share.
y You can only establish a DFS root on a global share from a Windows Server 2003 or a Windows
XP machine.
y With a Windows 2000 server, you can create only one DFS root per CIFS server; creating a DFS
root on a global share is not allowed. You cannot manage multiple DFS roots on a CIFS server
using a Windows 2000 server.
y A DFS root on a global share can be viewed from any CIFS server on the Data Mover.
y Before removing a share on which you have established a DFS root, you must first delete the DFS
root.
After starting the CIFS service, DFS support is enabled by default.
To disable DFS functionality, set the following Windows Registry key to zero, stop the CIFS service
and then restart the CIFS service.
HKEY_LOCAL_MACHINE\SOFTWARE\EMC\DFS\Enable
- 35
The Microsoft dfscmd.exe tool enables you to administer the DFS root content (for example,
creating and deleting links). You cannot delete a DFS tree structure using this command.
The dfsutil.exe and dfscmd.exe tools are included with the Windows 2000 or Windows
Server 2003 Support Tools.
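As a hedged illustration of dfscmd usage (the server, root, and share names are hypothetical; see the Windows Support Tools documentation for the full syntax), a DFS link could be mapped to a share on another server:
dfscmd /map \\cel1dm2\dfsroot\engineering \\fileserver1\engdata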
- 36
It is difficult for a Windows client to open a path to a file system object when its path contains an
absolute symbolic link. A Windows client asks a server to perform a function on a file system object
based on a given path. Unlike Windows, a UNIX client uses a target path relative to its mount point.
This can lead to a file system object on a remote server. For example: A UNIX client has the following
two file systems mounted:
server1:/ufs1 mounted on /first
server2:/ufs2 mounted on /second
On ufs1, there is an absolute symbolic directory link to /second/home. A UNIX client can easily access
this link from ufs1. However, since this path exists only on the UNIX client and not on the local server,
a Windows client is unable to follow this path.
The Wide Links feature enables Windows clients to resolve the path to absolute symbolic links by
mapping the UNIX mount point to the Windows \\server\share\path. This mapping is done through the
Microsoft MMC Distributed File System tool.
- 37
- 38
In Windows 2000 and 2003, group policy allows administrators to manage desktop environments by
applying configuration settings to computer and user accounts. Group policy offers the ability to
define and enforce policy settings for the following:
y Scripts: including computer startup/shutdown and user logon/logoff
y Security: local computer, domain, and network security settings
y Folder redirection: direct and store users' folders on the network
y Registry-based: for the operating system, its components, and applications
y Software installation and maintenance: centrally manage installation, updates and removal of
software
GPO is managed through the MMC (Microsoft Management Console). GPO policy is administratively
set and applied to the entire domain, to a site, or to an organizational unit. Any GPOs that affect a user
are applied at logon time, for example, applications, configuration, or folder redirection. Any GPOs
that affect the computer are applied at system startup time. Some examples are disk quotas, auditing,
and event logs. All policies are updated periodically, the frequency depending on how it was
configured.
GPOs are not applied individually to users or computers. GPOs can be set at multiple levels. As they
are applied from the Domain down to the organizational unit, the settings are cumulative.
GPOs are supported with Windows 2000/2003/XP.
- 39
Celerra Data Movers that are joined to a Windows 2000/2003 domain support and retrieve certain
GPO settings. When participating in Windows 2000/2003 domains, as a member server, the Data
Mover is affected by many Windows mechanisms including Kerberos, Auditing, SMB signing, event
logs, and user rights. A goal of the Celerra Data Mover is to participate in, and act as, a Windows
member server in the domain.
EMC offers MMC snap-ins to be used by administrators to display the effective settings for the
auditing policy and user right assignment.
- 40
GPO Operation
y GPO cache
Settings stored in /.etc/gpo.cache
Not a user-editable file
The GPO daemon is the DART thread which controls GPO updates. There is one GPO daemon running per
Data Mover. The daemon starts/stops/restarts with the server_setup cifs start/stop command.
On GPO daemon startup, it reads in GPO cache, then retrieves the latest settings for each joined CIFS Server.
Each CIFS server may be in a different organizational unit in the domain, therefore, each can have different GPO
settings.
The latest retrieved GPO settings for each joined CIFS server are stored in the root file system under
/.etc/gpo.cache. This is not a user editable configuration file.
Cached settings are read in when the GPO daemon starts up. The GPO settings are available as soon as possible,
so there is no need to wait for setting retrieval. Settings are available even if the Domain Controller cannot be
reached.
The server_security Celerra CLI command can be used to query or update the security policy settings on
a Data Mover. Using this command, the administrator can force an update of a security policy setting, or query
security policy settings.
You can use CIFS parameters to enable or disable GPO, GPO cache, and GPO log messages.
Before NAS 5.2, the GPO settings were automatically refreshed by the Data Mover every 90 minutes. The Data
Mover now uses the update interval as defined by the Windows Domain.
If the GPO refresh policy is disabled at the Domain level, the Celerra Administrator must issue the
server_security command to manually refresh the GPO policy settings. If no refresh policy is defined at
the Domain level, the Data Mover will use an update interval of 90 minutes.
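A hedged sketch of querying and then forcing an update of the GPO policy settings with server_security (the option names are assumptions; verify against the server_security man page):
server_security server_2 -info -policy gpo
server_security server_2 -update -policy gpo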
- 41
Module Summary
y Carefully consider application locking requirements in a mixed
environment
y Other mixed environment considerations include Symbolic links,
upper & lower case filenames, and file attributes
y Celerra home directory support allows a user to have a default file
save location on the Celerra
y Celerra AntiVirus Agent (CAVA) provides an AntiVirus solution for
Celerra Windows clients
CAVA uses third-party AntiVirus software (AntiVirus engine) to identify and
eliminate known viruses before they infect file(s) on the back-end storage
- 42
Closing Slide
- 43
Celerra ICON
Celerra Training for Engineering
-1
Revision History
Rev Number
Course Date
1.0
February 2006
1.2
May 2006
Revisions
Complete
Updates and enhancements
-2
-3
CIFS
Server
CIFS
Server
CIFS
Server
A Virtual Data Mover is a software feature that enables the administrative separation of CIFS servers
from each other and from the associated environment. Separating one or more CIFS servers enables
replication of CIFS environments and allows the movement of CIFS servers from one Data Mover to
another.
VDMs store dynamic configuration data for CIFS servers in a separate configuration file system. This
includes information such as local groups, shares, security credentials, and audit logs. A VDM can be
loaded and unloaded, moved between Data Movers, or replicated to a remote Data Mover as an
autonomous unit. The server's file systems, and all of the configuration data that allows clients to
access the file systems, are available in one virtual container.
A key motivation for Virtual Data Movers is the ability to replicate the CIFS environment (not only
the data) using asynchronous techniques. This application is discussed in more detail later in the
Celerra Replicator module.
-4
Computer name
Shares
Security credentials
Local groups
Audit logs
Home Directory configuration
y Implemented as a separate root file system
[Diagram: Celerra 1 hosting File System 1, File System 2, and a VDM root file system for the CIFS server \\srv1_nas, containing eventlog, config, homedir, kerberos, shares, etc.]
A VDM can be configured with one or more CIFS servers (NFS is only supported on physical Data
Movers). The diagram above illustrates a logical view of a Virtual Data Mover with a single CIFS
server (computer name of srv1_nas). When creating a VDM, at least one network interface is
associated with it and is used by the CIFS server for client access. The CIFS servers in each VDM
have access only to the file systems mounted to that VDM, and therefore can only export
(share) those file systems mounted to that VDM. This allows a user to administratively partition, or
group, their file systems and CIFS servers.
Virtual Data Mover specific configuration data includes the following:
y Local group database for the servers in the VDM
y Share database for the servers in the VDM
y CIFS server configuration (compnames, interface names, etc.)
y Celerra home directory information for the servers in the VDM
y Auditing and Event Log information
y Kerberos information for the servers in the VDM
-5
[Diagram: physical Data Mover with its root file system holding configuration files and CIFS databases; services include Usermapper, passwd/group files, the CIFS service, NIS/DNS client, routing, NTP, network interfaces, internationalization, virus checker, Data Mover failover, FTP, and NDMP backup, serving file systems to NFS and CIFS clients]
This diagram shows a typical physical Data Mover implementation. The Data Mover supports both
NFS and CIFS servers, each with the same view of all the server resources. All of the configuration,
control data, and event logs are stored in the root file system of the Data Mover.
While it is beneficial to consolidate multiple servers into one physical Data Mover in some
environments, isolation between servers is required in others (for example, ISPs).
In a non-VDM implementation, the Data Mover root file system holds all configuration information
for CIFS, NFS, and other network services. When VDMs are implemented, the dynamic CIFS-specific
configuration is placed in a separate root file system, while the physical Data Mover root file
system contains the configuration information for the supporting infrastructure and services.
-6
[Diagram: physical Data Mover root file system (configuration files, CIFS databases, file systems) hosting a UNIX server and CIFS servers, plus two VDMs; each VDM root file system (VDM01, VDM02) holds its own configuration files, local groups, home directory files, file systems, and CIFS servers with their network interfaces, while NIS/DNS clients remain on the physical Data Mover]
VDMs are implemented as separate file systems mounted on the physical Data Mover's root file system.
All dynamic CIFS configuration information is stored in the VDM root file system.
Each VDM will have at least one network interface associated with it.
In order to support the movement of CIFS servers within a VDM from one Data Mover to another,
both Data Movers must have network interfaces defined with the same name; however, they must
have different IP addresses.
Currently a maximum of 29 VDMs are supported per physical Data Mover.
-7
Creating a VDM
y When creating a VDM, you create a file system that
contains all the configuration information for the VDM
y File system may be created:
Using all the defaults
Control Station finds first available disk space and creates a 128 MB file
system
By default, when creating a VDM, the Control Station automatically allocates a 128 MB volume,
creates the root configuration file system (root_fs_id#) for the VDM, and saves the binding
relationship between the VDM and root file system in the Control Station's database. Because the VDM
root file system is a separate file system, the user data and the VDM configuration data are kept separate.
You can explicitly specify the file system and size you want to use, or specify a pool if using
Automatic Volume Manager (AVM).
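For example, a hedged sketch (option names assumed; verify against the nas_server man page) of creating a VDM whose root file system is drawn from a specific AVM pool:
nas_server -name vdm01 -type vdm -create server_2 -setstate loaded pool=clar_r5_performance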
-8
VDM States
y Two VDM states
Loaded
Fully functional active VDM
CIFS is active and user file systems accessible
Loaded as read/write
Mounted
CIFS servers are not active
Configured file system mounted read only
CIFS is not active and user file systems are not accessible
VDM is passive for eventual loading
Celerra Replicator requires mounted state to allow replication
Loaded VDM
A loaded VDM is the fully functional mode of VDM. A loaded VDM is considered active. For loaded
VDMs, the configured file system is loaded read/write and the CIFS servers in the VDM are running
and serving data. It is not possible to load one VDM into another VDM.
Mounted VDM
With mounted VDMs, the VDM configuration file system is mounted read-only but can be queried.
For example, the nas_server command can be used to get a list of the expected interfaces and the
server_export command can be used to get a list of shares. The CIFS servers are not active. The
VDM is passive for eventual loading, as in the case of a failover, where the secondary site might be
called upon to act as the primary. For Celerra Replicator, file systems need to be mounted read-only to
allow replication to occur.
Two other VDM states exist; PermUnloaded and TempUnloaded. You would unload a VDM to stop
activity on the Data Mover. Unloads, by default, are temporary. You would temporarily unload the
VDM if you were replicating from a primary to secondary site to stop activity on the primary in
preparation for replication. If you choose to permanently unload a VDM, the VDM file system is not
mounted on the Data Mover. On reboot, the VDM does not reload.
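As a hedged illustration (the state keyword is an assumption; check the nas_server man page for your release), a temporary unload might look like:
nas_server -vdm vdm01 -setstate tempunloaded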
-9
y Example 1:
nas_server -name vdm01 -type vdm -create server_2
-setstate loaded
This slide shows the CLI command to create a Virtual Data Mover in a loaded state.
nas_server -name <vdm_name> -type vdm -create <movername> -setstate
loaded
<vdm_name> is the name assigned to the VDM
<movername> is the name of the physical Data Mover
If a VDM name is specified, it must be unique to the entire Celerra system. If it is not specified, a
default is assigned (vdm_id#).
In this case, the file system name was specified (vdm_root_fs1). If it were not specified, the root file
system of the VDM is automatically allocated. The default size of the VDM root file system is
128MB.
If you wanted to create a Virtual Data Mover in a mounted state, the command would be the same
except that -setstate loaded would be changed to -setstate mounted.
Note: When you name a VDM, the Celerra assigns the root file system name so it is easily identified
as a root file system associated with a specific VDM. For example, if you gave the VDM the name
Marketing, the VDM root file system is automatically named root_fs_vdm_Marketing.
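For instance, the equivalent command for a mounted (passive) VDM named Marketing would be:
nas_server -name Marketing -type vdm -create server_2 -setstate mounted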
- 10
By default, all VDMs created via the GUI are in a loaded state. At this time, the VDM state cannot be
verified in the GUI.
- 11
When you create a VDM, the VDM root file system is created to store the CIFS server configuration
information for the CIFS servers that you create within the VDM. The VDM root file system stores the
majority of the CIFS servers' dynamic data.
- 12
Once the VDM has been created and loaded, the CIFS servers can be configured in the same manner as
would be done on a physical Data Mover.
The CIFS service is stopped/started on the physical Data Mover, not at the Virtual Data Mover level.
- 13
This slide shows how the file system would be mounted to the Virtual Data Mover.
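As a hedged example (the file system and mount point names are illustrative), mounting a user file system to the VDM might look like:
server_mount vdm01 fs01 /fs01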
- 14
compname=<comp_name>,domain=<domain_name>,
interface=<if_name>
- 15
compname=<comp_name>,domain=<domain_name>,
admin=<admin_name>
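Putting these two slides together, a hedged sketch of creating and joining a CIFS server within the VDM (names are illustrative; verify the syntax in the server_cifs man page):
server_cifs vdm01 -add compname=srv1_nas,domain=corp.example.com,interface=cge0
server_cifs vdm01 -Join compname=srv1_nas,domain=corp.example.com,admin=administrator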
- 16
- 17
Moving a VDM
y To move a VDM to a different
Data Mover, the target must:
Have access to the disks that
contain all file systems of the
source VDM
[Diagram: a Virtual Data Mover and its CIFS server moving from one physical Data Mover to another]
y Command:
nas_server -vdm <vdm_name> -move <target_movername>
When you move a VDM to a different Data Mover, the VDM is first unloaded from the source Data
Mover. Then all the file systems are unmounted from the source. The target Data Mover loads the
VDM and then all the file systems are mounted and exported.
In order to successfully move a VDM from one Data Mover to another, the target Data Mover must
have:
y Access to all the same file systems (root and user) as the source Data Mover.
y A network interface with the same name. However, the network device does not need to be the
same. For example, the source can use a 10/100 Mbps Ethernet device, and the target can use a
10/100/1000 Mbps Ethernet device. The move would be successful as long as the device names
are identical.
y The target Data Mover should not have any CIFS servers with compnames or netbios names
matching that of the VDM to be moved.
You can use the same IP addresses; however, there is a risk of duplicate addresses. If you use the same
IP addresses, you have to bring down the interface on the source and bring up the interface on the
target manually as part of the procedure for moving the VDM.
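For example, with the illustrative names used earlier (server_3 being another physical Data Mover on the same Celerra):
nas_server -vdm vdm01 -move server_3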
- 18
[Diagram: a Virtual Data Mover (CIFS server and file system) on the Data Mover at the primary site is replicated, as a unit, to a secondary site]
With the asynchronous data replication and failover/failback capabilities of Celerra Replicator, combined
with Virtual Data Movers, the Celerra can offer an asynchronous data recovery solution. A differential
copy mechanism is provided when file system changes are transmitted across the IP network from the
primary to a secondary site. In the event of a disaster, the entire CIFS environment (VDM) can be
failed over to the secondary site. Clients continue accessing their CIFS shares from the secondary site.
- 19
Module Summary
Key points covered in this module are:
y A Virtual Data Mover (VDM) is a Celerra feature that enables
administrators to group CIFS servers into virtual containers
y The VDM stores information regarding local groups, shares,
security credentials, and audit logs
y VDMs are created with the nas_server command
y VDMs can be loaded and unloaded, moved between Data Movers,
or replicated to a remote Data Mover
A loaded VDM is the fully functional mode of a VDM
When a VDM is mounted, the configuration file system is mounted read-only and can be queried, but the CIFS servers are not active
- 20
Closing Slide
- 21
Celerra ICON
Celerra Training for Engineering
SnapSure
Celerra SnapSure
-1
Revision History
Rev Number   Course Date      Revisions
1.0          February 2006    Complete
1.2          May 2006         Updates and enhancements
Celerra SnapSure - 2
Celerra SnapSure
-2
SnapSure
Upon completion of this module, you will be able to:
y Describe how SnapSure makes a point-in-time view of a
file system
y Describe the use of a Save Volume (SavVol), how it is
sized and what happens when it runs out of available
space
y Using both the CLI and Celerra Manager, configure a
Checkpoint file system
y From a client system, access a checkpoint using CVFS
(Checkpoint View File System)
y Schedule Checkpoints
y Discuss planning issues around SnapSure
Celerra SnapSure - 3
Celerra SnapSure
-3
SnapSure Overview
y SnapSure Provides
A point-in-time view of file system
Known as a Checkpoint
Consists of a combination of live file system data and saved data
Celerra SnapSure - 4
SnapSure creates a point-in-time view of a file system. SnapSure creates a checkpoint file system
that is not a copy or a mirror image of the original file system. Rather, the checkpoint file system is
a calculation of what the production file system looked like at a particular time and is not an actual file
system at all. The checkpoint is a read-only view of the file system as it existed at that particular
time.
Note: With NAS 5.5, 96 checkpoints per PFS are supported.
IMPORTANT: The information in this module references the current version of SnapSure. ALWAYS read the SnapSure
documentation and Release Notes for a specific version of Celerra Network Server.
Many applications benefit from the ability to work from a file system that is not in a state of change.
Fuzzy backups can occur when performing backups from a live file system. Performing backups
from a SnapSure checkpoint provides a consistent point-in-time source and eliminates fuzzy backups.
Applications like Celerra Replicator require a baseline copy of the production file system during setup;
this also requires a consistent point-in-time, read-only file system from which to copy the data.
Celerra SnapSure
-4
[Diagram: production file system (live data, full read/write) with read-only checkpoint views, for example a Tuesday View and a Wednesday View]
Celerra SnapSure - 5
SnapSure checkpoints provide users with multiple point-in-time views of their data. In the illustration
above, the user's live, production data is my_file. If they need to see what that file looked like on
previous days, they can easily access read-only versions of that file as viewed from different times.
This can be useful for restoring lost files or simply for checking what the data looked like previously.
In this example, checkpoints were taken on each day of the week.
Celerra SnapSure
-5
y SavVol
Stores original data from PFS before changes are made
y Bitmap
Data structure that identifies which blocks have changed in the PFS
y Block maps
Records the address of saved data blocks in the SavVol
[Diagram: PFS and checkpoint (ckpt); the SavVol holds the original data, the bitmap, and the block maps]
Celerra SnapSure - 6
PFS
The PFS is any typical Celerra file system. Applications that require access to the PFS are referred to
as PFS Applications.
Checkpoint
A point-in-time view of the PFS. SnapSure uses a combination of live PFS data and saved data to
display what the file system looked like at a particular point-in-time. A checkpoint is thus dependent
on the PFS and is not a disaster recovery solution. It is NOT a copy of a file system.
SavVol
Each PFS with a checkpoint has an associated save volume, or SavVol. The first change made to each
PFS data block following a checkpoint triggers SnapSure to copy that data block to the SavVol.
Bitmap
SnapSure maintains a bitmap of every data block in the PFS where it identifies if the data block has
changed.
Blockmap
A blockmap of the SavVol is maintained to record the address in the SavVol of each saved data block.
Celerra SnapSure
-6
[Diagram: PFS data blocks DB01-DB09 with changed blocks (DB01*, DB05*, DB08*); Bitmap 1 flags the changed blocks (DB01=1, DB05=1, DB08=1) and Blockmap 1 records their SavVol block addresses (0=DB01, 1=DB05, 2=DB08)]
Celerra SnapSure - 7
When the first checkpoint of a PFS is created, SnapSure creates the SavVol, a bitmap with all blocks
set to zero, and an empty blockmap.
Celerra SnapSure
-7
[Diagram: a checkpoint application reads blocks 6, 8, and 9; unchanged blocks (bitmap value 0) are read from the PFS, while changed blocks (bitmap value 1, here DB08) are read from the SavVol via Blockmap 1]
Celerra SnapSure - 8
When a user or application reads the point-in-time of the newest checkpoint, the bitmap is parsed to
see if the block requested has changed since the creation of the checkpoint. If the block has not
changed (a value of 0) the READ is performed from the PFS. If the block has changed (a value of 1)
the blockmap is parsed to identify the address in the SavVol for that data block, then the READ is
performed from the SavVol.
The example above illustrates a READ of data blocks 6, 8, and 9.
Celerra SnapSure
-8
y Multiple ckpts allow multiple point-in-time views of PFS
Blockmap for each checkpoint
Bitmap for latest checkpoint only
y Old blockmaps are preserved and linked
[Diagram: a second checkpoint adds Blockmap 2 (3=DB02, 4=DB06, 5=DB08*) alongside Blockmap 1; the bitmap now tracks only the changes made since the newest checkpoint]
Celerra SnapSure - 9
When a new checkpoint is created SnapSure creates a new blockmap and begins the process anew.
Celerra SnapSure
-9
[Diagram: with two checkpoints, a checkpoint application reading blocks 6, 8, and 9 from the latest checkpoint uses the bitmap and the newest blockmap (Blockmap 2); older blockmaps are ignored]
Celerra SnapSure - 10
Client READs from the latest checkpoint are performed using the same method as when READing
from a single checkpoint. Older blockmaps are simply ignored.
Celerra SnapSure
- 10
[Diagram: reading an older (not the newest) checkpoint; SnapSure walks that checkpoint's blockmap (Blockmap 1), then the newer blockmaps (Blockmap 2, Blockmap 3), and finally the PFS to locate each requested block in the SavVol]
Celerra SnapSure - 11
When READs of other checkpoints (i.e. not the newest) are requested, SnapSure directly queries the
checkpoint's blockmap for the SavVol block number to read. If the block number is in the blockmap,
the data is read from the SavVol space for the checkpoint. If the block number is not in the blockmap,
SnapSure queries the next newer checkpoint to find the block number. If the requested block number is
found in the blockmap, the data block is read from the SavVol space for the checkpoint. This
mechanism is repeated until it reaches the PFS.
Celerra SnapSure
- 11
SavVol Sizes
y Save Volumes (SavVol) are created automatically when
you create the first Checkpoint of a Production File
System
Sizes based on size of PFS
If PFS > 10GB then SavVol = 10GB
If PFS < 10GB and PFS > 64MB then SavVol = PFS size
If PFS < 64MB then SavVol = 64MB
Celerra SnapSure - 12
SnapSure requires a SavVol to hold data when you create the first checkpoint of a PFS. SnapSure will
create and manage the SavVol automatically.
SavVol sizes
The following criteria are used for automatic SavVol creation:
y If PFS > 10GB, then SavVol = 10GB
y If PFS < 10GB and PFS > 64MB, then SavVol = PFS size
y If PFS < 64MB, then SavVol = 64MB
y Extends by 10GB if checkpoint reaches a high water mark
Custom SavVol Creation
The SavVol can be manually created and managed. Please see Using SnapSure on Celerra for
planning considerations.
Additional Checkpoints
If you create another checkpoint, SnapSure uses the same SavVol, but logically separates the point-in-time data using unique checkpoint names.
Celerra SnapSure
- 12
Celerra SnapSure - 13
If the SnapSure SavVol reaches a full state, it will become inactive. All checkpoints using that SavVol
are then invalid and cannot be accessed. Therefore, SnapSure employs a High Water Mark (HWM) as
the point at which the SavVol will be automatically extended. The HWM is expressed as a percentage of
the total size of the SavVol. The default HWM is 90%.
When the SavVol High Water Mark (HWM) is reached, SnapSure will extend the SavVol in 10GB
increments. However, SnapSure will not consume additional disk space if doing so will leave the
Celerra with less than 20% free disk space. If there is not sufficient disk space to extend the SavVol,
SnapSure will begin overwriting checkpoints starting with the oldest checkpoint.
Celerra SnapSure
- 13
Celerra SnapSure - 14
If you set the HWM to 0% when you create a checkpoint, this tells SnapSure not to extend the SavVol
when a checkpoint reaches full capacity. Instead, SnapSure deletes the data in the oldest checkpoint
and recycles the space to keep the most recent checkpoint active. It repeats this behavior each time a
checkpoint needs space. If you use this setting and have a critical need for the checkpoint information,
periodically check the SavVol space used (using the fs_ckpt <fsname> -list command), and before it
becomes full, copy the checkpoint to tape, read it with checkpoint applications, or extend it to keep it
active. In summary, if you plan to use the 0% HWM option at creation time, auditing and extending
the SavVol yourself are important checkpoint management tasks to consider.
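For example, to check the SavVol space used by the checkpoints of pfs1:
fs_ckpt pfs1 -list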
Celerra SnapSure
- 14
y Example:
fs_ckpt pfs1 -Create
y Results:
Celerra SnapSure - 15
By default, when you create the first checkpoint of a PFS, SnapSure creates a SavVol. It uses this
SavVol for all additional checkpoints for that PFS.
To create a checkpoint file system, use the following command:
fs_ckpt <PFS_name> -Create
Note: Check Using SnapSure on Celerra for all the command options.
Celerra SnapSure
- 15
Celerra SnapSure - 16
Celerra SnapSure
- 16
Celerra SnapSure - 17
This Celerra Manager screen shows how to create a checkpoint using Celerra Manager.
Celerra SnapSure
- 17
Celerra SnapSure - 18
Celerra SnapSure
- 18
[Diagram: a .ckpt virtual directory listing checkpoint entries named by date & time]
CVFS is a navigation feature that provides NFS and CIFS clients with read-only access to online,
mounted checkpoints in the PFS namespace. This eliminates the need for administrator involvement in
recovering point-in-time files.
Virtual entries to checkpoints in PFS
In addition to the .ckpt_mountpoint entry at the root of the PFS, SnapSure also creates virtual links
within each directory of the PFS. All of these hidden links are named ".ckpt" (by default) and can be
accessed from within every directory, as well as the root, of a PFS. You can change the
virtual checkpoint name from .ckpt to a name of your choosing by using a parameter in the
slot_(x)/param file. For example, you could change the name from .ckpt to .snapshot. The
.ckpt hidden link cannot be listed in any way and can only be accessed by manually changing into that
link. After changing into .ckpt, listing the contents will display links to all Checkpoint views of that
directory. The names of these links will reflect the date of the Checkpoint followed by the time zone of
the Control Station.
You can change the checkpoint name presented to NFS/CIFS clients when they list the .ckpt directory
to a custom name, if desired. The default format of checkpoint names is:
yyyy_mm_dd_hh.mm.ss_<Data_Mover_timezone>. You can customize the default checkpoint names
shown in the .ckpt directory to names such as Monday, or Jan_week4_2004, and so on. You can only
change the name when you mount the checkpoint.
For example, to change the name of checkpoint pfs1_ckpt1 to Monday while mounting the
checkpoint on server_2 at mount point /pfs1_ckpt1, use the following command:
server_mount server_2 -o cvfsname=Monday pfs1_ckpt1 /pfs1_ckpt1
Celerra SnapSure
- 19
Celerra SnapSure - 20
ls -l /EMC/mp1
drwxr-xr-x 2 32771 32772 80 Nov 21 8:05 2005 dir1
drwxr-xr-x 2 root other 80 Nov 14 10:25 2005 resources
-rw-r--r-- 1 32768 32772 292 Nov 19 11:15 2005 A1.dat
-rw-r--r-- 1 32768 32772 292 Nov 19 11:30 2005 A2.dat
ls -la /EMC/mp1/.ckpt
drwxr-xr-x 5 root root 1024 Nov 19 08:02 2005_11_19_16.15.43_GMT
drwxr-xr-x 6 root root 1024 Nov 19 11:36 2005_11_19_16.39.39_GMT
drwxr-xr-x 7 root root 1024 Nov 19 11:42 2005_11_20_12.27.29_GMT
ls -l /EMC/mp1/.ckpt/2005_11_19_16.39.39_GMT
-rw-r--r-- 1 32768 32772 292 Nov 19 11:15 A1.dat
-rw-r--r-- 1 32768 32772 292 Nov 19 11:30 A2.dat
-rw-r--r-- 1 32768 32772 292 Nov 19 11:45 A3.dat
drwxr-xr-x 2 root other 80 Nov 14 10:25 resources
Celerra SnapSure
- 20
Celerra SnapSure - 21
Celerra SnapSure
- 21
Celerra SnapSure - 22
ShadowCopyClient is a Microsoft Windows feature that allows Windows users to access previous
versions of a file via the Microsoft Volume Shadow Copy Service. ShadowCopyClient is also supported
by EMC Celerra to enable Windows clients to list, view, copy, and restore files from checkpoints
created with SnapSure.
Celerra SnapSure
- 22
Displaying Checkpoints
y Command:
nas_fs -info <PFS_name>
y Example:
nas_fs -info fs01
Id    = 21
Name  = fs01
ckpts = fs01_ckpt2,fs01_ckpt3
Celerra SnapSure - 23
Example
nas_fs -i fs01
id    = 21
name  = fs01
ckpts = fs01_ckpt2,fs01_ckpt3
Celerra SnapSure
- 23
Displaying Checkpoints
y Checkpoints
Celerra SnapSure - 24
This Celerra Manager screen shows how to list checkpoints using Celerra Manager.
Celerra SnapSure
- 24
Checkpoint Details
To view the checkpoint properties
y Command:
nas_fs -info <checkpoint_name>
y Example:
nas_fs -info fs01_ckpt_01
id = 34
name = fs01_ckpt_01
Type = ckpt
Checkpt_of = fs01 Thu Nov 29 13:39:55 EDT 2001
used = 78%
Full(mark) = 80%
Celerra SnapSure - 25
You can audit a Checkpoint to monitor its SavVol space utilization. SnapSure features self-extending
checkpoints that are triggered when the HWM is reached.
Command
To audit a checkpoint, type the following command:
nas_fs -info <checkpoint_name>
Example
To audit the checkpoint named fs01_ckpt_01, type the following command:
nas_fs -i fs01_ckpt_01
id = 34
name = fs01_ckpt_01
Type = ckpt
Checkpt_of = fs01 Thu Nov 29 13:39:55 EDT 2001
used = 78%
Full(mark) = 80%
This example shows that the SavVol is now at 78% capacity. It auto-extends when it reaches a
capacity of 80%.
Celerra SnapSure
- 25
Checkpoints
y To view PFS (ckptfs) and SavVol (volume) storage utilization
nas_fs -size pfs1_ckpt1
volume: total = 1000 avail = 700 used = 300 (30%) (sizes in MB)
ckptfs: total = 10000 avail = 4992 used = 5008 (50%) (sizes in MB)
Celerra SnapSure - 26
Use the nas_fs -size command on the PFS to audit the disk utilization of the SavVol.
Sample output from nas_fs -size
nas_fs -s pfs1_ckpt1
volume: total = 1000 avail = 700 used = 300 (30%) (sizes in MB)
ckptfs: total = 10000 avail = 4992 used = 5008 (50%) (sizes in MB)
Celerra SnapSure
- 26
Checkpoint Details
y File Systems > right-click the PFS > Properties > Checkpoint Storage
tab
Celerra SnapSure - 27
This Celerra Manager screen shows how to list checkpoints using Celerra Manager.
Celerra SnapSure
- 27
Refreshing Checkpoints
y Unmounts checkpoint
y Deletes data for that checkpoint
y Updates checkpoint to newest status
y Remounts checkpoint
y Example:
fs_ckpt fs01_ckpt01 -refresh
Celerra SnapSure - 28
Refreshing Checkpoints
When you refresh a checkpoint, SnapSure deletes the checkpoint and creates a new checkpoint,
recycling SavVol space while maintaining the old file system name, ID, and mount state. If a
checkpoint contains important data, be sure to back it up or use it before you refresh it.
Command
The -refresh command automatically unmounts the checkpoint, deletes the contents of the SavVol,
updates the status of the checkpoint to reflect that it is the most current, and, finally, remounts the
checkpoint.
Celerra SnapSure
- 28
Refreshing Checkpoints
y Checkpoints > right-click the Checkpoint to refresh > Refresh
Celerra SnapSure - 29
Celerra SnapSure
- 29
y Command:
/nas/sbin/rootfs_ckpt <ckpt_name> -Restore
y Example:
/nas/sbin/rootfs_ckpt pfs1_ckpt1 -Restore
Celerra SnapSure - 30
Celerra SnapSure
- 30
Celerra SnapSure - 31
Celerra SnapSure
- 31
Deleting Checkpoints
y Checkpoint must be permanently unmounted
y Deleted like any other file system
y Command:
nas_fs -delete <checkpoint_name>
y Example:
nas_fs -d fs1_ckpt1
Celerra SnapSure - 32
When you delete a checkpoint (and it is not the oldest checkpoint), SnapSure compacts the deleted
checkpoint and merges the needed blockmap entries to the older checkpoint before the delete
completes. This ensures that chronological blocks of data, important to older checkpoints, are not lost
with the delete. Deleting checkpoints out of order does not affect the point-in-time view of other
checkpoints and frees up SavVol space that can be used for new checkpoints. If you delete the newest
checkpoint of a PFS, no compact or merge process occurs until a new checkpoint is created. The
compact and merge process is an asynchronous, background process at that time. A change in SavVol
space is only seen after the process completes.
Celerra SnapSure
- 32
Deleting Checkpoints
y Checkpoints > right-click the Checkpoint to delete > Delete
Celerra SnapSure - 33
Celerra SnapSure
- 33
Celerra SnapSure - 34
An automated checkpoint-refresh solution can be configured using Celerra Manager or a Linux cron
job script. There is no Celerra CLI equivalent.
Using the Checkpoints > Schedules tab in Celerra Manager, you can schedule checkpoint creation and
refreshes on arbitrary, multiple hours of a day, days of a week, or days of a month. You can also
specify multiple hours of a day on multiple days of a week to further simplify administrative tasks.
More than one schedule per PFS is supported, as is the ability to name scheduled checkpoints, name
and describe each schedule, and to query the schedule associated with a checkpoint. You can also
create a schedule of a PFS that already has a checkpoint created on it, and modify existing schedules.
You can also create a basic checkpoint schedule, without some of the customization options, by clicking
any mounted PFS listed in Celerra Manager and clicking the Properties > Schedules tab. This tab enables
hourly, daily, weekly, or monthly checkpoints to be created using default checkpoint and schedule
names and no ending date for the schedule.
SnapSure allows multiple checkpoint schedules to be created for each PFS. However, EMC supports a
total of 64 checkpoints (scheduled or otherwise) per PFS, as system resources permit.
Because Celerra backs up its database at one minute past every hour, checkpoints should not be
scheduled to occur at these times.
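As a hedged sketch (the command path and schedule are illustrative assumptions), a Control Station cron entry that refreshes a checkpoint nightly at 2:05 a.m., avoiding the top of the hour, might look like:
5 2 * * * /nas/bin/fs_ckpt fs01_ckpt01 -refresh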
Celerra SnapSure
- 34
Checkpoint Scheduling
y Checkpoints > Schedules tab > New
Celerra SnapSure - 35
Celerra SnapSure
- 35
Celerra SnapSure - 36
Creating a checkpoint requires the PFS to be paused. Therefore, PFS write activity suspends (read
activity continues) while the system creates the checkpoint. The pause time depends on the amount of
data in the cache but is typically a few seconds. EMC recommends a 10 minute interval between the
creation, or refresh, of checkpoints of the same PFS.
Restoring a PFS from a checkpoint requires the PFS to be frozen. Therefore, all PFS activities are
suspended while the system restores the PFS from the selected checkpoint.
Refreshing a checkpoint requires the checkpoint file system to be frozen. Therefore, checkpoint file
system read activity suspends while the system refreshes the checkpoint. If a UNIX client is
attempting to access the checkpoint during a refresh, the client continuously tries to connect. When
the system thaws, the file system automatically remounts. If a CIFS client is attempting to access
the checkpoint during a refresh, the Windows application may drop the link; this depends on the
application and on whether the system freezes for more than 45 seconds.
Deleting a checkpoint requires the PFS to be paused. Therefore, PFS write activity suspends
momentarily while the system deletes the checkpoint.
If a checkpoint becomes inactive for any reason, read/write activity on the PFS continues
uninterrupted.
Celerra SnapSure
- 36
[Slide: example Data Mover blockmap memory statistics output]
Celerra SnapSure - 37
Celerra Network Server allocates up to 1 GB of physical RAM per Data Mover to store the blockmaps
for all checkpoints of all PFSs on the Data Mover. If a Data Mover has less than 4 GB of RAM,
512 MB will be allocated.
Each time a checkpoint is read, the system queries it to find the location of the required data block. For
any checkpoint, blockmap entries that are needed by the system but not resident in main memory are
paged in from the SavVol. The entries stay in main memory until system memory consumption
requires them to be purged.
Celerra SnapSure
- 37
Module Summary
Key points covered in this module are:
SnapSure creates a point-in-time view of a file system
When a data block in a PFS is changed, the data block is first
copied to the SavVol to preserve the point-in-time data
Bitmaps keep track of the data blocks that have changed since the
time the checkpoint was created
A blockmap maps data block location in the SavVol
All checkpoints for a single PFS reside in one SavVol
The Celerra allows up to 64 (96 with NAS 5.5) checkpoints for each
PFS
Creating, refreshing, and restoring checkpoints can be
accomplished with the CLI and Celerra Manager
The creation of SnapSure checkpoint schedules is accomplished
with Celerra Manager
Celerra SnapSure - 38
Celerra SnapSure
- 38
Closing Slide
Celerra SnapSure - 39
Celerra SnapSure
- 39
Celerra ICON
Celerra Training for Engineering
Celerra Replicator
Celerra Replicator
-1
Celerra Replicator
Upon completion of this module, you will be able to:
y Explain how Celerra Replicator makes a copy of a file
system on the same Data Mover, another Data Mover in
the same Celerra, or a Data Mover in a remote Celerra
y Identify conditions and requirements for implementing
Celerra Replicator
y Describe the stages in the replication process
y Using the CLI and/or Celerra Manager, configure Celerra
Replicator
y Describe CIFS Asynchronous Data Recovery using
Celerra Replicator and Virtual Data Movers
y Identify issues and restrictions
Celerra Replicator - 2
The objectives for this module are shown here. Please take a moment to review them.
Celerra Replicator
-2
Celerra Replicator - 3
Celerra Replicator produces a read-only, point-in-time replica of a source (production) file system.
The Celerra Replication service periodically updates this copy, making it consistent with the
production file system. This read-only replica can be used by a Data Mover in the same Celerra
cabinet (local replication), or a Data Mover at a remote site (remote replication) for content
distribution, backup, and application testing.
In the event that the primary site becomes unavailable for processing, Celerra Replicator enables you
to failover to the remote site for production. When the primary site becomes available, you can use
Celerra Replicator to synchronize the primary site with the remote site, and then failback the primary
site for production. You can also use the failover/reverse features to perform maintenance at the
primary site or testing at the remote site.
Celerra Replicator
-3
Key Terminology
y Local Source (local_src)
y Remote Destination (remote_dst)
y Local Destination (local_dst)
y Delta Set
SnapSure Save Volume
y Playback Service
y Replicator SavVol
y Replication Failover
y Replication Resync
y Replication Reverse
Celerra Replicator - 4
Celerra Replicator
-4
Replication Requirements
y Celerra Replicator keeps the destination file system up to date
Initial copy of source file system required at destination
Use fs_copy command
For large file systems, it may be necessary to make a physical copy and
transport it to destination
Celerra Replicator - 5
The Production File System must be mounted before you can begin replication. For a local replication,
the source and destination Data Mover must be appropriately configured for the network (IP addresses
configured). For a remote replication, the IP addresses must be configured for the primary and
secondary Data Movers. IP connectivity must also exist between the primary Control Station and the
remote Control Station.
Celerra Replicator
-5
[Diagram: local replication within one Celerra; the source file system on the primary Data Mover is replicated through a shared SavVol to the destination file system on the secondary Data Mover]
Celerra Replicator - 6
Local replication produces a read-only copy of the source file system for use by a Data Mover in the
same Celerra cabinet. The primary Data Mover services reads and writes from the network clients
while the secondary Data Mover exports the read-only replica of the source file system. Local
replication can occur within the same Data Mover (called loopback) or different Data Movers.
This slide shows the process of local replication.
1. Throughout the process, network clients read and write to the source file systems through the
primary Data Mover without interruption.
2. For the initial replication start, the source and destination file systems are manually synchronized
using the fs_copy command.
3. After synchronization, the addresses of all block modifications made to the source file system are
used by the replication service to create one or more delta sets, by copying the modified blocks to
the SavVol shared by the primary and secondary Data Movers.
4. The local replication playback service periodically reads any available, complete delta sets and
updates the destination file system, making it consistent with the source file system. During this
time, all subsequent changes made to the source file system are tracked.
5. The secondary Data Mover exports the read-only copy to use for content distribution, backup, and
application testing.
Celerra Replicator
-6
[Diagram: remote replication; delta sets created in the primary-site SavVol are transferred over the IP network to the remote SavVol and played back to the destination file system on the secondary Data Mover]
Celerra Replicator - 7
Remote replication creates and periodically updates a read-only copy of a source file system at a
remote site. This is done by transferring changes made to a production file system (source) at a
local site to a file system replica (destination) at the remote site over an IP network. By default,
these transfers are automatic. However, you can initiate a manual update.
This slide shows the process of remote replication.
1. Throughout this process, network clients read and write to the source file systems through a Data
Mover at the primary site without interruption.
2. For the initial replication process to start, the source and destination file systems are manually
synchronized using the fs_copy command.
3. The addresses of all subsequent block modifications made to the source file system are used by the
replication service to create one or more delta sets. Remote replication creates a delta set by
copying the modified blocks to the SavVol at the primary site.
4. Remote replication transfers any available, complete delta sets (which includes the block
addresses) to the remote SavVol. During this time, the system tracks subsequent changes made to
the source file system on the primary site.
5. At the remote site, the replication service continually plays back any available, complete delta sets
to the destination file system, making it consistent with the source file system.
6. The Data Mover at the destination site exports the read-only copy for content distribution, backup,
and application testing. This optional step is done manually.
Celerra Replicator
-7
[Diagram: replication failover; the destination file system at the remote site becomes read/write while the source, if online, becomes read-only]
Celerra Replicator - 8
If the primary file system becomes unavailable, usually as the result of a disaster, you can make the
destination file system read/write. After the primary site is again available, you can then restore
replication to become read/write at the primary site and read-only at the remote site.
Failover breaks the replication relationship between the source and destination file system, and
changes the destination file system from read-only to read/write.
The system plays back the outstanding delta sets on the destination file system according to
instructions issued by the user. The system can play back either all or none of the delta sets. The
system then stops replication in a way that allows it to be restarted later.
The system also fails over to the remote site, enabling read/write access to the destination file system
from the network clients. If the source is online, it becomes read-only.
Celerra Replicator
-8
[Diagram: replication resync; replication runs in the reverse direction, with the remote file system read/write and the original source read-only]
Celerra Replicator - 9
When the original source file system becomes available, the replication relationship can be
reestablished.
The fs_replicate -resync option is used to populate the source file system with the changes
made to the destination file system while the sites were in a failover condition. The direction of the
replication process is reversed. The remote file system is read/write and the source file system is
read-only.
Autofullcopy=yes will ensure that a full copy of the data from the source to the remote site takes
place. Without the autofullcopy=yes option, an incremental copy will occur. If the standard
fs_replicate -resync fails, the user will be prompted to run it again using the new
autofullcopy=yes option.
Notes:
This is run from the remote site and can take a considerable amount of time.
If you think that a resynchronization may not be successful, you can execute a full copy of the source
file system by using the autofullcopy=yes option with the fs_replicate -resync
command.
Example: fs_replicate -resync fs1:cel=eng172158 fs2 -o
autofullcopy=yes
Celerra Replicator
-9
y The direction of replication is reversed
y Primary site again accepts source updates from network clients
[Diagram: replication reverse; the source file system at the primary site is read/write again and changes flow through the SavVols to the read-only destination at the remote site]
Celerra Replicator - 10
During reverse, the direction of replication is reversed (back to what it was before the failover). The primary site
again accepts the source file system updates from the network clients, then the replication service
transfers them to the remote site for playback to the destination file system. During the reversal phase,
both the source and destination file systems are temporarily set as read-only.
Note:
A reverse requires both the primary and remote sites to be available and results in no data loss.
During the reversal phase, both the source and destination file systems are temporarily set to read-only.
Celerra Replicator
- 10
y Allows you to temporarily stop replication
[Diagram: replication suspend; delta-set transfer between the primary and secondary Data Movers is paused]
Celerra Replicator - 11
Using the suspend and restart options for a replication relationship allows you to temporarily stop
replication, perform some action, and then restart the replication relationship using an incremental
rather than a full data copy.
Stopping and restarting replication can be useful for the following:
y Change the size of the replication SavVol. During replication, the size of a SavVol may need
to be changed because the SavVol is too large or too small.
y Mount the replication source or destination file system on a different Data Mover.
y Change the IP addresses or interfaces that replication is using.
Suspend is an option that allows you to stop an active replication relationship and leave replication in a
condition that allows it to be restarted. When suspending a replication relationship, the system:
y Ensures all the delta sets have been transferred to the destination site.
y Plays back all the outstanding delta sets.
y Creates a checkpoint on the source site, which is used to restart replication.
Example:
fs_replicate -suspend <srcfs> <dstfs>:cel=<cel_name>
Celerra Replicator
- 11
[Diagram: replication restart; delta-set transfer between the primary and secondary Data Movers resumes incrementally]
Celerra Replicator - 12
After you suspend a replication relationship using the -suspend option, only the -restart option can be
used to restart it. This command verifies that replication is in a condition that allows a restart. It begins
the process with a differential copy using a checkpoint of the source file system.
The restart checks to see if a suspend has occurred, and if it has, it uses the suspend checkpoint to
incrementally restart the replication process.
Example:
fs_replicate -restart <srcfs> <dstfs>:cel=<cel_name>
Celerra Replicator
- 12
y Replication may be stopped or aborted when you no longer want to keep the destination synchronized
y May occur when file systems have fallen out of sync, making it necessary to restart replication
[Diagram: stopping or aborting replication between the primary and secondary Data Movers]
Celerra Replicator - 13
Celerra Replicator
- 13
Replication Policies
y Frequency at which delta sets are created and played back is
determined by two policies:
Time out
Primary site: time interval at which replication automatically generates
a delta set
Remote site: time interval at which the playback service automatically plays
back all available delta sets to the destination file system
High Watermark
Primary site: the point at which the replication service automatically
creates a delta set on the SavVol of accumulated changes since the last
delta set (in MB)
Remote site: the point at which the replication service automatically
plays back all available delta sets to the destination file system (in MB)
Celerra Replicator - 14
The delta set contains the block modifications made to the source and is used by the replication service
to synchronize the destination with the source. The amount of information within the delta set is based
on the activity of the source and how you set the time-out and high watermark replication policies.
The minimum delta set size is 128MB. The replication service is triggered by either the time-out or
the high watermark policy, whichever is reached first.
Time-out
At the primary site, the time-out is the time interval at which the replication service automatically
generates a delta set. At the remote site, the time-out value is the time interval at which the playback
service automatically plays back all available delta sets to the destination file system. At both sites,
the default time-out value is 600 seconds. A value of 0 indicates that there is never a time-out, and
pauses the replication activities.
High Watermark
At the primary site, the high watermark indicates the size of the file system changes (in MBs)
accumulated since the last delta set. The replication service automatically creates a delta set on the
SavVol. At the remote site, the high watermark represents the size (in MBs) of the delta sets present
on the secondary SavVol. The replication service automatically plays back all available delta sets to
the destination file system. At both sites, the default high watermark is 600 MB. A value of 0 pauses
the replication activities and disables this policy.
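For example (the value is illustrative), the source-side high watermark can be lowered with the -modify option described later in this module:
fs_replicate -modify src -option hwm=300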
Celerra Replicator
- 14
Celerra Replicator - 15
By default, the size of the SavVol is 10% of the size of the source file system. The minimum SavVol
size is 1 GB and the maximum is 500 GB.
This may not be sufficient for your particular replication. Consider the size of the source file system
and, more importantly, the frequency of changes to the source. High write activity could indicate a
larger SavVol. Also consider the network bandwidth available between the source and destination
Celerras. If the rate of change on the source file system is continuously greater than the available
network bandwidth, the replication service will not be able to transfer data quickly enough and will
eventually become inactive. Lastly, evaluate the risk tolerance to network outages. For example, if
the network experiences long outages, check if the primary SavVol will accommodate the necessary
delta sets.
If you determine that the default SavVol size is insufficient, it can be changed before replication starts,
when starting replication, or after replication is running.
Celerra Replicator
- 15
Celerra Replicator - 16
Celerra Replicator
- 16
[Diagram: local replication sequence (Create ckpt > Copy ckpt > Start Repl. > Create ckpt > Copy data > Check status); this step creates checkpoint ckpt_01 of the read/write source file system]
y Example:
fs_ckpt src_fs -Create
Celerra Replicator - 17
A SnapSure checkpoint is used as the baseline of data to be copied to the destination file system.
Command
y fs_ckpt <fs_name> -Create
Example
y fs_ckpt local_src -Create
Celerra Replicator
- 17
[Diagram: local replication sequence; this step creates the destination file system (rawfs, mounted read-only) to receive the copy of ckpt_01 from the read/write source file system]
y Example:
nas_fs -name dest_fs -type rawfs -create
samesize=src_fs pool=clar_r5_performance
-option slice=y
Celerra Replicator - 18
Since replication has to start on a rawfs file system, the destination file system has to be created as
rawfs. It is later converted to uxfs. The destination must be the same size as the source.
To force the file system from uxfs to rawfs, issue the new nas_fs -T rawfs <filesystem>
-Force command.
The procedure shown here assumes your source file system was created on a slice volume, or by using
AVM. If your source file system was created on a disk or a set of concatenated disks, refer to
Using Celerra Replicator for details. You should use the CLI to create the destination file
system, since Celerra Manager cannot create a rawfs type file system.
The destination file system can be created in the following ways:
1. nas_fs -name <name> -type rawfs -create samesize=<local_src> pool=<x> -option slice=y
2. By manually creating a volume of the correct size, creating a metavolume, and then creating the
file system (as rawfs)
After you create the destination file system, mount it as read-only on the secondary Data Mover.
The samesize option -- The samesize option will ensure that the file systems on both sides are
identical in size.
Celerra Replicator
- 18
[Diagram: local replication sequence; this step copies checkpoint ckpt_01 from the read/write source file system to the rawfs destination file system]
y Example:
fs_copy -start src_ckpt_01 dest_fs
-option convert=no
Celerra Replicator - 19
Copy the checkpoint of the source file system to the destination file system to create a baseline. This
copy will be updated incrementally with changes that occur to the source file system. You do this once
per file system to be replicated. The checkpoint must be copied without converting it to uxfs by using
the convert=no option.
To copy a checkpoint to the destination file system:
fs_copy start <local_ckpt> <dstfs> -option convert=no
Celerra Replicator
- 19
[Diagram: local replication sequence; this step starts replication, creating the SavVol shared by the read/write source file system and the read-only destination file system]
Celerra Replicator - 20
When you start replication, the system verifies that the primary and secondary Data Movers can
communicate with each other. Changes made to the source file system begin to get logged. You start
the process once per file system to be replicated. The default is 600 MB for high watermark, and 600
seconds for time-out.
To start replication for the first time:
fs_replicate -start <source file system> <destination file system>
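For example, with the illustrative file system names used in this sequence:
fs_replicate -start src_fs dest_fs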
Celerra Replicator
- 20
[Diagram: local replication sequence; this step creates a second checkpoint (ckpt_02) of the source file system alongside ckpt_01]
Celerra Replicator - 21
Next, you create a second checkpoint that is compared to the initial checkpoint. The changes between
the two checkpoints will be copied to the destination file system in the next step.
To create a SnapSure checkpoint of the source file system:
fs_ckpt <fs_name> -Create
Celerra Replicator
- 21
[Diagram: local replication sequence; this step copies the changes between ckpt_01 and ckpt_02 to the destination file system]
Celerra Replicator - 22
Next, copy the incremental changes that exist between the two checkpoints to the destination file
system.
Command:
y fs_copy -start <new_checkpoint> <dstfs> -fromfs
<previous_checkpoint> -option <options>
Where
y <new_checkpoint> is the last checkpoint taken
y <dstfs> is the destination file system
y <previous_checkpoint> is the first checkpoint taken
Celerra Replicator
- 22
[Diagram: local replication sequence; this step checks the status of the running replication]
y Example:
fs_replicate -list
fs_replicate -info src_fs -v
Celerra Replicator - 23
To list the current replications that are running and check their status:
fs_replicate -list
Celerra Replicator
- 23
This slide provides an overview of the remote replication process. Details will follow.
Celerra Replicator
- 24
[Diagram: remote replication sequence (Verify link > Source ckpt > Create Dest fs > Copy ckpt > Start Repl. > Second ckpt > Copy changes > Check status); this step verifies the link between the two Celerras]
Example:
[primary site]# /nas/sbin/nas_rdf -init eng16864 172.24.168.64
[remote site]# /nas/sbin/nas_rdf -init eng16857 172.24.168.57
Celerra Replicator - 25
At both the primary and remote sites, you must establish a trust relationship that enables HTTP
communications between the primary and remote Celerra. This trust relationship is built on a
passphrase set on the Control Stations of both Celerras. The passphrase is stored in clear text and is
used to generate a ticket for Celerra-to-Celerra communication. The time on the primary and remote
Control Stations must be synchronized.
Note: To establish communication, you must have root privileges and each site must be active and
configured for external communications.
Command:
y [primary site]# /nas/sbin/nas_rdf -init <cel_name_of_remote_site_CS>
<ip_address_of_remote_CS_interface>
You are prompted for the following to establish a user login account:
y login
y password
y passphrase (must be the same on both sides)
Note: this trusted relationship may also be established using the command:
nas_cel -create.
Celerra Replicator
- 25
[Diagram: remote replication sequence; this step verifies the link and creates a checkpoint of the source file system]
nas_cel -list
fs_ckpt src_fs -Create
Celerra Replicator - 26
Celerra Replicator
- 26
[Diagram: remote replication sequence; this step creates the destination file system at the remote site and copies the source checkpoint to it]
Celerra Replicator - 27
The destination file system must be created as rawfs, and must be the same size as the source file
system. This can be accomplished using the samesize option (new in 5.3). The samesize option will
ensure that the file systems on both sides are identical in size.
To create the destination file system:
nas_fs -name <dstfs> -type rawfs -create samesize=srcfs:cel=eng123 pool=clar_r5_performance
The entire checkpoint of the source is then copied to the destination file system just created. This
creates a baseline copy of the source on the destination. This copy will be updated incrementally with
changes that occur to the source file system. You do this once per file system to be replicated.
To copy a checkpoint to the destination file system:
fs_copy -start <srcfs> <dstfs>:cel=<cel_name> -option convert=no
Where:
srcfs is the source file system checkpoint
dstfs is the destination file system
cel_name is the remote Celerra
Example:
fs_copy -start src_ckpt1 dest:cel=eng16864 -option convert=no
Celerra Replicator
- 27
y Start Replication
[Diagram: remote replication sequence; this step starts replication between the primary and remote sites]
Celerra Replicator - 28
When you start replication, the system verifies that the primary and secondary Data Movers can communicate,
starts replication, and begins logging changes made to the source. The default is 600 MB for high water mark
and 600 seconds for time-out. The first replication policy creates the delta set.
To start replication:
fs_replicate -start <srcfs> <dstfs>:cel=<cel_name> savsize=<MB>
Example:
fs_replicate -start src dest:cel=eng16864 savsize=20000
When using the fs_replicate -modify option, the values become effective the next time a trigger for
these policies is reached. For example, if the current policies are changed from 600 for the high watermark and
time-out interval to 300, the next time replication reaches 600, the trigger is changed to 300.
Example: fs_replicate -modify src -option hwm=300
When using the -refresh option, the execution of the command either starts the generation of a delta set or a
playback of a delta set. When that operation completes, the time-out interval or high watermark changes. The
difference between the two is that the new -modify option does not create a delta set or attempt a playback.
Refreshing a replication creates a delta set on the source site and plays back an outstanding delta set on
the destination site as if the next high watermark or time-out interval was reached.
Celerra Replicator
- 28
[Diagram: remote replication sequence; this step creates a second checkpoint of the source and copies the changes to the remote site]
Celerra Replicator - 29
Celerra Replicator
- 29
[Diagram: remote replication sequence; this step checks the status of the running replication]
Celerra Replicator - 30
To list the current replications that are running and check their status:
fs_replicate -list
Celerra Replicator
- 30
Right Click Replications > Select Available Destinations > Select New
Destination
Celerra Replicator - 31
Note:
The destination Data Mover needs to be set up as primary, since it may be defined as a standby.
You also need to set up an interface for the destination Data Mover.
Celerra Replicator
- 31
Celerra Replicator - 32
Celerra Replicator
- 32
Celerra Replicator - 33
Celerra Replicator
- 33
Celerra Replicator - 34
Celerra Replicator
- 34
Celerra Replicator - 35
Celerra Replicator
- 35
Celerra Replicator - 36
Celerra Replicator
- 36
Celerra Replicator - 37
Check your Source and Destination File Systems, as well as the Time Out and High Water Mark values.
Celerra Replicator
- 37
Task Status
Celerra Replicator - 38
Click on Task Status, and Right Click on the task for an update.
Celerra Replicator
- 38
y Example:
[remote_site] fs_replicate -failover
src:cel=eng16857 dest
Celerra Replicator - 39
A failover is used if the primary site has experienced a disaster and is unavailable, or the source file
system is corrupted. The destination file system becomes read/write. If the primary site again
becomes available, you can resynchronize your file systems at the remote and primary sites and restart
replication.
To fail over to a remote site:
[remote_site] fs_replicate -failover <srcfs>:cel=<cel_name> <destfs> -option <options>
Celerra Replicator
- 39
Celerra Replicator - 40
After failover, a checkpoint is created on the remote site and the destination file system becomes read/write.
New writes are then allowed on the destination file system. Before resynchronizing the file systems, verify that
the file system on the original primary site is available and mounted as read-only. To attempt to resynchronize
the source and destination file systems and restart replication, use the fs_replicate -resync option from the
remote site (a hedged example follows these notes).
The resync option attempts to incrementally resynchronize the source and destination file systems by examining
the changes in the SavVol. Reasons a resync may not be possible are:
y You performed a failover and, after your primary site became available, continued to receive I/O to your
source file system.
y After you performed a failover, you decided to abort replication when the primary site became available
because the information was unusable.
y Your file system fell out of synchronization.
Note: Replication is now working in reverse order. Changes that occurred after the failover are copied to the
source site and replication is started again. If resynchronization is not possible, abort replication and restart
replication.
As previously mentioned, a new autofullcopy option is available with v5.3. Specifying autofullcopy=yes ensures
that a full copy of the data from the source to the remote site takes place; without it, an incremental copy
occurs. If the standard fs_replicate -resync fails, the user is prompted to run it again using the
autofullcopy=yes option.
Note: This is run from the remote site and can take a considerable amount of time.
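A hedged sketch of the resync step (the argument pattern is assumed to mirror the -failover example in this module, and autofullcopy is the v5.3 option described above; all names are placeholders):
[remote_site] fs_replicate -resync src:cel=eng16857 dest
[remote_site] fs_replicate -resync src:cel=eng16857 dest -option autofullcopy=yes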
Celerra Replicator
- 40
Replicate Reverse
y Resumes normal replication
Source to Destination
y Changes:
Destination Read Only
Source Read Write
y Example:
[remote_site] fs_replicate -reverse src:cel=eng16857 dest
Celerra Replicator - 41
Reverse changes the replication direction. It requires both the primary and remote sites to be
available. The write activity on the destination file system is stopped, and any changes are applied to
the source file system before the primary site becomes read/write. Before you reverse the replication
direction, you should verify the direction of your replication process.
To initiate failback from the remote site:
[remote_site] fs_replicate -reverse <srcfs>:cel=<cel_name> <destfs>
Celerra Replicator
- 41
Sequence is Important!
Celerra Replicator - 42
Celerra Replicator
- 42
Celerra Replicator - 43
Celerra Replicator includes support for failover and reverse, as well as Virtual Data Mover support.
Combining these features provides an asynchronous data recovery solution for CIFS servers and CIFS
file systems.
In a CIFS environment, in order to successfully access file systems on a remote secondary site, you
must replicate the entire CIFS working environment, including local groups, user mapping information,
Kerberos, shares, and event logs. You must replicate the production file system's attributes, access the
file system through the same UNC path, and find the previous CIFS server's attributes on the
secondary file system.
Since the release of the v5.2 features, an asynchronous data recovery solution is possible. By following
the documented procedures, Data Mover clients can continue accessing data in the event of a failover
from the primary site to the secondary site.
Celerra Replicator
- 43
Celerra Replicator - 44
A connection must be established between the Control Stations to enable replication. Both the primary
and secondary sites must have interfaces configured with the same name; use the server_ifconfig
command to configure these interfaces (a hedged example follows). DNS resolution is required on both
the primary and secondary sites, and the time must be synchronized between the two sites and the
domain controllers at each site. Some method of mapping Windows users to UIDs and GIDs is also
required, for example Internal Usermapper.
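For example, a minimal sketch of creating an identically named interface on the Data Mover at each site (device, interface name, and addresses are placeholders):
server_ifconfig server_2 -create -Device cge0 -name repl_int -protocol IP 10.127.50.162 255.255.255.224 10.127.50.191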
File systems must be prepared. First, determine the space required at both the primary and secondary
sites. Then create volumes and file systems to accommodate the size requirements.
Next, VDMs are created. A primary VDM is created in the loaded state. The secondary VDM is
created in a mounted state. This read-only state is used on the secondary side when replicating a VDM.
It cannot be actively managed, and receives updates from the primary during replication.
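A minimal sketch of this VDM preparation step, assuming a VDM named vdm_cifs1 on Data Mover server_2 at each site (the mounted state on the secondary is the read-only state described above):
[primary]   nas_server -name vdm_cifs1 -type vdm -create server_2 -setstate loaded
[secondary] nas_server -name vdm_cifs1 -type vdm -create server_2 -setstate mounted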
Finally, create data file systems on the primary and secondary sites and mount file systems to the
VDM.
In the steady-state CIFS environment, all data is replicated from the primary to the secondary, and all
the daily management changes are automatically replicated. Successful access to CIFS servers, when
failed over, depends on the customer taking adequate actions to maintain DNS, Active Directory, user
mappings, and network support of the data recovery site. Celerra depends on those components for
successful failover.
Monitor the Data Mover, file systems and the replication process.
Celerra Replicator
- 44
Restrictions
y Celerra Data Migration Service (CDMS) not supported
y MPFS on destination file system not supported
y fs_replicate -failover, -resync, and
-reverse options not supported for local or loopback
replication
y A TimeFinder BCV cannot be a source or destination file
system for replication
y The Primary CS manages replication; standby CS not
used by replication
If primary CS fails over to standby CS, replication service continues
to run, but replication management capabilities not available
Celerra Replicator - 45
Celerra Replicator
- 45
Configuration Considerations
y Avoid allowing the replication service to create delta sets
faster than it can copy them to the destination
y Examine network bandwidth
y At the beginning of the delta set playback for CIFS, there
is a temporary freeze/thaw period that may cause a
network disconnect
y Evaluate the remote site for the following and determine if
Virtual Data Movers can be used
Subnet addresses
NIS/DNS availability
Windows environment WINS, DNS, DC, BDC, share names,
availability of Usermapper or NTMigrate
Celerra Replicator - 46
To avoid the source and destination file systems becoming out of sync, do not allow the replication
service to create delta sets faster than it can copy them to the destination file system. Set the delta set
creation replication policy to a higher number than the delta set playback number.
You need to determine if the network bandwidth can effectively transport changes to the remote site.
During the delta set playback on the destination file system, network clients can access the destination
file system. However, at the beginning of the delta set playback for CIFS clients, there is a temporary
freeze/thaw period that may cause a network disconnect. Therefore, do not set the replication policy to
a low number since this reduces the availability of the destination file system.
Lastly, evaluate the remote side for a compatible infrastructure: for example, DNS, NIS, WINS,
Domain Controllers, BDCs, NTMigrate, and Usermapper.
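As a hedged illustration of this kind of monitoring, the outstanding delta sets and playback state can be checked periodically from the Control Station (the -info option is assumed here; -list was shown earlier in this module):
fs_replicate -list
fs_replicate -info src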
Celerra Replicator
- 46
Troubleshooting
y Do not change source or destination IP addresses once
replication has started
y Network Connectivity
Verify duplex match
Routing issues
Celerra Replicator - 47
Celerra Replicator
- 47
Module Summary
Key points covered in this module are:
Celerra Replicator produces a read-only, point-in-time replica of a
source file system (local or remote)
Local replication produces a copy of the source file system within
the same Celerra cabinet
Remote replication produces a copy of the source file system in a
remote Celerra
A SavVol is used to store copied data blocks from the source
Failover changes the destination file system from read-only to read-write
Replication reverse resumes normal replication (source to destination), causing
the destination file system to become read-only, and the source file
system to become read-write
Celerra Replicator - 48
Celerra Replicator
- 48
Closing Slide
Celerra Replicator - 49
Celerra Replicator
- 49
Celerra ICON
Celerra Training for Engineering
Celerra iSCSI
Celerra iSCSI
-1
Revision History
Rev Number    Course Date      Revisions
1.0           February 2006    Complete
1.2           May 2006         NAS 5.5 enhancements
Celerra iSCSI - 2
Celerra iSCSI
-2
Celerra iSCSI - 3
Celerra iSCSI
-3
What is iSCSI?
Internet Small Computer System Interface (iSCSI)
[Slide diagram: a host attached to a Fibre Channel target through a Fibre Channel SAN, compared with a host attached to an iSCSI target over an IP network]
Celerra iSCSI - 4
The concepts of iSCSI are very similar to those of a Storage Area Network. With iSCSI architecture,
host systems and storage devices communicate over an IP network, using TCP to provide a reliable
transport service. SCSI commands, data, and status are encapsulated and delivered between an initiator
and target using iSCSI protocol.
Traditionally, hosts were connected to storage either directly (cable attached) using the SCSI protocol,
or through a SAN using SCSI over Fibre Channel protocol (as shown on the slide). Again, the concept
of iSCSI is not much different. Instead of a Fibre Channel SAN, iSCSI encapsulates the SCSI protocol
within the TCP/IP protocol stack. This allows for any IP network to transport iSCSI communications
between host and storage.
Celerra iSCSI
-4
Why iSCSI?
y IP networks are extremely common and typically already
in place
Good base of expertise in designing and maintaining IP networks
Celerra iSCSI - 5
iSCSI allows for host-to-storage connections over an IP network. IP networks are extremely common
and readily available for transporting SCSI communications between host and storage, and SCSI itself
has been in use for a long time.
There is a lot of IT experience in building, maintaining, and tuning IP networks. This makes it easier
to implement the technology. For those organizations who do not have a SAN infrastructure in place,
the cost of an IP infrastructure is typically less. Moreover, IP networks are most likely present in the
organization already.
Lastly, while Fibre Channel has become the storage interconnect of choice for many data centers, a
large population of servers still exists for which the cost of Fibre Channel has been a barrier to
consolidation on a SAN. With iSCSI, organizations have a low-cost method for consolidating and
networking these previously stranded servers.
Standard Network Interface Cards can be used. For higher performance, a number of vendors offer
TOE (TCP/IP Offload Engine) interface cards that offload the TCP/IP processing from the CPU to the
interface card. These are very effective with iSCSI.
Celerra iSCSI
-5
Celerra iSCSI - 6
In order to understand Celerra iSCSI implementation, it is necessary to define some of the terms and
concepts used. We will see that the concepts are very similar to Fibre Channel based storage area
networks. We will discuss each in the following slides.
Celerra iSCSI
-6
[Slide diagram: a host (iSCSI initiator) connected over an IP network to a Celerra Data Mover (iSCSI target)]
Celerra iSCSI - 7
In any SCSI environment, there are two types of devices: Initiators and Targets. Initiators send
commands such as read or write, and targets respond.
In order to support iSCSI, each host system needs to run at least one iSCSI initiator. Typically, an
iSCSI initiator appears simply as any other SCSI initiator in the host system.
The storage system, in our case the Celerra Data Mover, is configured as an iSCSI target. An iSCSI
target provides logical units and supports SCSI protocol the same as any SCSI target does, but also
supports the iSCSI protocol. An iSCSI storage network, then, is a connection, via TCP, between one or
more iSCSI initiators and one or more iSCSI targets.
On start up, an initiator logs into a target and the LUNs associated with the target are made available to
the host system. An iSCSI LUN appears like a local SCSI disk drive to the host and using standard
SCSI protocol, the host communicates with the disk and can use it like any other disk: for example,
initialize it with a signature, create a partition, format a file system, and assign a drive letter.
EMC currently supports Windows and Linux initiators. Check the support matrix for specific
requirements and restrictions.
Celerra iSCSI
-7
[Slide diagram: protocol encapsulation - a SCSI CDB is wrapped in an iSCSI PDU, which is carried in a TCP packet inside an IP packet]
Celerra iSCSI - 8
The protocol model above shows the layers and protocols involved in a simple iSCSI storage network.
The iSCSI target provides both a SCSI protocol layer and an iSCSI protocol layer. On the initiator side,
SCSI Command Descriptor Blocks (CDBs) are passed down from the SCSI layer; an iSCSI wrapper is added and
the SCSI CDB is transmitted within an iSCSI PDU (Protocol Data Unit) across an IP network to the iSCSI target.
Communication between the iSCSI initiator and iSCSI target occurs over one or more TCP
connections. A group of TCP connections that link an iSCSI initiator and target form an iSCSI session.
On the target side, the iSCSI layer extracts the SCSI CDB from the iSCSI PDU. The iSCSI layer
presents the SCSI CDB to the SCSI layer for execution on the SCSI device. The SCSI response is then
transmitted back to the iSCSI initiator.
To the host the iSCSI storage device appears as a standard SCSI device. The operating system and
applications communicate with the device using standard SCSI commands. The fact that the transport
involves iSCSI and TCP/IP is transparent to the operating system and applications.
Celerra iSCSI
-8
[Slide diagram: network portals on a Data Mover - Target 1 and Target 2 are exposed through Portal Groups 1, 2, and 3; the portal groups contain network portals such as 10.168.0.111:3260, 172.24.81.13:3262, 172.24.81.12:3261, 192.168.0.12:3262, and 192.168.0.12:3269, bound to the Data Mover's network interfaces]
Celerra iSCSI - 9
Both initiators and targets use network interfaces, called network portals, to communicate over the
network. An initiator's network portal is defined by IP address, while the target's network portal is
defined by IP address and TCP port (default 3260). Therefore, target portals can share IP addresses as
long as each portal uses a unique TCP port. This slide shows how the network portals might be
defined on the target, or Celerra, side.
A portal group is a collection of one or more network portals that are identified by a tag called the
portal group tag. Celerra requires portal groups, which are primarily used for session control.
Celerra iSCSI
-9
iSCSI Names
y All initiators and targets require a unique iSCSI identifier
y Two types of iSCSI names
iqn. iSCSI Qualified Name
iqn.1992-05.com.emc:apm000339013630000-10
Celerra iSCSI - 10
All initiators and targets within an iSCSI network must be named with a unique worldwide iSCSI
identifier. There are two types of iSCSI names.
IQN iSCSI Qualified Name
To generate names of this type, the organization generating this name must own a registered domain
name. This domain name does not have to be active, and does not have to resolve to an address; it just
needs to be reserved to prevent others from generating iSCSI names using the same domain name. The
domain name must be additionally qualified by a date.
EUI Extended Unique Identifier
An EUI is a globally unique identifier based on the IEEE EUI-64 naming standard. These names are
formed by the eui prefix followed by a 16-character hexadecimal name. The 16-character part of the
name includes 24 bits for the company name assigned by IEEE, and 40 bits for a unique ID such as a
serial number.
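For example, a hypothetical EUI-format name: eui.02004567A425678D (the first 6 hexadecimal characters carry the IEEE-assigned company ID, the remaining 10 the vendor-assigned unique ID).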
Celerra iSCSI
- 10
[Slide diagram: LUN masking - a Data Mover exposes iSCSI LUN1 through LUN4 over the IP network; one host is masked to see LUN1 and LUN2, the other to see LUN3 and LUN4]
Celerra iSCSI - 11
A logical unit is an element on a storage device that interprets SCSI CDBs and executes SCSI
commands such as reading from and writing to storage. Each Logical Unit has an address within a
target called a Logical Unit Number (LUN). As opposed to raw back-end SCSI device based LUNs, a
Celerra iSCSI LUN is a software feature that processes SCSI commands.
The size of an iSCSI LUN is limited to the maximum size of a Celerra file, currently 1TB.
In the iSCSI protocol, LUN masks are used to control access to LUNs on iSCSI targets. A LUN mask
is essentially a filter that controls which initiators have access to which LUNs on the target. If you
create a LUN mask that denies an initiator access to a specific LUN, that initiator cannot see or access
the LUN. The Celerra system supports iSCSI LUN masking based on the iSCSI names of the host
initiators. In this example, one host has been defined to have access to LUN1 and LUN2. The host on
the right has access to LUN3 and LUN4. Generally, only a single host is granted access to a LUN.
In addition, note there are two types of Celerra iSCSI LUNs:
y Production LUN: A logical unit that serves as a primary (or production) storage device.
y Snap LUN: A point-in-time representation (an iSCSI snap) of a PLU that has been promoted to
LUN status so that it can be accessed.
Note: It is strongly recommended that you only use LUNs 0 to 127 for Production LUNs. LUNs 128 to
254 are used by Celerra iSCSI host applications for promoting iSCSI snaps to Snap LUN status.
Celerra iSCSI
- 11
iSCSI Discovery
[Slide diagram: two discovery methods - in SendTargets discovery, an initiator configured with the target portal 10.127.50.162:3260 queries the target directly over the IP network; with iSNS, initiators and targets register their names and portals with an iSNS server, which initiators then query for available targets]
Celerra iSCSI - 12
Before an initiator can establish a session with a target, the initiator must first discover where targets
are located and the names of the targets available to it. This information is gained through the process
of discovery. Discovery can occur in two ways:
SendTargetDiscovery
The initiator is manually configured with the target's network portal which it uses to establish a
discovery session with the iSCSI service on the target. As shown, the initiator issues a
SendTargetDiscovery command. The target responds with the names and addresses for the targets
available to the host.
iSNS Internet Storage Name Service
iSNS enables automatic discovery of iSCSI devices on an IP network. You can configure initiators
and targets to automatically register themselves with the iSNS server. Then, whenever an initiator
wants to know what targets are accessible to it, the initiator queries the iSNS server for a list of
available targets.
Celerra iSCSI
- 12
SendTargetDiscovery
y On the Windows host, configure the initiator with the
target's Network Portal (IP Address and port)
y Used to establish
a discovery
session with the
iSCSI service on
the target
Celerra iSCSI - 13
In SendTargets discovery, you manually configure the initiator with a target's network portal (IP
address and port number), and then the initiator uses that network portal to establish a discovery
session with the iSCSI service on the target system.
During the discovery session, which takes place prior to the initiator logging in to a target, the initiator
tries to discover the names of the targets that are accessible through the portal. The initiator issues a
special command, the SendTargets command, to the iSCSI service on the target system. The iSCSI
service then responds with the names and addresses of all the targets that are available on the target
system. For a Celerra Network Server, the target system is an individual Data Mover. SendTargets
discovery does not discover targets on other Data Movers on the Celerra Network Server.
Celerra iSCSI
- 13
iSCSI Authentication
Target
Initiator
CHAP Challenge
Hash value of challenge
Successful authentication
Celerra iSCSI - 14
The Celerra supports CHAP (Challenge Handshake Authentication Protocol) authentication. This slide
shows how CHAP works.
Once an initiator discovers its targets, login occurs. During login, the target and initiator agree upon
operational parameters for the session. Optionally, authentication can be configured so that during
login the CHAP protocol can validate that an initiator and/or target are who they say they are.
Authentication can be configured as one-way (initiator to target) or two-way (initiator to target and
target to initiator). The target sends a CHAP challenge message to the initiator.
y The initiator takes the shared secret, calculates a value using a one-way hash function, and returns
the hash value to the target.
y The target computes the expected hash value from the shared secret, and then compares the
expected value to the value received from the initiator. If the two values match, authentication is
acknowledged and the login process moves into the operational stage. If the two values do not
match the target immediately terminates the connection.
Celerra iSCSI
- 14
[Slide diagram: a virtually provisioned LUN - the size seen by the host is larger than the file system space actually allocated]
Celerra iSCSI - 15
The new feature of iSCSI Virtual LUN Provisioning was originally known as Sparse LUN.
Virtual Provisioning allows a user to create virtually sized LUNs that are reported to the clients
as larger than the underlying file system can actually hold.
To create a Virtually Provisioned LUN, an administrator must use the CLI interface on the Control
Station.
The maximum size of both regular iSCSI LUNs (a.k.a. Dense LUNs) and Virtual LUNs is 2 TB (minus 1 MB
for overhead).
Celerra iSCSI
- 15
Configuring iSCSI
y Configure the Celerra iSCSI target
y Configure the iSCSI host (initiator)
[Slide diagram: on the Data Mover, determine the file system size, create and mount the file system on back-end storage, then create the iSCSI target with its portal group and network portals and build the LUNs (VLUs) on the file system; a Windows (W2k) host running the iSCSI initiator connects over the network]
Celerra iSCSI - 16
This slide lists the configuration steps required to configure the iSCSI target (Celerra) and initiator.
The following slides will provide details of the target configuration on the Celerra. For information
regarding the configuration of Windows initiator, please refer to Installing Celerra iSCSI Host
Applications.
Celerra iSCSI
- 16
Celerra iSCSI - 17
Celerra iSCSI
- 17
Celerra iSCSI - 18
Celerra Manager offers two iSCSI Wizards: Create an iSCSI LUN and Create an iSCSI Target.
Each wizard guides you through the process of configuring iSCSI support on the Celerra by creating
iSCSI LUNs and Targets. The following slides guide you through the process of configuring these
elements without the use of the wizards.
To use Celerra Manager to configure iSCSI, you must activate the iSCSI license.
Celerra iSCSI
- 18
Celerra iSCSI - 19
One or more file systems must be created to provide dedicated iSCSI storage.
A CIFS or NFS client can see the LUN itself and copy and delete the LUN; however, these clients
cannot make modifications to a LUN since the CIFS and NFS protocols cannot understand the contents
of the LUN. EMC recommends that file systems with iSCSI LUNs be dedicated to iSCSI and not used
for other purposes. For example, an iSCSI file system should not be exported via a CIFS share or NFS
export.
The file system should be large enough to hold the iSCSI LUNs and any snaps of the LUNs.
Potentially, an iSCSI snap (different technology than SnapSure) could take up the same amount of
space on the file system as the LUN.
Minimum file system space needed to support one LUN:
LUN_size + (no_of_snaps * LUN_size * change_rate) + (n * LUN_size)
where change_rate is the percentage of LUN data that changes between snaps, and n is the number of
iSCSI snaps promoted at any one time.
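For example (illustrative numbers only): a 100 GB LUN with 3 snaps, an estimated 20% change rate between snaps, and 1 promoted snap requires roughly 100 + (3 * 100 * 0.20) + (1 * 100) = 260 GB of file system space.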
Promoting a snap assigns a LUN to the snap so that it can be accessed by an iSCSI host.
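Once the size is determined, a minimal sketch of creating and mounting a dedicated iSCSI file system from the Control Station (file system, pool, and Data Mover names are placeholders; available pool names depend on the back end):
nas_fs -name iscsi02 -create size=260G pool=clar_r5_performance
server_mountpoint server_2 -create /iscsi02
server_mount server_2 iscsi02 /iscsi02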
Celerra iSCSI
- 19
y Command:
server_iscsi server_x -target
-alias <alias_name>
-create <pg_tag>:np=<np_list>
y Example:
server_iscsi server_2 -target -alias target1
-create 1000:np=10.127.51.163,10.127.51.164
Celerra iSCSI - 20
You must create one or more iSCSI targets on the Data Mover so that an iSCSI initiator can establish a
session and exchange data with the Celerra. This slide shows the CLI command used to create an
iSCSI target.
server_iscsi server_x -target: Creates, deletes, and configures iSCSI targets on the Data
Mover.
<alias_name> = a local, user-friendly name for the new iSCSI target. This name is an alias for the
target's qualified name and is used for designating a specific iSCSI target in other commands. The
<alias_name> is not used for authentication, but it is used as a key identifier in the Celerra iSCSI
configuration and therefore must be unique. The <alias_name> can have a maximum of 255
characters.
<pg_tag> = the portal group tag that identifies the portal group within an iSCSI node. The
<pg_tag> is an integer within the range of 0-65535. The default port for a portal group is 3260. If no
pg_tag is specified, the default Portal Group Tag of 1 is used.
<np_list> = a comma-separated list of network portals. A network portal in a target is identified
by its IP address and its listening TCP port. The format of a network portal is <ip>[:<port>]. If no port
is specified, the default port of 3260 is used.
Celerra iSCSI
- 20
Celerra iSCSI - 21
This slide shows how to create an iSCSI target using Celerra Manager.
Note: Ensure that the iSCSI license has been enabled on the Celerra.
Celerra iSCSI
- 21
y Command:
server_iscsi server_x -lun
-number <lun_number>
-create <target_alias_name>
-size <size> -fs <fs_name>
[-vp {yes|no}]
y Example:
server_iscsi server_2 -lun -number 2 -create
target1 -size 1000 -fs iscsi02
Celerra iSCSI - 22
After creating an iSCSI target, you must create iSCSI LUNs on the target. The LUNs physically reside
on space within the file system. From a client perspective, an iSCSI LUN appears as any other disk
device.
Currently EMC implements dense LUNs on the Celerra. Dense LUNs utilize Persistent Block
Reservation (PBR) to ensure that there is sufficient space on the file system for all data that may be
written to the LUN. PBR reserves disk space for the entire LUN although the actual disk space is not
taken from the reservation pool until data is actually written to the LUN. Note: If you have a LUN on a
file system but have not yet written any data to the LUN, the server_df command reports free disk
space as if the LUN was full.
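As a hedged illustration of this behavior, the reserved space can be observed before and after creating the LUN with server_df (Data Mover and file system names are placeholders):
server_df server_2 iscsi02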
Note: There is a maximum of 255 LUNs per target; however, it is recommended that only 128 be configured if
using snaps, as Snap LUN numbers will automatically begin at 128. There is also a recommended
maximum of 1000 iSCSI LUNs per Data Mover.
Celerra iSCSI
- 22
Celerra iSCSI - 23
This slide shows how to create an iSCSI LUN using Celerra Manager.
Celerra iSCSI
- 23
y Command:
server_iscsi server_X -mask
-set <target_alias_name>
-initiator <initiator_name>
-grant <access_list>
y Example:
server_iscsi server_2 -mask -set target1
-initiator iqn.1991-05.com.microsoft:nas46.celerra6.emc.com
-grant 0-100
Celerra iSCSI - 24
To control initiator access to an iSCSI LUN, you must configure an iSCSI LUN mask. A LUN mask
controls incoming iSCSI access by granting or denying specific iSCSI initiators to specific iSCSI
LUNs. By default, all initial LUN masks are set to deny access to all iSCSI initiators. You must
create a LUN mask to explicitly grant access to an initiator.
You should not grant two initiators access to the same LUN. Granting multiple initiators access to the
same LUN can cause conflicts when more than one initiator tries writing to the LUN. If the LUN has
been formatted with the NTFS file system in Windows, simultaneous writes may corrupt the NTFS file
system on the LUN. As a best practice, you should not grant initiator access to any undefined LUNs.
Changes to LUN masks take effect immediately. Be careful when deleting or modifying a LUN mask
for an initiator with an active session. When you delete a LUN mask or remove grants from a mask,
initiator access to LUNs currently in use is cut off and will interrupt applications using those LUNs.
Celerra iSCSI
- 24
y Right click the target > Properties > LUN mask tab > New
Celerra iSCSI - 25
This slide shows how to create an iSCSI LUN mask using Celerra Manager.
Celerra iSCSI
- 25
Celerra iSCSI - 26
If you want iSCSI initiators to automatically discover the iSCSI targets on the Data Mover, you can
configure an iSNS client on the Data Mover (an iSNS server must be present). Configuring an iSNS
client on the Data Mover causes the Data Mover to register all of its iSCSI targets with an external
iSNS server. iSCSI initiators can then query the iSNS server to discover available targets on the Data
Movers.
If you want the Data Mover to authenticate the identity of iSCSI initiators contacting it, you can
configure CHAP on the Data Mover.
Celerra iSCSI
- 26
y Command:
server_iscsi server_x
-service -start
y Example:
server_iscsi server_2 -service -start
Celerra iSCSI - 27
You must start the iSCSI service on the Data Mover before using iSCSI targets.
This slide shows the command required to start the iSCSI service on the Celerra.
Celerra iSCSI
- 27
Celerra iSCSI - 28
This slide shows how to start the iSCSI service on the Celerra using Celerra Manager.
Celerra iSCSI
- 28
y Command:
server_iscsi server_x -lun
-number <lun_number>
-extend <target_alias_name>
-size <size> {M|G|T}
Celerra iSCSI - 29
iSCSI LUN extension is a new feature that helps solve the storage space problem by allowing iSCSI
LUNs to be dynamically extended while in use. Extending a LUN can be done using both Celerra
Manager and the CLI, and applies to both Virtually Provisioned and Dense LUNs.
To extend a LUN, the following prerequisites need to be met:
y There has to be sufficient space on the underlying file system to satisfy the extension request. With
Virtually Provisioned LUNs this does not apply.
y The final size of the LUN after extension cannot exceed 2 TB.
Additional notes:
y LUN extension is not reversible; at this time there is no procedure for shrinking a LUN.
y The host must be able to support LUN expansion. Once the LUN is extended, the systems
administrator must take action in order to recognize the change in size. This typically requires
running a host configuration method or rebooting the system.
On a Windows host, an NTFS file system can be extended using the diskpart utility (a hedged
sketch follows these notes).
y The command to extend a LUN can only be executed on a single LUN at a time, whether in the CLI or
Celerra Manager.
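A minimal, hypothetical diskpart session on the Windows host after the LUN has been extended (the volume number is a placeholder):
C:\> diskpart
DISKPART> rescan
DISKPART> list volume
DISKPART> select volume 2
DISKPART> extend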
Celerra iSCSI
- 29
Celerra iSCSI - 30
Celerra iSCSI
- 30
[Slide diagram: Outlook clients, CIFS/NFS/SQL clients, and an Exchange server on the LAN; the Exchange server runs the iSCSI initiator and stores its Mailbox Stores, System/Log Files, and Public Folders on file systems hosted by the Celerra]
y Components:
Exchange 2000+
Windows initiator 1.06+
Any Celerra
Celerra iSCSI - 31
This slide shows a sample iSCSI solution with Microsoft Exchange. The Celerra has been configured
as an iSCSI Target. The Exchange server is running the Windows iSCSI Initiator.
On the Celerra:
y iSCSI Target is configured
y A file system is created which will house the iSCSI LUNs
y iSCSI LUNs are created (3 in this case)
y LUN mask is configured to provide access to the Exchange server
On the Exchange Server:
y iSCSI initiator is installed
y Celerra Target portal is defined
y Log on to the Celerra Target for LUN access
y Migrate the Exchange Storage Group databases to Celerra iSCSI LUNs
Exchange clients will access the Exchange server as usual. The Storage group databases will now
reside on the Celerra iSCSI LUNs.
Celerra iSCSI
- 31
Module Summary
Key points covered in this module are:
y iSCSI is a block-level storage transport session protocol that allows
users to create a storage area network using TCP/IP networks
y The iSCSI initiator resides on the client and issues commands to the
target configured on the Celerra Data Mover
y Initiators and targets use network portals to communicate over the
network
y Celerra iSCSI LUNs are built as files within a Celerra file system
y LUN masking is required to control host access to iSCSI LUNs
y There are two types of iSCSI names: iqn and eui
y An iSCSI initiator can discover its targets either through the
SendTargetDiscovery, or an iSNS server
y Celerra supports CHAP authentication to add a higher level of
security
Celerra iSCSI - 32
Celerra iSCSI
- 32
Closing Slide
Celerra iSCSI - 33
Celerra iSCSI
- 33
Celerra ICON
Celerra ICON
Appendix
Overview
In this Appendix
Appendix A:
Appendix B:
Appendix C:
Appendix D:
Appendix E:
Appendix F:
Appendix G:
See Page
3
4
5
6
9
10
11
Celerra ICON
Description
EMC and
Hurricane
Marine, LTD
Until recently, EMC data storage has been the only EMC product Hurricane
Marine, LTD has utilized. Now, however, they have opted to put an EMC E-Infostructure in place. EMC has just
installed an EMC Connectrix ED-1032, and is now looking to implement EMC Celerra as their key file server.
Environment
People
Hurricane Marine, LTD's president and founder is Perry Tesca. The head of
his IS department is Ira Techi. You will be working closely with Mr. Techi in
implementing EMC Celerra into his network. Mr. Techi has some needs that
Celerra is required to fulfill, but there are also some potential needs that he
may like to explore.
Organization chart
On the following page is the organization chart for Hurricane Marine, LTD.
Celerra ICON
Engineering - Propulsion
Earl Pallis
Eddie Pope
Etta Place
Egan Putter
Eldon Pratt
Elliot Proh
Elvin Ping
Engineering - Structural
Edgar South
Ellen Sele
Eric Simons
Eva Song
Ed Sazi
Evan Swailz
Sales East
Sarah Emm
Sadie Epari
Sal Eammi
Sage Early
Sam Echo
Santos Elton
Saul Ettol
Sash Extra
Sean Ewer
Sales West
Seve Wari
Scott West
Seda Weir
Seiko Wong
Sema Welles
Selena Willet
Selma Witt
Sergio Wall
Seve Wassi
Seymore Wai
Steve Woo
Information Systems
Ira Techi
Iggy Tallis
Isabella Tei
Ivan Teribl
Managers
Perry Tesca
Liza Minacci
Earl Pallis
Edgar South
Sarah Emm
Seve Wari
Ira Techi
Celerra ICON
Microsoft networking features
Windows 2000 domains
Though the root domain is present solely for administrative purposes at this
time, corp.hmarine.com will hold containers for all users, groups, and
computer accounts. A third domain, asia.corp.hmarine.com, is also being
planned for future expansion.
Root
Domain:
hmarine.com
Domain Controller:
hm-1.hmarine.com
Sub Domain:
corp.hmarine.com
Computer Accounts:
w2k1, w2k2, w2k3, w2k4,
w2k5, w2k6
All Data Movers
All user accounts
Domain Controller:
hm-dc2.hmarine.com
Celerra ICON
Full Name
NT Global Group
Earl Pallis
EPing
Elvin Ping
Propulsion Engineers
EPlace
Etta Place
Propulsion Engineers
EPope
Eddie Pope
Propulsion Engineers
EPratt
Eldon Pratt
Propulsion Engineers
EProh
Elliot Proh
Propulsion Engineers
Administrator
EPallis
Domain Admins
EPutter
Egan Putter
Propulsion Engineers
ESazi
Ed Sazi
Structural Engineers
ESele
Ellen Sele
Structural Engineers
ESimons
Eric Simons
Structural Engineers
ESong
Eva Song
Structural Engineers
ESouth
Edgar South
ESwailz
Evan Swailz
Structural Engineers
ITallis
Iggy Tallis
ITechi
Ira Techi
ITei
Isabella Tei
ITeribl
Ivan Teribl
LMinacci
Liza Minacci
PTesca
Perry Tesca
President, Managers
SEammi
Sal Eammi
Eastcoast Sales
SEarly
Sage Early
Eastcoast Sales
SEcho
Sam Echo
Eastcoast Sales
SElton
Santos Elton
Eastcoast Sales
SEmm
Sarah Emm
SEpari
Sadie Epari
Eastcoast Sales
SEttol
Saul Ettol
Eastcoast Sales
SEwer
Sean Ewer
Eastcoast Sales
SExtra
Sash Extra
Eastcoast Sales
SWai
Seymore Wai
Westcoast Sales
SWall
Sergio Wall
Westcoast Sales
SWari
Seve Wari
SWassi
Seve Wassi
Westcoast Sales
SWeir
Seda Weir
Westcoast Sales
SWelles
Sema Welles
Westcoast Sales
SWest
Scott West
Westcoast Sales
SWillet
Selena Willet
Westcoast Sales
SWitt
Selma Witt
Westcoast Sales
SWong
Seiko Wong
Westcoast Sales
SWoo
Steve Woo
Westcoast Sales
Celerra ICON
Full Name
Group
epallis
Earl Pallis
engprop, mngr
engprop
engprop
engprop
engprop
engprop
engprop
engstruc
engstruc
engstruc
engstruc
engstruc, mngr
engstruc
infotech
infotech, mngr
infotech
infotech
mngr
mngr
saleseas
saleseas
saleseas
saleseas
saleseas, mngr
saleseas
saleseas
saleseas
saleseas
saleswes
saleswes
saleswes, mngr
saleswes
saleswes
saleswes
saleswes
saleswes
saleswes
saleswes
saleswes
eping
Elvin Ping
eplace
Etta Place
epope
Eddie Pope
epratt
Eldon Pratt
eproh
Elliot Proh
eputter
Egan Putter
esazi
Ed Sazi
esele
Ellen Sele
esimons
Eric Simons
esong
Eva Song
esouth
Edgar South
eswailz
Evan Swailz
itallis
Iggy Tallis
itechi
Ira Techi
itei
Isabella Tei
iteribi
Ivan Teribi
lminacci
Liza Minacci
ptesca
Perry Tesca
seammi
Sal Eammi
searly
Sage Early
secho
Sam Echo
selton
Santos Elton
semm
Sarah Emm
separi
Sadie Epari
settol
Saul Ettol
sewer
Sean Ewer
sextra
Sash Extra
swai
Seymore Wai
swall
Sergio Wall
swari
Seve Wari
swassi
Seve Wassi
sweir
Seda Weir
swelles
Sema Welles
swest
Scott West
swillet
Selena Willet
switt
Selma Witt
swong
Seiko Wong
swoo
Steve Woo
Celerra ICON
swillet:NP:1030:104:Selena Willet:/home/swillet:/bin/csh
epallis:NP:1004:101:Earl Pallis:/home/epallis:/bin/csh
swassi:NP:1037:104:Seve Wassi:/home/swassi:/bin/csh
separi:NP:1010:103:Sadi Epari:/home/separi:/bin/csh
esouth:NP:1003:102:Edgar South:/home/esouth:/bin/csh
daemon:NP:1:1::/:
swong:NP:1021:104:Seiko Wong:/home/swong:/bin/csh
sewer:NP:1036:103:Sean Ewer:/home/sewer:/bin/csh
secho:NP:1025:103:Sam Echo:/home/secho:/bin/csh
eping:NP:1031:101:Elvin Ping:/home/eping:/bin/csh
swai:NP:1038:104:Seymour Wai:/home/swai:/bin/csh
itei::1017:105:Isabella Tei:/home/itei:/bin/csh
adm:NP:4:4:Admin:/var/adm:
iteribl:NP:1022:105:Ivan Teribl:/home/iteribl:/bin/csh
ptesca:NP:1001:106:Perry Tesca:/home/ptesca:/bin/csh
nobody:NP:60001:60001:Nobody:/:
epratt:NP:1024:101:Eldon Pratt:/home/epratt:/bin/csh
eplace:NP:1014:101:Etta Place:/home/eplace:/bin/csh
swest:NP:1011:104:Scott West:/home/swest:/bin/csh
sweir:NP:1016:104:Seda Weir:/home/sweir:/bin/csh
nuucp:NP:9:9:uucp Admin:/var/spool/uucppublic:/usr/lib/uucp/uucico
esong:NP:1018:102:Eva Song:/home/esong:/bin/csh
eproh:NP:1028:101:Elliot Proh:/home/eproh:/bin/csh
root:oiOEvBA22p40s:0:1:Super-User:/:/sbin/sh
lminacci:NP:1002:106:Liza Minacci:/home/lminacci:/bin/csh
nobody4:NP:65534:65534:SunOS 4.x Nobody:/:
itallis:NP:1012:105:Iggy Tallis:/home/itallis:/bin/csh
sextra:NP:1034:103:Sash Extra:/home/sextra:/bin/csh
settol:NP:1032:103:Saul Ettol:/home/settol:/bin/csh
selton:NP:1029:103:Santos Elton:/home/selton:/bin/csh
searly:NP:1020:103:Sage Early:/home/searly:/bin/csh
listen:*LK*:37:4:Network Admin:/usr/net/nls:
itechi:NP:1007:105:Ira Techi:/home/itechi:/bin/csh
switt:NP:1033:104:Selma Witt:/home/switt:/bin/csh
swari:NP:1006:104:Seve Wari:/home/swari:/bin/csh
swall:NP:1035:104:Sergio Wall:/home/swall:/bin/csh
uucp:NP:5:5:uucp Admin:/usr/lib/uucp:
swoo:NP:1039:104:Steve Woo:/home/swoo:/bin/csh
semm:NP:1005:103:Sarah Emm:/home/semm:/bin/csh
noaccess:NP:60002:60002:No Access User:/:
swelles:NP:1026:104:Sema Welles:/home/swelles:/bin/csh
eswailz:NP:1027:102:Evan Swailz:/home/swailz:/bin/csh
esimons:NP:1013:102:Eric Simons:/home/esimons:/bin/csh
eputter:NP:1019:101:Egan Putter:/home/eputter:/bin/csh
seammi:NP:1015:103:Sal Eammi:/home/seammi:/bin/csh
esele:NP:1008:102:Ellen Sele:/home/esele:/bin/csh
esazi:NP:1023:102:Ed Sazi:/home/esazi:/bin/csh
epope:NP:1009:101:Eddie Pope:/home/epope:/bin/csh
sys:NP:3:3::/:
bin:NP:2:2::/usr/bin:
lp:NP:71:8:Line Printer Admin:/usr/spool/lp
Celerra ICON
sysadmin::14:
saleswes::104:swari,swest,swong,swelles,swillet,switt,swall,swassi,sw
ai,swoo
saleseas::103:semm,separi,seammi,searly,secho,selton,settol,sextra,se
wer
noaccess::60002:
infotech::105:itechi,itallis,itei,iteribl,
engstruc::102:esouth,esele,esimons,esong,esazi,eswailz
nogroup::65534:
engprop::101:epallis,epope,eplace,eputter,epratt,eproh,eping
nobody::60001:
daemon::12:root,daemon
staff::10:
other::1:
nuucp::9:root,nuucp
uucp::5:root,uucp
root::0:root
mngr::106:lminacci,epallis,esouth,semm,swari,itechi,ptesca
mail::6:root
tty::7:root,tty,adm
sys::3:root,bin,sys,adm
bin::2:root,bin,daemon
adm::4:root,adm,daemon
lp::8:root,lp,
Celerra ICON
IP address     Subnet mask       Broadcast      Gateway        Network        Sw port:VLAN   Info
10.127.*.11    255.255.255.224   10.127.*.31    10.127.*.30    10.127.*.0     2/43:10        UNIX
10.127.*.12    255.255.255.224   10.127.*.31    10.127.*.30    10.127.*.0     2/44:10        UNIX
10.127.*.13    255.255.255.224   10.127.*.31    10.127.*.30    10.127.*.0     2/45:10        UNIX
10.127.*.14    255.255.255.224   10.127.*.31    10.127.*.30    10.127.*.0     2/46:10        UNIX
10.127.*.15    255.255.255.224   10.127.*.31    10.127.*.30    10.127.*.0     2/47:10        UNIX
10.127.*.16    255.255.255.224   10.127.*.31    10.127.*.30    10.127.*.0     2/48:10        UNIX
10.127.*.71    255.255.255.224   10.127.*.95    10.127.*.94    10.127.*.64    2/37:30        Win2000
10.127.*.72    255.255.255.224   10.127.*.95    10.127.*.94    10.127.*.64    2/38:30        Win2000
10.127.*.73    255.255.255.224   10.127.*.95    10.127.*.94    10.127.*.64    2/39:30        Win2000
10.127.*.74    255.255.255.224   10.127.*.95    10.127.*.94    10.127.*.64    2/40:30        Win2000
10.127.*.75    255.255.255.224   10.127.*.95    10.127.*.94    10.127.*.64    2/41:30        Win2000
10.127.*.76    255.255.255.224   10.127.*.95    10.127.*.94    10.127.*.64    2/42:30        Win2000
10.127.*.110   255.255.255.224   10.127.*.127   10.127.*.126   10.127.*.96    3/25:41        Celerra 1
10.127.*.112   255.255.255.224   10.127.*.127   10.127.*.126   10.127.*.96    2/1-4:41       Celerra 1
10.127.*.113   255.255.255.224   10.127.*.127   10.127.*.126   10.127.*.96    2/5-8:41       Celerra 1
10.127.*.120   255.255.255.224   10.127.*.127   10.127.*.126   10.127.*.96    3/27:41        Celerra 2
10.127.*.122   255.255.255.224   10.127.*.127   10.127.*.126   10.127.*.96    2/9-12:41      Celerra 2
10.127.*.123   255.255.255.224   10.127.*.127   10.127.*.126   10.127.*.96    2/13-16:41     Celerra 2
10.127.*.130   255.255.255.224   10.127.*.159   10.127.*.158   10.127.*.127   3/29:42        Celerra 3
10.127.*.132   255.255.255.224   10.127.*.159   10.127.*.158   10.127.*.127   2/17-20:42     Celerra 3
10.127.*.133   255.255.255.224   10.127.*.159   10.127.*.158   10.127.*.127   2/21-24:42     Celerra 3
10.127.*.140   255.255.255.224   10.127.*.159   10.127.*.158   10.127.*.127   3/26:42        Celerra 4
10.127.*.142   255.255.255.224   10.127.*.159   10.127.*.158   10.127.*.127   3/1-4:42       Celerra 4
10.127.*.143   255.255.255.224   10.127.*.159   10.127.*.158   10.127.*.127   3/5-8:42       Celerra 4
10.127.*.150   255.255.255.224   10.127.*.159   10.127.*.158   10.127.*.127   3/28:42        Celerra 5
10.127.*.152   255.255.255.224   10.127.*.159   10.127.*.158   10.127.*.127   3/9-12:42      Celerra 5
10.127.*.153   255.255.255.224   10.127.*.159   10.127.*.158   10.127.*.127   3/13-16:42     Celerra 5
10.127.*.100   255.255.255.224   10.127.*.127   10.127.*.126   10.127.*.96    3/30:41        Celerra 6
10.127.*.102   255.255.255.224   10.127.*.127   10.127.*.126   10.127.*.96    3/17-20:41     Celerra 6
10.127.*.103   255.255.255.224   10.127.*.127   10.127.*.126   10.127.*.96    3/21-24:41     Celerra 6
10.127.*.161   255.255.255.224   10.127.*.191   10.127.*.190   10.127.*.160   2/27:43        Root W2k
10.127.*.162   255.255.255.224   10.127.*.191   10.127.*.190   10.127.*.160   2/28:43        Corp W2K
10.127.*.163   255.255.255.224   10.127.*.191   10.127.*.190   10.127.*.160   2/29:43        NIS
10.127.*.161   255.255.255.224   10.127.*.254  255.255.255.0   10.127.*.253  255.255.255.0   On hm-1   3/32:Trunk
Celerra ICON
Appendix F: Bibliography
Installing Celerra iSCSI Host Components
P/N 300-001-993, Rev A01, Version 5.4, April, 2005
Configuring Virtual Data Movers
P/N 300-001-978, Rev A01, Version 5.4, April, 2005
Using SnapSure on Celerra
P/N 300-002-030, Rev A01, Version 5.4, April, 2005
Using FTP on Celerra Network Server
P/N 300-002-019, Rev A01, Version 5.4, April, 2005
Using Windows Administrative Tools with Celerra
P/N 300-001-985, Rev A01, Version 5.4, April, 2005
Configuring External Usermapper for Celerra
P/N 300-002-023, Rev A01, Version 5.4, April, 2005
Configuring and Managing Celerra Networking
P/N 300-002-016607, Rev A01, Version 5.4, April, 2005
Celerra File Extension Filtering
P/N 300-001-972, Rev A01, Version 5.4, April, 2005
Implementing Automatic Volume Management with Celerra
P/N 300-002-078, Rev A01, Version 5.4, April, 2005
Using Quotas on Celerra
P/N 300-002-029, Rev A01, Version 5.4, April, 2005
Using Celerra Antivirus Agent
P/N 300-001-991, Rev A01, Version 5.4, April, 2005
Using Celerra Replicator
P/N 300-002-035, Rev A01, Version 5.4, April, 2005
Managing NFS Access to the Celerra Network Server
P/N 300-002-036, Rev A01, Version 5.4, April, 2005
Configuring and Managing Celerra Network High Availability
P/N 300-002-015, Rev A01, Version 5.4, April, 2005
10
NAS Management
Appendix G
[Appendix G diagram: switch port and VLAN layout - ports for cel1dm2/cel1dm3 through cel6dm2/cel6dm3 distributed across VLANs 10, 20, 30, 41, 42, and 43, along with W2K, NIS, and Sun host ports and trunk uplinks]
Router Configuration
Interface   VLAN   IP Address
0/1         n/a    assigned
0/0         1      10.127.*.254
0/0.10      10     10.127.*.30
0/0.20      20     10.127.*.62
0/0.30      30     10.127.*.94
0/0.41      41     10.127.*.126
0/0.42      42     10.127.*.158
0/0.43      43     10.127.*.190
IP Addressing
Subnet A - VLAN 10: Network 10.127.*.0, Gateway 10.127.*.30, Broadcast 10.127.*.31
  sun1 10.127.*.11, sun2 10.127.*.12, sun3 10.127.*.13, sun4 10.127.*.14, sun5 10.127.*.15, sun6 10.127.*.16
Subnet B - VLAN 20: Network 10.127.*.32, Gateway 10.127.*.62, Broadcast 10.127.*.63
  Not in use; DNS: 10.127.*.161
Subnet C - VLAN 30: Network 10.127.*.64, Gateway 10.127.*.94, Broadcast 10.127.*.95
  w2k1 10.127.*.71, w2k2 10.127.*.72, w2k3 10.127.*.73, w2k4 10.127.*.74, w2k5 10.127.*.75, w2k6 10.127.*.76
Subnet D - VLAN 41: Network 10.127.*.96, Gateway 10.127.*.126, Broadcast 10.127.*.127
  cel6cs0 10.127.*.100, cel6dm2 10.127.*.102, cel1cs0 10.127.*.110, cel1dm2 10.127.*.112, cel2cs0 10.127.*.120, cel2dm2 10.127.*.122
Subnet E - VLAN 42: Network 10.127.*.127, Gateway 10.127.*.158, Broadcast 10.127.*.159
  cel3cs0 10.127.*.130, cel3dm2 10.127.*.132, cel4cs0 10.127.*.140, cel4dm2 10.127.*.142, cel5cs0 10.127.*.150, cel5dm2 10.127.*.152
Subnet F - VLAN 43: Network 10.127.*.160, Gateway 10.127.*.190, Broadcast 10.127.*.191
  hm-1 10.127.*.161, hm-dc2 10.127.*.162, nis-master 10.127.*.163; NIS: 10.127.*.163
11
NAS Management
Appendix G
12