
Celerra ICON
Celerra Training for Engineering

Course Introduction


Revision History

Revision Number   Course Date     Revisions
1.0               February 2006   Complete



EMC believes the information in this publication is accurate as of its publication date. The information is subject to change
without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO
REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS
PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR
FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
AutoIS , DG, E-Infostructure, EMC, EMC2, CLARalert, CLARiiON, HighRoad, Navisphere, PowerPath, ResourcePak,
SRDF, Symmetrix, The EMC Effect, VisualSAN, and WideSky are registered trademarks, and Access Logix, ATAtude,
Automated Resource Manager, AVALONidm, C-Clip, CacheStorm, Celerra, Celerra Replicator, Centera, CentraStar,
CLARevent, Connectrix, CopyCross, CopyPoint, CrosStor, Direct Matrix, Direct Matrix Architecture, EDM, E-Lab, EMC
Automated Networked Storage, EMC ControlCenter, EMC Developers Program, EMC Enterprise Storage, EMC Enterprise
Storage Network, EMCLink, EMC OnCourse, EMC Proven, Enginuity, FarPoint, FLARE, GeoSpan, InfoMover,
MirrorView, OnAlert, OpenScale, PowerVolume, RepliCare, SafeLine, SAN Manager, SDMS, SnapSure, SnapView,
SnapView/IP, SRDF, StorageScope, SymmAPI, SymmEnabler, TimeFinder, Universal Data Tone, where information lives
are trademarks of EMC Corporation.
All other trademarks used herein are the property of their respective owners.


Prerequisites
- Successful completion of the following EMC courses:
  - EMC Technology Foundations (ETF), or the NAS Foundations self-study module from that course
  - Celerra Features and Functionality (Knowledgelink)
  - One or more of the following NAS hardware platform self-studies, chosen by relevance (Knowledgelink):
    - CNS Architectural Overview
    - NS Series Architectural Overview
    - NSX Architectural Overview
- Basic knowledge of:
  - UNIX administration
  - Microsoft Windows 2000/2003
  - TCP/IP networking
  - Storage systems concepts

Course Objectives
- Describe the functional components and operations of the major building blocks that make up a Celerra NAS solution
- Install the operating system and NAS software on a Control Station and the DART operating environment on a Data Mover
- Configure network interfaces
- Configure a Celerra Data Mover for high availability:
  - Back-end
  - Data Mover failover
  - Network high availability
- Describe the storage configuration requirements for both a CLARiiON and a Symmetrix back-end
- Configure and manage Celerra volumes and file systems
- Export Celerra file systems for NFS and CIFS access
- Manage CIFS in both Windows-only and mixed environments
- Implement and manage SnapSure and Celerra Replicator
- Implement the Celerra iSCSI target

Agenda Day 1
- Class Introduction
- Celerra Overview
- Hardware Overview
- Software Installation Concepts
- Planning, Installing, and Configuring a Gateway System
- Installation Lab


Agenda Day 2
- Celerra Management and Support
  - Command Line Interface
  - Celerra Manager
- Configuring Network Interfaces
- Data Mover Failover
- Network High Availability
- Lab:
  - Upgrading NAS software
  - Configuring network interfaces
  - Configuring Data Mover failover
  - Test and verify

Agenda Day 3
- Back-end Storage Configuration
  - Review of CLARiiON storage concepts
  - Symmetrix IMPL.bin file requirements
- Configuring Celerra Volumes and File Systems
- Exporting File Systems for NFS Access
- Introduction to CIFS and the Standalone CIFS Server
- Lab:
  - Configuring volumes and file systems
  - Exporting file systems for NFS access
  - Test and verify Data Mover failover with NFS clients
  - Standalone CIFS server configuration

Agenda Day 4
- User Mapping in a CIFS Environment
- Configuring CIFS Servers on the Data Mover
- File System Permissions
- Virtual Data Movers
- Lab:
  - Usermapper
  - CIFS configuration
  - Windows integration
  - VDMs


Agenda Day 5
- SnapSure Concepts and Configuration
- Celerra Replicator Overview
- iSCSI Concepts and Implementation
- Lab:
  - SnapSure implementation
  - Local Celerra replication
  - iSCSI implementation with a Windows host


Celerra ICON
Celerra Training for Engineering

Celerra Overview


Revision History

Revision Number   Course Date     Revisions
1.0               February 2006   Complete
1.2               May 2006        Updates




Module Objectives
- Describe the current Celerra NAS product offering
- Locate resources used in setting up and maintaining a Celerra:
  - Documentation CD
  - Support Matrix
  - NAS Engineering websites
- Describe the environment that is used for the hands-on lab exercises


EMC NAS Vision: Delivering on ILM
- Infinite Scalability
  - Massive consolidation workloads
  - Scalable file services for grids
  - Data service continuity
- Optimized Data Placement
  - Object-level ILM
  - Filesystem virtualization
  - System virtualization
- Global Accessibility
  - Unified name space
  - Wide-area filesystems
- Centralized Management
  - Information security
  - Unified management


EMC NAS Platforms

NS500/NS350/NS700 (Integrated)
- High availability; one or two Data Movers
- Integrated NAS on CLARiiON, running DART
- NS500/350: one or two Data Movers; 8 or 16 TB usable Fibre Channel/ATA capacity; four or eight Gigabit Ethernet network ports (copper); two Fibre Channel HBAs per Data Mover; integrated CLARiiON
- NS700: one or two Data Movers; 16 or 32 TB usable Fibre Channel/ATA capacity; 8 or 16 Gigabit Ethernet network ports (copper/optical); two Fibre Channel HBAs per Data Mover; integrated CLARiiON

NS704 (Integrated)
- Advanced clustering; four Data Movers
- Integrated NAS on CLARiiON, running DART
- 48 TB usable Fibre Channel/ATA capacity; 32 Gigabit Ethernet network ports (24 copper, 8 optical); integrated CLARiiON

NS500G / NS700G (Gateway)
- High availability; one or two Data Movers
- NAS gateway to SAN: CLARiiON or Symmetrix, running DART
- NS500/350G: one or two Data Movers; 8 or 16 TB usable Fibre Channel/ATA capacity; four or eight Gigabit Ethernet network ports (copper); two or four Fibre Channel HBAs; CLARiiON or Symmetrix storage
- NS700G: one or two Data Movers; 16 or 32 TB usable Fibre Channel/ATA capacity; 8 or 16 Gigabit Ethernet network ports (copper/optical); two or four Fibre Channel HBAs; CLARiiON or Symmetrix storage

NS704G (Gateway)
- Advanced clustering; four Data Movers
- NAS gateway to SAN: CLARiiON or Symmetrix, running DART
- 48 TB usable Fibre Channel/ATA capacity; 32 Gigabit Ethernet network ports (24 copper, 8 optical); eight Fibre Channel HBAs; CLARiiON or Symmetrix storage

Celerra NSX (Gateway)
- Advanced clustering; four to eight X-Blades
- NAS gateway to SAN: CLARiiON or Symmetrix, running DART
- 112 TB usable Fibre Channel/ATA capacity; 64 Gigabit Ethernet ports (48 copper, 16 optical); 16 Fibre Channel HBAs; CLARiiON or Symmetrix storage

All platforms: simple Web-based management.



EMC offers the broadest range of NAS platforms. In addition to the platforms above, a legacy 14-Data Mover CNS/CFS configuration was available in the past. While that hardware was considerably different, it ran the same DART operating system as the current offerings. For a short time, EMC also offered the NetWin 110/200, a low-end configuration based on Windows Storage Server 2003. Note: the NS600 is no longer available.


Documentation


http://powerlink.emc.com/km/appmanager/km/secureDesktop?_nfpb=true&_pageLabel=servicesDocLibPg&internalId=0b01406680024e3f&_irrt=true


EMC NAS Interoperability Matrix


http://www.emc.com/interoperability/matrices/nas_interoperability_matrix.pdf


NAS Engineering Home


http://naseng/default.html


NAS Support


Lab Scenario for Hurricane Marine, LTD

- Real-world simulation
- Preconfigured:
  - NIS
  - W2K and UNIX user accounts
  - DNS
- Multiple operating systems:
  - Sun
  - Win2k
- Managed Ethernet switches:
  - VLANs and segregated network
  - High availability
- Not optimized for performance



As you proceed through this course, you will find it useful to understand how the Celerra lab is
configured. In the lab, you will work for a fictitious company, Hurricane Marine, LTD, a manufacturer
of yachts.


UNIX Environment

NIS Domain: hmarine.com
NIS Server: nis-master (10.127.*.163)

UNIX Clients:
  sun1  10.127.*.11
  sun2  10.127.*.12
  sun3  10.127.*.13
  sun4  10.127.*.14
  sun5  10.127.*.15
  sun6  10.127.*.16

UNIX environment for Hurricane Marine, LTD

Hurricane Marine's UNIX network is supported by one NIS master server, whose host name is nis-master. Your instructor will play the role of the administrator and will keep the password to nis-master confidential. You, on the other hand, will log in to your UNIX workstations as various NIS users, as well as integrate the Celerra with NIS.
For a list of NIS users and groups, see Appendix D, Hurricane Marine's UNIX Users and Group Memberships.


Windows 2000 Network

Root Domain: hmarine.com
  Domain Controller: hm-1.hmarine.com

Sub Domain: corp.hmarine.com
  Domain Controller: hm-dc2.hmarine.com
  Computer Accounts: w2k1, w2k2, w2k3, w2k4, and the Data Movers
  All user accounts


Windows 2000 network for Hurricane Marine, LTD


Hurricane Marine will soon be implementing a Microsoft Windows 2000 network in Native Mode.
They will need to test Celerra functionality to support Active Directory.
The Windows 2000 network consists of two domains. The hmarine.com domain is the root of the
forest, while corp.hmarine.com is a subdomain of the root. While the root domain is present solely for
administrative purposes at this time, corp.hmarine.com will hold containers for all users, groups, and
computer accounts.


Network Configuration
- UNIX and Windows 2000 clients
- DNS and NIS
- Five separate, routed TCP/IP subnets connected by a router
- Multiple VLANs

Subnets:
  Subnet A  10.127.*.0    UNIX clients
  Subnet C  10.127.*.64   Windows 2000 clients
  Subnet D  10.127.*.96   EMC Celerra/Symmetrix
  Subnet E  10.127.*.128  EMC Celerra/Symmetrix
  Subnet F  10.127.*.160  NIS and Windows 2000 servers

Network configuration for Hurricane Marine, LTD

Some important features of Hurricane Marine's network are as follows:
- The work performed by different employees presents differing needs. For example, while the sales staff all use Microsoft Windows applications, the engineering group requires UNIX workstations.
- The security for these two environments is managed separately. The UNIX network uses NIS to manage security, while the Microsoft network uses a Windows 2000 (Native Mode) domain for security.
- Hurricane Marine's network is currently divided into five networks connected by a router, for security reasons.
- DNS has been implemented at this site for host name resolution.

Terminology
NIS: Network Information Service
DNS: Domain Name System
VLAN: Virtual Local Area Network



Celerra ICON
Celerra Training for Engineering

Hardware Review


Revision History

Rev Number   Course Date     Revisions
1.0          February 2006   Complete


Celerra Hardware Review

Upon completion of this module, you will be able to:
- Identify the location of key components and interconnections for the NS500/350, NS600/700, and NSX models:
  - Data Mover
  - Control Station
  - Storage Processor (on CLARiiON)
  - Private LAN Ethernet switch
  - Call Home modem
- Explain the difference between NS Integrated and Gateway systems, and the difference between a direct-connected and a fabric-connected gateway

The objectives for this module are shown here.


Purpose of the Celerra Network Server

- NAS provides client access to storage via:
  - A storage subsystem (disks): CLARiiON or Symmetrix
  - A file system layer
  - Network services
- The Celerra functions as a file server
- Supported protocols: NFS, CIFS, FTP/TFTP, and iSCSI

(Diagram: the Celerra Data Mover presents file systems as NFS exports and CIFS shares, and block storage as iSCSI targets, over a TCP/IP network. Windows clients use mapped drives or share access, UNIX clients use NFS mounts, FTP clients connect directly, and iSCSI initiators connect to the iSCSI target.)

Concept of Network Attached Storage

Network Attached Storage (NAS) provides clients with access to disk storage over an IP network. This is done by creating and managing file systems, and providing at least one network service to publish those file systems to the network.
Purpose of the Celerra Network Server
The Celerra Network Server functions as a highly available NAS file server in a TCP/IP network. The Celerra provides the services of a file server via one or more of the following protocols:
- Network File System (NFS)
- Common Internet File System (CIFS)
- File Transfer Protocol (FTP) and Trivial FTP (TFTP)
- Internet Small Computer System Interface (iSCSI)
Client access
Network clients can access the Celerra's file systems via several methods. Windows clients typically access CIFS shares via a mapped network drive or through Network Neighborhood. UNIX clients usually gain access via an NFS mount. Windows and UNIX clients can also get access over FTP, TFTP, and/or iSCSI services.
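
As a concrete but hedged illustration of these access methods, the commands below show how clients might attach to a hypothetical Data Mover named celerra_dm2 exporting a file system fs1; the names are examples only, and exact mount options vary by client operating system:

    # UNIX/Linux client: mount the NFS export (Solaris would use "mount -F nfs")
    mount -t nfs celerra_dm2:/fs1 /mnt/fs1

    # Windows client: map drive Z: to the equivalent CIFS share
    net use Z: \\celerra_dm2\fs1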


Three Main Components

- Data Mover(s)
  - DART operating system
  - File server providing the NFS, CIFS, FTP, and iSCSI services
  - Highly reliable hardware, configurable for high availability
  - Connects to the storage subsystem over Fibre Channel and to clients over the production network
- Control Station
  - Linux operating system
  - Dedicated management host
  - Configuration and monitoring
- Storage Subsystem
  - Completely separate CLARiiON or Symmetrix
  - May be dedicated or shared
  - Contains all production data and the complete Celerra configuration database

Data Movers
A Celerra system can contain one or more individual file servers running EMC's proprietary DART operating system. Each of these file servers is called a Data Mover. One or more Data Movers in a Celerra can act as a hot spare, or standby, for the other production Data Movers, providing high availability.
Control Station
The Celerra also provides one management host, the Control Station, which runs the Linux operating system and Network Attached Storage (NAS) management services (e.g., Data Mover configuration and monitoring software). A second Control Station may also be present for redundancy.
Separate Storage Subsystem
All production data and the complete configuration database of the Celerra are stored on a separate storage subsystem. Data Movers contain no hard drives.


Two Types of Celerra Configurations

Two main types of configuration:
- Integrated Storage
  - Storage subsystem is dedicated to the Celerra NS
  - Celerra is directly connected to the storage array
  - Storage array must be a CLARiiON (without AccessLogix)
- Gateway Storage
  - Storage subsystem can also provide storage to other hosts
  - Supports Symmetrix and/or CLARiiON (with AccessLogix)

Two Types of Celerra NS Configurations


A Celerra configuration can be classified as one of two types: Integrated or Gateway.
Celerra Integrated
In an integrated configuration, the entire disk array is dedicated to the Celerra Network Server. No other hosts
can utilize any of the storage. The Celerra Data Movers are directly connected to the storage subsystem.
The Celerra Integrated configuration supports only a CLARiiON storage subsystem.
Celerra Gateway
In the gateway configuration, the storage subsystem can be used to provide storage to other hosts in addition to
the Celerra Network Server.
A Celerra Gateway can use Symmetrix and/or CLARiiON for the storage subsystem.


Celerra NS Integrated Installation Methods

- Factory-installed
  - Setup mostly completed at the factory
  - Init Wizard provides the basic network configuration for Linux on the Control Station
- Field-installed
  - Must connect cables
  - May require overwriting of the factory image
  - Procedure furnished by Celerra Technical Support

(Diagram: a dedicated, CLARiiON-only storage subsystem connects to the Celerra Data Movers over direct Fibre Channel; the Control Station manages the system.)

Factory-installed
When the Celerra NS Integrated arrives from the factory, the Celerra software is pre-loaded. When the system is powered on, a simple initialization wizard runs, providing the opportunity to enter site-specific network configuration information for the Linux Control Station.
Field-installed
The Celerra NS Integrated can sometimes require that you manually perform the installation. When the manual method of installation is required (e.g., the factory setup is flawed, or the system is ordered without a cabinet), the original factory image, if present, must be overwritten. This involves CLARiiON clean-up procedures that will be furnished by Celerra Technical Support when needed.


Celerra NS Gateway, Direct-Connected

- The Data Mover connects directly to a CLARiiON over Fibre Channel
  - Two ports on each CLARiiON Storage Processor are dedicated to the Celerra
  - Symmetrix is not supported in this configuration
- Additional hosts (e.g., Sun, Linux, Microsoft) can attach to the unused ports on the CLARiiON, directly or through an FC fabric

Direct-connected Celerra NS Gateway configurations use a direct Fibre Channel connection to the CLARiiON
storage subsystem.
The CLARiiON may also be used to provide storage to other hosts.


Celerra NS Gateway, Fabric-Connected

- The Celerra connects to storage through Fibre Channel switch(es)
  - CLARiiON and/or Symmetrix storage subsystem
  - The only configuration that supports Symmetrix
- Other hosts (e.g., Sun, Linux, Microsoft) can share the storage system through the fabric

Fabric-connected Celerra NS Gateway configurations use a SAN Fibre Channel connection to the CLARiiON and/or Symmetrix storage subsystem.
The fabric-connected gateway is the only Celerra NS configuration that supports using a Symmetrix storage array.
Using a fabric-connected configuration allows wider utilization of the CLARiiON Storage Processors' FE (front-end) ports.
The storage array may also be used to provide storage to other hosts.


General Model Type Marketing Designations

- Models ending with a 0: Integrated only; two Data Movers
- Models ending with an I: the I stands for Integrated
- Models ending with an S: single Data Mover; upgradeable; Integrated or Gateway
- Models ending with a G: the G stands for Gateway; at least two Data Movers
- Models ending with GS: single Data Mover, Gateway-connected
- The NSX prefix: the NSX bladed series; based on the latest hardware architecture; Gateway configuration only

S Models
The S stands for single Data Mover model. These systems can typically be upgraded with an additional Data Mover; if you upgrade a single Data Mover NS Series device, you would no longer refer to it as an S. Having only one Data Mover does not restrict the installation type or deployment method: an S series device can be deployed either as an Integrated system or as a Gateway system.
0 Models
The 0 denotes an integrated system. These systems contain two Data Movers and are deployed as Integrated systems.
G Models
The G stands for Gateway. The Gateway can be either direct- or fabric-attached.
GS Models
This represents a combination of the G and S designations.
I Models
The I stands for Integrated.
NSX Prefix
This represents the NSX bladed series.


Terminology Clarification: Back-end and Front-end

For the Symmetrix or CLARiiON storage system:
- Back-end: the physical disks in the DAEs
- Front-end: the connected hosts (direct- or fabric-connected), including the Celerra

For the Celerra Data Movers and Control Station:
- Back-end: the storage subsystem (Symmetrix and/or CLARiiON, reached over the SAN)
- Front-end: the NAS clients on the IP network

It is important to understand that the terms "back-end" and "front-end" are relative to the component being discussed.
Storage System
For the CLARiiON SPs, the back-end refers to the disk array enclosures (DAEs) to which they are connected via Fibre Channel Arbitrated Loop, while the front-end refers to the Fibre Channel connections to the hosts (possibly via a Fibre Channel switch). With a Symmetrix, the front-end is the FA director and port that connect to host systems, and the back-end is the DA (Disk Adapter) director that connects to the physical drive modules.
Celerra Data Movers and Control Station
For the components of the Celerra Network Server, the back-end refers to the storage subsystem (i.e., the CLARiiON and/or Symmetrix), while the front-end refers to the NAS clients in the production TCP/IP network.


Data Mover High Availability

- Provided by the Standby Data Mover option
  - Requires two or more Data Movers
- When a Data Mover failure occurs:
  - The Control Station initiates failover, triggered by a communications failure between the Control Station and the Data Mover
  - The Standby takes over and provides the services of the failed Data Mover
  - Little or no interruption
- Failover policies: Automatic, Retry, Manual
- Installation scripts configure this automatically
  - If two or more Data Movers are present at install, one is configured as a Standby with the Auto policy

Data Mover high availability

In Celerra systems with two or more Data Movers, Data Mover failover can be configured to provide high availability in the event of a Data Mover failure. In these configurations, one or more Data Movers serve as Standby Data Movers. The production Data Mover is referred to as a Primary Data Mover, or a type NAS Data Mover.
Failover policies
When Data Mover failover is configured, a predetermined failover policy is specified. This policy determines what action is required for the failover to take place in the event that the Primary Data Mover goes down. The policies are Automatic, Retry, and Manual.
- Automatic policy: failover to the Standby is enacted immediately when a failure of the Primary Data Mover is detected.
- Retry policy: when failure of the Primary is detected, the Control Station first tries to reboot the Data Mover; if this does not resolve the problem, it then enacts failover to the Standby.
- Manual policy: failover occurs only through administrative action.
Data Mover failover and Celerra installation
In Celerra systems with more than one Data Mover, the Celerra installation script will automatically configure one Data Mover as a Standby and the remainder as Primaries.
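
As a hedged sketch of how this is typically driven from the Control Station CLI: server_standby is the Celerra command used to manage standby relationships, but the exact flags shown here are illustrative and should be checked against the man pages for your NAS code release; server_2 and server_3 are example Data Mover names.

    # Configure server_3 as a standby for server_2 with the automatic failover policy
    server_standby server_2 -create mover=server_3 -policy auto

    # Manually fail server_2 over to its standby (for example, during testing)
    server_standby server_2 -activate mover

    # After the problem is resolved, restore the original Primary
    server_standby server_2 -restore mover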


How Data Mover Failover Works

- Normal operations
  - The Control Station constantly monitors all Data Movers
- When a Primary Data Mover fails:
  1. The Control Station instructs the Standby to take over
  2. The Standby assumes the identity of the failed Data Mover and provides all production services to clients
  3. The original Primary goes into a failed state
- After the problem is resolved
  - The administrator manually initiates restoration of the original Primary

How Data Mover failover works

During normal operation, the Celerra Control Station continually monitors the status of all Data Movers. If a Primary Data Mover experiences a failure, the Control Station instructs the Standby Data Mover to take over as Primary, while forcing the original Primary, if it is still running, into a failed state.
Once failover is enacted, the Standby Data Mover becomes Primary and assumes the entire identity of the failed Data Mover. In most cases, this process should have little or no noticeable effect on user access to data.


Control Station Failover

- Optional redundant Control Stations
- The primary and standby Control Stations monitor each other using a heartbeat protocol over a dual internal Ethernet network
- The standby Control Station monitors the primary CS
- If a failure is detected, the standby takes control
- The standby will initiate a Call Home

(Diagram: CS0 is the primary Control Station and CS1 the standby, alongside one standby and three primary Data Movers.)

Since data flow is separated from control flow, you can lose the Control Station and still access data through the Data Movers, but you cannot manage the system until Control Station function is re-established. EMC provides Control Station failover as an option.
Celerra supports up to two Control Stations per Celerra cabinet. When running a configuration with redundant Control Stations, the standby Control Station monitors the primary Control Station's heartbeat over the redundant internal network. If a failure is detected, the standby Control Station takes control of the Celerra and mounts the /nas file system.
If a Control Station fails, individual Data Movers continue to respond to user requests and users' access to data is uninterrupted. Under normal circumstances, after the primary Control Station has failed over, you continue to use the secondary Control Station as the primary. When the Control Stations are next rebooted, either directly or as a result of a power-down and restart cycle, the first Control Station to start is restored as the primary.
A Control Station failover will initiate a Call Home.


Data Mover Storage Connections (Back-end)

- Every Data Mover has redundant Fibre Channel connections to the back-end
  - Gateway models require the installer to connect the cables
- Redundant paths to the storage systems
  - Direct connect to SPs or FAs, or through Fibre Channel switches/fabrics in gateway configurations
  - Designed for no single point of failure in the back-end I/O path

Data Mover back-end connections

Every Celerra Data Mover has two physical Fibre Channel connections to the back-end storage. This provides a redundant path, primarily for high availability.
When connecting to a CLARiiON array, these connections should lead to separate Storage Processors (SPs). When the Celerra is a fabric-connected Gateway, ideally these connections would go through separate FC switches and fabrics.
When connecting to a Symmetrix, these connections should lead to separate FAs via separate switches and fabrics.
Installer actions
You manually cable the connections. You may also be required to mount the Celerra components into an EMC or third-party rack system.
NS Integrated models should come from the factory with the connections in place, requiring you only to verify them.
*Note: In some instances the Celerra NS Integrated model may also be shipped for mounting in an existing rack. In these cases you would be required to make the necessary connections.


Data Mover Connection to the Production Data Network

- Each Data Mover has a number of Ethernet connections to the production network
- Quantity and type are model specific
- Types:
  - Copper 10/100/1000 Mbps (cge)
  - Optical Gigabit Ethernet (fge)
- Connections are made to the production Ethernet switch (e.g., ports cge0, cge1, cge2, fge0)

Each Data Mover provides several physical connections to the production Ethernet data network.
Ethernet port types
The exact number of these connections depends on the Data Mover model. Typically, there are two types of Ethernet ports that may be found on a Data Mover: copper 10/100/1000 Mbps and optical Gigabit Ethernet. The copper ports have hardware names beginning with "cge", followed by the ordinal number of the port (e.g., cge0, cge1, cge2). The optical, or fiber, Ethernet ports have hardware names beginning with "fge", followed by the ordinal number of the port (e.g., fge0, fge1).
Making the connections
These production Ethernet ports require manual connection to the production Ethernet switch. The Ethernet cables for these connections are NOT included with the Celerra Network Server.
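
Once the ports are cabled, interfaces are configured from the Control Station. The sketch below uses the real server_ifconfig command, but the interface name, addresses, and exact option spelling are illustrative assumptions and should be verified against the command reference for your release.

    # Create an IP interface named cge0_1 on port cge0 of Data Mover server_2
    server_ifconfig server_2 -create -Device cge0 -name cge0_1 -protocol IP 10.127.57.100 255.255.255.0 10.127.57.255

    # Display the new interface to verify it
    server_ifconfig server_2 cge0_1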


Control Station Internal Connection to the Management Path

- The Control Station monitors and manages the Data Movers via a private Ethernet switch included with the Celerra NS
  - NEVER connect any other network device or host to the private Ethernet switch
- The NS also uses serial connections to provide redundancy
  - 1-to-4 Y-cable
  - Supports up to 4 Data Movers
- The NSX instead uses a second Ethernet network and a System Management switch

Ethernet management path

The Celerra Control Station communicates with the Data Movers primarily via a private LAN (physically separate from the production data network) that serves as a management path. The Celerra NS includes a small Ethernet switch to facilitate this communication. NS Integrated models should come pre-cabled from the factory, requiring you only to verify the connections.* NS Gateway models require you to make these connections. The Ethernet cables are included with all Celerra NS models.
Serial management path via 1-to-4 Y-cable
In addition to this management Ethernet path, the NS also uses a serial connection between the Control Station and the Data Movers. This provides minimal management functionality in the event that the Ethernet path fails. There is only one serial connection on the Control Station for this communication. The serial cable used is a 1-to-4 Y-cable, which allows up to four Data Movers to communicate via this connection. The ends of the Y-cable are labeled S1 through S4. S1 should connect to the first Data Mover (server_2); if the system has two Data Movers, S2 should be used to make the next connection, and so forth.
In Celerra NS Integrated systems, communication with the CLARiiON Storage Processors (SPs) is also facilitated via the private Ethernet switch. This switch must NEVER be connected to any other device or host.


Control Station Administrative Connection to the Production Network

- The Control Station has one Ethernet connection to the management network
  - For administrative purposes ONLY; the Control Station provides no external services
  - This is the administrator's path to the Control Station
- Ethernet type: 10/100 Mbps

Control Station connection for administration

The Control Station has one physical connection to the production Ethernet network. This connection provides the means for the Celerra administrator to connect to the Control Station's CLI or the Celerra Manager GUI for management of the Celerra Network Server.
Making the connections
The Control Station connection to the production network requires that you manually connect it to the production Ethernet switch. An Ethernet cable for this connection is included with the Celerra Network Server.
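
For illustration, a typical (hedged) administrative session over this connection; the hostname and the nasadmin account are common defaults but should be treated as assumptions for your site:

    # Reach the Control Station CLI over the production network and list the Data Movers
    ssh nasadmin@cs0.hmarine.com
    nas_server -list

    # The Celerra Manager GUI is reached with a browser pointed at the same address,
    # e.g. https://cs0.hmarine.com/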


Control Station Storage Communication (Back-end)

- Control Stations do not have an FC connection to the back-end storage subsystem
- All Control Station communication to the back-end passes through a Data Mover
  - Network Block Services (NBS)
  - NAS management functions therefore require an operational Data Mover
- The Control Station also has IP connectivity to the storage for configuration and monitoring

Control Station back-end communication

Celerra NS Control Stations have no Fibre Channel connection to the back-end storage in any of the NS models. All NS Control Station communication with the back-end is performed by passing it through a Data Mover. Therefore, an operational Data Mover is required in order for a Control Station to perform virtually all of its NAS management functions.
Note: The Control Station on legacy CNS/CFS systems had a direct connection to the Control LUNs.


Storage Processor Back-end Connections

- Connects the Storage Processors (SPA and SPB) to the Disk Array Enclosures
- Consult your CLARiiON documentation for the number of back-end connections for your model

All Celerra NS Integrated systems, and some Gateway systems, use CLARiiON disk arrays for their storage subsystem. A CLARiiON Storage Processor requires two connections to the back-end disk array.
For more information on EMC CLARiiON, please refer to CLARiiON documentation and training.


Storage Processor Front-end Port Connection

- Connects the Storage Processor to the Data Movers
- Minimum of two connections, from each SP to each Data Mover
  - Direct cabling, or
  - Fabric connections with switch zoning
- Note: Integrated systems do not have FC front-end ports; they use the AUX0/BE2 and AUX1/BE3 copper-based connections

(Diagram: back-end ports BE0 and BE1 on Data Mover 2 and Data Mover 3 are distributed across front-end ports FE0 and FE1 on SPA and SPB.)

Integrated and Direct-Connected Gateway

Typically, the Storage Processor front-end ports (FE0 and FE1) from each SP (SPA and SPB) are distributed across different Data Movers and different back-end ports (BE0 and BE1) on each Data Mover in NS Integrated and Direct-connected Gateway models.
Fabric-Connected Gateway
In a Fabric-connected Gateway, the same principle is accomplished via connections to the FC fabrics and zoning.
NOTE:
In the example above, the FE port designations are examples only. The connection requirement illustrated is that BE0 on each Data Mover must connect to SPA, but not necessarily to FE0.


SP Connection to the Management LAN

- NS Integrated: connect the Ethernet port on SPA and SPB to the private LAN switch
- NS Gateways: connect the Ethernet port on SPA and SPB to a production/administrative LAN switch

Each CLARiiON Storage Processor has an Ethernet port to facilitate management of the array.
Celerra NS Integrated systems
When the CLARiiON is part of a Celerra NS Integrated system, both SPA and SPB should come from the factory with these Ethernet ports connected to the Celerra's private LAN Ethernet switch.*
Celerra NS Gateway systems
When the CLARiiON is being used by a Celerra NS Gateway system, the SPs must be connected to the production/administrative Ethernet switch so that the administrator can connect.
*Note: In some instances, the Celerra NS Integrated model may also be shipped for mounting in an existing rack. In these cases, you would be required to make the necessary connections.


Celerra NS Modem Connections

- Serial connection from the modem to the Control Station
  - Cable included; do not use the serial cable that ships with the modem
- Analog phone line
- The CLARiiON management station may also have a Call Home modem

Modem Serial Connection


The modem in the Celerra NS has a serial port for connection to the Control Station. When making this
connection, use the serial cable that comes with the NS. Do not use the serial cable that came with the modem.
Phone Line
An analog phone line must also be connected to the modem. This cable is not included with the Celerra NS.
NOTE:
Each storage subsystem will also have a modem. Please see the setup documentation for the storage system for
instructions on setting up its modem.


NS500 Standard Equipment


The illustration above is an NS500. The NS500(S) is very specific in its combinations of Data Movers, Control Stations, and Storage Processors, depending on what was ordered.

Remember: With an Integrated system, the storage (and SPs) are included. You cannot connect an Integrated system to an existing SAN environment.

While it is possible to place these individual components in a different order, it is recommended that you follow the layout shown above. If you do change the location of components, please be aware of cable-length issues.


NS500G Standard Equipment


The NS500G(S) is very specific in its combinations of Data Movers and Control Stations, depending on what was ordered. However, the possible combinations of storage that a Gateway can connect to are not illustrated in this slide; the illustration above pertains directly to an NS500G only.
While it is possible to place these individual components in a different order, it is recommended that you follow the layout shown above. If you do change the location of components, please be aware of cable-length issues.
The customer may have ordered a new CLARiiON array with the Celerra Gateway system, or the customer may already have the array. If your customer ordered the optional cabinet, the components are installed in the cabinet at the EMC factory.
Because the NS500 shares the same physical enclosure as a CX500, when you look at the front it appears that there should be drive modules in the front slots. That is not the case: the storage is provided by a separate CLARiiON enclosure.


Single Data Mover Configurations


Single Data Mover NS500S/NS500GS

Dual Data Movers NS500/NS500G


This illustration highlights the general physical differences between a single Data Mover model and a dual Data
Mover model shown on the next slide.


NS500 Data Mover


MIA
Media Interface Adapter: used to convert an HSSDC cable to an LC connection.

Serial to CS
An RJ-45 to DB-9m cable that connects to the appropriate Control Station serial connection (discussed later).

Public LAN: CGE(x)
The public LAN refers to the customer's network that will be used to access files stored on the Celerra storage. The CGE ports are RJ-45 ports that support 10/100/1000 Mbps; the speed used is determined by the customer's environment.

Private LAN
An RJ-45 cable that connects to the Control Station's Ethernet switch.


NS500 AUX Storage Processor

Console connection (used for support)

NS500-AUX SPs look very similar to CLARiiON CX500 SPs. The NS500-AUX has two small form-factor
pluggable (SFP) sockets in place of the CX500 optical ports.


NS500 Data Mover Status LEDs

- Note: The LEDs on the CLARiiON Storage Processor are interpreted similarly

Fault LED indicators
- Off indicates no fault.
- Amber indicates a fault.

Flashing amber indicators
- Six fast, one long: rewriting BIOS/POST. Do not remove the Data Mover while this is occurring.
- Slow (every four seconds): BIOS (basic input/output system) activity.
- Fast (every second): POST (power-on self-test) activity.
- Fastest (four times per second): booting.


Control Station/Switch Assembly

Private Ethernet switch

The Celerra may include one of two different Control Stations: NS-600-CS or the NS-CS. The two Control
Stations function in the same manner, but the buttons, lights, and ports are in different locations. The setup
procedure is essentially the same for either Control Station.


NS-CS Front View


This front view of an NS-CS is only visible after you have removed the front bezel. The front of this model Control Station presents a floppy drive, a CD-ROM drive, and a serial port connection.
The floppy and CD-ROM drives are used for installations and upgrades of EMC NAS code.
The serial port is used to connect directly to a computer that has been configured with the proper settings, as described in the setup guide. Commonly available terminal programs allow the user to interact with the Control Station. This serial port allows you to access the system in the event of a loss of LAN connectivity.


NS-CS Rear View


The rear view of this Control Station is obstructed; access to these ports can be difficult because the Ethernet switch blocks the middle portion of the device, as illustrated above.
The public LAN connection is typically connected to the customer network. This allows the Celerra to be accessed and managed via the GUI and/or CLI.
The private LAN connection is attached to the Ethernet switch directly behind the Control Station.
While this device comes with four serial connections, only one is required per Data Mover.

It is not common to hook up a mouse and/or keyboard; management of this device is done via the serial connection, as explained earlier.


Control Station/Switch Assembly

Private Ethernet switch

The Control Station assembly contains two individual pieces of hardware attached to each other.
This NS600 series model of Celerra uses the model NS-600-CS Control Station.


NS700 Standard Equipment

- Also available with 4 Data Movers (NS704 Integrated)

The illustration above pertains directly to an NS700. This model is also available with four Data Movers (the NS704 Integrated).
Typically these devices come pre-cabled and pre-wired. While it is possible to place these individual components in a different order, it is recommended that you follow the layout shown above. If you do change the location of components, please be aware of cable-length issues.
Remember: With an Integrated system the storage (and SPs) are included. You will not connect an Integrated system to an existing SAN environment.
Like the NS600, the NS700 is also available with a single Data Mover (NS700(G)S). In that case, the Data Mover enclosure will only include DM2, the bottom mover.


NS700G Standard Equipment


The NS700G can be connected to various storage options including a Symmetrix depending on your
configuration option.


6 Port Data Mover


Regardless of model type designation (NS600 or NS600G), there are no hardware differences between Data Movers. However, if a G model is deployed, you will be required to install MIAs in order to connect to the array.
MIA
Media Interface Adapter: used to convert an HSSDC cable to an SFP connection.
Serial to CS
A DB-9m cable that connects to the appropriate Control Station serial connection (discussed later).
Public LAN
The public LAN refers to the customer's network that will be used to access files stored on the Celerra storage. These are RJ-45 ports that support 10/100/1000 Mbps.
CGE
Copper Gigabit Ethernet.
Private LAN
An RJ-45 cable that connects to the Control Station's Ethernet switch.
SFP
Small form-factor pluggable.
Copper FC
An HSSDC cable that can be converted via an MIA (as required) to connect to the array.


8 Port Data Mover (w/ connections for CS1)


Regardless of model type designation (NS700 or NS700G), there are no hardware differences between Data Movers. However, if a G model is deployed, you will be required to install MIAs in order to connect to the array.
MIA
Media Interface Adapter: used to convert an HSSDC cable to an SFP connection.
Serial to CS
A DB-9m cable that connects to the appropriate Control Station serial connection (discussed later).
Public LAN
The public LAN refers to the customer's network that will be used to access files stored on the Celerra storage. These are RJ-45 ports that support 10/100/1000 Mbps.
CGE
Copper Gigabit Ethernet.
Private LAN
An RJ-45 cable that connects to the Control Station's Ethernet switch.
SFP
Small form-factor pluggable.
Copper FC
An HSSDC cable that can be converted via an MIA (as required) to connect to the array.

In the illustration above, you will notice that the 8-port NS700 Data Mover also includes serial and Ethernet connections for a second Control Station.


NS704G Standard Equipment


Important: The NS704G is a fabric-attached Gateway system only. There is no direct-connect option for this device.
With the exception of the NSX series (discussed later), this is the only NS-series device that can have two Control Stations.
While it is possible to place these individual components in a different order, it is recommended that you follow the layout shown above. If you do change the location of components, please be aware of cable-length issues.
The NS704G can be connected to various storage options, including a Symmetrix, depending on your configuration option.


Data Mover or Storage Processor Status LEDs

(Diagram: status LED locations for DM3/SPB and DM2/SPA.)

Fault LED indicators
- Off indicates no fault.
- Amber indicates a fault.

Flashing amber indicators
- Six fast, one long: rewriting BIOS/POST. Do not remove the Data Mover while this is occurring.
- Slow (every four seconds): BIOS (basic input/output system) activity.
- Fast (every second): POST (power-on self-test) activity.
- Fastest (four times per second): booting.


CX700 AUX Storage Processor

- Note: there are no SAN ports on these SPs

The CX700 AUX Storage Processor is sold only with an integrated NS700 or NS700S Celerra. The lack of a SAN personality card prohibits any SAN connection to SPA and SPB.


NSX Control Station and Blade Layout

Control Station (CS1): default standby
Control Station (CS0): default primary

The EMC Celerra NSX network server is a network-attached storage (NAS) gateway system that connects to EMC Symmetrix arrays, CLARiiON arrays, or both. The NSX system has between four and eight X-Blade 60s and two Control Stations. The EMC NAS software automatically configures at least one blade as a standby for high availability.


NSX Blade


Please note the location and names of the equipment listed above. You will learn more about each piece of equipment later in this module.

Important: The terms "blade" and "Data Mover" refer to the same device.


NSX Blade Back-end Ports


The Celerra NSX is always configured as a fabric-connected gateway system. A Fabric-Connected Celerra
Gateway system is cabled to a Fibre Channel switch using fibre-optic cables and small form-factor pluggable
(SFP) optical modules. It then connects through the Fibre Channel fabric to one or more arrays.
Other servers may also connect to the arrays through the fabric. You can use a single switch, or for added
redundancy you can use two switches. The Celerra system and the array or arrays must connect to the same
switches.
If you are connecting the Celerra system to more than one array, one array must be configured for booting the
blades. This array should be the highest-performance system and must be set up first. The other arrays cannot be
used to boot the blades and must be configured after the other setup steps are complete.


Blade Public Network Ports


The external network cables connect clients of the Celerra system to the blades. Another external network cable connects the CS to the customer's network for remote management of the system. The external network cables are provided by the customer; the category and connector type of the cable must be appropriate for use in the customer's network. The six copper Ethernet network ports on the blades are labeled cge0 through cge5. These ports support 10, 100, or 1000 megabit connections and have standard RJ-45 connectors. The two optical Gigabit Ethernet network ports are labeled fge0 and fge1. They have LC optical connectors and support 50 or 62.5 micron multimode optical cables. Ports fge0 and fge1 use optical SFP modules installed at the factory.

Hardware Review - 44

Copyright 2006 EMC Corporation. All Rights Reserved.

NSX Control Station Front

2005 EMC Corporation. All rights reserved.

Celerra Hardware Review - 45

The front view of this model Control Station presents a floppy drive, CD-ROM drive and a serial port
connection.

The floppy and CD-ROM are used for installations and upgrades of EMC NAS code.

The serial port is used to connect directly to a computer that has been configured with the proper settings as
described in the setup guide. Commonly available terminal programs allow the user to interact with the Control
Station. These serial ports will allow you to access the system in the event of a loss of LAN connectivity.

Hardware Review - 45

Copyright 2006 EMC Corporation. All Rights Reserved.

NSX Control Station Rear

2005 EMC Corporation. All rights reserved.

Celerra Hardware Review - 46

The NSX Control Station is designed for use with NSX systems only. While it still serves all the roles and
responsibilities of a traditional Control Station, please be aware that there is a different back-end port selection on
this model.

Hardware Review - 46

Copyright 2006 EMC Corporation. All Rights Reserved.

NSX System Managed Switch

2005 EMC Corporation. All rights reserved.

Celerra Hardware Review - 47

The private (internal) LAN cables connect the CS to the blades through the blade enclosures' system
management switches. These cables and switches make up a private network that does not connect to any
external network. Each blade enclosure has two system management switches, one on each side of the enclosure.

Hardware Review - 47

Copyright 2006 EMC Corporation. All Rights Reserved.

NSX System Managed Switch Cable Layout

Celerra Hardware Review - 48

2005 EMC Corporation. All rights reserved.

With the removal of the Ethernet switch, the diagram above illustrates a method by which the Control
Station and Data Movers can communicate with each other over a private and redundant connection.
Path  From                          To
1     CS0 (Left)                    Blade Enclosure 0 Port3 (R)
2     CS0 (Right)                   Blade Enclosure 0 Port3 (L)
3     CS1 (Left)                    Blade Enclosure 0 Port4 (R)
4     CS1 (Right)                   Blade Enclosure 0 Port4 (L)
5     Blade Enclosure 0 Port0 (L)   Blade Enclosure 1 Port3 (L)
6     Blade Enclosure 0 Port0 (R)   Blade Enclosure 1 Port3 (R)
7     Blade Enclosure 1 Port0 (L)   Blade Enclosure 2 Port3 (L)
8     Blade Enclosure 1 Port0 (R)   Blade Enclosure 2 Port3 (R)
9     Blade Enclosure 2 Port0 (L)   Blade Enclosure 3 Port3 (L)
10    Blade Enclosure 2 Port0 (R)   Blade Enclosure 3 Port3 (R)

Hardware Review - 48

Copyright 2006 EMC Corporation. All Rights Reserved.

NSX Power Subsystem

2005 EMC Corporation. All rights reserved.

Celerra Hardware Review - 49

The Celerra NSX always ships in its own EMC cabinet. The cabinet may include two uninterruptible power
supplies (UPSs) to sustain system operation for a short AC power loss. All components in the cabinet, except for
the CallHome modems, are connected to the UPSs to maintain high availability despite a power outage. In addition
to the two UPSs, the two Control Stations have automatic transfer switches (ATSs) to ride through short AC power losses.
The NSX Cabinet only includes the Control Station(s) and Data Movers. Storage is always in a separate cabinet.

Hardware Review - 49

Copyright 2006 EMC Corporation. All Rights Reserved.

Module Summary
In this lesson you learned about:
y Be careful with the terms back-end and front-end as they are
different depending on whether you are looking at them from the Celerra or the
storage system perspective
y While physically different, all models share similar
components and interconnections
NS500/350
NS600/NS700
NSX

y Integrated systems use a captive CLARiiON array


y Gateway configurations connect to the back-end via a
Fabric and may share the back-end with other hosts
2005 EMC Corporation. All rights reserved.

Celerra Hardware Review - 50

Hardware Review - 50

Copyright 2006 EMC Corporation. All Rights Reserved.

Closing Slide

2005 EMC Corporation. All rights reserved.

Celerra Hardware Review - 51

Hardware Review - 51

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra ICON
Celerra Training for Engineering

Installation and Configuration Overview

2006 EMC Corporation. All rights reserved.

Installation and Configuration Overview - 1

Copyright 2006 EMC Corporation. All Rights Reserved.

Revision History
Rev Number   Course Date     Revisions
1.0          February 2006   Complete

2006 EMC Corporation. All rights reserved.

Installation, and Configuration Overview - 2

Installation and Configuration Overview - 2

Copyright 2006 EMC Corporation. All Rights Reserved.

Product, Installation, and Configuration Overview


Upon completion of this module, you will be able to:
y Describe the locations where Celerra software is installed
y List the major installation tasks
y Explain how the Celerra Data Mover boots during the
installation phases

2006 EMC Corporation. All rights reserved.

Installation, and Configuration Overview - 3

The objectives for this module are shown here. Please take a moment to read them.

Installation and Configuration Overview - 3

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Software Locations


y 6 System LUNs on storage array
 Data Movers' DART OS
 NASDB, logs, config files, etc.
 a.k.a. Control LUNs
 DMs have no internal storage

y Control Station's internal disk drive
 Linux OS
 EMC NAS management software
 Auxiliary boot image for Data Movers

y Additional LUNs for user data are configured in the storage system and presented to the Data Movers

[Slide diagram: Storage Subsystem with 6 System LUNs (DART, etc.); Data Mover; Control Station internal drive with Linux & NAS management services]

2006 EMC Corporation. All rights reserved.
Installation, and Configuration Overview - 4

A Celerra system uses two storage locations for installation of its software: the Control Stations
internal disk drive and 6 System LUNs (also known as Control LUNs) on the storage array.
Control Station internal disk drive
The Celerra Control Station contains an internal disk drive upon which the Linux operating system is
installed as well as the NAS management services that are used to configure and manage Data Movers
and the file systems on the storage subsystem. The Control Station also holds an auxiliary boot image
which can be used by Data Movers whenever its OS cannot be located on the storage array.
6 Control LUNs on storage array
Celerra Data Movers have no local disk drives. Data Movers require 6 Control (or System) LUNs on
the storage subsystem. These System LUNs contain the DART operating system, configuration files,
log files, the Celerra configuration database ("NASDB"), NASDB backups, dump files, etc. (The
exact contents of each LUN are discussed later in this course.)
NOTE: This module discusses the installation storage requirements. Storage LUNs for user data are not
present at this time.

Installation and Configuration Overview - 4

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra NS Installation Tasks


Installation of a Celerra includes configuration or installation of the following:

y 6 System LUNs

y Fibre Channel connectivity between Data Mover and Storage System
 Connect cables
 Fabric Zoning

y Load the software
 Linux on Control Station
 DART, etc. on System LUNs

[Slide diagram: Storage Subsystem with 6 System LUNs (DART, etc.) connected over Fibre Channel to the Data Mover; Control Station internal drive with Linux & NAS management services]

2006 EMC Corporation. All rights reserved.
Installation, and Configuration Overview - 5

The key tasks of the Celerra NS installation include:


y Creating and configuring the 6 System LUNs on the storage array
y Providing redundant Fibre Channel access to the System LUNs for each Data Mover
y Installing and configuring Linux on the Control Station
y Installing DART onto the System LUNs on the storage system for the Data Movers.

Installation and Configuration Overview - 5

Copyright 2006 EMC Corporation. All Rights Reserved.

Software Status Before Installation


y 6 System LUNs are either empty or not configured yet
 Data Movers are not able to boot

y The Control Station drive is empty
 Or may contain code that will be overwritten

[Slide diagram: Storage Subsystem with 6 System LUNs (EMPTY), Fibre Channel to the Data Mover, private LAN to the Control Station (EMPTY)]

2006 EMC Corporation. All rights reserved.

Installation, and Configuration Overview - 6

The Data Mover operating system (DART), NAS and config files will be stored on the Internal IDE
drive in the control station and on the System LUNs on the storage subsystem (CLARiiON or
Symmetrix). At the beginning of a new install there are no files in any of those locations. (Actually
there may likely be a factory image of Linux on the Control station, this will be overwritten during
installation.)
It is assumed the Floppy and CD are loaded.

Installation and Configuration Overview - 6

Copyright 2006 EMC Corporation. All Rights Reserved.

When the Installation is Initiated


The following are written to the CS local drive:

y Linux OS for Control Station

y NAS management software

y Auxiliary DART image for Data Movers to Pre Execution Environment (PXE)
 For Data Mover network boot

[Slide diagram: Storage Subsystem with 6 System LUNs (EMPTY), Fibre Channel to the Data Mover, private LAN to the Control Station (Linux & NAS including PXE image)]
2006 EMC Corporation. All rights reserved.

Installation, and Configuration Overview - 7

Starting the software installation


Boot the CS (Control Station) from the floppy and run the installation command when prompted.
Linux is installed on the CS internal IDE drive; the NAS code (including DART) is also copied to the
local IDE drive.

Installation and Configuration Overview - 7

Copyright 2006 EMC Corporation. All Rights Reserved.

After the Files are Written to the CS


y Remove the CD & floppy
y The Control Station reboots
y Prompts for Linux configuration options
 IP address, netmask, gateway
 Hostname
 Nameserver

y Data Movers are rebooted by the installation script

[Slide diagram: Storage Subsystem with 6 System LUNs (EMPTY), Fibre Channel to the Data Mover, private LAN to the Control Station (Linux & NAS including PXE image)]

2006 EMC Corporation. All rights reserved.

Installation, and Configuration Overview - 8

After the files are written to the CS drive, the CS reboots, asks all the configuration questions and
restarts the network. A PXE image, with a bootable configuration for the DMs, is created on the CS
internal drive.
The DMs are now automatically rebooted from that PXE image by the installation script.

Installation and Configuration Overview - 8

Copyright 2006 EMC Corporation. All Rights Reserved.

When the Data Movers are Rebooted


y They look for a boot image of DART (nas.exe)
 1. Attempts boot over FC - FAIL
   Fabric is not zoned
   CLARiiON registration not configured
 2. Attempts PXE boot over private LAN - OK
   DMs PXE from CS drive
   Load temporary DART (nas.exe)

[Slide diagram: "Where is DART?" (1) the Data Mover tries the 6 System LUNs (EMPTY) over Fibre Channel, then (2) PXE boots from the Control Station (Linux & NAS including PXE image) over the private LAN]
2006 EMC Corporation. All rights reserved.

Installation, and Configuration Overview - 9

The CS reboots the DMs.


DMs cannot boot to the System LUNs as there is no O/S (DART) on them yet*, so they default to a
network boot to the PXE image on the CS.
* Also, if this system is connecting to the storage via a fibre channel fabric, there is no zoning in place
at this time.

Installation and Configuration Overview - 9

Copyright 2006 EMC Corporation. All Rights Reserved.

Providing Access to the System LUNs


When the Data Movers PXE boot from the DART image (nas.exe) on the CS drive, each DM queries its HBAs and the WWNs are passed to the CS, where they are displayed on the HyperTerminal screen

Next:
y Perform FC zoning
 If using a FC fabric connection
y Configure the CLARiiON
 Create RAID Group w/ System LUNs
 Register DMs
 Create Storage Group with System LUNs
y Data Movers are rebooted again

[Slide diagram: Storage Subsystem with 6 System LUNs (EMPTY), Fibre Channel to the Data Movers, private LAN to the Control Station (Linux & NAS including PXE image)]

2006 EMC Corporation. All rights reserved.


Installation, and Configuration Overview - 10

For manual installations, once the DMs boot up, the back end fibre channel ports become active and
the WWNs of the DMs are displayed on the HyperTerminal screen.
The manual install requires that you do all the backend configuration (LUNs, registration, storage
groups etc.) before continuing beyond this step.

Installation and Configuration Overview - 10

Copyright 2006 EMC Corporation. All Rights Reserved.

Providing CS Access to System LUNs (NBS)


When the Data Movers are rebooted again:
y They look for a boot image of DART (nas.exe)
 They cannot boot from the System LUNs, but they CAN access them
 DMs PXE from CS drive
 DMs start Network Block Services (NBS)
 Allows the CS to write to the back-end

[Slide diagram: "Where is DART?" (1) the Data Mover checks the 6 System LUNs (EMPTY) over Fibre Channel, then (2) PXE boots from the Control Station (Linux & NAS including PXE image) over the private LAN]


2006 EMC Corporation. All rights reserved.

Installation, and Configuration Overview - 11

Once the Data Movers are given access to the storage array they still cannot boot from the System
LUNs because DART (nas.exe) has not been loaded there at this time. However, the Data Mover can
see the System LUNs.
The Data Movers still access DART from the Control Station via PXE. Now they can provide access to
the Control LUNs for the Control Station via the Network Block Service (NBS).

Installation and Configuration Overview - 11

Copyright 2006 EMC Corporation. All Rights Reserved.

DART Installed Over NBS


When the Data Movers are rebooted again:
y CS can now see the System LUNs through DMs using NBS
 Partitions and formats System LUNs
 Loads DART, etc. to array
 Completes software configuration of Data Movers

[Slide diagram: Control Station (NBS Client) writes through the Data Mover (NBS Server) over Fibre Channel to the Storage Subsystem's 6 System LUNs (DART, etc. LOADED)]

2006 EMC Corporation. All rights reserved.

Installation, and Configuration Overview - 12

Using NBS (Network Block Service, see below) over the internal network the CS can access the
System LUNs via the Data Mover(s).
Network Block Devices
NBS uses iSCSI with CLARiiON proprietary changes. Below is a generic description of Network
Block Devices.
Linux can use a remote server as one of its block devices. Every time the client computer wants to read
/dev/nd0, it will send a request to the NBS server via TCP, which will reply with the data requested.
This can be used for stations with low disk space (or even diskless - if you boot from floppy) to borrow
disk space from other computers. Unlike NFS, it is possible to put any file system on it.
Using NBS over the internal TCP/IP network, the CS partitions, formats and installs all the required
NAS (DART) code on the System LUNs.
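For illustration only, the generic Linux Network Block Device mechanism described above can be exercised with the standard nbd-client tools. This is a minimal sketch of generic NBD, not of EMC's proprietary NBS (which is driven automatically by the install scripts); the server name, port, and mount point below are assumed values.

# Load the NBD client module and attach a remote export as a local block device
modprobe nbd
nbd-client nbd-server-host 2000 /dev/nbd0
# The remote storage now behaves like a local disk: any file system can be created on it
mkfs.ext3 /dev/nbd0
mount /dev/nbd0 /mnt/remote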

Installation and Configuration Overview - 12

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Data Mover Installation and Boot Process


When the Data Movers are rebooted again:
y DMs successfully boot DART from array
 1. Attempts boot over FC - OK
y Installation of DMs completes
y Further configuration as required:
 Network interfaces
 File systems
 Exports and shares
 Etc.

[Slide diagram: "Where is DART?" (1) the Data Mover boots from the 6 System LUNs (DART, etc.) over Fibre Channel; Control Station (Linux & NAS including PXE image) on the private LAN]


2006 EMC Corporation. All rights reserved.

Installation, and Configuration Overview - 13

Once DART, etc. has been loaded onto the System LUNs, the Data Mover can now successfully boot
over Fibre Channel from the System LUNs, and the remainder of the automated installation tasks can
complete.

Installation and Configuration Overview - 13

Copyright 2006 EMC Corporation. All Rights Reserved.

Module Summary
In this module you learned about:
y Celerra NS software will be installed to the Control
Station local drive and the System LUNs on the storage
array
y The major installation tasks include
 Load Linux and DART image to the Control Station
 PXE boot Data Mover to provide Control Station access to array via NBS
 Load DART, etc., to array

y The Data Mover first attempts to boot from the storage array; if DART is unavailable, it performs a PXE boot from the Control Station
2006 EMC Corporation. All rights reserved.

Installation, and Configuration Overview - 14

Installation and Configuration Overview - 14

Copyright 2006 EMC Corporation. All Rights Reserved.

Closing Slide

2006 EMC Corporation. All rights reserved.

Installation, and Configuration Overview - 15

Installation and Configuration Overview - 15

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra ICON
Celerra Training for Engineering

Preparing, Installing, and Configuring a


Fabric-Connected Gateway System

2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 1

Copyright 2006 EMC Corporation. All Rights Reserved.

Revision History
Rev Number

Course Date

1.0

February 2006

1.2

May 2006

2006 EMC Corporation. All rights reserved.

Revisions
Complete
Update and reorganization

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 2

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 2

Copyright 2006 EMC Corporation. All Rights Reserved.

Preparing, Installing, and Configuring a Fabric-connected


Gateway

Upon completion of this module, you will be able to:


y Plan and prepare for installation of Control Station
operating system, NAS software, and DART operating
Environment
y Perform pre-installation tasks
y Install and connect components
y Configure the boot array
y Install the EMC NAS software

2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 3

Regardless of the specific configuration, all Celerra installations are performed using the same general
process and phases. In this module we will be discussing the installation and configuration of a Fabric-connected Gateway system configuration. Much of the back-end configuration and fabric zoning can
be performed automatically using the Auto-configure scripts; however, we will be discussing the manual
configuration as this represents a worst-case complexity.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 3

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra NS Installation Documentation


Key document for this discussion:
y Celerra NS500G/NS600G/NS700G Gateway
Configuration Setup Guide
y Referred to from here on as the Gateway Setup Guide
Locate your copy before continuing

2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 4

The following portions of this course are designed to focus on the technical publication, Celerra
NS500G/NS600G/NS700G Gateway Configuration Setup Guide.
Please locate your copy of this document and follow the discussions closely with the document.
If you cannot locate your copy, please notify your instructor immediately.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 4

Copyright 2006 EMC Corporation. All Rights Reserved.

Three Phases of Installation


y The three phases of installation are:
Phase 1: Planning and data collection
Phase 2: Physical installation and initial configuration
Phase 3: Final Configuration

y In-depth discussion of each phase is covered in the


Gateway Setup Guide
y Today the focus of this course is on Phase 2
Do not minimize the value of the required assessment and planning
that must be performed in the field during Phase 1 and earlier
Qualifier document

We will continue tomorrow with Phase 3

2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 5

Installation and configuration of a Celerra gateway system is typically done in three phases.
y Phase 1: The installation is planned and configuration information is collected from the customer.
y Phase 2: The hardware is physically installed and cabled, the software is installed, and the Control
Station is configured. At this point the system is functional, but cannot yet be used by clients to store and retrieve
files.
y Phase 3: The system is configured with client network connections, file systems, shares, exports,
and so on. When this phase is complete, the system is fully usable by clients.
In the field, two or more individuals from different EMC or Authorized Service Provider organizations
typically work together to complete the different phases of the installation. Close coordination is
required to ensure the requirements are communicated.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 5

Copyright 2006 EMC Corporation. All Rights Reserved.

Phase 1: Planning and Data Collection


y Software verification
Verify the correct software level
Change Control Authority (CCA)
Interoperability Matrix

y Site preparations
Physical space considerations, power, network connectivity, etc

y Verify Symmetrix and or CLARiiON back-end requirements


Software Level, Access Logix, Write Cache configuration, etc.
Control LUNs
User Data Volumes

y Gather required information and complete Setup Worksheets in


Appendix G
SAN and storage cabling and zoning requirements
Internal and external network
IP addresses, Netmask, Gateways, DNS, etc.
2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 6

The first phase starts when the customer agrees to the installation and ends when all of the required
information has been collected. Missing information, such as IP addresses, can cause significant delays
later in the installation process.
1. Use the EMC Change Control Authority (CCA) process to get the initial setup information and to
verify you have all needed software before going to the customer's site.
2. Verify that the customer has completed all site preparation steps, including providing appropriate
power and network connections.
3. If the Celerra system is being connected to a new array, verify that the array has been installed and
configured before starting to install the Celerra system. Verify that the required revision of the array
software is installed and committed.
4. Fill out the configuration worksheets with the customer.
5. Give the phase 2 configuration information to the installer who will complete the next phase.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 6

Copyright 2006 EMC Corporation. All Rights Reserved.

EMC NAS Installation Software


y Acquire the CCA-approved ISO image from EMC's FTP site
y Burn the CD from ISO
y The installation boot floppy that shipped should be usable
If necessary, create installation boot floppy from CD
Image is located on NAS software CD
Use rawrite.exe to copy \images\boot.img to floppy

2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 7

The version of EMC NAS that ships with the Celerra Network Server is not likely to be appropriate for the installation.
When the Change Control Authorization is consulted, the correct version will be identified and placed on the EMC FTP site
as an ISO image for download.
After downloading the approved version, you will want to create a CD from the ISO image.
Installation Boot Floppy
Typically, you should be able to use the boot floppy that shipped with the Celerra Network Server. If you need to create a
new boot floppy from either a Linux or Windows host, the procedure to do this from Windows is included below; a hedged command-line sketch follows the steps.
y Extract the rawrite.exe file from the Global Services Service Pack CD. You can also obtain rawrite.exe for free from
many internet sites.
y Copy rawrite.exe to C:\temp
y Put the EMC NAS code CD into the CD-ROM drive of the Windows computer being used to create the boot floppy
y Place a blank, formatted floppy into the floppy drive of the same computer
y Change directory to C:\temp
y Type rawrite.exe and press [Enter], and provide the following information when prompted:
Disk image name: D:\images\boot.img
Target diskette drive: A:
y When the command prompt returns the boot floppy creation is complete.
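If the boot floppy is instead created from a Linux host, a minimal command sketch (assuming the EMC NAS CD is mounted at /mnt/cdrom and the floppy drive is /dev/fd0; these paths are assumptions, not values from the setup guide) would be:

# Write the boot image from the NAS software CD directly to a blank floppy
mount /dev/cdrom /mnt/cdrom
dd if=/mnt/cdrom/images/boot.img of=/dev/fd0 bs=1440k
sync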

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 7

Copyright 2006 EMC Corporation. All Rights Reserved.

Data Collection Required for Installation


Discuss: Gateway Setup Guide Appendix G: Setup
Worksheets
y Site Preparation Worksheet
y Fibre Channel Cabling Worksheet
Note the instructional text for Tables G-1 and G-2

y CLARiiON Boot Array Worksheet


y Control Station 0 Networking Worksheet
Note the default values for the internal network

y Private LAN Worksheet


(If non-default configuration was CCA-approved)
2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 8

Please take time to review the five worksheets listed above.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 8

Copyright 2006 EMC Corporation. All Rights Reserved.

Phase 2: Physical Installation and Initial Config


y Verify configuration information from Phase 1
y Verify presence of required components
y Assemble system and make connections
EMC provided cabinet
Customer cabinet

y Power on and install software


y Configure CS0
y Configure CS1 (if present)
NS704 and NSX models

2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 9

The second phase includes physically installing the system, cabling it to the customer's network, and
configuring the Control Stations and CallHome. The second phase is complete when the system
successfully calls home and you have filled out the Phase 2 Completion Hand-Off Worksheet from
Appendix G.
You should always install and configure the system according to the instructions in the Gateway setup
guide. Be sure to follow the steps in the order given.
The basic steps for installing a Celerra gateway system are as follows.
y Verify you have received the required phase 2 configuration information from the individual who
completed phase 1.
y Verify that all required components are onsite.
y Assemble the system and make required connections. This part of the procedure varies greatly
from a system that shipped with an EMC cabinet to one which did not.
y Power on the system and install or upgrade software as needed. You will need your service laptop
computer.
y Configure Control Station 0.
y For dual Control Station configurations, configure Control Station 1.
y Configure and test CallHome.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 9

Copyright 2006 EMC Corporation. All Rights Reserved.

Make the Required Component Connections


Discuss
Gateway Setup Guide
Part 2: Physical Installation and Initial Configuration
y Chapter 5, 7, 9 or 10: Connect Cables for a Fabric-Connected system (for your NS model, e.g. NS600G)
y Chapter 11: Configure the Boot Array
y Chapter 12: Install and Configure the EMC NAS
Software
Note
Step 7f: Option for fabric-connected
Step 7j: Choose no for manual configuration
2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 10

Chapters 5, 7, 9, and 10 each discuss the Fabric-connected Gateway installation, with each chapter focusing
on a different model. Please discuss the chapter that is appropriate for your lab setup.
Chapter 11: Configure the Boot Array discusses how to verify the software version, write cache and
user account settings of the CLARiiON array. Please note that complete configuration of the storage
subsystem is beyond the scope of this course. However, when appropriate we will focus on the storage
requirements for the Celerra installation.
Chapter 12: Install and Configure the EMC NAS Software
Once the correct NAS software version is installed on the Control Station, you will power on the Data
Movers for the first time. The Data Movers cannot boot from the array because the array does not have
the EMC NAS software installed. Instead, the Data Movers boot from the Control Station over the
private LAN connections. This is called PXE (preboot execution environment) or network booting.
In addition to the two Celerra private network configurations, you will also be provided the
opportunity to configure the Control Station's IP connection to the production/administrative network.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 10

Copyright 2006 EMC Corporation. All Rights Reserved.

Connect Serial Cable to Control Station


y The Control Station typically does not have a monitor
Connect a serial cable
Start a HyperTerminal session


2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 11

Remember the terminal parameters: 19200, 8bit, no parity, 1 stop and none for Flow Control.
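If a Linux service laptop is used instead of HyperTerminal, an equivalent hedged sketch with the same settings (assuming the laptop's serial port is /dev/ttyS0; this device name is an assumption) is:

# 19200 baud, 8 data bits, no parity, 1 stop bit, no flow control
screen /dev/ttyS0 19200,cs8,-ixon,-ixoff,-istrip
# or, using minicom
minicom -b 19200 -D /dev/ttyS0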

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 11

Copyright 2006 EMC Corporation. All Rights Reserved.

Power on Control Station


y Install boot floppy and NAS Software CD and power on
the Control Station
y Select
serialinstall

2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 12

The first part of the installation process includes the configuration of the Control Station. We are
going to do a standard installation, so we will select serialinstall. Alternatively, the Control
Station configuration can be described in advance in the file ksnas.cfg. Then, at the beginning of
the installation, the kickstart installation option can be chosen by typing serialkickstart when
prompted at Steps 5-6 above. The kickstart facility will then extract this configuration information
from ksnas.cfg.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 12

Copyright 2006 EMC Corporation. All Rights Reserved.

CS installation
y Linux will be installed on the Control Station
y When complete, you will be prompted to remove the boot media
(floppy)
y Control Station will reboot using Linux that was just installed
y You will be prompted for the following:
Is this a Dual CS configuration?
IP address for Primary Internal Network
Default 192.168.1.100

IP address of IPMI network (for dual CS configurations only)


IP address of Backup Internal Network
Default: 192.168.2.100

IP address of external Management LAN


If this is a Gateway or Integrated and whether Direct or Fabric Connected
and other information about the back-end
2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 13

Best practice is to use the default addresses for the internal network; however, if these default addresses cannot be used
because of potential conflicts in the environment, it is easier to change the defaults now
rather than after the installation completes.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 13

Copyright 2006 EMC Corporation. All Rights Reserved.

Selecting the Manual Configuration


To use the manual configuration method:
y Step 7j: Celerra Gateway Auto-Config option, Choose
No for manual configuration of backend

2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 14

The option to use the Celerra Gateway Auto-Config function is a critical step in the install process.
Select Yes and the Auto-config scripts will configure the CLARiiON array and zoning on the switch.
In this case we are going to select No and review the manual steps required to configure the fabric and
the back-end storage.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 14

Copyright 2006 EMC Corporation. All Rights Reserved.

After step 7j:


Manually Zone FC Switches and Configure Control LUNs

Reference Gateway Setup Guide


Appendix E: Zoning FC Switches and Manually
Configuring Control LUNs
y Zone the Fibre Channel Switches by WWN
y Create Control LUNs on back-end array

2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 15

When performing the fabric-connected Gateway installation, use Appendix E of the Gateway Setup
Guide for the procedure to zone the Fibre Channel switch and configure the LUNs on the storage array.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 15

Copyright 2006 EMC Corporation. All Rights Reserved.

Gather WWPNs of DM Ports


y The install script will reboot the DMs and report:
y Setup Control LUNs on the backend and create Storage Groups and Fabric Switch Zonning with the following Data Mover HBA UIDs. Please see the Installation Guidefor help.
y Data Mover 2 WWN Node/Port Names:
y FCP HBA 0: N_PORT S_ID 010d00 Node 50060160b0602f3b Port 5006016030602f3b
y FCP HBA 1: N_PORT S_ID 010e00 Node 50060160b0602f3b Port 5006016130602f3b
y Data Mover 3 WWN Node/Port Names:
y FCP HBA 0: N_PORT S_ID 010c00 Node 50060160b0602f3b Port 5006016830602f3b
y FCP HBA 1: N_PORT S_ID 010f00 Node 50060160b0602f3b Port 5006016930602f3b
y Data Mover 4 WWN Node/Port Names:
y FCP HBA 0: N_PORT S_ID 010900 Node 50060160b0602ed2 Port 5006016030602ed2
y FCP HBA 1: N_PORT S_ID 010b00 Node 50060160b0602ed2 Port 5006016130602ed2
y Data Mover 5 WWN Node/Port Names:
y FCP HBA 0: N_PORT S_ID 010a00 Node 50060160b0602ed2 Port 5006016830602ed2
y FCP HBA 1: N_PORT S_ID 010800 Node 50060160b0602ed2 Port 5006016930602ed2
y Type 'C' to continue:

2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 16

When you get to the prompt asking whether you want to Auto-config and you select No, the
Data Movers will reboot and report the WWNs of the HBAs. Record this information, as you will need
the WWNs later when zoning the switch.
You might find it easier to cut-and-paste this information into a notepad document.
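If the HyperTerminal output is captured to a text file, a small shell sketch like the following can extract the port WWNs and insert the colons expected by the switch zoning tools; the capture file name dm_wwns.txt is hypothetical.

# Print the last field (the Port WWN) of each HBA line, then add a colon
# after every two hex digits and strip the trailing colon
awk '/FCP HBA/ {print $NF}' dm_wwns.txt | sed 's/../&:/g; s/:$//'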

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 16

Copyright 2006 EMC Corporation. All Rights Reserved.

Physical Cabling Requirements


y Celerra Gateway configurations connect to the storage
system through one or more Fibre Channel switches
y Storage system and SAN fabric may be shared with other
servers
y Think No Single Point of Failure!
[Slide diagram: Control Station and two Data Movers, each cabled to two FC switches, which connect to a CLARiiON with two Storage Processors]

2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 17

The physical connections are typically made using multimode fiber optic cables. Each Fibre Channel
port on each data mover connects to an available port on the switch as does each port on the storage
array.
An ideal configuration is designed and implemented with No Single Points of Failure. That is, any
one component can fail and access to the storage is still maintained. This requires the following:
y Two Fibre Channel HBAs per Data Mover (standard configuration)
y Two Fibre Channel Switches
y Two Storage Processors with two available ports each
While the ideal configuration includes two Fibre Channel switches with independent fabric
configurations, SANs are often implemented with a single switch because of the high availability features
that are built in.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 17

Copyright 2006 EMC Corporation. All Rights Reserved.

Verify Fibre Channel Connections


y Log on to the Fibre Channel switch and verify cable
connections
yy
yy
yy
yy
yy
yy
yy
yy
yy
yy
yy
yy
yy
yy
yy
yy
yy
yy
yy

switch118:admin> switchshow

Port Media Speed State
=========================
 0   id    N2    Online    F-Port  50:06:01:61:00:60:02:42
 1   id    N2    Online    F-Port  50:06:01:60:00:60:02:42
 2   id    N2    Online    F-Port  50:06:01:68:00:60:02:42
 3   id    N2    Online    F-Port  50:06:01:69:00:60:02:42
 4   id    N2    No_Light
 5   id    N2    No_Light
 6   id    N2    No_Light
 7   id    N2    No_Light
 8   id    N2    Online    F-Port  50:06:01:69:30:60:2e:d2
 9   id    N2    Online    F-Port  50:06:01:60:30:60:2e:d2
10   id    N2    Online    F-Port  50:06:01:68:30:60:2e:d2
11   id    N2    Online    F-Port  50:06:01:61:30:60:2e:d2
12   id    N2    Online    F-Port  50:06:01:68:30:60:2f:3b
13   id    N2    Online    F-Port  50:06:01:60:30:60:2f:3b
14   id    N2    Online    F-Port  50:06:01:61:30:60:2f:3b
15   id    N2    Online    F-Port  50:06:01:69:30:60:2f:3b

2006 EMC Corporation. All rights reserved.
Preparing, Installing, and Configuring a Fabric-Connected Gateway - 18

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 18

Copyright 2006 EMC Corporation. All Rights Reserved.

Zoning Requirements
y Fibre Channel SANs provide flexible connectivity where any port in
the fabric is capable of seeing any other port
y Zoning is configured on the Switch for performance, security, and
availability reasons to restrict which ports in a fabric see each
other
Switch 1
 Zone1 - DM2-0 to SPA-0
 Zone2 - DM3-0 to SPB-1

Switch 2
 Zone1 - DM2-1 to SPB-0
 Zone2 - DM3-1 to SPA-1

[Slide diagram: Control Station; Data Movers DM2 and DM3, each with back-end ports 0 and 1, connected through FC-SW1 and FC-SW2 to CLARiiON SP-A and SP-B]
2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 19

By design, Fibre Channel switches provide flexible connectivity where any port in the fabric is capable
of seeing any other port. This can lead to performance, security, and availability issues. Zoning is
feature of most switches that restrict which ports in the fabric see each other. This eliminates any
unnecessary interactions between ports.
In the example above, each switch is a separate fabric and is thus configured separately.
An alternate Zoning configuration might look like this:

Switch 1
 Zone1 - DM2-0 to SPA-0
 Zone2 - DM2-0 to SPB-1
 Zone3 - DM3-0 to SPA-0
 Zone4 - DM3-0 to SPB-1

Switch 2
 Zone1 - DM2-1 to SPA-0
 Zone2 - DM2-1 to SPB-1
 Zone3 - DM3-1 to SPA-0
 Zone4 - DM3-1 to SPB-1

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 19

Copyright 2006 EMC Corporation. All Rights Reserved.

Configure Zoning
y Zoning is performed on the Fibre Channel Switch
Single HBA zoning
Zoning by WWPN

y Need WWNs of DM back-end ports and Storage Array


front-end ports
y Zoning is performed using switch specific tools
GUI
CLI

y Because WWNs must be exact, easier to create a script


and use cut-and-paste to minimize chance of error

2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 20

zonecreate "DM_2_BE_0_SPA_Port0","50:06:01:60:30:60:2f:3b;50:06:01:60:00:60:02:42"
zonecreate "DM_2_BE_0_SPB_Port0","50:06:01:60:30:60:2f:3b;50:06:01:68:00:60:02:42"
zonecreate "DM_2_BE_1_SPA_Port1","50:06:01:61:30:60:2f:3b;50:06:01:61:00:60:02:42"
zonecreate "DM_2_BE_1_SPB_Port1","50:06:01:61:30:60:2f:3b;50:06:01:69:00:60:02:42"
zonecreate "DM_3_BE_0_SPA_Port0","50:06:01:68:30:60:2f:3b;50:06:01:60:00:60:02:42"
zonecreate "DM_3_BE_0_SPB_Port0","50:06:01:68:30:60:2f:3b;50:06:01:68:00:60:02:42"
zonecreate "DM_3_BE_1_SPA_Port1","50:06:01:69:30:60:2f:3b;50:06:01:61:00:60:02:42"
zonecreate "DM_3_BE_1_SPB_Port1","50:06:01:69:30:60:2f:3b;50:06:01:69:00:60:02:42"
zonecreate "DM_4_BE_0_SPA_Port0","50:06:01:60:30:60:2e:d2;50:06:01:60:00:60:02:42"
zonecreate "DM_4_BE_0_SPB_Port0","50:06:01:60:30:60:2e:d2;50:06:01:68:00:60:02:42"
zonecreate "DM_4_BE_1_SPA_Port1","50:06:01:61:30:60:2e:d2;50:06:01:61:00:60:02:42"
zonecreate "DM_4_BE_1_SPB_Port1","50:06:01:61:30:60:2e:d2;50:06:01:69:00:60:02:42"
zonecreate "DM_5_BE_0_SPA_Port0","50:06:01:68:30:60:2e:d2;50:06:01:60:00:60:02:42"
zonecreate "DM_5_BE_0_SPB_Port0","50:06:01:68:30:60:2e:d2;50:06:01:68:00:60:02:42"
zonecreate "DM_5_BE_1_SPA_Port1","50:06:01:69:30:60:2e:d2;50:06:01:61:00:60:02:42"
zonecreate "DM_5_BE_1_SPB_Port1","50:06:01:69:30:60:2e:d2;50:06:01:69:00:60:02:42
cfgcreate "Celerra_cfg", "DM_2_BE_0_SPA_Port0; DM_2_BE_0_SPB_Port0; DM_2_BE_1_SPA_Port1;
DM_2_BE_1_SPB_Port1; DM_3_BE_0_SPA_Port0; DM_3_BE_0_SPB_Port0; DM_3_BE_1_SPA_Port1;
DM_3_BE_1_SPB_Port1; DM_4_BE_0_SPA_Port0; DM_4_BE_0_SPB_Port0; DM_4_BE_1_SPA_Port1;
DM_4_BE_1_SPB_Port1; DM_5_BE_0_SPA_Port0; DM_5_BE_0_SPB_Port0; DM_5_BE_1_SPA_Port1;
DM_5_BE_1_SPB_Port1"
cfgenable "Celerra_cfg"
cfgsave
Preparing, Installing, and Configuring a Fabric-Connected Gateway - 20

Copyright 2006 EMC Corporation. All Rights Reserved.

Brocade Zoning
y Output from the zoneshow command
y Zone Configuration is a set of zones

Switch118:admin> zoneshow
...
Effective configuration:
 cfg:   Celerra_cfg
 zone:  DM_2_BE_0_SPA_Port0
                50:06:01:60:30:60:2f:3b
                50:06:01:60:00:60:02:42
 zone:  DM_2_BE_0_SPB_Port0
                50:06:01:60:30:60:2f:3b
                50:06:01:68:00:60:02:42
 zone:  DM_2_BE_1_SPA_Port1
                50:06:01:61:30:60:2f:3b
                50:06:01:61:00:60:02:42
 zone:  DM_2_BE_1_SPB_Port1
                50:06:01:61:30:60:2f:3b
                50:06:01:69:00:60:02:42
 zone:  DM_3_BE_0_SPA_Port0
                50:06:01:68:30:60:2f:3b
                50:06:01:60:00:60:02:42
 ...

2006 EMC Corporation. All rights reserved.
Preparing, Installing, and Configuring a Fabric-Connected Gateway - 21

Above is an example of the zoning configuration that was auto-generated during a CX704G
installation. Note that the members of a zone are defined by the World Wide Port Names (WWPNs)
of the Data Mover HBA and the SP ports. Also, each zone only includes one initiator device (HBA).
The output above was the result of the Brocade ZoneShow command. The output was abbreviated to
only show the effective zone configuration for one Data Mover.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 21

Copyright 2006 EMC Corporation. All Rights Reserved.

Configuring Back-end Storage


y CLARiiON Back-end
Manually register Data Mover HBAs with Navisphere
Create RAID Groups and Bind LUNs
Configure Storage Groups
Create Storage Group
Add LUNs
Connect Data Mover HBAs

y Symmetrix Back-end
Configure Logical Volumes
Assign Channel Addresses and present volumes on FA port
Configure Volume Logix

2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 22

In the example above, all connections have been registered.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 22

Copyright 2006 EMC Corporation. All Rights Reserved.

CLARiiON Navisphere Manager


y Navisphere Manager
browser-based GUI
Provides configuration and
management interface to
storage array

y Open a browser and specify


the IP address of either SPA
or SPB
y Login when prompted.
y Note: With Integrated
systems, Navisphere
Manager is not available
Array is configured using CLI
and scripts
2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 23

A CLARiiON array can be managed using either a Command Line Interface or the graphical
Navisphere Manager. Navisphere Manager is browser based and is invoked by simply specifying the
IP address of either SPA or SPB. When prompted, enter the userid and password.
Note: Integrated systems may not include the Navisphere Manager User Interface and all configuration
and monitoring is done through the Celerra Control Station using the CLI.
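Where only the CLI is available, the equivalent back-end steps can be scripted with the Navisphere CLI. The sketch below is illustrative only: the SP address, RAID Group ID, disk positions, capacities, LUN numbers, Storage Group name, and host name are assumptions, and exact options vary with the FLARE/Navisphere CLI release, so always follow the setup guide for the real values.

# Assumed SP address and example IDs - adjust for the actual array
SPA=10.127.50.10
# Create a RAID Group from five example disks and bind two LUNs on it (RAID 5)
navicli -h $SPA createrg 0 0_0_5 0_0_6 0_0_7 0_0_8 0_0_9
navicli -h $SPA bind r5 0 -rg 0 -cap 11 -sq gb
navicli -h $SPA bind r5 1 -rg 0 -cap 11 -sq gb
# Create the Celerra Storage Group, add the LUNs, and connect a registered Data Mover HBA
navicli -h $SPA storagegroup -create -gname Celerra_SG
navicli -h $SPA storagegroup -addhlu -gname Celerra_SG -hlu 0 -alu 0
navicli -h $SPA storagegroup -addhlu -gname Celerra_SG -hlu 1 -alu 1
navicli -h $SPA storagegroup -connecthost -host dm2 -gname Celerra_SG -o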

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 23

Copyright 2006 EMC Corporation. All Rights Reserved.

Manually Register DM HBAs


y Initiator Records define connections
between DM and Array
Each HBA should have connection to each SP
Dependent on Zoning configuration

2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 24

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 24

Copyright 2006 EMC Corporation. All Rights Reserved.

Register Initiator Records


y Associates a name with an initiator and defines operating
parameters for the connection

2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 25

Registration typically associates a hostname and IP address of a host with the WWPN of the Fibre
Channel HBA and also sets other attributes of the connection.
In a typical open systems host environment, all HBAs for a host are registered together and assigned
the same name. With Celerra, the auto-configure script that runs during install registers each HBA
separately. For proper operation, it is important that the Initiator Information is set as shown above in
the example:
Initiator Type = CLARiiON Open
Failover Mode = 0
Array CommPath = Disabled
Unit Serial Number = Array

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 25

Copyright 2006 EMC Corporation. All Rights Reserved.

Register All Connections


y Repeat the prior step for all initiator connections

2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 26

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 26

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Control LUNs


y When a Celerra is first installed, the Control Volumes
must be configured
LUN   Size   Contents
00    11GB   DART, Individual DM configuration files
01    11GB   Data Mover log files
02    2GB    Reserved (not used on NS-series): Linux on Control Stations (CS0) with no local HDD
03    2GB    Reserved (not used on NS-series): Linux on Control Stations (CS1) with no local HDD
04    2GB    NAS configuration database (NASDB)
05    2GB    NASDB backups, dump file, log files, etc.

y Additional LUNs for user storage as required


2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 27

When a Celerra is first installed, a minimum of six LUNs are created either manually, or automatically
through the install scripts. The table above displays all of the Celerra System LUNs, along with their
size an contents. Please note that LUNs 02 and 03 are not currently used for the Celerra NS series.
Earlier Celerra models, in which the Control Station had no internal hard drive, would use these LUNs
to hold the Linux installation. Additional LUNs must be configured for user file systems data.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 27

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating a RAID Group

2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 28

A RAID Group is a collection of related physical disks. From 1 to as many as 128 LUNs may be created
from a RAID Group. This screen shows the dialog for configuring a RAID Group.
The user needs to specify how many disks are to be reserved; the display will change to indicate
which RAID types are supported by that quantity of disks. In addition, the user may choose a decimal
ID for the RAID Group. If none is selected, the storage system will choose the lowest available
number.
The user must either allow the storage system to select the physical disks to be used, or may choose to
select them manually. Note that the storage system will not automatically select disks 0,0,0 through
0,0,4; they may be selected manually by the user. These disks contain the CLARiiON reserved areas,
so they have less capacity than other disks of the same size.
Other parameters that may be set include:
y Expansion/defragmentation priority - Determines how fast expansion and defragmentation occur.
Values are Low, Medium (default), or High.
y Automatically destroy - Enables or disables (default) the automatic destruction of the RAID Group
when the last LUN in that RAID Group is unbound.
Maximum number of RAID Groups per array = 240
Number of disks per RAID Group = RAID 5 = 3-16 disks, RAID 3 = 5 or 9 disks, RAID 1 = 2 disks,
RAID 10 = 2, 4, 6, 8, 10, 12, 14, or 16 disks. Remember, Celerra Best Practices specify the number of
disk per RAID Group.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 28

Copyright 2006 EMC Corporation. All Rights Reserved.

Binding LUNs

2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 29

When binding LUNs, the user must select the RAID Group to be used, and, if this is the first LUN
being bound on that RAID Group, the RAID type. If a LUN already exists on the RAID Group, the
RAID type has already been selected, and cannot be changed.
The size of a LUN can be specified in Blocks, MB, GB, or TB. The maximum LUN size is 2 TB. The
maximum number of LUNs in a RAID Group is 128.
In the example above, we specified creating two LUNs using all available capacity in the RAID Group
and distributing the LUNs across both Storage Processors for load-balancing purposes.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 29

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating a Storage Group

LUNs are made read/write
accessible to hosts through
Storage Groups
1. Create Storage Group
2. Add LUNs
3. Connect Hosts

2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 30

The configuration object used for assigning LUNs to hosts is called a Storage Group. Basically you
create a Storage Group, add LUNs and connect hosts. When a host is connected to a Storage Group,
it will have full read/write access to all LUNs in the Storage Group.
When creating a Storage Group, the software requires only a name for the Storage Group. All other
configuration is performed after the Storage Group is created.
A name supplied for a Storage Group is 1-64 characters in length. It may contain spaces and special
characters, but this is discouraged. After clicking OK or Apply, an empty Storage Group, with the
chosen name, is created on the storage system.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 30

Copyright 2006 EMC Corporation. All Rights Reserved.

Storage Group Properties - LUNs


y Select LUNs to be added to the Storage Group
Celerra Control Volumes
User Data Volumes

2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 31

To assign LUNs, right click on the Storage Group, select properties and the LUNs tab. The LUNs tab
is used to add or remove LUNs from a Storage Group, or verify which are members. The Show LUNs
option allows the user to choose whether to only show LUNs which are not yet members of any
Storage Group, or to show all LUNs.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 31

Copyright 2006 EMC Corporation. All Rights Reserved.

LUN Addressing is Critical!


y LUN addresses are automatically assigned when you add a LUN to a Storage Group
 Defaults may not be appropriate
y Celerra requires specific LUN addresses
 Control Volumes: Addresses 00-05
 User data volumes begin with Address 16 (10 hex)

2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 32

When a LUN is added to a storage Group, it is automatically assigned the next available SCSI address
starting with address 00. Use caution here as the address that is assigned automatically is not apparent
unless you scroll over to the right in the Selected LUNs pane.
The Celerra Network Server requires specific LUN addresses for system LUNs. At the time a LUN is
added to a Storage Group, the address can be set by highlighting the LUN, clicking the Host ID field, and choosing the host ID
from the dropdown list. If a LUN was previously assigned to a Storage Group and the address must be
changed, it first must be removed from the Storage Group and re-added.
If LUN addressing is not set up in accordance with the defined rules, it is very likely that the
installation will fail. If, after the system has been in production, the LUN addressing is modified (i.e.
when adding storage to the array for increased capacity) in a way that does not comply with these
rules, the Data Movers will likely fail upon the subsequent reboot.
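As a hedged illustration of the addressing rule above, the six Control LUNs could be added to the Storage Group with their required host IDs from the Navisphere CLI as follows; the SP address, Storage Group name, and array LUN (ALU) numbers are assumptions, while the HLU values 0-5 and 16 are the point of the example.

# Control LUNs must be presented at host LUN IDs (HLU) 00-05
SPA=10.127.50.10
for HLU in 0 1 2 3 4 5; do
    navicli -h $SPA storagegroup -addhlu -gname Celerra_SG -hlu $HLU -alu $HLU
done
# First user data LUN (array LUN 20 in this example) starts at HLU 16 (10 hex)
navicli -h $SPA storagegroup -addhlu -gname Celerra_SG -hlu 16 -alu 20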

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 32

Copyright 2006 EMC Corporation. All Rights Reserved.

Connecting a Host to a Storage Group


y Connecting a host to a Storage Group provides full Read/Write access to
the LUNs within the Storage Group
y Connect all Data Mover HBAs to the Storage Group

2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 33

The Hosts tab allows hosts to be connected to, or disconnected from a Storage Group. Connecting a
host provides that host with full read/write access to the LUNs in the Storage Group.
The procedure here is similar to that used on the LUNs tab: select a host, then move it by using the
appropriate arrow. In most stand-alone host environments, only a single host is added to the Storage
Group, but because a Celerra Network Server is actually a cluster, all HBA connections for all Data
Movers are connected.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 33

Copyright 2006 EMC Corporation. All Rights Reserved.

Complete the EMC NAS Software Installation


y After zoning and configuring the back-end, type 'C' to
continue the installation
y Setup Control LUNs on the backend and create Storage Groups and Fabric Switch Zonning with the following Data Mover HBA UIDs. Please see the Installation Guidefor help.
y Data Mover 2 WWN Node/Port Names:
y FCP HBA 0: N_PORT S_ID 010d00 Node 50060160b0602f3b Port 5006016030602f3b
y FCP HBA 1: N_PORT S_ID 010e00 Node 50060160b0602f3b Port 5006016130602f3b
y Data Mover 3 WWN Node/Port Names:
y FCP HBA 0: N_PORT S_ID 010c00 Node 50060160b0602f3b Port 5006016830602f3b
y FCP HBA 1: N_PORT S_ID 010f00 Node 50060160b0602f3b Port 5006016930602f3b
y ...
y Type 'C' to continue:

2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 34

The process from the Gateway Setup Guide can now resume at Chapter 12, Step 8
Steps 9-10, Create NAS Administrator account
Step 11, Enable UNICODE
Installation completes

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 34

Copyright 2006 EMC Corporation. All Rights Reserved.

Installation Process Continues


y File systems will be created on the control LUNs and
DART will be installed
y Will be prompted for
Password of nasadmin user
Enable UNICODE?

y Data Movers will be rebooted several more times as they are configured
  Standby Data Mover configuration

y The installation process will take approximately one hour to complete
  after configuring the backend
  Will be prompted if there are problems with the back-end configuration
  and allowed to correct the problem before continuing
2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 35

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 35

Copyright 2006 EMC Corporation. All Rights Reserved.

Phase 3: Final Configuration


y Typical installation may include the following additional
installation steps
Configure the Data Movers
DM Failover
DNS and NIS domains
Virtual Data Movers
Network connections

Create users and groups.


Configure additional arrays
Implement file system shares/exports

y The Celerra Setup Wizard can be used to simplify the implementation of
  most configurations
2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 36

The third and final phase includes all of the configuration required to make the Celerra system
available to clients. The specific steps depend on which services the customer purchased. For example,
a customer may elect to have only one initial file system created, or may choose an advanced
configuration with multiple file systems, advanced networking configurations, and so on.
The scope of this phase depends on the service offering that the customer has purchased. Many of the
advanced implementations are outside the scope of this class and would typically be performed by a
Technical Solutions specialist. However, in many implementations the installer may be expected to
perform a simple implementation.
The possible steps include:
y Configure the Data Movers, including failover policies, DNS and NIS domains, and virtual Data
Movers.
y Configure the Data Mover network connections, including fail-safe networks, link aggregations,
and Ethernet channels.
y Create users and groups.
y Optionally configure additional arrays, for fabric-connected systems.
y Create or configure volumes, shares, and exports, including Usermapper, CIFS servers, and quotas.
The third phase is complete when all planned configuration steps are complete and the customer has
signed off on the installation.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 36

Copyright 2006 EMC Corporation. All Rights Reserved.

Module Summary
Planning is Critical!
y Installing a Celerra Network Server includes:
Physically connecting components
Installing Linux and the NAS code on the Control Station
Configuring and zoning the back-end
Installing and configuring DART on the Data Movers

y Many of the steps can be performed automatically using AutoConfig scripts


Manual configuration may be required in some environments
Manual configuration includes:
Creating Zoning configuration
Registering Initiator Records
Creating Raid Groups and binding LUNs
Creating Storage Group, adding LUNs, and connecting Data Mover HBAs
2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 37

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 37

Copyright 2006 EMC Corporation. All Rights Reserved.

Closing Slide

2006 EMC Corporation. All rights reserved.

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 38

Preparing, Installing, and Configuring a Fabric-Connected Gateway - 38

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra ICON
Celerra Training for Engineering

NAS Management and Support

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 1

Copyright 2006 EMC Corporation. All Rights Reserved.

Revision History
Rev Number    Course Date      Revisions
1.0           February 2006    Complete
1.1           March 2006       Updates and enhancements
1.2           May 2006         Updates and reorganization

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 2

Celerra Management and Support - 2

Copyright 2006 EMC Corporation. All Rights Reserved.

Overview of EMC NAS Software


Upon completion of this module, you will be able to:
y Identify the main software components of a Celerra
system
y Navigate the Celerra directory structure from the Control
Station
y Describe the structure of the Command Line Interface
y Describe the NAS database and how it is backed up
y Use the Celerra Manager to perform basic systems
administration

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 3

Celerra Management and Support - 3

Copyright 2006 EMC Corporation. All Rights Reserved.

Data Mover Operating System


y DART - Data Access in Real Time
Multi-thread operating system optimized for data movement
EMCs proprietary
Physically located on the backend on LUN 0

y Provides
NAS services for user/production data
NFS, CIFS, FTP, TFTP, iSCSI

Local and remote file system replication


Celerra Replicator, SnapSure, etc

y No local user interface


You cannot logon to a Data Mover
All management is perform from Control Station via:
CLI
Celerra Manager GUI
Celerra Management and Support - 4

2006 EMC Corporation. All rights reserved.

DART: Data Access in Real Time


On a Celerra system, the operating system software that runs on the Data Mover is called DART. DART is EMC's
proprietary UNIX-based OS. It is a real-time, multithreaded operating system optimized for file access, while providing
service for standard protocols.
The Data Mover OS is physically located on LUN 00 on the backend storage array.
DART services and features
The DART software provides the NAS services for file access to user/production data via the following services/protocols:
NFS
CIFS
FTP and TFTP
iSCSI
In addition to the standard NAS services, the EMC NAS software also offers several features that enhance file system
functions. These features include Celerra Replicator, SnapSure, and others.
No Data Mover Local Interface
DART has no local user interface (CLI or GUI); you cannot log on to a Data Mover. All management of a Data Mover is
performed via the Celerra Control Station (CLI or Celerra Manager GUI) and the EMC NAS software package. The EMC
NAS software on the Control Station includes a special command set designed to administer a Data Mover across the
Celerra's private/internal network.

Celerra Management and Support - 4

Copyright 2006 EMC Corporation. All Rights Reserved.

Control Station Operating System


y NS Control Station OS is physically located on its local
hard drive
EMC-modified Redhat Linux (7.2)

y Runs EMC NAS software for managing/monitoring Data


Movers and storage resources
y Management interfaces
CLI (via SSH or Serial console)
Celerra Manager GUI via HTTPS

y Provides no user/production services


The Control Station is not in the data path
A down Control Station should not interrupt data availability
2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 5

All Celerra NS system Control Stations include a local hard drive where the Linux operating system is
installed (EMC-modified Redhat).
In addition to the Linux operating system, the Control Station also runs EMC NAS software for
managing and monitoring Data Movers and the backend.
Administrators can log on to the Control Station for management tasks using the CLI (via SSH) or the
Celerra Manager GUI via HTTPS and a supported web browser.
The Control Station is not in the data path and provides no services for user/production data. All file
serving functions are provided by the Data Mover. If the Control Station is powered down, although
there would be no management functionality, monitoring, or DM failover, there should be no
interruption in data availability.

Celerra Management and Support - 5

Copyright 2006 EMC Corporation. All Rights Reserved.

Control Station NAS Services


y Monitors Data Movers and facilitates Data Mover failover
y Management and configuration of the following system
resources:
Back-end Storage
Disks, Volume, & File system

y Management and configuration of the following Data


Mover resources:
Start, monitor, & stop Services
Network configuration
File System mounts and exports

y Provides Web services for Celerra Manager and CLI


interface to for configuration and management of Data
Movers
2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 6

Control Station Management and Monitoring Software


In addition to the Linux operating system, the Control Station runs various services that are part of the EMC
NAS software. This software includes the ability to manage and monitor the Data Movers (and, to a limited
degree, the backend). Some of the functionality provided by the Control Station is the implementation and
management of:
Volumes and file systems
Network configuration and services (e.g. NFS, CIFS, iSCSI, etc.)
Celerra features, such as Replicator and SnapSure
Web services for Celerra Manager
Initiating Data Mover failover

Celerra Management and Support - 6

Copyright 2006 EMC Corporation. All Rights Reserved.

Control Station Back-end Communication


y No direct read/write access to LUNs on back-end
Control Station does not have Fibre Channel HBAs
NS series Control Station accesses the Control LUNs on the backend through Data Mover using Network Block Service (NBS)
Legacy CFS/CNS Control stations had direct access to back-end

y For configuration and management, Control Station


communicates to back-end over IP
Navisphere CLI
Communicates over:
Public LAN for Gateway configurations
Private LAN Integrated configurations

Can launch Navisphere Manager from Celerra Manager GUI


Symmetrix
symcli
2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 7

Communication path to backend


The executable commands and scripts used on the Control Station are physically located on the Control Station's
local hard drive. However, the NS Control Station has no connection to the backend. Therefore, in order for
these commands and scripts to function, the Control Station must access the backend through a Data
Mover (typically server_2 if it is online). This communication path is provided by the Celerra internal network
and a service called NBS (Network Block Service).
Communication with CLARiiON SPA/SPB
For CLARiiON arrays, management traffic between the Control Station and the array uses the public LAN connection for
NS Gateway systems or the private LAN for Integrated systems. NBS is used only for block data.
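As a quick sanity check of this IP management path, a hedged example using the bundled Navisphere CLI (assuming the SP host names spa and spb are resolvable from /etc/hosts, as described later in this module):

$ /nas/sbin/navicli -h spa getagent     (queries SP A over the IP management path)
$ /nas/sbin/navicli -h spb getagent     (queries SP B over the IP management path)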

Celerra Management and Support - 7

Copyright 2006 EMC Corporation. All Rights Reserved.

Control Station Local Directory Structure


y Control Station software includes:
Linux software installed locally
EMC NAS software installed locally
EMC NAS configuration information and software installed on the
back-end
Accessed using Network Block Services (NBS)

y Software on internal disk is located in hdx partitions


Example: /dev/hda3

y Software accessed through NBS is located in ndx


partitions:
Example: /dev/ndx1

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 8

The directory structure on the Celerra NS Control Station includes the following:
1. Linux software that is installed to the Control Stations internal hard drive
2. EMC NAS software that is installed to the Control Stations internal hard drive
3. EMC NAS software that is installed to the Celerra System LUNs on the storage subsystem,
accessed by the Control Station via NBS.

Celerra Management and Support - 8

Copyright 2006 EMC Corporation. All Rights Reserved.

Verify Control Station Connection to Storage Array


y Standard LINUX df command:
$ df
Filesystem    1k-blocks     Used  Available  Use%  Mounted on
/dev/hda3       2063536   716344    1242368   37%  /
/dev/hda1         31079     2674      26801   10%  /boot
none             256692        0     256692    0%  /dev/shm
/dev/nde1       1818352   548636    1177344   32%  /nbsnas
/dev/ndf1       1818352    60584    1665396    4%  /nas/var
/dev/nda1        136368    32108     104260   24%  /nas/dos
/dev/hda5       2063504   468752    1489932   24%  /nas

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 9

During the installation process, the directory structure on the Control Station is set up. Use the standard
LINUX df command to view the Control Station software directory structure.
The file systems prefaced by /dev/nd* are accessed on the back-end storage subsystem via the Control
Station's IP connection to the Data Movers. The Data Movers run the Network Block Service (NBS).

Celerra Management and Support - 9

Copyright 2006 EMC Corporation. All Rights Reserved.

Control Stations Linux


y The Control Stations Linux locations seen from df
command:
/dev/hda3

/dev/hda1

/boot

y Key locations that are stored on local drive


/etc

Various config files (passwd, hosts, etc)

/home

User profiles for Celerra administrators

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 10

The file systems on devices beginning with hd are on the Control Station's local hard drive. Of these,
hda3 (mounted on /) and hda1 (mounted on /boot) are part of the Linux installation.
Although this is generally a typical Linux installation, there are some locations that hold important
Celerra-specific data.
/etc holds several Linux configuration files (such as passwd, group, hosts, and so on) that hold data
entries that are key to the function of the Celerra software. For example, the /etc/hosts file holds host
name resolution information that is used by the Control Station to connect with Data Movers and
CLARiiON SPs.
/home holds user profiles on a Linux system. The profile for the NAS administrator account, typically
nasadmin, is stored in /home/nasadmin. This location also holds some backup copies of the NAS
database.
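A simple way to spot-check these entries from the Control Station (the exact host names and addresses vary from system to system; this is only an illustration):

$ grep -E 'server_2|server_3|spa|spb' /etc/hosts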

Celerra Management and Support - 10

Copyright 2006 EMC Corporation. All Rights Reserved.

Control Stations Local EMC NAS Software


y The Control Stations EMC NAS location seen from df
command:
/dev/hda5

/nas

y Primary access point for the Control Station to the Celerra


system data
Commands, scripts and utilities
Links to backend
Etc.

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 11

The df command will also report the hda5 file system, which is mounted to /nas. Although this file
system is also on the Control Station's internal hard drive, it is not a part of the Linux installation.
Rather, it is part of the Celerra EMC NAS software installation.
/nas contains items that are very important to the functionality and maintenance of the Celerra. These
items include various commands, scripts, and utilities, as well as important symbolic links to locations
stored on the storage subsystem. These links are accessible through a Data Mover via NBS.
/nas will be discussed more deeply later in this module.

Celerra Management and Support - 11

Copyright 2006 EMC Corporation. All Rights Reserved.

Key Location: /nas

y Located on Control Station hard drive


Contains commands, utilities and important system
configuration information
/nas contains links to objects locations in /nbsnas on the
backend

y Key locations that are stored on local drive

/nas/bin
Common Celerra commands, etc
/nas/sbin
Advanced Celerra commands, etc
/nas/tools
Various config. and support tools
/nas/http
Celerra Manager
/nas/jserver
Celerra Manager
/nas/log
Data Mover & Celerra system logs
/nas/server/slot# Mount point for DM root file systems

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 12

The /nas directory is the primary access point for the Control Station to the Celerra system data.
Although /nas is mounted on a partition on the Control Station hard drive, the majority of objects found in
/nas are actually symbolic links to a location within /nbsnas, which is located on the backend.
This slide displays key locations within /nas that are physically located on the NS Control Station's local
hard drive.
The tools above are essential to supporting and troubleshooting the Celerra NS. Since these tools are
physically located on the Control Station hard drive, they are still accessible even if no Data Movers
are online and the Control Station has no connection to the backend.
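To see which /nas entries are local and which are symbolic links into /nbsnas, a standard Linux listing is enough (illustrative only; the exact set of links varies by NAS release):

$ ls -l /nas | grep '^l'     (shows only the symbolic-link entries and their targets)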

Celerra Management and Support - 12

Copyright 2006 EMC Corporation. All Rights Reserved.

Network Block Service (NBS)


y EMC Proprietary client/server service
Server: Data Mover
Storage Subsystem

Client: Control Station

y Utilizes the iSCSI protocol


y Provides the Control Station with
access to:

Fibre Channel

EMC NAS configuration data on the control


LUNs
User data LUNs

iSCSI, TCP/IP

y Requires at least one Data Mover be


online
2006 EMC Corporation. All rights reserved.

Data Mover (NBS server)

Control Station (NBS client)

Celerra Management and Support - 13

The network block service (NBS) enables the Celerra NS Control Station to access LUNs on an array.
For example, the Control Station uses NBS to read and write its database information stored in the
control LUNs, and to install NAS software on the array.
NBS data is sent from the Control Station to a Data Mover over the private LAN connection. The DM
then sends the data to the array over the Fibre Channel connection.
An NBS client daemon on the Control Station communicates with the NBS server on the Data Movers.
The Control Station must have at least one Data Mover running normally and accessible over the local
network to run any administrative commands. Celerra administration is not possible without the NBS
connection.
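A quick way to confirm from the Control Station that the NBS-backed partitions are available (device and mount point names as shown earlier in this module):

$ df | grep /dev/nd          (the nd* devices are served by a Data Mover over NBS)
$ mount | grep nbsnas        (confirms the /nbsnas mount is present)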

Celerra Management and Support - 13

Copyright 2006 EMC Corporation. All Rights Reserved.

NBS-Accessed EMC NAS Software Installed to the Backend


y The core EMC NAS stored on backend seen from df command
  /dev/nde1   /nbsnas
  /dev/ndf1   /nas/var
  /dev/nda1   /nas/dos

y /nbsnas
Located on the backend, LUN 04
Contains EMC NAS database, NASDB
Also contains mountpoints for other partitions

y /nas/dos (symbolic link to /nbsnas/dos)


Located on the backend, LUN 00
Contains:
DART operating system
Individual Data Mover configuration files

y /nas/var
Located on the backend, LUN 05
Contains:
NASDB backups
Dump files
Log files
2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 14

Virtually all critical software is stored on the backend in the six Celerra Control volumes (or System
LUNs). The NS Control Station accesses all of this data via NBS.
If NBS is not functioning, then the Control Station cannot perform EMC NAS operations.
The /nbsnas directory is mounted to a physical location on the backend, LUN 04. This is the location
of the EMC NAS Configuration Database (NASDB).
/nas/dos on the Control Station is a symbolic link to /nbsnas/dos, which is physically located on the
backend storage array at LUN 00. This is where the Data Movers' operating system, DART, is located,
as well as configuration files for individual Data Movers.

Celerra Management and Support - 14

Copyright 2006 EMC Corporation. All Rights Reserved.

Contents of Celerra Control Volumes


LUN   Size   Contents
00    11GB   Paths: /nbsnas/dos   Links: /nas/dos
             Contents: DART, individual DM configuration files
01    11GB   Contents: Data Mover log files
02    2GB    Reserved (not used on NS-series)
             Contents: Linux on Control Stations (CS0) with no local HDD
03    2GB    Reserved (not used on NS-series)
             Contents: Linux on Control Stations (CS1) with no local HDD
04    2GB    Paths: /nbsnas
             Contents: NAS configuration database (NASDB)
05    2GB    Paths: /nbsnas/var   Links: /nas/var
             Contents: NASDB backups, dump file, log files, etc.

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 15

The table above displays all of the Celerra System LUNs, along with their size and contents. Please note
that LUNs 02 and 03 are not currently used for the Celerra NS series. Earlier Celerra models, in which
the Control Station had no internal hard drive, would use these LUNs to hold the Linux installation.

Celerra Management and Support - 15

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra CLI Command


y Data Movers managed using server_ commands
y Global functions managed via nas_ commands
y File system features managed with fs_ commands
y Other miscellaneous advanced/support commands
y Native Linux commands
y Accessed using Telnet, Putty (or other secure client), or
from the Celerra Manager GUI

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 16

Celerra Management and Support - 16

Copyright 2006 EMC Corporation. All Rights Reserved.

server_ commands
y Commands for managing Data Mover configurations and
status
y Issued to a single Data Mover or all Data Movers
server_date server_2
server_date ALL

y Examples:
server_version displays EMC NAS version on Data Mover
server_sysconfig manages Data Mover hardware components,
e.g. physical network devices
server_ifconfig manages Data Mover logical network interfaces
server_date manages Data Mover date and time

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 17

The server_ commands are issued at the Control Station to manage Data Movers. The server_
prefix is generally followed by a common UNIX/Linux command. Examples of this are on the slide
above.
The server_ command is always followed by the name of the Data Mover to which you wish to
direct the command (such as server_date server_2) or ALL to issue the command to all
Data Movers in the system (server_date ALL).
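A few illustrative invocations (command names as listed on the slide above; output omitted):

$ server_version server_2          (directs the command at a single Data Mover)
$ server_version ALL               (directs the same command at every Data Mover)
$ server_ifconfig server_2 -all    (lists all logical network interfaces on server_2)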

Celerra Management and Support - 17

Copyright 2006 EMC Corporation. All Rights Reserved.

nas_ commands
y Commands for managing global configurations and status
i.e. not specific to any Data Mover

y Examples:
nas_version displays EMC NAS version on Control Station
nas_fs manages Celerra file systems
nas_storage manages backend storage
nas_server displays and manages Data Mover server table

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 18

Like the server_ commands, the nas_ command prefix is always followed by the command itself.
The slide above show a few examples of nas_ commands. Notice that these command functions are
not related to a specific Data Mover (with some exceptions), but rather at the system globally.
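A few illustrative invocations (output omitted; -list is the common way to enumerate objects with these commands):

$ nas_version          (version of the NAS package on the Control Station)
$ nas_fs -list         (lists the file systems known to the system)
$ nas_storage -list    (lists the attached backend storage systems)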

Celerra Management and Support - 18

Copyright 2006 EMC Corporation. All Rights Reserved.

fs_ commands
y Commands for managing file system features
y Examples:
fs_ckpt manages SnapSure checkpoints
fs_timefinder manages TimeFinder/FS
fs_copy manages file system copies
fs_replicate manages file system replication

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 19

Celerra Management and Support - 19

Copyright 2006 EMC Corporation. All Rights Reserved.

Manual Pages
y Most Celerra management commands have Unix-like
manual pages
Command synopsis
Description
Usage examples

y Example: man server_sysconfig


server_sysconfig
Manages the hardware configuration for the specified Data Mover(s).
SYNOPSIS
server_sysconfig { <movername> | ALL }
-Platform
| -pci [<device> [-option <options>]]
| -virtual -delete [-Force] <device>
| -virtual -info <device>
| -virtual [-name <device> -create <device_class> -option <option,..>]
DESCRIPTION
server_sysconfig displays and modifies the hardware configuration of the
Data Movers.
...
2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 20

Celerra Management and Support - 20

Copyright 2006 EMC Corporation. All Rights Reserved.

CLI Example: Listing Data Movers


y Use nas_server command to list Data Movers
$ nas_server -list
id   type  acl    slot  groupID  state  name
1    1     1000   2              0      server_2
2    4     1000   3              0      server_3

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 21

Listing Data Movers


The nas_server -list command will give a brief summary of the Data Movers known by the
system. Some key areas of the output are:
y id: The ID of the Data Mover is a unique value that is assigned based on the order in which the
Data Mover was discovered.
y type: The type value indicates if the server is a production Data Mover (1) or a standby Data Mover
(4).
y slot: The slot indicates the physical location of the Data Mover.
Why are these values useful?
In the event that a production Data Mover should fail over to its standby, the names of the Data Movers
will change. For example, if the Data Mover in slot 2 failed, the Data Mover in slot 3 would take over
its identity as server_2 and the Data Mover in slot 2 would be renamed to server_2.faulted.server_3.
It could become confusing which physical server was which. By associating the name and type with
the slot number, you can be sure which server is which.

Celerra Management and Support - 21

Copyright 2006 EMC Corporation. All Rights Reserved.

List the Celerra Disk Volumes

y Use nas_disk command to list disk volumes


$ nas_disk -list
id  inuse  sizeMB  storageID-devID      type   name        servers
1   y      11263   WRE00022100904-0008  CLSTD  root_disk   2,1
2   y      11263   WRE00022100904-0009  CLSTD  root_ldisk  2,1
3   y      2047    WRE00022100904-000A  CLSTD  d3          2,1
4   y      2047    WRE00022100904-000B  CLSTD  d4          2,1
5   y      2047    WRE00022100904-000C  CLSTD  d5          2,1
6   y      2047    WRE00022100904-000D  CLSTD  d6          2,1

y The disk volumes correspond to the 6 System LUNs
y There are no disk volumes for user data in this example
2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 22

Listing disk volumes


Each LUN in the storage array is seen by the Celerra components as a disk volume or disk. The
nas_disk -list command will give a brief summary of the disk volumes known by the system.
Some key areas of the output:
y id: the ID of the disk (assigned automatically).
y inuse: use by a file system.
y sizeMB: the total size of disk in megabytes.
y storageID-devID: the ID (serial number) of the storage system (CLARiiON or Symmetrix) and
device number (Array Logical Unit in CLARiiON, Volume ID in Symmetrix) associated with the
disk.
y type: the type of disk. Specifically, the type of LUN or RAID configuration of the disk volume
within the storage array. Disk types are STD, BCV, CLSTD, R1BCV,R2BCV, R1STD, R2STD,
CLATA.
y name: the name of the disk. Two d's (dd) in a disk name indicate a remote disk.
y servers: lists the Data Movers that have access to this disk volume.
In the example above, disk ID 1 is in use. It is 11GB in size. It is ALU 0008 within CLARiiON serial
number WRE00022100904. It is known by the name root_disk. It is accessible by both Data Movers
server_2 and server_3. This corresponds to the first of the 6 Celerra System LUNs, HLU 00.

Celerra Management and Support - 22

Copyright 2006 EMC Corporation. All Rights Reserved.

CLI Examples: Verify the Software Version


y Use nas_version command to display version of NAS
package on Control Station
$ nas_version
5.4.17-5

y Use server_version command to display version of


NAS package on the Data Movers
$ server_version ALL
server_2 : Product: EMC Celerra File Server
Version: T5.4.17.5
server_3 : Product: EMC Celerra File Server
Version: T5.4.17.5

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 23

To verify the software version on the Control Station and the Data Movers, use the
nas_version and server_version commands.
The recommended version of software changes constantly. The CCA process will dictate which
version should be installed.
In most cases, the version of the software will be the same for both the Control Station and the Data
Movers; however, during upgrades it is possible to defer the Data Mover reboot, and thus the Data
Mover and the Control Station may be running different versions.

Celerra Management and Support - 23

Copyright 2006 EMC Corporation. All Rights Reserved.

Verify Data Mover to Root File Systems


y Verify that the Data Movers have accessed and mounted their own
root file system and the shared root file system
$ server_df ALL
server_2 :
Filesystem          kbytes    used    avail  capacity  Mounted on
root_fs_common       13624     288    13336        2%  /.etc_common
root_fs_2           114592     624   113968        1%  /
server_3 :
Filesystem          kbytes    used    avail  capacity  Mounted on
root_fs_common       13624     288    13336        2%  /.etc_common
root_fs_3           114592     624   113968        1%  /

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 24

The server_df command displays the mounted file systems and their utilization. A newly installed
Data Mover should have two file systems, its own root file system, and a shared file system.

Celerra Management and Support - 24

Copyright 2006 EMC Corporation. All Rights Reserved.

Miscellaneous and Advanced Commands


y Commands outside the realm of everyday Celerra
management
Displaying Data Mover boot status
Support tools
Etc

y Some require root authority


Best Practice: Login as nasadmin and su to root when necessary

y Examples:
/nas/sbin/getreason displays Data Mover and Control Station
boot status
/nas/sbin/navicli manages CLARiiON
/nas/sbin/setup_slot manages NAS type, version, etc
2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 25

Celerra Management and Support - 25

Copyright 2006 EMC Corporation. All Rights Reserved.

Data Movers Status


y Use /nas/sbin/getreason command to list Data
Movers status
$ /nas/sbin/getreason
10 - slot_0 primary control station
5 - slot_2 contacted
5 - slot_3 contacted

y Healthy reason codes


Data Movers: 5
Primary Control Station: 10
Secondary Control Station: 11

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 26

Celerra reason codes identify the current status of a Data Mover or Control Station. Possible reason
codes are listed below.
y 0 - Reset (or unknown)
y DOS boot phase
y DART is loaded on the Data Mover
y DART is ready on the Data Mover
y 5 - DART is in contact with the Control Station box monitor
y Control Station is ready, but is not running NAS service
y DART is in panic state
y DART reboot is pending or halted state
y 10 - Primary Control Station reason code
y 11 - Secondary Control Station reason code
y 13 - DART panicked and completed memory dump (single Data Mover configurations only)
y 14 - Data Mover is flashing firmware
y 15 - DART is flashing BIOS and/or POST firmware; the DM cannot be reset
y 17 - NSX Data Mover hardware fault detected
y 18 - NSX DM memory test failure; BIOS detected a memory error
y 19 - NSX DM POST test failure; general POST error
y 20 - NSX DM POST NVRAM test failure; invalid NVRAM content (checksum, WWN, etc.)
y 21 - NSX DM POST invalid peer DM type
y 22 - NSX DM POST invalid DM part number
y 23 - NSX DM POST Fibre Channel test failure; error in the blade Fibre connection (controller, Fibre discovery, etc.)
y 24 - NSX DM POST network test failure; error in the Ethernet controller
y 25 - NSX DM T2NET error; unable to get the blade reason code due to management switch problems.
  This reason code can be set on the DM for any of the following:
  - Data Mover enclosure-ID was not found at boot time
  - Data Mover's local network interface MAC address is different from the MAC address in the configuration file
  - Data Mover's serial number is different from the serial number in the configuration file
  - Data Mover was PXE booted with install configuration
y Error - Failed to get reason code; the DM or CS may not be present in the slot, or the NS DM might be powered off

Celerra Management and Support - 26

Copyright 2006 EMC Corporation. All Rights Reserved.

Copying Files to/from a Data Mover


y There is no way to edit a file directly on the Data Mover
y Must copy it to the Control Station, edit it, and copy it
back to the Data Mover using the server_file
command
y Syntax:
server_file {<movername>|ALL} {-get|-put}
<src_file> <dst_file>
y Example:
server_file server_2 -get passwd mypasswd
server_file server_2 -put mypasswd passwd
2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 27

There is no way to edit a file directly on the Data Mover. To edit a file, you must copy it to the Control
Station, edit it, and copy it back to the Data Mover using the server_file command.
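For example, to edit the Data Mover's passwd file from the Control Station (file names here are illustrative only):

$ server_file server_2 -get passwd passwd.server_2    (copy the file from the Data Mover)
$ vi passwd.server_2                                   (edit the local copy on the Control Station)
$ server_file server_2 -put passwd.server_2 passwd    (copy the edited file back to the Data Mover)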

Celerra Management and Support - 27

Copyright 2006 EMC Corporation. All Rights Reserved.

Rebooting a Data Mover


y If necessary, a Data Mover can be rebooted using the
server_cpu command
y Syntax:
server_cpu {<movername>|ALL} -reboot
[-monitor] <time>
y Example:
server_cpu server_2 -reboot -monitor now
server_2 : reboot in progress
0.0.0.0.0.0.0.0.0.0.3.3.3.3.3.4

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 28

An alternative command for rebooting a Data Mover is the /nas/sbin/t2reset command. This
command allows you to power-off, power-on, or reboot specific slots in the Celerra. t2reset is an
internal command and thus is not documented for external use, but it often works where the
server_cpu command might fail.

Celerra Management and Support - 28

Copyright 2006 EMC Corporation. All Rights Reserved.

Parameter File
y Specific system attributes are set by default on the Celerra
y Can establish or override attributes by editing parameter file
System wide: /nas/site/slot_parm
Specific Data Mover: /nas/server/slot_x/param

y May also list or modify using the server_param command

y Example:
server_param server_2 -facility tcp -list
server_param server_2 -facility tcp -modify
maxStreams -value=32768
y Most changes require rebooting Data Mover
y Reference: Celerra Network Server Parameter Guide
2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 29

Celerra Management and Support - 29

Copyright 2006 EMC Corporation. All Rights Reserved.

Native Linux Commands


y The Control Station also runs the common Linux
command set
y Examples:
ifconfig manages Control Station network interfaces
df displays the available disk space on the Control Station
partitions
ping sends ICMP echo request from Control Station

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 30

In addition to the Celerra EMC NAS commands, the Control Station also runs most Linux commands.
There are a few Linux commands that have been removed from the Control Station's EMC-modified
RedHat to save space.

Celerra Management and Support - 30

Copyright 2006 EMC Corporation. All Rights Reserved.

Verifying Connectivity
y Pinging Data Movers over private networks
$ ping -c 1 server_2
PING server_2 (192.168.1.2) from 192.168.1.100 : 56(84) bytes of data.
64 bytes from server_2 (192.168.1.2): icmp_seq=0 ttl=255 time=117 usec

$ ping -c 1 server_2b
PING server_2b (192.168.2.2) from 192.168.2.100 : 56(84) bytes of data.
64 bytes from server_2b (192.168.2.2): icmp_seq=0 ttl=255 time=125 usec

y Ping CLARiiON Storage Processors

$ ping -c 1 <IP address of SPA>
PING A_WRE00022100904 (10.127.23.68) from 10.127.23.90 : 56(84) bytes of data.
64 bytes from A_WRE00022100904 (10.127.23.68): icmp_seq=0 ttl=128 time=231 usec

$ ping -c 1 <IP address of SPB>
PING B_WRE00022100904 (10.127.23.69) from 10.127.23.90 : 56(84) bytes of data.
64 bytes from B_WRE00022100904 (10.127.23.69): icmp_seq=0 ttl=128 time=192 usec

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 31

The standard UNIX ping command can be used to verify the Control Station's management path to the
Data Movers over the private LAN. Using hostnames rather than IP addresses verifies the validity of
the /etc/hosts file by pinging the Data Movers by name (e.g. server_2). If you ping the hostname
server_2b, the Control Station will ping Data Mover 2 using the backup private network.
If you have a CLARiiON array, you can also verify that the Control Station has network connectivity
to SPA and SPB. Their IP addresses should be recorded in the /etc/hosts file as well and should be
reachable by the names spa and spb.
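If the SP entries are present in /etc/hosts, the SPs can also be pinged by name (the names shown are an assumption based on the note above; they depend on how /etc/hosts was populated during installation):

$ ping -c 1 spa
$ ping -c 1 spb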

Celerra Management and Support - 31

Copyright 2006 EMC Corporation. All Rights Reserved.

Displaying Network Interfaces on Control Station


y Linux command ifconfig
y Example:
$ /sbin/ifconfig
eth0    Link encap:Ethernet  HWaddr 00:02:B3:AF:3D:12
        inet addr:192.168.1.100  Bcast:192.168.1.255  Mask:255.255.255.0
        UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
        RX packets:40619106 errors:0 dropped:0 overruns:0 frame:0
        TX packets:68904994 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:100
        RX bytes:287656163 (274.3 Mb)  TX bytes:1790777903 (1707.8 Mb)

Output is abbreviated

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 32

Celerra Management and Support - 32

Copyright 2006 EMC Corporation. All Rights Reserved.

NAS Database (NASDB)


y All critical Celerra data and configuration files
Also includes standard Linux configuration files if they have been
modified by Celerra
e.g. /etc/hosts has several entries required by Celerra and is part of
NASDB

y NAS Database is automatically backed up


Scheduled at 1 minute past every hour
Location of backups:
Full tar backups in:
/home/nasadmin
/nas/var/backup
/nas/var is link to /nbsnas/var

Incremental backups in:


/nas/site/SCCS
/nas/server/slot_*/SCCS

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 33

The EMC Celerra NAS database is also known as the NASDB. The NASDB represents all critical
Celerra-specific data and configurations. This is important to remember when supporting, maintaining,
or recovering the Celerra system. The Linux configuration is not part of the NASDB.
The NASDB is backed up automatically every hour, at one minute past the hour. The NASDB backups
can be used in supporting, maintaining, or recovering the Celerra to restore Celerra-specific
configurations. General Linux configuration information is not a part of NASDB backup.
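To confirm that the hourly backups are being produced, list the backup locations (exact file names vary by NAS release; this is only a spot check):

$ ls -lt /nas/var/backup | head     (most recent full NASDB backups on the backend)
$ ls -l /home/nasadmin              (local copies kept in the administrator's home directory)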

Celerra Management and Support - 33

Copyright 2006 EMC Corporation. All Rights Reserved.

NAS Log files


Reviewing log files is the first step in problem resolution

Directory /nas/log contains the majority of the system logs:
  sys_log
  cmd_log
  cmd_log.err
  nas_log.al
  nas_log.al.err
  osmlog

Run the following command to gather the Data Mover server log:
  server_log server_x -a -s > /nas/log/server_x.log
  The command gathers the server_x logs and outputs them to a file called
  server_x.log in the /nas/log directory

On NSxxx systems, the /var/log directory contains logs that show the
Data Mover boot process. They are named ctapttysx.log, where x is a
digit 4, 5, 6 or 7. The file ctapttys4.log is for server_2, ctapttys5.log is
for server_3, and so forth

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 34

If the EMC NAS upgrade operation fails, record the step of the operation that failed. There are logs
present on the system that can help in isolating and resolving the upgrade problem. The directory
/nas/log holds most of the system logs. If problem escalation is required, these logs will be required.
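A few illustrative commands for reviewing the logs (the server_log flags are those shown on the slide above; output omitted):

$ tail -20 /nas/log/sys_log                            (recent system log entries)
$ tail -20 /nas/log/cmd_log                            (recent Control Station command history)
$ server_log server_2 -a -s > /nas/log/server_2.log    (gather the server_2 log to a file)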

Celerra Management and Support - 34

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Manager

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 35

Celerra Manager is a web-based GUI used to manage a Celerra remotely. This slide shows a typical
Celerra Manager screen.
The list on the left part of the window shows the Celerra features that can be managed using Celerra
Manager. Some of these features will be described later in this module. The navigation pane on the
left is used to select the file server and feature to manage; the task pane on the right is used to manage
the feature.

Celerra Management and Support - 35

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Manager Overview


y Celerra Manager combines the functionality of the three
previous graphical Celerra management products:
Celerra Web Manager
Celerra Native Manager
Celerra Monitor

y EMC ControlCenter look and feel


y Basic and Advanced Editions
y System requirements on client machine running Celerra
Manager are:
Java Runtime Environment v1.4.2+
Internet Explorer 6.0+, or Netscape 6.2.3+
2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 36

Celerra Manager consolidates the functionality of the three previous Celerra Management products
including Web Manager, Celerra Native Manager, and Celerra Monitor.
You can purchase Celerra Manager in either basic or advanced editions. The differences between the
two will be addressed on the following slide.
To run Celerra Manager on a client machine, EMC recommends the use of Java Runtime Environment
(JRE) v1.4.2 or higher. JRE v1.4.0 or 1.4.1 can run Celerra Manager, but you may see known
performance issues with these earlier revisions. Either the Internet Explorer or Netscape browser can be
used. EMC recommends Internet Explorer 6.0 or higher, or Netscape 6.2.3 or higher. Netscape 6.2.2
can run Celerra Manager, but you may experience problems with this revision because of Netscape's
handling of some of the Java code used in the Manager.

Now, with version 5.4, when Celerra Manager is started it will detect whether you have the right version of
Java; if not, it will direct you to the download section of the Java home website from Sun Microsystems.

Celerra Management and Support - 36

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Manager Editions

Basic Edition
y Basic File system Configuration features
y Checkpoint scheduling
y Usermapper GUI support
y Tree quota management
y Wizards
  CIFS setup
  File system
  Celerra setup
  Network

Advanced Edition
y Multiple Celerra support
y Volumes feature
y Data Migration feature
y Tools feature
  Launch SSH shell
  Launch Celerra Monitor
  Launch Navisphere
y Additional performance statistics
y Manual Volume Management part of the Wizards feature
y Additional Notification tabs

Celerra Management and Support - 37

2006 EMC Corporation. All rights reserved.

Celerra Manager comes in a Basic edition (the default version that comes with the Celerra) and an Advanced edition that can be
purchased separately. This chart shows the features that are included in both the Basic and Advanced editions.
Basic Edition: (Note: Advanced Edition detail shown on a subsequent slide)
Basic file system configuration features
SnapSure Checkpoints The following SnapSure Checkpoint tasks are now supported:
Delete or refresh any checkpoint, not just the oldest
Delete an individual scheduled checkpoint instead of only the entire schedule
Delete a schedule by modifying a scheduled checkpoint to Never recur
Usermapper Usermapper can now be managed in Celerra Manager. The Usermapper list screen shows an overview of settings for the
system. CIFS Usermapper properties can also be displayed for each Data Mover.
Tree Quota Management This feature allows you to access tree quota configuration and status information.
Wizards The following wizards are included in both the basic and advanced editions of Celerra Manager.
Network Wizards
y Set up services
y Create an interface
y Create a device
y Create a route
Create a file system
Set up Celerra
CIFS Wizards
y Set up CIFS services
y Create a CIFS server
y Create a CIFS share

Celerra Management and Support - 37

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Manager Basic Edition


y Supports the most common tasks
Network configuration
Hardware configuration
Management:
Data Movers
file systems
Shares
Checkpoints

Tree quotas
VTLUs
Status
Utilization

y Wizards
y Integrated help
2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 38

Celerra Manager uses a dual-frame approach. The left-hand frame contains an expandable tree view of
administration. The right-hand frame contains the system health, links to on-line help, and the data
output and form inputs for the selected administration including:
y Network - Configuration of network settings including DNS, NIS, WINS, link aggregations, and
network identity (IP addresses, subnet masks, VLAN ID).
y Hardware - Tools required to manage and inventory the physical hardware in the system. This
includes operations to configure shelves of disks when the back-end storage array is CLARiiON,
managing global spares, and upgrades (disk, bios, firmware, software).
y Data Mover - Management of CIFS shares, NFS exports, and User Mapping. Other functions
include reboot, shutdown, number of reboots, date/time and NTP configuration, DM name, DM
type, and character encoding.
y File Systems and Shares - The tools required to list, create, modify, expand, check, and delete file
systems and their related shares.
y Checkpoints - Includes screens to list, create, modify, refresh, and delete SnapSure checkpoints. It
also provides a way to restore file system to one of its checkpoints.
y Status - Monitors the status of the Celerra, including uptime, software versions, release notice link,
network statistics, event logs, and hardware status (any hardware components that are in a
degraded state).
y Tree quotas - Screens to set and report status on both hard and soft storage quotas.
y VTLUs - Tools to create, configure, and manage virtual tape library units.
y Utilization - Monitors the CPU and memory utilization for the Data Movers.

Celerra Management and Support - 38

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Manager Advanced Edition


y Additional functionality
Multiple NS/CNS
support
Integrated monitoring
capability
Manual Volume
Management
Celerra Data Migration
Service

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 39

This slide shows an example of Celerra Manager, Advanced Edition. This interface is used to create
an iSCSI Target.
To enable Advanced Edition:
1. From the Celerra Home screen, select the License tab
2. Select the Celerra Manager Advanced Edition License checkbox and apply

Celerra Management and Support - 39

Copyright 2006 EMC Corporation. All Rights Reserved.

Status Monitor

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 40

Status Monitor:
Small, standalone version of navigation tree. This is launched by right clicking Celerra in the large
Celerra Manager window. It is the same as the navigation tree, except Celerra nodes can not be
expanded. It is used to monitor Celerra(s) without keeping the large Celerra Manager window open.
Using Status Monitor:
y A Celerra's status starts blinking, meaning a new alert or hardware issue has occurred.
y By left-clicking on the Celerra node, Celerra Manager is launched to that Celerra's status page in a
new browser window.
y The admin can now inspect the status page and take action on any new items.

Celerra Management and Support - 40

Copyright 2006 EMC Corporation. All Rights Reserved.

Monitoring Data Movers - Performance


y Utilization rates updated
according to polling
options set.
y Open the Statistics tab to
investigate further.

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 41

Checking Data Mover performance:


A number of various conditions can degrade the performance of your Data Movers. These conditions
include:
y A high percentage of CPU usage or memory usage
y A high percentage of storage space utilization
y A large amount of throughput on the Data Movers
Celerra Manager provides various tools for monitoring your Data Movers for these conditions.
Listing Data Movers' CPU and memory usage:
The Data Mover page in Celerra Manager displays all Data Movers individually with small live graphs
showing the CPU and memory usage. Here you can compare Data Mover performance to help you
determine how workload can be reallocated. These graphs capture utilization rates that are available at
the moment the page is initially opened and are updated according to the polling options you have set.
If this page shows high utilization rates, open the Statistics tab to investigate further, as discussed
below.

Celerra Management and Support - 41

Copyright 2006 EMC Corporation. All Rights Reserved.

Monitoring File System Space

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 42

The File Systems page in Celerra Manager displays all file systems individually with small live graphs
showing space usage. You can use these live graphs to compare file systems to help determine if data
should be reallocated. This example shows two file systems that have nearly reached their storage
capacity and many with no utilized storage. These graphs capture utilization at the moment the page is
initially opened and will update according to the polling options you have set.

Celerra Management and Support - 42

Copyright 2006 EMC Corporation. All Rights Reserved.

Predicting File Systems Future Usage


This example shows a
file system (fs19) that is
predicted to reach its
space limit in 78 days.
Note: This feature displays
theoretical usage only. It should
not be interpreted as actual
usage. It is helpful for planning
purposes, but should not be the
sole diagnostic tool for
determining future file system
usage.

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 43

With Celerra Monitor, you can estimate future file system space usage and predict when the space on a
file system might become exhausted. This is done based on historical usage that displays on the File
System Space Usage window.
You can also chart an approximation of future space usage. The approximation and the prediction are
based on the data currently displaying on the graph; therefore, you should not base your decisions
about future storage planning on the graphs with a short time interval, such as a two-hour window.
Estimates will be more accurate if you display a graph representing a full week or more of usage
before making an approximation or prediction. It is advisable to make a prediction or approximation
based on one week of usage, then expand the graph to display the entire recorded history and make
another approximation and prediction. If both approximations and both predictions are close to each
other, you can probably assume the approximation is reasonable.
Occasionally, a prediction cannot be made due to a lack of data. In this case, a warning message
appears and no prediction is made.

Celerra Management and Support - 43

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Manager Integrated Help Page Level

Tool tip
Place your cursor
directly over a
field to view its
help text

Online help
Click the help
button to go to the
Celerra Manager
Online Help Guide
2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 44

Celerra Manager has a comprehensive online help system to guide you through all of your management
tasks.
One of the most useful help tools is field help. Field help enables you to get information about a
specific field in Celerra Manager while you are working in the application. To view the help
information for a particular field, move your cursor directly over it. If help is available for that field,
help text will appear in a box near the field label.
You can also open a comprehensive online help guide by clicking the help button in the upper right
hand corner of the application. To open online help, click the help icon at the top of the task page. The
help page for the page you are currently on in Celerra Manager will open.
Celerra Management and Support - 44

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Manager Integrated Help - Online Help Guide

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 45

The Celerra Manager's online help guide includes comprehensive instructions for administering your
Celerra Network Server. Topics, including procedures, are addressed.
Online Help Guide Tips
y To see a list of help topics in the help navigation pane, click the Contents tab. The Contents tab
includes step by-step instructions for performing procedures in Celerra Manager.
y To view the system index, in the navigation pane, click the Index tab.
y To search for a word or phrase in the online help, go to the Search tab.

Celerra Management and Support - 45

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Manager Tools / SSH


Use the SSH tool to access the Celerra Command Line Interface (CLI)
y Select Tools from the Navigation pane

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 46

Certain tasks on Celerra can only be performed from the Command Line Interface (CLI). The Tools
folder in Celerra Managers navigation pane includes a Java-based Secure Shell (SSH) applet.
Other useful tools include Celerra Monitor and Navisphere. For a description of each tool hover your
cursor over the button and read the Tool Description in the text box to the right.

Celerra Management and Support - 46

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Manager's Java-based SSH console

y Always log on
as the NAS
administrator
(typically
nasadmin).
y If necessary,
you can su to
root

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 47

The Java-based SSH tool provides easy access to the Celerra CLI. Some tasks are only performed from
the CLI. This includes modification of some of Celerra's CallHome settings.
After clicking the SSH Shell button on the Tools page, you will be required to present a username and
password. You should always log on as the NAS administrator (typically nasadmin) rather than root. If
you require root access, log on as nasadmin and then enter the su command (do not use the su -
command). Following these steps will provide you with the necessary profile for running Celerra
commands.

Celerra Management and Support - 47

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Manager Wizards / Set Up Celerra

y To access the Set Up


Celerra Wizard, select
Wizards from the
Navigation pane

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 48

The Set Up Celerra Wizard allows you to do the initial configuration of the Control Station and Data
Movers. After the successful completion of this wizard, you should be able to share data.

Celerra Management and Support - 48

Copyright 2006 EMC Corporation. All Rights Reserved.

Using the Set Up Celerra Wizard


Wizard configurations include:

Begin Celerra Set Up, for Control Station


Host name, DNS, NTP, Time zone and licensing

Set Up Data Mover


Standby relationships and policies, NTP, Unicode

Set Up Network Services


DNS, NIS

Create interface
Create/select network device, IP configuration, MTU, VLAN ID

Create a file system


Requires user LUNs on storage array

Create a CIFS share


Create share, CIFS server, join domain

Answer the wizard prompts based on info in Appendix G


worksheet

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 49

Set Up Celerra Wizard


The setup wizard leads you through most of the remaining configuration tasks. Depending on which
service offering the customer purchased, you may skip some steps. For example, you may configure a
single network interface on each Data Mover, or you may configure all of the network interfaces,
including high availability networking features.
The information you enter in the setup wizard takes effect when you click Submit. You can always
cancel the current wizard step without saving your changes.
Setup Wizard Worksheet
The Set Up Celerra Wizard has six main steps. The Setup Wizard Worksheets in Appendix G should
be used to gather the information prior to running the wizard. Then the worksheet can be referenced
when answering the questions presented by the wizard.

Celerra Management and Support - 49

Copyright 2006 EMC Corporation. All Rights Reserved.

Module Summary
In this module you learned about:
y The main software components of the Celerra system are:

The Data Mover DART OS


Linux on the Control Station
EMC NAS software
NBS

y The Celerra Control Station includes commands and utilities on the


local disks and important Celerra configuration information located
on the back-end storage
Access through NBS

y Data Mover has no direct user interface; all configuration and


monitoring is performed through the Control Station
y Commands are either global to the system or local to a specific Data
Mover
y The NAS database is automatically backed up hourly to locations
on both the Control Station and back-end storage
2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 50

Celerra Management and Support - 50

Copyright 2006 EMC Corporation. All Rights Reserved.

Closing Slide

2006 EMC Corporation. All rights reserved.

Celerra Management and Support - 51

Celerra Management and Support - 51

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra ICON
Celerra Training for Engineering

NAS Software Upgrades

2006 EMC Corporation. All rights reserved.

NAS Software Upgrades - 1

Copyright 2006 EMC Corporation. All Rights Reserved.

Revision History
Rev Number    Course Date    Revisions
1.0           May 2006       Complete

2006 EMC Corporation. All rights reserved.

NAS Software Upgrades - 2

Copyright 2006 EMC Corporation. All Rights Reserved.

NAS Software Upgrades


Upon completion of this module, you will be able to:
y Describe the planning required to perform an upgrade
y Identify the resources available to assist with an upgrade
y Perform a software upgrade in a lab environment

2006 EMC Corporation. All rights reserved.


NAS Software Upgrades - 3

Copyright 2006 EMC Corporation. All Rights Reserved.

EMC NAS Upgrade


y Installation environment
Sterile
No data
No network connections or services
No previous version of EMC NAS
No Celerra features implemented

y Upgrade environment
Data present and being accessed
Celerra features are interoperating with network features
e.g. DNS, NDMP, administrative scripts, etc

Features of previous versions of EMC NAS could pose challenges

2006 EMC Corporation. All rights reserved.


EMC NAS Installation Environment


Typical installations of Celerra Network Server are relatively sterile. There is no data present, no
network connections, and no interoperability with network services. Since there were no previous
versions of EMC NAS software, there were no interoperability issues regarding Celerra Network
Server features.
EMC NAS Upgrade Environment
All of this is different in an upgrade environment. The system being upgraded in almost all cases has
user/application data, not only present but also in use. Various Celerra functions are very likely
interoperating with other servers and services out in the production network. Examples of this would
be DNS, NDMP, administrative scripts accessing the Celerra from a remote host, etc. Previous
implementation of Celerra features could also pose upgrade challenges involving careful planning. For
example, in the past SnapSure checkpoint schedules needed to be deleted prior to upgrade and then
recreated afterwards.
(Note, software upgrades of factory-installed integrated systems are part of the installation process and
do not pose the risks discussed here.)

NAS Software Upgrades - 4

Copyright 2006 EMC Corporation. All Rights Reserved.

Software Components
y Software Upgrade typically includes:
Linux on the Control Station
NAS software components on the Control Station
DART on the Data Movers

2006 EMC Corporation. All rights reserved.


NAS Software Upgrades - 5

Copyright 2006 EMC Corporation. All Rights Reserved.

Health Checks
y Before attempting an upgrade, the system must be
operating normally
y The field uses health check scripts to verify the system
prior to getting CCA approval to perform the upgrade
http://www.celerra.isus.emc.com/top_level/top_tool.htm

y All issues identified must be resolved before proceeding


with upgrade

2006 EMC Corporation. All rights reserved.


NAS Software Upgrades - 6

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra NS Upgrade Documentation


Key document for this discussion:
y Celerra Network Server Customer Service Universal
Upgrade Procedure
Applies to all Celerra models
This module will focus on Celerra NS-series upgrades

2006 EMC Corporation. All rights reserved.


The following portions of this course are designed to focus on the technical publication, Celerra
Network Server Customer Service Universal Upgrade Procedure.

NAS Software Upgrades - 7

Copyright 2006 EMC Corporation. All Rights Reserved.

Planning for Upgrades


Plan carefully!
y Document all functions on Celerra and in the environment
y Ask:
What is involved?
Old versions of services? e.g., Old version of NDMP
Scripts accessing Celerra?

What has changed since last known good DM reboot?


Ethernet switches, routers, DNS, Active Directory?

y Compare findings with the NAS Interoperability Matrix for the version to which you will be upgrading

2006 EMC Corporation. All rights reserved.


Planning for EMC NAS Upgrade


Because of the complex environment and risks surrounding the upgrade careful planning is critical.
Validating Interoperability
To minimize downtime and other risks diligent documentation of the environment is essential. This
includes documentation of Celerra features that are in use, as well as functions, services, scripts that
are interoperating with the Celerra, and changes that may have taken place since the last known good
boot of the Data Movers and Control Station. For example, have there been any modifications made to
Ethernet switches, network routers, or DNS and Active Directory servers?
Once the environment has been documented, the results should be carefully compared with the NAS
Interoperability Matrix to assure that the new version of EMC NAS software supports the environment.

NAS Software Upgrades - 8

Copyright 2006 EMC Corporation. All Rights Reserved.

Reboot Required?
y In-family upgrades
FROM code is the same major rev as TO code
e.g. from v5.4.14-3 to v5.4.15-2
Allows postponement of Data Mover reboot
y Out-of-family upgrades
FROM code is a different major rev than TO code
e.g. from v5.3.17-1 to v5.4.15-2
Data Mover reboot cannot be postponed

2006 EMC Corporation. All rights reserved.


NAS Software Upgrades - 9

Copyright 2006 EMC Corporation. All Rights Reserved.

Simple Upgrade Procedure


1. Perform pre-upgrade health checks
2. Review Release Notes and Celerra Network Server Customer Service Universal Upgrade Procedure
3. Acquire the appropriate code CD
4. Mount the CD on the Control Station
mount /dev/cdrom /mnt
5. Execute the upgrade script
cd /mnt/EMC/nas
./setup
The setup script performs extensive prerequisite checking before proceeding
Control Station software components and DART are upgraded separately
Reboot of the Data Movers will be necessary - can be deferred for an in-family upgrade (see the sketch below)
2006 EMC Corporation. All rights reserved.


NAS Software Upgrades - 10
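
The command sketch below condenses steps 4 and 5 above, assuming the upgrade CD is in the Control Station drive and /mnt is a free mount point (the ordering is illustrative; always follow the Universal Upgrade Procedure for the actual run):

  # mount /dev/cdrom /mnt     # mount the code CD (root privileges required)
  # cd /mnt/EMC/nas           # change to the NAS software directory on the CD
  # ./setup                   # launch the upgrade script; it runs its prerequisite checks
                              # before upgrading the Control Station components and DART

Whether the subsequent Data Mover reboot can be deferred depends on whether the upgrade is in-family, as discussed on the previous slide.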

Copyright 2006 EMC Corporation. All Rights Reserved.

Module Summary
y Upgrading NAS Software involves more risk than new
install
y Review Release Notes and understand potential impacts
y An upgrade should not be attempted if the system is not
fully operational
Installation may fail if the system is degraded
Could result in data loss or system unavailability

y Reboot will be necessary but planning can minimize


impact

2006 EMC Corporation. All rights reserved.


NAS Software Upgrades - 11

Copyright 2006 EMC Corporation. All Rights Reserved.

Closing Slide

2006 EMC Corporation. All rights reserved.


NAS Software Upgrades - 12

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra ICON
Celerra Training for Engineering

Network Interface Configuration

2006 EMC Corporation. All rights reserved.

Network Configuration

-1

Copyright 2006 EMC Corporation. All Rights Reserved.

Revision History
Rev Number    Course Date      Revisions
1.0           February 2006    Initial
1.1           March 2006       Updates and enhancements
1.2           May 2006         Updates and enhancements

2006 EMC Corporation. All rights reserved.

Network Configuration - 2

Network Configuration

-2

Copyright 2006 EMC Corporation. All Rights Reserved.

Basic Network Configuration


Upon completion of this module, you will be able to:
y Describe the differences between Network Devices and
Network Interfaces
y Configure hardware parameters for network Devices
y Configure an IP address on Network Interfaces
y Configure the routing table
y Configure a Data Mover for DNS
y Implement Time Services

2006 EMC Corporation. All rights reserved.

Network Configuration - 3

Network Configuration

-3

Copyright 2006 EMC Corporation. All Rights Reserved.

Network Attached Storage

y Network consists of:
Physical components: Switches, routers, and hubs that connect clients to the services of the Data Movers
Network Services: DNS, NIS, Active Directory, Time Services, etc.
y Celerra is typically integrated into a pre-existing network environment
Weaknesses in the network design may become apparent
(Slide diagram: Clients connect across the Network to the Celerra, which is backed by a Symmetrix and/or CLARiiON)

2006 EMC Corporation. All rights reserved.

Network Configuration - 4

NAS, or network attached storage, is all about the network. In this module we will explore how to
configure the Celerra to integrate into an IP network environment.

Network Configuration

-4

Copyright 2006 EMC Corporation. All Rights Reserved.

Devices and Interfaces

y Devices are the physical Network Interface Cards (NICs)
Have properties such as speed and duplex
y Interfaces are the logical configuration
An interface defines an IP address and other parameters
Used by clients to address the services on the Data Mover
Each Interface may be configured for different subnets and/or VLANs
y Devices and Interfaces may be configured in a one-to-one, one-to-many, or many-to-one relationship
(Slide diagram: a Data Mover with physical Devices cge0-cge5, fge0, and fge1 and logical Interfaces such as 10.127.50.12, 10.127.60.12, 10.127.70.12, and 10.127.80.12, connecting to the Ethernet switch and client systems)
2006 EMC Corporation. All rights reserved.

Network Configuration - 5

Network Configuration

-5

Copyright 2006 EMC Corporation. All Rights Reserved.

Many-to-One: Interfaces to Devices


y More than one IP address can be assigned to a single Network Device (see the example sketch below)
(Slide diagram: multiple IP Interfaces configured on a single physical Device of the Data Mover, which connects to the Ethernet switch and client systems)

2006 EMC Corporation. All rights reserved.

Network Configuration - 6
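
As a sketch of the many-to-one case, two interfaces could be created on the same device with server_ifconfig, whose syntax is covered later in this module (the interface names, addresses, and masks below are example values only):

  server_ifconfig server_2 -c -D cge0 -n cge0-1 -p IP 10.127.50.12 255.255.255.0 10.127.50.255
  server_ifconfig server_2 -c -D cge0 -n cge0-2 -p IP 10.127.60.12 255.255.255.0 10.127.60.255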

Network Configuration

-6

Copyright 2006 EMC Corporation. All Rights Reserved.

One-to-Many: Interfaces to Devices

y Used in High Availability


network configurations
y Discussed in a following
module

Data Mover
To Ethernet Switch & Client Systems

y One interface is
configured to two or
more Network Devices

Devices
Interfaces

10.127.50.12

cge0
cge1
cge2
cge3
cge4
cge5
fge1
fge0

2006 EMC Corporation. All rights reserved.

Network Configuration - 7

Network Configuration

-7

Copyright 2006 EMC Corporation. All Rights Reserved.

Configuring Celerra Networking


y Reference:
Configuring and Managing Celerra Networking
P/N 300-002-707 March 2006
Configure Network Devices
Configure IP Interfaces
Verify Data Mover Connectivity
Configure Routes

2006 EMC Corporation. All rights reserved.

Network Configuration - 8

In the first section of this module, we will cover the basic steps for configuring devices and interfaces,
verifying connectivity, and configuring routes. The above referenced document is an excellent
resource for more information on configuring and managing Celerra networking.
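
For orientation only, the four steps map roughly onto the commands covered in the rest of this section (device names, addresses, and the gateway are example values; the detailed syntax and Celerra Manager equivalents follow on the next slides):

  server_sysconfig server_2 -pci cge0 -o speed=1000,duplex=full       # configure the Network Device
  server_ifconfig server_2 -c -D cge0 -n cge0-1 -p IP 192.168.101.20 255.255.255.0 192.168.101.255   # configure the IP Interface
  server_ping server_2 -interface cge0-1 192.168.101.1                # verify Data Mover connectivity
  server_route server_2 -add default 192.168.101.1                    # configure a default route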

Network Configuration

-8

Copyright 2006 EMC Corporation. All Rights Reserved.

Modifying and Displaying Configuration


y Command:
server_sysconfig <mover_name>
server_sysconfig ALL

Configure Network
Devices
Configure IP Interfaces
Verify Data Mover
Connectivity
Configure Routes

y Additional specifications
-Platform
-pci device
-o options
speed={ 10 | 100 | 1000 | auto }
duplex={ full | half | auto }
linkneg={ enable | disable }

2006 EMC Corporation. All rights reserved.

Network Configuration - 9

Command syntax
To modify and display the hardware configuration for Data Movers, type the following command:
server_sysconfig <mover_name> or ALL
Note: Type ALL to execute the command for all of the Data Movers.
Additional specifications
-Platform: Displays the system configuration of the Data Mover, including processor type, processor and bus
speed in MHz, main memory in MB, and the motherboard type.
-pci device: Displays information for the specified network adapter card installed in the Data Mover (for
example, ana0 or fpa1).
-o options: Options must be separated by commas with no additional spaces in the command line.
speed={ 10 | 100 | 1000 | auto }: The speed is automatically detected from the Ethernet line. The auto option
turns autonegotiation back on if you have previously specified a speed setting in the command line. If you set
speed=auto, the duplex option is automatically set to auto as well.
duplex={ full | half | auto }: Auto turns autonegotiation back on if you have previously specified a duplex
setting in the command line. The default duplex setting is half for Fast Ethernet if the duplex is not set to auto.
linkneg={ enable | disable }: Enables you to disable autonegotiation on the NIC if it is not supported by the
network Gigabit switch. The default is enable.
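
For instance, the options above can be combined in a single comma-separated list with no spaces (a sketch; the device name and values are examples only):

  server_sysconfig server_2 -pci cge0 -o speed=1000,duplex=full,linkneg=enable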

Network Configuration

-9

Copyright 2006 EMC Corporation. All Rights Reserved.

Listing Network Interfaces


y Determine available network devices
and configuration attributes

Configure Network Devices
Configure IP Interfaces
Verify Data Mover Connectivity
Configure Routes

y Command:
server_sysconfig <mover_name> -pci

y Example:
$ server_sysconfig server_2 -pci cge0
server_2 :
On Board:
Broadcom Gigabit Ethernet Controller
0: cge0 IRQ: 24
speed=100 duplex=full txflowctl=disable rxflowctl=disable
Network Configuration - 10

2006 EMC Corporation. All rights reserved.

Listing network interfaces


Before you configure hardware parameters, list the network interfaces (PCI devices) to see what exists.
Command syntax
server_sysconfig <mover_name> -pci
Example
To view a specific network adapter device, type the following command:
server_sysconfig server_3 -pci ana0
Slot: 3
Adaptec ANA-6944 Multiple Fast Ethernet Controller 0: ana0 IRQ: 15
speed=100 duplex=full

Network Configuration

- 10

Copyright 2006 EMC Corporation. All Rights Reserved.

Setting the Transmission Speed


y Transmission Speed defaults to auto
Best Practice is to set transmission speed
to the maximum speed supported by ALL
network components between the Celerra
and the client

Configure Network
Devices
Configure IP Interfaces
Verify Data Mover
Connectivity
Configure Routes

y Command:
server_sysconfig <mover_name>
-pci <device_name> -o speed={10|100|1000|auto}

y Example:
server_sysconfig server_5 -pci cge0 -o speed=100

2006 EMC Corporation. All rights reserved.

Network Configuration - 11

Default transmission speed


Celerra's transmission speed will default to auto; however, it is best to force the transmission speed.
Optimal transmission speed
For optimum network throughput, Data Movers should be set to communicate at 100 Mbps in Full
Duplex mode to greatly increase the maximum theoretical throughput. For example, if a Data Mover
has eight Fast Ethernet ports set at 10 Mbps in Half Duplex mode, the combined maximum theoretical
network throughput is 80 Mbps. However, if the same eight ports are communicating at 100 Mbps in
Full Duplex mode, the maximum theoretical network throughput would be 1600 Mbps.
Command
server_sysconfig <mover_name> -pci <device_name> -o speed={10|100|1000|auto}
Example
To configure a Fast Ethernet network adapter for 100 Mbps, use the following command:
server_sysconfig server_5 -pci ana0 -o speed=100

Network Configuration

- 11

Copyright 2006 EMC Corporation. All Rights Reserved.

Setting Duplex Mode


y Duplex setting defaults to Auto Negotiate
Best Practice is to set to Full Duplex
Ethernet hubs or other network components
that do not support Full Duplex are not
recommended in EMC Celerra environments

Configure Network
Devices
Configure IP Interfaces
Verify Data Mover
Connectivity
Configure Routes

y Command syntax:
server_sysconfig <mover_name> -pci
<device_name> -option duplex={full|auto|half}

y Example:
server_sysconfig server_5 -pci cge0 -o duplex=full

2006 EMC Corporation. All rights reserved.

Network Configuration - 12

Default duplex setting


Celerra's default duplex setting is Auto Negotiate.
Notes:
The entire network should run in the same duplex mode.
The network should be Full Duplex (all switches, no hubs).
Recommended duplex setting
It is preferred that the Data Mover transmit at 100Mbps, Full Duplex. However, 10 Mbps, Half Duplex can also be used
when necessary. In order for a network to function well, the same settings should be deployed across the network.
Removing and replacing Ethernet hubs
Since Ethernet hubs are not capable of operating in Full Duplex mode, it is strongly recommended that all Ethernet hubs in
EMC Celerra environments be removed from the network and replaced with Ethernet switches. If the network does not
fully support Full Duplex, then implementing Full Duplex on Celerra could cause connectivity devices in the network to fill
their buffers which would have a drastic effect on the performance of the network as well as possible data loss.
Example
To configure a Fast Ethernet network adapter for Full Duplex, you would type the following command:
server_sysconfig server_5 -pci ana0 -o duplex=full
Note: Setting duplex with Celerra Manager is shown on the previous slide.

Network Configuration

- 12

Copyright 2006 EMC Corporation. All Rights Reserved.

Listing Network Interfaces


y

Network > Devices tab

Configure Network
Devices
Configure IP Interfaces
Verify Data Mover
Connectivity
Configure Routes

2006 EMC Corporation. All rights reserved.

Network Configuration - 13

This slide shows how to list network devices using Celerra Manager.

Network Configuration

- 13

Copyright 2006 EMC Corporation. All Rights Reserved.

Setting the Transmission Speed


y

Network > Devices tab > right click Device > Properties >
set Speed/Duplex

Configure Network
Devices
Configure IP Interfaces
Verify Data Mover
Connectivity
Configure Routes

2006 EMC Corporation. All rights reserved.

Network Configuration - 14

This slide shows how to set speed and duplex on the device using Celerra Manager.

Network Configuration

- 14

Copyright 2006 EMC Corporation. All Rights Reserved.

Configure IP Interfaces
y Interfaces are the logical configuration
An interface defines an IP address and other parameters
y Single IP interface on a device or multiple IP interfaces on a single device
y Step 1: Gather required information
IP addresses
Subnet mask
Broadcast address

Configure Network Devices
Configure IP Interfaces
Verify Data Mover Connectivity
Configure Routes

2006 EMC Corporation. All rights reserved.

Network Configuration - 15

The IP address, subnet mask, and broadcast address are all required when configuring the interface.

Network Configuration

- 15

Copyright 2006 EMC Corporation. All Rights Reserved.

IP Address Configuration
y Use the server_ifconfig command to:
Create a network interface from a network device
Assign an address to a network interface
Display interface parameters and addressing
Disable or delete configured interfaces
Change the MTU for an interface

Configure Network Devices
Configure IP Interfaces
Verify Data Mover Connectivity
Configure Routes

y Command:
server_ifconfig <mover_name> -create Device <device_name>
-name <if_name> -protocol IP <ipaddress> <netmask> <broadcast>

y Example:
server_ifconfig server_3 -c -D cge0 -n cge0-1 -p IP
192.168.101.20 255.255.255.0 192.168.101.255
Network Configuration - 16

2006 EMC Corporation. All rights reserved.

Additional specifications
-a: Displays parameters for all configured interfaces.
-d if_name: Deletes an interface configuration.
-c -D device_name -n if_name -p { ... }: Creates an interface on the specified device and assigns the specified protocol and associated
parameters to the interface. Also reassigns the default name to be the new name specified.
-p IP ipaddr ipmask ipbroadcast: Assigns IP protocol with specified IP address mask and broadcast address.
if_name up: Marks the interface up. This happens automatically when setting the first address on an interface. The up option enables an
interface that has been marked down, reinitializing the hardware.
if_name down: Marks the interface down. The system does not attempt to transmit messages through that interface If possible, the
interface is reset to disable reception as well. This action does not automatically disable routes using the interface.
if_name mtu=MTU: Sets the Maximum Transmission Unit (MTU) size in bytes for the user-specified interface.
Example: To create the IP interface cge0-1 on the device cge0, you would type:
server_ifconfig server_3 -c -D cge0 -n cge0-1 -p IP 192.168.101.20 255.255.255.0 192.168.101.255
server_3: done
Disabling an interface: To disable cge0-1 (interface), you would type:
server_ifconfig server_3 cge0-1 down (or up to enable)
server_3: done
The IP address specifies the IP address of a machine. You may have multiple interfaces per device, each identified by a different IP
address.
Deleting an interface: To delete cge0-1 (interface), you would type:
server_ifconfig server_3 -d cge0-1
server_3: done
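
The mtu= option mentioned above follows the same pattern (a sketch; the jumbo-frame value of 9000 is an example and must also be supported by the switch and the clients):

  server_ifconfig server_3 cge0-1 mtu=9000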

Network Configuration

- 16

Copyright 2006 EMC Corporation. All Rights Reserved.

Displaying IP Configuration
$ server_ifconfig

server_2 all

server_2 :
cge0_2 protocol=IP device=cge0
inet=128.222.93.165 netmask=255.255.255.0 broadcast=128.222.92.1
UP, ethernet, mtu=1500, vlan=0, macaddr=0:60:16:4:47:9b
cge0_1 protocol=IP device=cge0
inet=128.222.92.165 netmask=255.255.255.0 broadcast=128.222.92.1
UP, ethernet, mtu=1500, vlan=0, macaddr=0:60:16:4:47:9b
loop protocol=IP device=loop
inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255
UP, loopback, mtu=32768, vlan=0, macaddr=0:0:0:0:0:0
netname=localhost
el31 protocol=IP device=fxp0
inet=192.168.2.2 netmask=255.255.255.0 broadcast=192.168.2.255
UP, ethernet, mtu=1500, vlan=0, macaddr=8:0:1b:43:89:16
netname=localhost
el30 protocol=IP device=fxp0
inet=192.168.1.2 netmask=255.255.255.0 broadcast=192.168.1.255
UP, ethernet, mtu=1500, vlan=0, macaddr=8:0:1b:43:89:16
netname=localhost
2006 EMC Corporation. All rights reserved.

Network Configuration - 17

Example
To display parameters of all interfaces for server_2, type the following command:
server_ifconfig server_2 -a
server_2 :
loop protocol=IP device=loop
inet=127.0.0.1 netmask=255.0.0.0 broadcast=127.255.255.255
UP, loopback, mtu=32704, macaddr=0:0:d1:1e:a6:44 netname=localhost
cge0-1 protocol=IP device=ana0
inet=192.168.101.20 netmask=255.255.192.0 broadcast=192.168.101.255
UP, ethernet, mtu=1500, macaddr=0:0:d1:1e:a6:44
el31 protocol=IP device=el31
inet=192.168.2.2 netmask=255.255.255.0 broadcast=192.168.2.255
UP, ethernet, mtu=1500, macaddr=0:50:4:e3:6d:7d netname=localhost
el30 protocol=IP device=el30
inet=192.168.1.2 netmask=255.255.255.0 broadcast=192.168.1.255
UP, ethernet, mtu=1500, macaddr=0:50:4:e3:6d:7c netname=localhost
Note:
When deleting or modifying an IP configuration for an interface, remember to update the appropriate CIFS servers that may
be using that interface and any NFS exports that may depend on the changed interface.

Network Configuration

- 17

Copyright 2006 EMC Corporation. All Rights Reserved.

Displaying IP Configuration
y

Network > Interfaces tab

Configure Network
Devices
Configure IP Interfaces
Verify Data Mover
Connectivity
Configure Routes

Network Configuration - 18

2006 EMC Corporation. All rights reserved.

This slide shows how to display the IP configuration using Celerra Manager.

Network Configuration

- 18

Copyright 2006 EMC Corporation. All Rights Reserved.

IP Address Configuration
y

Network > New

Configure Network
Devices
Configure IP Interfaces
Verify Data Mover
Connectivity
Configure Routes

Network Configuration - 19

2006 EMC Corporation. All rights reserved.

This slide shows how to configure an IP address using Celerra Manager.


Note: By default, the name of the interface will be the IP address with underscores. For example, the
interface shown on this slide will be named 10_127_56_109. Please note the new field Name: which
allows the name of the interface to be configured. The broadcast address is not configurable.

Network Configuration

- 19

Copyright 2006 EMC Corporation. All Rights Reserved.

Verify Data Mover Connectivity


y Use the server_ping command to:
After configuring an interface, verify connectivity
Can ping using either the IP Address or Hostname
Hostname requires local name resolution
Reports roundtrip delay

Configure Network Devices
Configure IP Interfaces
Verify Data Mover Connectivity
Configure Routes

y Command:
server_ping <mover_name> -interface <interface> <ipaddress>

y Example:
server_ping server_3 -interface cge0-1 192.168.101.21

2006 EMC Corporation. All rights reserved.

Network Configuration - 20

Network Configuration

- 20

Copyright 2006 EMC Corporation. All Rights Reserved.

Adding Routes
y Data Movers support Dynamic and Static Routing
y Three types of static routes:
Host
Network
Default

Configure Network
Devices
Configure IP Interfaces
Verify Data Mover
Connectivity
Configure Routes

y Adding a default gateway


server_route <mover_name> -add default <gateway_addr>

y Adding a route to a network


server_route <mover_name> -add net <dest_addr> <gateway_addr>
<netmask>

y Adding a route to a host


server_route <mover_name> -add host <dest_addr> <gateway_addr>

2006 EMC Corporation. All rights reserved.

Network Configuration - 21

Function of the routing table


The routing table of a Data Mover is used to direct outgoing network traffic via both external (router)
and internal (individual network interfaces such as cge0, cge1, etc.) gateways. For network activity
initiated by the Data Mover, the system uses the routing table to get destination and gateway
information. Routes to a particular host must be distinguished from those to a network. The optional
keywords net and host specify the address type and force the destination to be interpreted as a network
or a host, respectively. The commands shown on this slide show how to add a default gateway, host
route, or network route to the routing table.

Network Configuration

- 21

Copyright 2006 EMC Corporation. All Rights Reserved.

Routing Examples

y Configure routing table to route to a particular Host
server_route server_2 -a host 192.168.64.10 192.168.101.22

y Configure routing table to route to a particular Network
server_route server_2 -a net 192.168.144.0 192.168.101.21

y Default gateways
server_route server_2 -a default 192.168.64.1
server_route ALL -a default 192.168.64.1

2006 EMC Corporation. All rights reserved.

Network Configuration - 22

Routing options
You can configure the routing table to route to a particular host or network. Therefore, an
administrator can:
y Specify that packets sent to a particular host, such as 192.168.64.10, be transmitted using a
particular interface, such as 192.168.101.22
y Configure the Data Mover so that TCP/IP packets destined for network 192.168.144.0 go through
interface 192.168.101.21
To use the default gateway for all unspecified destinations, you can use the add default option
followed by the gateway address (192.168.64.1). The ALL parameter defines that particular gateway
for all Data Movers in the Celerra cabinet.

Network Configuration

- 22

Copyright 2006 EMC Corporation. All Rights Reserved.

Configure Default Gateway


y

Network > Routing tab > New

Configure Network
Devices
Configure IP Interfaces
Verify Data Mover
Connectivity
Configure Routes

2006 EMC Corporation. All rights reserved.

Network Configuration - 23

This slide shows how to configure a default gateway route using Celerra Manager.

Network Configuration

- 23

Copyright 2006 EMC Corporation. All Rights Reserved.

Displaying Data Mover Routing Information


Example:
server_route server_2 -list
server_2:
net      192.168.64.0    192.168.101.20   cge0
net      192.168.144.0   192.168.101.21   cge1
net      192.168.160.0   192.168.101.23   cge3
host     192.168.64.10   192.168.101.22   cge2
host     127.0.0.1       127.0.0.1        loop
default  192.168.64.1    192.168.101.20   cge0

2006 EMC Corporation. All rights reserved.

Network Configuration - 24

The server_route command is used to list the routing table entries for a Data Mover. For example, to
list the routing table for Data Mover 2, use the following command:
server_route server_2 -list

Network Configuration

- 24

Copyright 2006 EMC Corporation. All Rights Reserved.

Viewing Data Mover Routing Information


y

Network > Routing tab

2006 EMC Corporation. All rights reserved.

Network Configuration - 25

This slide shows how to view configured routes using Celerra Manager.

Network Configuration

- 25

Copyright 2006 EMC Corporation. All Rights Reserved.

Deleting Routes
y Delete a particular route
server_route server_2 -d net 192.168.144.0 192.168.101.21
server_route server_2 -d host 192.168.64.10 192.168.101.22
server_route server_2 -d default 192.168.64.1

y Temporarily delete all routes
server_route server_2 -flush

y Permanently delete all routes
server_route server_2 -DeleteAll
server_route ALL -DeleteAll

2006 EMC Corporation. All rights reserved.

Network Configuration - 26

Command options
Use the following command options to delete routes:
flush (-f): Removes all routes from the Data Mover's routing table until the Data Mover is rebooted.
DeleteAll: Permanently clears all routes from the routing table.

Network Configuration

- 26

Copyright 2006 EMC Corporation. All Rights Reserved.

Deleting Routes
y Network > Routing tab > Select the route to delete > Delete

2006 EMC Corporation. All rights reserved.

Network Configuration - 27

This slide shows how delete routes using Celerra Manager.

Network Configuration

- 27

Copyright 2006 EMC Corporation. All Rights Reserved.

Naming Services
y Resolve hostname into the corresponding IP address
And IP addresses to hostnames

y A number of different techniques


Local hosts file
Network Information Systems (NIS)
Domain Name System (DNS)

y Each Data Mover requires that one or more of these techniques be configured
The nsswitch.conf file determines the order of query

2006 EMC Corporation. All rights reserved.

Network Configuration - 28

Network Configuration

- 28

Copyright 2006 EMC Corporation. All Rights Reserved.

Configuring a Data Mover for DNS


y Windows 2000 and Windows Server 2003 environments
require a DNS server
y Before you can configure DNS for a Data Mover, you must
know:
The DNS domain name
IP addresses of DNS servers
Multiple DNS servers can be specified

y Both TCP and UDP protocols are supported - UDP is the


default
y Command:
server_dns <mover_name> <dns_domain_name> <IP_of_DNS_server>

y Examples:
server_dns server_2 hmarine.com 192.168.64.15
server_dns server_2 -p tcp corp.hmarine.com 192.168.64.15
2006 EMC Corporation. All rights reserved.

Network Configuration - 29

Support for DNS


Each domain is associated with a list of servers that can respond to DNS queries for that domain. You
can configure a Data Mover with an unlimited number of DNS domains and each domain can have up
to three DNS servers. It is strongly recommended that you set up a minimum of two DNS servers per
domain (one preferred and one alternate). Since a Data Mover can be connected to different domains,
it must be able to query different DNS servers. When configuring the Data Mover for DNS, multiple
DNS servers can be included, separated by spaces, in the command statement. Additionally, although
the default protocol for DNS is UDP, the TCP protocol can be specified.
Celerra Data Movers support both traditional DNS and Dynamic DNS in a Microsoft Windows 2000
network.
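
Following the note above that multiple DNS servers are included separated by spaces, a domain with a preferred and an alternate server could be configured in one command (a sketch; 192.168.64.16 is a hypothetical second server):

  server_dns server_2 hmarine.com 192.168.64.15 192.168.64.16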

Network Configuration

- 29

Copyright 2006 EMC Corporation. All Rights Reserved.

Displaying Current DNS Configuration

y Command:
server_dns <mover_name>

y Example:
server_dns server_2
server_2 :
DNS is running.
hmarine.com
proto:udp server(s):10.127.50.162

2006 EMC Corporation. All rights reserved.

Network Configuration - 30

Displaying current DNS configuration


server_dns provides connectivity to the DNS lookup servers for the specified Data Mover(s) to convert
hostnames and IP addresses.
Note: Refer to the Configuring a Data Mover for DNS (Celerra Manager) slide to view the current
DNS configuration.

Network Configuration

- 30

Copyright 2006 EMC Corporation. All Rights Reserved.

Stopping and Starting DNS


y DNS entries are cached locally on the Data Mover
y May be necessary to flush incorrect or out of date entries
Before flushing DNS, service must be stopped

y Command:
server_dns <mover_name> -option {start|stop|flush}

y Example:
server_dns server_2 -o stop
server_dns server_2 -o flush
server_dns server_2 -o start

2006 EMC Corporation. All rights reserved.

Network Configuration - 31

Stopping, Starting, and Flushing DNS


server_dns provides connectivity to the DNS lookup servers for the specified Data Mover(s) to
convert hostnames and IP addresses.
The stop option halts access to the DNS lookup server(s). After the DNS service has been halted, the
flush option can be used to clear the DNS cache that has been saved on the Data Mover. The start
option reactivates the link to the DNS lookup server(s).

Network Configuration

- 31

Copyright 2006 EMC Corporation. All Rights Reserved.

DNS Settings
y Network > DNS Settings

2006 EMC Corporation. All rights reserved.

Network Configuration - 32

In Celerra Manager, the Services Tab has been replaced with a NIS Settings Tab and a DNS
Settings Tab. This slide shows the DNS Domains listings from the DNS Settings Tab.

Network Configuration

- 32

Copyright 2006 EMC Corporation. All Rights Reserved.

Configuring a Data Mover for DNS


y Network > DNS Settings > New
enter DNS Domain Name and IP Address

2006 EMC Corporation. All rights reserved.

Network Configuration - 33

This slide shows how to configure DNS using Celerra Manager.

Network Configuration

- 33

Copyright 2006 EMC Corporation. All Rights Reserved.

Simple Network Time Protocol (SNTP)


y The Data Mover implements an NTP client that can
synchronize the system clock with an NTP or SNTP
server
NTP uses sophisticated algorithms for time correction and
maintenance to allow time synchronization with an accuracy of about
a millisecond
SNTP implements a subset of NTP for use in environments with less
stringent synchronization and accuracy requirements

y To the client, NTP or SNTP are indistinguishable

2006 EMC Corporation. All rights reserved.

Network Configuration - 34

Data Movers and SNTP


The Data Mover implements an NTP client that can synchronize the system clock with an NTP or
SNTP server.
NTP
NTP is a standard timekeeping protocol used on many platforms, including both Windows and UNIX
environments. The full NTP specification uses sophisticated algorithms for time correction and
maintenance to allow time synchronization with an accuracy of about a millisecond. This high level of
accuracy is achieved even in large networks with long network delays or in cases where access to a
time server is lost for extended periods of time.
SNTP
SNTP implements a subset of NTP for use in environments with less-stringent synchronization and
accuracy requirements. SNTP uses simple algorithms for time correction and maintenance and is
capable of accuracy to the level of a fraction of a second. To an NTP or SNTP client, NTP and SNTP
servers are indistinguishable. SNTP can be used:
y When the ultimate performance of the full NTP implementation is not needed or justified.
y In environments where accuracy on the order of large fractions of a second is good enough.

Network Configuration

- 34

Copyright 2006 EMC Corporation. All Rights Reserved.

Implementing Time Services


1. Ensure NTP/SNTP server is configured and operational

Windows 2000

UNIX

2. Initialize time services on the Data Mover


3. Verify successful initialization
4. Monitor the time services

2006 EMC Corporation. All rights reserved.

Network Configuration - 35

This slide shows the steps necessary in implementing time services on the Celerra.

Network Configuration

- 35

Copyright 2006 EMC Corporation. All Rights Reserved.

Initializing Time Services on a Data Mover


To configure time services on the Data Mover
y Command:
server_date <mover_name> timesvc start ntp
-interval <hh:mm> <NTPserver_IP | FQDN>

y Example:
server_date server_2 timesvc start ntp -i 02:00 10.127.50.162
server_date server_2 timesvc start ntp -i 02:00 10.127.50.161

2006 EMC Corporation. All rights reserved.

Network Configuration - 36

Command
server_date <mover_name> timesvc start ntp -interval <hh:mm> <NTPserver_IP | FQDN>
Note: The interval value is expressed as hours and minutes (hh:mm). The default time interval is 60
minutes.
Example
server_date server_2 timesvc start ntp -i 02:00 10.127.50.162
server_date server_2 timesvc start ntp -i 02:00 10.127.50.161
These servers will be polled every two hours. The server at 10.127.50.161 will only be polled if
10.127.50.162 does not respond.
Data Mover preparation for NTP/SNTP
y Ensure that the Data mover is bound to a Windows domain prior to starting the time service.
Note: this is not required in a non-Windows environment.
y Ensure that the time service is started before configuring the Data Mover for NTP/SNTP.
y The Data Mover time must be maintained to within the "Maximum tolerance for computer clock
synchronization" as defined for the Kerberos Policy in Active Directory (discussed later in this
course).

Network Configuration

- 36

Copyright 2006 EMC Corporation. All Rights Reserved.

Verifying Successful Initialization


y Verify date and time
server_date server_2

y Verify synchronization
server_date server_2 timesvc update ntp
server_date server_2 timesvc stats ntp
Time synchronization statistics since start:
hits=1,misses=0,first poll hit=1,miss=0
Last offset:0 secs,-3000usecs
Time sync hosts:
0 1 10.127.50.162
0 2 10.127.50.161
Command succeeded: timesync action=stats
2006 EMC Corporation. All rights reserved.

Network Configuration - 37

Verifying successful initialization


Once the time service has been started, verify the date and time on the Data Mover, as well as the time
synchronization. You should see at least one "hits=" and one "first poll hit=" to indicate a successful
response from a time server poll. A "miss" indicates no response was received within a 3 second
timeout window.
The "Last offset" line shows the offset applied as a result of the last successful time server poll.
After the first poll, "last offset" should be well within a second and "usecs" should be less than 10,000.
The two numbers to the left of the IP addresses of each NTP server indicate whether the host was user
supplied or auto-detected, and in what order to poll (which NTP server to contact first, second, etc.)
Notes:
y Refer to the previous slide to verify the NTP server configuration.
y To monitor NTP; if the polling interval is set for two hours, run the command every two hours to
verify hits (changes), and to check offset times.

Network Configuration

- 37

Copyright 2006 EMC Corporation. All Rights Reserved.

Initializing Time Services on a Data Mover


y Data Movers > Server_x > enter IP address of NTP server

Network Configuration - 38

2006 EMC Corporation. All rights reserved.

This slide shows how to configure an NTP server using Celerra Manager.

Network Configuration

- 38

Copyright 2006 EMC Corporation. All Rights Reserved.

Module Summary
y Devices are the physical hardware; Interfaces are the logical configuration
that defines the IP address and other configuration information
y Devices and Interfaces can be configured in a one-to-one, many-to-one, or
one-to-many relationship
y To list network interfaces, use Celerra Manager or the
server_sysconfig command
y The speed and duplex must be set on each Data Mover interface to match
the current network environment
y The Data Mover interface IP address, mask, and broadcast address are
configured using Celerra Manager or server_ifconfig
y Default, network, and host routes can be configured for the Data Mover
y DNS and NTP are two services that are typically configured on each Data
Mover
Don't forget to verify NTP!
2006 EMC Corporation. All rights reserved.

Network Configuration - 39

The key points for this module are shown here.

Network Configuration

- 39

Copyright 2006 EMC Corporation. All Rights Reserved.

Closing Slide

2006 EMC Corporation. All rights reserved.

Network Configuration - 40

Network Configuration

- 40

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra ICON
Celerra Training for Engineering

Data Mover Failover

2006 EMC Corporation. All rights reserved.

Data Mover Failover

-1

Copyright 2006 EMC Corporation. All Rights Reserved.

Revision History
Rev Number    Course Date      Revisions
1.0           February 2006    Complete
1.2           May 2006         Updates and enhancements

2005 EMC Corporation. All rights reserved.

Data Mover Failover - 2
Copyright 2006 EMC Corporation. All Rights Reserved.

Data Mover Failover


Upon completion of this module, you will be able to:
y Describe the failover process
y Describe the Data Mover Primary and Standby roles
y Describe the three failover policies
y Explain steps to prepare and configure a Data Mover Failover configuration
y Test Data Mover failover
y Restore a Data Mover
y Delete a failover relationship

2005 EMC Corporation. All rights reserved.

Data Mover Failover

The objectives for this module are shown here. Please take a moment to read them.

Data Mover Failover

-3

Copyright 2006 EMC Corporation. All Rights Reserved.

Failover Process
y Primary Data Mover fails
Triggers
Failure of both internal networks
Power failure within Data Mover
Software panic
Exception on the Data Mover
Data Mover hang
Memory error on Data Mover

Non-triggers
Removal of Data Mover from slot
Manual reboot

y Designated standby is activated and


assumes identity of failed Primary

Reconfigures network interfaces


Mounts file systems
Exports/shares file system
Services
Other configurable attributes

(Slide diagram: one Standby Data Mover acting as the standby for several Primary Data Movers)

2005 EMC Corporation. All rights reserved.

Data Mover Failover

Failover process
The failover process is when a primary Data Mover fails and a designated standby Data Mover is
activated as a spare.
The Control Station's heartbeat monitoring will detect the following:
y Failure of both internal networks
y Power failure within the Data Mover
y Software panic
y Exception on the Data Mover
y Data Mover hang
y Memory error on Data Mover
Failover will NOT occur as a result of the following:
y Removal of Data Mover from slot
y Manual reboot
If a primary Data Mover becomes unavailable, the standby assumes the MAC and IP addresses of the
primary Data Mover and provides seamless, uninterrupted access to its file systems.

Data Mover Failover

-4

Copyright 2006 EMC Corporation. All Rights Reserved.

Standby Data Mover


y For the highest availability, configure one standby for
each Data Mover
y NS704G can be configured as 3 primaries and one
standby, or 2 primaries and 2 standbys
y A fully configured NSX with 8 Data Movers could support
one standby for seven primaries
Recommend one standby for every three primary Data Movers

2005 EMC Corporation. All rights reserved.

Data Mover Failover

Description
The standby Data Mover is a hot-spare for the primary Data Movers, and can act as a spare for any
Data Mover in the system.
Data Mover Ratio
The recommended ratio is one standby for every three Data Movers. A two Data Mover NSxxx is preconfigured with server_3 as a standby for server_2. The NS704G can be configured as 3 primaries and
one standby, or 2 primaries and 2 standbys.

Data Mover Failover

-5

Copyright 2006 EMC Corporation. All Rights Reserved.

Failover Policies
y Predetermined action
y Invoked by the Control Station when failure is detected
No communications between Control Station and Data Mover
If CS is down, cannot detect a DM failure and cannot invoke a failover

y Three configurable Failover Policies:


Policy    Action
Auto      The Control Station immediately activates the standby Data Mover.
          Default policy when using Celerra Manager for configuration.
Retry     The Control Station first tries to recover the primary Data Mover.
          If the recovery fails, the Control Station automatically activates the standby.
Manual    The Control Station shuts down the primary Data Mover and takes no other action.
          The standby must be activated manually. Default policy when using the CLI for configuration.

2005 EMC Corporation. All rights reserved.

Data Mover Failover

Failover policy
A failover policy is a predetermined action that the Control Station invokes when it detects a failover
condition. The failover policy type determines the action that occurs in the event of a Data Mover
failover.
Types of failover policies
The following are the three failover policies from which you can choose when you configure Data
Mover failover:
y Auto: The standby Data Mover immediately takes over the function of its primary. (default policy)
y Retry: The Celerra File Server first tries to recover the primary Data Mover. If the recovery fails,
the Celerra File Server automatically activates the standby.
y Manual: The Celerra File Server issues a shutdown for the primary Data Mover. The system takes
no other action. The standby must be activated manually.

Data Mover Failover

-6

Copyright 2006 EMC Corporation. All Rights Reserved.

Preparing the Data Mover


y During planning and design, determine primary and
standby Data Movers
y Ensure that the
Primaries and standbys have the same network hardware
components
Standby is not currently configured to provide any services

2005 EMC Corporation. All rights reserved.

Data Mover Failover

To prepare the Data Mover for configuration:


y Determine which Data Movers are primary and which are standby.
y Verify that the standby Data Movers are the same model, or greater processing capability, as the
primaries that they are protecting.
y Ensure that the primary Data Movers and the corresponding standbys have the same network
hardware components.
y Ensure that the standby Data Mover is free from all networking and file system configurations
(there should be no IP configurations set for the network interfaces).
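
A quick way to confirm that a candidate standby Data Mover carries no user-facing IP configuration is to list its interfaces with the command from the networking module (a sketch; only the loopback and internal management interfaces would be expected in the output):

  server_ifconfig server_3 -a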

Data Mover Failover

-7

Copyright 2006 EMC Corporation. All Rights Reserved.

Verify Ethernet Switch Configuration


y Network configuration must be identical to ensure continued client access after a failover
Switch ports of the primary Data Movers are assigned to the same VLANs
Same EtherChannel or LACP configuration on the switch ports that connect both the primary and standby Data Movers
(Slide diagram: the primary and standby Data Movers are connected to Ethernet switch ports on different VLANs, VLAN 40 and VLAN 50, with network clients behind the switch)
Note: This configuration would result in the inability of clients to connect after a failover
2005 EMC Corporation. All rights reserved.

Data Mover Failover

Also prior to configuration, check the Ethernet switch to verify that the switch ports of the primary
Data Movers are assigned to the same VLANs as the standby Data Mover, unless VLAN tagging
(discussed later in the course) will be employed. In addition, verify that any EtherChannel
configuration related to the ports for the primary Data Movers is in place for the standby.

Data Mover Failover

-8

Copyright 2006 EMC Corporation. All Rights Reserved.

Configuring Data Mover Failover


y Set Data Mover type for Primary Data Mover
server_setup server_2 -type nas

y Set the Data Mover type of at least one Data Mover to Standby
server_setup server_3 -type standby

y By default, the NSxxx configures server_2 as primary and server_3 as standby

2005 EMC Corporation. All rights reserved.

Data Mover Failover

After verifying that the standby Data Mover is free from any configurations and that the hardware
matches that of the primary, follow the steps below to configure Data Mover failover.
y Configure the initial Data Mover to standby.
Result:
The new standby Data Mover will reboot and assume the standby role.
y When the reboot is complete, configure additional primary Data Movers to use the same standby
(optional).

Data Mover Failover

-9

Copyright 2006 EMC Corporation. All Rights Reserved.

Failover Configuration
To configure Data Mover failover
y Command syntax:
server_standby <primary_DM> -create mover=<standby_DM>
-policy {auto|manual|retry}

y For example to assign server_3 as the standby for


server_2:
server_standby server_2 -c mover=server_3 -policy auto

y Result:
server_2: server_3 is rebooting as standby

2005 EMC Corporation. All rights reserved.

Data Mover Failover

Command
server_standby <primary_DM> -create mover=<standby_DM> -policy
{auto|manual|retry}

Example
To assign server_3 as the standby for server_2, use the following command:
server_standby server_2 -create mover=server_3 -policy auto
Result
server_2: server_3 is rebooting as standby

Data Mover Failover

- 10

Copyright 2006 EMC Corporation. All Rights Reserved.

Failover Configuration
To configure server_3 as the standby for server_2:
Select Data Movers > server_2 > Role= primary > Standby Movers= server_3
Failover Policy= auto > apply

2005 EMC Corporation. All rights reserved.

Data Mover Failover

This slide shows how to configure server_3 as a standby Data Mover.


To configure server_3 as the standby for server_2; Select data movers > server_2 > select the Role,
Standby Mover, and Failover Policy
Enter the name of the standby in the Standby Mover section and click apply.
Note: server_3 will reboot.

Data Mover Failover

- 11

Copyright 2006 EMC Corporation. All Rights Reserved.

Testing Data Mover Failover


y Test Data Mover failover periodically/regularly e.g.
Quarterly
y Perform test at low/no impact times
y When testing, validate the following
Standby assumes the identity of primary
VLAN membership and trunking
User access to:
File systems
Shares
Home directories
etc.

2005 EMC Corporation. All rights reserved.

Data Mover Failover

It is important to test Data Mover failover to ensure that if needed, the Data Movers could properly
failover and clients can properly access the standby Data Mover. If any network or Operating Systems
environmental changes are made, ensure that the standby Data Mover(s) still provide client access to
their file systems.

Data Mover Failover

- 12

Copyright 2006 EMC Corporation. All Rights Reserved.

Testing Data Mover Failover


y Rather than forcing a failure, gracefully test Data Mover
failover by activating standby Data Mover periodically
y Command:
server_standby <primary_DM> -activate mover

y Example:
server_standby server_2 -activate mover

y Failed Data Mover is renamed


Example: server_2.faulted.server_3

2005 EMC Corporation. All rights reserved.

Data Mover Failover

Testing Data Mover failover


It is recommended that you periodically test the functionality of the Data Mover failover configuration.
Testing a Data Mover involves manually forcing a failover.
Command
server_standby <primary_DM> -activate mover
Example
To force server_2 to failover to its standby Data Mover, use the following command
server_standby server_2 -activate mover
server_2:
replace in progress ..done
commit in progress (not interruptible)...done
server_2: renamed as server_2.faulted.server_3
server_3: renamed as server_2
Note: The primary Data Mover is renamed to server_2.faulted.server_3
(OriginalPrimary.faulted.OriginalStandby)and the standby Data Mover assumes the
name of the failed primary.
Data Mover Failover

- 13

Copyright 2006 EMC Corporation. All Rights Reserved.

Restoring a Data Mover After Failover


Restore the failed Data Mover after
A failover has occurred
Any issues that caused the failover have been corrected

y Command:
server_standby <primary_DM> -restore mover

y Example:
server_standby server_2 -restore mover

2005 EMC Corporation. All rights reserved.

Data Mover Failover

Restoring Data Mover failover


After a failover has occurred and any issues that caused the failover have been corrected, restore the
failed Data Mover back to its primary status using the restore option. Also, restore the failed Data
Mover after testing.
Command
To restore a Data Mover, use the following command:
server_standby <primary_DM> -restore mover
Example
To restore server_2 (the failed Data Mover) back to primary status, use the following command:
server_standby server_2 -restore mover

Data Mover Failover

- 14

Copyright 2006 EMC Corporation. All Rights Reserved.

Restoring a Data Mover After Failover


y

Data Movers > highlight the server to restore > click Restore

2005 EMC Corporation. All rights reserved.

Data Mover Failover

This slide shows how to use Celerra Manager to restore a Data Mover after a failover.

Data Mover Failover

- 15

Copyright 2006 EMC Corporation. All Rights Reserved.

Deleting a Failover Relationship


y To remove the failover relationship
server_standby server_2 -delete mover

y To configure the standby Data Mover as primary


server_setup server_3 -type nas

2005 EMC Corporation. All rights reserved.

Data Mover Failover

You can delete a failover relationship at any time. To delete a failover relationship:
Remove relationship
server_standby server_2 -delete mover

Note: If the Data Mover is a standby for more than one primary, you must remove the relationship for
each Data Mover.
Set up the standby Data Mover as primary
server_setup server_3 -type nas

Note: The Data Mover will reboot.

Data Mover Failover

- 16

Copyright 2006 EMC Corporation. All Rights Reserved.

Deleting a Failover Relationship (Celerra Manager)


y To delete a standby relationship:
Data Movers > Server_x > delete entry in Standby Movers

y To change a Data Mover from standby to primary:
Data Movers > server_x > select Role: primary

2005 EMC Corporation. All rights reserved.

Data Mover Failover

The interface on the left shows how to delete a standby relationship. The interface on the right shows
how to change the role of the standby Data Mover to primary.

Data Mover Failover

- 17

Copyright 2006 EMC Corporation. All Rights Reserved.

Module Summary
Key points covered in this module are:
Failover is when a primary Data Mover fails and a
designated standby is activated in its place
The standby Data Mover assumes the identity of the
failed Data Mover
Network identity including MAC, IP addresses, routing and other
network configuration
Storage Identity: File systems
Service Identity: Shares and Exports

Three Celerra failover policies are available; auto, retry,


and manual
It is important to periodically test the failover functionality
and client access to data after a failover
2005 EMC Corporation. All rights reserved.

Data Mover Failover

The key points covered in this module are shown here. Please take a moment to read them.

Data Mover Failover

- 18

Copyright 2006 EMC Corporation. All Rights Reserved.

Closing Slide

2005 EMC Corporation. All rights reserved.

Data Mover Failover

Data Mover Failover

- 19

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra ICON
Celerra Training for Engineering

Network High Availability

2005 EMC Corporation. All rights reserved.

Network Availability - 1

Copyright 2006 EMC Corporation. All Rights Reserved.

Revision History
Rev Number    Course Date      Revisions
1.0           February 2006    Complete
1.1           March 2006       Updates and enhancements
1.2           May 2006         Updated diagrams

2005 EMC Corporation. All rights reserved.

Network Availability - 2

Copyright 2006 EMC Corporation. All Rights Reserved.

Network High Availability


Upon completion of this module, you will be able to:
y Plan and configure EtherChannel on a Data Mover
y Plan and configure LACP on a Data Mover
y Configure a Celerra Data Mover for Fail Safe Network
y Plan and configure support for VLAN Tagging on a Data
Mover

2005 EMC Corporation. All rights reserved.

Network Availability - 3

Continuous data availability is a key requirement in a networked storage environment. In prior
modules we discussed the high availability features of the back-end storage with its redundant
connections to the Data Movers, and the ability to configure Data Movers in a high availability failover
configuration. In this module we will discuss options for configuring network availability,
including EtherChannel, LACP, and Fail Safe Network.

Network Availability - 3

Copyright 2006 EMC Corporation. All Rights Reserved.

Lesson 1: Trunks - EtherChannel & LACP


Upon completion of this lesson, you will be able to:
y Describe EtherChannel functionality
y Configure a Celerra for EtherChannel
y Configure a Data Mover for Link Aggregation Control
Protocol (LACP)

2005 EMC Corporation. All rights reserved.

Network Availability - 4

The objectives for this lesson are shown here. Please take a moment to read them.

Network Availability - 4

Copyright 2006 EMC Corporation. All Rights Reserved.

High Availability Networks - Trunks


[Diagram: Data Mover physical devices cge0-cge5, fge0, and fge1 combined into a Trunk virtual device; interface 10.127.50.12 is configured on the virtual device, which connects to the Ethernet switch and client systems]

y Two or more network devices can be configured into a Virtual Device called a Trunk
EtherChannel
802.3ad Link Aggregation Control Protocol (LACP)

y Require Ethernet switch support and configuration


2005 EMC Corporation. All rights reserved.

Network Availability - 5

Network Availability - 5

Copyright 2006 EMC Corporation. All Rights Reserved.

EtherChannel
y In addition to configuring the Data Mover, specific ports
on the Ethernet switch must be configured into an
EtherChannel
Combines physical ports into one logical port
Provides network availability - Failed connections redirected to
other ports
Does not provide increased client bandwidth

Example: Channeling ports 1-4


[Diagram: Data Mover connected to an Ethernet switch; switch ports 1-4 are combined into an EtherChannel trunk, while the remaining ports connect network clients]

2005 EMC Corporation. All rights reserved.


Network Availability - 6

EtherChannel
EtherChannel combines multiple physical ports (two, four, or eight) into a single logical port for the
purpose of providing fault tolerance for Ethernet ports and cabling. EtherChannel is not designed for
load balancing or to increase bandwidth.
For example, four physical ports can be combined into a single logical interface. If one of those
interfaces should fail, the traffic from the affected node can then be redirected through one of the other
physical interfaces within the EtherChannel.
Bandwidth
EtherChannel does not provide increased bandwidth from the client's perspective. Because each client
is connected only to a single port, the client does not receive any added performance. Any increased
bandwidth on the side of the channeled host (the Data Mover) is incidental. However, this is not an
issue because the objective of EtherChannel is to provide fault tolerance, not to increase aggregate
bandwidth.
Notes: For a complete discussion of the algorithms used, consult www.cisco.com and perform a search
on the term "statistical load balancing."
IMPORTANT: Goal is fault tolerance, NOT increased bandwidth.

Network Availability - 6

Copyright 2006 EMC Corporation. All Rights Reserved.

Statistical Load Distribution


y Three methods of statistical load distribution
MAC address
IP address (default)
IP address and TCP port

y When a port fails, traffic is redirected to another available port in the


trunk
y Configured one of two ways:
Parameters file
Configure the trunk using the server_sysconfig command
[Diagram: Data Mover ports cge0-cge3 combined into an EtherChannel trunk connecting to the Ethernet switch]
2005 EMC Corporation. All rights reserved.


Network Availability - 7

Once an EtherChannel (or LACP aggregation) is configured, the Ethernet switch must make a
determination as to which physical port to use for a connection. Three statistical load distribution
methodologies are available on the Celerra: distribution by MAC address, by IP address, or by a
combination of IP address and TCP port.
MAC Address
The Ethernet switch hashes enough bits (1 bit for 2 ports, 2 bits for 4 ports, and 3 bits for 8 ports) of
the source and/or destination MAC addresses of the incoming packet through an algorithm. The result
of the hashing is used to decide through which physical port to make the connection.
Keep in mind that traffic coming from a remote network will contain the source MAC address of the
router interface nearest the switch. This could mean that all traffic from the remote network will be
directed through the same interface in the channel.
IP address
The source and destination IP addresses are observed while determining the output port. IP is the default.
TCP
The source and destination IP addresses and TCP ports are observed while determining the output port.
Configuration
Statistical load distribution can be configured for the whole system by setting the LoadBalance=
parameter in the global or local parameters file. It can also be configured per trunk by using the
server_sysconfig command. Configuring load distribution on a per trunk basis overrides the entry in
the parameters file.
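For example, a minimal sketch (the device names are placeholders, assuming ports cge0 and cge1 are connected to channeled switch ports): to create a trunk that distributes connections by IP address and TCP port rather than the IP-only default, include the lb option when building the virtual device:
server_sysconfig server_2 -virtual -name trk0 -create trk -option "device=cge0,cge1 lb=tcp"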

Network Availability - 7

Copyright 2006 EMC Corporation. All Rights Reserved.

EtherChannel Setup

Step   Action                                            Where
1      Confirm Channel Configuration                     Switch
2      Create a Trunk Virtual Device                     Data Mover
3      Configure Interface on the Trunk Virtual Device   Data Mover

2005 EMC Corporation. All rights reserved.

Network Availability - 8

Supported configurations
EMC Celerra supports the following EtherChannel configurations:
y A Data Mover can channel 2, 4, or 8 FE interface ports together
y Two Gigabit Ethernet ports may be channeled

Network Availability - 8

Copyright 2006 EMC Corporation. All Rights Reserved.

Confirming the Ethernet Switch


y Ports can be channeled
  show port capabilities 2/1
y Ports are channeled
  show port channel

[Diagram: Data Mover ports cge0-cge3 connected to switch ports; verify the ports are configured for EtherChannel]


2005 EMC Corporation. All rights reserved.

Network Availability - 9

Confirming Ethernet Switch


To confirm the Ethernet switch:
y Confirm with the network's Ethernet switch administrator that the Data Mover ports are physically
connected to ports on the switch that are capable of being channeled together. An example
command that the switch administrator may use is show port capabilities 2/1 (where 2/1 is one of
the physical ports connected to the Data Mover).
y Verify that channeling is configured for the ports. An example command is show port channel.

Network Availability - 9

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating an EtherChannel Virtual Device (CLI)


To create a Celerra virtual device, combine the Data
Mover's physical network ports into one logical device
y Command:
server_sysconfig <movername> -virtual
-name <virtual_device_name> -create trk
-option "device=<device>,<device>
lb={mac|ip|tcp}"

y Example to combine ports cge0, cge1, cge4, and cge5
into a virtual device named trk0
server_sysconfig server_2 -v -n trk0 -c trk -o
"device=cge0,cge1,cge4,cge5"

2005 EMC Corporation. All rights reserved.

Network Availability - 10

Configuring the Data Mover for Ether Channel


Combine the Data Mover's physical network device ports into one logical device - a virtual device.
The example in the slide combines ports cge0, cge1, cge4, and cge5 into a virtual device named trk0.

Note: Since no load distribution method has been specified in this example, the default is IP.

Network Availability - 10

Copyright 2006 EMC Corporation. All Rights Reserved.

Assigning an IP Address
To assign an IP address to a virtual device
y Command:
server_ifconfig server_x -create -Device
<virtual_device_name> -name <interface_name>
-protocol IP <ipaddr> <ipmask> <ipbroadcast>

y Example:
server_ifconfig server_2 -c -D trk0 -n trk0
-p IP 192.168.101.20 255.255.192.0
192.168.127.255

2005 EMC Corporation. All rights reserved.

Network Availability - 11

Assigning an IP address
Once the virtual device has been created, use the server_ifconfig command to assign an IP address to
the virtual device. Be sure to use the name designated with the -name parameter of the
server_sysconfig command as the -Device parameter in the server_ifconfig statement.
Example
In the command below, the -D trk0 parameter refers to the virtual device that was created (on the
previous page) using the -name trk0 parameter.
server_ifconfig server_2 -c -D trk0 -n trk0 -p IP 192.168.101.20 255.255.192.0 192.168.127.255
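To confirm the result, the interfaces configured on the Data Mover can be listed from the Control Station (output varies by configuration):
server_ifconfig server_2 -all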

Network Availability - 11

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating a Virtual Device


y Network > Devices > New

2005 EMC Corporation. All rights reserved.

Network Availability - 12

This slide shows how to configure an EtherChannel device using Celerra Manager.

Network Availability - 12

Copyright 2006 EMC Corporation. All Rights Reserved.

Assigning an IP Address
y Network > Interfaces > New

2005 EMC Corporation. All rights reserved.

Network Availability - 13

This slide shows how to configure an IP address for the EtherChannel device using Celerra Manager.

Network Availability - 13

Copyright 2006 EMC Corporation. All Rights Reserved.

Link Aggregation Control Protocol (LACP)


y Industry standard (802.3ad) alternative to EtherChannel
y Configuring Data Mover for LACP
Similar to EtherChannel
Specify LACP protocol

2005 EMC Corporation. All rights reserved.

Network Availability - 14

Link Aggregation Control Protocol


Link Aggregation Control Protocol (LACP) is an alternative to EtherChannel. The IEEE 802.3ad Link
Aggregation Control Protocol also allows multiple Ethernet links to be combined into a single virtual
device on the Data Mover. Like EtherChannel, by combining many links into a single virtual device
you get:
y Increased availability (A single link failure does not break the link)
y Port Distribution (The link used for communication with another computer is determined from the
source and destination MAC addresses)
y Better Link Control (LACP is able to detect broken links passing LACPDU, Link Aggregation
Control Protocol Data Unit, frames between the Data Mover and the Ethernet switch)

Network Availability - 14

Copyright 2006 EMC Corporation. All Rights Reserved.

EtherChannel and LACP Comparison


Feature: Switch support
  Ethernet Channel: Switch must use IEEE standard, Fast, or Gig Ethernet
  Link Aggregation: Switch must support IEEE 802.3ad Link Aggregation
Feature: Link speeds
  Ethernet Channel: Allows links of different speeds
  Link Aggregation: Disables links with a different speed than the majority
Feature: Duplex
  Ethernet Channel: Full or half
  Link Aggregation: Full
Feature: Number of ports
  Ethernet Channel: 2, 4, or 8
  Link Aggregation: Any number > 1
Feature: Availability
  Ethernet Channel: No keep-alive mechanism to handle broken links which are physically still marked as up
  Link Aggregation: Better link control - LACPDU frames are transmitted on each link in the aggregation to ensure they are not broken
Feature: Misconfiguration protection
  Ethernet Channel: Misconfigured links difficult to detect
  Link Aggregation: Detects misconfigured links and marks them as down

2005 EMC Corporation. All rights reserved.

Network Availability - 15

Shown here is a comparison of Ethernet Channel and Link Aggregation.

Note: Although LACP supports any number of ports greater than 1, the Celerra supports a maximum
of 12.

Network Availability - 15

Copyright 2006 EMC Corporation. All Rights Reserved.

Configuring a Data Mover for LACP


Configuring LACP is very similar to configuring
EtherChannel on a Data Mover
y Command:
server_sysconfig server_x -virtual
-name <virtual_device_name> -create trk
-option "device=<device>,<device> protocol=lacp"

y Example:
server_sysconfig server_2 -v -n trk0 -c trk -o
"device=cge0,cge1,cge4,cge5 protocol=lacp"

2005 EMC Corporation. All rights reserved.

Network Availability - 16

LACP considerations:
y An LACP link can be created with any number of Ethernet devices, up to the maximum of 12 per
virtual device.
y Only Full Duplex Ethernet ports can be used to create the link. ATM, FDDI or any other types of
ports are not supported.
y All Data Mover ports used must be the same speed. If a mixture of port speeds is given, the Data
Mover will choose the greatest number of ports at the same speed. In case of a tie, the fastest ports
will be chosen. For example, if you used "device=cge0,cge1,fge0" when setting up
the virtual device ports, cge0 and cge1 will be used despite the fact that the fge0 port would be
faster. The primary goal is higher availability.
y Only physical ports on the Data Mover can be used to create the link.
y Although multiple links are joined, no one client will gain an advantage from this configuration
with regards to network speed or throughput.
Verifying that ports are up and running
One way to verify all of the ports are up and running would be to run "show port lacp-channel statistic"
(on a Cisco Systems switch). Each time the command is run you can see that the LACPDU packet
reports have changed for active ports.
Monitoring the number of Ibytes and Obytes
Use the server_netstat -i command to monitor the number of Ibytes and Obytes for each port.
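For example, from the Control Station (output varies by configuration):
server_netstat server_2 -i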

Network Availability - 16

Copyright 2006 EMC Corporation. All Rights Reserved.

Configuring a Data Mover for LACP


y Network > Devices > New > select Link Aggregation

Network Availability - 17

2005 EMC Corporation. All rights reserved.

This slide shows how to configure an LACP device using Celerra Manager.

Network Availability - 17

Copyright 2006 EMC Corporation. All Rights Reserved.

Lesson 2: Fail Safe Network


Upon completion of this lesson, you will be able to:
y List the components of the FSN virtual device
y Create an FSN virtual device
y List virtual devices
y Describe how to delete virtual devices

2005 EMC Corporation. All rights reserved.

Network Availability - 18

The objectives for this lesson are shown here. Please take a moment to read them.

Network Availability - 18

Copyright 2006 EMC Corporation. All Rights Reserved.

High Availability Network FailSafe Networks


[Diagram: Data Mover physical devices cge0-cge5, fge0, and fge1 combined into a FailSafe virtual device; interface 10.127.50.12 is configured on the virtual device, which connects to the Ethernet switch and client systems]

y FailSafe Network (FSN) devices are configured as Virtual Devices


Interfaces are configured on the FSN Virtual Device
Active Standby configuration

y No special switch support or configuration required


2005 EMC Corporation. All rights reserved.

Network Availability - 19

FSN
Fail Safe Network (FSN) is a virtual network interface feature of the Celerra. Like EtherChannel, FSN
provides fault tolerance beyond the physical Data Mover, adding redundancy for cabling and
switch ports.
Unlike EtherChannel, FSN can also provide fault tolerance in the case of switch failure. While
EtherChannel provides redundancy across active ports (all ports in the channel carrying traffic), FSN is
comprised of an active and a standby interface. The standby interface does not send or respond to any
network traffic.
Switch independence
FSN operation is independent of the switch. Recall that EtherChannel requires an Ethernet switch that
supports EtherChannel. This is not the case with FSN because it is simply a combination of an active
and standby interface with failover being orchestrated by the Data Mover itself. Additionally, the two
members of the FSN device can also be connected to separate Ethernet switches.

Network Availability - 19

Copyright 2006 EMC Corporation. All Rights Reserved.

Many Possible Configurations


[Diagram: Data Mover devices cge0-cge5, fge0, and fge1 combined into Trunk and FailSafe virtual devices; interfaces 10.127.50.12, 10.127.60.12, 10.127.70.12, and 10.127.80.12 are configured on the virtual and physical devices, which connect to the Ethernet switch and client systems]

y Trunks and FSNs may be used together


May be unlike devices

y FSN allow for the configuration of Primary and Standby


2005 EMC Corporation. All rights reserved.

Network Availability - 20

FSN virtual device


Like EtherChannel, the FSN virtual device is created using the server_sysconfig command, and then
the FSN device is used for the -Device parameter in the server_ifconfig IP configuration. The FSN
virtual device can be composed of any combination of like or dissimilar Ethernet interfaces. For
example:
FE with FE
GbE with GbE
GbE with FE
EtherChannel with FE
EtherChannel with GbE
Note: FDDI and ATM devices are not supported with FSN.
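As a sketch of the dissimilar-devices case (names and addresses are illustrative only, assuming an EtherChannel trk0 already exists), an EtherChannel could be paired with a Gigabit port and then given an interface:
server_sysconfig server_2 -virtual -name fsn1 -create fsn -option device=trk0,fge0
server_ifconfig server_2 -c -D fsn1 -n fsn1 -p IP 10.127.60.12 255.255.255.0 10.127.60.255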
Setting a primary device
When the primary option is specified, the primary device will always be the active device (except
when it is in a failed state). This is generally not recommended.

Network Availability - 20

Copyright 2006 EMC Corporation. All Rights Reserved.

Sample Configuration (Simple)


fsn0: cge0 (active), cge1 (standby)

[Diagram: Data Mover ports cge0 and cge1 form FSN device fsn0, connected through a single switch to the network]
2005 EMC Corporation. All rights reserved.

Network Availability - 21

This slide shows an FSN device that consists of two NIC ports (cge0 and cge1) on the same Data
Mover, connected to the network through the same switch.
The operation is as follows:
1. If NIC cge0 is the active connection, then all traffic through the FSN device flows through that
port and to the network.
2. If the link signal fails (for example, because of a physical hardware disconnection), the link
automatically fails over to the next NIC port in the FSN device (in this example, cge1), using the
same IP and MAC address combination. All traffic then flows through cge1.

Network Availability - 21

Copyright 2006 EMC Corporation. All Rights Reserved.

Sample Configuration (Complex)


trk0 = cge0, cge1 (active)
trk1 = cge2, cge3 (standby)

[Diagram: FSN device fsn0 is composed of EtherChannels trk0 and trk1, each connected to a different switch; the two switches are joined by an ISL and connect to the network]
2005 EMC Corporation. All rights reserved.

Network Availability - 22

This slide shows an FSN device that consists of an EtherChannel called trk0 (comprised of cge0 and
cge1) and another EtherChannel called trk1 (comprised of cge2 and cge3). The two
EtherChannels connect to different switches. In this case, the active device, trk0, will be used for all
network traffic unless both paths in that EtherChannel fail, or the switch fails. If that occurred,
trk1 with its associated switch would take over network traffic for the Data Mover.

Network Availability - 22

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating an FSN
y When creating an FSN, if you do not specify a primary
device
Both devices are considered equal
No fail back

y Command:
server_sysconfig server_x -virtual -name
<fsn_name> -create fsn -option
device=<dev>,<dev>

y Example:
server_sysconfig server_2 -v -n fsn0 -c fsn -o
device=trk0,trk1

2005 EMC Corporation. All rights reserved.

Network Availability - 23

Creating a FailSafe Network Device without a Primary Device defined is the recommended approach.

To create a FailSafe Network device:


server_sysconfig <movername> -virtual -name <fsn_name> -create fsn -option
device=<dev>,<dev>

For example, to create a FailSafe Network device called fsn0 using two EtherChannels called trk0 and
trk1:
server_sysconfig server_2 -v -n fsn0 -c fsn -o device=trk0,trk1

Network Availability - 23

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating an FSN with an Optional Primary Device


y If you specify a primary device when configuring FSN:
Primary device will be used if it is available
When primary device fails, will failover to standby
Will automatically failback when primary returns to service
Typically used when devices are not equal

y Command:
server_sysconfig server_x -virtual -name
<fsn_name> -create fsn -option
primary=<primary_dev> device=<standby_dev>

y Example:
server_sysconfig server_2 -v -n fsn0 -c fsn -o
primary=cge0 device=cge4
2005 EMC Corporation. All rights reserved.

Network Availability - 24

To create an FSN with the Primary device option (not recommended):

server_sysconfig <movername> -virtual -name <fsn_name> -create fsn -option
primary=<primary_dev> device=<standby_dev>

For example, to create an FSN for server_2 named fsn0 with the primary device defined as cge0 and
the standby device as cge4:
server_sysconfig server_2 -v -n fsn0 -c fsn -o primary=cge0 device=cge4

Note: The Celerra Manager screen which can be used to configure Fail Safe Network with an optional
primary device is the same as the previous slide showing the configuration without a primary device.

Network Availability - 24

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating an FSN With Primary Device


y Network > Devices > New > select Fail Safe Network

2005 EMC Corporation. All rights reserved.

Network Availability - 25

This slide shows how to configure an FSN device, optionally specifying a primary device, using Celerra
Manager.

Network Availability - 25

Copyright 2006 EMC Corporation. All Rights Reserved.

Listing Virtual Devices


To display a list of virtual devices
y Command:
server_sysconfig <movername> -v
server_2 :
Virtual devices:
fsn0 active=trk0 devices=trk0, trk1
trk1 devices=cge1 cge3 cge5
trk0 devices=cge0 cge2 cge4
fsn failsafe nic devices : fsn0
trk trunking devices : trk0 trk1

2005 EMC Corporation. All rights reserved.

Network Availability - 26

Listing virtual devices


The above list reports that server_2 has an FSN, fsn0, made up of two EtherChannels, trk0 and trk1.
The currently active EtherChannel is trk0. EtherChannel trk0 is composed of cge0, cge2, and cge4.
EtherChannel trk1 is composed of cge1, cge3, and cge5.

Network Availability - 26

Copyright 2006 EMC Corporation. All Rights Reserved.

Listing Virtual Devices


y Network > Devices

2005 EMC Corporation. All rights reserved.

Network Availability - 27

This slide shows how to list all virtual devices using Celerra Manager.

Network Availability - 27

Copyright 2006 EMC Corporation. All Rights Reserved.

Deleting Virtual Devices


To delete virtual devices
y Command:
server_sysconfig server_x -virtual -delete
<device>

y Example:
server_sysconfig server_2 -v -d fsn0

Network Availability - 28

2005 EMC Corporation. All rights reserved.

To delete a Virtual Device:

server_sysconfig server_X -virtual -delete <device>

For example, to delete a virtual device on server_2 called fsn0:

server_sysconfig server_2 -v -d fsn0

Network Availability - 28

Copyright 2006 EMC Corporation. All Rights Reserved.

Deleting Virtual Devices


y

Network > Devices > highlight the virtual device(s) to delete > click Delete

2005 EMC Corporation. All rights reserved.

Network Availability - 29

This slide shows how to delete virtual devices using Celerra Manager.

Network Availability - 29

Copyright 2006 EMC Corporation. All Rights Reserved.

Lesson 3: VLAN Tagging


Upon completion of this lesson, you will be able to:
y Describe VLAN tagging and how it is implemented
y Configure VLAN tagging
y Configure a Data Mover for multiple VLANs

2005 EMC Corporation. All rights reserved.

Network Availability - 30

The objectives for this lesson are shown here. Please take a moment to read them.

Network Availability - 30

Copyright 2006 EMC Corporation. All Rights Reserved.

Ethernet Switch Review


y Ethernet hub
Layer 1
All traffic sent to all ports
Half duplex mode only
Often limited to 10 Mbps

y Ethernet switch
Layer 2
Sends traffic to specific port
100Mbps/Full duplex support

y Managed Ethernet switch


Allow control and traffic management
Includes features such as EtherChannel and VLANs
2005 EMC Corporation. All rights reserved.

Network Availability - 31

Ethernet hub considerations


The Ethernet hub operates at the Physical layer of the network (Layer 1) and incorporates no software
or intelligence. Because the hub is incapable of making any decisions, it sends all traffic to all ports in
order to ensure that a packet reaches its destination. This imposes unwanted overhead on all of the
components involved, which results in unsatisfactory performance. Additionally, because hubs cannot
provide dedicated node-to-node communication, they cannot support full duplex communication.
When nodes communicate in half duplex mode, one node will transmit while the other receives. In full
duplex mode, both network nodes transmit and receive simultaneously, thus doubling theoretical
throughput. Although new Fast Ethernet (FE) hubs are capable of 100Mbps transmission, many models
are limited to 10 Mbps.
Ethernet switch
The Ethernet switch operates at the Data Link layer of the network (Layer 2), providing software
capable of directing network traffic only to the port(s) specified in the destination of the packet. A
switched network offers direct node-to-node communication, thus supporting full duplex
communication. While the 10Mbps FE limitation is common with hubs, FE switches provide 100Mbps
transmission.
Managed switches
Beyond the standard Ethernet switches are the managed switches that provide a variety of software
enhancements that allow the network administrator to control and manage network traffic. Some of the
features that will be of interest in Celerra Management are EtherChannel and VLANs.

Network Availability - 31

Copyright 2006 EMC Corporation. All Rights Reserved.

Ethernet Switch Table


y Associates MAC address to port
Switch reads source address frame of sender
Records mapping in table

y Example:
00-D0-59-23-E3-AE -> Port 3/32

Ethernet frame: Preamble | Dest MAC | Source MAC | Type | Data (46-1500 bytes) | FCS

2005 EMC Corporation. All rights reserved.

Network Availability - 32

Ethernet switch table


In order to send network traffic to only the necessary port(s), the Ethernet switch builds a table of
which MAC (Media Access Control) addresses can be accessed off of each switch port. Since each
packet on the network includes the MAC address of the sending node, the switch simply reads this
frame of the packet as it enters the switch and records the port through which the packet came. In the
sample Ethernet packet above, the FCS is the Frame Check Sequence which is used to validate the
integrity of the packet.

Example:
For example, Node A (Port 3/32, address 00-D0-59-23-E3-AE) sends a packet to Node Z (Port 2/11,
address 00-D0-59-23-D0-3C). The switch records that 00-D0-59-23-E3-AE can be accessed via port
3/32, and, since it does not yet know where Node Z is located, it sends the packet to all ports much as a
hub would. When Node Z replies to Node A, the switch will read the source address from Node Z's
packet and record that 00-D0-59-23-D0-3C can be accessed via port 2/11. This time, however, the
switch knows that Node A (00-D0-59-23-E3-AE) is located off of port 3/32 so Node Z's reply is sent
only to Node A's port. It is only a short time before the switch table has records for all of the nodes
connected to the switch.

Network Availability - 32

Copyright 2006 EMC Corporation. All Rights Reserved.

VLANs
y Virtual Local Area Network (VLAN)
y Managed Switches typically have the capability of being
configured to support VLANs
Groupings of switch ports
Divides large number of ports
Confines broadcasts
Contributes to security

Combines physically separate LANs

Different VLANs usually use different IP networks


InterVLAN traffic must be routed

2005 EMC Corporation. All rights reserved.

Network Availability - 33

VLANs
VLANs (Virtual Local Area Networks) are a method of grouping switch ports together into a virtual
LAN as the name would indicate. Switch ports, as well as router interfaces, can be assigned to VLANs.
Using VLANs
VLANs can be used to:
y Break up a very large LAN into smaller virtual LANs. This may be useful to control network
traffic such as broadcasts. (Another name often used for a VLAN is a Broadcast Domain.)
Although VLANs are not a security vehicle unto themselves, they can be used as part of an overall
security scheme in the network.
y Combine separate physical LANs into one virtual LAN. For example, the sales staff of an
organization is physically dispersed along both the east and west coasts of the United States
(WAN), yet all of the network clients have similar network needs and should rightly be in the same
logical unit. By employing VLANs, all of the sales staff can be in the same logical network, the
same virtual LAN.
Assigning VLANs
Typically, each IP network would be assigned to a separate VLAN. Additionally, in order to transmit
from one VLAN to another, the traffic would need to go through a router. In line with this thought,
each router interface or sub-interface can be assigned to a VLAN.

Network Availability - 33

Copyright 2006 EMC Corporation. All Rights Reserved.

Example of VLANs

VLAN 10 - Admin Net. - 192.168.64.0
VLAN 20 - MS clients - 192.168.144.0
VLAN 30 - UNIX clients - 192.168.160.0

[Diagram: a single Ethernet switch with groups of ports assigned to VLANs 10, 20, and 30]

2005 EMC Corporation. All rights reserved.

Network Availability - 34

Example of VLANs
In the example shown on the slide, there are four VLANs. One is the default VLAN, VLAN 1. Any
port not assigned to a VLAN is automatically a member of this VLAN. The other three VLANs are
unique. VLAN 10 is the administrative VLAN. This VLAN contains all of the servers, such as NT
servers, the NIS server, as well as the control stations and Data Movers of the Celerra File Servers.
The hosts in this VLAN are in the 192.168.64.0/18* network. All of the Microsoft Windows clients are
in the 192.168.144.0/20 network and are assigned to VLAN 20. VLAN 30 is the 192.168.160.0/20
network, this is the location of all of the UNIX workstations.

* The IP addressing displayed above is a different method of expressing an address and mask
combination. The number following the slash describes the number of network bits in the IP address.
Thus, 192.168.64.0/18 states that the network portion of 192.168.64.0 is the first 18 bits. This
correlates to a subnet mask of 255.255.192.0.
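As a quick check of that conversion: 18 network bits span the first two octets (8 + 8) plus 2 bits of the third octet, so the third octet of the mask is 11000000 in binary, or 192, giving 255.255.192.0. By the same arithmetic, the /20 networks above use a mask of 255.255.240.0.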

Network Availability - 34

Copyright 2006 EMC Corporation. All Rights Reserved.

Multi Switch VLANs


VLAN 10 - Admin Net. - 192.168.64.0
VLAN 20 - MS clients - 192.168.144.0
VLAN 30 - UNIX clients - 192.168.160.0

[Diagram: two Ethernet switches, each with groups of ports assigned to VLANs 10, 20, and 30]
2005 EMC Corporation. All rights reserved.

Network Availability - 35

To transfer frames between ports in the same VLAN on different switches, Inter Switch Links could be used;
however, Trunking provides a more efficient use of ports.

Network Availability - 35

Copyright 2006 EMC Corporation. All Rights Reserved.

VLAN Trunking
y Connecting InterSwitch VLANs wastes ports
y VLAN Trunking allows
multiple VLANs to share one path
Assign one port as Trunk port
Allow VLANs on trunk

y VLAN trunking also allows a router or other host to


participate in multiple VLANs
Supported by most routers
Protocols must match
802.1q
ISL (InterSwitch Link)

Celerra supports 802.1q VLAN tagging


2005 EMC Corporation. All rights reserved.

Network Availability - 36

Connecting VLANs
As previously mentioned, VLANs often span multiple switches. In the simplest form, this is set up by
connecting each VLAN on each switch to a port assigned to the same VLAN on every other switch.
While this will function, costly switch ports are used, making it not practical. For example, to connect
four VLANs across only two switches would require the use of four ports from each switch (or 8
ports). Connecting the same four VLANs across four switches would require 12 ports on each switch
(or 48 ports).
VLAN trunking
An alternative to this simple method is VLAN Trunking. With VLAN Trunking, a single port or multiport channel is set up to allow traffic from multiple VLANs onto the port. These packets are
encapsulated using a Trunking encapsulation protocol such as ISL (InterSwitch Link) or DOT1Q (aka
802.1q) before being transmitted. When the packet reaches the destination switch, the encapsulation is
stripped and the packet can be forwarded into the appropriate VLAN. One important requirement is
that the same encapsulation protocol must be employed on each end of the trunk (whether the device
on the other end is another switch or a router interface).
Notes: Connecting VLANs without trunking wastes ports.

Network Availability - 36

Copyright 2006 EMC Corporation. All Rights Reserved.

The VLAN Tag


y Three types of switch ports
Access ports (typical port)
Trunk ports
Hybrid ports

y Switch adds VLAN frame to packets from access ports

Tagged Ethernet frame: Preamble | Dest MAC | Source MAC | VLAN ID | Type | Data (46-1500 bytes) | FCS

2005 EMC Corporation. All rights reserved.

Network Availability - 37

VLAN ports
The following three port types on the Ethernet switch relate to VLANs:
y The typical port to which a node would be connected is referred to as an access port, which can be
assigned to one, and only one, VLAN.
y Trunk ports are used primarily for interswitch connections or connections to a router.
y The hybrid port can act as either an access or trunk port.
VLAN tag
How does a switch or router know to which VLAN a packet belongs? Assuming that the administrator
has assigned access ports to various VLANs, the switch will then modify packets which enter through
access ports. A new frame is added to the packet beside the source MAC address field; this new frame
identifies the VLAN to which the packet belongs. This is referred to as the VLAN Tag.
Trunk ports do not have VLAN tags added to them because, as you will see later, these packets will
already have the tag. The packets from the Hybrid port are tagged only when a tag in not already
present.

Network Availability - 37

Copyright 2006 EMC Corporation. All Rights Reserved.

InterSwitch VLANs With Trunking


3 VLANs, 2 switches, 2 ports

[Diagram: two Ethernet switches, each with ports in three VLANs; a single 802.1q trunk port on each switch carries traffic for all three VLANs between the switches]

2005 EMC Corporation. All rights reserved.


Network Availability - 38

InterSwitch VLANs with trunking


This slide demonstrates the benefit of VLAN Trunking. Only one port on each switch is needed to
allow all of the VLANs to communicate between the switches.

Network Availability - 38

Copyright 2006 EMC Corporation. All Rights Reserved.

VLAN Tagging for Celerra


y One physical port can now access many VLANs

Allows Data Mover to belong to multiple VLANs


Standby DM can take over from primaries in different VLANs

y To configure Celerra network interfaces to be members of multiple VLANs:
1. Configure switch ports connecting the Celerra for Encapsulation 802.1q
2. Add VLAN Definitions to interface definitions

Celerra will add VLAN tag to Ethernet frames

[Diagram: primary and standby Data Movers connected to 802.1q trunk ports on an Ethernet switch, serving network clients on multiple VLANs]

2005 EMC Corporation. All rights reserved.

Network Availability - 39

VLAN tagging
As discussed earlier in this module, an Ethernet network can implement VLANs to help manage
network traffic. Ethernet switches help to do this by adding a VLAN tag frame to network packets. A
VLAN-tagged frame carries an explicit identification of the VLAN to which it belongs. It carries this
non-null VLAN ID within the frame header. The tagging mechanism implies a frame modification. For
IEEE 802.1Q-compliant switches, the frame is modified according to the port type used (access port,
trunk port, or hybrid port).
Implementing VLAN tagging
Celerra supports the ability to add the VLAN tag for itself. This feature is for use with Gigabit Ethernet
NICs only. Implementing VLAN Tagging effectively allows the physical port to which the Data
Mover is connected to belong to several VLANs at the same time. VLAN Tagging
support enables a single Data Mover with Gigabit Ethernet ports to be the Standby for multiple
primary Data Movers from different VLANs with Gigabit Ethernet ports.
In addition to Data Mover configuration, the physical port that the Data Mover port is connected to
must be configured as a trunk port using the 802.1q protocol by the switch administrator.

Network Availability - 39

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating Multiple Logical Interfaces


To configure multiple logical interfaces using one physical
device
y Examples
server_ifconfig server_2 -c -D cge0 -n admin -p IP
192.168.101.20 255.255.192.0 192.168.127.255
server_ifconfig server_2 -c -D cge0 -n sales -p IP
192.168.144.20 255.255.240.0 192.168.159.255

2005 EMC Corporation. All rights reserved.

Network Availability - 40

Configuring a Data Mover for multiple VLANs


y Create multiple logical interfaces using the server_ifconfig command with different -name
parameters for each logical interface.
y Specify a VLAN for each named interface, still using the server_ifconfig command.
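Putting the two steps together, using the values from the examples in this lesson (interface names and addresses are illustrative), the sequence might look like:
server_ifconfig server_2 -c -D cge0 -n admin -p IP 192.168.101.20 255.255.192.0 192.168.127.255
server_ifconfig server_2 -c -D cge0 -n sales -p IP 192.168.144.20 255.255.240.0 192.168.159.255
server_ifconfig server_2 admin vlan=10
server_ifconfig server_2 sales vlan=20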

Network Availability - 40

Copyright 2006 EMC Corporation. All Rights Reserved.

Assigning Each Interface Name to a VLAN


To assign a VLAN tag to the interface
y Command:
server_ifconfig server_2 <interface>
vlan=<VLAN_ID>

y Examples:
server_ifconfig server_2 admin vlan=10
server_ifconfig server_2 sales vlan=20

To remove the VLAN tag


y Example:
server_ifconfig server_2 admin vlan=0
2005 EMC Corporation. All rights reserved.

Network Availability - 41

Shown here are the commands necessary to assign a VLAN tag to an interface, and to remove the tag.

Network Availability - 41

Copyright 2006 EMC Corporation. All Rights Reserved.

Assigning a VLAN Tag to an Interface


y Network > Interfaces > New

2005 EMC Corporation. All rights reserved.

Network Availability - 42

This slide shows how to assign a VLAN tag while creating an interface.

Note: By default, the name of the interface will be the IP address with underscores. For example, the
interface shown on this slide will be named 10_127_56_109. Please note the new field Name: which
allows the name of the interface to be configured.

Network Availability - 42

Copyright 2006 EMC Corporation. All Rights Reserved.

Module Summary
y EtherChannel combines multiple physical ports (2, 4 or 8) into a
single logical port for the purpose of providing fault tolerance
There are three methods of statistical load distribution available on the
Celerra; MAC address, IP address (default), and a combination of TCP
port and IP address

y A virtual device (EtherChannel or LACP) is created using the


server_sysconfig command, or Celerra Manager
Once a virtual device is created, it must be assigned an IP address

y LACP, the IEEE 802.3ad standard, is similar to EtherChannel in


that it allows multiple Ethernet links to be combined into a single
virtual device on the Data Mover
y Fail Safe Network provides high availability in the event of a port
and Ethernet switch failure
y Switch port trunking and VLAN tagging allows a single Data
Mover Device to participate in multiple VLANs
2005 EMC Corporation. All rights reserved.

Network Availability - 43

Network Availability - 43

Copyright 2006 EMC Corporation. All Rights Reserved.

Closing Slide

2005 EMC Corporation. All rights reserved.

Network Availability - 44

Network Availability - 44

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra ICON
Celerra Training for Engineering

SAN and Storage Requirements

2005 EMC Corporation. All rights reserved.

SAN and Storage Requirements - 1

Copyright 2006 EMC Corporation. All Rights Reserved.

Revision History
Rev Number

Course Date

Revisions

1.0

February 2006

Complete

1.2

May 2006

Update and reorganization

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 2

SAN and Storage Requirements - 2

Copyright 2006 EMC Corporation. All Rights Reserved.

SAN and Back-end Storage requirements


Upon completion of this module, you will be able to:
y Describe the basic architecture of Symmetrix and
CLARiiON storage arrays
y Describe SAN concepts as they apply to a Celerra
Environment
y Identify requirements for user LUNs for both Symmetrix
and CLARiiON back-ends

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 3

The back-end of a Celerra Network Server consists of one or more CLARiiON and/or Symmetrix
storage systems. It is important that you understand, and are able to communicate the configuration
requirements when setting up and supporting a Celerra Network Server. This module will provide an
overview. For more detailed training on CLARiiON and Symmetrix, refer to Knowledgelink.

SAN and Storage Requirements - 3

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Back-end Options

Customer Requirements:
Capacity
Availability
Scalability
Advanced Features

[Diagram: back-end options scale with customer investment, from the CLARiiON CX Series to DMX/DMX-2 to DMX-3]
2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 4

The Celerra Network Server supports both CLARiiON and Symmetrix back-ends. At the high end we
have the Symmetrix DMX-3. The DMX-2 continues to be manufactured and sold for customers who don't
require the scalability that the DMX-3 offers. At the mid-tier, EMC continues to offer and expand the
CLARiiON family of products. Today, the CLARiiON is the most common back-end configuration.

SAN and Storage Requirements - 4

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Volumes
y When a Celerra is first installed, a minimum of six LUNs are either
manually or automatically configured for the Control Volumes
y Additional LUNs must be configured in the Symmetrix or CLARiiON
Back-end storage so that user defined file systems may be
configured
LUN         Size    Contents
00          11GB    DART, individual Data Mover configuration files
01          11GB    Data Mover log files
02          2GB     Reserved (not used on NS-series); Linux on Control Station (CS0) with no local HDD
03          2GB     Reserved (not used on NS-series); Linux on Control Station (CS1) with no local HDD
04          2GB     NAS configuration database (NASDB)
05          2GB     NASDB backups, dump file, log files, etc.
16 (10hex)  varies  User File Systems

2006 EMC Corporation. All rights reserved.
Back-end Storage Requirements - 5

When a Celerra is first installed, a minimum of six LUNs are created either manually or automatically
through the install scripts. The table above displays all of the Celerra System LUNs, along with their
size and contents. Please note that LUNs 02 and 03 are not currently used for the Celerra NS series.
Earlier Celerra models, in which the Control Station had no internal hard drive, would use these LUNs
to hold the Linux installation. Additional LUNs must be configured for user file systems data.
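From the Control Station, the LUNs presented to the Celerra (control volumes and any user LUNs) can be listed with, for example (output varies by configuration):
nas_disk -list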

SAN and Storage Requirements - 5

Copyright 2006 EMC Corporation. All Rights Reserved.

CLARiiON Architecture Review


[Diagram: two Storage Processors linked by the CLARiiON Messaging Interface (CMI), with a 2Gb Fibre Channel front end and a 2Gb Fibre Channel back end connecting to disk enclosures through Link Control Cards (LCCs)]

y Storage Processor based architecture
y Modular design
2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 6

Each storage processor includes one or two CPUs and a large amount of memory. Most of the
memory is used for read and write caching. Read and write caching improve performance in two
ways:
y For a read request - if a read request seeks information that's already in the read or write
cache, the storage system can deliver it immediately, much faster than a disk access can.
y For a write request - the storage system writes updated information to SP write-cache
memory instead of to disk, allowing the server to continue as if the write had actually
completed. The write to disk from cache occurs later, at the most expedient time. If the request
modifies information that's in the cache waiting to be written to disk, the storage system
updates the information in the cache before writing it to disk; this requires just one disk access
instead of two. Write caching particularly helps write performance - an inherent problem
for RAID types that require writing to multiple disks.
The CLARiiON's modular architecture allows the customer to add drives as needed to meet capacity
requirements. When more capacity is required, additional disk enclosures containing disk modules
can be easily added.
LCCs, or Link Control Cards, are used to connect disk modules. In addition, the LCC monitors the
FRUs within the shelf and reports status information to the storage processor. The LCCs contain
bypass circuitry that allows continued operation of the loop in the event of port failure.

SAN and Storage Requirements - 6

Copyright 2006 EMC Corporation. All Rights Reserved.

CLARiiON Modular Components


y CX 600/700 Storage Arrays consist of the following components:
Two Standby Power Supplies (SPS)
One Storage Processor Enclosure (SPE) containing two Storage
Processors (SPs)
One or more Disk Array Enclosures (DAEs) containing physical disks

DAE2

SPE

y Storage Processors
used in Integrated
systems do not have
optical FC connections
y AUX ports are used
Copper connections

SPS
2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 7

The Storage Processor is the heart of a CLARiiON system.


All CLARiiONs used with Celerra have two storage processors. Each storage processor:
y Contains one or two CPUs and memory that is used for Read and Write caching
y Contains the (4) front-end Fibre Channel connections to the SAN fabric
y Contains back-end ports to communicate with disks that are contained in a Disk Array Enclosure
y Executes the FLARE Operating Environment
y Processes the data written to or read from the disk drives
y Monitors the physical disk drives themselves
The second SP provides scalable performance and redundancy.

SAN and Storage Requirements - 7

Copyright 2006 EMC Corporation. All Rights Reserved.

Configuring a CLARiiON for Celerra


1. Connect Data Mover ports to the CLARiiON Storage Processor

Direct connect as arbitrated loop

Fabric attached through a FC switch

2. Register Data Mover Fibre Channel HBAs with Navisphere


3. Create RAID Group
4. Bind LUNs
5. Create Storage Group
6. Add LUNs to Storage Group

Pay attention to the addresses assigned

7. Connect HBA ports to Storage Group

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 8

The above outlines the general steps that are performed when connecting a CLARiiON to a Celerra. Many
of these steps may be performed automatically during the installation. However, in some cases, such
as integrating into an existing SAN environment, these steps must be performed manually.

SAN and Storage Requirements - 8

Copyright 2006 EMC Corporation. All Rights Reserved.

Physical Cabling Requirements


y With Celerra Integrated configurations, the Data Movers
connect directly to the ports on the storage system
Use copper Fibre Channel cables and arbitrated loop protocol
CLARiiON is said to be Captive

[Diagram: both Data Movers in the Celerra cabinet connect directly to the two Storage Processors; the Control Station is also shown]

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 9

One of the simplest environments is the integrated configuration where the Fibre Channel ports on the
Data Mover connect directly into the Fibre Channel ports on the Storage Processors. Depending on the
environment, the array may be dedicated to the Celerra, or available storage processor ports may be
used to connect host systems, either directly or through a Fibre Channel fabric.

SAN and Storage Requirements - 9

Copyright 2006 EMC Corporation. All Rights Reserved.

Storage Area Networks (SANs) in a Celerra Environment

y In Gateway configurations, SANs provide connectivity


from the Data Mover to the storage on the back-end
Dedicated network for storage
Enables the consolidation of server block level and file level storage
in a storage infrastructure
[Diagram: Clients on the IP network access the Celerra; the Celerra connects through Fibre Channel switches (the SAN) to Symmetrix and/or CLARiiON storage]

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 10

Storage Area Networks provide much greater flexibility than direct attached storage. They allow
greater distance between hosts and the array, and allow the sharing of storage ports by multiple hosts.
An enterprise level SAN allows the consolidation of block level storage and file level storage on the
same set of arrays. The benefit is efficiency and flexibility in storage allocation.
SANs are typically implemented using Fibre Channel switches. One or more switches interconnected
together is called a fabric. Fibre Channel switches work similarly to Ethernet switches, however the
protocols employed are completely different.
In a Celerra environment, SANs are what allow multiple Data Movers access to the same file systems
and provide the flexibility to move a file system from one Data Mover to another for load balancing
and availability.

SAN and Storage Requirements - 10

Copyright 2006 EMC Corporation. All Rights Reserved.

Physical Cabling Requirements


y Celerra Gateway configurations connect to the storage
system through one or more Fibre Channel switches
y Storage system and SAN fabric may be shared with other
servers
y Think No Single Points of Failure!
[Diagram: each Data Mover connects to two Fibre Channel switches, and each switch connects to both Storage Processors, so no single component failure cuts off access to storage]

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 11

The physical connections are typically made using multimode fiber optic cables. Each Fibre Channel
port on each data mover connects to an available port on the switch as does each port on the storage
array.
An ideal configuration is designed and implemented with No Single Points of Failure. That is, any
one component can fail and the Data Movers still have access to the storage. This requires the following:
y Two Fibre Channel HBAs per Data Mover (standard configuration)
y Two Fibre Channel Switches
y Two Storage Processors with two available ports each
While the ideal configuration includes two Fibre Channel switches with independent fabric
configurations, SANs are often implemented with a single switch because of the high availability features
that are built in.

SAN and Storage Requirements - 11

Copyright 2006 EMC Corporation. All Rights Reserved.

Zoning Requirements
y Fibre Channel SANs provide flexible connectivity where any port in
the fabric is capable of seeing any other port
y Zoning is configured on the Switch for performance, security, and
availability reasons to restrict which ports in a fabric see each
other
Switch 1 (FC-SW1)
  Zone1 - DM2-0 to SPA-0
  Zone2 - DM3-0 to SPB-1

Switch 2 (FC-SW2)
  Zone1 - DM2-1 to SPB-0
  Zone2 - DM3-1 to SPA-1

[Diagram: Data Mover ports DM2-0/DM2-1 and DM3-0/DM3-1 connect through switches FC-SW1 and FC-SW2 to ports on SP-A and SP-B; the Control Station is also shown]

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 12

By design, Fibre Channel switches provide flexible connectivity where any port in the fabric is capable
of seeing any other port. This can lead to performance, security, and availability issues. Zoning is a
feature of most switches that restricts which ports in the fabric see each other. This eliminates any
unnecessary interactions between ports.
In the example above, each switch is a separate fabric and is thus configured separately.
An alternate Zoning configuration might look like this:
Switch 1
  Zone1 - DM2-0 to SPA-0
  Zone2 - DM2-0 to SPB-1
  Zone3 - DM3-0 to SPA-0
  Zone4 - DM3-0 to SPB-1

Switch 2
  Zone1 - DM2-1 to SPB-0
  Zone2 - DM2-1 to SPA-1
  Zone3 - DM3-1 to SPB-0
  Zone4 - DM3-1 to SPA-1

SAN and Storage Requirements - 12

Copyright 2006 EMC Corporation. All Rights Reserved.

Brocade Zoning
y Output from the zoneshow command
y A Zone Configuration is a set of zones
y Best Practice is single initiator zoning with ports defined by WWPN
y Example shown is of zoning configuration created by the auto-config script

Switch118:admin> zoneshow
...
Effective configuration:
 cfg:   Celerra_Gateway_Config
 zone:  jwcs_DM2_P0_SPA_P0_WRE00022000774
                50:06:01:60:30:60:2f:3b
                50:06:01:60:00:60:02:42
 zone:  jwcs_DM2_P0_SPB_P0_WRE00022000774
                50:06:01:60:30:60:2f:3b
                50:06:01:68:00:60:02:42
 zone:  jwcs_DM2_P1_SPA_P1_WRE00022000774
                50:06:01:61:30:60:2f:3b
                50:06:01:61:00:60:02:42
 zone:  jwcs_DM2_P1_SPB_P1_WRE00022000774
                50:06:01:61:30:60:2f:3b
                50:06:01:69:00:60:02:42
...

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 13

Above is an example of the zoning configuration that was auto-generated during a CX704G
installation. Note that the members of a zone are defined by the World Wide Port Names (WWPNs)
of the Data Mover HBA and the SP ports. Also, each zone only includes one initiator device (HBA).
The output above was the result of the Brocade zoneshow command. The output was abbreviated to
only show the effective zone configuration for one Data Mover.

SAN and Storage Requirements - 13

Copyright 2006 EMC Corporation. All Rights Reserved.

CLARiiON RAID Configurations for Celerra


y In the field, RAID configuration is provided by a Solution
Architect
y Integrated systems provide pre-defined shelf-by-shelf
RAID templates via setup_clariion script
y Gateway systems require manual configuration of RAID
groups and LUNs
Pre-defined templates and setup_clariion script may also be used
Standard configurations are easier to support
Drive Type      Supported RAID Types   AVM Pool              Available HLU
Fibre Channel   RAID5 4+1              clar_r5_performance   16+
                RAID5 8+1              clar_r5_economy
                RAID1                  clar_r1
ATA             RAID3 4+1              clarata_r3            16+
                RAID3 8+1              clarata_r3
                RAID5 6+1              clarata_archive
Back-end Storage Requirements - 14

2006 EMC Corporation. All rights reserved.

Celerra systems with integrated CLARiiON storage support pre-defined, shelf-by-shelf configuration
templates. These templates, along with the setup_clariion command, can build user/data LUNs
on the existing RAID groups. For example, a 4+1 or 8+1 RAID5 group will have two user/data LUNs
created on it.
Although the Gateway systems do not support the scripted pre-defined template configuration, the
supported configurations can be applied manually, in any order, and mixed throughout the CLARiiON.
Supported CLARiiON RAID configurations for data LUNs
The table above displays supported CLARiiON RAID configurations supported by Celerra Network
Server. When you add a supported RAID group to the Celerra configuration, the storage will be added
to a Celerra AVM Storage Pool. Storage is allocated to Celerra file systems from these storage pools.
Automatic Volume Manager
The Automatic Volume Manager (AVM) feature of the Celerra Network Server automates volume
creation and management. By using AVM you can automatically create and expand file systems.
Storage Pools
A storage pool is a container, or pool, of disk volumes. AVM storage pools configure and allocate
contained storage to file systems.

SAN and Storage Requirements - 14

Copyright 2006 EMC Corporation. All Rights Reserved.

Configuration Templates and AVM Storage Profiles

y Integrated and Gateway support mixed RAID configurations per shelf


2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 15

The Celerra Automatic Volume Manager creates file systems for user data based upon defined storage
pools. Each storage pool is designed for a particular performance-to-cost requirement for data storage.
These storage pools are defined by storage profiles, or set of rules, related to the type of RAID array
used.
This table maps a disk group type and shelf-by-shelf templates to a storage profile, associating the
RAID type and the storage space that results in the Automatic Volume Management (AVM) pool. The
storage profile name is a set of rules used by AVM to determine what type of disk volumes to use to
provide storage for the pool.

SAN and Storage Requirements - 15

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating a RAID Group

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 16

A RAID Group is a collection of related physical disks. From 1 to as many as 128 LUNs may be created
from a RAID Group. This screen shows the dialog for configuring a RAID Group.
The user needs to specify how many disks are to be reserved; the display will change to indicate
which RAID types are supported by that quantity of disks. In addition, the user may choose a decimal
ID for the RAID Group. If none is selected, the storage system will choose the lowest available
number.
The user must either allow the storage system to select the physical disks to be used, or may choose to
select them manually. Note that the storage system will not automatically select disks 0,0,0 through
0,0,4; they may be selected manually by the user. These disks contain the CLARiiON reserved areas,
so they have less capacity than other disks of the same size.
Other parameters that may be set include:
y Expansion/defragmentation priority - Determines how fast expansion and defragmentation occur.
Values are Low, Medium (default), or High.
y Automatically destroy - Enables or disables (default) the automatic destruction of the RAID Group
when the last LUN in that RAID Group is unbound.
Maximum number of RAID Groups per array = 240
Number of disks per RAID Group = RAID 5 = 3-16 disks, RAID 3 = 5 or 9 disks, RAID 1 = 2 disks,
RAID 10 = 2, 4, 6, 8, 10, 12, 14, or 16 disks. Remember, Celerra Best Practices specify the number of
disks per RAID Group.
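The same RAID Group could also be created with the Navisphere CLI rather than the GUI; a sketch, where the SP address, RAID Group ID, and disk positions (bus_enclosure_disk) are placeholders:
navicli -h <SP_IP_address> createrg 10 1_0_0 1_0_1 1_0_2 1_0_3 1_0_4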

SAN and Storage Requirements - 16

Copyright 2006 EMC Corporation. All Rights Reserved.

Binding a LUN
y Best Practice is to configure a few large
LUNs rather than many small LUNs
setup_clariion script creates two LUNs
per RAID group for FC disks
Spread across SPs

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 17

When binding LUNs, the user must select the RAID Group to be used, and, if this is the first LUN
being bound on that RAID Group, the RAID type. If a LUN already exists on the RAID Group, the
RAID type has already been selected, and cannot be changed.
The size of a LUN can be specified in Blocks, MB, GB, or TB. The maximum LUN size is 2 TB. The
maximum number of LUNs in a RAID Group is 128.
In the example above, we specified that two LUNs be created using all available capacity in the RAID Group
and that the LUNs be distributed across both Storage Processors for load-balancing purposes.
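A rough CLI equivalent is sketched below (hypothetical RAID Group ID and LUN numbers; a -cap switch can be added to set a specific capacity, and the exact bind options should be checked for the FLARE release in use). The two LUNs are split across the SPs:

navicli -h <SP_A_address> bind r5 16 -rg 10 -sp a
navicli -h <SP_A_address> bind r5 17 -rg 10 -sp b
navicli -h <SP_A_address> getlun 16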

SAN and Storage Requirements - 17

Copyright 2006 EMC Corporation. All Rights Reserved.

Verify Connection from Data Mover to SP Ports


y Initiator Records define connections
between DM and Array
Dependent on Zoning configuration

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 18

After connecting the cables to the system and configuring zoning, verify the connections between the
Data Movers and the Array. As part of the normal Fibre Channel Port Login (PLOGI) process, the
CLARiiON creates Initiator Records defining the connections. The Initiator Name is the WWPN of the
Fibre Channel HBA on the Data Mover. An important field is Logged In. This indicates the current
state of the connection. If the entry is missing or Logged In is No, this indicates a cable or zoning
problem.
Registration is normally performed by the Navisphere Host Agent in a typical open systems server
environment; however, the Celerra Data Mover does not run the host agent. During the install, the
Celerra auto-generate script registers the HBAs. However, in some environments, it may
be necessary to do this manually. To manually register an HBA connection, select Group Edit and the
dialog on the following page appears.
On an NS system, the WWPN can be used to identify the Data Mover and port. The 24th and 25th digits
can be interpreted as follows:
y 60 = Data Mover 2 Port 0
y 61 = Data Mover 2 Port 1
y 68 = Data Mover 3 Port 0
y 69 = Data Mover 3 Port 1
The example above is an NS704G with four Data Movers.

SAN and Storage Requirements - 18

Copyright 2006 EMC Corporation. All Rights Reserved.

Register HBA
y The Data Mover does not run the Navi Host Agent so it is
therefore necessary to manually register HBAs
Associates a name with the WWN of the DM Fibre Channel HBA
Defines other attributes of connection

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 19

Registration typically associates a hostname and IP address of a host with the WWPN of the Fibre
Channel HBA and also sets other attributes of the connection.
In a typical open systems host environment, all HBAs for a host are registered together and assigned
the same name. With Celerra, the auto-generate script that runs during install registers each HBA
separately. For proper operation, it is important that the Initiator Information is set as shown above in
the example:
Initiator Type = CLARiiON Open
Failover Mode = 0
Array CommPath = Disabled
Unit Serial Number = Array

SAN and Storage Requirements - 19

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating a Storage Group

y LUNs are made read/write accessible to hosts through Storage Groups
1. Create Storage Group
2. Add LUNs
3. Connect Hosts

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 20

The configuration object used for assigning LUNs to hosts is called a Storage Group. Basically you
create a Storage Group, add LUNs and connect hosts. When a host is connected to a Storage Group,
it will have full read/write access to all LUNs in the Storage Group.
When creating a Storage Group, the software requires only a name for the Storage Group. All other
configuration is performed after the Storage Group is created.
The name supplied for a Storage Group must be 1-64 characters in length. It may contain spaces and special
characters, but this is discouraged. After clicking OK or Apply, an empty Storage Group, with the
chosen name, is created on the storage system.
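A minimal sketch of the equivalent classic Navisphere CLI step (the Storage Group name and SP address are placeholders):

navicli -h <SP_A_address> storagegroup -create -gname Celerra_SG
navicli -h <SP_A_address> storagegroup -list -gname Celerra_SG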

SAN and Storage Requirements - 20

Copyright 2006 EMC Corporation. All Rights Reserved.

Storage Group Properties - LUNs


y Select LUNs to be added to the Storage Group
Celerra Control Volumes
User Data Volumes

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 21

To assign LUNs, right click on the Storage Group, select properties and the LUNs tab. The LUNs tab
is used to add or remove LUNs from a Storage Group, or verify which are members. The Show LUNs
option allows the user to choose whether to only show LUNs which are not yet members of any
Storage Group, or to show all LUNs.

SAN and Storage Requirements - 21

Copyright 2006 EMC Corporation. All Rights Reserved.

Importance of LUN Addressing


y LUN addresses are automatically assigned when you add a LUN to a Storage Group
Defaults may not be appropriate
y Celerra requires specific LUN addresses
Control Volumes require addresses 00-05
User data volumes begin with Address 16 (10 hex)

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 22

When a LUN is added to a Storage Group, it is automatically assigned the next available SCSI address
starting with address 00. Use caution here, as the address that is assigned automatically is not apparent
unless you scroll over to the right in the Selected LUNs pane.
The Celerra Network Server requires specific LUN addresses for system LUNs. The address is set at the time a LUN is
added to a Storage Group by highlighting the LUN, clicking the Host ID field, and choosing the host ID
from the dropdown list. If a LUN was previously assigned to a Storage Group and the address must be
changed, it first must be removed from the Storage Group and re-added.
If LUN addressing is not set up in accordance with the defined rules, it is very likely that the
installation will fail. If, after the system has been in production, the LUN addressing is modified (i.e.,
when adding storage to the array for increased capacity) in a way that does not comply with these
rules, the Data Movers will likely fail upon the subsequent reboot.
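As a hedged CLI illustration of setting the host LUN address explicitly when adding a LUN (array LUN 16 presented to the Celerra Storage Group as host ID 16; the numbers and group name are placeholders):

navicli -h <SP_A_address> storagegroup -addhlu -gname Celerra_SG -hlu 16 -alu 16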

SAN and Storage Requirements - 22

Copyright 2006 EMC Corporation. All Rights Reserved.

Connecting a Host to a Storage Group


y Connecting a host to a Storage Group provides full Read/Write access to
the LUNs
y Connect all Data Mover HBAs to the Storage Group

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 23

The Hosts tab allows hosts to be connected to, or disconnected from, a Storage Group. Connecting a
host provides that host with full read/write access to the LUNs in the Storage Group.
The procedure here is similar to that used on the LUNs tab: select a host, then move it by using the
appropriate arrow. In most stand-alone host environments, only a single host is added to the Storage
Group, but because a Celerra Network Server is actually a cluster, all HBA connections for all Data
Movers are connected.
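A rough sketch of the equivalent CLI step, repeated for each registered Data Mover entry (the host names below are placeholders for however the Data Mover HBAs were registered):

navicli -h <SP_A_address> storagegroup -connecthost -host server_2 -gname Celerra_SG -o
navicli -h <SP_A_address> storagegroup -connecthost -host server_3 -gname Celerra_SG -o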

SAN and Storage Requirements - 23

Copyright 2006 EMC Corporation. All Rights Reserved.

Implementation of CLARiiON User LUNs


y After creating LUNs, add to Celerra database
Using CLI: server_devconfig ALL -create -scsi -all
Calls the nas_diskmark command
Using Celerra Manager: Rescan
y Verify disks are available using the following command:
nas_disk -list
2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 24

After the LUNs have been bound, they must be added to the Celerra database before they can be used
for a file system. LUNs are added to the Celerra database from the CLI or GUI.
To add LUNs to the database from the CLI:
server_devconfig ALL -create -scsi -all
To add LUNs to the database from Celerra Manager, navigate to Storage > Systems and click the
Rescan button.
Note: the undocumented command nas_diskmark is called by both the server_devconfig
command and the Celerra Manager Rescan. This command scans for new devices and marks
newly discovered disks by physically writing a unique disk ID as well as the Celerra ID on the physical
media and records the information in the configuration database.
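Putting the sequence together on the Control Station (the grep filter is illustrative only, used here to pick out unused disk volumes from the listing):

$ server_devconfig ALL -create -scsi -all
$ nas_disk -list | grep -w n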

SAN and Storage Requirements - 24

Copyright 2006 EMC Corporation. All Rights Reserved.

Lesson 2: Celerra Symmetrix Connectivity


y Objective
Describe the high level architecture of a Symmetrix
Using appropriate resources, identify the requirements for
connecting a Symmetrix to a Celerra Network Server
Given an existing Symmetrix IMPL.bin file, verify that the
configuration requirements for basic configuration have been met
Describe Volume Logix requirements for a Symmetrix in a Celerra
environment

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 25

Next we will look at the Celerra connectivity requirements for a Symmetrix back-end. While the steps
and requirements are very similar, the configuration process of a Symmetrix is very different. We will
start the discussion by reviewing the Symmetrix Architecture at a high level.

SAN and Storage Requirements - 25

Copyright 2006 EMC Corporation. All Rights Reserved.

Symmetrix Architecture Introduction

Front-end
Channel
Adapter

Shared Global
Memory
Cache

Back-end
Disk Adapter

y All Symmetrix share a similar basic architecture


2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 26

All members of the Symmetrix family share the same fundamental architecture. The modular hardware
framework allows rapid integration of new storage technology, while supporting existing
configurations.
There are three functional areas:
y Shared Global Memory - provides cache memory
y Front-end - the Symmetrix connects to the host systems using Channel Adapters, a.k.a. Channel
Directors. Each director includes multiple independent processors on the same circuit board, and
an interface-specific adapter board. Celerra Data Movers connect to the storage through the front-end.
y Back-end - the Symmetrix controls and manages its physical disk drives through Disk
Adapters, also referred to as Disk Directors. Like front-end directors, each director includes multiple
independent processors on the same circuit board.
What differentiates the different generations and models is the number, type, and speed of the various
processors, and the technology used to interconnect the front-end and back-end with cache.

SAN and Storage Requirements - 26

Copyright 2006 EMC Corporation. All Rights Reserved.

Direct Matrix Architecture

[Slide graphic: eight global memory boards, 64GB each, interconnected with the directors]

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 27

Today, the Symmetrix employs a Direct Matrix Architecture. The real advantage of the Direct Matrix
Architecture cannot be appreciated until you visualize it as in the picture above. The Global Memory
technology supports multiple regions and 16 connections on each global memory director. In a fully
configured Symmetrix system, each of the sixteen directors connects to one of the sixteen memory
ports on each of the eight global memory directors. These 128 individual point-to-point connections
facilitate up to 128 concurrent global memory operations in the system.
Each memory board has sixteen ports with one connection to each director. Each region on a board can
sustain a data rate of 500 MB/s read and 500 MB/s write. Therefore a full configuration with 8 memory
boards would have a maximum internal system throughput of 128 GB/s.
Each front-end and back-end director has direct connections to memory allowing each director to
connect to each memory board. Each of the four processors on a director can connect concurrently to
different memory boards.
Internally the communications protocol between the directors and memory is fibre channel over
copper-based physical differential data connections.

SAN and Storage Requirements - 27

Copyright 2006 EMC Corporation. All Rights Reserved.

LUN Requirements on Symmetrix


y Control Volumes
2 x 12275 cylinder volumes as Channel Address 00 and 01
4 x 2215 cylinder volumes as Channel Address 02 through 05
1 x 3 cylinder volume as Channel Address 0F
Gatekeeper device
Must set the AS400 bit on this device
If using VCM, assign 1 x 16 cylinder volume as Channel Address 0E
y FA front-end Ports must be configured with Celerra-specific settings
y Volumes for user data must be mapped to the Celerra FA ports starting at Channel Address 10 (hex)
y Map all LUNs to Celerra Data Movers using redundant paths
Multiple front-end Fibre Adapters (FAs)
Redundant Fibre Channel Fabrics/Switches
y Volume Logix must be configured to allow all volumes to be accessed by all Data Movers through the assigned front-end FA ports
2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 28

As in all Celerra configurations, the first 16 host LUN addresses are reserved. Therefore, the first
available data LUN host address is 0x010 (16 in decimal).
All LUNs, both system and data, require redundant paths for high availability. Each LUN must be
mapped through redundant FA ports and accessed from the Celerra via redundant Fibre Channel
Fabrics.
The user LUN requirements for a Celerra with a Symmetrix storage subsystem are provided by
Symmetrix Service Readiness (SSR, formerly C-4). Follow these rules when configuring user data
LUNs for the Celerra Network Server on a Symmetrix.
Before being implemented, the storage configuration must be CCA-approved.

SAN and Storage Requirements - 28

Copyright 2006 EMC Corporation. All Rights Reserved.

Symmetrix Configuration for Celerra


y Reference Configuring the Symmetrix for the Celerra File
Server from SSR (Formerly C-4)
y Requirements are code
level and Symmetrix
model specific
y Field configurations
must be approved by
CCA

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 29

The specific configuration requirements are based on the NAS code levels and the Enginuity levels on
the Symmetrix. Always reference the latest requirements from the SSR website.

SAN and Storage Requirements - 29

Copyright 2006 EMC Corporation. All Rights Reserved.

Symmetrix IMPL.bin File


y The IMPL.bin file, a.k.a. the bin file, contains the configuration information for a Symmetrix
y The file defines:
Physical hardware configuration
Directors
Memory
Physical Drives
Logical storage configuration
Emulation, number, size and data protection schemes for logical volumes
Special volume attributes
Volume front-end assignments
Director flags
Operational parameters and features
y Located in each director, and on the service processor
[Slide graphic: the IMPL.BIN file is edited in PC memory on the service processor, stored on the PC hard disk, and can be loaded from disk or pulled from the system directors]
2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 30

The Symmetrix is configured using a static configuration file called the IMPL.bin. The file is created
initially using SymmWin and loaded into each director in the Symmetrix. When modifying a
configuration, the current IMPL.bin file is pulled from the Symmetrix and edited using SymmWin.

SAN and Storage Requirements - 30

Copyright 2006 EMC Corporation. All Rights Reserved.

SymmWin
y Graphical-based tool for configuring and monitoring a Symmetrix System
Runs locally on the service processor
May also run on stand-alone PC
y SymmWin is built for specific versions of Enginuity
Make sure you are using the correct build

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 31

SymmWin is an EMC-written, graphical-based application for managing a Symmetrix. Capabilities include:
y Building and modifying system configuration files (IMPL.bin)
y Issuing Inlines commands, diagnostic, and utility scripts
y Monitoring performance statistics
y Automatically performing periodic polling for errors and events. Certain errors will cause the
service processor to Call Home.
SymmWin runs locally on a Symmetrix Service Processor or on a standalone PC. Running on the
service processor allows communications with an operational Symmetrix. Running it on a standalone
system allows you to build a new configuration or view and modify an archived configuration file.

SAN and Storage Requirements - 31

Copyright 2006 EMC Corporation. All Rights Reserved.

Logging in to SymmWin
y Click on the
green unlock
ICON
y User type
determines
level of access
Access level
may be
changed after
login
Most
operations can
be performed
as CE
Password =
SADE
2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 32

Click on the Green unlock and enter the user type, user name, and password and press ENTER or click
on the green check mark.
There are a number of different login levels allowing varying levels of access.
y Symmetrix
y Software Engineer (SE)
y Software Assistance Center
y Customer Engineer (CE) CE Password = sade
y OEM
y TS
y RTS
y Product Support Engineer (PSE)
y Engineering
y Production
y Configuration group
y QA group
y PC Group
Access level may be changed after initial login. From the main SymmWin menu, select File and
Access level to change access rights. You must have a valid password. Advanced access password =
zehirut.

SAN and Storage Requirements - 32

Copyright 2006 EMC Corporation. All Rights Reserved.

After Login
y Title bar
changes to
reflect User
Name and
group affiliation
y The code level
is the SymmWin
level and may
not be the same
as what is
running on the
system

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 33

The default install directory is O:\EMC\<Serial Number>\symmwin. To start SymmWin, click on the
symmwin.exe file.
After successful login, the title bar will reflect the user name and group affiliation. Depending on the
group you will have more or less capabilities and the icons will vary.
The code level is what is loaded on the service processor and may not be the same as what is running
on the system.

SAN and Storage Requirements - 33

Copyright 2006 EMC Corporation. All Rights Reserved.

Selecting the Configuration


y Choose IMPL
from System
to view the
active
configuration

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 34

Configuration information is stored in the IMPL.bin file. This is loaded into the directors during the
IMPL process and is also stored locally on the service processors. When viewing the configuration, it
is important that you select IMPL from System in order to get the current view of the configuration.

SAN and Storage Requirements - 34

Copyright 2006 EMC Corporation. All Rights Reserved.

Viewing Configuration
y Select
Configuration
y Choose
IMPL Initialization
y Verify that FBA is
enabled

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 35

After loading the IMPL.bin file, SymmWin can be used to graphically display the system hardware
and logical configuration.
One of the first requirements for configuring a Symmetrix for Celerra is that the FBA is enabled.

SAN and Storage Requirements - 35

Copyright 2006 EMC Corporation. All Rights Reserved.

Director Map
y Selecting DirMap from the Configuration dropdown
displays the locations and types of directors
Back-end
DF Disk
Adapter Fibre

Front-end
FA Fibre
Channel
EA ESCON
EF FICON
SE iSCSI

y Celerra connects to the Symmetrix using FA Directors

Back-end Storage Requirements - 36

2006 EMC Corporation. All rights reserved.

The DMX-3 card cage has 24 slots. Normally the DAs occupy the outside slots and the host directors
occupy the inside slot positions. Reference the diagram below to relate the director diagram reported
in SymmWin to the physical card cage. When looking at the director map, remember the director
number and slot numbers are not the same. Director 1 is in slot 0, Director 2 is in slot 1, etc. Slot
numbers are in hex, director numbers are in decimal.

[Slide graphic: DMX-3 card cage layout relating director numbers (decimal, DIR 1-16) to slot numbers (hex), with the eight global memory boards (M0-M7) in the center slots; back-end (BE) directors occupy the outer slots and front-end (FE) directors the inner slots, and some slots can be configured as either BE or FE.]

SAN and Storage Requirements - 36

Copyright 2006 EMC Corporation. All Rights Reserved.

Edit Directors
y Front-end directors can be configured to support various
protocol parameters
y SCSI and Fibre
Channel
parameters
Gold and upper
case = enabled
Blue and lower
case = disabled
Space key to
toggle

y Reference the Support Matrix for Celerra-specific settings
2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 37

SCSI is a standards-based protocol that has been around for over twenty years and the command set
and nexus is flexible enough to support many different types of storage devices and host operating
systems. Nearly every server vendor supports SCSI, unfortunately not every vendor implements SCSI
in exactly the same way. For example, while both HP-UX and IBM AIX support the SCSI protocol,
they support a different subset of the operational parameters. Fibre Channel is the transport protocol
used with the SCSI protocol and it too has a number of configurable protocol and link parameters.
The emulation used by front-end ports is implemented in software and thus provides the flexibility to
configure the front-end port to support a diversity of host configurations. Celerra also has specific
SCSI requirements and the ports must be set appropriately.
The Edit Director window is used to verify or change flag settings for each director. Some flags simply
display information about the director while others are used to control various functions.
In the example above, we selected the FA tab; the ID field lists all the Fibre Channel directors and the
data fields contain various flags used by specific host systems. When a particular flag is active or set,
it is colored in gold and is displayed in upper-case letters. If a flag is inactive, it is blue and has
lower-case letters.
It is important to understand what the individual flags do before changing them. An incorrect setting
can cause errors or performance problems. Reference the EMC Support matrix or the e-Lab Navigator
for specific configuration requirements for each operating system type and configuration.

SAN and Storage Requirements - 37

Copyright 2006 EMC Corporation. All Rights Reserved.

Volume Requests
y Define the requirements for the creation of specific logical
volumes using VolReq
y Control
Volumes
y User Data
Volumes

Back-end Storage Requirements - 38

2006 EMC Corporation. All rights reserved.

The Volume request window is used to request specified logical volumes be configured. Multiple
sizes and types are specified as separate requests.
Count: Number of volumes to be configured
Emulation: FBA
Type/host: Server/Celerra
Size is specified in either Cylinders or Blocks
Mirror type:
y RAID (Parity Raid Non DMX-3)
y NORMAL (non-mirrored)
y 2-MIR (RAID-1)
y 3- MIR (3 way mirror)
y 4-MIR (4 way Mirror)
y 3RAID-5 ( 3+1 Raid 5)
y 7RAID-5 (7+1 RAID-5)
y CDEV (Cache Device Used for Virtual Devices with SNAP)
On the left side of the window is a list of volume requests. Note the Volumes column shows the
Symmetrix Logical Volume numbers.
The example above shows the requests for the six control volumes and 100 user data volumes.

SAN and Storage Requirements - 38

Copyright 2006 EMC Corporation. All Rights Reserved.

Assigning Channel Addresses


y Assign volumes channel address using VolMap page
y Identify specific
ports for
Celerra
y Assign
appropriate
addresses
00-05 Control
LUNs
10 User LUNS

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 39

Unlike a CLARiiON, a Symmetrix does not present all LUNs to all ports. To make a LUN available to
a specific FA port, a channel address must be assigned. Celerra Data Movers discover and access
Symmetrix Logical Volumes using these Channel Addresses. The Channel Address is the SCSI ID.
Note: the CLARiiON specifies the SCSI address as a decimal number, while the Symmetrix specifies
the address as a hex number. Either way, control volumes start with address 00 and user data volumes
start with address 16 (0x10).

SAN and Storage Requirements - 39

Copyright 2006 EMC Corporation. All Rights Reserved.

Address Assignment Complete


y Celerra Network Server requires specific LUN addresses
Control Volumes require Channel Addresses 00-05
User data volumes must be assigned Channel Addresses 10 (hex)
and above

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 40

In this example, we are presenting the Celerra Control volumes and User data volumes on four
different FA ports. Remember, Celerra requires specific addresses be assigned to control LUNs:
y (2) 12275 cylinder volumes as target and LUN address 00 and 01.
y (4) 2215 cylinder volumes as target and LUN address 02 through 05.
y (1) 3 cylinder volume as address 0F; this is the gatekeeper device.
y If using VCM, assign (1) 16 cylinder volume as target 0E.
Data volumes must be mapped to the Celerra starting at target and LUN address 10.
As an alternative to using SymmWin, the channel address assignments can also be performed using
the Solutions Enabler symconfigure command.
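As a rough sketch only (the device number, director, target, and LUN values are placeholders, and the exact mapping syntax should be verified against the Solutions Enabler documentation for the Enginuity level in use), a symconfigure-based mapping change might look like:

C:> type map_devs.txt
map dev 00C0 to dir 7A:0 target=0, lun=010;
C:> symconfigure -sid 172 -file map_devs.txt preview
C:> symconfigure -sid 172 -file map_devs.txt commit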

SAN and Storage Requirements - 40

Copyright 2006 EMC Corporation. All Rights Reserved.

Volume Attribute for Celerra GateKeeper device


y Must enable AS400 gatekeeper attribute for Celerra
Gatekeeper device

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 41

The default volume attributes are appropriate for all volumes except the gatekeeper device which must
have AS400Gate enabled.

SAN and Storage Requirements - 41

Copyright 2006 EMC Corporation. All Rights Reserved.

Loading the Bin File


y After the bin file is saved it must be loaded to the
directors
y During the Initial Configure and Install New Symmetrix
procedure, all drives are VTOCed
VTOCing is the process of formatting the drives and placing a
Volume Table Of Contents on each disk that describes the physical
layout of the drive
Any existing data is lost during the VTOC operation

y After the drives are VTOCed, the system is IMPLed


Initial Microcode Program Load

y Performed using Procedure Wizard


2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 42

During the IMPL, the bin file which contains all the mapping and configuration information is sent to
all the directors. It is during this sequence that cache is initialized and all the tables are built.
After the IMPL.bin is created, it is loaded to the system. If this is Configure and Install New
Symmetrix then the physical disk drives are given a VTOC (Volume Table Of Contents). VTOC
comes from the mainframe world. Mainframe file systems use a VTOC to allocate and remember where
files are located on disk.
On the Symmetrix, when we VTOC a drive, we perform a high level format which clears any existing
data and defines default Tables for each logical track in each logical volume. This also generates the
correct CRC information for each logical track in each logical volume we are formatting.

SAN and Storage Requirements - 42

Copyright 2006 EMC Corporation. All Rights Reserved.

Loading the Bin File


y Procedures
y Select
Procedure
Wizard
y Expand Code
Load procedures
menu
y Select Configure
and Install New
Symmetrix
y Follow directions
from the script

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 43

Configure and Install New Symmetrix reconfigures each disk and destroys all previous data. If you
only need to make a change to an existing configuration, you would perform an On-line Configuration
Change procedure.

SAN and Storage Requirements - 43

Copyright 2006 EMC Corporation. All Rights Reserved.

Online Configuration Change


y Configuration changes can be performed online

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 44

If you are making changes to an existing configuration, these changes can be done on-line. When
executing the procedure, extensive checking is performed to validate the change before it is
implemented.

SAN and Storage Requirements - 44

Copyright 2006 EMC Corporation. All Rights Reserved.

Device Masking
[Slide graphic: four Celerra Data Movers and three other hosts, each with two HBAs, connect through a Fibre Channel switch to Symmetrix FA ports; the VCMDB in the Symmetrix controls which HBAs see which volumes.]

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 45

Depending on the environment, the Symmetrix may be configured with Volume Logix. Volume Logix
is used to mask which HBAs see which LUNs on an FA port.
Storage Area Networks provide a fan-out capability where it is likely that more than one host is
connected to the same Fibre Channel port. The actual number of HBAs that can be configured to a
single port is operating system and configuration dependent, but fan-out ratios as high as 64:1 are
currently supported. Reference the support matrix for specific configuration limitations.
Each port may have as many as 4096 addressable volumes presented. When several hosts connect to a
single Symmetrix port, an access control conflict can occur because all hosts have the potential to
discover and use the same storage devices. However, by creating entries in the Symmetrix's device
masking database (VCMDB), you can control which host sees which volume.
Device Masking is independent from zoning, but they are typically used together in an environment.
Zoning provides access control at the port level and restricts which host bus adapter sees which port
on the storage system, and device masking restricts which host sees which specific volumes presented
on a port.
Device Masking uses the UWWN (Unique Worldwide Name) of Host Bus Adapters and a VCM
database device. The device-masking database (VCMDB) on each Symmetrix unit specifies the
devices that a particular WWN can access through a specific Fibre port.

SAN and Storage Requirements - 45

Copyright 2006 EMC Corporation. All Rights Reserved.

Device Masking
y Volume Logix is the software in the Symmetrix that performs the device masking function
Requires a VCM Database on the Symmetrix
y After the VCM database is set up, Solutions Enabler is used to add entries
Specifying specific volumes that are accessible to specific host bus adapters
symmask is the CLI for managing device masking
EMC ControlCenter SAN Manager can also be used to perform device masking

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 46

Volume Logix is the software in the Symmetrix that performs the device masking function. The
capability is built into Enginuity, but its use is optional. To set this up, you must create a database
volume (VCMDB) and set flags on the Fibre Channel or iSCSI ports to enable its use. Once the database
is set up and enabled, the Solutions Enabler symmask command can be used to configure entries
granting specific hosts access to specific volumes.
A VCMDB entry specifies a host's HBA identity (using an HBA port WWN), its associated FA port,
and a range of devices mapped to the FA port that should be visible only to the corresponding HBA.
Once you make this VCMDB entry and activate the configuration, the Symmetrix makes visible to a
host those devices that the VCMDB identifies as available to that host's initiator WWN through that
FA port.
Device masking also allows you to configure heterogeneous hosts to share access to the same FA port,
which is useful in an environment with different host types.
Reference:
y EMC Solutions Enabler Symmetrix Device Masking CLI Product Guide
y Using the SYMCLI Configuration Manager Engineering WhitePaper
y Using SYMCLI to Perform Device Masking Engineering WhitePaper

SAN and Storage Requirements - 46

Copyright 2006 EMC Corporation. All Rights Reserved.

Adding DM HBA Access to Symmetrix Devices


C:> symmask add dev c0,c1,c2,c3,c4,c5 -wwn 5006016030602f3b -dir 7a -p 0
C:> symmask add dev c0,c1,c2,c3,c4,c5 -wwn 5006016030602f3b -dir 8a -p 0

C:> symmask refresh

Refresh Symmetrix FA directors with contents of SymMask database
000190100172 (y/[n]) ? y

Symmetrix FA directors updated with contents of SymMask Database
000190100172
C:>

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 47

To make an entry for the HBA-to-FA connection in the VCMDB and specify the devices that the HBA
can access, use the symmask command shown above. On the first line we are specifying that
volumes 00c0, 00c1, 00c2, 00c3, 00c4, and 00c5 are accessible to the first Celerra HBA through FA 7A
port 0. The second command enables access to the same volumes through the other HBA and the other
Symmetrix port.
In a Celerra environment, typically all volumes are presented to all Data Movers, so similar entries
would need to be added for every Celerra FC HBA.
After making changes to the VCM database, you must tell the Symmetrix to refresh the access control
tables in the directors. This is done using the symmask refresh command.
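Because every Data Mover HBA needs a matching entry, a small shell loop on a host with Solutions Enabler access to the Symmetrix could issue the same masking command per WWN. This is only a sketch; the WWNs and director/port pairs below are placeholders for the values in your environment.

# Hypothetical Data Mover HBA WWNs; substitute the real values
for wwn in 5006016030602f3b 5006016130602f3b 5006016830602f3b 5006016930602f3b
do
    symmask add dev c0,c1,c2,c3,c4,c5 -wwn $wwn -dir 7a -p 0
    symmask add dev c0,c1,c2,c3,c4,c5 -wwn $wwn -dir 8a -p 0
done
symmask refresh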

SAN and Storage Requirements - 47

Copyright 2006 EMC Corporation. All Rights Reserved.

Displaying the Contents of the VCMDB


C:> cd \Program Files\EMC\SYMCLI\bin
C:> symmaskdb list database

Symmetrix ID             : 000190100172
Database Type            : Type6
Last updated at          : 01:40:29 PM on Thu Sep 01,2005

Director Identification  : FA-7A
Director Port            : 0

User-generated
Identifier        Type   Node Name         Port Name         Devices
----------------  -----  ----------------  ----------------  ---------
5006016030602f3b  Fibre  5006016030602f3b  5006016030602f3b  00c0:00c5

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 48

You can display the entire contents of the VCMDB or use options to restrict the display to your area of
interest. In the example above we are displaying access control records for the entries we previously
added. Note: the entire output is not displayed.

SAN and Storage Requirements - 48

Copyright 2006 EMC Corporation. All Rights Reserved.

Module Summary
y LUN addressing is critical:
Control volumes use addresses 00-05
User data volumes use addresses 16+ (0x10)
y Both CLARiiON and Symmetrix back-ends support several different RAID configurations
Best Practice is to use standard configurations
y Symmetrix systems are configured by creating/modifying the bin file
Reference the SSR website for current requirements
y For more information on setting up and configuring EMC storage systems, the following training is offered by Mercer Road for Engineering
Symmetrix Internals
CLARiiON Environments and Application Integration
2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 49

SAN and Storage Requirements - 49

Copyright 2006 EMC Corporation. All Rights Reserved.

Closing Slide

2006 EMC Corporation. All rights reserved.

Back-end Storage Requirements - 50

SAN and Storage Requirements - 50

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra ICON
Celerra Training for Engineering

Configuring Celerra Volumes & File Systems

2006 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems

-1

Copyright 2006 EMC Corporation. All Rights Reserved.

Revision History
Rev Number   Course Date     Revisions
1.0          February 2006   Complete
1.2          May 2006        5.5 Updates and enhancements

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 2

Configuring Celerra Volumes & File Systems

-2

Copyright 2006 EMC Corporation. All Rights Reserved.

Module Objectives
Upon completion of this module, you will be able to:
y Describe the logical storage terms and concepts
including disks, slice, stripe, metavolumes, and file
systems
y Manage storage using AVM Automated Volume
Management on the Celerra
y Describe the concept of Storage Pools
y Configure and manage volumes and a file system using
CLI and Celerra Manager

2006 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 3

To meet customer needs, the Celerra provides considerable flexibility when creating file systems. The
Celerra offers manual as well as automatic file system creation to allow customers to tailor file systems
to meet specific needs. File systems can be accessed and shared by NFS and CIFS users, as well as by
other file system access protocols. This module illustrates the necessary steps to create file systems
manually or by utilizing the automatic capabilities of the Celerra.

Configuring Celerra Volumes & File Systems

-3

Copyright 2006 EMC Corporation. All Rights Reserved.

Two Ways to Configure a Celerra File System


Manually Create Each Object:
Verify Disk Volumes
Create Slice Volumes (optional)
Create Stripe Volumes (optional)
Create Metavolumes
Create a File System

Automatically with AVM (Automatic Volume Manager):
Create a File System

2005 EMC Corporation. All rights reserved.


Configuring Celerra Volumes & File Systems - 4

Configuring a Celerra file system manually


The steps for configuring a Celerra file system manually are as follows:
y Verify the Celerra disk volumes (presented)
y Create slice volumes (optional, not usually recommended)
y Create stripe volumes (optional, usually recommended)
y Create a Celerra metavolume (required)
y Create a file system using metavolumes
Notes:
y This process does not involve any Data Movers.
y All of the commands used to configure a file system are nas_ commands.
Configuring a Celerra file system automatically with AVM (Automatic Volume Manager)
y Create a file system
When creating a file system using AVM, you do not have to create volumes before setting up the file
system. AVM allocates space to the file system from the storage pool you specify and automatically
creates any required volumes when it creates the file system.
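For example, as a hedged sketch (the pool name clar_r5_performance and the 10 GB size are illustrative and depend on the back-end and the AVM pools actually present), a single command creates an AVM-backed file system:

$ nas_fs -name fs01 -create size=10G pool=clar_r5_performance
$ nas_fs -list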

Configuring Celerra Volumes & File Systems

-4

Copyright 2006 EMC Corporation. All Rights Reserved.

Volume Management Overview


y Automatic Volume Manager (AVM)
The Celerra offers many possible combinations for creating a file system;
however, AVM provides the greatest simplicity and ease of use
AVM automates volume creation and management, eliminating the need to
manually create stripes, slices or meta volumes
Allows users to create and expand file systems without the need to create
and manage the underlying volumes
AVM is storage-system independent and supports existing requirements
for automatic storage allocation (SnapSure, SRDF, IP Replication)
AVM does not preclude manual volume and/or file system management, but
instead, gives users a simple volume and file system management tool

y Manual Volume creation


Provides greater control of file system placement for maximum possible
performance
Better control in mixed storage or environment where special storage
features are implemented such as TimeFinder or SRDF
2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 5

AVM greatly simplifies the creation of logical storage on a Celerra, however for a few customers,
AVM is not practical because of dispersed volumes, multiple and/or mixed back-ends, storage
limitations, or the implementation of other storage based features such as BCVs, NearCopy/FarCopy,
etc.
In this module we will be discussing the manual creation of volumes and file systems but keep in mind
that often AVM will be used.

Configuring Celerra Volumes & File Systems

-5

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Disk Volumes


y Both Symmetrix and CLARiiON support RAID5 and RAID1 volumes
RAID 5 is typical with CLARiiON
RAID 1 is typical with Symmetrix
y Volume sizes vary
CLARiiON LUNs are typically configured much larger than Symmetrix
[Slide graphic: a CLARiiON RAID5 4+1 group presenting two LUNs as Celerra disks d7 and d8, and Symmetrix hypervolumes presenting as Celerra disks d3 through d6]

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 6

Celerra Disk Volume


What Celerra believes to be a disk is actually a Logical Volume or a LUN.
A CLARiiON LUN is created from available space in a RAID Group and typically spans 5 or 9
physical disk drives on the back-end. When CLARiiON LUNs are configured for Celerra, usually only
one or two LUNs are created from a RAID group, so they are often quite large.
The Symmetrix hypervolumes are presented as a portion of a physical disk. The size depends
on how the bin file was configured, but the maximum LUN size for a DMX-3 is approximately 65GB.
Thus, for the same capacity, there are typically many more Symmetrix LUNs than CLARiiON LUNs.
While our goal is to make the back-end configuration transparent to the Celerra, for optimal
performance, you must consider the Celerra disk placement on the storage system in order to balance
the workload. Placement is of particular interest when creating striped volumes, as each member of a
stripe is ideally located on a different physical disk on the back-end.

Configuring Celerra Volumes & File Systems

-6

Copyright 2006 EMC Corporation. All Rights Reserved.

Viewing Disk Volumes


y Use the following command to determine storage availability:
# nas_disk -list
id  inuse  sizeMB  storageID-devID       type   name        servers
1   y      11263   CK200051400304-0000   CLSTD  root_disk   1,2,3,4
2   y      11263   CK200051400304-0001   CLSTD  root_ldisk  1,2,3,4
3   y      2047    CK200051400304-0002   CLSTD  d3          1,2,3,4
4   y      2047    CK200051400304-0005   CLSTD  d4          1,2,3,4
5   y      2047    CK200051400304-0004   CLSTD  d5          1,2,3,4
6   y      2047    CK200051400304-0003   CLSTD  d6          1,2,3,4
7   n      273709  CK200051400304-0006   CLSTD  d7          1,2,3,4
8   n      273709  CK200051400304-0007   CLSTD  d8          1,2,3,4
9   n      136854  CK200051400304-000F   CLSTD  d16         1,2,3,4
10  n      136854  CK200051400304-000E   CLSTD  d17         1,2,3,4
11  n      273709  CK200051400304-0008   CLSTD  d11         1,2,3,4
12  n      273709  CK200051400304-000A   CLSTD  d12         1,2,3,4
13  n      273709  CK200051400304-000B   CLSTD  d13         1,2,3,4
14  n      273709  CK200051400304-000D   CLSTD  d14         1,2,3,4
15  n      273709  CK200051400304-000C   CLSTD  d15         1,2,3,4

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 7

Column definitions:
id - ID of the disk (assigned automatically)
inuse - Whether or not the disk is in use by a file system; y indicates yes, n indicates no
sizeMB - Size of the disk in megabytes
storageID-devID - ID of the storage system and device associated with the disk
type - Type of the disk
name - Name of the disk
servers - Data Movers that have access to the disk

Note: When adding new volumes to the configuration, it is necessary to run the command:
server_devconfig ALL -create -scsi -all
in order to define the new LUNs to the system.

Configuring Celerra Volumes & File Systems

-7

Copyright 2006 EMC Corporation. All Rights Reserved.

Viewing Disk Volumes


To determine storage availability
y Storage > Volumes > Show Volumes of Type - disk

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 8

Celerra Manager can also be used to view Celerra disks. This slide shows d7 is not in use.

Configuring Celerra Volumes & File Systems

-8

Copyright 2006 EMC Corporation. All Rights Reserved.

Slice Volumes
y Slice volumes are cut out of other volumes
Used to make volumes smaller in size than a full disk volume
The slice size and name are specified when it is created
Consecutive space will be allocated using a first-fit algorithm
An offset may be specified to control placement
[Slide graphic: workflow sidebar (Verify Disk Volumes > Create Slice Volumes > Create Stripe Volumes > Create Metavolumes > Create File System) and four 10GB slices cut from disks d10 through d13]

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 9

Slice volumes
Slice volumes are cut out of disks or other volume configurations to make smaller volumes that are
better suited for a particular purpose, such as SnapSure. Slice volumes are not always necessary or
even recommended. However, if a smaller size volume is needed (as you will see with SnapSure), it
will then be critical to understand slice volumes and be able to implement them.
Offsets
When you create a slice volume, you can indicate an offset, which is the distance (in megabytes) from
the end of one slice to the start of the next. Unless a value is specified for the offset (the point on the
container volume where the Slice volume begins), the system places the slice in the first-fit algorithm
(default) that is the next available volume space.

Configuring Celerra Volumes & File Systems

-9

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating Slice Volumes


y A Slice Volume may be created from disks or
metavolumes
y Creating slices is optional
Recommended whenever storage requirements are less than a full
disk volume
SnapSure

y Command syntax:
nas_slice -name <slice_name> -create <volume_name> <size_in_MB>

y Example:
nas_slice -n sl1 -c d16 500
Configuring Celerra Volumes & File Systems - 10

2005 EMC Corporation. All rights reserved.

Before you can create a slice from a Celerra disk volume, you must identify the volume from which the slice
volume will be created. The root slice volumes created during installation appear when you list your volume
configurations. However, you do not have access privileges to them, and therefore, cannot execute any
commands against them.
To create a slice from a disk volume, use the nas_slice command:
nas_slice -name <slice_name> -create <volume_name> <size_in_MB>
Example:
To create a 500MB slice on disk d3, use the following command:
nas_slice -n sl1 -c d3 500
id = 219
name = sl1
acl = 0
in_use = false
slice_of = d3
offset (MB) = 0
size (MB) = 500
volume_name = sl1

Note:
Slice volumes should not be employed if TimeFinder/FS is planned.

Configuring Celerra Volumes & File Systems

- 10

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating Slice Volumes


y Storage > Volume > New > Type: Slice

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 11

This slide shows how to create a slice volume using Celerra Manager.

Configuring Celerra Volumes & File Systems

- 11

Copyright 2006 EMC Corporation. All Rights Reserved.

Stripe Volumes
Verify Disk Volumes
Create slice Volumes
Create Stripe Volumes
Create Metavolumes
Create File system

y Stripe volumes are a logical arrangement of disks, slices or metavolumes into a set of interlaced stripes
Potentially higher aggregate throughput
y Stripe Size specifies how much data is written to each member volume
Default stripe size = 32,768 bytes

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 12

A stripe volume is a logical arrangement of participating disk, slice, or metavolumes that are
organized, as equally as possible, into a set of interlaced stripes.
Creating a stripe volume
Creating a stripe volume allows you to achieve a higher aggregate throughput from a volume set since
stripe units contained on volumes in the volume set can be active concurrently. Stripe volumes can
also improve system performance by balancing the load across the participating volumes.
Recommended stripe size
The size of the stripe (also referred to as the stripe depth) refers to the amount of data written to a
member of the stripe volume before moving to the next member. The use of different stripe sizes
depends on the applications you are using. The recommended stripe size is 32K for Symmetrix used
predominantly for NFS clients, 8K for Symmetrix used predominantly for CIFS, and 8K for
CLARiiON.
Naming stripe volumes
If you do not select a name for the stripe volume, a default name is assigned.
Carefully consider the size of the stripe volume you want. After the stripe volume is created, its size
remains fixed. However, you can extend a file system built on top of a stripe volume by combining or
concatenating it with additional stripe volumes.

Configuring Celerra Volumes & File Systems

- 12

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating Stripe Volumes


y Manually
User chooses the volumes to include in stripe
Careful planning is required
To optimize available space
Achieve best possible performance

y Automatically using AVM


Celerra chooses the volumes to include
Follows best practices

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 13

Stripe volumes can be created manually or automatically. The difference is, Celerra chooses which
volumes to include in the stripe volume when you choose automatic, versus you choosing the volumes
in the manual method. The two options are presented on subsequent slides.
You should configure stripes to use the maximum amount of disk space. The size of the participating
volumes within the stripe should be uniform and evenly divisible by the size of the stripe. Each
participating volume should contain the same number of stripes. Space is wasted if the volumes are
evenly divisible by the stripe size but are unequal in capacity. The residual space is not included in the
configuration and is unavailable for data storage.

If creating the stripe volume manually, no two members of the stripe volume should reside on
the same physical spindle in the Symmetrix (or CLARiiON).
To identify the physical location of a Celerra Symmetrix disk volume (a worked example follows below):
y Run nas_disk -list and identify the "storageID-devID" of the disk volume
y Run $ /nas/symcli/bin/symdev -sid <storageID> list | grep <devID>
y Identify the DA, Interface, and Target (format: DA:IT, ex: 01A:C2); this represents the physical
location of the disk volume
y To view other devices on the same spindle, type the following command:
$ /nas/symcli/bin/symdisk -sid <storageID> sho <DA:IT>
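For illustration only, with hypothetical Symmetrix ID, device, and DA:IT values substituted into the commands above (output not shown):

$ nas_disk -list
$ /nas/symcli/bin/symdev -sid 000190100172 list | grep 0010
$ /nas/symcli/bin/symdisk -sid 000190100172 sho 01A:C2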

Configuring Celerra Volumes & File Systems

- 13

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating Stripe Volume Manually

To create a stripe volume with specific volumes


y Command:
nas_volume -n <stripe_name> -create -Stripe <depth> <vol>,<vol>

y Examples:
Creating a stripe volume from slice volumes
nas_volume -n str1 -c -S 8192 sl1,sl2,sl3,sl4

Creating a stripe volume from disk volumes
nas_volume -n str2 -c -S 32768 d3,d4,d5,d6

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 14

Example
To create a stripe volume with a depth of 32768 bytes out of slice volumes 1-4, use the
following command:
nas_volume -n str1 -c -S 32768 sl1,sl2,sl3,sl4
id = 316
name = str1
acl = 0
in_use = false
type = stripe
stripe_size = 32768
volume_set = sl1,sl2,sl3,sl4
disks = d3,d4,d5,d6

Configuring Celerra Volumes & File Systems

- 14

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating Stripe Volume Automatically


To create a stripe volume allowing the Celerra to
choose the appropriate volumes
y Command:
nas_volume -n <name> -c -S <depth> size=<size_in_GB>

y Example:
nas_volume -n teststr -c -S 32768 size=50

y Using AVM minimizes risk of placing multiple slices


on same physical disk

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 15

You can also choose to allow Celerra to select which disk volumes to include in a stripe volume.
When the size= option is employed, Celerra will automatically select the correct number of disk
volumes. This method will also reduce the risk of having the same physical spindle in the Symmetrix
or CLARiiON used more than once in a stripe volume. Care should still be used to get the best usage
of disk space. For example, if all disk volumes are 9 GB, then the total capacity of the stripe volume
specified in the size= option should be divisible by 9 GB (for example, 36 or 72 GB).
Example
nas_volume -n teststr -c -S size=10
id          = 116
name        = teststr
acl         = 0
in_use      = False
type        = stripe
stripe_size = 32768
volume_set  = d3,d4,d5,d6,d7,d8,d9,d10,d11,d12,d13
disks       = d3,d4,d5,d6,d7,d8,d9,d10,d11,d12,d13

Configuring Celerra Volumes & File Systems

- 15

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating Stripe Volume Manually


y

Storage > Volume > New > Type: Stripe

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 16

This slide shows how to create a stripe volume manually using Celerra Manager.
Two Slices were created, You must select at least two Slices to create a Stripe.

Configuring Celerra Volumes & File Systems

- 16

Copyright 2006 EMC Corporation. All Rights Reserved.

Striping Best Practices


y CLARiiON considerations
Do not stripe across two LUNs from same RAID Group
Stripe over as many spindles as possible
Avoid striping across LUNs of different RAID types and disk
architectures

y Symmetrix considerations
Use Celerra stripe volumes not Symmetrix metavolumes and stripes
For multiple client NFS loads and MPFS sequential workloads, use
16 volumes in each stripe set when possible

y Stripe size of 32,768 is recommended


y Reference: Engineering Whitepaper: Celerra Network
Server Best Practices for Performance
2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 17

CLARiiON
For optimal performance, stripe across different volumes. While striping across a single volume is possible, it
will not improve performance.
On an NSxxx with a single DAE, do not stripe a file system across two LUNs from the same RAID group.
Instead, concatenate LUNs from a single RAID group together using Celerra, then create a stripe volume across
that concatenated metavolume.
With a single DAE system, stripe file systems over as many spindles as possible, even if this means crossing
RAID types or configurations.
With multiple DAE systems, avoid striping a file system across LUNs of different RAID types and
configurations. Do not mix RAID1, 4+1 RAID5, and 8+1 RAID5 LUNs in a single file system. Do not mix
LUNs composed of different sized spindles.
Symmetrix
Symmetrix metavolumes should only be used for architectural reasons. If there is no feasible method to provide
more FA Ports to increase the number of paths available for the Data Mover, Symmetrix metavolumes should be
considered as a method of reducing target/LUN counts. Aside from that, Symmetrix hypervolumes do not
provide features that are not otherwise provided by DART based volume management. Additionally, using
Symmetrix metavolumes can make it harder for the Celerra Admin to determine which spindles are being used
for each file system.
For multiple client NFS loads and HighRoad (more on HighRoad later) sequential workloads, use 16 volumes in
each stripe set, when possible.

Configuring Celerra Volumes & File Systems

- 17

Copyright 2006 EMC Corporation. All Rights Reserved.

Metavolume
y Metavolume is a concatenation of one or more disks, slices or striped volumes
y File Systems reside on Metavolumes
You must create a Metavolume before creating a file system
[Slide graphic: workflow sidebar (Verify Disk Volumes > Create Slice Volumes > Create Stripe Volumes > Create Metavolumes > Create File System) and a 40 GB metavolume concatenated from four 10 GB volumes on disks d10 through d13]

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 18

Celerra metavolume
A metavolume is an end-to-end concatenation of one or more disk volumes, slice volumes, stripe
volumes, or metavolumes. A metavolume is required to create a file system because metavolumes
provide the expandable storage capacity that might be needed to dynamically expand file systems. A
metavolume also provides a way to form a logical volume that is larger than a single disk.
metavolume size
The size of the metavolume must be at least 2 MB to accommodate a file system.
Naming a metavolume
If you do not enter a metavolume name, a default name is assigned.

Configuring Celerra Volumes & File Systems

- 18

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating a Metavolume
y Create the required metavolume from:
Disk Volumes
Slice Volumes
Stripe Volumes

y Command:
nas_volume -name <name> -create -Meta <volume_name>

y Example:
nas_volume -n mtv1 -c -M str1

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 19

Example
To create a metavolume named mtv1 from stripe volume str1, use the following command:
nas_volume -n mtv1 -c -M str1
id = 312
name = mtv1
acl = 0
in_use = false
type = meta
volume_set = str1
disks = d3,d4,d5,d6

Creating a metavolume from multiple volumes


To create a metavolume from multiple volumes, use the following command:
nas_volume -name <meta_vol_name> -create -Meta <vol_name>,<vol_name>
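For example, to concatenate two stripe volumes into a single metavolume (volume names are illustrative):

nas_volume -name mtv2 -create -Meta str1,str2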

Configuring Celerra Volumes & File Systems

- 19

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating a Metavolume
y

Storage > Volume > New > Type: Meta

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 20

This slide shows how to create a metavolume using Celerra Manager.

Configuring Celerra Volumes & File Systems

- 20

Copyright 2006 EMC Corporation. All Rights Reserved.

File System Overview


Verify Disk Volumes
Create Slice Volumes
Create Stripe Volumes
Create Metavolumes
Create File System

y The file system resides on the Metavolume


y File systems can be created on previously defined
Metavolumes, or when using AVM, the
Metavolume is created automatically
y Creating a File System builds the data structures
(metadata) required to create, locate, and access
files and directories
Metavolume

File System

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 21

Once you have configured the metavolume, you are now ready to create a file system. A file system is
a method of cataloging and managing the files and directories on a storage system. The default, and
most common, Celerra file system type is uxfs. Some other types of file systems are ckpt (Checkpoint
file system), rawfs (Raw file system), and mgfs (Migration file system).

Configuring Celerra Volumes & File Systems

- 21

Copyright 2006 EMC Corporation. All Rights Reserved.

Requirements for Creating a File System


y If not using AVM, must first create Metavolume
y Minimum size is 2MB
y Maximum size is 16TB
y Name may be specified or a default file system name is
assigned
y File systems are of type UxFS by default
Journal File System for fast recovery
FS Block size is 8K

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 22

Mandatory file system requirements


The following requirements are mandatory when creating a file system:
y You can only create a file system on non-root metavolumes that are not in use
y A metavolume must be at least 2 MB to accommodate a file system
Naming file systems
If you do not name the file system, a default name is assigned. By default, all file systems are created
as UxFS.

Configuring Celerra Volumes & File Systems

- 22

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating a File System


y Command:
nas_fs -name <name> -create <meta_vol>

y Example:
nas_fs -n fs1 -c mtv1
id = 17
name = fs1
acl = 0
in_use = false
type = uxfs
volume = mtv1
rw_servers =
ro_server =
symm_devs = 014,015,016
disks = d3,d4,d5,d6
2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 23

Example
To create a file system named fs1 from mtv1, type the following command:
nas_fs -n fs1 -c mtv1
id = 17
name = fs1
acl = 0
in_use = false
type = uxfs
volume = mtv1
rw_servers =
ro_server =
symm_devs = 014,015,016
disks = d3,d4,d5,d6

Configuring Celerra Volumes & File Systems

- 23

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating a File System


y File Systems > New > select Meta Volume

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 24

This slide shows how to create a file system using Celerra Manager.

Note: The Meta Volume Create from option was selected here. Creating a file system from a
Storage Pool is shown on a subsequent slide.

Configuring Celerra Volumes & File Systems

- 24

Copyright 2006 EMC Corporation. All Rights Reserved.

Automatic Volume Manager - AVM


y Automates volume/file systems creation and
management
No need to create underlying volumes

y Uses Storage Pools to logically organize storage by


type, performance, and/or other characteristics
Storage Pool - a container that holds storage available for use by file
systems, or other Celerra objects that use storage
System-defined
User-defined

Storage Pool

Profiles define rules on how devices


are aggregated and put into
system-defined storage pools
Storage system type
Volume protection scheme and
other characteristics
2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 25

Automatic Volume Manager


The Automatic Volume Management (AVM) feature of the Celerra File Server automates volume creation and
management. By using Celerra command options and interfaces that support AVM, you can create and expand
file systems without manually creating and managing their underlying volumes.
Storage Pool
A storage pool is a container for one or more member volumes. All storage pools have attributes, some of which
are modifiable. There are two types of storage pools;
y System-defined System-defined storage pools with NAS 5.3 are what used to be called system profiles in
prior releases. AVM controls the allocation of storage to a file system when you create the file system by
allocating space from a system-defined storage pool. The system-defined storage pools ship with the
Celerra. They are designed to optimize performance based on the hardware configuration.
y User-defined: User-defined storage pools allow for more flexibility in that you choose what storage should
be included in the pool. If the user defines the storage pool, the user must explicitly add and remove storage
from the storage pool and define the attributes for the storage pool.
Profile
Profiles provide the rules that define how devices are aggregated and put into system-defined storage pools.
Users cannot create, delete, or modify these profiles. There are two types of profiles:
y Volume: Volume profiles define how new disk volumes are added to a system-defined storage pool.
y Storage: Storage profiles define how the raw physical spindles are aggregated into Celerra disk volumes.
Note: Both volume profiles and storage profiles are associated with system-defined storage pools and are unique
and predefined for each storage system.

Configuring Celerra Volumes & File Systems

- 25

Copyright 2006 EMC Corporation. All Rights Reserved.

AVM System-Defined Storage Pools


y symm_std
Highest performance, medium cost, using Symmetrix STD disk volumes
y symm_std_rdf_src
Highest performance, medium cost, using SRDF
y clar_r1
High performance, low cost, using CLARiiON CLSTD disk volumes in RAID 1
y clar_r5_performance
Medium performance, low cost, using CLARiiON CLSTD disk volumes in 4+1
RAID 5
y clar_r5_economy
Medium performance, lowest cost, using CLARiiON CLSTD disk volumes in 8+1
RAID 5
y clarata_archive
Archival performance, high capacity, using CLARiiON ATA disk drives in a 6+1 RAID 5 configuration
y clarata_r3
Archival performance, lowest cost, using CLARiiON ATA disk drives in RAID 3
2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 26

symm_std
Designed for highest performance and availability at medium cost. This AVM profile uses Symmetrix STD disk volumes
(typically RAID 1).
symm_std_rdf_src
Designed for highest performance and availability at medium cost, specifically for storage that will be mirrored to a remote
Celerra File Server using SRDF. For information about SRDF, refer to the Using SRDF With Celerra technical
module.
clar_r1
Designed for high performance and availability at low cost. This AVM profile uses CLARiiON CLSTD disk volumes
created from RAID 1 mirrored-pair disk groups.
clar_r5_performance
Designed for medium performance and availability at low cost. This AVM profile uses CLARiiON CLSTD disk volumes
created from 4+1 RAID 5 disk groups.
clar_r5_economy
Designed for medium performance and availability at lowest cost. This AVM profile uses CLARiiON CLSTD disk
volumes created from 8+1 RAID 5 disk groups.
clarata_archive
Designed for archival performance and availability at lowest cost. This storage pool uses CLARiiON Advanced
Technology Attachment (ATA) disk drives in a RAID 5 configuration.
clarata_r3
Designed for archival performance and availability at lowest cost. This AVM storage pool uses CLARiiON ATA disk
drives in a RAID 3 configuration.

Configuring Celerra Volumes & File Systems

- 26

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating a User-Defined Storage Pool

To create a user-defined storage pool


y Command:
nas_pool -create -name <name> -acl <acl> -volumes
<volume_names> -description <desc> -default_slice_flag <y|n>

y Example:
nas_pool -create -name marketing -acl 0 -volumes
d126,d127,d128,d129 -description "pool for marketing" -default_slice_flag y

Use the nas_storage command to display


back-end attributes
y Example:
nas_storage -i -a
2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 27

If your environment requires more flexibility than the system-defined AVM storage pools allow, use
this command to create user-defined storage pools and define their attributes.
Example
nas_pool -l (displays naming conventions to use for the AVM system-defined storage pools shown on slide 26)
nas_pool -create -name marketing -acl 0 -volumes d126,d127,d128,d129
-description "pool for marketing" -default_slice_flag y
The nas_storage command can set the name for a storage system, assign an access control value, display
attributes, synchronize the storage system with the Control Station, and perform a failback for
CLARiiON systems.
The output from this command is determined by the type of storage system attached to the Celerra
Network Server.
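To verify the new pool, the nas_pool list and info options can be used (a usage sketch; marketing is the
pool created above, and the exact output varies by storage configuration):

nas_pool -list
nas_pool -info marketing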

Configuring Celerra Volumes & File Systems

- 27

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating a User-Defined Storage Pool


y Storage > Pools > New

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 28

This slide shows how to create a user-defined storage pool using Celerra Manager.

Note: By checking Slice Pool Volumes by Default?, the Celerra will slice existing volumes in the
pool to satisfy the user request for storage space. Otherwise, the Celerra would attempt to acquire a
new volume to satisfy the user request.

Configuring Celerra Volumes & File Systems

- 28

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating a File System


Using AVM from the Celerra CLI

Command:
nas_fs -n <fs_name> -create size=<size_in_GB>
pool=<storage_pool>

y Examples:
To create a striped 300GB FS from Symmetrix STD disk
nas_fs -n fs01 -c size=300 pool=symm_std
-o slice=y

To create a 200GB FS on a CLARiiON system with RAID 1


nas_fs -n fs01 -c size=200 pool=clar_r1
2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 29

To employ AVM from the command line, use the following syntax:
nas_fs -n <fs_name> -create size=<size_in_GB> pool=<storage_pool>

Examples:
To create a 300GB file system named fs01 from Symmetrix STD disk volumes type:
nas_fs -n fs01 -c size=300 pool=symm_std

To create a 200GB FS on a CLARiiON system with RAID 1 for high performance:


nas_fs -n fs01 -c size=200 pool=clar_r1

Configuring Celerra Volumes & File Systems

- 29

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating a File System Using Celerra Manager & AVM


y File Systems > New > select Storage Pool

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 30

This slide shows a sample Celerra Manager screen used to create a file system using the Automatic
Volume Management feature that allocates storage on an as-needed basis to storage pools. You can
create a file system by specifying the amount of space to allocate to the file system from a system-defined or a user-defined storage pool.

Configuring Celerra Volumes & File Systems

- 30

Copyright 2006 EMC Corporation. All Rights Reserved.

Maximum File Systems Size


y DART NAS 5.4 supports a total of up to 16TB of Fibre
Channel capacity per Data Mover*
Improved performance of both DART and system management
makes this possible

y A single file system can be created or extended up to a


16TB file system size
y File systems are created on metavolumes
The maximum size of a basic volume is 2TB
Creating a 16TB file system requires a metavolume with 8 or more
components

* Reference the Support Matrix for specific restrictions and limitations


2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 31

Beginning with NAS 5.4, Data Movers can support up to 16TB of Fibre Channel storage. The
minimum Data Mover for CNS cabinets is the 510 model. All NS-family Data Movers will
support up to 16TB of Fibre Channel storage.
This new feature is available on new and existing file systems. The file system must be made up of
metavolume components no larger than 2TB in size, concatenated together.

Configuring Celerra Volumes & File Systems

- 31

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating a 16 TB File System Example


Verify Disk Volumes
Create Slice Volumes

y Example: Create a 16TB metavolume from


(8) 2TB metavolumes:

Create Stripe Volumes


Create Metavolumes
Create File System

Create the (8) 2TB metavolumes first

Example:
# nas_volume -n mtv1 -c -M str1,str2
# nas_volume -n mtv2 -c -M str3,str4
...
# nas_volume -n supermeta -c -M mtv1,mtv2,mtv3,mtv4,mtv5,mtv6,mtv7,mtv8
# nas_fs -n fs1 -c supermeta

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 32

Example
To create a metavolume named mtv1 from stripe volume str1, use the following command:
nas_volume -n mtv1 -c -M str1
Creating a metavolume from multiple volumes
To create a metavolume from multiple volumes, use the following command:
nas_volume -name <meta_vol_name> -create -Meta <vol_name>,<vol_name>

After creating the 2TB metavolumes, you must create a larger metavolume (16TB) from the smaller
2TB metavolumes.

Configuring Celerra Volumes & File Systems

- 32

Copyright 2006 EMC Corporation. All Rights Reserved.

File System Size


y Dependent on Data Mover hardware and NAS software
version
y Check the release notes and product documentation
y Considerations:
Checkpoints:
A 16TB file system may take all available storage
You may be unable to create checkpoints/snaps

Replication: a 16TB file system may be too large to replicate


Backup and recovery

y Just because you can, doesn't mean it is a best practice!

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 33

File system limitations depend on Data Mover hardware and NAS software version. Check the release
notes and product documentation for supported units.
Other issues should be considered in planning, such as file system backups and restores and
consistency checks because of the time it takes to accomplish these with large file systems. As a basic
rule, smaller file systems perform better than larger ones.

Configuring Celerra Volumes & File Systems

- 33

Copyright 2006 EMC Corporation. All Rights Reserved.

Viewing Existing File Systems


Command example to view all existing File Systems:
# nas_fs -l
id    inuse type acl   volume   name                  server
...                    10       root_fs_1
...                    12       root_fs_2
...                    14       root_fs_3
...                    16       root_fs_4
15                     38       root_fs_15
16                     40       root_fs_common
17                     73       root_fs_ufslog
18                     76       root_panic_reserve
23                     99       Diamond
...
Configuring Celerra Volumes & File Systems - 34

2005 EMC Corporation. All rights reserved.

Column Definitions
id - ID of the file system
inuse - whether or not the file system is registered in the mount table of a Data Mover
type - type of file system
y 1=uxfs (default)
y 2-4=not used
y 5=rawfs (unformatted file system)
y 6=mirrorfs (mirrored file system)
y 7=ckpt (checkpoint file system)
y 8=mgfs (migration file system)
y 100=group file system
y 102=nmfs (nested mount file system)
acl - access control value for the file system
volume - volume on which the file system resides
name - name assigned to the file system
server - ID of the Data Mover accessing the file system

Configuring Celerra Volumes & File Systems

- 34

Copyright 2006 EMC Corporation. All Rights Reserved.

Viewing File System Details

To view configuration details about a file system:


# nas_fs -info Diamond

id         = 23
name       = Diamond
acl        = 0
in_use     = False
type       = uxfs
worm       = off
volume     = v99
pool       = clar_r5_performance
member_of  = root_avm_fs_group_3
rw_servers =
ro_servers =
rw_vdms    =
ro_vdms    =
stor_devs  = CK200051400304-000C,CK200051400304-000B,
             CK200051400304-0008,CK200051400304-0007
disks      = d15,d13,d11,d8

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 35

Configuring Celerra Volumes & File Systems

- 35

Copyright 2006 EMC Corporation. All Rights Reserved.

File System Utilization


To check the utilization of a particular file system
y Command:
server_df <movername> <fs_name>

y Example:
server_df server_2 fs1
server_2 :
Filesystem       kbytes      used     avail      capacity  Mounted on
fs1              69630208    379520   69250688   1%        /mp1

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 36

Checking utilization
To view the amount of used/free space in a file system, use this command.
server_df <mover_name> <file_system_name>
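If the file system name is omitted, server_df reports on all file systems mounted on the specified Data
Mover (a usage sketch):

server_df server_2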

Configuring Celerra Volumes & File Systems

- 36

Copyright 2006 EMC Corporation. All Rights Reserved.

Checking File System Utilization


y File Systems > double click the file system

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 37

This slide shows how to view file system information including utilization using Celerra Manager.

Configuring Celerra Volumes & File Systems

- 37

Copyright 2006 EMC Corporation. All Rights Reserved.

Extending a File System by Volume


To extend file systems online
y Command:
nas_fs -x <fs_name> <new_volume_name>

y Example:
nas_fs -x fs1 str2

y To extend the file system using AVM:


nas_fs -x <fs_name> size=<size>

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 38

Extending a file system by volume


When a file system nears capacity, it can be extended online. A UxFS file
system can be extended using any type of volume.
Example
y File system fs1 is made from metavolume mtv1
y metavolume mtv1 is made from stripe volume str1
y Stripe volume str1 is made from disk volumes d3,d4,d5,d6
If this file system were at 75% capacity, then you may wish to extend this file system. In this case, you
could make a second stripe volume, str2, from several new disk volumes. To extend the file system
to include str2, type the following command (after creating the new stripe volume):
nas_fs -x fs1 str2

Result: After extending the fs1 file system to include str2, mtv1 will also include str2.
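For a file system created with AVM, the same -x (-xtend) option accepts a size instead of a volume name,
as shown on the slide; a sketch assuming an AVM-backed file system named ufs1 and an illustrative 10 GB
extension:

nas_fs -x ufs1 size=10G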

Configuring Celerra Volumes & File Systems

- 38

Copyright 2006 EMC Corporation. All Rights Reserved.

Extending a File System by Volume


y File Systems > highlight the file system to extend > select Volume, select
Extend with Volume

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 39

This slide shows how to extend a file system by volume using Celerra Manager.

Configuring Celerra Volumes & File Systems

- 39

Copyright 2006 EMC Corporation. All Rights Reserved.

Extending a File System by Size


y File Systems > highlight the file system to extend > Extend > select
Storage Pool > enter Extend Size by (MB)

Configuring Celerra Volumes & File Systems - 40

2005 EMC Corporation. All rights reserved.

This slide shows how to extend the size of a file system by size.

Configuring Celerra Volumes & File Systems

- 40

Copyright 2006 EMC Corporation. All Rights Reserved.

Deleting a File System


y To delete a file system:
nas_fs -delete fs2

y To delete a file system and all meta, stripe, and slice


volumes
nas_fs -delete fs2 -o volume

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 41

To delete a file system, use the nas_fs -delete command. To delete a file system and all the
underlying meta, stripe, and slice volumes, use nas_fs -d <filesystem_name> -o volume.

Configuring Celerra Volumes & File Systems

- 41

Copyright 2006 EMC Corporation. All Rights Reserved.

Deleting a File System


y File Systems > highlight the file system to delete > click Delete

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 42

This slide shows how to delete a file system by using Celerra Manager.

Note: When deleting a file system using Celerra Manager, the underlying volume structure is also
deleted if the file system was originally created using AVM.

Configuring Celerra Volumes & File Systems

- 42

Copyright 2006 EMC Corporation. All Rights Reserved.

Auto Extension - NAS 5.5 Enhancements


y Automatically extends the size of the file system
when utilization reaches the High Water Mark (HWM)

Default HWM is 90% full

File systems < 10 GB will increase by the
size of the file system

File systems > 10 GB will extend by 5% or 10 GB, whichever is larger

y Works only with file systems created using AVM


y Enable at time of creation
May also be turned on or off after creation

y Max Size parameter prevents uncontrolled growth


y Command:
nas_fs -n ufs3 -create size=256M
pool=clar_r5_performance -auto_extend yes
-max_size 100G
2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 43

File systems created with AVM can be enabled with auto extension capability. You can enable auto
extension on a new or existing file system. When you enable auto extension, you can also choose to:
adjust the high water mark (HWM) value, set a maximum file size to which the file system can grow,
and enable virtual provisioning.
Auto extension causes the file system to automatically extend when it reaches the high water mark and
permits you to grow the file system gradually on an as-needed basis. The virtual provisioning option,
which can only be used in conjunction with auto extension, allows you to allocate storage based on
your longer term projections, while you dedicate only the file system resources you currently need. It
also allows you to show the user or application the maximum size of the file system, of which only a
portion is actually allocated, while allowing the file system to slowly grow on demand as the data is
written.
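A sketch of enabling auto extension on an existing AVM file system; the -modify form is an assumption
based on the note that the feature may also be turned on or off after creation, and is not shown on the
slide:

nas_fs -modify ufs3 -auto_extend yes -max_size 100G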

Configuring Celerra Volumes & File Systems

- 43

Copyright 2006 EMC Corporation. All Rights Reserved.

Virtual Provisioning - NAS 5.5 Enhancements


y Presents maximum file system
size to the user

Only a fraction of the disk space is
actually allocated from the storage pool

y File system grows on demand as the
data is written
Works with Auto Extension
Must specify Max Size

y Command:
nas_fs -n ufs3 -create size=256M
pool=clar_r5_performance -auto_extend yes
-max_size 1T -vp yes
2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 44

The virtual provisioning option lets you present the maximum size of the file system to the user or
application, of which only a portion is actually allocated; virtual provisioning permits the file system to
slowly grow on demand as the data is written.
Enabling virtual provisioning with Automatic File System Extension does not automatically reserve
the space from the storage pool for that file system. Administrators must ensure adequate storage space
exists so the automatic extension operation can succeed. If the available storage is less than the
maximum size setting, then automatic extension fails. Users receive an error message when the file
system becomes full, even though it appears there is free space in the file system.

Configuring Celerra Volumes & File Systems

- 44

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Manager File System Creation Dialog

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 45

This slide shows how to create a file system with Auto Extend and Virtual Provisioning enabled by
using Celerra Manager. Refer to the Celerra man pages for more detailed information and options on
creating Celerra file systems with CLI.

Configuring Celerra Volumes & File Systems

- 45

Copyright 2006 EMC Corporation. All Rights Reserved.

References
y Managing Celerra Volumes and File Systems with
Automatic Volume Management
P/N 300-002-689 Rev A01 Version 5.5 March 2006
y Managing Celerra Volumes and File Systems Manually
P/N 300-002-705 Rev A01 Version 5.5 March 2006
* Above are available on the User Information CD

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 46

Configuring Celerra Volumes & File Systems

- 46

Copyright 2006 EMC Corporation. All Rights Reserved.

Module Summary
Key points covered in this module are:
File systems can be created manually or with Automatic
Volume Manager
The types of Celerra volumes that can be created are
slice, stripe, and metavolumes
Storage Pools are containers that hold storage ready
for use by file systems, checkpoints, or other Celerra
objects that use storage
System-defined Storage Pools
User-defined Storage Pools

File systems must be created on metavolumes


A metavolume includes one or more disk volumes, slice volumes,
stripe volumes, or other metavolumes
2006 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 47

The key points for this module are shown here. Please take a moment to review them.

Configuring Celerra Volumes & File Systems

- 47

Copyright 2006 EMC Corporation. All Rights Reserved.

Closing Slide

2005 EMC Corporation. All rights reserved.

Configuring Celerra Volumes & File Systems - 48

Configuring Celerra Volumes & File Systems

- 48

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra ICON
Celerra Training for Engineering

Exporting File Systems to UNIX Clients

2006 EMC Corporation. All rights reserved.

Exporting File Systems to UNIX

-1

Copyright 2006 EMC Corporation. All Rights Reserved.

Revision History
Rev Number

Course Date

1.0

February 2006

1.2

May, 2006

2006 EMC Corporation. All rights reserved.

Revisions
Complete
Updates and enhancements

Exporting File Systems to UNIX Clients - 2

Exporting File Systems to UNIX

-2

Copyright 2006 EMC Corporation. All Rights Reserved.

Module Objectives
Upon completion of this module, you should be able to:
y

Make a Celerra file system available to UNIX clients


using NFS including:

Creating mountpoint on Data Mover


Mounting the file system on the Data Mover
Export the mounted file system

Explain mount and export options used to control


access to file system objects

On a UNIX client, access an exported file system


including

Creating a local mount point


NFS mount the file system
Manage user and group credentials for file system object access

2006 EMC Corporation. All rights reserved.

Exporting File Systems to UNIX Clients - 3

In the prior module we created a file system. Before a file system can be accessed by a client, it first
must be mounted on a Data Mover and exported. This module covers these steps. While the process is
nearly the same for both NFS and CIFS environments, this module focuses on NFS clients in a UNIX
environment only.

Exporting File Systems to UNIX

-3

Copyright 2006 EMC Corporation. All Rights Reserved.

Introduction to NFS
y NFS (Network File System) is a client/server distributed file service
that provides file sharing in a network environment
Developed in the early 1980s by Sun Microsystems
Standard for network file access in UNIX environments

y Built on the Open Network Computing Remote Procedure Call


system (RPC)
y Celerra supports NFS versions 2, 3, and 4
Version 2 is stateless and UDP based
Version 3 is based on TCP and provides better security when Secure NFS
is implemented, optionally using Kerberos
Version 4 is strongly influenced by CIFS and utilizes a stateful protocol, has
better security and performance, and is supported using IPv4

y NFSv4 support was added with NAS release 5.5


2006 EMC Corporation. All rights reserved.

Exporting File Systems to UNIX Clients - 4

Exporting File Systems to UNIX

-4

Copyright 2006 EMC Corporation. All Rights Reserved.

Process Overview
Existing File System
Create Mountpoint
Mount FS to Mountpoint
Export Mounted FS
NFS Mount from Clients
2006 EMC Corporation. All rights reserved.

Exporting File Systems to UNIX Clients - 5

Before a file system can be accessed by clients, the file system must be mounted and exported. After
creating a file system on the Symmetrix or CLARiiON using the nas_fs command:
The Celerra administrator must:
y Create a mountpoint on a Data Mover.
y Mount the file system to the mountpoint.
y Export the mounted file system.
The clients must remotely mount the exported file system.

Exporting File Systems to UNIX

-5

Copyright 2006 EMC Corporation. All Rights Reserved.

Mountpoints
y Command: server_mountpoint
Create Mountpoint

y File systems are mounted to mountpoints

Mount FS to Mountpoint

Mountpoints are directories located on Data Movers


When mounting a file system, a mountpoint is
created automatically if it does not exist

Export Mounted FS
NFS Mount from Clients

y When creating a file system with Celerra


Manager, a mountpoint is created and the
file system is mounted automatically
y File systems can be mounted to directories
or subdirectories
File system fs1 mounted to /mp1/dir1

2006 EMC Corporation. All rights reserved.

Exporting File Systems to UNIX Clients - 6

Creating a mountpoint
A mountpoint can be created on a Data Mover before you mount a file system; otherwise, the Celerra will create the
mountpoint when the file system is mounted. You must delete the mountpoint manually with the CLI (the GUI
deletes the mountpoint when the file system is deleted). Each file system can be mounted (rw) on only one
mountpoint, and each mountpoint can provide access to one file system at a time. Celerra supports having
multiple Data Movers mount the same file system concurrently only if all mounts are read only. The read
only (ro) and read write (rw) mount options are discussed in relation to the server_mount command.
Naming mountpoints
Mountpoint names must begin with a "/" followed by alphanumeric characters (for example, /new).
Mounting a file system
You can mount a file system, rooted on a subdirectory of an already exported file system, as long as the file
system has not previously been mounted above or below that mount point. For example:
y File system fs1 is mounted to /mp1
y Directory /dir1 is created in /mp1
y File system fs2 is then mounted to /mp1/dir1
y However, file system fs1 cannot be mounted to /mp1/dir1
Maximum number of nested mountpoints
The maximum number of nested mountpoints that you can create under a directory is eight. However, you can
only mount a file system up to the seventh level.

Exporting File Systems to UNIX

-6

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating a Mountpoint
y Creating a mountpoint simply creates a directory on the
Data Mover
y Command:
server_mountpoint mover_name -create mountpoint

y Example:
server_mountpoint server_2 c /mp1
server_2: done

2006 EMC Corporation. All rights reserved.

(diagram: a Data Mover with mountpoint /mp1)

Exporting File Systems to UNIX Clients - 7

To create a mountpoint:
server_mountpoint mover_name -create mountpoint

For example, to create a mountpoint called mp1 for server_2:


server_mountpoint server_2 c /mp1
server_2: done

Note:
It is not necessary to create a mountpoint prior to mounting the file system. The mount command will
create the mountpoint and mount the file system. The name of the mountpoint will be the file system
name.

Exporting File Systems to UNIX

-7

Copyright 2006 EMC Corporation. All Rights Reserved.

Mounting a File System


Create Mountpoint

y Mount the file system to the mountpoint


Makes it available to the Data Mover

Mount FS to Mountpoint
Export Mounted FS
NFS Mount from Clients

Not available to clients until exported

y Mount for Read/Write or Read Only


y Mounts are permanent
(diagram: file system fs1 on the back-end storage system, mounted to mountpoint /mp1 on the Data Mover)

2006 EMC Corporation. All rights reserved.

Exporting File Systems to UNIX Clients - 8

Once you create a mountpoint, you must mount your file system to the mountpoint in order to provide
user access. File systems are mounted permanently by default. If you perform a temporary unmount
(the default), then after a system reboot the mount table is activated and the file system is
automatically mounted again.
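For example, to mount a file system read-only using the -option flag (names reused from the earlier
examples):

server_mount server_2 -option ro fs1 /mp1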

Exporting File Systems to UNIX

-8

Copyright 2006 EMC Corporation. All Rights Reserved.

Mounting a File System to a Mountpoint


y Command:
server_mount server_2 -option <options>
<file_system> /<mountpoint>

y Example:
server_mount server_2 fs1 /mp1

y File systems can be mounted for Read Only access (ro)


Multiple Data Movers can mount the same file system if mounted for Read
Only access
A file system mounted for read-write can only be mounted on one Data Mover

y File system write operations are cached and read data is prefetched
by default but can be disabled by mount options

y Other mount options apply to mixed CIFS and NFS


environments (discussed later)
2006 EMC Corporation. All rights reserved.

Exporting File Systems to UNIX Clients - 9

Mount options vary for NFS and CIFS. When performing a mount, you can institute the following
options to define the mount:
Read-write: When a file system is mounted read-write (default) on a Data Mover, only that Data
Mover is allowed access to the file system. No other Data Mover is allowed read or read-write access
to that file system.
Read-only: When a file system is mounted read-only on a Data Mover, clients cannot write to the file
system, regardless of the export permissions. A file system can be mounted read-only on several Data
Movers concurrently, as long as no Data Mover has mounted the file system as read-write.
Additional options for CIFS mount include:
File locking
Opportunistic locks
Notify
Access checking policies

Exporting File Systems to UNIX

-9

Copyright 2006 EMC Corporation. All Rights Reserved.

Mounting a File System to a Mountpoint


y

File Systems

2006 EMC Corporation. All rights reserved.

Exporting File Systems to UNIX Clients - 10

With Celerra Manager, the file system is mounted, by default, at the time you create it. A file system
called marketing is shown here. When marketing was created, it was automatically mounted to a
mountpoint called marketing.

Exporting File Systems to UNIX

- 10

Copyright 2006 EMC Corporation. All Rights Reserved.

Unmounting a File System


y File systems can be unmounted permanently or
temporarily
y Examples: Permanent unmount
server_umount server_2 -p /mp1
server_umount ALL -p -a

y Examples: Temporary unmount


server_umount server_2 -t /mp1

y Deleting a file system via Celerra Manager will


automatically unmount the file system permanently and
delete the mountpoint
2006 EMC Corporation. All rights reserved.

Exporting File Systems to UNIX Clients - 11

Unmounting a file system


Celerra file systems can be unmounted permanently or temporarily.
Permanent unmount
When a file system is unmounted permanently, the file system entries are removed from the mount table and the
entries are not remounted at boot up.
Examples:
To permanently unmount mountpoint /mp1 on server_2, type the following command:
server_umount server_2 -p /mp1
To permanently unmount all file systems from all mountpoints, type the following command:
server_umount ALL -p -a
Temporary unmount
When a file system is unmounted temporarily, the entries remain in the mount table and are remounted again
when the Data Mover reboots. When a temporary unmount takes place, neither the file system nor the
mountpoint can be deleted.
Example
To temporarily unmount mountpoint /mp1 on server_2, type the following command:
server_umount server_2 -t /mp1

Note:

There is not an option to unmount a file system with Celerra Manager

Exporting File Systems to UNIX

- 11

Copyright 2006 EMC Corporation. All Rights Reserved.

Exporting File Systems


Create Mountpoint
Mount FS to Mountpoint
Export Mounted FS
NFS Mount from Clients

2006 EMC Corporation. All rights reserved.

y Exporting makes the file systems


available on the network for client
access
y The default is export for NFS clients
y May also be exported for CIFS access

Exporting File Systems to UNIX Clients - 12

Exporting file systems


After creating a mountpoint and mounting a file system, you must export the path to allow NFS and/or
CIFS users to access the system.

Exporting File Systems to UNIX

- 12

Copyright 2006 EMC Corporation. All Rights Reserved.

Exporting File Systems


y Command:
server_export server_2 -option <options> /<mountpoint>

y Example:
server_export server_2 /mp1

2006 EMC Corporation. All rights reserved.

Exporting File Systems to UNIX Clients - 13

Command
Paths are exported from Data Movers using the server_export command. This adds an entry to
the export table. Entries to the table are permanent and are automatically re-exported if the Data
Mover reboots.
Export options
Options used when exporting the file system play an integral part of managing security to the file
system. You can ignore existing options in an export entry by including the -ignore option. This
forces the system to ignore the options in the export table and follow the specific guidelines of that
export.
It is not necessary to export the root of a file system.
It is sometimes advantageous to export a directory on the file system rather than the file system itself.
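For example, if file system fs1 is mounted on /mp1 and contains a directory dir1 (illustrative names), just
that directory can be exported:

server_export server_2 /mp1/dir1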

Exporting File Systems to UNIX

- 13

Copyright 2006 EMC Corporation. All Rights Reserved.

NFS Export Options


y When exporting a file system, you can use options to
specify the level of access
Option

Description

ro

Export the path for all NFS clients for read-only.

ro=

Export the path for specific NFS clients as read-only.

rw=

Export the path as read/write for specific client(s). If no other options


are specified, all other clients have read-only access.

access=

Provide default access for the specified clients. Deny access to


those NFS clients not given explicit access.

root=

Provide root access to clients already listed in the export command


by specifying a client for root=. Setting root access does not grant
access to the export by itself. Root access is added to the other
permissions.

2006 EMC Corporation. All rights reserved.

Exporting File Systems to UNIX Clients - 14

When using the server_export command you can specify the level of access for each NFS export.
Client lists for ro=, rw=, access=, and root= can be a hostname, netgroup, subnet, or IP address and
must be colon-separated, without spaces. You can also exclude access by using the dash (-) prior to an
entry for ro=, rw=, and access=, for example rw=-host1.
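Combining several of the options above (hostnames are illustrative), the following grants read/write
access to host1 and host2, default read-only access to the other listed clients, and root privilege to
admin1:

server_export server_2 -option rw=host1:host2,root=admin1,access=host1:host2:admin1:host3 /mp1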

Exporting File Systems to UNIX

- 14

Copyright 2006 EMC Corporation. All Rights Reserved.

NFS Export Options Examples


y Export options can be assigned to:
IP Address
IP Subnet
Netgroup

y Exporting access to an IP subnet


server_export server_2
-o access=192.168.160.0/255.255.240.0 /mp1

y Assigning root privilege to another host


server_export server_2 -o root=192.168.64.10 /mp1

y Exporting using read-mostly


server_export server_2 -o anon=guest,rw=sales /mp1

Exporting File Systems to UNIX Clients - 15

2006 EMC Corporation. All rights reserved.

Export security options


The server_export command provides a variety of security options. The various options can be configured
to reference an IP host address, IP subnet, or netgroup.
Anonymous users
Anonymous users can also be associated with a particular UID. (Celerra will first parse the /.etc/passwd,
/.etc/hosts, /.etc/netgroups files for resolution of host names, UIDs, and netgroups. An NIS server will then be
checked if the Data Mover has been configured to do so with the server_nis command.)
The anon= option
The anon= option specifies a UID that will be applied to anonymous users. A value of 0 assigns root privilege
to unknown users. Alternatively, an organization can create an account for such purposes, such as guest. The
default is anon=nobody; unknown users will be denied access.
server_export server_2 -o anon=guest /mp1
Assigning root privilege
The Celerra Administrator can assign root privilege to a particular entity, such as the network's UNIX
administrator's workstation.
server_export server_2 -o root=192.168.64.10 /mp1
Note: Refer to the slide titled Exporting File Systems (NFS) (Celerra Manager) to view the export options
using Celerra Manager.

Exporting File Systems to UNIX

- 15

Copyright 2006 EMC Corporation. All Rights Reserved.

Exporting File Systems (NFS)


y NFS Exports > New

2006 EMC Corporation. All rights reserved.

Exporting File Systems to UNIX Clients - 16

This slide shows how to export an NFS file system using Celerra Manager.

Exporting File Systems to UNIX

- 16

Copyright 2006 EMC Corporation. All Rights Reserved.

Unexporting File Systems for NFS


y Example - Permanent unexport (most common)
server_export server_2 -u -p /mp1

y Example - Temporary unexport (for all mountpoints)


server_export ALL -u -t -all

2006 EMC Corporation. All rights reserved.

Exporting File Systems to UNIX Clients - 17

To unexport a file system permanently:


server_export server_2 -unexport -permanent /mp1

To temporarily unexport all file systems:


server_export ALL -u -t -all

Exporting File Systems to UNIX

- 17

Copyright 2006 EMC Corporation. All Rights Reserved.

Unexporting File Systems for NFS


y NFS Exports > Highlight export to
delete > Delete

2006 EMC Corporation. All rights reserved.

Exporting File Systems to UNIX Clients - 18

This slide shows how to permanently unexport an NFS file system using Celerra Manager.

Exporting File Systems to UNIX

- 18

Copyright 2006 EMC Corporation. All Rights Reserved.

Client Mount Procedure


Create Mountpoint

y NFS clients must mount the file system to a


mount point directory
Manual
At system reboot

Mount FS to Mountpoint

Automatically using the automount daemon


Export Mounted FS

y Create a mountpoint directory


Standard UNIX directory

NFS Mount from Clients

# mkdir /hmarine
y NFS mount the exported file system to the local
directory
# mount 192.168.101.20:/mp1 /hmarine

Exporting File Systems to UNIX Clients - 19

2006 EMC Corporation. All rights reserved.

Once the file system has been exported from the Celerra, NFS clients will need to NFS mount the file
system. The typical procedure involves the use of a local directory as a mountpoint, whether preexisting or created specifically for this purpose.
In the example below, a directory named /hmarine on a Sun Solaris workstation is being NFS mounted
to a Celerra file system that is mounted to /mp1 on a Data Mover with the IP address 192.168.101.20.
Similar syntax can be used for other clients supporting NFS.
As root, create a new directory.
# mkdir /hmarine
At this point /hmarine is a directory. Performing an ls command on /hmarine should yield no results
because the directory is empty.
NFS mount /hmarine to the Data Mover's exported /mp1 file system.
# mount 192.168.101.20:/mp1 /hmarine
If a host name resolution solution (such as DNS) has been employed, the command could be as
follows:
# mount cel1dm2:/mp1 /hmarine
After mounting /hmarine to the Data Mover's exported /mp1, /hmarine now is a file system, not a
directory. An ls command on /hmarine should now yield contents of lost+found (which is at the root
of all file systems).
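To make the client mount persistent across reboots (the at system reboot option listed on the slide), an
entry can be added to the client's file system table; a sketch for a Linux client's /etc/fstab (Solaris uses
/etc/vfstab with a different layout):

192.168.101.20:/mp1  /hmarine  nfs  defaults  0 0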

Exporting File Systems to UNIX

- 19

Copyright 2006 EMC Corporation. All Rights Reserved.

Summary of Mounting File Systems for NFS


On Celerra File Server (nas_commands):
Start with Disk Volumes > Create Slice Vol. (opt) > Create Stripe Vol. (opt) >
Create Meta Volume > Create File System

On Celerra File Server (server_commands):
Create Mountpoint > Mount FS to Mountpoint > Export Mounted FS

On NFS Clients:
Create Local Directory > NFS Mount to Data Mover

2006 EMC Corporation. All rights reserved.

Exporting File Systems to UNIX Clients - 20

This slide summarizes what needs to occur when creating a file system and making it available to NFS
clients on the network.
1. A meta volume is created using either a stripe, slice, or disk volume
2. A file system is created on the meta volume
3. Mountpoint is created
4. The file system is mounted to the mountpoint
5. The mountpoint is exported for NFS
6. The NFS client creates a local directory and mounts the remote Celerra file system

Exporting File Systems to UNIX

- 20

Copyright 2006 EMC Corporation. All Rights Reserved.

Nested Mount Point File System


y Unified file system with a single namespace
y Combination of several individual Celerra file
systems
y Resource aggregation for reporting purposes
server_df
properties

(diagram: NMFS root /nmfs_fs1 with components /nestfs01, /nestfs2, /nestfs3, /nestfs4)

y Virtual read-only root file system of type NMFS
Created using nas_fs command with type=nmfs
No space required

y Access control by exports (nested exports)

2006 EMC Corporation. All rights reserved.

Exporting File Systems to UNIX Clients - 21

A Nested Mount File System is a collection of individual file systems that can be exported as a single
share or single mount point. Normally, the collection of file systems remain together after creation;
although it is possible to remove an individual file system or to break up the collection entirely.
The space for each Nested Mount File System and each of the component file systems can be
examined using server_df.
The space reported for the NMFS will be the aggregation of the space within each of the component
file systems mounted in it.
The space reported for each component will be the actual space within the component file system.
In some cases, the access control associated with a NMFS root may not be sufficient for the entire
collection of file systems. Thus, NMFS will allow different export controls on each of the component
file systems. Access to each of the file systems may be individually set via the server_export for
the component file systems.
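A sketch of building an NMFS from the CLI, based on the slide's note that the root is created with nas_fs
and type=nmfs; the mount steps and names shown here are assumptions for illustration only:

nas_fs -name nmfs_fs1 -type nmfs -create
server_mount server_2 nmfs_fs1 /nmfs_fs1
server_mount server_2 nestfs01 /nmfs_fs1/nestfs01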

Exporting File Systems to UNIX

- 21

Copyright 2006 EMC Corporation. All Rights Reserved.

Example of Nested Mount File System


NMFS export for RW
(diagram: NMFS root /nmfs_fs1 with components /nestfs01, /nestfs2,
/nestfs3, /nestfs4; one component exported for RO)

y Export permissions are set
on the NMFS

y Component file systems inherit NMFS
permissions

y Component file systems can be assigned individual export
permissions
Supersedes the inherited NMFS permissions
2006 EMC Corporation. All rights reserved.

Exporting File Systems to UNIX Clients - 22

A component (nested) file system will get its permissions one of two ways:
y The user can export the component file system separate from the NMFS file system and give it
permissions at that time.
y The user can export just the NMFS file system. The component file systems then inherit the
permissions from the parent (NMFS) file system.
Example:
Set export permission to Nested_1 = r/w
y fs002=r/w (inherited)
y fs003=r/w (inherited)
y fs004=r/w (inherited)
Set export permission to fs002 = r/o
y fs002=r/o (component export)
y fs003=r/w (inherited)
y fs004=r/w (inherited)
Set export permission to fs004 = root=10.0.0.1
y fs002=r/o (component export)
y fs003=r/w (inherited)
y fs004=root=10.0.0.1 (component export)
Exporting File Systems to UNIX

- 22

Copyright 2006 EMC Corporation. All Rights Reserved.

Example of Nested Mount File System

Note: nmfs_fs1 will show the total file system


size of nested file systems
2006 EMC Corporation. All rights reserved.

Exporting File Systems to UNIX Clients - 23

The sum of the size of the four component (nested) file systems is equal to the size of the NMFS file
system (nmfs_fs1).

Exporting File Systems to UNIX

- 23

Copyright 2006 EMC Corporation. All Rights Reserved.

NFS User Authentication and Authorization


y In a UNIX environment, every user is
identified by a unique User Identifier (UID)
and is a member of one or more groups
identified by a Group ID (GID)
Requests for access to file system objects
include the UID/GID of the user
UIDs and GIDs are used by the Data Mover
to determine access to file system objects

y When the Data Mover receives a user
request, it queries one of three possible
sources to authenticate the request

(diagram: NFS user presents a UID/GID to access file system objects)

Local passwd and group files
Network Information Service (NIS)
Active Directory

y If no match is found, the user is mapped


to Anonymous
Limited access
2006 EMC Corporation. All rights reserved.

Exporting File Systems to UNIX Clients - 24

Celerra Data Movers compare users to UIDs and groups to GIDs using traditional passwd and group
files or by querying NIS.
Data movers will check their local /.etc/passwd and /.etc/group files first, and then check with NIS if
the Data Mover has been configured for NIS.
If the Active Directory schema has been extended to include UNIX attributes for Windows users and
groups, you can configure a Data Mover to query the Active Directory to determine if a user and the
group of which the user is a member have UNIX attributes assigned. If so, information stored in these
attributes is used for file access authorization.
The Data Mover first checks its local cache. It then queries all the configured naming services in a
predetermined order until the requested entity is found or until all naming services have been queried.
The search order is determined by the name service switch (nsswitch), which is configured using the
nsswitch.conf file.
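A sketch of the corresponding nsswitch.conf entries (the ordering shown is illustrative; on the Data Mover
the file resides under /.etc):

passwd:   files nis
group:    files nis
hosts:    files nis dns
netgroup: files nis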

Exporting File Systems to UNIX

- 24

Copyright 2006 EMC Corporation. All Rights Reserved.

Users and Groups with Local Files

y Command:
# /nas/sbin/server_user <mover_name> -add
passwd <UID or user name>

y Example:
# /nas/sbin/server_user server_2 -add -passwd
itechi

y Interactively configures user name, password and


other attributes in /.etc/passwd file
Must be run as root user

2006 EMC Corporation. All rights reserved.

Exporting File Systems to UNIX Clients - 25

Adding users
Users can be added to /.etc/passwd on a Data Mover with the server_user command. This command opens to a script
that allows you to create or modify a user account. The server_user command also allows you to add or delete an
optional password to a user account. This command must be run from the /nas/sbin directory as root.
# /nas/sbin/server_user server_2 -add -passwd itechi
Creating new user itechi
User ID: 1007
Group ID: 105
Comment: Ira Techi, IS admin
Home Directory:
Shell:
Changing password for new user itechi
New passwd:
Retype new passwd:
server_2: done
Password and group files
In addition to server_user, passwd and group files can be created manually, or copied from another system, and then
placed into /.etc using the server_file command.

Exporting File Systems to UNIX

- 25

Copyright 2006 EMC Corporation. All Rights Reserved.

Users and Groups with NIS


To configure a Data Mover to query NIS directly
y Command:
server_nis <mover_name> <domain_name>
<ip address of NIS server(s)>

y Example:
server_nis server_2 hmarine.com 192.168.64.10,192.168.64.11

NIS
Data Mover

2006 EMC Corporation. All rights reserved.

Exporting File Systems to UNIX Clients - 26

NIS (Network Information Service) is a Network service that converts hostnames to IP addresses or IP
addresses to hostnames. NIS can also be used to store user and group names used in authentication.
Command syntax
server_nis server_2 <nis_domain_name> <IP_Addr_of_NIS_server1>,
<IP_Addr_of_NIS_server2>,
Example
server_nis server_2 hmarine.com 192.168.64.10,192.168.64.11

Note: EMC recommends that two NIS servers are configured for each Data Mover for redundancy.

Exporting File Systems to UNIX

- 26

Copyright 2006 EMC Corporation. All Rights Reserved.

Users and Groups with NIS


y

Network > NIS Settings

Exporting File Systems to UNIX Clients - 27

2006 EMC Corporation. All rights reserved.

This slide shows how to define an NIS server using Celerra Manager.

Exporting File Systems to UNIX

- 27

Copyright 2006 EMC Corporation. All Rights Reserved.

Users and Groups with NIS


y Place NIS password and group files on the Data Mover

On the Control Station, extract the files from NIS:
# ypcat passwd >passwd
# ypcat group >group

Copy the files to the Data Mover:
server_file server_2 -put passwd passwd
server_file server_2 -put group group


2006 EMC Corporation. All rights reserved.

Exporting File Systems to UNIX Clients - 28

Copying passwd and group files onto a Data Mover


Alternatively, passwd and group files can be copied from the NIS server using ypcat and then
transferred to the Data Mover's /.etc directory using server_file.

Examples
To copy files from an NIS client, type the following command:
# ypcat passwd >passwd
# ypcat group >group

To copy passwd and group files to Control Station and then FTP these files to the Data Mover, type
the following command:
server_file server_2 -put passwd passwd
server_file server_2 -put group group

Exporting File Systems to UNIX

- 28

Copyright 2006 EMC Corporation. All Rights Reserved.

Module Summary
y Before a file system can be accessed by clients, it must be mounted
and exported from the Celerra
y The server_mount command is used to mount a file system
When a file system is mounted read/write on a Data Mover (default), only
that Data Mover is allowed access to the file system
When a file system is mounted read-only on a Data Mover, clients cannot
write to the file system regardless of the export permissions

y The server_export command, or Celerra Manager is used to export


a mounted file system for client access
y A Nested Mount File System is a collection of individual file systems
that can be exported as a single share or single mount point
y Once a file system has been exported from the Celerra, NFS clients
need to mount the file system for access
y NFS Users are authenticated and authorized access using UIDs
and GIDs
2006 EMC Corporation. All rights reserved.

Exporting File Systems to UNIX Clients - 29

In this module we discussed making a file system available to NFS clients. While there are third party
NFS client software packages available for Windows, Windows users typically use the CIFS protocol
that we will be discussing in subsequent modules.

Exporting File Systems to UNIX

- 29

Copyright 2006 EMC Corporation. All Rights Reserved.

Closing Slide

2006 EMC Corporation. All rights reserved.

Exporting File Systems to UNIX Clients - 30

Exporting File Systems to UNIX

- 30

Copyright 2006 EMC Corporation.All Rights Reserved.

Celerra ICON
Celerra Training for Engineering

Common Internet File system (CIFS)

2006 EMC Corporation. All rights reserved.

Intro to CIFS

-1

Copyright 2006 EMC Corporation.All Rights Reserved.

Revision History
Rev Number

Course Date

1.0

February 2006

1.1

March 2006

Updates and enhancements

1.2

May 2006

Updates and enhancements

2006 EMC Corporation. All rights reserved.

Revisions
Complete

Intro to CIFS

Intro to CIFS

-2

Copyright 2006 EMC Corporation.All Rights Reserved.

Introduction to CIFS
y Define terminology used in a CIFS environment
y Describe how CIFS users are authenticated and how
their credentials are mapped in the Celerra Network File
Server
y Describe the purpose of Usermapper and how it works
y Configure Internal Usermapper in a multi-Celerra
environment
y Describe the options for mapping user credentials in a
multiprotocol environment

2006 EMC Corporation. All rights reserved.

Intro to CIFS

In this module we will be discussing security issues: basically, identifying who a user is. How we
address these issues is different if we are in a UNIX only environment, a Windows CIFS environment,
or a mixed environment. In this section we are going to discuss a tool called Usermapper that is used
to map Windows credentials (SID) to the UNIX-like User ID (UID) and Group ID (GID) conventions
that is used by DART on the Celerra.
An alternative to using Usermapper is to manually create entries in a passwd and groups file or use the
tool NTmigrate that will extract Windows user credentials and convert them into UIDs and GIDs used
by the Celerra.

Intro to CIFS

-3

Copyright 2006 EMC Corporation.All Rights Reserved.

Common Internet File System (CIFS)


y CIFS is a file access protocol used by the Microsoft
Windows operating system for distributed file sharing
Designed for the Internet and is based on the Server Message Block
(SMB)
Open standard for network file service.

y When configured for CIFS services, the Data Mover provides file access
features similar to those of a Windows server
High performance, availability, and security with native Windows
server functionality
Supports features such as:
Locking policies and Oplocks
Quotas and filtering
DFS, Home Directories, etc.
2006 EMC Corporation. All rights reserved.

Intro to CIFS

The default file serving protocol for the Celerra is NFS. While there are NFS clients available for the
Microsoft environment, CIFS is the most widely used protocol for file sharing in a Windows
environment.
When configured for CIFS, the Celerra Data Mover emulates a Windows server, providing high
performance along with a robust set of Windows server features.

Intro to CIFS

-4

Copyright 2006 EMC Corporation.All Rights Reserved.

Genealogy of CIFS Support


y NAS 2.1 - initial CIFS implementation (NT)
y NAS 2.2 - W2K mixed mode support
y NAS 2.2+ - W2K Native Mode
y NAS 4.2 - WinXP
y NAS 5.0 - GPO
y NAS 5.1 - W2K3 Client support
y NAS 5.2 - W2K3 domain (interim)
y NAS 5.3 - DM CIFS server can be configured to operate
in all MS Windows domain environments
Windows NT 4.0
Windows 2000 Mixed
Windows 2000 Native
Windows Server 2003 Family Interim
Windows Server 2003
y NAS 5.4 - Large file systems support, performance enhancements, etc.
y NAS 5.5 - Q106

10 Years of Refinement & Enhancements
2006 EMC Corporation. All rights reserved.

Intro to CIFS

CIFS support has been available with Celerra for a number of years. Each release of NAS adds more
features and supported platforms.

Intro to CIFS

-5

Copyright 2006 EMC Corporation.All Rights Reserved.

Celerra CIFS Implementation


y On the Data Mover, CIFS is implemented as a
Service that must be started
Implemented as a kernel process
For maximum efficiency/performance

Provides Windows network file serving functionality including lock
policies, access-checking policies, filtering, quotas, etc.

y One or more CIFS Servers are configured
A logical server that uses the CIFS protocol to transfer files
A Data Mover can host many instances of a CIFS Server
Each instance is associated with one or more network interfaces

y During configuration, the Celerra CIFS Server joins a specific Windows
domain as a member server
Each server appears as a separate computer in Active Directory

[Diagram: a Data Mover runs the CIFS Service, which hosts multiple CIFS Servers]
2006 EMC Corporation. All rights reserved.

Intro to CIFS

The file sharing protocol that is used by default is NFS. If Windows clients will be accessing CIFS
shares, then the CIFS service must be started. This service is implemented at the kernel level for
maximum performance.
CIFS Servers are logical instances. A single Data Mover may host multiple CIFS Servers, each
configured as a separate entity when it joins the domain.
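As a hedged illustration drawn from standard Celerra CLI usage (the Data Mover name is an example), the CIFS service is started and stopped with server_setup:
$ server_setup server_2 -Protocol cifs -option start
$ server_setup server_2 -Protocol cifs -option stop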

Intro to CIFS

-6

Copyright 2006 EMC Corporation.All Rights Reserved.

Client Access to CIFS


y Similar to NFS, providing file access for Windows clients primarily
consists of mounting a file system, and then exporting the share so that
Windows users can access it
1. Create the file system
2. Create a mount point on the Data Mover to mount the file system
3. Mount the file system on the Data Mover
4. Export the file system using CIFS protocol - CIFS users access as a Share

However, there are other considerations and prerequisites that must be satisfied

Integration into the Windows Network Domain

Configure User Authentication and Authorization

Configuring CIFS Servers

Starting CIFS protocol on the Data Mover

Enable UNICODE for Internationalization

2006 EMC Corporation. All rights reserved.

Intro to CIFS

At the highest level, the way you make a CIFS share available to clients is very similar to how you
would do it for NFS clients: Create the file system, mount it on the Data Mover and Export it.
However, there are many other considerations and prerequisites that must be met, specifically the
integration into the Windows environment.
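As a hedged end-to-end sketch of the four steps listed on the slide (the file system name, size, storage pool, and share name are illustrative assumptions), the basic sequence might look like:
$ nas_fs -name fs01 -create size=10G pool=clar_r5_performance
$ server_mountpoint server_2 -create /fs01
$ server_mount server_2 fs01 /fs01
$ server_export server_2 -Protocol cifs -name fs01share /fs01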

Intro to CIFS

-7

Copyright 2006 EMC Corporation.All Rights Reserved.

Celerra CIFS Server is a Windows Domain Member


y While stand-alone CIFS configurations are possible, most likely, the CIFS
servers integrate into the Windows Domain in order to make file system
resources available to domain client systems
y A Domain is a logical grouping of computers that share common security
and user account information
All computers and users have a domain account that uniquely identifies them
Domain Administrator creates the user/computer accounts
Account information maintained in the Active Directory
Advanced directory services accessed through a protocol such as LDAP

DNS (Domain Name Service) is used to locate computers and services on the
network
Maintains a database of domain names, host names, IP addresses and services
DNS provides name resolution

y Domain users join (log in to) the domain by presenting their credentials
Authentication is performed using Kerberos
Secret-key encryption mechanism

Users log in to the domain once; it is not necessary to log in to each computer in the domain
2006 EMC Corporation. All rights reserved.

Intro to CIFS

While it is possible to configure standalone CIFS servers, most environments integrate into the
Windows domain. When properly configured, the CIFS server on the Data Mover is visible on the
Microsoft network, either via the browse list or via a UNC path.
EMC strongly recommends that you enable Unicode on the Celerra Network Server. If you do not
enable Unicode, ASCII filtering is automatically enabled when you create the first Windows 2000 or
Windows Server 2003-compatible CIFS server on the Data Mover. If neither Unicode nor ASCII
filtering are enabled, you cannot create a Windows 2000 or Windows Server 2003-compatible CIFS
server.
Unicode can be enabled using the uc_config command and through the Set Up Celerra Wizard on the
Celerra Manager. However, depending on your environment, you might also need to customize the
settings of the translation configuration files before enabling Unicode.

Intro to CIFS

-8

Copyright 2006 EMC Corporation.All Rights Reserved.

Authentication
y Three authentication options:
Single username/password - SHARE Security
At the Data Mover - UNIX Security
When the user enters NT/2000 network - NTSecurity

y Data Movers use NT user authentication as the default method
Best Practice: Do not use UNIX or SHARE user authentication methods
When a CIFS user logs in, a security access token is created that contains
the Security ID (SID) for the user, and the SID for the user's group
Presented to the data mover at time of access
Compared with the security descriptor of any CIFS object to determine access
rights

y Authentication option is set on a per Data Mover basis and applies to
every interface and all CIFS servers on a Data Mover

2006 EMC Corporation. All rights reserved.

Intro to CIFS

One of the configuration options is to specify where user validation will take place. For example, users
could be required to present a username and password that matches what is stored in the passwd file on
the Data Mover or in NIS (UNIX security). Alternatively, it can be assumed that, since the user is
coming from the Windows network, he or she has already provided a username and password and was
validated by a Microsoft domain controller, and therefore holds a Security Access Token; in that case
we trust that they are who they say they are (NT security). A simpler but less flexible option is to have
a single username/password for all users who would like to access the system. This is referred to as
SHARE security.

Intro to CIFS

-9

Copyright 2006 EMC Corporation.All Rights Reserved.

Security Modes - Overview


NT Overview
Allows access to shares only after authentication by a domain controller.
Client sends a user name and encrypted password to the Data Mover for
authentication.
Checks file, directory, and share-level ACLs.
Default user authentication method. Recommended Security Mode.

UNIX Overview
Authentication is done on the Data Mover using the local files (passwd
and group) or NIS.
Uses plain-text passwords.
ACLs not checked.
Not recommended.

SHARE Overview
Uses no passwords or uses plain-text passwords.
Asks for read-only or read/write password.
ACLs not checked.
Not recommended.
2006 EMC Corporation. All rights reserved.

Intro to CIFS

Intro to CIFS

- 10

Copyright 2006 EMC Corporation.All Rights Reserved.

Security Mode - How It Works

NT - How It Works
y The client sends a username and encrypted password to the Data Mover,
or Kerberos tickets. User authentication is done by the domain controller
using NTLM V0.12 or Kerberos (default in Windows 2000 and Windows
Server 2003) and LDAP.
y Access-checking is against user and group security IDs (SIDs)

UNIX - How It Works
y The client sends a username and a plain-text password to the Data
Mover. The Data Mover verifies ID information by checking the passwd
file on the Data Mover or NIS.

SHARE - How It Works
y If you do not specify a password when creating the share, any user that
can connect to the share is granted access.
y If you do specify a password, the user must provide the specified
password when connecting to the share.
2006 EMC Corporation. All rights reserved.

Intro to CIFS

LDAP Lightweight Directory Access Protocol

Intro to CIFS

- 11

Copyright 2006 EMC Corporation.All Rights Reserved.

Security Mode - When to Use

NT - When to Use
Most useful for configurations that require a high degree of security and
that are accessed primarily by CIFS users.
Recommended Security Mode.

UNIX - When to Use
Usually only used if there is no Windows domain available.
Not recommended.

SHARE - When to Use
Only useful for configurations with few security requirements.
Not recommended.

2006 EMC Corporation. All rights reserved.

Intro to CIFS

Intro to CIFS

- 12

Copyright 2006 EMC Corporation.All Rights Reserved.

Setting Authentication
$ server_cifs <movername> -add security=<security_mode>
Where:
<movername> = name of the specified Data Mover
<security_mode> =
NT (Default) - The Windows NT password database on the PDC (which uses
encrypted passwords with NETLOGON) is used. The passwd file, NIS, or
Usermapper is required to convert Windows NT usernames to UNIX UIDs.
UNIX - The client supplies a username and a plain-text password to the server. The
server uses the passwd database or NIS to authenticate the user.
SHARE - Clients only supply the read-only or read/write password you configure
when creating a share. Unicode must not be enabled.

y Example:
To set the user authentication method to UNIX for server_2, type:
$ server_cifs server_2 -add security=UNIX

2006 EMC Corporation. All rights reserved.

Intro to CIFS

Intro to CIFS

- 13

Copyright 2006 EMC Corporation.All Rights Reserved.

Verifying Authentication
y Checking the user authentication method set on a Data Mover
y Example: To check the user authentication method for
server_2:
$ server_cifs server_2
server_2 :
96 Cifs threads started

Security mode = NT
Max protocol = NT1
I18N mode = ASCII
Home Directory Shares DISABLED
Usermapper auto broadcast enabled
Usermapper[0] = [127.0.0.1] state:active
port:14640
...

2006 EMC Corporation. All rights reserved.

Intro to CIFS

Intro to CIFS

- 14

Copyright 2006 EMC Corporation.All Rights Reserved.

User ID Resolution and User Mapping


[Diagram: a Windows user presents a SID to the CIFS Services and a UNIX
user presents a UID/GID to the NFS Services on the Data Mover; every file
system object is owned by a UID/GID]

y Every file system object (such as a file, directory, link, shortcut) has an associated
owner and owner group identified with UID and GID
y UIDs and GIDs are used by the Celerra to control access to the file system objects
y Windows and UNIX users present credentials in different format
Unix users: numeric User Identifiers (UIDs) and Group Identifiers (GIDs)
Windows users: Security Identifier strings (SIDs)

y Data Mover uses UIDs to determine access to file system objects


2006 EMC Corporation. All rights reserved.

Intro to CIFS

The Celerra Data Mover runs the EMC proprietary DART operating system, which is based on UNIX.
Similar to other UNIX systems, users are authorized to enter the system based on a username and
password that is stored in the passwd file (located in /.etc on the Data Mover), or on an NIS server.
Users and the groups they belong to are also associated with UIDs and GIDs, which are stored in the
passwd and group files.
Microsoft does not employ UIDs and GIDs to identify users. Rather, users are identified by a SID
(Security Identifier). Therefore, each user must have a username/UID created for them in order to
access the UNIX-like Data Mover. (The same is true for groups and GIDs.) Again, this username/UID
will be located on the Data Mover (/.etc/passwd) or on an NIS server.

Intro to CIFS

- 15

Copyright 2006 EMC Corporation.All Rights Reserved.

User ID Resolution and User Mapping Options


Multiple techniques for mapping SIDs to UIDs depending on environment
y Windows only?
Use Usermapper process

y Both UNIX and Windows users?
Primarily Windows environment?
Assign UNIX attributes to domain users in Active Directory

UNIX your primary client and only one Windows Domain?
Add domain users to UNIX authentication method
Local passwd and group files
NIS
NTMigrate can facilitate mapping Windows users to local files

2006 EMC Corporation. All rights reserved.

Intro to CIFS

There are a couple of methods for generating the username/UID for the CIFS user. One is to simply use
a process called Usermapper that runs on one of the Data Movers and sequentially assigns UIDs and
GIDs to Windows users. This is appropriate in environments that have only Windows users. If the
environment includes both Windows and UNIX users, it would be appropriate to use the same user IDs
for both. In these cases, migrating Windows SIDs to UIDs and merging those credentials with the
UNIX credentials is appropriate. A set of utilities called NTMigrate is useful in these environments.

Intro to CIFS

- 16

Copyright 2006 EMC Corporation.All Rights Reserved.

Usermapper Overview
y Celerra Usermapper is a process that runs on the Data
Mover as a daemon
Automatically assigns UIDs and GIDs to Windows users and groups
Persistently maintains mapping information

y One Data Mover is configured as the Primary and other
Data Movers are clients
The Data Mover in slot 2 (server_2) is configured as the primary by
default when the system is installed
In a multi-Celerra environment, Secondary Usermappers are
configured
Only one Primary Usermapper per Celerra Environment

y Legacy systems used an External Usermapper process
that ran on the Control Station
2006 EMC Corporation. All rights reserved.

Intro to CIFS

Prior to NAS v5.2, the Usermapper service ran on Linux (including the Celerra Control Station), or
UNIX. Starting with v5.2, Usermapper now runs on the Data Mover. The Usermapper Service
automatically generates and maintains a database that maps SIDs to UIDs and GIDs for users or groups
accessing file systems from a Windows domain.

Intro to CIFS

- 17

Copyright 2006 EMC Corporation.All Rights Reserved.

User Mapping
y User logs into domain
Username maps to SID
Sends SID when requesting access to Data Mover file object

y Data Mover checks for local mapping
Password and group files
Mapping cache

y If no local mapping, and NIS is configured, queries Domain Controller
for the user name associated with the SID and queries NIS for UID/GID

y If no mapping found and Active Directory is configured, a query is
sent to AD for SID to UID mapping

[Diagram: a user request arrives over the external network; each Data Mover
checks its own passwd/group files and mapping cache, then queries NIS
and/or Active Directory]

2006 EMC Corporation. All rights reserved.

Intro to CIFS

When a request is received, the Data Mover first checks to see if a mapping of the UID/GID exists in
the local Usermapper cache that each Data Mover maintains. If no mapping is found, it next checks
for a mapping in the local passwd/group files, the NIS directory, or Active Directory. The order of
query is determined by the nsswitch.config file. If no mapping is found, a mapping request is
sent to the local Usermapper service (either Primary or Secondary).
If the primary Usermapper service is unavailable or, if for some reason, it cannot map the user or
group, an error is logged in the server log.
On the Data Mover, the persistent SID-to-UID/GID cache, introduced with DART 5.2, is stored in the
file /.etc/secmap.

Intro to CIFS

- 18

Copyright 2006 EMC Corporation.All Rights Reserved.

Queries Usermapper for Mapping


y If no mapping found, Data Mover queries local Usermapper for SID to
UID/GID mapping

y Usermapper checks database for existing mapping

y If no mapping exists, a new mapping is added to the database and the
UID/GID is returned to the requesting Data Mover

y Mapping is permanently cached

y User is allowed authorized access to file system objects

[Diagram: Data Movers send mapping requests over the internal network to
the Usermapper service on the Data Mover in slot 2 (server_2), which
maintains SID-to-UID and SID-to-GID mapping tables for users and groups]

2006 EMC Corporation. All rights reserved.

Intro to CIFS

The Data Mover first determines if it has a mapping for the SID in its local Usermapper cache. It then
checks its local user and group files and, if configured, checks NIS and Active Directory. If there is
still no mapping, the Data Mover sends a mapping request to the primary Usermapper service.
The primary Usermapper service checks its database to determine if this user or group has already
been assigned a UID/GID. If not, the primary Usermapper generates a new UID and GID and adds the
new user or group to its database, along with the mapping. It then returns the mapping to the Data
Mover and the Data Mover permanently caches the mapping.
Note: If the primary Usermapper service is unavailable or, if for some reason, it cannot map the user or
group, an error is logged in the server log.

Intro to CIFS

- 19

Copyright 2006 EMC Corporation.All Rights Reserved.

Usermapper Implementation
y When a Celerra Server is booted for the first time after
installation, server_2 is automatically configured with a
Primary Usermapper Service
No installation or configuration required
Service is made highly available by configuring standby Data Mover

y All other Data Movers are configured as clients of the Usermapper service
Primary is discovered using a broadcast over the internal network

2006 EMC Corporation. All rights reserved.

Intro to CIFS

When a Celerra v5.2+ is booted for the first time, it is automatically configured with the default
Usermapper configuration. In this situation, Usermapper is fully operational and no additional
installation or configuration is required.
By default, all the Data Movers in the cabinet use the internal IP address of the Data Mover in slot 2 as
the location of the primary Usermapper service.
This automatic implementation is not always appropriate. If your environment has more than one
Celerra that share the same domain space, the default configuration should be modified. One Celerra
(server_2) should remain as the primary Usermapper service, and the other cabinets should be
configured with server_2 as secondary. The Data Movers in these cabinets send mapping requests to
their local secondary Usermapper and each secondary Usermapper forwards these requests to the
single primary Usermapper service.
When multiple Data Movers do not share the same Windows domain, each domain should be
configured with its own primary Usermapper service.
Note: As in a standard Celerra configuration, you can configure another Data Mover to serve as a
failover, providing backup for the primary Usermapper service.
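As a hedged example only (the standby Data Mover name and failover policy are assumptions), a standby is typically assigned with the server_standby command:
$ server_standby server_2 -create mover=server_3 -policy auto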

Intro to CIFS

- 20

Copyright 2006 EMC Corporation.All Rights Reserved.

Multi Celerra Environments


y If more than one Celerra exists in the same environment, the default
installation is not appropriate
Only one Primary Usermapper
All other Celerra are configured as Secondary Usermappers
Note: If each Celerra is in a separate Domain, they can be configured with
their own Primary

y Data Movers send mapping requests to local Secondary Usermapper service
y Secondary Usermappers forward mapping requests to the Primary
service
y Only the Primary Usermapper generates new mappings
Both Primary and Secondary Usermapper processes maintain a mapping
database
All Secondary Usermapper services may not have the same mapping in
their database
2006 EMC Corporation. All rights reserved.

Intro to CIFS

In a multi-Celerra environment only one Data Mover will be configured as the Primary. The Data
Mover in slot 2 of each other Celerra in the environment is configured as a secondary. Within a Celerra
system, all mapping requests are directed to the local Primary or Secondary Usermapper service.
Secondary Usermappers forward mapping requests to the Primary, and only the Primary can create new
mappings.

Intro to CIFS

- 21

Copyright 2006 EMC Corporation.All Rights Reserved.

Multi Celerra Environments


[Diagram: the Primary Usermapper runs on server_2 of one Celerra and a
Secondary Usermapper runs on server_2 of another Celerra; the secondary
forwards requests to the primary over the public network, while the other
Data Movers in each cabinet (each with its own passwd/group files and
mapping cache) send mapping requests over their internal network; both
primary and secondary maintain SID-to-UID and SID-to-GID databases for
users and groups]

2006 EMC Corporation. All rights reserved.
Intro to CIFS

One instance of the Usermapper service serves as the primary Usermapper service, meaning it assigns
UIDs and GIDs to Windows users and groups. By default, this instance is configured on the Data
Mover in slot 2 (server_2). The other Data Movers in a single cabinet are configured as clients of the
primary Usermapper service, meaning they send mapping requests to the primary service when they do
not find a mapping for a user or group in their local cache.
Other instances of the Usermapper service can serve as secondary Usermapper services, meaning they
collect requests for mappings and forward them to the primary Usermapper service. Typically, you
would only configure a secondary Usermapper service in a multi-cabinet environment.
You should have only one primary Usermapper in a single cabinet. In the situation where the Celerra
is configured to support multiple Windows domains, a primary Usermapper service for each domain
may be configured.
EMC recommends one Usermapper instance (primary or secondary) per cabinet. If it is a large cabinet
populated with 14 Data Movers, it may be beneficial to configure a secondary service to reduce traffic
to the primary.

Intro to CIFS

- 22

Copyright 2006 EMC Corporation.All Rights Reserved.

Usermapper for Multi Celerra Environments


y Designate a primary and verify the Primary Usermapper service is enabled
server_usermapper server_2

y On the secondary Celerra systems, disable the default primary
Usermapper service
server_usermapper server_2 -disable

y Configure the secondary Usermapper service
server_usermapper server_2 -enable primary=192.168.12.34

y Verify the secondary service is configured
server_usermapper server_2
2006 EMC Corporation. All rights reserved.

Intro to CIFS

Intro to CIFS

- 23

Copyright 2006 EMC Corporation.All Rights Reserved.

Verifying the Status of Usermapper


y To verify the Usermapper configuration for a Data Mover:
$ server_cifs server_2
server_2 :
256 Cifs threads started
Security mode = NT
Max protocol = NT1
I18N mode = ASCII
Home Directory Shares DISABLED
Usermapper auto broadcast enabled
Usermapper[0] = [127.0.0.1] state:active
port:14640
...

2006 EMC Corporation. All rights reserved.

Intro to CIFS

Intro to CIFS

- 24

Copyright 2006 EMC Corporation.All Rights Reserved.

Viewing the Usermapper Configuration


y CIFS > Usermappers tab

2006 EMC Corporation. All rights reserved.

Intro to CIFS

You can view the Usermapper configuration using Celerra Manager.

Intro to CIFS

- 25

Copyright 2006 EMC Corporation.All Rights Reserved.

Usermapper Considerations
y Configure only one primary Usermapper per Celerra
environment
Otherwise there could be duplicate mappings

y Mapping stops if Data Mover root file system reaches 95% full
New users will not be allowed access to the system

y UID and GID ranges are fixed in the database
Default UID and GID values start at 32K
May be specified in the usermap.cfg file
Best Practice is to use the default range

2006 EMC Corporation. All rights reserved.

Intro to CIFS

Intro to CIFS

- 26

Copyright 2006 EMC Corporation.All Rights Reserved.

Usermapper Database
y The Usermapper database is backed up as part of the hourly
NASDB backup
y The Usermapper database can be exported
server_usermapper server_2 -Export -user usermapper_sample
Creates a file on the Control Station
Sample format:
S-1-5-15-139d2e78-56b1775d-5475b975-323d:*:11894:903:user:diamond.jim
from domain dir:/user/S-1-5-15139d2e78-56b1775d-5475b975-3323d:/bin/ksh

y NOTE: This is just to illustrate the Usermapper concept.
Incorrectly modifying the Usermapper database can result in clients
losing access to file system objects!
2006 EMC Corporation. All rights reserved.

Intro to CIFS

EMC recommends that you do not change the Usermapper database. Changes made to the database
are not reflected by a client Data Mover if the client Data Mover has already cached the existing
Usermapper entry in its local cache.

Intro to CIFS

- 27

Copyright 2006 EMC Corporation.All Rights Reserved.

Options for Multi-Protocol Environments


y If the Celerra environment consists of Windows clients only,
Internal Usermapper is the best choice
y When Celerra is in a Multi Protocol Environment, and users have both
UNIX and Windows accounts, must either:
Migrate UNIX users to Windows Domain
Migrate Windows Domain users to UNIX environment

y Manually add Windows users to passwd/group files or NIS
y Celerra UNIX Attributes Migration Tool
Migrates local passwd and group files or NIS files to Active Directory
y NTMigrate - Migrates Windows users to UNIX user environment

[Diagram: Windows users with accounts in Active Directory; UNIX users with
accounts in passwd/group files on the Data Mover or an NIS server]

2006 EMC Corporation. All rights reserved.

Intro to CIFS

Intro to CIFS

- 28

Copyright 2006 EMC Corporation.All Rights Reserved.

Using Local Files to Map Users


y In multiprotocol environments:
With both UNIX and Windows clients
But, primarily UNIX clients
Use local passwd and group files and assign specific UNIX UIDs and
GIDs to Windows users
If the same user has both UNIX and Windows accounts, the Windows
user is mapped to the UID and GIDs of the existing UNIX accounts

y To configure:
Copy passwd and group files from Data Mover to Control Station
Edit files adding Windows user names to passwd file and Domain
name as a group name in the group file
Copy the passwd and group files back to the Data Mover and/or
update NIS master

[Diagram: Windows users access the Data Mover, which holds the
passwd/group files]
2006 EMC Corporation. All rights reserved.

Intro to CIFS

Explicit User and Group Mapping Using Local Files or NIS


With this method, you must edit the /.etc/passwd and /.etc/group files or NIS and manually map
Windows users and groups to distinct UNIX UIDs and GIDs.
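As a hedged sketch (the file names are illustrative), the files are typically copied with server_file, edited on the Control Station, and copied back:
$ server_file server_2 -get passwd passwd
$ server_file server_2 -get group group
(edit the files, adding Windows user and domain group entries)
$ server_file server_2 -put passwd passwd
$ server_file server_2 -put group group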

Intro to CIFS

- 29

Copyright 2006 EMC Corporation.All Rights Reserved.

Using NIS to Map Users


y Using local files on multiple servers is difficult to administer
Each Data Mover has its own copy that must be maintained

y Network Information Services (NIS) may be used to provide centralized
management of passwd and group files

y To set up:
Configure NIS database with user and group information for Windows users
Similar format as local passwd and group files
May be imported from local host

Configure Data Mover for NIS
server_nis ALL <domain name> <ip address>

If usernames are not in the format of username.domain, set the
parameter cifs.resolver to 1
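A hedged sketch of changing this parameter with the standard server_param command follows; the facility and parameter names are assumptions inferred from the cifs.resolver notation above, so verify them against the parameters guide before use:
$ server_param server_2 -facility cifs -modify resolver -value 1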

2006 EMC Corporation. All rights reserved.

Intro to CIFS

Intro to CIFS

- 30

Copyright 2006 EMC Corporation.All Rights Reserved.

NTMigrate
y Creates UNIX account for Microsoft users
Format consistent with passwd/group files or NIS database
Makes sure each user has a unique UID
Assigns new UIDs
Same UID if user exists in both Windows and Unix environments

y May be used in all environments
Especially useful in mixed CIFS and UNIX environments

y Has two utilities
ntmigrat.exe
Run on Windows domain controller and extracts user credentials
ntmiunix.pl
Perl script that runs on UNIX/Linux hosts and merges passwd/group files

y Must be re-run whenever Microsoft users are added


2006 EMC Corporation. All rights reserved.

Intro to CIFS

NTMigrate is used in all situations where CIFS and NFS users will be using the same Data Mover
(multi-protocol).
There are two utilities that make up NTMigrate. The first utility is ntmigrat.exe, this is run on a
Windows domain controller for each Windows domain. The second utility is ntmiunix.pl, and is run on
a UNIX system that has Perl installed.
If a Data Mover is servicing both CIFS and NFS clients, it is important that users have the same ID
whether they access the Data Mover from Windows or from UNIX. For example, if Etta Place creates
a file from her UNIX workstation and then accesses the same file from a Windows workstation, she
would rightly expect to still be the owner of that file. Additionally, administering permission would be
very complex if each user had multiple IDs. Moreover, it is crucial that no two users are assigned the
same ID.
When Usermapper assigns IDs to users, there is little control over what IDs the users will receive. This
is not acceptable in a CIFS/NFS mixed environment. For this reason NTMigrate is the utility of choice
in such mixed settings.
NTMigrate is a static solution. If the user/group account database changes, NTMigrate must be re-run
to be updated.
NTMigrate extracts a list of users and global groups from each Windows domain and combines the
results with existing passwd and group file information from an existing UNIX host or NIS server. The
resulting files are used by the Data Mover to provide all users and groups with a single, unique ID as
they access the Data Movers file systems.

Intro to CIFS

- 31

Copyright 2006 EMC Corporation.All Rights Reserved.

The NTMigrate Process


1. On Domain Controller run ntmigrat.exe
Collects user information from Windows domain
Creates passwd and group files

2. On Control Station run ntmiunix.pl
Compares Microsoft users/groups to users defined in UNIX passwd/group files
Assigns the same UID or assigns a new ID

3. Administrator combines files into a single passwd/group file

4. Administrator places files in the correct location on the Data Mover
Copy passwd/group files to the Data Mover
Use the server_file command

[Diagram: user accounts from Active Directory are extracted on the domain
controller, merged with existing passwd/group files on the Control Station,
and the resulting files are placed on the Data Mover or NIS server]


2006 EMC Corporation. All rights reserved.

Intro to CIFS

The procedure for NTMigrate will be divided into four areas:


y Running ntmigrat.exe
y Running ntmiunix.pl
y Consolidating all of the users/group information into one passwd and one group file
y Placing these files in the correct location (Ex. The Data Mover, or NIS)

Intro to CIFS

- 32

Copyright 2006 EMC Corporation.All Rights Reserved.

Using Active Directory to Map Users


y If the Celerra environment consists of Windows clients only, Usermapper
is the best choice
y In multiprotocol environments:
With both UNIX and Windows clients
But, primarily Windows clients
Use Active Directory to centralize both Windows and UNIX user accounts

y Extend AD schema to include UNIX attributes for Windows users and groups
Data Mover queries AD for user and group information to determine file
access authorization

y To configure:
Install Celerra UNIX User Management component of Celerra CIFS MMC
snap-in
Manually assign UIDs and GIDs to Windows Users
Set cifs.useADMap Parameter to 1 for Data Movers

[Diagram: Active Directory stores user accounts with UID/GID attributes]
2006 EMC Corporation. All rights reserved.

Intro to CIFS

Celerra UNIX User Management Snap-In


Celerra UNIX User Management is a MMC snap-in to the Celerra Management Console that you can
use to assign, remove, or modify UNIX UID/GIDs for a single Windows user or group on the local or
remote domains. It can be used in a single or multi-protocol environment (Windows 2000/2003). It is
best for large environments with numerous domains where you want to centralize your UNIX user
account management.

Intro to CIFS

- 33

Copyright 2006 EMC Corporation.All Rights Reserved.

Celerra UNIX User Management


y Extends Active Directory to
include UNIX attributes for
Windows users
y Manage sharing of database
between trusted domains.
y Manage Windows
users/groups from other
domains.

2006 EMC Corporation. All rights reserved.

Intro to CIFS

The Celerra UNIX User Management is a Microsoft Management Console (MMC) snap-in to the
Celerra Management Console that can be used to assign, remove, or modify UNIX attributes for
Windows user or group on the local domain and on remote domains. The location of the attribute
database can either be in a local or remote domain.
You would choose to store the attribute database in the Active Directory of a local domain if:
y You have only one domain
y Trusts are not allowed, or
y You have no need to centralize your UNIX user management information
You would choose a remote domain if:
y You have multiple domains,
y Bi-directional trusts between domains that need to access the attribute database already exist, and
y You want to centralize your UNIX user management

Intro to CIFS

- 34

Copyright 2006 EMC Corporation.All Rights Reserved.

Extension Snap-in to AD
y Property page to User and
Groups
y Allows administrator to
browse an NIS server or
passwd/group file to match
existing UNIX user/group

2006 EMC Corporation. All rights reserved.

Intro to CIFS

Celerra UNIX users and groups property pages are extensions to Active Directory Users and
Computers (ADUC). You can use these property pages to assign, remove, or modify UNIX attributes
for a single Windows user or group on the local domain. You cannot use this feature to manage users
or groups on a remote domain.

Intro to CIFS

- 35

Copyright 2006 EMC Corporation.All Rights Reserved.

CIFS Migration Tool


y Searches a NIS server or
local passwd/group files for
user names that match
existing Windows
User/Group
y Display hierarchy of
discovered users and groups
in domains
y Administrator selects users
and groups to migrate to
Active Directory
2006 EMC Corporation. All rights reserved.

Intro to CIFS

The CIFS Migration tool scans either an NIS server or local UNIX passwd and group files looking for
names that match existing Windows users/groups in the local and trusted domains. A hierarchy of
discovered users and groups in domains is displayed, and an administrator can select UNIX users and
groups to migrate to Active Directory.

Intro to CIFS

- 36

Copyright 2006 EMC Corporation.All Rights Reserved.

References
y Configuring Celerra User Mapping Technical Module
P/N 300-002-715 Rev A01 Version 5.5 March 2006
y NTMigrate with Celerra Technical Module
P/N 300-002-719 Rev A01 Version 5.5 March 2006
y Celerra UNIX Attributes Migration Tool online help
* Above are available on the User Information CD

2006 EMC Corporation. All rights reserved.

Intro to CIFS

Notes: For instructions on using Celerra UNIX User Management Snap-In or Celerra UNIX Users and
Groups Property Page Extension, refer to the online help. For installation instructions, refer to
Installing Celerra Management Applications technical module. For instructions on using local files,
refer to Configuring CIFS on Celerra for a Multiprotocol Environment.

Intro to CIFS

- 37

Copyright 2006 EMC Corporation.All Rights Reserved.

Module Summary
y Security Mode = NT is the default and what is recommended for CIFS
environments
y Usermapper is used in a CIFS-only environment
y Usermapper is configured automatically on install
The primary Usermapper service runs on server_2 by default
EMC recommends one primary Usermapper in a Celerra environment and one
Usermapper instance per Celerra cabinet
The Usermapper service is automatic; no installation or configuration is required

y In a mixed UNIX and Windows environment, you have two choices:
Move UNIX clients to Active Directory and add UNIX attributes to Windows users in
Active Directory
Migrate Domain users to local password/group files or NIS

y NTMigrate is used when CIFS and NFS users will be accessing the same
Data Mover
Creates UNIX accounts for Microsoft users
An administrator consolidates all of the user/group information into one passwd and
one group file
Static - every time a new user is added to the Microsoft network, the NTMigrate
process must be performed again
2006 EMC Corporation. All rights reserved.

Intro to CIFS

The key points for this module are shown here. Please take a moment to read them.

Intro to CIFS

- 38

Copyright 2006 EMC Corporation.All Rights Reserved.

Closing Slide

2006 EMC Corporation. All rights reserved.

Intro to CIFS

Intro to CIFS

- 39

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra ICON
Celerra Training for Engineering

Configuring CIFS on a Data Mover

2006 EMC Corporation. All rights reserved.

Configuring CIFS

-1

Copyright 2006 EMC Corporation. All Rights Reserved.

Revision History
Rev Number    Course Date       Revisions

1.0           February 2006     Complete

1.2           May 2006          Updates and Enhancements

2006 EMC Corporation. All rights reserved.

Configuring CIFS - 2

Configuring CIFS

-2

Copyright 2006 EMC Corporation. All Rights Reserved.

Configuring CIFS
Objectives:
y Describe the interactions between the Data Mover and
services within a Microsoft network environment
Active Directory
DNS
Kerberos
Time Services

y Configuring CIFS on the Celerra


y Make file systems available to clients in a Windows 2000
environment

2006 EMC Corporation. All rights reserved.

Configuring CIFS - 3

In this module we will be discussing making file systems available in a Windows environment. We
will see that the Data Mover is dependent on a number of services provided by the Microsoft network.
A simple CIFS server configuration may not include a domain infrastructure. This type of CIFS server,
called a stand-alone server, does not require external components such as a domain controller, NIS
server, or Usermapper. Users log in to the stand-alone CIFS server through local user accounts. For
this class we are going to focus on a more typical configuration with full integration into a Windows
network environment.

Configuring CIFS

-3

Copyright 2006 EMC Corporation. All Rights Reserved.

Data Movers vs. CIFS servers


y Data Mover: Physical Celerra file server
y CIFS Server: Logical file server to Windows clients
y Data Mover can be configured with multiple CIFS servers
y Each CIFS server:

Data Mover

Is associated with at least


one interface/IP address
Configures with its own
shares

CIFS
Server

CIFS
Server

CIFS
Server

Seen in the Windows


domain as a computer resource
Can be configured with
NetBIOS aliases
2006 EMC Corporation. All rights reserved.

CIFS
Server

CIFS
Server

Configuring CIFS - 4

While a Celerra Data Mover is a literal server, a CIFS server is a logical server that emulates the
functionality of a Windows file server. Each Data Mover can be configured with one or more CIFS
servers. Each CIFS server can have its own shares* and can belong to a different Windows domain.
You must configure at least one network interface for each CIFS server.

*In order for shares to be associated with a single CIFS server it must be stated in the export statement.

Configuring CIFS

-4

Copyright 2006 EMC Corporation. All Rights Reserved.

Virtual Data Movers


y A Virtual Data Mover (VDM) allows administrative separation of
CIFS servers
y Implemented as a separate root file system and thus can be moved
and/or replicated
y VDM / CIFS server info includes:
Local group database
Share database for the servers in the VDM
CIFS server configuration
Home directory information
Auditing and Event log
Kerberos information

y VDMs are discussed in more detail in later modules

[Diagram: a Data Mover hosting two Virtual Data Movers, each containing
its own CIFS Servers]
Configuring CIFS - 5

2006 EMC Corporation. All rights reserved.

A Virtual Data Mover (VDM) is a Celerra Network Server software feature that enables you to
administratively separate CIFS servers and their associated resources, like file systems, into virtual
containers. These virtual containers allow administrative separation between groups of CIFS servers,
enable replication of CIFS environments, and allow the movement of CIFS servers from Data Mover
to Data Mover.
VDMs support CIFS servers, allowing you to place one or multiple CIFS servers into a VDM along
with their file systems. The servers residing in a VDM store their dynamic configuration information
(such as local groups, shares, security credentials, and audit logs, etc.) in a configuration file system. A
VDM can then be loaded and unloaded, moved from Data Mover to Data Mover, or even replicated to
a remote Data Mover, as an autonomous unit. The servers, their file systems, and all of the
configuration data that allows clients to access the file systems, are available in one virtual container.
The CIFS server information included in a VDM (and thus portable) includes:
Local group database for the servers in the VDM
Share database for the servers in the VDM
CIFS server configuration (compnames, interface names, etc.)
Home directory information for the servers in the VDM
Auditing and Event log information
Kerberos information for the servers in the VDM
Configuring CIFS

-5

Copyright 2006 EMC Corporation. All Rights Reserved.

CIFS File Systems Availability


y In the context of Celerra Management, availability refers
to making the CIFS Server and resources visible to
clients on the Microsoft network
y Proper operations of CIFS is dependent on services
provided by Microsoft Network environment including
Active Directory, DNS, and Kerberos
y CIFS availability typically requires:
CIFS Services must be running on the Data Mover
CIFS Server must join the Windows domain
CIFS Server visible in Active Directory
Data Mover must export the file system as a share using CIFS protocol

2006 EMC Corporation. All rights reserved.

Configuring CIFS - 6

In the context of Celerra Management, availability refers to bringing the Celerra CIFS Server and its
file systems resources into the Microsoft network.
When configuring a Celerra Data Mover for CIFS, there are three aspects under what we refer to as
availability:
y The Data Mover requires certain prerequisites in order to successfully join the domain
y The actual act of joining the domain, causing the CIFS server to appear in the Active Directory
y The file systems on the Data Mover must be exported specifying the CIFS protocol and a share
name
The CIFS protocol must also be manually started. NFS is the default file sharing protocol for EMC
Celerra and is started automatically.
The steps taken to make a CIFS Server and its file systems appear on the network are the same
regardless of whether file system access is CIFS-only or also supports NFS clients.
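For reference, and as a hedged example only (the share and path names are illustrative), exporting a mounted file system for CIFS access uses the same server_export command used for NFS:
$ server_export server_2 -Protocol cifs -name marketing /marketing_fs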

Configuring CIFS

-6

Copyright 2006 EMC Corporation. All Rights Reserved.

Active Directory and Celerra CIFS Server


y For clients to access the
CIFS Services on the Data
Mover, the CIFS Server
running on the Data
Mover must appear in the
Active Directory
EMC Celerra AD container
(default)
Computers sub-container
(default)

y Appears after successfully joining the Domain

2006 EMC Corporation. All rights reserved.

Configuring CIFS - 7

Active Directory lists resources and services available in the Microsoft network.
One of the primary goals in configuring CIFS is for Active Directory to hold the account for the Data
Mover's CIFS server(s). A successful configuration results in an EMC Celerra container, holding
the CIFS Server account, being created inside Active Directory Users and Computers. Alternatively,
the Data Mover's CIFS server account can be placed in another AD container, if so desired.
In Windows 2000 Native Mode, there are no longer Primary and Backup Domain Controllers; there are
just Domain Controllers. All DCs hold a writeable version of the directory. Windows 2000 uses LDAP
(Lightweight Directory Access Protocol) to facilitate Active Directory. Active directory
communicates with all of the Domain Controllers in the Active Directory domain.

Configuring CIFS

-7

Copyright 2006 EMC Corporation. All Rights Reserved.

Domain Name Service (DNS )


y DNS provides Name resolution
IP address to hostname
Hostname to IP address
Eliminates the need to maintain local hosts files for each DM

y DNS must be configured before joining the Data Mover's CIFS server
to a W2K/3 domain
y By default, the Data Mover issues secure Dynamic DNS
updates to the DNS server for the DNS domain it joins
DDNS permits hosts to register themselves
CIFS Server registers its services

y WINS may no longer be required but may also be configured
2006 EMC Corporation. All rights reserved.

Configuring CIFS - 8

Within a network environment there is a need to map hostnames to IP addresses and IP addresses to
hostnames. This could be done using local hosts files on each Data Mover, but that is much more
difficult to manage. DNS is a network service that provides this address resolution. When a Data
Mover is joining a W2K/3 domain, it must be configured to use DNS.
Windows 2000 features Dynamic DNS. DDNS permits hosts to register themselves with DDNS. In the
case of Celerra File Server, when correctly configured, the Data Mover should automatically register
its entries in both the forward and reverse lookup zones in the DDNS data base.
Windows 2000 is completely dependent on the functionality of DDNS. It is by means of DDNS that all
servers and services communicate across the Windows 2000 enterprise. DDNS is said to live in
Active Directory. Yet, Active Directory cannot function without DDNS.
Windows 2000 Native Mode no longer requires WINS for any name resolution. However, it is possible
that it could be present to provide backward compatibility to some older clients.
Note:
WINS provides NetBios to IP address (and IP address to NetBios) name resolution.
DNS provides host name to IP address (and IP address to host name) name resolution.

Configuring CIFS

-8

Copyright 2006 EMC Corporation. All Rights Reserved.

Configuring DNS Server


y Windows 2000 DNS server should be configured to

Allow dynamic updates


y Select Properties
Forward zone of root domain
Reverse zone of DM's subnet

2006 EMC Corporation. All rights reserved.

Configuring CIFS - 9

In order for the Data Mover to be able to update DNS with its IP information, Allow Dynamic
Updates must be set to Yes on both the Forward and Reverse Lookup Zones.
Part of the preparation required to configure CIFS for the Windows 2000 environment is to verify that
Dynamic DNS is configured.
If DDNS is not supported in the environment, you must manually update the DNS server.

Configuring CIFS

-9

Copyright 2006 EMC Corporation. All Rights Reserved.

Configuring Data Mover for DNS

y Use the following command syntax to configure a Data Mover for DNS:
server_dns server_x <Domain_Name> <IP_of_DNS_Server>

y Example:
server_dns server_2 corp.hmarine.com 192.168.64.15

2006 EMC Corporation. All rights reserved.

Configuring CIFS - 10

Command:
server_dns server_x <DM_DNS_suffix> <IP_of_DNS_server>
Example:
server_dns server_2 corp.hmarine.com 192.168.64.15
Remember that the domain name entered is not necessarily the root domain name, but rather the
domain or sub-domain in which the Data Mover will be located (the Data Mover's DNS suffix).
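To confirm the configuration, running server_dns with only the Data Mover name typically lists the configured DNS domain(s) and server addresses (a hedged example; output varies by system):
$ server_dns server_2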

Configuring CIFS

- 10

Copyright 2006 EMC Corporation. All Rights Reserved.

Specify Data Movers DNS Suffix


server_dns server_2 corp.hmarine.com 192.168.64.15
[Diagram: the root domain hmarine.com contains the sub-domain
corp.hmarine.com, where the Data Mover resides]

2006 EMC Corporation. All rights reserved.

Configuring CIFS - 11

When entering the server_dns command, specify the DNS suffix of the Data Mover, not the root
domain name or the domain where the DNS server resides. For example, Hurricane Marine has two
domains, the root domain is hmarine.com, the sub-domain is corp.hmarine.com.
You will be placing the Data Mover in the corp Windows 2000 Active Directory domain. Therefore,
your DNS configuration should coincide, and the domain indicated in your server_dns command
will be corp.hmarine.com.

Configuring CIFS

- 11

Copyright 2006 EMC Corporation. All Rights Reserved.

Authentication
y Data Mover must authenticate user requests
y In Windows 2000/2003 environment, Kerberos V5 is used
Data Mover uses Kerberos Keys to confirm client credentials within
Kerberos realm
DM configured for Windows Kerberos realm during the Join process.

Kerberos is time sensitive
Synchronize date & time between the Kerberos KDC and the Data Mover using a
Network Time Protocol (NTP) server

y In Windows NT environments, NTLM may be used for authentication
Authentication is done on the Domain Controller and the Data Mover
accepts the client's credentials
NTLM co-exists providing backward compatibility
2006 EMC Corporation. All rights reserved.

Configuring CIFS - 12

Authentication is proving you are who you say you are. Windows NT uses NTLM (NT LAN
Manager) for user authentication, whereby a user would logon to the NT domain providing a username
and password. NT would then issue the user an Access Token identifying the user to any necessary
resources. The Access Token would remain until the user logged off of the system.
Windows 2000 employs Kerberos V5 technology providing a more secure, more complex logon
process. Unlike the NTLM token, the Kerberos ticket is time sensitive. Therefore, it is key to a
successful configuration that the date and time of the Data Mover be synchronized with the Windows
2000 domain.
Kerberos is a network authentication protocol that was designed at the Massachusetts Institute of
Technology (MIT) in the 1980s to provide proof of identity on a network. The Kerberos protocol uses
strong cryptography so that a client can prove its identity to a server (and vice versa) across an insecure
network connection. After a client and server have used Kerberos to prove their identity, they can also
encrypt all of their communications to assure privacy and data integrity as they go about their business.
Rather than Kerberos' usual password-hash based secret key, Microsoft chose to add its own
extensions, which makes its implementation of Kerberos slightly nonstandard, but still allows for
authentication with other networks that use Kerberos 5.
DART uses the UDP protocol by default, but will switch to TCP when triggered by the proper status
message from the KDC (Key Distribution Center). The use of TCP allows for a larger ticket size.
This process is completely transparent to the user.

Configuring CIFS

- 12

Copyright 2006 EMC Corporation. All Rights Reserved.

Time Synchronization
y The DM may be configured to use an NTP (Network Time Protocol) or
SNTP (Simple Network Time Protocol) client to synchronize its clock
with that of an NTP server.
SNTP/NTP enables all DMs to synchronize from a single clock source,
keeping timestamps on files and directories accurate
Data Mover time must be maintained to within 5 minutes of the Kerberos
server
SNTP/NTP must be configured when joining the DM to a W2K/3 domain

y Set the date and approximate time:
server_date server_2 0203111426

y Configure the Data Mover for NTP:
server_date server_2 timesvc start ntp -i 01:30 192.168.64.20

y Verify NTP functionality:
server_date server_2 timesvc stats ntp
2006 EMC Corporation. All rights reserved.

Configuring CIFS - 13

Since Kerberos is time sensitive, it is necessary for the Data Mover to be synchronized with the date
and time of the KDC (Key Distribution Center). To do this, first set the date and time of the Data
Mover as close as possible to the appropriate values using the server_date command.
In order to maintain this synchronized state, start the Network Time Protocol on the Data Mover to
synchronize with an appropriate time source.
To verify NTP functionality (look for hits and poll hits):
server_date server_2 timesvc stats ntp
Time synchronization statistics since start:
hits=1,misses=0,first poll hits=1,miss=0
Last offset:0 secs,-3000usecs
Time sync hosts:
0 1 10.127.50.162
0 2 10.127.50.161
Command succeeded: timesync action=stats

Configuring CIFS

- 13

Copyright 2006 EMC Corporation. All Rights Reserved.

Time Synchronization
y Data Movers > server_2

2006 EMC Corporation. All rights reserved.

Configuring CIFS - 14

This slide shows how to configure an NTP server using Celerra Manager.

Configuring CIFS

- 14

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating CIFS Servers


y Create a CIFS Server by declaring a compname
When a CIFS Server joins the Windows 2000 Domain, it is identified by the
compname
Many CIFS Servers may be defined
Specify an interface for each CIFS Server
Used for client access
If no interface is specified, it is the default CIFS Server and uses all unused
interfaces

y Command syntax:
server_cifs server_x -add
compname=<DM_name>,domain=<domain_FQDN>,
interface=<IF_name>

y Example:
server_cifs server_x -add
compname=cel1dm2,domain=corp.hmarine.com,
interface=cge0-1
2006 EMC Corporation. All rights reserved.

Configuring CIFS - 15

Once you have configured your Data Mover to interoperate in a Windows 2000 environment, you must
declare a computer name for the CIFS server being added, as well as the fully qualified domain name
of the Windows domain that you will join. You do this using the server_cifs command.
server_cifs server_x -a compname=<computer_name>,
domain=<fqdn_domain_name>,netbios=<netbios_name>,
interface=<if_name>
where:
compname=<computer_name> is the computer name of the CIFS server you wish to add.
domain=<fqdn_domain_name> is the Full Qualified Domain Name of the Windows domain
you will join.
netbios=<netbios_name> specifies a NetBIOS name if different from the compname.
interface=<if_name> specifies an interface to be used
The maximum number of CIFS servers that can be defined per Data Mover is 512.

Configuring CIFS

- 15

Copyright 2006 EMC Corporation. All Rights Reserved.

Add Network Interfaces to a CIFS server


y To add multiple network interfaces to a CIFS server
repeat the server_cifs -add command for each interface
$ server_cifs server_2 -add
compname=cel1dm2,domain=corp.hmarine.com,
interface=cge0-1
$ server_cifs server_2 -a
compname=cel1dm2,domain=corp.hmarine.com,
interface=cge0-2
y IMPORTANT: Deleting a network interface requires
updating its associated CIFS server

2006 EMC Corporation. All rights reserved.

Configuring CIFS - 16

If a CIFS server is to use multiple network interfaces, repeat the previous command for each interface
to be added.
If the interface= flag is omitted entirely, then the CIFS server takes all unused interfaces. This is
referred to as the default CIFS server.
Note: Since CIFS servers are linked to network interfaces, the deletion (or modification) of a network
interface will require updating the appropriate CIFS server

Configuring CIFS

- 16

Copyright 2006 EMC Corporation. All Rights Reserved.

Assign Aliases to a CIFS Server (Optional)


y CIFS servers have a computer name and a NetBIOS
name (usually the same name)
Either name can have multiple aliases
A CIFS server and its aliases share:
Local groups
Shares
Domain computer account

y To add aliases to a CIFS server repeat the server_cifs -add command for each alias
$ server_cifs server_2 -a
compname=cel1dm2,domain=corp.hmarine.com,
alias=diamond

2006 EMC Corporation. All rights reserved.

Configuring CIFS - 17

Data Mover's CIFS server has a compname (for Windows 2000/2003) and a NetBIOS name (for
backward compatibility). The compname and NetBIOS name are the same by default, though they can
be configured to be different. Aliases provide multiple, alternate identities for a given NetBIOS name.
Because NetBIOS aliases act as secondary NetBIOS names, the aliases share the same set of local
groups and shares as the primary NetBIOS name. Aliases can be added to an existing server or created
when creating a new server. For aliases, you do not need to create accounts in the domain. Aliases
must be unique both:
y Across a Windows domain for WINS registration and broadcast announcements
y On the same Data Mover to avoid WINS name conflicts
Adding NetBIOS names
In contrast, one could add extra NetBIOS names to a CIFS server. These machine names will be
associated with different local groups. For example, if one were to consolidate two old servers with
same or different NetBIOS names from two different domains with different local groups, additional
NetBIOS machine names could be used with NetBIOS aliases. Each NetBIOS name should have an
account in the domain, and the join process would have to be completed for each name as well.
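For illustration only, a hedged sketch of adding an extra NetBIOS name follows. The netbios= keyword is the server_cifs -add form used for NT-style machine names; the server name, domain, and interface shown are placeholders, not values from this course:
$ server_cifs server_2 -add netbios=oldserver1,domain=corp.hmarine.com,interface=cge0-2
The additional NetBIOS name would then need its own account and join in the domain, as noted above.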

Configuring CIFS

- 17

Copyright 2006 EMC Corporation. All Rights Reserved.

Joining the Domain


y Joining the Domain registers the CIFS Server in DNS
and adds the server attributes in the Active Directory
y Command Syntax:
server_cifs server_x -Join
compname=<DM_name>,domain=<domain_FQDN>,
admin=<admin_name>

y Example:
server_cifs server_2 -J
compname=cel1dm2,domain=corp.hmarine.com,
admin=administrator
Server_2: Enter Password *******

Configuring CIFS - 18

2006 EMC Corporation. All rights reserved.

To join the Data Mover to the Windows domain:


server_cifs server_x -Join compname=<computername>,
domain=<fqdn_domainname>,admin=<admin_name>
where: admin=<admin_name> specifies the logon name of a user with administrative rights in the
Windows domain Forest.
You are prompted for the administrators password. The Administrator account and password are used
to create the account in the Active Directory and are not stored after adding the machine account.
To remove (unjoin) a Data Mover from the Windows 2000 domain:
server_cifs server_2 -Unjoin
compname=cel1dm2,domain=corp.hmarine.com,admin=administrator
Password: *****

Configuring CIFS

- 18

Copyright 2006 EMC Corporation. All Rights Reserved.

Specifying the Organizational Unit


y The default Organizational Unit (OU)
EMC Celerra
Computers

y To configure a specific Organizational Unit


server_cifs server_2 -J compname=cel1dm2,
domain=corp.hmarine.com,admin=administrator,
ou="ou=File Servers"
2006 EMC Corporation. All rights reserved.

Configuring CIFS - 19

The Organizational Unit identifies the hierarchy in which the Data Mover resides in Active Directory. The
default organizational unit (OU) for a Data Mover's CIFS server is ou=Computers,ou=EMC Celerra.
To specify a different organizational unit (new or existing) use the ou= option when joining the
domain.
Example:
To configure server_2 to join the compname cel1dm2 to the domain corp.hmarine.com in the File
Servers Organizational Unit:
server_cifs server_2 -J
compname=cel1dm2,domain=corp.hmarine.com,admin=administrator,
ou="ou=File Servers"

Configuring CIFS

- 19

Copyright 2006 EMC Corporation. All Rights Reserved.

Adding the Computer and Joining the Domain


y CIFS > CIFS Servers tab > New

Configuring CIFS - 20

2006 EMC Corporation. All rights reserved.

Click on CIFS in the tree hierarchy > click on the CIFS servers tab > click new.
Join the Domain and specify the Organizational Units.

Configuring CIFS

- 20

Copyright 2006 EMC Corporation. All Rights Reserved.

Starting, Stopping, and Deleting CIFS


y CIFS Services is not started by default on the Data Mover
y Command syntax:
$ server_setup <movername> -Protocol cifs
-option start

y Example:
$ server_setup server_2 -P cifs -o start
y Stop and restart after changes
Usermapper
WINS
Security mode
2006 EMC Corporation. All rights reserved.

Configuring CIFS - 21

After completing configuration of CIFS, the protocol must be started using the server_setup command
to activate the CIFS protocol for each Data Mover.
CIFS must be stopped and restarted for any changes in the configuration to take effect. Such change
could include but not be limited to:
y Adding/changing the External Usermapper address
y Adding/changing the address of the WINS server
y Changing the security mode
The server_setup command would also be used to remove all CIFS configurations, which is often
useful in troubleshooting CIFS.
Command syntax:
To start the CIFS protocol, type the following command:
server_setup server_2 -P cifs -option start
To stop the CIFS protocol, type the following command:
server_setup server_2 -P cifs -option stop
To delete CIFS configurations, type the following command:
server_setup server_2 -P cifs -option delete
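As a quick check after starting the service, running server_cifs with no options against the Data Mover reports the service state and thread count (the abbreviated output below is consistent with the verification slide later in this module):
$ server_cifs server_2
server_2 :
CIFS service started
96 Cifs threads started
...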

Configuring CIFS

- 21

Copyright 2006 EMC Corporation. All Rights Reserved.

Starting CIFS

2006 EMC Corporation. All rights reserved.

Configuring CIFS - 22

Select CIFS from the tree hierarchy > click on the Configuration tab > click in the CIFS Service
Started box

Configuring CIFS

- 22

Copyright 2006 EMC Corporation. All Rights Reserved.

Sharing a File System for CIFS


y Exporting a CIFS file system makes it available on the
network
Same as with NFS except:
Specify CIFS protocol when exporting
Provide a share name

Same file system may be exported once for NFS and again for CIFS

y Command syntax:
server_export server_x -Protocol cifs -name
<share_name> <path_name>

y Example:
server_export server_2 -P cifs -name data /mp2

2006 EMC Corporation. All rights reserved.

Configuring CIFS - 23

The share name is the name that the file system will be displayed as on the network. It does not have
to be the same name as the mountpoint and it can be hidden.
File systems can be mounted as either:
Read/Write: When a file system is mounted read/write (default) on a Data Mover, only that Data
Mover can access the file system. Other Data Movers cannot mount the file system.
Read-Only: When a file system is mounted read-only on a Data Mover, clients cannot write to the file
system regardless of the export permissions. A file system can be mounted read-only on several Data
Movers concurrently, as long as no Data Mover has mounted the file system as read/write.
You can export the path as a global share (accessible from all CIFS servers on the Data Mover), or as a
local share (accessible from a single CIFS server). To create a local share, you must use the netbios=
option of the server_export command to specify from which CIFS server the share is accessible. If you
do not use the netbios= option, shares created with server_export are globally accessible by all CIFS
servers on the Data Mover.

Configuring CIFS

- 23

Copyright 2006 EMC Corporation. All Rights Reserved.

Sharing a File System for CIFS


y CIFS > New

Configuring CIFS - 24

2006 EMC Corporation. All rights reserved.

This slide shows how to export a file system for CIFS using Celerra Manager.

Configuring CIFS

- 24

Copyright 2006 EMC Corporation. All Rights Reserved.

Global Shares Versus Local (Netbios) Shares


y When a share is created it can be exported as:
Global share (accessible from all CIFS servers on a DM)
Local share (reserved for access from a single CIFS server)

y For a local share you must use the netbios= option of the
server_export command
y Command:
server_export server_x -P cifs -name <sharename>
-o netbios=<netbios_name> <path_name>

2006 EMC Corporation. All rights reserved.

Configuring CIFS - 25

When you create a share, you can export the path as a global share (accessible from all CIFS servers on
the Data Mover), or as a local share (accessible from a single CIFS server and all of its aliases). To
create a local share, you must use the netbios= option of the server_export command to specify from
which CIFS server the share is accessible. If you do not use the netbios= option, shares created with
server_export are globally accessible by all CIFS servers on the Data Mover.
Normally, shares created through Windows administrative tools are local shares and only accessible
from the CIFS server used by the Windows client. However, the cifs srvmgr.globalShares parameter
lets you change this behavior so that shares created through Server Manager or MMC are global
shares.
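If shares created through the Windows administrative tools should be global instead, the parameter named above could presumably be set with server_param. This is a hedged sketch only (the facility, parameter name split, and value of 1 are assumptions based on the "cifs srvmgr.globalShares" wording above; verify against the Celerra parameters guide before use):
$ server_param server_2 -facility cifs -modify srvmgr.globalShares -value 1
A stop/start of the CIFS service may be required for a parameter change of this kind to take effect.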

Configuring CIFS

- 25

Copyright 2006 EMC Corporation. All Rights Reserved.

Sharing a File System Directory


y The root of the file system contains .etc and lost+found
y Best Practice is to export a sub-directory rather than the
root of the file system
Hides lost+found and .etc
Microsoft network users are unaware of the purpose of lost+found
and .etc

y Example:
server_export server_2 -P cifs -name sharedata
/mp2/subdir

2006 EMC Corporation. All rights reserved.

Configuring CIFS - 26

It is also possible to export a directory within a file system rather than the file system itself. A UxFS file
system contains .etc and lost+found at its root. These are key to the function and integrity of the file
system. Few Microsoft users will know to stay away from these objects. Exporting a directory
instead of the file system root effectively hides these two objects from the Microsoft network.

Configuring CIFS

- 26

Copyright 2006 EMC Corporation. All Rights Reserved.

Hidden or Administrative Shares


y By appending a $ to the end of the share name when
exporting a file system, it is not directly visible to the end user
Not displayed in Network Neighborhood
May be accessed via UNC path

y Example:
server_export server_2 -P cifs -name data$ /mp2

2006 EMC Corporation. All rights reserved.

Configuring CIFS - 27

It is possible to hide a CIFS-exported file system. Hidden shares can be directly accessed using the
correct UNC path, but they are not displayed in Network Neighborhood (usually for security reasons).
Note: If using Celerra Manager, append a $ to the CIFS share name field when exporting the file
system.

Configuring CIFS

- 27

Copyright 2006 EMC Corporation. All Rights Reserved.

Unjoining a Windows Domain


y Removes entries in:
Active Directory
DDNS

y Command:
$ server_cifs mover_name -Unjoin compname=name,
domain=domain_FQDN,admin=name

y Example:
$ server_cifs server_2 -U compname=cel1dm2,
domain=corp.hmarine.com,admin=administrator
server_2 : Enter password:********

2006 EMC Corporation. All rights reserved.

Configuring CIFS - 28

Before deleting a CIFS server, or changing the domain to which it is joined, the CIFS server should be
unjoined from the Windows domain. Unjoining the domain removes the server account from Active
Directory and removes the associated entries from DNS (if Dynamic DNS is being employed). As in
joining a domain, you will be required to provide an administrator name and password to perform this
task.
If you delete a CIFS server without first unjoining the domain, a ghost of the CIFS server account will
remain in Active Directory.

Configuring CIFS

- 28

Copyright 2006 EMC Corporation. All Rights Reserved.

Deleting a CIFS Server


y Command:
$ server_cifs mover_name -delete compname=name

y Example:
$ server_cifs server_2 -d compname=hugo

2006 EMC Corporation. All rights reserved.

Configuring CIFS - 29

Deleting a CIFS server from a Data Mover removes all associated aliases as well. All network
interfaces used by the CIFS server are freed.

Configuring CIFS

- 29

Copyright 2006 EMC Corporation. All Rights Reserved.

Clearing a Data Mover's CIFS Configuration


y Command:
$ server_setup mover_name -Protocol cifs -option stop
$ server_setup mover_name -Protocol cifs -option delete

y Example:
y $ server_setup server_2 -P cifs -o stop
y $ server_setup server_2 -P cifs -o delete

y WARNING:
Deleting an entire CIFS configuration also clears the
CIFS configurations for all VDMs on that physical DM

2006 EMC Corporation. All rights reserved.

Configuring CIFS - 30

Clearing a Data Mover's CIFS configuration


You can clear a Data Mover's entire CIFS configuration and return it to the original state. Before
clearing the configuration, the CIFS service must be stopped on the Data Mover.

Configuring CIFS

- 30

Copyright 2006 EMC Corporation. All Rights Reserved.

Stand-alone Servers
y In an enterprise environment, the CIFS servers will most
likely be integrated into a Windows 2000/2003 Domain
y Stand-alone CIFS servers are a low-overhead alternative
for small environments
Do not require external components, such as a domain controller,
NIS server, or Usermapper
Allow users to log in through local user accounts stored on the Data
Mover
May use the security=NT option to provide full NT authentication for
users logging into the servers and ACL checking to authorize user
access to storage objects

y Mount and export the network share as with a regular CIFS server
2006 EMC Corporation. All rights reserved.

Configuring CIFS - 31

A stand-alone CIFS server on a Data Mover is the equivalent of a workgroup server in a Microsoft
environment.
A stand-alone CIFS server provides these advantages over a CIFS server with SHARE authentication:
y Unicode support.
y Full NT authentication for users logging onto the server. In addition, if you desire to provide very
simple user access, you can enable the default Guest account and assign limited rights and
privileges to the Guest account.
y ACL checking to restrict user access.
y Support for NT commands.
y Access to files larger than 2 GB.
y Improved performance for Write operations.

Configuring CIFS

- 31

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating a Stand-alone Server (CLI)


Command:
server_cifs <movername> -add standalone=<netbios_name>,
workgroup=<workgroup_name>,interface=<if_name>,
local_users

Example:
server_cifs server_2 -add standalone=dm2cge01,
workgroup=EngLab,interface=cge0-1,local_users

Configuring CIFS - 32

2006 EMC Corporation. All rights reserved.

To create a stand-alone CIFS server, use this command syntax:


server_cifs <movername> -add standalone=<netbios_name>,
workgroup=<workgroup_name>[,alias=<alias_name>...]
[,hidden={y|n}] [[,interface=<if_name>
[,wins=<ip>[:<ip>]]...][,local_users][-comment <comment>]
Where:
<movername> = name of the specified Data Mover or Virtual Data Mover (VDM).
<netbios_name> = NetBIOS name for the CIFS server. The NetBIOS name is limited to 15
characters and cannot begin with an @ (at sign) or - (dash) character. The name also cannot
include white space, tab characters, or the following symbols:
/\:;,=*+|[]?<>"
Each <netbios_name> within a Celerra Network Server must be unique.
<workgroup_name> = name of the Windows workgroup. The <workgroup_name> is used for
announcements and WINS registration.
<alias_name> = WINS alias for the NetBIOS name. The assigned <alias_name> must be
unique on the Data Mover.
hidden={y|n} By default, the NetBIOS name is displayed in Windows Explorer. If hidden=y is
specified, the NetBIOS name does not appear.
<if_name> = IP interface for the CIFS server.
<ip> = Associates up to two WINS IP addresses with each interface.
local_users Enables local users support that lets you create a limited number of local user
accounts on the CIFS server.

Configuring CIFS

- 32

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating a Stand-alone Server

1) Select the CIFS folder in the left pane of the browser.
2) Click on the CIFS Servers tab in the right pane.
3) Click on the New button at the bottom of the right pane.
2006 EMC Corporation. All rights reserved.

Configuring CIFS - 33

To create a Stand-alone server using the Celerra Manager:


Select CIFS in the left pane of the browser window,
Select the CIFS Servers tab in the right pane.
Click on the New button at the bottom of the right pane.

Configuring CIFS

- 33

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating a Stand-alone Server

4) Select the Data Mover from the menu
5) Click on the Standalone button
6) Fill in the NetBIOS name of the server
7) Specify the server's Workgroup name
8) Enter and confirm the Admin Password
9) Select Interfaces to be used with the server
10) Click on the OK button

Configuring CIFS - 34

2006 EMC Corporation. All rights reserved.

(Continued from previous page)


When the right pane refreshes, pick the appropriate Data Mover from the pull down menu,
Click on the radio button next to the Standalone option under Server Type,
Fill in the NetBIOS name field with the unique server name,
Optionally, fill in the Aliases field with any aliases that will be used in conjunction with the NetBIOS
name of your Stand-alone server.
Specify the name of the Stand-alone servers Workgroup.
Fill in the Set Local Admin Password and Confirm Admin Password fields on the following two
lines.
Select the checkbox associated with the Interfaces you wish to associate with the Stand-alone server.
Click on the OK button to complete the process.
The screen should provide you with visual confirmation that the Stand-alone server was created.

Configuring CIFS

- 34

Copyright 2006 EMC Corporation. All Rights Reserved.

Change Password of Local Administration


y Must change the password on the newly created stand-alone CIFS server
y Select Ctrl+Alt+Delete
y Select: Change Password
User: Administrator
Log on to: 10.127.56.144
Old password: Password
New password: nasadmin
2006 EMC Corporation. All rights reserved.

Configuring CIFS - 35

Configuring CIFS

- 35

Copyright 2006 EMC Corporation. All Rights Reserved.

Map Network Drive


y On Client, map network drive as you normally would

\\10.127.56.144\sashare
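For reference, the same mapping can be done from a Windows command prompt; this is a generic Windows example (the drive letter is arbitrary), not a Celerra-specific command, and you may be prompted for the stand-alone server's local account credentials:
C:\> net use Z: \\10.127.56.144\sashare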

2006 EMC Corporation. All rights reserved.

Configuring CIFS - 36

Configuring CIFS

- 36

Copyright 2006 EMC Corporation. All Rights Reserved.

CIFS Configuration Verification


server_cifs server_4
server_4 :

CIFS service started

96 Cifs threads started


Security mode = NT
Unicode enabled
Max protocol = NT1
I18N mode = UNICODE
Home Directory Shares DISABLED
Usermapper enabled
Usermapper auto broadcast enabled
Internal Usermapper DM2
...
Usermapper[0]=[192.168.1.2]state:active port:14640(auto discovered)
Usermapper[1]=[192.168.2.2]state:active(auto discovered)
Enabled interfaces: (All interfaces are enabled)
...
DOMAIN DMAIN1 FQDN=dmain1.com SITE=Default-First-Site-Name RC=6
SID=S-1-5-15-f959d8b9-782fbaf2-11992684-ffffffff
>DC=AQUAMAN(172.24.84.201) ref=4 time=1 ms (Closest Site)
DC**SMBSERVER(169.254.190.165) ref=2 time=0 ms
Domain Controller
DC**SMBSERVER(169.254.222.120)
ref=2 time=0 ms
DC*TORCH(172.24.84.213) ref=2 time=0 ms (Closest Site)
CIFS server

CIFS Server MARKETING[DMAIN1] RC=2


Full computer name=marketing.dmain1.com realm=DMAIN1.COM
Comment='EMC-SNAS:T5.4.7.0'
if=fsn-84 l=172.24.84.84 b=172.24.84.255 mac=0:60:cf:20:9b:5b
FQDN=marketing.dmain1.com (Updated to DNS)

DNS updated

2006 EMC Corporation. All rights reserved.

Configuring CIFS - 37

Configuring CIFS

- 37

Copyright 2006 EMC Corporation. All Rights Reserved.

CIFS Configuration Verification - server_log


server_log server_2
2004-11-05 15:44:08: LGDB: 4: SID for MARKETING: S-1-5-15-32434d45-3c21-d40f8a24-ffffffff
2004-11-05 15:44:08: SMB: 4: CIFS Server MARKETING[] created (0)
2004-11-05 15:44:08: SMB: 4: Full computer name marketing.dmain1.com, Realm DMAIN1.COM
2004-11-05 15:44:08: ADMIN: 4: Command succeeded: cifs add compname=MARKETING domain=DMAIN1.COM interface=fsn-84
.
.
.
2004-11-05 15:52:18: SMB: 4: DomainJoin::addAdminToLocalGroup: User administrator added to Local Group
2004-11-05 15:52:18: ADMIN: 4: Command succeeded: domjoin compname=marketing.dmain1.com domain=DMAIN1.COM admin=administrator password=**************** ou="ou=Computers,ou=EMC Celerra" init

2006 EMC Corporation. All rights reserved.

Configuring CIFS - 38

Configuring CIFS

- 38

Copyright 2006 EMC Corporation. All Rights Reserved.

Troubleshooting - Domain Join fails


server_log server_2
2004-11-05 15:46:13: SMB: 3: CIFS is not running. Start the CIFS service with
'server_setup <data_mover> -P cifs -o start'
2004-11-05 15:46:13: ADMIN: 3: Command failed: domjoin
compname=marketing.dmain1.com domain=DMAIN1.COM admin=administrator
password=23233D193D37252D ou="ou=Computers,ou=EMC Celerra" init

y CIFS Services not running


y Solution: Start CIFS service
$ server_setup server_2 -P cifs -o start

2006 EMC Corporation. All rights reserved.

Configuring CIFS - 39

Configuring CIFS

- 39

Copyright 2006 EMC Corporation. All Rights Reserved.

Troubleshooting - UNICODE is not Configured


server_cifs server_2 -a compname=dm2-ana12,domain=native.win2k,interface=ana12
server_2 :
Error 4020: server_2 : failed to complete command

$ server_log server_2
--------- skipped ------------
2002-06-11 08:22:35: SMB: 3: Cifs error: Compname require I18N on or Ascii filtering
2002-06-11 08:22:35: ADMIN: 3: Command failed: cifs add compname=DM2-ANA12 domain=NATIVE.WIN2K interface=ana12
--------- skipped -------------

y Solution: Enable UNICODE


$ /nas/sbin/uc_config -on -mover server_2

2006 EMC Corporation. All rights reserved.

Configuring CIFS - 40

Configuring CIFS

- 40

Copyright 2006 EMC Corporation. All Rights Reserved.

Troubleshooting - Time Synchronization Issue


server_cifs server_2 -a compname=dm2-ana12,domain=native.win2k,interface=ana12
Error 4020: server_2 : failed to complete command
$ server_log server_2
--------- skipped ------------
2002-06-10 10:10:10: KERBEROS: 3: krb5_sendto_kdc: unable to send message to any KDC in realm NATIVE.WIN2K.
2002-06-10 10:10:10: KERBEROS: 4: krb5_gss_release_cred: kg_delete_cred_id failed: major GSS_S_CALL_BAD_STRUCTURE or GSS_S_NO_CRED, minor G_VALIDATE_FAILED
2002-06-10 10:10:10: KERBEROS: 3: DomainJoin::getAdminCreds: gss_acquire_cred_ext failed: Miscellaneous failure Clock skew too great
2002-06-10 10:10:10: ADMIN: 3: Command failed: domjoin compname=dm2-ana12 domain=native.win2k admin=admin2 password=#=7%- init

y Data Mover not configured for NTP, or the Data Mover and the
Kerberos servers are configured for different time sources
y Solution: Correctly configure NTP on the Data Mover (see the example below)
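A hedged example of pointing the Data Mover at an NTP time source with server_date timesvc; the IP address is a placeholder, and the exact subcommands should be verified against the server_date man page for your NAS code level:
$ server_date server_2 timesvc start ntp 10.127.50.162
$ server_date server_2 timesvc stats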
2006 EMC Corporation. All rights reserved.

Configuring CIFS - 41

Configuring CIFS

- 41

Copyright 2006 EMC Corporation. All Rights Reserved.

Troubleshooting Join Failure


server_cifs server_2 -J compname=dm2-ana12,domain=native.
win2k,admin=admin2
server_2 : Enter Password:*****
Error 4020: server_2 : failed to complete command
$ server_log server_2 | tail
---------- skipped --------------2002-06-11 08:34:18: KERBEROS: 3: DomainJoin::getAdminCreds:
gss_acquire_cred_ext failed: Miscellaneous failure
Preauthentication failed
---------- skipped ---------------

y Wrong Password
Password is case sensitive
2006 EMC Corporation. All rights reserved.

Configuring CIFS - 42

Configuring CIFS

- 42

Copyright 2006 EMC Corporation. All Rights Reserved.

Troubleshooting Join Failure


$ server_cifs server_2 -a compname=dm2-ana12,domain=native.
win2k,interface=ana12
server_2 :
Error 4020: server_2 : failed to complete command

$ server_log server_2
--------- skipped ------------
2002-06-11 08:37:10: ADMIN: 4: Command succeeded: cifs add compname=DM2-ANA12 domain=NATIVE.WIN2K interface=ana12
2002-06-11 08:37:25: SMB: 3: DomainJoin::doDomjoin: Computer account 'dm2-ana12' already exists.
2002-06-11 08:37:25: ADMIN: 3: Command failed: domjoin compname=dm2-ana12 domain=native.win2k admin=admin2 password=#=7%- init

y Account already existed in Active Directory


y Solution: Rerun the join command with the -o reuse flag (see the example below)
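A hedged example of the rejoin, reusing the names from the failed attempt above; this assumes the reuse flag is passed as the -option (abbreviated -o) argument to server_cifs -Join:
$ server_cifs server_2 -J compname=dm2-ana12,domain=native.win2k,admin=admin2 -option reuse
server_2 : Enter Password:*****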
2006 EMC Corporation. All rights reserved.

Configuring CIFS - 43

Configuring CIFS

- 43

Copyright 2006 EMC Corporation. All Rights Reserved.

Troubleshooting Join Failure


$ server_cifs server_2 -J compname=dm2-ana12,domain=native.
win2k,admin=admin2
server_2 : Enter Password:*****
Error 4020: server_2 : failed to complete command
$ server_log server_2 | tail
---------- skipped --------------2002-06-11 08:25:00: SMB: 3: DomainJoin::findServer: Unable to
find Domain Controller for domain native.win2k. command option
'server' should be specified.
2002-06-11 08:25:00: SMB: 3: DomainJoin::findServer: Unable to
connect to any Domain controller for native.win2k domain
2002-06-11 08:25:00: ADMIN: 3: Command failed: domjoin
compname=dm2-ana12 domain=native.win2k admin=admin2
password=#=7%- init
---------- skipped ---------------

y DNS is not configured on the Data Mover
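The fix is to configure a DNS resolver on the Data Mover so it can locate the Domain Controllers, then retry the join. A hedged example with server_dns (the domain name and DNS server address are placeholders):
$ server_dns server_2 native.win2k 172.24.84.201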


2006 EMC Corporation. All rights reserved.

Configuring CIFS - 44

Configuring CIFS

- 44

Copyright 2006 EMC Corporation. All Rights Reserved.

CIFS Threads
y Threads are considered a pool of compute resources
Each request from the client:
Obtains a thread from the pool
Executes
Returns the thread to the pool

Requests are not executed when no threads are available in the pool.

The impact is visible if:
Many requests arrive simultaneously.
Some threads require a long time to process a command.

y The Data Mover CIFS configuration has 256 threads
Default and recommended number of threads used with the CIFS
protocol
3 threads are reserved for Virus Checker
NFS and CIFS do not share any threads
The server_setup command can modify the number of CIFS threads (see the example below)
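A hedged example of starting CIFS with a non-default thread count; this assumes the count can be supplied on the start option as start=<n>, which should be verified against the server_setup man page for your NAS code level:
$ server_setup server_2 -P cifs -o start=256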
2006 EMC Corporation. All rights reserved.

Configuring CIFS - 45

Configuring CIFS

- 45

Copyright 2006 EMC Corporation. All Rights Reserved.

Module Summary
Key points covered in this module are:
y Windows 2000 uses Kerberos, by default, to
authenticate Windows users
y Kerberos is time-sensitive and requires that the Data
Mover and Windows Domain Controller be in sync for a
successful Data Mover join to occur
y Dynamic DNS permits hosts to register their own IP
address with the DNS server
y When joining a Data Mover to the domain, the default
organizational unit is ou=Computers,ou=EMC Celerra
y A share name is used when exporting a file system for
CIFS users to access
y server_log is the first place to look when troubleshooting!
2006 EMC Corporation. All rights reserved.

Configuring CIFS - 46

The key points for this module are shown here. Please take a moment to read them.

Configuring CIFS

- 46

Copyright 2006 EMC Corporation. All Rights Reserved.

Closing Slide

2006 EMC Corporation. All rights reserved.

Configuring CIFS - 47

Configuring CIFS

- 47

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra ICON
Celerra Training for Engineering

Managing Permissions in a CIFS Environment

2006 EMC Corporation. All rights reserved.

Managing Permissions in a CIFS-only Environment

-1

Copyright 2006 EMC Corporation. All Rights Reserved.

Revision History
Rev Number   Course Date     Revisions
1.0          February 2006   Complete
1.2          May 2006        Updates and enhancements

2006 EMC Corporation. All rights reserved.

Managing Permissions in a CIFS-only Environment - 2

Managing Permissions in a CIFS-only Environment

-2

Copyright 2006 EMC Corporation. All Rights Reserved.

Managing Permissions in a CIFS Environment


Upon completion of this module, you will be able to:
y Define the term Permissions
y Manage permissions in a CIFS-only environment
Manage users, groups, and permissions
Create local groups on a Data Mover
Assign permissions

y Explain the differences in how permissions are managed in CIFS-only
environments and in mixed CIFS and NFS environments
y Manage permissions in a mixed environment
y Set permissions on a network drive
y From a Windows client, access the CIFS-exported share
2006 EMC Corporation. All rights reserved.

Managing Permissions in a CIFS-only Environment - 3

Permissions are also known as authorization: who has access to what. In this module we will discuss
how to manage permissions in a Windows-only environment; basically, we handle it the way we would
if the Data Mover were a Windows server. Later we will discuss how permissions are handled in a
mixed environment where NFS and CIFS users share access to the same file systems.

Managing Permissions in a CIFS-only Environment

-3

Copyright 2006 EMC Corporation. All Rights Reserved.

Authentication and Authorization


y When a user logs in to a system they authenticate
themselves
This is also known as security: Who are you?

y Once a user has successfully authenticated themselves, authorization
determines the level of access the user has to files and directories
Referred to as permissions
Permissions schemes are different for UNIX and CIFS objects
Windows uses Access Control Lists (ACLs)
Unix uses Read/Write/Execute mode bits for Owner, Group, and others

y This module discusses permissions in both CIFS-only and mixed CIFS
and UNIX environments
2006 EMC Corporation. All rights reserved.

Managing Permissions in a CIFS-only Environment - 4

Permissions are different than security. In this course, security relates to proving that users are who
they say they are, generally by providing a username and password. Permission deals with the actions a
given user can or cannot perform with the object associated with those permissions. While security
usually revolves around accessing a system, permissions are associated with a specific object, such as a
file or directory.

Managing Permissions in a CIFS-only Environment

-4

Copyright 2006 EMC Corporation. All Rights Reserved.

Managing Permissions for CIFS Objects


y Permissions for CIFS file system objects are managed
through NTFS
Access Control Lists

y Managing permissions:
Create global and or local groups
Add users to groups
Assigning permissions to Groups (or Users)
Access Control List (ACL)

Careful planning is required!

y Permissions on file system objects on a Celerra are managed in the
same manner as on a Microsoft Server
2006 EMC Corporation. All rights reserved.

Managing Permissions in a CIFS-only Environment - 5

User: A single account in a Microsoft Windows environment. For example, a user named Etta Place.
Global Group: Groups of users within a particular Windows domain. Global Groups are usually
modeled after users' job functions (e.g., Propulsion Engineers, Eastcoast Sales, and Managers). Global
Groups can contain users from that domain.
Local Group: Local Groups are created/managed on individual servers. Local Groups bring together
those that will need similar permissions to certain objects. Local Groups are modeled after different
kinds of activity (for example, Printer Operators, Backup Operators, Server Operators) and can contain
Users and/or Global Groups from the same Windows domain and/or trusted domains.
Domain Local Groups: With Windows 2000, Domain Local Groups serve the same purpose as Local Groups but
are stored in Active Directory and are accessible across the domain.
Managing permissions for CIFS involves the following steps:
y Creating Local and/or Global Groups
y Adding users to the Groups
y Configuring Access Control Lists for file system objects to assign various levels of permissions to
groups or users
y Careful planning is required
Managing permissions on file system objects on a Celerra is no different than the way you would do it
on a Microsoft Server.

Managing Permissions in a CIFS-only Environment

-5

Copyright 2006 EMC Corporation. All Rights Reserved.

Users, Groups, and Permissions


Users
Global
Groups
ACL

File System
Objects

Local
Groups

2006 EMC Corporation. All rights reserved.

Managing Permissions in a CIFS-only Environment - 6

There are many ways to organize users and groups, one way is to use a hierarchical model for
managing users, groups, and permissions:
y Users in a domain are placed in Global Groups of that domain.
y Domain Global Groups are placed in Local Groups on the local servers.
y Permissions (ACLs) to objects on the server are assigned to the Local Groups.
Note: Permission can also be assigned to a user, but this is not the recommended practice.

Managing Permissions in a CIFS-only Environment

-6

Copyright 2006 EMC Corporation. All Rights Reserved.

Managing Users and Groups on the Celerra


y Use Computer Management Console
Connect to another computer
y Create Local Groups on the Data Mover
Add Global Groups to Local Groups
Using the Computer Management console
y Domain users, Global Groups, and Domain Local Groups
exist in Active Directory
Manage using Active Directory Users and Computers
y Local Groups are created on the CIFS Server
Using the Computer Management Console
2006 EMC Corporation. All rights reserved.


Managing Permissions in a CIFS-only Environment - 7

To manage users and groups for CIFS on a Data Mover, you will need to use the Computer
Management Console (from Administrative Tools) and the connect to another computer function.
The Windows domain(s) will contain users, Global Groups, and Domain Local Groups. Local Groups,
however, exist on the Celerras Data Mover. These Data Mover Local Groups must be created from
Windows using the Computer Management console.

Managing Permissions in a CIFS-only Environment

-7

Copyright 2006 EMC Corporation. All Rights Reserved.

Connecting Computer Management Console


to CIFS Server
y Right-click Computer Management
Select Connect to another computer
Locate & Select Data Movers CIFS
server
Click OK

2006 EMC Corporation. All rights reserved.

Managing Permissions in a CIFS-only Environment - 8

In order to create a Local Group on the Data Mover, you will need to connect the Computer
Management console to the Data Mover using Connect to another computer function. The following
pages will illustrate both connecting Computer Management to the Data Mover, and the creation of
Local Groups on the Data Mover. (e.g. \\cel1dm2)
To connect the Computer Management console to a Data Mover:
y Logon to a Windows as a domain administrator
y Click Start Programs Administrative Tools Computer Management
y Right-click on Computer Management (Local) Connect to another computer
y In the Select Computer dialog box, locate and select the name of the Data Movers CIFS server.
y Click OK.
This should connect the Computer Management console to the Data Mover. To verify, check the top
level of the Tree window pane for the computer name of the Data Movers CIFS server (e.g .
Computer Management (CEL1DM2.CORP.HMARINE.COM) )

Managing Permissions in a CIFS-only Environment

-8

Copyright 2006 EMC Corporation. All Rights Reserved.

Adding Local Group to CIFS Server

2006 EMC Corporation. All rights reserved.

Managing Permissions in a CIFS-only Environment - 9

To add a Local Group to a Data Mover:


y With the Computer Management console connected to the Data Movers CIFS server, expand
System Tools Local Users and Groups right-click on Groups from the menu select New
Group.
y In the New Group dialog box, type the name for the new Local Group in the Group Name window.
y Add Global Groups (and/or users) from the Windows domain(s) by clicking the Add button.
y In the Select Users and Groups dialog box, choose the appropriate domain from the Look in dropdown menu. (i.e. corp.hmarine.com)
y Select the desired users and groups and click Create when finished.
Note: You can also choose to employ the built-in Local Groups.

Managing Permissions in a CIFS-only Environment

-9

Copyright 2006 EMC Corporation. All Rights Reserved.

Assigning Permissions to CIFS Objects


y Two Levels:
File level permissions
Set at object level
Extensive permission options

Share level permissions


Set on the share path
Effective on the entire contents of the share
Limited permission options
Full Control
Change
Read

y Set file level permissions using the Computer Management
console or Windows Explorer
File level permissions are under the Security tab
2006 EMC Corporation. All rights reserved.

Managing Permissions in a CIFS-only Environment - 10

Assigning permissions
Once you have created Local Groups on the Data Mover, you can assign permissions to CIFS exports
using Microsoft Windows.
File level permissions
File level permissions are set and effective at the object level (e.g. directory or file). The options for
file level permissions are more extensive than share level permissions. Some examples of file
permission options are Read, Write, List folder contents, Traverse folder, Read permissions,
Change permissions, Take ownership, etc.
Share level permissions
Share level permissions are set only at the path that was specified when creating the share. The set
permissions have effect on the entire contents of the share. The options for share permissions are
limited to Full Control, Change, and Read.
File level permissions can be managed from Windows Explorer and from the Computer Management
consoles Share properties page. From either of these locations, choose the Security tab to access file
level permissions.

Managing Permissions in a CIFS-only Environment

- 10

Copyright 2006 EMC Corporation. All Rights Reserved.

File Level Permissions Examples

Basic permissions

2006 EMC Corporation. All rights reserved.

Advanced permissions
Managing Permissions in a CIFS-only Environment - 11

To view the permissions set on a particular folder/file, right click the folder/file > properties > select
the Security tab. The Basic permissions will be displayed as shown on the left of this slide. You can
add/delete the appropriate user and define specific permissions for that user. If these permissions are
not granular enough, click Advanced and you will be presented with the Advanced permissions as
shown on the right side of this slide. More granular permissions can be set here.

Managing Permissions in a CIFS-only Environment

- 11

Copyright 2006 EMC Corporation. All Rights Reserved.

Managing Share Level Permissions


y Set share permissions via
Computer Management
console

2006 EMC Corporation. All rights reserved.

Managing Permissions in a CIFS-only Environment - 12

To use the Computer Management Console in Windows to manage the share permissions on a Data
Movers CIFS server:
y Connect the Computer Management console to the Data Movers CIFS server
y Click on System Tools Shared folders Shares to view the shares on the CIFS server
y Locate the share for which you want to manage permissions
y Use the Share Permissions tab to set the required permissions

Managing Permissions in a CIFS-only Environment

- 12

Copyright 2006 EMC Corporation. All Rights Reserved.

UNIX Security Model


y Access rights to objects are referred to as mode bits
Read (r)
Write (w)
Execute (x)

y Mode bits define permissions to:


User (owning the file)
Group (associated with the File System object)
Others (All other users)

y Example:
lrwxrwxrwx 1 kcb eng   10 Dec 9 13:42 xyz.doc -> xyz.html
-rw-r--r-- 1 kcb eng 1862 Jan 2 14:32 abc.html
drwxr-xr-x 2 kcb eng 5096 Mar 9 11:30 schedule
(mode bit groups, left to right: User, Group, Other)

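For reference, a minimal example of setting these mode bits from an NFS client, using the directory name from the listing above:
chmod 754 schedule     (Owner: rwx, Group: r-x, Other: r--)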
2006 EMC Corporation. All rights reserved.

Managing Permissions in a CIFS-only Environment - 13

Managing Permissions in a CIFS-only Environment

- 13

Copyright 2006 EMC Corporation. All Rights Reserved.

Permissions in a Mixed CIFS & Unix Environment


y CIFS ACLs characteristics
Multiple ACLs with Allow and/or Deny options for different types of
access

y NFS permissions characteristics


Only three entries
Owner / Group / Others (~ Everyone)
Allow only (there are no Deny options): Read, Write, Execute (rwx)

y Problems:
Cannot securely map security information from NFS to CIFS and vice versa.
Direct mapping between NFS permissions and CIFS ACLs opens huge
security vulnerabilities.

y Solution:
Each file and directory must have a set of NFS and CIFS permissions
NFS for owner / group / others (rwx)
CIFS ACLs made up from ACEs

In mixed environments, the actual permission checking is determined by the
file system access policy specified when the file system was mounted
2006 EMC Corporation. All rights reserved.

Managing Permissions in a CIFS-only Environment - 14

It is semantically impossible to map CIFS ACLs to NFS permissions and vice versa and maintain
security (NFS rwx vs. CIFS rwxpdo). The best solution is to maintain two sets of permissions: one
for UNIX and one for CIFS. In environments where UNIX and Windows users need access to the
same set of objects, the actual permissions are determined by the access policy specified when the file
system was mounted using the server_mount command.
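To see how a file system is currently mounted (and therefore which access policy and lock options apply), listing the mounts is usually enough. A hedged example; the exact option string in the output varies by NAS code level, so treat it as illustrative only:
$ server_mount server_2
server_2 :
fs01 on /mp1 uxfs,perm,rw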

Managing Permissions in a CIFS-only Environment

- 14

Copyright 2006 EMC Corporation. All Rights Reserved.

Access Policies
y If we maintain two sets of permissions, which permissions must
users satisfy?
Only their native method
Both CIFS and NFS

y Options:
NATIVE (default)
NT
UNIX
SECURE
MIXED
MIXED_COMPAT

y Example of how policy is set:


server_mount server_2 -o accesspolicy=UNIX fs01 /mp1

Managing Permissions in a CIFS-only Environment - 15

2006 EMC Corporation. All rights reserved.

Types of clients and permissions


When exporting a file system for both NFS and CIFS, there are two different types of clients (NFS and
Microsoft/CIFS) and two different types of permissions (UNIX permissions and CIFS ACL).
The server_mount command
The access checking policy options for the server_mount command provide flexibility to the
administrator regarding what permissions each type of client will be required to satisfy.
Access policy options
server_mount Access Policy command options:
server_mount server_2 -o accesspolicy=NATIVE fs01 /mp1
server_mount server_2 -o accesspolicy=SECURE fs01 /mp1
server_mount server_2 -o accesspolicy=NT fs01 /mp1
server_mount server_2 -o accesspolicy=UNIX fs01 /mp1
server_mount server_2 -o accesspolicy=MIXED_COMPAT fs01 /mp1

Note:

Access policy can be specified with Celerra Manager.

Managing Permissions in a CIFS-only Environment

- 15

Copyright 2006 EMC Corporation. All Rights Reserved.

Access Policy Options for server_mount


[Diagram] Four panels, one per access policy (NATIVE, NT, UNIX, SECURE). Each panel shows CIFS and NFS
clients, the CIFS and NFS permission sets, and the file system, with arrows indicating which permission
set(s) each client type must pass: NATIVE - each client type checks only its own permissions; NT - NFS
clients must also pass the CIFS permissions; UNIX - CIFS clients must also pass the UNIX permissions;
SECURE - both client types must pass both sets.

2006 EMC Corporation. All rights reserved.
Managing Permissions in a CIFS-only Environment - 16

NATIVE option
This is the default access checking policy. Under this policy CIFS users will be checked by the
permission defined in the CIFS ACL for a given object. The CIFS users ignore any permission setting
defined for UNIX. Similarly, NFS users must satisfy the UNIX permissions, ignoring any CIFS
permissions.
NT option
When the NT access checking policy is in place, CIFS users will only be required to pass the CIFS
permissions. The NFS users, however, will be required to pass both NFS and CIFS permissions.
UNIX option
The UNIX access policy is the opposite of the NT policy. Access for NFS users will be controlled by
the UNIX permissions only. Whereas the CIFS user access will be checked by both CIFS and UNIX
permissions.
SECURE option
The SECURE policy requires both CIFS and NFS users to have their access checked by both CIFS and
UNIX permissions.
Note: The red arrows on the slide illustrate which permissions the different users must pass through
for each Access Policy. The UNIX option is highlighted because this is the Access Policy that will be
employed in the lab exercises.

Managing Permissions in a CIFS-only Environment

- 16

Copyright 2006 EMC Corporation. All Rights Reserved.

MIXED & MIXED_COMPAT Access Policy Options


y New Access policy with NAS code Version 5.4
y Access to file or directory through either NFS or CIFS is determined
by the protocol that last set its ownership or permission
Any modification of either set of permissions will destroy the other
Effectively, only one set of permissions for each file or directory, the set that
was modified last
Internally the file system still maintains both set of permissions but it uses only
the ACL

y MIXED
UNIX Mode bits -> Windows Owner, Group, & Everyone
Windows ACL -> Unix Owner, Group, & Other

y MIXED_COMPAT
UNIX Mode bits -> Owner & Everybody (Everyone = Unix Group)
Windows ACL -> Unix Group & Other
2006 EMC Corporation. All rights reserved.

Managing Permissions in a CIFS-only Environment - 17

The system synchronizes permissions on existing files and directories when changing from a non-MIXED access policy to MIXED or MIXED_COMPAT.
Reference Managing Celerra for a Multiprotocol Environment Technical Module for examples of
mapping.

Managing Permissions in a CIFS-only Environment

- 17

Copyright 2006 EMC Corporation. All Rights Reserved.

MIXED/MIXED_COMPAT: Translating ACL-to-Mode


[Table] Windows ACL permissions mapped to the UNIX mode bits (R, W, X) they translate to, shown
separately for file permissions and directory permissions. Rows include: Traverse Folder/Execute File,
List Folders/Read Data, Read Attributes, Read Extended Attributes, Create Files/Write Data,
Create Folders/Append Data, Write Attributes, Write Extended Attributes, Delete, Read Permissions,
and Delete Subfolders and Files. (The individual mappings are not reproduced here; see the referenced
Managing Celerra for a Multiprotocol Environment technical module for the full matrix.)

2006 EMC Corporation. All rights reserved.

Managing Permissions in a CIFS-only Environment - 18

The table above shows how the MIXED/MIXED_COMPAT access policy maps Windows ACL file
and directory permissions into UNIX Mode bits.

Managing Permissions in a CIFS-only Environment

- 18

Copyright 2006 EMC Corporation. All Rights Reserved.

MIXED/MIXED_COMPAT: Translating Mode-to-ACL


[Table] UNIX mode bits (R, W, X) mapped to the Windows ACL permissions they translate to. Rows include:
Traverse Folder/Execute File, List Folders/Read Data, Read Attributes, Read Extended Attributes,
Create Files/Write Data, Create Folders/Append Data, Write Attributes, Write Extended Attributes,
Delete Subfolders and Files, Delete *, Read Permissions, Change Permissions *, and Take Ownership *.
* Always set for Owner; never set for Everyone. (The individual mappings are not reproduced here; see the
referenced Managing Celerra for a Multiprotocol Environment technical module for the full matrix.)

2006 EMC Corporation. All rights reserved.

Managing Permissions in a CIFS-only Environment - 19

The table above shows how the MIXED/MIXED_COMPAT access policy maps UNIX Mode bits file
and directory permissions into Windows ACLs.

Managing Permissions in a CIFS-only Environment

- 19

Copyright 2006 EMC Corporation. All Rights Reserved.

MIXED versus MIXED_COMPAT

Mapping group and other Mode bits

                   MIXED                    MIXED_COMPAT
group Mode bit     User's Primary Group     Everyone Group
other Mode bit     Everyone Group           Ignored

2006 EMC Corporation. All rights reserved.

Managing Permissions in a CIFS-only Environment - 20

MIXED and MIXED_COMPAT are very similar. The MIXED_COMPAT policy was designed for
compatibility with the methods used by other vendors. The key difference between these two access
checking policies involved how each one maps the UNIX Mode group and other entities.
The MIXED policy maps the group Mode bit to the Windows users Primary group. In Windows the
Primary group assignment is not mandatory. Therefore, if this policy is used, it is important that the
Primary group for Windows users is being assigned. The UNIX Mode bit other is mapped to the
Windows Everyone group.
The MIXED_COMPAT policy maps the group Mode bit to the Windows Everyone group. The
UNIX Mode bit other is ignored.

Managing Permissions in a CIFS-only Environment

- 20

Copyright 2006 EMC Corporation. All Rights Reserved.

Module Summary
y Permission: the level of access a user has to an object (what actions the
user can or cannot perform)
CIFS-only environments deal with permissions as a Windows Server would
NFS users deal with permissions in the way that is native to UNIX

y The Celerra is capable of exporting a file system to both CIFS and
NFS clients simultaneously
Not typical, but possible
Plan ALL data security & integrity elements BEFORE deployment

y Native is the default access checking policy


Under the native access policy, CIFS users will be checked with CIFS
permissions defined on the object, and NFS users will be checked by UNIX
permissions

y Local Groups can be created/managed on individual CIFS Servers


y server_mount and server_export commands have options to
build tight security for data stored on the DM file systems
2006 EMC Corporation. All rights reserved.

Managing Permissions in a CIFS-only Environment - 21

Managing Permissions in a CIFS-only Environment

- 21

Copyright 2006 EMC Corporation. All Rights Reserved.

Closing Slide

2006 EMC Corporation. All rights reserved.

Managing Permissions in a CIFS-only Environment - 22

Managing Permissions in a CIFS-only Environment

- 22

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra ICON
Celerra Training for Engineering

CIFS Considerations & Features

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

-1

Copyright 2006 EMC Corporation. All Rights Reserved.

Revision History
Rev Number   Course Date     Revisions
1.0          February 2006   Complete
1.2          March 2006      Update and enhance diagrams

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features - 2

Copyright 2006 EMC Corporation. All Rights Reserved.

Objectives
y Describe some of the considerations when Windows and
UNIX clients access the same objects
y Describe some of the CIFS features supported on the
Celerra and how to set them up and manage
Celerra Antivirus Agent (CAVA)
Home Directory support
Distributed File System (DFS)
Group Policy Objects

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

CIFS Considerations & Features

-3

Copyright 2006 EMC Corporation. All Rights Reserved.

CIFS/NFS Co-Existence Considerations


y Because UNIX and Windows file objects follow different
conventions, there are a number of issues when
integrating mixed UNIX and Windows environments
Discussed previously:
Mapping User Credentials
Mapping UNIX and Windows file object permissions

Other considerations:
File Locking options
File naming conventions
Handling of Unix Symbolic Links
File attributes

y Issues only apply when clients access the same objects from both
UNIX and Windows
2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

CIFS/NFS co-existence issues


EMC's Celerra Network Server is capable of exporting a file system to both NFS and CIFS clients
simultaneously.
y Access Checking Policy: When a file system is exported to both NFS and CIFS users, determine
by which permission we want the users to be regulated.
y File Locking: When a user employs an application that supports file locking, determine what we
want other users to be able to do when a file is locked.
y Opportunistic Lock: Determine if there are circumstances that may prohibit the use of
opportunistic locks.
y Other Issues such as links and naming conventions.

CIFS Considerations & Features

-4

Copyright 2006 EMC Corporation. All Rights Reserved.

File and Directory Locking


y NFS locks are advisory
Seldom used in practice

y CIFS has oplocks, standard locks, and Deny Modes
Standard lock and Deny Mode use depends upon the application

y Problems exist because CIFS Deny Modes do not equate to NFS locks
An NFS client can obtain an NFS lock on a file that has a Deny Mode set
A CIFS client could ask for (and obtain) a Deny Read-Write mode on a file that has
an NFS lock set, without asking for a lock, and gain access

y Solution:
The Data Mover maps CIFS Deny Modes into NFS locks and NFS locks into CIFS Deny
Modes
A Deny Read-Write mode request translates to an NFS exclusive read-write lock
An NFS shared read lock is translated into a CIFS Deny Write mode

y Interaction between NFS locking and CIFS standard locks is determined
by arguments to the server_mount command

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

CIFS Considerations & Features

-5

Copyright 2006 EMC Corporation. All Rights Reserved.

CIFS File and Directory Locking with NFS


y If either an NFS or CIFS lock is taken out, no other locks will be
granted
CIFS clients respect only CIFS locks and can ignore NFS locks and read or
write to the file

y One of three lock modes can be configured for mapping NFS to CIFS locks
nolock - (default)
NFS locks remain advisory - an NFS client can ignore the lock and read or write
to the file

wlock
NFS reads will still be allowed, but writes will not

rwlock
NFS reads and writes will be denied if a read lock applies

y Example:
server_mount server_2 -o rwlock fs01 /mp1
2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

Lock policies
Some applications support the ability to lock files that are in use. CIFS enforces strict file locking and sharing
semantics to mediate access from multiple clients to the same file. While NFS locks are advisory, CIFS locks are
mandatory. When a user locks a file, CIFS does not allow any other user access to the file. NFS locking rules are
cooperative, so that a client is allowed to access a file locked by another client if it does not use the lock
procedure. Each file system can have its own lock policy.
File locking options for the server_mount command
There are three types of file locking options:
No locking(Default, least secure): server_mount server_2 -o nolock fs01 /mp1
Even if a file is locked by CIFS, NFS client reads and writes are allowed. NFS clients can read and write files
locked by CIFS unless the NFS client attempts to lock the file. NFS clients see behavior that is consistent with
NFS semantics. Since NFS locks are advisory, even if an NFS client locks a file, reads and writes from CIFS
clients are allowed to that file regardless of the locking policy. However, if a CIFS client requests a lock on a
file already locked by NFS, the request is denied regardless of the lock policy.
Read only locking (Write lock): server_mount server_2 -o wlock fs01 /mp1
If a file is locked by CIFS, all NFS writes are denied. NFS clients can read files.
Read/write locking: server_mount server_2 -o rwlock fs01 /mp1
If a file is locked by CIFS, all NFS reads and writes are denied. When a file system is used by NFS clients, this
policy guarantees data coherency from both clients.

CIFS Considerations & Features

-6

Copyright 2006 EMC Corporation. All Rights Reserved.

CIFS Opportunistic Lock Options


y Opportunistic Locks (oplocks) are another locking protocol
used by CIFS
y Allows local caching on the client system to enhance
application performance
y NFS clients are not granted a lock if an oplock is in place
but can ignore oplocks
y Enabled by default
Off via the nooplock option
Typically turned off with database applications

Example: server_mount server_2 -o nooplock fs01 /mp1

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

Opportunistic locks
Opportunistic locks, also known as Oplocks, are locks placed by a client on a file residing on a
server. In most cases, a client requests an oplock so it can cache data locally, thus reducing network
traffic and improving apparent response time. Reference: MSDN.Microsoft.com
Opportunistic locks are configured per file system and are on by default. Unless your organization is
using a database application that recommends that oplocks be turned off, or if you are handling critical
data and cannot afford the slightest data loss, you can leave oplocks on.
Turning on oplocks
To turn on oplocks, specify oplock in the mount options. For example, type the following command
server_mount server_2 -o oplock fs01 /mp1
Turning off oplocks
To turn off oplocks, specify nooplock in the mount options. For example, type the following
command:
server_mount server_2 -o nooplock fs01 /mp1
Opportunistic Locks can only be configured using the CLI.

CIFS Considerations & Features

-7

Copyright 2006 EMC Corporation. All Rights Reserved.

UNIX File System Symbolic Link Issues


y CIFS clients can follow link
y DOS attributes apply to link, not target
y Deleting link from CIFS
Deleting link deletes file
Deleting directory link deletes entire directory and contents

y Up links and links defined by absolute paths are not
supported by default
Enabled by setting parameters (see the sketch below):
shadow followabsolutpath
shadow followdotdot
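A hedged sketch of enabling these with server_param, assuming the facility/parameter split shown above (shadow facility; followabsolutpath and followdotdot parameters) and a value of 1 to enable; verify against the Celerra parameters guide before use:
$ server_param server_2 -facility shadow -modify followabsolutpath -value 1
$ server_param server_2 -facility shadow -modify followdotdot -value 1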

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

Symbolic links
Symbolic links are files created by UNIX users that point to another file or directory. CIFS clients are able to
follow symbolic links because they behave in a similar fashion as a Microsoft Windows shortcut.
DOS attributes
The DOS attributes apply to the symbolic link, not the target file or directory.
Deleting links
If a symbolic link refers to a directory, and a CIFS user attempts to delete the link, the link and all of the files and
directories in the directory to which the link referred are deleted. Microsoft clients are unaware of symbolic links
and interpret a delete operation as an attempt to delete the directory and all of its contents.
When a lock is set on a symbolic link, the lock is held by the target file.
Supporting up links
Additionally, the Celerra Network Server does not by default support either symbolic links that contain full path
names (/dir1/dir3/foo) or symbolic links that refer up from the directory in which you encounter the symbolic
link (../foo). However, a symbolic link such as dir2/dir3/foo is supported.
File System Linking
NAS v5.3 supports CIFS file system linking. File system linking allows CIFS client access to several file
systems from a single share.

CIFS Considerations & Features

-8

Copyright 2006 EMC Corporation. All Rights Reserved.

File Naming Conventions


y Case-preserving
y Not case-sensitive
y CIFS appends a ~n suffix to the name

NFS creates:    CIFS sees:
Filename        Filename
filename        filename~1
Filename~1      Filename~1~1

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

UNIX file naming conventions


UNIX is case sensitive which means that an NFS user could create the following files in a single
directory (because they would be three separate files):
Filename
filename
Filename~1
CIFS file naming conventions
CIFS is not case sensitive, but is case preserving. Therefore, if a CIFS user were to view the directory
containing these files, they would see a suffix added to the conflicting file names.
NFS file naming conventions
When the NFS user saves the first file as Filename, the CIFS user will view it as Filename. When the
NFS user then saves the second file as filename, the CIFS user will see it displayed as filename~1.
Then, if the NFS user saves a file called Filename~1, the CIFS user would see Filename~1~1.

CIFS Considerations & Features

-9

Copyright 2006 EMC Corporation. All Rights Reserved.

DOS Attributes
y Object attributes for UNIX and Windows objects are not
identical
For example: Creation Dates apply to Windows objects

y Celerra maintains both CIFS and UNIX file attributes


y Use caution when performing backup of a CIFS file using
NFS as DOS attributes may be lost
y server_archive command preserves both NFS and
DOS object attributes

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

Attributes for CIFS clients


CIFS clients require some attribute information not supported by NFS clients. For example, the
creation date for a file, which is an attribute for CIFS clients, is unknown by NFS clients. The Celerra
File Server supports CIFS attributes and the creation date for a file.
Backing up CIFS attributes
When backing up data using the server_archive command, use the -J option to back up the
CIFS attributes. Reference Using the Celerra server_archive Utility, Technical Module

CIFS Considerations & Features

- 10

Copyright 2006 EMC Corporation. All Rights Reserved.

CIFS Features
y One of the design goals of the Celerra is to seamlessly
integrate into a Windows environment
Emulates Windows Server functionality while providing high
performance and availability

y The file serving features you find on a Windows Server


are also supported on the Celerra Data Mover
CIFS Auditing
Home Directory support
Integration with third party AntiVirus software
Celerra AntiVirus Agent (CAVA)

Microsoft Group Policy Object (GPO) support


File Extension Filtering
Distributed File System (DFS)
2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

Again, one of our goals with the Celerra file system is to provide all the functionality of a Windows
Server while providing high availability and performance. To do this, Celerra must support similar
features. Above is a list of some of these features that we support. We will only be covering a subset
of these.

CIFS Considerations & Features

- 11

Copyright 2006 EMC Corporation. All Rights Reserved.

Lesson 2: Home Directory Support


Upon completion of this lesson, you will be able to:
y Describe how the Home Directory feature works
y Configure Home Directory support
Creating Home Directory map file
Creating users Home Directory
Editing user profiles

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

CIFS Considerations & Features

- 12

Copyright 2006 EMC Corporation. All Rights Reserved.

Home Directory Feature


y Simplifies administering and connecting personal shares
Provides each user with their own share via a mapped drive

y Prerequisites
CIFS must be configured and started
User/Group mapping must be functioning properly (e.g. Usermapper)

y Restrictions
NT security only
Share name HOME is reserved and cannot be used in whole or in
part in any other share name, directory, mountpoint, or file system

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

Home Directory feature


The home directory feature enables you to associate users to a directory stored on a server that is regarded as the user's
home directory. This directory would commonly be the default location for saving new files. So, when Etta Place creates a
new engineering document, the default location to save the file would no longer be My Documents, but rather her personal
directory on the Data Mover.
Mapping to a home directory
Additionally, the user could have a mapped network drive associated to their own home directory. When this feature is
deployed, Etta Place, for example, could logon to a Microsoft Workstation and her H: drive would connect directly to her
directory on a server. If Sarah Emm then logs on to the same workstation, the H: drive would connect to her directory.
Prerequisite
To enable the home directory feature for a Data Mover, you must have configured and started CIFS on the Data Mover.
User/Group ID mapping (e.g. Usermapper, NTMigrate) must also be functioning properly.
Restrictions
A special share name, HOME, is reserved for the home directory feature. Because of this limitation, the following
restrictions apply:
y The home directory feature is not available on CIFS servers configured with SHARE level security.
y Not available for UNIX security.
y If you have created a share called HOME, you cannot enable the home directory feature.
y If you have enabled the home directory feature, you cannot create a share called HOME.
The home directory feature simplifies the administration of personal shares and the process of connecting to them as well as
backing them up.

CIFS Considerations & Features

- 13

Copyright 2006 EMC Corporation. All Rights Reserved.

Benefits of Home Directory Feature


y Ease of administration
Data Mover provides a single share name for all users home
directories
Users set it as their own private share
Regular expressions allow extremely powerful and flexible mapping
Multiple users can be mapped with just a single database entry

y Scalability
Accommodates a single user or up to thousands of users
Can be spread over multiple file systems

y Integration with Microsoft Windows


Management is performed through snap-in to Microsoft Management
Console (MMC)
CIFS login information used to search map database
2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

In v5.4, the Home directory feature provides support for extended regular expressions (ERE) for the
/.etc/homedir configuration file. This allows users to dramatically decrease the configuration file size
in each server. It also provides a more flexible mapping of the users. A user or a group of users having
common characteristics in their names and domain names can be mapped with a single line. During
parsing, more than one line may be matched, but only the latest line matched is used for mapping. The
following special characters found anywhere outside bracket expressions are supported: ^ . [ $ ( ) | * +
?{ \.
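As a hypothetical illustration of an ERE-based entry (the domain, pattern, and path below are invented for this example),
a single line such as:
corp:[a-k].*:/userdata1
could map every corp user whose name begins with a through k to their own directory under /userdata1, replacing what
would otherwise be many individual entries.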
The benefits to the customer from these enhancements include:
Ease of administration
y Has a single share name for all users home directories on a given data mover or virtual data mover
y Multiple users can be mapped with just a single database entry
y Regular expressions allow extremely powerful and flexible mapping
Scalability
y Accommodates a single user or up to thousands of users
y Users within a single domain can be spread over multiple file systems
Integration with Microsoft Windows
y Management is performed through snap-in to Microsoft Management Console (MMC)
y CIFS login information used to search map database

CIFS Considerations & Features

- 14

Copyright 2006 EMC Corporation. All Rights Reserved.

Home Directory Procedure


[Flowchart: Config map file > Enable Homedir > Export Adm. Share > Create user dirs > Edit user profiles]

1. Create the map file
2. Enable Home directory support
3. Export an administrative share
4. Create home directories
5. Edit users' profiles

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

To enable the home directory feature for a Data Mover, you must have created the CIFS service and
then complete the following steps:
1. Create the map file (/.etc/homedir on the Data Mover)
2. Enable home directories on the Data Mover
Note: The home directory feature is disabled by default
3. Export an administrative share for creation of users directories
4. Create the users home directories
5. Add home directories to users profiles

CIFS Considerations & Features

- 15

Copyright 2006 EMC Corporation. All Rights Reserved.

Create the homedir map file



y Format of homedir file


domain:username:/path
Wildcards are allowed

y homedir file located in /.etc on DM


Use server_file


y Best Practice is to create and edit homedir map file


using Celerra Management MMC plugin
Validates entries

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

To configure EMC Celerra for home directory support, the administrator must create each users directory, and create a
map file that contains a mapping of each domain user to the home directory location on the Data Mover. The map file is
/.etc/homedir (the file does not exist by default) and is a series of text lines in the following format:
domain:username:/path
The following examples are methods of configuring the homedir file.
Example 1: Specify all variables
y corp:eplace:/userdata1
y corp:semm:/userdata1
y hmarine:administrator:/userdata2
y Result: each user is mapped to the path specified.
Example 2: Specify domain and path, and use a wildcard to define users
y corp:*:/userdata1
y hmarine:*:/userdata2
y Result: All users from corp will be mapped to their own directory in /userdata1. All users from hmarine will be
mapped to their own directory in /userdata2.
Example 3: Specify the paths only, and use a wildcard to define domains and users
y *:*:/userdata
y Result: All users from all domains will be mapped to their own directory in /userdata.
After the homedir file is created, FTP it to the Data Mover /.etc directory using the server_file command
Example:
server_file server_2 -put homedir homedir
NOTE: Optionally, the homedir file can be created/edited from Windows 2000 using the Celerra Management MMC snapin.

CIFS Considerations & Features

- 16

Copyright 2006 EMC Corporation. All Rights Reserved.

Enable Home Directory on Data Mover



y Enable the home directory feature on DM


server_cifs server_2 -option homedir


y Verify enabled/disabled homedir status


server_cifs server_2
server_2 :
32 Cifs threads started
Security mode = NT
Max protocol = NT1
I18N mode = UNICODE
Home Directory Shares ENABLED,
map=/.etc/homedir

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

Enable Home Directories on the Data Mover


After you create the map file, you must enable home support on the Data Mover by typing:
server_cifs server_2 -option homedir
To verify if the homedir option is currently enabled or disabled on a Data Mover use the server_cifs
command:
server_cifs server_2
server_2 :
32 Cifs threads started
Security mode = NT
Max protocol = NT1
I18N mode = UNICODE
Home Directory Shares ENABLED, map=/.etc/homedir

CIFS Considerations & Features

- 17

Copyright 2006 EMC Corporation. All Rights Reserved.

Export an Administrative Share



y Share used for creating and managing users' directories
y Can be unexported when not in use.
y Can be exported as hidden share
y Example:


server_export server_2 -P cifs -n user$ /userdata

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

Export an Administrative Share


Export the path for the home directories as a share for the administration of the users' directories. This
share will be used for administrative functions such as creation of each user's directory, and setting
permissions (if desired).
This path can be exported as a hidden share to prohibit users from browsing to it. It can also be
unexported when not needed for administrative functions.
Example:
To export /userdata on server_2 as a hidden share named user$, type:
server_export server_2 -P cifs -n user$ /userdata

CIFS Considerations & Features

- 18

Copyright 2006 EMC Corporation. All Rights Reserved.

Create Each User's Home Directory



y Create each user's directory individually
Connect to administrative share as domain admin
Create each directory to match user's username

y Optionally configure permissions
Remove Everyone group from ACL
Assign Full Control to user and administrator

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

Create Each User's Home Directory

Unlike a Windows file server, which will automatically create the user's home directory the first time
the user logs on, home directories on a Celerra must be manually created by the administrator.
To create each user's home directory, log onto Windows as the Domain administrator and connect to
the administrative share using the UNC path.
Example:
Start > Run > type \\cel1dm2\user$
After connecting to the administrative share, create a directory for each user. The name of each user's
directory must match exactly the user's username in the Windows domain.
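For example, once the administrative share is reachable over UNC, the directories can be created from a Windows
command prompt (the share and user names here follow the hypothetical examples used earlier in this module):
md \\cel1dm2\user$\eplace
md \\cel1dm2\user$\semm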
Optional Permissions Configuration
By default the Everyone group will have full control of the directories created. Optionally, the
administrator may choose to remove the Everyone group and assign only the individual user and the
domain administrator full control of the user's home directory.

CIFS Considerations & Features

- 19

Copyright 2006 EMC Corporation. All Rights Reserved.

Edit Users' Profiles to Add Home Directory Path



y From a Windows Domain Controller


Active Directory Users and Computers


Users properties
Profile tab


y From any Windows host use the net user command


net user username /domain /homedir:path

y Example:
net user esele /domain /homedir:\\cel1dm2\HOME

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

Each user must have the path to their home directory added to their profile.
There are two methods to do this:
1. Log onto a Windows Domain Controller as the domain Administrator and use Active Directory
Users and Computers. Open each users properties page and select the Profile tab to enter the
path for the home directory.
2. Log on to any Windows client as the domain Administrator and use the net user command as
follows:
net user username /domain /homedir:path
Example:
To edit Ellen Seles profile by adding the path to the HOME share on Data Mover cel1dm2 type:
net user esele /domain /homedir:\\cel1dm2\HOME

CIFS Considerations & Features

- 20

Copyright 2006 EMC Corporation. All Rights Reserved.

From a Windows Domain Controller


[Figure: Home Directory Feature]

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

Log in to a Windows server from a domain administrator account.


Click Start and select Programs, Administrative Tools, Active Directory Users and Computers.
Click Users to display the users in the right pane.
Right-click a user and select Properties from the shortcut menu. The users property sheet appears.
Click the Profile tab and under Home folder:
1. Select Connect.
2. Select the drive letter you want to map to the home directory.
3. Enter the following in the To field:
\\<cifs_server>\HOME
where: <cifs_server> = IP address, computer name, or NetBIOS name of the CIFS server.

CIFS Considerations & Features

- 21

Copyright 2006 EMC Corporation. All Rights Reserved.

Enabling Home Directories with MMC Snap-in

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

With Windows 2000/2003, you can enable and manage home directories through the Celerra Home
Directory Management snap-in for MMC. The required pre-conditions for this are listed on the
following slide.
Additionally, the snap-in can also manage the homedir file and the directory structure for the home
directories. To add a home directory entry, right-click on HomeDir and select Home directory entry.
Enter the name of the domain, the user name (or an * for all users from that domain), and the Path.
Alternatively, you can use the Browse button to select an existing directory, or create new directories.
Before the Celerra Management MMC Snap-in for Home Directories feature can be employed
successfully, certain preconditions must exist.
y The mounted file system for each users home directory configuration must exist on the Data
Mover
y Sufficient permission must be in place for administration and for user access
y The homedir file must exist in /.etc of the Data Mover
y Active Directory must have a UID mapped for the user (or some other user ID option can be in
place, such as Usermapper)

CIFS Considerations & Features

- 22

Copyright 2006 EMC Corporation. All Rights Reserved.

Using the Home Directory MMC Extension


y Properties Page
y Checkboxes for Boolean options
y Friendlier-than-octal umask
format

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

The browse button opens a file navigation dialog at the root of the data mover. The Browse button
below the path already exists in the current MMC snap-in. When it is clicked, we display a file browser
rooted at the path \\mover\c$ that allows the user to navigate to the desired directory and select it using
the GUI. We then populate the text box with the path that they selected. By clicking on the Modify
button, you get the Modify Umask dialog box as shown on the next slide.

CIFS Considerations & Features

- 23

Copyright 2006 EMC Corporation. All Rights Reserved.

Directory MMC Extension Modify Umask


y Modify Umask Dialog
y Choose default or override
y Checkboxes correspond to octal
bits in high-to-low order

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

Umask is used to set the default permissions for newly created files and directories; a umask takes away
permissions rather than sets them. In the example above, the write and execute boxes are checked, which
effectively takes away these rights for Group and Other.
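As a worked example, selecting the write and execute bits for Group and Other corresponds to a umask of octal 033; a
new directory that would otherwise be created with mode 777 then receives 744, and a new file that would otherwise be
created with 666 receives 644.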

CIFS Considerations & Features

- 24

Copyright 2006 EMC Corporation. All Rights Reserved.

Lesson 3: Celerra AntiVirus Agent (CAVA)


Upon completion of this lesson, you will be able to:
y List components of EMC CAVA
y Identify CAVA requirements
y Explain the viruschecker.conf file
y Describe scanning methodology
y Identify CAVA considerations

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

CIFS Considerations & Features

- 25

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra AntiVirus Agent


y DART operating system is not susceptible to viruses
y CAVA provides AntiVirus protection for CIFS file system
objects provided by Data Mover
Identifies and eliminates known viruses in client files

y Three components
AntiVirus Client software that runs on the Data Mover
CAVA Software on Windows AntiVirus server
3rd party AntiVirus engine on Windows server

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

CAVA
EMCs Celerra AntiVirus Agent (CAVA) provides an AntiVirus solution to clients of an EMC Celerra
Network Server using industry-standard (Common Internet File System) protocols, in a Microsoft
Windows 2000/2003 or Windows NT domain. CAVA uses third-party AntiVirus software (AntiVirus
engine) to identify and eliminate known viruses before they infect file(s) on the backend storage.
The Celerra File Server setup is resistant to the invasion of viruses because of its architecture. Each
Data Mover runs DART software, a real-time, embedded operating system. The Data Mover is
resistant to viruses because its APIs are not published, third parties are unable to run programs
containing a virus on a Data Mover. Although the Data Mover is resistant to viruses, if a Windows
client attempts to store an infected file on the storage system, the Windows client must be protected
against the effects of the virus should the infected file be opened.
The AntiVirus solution
The Celerra AntiVirus solution uses a combination of the Celerra File Server Data Mover, CAVA, and
a third-party AntiVirus engine. The CAVA and a third-party AV engine must be installed on a
Windows 2000/2003/NT server(s) in the domain.

CIFS Considerations & Features

- 26

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra AntiVirus Solution


Celerra
File Server

Client
1

2
3
Storage
4

Virus Checking Server

1. Client writes to a file and it is sent to the Celerra

2. Celerra sends UNC path name to the Windows Server running


anti-virus software
3. The Anti-Virus agent takes corrective action on the file
4. The file is released for access
2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

Each time the Celerra receives a file, it locks it for read access and then sends a request to the antivirus scanning server, or servers, to examine the file. The Celerra will send the UNC path name to the
Windows server to determine whether appropriate action needs to take place. The Celerra may have to
wait for verification that the file is not infected before making the file available for user access. The
Celerra anti-virus solution is made possible through the use of the EMC Celerra Anti-virus Agent
(CAVA) in a Windows NT or Windows 2000/2003 domain with CIFS access. Both the AV Engine
from an EMC partner and the Celerra Anti-virus Agent (CAVA) run on the anti-virus scanning server.
Specific triggers were setup at the DART level to signal the CAVA whenever the Celerra receives a
file, so that the UNC path name is sent to the AV scanning server.

CIFS Considerations & Features

- 27

Copyright 2006 EMC Corporation. All Rights Reserved.

Triggering a Scan
In general, CAVA scans files:
y On first read of a file since
CAVA install
Update of virus definitions

y When creating, moving, modifying a file


y When restoring files from a backup
y When renaming
Based on masks and excl in viruschecker.conf

y When administrator performs a full file system scan


server_viruschk {<movername>|ALL} -fsscan <fsname> -create
2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

CAVA maintains a table of events that trigger a scan of a file for a virus. For a complete, up-to-date list of these
events, see Using Celerra AntiVirus Agent. In general, CAVA scans in the following instances;
1. Scan on first read. CAVA will scan files for viruses the first time that a file is read subsequent to:
a) the implementation of CAVA
b) an update to virus definitions (This feature has certain configurable aspects.)
2. Creating, modifying, or moving a file.
3. When restoring files from a backup
4. Renaming a file from a non-triggerable file name to a triggerable file name, based on masks and excl
in viruschecker.conf
5. An administrator can perform a full scan of a file system using the server_viruschk fsscan
command. The administrator can query the state of the scan while it is running, and can stop the scan if
necessary.
When a new virus becomes known, AV vendors add it to their virus definitions, and then have the new definitions
implemented on the actual system. This causes a window of vulnerability in which an infected file could be
scanned and found to be clean, when, in fact, it is not. If the AV software does not scan on any reads, then a
client could later read the infected file and infect their system. CAVA allows the Celerra administrator to
address this issue. When the updated virus definition is made available by the AV vendor, the administrator
can set a particular date in the viruschecker.conf file as the access time. When a user reads a given file, this
access time is compared to the time that the file was last opened. If the access time specified in
viruschecker.conf is more recent, then the file will be scanned for known viruses.
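For example, a full scan of a hypothetical file system named ufs1 could be started from the Control Station with a
command of the form shown on the slide (confirm the exact flags in the server_viruschk man page for your release):
server_viruschk server_2 -fsscan ufs1 -create
The same command with the appropriate options can then be used to query or stop the running scan.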

CIFS Considerations & Features

- 28

Copyright 2006 EMC Corporation. All Rights Reserved.

CAVA Features
y Automatic Virus Definition Update
- CAVA is aware when 3rd Party engine has been updated

y CAVA Calculator
- Sizing tool to aid in estimating the number of CAVAs

y User Notification on Virus Detection


- Administrator can now control and enable user notification

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

Automatic Virus Definition Update


With the release of v5.4, CAVA is aware when the third-party antivirus engine has acquired a new
virus definition file and notifies the CAVA administrator. Files that were previously scanned are
automatically scanned with the updated virus definition when they are next opened, even if no
modifications were made to the file since the last scan.
CAVA Calculator
The CAVA Calculator is a sizing tool that can estimate the number of CAVAs required to provide a
user-defined level of performance in a CAVA pool, based upon user information. The tool can be run
at any time, even if there is no CAVA present. You install CAVA Calculator from the CAVA software
distribution CD. For more information about the CAVA Calculator, refer to its online help and the
Using Celerra Antivirus Agent technical module.
User Notification on Virus Detection
An administrator can specify where virus notification is sent, and upon what kind of action the
notification is sent. Notification can be sent to both the client in the form of a Windows message and to
the Control Station event log, or to only the client or the Control Station. Actions that trigger the
notification include a file being deleted, modified or quarantined. The notification text can also be
customized.

CIFS Considerations & Features

- 29

Copyright 2006 EMC Corporation. All Rights Reserved.

The viruschecker.conf File


y Holds settings/options for each Data Mover
Masks define file extensions that will be scanned
For example, to scan all .exe, .com, and .doc files
masks=*.exe:*.com:*.doc

Excl defines files or file extensions to exclude from scanning


For example, to exclude all .tmp files
excl=*.tmp

Addr defines the IP address(es) of the AV server(s)


For example, to configure two AV servers
Addr=10.127.50.161:10.127.50.162

y Check Using Celerra AntiVirus Agent for additional


options

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

The configuration file, /.etc/viruschecker.conf, defines virus checking settings and options that must be
in place for each Data Mover that will utilize virus checking. A sample of this file resides in /nas/sys
and can be copied and modified to suit particular needs. The viruschecker.conf file is created and/or
modified from the Celerra Control Station using the vi editor. Once the viruschecker.conf is completed,
it can be copied to/from the Data Movers /.etc directory using the server_file command.
Examples
server_file server_2 -get viruschecker.conf viruschecker.conf
server_file server_2 -put viruschecker.conf viruschecker.conf
Mandatory settings
masks= sets the list of file masks that need to be checked.
masks=*.EXE:*.COM:*.DOC:*.DOT:*.XL?:*.MD?
excl= sets the lists of filenames or file masks that do not need to be checked.
excl=*.TMP
addr= sets the IP addresses of the VC Servers that you wish to connect to.
addr=10.127.50.161:10.127.23.162
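Putting these settings together, a minimal viruschecker.conf might look like the following (the masks and addresses are
illustrative only and should be adapted to your environment):
masks=*.EXE:*.COM:*.DOC:*.DOT:*.XL?:*.MD?
excl=*.TMP
addr=10.127.50.161:10.127.23.162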

CIFS Considerations & Features

- 30

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra AntiVirus Management Snap-in


y Provides CAVA management via MMC

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

You can use the Celerra AntiVirus Management snap-in to manage the virus-checking parameters
(viruschecker.conf file) used with Celerra AntiVirus Agent (CAVA) and third-party AntiVirus
programs. The Celerra AntiVirus Agent and a third-party AntiVirus program must be installed on the
Windows NT/2000/2003 server.

CIFS Considerations & Features

- 31

Copyright 2006 EMC Corporation. All Rights Reserved.

Lesson 4: Distributed File System (DFS)


Upon completion of this lesson, you will be able to:
y Describe DFS with widelink support
y Explain the value of DFS

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

CIFS Considerations & Features

- 32

Copyright 2006 EMC Corporation. All Rights Reserved.

Using the Data Mover as a Standalone DFS Server


y Microsofts DFS (Distributed File System) allows
administrators to group shared folders located on
different servers into a logical DFS namespace
y A DFS namespace is a virtual view of these shared
folders shown in a directory tree structure
y By using DFS, administrators can select which shared
folders to view in the namespace, assign names to these
folders, and design the tree hierarchy in which the folders
appear

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

Microsofts DFS (Distributed File System) allows administrators to group shared folders located on
different servers into a logical DFS namespace. A DFS namespace is a virtual view of these shared
folders shown in a directory tree structure. By using DFS, administrators can select which shared
folders to view in the namespace, assign names to these folders, and design the tree hierarchy in which
the folders appear. Users can navigate through the namespace without needing to know the server
names or the actual shared folders hosting the data.
Each DFS tree structure has a root target that is the host server running the DFS service and hosting the
namespace. A DFS root contains DFS links pointing to the shared folders (a share itself and any
directory below it) on the network. These folders are called DFS targets.

CIFS Considerations & Features

- 33

Copyright 2006 EMC Corporation. All Rights Reserved.

Distributed File Systems


Client sees a single
virtual name space
(directory structure)

Client

Public
Network
Exports
DFS root

Data Mover

Data Mover

Data Mover

Data Mover

DFS root server

(or file server)

(or file server)

(or file server)

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

Above illustrates the DFS concepts. Note: while the Celerra may host the root of the DFS file system,
each leaf could be on a different Data Mover or any other file server in the environment. This provides
infinite scalability and provides a single name space to the clients.

CIFS Considerations & Features

- 34

Copyright 2006 EMC Corporation. All Rights Reserved.

DFS Root Support on Celerra


y Two Types of DFS Root Servers:
Domain DFS root server
Stores the DFS hierarchy in Active Directory

Standalone DFS root server


Stores the DFS hierarchy locally

y The Celerra Network Server provides the same


functionality as a Windows 2000 or Windows Server 2003
standalone DFS root server
y DFS support is enabled by default when starting the CIFS
service
y For more information, Go to www.Microsoft.com and
search for DFS
2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

Microsoft offers two types of DFS root servers, the domain DFS root server and the standalone DFS
root server. The domain DFS server stores the DFS hierarchy in the Active Directory. The standalone
DFS root server stores the DFS hierarchy locally and can have only one root target.
Prior to 5.4 you could not do this as root.
For a detailed description of DFS, visit the Microsoft website at http://www.microsoft.com.
Review the following before configuring a DFS root.
y You create a DFS root on a share.
y You can only establish a DFS root on a global share from a Windows Server 2003 or a Windows
XP machine.
y With a Windows 2000 server, you can create only one DFS root per CIFS server; creating a DFS
root on a global share is not allowed. You cannot manage multiple DFS roots on a CIFS server
using a Windows 2000 server.
y A DFS root on a global share can be viewed from any CIFS server on the Data Mover.
y Before removing a share on which you have established a DFS root, you must first delete the DFS
root.
After starting the CIFS service, DFS support is enabled by default.
To disable DFS functionality, set the following Windows Registry key to zero, stop the CIFS service
and then restart the CIFS service.
HKEY_LOCAL_MACHINE\SOFTWARE\EMC\DFS\Enable
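As a sketch of that procedure (the Registry path is taken from above; the server_setup options assume the standard CIFS
start/stop syntax), set the Enable value to 0 using the Windows Registry editor connected to the Data Mover, then restart
CIFS from the Control Station:
server_setup server_2 -P cifs -o stop
server_setup server_2 -P cifs -o start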

CIFS Considerations & Features

- 35

Copyright 2006 EMC Corporation. All Rights Reserved.

Configuring and Administering DFS Support


To configure a share as a DFS root, use one of the
following:
y The Microsoft MMC Distributed File System tool, which
provides a New DFS Root Wizard with comprehensive
help
y The Microsoft command-line tool called dfsutil.exe which
uses the optional flag to work with the API instead of the
Registry
We recommend using the Windows Server 2003 version since it is
capable of managing multiple DFS roots on the same server

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

The Microsoft dfscmd.exe tool enables you to administer the DFS root content (for example,
creating and deleting links). You cannot delete a DFS tree structure using this command.
The dfsutil.exe and dfscmd.exe tools are included with the Windows 2000 or Windows
Server 2003 Support Tools.

CIFS Considerations & Features

- 36

Copyright 2006 EMC Corporation. All Rights Reserved.

Resolving Absolute Symbolic Links Using DFS


y It is difficult for a Windows client to open a path to a file
system object when its path contains an absolute
symbolic link
y The Wide Links feature enables Windows clients to
resolve the path to absolute symbolic links by mapping
the UNIX mount point to the Windows server:\share\path
y Two parameters must be set
shadow followabsolutepath=1
shadow followdotdot=1

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

It is difficult for a Windows client to open a path to a file system object when its path contains an
absolute symbolic link. A Windows client asks a server to perform a function on a file system object
based on a given path. Unlike Windows, a UNIX client uses a target path relative to its mount point.
This can lead to a file system object on a remote server. For example: A UNIX client has the following
two file systems mounted:
server1:/ufs1 mounted on /first
server2:/ufs2 mounted on /second
On ufs1, there is an absolute symbolic directory link to /second/home. A UNIX client can easily access
this link from ufs1. However, since this path exists only on the UNIX client and not on the local server,
a Windows client is unable to follow this path.
The Wide Links feature enables Windows clients to resolve the path to absolute symbolic links by
mapping the UNIX mount point to the Windows server:\share\path. This mapping is done through the
Microsoft MMC Distributed File System tool.

CIFS Considerations & Features

- 37

Copyright 2006 EMC Corporation. All Rights Reserved.

Lesson 5: Microsoft Group Policy Objects


y Group Policy Objects (GPO) Overview
y Effect of GPOs on the Celerra
y GPO Operation
y GPO settings
y Resolving GPO settings
y GPO Update interval

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

CIFS Considerations & Features

- 38

Copyright 2006 EMC Corporation. All Rights Reserved.

Windows Group Policy Object Overview


What are GPOs (Group Policy Objects)?
y Microsoft concept for applying security (policy) for a set of
associated hosts or users
Supported with Windows 2000/2003/XP

y Allows for centralized management of user accounts and


security policies using MMC (Microsoft Management
Console)
y GPO policy can be set for domain, site, and/or
Organizational Unit within the Windows Domain
GPOs are cumulative (top down)

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

In Windows 2000 and 2003, group policy allows administrators to manage desktop environments by
applying configuration settings to computer and user accounts. Group policy offers the ability to
define and enforce policy settings for the following:
y Scripts including computer startup/shutdown and user logon/logoff
y Security local computer, domain, and network security settings
y Folder redirection direct and store users folders on the network
y Registry-based for the operating system, its components, and applications
y Software installation and maintenance centrally manage installation, updates and removal of
software
GPO is managed through the MMC (Microsoft Management Console). GPO policy is administratively
set and applied to the entire domain, to a site, or to an organizational unit. Any GPOs that affect a user
are applied at logon time, for example, applications, configuration, or folder redirection. Any GPOs
that affect the computer are applied at system startup time. Some examples are disk quotas, auditing,
and event logs. All policies are updated periodically, the frequency depending on how it was
configured.
GPOs are not applied individually to users or computers. GPOs can be set at multiple levels. As they
are applied at the Domain down to the organizational unit, the settings are cumulative.
GPOs are supported with Windows 2000/2003/XP.

CIFS Considerations & Features

- 39

Copyright 2006 EMC Corporation. All Rights Reserved.

Integration of GPOs with the Celerra


y Microsoft defines many group policies
Only a few GPOs affect Data Mover operation and management
Kerberos
Auditing
SMB signing
Event logs
User rights

y Implemented on Celerra using a GPO Daemon


Starts/stops/restarts with server_setup CIFS command
On startup, the GPO daemon reads the GPO cache, then retrieves the latest
settings for each joined CIFS server and updates the cache
Data Movers (DART) retrieve and maintain security settings from
GPOs for each CIFS server joined to the Windows 2000/2003
Domain
2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

Celerra Data Movers that are joined to a Windows 2000/2003 domain support and retrieve certain
GPO settings. When participating in Windows 2000/2003 domains, as a member server, the Data
Mover is affected by many Windows mechanisms including Kerberos, Auditing, SMB signing, event
logs, and user rights. A goal of the Celerra Data Mover is to participate, and act as, a Windows
member server to the domain.
EMC offers MMC snap-ins to be used by administrators to display the effective settings for the
auditing policy and user right assignment.

CIFS Considerations & Features

- 40

Copyright 2006 EMC Corporation. All Rights Reserved.

GPO Operation
y GPO cache
Settings stored in /.etc/gpo.cache
Not a user-editable file

y Query and update policy settings are managed using


server_security command
GPO configuration parameters to enable/disable

y Celerra Management snap-ins are used to display the


effective settings for Audit and User rights

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

The GPO daemon is the DART thread which controls GPO updates. There is one GPO daemon running per
Data Mover. The daemon starts/stops/restarts with the server_setup cifs start/stop command.
On GPO daemon startup, it reads in GPO cache, then retrieves the latest settings for each joined CIFS Server.
Each CIFS server may be in a different organizational unit in the domain, therefore, each can have different GPO
settings.
The latest retrieved GPO settings for each joined CIFS server are stored in the root file system under
/.etc/gpo.cache. This is not a user editable configuration file.
Cached settings are read in when the GPO daemon starts up. The GPO settings are available as soon as possible,
so there is no need to wait for setting retrieval. Settings are available even if the Domain Controller cannot be
reached.
The server_security Celerra CLI command can be used to query or update the security policy settings on
a Data Mover. Using this command, the administrator can force an update of a security policy setting, or query
security policy settings.
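For illustration, the commands take roughly the following form (the exact options should be verified against the
server_security man page for your release):
server_security server_2 -info -policy gpo
server_security server_2 -update -policy gpo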
You can use CIFS parameters to enable or disable GPO, GPO cache, and GPO log messages.
Before NAS 5.2, the GPO settings were automatically refreshed by the Data Mover every 90 minutes. The Data
Mover now uses the update interval as defined by the Windows Domain.
If the GPO refresh policy is disabled at the Domain level, the Celerra Administrator must issue the
server_security command to manually refresh the GPO policy settings. If no refresh policy is defined at
the Domain level, the Data Mover will use an update interval of 90 minutes.

CIFS Considerations & Features

- 41

Copyright 2006 EMC Corporation. All Rights Reserved.

Module Summary
y Carefully consider application locking requirements in a mixed
environment
y Other mixed environment considerations include Symbolic links,
upper & lower case filenames, and file attributes
y Celerra home directory support allows a user to have a default file
save location on the Celerra
y Celerra AntiVirus Agent (CAVA) provides an AntiVirus solution for
Celerra Windows clients
CAVA uses third-party AntiVirus software (AntiVirus engine) to identify and
eliminate known viruses before they infect files(s) on the back-end storage

y DFS allows multiple folders, located on different servers to be


viewed as one logical namespace

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

CIFS Considerations & Features

- 42

Copyright 2006 EMC Corporation. All Rights Reserved.

Closing Slide

2006 EMC Corporation. All rights reserved.

CIFS Considerations & Features

CIFS Considerations & Features

- 43

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra ICON
Celerra Training for Engineering

Virtual Data Movers

2006 EMC Corporation. All rights reserved.

Virtual Data Movers

-1

Copyright 2006 EMC Corporation. All Rights Reserved.

Revision History
Rev Number  Course Date    Revisions
1.0         February 2006  Complete
1.2         May 2006       Updates and enhancements

2005 EMC Corporation. All rights reserved.

Virtual Data Movers

Virtual Data Movers

-2

Copyright 2006 EMC Corporation. All Rights Reserved.

Virtual Data Movers


Upon completion of this module, you will be able to:
y Describe Virtual Data Movers
y Explain the benefits of Virtual Data Movers
y Configure Virtual Data Movers
y Describe VDM implementation considerations

2006 EMC Corporation. All rights reserved.

Virtual Data Movers

Virtual Data Movers

-3

Copyright 2006 EMC Corporation. All Rights Reserved.

Virtual Data Mover (VDM) Overview


y Software feature that allows administrative separation of
CIFS Servers
y Virtual Data Mover configuration information can be
replicated from a primary site to a secondary site to
support disaster recovery
y Other benefits include:
Ability to move CIFS servers from one physical Data Mover
to another for workload management
Server isolation and security
Independent CIFS configurations on the same Data Mover

2005 EMC Corporation. All rights reserved.

[Diagram: a Data Mover hosting Virtual Data Mover VDM-1 with three CIFS servers and VDM-2 with two CIFS servers]
Virtual Data Movers

A Virtual Data Mover is a software feature that enables the administrative separation of CIFS servers
from each other and from the associated environment. Separating one or more CIFS servers enables
replication of CIFS environments and allows the movement of CIFS servers from one Data Mover to
another.
VDMs store dynamic configuration data for CIFS servers in a separate configuration file system. This
includes information such as local groups, shares, security credentials, and audit logs. A VDM can be
loaded and unloaded, moved between Data Movers, or replicated to a remote Data Mover as an
autonomous unit. The server's file systems, and all of the configuration data that allows clients to
access the file system, are available in one virtual container.
A key motivation for Virtual Data Movers is the ability to replicate the CIFS environment (not only
the data) using asynchronous techniques. This application is discussed in more detail later in the
Celerra Replicator module.

Virtual Data Movers

-4

Copyright 2006 EMC Corporation. All Rights Reserved.

Virtual Data Movers


y VDM defines the configuration of one or more CIFS servers
Computer name
Shares
Security credentials
Local groups
Audit logs
Home Directory configuration

y Implemented as a separate root file system

[Diagram: Celerra 1 hosting a Virtual Data Mover (\\srv1_nas) with File System 1, File System 2, and a VDM Root File System containing eventlog, config, homedir, kerberos, shares, etc.]

2005 EMC Corporation. All rights reserved.

Virtual Data Movers

A VDM can be configured with one or more CIFS servers (NFS is only supported on physical Data
Movers). The diagram above illustrates a logical view of a Virtual Data Mover with a single CIFS
server (computer name of srv1_nas). When creating a VDM, at least one network interface is
associated with it and is used by the CIFS server for client access. The CIFS servers in each VDM
have access only to the file systems mounted to that VDM, and therefore can only export
(share) those file systems mounted to that VDM. This allows a user to administratively partition, or
group, their file systems and CIFS servers.
Virtual Data Mover specific configuration data includes the following:
y Local group database for the servers in the VDM
y Share database for the servers in the VDM
y CIFS server configuration (compnames, interface names, etc.)
y Celerra home directory information for the servers in the VDM
y Auditing and Event Log information
y Kerberos information for the servers in the VDM

Virtual Data Movers

-5

Copyright 2006 EMC Corporation. All Rights Reserved.

Physical Data Movers


y Much of the configuration
information and services is
related to the physical Data
Mover and shared with all
VDMs:

Usermapper
Passwd/group files
CIFS Service
NIS/DNS client
Routing
NTP
Network Interface
Internationalization
Virus checker
Data Mover Failover
FTP
NDMP backup

2005 EMC Corporation. All rights reserved.

[Diagram: a physical Data Mover with its root file system (configuration files, CIFS databases), file systems, NFS and CIFS servers, and NIS/DNS clients]

Virtual Data Movers

This diagram shows a typical physical Data Mover implementation. The Data Mover supports both
NFS and CIFS servers, each with the same view of all the server resources. All of the configuration,
control data, and event logs are stored in the root file system of the Data Mover.
While it is beneficial to consolidate multiple servers into one physical Data Mover for some
environments, isolation between servers is required in others (ISPs).
In a non-VDM implementation, the root file system of the Data Mover holds all configuration information
for CIFS, NFS, and other network services. When VDMs are implemented, the dynamic CIFS-
specific configuration is placed in a separate root file system while the physical Data Mover root file
system contains the configuration information about the supporting infrastructure and services.

Virtual Data Movers

-6

Copyright 2006 EMC Corporation. All Rights Reserved.

With Virtual Data Movers


[Diagram: a physical Data Mover root file system (configuration files, CIFS databases, file systems) hosting a UNIX server and CIFS server, plus VDM01 and VDM02 root file systems, each containing configuration files, local groups, home directory files, directories, file systems, and CIFS servers; network interfaces and NIS/DNS clients remain at the physical Data Mover level]

2005 EMC Corporation. All rights reserved.

Virtual Data Movers

VDMs are implemented as a separate file system that is mounted on the physical Data Mover's root file system.
All dynamic CIFS configuration information is stored in the VDM root file system.
Each VDM will have at least one network interface associated with it.
In order to support the movement of CIFS servers within a VDM from one Data Mover to another,
both Data Movers must have network interfaces defined with the same name; however, they must
have different IP addresses.
Currently a maximum of 29 VDMs are supported per physical Data Mover.

Virtual Data Movers

-7

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating a VDM
y When creating a VDM, you create a file system that
contains all the configuration information for the VDM
y File system may be created:
Using all the defaults
Control Station finds first available disk space and creates a 128 MB file
system

Explicitly specify the file system size


Specify a pool using AVM (Automatic Volume Manager)

y Virtual Data Movers can be created using either the CLI


or Celerra Manager

2005 EMC Corporation. All rights reserved.

Virtual Data Movers

By default, when creating a VDM, the Control Station automatically allocates a 128MB volume,
creates the root configuration file system (root_fs_id#) for the VDM, and saves the binding
relationship between the VDM and root file system in the Control Station's database. As a separate
file system, the user data and configuration data in the VDM are kept separate.
You can explicitly specify the file system and size you want to use, or specify a pool if using
Automatic Volume Manager (AVM).

Virtual Data Movers

-8

Copyright 2006 EMC Corporation. All Rights Reserved.

VDM States
y Two VDM states
Loaded
Fully functional active VDM
CIFS is active and user file systems accessible
Loaded as read/write

Mounted
CIFS servers are not active
Configured file system mounted read only
CIFS is not active and user file systems are not accessible
VDM is passive for eventual loading
Celerra Replicator requires mounted state to allow replication

y A VDM can be Temporarily Unloaded (TempUnloaded) or
Permanently Unloaded (PermUnloaded)
2005 EMC Corporation. All rights reserved.

Virtual Data Movers

Loaded VDM
A loaded VDM is the fully functional mode of VDM. A loaded VDM is considered active. For loaded
VDMs, the configured file system is loaded read/write and the CIFS servers in the VDM are running
and serving data. It is not possible to load one VDM into another VDM.
Mounted VDM
With mounted VDMs, the VDM configuration file system is mounted read-only but can be queried.
For example, the nas_server command can be used to get a list of the expected interfaces and the
server_export command can be used to get a list of shares. The CIFS servers are not active. The
VDM is passive for eventual loading, as in the case of a failover, where the secondary site might be
called upon to act as the primary. For Celerra Replicator, file systems need to be mounted read-only to
allow replication to occur.
Two other VDM states exist; PermUnloaded and TempUnloaded. You would unload a VDM to stop
activity on the Data Mover. Unloads, by default, are temporary. You would temporarily unload the
VDM if you were replicating from a primary to secondary site to stop activity on the primary in
preparation for replication. If you choose to permanently unload a VDM, the VDM file system is not
mounted on the Data Mover. On reboot, the VDM does not reload.

Virtual Data Movers

-9

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating Loaded VDM

To create a VDM in a loaded state


y Command:
nas_server -name <VDM_name> -type vdm -create
<movername> -setstate loaded

y Example 1:
nas_server -name vdm01 -type vdm -create server_2
-setstate loaded

y Example 2 (Using AVM):


nas_server -name vdm01 -type vdm -create server_2
-setstate loaded pool=clar_r5_performance
2005 EMC Corporation. All rights reserved.

Virtual Data Movers

This slide shows the CLI command to create a Virtual Data Mover in a loaded state.
nas_server -name <vdm_name> -type vdm -create <movername> -setstate
loaded
<vdm_name> is name assigned to VDM
<movername> is name of physical Data Mover
If a VDM name is specified, it must be unique to the entire Celerra system. If it is not specified, a
default is assigned (vdm_id#).
In this case, the file system name was specified (vdm_root_fs1). If it were not specified, the root file
system of the VDM is automatically allocated. The default size of the VDM root file system is
128MB.
If you wanted to create a Virtual Data Mover in a mounted state, the command would be the same
except that -setstate loaded would be changed to -setstate mounted.
Note: When you name a VDM, the Celerra assigns the root file system name so it is easily identified
as a root file system associated with a specific VDM. For example, if you gave the VDM the name
Marketing, the VDM root file system is automatically named root_fs_vdm_Marketing.
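For example, to create the same VDM in a mounted rather than loaded state (using the same hypothetical names as
above):
nas_server -name vdm01 -type vdm -create server_2 -setstate mounted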

Virtual Data Movers

- 10

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating a VDM via Celerra Manager


Data Movers -> Virtual Data Movers -> New

2005 EMC Corporation. All rights reserved.

Virtual Data Movers

By default, all VDMs created via the GUI are in a loaded state. At this time, we cannot verify this in
the GUI.

Virtual Data Movers

- 11

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating a VDM via Celerra Manager

The root file system is automatically created

2005 EMC Corporation. All rights reserved.

Virtual Data Movers

When you create a VDM, the VDM root file system is created to store the CIFS server configuration
information for the CIFS servers that you create within the VDM. The VDM root file system stores the
majority of the CIFS servers dynamic data.

Virtual Data Movers

- 12

Copyright 2006 EMC Corporation. All Rights Reserved.

Configuring CIFS on a VDM


y After the VDM is created, configure CIFS services on
the VDM in the same manner as on a physical Data
Mover
1. Create file system mount points and mount file system
2. Configure CIFS Server
3. Join the CIFS Server to Windows domain
4. Export Shares to clients

Note: a default CIFS server and CIFS servers within a
VDM cannot co-exist in the same DM

2005 EMC Corporation. All rights reserved.

Virtual Data Movers

Once the VDM has been created and loaded, the CIFS servers can be configured in the same manner as
would be done on a physical Data Mover.
The CIFS service is stopped/started on the physical Data Mover, not at the Virtual Data Mover level.

Virtual Data Movers

- 13

Copyright 2006 EMC Corporation. All Rights Reserved.

Mounting a file system on a VDM


y Create mountpoint and mount the file systems on the
VDM
y Example:
server_mountpoint vdm1 -c /mntvdm
server_mount vdm1 fs02 /mntvdm

2005 EMC Corporation. All rights reserved.

Virtual Data Movers

This slide shows how the file system would be mounted to the Virtual Data Mover.
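To verify the result, the mounts on the VDM can be listed in the same way as on a physical Data Mover (assuming
server_mount accepts the VDM name, as in the example above):
server_mount vdm1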

Virtual Data Movers

- 14

Copyright 2006 EMC Corporation. All Rights Reserved.

Configure CIFS Server on VDM


y Define the CIFS servers on the VDM
server_cifs <VDM_name> -add

compname=<comp_name>,domain=<domain_name>,
interface=<if_name>

2005 EMC Corporation. All rights reserved.

Virtual Data Movers

Next you would create the CIFS servers in the VDM.


If the interface= flag is omitted entirely, then the CIFS server takes all unused interfaces. This is
referred to as the default CIFS server.
Notes: The default CIFS server is not compatible with VDM because it uses all interfaces and does not
leave a free interface for the VDM. Each CIFS server on VDMs and physical Data Movers must have
its own interface explicitly specified.
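For example, to define a CIFS server named srv1_nas in the corp.hmarine.com domain on the VDM, bound to an
interface named cge0 (all names here are hypothetical):
server_cifs vdm1 -add compname=srv1_nas,domain=corp.hmarine.com,interface=cge0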

Virtual Data Movers

- 15

Copyright 2006 EMC Corporation. All Rights Reserved.

Joining the CIFS Server to the Domain


y To join the CIFS server to the domain
server_cifs <VDM_name> -Join

compname=<comp_name>,domain=<domain_name>,
admin=<admin_name>

2005 EMC Corporation. All rights reserved.

Virtual Data Movers

Join the CIFS server to the domain.
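Continuing the hypothetical example from the previous slide:
server_cifs vdm1 -Join compname=srv1_nas,domain=corp.hmarine.com,admin=administrator
The command prompts for the domain administrator's password when the join is issued.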

Virtual Data Movers

- 16

Copyright 2006 EMC Corporation. All Rights Reserved.

Exporting CIFS Shares on VDM


y Export the CIFS shares
server_export <VDM_name> -Protocol cifs

-name <sharename> <mountpoint>

2005 EMC Corporation. All rights reserved.

Virtual Data Movers

Export the shares for client access.
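For example, to share the file system mounted earlier on the VDM under a hypothetical share name of vdmshare:
server_export vdm1 -Protocol cifs -name vdmshare /mntvdm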

Virtual Data Movers

- 17

Copyright 2006 EMC Corporation. All Rights Reserved.

Moving a VDM
y To move a VDM to a different
Data Mover, the target must:
Have access to the disks that
contain all file systems of the
source VDM
Have identical interface names
to the source VDM
With different IP addresses
Have no CIFS servers with the
same name as the source VDM

[Diagram: a Virtual Data Mover and its CIFS server moving from one Data Mover to another]

y Command:
nas_server vdm <vdm_name> -move <target_movername>

2005 EMC Corporation. All rights reserved.

Virtual Data Movers

When you move a VDM to a different Data Mover, the VDM is first unloaded from the source Data
Mover. Then all the file systems are unmounted from the source. The target Data Mover loads the
VDM and then all the file systems are mounted and exported.
In order to successfully move a VDM from one Data Mover to another, the target Data Mover must
have:
y Access to all the same file systems (root and user) as the source Data Mover.
y A network interface with the same name. However, the network device does not need to be the
same. For example, the source can use a 10/100 Mbps Ethernet device and the target can use a
10/100/1000 Mbps Ethernet device. The move will be successful as long as the interface names
are identical.
y The target Data Mover should not have any CIFS servers with compnames or netbios names
matching those of the VDM to be moved.
You can use the same IP addresses; however, there is a risk of duplicate addresses. If you use the same
IP addresses, you have to bring down the interface on the source and bring up the interface on the
target manually as part of the procedure for moving the VDM.
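A minimal sketch of the move, assuming VDM vdm1 on server_2 is moved to server_3 and the same IP addresses are being reused (names and interfaces are hypothetical; confirm option spellings in the nas_server and server_ifconfig man pages):

# bring the interface down on the source before the move
server_ifconfig server_2 cge1 down

# move the VDM and its mounted file systems to the target Data Mover
nas_server -vdm vdm1 -move server_3

# bring the interface up on the target
server_ifconfig server_3 cge1 up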

Virtual Data Movers

- 18

Copyright 2006 EMC Corporation. All Rights Reserved.

VDMs and Celerra Replicator

[Figure: the VDM (with its CIFS server and user file system) on the Data Mover at the primary site is replicated to a Data Mover at the secondary site; both the VDM root file system ("Replicate VDM") and the user data ("Replicate User Data") are replicated]

y CIFS configuration has complex interrelationships and is dependent on much more than just the file systems
y Disaster Recovery requires replicating the environment, not just the data
y Celerra Replicator provides asynchronous replication of both VDM and user file systems
2005 EMC Corporation. All rights reserved.

Virtual Data Movers

With the asynchronous data replication and failover/failback capabilities of Celerra Replicator, and
Virtual Data Movers, the Celerra can offer an Asynchronous data recovery solution. A differential
copy mechanism is provided when file system changes are transmitted across the IP network from the
primary to a secondary site. In the event of a disaster, the entire CIFS environment (VDM) can be
failed over to the secondary site. Clients continue accessing their CIFS shares from the secondary site.

Virtual Data Movers

- 19

Copyright 2006 EMC Corporation. All Rights Reserved.

Module Summary
Key points covered in this module are:
y A Virtual Data Mover (VDM) is a Celerra feature that enables
administrators to group CIFS servers into virtual containers
y The VDM stores information regarding local groups, shares,
security credentials, and audit logs
y VDMs are created with the nas_server command
y VDMs can be loaded and unloaded, moved between Data Movers,
or replicated to a remote Data Mover
A loaded VDM is the fully functional mode of a VDM
When a VDM is mounted, the configuration file system is mounted read-only and can be queried, but the CIFS servers are not active

y To move a VDM from one Data Mover to another, the target DM


must have network interfaces with the same name and access to
the VDM root file system and all user file systems
2006 EMC Corporation. All rights reserved.

Virtual Data Movers

The key points covered in this module are shown here.

Virtual Data Movers

- 20

Copyright 2006 EMC Corporation. All Rights Reserved.

Closing Slide

2005 EMC Corporation. All rights reserved.

Virtual Data Movers

Virtual Data Movers

- 21

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra ICON
Celerra Training for Engineering

SnapSure

2006 EMC Corporation. All rights reserved.

Celerra SnapSure

-1

Copyright 2006 EMC Corporation. All Rights Reserved.

Revision History
Rev Number    Course Date      Revisions
1.0           February 2006    Complete
1.2           May 2006         Updates and enhancements

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 2

Celerra SnapSure

-2

Copyright 2006 EMC Corporation. All Rights Reserved.

SnapSure
Upon completion of this module, you will be able to:
y Describe how SnapSure makes a point-in-time view of a
file system
y Describe the use of a Save Volume (SavVol), how it is
sized and what happens when it runs out of available
space
y Using both the CLI and Celerra Manager, configure a
Checkpoint file system
y From a client system, access a checkpoint using CVFS
(Checkpoint View File System)
y Schedule Checkpoints
y Discuss planning issues around SnapSure
2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 3

SnapSure is conceptually similar to CLARiiON's SnapView Snapshots or Symmetrix TimeFinder/Snap in that it makes a point-in-time view of a file system by copying the original data to a save area at the time a data block is modified. There are a few big differences between SnapSure and SnapView Snapshots:
y SnapView Snapshots operates at the LUN level, while SnapSure performs checkpoints at the file system level.
y SnapView Snapshots provides a readable and writable copy of a LUN, while SnapSure provides a read-only copy.
y SnapSure is more of a complete solution in that it provides scheduling for automation and, by default, checkpoints are mounted automatically, making it easy to access the checkpoint copy of the file system.

Celerra SnapSure

-3

Copyright 2006 EMC Corporation. All Rights Reserved.

SnapSure Overview
y SnapSure Provides
A point-in-time view of file system
Known as a Checkpoint
Consists of a combination of live file system data and saved data

Uses a Copy Old On Modify technique to maintain previous views


of production file systems

y Provides multiple views of a file system


Live data viewed directly from production file system
Checkpoint provides point-in-time, read-only view of production file
system
Each production file system may have as many as 96 checkpoints,
each presenting a logical view of the file system at different points in
time
64 checkpoints with NAS 5.4 (check the Support Matrix)
2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 4

SnapSure creates a point-in-time view of a file system. The checkpoint file system it creates is not a copy or a mirror image of the original file system. Rather, the checkpoint file system is a calculation of what the production file system looked like at a particular time and is not an actual file system at all. The checkpoint is a read-only view of the file system as it existed at that particular time.
Note: With NAS 5.5, 96 checkpoints are supported.
IMPORTANT: The information in this module references the current version of SnapSure. ALWAYS read the SnapSure
documentation and Release Notes for a specific version of Celerra Network Server.

Many applications benefit from the ability to work from a file system that is not in a state of change.
Fuzzy backups can occur when performing backups from a live file system. Performing backups
from a SnapSure checkpoint provide a consistent point-in-time source and eliminates fuzzy backups.
Applications like Celerra Replicator require a baseline copy of the production file system during setup,
this also requires a consistent point-in-time, read-only file system from which to copy the data.

Celerra SnapSure

-4

Copyright 2006 EMC Corporation. All Rights Reserved.

Example of Users' Views of Data

[Figure: the production file system provides the live data with full read/write access; checkpoints provide read-only Monday, Tuesday, and Wednesday views of the same data]

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 5

SnapSure checkpoints provide users with multiple point-in-time views of their data. In the illustration
above, the user's live, production data is my_file. If they need to access what that file looked like on
previous days, they can easily access read-only versions of that file as viewed from different times.
This can be useful for restoring lost files or simply for checking what the data looked like previously.
In this example, checkpoints were taken on each day of the week.

Celerra SnapSure

-5

Copyright 2006 EMC Corporation. All Rights Reserved.

Key Components of SnapSure

y Production File System (PFS)
y Checkpoint (ckpt)
 Logical point-in-time view of data
 Read-only
 Supports multiple checkpoints per PFS
y SavVol
 Stores original data from the PFS before changes are made
 Copy old on modify (COOM)
 Also persistently stores internal data structures (original data, bitmap, blockmaps)
y Bitmap
 Data structure that identifies which blocks have changed in the PFS
y Blockmaps
 Record the addresses of saved data blocks in the SavVol

[Figure: PFS blocks are copied old-on-modify (COOM) to the SavVol, which holds the original data, the bitmap, and the blockmaps for the checkpoint]
2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 6

PFS
The PFS is any typical Celerra file system. Applications that require access to the PFS are referred to
as PFS Applications.
Checkpoint
A point-in-time view of the PFS. SnapSure uses a combination of live PFS data and saved data to
display what the file system looked like at a particular point-in-time. A checkpoint is thus dependent
on the PFS and is not a disaster recovery solution. It is NOT a copy of a file system.
SavVol
Each PFS with a checkpoint has an associated save volume, or SavVol. The first change made to each
PFS data block following a checkpoint triggers SnapSure to copy that data block to the SavVol.
Bitmap
SnapSure maintains a bitmap of every data block in the PFS where it identifies if the data block has
changed.
Blockmap
A blockmap of the SavVol is maintained to record the address in the SavVol of each saved data block.

Celerra SnapSure

-6

Copyright 2006 EMC Corporation. All Rights Reserved.

Copy Old On Modify

[Figure: when PFS applications modify blocks DB01, DB05, and DB08, SnapSure first copies the original blocks to SavVol addresses 0-2, flips the corresponding Bitmap 1 entries from 0 to 1, and records each saved block's SavVol address in Blockmap 1]

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 7

When the first checkpoint of a PFS is created, SnapSure creates the SavVol, a bitmap with all blocks set to zero, and a blockmap that is empty.

Celerra SnapSure

-7

Copyright 2006 EMC Corporation. All Rights Reserved.

Checkpoint Provides View of FS at Point-in-Time

y Checkpoint is a read-only view of the PFS
y On a read, SnapSure first checks the bitmap entry for the block
 If 0, the block is read from the PFS
 If 1, the address is found in the blockmap and the block is read from the SavVol

[Figure: a checkpoint application reads blocks 6, 8, and 9. Blocks 6 and 9 are unchanged (bitmap = 0) and are read from the PFS; block 8 has changed (bitmap = 1), so Blockmap 1 supplies its SavVol address and the saved block is read from the SavVol]

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 8

When a user or application reads the point-in-time of the newest checkpoint, the bitmap is parsed to
see if the block requested has changed since the creation of the checkpoint. If the block has not
changed (a value of 0) the READ is performed from the PFS. If the block has changed (a value of 1)
the blockmap is parsed to identify the address in the SavVol for that data block, then the READ is
performed from the SavVol.
The example above illustrates a READ of data blocks 6, 8, and 9.

Celerra SnapSure

-8

Copyright 2006 EMC Corporation. All Rights Reserved.

Multiple Checkpoints

y Multiple checkpoints allow multiple point-in-time views of the PFS
 Blockmap for each checkpoint
 Bitmap for the latest checkpoint only
y Old blockmaps are preserved and linked

[Figure: after a second checkpoint is created, new modifications to DB02, DB06, and DB08 are copied to SavVol addresses 3-5 and recorded in Blockmap 2, tracked by Bitmap 2; Blockmap 1 and the blocks saved for the first checkpoint (DB01, DB05, DB08 at addresses 0-2) are preserved]

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 9

When a new checkpoint is created SnapSure creates a new blockmap and begins the process anew.

Celerra SnapSure

-9

Copyright 2006 EMC Corporation. All Rights Reserved.

Viewing the Latest Checkpoint

y Reading the newest checkpoint works the same as with a single checkpoint
y All older blockmaps are ignored

[Figure: a checkpoint application reads blocks 6, 8, and 9 from the latest checkpoint using Bitmap 2 and Blockmap 2; the older Blockmap 1 is ignored]

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 10

Client READs from the latest checkpoint are performed using the same method as when READing
from a single checkpoint. Older blockmaps are simply ignored

Celerra SnapSure

- 10

Copyright 2006 EMC Corporation. All Rights Reserved.

Viewing the Other Checkpoints

Reading older checkpoints
y The bitmap is ignored

[Figure: a checkpoint application reads blocks 6, 8, and 9 from an older checkpoint; each block is located by querying that checkpoint's blockmap, then the blockmaps of newer checkpoints, and finally the PFS]

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 11

When READs of other checkpoints (i.e. not the newest) are requested, SnapSure directly queries the
checkpoint's blockmap for the SavVol block number to read. If the block number is in the blockmap,
the data is read from the SavVol space for the checkpoint. If the block number is not in the blockmap,
SnapSure queries the next newer checkpoint to find the block number. If the requested block number is
found in the blockmap, the data block is read from the SavVol space for the checkpoint. This
mechanism is repeated until it reaches the PFS.

Celerra SnapSure

- 11

Copyright 2006 EMC Corporation. All Rights Reserved.

SavVol Sizes
y Save Volumes (SavVol) are created automatically when
you create the first Checkpoint of a Production File
System
Sizes based on size of PFS
If PFS > 10GB then SavVol = 10GB
If PFS < 10GB and PFS > 64MB then SavVol = PFS size
If PFS < 64MB then SavVol = 64MB

y All checkpoints of a PFS share the same SavVol


y May manually create the SavVol if the default size or
back-end attributes are not appropriate
See Using SnapSure on Celerra Technical Module for planning
considerations
2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 12

SnapSure requires a SavVol to hold data when you create the first checkpoint of a PFS. SnapSure will
create and manage the SavVol automatically.
SavVol sizes
The following criteria is used for automatic SavVol creation:
y If PFS > 10GB, then SavVol = 10GB
y If PFS < 10GB and PFS > 64MB, then SavVol = PFS size
y If PFS < 64MB, then SavVol = 64MB
y Extends by 10GB if checkpoint reaches a high water mark
Custom SavVol Creation
The SavVol can be manually created and managed. Please see Using SnapSure on Celerra for planning considerations.
Additional Checkpoints
If you create another checkpoint, SnapSure uses the same SavVol, but logically separates the point-in-time data using unique checkpoint names.

Celerra SnapSure

- 12

Copyright 2006 EMC Corporation. All Rights Reserved.

SavVol Automatic Extension


y If insufficient disk space in SavVol, will overwrite and
delete oldest checkpoint
It will continue to overwrite older checkpoints until none are left
At that point, all checkpoints that share the SavVol are invalidated

y Automatic extensions increase available space in SavVol


Extension is triggered by High Water Mark (HWM)
Default HWM is 90% full

Additional space is allocated in 10GB increments


Will not exceed 20% of total space available

y Automatic extension can be disabled and default HWM


changed
2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 13

If the SnapSure SavVol reaches a full state it will become inactive. All checkpoints using that SavVol
are then invalid and cannot be accessed. Therefore, SnapSure employs a High Water Mark (HWM) as
a point at which the SavVol will be automatically extended. The HWM is expressed as a percentage of
the total size of the SavVol. The default HWM is 90%.
When the SavVol High Water Mark (HWM) is reached, SnapSure will extend the SavVol in 10GB
increments. However, SnapSure will not consume additional disk space if doing so will leave the
Celerra with less than 20% free disk space. If there is not sufficient disk space to extend the SavVol,
SnapSure will begin overwriting checkpoints starting with the oldest checkpoint.

Celerra SnapSure

- 13

Copyright 2006 EMC Corporation. All Rights Reserved.

Disabling SavVol Automatic Extension


y To disable automatic SavVol extension, set HWM to 0%
y SnapSure will overwrite the oldest checkpoint
y Maintains a rolling window of checkpoints
y Auditing and manually extending size of SavVol is
important to ensure appropriate window of changes is
maintained

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 14

If you set the HWM to 0% when you create a checkpoint, this tells SnapSure not to extend the SavVol
when a checkpoint reaches full capacity. Instead, SnapSure deletes the data in the oldest checkpoint
and recycles the space to keep the most recent checkpoint active. It repeats this behavior each time a
checkpoint needs space. If you use this setting and have a critical need for the checkpoint information,
periodically check the SavVol space used (using the fs_ckpt <fsname> -list command), and before it
becomes full, copy the checkpoint to tape, read it with checkpoint applications, or extend it to keep it
active. In summary, if you plan to use the 0% HWM option at creation time, auditing and extending
the SavVol yourself are important checkpoint management tasks to consider.
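For example, the two commands referenced in this module can be combined into a simple periodic audit (pfs1 and pfs1_ckpt1 are hypothetical names following the default naming convention):

# list the checkpoints of the PFS and their usage
fs_ckpt pfs1 -list

# show SavVol utilization for a specific checkpoint
nas_fs -size pfs1_ckpt1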

Celerra SnapSure

- 14

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating a Checkpoint File System


To create a checkpoint for PFS pfs1
y Command:
fs_ckpt <PFS_name> -Create

y Example:
fs_ckpt pfs1 -Create

y Results:

Default name is pfs1_ckpt1


SnapSure creates SavVol from volume pool
High Water Mark (HWM) set to default 90%
Checkpoint is automatically mounted to same Data Mover as pfs1

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 15

By default, when you create the first checkpoint of a PFS, SnapSure creates a SavVol. It uses this
SavVol for all additional checkpoints for that PFS.
To create a checkpoint file system, use the following command:
fs_ckpt <PFS_name> -Create

Note: Check Using SnapSure on Celerra for all the command options.

Celerra SnapSure

- 15

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating a Checkpoint File System (CLI)


Additional options
y Assign a name for the checkpoint
fs_ckpt pfs1 -name Monday -Create

y Assign a custom HWM


fs_ckpt pfs1 -Create -o %full=75

y For more configuration options see the Celerra Network


Server Command Reference Manual

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 16

Check Using SnapSure on Celerra for all the command options.

Celerra SnapSure

- 16

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating a Checkpoint File System


y Checkpoints > New

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 17

This Celerra Manager screen shows how to create a checkpoint using Celerra Manager.

Celerra SnapSure

- 17

Copyright 2006 EMC Corporation. All Rights Reserved.

SnapSure Additional Functions


SnapSure allows you to perform the following actions:
y Checkpoint View File System
y Celerra Manager provides scheduler for automating the
creation of checkpoint
y Refresh option replaces a checkpoint with a new point-intime view
y Checkpoints can be used to restore a PFS to a previous
point-in-time
Rollback
First creates a new checkpoint to facilitate rolling forward again if
necessary
2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 18

Celerra SnapSure

- 18

Copyright 2006 EMC Corporation. All Rights Reserved.

Checkpoint View File System (CVFS)


y SnapSure navigation feature provides CIFS and NFS
clients with read-only access to checkpoint
y Hidden directory named
.ckpt by default
y Created automatically

.ckpt

date &time

date &time

y Contains all checkpoint


views of that directory
Naming convention
yyyy_mm_dd_hh_mm_ss_<Data_Mover_timezone>

y Names can be changed when mounting, for example:


server_mount server_3 -o cvfsname=Monday
pfs1_ckpt1 /pfs1_ckpt1
Celerra SnapSure - 19

2006 EMC Corporation. All rights reserved.

CVFS is a navigation feature that provides NFS and CIFS clients with read-only access to online,
mounted checkpoints in the PFS namespace. This eliminates the need for administrator involvement in
recovering point-in-time files.
Virtual entries to checkpoints in PFS
In addition to the .ckpt_mountpoint entry at the root of the PFS, SnapSure also creates virtual links
within each directory of the PFS. All of these hidden links are named ".ckpt" (by default) and can be
accessed from within every directory, as well as the root, of a PFS. You can change the name of the
virtual checkpoint name from .ckpt to a name of your choosing by using a parameter in the
slot_(x)/param file. For example you could change the name from .ckpt to .snapshot. The
.ckpt hidden link cannot be listed in any way and can only be accessed by manually changing into that
link. After changing into .ckpt, listing the contents will display links to all Checkpoint views of that
directory. The names of these links will reflect the date of the Checkpoint followed by the time zone of
the Control Station.
You can change the checkpoint name presented to NFS/CIFS clients when they list the .ckpt directory,
to a custom name, if desired. The default format of checkpoint names is:
yyyy_mm_dd_hh.mm.ss_<Data_Mover_timezone>. You can customize the default checkpoint names
shown in the .ckpt directory to names such as Monday, or Jan_week4_2004, and so on. You can only
change the name when you mount the checkpoint.
For example, to present checkpoint pfs1_ckpt1 as Monday while mounting it on Data Mover server_3 at mountpoint /pfs1_ckpt1, use the following command:
server_mount server_3 -o cvfsname=Monday pfs1_ckpt1 /pfs1_ckpt1

Celerra SnapSure

- 19

Copyright 2006 EMC Corporation. All Rights Reserved.

Accessing CVFS (NFS)


To access a checkpoint:
y List a client directory in the PFS to view existing files and
directories
ls -l /EMC/mp1

y List the .ckpt virtual directory


ls -la /EMC/mp1/.ckpt

y Access a virtual checkpoint from the list of entries


ls -l /EMC/mp1/.ckpt/2005_11_19_16.39.39_GMT

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 20

ls -l /EMC/mp1
drwxr-xr-x 2 32771 32772 80 Nov 21 8:05 2005 dir1
drwxr-xr-x 2 root other 80 Nov 14 10:25 2005 resources
-rw-r--r-- 1 32768 32772 292 Nov 19 11:15 2005 A1.dat
-rw-r--r-- 1 32768 32772 292 Nov 19 11:30 2005 A2.dat
ls -la /EMC/mp1/.ckpt
drwxr-xr-x 5 root root 1024 Nov 19 08:02 2005_11_19_16.15.43_GMT
drwxr-xr-x 6 root root 1024 Nov 19 11:36 2005_11_19_16.39.39_GMT
drwxr-xr-x 7 root root 1024 Nov 19 11:42 2005_11_20_12.27.29_GMT
ls -l /EMC/mp1/.ckpt/2005_11_19_16.39.39_GMT
-rw-r--r-- 1 32768 32772 292 Nov 19 11:15 A1.dat
-rw-r--r-- 1 32768 32772 292 Nov 19 11:30 A2.dat
-rw-r--r-- 1 32768 32772 292 Nov 19 11:45 A3.dat
drwxr-xr-x 2 root other 80 Nov 14 10:25 resources

Celerra SnapSure

- 20

Copyright 2006 EMC Corporation. All Rights Reserved.

Accessing CVFS (CIFS)

y In the address field of Windows Explorer, enter the name


of the client directory on the PFS
y Select a virtual Checkpoint from the list
z:\.ckpt\2005_11_13_01_03_38.GMT
2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 21

Other ways to access Checkpoints are:


y Enter the full path from Start > Run
y Manually change to the .ckpt directory from the Windows CLI via a Mapped Network Drive

Celerra SnapSure

- 21

Copyright 2006 EMC Corporation. All Rights Reserved.

Accessing Checkpoint using ShadowCopyClient


CIFS clients can also access
checkpoint data via Windows
ShadowCopyClient (CIFS only)
y Native with Windows 2003
(Download available for Windows
2000 and XP)
y Method
Access share as mapped drive
View properties on any object
Select Previous Versions tab

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 22

ShadowCopyClient is a Microsoft Windows feature that allows Windows users to access previous
versions of a file via the Microsoft Volume Shadow Copy Server. ShadowCopyClient is also supported
by EMC Celerra to enable Windows clients to list, view, copy, and restore from files in checkpoints
created with SnapSure.

Celerra SnapSure

- 22

Copyright 2006 EMC Corporation. All Rights Reserved.

Displaying Checkpoints
y Command:
nas_fs -info <PFS_name>

y Example:
nas_fs -info fs01
Id
Name

= 21
= fs01

ckpts = fs01_ckpt2,fs01_ckpt3

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 23

Displaying checkpoints using a PFS name


To view all checkpoints that have been created for a given PFS, use the following command:
nas_fs -info <PFS_name>

Example
nas_fs -i fs01
id

= 21

name

= fs01

ckpts

= fs01_ckpt2,fs01_ckpt3

Celerra SnapSure

- 23

Copyright 2006 EMC Corporation. All Rights Reserved.

Displaying Checkpoints
y Checkpoints

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 24

This Celerra Manager screen shows how to list checkpoints using Celerra Manager.

Celerra SnapSure

- 24

Copyright 2006 EMC Corporation. All Rights Reserved.

Checkpoint Details
To view the checkpoint properties
y Command:
nas_fs -info <checkpoint_name>

y Example:
nas_fs -info fs01_ckpt_01
id = 34
name = fs01_ckpt_01
Type = ckpt
Checkpt_of = fs01 Thu Nov 29 13:39:55 EDT 2001
used = 78%
Full(mark) = 80%
Celerra SnapSure - 25

2006 EMC Corporation. All rights reserved.

You can audit a Checkpoint to monitor its SavVol space utilization. SnapSure features self-extending
checkpoints that are triggered when the HWM is reached.
Command
To audit a checkpoint, type the following command:
nas_fs -info <checkpoint_name>

Example
To audit the checkpoint named fs01_ckpt_01, type the following command:
nas_fs -i fs01_ckpt_01
id = 34
name = fs01_ckpt_01
Type = ckpt
Checkpt_of = fs01 Thu Nov 29 13:39:55 EDT 2001
used = 78%
Full(mark) = 80%

This example shows that the SavVol is now at 78% capacity. It auto-extends when it reaches a
capacity of 80%.

Celerra SnapSure

- 25

Copyright 2006 EMC Corporation. All Rights Reserved.

Checkpoints
y To view PFS (ckptfs) and SavVol (volume) storage utilization
nas_fs -size pfs1_ckpt1
volume: total = 1000 avail = 700 used = 300 (30%) (sizes in MB)
ckptfs: total = 10000 avail = 4992 used = 5008 (50%) (sizes in MB)

Note: volume = Save Volume


ckptfs = Production File System

Celerra SnapSure - 26

2006 EMC Corporation. All rights reserved.

Use the nas_fs -size command on the checkpoint to audit the disk utilization of the SavVol.
Sample output from nas_fs -size
nas_fs -s pfs1_ckpt1
volume: total = 1000 avail = 700 used = 300 (30%) (sizes in MB)
ckptfs: total = 10000 avail = 4992 used = 5008 (50%) (sizes in MB)

(Where volume is the SavVol and ckptfs is the PFS)

Celerra SnapSure

- 26

Copyright 2006 EMC Corporation. All Rights Reserved.

Checkpoint Details
y File Systems > right-click the PFS > Properties > Checkpoint Storage
tab

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 27

This Celerra Manager screen shows how to view checkpoint storage details using Celerra Manager.

Celerra SnapSure

- 27

Copyright 2006 EMC Corporation. All Rights Reserved.

Refreshing Checkpoints
y Unmounts checkpoint
y Deletes data for that checkpoint
y Updates checkpoint to newest status
y Remounts checkpoint
y Example:
fs_ckpt fs01_ckpt01 -refresh

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 28

Refreshing Checkpoints
When you refresh a checkpoint, SnapSure deletes the checkpoint and creates a new checkpoint,
recycling SavVol space while maintaining the old file system name, ID, and mount state. If a
checkpoint contains important data, be sure to back it up or use it before you refresh it.
Command
The -refresh command automatically unmounts the checkpoint, deletes the contents of the SavVol,
updates the status of the checkpoint to reflect that it is the most current, and, finally, remounts the
checkpoint.

Celerra SnapSure

- 28

Copyright 2006 EMC Corporation. All Rights Reserved.

Refreshing Checkpoints
y Checkpoints > right-click the Checkpoint to refresh > Refresh

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 29

This slide shows how to refresh a checkpoint using Celerra Manager.

Celerra SnapSure

- 29

Copyright 2006 EMC Corporation. All Rights Reserved.

Restoring the Production File Systems


y File System recovery
y Automatically creates checkpoint of PFS prior to restore
Enables rolling back if required

y Command:
/nas/sbin/rootfs_ckpt <ckpt_name> -Restore

y Example:
/nas/sbin/rootfs_ckpt pfs1_ckpt1 -Restore

y Requires root authority

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 30

Restoring file systems


SnapSure provides the capability to restore the entire production file system from a given Checkpoint.
Doing so requires root authority as well as the -Restore option.
Note:
A checkpoint file system is automatically created for the point-in-time image of the PFS just before the
restore, in the event that the restored image is not wanted.

Celerra SnapSure

- 30

Copyright 2006 EMC Corporation. All Rights Reserved.

Restoring File Systems


y Checkpoints > right-click the Checkpoint to restore > Restore

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 31

This slide shows how to restore a checkpoint using Celerra Manager.

Celerra SnapSure

- 31

Copyright 2006 EMC Corporation. All Rights Reserved.

Deleting Checkpoints
y Checkpoint must be permanently unmounted
y Deleted like any other file system
y Command:
nas_fs -delete <checkpoint_name>

y Example:
nas_fs -d fs1_ckpt1

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 32

When you delete a checkpoint (and it is not the oldest checkpoint), SnapSure compacts the deleted
checkpoint and merges the needed blockmap entries to the older checkpoint before the delete
completes. This ensures that chronological blocks of data, important to older checkpoints, are not lost
with the delete. Deleting checkpoints out of order does not affect the point-in-time view of other
checkpoints and frees up SavVol space that can be used for new checkpoints. If you delete the newest
checkpoint of a PFS, no compact or merge process occurs until a new checkpoint is created. The
compact and merge process is an asynchronous, background process at that time. A change in SavVol
space is only seen after the process completes.

Note: The checkpoint to be deleted must not be mounted.
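Putting the two steps together, a minimal sketch with hypothetical names (the -perm option of server_umount is assumed here for a permanent unmount; check the man page for your release):

# permanently unmount the checkpoint from the Data Mover
server_umount server_2 -perm /pfs1_ckpt1

# delete the checkpoint file system
nas_fs -delete pfs1_ckpt1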

Celerra SnapSure

- 32

Copyright 2006 EMC Corporation. All Rights Reserved.

Deleting Checkpoints
y Checkpoints > right-click the Checkpoint to delete > Delete

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 33

This slide shows how to delete a checkpoint using Celerra Manager.

Celerra SnapSure

- 33

Copyright 2006 EMC Corporation. All Rights Reserved.

SnapSure Checkpoint Scheduling


y Use Celerra Manager or Linux cron job script
y Schedule Interval Options
Once or Repeating
Hourly
Daily
Weekly
Monthly

y Do not schedule checkpoints to occur while the Celerra is


backing up its database
Backups occurs at one minute past the hour

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 34

An automated checkpoint-refresh solution can be configured using Celerra Manager or a Linux cron
job script. There is no Celerra CLI equivalent.
Using the Checkpoints > Schedules tab in Celerra Manager, you can schedule checkpoint creation and
refreshes on arbitrary, multiple hours of a day, days of a week, or days of a month. You can also
specify multiple hours of a day on multiple days of a week to further simplify administrative tasks.
More than one schedule per PFS is supported, as is the ability to name scheduled checkpoints, name
and describe each schedule, and to query the schedule associated with a checkpoint. You can also
create a schedule of a PFS that already has a checkpoint created on it, and modify existing schedules.
You can also create a basic checkpoint schedule without some of the customization options by clicking
on any mounted PFS listed in Celerra Manager and click properties > Schedules tab. This tab enables
hourly, daily, weekly, or monthly checkpoints to be created using default checkpoint and schedule
names and no ending date for the schedule.
SnapSure allows multiple checkpoint schedules to be created for each PFS. However, EMC supports a
total of 64 checkpoints (scheduled or otherwise) per PFS, as system resources permit.
Because Celerra backs up its database at one minute past every hour, checkpoints should not be
scheduled to occur at these times.
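If the Linux cron route is used instead of Celerra Manager, a sketch of a crontab entry on the Control Station might look like the following; the /nas/bin path and the checkpoint name are assumptions, and the time deliberately avoids one minute past the hour:

# refresh the existing checkpoint pfs1_ckpt1 every day at 02:30
30 2 * * * /nas/bin/fs_ckpt pfs1_ckpt1 -refresh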

Celerra SnapSure

- 34

Copyright 2006 EMC Corporation. All Rights Reserved.

Checkpoint Scheduling
y Checkpoints > Schedules tab > New

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 35

This slide shows how to create a checkpoint schedule using Celerra Manager.

Celerra SnapSure

- 35

Copyright 2006 EMC Corporation. All Rights Reserved.

SnapSure Performance Considerations


y Creating a new Checkpoint causes a brief pause to the PFS
Write activity is suspended
Read activity continues

y Restoring a PFS from a checkpoint causes the PFS to freeze


momentarily
Both read and write activity suspended

y Refreshing a checkpoint suspends reading of checkpoint file system


y Deleting a checkpoint momentarily suspends write activity to the PFS
y The length of pause depends on the amount of cached data, but is
typically only a few seconds
NFS clients simply retry
Worst case Windows clients must refresh share

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 36

Creating a checkpoint requires the PFS to be paused. Therefore, PFS write activity suspends (read
activity continues) while the system creates the checkpoint. The pause time depends on the amount of
data in the cache but is typically a few seconds. EMC recommends a 10 minute interval between the
creation, or refresh, of checkpoints of the same PFS.
Restoring a PFS from a checkpoint requires the PFS to be frozen. Therefore, all PFS activities are
suspended while the system restores the PFS from the selected checkpoint.
Refreshing a checkpoint requires the checkpoint file system to be frozen. Therefore, checkpoint file
system read activity suspends while the system refreshes the checkpoint. If a UNIX client were
attempting to access the checkpoint during a refresh, the system continuously tries to connect. When
the system thaws, the file system automatically remounts. If a CIFS client were attempting to access
the checkpoint during a refresh, the Windows application may drop the link. It is dependent on the
application, or if the system freezes for more than 45 seconds.
Deleting a checkpoint requires the PFS to be paused. Therefore, PFS write activity suspends
momentarily while the system deletes the checkpoint.
If a checkpoint becomes inactive for any reason, read/write activity on the PFS continues
uninterrupted.

Celerra SnapSure

- 36

Copyright 2006 EMC Corporation. All Rights Reserved.

Data Mover Memory Requirements for SnapSure


y SnapSure allocates up to 1GB of Data Mover memory for
checkpoint blockmaps
512MB if DM RAM is <4GB

y Paged in and out of memory as needed


y SnapSure memory allocations include Celerra Replicator
operations
$ server_sysstat server_2 -blockmap
server_2 :
total paged in              = 3919
total paged out             = 81240
page in rate                = 0
page out rate               = 0
block map memory quota      = 1048576(KB)
block map memory consumed   = 275624(KB)

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 37

Celerra Network Server allocates up to 1 GB of physical RAM per Data Mover to store the blockmaps
for all checkpoints of all PFSs on the Data Mover. If a Data Mover has less than 4GB of RAM, 512MB will be allocated.
Each time a checkpoint is read, the system queries it to find the location of the required data block. For
any checkpoint, blockmap entries that are needed by the system but not resident in main memory is
paged in from the SavVol. The entries stay in main memory until system memory consumption
requires them to be purged.

Celerra SnapSure

- 37

Copyright 2006 EMC Corporation. All Rights Reserved.

Module Summary
Key points covered in this module are:
SnapSure creates a point-in-time view of a file system
When a data block in a PFS is changed, the data block is first
copied to the SavVol to preserve the point-in-time data
Bitmaps keep track of the data blocks that have changed since the
time the checkpoint was created
A blockmap maps data block location in the SavVol
All checkpoints for a single PFS reside in one SavVol
The Celerra allows up to 64 (96 with NAS 5.5) checkpoints for each
PFS
Creating, refreshing, and restoring checkpoints can be
accomplished with the CLI and Celerra Manager
The creation of SnapSure checkpoint schedules is accomplished
with Celerra Manager
2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 38

Celerra SnapSure

- 38

Copyright 2006 EMC Corporation. All Rights Reserved.

Closing Slide

2006 EMC Corporation. All rights reserved.

Celerra SnapSure - 39

Celerra SnapSure

- 39

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra ICON
Celerra Training for Engineering

Celerra Replicator

2006 EMC Corporation. All rights reserved.

Celerra Replicator

-1

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Replicator
Upon completion of this module, you will be able to:
y Explain how Celerra Replicator makes a copy of a file
system on the same Data Mover, another Data Mover in
the same Celerra, or a Data Mover in a remote Celerra
y Identify conditions and requirements for implementing
Celerra Replicator
y Describe the stages in the replication process
y Using the CLI and or Celerra Manager, configure Celerra
Replicator
y Describe CIFS asynchronous data recovery using
Celerra Replicator and Virtual Data Movers
y Identify issues and restrictions
2006 EMC Corporation. All rights reserved.

Celerra Replicator - 2

The objectives for this module are shown here. Please take a moment to review them.

Celerra Replicator

-2

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Replicator Overview


y Asynchronous file system replication
y Creates and maintains a read-only copy of source file
system
Local - same Celerra system
Remote - different Celerra system
Loopback - same Data Mover

y Remote replication enables Disaster Recovery


In the event of primary site failure
Failover to secondary site

When Primary site returns to service


Failback to primary site

2006 EMC Corporation. All rights reserved.

Celerra Replicator - 3

Celerra Replicator produces a read-only, point-in-time replica of a source (production) file system.
The Celerra Replication service periodically updates this copy, making it consistent with the
production file system. This read-only replica can be used by a Data Mover in the same Celerra
cabinet (local replication), or a Data Mover at a remote site (remote replication) for content
distribution, backup, and application testing.
In the event that the primary site becomes unavailable for processing, Celerra Replicator enables you
to failover to the remote site for production. When the primary site becomes available, you can use
Celerra Replicator to synchronize the primary site with the remote site, and then failback the primary
site for production. You can also use the failover/reverse features to perform maintenance at the
primary site or testing at the remote site.

Celerra Replicator

-3

Copyright 2006 EMC Corporation. All Rights Reserved.

Key Terminology
y Local Source (local_src)
y Remote Destination (remote_dst)
y Local Destination (local_dst)
y Delta Set
SnapSure Save Volume

y Playback Service
y Replicator SavVol
y Replication Failover
y Replication Resync
y Replication Reverse
2006 EMC Corporation. All rights reserved.

Celerra Replicator - 4

local_src: Primary File System or Production File System


local_dst: Secondary File System or Destination File System
Delta Set: Modifications on local_src
SnapSure Save Volume: Where SnapSure writes checkpoint data
Playback Service: The process of reading the delta sets from the destination SavVol and updating the
destination file system.
Replicator SavVol: A volume used to store copied data blocks from the source. Celerra Replicator
and SnapSure share the allocated pool for SavVol space.
Replication Failover: Process that changes the destination file system from read only to read/write.
Replication reverse: Process of reversing the direction of replication (formerly known as failback).
The source file system becomes read-only and the destination file system becomes read/write.

Celerra Replicator

-4

Copyright 2006 EMC Corporation. All Rights Reserved.

Replication Requirements
y Celerra Replicator keeps the destination file system up to date
Initial copy of source file system required at destination
Use fs_copy command
For large file systems, it may be necessary to make a physical copy and
transport it to destination

y To start replication, the source and destination file systems must be


mounted
Destination must be a rawfs

y For local replication, network connectivity required between


Source Data Mover
Destination Data Mover

y For remote replication, network connectivity required between


Source Control Station and destination Control Station
Source Data Mover and destination Data Mover
2006 EMC Corporation. All rights reserved.

Celerra Replicator - 5

The Production File System must be mounted before you can begin replication. For a local replication,
the source and destination Data Mover must be appropriately configured for the network (IP addresses
configured). For a remote replication, the IP addresses must be configured for the primary and
secondary Data Movers. IP connectivity must also exist between the primary Control Station and the
remote Control Station.
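For illustration, a commonly used way to create and mount the rawfs destination is sketched below; the samesize= option, the pool name, and the mountpoint are assumptions and should be checked against the nas_fs and server_mount man pages for your release:

# create a rawfs destination the same size as the source file system
nas_fs -name dst_fs -type rawfs -create samesize=src_fs pool=clar_r5_performance

# mount it read-only on the secondary Data Mover
server_mountpoint server_3 -create /dst_fs
server_mount server_3 -o ro dst_fs /dst_fs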

Celerra Replicator

-5

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Replicator Local Replication

1. Network clients read and write to the source
2. Source and destination synchronized using fs_copy
3. Source block changes copied to the SavVol
4. Playback service updates the destination file system
5. File system exported as read-only

[Figure: within one Celerra, the primary Data Mover hosts the source file system and the secondary Data Mover hosts the destination file system; changed blocks flow from the source through the shared SavVol to the destination]

2006 EMC Corporation. All rights reserved.

Celerra Replicator - 6

Local replication produces a read-only copy of the source file system for use by a Data Mover in the
same Celerra cabinet. The primary Data Mover services reads and writes from the network clients
while the secondary Data Mover exports the read-only replica of the source file system. Local
replication can occur within the same Data Mover (called loopback) or different Data Movers.
This slide shows the process of local replication.
1. Throughout the process, network clients read and write to the source file systems through the
primary Data Mover without interruption.
2. For the initial replication start, the source and destination file systems are manually synchronized
using the fs_copy command.
3. After synchronization, the addresses of all block modifications made to the source file system are
used by the replication service to create one or more delta sets, by copying the modified blocks to
the SavVol shared by the primary and secondary Data Movers.
4. The local replication playback service periodically reads any available, complete delta sets and
updates the destination file system, making it consistent with the source file system. During this
time, all subsequent changes made to the source file system are tracked.
5. The secondary Data Mover exports the read-only copy to use for content distribution, backup, and
application testing.

Celerra Replicator

-6

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Replicator Remote Replication

1. Network clients read and write to the source
2. Source and destination synchronized using fs_copy
3. Source block changes copied to the primary SavVol
4. Replication transfers complete delta sets to the remote SavVol
5. Playback service on the remote site
6. File system exported as read-only

[Figure: the primary Celerra site holds the source file system and its SavVol; delta sets are transferred over IP to the remote Celerra site's SavVol and played back into the destination file system on the secondary Data Mover]

2006 EMC Corporation. All rights reserved.

Celerra Replicator - 7

Remote replication creates and periodically updates a read-only copy of a source file system at a
remote site. This is done by transferring changes made to a production file system (source) at a
local site to file system replica (destination) at the remote site over an IP network. By default,
these transfers are automatic. However, you can initiate a manual update.
This slide shows the process of remote replication.
1. Throughout this process, network clients read and write to the source file systems through a Data
Mover at the primary site without interruption.
2. For the initial replication process to start, the source and destination file systems are manually
synchronized using the fs_copy command.
3. The addresses of all subsequent block modifications made to the source file system are used by the
replication service to create one or more delta sets. Remote replication creates a delta set by
copying the modified blocks to the SavVol at the primary site.
4. Remote replication transfers any available, complete delta sets (which includes the block
addresses) to the remote SavVol. During this time, the system tracks subsequent changes made to
the source file system on the primary site.
5. At the remote site, the replication service continually plays back any available, complete delta sets
to the destination file system, making it consistent with the source file system.
6. The Data Mover at the destination site exports the read-only copy for content distribution, backup,
and application testing. This optional step is done manually.
Celerra Replicator

-7

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Replicator Replication Failover

y Replication stops
y Applies outstanding delta sets to the destination file system
 If the primary side is still available, existing delta sets may be replicated to the destination
 The remote system may play back all or none of the delta sets
y Fails over to the remote site
 Enables read/write access to the destination file system
 Clients may reconnect

[Figure: primary Celerra site (primary Data Mover, source file system, SavVol) and remote Celerra site (secondary Data Mover, destination file system, remote SavVol); after failover the destination becomes read/write]

2006 EMC Corporation. All rights reserved.

Celerra Replicator - 8

If the primary file system becomes unavailable, usually as the result of a disaster, you can make the
destination file system read/write. After the primary site is again available, you can then restore
replication to become read/write at the primary site and read-only at the remote site.
Failover breaks the replication relationship between the source and destination file system, and
changes the destination file system from read-only to read/write.
The system plays back the outstanding delta sets on the destination file system according to
instructions issued by the user. The system can play back either all or none of the delta sets. The
system then stops replication in a way that allows it to be restarted later.
The system also fails over to the remote site, enabling read/write access to the destination file system
from the network clients. If the source is online, it becomes read-only.
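For orientation only: failover is driven with the fs_replicate command from the surviving (destination) side, along the lines of the hedged sketch below; the exact argument order and playback options vary by release and should be taken from the fs_replicate man page rather than from this example:

# initiate failover of the replication relationship (run at the remote site)
fs_replicate -failover src_fs:cel=cel_primary dst_fs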

Celerra Replicator

-8

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Replicator - resync

y Source file system is populated with changes made to the destination while the sites were in a failover condition
y Allows all new delta sets to be sent back to the primary
y Replication runs in the reverse direction

[Figure: primary Celerra site (primary Data Mover, source file system, SavVol) and remote Celerra site (secondary Data Mover, destination file system, remote SavVol); during resync, delta sets flow from the remote site back to the primary site]

Celerra Replicator - 9

2006 EMC Corporation. All rights reserved.

When the original source file system becomes available, the replication relationship can be
reestablished.
The fs_replicate -resync option is used to populate the source file system with the changes made to the destination file system while the sites were in a failover condition. The direction of the replication process is reversed. The remote file system is read/write and the source file system is read-only.
Autofullcopy=yes will ensure that a full copy of the data from the source to the remote site takes place. Without the autofullcopy=yes option, an incremental copy will occur. If the standard fs_replicate -resync fails, the user will be prompted to run it again using the autofullcopy=yes option.
Notes:
This is run from the remote site and can take a considerable amount of time.
If you think that a resynchronization may not be successful, you can execute a full copy of the source file system by using the autofullcopy=yes option with the fs_replicate -resync command.
Example: fs_replicate -resync fs1:cel=eng172158 fs2 -o autofullcopy=yes

Celerra Replicator

-9

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Replicator - reverse

y The direction of replication is reversed
y The primary site again accepts source updates from network clients

[Figure: primary Celerra site (primary Data Mover, source file system, SavVol) and remote Celerra site (secondary Data Mover, destination file system, remote SavVol); replication once again flows from primary to remote]

2006 EMC Corporation. All rights reserved.

Celerra Replicator - 10

During reverse, the direction of replication is reversed (as it was before the failover). The primary site
again accepts the source file system updates from the network clients, then the replication service
transfers them to the remote site for playback to the destination file system. During the reversal phase,
both the source and destination file systems are temporarily set as read-only.

Note:
A reverse requires both the primary and remote sites to be available and results in no data loss.
During the reversal phase, both the source and destination file systems are temporarily set at read-only.
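The reversal itself is also an fs_replicate operation; a hedged sketch with hypothetical names (verify the option name and argument order in the fs_replicate man page):

# reverse the direction of replication once both sites are available again
fs_replicate -reverse src_fs dst_fs:cel=cel_remote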

Celerra Replicator

- 10

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Replicator - suspend

y Allows you to temporarily stop replication
y Ensures all delta sets are sent to the destination side
y Plays back all delta sets
y Creates a checkpoint on the source side

[Figure: primary Celerra site (primary Data Mover, source file system, SavVol) and remote Celerra site (secondary Data Mover, destination file system, remote SavVol)]

Celerra Replicator - 11

2006 EMC Corporation. All rights reserved.

Using the suspend and restart options for a replication relationship allows you to temporarily stop
replication, perform some action, and then restart the replication relationship using an incremental
rather than a full data copy.
Stopping and restarting replication can be useful for the following:
y Change the size of the replication SavVol. During replication, the size of a SavVol may need to be changed because the SavVol is too large or too small.
y Mount the replication source or destination file system on a different Data Mover.
y Change the IP addresses or interfaces that replication is using.
Suspend is an option that allows you to stop an active replication relationship and leave replication in a
condition that allows it to be restarted. When suspending a replication relationship, the system:
y Ensures all the delta sets have been transferred to the destination site.
y Plays back all the outstanding delta sets.
y Creates a checkpoint on the source site, which is used to restart replication.
Example:
fs_replicate -suspend <srcfs> <dstfs>:cel=<cel_name>

Celerra Replicator

- 11

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Replicator - restart

y Allows you to start replication again
y Verifies that a previous suspend occurred
y Checks that both sides have the same size file systems
y Starts the process with a differential copy using a checkpoint of the source file system

[Figure: primary Celerra site (primary Data Mover, source file system, SavVol) and remote Celerra site (secondary Data Mover, destination file system, remote SavVol)]

Celerra Replicator - 12

2006 EMC Corporation. All rights reserved.

After you suspend a replication relationship using the -suspend option, only the -restart option can be
used to restart it. This command verifies that replication is in a condition that allows a restart. It begins
the process with a differential copy using a checkpoint of the source file system.
The restart checks to see if a suspend has occurred, and if it has, it uses the suspend checkpoint to
incrementally restart the replication process.
Example:
fs_replicate -restart <srcfs> <dstfs>:cel=<cel_name>

Celerra Replicator

- 12

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Replicator - abort

y Replication may be stopped or aborted when you no longer want to keep the destination synchronized
y May also occur when the file systems have fallen out of sync, making it necessary to restart replication

[Figure: primary Celerra site (primary Data Mover, source file system, SavVol) and remote Celerra site (secondary Data Mover, destination file system, remote SavVol)]

2006 EMC Corporation. All rights reserved.

Celerra Replicator - 13

Typically, you need to abort Celerra Replicator when:


y You no longer want to replicate the file system.
y Your source and destination file systems have fallen out of synchronization and you want to end
replication.
Note: Aborting replication does not delete the underlying file systems.

Celerra Replicator

- 13

Copyright 2006 EMC Corporation. All Rights Reserved.

Replication Policies
y The frequency at which delta sets are created and played back is determined by two policies:
 Time out
 Primary site - time interval at which replication automatically generates a delta set
 Remote site - time interval at which the playback service automatically plays back all available delta sets to the destination file system
 High Watermark
 Primary site - the point (in MB) at which the replication service automatically creates a delta set on the SavVol from the changes accumulated since the last delta set
 Remote site - the point (in MB) at which the replication service automatically plays back all available delta sets to the destination file system

2006 EMC Corporation. All rights reserved.

Celerra Replicator - 14

The delta set contains the block modifications made to the source and is used by the replication service
to synchronize the destination with the source. The amount of information within the delta set is based
on the activity of the source and how you set the time-out and high watermark replication policies.
The minimum delta set size is 128MB. The replication service is triggered by either the time-out or
the high watermark policy, whichever is reached first.
Time-out
At the primary site, the time-out is the time interval at which the replication service automatically
generates a delta set. At the remote site, the time-out value is the time interval at which the playback
service automatically plays back all available delta sets to the destination file system. At both sites,
the default time-out value is 600 seconds. A value of 0 indicates that there is never a time-out, and
pauses the replication activities.
High Watermark
At the primary site, the high watermark indicates the size of the file system changes (in MBs)
accumulated since the last delta set. The replication service automatically creates a delta set on the
SavVol. At the remote site, the high watermark represents the size (in MBs) of the delta sets present
on the secondary SavVol. The replication service automatically plays back all available delta sets to
the destination file system. At both sites, the default high watermark is 600 MB. A value of 0 pauses
the replication activities and disables this policy.
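As a minimal sketch of adjusting these policies on a running replication (the hwm value follows the -modify example later in this module; the to value for the time-out interval is an assumption to be confirmed in the fs_replicate man page):
fs_replicate -modify src_fs -option hwm=300
fs_replicate -modify src_fs -option to=300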

Celerra Replicator

- 14

Copyright 2006 EMC Corporation. All Rights Reserved.

SavVol Size Requirements


y SavVol size considerations
Source file system size
Update frequency at source
Network bandwidth available between source and destination
Evaluate risk tolerance to network outages

y Increasing the size of the SavVol


Before starting replication, if the default size of 10% of the source file
system is insufficient
For a particular file system, when starting a replication
After replication has started, if the SavVol no longer meets your
requirements

2006 EMC Corporation. All rights reserved.

Celerra Replicator - 15

By default, the size of the SavVol is 10% of the size of the source file system. The minimum SavVol
size is 1 GB and the maximum is 500 GB.
This may not be sufficient for your particular replication. Consider the size of the source file system
and, more importantly, the frequency of changes to the source. High write activity could indicate the need
for a larger SavVol. Also consider the network bandwidth available between the source and destination
Celerras. If the rate of change on the source file system is continuously greater than the available
network bandwidth, the replication service will not be able to transfer data quickly enough and will
eventually become inactive. Lastly, evaluate the risk tolerance to network outages. For example, if
the network experiences long outages, check if the primary SavVol will accommodate the necessary
delta sets.
If you determine that the default SavVol size is insufficient, it can be changed before replication starts,
when starting replication, or after replication is running.
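For example, a larger SavVol can be requested when replication is started (a sketch based on the savsize example shown later in this module; the value is in MB and the names are illustrative):
fs_replicate -start src_fs dest_fs savsize=20000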

Celerra Replicator

- 15

Copyright 2006 EMC Corporation. All Rights Reserved.

Configuring Local Replication


Overview
[Process flow: Create ckpt -> Create dest. fs -> Copy ckpt -> Start Repl. -> Create ckpt -> Copy data -> Check status]

1. Create a checkpoint of source file


system
2. Create destination file system
Create as rawfs

3. Copy the checkpoint to destination


4. Start Replication
5. Create a second checkpoint of source
file system
6. Copy changes between first and second
checkpoint
7. Check replication status

2006 EMC Corporation. All rights reserved.

Celerra Replicator - 16

This slide provides an overview of the local replication process.


Note: The following slides will show examples. Please refer to man pages for command details.
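For reference, the local replication steps on the following slides reduce to a command sequence like this (a sketch assembled from those examples; file system and checkpoint names are illustrative):
fs_ckpt src_fs -Create
nas_fs -name dest_fs -type rawfs -create samesize=src_fs pool=clar_r5_performance -option slice=y
fs_copy -start src_ckpt_01 dest_fs -option convert=no
fs_replicate -start src_fs dest_fs
fs_ckpt src_fs -Create
fs_copy -start src_ckpt_02 dest_fs -fromfs src_ckpt_01
fs_replicate -list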

Celerra Replicator

- 16

Copyright 2006 EMC Corporation. All Rights Reserved.

Local Replication - Create Checkpoint


1. Create SnapSure checkpoint as baseline to be copied to destination file system

[Diagram: source file system (R/W) with checkpoint ckpt_01]

y Example:
fs_ckpt src_fs -Create

2006 EMC Corporation. All rights reserved.

Celerra Replicator - 17

A SnapSure checkpoint is used as the baseline of data to be copied to the destination file system.
Command
y fs_ckpt <fs_name> -Create
Example
y fs_ckpt local_src -Create

Celerra Replicator

- 17

Copyright 2006 EMC Corporation. All Rights Reserved.

Local Replication - Create Destination File System


2. Create or use existing destination file system
Exactly the same size as source, type rawfs; mount it as read-only on the secondary Data Mover

[Diagram: source file system (R/W) with checkpoint ckpt_01; destination file system (rawfs, RO)]

y Example:
nas_fs -name dest_fs -type rawfs -create
samesize=src_fs pool=clar_r5_performance
-option slice=y
2006 EMC Corporation. All rights reserved.

Celerra Replicator - 18

Since replication has to start on a rawfs file system, the destination file system has to be created as
rawfs. It is later converted to uxfs. The destination must be the same size as the source.
To force the file system from uxfs back to rawfs, issue the new nas_fs -T rawfs <filesystem>
-Force command.
The procedure shown here assumes your source file system was created on a slice volume, or by using
AVM. If your source file system was created on a disk or a set of concatenated disks, refer to
Using Celerra Replicator for details. You should use the CLI to create the destination file
system, since Celerra Manager cannot create a rawfs type file system.
The destination file system can be created in two ways:
1. nas_fs -name <name> -type rawfs -create samesize=<local_src> pool=<x> -option slice=y
2. By manually creating a volume of the correct size, creating a metavolume, and then creating the
file system (as rawfs)
After you create the destination file system, mount it as read-only on the secondary Data Mover.
The samesize option -- The samesize option ensures that the file systems on both sides are
identical in size.
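A minimal sketch of mounting the destination read-only on the secondary Data Mover (server_3 and the mount point /dest_fs are illustrative assumptions):
server_mountpoint server_3 -create /dest_fs
server_mount server_3 -option ro dest_fs /dest_fs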

Celerra Replicator

- 18

Copyright 2006 EMC Corporation. All Rights Reserved.

Local Replication - Copy Checkpoint to Destination file system


3. Copy checkpoint of source to destination to create baseline

[Diagram: checkpoint ckpt_01 copied from the source file system (R/W) to the destination file system (rawfs, RO)]

y Example:
fs_copy -start src_ckpt_01 dest_fs
-option convert=no
2006 EMC Corporation. All rights reserved.

Celerra Replicator - 19

Copy the checkpoint of the source file system to the destination file system to create a baseline. This
copy will be updated incrementally with changes that occur to the source file system. You do this once
per file system to be replicated. The checkpoint must be copied without converting it to uxfs by using
the convert=no option.
To copy a checkpoint to the destination file system:
fs_copy -start <local_ckpt> <dstfs> -option convert=no

Celerra Replicator

- 19

Copyright 2006 EMC Corporation. All Rights Reserved.

Local Replication - Start Replication


4. Start replication and begin logging changes to the source file system

[Diagram: source file system (R/W) with checkpoint ckpt_01 and SavVol; destination file system (rawfs, RO)]

y Example:

fs_replicate -start src_fs dest_fs

2006 EMC Corporation. All rights reserved.

Celerra Replicator - 20

When you start replication, the system verifies that the primary and secondary Data Movers can
communicate with each other. Changes made to the source file system begin to get logged. You start
the process once per file system to be replicated. The default is 600 MB for high watermark, and 600
seconds for time-out.
To start replication for the first time:
fs_replicate -start <source file system> <destination file system>

Celerra Replicator

- 20

Copyright 2006 EMC Corporation. All Rights Reserved.

Local Replication - Create second checkpoint


5. Create a second checkpoint on the source file system

[Diagram: source file system (R/W) with checkpoints ckpt_01 and ckpt_02 and SavVol; destination file system (rawfs, RO)]

y Example:

fs_ckpt src_fs -Create


2006 EMC Corporation. All rights reserved.

Celerra Replicator - 21

Next, you create a second checkpoint that is compared to the initial checkpoint. The changes between
the two checkpoints will be copied to the destination file system in the next step.
To create a SnapSure checkpoint of the source file system:
fs_ckpt <fs_name> -Create

Celerra Replicator

- 21

Copyright 2006 EMC Corporation. All Rights Reserved.

Local Replication - Copy Incremental Changes


6. Copy incremental changes between the first and second checkpoints to the destination

[Diagram: source file system (R/W) with checkpoints ckpt_01 and ckpt_02 and SavVol; destination file system (RO)]

y Example:

fs_copy -start src_ckpt_02 dest_fs
-fromfs src_ckpt_01
2006 EMC Corporation. All rights reserved.

Celerra Replicator - 22

Next, copy the incremental changes that exist between the two checkpoints to the destination file
system.
Command:
y fs_copy -start <new_checkpoint> <dstfs> -fromfs
<previous_checkpoint> -option <options>
Where
y <new_checkpoint> is the last checkpoint taken
y <dstfs> is the destination file system
y <previous_checkpoint> is the first checkpoint taken

Celerra Replicator

- 22

Copyright 2006 EMC Corporation. All Rights Reserved.

Local Replication - Check replication status


7. Monitor the status of the replication

[Diagram: source file system (R/W); destination file system (rawfs, RO) with SavVol]

y Example:
fs_replicate -list
fs_replicate -info src_fs -v
2006 EMC Corporation. All rights reserved.

Celerra Replicator - 23

To list the current replications that are running and check their status:
fs_replicate -list

Celerra Replicator

- 23

Copyright 2006 EMC Corporation. All Rights Reserved.

Configuring Remote Replication


[Process flow: Comm -> Verify link -> Source ckpt -> Create Dest fs -> Copy ckpt -> Start Repl. -> Second ckpt -> Copy changes -> Check status]
2006 EMC Corporation. All rights reserved.

Process is nearly identical to Local Copy


1. Establish communication between
primary and remote sites
2. Verify communications link
3. Create checkpoint of source
4. Create destination file system on remote
Create as rawfs

5. Copy baseline checkpoint to destination


file system
6. Start replication
7. Create second checkpoint of source
8. Copy incremental changes
9. Check replication status
Celerra Replicator - 24

This slide provides an overview of the remote replication process. Details will follow.
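For reference, the remote replication steps on the following slides reduce to a command sequence like this (a sketch assembled from those examples; Celerra names, IP addresses, and file system names are illustrative, and the trust relationship must be initialized from both Control Stations):
[primary site]# /nas/sbin/nas_rdf -init eng16864 172.24.168.64
[remote site]# /nas/sbin/nas_rdf -init eng16857 172.24.168.57
nas_cel -list
fs_ckpt src_fs -Create
nas_fs -name dest_fs -type rawfs -create samesize=src_fs:cel=eng12345 pool=clar_r5_performance
fs_copy -start src_ckpt1 dest_fs:cel=eng16864 -option convert=no
fs_replicate -start src_fs dest_fs:cel=eng16864
fs_ckpt src_fs -Create
fs_copy -start src_ckpt_02 dest_fs:cel=eng16864 -fromfs src_ckpt_01 -option monitor=off
fs_replicate -list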

Celerra Replicator

- 24

Copyright 2006 EMC Corporation. All Rights Reserved.

Configuring Remote Replication


Establish communication
Creates a Replication user and passphrase
Performed as root user
A trust relationship must be established between Control Stations at Source and Destination sites
Performed from both primary and secondary sites

Example:
[primary site]# /nas/sbin/nas_rdf -init eng16864 172.24.168.64
[remote site]# /nas/sbin/nas_rdf -init eng16857 172.24.168.57
Celerra Replicator - 25

2006 EMC Corporation. All rights reserved.

At both the primary and remote sites, you must establish a trust relationship that enables HTTP
communications between the primary and remote Celerra. This trust relationship is built on a
passphrase set on the Control Stations of both Celerras. The passphrase is stored in clear text and is
used to generate a ticket for Celerra-to-Celerra communication. The time on the primary and remote
Control Stations must be synchronized.
Note: To establish communication, you must have root privileges and each site must be active and
configured for external communications.
Command:
y [primary site]# /nas/sbin/nas_rdf -init <cel_name_of_remote_site_CS>
<ip_address_of_remote_CS_interface>
You are prompted for the following to establish a user login account:
y login
y password
y passphrase (must be the same on both sides)
Note: this trusted relationship may also be established using the command:
nas_cel -create.
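A minimal sketch of the nas_cel alternative (the -ip and -passphrase option names are assumptions; confirm them in the nas_cel man page):
nas_cel -create eng16864 -ip 172.24.168.64 -passphrase nasadmin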

Celerra Replicator

- 25

Copyright 2006 EMC Corporation. All Rights Reserved.

Configuring Remote Replication


Verify communication and create checkpoint
y Verify that the primary and remote sites can communicate and create a checkpoint
y Examples:
nas_cel -list
fs_ckpt src_fs -Create
2006 EMC Corporation. All rights reserved.

Celerra Replicator - 26

Check whether the primary and remote Celerras can communicate.


nas_cel -list
A SnapSure checkpoint will be used as the baseline of data to be copied to the destination file system.
fs_ckpt <fs_name> -Create
Where fs_name is the name of the file system for which a checkpoint is created

Celerra Replicator

- 26

Copyright 2006 EMC Corporation. All Rights Reserved.

Configuring Remote Replication


Create Destination file system and copy checkpoint
y Create the destination file system and copy the checkpoint of the source to the destination
y Examples:
nas_fs -name dest_fs -type rawfs -create samesize=src_fs:cel=eng12345 pool=clar_r5_performance
fs_copy -start src_ckpt1 dest_fs:cel=eng16864 -option convert=no
2006 EMC Corporation. All rights reserved.

Celerra Replicator - 27

The destination file system must be created as rawfs, and must be the same size as the source file
system. This can be accomplished using the samesize option (new in 5.3). The samesize option
ensures that the file systems on both sides are identical in size.
To create the destination file system:
nas_fs -name <dstfs> -type rawfs -create samesize=srcfs:cel=eng123 pool=clar_r5_performance

The entire checkpoint of the source is then copied to the destination file system just created. This
creates a baseline copy of the source on the destination. This copy will be updated incrementally with
changes that occur to the source file system. You do this once per file system to be replicated.
To copy a checkpoint to the destination file system,
fs_copy -start <srcfs> <dstfs>:cel=<cel_name> -option convert=no

Where:
srcfs is the source file system checkpoint
dstfs is the destination file system
cel_name is the remote Celerra
Example:
fs_copy -start src_ckpt1 dest:cel=eng16864 -option convert=no

Celerra Replicator

- 27

Copyright 2006 EMC Corporation. All Rights Reserved.

Configuring Remote Replication


Start Replication
y Start Replication
Verifies Data Mover communication
Starts replication
Begins logging changes made to source

y Example:
fs_replicate -start src_fs dest_fs:cel=eng16864
Celerra Replicator - 28

2006 EMC Corporation. All rights reserved.

When you start replication, the system verifies that the primary and secondary Data Movers can communicate,
starts replication, and begins logging changes made to the source. The default is 600 MB for high water mark
and 600 seconds for time-out. The first replication policy creates the delta set.
To start replication:
fs_replicate -start <srcfs> <dstfs>:cel=<cel_name> savsize=<MB>
Example:
fs_replicate -start src dest:cel=eng16864 savsize=20000
When using the fs_replicate -modify option, the values become effective the next time a trigger for
these policies is reached. For example, if the current policies are changed from 600 for the high watermark and
time-out interval to 300, the next time replication reaches 600, the trigger is changed to 300.
Example: fs_replicate -modify src -option hwm=300
When using the refresh option, the execution of the command either starts the generation of a delta set or a
playback of a delta set. When that operation completes, the time-out interval or high watermark changes. The
difference between the two is that the new modify option does not create a delta set or attempt a playback.
Refreshing a replication creates a delta set on the source site and plays back an outstanding delta set on
the destination site as if the next high watermark or time-out interval was reached.
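A minimal sketch of forcing an immediate delta-set generation and playback with the refresh option (the exact invocation is an assumption; confirm it in the fs_replicate man page):
fs_replicate -refresh src_fs dest_fs:cel=eng16864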

Celerra Replicator

- 28

Copyright 2006 EMC Corporation. All Rights Reserved.

Configuring Remote Replication


Create second checkpoint and copy changes
y Create a second checkpoint and copy incremental changes to the destination file system
y Example:
fs_ckpt src -Create
fs_copy -start src_ckpt_02 dest:cel=eng16864 -fromfs src_ckpt_01 -option monitor=off
2006 EMC Corporation. All rights reserved.

Celerra Replicator - 29

To create a second checkpoint:


fs_ckpt <fs_name> -Create
Example:
fs_ckpt src -Create
To copy the incremental changes that exist between the two checkpoints to the destination file system:
fs_copy -start <new_check_point> <destfs>:cel=<cel_name>
-fromfs <previous_check_point> -option <options>
Where:
new_check_point is the last checkpoint taken
dstfs is the destination file system
cel_name is the Celerra where the destination file system resides
previous_check_point is the first checkpoint taken

Celerra Replicator

- 29

Copyright 2006 EMC Corporation. All Rights Reserved.

Configuring Remote Replication


Check replication status
y List current replications and check their status
y Example:
fs_replicate -list
fs_replicate -info src_fs -v
2006 EMC Corporation. All rights reserved.

Celerra Replicator - 30

To list the current replications that are running and check their status:
fs_replicate -list

Celerra Replicator

- 30

Copyright 2006 EMC Corporation. All Rights Reserved.

Replication via Celerra Manager

Right Click Replications > Select Available Destinations > Select New
Destination
2006 EMC Corporation. All rights reserved.

Celerra Replicator - 31

GUI designed for easier replication and management


Both Local and Remote replication can be done via Celerra Manager. The above example is a local
replication.
Celerra Manager will automatically create the secondary file system as type rawfs. It will also
automatically create the mountpoint and mount the file system to server_x as read-only.

Note:
The destination file system needs to be set up as primary since it may be defined as a standby
You also need to set up an interface for the destination Data Mover.

Celerra Replicator

- 31

Copyright 2006 EMC Corporation. All Rights Reserved.

Replication via Celerra Manager

Select User File System > Click Continue


2006 EMC Corporation. All rights reserved.

Celerra Replicator - 32

Celerra Replicator

- 32

Copyright 2006 EMC Corporation. All Rights Reserved.

Replication via Celerra Manager

2006 EMC Corporation. All rights reserved.

Celerra Replicator - 33

Select Source and Destination Data Movers.


Select Source and Destination File System.

Celerra Replicator

- 33

Copyright 2006 EMC Corporation. All Rights Reserved.

Replication via Celerra Manager

2006 EMC Corporation. All rights reserved.

Celerra Replicator - 34

The destination file system is created as rawfs (ro) and unmounted.

Celerra Replicator

- 34

Copyright 2006 EMC Corporation. All Rights Reserved.

Replication via Celerra Manager

2006 EMC Corporation. All rights reserved.

Celerra Replicator - 35

After the destination file system is created, we can begin replication.

Celerra Replicator

- 35

Copyright 2006 EMC Corporation. All Rights Reserved.

Replication via Celerra Manager

2006 EMC Corporation. All rights reserved.

Celerra Replicator - 36

Select the Replication Option and Type.

Celerra Replicator

- 36

Copyright 2006 EMC Corporation. All Rights Reserved.

Start the Replication Service

2006 EMC Corporation. All rights reserved.

Celerra Replicator - 37

Check your Source and Destination File Systems, as well as Time Out and High Water Marks.

Celerra Replicator

- 37

Copyright 2006 EMC Corporation. All Rights Reserved.

Task Status

2006 EMC Corporation. All rights reserved.

Celerra Replicator - 38

Click on Task Status, and Right Click on the task for an update.

Celerra Replicator

- 38

Copyright 2006 EMC Corporation. All Rights Reserved.

Initiating Replication Failover


y Initiate a failover if the primary site is unavailable
Source FS changed to Read Only
Destination changed to Read Write

y Three failover options


Sync - Last delta set sent and played back
Now - Immediate failover with no playback of delta sets
Default - play back any delta sets on destination SavVol

y Example:
[remote_site] fs_replicate -failover
src:cel=eng16857 dest

2006 EMC Corporation. All rights reserved.

Celerra Replicator - 39

A failover is used if the primary site has experienced a disaster and is unavailable, or the source file
system is corrupted. The destination file system becomes read/write. If the primary site again
becomes available, you can resynchronize your file systems at the remote and primary sites and restart
replication.
To failover to a remote site:
[remote_site] fs_replicate -failover <srcfs>:cel=<cel_name> <destfs> -option <options>

Celerra Replicator

- 39

Copyright 2006 EMC Corporation. All Rights Reserved.

Replicate From the Destination to Source


y After a failover, attempt to resynchronize the source
and destination file systems
y First verify that the file system on the original primary site
is mounted read-only
y Example:
[remote_site] fs_replicate -resync
src:cel=eng16857 dest

Celerra Replicator - 40

2006 EMC Corporation. All rights reserved.

After failover, a checkpoint is created on the remote site and the destination file system becomes read/write.
New writes are then allowed on the destination file system. Before resynchronizing the file systems, verify that
the file system on the original primary site is available and mounted as read-only. To attempt to resynchronize
the source and destination file system and restart replication:
[remote_site]

fs_replicate -resync <srcfs>:[cel=<cel_name>] <dstfs>

The resync option attempts to incrementally resynchronize the source and destination file systems by examining
the changes in the SavVol. Reasons a resynch may not be possible are:
y You performed a failover and, after your primary site became available, continued to receive I/O to your
source file system.
y After you performed a failover, you decided to abort replication when the primary site became available
because the information was unusable.
y Your file system fell out of synchronization.
Note: Replication is now working in reverse order. Changes that occurred after the failover are copied to the
source site and replication is started again. If resynchronization is not possible, abort replication and restart
replication.
As previously mentioned, a new autocopy option is available with v5.3. Autofullcopy=yes will ensure
that a full copy of the data from the source to the remote site takes place. Without the autofullcopy=yes
option, an incremental copy will occur. If the standard fs_replicate -resync fails, the user will be
prompted to run it again using the new autofullcopy=yes option.
Note: This is run from the remote site and can take a considerable amount of time.
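A minimal sketch of retrying the resynchronization with a full copy (passing autofullcopy=yes as an -option value is an assumption; confirm it in the fs_replicate man page):
[remote_site] fs_replicate -resync src:cel=eng16857 dest -option autofullcopy=yes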

Celerra Replicator

- 40

Copyright 2006 EMC Corporation. All Rights Reserved.

Replicate Reverse
y Resumes normal replication
Source to Destination

y Changes:
Destination Read Only
Source Read Write

y Example:
[remote_site] fs_replicate -reverse
src:cel=eng16857 dest

2006 EMC Corporation. All rights reserved.

Celerra Replicator - 41

Reverse changes the replication direction. It requires both the primary and remote sites to be
available. The write activity on the destination file system is stopped, and any changes are applied to
the source file system before the primary site becomes read/write. Before you reverse the replication
direction, you should verify the direction of your replication process.
To initiate failback:
[remote_site]

fs_replicate -reverse <srcfs>:cel=<cel_name> <dstfs>

Celerra Replicator

- 41

Copyright 2006 EMC Corporation. All Rights Reserved.

Sequence is Important!
y When recovering from a failover, the order of execution is critical (see the sketch below):
1. fs_replicate -failover
2. fs_replicate -resync
3. fs_replicate -reverse
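A minimal sketch of the recovery sequence, run from the remote site with the names used earlier in this module (illustrative only):
[remote_site] fs_replicate -failover src:cel=eng16857 dest
[remote_site] fs_replicate -resync src:cel=eng16857 dest
[remote_site] fs_replicate -reverse src:cel=eng16857 dest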

2006 EMC Corporation. All rights reserved.

Celerra Replicator - 42

Celerra Replicator

- 42

Copyright 2006 EMC Corporation. All Rights Reserved.

CIFS Asynchronous Data Recovery Overview


y CIFS has complex configuration requirements
CIFS environment is dependent on much more than file systems
Configuration information (DNS, GPO, Name)
Shares
Credentials
Event logs

Successful client access requires replicating the environment, not


just the data

y Celerra Replicator and VDM support allow for an


Asynchronous Data Recovery solution for the Celerra

2006 EMC Corporation. All rights reserved.

Celerra Replicator - 43

Celerra Replicator includes support for failover and reverse, as well as Virtual Data Mover support.
Combining these features provides an asynchronous data recovery solution for CIFS servers and CIFS
file systems.
In a CIFS environment, in order to successfully access file systems on a remote secondary site, you
must replicate the entire CIFS working environment including local groups, user mapping information,
Kerberos, shares and event logs. You must replicate the production file systems attributes, access the
file system through the same UNC path, and find the previous CIFS servers attributes on the
secondary file system.
Since the release of v5.2 features, an asynchronous Data Recovery solution is possible. By following
the documented procedures, Data Mover clients can continue accessing data in the event of a failover
from the primary site to the secondary site.

Celerra Replicator

- 43

Copyright 2006 EMC Corporation. All Rights Reserved.

Setting up CIFS Replication Environment


y Set up IP infrastructure
Establish connection between control stations
Initialize interface connectivity

y Configure DNS and other Windows network services


Synchronize source and destination dates and times

y Configure user mapping Primary - Secondary


y Replicate the CIFS environment (VDM) from the primary to the
secondary side
y Replicate the data file systems
y Ensure that the network environment on the secondary side can
successfully accommodate a failover
y Monitor Data Mover, file system, and the replication process
2006 EMC Corporation. All rights reserved.

Celerra Replicator - 44

A connection must be established between the control stations to enable replication. Both the primary
and secondary sites must have interfaces configured with the same name. Use the server_ifconfig
command to configure interfaces. DNS resolution is required on both the primary and secondary sites.
The time must be synchronized between the two sites and the domain controllers at each site.
Some method of mapping Windows users to UIDs and GIDs is required, for example, Internal Usermapper.
File systems must be prepared. First, determine the space required at both the primary and secondary
sites. Then create volumes and file systems to accommodate the size requirements.
Next, VDMs are created. A primary VDM is created in the loaded state. The secondary VDM is
created in a mounted state. This read-only state is used on the secondary side when replicating a VDM.
It cannot be actively managed, and receives updates from the primary during replication.
Finally, create data file systems on the primary and secondary sites and mount file systems to the
VDM.
In the steady-state CIFS environment, all data is replicated from the primary to the secondary, and all
the daily management changes are automatically replicated. Successful access to CIFS servers, when
failed over, depends on the customer taking adequate actions to maintain DNS, Active Directory, user
mappings, and network support of the data recovery site. Celerra depends on those components for
successful failover.
Monitor the Data Mover, file systems and the replication process.

Celerra Replicator

- 44

Copyright 2006 EMC Corporation. All Rights Reserved.

Restrictions
y Celerra Data Migration Service (CDMS) not supported
y MPFS on destination file system not supported
y fs_replicate -failover, -resync, and
-reverse options not supported for local or loopback
replication
y A TimeFinder BCV cannot be a source or destination file
system for replication
y The Primary CS manages replication; standby CS not
used by replication
If primary CS fails over to standby CS, replication service continues
to run, but replication management capabilities not available
2006 EMC Corporation. All rights reserved.

Celerra Replicator - 45

Celerra Replicator

- 45

Copyright 2006 EMC Corporation. All Rights Reserved.

Configuration Considerations
y Avoid allowing the replication service to create delta sets
faster than it can copy them to the destination
y Examine network bandwidth
y At the beginning of the delta set playback for CIFS, there
is a temporary freeze/thaw period that may cause a
network disconnect
y Evaluate the remote site for the following and determine if
Virtual Data Movers can be used
Subnet addresses
NIS/DNS availability
Windows environment WINS, DNS, DC, BDC, share names,
availability of Usermapper or NTMigrate
2006 EMC Corporation. All rights reserved.

Celerra Replicator - 46

To avoid the source and destination file systems becoming out of synch, do not allow the replication
service to create delta sets faster than it can copy them to the destination file system. Set the delta set
creation replication policy to a higher number than the delta set playback number.
You need to determine if the network bandwidth can effectively transport changes to the remote site.
During the delta set playback on the destination file system, network clients can access the destination
file system. However, at the beginning of the delta set playback for CIFS clients, there is a temporary
freeze/thaw period that may cause a network disconnect. Therefore, do not set the replication policy to
a low number since this reduces the availability of the destination file system.
Lastly, evaluate the remote side for a compatible infrastructure. For example, DNS, NIS, WINS,
Domain Controllers, BDC, NT Migrate and Usermapper.

Celerra Replicator

- 46

Copyright 2006 EMC Corporation. All Rights Reserved.

Troubleshooting
y Do not change source or destination IP addresses once
replication has started
y Network Connectivity
Verify duplex match
Routing issues

y If "resync is requested" in server_log on the source Data


Mover, adjust save volume and to/hwm options of
fs_replicate.
y Ensure both src and dest volumes are same size
y Ensure fs_copy uses checkpoint not the file system
2006 EMC Corporation. All rights reserved.

Celerra Replicator - 47

Troubleshooting Celerra Replicator


Network duplex mismatch causes replication performance problems. Check that duplex settings of
both the interface and the switch match before beginning the replication configuration.
Use server_netstat -i to check I/O of each physical port.
If you find "resync is requested" in the server_log of the source Data Mover, it means that replication is
not working. To recover from this situation, stop the replication service, and adjust the save
volume and the to/hwm options of fs_replicate. Example log entry:
1023982214: VRPL: 0: 2: srcvolsec:236 resync is requested
fs_replicate -modify can be used to set the destination-side policy back to the desired level without
cutting a new delta set as the refresh option does. You may also use the new start/suspend option.
The source and destination volume sizes should be identical. Otherwise, the fs_replicate -start
operation returns a "src size () is not equal to dest size ()" error. If nas_disk -list shows the same
volume size as the source Celerra, you can use it as is. If the nas_disk -list output differs
between the source volume size and destination volume size, you must use nas_slice to make
them identical. In addition, you must slice both sides to keep the block-level size identical.
If you get a "Read Only server not found for destination file system" error from the fs_copy -start
operation, ensure that you specified the checkpoint volume. In this example, the source file system was
used in error; it should be dest_ckpt1.

Celerra Replicator

- 47

Copyright 2006 EMC Corporation. All Rights Reserved.

Module Summary
Key points covered in this module are:
Celerra Replicator produces a read-only, point-in-time replica of a
source file system (local or remote)
Local replication produces a copy of the source file system within
the same Celerra cabinet
Remote replication produces a copy of the source file system in a
remote Celerra
A SavVol is used to store copied data blocks from the source
Failover changes the destination file system from read-only to read-write
Replication reverse will reverse the direction of replication causing
the source file system to become read-only, and the destination file
system to become read-write
2006 EMC Corporation. All rights reserved.

Celerra Replicator - 48

Celerra Replicator

- 48

Copyright 2006 EMC Corporation. All Rights Reserved.

Closing Slide

2006 EMC Corporation. All rights reserved.

Celerra Replicator - 49

Celerra Replicator

- 49

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra ICON
Celerra Training for Engineering

Celerra iSCSI

2006 EMC Corporation. All rights reserved.

Celerra iSCSI

-1

Copyright 2006 EMC Corporation. All Rights Reserved.

Revision History
Rev Number    Course Date      Revisions
1.0           February 2006    Complete
1.2           May 2006         NAS 5.5 enhancements

2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 2

Celerra iSCSI

-2

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra iSCSI Objectives


Upon completion of this module, you will be able to:
y Describe the concepts and terminologies used in the
iSCSI implementation on the Celerra

Targets and initiators


Methods of target discovery and why it is necessary
LUN masking
How the Celerra target authenticates the iSCSI initiator

y Using the CLI and or the Celerra Manager, configure


iSCSI targets and LUNs on a Data Mover

2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 3

Celerra iSCSI

-3

Copyright 2006 EMC Corporation. All Rights Reserved.

What is iSCSI?
Internet Small Computer System Interface (iSCSI)
[Diagram: a host attached to a target through a Fibre Channel SAN, compared with a host attached to an iSCSI target through an IP network]

2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 4

The concepts of iSCSI are very similar to those of a Storage Area Network. With iSCSI architecture,
host systems and storage devices communicate over an IP network, using TCP to provide a reliable
transport service. SCSI commands, data, and status are encapsulated and delivered between an initiator
and target using iSCSI protocol.
Traditionally, hosts were connected to storage either directly (cable attached) using the SCSI protocol,
or through a SAN using SCSI over Fibre Channel protocol (as shown on the slide). Again, the concept
of iSCSI is not much different. Instead of a Fibre Channel SAN, iSCSI encapsulates the SCSI protocol
within the TCP/IP protocol stack. This allows for any IP network to transport iSCSI communications
between host and storage.

Celerra iSCSI

-4

Copyright 2006 EMC Corporation. All Rights Reserved.

Why iSCSI?
y IP networks are extremely common and typically already
in place
Good base of expertise in designing and maintaining IP networks

y Infrastructure cost of an IP network is typically less than a


SAN infrastructure
Cost of NIC card vs FC HBA
IP Switch vs Fibre Channel Switch

y IP networks allow for very long distances


y Celerra iSCSI allows customers to consolidate both file
level and block level storage on the same platform

2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 5

iSCSI allows for host to storage connections over an IP network. IP networks are extremely common
and available for transporting SCSI communications between host and storage. SCSI is well understood, and has been
used for some time.
There is a lot of IT experience in building, maintaining, and tuning IP networks. This makes it easier
to implement the technology. For those organizations who do not have a SAN infrastructure in place,
the cost of an IP infrastructure is typically less. Moreover, IP networks are most likely present in the
organization already.
Lastly, while Fibre Channel has become the storage interconnect of choice of many data centers, a
large population of servers still exist for which the cost of Fibre Channel has been a barrier to
consolidation on a LAN. With iSCSI, organizations have a low-cost method for consolidating and
networking these previously stranded servers.
Standard Network Interface Cards can be used. For higher performance, a number of vendors offer
TOE (TCP/IP Offload Engine) interface cards that offload the TCP/IP processing from the CPU to the
interface card. These are very effective with iSCSI.

Celerra iSCSI

-5

Copyright 2006 EMC Corporation. All Rights Reserved.

iSCSI Terms and Concepts


y iSCSI Protocol stack
y Initiator and Target
y Network Portal and Portal Groups
y iSCSI naming
y LUNs and LUN Masking
y iSCSI Discovery
y iSCSI Authentication
y Virtually Provisioned LUNs

2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 6

In order to understand Celerra iSCSI implementation, it is necessary to define some of the terms and
concepts used. We will see that the concepts are very similar to Fibre Channel based storage area
networks. We will discuss each in the following slides.

Celerra iSCSI

-6

Copyright 2006 EMC Corporation. All Rights Reserved.

Initiators and Targets


[Diagram: an initiator on a host communicating over an IP network with a target on a Celerra]

2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 7

In any SCSI environment, there are two types of devices: Initiators and Targets. Initiators send
commands such as read or write, and targets respond.
In order to support iSCSI, each host system needs to run at least one iSCSI initiator. Typically, an
iSCSI initiator appears simply as any other SCSI initiator in the host system.
The storage system, in our case the Celerra Data Mover, is configured as an iSCSI target. An iSCSI
target provides logical units and supports SCSI protocol the same as any SCSI target does, but also
supports the iSCSI protocol. An iSCSI storage network, then, is a connection, via TCP, between one or
more iSCSI initiators and one or more iSCSI targets.
On start up, an initiator logs into a target and the LUNs associated with the target are made available to
the host system. An iSCSI LUN appears like a local SCSI disk drive to the host and using standard
SCSI protocol, the host communicates to the disk and can use it like any other disk: For example,
initialize it with a signature, create a partition, format a file system, and assign a drive letter.
EMC currently supports Windows and LINUX initiators. Check the support matrix for specific
requirements and restrictions.

Celerra iSCSI

-7

Copyright 2006 EMC Corporation. All Rights Reserved.

iSCSI Protocol Stack

[Diagram: a SCSI CDB wrapped in an iSCSI PDU, carried in a TCP packet inside an IP packet]

2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 8

The protocol model above shows the layers and protocols involved in a simple iSCSI storage network.
The iSCSI target provides both a SCSI protocol layer and an iSCSI protocol layer. SCSI Command
Descriptor Blocks (CDBs) come from the SCSI layer. An iSCSI wrapper is added and the SCSI CDB is
transmitted within an iSCSI PDU (Protocol Data Unit) across an IP network to the iSCSI target.
Communication between the iSCSI initiator and iSCSI target occurs over one or more TCP
connections. A group of TCP connections that link an iSCSI initiator and target form an iSCSI session.
On the target side, the iSCSI layer extracts the SCSI CDB from the iSCSI PDU. The iSCSI layer
presents the SCSI CDB to the SCSI layer for execution on the SCSI device. The SCSI response is then
transmitted back to the iSCSI initiator.
To the host the iSCSI storage device appears as a standard SCSI device. The operating system and
applications communicate with the device using standard SCSI commands. The fact that the transport
involves iSCSI and TCP/IP is transparent to the operating system and applications.

Celerra iSCSI

-8

Copyright 2006 EMC Corporation. All Rights Reserved.

Network Portals and Portal Groups

[Diagram: a Data Mover hosting Target 1 and Target 2 with Portal Groups 1, 2, and 3; network portals such as 10.168.0.111:3260, 172.24.81.12:3261, 172.24.81.13:3262, 192.168.0.12:3262, and 192.168.0.12:3269 are defined on the Data Mover network interfaces]

2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 9

Both initiators and targets use network interfaces, called network portals, to communicate over the
network. An initiator's network portal is defined by IP address, while the target's network portal is
defined by IP address and TCP port (default 3260). Therefore, target portals can share IP addresses as
long as each portal uses a unique TCP port. This slide shows how the network portals might be
defined on the target, or Celerra side.
A portal group is a collection of one or more network portals that are identified by a tag called the
portal group tag. Celerra requires portal groups, which are primarily used for session control.

Celerra iSCSI

-9

Copyright 2006 EMC Corporation. All Rights Reserved.

iSCSI Names
y All initiators and targets require a unique iSCSI identifier
y Two types of iSCSI names
iqn. iSCSI Qualified Name
iqn.1992-05.com.emc:apm000339013630000-10

eui. Extended Unique Identifier


eui.02004567a425678a

2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 10

All initiators and targets within an iSCSI network must be named with a unique worldwide iSCSI
identifier. There are two types of iSCSI names.
IQN iSCSI Qualified Name
To generate names of this type, the organization generating this name must own a registered domain
name. This domain name does not have to be active, and does not have to resolve to an address; it just
needs to be reserved to prevent others from generating iSCSI names using the same domain name. The
domain name must be additionally qualified by a date.
EUI Extended Unique Identifier
An EUI is a globally unique identifier based on the IEEE EUI-64 naming standard. These names are
formed by the eui prefix followed by a 16-character hexadecimal name. The 16-character part of the
name includes 24 bits for the company name assigned by IEEE, and 40 bits for a unique ID such as a
serial number.

Celerra iSCSI

- 10

Copyright 2006 EMC Corporation. All Rights Reserved.

LUNS and LUN Masking

[Diagram: a Celerra file system holding iSCSI LUN1 through LUN4; over the IP network, one host is granted access to LUN1 and LUN2 and another host is granted access to LUN3 and LUN4]

2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 11

A logical unit is an element on a storage device that interprets SCSI CDBs and executes SCSI
command such as reading from and writing to storage. Each Logical Unit has an address within a
target called a Logical Unit Number (LUN). As opposed to raw back-end SCSI device based LUNs, a
Celerra iSCSI LUN is a software feature that processes SCSI commands.
The size of an iSCSI LUN is limited to the maximum size of a Celerra file, currently 1TB.
In the iSCSI protocol, LUN masks are used to control access to LUNs on iSCSI targets. A LUN mask
is essentially a filter that controls which initiators have access to which LUNs on the target. If you
create a LUN mask that denies an initiator access to a specific LUN, that initiator cannot see or access
the LUN. The Celerra system supports iSCSI LUN masking based on the iSCSI names of the host
initiators. In this example, one host has been defined to have access to LUN1 and LUN2. The host on
the right has access to LUN3 and LUN4. Generally, only a single host is granted access to a LUN.
In addition, note there are two types of Celerra iSCSI LUNs:
y Production LUN: A logical unit that serves as a primary (or production) storage device.
y Snap LUN: A point-in-time representation (an iSCSI snap) of a PLU that has been promoted to
LUN status so that it can be accessed.
Note: It is strongly recommended that you only use LUNs 0 to 127 for Production LUNs. LUNs 128 to
254 are used by Celerra iSCSI host applications for promoting iSCSI snaps to Snap LUN status.

Celerra iSCSI

- 11

Copyright 2006 EMC Corporation. All Rights Reserved.

iSCSI Discovery
[Diagram: SendTargets discovery - an initiator uses the target portal 10.127.50.162:3260 over the IP network to query the target directly; iSNS - initiators and targets register their portals with an iSNS server, which initiators query for available targets]

2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 12

Before an initiator can establish a session with a target, the initiator must first discover where targets
are located and the names of the targets available to it. This information is gained through the process
of discovery. Discovery can occur in two ways:
SendTargetDiscovery
The initiator is manually configured with the target's network portal, which it uses to establish a
discovery session with the iSCSI service on the target. As shown, the initiator issues a
SendTargetDiscovery command. The target responds with the names and addresses for the targets
available to the host.
iSNS Internet Storage Name Service
iSNS enables automatic discovery of iSCSI devices on an IP network. You can configure initiators
and targets to automatically register themselves with the iSNS server. Then, whenever an initiator
wants to know what targets are accessible to it, the initiator queries the iSNS server for a list of
available targets.

Celerra iSCSI

- 12

Copyright 2006 EMC Corporation. All Rights Reserved.

SendTargetDiscovery
y On the Windows host, configure the initiator with the
target's Network Portal (IP Address and port)
y Used to establish
a discovery
session with the
iSCSI service on
the target

2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 13

In SendTargets discovery, you manually configure the initiator with a target's network portal (IP
address and port number), and then the initiator uses that network portal to establish a discovery
session with the iSCSI service on the target system.
During the discovery session, which takes place prior to the initiator logging in to a target, the initiator
tries to discover the names of the targets that are accessible through the portal. The initiator issues a
special command, the SendTargets command, to the iSCSI service on the target system. The iSCSI
service then responds with the names and addresses of all the targets that are available on the target
system. For a Celerra Network Server, the target system is an individual Data Mover; SendTargets
discovery does not discover targets on other Data Movers on the Celerra Network Server.

Celerra iSCSI

- 13

Copyright 2006 EMC Corporation. All Rights Reserved.

iSCSI Authentication

[Diagram: the target sends a CHAP Challenge to the initiator; the initiator returns a hash value of the challenge; the target replies with successful authentication]

2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 14

The Celerra supports CHAP (Challenge Handshake Authentication Protocol) authentication. This slide
shows how CHAP works.
Once an initiator discovers its targets, login occurs. During login, the target and initiator agree upon
operational parameters for the session. Optionally, authentication can be configured so that during
login the CHAP protocol can validate that an initiator and/or target is who it claims to be.
Authentication can be configured as one-way (initiator to target) or two-way (initiator to target and
target to initiator). The target sends a CHAP challenge message to the initiator.
y The initiator takes the shared secret, calculates a value using a one-way hash function, and returns
the hash value to the target.
y The target computes the expected hash value from the shared secret, and then compares the
expected value to the value received from the initiator. If the two values match, authentication is
acknowledged and the login process moves into the operational stage. If the two values do not
match the target immediately terminates the connection.

Celerra iSCSI

- 14

Copyright 2006 EMC Corporation. All Rights Reserved.

iSCSI Virtually Provisioned iSCSI LUNs


y NAS 5.5 enhancement, A.K.A. Sparse LUNs
y Allows the user to create iSCSI LUNs larger than the underlying file system space available
Allocates space on demand
Full LUN size is presented to host systems
Minimizes administrative tasks associated with LUN expansion
Assumption that typical environment over-provisions and underutilizes storage capacity
Works well with NAS 5.5 Auto-Extend file systems feature

[Diagram: LUN size seen by the host versus the smaller allocated file system space]

2006 EMC Corporation. All rights reserved.
Celerra iSCSI - 15

The new feature of iSCSI Virtual LUN Provisioning was originally known as Sparse LUN.
Virtual Provisioning allows a user to create Virtually Sized LUNs, which are reported to the clients
as larger than the underlying file system can actually hold.
To create a Virtually Provisioned LUN, an administrator must use the CLI interface on the Control
Station.
The maximum size of both regular iSCSI LUNs (A.K.A. Dense LUNs) and Virtual LUNs is 2 TB (minus 1 MB
for overhead).
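A minimal sketch of creating a Virtually Provisioned LUN from the Control Station CLI (based on the -vp option shown later in this module; the target, LUN number, size, and file system names are illustrative):
server_iscsi server_2 -lun -number 3 -create target1 -size 100000 -fs iscsi02 -vp yes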

Celerra iSCSI

- 15

Copyright 2006 EMC Corporation. All Rights Reserved.

Configuring iSCSI
y Configure the Celerra iSCSI target
Create and mount iSCSI file systems for iSCSI LUNs
Create iSCSI targets
Create iSCSI LUNs and LUN masks
Start the iSCSI service
Optionally, configure iSNS and CHAP on the Data Mover

y Configure the iSCSI Initiator
Register the initiator in the registry
Configure iSCSI discovery
Log into iSCSI target
Configure iSCSI drives

[Diagram: a Data Mover with backend storage, a file system holding VLUs, an iSCSI target, a portal group, and network portals on the network; a Windows (W2k) host running the iSCSI initiator]

2006 EMC Corporation. All rights reserved.
Celerra iSCSI - 16

This slide lists the configuration steps required to configure the iSCSI target (Celerra) and initiator.
The following slides will provide details of the target configuration on the Celerra. For information
regarding the configuration of Windows initiator, please refer to Installing Celerra iSCSI Host
Applications.
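For reference, the target-side steps on the following slides reduce to a command sequence like this (a sketch assembled from those examples; the file system creation, mount, and service-start commands are assumptions to be confirmed in the man pages, and all names are illustrative):
nas_fs -name iscsi02 -create size=100G pool=clar_r5_performance
server_mountpoint server_2 -create /iscsi02
server_mount server_2 iscsi02 /iscsi02
server_iscsi server_2 -target -alias target1 -create 1000:np=10.127.51.163
server_iscsi server_2 -lun -number 2 -create target1 -size 1000 -fs iscsi02
server_iscsi server_2 -mask -set target1 -initiator iqn.1991_05.com.microsoft:nas46.celerra6.emc.com -grant 2
server_iscsi server_2 -service -start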

Celerra iSCSI

- 16

Copyright 2006 EMC Corporation. All Rights Reserved.

Configuring iSCSI Targets


1. Create and mount one or more file systems to hold the iSCSI LUNs
2. Create one or more iSCSI targets on the Data Mover
3. Create one or more iSCSI LUNs on an iSCSI target
4. Permit iSCSI initiators to access specific LUNs by configuring a LUN mask for the target
5. (Optional) Configure the iSNS client on the Data Mover
6. (Optional) Enable authentication by creating CHAP entries for initiators and for the Data Mover
7. Start the iSCSI service on the Data Mover

2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 17

Celerra iSCSI

- 17

Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra Manager iSCSI Wizards

2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 18

Celerra Manager offers two iSCSI Wizards: Create an iSCSI LUN and Create an iSCSI Target.
Each wizard guides you through the process of configuring iSCSI support on the Celerra by creating
iSCSI LUNs and Targets. The following slides guide you through the process of configuring these
elements without the use of the wizards.
To use Celerra Manager to configure iSCSI, you must activate the iSCSI license.

Celerra iSCSI

- 18

Copyright 2006 EMC Corporation. All Rights Reserved.

Creating and Mounting an iSCSI File System


y Create and mount file system using CLI or Celerra Manager
y File system should be dedicated to iSCSI storage
y File system should be large enough to hold iSCSI LUNs and iSCSI Snap copies

2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 19

One or more file systems must be created to provide dedicated iSCSI storage.
A CIFS or NFS client can see the LUN itself and copy and delete the LUN; however, these clients
cannot make modifications to a LUN since the CIFS and NFS protocols cannot understand the contents
of the LUN. EMC recommends that file systems with iSCSI LUNs be dedicated to iSCSI and not used
for other purposes. For example, an iSCSI file system should not be exported via a CIFS share or NFS
export.
The file system should be large enough to hold the iSCSI LUNs and any snaps of the LUNs.
Potentially, an iSCSI snap (different technology than SnapSure) could take up the same amount of
space on the file system as the LUN.
LUN_size + (no_of_snaps x LUN_size x change_rate) + (n x LUN_size) = minimum file system space
needed to support one LUN, where change_rate is the percentage of LUN data that changes between snaps
and n is the number of iSCSI snaps promoted at any one time.
Promoting a snap assigns a LUN to the snap so that it can be accessed by an iSCSI host.
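As a worked example with hypothetical numbers: a 100 GB LUN with 3 snaps, a 20% change rate between snaps, and 1 snap promoted at any one time needs at least 100 + (3 x 100 x 0.2) + (1 x 100) = 260 GB of file system space.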

Celerra iSCSI

- 19

Copyright 2006 EMC Corporation. All Rights Reserved.

Create an iSCSI Target


y Targets are configured on the Data Mover to allow an Initiator to establish a session and exchange data

y Command:
server_iscsi server_x -target -alias <alias_name> -create <pg_tag>:np=<np_list>

y Example:
server_iscsi server_2 -target -alias target1 -create 1000:np=10.127.51.163,10.127.51.164

2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 20

You must create one or more iSCSI targets on the Data Mover so that an iSCSI initiator can establish a
session and exchange data with the Celerra. This slide shows the CLI command used to create an
iSCSI target.
server_iscsi server_x -target: Creates, deletes, and configures iSCSI targets on the Data
Mover.
<alias_name> = a local, user-friendly name for the new iSCSI target. This name is an alias for the
target's qualified name and is used for designating a specific iSCSI target in other commands. The
<alias_name> is not used for authentication but is used as a key identifier in the Celerra iSCSI
configuration and therefore must be unique. The <alias_name> can have a maximum of 255
characters.
<pg_tag> = the portal group tag that identifies the portal group within an iSCSI node. The
<pg_tag> is an integer within the range of 0-65535. The default port for a portal group is 3260. If no
pg_tag is specified, the default Portal Group Tag of 1 is used.
<np_list> = a comma-separated list of network portals. A network portal in a target is identified
by its IP address and its listening TCP port. The format of a network portal is <ip>[:<port>]. If no port
is specified, the default port of 3260 is used.

Celerra iSCSI

- 20

Copyright 2006 EMC Corporation. All Rights Reserved.

Create an iSCSI Target


y iSCSI > Target tab > New

2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 21

This slide shows how to create an iSCSI target using Celerra Manager.
Note: Ensure that the iSCSI license has been enabled on the Celerra.

Celerra iSCSI

- 21

Copyright 2006 EMC Corporation. All Rights Reserved.

Create iSCSI LUN


y An iSCSI LUN is a file in a file system
Hosts access the iSCSI LUN as a SCSI disk
Maximum LUN size is 2 TB-1 MB
Optionally specify Virtual Provisioning

y Command:
server_iscsi server_x -lun -number <lun_number> -create <target_alias_name> -size <size> -fs <fs_name> [-vp {yes|no}]

y Example:
server_iscsi server_2 -lun -number 2 -create target1 -size 1000 -fs iscsi02
2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 22

After creating an iSCSI target, you must create iSCSI LUNs on the target. The LUNs physically reside
on space within the file system. From a client perspective, an iSCSI LUN appears as any other disk
device.
Currently EMC implements dense LUNs on the Celerra. Dense LUNs utilize Persistent Block
Reservation (PBR) to ensure that there is sufficient space on the file system for all data that may be
written to the LUN. PBR reserves disk space for the entire LUN, although the actual disk space is not
taken from the reservation pool until data is actually written to the LUN. Note: If you have a LUN on a
file system but have not yet written any data to the LUN, the server_df command reports free disk
space as if the LUN were full.
Note: A maximum of 255 LUNs per target is supported; however, it is recommended that only 128 be
configured if snaps are used, because snap LUN numbers automatically begin at 128. There is also a
recommended maximum of 1000 iSCSI LUNs per Data Mover.
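For illustration, a virtually provisioned LUN is created with the same command plus the -vp option (the LUN number and size below are assumptions, not lab values):
server_iscsi server_2 -lun -number 3 -create target1 -size 2000 -fs iscsi02 -vp yes
Because the LUN is virtually provisioned, no Persistent Block Reservation is taken, so server_df should continue to report the space as free until data is actually written, in contrast to the dense-LUN behavior described above.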


Copyright 2006 EMC Corporation. All Rights Reserved.

Create iSCSI LUN


y iSCSI > LUNs tab > New

2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 23

This slide shows how to create an iSCSI LUN using Celerra Manager.


Copyright 2006 EMC Corporation. All Rights Reserved.

Create iSCSI LUN Mask


y LUN Masking grants or denies
access to specific LUNs by
specific Initiators

Create and mount FS


Create iSCSI targets
Create iSCSI LUNs
Configuring a LUN mask

y Command:
server_iscsi server_x -mask
-set <target_alias_name>
-initiator <initiator_name>
-grant <access_list>

Configure iSNS (Optional)


Enable CHAP (Optional)
Start iSCSI Service on DM

y Example:
server_iscsi server_2 -mask -set target1
-initiator iqn.1991-05.com.microsoft:nas46.celerra6.emc.com
-grant 0-100
2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 24

To control initiator access to an iSCSI LUN, you must configure an iSCSI LUN mask. A LUN mask
controls incoming iSCSI access by granting or denying specific iSCSI initiators access to specific
iSCSI LUNs. By default, all initial LUN masks are set to deny access to all iSCSI initiators. You must
create a LUN mask to explicitly grant access to an initiator.
You should not grant two initiators access to the same LUN. Granting multiple initiators access to the
same LUN can cause conflicts when more than one initiator tries writing to the LUN. If the LUN has
been formatted with the NTFS file system in Windows, simultaneous writes may corrupt the NTFS file
system on the LUN. As a best practice, you should not grant initiator access to any undefined LUNs.
Changes to LUN masks take effect immediately. Be careful when deleting or modifying a LUN mask
for an initiator with an active session. When you delete a LUN mask or remove grants from a mask,
initiator access to LUNs currently in use is cut off and will interrupt applications using those LUNs.
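To keep initiators from sharing LUNs, each initiator is typically granted its own, non-overlapping LUN range on the target. As an illustrative sketch (the second initiator name and LUN range are assumptions, not from the course lab), a second host could be granted only LUNs 101-110:
server_iscsi server_2 -mask -set target1 -initiator iqn.1991-05.com.microsoft:host2.corp.hmarine.com -grant 101-110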


Copyright 2006 EMC Corporation. All Rights Reserved.

Configure iSCSI LUN Masking


y iSCSI > Targets tab

y Right click the target > Properties > LUN mask tab > New

2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 25

This slide shows how to create an iSCSI LUN mask using Celerra Manager.


Copyright 2006 EMC Corporation. All Rights Reserved.

Configure iSNS and CHAP


y Optionally, configure iSNS and CHAP
on the Celerra

Create and mount FS


Create iSCSI targets

y iSNS allows initiators to automatically
discover iSCSI targets on the Data
Mover
server_iscsi server_2 -ns isns
-set -server <IP address>

y Configuring CHAP requires iSCSI
initiators to authenticate with the
target on the Data Mover

Create iSCSI LUNs


Configuring a LUN mask
Configure iSNS (Optional)
Enable CHAP (Optional)
Start iSCSI Service on DM

server_security server_2 -add
-policy chap -name <client name>
Enter Password: ************

2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 26

If you want iSCSI initiators to automatically discover the iSCSI targets on the Data Mover, you can
configure an iSNS client on the Data Mover (an iSNS server must be present). Configuring an iSNS
client on the Data Mover causes the Data Mover to register all of its iSCSI targets with an external
iSNS server. iSCSI initiators can then query the iSNS server to discover available targets on the Data
Movers.
If you want the Data Mover to authenticate the identity of iSCSI initiators contacting it, you can
configure CHAP on the Data Mover.
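As a hedged sketch of both optional steps together (the iSNS server address and initiator name are illustrative assumptions, and the exact option spelling should be confirmed against the server_iscsi and server_security man pages for your NAS version):
server_iscsi server_2 -ns isns -set -server 10.127.51.200
server_security server_2 -add -policy chap -name iqn.1991-05.com.microsoft:nas46.celerra6.emc.com
The CHAP client name is typically the name the initiator presents (its iSCSI qualified name), and the secret entered at the password prompt must also be configured on the initiator side.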


Copyright 2006 EMC Corporation. All Rights Reserved.

Start the iSCSI Service


y iSCSI Services must be started
on the Data Mover before using
iSCSI Targets

Create and mount FS


Create iSCSI targets
Create iSCSI LUNs
Configuring a LUN mask

y Command:
server_iscsi server_x
-service -start

Configure iSNS (Optional)


Enable CHAP (Optional)
Start iSCSI Service on DM

y Example:
server_iscsi server_2 -service -start

Celerra iSCSI - 27

2006 EMC Corporation. All rights reserved.

You must start the iSCSI service on the Data Mover before using iSCSI targets.
This slide shows the command required to start the iSCSI service on the Celerra.
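Assuming a -status option is available for the -service switch (worth confirming in the server_iscsi man page for your NAS version), the service state can be verified after starting it:
server_iscsi server_2 -service -status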


Copyright 2006 EMC Corporation. All Rights Reserved.

Start the iSCSI Service


y iSCSI > Configuration tab

2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 28

This slide shows how to start the iSCSI service on the Celerra using Celerra Manager.


Copyright 2006 EMC Corporation. All Rights Reserved.

iSCSI LUN Extension


y NAS 5.5 feature that allows the size of an iSCSI LUN to
be dynamically extended
Size cannot be reduced

y Support for both Virtually Provisioned and Dense LUNs


y The host must be capable of recognizing a dynamic change in
LUN size
Manual volume/file system extension is required after the LUN is
extended from the Control Station interface

y Command:
server_iscsi server_x -lun -number <lun_number>
-extend <target_alias_name>
-size <size> {M|G|T}
2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 29

iSCSI LUN extension is a new feature that helps solve the storage space problem by allowing iSCSI
LUNs to be dynamically extended while in use. Extending a LUN can be done using both the Celerra
Manager and the CLI, and applies to both Virtually Provisioned and Dense LUNs.
To extend a LUN, the following prerequisites need to be met:
y There has to be sufficient space on the underlying file system to satisfy the extension request. With
a Virtually Provisioned LUN this does not apply.
y The final size of the LUN after extension cannot exceed 2 TB.
Additional notes:
y LUN extension is not reversible; at this time there is no procedure for shrinking a LUN.
y The host must be able to support LUN expansion. Once the LUN is extended, the system
administrator must take action in order for the host to recognize the change in size. This typically
requires running a host reconfiguration method or rebooting the system.
On a Windows host, an NTFS file system can be extended using the diskpart utility.
y The command to extend a LUN can only be executed on a single LUN at a time, whether in the CLI
or Celerra Manager.
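As a hedged end-to-end sketch (the LUN number, target alias, size value, and volume number are illustrative assumptions; whether -size specifies the increment or the new total size should be confirmed in the command reference), first extend the LUN from the Control Station, then let the Windows host grow the NTFS volume into the new space:
server_iscsi server_2 -lun -number 2 -extend target1 -size 500M
Then, on the Windows host, after the disk change is recognized, diskpart can extend the NTFS volume:
diskpart
DISKPART> list volume
DISKPART> select volume 3
DISKPART> extend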


Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra iSCSI Host Components


y Celerra Volume Shadow Copy Service (VSS) Provider
Runs as a Windows service
Interface between Microsoft's VSS and Celerra iSCSI snap
capabilities

y EMC SnapSure Manager for iSCSI


Create snaps of Celerra iSCSI LUNs
Requires Celerra Local Disk Service (CLD), Windows 2000 or
Windows 2003

y Exchange 2000 Integration Module


Use iSCSI LUNs to store Exchange data
Create and manage snaps of the Exchange data stored on Celerra
iSCSI LUNs
Requires Windows 2000 and Exchange 2000
2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 30

Celerra VSS Provider


The Celerra VSS Provider for iSCSI runs as a Windows service and provides the interface between VSS and the iSCSI
snapshot capabilities of the Celerra. Celerra VSS Provider for iSCSI lets VSS requestor applications, such as VSS-enabled
backup applications, make shadow copies of Celerra iSCSI LUNs. VSS provides the backup framework for Windows
Server 2003 and enables the creation of read-only, point-in-time copies of data, called shadow copies. VSS integrates with
front-end applications so they can create and access shadow copies. The Celerra VSS Provider for iSCSI is a hardware-based provider that works directly with iSCSI LUNs on a Celerra Network Server and the VSS service on Windows Server
2003 to provide consistent shadow copy creation and addressing. Since the Celerra VSS Provider is a hardware-based
provider that works on the storage backend, it reduces the load on the iSCSI host's CPU and memory. It is also more
efficient in an environment where you need to take shadow copies of multiple volumes at the same time.
SnapSure Manager for iSCSI
SnapSure Manager for iSCSI is an application that lets you take snapshots of data stored on Celerra iSCSI LUNs. You can
take iSCSI snapshots individually or via a schedule. You can also restore data from an iSCSI snapshot by either restoring
the entire snapshot, or mounting the snapshot and then copying data from the snapshot. SnapSure Manager for iSCSI relies
on the EMC CLD (Celerra Local Disk) service. SnapSure Manager for iSCSI can be installed and run on Windows Server
2003; however, it does not support the VSS framework, and therefore cannot make VSS-type snapshots.
iSCSI SnapSure Manager for Exchange 2000
iSCSI SnapSure Manager for Exchange 2000 is an MMC add-on that integrates with the Microsoft Exchange 2000 System
Manager and lets you store Exchange system files and storage groups on Celerra iSCSI LUNs. It also lets you take
snapshots of Exchange storage groups. Data can be recovered from storage group snapshots by either restoring the entire
snapshot or mounting the snapshot to a backup Exchange system and restoring individual mailboxes from that system.
iSCSI SnapSure Manager for Exchange 2000 relies on the EMC CLD service.


Copyright 2006 EMC Corporation. All Rights Reserved.

Celerra iSCSI Exchange Solution
Basic Production Architecture

y Consolidate Mailstore databases and Log Files of each Exchange Storage Group
into separate iSCSI LUNs
y Consolidate Public Folders and System Files to Celerra as necessary
y iSCSI LUNs could be accessed over private Ethernet for best performance
y Components:
Exchange 2000+
Windows initiator 1.06+
Any Celerra

(Diagram: Outlook, CIFS, NFS, and SQL clients reach the Exchange Server over the LAN;
the iSCSI initiator on the Exchange Server connects to the Celerra iSCSI target, where
the Exchange mailbox stores, Exchange system and log files, and Exchange public folders
reside in file systems on the Celerra.)

2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 31

This slide shows a sample iSCSI solution with Microsoft Exchange. The Celerra has been configured
as an iSCSI Target. The Exchange server is running the Windows iSCSI Initiator.
On the Celerra:
y iSCSI Target is configured
y A file system is created which will house the iSCSI LUNs
y iSCSI LUNs are created (3 in this case)
y LUN mask is configured to provide access to the Exchange server
On the Exchange Server:
y iSCSI initiator is installed
y Celerra Target portal is defined
y Log on to the Celerra Target for LUN access
y Migrate the Exchange Storage Group databases to Celerra iSCSI LUNs
Exchange clients will access the Exchange server as usual. The Storage group databases will now
reside on the Celerra iSCSI LUNs.
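On the Exchange server side, the attach steps can be scripted with the Microsoft iSCSI initiator command line (a hedged sketch; the portal address is illustrative and iscsicli syntax should be confirmed for the initiator version in use):
iscsicli QAddTargetPortal 10.127.51.163
iscsicli ListTargets
iscsicli QLoginTarget <target_iqn_reported_by_ListTargets>
After login, the Celerra iSCSI LUNs appear as new disks in Windows Disk Management and can be partitioned and formatted before the Exchange storage groups are migrated onto them.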


Copyright 2006 EMC Corporation. All Rights Reserved.

Module Summary
Key points covered in this module are:
y iSCSI is a block-level storage transport session protocol that allows
users to create a storage area network using TCP/IP networks
y The iSCSI initiator resides on the client and issues commands to the
target configured on the Celerra Data Mover
y Initiators and targets use network portals to communicate over the
network
y Celerra iSCSI LUNs are built as files within a Celerra file system
y LUN masking is required to control host access to iSCSI LUNs
y There are two types of iSCSI names: iqn and eui
y An iSCSI initiator can discover its targets either through
SendTargets discovery or through an iSNS server
y Celerra supports CHAP authentication to add a higher level of
security
2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 32


Copyright 2006 EMC Corporation. All Rights Reserved.

Closing Slide

2006 EMC Corporation. All rights reserved.

Celerra iSCSI - 33


Celerra ICON
Appendix Overview

This Appendix contains the following topics:

Appendix A: Hurricane Marine, LTD
Appendix B: Hurricane Marine Windows Network Design
Appendix C: Hurricane Marine Windows User and Group Memberships
Appendix D: Hurricane Marine UNIX Users and Groups
Appendix E: Hurricane Marine, IP Addresses, and Schema
Appendix F: Bibliography
Appendix G: Switch Ports, Router Configuration, IP Addressing


Appendix A: Hurricane Marine, LTD

Description

Hurricane Marine, LTD is a fictitious enterprise that has been created as a


case study for Celerra training. Hurricane Marine, LTD is a world leader in
luxury and racing boats and yachts. Their success has been enhanced by
EMC's ability to make their information available to all of their staff at the
same time.

EMC and
Hurricane
Marine, LTD

Until recently, EMC data storage has been the only EMC product Hurricane
Marine, LTD has utilized. Now, however, they have opted to put an EMC E-Infostructure in place. EMC has just installed an EMC Connectrix ED-1032, and
is now looking to implement EMC Celerra as their key file server.

Environment

Hurricane Marine, LTD's computer network consists of both a Microsoft


network and UNIX. While their engineering staff does the bulk of their work
in a UNIX environment, all employees have Microsoft Windows-based
applications as well. Thus, Hurricane Marine, LTD has implemented support
for both systems. You will find appendixes that outline the design of both the
Microsoft and UNIX security structures.
Their network runs exclusively on TCP/IP. Appendixes have also
been provided to assist you with the design of the IP network.

People

Hurricane Marine, LTD's president and founder is Perry Tesca. The head of
his IS department is Ira Techi. You will be working closely with Mr. Techi in
implementing EMC Celerra into his network. Mr. Techi has some needs that
Celerra is required to fulfill, but there are also some potential needs that he
may like to explore.

Organization
chart

On the following page is the organization chart for Hurricane Marine, LTD.

Appendix A: Hurricane Marine, LTD, Continued


Hurricane Marine, LTD - Organization Chart

Perry Tesca - President
Liza Minacci - Dir. Marketing

Engineering Propulsion: Earl Pallis, Eddie Pope, Etta Place, Egan Putter, Eldon Pratt, Elliot Proh, Elvin Ping

Engineering Structural: Edgar South, Ellen Sele, Eric Simons, Eva Song, Ed Sazi, Evan Swailz

Sales East: Sarah Emm, Sadie Epari, Sal Eammi, Sage Early, Sam Echo, Santos Elton, Saul Ettol, Sash Extra, Sean Ewer

Sales West: Seve Wari, Scott West, Seda Weir, Seiko Wong, Sema Welles, Selena Willet, Selma Witt, Sergio Wall, Seve Wassi, Seymore Wai, Steve Woo

Information Systems: Ira Techi, Iggy Tallis, Isabella Tei, Ivan Teribl

Managers: Perry Tesca, Liza Minacci, Earl Pallis, Edgar South, Sarah Emm, Seve Wari, Ira Techi


Appendix B: Hurricane Marine Microsoft Network Design

Microsoft
networking
features

DNS Server: 10.127.*.161


DHCP:
Not in use, all nodes have static IP addresses (See the IP
Appendix).

Windows 2000
domains

The Windows 2000 network is comprised of the following domains:

hmarine.com domain (the root of the forest)


corp.hmarine.com (a subdomain of the root)
asia.corp.hmarine.com (future expansion)

Though the root domain is present solely for administrative purposes at this
time, corp.hmarine.com will hold containers for all users, groups, and
computer accounts. A third domain, asia.corp.hmarine.com, is also being
planned for future expansion.

Root Domain: hmarine.com
  Domain Controller: hm-1.hmarine.com

Sub Domain: corp.hmarine.com
  Domain Controller: hm-dc2.hmarine.com
  Computer Accounts: w2k1, w2k2, w2k3, w2k4, w2k5, w2k6, and all Data Movers
  All user accounts


Appendix C: Hurricane Marine Windows 2000 User and Group Memberships
Hurricane Marine
Windows 2000 Users & Group Memberships
CORP Domain
Username        Full Name        NT Global Group
EPallis         Earl Pallis      Propulsion Engineers, Managers
EPing           Elvin Ping       Propulsion Engineers
EPlace          Etta Place       Propulsion Engineers
EPope           Eddie Pope       Propulsion Engineers
EPratt          Eldon Pratt      Propulsion Engineers
EProh           Elliot Proh      Propulsion Engineers
Administrator                    Domain Admins
EPutter         Egan Putter      Propulsion Engineers
ESazi           Ed Sazi          Structural Engineers
ESele           Ellen Sele       Structural Engineers
ESimons         Eric Simons      Structural Engineers
ESong           Eva Song         Structural Engineers
ESouth          Edgar South      Structural Engineers, Managers
ESwailz         Evan Swailz      Structural Engineers
ITallis         Iggy Tallis      IS, DOMAIN ADMINS
ITechi          Ira Techi        IS, DOMAIN ADMINS, Managers
ITei            Isabella Tei     IS, DOMAIN ADMINS
ITeribl         Ivan Teribl      IS, DOMAIN ADMINS
LMinacci        Liza Minacci     Director of Marketing, Managers
PTesca          Perry Tesca      President, Managers
SEammi          Sal Eammi        Eastcoast Sales
SEarly          Sage Early       Eastcoast Sales
SEcho           Sam Echo         Eastcoast Sales
SElton          Santos Elton     Eastcoast Sales
SEmm            Sarah Emm        Eastcoast Sales, Managers
SEpari          Sadie Epari      Eastcoast Sales
SEttol          Saul Ettol       Eastcoast Sales
SEwer           Sean Ewer        Eastcoast Sales
SExtra          Sash Extra       Eastcoast Sales
SWai            Seymore Wai      Westcoast Sales
SWall           Sergio Wall      Westcoast Sales
SWari           Seve Wari        Westcoast Sales, Managers
SWassi          Seve Wassi       Westcoast Sales
SWeir           Seda Weir        Westcoast Sales
SWelles         Sema Welles      Westcoast Sales
SWest           Scott West       Westcoast Sales
SWillet         Selena Willet    Westcoast Sales
SWitt           Selma Witt       Westcoast Sales
SWong           Seiko Wong       Westcoast Sales
SWoo            Steve Woo        Westcoast Sales


Appendix D: Hurricane Marine UNIX Users and Groups


Hurricane Marine
UNIX Users & Group Memberships
NIS Domain hmarine.com
Username & Password   Full Name       Group
epallis               Earl Pallis     engprop, mngr
eping                 Elvin Ping      engprop
eplace                Etta Place      engprop
epope                 Eddie Pope      engprop
epratt                Eldon Pratt     engprop
eproh                 Elliot Proh     engprop
eputter               Egan Putter     engprop
esazi                 Ed Sazi         engstruc
esele                 Ellen Sele      engstruc
esimons               Eric Simons     engstruc
esong                 Eva Song        engstruc
esouth                Edgar South     engstruc, mngr
eswailz               Evan Swailz     engstruc
itallis               Iggy Tallis     infotech
itechi                Ira Techi       infotech, mngr
itei                  Isabella Tei    infotech
iteribl               Ivan Teribl     infotech
lminacci              Liza Minacci    mngr
ptesca                Perry Tesca     mngr
seammi                Sal Eammi       saleseas
searly                Sage Early      saleseas
secho                 Sam Echo        saleseas
selton                Santos Elton    saleseas
semm                  Sarah Emm       saleseas, mngr
separi                Sadie Epari     saleseas
settol                Saul Ettol      saleseas
sewer                 Sean Ewer       saleseas
sextra                Sash Extra      saleseas
swai                  Seymore Wai     saleswes
swall                 Sergio Wall     saleswes
swari                 Seve Wari       saleswes, mngr
swassi                Seve Wassi      saleswes
sweir                 Seda Weir       saleswes
swelles               Sema Welles     saleswes
swest                 Scott West      saleswes
swillet               Selena Willet   saleswes
switt                 Selma Witt      saleswes
swong                 Seiko Wong      saleswes
swoo                  Steve Woo       saleswes

Note: Password is the same as username



Appendix D: Hurricane Marine UNIX Users and Groups, Continued

NIS Passwd file

swillet:NP:1030:104:Selena Willet:/home/swillet:/bin/csh
epallis:NP:1004:101:Earl Pallis:/home/epallis:/bin/csh
swassi:NP:1037:104:Seve Wassi:/home/swassi:/bin/csh
separi:NP:1010:103:Sadi Epari:/home/separi:/bin/csh
esouth:NP:1003:102:Edgar South:/home/esouth:/bin/csh
daemon:NP:1:1::/:
swong:NP:1021:104:Seiko Wong:/home/swong:/bin/csh
sewer:NP:1036:103:Sean Ewer:/home/sewer:/bin/csh
secho:NP:1025:103:Sam Echo:/home/secho:/bin/csh
eping:NP:1031:101:Elvin Ping:/home/eping:/bin/csh
swai:NP:1038:104:Seymour Wai:/home/swai:/bin/csh
itei::1017:105:Isabella Tei:/home/itei:/bin/csh
adm:NP:4:4:Admin:/var/adm:
iteribl:NP:1022:105:Ivan Teribl:/home/iteribl:/bin/csh
ptesca:NP:1001:106:Perry Tesca:/home/ptesca:/bin/csh
nobody:NP:60001:60001:Nobody:/:
epratt:NP:1024:101:Eldon Pratt:/home/epratt:/bin/csh
eplace:NP:1014:101:Etta Place:/home/eplace:/bin/csh
swest:NP:1011:104:Scott West:/home/swest:/bin/csh
sweir:NP:1016:104:Seda Weir:/home/sweir:/bin/csh
nuucp:NP:9:9:uucp Admin:/var/spool/uucppublic:/usr/lib/uucp/uucico
esong:NP:1018:102:Eva Song:/home/esong:/bin/csh
eproh:NP:1028:101:Elliot Proh:/home/eproh:/bin/csh
root:oiOEvBA22p40s:0:1:Super-User:/:/sbin/sh
lminacci:NP:1002:106:Liza Minacci:/home/lminacci:/bin/csh
nobody4:NP:65534:65534:SunOS 4.x Nobody:/:
itallis:NP:1012:105:Iggy Tallis:/home/itallis:/bin/csh
sextra:NP:1034:103:Sash Extra:/home/sextra:/bin/csh
settol:NP:1032:103:Saul Ettol:/home/settol:/bin/csh
selton:NP:1029:103:Santos Elton:/home/selton:/bin/csh
searly:NP:1020:103:Sage Early:/home/searly:/bin/csh
listen:*LK*:37:4:Network Admin:/usr/net/nls:
itechi:NP:1007:105:Ira Techi:/home/itechi:/bin/csh
switt:NP:1033:104:Selma Witt:/home/switt:/bin/csh
swari:NP:1006:104:Seve Wari:/home/swari:/bin/csh
swall:NP:1035:104:Sergio Wall:/home/swall:/bin/csh
uucp:NP:5:5:uucp Admin:/usr/lib/uucp:
swoo:NP:1039:104:Steve Woo:/home/swoo:/bin/csh
semm:NP:1005:103:Sarah Emm:/home/semm:/bin/csh
noaccess:NP:60002:60002:No Access User:/:
swelles:NP:1026:104:Sema Welles:/home/swelles:/bin/csh
eswailz:NP:1027:102:Evan Swailz:/home/swailz:/bin/csh
esimons:NP:1013:102:Eric Simons:/home/esimons:/bin/csh
eputter:NP:1019:101:Egan Putter:/home/eputter:/bin/csh
seammi:NP:1015:103:Sal Eammi:/home/seammi:/bin/csh
esele:NP:1008:102:Ellen Sele:/home/esele:/bin/csh
esazi:NP:1023:102:Ed Sazi:/home/esazi:/bin/csh
epope:NP:1009:101:Eddie Pope:/home/epope:/bin/csh
sys:NP:3:3::/:
bin:NP:2:2::/usr/bin:
lp:NP:71:8:Line Printer Admin:/usr/spool/lp


Appendix D: Hurricane Marine UNIX Users and Groups, Continued

NIS Group file

sysadmin::14:
saleswes::104:swari,swest,swong,swelles,swillet,switt,swall,swassi,sw
ai,swoo
saleseas::103:semm,separi,seammi,searly,secho,selton,settol,sextra,se
wer
noaccess::60002:
infotech::105:itechi,itallis,itei,iteribl,
engstruc::102:esouth,esele,esimons,esong,esazi,eswailz
nogroup::65534:
engprop::101:epallis,epope,eplace,eputter,epratt,eproh,eping
nobody::60001:
daemon::12:root,daemon
staff::10:
other::1:
nuucp::9:root,nuucp
uucp::5:root,uucp
root::0:root
mngr::106:lminacci,epallis,esouth,semm,swari,itechi,ptesca
mail::6:root
tty::7:root,tty,adm
sys::3:root,bin,sys,adm
bin::2:root,bin,daemon
adm::4:root,adm,daemon
lp::8:root,lp,


Appendix E: Hurricane Marine IP Address and Schema


Host IP Configurations
Host/comp. name  IP address    Subnet mask      Broadcast      Gateway        Network        Sw port:VLAN  Info
sun1             10.127.*.11   255.255.255.224  10.127.*.31    10.127.*.30    10.127.*.0     2/43:10       UNIX
sun2             10.127.*.12   255.255.255.224  10.127.*.31    10.127.*.30    10.127.*.0     2/44:10       UNIX
sun3             10.127.*.13   255.255.255.224  10.127.*.31    10.127.*.30    10.127.*.0     2/45:10       UNIX
sun4             10.127.*.14   255.255.255.224  10.127.*.31    10.127.*.30    10.127.*.0     2/46:10       UNIX
sun5             10.127.*.15   255.255.255.224  10.127.*.31    10.127.*.30    10.127.*.0     2/47:10       UNIX
sun6             10.127.*.16   255.255.255.224  10.127.*.31    10.127.*.30    10.127.*.0     2/48:10       UNIX
W2k1             10.127.*.71   255.255.255.224  10.127.*.95    10.127.*.94    10.127.*.64    2/37:30       Win2000
W2k2             10.127.*.72   255.255.255.224  10.127.*.95    10.127.*.94    10.127.*.64    2/38:30       Win2000
W2k3             10.127.*.73   255.255.255.224  10.127.*.95    10.127.*.94    10.127.*.64    2/39:30       Win2000
W2k4             10.127.*.74   255.255.255.224  10.127.*.95    10.127.*.94    10.127.*.64    2/40:30       Win2000
W2k5             10.127.*.75   255.255.255.224  10.127.*.95    10.127.*.94    10.127.*.64    2/41:30       Win2000
W2k6             10.127.*.76   255.255.255.224  10.127.*.95    10.127.*.94    10.127.*.64    2/42:30       Win2000
cel1cs0          10.127.*.110  255.255.255.224  10.127.*.127   10.127.*.126   10.127.*.96    3/25:41       Celerra 1
cel1dm2          10.127.*.112  255.255.255.224  10.127.*.127   10.127.*.126   10.127.*.96    2/1-4:41      Celerra 1
cel1dm3          10.127.*.113  255.255.255.224  10.127.*.127   10.127.*.126   10.127.*.96    2/5-8:41      Celerra 1
cel2cs0          10.127.*.120  255.255.255.224  10.127.*.127   10.127.*.126   10.127.*.96    3/27:41       Celerra 2
cel2dm2          10.127.*.122  255.255.255.224  10.127.*.127   10.127.*.126   10.127.*.96    2/9-12:41     Celerra 2
cel2dm3          10.127.*.123  255.255.255.224  10.127.*.127   10.127.*.126   10.127.*.96    2/13-16:41    Celerra 2
cel3cs0          10.127.*.130  255.255.255.224  10.127.*.159   10.127.*.158   10.127.*.127   3/29:42       Celerra 3
cel3dm2          10.127.*.132  255.255.255.224  10.127.*.159   10.127.*.158   10.127.*.127   2/17-20:42    Celerra 3
cel3dm3          10.127.*.133  255.255.255.224  10.127.*.159   10.127.*.158   10.127.*.127   2/21-24:42    Celerra 3
cel4cs0          10.127.*.140  255.255.255.224  10.127.*.159   10.127.*.158   10.127.*.127   3/26:42       Celerra 4
cel4dm2          10.127.*.142  255.255.255.224  10.127.*.159   10.127.*.158   10.127.*.127   3/1-4:42      Celerra 4
cel4dm3          10.127.*.143  255.255.255.224  10.127.*.159   10.127.*.158   10.127.*.127   3/5-8:42      Celerra 4
cel5cs0          10.127.*.150  255.255.255.224  10.127.*.159   10.127.*.158   10.127.*.127   3/28:42       Celerra 5
cel5dm2          10.127.*.152  255.255.255.224  10.127.*.159   10.127.*.158   10.127.*.127   3/9-12:42     Celerra 5
cel5dm3          10.127.*.153  255.255.255.224  10.127.*.159   10.127.*.158   10.127.*.127   3/13-16:42    Celerra 5
cel6cs0          10.127.*.100  255.255.255.224  10.127.*.127   10.127.*.126   10.127.*.96    3/30:41       Celerra 6
cel6dm2          10.127.*.102  255.255.255.224  10.127.*.127   10.127.*.126   10.127.*.96    3/17-20:41    Celerra 6
cel6dm3          10.127.*.103  255.255.255.224  10.127.*.127   10.127.*.126   10.127.*.96    3/21-24:41    Celerra 6
hm-1             10.127.*.161  255.255.255.224  10.127.*.191   10.127.*.190   10.127.*.160   2/27:43       Root W2k
hm-dc2           10.127.*.162  255.255.255.224  10.127.*.191   10.127.*.190   10.127.*.160   2/28:43       Corp W2K
nis-master       10.127.*.163  255.255.255.224  10.127.*.191   10.127.*.190   10.127.*.160   2/29:43       NIS
DNS server       10.127.*.161  255.255.255.224  -              -              -              -             On hm-1
Router           10.127.*.254  255.255.255.0    -              -              -              -             -
E. Switch        10.127.*.253  255.255.255.0    -              -              -              3/32:Trunk    -


Appendix F: Bibliography
Installing Celerra iSCSI Host Components
P/N 300-001-993, Rev A01, Version 5.4, April, 2005
Configuring Virtual Data Movers
P/N 300-001-978, Rev A01, Version 5.4, April, 2005
Using SnapSure on Celerra
P/N 300-002-030, Rev A01, Version 5.4, April, 2005
Using FTP on Celerra Network Server
P/N 300-002-019, Rev A01, Version 5.4, April, 2005
Using Windows Administrative Tools with Celerra
P/N 300-001-985, Rev A01, Version 5.4, April, 2005
Configuring External Usermapper for Celerra
P/N 300-002-023, Rev A01, Version 5.4, April, 2005
Configuring and Managing Celerra Networking
P/N 300-002-016607, Rev A01, Version 5.4, April, 2005
Celerra File Extension Filtering
P/N 300-001-972, Rev A01, Version 5.4, April, 2005
Implementing Automatic Volume Management with Celerra
P/N 300-002-078, Rev A01, Version 5.4, April, 2005
Using Quotas on Celerra
P/N 300-002-029, Rev A01, Version 5.4, April, 2005
Using Celerra Antivirus Agent
P/N 300-001-991, Rev A01, Version 5.4, April, 2005
Using Celerra Replicator
P/N 300-002-035, Rev A01, Version 5.4, April, 2005
Managing NFS Access to the Celerra Network Server
P/N 300-002-036, Rev A01, Version 5.4, April, 2005
Configuring and Managing Celerra Network High Availability
P/N 300-002-015, Rev A01, Version 5.4, April, 2005


Appendix G

Switch Ports, Router Configuration, IP Addressing


Cisco Systems 2980g Port Layout (diagram)

Module 2: ports 1-8 (VLAN 41) - cel1dm2/cel1dm3; ports 9-16 (VLAN 41) - cel2dm2/cel2dm3;
ports 17-24 (VLAN 42) - cel3dm2/cel3dm3; ports 27-29 (VLAN 43) - the W2K domain controllers and NIS;
ports 37-42 (VLAN 30) - the Windows 2000 hosts; ports 43-48 (VLAN 10) - the Sun hosts;
a block of ports is reserved for VLAN 20, which is not in use.

Module 3: ports 1-8 (VLAN 42) - cel4dm2/cel4dm3; ports 9-16 (VLAN 42) - cel5dm2/cel5dm3;
ports 17-24 (VLAN 41) - cel6dm2/cel6dm3; ports 25-30 (VLANs 41/42) - Control Stations c1cs0-c6cs0;
port 32 - trunk.

(Per-host port and VLAN assignments are listed in the Sw port:VLAN column of Appendix E.)
Router Configuration
Interface  VLAN  IP Address
0/1        n/a   assigned
0/0        1     10.127.*.254
0/0.10     10    10.127.*.30
0/0.20     20    10.127.*.62
0/0.30     30    10.127.*.94
0/0.41     41    10.127.*.126
0/0.42     42    10.127.*.158
0/0.43     43    10.127.*.190

IP Addressing

Subnet A - VLAN 10
  Network: 10.127.*.0   Gateway: 10.127.*.30   Broadcast: 10.127.*.31
  sun1 10.127.*.11   sun2 10.127.*.12   sun3 10.127.*.13
  sun4 10.127.*.14   sun5 10.127.*.15   sun6 10.127.*.16

Subnet B - VLAN 20 (not in use)
  Network: 10.127.*.32   Gateway: 10.127.*.62   Broadcast: 10.127.*.63
  DNS: 10.127.*.161

Subnet C - VLAN 30
  Network: 10.127.*.64   Gateway: 10.127.*.94   Broadcast: 10.127.*.95
  w2k1 10.127.*.71   w2k2 10.127.*.72   w2k3 10.127.*.73
  w2k4 10.127.*.74   w2k5 10.127.*.75   w2k6 10.127.*.76

Subnet D - VLAN 41
  Network: 10.127.*.96   Gateway: 10.127.*.126   Broadcast: 10.127.*.127
  cel6cs0 10.127.*.100   cel6dm2 10.127.*.102
  cel1cs0 10.127.*.110   cel1dm2 10.127.*.112
  cel2cs0 10.127.*.120   cel2dm2 10.127.*.122

Subnet E - VLAN 42
  Network: 10.127.*.127   Gateway: 10.127.*.158   Broadcast: 10.127.*.159
  cel3cs0 10.127.*.130   cel3dm2 10.127.*.132
  cel4cs0 10.127.*.140   cel4dm2 10.127.*.142
  cel5cs0 10.127.*.150   cel5dm2 10.127.*.152

Subnet F - VLAN 43
  Network: 10.127.*.160   Gateway: 10.127.*.190   Broadcast: 10.127.*.191
  hm-1 10.127.*.161   hm-dc2 10.127.*.162   nis-master 10.127.*.163
  NIS: 10.127.*.163
