File Systems
P/N 300-002-120
Rev A01
Version 5.4
April 2005
Contents
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
Access to Celerra File Systems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
References. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6
Restriction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6
Cautions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6
Celerra File System Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7
File System Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7
File System Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8
File System Planning Considerations. . . . . . . . . . . . . . . . . . . . . . . . . .8
About Mounting a Celerra File System on a Data Mover . . . . . . . . .10
Monitoring the Amount of Space in Use in a File System . . . . . . . .11
File Systems and Automatic Volume Management . . . . . . . . . . . . . .12
Celerra WORM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .12
Checking File System Consistency . . . . . . . . . . . . . . . . . . . . . . . . . .13
Nested Mount File Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .15
System Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .16
EMC NAS Interoperability Matrix. . . . . . . . . . . . . . . . . . . . . . . . . . . . .16
User Interface Choices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .17
Managing Celerra File Systems Roadmap . . . . . . . . . . . . . . . . . . . . . . . .18
Create a Celerra File System on a Celerra Network Server . . . . . . . . . . .20
Create a Mount Point for a Celerra File System on a Data Mover . . . . . .22
Mount a Celerra File System on a Mount Point on a Data Mover . . . . . .23
Enabling Access to Celerra File Systems . . . . . . . . . . . . . . . . . . . . . . . . .24
Enable NFS Access to a Celerra File System. . . . . . . . . . . . . . . . . . .24
Enable CIFS Access to a Celerra File System . . . . . . . . . . . . . . . . . .24
Enable FTP Access to a Celerra File System . . . . . . . . . . . . . . . . . . .24
Enable TFTP Access to a Celerra File System. . . . . . . . . . . . . . . . . .25
Enable MPFS Access to a Celerra File System . . . . . . . . . . . . . . . . .25
Managing Celerra File Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .26
Listing All Celerra File Systems on a Celerra Network Server . . . . .26
Listing Configuration Information for a Celerra File System . . . . . .27
Checking Celerra File System Capacity . . . . . . . . . . . . . . . . . . . . . . .27
Finding Out If a File System Has an Associated Storage Pool . . . .30
Extending a Celerra File System . . . . . . . . . . . . . . . . . . . . . . . . . . . . .31
Exporting a Celerra File System . . . . . . . . . . . . . . . . . . . . . . . . . . . . .35
Renaming a Celerra File System. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Deleting a Celerra File System and Freeing Its Disk Space . . . . . . . 36
Listing Mount Points on a Data Mover . . . . . . . . . . . . . . . . . . . . . . . . 39
Listing All Celerra File Systems Mounted on a Data Mover. . . . . . . 39
Mounting a File System for Read/Write Access . . . . . . . . . . . . . . . . 40
Mounting a File System for Read-Only Access. . . . . . . . . . . . . . . . . 40
Unmounting a Celerra File System from a Data Mover . . . . . . . . . . 41
Unmounting All Celerra File Systems from a Data Mover . . . . . . . . 43
Changing the Size Threshold for All File Systems . . . . . . . . . . . . . . 45
Changing the Size Threshold for a Data Mover. . . . . . . . . . . . . . . . . 45
Enhancing File Read and Write Performance . . . . . . . . . . . . . . . . . . 46
Finding and Fixing File System Storage Errors . . . . . . . . . . . . . . . . . . . . 49
Starting and Monitoring a File System Check . . . . . . . . . . . . . . . . . . 49
Starting ACL Check on a File System . . . . . . . . . . . . . . . . . . . . . . . . 49
Listing Current File System Checks. . . . . . . . . . . . . . . . . . . . . . . . . . 50
Displaying Information on a Single File System Check . . . . . . . . . . 50
Displaying Information on all Current File System Checks . . . . . . . 51
Managing Celerra Nested Mount File Systems . . . . . . . . . . . . . . . . . . . . 52
Creating a Nested Mount File System . . . . . . . . . . . . . . . . . . . . . . . . 52
Mounting a Nested Mount File System . . . . . . . . . . . . . . . . . . . . . . . 53
Deleting the Nested Mount File System . . . . . . . . . . . . . . . . . . . . . . . 53
Adding an Existing File System to the NMFS . . . . . . . . . . . . . . . . . . 55
Exporting a Component File System . . . . . . . . . . . . . . . . . . . . . . . . . 56
Removing a Component File System . . . . . . . . . . . . . . . . . . . . . . . . . 56
Moving a Nested Mount File System . . . . . . . . . . . . . . . . . . . . . . . . . 56
Troubleshooting File Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Error Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Known Problems and Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Related Information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Want to Know More? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Appendix: GID Support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Restrictions for GID Support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Note: All referenced documents are available on the Celerra Network Server User
Information CD.
Terminology
This section defines terms that you need to understand in order to manage volumes
and file systems on the Celerra Network Server. Refer to the Celerra Network
Server User Information Glossary on the user information CD for a complete list of
Celerra terminology.
Automatic Volume Management (AVM): A feature of the Celerra Network Server that
creates and manages volumes automatically, without manual volume management
by an administrator. AVM organizes volumes into pools of storage that can be
allocated to file systems.
Business continuance volume (BCV): A Symmetrix® volume used as a mirror that
attaches to and fully synchronizes with a production (source) volume on the Celerra
Network Server. The synchronized BCV is then separated from the source volume
and is addressed directly from the host to serve in backup and restore operations,
decision support, and application testing.
Celerra WORM: Celerra Write-Once Read-Many feature.
Component file system: A file system mounted on the nested mount root
file system; it is part of the nested mount file system.
Disk volume: On Celerra systems, a physical storage unit as exported from the
storage array. All other volume types are created from disk volumes. See also
Metavolume, Slice volume, Stripe volume, Volume.
File system: A method of cataloging and managing the files and directories on a
storage system.
LUN (Logical unit number): The identifying number of a SCSI object that processes
SCSI commands. The LUN is the last part of the SCSI address for a SCSI object.
The LUN is an ID for the logical unit, but the term is sometimes used to refer to the
logical unit itself.
Metavolume: On a Celerra system, a concatenation of volumes, which can consist
of disk, slice, or stripe volumes. Also called a hypervolume or hyper. Every file
system must be created on top of a unique metavolume. See also Disk volume,
Slice volume, Stripe volume, Volume.
Nested export: An export of a component file system. The export has its own
specified access controls.
Nested mount file system (NMFS): File system that contains the nested mount root
file system and component file systems.
Restriction
◆ When building volumes on a Celerra Network Server attached to a Symmetrix
storage system, use standard Symmetrix volumes (also called hypervolumes),
not Symmetrix metavolumes.
Cautions
◆ All parts of a Celerra file system must use the same type of disk storage and must
be stored on a single storage system.
◆ If you plan to set quotas on a file system to control the amount of space that
users and groups can consume, you should turn on quotas immediately after
creating the file system. Turning on quotas later, when the file system is in use,
can cause temporary file system disruption, including slow file system access.
For information about quotas and how to turn on quotas, refer to the technical
module Using Quotas on Celerra on the user information CD.
◆ If your user environment requires international character support (i.e., support of
non-English character sets or Unicode characters), EMC strongly recommends
configuring your Celerra Network Server to support this feature before creating
file systems. For detailed information about international character support and
how to configure it on a Celerra Network Server, refer to the technical module
Using International Character Sets with Celerra on the user information CD.
◆ If you plan to create TimeFinder®/FS (local, NearCopy, or FarCopy) snapshots of a
production file system (PFS), do not use slice volumes (nas_slice) when creating
the PFS. Instead, use the full disk presented to the Celerra Network Server, since
TimeFinder copies and restores volumes in their entirety and does not recognize
sliced partitions created by the host (in this example, the Celerra Network
Server).
◆ Do not manually edit the nas_db database without consulting EMC Customer
Support. Any changes to this file could cause problems with your Celerra
installation.
CAUTION
All parts of a Celerra file system should use the same type of disk storage and should
be stored on a single storage system.
Issues and Considerations

Size of the production file systems:
Plan your backup and restore strategy. Larger file systems might take more
time to back up and restore. If a file system becomes inconsistent (a rare
occurrence), larger file systems can take more time for a consistency check
to run.

Data growth rates:
Be sure to consider your space needs as your data increases.

Use of TimeFinder/FS or remote TimeFinder/FS:
If you plan to create multiple copies of your production file system, you
need to plan for that number of BCVs. For example, from one production file
system you may create 10 copies; therefore, you need to plan for 10 BCVs,
not one. TimeFinder/FS uses the physical disk, not the logical volume, when
it creates BCV copies. The copy is done track by track, so unused capacity
is carried over to the BCVs. Volumes used for BCVs need to be the same size
as the standard volume. For information about TimeFinder/FS, refer to the
Using TimeFinder/FS with Celerra and Using TimeFinder/FS Near Copy and Far
Copy with Celerra technical modules on the user information CD.
Use of SRDF® disaster recovery:
All file systems on the Data Mover must be built on SRDF volumes. For
information about SRDF, refer to the Using SRDF/S with Celerra for Disaster
Recovery technical module or the Using SRDF/A with Celerra technical module
on the user information CD. If you use the AVM feature to create the file
systems, you must specify the symm_std_rdf_src storage pool. This storage
pool directs AVM to allocate space from volumes configured at installation
time for remote mirroring using SRDF.

Use of HighRoad®:
You cannot enable HighRoad access to file systems with a stripe depth of
less than 32 KB. For more information, refer to the Using HighRoad on
Celerra technical module on the user information CD.

Use of CWORM:
When you create a new file system, you have the option of applying the
CWORM (Celerra Write-Once Read-Many) feature to the file system. Selecting
the CWORM option designates the file system as CWORM; the file system is
persistently marked as such until it is deleted. CWORM is a licensed
feature of Celerra; you must have a CWORM software license to create a
CWORM file system. The CWORM feature is disabled by default.
Celerra WORM
Celerra WORM (Write-Once Read-Many) allows you to archive data to WORM
storage on standard rewritable magnetic disks. You can write data to a CIFS or NFS
file system a single time to create a permanent, noneditable set of records that
cannot be altered, corrupted, or deleted.
CWORM can be applied to any standard Celerra file system. In a CWORM
environment, administrators can use WORM protection on a per-file basis.
Files can be stored with specified retention periods that prevent the files
from being deleted until the retention period expires. Only an administrator
can delete the CWORM file system.
A Celerra WORM-enabled file system:
◆ Safeguards data while ensuring its integrity and accessibility.
◆ Simplifies the task of archiving data for administrators and applications.
◆ Improves storage management flexibility and application performance.
If your system is CWORM enabled, refer to the Using Celerra WORM technical
module on the user information CD for additional information on this feature.
CAUTION
◆ During fsck, the file system is not available to users. NFS clients receive
an “NFS server not responding” message, and CIFS clients lose the server
connection and must remap shares.
◆ Depending on the size of the file system, the fsck utility may use a significant
chunk of the system’s resources (memory and CPU) and may affect overall
system performance.
◆ A maximum of two fsck processes can run on a single Data Mover at the
same time.
◆ An fsck of a permanently unmounted file system can be executed on a standby
Data Mover. The fsck process continues to run during and after a Data Mover
failover.
◆ If a Data Mover reboots or experiences failover or failback while running fsck on
an unmounted file system, fsck will not start automatically after the Data Mover
reboots. You must start fsck manually on the unmounted file system.
Using auto fsck (default)
The Celerra Network Server automatically begins fsck when file system
corruption is detected. The first step in the fsck process is to ensure that
the corruption can be safely corrected without bringing down the server. The
fsck process also corrects any inconsistencies in the ACL (Access Control
List) database.
The corrupted file system is not available to users during the fsck process. After
fsck finds and corrects the corruption, users again have access to the file system.
While fsck is running, other file systems mounted on the same server are not
affected and are available to users.
You can check the status of fsck through the Control Station using the
nas_fsck command. Refer to Displaying Information on a Single File System
Check on page 50.
Note: When viewing online, click the text in the roadmap to access that phase. To return to
this roadmap from other pages, click the roadmap symbol at the center bottom of the page.
Table 4 lists the tasks to manage Celerra file systems as described in this
technical module.
Table 4 File Systems Roadmap Tasks

Task: Create a Celerra file system with existing volumes or using the AVM
feature.
Procedure: Create a Celerra File System on a Celerra Network Server on
page 20.

Task: Create a mount point on a Data Mover for a file system.
Procedure: Create a Mount Point for a Celerra File System on a Data Mover
on page 22.

Task: Mount a Celerra file system on a Data Mover.
Procedure: Mount a Celerra File System on a Mount Point on a Data Mover on
page 23.
For a detailed synopsis of the commands presented in this section or to view syntax
conventions, refer to the online Celerra man pages or the Celerra Network Server
Command Reference Manual on the user information CD.
Action
To create a file system with existing volumes, use this command syntax:
$ nas_fs -name <name> -create <volume_name>
Where:
<name> = the name assigned to a file system
<volume_name> = the name of the existing volume
Example:
To create a file system with existing volumes called ufs1, type:
$ nas_fs -name ufs1 -create mtv1
Output
id = 18
name = ufs1
acl = 0
in_use = False
type = uxfs
volume = mtv1
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
symm_devs =
002806000209-006,
002806000209-007,
002806000209-008,
002806000209-009
disks = d3,d4,d5,d6
Note
A file system name must be unique on a particular Celerra Network Server. It can include up to
254 characters, including hyphens ( - ), underscores ( _ ), and periods ( . ), but cannot begin with a
hyphen or period.
Refer to the Celerra Network Server Command Reference Manual on the user information CD for
information about the options available for creating a file system with the nas_fs command.
Action
Note: If you do not specify a storage system, any single storage system with available space for
the specified pool will be used.
Example:
To create a file system using AVM called ufs1, type:
$ nas_fs -name ufs1 -create size=10G pool=clar_r5_performance
Output
id = 24
name = ufs1
acl = 0
in_use = False
type = uxfs
volume = v121
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
symm_devs =
WRE00100100430-0010,
WRE00100100430-0011
disks = d7,d13
Note
A file system name must be unique on a particular Celerra Network Server. It can include up to
254 characters, including hyphens ( - ), underscores ( _ ), and periods ( . ), but cannot begin with a
hyphen or period. The Celerra Network Server Command Reference Manual on the user
information CD identifies the nas_fs command options available for creating a file system.
Action
Output
server_3: done
Action
To mount a file system on a mount point on a Data Mover, use this command syntax:
$ server_mount <movername> -option <options>
<fs_name> <mount_point>
Where:
<movername> = name of the specified Data Mover
<options> = specifies mount options, separated by commas
<fs_name> = named file system to be mounted
<mount_point> = path to mount point for the specified Data Mover
Example:
To mount a file system on a mount point on a Data Mover with a nolock option, type:
$ server_mount server_2 -option nolock,accesspolicy=NATIVE
ufs1 /ufs1
Output
server_2: done
File systems are mounted permanently by default. If you unmount a file system
temporarily and then reboot the file server, the file system is remounted
automatically.
Action
To view a list of all Celerra file systems on a Celerra Network Server, type:
$ nas_fs -list
Output
Note
Column definitions:
id — ID of the file system (assigned automatically).
inuse — Whether the file system is registered in the mount table of a Data
Mover; y indicates yes, n indicates no.
type — Type of file system.
acl — Access control value for the file system.
volume — Volume on which the file system resides.
name — Name assigned to the file system.
server — ID of the Data Mover accessing the file system.
Action
To view configuration information for a specific file system, use this command syntax:
$ nas_fs -info <fs_name>
Where:
<fs_name> = name of the file system for which you want to view information
Example:
To view configuration information on ufs1, type:
$ nas_fs -info ufs1
Output
id = 18
name = ufs1
acl = 0
in_use = False
type = uxfs
volume = mtv1
pool =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
symm_devs =
002806000209-006,002806000209-007,002806000209-008,002
806000209-009
disks = d3,d4,d5,d6
Action
To see the total amount of space allocated to a file system, the amount of free and used disk
space available to the file system, and the percentage of space used, use this command
syntax:
$ nas_fs -size <fs_name>
Where:
<fs_name> = name of the file system
Example:
To view the total space available on ufs1, type:
$ nas_fs -size ufs1
Output
Note
Action
To see the total amount of space allocated to every file system on a Data Mover, the free and
used space available to each file system, and the percentage of used space, use this
command syntax:
$ server_df <movername>
Where:
<movername> = name of the specified Data Mover
Example:
$ server_df server_2
Output
server_2:
Filesystem kbytes used avail capacity Mounted on
root_fs_common 15360 1392 13968 9% /.etc_common
ufs1 34814592 54240 34760352 0% /ufs1
ufs2 104438672 64 104438608 0% /ufs2
ufs1_snap1 34814592 64 34814528 0% /ufs1_snap1
root_fs_2 15360 224 15136 1% /
Action
To see the number of inodes allocated to a file system on a Data Mover, the number of used
and available inodes for the file system, and the percentage of total inodes in use by the file
system, use this command syntax:
$ server_df <movername> -inode <fs_name>
Where:
<movername> = name of the specified Data Mover
<fs_name> = name of the file system
Example:
To view the inode allocation, use, and availability on ufs2, type:
$ server_df server_2 -inode ufs2
Output
server_2:
Filesystem inodes used avail capacity Mounted on
ufs2 12744766 8 12744758 0% /ufs2
Note
Column definitions:
Filesystem — Name of the file system.
inodes — Total number of inodes allocated to the file system.
used — Number of inodes in use by the file system.
avail — Number of free inodes available for use by the file system.
capacity — Percentage of total inodes in use.
Mounted on — Name of the mount point of the file system on the Data Mover.
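The capacity column is simply the used count as a percentage of the total. A minimal sketch of that arithmetic (an invented helper of our own, not part of server_df), assuming the percentage is truncated to a whole number:

```shell
# Sketch (our own arithmetic, not a Celerra tool): reproduce the capacity
# column from the used and total counts, truncating to a whole percentage
# as the sample output suggests (e.g. 8 used of 12744766 inodes -> 0%).
inode_capacity() {
  used="$1"; total="$2"
  awk -v u="$used" -v t="$total" 'BEGIN { printf "%d%%\n", (u * 100) / t }'
}
```

The same arithmetic applies to the kbytes-based capacity column shown earlier (for example, 1392 used of 15360 kbytes gives 9%).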
Action
To see the number of inodes allocated to each file system on a Data Mover, the number of
used and available inodes for each file system, and the percentage of total inodes in use by
each file system, use this command syntax:
$ server_df <movername> -inode
Where:
<movername> = name of the specified Data Mover
Example:
$ server_df server_2 -inode
Output
server_2:
Filesystem inodes used avail capacity Mounted on
root_fs_common 7870 14 7856 0% /.etc_common
ufs1 4250878 1368 4249510 0% /ufs1
ufs2 12744766 8 12744758 0% /ufs2
ufs1_snap1 4250878 8 4250870 0% /ufs1_snap1
root_fs_2 7870 32 7838 0% /
Note
Column definitions:
Filesystem — Name of the file system.
inodes — Total number of inodes allocated to the file system.
used — Number of inodes in use by the file system.
avail — Number of free inodes available for use by the file system.
capacity — Percentage of total inodes in use.
Mounted on — Name of the mount point of the file system on the Data Mover.
Action
Check the file system configuration to determine if the file system has an associated storage
pool:
$ nas_fs -info <fs_name>
Where:
<fs_name> = name of the file system whose configuration you want to check
Example:
To check the file system configuration for associated storage pool on ufs1, type:
$ nas_fs -info ufs1
id = 24
name = ufs1
acl = 0
in_use = False
type = uxfs
volume = v121
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
symm_devs =
WRE00100100430-0010,
WRE00100100430-0011
disks = d7,d13
Note
The pool line includes the value clar_r5_performance, which is the storage pool
associated with the file system.
If the pool line contains no value (is blank), the file system has no associated storage pool.
Note: For information about AVM, refer to Celerra File System Concepts on page 7. For
information about storage pools, refer to the Managing Celerra Volumes technical module
on the user information CD.
The following sections explain procedures for extending a Celerra file system by
volume or by size using AVM.
Extending a Celerra File System by Volume
Note: If the file system has an associated storage pool, we recommend that you extend it
using the procedure Extending a Celerra File System by Size Using AVM on page 33. To
determine whether a file system has an associated storage pool, use the procedure Finding
Out If a File System Has an Associated Storage Pool on page 30.
You can extend a file system with any unused disk volume, slice volume, stripe
volume, or metavolume. Adding volume space to a file system adds the space to
the metavolume on which it is built. So when you extend a file system, the total size
of its underlying metavolume is also extended.
If the metavolume underlying the file system is made up of stripe volumes, you
should extend the file system with stripe volumes of the same size and type.
Step Action
3. Check the size of the file system after extending it to see how much the size increased:
$ nas_fs -size <fs_name>
Where: <fs_name> = the name of the file system
Example:
$ nas_fs -size ufs1
Output:
total = 138096 avail = 138096 used = 0 ( 0% ) (sizes in MB)
volume: total = 138096 (sizes in MB)
Note: For information about AVM, refer to Celerra File System Concepts on page 7. For
information about storage pools, refer to Storage Pools in the Managing Celerra Volumes
technical module on the user information CD.
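When extending file systems from a script, the total field of the `nas_fs -size` output can be compared before and after the operation. A sketch, assuming the first output line has the "total = <MB> ..." shape shown above (`fs_total_mb` is an invented name):

```shell
# Sketch: pull the total size in MB out of saved `nas_fs -size` output,
# e.g. "total = 138096 avail = 138096 used = 0 ( 0% ) (sizes in MB)".
# Assumes the first line starting with "total =" carries the value.
fs_total_mb() {
  awk '/^total =/ { print $3; exit }'
}
```

For example, `before=$(nas_fs -size ufs1 | fs_total_mb)` would capture the size before an extension for later comparison.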
Step Action
1. Check the file system configuration to confirm that the file system has an associated
storage pool. If you see a pool defined in the output, the file system was created with AVM
and has an associated storage pool.
$ nas_fs -info <fs_name>
Example:
$ nas_fs -info ufs1
Output:
id = 24
name = ufs1
acl = 0
in_use = False
type = uxfs
volume = v121
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
symm_devs =
WRE00100100430-0010,
WRE00100100430-0011
disks = d7,d13
Note: If you do not specify a storage system, the default storage system is the one on
which the file system resides. If the file system spans multiple storage systems, the
default is to use all the storage systems on which the file system resides.
Enter the size in gigabytes by typing <number>G (for example, 250G) or in megabytes by
typing <number>M (for example, 500M).
Example:
$ nas_fs -xtend ufs1 size=10G
Output:
id = 24
name = ufs1
acl = 0
in_use = False
type = uxfs
volume = v121
pool = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
symm_devs =
WRE00100100430-0010,
WRE00100100430-0011
disks = d7,d13,d19,d25
3. Check the size of the file system after extending it to confirm that the size increased.
$ nas_fs -size <fs_name>
Where: <fs_name> = the name of the file system.
Example:
$ nas_fs -size ufs1
Output:
total = 138096 avail = 138096 used = 0 ( 0% ) (sizes in MB)
volume: total = 138096 (sizes in MB)
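The size argument above takes the <number>G or <number>M form. For planning arithmetic, a hypothetical convenience helper of our own (not part of the nas_fs syntax) can normalize both forms to megabytes, assuming 1 G = 1024 MB:

```shell
# Our own convenience sketch (not part of nas_fs): normalize a size
# argument such as 10G or 500M to megabytes, assuming 1 G = 1024 MB.
size_to_mb() {
  case "$1" in
    *G) echo $(( ${1%G} * 1024 )) ;;   # strip trailing G, convert to MB
    *M) echo "${1%M}" ;;               # strip trailing M
    *)  return 1 ;;                    # expect <number>G or <number>M
  esac
}
```

This makes it easy to compare a requested extension against the MB figures reported by nas_fs -size.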
Action
Example:
To export the file system ufs2 to NFS clients, type:
$ server_export server_3 -Protocol nfs -option root=10.1.1.1 /ufs2
To export a Celerra file system to CIFS clients, use this command syntax:
$ server_export <movername> -Protocol <Protocol> -name <sharename>
/<pathname>
Where:
<movername> = name of the specified Data Mover
<Protocol> = nfs or cifs (nfs is the default)
<sharename> = CIFS share name
<pathname> = pathname of the file system to export
Example:
To export the file system ufs2 to CIFS clients, type:
$ server_export server_3 -Protocol cifs -name ufs2 /ufs2
Output
Note
Action
Output
id = 18
name = ufs1
acl = 0
in_use = False
type = uxfs
volume = mtv1
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
symm_devs =
002806000209-006,002806000209-007,002806000209-008,002
806000209-009
disks = d3,d4,d5,d6
CAUTION
Deleting a file system deletes all the data in the file system. If the file
system has checkpoints, delete all the checkpoints before deleting the file
system. Deleting the file system does not delete data from business
continuance volumes associated with the file system.
Use this procedure to delete a Celerra file system and free its associated disk
space.
Step Action
1. Back up from the Celerra file system all data that you want to keep.
2. Check the file system configuration to determine if the file system has an associated
storage pool (you will need this information for a later step):
$ nas_fs -info <fs_name>
If the pool output line includes a value (is not blank), the file system has an associated
storage pool.
If the Celerra file system does not have an associated storage pool, continue with step 3.
If the Celerra file system has an associated storage pool, continue with step 4.
3. Determine and note the name of the metavolume on which the Celerra file system is built
(you will need this name for a later step):
$ nas_fs -info <fs_name>
Refer to Listing Configuration Information for a Celerra File System on page 27 for
example output.
The volume field contains the name of the metavolume. The disks field lists the disks
that provide storage to the file system.
4. If the Celerra file system has associated checkpoints, permanently unmount and then
delete the checkpoints and their associated volumes.
5. If the Celerra file system has associated business continuance volumes, break the
connection between (unmirror) the file system and its business continuance volumes. For
how to unmirror a business continuance volume, refer to the Using TimeFinder/FS with
Celerra technical module on the user information CD.
7. Permanently unmount the Celerra file system from its associated Data Movers:
$ server_umount <movername> -perm <fs_name>
Refer to Unmounting a File System Permanently on page 42 for more information.
8. Delete the Celerra file system from the Celerra Network Server:
$ nas_fs -delete <fs_name>
If the Celerra file system had an associated storage pool, you are finished. As part of
the file system delete operation, AVM deletes all underlying volumes and frees the space
for use by other file systems.
If the Celerra file system had no associated storage pool, continue with step 9. The
volumes underlying the file system were created manually and must be manually deleted.
9. Delete the metavolume on which the Celerra file system was created:
$ nas_volume -delete <meta_volume_name>
Refer to the section on deleting a metavolume or stripe volume in the Managing
Celerra Volumes technical module on the user information CD for more information.
10. If the metavolume included stripe volumes, delete all stripe volumes associated with the
metavolume until the disk space is free:
$ nas_volume -delete <stripe_volume_name>
11. If the metavolume included slice volumes, delete all slice volumes associated with the
metavolume until the disk space is free:
$ nas_volume -delete <slice_volume_name>
Refer to deleting a slice volume in the Managing Celerra Volumes technical module on
the user information CD for more information.
12. To finish freeing the disk space, check for slice volumes, stripe volumes, and
metavolumes that are not in use (n in inuse column) with these commands:
$ nas_volume -list
$ nas_slice -list
As needed, delete unused volumes until you free all of the disk space you want to free.
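The scan in step 12 can be scripted. The sketch below is illustrative only: it assumes the listing begins with a header row naming an inuse column that holds y/n values, as described above; the real output layout of nas_volume -list and nas_slice -list may differ on your system.

```shell
# Illustrative only: filter a saved volume listing down to entries not in
# use (n in the inuse column). Assumes a header row that names the columns;
# the actual layout of nas_volume -list / nas_slice -list may differ.
unused_volumes() {
  awk 'NR == 1 { for (i = 1; i <= NF; i++) if ($i == "inuse") col = i; next }
       col && $col == "n"'
}
```

The filtered rows would then be candidates for nas_volume -delete, working from stripe and slice volumes up until the disk space is free.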
Action
To list the mount points on a Data Mover, use this command syntax:
$ server_mountpoint <movername> -list
Where:
<movername> = name of the specified Data Mover
Example:
To list mount points on server_3, type:
$ server_mountpoint server_3 -list
Output
server_3 :
/.etc_common
/ufs1
/ufs1_snap1
Action
To view a list of all Celerra file systems mounted on a Data Mover, use this command syntax:
$ server_mount <movername>
Where:
<movername> = name of the specified Data Mover
Example:
To list all Celerra file systems mounted on server_3, type:
$ server_mount server_3
Output
server_3:
fs2 on /fs2 uxfs,perm,rw
fs1 on /fs1 uxfs,perm,rw
root_fs_3 on / uxfs,perm,rw
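Scripts can read this listing to verify how a file system is mounted. A sketch (the name `mount_options` is our own), assuming each output line has the "<fs> on <mount_point> <options>" shape shown above:

```shell
# Sketch: from saved `server_mount` output lines such as
# "fs2 on /fs2 uxfs,perm,rw", print the option string for one file system,
# e.g. to check whether it is mounted rw or ro.
mount_options() {
  fs="$1"
  awk -v fs="$fs" '$1 == fs && $2 == "on" { print $4; exit }'
}
```

For example, `server_mount server_3 | mount_options fs2` would report the mount options for fs2 under this assumed format.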
Step Action
1. If the file system is mounted on any Data Mover, use the server_umount command to
unmount the file system permanently from all Data Movers on which it is mounted.
For how to unmount a file system from a Data Mover, refer to Unmounting a Celerra File
System from a Data Mover on page 41.
2. Use the server_mount command to mount the file system on a Data Mover for read/
write access.
For how to mount a file system on a Data Mover, refer to Mount a Celerra File System on
a Mount Point on a Data Mover on page 23.
Step Action
1. If the file system is mounted for read/write access on a Data Mover, use the
server_umount command to unmount the file system permanently from that Data
Mover.
For how to unmount a file system from a Data Mover, refer to Unmounting a Celerra File
System from a Data Mover on page 41.
2. Use the server_mount command to mount the file system for read-only access on each
Data Mover from which you want to provide read-only access to the file system.
For how to mount a file system on a Data Mover, refer to Mount a Celerra File System on
a Mount Point on a Data Mover on page 23.
Note: It is good practice to unexport NFS exports and unshare CIFS shares of the file
system before unmounting the file system from the Data Mover.
Action
To temporarily unmount a file system by specifying the file system name, use this command
syntax:
$ server_umount <movername> -temp <fs_name>
Where:
<movername> = name of the specified Data Mover
<fs_name> = name of file system to unmount
Example:
To temporarily unmount a file system named ufs1, type:
$ server_umount server_2 -temp ufs1
Output
Note: It is good practice to unexport NFS exports and unshare CIFS shares of the file
system before unmounting the file system from the Data Mover.
Action
To permanently unmount a file system from a Data Mover by specifying the file system name,
use this command syntax:
$ server_umount <movername> -perm <fs_name>
Where:
<movername> = name of the specified Data Mover
<fs_name> = name of file system to unmount
Example:
To permanently unmount a file system named ufs1, type:
$ server_umount server_2 -perm ufs1
To permanently unmount a file system from a Data Mover by specifying the name of its mount
point, use this command syntax:
$ server_umount <movername> -perm <mount_point>
Where:
<movername> = name of the specified Data Mover
<mount_point> = path to mount point for the specified Data Mover
Example:
To permanently unmount the file system mounted on /ufs1 from server_2, type:
$ server_umount server_2 -perm /ufs1
Output
Note: It is good practice to unexport NFS exports and unshare CIFS shares of the file
system before unmounting the file system from the Data Mover.
Action
To temporarily unmount all file systems on a Data Mover, use this command syntax:
$ server_umount <movername> -temp all
or
$ server_umount <movername> all
Where:
<movername> = name of the specified Data Mover
Example:
To temporarily unmount all file systems on server_2, type:
$ server_umount server_2 -temp all
Output
CAUTION
Permanently unmounting all file systems from a Data Mover should be done with
caution because the operation deletes the contents of the mount table. To re-
establish client access to the file systems after this operation, you must rebuild the
mount table by remounting each file system on the Data Mover.
Use this procedure to permanently unmount all file systems on a Data Mover. We
recommend that you unexport NFS exports and unshare CIFS shares of the file
systems before unmounting the file systems from the Data Mover.
Action
To permanently unmount all file systems on a Data Mover, use this command syntax:
$ server_umount <movername> -perm all
Where:
<movername> = name of the specified Data Mover
Example:
To permanently unmount all file systems on server_2, type:
$ server_umount server_2 -perm all
Output
Step Action
1. To change the size threshold for all file systems, use this command syntax:
$ server_param <movername> -facility <facility_name> -modify <param_name> -value <new_value>
Where:
<movername> = name of Data Mover
<facility_name> = facility for the parameters
<param_name> = name of parameter
<new_value> = new value for parameter
Example:
To change the size threshold for all file systems to 85%, type:
$ server_param ALL -facility file -modify fsSizeThreshold -value 85
Action
To turn off the read prefetch mechanism for a file system, use this command syntax:
$ server_mount <movername> -option <options>,noprefetch <fs_name> <mount_point>
Where:
<movername> = name of the specified Data Mover
<options> = specifies mount options, separated by commas
<fs_name> = name of the file system
<mount_point> = path to mount point for the specified Data Mover
Example:
To turn off the read prefetch mechanism on ufs1, type:
$ server_mount server_3 -option rw,noprefetch ufs1 /ufs1
Output
server_3: done
Step Action
1. To turn off the read prefetch mechanism for all file systems on one Data Mover, use this
command syntax:
$ server_param <movername> -facility <facility_name> -modify <param_name> -value <new_value>
Where:
<movername> = name of the Data Mover
<facility_name> = facility for the parameters
<param_name> = name of parameter
<new_value> = new value for parameter
Example:
To turn off the read prefetch mechanism for all file systems on server_2, type:
$ server_param server_2 -facility file -modify prefetch -value 0
To turn on the read prefetch mechanism for all file systems on one Data Mover, use this
command syntax:
$ server_param <movername> -facility <facility_name> -modify <param_name> -value <new_value>
Example:
To turn on the read prefetch mechanism for all file systems on server_2, type:
$ server_param server_2 -facility file -modify prefetch -value 1
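The same parameter change can be repeated across several Data Movers with a simple loop. A dry-run sketch; the server names are placeholders, and the echo prefix only prints the commands rather than running them. Remove echo to execute:

```shell
# Print (dry run) the command that would disable read prefetch on each
# listed Data Mover. Remove `echo` to actually run server_param.
for mover in server_2 server_3 server_4; do
    echo server_param "$mover" -facility file -modify prefetch -value 0
done
```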
Action
To turn on the write performance improvement mechanism for a file system, use this command
syntax:
$ server_mount <movername> -option <options>,uncached <fs_name> <mount_point>
Where:
<movername> = name of the specified Data Mover
<options> = specifies mount options, separated by commas
<fs_name> = name of the file system
<mount_point> = path to mount point for the specified Data Mover
Example:
To turn on the uncached write mechanism on ufs1, type:
$ server_mount server_3 -option rw,uncached ufs1 /ufs1
Output
server_3: done
Action
To start and monitor fsck on a specified file system, use this command syntax:
$ nas_fsck -start <name> -monitor
Where:
<name> = name of the specified file system
Example:
To start fsck on ufs1 and monitor the progress, type:
$ nas_fsck -start ufs1 -monitor
Output
id = 27
name = ufs1
volume = mtv1
fsck_server = server_2
inode_check_percent = 10..20..30..40..60..70..80..100
directory_check_percent = 0..0..100
used_ACL_check_percent = 100
free_ACL_check_status = Done
cylinder_group_check_status = In Progress..Done
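Because the nas_fsck output is line-oriented (field = value), progress can be extracted with standard text tools. A sketch that reports the latest inode-check percentage from a saved copy of the output; the sample text below mirrors the example above and is illustrative only:

```shell
# Saved copy of nas_fsck output (sample mirrors the example above)
cat > /tmp/fsck_info.txt <<'EOF'
id                          = 27
name                        = ufs1
inode_check_percent         = 10..20..30..40..60..70..80..100
directory_check_percent     = 0..0..100
cylinder_group_check_status = In Progress..Done
EOF

# Print the most recent inode-check percentage (last ".."-separated value)
awk -F'= *' '/^inode_check_percent/ { n = split($2, p, "\\.\\."); print p[n] }' /tmp/fsck_info.txt
```

For the sample above, this prints 100.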
Action
To start an ACL check on a specified file system, use this command syntax:
$ nas_fsck -start <name> -aclchkonly
Where:
<name> = name of the specified file system
Example:
To start an ACL check on ufs1, type:
$ nas_fsck -start ufs1 -aclchkonly
Output
Action
To display information about a file system check on a single file system, use this command syntax:
$ nas_fsck -info <name>
Where:
<name> = name of the specified file system
Example:
To display information about file system check for ufs2, type:
$ nas_fsck -info ufs2
Output
name = ufs2
id = 23
volume = v134
fsck_server = server_5
inode_check_percent = 100
directory_check_percent = 100
used_ACL_check_percent = 100
free_ACL_check_status = Done
cylinder_group_check_status = In Progress
Action
To display information on all file system checks currently running, use this command syntax:
$ nas_fsck -info -all
Example:
To display information on all file system checks currently running, type:
$ nas_fsck -info -all
Output
name = ufs2
id = 23
volume = v134
fsck_server = server_5
inode_check_percent = 30
directory_check_percent = 0
used_ACL_check_percent = 0
free_ACL_check_status = Not Started
cylinder_group_check_status = Not Started
name = ufs1
id = 27
volume = mtv1
fsck_server = server_2
inode_check_percent = 100
directory_check_percent = 0
used_ACL_check_percent = 0
free_ACL_check_status = Not Started
cylinder_group_check_status = Not Started
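In the multi-record form above, records are separated only by a new name = line, so the same text-processing approach extends to it. A sketch pairing each file system with its inode-check progress; the sample input reproduces two fields from the records above:

```shell
# Two fields from each record of the multi-check output (sample data)
cat > /tmp/fsck_all.txt <<'EOF'
name = ufs2
inode_check_percent = 30
name = ufs1
inode_check_percent = 100
EOF

# Remember the current record's name, then print it with its percentage
awk -F' = ' '$1 == "name" { fs = $2 }
             $1 == "inode_check_percent" { print fs, $2 }' /tmp/fsck_all.txt
```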
Output
id = 719
name = nmfs4
acl = 0
in_use = False
type = nmfs
worm = off
volume = 0
pool =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
stor_devs =
disks =
Output
server_3: done
Step Action
Output:
server_3 : done
Example:
To delete a nested mount file system, nmfs4, type:
$ nas_fs -d nmfs4
Output:
id = 719
name = nmfs4
acl = 0
in_use = False
type = nmfs
worm = off
volume = 0
pool =
rw_servers=
ro_servers=
rw_vdms =
ro_vdms =
stor_devs =
disks =
Step Action
1. If the existing file system is mounted, permanently unmount it.
To permanently unmount a file system from a Data Mover by specifying the file system
name, use this command syntax:
$ server_umount <movername> -perm <fs_name>
Where:
<movername> = name of the specified Data Mover
<fs_name> = name of file system to unmount
Example:
To permanently unmount a file system named fs1, type:
$ server_umount server_2 -perm fs1
To permanently unmount a file system from a Data Mover by specifying the name of its
mount point, use this command syntax:
$ server_umount <movername> -perm <mount_point>
Where:
<movername> = name of the specified Data Mover
<mount_point> = path to mount point for the specified Data Mover
Example:
To permanently unmount the file system mounted on /fs1 from server_2, type:
$ server_umount server_2 -perm /fs1
Output:
For both command examples:
server_2: done
2. Mount the file system in the nested mount file system using this command syntax:
$ server_mount <movername> -option <options> <component file system name> /<nmfs pathname>/<component file system name>
Where:
<movername> = name of the specified Data Mover
<options> = specifies mount options, separated by commas
<component file system name> = name of the component file system to be mounted
<nmfs pathname> = pathname of the nested mount file system
Example:
To mount component file system fs5 in the nested mount file system nmfs4 on server_3, type:
$ server_mount server_3 fs5 /nmfs4/fs5
Output:
server_3: done
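The two steps above amount to a permanent unmount followed by a remount under the NMFS path, which is straightforward to script. A dry-run sketch using the hypothetical names from the example (server_3, fs5, nmfs4); the echo prefixes only print the commands. Remove them to execute:

```shell
mover=server_3
fs=fs5
nmfs=nmfs4

# Step 1 (dry run): permanently unmount the component file system
echo server_umount "$mover" -perm "$fs"
# Step 2 (dry run): remount it under the nested mount file system path
echo server_mount "$mover" "$fs" "/$nmfs/$fs"
```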
Error Messages
This section contains two tables to assist in troubleshooting Celerra file system
problems. Table 5 presents error messages, their definitions, and corrective actions.
Refer to the Celerra Network Server Error Messages Guide on the user information
CD for additional information on error messages.
Note: There can be more than one symptom for the same problem and, in some cases,
more than one cause.
Error message: filesystem is mounted, can not delete
Probable cause: You may be trying to delete a mounted file system.
Corrective action: Verify that this is the correct file system to be deleted, permanently
unmount the file system, and then retry.

Error message: filesystem is not mounted
Probable cause: You may be attempting to execute a command against a file system that
must be mounted.
Corrective action: Mount the file system, and then retry.

Error message: filesystem unavailable for read_write mount
Probable cause: The file system is already mounted by another Data Mover.
Corrective action: Verify the list of mounted file systems.

Error message: item is currently in use by movername
Probable cause: The file system is still mounted.
Corrective action: Unmount the file system, and then retry.

Error message: Mount Point Name [name] is not valid. Please Re-enter
Probable cause: The mount point was not entered correctly or does not exist.
Corrective action: The slash (/) that precedes the mount point name may have been
omitted. Type a slash before entering the mount point name, and then retry. If the error
message reappears, check your list of mount points.

Error message: must unmount from server(s): movername
Probable cause: You may be trying to execute a file system check against a mounted file
system.
Corrective action: Unmount the file system, and then retry.

Error message: no such file or directory
Probable cause: The value being typed either does not exist or does not exist as typed
(typing error).
Corrective action: Check that you are entering the correct value and ensure that the
uppercase and lowercase letters match. Note: All mount points begin with a forward
slash (/).

Error message: Path busy: filesystem fsname is currently mounted on mountpoint
Probable cause: A file system is already using the mount point you are attempting to
mount.
Corrective action: Create a new mount point, or unmount the file system, and then retry.

Error message: requires root command
Probable cause: You may be attempting to execute a command against a root file system
or volume to which you do not have access.
Corrective action: Select a non-root volume or file system, and then retry. Alternately,
retry the command as a root command by prepending it with the word root, using this
command syntax: root <command>.

Error message: server_x: Device busy
Probable cause: You may have tried to execute the same command more than once.
Corrective action: Check the list of mounted file systems to verify that the path is
already successfully mounted.

Error message: server_x: No such file or directory
Probable cause: The mount point may not exist on the specified Data Mover.
Corrective action: Confirm that the mount point is on the specified Data Mover.

Error message: undefined netgroup
Probable cause: The netgroup you are trying to export for is not recognized.
Corrective action: Enter the name of the netgroup into the system, and then retry. For
information about entering the netgroup name into the system, refer to the Configuring
Celerra Naming Services technical module.

Error message: Filesystem size threshold (##%) crossed (fs <fs_name>)
Note: This message appears in the event log on the server.
Probable cause: The amount of space in use in the file system (<fs_name>) has reached
its maximum threshold (##%). The threshold default is 90 percent space in use. File
system performance may degrade when a file system is almost full.
Corrective action: Extend the file system to increase its available space. This reduces
the percentage of used space. For how to extend a file system, refer to Extending a
Celerra File System on page 31.
Symptom: When you create a new file in the NMFS root directory, a file exists error
appears.
Probable cause: The NMFS root directory is read-only.
Corrective action: Do not try to create files or folders in the NMFS root directory.

Symptom: You are unable to mount a file system.
Probable cause: There are many probable causes for this scenario. Most provide an
error message, though occasionally there is none; in that case, the mount table entry
already exists.
Corrective action: Perform server_mount -all to activate all entries in the mount table.
Obtain a list of mounted file systems, and then observe the entries. If the file system in
question is already mounted (temporarily or permanently), perform the necessary steps
to unmount it, and then retry.

Symptom: An unmounted file system reappears in the mount table after a system reboot.
Probable cause: The file system may have been temporarily unmounted prior to reboot.
Corrective action: Perform a permanent unmount to remove the entry from the mount
table.