
EMC VNX Series

Release 7.0

Managing Volumes and File Systems with VNX AVM


P/N 300-011-806 REV A01

EMC Corporation Corporate Headquarters: Hopkinton, MA 01748-9103 1-508-435-1000 www.EMC.com

Copyright 1998-2011 EMC Corporation. All rights reserved. Published February 2011.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Powerlink. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.

Corporate Headquarters: Hopkinton, MA 01748-9103


Contents

Preface.....................................................................................................7
Chapter 1: Introduction.........................................................................11


Overview................................................................................................12
System requirements.............................................................................12
Restrictions.............................................................................................12
    AVM restrictions..........................................................................13
    Automatic file system extension restrictions...........................14
    Thin provisioning restrictions...................................................15
    VNX for block system restrictions............................................16
Cautions..................................................................................................16
User interface choices...........................................................................19
Related information..............................................................................22

Chapter 2: Concepts.............................................................................23
AVM overview.......................................................................................24
System-defined storage pools overview............................................24
Mapped storage pools overview.........................................................25
User-defined storage pools overview.................................................26
File system and automatic file system extension overview............26
AVM storage pool and disk type options..........................................27
    AVM storage pools......................................................................27
    Disk types.....................................................................................27
    System-defined storage pools....................................................30
    RAID groups and storage characteristics................................33
    User-defined storage pools........................................................35
Storage pool attributes..........................................................................35


System-defined storage pool volume and storage profiles.............38
    VNX for block system-defined storage pool algorithms.......39
    VNX for block system-defined storage pools for RAID 5, RAID 3, and RAID 1/0 SATA support.......................................42
    VNX for block system-defined storage pools for Flash support..........................................................................................44
    Symmetrix system-defined storage pools algorithm.............45
    VNX for block mapped pool file systems................................48
    Symmetrix mapped pool file systems......................................49
File system and storage pool relationship.........................................51
Automatic file system extension.........................................................53
Thin provisioning..................................................................................57
Planning considerations.......................................................................57

Chapter 3: Configuring.........................................................................63
Configure disk volumes.......................................................................64
    Provide storage from a VNX or legacy CLARiiON system to a gateway system......................................................................65
    Create pool-based provisioning for file storage systems.......66
    Add disk volumes to an integrated system.............................68
Create file systems with AVM.............................................................68
    Create file systems with system-defined storage pools.........70
    Create file systems with user-defined storage pools..............72
    Create the file system..................................................................76
    Create file systems with automatic file system extension.....79
    Create file systems with the automatic file system extension option enabled...........................................................80
Extend file systems with AVM............................................................82
    Extend file systems by using storage pools.............................83
    Extend file systems by adding volumes to a storage pool....85
    Extend file systems by using a different storage pool...........87
    Enable automatic file system extension and options.............90
    Enable thin provisioning............................................................94
    Enable automatic extension, thin provisioning, and all options simultaneously.............................................................96
Create file system checkpoints with AVM.........................................98

Chapter 4: Managing..........................................................................101
List existing storage pools..................................................................102
Display storage pool details...............................................................103


Display storage pool size information.............................................104
    Display size information for Symmetrix storage pools.......106
Modify system-defined and user-defined storage pool attributes...............................................................................................107
    Modify system-defined storage pool attributes....................108
    Modify user-defined storage pool attributes.........................111
Extend a user-defined storage pool by volume..............................115
Extend a user-defined storage pool by size.....................................116
Extend a system-defined storage pool.............................................117
    Extend a system-defined storage pool by size......................118
Remove volumes from storage pools...............................................119
Delete user-defined storage pools.....................................................120
    Delete a user-defined storage pool and its volumes............121

Chapter 5: Troubleshooting................................................................123
AVM troubleshooting considerations...............................................124
EMC E-Lab Interoperability Navigator............................................124
Known problems and limitations.....................................................124
Error messages.....................................................................................125
EMC Training and Professional Services.........................................126

Glossary................................................................................................127
Index.....................................................................................................131


Preface

As part of an effort to improve and enhance the performance and capabilities of its product lines, EMC periodically releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes. If a product does not function properly or does not function as described in this document, please contact your EMC representative.


Special notice conventions

EMC uses the following conventions for special notices:
CAUTION: A caution contains information essential to avoid data loss or damage to the system or equipment.

Important: An important note contains information essential to operation of the software.

Note: A note presents information that is important, but not hazard-related.

Hint: A hint provides suggested advice to users, often involving follow-on activity for a particular action.

Where to get help

EMC support, product, and licensing information can be obtained as follows:

Product information
For documentation, release notes, software updates, or for information about EMC products, licensing, and service, go to the EMC Online Support website (registration required) at http://Support.EMC.com.

Troubleshooting
Go to the EMC Online Support website. After logging in, locate the applicable Support by Product page.

Technical support
For technical support and service requests, go to EMC Customer Service on the EMC Online Support website. After logging in, locate the applicable Support by Product page, and choose either Live Chat or Create a service request. To open a service request through EMC Online Support, you must have a valid support agreement. Contact your EMC sales representative for details about obtaining a valid support agreement or with questions about your account.
Note: Do not request a specific support representative unless one has already been assigned to your particular system problem.

Your comments

Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Please send your opinion of this document to:


techpubcomments@EMC.com


1 Introduction

Topics included are:


- Overview on page 12
- System requirements on page 12
- Restrictions on page 12
- Cautions on page 16
- User interface choices on page 19
- Related information on page 22


Overview
Automatic Volume Management (AVM) is an EMC VNX feature that automates volume creation and management. By using the VNX command options and interfaces that support AVM, system administrators can create and expand file systems without creating and managing the underlying volumes.

The automatic file system extension feature automatically extends file systems created with AVM when the file systems reach their specified high water mark (HWM). Thin provisioning works with automatic file system extension and allows the file system to grow on demand. With thin provisioning, the space presented to the user or application is the maximum size setting, while only a portion of that space is actually allocated to the file system.

This document is part of the VNX documentation set and is intended for use by system administrators responsible for creating and managing volumes and file systems by using VNX AVM.

System requirements
Table 1 on page 12 describes the EMC VNX series software, hardware, network, and storage configurations.
Table 1. System requirements

Software: VNX series version 7.0
Hardware: No specific hardware requirements
Network: No specific network requirements
Storage: Any VNX-qualified storage system

Restrictions
The restrictions listed in this section are applicable to AVM, automatic file system extension, the thin provisioning feature, and the EMC VNX for block system.


AVM restrictions
The restrictions applicable to AVM are:

- Create a file system by using only one storage pool. If you need to extend a file system, extend it by using either the same storage pool or by using another compatible storage pool.
- Do not extend a file system across storage systems unless it is absolutely necessary. File systems might reside on multiple disk volumes. Ensure that all disk volumes used by a file system reside on the same storage system for file system creation and extension. This is to protect against storage system and data unavailability.
- RAID 3 is supported only with EMC VNX Capacity disk volumes.
- When building volumes on a VNX for file attached to an EMC Symmetrix storage system, use regular Symmetrix volumes (also called hypervolumes), not Symmetrix metavolumes.
- Use AVM to create the primary EMC TimeFinder/FS (NearCopy or FarCopy) file system, if the storage pool attributes indicate that no sliced volumes are used in that storage pool.
- AVM does not support business continuance volumes (BCVs) in a storage pool with other disk types. AVM storage pools must contain only one disk type. Disk types cannot be mixed. Table 4 on page 28 provides a complete list of disk types. Table 5 on page 31 provides a list of storage pools and the description of the associated disk types.
- LUNs that have been added to the file-based storage group are discovered during the normal storage discovery (diskmark) and mapped to their corresponding storage pools on the VNX for file. If a pool is encountered with the same name as an existing user-defined pool or system-defined pool from the same VNX for block system, diskmark will fail. It is possible to have duplicate pool names on different VNX for block systems, but not on the same VNX for block system.
- Names of pools mapped from a VNX for block system to a VNX for file cannot be modified. A user cannot manually delete a mapped pool. Mapped storage pools overview on page 25 provides a description of a mapped storage pool.
- For VNX for file, a storage pool cannot contain both mirrored and non-mirrored LUNs. If diskmark discovers both mirrored and non-mirrored LUNs, diskmark will fail. Also, data may be unavailable or lost during failovers.
- The VNX for file control volumes (LUNs 0 through 5) must be thick devices and use the same data service policies. Otherwise, the NAS software installation will fail.


Automatic file system extension restrictions


The restrictions applicable to automatic file system extension are:

- Automatic file system extension does not work on MGFS, which is the EMC file system type used while performing data migration from either CIFS or NFS to the VNX system by using VNX File System Migration (also known as CDMS).
- Automatic extension is not supported on file systems created with manual volume management. You can enable automatic file system extension on the file system only if it is created or extended by using an AVM storage pool.
- Automatic extension is not supported on file systems used with TimeFinder/FS NearCopy or FarCopy.
- While automatic file system extension is running, the Control Station blocks all other commands that apply to this file system. When the extension is complete, the Control Station allows the commands to run.
- The Control Station must be running and operating properly for automatic file system extension, or any other VNX feature, to work correctly.
- Automatic extension cannot be used for any file system that is part of a remote data facility (RDF) configuration. Do not use the nas_fs command with the -auto_extend option for file systems associated with RDF configurations. Doing so generates the error message: Error 4121: operation not supported for file systems of type EMC SRDF.
- The options associated with automatic extension can be modified only on file systems mounted with read/write permission. If the file system is mounted read-only, you must remount the file system as read/write before modifying the automatic file system extension, HWM, or maximum size options. An example follows this list.
- Enabling automatic file system extension and thin provisioning does not automatically reserve the space from the storage pool for that file system. Administrators must ensure that adequate storage space exists, so that the automatic extension operation can succeed. When there is not enough storage space available to extend the file system to the requested size, the file system extends to use all the available storage. For example, if automatic extension requires 6 GB but only 3 GB are available, the file system automatically extends to 3 GB. Although the file system was partially extended, an error message appears to indicate that there was not enough storage space available to perform automatic extension. When there is no available storage, automatic extension fails. You must manually extend the file system to recover from this issue.
- Automatic file system extension is supported with EMC VNX Replicator. Enable automatic extension only on the source file system in a replication scenario. The destination file system synchronizes with the source file system and extends automatically. Do not enable automatic extension on the destination file system. When using automatic extension and thin provisioning, you can create replicated copies of extendible file systems, but to do so, use slice volumes (slice=y).
- You cannot create iSCSI thick LUNs on file systems that have automatic extension enabled. You cannot enable automatic extension on a file system if there is a storage mode iSCSI LUN present on the file system. You will receive an error, "Error 2216: <fs_name>: item is currently in use by iSCSI." However, iSCSI virtually provisioned LUNs are supported on file systems with automatic extension enabled.
- Automatic extension is not supported on the root file system of a Data Mover or on the root file system of a Virtual Data Mover (VDM).
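For example, to enable automatic extension on an existing file system that is mounted read/write, a command like the following can be run from the Control Station. This is a minimal sketch: the file system name fs01, the 90 percent HWM, and the 100 GB maximum size are illustrative, and the VNX Command Line Interface Reference for File contains the authoritative option syntax:

   $ nas_fs -modify fs01 -auto_extend yes -hwm 90% -max_size 100G

If fs01 were mounted read-only, the file system would first have to be remounted read/write, as noted in the list above.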

Thin provisioning restrictions


The restrictions applicable to thin provisioning are:

- VNX for file supports thin provisioning on Symmetrix DMX-4 and legacy CLARiiON CX4 and CX5 disk volumes.
- The options associated with thin provisioning can be modified only on file systems mounted with read/write permission. If the file system is mounted read-only, you must remount the file system as read/write before modifying the thin provisioning, HWM, or maximum size options.
- Do not use VNX for file thin provisioned objects (iSCSI LUNs or iSCSI file systems) with Symmetrix or VNX for block thin provisioned devices. A single file system should not span virtual and regular Symmetrix or VNX for block volumes. Use only one layer of thin provisioning, either on the Symmetrix or VNX for block storage system, or on the VNX for file, but not on both. If the user attempts to create VNX for file thin provisioned objects with Symmetrix or VNX for block thin provisioned devices, the Data Mover will generate an error similar to the following: "VNX for File thin provisioning and VNX for Block or Symmetrix thin provisioning cannot coexist on a file system".
- Thin provisioning is supported with VNX Replicator. Enable thin provisioning only on the source file system in a replication scenario. The destination file system synchronizes with the source file system and extends automatically. Do not enable thin provisioning on the destination file system. When using automatic file system extension and thin provisioning, you can create replicated copies of extendible file systems, but to do so, use slice volumes (slice=y). With thin provisioning enabled, the NFS, CIFS, and FTP clients see the actual size of the VNX Replicator destination file system while they see the virtually provisioned maximum size of the source file system. Interoperability considerations on page 57 provides more information on using automatic file system extension with VNX Replicator.
- Thin provisioning is supported on the primary file system, but not supported with primary file system checkpoints. NFS, CIFS, and FTP clients cannot see the virtually provisioned maximum size of any EMC SnapSure checkpoint file system.
- If a file system is created by using a virtual storage pool, the -thin option of the nas_fs command cannot be enabled. VNX for file thin provisioning and VNX for block thin provisioning cannot coexist on a file system.


VNX for block system restrictions


The restrictions applicable to VNX for block systems are:

- Use RAID group-based LUNs instead of pool-based LUNs to create system control LUNs. Pool-based LUNs can be created as thin LUNs or converted to thin LUNs at any time. A thin control LUN could run out of space and lead to a Data Mover panic.
- VNX for block mapped pools support only RAID 5, RAID 6, and RAID 1/0:
  - RAID 5 is the default RAID type, with a minimum of three drives (2+1). Use multiples of five drives.
  - RAID 6 has a minimum of four drives (2+2). Use multiples of eight drives.
  - RAID 1/0 has a minimum of two drives (1+1).
- EMC Unisphere is required to provision virtual devices (thin and thick LUNs) on the VNX for block system. Any platforms that do not provide Unisphere access cannot use this feature.
- You cannot mix mirrored and non-mirrored LUNs in the same VNX for block system pool. You must separate mirrored and non-mirrored LUNs into different storage pools on VNX for block systems. If diskmark discovers both mirrored and non-mirrored LUNs, diskmark will fail.

Cautions
If any of this information is unclear, contact your EMC Customer Support Representative for assistance:

- Do not span a file system (including checkpoint file systems) across multiple storage systems. All parts of a file system must use the same disk volume type and be stored on a single storage system. Spanning more than one storage system increases the chance of data loss, data unavailability, or both. One storage system could fail while the other continues, and thus make failover difficult. In this case, the targets might not be consistent. In addition, a spanned file system is subject to any performance and feature set differences between storage systems.
- If you plan to set quotas on a file system to control the amount of space that users and groups can consume, turn on quotas immediately after creating the file system. Using Quotas on VNX contains instructions on turning on quotas and general quotas information.
- If your user environment requires international character support (that is, support of non-English character sets or Unicode characters), configure the VNX system to support this feature before creating file systems. Using International Character Sets with VNX contains instructions to support and configure international character support on a VNX system.


- If you plan to create TimeFinder/FS (local, NearCopy, or FarCopy) snapshots, do not use slice volumes (nas_slice) when creating the production file system (PFS). Instead, use the full portion of the disk presented to the VNX system. Using slice volumes for a PFS slated as the source for snapshots wastes storage space and can result in loss of PFS data.
- Automatic file system extension is interrupted during VNX system software upgrades. If automatic extension is enabled, the Control Station continues to capture the HWM events, but the actual file system extension does not start until the VNX system upgrade process completes.
- Closely monitor VNX for block pool space that contains pool LUNs to ensure that there is enough space available. Use the nas_pool -size <AVM pool name> command and look for the physical usage information. An alert is generated when a VNX for block pool reaches the user-defined threshold level.
- Deleting a thin file system or a thin disk volume does not release any space on a system:
  - To release the space in a thin pool on the Symmetrix storage system, unbind the LUN by using the symconfigure command.
  - To release the space in a thin pool on either a VNX or a legacy CLARiiON system, unbind the LUN by using the nas_disk -delete -perm -unbind command.
- Before removing a data service policy from a Fully Automated Storage Tiering (FAST) Symmetrix Storage Group that is already mapped to a VNX for file storage pool and is in use with multiple tiers, to prevent an error from occurring on the VNX for file, you must do one of the following:
  - Configure a single tier policy with the disk type wanted and allow the FAST engine to move the disks. Once the disks are moved to the same tier, remove the data service policy from the Symmetrix Storage Group and run diskmark.
  - Use the Symmetrix nondisruptive LUN migration utility to ensure that every file system is built on top of a single type of disk.
  - Migrate data through NFS or CIFS by using either VNX Replicator, the CLI nas_copy command, file system migration, or a third-party vendor's migration software.
- The Flash BCV (BCVE), R1EFD, R2EFD, R1BCVE, or R2BCVE standalone disk types are not supported on a VNX for file. However, a VNX for file supports using a FAST policy that contains a Flash tier as long as the FAST policy contains multiple tiers. When you need to remove a FAST policy that contains a Flash tier from the VNX for file Storage Group, an error will occur if the Flash technology is used in BCV, R1, or R2 devices. The nas_diskmark -mark -all operation cannot set disk types of BCVE, R1EFD, R2EFD, R1BCVE, or R2BCVE. To prevent an error from occurring, do one of the following:
  - Configure a single tier policy by using either FC or ATA disks, and allow the FAST engine to move the Flash disks to the selected type.
  - Use the Symmetrix nondisruptive LUN migration utility to ensure that the file system is built on top of a single type of disk, either FC or SATA.


- VNX thin provisioning allows you to specify a value above the maximum supported storage capacity for the system. If an alert message indicates that you are running out of space, or if you reach the system's storage capacity limits and have virtually provisioned resources that are not fully allocated, you may need to do one of the following:
  - Delete unnecessary data.
  - Enable VNX File Deduplication and Compression to try to reduce file system storage usage.
  - Migrate data to a different system that has space.
- Closely monitor Symmetrix pool space that contains pool LUNs to ensure that there is enough space available. Use the command /usr/symcli/bin/symcfg list -pool -thin -all to display pool usage, as shown in the example after this list.
- If the masking option is being used, moving LUNs between Symmetrix Storage Groups can cause file system disruption. If the LUNs need to be moved frequently between FAST Storage Groups for various performance requests, you can create separate FAST Storage Groups and Masking Storage Groups to avoid disruptions. A single LUN can belong to both a FAST Storage Group and a Masking Storage Group.
- The Symmetrix FAST capacity algorithm does not consider striping on the file system side. The algorithm may mix different technologies in the same striping volume, which can affect performance until the performance algorithm optimizes it. The initial configuration of the striping volumes is very important to ensure that the performance is maximized even before the initial data move is completed by the FAST engine. For example, if a FAST policy contains 50 percent Performance disk volumes and 50 percent Capacity disk volumes, and the storage group has 16 disk volumes, the initial configuration should be one striping metavolume with 8 Performance disk volumes and one striping metavolume with 8 Capacity disk volumes, instead of 4 Performance disk volumes and 4 Capacity disk volumes in the same striping metavolume. The same point needs to be considered when the FAST policy is changed or devices are added to or removed from the FAST storage group. AVM will try to use the same technology in the striping metavolume.
- If you are using Symmetrix or legacy CLARiiON systems, and you need to migrate a LUN that is in a VNX for file storage group, the size of the target LUN must be the same as the size of the source LUN, or data unavailability and data loss may occur. For better performance and improved space usage, ensure that the target LUN is a newly-created LUN with no existing data.
- Insufficient space on a Symmetrix pool that contains pool LUNs might result in a Data Mover panic and data unavailability. To avoid this situation, pre-allocate 100 percent of the TDEV when binding it to the pool. If you do not allocate 100 percent, there is the possibility of overallocation. Closely monitor the pool usage.
- Insufficient space on a VNX for block system pool that contains thin LUNs might result in a Data Mover panic and data unavailability. You cannot pre-allocate space on a VNX for file storage pool. Closely monitor the pool usage to avoid running out of space.
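The two monitoring commands mentioned in this list can be run from the Control Station at any time. A brief sketch, with an illustrative pool name:

   $ nas_pool -size clar_r5_performance
   $ /usr/symcli/bin/symcfg list -pool -thin -all

The first command reports the size and physical usage information for an AVM storage pool, including a pool backed by VNX for block pool LUNs; the second displays thin pool usage on an attached Symmetrix system.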


User interface choices


The VNX system offers flexibility in managing networked storage that is based on your support environment and interface preferences. This document describes how to use AVM by using the VNX command line interface (CLI). You can also perform many of these tasks by using one of the system's management applications:

- EMC Unisphere software
- Celerra Monitor
- Microsoft Management Console (MMC) snap-ins
- Active Directory Users and Computers (ADUC) extensions

The Unisphere software online help contains additional information about managing your VNX system. Installing Management Applications on VNX for File includes instructions on launching the Unisphere software, and on installing the MMC snap-ins and the ADUC extensions. The VNX Release Notes contain additional, late-breaking information about VNX management applications. Table 2 on page 19 identifies the storage pool tasks that you can perform in each interface, and the command syntax or the path to the Unisphere software page to use to perform the task. Unless otherwise noted in the task, the operations apply to user-defined and system-defined storage pools. The VNX Command Line Interface Reference for File contains more information on the commands described in Table 2 on page 19.
Table 2. Storage pool tasks supported by user interface

Task: Create a new user-defined storage pool by volumes.
Note: This task applies only to user-defined storage pools.
CLI: nas_pool -create -name <name> -volumes <volumes>
Unisphere: Select Storage > Storage Configuration > Storage Pools for File, and click Create.

Task: Create a new user-defined storage pool by size.
Note: This task applies only to user-defined storage pools.
CLI: nas_pool -create -name <name> -size <integer>[M|G|T] -template <system_pool_name> -num_stripe_members <num> -stripe_size <num>
Unisphere: Select Storage > Storage Configuration > Storage Pools for File, and click Create.

Task: List existing storage pools.
CLI: nas_pool -list
Unisphere: Select Storage > Storage Configuration > Storage Pools for File.

Task: Display storage pool details.
CLI: nas_pool -info <name>
Note: When you perform this operation in the CLI, the total_potential_mb does not include the space in the storage pool in the output.
Unisphere: Select Storage > Storage Configuration > Storage Pools for File, and click Properties.
Note: When you perform this operation in Unisphere, the total_potential_mb represents the total available storage, including the storage pool.

Task: Display storage pool size information.
CLI: nas_pool -size <name>
Unisphere: Select Storage > Storage Configuration > Storage Pools for File, and view the Storage Capacity and Storage Used(%) columns.

Task: Specify whether AVM uses slice volumes or entire unused disk volumes from the storage pool to create or expand a file system.
CLI: nas_pool -modify {<name>|id=<id>} -default_slice_flag {y|n}
Unisphere: Select Storage > Storage Configuration > Storage Pools for File, and click Properties. Select or clear Slice Pool Volumes by Default? as required.

Task: Specify whether AVM extends the storage pool automatically with unused disk volumes whenever the pool needs more space.
Note: This task applies only to system-defined storage pools.
CLI: nas_pool -modify {<name>|id=<id>} -is_dynamic {y|n}
Unisphere: Select Storage > Storage Configuration > Storage Pools for File, and click Properties. Select or clear Automatic Extension Enabled as required.

Task: Specify the is_greedy attribute. Specifying y tells AVM to allocate new, unused disk volumes to the storage pool when creating or expanding, even if there is available space in the pool. Specifying n tells AVM to allocate all available storage pool space to create or expand a file system before adding volumes to the pool. When extending a file system, the is_greedy attribute is ignored unless there is not enough free space on the existing volumes that the file system is using. Table 7 on page 36 describes the is_greedy behavior.
Note: This task applies only to system-defined storage pools.
CLI: nas_pool -modify {<name>|id=<id>} -is_greedy {y|n}
Unisphere: Select Storage > Storage Configuration > Storage Pools for File, and click Properties. Select or clear Obtain Unused Disk Volumes as required.

Task: Add volumes to a user-defined storage pool.
Note: This task applies only to user-defined storage pools.
CLI: nas_pool -xtend {<name>|id=<id>} -volumes <volume_name>[,<volume_name>,...]
Unisphere: Select Storage > Storage Configuration > Storage Pools for File. Select the storage pool that you want to extend, and click Extend. Select one or more volumes to add to the pool.

Task: Extend a storage pool by size and specify a storage system from which to allocate storage.
Note: This task applies to system-defined storage pools only when the is_dynamic attribute for the storage pool is set to n.
CLI: nas_pool -xtend {<name>|id=<id>} -size <integer>[M|G|T] -storage <system_name>
Unisphere: Select Storage > Storage Configuration > Storage Pools for File. Select the storage pool that you want to extend, and click Extend. Select the Storage System to be used to extend the file system, and type the size requested in MB, GB, or TB.
Note: The drop-down list shows all the available storage systems. The volumes shown are only those created on the storage system that is highlighted.

Task: Remove volumes from a storage pool.
CLI: nas_pool -shrink {<name>|id=<id>} -volumes <volume_name>[,<volume_name>,...] [-deep]
The -deep setting is optional, and is used to recursively remove all members.
Unisphere: Select Storage > Storage Configuration > Storage Pools for File. Select the storage pool that you want to shrink, and click Shrink. Select one or more volumes that are not in use to be removed from the pool.

Task: Delete a storage pool.
Note: This task applies only to user-defined storage pools.
CLI: nas_pool -delete {<name>|id=<id>} [-deep]
The -deep setting is optional, and is used to recursively remove all members.
Unisphere: Select Storage > Storage Configuration > Storage Pools for File. Select the storage pool that you want to delete, and click Delete.

Task: Change the name of a storage pool.
Note: This task applies only to user-defined storage pools.
CLI: nas_pool -modify {<name>|id=<id>} -name <name>
Unisphere: Select Storage > Storage Configuration > Storage Pools for File, and click Properties. Type the new name in the Name text box.

Task: Create a file system with automatic file system extension enabled.
CLI: nas_fs -name <name> -type <type> -create pool=<pool> storage=<system_name> {size=<integer>[T|G|M]} -auto_extend {no|yes}
Unisphere: Select Storage > Storage Configuration > Storage Pools for File, and click Create. Select Automatic Extension Enabled.
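For example, the following sequence uses the CLI syntax from the table to create a user-defined storage pool from four unused disk volumes, display its details, and then remove one volume. The pool name engineering_pool and the disk volume names d7 through d10 are illustrative:

   $ nas_pool -create -name engineering_pool -volumes d7,d8,d9,d10
   $ nas_pool -info engineering_pool
   $ nas_pool -shrink engineering_pool -volumes d10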

Related information
Specific information related to the features and functionality described in this guide is included in:

- VNX Command Line Interface Reference for File
- Parameters Guide for VNX for File
- Configuring NDMP Backups to Disk on VNX
- Controlling Access to System Objects on VNX
- Managing Volumes and File Systems for VNX Manually
- Online VNX man pages

EMC VNX documentation on the EMC Online Support website
The complete set of EMC VNX series customer publications is available on the EMC Online Support website. To search for technical documentation, go to http://Support.EMC.com. After logging in to the website, click the VNX Support by Product page to locate information for the specific feature required.

VNX wizards
Unisphere software provides wizards for performing setup and configuration tasks. The Unisphere online help provides more details on the wizards.


2 Concepts

Topics included are:


- AVM overview on page 24
- System-defined storage pools overview on page 24
- Mapped storage pools overview on page 25
- User-defined storage pools overview on page 26
- File system and automatic file system extension overview on page 26
- AVM storage pool and disk type options on page 27
- Storage pool attributes on page 35
- System-defined storage pool volume and storage profiles on page 38
- File system and storage pool relationship on page 51
- Automatic file system extension on page 53
- Thin provisioning on page 57
- Planning considerations on page 57


AVM overview
The AVM feature automatically creates and manages file system storage. AVM is storage-system independent and supports existing requirements for automatic storage allocation (SnapSure, SRDF, and IP replication). You can configure file systems created with AVM to automatically extend. The automatic extension feature enables you to configure a file system so that it extends automatically, without system administrator intervention, to support file system operations. Automatic extension causes the file system to extend when it reaches the specified usage point, the HWM, as described in Automatic file system extension on page 53. You set the size for the file system you create, and also the maximum size to which you want the file system to grow. The thin provisioning option lets you present the maximum size of the file system to the user or application, of which only a portion is actually allocated. Thin provisioning allows the file system to slowly grow on demand as the data is written.
Note: Enabling the thin provisioning option with automatic extension does not automatically reserve the space from the storage pool for that file system. Administrators must ensure that adequate storage space exists so that the automatic extension operation can succeed. If the available storage is less than the maximum size setting, then automatic extension fails. Users receive an error message when the file system becomes full, even though it appears that there is free storage space in the file system.

File systems support the following FAST data service policies:

- For VNX for block systems: thin LUNs and thick LUNs, compression, auto-tiering, and mirroring (EMC MirrorView or RecoverPoint).
- For Symmetrix systems: thin LUNs and thick LUNs, auto-tiering, and R1, R2, or BCV disk volumes.

To create file systems, use one or more types of AVM storage pools:

- System-defined storage pools
- User-defined storage pools

System-defined storage pools overview


System-defined storage pools are predefined and available with the VNX system. You cannot create or delete these predefined storage pools. You can modify some of the attributes of the system-defined storage pools, but this is unnecessary. AVM system-defined storage pools do not preclude the use of user-defined storage pools or manual volume and file system management, but instead give system administrators a simple volume and file system management tool. With VNX command options and interfaces that support AVM, you can use system-defined storage pools to create and expand file systems without manually creating and managing stripe volumes, slice volumes, or metavolumes. If your applications do not require precise placement of file systems on particular disks or on particular locations on specific disks, using AVM is an efficient way for you to create file systems.

Flash drives behave differently than Performance or Capacity drives. AVM uses different logic to configure file systems on Flash drives. To configure Flash drives for maximum performance, AVM may select more disk volumes than are needed to satisfy the requested capacity. While the individual disk volumes are no longer available for manual volume management, the unused Flash drive space is still available for creating additional file systems or extending existing file systems. VNX for block system-defined storage pools for Flash support on page 44 contains additional information about using Flash drives.

AVM system-defined storage pools are adequate for most high availability and performance considerations. Each system-defined storage pool manages the details of allocating storage to file systems. When you create a file system by using AVM system-defined storage pools, storage is automatically allocated from the pool to the new file system. After the storage is allocated from that pool, the storage pool can dynamically grow and shrink to meet the file system needs.

Mapped storage pools overview


A mapped pool is a storage pool that is dynamically created during the normal storage discovery (diskmark) process for use on the VNX for file. It is a one-to-one mapping with either a VNX storage pool or a FAST Symmetrix Storage Group. A mapped pool can contain a mix of different types of LUNs that use any combination of data services:

- thin
- thick
- auto-tiering
- mirrored
- VNX compression

However, ensure that the mapped pool contains only the same type of LUNs that use the same data services for the best file system performance:

- all thick
- all thin
- all the same auto-tiering options
- all mirrored or none mirrored
- all compressed or none compressed

If a mapped pool is not in use and no LUNs exist in the file-based storage group that corresponds to the pool, the pool will be deleted automatically during diskmark. VNX for block data services can be configured at the LUN level. When creating a file system with mapped pools, the default slice option is set to no to help prevent inconsistent data services on the file system.


User-defined storage pools overview


User-defined storage pools allow you to create containers or pools of storage, filled with manually created volumes. When the applications require precise placement of file systems on particular disks or locations on specific disks, use AVM user-defined storage pools for more control. User-defined storage pools also allow you to reserve disk volumes so that the system-defined storage pools cannot use them. User-defined storage pools provide a better option for those who want more control over their storage allocation while still using the more automated management tool.

User-defined storage pools are not as automated as the system-defined storage pools. You must specify some attributes of the storage pool and the storage system from which the space is allocated to create file systems. While somewhat less involved than creating volumes and file systems manually, using these storage pools requires more manual involvement on your part than the system-defined storage pools.

When you create a file system by using a user-defined storage pool, you must:

1. Create the storage pool.
2. Choose and add volumes to it either by manually selecting and building the volume structure or by auto-selection.
3. Expand it with new volumes when required.
4. Remove volumes you no longer require in the storage pool.

Auto-selection is performed by choosing a minimum size and a system pool which describes the disk attributes. With auto-selection, whole disk volumes are taken from the volumes available in the system pool and placed in the user pool according to the selected stripe options. Auto-selection uses the same AVM algorithms that choose which disk volumes to stripe in a system pool. System-defined storage pool volume and storage profiles on page 38 describes the AVM algorithms used.
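A minimal sketch of creating a user-defined pool by auto-selection, using the create-by-size syntax shown in Table 2 on page 19. The pool name userpool1 and the 200 GB size are illustrative:

   $ nas_pool -create -name userpool1 -size 200G -template clar_r5_performance

Here AVM takes whole disk volumes with the characteristics of the clar_r5_performance system pool and places them in userpool1; the optional -num_stripe_members and -stripe_size arguments shown in Table 2 control how the selected disk volumes are striped.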

File system and automatic file system extension overview


You can create or extend file systems with AVM storage pools and configure the file system to automatically extend as needed. You can do one of the following:

- Enable automatic extension on a file system when it is created.
- Enable and disable it at any later time by modifying the file system.

The options that work with automatic file system extension are:

- HWM
- Maximum size
- Thin provisioning

The HWM and maximum size are described in Automatic file system extension on page 53. Thin provisioning is described in Thin provisioning on page 57.


The default supported maximum size for any file system is 16 TB. With automatic extension, the maximum size is the size to which the file system could grow, up to the supported 16 TB. Setting the maximum size is optional with automatic extension, but mandatory with thin provisioning. With thin provisioning enabled, users and applications see the maximum size, while only a portion of that size is actually allocated to the file system. Automatic extension allows the file system to grow as needed without system administrator intervention, and meet system operations requirements continuously, without interruptions.
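A minimal sketch of creating a file system with automatic extension and thin provisioning enabled, assuming the option spellings shown in Table 2 on page 19 and the -thin option of the nas_fs command described in the restrictions. The file system name, pool, and sizes are illustrative:

   $ nas_fs -name fs01 -create size=10G pool=clar_r5_performance -auto_extend yes -max_size 100G -thin yes

Clients of fs01 see a 100 GB file system although only 10 GB is allocated initially; each time usage crosses the HWM, AVM extends the file system toward the 100 GB maximum.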

AVM storage pool and disk type options


AVM provides a range of options for managing volumes. The VNX system can choose the configuration and placement of the file systems by using system-defined storage pools, or you can create a user-defined storage pool and define its attributes. This section contains the following:

- AVM storage pools on page 27
- Disk types on page 27
- System-defined storage pools on page 30
- RAID groups and storage characteristics on page 33
- User-defined storage pools on page 35

AVM storage pools


An AVM storage pool is a container or pool of volumes. Table 3 on page 27 lists the major difference between system-defined and user-defined storage pools.
Table 3. System-defined and user-defined storage pool difference

Functionality: Ability to grow and shrink
- System-defined storage pools: Automatic, but the dynamic behavior can be disabled.
- User-defined storage pools: Manual only. Administrators must manage the volume configuration, addition, and removal of storage from these storage pools.

Chapter 4 provides more detailed information.

Disk types
A storage pool must contain volumes from only one disk type.


Table 4 on page 28 lists the available disk types associated with the storage pools and the disk type descriptions.
Table 4. Disk types

CLSTD: Standard VNX for block disk volumes.
CLATA: VNX for block Capacity disk volumes.
CLSAS: VNX for block Serial Attached SCSI (SAS) disk volumes.
CLEFD: VNX for block Performance and SATA II Flash drive disk volumes.
CMATA: VNX for block Capacity disk volumes for use with EMC MirrorView/Synchronous.
CMSTD: Standard VNX for block disk volumes for use with MirrorView/Synchronous.
CMEFD: VNX for block CLEFD disk volumes that are used with MirrorView/Synchronous.
CMSAS: VNX for block SAS disk volumes that are used with MirrorView/Synchronous.
STD: Standard Symmetrix disk volumes, typically RAID 1 configuration.
R1STD: Symmetrix Performance disk volumes, set up as source for mirrored storage that uses SRDF functionality.
R2STD: Standard Symmetrix disk volume that is a mirror of another standard Symmetrix disk volume over RDF links.
EFD: High performance Symmetrix disk volumes built on Flash drives, typically RAID 5 configuration.
ATA: Standard Symmetrix disk volumes built on Capacity drives, typically RAID 1 configuration.
R1ATA: Symmetrix Capacity disk volumes, set up as source for mirrored storage that uses SRDF functionality.
R2ATA: Symmetrix Capacity disk volumes, set up as target for mirrored storage that uses SRDF functionality.


Table 4. Disk types (continued)

Performance: VNX for block Performance disk volumes that correspond to VNX for block pool-based LUNs.
Capacity: VNX for block Capacity disk volumes that correspond to VNX for block pool-based LUNs.
Extreme_performance: VNX for block Flash disk volumes that correspond to VNX for block pool-based LUNs.
Mixed: For VNX for block, a mixture of VNX for block Performance, Capacity, or Flash disk volumes that correspond to VNX for block pool-based LUNs. For Symmetrix, a mixture of Symmetrix Flash, Performance, or Capacity disk volumes that correspond to devices in FAST Storage Groups.
Mirrored_mixed: For VNX for block, a mixture of VNX for block Performance, Capacity, or Flash disk volumes that correspond to VNX for block pool-based LUNs used with MirrorView/Synchronous.
Mirrored_performance: For VNX for block, Performance disk volumes that correspond to VNX for block pool-based LUNs used with MirrorView/Synchronous.
Mirrored_capacity: For VNX for block, Capacity disk volumes that correspond to VNX for block pool-based LUNs used with MirrorView/Synchronous.
Mirrored_extreme_performance: For VNX for block, Flash disk volumes that correspond to VNX for block pool-based LUNs used with MirrorView/Synchronous.
BCV: Business continuance volume (BCV) for use by TimeFinder/FS operations.
BCVA: BCV, built from Capacity disks, for use by TimeFinder/FS operations.
R1BCA: BCV, built from Capacity disks, that is mirrored to a different Symmetrix over RDF links, RAID 1 configuration, and used as a source volume by TimeFinder/FS operations.
R2BCA: BCV, built from Capacity disks, that is a mirror of another BCV over RDF links, and used as a target or destination volume by TimeFinder/FS operations.


Table 4. Disk types (continued)

R1BCV: BCV that is mirrored to a different Symmetrix over RDF links, RAID 1 configuration, and used as a source volume by TimeFinder/FS operations.
R2BCV: BCV that is a mirror of another BCV over RDF links, and used as a target or destination volume by TimeFinder/FS operations.
BCVMixed: BCV, built from a mixture of Symmetrix Flash, Performance, or Capacity disk volumes, and used by TimeFinder/FS operations.
R1Mixed: A mixture of Symmetrix Flash, Performance, or Capacity disk volumes, set up as source for mirrored storage that uses SRDF functionality.
R2Mixed: A mixture of Symmetrix Flash, Performance, or Capacity disk volumes, set up as target for mirrored storage that uses SRDF functionality.
R1BCVMixed: Mixed BCV that is mirrored to a different Symmetrix over RDF links, RAID 1 configuration, and used as a source volume by TimeFinder/FS operations.
R2BCVMixed: Mixed BCV that is a mirror of another BCV over RDF links, and used as a target or destination volume by TimeFinder/FS operations.

System-defined storage pools


Choosing system-defined storage pools to build the file system is an efficient way to manage volumes and file systems. They are associated with the type of attached storage system you have. This means that:

- VNX for block storage pools are available for attached VNX for block storage systems.
- Symmetrix storage pools are available for attached Symmetrix storage systems.

System-defined storage pools are dynamic by default. The AVM feature adds and removes volumes automatically from the storage pool as needed. Table 5 on page 31 lists the system-defined storage pools supported on the VNX for file. RAID groups and storage characteristics on page 33 contains additional information about RAID group combinations for system-defined storage pools.


Note: A storage pool can include disk volumes of only one type.

Table 5. System-defined storage pools

symm_std: Designed for high performance and availability at medium cost. This storage pool uses STD disk volumes (typically RAID 1).
symm_ata: Designed for high performance and availability at low cost. This storage pool uses ATA disk volumes (typically RAID 1).
symm_std_rdf_src: Designed for high performance and availability at medium cost, specifically for storage that will be mirrored to a remote VNX for file that uses SRDF, or to a local VNX for file that uses TimeFinder/FS. Using SRDF/S with VNX for Disaster Recovery and Using TimeFinder/FS, NearCopy, and FarCopy on VNX for File provide more information about the SRDF feature.
symm_std_rdf_tgt: Designed for high performance and availability at medium cost, specifically as a mirror of a remote VNX for file that uses SRDF. This storage pool uses Symmetrix R2STD disk volumes. Using SRDF/S with VNX for Disaster Recovery provides more information about the SRDF feature.
symm_ata_rdf_src: Designed for archival performance and availability at low cost, specifically for storage mirrored to a remote VNX for file that uses SRDF. This storage pool uses Symmetrix R1ATA disk volumes. Using SRDF/S with VNX for Disaster Recovery provides more information about the SRDF feature.
symm_ata_rdf_tgt: Designed for archival performance and availability at low cost, specifically as a mirror of a remote VNX for file that uses SRDF. This storage pool uses Symmetrix R2ATA disk volumes. Using SRDF/S with VNX for Disaster Recovery provides more information about the SRDF feature.
symm_efd: Designed for very high performance and availability at high cost. This storage pool uses Flash disk volumes (typically RAID 5).
clar_r1: Designed for high performance and availability at low cost. This storage pool uses CLSTD disk volumes created from RAID 1 mirrored-pair disk groups.
clar_r6: Designed for high availability at low cost. This storage pool uses CLSTD disk volumes created from RAID 6 disk groups.
clar_r5_performance: Designed for medium performance and availability at low cost. This storage pool uses CLSTD disk volumes created from 4+1 RAID 5 disk groups.
clar_r5_economy: Designed for medium performance and availability at low cost. This storage pool uses CLSTD disk volumes created from 8+1 RAID 5 disk groups.


Table 5. System-defined storage pools (continued)

clarata_archive: Designed for use with infrequently accessed data, such as archive retrieval. This storage pool uses CLATA disk drives in a RAID 5 configuration.
clarata_r3: Designed for archival performance and availability at low cost. This AVM storage pool uses LCFC, SATA II, and CLATA disk drives in a RAID 3 configuration.
clarata_r6: Designed for high availability at low cost. This storage pool uses CLATA disk volumes created from RAID 6 disk groups.
clarata_r10: Designed for high performance and availability at medium cost. This storage pool uses two CLATA disk volumes in a RAID 1/0 configuration.
clarsas_archive: Designed for medium performance and availability at medium cost. This storage pool uses VNX Serial Attached SCSI (SAS) disk volumes created from RAID 5 disk groups.
clarsas_r6: Designed for high availability at medium cost. This storage pool uses CLSAS disk volumes created from RAID 6 disk groups.
clarsas_r10: Designed for high performance and availability at medium cost. This storage pool uses two CLSAS disk volumes in a RAID 1/0 configuration.
clarefd_r5: Designed for very high performance and availability at high cost. This storage pool uses CLEFD disk volumes created from 4+1 and 8+1 RAID 5 disk groups.
clarefd_r10: Designed for high performance and availability at medium cost. This storage pool uses two CLEFD disk volumes in a RAID 1/0 configuration.
cm_r1: Designed for high performance and availability at low cost. This storage pool uses CMSTD disk volumes created from RAID 1 mirrored-pair disk groups for use with MirrorView/Synchronous.
cm_r5_performance: Designed for medium performance and availability at low cost. This storage pool uses CMSTD disk volumes created from 4+1 RAID 5 disk groups for use with MirrorView/Synchronous.
cm_r5_economy: Designed for medium performance and availability at low cost. This storage pool uses CMSTD disk volumes created from 8+1 RAID 5 disk groups for use with MirrorView/Synchronous.
cm_r6: Designed for high availability at low cost. This storage pool uses CMSTD disk volumes created from RAID 6 disk groups for use with MirrorView/Synchronous.

cmata_archive: Designed for use with infrequently accessed data, such as archive retrieval. This storage pool uses CMATA disk drives in a RAID 5 configuration for use with MirrorView/Synchronous.

cmata_r3: Designed for archival performance and availability at low cost. This AVM storage pool uses CMATA disk drives in a RAID 3 configuration for use with MirrorView/Synchronous.

cmata_r6: Designed for high availability at low cost. This storage pool uses CMATA disk volumes created from RAID 6 disk groups for use with MirrorView/Synchronous.

cmata_r10: Designed for high performance and availability at medium cost. This storage pool uses two CMATA disk volumes in a RAID 1/0 configuration for use with MirrorView/Synchronous.

cmsas_archive: Designed for medium performance and availability at medium cost. This storage pool uses CMSAS disk volumes created from RAID 5 disk groups for use with MirrorView/Synchronous.

cmsas_r6: Designed for high availability at low cost. This storage pool uses CMSAS disk volumes created from RAID 6 disk groups for use with MirrorView/Synchronous.

cmsas_r10: Designed for high performance and availability at medium cost. This storage pool uses two CMSAS disk volumes in a RAID 1/0 configuration for use with MirrorView/Synchronous.

cmefd_r5: Designed for very high performance and availability at high cost. This storage pool uses CMEFD disk volumes created from 4+1 and 8+1 RAID 5 disk groups for use with MirrorView/Synchronous.

cmefd_r10: Designed for high performance and availability at medium cost. This storage pool uses two CMEFD disk volumes in a RAID 1/0 configuration for use with MirrorView/Synchronous.

RAID groups and storage characteristics


The following table correlates each storage array with the RAID group combinations supported by the system-defined storage pools.


Table 6. RAID group combinations

Storage                         RAID 5                      RAID 6                  RAID 1          RAID 1/0
NX4 SAS or SATA                 2+1, 3+1, 4+1, 5+1          4+2                     Not supported   1+1
NS20 / NS40 / NS80 FC           4+1, 6+1, 8+1               4+2, 6+2                1+1             Not supported
NS20 / NS40 / NS80 ATA          4+1, 8+1                    4+2, 6+2                Not supported   Not supported
NS-120 / NS-480 / NS-960 FC     4+1, 6+1, 8+1               4+2, 6+2, 12+2          1+1             1+1
NS-120 / NS-480 / NS-960 ATA    4+1, 8+1                    4+2, 6+2, 12+2          Not supported   1+1
NS-120 / NS-480 / NS-960 EFD    4+1, 8+1                    Not supported           Not supported   1+1
VNX SAS                         3+1, 4+1, 6+1, 8+1          4+2, 6+2, 12+2          1+1             1+1
VNX NL SAS                      Not supported               4+2, 6+2, 12+2          Not supported   Not supported


User-defined storage pools


For some customer environments, more user control is required than the system-defined storage pools offer. AVM user-defined storage pools give administrators that control by letting them create their own storage pools, define the pool attributes, and decide how storage is allocated to file systems. Administrators can add volumes to a user-defined storage pool either by manually selecting and building the volume structure, or by auto-selection, and can expand the pool with new volumes when required and remove volumes that are no longer needed.

With auto-selection, you choose a minimum size and a system pool that describes the disk attributes. Whole disk volumes are then taken from the volumes available in the system pool and placed in the user pool according to the selected stripe options. Auto-selection uses the same AVM algorithms that choose which disk volumes to stripe in a system pool. When extending a user-defined storage pool, AVM references the last pool member's volume structure and makes a best effort to keep the underlying volume structures consistent. System-defined storage pool volume and storage profiles on page 38 contains additional information.

Although user-defined storage pools have attributes similar to system-defined storage pools, user-defined storage pools are not dynamic: if you define the storage pool, you must explicitly add storage to it, remove storage from it, and define its attributes.

Use the nas_pool command to do the following:

List, create, delete, extend, shrink, and view storage pools. Modify the attributes of storage pools.
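For example, the following sequence sketches a typical user-defined pool lifecycle. The pool name mktg_pool and the disk volumes d126 through d131 are hypothetical; substitute disk volumes that nas_disk -list reports as unused on your system, and verify the options against the nas_pool man page for your release:

nas_pool -create -name mktg_pool -volumes d126,d127,d128,d129
nas_pool -info mktg_pool
nas_pool -xtend mktg_pool -volumes d130,d131
nas_pool -shrink mktg_pool -volumes d130,d131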

Create file systems with AVM on page 68 and Chapter 4 provide more information. Understanding how AVM storage pools work enables you to determine whether system-defined storage pools, user-defined storage pools, or both, are appropriate for the environment. It is also important to understand the ways in which you can modify the storage-pool behavior to suit your file system requirements. Modify system-defined and user-defined storage pool attributes on page 107 provides a list of all the attributes and the procedures to modify them.

Storage pool attributes


System-defined and user-defined storage pools have attributes that control how they create volumes and file systems. Table 7 on page 36 lists the storage pool attributes, their values, whether an attribute is modifiable and for which storage pools, and a description of the attribute. The system-defined storage pools are shipped with the VNX system. They are designed to optimize performance based on the hardware configuration. Each of the


system-defined storage pools has associated profiles that define the kind of storage used, and how new storage is added to, or deleted from, the storage pool.
Table 7. Storage pool attributes

name
    Values: Quoted string
    Modifiable: Yes (user-defined storage pools)
    Description: Unique name. If a name is not specified during creation, one is automatically generated.

description
    Values: Quoted string
    Modifiable: Yes (user-defined storage pools)
    Description: A text description. Default is "" (blank string).

acl
    Values: Integer. For example, 0.
    Modifiable: Yes (user-defined storage pools)
    Description: Access control level. Controlling Access to System Objects on VNX contains instructions to manage access control levels.

default_slice_flag
    Values: "y" | "n"
    Modifiable: Yes (system-defined and user-defined storage pools)
    Description: Indicates whether AVM can slice member volumes to meet the file system request. A y entry tells AVM to create a slice of exactly the correct size from one or more member volumes. An n entry gives the primary or source file system exclusive access to one or more member volumes.
    Note: If using TimeFinder or automatic file system extension, this attribute should be set to n. You cannot restore file systems built with sliced volumes to a previous state by using TimeFinder/FS.

is_dynamic
    Values: "y" | "n"
    Modifiable: Yes (system-defined storage pools)
    Description: Indicates whether this storage pool is allowed to automatically add or remove member volumes. The default value is n.
    Note: This attribute is applicable only if volume_profile is not blank.

is_greedy
    Values: "y" | "n"
    Modifiable: Yes (system-defined storage pools)
    Description: Indicates whether a storage pool is greedy. When a storage pool receives a request for space, a greedy storage pool (y) attempts to create a new member volume before searching for free space in existing member volumes. A storage pool that is not greedy (n) uses all available space in the storage pool before creating a new member volume.
    Note: This attribute is applicable only if volume_profile is not blank. When extending a file system, AVM searches for free space on the existing volumes that the file system is currently using and ignores the is_greedy attribute value. If there is not enough free space available, AVM first uses the available space of the existing volumes of the file system, and then uses the is_greedy attribute value to determine where to look for the remaining space.
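As an illustrative sketch, these attributes are changed with nas_pool -modify. The pool names are examples; verify the exact option names against the nas_pool man page for your release:

nas_pool -modify clar_r5_performance -is_greedy n
nas_pool -modify clar_r5_performance -is_dynamic n
nas_pool -modify mktg_pool -default_slice_flag n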

The system-defined storage pools are designed for use with the Symmetrix and VNX for block storage systems. The structure of volumes created by AVM might differ greatly depending on the type of storage system that is used by the various storage pools. This difference allows AVM to exploit the architecture of current and future block storage devices that are attached to the VNX for file. Figure 1 on page 38 shows how the different storage pools are associated with the disk volumes for each storage-system type attached. The nas_disk -list command lists the disk volumes. These are the representation of the VNX for file LUNs that are exported from the attached storage system.


Note: Any given disk volume must be a member of only one storage pool.

Figure 1. AVM system-defined storage pools (each AVM storage pool, for example symm_std, clar_r1, clar_r5_performance, clarata_archive, or cmata_r6, contains the disk volumes d3, d4, ..., dn exported from the attached Symmetrix or VNX for block storage system)

System-defined storage pool volume and storage profiles


Volume profiles are the set of rules and parameters that define how new storage is added to a system-defined storage pool. A volume profile defines a standard method of building a large section of storage from a set of disk volumes. This large section of storage can be added to a storage pool that might contain similar large sections of storage. The system-defined storage pool is responsible for satisfying requests for any amount of storage.

Users cannot create or delete system-defined storage pools and their associated profiles. However, users can list, view, and extend the system-defined storage pools, and also modify storage pool attributes.

Volume profiles have an attribute named storage_profile. A volume profile's storage profile defines the rules and attributes that are used to aggregate a number of disk volumes (listed by the nas_disk -list command) into a volume that can be added to a system-defined storage pool. A volume profile uses its storage profile to determine the set of disk volumes to select (or to match existing VNX disk volumes), where a given disk volume might match the rules and attributes of a storage profile.
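To see how the system-defined pools are configured on a particular system, you can list and inspect them. A minimal sketch; the output varies by system and attached storage:

nas_pool -list
nas_pool -info clar_r5_performance
nas_pool -size clar_r5_performance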


The following sections explain how these profiles help system-defined storage pools aggregate the disk volumes into storage pool members, place the members into storage pools, and then build file systems for each storage-system type:

VNX for block system-defined storage pool algorithms on page 39

VNX for block system-defined storage pools for RAID 5, RAID 3, and RAID 1/0 SATA support on page 42

VNX for block system-defined storage pools for Flash support on page 44

Symmetrix system-defined storage pools algorithm on page 45

VNX for block mapped pool file systems on page 48

Symmetrix mapped pool file systems on page 49

When you use the system-defined storage pools without modification, through either the Unisphere software or the VNX CLI, this activity is transparent to users.

VNX for block system-defined storage pool algorithms


When you create a file system that requires new storage, AVM attempts to create the most optimal stripe volume for a VNX for block storage system. System-defined storage pools for VNX for block storage systems work with LUNs of a specific type, for example, 4+1 RAID 5 LUNs for the clar_r5_performance storage pool.

VNX for block integrated models use storage templates to create the LUNs that the VNX for file recognizes as disk volumes. VNX for block storage templates are a combination of template definition files and scripts that create RAID groups and bind LUNs on VNX for block storage systems. You see only the scripts, not the templates. These storage templates are invoked by using the VNX for block root-only setup script or by using the Unisphere software.

Disk volumes exported from a VNX for block storage system are relatively large. A VNX for block system also has two storage processors (SPs). Most VNX for block storage templates create two LUNs per RAID group, one owned by SP A and the other by SP B. Only the VNX for block RAID 3 storage templates create both LUNs owned by the same SP.

If no disk volumes are found when a request for space is made, AVM considers the storage pool attributes, and initiates the next step based on these settings:

The is_greedy setting indicates if the storage pool must add a new member volume to meet the request, or if it must use all the available space in the storage pool before adding a new member volume. AVM then checks the is_dynamic setting.
Note: When extending a file system, the is_greedy attribute is ignored unless there is not enough free space on the existing volumes that the file system is using. Table 7 on page 36 describes the is_greedy behavior.

The is_dynamic setting indicates if the storage pool can dynamically grow and shrink:


If set to yes, then it allows AVM to automatically add a member volume to meet the request. If set to no, and a member volume must be added to meet the request, then the user must manually add the member volume to the storage pool.

The flag that requests a file system slice indicates if the file system can be built on a slice volume from a member volume. The default_slice_flag setting indicates if AVM can slice storage pool member volumes to meet the request.

Most of the system-defined storage pools for VNX for block storage systems first search for four same-size disk volumes from different buses, different SPs, and different RAID groups. The absolute criteria that the volumes must meet are:

The disk volume cannot exceed 14 TB.

The disk volume must match the type specified in the storage profile of the storage pool.

The disk volumes must be of the same size.

No two disk volumes can come from the same RAID group.

The disk volumes must be on a single storage system.

If found, AVM stripes the LUNs together and inserts the stripe into the storage pool. If AVM cannot find the four disk volumes that are bus-balanced, it looks for four same-size disk volumes that are SP-balanced from different RAID groups. If not found, AVM then searches for four same-size disk volumes from different RAID groups. Next, if AVM has been unable to satisfy these requirements, it looks for three same-size disk volumes that are SP-balanced from different RAID groups, and so on, until the only option left is for AVM to use one disk volume. The criteria that the one disk volume must meet are:

The disk volume cannot exceed 14 TB.

The disk volume must match the type specified in the storage profile of the storage pool.

If multiple volumes match the first two criteria, the disk volume must be from the least-used RAID group.


Figure 2 on page 41 shows the algorithm used to select disk volumes to add to a pool member in an AVM VNX for block system-defined storage pool, which is either clar_r1, clar_r5_performance, or clar_r5_economy.

Figure 2. clar_r1, clar_r5_performance, and clar_r5_economy storage pools algorithm


Figure 3 on page 42 shows the structure of a clar_r5_performance storage pool. The volumes in the storage pools are balanced between SP A and SP B.
Figure 3. clar_r5_performance storage pool structure (two stripe volumes, each built from VNX 4+1 RAID 5 disk volumes balanced between storage processor A and storage processor B)

VNX for block system-defined storage pools for RAID 5, RAID 3, and RAID 1/0 SATA support
The three VNX for block system-defined storage pools that provide support for the SATA environment are clarata_archive (RAID 5), clarata_r3 (RAID 3), and clarata_r10 (RAID 1/0). The clarata_r3 storage pool follows the basic VNX for block algorithm explained in System-defined storage pool volume and storage profiles on page 38, but uses only one disk volume and does not allow striping of volumes. One of the applications for this pool is backup to disk. Users can manage the RAID 3 disk volumes manually in a user-defined storage pool. However, using the system-defined storage pool clarata_r3 helps users maximize the benefit from AVM disk selection algorithms. The clarata_r3 storage pool supports only VNX for block Capacity drives, not Performance drives. The criteria that the one disk volume must meet are:

The disk volume cannot exceed 14 TB.

The disk volume must match the type specified in the storage profile of the storage pool.

If multiple volumes match the first two criteria, the disk volume must be from the least-used RAID group.


Figure 4 on page 43 shows the storage pool clarata_r3 algorithm.

Figure 4. clarata_r3 storage pool algorithm

The storage pools clarata_archive and clarata_r10 differ from the basic VNX for block algorithm. These storage pools use two disk volumes or a single disk volume, and all Capacity drives are treated the same.


Figure 5 on page 44 shows the profile algorithm used to select disk volumes by using either the clarata_archive or clarata_r10 storage pool.

Figure 5. clarata_archive and clarata_r10 storage pools algorithm

VNX for block system-defined storage pools for Flash support


The VNX for file provides the clarefd_r5, clarefd_r10, cmefd_r5, and cmefd_r10 storage pools for Flash drive support on the VNX for block storage system. AVM uses the same disk selection algorithm and volume structure for each Flash pool. However, the algorithm differs from the standard VNX for block algorithm explained in System-defined storage pool volume and storage profiles on page 38 and is outlined next. The algorithm adheres to EMC best practices to achieve the overall best performance and use of Flash drives. Users can also manually manage Flash drives in user-defined pools.

The AVM algorithm used for disk selection and volume structure for all Flash system-defined pools is as follows:

1. The LUN creation process is responsible for storage processor balancing. By default, run the setup_clariion command on integrated systems to set up storage processor balancing.

2. Use a default stripe width of 256 KB (provided in the profile). The stripe member count in the profile is ignored and should be left at 1.

3. When two or more LUNs of the same size are available, always stripe LUNs. Otherwise, concatenate LUNs.

4. No RAID group balancing or RAID group usage is considered.


5. No order is applied to the LUNs being striped together, except that all LUNs from the same RAID group in the stripe are next to each other. For example, storage-processor-balanced order is not applied.

6. Use a maximum of two RAID groups from which to take LUNs:

   a. If only one RAID group is available, use every same-size LUN in the RAID group. This maximizes the LUN count and meets the size requested.
   b. If only two RAID groups are available, use every same-size LUN in each RAID group. This maximizes the LUN count and meets the size requested.

Figure 6 on page 45 shows the profile algorithm used to select disk volumes by using either the clarefd_r5, clarefd_r10, cmefd_r5, or cmefd_r10 storage pool.

Figure 6. clarefd_r5, clarefd_r10, cmefd_r5, and cmefd_r10 storage pools algorithm

Symmetrix system-defined storage pools algorithm


AVM works differently with Symmetrix storage systems because of the size and uniformity of the disk volumes involved. Typically, the disk volumes exported from a Symmetrix


storage system are small and uniform in size. The aggregation strategy used by Symmetrix storage pools is primarily to combine many small disk volumes into larger volumes that can be used by file systems. AVM attempts to distribute the input/output (I/O) to as many Symmetrix directors as possible. The Symmetrix storage system can use slicing and striping to distribute I/O among the physical disks on the storage system, so this is less of a concern for the AVM aggregation strategy.

A Symmetrix storage pool creates a stripe volume across one set of Symmetrix disk volumes, or creates a metavolume, as necessary to meet the request. The stripe or metavolume is added to the Symmetrix storage pool. When the administrator asks for a specific number of gigabytes of space from the Symmetrix storage pool, the requested size of space is allocated from this system-defined storage pool. AVM adds to and takes from the system-defined storage pool as required.

The stripe size is set in the system-defined profiles. You cannot modify the stripe size of a system-defined storage pool. The default stripe size for a Symmetrix storage pool is 256 KB. Multipath file system (MPFS) requires a stripe depth of 32 KB or greater.

The algorithm that AVM uses looks for a set of eight disk volumes. If a set of eight is not found, the algorithm looks for a set of four disk volumes. If a set of four is not found, the algorithm looks for a set of two disk volumes. If a set of two is not found, the algorithm looks for one disk volume. AVM stripes the disk volumes together if they are all of the same size. If the disk volumes are not the same size, AVM creates a metavolume on top of the disk volumes. AVM then adds the stripe or the metavolume to the storage pool. If AVM cannot find any disk volumes, it looks for a metavolume in the storage pool that has space, takes a slice from that metavolume, and makes a metavolume over that slice.


Figure 7 on page 47 shows the AVM algorithm used to select disk volumes by using a Symmetrix system-defined storage pool.

Figure 7. Symmetrix storage pool algorithm

Figure 8 on page 47 shows the structure of a Symmetrix storage pool.

Figure 8. Symmetrix storage pool structure

All this system-defined storage pool activity is transparent to users and provides an easy way to create and manage file systems. The system-defined storage pools do not allow users to have much control over how AVM aggregates storage to meet file system needs, but most users prefer ease-of-use over control. When users make a request for a new file system that uses the system-defined storage pools, AVM does the following:


1. Determines if more volumes need to be added to the storage pool. If so, selects and adds volumes.

2. Selects an existing, available storage pool volume to use for the file system. The volume might also be sliced to obtain the correct size for the file system request. If the request is larger than the largest volume, AVM concatenates volumes to create the size required to meet the request.

3. Places a metavolume on the resulting volume and builds the file system within the metavolume.

4. Returns the file system information to the user.

All system-defined storage pools have specific, predictable rules for getting disk volumes into storage pools, provided by their associated profiles.
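A single command is enough to trigger this entire sequence. A minimal sketch, assuming a hypothetical file system name and size and an attached VNX for block system:

nas_fs -name ufs1 -create size=100G pool=clar_r5_performance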

VNX for block mapped pool file systems


AVM builds a VNX for block mapped pool file system as follows:

1. Concatenation is used; striping is not used.

2. Unless requested, slicing is not used.

3. AVM checks for free disk volumes and sorts them into thin and thick disk volumes:

   If there are no free disk volumes and the slice option is set to no, there is not enough space available and the request fails.

   If there are free disk volumes:

   a. AVM first checks for thick disk volumes that satisfy the size request (equal to or greater than the file system size).
   b. If not found, AVM then checks for thin disk volumes that satisfy the size request.
   c. If still not found, AVM combines thick and thin disk volumes to find ones that satisfy the size request.

4. If one disk volume satisfies the size request exactly, AVM takes the selected disk volume and uses the whole disk to build the file system.

5. If a larger disk volume is found that is a better fit than any set of smaller disks, AVM uses the larger disk volume.

6. If multiple disk volumes satisfy the size request, AVM sorts the disk volumes from smallest to largest, and then sorts them into alternating SP A and SP B lists. Starting with the first disk volume, AVM searches through a list for matching data services until the size request is met. If the size request is not met, AVM searches again but ignores the data services.


Note: Mapped pools are treated as standard AVM pools, not as user-defined pools, except that mapped pools are always dynamic and the is_greedy option is ignored.

Figure 9 on page 49 shows the VNX for block mapped pool algorithm.
Figure 9. VNX for block mapped pool file systems (flowchart of the selection steps described above)

Symmetrix mapped pool file systems


AVM builds a Symmetrix mapped pool file system as follows:

1. Unless requested, slicing is not used.

2. AVM checks for free disk volumes and sorts them into thin and thick disk volumes for the purpose of striping together the same type of disk volumes:

   If there are no free disk volumes and the slice option is set to no, there is not enough space available and the request fails.

   If there are free disk volumes:

   a. AVM first checks for a set of eight disk volumes.
   b. If a set of eight is not found, AVM then looks for a set of four disk volumes.
   c. If a set of four is not found, AVM then looks for a set of two disk volumes.
   d. If a set of two is not found, AVM finally looks for one disk volume.

3. When free disk volumes are found:

   a. AVM first checks for thick disk volumes that satisfy the size request, which can be equal to or greater than the file system size. If thick disk volumes are available, AVM first tries to stripe the thick disk volumes that have the same disk type. Otherwise, AVM stripes together thick disk volumes that have different disk types.
   b. If thick disks are not found, AVM then checks for thin disk volumes that satisfy the size request. If thin disk volumes are available, AVM first tries to stripe the thin disk volumes that have the same disk type, where "same" means the single disk type of the pool in which it resides. Otherwise, AVM stripes together thin disk volumes that have different disk types.
   c. If thin disks are not found, AVM combines thick and thin disk volumes to find ones that satisfy the size request.

4. If neither thick nor thin disk volumes satisfy the size request, AVM then checks whether striping of one same disk type will satisfy the size request, ignoring whether the disk volumes are thick or thin.

5. If still no matches are found, AVM checks whether slicing was requested:

   a. If slicing was requested, AVM checks whether any stripes exist that satisfy the size request. If yes, AVM slices an existing stripe.
   b. If slicing was not requested, AVM checks whether any free disk volumes can be concatenated to satisfy the size request. If yes, AVM concatenates disk volumes, matching data services if possible, and builds the file system.

6. If still no matches are found, there is not enough space available and the request fails.
Note: Mapped pools are treated as standard AVM pools, not as user-defined pools, except that mapped pools are always dynamic and the is_greedy option is ignored.


Figure 10 on page 51 shows the Symmetrix mapped pool algorithm.


Figure 10. Symmetrix mapped pool file systems (flowchart of the selection steps described above)

File system and storage pool relationship


When you create a file system that uses a system-defined storage pool, AVM consumes disk volumes either by adding new members to the pool, or by using existing pool members. To create a file system by using a user-defined storage pool, do one of the following:

Create the storage pool and add the volumes you want to use manually before creating the file system.

Let AVM create the user pool by size.
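As an illustrative sketch of both approaches, with all pool, volume, and file system names hypothetical (verify the by-size options, such as -template, -num_stripe_members, and -stripe_size, against the nas_pool man page for your release):

nas_pool -create -name mktg_pool -volumes d126,d127,d128,d129
nas_fs -name ufs_mktg -create size=50G pool=mktg_pool

nas_pool -create -name mktg_pool2 -size 100G -template clar_r5_performance -num_stripe_members 4 -stripe_size 32768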

Deleting a file system associated with either a system-defined or user-defined storage pool returns the unused space to the storage pool. But the storage pool might continue to reserve


that space for future file system requests. Figure 11 on page 52 shows two file systems built from an AVM storage pool.

Figure 11. File systems built by AVM

As Figure 12 on page 52 shows, if FS2 is deleted, the storage used for that file system is returned to the storage pool. AVM continues to reserve it, as well as any other member volumes that are available in the storage pool, for a future request. This practice is true of system-defined and user-defined storage pools.

Figure 12. FS2 deletion returns storage to the storage pool

If FS1 is also deleted, the storage that was used for the file systems is no longer required. A system-defined storage pool removes the volumes from the storage pool and returns the disk volumes to the storage system for use with other features or storage pools. You can change the attributes of a system-defined storage pool so that it is not dynamic and does not grow and shrink automatically. By making this change, you increase your direct involvement in managing the volume structure of the storage pool, including adding and removing volumes.

A user-defined storage pool cannot add or remove volumes on its own. To use volumes contained in a user-defined storage pool for another purpose, you must remove


the volumes. Remove volumes from storage pools on page 119 provides more information. Otherwise, the user-defined storage pool continues to reserve the space for use by that pool. Figure 13 on page 53 shows that the storage pool container continues to exist after the file systems are deleted, and AVM continues to reserve the volumes for future requests of that storage pool.

Figure 13. FS1 deletion leaves storage pool container with volumes

If you have modified the attributes that control the dynamic behavior of a system-defined storage pool, use the procedure in Remove volumes from storage pools on page 119 to remove volumes from the system-defined storage pool. To reuse the volumes for other purposes for a user-defined storage pool, remove the volumes or delete the storage pool.
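A minimal sketch of freeing volumes from a user-defined pool (the names are hypothetical; a member volume can be removed only while no file system is using it):

nas_pool -shrink mktg_pool -volumes d126,d127
nas_pool -delete mktg_pool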

Automatic file system extension


Automatic file system extension works only when an AVM storage pool is associated with a file system. You can enable or disable automatic extension when you create a file system or modify the file system properties later. Create file systems with AVM on page 68 provides the procedure to create file systems with AVM system-defined or user-defined storage pools and enable automatic extension on a newly created file system. Enable automatic file system extension and options on page 90 provides the procedure to modify an existing file system and enable automatic extension. You can set the HWM and maximum size for automatic file system extension. The Control Station might attempt to extend the file system several times, depending on these settings.

HWM

The HWM identifies the threshold for initiating automatic file system extension. The HWM value must be between 50 percent and 99 percent. The default HWM is 90 percent of the file system size. Automatic extension guarantees that the file system usage is at least 3 percent below the HWM. Figure 14 on page 56 contains the algorithm for how the calculation is performed.

For example, a 100 GB file system reaches its 80 percent HWM at 80 GB. The file system


then automatically extends to 110 GB and is now at 72.73 percent usage (80 GB), which is well below the 80 percent HWM for the 110 GB file system:

If automatic extension is disabled, when the file system reaches the 90 percent (internal) HWM, an event notification is sent. You must then manually extend the file system. Ignoring the notification could cause data loss.

If automatic extension is enabled, when the file system reaches the HWM, an automatic extension event notification is sent to the sys_log and the file system automatically extends without any administrative action.

Calculating the automatic extension size depends on the extend_size value and the current file system size:

extend_size = polling_interval * io_rate * 100 / (100 - HWM)

where:

polling_interval: default is 10 seconds
io_rate: default is 200 MB/s
HWM: value is set per file system

With io_rate in MB/s, the result is in megabytes.

If a file system is smaller than the extend_size value, it extends by its size when it reaches the HWM. If a file system is larger than the extend_size value, it extends by 5 percent of its size or the extend_size, whichever is larger, when it reaches the HWM.
Examples

The following examples use file system sizes of 100 GB and 500 GB, and HWM values of 80 percent, 85 percent, 90 percent, and 95 percent:

Example 1: 100 GB file system, 85 percent HWM
extend_size = (10*200*100)/(100-85) = 13.3 GB
13.3 GB is greater than 5 GB (5 percent of 100 GB). Therefore, the file system is extended by 13.3 GB.

Example 2: 100 GB file system, 90 percent HWM
extend_size = (10*200*100)/(100-90) = 20 GB
20 GB is greater than 5 GB (5 percent of 100 GB). Therefore, the file system is extended by 20 GB.

Example 3: 500 GB file system, 90 percent HWM
extend_size = (10*200*100)/(100-90) = 20 GB
20 GB is less than 25 GB (5 percent of 500 GB). Therefore, the file system is extended by 25 GB.

Example 4: 500 GB file system, 95 percent HWM
extend_size = (10*200*100)/(100-95) = 40 GB
40 GB is greater than 25 GB (5 percent of 500 GB). Therefore, the file system is extended by 40 GB.

Example 5: 500 GB file system, 80 percent HWM
extend_size = (10*200*100)/(100-80) = 10 GB
The HWM is reached at 400 GB used. After a 10 GB extension, the file system would be 78.4 percent full (400/510 * 100), which is more than the HWM-3 limit of 77 percent. The file system is therefore extended by a single 19.5 GB extension instead (400 * 100/77 = 519.5 GB).

Maximum size

The default maximum size for any file system is 16 TB. The maximum size for automatic file system extension is from 3 MB up to 16 TB. If thin provisioning is enabled and the selected storage pool is a traditional RAID group (non-virtual VNX for block thin) storage pool, the maximum size is required. Otherwise, this field is optional. The extension size is also dependent on having additional space in the storage pool associated with the file system.

Automatic file extension conditions

The conditions for automatically extending a file system are as follows:

If the file system size reaches the specified maximum size, the file system cannot extend beyond that size, and the automatic extension operation is rejected.

If the available space is less than the extend size, the file system extends by the maximum available space.

If only the HWM is set with automatic extension enabled, the file system automatically extends when that HWM is reached, provided there is space available and the file system size is less than 16 TB.

If only the maximum size is specified with automatic extension enabled, the file system automatically extends when the default HWM of 90 percent is reached, provided the file system has space available and the maximum size has not been reached.

If the file system reaches or exceeds the set maximum size, automatic extension is rejected.


If the HWM or maximum file size is not set, but either automatic extension or thin provisioning is enabled, the file system's HWM and maximum size are set to the default values of 90 percent and 16 TB, respectively.
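For example, enabling automatic extension on an existing file system is a single command. A minimal sketch with hypothetical file system name and limits; Enable automatic file system extension and options on page 90 provides the full procedure:

nas_fs -modify ufs1 -auto_extend yes -hwm 85% -max_size 200G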

Calculating the size of an automatic file system extension

During each automatic file system extension, fs_extend_handler, located on the Control Station (/nas/sbin/fs_extend_handler), calculates the extension size by using the algorithm shown in Figure 14 on page 56.

Figure 14. Calculating the size of an automatic file system extension


Thin provisioning
The thin provisioning option allows you to allocate storage capacity based on anticipated needs, while you dedicate only the resources you currently need. Combining automatic file system extension and thin provisioning lets you grow the file system gradually as needed. When thin provisioning is enabled and a virtual storage pool is not being used, the virtual maximum file system size is reported to NFS and CIFS clients. If a virtual storage pool is being used, the actual file system size is reported to NFS and CIFS clients.
Note: Enabling thin provisioning with automatic file system extension does not automatically reserve the space from the storage pool for that file system. Administrators must ensure that adequate storage space exists so that the automatic extension operation can succeed. If the available storage is less than the maximum size setting, automatic extension fails. Users receive an error message when the file system becomes full, even though it appears that there is free space in the file system.
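A minimal sketch of creating a file system with both automatic extension and thin provisioning enabled (the names and sizes are hypothetical; the -vp option controls thin/virtual provisioning):

nas_fs -name ufs2 -create size=10G pool=clar_r5_performance -auto_extend yes -vp yes -max_size 16T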

Planning considerations
This section covers important volume and file system planning information and guidelines, interoperability considerations, storage pool characteristics, and upgrade considerations that you need to know when implementing AVM and automatic file system extension. Review these topics:

File system management and the nas_fs command

The EMC SnapSure feature (checkpoints) and the fs_ckpt command

VNX for file volume management concepts (metavolumes, slice volumes, stripe volumes, and disk volumes) and the nas_volume, nas_server, nas_slice, and nas_disk commands

RAID technology

Symmetrix storage systems

VNX for block storage systems

Interoperability considerations

When using automatic file system extension with replication, consider these guidelines:

Enable automatic extension and thin provisioning only on the source file system. The destination file system is synchronized with the source and extends automatically. When the source file system reaches its HWM, the destination file system automatically extends first, and then the source file system automatically extends.

Do one of the following:


Set up the source replication file system with automatic extension enabled, as explained in Create file systems with automatic file system extension on page 79.

Modify an existing source file system to automatically extend by using the procedure Enable automatic file system extension and options on page 90.

If the extension of the destination file system succeeds but the extension of the source file system fails, the automatic extension operation stops functioning. You receive an error message that indicates the failure is due to the limitation of available disk space on the source side. Manually extend the source file system to make the source and destination file systems the same size by using the nas_fs -xtend <fs_name> -option src_only command. Using VNX Replicator provides more detailed information on correcting the failure.

Other interoperability considerations are:

The automatic extension and thin provisioning configuration is not moved over to the destination file system during replication failover. If you intend to reverse the replication, and the destination file system becomes the source, you must enable automatic extension on the new source file system.

With thin provisioning enabled, the NFS, CIFS, and FTP clients see the actual size of the VNX Replicator destination file system, and the clients see the virtually provisioned maximum size on the source file system. Table 8 on page 58 describes this client view.

Table 8. Client view of VNX Replicator source and destination file systems

Clients see:
    Destination file system: Actual size
    Source file system without thin provisioning: Actual size
    Source file system with thin provisioning: Maximum size

Using VNX Replicator contains more information on using automatic file system extension with VNX Replicator.

AVM storage pool considerations

Consider these AVM storage pool characteristics:

System-defined storage pools have a set of rules that govern how the system manages storage. User-defined storage pools have attributes that you define for each storage pool.

All system-defined storage pools (virtual and non-virtual) are dynamic. They acquire and release disk volumes as needed. Administrators can modify the attribute to disable this dynamic behavior.

User-defined storage pools are not dynamic. They require administrators to explicitly add and remove volumes manually.

You are allowed to choose disk volume storage from only one of the attached storage systems when creating a user-defined storage pool.


Striping never occurs above the storage-pool level.

The system-defined VNX for block storage pools attempt to use all free disk volumes before maximizing use of the partially used volumes. This behavior is considered to be a greedy attribute. You can modify the attributes that control this greedy behavior in system-defined storage pools. Modify system-defined storage pool attributes on page 108 describes the procedure.
Note: When extending a file system, the is_greedy attribute is ignored unless there is not enough free space on the existing volumes that the file system is using. Table 7 on page 36 describes the is_greedy behavior.

Another option is to create user-defined storage pools to group disk volumes to keep system-defined storage pools from using them. Create file systems with user-defined storage pools on page 72 provides more information on creating user-defined storage pools. You can create a storage pool to reserve disk volumes, but never create file systems from that storage pool. You can move the disk volumes out of the reserving user-defined storage pool if you need to use them for file system creation or other purposes.

The system-defined Symmetrix storage pools maximize the use of disk volumes acquired by the storage pool before consuming more. This behavior is considered to be a "not greedy" attribute.

AVM does not perform the storage system operations necessary to create new disk volumes; it consumes only existing disk volumes. You might need to add LUNs to your storage system and configure new disk volumes, especially if you create user-defined storage pools.

A file system might use many or all of the disk volumes that are members of a system-defined storage pool.

You can use only one type of disk volume in a user-defined storage pool. For example, if you create a storage pool and then add a disk volume based on Capacity drives to the pool, add only other Capacity-based disk volumes to the pool to extend it.

SnapSure checkpoint SavVols might use the same disk volumes as the file system of which the checkpoints are made. By default, a checkpoint SavVol is sliced so that SavVol auto-extension does not use space unnecessarily.

AVM does not add members to the storage pool if the amount of space requested is more than the sum of the unused and available disk volumes, but less than or equal to the available space in an existing system-defined storage pool.

Some AVM system-defined storage pools designed for use with VNX for block storage systems acquire pairs of storage-processor-balanced disk volumes with the same RAID type, disk count, and size. When reserving disk volumes from a VNX for block storage system, it is important to reserve them in similar pairs. Otherwise, AVM might not find matching pairs, and the number of usable disk volumes might be more limited than was intended.


To guarantee consistent file system performance, configure a storage pool on the VNX for block system that uses the same data services as the AVM pool it maps to on the VNX for file. Because of the minimum storage requirement restriction for a VNX for block system's storage pool, if you must create a heterogeneous pool that uses multiple data services to satisfy different use cases, do the following:

1. Use a heterogeneous system-defined AVM pool to create user-defined pools that group disk volumes with matching data service policies.

2. Create file systems from the user-defined pools.

For example, for one use case you might need to create both a regular file system and an archive file system.

The system allows you to control the data service configuration at the file system level. By default, disk volumes are not sliced unless you explicitly select that setting at file system creation time. By not slicing a disk volume, the system guarantees that a file system will not share disks with other file systems. There is a 1:n relationship between the file system and the disk volumes, where n is greater than or equal to 1. You can go to the VNX for block or Symmetrix storage system and modify the data service policies of the set of LUNs underneath the same file system to change the data policy of the file system. This option may cause the created file system to exceed the specified storage capacity, because the file system size is disk volume-aligned.

Choose the LUN size on the VNX for block or Symmetrix system storage pool carefully. The pool-based LUN overhead is approximately 2 percent of the file system capacity plus 3 GB, for both a Direct Logical Unit (DLU) and a fully provisioned Thin LUN (TLU).

Create file systems with AVM on page 68 provides more information on creating file systems by using the different pool types. Related information on page 22 provides a list of related documentation.

Upgrading VNX for file software

When you upgrade to VNX for file version 7.0 software, all system-defined storage pools are available. The system-defined storage pools for the currently attached storage systems with available space appear in the output when you list storage pools, even if AVM is not used on the system. If you have not used AVM in the past, these storage pools are containers and do not consume storage until you create a file system by using AVM. If you have used AVM in the past, any user-defined storage pools also appear when you list the storage pools, in addition to the system-defined storage pools.


CAUTION: Automatic file system extension is interrupted during software upgrades. If automatic file system extension is enabled, the Control Station continues to capture HWM events. However, actual file system extension does not start until the upgrade process completes.

File system and automatic file system extension considerations

Before implementing AVM, consider your environment, most important file systems, file system sizes, and expected growth. Follow these general guidelines when planning to use AVM in your environment:

Create the most important and most-used file systems first. AVM system-defined storage pools use free disk volumes to create a new file system. For example, there are 40 disk volumes on the storage system. AVM takes eight disk volumes, creates stripe1, slice1, metavolume1, and then creates the file system ufs1:

Assuming the default behavior of the system-defined storage pool, AVM uses eight more disk volumes, creates stripe2, and builds file system ufs2, even though there is still space available in stripe1. File systems ufs1 and ufs2 are on different sets of disk volumes and do not share any LUNs, for more efficient access.

If you plan to create user-defined storage pools, consider LUN selection and striping, and do your own disk volume aggregation before putting the volumes into the storage pool. This ensures that the file systems are not built on a single LUN. Disk volume aggregation is a manual process for user-defined storage pools.

For file systems with sequential I/O, two LUNs per file system are generally sufficient. If you use AVM for file systems with sequential I/O, consider modifying the attribute of the storage pool to restrict slicing.

If you would like to control the data service configuration at the file system level but still use automatic extension and thin provisioning, do one of the following:

Create a VNX for block or Symmetrix storage pool with thin LUNs, and then create file systems from that pool.

Set the slice option to Yes if you want to enable file system auto extension.

Automatic file system extension does not alleviate the need for appropriate planning. Create the file systems with adequate space to accommodate the estimated usage. Allocating too little space to accommodate normal file system usage makes the Control Station rapidly and repeatedly attempt to extend the file system. If the Control Station cannot adequately extend the file system to accommodate the usage quickly enough, the automatic extension fails. Known problems and limitations on page 124 provides more information on how to identify and recover from this issue.


Note: When planning file system size and usage, consider setting the HWM, so that the free space above the HWM setting is a certain percentage above the largest average file for that file system.

Use of AVM with a single-enclosure VNX for block storage system could limit performance because AVM does not stripe between or across RAID group 0 and other RAID groups. This is the only case where striping across 4+1 RAID 5 and 8+1 RAID 5 is suggested.

If you want to set a stripe size that is different from the default stripe size for system-defined storage pools, create a user-defined storage pool. Create file systems with user-defined storage pools on page 72 provides more information. Take disk contention into account when creating a user-defined pool.

If you have disk volumes to reserve so that the system-defined storage pools do not use them, consider creating a user-defined storage pool and adding those specific volumes to it. A sketch of both techniques follows this list.
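A minimal sketch of both techniques; the volume, pool, and size values are hypothetical, and nas_volume -Stripe takes the stripe size in bytes:

nas_volume -name stv1 -create -Stripe 32768 d10,d11,d12,d13
nas_pool -create -name seq_pool -volumes stv1

nas_pool -create -name reserved_pool -volumes d20,d21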


3 Configuring

The tasks to configure volumes and file systems with AVM are:

Configure disk volumes on page 64

Create file systems with AVM on page 68

Extend file systems with AVM on page 82

Create file system checkpoints with AVM on page 98


Configure disk volumes


System network servers that are gateway network-attached storage (NAS) systems and that connect to Symmetrix and VNX for block storage systems are:

VNX VG2

VNX VG8

The gateway system stores data on VNX for block user LUNs or Symmetrix hypervolumes. If the user LUNs or hypervolumes are not configured correctly on the array, AVM and the Unisphere for File software cannot be used to manage the storage. Typically, an EMC Customer Support Representative does the initial setup of disk volumes on these gateway storage systems. However, if your VNX gateway system is attached to a VNX for block storage system and you want to add disk volumes to the configuration, use the procedures that follow: 1. Use the Unisphere for Block software or the VNX for block CLI to create VNX for block user LUNs. 2. Use either the Unisphere for File software or the VNX for file CLI to make the new user LUNs available to the VNX for file as disk volumes. The user LUNs must be created before you create file systems. To add user LUNs, you must be familiar with the following:

The Unisphere for Block software or the VNX for block CLI.

The process of creating RAID groups and user LUNs for the VNX for file volumes.

The documentation for Unisphere for Block and VNX for block CLI describes how to create RAID groups and user LUNs. If the disk volumes are configured by EMC experts, go to Create file systems with AVM on page 68.


Provide storage from a VNX or legacy CLARiiON system to a gateway system


1. Create RAID groups and LUNs (as needed for VNX for file volumes) by using the Unisphere software or VNX for block CLI:

Always create the user LUNs in balanced pairs, one owned by SP A and one owned by SP B. The paired LUNs must be the same size.

FC or SAS disks must be configured as RAID 1/0, RAID 5, or RAID 6. The paired LUNs do not need to be in the same RAID group but should be of the same RAID type. RAID groups and storage characteristics on page 33 lists the valid RAID group and storage array combinations. Gateway models use the same combinations as the NS-80 (for CX3 storage systems) or the NS-960 (for CX4 storage systems).

SATA disks must be configured as RAID 1/0, RAID 5, or RAID 6. All LUNs in a RAID group must belong to the same SP. Create pairs by using LUNs from two RAID groups.

The host LUN identifier (HLU) must be greater than or equal to 16 for user LUNs.

Use these settings when creating RAID group user LUNs:

RAID Type: RAID 1/0, RAID 5, or RAID 6 for FC or SAS disks; RAID 1/0, RAID 5, or RAID 6 for SATA disks
LUN ID: Select the first available value
Rebuild Priority: ASAP
Verify Priority: ASAP
Enable Read Cache: Selected
Enable Write Cache: Selected
Enable Auto Assign: Cleared (off)
Number of LUNs to Bind: 2
Alignment Offset: 0
LUN size: Must not exceed 14 TB
Note: If you create 4+1 RAID 3 LUNs, the Number of LUNs to Bind value should be 1.

2. Create a storage group to which to add the LUNs for the gateway system.

Using the Unisphere software:
a. Select Hosts > Storage Groups.
b. Click Create.

Using the VNX for block CLI, type the following command:
naviseccli -h <system> storagegroup -create -gname <groupname>

3. Ensure that you add the LUNs to the gateway system's storage group. Set the HLU to 16 or greater.

Using the Unisphere software:
a. Select Hosts > Storage Groups.
b. In Storage Group Name, select the storage group that you created in step 2.
c. Click Connect LUNs.
d. Click the LUNs tab.
e. Expand SP A and SP B.
f. Select the LUNs to add and click Add.

Using the VNX for block CLI, type the following command:
naviseccli -h <system> storagegroup -addhlu -gname <groupname> -hlu <HLU number> -alu <LUN number>
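For example, to add a balanced pair of LUNs at HLUs 16 and 17 (the SP address, storage group name, and LUN numbers here are hypothetical):

naviseccli -h 10.6.4.100 storagegroup -addhlu -gname gateway_sg -hlu 16 -alu 25
naviseccli -h 10.6.4.100 storagegroup -addhlu -gname gateway_sg -hlu 17 -alu 26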

4. Perform one of these steps to make the new user LUNs available to the VNX for file:

Using the Unisphere for File software:
a. Select Storage > Storage Configuration > File Systems.
b. From the task list, select File Storage > Rescan Storage Systems.

Using the VNX for file CLI, type the following command:
nas_diskmark -mark -all

Note: Do not change the host LUN identifier of the VNX for file LUNs after rescanning. This might cause data loss or unavailability.

Create pool-based provisioning for file storage systems


1. Create storage pools and LUNs as needed for VNX for file volumes. Use these settings when creating user LUNs for use with mapped pools:

• LUN ID: Use the default
• LUN Name: Use the default or supply a name
• Number of LUNs to create: 2
• Enable Auto Assign: Cleared (Off)
• Alignment Offset: 0
• LUN Size: Must not exceed 16 TB
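If you prefer the VNX for block CLI, a pool LUN can be created with the lun -create command. This is a minimal sketch only; the SP address, pool name, LUN number, and capacity are hypothetical, and the exact switches available depend on your VNX for block release:

naviseccli -h spa_address lun -create -type Thin -capacity 500 -sq gb -poolName Pool0 -sp a -l 50 -name file_lun_50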

2. Ensure that you add the LUNs to the file system's storage group. Set the HLU to 16 or greater.

Using the Unisphere software:
a. Select Hosts > Storage Groups.
b. In Storage Group Name, select ~filestorage.
c. Click Connect LUNs.
d. Click LUNs.
e. Expand SP A and SP B.
f. Select the LUNs you want to add and click Add.

Using the VNX for block CLI, type the following command:
naviseccli -h <system> storagegroup -addhlu -gname ~filestorage -hlu <HLU number> -alu <LUN number>

3. Use one of these methods to make the new user LUNs available to the VNX for file:

Using the Unisphere software:
a. Select Storage > Storage Configuration > File Systems.
b. From the task list, select File Storage > Rescan Storage Systems.

Using the VNX for file CLI, type the following command:
nas_diskmark -mark -all

Note: Do not change the host LUN identifier of the VNX for file LUNs after rescanning. This might cause data loss or unavailability.


Add disk volumes to an integrated system


Configure unused or new disk devices on a VNX for block storage system by using the Disk Provisioning Wizard for File. This wizard is available only for integrated VNX for file models (NX4 and NS non-gateway systems excluding NS80), including Fibre Channel-enabled models, attached to a single VNX for block storage system.
Note: For VNX systems, Advanced Data Service Policy features such as FAST and compression are supported on pool-based LUNs only. They are not supported on RAID-based LUNs.

To open the Disk Provisioning Wizard for File in the Unisphere software:
1. Select Storage > Storage Configuration > Storage Pools.
2. From the task list, select Wizards > Disk Provisioning Wizard for File.
Note: To use the Disk Provisioning Wizard for File, you must log in to Unisphere by using the global sysadmin user account or by using a user account which has privileges to manage storage.

An alternative to the Disk Provisioning Wizard for File is available by using the VNX for file CLI at /nas/sbin/setup_clariion. This alternative is not available for unified VNX systems. The script performs the following actions:

• Provisions the disks on integrated (non-Performance) VNX for block storage systems when there are unbound disks to configure. This script binds the data LUNs on the xPEs and DAEs, and makes them accessible to the Data Movers.
• Ensures that your RAID groups and LUN settings are appropriate for your VNX for file server configuration.

The Unisphere for File software supports only the array templates for EMC CLARiiON CX and CX3 storage systems. CX4 and VNX systems must use the User_Defined mode with the /nas/sbin/setup_clariion CLI script. The setup_clariion script allows you to configure VNX for block storage systems on a shelf-by-shelf basis by using predefined configuration templates. For each enclosure (xPE or DAE), the script examines your specific hardware configuration and gives you a choice of appropriate templates. You can mix combinations of RAID configurations on the same storage system. The script then combines the shelf templates into a custom, User_Defined array template for each VNX for block system, and then configures your array.
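The script is interactive and is run as root from the Control Station, for example:

# /nas/sbin/setup_clariion

It then examines each enclosure and prompts you to choose a template, as described above.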

Create file systems with AVM


This section describes the procedures to create a file system by using AVM storage pools, and also explains how to create file systems by using the automatic file system extension feature.


You can enable automatic file system extension on new or existing file systems if the file system has an associated AVM storage pool. When you enable automatic file system extension, use the nas_fs command options to adjust the HWM value, set a maximum file size to which the file system can be extended, and enable thin provisioning. Create file systems with automatic file system extension on page 79 provides more information. You can create file systems by using storage pools with automatic file system extension enabled or disabled. Specify the storage system from which to allocate space for the type of storage pool that is being created. Choose any of these procedures to create file systems:

Create file systems with system-defined storage pools on page 70 Allows you to create file systems without having to also create the underlying volume structure.

Create file systems with user-defined storage pools on page 72 Allows more administrative control of the underlying volumes and placement of the file system. Use these user-defined storage pools to prevent the system-defined storage pools from using certain volumes.

Create file systems with automatic file system extension on page 79 Allows you to create a file system that automatically extends when it reaches a certain threshold by using space from either a system-defined or a user-defined storage pool.


Create file systems with system-defined storage pools


When you create a file system by using the system-defined storage pools, it is not necessary to create volumes before setting up the file system. AVM allocates space to the file system from the specified storage pool on the storage system associated with that storage pool. AVM automatically creates any required volumes when it creates the file system. This process ensures that the file system and its extensions are created from the same type of storage, with the same cost, performance, and availability characteristics. The storage system name appears either as alphabetic characters followed by a set of integers, or as a set of integers only:

• VNX for block storage systems display a prefix of alphabetic characters before a set of integers, for example, FCNTR074200038-0019.
• Symmetrix storage systems display as a set of integers, for example, 002804000190-003C.

To create a file system with system-defined storage pools: 1. Obtain the list of available system-defined storage pools and mapped storage pools by typing:
$ nas_pool -list

Output:
id   in_use  acl  name                 storage_system
3    n       0    clar_r5_performance  FCNTR074200038
40   y       0    TP1                  FCNTR074200038
41   y       0    FP1                  FCNTR074200038

2. Display the size of a specific storage pool by using this command syntax:
$ nas_pool -size <name>

where:
<name> = name of the storage pool

Example: To display the size of the clar_r5_performance storage pool, type:


$ nas_pool -size clar_r5_performance

Output:
id           = 3
name         = clar_r5_performance
used_mb      = 128000
avail_mb     = 0
total_mb     = 260985
potential_mb = 260985

Note: To display the size of all storage pools, use the -all option instead of the <name> option.
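For example, to display the size of all storage pools at once, type:

$ nas_pool -size -all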

3. Obtain the system name of an attached Symmetrix storage system by typing:


$ nas_storage -list

Output:
id   acl  name          serial number
1    0    000183501491  000183501491

4. Obtain information of a specific Symmetrix storage system in the list by using this command syntax:
$ nas_storage -info <system_name>

where:
<system_name> = name of the storage system

Example: To obtain information about the Symmetrix storage system 000183501491, type:
$ nas_storage -info 000183501491

Output:
type num slot ident   stat scsi  vols ports p0_stat p1_stat p2_stat p3_stat
R1   1   1    RA-1A   Off  NA    0    1     Off     NA      NA      NA
DA   2   2    DA-2A   On   WIDE  25   2     On      Off     NA      NA
DA   3   3    DA-3A   On   WIDE  25   2     On      Off     NA      NA
SA   5   5    SA-5A   On   ULTRA 0    2     On      On      NA      NA
SA   12  12   SA-12A  On   ULTRA 0    2     Off     On      NA      NA
DA   14  14   DA-14A  On   WIDE  27   2     On      Off     NA      NA
DA   15  15   DA-15A  On   WIDE  26   2     On      Off     NA      NA
R1   16  16   RA-16A  On   NA    0    1     On      NA      NA      NA
R2   17  1    RA-1B   Off  NA    0    1     Off     NA      NA      NA
DA   18  2    DA-2B   On   WIDE  26   2     On      Off     NA      NA
DA   19  3    DA-3B   On   WIDE  27   2     On      Off     NA      NA
SA   21  5    SA-5B   On   ULTRA 0    2     On      On      NA      NA
SA   28  13   SA-12B  On   ULTRA 0    2     On      On      NA      NA
DA   30  14   DA-14B  On   WIDE  25   2     On      Off     NA      NA
DA   31  15   DA-15B  On   WIDE  25   2     On      Off     NA      NA
R2   32  16   RA-16B  On   NA    0    1     On      NA      NA      NA

5. Create a file system by size with a system-defined storage pool by using this command syntax:
$ nas_fs -name <fs_name> -create size=<size> pool=<pool> storage=<system_name>

where:
<fs_name> = name of the file system.
<size> = amount of space to add to the file system. Specify the size in GB by typing <number>G (for example, 250G), in MB by typing <number>M (for example, 500M), or in TB by typing <number>T (for example, 1T).
<pool> = name of the storage pool.
<system_name> = name of the storage system from which space for the file system is allocated.

Example:


To create a file system ufs1 of size 10 GB with a system-defined storage pool, type:
$ nas_fs -name ufs1 -create size=10G pool=symm_std storage=000183501491

Note: To mirror the file system with SRDF, you must specify the symm_std_rdf_src storage pool. This directs AVM to allocate space from volumes configured when installing for remote mirroring by using SRDF. Using SRDF/S with VNX for Disaster Recovery contains more information.

Output:
id           = 1
name         = ufs1
acl          = 0
in_use       = False
type         = uxfs
volume       = avm1
pool         = symm_std
member_of    =
rw_servers   =
ro_servers   =
rw_vdms      =
ro_vdms      =
auto_ext     = no,thin=no
deduplication= off
stor_devs    = 000183501491
disks        = d20,d12,d18,d10

Note: The VNX Command Line Interface Reference for File contains information on the options available for creating a file system with the nas_fs command.
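As an illustration of the SRDF note above, a file system intended for remote mirroring might be created as follows; the file system name is hypothetical:

$ nas_fs -name ufs1_rdf -create size=10G pool=symm_std_rdf_src storage=000183501491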

Create file systems with user-defined storage pools


The AVM system-defined storage pools are available for use with the VNX for file. If you require more manual control than the system-defined storage pools allow, create a user-defined storage pool and then create the file system by using that pool.
Note: Create a user-defined storage pool and define its attributes to reserve disk volumes so that your system-defined storage pools cannot use them.

Before you begin

Prerequisites include:

• A user-defined storage pool can be created either by using manual volume management or by letting AVM create the storage pool with a specified size. If you use manual volume management, you must first stripe the volumes together and add the resulting volumes to the storage pool you create. Managing Volumes and File Systems for VNX Manually describes the steps to create and manage volumes.


• You cannot use disk volumes you have reserved for other purposes. For example, you cannot use any disk volumes reserved for a system-defined storage pool. Controlling Access to System Objects on VNX contains more information on access control levels.
• AVM system-defined storage pools designed for use with VNX for block storage systems acquire pairs of disk volumes that are storage-processor balanced and use the same RAID type, disk count, and size. Modify system-defined and user-defined storage pool attributes on page 107 provides more information.
• When creating a user-defined storage pool to reserve disk volumes from a VNX for block storage system, use disk volumes that are storage-processor balanced and have the same qualities. Otherwise, AVM cannot find matching pairs, and the number of usable disk volumes might be more limited than was intended.

To create a file system with a user-defined storage pool:


• Create a user-defined storage pool by volumes on page 74
• Create a user-defined storage pool by size on page 74
• Create the file system on page 76
• Create file systems with automatic file system extension on page 79
• Create file systems with the automatic file system extension option enabled on page 80


Create a user-defined storage pool by volumes

To create a user-defined storage pool (from which space for the file system is allocated) by volumes, add volumes to the storage pool and define the storage pool attributes.
Action
To create a user-defined storage pool by volumes, use this command syntax:
$ nas_pool -create -name <name> -acl <acl> -description <desc> -volumes <volume_name>[,<volume_name>,...] -default_slice_flag {y|n}
where:
<name> = name of the storage pool.
<acl> = designates an access control level for the new storage pool. Default value is 0.
<desc> = assigns a comment to the storage pool. Type the comment within quotes.
<volume_name> = names of the volumes to add to the storage pool. Can be a metavolume, slice volume, stripe volume, or disk volume. Use a comma to separate each volume name.
-default_slice_flag = determines whether members of the storage pool can be sliced when space is dispensed from the storage pool. If set to y, then members might be sliced. If set to n, then the members of the storage pool cannot be sliced, and volumes specified cannot be built on a slice.
Example:
To create a user-defined storage pool named marketing with a description, with the disk members d126, d127, d128, and d129 specified, and allow the volumes to be built on a slice, type:
$ nas_pool -create -name marketing -description "storage pool for marketing" -volumes d126,d127,d128,d129 -default_slice_flag y

Output
id                 = 5
name               = marketing
description        = storage pool for marketing
acl                = 0
in_use             = False
clients            =
members            = d126,d127,d128,d129
default_slice_flag = True
is_user_defined    = True
thin               = False
disk_type          = CLSTD
server_visibility  = server_2,server_3,server_4
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A

Create a user-defined storage pool by size

To create a user-defined storage pool (from which space for the file system is allocated) by size, specify a template pool, size of the pool, minimum stripe size, and number of stripe members.


Action
To create a user-defined storage pool by size, use this command syntax:
$ nas_pool -create -name <name> -acl <acl> -description <desc> -default_slice_flag {y|n} -size <integer>[M|G|T] -storage <system_name> -template <system_pool_name> -num_stripe_members <num_stripe_mem> -stripe_size <num>
where:
<name> = name of the storage pool.
<acl> = designates an access control level for the new storage pool. Default value is 0.
<desc> = assigns a comment to the storage pool. Type the comment within quotes.
-default_slice_flag = determines whether members of the storage pool can be sliced when space is dispensed from the storage pool. If set to y, then members might be sliced. If set to n, then the members of the storage pool cannot be sliced, and volumes specified cannot be built on a slice.
<integer> = size of the storage pool, an integer between 1 and 1024. Specify the size in GB (default) by typing <integer>G (for example, 250G), in MB by typing <integer>M (for example, 500M), or in TB by typing <integer>T (for example, 1T).
<system_name> = storage system on which one or more volumes will be created and added to the storage pool.
<system_pool_name> = system pool template used to create the user pool. Required when the -size option is specified. The user pool will be created by using the profile attributes of the specified system pool template.
<num_stripe_mem> = number of stripe members used to create the user pool. Works only when both the -size and -template options are also specified. It overrides the number of stripe members attribute of the specified system pool template.
<num> = stripe size used to create the user pool. Works only when both the -size and -template options are also specified. It overrides the stripe size attribute of the specified system pool template.
Example:
To create a 20 GB user-defined storage pool that is named marketing with a description by using the clar_r5_performance pool, and that contains 4 stripe members with a stripe size of 32768 KB, and allow the volumes to be built on a slice, type:
$ nas_pool -create -name marketing -description "storage pool for marketing" -default_slice_flag y -size 20G -template clar_r5_performance -num_stripe_members 4 -stripe_size 32768


Output
id                 = 5
name               = marketing
description        = storage pool for marketing
acl                = 0
in_use             = False
clients            =
members            = v213
default_slice_flag = True
is_user_defined    = True
thin               = False
disk_type          = CLSTD
server_visibility  = server_2,server_3
template_pool      = clar_r5_performance
num_stripe_members = 4
stripe_size        = 32768

Create the file system


To create a file system, you must first create a user-defined storage pool. Create a user-defined storage pool by volumes on page 74 and Create a user-defined storage pool by size on page 74 provide more information. Use this procedure to create a file system by specifying a user-defined storage pool and an associated storage system:

1. List the attached storage systems by typing:
$ nas_storage -list

Output:
id   acl  name            serial number
1    0    APM00033900125  APM00033900125

2. Get detailed information of a specific attached storage system in the list by using this command syntax:
$ nas_storage -info <system_name>

where:
<system_name> = name of the storage system

Example: To get detailed information of the storage system APM00033900125, type:


$ nas_storage -info APM00033900125

Output:


id               = 1
arrayname        = APM00033900125
name             = APM00033900125
model_type       = RACKMOUNT
model_num        = 630
db_sync_time     = 1073427660 == Sat Jan 6 17:21:00 EST 2007
num_disks        = 30
num_devs         = 21
num_pdevs        = 1
num_storage_grps = 0
num_raid_grps    = 10
cache_page_size  = 8
wr_cache_mirror  = True
low_watermark    = 70
high_watermark   = 90
unassigned_cache = 0
failed_over      = False
captive_storage  = True
Active Software
Navisphere       = 6.6.0.1.43
ManagementServer = 6.6.0.1.43
Base             = 02.06.630.4.001
Storage Processors
SP Identifier    = A
signature        = 926432
microcode_version= 2.06.630.4.001
serial_num       = LKE00033500756
prom_rev         = 3.00.00
agent_rev        = 6.6.0 (1.43)
phys_memory      = 3968
sys_buffer       = 749
read_cache       = 32
write_cache      = 3072
free_memory      = 115
raid3_mem_size   = 0
failed_over      = False
hidden           = True
network_name     = spa
ip_address       = 128.221.252.200
subnet_mask      = 255.255.255.0
gateway_address  = 128.221.252.100
num_disk_volumes = 11 - root_disk root_ldisk d3 d4 d5 d6 d8 d13 d14 d15 d16
SP Identifier    = B
signature        = 926493
microcode_version= 2.06.630.4.001
serial_num       = LKE00033500508
prom_rev         = 3.00.00
agent_rev        = 6.6.0 (1.43)
phys_memory      = 3968
raid3_mem_size   = 0
failed_over      = False
hidden           = True
network_name     = OEM-XOO25IL9VL9
ip_address       = 128.221.252.201
subnet_mask      = 255.255.255.0
gateway_address  = 128.221.252.100
num_disk_volumes = 4 - disk7 d9 d11 d12


Note: This is not a complete output.

3. Create the file system from a user-defined storage pool and designate the storage system on which you want the file system to reside by using this command syntax:
$ nas_fs -name <fs_name> -type <type> -create <volume_name> pool=<pool> storage=<system_name>

where:
<fs_name> = name of the file system
<type> = type of file system, such as uxfs (default), mgfs, or rawfs
<volume_name> = name of the volume
<pool> = name of the storage pool
<system_name> = name of the storage system on which the file system resides

Example: To create the file system ufs1 from a user-defined storage pool and designate the APM00033900125 storage system on which you want the file system ufs1 to reside, type:
$ nas_fs -name ufs1 -type uxfs -create MTV1 pool=marketing storage=APM00033900125

Output:
id           = 2
name         = ufs1
acl          = 0
in_use       = False
type         = uxfs
volume       = MTV1
pool         = marketing
member_of    = root_avm_fs_group_2
rw_servers   =
ro_servers   =
rw_vdms      =
ro_vdms      =
auto_ext     = no,thin=no
deduplication= off
stor_devs    = APM00033900125-0111
disks        = d6,d8,d11,d12


Create file systems with automatic file system extension


Use the -auto_extend option of the nas_fs command to enable automatic file system extension on a new file system created with AVM. The option is disabled by default.
Note: Automatic file system extension does not alleviate the need for appropriate planning. Create the file systems with adequate space to accommodate the estimated usage. Allocating too little space to accommodate normal file system usage makes the Control Station rapidly and repeatedly attempt to extend the file system. If the Control Station cannot adequately extend the file system to accommodate the usage quickly enough, the automatic extension fails.

If automatic file system extension is disabled and the file system reaches 90 percent full, a warning message is written to the sys_log. Any action necessary is at the administrator's discretion.
Note: You do not need to set the maximum size for a newly created file system when you enable automatic extension. The default maximum size is 16 TB. With automatic extension enabled, even if the HWM is not set, the file system automatically extends up to 16 TB, if the storage space is available in the storage pool.

Use this procedure to create a file system by specifying a system-defined storage pool and a storage system, and enable automatic file system extension.
Action
To create a file system with automatic file system extension enabled, use this command syntax:
$ nas_fs -name <fs_name> -type <type> -create size=<size> pool=<pool> storage=<system_name> -auto_extend {no|yes}
where:
<fs_name> = name of the file system.
<type> = type of file system.
<size> = amount of space to add to the file system. Specify the size in GB by typing <number>G (for example, 250G), in MB by typing <number>M (for example, 500M), or in TB by typing <number>T (for example, 1T).
<pool> = name of the storage pool from which to allocate space to the file system.
<system_name> = name of the storage system associated with the storage pool.
Example:
To enable automatic file system extension on a new 10 GB file system created by specifying a system-defined storage pool and a VNX for block storage system, type:
$ nas_fs -name ufs1 -type uxfs -create size=10G pool=clar_r5_performance storage=APM00042000814 -auto_extend yes


Output
id           = 434
name         = ufs1
acl          = 0
in_use       = False
type         = uxfs
worm         = off
volume       = v1634
pool         = clar_r5_performance
member_of    = root_avm_fs_group_3
rw_servers   =
ro_servers   =
rw_vdms      =
ro_vdms      =
auto_ext     = hwm=90%,thin=no
deduplication= off
stor_devs    = APM00042000814-001D,APM00042000814-001A,APM00042000814-0019,APM00042000814-0016
disks        = d20,d12,d18,d10

Create file systems with the automatic file system extension option enabled
When you create a file system with automatic extension enabled, you can set the point at which the file system automatically extends (the HWM) and the maximum size to which it can grow. You can also enable thin provisioning at the same time that you create or extend a file system. Enable automatic file system extension and options on page 90 provides information on modifying the automatic file system extension options. If you set the slice=no option on the file system, the actual file system size might become bigger than the size specified for the file system, which would exceed the maximum size. In this case, you receive a warning, and the automatic extension fails. If you do not specify the file system slice option (-option slice=yes|no) when you create the file system, it defaults to the setting of the storage pool. Modify system-defined and user-defined storage pool attributes on page 107 provides more information.
Note: If the actual file system size is above the HWM when thin provisioning is enabled, the client sees the actual file system size instead of the specified maximum size.
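As a brief illustration of the slice option discussed above, the following sketch creates a file system whose space cannot be dispensed from sliced pool members; the file system name is hypothetical:

$ nas_fs -name ufs_noslice -create size=10G pool=clar_r5_performance -option slice=no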

Enabling automatic file system extension and thin provisioning options does not automatically reserve the space from the storage pool for that file system. So that the automatic extension can succeed, administrators must ensure that adequate storage space exists. If the available storage is less than the maximum size setting, automatic extension fails. Users receive an error message when the file system becomes full, even though it appears that there is free space in the file system. The file system must be manually extended. Use this procedure to simultaneously set the automatic file system extension options when you are creating the file system:


1. Create a file system of a specified size, enable automatic file system extension and thin provisioning, and set the HWM and the maximum file system size simultaneously by using this command syntax:
$ nas_fs -name <fs_name> -type <type> -create size=<integer>[T|G|M] pool=<pool> storage=<system_name> -auto_extend {no|yes} -thin {yes|no} -hwm <50-99>% -max_size <integer>[T|G|M]

where:
<fs_name> = name of the file system.
<type> = type of file system.
<integer> = size requested in MB, GB, or TB. The maximum size is 16 TB.
<pool> = name of the storage pool.
<system_name> = attached storage system on which the file system and storage pool reside.
<50-99> = percentage between 50 and 99, at which you want the file system to automatically extend.

Example:
To create a 10 MB file system of type UxFS from an AVM storage pool, with automatic extension enabled, and a maximum file system size of 200 MB, HWM of 90 percent, and thin provisioning enabled, type:
$ nas_fs -name ufs2 -type uxfs -create size=10M pool=clar_r5_performance -auto_extend yes -thin yes -hwm 90% -max_size 200M

Output:
id            = 27
name          = ufs2
acl           = 0
in_use        = True
type          = uxfs
worm          = off
volume        = v104
pool          = clar_r5_performance
member_of     = root_avm_fs_group_3
rw_servers    = server_2
ro_servers    =
rw_vdms       =
ro_vdms       =
auto_ext      = hwm=90%,max_size=200M,thin=yes
deduplication = Off
thin_storage  = True
tiering_policy= Auto-tier
compressed    = False
mirrored      = False
ckpts         =

Note: When you enable thin provisioning on a new or existing file system, you must also specify the maximum size to which the file system can automatically extend.


2. Verify the settings for the specific file system after enabling automatic extension by using this command syntax:
$ nas_fs -info <fs_name>

where:
<fs_name> = name of the file system

Example: To verify the settings for file system ufs2 after enabling automatic extension, type:
$ nas_fs -info ufs2

Output:
id            = 27
name          = ufs2
acl           = 0
in_use        = True
type          = uxfs
worm          = off
volume        = v104
pool          = clar_r5_performance
member_of     = root_avm_fs_group_3
rw_servers    = server_2
ro_servers    =
rw_vdms       =
ro_vdms       =
backups       = ufs2_snap1,ufs2_snap2
auto_ext      = hwm=90%,max_size=200M,thin=yes
deduplication = off
thin_storage  = True
tiering_policy= Auto-tier
compressed    = False
mirrored      = False
ckpts         =
stor_devs     = APM00042000814-001D,APM00042000814-001A,APM00042000814-0019,APM00042000814-0016
disks         = d20,d12,d18,d10

You can also set the options -hwm and -max_size on each file system with automatic extension enabled. When enabling thin provisioning on a file system, you must set the maximum size, but setting the high water mark is optional.

Extend file systems with AVM


Increase the size of a file system nearing its maximum capacity by extending the file system. You can:

• Extend the size of a file system to add space if it has an associated system-defined or user-defined storage pool. You can also specify the storage system from which to allocate space. Extend file systems by using storage pools on page 83 provides instructions.


• Extend the size of a file system by adding volumes if the file system has an associated system-defined or user-defined storage pool. Extend file systems by adding volumes to a storage pool on page 85 provides instructions.
• Extend the size of a file system by using a storage pool other than the one used to create the file system. Extend file systems by using a different storage pool on page 87 provides instructions.
• Extend an existing file system by enabling automatic extension on that file system. Enable automatic file system extension and options on page 90 provides instructions.
• Extend an existing file system by enabling thin provisioning on that file system. Enable thin provisioning on page 94 provides instructions.

Managing Volumes and File Systems on VNX Manually contains the instructions to extend file systems manually.

Extend file systems by using storage pools


All file systems created by using the AVM feature have an associated storage pool. Extend a file system created with either a system-defined storage pool or a user-defined storage pool by specifying the size and the name of the file system. AVM allocates storage from the storage pool to the file system. You can also specify the storage system you want to use. If you do not specify one, the last storage system associated with the storage pool is used.
Note: A file system created by using a mapped storage pool can be extended on its existing pool or by using a compatible mapped storage pool that contains the same disk type.

Use this procedure to extend a file system by size: 1. Check the file system configuration to confirm that the file system has an associated storage pool by using this command syntax:
$ nas_fs -info <fs_name>

where:
<fs_name> = name of the file system

Note: If you see a storage pool defined in the output, the file system was created with AVM and has an associated storage pool.

Example: To check the file system configuration to confirm that file system ufs1 has an associated storage pool, type:
$ nas_fs -info ufs1

Output:


id            = 27
name          = ufs1
acl           = 0
in_use        = True
type          = uxfs
worm          = off
volume        = v104
pool          = FP1
member_of     = root_avm_fs_group_3
rw_servers    = server_2
ro_servers    =
rw_vdms       =
ro_vdms       =
deduplication = Off
thin_storage  = True
tiering_policy= Auto-tier
compressed    = False
mirrored      = False
ckpts         =

2. Extend the size of the file system by using this command syntax:
$ nas_fs -xtend <fs_name> size=<size> pool=<pool> storage=<system_name>

where:
<fs_name> = name of the file system.
<size> = amount of space to add to the file system. Specify the size in GB by typing <number>G (for example, 250G), in MB by typing <number>M (for example, 500M), or in TB by typing <number>T (for example, 1T).
<pool> = name of the storage pool.
<system_name> = name of the storage system. If you do not specify a storage system, the default storage system is the one on which the file system resides. If the file system spans multiple storage systems, the default is any one of the storage systems on which the file system resides.

Note: The first time you extend the file system without specifying a storage pool, the default storage pool is the one used to create the file system. If you specify a storage pool that is different from the one used to create the file system, the next time you extend this file system without specifying a storage pool, the last pool in the output list is the default.

Example: To extend the size of file system ufs1 by 10 MB, type:


$ nas_fs -xtend ufs1 size=10M pool=clar_r5_performance storage=APM00023700165

Output:


id        = 8
name      = ufs1
acl       = 0
in_use    = False
type      = uxfs
volume    = v121
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
stor_devs = APM00023700165-0111
disks     = d7,d13,d19,d25,d30,d31,d32,d33

3. Check the size of the file system after extending it to confirm that the size increased by using this command syntax:
$ nas_fs -size <fs_name>

where:
<fs_name> = name of the file system

Example: To check the size of file system ufs1 after extending it to confirm that the size increased, type:
$ nas_fs -size ufs1

Output:
total = 138096 avail = 138096 used = 0 ( 0% ) (sizes in MB)
volume: total = 138096 (sizes in MB)

Extend file systems by adding volumes to a storage pool


You can extend a file system manually by specifying the volumes to add.
Note: With user-defined storage pools, you can manually create the underlying volumes, including striping, before adding the volume to the storage pool. Managing Volumes and File Systems on VNX Manually describes the procedures needed to perform these tasks before creating or extending the file system.
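A minimal sketch of that manual path, assuming hypothetical volume and pool names, is to stripe four disk volumes and add the resulting volume to the user-defined pool before extending the file system:

$ nas_volume -name stv1 -create -Stripe 32768 d126,d127,d128,d129
$ nas_pool -xtend marketing -volumes stv1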

If you do not specify a storage system when extending the file system, the default storage system is the one on which the file system resides. If the file system spans multiple storage systems, the default is any one of the storage systems on which the file system resides. Use this procedure to extend the file system by adding volumes to the same user-defined storage pool that was used to create the file system:

1. Check the configuration of the file system to confirm the associated user-defined storage pool by using this command syntax:


$ nas_fs -info <fs_name>

where:
<fs_name> = name of the file system

Example: To check the configuration of file system ufs3 to confirm the associated user-defined storage pool, type:
$ nas_fs -info ufs3

Output:
id            = 27
name          = ufs3
acl           = 0
in_use        = True
type          = uxfs
worm          = off
volume        = v104
pool          = marketing
member_of     =
rw_servers    =
ro_servers    =
rw_vdms       =
ro_vdms       =
deduplication = Off
thin_storage  = True
tiering_policy= Auto-tier
compressed    = False
mirrored      = False
ckpts         =

Note: The user-defined storage pool used to create the file system is defined in the output as pool=marketing.

2. Add volumes to extend the size of a file system by using this command syntax:
$ nas_fs -xtend <fs_name> <volume_name> pool=<pool> storage=<system_name>

where:
<fs_name> = name of the file system.
<volume_name> = name of the volume to add to the file system.
<pool> = storage pool associated with the file system. It can be user-defined or system-defined.
<system_name> = name of the storage system on which the file system resides.

Example: To extend file system ufs3, type:


$ nas_fs -xtend ufs3 v121 pool=marketing storage=APM00023700165

Output:


id        = 10
name      = ufs3
acl       = 0
in_use    = False
type      = uxfs
volume    = v121
pool      = marketing
member_of =
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
stor_devs = APM00023700165-0111
disks     = d7,d8,d13,d14

Note: The next time you extend this file system without specifying a storage pool, the last pool in the output list is the default.

3. Check the size of the file system after extending it to confirm that the size increased by using this command syntax:
$ nas_fs -size <fs_name>

where:
<fs_name> = name of the file system

Example: To check the size of file system ufs3 after extending it to confirm that the size increased, type:
$ nas_fs -size ufs3

Output:
total = 138096 avail = 138096 used = 0 ( 0% ) (sizes in MB)
volume: total = 138096 (sizes in MB)

Extend file systems by using a different storage pool


You can use more than one storage pool to extend a file system. Ensure that the storage pools have space allocated from the same storage system to prevent the file system from spanning more than one storage system.
Note: A file system created by using a mapped storage pool can be extended on its existing pool or by using a compatible mapped storage pool that contains the same disk type.

Use this procedure to extend the file system by using a storage pool other than the one used to create the file system:

1. Check the file system configuration to confirm that it has an associated storage pool by using this command syntax:


$ nas_fs -info <fs_name>

where:
<fs_name> = name of the file system

Example: To check the file system configuration to confirm that file system ufs2 has an associated storage pool, type:
$ nas_fs -info ufs2

Output:
id            = 9
name          = ufs2
acl           = 0
in_use        = True
type          = uxfs
worm          = off
volume        = v121
pool          = clar_r5_performance
member_of     = root_avm_fs_group_3
rw_servers    =
ro_servers    =
rw_vdms       =
ro_vdms       =
deduplication = Off
thin_storage  = True
tiering_policy= Auto-tier
compressed    = False
mirrored      = False
ckpts         =

Note: The storage pool used earlier to create or extend the file system is shown in the output as associated with this file system.

2. Optionally, extend the file system by using a storage pool other than the one used to create the file system by using this command syntax:
$ nas_fs -xtend <fs_name> size=<size> pool=<pool>

where:
<fs_name> = name of the file system.
<size> = amount of space to add to the file system. Specify the size in GB by typing <number>G (for example, 250G), in MB by typing <number>M (for example, 500M), or in TB by typing <number>T (for example, 1T).
<pool> = name of the storage pool.

Example: To extend file system ufs2 by using a storage pool other than the one used to create the file system, type:
$ nas_fs -xtend ufs2 size=10M pool=clar_r5_economy


Output:
id        = 9
name      = ufs2
acl       = 0
in_use    = False
type      = uxfs
volume    = v123
pool      = clar_r5_performance,clar_r5_economy
member_of = root_avm_fs_group_3,root_avm_fs_group_4
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
stor_devs = APM00033900165-0112
disks     = d7,d13,d19,d25

Note: The storage pools used to create and extend the file system now appear in the output. There is only one storage system from which space for these storage pools is allocated.

3. Check the file system size after extending it to confirm the increase in size by using this command syntax:
$ nas_fs -size <fs_name>

where:
<fs_name> = name of the file system

Example: To check the size of file system ufs2 after extending it to confirm the increase in size, type:
$ nas_fs -size ufs2

Output:
total = 138096 avail = 138096 used = 0 ( 0% ) (sizes in MB)
volume: total = 138096 (sizes in MB)


Enable automatic file system extension and options


You can automatically extend an existing file system created with AVM system-defined or user-defined storage pools. The file system automatically extends by using space from the storage system and storage pool with which the file system is associated. If you set the slice=no option on the file system, the actual file system size might become bigger than the size specified for the file system, which would exceed the maximum size. In this case, you receive a warning, and the automatic extension fails. If you do not specify the file system slice option (-option slice=yes|no) when you create the file system, it defaults to the setting of the storage pool. Modify system-defined and user-defined storage pool attributes on page 107 describes the procedure to modify the default_slice_flag attribute on the storage pool. Use the -modify option to enable automatic extension on an existing file system. You can also set the HWM and maximum size. To enable automatic file system extension and options:

• Enable automatic file system extension on page 91
• Set the HWM on page 92
• Set the maximum file system size on page 93

You can also enable thin provisioning at the same time that you create or extend a file system. Enable thin provisioning on page 94 describes the procedure to enable thin provisioning on an existing file system. Enable automatic extension, thin provisioning, and all options simultaneously on page 96 describes the procedure to simultaneously enable automatic extension, thin provisioning, and all options on an existing file system.


Enable automatic file system extension

If the HWM or maximum size is not set, and if there is space available, the file system automatically extends up to the default maximum size of 16 TB when the file system reaches the default HWM of 90 percent. An error message appears if you try to enable automatic extension on a file system that was created manually.
Note: The HWM is 90 percent by default when you enable automatic file system extension.

Action
To enable automatic extension on an existing file system, use this command syntax:
$ nas_fs -modify <fs_name> -auto_extend {no|yes}
where:
<fs_name> = name of the file system

Example: To enable automatic extension on the existing file system ufs3, type:
$ nas_fs -modify ufs3 -auto_extend yes

Output
id        = 28
name      = ufs3
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = v157
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = hwm=90%,thin=no
stor_devs = APM00042000818-001F,APM00042000818-001D,APM00042000818-0019,APM00042000818-0016
disks     = d20,d18,d14,d11
disk=d20 stor_dev=APM00042000818-001F addr=c0t1l15 server=server_2
disk=d20 stor_dev=APM00042000818-001F addr=c32t1l15 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c0t1l13 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c32t1l13 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c32t1l9 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c0t1l9 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c0t1l6 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c32t1l6 server=server_2


Set the HWM

With automatic file system extension enabled on an existing file system, use the -hwm option to set a threshold. To specify a threshold, type an integer between 50 and 99 percent. The default is 90 percent. If the HWM or maximum size is not set, the file system automatically extends up to the default maximum size of 16 TB when the file system reaches the default HWM of 90 percent, if the space is available. The value for the maximum size, if specified, has an upper limit of 16 TB.

Action
To set the HWM on an existing file system, with automatic file system extension enabled, use this command syntax:
$ nas_fs -modify <fs_name> -hwm <50-99>%
where:
<fs_name> = name of the file system
<50-99> = an integer representing the file system usage point at which you want it to automatically extend

Example: To set the HWM to 85 percent on the existing file system ufs3, with automatic extension already enabled, type:
$ nas_fs -modify ufs3 -hwm 85%

Output
id        = 28
name      = ufs3
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = v157
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = hwm=85%,thin=no
stor_devs = APM00042000818-001F,APM00042000818-001D,APM00042000818-0019,APM00042000818-0016
disks     = d20,d18,d14,d11
disk=d20 stor_dev=APM00042000818-001F addr=c0t1l15 server=server_2
disk=d20 stor_dev=APM00042000818-001F addr=c32t1l15 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c0t1l13 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c32t1l13 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c0t1l9 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c32t1l9 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c0t1l6 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c32t1l6 server=server_2


Set the maximum file system size

Use the -max_size option to specify a maximum size to which a file system can grow. To specify the maximum size, type an integer and specify T for TB, G for GB (default), or M for MB. To convert gigabytes to megabytes, multiply the number of gigabytes by 1024. To convert terabytes to gigabytes, multiply the number of terabytes by 1024. For example, to convert 450 gigabytes to megabytes, 450 x 1024 = 460800 MB. When you enable automatic file system extension, the file system automatically extends up to the default maximum size of 16 TB. Set the HWM at which you want the file system to automatically extend. If the HWM is not set, the file system automatically extends up to 16 TB when the file system reaches the default HWM of 90 percent, if the space is available.

Action
To set the maximum file system size with automatic file system extension already enabled, use this command syntax:
$ nas_fs -modify <fs_name> -max_size <integer>[T|G|M]
where:
<fs_name> = name of the file system
<integer> = maximum size requested in MB, GB, or TB

Example: To set the maximum file system size on the existing file system, type:
$ nas_fs -modify ufs3 -max_size 16T


Output
id        = 28
name      = ufs3
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = v157
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = hwm=85%,max_size=16769024M,thin=no
stor_devs = APM00042000818-001F,APM00042000818-001D,APM00042000818-0019,APM00042000818-0016
disks     = d20,d18,d14,d11
disk=d20 stor_dev=APM00042000818-001F addr=c0t1l15 server=server_2
disk=d20 stor_dev=APM00042000818-001F addr=c32t1l15 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c0t1l13 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c32t1l13 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c0t1l9 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c32t1l9 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c0t1l6 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c32t1l6 server=server_2

Enable thin provisioning


You can also enable thin provisioning at the same time that you create or extend a file system. Use the -thin option to enable thin provisioning. You must also specify the maximum size to which the file system should automatically extend. An error message appears if you attempt to enable thin provisioning and do not set the maximum size. Set the maximum file system size on page 93 describes the procedure to set the maximum file system size. The upper limit for the maximum size is 16 TB. The maximum size you set is the file system size that is presented to users, if the maximum size is larger than the actual file system size.
Note: Enabling automatic file system extension and thin provisioning options does not automatically reserve the space from the storage pool for that file system. Administrators must ensure that adequate storage space exists, so that the automatic extension operation can succeed. If the available storage is less than the maximum size setting, automatic extension fails. Users receive an error message when the file system becomes full, even though it appears that there is free space in the file system. The file system must be manually extended.

Enable thin provisioning on the source file system when the feature is used in a replication situation. With thin provisioning enabled, the NFS, CIFS, and FTP clients see the actual size of the Replicator destination file system, and the clients see the virtually provisioned maximum size of the Replicator source file system. Interoperability considerations on page 57 provides additional information.


Action
To enable thin provisioning with automatic extension enabled on the file system, use this command syntax:
$ nas_fs -modify <fs_name> -max_size <integer>[T|G|M] -thin {yes|no}
where:
<fs_name> = name of the file system
<integer> = size requested in MB, GB, or TB

Example: To enable thin provisioning, type:


$ nas_fs -modify ufs3 -max_size 16T -thin yes

Output
id        = 27
name      = ufs3
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = v157
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_2
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = hwm=85%,max_size=16769024M,thin=yes
stor_devs = APM00042000818-001F,APM00042000818-001D,APM00042000818-0019,APM00042000818-0016
disks     = d20,d18,d14,d11
disk=d20 stor_dev=APM00042000818-001F addr=c0t1l15 server=server_2
disk=d20 stor_dev=APM00042000818-001F addr=c32t1l15 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c0t1l13 server=server_2
disk=d18 stor_dev=APM00042000818-001D addr=c32t1l13 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c0t1l9 server=server_2
disk=d14 stor_dev=APM00042000818-0019 addr=c32t1l9 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c0t1l6 server=server_2
disk=d11 stor_dev=APM00042000818-0016 addr=c32t1l6 server=server_2


Enable automatic extension, thin provisioning, and all options simultaneously


Note: An error message appears if you try to enable automatic file system extension on a file system that was created without using a storage pool.

Action
To simultaneously enable automatic file system extension and thin provisioning on an existing file system, and to set the HWM and the maximum size, use this command syntax:
$ nas_fs -modify <fs_name> -auto_extend {no|yes} -thin {yes|no} -hwm <50-99>% -max_size <integer>[T|G|M]
where:
<fs_name> = name of the file system
<50-99> = an integer that represents the file system usage point at which you want it to automatically extend
<integer> = size requested in MB, GB, or TB

Example: To modify a UxFS to enable automatic extension, enable thin provisioning, set a maximum file system size of 16 TB with an HWM of 90 percent, type:
$ nas_fs -modify ufs4 -auto_extend yes -thin yes -hwm 90% -max_size 16T

Output
id        = 29
name      = ufs4
acl       = 0
in_use    = False
type      = uxfs
worm      = off
volume    = v157
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = hwm=90%,max_size=16769024M,thin=yes
stor_devs = APM00042000818-001F,APM00042000818-001D,APM00042000818-0019,APM00042000818-0016
disks     = d20,d18,d14,d11


Verify the maximum size of the file system

Automatic file system extension fails when the file system reaches the maximum size.

Action
To force an extension to determine whether the maximum size has been reached, use this command syntax:
$ nas_fs -xtend <fs_name> size=<size>
where:
<fs_name> = name of the file system
<size> = size to extend the file system by, in GB, MB, or TB

Example: To force an extension to determine whether the maximum size has been reached, type:
$ nas_fs -xtend ufs1 size=4M

Output
id        = 759
name      = ufs1
acl       = 0
in_use    = True
type      = uxfs
worm      = off
volume    = v2459
pool      = clar_r5_performance
member_of = root_avm_fs_group_3
rw_servers= server_4
ro_servers=
rw_vdms   =
ro_vdms   =
auto_ext  = hwm=90%,max_size=16769024M (reached),thin=yes <<<
stor_devs = APM00041700549-0018
disks     = d10
disk=d10 stor_dev=APM00041700549-0018 addr=c16t1l8 server=server_4
disk=d10 stor_dev=APM00041700549-0018 addr=c32t1l8 server=server_4
disk=d10 stor_dev=APM00041700549-0018 addr=c0t1l8 server=server_4
disk=d10 stor_dev=APM00041700549-0018 addr=c48t1l8 server=server_4


Create file system checkpoints with AVM


Use either AVM system-defined or user-defined storage pools to create file system checkpoints. Specify the storage system where the file system checkpoint should reside. Use this procedure to create the checkpoint by specifying a storage pool and storage system:
Note: You can specify the storage pool for the checkpoint SavVol only when there are no existing checkpoints of the PFS.

1. Obtain the list of available storage systems by typing:


$ nas_storage -list

Note: To obtain more detailed information on the storage system and associated names, use the -info option instead.

2. Create the checkpoint by using this command syntax:


$ fs_ckpt <fs_name> -name <name> -Create [size=<integer>[T|G|M|%]] pool=<pool> storage=<system_name>

where:
<fs_name> = name of the file system for which you want to create a checkpoint.
<name> = name of the checkpoint.
<integer> = amount of space to allocate to the checkpoint. Type the size in TB, GB, or MB.
<pool> = name of the storage pool.
<system_name> = storage system on which the file system checkpoint resides.

Note: Thin provisioning is not supported with checkpoints. NFS, CIFS, and FTP clients cannot see the virtually provisioned maximum size of a SnapSure checkpoint file system.

Example: To create the checkpoint ckpt1, type:


$ fs_ckpt ufs1 -name ckpt1 -Create size=10G pool=clar_r5_performance storage=APM00023700165

Output:


id        = 1
name      = ckpt1
acl       = 0
in_use    = False
type      = uxfs
volume    = V126
pool      = clar_r5_performance
member_of =
rw_servers=
ro_servers=
rw_vdms   =
ro_vdms   =
stor_devs = APM00023700165-0111
disks     = d7,d8


4 Managing

The tasks to manage AVM storage pools are:


• List existing storage pools on page 102
• Display storage pool details on page 103
• Display storage pool size information on page 104
• Modify system-defined and user-defined storage pool attributes on page 107
• Extend a user-defined storage pool by volume on page 115
• Extend a user-defined storage pool by size on page 116
• Extend a system-defined storage pool on page 117
• Remove volumes from storage pools on page 119
• Delete user-defined storage pools on page 120


List existing storage pools


When the existing storage pools are listed, all system-defined storage pools and user-defined storage pools appear in the output, regardless of whether they are in use.
Action
To list all existing system-defined and user-defined storage pools, type:
$ nas_pool -list

Output
id   in_use  acl  name                 storage_system
3    n       0    clar_r5_performance  FCNTR074200038
40   y       0    TP1                  FCNTR074200038
41   y       0    FP1                  FCNTR074200038


Display storage pool details


Action
To display detailed information for a storage pool, use this command syntax:
$ nas_pool -info <name>
where:
<name> = name of the storage pool

Example: To display detailed information for the storage pool FP1, type:
$ nas_pool -info FP1

Output
id                 = 40
name               = FP1
description        = Mapped Pool on FCNTR074200038
acl                = 0
in_use             = False
clients            =
members            =
default_slice_flag = True
is_user_defined    = False
thin               = Mixed
tiering_policy     = Auto-tier
compressed         = False
mirrored           = False
disk_type          = Mixed
volume_profile     = FP1_vp
is_dynamic         = True
is_greedy          = N/A


Display storage pool size information


Information about the size of the storage pool appears in the output. If there is more than one storage pool, the output shows the size information for all the storage pools. The size information includes:

• The total used space in the storage pool in megabytes (used_mb).
• The total unused space in the storage pool in megabytes (avail_mb).
• The total used and unused space in the storage pool in megabytes (total_mb).
• The total space available from all sources in megabytes that could be added to the storage pool (potential_mb). For user-defined storage pools, the output for potential_mb is 0 because they must be manually extended and shrunk.

Note: If either nonMB-aligned disk volumes or disk volumes of different sizes are striped together, truncation of storage might occur. The total amount of space added to a pool might be different than the total amount taken from potential storage. Total space in the pool includes the truncated space, but potential storage does not include the truncated space.

In the Unisphere for File software, the potential megabytes value in the output represents the total available storage, including the space already in the storage pool. In the VNX for file CLI, the output for potential_mb does not include the space already in the storage pool.
Note: Use the -size -all option to display the size information for all storage pools.

Action
To display the size information for a specific storage pool, use this command syntax:
$ nas_pool -size <name>
where:
<name> = name of the storage pool

Example: To display the size information for the clar_r5_performance storage pool, type:
$ nas_pool -size clar_r5_performance

Output

id           = 3
name         = clar_r5_performance
used_mb      = 128000
avail_mb     = 0
total_mb     = 260985
potential_mb = 260985

Action
To display the size information for a specific mapped storage pool, use this command syntax:
$ nas_pool -size <name>
where:
<name> = name of the storage pool

Example: To display the size information for the Pool0 storage pool, type:
$ nas_pool -size Pool0

Output

id           = 43
name         = Pool0
used_mb      = 0
avail_mb     = 0
total_mb     = 0
potential_mb = 3691
Physical storage usage in Pool Pool0 on APM00101902363
used_mb      = 16385
avail_mb     = 1632355
total_mb     = 1648740

Display size information for Symmetrix storage pools


Use the -size -all option to display the size information for all storage pools.
Action
To display the size information of Symmetrix storage pools, use this command syntax:
$ nas_pool -size <name> -slice y
where:
<name> = name of the storage pool

Example: To request size information for the Symmetrix symm_std storage pool, type:
$ nas_pool -size symm_std -slice y

Output

id           = 5
name         = symm_std
used_mb      = 128000
avail_mb     = 0
total_mb     = 260985
potential_mb = 260985

Note

Use the -slice y option to include any space from sliced volumes in the available result. However, if the default_slice_flag value is set to no, then sliced volumes do not appear in the output.

The size information for the system-defined storage pool named symm_std appears in the output. If you have more storage pools, the output shows the size information for all the storage pools:

used_mb is the used space in the specified storage pool in megabytes.
avail_mb is the amount of unused available space in the storage pool in megabytes.
total_mb is the total of used and unused space in the storage pool in megabytes.
potential_mb is the potential amount of storage, available from all sources in megabytes, that can be added to the storage pool. For user-defined storage pools, the output for potential_mb is 0 because they must be manually extended and shrunk.

In this example, total_mb and potential_mb are the same because the total storage in the storage pool is equal to the total potential storage available. If either non-megabyte-aligned disk volumes or disk volumes of different sizes are striped together, truncation of storage might occur. The total amount of space added to a pool might differ from the total amount taken from potential storage. Total space in the pool includes the truncated space, but potential storage does not include the truncated space.

Modify system-defined and user-defined storage pool attributes


System-defined and user-defined storage pools have attributes that control how they manage the volumes and file systems. Table 7 on page 36 lists the modifiable storage pool attributes, and their values and descriptions. You can change the attribute default_slice_flag for system-defined and user-defined storage pools. The flag indicates whether member volumes can be sliced. If the storage pool has member volumes built on one or more slices, you cannot set this value to n.
Action
To modify the default_slice_flag for a system-defined or user-defined storage pool, use this command syntax:
$ nas_pool -modify {<name>|id=<id>} -default_slice_flag {y|n}
where:
<name> = name of the storage pool
<id> = ID of the storage pool

Example: To modify a storage pool named marketing and change the default_slice_flag to prevent members of the pool from being sliced when space is dispensed, type:
$ nas_pool -modify marketing -default_slice_flag n

Output

id                 = 5
name               = marketing
description        = storage pool for marketing
acl                = 0
in_use             = False
clients            =
members            = d126,d127,d128,d129
default_slice_flag = False
is_user_defined    = True
thin               = False
disk_type          = STD
server_visibility  = server_2,server_3,server_4
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A

Note

When the default_slice_flag is set to y, it appears as True in the output. If using automatic file system extension, the default_slice_flag should be set to n.
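To confirm the setting before enabling automatic file system extension on file systems in the pool, the pool details can be filtered on the Control Station. A minimal sketch, reusing the marketing pool from the example above (output abridged and illustrative):
$ nas_pool -info marketing | grep default_slice_flag
default_slice_flag = False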

Modify system-defined storage pool attributes


The system-defined storage pool attributes that can be modified are:

-is_dynamic: Indicates whether the system-defined storage pool is allowed to automatically add or remove member volumes.
-is_greedy: If this is set to y (greedy), the system-defined storage pool attempts to create new member volumes before using space from existing member volumes. If this is set to n (not greedy), the system-defined storage pool consumes all the existing space in the storage pool before trying to add additional member volumes.

Note: When extending a file system, the is_greedy attribute is ignored unless there is not enough free space on the existing volumes that the file system is using. Table 7 on page 36 describes the is_greedy behavior.
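Before modifying either attribute, it can help to inspect the current values. A minimal sketch, filtering the pool details on the Control Station (output abridged and illustrative):
$ nas_pool -info clar_r5_performance | grep is_
is_user_defined = False
is_dynamic      = True
is_greedy       = False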

The tasks to modify the attributes of a system-defined storage pool are:

Modify the -is_greedy attribute of a system-defined storage pool on page 109
Modify the -is_dynamic attribute of a system-defined storage pool on page 110

Modify the -is_greedy attribute of a system-defined storage pool


Action
To modify the -is_greedy attribute of a specific system-defined storage pool to allow the storage pool to use new volumes rather than existing volumes, use this command syntax:
$ nas_pool -modify {<name>|id=<id>} -is_greedy {y|n}
where:
<name> = name of the storage pool
<id> = ID of the storage pool

Example: To change the attribute -is_greedy to false for the storage pool named clar_r5_performance, type:
$ nas_pool -modify clar_r5_performance -is_greedy n

Output

id                 = 3
name               = clar_r5_performance
description        = CLARiiON RAID5 4plus1
acl                = 0
in_use             = False
clients            =
members            =
default_slice_flag = True
is_user_defined    = False
thin               = False
volume_profile     = clar_r5_performance_vp
is_dynamic         = True
is_greedy          = False
num_stripe_members = 4
stripe_size        = 32768

Note

The n entered in the example appears as False for the is_greedy attribute in the output.

Modify the -is_dynamic attribute of a system-defined storage pool


Action
To modify the -is_dynamic attribute of a specific system-defined storage pool so that the storage pool cannot automatically add or remove members, use this command syntax:
$ nas_pool -modify {<name>|id=<id>} -is_dynamic {y|n}
where:
<name> = name of the storage pool
<id> = ID of the storage pool

Example: To change the attribute -is_dynamic to false so that the storage pool named clar_r5_performance cannot automatically add or remove members, type:
$ nas_pool -modify clar_r5_performance -is_dynamic n

Output

id                 = 3
name               = clar_r5_performance
description        = CLARiiON RAID5 4plus1
acl                = 0
in_use             = False
clients            =
members            = d126,d127,d128,d129
default_slice_flag = True
is_user_defined    = False
thin               = False
volume_profile     = clar_r5_performance_vp
is_dynamic         = False
is_greedy          = False
num_stripe_members = 4
stripe_size        = 32768

Note

The n entered in the example appears as False for the is_dynamic attribute in the output.

Modify user-defined storage pool attributes


The user-defined storage pool attributes that can be modified are:

-name: Changes the name of the specified user-defined storage pool to the new name.
-acl: Designates an access control level for a user-defined storage pool. The default value is 0.
-description: Changes the description comment for the user-defined storage pool.

The tasks to modify the attributes of a user-defined storage pool are:

Modify the name of a user-defined storage pool on page 112
Modify the access control of a user-defined storage pool on page 113
Modify the description of a user-defined storage pool on page 114

Modify the name of a user-defined storage pool


Action
To modify the name of a specific user-defined storage pool, use this command syntax:
$ nas_pool -modify <name> -name <new_name>
where:
<name> = old name of the storage pool
<new_name> = new name of the storage pool

Example: To change the name of the storage pool named marketing to purchasing, type:
$ nas_pool -modify marketing -name purchasing

Output

id                 = 5
name               = purchasing
description        = storage pool for marketing
acl                = 0
in_use             = False
clients            =
members            = d126,d127,d128,d129
default_slice_flag = True
is_user_defined    = True
thin               = False
disk_type          = STD
server_visibility  = server_2,server_3,server_4
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A

Note

The name change to purchasing appears in the output. The description does not change unless the administrator changes it.

Modify the access control of a user-defined storage pool

Controlling Access to System Objects on VNX contains instructions to manage access control levels.

Action
To modify the access control level for a specific user-defined storage pool, use this command syntax:
$ nas_pool -modify {<name>|id=<id>} -acl <acl>
where:
<name> = name of the storage pool.
<id> = ID of the storage pool.
<acl> = designates an access control level for the storage pool. The default value is 0.

Example: To change the access control level for the storage pool named purchasing, type:
$ nas_pool -modify purchasing -acl 1000

Output

id                 = 5
name               = purchasing
description        = storage pool for marketing
acl                = 1000
in_use             = False
clients            =
members            = d126,d127,d128,d129
default_slice_flag = True
is_user_defined    = True
thin               = False
disk_type          = STD
server_visibility  = server_2,server_3,server_4
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A

Note

The access control level change to 1000 appears in the output. The description does not change unless the administrator modifies it.

Modify the description of a user-defined storage pool


Action
To modify the description of a specific user-defined storage pool, use this command syntax:
$ nas_pool -modify {<name>|id=<id>} -description <description>
where:
<name> = name of the storage pool.
<id> = ID of the storage pool.
<description> = descriptive comment about the pool or its purpose. Type the comment within quotes.

Example: To change the descriptive comment for the storage pool named purchasing, type:
$ nas_pool -modify purchasing -description "storage pool for purchasing"

Output

id                 = 15
name               = purchasing
description        = storage pool for purchasing
acl                = 1000
in_use             = False
clients            =
members            = d126,d127,d128,d129
default_slice_flag = True
is_user_defined    = True
thin               = False
disk_type          = STD
server_visibility  = server_2,server_3,server_4
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A

Extend a user-defined storage pool by volume


You can add a slice volume, a metavolume, a disk volume, or a stripe volume to a user-defined storage pool.
Action
To extend an existing user-defined storage pool by volumes, use this command syntax:
$ nas_pool -xtend {<name>|id=<id>} [-storage <system_name>] -volumes [<volume_name>,...]
where:
<name> = name of the storage pool
<id> = ID of the storage pool
<system_name> = name of the storage system, used to differentiate pools when the same pool name is used in multiple storage systems
<volume_name> = names of the volumes separated by commas

Example: To extend the volumes for the storage pool named engineering, with volumes d130, d131, d132, and d133, type:
$ nas_pool -xtend engineering -volumes d130,d131,d132,d133

Output

id                 = 6
name               = engineering
description        =
acl                = 0
in_use             = False
clients            =
members            = d126,d127,d128,d129,d130,d131,d132,d133
default_slice_flag = True
is_user_defined    = True
thin               = False
disk_type          = STD
server_visibility  = server_2,server_3,server_4
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A

Note

The original volumes (d126, d127, d128, and d129) appear in the output, followed by the volumes added in the example.

Extend a user-defined storage pool by size


Action
To extend the volumes for an existing user-defined storage pool by size, use this command syntax:
$ nas_pool -xtend {<name>|id=<id>} -size <integer> [M|G|T] [-storage <system_name>]
where:
<name> = name of the storage pool
<id> = ID of the storage pool
<integer> = size requested. The default size unit is MB.
<system_name> = storage system on which one or more volumes will be created, to be added to the storage pool

Example: To extend the volumes for the storage pool named engineering, by a size of 1 GB, type:
$ nas_pool -xtend engineering -size 1G

Output

id                 = 6
name               = engineering
description        =
acl                = 0
in_use             = False
clients            =
members            = d126,d127,d128,d129,d130,d131,d132,d133
default_slice_flag = True
is_user_defined    = True
thin               = False
disk_type          = STD
server_visibility  = server_2,server_3,server_4
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A

Extend a system-defined storage pool


You can turn off the dynamic behavior of a system-defined storage pool to prevent it from consuming additional disk volumes, and then specify a size by which AVM expands the pool. Doing so:

Uses the disk selection algorithms that AVM uses to create system-defined storage pool members.
Prevents system-defined storage pools from rapidly consuming a large number of disk volumes.

You can specify the storage system from which to allocate space to the pool. The dynamic behavior of the system-defined storage pool must be turned off by using the nas_pool -modify command before extending the pool.
Note: When extending a file system, the is_greedy attribute is ignored unless there is not enough free space on the existing volumes that the file system is using. Table 7 on page 36 describes the is_greedy behavior.

On successful completion, the system-defined storage pool expands by at least the specified size; the storage pool might expand by more than the requested size. The behavior is the same as when the storage pool is expanded during a file system creation. If a storage system is not specified and the pool has members from a single storage system, the default is the existing storage system. If a storage system is not specified and the pool has members from multiple storage systems, the existing set of storage systems is used to extend the storage pool. If a storage system is specified, space is allocated from that system. The following requirements apply; a combined sketch of the sequence follows this list:

The specified pool must be a system-defined pool.
The specified pool must have the is_dynamic attribute set to n, or false. Modify system-defined storage pool attributes on page 108 provides instructions to change the attribute.
There must be enough disk volumes to satisfy the size requested.
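A minimal sketch of the sequence, reusing the pool and storage-system names from the surrounding examples (the 100G size is illustrative):
$ nas_pool -modify clar_r5_performance -is_dynamic n
$ nas_pool -xtend clar_r5_performance -size 100G -storage APM00023700165-0011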

Extend a system-defined storage pool by size


Action
To extend a system-defined storage pool by size and specify a storage system from which to allocate space, use this command syntax:
$ nas_pool -xtend {<name>|id=<id>} -size <integer> -storage <system_name>
where:
<name> = name of the system-defined storage pool.
<id> = ID of the storage pool.
<integer> = size requested in MB or GB. The default size unit is MB.
<system_name> = name of the storage system from which to allocate the storage.

Example: To extend the system-defined clar_r5_performance storage pool by size and designate the storage system from which to allocate space, type:
$ nas_pool -xtend clar_r5_performance -size 128M -storage APM00023700165-0011

Output

id                 = 3
name               = clar_r5_performance
description        = CLARiiON RAID5 4plus1
acl                = 0
in_use             = False
clients            =
members            = v216
default_slice_flag = False
is_user_defined    = False
thin               = False
disk_type          = CLSTD
server_visibility  = server_2,server_3
volume_profile     = clar_r5_performance_vp
is_dynamic         = False
is_greedy          = False
num_stripe_members = 4
stripe_size        = 32768

Remove volumes from storage pools


Action
To remove volumes from a system-defined or user-defined storage pool, use this command syntax:
$ nas_pool -shrink {<name>|id=<id>} [-storage <system_name>] -volumes [<volume_name>,...]
where:
<name> = name of the storage pool
<id> = ID of the storage pool
<system_name> = name of the storage system, used to differentiate pools when the same pool name is used in multiple storage systems
<volume_name> = names of the volumes separated by commas

Example: To remove volumes d130 and d133 from the storage pool named marketing, type:
$ nas_pool -shrink marketing -volumes d130,d133

Output

id                 = 5
name               = marketing
description        = storage pool for marketing
acl                = 1000
in_use             = False
clients            =
members            = d126,d127,d128,d129,d131,d132
default_slice_flag = True
is_user_defined    = True
thin               = False
disk_type          = STD
server_visibility  = server_2,server_3,server_4
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A

Delete user-defined storage pools


You can delete only a user-defined storage pool that is not in use. You must remove all storage pool member volumes before deleting a user-defined storage pool; a sketch of that sequence follows this paragraph. This delete action removes the member volumes from the specified storage pool and then deletes the storage pool; it does not delete the volumes themselves. System-defined storage pools cannot be deleted.
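A minimal sketch of the shrink-then-delete sequence (the volume names d134 and d135 are hypothetical; the formal syntax and a simple example follow below):
$ nas_pool -shrink sales -volumes d134,d135
$ nas_pool -delete sales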
Action
To delete a user-defined storage pool, use this command syntax:
$ nas_pool -delete <name>
where:
<name> = name of the storage pool

Example: To delete the user-defined storage pool named sales, type:


$ nas_pool -delete sales

Output

id                 = 7
name               = sales
description        =
acl                = 0
in_use             = False
clients            =
members            =
default_slice_flag = True
is_user_defined    = True
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A

Delete a user-defined storage pool and its volumes


The -deep option deletes the storage pool and also recursively deletes each member of the storage pool unless it is in use or is a disk volume.
Action
To delete a user-defined storage pool and the volumes in it, use this command syntax:
$ nas_pool -delete {<name>|id=<id>} [-deep]
where:
<name> = name of the storage pool
<id> = ID of the storage pool

Example: To delete the storage pool named sales and its member volumes, type:


$ nas_pool -delete sales -deep

Output

id                 = 7
name               = sales
description        =
acl                = 0
in_use             = False
clients            =
members            =
default_slice_flag = True
is_user_defined    = True
thin               = False
template_pool      = N/A
num_stripe_members = N/A
stripe_size        = N/A

Chapter 5: Troubleshooting

As part of an effort to continuously improve and enhance the performance and capabilities of its product lines, EMC periodically releases new versions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.

If a product does not function properly or does not function as described in this document, contact your EMC Customer Support Representative. Problem Resolution Roadmap for VNX contains additional information about using the EMC Online Support website and resolving problems.

Topics included are:

AVM troubleshooting considerations on page 124
EMC E-Lab Interoperability Navigator on page 124
Known problems and limitations on page 124
Error messages on page 125
EMC Training and Professional Services on page 126

AVM troubleshooting considerations


Consider these steps when troubleshooting AVM:

Obtain all files and subdirectories in /nas/log/ and /nas/volume/ from the Control Station before reporting problems, which helps to diagnose the problem faster. Additionally, save any files in /nas/tasks when problems are seen from the Unisphere for File software. The support material script collects information related to the Unisphere for File software and APL.
Set the environment variable NAS_REPLICATE_DEBUG=1 to log additional information in /nas/log/nas_log.al.tran.
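A minimal sketch of gathering that material in a bash session on the Control Station before contacting support (the archive path is illustrative):
$ export NAS_REPLICATE_DEBUG=1
$ tar czf /home/nasadmin/avm_support.tgz /nas/log /nas/volume /nas/tasks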

EMC E-Lab Interoperability Navigator


The EMC E-Lab Interoperability Navigator is a searchable, web-based application that provides access to EMC interoperability support matrices. It is available at http://Support.EMC.com. After logging in to the EMC Online Support website, locate the applicable Support by Product page, find Tools, and click E-Lab Interoperability Navigator.

Known problems and limitations


Table 9 on page 124 describes known problems that might occur when using AVM and automatic file system extension and presents workarounds.
Table 9. Known problems and workarounds

Known problem: AVM system-defined storage pools and checkpoint extensions recognize temporary disks as available disks.
Symptom: Temporary disks might be used by AVM system-defined storage pools or checkpoint extension.
Workaround: Place the newly marked disks in a user-defined storage pool. This protects them from being used by system-defined storage pools (and manual volume management).

Table 9. Known problems and workarounds (continued)

Known problem: In an NFS environment, the write activity to the file system starts immediately when a file changes. When the file system reaches the HWM, it begins to automatically extend but might not finish before the Control Station issues a file system full error. This causes an automatic extension failure. In a CIFS environment, the CIFS/Windows Microsoft client does Persistent Block Reservation (PBR) to reserve the space before the writes begin. As a result, the file system full error occurs before the HWM is reached and before automatic extension is initiated.
Symptom: An error message indicating the failure of automatic extension start, and a full file system.
Workaround: Alleviate this timing issue by lowering the HWM on a file system to ensure automatic extension can accommodate normal file system activity. Set the HWM to allow enough free space in the file system to accommodate write operations to the largest average file in that file system. For example, if you have a file system that is 100 GB, and the largest average file in that file system is 20 GB, set the HWM for automatic extension to 70%. Changes made to the 20 GB file might cause the file system to reach the HWM, or 70 GB. There is 30 GB of space left in the file system to handle the file changes, and to initiate and complete automatic extension without failure.
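Following the 100 GB example above, a sketch of applying such a workaround (the file system name ufs1 and the maximum size are illustrative; the automatic extension options are described earlier in this document):
$ nas_fs -modify ufs1 -auto_extend yes -hwm 70% -max_size 200G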

Error messages
All event, alert, and status messages provide detailed information and recommended actions to help you troubleshoot the situation. To view message details, use any of these methods:

Unisphere software:

Right-click an event, alert, or status message and select to view Event Details, Alert Details, or Status Details.

CLI:

Type nas_message -info <MessageID>, where <MessageID> is the message identification number.

Celerra Error Messages Guide:

Use this guide to locate information about messages that are in the earlier-release message format.

EMC Online Support:

Use the text from the error message's brief description or the message's ID to search the Knowledgebase on the EMC Online Support website. After logging in to EMC Online Support, locate the applicable Support by Product page, and search for the error message.

EMC Training and Professional Services


EMC Customer Education courses help you learn how EMC storage products work together within your environment to maximize your entire infrastructure investment. EMC Customer Education features online and hands-on training in state-of-the-art labs conveniently located throughout the world. EMC customer training courses are developed and delivered by EMC experts. Go to the EMC Online Support website at http://Support.EMC.com for course and registration information.

EMC Professional Services can help you implement your VNX series efficiently. Consultants evaluate your business, IT processes, and technology, and recommend ways that you can leverage your information for the most benefit. From business plan to implementation, you get the experience and expertise that you need without straining your IT staff or hiring and training new personnel. Contact your EMC Customer Support Representative for more information.

Glossary

A

automatic file system extension
Configurable file system feature that automatically extends a file system created or extended with AVM when the high water mark (HWM) is reached. See also high water mark.

Automatic Volume Management (AVM)
Feature of VNX for file that creates and manages volumes automatically without manual volume management by an administrator. AVM organizes volumes into storage pools that can be allocated to file systems. See also thin provisioning.

D

disk volume
On a VNX for file, a physical storage unit as exported from the storage system. All other volume types are created from disk volumes. See also metavolume, slice volume, stripe volume, and volume.

F

File migration service
Feature for migrating file systems from NFS and CIFS source file servers to the VNX for file. The online migration is transparent to users once it starts.

file system
Method of cataloging and managing the files and directories on a system.

Fully Automated Storage Tiering (FAST)
Lets you assign different categories of data to different types of storage media within a tiered pool. Data categories may be based on performance requirements, frequency of use, cost, and other considerations. The FAST feature retains the most frequently accessed or important data on fast, high performance (more expensive) drives, and moves the less frequently accessed and less important data to less-expensive (lower-performance) drives.

H

high water mark (HWM)
Trigger point at which the VNX for file performs one or more actions, such as sending a warning message, extending a volume, or updating a file system, as directed by the related feature's software/parameter settings.

L

logical unit number (LUN)
Identifying number of a SCSI or iSCSI object that processes SCSI commands. The LUN is the last part of the SCSI address for a SCSI object. The LUN is an ID for the logical unit, but the term is often used to refer to the logical unit itself.

M

mapped pool
A storage pool that is dynamically created during the normal storage discovery (diskmark) process for use on the VNX for file. It is a one-to-one mapping with either a VNX storage pool or a FAST Symmetrix Storage Group. A mapped pool can contain a mix of different types of LUNs that use any combination of data services (thin, thick, auto-tiering, mirrored, and VNX compression). However, mapped pools should contain only the same type of LUNs that use the same data services (all thick, all thin, all the same auto-tiering options, all mirrored or none mirrored, and all compressed or none compressed) for the best file system performance.

metavolume
On VNX for file, a concatenation of volumes, which can consist of disk, slice, or stripe volumes. Also called a hypervolume or hyper. Every file system must be created on top of a unique metavolume. See also disk volume, slice volume, stripe volume, and volume.

S

slice volume
On VNX for file, a logical piece or specified area of a volume used to create smaller, more manageable units of storage. See also disk volume, metavolume, stripe volume, and volume.

storage pool
Groups of available disk volumes organized by AVM that are used to allocate available storage to file systems. They can be created automatically by AVM or manually by the user. See also Automatic Volume Management (AVM).

stripe volume
Arrangement of volumes that appear as a single volume. Allows for stripe units that cut across the volume and are addressed in an interlaced manner. Stripe volumes make load balancing possible. See also disk volume, metavolume, and slice volume.

system-defined storage pool
Predefined AVM storage pools that are set up to help you easily manage both storage volume structures and file system provisioning by using AVM.

T

thin LUN
A LUN whose storage capacity grows by using a shared virtual (thin) pool of storage when needed.

thin pool
A user-defined VNX for block storage pool that contains a set of disks on which thin LUNs can be created.

thin provisioning
Configurable VNX for file feature that lets you allocate storage based on long-term projections, while you dedicate only the file system resources that you currently need. NFS or CIFS clients and applications see the virtual maximum size of the file system of which only a portion is physically allocated. See also Automatic Volume Management.

U

Universal Extended File System (UxFS)
High-performance, VNX for file default file system, based on traditional Berkeley UFS, enhanced with 64-bit support, metadata logging for high availability, and several performance enhancements.

user-defined storage pools
User-created storage pools containing volumes that are manually added. User-defined storage pools provide an appropriate option for users who want control over their storage volume structures while still using the automated file system provisioning functionality of AVM to provision file systems from the user-defined storage pools.

V

volume
On VNX for file, a virtual disk into which a file system, database management system, or other application places data. A volume can be a single disk partition or multiple partitions on one or more physical drives. See also disk volume, metavolume, slice volume, and stripe volume.

Index

A
algorithm
   automatic file system extension 56
   Symmetrix 46
   system-defined storage pools 38
   VNX for block 41
attributes
   storage pool, modify 107, 108, 111
   storage pools 36
   system-defined storage pools 108
   user-defined storage pools 111
automatic file system extension
   algorithm 56
   and VNX Replicator interoperability considerations 57
   considerations 61
   enabling 68
   how it works 27
   maximum size option 79
   maximum size, set 93
   options 26
   restrictions 14
   thin provisioning 94
Automatic Volume Management (AVM)
   restrictions 13
   storage pool 27

C
cautions 16
   spanning storage systems 16
character support, international 16
checkpoint, create for file system 98
clar_r1 storage pool 31
clar_r5_economy storage pool 31
clar_r5_performance storage pool 31
clar_r6 storage pool 31
clarata_archive storage pool 32
clarata_r10 storage pool 32
clarata_r3 storage pool 32
clarata_r6 storage pool 32
clarefd_r10 storage pool 32
clarefd_r5 storage pool 32
clarsas_archive storage pool 32
clarsas_r10 storage pool 32
clarsas_r6 storage pool 32
cm_r1 storage pool 32
cm_r5_economy storage pool 32
cm_r5_performance storage pool 32
cm_r6 storage pool 32
cmata_archive storage pool 33
cmata_r10 storage pool 33
cmata_r3 storage pool 33
cmata_r6 storage pool 33
cmefd_r10 storage pool 33
cmefd_r5 storage pool 33
cmsas_archive storage pool 33
cmsas_r10 storage pool 33
cmsas_r6 storage pool 33
considerations
   automatic file system extension 61
   interoperability 57
create a file system 68, 70, 72
   using system-defined pools 70
   using user-defined pools 72

D
data service policy
   removing from storage group 17
delete user-defined storage pools 120
details, display 103
display
   details 103
   size information 104

E
EMC E-Lab Navigator 124
error messages 125
extend file systems
   by size 83
   by volume 85
   with different storage pool 87
extend storage pools
   system-defined by size 118
   user-defined by size 116
   user-defined by volume 115

F
FAST capacity algorithm and striping 18
file system
   create checkpoint 98
   extend by size 83
   extend by volume 85
   quotas 16
file system considerations 61

I
international character support 16

K
known problems and limitations 124

L
legacy CLARiiON and deleting thin items 17

M
masking option and moving LUNs 18
messages, error 125
migrating LUNs 18
modify system-defined storage pools 108

P
planning considerations 57
profiles, volume and storage 38

Q
quotas for file system 16

R
RAID group combinations 34
related information 22
restrictions 12, 13, 14, 15, 16, 17
   automatic file system extension 14
   AVM 13
   Symmetrix volumes 13
   thin provisioning 15
   TimeFinder/FS 17
   VNX for block 16

S
storage pools
   attributes 47
   clar_r1 31
   clar_r5_economy 31
   clar_r5_performance 31
   clar_r6 31
   clarata_archive 32
   clarata_r10 32
   clarata_r3 32
   clarata_r6 32
   clarefd_r10 32
   clarefd_r5 32
   clarsas_archive 32
   clarsas_r10 32
   clarsas_r6 32
   cm_r1 32
   cm_r5_economy 32
   cm_r5_performance 32
   cm_r6 32
   cmata_archive 33
   cmata_r10 33
   cmata_r3 33
   cmata_r6 33
   cmefd_r10 33
   cmefd_r5 33
   cmsas_archive 33
   cmsas_r10 33
   cmsas_r6 33
   delete user-defined 120
   display details 103
   display size information 104
   explanation 27
   extend system-defined by size 118
   extend user-defined by size 116
   extend user-defined by volume 115
   list 102
   modify attributes 107
   remove volumes from user-defined 119
   supported types 31
   symm_ata 31
   symm_ata_rdf_src 31
   symm_ata_rdf_tgt 31
   symm_efd 31
   symm_std 31
   symm_std_rdf_src 31
   symm_std_rdf_tgt 31
   system-defined algorithms 38
   system-defined Symmetrix 46
   system-defined VNX for block 39
symm_ata storage pool 31
symm_ata_rdf_src storage pool 31
symm_ata_rdf_tgt storage pool 31
symm_efd storage pool 31
symm_std storage pool 31
symm_std_rdf_src storage pool 31
symm_std_rdf_tgt storage pool 31
Symmetrix and deleting thin items 17
Symmetrix pool, insufficient space 18
system-defined storage pools 38, 70, 83, 85, 108
   algorithms 38
   create a file system with 70
   extend file systems by size 83
   extend file systems by volume 85

T
thin provisioning, out of space message 18
troubleshooting 123

U
Unicode characters 16
upgrade software 60
user-defined storage pools 72, 83, 85, 111, 119
   create a file system with 72
   extend file systems by size 83
   extend file systems by volume 85
   modify attributes 111
   remove volumes 119

V
VNX for block pool, insufficient space 18
VNX upgrade automatic file system extension issue 17