
EMC VNX Operating Environment for File
Version 7.0.52.1

Release Notes
P/N 300-011-984
REV A09
April 23, 2012

These release notes contain information about version 7.0.52.1 of VNX OE for File. Topics include:

Product description ................................................................................... 2


New features and changes ........................................................................ 2
Fixed problems ........................................................................................... 2
Known problems and limitations .......................................................... 14
Technical notes ......................................................................................... 33
Environment and system requirements................................................ 44
Documentation ......................................................................................... 44
Software media, organization, and files ............................................... 45
Installation and upgrades ....................................................................... 45
Troubleshooting and getting help ......................................................... 51

Check the EMC Online Support website, at http://Support.EMC.com, for updates to this document. Updates are made when a new version or patch is released, or when new information is discovered.

Product description

This Service Pack release fixes several issues covered in the Fixed problems section below.

New features and changes

There are no new features in this release.

Versions earlier than 7.0.52.x

For a complete list of the bugs that were fixed in previous versions of VNX for File, go to the EMC Online Support website (registration required) at: http://Support.EMC.com.

Fixed problems

Version 7.0.52.1

Backup

Symptom: A VBB restore operation failed when restoring directories with UTF8 characters.
Fix: The code was fixed to handle directories with UTF8 characters.
Tracking: 459567/43756396

Symptom: The Data Mover panicked when the SavVol filled up during a backup longer than 60 days and the oldest checkpoint was inactivated.
Fix: The code was fixed to protect against inactivated checkpoints panicking the Data Mover. The number of active versus inactive checkpoints was verified to be correct.
Tracking: 460751/44831934

CIFS

Severity: 2
Symptom: Copying a file over a symlink failed from an SMB2 client, reporting insufficient space even though there was enough space on the target file system. This occurred because the size of the file system containing the symlink was checked for space rather than the target file system.
Fix: The SMB2 mount points implementation was corrected to fix this issue.
Tracking: 431344/42601052

Symptom: Windows 8 clients were unable to map EMC server shares. The net use command failed with system error 2148073478.
Fix: The code was fixed to support Windows 8 compatibility: the Data Mover selectively checked an SMB2 request and signed the response.
Tracking: 459810/No Clarify ID

Symptom: Windows clients issuing a DOS 'dir' command that included a complex mask caused all CIFS access to the file server to be blocked. A forced panic of the VNX OE for File was needed to restore client access.
Fix: The code was changed to reformat DOS masks for optimal execution before they were submitted to the file server.
Tracking: 460364/44773644

Symptom: A Windows user trying to create a new file (right-click in Windows Explorer) did not get the New option if the share they were working on was mounted on a network drive, the file system was a writeable snap, and the client OS was Win7/W2K8.
Fix: The code was fixed to return the right state of the file system: read-only if it was a regular (read-only) snap, or read/write if it was a writeable snap.
Tracking: 461052/44834702

Symptom: When a CIFS server had many open files (likely more than 100000, depending on file name size), the open files enumeration over MSRPC (MMC, srvmgr) could not complete.
Fix: The fix prevented the cleanup task from running on in-progress active requests. This allowed the open files enumeration to complete.
Tracking: 462010/44848002

Symptom: Users could experience access issues to a server's resource when the domain was configured to use Resource Groups through Active Directory Federation Services (ADFS), as described in the Microsoft document: http://technet.microsoft.com/en-us/library/cc737832(WS.10).aspx
Fix: The code was modified to handle credential resource groups correctly.
Tracking: 464981/45324904

Severity: 2
Symptom: server_stats and printstats cifs did not report correctly for SMB2 TCP connections and SMB1/SMB2 threads.
Fix: The SMB2 TCP connection and SMB1/SMB2 thread counters reported by server_stats were corrected.
Tracking: 468729/No Clarify ID

CIFS Virus Check

Severity: 2
Symptom: A customer misconfigured the Symantec AV product. VNX viruscheck commands showed the CAVAs as online even though Symantec was misconfigured, so the customer had no indication of the misconfiguration. By remaining online, the CAVAs were compromising the AV security of the VNX shares.
Fix: The CAVA heartbeat monitor code was modified to continually check the user context of Symantec, to make sure that Symantec was properly configured. If the CAVA noticed that Symantec was misconfigured due to user context, it sent an "error-auth" state back to the VNX; this state was recorded in the server_log, and the VNX did not send any scan requests to this particular CAVA until the user corrected the Symantec configuration. The SAVSE process was tracked to make sure it had the proper AV rights.
Tracking: 440771/41604250

Symptom: When a file was being opened for writing by one client and another client was reading attributes (or extended attributes) of this file, an unnecessary virus checker scan was triggered for the file, causing write performance degradation.
Fix: With the fix, the virus checker scan was not triggered when the second client was reading only attributes of a file that was opened for writing by the first client.
Tracking: 463462/43901404

Symptom: When an NFS user renamed a file on a file system with CEPP enabled for NFS, the VNX OE for File panicked with the following message: Page Fault Interrupt. Virt ADDRESS: 0x00007ec8f3 Err code: 0 Target addr: 0x0000000188
Fix: The code was fixed to avoid the panic condition.
Tracking: 463584/45067744

VNX OE for File

Symptom: After creating a checkpoint on the Control Station while the production file system was mounted on a Unix/Linux client, that client could not immediately cd into the .ckpt directory from the root of the file system, and/or not all checkpoints were visible from the .ckpt directory at the root of the file system. Once the file system was unmounted and mounted again, the client could cd to .ckpt and all the checkpoints were visible from the root of the file system.
Fix: The code was fixed so that checkpoints were assigned correct values when they were created, and all checkpoints and the .ckpt directory were visible to the client from the root of the PFS immediately after creation.
Tracking: 455858/No Clarify ID

Symptom: VNX OE for File panicked with the following message due to a memory leak when F-RDE was scanning a file system with a large number of hard-linked files: DART panic/fault message: *** Page Fault Interrupt. Virt ADDRESS: 0x0000159274 Err code: 2 Target addr: 0x0000000000 **
Fix: The code was fixed to avoid the panic condition.
Tracking: 459484/44220834

Symptom: The Data Mover panicked with the following message due to a NAME_LOOKUP request made by quota tools when no NIS domain was defined on the Data Mover or one of its VDMs: Page Fault Interrupt. Virt ADDRESS: 0x000023f312 Err code: 0 Target addr: 0x0000000008
Fix: The code was modified to correctly handle the case when no NIS domain was defined.
Tracking: 472205/46098586

Symptom: Under load, the VNX OE for File crashed with "Out of memory". The following message was displayed: DART panic/fault message: >>PANIC in file: ../addrspac.cxx at line: 486 : Out of memory. This could happen randomly when memory was fragmented.
Fix: The memory used by the DBMS internal mechanism was pre-allocated at system initialization to avoid the out-of-memory situation.
Tracking: 460260/44774392

DHSM

Severity: 1
Symptom: VNX OE for File panicked with the following message when a user created a DHSM http connection and tried to modify this connection using an incorrect fs_dhsm command (the user option was used but no user name was provided): DART panic/fault message: *** GP exception. Virt ADDRESS: 0x0000aa783f. Err code: 0 ***
Fix: The DHSM command parser code was made more robust to avoid the panic.
Tracking: 422536/41631464

File System

Symptom: A Data Mover could deadlock if an FS checkpoint, or any other task that needed an FS pause, ran while iSCSI LUN/snap deletes were going on in the background.
Fix: Locks were acquired in the proper order to avoid the deadlock situation.
Tracking: 459827/44694588

Symptom: A corrupted NLM request caused a Data Mover panic with the following message:
Page Fault Interrupt. Virt ADDRESS: 0x000087d7fa Err code: 0 Target addr: 0
Datamover stack:
#0 Lockd_Initial::start(this=0x360cdc40) lockdthr.cxx:1401
#1 Sthread_startThread_internal(ip=0x2d4464a60) sthread.cxx:698
Fix: Fixed the use of an uninitialized pointer.
Tracking: 466579/45508014

FLR

Severity: 1
Symptom: VNX OE for File panicked with the following message: DART panic/fault message: >>PANIC in file: ../rawirp.cxx at line: 607 : Write Verify failed for FLR data
Fix: The code was fixed to avoid the panic.
Tracking: 461016/44839716

FSCK

Severity: 1
Symptom: If a PBR check was required during FSCK, or if quota recalculation was required when mounting a file system, the inode scan routine did not print any progress message, making the user believe that the operation was hung.
Fix: The code was fixed so that the inode scan prints progress messages.
Tracking: 463902/45121204

Installation & Upgrades

Symptom: During the upgrade, the 'upgrade Primary Blades' task was deleted and root_disk_reserve was recreated unnecessarily.
Fix: The code was modified to check the difference between total_reserve_size and root_disk_size to avoid the deletion and recreation of the root disk reserve in unnecessary cases.
Tracking: 424823/99999999

Symptom: A corrupt internal checkpoint was not detected until the PUHC was run prior to a VNX OE for Block code upgrade. There was a need to proactively detect corrupt internal checkpoints and alert the customer.
Fix: The code was enhanced to proactively detect corrupt internal checkpoints and alert the customer using PAHC checks.
Tracking: 425085/41900500

Symptom: The use of non-default server names caused PUHC check failures such as the following:
------------------------------------Errors------------------------------------
Blades : Check network connectivity
Error HC_DM_14505148472: Failed network connectivity check to vnx01-dm01   <===== non-default server name
*Action : Name: System_Error Key: 1002 Level: undefined
Blades : Check network connectivity
Error HC_DM_14505148472: Failed network connectivity check to vnx01-dm01b
*Action : Name: System_Error Key: 1002 Level: undefined
Blades : Check network connectivity
Error HC_DM_14505148472: Failed network connectivity check to vnx01-dm02
*Action : Name: System_Error Key: 1002 Level: undefined
Blades : Check network connectivity
Error HC_DM_14505148472: Failed network connectivity check to vnx01-dm02b
*Action : Name: System_Error Key: 1002 Level: undefined
Fix: The PUHC code was fixed to allow non-default server names.
Tracking: 466911/45517616

NFSv4

Symptom: VNX OE for File panicked with the following message: DART panic/fault message: *** Page Fault Interrupt. Virt ADDRESS: 0x00006a7fc0 Err code: 0 Target addr: 0x0000000000 **
Fix: The code was fixed to avoid the panic.
Tracking: 460992/44859406

Symptom: An AIX client hung when using NFSv4.
Fix: The server could no longer grant a read delegation if the open access mode was read + write. If a write delegation could not be granted, no delegation would be granted at all.
Tracking: 464549/44422786

Symptom: All NFS threads blocked, causing NFS access to be lost. This could happen due to an edge condition with NFSv4 in which verifying the attributes for an NFSv4 operation led to an infinite loop.
Fix: The conditions that could trigger this infinite loop were fixed.
Tracking: 465354/45380486

Symptom: The NFSv4 server returned the error NFS4_RESOURCE when there were too many open owners. This could also happen when few files were open, because open owners were released only after a lease period to allow close replay detection.
Fix: The code was fixed to release an open owner when it had no more open files. This avoided the issue.
Tracking: 465606/45378140

Platforms

Symptom: The connectemc daemon hung and stopped sending the callhome files.
Fix: A solution was implemented to detect and restart the connectemc main process if it was hung, and a regular restart mechanism of the connectemc main process was added as a last recovery resort.
Tracking: 453530/44047694

Symptom: Users saw a sequence of messages in the sys_log similar to these:
CS_PLATFORM:BoxMonitor:CRITICAL:515:::::Enclosure 0 management switch A I2C A bus error.
CS_PLATFORM:BoxMonitor:CRITICAL:536:::::Enclosure 0 fault occurred.
CS_PLATFORM:BoxMonitor:INFO:632:::::Enclosure 0 Compute Blade A : I2C Bus A device error mask is 0x80.
CS_PLATFORM:BoxMonitor:INFO:751:::::Enclosure 0 management switch A I2C A bus OK.
The duration of the fault and the frequency of the next occurrence varied from system to system. A call home would be placed for this event.
Fix: The fix was delivered in the Nimitz management switch firmware rev 4.32. The firmware incorrectly treated any unexpected open I2C switch in its path as a failure of the entire I2C bus. The firmware implemented a "debouncing" technique that required consecutive switch and I2C transaction failures to signal an I2C bus error.
Tracking: 453768/44009966

Severity: 2
Symptom: On the Control Station, the /nas/log/sys_log file filled up with non-critical informational messages related to backend storage, which caused the logs to grow large. This slowed down support materials log collection and also made the support materials large.
Fix: Non-critical informational messages related to backend activity were removed from the sys_log file.
Tracking: 455888/44234720

Quotas

Severity: 2
Symptom: The User Name column on the quota information page in Unisphere showed the User ID and not the Domain\Username.
Fix: The code was fixed to display the user information correctly.
Tracking: 464377/No Clarify ID

SnapSure

Severity: 1
Symptom: The user created a checkpoint scheduler with the cvfsname_prefix option, then changed it to the time_based_cvfsname option and rebooted the Data Mover. Some checkpoint cvfsnames in the .ckpt directory might have disappeared.
Fix: The code was fixed to handle this situation.
Tracking: 464787/99999999

Storage

Severity: 2
Symptom: An NDMP backup failed with the following medium write error message on a direct-attached LTO5 drive with the append-only feature:
---------- server log ---
CAM: 3: cdb: 0a 00 00 04 00 00 00 00 00 00 00 00
| SCSI ERR ON #3200 camstat 0x84 bstat InvalidOperation
| scsi stat.skey.ascq 0x02.07.5a02 Check condition
| skey [ ] Write protect
|____________________ ascq Operator selected write protect
Fix: The code added support for 16-byte SCSI commands for tape/robot devices.
Tracking: 431941/41648218

System Management

Symptom: Celerra Monitor did not start, with the error "Component <serial_number> is not accessible to this user".
Fix: The fix modified data types in Jserver to fit Symmetrix director configuration attributes without truncation.
Tracking: 420801/41209492

Symptom: An invalid component information warning for the SPE was received as an alert to the user.
Fix: The code was modified to correctly detect SPE and DPE, which prevents this warning from occurring.
Tracking: 426589/42052458

Symptom: While executing /nas/http/bin/set_passphrase, the following errors were displayed:
/nas/http/bin/set_passphrase: line 26: [: too many arguments
/nas/http/bin/set_passphrase: line 33: [: too many arguments
Fix: The code was modified to correctly handle multiple storage arrays registered with the Control Station.
Tracking: 437569/43482280

Symptom: The /nbsnas/var/emcsupport partition was full even though there were no files. Numerous messages similar to the following were present in the sys_log files:
Nov 10 12:03:01 2011:CS_PLATFORM:ADMIN:WARNING:18:::::dskMon[1245]: FS /dev/mapper/emc_vg_lun_5emc_lv_nas_var_emcsupport mounted on /nbsnas/var/emcsupport filling up (at 100%, max = 90%).
Fix: The code was modified to prevent the /nbsnas/var/emcsupport partition from getting full.
Tracking: 456533/44303866

Symptom: A VNX upgrade failed with the error message "Error IM_14505279794 : Failed to setup the following local storage devices ..." due to ongoing RAID group defragmentation on the storage array.
Fix: The fix removed the obsolete 'exdisks' check performed while running the 'nas_storage -check -all' command.
Tracking: 458484/44421782

Symptom: The server_export command failed with the error message "Invalid argument" when a host FQDN contained a "sec" clause.
Fix: The code that processes export options was modified to ensure that a "sec" clause in an FQDN is not processed as the "sec=" export option.
Tracking: 460366/44678140

Symptom: Errors like the following appeared in the server log of a standby server:
2012-01-09 02:35:26: 13160087552: SVTL: 3: Command failed: poolsInfo
2012-01-09 02:35:26: 26044989440: SVTL: 6: SvtlServer() --> svtl_rename(/svtl, /.etc/svtl) failed: Stale handle
2012-01-09 02:35:26: 13160087552: SVTL: 3: mkDir(/.etc/svtl/locks/libraries) failed: IO_Error
2012-01-09 02:35:26: 13160087552: SVTL: 3: SvtlServer() --> pTlusMutex->Init(/.etc/svtl/locks/libraries) SVTL_STATUS_ERROR
2012-01-09 02:35:26: 13160087552: SVTL: 3: SvtlServer::Alloc(/.etc/svtl): SVTL_STATUS_ERROR
Fix: The code was modified to disallow sending VTLU queries from the Control Station to a standby server.
Tracking: 465968/45378244

UFS

Symptom: Mounting a file system failed with the following message: Mount not allowed: too many mounts on this server.
Fix: The code was modified to handle this situation.
Tracking: 165703/33316276

Symptom: CPU usage spikes of 100 percent lasting for several seconds occurred when clients accessed tens of thousands of directories.
Fix: The code was changed to reclaim memory used in directory processing more efficiently.
Tracking: 457456/44256920

Severity: 1
Symptom: The Data Mover panicked with the panic header Assertion failure: 'aclHdr.getHdr()->revision == ACLRev' if the ACL database was full (1 million different ACLs) and concurrent threads were trying to allocate a new ACL.
Fix: The solution eliminated the condition where two threads attempt to allocate the last ACL entry.
Tracking: 467142/45564250

Known problems and limitations


Unisphere Service Manager
Control Station does not allow Starting NAS Services while Upgrading (454458)
Powering off Control Station 0 (CS0) during a dual Control Station (Dual
CS) upgrade causes failover to Control Station 1. CS0 is needed for the
upgrade. Control cannot be returned to CS0 either by USM or SSH
because NAS is not running on CS0 after the failover. CS1 cannot be
failed over, because it requires NAS running on CS0.
Workaround
Turn off NAS services on the secondary Control Station and then
continue the upgrade with CLI.
USM does not allow upgrade to continue (480234)
When upgrading with USM, after PUHC is complete, it will report
Upgrade cannot proceed, but there will be no indication as to why. All
the PUHC checks pass successfully.
Workaround
Click on the upgrade log link and find the error message being reported in
PUHC. This will give an indication of the particular problem that needs to be
fixed. You can contact your service provider to resolve the issue.

Control Station
Changing hostname (371583)
Changing VNX for file hostname will start a background task to rename
VNX for block LUNs with the new hostname. This operation can take
anywhere from 30 seconds to 10 minutes to complete, depending on the number of LUNs. If another hostname change occurs during the rename process, the new hostname change can fail to update the VNX LUN names or will require a reboot of the VNX Control Station.
To prevent this failure, do not change the hostname more often than once in 10 minutes.
Help page not available (392029)
If you access Unisphere via the Control Station IP, the help pages may
not be available, and the page shows "Session Timeout".
Workaround
Access Unisphere through SPA/SPB IP instead of the Control Station IP.
VDM create operation fails (414745)
When file systems on a Data Mover reach the maximum number (2000) of mounted file systems per Data Mover, the VDM create operation fails with the following error: Error 2237: Execution failed: match_seqnum: Precondition violated. [NAS_DB_ROWLOCK_OBJECT.increment_seqnum]. This causes an incomplete VDM to be created in the NAS database.
Workaround
Delete the partially created VDM, as in the sketch below. If you need to create another VDM on that Data Mover, unmount one of the existing file systems first.
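The cleanup can be done from the Control Station CLI. A minimal sketch, assuming the standard nas_server command syntax; the VDM name vdm_partial is a hypothetical example, so verify the actual name first:

    # List VDMs to identify the partially created one (hypothetical name vdm_partial)
    nas_server -list -vdm
    # Delete the partially created VDM
    nas_server -delete vdm_partial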

Deduplication
Restoring space-reduced files
Restoring space-reduced files from a PAX-based NDMP backup into a
file system causes the deduplication state to be set to suspended if the
deduplication state is not currently enabled.

Event Enabler
The following known problems and limitations pertain to VNX Event
Enabler:
VEE CAVA/CEPA servers
Each Data Mover should have a VEE CAVA pool consisting of a minimum of two VEE CAVA servers specified in the Data Mover's viruschecker.conf file, or a CEPA pool consisting of a minimum of two CEPA servers specified in the Data Mover's cepp.conf file.


CIFS and NFS clients


The CAVA feature of the VEE product is for CIFS and NFS clients. If FTP
protocols are used to modify files, the files will not be scanned for
viruses. No UNIX antivirus application support is provided.
File-Level Retention
It is strongly recommended that the antivirus (AV) administrator
updates the virus definition files on all resident AV engines in the CAVA
pools, and periodically runs a full file system scan of the file system to
detect infected FLR files. The Using Common AntiVirus Agent technical
module provides additional information about virus definition updates.
For systems that use version 7.0 and earlier, when using Windows Explorer to copy read-only files either into or within an FLR file system, or between FLR file systems, the protection state of the copied files depends on whether the Windows client making the copy uses the SMB1 or SMB2 protocol. If the Windows client uses SMB1, the destination file will be read-only, but not protected. If the Windows client uses SMB2, the destination file will be read-only and locked with an infinite retention date. This infinite date can be reduced in an FLR-E file system, but not in an FLR-C file system.
Microsoft Access
Do not scan Microsoft Access database files in real time. Files inside a
database are not scanned.
Windows 64-bit operating systems
In order to run VEE on Windows 64-bit operating systems, the VNX-to-VEE communications must be over Microsoft RPC (MS-RPC). The version of CEE that runs on Windows 64-bit operating systems is supported with Celerra version 5.6.45 or later and VEE version 4.5.0.4 or later.

File-level Retention (142702)


Prior to Celerra version 5.6.43.9, FLR protected empty locked files from
being deleted. As of version 5.6.43.9, both FLR-E and FLR-C allow the
deletion of empty locked files and empty append-only files. Any locked
or append-only file that contains data will be protected. This behavior
will be changed in a future release such that all locked and append-only
files will be protected regardless of whether they contain data.


Installation Assistant for File/Unified IP Address ending in *.0 or *.255 (391782)

If you attempt to use an IP address for the VNX for block storage processor that ends in *.0 or *.255 (e.g. xxx.yyy.zzz.0 or xxx.yyy.zzz.255), the clariion_mgmt script (or Installation Assistant) will treat the IP address as invalid, regardless of the netmask used for the network. You will need to supply a different IP address to configure the SPs on an integrated VNX for file.
Example usage:
[root@system ~]# /nas/sbin/clariion_mgmt -start -spa_ip x.y.z.110 -spb_ip x.y.z.255   # (where x.y.z is a valid IPv4 address)
Error 17: Invalid public IP address for SPB.

Setting up MPFS
The Set up MPFS button is available for all the systems on the Select a
wizard screen. Select this option only if you want to run MPFS and the
MPFS infrastructure is configured on your system in advance. Refer to
the EMC VNX Series MPFS over FC and iSCSI v6.0 Linux Clients Product
Guide to manually configure MPFS on your system.

Installation and upgrades


VNX for file now joined to domain during upgrade (354625)
The VNX for file is now being joined to the VNX for block domain
automatically during an upgrade. You can use Unisphere to elevate the
privileges of any user that has been transferred from the Domain to the
VNX for file. This does not occur for gateway machines, however.
Gateway machines cannot be a part of a VNX domain.
Control LUN requirements enforced
There is strict enforcement of the control LUN size and HLU numbers
for fresh installations. An exact match between the LUN capacity and the
correct HLU value is required. Mismatches will generate a "Control
LUN Check Failed" error message during installation. You must correct
the control LUNs before installation can complete successfully.
Erroneous Pre-Upgrade Health Check failure (405959)
You may get an erroneous PUHC failure when upgrading a VNX
gateway system connected to a Symmetrix.


Error during upgrade (394763)

The following message is seen during an upgrade on the secondary Control Station:
Creating IDE cache of NAS file system
cp: cannot create regular file `/nbsnas/site/clariion_mgmt.cksum': Read-only file system
cp: cannot create regular file `/nbsnas/site/clariion_mgmt.cfg': Read-only file system

Workaround
This error means the database holding the Proxy ARP config files is out-of-date on the secondary Control Station. It is benign and can be ignored. To stop the error message from being displayed again (during another upgrade), carry out the following steps:
Most likely, a clariion_mgmt -stop was performed on the primary CS. If this is the case, run this command on the secondary CS to fix it:
/nasmcd/sbin/clariion_mgmt -stop -ip_already_set
Otherwise, if you have changed the IP addresses on CS0 to something other than the initial setup, run this command:
/nasmcd/sbin/clariion_mgmt -start -spa_ip <new_ip> -spb_ip <new_ip> -ip_already_set -no_verify

Error when preparing to upgrade (378606)


If the login user doesn't have "Control Station shell" privilege, the
following error message will be displayed when selecting the File
software to prepare for upgrade: Unable to connect to the provided IP
address or Hostname.
Workaround
Follow the Recommended Action displayed and assign "Control Station
shell" privilege to the user.
Express install fails (422422)
During an express install on a Unified System, if you enter a bad
username and then try to change it during the install, it will fail.
Pre-Upgrade Health Check failures during upgrades
Pre-Upgrade space check issue (420766)
During an upgrade, you will receive the following error message even though the PUHC space checks passed: Error IM_14505279734: ERROR: File system /nbsnas usage is 77 percent and is greater than the 70 percent threshold. Cleanup is required in /nbsnas. Description: ERROR: File system usage is %d percent and is greater than the 70 percent threshold. Cleanup is required in /nbsnas.
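To see the actual usage the check is reporting, standard Linux df on the Control Station is enough; a minimal sketch, not an EMC-specific tool:

    # Show how full /nbsnas is; the threshold in the message above is 70 percent
    df -P /nbsnas | awk 'NR==2 {print "usage: " $5}'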


Backend error (392641)


When the health checks run, sometimes a backend error is found that is
not recognized by any of the checks. No individual checks will be shown
to have failed but at the end of the output, the error is displayed in the
"errors" section.
Workaround
Fix the issue stated in the "errors" section of the health check output.
Setup internationalization task of software upgrade fails (421470)
The setup internationalization task of the software upgrade fails, and an earlier task shows: Warning 17716815827: root_fs_9 not mounted on server server_9.
Upgrade fails if NAS service is down (392635)
Upgrades fail when the NAS service is down on the Control Station. The
failing task will log an error similar to: mkdir: cannot create
directory `/nas/tmp': File exists. You can avoid this issue by
leaving the NAS service up when upgrading. In particular, do not run
"service nas stop", "/etc/rc3.d/S95nas stop", or like commands prior to
upgrading.
Workaround
If you encounter the issue, perform the following: First, try starting the
NAS service by logging into the Control Station as "root" and running
the command /sbin/service nas start. If the command succeeds, wait 5
minutes and then try the upgrade again. If that doesn't work, mount the
NAS file system manually with the command mount /nbsnas and retry
the upgrade.
If the upgrade still fails, an incorrect entry for /nbsnas may have been
recorded in the upgrade's mounted file system (mtab) database. Escalate
the issue through your support organization.
VNX gateways cannot be upgraded while in a domain
VNX Gateways running 6.0 cannot be upgraded to VNX V7.0 while they are in a domain; they have to be removed from the 6.0 domain before being upgraded. Once installed at 7.0, the gateway cannot join another domain, because 7.0 does not yet support domain add/remove.


Multibyte support
Using multibyte characters in fields in which they are not supported.
If you use multibyte characters in data element fields that do not support
multibyte characters (for example, checkpoint schedule descriptions),
these fields will not display correctly. If you want these fields to be
readable, you will have to recreate or rename the existing data element
text using the ASCII character set.
Refer to PRIMUS emc224755 for more detailed information about this
limitation.
Using multibyte characters with ECC
Celerra version 5.6.40.3 and later may cause discovery and name display issues for ECC 6.x if certain VNX for File object names use non-ASCII characters. The relevant names are:
o File system name
o Mount point name
o CIFS share name
o NFS export name
o Quota user name
Discovery issue
If non-ASCII characters are used and the name is more than 63 characters, discovery may not work.
Garbled name display issue
If these names use non-ASCII characters, the names do not display properly in ECC. However, any ASCII characters used in the name do display properly. Furthermore, the name itself and the underlying data on the VNX system are not affected.
Suggested Workarounds
o Use only ASCII characters in the specified object names when using ECC for VNX discovery. If multibyte characters must be used, keep all applicable object names under 63 characters.
o Since ASCII characters display properly, append or prepend ASCII characters to object names so you can use these characters to find the correct name.
Refer to Primus emc229599 for more details about this issue.


NDMP
Incremental NDMP backups and NDMPcopy operations
Incremental NDMP backups and NDMPcopy operations use the same last-backup timestamp tracking for each data set on a Data Mover. A data set in this context means the point where the backup or copy operation starts (e.g., the root of a file system or a specific sub-directory within a file system). This means that if incremental NDMP backup and NDMPcopy operations of the same data set (i.e., starting from the same point within a file system) are interleaved, then the following will occur:
o The incremental backup will contain only changes in that data set since the NDMPcopy operation instead of since the last backup operation.
o The incremental NDMPcopy of that data set will copy only those files changed since the NDMP backup instead of since the last NDMPcopy operation.
To avoid this, do not interleave NDMP backup operations with incremental NDMPcopy operations of the same data set; if you do, perform a full NDMP backup of that data set after any NDMPcopy activity.
Vendor support
Not all NDMP vendor versions support this version of VNX. For the
latest NDMP Vendors/Versions/Patches and restrictions supported in
this version of VNX, refer to the Backup section of the E-Lab
Interoperability Navigator.

NFS
Cached files
If a client mounts exported parent and child paths of the same file
system, it is possible to access the same file from two different mount
points. If client side caching is enabled, the client may end up
maintaining two different versions of a cached file.
Celerra Data Migration System
Access to online migration file systems while using CDMS is not
supported by NFSv4 clients until migration completes.
When NFSv4 is used, it triggers directory migration of subdirectories
and requires global lock for the entire Data Mover. Therefore, the Data
Mover will be inaccessible during the migration.

Quotas
Quota information displayed in Unisphere is sometimes incorrect (471557)
Sometimes quota information displayed in Unisphere is incorrect.
Subsequent refresh of the same page returns correct information. This
happens infrequently, and only when quota information is requested
when the CS is rebooted.
Workaround
Refreshing the quota information page will correct the problem.

Replication
Using VNX Replicator with the VDM for NFS multi-naming domain solution
The VDM for NFS multi-naming domain solution for the Data Mover in
the UNIX environment is implemented by configuring an NFS server,
also known as an NFS endpoint, per VDM. The VDM is used as a
container for information including the file systems that are exported by
the NFS endpoint. The NFS exports of the VDM are visible through a
subset of the Data Mover network interfaces assigned for the VDM. The
same network interface can be shared by both CIFS and NFS protocols.
However, only one NFS endpoint and CIFS server is addressed through
a particular logical network interface.
The exports configuration for this NFS end-point is stored in the VDM
root file system. The VDM root file system is replicated as part of a VDM
replication session.
Restriction
The VDM for NFS multi-naming domain solution is not supported by
versions of the Operating Environment earlier than 7.0.50.0. Therefore,
when you replicate a VDM from a source system that runs on version
7.0.50.0 and later of the VNX Operating Environment to a destination
system that runs on any version of the Operating Environment earlier
than 7.0.50.0, the file systems replicate correctly, but the NFS endpoint
does not work on the system with the Operating Environment earlier
than version 7.0.50.0.
Replication reverse command fails (400479)
The replication reverse (nas_replicate -reverse) command hangs, as it
fails to start the replication in the reverse direction.
Workaround
Abort the reverse task and manually start the replication.

Replication session remains in unable to sync state (418631)

When the source/destination file system (FS) sector count is not MB aligned, the replicator-triggered auto-extend of the destination file system fails to match the size of the source FS. This causes the replication session to remain in the unable to sync state.
Workaround
The following steps will synchronize the sizes and allow the replicator to finish the sync.
1. Find the source FS sector count using nas_fs -size (block count = sector count), find the MB-aligned count closest to the current sector count, and add at least 2 MB worth of sectors to the difference (see the sketch after these steps). For example, if the source FS is 20400 sectors in size, it is not MB aligned; the closest aligned count would be 20480, so it is short by 80. Add 2 MB worth of sectors to it. The final extent size for the source FS will be 80 + 2048 = 2128 sectors.
2. If the source FS is not MB aligned, extend the source FS using the following command:
   nas_fs -xtend <srcFs> size=<sectorCountDiff>S
   Example: nas_fs -xtend srcFS1 size=2128S
3. Find the sector count difference between the source and the destination FS and extend the destination by that amount using the following command:
   nas_fs -xtend <dstFs> size=<sectorCountDiff>S -o rv2_dst
   Example: nas_fs -xtend dstFS size=2048S -o rv2_dst

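The alignment arithmetic in step 1 can be scripted. A minimal sketch, assuming 1024 sectors per MB as implied by the example above (20480 sectors = 20 MB), and using 20400 as a sample source sector count:

    src_sectors=20400                 # from: nas_fs -size <srcFs>
    sectors_per_mb=1024               # assumption based on the example above
    gap=$(( (sectors_per_mb - src_sectors % sectors_per_mb) % sectors_per_mb ))
    extend=$(( gap + 2 * sectors_per_mb ))
    echo "nas_fs -xtend <srcFs> size=${extend}S"   # prints size=2128S for this example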

Value retention (415849)
While creating a replication and selecting a heterogeneous storage pool, a warning message is displayed. When you click Cancel on the warning message, your selection is not retained when control flows back to the form used for creating the replication.
Workaround
Deselect and reselect the appropriate VNX for File server and its interconnect and storage system to repopulate the pools. You can then select the appropriate VNX OE for File interconnect, storage system, and pools from the drop-down and press Apply/OK to create the replication.
Replication Session exists even after RP failover (467550)
The nas_copy command may not complete after an RDF failover. On the File Cabinet DR remote site, when logged in as any admin other than DRAdmin, from Unisphere and the NAS CLI, the user can see that the Copy operation is still in progress and making no progress.
Workaround
There are two options to work around this problem.
o As a non-dradmin user on the remote VNX, the user can choose to locally abort the copy operation.
o The user can issue DR failback, and the copy should finish by itself after the DR failback.

Security
Changing LDAP settings (374023)
When running the server_netstat command, or updating the LDAP
settings, the following error was displayed:
The lockbox stable value threshold was not met because the
system fingerprint has changed. Reset the SSV values by
running "/nas/sbin/cst_setup -reset" from the CLI to resolve
the issue.

Duplicate serial number (413889)

When launching Unisphere Manager from Firefox, you may get a certificate error from the browser. The problem is that Firefox sees a certificate issuer of Celerra Certificate Authority and a certificate from two different systems with the same serial number. Firefox generates a certificate error because this is never supposed to happen; Firefox does not recognize that these are really two different issuers.
Workaround
Re-generate the SSL certificate on one of the systems with the command "nas_config -ssl". This will generate a new certificate with a new serial number and re-start Apache.

SnapSure
Parameter file
Do not insert a new-line character at the end of a nas_param file
(/nas/site/ or /nas/sys/). The empty line causes fs_ckpt to return
an encodeable.extract error.
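A minimal check for an accidental trailing empty line; <param_file> is a placeholder for the parameter file being edited:

    # Prints a warning if the last line of the parameter file is blank
    tail -n 1 /nas/site/<param_file> | grep -q '^$' && echo 'trailing empty line - remove it'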


Writable checkpoints operations

Writable checkpoints do not support the following operations and features:
o Checkpoint refresh
o Checkpoint schedule
o VNX replication
o CDMS
o File-level retention
o File system extension
o iSCSI
o MirrorView/SRDF
o TimeFinder/RDF
o VTLU

SRDF/S
nas_rdf activate
The nas_rdf activate reverse command may fail to swap device-pair
personalities during the activate. This occurs when the source VNX has a
TimeFinder/FS snapshot/s created on R1BCVs which use TimeFinder
Emulation mode (Symm 8 supports Emulation mode only). The error
message below is specific to the personality swap and having remotely
protected storage, and does not impact the successful activation of the
destination VNX system.
Error message in nas_rdf activate reverse:
An RDF 'Swap Personality' operation execution is in progress
for device group '1R2_500_3'. Please wait...
Cannot use the device for this function because it is a Copy
Session target
ERROR 13431799810 : Swap operation of device group 1R2_500_3
failed due to the improper backend state or configuration.
Run 'nas_message -info 13431799810' to get full description
and recommended action.

Workaround
Canceling the relationship between the R1BCV mirrored volumes and
the R1STD volumes is needed to complete the swap operation.
After the SRDF restore completes, the SYMAPI will automatically
request a full establish during the subsequent fs_timefinder -M refresh
command to the snapshots built on the canceled R1BCV volumes.

Table 1 nas_rdf activate reverse procedure

Step 1: To check the symm id of the source Symmetrix, run /nas/symcli/bin/symcfg list on the destination SRDF system.
Step 2: To check the mirrored volumes, run /nas/symcli/bin/symmir sid <source symm id> list on the destination SRDF system. You can also double-check from the output whether Emulation mode is used.
Step 3: To find the R2 device group, run /nas/symcli/bin/symdg list on the destination SRDF system.
Step 4: To find the R1BCV volumes, run /nas/symcli/bin/symrdf g <r2 group name> query bcv on the destination SRDF system. If the same device ID is seen in the output, that is the R1BCV device you need to cancel.
Step 5: To prepare for the cancel operation, create a file which has all R1BCV volume pairs to cancel on the destination SRDF system.
Example of the file:
# cat file
# <standard device id> <bcv device id>
0068 00B8
0069 00B9
Step 6: To cancel the R1BCV volumes, run /nas/symcli/bin/symmir sid <source sid> -f <file name> cancel on the destination SRDF system.
Step 7: To confirm the canceled volumes, run /nas/symcli/bin/symmir sid <source symm id> list on the destination SRDF system. The canceled volumes should not be listed in the output.
Step 8: To complete the activate reverse operation, run /nas/sbin/nas_rdf activate reverse on the destination SRDF system.

nas_rdf restore fails

The nas_rdf restore command may fail on an SRDF system which uses TimeFinder Emulation mode (Symm 8 supports Emulation mode only) with R2BCV mirrored volumes.
Error message in nas_rdf restore:
An RDF 'Failover' operation execution is in progress for device group '1R2_500_3'. Please wait...
Cannot use the device for this function because it is a Copy Session target
CRITICAL FAULT: SRDF restore failed for 1R2_500_3

Workaround
Canceling the relationship between the R2BCV mirrored volumes and the R2STD volumes is needed to complete the restore operation. After the SRDF restore completes, a full establish is required when mirroring is turned on again by using fs_timefinder to the canceled R2BCV volumes.
Table 2 nas_rdf activate restore procedure

Step 1: To check the mirrored volumes, run /nas/symcli/bin/symmir sid list on the destination SRDF system. You can also double-check from the output whether Emulation mode is used.
Step 2: To find the R2 device group, run /nas/symcli/bin/symdg list on the destination SRDF system.
Step 3: To find the R2BCV volumes, run /nas/symcli/bin/symrdf g <r2 group name> query bcv on the destination SRDF system. If the same device ID is seen in the output, that is the R2BCV device you need to cancel.
Step 4: To cancel the R2BCV volumes, run /nas/symcli/bin/symmir g <r2 group name> cancel -force on the destination SRDF system.
Step 5: To confirm the canceled volumes, run /nas/symcli/bin/symmir sid list on the destination SRDF system. The canceled volumes should not be listed in the output.
Step 6: To complete the restore operation, run /nas/sbin/nas_rdf restore on the destination SRDF system.

Nas_rdf init command fails

During the SRDF discover remote storage devices process, the nas_rdf -init command fails and returns the following error: "Unable to setup remote devices".
Workaround
Delete the device group for R2 devices using /nas/symcli/bin/symdg delete <DgName> -force.
For example: /nas/symcli/bin/symdg delete 1R2_500_13 -force. Then run nas_rdf -init to re-create the R2 device group.
SSD drives not supported
SRDF/S does not support solid state disk (SSD) drives. If you attempt to
configure an SSD drive with SRDF/S, VNX health check operations
reject the device as an unsupported storage.

Unisphere
The following notes apply to Unisphere.
No Storage System Value Displayed (460767)
When all columns are displayed in the Storage Pool page, no values are
shown for the storage system.

Workaround
Storage system values may be displayed through the use of the CLI.
No Filtering for Show RAID Groups (461101)
The Show RAID Groups for filter is not present. The user cannot filter
the list of available RAID groups to reduce their number.
Workaround
All RAID Groups are displayed.
Browsers
EMC requires the use of one of the following browsers to run Unisphere:
o Internet Explorer 6.0 SP2, 6.2.3 through 8.0; Internet Explorer 7.1 is recommended.
o Firefox 3.0 or later.

Firefox
Firefox Password Manager (PM) is enabled by default. This feature records and stores the password that you enter on a website and then automatically fills in the password entry field on your next and subsequent visits. The password field masks the password with asterisks (*).
To prevent Firefox from remembering passwords:
1. Click Tools > Options... and select the Security tab.
2. Under Passwords, clear the Remember password for sites check box.

Browser crashes (375079)


While running an Internet Explorer (IE) session, the browser crashes. The
following pop-up error is displayed: Target system already
managed. An instance of this browser is already running
Unisphere. You cannot have the application running multiple
times within the same instance of a browser.

Workaround
Close the original browser and re-open another instance of the browser. If the problem still occurs, clear your IE browsing history.
List page not refreshed (396845)
Occasionally, Unisphere doesn't refresh the list page automatically after a create or delete operation.

Workaround
Refresh the page using the green arrow icon.
Popup pages go to background (396885)
Unisphere pop-up pages go to the background, and the user needs to bring the pop-up window to the foreground as the active window to make it visible.
Workaround
This is a browser compatibility issue. For Windows, click on the new
browser window representation in the Windows task bar, use Alt-tab to
step through the active windows and choose the new browser window,
or use Windows Task Manager to make the new browser window active.
You can also try other browsers, such as IE 8.0.6001, which does not have
the problem.
Changes from other user interfaces
Any changes made to the network server using another user interface
such as the CLI are not automatically shown in Unisphere. Refresh the
browser to see these changes.
Checkpoint schedule page (376710)
If the time zone of the VNX Control Station and the client system running the UEM are different, the checkpoint schedule page will display times in UTC (coordinated universal time).
Confirmation dialog inactive (37974)
While using Unisphere, a confirmation dialog appears briefly, then
moves to the background. Users need to manually re-focus the window
and bring the dialog box to the foreground.
Dashboard
Dashboard summary (364539)
The Dashboard Summary page did not contain view blocks. This is unexpected, but those blocks can be re-added using the Customize button.
File system delete confirmation (372252)
The 'OK' button is disabled on the Unisphere File System Delete Confirmation page for a file system with File-level retention enabled at the Compliance level. The table on the File System Delete Confirmation page indicates that protected files still exist on the file system even though the File System Properties page and the 'nas_fs -i' CLI command both show that all file protection has expired for the file system. The user is unable to delete the file system using Unisphere.
Workaround
The File-level retention status is now cached for performance reasons.
Force the file retention data to be updated by selecting the Unisphere
Refresh for the File System List page (in the upper right corner of the File
System List panel) or by opening a new browser session.
Help page not available (398491)
When you access Unisphere via the Control Station's hostname or DNS name rather than an IP, the help pages may not be available.
Workaround
Access Unisphere through SPA/SPB or the Control Station's IP.
Logging in after browser crash (374685)
After a browser crash, the GUI reports "system already managed" when
user tries to log back in.
Logging in via multiple screens (163075)
Logging into VNX via two or more screens may succeed, but logging out
of one of the sessions causes the other sessions to lose contact.
Login screen (370340)
When you log into Unisphere via a VNX system, Unisphere will not
display a login screen if you click the yes button on the security
warning dialog, without also checking the Always trust content
checkbox on the same dialog.
Multi-domain configuration (431658)
If you add a legacy domain via the multi-domain configuration dialog,
the legacy domain may NOT show up in the remote domains list.
Workaround
Close the current session and re-launch the Unisphere application.
Role information not displayed (398160)
When you login to Unisphere (with username sysadmin) and select a
system from drop down system list, the Role information is not
displayed.

Workaround
Select "All systems" in the drop down system list and the role
information similar to the following will appear in the lower right hand
corner: User: sysadmin Role: Administrator
Storage domain (376823)
When the VNX belongs to a storage domain that is destroyed, the
storage domain users on the VNX Control Station are not automatically
moved to the "defunct" state. To correct this, manually remove all the
storage domain users from the system by running the command:
/nas/http/webui/bin/admin_management.pl defunctStorageUsers

Then remove the VNX from the domain.


Users tab displays unrelated messages (364513)
Unisphere > Settings > User's tab displays messages which are
unrelated to that screen. These messages are a result of a query that was
initiated by the user on some previous table screen.
Workaround
Visiting another table screen and then coming back to the table screen where the messages were seen should remove these messages.

VNX Installation Assistant for File/Unified (formerly CSA)


VIA may time out when configuring primary CS with version 7.0.51.3 (471557)
This issue may occur after a fresh install. On the Applying page, when executing the task Configuring Primary Control Station, VIA reports the error:
Re-authentication:
Connection to VNX has timed out.
Username: root
Password:
Workaround
When this error occurs, the user should enter the root password at the prompt to continue the task.


MACs not discovered (416320)

When using a client with multiple NICs to run VNX Installation Assistant, there is a chance that VIA will not discover all available MACs for all unconfigured VNX hosts. VIA may discover MACs accessed by only one of the enabled NICs.
Workaround
Enable only one NIC on your client and connect that NIC to the subnet with your unconfigured VNX host(s). Perform the following steps in XP:
1. Click Start.
2. Click Control Panel to bring up the Control Panel window.
3. Click System to bring up the System Properties window.
4. Click the Hardware tab.
5. Click Device Manager to bring up the Device Manager window.
6. Open network adapters and select a NIC to disable.
7. Leave your one chosen NIC enabled.
8. Hook your LAN cable to that NIC.

VNX for block storage pools not displayed (397788)


VNX for block storage pools mapped to the file side are not displayed in
the VNX Installation Assistant's CIFS wizard.
Workaround
Use Unisphere instead to create file systems from these mapped pools.
Zero time remaining during disk provisioning process (396228)
VNX Installation Assistant incorrectly displays zero time remaining
during the LUN marking task part of the disk provisioning process. If
this occurs, continue to wait for this task to complete because the time
remaining estimate is incorrect.
Caution

Do not install Linux non-VNX applications on the Control Station. Doing so consumes resources and might adversely affect system operation. VNX Control Station is a dedicated controller and not an applications platform.

Do not manually edit the nas_db database without consulting EMC Customer Service. Any changes to this file could cause problems with your VNX installation.

Technical notes

Command line scripts

The NAS database stores specific information required for each Data Mover. The VNX for file automatically performs a backup of the entire database every hour and saves it to a file named nasdb_backup.1.tar.gz. These backups start one minute after the hour. EMC recommends that you check your scripts to make sure they are not scheduled to start at the same time as the backups. During backup, the database becomes locked or inaccessible, and some commands that rely on the database might fail.
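As an illustration only, a script that relies on NAS database commands can be scheduled away from the one-minute-after-the-hour backup window; the crontab entry and script path below are hypothetical examples, not EMC-provided values:

    # Run a nightly reporting script at 30 minutes past the hour, well clear of
    # the hourly NAS database backup that starts one minute after the hour.
    30 2 * * *  /home/nasadmin/scripts/nightly_report.sh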
Correctable hardware errors (356926)
Transient correctable hardware errors are currently handled by the firmware. Although rare, if enough of these errors occur, a watchdog panic can be triggered. In a future release, these errors will be handled by the VNX for File operating system instead of the firmware, which will improve efficiency and prevent the watchdog panic.
server_http
The server_http command fails if you issue it on a Data Mover while fsck is running. Other commands (including the FileMover API) may not work until fsck completes.
Virtual device naming
When creating virtual devices, such as Fail-Safe Network (FSN) devices, Ethernet channels, and link aggregations, do not use any of the following reserved names as the name of the device:

amgr, Blksrv, Client, Cmfap, Dep, Dfe, Echo, Escon, http, i82596, Icmp, icmp6, Ip, ip6, Lockd, Lpfn, mac, Meshforwarder, Nd, Nfs, nfsPrimary, Nwh, Pax, Pcnfs, pfkey, Pipe, Pmap, Rcp, Rfa, Scsicfg, Smb, Smpte, spstream, Ssl, Sslpipe, Statd, StreamHead, Tcp, Tcpnfsmesh, tcprpc, Tftp, Toeframechk, Udpnfsmesh, Upd
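As a sketch only, a link aggregation with a safe, non-reserved name can be created as follows. The trunk name lacp0 and the port names cge0 and cge1 are placeholders, and the exact server_sysconfig syntax should be verified in the networking documentation for your release:

# Create an LACP link aggregation named lacp0 (not a reserved name) on server_2
server_sysconfig server_2 -virtual -name lacp0 -create trk -option "device=cge0,cge1 protocol=lacp"

# Confirm the new virtual device
server_sysconfig server_2 -virtual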

Configuration guidelines
The values listed in this section represent the highest or lowest values tested by EMC. In some cases, actual capacities may exceed the capacities tested by EMC. Theoretical values are architectural limits and are not always practical in production. Refer to the E-Lab Interoperability Navigator for the most up-to-date information regarding these guidelines.
Table 3 CIFS guidelines

CIFS TCP connections: 64K (default and theoretical max.), 20K (max. tested)
    Param tcp.maxStreams sets the maximum number of TCP connections a Data Mover can have. The maximum value is 64K (65535). TCP connections (streams) are shared by other components and should be changed in small, monitored increments.

Share name length: 80 characters (Unicode), 12 characters (ASCII)
    Unicode: the maximum length for a share name with Unicode enabled is 80 characters. ASCII: in ASCII mode, the maximum share name length is 12 characters.

Number of CIFS shares: 10,000 per Data Mover, 40,000 per VNX for File
    You may notice a performance impact when managing a large number of shares.

Number of NetBIOS names/compnames per Data Mover: 509 (max.)
    Limited by the number of network interfaces available on the Data Mover. From a local group perspective, the number is limited to 509. NetBIOS names and compnames must be associated with at least one unique network interface.

NetBIOS name length: 15
    NetBIOS names are limited to 15 characters (a Microsoft limit) and cannot begin with an @ (at sign) or a - (dash) character. The name also cannot include white space, tab characters, or the following symbols: / \ : ; , = * + | [ ] ? < >. If using compnames, the NetBIOS form of the name is assigned automatically and is derived from the first 15 characters of the <comp_name>.

Comment (ASCII characters) for NetBIOS name for server: 256
    Limited to 256 ASCII characters. Restricted characters: you cannot use double quotation ("), semicolon (;), accent (`), or comma (,) characters within the body of a comment. Attempting to use these special characters results in an error message. You can only use an exclamation point (!) if it is preceded by a single quotation mark ('). Default comments: if you do not explicitly add a comment, the system adds a default comment of the form EMC-SNAS:T<x.x.x.x>, where <x.x.x.x> is the version of the NAS software.

Compname length: 63 bytes
    For integration with Windows environment releases greater than Windows 2000, the CIFS server computer name length can be up to 21 characters when UTF-8 (3-byte characters) is used.

Number of domains: 10 tested, 509 (theoretical max.)
    The maximum number of Windows domains a Data Mover can be a member of. To increase the default maximum from 32, change param cifs.lsarpc.maxDomain. The Parameters Guide contains more detailed information about this parameter.

Block size negotiated: 64 KB, 128 KB with SMB2
    Maximum buffer size that can be negotiated with Microsoft Windows clients. To increase the default value, change param cifs.W95BufSz, cifs.NTBufSz, or cifs.W2KBufSz. The Parameters Guide contains more detailed information about these parameters. Note: with SMB2.1, read and write operations support a 1 MB buffer (this feature is named 'large MTU').

Number of simultaneous requests per CIFS session (maxMpxCount): 127
    This value defines the number of requests a client is able to send to the Data Mover at the same time (for example, a change notification request). To increase this value, change the maxMpxCount parameter. The Parameters Guide contains more detailed information about this parameter.

Total number of files/directories opened per Data Mover: 200,000
    A large number of open files could require high memory usage on the Data Mover and potentially lead to out-of-memory issues.

Number of Home Directories supported: 20,000

Number of Windows/UNIX users
    Earlier versions of the VNX Server relied on a basic database, nameDB, to maintain Usermapper and secmap mapping information. DBMS now replaces the basic database. This solves the inode consumption issue and provides better consistency and recoverability with the support of database transactions. It also provides better atomicity, isolation, and durability in database management.

Number of users per TCP connection: 16K (default), 64K (max.)
    To increase the default value, change param cifs.listBlocks (default 64, max. 255). The value of this parameter times 256 equals the maximum number of users. Note: TID, FID, and UID share this parameter; it cannot be changed individually for each ID. Use caution when increasing this value because it could lead to an out-of-memory condition. Refer to the System Parameters Guide for parameter information.

Number of files/directories opened per CIFS connection: 16K (default), 64K (max.)
    To increase the default value, change param cifs.listBlocks (default 64, max. 255). The value of this parameter times 256 equals the maximum number of files/directories opened per CIFS connection. Note: TID, FID, and UID share this parameter; it cannot be changed individually for each ID. Use caution when increasing this value because it could lead to an out-of-memory condition, and be sure to follow the recommendation for the total number of files/directories opened per Data Mover. Refer to the System Parameters Guide for parameter information.

Number of VDMs per Data Mover: 29
    The total number of VDMs, file systems, and checkpoints across a whole cabinet cannot exceed 2048.

Number of ntxmap user mapping entries: 1000
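Many of the CIFS limits above are governed by Data Mover parameters. The following sketch assumes the usual server_param syntax; confirm the facility and parameter names in the Parameters Guide before changing anything:

# Display the current value of cifs.listBlocks on server_2
server_param server_2 -facility cifs -info listBlocks

# Raise it in a small, monitored increment (default 64, maximum 255);
# TID/FID/UID all share this parameter, so increase it cautiously
server_param server_2 -facility cifs -modify listBlocks -value 96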

Table 4 FileMover guidelines

Max connections to secondary storage per primary (VNX for File) file system: 1024

Number of HTTP threads for servicing FileMover API requests per Data Mover: 64 (max. tested)
    The number of threads available for recalling data from secondary storage is half of whichever is lower, the CIFS or the NFS thread count. The default is 16 threads; this can be increased using the server_http command, and the maximum tested value is 64.

Table 5 File systems guidelines

Mount point name length: 255 bytes (ASCII)
    The "/" used when creating the mount point counts as one character. If the limit is exceeded, Error 4105: Server_x: path_name: invalid path specified is returned.

File system name length: 240 bytes (ASCII), 19 characters displayed by the list option
    For nas_fs list output, the name of a file system is truncated if it is more than 19 characters. To display the full file system name, use the info option with a file system ID (nas_fs -i id=<fsid>).

Filename length: 255 bytes (NFS), 255 characters (CIFS) [1]
    With Unicode enabled in an NFS environment, the number of characters that can be stored depends on the client encoding type, such as latin-1. For example, with Unicode enabled, a Japanese UTF-8 character may require three bytes. With Unicode enabled in a CIFS environment, the maximum number of characters is 255. For filenames shared between NFS and CIFS, CIFS allows 255 characters. NFS truncates these names when they are more than 255 bytes in UTF-8 and manages the file successfully.

Pathname length: 1,024 bytes [1]
    Note: make sure the final path length of restored files is less than 1,024 bytes. For example, if a file that originally had a 900-byte path name is backed up and then restored to a path of 400 bytes, the final path length would be 1,300 bytes and the file would not be restored.

Directory name length: 255 bytes [1]
    This is a hard limit; creation is rejected if the 255 limit is exceeded. The limit is bytes for UNIX names and Unicode characters for CIFS.

Subdirectories (per parent directory): 65,533
    This is a hard limit; the code prevents you from creating more than 65,533 directories.

Number of file systems per VNX: 4096
    This maximum includes VDM and checkpoint file systems.

Number of file systems per Data Mover: 2048
    The mount operation fails when the number of file systems reaches 2048, with an error indicating that the maximum number of file systems has been reached. This maximum includes VDM and checkpoint file systems.

Maximum disk volume size: dependent on RAID group (see comment)
    Unified platforms: running setup_clariion on the NS-960 platform provisions the storage to use 1 LUN per RAID group, which may result in LUNs larger than 2 TB, depending on drive capacity and the number of drives in the RAID group. On all CX4 integrated platforms (NS-120, NS-480, NS-960), the 2 TB LUN limitation has been lifted; this may or may not result in a LUN size larger than 2 TB, depending on RAID group size and drive capacity. For all other unified platforms, setup_clariion continues to function as in the past, breaking large RAID groups into LUNs that are smaller than 2 TB.
    Gateway systems: users may configure LUNs greater than 2 TB, up to 16 TB or the maximum size of the RAID group, whichever is less. This is supported for NS-40G, NS-80G, NSX, and NS-G8 NAS gateways when attached to CX3, CX4, and Symmetrix DMX-4 backends.
    Multi-Path File Systems (MPFS): MPFS supports LUNs greater than 2 TB. Windows 2000 and 32-bit Windows XP, however, cannot support large LUNs due to a Microsoft Windows OS limitation. All other MPFS Windows clients support LUN sizes greater than 2 TB if the 5.0.90.900 patch is applied. Use of these 32-bit Windows XP clients on NS-960 systems requires RPQ (Request for Price Quotation) approval.

Total storage for a Data Mover (Fibre Channel only):
    VNX version 7.0: 200 TB (VNX5300); 256 TB (VNX5500, VNX5700, VNX7500, VG2, VG8).
    Celerra version 6.0: 16 TB (NX4); 32 TB (NS20, NS120); 64 TB (NS40, NS480, NS40G, NSG2); 128 TB (NS80, NS80G, NSX); 256 TB (NS960, NSG8).
    Celerra version 5.6: 8 TB (CNS with 514 DM); 10 TB (NX350, NS500); 16 TB (NX4, NS700); 32 TB (NS20, NS120); 64 TB (NS40, NS480, NS40G, NS-G2, VG2); 128 TB (NS80, NS960, NS80G, NSX, NS-G8).
    These total capacity values represent the Fibre Channel disk maximum with no ATA drives; Fibre Channel capacity will change if ATA is used. Notes: on a per-Data-Mover basis, the total size of all file systems plus the size of all SavVols used by SnapSure must be less than the total supported capacity. Exceeding these limits can cause an out-of-memory panic. Refer to the VNX for File capacity limits tables for more information, including mixed disk type configurations.

File system size: see the per-model limits in the comment
    16 TB (hard limit, enforced): NX4, NS20, NS120, NS40, NS480, NS80, NS960, NS40G, NS-G8, NS80G, NSX, VG2, VG8, and all VNX series models. 10 TB: NS350. 8 TB: 514/514T Data Mover, NS500. When planning a file system with respect to maximum desired size, be sure to factor in the time required for FSCK, ACL checks, and file system build and restore. This guideline applies to both Fibre Channel and ATA drives.

File size: 16 TB (4 TB with a file size policy of quota)
    This hard limit is enforced and cannot be exceeded.

Number of directories supported per file system: same as the number of inodes in the file system
    Each 8 KB of space equals 1 inode.

Number of files per directory: 500,000
    Exceeding this number will cause performance problems.

Maximum number of files and directories per VNX file system: 256 million (default)
    This is the total number of files and directories that can be in a single VNX file system. This number can be increased to 4 billion at file system creation time, but should only be done after considering recovery and restore times and the total storage utilized per file. The actual maximum number in a given file system depends on a number of factors, including the size of the file system.

Maximum amount of deduplicated data supported: 256 TB
    When quotas are enabled with a file size policy, the maximum amount of deduplicated data supported is 256 TB. This amount includes other files owned by UID 0 or GID 0.

All other industry-standard caveats, restrictions, policies, and best practices prevail. This
includes, but is not limited to, FSCK times (now made faster through multi-threading),
backup and restore times, number of objects per file system, snapshots, file system
replication, VDM replication, performance, availability, extend times, and layout policies.
Proper planning and preparation should occur prior to implementing these guidelines.
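For example, the truncated-name behavior noted in Table 5 can be worked around from the Control Station; the file system ID used here is a placeholder:

# File system names longer than 19 characters are truncated in the list view
nas_fs -list

# Display the full, untruncated name and details for file system ID 27
nas_fs -i id=27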
Table 6 Naming Services guidelines

Number of DNS domains: 3 (WebUI), unlimited (CLI)
    The WebUI limits you to three DNS domains per Data Mover; there is no limit when using the CLI (command line interface).

Number of NIS servers per Data Mover: 10
    You can configure up to 10 NIS servers in a single NIS domain on a Data Mover. Note: a Data Mover supports only one NIS domain. Each time you configure a NIS domain and specify the servers, the previous configuration is overwritten.

NIS record capacity: 1004 bytes
    A Data Mover can read 1004 bytes of data from a NIS record.

Number of DNS servers per DNS domain
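A minimal sketch of configuring these services from the CLI; the domain names and IP addresses are placeholders, and the exact options should be checked against the server_dns and server_nis reference pages:

# Configure DNS for one domain with two servers (the three-domain WebUI limit does not apply here)
server_dns server_2 corp.example.com 192.168.10.7,192.168.10.8

# Configure NIS; re-running this for a different domain overwrites the previous
# configuration, because a Data Mover supports only one NIS domain
server_nis server_2 nisdomain.example.com 192.168.10.5,192.168.10.6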

Table 7 NFS guidelines

Number of NFS exports: 2,048 per Data Mover tested, unlimited theoretical max.
    You may notice a performance impact when managing a large number of exports using Unisphere.

Number of concurrent NFS clients: 64K with TCP (theoretical), unlimited with UDP (theoretical)
    Limited by the number of TCP connections.

Netgroup line size: 16383
    The maximum line length that the Data Mover accepts in the local netgroup file on the Data Mover, or in the netgroup map in the NIS domain to which the Data Mover is bound.

Number of UNIX groups supported: 64K
    The maximum number of GIDs is 64K, but an individual GID can have any value in the range 0 through 2147483648 (roughly 2 billion).
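A hedged example of creating one NFS export; the file system path and client addresses are placeholders:

# Export /fs01 read/write to two hosts, with root access for the first one
server_export server_2 -Protocol nfs -option rw=192.168.20.11:192.168.20.12,root=192.168.20.11 /fs01

# List the exports currently defined on server_2
server_export server_2 -list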

Table 8 Networking guidelines

Link aggregation/Ethernet channel: 8 ports (Ethernet channel), 12 ports (link aggregation, LACP)
    Ethernet channel: the number of ports used must be a power of 2 (2, 4, or 8). Link aggregation: any number of ports can be used. All ports must be the same speed. Mixing different NIC types (for example, copper and fibre) is not recommended.

Number of VLANs supported: 4094
    IEEE standard.

Number of interfaces per Data Mover: 45 tested
    Theoretically 509.

Number of FTP connections: 64K (theoretical)
    By default the value is (in theory) 0xFFFF, but it is also limited by the number of TCP streams that can be opened. To increase the default value, change param tcp.maxStreams (set to 0x00000800 by default). The parameter must be increased before TCP starts; otherwise you will not be able to increase the number of FTP connections. Refer to the System Parameters Guide for parameter information.

Table 9 Quotas guidelines

Number of tree quotas: 8191
    Per file system.

Max size of tree quotas: 256 TB
    Includes file size and quota tree size.

Max number of unique groups: 64K
    Per file system.

Quota path length: 1024

Table 10 Replicator V2 guidelines

Number of replication sessions per VNX: 1365 (NSX), 682 (other platforms)

Max number of replication sessions per Data Mover: 1024
    This enforced limit includes all configured file system, VDM, and copy sessions.

Max number of local and remote file system and VDM replication sessions per Data Mover: 682

Max number of loopback file system and VDM replication sessions per Data Mover: 341

Table 11 SnapSure guidelines

Number of checkpoints per file system: 96 read-only, 16 writeable
    Up to 96 read-only checkpoints and 16 writeable checkpoints are supported per file system.
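As an illustration (the file system name pfs01 is a placeholder, and the syntax should be verified against the SnapSure documentation), checkpoints for a production file system can be listed and created from the Control Station:

# List the checkpoints associated with pfs01
fs_ckpt pfs01 -list

# Create an additional read-only checkpoint (counts toward the 96-checkpoint limit)
fs_ckpt pfs01 -Create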

Table 12 Virus Checking guidelines

Virus checker files pending: 90% of available VNODES in the system
    The percentage of the total cached open files available in the system that can be pending on virus checking. An event is sent to the Control Station when the maximum is reached. To change the default values, modify param viruschk.vnodeMax (default=2000) and/or param viruschk.vnodeHWM (default=0x5a, that is 90%). Refer to the System Parameters Guide for parameter information.
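The two parameters named above can be inspected and tuned like other Data Mover parameters; this is a sketch, and the value shown is an example only:

# Show the current high-water mark for files pending virus checking
server_param server_2 -facility viruschk -info vnodeHWM

# Lower the high-water mark from the 90% default to 80%
server_param server_2 -facility viruschk -modify vnodeHWM -value 80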


[1] On systems with Unicode enabled, a character may require between 1 and 3 bytes, depending on the encoding type or character used. For example, a Japanese character typically uses 3 bytes in UTF-8; ASCII characters require 1 byte.

Capacity guidelines
VNX for File capacity limits
Following is a list of the version 7.0 VNX for File capacity limits. Refer to
the E-Lab Interoperability Navigator, which is updated monthly, for
the most up-to-date capacity limits.
Table 13 VNX for File V7.0 capacity limits

Usable IP storage capacity limit per blade (all disk types and uses) [3]: 200 TB (VNX5300); 256 TB (VNX5500, VNX5700, VNX7500, VG2, VG8)
Max file system size: 16 TB (all models)
Max number of file systems per DM/blade [4]: 2048 (all models)
Max number of file systems per cabinet: 4096 (all models)
Max configured replication sessions per DM/blade for Replicator v2 [2]: 1024 (all models)
Max number of checkpoints per PFS [1]: 96 (all models)
Max number of NDMP sessions per DM/blade

Notes:
[1] PFS is Production File System.
[2] A maximum of 256 concurrently transferring sessions are supported per DM.
[3] This is the usable IP storage capacity per blade. For overall platform capacity, consider the type and size of disk drive, the usable Fibre Channel host capacity requirements, RAID group options, and the total number of disks supported.
[4] This count includes production file systems, user-defined file system checkpoints, and two checkpoints for each replication session.


Common virus checking solution

Table 14 lists the requirements for using CAVA with third-party antivirus engines.

Table 14 Virus checking solution requirements

Computer Associates eTrust r8, eTrust r8.1: all currently-supported VNX for File versions [1]; Common AntiVirus Agent (CAVA) 3.4.0 and later
TrendMicro ServerProtect for EMC NAS 5.3.1, ServerProtect for EMC NAS 5.8: all currently-supported VNX for File versions [1]; CAVA 3.4.0 and later
Sophos AntiVirus 3.x, 5.x, 6.x, 7.x, 9.x: all currently-supported VNX for File versions [1]; CAVA 3.4.0 and later
Kaspersky Anti-Virus for Windows Servers Enterprise Edition: 5.6.46.4 and later; VNX Event Enabler (VEE) 4.5.1 and later
McAfee VirusScan 8.0i, VirusScan 8.7i: all currently-supported VNX for File versions [1]; CAVA 3.4.0 and later
Symantec Endpoint Protection 11.06: 5.6.48.x and later; VNX Event Enabler (VEE) 4.5.2.2 and later
Symantec SAV for NAS 4.3.x, SAV for NAS 5.2.x: all currently-supported VNX for File versions [1]; CAVA 3.6.2 and later

[1] Currently supported NAS versions include the entire 5.6.x, 6.0.x, and 7.0.x families. Refer to the EMC E-Lab Interoperability Navigator for the latest supported version information.


Environment and system requirements

Symmetrix microcode and VNX for block operating environment levels
The E-Lab Interoperability Navigator contains the latest microcode and VNX for block operating environment information. The E-Lab Interoperability Navigator is updated monthly and can be found on http://Support.EMC.com. After logging in to the EMC Online Support website, locate the applicable Support by Product page, find Tools, and click E-Lab Interoperability Navigator.

Centera FileArchiver not supported
The Centera FileArchiver policy engine is not supported when using VNX version 7.0.x.x or later.

Documentation
VNX for File version 7.0 documentation
EMC provides the ability to create step-by-step planning, installation,
and maintenance instructions tailored to your environment. To create
VNX customized documentation, go to: https://mydocs.emc.com/VNX.

Documentation updates
Multibyte support for VNX data elements
In the VNX management interfaces, multibyte support allows users to input, store, and display multibyte characters. Users can input and view VNX object fields in their native language. The multibyte feature has been supported since Celerra NAS 5.6.
The following data elements are enabled with multibyte support and also support the Unicode 3.0 standard:

CIFS share name
CIFS share comment
CIFS export path
Tree quota path
Tree quota comment
Quota user name
CIFS server NetBIOS name
CIFS server computer name
CIFS server alias
CIFS server comment
CIFS domain name
CIFS workgroup
Organizational Unit (OU) name
DFS root name
CDMS source share name
CDMS local path
CDMS source export path
Mount point name
File system name
Checkpoint name
NFS export name
NFS export path

Software media, organization, and files

The following table lists the applicable software media for this product version:

Table 15 Available software
Part number: 453-004-581
Description: VNX FILE SOFTWARE DVD IMAGE

Installation and upgrades

VNX for File installation process
This section lists some facts you should know about the VNX for File and the installation and configuration process before proceeding further.
How to prepare for VNX for File installation
To prepare for VNX for File installation and configuration, complete the following tasks:
1. Perform the pre-site installation tasks as required by EMC to complete the installation and connectivity of the VNX for File. Tasks include preparing electrical facilities, wiring the network, adding a telephone line, checking environmental conditions, and related requirements, as stated in the Professional Services Statement of Work (PS-SOW).
Note: For a copy of the PS-SOW document, contact your Professional Services Account Team. For examples of VNX for File network topologies, refer to the Configuring and Managing VNX Networking technical module.
2. Configure your environment for UNIX clients, Microsoft Windows clients, or a combination of Windows and UNIX clients, as applicable.
3. Complete a network assessment to ensure proper performance and protocol configurations. (Optional EMC services can be ordered to assist with this process.)

VNX for File installation and configuration

EMC Customer Service installs the VNX for File hardware, a process that includes the physical setup, cabling, power application, and diagnostic testing of the VNX for File.
Next, EMC Professional Services or an EMC partner might configure and integrate the VNX for File. For example, for a VNX for File with a Symmetrix system, this process typically includes the following tasks:
o Coordinating the activities to enable installation, cabling, and connectivity to system cabinets.
o Loading and configuring the VNX for File software.
o Creating, mounting, and exporting file systems.
o Configuring network protocols.
o Integrating with a supported, existing user authentication environment.
o Configuring standby Data Mover(s), if applicable.
o Configuring and demonstrating system management and administration tools.
EMC Professional Services can perform additional VNX for File configuration tasks, such as host provisioning (with VNX for File MPFS), backup protection, and network connectivity, depending on the individual EMC customer agreement defined in the PS-SOW. These services are part of the EMC Network-Attached Storage Design and Implementation service offering, which is designed to provide continuous technical expertise as you manage your organization's growing network storage needs. For more information about this service offering, contact your EMC Customer Support representative.


After EMC installs, configures, and tests the VNX for File, it is fully
operational and ready to support networked storage operations.
NAS and VNX for block operating environment upgrades
EMC requires NAS upgrades to be performed before the VNX for block
operating environment is upgraded on any attached VNX for block
systems.
The reason for this is to maintain compatibility between the storage API
(NAVI) on the Control Station and the VNX for block operating
environment on the VNX for block. The API is built in as part of the NAS
management software and is used to gather data about the backend such
as statistics, health, and configuration.
If the VNX for block operating environment is upgraded first, the API
may encounter issues while communicating with the new VNX for block
operating environment. Upgrading the NAS first allows the API to have
limited backward compatibility with the VNX for block operating
environment code.
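Before deciding the upgrade order, the currently running versions can be checked from the Control Station; the commands below are a sketch of that check:

# NAS software version on the Control Station
nas_version

# Operating environment version running on each Data Mover
server_version ALL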
Reboot required at upgrade
VNX OE for File
The VNX OE for File software comprises two installation components, a Linux package and a VNX OE for File software package. The software version currently running on the VNX for File and the version being installed determine the number of Control Station and Data Mover reboots required, and whether Data Mover reboots can be scheduled for a later time after the upgrade is complete.
The VNX for File software runs on the Data Movers and is part of the VNX OE for File software package. The new version of the VNX for File operating system is loaded and run on the Data Movers only after a Data Mover reboot.
Control Station rebooting
Most software upgrades require a Control Station reboot. When
upgrading VNX for File software, if either of the first two digits of the
new version number change (for example, 6.0.x to 7.0.x), the Control
Station requires rebooting.
At the beginning of upgrade, the system information section lists
whether or not the Control Station reboot is mandatory. Note that this
initial output is long and may scroll off the screen. You will be prompted
whether to continue the upgrade after all the upgrade information is
displayed.

If you enter no, the upgrade stops, and you are returned to the command
prompt.
===========System Information===========
Install Manager Version: 6.0.34-4
Install Manager Command:
/celerra/upgrade/pkg_6.0.34.4/install_mgr
Starts at: Thu Jun 3 10:36:56 2010
From version: 5.6.49-1
To: 6.0.34-4
Dual Control Stations: yes
Slot: slot_0
Blade slot(s): 2, 3, 4, 5
Cabinet family: NS
Cabinet type: NS-HAMMERHEAD
Control Station reboot: multiple (mandatory)
Blade reboot: 1 time (mandatory)
Total number of attempts: 0
=========================================
.
.< other information is posted here>
.
Do you wish to continue with the upgrade now [ yes or no ]?
> yes

Data Mover rebooting


The Data Movers require a reboot in order to run the new version of the
VNX for File operating system. When upgrading VNX for File software,
if either of the first two digits of the new version number change (i.e.,
6.0.x to 7.0.x), the Data Mover requires rebooting.
The system information screen is the same as the information in the
Control Station rebooting screen.
Limited scheduling of the Data Mover reboot exists. An estimated time
for when the Data Mover reboot will happen is shown at the beginning
of upgrade:
==============Estimated Time for Services ===============
Current Time:                                        10:36
Estimated Time when NAS service will be stopped:     10:38
Estimated Time when Data Movers will be reset:       11:28
Estimated Time when NAS service will be restarted:   11:39
Estimated Time when upgrade will be complete:        11:50
==================================================
The Upgrade Data Movers task prompts before it reboots the Standby
Data Movers. It then provides a menu to reboot the Primary Data
Movers similar to the following:
==================================================
11:58 [ 46/70 ] Upgrade Data Movers

7 minutes

Start at: Thu Jun 3 11:58:38 2010


All servers must now be rebooted.
Ready to reboot the Standby Data Mover (3). Press the ENTER
key to begin...
The following server(s) need to be rebooted to run the new
release:
2 ) server_2
Enter slot number(s) or ALL to select server(s) for reboot:

==================================================
When upgrading VNX for File, if both of the first two digits of the new
version of software remain the same (for example, 7.0.x to 7.0.y), you
may delay the rebooting of the Data Movers.
The system information screen is the same as the information in the
Control Station rebooting example except for the following:
Data Mover reboot: 1 time (advisory)

The Upgrade Data Movers task asks if you wish to reboot the Data
Movers. If you enter yes, you will be prompted to press ENTER to reboot
the standby Data Movers, then prompted to reboot the primary Data
Movers. If you enter no, the Data Movers continue running the current
(pre-upgrade) version of VNX software until they are rebooted.
=================================================
12:29 [ 44/67 ] Upgrade Data Movers              7 minutes
All servers need to be rebooted to run the new release.
Do you wish to reboot them now? [yes or no]:
Ready to reboot the Standby Data Mover (3). Press the ENTER
key to begin...
The following server(s) need to be rebooted to run the new
release:
2 ) server_2
Enter slot number(s) or ALL to select server(s) for reboot:

Waiting for ping response from server_2 ... :up (0 secs)
=================================================
Schedules during upgrades
As part of the upgrade procedure, schedules that run between midnight
and 1:00 A.M. may lose those run times. This affects only schedules that
run every X hours of the day, (X being from 1 through 12), and not
schedules that run at specific hours of the day. This is a known problem
and can be corrected by modifying the affected schedules after the
upgrade.
Upgrade requirements
Each existing read/write file system must have a minimum of 128 KB of free space prior to upgrading to VNX OE for File version 7.0.
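A quick way to confirm free space on the file systems mounted on a Data Mover before starting the upgrade (output format varies by release):

# Report capacity and free space for every file system mounted on server_2
server_df server_2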

Block OE upgrades
Occasionally, VNX for block storage connected to a VNX for File requires
a software upgrade. When upgrading Block OE software, do not run
heavy loads on the VNX for File, as operations may be extremely slow or
interrupted.
Notes:
o Block OE software cannot be upgraded unless the Control Station software is upgraded first. Ignoring this rule may make the VNX unmanageable by the VNX for File.
o Incremental VNX for block arrays cannot be added to systems running older VNX for File software unless they are running a compatible release of VNX for block software.
o A warning will appear when you upgrade the VNX for File software if the VNX for block software needs to be updated. If you receive this warning, finish upgrading the VNX for File software and then upgrade the VNX for block software as soon as possible to avoid compatibility issues.
o Consult the E-Lab Interoperability Navigator at http://Support.EMC.com for all software interdependencies.


Troubleshooting and getting help


Where to get help
EMC support, product, and licensing information can be obtained as follows:
Product information: For documentation, release notes, software updates, or for information about EMC products, licensing, and service, go to the EMC Powerlink website (registration required) at: http://Powerlink.EMC.com
Troubleshooting: Go to the EMC Online Support website. After logging in, locate the appropriate Support by Product page.
Technical support: For technical support and service requests, go to
EMC Customer Service on the EMC Online Support website. After
logging in, locate the appropriate Support by Product page and choose
either Live Chat or Create a service request. To open a service request
through EMC Online Support, you must have a valid support
agreement. Contact your EMC Sales Representative for details about
obtaining a valid support agreement or to answer any questions about
your account.
Note: Do not request a specific support representative unless one has been
assigned to your particular system problem.

EMC E-Lab Interoperability Navigator


The EMC E-Lab Interoperability Navigator is a searchable, web-based
application that provides access to EMC interoperability support
matrices. It is available at http://Support.EMC.com. After logging in to
the EMC Online Support website, locate the applicable Support by
Product page, find Tools, and click E-Lab Interoperability Navigator.


© 2012 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The
information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC
CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND
WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY
DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A
PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires
an applicable software license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks
on EMC.com.
All other trademarks used herein are the property of their respective owners.
