
DATA ONTAP NFS TROUBLESHOOTING FOR ENGINEERING

Instructor Notes
<Notes>
Student Notes
<Notes>

CLASSROOM LOGISTICS
Instructor Notes
<Notes>
Student Notes
<Notes>

INTRODUCTIONS
Instructor Notes
<Notes>
Student Notes
<Notes>

COURSE OBJECTIVES
Instructor Notes
<Notes>
Student Notes
<Notes>

COURSE AGENDA: DAY 1


Instructor Notes
<Notes>
Student Notes
<Notes>

COURSE PREREQUISITES (RECOMMENDED)


Instructor Notes
<Notes>
Student Notes
<Notes>

NETAPP UNIVERSITY INFORMATION SOURCES


Instructor Notes
<Notes>
Student Notes
<Notes>

THANK YOU
Instructor Notes
<Notes>
Student Notes
<Notes>

NFS
Instructor Notes
<Notes>
Student Notes

MODULE OBJECTIVES
Instructor Notes
<Notes>
Student Notes

SCON ISSUES: CONFIGURATION


Instructor Notes
<Notes>
Student Notes

NFS ISSUES: CONFIGURATION


Instructor Notes
<Notes>
Student Notes
Configuration is the number one source of NFS issues: incorrect configuration of NFS for the SVM (enabled protocol
versions and options), network configuration, export policies, and authentication and authorization mechanisms.
In NFS, the configuration of entities external to the clustered Data ONTAP system also plays a major role; incorrect
configuration of these entities (Active Directory, NIS servers, and so on) has been the cause of more than 50% of
protocol-related cases opened by customers.
In the subsequent modules we will understand and troubleshoot:
a) NFS configuration (internal and external)
b) Subsystems involved in NFS operations (SCON, VLDB, N-blade, CSM, and so on)
c) NFS caching mechanism(s)
d) The sources of configuration information, such as VLDB tables, mgwd tables, and files in mroot

In each of the following modules, we will attempt to:
a) Learn about the CLI commands, EMS messages, counters, logging, and tracing tools for triaging issues related to the
topics being discussed
b) Understand why the issue has occurred
c) Learn how we can fix it

WHERE TO GO NEXT
Instructor Notes
<Notes>
Student Notes

NFS-SPECIFIC COMMANDS
Instructor Notes
<Notes>
Student Notes

NFS-SPECIFIC COMMANDS
Instructor Notes
<Notes>
Student Notes
dev01cluster-1::> vserver nfs show
Vserver: dev01
General Access: true
v3: enabled
v4.0: enabled
v4.1: disabled
UDP: enabled
TCP: enabled
Default Windows User: learn\Administrator
Default Windows Group: learn\Domain Users
dev01cluster-1::> set diag
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y
dev01cluster-1::*> nfs status
The NFS server is running on Vserver "dev01".

NFS-SPECIFIC COMMANDS
Instructor Notes
<Notes>
Student Notes

NFS-SPECIFIC COMMANDS
Instructor Notes
<Notes>
Student Notes

NFS-SPECIFIC COMMANDS
Instructor Notes
<Notes>
Student Notes

NFS-SPECIFIC COMMANDS
Instructor Notes
<Notes>
Student Notes

NFS-SPECIFIC COMMANDS
Instructor Notes
<Notes>
Student Notes

MODULE SUMMARY
Instructor Notes
<Notes>
Student Notes

THANK YOU
Instructor Notes
<Notes>
Student Notes

NFS
Instructor Notes
<Notes>
Student Notes

MODULE OBJECTIVES
Instructor Notes
<Notes>
Student Notes

NFSV3 FEATURES
Instructor Notes
Student Notes
NFS version 3 (NFSv3) introduces the concept of safe asynchronous writes. An NFSv3 client can specify that the
server is allowed to reply before the server saves the requested data to disk, which permits the server to gather small
NFS write operations into a single efficient disk write operation. An NFSv3 client can also specify that the data must be
written to disk before the server replies, just like an NFS version 2 (NFSv2) write. The client specifies the type of write
by setting the stable_how field in the arguments of each write operation: UNSTABLE requests a safe asynchronous
write, and DATA_SYNC or FILE_SYNC requests an NFSv2-style stable write.
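
As a quick client-side illustration (a sketch assuming a Linux NFSv3 client; the server and export names are hypothetical), the sync mount option forces the client to issue stable writes instead of UNSTABLE writes followed by COMMIT:
# Default: the client may send UNSTABLE writes and commit them later
mount -t nfs -o vers=3 dev01:/dev01_nfs /mnt/nfs
# Force stable (NFSv2-style) writes for every WRITE call
mount -t nfs -o vers=3,sync dev01:/dev01_nfs /mnt/nfs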

NFSV3 FEATURES (CONT.)


Instructor Notes
Student Notes
Weak Cache Consistency
SETATTR
A client may request that the server check that the object is in an expected state before performing the SETATTR
operation. To do this, the client sets the argument guard.check to TRUE and passes a time value in guard.obj_ctime.
If guard.check is TRUE, the server must compare the value of guard.obj_ctime to the current ctime of the object. If the
values are different, the server must preserve the object attributes and must return a status of NFS3ERR_NOT_SYNC.
If guard.check is FALSE, the server will not perform this check.

DATA ONTAP AND CLIENT SUPPORT


Instructor Notes
Student Notes
Typical mount options also include: tcp,rsize=65536,wsize=65536,hard,intr,bg. Certain deployment scenarios may
require different options. Always check with application vendors for appropriate NFS mount options.
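
For example (a sketch; the server name and export path reuse names shown elsewhere in this course), a typical NFSv3 mount using these options might look like:
# mount -t nfs -o tcp,vers=3,rsize=65536,wsize=65536,hard,intr,bg dev01-vsim1-d3.sim.rtp.netapp.com:/dev01_nfs /cmode2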

DATA ONTAP AND CLIENT SUPPORT


Instructor Notes
Student Notes

Student Notes:
The state field in the expanded NFS LOCK command packet is an odd number.

NFSV3 LOCKS
Instructor Notes
<Notes>
Student Notes
The NLM provides two types of locks, monitored and non-monitored.
Monitored Locks:
Monitored locks are reliable. A client process which establishes monitored locks can be assured that if the server host, on which the locks are
established, crashes and recovers, the locks will be reinstated without any action on the client process' part. Likewise, locks that are held by a
client process will be discarded by the NLM on the server host if the client host crashes before the locks are released.
Monitored locks require both the client and server hosts to implement the NSM protocol.
Monitored locks are preferred over the non-monitored locks.
NSM
Each NSM keeps track of its own "state" and notifies any interested party of a change in this state. The state is merely a number which increases
monotonically each time the condition of the host changes: an even number indicates the host is down, while an odd number indicates the host is
up.
The NSM does not actively "probe" hosts it has been asked to monitor; instead it waits for the monitored host to notify it that the monitored host's
status has changed (that is, crashed and rebooted).
When it receives an SM_MON request an NSM adds the information in the SM_MON parameter to a notify list. If the host has a status change
(crashes and recovers), the NSM will notify each host on the notify list via the SM_NOTIFY call. If the NSM receives notification of a status change
from another host it will search the notify list for that host and call the RPC supplied in the SM_MON call.
NSM maintains copies of its current state and of the notify list on stable storage.
For example, on Red Hat RHEL 6.5:
# mount

dev01-vsim1-d3.sim.rtp.netapp.com:/dev01_nfs on /cmode2 type nfs (rw,vers=3,addr=10.63.50.199)


# cd /var/lib/nfs/statd/sm
# ls -l
total 4
-rw------- 1 rpcuser rpcuser 120 Nov 25 04:01 dev01-vsim1-d3.sim.rtp.netapp.com
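
To see the NSM state number itself (a sketch assuming a Linux client running rpc.statd; the file location can vary by distribution, and the value shown is hypothetical), dump the binary state file:
# od -An -td4 /var/lib/nfs/statd/state
           7
An odd value indicates that the host is up, as described above.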

Cluster-Mode:
For NFSv3, no lock information is stored on the N-blade.
The D-blade persistently stores information about the clients that have made lock requests in two metafiles (lmgr_host_file and
lmgr_host_notify). When the NFS server reboots (the D-blade goes down), the lock manager goes through all records stored in the
metafiles and sends reclaim requests to the N-blade. The N-blade then sends these reclaim requests to the appropriate
clients. Each client is responsible for reclaiming all previously requested locks within the grace period designated by the server.
Any state that is not reclaimed is cleaned up by the server once the grace period has expired.

BREAKING AN NFSV3 LOCK


Instructor Notes
<Notes>
Student Notes
The overall procedure is:
1. On the client, shut down all processes that use the affected NFS resources by using ps -ef and grep for known
processes such as a database process.
2. On the client, umount all NFS resources.
3. On the client, determine the process IDs of statd and lockd: ps -ef | grep lockd and ps -ef | grep statd.
4. Kill the lockd and statd processes: kill [lockd_process_id] and kill [statd_process_id].
5. On the server, use vserver locks show to verify that the locks are gone. If not, use vserver locks break to break the
locks (see the server-side sketch below).
6. On the client, restart the statd and lockd processes: /usr/lib/nfs/statd and /usr/lib/nfs/lockd.
7. On the client, remount the NFS export and restart any required processes.
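
Step 5 on the cluster side might look like this (a sketch; the Vserver name and lock ID are hypothetical, and vserver locks break requires the advanced privilege level):
cluster1::> set advanced
cluster1::*> vserver locks show -vserver dev01
cluster1::*> vserver locks break -lockid 7c3d4e5f-1234-5678-9abc-def012345678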

BREAKING AN NFSV3 LOCK: LOCK RECLAIM


Instructor Notes
<Notes>
Student Notes

MODULE SUMMARY
Instructor Notes
<Notes>
Student Notes

THANK YOU
Instructor Notes
<Notes>
Student Notes

NFS
Instructor Notes
<Notes>
Student Notes

MODULE OBJECTIVES
Instructor Notes
<Notes>
Student Notes

HOW NFS WORKS IN THE Data ONTAP ENVIRONMENT


Instructor Notes
<Notes>
Student Notes

HOW NFS REQUESTS WORK


Instructor Notes
<Notes>
Student Notes
From: http://wikid.netapp.com/w/NGS_NPI/Data_ONTAP/8.0.0/C-modeNetworking
Client-facing
Packets from clients arrive on SK network ports, which may be independent ethernet ports, or may be ifgrps (trunks) or VLANs.
The ARP, ICMP, IP, TCP, and UDP protocols are processed by the same SK network stack that is used for 7-Mode. The TCP
control blocks (TCPCBs) and the Internet Protocol Control Blocks (INPCBs, for both TCP and UDP) are located in the SK stack. If
the packet arrives for a Cluster-Mode LIF with connection state or a listening socket for this packet, then the packet is queued on
the socket, and a signal is sent to PCP. PCP will read the data from the socket and forward the data to the appropriate stream
protocol, such as NFS or CIFS. All packets through the SK network stack code and data packets through PCP are processed in an
Octet context or in nwk_legacy (for UDP and minor protocols). (PCP will conduct connection setup and teardown operations in the
network exclusive domain in 8.0.)
The stream protocols are GX-based scale-out versions of their 7-Mode counterparts. They include NFSv2, NFSv3, CIFS, CIFS
nameservice, NLM, and NRV. Details of the stream protocols are beyond the scope of this document and can be obtained from the
NAS development teams. In 8.x, the stream protocol code runs in a separate NBlade thread (an SK thread) called protocol, which
is not in the network domain. The stream protocol parses the request and converts it into a SpinNP request. This may require
querying the VLDB to determine which DBlade contains the relevant volume. (The VLDB query would follow the red lines from the
RDB client to the VLDB in the diagram in the Control Path section of this document.) The stream protocol then asks CSM to
forward the request to the correct DBlade.
Once CSM gets a response from the DBlade, it gives the reply back to the stream protocol, which reformats the reply into an NFS
or CIFS format, and enqueues an outbound packet. These packets are placed into the send socket buffer for transmission by the
SK network stack.
Transmission of these packets will always use the same LIF that the request arrived on. The next hop address is determined either
from the same Fast Path feature used by 7-Mode, or by doing a route lookup in the routing table of the LIF's routing group. The
route lookup occurs in the active table located in the SK stack, containing dynamically-learned ARP entries as well as routes
populated by the Vifmgr application from its databases.

REMOTE NFS REQUESTS


Instructor Notes
<Notes>
Student Notes
From: http://wikid.netapp.com/w/NGS_NPI/Data_ONTAP/8.0.0/C-modeNetworking
Cluster-facing
CSM (Cluster Session Manager) determines whether the DBlade is local to this node, or present on another node. If a
CSM session needs to communicate to a DBlade on another node in the cluster, it sends SpinNP packets over the RC
protocol. CT is a cluster-facing stream protocol maintained by the CSM group. Like the client-facing stream protocols,
RC uses PCP to interface to the network stack, exactly as described in the Client-facing section above. Its UDP traffic
is carried through the nwk_legacy thread in 8.0.
Since optimization of cluster communications is critical to scale-out performance, CT implements a fastpath through
the SK stack. Most RC packets meet the very specific criteria to qualify for the fastpath, allowing them to bypass much
of the IP, UDP, socket, and PCP layers.

SecD AND CIFS


Instructor Notes
<Notes>
Student Notes

EXPORTING IN Data ONTAP


Instructor Notes
<Notes>
Student Notes
Export policies contain one or more export rules that process each client access request. The result of
the process determines whether the client is denied or granted access and what level of access. An
export policy with export rules must exist on a Vserver for clients to access data.
You associate exactly one export policy with each volume to configure client access to the volume. A
Vserver can contain multiple export policies. This enables you to do the following for Vservers with
multiple volumes:
Assign different export policies to each volume of a Vserver for individual client access control
to each volume in the Vserver.
Assign the same export policy to multiple volumes of a Vserver for identical client access control
without having to create a new export policy for each volume.
If a client makes an access request that is not permitted by the applicable export policy, the request
fails with a permission-denied message. If a client does not match any rule in the volume's export
policy, then access is denied. If an export policy is empty, then all accesses are implicitly denied.
You can modify an export policy dynamically on a system running Data ONTAP.
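
For example (a sketch; the Vserver name, policy name, and client subnet are hypothetical), an export policy and a rule can be created as follows:
cluster1::> vserver export-policy create -vserver dev01 -policyname engpolicy
cluster1::> vserver export-policy rule create -vserver dev01 -policyname engpolicy -clientmatch 10.63.0.0/16 -protocol nfs3 -rorule sys -rwrule sys -superuser none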

EXPORT POLICIES AND VOLUMES


Instructor Notes
<Notes>
Student Notes

Each Vserver with FlexVol volume has a default export policy that contains no rules. An export
policy with rules must exist before clients can access data on the Vserver, and each FlexVol volume
contained in the Vserver must be associated with an export policy.
When you create a Vserver with FlexVol volume, the storage system automatically creates a default
export policy called default for the root volume of the Vserver. You must create one or more rules
for the default export policy before clients can access data on the Vserver. Alternatively, you can
create a custom export policy with rules. You can modify and rename the default export policy, but
you cannot delete the default export policy.
When you create a FlexVol volume in its containing Vserver with FlexVol volume, the storage
system creates the volume and associates the volume with the default export policy for the Vserver. By
default, each volume created in the Vserver is associated with the default
export policy for the vserver. You can use the default export policy for all volumes contained in
the Vserver, or you can create a unique export policy for each volume. You can associate multiple
volumes with the same export policy.
The root volume of a virtual server (Vserver) is created from an aggregate within the cluster. The root volume
is mounted at the root junction path (/) and is automatically assigned the default export policy. As you add
new volumes to the namespace, assign new export policies (such as vsNFS_vol1 assigned vsNFS_policy1)
or have the export policy inherit from the volume's parent (vsNFS_vol02 inherits from vsNFS_root's
export policy).
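
The association is made through the volume's -policy attribute; for example (a sketch using the hypothetical names above):
cluster1::> volume modify -vserver vsNFS -volume vsNFS_vol1 -policy vsNFS_policy1
cluster1::> volume show -vserver vsNFS -volume vsNFS_vol1 -fields policy
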
EXPORT POLICY RULES


Instructor Notes
<Notes>
Student Notes

SECURITY TYPES AND CLIENT ACCESS LEVELS


Instructor Notes
<Notes>
Student Notes

EXPORT POLICY RULES: EXAMPLE 1


Instructor Notes
<Notes>
Student Notes

EXPORT POLICY RULES: EXAMPLE 2


Instructor Notes
<Notes>
Student Notes

EXPORT POLICY RULES: EXAMPLE 3


Instructor Notes
<Notes>
Student Notes

EXPORT POLICY RULES: EXAMPLE 4


Instructor Notes
<Notes>
Student Notes

EXPORTING IN Data ONTAP


Instructor Notes
<Notes>
Student Notes
burt 842647: vserver export-policy check-access does not allow an access check for root. This will be fixed in 8.3.1.
burt 854974: the vserver export-policy check-access command shows access denied incorrectly when a transient
problem was hit while evaluating a rule. This has already been fixed in 8.3.1.

EXERCISE
Instructor Notes
<Notes>
Student Notes
Please refer to your exercise guide.

NFS LOOKUP SERVICES


Instructor Notes
<Notes>
Student Notes

When an NFS client connects to the SVM, Data ONTAP obtains the UNIX credentials (UNIX user name and group
name) for the user by checking different name services, depending on the name-service configuration of the SVM.
Data ONTAP can check credentials for local UNIX accounts, NIS domains, and LDAP domains. At
least one of them must be configured so that Data ONTAP can successfully authenticate the user.
You can specify multiple name services and the order in which Data ONTAP searches them.
In a pure NFS environment with UNIX volume security styles, this configuration is sufficient to
authenticate and provide the proper file access for a user connecting from an NFS client
In NFSv3, client-server communication happens using numeric UID/GID. The client is responsible
for mapping it back to the unix user name and group name using the source of the GID/UID mapping
specified in the /etc/nsswitch.conf file on the client.
For example, user root has uid = 0 and gid = 0.
Now the client sends a CREATE request to the ONTAP to create a
file called foo. The UID and the GID are contained in the RPC layer and parameters, such as
filename, attributes, etc., for the CREATE procedure embedded in the NFS layer. When the
ONTAP nfs server gets the CREATE request from the client, it uses the UID and GID
numbers from the RPC header and stores them in the inode of the new file foo if the security style of the volume
being accessed is unix
If the security style is NTFS, then the UID/GID has to be mapped to a UNIX user name and group name, respectively.
Data ONTAP uses the sources specified for the passwd and group databases to perform this mapping. The sources
may be nis, files, or ldap.
Subsequently, the UNIX user name has to be mapped to a Windows name, which in turn must be
mapped to a SID. This SID is then stored in the inode of the file foo.
After the file foo is created, the NFS server returns the numeric UID and GID during every
GETATTR request for that file from the client. The client then maps the UID and GID numbers that
the NFS server returns to the UNIX user name and group name, respectively, after consulting the
/etc/nsswitch.conf file for the appropriate sources.
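
For reference (a sketch of a typical Linux client configuration; the chosen sources are illustrative), the relevant /etc/nsswitch.conf entries might look like:
passwd:   files nis
group:    files nis
hosts:    files dns
netgroup: nis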

In 8.2, the name server switch configuration was at the vserver level and that meant that for all the
databases - netgroups, hosts, users, and groups - the set of name servers had to be evaluated in the
order provided.
vsim::> vserver modify -vserver vs1 -ns-switch ?
nis file ldap
This UI had additional problems.
One could not specify 'dns' as a value in the 'ns-switch' for a vserver because of the shared nature of
this configuration. A DNS server does not have the capability to host a netgroup database. So if
someone set ns-switch to 'files,dns' netgroup resolution would fail if ONTAP tried to contact a DNS
server for netgroup information. So 'dns' was never allowed as a valid entry that could be specified in
the 'ns-switch' field for a vserver.
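
In 8.3 the name service switch is configured per database, which avoids this problem; for example (a sketch with hypothetical source lists):
vsim::> vserver services name-service ns-switch modify -vserver vs1 -database hosts -sources files,dns
vsim::> vserver services name-service ns-switch modify -vserver vs1 -database netgroup -sources files,nis
vsim::> vserver services name-service ns-switch show -vserver vs1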

Netgroup Limits
For Netgroups in 8.3 we support a maximum nesting level of 1000. And for a local netgroup file we support a maximum
of 4096 characters per line in the netgroup file.
http://limits.gdl.englab.netapp.com/limits/ONTAP/CMode/sw:fullsteam.0/limit:max_netgroup_nest
http://limits.gdl.englab.netapp.com/limits/ONTAP/CMode/sw:fullsteam.0/limit:max_netgroup_characters
http://limits.gdl.englab.netapp.com/limits/ONTAP/CMode/sw:fullsteam.0/limit:max_exports_characters

User Name Mapping During Multiprotocol Access


Data ONTAP performs a number of steps when attempting to map user names. Name mapping can take place for one of two reasons:
The user name needs to be mapped to a UID
The user name needs to be mapped to a Windows SID
Name Mapping Functionality
The method of user mapping will depend on the security style of the volume being accessed. If a volume with UNIX security style is accessed via
NFS, then a UID will need to be translated from the user name to determine access. If the volume is NTFS security style, then the UNIX user name
will need to map to a Windows user name/SID for NFS requests because the volume will use NTFS-style ACLs. All access decisions will be made
by the NetApp device based on credentials, group membership, and permissions on the volume.
By default, NTFS security style volumes are set to 777 permissions, with a UID and GID of 0, which generally translates to the root user. NFS
clients will see these volumes in NFS mounts with this security setting, but users will not have full access to the mount. The access will be
determined by which Windows user the NFS user is mapped to.
The cluster will use the following order of operations to determine the name mapping:
1. 1:1 implicit name mapping
a. Example: WINDOWS\john maps to UNIX user john implicitly
b. In the case of LDAP/NIS, this generally is not an issue
2. Vserver name-mapping rules
a. If no 1:1 name mapping exists, SecD checks for name mapping rules
b. Example: WINDOWS\john maps to UNIX user unixjohn
3. Default Windows/UNIX user
a. If no 1:1 name mapping and no name mapping rule exist, SecD will check the NFS server for a default Windows user or the CIFS server for a
default UNIX user
b. By default, pcuser is set as the default UNIX user in CIFS servers when created using System Manager 3.0 or vserver setup
c. By default, no default Windows user is set for the NFS server
4. If none of the above exist, then authentication will fail
a. In most cases in Windows, this manifests as the error "A device attached to the system is not functioning"
b. In NFS, a failed name mapping will manifest as access or permission denied
Name mapping and name switch sources will depend on the SVM configuration.
Best Practice
It is a best practice to configure an identity management server such as LDAP with Active Directory for large multiprotocol environments.
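
Step 2 above corresponds to explicit name-mapping rules; for example (a sketch using the WINDOWS\john example from above; quoting and escaping of the backslash may vary):
cluster1::> vserver name-mapping create -vserver dev01 -direction win-unix -position 1 -pattern "WINDOWS\\john" -replacement unixjohn
cluster1::> vserver name-mapping show -vserver dev01 -direction win-unix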

NFS AND KERBEROS


Instructor Notes
<Notes>
Student Notes

AUTHORIZATION
Instructor Notes
<Notes>
Student Notes

SecD AND CIFS


Instructor Notes
<Notes>
Student Notes

SecD IN NFS
Instructor Notes
<Notes>
Student Notes

SecD: APPLICATION AND SERVER


Instructor Notes
<Notes>
Student Notes
Asynchronous means that a single client (specially written to do this) can send multiple RPCs simultaneously which
the N-blade does, and the server will process and respond to the RPCs in parallel starting with Data ONTAP 8.1.

SecD: PROCESS AND CONFIGURATION


Instructor Notes
<Notes>
Student Notes
Example of table update:
[kern_SecD:info:1599] .------------------------------------------------------------------------------.
[kern_SecD:info:1599] |                                 TRACE MATCH                                  |
[kern_SecD:info:1599] | RPC SecD_rpc_config_table_update succeeded and is being dumped because of a  |
[kern_SecD:info:1599] | tracing match on: All                                                        |
[kern_SecD:info:1599] | RPC received at Wed Mar 14 14:29:51 2012                                     |
[kern_SecD:info:1599] |------------------------------------------------------------------------------'
[kern_SecD:info:1599] | [000.000.061] debug: Worker Thread 34369186320 processing RPC 702:SecD_rpc_config_table_update with request ID:23133 which sat in the queue for 0 seconds. { in run() at server/SecD_rpc_server.cpp:1463 }
[kern_SecD:info:1599] | [000.000.130] debug: An update to configuration table SecD_cifs_server_security_db_view has been received. Checking table contents... { in SecD_rpc_config_table_update_1_svc() at configuration_manager/SecD_rpc_config.cpp:353 }
[kern_SecD:info:1599] | [000.000.222] debug: SUCCESS: Number of field names matches number of field values! Updating table 'SecD_cifs_server_security_db_view' { in SecD_rpc_config_table_update_1_svc() at configuration_manager/SecD_rpc_config.cpp:369 }
[kern_SecD:info:1599] | [000.000.248] debug: Update received for config source SecD_cifs_server_security_db_view: row about to be added { in updateSourceData() at ../SecD/include/SecD_configuration_sources.h:188 }
[kern_SecD:info:1599] | [000.000.263] debug: Translating row to record for table 'CifsServerSecurity' { in updateSourceData() at ../SecD/include/SecD_configuration_sources.h:193 }
[kern_SecD:info:1599] | [000.000.368] debug: Querying config source 'CifsServerSecurity' (with 0 rows of data) by keys vserver id: '5' { in query() at configuration_manager/SecD_configuration_sources.cpp:4308 }
[kern_SecD:info:1599] | [000.000.393] debug: Translating rowToRecord for table 'CifsServerSecurity' { in postUpdateSourceData() at configuration_manager/SecD_configuration_sources.cpp:4379 }
[kern_SecD:info:1599] | [000.000.496] debug: SecD RPC Server sending reply to RPC 702: SecD_rpc_config_table_update { in SecDSendRpcResponse() at server/SecD_rpc_server.cpp:1359 }
[kern_SecD:info:1599] |------------------------------------------------------------------------------.
[kern_SecD:info:1599] | RPC completed at Wed Mar 14 14:29:51 2012                                     |
[kern_SecD:info:1599] | End of log for successful RPC SecD_rpc_config_table_update.                  |
[kern_SecD:info:1599] '------------------------------------------------------------------------------'

SecD SERVER DEPENDENCY


Instructor Notes
<Notes>
Student Notes

SecD CONNECTION MANAGEMENT: REQUEST


Instructor Notes
<Notes>
Student Notes

SecD CONNECTION MANAGEMENT: PROCESS


Instructor Notes
<Notes>

SecD CONNECTION MANAGEMENT: BEST PRACTICE


Instructor Notes
<Notes>
Student Notes
Internally known as Vserver Reachability, or VSUN (Virtual Server Uniform Networking), LIF sharing is a feature that
allows a Vserver to make outbound connections from any node, regardless of which nodes its LIFs are on.
With LSOC, a Vserver can make outbound connections from one node by using a LIF of a different node.
One mental model for this is that it makes all nodes appear to have all LIFs.
This feature is always on and has no configuration knobs. It works automatically in the background without
applications or admins being aware of it. (The feature is limited to applications that use the FreeBSD stack, such as
user-space applications. It is not available for applications that still run in the SK stack.)

SecD CACHING: TYPES


Instructor Notes
<Notes>
Student Notes
Caching in SecD

One of the enhancements made to SecD was to provide extensive caching for connections.
This helps improve performance by preventing constant calls for connections, and avoids issues when the
network hiccups.

SecD CACHING: MANAGEMENT


Instructor Notes
<Notes>
Student Notes
Caching in SecD
One of the enhancements made to SecD was to provide extensive caching for connections.
This helps improve performance by preventing constant calls for connections, and avoids issues when the network
hiccups.
The cache types include: ad-to-netbios-domain, netbios-to-ad-domain, connection-shim-lif, ems-delivery,
ldap-groupid-to-name, ldap-groupname-to-id, ldap-userid-to-creds, ldap-username-to-creds, name-to-sid,
sid-to-name, schannel-key, nis-userid-to-creds, nis-username-to-creds, nis-groupid-to-name,
nis-groupname-to-id, nis-group-membership, and netgroup.

SecD CONFIGURATION: SETTINGS


Instructor Notes
<Notes>
Student Notes
::*> diag SecD configuration show-fields -source-name
cifs-server, kerberos-realm, machine-account, nis-domain, vserver, vserverid-to-name,
unix-group-membership, local-unix-user, local-unix-group, kerberos-keyblock,
ldap-config, ldap-client-config, ldap-client-schema, name-mapping, nfs-kerberos,
cifs-server-options, cifs-server-security, dns, cifs-preferred-dc

SecD CONFIGURATION: MISMATCHES


Instructor Notes
<Notes>
Student Notes

::*> diag SecD configuration query -node nodename -source-name

cifs-server, kerberos-realm, machine-account, nis-domain, vserver, vserverid-to-name,
unix-group-membership, local-unix-user, local-unix-group, kerberos-keyblock,
ldap-config, ldap-client-config, ldap-client-schema, name-mapping, nfs-kerberos,
cifs-server-options, cifs-server-security, dns, cifs-preferred-dc,
virtual-interface, routing-group-routes, SecD-cache-config
Example of DNS config query for Vserver 12:
::*> diag SecD configuration query -node nodename -source-name dns
vserver: 12
domains: rtp2k3dom3.ngslabs.netapp.com
name-servers: 10.61.70.5
state: true
timeout: 2
attempts: 1
user-modified: true

https://wikid.netapp.com/w/NFS/FS.0/Documents/Exports/libC/DesignSpec#libc
DNS was vserverized in Data ONTAP starting 8.2.1.
vsim::> vserver services dns hosts ?
create   Create a new host table entry
delete   Remove a host table entry
modify   Modify hostname or aliases
show     Display IP address to hostname mappings
The NFS exports code in mgwd used SecD to talk to the DNS server starting 8.2.1. But SecD did/does not have
support to refer to a local hosts database for hostname to IP address resolution. The NFS exports code would always
talk to the DNS server to resolve hostnames or IP addresses. So NFS exports could never take advantage of
hostnames configured locally using the 'vserver services dns hosts' command.
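
For reference (a sketch with hypothetical values), local host entries are created as follows; note the caveat above that the exports code did not consult them:
vsim::> vserver services dns hosts create -vserver vs1 -address 10.63.50.10 -hostname h1.lab.netapp.com
vsim::> vserver services dns hosts show -vserver vs1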

In 8.2.0, 8.2.1, and 8.2.2, for example, if the netgroup contained 80,000 hostnames, the exports processing code in mgwd
would wait until the entire list of 80,000 hostnames was downloaded from the name server before checking whether the IP
address of the client (for which the export check was being performed) was a member of the netgroup.


SecD AND CIFS


Instructor Notes
<Notes>
Student Notes

Instructor Notes
<Notes>
Student Notes
When connecting by means of NFS, the N-blade makes calls to mgwd and/or SecD to gather
information about export policy objects, users, groups, extended groups, hosts, and netgroups
for the client and user that is attempting to connect. mgwd and SecD in turn may make calls to
external name servers to gather this information.

Data ONTAP uses several exports related caches to store the gathered information for faster access. There
are certain tasks you can perform to manage export policy caches for
troubleshooting purposes.
How Data ONTAP uses export policy caches
To improve system performance, Data ONTAP uses local caches to store information such as host
names and netgroups. This enables Data ONTAP to process export policy rules more quickly than
retrieving the information from external sources. Understanding what the caches are and what they
do can help you troubleshoot client access issues.
You configure export policies to control client access to NFS exports. Each export policy contains
rules, and each rule contains parameters to match the rule to clients requesting access. Some of these
parameters require Data ONTAP to contact an external source, such as DNS or NIS servers, to
resolve objects such as domain names, host names, or netgroups.
These communications with external sources take a small amount of time. To increase performance,
Data ONTAP reduces the amount of time it takes to resolve export policy rule objects by storing
information locally on each node in several caches.
Every export policy rule has an anon field. The UNIX credentials of this anon user need to be provided during an export
check. If a UID is specified for the anon field in an export policy rule, the UID must be mapped to a UNIX user name.
This cache is used for anon regardless of the clientmatch; the clientmatch could be any of IP
address/domain/host/netgroup. The corresponding UNIX credentials are looked up by making an RPC call to SecD, and
the obtained credential is stored in the 'id' cache.
The nsswitch sources referred to when building the UNIX credentials are passwd (for the UID and primary GID) and
group (for additional GIDs).

Every export policy rule has an anon field. The UNIX credentials of this anon user need to be provided during an export
check. If a UNIX user name is specified for the anon field in an export policy rule, the UID corresponding to that UNIX
user name is looked up by making an RPC call to SecD, and the obtained UID is stored in the 'name' cache.
This cache is used for anon regardless of the clientmatch; the clientmatch could be any of IP
address/domain/host/netgroup.
The nsswitch source referred to is passwd (for looking up the UNIX user name to UID mapping).

Export policy rules that have a hostname in the clientmatch field will use this cache. The hostname specified in the rule
is converted into an IP address by performing a lookup. The IP address and hostname are stored in the cache. The
client IP address that comes as part of the export check RPC is compared with the IP address of the hostname
present in the policy rule.
If the clientmatch is a host, the ns-switch sources specified for the hosts database are consulted (for looking up the IP
address corresponding to the hostname specified in the clientmatch rule).

Export policy rules that have a netgroup in the clientmatch field will use this cache. The netgroup specified in the rule is
fetched from the name server specified in the ns-switch order for netgroup. Once all of these hostnames are fetched, they
are converted into IP addresses by performing a lookup on the name servers in the ns-switch order specified for hosts.
These IP addresses are then stored in the Patricia Trie in the cache. The client IP address that comes as part of the
export check RPC is compared with the IP addresses in the Trie for a match.
When the clientmatch is a netgroup, the sources consulted are:
netgroup database sources (for looking up all hosts present in a netgroup specified in the rule)
hosts database sources (for converting hostnames present in the netgroup into IP addresses)

by host" The IP address was found/not-found to be in the netgroup through a lookup done using the netgroup.byhost
API. Either the netgroup.byhost database contains the IP address as a key OR contains the hostname corresponding
to this IP address as a key. The hostname for this IP address is obtained via a reverse DNS query
This result state implies:
the netgroup cache (Patricia Trie) is in the process of being populated. And while it is being populated the
netgroup.byhost API is used to do the lookup and/or
the IP address was not found in the partial set of IP addresses present in the Patricie Trie that is in the process of
being populated
"cache" The IP address was found to be in the netgroup through a lookup done using the netgroup cache. The IP
address matched an IP address that was found in the Patricia Trie This result state implies:
the netgroup cache has been populated and is in a ready state or
the netgroup cache has been partially populated and the IP address we were looking for was found in the partial set of
IP addresses present in the Patricia Trie
"reverse lookup scan" The IP address was found/not-found to be in the netgroup through a lookup that involved: a
check to see if the IP address was present in the list of host entries obtained for the netgroup from the name server
if the IP address match fails, a reverse DNS query is performed to get the hostname corresponding to the IP address
a match is then attempted using this hostname with the host entries obtained for the netgroup from the name server
This result implies: the netgroup lookup could not be done using the netgroup.byhost API either because
netgroup.byhost has not been configured OR the netgroup.byhost API returned an non-deterministic result and/or
the netgroup cache is still in the process of being built. But the list of host entries present in the netgroup has already
been fetched from the name server. What is going on at the moment is that these host entries are being converted into
IP addresses (if not already an IP address) and being inserted into the Patricia Trie
the IP address was not found in the partial set of IP addresses present in the Patricie Trie that is in the process of
being populated
"not a member" The IP address was not found in the netgroup This result implies the IP address was deterministically
found to not be a member of the netgroup using one of the following:
by a lookup using the netgroup.byhost API if the netgroup cache is not yet populated
by a lookup using the netgroup cache only if the cache has been fully populated (non-membership cannot be
2014 NetApp. All rights reserved.

NetApp Confidential - For Internal Use Only

90

ascertained deterministically using a partial set of IP addresses in a Patricia Trie)


by a lookup using a lookup and match (using IP address or hostname (obtained via reverse DNS
query)) on the host entries obtained for the netgroup from the name server

Example:
netgroup file has (h1,,)
DNS domain has lab.netapp.com for that vserver
ip1 accesses the volume and DNS returns h1.lab.netapp.com for that IP address
If netgroup DNS domain search is enabled, h1 will be allowed access.
If netgroup DNS domain search is disabled, h1 will be denied access.

The clientmatch can be any of: IP address, host, domain, or netgroup.


Nsswitch sources referred to are:
passwd (for constructing anon user UNIX credentials)
group (for constructing anon user UNIX credentials)
hosts (for converting hostnames to IP addresses and vice versa for export policy rules that have host/domain/netgroup
clientmatch)
netgroup (for getting all host entries belonging to a netgroup when the export policy rule contains a netgroup)

https://wikid.netapp.com/w/NFS/FS.0/Documents/Exports/showmount/DesignSpec#Overview_of_chosen_approach
As exports are added or deleted, a job will be created in the cluster to run in the mhost to rewrite the exports data file for each vserver.
Whenever a volume
is created with a junction path
is deleted with a junction path
is promoted as a root volume
or a qtree
is created to have an explicit export policy
is modified to have an explicit export policy
is modified to remove an explicit export policy
is deleted with an explicit export policy
then we will kick off the job.
We do not start the job when a volume is
going offline
going online
being unmounted
being mounted
As MOUNTD EXPORT requests arrive in the nblade, the exports data XDR buffers will be read from the file and sent back to the client.
In the mhost, when the job manager loops over all vservers, if it detects that the option is disabled for the current vserver, it skips over regenerating the exports.data file
for that vserver.
In the nblade, when processing a MOUNTD_EXPORT request, if the option is disabled, the nblade skips trying to access the
exports.data file and simply reports '/'. Note that if there is an old copy of the file (i.e., an admin turned off the feature), then it
need not be deleted.
The location of the exports.data file
dev01cluster-1-01% pwd
/clus/dev01/.vsadmin/config/etc
dev01cluster-1-01% ls
exports.data
It is in XDR format
The read buffers for the exports file will be cached in the Nblade for N minutes. Any new MOUNT EXPORT requests that the Nblade received in those N minutes will
use the cached read buffer list to create the response without having to reread the exports file. The exports file is not likely to be changing with any great frequency. So,
in the case of a mount storm the cached read buffer list will greatly improve the response time and alleviate unnecessary load on the filer. The added reference to the
cached read buffers will be dropped after N mins and the buffers will be released back into the system on last use.
https://wikid.netapp.com/w/NFS/FS.0/Documents/Exports/showmount/DesignSpec

N-BLADE INTERACTION
Instructor Notes
<Notes>
Student Notes
NFS mount a volume in vserver student1 whose security style is ntfs and issue the following command:
cluster1::diag nblade credentials*> show -vserver student1 -unix-user-name root

N-BLADE INTERACTION: CREDENTIALING


Instructor Notes
<Notes>
Student Notes

N-BLADE INTERACTION: CREDENTIALING


Instructor Notes
<Notes>
Student Notes
::diag nblade cifs*>
credentials    interfaces    path-mapping    server    shares    spinnp
::*> diag nblade cifs credentials show -vserver vs0 -unix-user-name cmodeuser
Getting credential handles.
3 handles found....
Getting cred 0 for user.
Global Virtual Server: 7
Cred Store Uniquifier: 2
Cifs SuperUser Table Generation: 0
Locked Ref Count: 0
Info Flags: 1
Alternative Key Count: 0
Additional Buffer Count: 1
Allocation Time: 338055323 ms
Hit Count: 6 ms
Locked Count: 0 ms
Windows Creds:
Flags: 128
Primary Group: S-1-000000000005-21-184436492-4217587956-933746605-513
Domain 0 (S-1-000000000005-21-184436492-4217587956-933746605):
Rid 0: 1117
Rid 1: 1195
Rid 2: 513
Domain 1 (S-1-000000000005-32):
Rid 0: 545
Domain 2 (S-1-000000000001):
Rid 0: 0
Domain 3 (S-1-000000000005):
Rid 0: 11
Rid 1: 2
Unix Creds:
Flags: 0
Domain ID: 0
Uid: 503
Gid: 500
Additional Gids:
Gid 0: 500
Getting cred 1 for user.
Global Virtual Server: 7
Cred Store Uniquifier: 2
Cifs SuperUser Table Generation: 0
Locked Ref Count: 0
Info Flags: 1
Alternative Key Count: 0
Additional Buffer Count: 1
Allocation Time: 19900289 ms
Hit Count: 9 ms
Locked Count: 0 ms
Windows Creds:
Flags: 128
Primary Group: S-1-000000000005-21-184436492-4217587956-933746605-513
Domain 0 (S-1-000000000005-21-184436492-4217587956-933746605):
Rid 0: 1117
Rid 1: 1195
Rid 2: 513
Domain 1 (S-1-000000000005-32):
Rid 0: 545
Domain 2 (S-1-000000000001):
Rid 0: 0
Domain 3 (S-1-000000000005):
Rid 0: 11
Rid 1: 2
Unix Creds:
Flags: 0
Domain ID: 0
Uid: 503
Gid: 500
Additional Gids:
Getting cred 2 for user.
Global Virtual Server: 7
Cred Store Uniquifier: 2
Cifs SuperUser Table Generation: 0
Locked Ref Count: 0
Info Flags: 1
Alternative Key Count: 0
Additional Buffer Count: 1
Allocation Time: 9559651 ms
Hit Count: 8 ms
Locked Count: 0 ms
Windows Creds:
Flags: 128
Primary Group: S-1-000000000005-21-184436492-4217587956-933746605-513
Domain 0 (S-1-000000000005-21-184436492-4217587956-933746605):
Rid 0: 1117
Rid 1: 513
Domain 1 (S-1-000000000005-32):
Rid 0: 545
Domain 2 (S-1-000000000001):
Rid 0: 0
Domain 3 (S-1-000000000005):
Rid 0: 11
Rid 1: 2
Unix Creds:
Flags: 0
Domain ID: 0
Uid: 503
Gid: 500
Additional Gids:
::*> diag nblade cifs credentials flush -vserver vs0
FlushCredStore succeeded flushing 2 entries

br3050n2-rtp::*> diag nblade cifs credentials show -vserver vs0 -unix-user-name cmodeuser
Getting credential handles.

ERROR: command failed: RPC call to SecD failed. RPC: 'cred store: not found'.
Reason: ''

SecD AND CIFS


Instructor Notes
<Notes>
Student Notes

exports.ngbh.allFailed: This message occurs when a netgroup-by-host request fails because all ns-switch sources for the netgroup database have returned connection errors and files are unusable as a source.
Severity: ERR
Frequency: 1m
Remedial Action: There might be temporary connectivity issues with the Vserver's configured ns-switch sources for the netgroup database. Check connectivity to the name servers configured for netgroups.

netgroup.nis.config: This message occurs when a netgroup lookup request finds that Network Information Service (NIS) is specified as an ns-switch source, but NIS is not configured for the Vserver. Netgroup lookups using NIS will not function.
Severity: ERR
Frequency: 5m
Remedial Action: Check the ns-switch sources configured for the netgroup database using "vserver services name-service ns-switch show" and the NIS configuration for the Vserver using "nis-domain show". Either remove NIS as an ns-switch source, or configure NIS.

netgroup.nis.byhost.missing: This message occurs when the netgroup.byhost map is not configured on the Network Information Service (NIS) server and NIS is configured as an ns-switch source for the Vserver. Enabling netgroup.byhost enables mount operations to succeed faster when the netgroup size is large.
Severity: INFO
Frequency: 12h
Remedial Action: Consider configuring the netgroup.byhost map on the NIS server to gain better performance with netgroups.

netgroup.ldap.config: This message occurs when a netgroup lookup request finds that Lightweight Directory Access Protocol (LDAP) is specified as an ns-switch source, but LDAP is not configured for the Vserver. Netgroup lookups using LDAP will not function.
Severity: ERR
Frequency: 5m
Remedial Action: Check the ns-switch sources configured for the netgroup database using "vserver services name-service ns-switch show" and the LDAP configuration for the Vserver using "ldap show" and "ldap client show". Either remove LDAP as an ns-switch source, or configure LDAP.

netgroup.ldap.byhost.missing: This message occurs when netgroup.byhost is disabled in the Lightweight Directory Access Protocol (LDAP) client configuration on the storage system, and LDAP is configured as an ns-switch source for the Vserver. Enabling netgroup.byhost enables mount operations to succeed faster when the netgroup size is large.
Severity: INFO
Frequency: 12h
Remedial Action: Consider configuring the netgroup.byhost database on the LDAP server and enabling netgroup.byhost in the LDAP client configuration on the storage system by using the "ldap client modify" command.

netgroup.files.missing: This message occurs when a netgroup lookup request finds that files is specified as an ns-switch source, but a netgroup file cannot be found.
Severity: ERR
Frequency: 5m
Remedial Action: Check that the local netgroup file is present and load it if necessary. The commands to perform this are "vserver services netgroup load" and "vserver services netgroup status".
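Putting the remedial actions above together, a minimal triage sequence for netgroup lookup failures might look like this (the vserver name vs0 and the source URI are placeholders):
::> vserver services name-service ns-switch show -vserver vs0 -database netgroup
::> vserver services netgroup load -vserver vs0 -source-uri ftp://server/netgroup
::> vserver services netgroup status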


COMMON ISSUES: CANNOT MOUNT


Instructor Notes
<Notes>
Student Notes
Emphasize that many NFS issues are due to networking issues such as faulty cards, duplicate IP addresses, and so on.


EXERCISE
Instructor Notes
<Notes>
Student Notes
Please refer to your exercise guide.


COMMON ISSUES
Instructor Notes
<Notes>
Student Notes


New clients may access the server even before the server has had a chance to pull the new netgroup file that has this client present in the netgroup.
In 8.2.3 and 8.3.0, a worst case of 3 hours can be expected before a new client added to a netgroup is allowed access.
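Rather than waiting out the refresh window, the netgroup file can be re-pulled manually, and on 8.3 the relevant export-policy cache can be flushed as well (a sketch; the vserver name and URI are placeholders, and command availability varies by release):
::> vserver services netgroup load -vserver vs0 -source-uri http://server/netgroup
::> vserver export-policy cache flush -vserver vs0 -cache netgroup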


COMMON ISSUES: ACCESS DENIED AFTER MOUNTING


Instructor Notes
<Notes>
Student Notes


EXERCISE
Instructor Notes
<Notes>
Student Notes
Please refer to your exercise guide.


MODULE SUMMARY
Instructor Notes
<Notes>
Student Notes


THANK YOU
Instructor Notes
<Notes>
Student Notes


NFS
Instructor Notes
<Notes>
Student Notes


MODULE OBJECTIVES
Instructor Notes
<Notes>
Student Notes


Data ONTAP DATA STRUCTURES: MSID


Instructor Notes
<Notes>
Student Notes


Data ONTAP DATA STRUCTURES


Instructor Notes
<Notes>
Student Notes


CLUSTERED ONTAP DATA STRUCTURES: DSID


Instructor Notes
<Notes>
Student Notes
Each volume has a unique DSID.
DSID values in SpinNP messages, ZAPIs, and RPCs are translated into the WAFL fsid.


JUNCTION CHARACTERISTICS
Instructor Notes
<Notes>
Student Notes
Junction inodes are created in the volume they are mounted on.
/student1_nfs -> junction student1_nfs, created in the vserver root volume
/student1_nfs/volx -> junction volx, created in volume student1_nfs
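For reference, junctions like the ones above are created and removed with volume mount and unmount (the vserver name follows the example):
::> volume mount -vserver student1 -volume volx -junction-path /student1_nfs/volx
::> volume unmount -vserver student1 -volume volx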


JUNCTIONS AND NFS MOUNT REQUESTS


Instructor Notes
<Notes>
Student Notes
::*> debug smdb table junctionTable show -msid 2147484728
fileHandle                   msid       isActive
---------------------------- ---------- --------
"0x00|2147484725|97|1814643" 2147484728 true

% vldbtest dump -j | grep 2147484728
junctionID: 0x00|2147484725|97|1814643 msid: 2147484728 isActive: 1


SPINNP FILE HANDLES


Instructor Notes
<Notes>
Student Notes


NFSV3 FILE HANDLES


Instructor Notes
<Notes>
Student Notes
The MSID of the client's mount point equals the MSID of the volume originally mounted.
For example: mount student1:/student1_nfs, then cd volx, where volx is another volume mounted in student1_nfs.
The MSID of student1_nfs is returned as the fsid when -v3-fsid-change is disabled.
If it is enabled, the MSID of the volume currently being accessed is returned.
The default is enabled, for backward compatibility:
vserver nfs modify [ -v3-fsid-change {enabled|disabled} ]
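One way to observe this from a Linux client (a sketch; host, volume, and mount-point names follow the example above): with -v3-fsid-change disabled, the nested volume reports the parent's fsid, so both paths print the same filesystem ID, while with it enabled (the default) they differ.
% mount -t nfs -o vers=3 student1:/student1_nfs /mnt
% stat -f -c '%i' /mnt /mnt/volx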


Data ONTAP DATA STRUCTURES SUMMARIZED


Instructor Notes
<Notes>
Student Notes


ABOUT VLDB
Instructor Notes
<Notes>
Student Notes


VLDB
Instructor Notes
<Notes>
Student Notes
::*> node run local showfh /vol/test
flags=0x00 snapid=0 fileid=0x000040 gen=0x65f84bc3 fsid=0x7cc0f654 dsid=0x00000000000444
msid=0x00000080000438
::*> vol explore -format volume 1092 -dump name,dsid,msid,fsid
(volume explore)
name=test
dsid=1092
msid=2147484728
fsid=0x7CC0F654
::*> vol show -volume test -fields msid,dsid,fsid
(volume show)
vserver    volume dsid msid       fsid
---------- ------ ---- ---------- ----------
vserver    test   1092 2147484728 2093020756
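Note that all three views agree on the volume's identity, just in different bases: fsid 0x7CC0F654 = 0x7C*2^24 + 0xC0*2^16 + 0xF6*2^8 + 0x54 = 2080374784 + 12582912 + 62976 + 84 = 2093020756, the decimal fsid reported by vol show; likewise dsid 0x444 = 1092 and msid 0x80000438 = 2147484728, matching the hex values in the showfh output.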


TROUBLESHOOTING VLDB: COMMON ISSUES


Instructor Notes
<Notes>
Student Notes


TROUBLESHOOTING VLDB: COMMON ISSUES


Instructor Notes
<Notes>
Student Notes
vldbtest Dump Feature
% vldbtest dump
instanceID:0 host:127.0.0.1 cmd:dump
Command: dump
Command options:
-v    dump voltable
-s    dump snapshottable
-f    dump familytable
-F    dump flexclonetable
-M    dump msidtable
-a    dump aggrtable
-b    dump bladetable
-i    dump allocidtable
-n    dump nextidtable
-j    dump junctiontable
-m    dump mgmtvoltable
-c    dump coralstripetable
-e    dump coralepochtable
-l    dump all tables

% vldbtest dump -j
instanceID:0 host:127.0.0.1 cmd:dump


Calling vldb_dump_1
result: 0
junctionCount: 27
junctionID: 0x00|2147484678|96|269297 msid: 2147484689 isActive: 1
junctionID: 0x00|2147484678|97|148362959 msid: 2147484687 isActive: 1
junctionID: 0x00|2147484678|99|528588616 msid: 2147484692 isActive: 1
junctionID: 0x00|2147484688|96|9297239 msid: 2147484695 isActive: 1
junctionID: 0x00|2147484688|97|29143836 msid: 2147484704 isActive: 1
junctionID: 0x00|2147484688|98|95477819 msid: 2147484699 isActive: 1
junctionID: 0x00|2147484688|99|8296130 msid: 2147484705 isActive: 1
junctionID: 0x00|2147484688|100|8296653 msid: 2147484706 isActive: 1
junctionID: 0x00|2147484688|101|8297183 msid: 2147484707 isActive: 1
junctionID: 0x00|2147484688|102|8297732 msid: 2147484708 isActive: 1
junctionID: 0x00|2147484688|103|8298256 msid: 2147484709 isActive: 1
junctionID: 0x00|2147484688|104|8298845 msid: 2147484710 isActive: 1
junctionID: 0x00|2147484688|231|8299372 msid: 2147484711 isActive: 1
junctionID: 0x00|2147484688|232|8299958 msid: 2147484712 isActive: 1
junctionID: 0x00|2147484688|233|8300463 msid: 2147484713 isActive: 1
junctionID: 0x00|2147484688|234|8300995 msid: 2147484714 isActive: 1
junctionID: 0x00|2147484688|235|8301510 msid: 2147484715 isActive: 1
junctionID: 0x00|2147484688|236|8302069 msid: 2147484716 isActive: 1
junctionID: 0x00|2147484688|237|8302575 msid: 2147484717 isActive: 1
junctionID: 0x00|2147484688|238|8303100 msid: 2147484718 isActive: 1
junctionID: 0x00|2147484688|239|8303622 msid: 2147484719 isActive: 1
junctionID: 0x00|2147484688|240|8383273 msid: 2147484721 isActive: 1
junctionID: 0x00|2147484688|241|9611688 msid: 2147484722 isActive: 1
junctionID: 0x00|2147484692|96|528594867 msid: 2147484693 isActive: 1
junctionID: 0x00|2147484693|96|528599577 msid: 2147484694 isActive: 1
junctionID: 0x00|2147484697|97|381120 msid: 2147484698 isActive: 1
junctionID: 0x00|2147484725|97|1814642 msid: 2147484728 isActive: 1
% vldbtest dump -v
instanceID:0 host:127.0.0.1 cmd:dump
Calling vldb_dump_1
result: 0
volCount: 35
vol#0:
vsid: 5 dsid: 1029 msid: 2147484677 name: myroot
aggr: 3b97b760-d57b-11e0-99fc-00a09812efd2 type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484677 crTime: 1314917720 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#1:
vsid: 6 dsid: 1030 msid: 2147484678 name: ldap_root
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484678 crTime: 1315434919 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#2:
vsid: 6 dsid: 1051 msid: 2147484687 name: cifs2
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484687 crTime: 1318276648 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#3:
vsid: 7 dsid: 1052 msid: 2147484688 name: root_vol
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484688 crTime: 1319130596 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#4:
vsid: 6 dsid: 1053 msid: 2147484689 name: krb5
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484689 crTime: 1319139601 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#5:
vsid: 6 dsid: 1055 msid: 2147484684 name: ntfs
aggr: 3b97b760-d57b-11e0-99fc-00a09812efd2 type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484684 crTime: 1318107064 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#6:
vsid: 6 dsid: 1056 msid: 2147484692 name: ilm
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484692 crTime: 1319833132 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#7:
vsid: 6 dsid: 1057 msid: 2147484693 name: sww
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484693 crTime: 1319833194 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#8:
vsid: 6 dsid: 1058 msid: 2147484694 name: sww_hd13
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484694 crTime: 1319833241 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#9:
vsid: 7 dsid: 1059 msid: 2147484695 name: krb5
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484695 crTime: 1320181061 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#10:
vsid: 7 dsid: 1060 msid: 2147484696 name: nfskrbtst
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484696 crTime: 1324514290 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#11:
vsid: 8 dsid: 1061 msid: 2147484697 name: root_vol
aggr: 3b97b760-d57b-11e0-99fc-00a09812efd2 type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484697 crTime: 1325276702 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#12:
vsid: 8 dsid: 1062 msid: 2147484698 name: vol1
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484698 crTime: 1325276715 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#13:
vsid: 7 dsid: 1063 msid: 2147484699 name: ACL
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484699 crTime: 1326229337 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#14:
vsid: 7 dsid: 1067 msid: 2147484703 name: acl_one
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484703 crTime: 1327089856 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#15:
vsid: 7 dsid: 1068 msid: 2147484704 name: vol1
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484704 crTime: 1327091108 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#16:
vsid: 7 dsid: 1069 msid: 2147484705 name: vol2
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484705 crTime: 1327091162 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#17:
vsid: 7 dsid: 1070 msid: 2147484706 name: vol3
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484706 crTime: 1327091167 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#18:
vsid: 7 dsid: 1071 msid: 2147484707 name: vol4
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484707 crTime: 1327091173 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#19:
vsid: 7 dsid: 1072 msid: 2147484708 name: vol5
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484708 crTime: 1327091178 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#20:
vsid: 7 dsid: 1073 msid: 2147484709 name: vol6
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484709 crTime: 1327091183 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#21:
vsid: 7 dsid: 1074 msid: 2147484710 name: vol7
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484710 crTime: 1327091189 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#22:
vsid: 7 dsid: 1075 msid: 2147484711 name: vol8
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484711 crTime: 1327091194 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#23:
vsid: 7 dsid: 1076 msid: 2147484712 name: vol9
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484712 crTime: 1327091200 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#24:
vsid: 7 dsid: 1077 msid: 2147484713 name: vol10
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484713 crTime: 1327091205 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#25:
vsid: 7 dsid: 1078 msid: 2147484714 name: vol11
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484714 crTime: 1327091211 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#26:
vsid: 7 dsid: 1079 msid: 2147484715 name: vol12
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484715 crTime: 1327091216 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#27:
vsid: 7 dsid: 1080 msid: 2147484716 name: vol13
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484716 crTime: 1327091221 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#28:
vsid: 7 dsid: 1081 msid: 2147484717 name: vol14
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484717 crTime: 1327091227 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#29:
vsid: 7 dsid: 1082 msid: 2147484718 name: vol15
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484718 crTime: 1327091232 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#30:
vsid: 7 dsid: 1083 msid: 2147484719 name: vol16
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484719 crTime: 1327091237 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#31:
vsid: 7 dsid: 1085 msid: 2147484721 name: acl_two
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484721 crTime: 1327092033 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#32:
vsid: 7 dsid: 1086 msid: 2147484722 name: acl_ntfs81
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484722 crTime: 1327104318 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#33:
vsid: 12 dsid: 1089 msid: 2147484725 name: root
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484725 crTime: 1327606721 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1
vol#34:
vsid: 12 dsid: 1092 msid: 2147484728 name: test
aggr: 2d5bb6b8-d668-11e0-adeb-00a09812f23a type: 0 dataVersion: 1 distVector: 0 srcMsid: 2147484728 crTime: 1329320130 info: 0 comment: AccessType: 1 StorageType: 1 StripeType: 1


TROUBLESHOOTING VLDB: SYSTEM SHELL TOOLS


Instructor Notes
<Notes>
Student Notes
% vldbtest help
usage: vldbtest [switches] [-I instanceID] [-H server_host] command [arguments...]
Commands are:
getrecord              get data set record
modify                 modify data set record
dsidlookup             lookup volume data set with given dsid
getsnapshots           lookup snapshot data set with given dsid
getbladeinfo           get blade info with given blade UUID
getnewids              get new IDs allocated
createnodevolumeinfo   create 7-mode volume info with given volume UUID
dump                   Dump tables in the VLDB
updateaggrmap          update aggregate to dblade mapping
volnamelookup          lookup volume with given vserver and volume name
getbladelist           blade table iterator-like interface
msidlookup             lookup volume with given msid
rootlookup             lookup vserver root
junctionlookup         lookup junction msid
rjunctionlookup        reverse junction lookup
getaggrlocationbyname  get dblade UUID with given aggregate name
createmroot            create mroot data set record
deletecoralsetsnapshot delete coral set snapshot
addmembers             add members to an existing coral set
flushnbladefhcache     Flushes Nblade FileHandle Cache.

Sample of an MSID that doesn't exist:


% vldbtest msidlookup -m 2147484679
instanceID:0 host:127.0.0.1 cmd:msidlookup
Calling vldb_msidLookup_1
result: 250
VVOL attrs: none
Root Volume: 0
MSID Policy: 0
MSID Policy Attributes Count: 0
MSID Lookup Volume Count: 0
Sample of MSID that exists:
% vldbtest msidlookup -m 2147484728
instanceID:0 host:127.0.0.1 cmd:msidlookup
Calling vldb_msidLookup_1
result: 0
VVOL attrs: none
Root Volume: 0
MSID Policy: 0
MSID Policy Attributes Count: 0
MSID Lookup Volume Count: 1
setType: 3
isLeaf: 1
DSID: 1092
Access: 1
Storage: 1
Stripe: 1
DataVersion: 1
BladeID: 4c0fc069-cd09-11e0-8990-3d8225f660d2


TROUBLESHOOTING VLDB: MANIPULATING MSIDS


Instructor Notes
<Notes>
Student Notes


There are N-blade counters available in the FreeBSD sysctl tree, as well as in "stats rw_ctx", that provide information about the number of rewinds and giveups along with the reason. Small increments in these values are always expected, because operations do suspend from time to time for several valid reasons, but large increments in giveups or rewinds may indicate an underlying issue.
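One quick way to scan for these counters from the systemshell is a pattern match over the N-blade sysctl tree (a sketch; the exact counter names vary by release, so the pattern below is illustrative):
% sysctl sysvar.nblade | egrep -i 'rewind|giveup'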


TROUBLESHOOTING VLDB: LATENCY


Instructor Notes
<Notes>
Student Notes
% sysctl sysvar.nblade.ngprocess.vldb
sysvar.nblade.ngprocess.vldb.lat0: 836
sysvar.nblade.ngprocess.vldb.lat1: 0
sysvar.nblade.ngprocess.vldb.lat2: 0
sysvar.nblade.ngprocess.vldb.lat3: 0
sysvar.nblade.ngprocess.vldb.lat4: 0
sysvar.nblade.ngprocess.vldb.lat5: 0
sysvar.nblade.ngprocess.vldb.lat6: 0
sysvar.nblade.ngprocess.vldb.lat7: 0
sysvar.nblade.ngprocess.vldb.lat8: 0
sysvar.nblade.ngprocess.vldb.lat9: 0
sysvar.nblade.ngprocess.vldb.lat10: 18
sysvar.nblade.ngprocess.vldb.lat11: 0
sysvar.nblade.ngprocess.vldb.lat12: 0
sysvar.nblade.ngprocess.vldb.lat13: 0
sysvar.nblade.ngprocess.vldb.lat14: 0
sysvar.nblade.ngprocess.vldb.lat15: 3
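Each latN variable is one bucket of a latency histogram for N-blade VLDB lookups, with higher N corresponding to higher latency ranges (the exact bucket boundaries are not documented here, so treat that reading as an assumption). To total the sampled lookups across buckets (a sketch):
% sysctl sysvar.nblade.ngprocess.vldb | awk -F': ' '{sum += $2} END {print sum}'
857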


TROUBLESHOOTING VLDB: VLDB TIMEOUTS


Instructor Notes
<Notes>
Student Notes
::*> debug latency show -node local -process VLDB
For each vldb operation on node br3170c-01, the command reports the operation ID, description, mean latency (usec), maximum latency (msec), and sample count. The 30 entries displayed cover: TM_changeRole, RM_newEpochCB, QM_doWork, QM_briefWait, QM_getQuorumInfo, QM_getSiteDataList, LU_writeUnitFile, LU_beginUpdate, LU_logCommitRec, LU_postCommit, LDB_retrieveRec, LDB_readTxnChanges, resource_readLock, resource_readLockDrop, resource_writeLock, resource_writeLockDrop, DBI_addRecord, DBI_removeRecord, DBI_cursorAdvance, DBI_appCompareFunc, DBI_cursorMoveToTarget, DBI_cursorEstablishPosition, DBI_cursorNext, cb_queueCallback, cb_invokeCallback, rpc_qm_client_qmPoll, rpc_qm_server_qmPoll, rpc_tm_client_propagate, rpc_rm_client_getVersion, and rpc_rm_client_goOnline.
30 entries were displayed.


TROUBLESHOOTING VLDB: SYSTEM PERFORMANCE


Instructor Notes
<Notes>
Student Notes


TROUBLESHOOTING VLDB: ACCESSING A STALE MOUNT


Instructor Notes
<Notes>
Student Notes


STALE JUNCTIONS
Instructor Notes
<Notes>
Student Notes


FINDING STALE JUNCTIONS


Instructor Notes
<Notes>
Student Notes
This can happen if you delete an LS mirror volume after changing its state from restricted to offline.
The delete will happen without the unmount.


ELIMINATING STALE JUNCTIONS


Instructor Notes
<Notes>
Student Notes
::> vol create -vserver win2k8 -volume dummy -aggregate aggr2
::> set diag
::*> vol show -vserver vs0 -volume dummy -fields msid,dsid,junction-path
vserver volume dsid msid       junction-path
------- ------ ---- ---------- -------------
vs0     dummy  1113 2147484751 -

::*> debug smdb table modifyVolume run -volToModify 1113 -volmsid 2147484749 -fieldsToModify 2 -force true
::*> vol show -vserver vs0 -volume dummy -fields msid,dsid,junction-path
vserver volume dsid msid       junction-path
------- ------ ---- ---------- -------------
vs0     dummy  1113 2147484749 /break

::*> vol unmount -vserver vs0 -volume dummy

::*> debug smdb table junctionTable show -msid 2147484749
There are no entries matching your query.
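What makes this work: the debug smdb modifyVolume step points the dummy volume's MSID at the stale junction's MSID, so the stale junction now appears to belong to the dummy volume, and an ordinary vol unmount then removes the stale junctionTable entry along with it.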

ABOUT WEX
Instructor Notes
<Notes>
Student Notes
WAFL Explorer tool.


VOLUME EXPLORER
Mounting or accessing a corrupt volume can lead to a hung client. This command can be used to check the volume.
Instructor Notes
<Notes>
Student Notes

cluster1::*> volume explore


Usage:
[-format] {volume|inode|dir|sdir|raw|scan|indir|odjobs|snapshot|aggregate|qtree|quota|help}
                            *Output format
[[-scope] <text>]           *Scope of content to explore
{ [ -import-file <text> ]   *Import data from specified file
| [ -export-file <text> ]   *Export data to specified file
| [ -fill <text> ] }        *Fill specified region with this character
{ [ -list <text> ]          *Display fields in table format
| [ -dump <text> ]          *Display fields in value-pair format
| [ -print {true|false} ] } *Display fields in compact format
[ -set <text> ]             *Change these fields
[ -recursive [true] ]       *Recursively report content
[ -verbose [true] ]         *Report additional details
[ -find <text> ]            *Search for content matching certain values
[ -noinomap [true] ]        *Skip updating the summary inomap
[ -rawtimes [true] ]        *Report times in raw format
https://wikid.netapp.com/w/WAFL_QA/Automation/Cookbook/VolumeExplore
cluster1::*> vol explore -format indir -scope student1_nfs./
(volume explore)
parent     entry      fbn        pvbn       vvbn
---------- ---------- ---------- ---------- ----------
0          0          558567     137
node::*> volume explore -format inode -scope 1032./1m1
found 1032.64/1m1 to be inode 1032.96
inode 1032.96 generation 429177712 at location 1032@1991456b+2368:192
type 1.0, diskflags 0x02, size 1048576, blockcount 258, level 2, av-gen-num 0
umask 010777, uid 0, gid 0, xinode 0, sinode 0, nlink 1, epoch 0
ctime 04-Jan-2011 16:24:37, mtime 04-Jan-2011 16:24:37, atime 04-Jan-2011 16:26:42


ptr[0]: pvbn 21975, vvbn 2024152


ptr[1]: pvbn 21974, vvbn 2024151


nitrocfg
Instructor Notes
<Notes>
Student Notes
Tracing is difficult to demo because, for most modules, tracing is not registered in non-debug kernels.
nitrocfg configures and queries "generic" N-blade components (and a few components that didn't really have enough configurability/observability to warrant their own tools).
% nitrocfg 0 showVar
OncRpcHistCtl = 0x8a6c52e8
NfsDebug = 0x0007
OncRpcDupReqDebug = 0x0007
CPxDebug = 0x0000
SpinNpHistCtl = 0x8ca0caa8
Nfs41pNFSLayoutReturnOnClose = 0x0000
AVCacheDebug = 0x000f
AVCacheRefreshPeriod = 0x493e0
AVPolicyEnabled = 0x0001
VldbHistCtl = 0x8e9e6fa8
ReferralDebug = 0x000f
ReplayCacheDebug = 0x000f
AccessCacheDebug = 0x000f
AccessCacheMaxRefreshTime = 0x249f0
AccessCacheMinRefreshTime = 0x01f4
AccessCacheRefreshPeriod = 0x493e0
ExportChecksEnabled = 0x0001
CfVldbDebug = 0x001f
VldbCacheRefreshPeriod = 0x927c0
NitroRpcDebug = 0x0007
BsdHostIfRpcDebugEnable = 0x0000
NbladeEMSTest = 0x0000
NitroTimerDebug = 0x0000
NitroPostmanDebug = 0x0000
NitroExecContextDebug = 0x0000
NitroCIDDebug = 0x0000
NitroStreamCoreDebug = 0x0000
Nitrocfg syntax:
% nitrocfg
nitrocfg
Copyright (c) 1992-2011 NetApp, Inc.
Usage: nitrocfg instance [-s] [command {opts ...} ]
-s enables silent mode, which disables printing of retry messages
commands:
csmAddLocal              nbladeId[] dbladeId[]
csmAddLocalDceUuid       nblade_dce_uuid dblade_dce_uuid
csmAddRemote             bladeId[] ipAddr port proto
csmAddRemoteDceUuid      ipAddr port proto
csmAddClusterVif         bladeId[] ipaddr port proto
csmAddClusterVifDceUuid  dce_uuid ipaddr port proto
invalidateAccessCache    virtual_server_id ruleset_id
invalidateAVCache        policy_id
FlushMsidCache
FlushVserverCache
FlushJunctionCache
VldbCacheInvalidateMsid     msid
VldbCacheInvalidateDsid     dsid
VldbCacheInvalidateVserver  VirtualServer
VldbCacheInvalidateDblade   dblade_dce_uuid
getMsid    msid
getDsid    dsid
pmapSet    programNumber programVersion protocol port virtualServer
pmapUnset  programNumber programVersion protocol virtualServer
pmapDump   virtualServer
getVar     name
setVar     name value
showVar
setNfsOptions  VirtualServer ctxHigh ctxIdle nfsAccess V2E V3E V4.0E V4.0AclE V4.0RdDlgE V4.0WrDlgE V4FsidCE V4RlyDrop V40RefE V4.0RqOC V4IdDomain udpEnable tcpEnable spinauthEnable jukeboxEnable jukeboxCacheEnable nfsV3RequireReadAttributes ntfsSecOps nfsChown forceSpinNpReaddir traceE trigger udpXferSize tcpXferSize tcpV3ReadSize tcpV3WriteSize V4SymLinkEnable V4LeaseS V4GraceS V3FsidCE V3ConnDropE V4AclPreserveE V4.1E rquotaEnable V4.1SPE V4.1ImplIdDomain V4.1ImplIdName V4.1ImplIdDate V41pNFS V41pNFSStriped V40MigrE V41RefE V41MigrE V41AclE vStorageEnable
getNfsOptions  VirtualServer
protocol: 17 = udp, 6 = tcp
pingMsid  msid isRw
pingDsid  dsid
pingUUID  UUID
pingAllUUIDs
mapTest   interface-name
runTest   interface-index ...
cacheMsid msid epoch


EXERCISE
Instructor Notes
<Notes>
Student Notes
Please refer to your exercise guide.


EXERCISE
Instructor Notes
<Notes>
Student Notes
Please refer to your exercise guide.


MODULE SUMMARY
Instructor Notes
<Notes>
Student Notes


THANK YOU
Instructor Notes
<Notes>
Student Notes


MODULE 3: NFS VERSION 4


Instructor Notes
Student Notes


MODULE OBJECTIVES
Instructor Notes
<Notes>
Student Notes


THE MOUNT PROCESS FOR NFSV4


Instructor Notes
Student Notes


UNIX and most other systems mount local disks or partitions on directories of the root file system. NFS
exports are exported relative to root or /. Early versions of Data ONTAP had only one volume, so directories
were exported relative to root just like any other NFS server.
When disk capacities grew to the point that a single volume was no longer practical, the ability to create
multiple volumes was added.
NFS server administrators rarely make the entire server's filesystem name space available to NFS clients. More often
portions of the name space are made available via an "export" feature. In previous versions of the NFS protocol, the
root filehandle for each export is obtained through the MOUNT protocol; the client sends a string that identifies the
export of name space and the server returns the root filehandle for it. The MOUNT protocol supports an EXPORTS
procedure that will enumerate the server's exports.
NFS version 4 servers present all the exports within the framework of a single server name space. An NFS version 4
client uses LOOKUP and READDIR operations to browse seamlessly from one export to another. Portions of the
server name space that are not exported are bridged via a "pseudo filesystem" that provides a view of exported
directories only. A pseudo filesystem has a unique fsid and behaves like a normal, read only filesystem.
The ROOT filehandle is the "conceptual" root of the filesystem name space at the NFS server. The client uses or starts
with the ROOT filehandle by employing the PUTROOTFH operation. The PUTROOTFH operation instructs the server
to set the "current" filehandle to the ROOT of the server's file tree. Once this PUTROOTFH operation is used, the client
can then traverse the entirety of the server's file tree with the LOOKUP operation.
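For example, a client browsing from the root of the pseudo filesystem typically sends a COMPOUND along these lines (a sketch of the operation sequence, not exact wire output; vol1 is a placeholder export name):
PUTROOTFH | LOOKUP "vol1" | GETFH | GETATTR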


We will talk about owner and group information later in the module.


NFSV4 FEATURES
https://wikid.netapp.com/w/NFS/v4gx/FS-updated
https://wikid.netapp.com/w/NFS/v4gx/DS#v4gx_Design
Instructor Notes
Student Notes
NFSv4 introduces a major structural change to the protocol compared to earlier versions and the elimination of
ancillary protocols. In NFS version 2 (NFSv2) and NFS version 3 (NFSv3), the Mount protocol is used to obtain the
initial file handle, while file locking is supported by the Network Lock Manager (NLM) protocol. NFS version 4 (NFSv4)
is a single protocol that uses a well-defined port, which, coupled with the use of TCP, allows NFS to easily transit
firewalls to enable support for the Internet. As in WebNFS, the use of initialized file handles obviates the need for a
separate Mount protocol. Locking is fully integrated into the protocol, which is also required to enable mandatory
locking. The lease-based locking support adds significant state (and concomitant error-recovery complexity) to the
NFSv4 protocol.
Another structural difference between NFSv4 and its predecessors is the introduction of a COMPOUND remote
procedure call (RPC) procedure that allows the client to group traditional file operations into a single request to send to
the server. In NFSv2 and NFSv3, all actions are RPC procedures. NFSv4 is no longer a simple RPC-based
distributed application. In NFSv4, work is accomplished through operations. An operation is a file-system action that
forms part of a COMPOUND procedure. NFSv4 operations correspond functionally to RPC procedures in earlier
versions of NFS. The server in turn groups the operation replies into a single response. Error handling is simple on the
server: Evaluation proceeds until the first error or last operation, whereupon the server returns a reply for all evaluated
operations.
See NFS Version 4 Protocol at http://www.netapp.com/library/tr/3085.pdf for more information.


NFSV4: CONNECTION
Instructor Notes
Student Notes

A client first contacts the server by using the SETCLIENTID operation, in which it presents an opaque
structure, identifying itself to the server, together with a verifier. The opaque structure uniquely identifies a
particular client. A verifier is a unique, nonrepeating 64-bit object that is generated by the client that allows
a server to detect client reboots. On receipt of the client's identifying data, the server returns a 64-bit client
ID. The client ID is unique and does not conflict with those that were previously granted, even across server
reboots. The client ID is used in client recovery of a locking state after a server reboot. A server after a
reboot rejects a stale client ID, which forces the client to re-establish a client ID and locking state. After a client
reboot, the client must get a new client ID to use to identify itself to the server. When it does so, using the
same identity information and a different verifier, the server notes the reboot and frees all locks that were
obtained by the previous instantiation of the client.


Preliminary testing of callback functionality by means of a CB_NULL procedure determines whether callbacks can be
supported. The CB_NULL procedure checks the continuity of the callback path. We will talk about this when discussing
delegations.


OPEN is new in NFSv4 (there is only LOOKUP in NFSv3).

If, in the OPEN reply, the server returns the value OPEN4_RESULT_CONFIRM in the result flags, the client performs an OPEN_CONFIRM operation.
An OPEN which requires a confirm is never granted a delegation.
In the case that an OPEN is retransmitted and the lock_owner is being used for the first time, or the lock_owner state has been previously released by the server, the use of the OPEN_CONFIRM operation will prevent incorrect behavior. When the server observes the use of the lock_owner for the first time, it will direct the client to perform the OPEN_CONFIRM for the corresponding OPEN. This sequence establishes the use of a lock_owner and associated sequence number. Since the OPEN_CONFIRM sequence connects a new open_owner on the server with an existing open_owner on a client, the sequence number may have any value. The OPEN_CONFIRM step assures the server that the value received is the correct one.


DELEGATION
https://wikid.netapp.com/w/NFS/v4gx/Delegation
Instructor Notes
Student Notes
NFSv4 allows a server to delegate specific actions on a file to a client to enable more aggressive client caching of data
and to allow caching of the locking state. A server cedes control of file updates and the locking state to a client through
a delegation. This reduces latency by allowing the client to perform operations and cache data locally. After a client
holds a delegation, it can perform operations on files whose data was cached locally to avoid network latency and
optimize I/O. The more aggressive caching that results from delegations can be a big help in environments with the
following characteristics:
Frequent opens and closes
Frequent GETATTRs
File locking
Read-only sharing
High latency
Fast clients
A heavily loaded server with many clients
Two types of delegations exist: READ and WRITE. Delegations do not help to improve performance for all workloads.
A proper evaluation of the workload and the application behavior pattern must be done before you use delegations. For example, if multiple writers to a single file exist, then the WRITE delegation might not be a good choice, while for read-intensive workloads, the READ delegation would provide better performance.
A READ delegation can be given out to multiple clients with an OPEN for a READ as long as they have a callback
path. A client can voluntarily return the delegation, or the NetApp storage system can recall the delegation in case of
conflicting access to the file. This is done through a callback path that is established from the server to the client.


DELEGATIONS USAGE
Instructor Notes
Student Notes
The NFSv4 server in Data ONTAP will grant a read delegation in response to an OPEN for read, if no other client has the file open for write or denying read. This guarantees that no other client will be able to write to the file. If other clients open the same file for read-only access, they will be allowed to read the file. If another NFSv4 client supporting read delegations opens the file for read-only access, it will be granted a read delegation as well. For files being created, no delegation is returned on exclusive creates. This is because the client will issue a SETATTR request after the CREATE, which will cause the delegation to be recalled anyway; as an optimization, the delegation is not given out in the first place.
When using a read delegation, a client can do read-only opens and corresponding closes locally. It also doesn't need to poll the server to check modified times on the file, because no one will be allowed to modify the file. A lease is associated with a delegation. Upon lease expiration, the delegation's state goes away and any locks associated with the delegation are marked expired. If the client does not renew its lease within a certain time period (controlled by an option), these locks are revoked.
Read Delegation Granting
Granted during OPEN with share_access READ
Server may grant a read delegation if:
read delegation option is enabled, and
open does not require confirmation, and
this is not an exclusive open, and
no client has the file open for write, and
no client has file open with deny-read, and
a recall path present, and
claim type is CLAIM_NULL or CLAIM_PREVIOUS (reclaim)
Subject to recall (via CB_RECALL proc) in case of a conflicting access
Client can keep the delegation until recall or voluntary return via DELEGRETURN
Note:
RFC 3530 section 8.1.8 restricts the server from bestowing a delegation for any open which would require confirmation. It is difficult and adds complexity.
No read delegation will be given out on an exclusive open. This is because the client will issue a SETATTR after the create, which would cause the delegation to be recalled; as an optimization, the delegation is not given out in the first place.
RECALLING A READ DELEGATION
A client can voluntarily return the delegation,or the nfsv4 server can recall the delegation in case of conflicting access to the file. This is done through a callback path
established from the server to the client.
Server recalls when
OPEN request for write
OPEN request denying read
RENAME, REMOVE, SETATTR
WRITE request

When a delegation is recalled, there might be a number of opens that the client has done locally and now
needs to propagate to the server. It will do that before returning the delegation.
There are other scenarios where read delegations are recalled, e.g., dump/restore of a volume.

When a conflicting request such as an OPEN for WRITE comes in for a file that has a read
delegation, an NFSERR_DELAY/NFSERR_JUKEBOX error is returned if the request is coming
from an NFSv4 client. Client retries conflicting request after some delay.
If the request is coming over NFSv2/NFSv3/CIFS, the request is
suspended waiting for the delegation to be recalled. When that is done, the suspended request is
restarted and finally granted.
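To observe a recall in a packet trace (a sketch; the interface and host names are placeholders), capture on the client rather than filtering on port 2049 alone, because for NFSv4.0 the CB_RECALL arrives on a separate server-to-client connection to the client's callback port:
% tcpdump -i eth0 -s 0 -w deleg.pcap host filer1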


Write Delegation Granting

Write delegation gives exclusive access to a file to one client. The server will grant a write
delegation if no other client has the file open.
Once a client has a write delegation, it is guaranteed that no other client can access that file
as long as the delegation remains valid.
Granted during OPEN with share_access WRITE or BOTH
Server may grant a write delegation if:
write delegation option is enabled, and
open does not require confirmation, and
no other client has the file open, and
a recall path present, and
claim type is CLAIM_NULL or CLAIM_PREVIOUS (reclaim)
No other client can read or write
Subject to recall (via CB_RECALL proc) in case of a conflicting access
Client can keep the delegation until recall or voluntary return via DELEGRETURN
Note: RFC 3530 section 8.1.8 restricts the server from bestowing a delegation for any open which would require confirmation. It is difficult and adds complexity.
Write Delegation Recall

The way delegations work, clients should do all OPEN, WRITE, LOCK, and UNLOCK
requests locally. However, there are some clients, such as Solaris, that send LOCK and
UNLOCK requests to the server even though they might be holding a write delegation for that
file. The NetApp storage system associates such opens and locks with the delegation, and if
the delegation is returned, these opens and locks are disassociated from the delegation
state.
However, if a client decides to do locking/unlocking locally, it will have to send the lock state
over when the delegation is being returned/recalled.
Server recalls when
A non-subsumed OPEN request is received
I/O request without an opened sharelock
RENAME, REMOVE, SETATTR (Note that the server does not recall a write delegation if a SETATTR comes in with the delegation stateid, since it is coming from the client holding the delegation)
WRITE request with a stateid other than that of the delegation or an open subsumed by the delegation, or a byte-range lock gotten under such an open. The only other stateids that should exist are the special stateids and write done with those
should definitely cause a recall.
NFSERR_DELAY will be returned to the client making the conflicting request. Client retries conflicting request after some delay.
There are other scenarios where write delegations are recalled, e.g., volume offline, volume delete, and dump/restore.
Delegation Reclaim
Delegations may be reclaimed by the client upon server reboot.
Client reclaims delegation upon server reboot
OPEN with claim type set to CLAIM_PREVIOUS
Server should grant delegation
If delegation being reclaimed is not more permissive than the corresponding OPEN
If recall path is not setup, server can set the recall flag
Client should return delegation as soon as possible
Delegation Revoke
Delegations may be revoked by the server.
Server can revoke a delegation if
Client does not return delegation upon recall
When the lease expires and the last recall time is greater than the lease time
"locks break" command
SnapMirror operation


Instructor Notes
Student Notes


Instructor Notes
Student Notes
Client1 opens file f2 in mode O_RDONLY
Server grants client1 a read delegation for file f2
Client1 tries to obtain a read only byte range lock on file f2


Instructor Notes
Student Notes
The server has no knowledge of byte range read only lock on file f2 by an application running on client1 since
client1 processed the lock on locally cached contents (since it has a read delegation)
Server is only aware of the read delegation

Client2 opens the same file f2 in mode O_RDONLY


Instructor Notes
Student Notes
Server grants read delegation to client2
Client 2 opens file f2 for read/write


Instructor Notes
Student Notes
The server recalls the read delegation for f2 because client2 has opened it for read/write.
Client1 makes the server aware of opens and locks that client1 had processed locally
So the byte range lock now appears at the server side
Client2 opens file f2 for read/write


1. Client1 opens file f1 for read.
2. Server grants client1 a read delegation, stateid 0xd8b0.
3. Client1 reads from f1 using stateid 0xd8b0, i.e., the delegation stateid.
4. Even though the client application makes multiple read calls, reading only 5 characters at a time, only one READ is sent to the server, which reads all the file contents.
5. Server recalls the delegation because f1 is opened for write by Client2.
6. Client1 sends to the server the opens and locks on f1 that Client1 executed on the locally cached content.
7. Client1 returns the delegation.


Instructor Notes
Student Notes


Instructor Notes
Student Notes


1. Client1 opens file f8 for write.
2. Server grants client1 a write delegation, stateid 0x6f88.
3. Client1 writes to f8 using stateid 0x6f88, i.e., the delegation stateid (all bytes are written in one WRITE operation).
4. Client1 reads f8 using stateid 0x6f88, i.e., the delegation stateid (no OPEN for read, since it is processed locally).
5. Client1 tries to claim the current delegation for file f8, FH 0x5d621dd.
6. Server does not honor the claim, since client2 has opened the file for read.
7. Client1 returns the delegation.


CLIENT-SERVER LEASE MANAGEMENT


https://wikid.netapp.com/w/NFS/v4gx/LockingDesign#V4_Locking_Model
Instructor Notes
Student Notes
Recovery Details
In the area of recovery, however, it is desirable to document exactly the level of recovery from various client, server, and communications failures:
Client reboot
Interruption of client-server communication
nblade failure
dblade failure
overlapping nblade and dblade failures
The case of client reboot is the simplest to explain and deal with. The client will do a SETCLIENTID indicating a new instance of the same system. The nblade will free all of the client's locks.
In the case of interruption of client-server communication, the nblade will expire the client's lease after the lease time goes by without lease renewal. At this point, the nblade will make that client's
locks "soft" (i.e. revocable). If no conflicting lock occurs before communication is restored, the client's locks will be restored to revocable status and life will go on uninterrupted. If a conflicting lock
occurs, the nblade is notified and all of the client's locks will be released. Also, because of revocation, the client will be prevented from reclaiming locks if the nblade should go down
Now we will consider failure of the nblade. In actual operation, an nblade failure is liable to involve some dblades that failed along with the nblade (due to being hosted on the same box) and some dblades which did not fail and rode through the nblade failure. In the interests of simplicity, we will consider these cases separately, presenting first two pure situations in which an nblade interacts with a single dblade which either failed with the nblade or survived the nblade failure. We can then discuss the general nblade failure case.
First, we consider what is, perhaps surprisingly, the simpler case in which the nblade and the one dblade the client is using, fail together. In this case, the new nblade will return
NFS4ERR_STALE_STATE or NFS4ERR_STALE_STATEID when clients issue lock-related requests. The client will then establish a new clientid using SETCLIENTID and then do open-reclaim
and lock-reclaim operations to recover his lock state. Normal (i.e. non-reclaim) lock operations are rejected with a GRACE error until the grace period is over.
Now suppose the nblade fails and all the dblades with which the client is communicating survive. We have a similar, but not exactly the same situation. The new nblade will return
NFS4ERR_STALE_CLIENTID or NFS4ERR_STALE_STATEID when clients issue lock-related requests. The client will then establish a new clientid using SETCLIENTID. Note that in this case, no
locks (such as locks from the previous instance of the same client) will be released at this point, since the nblade doesn't know anything about the locks from a previous instance and so cannot
release them. The client will then do open-reclaim and lock-reclaim operations to recover his lock state. Note that his locks from a previous instantiation are preventing conflicting locks from
occurring so his reclaims are safe even though there is no grace period (since the dblade did not fail). Where old and new lock instances conflict the dblade will allow the substitution based on
SpinNP option flags. Once the nblade-based grace period is over any left-over locks from old client instance are released based on requests sent by the nblade. Two things to note about this sort
of grace period:
Normal lock operations can be done during this sort of grace period.
When a client reboots during this period, locks from a previous client instance will not be freed at SETCLIENTID_CONFIRM time. Instead this will wait until the grace period is over.
Now consider the general case of nblade failure in which there are a mixture of dblades which fail and those which survive. The new nblade will return NFS4ERR_STALE_STATE or
NFS4ERR_STALE_STATEID when clients issue lock-related requests. The client will then establish a new clientid using SETCLIENTID. When the client does open-reclaim and lock-reclaim, the
effect will depend on the dblade which the request addresses. Either there will be a true reclaim within a dblade grace period, or will establish a new lock to replace the one from the old nblade
instance for a non-failed dblade. Once the nblade-based grace period is over any left-over locks from old client instances on dblade which did not fail are released. Client's which only reference
dblades which have failed or those which have not will see only that class of behavior. Clients which reference a mixture may see more complicated behavior. In particular, after completing lock
reclaims, it will do non-reclaim open and lock request and they will be accepted by dblades which did not fail, even though the grace periods for some dblades may not be over. Subsequent
requests to those dblades may get a GRACE error. The nblade will deal with this by substituting DELAY in this case.
Finally, we can deal with the case of dblade failure with no associated nblade failure. In this case client will not see a failure and existing clientids and stateid will continue to be honored. Locks will
be reclaimed by the nblade on behalf of its clients. The nblade, once that recovery is done, will indicate that recovery is complete. Once all nblades do that, the grace period can be terminated,
although this will not happen if the nblade has NLM locks present, since in that case there will be a SpinNP locking client not indicating recovery-complete. If clients make locking requests before the nblade completes locking recovery, then they will get a DELAY, although this will normally be short-lived. If there are other nblades that do not complete their recovery, or there are NLM locks, clients may make normal locking requests of the dblade and get a GRACE error. Just as in the nblade failure case, the nblade will substitute DELAY.


ACCESS CONTROL LISTS (ACLS)


Instructor Notes
Student Notes
An Access Control Entry (ACE) within an ACL can be one of four types: ALLOW, DENY, AUDIT, or ALARM. Because client support is limited, AUDIT and ALARM (also called security ACL or SACL) are not currently supported in the Data ONTAP operating system. ALLOW and DENY simply mean that the ACE allows or denies the specified access to the entity that is attempting access. AUDIT means that if the entity in the ACE attempts the specified access, the attempt is logged.

DENY ACEs should be avoided whenever possible, since they can be confusing and complicated. When DENY ACEs are set, users might be denied access when they expect to be granted access. This is because the ordering of NFSv4 ACLs affects how they are evaluated.
NOTE: The more ACEs, the greater the impact the ACL has on performance.


ACE PERMISSIONS
Instructor Notes
Student Notes
For more information on NFSv4 ACLs and ACEs, refer to the following:
http://linux.die.net/man/1/nfs4_setfacl
http://linux.die.net/man/5/nfs4_acl


NFSV4 IMPLEMENTATION: CLIENT CONFIGURATION


Instructor Notes
Student Notes


ACL PRESERVATION
Instructor Notes
Student Notes


Instructor Notes
Student Notes
The NFSv4 ID domain name is a pseudo domain name that the client and server must agree upon before they can do
most of the NFSv4 operations. The NFSv4 domain name may not be equal to the NIS or Domain Name System (DNS)
domain name. It could be any string that the client and server NFSv4 implementations must understand. Unlike NFSv2
and NFSv3, NFSv4 does not ship the traditional user ID (UID) and group ID (GID) in the NFS requests and responses.
For all the NFSv4 requests to be processed correctly, you must ensure that the user-name-to-user-ID mappings are
available to the server and the client.

[-v4-id-domain <NIS or DNS domain>]: NFSv4 ID Mapping Domain: In Data ONTAP, this optional
parameter specifies the domain portion of the string form of user and group names as defined by the
NFSv4 protocol. By default, the domain name is normally taken from the NIS domain or the DNS domain
that is in use. However, the value of this parameter overrides the default behavior.
In NFSv4, the exchange of the owner and the owner's group between client and server happens in string format.
For example, user root has UID = 0 and GID = 0. When a client wants to create a file called foo, it sends an OPEN operation to the server instead of CREATE. The RPC will still carry the numeric UID and GID, but the NFS layer will carry an OPEN operation with the CREATE flag set to ON to create the file, as well as the FATTR parameter, which will contain root@domain.com as the owner and root@domain.com as the owner's group; domain.com is the NFSv4 ID domain.
On the Linux client, this domain is configured in the /etc/idmapd.conf file (/etc/nfs/default on a Solaris client) by specifying the following line:
Domain=domain.com

On ONTAP, the same domain should be specified in the -v4-id-domain option.
When the domains match, the ONTAP NFS server maps the incoming user name and
group name to a UID/GID using NSDB, which uses the sources specified in the passwd and group databases
of the vserver name-service ns-switch; otherwise, UID=65535/GID=65535
(nobody/nobody) is used.

For simplicity, suppose that files is the passwd and group source.
The unix-user table is consulted for the UID of root for the given SVM, and
the unix-group table is consulted for the GID of the root group for the given SVM. Once the numeric
UID and GID are obtained, the file foo is created, and the numeric UID/GID is stored on disk
(in the inode of file foo).

The reverse mapping is performed when the client issues the GETATTR operation.
ONTAP must fill the FATTR parameter with the unix-user name and group name, and
-v4-id-domain is appended to the string name before it is sent on the wire.
On the client side, when the domain check passes, NSDB is consulted to map the
received user name and group name to a numeric UID and GID. NSDB does the mapping
according to the /etc/nsswitch.conf file.
Because the ID domains match, it is assumed that the client and ONTAP use the same
source for this mapping,
i.e., the same NIS or LDAP server, or the entries in /etc/passwd of the client match the entries
for the SVM in the unix-user table of the cluster and the entries in /etc/group of the client
match the entries for the SVM in the unix-group table of the cluster.
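As a hedged illustration from the clustershell (the SVM name dev01 and the domain are assumptions for the
example, and field names may vary slightly by release), the ID domain can be set and the local mapping tables
inspected as follows:

cluster::> vserver nfs modify -vserver dev01 -v4-id-domain domain.com
cluster::> vserver nfs show -vserver dev01 -fields v4-id-domain
cluster::> vserver services unix-user show -vserver dev01 -user root
cluster::> vserver services unix-group show -vserver dev01 -name root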

2014 NetApp. All rights reserved.

NetApp Confidential - For Internal Use Only

180

Instructor Notes
Student Notes

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

181

PRODUCTION NFSV4 INFRASTRUCTURE


https://wikid.netapp.com/w/NFS/v4gx/NSDB#NSDB
Instructor Notes
Student Notes

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

182

PRODUCTION NFSV4 INFRASTRUCTURE


https://wikid.netapp.com/w/NFS/v4gx/NSDB#NSDB
Instructor Notes
Student Notes

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

183

NFSV4 REFERRALS
https://wikid.netapp.com/w/NFS/v4gx/Referrals
Instructor Notes
Student Notes
Data ONTAP 8.1 introduces NFSv4 referrals. When referrals are enabled in a virtual server (Vserver), Data ONTAP
8.1 provides referrals within that Vserver to NFSv4 clients. An intra-Vserver referral occurs when the cluster node that
receives the NFSv4 request refers the NFSv4 client to another LIF in the Vserver. The NFSv4 client uses this referral
to direct its access over the referred path at the target LIF from that point onward. The original cluster node issues a
referral when it determines that a LIF exists in the Vserver on the cluster node on which the data
volume resides. In other words, if a cluster node receives an NFSv4 request for a nonlocal volume, it can refer the
client to the path for that volume through a LIF local to the volume. This gives clients faster access to the data and
avoids extra traffic on the cluster interconnect.
If a volume moves to another aggregate on another node, the volume move itself is nondisruptive. However, the
referral is not updated until the NFSv4 client unmounts and remounts the file system; only then does the client learn
the new location of the volume and get directed to a LIF on the node on which the volume now resides. By default, NFSv4 referrals
are enabled on Linux clients such as Red Hat Enterprise Linux 5.4 and later releases.
Intra-Vserver only
Occurs at junction crossings
Controlled by the options -v4.0-referrals and -v4.1-referrals (enabled|disabled)
Uses NFS4ERR_MOVED and the FS_LOCATIONS attribute
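A minimal sketch of enabling and verifying referrals for an SVM from the clustershell (the SVM name dev01 is
an example; the option names are the ones listed above):

cluster::> vserver nfs modify -vserver dev01 -v4.0-referrals enabled
cluster::> vserver nfs show -vserver dev01 -fields v4.0-referrals,v4.1-referrals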

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

184

Instructor Notes
Student Notes
When a mount request is sent, the request acts as a normal NFSv4.x mount operation. However, once the DH LOOKUP call is made, the server
(NetApp cluster) responds with a GETFH status of NFS4ERR_MOVED to notify the client that the volume being accessed does not live
where the LIF being requested lives. The server then notifies the client of the IP address (via the fs_location4 value) on the
node where the data volume lives. This works regardless of whether a client is mounting via DNS name or IP. However, the client will report that it
is mounted to the IP specified rather than the IP returned to the client from the server.
The mount location looks to be at the IP address specified by the client:
[root@centos6 /]# mount | grep /mnt
10.61.92.37:/nfsvol on /mnt type nfs4 (rw,addr=10.61.92.37,clientaddr=10.61.179.164)
But the cluster shows that the connection was actually established to node1, where the data volume lives. No connection was made to node2:
cluster::> network connections active show -node node1 -service nfs*
                Vserver   Interface          Remote
CID        Ctx  Name      Name:Local Port    Host:Port            Protocol/Service
---------- ---  --------- -----------------  -------------------  ----------------
Node: node1
286571835  6    vs0       data:2049          10.61.179.164:763    TCP/nfs
cluster::> network connections active show -node node2 -service nfs*
There are no entries matching your query.
Clients might become confused about which IP address they are actually connected to, as per the mount command.
If a volume is junctioned below other volumes, the referral uses the volume being mounted as the local volume. For example:
A client wants to mount vol2
Vol2's junction is /vol1/vol2
Vol1 lives on node1; vol2 lives on node2
A mount is made to cluster:/vol1/vol2
The referral returns the IP address of a LIF that lives on node2, regardless of what IP address is returned from DNS for the hostname cluster
The mount uses the LIF local to vol2 on node2
In a mixed client environment, if any of the clients do not support referrals, the -v4.0-referrals option should not be enabled. If the option is
enabled and a client that does not support referrals receives a referral from the server, that client will be unable to access the volume and will experience
failures. See RFC 3530 for more details on referrals.

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

185

Instructor Notes
Student Notes

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

186

1. Client mount request to 10.63.26.230
2. Server 10.63.26.230 replies with NFS4ERR_MOVED and fs_locations 10.63.25.222 (the LIF on node1, where aggr1
   hosts dev01_nfsv4)
3. PUTROOTFH, GETATTR, and LOOKUP (mount processing) redirected to 10.63.25.222

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

187

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

188

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

189

Instructor Notes
Student Notes

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

190

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

191

SNAPSHOT DIRECTORIES IN NFSV4


Instructor Notes
Student Notes

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

192

EXERCISE
Instructor Notes
<Notes>
Student Notes
Please refer to your exercise guide.

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

193

REFERENCES
Instructor Notes
Student Notes

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

194

MODULE OBJECTIVES
Instructor Notes
<Notes>
Student Notes

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

195

THANK YOU
Instructor Notes
<Notes>
Student Notes

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

196

Module 4: NFS Version 4.1


Instructor Notes
Student Notes

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

197

MODULE OBJECTIVES
Instructor Notes
<Notes>
Student Notes

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

198

NFSV4.1
HTTPS://WIKID.NETAPP.COM/W/NFS/RR/V4.1FS#OVERVIEW
Instructor Notes
Student Notes

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

199

SESSIONS
HTTPS://WIKID.NETAPP.COM/W/NFS/V4.1GX/SESSIONS/DESIGNSPEC#OVERVIEW
Instructor Notes
Student Notes
Exactly once semantics (EOS) improves on NFSv4. Within the lifetime that a server retains a session's state, a client
never has a request erroneously executed multiple times, nor does any doubt exist as to whether a request was
executed.

Note: 10.63.25.222 and 10.63.25.199 are on the same node.


Mount request from Client1 (IP 10.230.236.220) to Server (IP 10.63.25.222)
Mount request from Client1 (IP 10.230.236.220) to Server (IP 10.63.25.199)
A second mount request from Client1 (IP 10.230.236.220) to Server (IP 10.63.25.199)
In this case the client attempts to use the same session ID as obtained in step 2, but the server forces it
to create a new session.
As far as trunking goes:
The NFSv4.1 spec talks about two types of trunking: client and session.
The server informs the client about trunking support through the eir_server_owner field of the
EXCHANGE_ID result. The major ID portion of eir_server_owner specifies the scope of client trunking. If
a client performs two EXCHANGE_ID operations to two different server IP addresses and gets back the
same client ID with the same major ID in each response, it can perform client trunking across these two
connections. In those EXCHANGE_ID results, the minor ID field informs the client whether it can perform session
trunking across these two connections.

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

200

Session Trunking. If the eia_clientowner argument is the same in two different EXCHANGE_ID
requests, and the eir_clientid, eir_server_owner.so_major_id, eir_server_owner.so_minor_id, and
eir_server_scope results match in both EXCHANGE_ID results, then the client is permitted to
perform session trunking. If the client has no session mapping to the tuple of eir_clientid,
eir_server_owner.so_major_id, eir_server_scope, and eir_server_owner.so_minor_id, then it
creates the session via a CREATE_SESSION operation over one of the connections, which
associates the connection to the session. If there is a session for the tuple, the client can send
BIND_CONN_TO_SESSION to associate the connection to the session.
Of course, if the client does not desire to use session trunking, it is not required to do so. It can
invoke CREATE_SESSION on the connection. This will result in client ID trunking as described
below. It can also decide to drop the connection if it does not choose to use trunking.

Client ID Trunking. If the eia_clientowner argument is the same in two different EXCHANGE_ID
requests, and the eir_clientid, eir_server_owner.so_major_id, and eir_server_scope results match
in both EXCHANGE_ID results, then the client is permitted to perform client ID trunking (regardless
of whether the eir_server_owner.so_minor_id results match). The client can associate each
connection with different sessions, where each session is associated with the same server.
The client completes the act of client ID trunking by invoking CREATE_SESSION on each connection,
using the same client ID that was returned in eir_clientid. These invocations create two sessions
and also associate each connection with its respective session. The client is free to decline to use
client ID trunking by simply dropping the connection at this point.
When doing client ID trunking, locking state is shared across sessions associated with that same
client ID. This requires the server to coordinate state across sessions.

In Data ONTAP:
We support session ID trunking to the extent of trunking all connections coming to the
same LIF. There is no support for session ID trunking across LIFs on one node. There
is also no support for client ID trunking.

2014 NetApp. All rights reserved.

NetApp Confidential - For Internal Use Only

200

NFSV4.1: CONNECTION
Instructor Notes
Student Notes

This process is similar to the NFS version 4 (NFSv4) connection method, except that the client first contacts the
server by using the EXCHANGE_ID operation.
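For example, on a Linux client (the server IP and export path are illustrative), an NFSv4.1 mount that triggers the
EXCHANGE_ID and CREATE_SESSION handshake can be requested as follows:

[root@centos6 /]# mount -t nfs -o vers=4,minorversion=1 10.63.25.222:/nfsvol /mnt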

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

201

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

202

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

203

The EXCHANGE_ID result contains a sequence number. The client must use this sequence number for the next
CREATE_SESSION operation performed for this client (if the EXCHGID4_FLAG_CONFIRMED_R flag is not set;
otherwise, the client must ignore the seqid value). The sequence number is used to provide replay protection for the
CREATE_SESSION operation.

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

204

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

205

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

206

The SEQUENCE operation is used to build Exactly Once Semantics (EOS) into the NFSv4.1 protocol. Each set of
operations issued for a session is preceded by a SEQUENCE operation. The SEQUENCE operation, if present,
must be the first op in a COMPOUND; otherwise, NFS4ERR_SEQUENCE_POS is returned to the client. The
operation request includes a session ID, a slot ID, a sequence ID, a value specifying the highest outstanding slot ID, and a
Boolean specifying whether the result of this current request should be cached. Processing of these values ensures that the
client and server are in sequence (with regard to outstanding and processed requests) and that any request is
processed by the server only once. This is a big change from previous versions of the NFS protocol, which did not provide
EOS. Previously, reply caches were used to prevent the replay of non-idempotent operations. Idempotent operations
were allowed to be executed many times over if a retransmission occurred. With EOS, any type of operation is
executed only once. A retry of an operation results in either a cached response or an error.

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

207

When processing a SEQUENCE operation, a particular slot table entry is accessed using the slot ID in the SEQUENCE
request. If the slot ID is greater than the highest slot ID in the slot table, the server returns NFS4ERR_BADSLOT for
the SEQUENCE operation. If the slot ID is valid, the status of the slot entry is checked to determine whether this is: (A) the first
request made by the client on this slot, (B) a request for a slot that has an in-progress operation, or (C) a request for a slot
that has a completed operation. In case (A), the request must be made with seqid 1; otherwise, a
MISORDERED error is returned. In case (B), ERR_DELAY is returned as an error for the SEQUENCE
operation, and both a per-session and a system-wide stat are incremented to note an in-progress hit. In case (C), the
seqid in the request is compared against the one in the slot table entry. The following rules are applied for the
sequencing check:
If sa_seqid == slot_entry_seqid, this is a retransmission, and the reply data in the slot entry is used to formulate a
response. The SEQUENCE operation results in NFS4_OK. The slot entry is unchanged with regard to sequencing,
reply data, or state. Per-session and system-wide stats are incremented noting a hit in the slot table for a retransmitted
request.
If sa_seqid < slot_entry_seqid OR sa_seqid > (slot_entry_seqid + 1), this is a misordered SEQUENCE operation. The
SEQUENCE operation results in a MISORDERED error. The slot entry is unchanged with regard to sequencing, reply
data, or state. Per-session and system-wide stats are incremented noting a misordered SEQUENCE operation.
If sa_seqid == slot_entry_seqid + 1, this is a properly sequenced request. The SEQUENCE operation results in
NFS4_OK. The slot entry seqid is incremented by one, and the entry status is marked in-progress.
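As a concrete walk-through of these rules (the numbers are made up for illustration): suppose the entry for slot 3
holds a completed operation with seqid 41. A request on slot 3 with sa_seqid 41 is a retransmission and is
answered from the cached reply; sa_seqid 42 is the properly sequenced next request and is executed, bumping
the slot seqid to 42; sa_seqid 40 or 43 results in a MISORDERED error.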

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

208

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

209

RELIABLE CALLBACKS
Instructor Notes
Student Notes
A channel is not a connection. A channel represents the direction ONC RPC requests are sent. Each session has one
or two channels: the fore channel and the backchannel. Because there are at most two channels per session, and
because each channel has a distinct purpose, channels are not assigned identifiers. The fore channel is used for
ordinary requests from the client to the server, and carries COMPOUND requests and responses. A session always
has a fore channel. The backchannel is used for callback requests from server to client, and carries CB_COMPOUND
requests and responses. Whether or not there is a backchannel is a decision made by the client; however, many
features of NFSv4.1 require a backchannel. NFSv4.1 servers must support backchannels.

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

210

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

211

PNFS
HTTPS://WIKID.NETAPP.COM/W/NFS/PNFS/FUNCTIONALSPECIFICATION#OVERVIEW
HTTPS://WIKID.NETAPP.COM/W/NFS/PNFS/ARCHITECTURESPECIFICATION
Instructor Notes
Student Notes

In pNFS, clients access a metadata server to query where the data is found. The metadata server returns
information about the data's layout and location. Clients direct requests for data access to a set of data
servers that are specified by the layout through a data-storage protocol, which may be NFSv4.1 or another
protocol. This removes I/O bottlenecks by eliminating the single point of storage access and improves large-file
performance. It also improves data management by load-balancing data across multiple machines. pNFS
requires Linux kernel 2.6.39 or higher.
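A hedged end-to-end sketch (the SVM name is an example, and the exact option and module names should be
verified for your release): pNFS is enabled per SVM on the cluster, and the Linux client needs the file-layout
module loaded before mounting with NFSv4.1:

cluster::> vserver nfs modify -vserver dev01 -v4.1 enabled -v4.1-pnfs enabled
[root@client /]# modprobe nfs_layout_nfsv41_files
[root@client /]# mount -t nfs -o vers=4,minorversion=1 10.63.26.230:/nfsvol /mnt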

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

212

PNFS ACCESS
Instructor Notes
Student Notes
In this example, the pNFS client mounts at LIF4, and node 4, which hosts LIF4, becomes the metadata server. The client
performs a LOOKUP to OPEN a file that resides in volume T, which is controlled by node 1. If a file that is requested by the
pNFS client exists in a volume residing in an aggregate that is controlled by node 4, node 4 serves the file
locally. If the file is in a volume that is in an aggregate on node 1 (or any of the other nodes), then for NFSv3 that
becomes a nonlocal access for any client. But with pNFS, the client receives the layout for the file and
subsequently accesses the file over LIF1.
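One way to confirm from the client side that layouts are actually being used (the counter format varies by
kernel, so treat this as a rough check): look for nonzero LAYOUTGET counts in the per-mount operation
statistics.

[root@client /]# grep -A1 LAYOUTGET /proc/self/mountstats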

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

213

Storage Device (NFSv4.1 definition): A storage device stores a regular file's data but leaves metadata management to the metadata server. A
storage device could be another NFSv4.1 server, an object storage device (OSD), a block device accessed over a SAN (either Fibre Channel
or iSCSI), or some other entity.
Storage Protocol: The NFSv4.1 pNFS feature has been structured to allow a variety of storage protocols to be defined and used. One example
storage protocol is NFSv4.1 itself. Other options for the storage protocol are block/volume protocols such as iSCSI and FCP, and object protocols
such as OSD over iSCSI or Fibre Channel.
Control Protocol: Used by the exported file system between the metadata server and storage devices. Specification of such protocols is
outside the scope of the NFSv4.1 protocol. Such control protocols would be used to control activities such as the allocation and deallocation of
storage, the management of state required by the storage devices to perform client access control, and, depending on the storage protocol, the
enforcement of authentication and authorization so that restrictions that would be enforced by the metadata server are also enforced by the
storage device.
Metadata: Information about a file system object, such as its name, location within the namespace, owner, ACL, and other attributes.
Metadata may also include storage location information, which will vary based on the underlying storage mechanism that is used.
Metadata Server (or Server): An NFSv4.1 server that supports the pNFS feature. A variety of architectural choices exists for the metadata server
and its use of file system information held at the server. Some servers may contain metadata only for file objects residing at the metadata server
while the file data resides on associated storage devices. Other metadata servers may hold both metadata and a varying degree of file data.
Device ID: The device ID identifies a group of storage devices.
Device Mappings: The mappings that exist between a device ID and the storage addresses that reach each of the storage devices in the group.
Data Server: For all practical purposes, equated with Device Mapping.
Layout: A layout describes the mapping of a file's data to the storage devices that hold the data.
Constituent: Also referred to as volume constituent. Basically, an entity, a group of which forms a volume. A constituent could store a stripe of a
file or a full copy of the file itself.

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

214

LAYOUTS
Layout Type: A layout belongs to a specific layout type. The layout type allows for variants to handle different storage
protocols, such as those associated with the block/volume, object, and NFSv4.1 file layout types.
Volume: A logical and manageable storage container that stores a customer's data set. A volume could be hosted on a
single physical controller or on multiple hosts, and it can store a copy or a stripe of the data.
Instructor Notes
Student Notes
In the request, the NFSv4.1 client provides the following:
The client ID, which represents the NFSv4.1 client.
The filehandle, which is the filehandle of the file on the metadata server.
The layout type.
The layout iomode, which indicates to the metadata server the client's intent for the data: either READ, or
READ and WRITE operations.
The range, which is used to detect overlapping layouts granted to clients.
In the response, the NFSv4.1 metadata server provides the following for a file layout:
The device ID, which represents the location of the data.
A file_id, which represents how the data of a file on each data server is organized and whether COMMIT
operations should be sent to the metadata server or the data server.
The first stripe index, which is the location of the first element that is to be used.
The pattern offset, which is the logical offset into the stripe location to start.
The filehandle list, which is an array of filehandles on the data servers.
Multiple layouts may be returned to the client. For more information, see RFC 5661, found at:
http://www.faqs.org/rfcs/rfc5661.html.

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

215

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

216

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

217

Layout Operations:
LAYOUTGET
LAYOUTCOMMIT
LAYOUTRETURN

NFSv4.1 standard: The LAYOUTGET operation requests a layout from the metadata server for reading or
writing the file given by the filehandle, at the byte range specified by offset and length. The Data ONTAP
metadata server, in response to a LAYOUTGET request, creates a layout state object or bumps up the
sequence number on the existing layout state object.

NFSv4.1 standard: The LAYOUTCOMMIT operation commits changes in the layout represented by the current
filehandle, client ID (derived from the session ID in the preceding SEQUENCE operation), byte range, and stateid.
Since layouts are sub-dividable, a smaller portion of a layout, retrieved via LAYOUTGET, can be committed.
Basically, it commits changes to a file, or to a byte range in a file, belonging to a volume. In the case of
Data ONTAP, the layout is of a volume, and a file is a subset of the volume.
The Data ONTAP metadata server's response to a LAYOUTCOMMIT is a no-op that only verifies that the time
attributes supplied by the client are not stale. ONTAP maintains the consistency of attributes of files on
each file system it supports; thus, the client's attributes are not honored.

NFSv4.1 standard: The LAYOUTRETURN operation returns from the client to the server one or more layouts
represented by the client ID. The Data ONTAP metadata server's response to a LAYOUTRETURN is to
clean up state on the layout state object.

Callbacks
Callbacks are generated when one of the following events occurs:
LIF failover
2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

218

LIF migrate
LIF revert
LIF delete
Node reboot
Volume failover
Volume move
Volume delete
D-blade reboot
SFO (storage failover)
Any of these events triggers invalidation of the pNFS device configuration
and the recall of the layouts granted against the corresponding device IDs. The client is expected to return
the layouts and stop accessing the data servers. If a client does not stop accessing the data
servers, those requests are fenced off.
It is expected that VLDB and VifMgr call the pNFS Device Mappings subsystem when the location of
a volume constituent or a LIF changes.
NFSv4.1 standard: The CB_LAYOUTRECALL operation is used by the server to recall layouts from
the client; as a result, the client begins the process of returning layouts via LAYOUTRETURN. The
CB_LAYOUTRECALL operation specifies one of three forms of recall processing with the value of
layoutrecall_type4. The recall is for one of the following: a specific layout of a specific file
(LAYOUTRECALL4_FILE), an entire file system ID (LAYOUTRECALL4_FSID), or all file systems
(LAYOUTRECALL4_ALL).
In Data ONTAP:
CB_LAYOUTRECALL with a layout stateid as an argument (LAYOUTRECALL4_FILE)
is implemented, but not actually invoked.
CB_LAYOUTRECALL with an FSID as an argument (LAYOUTRECALL4_FSID) is
supported.

2014 NetApp. All rights reserved.

NetApp Confidential - For Internal Use Only

218

Instructor Notes
Student Notes

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

219

Instructor Notes
Student Notes

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

220

1. Client mount request to 10.63.26.230
2. PUTROOTFH, GETATTR, and LOOKUP (mount processing) at 10.63.26.230

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

221

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

222

1. OPEN to MDS
2. SESSION

Instructor Notes
Student Notes

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

223

1. OPEN to MDS
2. SESSION

Instructor Notes
Student Notes

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

224

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

225

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

226

Instructor Notes
Student Notes

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

227

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

228

A volume move occurs while the client has the file open for write.
Before the volume move, writes are directed to the DS.
After the volume move, the DS returns NFS4ERR_STALE for write access. (Due to the volume move, all stateids
associated with the invalidated device ID are marked stale; VLDB makes an RPC call to the Nblade.)
Writes are redirected to the MDS.

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

229

DATA ONTAP SUPPORT


Instructor Notes
Student Notes

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

230

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

231

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

232

PNFS STATUS
Instructor Notes
Student Notes

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

233

NFSV4.1 IMPLEMENTATION: CLIENT CONFIGURATION


Instructor Notes
Student Notes

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

234

EXERCISE
Instructor Notes
<Notes>
Student Notes
Please refer to your exercise guide.

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

235

EXERCISE
Instructor Notes
<Notes>
Student Notes
Please refer to your exercise guide.

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

236

MODULE OBJECTIVES
Instructor Notes
<Notes>
Student Notes

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

237

REFERENCES
Instructor Notes
Student Notes

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

238

THANK YOU
Instructor Notes
<Notes>
Student Notes

2014 NetApp. All rights reserved.

NetApp Confidential - Internal Use Only

239
