
Chapter 11. Best practices
In this chapter, we document best practices for implementation, customization, and performance in your SONAS appliance.
11.1 Multi tenant introduction
Multi tenant scenarios seem tempting with SONAS due to the large amount of NAS storage
that can be shared. Tenants could be different departments or even multiple companies in
cloud scenarios.
There is no official multi tenant support in SONAS and some functions such as chargeback
can take some time to become part of the core product.
However, this document explores different ways of exploiting existing SONAS functionality to provide multi tenancy scenarios. From a customer perspective, it explains at which levels (physical, logical, authorization, staff) separation can be applied within the SONAS architecture.
It is a starting point for known scenarios and can be used as a "pick list" for customers, or as a discussion baseline for additional scenarios. The scenarios described are Windows centric in a first pass; however, NFS access can always be provided with the same user / password combination that is used for CIFS access.
11.2 Multi tenants: Requirements
The requirements that potentially can arise are multifaceted:
Security:
- (Independent) authentication servers (AD, LDAP, multiple instances)
- Authorization (view, read, write, ...)
- Audit logging (administration, sharing)
Separation of data (optional):
- Logical, physical
(Cross organizational) sharing of data (optional):
- Allow access to users from remote authentication servers
- Performance optimization for physically separated locations
Utilization / Chargeback (optional):
- Accounting to justify central IT / budget expansions
- Chargeback based on usage pattern
- Enforce restrictions based on SLA / purchased options
Enhanced availability requirements:
- Downtime impacts higher number of clients / tenants
- Financial penalties
- Reputation loss
Request handling (optional):
- Signup for new tenants
- Portfolio pick list
- Review / Approval cycles
Integration of existing infrastructure (optional):
- Existing, heterogeneous authentication servers
- Data migration
The following section outlines how various requirements can be satisfied with IBM SONAS:
Security:
- (Independent) authentication servers (AD, LDAP, multiple instances):
As of now, SONAS connects to a single trusted authentication source only, either AD or LDAP. In the following section, a 'proxy' concept is used to connect to multiple AD servers without the need to create a trusted forest.
- Authorization (view, read, write):
The NFSv4 ACLs that SONAS uses to store common permission attributes for NFS and CIFS exploitation are the baseline for consistent mapping across platforms. The NFSv4 ACL model has a rich set of permissions that can be mapped, including complex items such as inheritance and nested permissions.
- Audit logging (administration, sharing):
While in SONAS 1.3 audit logging is available for CLI / GUI commands, there is no support for audit logging on the data shares themselves. Depending on the utilization of the system and the detail of such auditing, it could become a very performance consuming layer.
Separation of data (optional):
- Logical, physical:
From a flexibility perspective, logical separation is far superior to physical separation. Dynamic allocation from a common physical hardware pool to logical entities allows (quota based) overprovisioning, as well as small step expansion. Also, the entry size is smaller because no physical boundaries (for example, 60 disk increments, physical Interface nodes) have to be considered.
(Cross organizational) sharing of data (optional):
- Allow access to users from remote authentication servers:
For cross organizational sharing of data, the ability to grant fine granular access to users from a remote site could be required. As of now, there is no concept for managing such permissions internally in SONAS.
- Performance optimization for physically separated locations:
The Active Cloud Engine introduced in the 1.3 release allows caching across WAN distances to optimize access performance for remote clients. Refer to Chapter 4, "Active Cloud Engine" on page 165, which explains the Active Cloud Engine in detail.
Utilization / Chargeback (optional):
- Accounting to justify central IT / budget expansions:
SONAS 1.3 contains strong features that allow reporting on user, group, share (if created as a fileset), and file system level. 11.3.7, "Quota usage and capacity reporting" on page 573 contains more details.
- Chargeback based on usage pattern:
SONAS 1.3 provides enhanced performance monitoring, but not at a level that allows reporting on a per user, per group, or per share basis.
- Enforce restrictions based on SLA / purchased options:
Quality of Service (QoS) on a network or disk level is not yet available in SONAS.
However, dedicating physical components to individual users can be used for mitigation. For example, dedicating a limited number of physical ports to certain servers can throttle the bandwidth that becomes available to those servers. Creating dedicated file systems / filesets on dedicated storage pools can limit the disk performance available. While those scenarios are not desirable in general (as described previously under separation of data), they can be useful in special scenarios.
Enhanced availability requirements:
- Downtime impacts higher number of clients / tenants
- Financial penalties
- Reputation loss
Request handling (optional):
- Signup for new tenants:
The SONAS GUI / CLI concept is based on a central administration team that has global permission to add shares, create snapshots, or create reports. So, in order to provide self service management or to split up reports for individual organizations, customized services or add-on products need to be considered.
Currently there are multiple streams investigating the options, such as exploiting Tivoli Usage and Accounting Manager (TUAM), Tivoli Storage Productivity Center, or the Cloud Foundation Stack (CFS) as a self service portal orchestrating the SONAS CLI in the background.
- Portfolio pick list:
Customized implementations could provide SLA based functionality, based on:
Disk tier (SAS / NL-SAS / SSD cached (later release))
Number of copies written to disks (enable data replication for certain projects only)
Optional Backup
Optional HSM
- Review / Approval cycles:
At this point in time, this would be a function of the self service portal in front of SONAS.
Integration of existing infrastructure (optional):
- Existing, heterogeneous authentication servers:
Currently, the lack of support for multiple authentication servers limits the possibilities for such integration.
- Data migration:
Migrating hundreds of TB of data into SONAS (or other systems that scale into a similar range) is an industry challenge. Data migrations (and how to hide them in a transparent way) are described in Chapter 5, "Backup and recovery, availability, and resiliency functions" on page 235 and in 11.4, "Data migration" on page 577.
11.3 SONAS configurations in multi tenant environments
In this section, we look at scenarios for multi tenant environments.
11.3.1 Shared storage, shared administration, isolated data
The first scenario uses very basic separation and makes flexible use of the global namespace that SONAS provides. The entire storage is configured within a single file system, which is separated on a logical level into multiple shares. All users are within a single Active Directory forest, so if there are multiple AD servers, they have a trust relationship between them.
Access permissions are set on a per file / directory / share level, so some users might see shareA while others might see shareB only. The configuration effort on the SONAS side is minimal: user management is done within the Active Directory server, and ACL management can be done by end users and administrators using the standard tools, such as Windows Explorer.
The scenario shown in Figure 11-1 also allows having totally independent groups using
separate shares without any overlap, so total separation from a user's perspective can be
achieved.
Figure 11-1 Example of shared storage, shared administration, isolated data
Table 11-1 describes the shared storage, shared administration, and isolated data scenario.
Table 11-1 Shared storage, administration and isolated data scenario
11.3.2 Shared storage, dedicated user administration, isolated data
In the second scenario, more separation is required. The configuration still contains all disks within a single file system; however, dedicated file sets allow setting different quotas for individual shares. To achieve clear separation of duty, separate AD servers are used to allow administration of subsets of users.
A similar configuration could potentially be achieved with nested forests; however, the configuration chosen in this sample allows very independent AD configurations: the administrator of departmentA would not even see the domain or the users from departmentB. Also, in migration scenarios where independent ADs already exist, the setup for a proxy might be easier to achieve initially.
Figure 11-2 shows how one way trusts can be used to attach to multiple AD servers. Be aware that for MS SFU / IMU, two way trusts are required for proper Kerberos authentication.
Figure 11-2 One way trusts attached to multiple AD servers
Disks: Shared; data is striped across disks for best performance. Capacity is shared across all tenants.
Quota: None.
Security: Within a share, ACLs can be applied per folder / directory. Per share, there is a configuration option to hide files from users who do not have read access ('hide unreadable'). The SONAS team is investigating options for hiding shares in a similar way.
User Management: Shared AD server; no separate user management.
Network: Shared network; no separation, no dedicated cables.
Table 11-2 describes the shared storage, dedicated user administration, and isolated data scenario from Figure 11-2.
Table 11-2 Shared storage, dedicated user administration, isolated data
In addition to user authentication, there might be additional requirements driving a logical separation within SONAS. Different user quota requirements (500 GB for departmentA, 2 TB for departmentB), or the need for independent snapshots (daily for departmentB, weekly for departmentA), could result in separate filesets. Even within single departments there could be subsets of data residing in individual filesets.
For stricter separation, for example, different organizations on the same infrastructure, it is likely that separate VLAN configurations are used on the network layer, denying the ability to trace cross organizational packets. Guaranteed network performance could drive a requirement for dedicated network adapters per organization.
Another reason driving separation into multiple file sets can be the reporting capabilities of SONAS in the 1.3 version, which are described in 11.3.7, "Quota usage and capacity reporting" on page 573.
Disks: Shared hardware; logical separation on file set level. Either one file set per tenant or file sets per project.
Quota: Quotas can be applied to multiple entities: file system, fileset, user, group.
Security: As in scenario 1.
User Management: Isolated, dedicated AD servers with independent administrators.
Network: Shared network. Optional: dedicated network ports.
11.3.3 Dedicated hardware, administration, isolated networks
The last scenario provides the most separation, even on the hardware level. It uses dedicated Interface nodes as well as dedicated disks, and therefore file systems, per tenant. The departmental organizations are clearly separated by independent Active Directory servers, as shown in Figure 11-3.
Figure 11-3 Dedicated hardware, administration, isolated networks scenario
Table 11-3 describes the environment shown in Figure 11-3.
Table 11-3 Dedicated hardware, administration, isolated networks details
Regulations might lead to a very separated environment. Dedicated disks could be grouped in a file system or fileset, ensuring data separation even on a disk level. Dedicated disks would allow increments on a per-array level, for example, 8+P+Q = 10 disks.
Going for full isolation, dedicated Interface nodes are an option; however, for redundancy reasons, a minimum of two per tenant is recommended.
Disks: Dedicated, isolated disks.
Quota: File system / fileset / user / group.
Security: ACL based.
User Management: Isolated, dedicated AD servers.
Network: Shared network. Optional: dedicated Interface nodes; dedicated network ports; VLANs (restrictions might apply).
SONAS GUI: Shared.
11.3.4 Hardware separation: Disk layer
As indicated in the sample scenarios, there are multiple layers of logical and/or physical
separation that can be used in multi tenant scenarios.
The schema in Figure 11-4 shows the layers available, from the physical disk array layer up to the CIFS / NFS share that a user is accessing.
1-n disk arrays are grouped into storage pools. On top of 1-n storage pools, file systems are created. The storage pool location of individual files is determined by the policy engine. Optional filesets (linked as a subdirectory into an existing file system) allow fine granular setting of quotas and snapshots as an optional mechanism.
Basically, each directory within a file system can be exported as a separate file share. A root directory might be shared for administrative purposes, with individual directories shared out separately to subgroups of users.
Figure 11-4 Hardware separation at the disk level
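A minimal sketch of how these layers can map onto the namespace follows; the pool, file system, fileset, and share names are illustrative, and the /ibm/<file system> mount convention is assumed:
  disk arrays   ->  storage pools 'system' and 'nlsas'
  file system       /ibm/gpfs0 (striped across the pools; file placement decided by the policy engine)
  fileset           /ibm/gpfs0/departmentA (linked as a subdirectory; own quota and snapshot scope)
  shares            /ibm/gpfs0 (administrative root share), /ibm/gpfs0/departmentA (departmental share)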
11.3.5 Windows based access and user management
In this section, we show how separate shares are created for the individual departments (as shown in Figure 11-5). Providing a domain user and domain group during initial creation ensures that ACL management with Windows Explorer can be used to set fine granular permissions for domain users.
Figure 11-5 Creation of separate shares
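A share can also be created from the SONAS CLI instead of the GUI. The following is a minimal sketch, assuming the mkexport command with a CIFS option string and an AD account as the initial owner; the exact option and parameter names can differ by release, so verify them against the CLI reference:
  mkexport departmentA /ibm/gpfs0/departmentA --cifs browseable=yes --owner "DEPARTMENTA\fileadmin"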
After the share is created, the administrator can connect to the share and set ACLs as
required as shown in Figure 11-6.
Figure 11-6 Connecting to newly created share
Using the custom option "Create a new independent file set" (see Figure 11-7) allows the use of advanced functions for the departmental / project share:
Independent snapshots on file set level
Quota management (apply and monitor) on file set level
Utilization reporting on file set level
File sets as the baseline for shares enable the use of these functions on share level; user and group based quotas and utilization reports are still applicable.
Figure 11-7 Create a new independent file set option for share
11.3.6 Cross organizational shares
Global shares are also possible, for example, by specifying a user and group from different domains so that ACL change permissions exist on both sides, or simply by granting cross domain access on an already existing share. This is true even though no direct trust exists between the domains DepartmentA and DepartmentB (see Figure 11-8).
Figure 11-8 Enable access control list on the CIFS share
11.3.7 Quota usage and capacity reporting
Setting quotas on file set (which might be individual shares), group or user level allows
reporting on a granular level. Soft limit violations can be used for reporting and alerting while
hard limit violations lead to "out of space conditions highlighted to the NAS client. The user
cannot exceed the capacity defined by quota. Figure 11-9 shows where the limits are set for
quotas.
Figure 11-9 New Quota panel
Figure 11-10 shows quotas on user and file set level. The SONAS CLI allows export of such data in a parseable format for automated scripting and external reporting.
Figure 11-10 Quotas on user and file set level
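As a minimal sketch of pulling that data from the CLI for scripting, the following assumes the lsquota command and a -Y option for colon-delimited, machine-readable output (the option name is an assumption; verify it in the CLI reference for your release):
  lsquota        # human-readable listing of file system, fileset, user, and group quotas
  lsquota -Y     # assumed colon-delimited output, suitable for parsing by external reporting scripts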
Figure 11-11 shows output from the lsquota command.
Figure 11-11 lsquota command output
11.3.8 Sample: NFS access
In order to access the same share from CIFS and NFS clients, user mapping has to be in place.
SONAS offers multiple ways to map CIFS and NFS users, for example, automated internal user mapping. However, because the UID generated by SONAS potentially mismatches the UID already used on the various NFS clients, external user mapping is recommended. Again, there are multiple ways to do that, for example, based on names when combining AD and NIS authentication. For the scope of this document, UID mapping within the Active Directory server with MS Identity Management for UNIX (IMU) is described.
With IMU properly configured, the AD tools for user configuration contain a new UNIX Attributes tab and appear as shown in Figure 11-12.
Figure 11-12 IMU tabs in Windows User management for group and user
Some prerequisites apply when using SONAS with IMU in multi AD environments:
Each group must have a valid GID
Each user must have a valid UID and GID entry
Each domain must have a separate, non overlapping UID range
Two way trust between the proxy and department domains must be in place to get valid Kerberos tickets from proxy.com. One way trust does NOT work with the IMU setup.
During SFU configuration, a separate, non overlapping UID and GID range is specified for each domain.
With a proper IMU setup on the AD server, two way trusted domains in place, and SONAS configured for AD and SFU, the SONAS system is able to identify users, look up the UID / GID, and authenticate individual users, which involves Kerberos ticket creation by proxy.com.
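On the client side, the effect of the mapping can be spot-checked by mounting an export over NFS and comparing file ownership with the IDs stored in AD; a minimal sketch (host name, path, and user are illustrative):
  mount -t nfs sonas.virtual.com:/ibm/gpfs0/departmentA /mnt/departmentA
  ls -ln /mnt/departmentA    # numeric UID/GID of files created over CIFS should match the UNIX Attributes in AD
  id userA                   # the UID/GID known to the NFS client should match as well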
User lookup
The SONAS CLI can be used to verify that users and groups from each trusted domain resolve to the UID and GID values defined in their UNIX Attributes.
User authentication (test)
Authentication of individual users from each domain can also be tested from the SONAS CLI, which verifies that Kerberos tickets are issued by proxy.com.
11.3.9 Additional considerations
This document centers on CIFS / Windows access at this point in time. SFU and Identity Management for UNIX are not yet supported with trusted domains due to known limitations. SONAS internal mapping is supported but does not allow async replication.
The role model of the SONAS GUI allows separation of duty on a transaction basis, not on an organizational basis. One can grant / deny access, grant full administrative access, or export shares only. The model does not yet provide support to grant access, for example, on file system / fileset level.
It is not yet supported to hide directories / shares, for example, on a cross departmental level. As an outcome, all shares are visible to everybody, but access is denied based on ACL enforcement.
Special considerations might apply for failover when using dedicated ports / Interface nodes. If there is a requirement for dedicated hardware, additional Interface nodes might be needed to handle all failover cases.
Integration and metrics of utilization and chargeback information might be part of a later release of this document.
11.4 Data migration
When you deploy a new SONAS infrastructure in an existing environment, you might need to
migrate files and directories from your current file servers to SONAS. File data migration in
NAS environments is quite different from block data migration that is traditionally performed
on storage subsystems. In a block environment you migrate LUNs in relatively straightforward
ways. When you migrate files in a NAS environment you have to take into account additional
aspects such as the multiple access protocols that can be used and the multiple security and
access rights mechanisms that the customer uses and how these fit in with SONAS. There is
no universal tool or method for file migration from your existing file server or NAS filer into the
SONAS system.
11.4.1 Migration challenges
The challenges in migrating file data are:
Keeping downtime to a minimum, or even achieving no downtime
Ensuring that there is no data loss or corruption
Consolidation, where multiple source file servers are migrated into one target
For completeness of our discussion, one way to avoid migration challenges is to avoid data
migration and repopulate the new environment from scratch. It is easier to do for specific
environments such as digital media or data mining but can be problematic for user files as you
cannot expect end users to start from scratch with their files.
Migrating data to new environments is complex and can require expertise that your in-house
IT staff might not have. But migrating to new storage technology can help you manage
exponential data growth and support business initiatives.
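When a tool-based copy is chosen, Windows file servers are often migrated with robocopy so that NTFS security descriptors are carried over. The following is a minimal sketch (server names, share names, and the log file are illustrative; validate the behavior on a test share before the production run):
  robocopy \\oldfiler\projects \\sonas.virtual.com\projects /MIR /COPYALL /R:1 /W:1 /LOG:projects_migration.log
The /MIR option mirrors the source tree, /COPYALL copies data, attributes, timestamps, and NTFS security (owner, ACLs, auditing), and the retry settings prevent long stalls on locked files.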
11.4.2 IBM Migration Services for Network Attached Storage Systems
IBM Migration Services for Network Attached Storage Systems - Scale Out Network
Attached Storage (SONAS) provides an efficient migration that uses experienced storage
specialists to plan, migrate, test and validate data, helping to minimize downtime and
accelerating your return on investment.
IBM Migration Services for Network Attached Storage Systems - Scale Out Network
Attached Storage (SONAS) is designed to provide simplified, efficient data migration to new
technologies, helping to more quickly realize the returns on your storage investment. Our
team of highly skilled storage specialists helps assess, plan for, migrate, test and validate the
migration of data to the SONAS system, providing a single vendor for all of your unique
requirements. Backed by experience and best practices, we use a range of tools and skills
that help minimize risk of downtime and disruptions during migration, so your in-house staff
can focus on higher-priority business initiatives.
For more information about IBM Migration Services for Network Attached Storage Systems - Scale Out Network Attached Storage (SONAS), see the IBM services website.
11.5 Setting up authentication for mixed environments
For those environments where there is both NFS and CIFS access to the exports, you need an Active Directory and SFU, or Active Directory and NIS, combination: CIFS access is by Windows clients and NFS access is by UNIX clients.
All files and directories on SONAS have a UID and GID pair. A file created by Windows is assigned an SID; this SID is converted to a UID/GID pair and stored on SONAS. A file created by UNIX, that is, by an NFS client, automatically has the UID and GID of the owner who created it.
For a plain AD environment with CIFS-only access, the UID/GID is randomly generated by SONAS. These files are not accessible to UNIX users, which is acceptable because the client expects only CIFS users to access them.
Now, if we have a scenario where the client needs NFS access and these files must be accessible to the same users/owners from UNIX, this does not work: the UNIX users have a different UID/GID combination than the one that SONAS assigned, because it was randomly generated. Unless you create UNIX users that match the UID/GID, they do not have access, and that can be a cumbersome task.
The best practice for a mixed environment is to have all the UNIX users and groups ready. Store their UIDs and GIDs in SFU or NIS. Have this setup ready before you create the exports or write any data onto the SONAS system.
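Before any data is written, it is worth verifying on a UNIX client that the IDs stored in SFU / NIS match what the clients already use; a minimal sketch (user name and values are illustrative):
  id userA
  uid=5001(userA) gid=5000(departmentA) groups=5000(departmentA)
The same UID and GID values must then be entered in the UNIX Attributes (or SFU / NIS) entry for userA, so that files written over CIFS and NFS end up with consistent ownership.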
It is also recommended that the UNIX Attributes primary group assigned be the same as the Windows primary group defined for the user. SONAS by default always uses the primary Windows group as the primary group for the user. As a result, new files and directories created by a user through CIFS are owned by the user's primary Windows group and not by the primary UNIX group.
11.6 Setting up ACLs for file systems
By default, when you create a file system, the owner is root. Hence, by default, only the root user is able to access it. It is a best practice to configure the initial ACLs for the newly created file system with inheritance. This ensures that all the files and folders created in the file system have the required ACLs.
As an administrator without root access to SONAS, you cannot run ownership or permission changing commands directly on the file system. It is recommended that you export the root of the file system as soon as it is created and make the administrator the owner. Setting the group is optional at the start.
The exported file system can then be accessed by the administrator as a NAS user. The ACLs that restrict access for other users can then be added by the administrative user.
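A minimal sketch of seeding inheritable ACLs from a Windows client with the built-in icacls tool (the drive letter, domain, and group names are illustrative):
  net use S: \\sonas.virtual.com\gpfs0root
  icacls S:\ /grant "VIRTUAL\storage-admins:(OI)(CI)F"
  icacls S:\ /grant "VIRTUAL\departmentA-users:(OI)(CI)M"
The (OI)(CI) flags make the entries inherit to new files and folders, so data created later in the file system picks up the required permissions automatically.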
The file system can also be exported with the "browsable=no" option and hence be hidden from other users.
If you do not want to export the root fileset to the users, you can remove the export after you have set the ACLs on the fileset; the ACLs stay as updated.
Important: For security reasons, a change to the owner is allowed only as long as the file system is empty. After the file system contains files/folders or linked filesets, the owner can no longer be changed.
11.7 Setting up ACLs for file sets
By default, when you create a file set, the owner is root. Hence, by default, only the root user is able to access it. This is true even if inheritance is enabled on the parent folder.
Similar to the file system creation, it is recommended that you export the file set as soon as it is created and make the administrator the owner. Setting the group is optional at the start.
The exported file set can then be accessed by the administrator as a NAS user. The ACLs that restrict or grant access for other users can then be added by the administrative user.
11.8 Setting up ACLs for exports
SONAS does not have a CLI command to set ACLs from the command line. The SONAS administrator can set ACLs for an export through the GUI. As a user/owner who has access permissions to the files/folders, you can add or remove users/groups in the ACLs.
The best practice for providing ACLs is to specify the owner at the time of creation. This way, the owner has access to the export. For CIFS, the user can access the export from a Windows system and then add new ACLs. For a pure NFS environment, the owner can mount the export and set the POSIX bits for owner/group. Also, from R1.3, you can add ACLs using the GUI.
For a mixed environment of CIFS and NFS access, it is highly recommended not to change ACLs from the NFS client accessing the export. Modifying the POSIX bits for user/group overwrites all the ACLs. If the export is then accessed through CIFS, because the NFSv4 ACLs are now overwritten, it might no longer be accessible for Windows clients such as CIFS users.
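For example, a single mode-bit change issued over NFS is enough to replace a carefully built NFSv4 ACL (paths are illustrative):
  mount -t nfs sonas.virtual.com:/ibm/gpfs0/projects /mnt/projects
  chmod 770 /mnt/projects/shared     # replaces the NFSv4 ACL with plain POSIX bits; CIFS users can lose access
If mode bits must be adjusted, re-apply the required ACLs from a Windows client afterwards.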
11.9 Netgroups for NFS
For those environments where most or all of the access to SONAS comes from NFS clients, one of the best practices is to have netgroups configured.
A netgroup is a group of hosts used to restrict mounting of NFS exports to a set of hosts and deny mounting on the rest of the hosts. SONAS supports netgroups stored in NIS. In this way, clients can create netgroups and add the hosts that are allowed to be part of the group.
An export can be created where access is granted to the netgroup. Hence, all hosts inside that group can access the export easily.
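A minimal sketch, assuming NIS-stored netgroups and a standard NFS client specification (group and host names are illustrative, and the exact place to supply the client list depends on how the export is configured):
  # NIS netgroup map entry: a name followed by (host,user,domain) triples
  depta-nfs-clients (hostA1,,) (hostA2,,) (hostA3,,)
  # NFS export options granting access to that netgroup only
  /ibm/gpfs0/departmentA @depta-nfs-clients(rw,root_squash)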
Important: For security reasons, the owner can be changed only as long as the file system is empty. After the file system contains files/folders or linked filesets, the owner can no longer be changed.
Important: For a mixed environment with CIFS and NFS ACLs, modifying the POSIX bits from NFS clients (for example, with chmod) overwrites all NFSv4 ACLs, and the data might become inaccessible to CIFS clients.
Appendix A. Policy rule syntax definitions
This appendix provides the syntax definitions and SQL expressions for the GPFS policy rules.
