EMC STORAGE DESIGN AND DATA PROTECTION FOR VMWARE ZIMBRA COLLABORATION SERVER
VMware Zimbra Collaboration Suite 7.2, VMware vSphere 5, and EMC VNX Series storage
Table of contents
Executive summary
  Business case
    Efficiently size and provision storage
    Ensure continuous availability
    Ensure nondisruptive backup and near-instantaneous recovery
  Solution overview
  Key findings
Introduction
  Purpose
  Scope
  Not in scope
  Audience
Technology overview
  Overview
  VMware Zimbra Collaboration Server
  VMware vSphere
  EMC VNX series storage
  EMC VNX5700 storage array
  EMC SnapView
  EMC Navisphere Admsnap
  EMC Replication Manager
  EMC PowerPath/VE
  EMC Virtual Storage Integrator (VSI) plug-in for vSphere
VMware ZCS architecture
  Core advantages
  Zimbra packages
    Zimbra Core
    Zimbra LDAP
    Zimbra Store (Zimbra server)
    Zimbra MTA
    Zimbra SNMP
    Zimbra Logger
    Zimbra Convertd
    Zimbra Spell
    Zimbra Proxy
    Zimbra Memcached
    Zimbra Archiving
Executive summary
Business case
A mailbox server building block represents the amount of storage and server resources required to support a
specific number of ZCS users. The amount of required resources is derived from a specific user profile type, mailbox
size, and disk requirements. Using the building block approach simplifies the design and implementation of ZCS.
Once the initial building block is designed, it can be easily reproduced to support the required number of users in
your enterprise. By using this approach, EMC customers can now create their own building blocks that are based on
their company's specific email environment requirements. This approach is very helpful when future growth is
expected because it makes email environment expansion simple and straightforward. EMC best practices involving
the building block approach for storage design have proven to be very successful in many customer implementations.
Solution overview
ZCS is a complete, open source email and collaboration platform. It features email,
contacts, calendar, documents, file sharing, tasks, and social media, plus
synchronization for desktops and devices.
VNX series storage provides a high-performance, unified storage platform with
unsurpassed simplicity and efficiency, optimized for virtual applications. With the
VNX series, you can achieve new levels of performance, protection, compliance, and
ease of management.
Instant snapshot-based backup and recovery is provided by Replication Manager and
SnapView. Replication Manager automates the creation of snapshots of ZCS mailbox
server volumes using SnapView technology.
The combination of ZCS with VNX series storage provides an optimal collaboration
infrastructure. Storage, compute, and network layers maintain high availability, while
EMC's building-block sizing approach achieves predictable performance and a
repeatable storage design.
In addition, EMC's building-block approach to sizing accelerates your deployment of
ZCS. Once deployed, the performance, management, and protection advantages of
running ZCS on VNX series storage are self-evident.
VMware HA provides uniform high availability across the entire virtualized IT
environment without the cost and complexity of failover solutions tied to either
operating systems or specific applications.
Key findings
- Full support for VMware vSphere 5.0 virtualization with all of vSphere's advanced features
- Building block based on 5,000 heavy users per mailbox server validated
- Simplified high availability with VMware High Availability (HA) clustering and Distributed Resource Scheduling (DRS)
- Simple, effective, and quick storage provisioning for VMware Zimbra content with the EMC VSI plug-in for VMware vCenter
Introduction
Purpose
The purpose of this white paper is to describe the design and validation of an
enterprise email solution, based on VMware ZCS that offers a high-performing,
efficient, and predictable storage design with full availability and protection. The
paper describes how the solution leverages the features and strengths of EMC VNX
series storage, EMC VNX SnapView, and EMC Replication Manager for snapshot-based backup and recovery, and VMware vSphere 5 for high availability.
Scope
The scope of this paper corresponds to the scope of the solution validation (build,
test, and document) activities performed by EMC engineers in an EMC solutions
laboratory.
What was built and tested is described and, where possible, recommendations and
guidelines are provided for professionals to design a similar solution.
The concepts, instructions, procedures, recommendations, and guidelines presented
in this document are thorough but not all-inclusive.
Not in scope
Audience
The target audience for this white paper is business executives, IT directors, and
infrastructure administrators who are responsible for their company's messaging
infrastructure.
The target audience also includes professional services groups, system integration
partners, and EMC teams tasked with deploying messaging systems in a customer
environment.
A high-level understanding of ZCS and VNX series storage is beneficial. Familiarity
with VMware virtualization concepts is also beneficial.
Technology overview
Overview
This section provides an overview of the primary technologies used in this solution. The tight integration of these products and technologies yields all of the benefits of the enterprise messaging solution described in this paper.
VMware Zimbra Collaboration Server
ZCS is a next-generation email, calendar, and collaboration solution that is optimized for VMware. ZCS provides an open platform designed for virtualization and portability across private and public clouds, making it simpler to manage and more cost-effective to scale. With the most innovative Web application available, ZCS boosts end-user productivity on any device or desktop, any time and any place, at dramatically lower costs compared to other providers. Versions of ZCS include a Network Edition, an Open Source Edition, and a prepackaged Virtual Appliance.
This solution uses ZCS Network Edition and focuses on ZCS mailbox server design.
VMware vSphere
VMware vSphere uses the power of virtualization to transform data centers into
simplified cloud computing infrastructures and enables IT organizations to deliver
flexible and reliable IT services. vSphere virtualizes and aggregates the underlying
physical hardware resources across multiple systems and provides pools of virtual
resources to the data center. As a cloud operating system, vSphere manages large
collections of infrastructure (such as CPUs, storage, and networking) as a seamless
and dynamic operating environment and also manages the complexity of a data
center.
EMC VNX series storage
EMC VNX series storage is powered by Intel Xeon processors, for intelligent storage
that automatically and efficiently scales in performance, while ensuring data integrity
and security. The VNX series is designed to deliver maximum performance and
scalability for mid-tier enterprises, enabling them to grow, share, and cost-effectively
manage multiprotocol file and block systems. VNX arrays incorporate the
RecoverPoint splitter, which supports unified file and block replication for local data
protection and disaster recovery.
EMC Unisphere is the central management platform for the EMC VNX series,
providing a single combined view of file and block systems, with all features and
functions available through a common interface. Unisphere is optimized for virtual
applications and provides industry-leading VMware integration, automatically
discovering virtual machines and VMware ESXi servers and providing end-to-end,
virtual-to-physical mapping.
EMC VNX5700 storage array
The EMC VNX5700 storage array used in this solution is designed to deliver maximum
performance and scalability for enterprises. It is a converged platform that replaces
EMC CLARiiON and EMC Celerra and enables organizations to dynamically grow,
share, and cost-effectively manage multiprotocol file systems and multiprotocol block
storage access. The VNX Operating Environment enables Microsoft Windows and
Linux/Unix clients to share files in multiprotocol (NFS and CIFS) environments. At the
same time, it supports iSCSI, Fibre Channel (FC), and FCoE access for high-bandwidth
and latency-sensitive block applications. For additional VNX specifications, refer to
the EMC VNX Series Unified Storage Systems Specification Sheet.
EMC SnapView
EMC SnapView provides snapshot and clone capabilities for VNX block storage:
- It allows full access to a point-in-time copy of your production data with modest impact on performance and without modifying the actual production data.
- For backup, it practically eliminates the time that production data spends offline or in hot backup mode, and it offloads the backup overhead from the production server to another server.
- It enables you to maintain a consistent replica across a set of LUNs. You do this by performing a consistent fracture, which is a fracture of more than one clone at the same time, or by starting a session in consistent mode.
- It provides instantaneous data recovery if the source LUN becomes corrupt. You can perform a recovery operation on a clone by initiating a reverse synchronization, or on a snapshot session by initiating a rollback operation.
EMC Navisphere Admsnap
The EMC Navisphere Admsnap program runs on host systems in conjunction with
SnapView running on the EMC VNX storage processors (SPs), and lets you start,
activate, deactivate, and stop SnapView sessions. All Admsnap commands are sent
to the storage system through the Fibre Channel bus. The Admsnap utility is an
executable program that you can run interactively or with a script.
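For illustration only (the session name and device paths below are hypothetical, and available options vary by Admsnap version), a scripted SnapView session might follow this pattern:
# On the production host: start a SnapView session against the store LUN.
admsnap start -s zcs_store_snap -o /dev/sdf
# On the backup host: activate the session to expose the point-in-time copy,
# then deactivate and stop it once the backup completes.
admsnap activate -s zcs_store_snap
admsnap deactivate -s zcs_store_snap -o /dev/sdg
admsnap stop -s zcs_store_snap -o /dev/sdf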
EMC Replication Manager
EMC Replication Manager automates the creation and management of application-consistent replicas, using SnapView technology on the VNX array to create snapshots of ZCS mailbox server volumes.
EMC PowerPath/VE
EMC PowerPath/VE provides intelligent, high-performance path management with
path failover and load balancing optimized for EMC and selected third-party storage
systems. PowerPath/VE supports multiple paths between a vSphere host and an
external storage device. Having multiple paths enables the vSphere host to access a
storage device even if a specific path is unavailable. Multiple paths can also share
the I/O traffic to a storage device. PowerPath/VE is particularly beneficial in highly
available environments because it can prevent operational interruptions and
downtime. The PowerPath/VE path failover capability avoids host failure by
maintaining uninterrupted application support on the host in the event of a path
failure (if another path is available).
PowerPath/VE works with VMware ESX/ESXi as a multipathing plug-in (MPP) that
provides path management to hosts. It is installed as a kernel module on the vSphere
host. It plugs into the vSphere I/O stack framework to provide the advanced
multipathing capabilities of PowerPath/VE, including dynamic load balancing and automatic failover, to the vSphere hosts.
For this solution, PowerPath/VE is installed on all ESXi hosts that house ZCS virtual
machines.
EMC Virtual Storage Integrator (VSI) plug-in for vSphere
EMC Virtual Storage Integrator (VSI) for VMware vSphere is a free plug-in for VMware
vSphere Client that provides a single management interface for managing EMC
storage within the vSphere environment. Features can be added and removed from
VSI independently, providing flexibility for customizing VSI user environments.
Features are managed by using the VSI Feature Manager. VSI provides a unified user
experience, allowing each of the features to be updated independently and new
features to be introduced rapidly in response to changing customer requirements.
VMware ZCS architecture
Core advantages
ZCS offers the following core advantages:
- Uses industry-standard open protocols: SMTP, LMTP, SOAP, XML, IMAP, POP3
- Each mailbox server can be scaled horizontally by adding more data stores
- Each mailbox server can be scaled vertically by adding more CPU and memory resources to the virtual machines and by using advanced storage array technologies such as thin provisioning
- Browser-based client interface: the Zimbra Web Client gives users easy access to all the ZCS features
Zimbra packages
VMware ZCS includes the following application packages. For more information about
each package, visit http://www.zimbra.com.
Zimbra Core
The Zimbra Core package includes the libraries, utilities, monitoring tools, and basic
configuration files.
Zimbra LDAP
ZCS uses the OpenLDAP software, an open source LDAP directory server. User
authentication is provided through OpenLDAP. Each account on the Zimbra Store
server has a unique mailbox ID that is the primary point of reference to identify the
account. The OpenLDAP schema has been customized for ZCS.
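As a hypothetical illustration (the hostname and account are placeholders; the bind DN shown is the standard Zimbra LDAP administrator entry), the mailbox ID for an account can be retrieved with an ordinary LDAP query:
# Look up the unique Zimbra mailbox ID (zimbraId) for one account.
ldapsearch -x -H ldap://ldap01.example.com \
  -D "uid=zimbra,cn=admins,cn=zimbra" -W \
  -b "" "(zimbraMailDeliveryAddress=user1@example.com)" zimbraId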
Note: This solution focuses on the performance of ZCS with the most popular ZCS web client protocol, SOAP.
Zimbra Store (Zimbra server)
- Message store: The message store is where all email messages and file attachments reside.
Zimbra MTA
Postfix is the open source mail transfer agent (MTA) that receives email via SMTP and
routes each message to the appropriate Zimbra mailbox server using Local Mail
Transfer Protocol (LMTP). The Zimbra MTA also includes the anti-virus and anti-spam
components.
Zimbra SNMP
Installing the Zimbra SNMP package is optional. If you choose to install Zimbra SNMP
for monitoring, run the package on every Zimbra server.
Zimbra Logger
Installing the Zimbra Logger package is optional. It can be installed on one mailbox
server. The Zimbra logger installs tools for syslog aggregation and reporting. If you do
not install the logger, the server statistics section of the administration console will
not display.
Zimbra Convertd
The Zimbra Convertd package is installed on the Zimbra Store server. Only one
Zimbra Convertd package needs to be present in the ZCS environment.
Zimbra Spell
Installing the Zimbra Spell package is optional. Aspell is the open source spell
checker used on the Zimbra Web Client. When Zimbra Spell is installed, the Zimbra
Apache package is also installed.
Zimbra Proxy
Installing the Zimbra Proxy is optional. Use of an IMAP/POP proxy server allows mail
retrieval for a domain to be split across multiple Zimbra servers on a per user basis.
The Zimbra Proxy package can be installed with the Zimbra LDAP, the Zimbra MTA,
the Zimbra mailbox server, or on its own server.
Zimbra Memcached
Memcached is a separate package from Zimbra Proxy and is automatically selected
when the Zimbra Proxy package is installed. One server must run Zimbra Memcached
when the proxy is in use. All installed Zimbra proxies can use a single memcached
server.
Zimbra Archiving
The Zimbra Archiving and Discovery feature is an optional feature for Zimbra Network
Edition. Archiving and Discovery offers the ability to store and search all messages
that were delivered to or sent by Zimbra. This package includes the cross-mailbox
search function, which can be used for both live and archive mailbox searches. Using
Archiving and Discovery can trigger additional mailbox license usage.
- Log files: Each component in ZCS has log files. Local logs are located in /opt/zimbra/log.
Additional information about mailbox server design is provided in the Storage design
considerations for Zimbra mailbox servers section.
The solution was validated with the ZCS mail user profile characteristics presented in
Table 1.
Table 1. ZCS mail user profile characteristics
Parameter                  Value
Total users                10,000
Users per mailbox server   5,000
Mailbox size               500 MB
User type                  Heavy enterprise
Message load               21 received/hour/user, 7 sent/hour/user (224 messages/user/8-hour day)
Message size               124 KB average message size; 80% with 25 KB message body; 20% with 20 KB message body and 500 KB attachment
Concurrency                100%
Architecture overview
To validate the performance and functionality of ZCS on EMC VNX series storage, we
configured two VMware vSphere ESXi servers (nodes) on a VMware vSphere
hypervisor platform and deployed multiple ZCS roles on each ESXi node. Two nodes
were configured for VMware high availability (HA) and VMware Distributed Resource
Scheduling (DRS) to achieve high availability and balanced performance. We
configured each ESXi host with sufficient resources to support an entire 10,000-user
ZCS environment. In the event of a vSphere cluster node failure, or during vSphere
cluster node maintenance, all virtual machines on the affected node were configured
to move automatically to the still-functioning node.
Note: SOAP (Simple Object Access Protocol) is an XML-based messaging protocol used for sending requests for Web services. The Zimbra servers use SOAP for receiving and processing requests, which can come from Zimbra command line tools or Zimbra user interfaces.
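To make the protocol concrete, the following is a minimal sketch of a SOAP request sent to the standard Zimbra endpoint (hostname and credentials are placeholders):
# Authenticate to a Zimbra mailbox server over SOAP.
curl -k https://zmbx01.example.com/service/soap -d '
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <AuthRequest xmlns="urn:zimbraAccount">
      <account by="name">user1@example.com</account>
      <password>changeme</password>
    </AuthRequest>
  </soap:Body>
</soap:Envelope>'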
Figure 1. Architecture diagram
Hardware components
Table 2. Hardware components
Item               Units   Description
Storage platform
Fabric switch
vCenter server
Disks              37
Software components
Table 3. Software components
Item                    Description
VMware ZCS
Multipathing software   Version 5.2
vCenter Server          Version 5.2
Navisphere Admsnap      Version 2.30
Once all ZCS virtual machines were configured and operational, we distributed them
across two vSphere nodes and defined two DRS affinity rules to keep specific sets of
Zimbra virtual machines (each virtual machine had a different messaging role) on
different nodes. The VMware DRS load-balancing automation level was set to
Automatic with a Normal (default) threshold.
We defined Virtual Machines to Hosts (VM-Host)-type affinity rules. This rule type
was introduced in vSphere 4.1 for DRS clusters to augment the existing anti-affinity
rules, which are now known as VM-VM rules.
VM-Host affinity rules enable you to stipulate that a set of virtual machines either
should or must run on specific nodes within a cluster. Unlike a VM-VM rule, which
specifies anti-affinity among specific virtual machines, a VM-Host rule specifies an
affinity relationship between a set of virtual machines and a set of cluster nodes. A
VM-Host rule has either a Required (must) attribute or a Preferential (should)
attribute.
Mandatory rules apply to non-DRS operations even in a DRS cluster, such as manual
power-on operations, manual vMotion operations, and VMware HA host failover
events.
Because DRS honors VM-Host Preferential (should) rules during load balancing
operations and node maintenance operations, we chose VM-Host rules (instead of
VM-VM rules) to ensure automatic failback of virtual machines to the original node
following node maintenance.
We defined two VM-Host Preferential rules to stipulate that nodes Host 1 (R900-11)
and Host 2 (R900-12) should each host a specific Mailbox Server virtual machine,
MTA Server virtual machine, and LDAP Server virtual machine, as presented in
Table 4.
Table 4. VM-Host Preferential affinity rules
This rule   Should keep
Rule 1      A Mailbox Server, MTA Server, and LDAP Server virtual machine on Host 1 (R900-11)
Rule 2      A Mailbox Server, MTA Server, and LDAP Server virtual machine on Host 2 (R900-12)
Figure 2 shows the DRS VM-Host affinity rules defined for this solution.
Figure 2. DRS VM-Host affinity rules defined for this solution
One of the methods that can be used to simplify the sizing and configuration of large amounts of storage on EMC VNX series storage arrays for use with ZCS is to define a unit of measure: a mailbox server building block.
A mailbox server building block represents the amount of storage and server
resources required to support a specific number of ZCS users. The amount of required
resources is derived from a specific user profile type, mailbox size, and disk
requirements. Using the building block approach simplifies the design and
implementation of ZCS.
Once the initial building block is designed, it can be easily reproduced to support the
required number of users in your enterprise. By using this approach, EMC customers
can now create their own building blocks that are based on their company's specific
email environment requirements. This approach is very helpful when future growth is
expected because it makes email environment expansion simple and straightforward.
EMC best practices involving the building block approach for storage design have
proven to be very successful in many customer implementations.
Building block design
Phase 1: Collect requirements
Phase 1 involves collecting the requirements of the target email environment, including answers to questions such as:
- What is the total number of incoming and outgoing messages per day?
- What percentage of total users will be using Webmail vs. POP vs. IMAP vs. Mobile vs. Outlook?
Zimbra professional services can help with sizing and will use the answers from these
questions as input into a Zimbra deployment worksheet. Figure 27 in
Appendix B: Zimbra deployment worksheet shows an example of the sizing tab in a
Zimbra deployment worksheet.
Note: The ZCS deployment worksheet is used only by Zimbra professional services.
Phase 2: Design the mailbox server building block
Phase 2 involves designing a mailbox server building block that satisfies the
requirements collected in Phase 1. The building block includes storage and server
resources.
Prior to designing your building block, use the following resources to ensure that you
have a thorough understanding of the relevant concepts and building block
components:
- EMC VNX series storage and VMware vSphere design best practices
Storage must be designed to work optimally with the ZCS application. To do this, ZCS
I/O characteristics, I/O patterns, and read/write ratios must be identified. Based on
testing performed to validate this solution (test results are presented in the
Performance validation and test results section), we identified the types and sizes of
I/O generated by ZCS. We observed that ZCS mailbox servers generate primarily
small, 4 KB to 32 KB random I/Os to the database. This observation enabled us to
size the test environment accurately for optimal performance and the best user
experience.
ZCS mailbox servers are read/write intensive. Even with appropriately configured RAM on the relevant virtual machines, the message store generates a large amount of disk activity. For the ZCS mailbox server, the majority of I/O activity is generated by these three sources, in decreasing order of load generated:
- The MySQL instance that runs on each message store server and stores metadata (folders, tags, flags, and so on)
- The Lucene index maintained on each message store server
- The blob message stores that hold message bodies and file attachments
MySQL, the Lucene index, and blob stores generate random I/O and, therefore, should be serviced by a fast disk subsystem. Blob message stores generate less I/O than MySQL or the Lucene index but require more capacity, thus blob stores can be deployed on slower disks in some cases.
Note: The target user profile and other customer-specific requirements directly affect
storage design recommendations for ZCS. Work closely with EMC and VMware
presales support to receive appropriate guidance.
ZCS mailbox server partitions
In a multi-server ZCS deployment, eight major mailbox server partitions are typically planned for use. However, the use of VNX-specific enterprise array features can reduce the number of required partitions. Table 5 provides information about each ZCS mailbox server partition and its function.
Table 5. ZCS mailbox server partitions
Partition              Function
/opt/zimbra            Zimbra installation root (binaries, configuration, local logs)
/opt/zimbra/db         MySQL database (message metadata)
/opt/zimbra/store      Primary message store (message bodies and file attachments)
/opt/zimbra/index      Lucene search index
/opt/zimbra/redolog    Redo logs (server transaction logs)
/opt/zimbra/logs       Log files
/opt/zimbra/backup     Native Zimbra backups (zmbackup)
/opt/zimbra/store02    Secondary message store (HSM)
Mount points
For each LUN created on the VNX storage, we created a filesystem on the LUN's partition and then configured a mount point for it. Each LUN configured for a mailbox server must provide enough performance and capacity based on user requirements and recommendations from the ZCS deployment worksheet.
For this solution we configured five LUNs and mounted them to the first five partitions
listed in Table 5.
Note: If native Zimbra backups (zmbackup) are used, you need a sixth LUN and a
mount point (/opt/zimbra/backup). If Zimbra HSM is used, you need to create a
seventh LUN and a mount point for the secondary store (/opt/zimbra/store02).
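As a minimal sketch of the provisioning steps for one of these LUNs (device name and label are illustrative; the formatting parameters we used are listed in the Filesystem formatting section):
# Format the store LUN's partition as ext3 with a label, then mount it.
mke2fs -j -L zimbra_store /dev/sdf1
mkdir -p /opt/zimbra/store
mount LABEL=zimbra_store /opt/zimbra/store
# Persist the mount across reboots.
echo "LABEL=zimbra_store /opt/zimbra/store ext3 defaults 0 0" >> /etc/fstab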
Message stores
A Zimbra Message Store holds all email messages for a single mailbox server,
including the message bodies and any file attachments. Messages are stored in MIME
format. The message store is located on each Zimbra server under /opt/zimbra/store.
Each mailbox has a dedicated directory named after its internal Zimbra mailbox ID.
Mailbox IDs are unique for each server; IDs are not unique system-wide.
Zimbra is designed with Single Copy Storage (known as Single Instance Storage
(SIS) in Microsoft Exchange versions prior to Exchange 2010) that allows messages
with multiple recipients to be stored only once in the filesystem. On Unix systems, the
mailbox directory for each user contains a hard link to the actual file.
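For example (the message path shown is hypothetical), the hard-link count reported by stat shows how many mailbox directories reference a shared message file:
# A link count greater than 1 means the blob is shared by multiple recipients.
stat -c '%h %n' /opt/zimbra/store/0/42/msg/0/257-1516.msg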
For this solution, we deployed one primary message store for each mailbox server.
Redo logs
Each Zimbra server generates redo logs that contain every transaction processed by
that server. If an unexpected shutdown occurs on the server, the redo logs are used
for the following:
- During restore, to recover data written since the last full backup in the event of a server failure
When the current redo log file size reaches 100 MB, it rolls over to an archive
directory. At that point, the server starts a new redo log. All uncommitted transactions
from the previous redo log are preserved. In the case of a crash, when the server
restarts, the current redo log and the archived logs are read to reapply any
uncommitted transactions. When an incremental backup is run, the redo logs are
moved from the archive to the backup directory.
For this solution, we placed redo logs on 10k rpm SAS disks with RAID 1/0 protection.
ZCS mailbox server building block specifications
Having identified the requirements for storage (I/O, capacity, and bandwidth) and servers (CPU, memory), we designed a ZCS mailbox server building block with the specifications shown in Table 6. The building block meets the requirements presented in Table 1 for 5,000 heavy users with a 500 MB mailbox size. Each mailbox server ran in a virtual machine.
Table 6. Mailbox server building block specifications
Users per building block   CPUs per virtual machine   Memory per virtual machine   Disks
5,000                                                 16 GB                        16 disks
All Zimbra virtual machine VMFS data stores housing the Red Hat Enterprise Linux
Server 5.5 operating system were configured from a RAID group made up of five 2 TB
7.2k rpm NL-SAS drives configured with RAID 5 (4+1) protection.
Scaling to 10,000 users
To scale the configuration to 10,000 users (two building blocks/mailbox servers), the
two SAS disks housing the Zimbra redo logs and Zimbra root partitions can be shared
between the two mailbox servers (there is sufficient capacity for this). The second
building block, therefore, requires only 16 disks (two fewer than the first building
block). This configuration was tested and caused no performance degradation (see
the Performance validation and test results section).
Mailbox server disk layout
We configured six LUNs on the VNX array and configured five mount points on each
mailbox server. We did not configure a backup volume because we used VNX
SnapView snapshots to back up ZCS content.
Figure 3 shows the LUN configuration on the VNX5700 storage array for mailbox
server storage as viewed with EMC Unisphere:
Figure 3. LUN configuration on the VNX5700 storage array for mailbox server storage, as viewed with EMC Unisphere
Table 7 shows the devices, mount points, and disks configured for one ZCS mailbox
server in this solution. Volume sizes were based on the requirements presented in
Table 1 in conjunction with recommendations generated by the Zimbra deployment
worksheet.
Table 7. Linux volume configuration and mount points for one mailbox server
Filesystem   Mounted on            LUN size
/dev/sdb1    /opt/zimbra           40 GB
/dev/sdd1    /opt/zimbra/db        50 GB
/dev/sde1    /opt/zimbra/index     500 GB
/dev/sdc1    /opt/zimbra/redolog   90 GB
/dev/sdf1    /opt/zimbra/store     3.5 TB
Figure 4 shows the same device, mount point, and disk configuration as viewed with
a CLI command issued on the mailbox server virtual machine running Red Hat Linux
5.5.
Figure 4. Device, mount point, and disk configuration as viewed from the mailbox server CLI
Table 8 presents the configuration profile of ZCS mailbox server virtual machines
deployed on vSphere.
Table 8. ZCS mailbox server virtual machine configuration profile
Role              vCPUs   Memory           Disks (VMDKs)            Disk type
Mailbox server            16 GB reserved   Guest OS disk            VMFS
                                           40 GB: /opt/zimbra       RDM/P
                                           50 GB: MySQL DB          RDM/P
                                           500 GB: index            RDM/P
                                           90 GB: redo log          RDM/P
                                           3.5 TB: message store    RDM/P
Figure 5. VSI plug-in for VMware vSphere visible in vCenter under Solutions and Applications
Figure 6.
Storage provisioning process with the VSI plug-in
1. (Figure 7)
2. (Figure 8)
3. We then selected one of the storage pools previously configured for this solution in order to create a LUN. (Figure 9)
4. (Figure 10)
5. We then selected a volume type of RDM and specified the LUN properties. (Figure 11)
6. Finally, we clicked Finish and VSI created the requested LUN on the VNX array.
Notes:
- If you choose VMFS Datastore instead of RDM Volume, the data store name you specify becomes the user-friendly name of the LUN when viewed with Unisphere.
- VSI is aware of the EMC FAST VP auto-tiering policy features. These features can be reviewed by clicking the Advanced button.
- When provisioning storage with VSI, storage access policies can be set on an individual user basis.
For more information about the VSI plug-in for VMware vCenter, visit the EMC
Community Network.
As shown in Figure 12, the EMC Storage Viewer plug-in provides additional visibility into VNX storage from vCenter. By selecting the EMC VSI tab and then a data store or an RDM LUN, you can view details about that volume. In Figure 12, Hard Disk 6 is selected for mailbox server ZMBX04, displaying details about the VNX storage pool in which the LUN resides.
Figure 12.
VNX storage configuration recommendations
Follow these recommendations when configuring VNX storage in the context of this or a similar solution:
- During the configuration of storage pools, RAID groups, and LUNs for Zimbra mailbox servers, consider I/O patterns when defining partitions. Separate random workloads from sequential workloads on different disks.
- Create dedicated storage pools or RAID groups for ZCS. If your array supports other applications, use different storage pools or RAID groups for those applications.
- Size storage not only for capacity but also for performance.
Figure 13.
ESXi host configuration recommendations
Follow these recommendations when configuring vSphere ESXi hosts for this or a similar solution:
- Install EMC PowerPath/VE to improve performance and maximize host-to-storage I/O throughput
- Configure each ESXi host with at least two HBAs connected to different fabrics for best performance and high availability
- Make the number of CPU cores equivalent to (or exceeding) the number of virtual CPUs you plan to run concurrently (within the set of virtual machines on the ESXi host)
- Make physical memory equivalent to the sum of the memory used by the individual virtual machines plus additional memory required for the ESXi host
- Plan for host failures within a vSphere cluster so that a single host's resources can sustain the environment with minimum required performance
- Install the EMC VSI plug-in, which provides the vSphere administrator with a view of and access to all EMC storage
Guest virtual machine configuration recommendations
Follow these recommendations when configuring guest virtual machines for this or a similar solution:
- Install the latest version of VMware Tools on guest virtual machines to optimize performance and enhance the user experience
- Use multiple SCSI controllers when creating VMDKs for ZCS mailbox server virtual machines. Use separate SCSI bus IDs to spread the I/O load (sequential vs. random) across different SCSI buses.
- Allocate enough space for a swap file. ESXi Server creates a swap file for each virtual machine that is equal in size to the difference between the virtual machine's configured memory size and its memory reservation. The default is to place the swap file on the same data store as the guest operating system.
Filesystem formatting
The following parameters were used to format the Zimbra filesystems:
mke2fs -j -L <label_name> -O dir_index -m 2 -i 10240 -J size=400 -b 4096 -R stride=16
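For reference, a commented reading of these parameters (standard mke2fs semantics, not additional tuning advice):
# -j            create an ext3 journal
# -L <label>    set the volume label
# -O dir_index  enable hashed b-tree directory lookups
# -m 2          reserve 2% of blocks for root
# -i 10240      create one inode per 10,240 bytes of capacity
# -J size=400   use a 400 MB journal
# -b 4096       use a 4 KB block size
# -R stride=16  align allocation to a 16-block RAID stripe unit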
Once the Replication Manager agent software is installed, Replication Manager discovers the ZCS virtual machines, as shown in Figure 14.
Figure 14. EMC Replication Manager: EMC software installed on the ZCS mailbox server virtual machine
Special consideration for Linux filesystems
To reduce snapshot restore time, it might be necessary to adjust either of two Linux
filesystem configuration parameters. Linux sets a default value of 30 for the
Maximum Mount Count parameter. When this value is reached, Linux performs a
filesystem check on the disk, which can cause significant mount delay on a large
filesystem. For this solution, the message store volume was 3.5 TB. There are two
ways to avoid this filesystem check on the mount. The first way is to reset the Mount
Count value to 1. The second way is to increase the Maximum mount count value.
For this solution we used the tune2fs utility to change the Mount Count value to 1.
To set/reset Mount count to 1, run this command:
tune2fs -C 1 /dev/sdf1
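To instead raise the Maximum Mount Count (the second approach described above), the same utility takes a lowercase -c; for example, assuming a threshold of 100 mounts is acceptable in your environment:
tune2fs -c 100 /dev/sdf1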
Note the difference in the uppercase C versus the lowercase c in each of the two
commands.
After the values are reset, you can use the CLI to verify the new values.
Note: Zimbra administrators can monitor snapshot restore time and, if necessary,
change these settings as required. Alternatively, a custom script can be created and
then run during Replication Manager backup jobs to monitor and adjust settings
automatically.
Backup process with Replication Manager
The Replication Manager Linux File System Agent replicates databases and software
applications that store their data in supported Linux filesystems. The filesystem
remains mounted for replication. The agent flushes the filesystem I/O buffer
immediately before creating a replica to ensure all changes have been synchronized
to disk. Pre- and post-replication scripts can be implemented to support other
applications and databases, besides those specifically supported by Replication
Manager. The scripts can quiesce data to ensure consistency before a split by:
- Putting the database or application into (and out of) an Online Backup mode, if such a mode is available
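A minimal sketch of such a pre-replication script (ZCS has no explicit online backup mode, so this hypothetical example instead pauses mail delivery and flushes buffers; a matching post-replication script would run zmmtactl start):
#!/bin/sh
# Hypothetical pre-replication quiesce: pause MTA delivery and flush
# filesystem buffers before the consistent split.
su - zimbra -c "zmmtactl stop"
sync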
Note: In order to keep your ZCS environment fully consistent, and to simplify the recovery process, mailbox server backups should be performed together with an LDAP server backup. By doing this, you avoid out-of-sync conditions where users' properties might be changed while their mailboxes are being backed up.
Figure 15.
As shown in Figure 16, the Use Consistent Split option was selected to take
advantage of VNX consistent-split technology. Pre- and post-replication scripts were
implemented to ensure the consistency of ZCS data.
Figure 16. Use Consistent Split option
Recovery process
The Replication Manager agent automatically unmounts specified Zimbra file systems
before it starts a replica restore. It also automatically mounts these Zimbra
filesystems after the restore completes. To make the automatic unmount action
successful, Zimbra services must be stopped either manually or through an
application callout script.
Application callout scripts allow you to add customized actions to Replication
Manager at several points throughout the replication, mounting, and restore
processes. Replication Manager calls these executable scripts based on the names of
the scripts and their locations in the Replication Manager host.
The callout scripts must be located in the same directory as irccd on the host. The
default location in the Linux environment is /opt/emc/rm/client/bin/.
The naming convention for callout scripts is as follows:
IR_CALLOUT_<application_set_name>_<job_name>_<n>
<application_set_name> is the name of the application set that contains the job that
runs the script.
<job_name> is the name of the job that runs the script within the application set.
<n> is a number that determines when the script runs, as shown in Table 9.
Table 9. Callout script numbers
Number   When the script runs
100
110      At the beginning of the failover process (for Celerra iSCSI or VNXe iSCSI replica promotion)
200
300      Before the application recovery process starts; the 300 callout is valid only for mount operations in which some recovery occurs before filesystems are made visible
400
500      After the network files have been copied but before the database is recovered; use the 500 callout to make changes to the Oracle initialization file before the application starts
510
550
600
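As a hypothetical example (the application set name, job name, and callout number are placeholders; pick the number from Table 9 that matches the point at which Zimbra services must be stopped in your jobs), a callout script could look like this:
#!/bin/sh
# /opt/emc/rm/client/bin/IR_CALLOUT_ZimbraMailbox_RestoreJob_100
# Stop Zimbra services so Replication Manager can unmount the Zimbra
# filesystems before restoring a replica.
su - zimbra -c "zmcontrol stop"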
Figure 17 shows the Replication Manager GUI used to restore a mailbox server
replica.
Figure 17. Restoring a mailbox server replica with the Replication Manager GUI
Performance results for ZCS backup and restore with Replication Manager and VNX
SnapView snapshots, and recommendations for calculating necessary storage space
for snapshots, are presented in the Performance validation and test results section.
Testing methods
To validate this solution, we used the Soapgen load generation testing utility.
Soapgen was developed and is maintained by the Zimbra performance lab. It is a
comprehensive and flexible mail server load generator designed to provide
functionality similar to Microsoft Exchange Loadgen. Soapgen can test many mail
protocols and mailbox profiles.
Note: The Soapgen tool is available only from Zimbra professional services.
Zimbra Soapgen test tool
The Zimbra Soapgen utility enables you to test all server functions in a ZCS
configuration and has the following features:
- A test task scheduler to schedule tasks for different test accounts; all tasks are submitted to a queue and wait to be picked for execution at the scheduled time
- Various test tasks that simulate user interaction with Zimbra through the use of different protocols (SOAP, HTML, IMAP, POP3, CalDav, and BlackBerry synchronization)
We used two client virtual machines configured with Soapgen to generate a load of
5,000 heavy enterprise users against each of two mailbox servers (cumulatively,
10,000 users). We used a workload profile that is typical of an enterprise user. This
profile is presented in Table 10.
Table 10. Heavy enterprise user workload profile
Parameter                  Value
Total users                10,000
Users per mailbox server   5,000
Mailbox size               500 MB
User type                  Heavy enterprise
Message load               21 received/hour/user, 7 sent/hour/user (224 messages/user/8-hour day)
Message size               124 KB average message size; 80% with 25 KB message body; 20% with 20 KB message body and 500 KB attachment
Concurrency                100%
Test scenarios
We conducted a series of five tests to validate the performance and scalability of the
ZCS mailbox server building block, the benefits of using VNX FAST Cache, the
performance of NL-SAS disks with ZCS data, and protection (backup and restore) for
ZCS data using Replication Manager with VNX SnapView snapshots.
- Test 3: Advanced protection for ZCS data using Replication Manager with VNX SnapView snapshots
Key performance indicators
Table 11. Key performance indicators and target values
Key performance indicator   Target value
Disk latency                Less than 20 ms
Disk utilization
Disk IOPS
Tests 1 through 4 examined messages sent/received, moves and deletes, and all
other functions performed by an enterprise email user on a regular basis. All tests ran
successfully. The Soapgen client performed consistently against the ZCS servers
without causing any corruption to any of the ZCS components.
Figure 18. I/O types generated on the VNX5700 array during two hours of Soapgen client load
The results of Test 1 demonstrated that the building block we designed provided
solid performance and a significant amount of headroom for additional load.
Table 12 presents the performance results for one building block: one ZCS mailbox server virtual machine with a heavy workload of more than 200 messages per user per day. This building block was deployed using 10k SAS disks for all Zimbra volumes. Test results show excellent VNX storage performance with balanced distribution of the user load across ZCS volumes. Very low disk utilization with excellent throughput was also observed during this test.
The LMTP delivery rate was approximately 11.84 messages per second injected and 28.41 messages per second received (multiple recipients), which implied heavy MySQL writes.
Table 12. Performance results for one 5,000-user building block
Validation parameter                     Value
Average mailbox server CPU utilization   43%
Average send mail latency                184.89 ms

ZCS volumes                             Disk utilization   Disk throughput   Disk IOPS   Avg. disk latency (ms)
Root volume (/opt/zimbra)               0.21%              923 KB/s          14          3.78
Redo log volume (/opt/zimbra/redolog)   10%                3,890 KB/s        172         0.41
MySQL DB volume (/opt/zimbra/db)        7%                 4,135 KB/s        129         1.66
Index volume (/opt/zimbra/index)        8%                 798 KB/s          39          3.25
Store volume (/opt/zimbra/store)        41%                6,507 KB/s        202         4.58
Total                                   n/a                16,253 KB/s       556         n/a
Figure 19 shows throughput and utilization details for each Zimbra volume. VNX
storage easily handled ZCS application I/O and produced 16,254 KB/s throughput
with 556 IOPS across all disks supporting the 5000-user ZCS mailbox server building
block.
Figure 19. Disk throughput and utilization for each Zimbra volume in the 5,000-user building block
Figure 20 shows the number of disk IOPS for each ZCS volume in the building block,
with a total of 556 IOPS across all volumes.
Figure 20. Number of disk IOPS for each Zimbra volume in the 5,000-user mailbox server building block
Table 13. Validation results for two building blocks (10,000 users)
Validation parameter                     Mailbox server 1   Mailbox server 2
Average mailbox server CPU utilization   45%                43%
Average send mail latency                296.06 ms          294.44 ms
Table 14 presents details of disk throughput and IOPS achieved during the validation
of two ZCS mailbox server building blocks. The total throughput of 36,523 KB/s was
achieved with 1,445 IOPS across all ZCS volumes and disks.
Table 14. Disk throughput and IOPS for two building blocks
ZCS volume                              Disk throughput (KB/s)   Disk IOPS   Avg. disk latencies (ms)
Root volume (/opt/zimbra)               1,944                    32          4.1
Redo log volume (/opt/zimbra/redolog)   8,246                    406         0.78
MySQL DB volume (/opt/zimbra/db)        8,878                    188         2.1
Index volume (/opt/zimbra/index)        4,030                    84          4.2
Store volume (/opt/zimbra/store)        13,425                   255         5.1
Total                                   36,523                   1,445       n/a
Figure 21. Disk throughput and utilization for two building blocks (10,000 users)
Figure 22.
Identify the durations required to back up (replicate) and recover the ZCS data
Table 15 shows the LUN size and actual data size of each ZCS application LUN that was used in the replication and restore processes.
Table 15. LUN and data sizes for ZCS application LUNs
LUN                   LUN size   Data size
/opt/zimbra           40 GB      3 GB
/opt/zimbra/db        50 GB      26 GB
/opt/zimbra/index     500 GB     160 GB
/opt/zimbra/store     3.5 TB     2.7 TB
/opt/zimbra/redolog   90 GB      22 GB
Total                 4,180 GB   2,911 GB
Figure 23.
To calculate total snapshot space from percentage data, we sum the space usage of all LUNs participating in the replication. For this solution, total snapshot space was 61 GB in the SnapView reserved LUN pool:
Total snapshot space used =
(20 GB * 56%) + (20 GB * 38%) + (40 GB * 50%) + (40 GB * 53%) + (20 GB * 4%) =
11.2 GB + 7.6 GB + 20 GB + 21.2 GB + 0.8 GB = 61 GB
Note that the 61 GB also includes snapshot metadata, which in this solution amounted to about 32 GB.
Another way to calculate the used snapshot space is to look at the writes to the reserved LUN pool (RLP) in the SnapView session properties. For this solution, around 463,932 writes were produced during the two-hour heavy enterprise user load, which consumed 29 GB of space.
Figure 24.
Total snapshot space used = (Writes to RLP after Soapgen test - Writes to RLP before Soapgen test) * 64 KB SnapView chunks = (464,077 - 145) * 64 KB = 463,932 * 64 KB = 29 GB, plus metadata
The remaining 32 GB was used for snapshot metadata; metadata space is allocated for map entries in proportion to the total source LUN size (4,180 GB in this case).
Total snapshot space used = 29 GB + 32 GB = 61 GB
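As a quick sanity check of the arithmetic above (values taken directly from this solution's counters):
# Recompute the copied-chunk space from the RLP write counters.
writes=$((464077 - 145))          # RLP writes during the two-hour test
kb=$((writes * 64))               # 64 KB copy-on-first-write chunks
echo "$((kb / 1024 / 1024)) GiB"  # prints 28; the paper rounds this to ~29 GB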
Test 4 results: Benefits of using EMC VNX FAST Cache with ZCS data
The goal of this test was to evaluate whether enabling EMC VNX FAST Cache for the
storage pool containing the Zimbra store volume (/opt/zimbra/store) would improve
the performance and user experience.
Because we observed low disk utilization during previous tests with a Soapgen user workload of 200 messages sent/received per user per day, there was no point in enabling FAST Cache with the same workload.
We doubled the Soapgen user workload from 200 messages to 400 messages
sent/received per user per day and ran it on one mailbox server for two hours without
enabling FAST Cache. We did not change either the CPU or memory configuration on
the mailbox server virtual machine. After running this extreme workload (double the
heavy user workload) for two hours, we observed that Zimbra mailbox server CPU
utilization jumped to 85% and send mail latencies jumped above the 1,000 ms
target. This outcome was expected.
We then created a 200 GB FAST Cache on the VNX5700 array (made from two 200 GB SSD drives in RAID 1/0) and enabled it on the LUN configured for the ZCS store volume (/opt/zimbra/store). We then ran the same extreme workload for two hours. After a very short warm-up time, FAST Cache began to absorb most of the extra load. Mailbox
server average CPU utilization fell to 51% and the average send mail latency fell
below the 1,000 ms target. We ran this test several times and confirmed the
repeatability of these results.
Thus, enabling FAST Cache on the Zimbra store volume permitted the VNX array to
handle twice the original workload (400 messages sent/received/user/day) without
reducing performance, degrading the user experience, or requiring additional server
CPU or memory resources on the ZCS mailbox server virtual machine. At the end of
the test only 2% of the 200 GB FAST Cache was used.
Figure 25 shows the effect of FAST Cache on the Zimbra mailbox server when running
an extreme user workload.
Figure 25.
Effect of FAST Cache on Zimbra mailbox server with extreme user workload
Figure 26.
Conclusion
The combination of ZCS with EMC VNX series storage provides an optimal
collaboration infrastructure. Storage, compute, and network layers maintain high
availability, while EMC's building-block sizing approach achieves predictable
performance and a repeatable storage design.
Easy scaling with EMC's building-block approach to sizing
- Based on an EMC sizing building block of 5,000 users, your ZCS environment can be scaled in multiples of 5,000 seats. Two building blocks supporting a total of 10,000 heavy users were successfully validated.
- VNX storage easily handled ZCS application I/O and produced 16,254 KB/s throughput with 556 IOPS across all disks supporting the 5,000-user ZCS mailbox server building block.
Benefits of EMC VNX series FAST Cache
Enabling FAST Cache on the Zimbra store volume permitted the VNX array to handle twice the original workload (400 messages sent/received/user/day) without reducing performance, degrading the user experience, or requiring additional server CPU or memory resources on the ZCS mailbox server virtual machine. By the end of the test run, only 2% of the 200 GB FAST Cache was used.
NL-SAS disk performance with ZCS
NL-SAS disks on VNX5700 storage provide excellent performance with minimal disk utilization and low latencies. The performance was almost identical to 10k SAS disks; all key performance criteria were successfully met.
VMware HA clustering
VMware HA provides uniform high availability across the entire virtualized environment without the cost and complexity of failover solutions tied to either operating systems or specific applications.
Advanced protection (backup and recovery)
EMC Replication Manager and EMC VNX SnapView provide instant snapshot-based backup and recovery for ZCS data. Replication Manager automates the creation of snapshots of ZCS mailbox server volumes using SnapView technology. We determined snapshot space requirements in order to provide efficient sizing guidelines.
- Compared with the native ZCS backup process, the use of Replication Manager with SnapView significantly reduced the time it took to back up all ZCS mailbox servers.
- The backup and restore times were very short and the operations were very efficient.
References
White papers
Product
documentation
EMC SnapView
VMware vSphere
- ZCS binaries: 10 GB
- ZCS logs: 20 GB
In this solution we used storage snapshots. The space calculations for snapshots are
described in Snapshot space calculations.
If ZCS native backups are used, allocate an additional 160% of the space required.
Appendix B: Zimbra deployment worksheet
Figure 27. Example of the sizing tab in a Zimbra deployment worksheet