
Deploy IBM DB2 pureScale feature on IBM Power Systems
Benefit from the shared disk architecture of DB2 pureScale

Skill Level: Intermediate

Miso Cilimdzic (cilimdzi@ca.ibm.com)


DB2 Performance Manager
IBM

Sanjeeva Kumar Ogirala (sogirala@in.ibm.com)


Software Engineer
IBM

09 Sep 2010

The DB2® pureScale™ feature for Enterprise Server Edition builds on familiar and
proven design features from the IBM® DB2 for z/OS® database software. This article
describes different deployment methods of the DB2 pureScale feature on IBM Power
Systems™ and how the hardware pieces come together to create a pureScale
cluster.

Introduction
The DB2® pureScale™ feature helps reduce the risk and cost of business
growth by providing nearly unlimited capacity, continuous availability, and application
transparency. DB2 pureScale benefits from a low-latency interconnect, such as
InfiniBand, and is built on top of a shared disk architecture. To achieve the low
latency, Power Systems InfiniBand Host Channel Adapters (HCAs) and switches
are used, and a Fiber Channel SAN provides access to the shared disks.

This article addresses the following questions:

• How are the DB2 pureScale members connected?

Deploy IBM DB2 pureScale feature on IBM Power Systems Trademarks


© Copyright IBM Corporation 2010. All rights reserved. Page 1 of 12
developerWorks® ibm.com/developerWorks

• How are a member and PowerHA™ pureScale server connected?


• How are the AIX® LPARs for the member and PowerHA pureScale server
connected in a cluster?
• Is every AIX LPAR on one host connected to every LPAR on the other
hosts?
• How is the SAN storage connected to the cluster?
This article explains how the DB2 pureScale cluster hardware is coupled together for
a DB2 pureScale production system. The article also clarifies concepts related to
setting up a DB2 pureScale cluster. Setting up and configuring the cluster requires
expertise in UNIX®, InfiniBand, and SAN storage.

Understanding the DB2 pureScale feature


The DB2 pureScale feature is based on IBM DB2 RDBMS shared-disk technology.
When you hear about DB2 pureScale, it is usually in the context of a solution based
on a cluster architecture made up of several tightly coupled components:

• At least two DB2 members


• PowerHA pureScale server (CF)
• A high-speed communication network, such as InfiniBand
• IBM Tivoli® System Automation for Multiplatforms (Tivoli SA MP) software
• IBM Reliable Scalable Clustering Technology (RSCT) software
• IBM General Parallel File System (GPFS™) software
The DB2 pureScale feature addresses capacity and availability issues by providing
an easier way to scale up or to scale down while making sure that the entire
database is always available. The shared disk enables all the members to access
the same data set. Any member failure or CF failure (in case of duplexed CF) does
not impact the database availability. With DB2 pureScale, additional capacity is
added by simply adding new members to the existing cluster. The PowerHA
pureScale server's group buffer pool (GBP) and global lock manager (GLM) provide
centralized data access synchronization.
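The division of labor can be sketched abstractly. The following Python sketch is purely illustrative: the class and method names are invented, and it is not DB2 internals. It models how a centralized global lock manager serializes page updates across members:

```python
# Illustrative model of centralized lock management, loosely inspired by
# the global lock manager (GLM) role described above. Not DB2 internals.

class GlobalLockManager:
    """Grants exclusive page locks so only one member updates a page at a time."""

    def __init__(self):
        self.holders = {}  # page_id -> member currently holding the X lock

    def request_x_lock(self, member, page_id):
        holder = self.holders.get(page_id)
        if holder is None or holder == member:
            self.holders[page_id] = member
            return True   # lock granted
        return False      # another member holds the page; caller must wait

    def release(self, member, page_id):
        if self.holders.get(page_id) == member:
            del self.holders[page_id]

glm = GlobalLockManager()
assert glm.request_x_lock("member0", "page42") is True
assert glm.request_x_lock("member1", "page42") is False  # member0 holds it
glm.release("member0", "page42")
assert glm.request_x_lock("member1", "page42") is True
```

In the real system this arbitration happens in the CF over the low-latency interconnect, which is why interconnect performance is so central to the design.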

Figure 1 shows a high-level view of a DB2 pureScale instance with four members
and two CFs. It shows DB2 clients connected to the data server. DB2 members are
processing database requests, and PowerHA pureScale servers provide centralized
synchronization services. Data is stored on shared-disk storage, which is accessible
by all members.


Figure 1. A view of the major components in a DB2 pureScale environment

Understanding the hardware components that make up the solution
Following is a list of the hardware required for the DB2 pureScale environment
described in this article:

• IBM POWER6® or POWER7® Servers with AIX


• Fiber Channel SAN storage, SAN switch and Host Bus Adapters (HBA)
• InfiniBand switch, InfiniBand Host Channel Adapters (HCA), and cables
• Ethernet adapters
• Hardware Management Console (HMC)
The sections below briefly explain each element of the solution.

IBM POWER6 or POWER7 Servers

The servers are POWER6 or POWER7 computers with AIX Logical Partitions
(LPAR) on which DB2 pureScale binaries are deployed. A minimum of two members
and two PowerHA pureScale servers are advised. It is recommended that each
member and PowerHA pureScale server be deployed in its own LPAR and across
a minimum of two POWER6 or POWER7 computers. Currently, the following
POWER systems are supported:

• POWER6 550
• POWER6 595
• POWER7 710
• POWER7 720
• POWER7 730
• POWER7 740
• POWER7 750
• POWER7 755
• POWER7 770
• POWER7 780
• POWER7 795
Fiber Channel SAN storage, switches, and HBA

Fiber Channel-attached SAN storage is shared among all DB2 members. DB2
pureScale benefits from storage with SCSI3-Persistent Reserve support. DB2
pureScale uses this technology to quickly fence off errant members from the storage
in case of a failure, which ensures that the database files remain consistent. For a
list of storage with SCSI3-PR support that has been tested and is supported by
GPFS, see the online GPFS FAQ in Resources.
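The fencing behavior can be illustrated with a toy model. The sketch below is a minimal Python state machine with invented names; it does not use a real SCSI-3 PR interface. It shows the idea of revoking a failed host's access before recovery proceeds:

```python
# Toy model of I/O fencing in the spirit of the SCSI-3 persistent reserve
# mechanism described above. Illustrative only; not a real SCSI interface.

class SharedDisk:
    def __init__(self, registered_hosts):
        self.registered = set(registered_hosts)

    def write(self, host, data):
        if host not in self.registered:
            raise PermissionError(f"{host} is fenced off the storage")
        return f"{host} wrote {data}"

    def fence(self, host):
        # Revoking the registration blocks all further I/O from the host,
        # so an errant member cannot corrupt the shared database files.
        self.registered.discard(host)

disk = SharedDisk({"member0", "member1"})
disk.fence("member1")  # member1 failed: fence it before recovery begins
assert disk.write("member0", "a row") == "member0 wrote a row"
try:
    disk.write("member1", "a row")
except PermissionError:
    pass  # the fenced member can no longer touch the disk
```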

Because the shared data is at the heart of a DB2 pureScale system, a RAID
configuration is recommended to provide maximum redundancy and availability.
Some of the more fault-tolerant RAID levels, such as RAID10 and RAID6, help
provide an extra assurance that the storage subsystem can survive various disk
failures.

SAN switches are typically used to connect the servers to the storage controller. For
a DB2 pureScale deployment, SAN switches should be redundant and also
connected to different power supplies for maximum availability.

A Host Bus Adapter (HBA) connects a server to the SAN storage, typically
through a SAN switch over Fiber Channel cables. Redundant HBAs on each DB2
member are recommended, together with multipath software, such as IBM AIX
MPIO, or device drivers that support multipath access to LUNs. Some multipath
drivers also offer load balancing, which can increase throughput when
multiple HBAs are used.

InfiniBand switch, HCA, and cables

InfiniBand is a low-latency high-bandwidth interconnect used to communicate among


DB2 members and PowerHA pureScale servers. InfiniBand Host Channel Adapter
(HCA) is the device that enables the servers to be connected. HCAs are connected
to an InfiniBand switch fabric using InfiniBand cables to form a subnet. InfiniBand
connectivity is further described under Using InfiniBand (IB).

Ethernet adapters

Ethernet adapters are typically connected to the corporate network and enable DB2
clients to connect to the DB2 pureScale instance. Adapter redundancy can be
provided by technologies such as EtherChannel or Network Interface Backup. The
DB2 pureScale feature automatically routes connection requests to the member with
the lowest workload. Alternatively, you can specify that DB2 clients are to connect
to specific active members in the DB2 pureScale instance.
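The routing rule described above fits in a few lines. This is a simplified model with hypothetical member names and workload numbers; the real DB2 client does this transparently using server lists returned by the instance:

```python
# Sketch of workload-balanced connection routing: send each new client
# connection to the active member with the lowest current workload.
# Member names and workload figures are hypothetical.

def route_connection(active_members):
    """active_members: dict mapping member name -> current workload."""
    return min(active_members, key=active_members.get)

workloads = {"member0": 120, "member1": 45, "member2": 80}
assert route_connection(workloads) == "member1"  # lowest workload wins
```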

Hardware Management Console

The IBM Hardware Management Console (HMC) provides system administrators
with a tool to plan, deploy, and manage IBM System p® servers. The HMC provides
server hardware management and virtualization (partition) management.

Using InfiniBand (IB)


The HCAs, InfiniBand cables, and InfiniBand switch form a subnet. The performance
of this network is critical, because it is used to communicate locking and caching
information across the cluster. All hosts in the instance must use the same type of
interconnect. DB2 pureScale exploits InfiniBand, which provides Remote Direct
Memory Access (RDMA) support. The use of RDMA enables direct updates in
member host memory without requiring member processor time. The IB components
and their part numbers are described in the following sections.

Host Channel Adapters (HCA)

The IBM GX++ HCA is installed in the POWER system servers, which are used as
part of the DB2 pureScale cluster. DB2 pureScale supports only the GX++ HCA
adapters. The list of supported adapters with their feature codes is shown in Table 1.

Table 1. POWER system server models and supported HCA adapters

POWER system server model    HCA feature codes
550, 750                     5609
595, 795                     1816
710, 730                     5266
720, 740                     5615
770, 780                     1808
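For scripted planning checks, Table 1 can be captured as a simple lookup. The mapping below transcribes the table values; the helper function name is our own:

```python
# Table 1 as a lookup: POWER server model -> GX++ HCA feature code.
# Values transcribed from the table above.

HCA_FEATURE_CODE = {
    550: 5609, 750: 5609,
    595: 1816, 795: 1816,
    710: 5266, 730: 5266,
    720: 5615, 740: 5615,
    770: 1808, 780: 1808,
}

def hca_feature_for(model):
    try:
        return HCA_FEATURE_CODE[model]
    except KeyError:
        raise ValueError(f"no supported GX++ HCA listed for POWER model {model}")

assert hca_feature_for(750) == 5609
assert hca_feature_for(780) == 1808
```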

HCAs connected to the IB switch

The HCAs are connected to the IB switch using a 12x to 4x IB cable, such as the 10
meter copper cable under FC 1854, or using a 4x to 4x IB cable, such as the FC
3246 (4x to 4x cable only for FC 5266).

Multiple LPARs on a server connected to the IB fabric

There are multiple ways to connect the LPARs, depending on how many LPARs
there are and how many HCAs are supported for that server model. Some of the
options include the following:

POWER 750 with one LPAR


The HCA is assigned to the LPAR. One IB cable is connected to the IB switch.

POWER 750 with two LPARs


The HCA is logically partitioned using the POWER hypervisor, and each LPAR
is assigned a portion of the HCA bandwidth and resources. One IB cable is
connected to the IB switch.

POWER 770 with two LPARs


Two HCAs are installed, and each LPAR has a dedicated HCA. Two IB cables
are connected to the IB switch.

POWER 770 with multiple LPARs


One or more HCAs are installed. Either every LPAR has a dedicated HCA, or
some or all LPARs share the HCAs. The same number of IB cables as HCAs
are connected to the IB switch.

InfiniBand switch

At the center of the InfiniBand fabric is the IB switch, which ties all of the DB2
pureScale servers into a subnet. The IBM line of 7874 IB switches provides a wide
range of port counts from 24 to 240.

Table 2 lists the supported IBM POWER Systems InfiniBand switches.

Table 2. Supported IBM POWER Systems InfiniBand switches

Feature codes    Supported switches
7874-024         1U, 24-port 4x DDR IB Edge Switch (QLogic 9024CU)
7874-040         4U, 48-port 4x DDR IB Director Switch (QLogic 9040)
7874-120         7U, 120-port 4x DDR IB Director Switch (QLogic 9120)
7874-240         14U, 240-port 4x DDR IB Director Switch (QLogic 9240)

Exploring sample deployment models


There are various ways to combine servers when deploying the DB2 pureScale
feature. This section describes a few common deployment models.

• Two-server deployment
• Three-server deployment
• Four-and-more-server deployment
Table 3 shows the configurations for the three models.

Table 3. Three configuration models

2-server model: 2 servers; 4 LPARs (2 on each server); IB switch mandatory;
minimum 2 IB HCAs; minimum 2 IB cables; minimum 2 dual-port FC SAN HBAs;
FC SAN switch optional; 4 FC SAN cables (2 from each server); FC SAN storage
controller mandatory.

3-server model: 3 servers; 5 LPARs (2 LPARs on two servers and 1 LPAR on one
server); IB switch mandatory; minimum 3 IB HCAs; minimum 3 IB cables; minimum
3 dual-port FC SAN HBAs; FC SAN switch optional; minimum 6 FC SAN cables
(2 from each server); FC SAN storage controller mandatory.

4-and-more-server model: 4 or more servers; 4 or more LPARs; IB switch
mandatory; minimum 1 IB HCA per server; minimum 1 IB cable per server; minimum
2 dual-port FC SAN HBAs per server; FC SAN switch optional; minimum 2 FC SAN
cables from each server; FC SAN storage controller mandatory.
Two-server deployment

To maintain high availability (HA) characteristics, two servers are the minimum
configuration. In such a configuration, each server would have two LPARs (one DB2
LPAR, one PowerHA pureScale server LPAR). The loss of one physical server in
this configuration enables the DB2 pureScale instance to continue to be available,
because one DB2 member and one PowerHA pureScale server will be available on
the surviving physical server.

In this configuration, high availability is not preserved while any one server is down
for a hardware failure or a hardware maintenance window. The IB cards can be either
dedicated to each LPAR (if a server supports more than one HCA) or shared.
Similarly the HBAs can be either dedicated to each LPAR or shared using Virtual I/O
Server (VIOS). Each of the IB HCAs is connected to the IB switch with IB cables.
Similarly the HBA adapters are connected to the FC SAN switch with FC SAN
cables. Figure 2 shows this configuration.

Figure 2. A four LPAR, two POWER server configuration with cabling
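The availability argument for this layout can be checked mechanically. The small sketch below uses hypothetical server and LPAR names to verify that losing either server still leaves at least one member and one CF:

```python
# Placement sketch for the two-server model: each server hosts one DB2
# member LPAR and one CF (PowerHA pureScale server) LPAR. Server and
# LPAR names are hypothetical.

placement = {
    "server1": ["member0", "cf_primary"],
    "server2": ["member1", "cf_secondary"],
}

def survives_loss_of(failed_server):
    """True if the surviving servers still run a member and a CF."""
    remaining = [lpar for srv, lpars in placement.items()
                 if srv != failed_server for lpar in lpars]
    has_member = any(l.startswith("member") for l in remaining)
    has_cf = any(l.startswith("cf") for l in remaining)
    return has_member and has_cf

assert survives_loss_of("server1")  # member1 and cf_secondary survive
assert survives_loss_of("server2")  # member0 and cf_primary survive
```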

Three-server deployment

The three-server deployment enables high availability during hardware failure or
hardware maintenance of one server (such as the server without a PowerHA
pureScale server LPAR). In this configuration, each server has one member LPAR
(for a total of three members) and two PowerHA pureScale server LPARs on two
different servers. The description of the IB and FC SAN connectivity is the same as
for the two-server setup except that the server hosting only the member LPAR has a
dedicated HCA. Figure 3 shows this configuration.

Figure 3. A five LPAR, three POWER server configuration with cabling


Four-and-more-server deployment

The four-and-more-server deployment enables additional members and an option to
isolate the PowerHA pureScale servers on dedicated servers. The cluster is scaled
out by simply adding servers, while making sure that storage input/output capacity
is increased proportionally and that PowerHA pureScale server LPAR capacity is
increased gradually.

The configuration is the same as for the three-server deployment except that an
additional member LPAR is added on each additional server. It is also
possible to deploy one LPAR per server, in which case DB2 pureScale members
and the PowerHA pureScale server use dedicated HCA/HBA. Figure 4 shows this
configuration.

Figure 4. A four-and-more POWER server configuration with cabling


Conclusion
The IBM DB2 pureScale feature and IBM POWER servers provide a tightly coupled
solution that addresses business growth and continuous availability needs. This
article has shown various sample deployment models built from industry-standard
components. These models illustrate a flexible infrastructure that can range from a
2-member cluster to a 128-member cluster, satisfying a variety of business
requirements.


Resources
Learn
• Get more information about GPFS in the General Parallel File system FAQs in
the IBM Cluster Information Center.
• Refer to the DB2 for Linux, UNIX, and Windows Information Center for more
information about the DB2 pureScale feature.
• Read "InfiniBand usage" for more information about InfiniBand usage on IBM
POWER servers.
• Scan through "IBM HMC" for complete information about the IBM Hardware
Management Console.
• Explore "IBM QLogic" for more information on the IBM QLogic IB switch.
• Learn more about Information Management at the developerWorks Information
Management zone. Find technical documentation, how-to articles, education,
downloads, product information, and more.
• Stay current with developerWorks technical events and webcasts.
• Follow developerWorks on Twitter.
Get products and technologies
• Build your next development project with IBM trial software, available for
download directly from developerWorks.
Discuss
• Participate in the discussion forum for this content.
• Check out the developerWorks blogs and get involved in the developerWorks
community.

About the authors


Miso Cilimdzic
Miso has been with IBM since 2000. He has worked on various DB2
performance-related activities with recent focus on DB2 pureScale.


Sanjeeva Kumar Ogirala


Sanjeeva Kumar Ogirala is a Software Engineer in the DB2 performance
team. He holds an M.Tech in Power Systems from IIT Delhi. He has been with
IBM since July 2007, and he is an IBM-certified DB2 for Linux, UNIX, and
Windows database administrator.

