Technical white paper
Oracle RAC 12c on HP blade servers running Red Hat Enterprise Linux 6 Update 4
Installation cookbook
Table of contents

Executive summary
Introduction
Environment description
    Hardware description
    Software description
    Documentation
    Useful My Oracle Support Notes
Infrastructure description
    Environment pre-requisites
    Oracle Grid Infrastructure installation server checklist
    HP BladeSystem
    HP Virtual Connect
    HP Onboard Administrator
    Connectivity
System pre-requisites
    Memory requirement
    Check the temporary space available
    Check for the kernel release
    Install the HP Service Pack for ProLiant and its RHEL 6.4 supplement
    Check for the newly presented shared LUNs
    Set the kernel parameters
    Check the necessary packages
    Checking shared memory file system mount
    Preparing the network
    Setting Network Time Protocol for Cluster Time Synchronization
    Check the SELinux setting
    Create the grid and oracle users and groups
    Configure the secure shell service
    Set the limits
    Installing the cvuqdisk RPM for Linux
    Storage connectivity driver configuration
    Install the ASMLib support library
    Check the available disk space
    Setting the disk I/O scheduler on Linux
    Determining root script execution plan
Oracle Clusterware installation
    Environment setting
    Check the environment before installation
    RunInstaller
    Check the installation
ASM disk group creation
Oracle RAC 12c database installation
    Environment setting
    Installation
Create a RAC database
    Create a RAC database
    Post-installation steps
Cluster verification
    Cluster verification utility
Appendix
    Anaconda file
    Grid user environment setting
    Oracle user environment setting
Summary
For more information
Executive summary

On July 1, 2013 Oracle announced general availability of Oracle Database 12c, designed for the Cloud. New features such as Oracle Multitenant, for consolidating multiple databases, and Automatic Data Optimization, for compressing and tiering data at a higher density, resource efficiency and flexibility, along with many other enhancements, will be important for many customers to understand and implement as new applications take advantage of these features. Globally, HP continues to be the leader of installed servers running Oracle. We're going to extend our industry-leading Oracle footprint by delivering the best customer experience with open standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP continues to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook. HP has tested various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications. Together, HP and Oracle help the world's leading businesses succeed, and we've accumulated a great deal of experience along the way. We plan to leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry-specific solutions, tested and validated.

This document provides a step-by-step installation description of Oracle RAC 12c running Red Hat Enterprise Linux on HP ProLiant servers and HP 3PAR StoreServ storage. This paper does not intend to replace the official Oracle documentation. It is validation and experience sharing with a focus on the system pre-requisites, and is complementary to any generic documentation from HP, Oracle and Red Hat.

Oracle Real Application Clusters (RAC) enables multiple cluster nodes to act as a single processing engine wherein any node can respond to a database request. Servers in a RAC deployment are bound together using Oracle Clusterware (CRS) cluster management software. Oracle Clusterware enables the servers to work together as a single entity.

Target audience: This document addresses the RAC 12c installation procedures. The readers should have a good knowledge of Linux administration as well as knowledge of Oracle databases. This white paper describes testing performed in July 2013.

Introduction

HP Converged Infrastructure delivers the framework for a dynamic data center, eliminating costly, rigid IT silos while unlocking resources for innovation rather than management. This infrastructure matches the supply of IT resources with the demand for business applications; its overarching benefits include the following:
- Modularity
- Openness
- Virtualization
- Resilience
- Orchestration

By transitioning away from a product-centric approach to a shared-service management model, HP Converged Infrastructure can accelerate standardization, reduce operating costs, and accelerate business results. A dynamic Oracle business requires a matching IT infrastructure. You need a data center with the flexibility to automatically add processing power to accommodate spikes in Oracle database traffic and the agility to shift resources from one application to another as demand changes. To become truly dynamic you must start thinking beyond server virtualization and consider the benefits of virtualizing your entire infrastructure. Thus, virtualization is a key component of HP Converged Infrastructure.
HP's focus on cloud and virtualization is a perfect match for the new features of the Oracle 12c database designed for the cloud. Customers who are eager to start building cloud solutions can start with the HP integrated solutions. Customers who want to move at a slower pace can get started with Converged Infrastructure building blocks that put them on the path towards integrated solutions at a later date.
Environment description

Drive business innovation and eliminate server sprawl with HP BladeSystem, the industry's only Converged Infrastructure architected for any workload from client to cloud. HP BladeSystem is engineered to maximize every hour, watt, and dollar, saving up to 56% total cost of ownership over traditional infrastructures. With HP BladeSystem, it is possible to create a change-ready, power-efficient, network-optimized, simple-to-manage and high-performance infrastructure on which to consolidate, build and scale your Oracle database implementation. Each HP BladeSystem c7000 enclosure can accommodate up to 16 half-height blades, up to 8 full-height blades, or a mixture of both. In addition, there are 8 Interconnect Bays with support for any I/O fabric your database applications require.

Choosing a server for Oracle databases involves selecting a server that is the right mix of performance, price and power efficiency with the most optimal management. HP's experience during test and in production has been that 2- or 4-socket servers are ideal platforms for Oracle database for the cloud, depending on the workload requirements. 2- and 4-socket systems offer better memory performance and memory scaling for databases that need a large System Global Area (SGA). HP BladeSystem reduces costs and simplifies management through shared infrastructure.

The HP 3PAR StoreServ storage arrays are designed to deliver enterprise IT storage as a utility service simply, efficiently, and flexibly. The arrays feature a tightly coupled clustered architecture, secure multi-tenancy, and mixed workload support for enterprise-class data centers. Use of unique thin technologies reduces acquisition and operational costs by up to 50%, while autonomic management features improve administrative efficiency by up to tenfold when compared with traditional storage solutions. The HP 3PAR StoreServ Gen4 ASIC in each of the system's controller nodes provides a hyper-efficient, silicon-based engine that drives on-the-fly storage optimization to maximize capacity utilization while delivering high service levels.

Hardware description

For this white paper, we rely on 2 main components of the HP Converged Infrastructure introduced earlier: two HP ProLiant BL460c Gen8 servers and an HP 3PAR StoreServ 7200 as SAN storage, as shown in figure 1.

Figure 1. The c7000 and the 3PAR StoreServ 7200 used during this cookbook preparation. This view is a subset of the fully populated enclosure.
Software description
- Red Hat Enterprise Linux Server release 6.4
- Oracle Database 12cR1 Real Application Cluster
Documentation

The table below lists the main documentation used during the creation of this white paper.

Document ID    Document title
E17888-14      Oracle Grid Infrastructure Installation Guide 12c Release 1 (12.1) for Linux
E17720-15      Oracle Database Installation Guide 12c Release 1 (12.1) for Linux
Useful My Oracle Support Notes

Note ID       Title
1089399.1     Oracle ASMLib Software Update Policy for Red Hat Enterprise Linux Supported by Red Hat
1514012.1     runcluvfy stage -pre crsinst generates "Reference Data is not available for verifying prerequisites on this operating system distribution" on Red Hat 6
1567127.1     RHEL 6: 12c CVU Fails: Reference data is not available for verifying prerequisites on this operating system distribution
Infrastructure description

Multiple configurations are possible in order to build a RAC cluster. This chapter only provides some information about the architecture we worked with during this project. For the configuration tested, two HP ProLiant blade servers were attached to HP 3PAR StoreServ storage through a fully redundant SAN and network LAN. In this section we also look at the HP ProLiant blade infrastructure.

Environment pre-requisites

Based on this architecture, the infrastructure requirements for an Oracle RAC are:
- A high-speed communication link (private Virtual Connect network) between all nodes of the cluster. This link is used for RAC Cache Fusion, which allows RAC nodes to synchronize their memory caches.
- A common public (Virtual Connect) communication link for communication with Oracle clients.
- A storage subsystem accessible by all cluster nodes for access to the Oracle shared files (Voting, OCR and Database files).
- At least two HP servers. In the current configuration we used a pair of HP ProLiant server blades in a c7000 blade chassis configured to boot from the HP 3PAR StoreServ storage subsystem.

Oracle Grid Infrastructure installation server checklist

- Network switches:
  - Public network switch, at least 1 GbE, connected to a public gateway.
  - Private network switch, at least 1 GbE, dedicated for use only with other cluster member nodes. The interface must support the user datagram protocol (UDP), using high-speed network adapters and switches that support TCP/IP.
- Runlevel: servers should be either in runlevel 3 or runlevel 5.
- Random Access Memory (RAM): at least 4 GB of RAM for an Oracle Grid Infrastructure for a Cluster installation, including installations where you plan to install Oracle RAC.
- Temporary disk space allocation: at least 1 GB allocated to /tmp.
- Operating system: one of the supported kernels and releases listed at http://docs.oracle.com/cd/E16655_01/install.121/e17888/prelinux.htm#CIHFICFD. In our configuration: Red Hat Enterprise Linux 6. Supported distributions:
  - Red Hat Enterprise Linux 6: 2.6.32-71.el6.x86_64 or later
  - Red Hat Enterprise Linux 6 with the Unbreakable Enterprise Kernel: 2.6.32-100.28.5.el6.x86_64 or later
- Same operating system kernel running on each cluster member node.
- OpenSSH installed manually.
- Storage hardware: either Storage Area Network (SAN) or Network-Attached Storage (NAS).
- Local storage space for Oracle software:
  - At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
  - For Linux x86_64 platforms, 5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
- Boot from SAN is supported.

HP BladeSystem

HP combined its comprehensive technology to make BladeSystem not only easy to use, but also useful to you, regardless of whether you choose the BladeSystem c3000 or c7000 Platinum Enclosure.
- Intelligent infrastructure support: Power Discovery Services allows BladeSystem enclosures to communicate information to HP Intelligent PDUs that automatically track enclosure power connections to the specific iPDU outlet to ensure redundancy and prevent downtime. Location Discovery Services allows the c7000 to automatically record its exact location in HP Intelligent Series Racks, eliminating time-consuming manual asset tracking.
- HP Thermal Logic technologies: Combine energy-reduction technologies such as the 80 PLUS Platinum, 94 percent-efficient HP 2650W/2400W Platinum Power Supply with pinpoint measurement and control through Dynamic Power Capping, to save energy and reclaim trapped capacity without sacrificing performance.
- HP Virtual Connect architecture: Wire once, then add, replace, or recover blades on the fly, without impacting networks and storage or creating extra steps.
- HP Insight Control: This essential infrastructure management software helps save time and money by making it easy to deploy, migrate, monitor, control, and enhance your IT infrastructure through a single, simple management console for your BladeSystem servers.
- HP Dynamic Power Capping: Maintain an enclosure's power consumption at or below a cap value to prevent any increase in compute demand from causing a surge in power that could trip circuit breakers.
- HP Dynamic Power Saver: Enable more efficient use of power in the server blade enclosure. During periods of low server utilization, the Dynamic Power Saver places power supplies in standby mode, incrementally activating them to deliver the required power as demand increases.
- HP Power Regulator: Dynamically change each server's power consumption to match the needed processing horsepower, thus reducing power consumption automatically during periods of low utilization.
- HP NonStop midplane: No single point of failure, to keep your business up and running.
- HP Onboard Administrator: Wizards get you up and running fast and are paired with useful tools to simplify daily tasks, warn of potential issues, and assist you with repairs.

HP administration tools were used to configure the HP environment as shown in figure 2.

Figure 2. A screen shot of the HP BladeSystem enclosure view from the HP Onboard Administrator.
Further details on the HP BladeSystem can be found at hp.com/go/BladeSystem.

HP Virtual Connect

HP developed Virtual Connect technology to simplify networking configuration for the server administrator using an HP BladeSystem c-Class environment. The baseline Virtual Connect technology virtualizes the connections between the server and the LAN and SAN network infrastructure. It adds a hardware abstraction layer that removes the direct coupling between them. Server administrators can physically wire the uplinks from the enclosure to its network connections once, and then manage the network addresses and uplink paths through Virtual Connect software.

Using Virtual Connect interconnect modules provides the following capabilities:
- Reduces the number of cables required for an enclosure, compared to using pass-through modules.
- Reduces the number of edge switches that LAN and SAN administrators must manage.
- Allows pre-provisioning of the network, so server administrators can add, replace, or upgrade servers without requiring immediate involvement from the LAN or SAN administrators.
- Enables a flatter, less hierarchical network, reducing equipment and administration costs, reducing latency and improving performance.
- Delivers direct server-to-server connectivity within the BladeSystem enclosure. This is an ideal way to optimize for East/West traffic flow, which is becoming more prevalent at the server edge with the growth of server virtualization, cloud computing, and distributed applications.

Without Virtual Connect abstraction, changes to server hardware (for example, replacing the system board during a service event) often result in changes to the MAC addresses and WWNs. The server administrator must then contact the LAN/SAN administrators, give them updated addresses, and wait for them to make the appropriate updates to their infrastructure. With Virtual Connect, a server profile holds the MAC addresses and WWNs constant, so the server administrator can apply the same networking profile to new hardware. This can significantly reduce the time for a service event.

Virtual Connect Flex-10 technology further simplifies network interconnects. Flex-10 technology lets you split a 10 Gb Ethernet port into four physical function NICs (called FlexNICs). This lets you replace multiple, lower-bandwidth NICs with a single 10 Gb adapter. Prior to Flex-10, a typical server blade enclosure required up to 40 pieces of hardware (32 mezzanine adapters and 8 modules) for a full enclosure of 16 virtualized servers. Use of HP FlexNICs with Virtual Connect interconnect modules reduces the required hardware by up to 50% by consolidating all the NIC connections onto two 10 Gb ports.

Virtual Connect FlexFabric adapters broadened the Flex-10 capabilities by providing a way to converge network and storage protocols on a 10 Gb port. Virtual Connect FlexFabric modules and FlexFabric adapters can (1) converge Ethernet, Fibre Channel, or accelerated iSCSI traffic into a single 10 Gb data stream, (2) partition a 10 Gb adapter port into four physical functions with adjustable bandwidth per physical function, and (3) preserve routing information for all data types. Flex-10 technology and FlexFabric adapters reduce management complexity; the number of NICs, HBAs, and interconnect modules needed; and associated power and operational costs. Using FlexFabric technology lets you reduce the hardware requirements by 95% for a full enclosure of 16 virtualized servers, from 40 components to two FlexFabric modules.
The most recent Virtual Connect innovation is the ability to connect directly to HP 3PAR StoreServ Storage systems. You can either eliminate the intermediate SAN infrastructure or have both direct-attached storage and storage attached to the SAN fabric. Server administrators can manage storage device connectivity and LAN network connectivity using Virtual Connect Manager. The direct-attached Fibre Channel storage capability has the potential to reduce SAN acquisition and operational costs significantly while reducing the time it takes to provision storage connectivity. Figures 3 and 4 show an example of the interface to the Virtual Connect environment.
Figure 3. View of the Virtual Connect Manager home page of the environment used.
Figure 4. The Virtual Connect profile of one of the cluster nodes.
Further details on HP Virtual Connect technology can be found at hp.com/go/VirtualConnect.
HP Onboard Administrator

The Onboard Administrator for the HP BladeSystem enclosure is the brains of the c-Class infrastructure. Together with the enclosure's HP Insight Display, the Onboard Administrator has been designed for both local and remote administration of HP BladeSystem c-Class. This module and its firmware provide:
- Wizards for simple, fast setup and configuration
- Highly available and secure access to the HP BladeSystem infrastructure
- Security roles for server, network, and storage administrators
- Agent-less device health and status
- Thermal Logic power and cooling information and control

Each enclosure is shipped with one Onboard Administrator module/firmware. If desired, a customer may order a second redundant Onboard Administrator module for each enclosure. When two Onboard Administrator modules are present in a BladeSystem c-Class enclosure, they work in an active-standby mode, assuring full redundancy with integrated management.

Figure 5 below shows the information related to the enclosure we used in this exercise. On the right side, the front and rear views of the enclosure components are available. By clicking on one component, the detailed information appears in the central frame.

Figure 5. From the HP Onboard Administrator, very detailed information related to the server is available.
More about the HP Onboard Administrator: hp.com/go/oa.
Connectivity

The diagram in figure 6 below shows a basic representation of the components' connectivity.

Figure 6. Components' connectivity.
System pre-requisites

This section describes the system configuration steps to be completed before installing the Oracle Grid Infrastructure and creating a Real Application Cluster database.

Memory requirement

Check the available RAM and the swap space on the system. The minimum required is 4 GB in an Oracle RAC cluster.

[root@oracle52 ~]# grep MemTotal /proc/meminfo
MemTotal:       198450988 kB
[root@oracle52 ~]# grep SwapTotal /proc/meminfo
SwapTotal:      4194296 kB
The swap volume may vary based on the RAM size. As per the Oracle documentation, the swap ratio should be the following:

RAM             Swap
4 GB to 16 GB   1 times the RAM size
> 16 GB         16 GB
Our HP ProLiant blades had 192 GB of memory, so we created a 4 GB swap volume. This is below the recommendation. However, because of the huge amount of RAM available, we do not expect any usage of this swap space. Keep in mind that swap activity negatively impacts database performance. The command swapon -s tells how much swap space exists on the system (in KB).

[root@oracle52 ~]# swapon -s
Filename     Type        Size     Used  Priority
/dev/dm-3    partition   4194296  0     -1
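If you prefer to follow the sizing table above instead of accepting a smaller swap volume, additional swap can be added without repartitioning. The sketch below uses a swap file; the path and size are illustrative assumptions, not part of the tested configuration:

# Create and enable an additional 16 GB swap file (illustrative path and size)
[root@oracle52 ~]# dd if=/dev/zero of=/swapfile01 bs=1M count=16384
[root@oracle52 ~]# chmod 600 /swapfile01
[root@oracle52 ~]# mkswap /swapfile01
[root@oracle52 ~]# swapon /swapfile01
# Make the new swap persistent across reboots
[root@oracle52 ~]# echo "/swapfile01 swap swap defaults 0 0" >> /etc/fstab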
The free command gives an overview of the current memory consumption. The -g option reports values in GB.

[root@oracle52 ~]# free -g
             total       used       free     shared    buffers     cached
Mem:           189         34        154          0          0         29
-/+ buffers/cache:          5        184
Swap:            3          0          3
Check the temporary space available

Oracle recommends having at least 1 GB of free space in /tmp.

[root@oracle52 ~]# df -h /tmp
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/mpathap2   39G  4.1G   33G  12% /
In our case, /tmp is part of /. Even if this is not an optimal setting, we are far above the 1 GB of free space.

Check for the kernel release

To determine which chip architecture each server is using and which version of the software you should install, run the following command at the operating system prompt as the root user:

[root@oracle52 ~]# uname -m
x86_64
Note that Oracle 12c is not available for the Linux 32-bit architecture. Then check the distribution and version you are using.

[root@oracle53 ~]# more /etc/redhat-release
Red Hat Enterprise Linux Server release 6.4 (Santiago)
Finally, go to My Oracle Support and check that this version is certified, in the certification tab, as shown in figure 7.

Figure 7. Copy of the certification status
Install the HP Service Pack for ProLiant and its RHEL 6.4 supplement

HP Service Pack for ProLiant (SPP) is a comprehensive systems software and firmware update solution, which is delivered as a single ISO image. This solution uses HP Smart Update Manager (HP SUM) as the deployment tool and is tested on all HP ProLiant Gen8, G7, and earlier servers as defined in the Service Pack for ProLiant Server Support Guide found at hp.com/go/spp/documentation. See figure 8 for download information. For the pre-requisites about HP SUM, see the installation documentation:
http://h18004.www1.hp.com/products/servers/management/unified/hpsum_infolibrary.html

The latest SPP for Red Hat 6.4, as well as a supplement for RHEL 6.4, can be downloaded from hp.com:
http://h20566.www2.hp.com/portal/site/hpsc/template.PAGE/public/psi/swdHome/?sp4ts.oid=5177950&spf_p.tpst=swdMain&spf_p.prp_swdMain=wsrp-navigationalState%3DswEnvOID%253D4103%257CswLang%253D%257Caction%253DlistDriver&javax.portlet.begCacheTok=com.vignette.cachetoken&javax.portlet.endCacheTok=com.vignette.cachetoken#Application%20-%

Figure 8. Download location for the SPP
In order to install the SPP, we first need to mount the ISO image. Then, from an X terminal, run the hpsum executable:

[root@oracle52 kits]# mkdir /cdrom
[root@oracle52 kits]# mount -o loop=/dev/loop0 HP_Service_Pack_for_Proliant_2013.02.0-0_725490-001_spp_2013.02.0-SPP2013020B.2013_0628.2.iso /cdrom
[root@oracle52 kits]# cd /cdrom/hp/swpackages
[root@oracle52 kits]# ./hpsum
Click Next.
Provide the credentials for root and click Next.
Select the components you need to install and click Install.
A sample list of updates to be done is displayed. Click OK; the system will work for about 10 to 15 minutes.
Operation completed. Check the log. SPP will require a reboot of the server once fully installed.
To install the RHEL 6.4 supplement for HP SPP, you must first untar the file before running hpsum again:

[root@oracle52 kits]# mkdir supspp.rhel6
[root@oracle52 kits]# mv supspp.rhel6.4.en.tar.gz supspp.rhel6
[root@oracle52 kits]# cd supspp.rhel6
[root@oracle52 kits]# tar xvf supspp.rhel6.4.en.tar.gz
[root@oracle52 kits]# ./hpsum
Next, follow the same procedure as with the regular SPP.

A last option to consider regarding the SPP is the online upgrade repository service: http://downloads.linux.hp.com/SDR/
This site provides yum and apt repositories for Linux-related software packages. Much of this content is also available from various locations at hp.com in ISO or tgz format, but if you prefer to use yum or apt, you may subscribe your systems to some or all of these repositories for quick and easy access to the latest rpm/deb packages from HP.

Check for the newly presented shared LUNs

The necessary shared LUNs might have been presented after the last server reboot. In order to discover new SCSI devices (such as Fibre Channel or SAS devices), you sometimes need to rescan the SCSI bus to add devices or to tell the kernel a device is gone.

Find what the host numbers are for the HBAs:

[root@oracle52 ~]# ls /sys/class/fc_host/
host1  host2
1. Ask the HBAs to issue a LIP signal to rescan the FC bus:

[root@oracle52 ~]# echo 1 >/sys/class/fc_host/host1/issue_lip
[root@oracle52 ~]# echo 1 >/sys/class/fc_host/host2/issue_lip
2. Wait around 15 seconds for the LIP command to have effect.
3. Ask Linux to rescan the SCSI devices on those HBAs:

[root@oracle52 ~]# echo - - - >/sys/class/scsi_host/host1/scan
[root@oracle52 ~]# echo - - - >/sys/class/scsi_host/host2/scan
The wildcards "- - -" mean to look at every channel, every target, every LUN. That's it. You can look for log messages at dmesg to see if it's working, and you can check at /proc/scsi/scsi to see if the devices are there. Alternatively, once the SPP is installed, an alternative is to use the hp_rescan utility. Look for it in /opt/hp. [root@oracle52 hp_fibreutils]# hp_rescan -h NAME hp_rescan DESCRIPTION Sends the rescan signal to all or selected Fibre Channel HBAs/CNAs. OPTIONS -a, --all - Rescan all Fibre Channel HBAs -h, --help - Prints this help message -i, --instance - Rescan a particular instance <SCSI host number> -l, --list - List all supported Fibre Channel HBAs
Another alternative is to install the sg3_utils package (yum install sg3_utils) from the main RHEL distribution DVD. It provides scsi-rescan (sym-linked to rescan-scsi-bus.sh).
Set the kernel parameters

Check the required kernel parameters by using the following commands:

# cat /proc/sys/kernel/sem
# cat /proc/sys/kernel/shmall
# cat /proc/sys/kernel/shmmax
# cat /proc/sys/kernel/shmmni
# cat /proc/sys/fs/file-max
# cat /proc/sys/net/ipv4/ip_local_port_range
The following values should be the result:

Parameter                       Value
kernel.semmsl                   250
kernel.semmns                   32000
kernel.semopm                   100
kernel.semmni                   128
kernel.shmall                   physical RAM size / pagesize (**)
kernel.shmmax                   half of the RAM or 4 GB (*)
kernel.shmmni                   4096
fs.file-max                     6815744
fs.aio-max-nr                   1048576
net.ipv4.ip_local_port_range    9000 65500
net.core.rmem_default           262144
net.core.rmem_max               4194304
net.core.wmem_default           262144
net.core.wmem_max               1048576

(*) maximum is 4294967296
(**) 8239044 in our case
In order to make these parameters persistent, update the /etc/sysctl.conf file.

[root@oracle52 hp_fibreutils]# vi /etc/sysctl.conf
. . .
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 101606905856    # Half the size of physical memory in bytes
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 24806374        # Half the size of physical memory in pages
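For completeness, a sketch of the full set of entries from the table above as they might appear in /etc/sysctl.conf is shown below. The shmmax and shmall values are the ones computed for this 192 GB server; adjust them to your own RAM size:

# Oracle 12c kernel parameters (values from the table above; adjust shmmax/shmall to your RAM)
kernel.sem = 250 32000 100 128
kernel.shmmax = 101606905856
kernel.shmall = 24806374
kernel.shmmni = 4096
fs.file-max = 6815744
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576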
Run sysctl -p to load the updated parameters in the current session.
Check the necessary packages

The following packages are necessary before installing Oracle Grid Infrastructure and Oracle RAC 12c:

binutils-2.20.51.0.2-5.11.el6 (x86_64)
compat-libcap1-1.10-1 (x86_64)
compat-libstdc++-33-3.2.3-69.el6 (x86_64)
compat-libstdc++-33-3.2.3-69.el6.i686
gcc-4.4.4-13.el6 (x86_64)
gcc-c++-4.4.4-13.el6 (x86_64)
glibc-2.12-1.7.el6 (i686)
glibc-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6.i686
ksh
libgcc-4.4.4-13.el6 (i686)
libgcc-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6.i686
libstdc++-devel-4.4.4-13.el6 (x86_64)
libstdc++-devel-4.4.4-13.el6.i686
libaio-0.3.107-10.el6 (x86_64)
libaio-0.3.107-10.el6.i686
libaio-devel-0.3.107-10.el6 (x86_64)
libaio-devel-0.3.107-10.el6.i686
libXext-1.1 (x86_64)
libXext-1.1 (i686)
libXtst-1.0.99.2 (x86_64)
libXtst-1.0.99.2 (i686)
libX11-1.3 (x86_64)
libX11-1.3 (i686)
libXau-1.0.5 (x86_64)
libXau-1.0.5 (i686)
libxcb-1.5 (x86_64)
libxcb-1.5 (i686)
libXi-1.3 (x86_64)
libXi-1.3 (i686)
make-3.81-19.el6
sysstat-9.0.4-11.el6 (x86_64)
unixODBC-2.2.14-11.el6 (64-bit) or later
unixODBC-devel-2.2.14-11.el6 (64-bit) or later

The packages above are necessary in order to install Oracle; the releases listed are the minimal releases required. You can check whether these packages are installed with one of the following commands:

# rpm -q make-3.79.1       # check the exact release
or
# rpm -qa | grep make      # syntax comparison in the rpm database
Due to the specific 64-bit architecture of x86_64, some packages are necessary in both the 32-bit and the 64-bit releases. The following command output specifies the base architecture of a given package:

# rpm -qa --queryformat "%{NAME}-%{VERSION}.%{RELEASE} (%{ARCH})\n" | grep glibc-devel
Finally, installation of the packages should be done using yum. This is the easiest way, as long as a repository server is available.

[root@oracle52 tmp]# yum list libaio-devel
Loaded plugins: rhnplugin, security
Available Packages
libaio-devel.i386     0.3.106-5     rhel-x86_64-server-5
libaio-devel.x86_64   0.3.106-5     rhel-x86_64-server-5

[root@oracle52 tmp]# yum install libaio-devel.i386
Loaded plugins: rhnplugin, security
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package libaio-devel.i386 0:0.3.106-5 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

============================================================================
 Package         Arch    Version      Repository              Size
============================================================================
Installing:
 libaio-devel    i386    0.3.106-5    rhel-x86_64-server-5    12 k

Total download size: 12 k
Is this ok [y/N]: y
Downloading Packages:
libaio-devel-0.3.106-5.i386.rpm                        |  12 kB     00:00
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing     : libaio-devel                                        1/1

Installed:
  libaio-devel.i386 0:0.3.106-5

Complete!
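To avoid installing packages one by one, the whole list above can be handed to yum in a single command. This is only a convenience sketch; it assumes a reachable repository and requests the i686 variants explicitly where both architectures are required:

[root@oracle52 tmp]# yum install binutils compat-libcap1 compat-libstdc++-33 compat-libstdc++-33.i686 \
  gcc gcc-c++ glibc glibc.i686 glibc-devel glibc-devel.i686 ksh libgcc libgcc.i686 \
  libstdc++ libstdc++.i686 libstdc++-devel libstdc++-devel.i686 libaio libaio.i686 \
  libaio-devel libaio-devel.i686 libXext libXext.i686 libXtst libXtst.i686 \
  libX11 libX11.i686 libXau libXau.i686 libxcb libxcb.i686 libXi libXi.i686 \
  make sysstat unixODBC unixODBC-devel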
Checking shared memory file system mount

On Linux x86-64, ensure that the /dev/shm mount area is of type tmpfs and is mounted with the following options:
- with rw and exec permissions set on it
- without noexec or nosuid set on it

Use the following procedure to check the shared memory file system:
1. Check the current mount settings. For example:
[root@oracle52 swpackages]# more /etc/fstab | grep "tmpfs"
tmpfs    /dev/shm    tmpfs    defaults    0 0
[root@oracle52 ~]# mount | grep tmpfs
tmpfs on /dev/shm type tmpfs (rw)
2. If necessary, change the mount settings. For example, log in as root, open the /etc/fstab file with a text editor, and modify the tmpfs line:

tmpfs    /dev/shm    tmpfs    rw,exec    0 0
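If you modify the fstab entry, the change can be applied without a reboot by remounting the file system and checking the resulting options; a minimal sketch:

# Remount /dev/shm with the new options and verify
[root@oracle52 ~]# mount -o remount /dev/shm
[root@oracle52 ~]# mount | grep /dev/shm
tmpfs on /dev/shm type tmpfs (rw)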
Preparing the network

Oracle RAC needs at least two physical interfaces. The first one is dedicated to the interconnect traffic. The second one is used for public access to the server and for the Oracle Virtual IP address as well. In case you want to implement bonding, consider additional network interfaces.

For clusters using single interfaces for private networks, each node's private interface for interconnects must be on the same subnet, and that subnet must be connected to every node of the cluster. For clusters using Redundant Interconnect Usage, each private interface should be on a different subnet. However, each cluster member node must have an interface on each private interconnect subnet, and these subnets must connect to every node of the cluster.

Private interconnect redundant network requirements

With Redundant Interconnect Usage, you can identify multiple interfaces to use for the cluster private network, without the need for bonding or other technologies. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2). If you use the Oracle Clusterware Redundant Interconnect feature, then you must use IPv4 addresses for the interfaces.

When you define multiple interfaces, Oracle Clusterware creates from one to four highly available IP (HAIP) addresses. Oracle RAC and Oracle Automatic Storage Management (Oracle ASM) instances use these interface addresses to ensure highly available, load-balanced interface communication between nodes. The installer enables Redundant Interconnect Usage to provide a high availability private network. By default, Oracle Grid Infrastructure software uses all of the HAIP addresses for private network communication, providing load-balancing across the set of interfaces you identify for the private network. If a private interconnect interface fails or becomes non-communicative, then Oracle Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces.

About the IP addressing requirement

This installation guide documents how to perform a typical installation. It doesn't cover the Grid Naming Service. For more information about GNS, refer to the Oracle Grid Infrastructure Installation Guide for Linux. You must configure the following addresses manually in your corporate DNS:
- A public IP address for each node
- A virtual IP address for each node
- A private IP address for each node
- Three single client access name (SCAN) addresses for the cluster. Note: the SCAN cluster name needs to be resolved by the DNS and should not be stored in the /etc/hosts file. Three addresses is a recommendation.

Before moving forward, we need to define the nodes and cluster information:

Data             Value
Cluster name     okc12c
SCAN address 1   172.16.0.34
SCAN address 2   172.16.0.35
SCAN address 3   172.16.0.36
Data                          Node 1          Node 2
Server public name            oracle52        oracle53
Server public IP address      172.16.0.52     172.16.0.53
Server VIP name               oracle52vip     oracle53vip
Server VIP address            172.16.0.32     172.16.0.33
Server private name 1         oracle52priv0   oracle53priv0
Server private IP address 1   192.168.0.52    192.168.0.53
Server private name 2         oracle52priv1   oracle53priv1
Server private IP address 2   192.168.1.52    192.168.1.53
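Because the SCAN name must resolve through DNS rather than /etc/hosts, it is worth checking the resolution from each node before starting the installation. The SCAN name okc12c-scan below is only an illustrative assumption; use whatever name your DNS administrator registered against the three SCAN addresses listed above:

# Hypothetical SCAN name; expect the three SCAN addresses defined above
[root@oracle52 ~]# nslookup okc12c-scan
Name:    okc12c-scan
Address: 172.16.0.34
Name:    okc12c-scan
Address: 172.16.0.35
Name:    okc12c-scan
Address: 172.16.0.36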
The current configuration should contain at least the following: eth0 and eth1 as the public and private interfaces, respectively. Please note that the interface naming should be the same on all nodes of the cluster. In the current case, eth2 was also initialized in order to set up the private interconnect redundant network.

[root@oracle52 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3c brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.53/21 brd 172.16.0.255 scope global eth0
    inet6 fe80::217:a4ff:fe77:ec3c/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:3e brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.53/24 brd 192.168.0.255 scope global eth1
    inet6 fe80::217:a4ff:fe77:ec3e/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:17:a4:77:ec:40 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.53/16 brd 192.168.255.255 scope global eth2
    inet6 fe80::217:a4ff:fe77:ec40/64 scope link
Enter into /etc/hosts the addresses and names for:
- the interconnect names for system 1 and system 2
- the VIP addresses for node 1 and node 2

[root@oracle52 network-scripts]# more /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
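The excerpt above only shows the loopback line; a sketch of the remaining /etc/hosts entries, built from the public, VIP and private addresses in the tables above (adapt to your own naming, and remember the SCAN name must not be added here), could look like this:

# Public network
172.16.0.52    oracle52
172.16.0.53    oracle53
# Virtual IP (VIP) addresses
172.16.0.32    oracle52vip
172.16.0.33    oracle53vip
# Private interconnect
192.168.0.52   oracle52priv0
192.168.0.53   oracle53priv0
192.168.1.52   oracle52priv1
192.168.1.53   oracle53priv1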
During the installation process, IPv6 can be unselected. IPv6 is not supported for the private interconnect traffic.
Setting Network Time Protocol for Cluster Time Synchronization

Oracle Clusterware requires the same time zone environment variable setting on all cluster nodes. During installation, the installation process picks up the time zone environment variable setting of the Grid installation owner on the node where OUI runs, and uses that time zone value on all nodes as the default TZ environment variable setting for all processes managed by Oracle Clusterware. The time zone default is used for databases, Oracle ASM, and any other managed processes.

Two options are available for time synchronization:
- An operating system configured network time protocol (NTP)
- Oracle Cluster Time Synchronization Service

Oracle Cluster Time Synchronization Service is designed for organizations where the cluster servers are unable to access NTP services. If you use NTP, then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode. If you do not have NTP daemons, then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server. In this case, Oracle will log warning messages into the CRS log as shown below. These messages can be ignored.

[ctssd(15076)]CRS-2409:The clock on host oracle52 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2010-09-17 16:55:28.920
[ctssd(15076)]CRS-2409:The clock on host oracle52 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
Update the /etc/ntp.conf file with the NTP server value.

[root@oracle52 network-scripts]# vi /etc/ntp.conf
. . .
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.rhel.pool.ntp.org
#server 1.rhel.pool.ntp.org
#server 2.rhel.pool.ntp.org
server 172.16.0.52    # ntp server address
Then, restart the NTP service.

[root@oracle52 network-scripts]# /sbin/service ntpd restart
Shutting down ntpd:    [  OK  ]
Starting ntpd:         [  OK  ]
Check whether the NTP server is reachable. The value in the reach column needs to be higher than 0:

[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp. .GPS.            1 u    5   64     1  133.520   15.473   0.000
In case the time difference between the database server and the NTP server is too large, you might have to manually resynchronize your server. Use the commands below for this:

[root@oracle52 ~]# service ntpd stop
[root@oracle52 ~]# ntpdate ntp.hp.net
[root@oracle52 ~]# service ntpd start
If you are using NTP, and you plan to continue using it instead of Cluster Time Synchronization Service, then you need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward; this is an Oracle requirement. Restart the network time protocol daemon after you complete this task.
To do this, edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:

[root@oracle52 network-scripts]# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g -x"
Known issue

Sometimes, the NTP server defined in ntp.conf acts as a load balancer and routes the requests to different machines. In that case, ntpq -p will report the same time but with a different refid on each node (see below); this shouldn't be a problem. However, the Oracle cluster verification compares the refids and raises an error if they are different.

[root@oracle53 kits]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.255.10    3 u    6   64     1  128.719    5.275   0.000
[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp.hp.net      172.16.58.10     3 u    3   64     1  108.900   12.492   0.000
The error will be logged as:

INFO:
INFO: Error Message:PRVF-5408 : NTP Time Server "172.16.58.10" is common only to the following nodes "oracle52"
INFO:
INFO: Cause: One or more nodes in the cluster do not synchronize with the NTP Time Server indicated.
INFO:
INFO: Action: At least one common NTP Time Server is required for a successful Clock Synchronization check. If there are none, reconfigure all of the nodes in the cluster to synchronize with at least one common NTP Time Server.
INFO:
INFO: Error Message:PRVF-5408 : NTP Time Server "172.16.255.10" is common only to the following nodes "oracle53"
INFO:
INFO: Cause: One or more nodes in the cluster do not synchronize with the NTP Time Server indicated.
INFO:
INFO: Action: At least one common NTP Time Server is required for a successful Clock Synchronization check. If there are none, reconfigure all of the nodes in the cluster to synchronize with at least one common NTP Time Server.
INFO:
INFO: Error Message:PRVF-5416 : Query of NTP daemon failed on all nodes
INFO:
INFO: Cause: An attempt to query the NTP daemon using the 'ntpq' command failed on all nodes.
INFO:
INFO: Action: Make sure that the NTP query command 'ntpq' is available on all nodes and make sure that the user running the CVU check has permissions to execute it.
Ignoring this error will generate a failure at the end of the installation process, as shown in figure 9 below.
Figure 9. runInstaller error related to the NTP misconfiguration.
In order to work around this issue, it is mandatory to get the same refid on all nodes of the cluster. Best case is to point to a single NTP server or to a GPS server, as shown in the example below:

[root@oracle52 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp2.austin.hp. .GPS.            1 u    5   64     1  133.520   15.473   0.000
Check the SELinux setting

In some circumstances, the SELinux setting might generate failures during the cluster check or the root.sh execution. In order to completely disable SELinux, set disabled as the value of the SELINUX parameter in /etc/selinux/config:

[root@oracle53 /]# more /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - SELinux is fully disabled.
SELINUX=disabled
This update is static and requires a reboot of the server. To change the SELinux enforcement mode dynamically (without a reboot), use the following commands:

[root@oracle52 oraInventory]# getenforce
Enforcing
[root@oracle52 oraInventory]# setenforce 0
[root@oracle52 oraInventory]# getenforce
Permissive
You might also have to disable iptables in order to get access to the server using VNC.

[root@oracle52 .vnc]# service iptables stop
iptables: Flushing firewall rules:                 [  OK  ]
iptables: Setting chains to policy ACCEPT: filter  [  OK  ]
iptables: Unloading modules:                       [  OK  ]
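Note that service iptables stop only lasts until the next reboot. If you also want the firewall to stay disabled across reboots, a possible additional step (check it against your security policy first) is:

# Prevent the firewall services from starting at boot
[root@oracle52 ~]# chkconfig iptables off
[root@oracle52 ~]# chkconfig ip6tables off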
For more about the iptables setting, refer to the Red Hat documentation.
Create the grid and oracle users and groups

The uid and gid have to be the same on all nodes of the cluster. Use the useradd and groupadd parameters to specify the uid and gid explicitly. Let's first check whether the uids and gids we plan to use are already taken:

[root@oracle52 ~]# grep -E '504|505|506|507|508|509' /etc/group
[root@oracle52 ~]#
[root@oracle52 ~]# grep -E '502|501' /etc/passwd
[root@oracle52 ~]#
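The creation commands themselves are not listed in this cookbook; a sketch consistent with the uids, gids and group memberships shown in the id output that follows (run identically on both nodes) could be:

# Groups (same gids on every node)
[root@oracle52 ~]# groupadd -g 509 oinstall
[root@oracle52 ~]# groupadd -g 504 asmadmin
[root@oracle52 ~]# groupadd -g 505 asmdba
[root@oracle52 ~]# groupadd -g 506 asmoper
[root@oracle52 ~]# groupadd -g 507 dba
[root@oracle52 ~]# groupadd -g 508 oper
# Users (same uids on every node)
[root@oracle52 ~]# useradd -u 501 -g oinstall -G asmdba,dba,oper oracle
[root@oracle52 ~]# useradd -u 502 -g oinstall -G asmadmin,asmdba,asmoper,dba grid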
Oracle strongly encourages you to create the users and groups carefully. The general cluster and database behavior might be negatively impacted if the ownership rules are not respected. This is mainly true if the GRID_HOME and the ORACLE_HOME are owned by two different users. Thus, check that the users are members of the correct groups.

[root@oracle52 ~]# id oracle
uid=501(oracle) gid=509(oinstall) groups=509(oinstall),505(asmdba),507(dba),508(oper)
[root@oracle52 ~]# id grid
uid=502(grid) gid=509(oinstall) groups=509(oinstall),504(asmadmin),505(asmdba),506(asmoper),507(dba)
Finally, define the oracle and grid user passwords.

[root@oracle52 sshsetup]# passwd oracle
[root@oracle52 sshsetup]# passwd grid
Configure the secure shell service

To install Oracle software, Secure Shell (SSH) connectivity must be set up between all cluster member nodes. Oracle Universal Installer (OUI) uses the ssh and scp commands during installation to run remote commands on, and copy files to, the other cluster nodes. You must configure SSH so that these commands do not prompt for a password. Oracle Enterprise Manager also uses SSH.

You can configure SSH from the OUI interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure if possible.

It's also possible to use a script provided in the Grid Infrastructure distribution. To enable the script to run, you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into the Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run.

In the current case, the SSH setup was done using the Oracle script for both the grid and the oracle user. During the script execution, the user password needs to be provided 4 times. We also included a basic connection check in the example below. The SSH setup script needs to be run on both nodes of the cluster.

[root@oracle52 sshsetup]# su grid
[grid@oracle52 sshsetup]$ ./sshUserSetup.sh -user grid -hosts "oracle52 oracle53"
[grid@oracle52 sshsetup]$ ssh oracle53 date
Wed Jul 24 14:05:13 CEST 2013
[grid@oracle52 sshsetup]$ exit
logout
[root@oracle52 sshsetup]# su oracle
[oracle@oracle52 ~]$ ./sshUserSetup.sh -user oracle -hosts "oracle52 oracle53"
[oracle@oracle52 ~]$ ssh oracle53 date
Wed Jul 24 14:02:16 CEST 2013
Issue: the authorized_keys file was not correctly updated. For passphrase-free access in both directions, it is necessary to manually copy the public key from the remote node to the local node, as described below.
[grid@oracle53 .ssh]$ scp id_rsa.pub oracle52:/home/grid/.ssh/rsa@oracle53
[grid@oracle52 .ssh]$ cat rsa@oracle53 >> authorized_keys
Alternatively, it is also possible to set up the secure shell manually between all nodes in the cluster.
1. On each node, check whether ssh connectivity is already working:
# ssh nodename1 date # ssh nodename2 date
2. Generate key: # ssh-keygen -b 1024 -t dsa
Accept the default value without a passphrase.
3. Export the public key to the remote node:
# cd ~/.ssh
# scp id_dsa.pub nodename2:.ssh/id_dsa_username@nodename1.pub
To establish whether SSH is correctly configured, run the following commands:
# ssh nodename1 date (should return the date of node1)
# ssh nodename2 date (should return the date of node2)
# ssh private_interconnect_nodename1 date (should return the date of node1)
# ssh private_interconnect_nodename2 date (should return the date of node2)
If this works without prompting for any password, SSH is correctly configured.
Note
The important point here is that no password is requested.
Set the limits
To improve the performance of the software, you must increase the following shell limits for the oracle and grid users. Update /etc/security/limits.conf with the following:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
grid soft memlock 41984000
grid hard memlock 41984000
oracle soft memlock 41984000
oracle hard memlock 41984000
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
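Oracle's installation guide also recommends making sure the pam_limits module is enabled for login sessions (for example in /etc/pam.d/login); verify this against your own PAM configuration. As a quick sanity check, the new soft limits can be read from a fresh login session for each software owner, for instance:
[root@oracle52 ~]# su - grid -c "ulimit -n -u -s"
[root@oracle52 ~]# su - oracle -c "ulimit -n -u -s"
The expected soft values are 1024 open files, 2047 processes and a 10240 KB stack, as defined above.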
Installing the cvuqdisk RPM for Linux
The Oracle Pre-Install RPM is not available for Red Hat 6.4; thus, you must install the cvuqdisk RPM manually. Without cvuqdisk, the Cluster Verification Utility cannot discover shared disks, and you receive the error message "Package cvuqdisk not installed" when you run the Cluster Verification Utility. To install the cvuqdisk RPM, complete the following procedure:
1. Locate the cvuqdisk RPM package, which is in the rpm directory of the Oracle Grid Infrastructure installation media.
2. Copy the cvuqdisk package to each node of the cluster.
[root@oracle52 rpm]# scp cvuqdisk-1.0.9-1.rpm oracle53:/tmp/
3. As root, use the following command to find out whether you have an existing version of the cvuqdisk package:
[root@oracle52 rpm]# rpm -qi cvuqdisk
If you have an existing version, then enter the following command to de-install the existing version: # rpm -e cvuqdisk
4. Set the environment variable CVUQDISK_GRP to point to the group that will own cvuqdisk, typically oinstall. For example: [root@oracle52 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
5. In the directory where you have saved the cvuqdisk rpm, use the following command to install the cvuqdisk package:
[root@oracle52 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...    ########################################### [100%]
   1:cvuqdisk   ########################################### [100%]
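The same package must also be installed on the other node. Assuming the RPM was copied to /tmp on oracle53 as in step 2, a possible one-liner is:
[root@oracle52 rpm]# ssh oracle53 "export CVUQDISK_GRP=oinstall; rpm -ivh /tmp/cvuqdisk-1.0.9-1.rpm"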
Storage connectivity driver configuration
From Red Hat 5.3 onwards, only the QLogic and multipath inbox drivers are supported, as stated in the quote below:
"Beginning with Red Hat RHEL 5.2 and Novell SLES 10 SP2, HP will offer a technology preview for inbox HBA drivers in a non-production environment. HP will provide full support with subsequent Red Hat RHEL 5.3 and Novell SLES10 SP3 releases."
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=120&prodSeriesId=3559651&prodTypeId=18964&objectID=c01430228
HP used to provide an enablement kit for the device-mapper. This is no longer the case with Red Hat 6.x. However, a reference guide is still maintained and is available on the HP storage reference site, SPOCK (login required). The document can be reached here.
Check if the multipath driver is installed:
[root@oracle52 yum.repos.d]# rpm -qa | grep multipath
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
[root@oracle52 yum.repos.d]# rpm -qa | grep device-mapper
device-mapper-persistent-data-0.1.4-1.el6.x86_64
device-mapper-event-libs-1.02.77-9.el6.x86_64
device-mapper-event-1.02.77-9.el6.x86_64
device-mapper-multipath-0.4.9-64.el6.x86_64
device-mapper-libs-1.02.77-9.el6.x86_64
device-mapper-1.02.77-9.el6.x86_64
device-mapper-multipath-libs-0.4.9-64.el6.x86_64
To check which HBAs are installed in the system, use the lspci command.
[root@oracle52 yum.repos.d]# lspci | grep Fibre
05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
05:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
Check if the multipath daemon is already running.
[root@oracle52 ~]# chkconfig --list | grep multi
multipathd 0:off 1:off 2:off 3:on 4:on 5:on 6:off
[root@oracle52 ~]# service multipathd status
multipathd (pid 5907) is running...
If the multipath driver is not enabled by default at boot, change the configuration. # chkconfig [--level levels] multipathd on
Configuration of /etc/multipath.conf
The /etc/multipath.conf file consists of the following sections, used to configure the attributes of a multipath device:
System defaults (defaults)
Black-listed devices (devnode_blacklist/blacklist)
Storage array model settings (devices)
Multipath device settings (multipaths)
Blacklist exceptions (blacklist_exceptions)
The defaults section defines default values for attributes which are used whenever required settings are unavailable. The blacklist section defines which devices should be excluded from the multipath topology discovery. The blacklist_exceptions section defines which devices should be included in the multipath topology discovery despite being listed in the blacklist section. The multipaths section defines the multipath topologies; they are indexed by a World Wide Identifier (WWID). The devices section defines the device-specific settings based on vendor and product values.
Check the current, freshly installed configuration:
[root@oracle52 yum.repos.d]# multipathd -k
multipathd> show config
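Besides show config, the interactive multipathd shell offers a few other commands that are handy while validating the setup; show maps lists the multipath maps and show paths lists every path with its state. A short, illustrative session:
[root@oracle52 ~]# multipathd -k
multipathd> show config
multipathd> show maps
multipathd> show paths
multipathd> quit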
In order to customize DM Multipath features or to add support for HP devices which are not built in, the user needs to modify /etc/multipath.conf. It is advisable to include arrays that are already built in as well. For now, our multipath.conf file looks like this:
[root@oracle52 yum.repos.d]# more /etc/multipath.conf
# multipath.conf written by anaconda
defaults {
        user_friendly_names yes
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^dcssblk[0-9]*"
        device {
                vendor "DGC"
                product "LUNZ"
        }
        device {
                vendor "IBM"
                product "S/390.*"
        }
        # don't count normal SATA devices as multipaths
        device {
                vendor "ATA"
        }
        # don't count 3ware devices as multipaths
        device {
                vendor "3ware"
        }
        device {
                vendor "AMCC"
        }
        # nor highpoint devices
        device {
                vendor "HPT"
        }
        device {
                vendor HP
                product Virtual_DVD-ROM
        }
        wwid "*"
}
blacklist_exceptions {
        wwid "360002ac0000000000000001f00006e40"
}
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
}
We need to add the HP 3PAR array profile and its suggested settings to the /etc/multipath.conf file under the devices section:
# multipath.conf written by anaconda
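The detailed HP 3PAR values from the original listing are not reproduced in this extract; they should be taken from the SPOCK reference guide mentioned above. As an illustration only, a devices stanza for an HP 3PAR array typically looks like the sketch below (the attribute values are assumptions, not the authoritative HP settings):
devices {
        device {
                vendor                  "3PARdata"
                product                 "VV"
                path_grouping_policy    multibus
                path_selector           "round-robin 0"
                path_checker            tur
                hardware_handler        "0"
                failback                immediate
                rr_weight               uniform
                no_path_retry           18
        }
}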
Then, rebuild the initramfs.
[root@oracle52 yum.repos.d]# cd /boot
[root@oracle52 boot]# mv initramfs-2.6.32-358.el6.x86_64.img initramfs-2.6.32-358.el6.x86_64.img.yan
[root@oracle52 boot]# dracut
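To confirm that the freshly built initramfs picked up the multipath configuration, the lsinitrd utility shipped with dracut can be used; a minimal check (it should list the multipath binaries and /etc/multipath.conf) could be:
[root@oracle52 boot]# lsinitrd /boot/initramfs-2.6.32-358.el6.x86_64.img | grep -i multipath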
Finally, we may update the boot menu for rollback purposes. Add the backup entry shown at the end of the listing below (the title ending in "bkp", which points to the saved initramfs image).
[root@oracle52 boot]# cd /boot/grub
[root@oracle52 grub]# vi menu.lst
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
#         all kernel and initrd paths are relative to /boot/, eg.
#         root (hd0,0)
#         kernel /vmlinuz-version ro root=/dev/mapper/mpathap2
#         initrd /initrd-[generic-]version.img
#boot=/dev/mpatha
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img
title Red Hat Enterprise Linux (2.6.32-358.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
        initrd /initramfs-2.6.32-358.el6.x86_64.img
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64) bkp
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto
        initrd /initramfs-2.6.32-358.14.1.el6.x86_64.img.yan
The QLogic parameters will only be used after the next reboot.
Enable multipathing for the Oracle shared volumes
The multipath devices are created in the /dev/mapper directory of the hosts. These devices are similar to any other block devices present in the host and are used for any block or file level I/O operations, such as creating the file system. You must use the devices under /dev/mapper/. You can create a user-friendly device alias by using the alias and WWID attributes of the multipath device in the multipaths subsection of the /etc/multipath.conf file.
We already created 5 LUNs (1 dedicated to each node for the operating system and 3 shared for ASM) in the HP 3PAR SAN and presented them to both oracle52 and oracle53. So far, only the system LUN is configured. To check the available paths to the root device, execute the following command:
[root@oracle52 yum.repos.d]# multipath -l
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
Next, we have to make sure we have persistent device names within the cluster. With the default settings in /etc/multipath.conf, it is necessary to reconfigure the mapping information by using the -v0 parameter of the multipath command.
[root@oracle52 ~]# multipath -v0
[root@oracle52 ~]# multipath -l
mpathd (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
mpathc (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32 active undef running
  `- 2:0:0:2 sdg 8:96 active undef running
mpathb (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  `- 2:0:0:1 sdf 8:80 active undef running
mpatha (360002ac0000000000000001f00006e40) dm-0 3PARdata,VV
size=100G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:0 sda 8:0  active undef running
  `- 2:0:0:0 sde 8:64 active undef running
[root@oracle52 ~]#
[root@oracle52 ~]# ls /dev/mapper
control mpatha mpathap1 mpathap2 mpathap3 mpathb mpathc mpathd
These WWIDs can now be used to create customized multipath device names by adding the entries below to /etc/multipath.conf.
multipaths {
        multipath {
                uid 0
                gid 0
                wwid "360002ac0000000000000001f00006e40"
                mode 0600
        }
        multipath {
                wwid 360002ac0000000000000002100006e40
                alias voting
        }
        multipath {
                wwid 360002ac0000000000000002200006e40
                alias data01
        }
        multipath {
                wwid 360002ac0000000000000002300006e40
                alias fra01
        }
}
In order to create the multipath devices with the defined alias names, execute "multipath -v0" (you may first need to execute "multipath -F" to get rid of the old device names).
[root@oracle52 ~]# multipath -F
[root@oracle52 ~]# multipath -v1
fra01
data01
voting
[root@oracle52 ~]# ls /dev/mapper
control data01 fra01 mpatha mpathap1 mpathap2 mpathap3 voting
With 12c, we do not need to bind the block devices to raw devices, as raw is not supported anymore. If we were not using ASMLib, we would need to manage the right level of permissions on the shared volumes. This can be achieved in two ways:
1. Update the rc.local file
2. Create a udev rule (see the example below, which is not relevant to our environment)
In such a case, we would have to update the system as below. The file called 99-oracle.rules is a copy of /etc/udev/rules.d/60-raw.rules which has been updated with our own data.
[root@dbkon01 rules.d]# pwd
/etc/udev/rules.d
[root@dbkon01 rules.d]# more 99-oracle.rules
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
#
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
# OCR and voting disk
KERNEL=="mapper/voting", OWNER="root", GROUP="oinstall", MODE="640"
# Database and FRA volumes
KERNEL=="mapper/data01", OWNER="oracle", GROUP="dba", MODE="660"
KERNEL=="mapper/fra01", OWNER="oracle", GROUP="dba", MODE="660"
However, as ASMLib is used, there is no need to ensure permissions and device path persistency in udev.
Install the ASMLib support library
Oracle ASM (Automatic Storage Management) is a data volume manager for Oracle databases. ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices. ASM assists users in disk management by keeping track of the storage devices dedicated to Oracle databases and by allocating space on those devices according to the requests from Oracle database instances.
ASMLib was initially developed by Oracle for the major paid Linux distributions. However, since Red Hat 6.0, Oracle only provides this library for Oracle Linux. Since version 6.4, Red Hat provides its own library as part of the supplementary channel. As of version 6, the RH ASMLib is not supported. HP published, some time ago, a white paper describing how to articulate the device-mapper with ASMLib; this white paper is available here.
ASMLib consists of the following components:
An open source (GPL) kernel module package: kmod-oracleasm (provided by Red Hat)
An open source (GPL) utilities package: oracleasm-support (provided by Oracle)
A closed source (proprietary) library package: oracleasmlib (provided by Oracle)
The Oracle packages can be downloaded from here. For the installation, move to the directory where the packages are located and install them.
[root@oracle52 ASMLib]# yum install kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm oracleasmlib-2.0.4-1.el6.x86_64.rpm oracleasm-support-2.1.8-1.el6.x86_64.rpm
The ASM driver needs to be loaded, and the driver filesystem needs to be mounted. This is taken care of by the initialization script, /etc/init.d/oracleasm. Run the /etc/init.d/oracleasm script with the 'configure' option. It will ask for the user and group that default to owning the ASM driver access point. This step has to be done on every node of the cluster.
[root@oracle52 ASMLib]# /usr/sbin/oracleasm init
[root@oracle52 ASMLib]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library driver. The following questions will determine whether the driver is loaded on boot and what permissions it will have. The current values will be shown in brackets ('[]'). Hitting <ENTER> without typing an answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid Default group to own the driver interface []: asmadmin Start Oracle ASM library driver on boot (y/n) [n]: y Scan for Oracle ASM disks on boot (y/n) [y]: y Writing Oracle ASM library driver configuration: done Initializing the Oracle ASMLib driver: [ OK ] Scanning the system for Oracle ASMLib disks: [ OK ]
The enable/disable options of the oracleasm script control whether the driver is started automatically at boot.
The system administrator has one last task: every disk that ASMLib is going to access needs to be created and made available. This is accomplished by marking each disk as an ASM disk, once for the entire cluster.
[root@oracle52 ASMLib]# oracleasm createdisk VOTING /dev/mapper/voting
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk DATA01 /dev/mapper/data01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm createdisk FRA01 /dev/mapper/fra01
Writing disk header: done
Instantiating disk: done
[root@oracle52 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
When a disk is added to a RAC setup, the other nodes need to be notified about it. Run the 'createdisk' command on one node, and then run 'scandisks' on every other node.
[root@oracle53 ASMLib]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@oracle53 ASMLib]# oracleasm listdisks
DATA01
FRA01
VOTING
Finally, check the ownership of the ASM devices. They should belong to the asmadmin group.
[root@oracle52 ASMLib]# ls -l /dev/oracleasm/disks/
brw-rw---- 1 grid asmadmin 253, 5 Jul 25 15:26 DATA01
brw-rw---- 1 grid asmadmin 253, 4 Jul 25 15:26 FRA01
brw-rw---- 1 grid asmadmin 253, 6 Jul 25 15:26 VOTING
There are some other useful commands like deletedisk, querydisk, listdisks, etc.
In order to optimize Oracle's scanning effort when preparing the ASM disks, we can update the oracleasm parameter file as below. In this update we defined a scan order giving priority to the multipath devices and excluded the single-path devices from the scan.
[root@oracle52 ~]# vi /etc/sysconfig/oracleasm
. . .
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="/dev/mapper"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
Check that oracleasm will be started automatically after the next boot.
[root@oracle52 sysconfig]# chkconfig --list oracleasm
oracleasm 0:off 1:off 2:on 3:on 4:on 5:on 6:off
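Among the other oracleasm sub-commands mentioned earlier, querydisk is convenient for troubleshooting: it reports whether a given label is a valid ASM disk. A minimal example:
[root@oracle52 ~]# oracleasm querydisk DATA01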
Check the available disk space
Starting with RAC 11gR2, only 2 ORACLE_HOMEs are needed instead of 3 with the previous releases. The reason is that the ASM directory is now part of the cluster ORACLE_HOME (also called GRID ORACLE_HOME). Oracle considers storage and cluster management to be system administration tasks, while the database is a DBA task. The $ORACLE_BASE of the grid and the oracle users must be different.
For the installation, we need the following disk space:
At least 3.5 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files.
5.8 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).
OCR and voting disks: one of each, or more if normal or high redundancy is used. The size of each file is 1 GB.
Database space: depends on how big the database will be. Oracle recommends at least 2 GB.
Temporary space: Oracle requires 1 GB of space in /tmp. /tmp is used by default; another location may be used by setting ORA_TMP and ORA_TEMP in the oracle user environment prior to installation.
In this example, we created the following directories:
Path                           Usage                                    Size
/u01/app/oracle/               $ORACLE_BASE for the oracle db owner     5.8 GB
/u01/app/oracle/12c            $ORACLE_HOME for the oracle db user
/u01/app/base                  $ORACLE_BASE for the grid owner          3.5 GB
/u01/app/grid/12c              $ORACLE_HOME for the grid user
/dev/oracleasm/disks/FRA01     Flash recovery area (ASM)                20 GB
/dev/oracleasm/disks/VOTING    OCR (volume)                             2 GB
/dev/oracleasm/disks/DATA01    Database (volume)                        20 GB
Create the installation directories and set the appropriate privileges on both nodes for the grid user.
[root@oracle53 u01]# mkdir -p /u01/app/grid/12c
[root@oracle53 u01]# chown -R grid:oinstall /u01/app/grid
[root@oracle53 u01]# chmod -R 775 /u01/app/grid
Create the installation directories and set the appropriate privileges on both nodes for the oracle user.
[root@oracle52 oracle]# mkdir /u01/app/oracle/12c
[root@oracle52 oracle]# chown -R oracle:oinstall /u01/app/oracle
[root@oracle52 oracle]# chmod -R 775 /u01/app/oracle
Setting the disk I/O scheduler on Linux
Disk I/O schedulers reorder, delay, or merge requests for disk I/O to achieve better throughput and lower latency. Linux has multiple disk I/O schedulers available, including Deadline, Noop, Anticipatory, and Completely Fair Queuing (CFQ). For best performance with Oracle ASM, Oracle recommends that you use the Deadline I/O scheduler.
In order to change the I/O scheduler, we first need to identify the device-mapper path for each and every ASM disk.
[root@oracle52 sys]# multipath -l
data01 (360002ac0000000000000002200006e40) dm-5 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:2 sdc 8:32 active undef running
  `- 2:0:0:2 sdg 8:96 active undef running
fra01 (360002ac0000000000000002300006e40) dm-4 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:1 sdb 8:16 active undef running
  `- 2:0:0:1 sdf 8:80 active undef running
voting (360002ac0000000000000002100006e40) dm-6 3PARdata,VV
size=20G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 1:0:0:3 sdd 8:48  active undef running
  `- 2:0:0:3 sdh 8:112 active undef running
An alternative for identifying the LUN is to use scsi_id. For instance:
[root@oracle52 sys]# scsi_id --whitelist --replace-whitespace --device=/dev/mapper/data01
360002ac0000000000000002200006e40
On each cluster node, enter the following commands to ensure that the Deadline disk I/O scheduler is configured for use:
[root@oracle52 sys]# echo deadline > /sys/block/dm-4/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-5/queue/scheduler
[root@oracle52 sys]# echo deadline > /sys/block/dm-6/queue/scheduler
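Since the dm-N numbering may change across reboots, it can be safer to resolve the minor number from the multipath alias instead of hard-coding dm-4, dm-5 and dm-6. A small sketch doing this for the three ASM volumes:
# Resolve each alias to its dm-<minor> device and set the deadline scheduler
for vol in data01 fra01 voting; do
    minor=$(dmsetup info -c --noheadings -o minor "$vol" | tr -d ' ')
    echo deadline > /sys/block/dm-${minor}/queue/scheduler
done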
Next, check that the I/O scheduler status has been updated:
[root@oracle52 sys]# cat /sys/block/dm-6/queue/scheduler
noop anticipatory [deadline] cfq
In order to make this change persistent across reboots, we can update /etc/grub.conf.
[root@oracle52 sys]# vi /etc/grub.conf
. . .
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-358.14.1.el6.x86_64)
        root (hd0,0)
        kernel /vmlinuz-2.6.32-358.14.1.el6.x86_64 ro root=UUID=51b7985c-3b07-4543-9851-df05e4e54e0b rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet crashkernel=auto elevator=deadline
Determining the root script execution plan
During Oracle Grid Infrastructure installation, the installer requires you to run scripts with superuser (root) privileges to complete a number of system configuration tasks. You can continue to run scripts manually as root, or you can delegate to the installer the privilege to run configuration steps as root, using one of the following options:
Use the root password: provide the password to the installer as you are providing other configuration information. The password is used during installation and not stored. The root user password must be identical on each cluster member node. To enable root command delegation, provide the root password to the installer when prompted.
Use sudo: sudo is a UNIX and Linux utility that allows members of the sudoers list to run individual commands as root. To enable sudo, have a system administrator with the appropriate privileges configure a user that is a member of the sudoers list, and provide the username and password when prompted during installation.
[root@oracle52 sys]# visudo
## Allow root to run any commands anywhere
root   ALL=(ALL)  ALL
grid   ALL=(ALL)  NOPASSWD: ALL
oracle ALL=(ALL)  NOPASSWD: ALL
Once this setting is enabled, the grid and oracle users can act as root by prefixing each command with sudo. For instance:
[root@oracle52 sys]# su - grid
[grid@oracle52 ~]$ sudo yum install glibc-utils.x86_64
Loaded plugins: product-id, refresh-packagekit, rhnplugin, security, subscription-manager
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Obviously, enabling sudo for the grid and oracle users raises security issues. It is recommended to turn sudo off right after the binary installation is complete.
Oracle Clusterware installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes.
export ORACLE_BASE=/u01/app/base
export ORACLE_HOME=/u01/app/grid/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Check the environment before installation
In order for runcluvfy.sh to run correctly with Red Hat 6, redhat-release-6Server-1.noarch.rpm needs to be installed. This is a "dummy" rpm which has to be installed as the root user as follows:
[root@oracle53 kits]# rpm -ivh redhat-release-6Server-1.noarch.rpm
Preparing...        ######################################### [100%]
   1:redhat-release ######################################### [100%]
This is required because runcluvfy runs the command "rpm -q --qf %{version} redhat-release-server" and expects "6Server" to be returned. In Red Hat 6, the redhat-release-server rpm does not exist. Download the rpm from My Oracle Support Doc ID 1514012.1. Don't be confused by the platform; download the clupack.zip file attached to the document and install the package.
Then, run the cluster verification utility, which is located in the base directory of the media file, and check for any missing setup:
./runcluvfy.sh stage -pre crsinst -n oracle52,oracle53 -verbose >> /tmp/cluvfy.log
In our case an error related to the swap space was reported. We can ignore it.
RunInstaller
Start runInstaller from your distribution location. The runInstaller program is located in the root directory of the distribution. In order to run the installer graphical interface, it is necessary to set up a vncserver session or an X terminal and a display.
In a basic single installation environment, there is no need for an automatic update; whether to apply automatic updates is a customer-specific decision.
Select Install and Configure Oracle Grid Infrastructure for a Cluster.
In this example, the goal is to install a standard cluster, not a flex cluster.
Select Advanced Installation.
Select optional languages if needed.
Enter cluster name and SCAN name. Remember, the SCAN name needs to be resolved by the DNS. For high availability purposes, Oracle recommends using 3 IP addresses for the SCAN service. The service will also work if only one is used.
Configure the public and VIP names of all nodes in the cluster. The SSH setting was done earlier. It is also possible to double-check if everything is fine from this screen. A failure here will prevent the installation from being successful. Then click Next.
Define the role for each Ethernet port. As mentioned earlier, we dedicated 2 interfaces to the private interconnect traffic. Oracle will enable HA capability using the 2 interfaces.
Click Yes to create a database repository for the Grid Infrastructure Management Repository.
Oracle recommends using Standard ASM as the storage option. We pre-configured the system for the ASM implementation.
In this screen, it is time to create a first ASM diskgroup. This diskgroup will be used to store the cluster voting disk as well as the OCR repository.
Define the password for the ASM instance.
We chose not to configure IPMI (Intelligent Management Platform Interface) during the installation.
IPMI provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system. With Oracle 12c, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity. You can configure node-termination during installation by selecting a node-termination protocol, such as IPMI.
Define the groups for the ASM instance owner in accordance with the groups created earlier.
Check the paths for $ORACLE_BASE and $ORACLE_HOME. Once again, both directories should be parallel; $ORACLE_HOME can't be a subdirectory of $ORACLE_BASE.
Set the inventory location to the path created earlier.
Define the sudo credentials by providing the grid user password.
The first warning can be ignored. It is related to the swap space as explained earlier.
Regarding the second warning:
PRVF-5150: Path ORCL:DISK1 is not a valid path on all nodes
Operation Failed on Nodes: []
Refer to the My Oracle Support (MOS) note "Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path":
MOS DOC: "Device Checks for ASM Fails with PRVF-5150: Path ORCL: is not a valid path [ID 1210863.1]"
Solution: at the time of this writing, bug 10026970 is fixed in 11.2.0.3, which is not released yet. If the ASM device passes manual verification, the warning can be ignored.
Manual verification
To verify the ASMLib status:
$ /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[grid@oracle52 ~]$ dd if=/dev/oracleasm/disks/DATA01 of=/dev/null bs=1024k count=1
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00401004 s, 261 MB/s
Confirm that we want to ignore the warnings.
Summary of the installation settings.
Click Yes for running the sudo root.sh command.
Click Next.
Installation completed. Click Close. The installation log is located in /u01/app/oracle/oraInventory/logs/
Check the installation
Processes
Check that the processes are running on both nodes.
ps -ef | grep ora
ps -ef | grep d.bin
Node information
olsnodes provides information about the nodes in the CRS cluster and their interfaces. This is roughly similar to the previous releases.
[grid@oracle52 ~]$ olsnodes -h
Usage: olsnodes [ [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] ] | [-c] | [-a] ] [-g] [-v]
where
  -n      print node number with the node name
  -p      print private interconnect address for the local node
  -i      print virtual IP address with the node name
  <node>  print information for the specified node
  -l      print information for the local node
  -s      print node status - active or inactive
  -t      print node type - pinned or unpinned
  -g      turn on logging
  -v      run in debug mode; use at direction of Oracle Support only
  -c      print clusterware name
  -a      print active node roles of the nodes in the cluster
[grid@oracle52 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
crs_stat and crsctl deliver useful information about the status of the cluster. Nevertheless, the crs_stat command is deprecated and has been replaced by 'crsctl status resource'; crs_stat remains for backward compatibility only. crsctl does much more than crs_stat, as it manages the entire set of cluster resources.
[grid@oracle52 ~]$ crsctl -h
Usage: crsctl add       - add a resource, type or other entity
       crsctl backup    - back up voting disk for CSS
       crsctl check     - check a service, resource or other entity
       crsctl config    - output autostart configuration
       crsctl debug     - obtain or modify debug state
       crsctl delete    - delete a resource, type or other entity
       crsctl disable   - disable autostart
       crsctl discover  - discover DHCP server
       crsctl enable    - enable autostart
       crsctl eval      - evaluate operations on resource or other entity without performing them
       crsctl get       - get an entity value
       crsctl getperm   - get entity permissions
       crsctl lsmodules - list debug modules
       crsctl modify    - modify a resource, type or other entity
       crsctl query     - query service state
       crsctl pin       - pin the nodes in the nodelist
       crsctl relocate  - relocate a resource, server or other entity
       crsctl replace   - replaces the location of voting files
       crsctl release   - release a DHCP lease
       crsctl request   - request a DHCP lease or an action entrypoint
       crsctl setperm   - set entity permissions
       crsctl set       - set an entity value
       crsctl start     - start a resource, server or other entity
       crsctl status    - get status of a resource or other entity
       crsctl stop      - stop a resource, server or other entity
       crsctl unpin     - unpin the nodes in the nodelist
       crsctl unset     - unset a entity value, restoring its default
The command below shows, in short, the status of the CRS processes of the cluster.
[root@oracle52 ~]# crsctl check cluster -all
**************************************************************
oracle52:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
oracle53:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
The command below shows the status of the local CRS processes (the -init flag).
[grid@oracle52 ~]$ crsctl stat res -t -init
--------------------------------------------------------------------------------
Name                Target  State    Server    State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1   ONLINE  ONLINE   oracle52  Started,STABLE
ora.cluster_interconnect.haip
      1   ONLINE  ONLINE   oracle52  STABLE
ora.crf
      1   ONLINE  ONLINE   oracle52  STABLE
ora.crsd
      1   ONLINE  ONLINE   oracle52  STABLE
ora.cssd
      1   ONLINE  ONLINE   oracle52  STABLE
ora.cssdmonitor
      1   ONLINE  ONLINE   oracle52  STABLE
ora.ctssd
      1   ONLINE  ONLINE   oracle52  OBSERVER,STABLE
ora.diskmon
      1   OFFLINE OFFLINE            STABLE
ora.drivers.acfs
      1   ONLINE  ONLINE   oracle52  STABLE
ora.evmd
      1   ONLINE  ONLINE   oracle52  STABLE
ora.gipcd
      1   ONLINE  ONLINE   oracle52  STABLE
ora.gpnpd
      1   ONLINE  ONLINE   oracle52  STABLE
ora.mdnsd
      1   ONLINE  ONLINE   oracle52  STABLE
ora.storage
      1   ONLINE  ONLINE   oracle52  STABLE
The command below can be used with the -t extension for shorter output.
[grid@oracle52 ~]$ crsctl stat res
NAME=ora.DATA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE , ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.FRA.dg
TYPE=ora.diskgroup.type
TARGET=ONLINE , ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE , ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.LISTENER_SCAN1.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.MGMTLSNR
TYPE=ora.mgmtlsnr.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.asm
TYPE=ora.asm.type
TARGET=ONLINE , ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.cvu
TYPE=ora.cvu.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.mgmtdb
TYPE=ora.mgmtdb.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE , ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.oc4j
TYPE=ora.oc4j.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.ons
TYPE=ora.ons.type
TARGET=ONLINE , ONLINE
STATE=ONLINE on oracle52, ONLINE on oracle53

NAME=ora.oracle52.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle52

NAME=ora.oracle53.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on oracle53

NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on oracle52
Checking the SCAN configuration
The Single Client Access Name (SCAN) is a name used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than with a particular node, it makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
[grid@oracle52 ~]$ cluvfy comp scan
Verifying scan
Checking Single Client Access Name (SCAN)...
Checking TCP connectivity to SCAN Listeners... TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "oracle34 "...
Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ... All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf" Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed
Checking SCAN IP addresses... Check of SCAN IP addresses passed
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful.
ASM disk group creation
Since 11gR2, Oracle provides a GUI tool called ASMCA which simplifies the creation and management of ASM disk groups. Now, there is a minimal learning curve associated with configuring and maintaining an ASM instance; ASM disk groups can be managed by both DBAs and system administrators with little knowledge of ASM. ASMCA supports the majority of Oracle Database features, such as the ASM cluster file system (ACFS) and volume management.
The ASMCA application is run by the Grid Infrastructure owner. Just launch it with the asmca command.
Existing disk groups are already listed. Click Create to create a new disk group. ASMCA will recognize the candidate disks we created using ASMLib.
Note the quorum checkbox will only be used if we add a voting disk to the cluster layer. Note also we used External redundancy as we do not need any extra failure group.
Disk group successfully created.
The 2 disk groups are now created but not mounted on all nodes. Click Mount All to mount them all.
Click Yes to confirm.
The disk groups are ready. We can now quit ASMCA.
We can also list the disk groups from the command line interface:
[grid@oracle52 ~]$ ORACLE_SID=+ASM1
[grid@oracle52 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512    4096  1048576     20480    14576                0           14576              0             Y  DATA/
MOUNTED  EXTERN  N         512    4096  1048576     20480    20149                0           20149              0             N  FRA/
MOUNTED  EXTERN  N         512    4096  1048576     20480    20384                0           20384              0             N  VOTING/
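asmcmd can also list the member disks of a given disk group; for instance, the command below should return the ASMLib path of the member disk (ORCL:DATA01 in our configuration):
[grid@oracle52 ~]$ asmcmd lsdsk -G DATA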
Oracle RAC 12c database installation
Environment setting
Check that $ORACLE_BASE and $ORACLE_HOME are correctly set in .bash_profile on all your cluster nodes.
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/12c
Note: in 12c, the $GRID_HOME shouldn't be a subdirectory of the $ORACLE_BASE.
Installation
Log in as the oracle:oinstall user and start runInstaller from your distribution location.
Define here whether to receive security updates from My Oracle Support or not.
A warning message is displayed if we decline the previous suggestion.
Define here whether to use the software updates from My Oracle Support or not.
For now, we just want to install the binaries. The database will be created later with DBCA.
Select RAC installation.
The member nodes of the RAC cluster are selected in this screen. The SSH setup or verification can also be done in this screen.
Select languages in this screen.
The Standard Edition is only eligible for clusters with a maximum of 4 CPU sockets.
Define the $ORACLE_HOME and $ORACLE_BASE where the Oracle products will be installed.
Define the operating system groups to be used.
The pre-installation system check raises a warning on the swap space. As said earlier, this can be ignored.
This is a double-check warning as we ignored the previous warning.
And here is a summary of the selected options before the installation.
The installation is ongoing.
Run root.sh from a console on both nodes of the cluster.
[root@oracle53 kits]# cd /u01/app/oracle/12c
[root@oracle53 12c]# ./root.sh
Performing root user operation for Oracle 12c
The following environment variables are set as: ORACLE_OWNER= oracle ORACLE_HOME= /u01/app/oracle/12c
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created.
Finished running generic part of root script.
Now product-specific root actions will be performed.
The installation is now completed.
Create a RAC database
Connect as the oracle user, then start DBCA from one node. Terminal X access is needed here again (unless using the silent mode based on a response file, not documented here).
The 12c DBCA offers some new options in this screen like Manage Pluggable Database and Instance Management. For now, we will create a new database.
In this stage, we can either create a new database using a template or customize the new database.
Select whether to use RAC and which template to use. Also note this new DBCA 12c option: it is now possible to see which parameters are used in the template database.
The parameter detail screen is displayed.
Define the name of the new database.
The Server Pool is a new 12c option. A server pool allows you to create server profiles and run RAC databases in them. It helps optimize workload balancing between the nodes of a cluster, mainly when these nodes are not equally powerful.
Here we define whether we want to configure Enterprise Manager and run the Cluster Verification script. We can also configure EM Cloud Control, which is a new management feature in 12c.
Here we define the credentials for the Oracle database.
Specify the database location.
Select sample schema and security options if needed.
Select details about the sizing and the configuration of the database.
Ready to install
Oracle runs the cluster and configuration checks again. We still have an alert on the swap size. We can ignore it.
Last check before the installation. Click Finish.
Database creation in Progress.
Database creation completed.
Post-installation steps
The listener service (sqlnet) allows connections to the database instances. Since 11gR2, the way it works has slightly changed, as Oracle introduced the SCAN service (seen earlier). First, we need to check that the listeners are up and running:
[root@oracle52 ~]# ps -ef | grep LISTENER | grep -v grep
grid 10466  1 0 Jul26 ? 00:00:09 /u01/app/grid/12c/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid 12601  1 0 Jul26 ? 00:00:10 /u01/app/grid/12c/bin/tnslsnr LISTENER -no_crs_notify -inherit
Then we need to check the listener definitions within the database initialization parameters. Note a consequence of the new SCAN feature: the remote_listener points to the SCAN service instead of a list of node listeners. On node 1:
SQL> show parameter local_lis
NAME             TYPE    VALUE
---------------- ------- ------------------------------
local_listener   string  (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.32)(PORT=1521))
SQL> show parameter remote_listener
NAME             TYPE    VALUE
---------------- ------- ------------------------------
remote_listener  string  oracle34:1521
In node 2: SQL> show parameter local_lis
NAME             TYPE    VALUE
---------------- ------- ------------------------------
local_listener   string  (ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.33)(PORT=1521))
SQL> show parameter remote_listener
NAME             TYPE    VALUE
---------------- ------- ------------------------------
remote_listener  string  oracle34:1521
Look at the listener.ora files. The listening service is part of the cluster; thus, the file is located in $GRID_HOME (owned by the grid user). Below is the output from node 1, followed by the output from node 2.
[grid@oracle52 ~]$ more $ORACLE_HOME/network/admin/listener.ora
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR))))  # line added by Agent
# listener.ora Network Configuration File: /u01/app/grid/12c/network/admin/listener.ora
# Generated by Oracle configuration tools.
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1 = ON
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = OFF
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON # line added by Agent VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET # line added by Agent
[grid@oracle53 ~]$ more $ORACLE_HOME/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))  # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON  # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET  # line added by Agent
Check the status of the listener. [grid@oracle52 ~]$ lsnrctl status listener
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:02:44
Copyright (c) 1991, 2013, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:04:22
Uptime                    4 days 0 hr. 58 min. 21 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.52)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.32)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=oracle52)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/12c/admin/HP12C/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "-MGMTDBXDB" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
Service "HP12C" has 1 instance(s).
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "HP12CXDB" has 1 instance(s).
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 2 handler(s) for this service...
The command completed successfully
Then, check the status of the SCAN listener. [grid@oracle52 ~]$ lsnrctl status LISTENER_SCAN1
LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 30-JUL-2013 15:05:11
Copyright (c) 1991, 2013, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_SCAN1
Version                   TNSLSNR for Linux: Version 12.1.0.1.0 - Production
Start Date                26-JUL-2013 14:03:54
Uptime                    4 days 1 hr. 1 min. 16 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/grid/12c/network/admin/listener.ora
Listener Log File         /u01/app/base/diag/tnslsnr/oracle52/listener_scan1/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.0.34)(PORT=1521)))
Services Summary...
Service "HP12C" has 2 instance(s).
  Instance "HP12C_1", status READY, has 1 handler(s) for this service...
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "HP12CXDB" has 2 instance(s).
  Instance "HP12C_1", status READY, has 1 handler(s) for this service...
  Instance "HP12C_2", status READY, has 1 handler(s) for this service...
Service "_mgmtdb" has 1 instance(s).
  Instance "-MGMTDB", status READY, has 1 handler(s) for this service...
The command completed successfully
And finally, we can check the srvctl configuration for the SCAN service.
[grid@oracle52 ~]$ srvctl config scan
SCAN name: oracle34, Network: 1
Subnet IPv4: 172.16.0.0/255.255.0.0/eth0
Subnet IPv6:
SCAN 0 IPv4 VIP: 172.16.0.34
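The running state of the SCAN VIP and of the SCAN listener can also be checked with srvctl: srvctl status scan reports the node currently hosting each SCAN VIP, and srvctl status scan_listener does the same for the SCAN listeners.
[grid@oracle52 ~]$ srvctl status scan
[grid@oracle52 ~]$ srvctl status scan_listener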
Cluster verification
Cluster verification utility
In the $ORA_CRS_HOME/bin directory, you will find a Cluster Verification Utility (CVU) validation tool called cluvfy.
CVU goals:
To verify that we have a well-formed cluster for RAC installation, configuration, and operation
Full stack verification
Non-intrusive verification
Easy to use interface
Supports all RAC platforms / configurations - well-defined, uniform behavior
CVU non-goals:
Does not perform any cluster or RAC operation
Does not take any corrective action following the failure of a verification task
Does not enter into areas of performance tuning or monitoring
Does not attempt to verify the internals of a cluster database
[grid@oracle52 ~]$ cluvfy comp -list
Valid Components are:
   nodereach   : checks reachability between nodes
   nodecon     : checks node connectivity
   cfs         : checks CFS integrity
   ssa         : checks shared storage accessibility
   space       : checks space availability
   sys         : checks minimum system requirements
   clu         : checks cluster integrity
   clumgr      : checks cluster manager integrity
   ocr         : checks OCR integrity
   olr         : checks OLR integrity
   ha          : checks HA integrity
   freespace   : checks free space in CRS Home
   crs         : checks CRS integrity
   nodeapp     : checks node applications existence
   admprv      : checks administrative privileges
   peer        : compares properties with peers
   software    : checks software distribution
   acfs        : checks ACFS integrity
   asm         : checks ASM integrity
   gpnp        : checks GPnP integrity
   gns         : checks GNS integrity
   scan        : checks SCAN configuration
   ohasd       : checks OHASD integrity
   clocksync   : checks Clock Synchronization
   vdisk       : checks Voting Disk configuration and UDEV settings
   healthcheck : checks mandatory requirements and/or best practice recommendations
   dhcp        : checks DHCP configuration
   dns         : checks DNS configuration
   baseline    : collect and compare baselines
Some examples of the cluster verification utility:
cluvfy stage -post hwos -n rac1,rac2
This checks the hardware and operating system setup. For our cluster:
[grid@oracle52 ~]$ cluvfy stage -post hwos -n oracle52,oracle53
. . .
Post-check for hardware and operating system setup was successful.
Identify the OCR and the voting disk location
The crsctl command, seen before, helps to identify the location of the voting disk:
[grid@oracle52 ~]$ crsctl query css votedisk
##  STATE   File Universal Id                 File Name        Disk group
--  -----   -----------------                 ---------        ---------
 1. ONLINE  b7dcc18124ac4facbf5c0464874c6637  (ORCL:VOTING01)  [VOTING]
Located 1 voting disk(s).
OCR has its own tools; ocrcheck, for instance, will tell the location of the cluster repository:
[grid@oracle52 ~]$ ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         :    +VOTING
[grid@oracle52 ~]$
[grid@oracle52 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1492
         Available space (kbytes) :     408076
         ID                       :  573555284
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
         Logical corruption check bypassed due to non-privileged user
Additional commands
To disable or re-enable the cluster autostart:
[root@oracle52 ~]# . /home/grid/.bash_profile
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl disable crs
CRS-4621: Oracle High Availability Services autostart is disabled.
[root@oracle52 ~]# $ORACLE_HOME/bin/crsctl enable crs
CRS-4622: Oracle High Availability Services autostart is enabled.
The kickstart file used for the operating system installation is reproduced below (excerpt).
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot no --device eth0 --bootproto dhcp --noipv6
network --onboot no --device eth1 --bootproto dhcp --noipv6
network --onboot no --device eth2 --bootproto dhcp --noipv6
network --onboot no --device eth3 --bootproto dhcp --noipv6
network --onboot no --device eth4 --bootproto dhcp --noipv6
network --onboot no --device eth5 --bootproto dhcp --noipv6
network --onboot no --device eth6 --bootproto dhcp --noipv6
network --onboot no --device eth7 --bootproto dhcp --noipv6
rootpw --iscrypted $6$k08kFoDHeE5o2rJU$wTwi1L.VzDBHhE9WMlFmdii32W2GQzBxRuFVMzhh2NUqOZGxpKVbd4A58fbpxp07ja0xPbwGRTsIdx97djOHO/
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512
selinux --enforcing
timezone --utc Europe/Berlin
bootloader --location=mbr --driveorder=mpatha --append="crashkernel=auto rhgb quiet"
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
#clearpart --none
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin
export PATH
PATH=$PATH:/usr/bin/X11:$ORACLE_HOME/bin:.
PATH=$PATH:/bin:/usr/bin:/usr/sbin:/etc:/opt/bin:/usr/ccs/bin:/usr/local/bin:/usr/openwin/bin:/opt/local/GNU/bin:/opt/local/bin:/opt/NSCPnav/bin:/usr/local/samba/bin:/usr/ucb
PATH=$PATH:$HOME/OPatch
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/openwin/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/td/lib:/usr/ucblib:/usr/local/lib:$
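The profile above references $ORACLE_HOME, so that variable must already be set when the PATH and LD_LIBRARY_PATH lines run. As a hedged sketch only (the directories below are common Grid Infrastructure defaults, not values confirmed by this paper; adjust them for the oracle user and for your own layout), the variables would typically be exported near the top of the same file:
export ORACLE_BASE=/u01/app/grid          # assumed base directory
export ORACLE_HOME=/u01/app/12.1.0/grid   # assumed Grid Infrastructure home
export ORACLE_SID=+ASM1                   # assumed ASM instance name on the first node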
Summary
HP continues to be the leader in installed servers running Oracle. We are extending our industry-leading Oracle footprint by delivering the best customer experience with open, standards-based converged infrastructure technologies tightly integrated with Oracle's software. As a leader in Oracle database market share, HP will continue to provide Oracle-focused solutions to our joint customers, such as this detailed installation cookbook. HP will continue to test various hardware configurations with the Oracle 12c database to make it easier for our customers to implement their critical business applications. Together, HP and Oracle will help businesses succeed, whether in cloud solutions or in converging their current data center architectures. We leverage the breadth and depth of HP and Oracle technology and expertise to offer joint industry-specific solutions, tested and validated to make your life easier.
For more information
Oracle certification matrix: https://support.oracle.com/
Oracle 12c database documentation: oracle.com/pls/db121/homepage
Oracle Technology Network (OTN) RAC: oracle.com/technetwork/database/clustering/overview/index.html
HP Reference Architectures for Oracle Grid on the HP BladeSystem: http://h71028.www7.hp.com/enterprise/cache/494866-0-0-0-121.html
Fibre Channel Host Bus Adapters (SAN connectivity): http://h18006.www1.hp.com/storage/saninfrastructure/hba.html
Linux drivers for ProLiant: http://h18013.www1.hp.com/products/servers/linux/hplinuxcert.html
Device mapper reference guide (access requires an HP Passport username and password): http://h20272.www2.hp.com/Pages/spock2Html.aspx?htmlFile=an_solutions_linux.html
Oracle ASMLib packages: oracle.com/technetwork/server-storage/linux/asmlib/rhel6-1940776.html
ASMLib and Multipathing: http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01725586/c01725586.pdf
Device mapper documentation: http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?lang=en&cc=us&prodClassId=-1&contentType=SupportManual&prodTypeId=18964&prodSeriesId=3559651
Linux certification and support matrix HP ProLiant server: http://h18004.www1.hp.com/products/servers/linux/hplinuxcert.html
Red Hat ASMLib page: http://rhn.redhat.com/errata/RHEA-2013-0554.html
Red Hat iptables setting: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Identity_Management_Guide/trust-requirements.html
HP Software Delivery Repository: http://downloads.linux.hp.com/SDR/
To help us improve our documents, please provide feedback at hp.com/solutions/feedback.
Sign up for updates hp.com/go/getupdated
Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Oracle and Java are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group.