
Virtualizing Oracle Databases

Nutanix Best Practices


Version 2.2 • January 2018 • BP-2000

Copyright
Copyright 2018 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 150
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual
property laws.
Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other
marks and names mentioned herein may be trademarks of their respective companies.

Contents

1. Executive Summary
2. Introduction
   2.1. Audience
   2.2. Purpose
3. Nutanix Enterprise Cloud Platform Overview
   3.1. Nutanix Acropolis Overview
   3.2. Distributed Storage Fabric
   3.3. App Mobility Fabric
   3.4. AHV
   3.5. Third-Party Hypervisors
   3.6. Nutanix Acropolis Architecture
4. Application Overview
   4.1. Data Tiering and Management
   4.2. Data I/O Detail
5. Nutanix and Oracle Joint Value Proposition
   5.1. Why Run Oracle Database on Nutanix?
6. Designing and Architecting Oracle on Nutanix
   6.1. General Sizing
7. Best Practices
   7.1. Oracle Database 11g and 12c
   7.2. Nutanix Platform Guidance
   7.3. Virtualization and Compute Configuration
   7.4. Storage Configuration
   7.5. Networking
   7.6. High Availability
   7.7. Sizing Oracle for Nutanix
   7.8. Oracle Licensing and Support
8. Solution and Performance Validation
   8.1. Environment Overview
   8.2. Test Environment Configuration Overview
   8.3. Test Scripts and Configurations
   8.4. Validation Results
9. Conclusion
Appendix
   Hardware, Software, and VM Setup
   Benchmark and Performance Testing Tools
   Scripts and Testing Configuration
   About the Author
   About Nutanix
List of Figures
List of Tables


1. Executive Summary
This document demonstrates the scalability of the Nutanix Enterprise Cloud Platform and makes
detailed recommendations for designing, optimizing, and scaling Oracle Database deployments
on Nutanix.
Nutanix has performed extensive testing to simulate real-world workloads and conditions for an
Oracle Database environment on Nutanix. We’ve based the sizing data and recommendations
in this document on multiple testing iterations and thorough technical validation. We tested this
solution with Oracle Database 12c R1 deployed on Oracle Linux 6.5 and VMware vSphere 5.5
Update 2, both running on the Nutanix Enterprise Cloud Platform. We used Benchmark Factory
to generate TPC-C-like workloads for testing validation.
The Nutanix platform offers the capability to run almost any workload, and you can run both
Oracle Database and other VM workloads simultaneously on the same platform. The database’s
CPU and storage requirements drive density for Oracle Database deployments, not the
limitations of the platform itself.
From an I/O standpoint, the Nutanix platform can handle the throughput and transaction
requirements of a demanding Oracle database using DSF-localized I/O, server-attached flash,
and intelligent tiering. When you need to virtualize more Oracle Databases, you can simply scale
out the Nutanix platform and add more database VMs to gain more capacity and performance.
According to our test validation results, you can more fully take advantage of the Enterprise
Cloud Platform’s performance capabilities by increasing the number of Oracle Database VMs,
rather than by scaling the number of Oracle instances in a single VM.
A Nutanix system achieves storage performance of over 45,000 IOPS per rack unit (RU) with a
70/30 read/write I/O pattern and 80 IOPS per watt of power for mixed random 8 KB I/O. During
the load tests, we subjected the Oracle RAC environment to VMware vMotion migrations to
demonstrate both the platform’s robustness and that network convergence does not disrupt client
connections, even when pushed to the saturation point.
We determined pod sizing after carefully considering performance as well as accounting for the
additional resources required for N+1 failover to achieve high availability. Tests simulating real-
world workloads and conditions for Oracle found that the midrange Nutanix Enterprise Cloud
Platform delivered the following results using four nodes in two units of rack space:
• 100 percent random: 300,000 read operations and 200,000 write operations using 4 KB blocks
across four nodes.
• 100 percent sequential: 1.5 GB/s write and 4 GB/s read throughput across four nodes.
• Average response times of less than 0.5 seconds for TPC-C-like tests, even when migrating
the VMs using vMotion.


Validation testing results demonstrate the Nutanix Enterprise Cloud Platform’s ability to handle
demanding Oracle Database and Oracle RAC workloads, while enjoying the simplicity of
deployment and operation offered by a web-scale virtual computing platform.
Nutanix is an Oracle Gold Partner.


2. Introduction

2.1. Audience
This best practices document is part of the Nutanix solutions library and is intended for IT
administrators and solutions architects responsible for designing, managing, and supporting
Oracle Database deployments on Nutanix infrastructures. We assume some familiarity with
VMware vSphere, Oracle Database (ORADB), and the Nutanix Enterprise Cloud Platform.

2.2. Purpose
This document covers the following subject areas:
• Overview of the Nutanix solution.
• Benefits of ORADB on Nutanix.
• High-level ORADB best practices for Nutanix.
• Detailed ORADB best practices for Nutanix.
• Design, architecture, and configuration considerations for ORADB on Nutanix.
• Sizing and VM placement.
• Networking best practices.
• Considerations for Oracle licensing and support.
• Benchmarking and validation of ORADB performance on Nutanix.
Upon completing this document, the reader should be able to design, architect, and deploy a
high-performing and highly available Oracle Database solution on the Nutanix platform.

Table 1: Document Version History

Version Number | Published | Notes
1.0 | September 2014 | Original publication.
2.0 | February 2017 | Updated with current Nutanix platform information.
2.1 | May 2017 | Updated provisioning recommendations and noted VMware ESXi 6.0.0 support for adding disks online in Oracle RAC environments.
2.2 | January 2018 | Updated platform overview and Nutanix Platform Guidance section.


3. Nutanix Enterprise Cloud Platform Overview

3.1. Nutanix Acropolis Overview


Nutanix delivers a hyperconverged infrastructure solution purpose-built for virtualization and
cloud environments. This solution brings the performance and economic benefits of web-scale
architecture to the enterprise through the Nutanix Enterprise Cloud Platform, which is composed
of two product families—Nutanix Acropolis and Nutanix Prism.
Attributes of this solution include:
• Storage and compute resources hyperconverged on x86 or Power Architecture servers.
• System intelligence located in software.
• Data, metadata, and operations fully distributed across entire cluster of x86 or Power
Architecture servers.
• Self-healing to tolerate and adjust to component failures.
• API-based automation and rich analytics.
• Simplified one-click upgrade.
• Native file services for hosting user profiles.
• Native backup and disaster recovery solutions.
Nutanix Acropolis provides data services and can be broken down into three foundational
components: the Distributed Storage Fabric (DSF), the App Mobility Fabric (AMF), and AHV.
Prism furnishes one-click infrastructure management for virtual environments running on
Acropolis. Acropolis is hypervisor agnostic, supporting three third-party hypervisors—ESXi,
Hyper-V, and XenServer—in addition to the native Nutanix hypervisor, AHV.


Figure 1: Nutanix Enterprise Cloud Platform

3.2. Distributed Storage Fabric


The Distributed Storage Fabric (DSF) delivers enterprise data storage as an on-demand
service by employing a highly distributed software architecture. Nutanix eliminates the need for
traditional SAN and NAS solutions while delivering a rich set of VM-centric software-defined
services. Specifically, the DSF handles the data path of such features as snapshots, clones, high
availability, disaster recovery, deduplication, compression, and erasure coding.
The DSF operates via an interconnected network of Controller VMs (CVMs) that form a Nutanix
cluster, and every node in the cluster has access to data from shared SSD, HDD, and cloud
resources. The hypervisors and the DSF communicate using the industry-standard NFS, iSCSI,
or SMB3 protocols, depending on the hypervisor in use.

3.3. App Mobility Fabric


The Acropolis App Mobility Fabric (AMF) collects powerful technologies that give IT professionals
the freedom to choose the best environment for their enterprise applications. The AMF
encompasses a broad range of capabilities for allowing applications and data to move freely
between runtime environments, including between Nutanix systems supporting different
hypervisors, and from Nutanix to public clouds. When VMs can migrate between hypervisors (for
example, between VMware ESXi and AHV), administrators can host production and development
or test environments concurrently on different hypervisors and shift workloads between them as
needed. AMF is implemented via a distributed, scale-out service that runs inside the CVM on
every node within a Nutanix cluster.


3.4. AHV
Nutanix ships with AHV, a built-in enterprise-ready hypervisor based on a hardened version of
proven open source technology. AHV is managed with the Prism interface, a robust REST API,
and an interactive command-line interface called aCLI (Acropolis CLI). These tools combine to
eliminate the management complexity typically associated with open source environments and
allow out-of-the-box virtualization on Nutanix—all without the licensing fees associated with other
hypervisors.

3.5. Third-Party Hypervisors


In addition to AHV, Nutanix Acropolis fully supports Citrix XenServer, Microsoft Hyper-V, and
VMware ESXi. These options give administrators the flexibility to choose a hypervisor that aligns
with the existing skillset and hypervisor-specific toolset within their organization. Unlike AHV,
however, these hypervisors may require additional licensing and, by extension, incur additional
costs.

3.6. Nutanix Acropolis Architecture


Acropolis does not rely on traditional SAN or NAS storage or expensive storage network
interconnects. It combines highly dense storage and server compute (CPU and RAM) into
a single platform building block. Each building block is based on industry-standard Power
Architecture or Intel processor technology and delivers a unified, scale-out, shared-nothing
architecture with no single points of failure.
The Nutanix solution has no LUNs to manage, no RAID groups to configure, and no complicated
storage multipathing to set up. All storage management is VM-centric, and the DSF optimizes
I/O at the VM virtual disk level. There is one shared pool of storage composed of either all-flash
or a combination of flash-based SSDs for high performance and HDDs for affordable capacity.
The file system automatically tiers data across different types of storage devices using intelligent
data placement algorithms. These algorithms make sure that the most frequently used data is
available in memory or in flash for optimal performance. Organizations can also choose flash-
only storage for the fastest possible storage performance. The following figure illustrates the data
I/O path for a write in a hybrid model (mix of SSD and HDD disks).


Figure 2: Information Life Cycle Management

The figure below shows an overview of the Nutanix architecture, including the hypervisor of
your choice (AHV, ESXi, Hyper-V, or XenServer), user VMs, the Nutanix storage CVM, and its
local disk devices. Each CVM connects directly to the local storage controller and its associated
disks. Using local storage controllers on each host localizes access to data through the DSF,
thereby reducing storage I/O latency. Moreover, having a local storage controller on each
node ensures that storage performance as well as storage capacity increase linearly with node
addition. The DSF replicates writes synchronously to at least one other Nutanix node in the
system, distributing data throughout the cluster for resiliency and availability. Replication factor
2 creates two identical data copies in the cluster, and replication factor 3 creates three identical
data copies.

Figure 3: Overview of the Nutanix Architecture


Local storage for each Nutanix node in the architecture appears to the hypervisor as one large
pool of shared storage. This allows the DSF to support all key virtualization features. Data
localization maintains performance and quality of service (QoS) on each host, minimizing the
effect noisy VMs have on their neighbors’ performance. This functionality allows for large, mixed-
workload clusters that are more efficient and more resilient to failure than traditional architectures
with standalone, shared, and dual-controller storage arrays.
When VMs move from one hypervisor to another, such as during live migration or a high
availability (HA) event, the now local CVM serves a newly migrated VM's data. While all write
I/O occurs locally, when the local CVM reads old data stored on the now remote CVM, the local
CVM forwards the I/O request to the remote CVM. The DSF detects that I/O is occurring from a
different node and migrates the data to the local node in the background, ensuring that all read
I/O is served locally as well.
The next figure shows how data follows the VM as it moves between hypervisor nodes.

Figure 4: Data Locality and Live Migration


4. Application Overview
The Nutanix system operates and scales Oracle Database (ORADB) in conjunction with other
hosted services, providing a single scalable platform for all deployments. External sources and
platforms interact with the ORADB platform on Nutanix over the network. The figure below offers
a high-level view of the ORADB on Nutanix solution.

Figure 5: ORADB on Nutanix Conceptual Architecture

The Nutanix approach of modular scale-out enables customers to select any initial deployment
size and grow in granular data and compute increments. This flexibility removes the hurdle of a
large up-front infrastructure purchase that a customer might need many months or years to grow
into, ensuring a faster time-to-value for the implementation.

4.1. Data Tiering and Management


The Nutanix Acropolis Distributed Storage Fabric (DSF) has a built-in information life cycle
management (ILM) process that automatically handles data placement. Applying ILM to the data
life cycle in ORADB, the high-performance SSD tier automatically receives all hot data writes.
Both hot and warm data reside in this tier to provide easy access and the highest performance.
Nutanix leverages an in-memory read cache to hold frequently accessed data from all tiers.
Given the sequential nature of some ORADB files, you may want to configure ILM to bypass the
SSD tier for sequential workloads, which allows the system to use the SSDs primarily for random
I/O, providing increased SSD endurance.
The Nutanix Elastic Deduplication Engine works with the DSF ILM to efficiently deduplicate
the data cache across both the capacity and performance tiers. Upon a sequential write, the
system can fingerprint the I/O at a 16 KB granularity. Where there are duplicate fingerprints,
implying identical pieces of data, MapReduce deduplication can eliminate them on disk. Hot
deduplicated data automatically goes into the content cache, which spans both memory and SSD
and functions as a read cache. Deduplication thus greatly increases the effective SSD and cache
sizes, as the content cache only receives one copy of the data.
For ORADB, these efficiencies mean that you can deduplicate any common operating system,
database, or other data, allowing for higher DSF cache hits for ORADB reads and higher
capacity utilization. Given the sequential nature and large operation sizes of some ORADB I/O,
deduplication creates very little overhead.

Note: Random writes are not fingerprinted, so they do not impact write latency.

The figure below shows the DSF I/O path and the integration of fingerprinting and the content
cache.

Figure 6: DSF I/O Path

As ORADB data becomes less frequently accessed, ILM sees that the data is cooling down.
ILM then migrates the data from SSD to the higher-capacity HDD tier in order to free SSD space
for new hot data. As mentioned above, you can also configure this data to bypass the SSD
tier automatically and write directly to HDD (for archive logs). After a specified time period, the
system compresses the data to increase storage capacity. If someone accesses this data again,
ILM decompresses it and places it on the appropriate tier.
This approach keeps the most highly accessed data in the cache or highest performance tier, so
indexes and heavily accessed data have the lowest latency and highest possible performance.

4.2. Data I/O Detail


The figure below depicts the high-level I/O path for VMs running on Nutanix. As shown, the DSF
handles all I/O operations on the local node to provide the highest possible performance. Data
writes occur locally for all VMs on the same ESXi node and over 10 GbE for VMs and sources
hosted on another node or remotely.

Figure 7: Data I/O Detail

The figure below describes the detailed I/O path for VMs running on Nutanix. All write I/O,
including data input to the ORADB VMs, takes place on the local node’s SSD tier to provide the
highest possible performance. Read requests for ORADB VMs occur locally and receive service
from the high-performance in-memory read cache or from the SSD or HDD tier, depending on
data placement. Each node also keeps frequently accessed data in the read cache for any local
VM or ORADB server data. Nutanix ILM constantly monitors data and I/O patterns to choose the
appropriate tier placement.


Figure 8: Data I/O Detail Expanded


5. Nutanix and Oracle Joint Value Proposition

5.1. Why Run Oracle Database on Nutanix?


• Modular Incremental Scale: With Nutanix, you can start small with just three nodes and
scale out as needed. Depending on the Nutanix system selected, you can get dozens of
terabytes of storage and up to 144 CPU cores for every four nodes in a compact footprint.
Given the modularity of the solution, you can scale one node at a time, giving you the ability to
accurately match supply with demand and minimize the upfront capital expenditure.
• High Performance: Up to 300,000 random read IOPS on hybrid systems (600,000 on all-flash)
and up to 5 GB/s of sequential throughput in a compact 2RU cluster. ILM keeps indexes and heavily
accessed data in the high-performance SSD and cache tiers. Ongoing Nutanix software
updates continue to improve platform performance over time, which is a key benefit of the
Nutanix software-defined storage design.
• Data Efficiency: The Nutanix solution is truly VM-centric for all compression and deduplication
policies. Unlike traditional solutions that perform these tasks mainly at the LUN or volume
level, the Nutanix solution provides them at the VM and virtual disk level, greatly increasing
efficiency and simplicity. By allowing for both inline and post-process compression capabilities
and cache and on-disk deduplication using MapReduce, the Nutanix solution breaks the
bounds set by traditional solutions.
• Effective Information Life Cycle Management: Nutanix incorporates heat-optimized tiering
(HOT), which leverages multiple tiers of storage and places data on the tier that provides
the best performance. Nutanix architecture supports local SSD and HDD disks attached
to the CVM, as well as remote NAS and cloud-based source targets. The tiering logic is
fully extensible, allowing you to dynamically add and extend new tiers. The Nutanix system
continuously monitors data-access patterns to determine whether access is random,
sequential, or a mixed workload. An SSD tier maintains random I/O workloads to minimize
latency. Sequential workloads can go directly into HDD to improve endurance.
• Business Continuity and Data Protection: Native VM-centric snapshot and replication features
provide extensive DR and protection capabilities without disruptive backup windows. Because
snapshots are VM-centric, you can protect virtual disks containing data, control files, and
logs without the performance impact that protection has on traditional storage systems.
Since snapshots share common data blocks, backups are space efficient. When replicating
snapshots, the system compresses and deduplicates them. Restores require less bandwidth,
as DBAs do not have to restore a full container or LUN. If you're running Oracle on Windows,
VSS integration provides application-consistent snapshots, and an SRA is available for VMware
SRM integration.


• Rapid Creation of Test and Development Environments: Because the Nutanix platform can
snapshot running databases and perform data efficient clone operations, you can duplicate a
database for testing or development purposes in minutes instead of hours.
• Enterprise-Grade Cluster Management: Prism offers a simplified and intuitive Apple-
like consumer-friendly approach to managing large clusters. The Prism management
product includes a GUI, which acts as a single pane of glass for servers and storage, alert
notifications, and the bonjour mechanism that automatically detects new nodes in the cluster.
• High-Density Architecture: Nutanix uses state-of-the-art architecture in which a single 2RU
appliance provides eight Intel processors (up to 144 cores) and up to 4 TB of memory.
Coupled with data archiving and compression, Nutanix can reduce your hardware footprint by
up to 4x.
• Faster Database Deployment and Reduced Operational Complexity: Get up and running
in just a few hours and reduce overhead using simple yet powerful management and APIs,
integration with Oracle Enterprise Manager 12c, and unprecedented performance insight.
• Foundation for Database as a Service: The Nutanix system provides a set of standardized
and simple node options that come together to form a cluster. Start with three nodes of any
type, combine different nodes to balance the compute, storage capacity, and performance
requirements of different workloads, and grow one node at a time. The DSF provides
scalability and availability, and, when combined with your preferred hypervisor, you have
a solid foundation on which to run enterprise-class Oracle Databases. By integrating your
favorite cloud management platform, such as Microsoft Windows Azure Pack or VMware
vCloud Suite, and a manageable number of predefined Oracle database templates, you
can provide a simple, automated, and highly scalable policy-driven Database as a Service
environment to your DBAs and application users.


6. Designing and Architecting Oracle on Nutanix


With the Oracle Database on Nutanix solution, you have the flexibility to start small with three
nodes and scale up by increments of one node, one block, or multiple blocks at a time. This
granular approach provides the ability to start small and grow to massive scale, with linear
performance increases and without any negative impact to existing workloads. Linear scalability
and linear performance matter because, when you add databases and Nutanix nodes, you are
not just adding compute capacity, but also storage capacity and performance. In legacy storage
architectures you would usually run out of performance well before reaching your capacity limit,
resulting in significant overprovisioning.

6.1. General Sizing


The following section covers the design decisions and rationale for ORADB deployments on the
Nutanix platform.

Table 2: General Design Decisions

Item | Detail | Rationale
Minimum Size | 3 Nutanix nodes (3 ESXi hosts) | Minimum size requirement
Scale Approach | Incremental modular scale | Allow for growth from PoC to massive scale—N+1
Scale Unit | Node(s) | Granular scale to meet capacity demands precisely; scale in Nx node increments

Table 3: VMware vSphere Design Decisions

Item | Detail | Rationale
Cluster Size | Up to 12–24 ESXi hosts (minimum of 3 hosts) | Isolated fault domains
Clusters per vCenter | Up to 2x 24 or 4x 12 host clusters | Task parallelization; isolated fault domains
Datastore(s) | 1 Nutanix datastore per cluster | Nutanix handles I/O distribution and localization; n-controller model
Infrastructure Services | Named User Plus License deployments: shared cluster. Large deployments and per-processor licensing: dedicated cluster or DRS host groups | Dedicated infrastructure cluster or DRS host groups for larger deployments and where processor-based licenses are used (best practice); license where Oracle is installed and running

Table 4: Nutanix Design Decisions

Item | Detail | Rationale
Cluster Size | Up to 24–48 nodes | Isolated fault domains; scale-out performance
Storage Pool(s) | 1 storage pool | Standard practice; ILM handles tiering
Container(s) | 1 container for VMs and data | Standard practice

Use the settings in the following table when configuring your base VM.

Table 5: VM Configuration

Parameter | Configuration
Network adapter | VMXNET3
Storage adapter | Minimum of 3 PVSCSI (OS + DB + redo); 4 for larger or high-performance databases
OS and app disks | Thin provisioned, disk mode = dependent
Database (ASM) disks for standalone | Thin provisioned, disk mode = independent persistent
Database (ASM) disks for RAC | Thick provisioned lazy zeroed, disk mode = independent persistent
VMware tools | Latest installed
Memory | Locked (preferred)
Advanced VM configuration option (with RAC cluster interconnect) | Name: ethernetX.coalescingScheme (where X = the virtual NIC number); Value: disabled


7. Best Practices
Follow these guidelines at the start of an Oracle Database project on Nutanix:
• Perform a current state analysis to identify workloads and sizing.
• Spend time up front to architect a solution that meets both current and future needs.
• Design to deliver consistent performance, reliability, and scale.
• Don’t undersize, don’t oversize—right size.
• Test, optimize, iterate, and scale your database prior to going into production.
• Use only Certified Operating Systems for Oracle Database.
• When using Oracle Database with SAP, follow SAP guidelines in addition to these
recommendations.

Note: The majority of best practice configuration and optimization occurs at the
ORADB and Linux levels.

The Oracle Database on Nutanix best practices can be summarized into the following high-level
items.

7.1. Oracle Database 11g and 12c


Oracle DB Performance and Scalability
• Use the Oracle Validated Package for your Database version and Oracle standard OS
recommendations as the starting point for the installation:
⁃ oracle-rdbms-server-11gR2-preinstall; or
⁃ oracle-rdbms-server-12cR1-preinstall
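For example, on Oracle Linux with the public yum repository configured, you might install the 12c package as follows (the 11g package is analogous):
yum install -y oracle-rdbms-server-12cR1-preinstall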
• Configure Oracle initialization parameters:
⁃ PARALLEL_THREADS_PER_CPU = 1
⁃ FILESYSTEMIO_OPTIONS = SETALL (for Oracle 11g)
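As a sketch, both parameters can be set from SQL*Plus (FILESYSTEMIO_OPTIONS takes effect after an instance restart):
ALTER SYSTEM SET parallel_threads_per_cpu=1 SCOPE=SPFILE;
ALTER SYSTEM SET filesystemio_options='SETALL' SCOPE=SPFILE;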
• SGA = 50 percent to 75 percent of allocated RAM for OLTP, 30 percent for DSS or OLAP.
• PGA size depends on the number of connections and the required sort area. Starting points:
15 percent for OLTP, 50 percent for DSS or OLAP.
• Utilize multiple disks for redo log, archive log, and database table spaces.


⁃ For each storage area, start with a minimum of two disks for small environments, or four or
more for larger environments (excludes OS and app binaries disks).
⁃ Look for I/O wait contention and scale the number of virtual disks as necessary.
• Use Oracle Automatic Storage Management (ASM) for database files, redo log, and archive
log storage. Place each group of files in a different ASM disk group.
⁃ If you choose to use Logical Volume Manager (LVM) instead of ASM, we recommend
striping volumes (do not concatenate) over multiple disks using a 512 KB stripe size. This
setting reduces the chance of the system mistaking sequential I/O for random, which
can often happen with smaller stripe sizes. Keep the logical volumes (LVs) and physical
volumes (PVs) used for data files separate from LVs and PVs used for redo logs and
archive logs.
• Utilize a 1 MB ASM allocation unit (AU) size for ASM disk groups.

Note: The 1 MB recommendation aligns with the maximum recommended I/O size
suitable for all Nutanix-supported hypervisors.

• Configure ASM disk groups for external redundancy (except quorum groups).
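For example, a data disk group with a 1 MB AU and external redundancy might be created as follows (a sketch; the disk group name and disk paths are illustrative):
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  DISK '/dev/oracleasm/disks/DATA1', '/dev/oracleasm/disks/DATA2'
  ATTRIBUTE 'au_size' = '1M', 'compatible.asm' = '12.1';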
• Use independent persistent disk mode for all ASM disks.
• Split redo log, archive log, and database table space over separate virtual controllers.
• Set the Linux maximum I/O size to match the ASM AU size in rc.local or via udev; for example:
for disk in sdk sdl sdn sdp sdq; do
  echo 1024 > /sys/block/$disk/queue/max_sectors_kb
  echo "$disk set max_sectors_kb to 1024"
done

• /etc/udev/rules.d/71-block-max-sectors.rules: ACTION=="add|change",
SUBSYSTEM=="block", RUN+="/bin/sh -c '/bin/echo 1024 > /sys%p/queue/max_sectors_kb'"
• Enable HugePages for Oracle Database SGA. This setting requires modifications to
sysctl.conf (vm.nr_hugepages and vm.hugetlb_shm_group) and limits.conf (memlock limits
for the oracle user).
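As an illustrative sketch for a 48 GB SGA with 2 MB HugePages (all values are assumptions; size vm.nr_hugepages slightly above the SGA, and set memlock, in KB, to at least the HugePages allocation):
# /etc/sysctl.conf (24,576 pages cover 48 GB; a small margin is added)
vm.nr_hugepages = 24640
vm.hugetlb_shm_group = 54321    # GID of the dba or oinstall group
# /etc/security/limits.conf
oracle soft memlock 50462720
oracle hard memlock 50462720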
• For very high-performance database systems on VMware, add the following options to the
boot loader (grub) configuration in addition to the general options listed below (PVSCSI required):
⁃ vmw_pvscsi.cmd_per_lun=256 vmw_pvscsi.ring_pages=32
• Use Automatic Shared Memory Management (ASMM).
• Add the following options to the boot loader (grub) configuration:
iommu=soft elevator=noop apm=off transparent_hugepage=never numa=off powersaved=off


• Add the following lines to sysctl.conf to reduce swapping:


vm.overcommit_memory = 1
vm.dirty_background_ratio = 5
vm.dirty_ratio = 15
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100
vm.swappiness = 0

• Add the -x option to the OPTIONS line in /etc/sysconfig/ntpd so that ntpd slews the clock rather than stepping it.
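On Oracle Linux 6, the resulting line typically looks like the following (retain any options already present):
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid -g"
Slewing avoids sudden time jumps between nodes, which can otherwise trigger RAC node evictions.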


• Oracle redo files and groups.
⁃ Configure sufficient redo groups, number of redo files, and file size to meet RPO and RTO
requirements and transaction throughput.
⁃ Configure redo files on a virtual controller separate from other database files.
⁃ Initially configure at least two disks in the ASM redo disk group, then add disks as
performance requirements dictate.
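For example, an additional redo group can be created in a dedicated ASM disk group (the group number and size are illustrative):
ALTER DATABASE ADD LOGFILE GROUP 5 ('+REDO') SIZE 1G;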
• Database data files.
⁃ Establish one or more database files per configured vCPU to maximize I/O parallelism.
⁃ Preallocate and configure file size appropriately, adjusting as necessary to make sure that
files grow at an equal rate.
⁃ Enable auto extend as a fail-safe, and set the next size to be a multiple of the ASM AU.
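For example, with a 1 MB AU (the file name and sizes are illustrative):
ALTER DATABASE DATAFILE '+DATA/orcl/datafile/users01.dbf' AUTOEXTEND ON NEXT 100M MAXSIZE 32G;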
⁃ Keep disk capacity utilization below 80 percent.
⁃ Use multiple data files and disks.
⁃ Look for contention for in-memory allocation, such as buffer busy. If there is contention,
increase the number of files.
⁃ Look for I/O subsystem contention. If there is contention, add additional disk(s) to Oracle
ASM disk group(s) to spread the data files across more disks.
⁃ Utilize Oracle Database Automatic Workload Repository (AWR) and Automatic Database
Diagnostic Monitor (ADDM) reports to identify performance bottlenecks and tune
accordingly.
⁃ Scale the number of ORADB VMs versus a large number of ORADB instances and
schemas per VM.
⁃ More memory = higher performance and less read I/O. If you see memory pressures,
increase VM memory and avoid swapping.


⁃ Size the Linux swap partition to be big enough to handle an unexpected load. Monitor
swapping; if it’s consistently above 0 KB used, increase VM memory.

Oracle DB Availability
In most cases, hypervisor high availability provides an adequate level of availability (99.9
percent) and uptime for applications that are not mission-critical or tier-1.
For mission-critical or tier-1 applications:
• Utilize one of the following:
⁃ Oracle DataGuard or Archive Log Shipping.
⁃ Oracle DataGuard with Fast Start Failover.
⁃ Oracle GoldenGate.
⁃ Oracle RAC (which can be combined with the previous options).
• Use Oracle RMAN and solutions integrated with Oracle RMAN.
• Take consistent database snapshots and backups. Derive the frequency from required RPOs
using Nutanix snapshots.
• When using redo log multiplexing, provision the same number of disks for the primary and
secondary redo log ASM disk groups and ensure that the disk groups are on different virtual
SCSI controllers.
• When using Nutanix snapshot and replication technology for Oracle RAC, include all RAC
nodes in the same consistency group.

Oracle DB Manageability
• Standardize, monitor, and maintain.
• Use ORADB application monitoring solutions, such as Enterprise Manager Grid Control or
Enterprise Manager 12c Cloud Control, integrated with virtualization monitoring solutions such
as vRealize Operations Manager.
• Create standardized data file sizes and data file growth sizes.
• Preallocate data files and manage size proactively.
• Create standardized ORADB VM templates.
• Utilize consistent disk quantities and layout schemes for ORADB VMs.
• Take advantage of centralized database and OS authentication via LDAP.


Oracle RAC
• Use Oracle 11g R2 11.2.0.2 or later (we recommend 11.2.0.4 or later). 12c is required for
Oracle RAC on Hyper-V.
• For VMware, use the multiwriter flag for disk sharing. (Not required for Hyper-V or AHV.) See
VMware KB 1034165 for more information on setting the multiwriter flag. The two figures
below demonstrate the multiwriter flag set on two RAC nodes.

Figure 9: Sample Multiwriter Flag on RAC Node 1

Figure 10: Sample Multiwriter Flag on RAC Node 2
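In the .vmx file, the setting takes the following form for each shared disk (the SCSI device ID shown is illustrative):
scsi1:0.sharing = "multi-writer"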


• Create all shared disks as thick provision lazy zeroed with VMware to allow clustering
features. (Not required for Hyper-V or AHV.)

Note: VMware recommends using the thick provision eager zeroed setting, but
if you are deploying RAC on Nutanix, we recommend using thick provision lazy
zeroed. VMware automatically changes lazy zeroed to eager zeroed after you click
the save button, but the space does not zero out immediately.

• Use independent persistent disk mode for all ASM disks in VMware. (Not required for Hyper-V
or AHV.)
• Use two ASM disk groups for OCR voting disks.
• Use three disks for each of the OCR voting ASM disk groups and configure with high
redundancy.
• Ensure that Oracle and Grid users are part of appropriate operating system groups such as
the oinstall, dba, asmdba, and asmadmin groups, with oinstall as the primary group.
• Use jumbo frames for Oracle RAC interconnect networks.
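For example, on Oracle Linux, set a 9,000-byte MTU on the interconnect interface (the interface name is illustrative; the physical switches must also permit jumbo frames):
# /etc/sysconfig/network-scripts/ifcfg-eth1
MTU=9000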
• Use VLANs to separate Oracle RAC interconnect traffic.
• Configure two NICs for the RAC cluster private interconnect and one NIC for public
communications.
• For each Oracle RAC private interconnect port group, set a different NIC as primary. For
example, 10G1 Primary on Port Group 1 and 10G2 Primary on Port Group 2.
• Add the following settings, where X is the interface number, for each Oracle RAC private
interconnect NIC to sysctl.conf:
⁃ net.ipv4.conf.ethX.rp_filter = 2
• If using a VMware vSphere Distributed Switch, leverage network I/O control to ensure quality
of service for different traffic types and the network health check to ensure correct network
configuration.
• Configure Single Client Access Name (SCAN) and GNS, and use SCAN for all client
connectivity.
• Use NTP and a consistent and reliable time source to ensure consistent time between RAC
nodes.
• Use a clustered mount point such as OCFS or NFS as a database backup location for
Recovery Manager (RMAN) if using backup to disk as part of your data protection process.
• When using VMware, Oracle RAC nodes are supported on a maximum of eight hosts
concurrently due to the limit on the number of physical hosts that can access a single virtual disk. For
configurations involving more than eight hosts, use Nutanix Acropolis Block Services and the
iSCSI in-guest initiator.
• VMware ESXi 6.0.0 (4192238) now supports adding disks online in Oracle RAC environments.

7.2. Nutanix Platform Guidance


• Use a single replication factor 2 container.
• Utilize an appropriate model based on compute, storage, and licensing requirements.
⁃ Ideally size the platform to keep the working set in SSD.
⁃ Nutanix supports database VMs larger than the storage capacity of an individual node, as
all storage is pooled at the block level across an entire Nutanix cluster.
⁃ Utilize higher memory node models for I/O-heavy ORADB workloads.
⁃ Utilize a node that has 2x the memory size of the largest single VM.
⁃ Utilize a node that fits your organization’s licensing constraints.
• Create a dedicated consistency group with the ORADB VMs and applications. When utilizing
Nutanix snapshots and replication for Oracle RAC, all RAC nodes must be in the same
consistency group.
• Leverage application-consistent snapshots on the consistency group to invoke VSS when
snapshotting if using ORADB on Windows. Otherwise, follow MOS ID 604683.1.
• Nutanix CVMs should always be in the hypervisor cluster root, not in a child resource pool.
• The maximum recommended I/O size is 1 MB for best performance.
• The Nutanix Shadow Clones feature is enabled by default in AOS 4.x and later. However, you
should disable this feature when deploying Oracle RAC Databases.
• To disable Shadow Clones, execute the following command:
ncli cluster edit-params enable-shadow-clones=false

• To check the status of Shadow Clones, execute the following command:


ncli cluster get-params

• The result should display Shadow Clones Status: Disabled.


• If space is a constraint, you can enable erasure coding.


7.3. Virtualization and Compute Configuration


Host and VM Sizing (vCPUs, RAM, Storage)
• Avoid vCPU core oversubscription initially (for tier-1 workloads).
• For small ORADB VMs, keep the vCPU count less than or equal to the number of cores per
physical NUMA node.
• Keep vCPU numbers easily divisible by NUMA node sizes for easy scheduling.
• Leave hyperthreading sharing at the default policy (any).
• For tier-1 workloads, lock ORADB VM memory at a minimum reserve sufficient to cover SGA
HugePages + Program Global Area (PGA).
• Size ORADB VM memory using the following calculation:
⁃ VM Memory = Oracle SGA + Oracle PGA + OS Memory + VM Overhead
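For example, under assumed values (a sizing sketch, not a prescription):
VM Memory = 48 GB (SGA) + 12 GB (PGA) + 4 GB (OS) + 2 GB (overhead) = 66 GB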
• Use Paravirtual SCSI controllers (PVSCSI) and VMXNET3 NICs.
• Do not use resource pools unless absolutely necessary. If using resource pools, ensure
correct sizing of shares.

Host and VM Configurations (HA, DRS, Affinity, Hypervisor Specifics)


• Follow VMware performance best practices.
• When using vSphere Distributed Switch, use route based on physical NIC load for the
teaming policy. Otherwise, use route based on virtual port ID.
• Use multi-NIC vMotion when deploying ORADB with large RAM configurations.
• Use a separate vSphere cluster or DRS antiaffinity rules to keep ORADB VMs on licensed
hosts and to keep related ORADB VMs apart within the licensed group of hosts.

7.4. Storage Configuration


The Nutanix Enterprise Cloud Platform provides an ideal combination of high-performance
compute with localized storage to meet any demand. True to the capabilities inherent in the
platform’s design, there is no need to reconfigure or customize the Nutanix product to optimize
for the Oracle use case.
The figure below shows a high-level example of the relationship between a Nutanix block, node,
storage pool, and container.


Figure 11: Nutanix Component Architecture

The table below shows the Nutanix storage pool and container configuration. Always ensure
ample storage capacity to be able to tolerate the loss of a node during failure or maintenance.

Table 6: Nutanix Storage Configuration

Name | Role | Details
SP01 | Main storage pool for all data | All disks
CTR-RF2-VM-01 | Container for all VMs and data | ESXi: Datastore

Note: Recommended maximum I/O size is 1 MB.

Compression Settings
Enable inline compression for Oracle Databases.

7.5. Networking
• Utilize and optimize QoS for DSF and database traffic.
• Use low-latency 10 GbE switches.
• Utilize redundant 10 GbE uplinks from each Nutanix node.
• Ensure adequate throughput between Nutanix nodes and ORADB VMs.
• Check for any pause frames that could impact replication and VM communication.
Nutanix recommends a leaf-spine network architecture, which is designed for linear scaling.
A leaf-spine architecture consists of two network tiers: an L2 leaf and an L3 spine based on
nonblocking switches. This architecture maintains consistent performance without throughput
reduction, as there is a static maximum of three hops from any node in the network.
The figure below shows the design of a scale-out leaf-spine network architecture, which provides
20 Gb active throughput from each node to its L2 leaf and scalable 80 Gb active throughput from
each leaf to its spine switch, allowing you to grow from one Nutanix block to thousands without
any impact to available bandwidth.

Figure 12: Leaf-Spine Network Architecture

7.6. High Availability


• Size clusters for N+1 redundancy.
• When using VMware vSphere, use a percentage-based value for HA admission control,
reserving 1/(number of hosts per cluster) of resources (for example, 25 percent in a four-host
cluster).

7.7. Sizing Oracle for Nutanix


Oracle DB Server Sizing
The table below describes typical scenarios for any database workload.


Table 7: Scenario Overview

Scenario | Definition
OLTP: Transactional | These workloads are transactional in nature and can involve many updates and inserts, depending on the job. Depending on size and workload, write I/O and latency are extremely important for the highest possible application performance. The number of transactions processed per second is a key metric for OLTP databases.
OLAP: Analytics and Reporting | These workloads are analytical in nature and rely a great deal on batched ETL (extract, transform, load) reporting. These workloads are primarily sequential in nature and can require reading very large quantities of data. Sequential write performance is critical for data ingest. These workloads are traditionally utilized in data warehousing and reporting.

Below are some initial recommendations for Oracle Database server VM sizing based on
assumed workload characterizations.

Table 8: Recommended OS and App Mount Points for Linux—OLTP and OLAP

Mount Point | Min Suggested Size (GB) | Max Suggested Size (GB) | Virtual Disk Type | Disk Mode
Swap | 16 | 16 | VMDK, thin provision | Dependent
/ | 10 | 20 | VMDK, thin provision | Dependent
/usr | 12 | 20+ | VMDK, thin provision | Dependent
/tmp | 12 | 12+ | VMDK, thin provision | Dependent
/home | 10 | 12+ | VMDK, thin provision | Dependent
/opt or /u01 | 30 | 60+ | VMDK, thin provision | Dependent
/var or /var/log | 10 | 40+ | VMDK, thin provision | Dependent
Totals | 100 | 180+ | |

Actual consumption would be far less due to the efficiency of thin provisioning.


Table 9: OLTP Scenario Detail

Scenario | vCPU | Memory (GB) | Database Disks
Light | 1–2 | 8–16 | 2x n GB (DB); 2x n GB (REDO); 2x n GB (Archive Log / Flash Recovery Area)
Medium | 4–8+ | 32–96+ | 2–4x+ n GB (DB); 2–4x n GB (REDO); 2–4x n GB (Archive Log / Flash Recovery Area)
Heavy | 8–16+ | 96–192+ | 4x+ n GB (DB); 4x+ n GB (REDO); 4x+ n GB (Archive Log / Flash Recovery Area)

Table 10: OLAP Scenario Detail

Scenario | vCPU | Memory (GB) | Database Disks
Light | 2–4+ | 16–32+ | 2x n GB (DB); 2x n GB (REDO); 2x n GB (Archive Log / Flash Recovery Area)
Medium | 4–8+ | 32–96+ | 2–4x+ n GB (DB); 2–4x n GB (REDO); 2–4x n GB (Archive Log / Flash Recovery Area)
Heavy | 8–16+ | 96–192+ | 4x+ n GB (DB); 4x+ n GB (REDO); 4x+ n GB (Archive Log / Flash Recovery Area)

Note: These are recommendations for sizing and should be modified after a current
state analysis. OLAP generally runs larger queries in parallel, requiring more CPU
utilization.

ORADB Storage Sizing


The following section covers the storage sizing and considerations for running ORADB on
Nutanix.

Note: It is always a good infrastructure practice to add a buffer for contingency
and growth. Nutanix recommends an N+1 model where one node is added to the
minimum requirements for failure and maintenance. For virtualization clusters
that are larger than 12 nodes, we recommend adding one node for failure and
maintenance per 12 nodes. We recommend closely monitoring storage capacity
to ensure that the system has adequate capacity to accommodate any failures
(typically this is the capacity of the largest node). This acts as a safety net in case of
unexpected storage growth. Nodes for failure and maintenance capacity should be
added on top of required storage as calculated below.

Step 1: Calculate the estimated required storage:


• Required storage = (number of databases x average schema size) x average growth rate per
year
For example, if there are ten databases with an average size of 500 GB and a 20 percent growth
rate per year:
• Required storage (year 1) = (10 databases * 500 GB) * 1.2 = 6,000 GB = 6 TB

Note: Adjust required storage based on archive log and backup storage
requirements.

Step 2: Calculate the total required number of Nutanix nodes by storage capacity:
• Total number of Nutanix nodes by storage capacity = (required storage x Nutanix replication
factor) / storage per node


For example, if there are 20 TB of required storage, the replication factor is two (default), and
there is 4 TB of storage per node (actual amount of storage varies per model):
• Number of Nutanix nodes by storage capacity = (20 TB * 2) / 4 TB = 10 nodes
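The first two steps can be scripted for quick what-if estimates. A minimal sketch, reusing the step 1 inputs (all values are illustrative):
# Estimate required storage and node count by capacity
databases=10; avg_size_gb=500; growth_pct=20   # 20 percent annual growth
rf=2; node_capacity_tb=4
required_tb=$(awk "BEGIN {printf \"%.1f\", $databases * $avg_size_gb * (1 + $growth_pct/100) / 1000}")
nodes=$(awk "BEGIN {printf \"%d\", ($required_tb * $rf / $node_capacity_tb) + 0.999}")   # round up
echo "Required storage: ${required_tb} TB; nodes by capacity (RF${rf}): ${nodes}"
This prints 6.0 TB and three nodes for the example inputs; the N+1 node for failure and maintenance is added on top.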
Step 3a: Calculate the initial storage capacity of Nutanix nodes. If all nodes are deployed initially,
you can skip this step.
With Nutanix you have the ability to start with a certain capacity and incrementally scale capacity
as needed:
• Initial storage capacity = (total required number of Nutanix nodes * % deployed initially *
storage per node) / replication factor
For example, if your ORADB deployment requires ten Nutanix nodes and you have deployed 50
percent of those nodes initially, with 4 TB of storage per node (actual amount of storage varies
per model):
• Initial storage capacity = (10 nodes * 50% * 4 TB) / RF2 = 10 TB
Step 3b: Calculate the necessary duration for scaling your Nutanix nodes. If you deploy all nodes
initially, you can skip this step:
• Initial storage data duration = initial storage capacity / daily average ingest rate in GB
For example, if you have deployed five nodes initially with 4 TB of storage per node (actual
amount of storage varies per model), providing 10 TB of initial capacity at replication factor 2 and
200 GB of ingest data daily:
• Initial storage data duration = 10 TB / 200 GB per day = 50 days
This result means that the required storage capacity would breach the initial storage capacity
after 50 days. Based on storage capacity, you would thus need to scale the nodes to facilitate
demand up to the full 100 percent of the required nodes.

Note: We do not recommend running storage at 100 percent full. In order to ensure
that performance remains acceptable even during maintenance and failure, at
minimum the capacity of the largest node should remain free in the cluster.

Step 4: Determine the necessary type of node based on active working set size.
This step helps you choose a node type and configuration based on the estimated active working
set for your database systems. The active working set is the most frequently accessed data and
should be kept in the SSD storage tier. The SSD tier is pooled across the DSF cluster, so very
large VMs benefit from the combined amount of SSDs in all hosts. It is important that you have
enough SSDs in your DSF cluster in aggregate to cover the active working set.
Monitoring the redo log, archive log, and daily or weekly backups and measuring how much data
changes is one way to determine the active working set. However, that approach won’t capture
hot data blocks that are read frequently but not often changed. Nutanix recommends using AWR
reports to determine the Oracle Database change rate and I/O patterns between two statistics
snapshot periods. AWR reports and Statspack reports provide data on both read and write I/O
and on how many database blocks have changed. You can then use this data to determine more
accurately the active working set size of your databases. We recommend choosing periods that
include relevant business period cyclical peaks.
• Active working set = database blocks read per day + database blocks changed per day
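For example, with an 8 KB database block size (the counts are illustrative):
Active working set = (100,000,000 blocks read + 20,000,000 blocks changed) x 8 KB ≈ 960 GB
In this case, size the cluster so that the aggregate SSD tier comfortably exceeds roughly 1 TB, with additional headroom for the replication factor.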

Note: DSF ILM is very efficient at moving blocks to the SSD tier when they become
warm and is suitable even for cyclical database workloads that may only be active
infrequently.

7.8. Oracle Licensing and Support


The licensing and support position for Oracle running on the Nutanix platform with Microsoft
Hyper-V and VMware vSphere is the same as Oracle’s position for any other platform running
these supported hypervisors and Oracle-supported guest operating systems. Oracle supports
both hypervisors (MOS Note 249212.1: Support Position for Oracle Products Running on
VMware Virtualized Environments and MOS Note 1563794.1: Certified Software on Microsoft
Windows Server 2012 Hyper-V), and both hypervisors are supported on Nutanix by their
respective vendors. With the Nutanix Enterprise Cloud Platform, you have the additional option
of calling Nutanix support for both the hypervisor and the underlying hardware platform. Plus, as
a member of TSANet, Nutanix can also work with Oracle and your chosen hypervisor vendor on
support cases if necessary.
For more information on Oracle’s licensing policies, refer to Oracle’s Software Investment Guide.
The source of truth is your executed and binding legal contract, referred to commonly as the
Oracle License and Services Agreement (OLSA) or Transactional Oracle Master Agreement
(TOMA) (http://www.oracle.com/us/corporate/contracts/index.html). There are a number of
different licensing models, including Named User Plus, Processor, OEM, and various enterprise
license agreements. When it comes to the Enterprise Cloud Platform, it is important to know that
when using Processor-based licensing, the smallest unit that can be licensed is a single Nutanix
node. You cannot partition a single Nutanix node into smaller units for licensing purposes, as
both Hyper-V and vSphere are considered soft-partitioned platforms (see http://www.oracle.com/
us/corporate/pricing/partitioning-070609.pdf). You are not required to license an entire Nutanix
block or cluster if not all the nodes are running Oracle software. You can use hypervisor clusters
or cluster rules to restrict where Oracle software runs, thereby restricting how many nodes in a
large cluster you need to license. You must, however, ensure that you are appropriately licensed
for each Nutanix node where Oracle software runs.

Note: Given software licensing complexity and the potential impact of
noncompliance, Nutanix recommends that you obtain appropriate independent legal
advice on your license agreements when considering any platform change.


Additional Oracle support and licensing resources:


• Understanding Oracle Certification, Support, and Licensing on VMware Environments
• VMware Expanded Oracle Support Policy
• Oracle and Microsoft Support


8. Solution and Performance Validation


Nutanix ran a series of tests to demonstrate the capability of the Nutanix Enterprise Cloud
Platform for critical applications such as Oracle Database and Oracle RAC and to validate the
best practices in this document. We designed the tests to drive the platform under high load. The
tests used Benchmark Factory for Databases configured with a TPC-C scale factor of 5,000 to
generate heavy OLTP transaction load conditions on the platform. The target was a three-node
Oracle RAC cluster.
During the TPC-C-like tests and high load conditions, we migrated the Oracle databases
between nodes using VMware vMotion live migration. The ease of this movement demonstrates
the Nutanix platform’s robustness and its ability to continue managing and maintaining platform
operations without disrupting end-user database sessions, even during extreme load conditions.
Oracle RAC is very sensitive to latency, time synchronization, and network traffic. High network
latency, time drift, and loss of network connectivity between Oracle RAC nodes can cause RAC
node evictions. Our tests confirmed that live migration does not interrupt user sessions even
under extreme conditions: high I/O load on the storage and, more importantly, on the network,
where storage traffic, user sessions, Oracle RAC cluster interconnect traffic, and vMotion traffic
all converge.
We completed the solution and testing described in this document with Oracle Database 12c R1
and Oracle Linux 6.5 64 bit deployed on VMware vSphere 5.5 and the Nutanix platform running
AOS. We used Benchmark Factory for Databases 6.9.3 to generate a TPC-C-like load on the
Oracle database systems to drive high-end user transaction load. The Nutanix platform’s built-in
analytics capability measured platform storage performance and compute utilization.

8.1. Environment Overview


We conducted this testing on a single midrange Nutanix NX-3450 configured with four nodes
and 256 GB RAM per node with VMware vSphere 5.5 as the hypervisor. All network traffic was
distributed over 2x 10 GbE ports per host. Each host was redundantly connected to 2x 10 GbE
switches. Three of the four Nutanix nodes ran Oracle RAC VMs, while the fourth node ran the
test harness and load generator.
We configured each VM running as an Oracle RAC node with 112 GB RAM (100 GB reserved),
eight vCPUs, and Oracle Linux 6.5 64 bit with the Red Hat-compatible kernel. The Nutanix CVM
on each host was configured with 32 GB RAM. The system used 4 TB of shared Oracle RAC
storage with an additional 2 TB allocated for backup.

Figure 13: Test Environment Overview

Figure 14: vMotion Test Scenarios Overview

The solution used multi-NIC vMotion, network I/O control, and vSphere Distributed Switch during
testing to provide quality of service for all network traffic, including the Nutanix CVMs, Oracle
RAC client traffic, Oracle RAC cluster interconnects (x2), hypervisor management, and the
vMotion traffic itself. All network traffic was converged on 2x 10 GbE NICs. When migrating
large VMs, vMotion traffic by itself is sufficient to saturate multiple 10 GbE NICs. Quality of
service is therefore important to ensure that each type of traffic receives its fair share of available
bandwidth and that user traffic is not interrupted.
We performed three vMotion scenarios on Nutanix Hosts B, C, and D during load testing, with the
load generator and test harness on Host A at all times.

• Oracle RAC Node 1 migrates from Host B to Host C, then from Host C back to Host B (a test
reset).
• Oracle RAC Node 1 migrates from Host B to Host C while Oracle RAC Node 2 migrates from
Host C to Host B at the same time.
• Oracle RAC Node 1 migrates from Host B to Host C, Oracle RAC Node 2 from Host C to Host
D, and Oracle RAC Node 3 from Host D to Host B, all at the same time. We call this the
merry-go-round migration test because all Oracle RAC nodes migrate simultaneously.

8.2. Test Environment Configuration Overview


Assumptions
• TPC-C-like workload used scale factor of 5,000 with Benchmark Factory for Databases—200
virtual users on one agent.
• Disk configurations: 2x redo, 2x data, 2x archive log
Hardware
• Storage/compute: 1 Nutanix NX-3450 (2RU)
• Network: Dell Switches (N4032 and 8024)
Nutanix AOS
• Version: 3.5.2
ORADB RAC Node Configuration
• OS: Oracle Linux 6.5 64 bit
• CPU and memory: 8 vCPU and 112 GB RAM
• Disk:
⁃ 7x OS and app disks
⁃ 2x data ASM disks, 2x redo ASM disks, 2x archive log ASM disks, 2 ASM disk groups for
OCR and Voting Disks (6 disks total)
• Application version: Oracle 12c R1 RAC
• Virtual hardware version: 10
Benchmark Factory for Databases
• Version: 6.9.3
VMware vSphere Hypervisor (ESXi)
• Version: 5.5

• As shown in the following figure, we used vSphere Distributed Switch with network I/O control
enabled. We set shares for Virtual Machine Traffic, Oracle RAC, and NTNX-CVM Network
Resource Pools to high. Shares for all other network resource pools were set to low.

Figure 15: Network I/O Control Configuration

• Nutanix configured Benchmark Factory for Databases to use the Oracle 12c R1 client and
access the Oracle RAC Cluster using the Oracle RAC Single Client Access Name (SCAN).
DHCP allocated Oracle RAC node IP addressing, and Oracle Grid Naming Service (GNS)
provided name resolution for the Oracle RAC cluster.
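
As a quick sanity check before applying load, a query along the following lines (a minimal
sketch, run from any cluster node as a privileged user) confirms that all three Oracle RAC
instances are open:

-- Confirm that every RAC instance is up and open before starting a test run
select inst_id, instance_name, host_name, status
from   gv$instance
order  by inst_id;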

Figure 16: Oracle Single Client Access Name

• Each Oracle RAC node has two private cluster interconnects on different port groups and
VLANs.

Figure 17: Oracle RAC Node Disk Layout

• Nutanix configured each of the three Oracle RAC nodes with four PVSCSI adapters. All
shared disks were configured as thick provision lazy zeroed VMDKs and set as independent
persistent. We set the disk-sharing mode for each shared disk to multiwriter in accordance
with VMware KB 1034165.
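
For reference, the multiwriter and independent persistent settings correspond to per-disk entries
in the VM configuration similar to the sketch below. The controller number, disk number, and
VMDK file name are illustrative only; set these values through the vSphere client as described in
VMware KB 1034165 rather than by hand-editing the .vmx file.

scsi1.virtualDev = "pvscsi"
scsi1:0.fileName = "NXRACDB1_DATA1.vmdk"
scsi1:0.mode = "independent-persistent"
scsi1:0.sharing = "multi-writer"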

Table 11: Oracle RAC 12c ASM Disk Groups

ASM Disk Group ASM Disks Redundancy


OCRV CRS1, CRS2, CRS3 High
OCRVDG2 CRS4, CRS5, CRS6 High
DATA DATA1, DATA2 External
MISC MISC1, MISC2 External
SYST SYST1, SYST2 External
REDO REDO1, REDO2 External

The table above shows the ASM disk groups that Nutanix configured for the Oracle RAC 12c
nodes under test. We would typically recommend more disks per disk group for the DATA and
MISC disk groups; the configuration described above used the minimum number of data disks.
Two OCR and voting disk groups provided additional resiliency for the cluster’s critical quorum
area.
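
For illustration, an external-redundancy disk group such as DATA can be created from the ASM
instance with a statement similar to the following sketch; the device paths are hypothetical and
depend on how your udev or ASMLib configuration presents the shared disks:

CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  DISK '/dev/oracleasm/disks/DATA1' NAME DATA1,
       '/dev/oracleasm/disks/DATA2' NAME DATA2;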

8.3. Test Scripts and Configurations


Nutanix used the following scripts and configurations for the Oracle RAC 12c Monster VM
vMotion testing.
A complete listing of each of the scripts and files below is available in the appendix of this
document.

Files
We modified the following files to optimize the performance of the Oracle operating environment.
• /boot/grub/grub.conf
• /etc/sysctl.conf
• /etc/security/limits.conf
• /etc/fstab
• /etc/rc.d/rc.local
• /etc/sysconfig/network

Scripts
Nutanix used the following scripts to set up or reset databases, table spaces, users, and Oracle
Database options.

• tnsnames.ora: The Oracle network configuration used by Benchmark Factory for Databases.
We provide this file in the appendix.
• create_tpcc_tablespace.sql: Creates the necessary tablespaces for the TPC-C-like test on the
Oracle ASM disk groups. We provide this script in the appendix.
• create_tpcc_users.sql: Creates the necessary users and assigns them the necessary
permissions for the TPC-C-like test. We provide this script in the appendix.
• ntnx_init.sql: After you have used Benchmark Factory for Databases to initially load the data
into the database, you can use this script to copy the data into an alternative tablespace
for easy and quick reloading of the test scenario. We provide this script in the appendix.
Alternatively, you could use Oracle Flashback Recovery.
• alter_tpcc_system_options.sql: Sets the necessary Oracle Database options in addition
to those already documented in the best practice checklist. We provide this script in the
appendix.
• ntnx_tpcc_reset.sql: Resets the tablespaces after a test is complete. We provide this script
in the appendix; it allows you to run a new test from a known good point without re-creating
all the data from scratch using Benchmark Factory for Databases. You could use a database
restore or Oracle Flashback Recovery as alternatives to this approach.
• We also used an example script to create partitions on shared disks. We provide this script in
the appendix.
The Oracle scripts are based on examples found at http://www.toadworld.com/products/
benchmark-factory/f/45/p/1148/202.aspx. Nutanix used these scripts during the validation testing
process for this document. We provide them as-is and for informational purposes only, for
adaptation in your own independent testing.

8.4. Validation Results


In the testing scenarios, our objective was to prove the capability and robustness of the Nutanix
Enterprise Cloud Platform for critical database applications. Using Oracle RAC 12c monster VM
live migration with multi-NIC vMotion, we demonstrated that the platform can perform operational
maintenance while under heavy user and database load without disrupting users. We tested
the platform with vMotion migrations under high load because of
Oracle RAC’s sensitivity to network interruption, storage latency, and time synchronization
between nodes. Any interruptions, irregularities, or time drift could result in node eviction or
interruption to user database sessions.
During testing, large sequential reads peaked at over 1 GB/s combined, and mixed random
read and write operations peaked at over 12,000 IOPS combined across the three active nodes,
with an average I/O size of 16 KB. This demonstrates that typical performance for mixed 50
percent read, 100 percent random 16 KB I/O over large data sets is 4,000 IOPS per node for a
midrange Nutanix system. AOS 5.0 with All Flash platforms would be expected to provide 10x
better performance.
The storage performance equates to 6,000 IOPS per RU and 12 IOPS per watt of power for
mixed random 16 KB I/O. The tables below record the CPU utilization and network throughput
during the vMotion migrations while the Oracle RAC databases were under load.

Note: We achieved this performance with the Nutanix cluster’s usable capacity
at 75 percent full and the database served by both SSD and HDD storage. This
configuration demonstrates the worst-case scenario, as Nutanix recommends that
you provision sufficient SSD capacity to cover the most frequently accessed data
from any high-performance database systems.

Table 12: Nutanix Node CPU Utilization During vMotion Operations

vMotion Scenario        State    Time   Host B %   Host C %   Host D %
Single vMotion          Before   90s    52.81      43.13      42.70
(Host B to Host C)      During   63s    58.06      55.94      42.66
                        After    90s    27.79      71.64      42.90
Double vMotion          Before   90s    51.48      43.31      43.29
(Host B to Host C,      During   92s    71.82      77.42      34.15
Host C to Host B)       After    90s    52.15      37.90      41.41
Triple vMotion          Before   90s    54.13      41.44      43.04
(Host B to Host C,      During   175s   72.68      56.77      61.92
Host C to Host D,       After    90s    31.90      30.69      21.87
Host D to Host B)

As in any vMotion live migration event, the system experienced reduced performance and
increased average response times during the migration period. However, no user sessions were
disconnected, and the average response time was less than 0.5 seconds for the entire test.

Table 13: Nutanix Multi-NIC Throughput During vMotion Operations

Mbps             Host B                  Host C                  Host D
                 Transmit   Receive      Transmit   Receive      Transmit   Receive
Single vMotion   957.14     14,831.21    15,362.18  429.89       332.81     447.73
Double vMotion   12,640.37  13,225.61    12,929.66  12,264.02    266.41     254.83
Triple vMotion   16,737.57  14,395.43    13,973.31  13,250.91    13,360.47  17,363.56

Nutanix configured multi-NIC vMotion on a vSphere Distributed Switch (VDS) with network I/O
control enabled and vMotion traffic set to low priority. This setting ensures quality of service and
gives priority to all other traffic types, including the Oracle RAC cluster interconnects and Nutanix
CVMs.

9. Conclusion
The Nutanix platform can run almost any workload, so you can run ORADB and other VM
workloads simultaneously on the same platform. The database’s CPU and storage requirements
drive density for Oracle Database deployments. Test validation has shown that, to take full
advantage of Nutanix performance and capabilities, it is preferable to increase the number of
ORADB VMs on the Nutanix platform rather than to scale up individual Oracle instances.
From an I/O standpoint, the Nutanix platform handled the throughput and transaction
requirements of a demanding Oracle database given DSF-localized I/O, server-attached flash,
and intelligent tiering. When you need to virtualize more Oracle databases, you can simply scale
out the Nutanix platform and add more database VMs to gain capacity and performance.
During testing under the high-load conditions described in the appendix, large sequential reads
peaked at over 1 GB/s combined, and mixed random read and write operations peaked at over
12,000 IOPS combined across the three active Oracle RAC nodes, with an average I/O size of
16 KB. These results demonstrate that typical performance for mixed 50 percent read, 100
percent random 16 KB I/O over large data sets is 4,000 IOPS per Nutanix node for a midrange
Nutanix system.
The storage performance equates to 6,000 IOPS per rack unit (RU) and 12 IOPS per watt of
power for mixed random 16 KB I/O. During the load tests, we also subjected the Oracle RAC
environment to VMware vMotion migrations to demonstrate the robustness of the platform and
to prove that convergence of all networking, even when pushed to the saturation point, does not
disrupt client connections.
We determined pod sizing after carefully considering performance as well as accounting for
the additional resources needed for N+1 failover capabilities. The Oracle on Nutanix solution
provides a single high-density platform for Oracle, VM hosting, and application delivery. This
modular, pod-based approach enables such deployments to scale easily. You can start small
and pay as you grow for any workload.
The appendix includes detailed validation and benchmarking for Oracle Database on Nutanix,
along with the scripts and testing configuration required to reproduce these results.
For further discussion of best practices for Oracle on Nutanix, follow us on Twitter @Nutanix or
visit the Nutanix NEXT Community.

Appendix

Hardware, Software, and VM Setup


• Hardware
• Storage and compute:
⁃ Nutanix NX-3450
• Per node specs (4 nodes per 2RU block):
⁃ CPU: 2x Intel Xeon E5-2650 v2
⁃ Memory: 256 GB
⁃ SSD: 2x 400 GB Intel S3700
⁃ HDD: 4x 1 TB SATA drives
• Network
⁃ Dell PowerConnect N4032 and Dell PowerConnect 8024
• Software
⁃ Nutanix Version: AOS 3.5.2
⁃ Oracle Linux 6.5
⁃ Oracle Database Enterprise Edition 12c
• Infrastructure
⁃ ESXi 5.5.0, vCenter 5.5.0 Update 1
• VM
• Nutanix Controller
⁃ CPU: 8 vCPU
⁃ Memory: 32 GB (Oracle RAC tests)
• Oracle RAC node server configuration (x3):
⁃ OS: Oracle Linux 6.5 64 bit
⁃ 8 vCPU / 112 GB memory (100 GB reserved)

Benchmark and Performance Testing Tools


Below is a selection of different database and I/O performance benchmark tools that you could
use to simulate a database workload within your environment to validate your configuration and
reproduce the performance results in this document.

Benchmark Factory for Databases


Benchmark Factory for Databases is a specialist database performance tool for simulating
database I/O workloads. It can generate database schemas and load profiles for a wide range of
different database test scenarios, including scenarios based on industry standard benchmarks
such as TPC-C, TPC-E, and TPC-H.
The test benchmark consists of two main phases:
• Schema creation: builds the TPC-C schema.
• Workload simulation: runs a TPC-C-like transaction workload.
For more information about Benchmark Factory for Databases, visit the Dell Quest website:
http://www.quest.com/benchmark-factory/.

HammerDB Benchmark
HammerDB is a performance tool for simulating database I/O subsystem workloads.
The test benchmark consists of two main phases:
• Schema creation: builds the TPC-C schema.
• Workload simulation: runs a TPC-C-like transaction workload.
For more information about HammerDB, visit the HammerDB Sourceforge site: http://
hammerora.sourceforge.net/.

Oracle Orion: I/O Calibration Tool


Oracle Orion is a tool for predicting the performance of an Oracle database without installing
Oracle or creating a database. Unlike other I/O calibration tools, Oracle Orion is expressly
designed to simulate Oracle database I/O workloads using the Oracle I/O software stack. Orion
can also simulate the effect of Oracle Automatic Storage Management striping.
The test benchmark consists of four different I/O patterns:
• Small random I/O. Designed to simulate OLTP databases that typically generate random I/O
the size of a single database page (8 KB). OLTP databases are typically sensitive to storage
latency and the number of IOPS.

• Large sequential I/O. Typical of data warehouses, data loads and ETL, backups, and restores,
which all generate large sequential I/O patterns. These workload types are sensitive to
throughput in MB/s. This test generates multiple outstanding 1 MB sequential I/O.
• Large random I/O. A typical disk subsystem sees multiple sequential streams across multiple
disks as a random stream. This test simulates multiple outstanding 1 MB random I/O.
• Mixed workloads. A mix of small random and either large sequential or large random. You can
use this pattern to simulate an OLTP database while a backup is running concurrently.
For more information about Oracle Orion, see http://docs.oracle.com/database/121/TGDBA/
pfgrf_iodesign.htm#TGDBA95244.
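
As a minimal sketch, an OLTP-style Orion calibration run against two test devices might look like
the following. The test name and device paths are examples only; Orion reads the devices listed
in <testname>.lun, and write tests destroy any data on the listed devices, so point it only at disks
you can afford to lose.

# ntnx.lun: one candidate test device per line
/dev/sdu
/dev/sdv

# Launch the predefined small random I/O (OLTP-style) workload
./orion -run oltp -testname ntnx -num_disks 2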

SLOB: Silly Little Oracle Benchmark


This tool was developed by Kevin Closson and is available at http://kevinclosson.net/2012/02/06/
introducing-slob-the-silly-little-oracle-benchmark/.
SLOB aims to fill the gap between Orion and full-function transactional benchmarks. SLOB has
the following characteristics:
• SLOB supports testing Oracle logical read (SGA buffer gets) scaling.
• SLOB supports testing physical random single-block reads (db file sequential read).
• SLOB supports testing random single block writes (DBWR flushing capacity).
• SLOB supports testing extreme redo logging I/O.
• SLOB consists of simple PL/SQL.
• SLOB is entirely free of all application contention.

Scripts and Testing Configuration


We used the following scripts and configurations for the Oracle RAC 12c monster VM vMotion
testing.

System Files

/boot/grub/grub.conf
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,0)
# kernel /vmlinuz-version ro root=/dev/sda2
# initrd /initrd-[generic-]version.img
#boot=/dev/sda
default=2
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Oracle Linux Server (3.8.13-26.1.1.el6uek.x86_64)
root (hd0,0)
kernel /vmlinuz-3.8.13-26.1.1.el6uek.x86_64 ro root=UUID=aba37b5d-1695-4201-a589-3545b89d898c rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet elevator=noop iommu=soft apm=off numa=off transparent_hugepage=never vmw_pvscsi.cmd_per_lun=256 vmw_pvscsi.ring_pages=32
initrd /initramfs-3.8.13-26.1.1.el6uek.x86_64.img
title Oracle Linux Server Red Hat Compatible Kernel (2.6.32-431.5.1.el6.x86_64.debug)
root (hd0,0)
kernel /vmlinuz-2.6.32-431.5.1.el6.x86_64.debug ro root=UUID=aba37b5d-1695-4201-a589-3545b89d898c rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet elevator=noop iommu=soft apm=off numa=off transparent_hugepage=never vmw_pvscsi.cmd_per_lun=256 vmw_pvscsi.ring_pages=32
initrd /initramfs-2.6.32-431.5.1.el6.x86_64.debug.img
title Oracle Linux Server Red Hat Compatible Kernel (2.6.32-431.5.1.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-431.5.1.el6.x86_64 ro root=UUID=aba37b5d-1695-4201-a589-3545b89d898c rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet elevator=noop iommu=soft apm=off numa=off transparent_hugepage=never vmw_pvscsi.cmd_per_lun=256 vmw_pvscsi.ring_pages=32
initrd /initramfs-2.6.32-431.5.1.el6.x86_64.img
title Oracle Linux Server Red Hat Compatible Kernel (2.6.32-358.23.2.el6.x86_64.debug)
root (hd0,0)
kernel /vmlinuz-2.6.32-358.23.2.el6.x86_64.debug ro root=UUID=aba37b5d-1695-4201-a589-3545b89d898c rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet elevator=noop iommu=soft apm=off numa=off transparent_hugepage=never vmw_pvscsi.cmd_per_lun=256 vmw_pvscsi.ring_pages=32
initrd /initramfs-2.6.32-358.23.2.el6.x86_64.debug.img
title Oracle Linux Server (3.8.13-16.3.1.el6uek.x86_64)
root (hd0,0)
kernel /vmlinuz-3.8.13-16.3.1.el6uek.x86_64 ro root=UUID=aba37b5d-1695-4201-a589-3545b89d898c rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet elevator=noop iommu=soft apm=off numa=off transparent_hugepage=never vmw_pvscsi.cmd_per_lun=256 vmw_pvscsi.ring_pages=32
initrd /initramfs-3.8.13-16.3.1.el6uek.x86_64.img
title Oracle Linux Server Red Hat Compatible Kernel (2.6.32-431.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=UUID=aba37b5d-1695-4201-a589-3545b89d898c rd_NO_LUKS rd_NO_LVM LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet elevator=noop iommu=soft apm=off numa=off transparent_hugepage=never vmw_pvscsi.cmd_per_lun=256 vmw_pvscsi.ring_pages=32
initrd /initramfs-2.6.32-431.el6.x86_64.img

/etc/sysctl.conf
#########################################################
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.
# Controls IP packet forwarding
net.ipv4.ip_forward = 0
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0
# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1
# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1
# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
# Controls the default maximum size of a message queue
kernel.msgmnb = 65536
# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536
# Controls the maximum shared segment size, in bytes
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296
# oracle-rdbms-server-12cR1-preinstall setting for fs.file-max is 6815744
fs.file-max = 6815744
# oracle-rdbms-server-12cR1-preinstall setting for kernel.sem is '250 32000 100 128'

kernel.sem = 250 32000 100 128
# oracle-rdbms-server-12cR1-preinstall setting for kernel.shmmni is 4096
kernel.shmmni = 4096
# oracle-rdbms-server-12cR1-preinstall setting for kernel.shmall is 1073741824 on x86_64
# oracle-rdbms-server-12cR1-preinstall setting for kernel.shmmax is 4398046511104 on x86_64
kernel.shmmax = 4398046511104
# oracle-rdbms-server-12cR1-preinstall setting for net.core.rmem_default is 262144
#net.core.rmem_default = 262144
# oracle-rdbms-server-12cR1-preinstall setting for net.core.rmem_max is 4194304
#net.core.rmem_max = 4194304
# oracle-rdbms-server-12cR1-preinstall setting for net.core.wmem_default is 262144
#net.core.wmem_default = 262144
# oracle-rdbms-server-12cR1-preinstall setting for net.core.wmem_max is 1048576
#net.core.wmem_max = 1048576
# oracle-rdbms-server-12cR1-preinstall setting for fs.aio-max-nr is 1048576
fs.aio-max-nr = 1048576
# oracle-rdbms-server-12cR1-preinstall setting for net.ipv4.ip_local_port_range is 9000 65500
net.ipv4.ip_local_port_range = 9000 65500
# Optimize Memory Management Settings - Michael Webster
vm.overcommit_memory = 1
vm.dirty_background_ratio = 5
vm.dirty_ratio = 15
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100
vm.swappiness = 0
# Oracle HugePage Configuration (96GB SGA) - Michael Webster
vm.nr_hugepages=49416
vm.hugetlb_shm_group=54321
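# Sizing note: a 96 GiB SGA / 2 MiB hugepage size = 49,152 pages; the value
# above includes a small amount of extra headroom.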
# Network MTU Probing - Michael Webster
net.ipv4.tcp_mtu_probing=1
# Network Source Route Filtering Relaxed for RAC Interconnect - Michael Webster
net.ipv4.conf.eth2.rp_filter = 2
net.ipv4.conf.eth1.rp_filter = 2

# Optimize Network Stack and Memory Buffers - Michael Webster
# Increase TCP max buffer size settable using setsockopt()
net.core.rmem_max = 536870912
net.core.wmem_max = 536870912
# Increase Linux autotuning TCP buffer limit
net.ipv4.tcp_rmem = 4096 87380 536870912
net.ipv4.tcp_wmem = 4096 65536 536870912
# Increases max backlog of packets and congestion control method
net.core.netdev_max_backlog = 250000
net.ipv4.tcp_congestion_control=htcp
# advanced network stack tuning
net.core.somaxconn = 65535
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_syn_backlog = 65535

/etc/security/limits.conf
#########################################################
# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#<domain> <type> <item> <value>
#
#Where:
#<domain> can be:
# - an user name
# - a group name, with @group syntax
# - the wildcard *, for default entry
# - the wildcard %, can be also used with %group syntax,
# for maxlogin limit
#
#<type> can have the two values:
# - "soft" for enforcing the soft limits
# - "hard" for enforcing hard limits
#
#<item> can be one of the following:
# - core - limits the core file size (KB)
# - data - max data size (KB)
# - fsize - maximum filesize (KB)
# - memlock - max locked-in-memory address space (KB)
# - nofile - max number of open files
# - rss - max resident set size (KB)
# - stack - max stack size (KB)
# - cpu - max CPU time (MIN)
# - nproc - max number of processes
# - as - address space limit (KB)
# - maxlogins - max number of logins for this user
# - maxsyslogins - max number of logins on the system

# - priority - the priority to run user process with
# - locks - max number of file locks the user can hold
# - sigpending - max number of pending signals
# - msgqueue - max memory used by POSIX message queues (bytes)
# - nice - max nice priority allowed to raise to values: [-20, 19]
# - rtprio - max realtime priority
#
#<domain> <type> <item> <value>
#
oracle - nofile 131070
oracle - nproc 16384
oracle - stack 32768
@oinstall - memlock 104857600
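# 104857600 KB = 100 GB, matching the 100 GB memory reservation on each
# Oracle RAC VM so that the SGA hugepages can be locked in memory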
grid - nofile 131070
grid - nproc 16384
grid - stack 32768
# End of file

/etc/fstab
#########################################################
#
# /etc/fstab
# Created by anaconda on Thu Jan 30 08:38:02 2014
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=aba37b5d-1695-4201-a589-3545b89d898c /         ext4  defaults,noatime  1 1
UUID=1a0194cb-c082-48d6-a820-7650a9467f89 /boot     ext4  defaults,noatime  1 2
UUID=40bebdcf-8a64-456a-b66c-cf25640bd3c0 /home     ext4  defaults,noatime  1 2
UUID=a130bca4-b6d9-4544-b7c7-907f823f7964 /opt      ext4  defaults,noatime  1 2
UUID=86087c69-2ae4-4caa-9505-980c0de85a92 /tmp      ext4  defaults,noatime  1 2
UUID=84a2ec02-dfeb-4a26-9d8a-48663a176e38 /usr      ext4  defaults,noatime  1 2
UUID=e95aeff1-2d94-4915-ade7-7b5c259c58bd /var/log  ext4  defaults,noatime  1 2
UUID=715efc8f-0d56-4651-aa37-0ad8b29ee231 swap      swap  defaults          0 0
UUID=2ffbb4e3-735b-4d01-86ac-8b2072663d53 /backup   ext4  defaults,noatime  1 2
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0

/etc/rc.d/rc.local
#########################################################
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
touch /var/lock/subsys/local
/sbin/ethtool -G eth0 rx 4096 tx 4096
/sbin/ethtool -G eth1 rx 4096 tx 4096
/sbin/ethtool -G eth2 rx 4096 tx 4096
#/sbin/ifconfig eth0 txqueuelen 10000
#/sbin/ifconfig eth1 txqueuelen 10000
#/sbin/ifconfig eth2 txqueuelen 10000
# Set Maximum IO Size
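# (1024 KB allows the guest to issue I/O requests of up to 1 MB per disk)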
for disk in sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn sdo sdp sdq sdr sdt sdu sdv sdw; do
echo 1024 > /sys/block/$disk/queue/max_sectors_kb
echo $disk " max_sectors_kb set to 1024"
done
#

/etc/sysconfig/network
NETWORKING=yes
HOSTNAME=ntnxorarac1.nxorarac.nutanix.local
GATEWAYDEV=eth0
NOZEROCONF=yes

Scripts

Example Shared-Disk Partition Script


#Example Creating Partitions
for disk in sdt sdu sdv; do
parted /dev/$disk mklabel gpt
parted /dev/$disk mkpart primary "1 -1"
done

tnsnames.ora
Benchmark Factory for Databases (tnsnames.ora):
# tnsnames.ora
NXRACDB1=
(DESCRIPTION=
(ADDRESS=
(PROTOCOL=TCP)
(HOST=nxorarac-scan.nxorarac.nutanix.local)
(PORT=1521)
)
(CONNECT_DATA=
(SERVER=dedicated)
(SERVICE_NAME=NXRACDB1.nxorarac.nutanix.local)
)
)
# end of tnsnames.ora
# sqlnet.ora
NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
# end of sqlnet.ora

create_tpcc_tablespace.sql
Use the script below (create_tpcc_tablespace.sql) to create the necessary tablespaces for the
TPC-C-like test on the Oracle ASM disk groups.
# create_tpcc_tablespace.sql
CREATE TABLESPACE "USRS" DATAFILE
  '+DATA/NXRACDB1/d16.dbf' SIZE 32767M REUSE, '+DATA/NXRACDB1/d19.dbf' SIZE 32767M REUSE,
  '+DATA/NXRACDB1/d05.dbf' SIZE 32767M REUSE, '+DATA/NXRACDB1/d07.dbf' SIZE 32767M REUSE,
  '+DATA/NXRACDB1/d14.dbf' SIZE 32767M REUSE, '+DATA/NXRACDB1/d15.dbf' SIZE 32767M REUSE,
  '+DATA/NXRACDB1/d20.dbf' SIZE 32767M REUSE, '+DATA/NXRACDB1/d08.dbf' SIZE 32767M REUSE,
  '+DATA/NXRACDB1/d18.dbf' SIZE 32767M REUSE, '+DATA/NXRACDB1/d17.dbf' SIZE 32767M REUSE,
  '+DATA/NXRACDB1/d02.dbf' SIZE 32767M REUSE, '+DATA/NXRACDB1/d11.dbf' SIZE 32767M REUSE,
  '+DATA/NXRACDB1/d12.dbf' SIZE 32767M REUSE, '+DATA/NXRACDB1/d01.dbf' SIZE 32767M REUSE,
  '+DATA/NXRACDB1/d13.dbf' SIZE 32767M REUSE, '+DATA/NXRACDB1/d03.dbf' SIZE 32767M REUSE,
  '+DATA/NXRACDB1/d04.dbf' SIZE 32767M REUSE, '+DATA/NXRACDB1/d06.dbf' SIZE 32767M REUSE,
  '+DATA/NXRACDB1/d09.dbf' SIZE 32767M REUSE, '+DATA/NXRACDB1/d10.dbf' SIZE 32767M REUSE
  LOGGING EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
CREATE TABLESPACE "MISC" DATAFILE
  '+MISC/NXRACDB1/misc14.dbf' SIZE 32767M REUSE, '+MISC/NXRACDB1/misc05.dbf' SIZE 32767M REUSE,
  '+MISC/NXRACDB1/misc06.dbf' SIZE 32767M REUSE, '+MISC/NXRACDB1/misc07.dbf' SIZE 32767M REUSE,
  '+MISC/NXRACDB1/misc10.dbf' SIZE 32767M REUSE, '+MISC/NXRACDB1/misc09.dbf' SIZE 32767M REUSE,
  '+MISC/NXRACDB1/misc11.dbf' SIZE 32767M REUSE, '+MISC/NXRACDB1/misc01.dbf' SIZE 32767M REUSE,
  '+MISC/NXRACDB1/misc08.dbf' SIZE 32767M REUSE, '+MISC/NXRACDB1/misc13.dbf' SIZE 32767M REUSE,
  '+MISC/NXRACDB1/misc15.dbf' SIZE 32767M REUSE, '+MISC/NXRACDB1/misc03.dbf' SIZE 32767M REUSE,
  '+MISC/NXRACDB1/misc04.dbf' SIZE 32767M REUSE, '+MISC/NXRACDB1/misc12.dbf' SIZE 32767M REUSE,
  '+MISC/NXRACDB1/misc02.dbf' SIZE 32767M REUSE
  LOGGING EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
CREATE TEMPORARY TABLESPACE "TMP" TEMPFILE
  '+TEMP/NXRACDB1/temp01.dbf' SIZE 32767M REUSE, '+TEMP/NXRACDB1/temp02.dbf' SIZE 32767M REUSE,
  '+TEMP/NXRACDB1/temp03.dbf' SIZE 32767M REUSE
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1024K;
CREATE TABLESPACE "INDX" DATAFILE
  '+MISC/NXRACDB1/indx01.dbf' SIZE 32767M REUSE, '+MISC/NXRACDB1/indx02.dbf' SIZE 32767M REUSE,
  '+MISC/NXRACDB1/indx03.dbf' SIZE 32767M REUSE, '+MISC/NXRACDB1/indx04.dbf' SIZE 32767M REUSE,
  '+MISC/NXRACDB1/indx05.dbf' SIZE 32767M REUSE, '+MISC/NXRACDB1/indx06.dbf' SIZE 32767M REUSE,
  '+MISC/NXRACDB1/indx07.dbf' SIZE 32767M REUSE, '+MISC/NXRACDB1/indx08.dbf' SIZE 32767M REUSE
  LOGGING EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
ALTER TABLESPACE TEMP TABLESPACE GROUP TMP;
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TMP;

create_tpcc_users.sql
Use the script below (create_tpcc_users.sql) to create the necessary users and assign them the
necessary permissions for the TPC-C-like test.
# create_tpcc_users.sql
CREATE USER "NTNX" IDENTIFIED BY "nxracdb1" DEFAULT TABLESPACE "USRS" TEMPORARY TABLESPACE
"TMP";
GRANT ALTER SESSION TO "NTNX";
GRANT CREATE MATERIALIZED VIEW TO "NTNX";
GRANT CREATE VIEW TO "NTNX";
GRANT QUERY REWRITE TO "NTNX";
GRANT UNLIMITED TABLESPACE TO "NTNX";
GRANT EXECUTE ON "SYS"."DBMS_LOCK" TO "NTNX";
GRANT "CONNECT" TO "NTNX";
GRANT "RESOURCE" TO "NTNX";
CREATE USER "HOLDEM" IDENTIFIED BY "nxracdb1" DEFAULT TABLESPACE "MISC" TEMPORARY TABLESPACE
"TMP";
GRANT ANALYZE ANY TO "HOLDEM";
GRANT ANALYZE ANY DICTIONARY TO "HOLDEM";
GRANT UNLIMITED TABLESPACE TO "HOLDEM";
GRANT "CONNECT" TO "HOLDEM";
GRANT "RESOURCE" TO "HOLDEM";
COMMIT;
exit

ntnx_init.sql
After initially loading the data into the database using Benchmark Factory for Databases, you can
use this script (ntnx_init.sql) to copy the data into an alternative tablespace, so you can reload
the test scenario quickly and easily. Alternatively, you could use Oracle Flashback Recovery.
--- filename - ntnx_init.sql
connect ntnx/nxracdb1
grant select on c_customer to holdem;
grant select on c_district to holdem;
grant select on c_history to holdem;
grant select on c_item to holdem;
grant select on c_new_order to holdem;
grant select on c_order to holdem;
grant select on c_order_line to holdem;
grant select on c_stock to holdem;
grant select on c_warehouse to holdem;
connect holdem/nxracdb1
drop table c_customer cascade constraints purge;
drop table c_district cascade constraints purge;
drop table c_history cascade constraints purge;
drop table c_item cascade constraints purge;
drop table c_new_order cascade constraints purge;
drop table c_order cascade constraints purge;
drop table c_order_line cascade constraints purge;
drop table c_stock cascade constraints purge;
drop table c_warehouse cascade constraints purge;
create table c_customer tablespace misc parallel (degree 8) as select * from
ntnx.c_customer;
create table c_district tablespace misc parallel (degree 8) as select * from
ntnx.c_district;
create table c_history tablespace misc parallel (degree 8) as select * from
ntnx.c_history;
create table c_item tablespace misc parallel (degree 8) as select * from ntnx.c_item;
create table c_new_order tablespace misc parallel (degree 8) as select * from
ntnx.c_new_order;
create table c_order tablespace misc parallel (degree 8) as select * from ntnx.c_order;

create table c_order_line tablespace misc parallel (degree 8) as select * from ntnx.c_order_line;
create table c_stock tablespace misc parallel (degree 8) as select * from ntnx.c_stock;
create table c_warehouse tablespace misc parallel (degree 8) as select * from
ntnx.c_warehouse;
grant select on c_customer to ntnx;
grant select on c_district to ntnx;
grant select on c_history to ntnx;
grant select on c_item to ntnx;
grant select on c_new_order to ntnx;
grant select on c_order to ntnx;
grant select on c_order_line to ntnx;
grant select on c_stock to ntnx;
grant select on c_warehouse to ntnx;
begin
DBMS_STATS.GATHER_SCHEMA_STATS
(
OwnName => 'HOLDEM'
,Estimate_Percent => SYS.DBMS_STATS.AUTO_SAMPLE_SIZE
,Block_sample => TRUE
,Method_Opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO '
,Degree => 8
,Cascade => TRUE
,No_Invalidate => TRUE
);
end;
/
exec dbms_stats.gather_dictionary_stats;
exec dbms_stats.gather_fixed_objects_stats;
commit;
exit

alter_tpcc_system_options.sql
Use this script (alter_tpcc_system_options.sql) to set the necessary Oracle Database options in
addition to those already documented in the best practices checklist.
# alter_tpcc_system_options.sql
alter system set cursor_sharing=force scope=both;
alter system set log_checkpoints_to_alert=true scope=both;
alter system set trace_enabled=false scope=both;
alter system set star_transformation_enabled=true scope=both;
alter system set fast_start_mttr_target=3600 scope=both;
exit

ntnx_tpcc_reset.sql
Use this script (ntnx_tpcc_reset.sql) to reset the tablespaces after a test completes, so you can
run a new test from a known good point without re-creating all the data from scratch using
Benchmark Factory for Databases. As an alternative to this approach, you could use a database
restore or Oracle Flashback Recovery.
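
If you choose the Flashback alternative, a guaranteed restore point is one way to capture the
known good point. The following is a minimal sketch; the restore point name is illustrative, and
flashback prerequisites such as a configured fast recovery area must already be in place.

-- Create the known good point once, after the initial Benchmark Factory load:
CREATE RESTORE POINT before_tpcc GUARANTEE FLASHBACK DATABASE;
-- After each test run, rewind (the database must be mounted, not open):
-- FLASHBACK DATABASE TO RESTORE POINT before_tpcc;
-- ALTER DATABASE OPEN RESETLOGS;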
Two small helper scripts used alongside the reset script report ASM disk group free space and
any in-flight ASM operations:

--- filename - asmdgfreespace.sql
select name,total_mb,free_mb from v$asm_diskgroup;
exit

--- filename - listasmoperations.sql
select INST_ID, OPERATION, STATE, POWER, SOFAR, EST_WORK, EST_RATE, EST_MINUTES from GV$ASM_OPERATION WHERE STATE != 'DONE';
exit

--- filename - tpcc_reset.sql
connect ntnx/nxracdb1
drop table c_customer cascade constraints purge;
drop table c_district cascade constraints purge;
drop table c_history cascade constraints purge;
drop table c_item cascade constraints purge;
drop table c_new_order cascade constraints purge;
drop table c_order cascade constraints purge;
drop table c_order_line cascade constraints purge;
drop table c_stock cascade constraints purge;
drop table c_warehouse cascade constraints purge;

create table c_customer tablespace usrs parallel (degree 16) as select * from
holdem.c_customer;
create table c_district tablespace usrs parallel (degree 16) as select * from
holdem.c_district;
create table c_history tablespace usrs parallel (degree 16) as select * from
holdem.c_history;
create table c_item tablespace usrs parallel (degree 16) as select * from
holdem.c_item;
create table c_new_order tablespace usrs parallel (degree 16) as select * from
holdem.c_new_order;
create table c_order tablespace usrs parallel (degree 16) as select * from
holdem.c_order;
create table c_order_line tablespace usrs parallel (degree 16) as select * from
holdem.c_order_line;
create table c_stock tablespace usrs parallel (degree 16) as select * from
holdem.c_stock;
create table c_warehouse tablespace usrs parallel (degree 16) as select * from
holdem.c_warehouse;
alter table c_customer NOPARALLEL;
alter table c_district NOPARALLEL;
alter table c_history NOPARALLEL;
alter table c_item NOPARALLEL;
alter table c_new_order NOPARALLEL;
alter table c_order NOPARALLEL;
alter table c_order_line NOPARALLEL;
alter table c_stock NOPARALLEL;
alter table c_warehouse NOPARALLEL;
CREATE UNIQUE INDEX C_CUSTOMER_I1 ON NTNX.C_CUSTOMER
(C_W_ID, C_D_ID, C_ID)
NOLOGGING
TABLESPACE INDX
PARALLEL (degree 16);
CREATE INDEX C_CUSTOMER_I2 ON NTNX.C_CUSTOMER
(C_LAST, C_W_ID, C_D_ID, C_FIRST)
NOLOGGING
TABLESPACE INDX
PARALLEL (degree 16);

CREATE UNIQUE INDEX C_DISTRICT_I1 ON NTNX.C_DISTRICT
(D_W_ID, D_ID)
NOLOGGING
TABLESPACE INDX
PARALLEL (degree 16);
CREATE UNIQUE INDEX C_ITEM_I1 ON NTNX.C_ITEM
(I_ID)
NOLOGGING
TABLESPACE INDX
PARALLEL (degree 16);
CREATE UNIQUE INDEX C_NEW_ORDER_I1 ON NTNX.C_NEW_ORDER
(NO_W_ID, NO_D_ID, NO_O_ID)
NOLOGGING
TABLESPACE INDX
PARALLEL (degree 16);
CREATE UNIQUE INDEX C_ORDER_I1 ON NTNX.C_ORDER
(O_ID, O_W_ID, O_D_ID)
NOLOGGING
TABLESPACE INDX
PARALLEL (degree 16);
CREATE UNIQUE INDEX C_ORDER_LINE_I1 ON NTNX.C_ORDER_LINE
(OL_O_ID, OL_W_ID, OL_D_ID, OL_NUMBER)
NOLOGGING
TABLESPACE INDX
PARALLEL (degree 16);
CREATE UNIQUE INDEX C_STOCK_I1 ON NTNX.C_STOCK
(S_I_ID, S_W_ID)
NOLOGGING
TABLESPACE INDX
PARALLEL (degree 16);
CREATE UNIQUE INDEX C_WAREHOUSE_I1 ON NTNX.C_WAREHOUSE
(W_ID)
NOLOGGING

TABLESPACE INDX
PARALLEL (degree 16);
/* Formatted on 2007/02/08 14:25 (Formatter Plus v4.8.8) */
CREATE OR REPLACE PROCEDURE NTNX.c_sp_delivery (
ware_id NUMBER,
carrier_id NUMBER,
order_1 IN OUT NUMBER,
order_2 IN OUT NUMBER,
order_3 IN OUT NUMBER,
order_4 IN OUT NUMBER,
order_5 IN OUT NUMBER,
order_6 IN OUT NUMBER,
order_7 IN OUT NUMBER,
order_8 IN OUT NUMBER,
order_9 IN OUT NUMBER,
order_10 IN OUT NUMBER,
retry IN OUT NUMBER,
cur_date IN DATE
)
AS
TYPE intarray IS TABLE OF INTEGER
INDEX BY BINARY_INTEGER;
order_id intarray;
dist_id INTEGER;
cust_id INTEGER;
amount_sum NUMBER;
no_rowid ROWID;
not_serializable EXCEPTION;
PRAGMA EXCEPTION_INIT (not_serializable, -8177);
deadlock EXCEPTION;
PRAGMA EXCEPTION_INIT (deadlock, -60);
snapshot_too_old EXCEPTION;
PRAGMA EXCEPTION_INIT (snapshot_too_old, -1555);

CURSOR o_cur
IS
SELECT no_o_id, ROWID
FROM c_new_order
WHERE no_w_id = ware_id AND no_d_id = dist_id
ORDER BY no_w_id, no_d_id, no_o_id;
BEGIN
FOR i IN 1 .. 10
LOOP
dist_id := i;
LOOP
BEGIN
OPEN o_cur;
FETCH o_cur
INTO order_id (i), no_rowid;
IF (o_cur%NOTFOUND)
THEN
CLOSE o_cur;
COMMIT;
order_id (i) := 0;
EXIT;
END IF;
CLOSE o_cur;
DELETE FROM c_new_order
WHERE ROWID = no_rowid;
UPDATE c_order
SET o_carrier_id = carrier_id
WHERE o_d_id = dist_id AND o_w_id = ware_id
AND o_id = order_id (i);
SELECT o_c_id
INTO cust_id
FROM c_order
WHERE o_d_id = dist_id AND o_w_id = ware_id

AND o_id = order_id (i);
UPDATE c_order_line
SET ol_delivery_d = cur_date
WHERE ol_d_id = dist_id
AND ol_w_id = ware_id
AND ol_o_id = order_id (i);
SELECT SUM (ol_amount)
INTO amount_sum
FROM c_order_line
WHERE ol_d_id = dist_id
AND ol_w_id = ware_id
AND ol_o_id = order_id (i);
UPDATE c_customer
SET c_balance = c_balance + amount_sum,
c_delivery_cnt = c_delivery_cnt + 1
WHERE c_id = cust_id AND c_d_id = dist_id AND c_w_id = ware_id;
COMMIT;
EXIT;
EXCEPTION
WHEN not_serializable OR deadlock OR snapshot_too_old
THEN
ROLLBACK;
retry := retry + 1;
END;
END LOOP;
END LOOP;
order_1 := order_id (1);
order_2 := order_id (2);
order_3 := order_id (3);
order_4 := order_id (4);
order_5 := order_id (5);
order_6 := order_id (6);
order_7 := order_id (7);

order_8 := order_id (8);
order_9 := order_id (9);
order_10 := order_id (10);
END;
/
CREATE OR REPLACE PROCEDURE NTNX.c_sp_new_order (
ware_id NUMBER,
dist_id NUMBER,
cust_id NUMBER,
ord_ol_cnt NUMBER,
ord_all_local NUMBER,
cust_discount OUT NUMBER,
cust_last OUT VARCHAR2,
cust_credit OUT VARCHAR2,
dist_tax OUT NUMBER,
ware_tax OUT NUMBER,
ord_id IN OUT NUMBER,
retry IN OUT NUMBER,
cur_date IN DATE
)
AS
dist_rowid ROWID;
not_serializable EXCEPTION;
PRAGMA EXCEPTION_INIT (not_serializable, -8177);
deadlock EXCEPTION;
PRAGMA EXCEPTION_INIT (deadlock, -60);
snapshot_too_old EXCEPTION;
PRAGMA EXCEPTION_INIT (snapshot_too_old, -1555);
integrity_viol EXCEPTION;
PRAGMA EXCEPTION_INIT (integrity_viol, -1);
BEGIN
LOOP
BEGIN

SELECT c_district.ROWID, d_tax, d_next_o_id, w_tax
INTO dist_rowid, dist_tax, ord_id, ware_tax
FROM c_district, c_warehouse
WHERE d_id = dist_id AND d_w_id = w_id AND w_id = ware_id;
UPDATE c_district
SET d_next_o_id = ord_id + 1
WHERE ROWID = dist_rowid;
SELECT c_discount, c_last, c_credit
INTO cust_discount, cust_last, cust_credit
FROM c_customer
WHERE c_id = cust_id AND c_d_id = dist_id AND c_w_id = ware_id;
INSERT INTO c_new_order
VALUES (ord_id, dist_id, ware_id);
INSERT INTO c_order
VALUES (ord_id, dist_id, ware_id, cust_id, cur_date, 11,
ord_ol_cnt, ord_all_local);
EXIT;
EXCEPTION
WHEN not_serializable OR deadlock OR snapshot_too_old OR integrity_viol
THEN
ROLLBACK;
retry := retry + 1;
END;
END LOOP;
END;
/
CREATE OR REPLACE PROCEDURE NTNX.c_sp_order_status_id (
ware_id NUMBER,
dist_id NUMBER,
cust_id IN OUT NUMBER,
bylastname NUMBER,
cust_last IN OUT VARCHAR2,
cust_first OUT VARCHAR2,

cust_middle OUT VARCHAR2,
cust_balance OUT NUMBER,
ord_id IN OUT NUMBER,
ord_entry_d OUT DATE,
ord_carrier_id OUT NUMBER,
ord_ol_cnt OUT NUMBER
)
IS
TYPE rowidarray IS TABLE OF ROWID
INDEX BY BINARY_INTEGER;
cust_rowid ROWID;
ol BINARY_INTEGER;
c_num BINARY_INTEGER;
row_id rowidarray;
not_serializable EXCEPTION;
PRAGMA EXCEPTION_INIT (not_serializable, -8177);
deadlock EXCEPTION;
PRAGMA EXCEPTION_INIT (deadlock, -60);
snapshot_too_old EXCEPTION;
PRAGMA EXCEPTION_INIT (snapshot_too_old, -1555);
CURSOR mo_cur
IS
SELECT o_id, o_entry_d, o_carrier_id, o_ol_cnt
FROM c_order
WHERE o_d_id = dist_id AND o_w_id = ware_id AND o_c_id = cust_id
ORDER BY o_w_id, o_d_id, o_c_id, o_id DESC;
BEGIN
LOOP
BEGIN
SELECT c_balance, c_first, c_middle, c_last
INTO cust_balance, cust_first, cust_middle, cust_last
FROM c_customer
WHERE c_id = cust_id AND c_d_id = dist_id AND c_w_id = ware_id;

OPEN mo_cur;
FETCH mo_cur
INTO ord_id, ord_entry_d, ord_carrier_id, ord_ol_cnt;
CLOSE mo_cur;
EXIT;
EXCEPTION
WHEN not_serializable OR deadlock OR snapshot_too_old
THEN
ROLLBACK;
END;
END LOOP;
END;
/
CREATE OR REPLACE PROCEDURE NTNX.c_sp_order_status_name (
ware_id NUMBER,
dist_id NUMBER,
cust_id IN OUT NUMBER,
bylastname NUMBER,
cust_last IN OUT VARCHAR2,
cust_first OUT VARCHAR2,
cust_middle OUT VARCHAR2,
cust_balance OUT NUMBER,
ord_id IN OUT NUMBER,
ord_entry_d OUT DATE,
ord_carrier_id OUT NUMBER,
ord_ol_cnt OUT NUMBER
)
IS
TYPE rowidarray IS TABLE OF ROWID
INDEX BY BINARY_INTEGER;
cust_rowid ROWID;
ol BINARY_INTEGER;
c_num BINARY_INTEGER;

row_id rowidarray;
not_serializable EXCEPTION;
PRAGMA EXCEPTION_INIT (not_serializable, -8177);
deadlock EXCEPTION;
PRAGMA EXCEPTION_INIT (deadlock, -60);
snapshot_too_old EXCEPTION;
PRAGMA EXCEPTION_INIT (snapshot_too_old, -1555);
CURSOR c_cur
IS
SELECT ROWID
FROM c_customer
WHERE c_d_id = dist_id AND c_w_id = ware_id AND c_last = cust_last
ORDER BY c_w_id, c_d_id, c_last, c_first;
CURSOR mo_cur
IS
SELECT o_id, o_entry_d, o_carrier_id, o_ol_cnt
FROM c_order
WHERE o_d_id = dist_id AND o_w_id = ware_id AND o_c_id = cust_id
ORDER BY o_w_id, o_d_id, o_c_id, o_id DESC;
BEGIN
LOOP
BEGIN
c_num := 0;
FOR c_id_rec IN c_cur
LOOP
c_num := c_num + 1;
row_id (c_num) := c_id_rec.ROWID;
END LOOP;
cust_rowid := row_id ((c_num + 1) / 2);
SELECT c_id, c_balance, c_first, c_middle, c_last
INTO cust_id, cust_balance, cust_first, cust_middle, cust_last
FROM c_customer
WHERE ROWID = cust_rowid;

OPEN mo_cur;
FETCH mo_cur
INTO ord_id, ord_entry_d, ord_carrier_id, ord_ol_cnt;
CLOSE mo_cur;
EXIT;
EXCEPTION
WHEN not_serializable OR deadlock OR snapshot_too_old
THEN
ROLLBACK;
END;
END LOOP;
END;
/
CREATE OR REPLACE PROCEDURE NTNX.c_sp_payment_id (
ware_id NUMBER,
dist_id NUMBER,
cust_w_id NUMBER,
cust_d_id NUMBER,
cust_id IN OUT NUMBER,
bylastname NUMBER,
hist_amount NUMBER,
cust_last IN OUT VARCHAR2,
ware_street_1 OUT VARCHAR2,
ware_street_2 OUT VARCHAR2,
ware_city OUT VARCHAR2,
ware_state OUT VARCHAR2,
ware_zip OUT VARCHAR2,
dist_street_1 OUT VARCHAR2,
dist_street_2 OUT VARCHAR2,
dist_city OUT VARCHAR2,
dist_state OUT VARCHAR2,
dist_zip OUT VARCHAR2,
cust_first OUT VARCHAR2,

cust_middle OUT VARCHAR2,
cust_street_1 OUT VARCHAR2,
cust_street_2 OUT VARCHAR2,
cust_city OUT VARCHAR2,
cust_state OUT VARCHAR2,
cust_zip OUT VARCHAR2,
cust_phone OUT VARCHAR2,
cust_since OUT DATE,
cust_credit IN OUT VARCHAR2,
cust_credit_lim OUT NUMBER,
cust_discount OUT NUMBER,
cust_balance IN OUT NUMBER,
cust_data OUT VARCHAR2,
retry IN OUT NUMBER,
cur_date IN DATE
)
AS
TYPE rowidarray IS TABLE OF ROWID
INDEX BY BINARY_INTEGER;
cust_rowid ROWID;
ware_rowid ROWID;
dist_ytd NUMBER;
dist_name VARCHAR2 (11);
ware_ytd NUMBER;
ware_name VARCHAR2 (11);
c_num BINARY_INTEGER;
row_id rowidarray;
cust_payments PLS_INTEGER;
cust_ytd NUMBER;
cust_data_temp VARCHAR2 (500);
not_serializable EXCEPTION;
PRAGMA EXCEPTION_INIT (not_serializable, -8177);
deadlock EXCEPTION;

PRAGMA EXCEPTION_INIT (deadlock, -60);
snapshot_too_old EXCEPTION;
PRAGMA EXCEPTION_INIT (snapshot_too_old, -1555);
BEGIN
LOOP
BEGIN
SELECT ROWID, c_first, c_middle, c_last,
c_street_1, c_street_2, c_city, c_state,
c_zip, c_phone, c_since, c_credit,
c_credit_lim, c_discount, c_balance - hist_amount,
c_payment_cnt, c_ytd_payment + hist_amount,
DECODE (c_credit, 'BC', c_data, ' ')
INTO cust_rowid, cust_first, cust_middle, cust_last,
cust_street_1, cust_street_2, cust_city, cust_state,
cust_zip, cust_phone, cust_since, cust_credit,
cust_credit_lim, cust_discount, cust_balance,
cust_payments, cust_ytd,
cust_data_temp
FROM c_customer
WHERE c_id = cust_id AND c_d_id = cust_d_id AND c_w_id = cust_w_id;
cust_payments := cust_payments + 1;
IF cust_credit = 'BC'
THEN
cust_data_temp :=
SUBSTR ( ( TO_CHAR (cust_id)
|| ' '
|| TO_CHAR (cust_d_id)
|| ' '
|| TO_CHAR (cust_w_id)
|| ' '
|| TO_CHAR (dist_id)
|| ' '
|| TO_CHAR (ware_id)

|| ' '
|| TO_CHAR (hist_amount, '9999.99')
|| '|'
)
|| cust_data_temp,
1,
500
);
UPDATE c_customer
SET c_balance = cust_balance,
c_ytd_payment = cust_ytd,
c_payment_cnt = cust_payments,
c_data = cust_data_temp
WHERE ROWID = cust_rowid;
cust_data := SUBSTR (cust_data_temp, 1, 200);
ELSE
UPDATE c_customer
SET c_balance = cust_balance,
c_ytd_payment = cust_ytd,
c_payment_cnt = cust_payments
WHERE ROWID = cust_rowid;
cust_data := cust_data_temp;
END IF;
SELECT c_district.ROWID, d_name, d_street_1, d_street_2,
d_city, d_state, d_zip, d_ytd + hist_amount,
c_warehouse.ROWID, w_name, w_street_1, w_street_2,
w_city, w_state, w_zip, w_ytd + hist_amount
INTO cust_rowid, dist_name, dist_street_1, dist_street_2,
dist_city, dist_state, dist_zip, dist_ytd,
ware_rowid, ware_name, ware_street_1, ware_street_2,
ware_city, ware_state, ware_zip, ware_ytd
FROM c_district, c_warehouse
WHERE d_id = dist_id AND d_w_id = w_id AND w_id = ware_id;

UPDATE c_district
SET d_ytd = dist_ytd
WHERE ROWID = cust_rowid;
UPDATE c_warehouse
SET w_ytd = ware_ytd
WHERE ROWID = ware_rowid;
INSERT INTO c_history
VALUES (cust_id, cust_d_id, cust_w_id, dist_id, ware_id,
cur_date, hist_amount, ware_name || ' ' || dist_name);
COMMIT;
EXIT;
EXCEPTION
WHEN not_serializable OR deadlock OR snapshot_too_old
THEN
ROLLBACK;
retry := retry + 1;
END;
END LOOP;
END;
/
CREATE OR REPLACE PROCEDURE NTNX.c_sp_payment_name (
ware_id NUMBER,
dist_id NUMBER,
cust_w_id NUMBER,
cust_d_id NUMBER,
cust_id IN OUT NUMBER,
bylastname NUMBER,
hist_amount NUMBER,
cust_last IN OUT VARCHAR2,
ware_street_1 OUT VARCHAR2,
ware_street_2 OUT VARCHAR2,
ware_city OUT VARCHAR2,
ware_state OUT VARCHAR2,

ware_zip OUT VARCHAR2,
dist_street_1 OUT VARCHAR2,
dist_street_2 OUT VARCHAR2,
dist_city OUT VARCHAR2,
dist_state OUT VARCHAR2,
dist_zip OUT VARCHAR2,
cust_first OUT VARCHAR2,
cust_middle OUT VARCHAR2,
cust_street_1 OUT VARCHAR2,
cust_street_2 OUT VARCHAR2,
cust_city OUT VARCHAR2,
cust_state OUT VARCHAR2,
cust_zip OUT VARCHAR2,
cust_phone OUT VARCHAR2,
cust_since OUT DATE,
cust_credit IN OUT VARCHAR2,
cust_credit_lim OUT NUMBER,
cust_discount OUT NUMBER,
cust_balance IN OUT NUMBER,
cust_data OUT VARCHAR2,
retry IN OUT NUMBER,
cur_date IN DATE
)
AS
TYPE rowidarray IS TABLE OF ROWID
INDEX BY BINARY_INTEGER;
cust_rowid ROWID;
ware_rowid ROWID;
dist_ytd NUMBER;
dist_name VARCHAR2 (11);
ware_ytd NUMBER;
ware_name VARCHAR2 (11);
c_num BINARY_INTEGER;

row_id rowidarray;
cust_payments PLS_INTEGER;
cust_ytd NUMBER;
cust_data_temp VARCHAR2 (500);
not_serializable EXCEPTION;
PRAGMA EXCEPTION_INIT (not_serializable, -8177);
deadlock EXCEPTION;
PRAGMA EXCEPTION_INIT (deadlock, -60);
snapshot_too_old EXCEPTION;
PRAGMA EXCEPTION_INIT (snapshot_too_old, -1555);
CURSOR c_cur
IS
SELECT ROWID
FROM c_customer
WHERE c_d_id = cust_d_id AND c_w_id = cust_w_id
AND c_last = cust_last
ORDER BY c_w_id, c_d_id, c_last, c_first;
BEGIN
LOOP
BEGIN
c_num := 0;
FOR c_id_rec IN c_cur
LOOP
c_num := c_num + 1;
row_id (c_num) := c_id_rec.ROWID;
END LOOP;
cust_rowid := row_id ((c_num + 1) / 2);
SELECT c_id, c_first, c_middle, c_last, c_street_1,
c_street_2, c_city, c_state, c_zip, c_phone,
c_since, c_credit, c_credit_lim, c_discount,
c_balance - hist_amount, c_payment_cnt,
c_ytd_payment + hist_amount,
DECODE (c_credit, 'BC', c_data, ' ')

INTO cust_id, cust_first, cust_middle, cust_last, cust_street_1,
cust_street_2, cust_city, cust_state, cust_zip, cust_phone,
cust_since, cust_credit, cust_credit_lim, cust_discount,
cust_balance, cust_payments,
cust_ytd,
cust_data_temp
FROM c_customer
WHERE ROWID = cust_rowid;
cust_payments := cust_payments + 1;
IF cust_credit = 'BC'
THEN
cust_data_temp :=
SUBSTR ( ( TO_CHAR (cust_id)
|| ' '
|| TO_CHAR (cust_d_id)
|| ' '
|| TO_CHAR (cust_w_id)
|| ' '
|| TO_CHAR (dist_id)
|| ' '
|| TO_CHAR (ware_id)
|| ' '
|| TO_CHAR (hist_amount, '9999.99')
|| '|'
)
|| cust_data_temp,
1,
500
);
UPDATE c_customer
SET c_balance = cust_balance,
c_ytd_payment = cust_ytd,
c_payment_cnt = cust_payments,

c_data = cust_data_temp
WHERE ROWID = cust_rowid;
cust_data := SUBSTR (cust_data_temp, 1, 200);
ELSE
UPDATE c_customer
SET c_balance = cust_balance,
c_ytd_payment = cust_ytd,
c_payment_cnt = cust_payments
WHERE ROWID = cust_rowid;
cust_data := cust_data_temp;
END IF;
SELECT c_district.ROWID, d_name, d_street_1, d_street_2,
d_city, d_state, d_zip, d_ytd + hist_amount,
c_warehouse.ROWID, w_name, w_street_1, w_street_2,
w_city, w_state, w_zip, w_ytd + hist_amount
INTO cust_rowid, dist_name, dist_street_1, dist_street_2,
dist_city, dist_state, dist_zip, dist_ytd,
ware_rowid, ware_name, ware_street_1, ware_street_2,
ware_city, ware_state, ware_zip, ware_ytd
FROM c_district, c_warehouse
WHERE d_id = dist_id AND d_w_id = w_id AND w_id = ware_id;
UPDATE c_district
SET d_ytd = dist_ytd
WHERE ROWID = cust_rowid;
UPDATE c_warehouse
SET w_ytd = ware_ytd
WHERE ROWID = ware_rowid;
INSERT INTO c_history
VALUES (cust_id, cust_d_id, cust_w_id, dist_id, ware_id,
cur_date, hist_amount, ware_name || ' ' || dist_name);
COMMIT;
EXIT;
EXCEPTION

WHEN not_serializable OR deadlock OR snapshot_too_old
THEN
ROLLBACK;
retry := retry + 1;
END;
END LOOP;
END;
/
CREATE OR REPLACE PROCEDURE NTNX.c_sp_stock_level (
ware_id NUMBER,
dist_id NUMBER,
threshold NUMBER,
low_stock OUT NUMBER
)
IS
not_serializable EXCEPTION;
PRAGMA EXCEPTION_INIT (not_serializable, -8177);
deadlock EXCEPTION;
PRAGMA EXCEPTION_INIT (deadlock, -60);
snapshot_too_old EXCEPTION;
PRAGMA EXCEPTION_INIT (snapshot_too_old, -1555);
BEGIN
LOOP
BEGIN
SELECT COUNT (DISTINCT s_i_id)
INTO low_stock
FROM c_order_line, c_stock, c_district
WHERE d_id = dist_id
AND d_w_id = ware_id
AND d_id = ol_d_id
AND d_w_id = ol_w_id
AND ol_i_id = s_i_id
AND ol_w_id = s_w_id
AND s_quantity < threshold
AND ol_o_id BETWEEN (d_next_o_id - 20) AND (d_next_o_id - 1);
COMMIT;
EXIT;
EXCEPTION
WHEN not_serializable OR deadlock OR snapshot_too_old
THEN
ROLLBACK;
END;
END LOOP;
END;
/
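-- Optional smoke test (not part of the original scripts): invoke the
-- stock-level procedure with arbitrary example values and print the
-- result. Adjust the warehouse, district, and threshold values to suit
-- your own test data.
SET SERVEROUTPUT ON
DECLARE
   low_stock NUMBER;
BEGIN
   NTNX.c_sp_stock_level (ware_id => 1,
                          dist_id => 1,
                          threshold => 10,
                          low_stock => low_stock);
   DBMS_OUTPUT.PUT_LINE ('Items below threshold: ' || low_stock);
END;
/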
begin
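-- Gather fresh optimizer statistics for the NTNX schema so that the
-- cost-based optimizer has current data for the test workload.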
DBMS_STATS.GATHER_SCHEMA_STATS
(
OwnName => 'NTNX'
,Estimate_Percent => SYS.DBMS_STATS.AUTO_SAMPLE_SIZE
,Block_sample => TRUE
,Method_Opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO '
,Degree => 9
,Cascade => TRUE
,No_Invalidate => TRUE
);
end;
/
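-- Optional check (not part of the original scripts): confirm that the
-- statistics gather completed by inspecting LAST_ANALYZED for the schema.
SELECT table_name, num_rows, last_analyzed
FROM all_tables
WHERE owner = 'NTNX'
ORDER BY table_name;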
exit

These Oracle scripts are based on examples from http://www.toadworld.com/products/benchmark-factory/f/45/p/1148/202.aspx. Nutanix used these scripts during the validation testing process for this document. We provide them as is and for informational purposes only, for adaptation in your own independent testing.

About the Author
Michael Webster is a VMware Certified Design Expert (VCDX-066) and Technical Director, Business Critical Apps Engineering at Nutanix, Inc. In this role, Michael manages a team of solutions architects who design and validate solutions, and characterize their performance, for business-critical applications on the Nutanix platform. Michael is coauthor of Virtualizing SQL Server on VMware: Doing It Right (VMware Press, 2014), technical reviewer of VCDX Boot Camp and Virtualizing and Tuning Large Scale Java Apps (both from VMware Press), and author of the top 15 virtualization blog http://longwhiteclouds.com. He has been a top 10 session speaker at both the Nutanix .NEXT conference and VMworld.
Prior to joining Nutanix, Michael was technical lead of the VMware APJ Center of Excellence for Business Critical Apps. He is recognized as a global subject matter expert on virtualizing business-critical apps, including Oracle, SAP, SQL Server, Exchange, SharePoint, and enterprise Java, as well as migrations from UNIX platforms.

About Nutanix
Nutanix makes infrastructure invisible, elevating IT to focus on the applications and services that
power their business. The Nutanix Enterprise Cloud OS leverages web-scale engineering and
consumer-grade design to natively converge compute, virtualization, and storage into a resilient,
software-defined solution with rich machine intelligence. The result is predictable performance,
cloud-like infrastructure consumption, robust security, and seamless application mobility for a
broad range of enterprise applications. Learn more at www.nutanix.com or follow us on Twitter
@nutanix.


List of Figures
Figure 1: Nutanix Enterprise Cloud Platform...................................................................10

Figure 2: Information Life Cycle Management................................................................ 12

Figure 3: Overview of the Nutanix Architecture...............................................................12

Figure 4: Data Locality and Live Migration......................................................................13

Figure 5: ORADB on Nutanix Conceptual Arch.............................................................. 14

Figure 6: DSF I/O Path................................................................................................... 15

Figure 7: Data I/O Detail................................................................................................. 16

Figure 8: Data I/O Detail Expanded................................................................................ 17

Figure 9: Sample Multiwriter Flag on RAC Node 1.........................................................27

Figure 10: Sample Multiwriter Flag on RAC Node 2....................................... 27

Figure 11: Nutanix Component Architecture................................................................... 31

Figure 12: vMotion Test Scenarios Overview................................................................. 32

Figure 13: Test Environment Overview........................................................................... 40

Figure 14: vMotion Test Scenarios Overview................................................................. 40

Figure 15: Network I/O Control Configuration................................................................. 42

Figure 16: Oracle Single Client Access Name................................................................ 43

Figure 17: Oracle RAC Node Disk Layout...................................................................... 43


List of Tables
Table 1: Document Version History.................................................................................. 7

Table 2: General Design Decisions.................................................................................20

Table 3: VMware vSphere Design Decisions.................................................................. 20

Table 4: Nutanix Design Decisions................................................................................. 21

Table 5: VM Configuration...............................................................................................21

Table 6: Nutanix Storage Configuration.......................................................................... 31

Table 7: Scenario Overview............................................................................................ 33

Table 8: Recommended OS and App Mount Points for Linux—OLTP and OLAP...........33

Table 9: OLTP Scenario Detail....................................................................................... 34

Table 10: OLAP Scenario Detail..................................................................................... 34

Table 11: Oracle RAC 12c ASM Disk Groups................................................................ 44

Table 12: Nutanix Node CPU Utilization During vMotion Operations.............................. 46

Table 13: Nutanix Multi-NIC Throughput During vMotion Operations............................. 47
