
Building an Oracle Grid with Oracle VM on Dell Blade Servers and EqualLogic iSCSI storage

Kai Yu Sr. System Engineer Consultant Dell Global Solutions Engineering

David Mar Engineering Manager Dell Global Solutions Engineering

ABOUT THE AUTHORS

Kai Yu is a Senior System Engineer in the Dell Oracle Solutions Lab. He has over 14 years of Oracle DBA and solutions engineering experience. His specialties include Oracle RAC, Oracle EBS, and OVM. He is an avid author of Oracle technology articles, a frequent presenter at Oracle OpenWorld and Collaborate, and the Oracle RAC SIG US Event Chair. David Mar is an Engineering Manager for the Dell | Oracle Solutions group. He directs a global team that creates Oracle RAC reference configurations and builds best practices based upon tested and validated configurations. He has worked for Dell since 2000 and holds an MS in Computer Science and a BS in Computer Engineering from Texas A&M University.

COMPONENTS WITHIN THE GRID

[Diagram: physical and logical views of the Grid: OVM servers hosting guest domains (dom1 to dom4), databases 1 to 4 scaled out across blades, and volumes 1 to 3 in the shared storage pool]

Compute virtualization roles and components
Virtual Grid: Oracle VM partitions a single server and consolidates workloads; guest VMs each run their own OS and DB/application stack in a VM server pool
Physical Grid: Oracle RAC scales up through scale-out of hardware; database scalability only
[Diagram: guest VMs in a VM server pool (virtual Grid) alongside physical RAC database nodes (physical Grid)]

OVM | VIRTUALIZATION'S ROLE


Testing Oracle RAC infrastructure with minimal hardware
Testing Oracle RAC pre-production environments
Application server testing and production
Good for maximizing server utilization rates
Consolidation of low-utilization servers
Partitioning one application from another

OVERVIEW OF EQUALLOGIC

[Diagram: EqualLogic storage pool presenting volumes 1 to 3]

4 Gigabit network ports per controller
2 GB cache per controller
Intelligent automated management tools
Self-managing array
Linear scalability
Data protection software

STORAGE LAYER ON EQUALLOGIC


[Diagram: volumes 1 to 3 in the EqualLogic storage pool]

EqualLogic and ASM work well together


Performance Implications of Running Oracle ASM with Dell EqualLogic PS Series Storage: http://www.dell.com/oracle

EqualLogic ASM Best Practices for Oracle


Best Practices for Deployment of Oracle ASM with Dell EqualLogic PS iSCSI Storage System: http://www.dell.com/oracle

Peer Storage architecture
Quick setup
Linear performance improvements

SERVER LAYER ON BLADES


Oracle RAC Servers
[Photos: front and back views of the blade enclosure]

Energy-efficient PowerEdge M1000e enclosure
11th-generation blade servers available
Intel Nehalem processors
Intel QuickPath memory technology
Three redundant fabrics
16 half-height blade servers

GRID Reference Configuration POC Project


1 PHYSICAL GRID

2 THE VIRTUAL GRID

3 MANAGEMENT

Oracle Grid Control manages both the physical Grid and the virtual Grid
Consider HA for the production management infrastructure
Manage from outside the blade cluster to avoid a single point of failure
Dell servers enable health monitoring from Enterprise Manager

4 STORAGE

GRID IMPLEMENTATION


GRID IMPLEMENTATION
Implementation Tasks
Configure the Grid Control management infrastructure
Configure the EqualLogic shared storage
Configure the physical Grid
Configure the virtual Grid

Grid Control Management Infrastructure Configuration


OEL 4.7 64-bit installed on the Grid Control server
Oracle Enterprise Manager Grid Control 10.2.0.3 installed with the OMS server, the repository database, and the Grid Control agent
Upgraded to Grid Control 10.2.0.5 with patch 3731593_10205_x86_64, applied to the OMS server, the repository, and the agent
Enabled the Virtual Management Pack with patch 8244731 on the OMS (a minimal patching sketch follows)
Installed the TightVNC Java viewer on the OMS server
Restarted the OMS server
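A minimal sketch of applying the one-off Virtual Management Pack patch to the OMS home and bouncing OMS; the OMS home path and the patch staging directory are assumptions, not values from the deck:

    # Apply the one-off patch to the OMS Oracle home and restart OMS
    export ORACLE_HOME=/u01/app/oracle/oms10g        # assumed OMS home
    $ORACLE_HOME/bin/emctl stop oms
    cd /stage/8244731                                # assumed patch staging directory
    $ORACLE_HOME/OPatch/opatch apply
    $ORACLE_HOME/bin/emctl start oms
    $ORACLE_HOME/bin/emctl status oms                # verify OMS is back up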

GRID IMPLEMENTATION: EQUALLOGIC STORAGE


EqualLogic shared storage configuration
Storage volumes for the physical Grid
Volume        Size    RAID   Used for            OS mapping
blade_crs     2GB     10     OCR / voting disk   Yes
blade_data1   100GB   10     Data for DB1        ASM diskgroup1
blade_data2   100GB   10     Data for DB2        ASM diskgroup2
blade_data3   150GB   10     Data for DB3        ASM diskgroup3
blade_data5   150GB   10     Data for DB4        ASM diskgroup5

Storage volumes for the virtual Grid


Volume        Size       RAID   Used for             OS mapping
blade_data4   400GB      10     VM repositories      /OVS
blade_data6   500GB      10     VM repositories      /OVS/9A87460A7EDE43EE92201B8B7989DBA6
vmocr1-5      1GB each   10     OCR / voting disks   2 x OCRs, 3 x voting disks
vmracdb1      50GB       10     Data for RAC DB      ASM diskgroup1
vmracdb2      50GB       10     Data for RAC DB      ASM diskgroup2

GRID IMPLEMENTATION: EQUALLOGIC STORAGE


EqualLogic storage management console


GRID IMPLEMENTATION: PHYSICAL GRID


Physical Grid configuration: 8-node 11g RAC
Network configuration:
Network interface   IO modules   Connection                               IP address
eth0                A1           Public network                           155.16.9.71-78
eth2                B1           iSCSI connection                         10.16.7.241-255 (odd)
eth3                B2           iSCSI connection                         10.16.7.240-254 (odd)
eth4, eth5          C1, C2       Bonded to bond0 (11g RAC interconnect)   192.168.9.71-78
VIP                 -            Virtual IP for connections               155.16.9.171-178

iSCSI storage connections
Use the Open-iSCSI administration utility to configure host access to the storage volumes: eth2-iface on eth2, eth3-iface on eth3 (an iscsiadm sketch follows)
Use the Linux Device Mapper to establish multipath devices with storage aliases

/etc/multipath.conf:
multipaths {
    multipath {
        wwid  36090a028e093dc7c6099140639aae1c7
        alias ocr-crs
    }
}

Check the multipath devices in /dev/mapper: /dev/mapper/ocr-crsp1
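A sketch of the Open-iSCSI steps behind the bullets above; the EqualLogic group IP address is an assumption, and the iface names follow the ones named above:

    # Bind one iSCSI interface to each NIC, discover the group portal, and log in
    iscsiadm -m iface -I eth2-iface --op=new
    iscsiadm -m iface -I eth2-iface --op=update -n iface.net_ifacename -v eth2
    iscsiadm -m iface -I eth3-iface --op=new
    iscsiadm -m iface -I eth3-iface --op=update -n iface.net_ifacename -v eth3
    iscsiadm -m discovery -t st -p 10.16.7.100:3260   # assumed group IP
    iscsiadm -m node --login
    multipath -ll                                     # verify the aliases from /etc/multipath.conf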



GRID IMPLEMENTATION: PHYSICAL GRID


Use block devices for 11g Clusterware and RAC databases: OEL 5 no longer supports raw devices
Set the proper ownership and permissions in /etc/rc.local (see the sketch below)
Configure the 11g RAC database infrastructure:
11g RAC Clusterware configuration: private interconnect on bond0 (192.168.9.71-78); 2 x OCRs and 3 x voting disks on multipath devices
11g ASM configuration: ORA_ASM_HOME=/opt/oracle/product/11.1.0; ASM instances provide the storage virtualization
11g RAC software provides the database service: ORA_HOME=/opt/oracle/product/11.1.0/db_1
Grid Control agent 10.2.0.5 connects to the OMS server
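A sketch of the /etc/rc.local entries that reset ownership and permissions on the block devices at boot; the partition names and exact permissions are assumptions based on the multipath aliases above:

    # /etc/rc.local (run at boot): ownership/permissions for Clusterware and ASM block devices
    chown root:oinstall /dev/mapper/ocr-crsp1        # assumed OCR partition
    chmod 640 /dev/mapper/ocr-crsp1
    chown oracle:oinstall /dev/mapper/ocr-crsp2      # assumed voting disk partition
    chmod 660 /dev/mapper/ocr-crsp2
    chown oracle:dba /dev/mapper/blade-data1p1       # assumed ASM disk partition
    chmod 660 /dev/mapper/blade-data1p1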

Consolidate multiple databases



GRID IMPLEMENTATION: PHYSICAL GRID


Size the database requirements: CPU, I/O, and memory
Determine the storage needs and the number of database instances
Determine which nodes will run the database
Provision the storage volumes and create the ASM diskgroup (see the sketch below)
Use DBCA to create the database on the ASM diskgroups
For some ERP applications, convert the installed database to RAC
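A sketch of creating an ASM diskgroup on a provisioned multipath partition before running DBCA; the diskgroup name and device name are examples consistent with the volume tables above:

    # Create an ASM diskgroup from the ASM home (run on one node)
    export ORACLE_HOME=/opt/oracle/product/11.1.0
    export ORACLE_SID=+ASM1
    $ORACLE_HOME/bin/sqlplus / as sysdba <<'EOF'
    CREATE DISKGROUP DATA1 EXTERNAL REDUNDANCY
      DISK '/dev/mapper/blade-data1p1';
    EOF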

Examples of Pre-created Databases



GRID IMPLEMENTATION: PHYSICAL GRID


Scale out the physical Grid infrastructure to consolidate more databases or to increase the capacity of existing databases
Scale out the storage:
a. add an additional EqualLogic array to the storage group
b. expand existing volumes or create new volumes
c. make the new volumes accessible to the servers
d. create an ASM diskgroup, add a partition to an existing diskgroup, or resize a partition of a diskgroup [1]
Scale the Grid by adding servers: use the 11g RAC add-node procedure to add the new node to the cluster (a minimal add-node sketch follows)
Expand the database to additional RAC nodes: use the Enterprise Manager add-instance procedure to add a new database instance to the database [2]
Dynamically move a database to a less busy node: use the Enterprise Manager add-instance and remove-instance procedures [2]
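A sketch of the 11g Clusterware add-node step run from an existing cluster node; the Clusterware home path and the new node and VIP names are placeholders (the equivalent step is then run for the database home, and instances are added through Enterprise Manager):

    # Extend the Clusterware to a new blade from an existing node
    export CRS_HOME=/opt/oracle/product/11.1.0/crs    # assumed Clusterware home
    $CRS_HOME/oui/bin/addNode.sh -silent \
      "CLUSTER_NEW_NODES={newblade}" \
      "CLUSTER_NEW_VIRTUAL_HOSTNAMES={newblade-vip}"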

GRID IMPLEMENTATION: VIRTUAL GRID


Implementation tasks:
Virtual server installation
Virtual server network and storage configuration
Connect the VM servers to Grid Control and manage the VM infrastructure through Grid Control
Create guest VMs using a VM template
Manage the resources of the guest VMs

Virtual server installation:
Prepare the local disk and enable virtualization in the BIOS
Install Oracle VM Server 2.1.2
Change the dom0 memory in /boot/grub/menu.lst by editing the line: kernel /xen-64bit.gz dom0_mem=1024M (see the sketch below)
Ensure the VM agent is working: # service ovs-agent status
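A sketch of the /boot/grub/menu.lst stanza after the dom0 memory edit; the title and module lines are placeholders for whatever the Oracle VM Server 2.1.2 installer wrote, and only the dom0_mem option on the kernel line is the change described above:

    title Oracle VM Server
            root (hd0,0)
            kernel /xen-64bit.gz dom0_mem=1024M
            module /vmlinuz-2.6.18-xen ro root=LABEL=/
            module /initrd-2.6.18-xen.img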


ORACLE VIRTUALIZATION INFRASTRUCTURE IMPLEMENTATION


Oracle VM server network configuration

eth0: public network in dom0, presented to domU through xenbr0
eth2, eth3: iSCSI connections
eth4 and eth5: bonded to bond0 for the private interconnect between VMs for the 11g RAC configuration, through xenbr1
Ensure the OVM agent is running: service ovs-agent status

GRID IMPLEMENTATION: VIRTUAL GRID


dom0 network configuration:
Interface    IO modules   Connection          IP address
eth0         A1           Public network      155.16.9.71-78
eth2, eth3   B1, B2       iSCSI connections   10.16.7.238-235
eth4, eth5   C1, C2       Bonded to bond0     192.168.9.82-85

domU network interfaces: eth0 (public, via xenbr0) and eth1 (private, via xenbr1)
Customize the default Xen bridge configuration:
a. Stop the default bridges: /etc/xen/scripts/network-bridges stop
b. Edit /etc/xen/xend-config.sxp: replace the line (network-script network-bridges) with (network-script network-bridges-dummy)
c. Edit /etc/xen/scripts/network-bridges-dummy so that it contains only: /bin/true
d. Create the ifcfg-bond0, ifcfg-xenbr0, and ifcfg-xenbr1 scripts in /etc/sysconfig/network-scripts


GRID IMPLEMENTATION: VIRTUAL GRID


ifcfg-eth0:
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
BRIDGE=xenbr0

ifcfg-eth4:
DEVICE=eth4
ONBOOT=yes
TYPE=Ethernet
MASTER=bond0
USERCTL=no

ifcfg-bond0:
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BRIDGE=xenbr1

ifcfg-xenbr0:
DEVICE=xenbr0
ONBOOT=yes
TYPE=Bridge
IPADDR=155.16.9.82
NETMASK=255.255.0.0

ifcfg-xenbr1:
DEVICE=xenbr1
ONBOOT=yes
TYPE=Bridge
NETMASK=255.255.255.0

restart network services: #service network restart


Check the Xen bridges:
[root@kblade9 scripts]# brctl show
bridge name     bridge id               STP enabled     interfaces
xenbr0          8000.002219d1ded0       no              eth0
xenbr1          8000.002219d1ded2       no              bond0

Configure shared storage on dom0 for the VM servers
For the OVM repositories: blade_data4 (400GB) and blade_data6 (500GB)
For the 11g RAC shared disks: OCRs, voting disks, and ASM diskgroups
Configure iSCSI and multipath devices on dom0 with the iSCSI admin tool and Linux Device Mapper, just as on the physical servers

Check the multipath devices: ls /dev/mapper/*
/dev/mapper/blade-data6     /dev/mapper/blade-data6p1   /dev/mapper/mpath5
/dev/mapper/ovs_data4       /dev/mapper/ovs_data4p1     /dev/mapper/vmocr-css1
/dev/mapper/vmocr-css1p1    /dev/mapper/vmracdb1        /dev/mapper/vmracdb1p1

GRID IMPLEMENTATION: VIRTUAL GRID


Convert the OVM repository to the shared disk by configuring OCFS2:
a) Edit /etc/ocfs2/cluster.conf (a sample sketch follows after step i)
b) service o2cb stop
c) service o2cb configure
d) # mkfs.ocfs2 -b 4k -C 64k -L ovs /dev/mapper/ovs_data4p1
e) umount /OVS
f) Edit /etc/fstab:
   #/dev/sda3                /OVS   ocfs2   defaults                    1 0
   /dev/mapper/ovs_data4p1   /OVS   ocfs2   _netdev,datavolume,nointr   0 0


GRID IMPLEMENTATION: VIRTUAL GRID


g) Mount the OCFS2 partitions: mount -a -t ocfs2
h) Create directories under /OVS: running_pool, seed_pool, sharedDisk
i) Repeat steps a-h on all the VM servers
Add additional volumes to the OVS repositories
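A sketch of the /etc/ocfs2/cluster.conf referenced in step a, for a pool of two VM servers; the node names, IP addresses, and node count are assumptions, while the port and cluster name are the OCFS2 defaults:

    node:
            ip_port = 7777
            ip_address = 155.16.9.82
            number = 0
            name = kblade9
            cluster = ocfs2

    node:
            ip_port = 7777
            ip_address = 155.16.9.83
            number = 1
            name = kblade10
            cluster = ocfs2

    cluster:
            node_count = 2
            name = ocfs2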

Connect the VM servers to Enterprise Manager Grid Control
Meet the prerequisites:
a. An oracle user in the oinstall group
b. ssh user equivalence for the oracle user between dom0 and the OMS host (a key-exchange sketch follows step d)

GRID IMPLEMENTATION: VIRTUAL GRID


c. Create /OVS/provxy
d. Grant the oracle user sudo privileges: visudo (edits /etc/sudoers)
   Add the line: oracle ALL=(ALL) NOPASSWD: ALL
   Comment out the line: Defaults requiretty
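A sketch of setting up the one-way ssh user equivalence from prerequisite b; the dom0 hostname shown is an example:

    # Run as the oracle user on the OMS server
    ssh-keygen -t rsa                       # accept the defaults, empty passphrase
    ssh-copy-id oracle@kblade9              # repeat for each Oracle VM server
    ssh oracle@kblade9 hostname             # verify login works without a password prompt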

Create a VM server pool: log in to Grid Control


GRID IMPLEMENTATION: VIRTUAL GRID


Add additional VM servers to the VM server pool


ORACLE VIRTUALIZATION INFRASTRUCTURE IMPLEMENTATION


Create guest VMs using a VM template
A virtual machine serves as a node of the virtual Grid
Methods of creating guest VMs: VM template, install media, or PXE boot
Import a VM template: download OVM_EL5U2_X86_64_11GRAC_PVM.gz to the repository (see the staging sketch below)
Discover the VM template from the Grid Control Virtual Central
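A sketch of staging the downloaded template so Grid Control can discover it; the download location is an assumption, and the archive is treated as a gzipped tar that unpacks to a directory holding System.img and vm.cfg:

    # Stage the 11g RAC PV template into the shared repository's seed_pool
    cd /OVS/seed_pool
    tar xzf /tmp/OVM_EL5U2_X86_64_11GRAC_PVM.gz
    ls OVM_EL5U2_X86_64_11GRAC_PVM          # expect System.img and vm.cfg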


ORACLE VIRTUALIZATION INFRASTRUCTURE IMPLEMENTATION


Create VM using the template


ORACLE VIRTUALIZATION INFRASTRUCTURE IMPLEMENTATION


Add additional disk to the VM

Attach shared disk to the VM


ORACLE VIRTUALIZATION INFRASTRUCTURE IMPLEMENTATION


Image files in the repository:
Local disk:

Shared disk

Disks as resources for VM:


ORACLE VIRTUALIZATION INFRASTRUCTURE IMPLEMENTATION


Disks shown in VM as the virtual disk partitions:

Attach the storage to the guest VM, either as an image file or as a physical device (a consolidated vm.cfg sketch follows):
a) vm.cfg: disk = ['file:/OVS/sharedDisk/racdb.img,xvdc,w!',]
b) vm.cfg: disk = ['phy:/dev/mapper/vmracdb1p1,xvdc,w!',]
Expose the Xen bridges to the guest as virtual network interfaces:
vm.cfg: vif = ['bridge=xenbr0,mac=00:16:3E:11:8E:CE,type=netfront',
               'bridge=xenbr1,mac=00:16:3E:50:63:25,type=netfront',]
xenbr0 becomes eth0 in the guest VM; xenbr1 becomes eth1 in the guest VM
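A consolidated sketch of the vm.cfg entries for one RAC guest VM, combining a local system image, the shared disks, and the two bridged network interfaces; the System.img path and the xvdd device name are assumptions, the MAC addresses and shared-disk paths are the examples above, and the remaining lines generated by the template are omitted:

    disk = ['file:/OVS/running_pool/racnode1/System.img,xvda,w',
            'file:/OVS/sharedDisk/racdb.img,xvdc,w!',
            'phy:/dev/mapper/vmracdb1p1,xvdd,w!',
           ]
    vif  = ['bridge=xenbr0,mac=00:16:3E:11:8E:CE,type=netfront',
            'bridge=xenbr1,mac=00:16:3E:50:63:25,type=netfront',
           ]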

ORACLE VIRTUALIZATION INFRASTRUCTURE IMPLEMENTATION


Storage/Network configuration for 11g RAC on VMs:


ORACLE VIRTUALIZATION INFRASTRUCTURE IMPLEMENTATION


Consolidate enterprise applications on the Grid
Applications and middleware on the virtual Grid:
Create a guest VM using the Oracle OEL 5.2 template
Deploy the application on the guest VM
Build a VM template from that VM
Create new guest VMs based on the VM template

Deploy database services on the physical Grid (a service-registration sketch follows):
Provision a storage volume of adequate size from the SAN
Make the volume accessible to all physical Grid nodes
Create the ASM diskgroup
Create the database service on the ASM diskgroup
Create the application database schema in the database
Establish the application database connections

Deploy a DEV/Test application suite on the virtual Grid:
Multi-tier nodes run on VMs
Fast deployment based on templates
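A sketch of registering and starting a database service for a consolidated application on the physical Grid; the database, instance, and service names are examples:

    # Register the service on its preferred instances, then start and check it
    srvctl add service -d griddb1 -s appsvc -r griddb11,griddb12
    srvctl start service -d griddb1 -s appsvc
    srvctl status service -d griddb1 -s appsvc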

CASE STUDIES OF GRID HOSTING APPLICATIONS


Grid architecture to host applications
Oracle E-Business Suite on RAC/VM: three application-tier nodes on three VMs; two database-tier nodes on a two-node RAC
Oracle E-Business Suite single node on the virtual Grid: both the application tier and the database tier on VMs
Banner applications on the physical/virtual Grid: two application-tier nodes on two VMs; two database-tier nodes on a two-node RAC
Provisioning Oracle 11g RAC on the virtual Grid: two RAC database nodes on two VMs, provisioned with the Grid Control Provisioning Pack 11g RAC provisioning procedure
Adding an additional RAC node on a new VM: Grid Control Provisioning Pack add-node procedure

CASE STUDIES OF GRID HOSTING APPLICATIONS


Grid Architecture to host applications


CASE STUDIES OF GRID HOSTING APPLICATIONS


Virtual Grid shown on Grid Control Virtual Central


SUMMARY
Dell Grid POC project: a pre-built Grid combining physical and virtual grids
Grid Control is the unified management solution for the Grid
Dell blades are the platform of the Grid: RAC nodes and VM servers
EqualLogic provides the shared storage for both the physical and the virtual Grid
The Grid provides the infrastructure to consolidate enterprise applications as well as RAC databases
Acknowledgments: thanks to the Oracle engineers Akanksha Sheoran, Rajat Nigam, Daniel Dibbets, Kurt Hackel, Channabasappa Nekar, and Premjith Rayaroth, and to Dell engineer Roger Lopez, for their support.

Related OpenWorld Presentations:


ID# S308185: Provisioning Oracle RAC in a Virtualized Environment Using Oracle Enterprise Manager, 10/11/09 13:00-14:00, Kai Yu & Rajat Nigam
ID# S310132: Oracle E-Business Suite on Oracle RAC and Oracle VM: Architecture and Implementation, 10/14/09 10:15-11:15, Kai Yu and John Tao

References:
1. Best Practices for Deployment of Oracle ASM with Dell EqualLogic PS iSCSI Storage System, Dell white paper
2. Oracle press release: Oracle Enterprise Manager Extends Management to Oracle VM Server Virtualization
3. Oracle Enterprise Manager Concepts, 10g Release 5 (10.2.0.5), Part Number B31949-10
4. Oracle Enterprise Manager Grid Control ReadMe for Linux x86-64, 10g Release 5 (10.2.0.5), April 2009
5. How to Enable Oracle VM Management Pack in EM Grid Control 10.2.0.5, Metalink Note 781879.1
6. Oracle VM: Converting from a Local to Shared OVS Repository, Metalink Note 756968.1
7. How to Add Shared Repositories to Oracle VM Pools with Multiple Servers, Metalink Note 869430.1
8. Implementing Oracle Grid: A Successful Customer Case Study, Kai Yu and Dan Brint, IOUG SELECT Journal, Volume 16, Number 2, Second Quarter 2009
9. Deploying Oracle VM Release 2.1 on Dell PowerEdge Servers and Dell/EMC Storage, Dell white paper
10. Dell Reference Configuration: Deploying Oracle Database on Dell EqualLogic PS5000XV iSCSI Storage, Dell technical white paper
11. Technical Best Practices for Virtualization & RAC, Oracle RAC SIG web seminar slides, Michael Timpanaro-Perrotta & Daniel Dibbets

