
ClusterOverview Reference and User Guide

Author :

Dominic Giles

Oracle Corporation UK Ltd Oracle Parkway, Thames Valley Park, Reading, Berkshire RG6 1RA


Document Control
Authors
Name: Dominic Giles

Change Record
Version   Date              Description
Draft 1   8th April 2003    Initial Version
Draft 2   5th January 2004  Updates to reflect new functionality in 2.1f

Reviewers (Oracle)
Name               Position
John Nangle        Senior Principal Sales Consultant
Robin Murgatroyd   Senior Pre-Sales Manager
David Storey       Principal Sales Consultant


Table of Contents
DOCUMENT CONTROL
PURPOSE OF DOCUMENT
INTRODUCTION
CLUSTEROVERVIEW
    Overview
    ClusterOverview Configuration
        Coordinator Information
        MonitoredDatabaseList
    ClusterOverview User Interface
        Control Panel
        Transactions per Minute Graph
        User Session Graph
        Scalability Graph
APPENDIX A. WALK THROUGH OF A SCALABILITY DEMO
    SwingBench.env
    ClusterOverview.xml
    SwingConfig.xml
    CCWizard.xml
    Step 1 (Generate Data)
    Step 2 (Start Coordinator and Load Generators)
    Step 3 (Start ClusterOverview)
    Step 4 (Start the Load Generators)
APPENDIX B. WALK THROUGH OF A HIGH AVAILABILITY DEMO
    SwingBench.env
    ClusterOverview.xml
    SwingConfig.xml
    tnsnames.ora
    Step 1 (Set up Environment)
    Step 2 (Start Coordinator and Load Generators)
    Step 3 (Start ClusterOverview)
    Step 4 (Start the Load Generators)
    Step 5 (Restart Failed Node and Rebalance User Sessions)


Purpose Of Document
The purpose of this document is to detail the functionality and operation of the ClusterOverview application.

Introduction
The UK-based Oracle9i Database Solutions group has developed SwingBench, a Java-based test harness that enables the stress testing of Oracle databases (and, from build 2.2, web servers). SwingBench enables developers to define their own classes by implementing a simple interface. These classes are then loaded and run by SwingBench according to the parameters defined by the user.


ClusterOverview
Overview
ClusterOverview is a component of the SwingBench benchmarking framework. It is used to control and report on the activity of a number (1..n) of load generators running against an Oracle database. It works in conjunction with the SwingBench load generator and Coordinator process to create a load on a server via JDBC. Currently ClusterOverview provides the following functionality:

- Start and stop given load generators
- Chart transactions from running load generators
- Chart users logged on to monitored databases
- Chart the scalability of an Oracle9i Real Application Cluster

The following diagram illustrates the architecture of the SwingBench benchmarking framework when used in conjunction with the ClusterOverview component.

Each of the components shown in the diagram above (ClusterOverview, Coordinator and Load Generators) can be run on the same or different machines. The configuration of ClusterOverview is maintained in the ClusterOverview.xml file. This file is parsed at start up and details the location of the coordinator process and the databases to be monitored. The following screenshot shows ClusterOverview controlling eight load generators running against an Oracle9i Real Application Cluster.


Installation
To run swingbench a Java Virtual Machine (JVM) must be installed on the client platform. The current recommendation is to use Sun's or IBM's 1.4 JVMs, although their 1.3 versions may work (not tested). swingbench/clusteroverview is supplied as a single zip file. To uncompress this file on Unix/Linux, issue the command
[oracle@dgiles-uk swingbench]$ unzip swingbench

On Windows use a tool such as WinZip to perform this operation. The default installation of swingbench/clusteroverview is configured by modifying the values in the $SWINGHOME/swingbench.env file under Linux/Unix and in the $SWINGHOME/swingbenchenv.bat file under Windows. The contents of an example swingbench.env are shown below.
#!/bin/bash
export ORACLE_HOME=/home/oracle/orabase/product/9.2
export JAVAHOME=/usr/java/j2sdk1.4.2_02
export SWINGHOME=/home/oracle/swingbench
export ANTHOME=$SWINGHOME/lib
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/lib
export LOADGENHOSTS='localhost'
export LOADGENUSER=oracle
export CLASSPATH=$JAVAHOME/lib/rt.jar:$JAVAHOME/lib/tools.jar:$ORACLE_HOME/jdbc/lib/ojdbc14.jar:$SWINGHOME/lib/mytransactions.jar:${SWINGHOME}/lib/swingbench.jar:$ANTHOME/ant.jar

The values shown in red need to be modified to reflect the file structure into which the software has been installed. The values in blue only need to be modified if the user plans to run a distributed load using one or more load generators. clusteroverview can be invoked on Unix/Linux using the following commands (please make sure you read the ClusterOverview Configuration section before running it).
[oracle@dgiles-uk swingbench]$ cd bin
[oracle@dgiles-uk bin]$ ./clusteroverview

Or on Windows using the commands


C:\ cd winbin
C:\ clusteroverview.bat
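The Windows environment file swingbenchenv.bat plays the same role as swingbench.env but is not reproduced in this document. A minimal sketch using standard batch syntax is shown below; all paths are illustrative and must be adjusted to match your installation.

rem Sketch only: Windows counterpart of swingbench.env. All paths are examples.
set ORACLE_HOME=C:\oracle\ora92
set JAVAHOME=C:\j2sdk1.4.2_02
set SWINGHOME=C:\swingbench
set PATH=%PATH%;%ORACLE_HOME%\bin
set CLASSPATH=%JAVAHOME%\lib\rt.jar;%ORACLE_HOME%\jdbc\lib\ojdbc14.jar;%SWINGHOME%\lib\swingbench.jar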


ClusterOverview Configuration
ClusterOverview is initialized with the xml configuration file clusteroverview.xml located in the same directory from which ClusterOverview is launched. The following file describes a typical configuration.
<?xml version="1.0" ?>
<CoordinatorConfiguration>
    <ChartRefresh Period="5000"/>
    <DisplayedCharts>
        <Chart>ControlPanel</Chart>
        <Chart>TPM</Chart>
        <Chart>UserConnections</Chart>
        <Chart>Scalability</Chart>
    </DisplayedCharts>
    <CoordinatorInformation>
        <Server>//192.168.0.10/CoordinatorServer</Server>
    </CoordinatorInformation>
    <MonitoredDatabaseList>
        <MonitoredDatabase DisplayName="rac1" Username="system" Password="manager" DriverType="thin" ConnectString="192.168.0.1:1521:RAC9i1"/>
        <MonitoredDatabase DisplayName="rac2" Username="system" Password="manager" DriverType="thin" ConnectString="192.168.0.2:1521:RAC9i2"/>
        <MonitoredDatabase DisplayName="rac3" Username="system" Password="manager" DriverType="thin" ConnectString="192.168.0.3:1521:RAC9i3"/>
        <MonitoredDatabase DisplayName="rac4" Username="system" Password="manager" DriverType="thin" ConnectString="192.168.0.4:1521:RAC9i4"/>
    </MonitoredDatabaseList>
</CoordinatorConfiguration>

The clusteroverview.xml file is composed of three main sections detailing the location of the coordinator process, the graphs to be displayed and the databases that are to be monitored. The following sections describe their use and attributes.

DisplayedCharts
The DisplayedCharts section allows the user to specify which charts are shown at the startup of clusteroverview. By default all three charts and the control panel are shown. By simply removing or adding entries users can control what is initially shown, as in the example below.
<DisplayedCharts>
    <Chart>ControlPanel</Chart>
    <Chart>TPM</Chart>
    <Chart>UserConnections</Chart>
</DisplayedCharts>

Coordinator Information
The coordinator process is a Java RMI application that coordinates all of the activity within a distributed SwingBench environment. It is typically run on the same platform as the load generators or the ClusterOverview program. To run it, issue the following commands on Unix/Linux
[oracle@dgiles-uk swingbench]$ cd bin
[oracle@dgiles-uk bin]$ ./coordinator

Or on Windows using the commands


C:\ cd winbin
C:\ coordinator.bat


The Coordinator acts as a hub for all communication between the load generator nodes and ClusterOverview; as a result, all of the processes must be able to see the machine on which the coordinator process is running. This machine is detailed in the ClusterOverview.xml file in an entry of the form //<<hostname>>/CoordinatorServer, where hostname is the IP address or hostname of the server on which the coordinator process was started, as shown in the example below.
<CoordinatorInformation>
    <Server>//192.168.0.10/CoordinatorServer</Server>
</CoordinatorInformation>

This entry should be mirrored in each of the SwingBench load generator configuration files (swingconfig.xml) if you wish to run and control distributed load generation, as shown in the fragment below.
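The corresponding fragment inside swingconfig.xml takes exactly the same form; the hostname below (load1) is the one used in the Appendix A example.

<CoordinatorInformation>
    <Server>//load1/CoordinatorServer</Server>
</CoordinatorInformation>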

MonitoredDatabaseList
The MonitoredDatabaseList is a description of the databases that ClusterOverview will monitor as shown in the example below.
<MonitoredDatabaseList>
    <MonitoredDatabase DisplayName="rac1" Username="system" Password="manager" DriverType="thin" ConnectString="192.168.0.1:1521:RAC9i1"/>
    <MonitoredDatabase DisplayName="rac2" Username="system" Password="manager" DriverType="thin" ConnectString="192.168.0.2:1521:RAC9i2"/>
    <MonitoredDatabase DisplayName="rac3" Username="system" Password="manager" DriverType="thin" ConnectString="192.168.0.3:1521:RAC9i3"/>
    <MonitoredDatabase DisplayName="rac4" Username="system" Password="manager" DriverType="thin" ConnectString="192.168.0.4:1521:RAC9i4"/>
</MonitoredDatabaseList>

Entries in this list are used to enable the visualisation of user sessions and for controlling databases and their associated load generators. Each entry in the MonitoredDatabaseList, a MonitoredDatabase, should contain five attributes:

DisplayName : used for identification in ClusterOverview's charts; this is usually the same as the connect string used by any associated load generators
Username : a user with DBA access (or the select any table privilege)
Password : the password for that user
DriverType : thin or oci; use thin if you are using pure Java drivers, or oci if you are connecting via Oracle Net. NOTE : users of Veritas clustering technology should connect with oci drivers.
ConnectString : the connect string for the database. If you are using thin JDBC drivers the connect string should be of the form <<hostname>>:<<port number>>:<<SID>>. If you are using oci drivers the entry should be a valid entry in your tnsnames.ora (if you are using local naming).
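For example, a hypothetical entry using the oci driver and a tnsnames.ora alias might look like the line below; the alias and credentials are illustrative only.

<MonitoredDatabase DisplayName="rac1" Username="system" Password="manager"
                   DriverType="oci" ConnectString="RAC9i1"/>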

ClusterOverview User Interface


As described earlier, ClusterOverview's main purpose is to provide a means of controlling and viewing the load created by one or more load generators. To facilitate this ClusterOverview provides a control panel and a number of dynamic charts which can be added or removed via the View menu item. The following image shows ClusterOverview with its control panel and all of its current graphs displayed.


ClusterOverview is the last component of the SwingBench framework to be started. The coordinator is first, followed by one or more load generators and finally ClusterOverview itself. When ClusterOverview is started the control panel and the Transactions per Minute chart are shown; these display the monitored databases/instances and load generators. The following sections describe the control panel and each of the charts available inside ClusterOverview.

Control Panel
The control panel lists both databases and load generators in two separate tables selectable via a tab. On selecting a row a user can start or stop the load for a database (and as a result any load generators associated with it), or users may selectively start and stop individual load generators. The following image shows ClusterOverview's control panel.

Actions can be performed on selected databases/load generators using the buttons in the tool bar or via the options in the Run menu item. The following sections describe each button's functionality.

ClusterOverview Reference and User Guide

Page 11 of 34

Start : starts the load generators associated with a database (one or more). It will also start a load against several load generators when multiple rows are selected.

Take snapshot : takes a snapshot of the load currently running on a selected database. This button will be disabled if the database is not running or if a load generator is selected (it only works against databases).

Stop : stops the load generators associated with a database (one or more). It will also stop the load against several load generators when multiple rows are selected.

Start databases serially : used to enable load generation to occur in an ordered fashion. Its primary use is to demo Oracle Real Application Clusters. Pressing this button takes you to the first row in the database table; from here you can start a node and then snapshot it. Its use is discussed in greater detail in Appendix A. Walk through of a Scalability Demo. This button is a toggle button (i.e. On/Off) and will be disabled if a load generator is selected or if any of the load generators have already started a load.

Rebalance : rebalances user sessions across nodes in a Real Application Cluster (RAC). When one or more databases/instances are selected and this button is pressed, ClusterOverview will terminate each session post transaction and (if failover is working) allow RAC to rebalance them across surviving nodes. Its use is discussed in greater detail in Appendix B. Walk through of a High Availability Demo.

This functionality is replicated in the Run menu.

Transactions per Minute Graph


This graph illustrates the transactions per minute executed on each database/instance in a rolling time window (currently set to a minute). The 3D graph shows transactions that have occurred most recently at the end of the Y-axis and those that occurred one minute ago at the start of the Y-axis. The Z-axis shows the number of transactions that have occurred on each monitored database/instance. The total figure for all databases/instances is detailed in the bottom right-hand corner. The image below shows the TPM graph for an eight node Linux cluster.


User Session Graph


The user session graph details the number and types of connections made to each monitored database/instance. There are three types of user session currently monitored in this graph:

System User Sessions : internal Oracle processes or sessions connected to the sys/system schema. They are coloured pink.
Non Failed Over User Sessions : sessions created by standard users which are still connected to the same instance they originally logged onto. They are coloured green.
Failed Over User Sessions : sessions created by standard users that are no longer connected to the instance they originally logged onto, typically as a result of an instance failure or a connection rebalancing event. They are coloured blue.

The following image illustrates a user session graph that is monitoring an eight node Linux cluster that has had one of its instances fail and the user sessions redistributed.

This graph is discussed in more detail in Appendix B. Walk through of a High Availability Demo

Scalability Graph
This graph is typically used to demonstrate the scalability (via transactions per minute) that can be achieved by the separate instances in a Real Application Cluster. It is only activated when the Start Databases Serially button is pressed. From this point on, every time a database/instance is started an additional bar is added to the chart. If a snapshot is taken before the next database/instance is started the current bar is frozen, allowing the user to make a comparison with the scalability achieved since the first snapshot was taken. This functionality allows users to determine the level of scalability that can be achieved and hence the impact of bringing additional nodes into the cluster. The colour of each bar in the chart corresponds to the colour, and TPM, of a database/instance in the Transactions per Minute graph. The following image illustrates the scalability that was achieved on an eight node (4 x 2.0GHz processors) Linux cluster.

Whilst this graph and its associated snapshot functionality are useful for demonstrating RAC scalability, the result is at best an estimate because of the nature of highly transactional systems. It is also dependent on the accuracy of the first snapshot taken (this is used as the benchmark for subsequent measurements). This graph and its use are discussed further in Appendix A. Walk through of a Scalability Demo. NOTE : All graphs can be saved as images via the menu.


Appendix A. Walk through of a Scalability Demo


This appendix describes how the SwingBench framework can be used to demonstrate scalability in an Oracle9i Real Application Cluster. The following diagram illustrates the hardware setup for the demo.

There are eight load generators and eight nodes used for this demonstration; the coordinator and clusteroverview are also run on load1. The following sections describe the various files used to configure this benchmark.

swingbench.env (located in $SWINGHOME)


#!/bin/bash
export ORACLE_HOME=/u01/oracle/o9i
export JAVAHOME=/usr/java/j2sdk1.4.1_01
export SWINGHOME=/u01/swingbench
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/lib
export LOADGENHOSTS='load1 load2 load3 load4 load5 load6 load7 load8'
export LOADGENUSER=oracle
export CLASSPATH=$JAVA_HOME/lib/rt.jar:$ORACLE_HOME/jdbc/lib/ojdbc14.jar:${SWINGHOME}/swinglib/swingbench.jar:${SWINGHOME}/swinglib/swingbenchcoordinator.jar


clusteroverview.xml (located in $SWINGHOME/bin)


<?xml version='1.0'?>
<CoordinatorConfiguration>
    <ChartRefresh Period="5000"/>
    <DisplayedCharts>
        <Chart>ControlPanel</Chart>
        <Chart>TPM</Chart>
        <Chart>UserConnections</Chart>
        <Chart>Scalability</Chart>
    </DisplayedCharts>
    <CoordinatorInformation>
        <Server>//load1/CoordinatorServer</Server>
    </CoordinatorInformation>
    <MonitoredDatabaseList>
        <MonitoredDatabase DisplayName="rac9i1" Username="system" Password="manager" DriverType="thin" ConnectString="192.168.0.1:1521:RAC9i1"/>
        <MonitoredDatabase DisplayName="rac9i2" Username="system" Password="manager" DriverType="thin" ConnectString="192.168.0.2:1521:RAC9i2"/>
        <MonitoredDatabase DisplayName="rac9i3" Username="system" Password="manager" DriverType="thin" ConnectString="192.168.0.3:1521:RAC9i3"/>
        <MonitoredDatabase DisplayName="rac9i4" Username="system" Password="manager" DriverType="thin" ConnectString="192.168.0.4:1521:RAC9i4"/>
        <MonitoredDatabase DisplayName="rac9i5" Username="system" Password="manager" DriverType="thin" ConnectString="192.168.0.5:1521:RAC9i5"/>
        <MonitoredDatabase DisplayName="rac9i6" Username="system" Password="manager" DriverType="thin" ConnectString="192.168.0.6:1521:RAC9i6"/>
        <MonitoredDatabase DisplayName="rac9i7" Username="system" Password="manager" DriverType="thin" ConnectString="192.168.0.7:1521:RAC9i7"/>
        <MonitoredDatabase DisplayName="rac9i8" Username="system" Password="manager" DriverType="thin" ConnectString="192.168.0.8:1521:RAC9i8"/>
    </MonitoredDatabaseList>
</CoordinatorConfiguration>

swingconfig.xml (located in $SWINGHOME/bin)


The following file is replicated on each of the load generators; the only difference is the ConnectString attribute, which points to the corresponding database instance, i.e. the swingconfig.xml file on load6 has the ConnectString attribute rac9i6. The author uses a script called distributecc (located in $SWINGHOME/bin/scripts) to overwrite the swingconfig.xml file on each node with preconfigured files (see the default install on Unix/Linux); a sketch of the idea is shown after the file below.


<?xml version = '1.0'?>
<SwingBenchConfiguration>
    <ConnectionInformation WaitTillAllLogon="true" NumberOfUsers="8" MinDelay="0" MaxDelay="0" MaxTransactions="-1" Pooled="-1">
        <UserName>cc</UserName>
        <Password>cc</Password>
        <ConnectString>rac9i1</ConnectString>
        <DriverType>oci</DriverType>
    </ConnectionInformation>
    <TransactionList/>
    <ProcessList>
        <Process Id="New Customer" SourceFile="com.mike.CallingCircle.NewCallingCircleProcess" Weight="25" Enabled="true"/>
        <Process Id="Update Customer Details" SourceFile="com.mike.CallingCircle.UpdateCallingCircleProcess" Weight="100" Enabled="true"/>
        <Process Id="Retrieve Customer Details" SourceFile="com.mike.CallingCircle.RetrieveHistoryProcess" Weight="50" Enabled="true"/>
    </ProcessList>
    <CoordinatorInformation>
        <Server>//load1/CoordinatorServer</Server>
    </CoordinatorInformation>
    <Charts>
        <Chart Name="Transactions per Minute" Autoscale="true" MaximumValue="-1.0"/>
        <Chart Name="Processes per Minute" Autoscale="true" MaximumValue="-1.0"/>
        <Chart Name="Transactions Maximum, Minimum and Average" Autoscale="true" MaximumValue="-1.0"/>
    </Charts>
    <ConnectionInitilizationCommands>
        <Command Type="Connection Property">BatchUpdates=1</Command>
        <Command Type="Connection Property">FetchSize=1</Command>
        <Command Type="Connection Property">StatementCaching=0</Command>
        <Command Type="SQL Command">alter session set sql_trace = false</Command>
        <Command Type="SQL Command">alter session set hash_area_size = 1048576</Command>
        <Command Type="SQL Command">alter session set sort_area_size = 1048576</Command>
        <Command Type="SQL Command">alter session set optimizer_mode = first_rows</Command>
    </ConnectionInitilizationCommands>
    <EnvironmentVariables>
        <Variable Key="CC_QUERYPROCESS_FILE_LOC" Value="/mnt/nfs/data1/qryccprocess.txt"/>
        <Variable Key="CC_UPDATEPROCESS_FILE_LOC" Value="/mnt/nfs/data1/updccprocess.txt"/>
        <Variable Key="CC_NEWPROCESS_FILE_LOC" Value="/mnt/nfs/data1/newccprocess.txt"/>
    </EnvironmentVariables>
    <AllowedErrorCodes>
        <ErrorCode Id="1401"/>
        <ErrorCode Id="2291"/>
        <ErrorCode Id="1"/>
    </AllowedErrorCodes>
    <Statistics CollectionType="Minimal"/>
</SwingBenchConfiguration>
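The distributecc script itself is not reproduced in this document. A minimal sketch of the same idea is shown below; it assumes swingbench.env has already been sourced (so LOADGENHOSTS, LOADGENUSER and SWINGHOME are set), that passwordless scp is configured, and that the per-node files are named cc1.xml, cc2.xml and so on (those names are hypothetical).

#!/bin/bash
# Sketch only: push a preconfigured swingconfig.xml to each load generator in turn.
# cc1.xml, cc2.xml, ... are hypothetical file names, one per node.
i=1
for host in $LOADGENHOSTS
do
    scp cc${i}.xml ${LOADGENUSER}@${host}:${SWINGHOME}/bin/swingconfig.xml
    i=$((i + 1))
done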


CCWizard.xml
<?xml version='1.0' encoding='windows-1252'?>
<WizardConfig Name="Calling Circle Wizard" Mode="InterActive">
    <WizardSteps RunnableStep="5">
        <WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.cc.Step0"/>
        <WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.cc.Step1"/>
        <WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.cc.Step2"/>
        <WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.cc.Step3"/>
        <WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.cc.Step4"/>
        <WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.cc.Step5"/>
        <WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.cc.Step6"/>
        <WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.cc.Step7"/>
        <WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.cc.Step8"/>
        <WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.cc.Step9"/>
        <WizardStep SourceFile="com.dom.benchmarking.swingbench.wizards.cc.Step10"/>
    </WizardSteps>
    <DefaultParameters>
        <Parameter Key="operation" Value="generate"/>
        <Parameter Key="dbausername" Value="sys as sysdba"/>
        <Parameter Key="dbapassword" Value="manager"/>
        <Parameter Key="username" Value="CC"/>
        <Parameter Key="password" Value="CC"/>
        <Parameter Key="connectionstring" Value="rac9i1"/>
        <Parameter Key="connectiontype" Value="oci"/>
        <Parameter Key="datatablespace" Value="ccdata"/>
        <Parameter Key="datadatafile" Value="/home/oracle/orabase/oradata/DOM92/ccdata.dbf"/>
        <Parameter Key="indextablespace" Value="ccindex"/>
        <Parameter Key="indexdatafile" Value="/home/oracle/orabase/oradata/DOM92/ccindex.dbf"/>
        <Parameter Key="directories" Value="/mnt/nfs/data1/,/mnt/nfs/data2/,/mnt/nfs/data3/,/mnt/nfs/data4/,/mnt/nfs/data5/,/mnt/nfs/data6/,/mnt/nfs/data7/,/mnt/nfs/data8/"/>
        <Parameter Key="transactioncount" Value="15000"/>
        <Parameter Key="nocustaccounts" Value="100000"/>
    </DefaultParameters>
</WizardConfig>

NOTE : The following walk through assumes the user is performing the demo on a Unix/Linux platform and is using the default installation infrastructure.

Step 1 (Generate Data).


This demo uses the CallingCircle benchmark because of the relatively high load it places on the database/instance in comparison to that on the load generator; you can also use your own benchmark implementation (see the SwingBench Reference manual). CallingCircle is heavily biased towards write operations, with nearly 70% of all transactions performing either an insert or an update operation. The first step in this walk through is to generate the data required by the CallingCircle benchmark. It is assumed that the CallingCircle schema has previously been created; the author recommends using the ccwizard (found in the bin directory) to do this. In this example the ccwizard has been used to create a schema with 5,000,000 customer accounts, which provides a reasonable number of runs before the entire CC schema has to be dropped and recreated (the ccwizard will do this for you). After the creation of the schema (and assuming all of the above files have been correctly configured) use the ccwizard to generate the data. As detailed in its configuration file, the ccwizard will write eight sets of three files to the NFS mounted disk drive. The author recommends that you modify the ccwizard.xml file in preference to modifying the field values at run time. To invoke the wizard execute the following command
[oracle@load1 swingbench]$ cd bin/
[oracle@load1 bin]$ ./ccwizard


NOTE : It is possible to launch the generator in character mode, which may be preferable for many environments (see the SwingBench reference guide for details). This will launch the wizard as shown below.

Press next and select the generate data option as shown below


Enter values for the schema details or accept the defaults (populated from your modified CCWizard.xml file) and hit next. The Benchmark Details step allows you to enter how many transactions will be generated for a run and where the transaction data will be written to.

The number of transactions field specifies how many will be created per directory location. It also determines the length of a benchmark run. On a 4 processor 1.6GHz Xeon Intel white box running Linux Advanced Server it takes 1 minute to consume 500 transactions, 3 minutes for 2,000, and 25 minutes for 20,000. As a result it is important to create enough transactions to last the length of your demonstration. After specifying the number of transactions, or accepting the default, hit next and then next again. This will begin the benchmark data generation. NOTE : this is not a benchmark run, simply a means of generating data for a run.
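As a rough illustration of sizing, using the approximate consumption rates quoted above (roughly 500 to 800 transactions consumed per minute on that hardware; your rates will differ), the required transaction count can be estimated as follows. The figures are illustrative only.

required transactions per directory ~ demo length (minutes) x consumption rate (transactions per minute)
e.g. a 20 minute demonstration at roughly 800 transactions per minute:
     20 x 800 = 16,000 transactions per directory location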


After the data generation has completed the wizard will display the hit ratio; if this ratio exceeds 30% you should consider regenerating the CC benchmark schema. There should now be three files located in each of the directory locations specified, i.e.
[oracle@load1 nfs]$ ls -l data1
total 9872
-rw-r--r--    1 oracle   oinstall  2531642 Apr  4 10:55 newccprocess.txt
-rw-r--r--    1 oracle   oinstall   322500 Apr  4 10:55 qryccprocess.txt
-rw-r--r--    1 oracle   oinstall  7226736 Apr  4 10:55 updccprocess.txt

Step 2 (Start Coordinator and Load Generators)


The next step involves starting the Coordinator process and the load generators. All of the parameters for these operations are sourced from the swingbench.env file in the $SWINGHOME directory. To start the coordinator enter the following commands.
[oracle@load1 swingbench]$ cd bin
[oracle@load1 swingcontrol]$ ./coordinator
[oracle@load1 swingcontrol]$
11:44:34 04/07 [DBUG] Coordinator  Creating Cordinator Server
11:44:34 04/07 [DBUG] Coordinator  Cordinator Server Ready
If the coordinator has already been started on the machine you will receive an error and it will be necessary to kill off the process before continuing.

NOTE : Currently it is necessary to restart the Coordinator each time the load generators are started (the author is looking into this).

The next step is to start the load generators; this can either be done manually or via remote shell functionality such as ssh. The default install on Unix/Linux comes with a script StartLoadGenerators that will attempt to start all of the load generators via ssh (ssh access will need to have been configured before this script will work); a sketch of the idea is shown after the output below. At present two versions of the front end for the load generators exist: one with a full graphical front end, swingbench, and one with a simple character front end, charbench. Both versions run exactly the same kernel but differ in the way the results are presented. swingbench is typically used when the load generators support graphical output and the additional CPU load created by real time charting is not an issue. charbench is typically used when only simple network connectivity is available and the work placed on the load generators is significant. The following commands start the load generators on all nodes
[oracle@load1 swingbench]$ cd bin/scripts
[oracle@load1 scripts]$ ./StartLoadGenerators
CharBench
Author  : Dominic Giles
Version : 1.01

Configuration is being read from SwingConfig.xml
Results are being written to results.xml.
Hit Return to Start & Terminate Run...
11:52:26 04/07 [DBUG] CoordinatorServerImpl Current connected clients = 1
Author  : Dominic Giles
Version : 1.01

Configuration is being read from SwingConfig.xml
Results are being written to results.xml.
Hit Return to Start & Terminate Run...
. . . .
11:52:26 04/07 [DBUG] CoordinatorServerImpl Current connected clients = 2
11:52:26 04/07 [DBUG] CoordinatorServerImpl Current connected clients = 3
11:52:26 04/07 [DBUG] CoordinatorServerImpl Current connected clients = 4
11:52:26 04/07 [DBUG] CoordinatorServerImpl Current connected clients = 5
11:52:26 04/07 [DBUG] CoordinatorServerImpl Current connected clients = 6
11:52:26 04/07 [DBUG] CoordinatorServerImpl Current connected clients = 7
11:52:26 04/07 [DBUG] CoordinatorServerImpl Current connected clients = 8

NOTE : The output above has been abbreviated.
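The StartLoadGenerators script itself is not listed in this document. A minimal sketch of the same idea is shown below; it assumes swingbench.env has already been sourced (so SWINGHOME, LOADGENHOSTS and LOADGENUSER are set), that passwordless ssh is configured to each host, and that charbench lives in $SWINGHOME/bin on every load generator.

#!/bin/bash
# Sketch only: start charbench on every load generator host over ssh.
for host in $LOADGENHOSTS
do
    ssh ${LOADGENUSER}@${host} "cd ${SWINGHOME}/bin && ./charbench" &
done
wait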

Step 3 (Start ClusterOverview)


The following command starts ClusterOverview
[oracle@load1 swingbench]$ cd bin
[oracle@load1 swingcontrol]$ ./clusteroverview

The following image shows ClusterOverview with the user chart and scalability chart displayed


Whilst the minimum set of charts that need to be displayed for this demonstration is the control panel and the scalability chart, the others provide a more complete overview of database/instance activity. The additional charts can be added or removed via the View menu.

Step 4 (Start the Load Generators)


To start the scalability demo select the first row in the database table then press the start in sequential order button as shown below. This is a toggle button and should look like it is permanently pressed in. In this mode you are forced to start and stop the nodes in order. Now press the start button as shown below. The users should begin to log on and the load begin to ramp up. Allow the load to reach a stable transaction rate as shown below.


It is now possible to take a snapshot of this load using the take snapshot button as shown below. This will freeze the first bar in the scalability chart and move you to the second row in the database table. It is now possible to repeat the steps detailed above. Each time a snapshot is taken the current TPM rate is compared with the TPM rate taken for the first instance and a scalability figure is generated, i.e. 1.0, 2.0, 3.0, 3.9, 4.8, 5.8, 6.7, 7.7. To stop all of the load generators unselect the start in sequential order button as shown below. Select all of the databases/instances by either dragging up or down the table or pressing Ctrl-A, then press the stop button as shown below.
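As an illustration of how the scalability figure is derived (the TPM values below are invented for the example):

scalability figure = current total TPM / TPM recorded at the first snapshot
e.g. first snapshot = 5,000 TPM; with four instances running the total reaches 19,500 TPM
     19,500 / 5,000 = 3.9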


Appendix B. Walk through of a High Availability Demo


This appendix describes how the SwingBench framework can be used to demonstrate high availability in an Oracle9i Real Application Cluster. The following diagram illustrates the hardware setup for the demo.

There are eight load generators used for this demonstration with all of their connections load balanced across all 8 nodes. The following sections describe the various files used to configure this benchmark.

swingbench.env (located in $SWINGHOME)


#!/bin/bash
export ORACLE_HOME=/u01/oracle/o9i
export JAVAHOME=/usr/java/j2sdk1.4.1_01
export SWINGHOME=/u01/swingbench
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/lib
export LOADGENHOSTS='load1 load2 load3 load4 load5 load6 load7 load8'
export LOADGENUSER=oracle
export CLASSPATH=$JAVA_HOME/lib/rt.jar:$ORACLE_HOME/jdbc/lib/ojdbc14.jar:${SWINGHOME}/swinglib/swingbench.jar:${SWINGHOME}/swinglib/swingbenchcoordinator.jar


clusteroverview.xml (located in $SWINGHOME/bin)


<?xml version='1.0'?>
<CoordinatorConfiguration>
    <ChartRefresh Period="5000"/>
    <DisplayedCharts>
        <Chart>ControlPanel</Chart>
        <Chart>TPM</Chart>
        <Chart>UserConnections</Chart>
    </DisplayedCharts>
    <CoordinatorInformation>
        <Server>//load1/CoordinatorServer</Server>
    </CoordinatorInformation>
    <MonitoredDatabaseList>
        <MonitoredDatabase DisplayName="rac9i1" Username="system" Password="manager" DriverType="thin" ConnectString="192.168.0.1:1521:RAC9i1"/>
        <MonitoredDatabase DisplayName="rac9i2" Username="system" Password="manager" DriverType="thin" ConnectString="192.168.0.2:1521:RAC9i2"/>
        <MonitoredDatabase DisplayName="rac9i3" Username="system" Password="manager" DriverType="thin" ConnectString="192.168.0.3:1521:RAC9i3"/>
        <MonitoredDatabase DisplayName="rac9i4" Username="system" Password="manager" DriverType="thin" ConnectString="192.168.0.4:1521:RAC9i4"/>
        <MonitoredDatabase DisplayName="rac9i5" Username="system" Password="manager" DriverType="thin" ConnectString="192.168.0.5:1521:RAC9i5"/>
        <MonitoredDatabase DisplayName="rac9i6" Username="system" Password="manager" DriverType="thin" ConnectString="192.168.0.6:1521:RAC9i6"/>
        <MonitoredDatabase DisplayName="rac9i7" Username="system" Password="manager" DriverType="thin" ConnectString="192.168.0.7:1521:RAC9i7"/>
        <MonitoredDatabase DisplayName="rac9i8" Username="system" Password="manager" DriverType="thin" ConnectString="192.168.0.8:1521:RAC9i8"/>
    </MonitoredDatabaseList>
</CoordinatorConfiguration>

swingconfig.xml (located in $SWINGHOME/bin)


The following file is identical on each load generator. The author uses a script called distributeoe (See Linux/Unix install) to overwrite the swingconfig.xml file on every load generator with a preconfigured file.


<?xml version = '1.0'?>
<SwingBenchConfiguration>
    <ConnectionInformation WaitTillAllLogon="true" NumberOfUsers="200" MinDelay="100" MaxDelay="5000" MaxTransactions="-1" Pooled="-1">
        <UserName>soe</UserName>
        <Password>soe</Password>
        <ConnectString>lugalb</ConnectString>
        <DriverType>oci</DriverType>
    </ConnectionInformation>
    <TransactionList/>
    <ProcessList>
        <Process Id="New Customer Registration" SourceFile="com.dom.benchmarking.swingbench.transactions.NewCustomerProcess" Weight="20" Enabled="true"/>
        <Process Id="Browse Products" SourceFile="com.dom.benchmarking.swingbench.transactions.BrowseProducts" Weight="50" Enabled="true"/>
        <Process Id="Order Products" SourceFile="com.dom.benchmarking.swingbench.transactions.NewOrderProcess" Weight="50" Enabled="true"/>
        <Process Id="Process Orders" SourceFile="com.dom.benchmarking.swingbench.transactions.ProcessOrders" Weight="10" Enabled="true"/>
        <Process Id="Browse Orders" SourceFile="com.dom.benchmarking.swingbench.transactions.BrowseAndUpdateOrders" Weight="50" Enabled="true"/>
    </ProcessList>
    <CoordinatorInformation>
        <Server>//load1/CoordinatorServer</Server>
    </CoordinatorInformation>
    <Charts>
        <Chart Name="Transactions per Minute" Autoscale="true" MaximumValue="-1.0"/>
        <Chart Name="Processes per Minute" Autoscale="true" MaximumValue="-1.0"/>
        <Chart Name="Transactions Maximum, Minimum and Average" Autoscale="true" MaximumValue="-1.0"/>
    </Charts>
    <ConnectionInitilizationCommands>
        <Command Type="Connection Property">BatchUpdates=1</Command>
        <Command Type="Connection Property">FetchSize=1</Command>
        <Command Type="Connection Property">StatementCaching=20</Command>
        <Command Type="SQL Command">alter session set sql_trace = false</Command>
        <Command Type="SQL Command">alter session set hash_area_size = 1048576</Command>
        <Command Type="SQL Command">alter session set sort_area_size = 1048576</Command>
        <Command Type="SQL Command">alter session set optimizer_mode = first_rows</Command>
    </ConnectionInitilizationCommands>
    <EnvironmentVariables>
        <Variable Key="SOE_PRODUCTSDATA_LOC" Value="data/productids.txt"/>
        <Variable Key="SOE_NAMESDATA_LOC" Value="data/names.txt"/>
        <Variable Key="SOE_NLSDATA_LOC" Value="data/nls.txt"/>
    </EnvironmentVariables>
    <AllowedErrorCodes>
        <ErrorCode Id="1401"/>
        <ErrorCode Id="2291"/>
        <ErrorCode Id="1"/>
    </AllowedErrorCodes>
    <Statistics CollectionType="Minimal"/>
</SwingBenchConfiguration>

tnsnames.ora


LUGALB.LUGA.TEST =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = ON)
      (FAILOVER = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac2)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac3)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac4)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac5)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac6)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac7)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac8)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = RAC9i.luga.test)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 64)
        (DELAY = 4)
      )
    )
  )

Step 1 (Set up Environment)


The failover demo typically uses the Simple Order Entry benchmark; whilst Calling Circle could be used, it requires a regeneration of benchmark data. This can be inconvenient since both demos are usually shown together, with failover being the last. The author recommends that the OE wizard found in the swing wizard directory is used to create the order entry schema.
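The exact wizard name and location depend on your build; assuming it follows the same invocation pattern as ccwizard, starting it would look something like the sketch below (check your installation for the actual script name and directory).

[oracle@load1 swingbench]$ cd bin
[oracle@load1 bin]$ ./oewizard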

Step 2 (Start Coordinator and Load Generators)


The next step involves starting the Coordinator process and the load generators. All of the parameters for these operations are sourced from the swingbench.env file in the $SWINGHOME directory. To start the coordinator enter the following commands.
[oracle@load1 swingbench]$ cd bin
[oracle@load1 bin]$ ./Coordinator
[oracle@load1 bin]$
11:44:34 04/07 [DBUG] Coordinator  Creating Cordinator Server
11:44:34 04/07 [DBUG] Coordinator  Cordinator Server Ready

If the coordinator has already been started on the machine you will receive an error and it will be necessary to kill off the process before continuing.

NOTE : Currently it is necessary to restart the Coordinator each time the load generators are started (the author is looking into this).

Before starting the load generators it is important to ensure that the swingconfig.xml file on each load generator has been updated with a version appropriate for the benchmark. As mentioned earlier, the default Unix/Linux build of SwingBench uses shell scripts called distributeoe (or distributecc for the Calling Circle benchmark) to perform this task. The scripts use scp to copy the files to each node, so ssh will need to be configured first. Make sure the scripts are modified to suit your environment and then run them by issuing the following command


[oracle@load1 scripts]$ ./distributeoe
oe1.xml 100% |****************************************|  2080  00:00
oe2.xml 100% |****************************************|  2080  00:00
oe3.xml 100% |****************************************|  2080  00:00
oe4.xml 100% |****************************************|  2080  00:00
oe5.xml 100% |****************************************|  2080  00:00
oe6.xml 100% |****************************************|  2080  00:00
oe7.xml 100% |****************************************|  2080  00:00
oe8.xml 100% |****************************************|  2080  00:00

The next step is to start the load generators; this can either be done manually or via remote shell functionality such as ssh. The default install on Unix/Linux comes with a script StartLoadGenerators that will attempt to start all of the load generators via ssh (ssh access will need to have been configured before this script will work). At present two versions of the front end for the load generators exist: one with a full graphical front end, swingbench, and one with a simple character front end, charbench. Both versions run exactly the same kernel but differ in the way the results are presented. swingbench is typically used when the load generators support graphical output and the additional CPU load created by charting is not an issue. charbench is typically used when only simple network connectivity is available and the work placed on the load generators is significant. The following commands start the load generators on all nodes
[oracle@load1 swingbench]$ cd bin/scripts
[oracle@load1 scripts]$ ./StartLoadGenerators
CharBench
Author  : Dominic Giles
Version : 1.01

Configuration is being read from SwingConfig.xml
Results are being written to results.xml.
Hit Return to Start & Terminate Run...
11:52:26 04/07 [DBUG] CoordinatorServerImpl Current connected clients = 1
Author  : Dominic Giles
Version : 1.01

Configuration is being read from SwingConfig.xml
Results are being written to results.xml.
Hit Return to Start & Terminate Run...
. . . .
11:52:26 04/07 [DBUG] CoordinatorServerImpl Current connected clients = 2
11:52:26 04/07 [DBUG] CoordinatorServerImpl Current connected clients = 3
11:52:26 04/07 [DBUG] CoordinatorServerImpl Current connected clients = 4
11:52:26 04/07 [DBUG] CoordinatorServerImpl Current connected clients = 5
11:52:26 04/07 [DBUG] CoordinatorServerImpl Current connected clients = 6
11:52:26 04/07 [DBUG] CoordinatorServerImpl Current connected clients = 7
11:52:26 04/07 [DBUG] CoordinatorServerImpl Current connected clients = 8

NOTE : The output above has been abbreviated.

Step 3 (Start ClusterOverview)


The following command starts ClusterOverview
[oracle@load1 swingbench]$ cd bin
[oracle@load1 bin]$ ./clusteroverview

The following image shows ClusterOverview with the control panel, the TPM chart and the user session chart displayed.


The scalability chart isn't used in this demo and so should not be displayed

Step 4 (Start the Load Generators)


To start the load generators click on the load generators tab. NOTE : It is not possible to start and stop the databases from the database tab since their display name is not the same as the connect string used by the load generators. Select all of the load generators by either dragging down the table or by selecting the first row and typing Ctrl-A. Now press the start button as shown below. The load generators should log the users on and begin generating load as shown below.


Wait until the load has achieved a steady state as shown below.


It is now possible to terminate an instance in the cluster; this can be achieved by powering off a database server or by performing a shutdown abort as shown below.
SQL*Plus: Release 9.2.0.3.0 - Production on Tue Apr 8 13:23:46 2003

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.3.0 - Production
With the Partitioning and Real Application Clusters options
JServer Release 9.2.0.3.0 - Production

SQL> shutdown abort

As a result the user sessions on the failed node begin to migrate to the surviving nodes as shown below (in this image the node has been physically powered off).


The load will drop as the failing node's transactions are recovered. The period of recovery is dependent on the setup of the cluster manager and Oracle9i parameters such as fast_start_mttr_target. Continue shutting down nodes if you deem it appropriate.
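For reference, fast_start_mttr_target can be changed dynamically from SQL*Plus; the value below (a 60 second recovery target) is purely illustrative and should be tuned to your environment.

SQL> alter system set fast_start_mttr_target = 60;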

Step 5 (Restart Failed Node and Rebalance User Sessions)


Leave ClusterOverview running and restart the failed node (remember to restart the Oracle*Net listener). It will eventually reappear in the user session chart. Click on the database tab and select the databases (Ctrl and left mouse click) that did not have a forced failure; their status will probably indicate that they are not running and have 0 TPM, but ignore this. Now press the rebalance button as shown below. The user sessions should now be rebalanced across all of the nodes as shown below.


NOTE : In the image above the user session chart has had its Y-scale reset, starting at 185. To stop all of the nodes click on the load generator tab, select a row and press Ctrl-A, then press the stop button as shown below.

