
Veritas Cluster Commands !!!

In this post, we are going to look at Veritas Cluster Server (VCS) ha commands.


Soon after installing VCS it is time to configure it. A cluster can be configured
either through GUI clicks or through the ha commands.

Though we can have fun playing with the GUI, as admins it is better to grab some
knowledge of the commands. So let me start this.....

root@solaris:~# hares -list | grep solaris
root@solaris:~#
root@solaris2:~# hagrp -list
cvm             solaris2
cvm             solaris
vxfen           solaris2
vxfen           solaris
root@solaris2:~#

Till now there are no resources or service groups, so let's start creating one....
cvm and vxfen come by default with the cluster software (cvm -- Cluster Volume
Manager and vxfen -- I/O fencing).
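Before creating anything, it is handy to check the overall cluster state. hastatus
with the -sum option prints a summary of the systems, service groups and resources
(the exact output layout varies between VCS versions):

root@solaris:~# hastatus -sum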

CMD : for a group ---- hagrp -add <grpname>

CMD : for a resource ---- hares -add <resource name> <resource type> <service group>
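Note that the cluster configuration must be opened read-write before any of the
hagrp -add or hares -modify commands below will succeed, and it should be dumped
back to disk and made read-only once the changes are done:

root@solaris:~# haconf -makerw
   ... configuration changes ...
root@solaris:~# haconf -dump -makero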

root@solaris2:~#
root@solaris2:~# hagrp -add TESTDB          ----- Adding Service Group
root@solaris2:~#
root@solaris2:~# hagrp -add TESTCI
root@solaris2:~#
root@solaris2:~# hagrp -list
TESTDB          solaris2
TESTDB          solaris
TESTCI          solaris2
TESTCI          solaris
cvm             solaris2
cvm             solaris
vxfen           solaris2
vxfen           solaris
root@solaris2:~#
root@solaris:~# hares -add dg_test DiskGroup TESTDB          ----- Adding resource
root@solaris:~#
root@solaris:~# hares -modify dg_test DiskGroup testdg
root@solaris:~#
root@solaris:~# hares -list | grep solaris
dg_test solaris
root@solaris:~#

Soon after creating a resource for DiskGroup, we need a volume resource for it.

root@solaris:~#
root@solaris:~# hares -add vol_testvol Volume TESTDB
root@solaris:~#
root@solaris:~# hares -modify vol_testvol Volume testvol
root@solaris:~#
root@solaris:~# hares -modify vol_testvol DiskGroup testdg
root@solaris:~#
root@solaris:~# hares -list | grep solaris
dg_test solaris
vol_testvol solaris
root@solaris:~#

Then a mount point,

root@solaris:~#
root@solaris:~# hares -add mnt_test Mount TESTDB
root@solaris:~#
root@solaris:~# hares -modify mnt_test MountPoint /test
root@solaris:~#
root@solaris:~# hares -modify mnt_test BlockDevice /dev/vx/dsk/testdg/testvol
root@solaris:~#
root@solaris:~# hares -modify mnt_test FSType vxfs
root@solaris:~#
root@solaris:~# hares -modify mnt_test FsckOpt %-y
root@solaris:~#
root@solaris:~# hares -list | grep solaris
dg_test solaris
vol_testvol solaris
mnt_test solaris
root@solaris:~#

(Note the % before -y in FsckOpt; it stops the ha command from treating -y as one
of its own options.)

We have to create resources on only one node; they will be visible on both
nodes.

root@solaris:~# hares -list


dg_test solaris
dg_test solaris2
vol_testvol solaris
vol_testvol solaris2
mnt_test solaris
mnt_test solaris2
root@solaris:~#
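The per-system state of any resource can be checked as well; hares -state lists
the State attribute of the resource on each node of the cluster:

root@solaris:~# hares -state dg_test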

Now let us create a resource for Oracle, so that we can switch Oracle between
the nodes and start it with just a few clicks...
The Oracle resource comes with attributes like Sid (system id), Owner, Home
(oracle_home) and Pfile (profile file).

root@solaris:~# hares -add test_ora Oracle TESTDB
root@solaris:~#
root@solaris:~# hares -modify test_ora Sid SID
root@solaris:~#
root@solaris:~# hares -modify test_ora Owner oraSID
root@solaris:~#
root@solaris:~# hares -modify test_ora Home /oracle/SID/112_64
root@solaris:~#
root@solaris:~# hares -modify test_ora Pfile /oracle/SID/112_64/dbs/initSID.ora
root@solaris:~#

The listener acts as the mediator between SAP and the DB, hence we need a
resource for this too...

root@solaris:~# hares -add LSNR_test Netlsnr TESTDB
root@solaris:~#
root@solaris:~# hares -modify LSNR_test Owner oraSID
root@solaris:~#
root@solaris:~# hares -modify LSNR_test Home /oracle/SID/112_64
root@solaris:~#

As we use an IP for the DB and the SAP server, we need an IP resource for each
so that we can switch the DB or CI to any node. For the IP, we need a NIC
resource too, with attributes like Address, NetMask and Device.

root@solaris:~# hares -add NIC_test NIC TESTDB
root@solaris:~#
root@solaris:~# hares -modify NIC_test Device ipmp0
root@solaris:~#
root@solaris:~# hares -add VIP_test IP TESTDB
root@solaris:~#
root@solaris:~# hares -modify VIP_test Device ipmp0
root@solaris:~#

root@solaris:~# hares -modify VIP_test Address 10.20.10.21
root@solaris:~#
root@solaris:~# hares -modify VIP_test NetMask 255.255.255.0

Our Oracle resources are ready, now let us go for the SAP resource....


For SAP we need attributes like the instance type & name, SAP admin user, SAP SID
and the start profile.......

root@solaris:~# hares -add SID_SAP SAPNW04 TESTCI
root@solaris:~#
root@solaris:~# hares -modify SID_SAP InstType ENQUEUE
root@solaris:~#
root@solaris:~# hares -modify SID_SAP EnvFile /home/SIDadm/.cshrc
root@solaris:~#
root@solaris:~# hares -modify SID_SAP InstName 00
root@solaris:~#
root@solaris:~# hares -modify SID_SAP ProcMon ms en pr
root@solaris:~#
root@solaris:~# hares -modify SID_SAP SAPAdmin SIDadm
root@solaris:~#
root@solaris:~# hares -modify SID_SAP SAPSID SID
root@solaris:~#
root@solaris:~# hares -modify SID_SAP StartProfile
/usr/sap/SID/SYS/profile/pfl.flname

Similarly, we have to create resources like NFSRestart, Share and Proxy if we
are sharing any filesystems from a node....

root@solaris:~# hares -add NFSrestart_test NFSRestart TESTCI
root@solaris:~#
root@solaris:~# hares -modify NFSrestart_test LocksPathName
/sapmnt/SID/nfslocks
root@solaris:~#

A Proxy resource helps the cluster wait until its target resource is online.

Example: this is helpful when we use an NFS share. Unless and until the
filesystem gets mounted on the node, it cannot be shared; this ordering is
handled by the Proxy.

root@solaris:~# hares -add proxy_test Proxy TESTCI
root@solaris:~#
root@solaris:~# hares -modify proxy_test TargetResName mnt_test1
root@solaris:~#
root@solaris:~# hares -add share_test Share TESTCI
root@solaris:~#
root@solaris:~# hares -modify share_test PathName /test1
root@solaris:~#
root@solaris:~# hares -modify share_test Options -o anon=0
root@solaris:~#

In a cluster, dependencies are the main thing, and they are achieved by proper
linking. We create a link between two resources carefully, defining the parent
and the child resource respectively; the parent depends on the child, so the
child is brought online first.

CMD : hares -link <parent res> <child res>

hares -link vol_testvol dg_test

hares -link mnt_test vol_testvol

(The Mount resource depends on the Volume, which in turn depends on the
DiskGroup, so the dependent resource comes first in each command.)
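The resource links just created can be verified with hares -dep, which lists
each parent/child pair per service group:

root@solaris:~# hares -dep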

Similarly, to unlink this relation:

CMD : hares -unlink <parent res> <child res>

Links can also be used to specify dependencies between service groups as they
come online.
**** For example, we created service groups for both SAP and DB. But the DB
should be started before CI, so in such cases we can use a link to create a
dependency which states that unless and until the DB service group is online,
the SAP service group should not come online.

CMD : hagrp -link <parent group> <child group> <category> <location> [<type>]

hagrp -link TESTCI TESTDB online local firm

(TESTCI is the parent here, since it depends on the child TESTDB being online
first. "online local firm" is one common dependency type; see the VCS
documentation for the other categories.)
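The group dependency can be verified the same way as resource links:

root@solaris:~# hagrp -dep TESTCI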

Similarly some more useful and regular commands are,

To bring services online and offline

hagrp -online service_group -sys system_name


hagrp -offline service_group -sys system_name

For Freezing/unfreezing service groups

hagrp -freeze group_name [-persistent]


hagrp -unfreeze group_name [-persistent]

To switch a service group between the nodes of the cluster

hagrp -switch group_name -to system_name

To bring resources online and offline

hares -online resource_name -sys system_name


hares -offline resource_name -sys system_name
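One more regular command worth knowing: if a resource faults during an online
attempt, the fault has to be cleared before VCS will try to bring it online
again on that system:

root@solaris:~# hares -clear resource_name -sys system_name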

Since the cluster is an ocean, there are so many other commands too, but it is
always easier to go with the GUI option. This post gives a basic idea of the
commands that run internally when we click and perform an action in the GUI.

