
HACMP topology & useful commands

HACMP resource groups can be configured in three ways:

1. Rotating
2. Cascading
3. Mutual failover

The cascading and rotating resource groups are the classic, pre-HA 5.1 types; the custom resource group type was introduced in HA 5.1.

Cascading resource group:

Upon node failure, a cascading resource group falls over to the available node with the next priority in the node priority list. Upon node reintegration into the cluster, a cascading resource group falls back to its home node by default.

Cascading without fallback:
With this option, whenever the primary node fails the package fails over to the next available node in the list, but when the primary node comes back online the package does not fall back automatically. We need to move the package back to its home node at a convenient time.
Rotating resource group:
This is very similar to cascading without fallback: whenever the package fails over to a standby node it never falls back to the primary node automatically; we need to move it back manually at our convenience.
Mutual takeover:
With this option both nodes run in active-active mode. Whenever a failover happens, the package on the failed node moves to the other active node and runs alongside the package already there. Once the failed node comes back online we can move the package back to that node manually.
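For the manual move back to the home node, clRGmove (listed below) is the usual tool. A minimal sketch, assuming a resource group named app1_rg and a home node named node1 (both names are hypothetical):

clRGmove -g app1_rg -n node1 -m    # move app1_rg online to node1
clRGinfo app1_rg                   # confirm the group is ONLINE on node1

The same move is typically also available through the smitty C-SPOC resource group menus.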
Useful HACMP commands
clstat - show cluster state and substate; needs clinfo.
cldump - SNMP-based tool to show cluster state
cldisp - similar to cldump, perl script to show cluster state.
cltopinfo - list the local view of the cluster topology.
clshowsrv -a - list the local view of the cluster subsystems.
clfindres (-s) - locate the resource groups and display status.
clRGinfo -v - locate the resource groups and display status.
clcycle - rotate some of the log files.
cl_ping - a cluster ping program with more arguments.
clrsh - cluster rsh program that takes cluster node names as arguments.
clgetactivenodes - which nodes are active?
get_local_nodename - what is the name of the local node?
clconfig - check the HACMP ODM.
clRGmove - online/offline or move resource groups.
cldare - sync/fix the cluster.
cllsgrp - list the resource groups.
clsnapshotinfo - create a large snapshot of the hacmp configuration.
cllscf - list the network configuration of an hacmp cluster.
clshowres - show the resource group configuration.
cllsif - show network interface information.
cllsres - show short resource group information.
lssrc -ls clstrmgrES - list the cluster manager state.
lssrc -ls topsvcs - show heartbeat information.
cllsnode - list a node centric overview of the hacmp configuration.
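
In day-to-day administration a quick status check usually combines a few of these, for example:

cltopinfo      # local view of nodes, networks and interfaces
clRGinfo       # which node each resource group is currently online on
cllsif         # network interface details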
-------------------------------------------------------------------------------------

Steps to configure HACMP:
1. Install the nodes, making sure redundancy is maintained for power supplies, networks and fibre (SAN) connections. Then install AIX on the nodes.
2. Install all the HACMP filesets except HAview and HATivoli. Install all the RSCT filesets from the AIX base CD.
3. Make sure that the AIX and HACMP patches and the server code are at the latest level (ideally recommended).
4. Check that the fileset bos.clvm is present on both nodes. This is required to make the VGs enhanced concurrent capable.
5. V.IMP: Reboot both the nodes after installing the HACMP filesets.
6. Configure shared storage on both nodes. In case of a disk heartbeat, also assign a 1 GB shared storage LUN to both nodes.
7. Create the required VGs only on the first node. The VGs can be either normal VGs or enhanced concurrent VGs. Assign a particular major number to each VG while creating it, and record the major number information.
To check the major number, use the command:
ls -lrt /dev | grep <vgname>
Mount automatically at system restart should be set to NO.
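As a sketch (the VG name, major number and hdisk below are hypothetical; adjust them to your environment), an enhanced concurrent VG could be created on node 1 like this:

lvlstmajor                             # list the free major numbers on this node
mkvg -n -C -y app1vg -V 101 hdisk2     # -n: no auto-varyon at restart, -C: enhanced concurrent capable, -V: major number
ls -lrt /dev | grep app1vg             # record the major number that was assigned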
8. Varyon the VGs that were just created.
9. V.IMP: Create a log LV on each VG first, before creating any other LV. Give a unique name to the log LV.
Initialize the log LV (this destroys its current content) with: logform /dev/loglvname
Repeat this step for all VGs that were created.
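A minimal sketch, assuming the hypothetical VG app1vg from above and JFS2 filesystems:

mklv -t jfs2log -y app1vg_loglv app1vg 1    # dedicated JFS2 log LV with a unique name, 1 LP
logform /dev/app1vg_loglv                   # initialize the log (answer y when prompted)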
10. Create all the necessary LVs on each VG.
11. Create all the necessary file systems on each LV created. You can create the mount points as per the customer's requirements.
Mount automatically at system restart should be set to NO.
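For example (the LV, VG and mount point names are hypothetical and the size is a placeholder):

mklv -t jfs2 -y app1lv app1vg 10         # data LV of 10 LPs in app1vg
crfs -v jfs2 -d app1lv -m /app1 -A no    # -A no: do not mount automatically at restart
mount /app1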
12. umount all the filesystems and varyoff all the VGs.
13. Run chvg -an for each VG, so that all VGs are set to not activate automatically at system restart.
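On node 1 this amounts to something like (the VG and mount point names are the hypothetical ones used above):

umount /app1
varyoffvg app1vg
chvg -an app1vg     # -a n: do not activate (varyon) the VG automatically at restart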
14. Go to node 2 and run cfgmgr -v to import the shared volumes.
15. Import all the VGs on node 2: use smitty importvg and import with the same major number as assigned on node 1.
16. Run chvg -an for all VGs on node 2.
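On node 2 the import sequence looks roughly like this (major number 101 and hdisk2 are the hypothetical values from node 1; the hdisk numbering can differ on node 2):

cfgmgr -v                            # discover the shared LUNs on node 2
importvg -V 101 -y app1vg hdisk2     # import with the same major number as on node 1
chvg -an app1vg                      # again, no automatic activation at restart
varyoffvg app1vg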
17. V.IMP: Identify the boot1, boot2, service IP and persistent IP for both nodes and make the entries in /etc/hosts.
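The /etc/hosts entries could look like this (all addresses and labels below are made-up examples; use your site's addressing scheme):

10.10.10.1    node1_boot1
10.10.11.1    node1_boot2
10.10.10.2    node2_boot1
10.10.11.2    node2_boot2
192.168.1.10  node1_svc     # service IP
192.168.1.11  node2_svc     # service IP
192.168.1.20  node1_per     # persistent IP
192.168.1.21  node2_per     # persistent IP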
18. Define the cluster name.
19. Define the cluster nodes: smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> Configure an HACMP node -> Add a node to an HACMP cluster. Define both nodes one after the other.
20. Discover HACMP config: this will import, for both nodes, all the node info, boot IPs and service IPs from /etc/hosts.
smitty hacmp -> Extended Configuration -> Discover HACMP related information
21. Adding communication interfaces
Add the HACMP communication interfaces (Ethernet interfaces).
smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> Configure HACMP networks -> Add a network to the HACMP cluster.
Select ether and press Enter. Then select diskhb and press Enter; diskhb is your non-TCP/IP heartbeat network.
22. Adding devices for disk heartbeat
Include the interfaces/devices in the ether network and the diskhb network already defined.
smitty hacmp -> Extended Configuration -> Extended Topology Configuration -> Configure HACMP communication interfaces/devices -> Add communication interfaces/devices.
23. Adding boot IPs & disk heartbeat information
Include all four boot IPs (two for each node) in the ether network already defined. Then include the disk for heartbeat on both nodes in the diskhb network already defined.
24. Adding persistent IP
Add the persistent IPs:
smitty hacmp -> Extended Configuration -> Extended Topology Configuration ->
Configure HACMP persistent nodes IP label/Addresses
25. Adding Persistent IP labels
Add a persistent ip label for both nodes.
26. Defining IP labels
Define the service IP labels for both nodes.
smitty hacmp -> Extended Configuration -> Extended Resource Configuration ->
HACMP extended resource configuration -> Configure HACMP service IP label
27. Adding Resource Group
Add Resource Groups:
smitty hacmp -> Extended Configuration -> Extended Resource Configuration ->
HACMP extended resource group configuration

Continue similarly for all the resource groups.


The node selected first while defining the resource group will be the primary owner of that resource group; the node after that is the secondary node.
Make sure you set the primary node correctly for each resource group. Also set the failover/fallback policies as per the requirements of the setup.
28. Setting attributes of the resource groups
Set the attributes of the resource groups already defined. Here you actually assign the resources to the resource groups.
smitty hacmp -> Extended Configuration -> Extended Resource Configuration -> HACMP extended resource group configuration
29. Adding IP label & VGs owned by node
Add the service IP label for the owner node and also the VGs owned by the owner node of this resource group.
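Before synchronizing, you can review what has been configured so far from the command line, for example:

cllsgrp        # list the resource groups that have been defined
clshowres      # show the resource group configuration in detail
cltopinfo      # review the topology (nodes, networks, interfaces)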
30 & 31. Synchronize & start Cluster
Synchronize the cluster:
This will sync the info from one node to second node.
Smitty cl_sync

That s it. Now you are ready to start the cluster.


Smitty clstart
You can start the cluster together on both nodes or start individually on each n
ode.
You can start the cluster together on both nodes or start individually on each n
ode.
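Right after starting, you can check that the cluster subsystems are active on each node, for example:

lssrc -g cluster                        # clstrmgrES (and clinfoES, if started) should be active
lssrc -ls clstrmgrES | grep -i state    # should eventually report a stable state, e.g. ST_STABLE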
32 & 33. Check for cluster Stabilize & VG varied on
Wait for the cluster to stabilize. You can check when the cluster is up by follo
wing
commands
a. netstat i
b. ifconfig a : look-out for service ip. It will show on each node if the cluster
is up.
Check whether the VGs under cluster s RGs are varied-ON and the filesystems in the
VGs are mounted after the cluster start.

Here test1vg and test2vg are VGs which are varied on when the cluster is started, and filesystems /test2 and /test3 are mounted when the cluster starts. /test2 and /test3 are in test2vg, which is part of the RG owned by this node.
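A quick way to confirm this from the command line (test2vg, /test2 and /test3 are the example names above):

lsvg -o                    # lists the VGs that are currently varied on, e.g. test2vg
df -g /test2 /test3        # confirms the cluster filesystems are mounted
clRGinfo                   # each RG should show ONLINE on its owning node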
34. Perform all the tests such as resource take-over, node failure and network failure, and verify the cluster before releasing the system to the customer.
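While running these tests it helps to watch the cluster from the surviving node, for example (the hacmp.out location can vary by HACMP level):

clstat -a                  # ASCII cluster state display; needs clinfoES running
tail -f /tmp/hacmp.out     # event script output during take-over and reintegration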
