How to add new node in 11gR2 RAC
cgswong · Sep 2 2010, edited Aug 5 2011

I thought I'd share my recent findings on the steps for adding a new node to an 11gR2 RAC cluster. The documentation is missing a step, though this likely applies only to a role-separated setup (steps 10-11 are the critical part):
Phase I - Extending Oracle Clusterware to a new cluster node
1. Make the physical connections and install the OS. Follow MOS note “11GR2 GRID INFRASTRUCTURE INSTALLATION FAILS WHEN RUNNING ROOT.SH ON NODE 2 OF RAC [ID 1059847.1]” to ensure root.sh completes successfully!
2. Create the Oracle accounts, and set up SSH between the new node and the existing cluster nodes. Follow MOS note “How To Configure SSH for a RAC Installation [ID 300548.1]” for the correct SSH setup procedure. EACH NODE MUST BE VISITED TO ENSURE IT IS ADDED TO THE known_hosts FILE!!
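A minimal sketch of the SSH equivalency step, assuming three nodes named node1, node2, and node3 (placeholder hostnames; see the MOS note above for the full procedure):

```shell
# As the 'oracle' (and, for role separation, 'grid') user on the new node:
ssh-keygen -t rsa            # accept defaults, empty passphrase
# Append the new node's ~/.ssh/id_rsa.pub to ~/.ssh/authorized_keys on
# every node (and vice versa), then connect once in each direction so
# that known_hosts is populated on all nodes:
for host in node1 node2 node3; do
    ssh $host date
done
```

Run the loop from every node; any prompt to accept a host key must be answered before the installer runs, since addNode.sh requires passwordless, prompt-free SSH.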
3. Verify the requirements for cluster node addition using the Cluster Verification Utility (CVU). From an existing cluster node:
$> $GI_HOME/bin/cluvfy stage -post hwos -n <existing and new nodes> -verbose
4. Compare an existing node with the new node(s) to be added:
$> $GI_HOME/bin/cluvfy comp peer -refnode <existing node> -n <new node> -orainv oinstall -osdba dba -verbose
5. Verify the integrity of the cluster and new node by running from an existing cluster node:
$GI_HOME/bin/cluvfy stage -pre nodeadd -n <new node> -fixup -verbose
6. Add the new node by running the following from an existing cluster node:
a. Not using GNS
$GI_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={<new node>}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={<new node VIP>}"
b. Using GNS
$GI_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={<new node>}"
Run the root scripts when prompted.
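The root scripts from the step above are run on the new node. A sketch, assuming illustrative inventory and Grid home paths (substitute your own):

```shell
# On the new node, as the root user (paths are examples only):
/u01/app/oraInventory/orainstRoot.sh
/u01/app/11.2.0/grid/root.sh
```

root.sh starts the clusterware stack on the new node; if it fails, consult MOS note ID 1059847.1 referenced in step 1 before retrying.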
7. Verify that the new node has been added to the cluster:
$GI_HOME/bin/cluvfy stage -post nodeadd -n <new node> -verbose
Phase II - Extending Oracle Database RAC to new cluster node
8a. Using Clone Process
i. Use ‘tar’ to archive an existing DB home, and extract to the same location on the new node
ii. On the new node run:
perl $ORACLE_HOME/clone/bin/clone.pl '-O"CLUSTER_NODES={<existing node>,<new node>}"' '-O"LOCAL_NODE=<new node>"' ORACLE_BASE=$ORACLE_BASE ORACLE_HOME=$ORACLE_HOME ORACLE_HOME_NAME=OraDb11g_home1 '-O-noConfig'
iii. On the existing node where the DB home was cloned, run:
$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={<existing node>,<new node>}"
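The tar-and-extract step (8a.i) can be sketched as follows, assuming the DB home lives at the same path on both nodes and "newnode" is a placeholder hostname:

```shell
# On an existing node, as the 'oracle' user: archive the DB home,
# copy it across, and extract it to the identical path on the new node.
cd $ORACLE_HOME/..
tar -cf dbhome.tar dbhome_1
scp dbhome.tar newnode:$PWD/
ssh newnode "cd $PWD && tar -xf dbhome.tar"
```

The home must land at exactly the same path on the new node, since clone.pl re-registers it in place.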
OR
8b. Using the addNode.sh process (RECOMMENDED)
i. From an existing node in the cluster as the ‘oracle’ user:
$> $ORACLE_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={<new node>}"
9. On the new node run the root.sh as prompted.
10. Set ORACLE_HOME and ensure you are working as the ‘oracle’ user.
Note: Ensure the permissions on the oracle executable are 6751; if not, then as the root user:
cd $ORACLE_HOME/bin
chgrp asmadmin oracle
chmod 6751 oracle
ls -l oracle
11. On any existing node, run DBCA ($ORACLE_HOME/bin/dbca) to add the new instance:
$ORACLE_HOME/bin/dbca -silent -addInstance -nodeList <new node> -gdbName <db name> -instanceName <new instance> -sysDBAUserName sys -sysDBAPassword <sys password>
NOTES: A. Run the command from an existing node with the same or less memory than the new node; otherwise it will fail with insufficient memory for the database memory structures. Also check the log file for actual success, since it can differ from what is displayed on screen.
B. Whenever a patch is applied to the database ORACLE_HOME, re-apply the ownership and permissions above after patching.
12. Verify the administrator privileges on the new node by running on existing node:
$GI_HOME/bin/cluvfy comp admprv -o db_config -d $ORACLE_HOME -n <all nodes list> -verbose
13. For an Admin-Managed Cluster, add the new instance to services, or create additional services. For a Policy-Managed Cluster, verify the instance has been added to an existing server pool.
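For the admin-managed case, a hedged srvctl example (database name ORCL, service OLTP, and instance names ORCL1-ORCL3 are placeholders):

```shell
# Make the new instance a preferred instance of an existing service:
srvctl modify service -d ORCL -s OLTP -n -i "ORCL1,ORCL2,ORCL3"
# Confirm where the service is now running:
srvctl status service -d ORCL -s OLTP
```

For a policy-managed cluster, `srvctl status serverpool` and `srvctl status database -d ORCL` show whether the new node's instance has joined the pool.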
14. Setup OCM in Cloned Homes
a. Delete all host subdirectories to remove the previously configured host:
$> rm -rf $ORACLE_HOME/ccr/hosts/*
b. In both the ‘grid’ and ‘oracle’ homes, move (do not copy) core.jar into the pending directory:
$> mv $ORACLE_HOME/ccr/inventory/core.jar $ORACLE_HOME/ccr/inventory/pending/core.jar
c. Configure OCM for the cloned home on the new node:
$> $ORACLE_HOME/ccr/bin/configCCR -a