How to Configure or Re-configure Grid Infrastructure in Oracle 11gR2, 12cR1 and 12cR2
This post explains the 11gR2 and 12cR1 Grid Infrastructure configuration framework config.sh (config.bat on Windows), which is located in $GRID_HOME/crs/config/. It also explains the 12cR2 Grid Infrastructure configuration framework gridSetup.sh (gridSetup.bat on Windows), which is located in $GRID_HOME.
“config.sh” is an 11gR2 Grid Infrastructure (GI) feature that can be used to configure an 11gR2 GI cluster after the GI software binaries have been installed or cloned properly. It starts the GI configuration framework GUI, which guides the user through a few pages to collect the necessary information, and finally prompts to run the root script. It can also run in silent mode with a response file. config.sh is available in all 11gR2 and 12cR1 versions; in 12.2, it is replaced by gridSetup.sh, so use gridSetup.sh instead of config.sh on 12.2 and above.
“config.sh” and gridSetup.sh are not tools to deploy the software binaries. The binaries can be deployed by running a standard installation, by cloning, etc. This also means that if the GI binaries are corrupted, config.sh and gridSetup.sh will not help; the deinstall or node removal/addition procedure can be used in that case.
Cases where config.sh can be used
After GI cluster is deconfigured with rootcrs.pl on all nodes
In this case, when run in interactive mode, config.sh will ask for cluster parameters to generate the GI configuration files and prompt to run root.sh to build a new GI cluster. Since it asks for the configuration information, it does not matter whether the original cluster was a fresh installation or was upgraded from an earlier version, or whether the new cluster has the same number of nodes or the same configuration (OCR/VD location, network information, etc.) as the original one. Later releases can re-use existing diskgroups for OCR and Voting Disk if the diskgroup attribute compatible.asm is set to a sufficiently recent version.
After GI is cloned from other cluster
In this case, running config.sh is part of the cloning process.
After GI is installed with software only option
In this case:
- if there is no previous Oracle Clusterware installation, it will ask for cluster parameters to generate GI configuration files and prompt to run root.sh to configure a new GI cluster.
- if there is an existing Oracle Clusterware cluster installation, it will ask for cluster parameters to generate GI configuration files and prompt to run rootupgrade.sh to upgrade the existing cluster. In this case, there is no need to deconfigure anything prior to running config.sh.
- if there is an existing GI Standalone installation, it will error out, as it cannot be used to upgrade Oracle Restart.
Cases where config.sh or gridSetup.sh is not the best tool
In a GI cluster environment, config.sh/gridSetup.sh configures or reconfigures all nodes in the cluster, which means downtime. It is therefore not the best tool for the following scenarios, since they can be accomplished without downtime:
- One or more nodes are having problems, but at least one node is running fine. In this case, the node removal/addition procedure can be used to avoid downtime.
- One or more nodes are having problems, but at least one node is running fine, and all of the following hold: the cluster was freshly installed without any patch set, regardless of how long it has been running (an applied patch set update (PSU) is fine); the cluster parameters have not changed since the original configuration (e.g. OCR/VD in the same location, network configuration unchanged); and GRID_HOME is intact. In this case, deconfiguring and reconfiguring each problematic node can be used: as root, execute “$GRID_HOME/crs/install/rootcrs.pl -deconfig -force”, then “$GRID_HOME/root.sh”.
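The per-node deconfigure/reconfigure sequence above can be sketched as a small script. The node names, GRID_HOME path, and the ssh-as-root access are assumptions for illustration; the script defaults to a dry run that only prints the commands it would execute.

```shell
#!/bin/sh
# Sketch: deconfigure then reconfigure GI on each problematic node, one node
# at a time. Node names and GRID_HOME are hypothetical; both commands must
# run as root on the target node.
GRID_HOME=/u01/app/11.2.0/grid
BAD_NODES="node3 node4"
DRY_RUN=1   # set to 0 to actually execute via ssh

run_as_root() {   # $1 = node, $2 = command
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "[$1] $2"              # dry run: show what would be executed
  else
    ssh "root@$1" "$2"          # real run: execute as root on the node
  fi
}

for n in $BAD_NODES; do
  run_as_root "$n" "$GRID_HOME/crs/install/rootcrs.pl -deconfig -force"
  run_as_root "$n" "$GRID_HOME/root.sh"
done
```

Running nodes are untouched, so the cluster stays available while each problematic node is rebuilt.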
To start config.sh or gridSetup.sh in silent mode:
$ config.sh -silent -responseFile [response-file]
$ gridSetup.sh -silent -responseFile [response-file]
The response file must be created prior to running it in silent mode.
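For reference, a response file is a plain key=value text file. The excerpt below is an illustrative fragment only; the key names follow the grid_install.rsp template shipped under $GRID_HOME/response (gridsetup.rsp in 12.2), and the values shown (cluster name, SCAN name, node names) are hypothetical. Always start from the template for your exact version rather than writing one from scratch.

```
# Illustrative excerpt of a GI configuration response file (values are examples)
oracle.install.option=CRS_CONFIG
ORACLE_BASE=/u01/app/grid
oracle.install.crs.config.clusterName=mycluster
oracle.install.crs.config.gpnp.scanName=mycluster-scan
oracle.install.crs.config.gpnp.scanPort=1521
oracle.install.crs.config.clusterNodes=node1:node1-vip,node2:node2-vip
```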
Root script parallel execution
When building or upgrading a GI cluster with multiple nodes, start the root script (root.sh or rootupgrade.sh) on the first node and wait until it completes; then start it on all other nodes except the last one, and wait until it finishes on all of them; lastly, run it on the last node.
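The ordering above can be sketched as follows. The node names and GRID_HOME are assumptions, and the script only echoes what it would do; a real run would execute root.sh as root on each node (e.g. via ssh).

```shell
#!/bin/sh
# Sketch of root-script ordering on a 4-node cluster (hypothetical names).
GRID_HOME=/u01/app/11.2.0/grid
FIRST=node1
MIDDLE="node2 node3"   # every node except the first and the last
LAST=node4

run_root() {
  # Real execution would be: ssh root@$1 "$GRID_HOME/root.sh".
  # Echoed here as a dry-run sketch.
  echo "root.sh on $1"
}

run_root "$FIRST"       # step 1: first node alone; wait for completion
for n in $MIDDLE; do
  run_root "$n" &       # step 2: middle nodes may run in parallel
done
wait                    # block until every middle node finishes
run_root "$LAST"        # step 3: last node only after all others complete
```

The first and last nodes are serialized because they perform one-time cluster-wide steps; only the middle nodes can safely run the script concurrently.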