Upgrade from 11gR2 RAC to 12cR1 RAC using cloning methodology



12.1.0.2 Grid Infrastructure Upgrade: From 11.2.0.4 To 12.1.0.2 Using GI Clone

I used the following methodology to upgrade from 11.2.0.4 to 12.1.0.2 RAC. It is particularly useful from a time perspective: because the source home is cloned with its bundle patches (Exadata) or PSUs already applied, you avoid a separate patching pass afterwards. I tested this process in my non-production environment and strongly advise you to do the same to ensure the process is seamless.


A) Prepare SOURCE GI home (In this case it is 12.1.0.2) to be cloned:
********************************
Follow documentation for "Preparing the Oracle Grid Infrastructure Home for Cloning" at https://docs.oracle.com/database/121/CWADD/clonecluster.htm#CWADD92116

Oracle® Clusterware Administration and Deployment Guide
12c Release 1 (12.1)
Part Number E16794-16
********************************


A01) Stop all databases via srvctl (run by oracle)

srvctl stop database -d {db_unique_name}
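
If several databases run on this cluster, a small loop like the one below (a sketch, run as oracle) stops each of them; srvctl config database with no arguments lists the db_unique_names registered with the cluster.

for db in $(srvctl config database); do
   srvctl stop database -d ${db}
done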

A02) Stop CRS (run by root)

export ORACLE_HOME=/u01/app/12.1.0.2/grid
$ORACLE_HOME/bin/crsctl stop crs

A03) Validate that everything is shut down for the GI home and DB homes

ps -ef | grep pmon
ps -ef | egrep "d.bin|reboot" | grep -v grep
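
As an optional extra check (run as root), crsctl can confirm the stack is down; once CRS is stopped it should report CRS-4639 (Could not contact Oracle High Availability Services).

/u01/app/12.1.0.2/grid/bin/crsctl check crs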


B) Create a Copy of the Oracle Grid Infrastructure Home

B01) Create backup of existing home (run as root)

cp -prf /u01/app/12.1.0.2/grid /u01/app/12.1.0.2/grid_bak

B02) Clean up the unnecessary files (run as root). Note that td04db01 below is the source host name; substitute your own.

cd /u01/app/12.1.0.2/grid_bak
rm -rf log/td04db01
rm -rf gpnp/td04db01
find gpnp -type f -exec rm -f {} \;
rm -rf cfgtoollogs/*
rm -rf crs/init/*
rm -rf cdata/*
rm -rf crf/*
rm -rf network/admin/*.ora
rm -rf crs/install/crsconfig_params
find . -name '*.ouibak' -exec rm {} \;
find . -name '*.ouibak.1' -exec rm {} \;
rm -rf root.sh*
rm -rf rdbms/audit/*
rm -rf rdbms/log/*
rm -rf inventory/backup/*

B03) Create a compressed copy of the GI home to be used for cloning (run as root)

cd /u01/app/12.1.0.2/grid_bak
tar -zcvpf /<backup_location>/gridHome.tgz .

chown oracle:oinstall /<backup_location>/gridHome.tgz
chown oracle:oinstall /u01/app/12.1.0.2/grid_bak
chmod 755 /u01/app/12.1.0.2/grid_bak

B04) Start the clusterware on the node (run as root)

export ORACLE_HOME=/u01/app/12.1.0.2/grid
$ORACLE_HOME/bin/crsctl start crs


C) Deploy the 12.1.0.2 GI home (the gridHome.tgz from the step above) on the destination cluster nodes where GI 11.2.0.4 is installed and active:
**********************************
Follow documentation for "Step 2: Deploy the Oracle Grid Infrastructure Home on the Destination Nodes" at https://docs.oracle.com/database/121/CWADD/clonecluster.htm#CWADD92125

Oracle® Clusterware Administration and Deployment Guide
12c Release 1 (12.1)

**********************************

C01) As root on each node of the cluster, create the directory for the GI home and extract the tarball (staged here under /u01/app/oracle/patches). The recursive chown clears the setuid and setgid bits from the Oracle binaries, so restore them with the chmod commands that follow.

mkdir -p /u01/app/12.1.0.2/grid

cd /u01/app/12.1.0.2/grid
tar -zxvf /u01/app/oracle/patches/gridHome.tgz

chown -R oracle:oinstall /u01/app/12.1.0.2

chmod u+s /u01/app/12.1.0.2/grid/bin/oracle
chmod g+s /u01/app/12.1.0.2/grid/bin/oracle
chmod u+s /u01/app/12.1.0.2/grid/bin/extjob
chmod u+s /u01/app/12.1.0.2/grid/bin/jssu
chmod u+s /u01/app/12.1.0.2/grid/bin/oradism
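
As a quick sanity check, the oracle binary should now show the setuid/setgid bits (permissions similar to -rwsr-s--x):

ls -l /u01/app/12.1.0.2/grid/bin/oracle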

C02) Execute the clone.pl script on ALL cluster nodes, one node at a time, on the destination system (do NOT run root.sh even though the clone output instructs you to):
*************************************
Follow documentation for "Step 3: Run the clone.pl Script on Each Destination Node" at https://docs.oracle.com/database/121/CWADD/clonecluster.htm#CWADD92125

Oracle® Clusterware Administration and Deployment Guide
12c Release 1 (12.1)

*************************************

I created the following script to echo the perl command that needs to be executed (as the oracle user). Fill in node1 through node4 with your cluster node names; if you have fewer than four nodes, adjust the C01 parameter accordingly.
## As ORACLE

> cat run_for_gi.ksh
mkdir -p /home/oracle/dba/patches/12102/install

SCRIPT_DIR=/home/oracle/dba/patches/12102/install
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/12.1.0.2/grid
ORACLE_INV=/u01/app/oraInventory
cd $ORACLE_HOME/clone
THIS_NODE=`hostname -s`
node1=
node2=
node3=
node4=

E01=ORACLE_HOME=${ORACLE_HOME}
E02=ORACLE_HOME_NAME=Ora12c_gridinfrahome1
E03=ORACLE_BASE=${ORACLE_BASE}
E04=INVENTORY_LOCATION=${ORACLE_INV}
C01="-O'\"CLUSTER_NODES={$node1,$node2,$node3,$node4}\"'"
C02="-O'\"LOCAL_NODE=${THIS_NODE}\"' CRS=TRUE" 


echo "perl $ORACLE_HOME/clone/bin/clone.pl $E01 $E02 $E03 $E04 $C01 $C02 |tee $SCRIPT_DIR/cloneGridHome_12c.out "


## Execute the perl command that appears on the screen after running the above script.
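
## For illustration only: assuming hypothetical host names rac01 through rac04, with rac01
## as the local node, the echoed command would look roughly like this (your paths and
## node names will differ):

perl /u01/app/12.1.0.2/grid/clone/bin/clone.pl ORACLE_HOME=/u01/app/12.1.0.2/grid \
  ORACLE_HOME_NAME=Ora12c_gridinfrahome1 ORACLE_BASE=/u01/app/oracle \
  INVENTORY_LOCATION=/u01/app/oraInventory \
  -O'"CLUSTER_NODES={rac01,rac02,rac03,rac04}"' \
  -O'"LOCAL_NODE=rac01"' CRS=TRUE |tee /home/oracle/dba/patches/12102/install/cloneGridHome_12c.out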


C03) BEFORE running config.sh, set CRS=false for the 12.1.0.2 home in inventory.xml, as two GI homes cannot have the CRS=true flag at the same time. If this step is not completed, config.sh will fail with "[INS-40406] The installer detects no existing Oracle GI software on the system."

a. Backup inventory.xml on all nodes
cp -p /u01/app/oraInventory/ContentsXML/inventory.xml /u01/app/oraInventory/ContentsXML/inventory.xml.bak

b. Run below command on all nodes:
/u01/app/12.1.0.2/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME="/u01/app/12.1.0.2/grid" CRS=false

PLEASE NOTE: You will receive the following error; it appears to be benign and can be ignored.

 /u01/app/12.1.0.2/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME="/u01/app/12.1.0.2/grid" CRS=false
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 24575 MB    Passed
'UpdateNodeList' failed.
'UpdateNodeList' failed.
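
To confirm the update took effect despite that message, check the HOME entry for the 12.1.0.2 grid home in inventory.xml; it should no longer carry the CRS="true" attribute. A minimal check:

grep '/u01/app/12.1.0.2/grid' /u01/app/oraInventory/ContentsXML/inventory.xml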



C04) BEFORE running config.sh, copy the crsconfig_params file from the SOURCE 12.1.0.2 GI home to the DESTINATION 12.1.0.2 GI home on all nodes. Modify crsconfig_params to reflect the proper values for the destination (cluster name, host names, node names, install_node, etc.); see the sketch below.

If you don't perform this step, config.sh may fail at the cluvfy step with the error "INS-20802 Oracle Cluster Verification Utility failed."

The config.sh log file will show "INFO: Unable to obtain network interface list from Oracle ClusterwarePRCT-1011 : Failed to run "oifcfg". Detailed error: null"
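
A minimal sketch of this step, assuming passwordless SSH between source and destination and the default GI home layout (dest-node1 is a placeholder host name):

## On the SOURCE cluster, locate the file (it lives under crs/install in the GI home)
find /u01/app/12.1.0.2/grid/crs/install -name crsconfig_params

## Copy it to each DESTINATION node's 12.1.0.2 GI home
scp /u01/app/12.1.0.2/grid/crs/install/crsconfig_params \
    dest-node1:/u01/app/12.1.0.2/grid/crs/install/

## On each destination node, edit the copy so values such as CLUSTER_NAME,
## HOST_NAME_LIST, NODE_NAME_LIST and INSTALL_NODE reflect the destination cluster
## (exact parameter names may vary slightly by version)
vi /u01/app/12.1.0.2/grid/crs/install/crsconfig_params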

C05) Run config.sh, choose upgrade option. Run rootupgrade.sh when prompted.

Make sure you DON'T use sudo to run rootupgrade.sh; log in as the real root user to run it. If sudo is used to run rootupgrade.sh, the upgrade may not be successful and the CRS active version may still show 11.2.0.4 even though the stack is running from the 12.1.0.2 GI home.

After rootupgrade.sh has completed and the configuration is done, you can execute cluvfy to validate that the cluster is healthy; see the sketch below. This process was very seamless for me because I was meticulous about following these instructions. The GI stack is the most sensitive part, so I was extremely careful to document these steps clearly.
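
A post-upgrade check along these lines (run as the grid/oracle software owner; node names are placeholders) confirms the active version and overall cluster health:

/u01/app/12.1.0.2/grid/bin/crsctl query crs activeversion
/u01/app/12.1.0.2/grid/bin/crsctl query crs softwareversion

/u01/app/12.1.0.2/grid/bin/cluvfy stage -post crsinst -n node1,node2,node3,node4 -verbose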

D)  Steps for the cloning of the DB HOME.

D01)  Create a source tarball of the 12.1.0.2 DB home binaries.

cd /u01/app/oracle/product/12.1.0.2/dbhome_1
tar -zcvpf /<backup_location>/dbHome.tgz .

D02)  Copy it to all servers in the cluster or place it in a shared filesystem/location; one way to distribute it is sketched below.
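
A sketch of one possible distribution loop (run as oracle, assuming passwordless SSH; node names and /<backup_location> are placeholders):

for node in node2 node3 node4; do
   scp /<backup_location>/dbHome.tgz ${node}:/<backup_location>/
done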

D03)  Make directory for the binaries and script directory.

mkdir -p /u01/app/oracle/product/12.1.0.2/dbhome_1
mkdir -p /home/oracle/dba/patches/12102/install

D04) Change to the DB home directory and extract the tarball.

cd /u01/app/oracle/product/12.1.0.2/dbhome_1
tar -zxvf /<backup_location>/dbHome.tgz

D05) Place the commands below in a small script, update the node names (if there are fewer than 4 nodes, adjust parameter C01 accordingly), and execute it. Then run the command that is displayed as output.

SCRIPT_DIR=/home/oracle/dba/patches/12102/install
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_1
cd $ORACLE_HOME/clone
THIS_NODE=`hostname -s`
node1=
node2=
node3=
node4=

E01=ORACLE_HOME=${ORACLE_HOME}
E02=ORACLE_HOME_NAME=OraDb12c_home1
E03=ORACLE_BASE=${ORACLE_BASE}
C01="-O'\"CLUSTER_NODES={$node1,$node2,$node3,$node4}\"'"
C02="-O LOCAL_NODE=$THIS_NODE"

echo "perl $ORACLE_HOME/clone/bin/clone.pl $E01 $E03 $E02 $C01 $C02 |tee $SCRIPT_DIR/clonedbHome_12c.out "

D06)  Have UNIX admin run root.sh on each node.

/u01/app/oracle/product/12.1.0.2/dbhome_1/root.sh




 

Comments

DBA Maestro said…
@jampani, you have to do a "find" for the crsconfig_params file located under the server that was used as the source to tar up the GI home binaries. Simply copy this file over to the destination server's GI home directory that you are using to untar the GI binaries. Hope this helps.
