Rollback RU patches from 12c GI home using opatchauto
June 3, 2019
Junior DBAs will find these steps useful 🙂
Environment details:
Two-node Real Application Cluster.
Database version: 12.2.0.1
Applied RU: 16-04-2019
1. Check existing patches
[grid@rac1 ~]$ /u01/app/12.2.0/grid/OPatch/opatch lspatches
29314424;OCW APR 2019 RELEASE UPDATE 12.2.0.1.190416 (29314424)
29314339;Database Apr 2019 Release Update : 12.2.0.1.190416 (29314339)
29301676;ACFS APR 2019 RELEASE UPDATE 12.2.0.1.190416 (29301676)
28566910;TOMCAT RELEASE UPDATE 12.2.0.1.0(ID:180802.1448.S) (28566910)
26839277;DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)

OPatch succeeded.
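Before rolling anything back, it does not hurt to confirm that the OPatch version in the GI home meets the minimum listed in the RU README. This extra check is my addition, not part of the rollback itself:

# Print the OPatch version shipped in the GI home; compare it
# against the minimum version stated in the RU README
[grid@rac1 ~]$ /u01/app/12.2.0/grid/OPatch/opatch version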
Note that all these patches are part of RU 16-04-2019.
2. Stop all database instances on that node:
# srvctl stop instance -db orclA -i orclA1
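To be sure the instance is really down on this node before the rollback, srvctl can report its state (an extra sanity check on my part; orclA and orclA1 are the names used throughout this post):

# Show the state of every instance of the database across the cluster
$ srvctl status database -db orclA
# Or check only the instance on this node
$ srvctl status instance -db orclA -i orclA1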
3. Download Release Update 16-04-2019 (p29301687_122010_Linux-x86-64.zip), unzip it, and go to the unzipped patch location:
Rolling back all of these patches is easiest when the unzipped Release Update 16-04-2019 is available on the server, because every installed patch is part of it.
If you cannot download the zipped RU, you must instead pass every patch ID on the command line: opatchauto rollback -id 29314424,29314339,29301676,28566910,26839277 (see the sketch after the directory listing below).
Since I have the unzipped RU on rac1, I will proceed as follows:
[root@rac1 ~]# cd /u01/app/sw/29301687
[root@rac1 29301687]# ll
total 132
drwxr-x--- 4 grid oinstall     48 Mar 25 01:09 26839277
drwxr-x--- 4 grid oinstall     48 Mar 25 01:08 28566910
drwxr-x--- 5 grid oinstall     62 Mar 25 01:03 29301676
drwxr-x--- 4 grid oinstall     67 Mar 25 01:08 29314339
drwxr-x--- 5 grid oinstall     62 Mar 25 01:06 29314424
drwxr-x--- 2 grid oinstall   4096 Mar 25 01:03 automation
-rw-rw-r-- 1 grid oinstall   5828 Mar 25 01:29 bundle.xml
-rw-r--r-- 1 grid oinstall 120219 Apr 10 18:07 README.html
-rw-r----- 1 grid oinstall      0 Mar 25 01:03 README.txt
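For completeness, here is a sketch of the by-ID variant mentioned above, for the case when the unzipped RU is not on the server. The patch IDs are exactly the ones reported by opatch lspatches in step 1; I have not run this variant here:

# Roll back by patch ID instead of pointing opatchauto at the unzipped RU
[root@rac1 ~]# /u01/app/12.2.0/grid/OPatch/opatchauto rollback \
  -id 29314424,29314339,29301676,28566910,26839277 \
  -oh /u01/app/12.2.0/grid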
4. Rollback patches using opatchauto:
[root@rac1 29301687]# /u01/app/12.2.0/grid/OPatch/opatchauto rollback -oh /u01/app/12.2.0/grid
….
==Following patches were SUCCESSFULLY rolled back:

Patch: /u01/app/sw/29301687/29314424
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-29_12-56-19PM_1.log

Patch: /u01/app/sw/29301687/29301676
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-29_12-56-19PM_1.log

Patch: /u01/app/sw/29301687/26839277
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-29_12-56-19PM_1.log

Patch: /u01/app/sw/29301687/28566910
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-29_12-56-19PM_1.log

Patch: /u01/app/sw/29301687/29314339
Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-29_12-56-19PM_1.log
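opatchauto stops and restarts the GI stack on the node while it works, so before moving on you may want to confirm that the clusterware came back and that the patches are really gone. These checks are optional additions of mine:

# Verify the clusterware stack is healthy on rac1
[root@rac1 ~]# /u01/app/12.2.0/grid/bin/crsctl check crs
# Verify the GI home inventory no longer lists the RU patches
[grid@rac1 ~]$ /u01/app/12.2.0/grid/OPatch/opatch lspatches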
5. Start database instance on the first node and shut down on the second:
# srvctl start instance -db orclA -i orclA1
# srvctl stop instance -db orclA -i orclA2
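Before touching the second node you can confirm the instances are in the expected state, orclA1 up and orclA2 down (again, an optional check I would add):

# The verbose output shows the state of each instance
$ srvctl status database -db orclA -v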
6. Connect to the second node and repeat the same steps:
[root@rac2 ~]# cd /u01/app/sw/29301687
[root@rac2 29301687]# /u01/app/12.2.0/grid/OPatch/opatchauto rollback -oh /u01/app/12.2.0/grid
7. Start database instance on rac2
# srvctl start instance -db orclA -i orclA2
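If you want a wider view than a single database, the state of every cluster resource on both nodes can be listed as well (optional, not part of the original steps):

# Tabular status of all cluster resources across rac1 and rac2
[root@rac2 ~]# /u01/app/12.2.0/grid/bin/crsctl stat res -t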
8. Check inventory
$ /u01/app/12.2.0/grid/OPatch/opatch lspatches
There are no Interim patches installed in this Oracle Home "/u01/app/12.2.0/grid".

OPatch succeeded.