Restore OCR and correct Software Patch Level

The software patch level (crsctl query crs softwarepatch) and the active patch level (crsctl query crs activeversion -f) were introduced to ensure that the GI home has identical patches on all nodes.
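
A quick illustration (the values below are taken from the 19c cluster shown later in this post; on a correctly patched cluster, the softwarepatch of every node matches the cluster active patch level):

[root@rac1 ~]# crsctl query crs softwarepatch
Oracle Clusterware patch level on node rac1 is [2701864972]

[root@rac1 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0].
The cluster upgrade state is [NORMAL].
The cluster active patch level is [2701864972].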

It is highly recommended to manually back up the OCR, as root, using ocrconfig -manualbackup before and after applying a patch or making configuration changes (such as adding resources to CRS).
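
A minimal sketch of taking a manual backup and then listing the manual backups registered on the cluster (run as root):

[root@rac1 ~]# ocrconfig -manualbackup
[root@rac1 ~]# ocrconfig -showbackup manual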

When you apply a patch to the GI home, the software patch level is updated on each node. On the last node, the active patch level is updated for the whole cluster.

The patch level is stored in the OCR, so if we restore an old OCR backup, we need to update the patch level inside it using the Oracle-provided commands.

1. Try to find an OCR backup whose patch level matches the cluster active patch level:
[root@rac1 ~]# ocrdump -stdout -keyname SYSTEM.version.activeversion.patchlevel -backupfile +GRID:/marirac/OCRBACKUP/backup_20200425_171539.ocr.266.1038676539
...
[SYSTEM.version.activeversion.patchlevel]
UB4 (10) : 2701864972
...

If the backup file is located on a filesystem, replace +GRID:/marirac/OCRBACKUP/backup_20200425_171539.ocr.266.1038676539 with the full path of the file.

It is also a good idea to keep copies of OCR backups on a filesystem, using the asmcmd cp command.
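
For example, to copy the ASM-resident backup used above to a local directory (the destination /tmp/OCR/... is purely illustrative), run as the grid user:

$ asmcmd cp +GRID/marirac/OCRBACKUP/backup_20200425_171539.ocr.266.1038676539 /tmp/OCR/backup_20200425_171539.ocr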

2. Restore OCR backup

a) Stop CRS on all nodes
b) On one of the nodes, start the cluster in exclusive mode and restore the OCR backup:

# crsctl stop crs -f 
# crsctl start crs -excl -nocrs
# ocrconfig -restore /tmp/OCR/OCR_BACKUP_FILE
# crsctl stop crs -f

3. Correct software and active patch levels

a) Keep CRS down on all nodes
b) Start CRS on the 1st node, run clscfg -patch, and check the softwarepatch:

# crsctl start crs -wait
# clscfg -patch 
# crsctl query crs softwarepatch

c) On the last node only (do not run this on any other node), issue the following command:

# crsctl stop rollingpatch

d) Verify that the active patch level was updated:

# crsctl query crs activeversion -f

Query the cluster active patchlevel in the OCR backup

To identify an OCR backup whose patch level matches the cluster active patch level, run the following.

First, identify the cluster active patch level:

[root@rac1 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. 
The cluster upgrade state is [NORMAL]. 
The cluster active patch level is [2701864972].
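
To see which OCR backups are available to dump, they can be listed first with ocrconfig -showbackup (it shows both automatic and manual backups together with their locations):

[root@rac1 ~]# ocrconfig -showbackup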

If the backup is located on a diskgroup:

[root@rac1 ~]# ocrdump -stdout -keyname SYSTEM.version.activeversion.patchlevel -backupfile +GRID:/marirac/OCRBACKUP/backup_20200425_171539.ocr.266.1038676539
04/26/2020 12:28:20
+GRID:/marirac/OCRBACKUP/backup_20200425_171539.ocr.266.1038676539
/u01/app/19.3.0/grid/bin/ocrdump.bin -stdout -keyname SYSTEM.version.activeversion.patchlevel -backupfile +GRID:/marirac/OCRBACKUP/backup_20200425_171539.ocr.266.1038676539
[SYSTEM.version.activeversion.patchlevel]
UB4 (10) : 2701864972
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : root, GROUP_NAME : root}

If the backup is located on a filesystem:

[root@rac1 ~]# ocrdump -stdout -keyname SYSTEM.version.activeversion.patchlevel -backupfile /tmp/backup_07_13_47_25
04/26/2020 12:26:57
/tmp/backup_07_13_47_25
/u01/app/19.3.0/grid/bin/ocrdump.bin -stdout -keyname SYSTEM.version.activeversion.patchlevel -backupfile /tmp/backup_07_13_47_25
[SYSTEM.version.activeversion.patchlevel]
UB4 (10) : 2701864972
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : root, GROUP_NAME : root}

ORA-15137: The ASM cluster is in rolling patch state, CRS does not start

Problem:

The customer was not able to start the cluster because the ASM disks were offline and could not be brought online while the cluster was in ROLLING patch state.

SQL> ALTER DISKGROUP "GRID" ONLINE REGULAR DISK "RAC2$XVDN"
2020-04-23T17:28:10.851065+08:00
ORA-15032: not all alterations performed
ORA-15137: The ASM cluster is in rolling patch state.

Troubleshooting:

1. Check the cluster activeversion:
[root@rac1 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.2.0.1.0].
The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [2250904419].

2. Check the softwarepatch on each node:

[root@rac1 ~]# crsctl query crs softwarepatch
Oracle Clusterware patch level on node rac1 is [2269302628]

[root@rac2 ~]# crsctl query crs softwarepatch
Oracle Clusterware patch level on node rac2 is [2269302628]

[root@rac3 ~]# crsctl query crs softwarepatch
Oracle Clusterware patch level on node rac3 is [2269302628]

[root@rac4 ~]# crsctl query crs softwarepatch
Oracle Clusterware patch level on node rac4 is [3074559134]

As we can see, the software patch level on rac4 differs from the other nodes. In addition, the active patch level [2250904419] is older; this is normal in ROLLING mode, and once rolling patching is finished this value in the OCR is updated as well.

3. The output of the 2nd step shows that additional troubleshooting of the installed patches is necessary; compare the patches on each node.

The inventory showed the same output on all 4 nodes:

$ $GRID_HOME/OPatch/opatch lspatches
30593149;Database Jan 2020 Release Update : 12.2.0.1.200114 (30593149)
30591794;TOMCAT RELEASE UPDATE 12.2.0.1.0(ID:RELEASE) (30591794)
30586063;ACFS JAN 2020 RELEASE UPDATE 12.2.0.1.200114 (30586063)
30585969;OCW JAN 2020 RELEASE UPDATE 12.2.0.1.200114 (30585969)
26839277;DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)

But after checking kfod op=patches we found that 4 additional patches existed on the rac4 node only. The difference was not visible in the inventory because these patches were inactive, left over from an old RU (they were not rolled back when the superset patches were applied, but they were made inactive).

The output from rac1, rac2 and rac3:

$ $GRID_HOME/bin/kfod op=patches
List of Patches
26710464
26737232
26839277
...
<extracted>

The output from rac4:

$ $GRID_HOME/bin/kfod op=patches
---------------
List of Patches
===============
26710464
26737232
26839277
...
<extracted>
28566910 <<< Bug 28566910: TOMCAT RELEASE UPDATE 12.2.0.1.0
29757449 <<< Bug 29757449: DATABASE JUL 2019 RELEASE UPDATE 12.2.0.1.190716
29770040 <<< Bug 29770040: OCW JUL 2019 RELEASE UPDATE 12.2.0.1.190716
29770090 <<< Bug 29770090: ACFS JUL 2019 RELEASE UPDATE 12.2.0.1.190716

To roll back inactive patches you need to use patchgen; it is not possible with opatch rollback -id.

As root:

# $GRID_HOME/crs/install/rootcrs.sh -prepatch

As grid:

$ $GRID_HOME/bin/patchgen commit -rb 28566910
$ $GRID_HOME/bin/patchgen commit -rb 29757449
$ $GRID_HOME/bin/patchgen commit -rb 29770040
$ $GRID_HOME/bin/patchgen commit -rb 29770090

Validate the patch level on rac4 again; it should now be 2269302628:

$ $GRID_HOME/bin/kfod op=PATCHLVL
-------------------
Current Patch level
===================
2269302628

As root:

# $GRID_HOME/crs/install/rootcrs.sh -postpatch

4. If CRS were up on at least one node, we would be able to clear the rolling patch state using crsctl stop rollingpatch (although, if CRS is up, postpatch should also stop the rolling state). But in our case CRS is down and ASM is not able to access the OCR file (this was caused by another problem, which I consider a bug, but will not describe here, otherwise this post would become too long). We need extra steps to start CRS.

The solution I used here was to restore the OCR from an old backup. I needed an OCR that was not in ROLLING mode, i.e. a NORMAL OCR backup, which was located on the filesystem. I knew that after the restore the OCR active patch level would be lower than the current software patch level on the nodes, but this is solvable (we will see that step as well).

a) Stop CRS on all nodes
b) On one of the nodes, start the cluster in exclusive mode and restore the OCR backup:

# crsctl stop crs -f 
# crsctl start crs -excl -nocrs
# ocrconfig -restore /tmp/OCR/OCR_BACKUP_FILE
# crsctl stop crs -f

c) If you check the softwarepatch at this point, it will not be 2269302628 but some old value (because the OCR is old), so we need to correct it on each node:

# crsctl start crs -wait
# clscfg -patch
# crsctl query crs softwarepatch

The softwarepatch will now be 2269302628 on all nodes.

d) Note that even after correcting the softwarepatch, crsctl query crs activeversion -f still shows the old value, which is normal. To correct it, run the following on the last node only:

# crsctl stop rollingpatch
# crsctl query crs activeversion -f

e) If you added custom services or configuration to CRS after the OCR backup was taken, you need to add them again manually. For example, a service added with srvctl add service must be re-added. That is why it is highly recommended to back up the OCR manually, using ocrconfig -manualbackup, before and after patching or any custom configuration changes.
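
For example, a service that existed before the restore could be re-created like this (the database name orcl, service name app_svc and instance names are purely hypothetical):

$ srvctl add service -db orcl -service app_svc -preferred orcl1 -available orcl2
$ srvctl start service -db orcl -service app_svc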