OPatchauto fails: CLSRSC-180: An error occurred while executing the command ‘/bin/rpm -qf /sbin/init’

Problem:

While applying an ACFS patch on top of the GI home, I received the following error:

Command failure output:
...
2024/03/08 19:31:03 CLSRSC-180: An error occurred while executing the command '/bin/rpm -qf /sbin/init'

After fixing the cause of failure Run opatchauto resume

]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Fri Mar 8 19:31:04 2024
Time taken to complete the session 3 minutes, 38 seconds

opatchauto failed with error code 42

Troubleshooting:

I attempted to manually execute the command that failed, and it returned a helpful error message:

[root@rac1 tmp]# /bin/rpm -qf /sbin/init
error: rpmdb: BDB0113 Thread/process 5003/139974823143296 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 - (-30973)
error: cannot open Packages database in /var/lib/rpm
...

Solution:

I have already covered this type of error in another post; let’s solve it again by rebuilding the RPM database:

[root@rac1 tmp]# rpm --rebuilddb

Rerun the failing command to make sure it was resolved:

[root@rac1 tmp]# /bin/rpm -qf /sbin/init
systemd-239-78.0.3.el8.x86_64

Since the error was encountered during patching, you can now resume opatchauto:

[root@rac1 tmp]# /u01/app/19.3.0/grid/OPatch/opatchauto resume

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:rac1
CRS Home:/u01/app/19.3.0/grid
Version:19.0.0.0.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /tmp/36114443/36114443
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2024-03-08_19-29-48PM_1.log

Good luck, as always!

OPATCHAUTO-72050: System instance creation failed, cluvfy fails Verifying ‘/tmp/’ …FAILED (PRVF-7546)

Problem:

While running cluvfy we get the following error:

[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
...
Failures were encountered during execution of CVU verification request "stage -pre crsinst".
...
Verifying '/tmp/' ...FAILED
Cannot run program "/usr/bin/scp": error=13, Permission denied

You should not continue patching when runcluvfy fails; it is strongly recommended to resolve all failed checks and only then run opatchauto. If you insist on patching anyway, you will get the following:

[root@rac1 33509923]# /u01/app/19.3.0/grid/OPatch/opatchauto apply /u01/app/patchinstall/33509923 -oh /u01/app/19.3.0/grid
OPatchauto session is initiated at Sun Mar 20 06:27:17 2022

System initialization log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchautodb/systemconfig2022-03-20_06-27-42AM.log.
...
OPATCHAUTO-72050: System instance creation failed.
OPATCHAUTO-72050: Failed while retrieving system information.
OPATCHAUTO-72050: Please check log file for more details.

Reason:

In our case the issue was that /usr/bin/scp did not have the correct permissions: it had 600 while it should have had 755. Why this happened we don’t really know; it should not happen.

Solution:

Set the correct permissions on the scp binary on both nodes:

# chmod 755 /bin/scp
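
A quick check on both nodes to confirm the change; the permissions should now show rwxr-xr-x:

# ls -l /usr/bin/scp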

Retry patching; in our case it completed successfully.
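
For reference, the retry is the same apply command as before:

[root@rac1 33509923]# /u01/app/19.3.0/grid/OPatch/opatchauto apply /u01/app/patchinstall/33509923 -oh /u01/app/19.3.0/grid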

Apply GI patch 19.14 (33509923) on GI 19.3

Current environment:

$ $ORACLE_HOME/OPatch/opatch lspatches
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517247;ACFS RELEASE UPDATE 19.3.0.0.0 (29517247)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)
29401763;TOMCAT RELEASE UPDATE 19.0.0.0.0 (29401763)

1. You must use the OPatch utility version 12.2.0.1.28 or later to apply this patch.

Download p6880880_190000_Linux-x86-64.zip – OPatch 12.2.0.1.28 for DB 19.0.0.0.0 (Nov 2021), or later.

Replace the existing OPatch with the new one:

# export ORACLE_HOME=/u01/app/19c/grid
# rm -rf $ORACLE_HOME/OPatch
# su - grid 
$ export ORACLE_HOME=/u01/app/19c/grid
$ unzip -o /u01/swtmp/p6880880_190000_Linux-x86-64.zip -d $ORACLE_HOME

$ $ORACLE_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.28

2. Unzip the patch 33509923 as the Grid home owner:

# su - grid
$ cd /u01/swtmp
$ unzip p33509923_190000_Linux-x86-64.zip

3. Determine whether any currently installed one-off patches conflict with this patch 33509923 as follows:

# su - grid

$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/swtmp/33509923/33515361

Oracle Home       : /u01/app/19c/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/19c/grid/oraInst.loc
OPatch version    : 12.2.0.1.28
OUI version       : 12.2.0.7.0
Log file location : /u01/app/19c/grid/cfgtoollogs/opatch/opatch2022-01-31_11-58-55AM_1.log

Invoking prereq "checkconflictagainstohwithdetail"
Prereq "checkConflictAgainstOHWithDetail" passed.
OPatch succeeded.

$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/swtmp/33509923/33529556

$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/swtmp/33509923/33534448

$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/swtmp/33509923/33239955

$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/swtmp/33509923/33575402

If a conflict is detected, stop the patch installation and contact Oracle Support Services; otherwise, continue.

4. Stop database instances running on that server.
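
For example, with srvctl (the database and instance names below are placeholders; adjust them to your environment):

# srvctl stop instance -db orcl -i orcl1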

5. Apply patch using opatchauto:

# cd /u01/swtmp/33509923/
# export ORACLE_HOME=/u01/app/19c/grid/
# export PATH=$ORACLE_HOME/bin:$PATH
# $ORACLE_HOME/OPatch/opatchauto apply
...
==Following patches were SUCCESSFULLY applied:

Patch: /u01/swtmp/33509923/33239955
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2022-02-03_11-33-44AM_1.log

Patch: /u01/swtmp/33509923/33515361
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2022-02-03_11-33-44AM_1.log

Patch: /u01/swtmp/33509923/33529556
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2022-02-03_11-33-44AM_1.log

Patch: /u01/swtmp/33509923/33534448
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2022-02-03_11-33-44AM_1.log

Patch: /u01/swtmp/33509923/33575402
Log: /u01/app/19c/grid/cfgtoollogs/opatchauto/core/opatch/opatch2022-02-03_11-33-44AM_1.log

Check the current version:

$ $ORACLE_HOME/OPatch/opatch lspatches
33575402;DBWLM RELEASE UPDATE 19.0.0.0.0 (33575402)
33534448;ACFS RELEASE UPDATE 19.14.0.0.0 (33534448)
33529556;OCW RELEASE UPDATE 19.14.0.0.0 (33529556)
33515361;Database Release Update : 19.14.0.0.220118 (33515361)
33239955;TOMCAT RELEASE UPDATE 19.0.0.0.0 (33239955)

6. After finishing the patch, it is recommended to back up the Clusterware components.
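
For example, a manual OCR backup (and an OLR backup with the -local flag) can be taken as root from the GI home; this is just one illustrative approach:

# /u01/app/19c/grid/bin/ocrconfig -manualbackup
# /u01/app/19c/grid/bin/ocrconfig -local -manualbackup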

The home is not clean. This home cannot be used since there was a failed OPatch execution in this home. Use a different home to proceed

Problem:

While installing Oracle software using applyRU:

$ /u01/app/oracle/product/19.3.0/dbhome_1/runInstaller -silent -applyRU /u01/swtmp/32895426/32904851/ -responseFile /opt/rsp/db_19.3_EE_swonlyinstall.rsp -ignorePrereqFailure -waitforcompletion

Got an error:

ERROR: The home is not clean. This home cannot be used since there was a failed OPatch execution in this home. Use a different home to proceed.

Solution:

If a previous attempt to install Oracle software using the -applyRU or -applyOneOffs option failed, the home becomes unusable. Before retrying, you need to clean up the home first:

$ rm -rf /u01/app/oracle/product/19.3.0/dbhome_1/*

$ unzip -o /u01/swtmp/LINUX.X64_193000_db_home.zip -d /u01/app/oracle/product/19.3.0/dbhome_1

As a best practice I always use the latest OPatch, so it needs to be unzipped again:

$ unzip -o /u01/swtmp/p6880880_190000_Linux-x86-64.zip -d /u01/app/oracle/product/19.3.0/dbhome_1
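
Optionally, confirm the OPatch version that will be used:

$ /u01/app/oracle/product/19.3.0/dbhome_1/OPatch/opatch version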

Rerun the command:

$ /u01/app/oracle/product/19.3.0/dbhome_1/runInstaller -silent -applyRU /u01/swtmp/32895426/32904851/ -responseFile /opt/rsp/db_19.3_EE_swonlyinstall.rsp -ignorePrereqFailure -waitforcompletion
Preparing the home to patch...
Applying the patch /u01/swtmp/32895426/32904851/...

OPATCHAUTO-72115: Please complete the previous apply session across all nodes to perform apply session

Problem:

My opatchauto out-of-place patching failed on the GI home. I was able to clean up the cloned GI home and the information about it in inventory.xml, but after running opatchauto again I was getting the following error:

[root@rac1 29708703]# $ORACLE_HOME/OPatch/opatchauto apply -oh $ORACLE_HOME -outofplace
OPatchauto session is initiated at Sun Aug 18 20:40:43 2019
System initialization log file is /u01/app/18.3.0/grid/cfgtoollogs/opatchautodb/systemconfig2019-08-18_08-40-46PM.log.
Session log file is /u01/app/18.3.0/grid/cfgtoollogs/opatchauto/opatchauto2019-08-18_08-42-20PM.log
The id for this session is Z1CP
OPATCHAUTO-72115: Out of place patching apply session cannot be performed.
OPATCHAUTO-72115: Previous apply session  is not completed on node rac1.
OPATCHAUTO-72115: Please complete the previous apply  session across all nodes to perform apply session.
OPatchAuto failed.

Solution:

Clear the checkpoint files from the previous session:

[root@rac1 29708703]# cd /u01/app/18.3.0/grid/.opatchauto_storage/rac1
[root@rac1 rac1]# ls
oopsessioninfo.ser
[root@rac1 rac1]# rm -rf oopsessioninfo.ser 

Rerun opatchauto apply.
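
That is, the same out-of-place apply command as before:

[root@rac1 29708703]# $ORACLE_HOME/OPatch/opatchauto apply -oh $ORACLE_HOME -outofplace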

Can’t call method “uid” on an undefined value at …DBUtilServices.pm line 28.

Problem:

opatchauto on GI fails with the following error:

# /u01/app/18.0.0/grid/OPatch/opatchauto apply /0/grid/29301682  -oh /u01/app/18.0.0/grid
 Can't call method "uid" on an undefined value at /u01/app/18.0.0/grid/OPatch/auto/database/bin/module/DBUtilServices.pm line 28.

Reason:

  1. GI is not set up yet. You may have unzipped the GI installation file, but have not run gridSetup.sh.
  2. $GI_HOME/oraInst.loc is missing.

Solution:

  1. Set up GI by running gridSetup.sh.
  2. Copy the oraInst.loc from the other node (a sample scp command is shown below); if you don’t have another node, the file content is as follows:
# cat /u01/app/18.0.0/grid/oraInst.loc
inst_group=oinstall
inventory_loc=/u01/app/oraInventory
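
A sample copy from the other node (rac2 here is a placeholder for your second node’s hostname):

# scp rac2:/u01/app/18.0.0/grid/oraInst.loc /u01/app/18.0.0/grid/oraInst.loc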

19c GI & 12c RDBMS opatchauto: Re-link fails on target “procob”

Problem:

Environment: 19c GI | 12c RDBMS | RHEL 7.6

While applying Apr 2019 RU on top of the RDBMS home, opatchauto failed:

# /u01/app/oracle/product/12.2.0/dbhome_1/OPatch/opatchauto apply /u01/swtmp/29314339/ -oh /u01/app/oracle/product/12.2.0/dbhome_1 

==Following patches FAILED in apply:
Patch: /u01/swtmp/29314339
Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2019-06-24_18-51-22PM_1.log
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: Re-link fails on target "procob".
Re-link fails on target "proc". 

We tried using opatch instead of opatchauto, and it succeeded:

$ /u01/app/oracle/product/12.2.0/dbhome_1/OPatch/opatch apply

Patch 29314339 successfully applied.
OPatch succeeded.

The problem is definitely related to opatchauto. We’ve opened an SR with Oracle, and I will update this post as soon as I get the solution from them. Until then, I want to share three workarounds for this problem with you.

Workarounds:

1. The first workaround, as you have already guessed, is to use opatch instead of opatchauto.

$ /u01/app/oracle/product/12.2.0/dbhome_1/OPatch/opatch apply

2. The second workaround is to edit the actions.xml file under the PSU (e.g. /u01/swtmp/29314339/etc/config/actions.xml) and remove the following entries (see Doc ID 2056670.1):

<oracle.precomp.lang opt_req="O" version="12.2.0.1.0">
<make change_dir="%ORACLE_HOME%/precomp/lib" make_file="ins_precomp.mk" make_target="procob"/>
</oracle.precomp.lang>
  
<oracle.precomp.common opt_req="O" version="12.2.0.1.0">
<make change_dir="%ORACLE_HOME%/precomp/lib" make_file="ins_precomp.mk" make_target="proc"/>
</oracle.precomp.common> 

Retry opatchauto.
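
That is, rerun the same apply command as before:

# /u01/app/oracle/product/12.2.0/dbhome_1/OPatch/opatchauto apply /u01/swtmp/29314339/ -oh /u01/app/oracle/product/12.2.0/dbhome_1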

3. The third workaround is to back up the libons.so file in the GI home and then copy the one from the RDBMS home in its place (you should restore the original after opatchauto succeeds).

# mv  /u01/app/19.3.0/grid/lib/libons.so  /u01/app/19.3.0/grid/lib/libons.so_backup 
# cp  /u01/app/oracle/product/12.2.0/dbhome_1/lib/libons.so  /u01/app/19.3.0/grid/lib/libons.so  

The reason I used this workaround is that I found opatchauto was using the libons.so file from the GI home instead of the RDBMS home:

[WARNING]OUI-67200:Make failed to invoke "
/usr/bin/make -f ins_precomp.mk proc ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1"….
'/u01/app/19.3.0/grid/lib/libons.so: undefined reference to `memcpy@GLIBC_2.14'                         

If we compare the libons.so files between the GI and RDBMS homes, we will find that they are not the same. The problem is that opatchauto uses the file from the wrong location. Since it is not easy to force opatchauto to use libons.so from the correct location (we are working on this with Oracle Support), this workaround can also be considered.

Please keep in mind that you must restore the GI libons.so after opatchauto succeeds. We don’t know what happens if a 12c libons.so file is left under a 19c GI home.
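
For example, restoring the original GI libons.so simply reverses the earlier backup step:

# mv /u01/app/19.3.0/grid/lib/libons.so_backup /u01/app/19.3.0/grid/lib/libons.so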

Roll back RU patches from a 12c GI home using opatchauto

Junior DBAs will find these steps useful 🙂

Environment details:

Two-node Real Application Cluster.
Database version: 12.2.0.1
Applied RU: 16-04-2019

1. Check the existing patches:

[grid@rac1 ~]$  /u01/app/12.2.0/grid/OPatch/opatch lspatches
29314424;OCW APR 2019 RELEASE UPDATE 12.2.0.1.190416 (29314424)
29314339;Database Apr 2019 Release Update : 12.2.0.1.190416 (29314339)
29301676;ACFS APR 2019 RELEASE UPDATE 12.2.0.1.190416 (29301676)
28566910;TOMCAT RELEASE UPDATE 12.2.0.1.0(ID:180802.1448.S) (28566910)
26839277;DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)
OPatch succeeded.

Note that all these patches are part of RU 16-04-2019.

2. Stop all database instances on that node:

# srvctl stop instance -db orclA -i orclA1

3. Download Release Update 16-04-2019 (p29301687_122010_Linux-x86-64.zip), unzip it, and go to the unzipped patch location:

To roll back all these patches it is easier to have the unzipped Release Update 16-04-2019 patch on the server (all of the existing patches are part of it).

If you cannot download the zipped RU, then you need to list all the patch IDs explicitly with opatchauto rollback -id 29314424,29314339,29301676,28566910,26839277.
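
In that case the command would look something like this (illustrative; verify the exact syntax with opatchauto -help for your OPatch version):

[root@rac1 ~]# /u01/app/12.2.0/grid/OPatch/opatchauto rollback -id 29314424,29314339,29301676,28566910,26839277 -oh /u01/app/12.2.0/grid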

Since I have the unzipped RU on rac1, I will do it the following way:

[root@rac1 ~]# cd /u01/app/sw/29301687

[root@rac1 29301687]# ll
 total 132
 drwxr-x--- 4 grid oinstall     48 Mar 25 01:09 26839277
 drwxr-x--- 4 grid oinstall     48 Mar 25 01:08 28566910
 drwxr-x--- 5 grid oinstall     62 Mar 25 01:03 29301676
 drwxr-x--- 4 grid oinstall     67 Mar 25 01:08 29314339
 drwxr-x--- 5 grid oinstall     62 Mar 25 01:06 29314424
 drwxr-x--- 2 grid oinstall   4096 Mar 25 01:03 automation
 -rw-rw-r-- 1 grid oinstall   5828 Mar 25 01:29 bundle.xml
 -rw-r--r-- 1 grid oinstall 120219 Apr 10 18:07 README.html
 -rw-r----- 1 grid oinstall      0 Mar 25 01:03 README.txt

4. Roll back the patches using opatchauto:

[root@rac1 29301687]# /u01/app/12.2.0/grid/OPatch/opatchauto rollback -oh /u01/app/12.2.0/grid
 ….
 ==Following patches were SUCCESSFULLY rolled back:
 Patch: /u01/app/sw/29301687/29314424
 Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-29_12-56-19PM_1.log
 Patch: /u01/app/sw/29301687/29301676
 Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-29_12-56-19PM_1.log
 Patch: /u01/app/sw/29301687/26839277
 Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-29_12-56-19PM_1.log
 Patch: /u01/app/sw/29301687/28566910
 Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-29_12-56-19PM_1.log
 Patch: /u01/app/sw/29301687/29314339
 Log: /u01/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-29_12-56-19PM_1.log

5. Start the database instance on the first node and shut it down on the second:

# srvctl start instance -db orclA -i orclA1
# srvctl stop instance -db orclA -i orclA2

6. Connect to the second node and repeat the same steps:

[root@rac2 ~]# cd /u01/app/sw/29301687

[root@rac2 29301687]# /u01/app/12.2.0/grid/OPatch/opatchauto rollback -oh /u01/app/12.2.0/grid

7. Start the database instance on rac2:

# srvctl start instance -db orclA -i orclA2

8. Check the inventory:

$  /u01/app/12.2.0/grid/OPatch/opatch lspatches

There are no Interim patches installed in this Oracle Home "/u01/app/12.2.0/grid".
 OPatch succeeded.