PRVG-11069 : IP address “169.254.0.2” of network interface “idrac” on the node “primrac1” would conflict with HAIP usage

Problem:

Oracle 18c GI configuration precheck was failing with the following error:

Summary of node specific errors 

primrac2  - PRVG-11069 : IP address "169.254.0.2" of network interface "idrac" on the node "primrac2" would conflict with HAIP usage.  
- Cause:  One or more network interfaces have IP addresses in the range (169.254.*.*), the range used by HAIP which can create routing conflicts.
- Action:  Make sure there are no IP addresses in the range (169.254.*.*) on any network interfaces.

primrac1  - PRVG-11069 : IP address "169.254.0.2" of network interface "idrac" on the node "primrac1" would conflict with HAIP usage. 
- Cause:  One or more network interfaces have IP addresses in the range (169.254.*.*), the range used by HAIP which can create routing conflicts.
- Action:  Make sure there are no IP addresses in the range (169.254.*.*) on any network interfaces.

On each node an additional network interface named idrac was up with the IP address 169.254.0.2. I tried to set a static IP address in /etc/sysconfig/network-scripts/ifcfg-idrac and also tried to bring the interface down, but after some time the interface came up again automatically and got the same IP address.
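For reference, the attempts looked roughly like this (a sketch from memory, not a copy of my shell history):

# ifdown idrac
# ip addr show idrac      # after a while the interface was back up with 169.254.0.2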

The cluster nodes were Dell servers with the Integrated Dell Remote Access Controller (iDRAC) Service Module installed. More information about installing and uninstalling this module can be found here: https://topics-cdn.dell.com/pdf/idrac-service-module-v32_users-guide_en-us.pdf

The servers had been configured by the system administrators and it was not clear why this module was there. We were not using the iDRAC Service Module, so the only option we had was to remove/uninstall it. (Configuring the module to avoid this situation should also be possible, but we keep our servers as clean as possible, without unused services.)

Solution:

We uninstalled the iDRAC Service Module (also explained in the PDF above):

# rpm -e dcism 

After uninstalling it, the idrac interface did not start anymore, so we could continue with the GI configuration.
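A quick way to confirm that the module is gone and that the interface no longer exists (simple checks, not from the original notes):

# rpm -q dcism
package dcism is not installed
# ip addr show idrac
Device "idrac" does not exist.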

PRVF-6402 : Core file name pattern is not same on all the nodes

Problem:

Oracle 18c GI configuration prerequisite checks failed with the following error:

PRVF-6402 : Core file name pattern is not same on all the nodes. Found core filename pattern "core" on nodes "primrac1". Found core filename pattern "core.%p" on nodes "primrac2".  
- Cause:  The core file name pattern is not same on all the nodes.  
- Action:  Ensure that the mechanism for core file naming works consistently on all the nodes. Typically for Linux, the elements to look into are the contents of two files /proc/sys/kernel/core_pattern or /proc/sys/kernel/core_uses_pid. Refer OS vendor documentation for platforms AIX, HP-UX, and Solaris.
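The action text points to two files. A quick way to compare both of them on all nodes at once (just a sketch, assuming passwordless ssh between the nodes) is:

# for h in primrac1 primrac2; do echo "== $h"; ssh $h sysctl kernel.core_pattern kernel.core_uses_pid; done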

Comparing parameter values on both nodes:

[root@primrac1 ~]# cat /proc/sys/kernel/core_uses_pid
0
[root@primrac2 ~]# cat /proc/sys/kernel/core_uses_pid
1 

[root@primrac1 ~]# sysctl -a|grep core_uses_pid
kernel.core_uses_pid = 0

[root@primrac2 ~]# sysctl -a|grep core_uses_pid
kernel.core_uses_pid = 1

The strange thing was that this parameter was not explicitly defined in the sysctl.conf file on either node, yet the nodes still had different values:

[root@primrac1 ~]# cat /etc/sysctl.conf |grep core_uses_pid
[root@primrac2 ~]# cat /etc/sysctl.conf |grep core_uses_pid 

Solution:

I set the parameter to 1 explicitly in sysctl.conf on both nodes:

[root@primrac1 ~]# cat /etc/sysctl.conf |grep core_uses_pid
kernel.core_uses_pid=1 

[root@primrac2 ~]# cat /etc/sysctl.conf |grep core_uses_pid
kernel.core_uses_pid=1

[root@primrac1 ~]# sysctl -p 
[root@primrac2 ~]# sysctl -p

[root@primrac1 ~]# sysctl -a|grep core_uses_pid 
kernel.core_uses_pid = 1

[root@primrac2 ~]# sysctl -a|grep core_uses_pid 
kernel.core_uses_pid = 1

I pressed the Check Again button and the GI configuration prerequisite checks succeeded.

LGWR: Primary database is in MAXIMUM AVAILABILITY mode | ORA-16072: a minimum of one standby database destination is required

Problem:

One of our customers cloned a database from a Data Guard environment to a different host and tried to open it as a standalone database. The controlfile and datafiles still considered the database to be in MAXIMUM AVAILABILITY mode.

Errors after trying to open the database:

LGWR: Primary database is in MAXIMUM AVAILABILITY mode
LGWR: Destination LOG_ARCHIVE_DEST_1 is not serviced by LGWR
LGWR: Minimum of 1 LGWR standby database required
Thu Jul 18 18:43:14 2019
Errors in file /u01/app/oracle/diag/rdbms/orcl/orcl2/trace/orcl2_lgwr_39735_39805.trc:
ORA-16072: a minimum of one standby database destination is required

Solution:

SQL> startup mount;
SQL> alter database set standby database to maximize performance;
SQL> shutdown immediate;

$ srvctl start database -d orcl 
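To confirm the change, the current protection mode can be checked from v$database (a verification step I would add here; it was not part of the original notes):

SQL> select protection_mode, protection_level from v$database;

Both columns should now show MAXIMUM PERFORMANCE.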

UDEV rules for configuring ASM disks

Problem:

During my previous installations I used the following udev rule on multipath devices:

KERNEL=="dm-[0-9]*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="360050768028200a9a40000000000001c", NAME="oracleasm/asm-disk1", OWNER="oracle", GROUP="asmadmin", MODE="0660"

So to identify the exact disk, I used the PROGRAM option. udev runs that command against each /dev/dm-* device and, if the output matches the RESULT value, for example:

# scsi_id -gud /dev/dm-3
360050768028200a9a40000000000001c 

then the device name is changed to /dev/oracleasm/asm-disk1, the owner and group are set to oracle:asmadmin, and the permissions to 0660.

But on my new servers the same udev rule was not working anymore. (Of course, this needs more investigation, but our time is valuable and never enough; if we know another solution that works and is acceptable, let's just use it.)

Solution:

I used the udevadm command to identify other properties of these devices and wrote a new udev rule (to see all properties, just remove the grep):

# udevadm info --query=property --name /dev/mapper/asm1 | grep DM_UUID
DM_UUID=mpath-360050768028200a9a40000000000001c
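For reference, the mapping between a multipath alias such as asm1 and its dm-N device can be seen from the /dev/mapper symlinks, each of which points to the corresponding ../dm-N device:

# ls -l /dev/mapper/ | grep asm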

New udev rule looks like this:

# cat /etc/udev/rules.d/99-oracle-asmdevices.rules
ENV{DM_UUID}=="mpath-360050768028200a9a40000000000001c",  SUBSYSTEM=="block", NAME="oracleasm/asm-disk1", OWNER="grid", GROUP="asmadmin", MODE="0660"

Trigger udev rules:

# udevadm trigger
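If you are editing an existing rules file, it may also be necessary to reload the rules before triggering (standard udevadm options):

# udevadm control --reload-rules
# udevadm trigger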

Verify that name, owner, group and permissions are changed:

# ll /dev/oracleasm/
total 0
brw-rw---- 1 grid asmadmin 253, 3 Jul 17 17:33 asm-disk1
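If the device does not show up under /dev/oracleasm, udevadm test can show which rules were applied to a given device (dm-3 here is just an example path):

# udevadm test /sys/block/dm-3 2>&1 | grep -i oracleasm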

TNS-12518: TNS:listener could not hand off client connection | TNS-12547: TNS:lost contact

Problem:

In a two-node cluster, clients were not able to connect to the second node, while connections to the first node were successful.

A connection attempt from SQL Developer threw the error: Status: Failure - Test failed: IO Error: Got minus one from a read call, connect lapse 16ms, Authentication lapse 0ms

A connection attempt from sqlplus using a TNS connect string showed:

[oracle@rac02 ~]$ sqlplus "sys@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=rac02.example.com)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=orcl)))" as sysdba

ORA-12537: TNS:connection closed

Listener.log showed:

 2019-07-18T11:19:23.568231+00:00
 TNS-12518: TNS:listener could not hand off client connection
  TNS-12547: TNS:lost contact
   TNS-12560: TNS:protocol adapter error
    TNS-00517: Lost contact
     Linux Error: 32: Broken pipe

Solution:

This problem can happen in other cases as well (for example, entries in sqlnet.ora, which in our case were fine), and we could have suspected a network problem, because initially we were trying to connect remotely from the application server and from SQL Developer. But after getting the ORA-12537: TNS:connection closed error while trying to connect via sqlplus from the local server, we could only think of a local, non-network-related problem.

The reason for this problem was that the setuid bit was not set on the /u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle binary:

Problematic node:

[root@rac02 ~]# ll /u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle
 -rwxr-s--x 1 oracle asmadmin 408607040 Apr  4 19:51 /u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle

Healthy node:

[oracle@rac01 ~]$ ll /u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle
-rwsr-s--x 1 oracle asmadmin 408607040 Apr  4 19:48 /u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle

We set the setuid bit on the oracle binary in the RDBMS home:

[root@rac02 ~]# chmod u+s /u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle

[root@rac02 ~]# ll /u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle
-rwsr-s--x 1 oracle asmadmin 408607040 Apr  4 19:51 /u01/app/oracle/product/12.2.0/dbhome_1/bin/oracle

The problem was resolved without restarting the database instance, so clients were able to connect to the second node. But because this was a staging cluster, I still restarted the database, just to make sure it was started with the correct binary.