Platform IDs supported by patch are: 46. Platform ID needed is : 226

Problem:

Opatch apply failed with the following error:

Platform IDs supported by patch are: 46 Patch ( 24006111 ) is not applicable on current platform.
Platform ID needed is : 226

Reason:

If you are not familiar with Platform IDs, the above error simply means: “the downloaded patch is for Linux x86, while Linux x86-64 is required”.

Solution:

Download and apply the patch for Linux x86-64.
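
As a quick sanity check before downloading, you can confirm the server's architecture; x86_64 corresponds to the Linux x86-64 platform (ARU ID 226):

# uname -m
x86_64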

Additional information:

The full list of Platform IDs is shown below:

Platform                                 ARU_ID
Apple Mac OS X (PowerPC)                 421
HP Tru64 UNIX                            87
HP-UX Itanium                            197
HP-UX Itanium (32-bit)                   278
HP-UX PA-RISC (32-bit)                   2
HP-UX PA-RISC (64-bit)                   59
IBM AIX on POWER Systems (32-bit)        319
IBM AIX on POWER Systems (64-bit)        212
IBM i on POWER Systems                   43
IBM S/390-based Linux (31-bit)           211
IBM z/OS on System z                     30
IBM: Linux on POWER Systems              227
IBM: Linux on System z                   209
Linux Itanium                            214
Linux x86                                46
Linux x86-64                             226
Microsoft Windows (32-bit)               912
Microsoft Windows Itanium (64-bit)       208
Microsoft Windows x64 (64-bit)           233
Sun Solaris SPARC (32-bit)               453
Sun Solaris SPARC (64-bit)               23
Sun Solaris x86 (32-bit)                 173
Sun Solaris x86-64 (64-bit)              267
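
To see which platform ID your existing Oracle home was built for, opatch lsinventory prints it near the top of its output (a hedged example; the exact wording may vary between OPatch versions):

$ $ORACLE_HOME/OPatch/opatch lsinventory | grep -i "ARU platform"
ARU platform id: 226
ARU platform description: Linux x86-64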

ACFS Input/output error on OS kernel 3.10.0-957

Problem:

My ACFS volume has intermittent problems: when I run the ls command I get ls: cannot access sm: Input/output error, and the user/group ownership shows question marks.

[root@primrac1 GG_HOME]# ll
ls: cannot access ma: Input/output error
ls: cannot access sm: Input/output error
ls: cannot access deploy: Input/output error
total 64
d????????? ? ? ? ? ? deploy
drwx------ 2 oracle oinstall 65536 Sep 28 21:05 lost+found
d????????? ? ? ? ? ? ma
d????????? ? ? ? ? ? sm

Reason:

Doc ID 2561145.1 mentions that kernel 3.10.0-957 changed the behaviour of the d_splice_alias interface, which is used by the ACFS driver.

My kernel is also the same:

[root@primrac1 GG_HOME]# uname -r
3.10.0-957.21.3.el7.x86_64

Solution:

Download and apply patch 29963428 on the GI home.

[root@primrac1 29963428]# /u01/app/19.3.0/grid/OPatch/opatchauto apply -oh /u01/app/19.3.0/grid
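
Once the patch is applied, you can verify that the ACFS drivers support the running kernel (a quick sanity check; the GI home path matches the one above):

[root@primrac1 ~]# /u01/app/19.3.0/grid/bin/acfsdriverstate supported
[root@primrac1 ~]# /u01/app/19.3.0/grid/bin/acfsdriverstate version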

JDBC 11g: SQLException(“Io exception: Connection reset”)

Problem:

Connections using the 11g ojdbc driver were very slow and most of the time failed with a Connection reset error after 60 seconds (the default inbound connection timeout). The database alert log contained WARNING: inbound connection timed out (ORA-3136) errors.

Reason:

Oracle 11g JDBC drivers use random numbers during authentication. Those random numbers are generated by the OS from /dev/random, and if the hardware is faulty or slow, or there is little activity on the system, this generation can block, which makes the JDBC connection slow.
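
You can check how much entropy the kernel currently has available; a persistently low value means that reads from /dev/random will block (the number below is only an illustration):

# cat /proc/sys/kernel/random/entropy_avail
180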

Solution:

Instead of /dev/random, point Java to the non-blocking /dev/urandom via a command-line argument:

# java -Djava.security.egd=file:/dev/../dev/urandom -cp ojdbc8.jar:. JDBCTest "stbyrac-scan.example.com"
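
Alternatively, the same behaviour can be made permanent for every Java program by setting the securerandom.source property in the JDK's java.security file (a hedged sketch; the file location depends on your JDK version and layout):

# grep '^securerandom.source' $JAVA_HOME/jre/lib/security/java.security
securerandom.source=file:/dev/../dev/urandom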

Simple java code to test connection to Oracle database

This assumes that the client computer does not have an Oracle client installed.

1. Download the necessary version of the ojdbc jar file from Oracle. At the time of writing, the latest version is ojdbc8.jar.

2. Install java development tools:

# yum install java-devel -y

3. Create a sample java code, which:
– connects to the database
– selects 1 from dual
– disconnects

# cat JDBCTest.java
import java.sql.*;

class JDBCTest {
    public static void main(String[] args) throws SQLException {
        Connection con = null;
        try {
            // Load the Oracle JDBC driver
            Class.forName("oracle.jdbc.driver.OracleDriver");
            // Build the connect descriptor; the host/SCAN name is passed as the first argument
            String dbURL = "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=" + args[0] + ")(PORT=1521))(CONNECT_DATA=(SERVER = DEDICATED)(SERVICE_NAME=orclgg)))";
            System.out.println("jdbcurl=" + dbURL);
            String strUserID = "system";
            String strPassword = "Oracle123";
            con = DriverManager.getConnection(dbURL, strUserID, strPassword);
            System.out.println("Connected to the database.");
            Statement stmt = con.createStatement();
            System.out.println("Executing query");
            ResultSet rs = stmt.executeQuery("SELECT 1 FROM DUAL");
            while (rs.next())
                System.out.println(rs.getInt(1));
        } catch (Exception e) {
            System.out.println(e);
        } finally {
            // Close the connection only if it was actually opened
            if (con != null)
                con.close();
        }
    }
}

4. Compile the java code and check that the *.class file was generated:

# javac JDBCTest.java

# ll  JDBCTest.*
 -rw-r--r-- 1 root root 1836 Sep 27 11:45 JDBCTest.class
 -rw-r--r-- 1 root root  925 Sep 27 11:45 JDBCTest.java

5. Run code:

# java -Djava.security.egd=file:/dev/../dev/urandom -cp ojdbc8.jar:. JDBCTest "stbyrac-scan.example.com"

jdbcurl=jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=stbyrac-scan.example.com)(PORT=1521))(CONNECT_DATA=(SERVER = DEDICATED)(SERVICE_NAME=orclgg)))

Connected to the database.
Executing query…
1

Please note that the -Djava.security.egd=file:/dev/../dev/urandom parameter is required for a stable connection. I will discuss the importance of this parameter in the next post.

OPATCHAUTO-72115: Please complete the previous apply session across all nodes to perform apply session

Problem:

My opatchauto out-of-place patching failed on the GI home. I was able to clean up the cloned GI home and the information about it in inventory.xml, but when I ran opatchauto again I got the following error:

[root@rac1 29708703]# $ORACLE_HOME/OPatch/opatchauto apply -oh $ORACLE_HOME -outofplace
OPatchauto session is initiated at Sun Aug 18 20:40:43 2019
System initialization log file is /u01/app/18.3.0/grid/cfgtoollogs/opatchautodb/systemconfig2019-08-18_08-40-46PM.log.
Session log file is /u01/app/18.3.0/grid/cfgtoollogs/opatchauto/opatchauto2019-08-18_08-42-20PM.log
The id for this session is Z1CP
OPATCHAUTO-72115: Out of place patching apply session cannot be performed.
OPATCHAUTO-72115: Previous apply session  is not completed on node rac1.
OPATCHAUTO-72115: Please complete the previous apply  session across all nodes to perform apply session.
OPatchAuto failed.

Solution:

Clear the checkpoint files from the previous session:

[root@rac1 29708703]# cd /u01/app/18.3.0/grid/.opatchauto_storage/rac1
[root@rac1 rac1]# ls
oopsessioninfo.ser
[root@rac1 rac1]# rm -rf oopsessioninfo.ser 

Then rerun opatchauto apply.
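
For reference, this is simply the same command that started the failed session:

[root@rac1 29708703]# $ORACLE_HOME/OPatch/opatchauto apply -oh $ORACLE_HOME -outofplace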

sqlplus backspace – ^H, delete – [[D^, add command history

Problem:

In sqlplus, pressing backspace prints ^H and pressing delete prints [[D^. Your terminal settings affect keyboard behaviour in sqlplus.

Let’s improve our sqlplus: make the backspace and delete keys work as expected and, in addition, add command history.

Solution:

Install a readline wrapper (rlwrap) – it maintains a separate input history for each command.

[root@rac1 ~]# yum install rlwrap -y 

Create an alias for sqlplus in /etc/profile:

alias sqlplus='rlwrap sqlplus' 
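
If you prefer not to modify /etc/profile, the same alias can go into the oracle user's ~/.bash_profile instead. rlwrap also works nicely with other Oracle command-line tools (optional, hedged examples):

alias rman='rlwrap rman'
alias adrci='rlwrap adrci'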

Reconnect to the terminal and check that the alias is created:

[oracle@rac1 ~]$ alias
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias grep='grep --color=auto'
alias l.='ls -d .* --color=auto'
alias ll='ls -l --color=auto'
alias ls='ls --color=auto'
alias sqlplus='rlwrap sqlplus'
alias vi='vim'
alias which='alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde'

Connect to sqlplus:

[oracle@rac1 ~]$ sqlplus / as sysdba
SQL> 

And test your new sqlplus 🙂

Use backspace and delete, execute some commands, and then press the up arrow to recall the previous command.

Patch Planner to check and request conflict patches

Problem:

Recently, I was applying p29963428_194000ACFSRU_Linux-x86-64.zip on top of a 19.4 GI home and got the following error:

==Following patches FAILED in analysis for apply:
 Patch: /u01/swtmp/29963428/29963428
 Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-08-07_10-07-56AM_1.log
 Reason: Failed during Analysis: CheckConflictAgainstOracleHome Failed, [ Prerequisite Status: FAILED, Prerequisite output: 
 Summary of Conflict Analysis:
 There are no patches that can be applied now.
 Following patches have conflicts. Please contact Oracle Support and get the merged patch of the patches : 
 29851014, 29963428
 Conflicts/Supersets for each patch are:
 Patch : 29963428
 Bug Conflict with 29851014 Conflicting bugs are: 29039918, 27494830, 29338628, 29031452, 29264772, 29760083, 28855761 ... 
 After fixing the cause of failure Run opatchauto resume

Solution:

Oracle MOS note ID 1317012.1 describes how to check such conflicts and how to request conflict/merged patches:

1. Run lsinventory from the target home:

[grid@rac1 ~]$ /u01/app/19.3.0/grid/OPatch/opatch lsinventory > GI_lsinventory.txt

2. Log on to support.oracle.com -> click the “Patch and Updates” tab -> enter the patch number you want to apply:

3. Click “Analyze with OPatch”…

4. Attach the GI_lsinventory.txt file created in the first step and click “Analyze for Conflict”:

5. Wait for a while and you will see the result. According to it, patch 29963428 conflicts with my currently installed patches:

From the same screen I can click “Request Patch”.

6. After clicking the “Request Patch” button I got the following error:

Click “See Details”:

The message means that a fix for the same bug is already included in the currently installed 19.4.0.0.190716ACFSRU.

So I do not have to apply patch 29963428 at all. I wanted to share these steps with you, because the tool is really useful.

Install TFA v19.2.1 as a daemon in the cluster

Installing TFA as root is preferable, because it gives you enhanced capabilities, such as:

  • Automatic diagnostic collections
  • Collections from remote hosts
  • Collection of files that are not readable by the Oracle software owner, such as /var/log/messages or the clusterware daemon logs

Download the newest Oracle Trace File Analyzer zip from Doc ID 1513912.1 (for simplicity you can use wget):

# wget --http-user=<Your OTN username> "https://updates.oracle.com/Orion/Services/download/TFA-LINUX_v19.2.1.zip?aru=22891540&patch_file=TFA-LINUX_v19.2.1.zip" -O TFA-LINUX_v19.2.1.zip --ask-password
Password: <Your OTN user password>

Set up passwordless ssh user equivalency for the root user on all nodes:

[root@rac1 install]# ssh-keygen -t rsa <press Enter on each question>
[root@rac2 install]# ssh-keygen -t rsa <press Enter on each question> 

Copy the content of /root/.ssh/id_rsa.pub from rac1 and paste it into /root/.ssh/authorized_keys on rac2, then do the same in the opposite direction (rac2 to rac1).
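
If root password authentication is temporarily allowed, ssh-copy-id can do the same key exchange in one step (a hedged alternative):

[root@rac1 ~]# ssh-copy-id root@rac2
[root@rac2 ~]# ssh-copy-id root@rac1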

Please make sure that the parameter PermitRootLogin yes exists in the /etc/ssh/sshd_config file.
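
A quick way to confirm the setting and reload sshd after changing it (hedged; adjust to your environment):

# grep -i '^PermitRootLogin' /etc/ssh/sshd_config
PermitRootLogin yes
# systemctl reload sshd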

Check ssh equivalency:

[root@rac1 install]# ssh rac2 date
 Tue Aug  6 10:37:07 UTC 2019

[root@rac2 .ssh]# ssh rac1 date
 Tue Aug  6 10:37:34 UTC 2019

Unzip the file and run the installTFA command (the answers typed at the installation prompts are shown inline; the rest of the text is installation log output):

[root@rac1 install]# unzip TFA-LINUX_v19.2.1.zip
[root@rac1 install]# ll
 total 518220
 -rwxr-xr-x 1 root root 265898937 Apr 25 18:44 installTFA-LINUX
 -rw-r--r-- 1 root root      1460 Apr 25 18:09 README.txt
 -rw-r--r-- 1 root root 264751391 Apr 25 19:18 TFA-LINUX_v19.2.1.zip

[root@rac1 install]# ./installTFA-LINUX 

TFA Installation Log will be written to File : /tmp/tfa_install_27153_2019_08_06-10_40_07.log
Starting TFA installation
TFA Version: 192100 Build Date: 201904251105
Enter a location for installing TFA (/tfa will be appended if not supplied) [/sw/install/tfa]:
/u01/app/18.3.0/grid/tfa
Running Auto Setup for TFA as user root…
Would you like to do a [L]ocal only or [C]lusterwide installation ? [L|l|C|c] [C] : C
The following installation requires temporary use of SSH.
If SSH is not configured already then we will remove SSH when complete.
Do you wish to Continue ? [Y|y|N|n] [Y] Y
Installing TFA now…
Discovering Nodes and Oracle resources
Starting Discovery…
Getting list of nodes in cluster . . . . .
List of nodes in cluster:
rac1
rac2
Checking ssh user equivalency settings on all nodes in cluster
Node rac2 is configured for ssh user equivalency for root user
CRS_HOME=/u01/app/18.3.0/grid
Searching for running databases…
orcl 
Searching out ORACLE_HOME for selected databases…
Getting Oracle Inventory…
ORACLE INVENTORY: /u01/app/oraInventory
Discovery Complete…
TFA Will be Installed on the Following Nodes:
++++++++++++++++++++++++++++++++++++++++++++
Install Nodes
rac1
rac2
Do you wish to make changes to the Node List ? [Y/y/N/n] [N] N
TFA will scan the following Directories
++++++++++++++++++++++++++++++++++++++++++++
.-------------------------------------------------------------------------.
|                                   rac1                                  |
+--------------------------------------------------------------+----------+
| Trace Directory                                              | Resource |
+--------------------------------------------------------------+----------+
| /u01/app/18.3.0/grid/cfgtoollogs                             | CFGTOOLS |
| /u01/app/18.3.0/grid/crs/log                                 | CRS      |
| /u01/app/18.3.0/grid/css/log                                 | CRS      |
| /u01/app/18.3.0/grid/evm/admin/logger                        | CRS      |
| /u01/app/18.3.0/grid/evm/log                                 | CRS      |
| /u01/app/18.3.0/grid/install                                 | INSTALL  |
| /u01/app/18.3.0/grid/inventory/ContentsXML                   | INSTALL  |
| /u01/app/18.3.0/grid/log                                     | CRS      |
| /u01/app/18.3.0/grid/network/log                             | CRS      |
| /u01/app/18.3.0/grid/opmn/logs                               | CRS      |
| /u01/app/18.3.0/grid/rdbms/log                               | ASM      |
| /u01/app/18.3.0/grid/srvm/log                                | CRS      |
| /u01/app/grid/cfgtoollogs                                    | CFGTOOLS |
| /u01/app/grid/crsdata/rac1/acfs                              | ACFS     |
| /u01/app/grid/crsdata/rac1/afd                               | ASM      |
| /u01/app/grid/crsdata/rac1/chad                              | CRS      |
| /u01/app/grid/crsdata/rac1/core                              | CRS      |
| /u01/app/grid/crsdata/rac1/crsconfig                         | CRS      |
| /u01/app/grid/crsdata/rac1/crsdiag                           | CRS      |
| /u01/app/grid/crsdata/rac1/cvu                               | CRS      |
| /u01/app/grid/crsdata/rac1/evm                               | CRS      |
| /u01/app/grid/crsdata/rac1/output                            | CRS      |
| /u01/app/grid/crsdata/rac1/trace                             | CRS      |
| /u01/app/grid/diag/apx/+apx                                  | ASMPROXY | 
| /u01/app/grid/diag/apx/+apx/+APX1/cdump                      | ASMPROXY |
| /u01/app/grid/diag/apx/+apx/+APX1/trace                      | ASMPROXY |
| /u01/app/grid/diag/asm/+asm/+ASM1/cdump                      | ASM      |
| /u01/app/grid/diag/asm/+asm/+ASM1/trace                      | ASM      |
| /u01/app/grid/diag/asmtool/user_grid/host_2173698417_110/cdu | ASMTOOL  |
| /u01/app/grid/diag/asmtool/user_grid/host_2173698417_110/tra | ASM      |
| /u01/app/grid/diag/clients/user_grid/host_2173698417_110/cdu | DBCLIENT |
| /u01/app/grid/diag/clients/user_grid/host_2173698417_110/tra | DBCLIENT |
| /u01/app/grid/diag/crs/rac1/crs/cdump                        | CRS      |
| /u01/app/grid/diag/crs/rac1/crs/trace                        | CRS      |
| /u01/app/grid/diag/rdbms/_mgmtdb/-MGMTDB/cdump               | RDBMS    |
| /u01/app/grid/diag/rdbms/_mgmtdb/-MGMTDB/trace               | RDBMS    |
| /u01/app/grid/diag/tnslsnr/rac1/listener/cdump               | TNS      |
| /u01/app/grid/diag/tnslsnr/rac1/listener/trace               | TNS      |
| /u01/app/grid/diag/tnslsnr/rac1/listener_scan1/cdump         | TNS      |
| /u01/app/grid/diag/tnslsnr/rac1/listener_scan1/trace         | TNS      |
| /u01/app/grid/diag/tnslsnr/rac1/listener_scan2/cdump         | TNS      |
| /u01/app/grid/diag/tnslsnr/rac1/listener_scan2/trace         | TNS      |
| /u01/app/grid/diag/tnslsnr/rac1/listener_scan3/cdump         | TNS      |
| /u01/app/grid/diag/tnslsnr/rac1/listener_scan3/trace         | TNS      |
| /u01/app/oraInventory/ContentsXML                            | INSTALL  |
| /u01/app/oraInventory/logs                                   | INSTALL  |
| /u01/app/oracle/cfgtoollogs                                  | CFGTOOLS |
| /u01/app/oracle/diag/clients/user_oracle/host_2173698417_110 | DBCLIENT |
| /u01/app/oracle/diag/clients/user_oracle/host_2173698417_110 | DBCLIENT |
| /u01/app/oracle/diag/rdbms/orcl/orcl1/cdump                  | RDBMS    |
| /u01/app/oracle/diag/rdbms/orcl/orcl1/trace                  | RDBMS    |
| /u01/app/oracle/diag/tnslsnr                                 | TNS      |
| /u01/app/oracle/product/18.3.0/dbhome_1/cfgtoollogs          | CFGTOOLS |
| /u01/app/oracle/product/18.3.0/dbhome_1/install              | INSTALL  |
| com                                                          | ASM      |
'--------------------------------------------------------------+----------'
Installing TFA on rac1:
HOST: rac1    TFA_HOME: /u01/app/18.3.0/grid/tfa/rac1/tfa_home
Installing TFA on rac2:
HOST: rac2    TFA_HOME: /u01/app/18.3.0/grid/tfa/rac2/tfa_home
.-------------------------------------------------------------------------.
| Host | Status of TFA | PID   | Port | Version    | Build ID             |
+------+---------------+-------+------+------------+----------------------+
| rac1 | RUNNING       | 28335 | 5000 | 19.2.1.0.0 | 19210020190425110550 |
| rac2 | RUNNING       | 31169 | 5000 | 19.2.1.0.0 | 19210020190425110550 |
'------+---------------+-------+------+------------+----------------------'
Running Inventory in All Nodes…
Enabling Access for Non-root Users on rac1…
Adding default users to TFA Access list…
Summary of TFA Installation:
.--------------------------------------------------------------.
|                             rac1                             |
+---------------------+----------------------------------------+
| Parameter           | Value                                  |
+---------------------+----------------------------------------+
| Install location    | /u01/app/18.3.0/grid/tfa/rac1/tfa_home |
| Repository location | /u01/app/grid/tfa/repository           |
| Repository usage    | 0 MB out of 10240 MB                   |
'---------------------+----------------------------------------'
.--------------------------------------------------------------.
|                             rac2                             |
+---------------------+----------------------------------------+
| Parameter           | Value                                  |
+---------------------+----------------------------------------+
| Install location    | /u01/app/18.3.0/grid/tfa/rac2/tfa_home |
| Repository location | /u01/app/grid/tfa/repository           |
| Repository usage    | 0 MB out of 10240 MB                   |
'---------------------+----------------------------------------'
TFA is successfully installed…
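
After the installation you can verify that TFA is running on all nodes with tfactl (a quick check; the path assumes the install location chosen above):

[root@rac1 ~]# /u01/app/18.3.0/grid/tfa/rac1/tfa_home/bin/tfactl print status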

Boot in single user mode and rescue your RHEL7

Problem:

One of our customers incorrectly changed the fstab file and rebooted the OS. As a result, the VM was not able to start. Fortunately, the cloud where this VM was located provided a serial console.

Solution:

We booted into single user mode through the serial console and reverted the changes. To boot into single user mode and fix the offending file, do the following:

Connect to the serial console and, while the OS is booting, press e in the GRUB menu to edit the selected kernel entry:

Find the line that starts with linux16 (if you don't see it, press the down arrow), go to the end of that line, and type rd.break, as in the example below.
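
For example, the edited line might look like this (a generic sketch; the kernel version and parameters will differ on your system):

linux16 /vmlinuz-3.10.0-957.el7.x86_64 root=/dev/mapper/rhel-root ro crashkernel=auto rhgb quiet rd.break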

Press ctrl+x.

Wait for a while and the system will enter single user mode:

At this point /sysroot is mounted read-only; you need to remount it read-write:

switch_root:/# mount -o remount,rw /sysroot
switch_root:/# chroot /sysroot

Now you can revert the changes by editing the relevant file; in our case we fixed fstab:

sh-4.2# vim /etc/fstab
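
If SELinux is in enforcing mode, it is usually recommended to request a relabel before rebooting, so that any files touched from the rescue shell get correct security contexts (an optional, hedged step):

sh-4.2# touch /.autorelabel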

Restart VM:

sh-4.2# reboot -f

You are a real hero, because you rescued your system!

Resize ASM disks in AWS (FG enabled cluster)

  1. Connect to AWS console https://console.aws.amazon.com
  2. On the left side -> under the section ELASTIC BLOCK STORE -> choose Volumes
  3. Choose the necessary disk -> click the Actions button -> choose Modify Volume -> change Size
    Please note that all data disks (not the quorum disk) under the same diskgroup must be resized, otherwise ASM will not let you have different-sized disks.

Choose the other data disks and repeat the same steps.

4. Run the following on the database nodes as the root user:

# for i in /sys/block/*/device/rescan; do echo 1 > $i; done

5. Check that the disks have the correct sizes:

# flashgrid-node

6. Connect to the ASM instance from any database node and run:

[grid@rac1 ~]$ sqlplus / as sysasm
SQL*Plus: Release 19.0.0.0.0 - Production on Fri Aug 23 10:17:50 2019
Version 19.4.0.0.0
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.4.0.0.0

SQL> alter diskgroup GRID resize all; 
Diskgroup altered.
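
You can then confirm the new diskgroup size from the same ASM session (a simple check using a standard ASM view):

SQL> select name, total_mb, free_mb from v$asm_diskgroup where name='GRID';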