sqlplus backspace – ^H, delete – [[D^, add command history

Problem:

In sqlplus, pressing backspace prints ^H and pressing delete prints [[D^. Your terminal settings affect keyboard behaviour in sqlplus.

Let’s improve our sqlplus: make the backspace and delete keys work as expected and, in addition, add command history.

Solution:

Install a readline wrapper (rlwrap) – it maintains a separate input history for each command.

[root@rac1 ~]# yum install rlwrap -y 

Create an alias for sqlplus in /etc/profile:

alias sqlplus='rlwrap sqlplus' 
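If you prefer a one-liner, the same alias can be appended as root (shown as a sketch; you could instead put it in the oracle user's ~/.bash_profile if you don't want it system-wide):

[root@rac1 ~]# echo "alias sqlplus='rlwrap sqlplus'" >> /etc/profile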

Reconnect to the terminal and check that the alias is in place:

[oracle@rac1 ~]$ alias
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias grep='grep --color=auto'
alias l.='ls -d .* --color=auto'
alias ll='ls -l --color=auto'
alias ls='ls --color=auto'
alias sqlplus='rlwrap sqlplus'
alias vi='vim'
alias which='alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde'

Connect to sqlplus:

[oracle@rac1 ~]$ sqlplus / as sysdba
SQL> 

And test your new sqlplus 🙂 :

Use backspace and delete, execute some commands, and then press the up arrow to recall previous commands.

Patch Planner to check and request conflict patches

Problem:

Recently, I was applying p29963428_194000ACFSRU_Linux-x86-64.zip on top of a 19.4 GI home and got the following error:

==Following patches FAILED in analysis for apply:
 Patch: /u01/swtmp/29963428/29963428
 Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-08-07_10-07-56AM_1.log
 Reason: Failed during Analysis: CheckConflictAgainstOracleHome Failed, [ Prerequisite Status: FAILED, Prerequisite output: 
 Summary of Conflict Analysis:
 There are no patches that can be applied now.
 Following patches have conflicts. Please contact Oracle Support and get the merged patch of the patches : 
 29851014, 29963428
 Conflicts/Supersets for each patch are:
 Patch : 29963428
 Bug Conflict with 29851014 Conflicting bugs are: 29039918, 27494830, 29338628, 29031452, 29264772, 29760083, 28855761 ... 
 After fixing the cause of failure Run opatchauto resume

Solution:

Oracle MOS Doc ID 1317012.1 describes how to check for such conflicts and request conflict/merge patches:

1. Run lsinventory from the target home:

[grid@rac1 ~]$ /u01/app/19.3.0/grid/OPatch/opatch lsinventory > GI_lsinventory.txt

2. Log on to support.oracle.com -> click the “Patch and Updates” tab -> enter the patch number you want to apply:

3. Click “Analyze with OPatch”:

4. Attach the GI_lsinventory.txt file created in the first step and click “Analyze for Conflict”:

5. Wait for a while and you will see the result. According to it, patch 29963428 conflicts with my current patches:

From the same screen I can click “Request Patch”.

6. After clicking the “Request Patch” button I got the following error:

Click “See Details”:

The message actually means that the fix for the same bug is already included in the currently installed 19.4.0.0.190716ACFSRU.

So I don’t have to apply patch 29963428. I wanted to share the steps with you because the mentioned tool is really useful.
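Aside from the MOS Patch Planner, OPatch itself can run the same conflict check locally against the target home. A minimal sketch, assuming the patch is unzipped under /u01/swtmp/29963428 as in the log above:

[grid@rac1 ~]$ /u01/app/19.3.0/grid/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph /u01/swtmp/29963428/29963428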

Install TFA v19.2.1 as Daemon in cluster

Installing TFA as root is preferable because you get enhanced capabilities, such as:

  • Automatic diagnostic collections
  • Collections from remote hosts
  • Collection of files that are not readable by the Oracle software owner, such as /var/log/messages or the clusterware daemon logs

Download the newest Oracle Trace File Analyzer zip from Doc ID 1513912.1 (for simplicity you can use wget):

# wget --http-user=<Your OTN username> "https://updates.oracle.com/Orion/Services/download/TFA-LINUX_v19.2.1.zip?aru=22891540&patch_file=TFA-LINUX_v19.2.1.zip" -O TFA-LINUX_v19.2.1.zip --ask-password
Password: <Your OTN user password>

Set up passwordless ssh user equivalency for the root user on all nodes:

[root@rac1 install]# ssh-keygen -t rsa <press Enter on each question>
[root@rac2 install]# ssh-keygen -t rsa <press Enter on each question> 

Copy the content of /root/.ssh/id_rsa.pub from rac1 and paste it into /root/.ssh/authorized_keys on rac2. Repeat the same in the other direction for rac2.
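If password authentication over ssh is available between the nodes, ssh-copy-id can do the same key exchange in one step (a sketch using the hostnames from the example):

[root@rac1 install]# ssh-copy-id root@rac2
[root@rac2 ~]# ssh-copy-id root@rac1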

Please make sure that PermitRootLogin yes is set in the /etc/ssh/sshd_config file.

Check ssh equivalency:

[root@rac1 install]# ssh rac2 date
 Tue Aug  6 10:37:07 UTC 2019

[root@rac2 .ssh]# ssh rac1 date
 Tue Aug  6 10:37:34 UTC 2019

Unzip the file and run the installTFA command (the values typed at the prompts are the answers to the TFA installation questions; the rest of the text is just the installation log):

[root@rac1 install]# unzip TFA-LINUX_v19.2.1.zip
[root@rac1 install]# ll
 total 518220
 -rwxr-xr-x 1 root root 265898937 Apr 25 18:44 installTFA-LINUX
 -rw-r--r-- 1 root root      1460 Apr 25 18:09 README.txt
 -rw-r--r-- 1 root root 264751391 Apr 25 19:18 TFA-LINUX_v19.2.1.zip

[root@rac1 install]# ./installTFA-LINUX 

TFA Installation Log will be written to File : /tmp/tfa_install_27153_2019_08_06-10_40_07.log
Starting TFA installation
TFA Version: 192100 Build Date: 201904251105
Enter a location for installing TFA (/tfa will be appended if not supplied) [/sw/install/tfa]:
/u01/app/18.3.0/grid/tfa
Running Auto Setup for TFA as user root…
Would you like to do a [L]ocal only or [C]lusterwide installation ? [L|l|C|c] [C] : C
The following installation requires temporary use of SSH.
If SSH is not configured already then we will remove SSH when complete.
Do you wish to Continue ? [Y|y|N|n] [Y] Y
Installing TFA now…
Discovering Nodes and Oracle resources
Starting Discovery…
Getting list of nodes in cluster . . . . .
List of nodes in cluster:
rac1
rac2
Checking ssh user equivalency settings on all nodes in cluster
Node rac2 is configured for ssh user equivalency for root user
CRS_HOME=/u01/app/18.3.0/grid
Searching for running databases…
orcl 
Searching out ORACLE_HOME for selected databases…
Getting Oracle Inventory…
ORACLE INVENTORY: /u01/app/oraInventory
Discovery Complete…
TFA Will be Installed on the Following Nodes:
++++++++++++++++++++++++++++++++++++++++++++
Install Nodes
rac1
rac2
Do you wish to make changes to the Node List ? [Y/y/N/n] [N] N
TFA will scan the following Directories
++++++++++++++++++++++++++++++++++++++++++++
.-------------------------------------------------------------------------.
|                                   rac1                                  |
+--------------------------------------------------------------+----------+
| Trace Directory                                              | Resource |
+--------------------------------------------------------------+----------+
| /u01/app/18.3.0/grid/cfgtoollogs                             | CFGTOOLS |
| /u01/app/18.3.0/grid/crs/log                                 | CRS      |
| /u01/app/18.3.0/grid/css/log                                 | CRS      |
| /u01/app/18.3.0/grid/evm/admin/logger                        | CRS      |
| /u01/app/18.3.0/grid/evm/log                                 | CRS      |
| /u01/app/18.3.0/grid/install                                 | INSTALL  |
| /u01/app/18.3.0/grid/inventory/ContentsXML                   | INSTALL  |
| /u01/app/18.3.0/grid/log                                     | CRS      |
| /u01/app/18.3.0/grid/network/log                             | CRS      |
| /u01/app/18.3.0/grid/opmn/logs                               | CRS      |
| /u01/app/18.3.0/grid/rdbms/log                               | ASM      |
| /u01/app/18.3.0/grid/srvm/log                                | CRS      |
| /u01/app/grid/cfgtoollogs                                    | CFGTOOLS |
| /u01/app/grid/crsdata/rac1/acfs                              | ACFS     |
| /u01/app/grid/crsdata/rac1/afd                               | ASM      |
| /u01/app/grid/crsdata/rac1/chad                              | CRS      |
| /u01/app/grid/crsdata/rac1/core                              | CRS      |
| /u01/app/grid/crsdata/rac1/crsconfig                         | CRS      |
| /u01/app/grid/crsdata/rac1/crsdiag                           | CRS      |
| /u01/app/grid/crsdata/rac1/cvu                               | CRS      |
| /u01/app/grid/crsdata/rac1/evm                               | CRS      |
| /u01/app/grid/crsdata/rac1/output                            | CRS      |
| /u01/app/grid/crsdata/rac1/trace                             | CRS      |
| /u01/app/grid/diag/apx/+apx                                  | ASMPROXY | 
| /u01/app/grid/diag/apx/+apx/+APX1/cdump                      | ASMPROXY |
| /u01/app/grid/diag/apx/+apx/+APX1/trace                      | ASMPROXY |
| /u01/app/grid/diag/asm/+asm/+ASM1/cdump                      | ASM      |
| /u01/app/grid/diag/asm/+asm/+ASM1/trace                      | ASM      |
| /u01/app/grid/diag/asmtool/user_grid/host_2173698417_110/cdu | ASMTOOL  |
| /u01/app/grid/diag/asmtool/user_grid/host_2173698417_110/tra | ASM      |
| /u01/app/grid/diag/clients/user_grid/host_2173698417_110/cdu | DBCLIENT |
| /u01/app/grid/diag/clients/user_grid/host_2173698417_110/tra | DBCLIENT |
| /u01/app/grid/diag/crs/rac1/crs/cdump                        | CRS      |
| /u01/app/grid/diag/crs/rac1/crs/trace                        | CRS      |
| /u01/app/grid/diag/rdbms/_mgmtdb/-MGMTDB/cdump               | RDBMS    |
| /u01/app/grid/diag/rdbms/_mgmtdb/-MGMTDB/trace               | RDBMS    |
| /u01/app/grid/diag/tnslsnr/rac1/listener/cdump               | TNS      |
| /u01/app/grid/diag/tnslsnr/rac1/listener/trace               | TNS      |
| /u01/app/grid/diag/tnslsnr/rac1/listener_scan1/cdump         | TNS      |
| /u01/app/grid/diag/tnslsnr/rac1/listener_scan1/trace         | TNS      |
| /u01/app/grid/diag/tnslsnr/rac1/listener_scan2/cdump         | TNS      |
| /u01/app/grid/diag/tnslsnr/rac1/listener_scan2/trace         | TNS      |
| /u01/app/grid/diag/tnslsnr/rac1/listener_scan3/cdump         | TNS      |
| /u01/app/grid/diag/tnslsnr/rac1/listener_scan3/trace         | TNS      |
| /u01/app/oraInventory/ContentsXML                            | INSTALL  |
| /u01/app/oraInventory/logs                                   | INSTALL  |
| /u01/app/oracle/cfgtoollogs                                  | CFGTOOLS |
| /u01/app/oracle/diag/clients/user_oracle/host_2173698417_110 | DBCLIENT |
| /u01/app/oracle/diag/clients/user_oracle/host_2173698417_110 | DBCLIENT |
| /u01/app/oracle/diag/rdbms/orcl/orcl1/cdump                  | RDBMS    |
| /u01/app/oracle/diag/rdbms/orcl/orcl1/trace                  | RDBMS    |
| /u01/app/oracle/diag/tnslsnr                                 | TNS      |
| /u01/app/oracle/product/18.3.0/dbhome_1/cfgtoollogs          | CFGTOOLS |
| /u01/app/oracle/product/18.3.0/dbhome_1/install              | INSTALL  |
| com                                                          | ASM      |
'--------------------------------------------------------------+----------'
Installing TFA on rac1:
HOST: rac1    TFA_HOME: /u01/app/18.3.0/grid/tfa/rac1/tfa_home
Installing TFA on rac2:
HOST: rac2    TFA_HOME: /u01/app/18.3.0/grid/tfa/rac2/tfa_home
.-------------------------------------------------------------------------.
| Host | Status of TFA | PID   | Port | Version    | Build ID             |
+------+---------------+-------+------+------------+----------------------+
| rac1 | RUNNING       | 28335 | 5000 | 19.2.1.0.0 | 19210020190425110550 |
| rac2 | RUNNING       | 31169 | 5000 | 19.2.1.0.0 | 19210020190425110550 |
'------+---------------+-------+------+------------+----------------------'
Running Inventory in All Nodes…
Enabling Access for Non-root Users on rac1…
Adding default users to TFA Access list…
Summary of TFA Installation:
.--------------------------------------------------------------.
|                             rac1                             |
+---------------------+----------------------------------------+
| Parameter           | Value                                  |
+---------------------+----------------------------------------+
| Install location    | /u01/app/18.3.0/grid/tfa/rac1/tfa_home |
| Repository location | /u01/app/grid/tfa/repository           |
| Repository usage    | 0 MB out of 10240 MB                   |
'---------------------+----------------------------------------'
.--------------------------------------------------------------.
|                             rac2                             |
+---------------------+----------------------------------------+
| Parameter           | Value                                  |
+---------------------+----------------------------------------+
| Install location    | /u01/app/18.3.0/grid/tfa/rac2/tfa_home |
| Repository location | /u01/app/grid/tfa/repository           |
| Repository usage    | 0 MB out of 10240 MB                   |
'---------------------+----------------------------------------'
TFA is successfully installed…
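Once the installer reports success, the daemon can be double-checked on any node with tfactl (a quick check; the tfactl path below assumes the tfa_home shown in the installation summary):

[root@rac1 install]# /u01/app/18.3.0/grid/tfa/rac1/tfa_home/bin/tfactl print status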

Boot in single user mode and rescue your RHEL7

Problem:

One of our customers incorrectly changed the fstab file and rebooted the OS. As a result, the VM was not able to start. Fortunately, the cloud where this VM was located provided a serial console.

Solution:

We booted in single user mode through the serial console and reverted the changes. To boot in single user mode and fix the necessary file, do the following:

Connect to the serial console and, while the OS is booting, press e in the GRUB menu to edit the selected kernel entry:

Find the line that starts with linux16 (if you don’t see it, press the down arrow), go to the end of this line and append rd.break.

Press ctrl+x.

Wait a short while and the system will drop into single user mode:

At this point /sysroot is mounted read-only; you need to remount it read-write and chroot into it:

switch_root:/# mount -o remount,rw /sysroot
switch_root:/# chroot /sysroot

Now you can revert the changes by editing the offending file; in our case we fixed fstab:

sh-4.2# vim /etc/fstab
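For reference, a valid fstab entry has six fields: device, mount point, filesystem type, mount options, dump flag and fsck order. The line below is purely illustrative (hypothetical device and mount point), not taken from the customer’s system:

/dev/mapper/rhel-root   /      xfs   defaults   0 0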

Restart VM:

sh-4.2# reboot -f

You are a real hero, because you rescued your system!

Resize ASM disks in AWS (FG enabled cluster)

  1. Connect to AWS console https://console.aws.amazon.com
  2. On the left side -> under the section ELASTIC BLOCK STORE -> choose Volumes
  3. Choose necessary disk -> click Actions button -> choose Modify Volume -> change Size
    Please note that all data disks (not the quorum disk) in the same diskgroup must be increased, otherwise ASM will not let you have different-sized disks.

Choose the other data disks and repeat the same steps.

4. Run the following on the database nodes as the root user; it makes the OS rescan the block devices so the new sizes become visible:

# for i in /sys/block/*/device/rescan; do echo 1 > $i; done

5. Check that the disks show the correct sizes:

# flashgrid-node

6. Connect to the ASM instance from any database node and run:

[grid@rac1 ~]$ sqlplus / as sysasm
SQL*Plus: Release 19.0.0.0.0 - Production on Fri Aug 23 10:17:50 2019
Version 19.4.0.0.0
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.4.0.0.0

SQL> alter diskgroup GRID resize all; 
Diskgroup altered.
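As an extra check (not part of the original steps), you can compare the OS-reported and ASM-reported disk sizes from the same sysasm session; OS_MB and TOTAL_MB should both show the new size after the resize:

SQL> select group_number, name, os_mb, total_mb from v$asm_disk order by group_number, name;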

Can’t call method “uid” on an undefined value at …DBUtilServices.pm line 28.

Problem:

opatchauto on GI fails with the following error:

# /u01/app/18.0.0/grid/OPatch/opatchauto apply /0/grid/29301682  -oh /u01/app/18.0.0/grid
 Can't call method "uid" on an undefined value at /u01/app/18.0.0/grid/OPatch/auto/database/bin/module/DBUtilServices.pm line 28.

Reason:

  1. GI is not set up yet. You may have unzipped the GI installation file but not run gridSetup.sh.
  2. $GI_HOME/oraInst.loc is missing.

Solution:

  1. Set up GI by running gridSetup.sh.
  2. Copy oraInst.loc from another node; if you don’t have another node, create it with the content shown below:
# cat /u01/app/18.0.0/grid/oraInst.loc
inst_group=oinstall
inventory_loc=/u01/app/oraInventory
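Before re-running opatchauto, a quick sanity check is to confirm the file is now in place in the GI home (path taken from the example above):

# ls -l /u01/app/18.0.0/grid/oraInst.loc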

PRVG-11069 : IP address “169.254.0.2” of network interface “idrac” on the node “primrac1” would conflict with HAIP usage

Problem:

Oracle 18c GI configuration precheck was failing with the following error:

Summary of node specific errors 

primrac2  - PRVG-11069 : IP address "169.254.0.2" of network interface "idrac" on the node "primrac2" would conflict with HAIP usage.
- Cause:  One or more network interfaces have IP addresses in the range (169.254.*.*), the range used by HAIP which can create routing conflicts.
- Action:  Make sure there are no IP addresses in the range (169.254.*.*) on any network interfaces.

primrac1  - PRVG-11069 : IP address "169.254.0.2" of network interface "idrac" on the node "primrac1" would conflict with HAIP usage.
- Cause:  One or more network interfaces have IP addresses in the range (169.254.*.*), the range used by HAIP which can create routing conflicts.
- Action:  Make sure there are no IP addresses in the range (169.254.*.*) on any network interfaces.

On each node an additional network interface named idrac was started with the IP address 169.254.0.2. I tried to set a static IP address in /etc/sysconfig/network-scripts/ifcfg-idrac and also tried to bring the interface down, but after some time the interface came back up automatically and got the same IP address.
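The bring-down attempts looked roughly like this (a sketch of the commands rather than the exact session):

[root@primrac1 ~]# ifdown idrac
[root@primrac1 ~]# ip link set idrac down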

The cluster nodes were Dell servers with the Dell Remote Access Controller (iDRAC) Service Module installed. More information about installing/uninstalling this module can be found here: https://topics-cdn.dell.com/pdf/idrac-service-module-v32_users-guide_en-us.pdf

The servers were configured by a system administrator and it was not clear why this module was there. We were not using the iDRAC module, so the only option we had was to remove/uninstall it. (Configuring the module to avoid this situation should also be possible, but we keep our servers as clean as possible, without unused services.)

Solution:

We uninstalled the iDRAC module (also explained in the PDF above):

# rpm -e dcism 

After uninstalling it, the idrac interface did not start anymore, so we could continue with the GI configuration.

Presentation: Oracle GoldenGate Microservices Overview (with DEMO)

Webinar: Oracle GoldenGate Microservices Overview (with DEMO)

PRVF-6402 : Core file name pattern is not same on all the nodes

Problem:

Oracle 18c GI configuration prerequisite checks failed with the following error:

PRVF-6402 : Core file name pattern is not same on all the nodes. Found core filename pattern "core" on nodes "primrac1". Found core filename pattern "core.%p" on nodes "primrac2".  
- Cause:  The core file name pattern is not same on all the nodes.  
- Action:  Ensure that the mechanism for core file naming works consistently on all the nodes. Typically for Linux, the elements to look into are the contents of two files /proc/sys/kernel/core_pattern or /proc/sys/kernel/core_uses_pid. Refer OS vendor documentation for platforms AIX, HP-UX, and Solaris.

Comparing parameter values on both nodes:

[root@primrac1 ~]# cat /proc/sys/kernel/core_uses_pid
0
[root@primrac2 ~]# cat /proc/sys/kernel/core_uses_pid
1 

[root@primrac1 ~]# sysctl -a|grep core_uses_pid
kernel.core_uses_pid = 0

[root@primrac2 ~]# sysctl -a|grep core_uses_pid
kernel.core_uses_pid = 1

The strange thing was that this parameter was not defined explicitly in the sysctl.conf file on either node, but it still had different default values:

[root@primrac1 ~]# cat /etc/sysctl.conf |grep core_uses_pid
[root@primrac2 ~]# cat /etc/sysctl.conf |grep core_uses_pid 

Solution:

I set the parameter to 1 explicitly in sysctl.conf on both nodes.
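One simple way to append it on each node is sketched below (not copied from the original session; editing the file by hand works just as well):

[root@primrac1 ~]# echo "kernel.core_uses_pid=1" >> /etc/sysctl.conf
[root@primrac2 ~]# echo "kernel.core_uses_pid=1" >> /etc/sysctl.conf

After that the file contains the parameter and sysctl -p reloads it: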

[root@primrac1 ~]# cat /etc/sysctl.conf |grep core_uses_pid
kernel.core_uses_pid=1 

[root@primrac2 ~]# cat /etc/sysctl.conf |grep core_uses_pid
kernel.core_uses_pid=1

[root@primrac1 ~]# sysctl -p 
[root@primrac2 ~]# sysctl -p

[root@primrac1 ~]# sysctl -a|grep core_uses_pid 
kernel.core_uses_pid = 1

[root@primrac2 ~]# sysctl -a|grep core_uses_pid 
kernel.core_uses_pid = 1

I pressed the Check Again button and the GI configuration prerequisite check succeeded.