Change Oracle DB name using NID

Details:

Environment: RAC
Source name: orclA
Target name: orcl

1. Make sure that you have a recoverable whole database backup.
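
For example, a whole database backup with RMAN (a minimal sketch; adjust the destination and options to your environment):

[oracle@primrac1 ~]$ rman target /
RMAN> backup database plus archivelog;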

2. In a Real Application Clusters environment, we need to set the cluster_database parameter to false and mount the database on only one node:

[oracle@primrac1 ~]$ sqlplus / as sysdba
SQL> alter system set cluster_database=false scope=spfile;

[oracle@primrac1 ~]$ srvctl stop database -db orclA
[oracle@primrac1 ~]$ sqlplus / as sysdba
SQL> startup mount;
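
Optionally, confirm that the instance is mounted (STATUS should be MOUNTED):

SQL> select instance_name, status from v$instance;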

3. From the first node, run the nid utility. Specify a username with SYSDBA privilege, the target database name, and set the SETNAME parameter to YES. With SETNAME=YES, nid changes only the database name and leaves the DBID unchanged, so existing backups remain usable.

[oracle@primrac1 ~]$ . oraenv
ORACLE_SID = [oracle] ? orclA1
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@primrac1 ~]$ nid TARGET=SYS DBNAME=orcl SETNAME=YES
DBNEWID: Release 19.0.0.0.0 - Production on Mon Dec 16 16:54:36 2019
Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
Password:
Connected to database ORCLA (DBID=3133348785)
Connected to server version 19.4.0
Control Files in database:
    +DATA/ORCLA/CONTROLFILE/current.290.1020357617
Change database name of database ORCLA to ORCL? (Y/[N]) => Y
Proceeding with operation
Changing database name from ORCLA to ORCL
     Control File +DATA/ORCLA/CONTROLFILE/current.290.1020357617 - modified
     Datafile +DATA/ORCLA/DATAFILE/system.283.102035744 - wrote new name
     Datafile +DATA/ORCLA/DATAFILE/sysaux.286.102035749 - wrote new name
     Datafile +DATA/ORCLA/DATAFILE/undotbs1.287.102035753 - wrote new name
     Datafile +DATA/ORCLA/DATAFILE/undotbs2.300.102035839 - wrote new name
     Datafile +DATA/ORCLA/DATAFILE/users.288.102035754 - wrote new name
     Datafile +DATA/ORCLA/TEMPFILE/temp.296.102035765 - wrote new name
     Control File +DATA/ORCLA/CONTROLFILE/current.290.1020357617 - wrote new name
     Instance shut down
 Database name changed to ORCL.
 Modify parameter file and generate a new password file before restarting.
 Succesfully changed database name.
 DBNEWID - Completed succesfully.

4. Change the db_name parameter in the initialization parameter file:

[oracle@primrac1 dbs]$ . oraenv
ORACLE_SID = [orcl1] ? orclA1

[oracle@primrac1 dbs]$ sqlplus / as sysdba
SQL> startup nomount;
SQL> alter system set db_name=orcl scope=spfile;
SQL> alter system set cluster_database=true scope=spfile;
SQL> shut immediate;

5. Modify the database name in srvctl:

[oracle@primrac1 dbs]$ srvctl modify database -db orclA -dbname orcl
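
You can verify the change with srvctl config (the CRS resource is still registered under the unique name orclA):

[oracle@primrac1 dbs]$ srvctl config database -db orclA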

6. Remove the existing password file entry and create a new one:

[oracle@primrac1 dbs]$ srvctl modify database -db orclA -pwfile

[oracle@primrac1 dbs]$ orapwd  dbuniquename=orclA file='+DATA/ORCLA/PASSWORD/pwdorcl.ora'
Enter password for SYS:

7. Start the database using srvctl:

[oracle@primrac1 dbs]$ srvctl start database -db orclA
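
Optionally, verify that the name has changed while the DBID is preserved (remember, SETNAME=YES does not change the DBID):

[oracle@primrac1 dbs]$ sqlplus / as sysdba
SQL> select name, dbid from v$database;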

Depending on your needs, you may also want to update the instance names. If so, make sure that you don’t have instance-specific parameters (e.g., orclA1.instance_number); otherwise, it’s better to recreate the parameter file with the correct instance names.

I’ve shown you just a simple example of the database name change. In your specific case, there may be other things that depend on the database name.

Patch Planner to check and request conflict patches

Problem:

Recently, I was applying p29963428_194000ACFSRU_Linux-x86-64.zip on top of 19.4 GI home and got the following error:

==Following patches FAILED in analysis for apply:
 Patch: /u01/swtmp/29963428/29963428
 Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-08-07_10-07-56AM_1.log
 Reason: Failed during Analysis: CheckConflictAgainstOracleHome Failed, [ Prerequisite Status: FAILED, Prerequisite output: 
 Summary of Conflict Analysis:
 There are no patches that can be applied now.
 Following patches have conflicts. Please contact Oracle Support and get the merged patch of the patches : 
 29851014, 29963428
 Conflicts/Supersets for each patch are:
 Patch : 29963428
 Bug Conflict with 29851014 Conflicting bugs are: 29039918, 27494830, 29338628, 29031452, 29264772, 29760083, 28855761 ... 
 After fixing the cause of failure Run opatchauto resume

Solution:

Oracle MOS note 1317012.1 describes how to check for such conflicts and how to request conflict/merged patches:

1. Run lsinventory from the target home:

[grid@rac1 ~]$ /u01/app/19.3.0/grid/OPatch/opatch lsinventory > GI_lsinventory.txt
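
As a side note, you can also run the conflict check locally against the Oracle home before uploading anything (patch path taken from the error above):

[grid@rac1 ~]$ /u01/app/19.3.0/grid/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /u01/swtmp/29963428/29963428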

2. Log on to support.oracle.com -> click the “Patches & Updates” tab -> enter the patch number you want to apply.

3. Click “Analyze with OPatch”.

4. Attach the GI_lsinventory.txt file created in the first step and click “Analyze for Conflict”.

5. Wait a while and you will see the result. In my case, it showed that patch 29963428 conflicts with my currently installed patches.

From the same screen I can click “Request Patch”.

6. After clicking the “Request Patch” button, I got the following error:

Click “See Details”:

The message actually means that the fix for the same bug is already included in the currently installed 19.4.0.0.190716ACFSRU.

So I don’t have to apply patch 29963428. I wanted to share the steps with you because the mentioned tool is really useful.

Resize ASM disks in AWS (FlashGrid enabled cluster)

1. Connect to the AWS console: https://console.aws.amazon.com
2. On the left side, under the ELASTIC BLOCK STORE section, choose Volumes.
3. Choose the necessary disk -> click the Actions button -> choose Modify Volume -> change Size.

Please note that all data disks (not the quorum disk) under the same diskgroup must be increased; ASM will not let you have different-sized disks in a diskgroup.

Choose the other data disks and repeat the same steps.

4. Run the following on all database nodes as the root user to rescan the block devices:

# for i in /sys/block/*/device/rescan; do echo 1 > $i; done
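
Optionally, confirm the new sizes at the OS level first, for example with lsblk:

# lsblk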

5. Check that the disks have the correct sizes:

# flashgrid-node

6. Connect to the ASM instance from any database node and run:

[grid@rac1 ~]$ sqlplus / as sysasm
SQL*Plus: Release 19.0.0.0.0 - Production on Fri Aug 23 10:17:50 2019
Version 19.4.0.0.0
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.4.0.0.0

SQL> alter diskgroup GRID resize all; 
Diskgroup altered.
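
You can cross-check the new disk sizes from v$asm_disk (values are in megabytes):

SQL> select name, os_mb, total_mb from v$asm_disk;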

FlashGrid SkyCluster Now Supports Oracle Database 19c

FlashGrid SkyCluster version 19.06 now fully supports GI/DB 19c. This means that using the FlashGrid launcher tool (https://www.flashgrid.io/skycluster-for-aws/#launch, https://www.flashgrid.io/skycluster-for-azure/#launch, https://www.flashgrid.io/skycluster-for-gcp/#launch) you can set up a multi-node Real Application Clusters deployment in the cloud automatically in about 2 hours.

“Oracle 19c is a long-term support release from Oracle with extended support available through 2026”

https://www.kb.flashgrid.io/release-notes/cloud-provisioning

19c GI & 12c RDBMS opatchauto: Re-link fails on target “procob”

Problem:

Environment: 19c GI | 12c RDBMS | RHEL 7.6

While applying the April 2019 RU on top of the RDBMS home, opatchauto failed:

# /u01/app/oracle/product/12.2.0/dbhome_1/OPatch/opatchauto apply /u01/swtmp/29314339/ -oh /u01/app/oracle/product/12.2.0/dbhome_1 

==Following patches FAILED in apply:
Patch: /u01/swtmp/29314339
Log: /u01/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatchauto/core/opatch/opatch2019-06-24_18-51-22PM_1.log
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: Re-link fails on target "procob".
Re-link fails on target "proc". 

We’ve tried using opatch instead of opatchauto and it succeeded:

$ /u01/app/oracle/product/12.2.0/dbhome_1/OPatch/opatch apply

Patch 29314339 successfully applied.
OPatch succeeded.

The problem is definitely related to opatchauto. We’ve opened an SR with Oracle, and I will update this post as soon as I get the solution from them. Until then, I want to share three workarounds for this problem with you.

Workarounds:

1. The first workaround, as you have already guessed, is to use opatch instead of opatchauto.

$ /u01/app/oracle/product/12.2.0/dbhome_1/OPatch/opatch apply
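
Note that opatch apply should be run from the unzipped patch directory, for example:

$ cd /u01/swtmp/29314339
$ /u01/app/oracle/product/12.2.0/dbhome_1/OPatch/opatch apply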

2. The second workaround is to edit the actions.xml file under the PSU directory (e.g., /u01/swtmp/29314339/etc/config/actions.xml) and remove the following entries (see Doc ID 2056670.1):

<oracle.precomp.lang opt_req="O" version="12.2.0.1.0">
<make change_dir="%ORACLE_HOME%/precomp/lib" make_file="ins_precomp.mk" make_target="procob"/>
</oracle.precomp.lang>
  
<oracle.precomp.common opt_req="O" version="12.2.0.1.0">
<make change_dir="%ORACLE_HOME%/precomp/lib" make_file="ins_precomp.mk" make_target="proc"/>
</oracle.precomp.common> 

Then retry opatchauto.

3. The third workaround is to back up the libons.so file in the GI home and then copy the one from the RDBMS home in its place (you must put it back after opatchauto succeeds).

# mv  /u01/app/19.3.0/grid/lib/libons.so  /u01/app/19.3.0/grid/lib/libons.so_backup 
# cp  /u01/app/oracle/product/12.2.0/dbhome_1/lib/libons.so  /u01/app/19.3.0/grid/lib/libons.so  

The reason I used this workaround is that I found opatchauto was using the libons.so file from the GI home instead of the RDBMS home:

[WARNING]OUI-67200:Make failed to invoke "
/usr/bin/make -f ins_precomp.mk proc ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1"….
'/u01/app/19.3.0/grid/lib/libons.so: undefined reference to `memcpy@GLIBC_2.14'                         

If we compare the libons.so files between the GI and RDBMS homes, we will find that they are not the same. The problem is that opatchauto uses the file from the wrong location. Since it is not easy to force opatchauto to use the libons.so file from the correct location (we are working on this with Oracle Support), this workaround can also be considered.

Please keep in mind that you must put the GI libons.so back after opatchauto succeeds. We don’t know what happens if the 12c libons.so file is left under the 19c GI home.
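
For example, to restore it:

# mv /u01/app/19.3.0/grid/lib/libons.so_backup /u01/app/19.3.0/grid/lib/libons.so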

Detach diskgroup from 12c GI and attach to 19c GI

Task:

We have two separate Real Application Clusters, one 12c and the other 19c. We decided to migrate data from 12c to 19c by simply detaching all ASM disks from the source and attaching them to the destination.

Steps:

1. Connect to the 12c GI as the grid user and dismount the FRA diskgroup on all nodes:

[grid@rac1 ~]$ sqlplus  / as sysasm
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
SQL> alter diskgroup FRA dismount;
Diskgroup altered. 
[grid@rac2 ~]$ sqlplus  / as sysasm
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
SQL> alter diskgroup FRA dismount;
Diskgroup altered.

You can also use srvctl to stop the diskgroup on all nodes with a single command, as shown below.
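
For example (using the node names from this cluster):

[grid@rac1 ~]$ srvctl stop diskgroup -diskgroup FRA -node "rac1,rac2"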

2. Detach the disks belonging to the diskgroup from the 12c cluster and attach them to the 19c cluster.

3. After the ASM disks are visible on the 19c cluster, connect as sysasm via the grid user and mount the diskgroup:

# Check that there is no FRA resource registered with CRS:

[root@rac1 ~]# crsctl status res -t |grep FRA

# Mount the diskgroup on all nodes

[grid@rac1 ~]$ sqlplus / as sysasm
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
SQL> alter diskgroup FRA mount;
Diskgroup altered.
[grid@rac2 ~]$ sqlplus / as sysasm
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
SQL> alter diskgroup FRA mount;
Diskgroup altered.

# FRA diskgroup resource will automatically be registered with CRS:

[root@rac1 ~]# crsctl status res -t |grep FRA
ora.FRA.dg(ora.asmgroup)

And data will be there…