Upgrade the database time zone file using the DBMS_DST package in 19c

After upgrading a database from 12c to 19c, you may need to upgrade the database time zone file version. This step is not always mandatory, but it is recommended by the pre-upgrade checker.

Useful info about why this may be needed, quoted from the ORACLE-BASE article at https://oracle-base.com/articles/misc/update-database-time-zone-file:

“From Oracle 11gR2 onward, new time zone files are shipped with upgrades and patches, but they are not automatically applied to the database.

Applying a change to the database time zone file not only affects the way new data is handled, but potentially alters data stored in TIMESTAMP WITH TIME ZONE columns, so you need to consider the impact of this before upgrading the time zone file.

Remember, if you only deal with dates in your country, and your country has not altered its time zone or daylight saving time policy, this upgrade may not be necessary.”
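Before opening the upgrade window, it is worth checking whether the database actually stores TIMESTAMP WITH TIME ZONE data outside the Oracle-maintained schemas, since those are the columns the new time zone file can affect. A minimal dictionary query (the owner filter is only an example; adjust it to your environment):

SQL> SELECT owner, table_name, column_name
FROM   dba_tab_columns
WHERE  data_type LIKE 'TIMESTAMP%WITH TIME ZONE'
AND    owner NOT IN ('SYS', 'GSMADMIN_INTERNAL', 'MDSYS', 'DVSYS')
ORDER BY owner, table_name, column_name;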

Now let’s do the time zone file upgrade:

1. Check current settings:

SQL> SELECT * FROM v$timezone_file;

FILENAME		VERSION     CON_ID
-------------------- ---------- ----------
timezlrg_26.dat 	     26 	 0
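If you want to see the target time zone file version before shutting anything down, you can print it with a quick PL/SQL call; this is the same DBMS_DST function the upgrade block below relies on, and in this case it returns 32:

SQL> SET SERVEROUTPUT ON
SQL> EXEC DBMS_OUTPUT.put_line('Latest available version: ' || DBMS_DST.get_latest_timezone_version)
Latest available version: 32

PL/SQL procedure successfully completed.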

2. Start up the database in upgrade mode:

SQL> shutdown immediate;
SQL> startup upgrade;

3. Start the upgrade window:

SQL> SET SERVEROUTPUT ON
SQL> DECLARE
  l_tz_version PLS_INTEGER;
BEGIN
  l_tz_version := DBMS_DST.get_latest_timezone_version;

  DBMS_OUTPUT.put_line('l_tz_version=' || l_tz_version);
  DBMS_DST.begin_upgrade(l_tz_version);
END;
/


l_tz_version=32
An upgrade window has been successfully started.

PL/SQL procedure successfully completed.

4. Check primary and secondary time zone versions:

SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME LIKE 'DST_%'
ORDER BY PROPERTY_NAME; 

PROPERTY_NAME		 VALUE
------------------------ ----------
DST_PRIMARY_TT_VERSION	 32
DST_SECONDARY_TT_VERSION 26
DST_UPGRADE_STATE	 UPGRADE

5. Restart the database in normal mode:

SQL> shut immediate;
SQL> startup;

6. Do the upgrade:

SQL> SET SERVEROUTPUT ON
SQL> DECLARE
  l_failures   PLS_INTEGER;
BEGIN
  DBMS_DST.upgrade_database(l_failures);
  DBMS_OUTPUT.put_line('DBMS_DST.upgrade_database : l_failures=' || l_failures);
  DBMS_DST.end_upgrade(l_failures);
  DBMS_OUTPUT.put_line('DBMS_DST.end_upgrade : l_failures=' || l_failures);
END;
/


Table list: "GSMADMIN_INTERNAL"."AQ$_CHANGE_LOG_QUEUE_TABLE_S"
Number of failures: 0
Table list: "GSMADMIN_INTERNAL"."AQ$_CHANGE_LOG_QUEUE_TABLE_L"
Number of failures: 0
Table list: "MDSYS"."SDO_DIAG_MESSAGES_TABLE"
Number of failures: 0
Table list: "DVSYS"."SIMULATION_LOG$"
Number of failures: 0
Table list: "DVSYS"."AUDIT_TRAIL$"
Number of failures: 0
DBMS_DST.upgrade_database : l_failures=0
An upgrade window has been successfully ended.
DBMS_DST.end_upgrade : l_failures=0

PL/SQL procedure successfully completed.

7. Check new settings:

SQL> SELECT * FROM v$timezone_file;

FILENAME		VERSION     CON_ID
-------------------- ---------- ----------
timezlrg_32.dat 	     32 	 0


SQL> COLUMN property_name FORMAT A30
SQL> COLUMN property_value FORMAT A20

SQL> SELECT property_name, property_value
FROM   database_properties
WHERE  property_name LIKE 'DST_%'
ORDER BY property_name;

PROPERTY_NAME		       PROPERTY_VALUE
------------------------------ --------------------
DST_PRIMARY_TT_VERSION	       32
DST_SECONDARY_TT_VERSION       0
DST_UPGRADE_STATE	       NONE

ORA-56920: a prepare or upgrade window or an on-demand or datapump-job loading of a secondary time zone data file is in an active state

Problem:

During the DST upgrade, while following the steps described above, I got the following error:

SQL> DECLARE
  l_tz_version PLS_INTEGER;
BEGIN
  l_tz_version := DBMS_DST.get_latest_timezone_version;
  DBMS_OUTPUT.put_line('l_tz_version=' || l_tz_version);
  DBMS_DST.begin_upgrade(l_tz_version);
END;
/ 

DECLARE
*
ERROR at line 1:
ORA-56920: a prepare or upgrade window or an on-demand or datapump-job loading
of a secondary time zone data file is in an active state
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DST", line 1208
ORA-06512: at line 6

Solution:

The error simply means that this step has already been done and you can continue with the remaining steps. Make sure that the DST_UPGRADE_STATE column in the following query shows the value UPGRADE.

SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME LIKE 'DST_%'
ORDER BY PROPERTY_NAME;

PROPERTY_NAME		 VALUE
------------------------ ----------
DST_PRIMARY_TT_VERSION	 32
DST_SECONDARY_TT_VERSION 26
DST_UPGRADE_STATE	 UPGRADE

There is no need to do anything else; continue with the next DST upgrade steps.

PRCD-1229 : An attempt to access configuration of database orcl was rejected because its version 12.2.0.1.0 differs from the program version 19.0.0.0.0

Problem:

After a manual database upgrade from 12c to 19c, I was not able to start the database instance using srvctl.

[oracle@rac1 ~]$ srvctl start instance -db orcl -n rac1

PRCD-1027 : Failed to retrieve database orcl
PRCD-1229 : An attempt to access configuration of database orcl was rejected because its version 12.2.0.1.0 differs from the program version 19.0.0.0.0. Instead run the program from /u01/app/oracle/product/12.2.0/dbhome_1.

Solution:

Use srvctl upgrade from the new home:

$ /u01/app/oracle/product/19.3.0/dbhome_1/bin/srvctl upgrade database -d orcl -oraclehome /u01/app/oracle/product/19.3.0/dbhome_1

Try to start again:

$ srvctl start instance -db orcl -n rac1
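To confirm that the clusterware configuration now points to the 19c home, you can check the database configuration; the Oracle home line should show the new path (database name and paths are from this example):

$ srvctl config database -db orcl | grep -i "oracle home"
Oracle home: /u01/app/oracle/product/19.3.0/dbhome_1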

ORA-39701: database must be mounted EXCLUSIVE for UPGRADE or DOWNGRADE

Problem:

In a cluster environment, I was not able to start the database in upgrade mode:

SQL> startup upgrade

ORACLE instance started.
Total System Global Area 1996486272 bytes
Fixed Size		    8898176 bytes
Variable Size		  704643072 bytes
Database Buffers	 1275068416 bytes
Redo Buffers		    7876608 bytes
Database mounted.
ORA-00603: ORACLE server session terminated by fatal error
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-39701: database must be mounted EXCLUSIVE for UPGRADE or DOWNGRADE

The following also did not work:

SQL> startup mount exclusive

ORACLE instance started.
Total System Global Area 1996486272 bytes
Fixed Size		    8898176 bytes
Variable Size		  704643072 bytes
Database Buffers	 1275068416 bytes
Redo Buffers		    7876608 bytes
Database mounted.

SQL> alter database open upgrade;
alter database open upgrade
*
ERROR at line 1:
ORA-00603: ORACLE server session terminated by fatal error
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-39701: database must be mounted EXCLUSIVE for UPGRADE or DOWNGRADE

Solution:

Change the cluster_database parameter to FALSE, then start up in upgrade mode:

$ sqlplus / as sysdba

SQL> startup nomount;

SQL> alter system set cluster_database=FALSE scope=spfile sid='*';

SQL> shutdown immediate;

SQL> startup upgrade

ORACLE instance started.
Total System Global Area 1996486272 bytes
Fixed Size		    8898176 bytes
Variable Size		  704643072 bytes
Database Buffers	 1275068416 bytes
Redo Buffers		    7876608 bytes
Database mounted.
Database opened.

After finishing your work, don’t forget to set the cluster_database parameter back to TRUE and restart your database:

SQL> alter system set cluster_database=TRUE scope=spfile sid='*';
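A minimal sequence to bring all instances back up after the parameter change, assuming the database is registered in clusterware as orcl (adjust the name to your environment):

SQL> shutdown immediate;

$ srvctl start database -db orcl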

Manually installing the HR Schema in 19c

When creating a database using DBCA there is an option to install the sample schemas automatically, but if you’ve missed that step, here are the steps to manually install the HR schema (only) in your database:

1. Log on to SQL*Plus as SYS with the SYSDBA privilege:

$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Aug 20 11:44:54 2020
Version 19.7.0.0.0

SQL> 

2. Run the hr_main.sql script using the following command:

SQL> @?/demo/schema/human_resources/hr_main.sql

specify password for HR as parameter 1:
Enter value for 1: hr

specify default tablespeace for HR as parameter 2:
Enter value for 2: users

specify temporary tablespace for HR as parameter 3:
Enter value for 3: temp

specify log path as parameter 4:
Enter value for 4: $ORACLE_HOME/demo/schema/log/

PL/SQL procedure successfully completed.
...
******  Creating REGIONS table ....
...
******  Creating COUNTRIES table ....
...
******  Creating LOCATIONS table ....
...
******  Creating DEPARTMENTS table ....
...
******  Creating JOBS table ....
...
******  Creating EMPLOYEES table ....
...
******  Creating JOB_HISTORY table ....
...
******  Creating EMP_DETAILS_VIEW view ...
...
******  Populating REGIONS table ....
...
******  Populating COUNTIRES table ....
...
PL/SQL procedure successfully completed.

3. Verify that the schema was created:

SQL> SELECT table_name FROM dba_tables where owner='HR';

TABLE_NAME
------------------------------------------------------------
REGIONS
COUNTRIES
LOCATIONS
DEPARTMENTS
JOBS
EMPLOYEES
JOB_HISTORY

7 rows selected.
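As an extra sanity check, you can also count the rows in one of the populated tables; in the standard HR sample schema the EMPLOYEES table ships with 107 rows:

SQL> SELECT COUNT(*) FROM hr.employees;

  COUNT(*)
----------
       107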

ACFS-05913: unable to contact the standby node stbyrac1

Problem:

I was trying to set up ACFS replication, where one of the steps is to validate the keys using acfsutil. It failed with the ACFS-05913 error:

[root@rac1 .ssh]# acfsutil repl info -c -u oggrepl stbyrac1 stbyrac2 /GG
acfsutil repl info: ACFS-05913: unable to contact the standby node stbyrac1
acfsutil repl info: ACFS-05913: unable to contact the standby node stbyrac2

Cause: 

An attempt to use the ping utility to contact a standby node failed.
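You can confirm this by pinging the standby nodes manually from the primary node (hostnames are from this example); if ICMP is blocked, the pings will time out:

[root@rac1 ~]# ping -c 2 stbyrac1
[root@rac1 ~]# ping -c 2 stbyrac2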

Solution:

Enable ICMP traffic between these nodes and retry the validation:

[root@rac1 .ssh]# acfsutil repl info -c -u oggrepl stbyrac1 stbyrac2 /GG
A valid 'ssh' connection was detected for standby node stbyrac1 as user oggrepl.
A valid 'ssh' connection was detected for standby node stbyrac2 as user oggrepl.

srvctl start filesystem hangs

The title of this post is general; there can be many reasons why srvctl start filesystem hangs. The aim of this post is to share just one of them.

Problem:

I created an ACFS volume and added it to srvctl:

$ asmcmd volcreate -G OGG -s 10G ACFSGG
# srvctl add filesystem -device /dev/asm/acfsgg-11 -path /GG_HOME -volume acfsgg -diskgroup OGG -user oracle -fstype ACFS

then tried to start the filesystem using:

# srvctl start filesystem -device /dev/asm/acfsgg-11

which hung.

Troubleshooting:

I checked the logs under the trace folder under the GI base, but could not find any clue. Even worse, stopping the filesystem was also hanging.

But let’s stop here: the file that should have been checked was actually there; I simply missed it and checked the wrong files. The file that shows the relevant error is named mount_<process id>.trc and is located under the trace folder. So instead of manually mounting the filesystem to see the error, you can just open that mount_<process id>.trc file and find the reason there.
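If you are not sure where that trace folder is, one way to find the most recent mount trace is a simple ls, assuming the Grid Infrastructure base is /u01/app/grid and the node is stbyrac1, as in this environment:

[root@stbyrac1 ~]# ls -lt /u01/app/grid/diag/crs/stbyrac1/crs/trace/mount_*.trc | head -3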

Then I tried mounting the filesystem manually, without srvctl:

[root@stbyrac1 trace]# /bin/mount -t acfs  /dev/asm/acfsgg-11 /GG_HOME
mount.acfs: ACFS-03037: not an ACFS file system

and saw the error, which explained what was happening: my volume was not formatted with the ACFS filesystem. Somehow I had missed that step on the standby cluster, so it was just a human error, but srvctl should at least have reported that instead of hanging and leaving the information in a trace file.

Solution:

Format ACFS volume:

[root@stbyrac1 trace]# mkfs -t acfs /dev/asm/acfsgg-11
mkfs.acfs: version                   = 19.0.0.0.0
mkfs.acfs: on-disk version           = 46.0
mkfs.acfs: volume                    = /dev/asm/acfsgg-11
mkfs.acfs: volume size               = 10737418240  (  10.00 GB )
mkfs.acfs: Format complete.

Because the start and stop operations were hanging, you need to mount the filesystem on all database nodes manually:

[root@stbyrac1 ~]# /bin/mount -t acfs  /dev/asm/acfsgg-11 /GG_HOME
[root@stbyrac2 ~]# /bin/mount -t acfs  /dev/asm/acfsgg-11 /GG_HOME

Now try to stop and start the filesystem, to make sure srvctl is able to do its job without any manual intervention:

[root@stbyrac1 ~]# srvctl stop filesystem -device /dev/asm/acfsgg-11
[root@stbyrac1 ~]# srvctl start filesystem -device /dev/asm/acfsgg-11
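Once both commands return, you can also check the resource state; the output should list the nodes where the filesystem is mounted:

[root@stbyrac1 ~]# srvctl status filesystem -device /dev/asm/acfsgg-11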

Identify whether database is RAC or SI using crsctl

Environment:

GI – configured as a cluster
orclsingle – configured as a Single Instance
orclrac – configured as RAC
orclone – configured as RACOneNode

Q1: How to identify whether a database is RAC or SI?

A1: crsctl stat res shows the CLUSTER_DATABASE parameter. In case of RAC or RACOneNode the value is TRUE; otherwise it is FALSE.

# crsctl stat res ora.orclrac.db -p|grep CLUSTER_DATABASE
CLUSTER_DATABASE=true

# crsctl stat res ora.orclone.db -p|grep CLUSTER_DATABASE
CLUSTER_DATABASE=true

# crsctl stat res ora.orclsingle.db -p|grep CLUSTER_DATABASE
CLUSTER_DATABASE=false

Please note that the same can be identified from sqlplus by checking the CLUSTER_DATABASE initialization parameter.
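For example, a quick check from sqlplus (output trimmed to the relevant line):

SQL> show parameter cluster_database

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cluster_database                     boolean     TRUE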

Q2: How to identify whether my database is RAC or RACOneNode?

A2: srvctl config database shows Type, which can have one of the following values: SINGLE, RAC, or RACOneNode.

$ srvctl config database -db orclrac|grep Type
Type: RAC

$ srvctl config database -db orclone|grep Type
Type: RACOneNode

$ srvctl config database -db orclsingle|grep Type
Type: SINGLE

INS-45511: Installer has detected that an Oracle Grid Infrastructure home is marked incorrectly as configured

Problem:

After deconfiguring the Oracle Restart stack using:

[root@rac1 ~]# /u01/app/19.3.0/grid/root.sh -deconfig
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/19.3.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/19.3.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac1/crsconfig/hadeconfig.log
2020/07/04 10:49:21 CLSRSC-332: CRS resources for listeners are still configured
2020/07/04 10:49:49 CLSRSC-337: Successfully deconfigured Oracle Restart stack

I then tried to configure GI as a clusterware stack and got the following error:

INS-45511: Installer has detected that an Oracle Grid Infrastructure home is marked incorrectly as configured

Solution:

Remove CRS="true" from the GI home entry in /u01/app/oraInventory/ContentsXML/inventory.xml.

Original:

<HOME NAME="OraGI19Home1" LOC="/u01/app/19.3.0/grid" TYPE="O" IDX="1" CRS="true"/>

After modification:

<HOME NAME="OraGI19Home1" LOC="/u01/app/19.3.0/grid" TYPE="O" IDX="1"/>
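To double-check which home entries are flagged before and after editing, you can simply grep the inventory (path from this example):

[root@rac1 ~]# grep 'CRS="true"' /u01/app/oraInventory/ContentsXML/inventory.xml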

Then retry the configuration.

Change Oracle DB name using NID

Details:

Environment: RAC
Source name: orclA
Target name: orcl

1. Make sure that you have a recoverable whole database backup.

2. In a Real Application Clusters environment, we need to set the cluster_database parameter to false and mount the instance on only one node:

[oracle@primrac1 ~]$ sqlplus / as sysdba
SQL> alter system set cluster_database=false scope=spfile;

[oracle@primrac1 ~]$ srvctl stop database -db orclA
[oracle@primrac1 ~]$ sqlplus / as sysdba
SQL> startup mount;

3. From the first node, run the nid utility. Specify a username with the SYSDBA privilege, the target database name, and set the SETNAME parameter to YES.

[oracle@primrac1 ~]$ . oraenv
ORACLE_SID = [oracle] ? orclA1
The Oracle base remains unchanged with value /u01/app/oracle

[oracle@primrac1 ~]$ nid TARGET=SYS DBNAME=orcl SETNAME=YES
DBNEWID: Release 19.0.0.0.0 - Production on Mon Dec 16 16:54:36 2019
Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
Password:
Connected to database ORCLA (DBID=3133348785)
Connected to server version 19.4.0
Control Files in database:
    +DATA/ORCLA/CONTROLFILE/current.290.1020357617
Change database name of database ORCLA to ORCL? (Y/[N]) => Y
Proceeding with operation
Changing database name from ORCLA to ORCL
     Control File +DATA/ORCLA/CONTROLFILE/current.290.1020357617 - modified
     Datafile +DATA/ORCLA/DATAFILE/system.283.102035744 - wrote new name
     Datafile +DATA/ORCLA/DATAFILE/sysaux.286.102035749 - wrote new name
     Datafile +DATA/ORCLA/DATAFILE/undotbs1.287.102035753 - wrote new name
     Datafile +DATA/ORCLA/DATAFILE/undotbs2.300.102035839 - wrote new name
     Datafile +DATA/ORCLA/DATAFILE/users.288.102035754 - wrote new name
     Datafile +DATA/ORCLA/TEMPFILE/temp.296.102035765 - wrote new name
     Control File +DATA/ORCLA/CONTROLFILE/current.290.1020357617 - wrote new name
     Instance shut down
 Database name changed to ORCL.
 Modify parameter file and generate a new password file before restarting.
 Succesfully changed database name.
 DBNEWID - Completed succesfully.

4. Change the db_name parameter in the server parameter file (spfile):

[oracle@primrac1 dbs]$ . oraenv
ORACLE_SID = [orcl1] ? orclA1

[oracle@primrac1 dbs]$ sqlplus / as sysdba
SQL> startup nomount;
SQL> alter system set db_name=orcl scope=spfile;
SQL> alter system set cluster_database=true scope=spfile;
SQL> shut immediate;

5. Modify the database name in srvctl:

[oracle@primrac1 dbs]$ srvctl modify database -db orclA -dbname orcl

6. Remove the existing password file entry and create a new one:

[oracle@primrac1 dbs]$ srvctl modify database -db orclA -pwfile

[oracle@primrac1 dbs]$ orapwd  dbuniquename=orclA file='+DATA/ORCLA/PASSWORD/pwdorcl.ora'
Enter password for SYS:

7. Start the database using srvctl:

[oracle@primrac1 dbs]$ srvctl start database -db orclA
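To verify the result, check the name recorded in the control file; the DBID stays the same because SETNAME=YES changes only the name:

SQL> SELECT name, dbid FROM v$database;

NAME      DBID
--------- ----------
ORCL      3133348785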

Depending on your needs, you may also update the instance names. If so, please make sure that you don’t have instance-specific parameters (e.g. orclA1.instance_number); otherwise, it’s better to recreate the parameter file with the correct instance names.

I’ve shown you just a simple example of the database name change. In your specific case, there may be other things that depend on the database name.