ORA-12578: TNS:wallet open failed while starting the database

Problem:

ORA-12578: TNS:wallet open failed is encountered while starting the database using SRVCTL: the auto-login wallet does not open during database startup when the database is started via SRVCTL.

$ srvctl start database -db orcl
PRCD-1332 : failed to start database orcl
PRCR-1079 : Failed to start resource ora.orcl.db
CRS-5017: The resource action "ora.orcl.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/ORCL/PARAMETERFILE/spfile.291.1001107969'
ORA-17503: ksfdopn:10 Failed to open file +DATA/ORCL/PARAMETERFILE/spfile.291.1001107969
ORA-12578: TNS:wallet open failed

There are two solutions:

Solution1:

Start the database in mount mode using sqlplus:

SQL> startup mount;

Open the wallet:

SQL> alter system set encryption wallet open identified by "wallet_password";

Open the database:

SQL> alter database open;
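
Note that on 12c and later the ALTER SYSTEM SET ENCRYPTION WALLET syntax is deprecated; assuming the same keystore password, the equivalent ADMINISTER KEY MANAGEMENT command would be:

SQL> administer key management set keystore open identified by "wallet_password";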

Solution2:

Check the value of TNS_ADMIN by executing:

$ srvctl getenv database -d orcl -t "TNS_ADMIN"
orcl:
PRKF-1128 : Environment variable TNS_ADMIN is not defined.

Set the correct TNS_ADMIN value by executing:

$ srvctl setenv database -d orcl -T "TNS_ADMIN=/u01/app/oracle/product/18.3.0/dbhome_1/network/admin"

Change the following parameter in the sqlnet.ora file:

From:

SQLNET.WALLET_OVERRIDE = TRUE

To:

SQLNET.WALLET_OVERRIDE = FALSE
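
For reference, a minimal sqlnet.ora wallet section with the override disabled might look like the following; the wallet directory below is only an example, adjust it to your environment:

WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /u01/app/oracle/admin/orcl/wallet)
    )
  )

SQLNET.WALLET_OVERRIDE = FALSE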

Start the database:

$ srvctl start database -db orcl

Backup best practices for Oracle Clusterware

I recommend backing up clusterware-related files after the initial setup and after any change. These backups can save you from OCR or OLR corruption during a GI patch. If any of the files become corrupted, you will be able to recover it in minutes (or seconds). Depending on the failure, you may otherwise lose several hours recovering your cluster to the state it was in before the incident.

Here are the steps to protect your cluster:

1. Back up the ASM spfile initially and after any change.

There are several ways to back up the ASM spfile: spcopy, spbackup, or create pfile='<backup location>' from spfile.

To locate the Oracle ASM SPFILE, use the ASMCMD spget command:

ASMCMD> spget
+GRID/myrac/ASMPARAMETERFILE/registry.253.974466047

Copy the Oracle ASM SPFILE to the backup location:

ASMCMD> spbackup +GRID/myrac/ASMPARAMETERFILE/registry.253.974466047 /backup/spfileasm.ora
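
Alternatively, a plain-text copy can be created from the ASM instance with CREATE PFILE (the target file name here is just an example):

SQL> create pfile='/backup/pfileASM.ora' from spfile;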

2. Backing up the ASM password file once should be enough. If you change the password for a password-file user or add another user to the list, make a new backup.

Locate the password file using the ASMCMD pwget command.

ASMCMD> pwget --asm
+GRID/orapwASM

Back up the password file to another location with the pwcopy command.

ASMCMD> pwcopy +GRID/orapwASM  /backup/orapwASM 
copying +GRID/orapwASM -> /backup/orapwASM

3. Use the md_backup command to create a backup file containing metadata for one or more disk groups.

To back up metadata for all disk groups, do the following:

ASMCMD> md_backup /tmp/dgmetabackup

Disk group metadata to be backed up: DATA
Disk group metadata to be backed up: FRA
Disk group metadata to be backed up: GRID

If you need to back up metadata only for a specific disk group, use the -G option.
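
For example, to back up the metadata of the DATA disk group only (the output file name is arbitrary):

ASMCMD> md_backup /tmp/data_dgmetabackup -G DATA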

4. Back up the OLR on each node.

If the OLR is missing or corrupted, the clusterware cannot be started on that node, so make a manual backup initially and after any change.

Do the following on each node:

# ocrconfig -local -manualbackup

Copy the generated file to the backup location:

# cp /u01/app/12.2.0/grid/cdata/rac1/backup_20180510_230359.olr /backup/

Or change the default backup location to /backup before making the actual backup:

# ocrconfig -local -backuploc /backup

# ocrconfig -local -manualbackup
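
To confirm that the manual OLR backup has been registered, you can list the backups on each node:

# ocrconfig -local -showbackup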

5. Mirror and back up the OCR.

You should configure at least two OCR locations in independent disk groups, typically the work area and the recovery area.

# ocrconfig -add +FRA
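
You can then verify the configured OCR locations and their integrity with:

# ocrcheck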

Automatic OCR backups are taken regularly; backups from the past 4 hours, 8 hours, and 12 hours, as well as from the last day and week, are retained.

You can also manually back up the OCR before applying a patch or upgrading the GI home:

# ocrconfig -manualbackup

Regularly save the taken backups to another location as follows:

Identify the latest backup (manual or automatic):

[grid@rac1 ~]$ ocrconfig -showbackup
rac1 2018/05/10 13:06:18 +GRID:/myrac/OCRBACKUP/backup00.ocr.289.975762375 830990544
..

Copy it to the backup location:

$ ocrconfig -copy +GRID:/myrac/OCRBACKUP/backup00.ocr.289.975762375 /backup/backup00.ocr

Or change the default backup location to a disk group other than GRID:

# ocrconfig -backuploc +FRA

 

 

Add filegroup fails with ORA-15067: command or option incompatible with diskgroup redundancy

Problem:

I was trying to add a filegroup to the FRA disk group:

SQL> alter diskgroup FRA add filegroup high_filegroup database orcl set 'datafile.redundancy' = 'HIGH';

Error:

ORA-15067: command or option incompatible with diskgroup redundancy

Troubleshooting:

Checking diskgroup type:

SQL> select name, type, compatibility, database_compatibility from v$asm_diskgroup where name='FRA';

NAME  TYPE    COMPATIBILITY  DATABASE_COMPATIBILITY
----- ------- -------------- ----------------------
FRA   NORMAL  18.0.0.0.0     12.2.0.1.0

Solution:

Change diskgroup type to FLEX:

SQL> alter diskgroup FRA convert redundancy to flex;
Diskgroup altered.

Check that the type was changed:

SQL> select name, type, compatibility, database_compatibility from v$asm_diskgroup where name='FRA';

NAME  TYPE    COMPATIBILITY  DATABASE_COMPATIBILITY
----- ------- -------------- ----------------------
FRA   FLEX    18.0.0.0.0     12.2.0.1.0

Adding the filegroup now succeeds:

SQL> alter diskgroup FRA add filegroup high_filegroup database orcl set 'datafile.redundancy' = 'HIGH';
Diskgroup altered.
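
If you want to double-check the result, the new filegroup should also be visible from the ASM instance (a quick sanity query; the exact columns may vary by release):

SQL> select group_number, name from v$asm_filegroup;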

Daylight saving time support in Oracle CRS

Dear readers,

I am glad to announce that my blog has been included in the Top 50 Oracle Blogs. For more information about the Top 100 Oracle Blogs and Websites for Oracle DBAs to Follow in 2018, please visit https://blog.feedspot.com/oracle_blogs. You will improve your knowledge and experience by following them.

In this post, I want to share my experience of how I solved the daylight saving time problem with Oracle CRS. With the default setup, if the timezone changes on your system, clients/applications that connect to the database remotely will still see the old timezone information and will enter wrong data (local/BEQ connections get the correct timezone).

Countries that are not affected by daylight saving time are lucky and do not have to worry about this. But if your servers are not located in one of those lucky countries, you must make CRS DST aware.

During GI installation, Oracle saves the timezone information in the $CRS_HOME/crs/install/s_crsconfig_<hostname>_env.txt file, which keeps the TZ for CRS from changing even when it is changed at the OS level.

Please note that the timezone can be changed for an individual database using srvctl:

srvctl setenv database -db <dbname> -env 'TZ=<time zone>'

But I do not recommend doing that, because you would have to repeat it every time you create a new database.
It is better to change TZ globally at the CRS level.

In simple words, commenting out the TZ variable in $CRS_HOME/crs/install/s_crsconfig_<hostname>_env.txt and restarting CRS on each node just once is enough, but let's verify it.

1.  List the current timezone settings:

[root@rac1 ~]# timedatectl status|grep zone
Time zone: UTC (UTC, +0000)
[root@rac2 ~]#  timedatectl status|grep zone
Time zone: UTC (UTC, +0000)

2. Change timezone at OS level:

[root@rac1 ~]# timedatectl set-timezone Europe/Bratislava
[root@rac2 ~]# timedatectl set-timezone Europe/Bratislava

3. Check local and scan connections:

[oracle@rac1 ~]$ sqlplus / as sysdba

SQL> select to_char(sysdate,'HH24:MI:SS AM')  dbtime from dual;

DBTIME
-----------
18:50:05 PM     <<<<<<<<<<<< Correct, same as OS

[oracle@rac1 ~]$ sqlplus marik/123@ORCL

SQL> select to_char(sysdate,'HH24:MI:SS AM') dbtime from dual;

DBTIME
-----------
16:50:10 PM     <<<<<<<<<<<<Incorrect

4. Comment out TZ in the config file:

[root@rac1 ~]# cat /u01/app/18.3.0/grid/crs/install/s_crsconfig_rac1_env.txt|grep TZ=
#   the appropriate time zone name. For example, TZ=America/New_York
#TZ=UTC

[root@rac2 ~]# cat /u01/app/18.3.0/grid/crs/install/s_crsconfig_rac2_env.txt|grep TZ=
#   the appropriate time zone name. For example, TZ=America/New_York
#TZ=UTC

5. Restart CRS on both nodes:

[root@rac1 ~]#  crsctl stop crs
[root@rac1 ~]#  crsctl start crs -wait
[root@rac2 ~]#  crsctl stop crs
[root@rac2 ~]#  crsctl start crs -wait

6. Change the timezone at the OS level a few more times and check local & SCAN connections:

[root@rac1 ~]# timedatectl set-timezone Africa/Conakry
[root@rac2 ~]# timedatectl set-timezone Africa/Conakry

Important: You need to reconnect to the database (sessions must be disconnected and reconnected; old connections keep the old settings).

[oracle@rac1 ~]$ sqlplus / as sysdba

SQL> Select to_char(sysdate,'HH24:MI:SS AM') dbtime from dual;

DBTIME
-----------
17:15:56 PM <<<<<<<<<<<<Correct


[oracle@rac1 ~]$ sqlplus marik/123@ORCL

SQL> Select to_char(sysdate,'HH24:MI:SS AM') dbtime from dual;

DBTIME
-----------
17:15:27 PM <<<<<<<<<<<<Correct

Change the timezone one more time:

[root@rac1 ~]# timedatectl set-timezone America/Aruba
[root@rac2 ~]# timedatectl set-timezone America/Aruba

Exit connections and reconnect:

[oracle@rac1 ~]$ sqlplus / as sysdba

SQL> Select to_char(sysdate,'HH24:MI:SS AM') dbtime from dual;

DBTIME
-----------
13:17:47 PM <<<<<<<<<<<<Correct

[oracle@rac1 ~]$ sqlplus marik/123@ORCL

SQL> Select to_char(sysdate,'HH24:MI:SS AM') dbtime from dual;

DBTIME
-----------
13:17:31 PM <<<<<<<<<<<<Correct

MGTCA-1176/MGTCA-1162 : An error occurred while marking the Cluster Manifest File as expired.

If you are installing "Oracle Member Cluster for Oracle Database" and, during the installation, the GIMR configuration assistant fails with the following error:

MGTCA-1176 : An error occurred while marking the Cluster Manifest File as expired.
MGTCA-1162 : failed to add a property to the provided Cluster Manifest File

Just give the following permissions to the manifest file to let the installer make changes to it:

chmod 777 manifest.xml

There is no useful info about this on the internet or on Metalink! I figured it out myself, which is why I am posting this simple solution here.

Disable Auto Space Advisor

On our production database, Segment Space Advisor jobs were running very frequently under different names (SYS.ORA$AT_SA_SPC_SY_***). They were causing 100% I/O activity while generating advice on a table containing XML files; the LOB segment for this table is 565G.

In my opinion, it is fair to disable this auto task and run it manually whenever you want.

BEGIN
  DBMS_AUTO_TASK_ADMIN.disable(
    client_name => 'auto space advisor',
    operation   => NULL,
    window_name => NULL);
END;
/

Check the status:

SQL> select client_name, status
  2  from dba_autotask_client;

CLIENT_NAME                     STATUS
------------------------------- --------
auto optimizer stats collection ENABLED
auto space advisor              DISABLED
sql tuning advisor              DISABLED
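
If you later decide to turn the task back on, the same package provides an enable procedure with the same arguments:

BEGIN
  DBMS_AUTO_TASK_ADMIN.enable(
    client_name => 'auto space advisor',
    operation   => NULL,
    window_name => NULL);
END;
/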

Thanks to Fikria for this case 🙂

Exadata: Rebuild RAC clusterware without deleting data Version 2

This post is different from my previous post, Rebuild RAC clusterware without deleting data. Two days ago I was upgrading Grid Infrastructure from 12.1 to 12.2; it succeeded on the first node but failed on the second node. I will not describe why this happened, but the whole process turned out to be complicated rather than simple. We had installed several patches before the installation (gridSetup has an option to indicate patches before installation)… It seems the 12.2 software has many bugs even during the upgrade process (though I agree with other DBAs that the 12.2 database itself is very stable).

What happened is that during the first node's upgrade the OCR files were changed. I tried to deconfigure from the 12.2 home and that also failed. So I was left with a clusterware that had corrupted OCR and voting disks (belonging to the 12.2 version). In my previous post I started the clusterware in exclusive mode with -nocrs and restored the OCR from backup, but now, because the voting disks are a different version, it does not start even in exclusive mode.

So I followed the steps to recreate the disk group where the OCR and voting disks are stored. Because these are Exadata cell storage disks, it was more complicated than with ordinary disks, where you can clean up the header using "dd". Instead of dd you use cellcli.

So let’s start:

1. Connect to each cell server (I have three of them) and drop the grid disks that belong to DBFS (it contains the OCR and voting disks). Be careful: dropping a grid disk erases its data, so DBFS must contain only the OCR and voting disks, not DATA!

#Find the name, celldisk and size of the grid disk:

CellCLI> list griddisk where name like 'DBFS_.*' attributes name, cellDisk, size
DBFS_CD_02_lbcel01_dr_adm CD_02_lbcel01_dr_adm 33.796875G
DBFS_CD_03_lbcel01_dr_adm CD_03_lbcel01_dr_adm 33.796875G
DBFS_CD_04_lbcel01_dr_adm CD_04_lbcel01_dr_adm 33.796875G
DBFS_CD_05_lbcel01_dr_adm CD_05_lbcel01_dr_adm 33.796875G
DBFS_CD_06_lbcel01_dr_adm CD_06_lbcel01_dr_adm 33.796875G
DBFS_CD_07_lbcel01_dr_adm CD_07_lbcel01_dr_adm 33.796875G
DBFS_CD_08_lbcel01_dr_adm CD_08_lbcel01_dr_adm 33.796875G
DBFS_CD_09_lbcel01_dr_adm CD_09_lbcel01_dr_adm 33.796875G
DBFS_CD_10_lbcel01_dr_adm CD_10_lbcel01_dr_adm 33.796875G
DBFS_CD_11_lbcel01_dr_adm CD_11_lbcel01_dr_adm 33.796875G

 

#Drop

CellCLI> drop griddisk DBFS_CD_02_lbcel01_dr_adm
drop griddisk DBFS_CD_03_lbcel01_dr_adm
drop griddisk DBFS_CD_04_lbcel01_dr_adm
drop griddisk DBFS_CD_05_lbcel01_dr_adm
drop griddisk DBFS_CD_06_lbcel01_dr_adm
drop griddisk DBFS_CD_07_lbcel01_dr_adm
drop griddisk DBFS_CD_08_lbcel01_dr_adm
drop griddisk DBFS_CD_09_lbcel01_dr_adm
drop griddisk DBFS_CD_10_lbcel01_dr_adm
drop griddisk DBFS_CD_11_lbcel01_dr_adm

#Create

cellcli> create griddisk DBFS_CD_02_lbcel01_dr_adm celldisk=CD_02_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_03_lbcel01_dr_adm celldisk=CD_03_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_04_lbcel01_dr_adm celldisk=CD_04_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_05_lbcel01_dr_adm celldisk=CD_05_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_06_lbcel01_dr_adm celldisk=CD_06_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_07_lbcel01_dr_adm celldisk=CD_07_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_08_lbcel01_dr_adm celldisk=CD_08_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_09_lbcel01_dr_adm celldisk=CD_09_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_10_lbcel01_dr_adm celldisk=CD_10_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_11_lbcel01_dr_adm celldisk=CD_11_lbcel01_dr_adm, size=33.796875G

Do the same steps on other cells.

2. Deconfigure the clusterware on each node

# Run deconfig

/u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -deconfig -force

# Rename the gpnp profile

mv /u01/app/12.1.0.2/grid/gpnp/profiles/peer/profile.xml /tmp/profile_backup.xml

3. Run root.sh on first node

/u01/app/12.1.0.2/grid/root.sh

It will fail because it will not find the DBFS disk group to mount, and of course the OCR inside it. But now ASM is started in nomount mode and we are able to recreate the disk group.

4. Create the DBFS disk group

sqlplus / as sysasm

SQL> create diskgroup DBFS
failgroup LBCEL01_DR_ADM disk 'o/*/DBFS_CD_02_lbcel01_dr_adm','o/*/DBFS_CD_03_lbcel01_dr_adm','o/*/DBFS_CD_04_lbcel01_dr_adm','o/*/DBFS_CD_05_lbcel01_dr_adm','o/*/DBFS_CD_06_lbcel01_dr_adm','o/*/DBFS_CD_07_lbcel01_dr_adm','o/*/DBFS_CD_08_lbcel01_dr_adm','o/*/DBFS_CD_09_lbcel01_dr_adm','o/*/DBFS_CD_10_lbcel01_dr_adm','o/*/DBFS_CD_11_lbcel01_dr_adm'
failgroup LBCEL02_DR_ADM disk 'o/*/DBFS_CD_02_lbcel02_dr_adm','o/*/DBFS_CD_03_lbcel02_dr_adm','o/*/DBFS_CD_04_lbcel02_dr_adm','o/*/DBFS_CD_05_lbcel02_dr_adm','o/*/DBFS_CD_06_lbcel02_dr_adm','o/*/DBFS_CD_07_lbcel02_dr_adm','o/*/DBFS_CD_08_lbcel02_dr_adm','o/*/DBFS_CD_09_lbcel02_dr_adm','o/*/DBFS_CD_10_lbcel02_dr_adm','o/*/DBFS_CD_11_lbcel02_dr_adm'
failgroup LBCEL03_DR_ADM disk 'o/*/DBFS_CD_02_lbcel03_dr_adm','o/*/DBFS_CD_03_lbcel03_dr_adm','o/*/DBFS_CD_04_lbcel03_dr_adm','o/*/DBFS_CD_05_lbcel03_dr_adm','o/*/DBFS_CD_06_lbcel03_dr_adm','o/*/DBFS_CD_07_lbcel03_dr_adm','o/*/DBFS_CD_08_lbcel03_dr_adm','o/*/DBFS_CD_09_lbcel03_dr_adm','o/*/DBFS_CD_10_lbcel03_dr_adm','o/*/DBFS_CD_11_lbcel03_dr_adm'
ATTRIBUTE
'compatible.asm'='12.1.0.2.0',
'compatible.rdbms'='11.2.0.2.0',
'au_size'='4194304',
'cell.smart_scan_capable'='TRUE';

5. Do the following steps:

* Deconfigure root.sh again on the first node
* Remove the gpnp profile
* Run root.sh again on the first node

This time root.sh should be successful.

6. Restore OCR

The /u01/app/12.1.0.2/grid/cdata/<clustername> directory contains OCR backups by default.

crsctl stop crs -f
crsctl start crs -excl -nocrs
ocrconfig -restore /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/backup00.ocr
crsctl stop crs -f
crsctl start crs
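
Before running root.sh on the second node, it is worth verifying the restored registry and the state of the stack:

ocrcheck
crsctl check cluster -all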

7. Run root.sh on the second node

/u01/app/12.1.0.2/grid/root.sh