Backup best practices for Oracle Clusterware

I recommend backing up Clusterware-related files after the initial setup and after any change. These backups can save you from OCR or OLR corruption during a GI patch. If any of the files becomes corrupted, you will be able to recover it in a few minutes (or seconds). Without a backup, depending on the failure, you may lose several hours recovering your cluster to the state it was in before the incident.

Here are the steps to protect your cluster:

1. Back up the ASM SPFILE initially and after any change.

There are several ways to back up the ASM SPFILE: spcopy, spbackup, or CREATE PFILE='<backup location>' FROM SPFILE.

To locate the Oracle ASM SPFILE, use the ASMCMD spget command:

ASMCMD> spget
+GRID/myrac/ASMPARAMETERFILE/registry.253.974466047

Copy the Oracle ASM SPFILE to the backup location:

ASMCMD> spbackup +GRID/myrac/ASMPARAMETERFILE/registry.253.974466047 /backup/spfileasm.ora
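Alternatively, a plain-text parameter file can be dumped from SQL*Plus while connected to the ASM instance (a minimal sketch; the backup path is illustrative):

SQL> create pfile='/backup/init_asm_backup.ora' from spfile;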

2. Backing up the ASM password file once should be enough. If you change the password for any password-file user or add another user to the list, take a new backup.

Locate the password file using the ASMCMD pwget command.

ASMCMD> pwget --asm
+GRID/orapwASM

Back up the password file to another location with the pwcopy command.

ASMCMD> pwcopy +GRID/orapwASM  /backup/orapwASM 
copying +GRID/orapwASM -> /backup/orapwASM

3. Use the md_backup command to create a backup file containing the metadata for one or more disk groups.

To back up the metadata for all disk groups, run the following:

ASMCMD> md_backup /tmp/dgmetabackup

Disk group metadata to be backed up: DATA
Disk group metadata to be backed up: FRA
Disk group metadata to be backed up: GRID

If you need to back up metadata only for specific disk groups, use the -G option, as shown below.
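For example, to back up only the DATA disk group metadata (the output file name is illustrative):

ASMCMD> md_backup /tmp/dgmetabackup_data -G DATA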

4. Back up the OLR on each node.

If the OLR is missing or corrupted, clusterware cannot be started on that node. So take a manual backup initially and after any change.

Do the following on each node:

# ocrconfig -local -manualbackup

Copy the generated file to the backup location:

# cp /u01/app/12.2.0/grid/cdata/rac1/backup_20180510_230359.olr /backup/

Or change the default backup location to /backup before taking the backup:

# ocrconfig -local -backuploc /backup

# ocrconfig -local -manualbackup
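For completeness, if the OLR on that node is ever lost, the backup can be restored with the stack down on that node (a sketch only; the file name is the one generated above):

# crsctl stop crs -f
# ocrconfig -local -restore /backup/backup_20180510_230359.olr
# crsctl start crs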

5. Mirror and back up the OCR.

You should configure the OCR in at least two independent disk groups; typically these are the work area and the recovery area.

# ocrconfig -add +FRA
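You can verify that both OCR locations are now registered using ocrcheck:

# ocrcheck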

Clusterware takes automatic OCR backups; it retains copies from the past 4, 8, and 12 hours, plus one daily and one weekly backup.
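To list only the automatic backups:

# ocrconfig -showbackup auto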

You can also back up the OCR manually before applying a patch or upgrading the GI home:

# ocrconfig -manualbackup

Regularly copy the latest backup to another location as follows.

Identify the latest backup (manual or automatic):

[grid@rac1 ~]$ ocrconfig -showbackup
rac1 2018/05/10 13:06:18 +GRID:/myrac/OCRBACKUP/backup00.ocr.289.975762375 830990544
..

Copy it to the backup location:

$ ocrconfig -copy +GRID:/myrac/OCRBACKUP/backup00.ocr.289.975762375 /backup/backup00.ocr

Or change the default backup location to a disk group other than GRID:

# ocrconfig -backuploc +FRA

 

 

Add filegroup fails with ORA-15067: command or option incompatible with diskgroup redundancy

Problem:

I was trying to add a filegroup to the FRA disk group:

SQL> alter diskgroup FRA add filegroup high_filegroup database orcl set 'datafile.redundancy' = 'HIGH';

Error:

ORA-15067: command or option incompatible with diskgroup redundancy

Troubleshooting:

Checking diskgroup type:

SQL> select name,type,compatibility,database_compatibility from v$asm_diskgroup where name='FRA';

NAME       TYPE    COMPATIBILITY   DATABASE_COMPATIBILITY
---------- ------- --------------- ----------------------
FRA        NORMAL  18.0.0.0.0      12.2.0.1.0

Solution:

Change diskgroup type to FLEX:

SQL> alter diskgroup FRA convert redundancy to flex;
Diskgroup altered.

Check that type was changed:

SQL> select name,type,compatibility,database_compatibility from v$asm_diskgroup where name='FRA';

NAME       TYPE    COMPATIBILITY   DATABASE_COMPATIBILITY
---------- ------- --------------- ----------------------
FRA        FLEX    18.0.0.0.0      12.2.0.1.0

Adding filegroup succeeds:

SQL> alter diskgroup FRA add filegroup high_filegroup database orcl set 'datafile.redundancy' = 'HIGH';
Diskgroup altered.

Daylight saving time support in Oracle CRS

Dear readers,

I am glad to announce that my blog has been included in the Top 50 Oracle Blogs. For more information about the Top 100 Oracle Blogs and Websites for Oracle DBAs to Follow in 2018, please visit https://blog.feedspot.com/oracle_blogs. You can improve your knowledge and experience by following them.

In this post, I want to share how I solved a daylight saving time problem with Oracle CRS. With the default setup, if the timezone changes on your system, clients and applications that connect to the database remotely will still see the old timezone information and will enter wrong data (local/BEQ connections get the correct timezone).

Countries that are not affected by daylight saving time are lucky and do not have to worry about this. But if your servers are located elsewhere, you must make CRS DST aware.

During GI installation, Oracle saves the timezone information in the $CRS_HOME/crs/install/s_crsconfig_<hostname>_env.txt file, which keeps TZ from changing for CRS even when it is changed at the OS level.

Please note that the timezone can be changed per database using srvctl:

srvctl setenv database -env 'TZ=time zone'
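For example (the database name orcl and the timezone value are illustrative):

srvctl setenv database -db orcl -env 'TZ=Europe/Bratislava'
srvctl getenv database -db orcl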

But I do not recommend that, because you would have to repeat it every time you create a new database.
It is better to change TZ globally at the CRS level.

In simple words, commenting out the TZ variable in $CRS_HOME/crs/install/s_crsconfig_<hostname>_env.txt and restarting CRS once on each node is enough, but let's verify it.

1.  List the current timezone settings:

[root@rac1 ~]# timedatectl status|grep zone
Time zone: UTC (UTC, +0000)
[root@rac2 ~]#  timedatectl status|grep zone
Time zone: UTC (UTC, +0000)

2. Change timezone at OS level:

[root@rac1 ~]# timedatectl set-timezone Europe/Bratislava
[root@rac2 ~]# timedatectl set-timezone Europe/Bratislava

3. Check local and scan connections:

[oracle@rac1 ~]$ sqlplus / as sysdba

SQL> select to_char(sysdate,'HH24:MI:SS AM')  dbtime from dual;

DBTIME
-----------
18:50:05 PM     <<<<<<<<<<<<Correct , same as OS

[oracle@rac1 ~]$ sqlplus marik/123@ORCL

SQL> select to_char(sysdate,'HH24:MI:SS AM') dbtime from dual;

DBTIME
-----------
16:50:10 PM     <<<<<<<<<<<<Incorrect

4. Comment out TZ in the config file:

[root@rac1 ~]# cat /u01/app/18.3.0/grid/crs/install/s_crsconfig_rac1_env.txt|grep TZ=
#   the appropriate time zone name. For example, TZ=America/New_York
#TZ=UTC

[root@rac2 ~]# cat /u01/app/18.3.0/grid/crs/install/s_crsconfig_rac2_env.txt|grep TZ=
#   the appropriate time zone name. For example, TZ=America/New_York
#TZ=UTC

5. Restart CRS on both nodes:

[root@rac1 ~]#  crsctl stop crs
[root@rac1 ~]#  crsctl start crs -wait
[root@rac2 ~]#  crsctl stop crs
[root@rac2 ~]#  crsctl start crs -wait

6. Change the timezone at the OS level a few more times and check local and SCAN connections:

[root@rac1 ~]# timedatectl set-timezone Africa/Conakry
[root@rac2 ~]# timedatectl set-timezone Africa/Conakry

Important: you need to reconnect to the database. Existing sessions keep the old timezone settings, so they must be disconnected and reconnected.

[oracle@rac1 ~]$ sqlplus / as sysdba

SQL> Select to_char(sysdate,'HH24:MI:SS AM') dbtime from dual;

DBTIME
-----------
17:15:56 PM <<<<<<<<<<<<Correct


[oracle@rac1 ~]$ sqlplus marik/123@ORCL

SQL> Select to_char(sysdate,'HH24:MI:SS AM') dbtime from dual;

DBTIME
-----------
17:15:27 PM <<<<<<<<<<<<Correct

Change one more time:

[root@rac1 ~]# timedatectl set-timezone America/Aruba
[root@rac2 ~]# timedatectl set-timezone America/Aruba

Exit connections and reconnect:

[oracle@rac1 ~]$ sqlplus / as sysdba

SQL> Select to_char(sysdate,'HH24:MI:SS AM') dbtime from dual;

DBTIME
-----------
13:17:47 PM <<<<<<<<<<<<Correct

[oracle@rac1 ~]$ sqlplus marik/123@ORCL

SQL> Select to_char(sysdate,'HH24:MI:SS AM') dbtime from dual;

DBTIME
-----------
13:17:31 PM <<<<<<<<<<<<Correct

Exadata: Rebuild RAC clusterware without deleting data Version 2

This post is different from my previous post, Rebuild RAC clusterware without deleting data. Two days ago I was upgrading Grid Infrastructure from 12.1 to 12.2; it was successful on the first node but failed on the second. I will not describe why this happened, but the whole process turned out to be complicated instead of simple. We had installed several patches before the installation (gridSetup has an option to apply patches before installation)… It seems the 12.2 software has many bugs even during the upgrade process (though I agree with other DBAs that the 12.2 database itself is very stable).

What happened is that during the first-node upgrade the OCR files were changed. I tried to deconfigure from the 12.2 home and that failed as well. So I was left with a clusterware stack whose OCR and voting disks were corrupted (and already belonged to 12.2). In my previous post I started clusterware in exclusive mode with -nocrs and restored the OCR from backup, but because the voting disks are now a different version, it does not start even in exclusive mode.

So I followed the steps to recreate the disk group where the OCR and voting disks are stored. Because these are Exadata cell storage disks, it was more complicated than with ordinary disks, where you can clean up the header using "dd". Instead of dd, you use CellCLI.
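For comparison, on ordinary (non-Exadata) storage the equivalent step would simply be wiping the ASM disk header, something like the sketch below (the device path is hypothetical, and the command is destructive):

# dd if=/dev/zero of=/dev/mapper/ocr_disk1 bs=1M count=100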

So let’s start:

1. Connect to each cell server (I have three of them) and drop the grid disks that belong to DBFS (it contains the OCR and voting disks). Be careful: dropping a grid disk erases its data, so DBFS must contain only the OCR and voting disks, and no user data!
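Before dropping anything, it is worth double-checking which ASM disk group each of these grid disks actually serves (a quick check using standard CellCLI grid disk attributes):

CellCLI> list griddisk attributes name, asmDiskgroupName where name like 'DBFS_.*'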

#Find the name, celldisk and size of the grid disk:

CellCLI> list griddisk where name like 'DBFS_.*' attributes name, cellDisk, size
DBFS_CD_02_lbcel01_dr_adm CD_02_lbcel01_dr_adm 33.796875G
DBFS_CD_03_lbcel01_dr_adm CD_03_lbcel01_dr_adm 33.796875G
DBFS_CD_04_lbcel01_dr_adm CD_04_lbcel01_dr_adm 33.796875G
DBFS_CD_05_lbcel01_dr_adm CD_05_lbcel01_dr_adm 33.796875G
DBFS_CD_06_lbcel01_dr_adm CD_06_lbcel01_dr_adm 33.796875G
DBFS_CD_07_lbcel01_dr_adm CD_07_lbcel01_dr_adm 33.796875G
DBFS_CD_08_lbcel01_dr_adm CD_08_lbcel01_dr_adm 33.796875G
DBFS_CD_09_lbcel01_dr_adm CD_09_lbcel01_dr_adm 33.796875G
DBFS_CD_10_lbcel01_dr_adm CD_10_lbcel01_dr_adm 33.796875G
DBFS_CD_11_lbcel01_dr_adm CD_11_lbcel01_dr_adm 33.796875G

 

#Drop

CellCLI> drop griddisk DBFS_CD_02_lbcel01_dr_adm
drop griddisk DBFS_CD_03_lbcel01_dr_adm
drop griddisk DBFS_CD_04_lbcel01_dr_adm
drop griddisk DBFS_CD_05_lbcel01_dr_adm
drop griddisk DBFS_CD_06_lbcel01_dr_adm
drop griddisk DBFS_CD_07_lbcel01_dr_adm
drop griddisk DBFS_CD_08_lbcel01_dr_adm
drop griddisk DBFS_CD_09_lbcel01_dr_adm
drop griddisk DBFS_CD_10_lbcel01_dr_adm
drop griddisk DBFS_CD_11_lbcel01_dr_adm

#Create

cellcli> create griddisk DBFS_CD_02_lbcel01_dr_adm celldisk=CD_02_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_03_lbcel01_dr_adm celldisk=CD_03_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_04_lbcel01_dr_adm celldisk=CD_04_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_05_lbcel01_dr_adm celldisk=CD_05_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_06_lbcel01_dr_adm celldisk=CD_06_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_07_lbcel01_dr_adm celldisk=CD_07_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_08_lbcel01_dr_adm celldisk=CD_08_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_09_lbcel01_dr_adm celldisk=CD_09_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_10_lbcel01_dr_adm celldisk=CD_10_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_11_lbcel01_dr_adm celldisk=CD_11_lbcel01_dr_adm, size=33.796875G

Do the same steps on other cells.
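If you prefer not to log in to each cell one by one, dcli can run the same CellCLI command on all cells from a database node (a sketch; the cell_group file listing the cell hostnames is an assumption):

# dcli -g ~/cell_group -l root "cellcli -e list griddisk attributes name, cellDisk, size"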

2. Deconfigure the clusterware configuration (rootcrs.sh -deconfig) on each node

# Run deconfig

/u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -deconfig -force

# Rename the gpnp profile

mv /u01/app/12.1.0.2/grid/gpnp/profiles/peer/profile.xml /tmp/profile_backup.xml

3. Run root.sh on first node

/u01/app/12.1.0.2/grid/root.sh

It will fail because it cannot find the DBFS disk group to mount, and of course the OCR inside it. But at this point ASM is started in nomount mode, and we are able to recreate the disk group.

4. Create DBFS diskgroup

sqlplus / as sysasm

SQL> create diskgroup DBFS
failgroup LBCEL01_DR_ADM disk 'o/*/DBFS_CD_02_lbcel01_dr_adm','o/*/DBFS_CD_03_lbcel01_dr_adm','o/*/DBFS_CD_04_lbcel01_dr_adm','o/*/DBFS_CD_05_lbcel01_dr_adm','o/*/DBFS_CD_06_lbcel01_dr_adm','o/*/DBFS_CD_07_lbcel01_dr_adm','o/*/DBFS_CD_08_lbcel01_dr_adm','o/*/DBFS_CD_09_lbcel01_dr_adm','o/*/DBFS_CD_10_lbcel01_dr_adm','o/*/DBFS_CD_11_lbcel01_dr_adm'
failgroup LBCEL02_DR_ADM disk 'o/*/DBFS_CD_02_lbcel02_dr_adm','o/*/DBFS_CD_03_lbcel02_dr_adm','o/*/DBFS_CD_04_lbcel02_dr_adm','o/*/DBFS_CD_05_lbcel02_dr_adm','o/*/DBFS_CD_06_lbcel02_dr_adm','o/*/DBFS_CD_07_lbcel02_dr_adm','o/*/DBFS_CD_08_lbcel02_dr_adm','o/*/DBFS_CD_09_lbcel02_dr_adm','o/*/DBFS_CD_10_lbcel02_dr_adm','o/*/DBFS_CD_11_lbcel02_dr_adm'
failgroup LBCEL03_DR_ADM disk 'o/*/DBFS_CD_02_lbcel03_dr_adm','o/*/DBFS_CD_03_lbcel03_dr_adm','o/*/DBFS_CD_04_lbcel03_dr_adm','o/*/DBFS_CD_05_lbcel03_dr_adm','o/*/DBFS_CD_06_lbcel03_dr_adm','o/*/DBFS_CD_07_lbcel03_dr_adm','o/*/DBFS_CD_08_lbcel03_dr_adm','o/*/DBFS_CD_09_lbcel03_dr_adm','o/*/DBFS_CD_10_lbcel03_dr_adm','o/*/DBFS_CD_11_lbcel03_dr_adm'
ATTRIBUTE
'compatible.asm'='12.1.0.2.0',
'compatible.rdbms'='11.2.0.2.0',
'au_size'='4194304',
'cell.smart_scan_capable'='TRUE';

5. Do the following steps (recapped in command form below):

* Deconfigure clusterware again on the first node
* Remove the gpnp profile
* Run root.sh again on the first node
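In command form, these are the same commands used in steps 2 and 3 above:

/u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -deconfig -force
mv /u01/app/12.1.0.2/grid/gpnp/profiles/peer/profile.xml /tmp/profile_backup.xml
/u01/app/12.1.0.2/grid/root.sh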

At this time root.sh should be successful.

6. Restore OCR

The /u01/app/12.1.0.2/grid/cdata/<clustername> directory contains OCR backups by default.

crsctl stop crs -f
crsctl start crs -excl -nocrs
ocrconfig -restore /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/backup00.ocr
crsctl stop crs -f
crsctl start crs

7. Run root.sh on the second node

/u01/app/12.1.0.2/grid/root.sh

Create RAC database using DBCA silent mode

Real World Scenario: 

Previously, we had a vacancy for a Senior DBA position. Some of our candidates had more than 15 years of experience in database administration.

To test their knowledge, we created a lab: the Grid Infrastructure and database software were already installed, shared disks were present, and disk groups were already created.

The first task was to create a RAC database in silent mode using DBCA. They were allowed to use the internet during the exam, but unfortunately they did not manage to do it.

So I decided to write a simple version of the script:

dbca -silent \
-createDatabase \
-templateName General_Purpose.dbc \
-gdbName orcl  \
-sid orcl  \
-SysPassword MyPassword123 \
-SystemPassword MyPassword123 \
-emConfiguration NONE \
-redoLogFileSize 2048  \
-recoveryAreaDestination FRA \
-storageType ASM \
-asmSysPassword MyPassword123 \
-diskGroupName DATA \
-characterSet AL32UTF8 \
-nationalCharacterSet AL32UTF8 \
-automaticMemoryManagement true \
-totalMemory 2536  \
-databaseType MULTIPURPOSE \
-nodelist rac1,rac2

Copying database files
1% complete
3% complete
9% complete
15% complete
21% complete
30% complete
Creating and starting Oracle instance
32% complete
36% complete
40% complete
44% complete
45% complete
48% complete
50% complete
Creating cluster database views
52% complete
70% complete
Completing Database Creation
73% complete
76% complete
85% complete
94% complete
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/orcl/orcl.log" for further details.
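Once DBCA completes, a quick sanity check is to confirm with srvctl that both instances are up (the database name matches the -gdbName used above):

srvctl status database -d orcl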

Restart Exadata storage cell service without affecting ASM

Brief history:

One week ago, the cell service on our DR Exadata hung, which caused all databases located on that Exadata to become inaccessible.

CellCLI> LIST ALERTHISTORY
9 2017-10-13T11:56:05+04:00 critical "RS-7445 [Serv CELLSRV hang detected] [It will be restarted] [] [] [] [] [] [] [] [] [] []"

The cell's alert history said that the service would restart itself, but it did not, so I restarted it in the following way:

CellCLI> ALTER CELL RESTART SERVICES CELLSRV

The databases started to work correctly.

Today, the same problem happened on the HQ side, which of course stopped everything for a while until I restarted the service.

But identifying which cell was problematic was a bit difficult, because there was no error in the alert history.

But when I entered the following command on the third cell node, it hung; the other cells were OK.

CellCLI> LIST ACTIVEREQUEST

So I restarted the same service on that node, and the problem was resolved.

CellCLI> ALTER CELL RESTART SERVICES CELLSRV

Of course, this is not a real solution, and the cell service must not hang! But it is a simple workaround when your production databases have stopped.

I have created an SR and am waiting for an answer; if there is any useful news, I will update this post.

===================================================================================================

Here are the correct steps for restarting cell services without affecting ASM:

1.  Run the following command to check if there are offline disks on other cells that are mirrored with disks on this cell:

CellCLI> LIST GRIDDISK ATTRIBUTES name WHERE asmdeactivationoutcome != 'Yes'

Warning: if any grid disks are listed in the output, it is not safe to stop or restart the CELLSRV process, because Oracle ASM disk group redundancy will not be intact; ASM will dismount the affected disk group, causing the databases to shut down abruptly.

If no grid disks are listed in the returned output, you can safely restart cellsrv or all services in step #2 below.

2.  Re-start the cell services using either of the following commands:

CellCLI> ALTER CELL RESTART SERVICES CELLSRV

CellCLI> ALTER CELL RESTART SERVICES ALL

The good news is that the cell protects itself against reduced redundancy: if you try to restart it when the redundancy check is not satisfied, you get:

CellCLI> ALTER CELL RESTART SERVICES ALL;

Stopping the RS, CELLSRV, and MS services…
The SHUTDOWN of ALL services was not successful.
CELL-01548: Unable to shut down CELLSRV because disk group DATA, RECO may be forced to dismount due to reduced redundancy.
Getting the state of CELLSRV services… running
Getting the state of MS services… running
Getting the state of RS services… running

 

Rebuild RAC clusterware without deleting data

As I mentioned in my previous posts, I was applying an interim patch to a database, and the patch had a post-installation script (# <GI_HOME>/crs/install/rootcrs.pl -postpatch).
The post script failed with a permission-denied error on the ohasd file and left clusterware in a messy state.

I opened an SR on Metalink, and after a huge amount of time spent talking and troubleshooting together, one of their support engineers said:

“We do not know what happened or what steps you have taken to reach this situation. You should open an SR with us before you deconfigure the node.
Please, do bare metal restore as it is recommended by previous engineer.
Bare Metal Restore Procedure for Compute Nodes on an Exadata Environment ( Doc ID 1084360.1 )”

A bare metal restore means wiping everything, after which I would have had to configure RAC, Data Guard and everything else from scratch. I don't like such solutions; it is like "if your Windows works slowly, reinstall it" (for Windows that might even be true 🙂 nothing but a reinstall helps 😀), but on Linux/Oracle you must troubleshoot first.
So I created another SR for another error (there were a lot of errors at that point), and this time I was lucky.
I worked with support around the clock, with engineers handing the SR over between shifts; three different engineers worked on it at different times.
I want to mention one of them, Oracle support engineer Venkata Pradeep Kumar: he is very sharp, he helped me a lot, and together we rescued the system! :)

I want to share the steps with you; they should be interesting.

Problem:

After the patch post script failed on the first node, clusterware would not start there. At that point the second node was fine.
I deconfigured clusterware on the first node (the exact steps are in the solution section below), and it started, but with some problems with the OC4J service.

2016/09/27 06:56:15 CLSRSC-1003: Failed to start resource OC4J
2016/09/27 06:56:16 CLSRSC-287: FirstNode configuration failed

I deconfigured clusterware on the second node as well and tried to run root.sh, but it said that root.sh could not be run because it had not completed successfully on the first node. 😦

So, until the root.sh script completes successfully on the first node, you should not deconfigure the second one. But if you did, do not panic, as long as you have an OCR backup.

Solution:

# Deconfigure CRS on the problematic node. Note that a lighter solution, reconfiguring only one node, may work for you; in my situation all nodes became problematic.
# Also be careful: the steps below assume that you have a separate disk group for the OCR. Datafiles must be in a different disk group, or that disk group will be wiped.

# From root, on both nodes (node1, node2)

/u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -deconfig -force

# Run root.sh on node1; it may not be completely successful

/u01/app/12.1.0.2/grid/root.sh

# We need to find a good OCR backup; for me it is week.ocr, which was taken automatically on 2016/09/15 09:12:28.
# The patch was applied at 10:00 AM on 2016/09/25, so we need week.ocr, since it predates the patching.

[root@lbdm01-dr-adm grid]# ocrconfig -showbackup

lbdm02-dr-adm 2016/09/27 02:35:23 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/backup00.ocr 3351897854
lbdm02-dr-adm 2016/09/26 15:44:53 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/backup01.ocr 3351897854
lbdm02-dr-adm 2016/09/26 11:44:52 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/backup02.ocr 3351897854
lbdm02-dr-adm 2016/09/27 02:35:23 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/day.ocr 3351897854
lbdm01-dr-adm 2016/09/15 09:12:28 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/week.ocr 854493477
lbdm02-dr-adm 2016/09/25 15:29:18 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/backup_20160925_152918.ocr 3351897854
lbdm02-dr-adm 2016/09/25 10:34:56 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/backup_20160925_103456.ocr 2725022894
lbdm01-dr-adm 2015/07/29 19:46:28 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/backup_20150729_194628.ocr 854493477
lbdm01-dr-adm 2015/07/29 19:46:27 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/backup_20150729_194627.ocr 854493477

# Ensure that no processes are left
# node 1

crsctl stop crs -f
ps -ef | grep "/u01/app"

# if anything is left, kill it!

# Start clusterware in exclusive mode without the CRS daemon (-nocrs) on node 1

crsctl start crs -excl -nocrs

#Restore OCR on node 1

ocrconfig -restore /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/week.ocr
ocrcheck

# Restart CRS normally on node 1

crsctl stop crs -f
crsctl start crs

# Check the status

crsctl status res -t

# It should be OK

# Do the same steps on node 2 from root, but it may fail

/u01/app/12.1.0.2/grid/root.sh

# Failed

ORA-15160: rolling migration internal fatal error in module SKGXP,valNorm:not-native
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/lbdm02-dr-adm/crs/trace/ohasd_oraagent_oracle.trc".
CRS-2883: Resource 'ora.asm' failed during Clusterware stack start.
CRS-4406: Oracle High Availability Services synchronous start failed.
CRS-4000: Command Start failed, or completed with errors.
2016/09/28 09:11:00 CLSRSC-117: Failed to start Oracle Clusterware stack

# deconfig on both nodes
# node1 , node2

 /u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -deconfig -force

# and run root.sh again
# node 1

/u01/app/12.1.0.2/grid/root.sh

# It was completely successful.

# On the second node there is still a problem

# Read the following document ORA-15160: rolling migration internal fatal error in module SKGXP,valNorm:not-native (NOTE 1682591.1)

# Here the problem was with the protocols used by ASM and the RDBMS.
# The RDBMS is using the RDS protocol and ASM is using UDP; see Oracle Clusterware and RAC Support for RDS Over Infiniband (NOTE 751343.1)
# The problem was in the libraries, and we had to relink them with the right protocols.
# As the ORACLE_HOME/GI_HOME owner, stop all resources (database, listener, ASM, etc.) running from the home. When stopping the database, use the NORMAL or IMMEDIATE option.

# From the problematic node, where ASM or the database is not starting.

crsctl stop crs
ps -ef|grep d.bin
ps -ef | grep "/u01/app"

# Kill any processes that are left

# If relinking Grid Infrastructure (GI) home, as root, unlock GI home: <GI_HOME>/crs/install/rootcrs.pl -unlock

/u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -unlock

# As the ORACLE_HOME/GI_HOME owner, go to ORACLE_HOME/GI_HOME and cd to rdbms/lib
# As the ORACLE_HOME/GI_HOME owner, issue "make -f ins_rdbms.mk <protocol> ioracle"
# For the RDBMS

[root@lbdm02-dr-adm lib]# su - oracle
[oracle@lbdm02-dr-adm ~]$ cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk ipc_rds ioracle

# For ASM

. oraenv
+ASM2
[oracle@lbdm02-dr-adm ~]$ cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk ipc_g ioracle

# From root

/u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -patch

# The last step should also have configured clusterware. Everything should be fine now, and you can sleep. 🙂