Exadata: Rebuild RAC clusterware without deleting data Version 2

This post is different from my previous post, Rebuild RAC clusterware without deleting data, because two days ago I was upgrading Grid Infrastructure from 12.1 to 12.2; the upgrade was successful on the first node but failed on the second. I will not describe why this happened, but the whole process turned out to be complicated rather than simple. We had installed several patches before the installation (gridSetup has an option to apply patches before installation)… It seems the 12.2 software still has many bugs, even in the upgrade process (although I agree with other DBAs that the 12.2 database itself is very stable).

What happened now is that during the first node's upgrade the OCR files were changed. I tried to deconfigure from the 12.2 home and that failed as well. So I was left with a clusterware whose OCR and voting disks were corrupted (and belonged to the 12.2 version). In my previous post I started the clusterware in exclusive mode with -nocrs and restored the OCR from a backup, but this time, because the voting disks are a different version, it does not start even in exclusive mode.

So I followed the steps below to recreate the disk group where the OCR and voting disks are stored. Because these are Exadata cell storage disks, it is more complicated than with ordinary disks, where you can simply clean up the header using "dd". Instead of dd you use CellCLI.
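
For comparison, on conventional (non-Exadata) storage the header cleanup would look roughly like this; this is purely illustrative, the device name is made up, and it of course destroys whatever is on that disk:

dd if=/dev/zero of=/dev/mapper/asm_ocr_disk01 bs=1M count=100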

So let’s start:

1. Connect to each cell server (I have three of them) and drop the grid disks that belong to DBFS (the disk group that contains the OCR and voting disks). Be careful: dropping a grid disk erases its data, so DBFS must contain only the OCR and voting disks, not !DATA!
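
Before dropping anything, it may also be worth double-checking the status of the grid disks you are about to touch. A minimal CellCLI check (the asmmodestatus attribute may well show UNUSED or UNKNOWN here, since the clusterware stack is already broken):

CellCLI> list griddisk where name like 'DBFS_.*' attributes name, status, asmmodestatus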

#Find the name, celldisk and size of the grid disk:

CellCLI> list griddisk where name like 'DBFS_.*' attributes name, cellDisk, size
DBFS_CD_02_lbcel01_dr_adm CD_02_lbcel01_dr_adm 33.796875G
DBFS_CD_03_lbcel01_dr_adm CD_03_lbcel01_dr_adm 33.796875G
DBFS_CD_04_lbcel01_dr_adm CD_04_lbcel01_dr_adm 33.796875G
DBFS_CD_05_lbcel01_dr_adm CD_05_lbcel01_dr_adm 33.796875G
DBFS_CD_06_lbcel01_dr_adm CD_06_lbcel01_dr_adm 33.796875G
DBFS_CD_07_lbcel01_dr_adm CD_07_lbcel01_dr_adm 33.796875G
DBFS_CD_08_lbcel01_dr_adm CD_08_lbcel01_dr_adm 33.796875G
DBFS_CD_09_lbcel01_dr_adm CD_09_lbcel01_dr_adm 33.796875G
DBFS_CD_10_lbcel01_dr_adm CD_10_lbcel01_dr_adm 33.796875G
DBFS_CD_11_lbcel01_dr_adm CD_11_lbcel01_dr_adm 33.796875G

 

#Drop

CellCLI> drop griddisk DBFS_CD_02_lbcel01_dr_adm
drop griddisk DBFS_CD_03_lbcel01_dr_adm
drop griddisk DBFS_CD_04_lbcel01_dr_adm
drop griddisk DBFS_CD_05_lbcel01_dr_adm
drop griddisk DBFS_CD_06_lbcel01_dr_adm
drop griddisk DBFS_CD_07_lbcel01_dr_adm
drop griddisk DBFS_CD_08_lbcel01_dr_adm
drop griddisk DBFS_CD_09_lbcel01_dr_adm
drop griddisk DBFS_CD_10_lbcel01_dr_adm
drop griddisk DBFS_CD_11_lbcel01_dr_adm

#Create

CellCLI> create griddisk DBFS_CD_02_lbcel01_dr_adm celldisk=CD_02_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_03_lbcel01_dr_adm celldisk=CD_03_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_04_lbcel01_dr_adm celldisk=CD_04_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_05_lbcel01_dr_adm celldisk=CD_05_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_06_lbcel01_dr_adm celldisk=CD_06_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_07_lbcel01_dr_adm celldisk=CD_07_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_08_lbcel01_dr_adm celldisk=CD_08_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_09_lbcel01_dr_adm celldisk=CD_09_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_10_lbcel01_dr_adm celldisk=CD_10_lbcel01_dr_adm, size=33.796875G
create griddisk DBFS_CD_11_lbcel01_dr_adm celldisk=CD_11_lbcel01_dr_adm, size=33.796875G

Do the same steps on other cells.
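
Instead of repeating the commands interactively on every cell, the same checks and commands can be run from a compute node with dcli; the cell_group file below is assumed to contain your cell hostnames, one per line, and the quoting of the CellCLI command can be fiddly, so test it on one cell first:

dcli -g cell_group -l root "cellcli -e list griddisk attributes name,status"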

2.  Deconfigure root.sh on each node

# Run deconfig

/u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -deconfig -force

#rename gpnp profile

mv /u01/app/12.1.0.2/grid/gpnp/profiles/peer/profile.xml /tmp/profile_backup.xml
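
Before moving on, a quick sanity check that nothing from the Grid home is still running on the node is a good idea; crsctl should complain that the stack is not running, and the ps output should be empty apart from the grep itself:

crsctl check crs
ps -ef | grep d.bin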

3. Run root.sh on first node

/u01/app/12.1.0.2/grid/root.sh

It will fail because it will not find the DBFS disk group to mount, and of course not the OCR inside it. But now ASM is started (with no disk group mounted) and we are able to recreate the disk group.
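
Before recreating the disk group you can optionally confirm, from the same SYSASM session used in the next step, that ASM sees the candidate Exadata grid disks:

SQL> select path, header_status from v$asm_disk where path like 'o/%/DBFS%';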

4. Create DBFS diskgroup

sqlplus / as sysasm

SQL> create diskgroup DBFS
failgroup LBCEL01_DR_ADM disk 'o/*/DBFS_CD_02_lbcel01_dr_adm','o/*/DBFS_CD_03_lbcel01_dr_adm','o/*/DBFS_CD_04_lbcel01_dr_adm','o/*/DBFS_CD_05_lbcel01_dr_adm','o/*/DBFS_CD_06_lbcel01_dr_adm','o/*/DBFS_CD_07_lbcel01_dr_adm','o/*/DBFS_CD_08_lbcel01_dr_adm','o/*/DBFS_CD_09_lbcel01_dr_adm','o/*/DBFS_CD_10_lbcel01_dr_adm','o/*/DBFS_CD_11_lbcel01_dr_adm'
failgroup LBCEL02_DR_ADM disk 'o/*/DBFS_CD_02_lbcel02_dr_adm','o/*/DBFS_CD_03_lbcel02_dr_adm','o/*/DBFS_CD_04_lbcel02_dr_adm','o/*/DBFS_CD_05_lbcel02_dr_adm','o/*/DBFS_CD_06_lbcel02_dr_adm','o/*/DBFS_CD_07_lbcel02_dr_adm','o/*/DBFS_CD_08_lbcel02_dr_adm','o/*/DBFS_CD_09_lbcel02_dr_adm','o/*/DBFS_CD_10_lbcel02_dr_adm','o/*/DBFS_CD_11_lbcel02_dr_adm'
failgroup lbcel03_dr_adm disk 'o/*/DBFS_CD_02_lbcel03_dr_adm','o/*/DBFS_CD_03_lbcel03_dr_adm','o/*/DBFS_CD_04_lbcel03_dr_adm','o/*/DBFS_CD_05_lbcel03_dr_adm','o/*/DBFS_CD_06_lbcel03_dr_adm','o/*/DBFS_CD_07_lbcel03_dr_adm','o/*/DBFS_CD_08_lbcel03_dr_adm','o/*/DBFS_CD_09_lbcel03_dr_adm','o/*/DBFS_CD_10_lbcel03_dr_adm','o/*/DBFS_CD_11_lbcel03_dr_adm'
ATTRIBUTE
'compatible.asm'='12.1.0.2.0',
'compatible.rdbms'='11.2.0.2.0',
'au_size'='4194304',
'cell.smart_scan_capable'='TRUE';
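
A quick check that the new disk group exists and is mounted on this ASM instance (still from the same SYSASM session):

SQL> select name, state, type from v$asm_diskgroup;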

5. Do the following steps:

* Deconfigure root.sh again on the first node
* Remove the gpnp profile
* Run root.sh again on the first node

At this time root.sh should be successful.

6. Restore OCR

The /u01/app/12.1.0.2/grid/cdata/<clustername> directory contains the OCR backups by default

crsctl stop crs -f
crsctl start crs -excl -nocrs
ocrconfig -restore /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/backup00.ocr
crsctl stop crs -f
crsctl start crs
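
Once the stack is back up, the restored OCR and the voting files can be verified with the usual checks:

ocrcheck
crsctl query css votedisk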

7. Run root.sh on the second node

/u01/app/12.1.0.2/grid/root.sh
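
After root.sh completes on the second node, a cluster-wide check should show both nodes up:

crsctl check cluster -all
crsctl status res -t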


Rebuild RAC clusterware without deleting data

As I mentioned in my previous posts, I was applying an interim patch on a database, and the patch had a post-installation script (# <GI_HOME>/crs/install/rootcrs.pl -postpatch).
The post script failed with a permission denied error on the ohasd file and left the clusterware in a messy state.

I opened an SR on Metalink, and after a huge amount of time spent talking and troubleshooting together, one of the support engineers said:

“We do not know what happened or what steps you have taken to reach this situation. You should open an SR with us before you deconfigure the node.
Please, do bare metal restore as it is recommended by previous engineer.
Bare Metal Restore Procedure for Compute Nodes on an Exadata Environment ( Doc ID 1084360.1 )”

This Bare Metal Restore means wiping everything, after which I would have had to configure RAC, Data Guard and everything else from scratch. <<– I don't like such solutions; it is like "if your Windows is slow, reinstall it".. for Windows this might actually be true 🙂 nothing but a reinstall helps 😀 but on Linux/Oracle you must troubleshoot first.
So I created another SR for another error (there were plenty of errors at this point), and the second time I was lucky.
I was working 24/7 with support, and the engineers worked in shifts; three different engineers worked on this SR at different times.
I want to mention one Oracle support engineer, Venkata Pradeep Kumar: he is very clever, he helped me a lot, and together we rescued the system! :)

I want to share the steps with you; they should be interesting.

Problem:

After running the patch post script on the first node (which failed), the clusterware on the first node would not start. At that time the second node was fine.
I deconfigured the clusterware on the first node (I describe this step in the solution section) and it started, but with some problems with the OC4J service.

2016/09/27 06:56:15 CLSRSC-1003: Failed to start resource OC4J
2016/09/27 06:56:16 CLSRSC-287: FirstNode configuration failed

I then deconfigured the clusterware on the second node as well and tried to run root.sh, but it said that root.sh could not be run because it had not completed successfully on the first node. 😦

So, until the root.sh script completes successfully on the first node, you should not deconfigure the second one. But if you already did, do not panic, as long as you have an OCR backup.
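
If you are not sure whether root.sh really completed on the first node, its output is usually also written to a log under the Grid home; the path below is the typical location for this 12.1 installation, so adjust it to your environment:

ls -ltr /u01/app/12.1.0.2/grid/install/root_*.log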

Solution:

# Deconfigure CRS on the problematic node. Note that a different solution may work for you, deconfiguring only one node; in my situation all nodes became problematic.
# Also please be careful: the steps below assume that you have a separate disk group for the OCR. Datafiles must be in a different disk group, or that disk group will be wiped.

# From root on both nodes: node1, node2

/u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -deconfig -force

# Run root.sh on node1; it may not be completely successful

/u01/app/12.1.0.2/grid/root.sh

# We need to find a good OCR backup. For me it is week.ocr, which was taken automatically on 2016/09/15 at 09:12:28.
# The patch was applied at 10:00 AM on 2016/09/25, so we need week.ocr because it was taken before the patching.

[root@lbdm01-dr-adm grid]# ocrconfig -showbackup

lbdm02-dr-adm 2016/09/27 02:35:23 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/backup00.ocr 3351897854
lbdm02-dr-adm 2016/09/26 15:44:53 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/backup01.ocr 3351897854
lbdm02-dr-adm 2016/09/26 11:44:52 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/backup02.ocr 3351897854
lbdm02-dr-adm 2016/09/27 02:35:23 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/day.ocr 3351897854
lbdm01-dr-adm 2016/09/15 09:12:28 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/week.ocr 854493477
lbdm02-dr-adm 2016/09/25 15:29:18 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/backup_20160925_152918.ocr 3351897854
lbdm02-dr-adm 2016/09/25 10:34:56 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/backup_20160925_103456.ocr 2725022894
lbdm01-dr-adm 2015/07/29 19:46:28 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/backup_20150729_194628.ocr 854493477
lbdm01-dr-adm 2015/07/29 19:46:27 /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/backup_20150729_194627.ocr 854493477
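
# If you want to inspect a backup before restoring it, ocrdump can read a backup file directly
# and write a text dump into the current directory (run as root; this is just an optional check):

ocrdump -backupfile /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/week.ocr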

# Ensure that no processes are left
# node 1

crsctl stop crs -f
ps -ef|grep "/u01/app"

# If anything is still running, kill it!
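
# Something along these lines will kill the leftovers, but check what the grep matches first --
# it kills anything still running out of /u01/app:

ps -ef | grep "/u01/app" | grep -v grep | awk '{print $2}' | xargs -r kill -9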

#Start clusterware in exclusive mode with no ocr on node 1

crsctl start crs -excl -nocrs

#Restore OCR on node 1

ocrconfig -restore /u01/app/12.1.0.2/grid/cdata/lbank-clus-dr/week.ocr
ocrcheck

# Stop crs on node 1

crsctl stop crs -f
crsctl start crs

# Check the status

crsctl status res -t

# It should be OK

# Do the same steps on node 2 from root, but it may fail

/u01/app/12.1.0.2/grid/root.sh

# Failed

ORA-15160: rolling migration internal fatal error in module SKGXP,valNorm:not-native
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/lbdm02-dr-adm/crs/trace/ohasd_oraagent_oracle.trc".
CRS-2883: Resource 'ora.asm' failed during Clusterware stack start.
CRS-4406: Oracle High Availability Services synchronous start failed.
CRS-4000: Command Start failed, or completed with errors.
2016/09/28 09:11:00 CLSRSC-117: Failed to start Oracle Clusterware stack

# deconfig on both nodes
# node1, node2

 /u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -deconfig -force

# and run root.sh again
# node 1

/u01/app/12.1.0.2/grid/root.sh

# It was completely successful.

# On the second node there is still a problem

# Read the following document ORA-15160: rolling migration internal fatal error in module SKGXP,valNorm:not-native (NOTE 1682591.1)

# Here the problem was with the protocols used by ASM and the RDBMS.
# The RDBMS was using the RDS protocol and ASM was using UDP, see Oracle Clusterware and RAC Support for RDS Over Infiniband (NOTE 751343.1)
# The problem was in the libraries, and we had to relink them with the right protocols.
# As the ORACLE_HOME/GI_HOME owner, stop all resources (database, listener, ASM etc.) running from the home. When stopping the database, use the NORMAL or IMMEDIATE option.
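
# For example, something like the following (the database name below is a placeholder, not this system's):

srvctl stop database -d mydb -o immediate
srvctl stop listener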

# From the problematic node, where ASM or the database is not starting.

crsctl stop crs
ps -ef|grep d.bin
ps -ef|grep "/u01/app"

# Kill if any process left

# If relinking Grid Infrastructure (GI) home, as root, unlock GI home: <GI_HOME>/crs/install/rootcrs.pl -unlock

/u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -unlock

# As the ORACLE_HOME/GI_HOME owner, go to ORACLE_HOME/GI_HOME and cd to rdbms/lib
# As the ORACLE_HOME/GI_HOME owner, issue "make -f ins_rdbms.mk <protocol> ioracle"
#For rdbms

[root@lbdm02-dr-adm lib]# su - oracle
[oracle@lbdm02-dr-adm ~]$ cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk ipc_rds ioracle

#For asm

. oraenv
+ASM2
[oracle@lbdm02-dr-adm ~]$ cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk ipc_g ioracle
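
# To verify which IPC protocol a home is now linked with, the skgxpinfo utility can be used,
# if it is present in that home (it simply prints the protocol in use, e.g. rds or udp):

$ORACLE_HOME/bin/skgxpinfo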

# From root

/u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -patch

# The last step also configures the clusterware. Everything should be fine now, and you can finally get some sleep. 🙂
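
# A final sanity check (the database name below is a placeholder for your own):

olsnodes -s
srvctl status database -d mydb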