Move Oracle controlfile from filesystem to ASM

1. Find the current controlfile locations:

Please note that a second copy of the controlfile is already on ASM, in the +FRA diskgroup. We are moving only the first one.

SQL> show parameter control_files

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
control_files			     string	 /u01/app/oracle/oradata/PRSH01
						 /control01.ctl, +FRA/PRSH01/co
						 ntrol02.ctl

2. Place the database in nomount mode using RMAN and copy the controlfile to its new location:

$ rman target /

RMAN> startup nomount;
RMAN> restore controlfile to '+DATA' from '/u01/app/oracle/oradata/PRSH01/control01.ctl';

3. Note the name assigned to the controlfile:

[grid@prsh01 ~]$ asmcmd find --type CONTROLFILE +data *
+data/PRSH01/CONTROLFILE/current.261.1122675497

4. Modify the control_files parameter from SQL*Plus:

SQL> alter system set control_files='+data/PRSH01/CONTROLFILE/current.261.1122675497','+FRA/PRSH01/control02.ctl' scope=spfile;

5. Restart the database:

SQL> shutdown immediate;
SQL> startup

6. Double-check the new location:

SQL> show parameter control_files

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
control_files			     string	 +DATA/PRSH01/CONTROLFILE/curre
						 nt.261.1122675497, +FRA/PRSH01
						 /control02.ctl
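
Once both copies are confirmed on ASM, the old filesystem controlfile is no longer referenced and can be removed. This is an optional cleanup step, not part of the original procedure; a hedged sketch, to be run only after the database has opened cleanly with the new setting:

$ rm /u01/app/oracle/oradata/PRSH01/control01.ctl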

Move Oracle datafiles from filesystem to ASM using RMAN

1. Connect to the database using RMAN and place the database in mount mode:

$ rman target /
RMAN> shutdown immediate
RMAN> startup mount

2. Get information about target datafiles and tempfiles:

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name PRSH01

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    1020     SYSTEM               YES     /u01/app/oracle/oradata/PRSH01/system01.dbf
3    490      SYSAUX               NO      /u01/app/oracle/oradata/PRSH01/sysaux01.dbf
4    850      UNDOTBS1             YES     /u01/app/oracle/oradata/PRSH01/undotbs01.dbf
7    5        USERS                NO      /u01/app/oracle/oradata/PRSH01/users01.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    114      TEMP                 32767       /u01/app/oracle/oradata/PRSH01/temp01.dbf

3. Copy the datafiles from the filesystem to the ASM diskgroup (in our case +DATA) and switch to the copies:

RMAN> copy datafile 1 to '+DATA';
RMAN> copy datafile 3 to '+DATA';
RMAN> copy datafile 4 to '+DATA';
RMAN> copy datafile 7 to '+DATA';
RMAN> switch datafile 1 to copy;
RMAN> switch datafile 3 to copy;
RMAN> switch datafile 4 to copy;
RMAN> switch datafile 7 to copy;
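
As a hedged alternative to copying and switching each datafile individually, the whole database can usually be moved with a single pair of RMAN commands (a sketch, assuming every datafile should go to +DATA):

RMAN> backup as copy database format '+DATA';
RMAN> switch database to copy;

The per-file approach above gives finer control over which files move; the database-level variant is simply a shortcut.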

4. Place the tempfile on +DATA:

RMAN> run{
2> set newname for tempfile 1 to '+DATA';
3> switch tempfile 1;
4> }

executing command: SET NEWNAME
renamed tempfile 1 to +DATA in control file

5. Double-check that all files have new names:

RMAN> report schema;

Report of database schema for database with db_unique_name PRSH01

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    1020     SYSTEM               ***     +DATA/PRSH01/DATAFILE/system.256.1122674191
3    570      SYSAUX               ***     +DATA/PRSH01/DATAFILE/sysaux.257.1122674225
4    910      UNDOTBS1             ***     +DATA/PRSH01/DATAFILE/undotbs1.258.1122674235
7    5        USERS                ***     +DATA/PRSH01/DATAFILE/users.259.1122674253

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    114      TEMP                 32767       +DATA

6. Open the database:

RMAN> alter database open;

7. Check the new file names. Note that a system-generated name has now been assigned to the tempfile as well:

RMAN> report schema;

Report of database schema for database with db_unique_name PRSH01

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    1020     SYSTEM               YES     +DATA/PRSH01/DATAFILE/system.256.1122674191
3    570      SYSAUX               NO      +DATA/PRSH01/DATAFILE/sysaux.257.1122674225
4    910      UNDOTBS1             YES     +DATA/PRSH01/DATAFILE/undotbs1.258.1122674235
7    5        USERS                NO      +DATA/PRSH01/DATAFILE/users.259.1122674253

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    114      TEMP                 32767       +DATA/PRSH01/TEMPFILE/temp.260.1122674673

CRS-2549: Resource ‘ora.asmgroup’ cannot be placed on ‘rac1’ as it is not a valid candidate as per the placement policy

Problem:

After a failed JDK patching attempt on the first node, we started troubleshooting and saw that ASM was not able to start:

# su - grid
$ sqlplus / as sysasm
SQL> startup nomount;
ORA-32004: obsolete or deprecated parameter(s) specified for ASM instance
ORA-39511: Start of CRS resource for instance '223' failed with error:[CRS-2549: Resource 'ora.asmgroup' cannot be placed on 'rac1' as it is not a valid candidate as per the placement policy
CRS-0223: Resource 'ora.asm' has placement error.
clsr_start_resource:260 status:223
clsrapi_start_asm:start_asmdbs status:223

Reason:

The prepatch step had set RESOURCE_USE_ENABLED=0 for the rac1 node:

[grid@rac1 ~]$ crsctl stat server -f

NAME=rac1
MEMORY_SIZE=63465
CPU_COUNT=8
CPU_CLOCK_RATE=2499
CPU_HYPERTHREADING=1
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=hub
RESOURCE_USE_ENABLED=0
SERVER_LABEL=
PHYSICAL_HOSTNAME=
CSS_CRITICAL=no
CSS_CRITICAL_TOTAL=0
RESOURCE_TOTAL=0
SITE_NAME=stsfilive
STATE=ONLINE
ACTIVE_POOLS=Free
STATE_DETAILS=
ACTIVE_CSS_ROLE=hub

NAME=rac2
MEMORY_SIZE=63465
CPU_COUNT=8
CPU_CLOCK_RATE=2499
CPU_HYPERTHREADING=1
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=hub
RESOURCE_USE_ENABLED=1
….

Solution:

Connect to the failing node and run:

[root@rac1 ~]# crsctl set resource use 1

Start ASM.
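
A hedged sketch of starting ASM afterwards, either through srvctl or directly from SQL*Plus (the instance name +ASM1 on rac1 is an assumption):

[grid@rac1 ~]$ srvctl start asm -node rac1

or:

[grid@rac1 ~]$ export ORACLE_SID=+ASM1
[grid@rac1 ~]$ sqlplus / as sysasm
SQL> startup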

Display ASM disk attributes while ASM is not running, using KFOD

$GRID_HOME/bin/kfod has many uses (see kfod -help); one of them is printing disk attributes without connecting to an ASM instance. You can even display these attributes while ASM is not running, which is very useful when troubleshooting ASM startup issues.

Let's display the disk size, header, path, diskgroup name, owner user, owner group, physical sector size, and logical sector size:

[root@rac1~]# kfod op=disks status=true disks=all dscvgroup=true diskattr=all

Let's check whether ASM is running at the same time:

[root@rac1~]# ps -ef|grep smon

root 3716 1     4 12:36 ?      00:00:01 /u01/app/19.3.0/grid/bin/osysmond.bin
root 5178 5083  0 12:37 pts/0  00:00:00 grep --color=auto smon

There is no asm_smon_+ASM1 process, which means ASM is down.
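
Another hedged way to confirm that the ASM instance is down on this node, assuming the clusterware stack itself is up:

[grid@rac1 ~]$ crsctl stat res ora.asm -t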

Print the contents of multiple differently named files in Linux

When you are working with a large number of files, you need automation.
This post describes the find -o option, which lets you operate on differently named files in a single pass.

For example, to output the contents of the files physical_block_size and logical_block_size located under /sys/block/*/queue, run the following:

# find /sys/block/*/queue -name physical_block_size -o -name logical_block_size | while read f ; do echo "$f $(cat $f)" ; done

..
/sys/block/dm-0/queue/physical_block_size 4096
/sys/block/dm-0/queue/logical_block_size 512
/sys/block/dm-1/queue/physical_block_size 512
...

Where -o means OR.
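
Note that -o binds more loosely than the implicit AND between tests, so if you add further tests they must be grouped with escaped parentheses. A hedged variant of the same command that also restricts the match to regular files:

# find /sys/block/*/queue \( -name physical_block_size -o -name logical_block_size \) -type f | while read f ; do echo "$f $(cat $f)" ; done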

Useful when working on ASM disks.

ORA-15041 during rebalance OR add disk

Problem:

One of our customers had a disk offline for longer than disk_repair_time, which caused Oracle to drop the 1 TB disk. The problems started after that: the drop triggered a rebalance, and because the diskgroup had less than 1 TB of free space, the rebalance failed with ORA-15041. That rebalance also filled some of the disks to 100%, leaving 0 MB free on them.

Adding disks did not help; when we checked the free space on the existing disks, we got the following output:

# su - grid
$ sqlplus / as sysasm
SQL> select disk_number "Disk #", free_mb
     from v$asm_disk
     where group_number = 1
     order by 2;

    Disk #    FREE_MB
---------- ----------
        13       0
         0       0
         4       0
         3       4
        11       132900
        ...

As mentioned, the rebalance itself kept failing with ORA-15041.
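
While it runs, the rebalance (and its error, once it fails) can be watched in V$ASM_OPERATION; a minimal hedged check, not part of the original post:

SQL> select group_number, operation, state, power, sofar, est_work, est_minutes, error_code
     from v$asm_operation;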

Solution

It was an AWS environment, where disk size can be increased easily, so we grew every disk in the diskgroup by 200 GB:

Resizing steps: https://dba010.com/2019/08/23/resize-asm-disks-in-aws-fg-enabled-cluster/
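
After the disks have been grown at the AWS/OS level (the linked post covers the failgroup-aware procedure), ASM still has to be told to use the new size. A minimal sketch, assuming the affected diskgroup is named DATA and all of its disks were grown:

SQL> alter diskgroup data resize all;

The resize itself starts a rebalance, so the explicit REBALANCE POWER 13 in the next step mainly raises the power of that operation.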

Then we triggered the rebalance manually:

# su - grid
$ sqlplus / as sysasm
SQL> ALTER DISKGROUP DATA REBALANCE POWER 13; 

After several hours the rebalance finished successfully.

Please note that initially we increased the disks by only 1 GB and the rebalance failed again; then we increased them by 200 GB and the operation succeeded. So you may need to grow the disks more than once.
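
Before retrying, it can help to check how much usable space the diskgroup really has; USABLE_FILE_MB at or below zero means the rebalance has nowhere to put the data. A hedged check, not from the original post:

SQL> select name, total_mb, free_mb, required_mirror_free_mb, usable_file_mb
     from v$asm_diskgroup;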

A useful note on this topic is Oracle Doc ID 473271.1.

asmcmd does not return error text for root user

Problem:

When asmcmd commands are run as the root user, error messages are not displayed, but the same commands run as the grid user do return them. In the following example, ORACLE_HOME is set to the RDBMS home (instead of the GI home) and we are trying to list a file that does not exist:

[root@rac1 ~]# echo $ORACLE_HOME
/u01/app/oracle/product/19.3.0/dbhome_1

[root@rac1 ~]# echo $PATH
/u01/app/19.3.0/grid/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin

[root@rac1 ~]# asmcmd ls +demodg/orcl/tempfile/TEMP.263.10617958
<<<<<<Nothing is displayed here

When a file does not exist, the ls command should display the error ASMCMD-8002: entry 'TEMP.263.10617958' does not exist in directory '+demodg/orcl/tempfile/'. But in our example, it returns nothing.

In the next example, we try to copy a file: a copy message is printed, but the file is never actually copied, because the grid user has no write permission under /u01:

[root@rac1 ~]# asmcmd cp +demodg/orcl/tempfile/TEMP.263.1061795851 /u01
copying +demodg/orcl/tempfile/TEMP.263.1061795851 -> /u01/TEMP.263.1061795851

If we run the same command as the grid user, we get an understandable error message:

[grid@rac1 ~]$  asmcmd cp +demodg/orcl/tempfile/TEMP.263.1061795851 /u01
 ASMCMD-9463: operation failed due to lack of write permissions

Reason:

The environment variables are not set correctly: ORACLE_HOME should point to the GI home.

Solution:

[root@rac1 ~]# export ORACLE_HOME=/u01/app/19.3.0/grid

[root@rac1 ~]# asmcmd ls +demodg/orcl/tempfile/TEMP.263.10617958
ASMCMD-8002: entry 'TEMP.263.10617958' does not exist in directory '+demodg/orcl/tempfile/'
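
To avoid exporting ORACLE_HOME by hand every time, the grid environment can also be loaded with oraenv. A hedged convenience, assuming the +ASM1 instance is registered in /etc/oratab:

[root@rac1 ~]# export ORACLE_SID=+ASM1
[root@rac1 ~]# export ORAENV_ASK=NO
[root@rac1 ~]# . oraenv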

ORA-15477: cannot communicate with the volume driver (DBD ERROR: OCIStmtExecute)

Problem:

I had a GI standalone installation, which I deconfigured and then reconfigured as a one-node RAC; that part was successful. Then I tried to create an ACFS volume, which failed with ORA-15477:

[root@host1 dbs]# asmcmd volcreate -G OGG -s 10G ACFSGG
ORA-15032: not all alterations performed
ORA-15477: cannot communicate with the volume driver (DBD ERROR: OCIStmtExecute)

Reason:

It seems the ACFS/ADVM modules are not working correctly, even though lsmod lists them:

[root@host1 dbs]# lsmod | grep oracle
oracleacfs           5921415  0
oracleadvm           1236257  0
oracleoks             750688  2 oracleacfs,oracleadvm

Solution:

First I will share two possible solutions that have helped others but did not help me, and then a third one that did work in my case:

1. Start the modules manually and make sure they are enabled:
# acfsload start
# acfsload enable

Check whether the modules are loaded using lsmod | grep oracle and retry the volume creation.

2. Reinstall the ACFS/ADVM modules manually:

[root@host1 dbs]# acfsroot install
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9314: Removing previous ADVM/ACFS installation.
depmod: ERROR: fstatat(6, uds.ko): No such file or directory
depmod: ERROR: fstatat(6, kvdo.ko): No such file or directory
ACFS-9315: Previous ADVM/ACFS components successfully removed.
ACFS-9294: updating file /etc/sysconfig/oracledrivers.conf
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9294: updating file /etc/sysconfig/oracledrivers.conf
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
depmod: ERROR: fstatat(6, uds.ko): No such file or directory
depmod: ERROR: fstatat(6, kvdo.ko): No such file or directory
ACFS-9390: The command 'echo '/lib/modules/3.10.0-862.el7.x86_64/extra/usm/oracleadvm.ko
/lib/modules/3.10.0-862.el7.x86_64/extra/usm/oracleoks.ko
/lib/modules/3.10.0-862.el7.x86_64/extra/usm/oracleacfs.ko
' | /sbin/weak-modules --no-initramfs --add-modules 3.10.0-1127.18.2.el7.x86_64 2>&1 |' returned unexpected output that may be important for system configuration:
depmod: ERROR: fstatat(6, kvdo.ko): No such file or directory

depmod: ERROR: fstatat(6, uds.ko): No such file or directory

depmod: ERROR: fstatat(6, uds.ko): No such file or directory

depmod: ERROR: fstatat(6, kvdo.ko): No such file or directory

ACFS-9154: Loading 'oracleoks.ko' driver.
ACFS-9154: Loading 'oracleadvm.ko' driver.
ACFS-9154: Loading 'oracleacfs.ko' driver.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9309: ADVM/ACFS installation correctness verified.

Retry the volume creation.

If none of the above helps, try the third solution (I have not seen it documented elsewhere; it is what worked for me):

3. Rebuild the initramfs:

[root@host1 ~]# cp -p /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak
[root@host1 ~]# dracut -f
[root@host1 ~]# reboot

After the restart, you should be able to create the volume.
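
Retry the volume creation and confirm the volume exists (same diskgroup and volume name as in the failed attempt; volinfo is only a verification step):

[root@host1 ~]# asmcmd volcreate -G OGG -s 10G ACFSGG
[root@host1 ~]# asmcmd volinfo -G OGG ACFSGG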

ACFS-05913: unable to contact the standby node stbyrac1

Problem:

I was trying to set up ACFS replication; one of the steps is to validate the keys using acfsutil, and it failed with the ACFS-05913 error:

[root@rac1 .ssh]# acfsutil repl info -c -u oggrepl stbyrac1 stbyrac2 /GG
acfsutil repl info: ACFS-05913: unable to contact the standby node stbyrac1
acfsutil repl info: ACFS-05913: unable to contact the standby node stbyrac2

Cause: 

An attempt to use the ping utility to contact a standby node failed.
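
This can be verified with a plain ping from the primary node, before and after the firewall change (hostnames as used above):

[root@rac1 ~]# ping -c 3 stbyrac1
[root@rac1 ~]# ping -c 3 stbyrac2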

Solution:

Enable ICMP traffic between the nodes and retry the validation:

[root@rac1 .ssh]# acfsutil repl info -c -u oggrepl stbyrac1 stbyrac2 /GG
A valid 'ssh' connection was detected for standby node stbyrac1 as user oggrepl.
A valid 'ssh' connection was detected for standby node stbyrac2 as user oggrepl.

srvctl start filesystem hangs

The title of this post is general: there can be many reasons why srvctl start filesystem hangs. The aim of this post is to share just one of them.

Problem:

I created an ACFS volume and registered it with srvctl:

$ asmcmd volcreate -G OGG -s 10G ACFSGG
# srvctl add filesystem -device /dev/asm/acfsgg-11 -path /GG_HOME -volume acfsgg -diskgroup OGG -user oracle -fstype ACFS

Then I tried to start the filesystem:

# srvctl start filesystem -device /dev/asm/acfsgg-11

The command hung.

Troubleshooting:

I checked the logs under the trace folder of the Grid Infrastructure base but could not find any clue. Even worse, stopping the filesystem was also hanging.

Let me stop here: the file that should have been checked was actually there, but I missed it and looked at the wrong files. The file that shows the relevant error is named mount_<process id>.trc and lives under that same trace folder. So instead of mounting the filesystem manually to see the error, you can simply open mount_<process id>.trc and find the reason there.
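
A hedged way to locate that trace quickly, assuming the Grid Infrastructure base is /u01/app/grid (adjust the path to your environment):

[root@stbyrac1 ~]# ls -ltr /u01/app/grid/diag/crs/$(hostname -s)/crs/trace/mount_*.trc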

Then I tried mounting the filesystem manually, without srvctl:

[root@stbyrac1 trace]# /bin/mount -t acfs  /dev/asm/acfsgg-11 /GG_HOME
mount.acfs: ACFS-03037: not an ACFS file system

This error explained what was happening: my volume had not been formatted with the ACFS filesystem. I had somehow missed that step on the standby cluster, so it was simply a human error, but srvctl should at least have reported it instead of hanging and leaving the information in a trace file.

Solution:

Format the ACFS volume:

[root@stbyrac1 trace]# mkfs -t acfs /dev/asm/acfsgg-11
mkfs.acfs: version                   = 19.0.0.0.0
mkfs.acfs: on-disk version           = 46.0
mkfs.acfs: volume                    = /dev/asm/acfsgg-11
mkfs.acfs: volume size               = 10737418240  (  10.00 GB )
mkfs.acfs: Format complete.

Because the earlier start and stop operations are still hung, you need to mount the filesystem on all database nodes manually:

[root@stbyrac1 ~]# /bin/mount -t acfs  /dev/asm/acfsgg-11 /GG_HOME
[root@stbyrac2 ~]# /bin/mount -t acfs  /dev/asm/acfsgg-11 /GG_HOME

Now try to stop and start the filesystem, to make sure srvctl is able to do its job without any manual intervention:

[root@stbyrac1 ~]# srvctl stop filesystem -device /dev/asm/acfsgg-11
[root@stbyrac1 ~]# srvctl start filesystem -device /dev/asm/acfsgg-11